Deep learning
Optimized Glaucoma Detection Using HCCNN with PSO-Driven Hyperparameter Tuning
Biomed Phys Eng Express. 2025 Apr 7. doi: 10.1088/2057-1976/adc9b7. Online ahead of print.
ABSTRACT

This study focuses on developing an effective glaucoma detection system built on a Hybrid Centric Convolutional Neural Network (HCCNN) model. Particle Swarm Optimization (PSO) is used to increase classification accuracy and reduce computational complexity. A modified U-Net is also used to segment the optic disc (OD) and optic cup (OC) regions of images classified as glaucomatous in order to determine the severity of glaucoma.
Methods:
The proposed HCCNN model extracts features from fundus images that show signs of glaucoma. To improve model performance, hyperparameters such as the dropout rate, learning rate, and number of units in the dense layer are optimized using the PSO method. The PSO algorithm iteratively assesses and adjusts these parameters to minimise classification error. The classified glaucoma image is subjected to channel separation to enhance the visibility of relevant features, and the channel-separated image is segmented using the modified U-Net to delineate the OC and OD regions.
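The PSO search over dropout rate, learning rate, and dense-layer units can be sketched with a minimal particle swarm. The objective below is a smooth toy surrogate for validation error (the study would train the HCCNN at each evaluation); the bounds, swarm size, and PSO constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(params):
    # Stand-in for training the model and measuring validation error;
    # a smooth toy surface with a known minimum (hypothetical).
    dropout, log_lr, units = params
    return (dropout - 0.3) ** 2 + (log_lr + 3.0) ** 2 + ((units - 128) / 256) ** 2

def pso(objective, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5):
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # velocity update: inertia + cognitive + social terms
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Search space: dropout rate, log10(learning rate), dense units (assumed ranges)
bounds = [(0.0, 0.7), (-5.0, -1.0), (16, 512)]
best, err = pso(validation_error, bounds)
```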
Results:
Experimental findings indicate that the PSO-HCCNN model attains classification accuracies of 94% and 97% on the DRISHTI-GS and RIM-ONE datasets, respectively. Performance criteria including accuracy, sensitivity, specificity, and AUC are employed to assess the system's efficacy, demonstrating a notable enhancement in early glaucoma detection rates. Segmentation performance is evaluated using the Dice coefficient and Jaccard index.
Conclusion:
The integration of PSO with the HCCNN model considerably enhances glaucoma detection from fundus images by optimising essential hyperparameters and accurately segmenting the OD and OC regions, resulting in a robust and precise classification model. This method has potential uses in ophthalmology and may help physicians detect glaucoma early and accurately.
PMID:40194525 | DOI:10.1088/2057-1976/adc9b7
Dimensionality Reduction of Genetic Data using Contrastive Learning
Genetics. 2025 Apr 7:iyaf068. doi: 10.1093/genetics/iyaf068. Online ahead of print.
ABSTRACT
We introduce a framework for using contrastive learning for dimensionality reduction on genetic datasets to create PCA-like population visualizations. Contrastive learning is a self-supervised deep learning method that uses similarities between samples to train the neural network to discriminate between samples. Many of the advances in these types of models have been made for computer vision, but some common methodology does not translate well from image to genetic data. We define a loss function that outperforms loss functions commonly used in contrastive learning, and a data augmentation scheme tailored specifically towards SNP genotype datasets. We compare the performance of our method to PCA and contemporary non-linear methods with respect to how well they preserve local and global structure, and how well they generalize to new data. Our method displays good preservation of global structure and has improved generalization properties over t-SNE, UMAP, and popvae, while preserving relative distances between individuals to a high extent. A strength of the deep learning framework is the possibility of projecting new samples and fine-tuning to new datasets using a pre-trained model without access to the original training data, and the ability to incorporate more domain-specific information in the model. We show examples of population classification on two datasets of dog and human genotypes.
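The two ingredients named in the abstract can be sketched in NumPy: a genotype-specific augmentation (random masking of SNP sites is one plausible scheme; the paper's actual scheme may differ) and a contrastive loss of the common NT-Xent form (the paper reports a custom loss that outperforms losses of this kind):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_genotypes(g, mask_frac=0.1):
    # Hypothetical SNP augmentation: mask a random fraction of sites
    # (0/1/2 allele counts) to a "missing" value of -1.
    g = np.array(g, dtype=float)
    mask = rng.random(g.shape) < mask_frac
    g[mask] = -1.0
    return g

def nt_xent(z1, z2, tau=0.5):
    # Standard NT-Xent contrastive loss over two augmented views:
    # each sample's positive is its counterpart in the other view.
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))
```

Two augmented views of the same batch of genotypes would be embedded by the network and passed to the loss; matched views should score a lower loss than mismatched ones.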
PMID:40194517 | DOI:10.1093/genetics/iyaf068
Deep Learning for the Prediction of Time-to-Seizure in Epilepsy using Routine EEG (P3-9.003)
Neurology. 2025 Apr 8;104(7_Supplement_1):2403. doi: 10.1212/WNL.0000000000209122. Epub 2025 Apr 7.
ABSTRACT
OBJECTIVE: To develop and validate a deep learning model to predict time-to-seizure in patients with epilepsy (PWE) from routine EEG.
BACKGROUND: While interictal epileptiform discharges (IEDs) on EEG are associated with higher seizure recurrence, routine EEG has low sensitivity for IEDs and is prone to overinterpretation. Deep learning can extract features from EEG beyond IEDs and map them to complex outcomes, such as seizure risk through time, offering valuable information to guide epilepsy management.
DESIGN/METHODS: We selected all PWE undergoing routine EEG at our institution from 2018-2019, using EEGs recorded after July 2019 as the testing set. Patients with unclear epilepsy diagnoses or seizure during the EEG were excluded. Medical charts were reviewed for the date of the first seizure after the EEG (exact date or extrapolated from seizure frequency) and the date of last follow-up. EEGs were segmented into overlapping 30-second windows and input into a deep transformer model alongside the following clinical features: age, sex, epilepsy type, epilepsy duration, seizure frequency prior to EEG, focal lesion on neuroimaging, family history of epilepsy, and history of febrile seizures. A random survival forest (RSF) using clinical features only was used as a baseline. Models were trained to predict seizure hazards over 18 months at logarithmically spaced intervals and evaluated on the testing set using Uno's concordance index.
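The windowing and discrete-hazard setup can be sketched as follows; the 30-second windows and 18-month horizon come from the abstract, while the 50% overlap, sampling rate, number of bins, and the 0.5-month first bin edge are assumptions:

```python
import numpy as np

def sliding_windows(x, fs, win_s=30.0, overlap=0.5):
    """Segment a (channels, samples) EEG array into overlapping windows."""
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    starts = range(0, x.shape[1] - win + 1, step)
    return np.stack([x[:, s:s + win] for s in starts])

def log_spaced_intervals(horizon_months=18.0, n_bins=8):
    """Logarithmically spaced time-bin edges for discrete hazard outputs."""
    return np.geomspace(0.5, horizon_months, n_bins)  # first edge assumed

def survival_from_hazards(h):
    """Convert per-bin hazards to a survival curve S(t_k) = prod_j (1 - h_j)."""
    return np.cumprod(1 - np.asarray(h))
```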
RESULTS: We included 504 EEGs from 451 patients for training and 92 EEGs from 83 patients for testing. The deep learning model achieved a concordance index of 0.67, compared to 0.63 for the clinical-only RSF model. Including IEDs as a predictor did not improve the RSF model's performance.
CONCLUSIONS: Deep learning can extract complex information from routine EEG to predict time-to-seizure, outperforming traditional predictors. This suggests a potential role of automated EEG analysis in the follow-up of PWE. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Dr. Lemoine has received research support from Canadian Institute of Health Research. Dr. Lemoine has received research support from Fonds de Recherche du Québec -- Santé. Dr. Lesage has stock in Labeo Technologies Inc.. Dr. Nguyen has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Paladin Pharma. Dr. Nguyen has received personal compensation in the range of $500-$4,999 for serving on a Speakers Bureau for Paladin Pharma. The institution of Prof. Bou Assi has received research support from NSERC. The institution of Prof. Bou Assi has received research support from FRQS. The institution of Prof. Bou Assi has received research support from Brain Canada Foundation . The institution of Prof. Bou Assi has received research support from Epilepsy Canada Foundation. The institution of Prof. Bou Assi has received research support from Savoy Foundation.
PMID:40194014 | DOI:10.1212/WNL.0000000000209122
Ensemble deep learning for Alzheimer's disease diagnosis using MRI: Integrating features from VGG16, MobileNet, and InceptionResNetV2 models
PLoS One. 2025 Apr 7;20(4):e0318620. doi: 10.1371/journal.pone.0318620. eCollection 2025.
ABSTRACT
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by the accumulation of amyloid plaques and neurofibrillary tangles in the brain, leading to distinctive patterns of neuronal dysfunction and the cognitive decline emblematic of dementia. Currently, over 5 million individuals aged 65 and above are living with AD in the United States, a number projected to rise by 2050. Traditional diagnostic methods are fraught with challenges, including low accuracy and a significant propensity for misdiagnosis. In response to these diagnostic challenges, our study develops and evaluates an innovative deep learning (DL) ensemble model that integrates features from three pre-trained models-VGG16, MobileNet, and InceptionResNetV2-for the precise identification of AD markers from MRI scans. This approach aims to overcome the limitations of individual models in handling varying image shapes and textures, thereby improving diagnostic accuracy. The ultimate goal is to support primary radiologists by streamlining the diagnostic process, facilitating early detection, and enabling timely treatment of AD. Upon rigorous evaluation, our ensemble model demonstrated superior performance over contemporary classifiers, achieving a notable accuracy of 97.93%, along with a specificity of 98.04%, sensitivity of 95.89%, precision of 95.94%, and an F1-score of 87.50%. These results not only underscore the efficacy of the ensemble approach but also highlight the potential for DL to revolutionize AD diagnosis, offering a promising pathway to more accurate, early detection and intervention.
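Feature-level integration of several pre-trained backbones usually means concatenating their pooled feature vectors before a shared classifier. A NumPy sketch with random projections standing in for the frozen backbones (the feature dimensions match the usual VGG16/MobileNet/InceptionResNetV2 pooled sizes, but everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def backbone_features(images, dim):
    # Stand-in for a frozen pre-trained backbone's pooled features
    # (VGG16 -> 512-d, MobileNet -> 1024-d, InceptionResNetV2 -> 1536-d).
    flat = images.reshape(len(images), -1)
    W = rng.standard_normal((flat.shape[1], dim)) / np.sqrt(flat.shape[1])
    return np.maximum(flat @ W, 0)  # ReLU on a random projection (toy)

def ensemble_features(images):
    # Concatenate the three backbones' features into one fused vector
    # that a downstream classifier head would consume.
    feats = [backbone_features(images, d) for d in (512, 1024, 1536)]
    return np.concatenate(feats, axis=1)

images = rng.random((4, 32, 32, 3))   # toy MRI-slice batch
fused = ensemble_features(images)     # (4, 3072)
```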
PMID:40193955 | DOI:10.1371/journal.pone.0318620
Artificial Intelligence-powered Prediction of Brain Tumor Recurrence After Gamma Knife Radiotherapy: A Neural Network Approach (P3-6.004)
Neurology. 2025 Apr 8;104(7_Supplement_1):1881. doi: 10.1212/WNL.0000000000208805. Epub 2025 Apr 7.
ABSTRACT
OBJECTIVE: To develop and evaluate a deep learning model for predicting brain tumor recurrence following Gamma Knife radiotherapy using multimodal MRI images, radiation therapy details, and clinical parameters.
BACKGROUND: Brain metastases are common, with over 150,000 new cases annually in the U.S. Gamma Knife radiotherapy is a widely used treatment for brain tumors. However, recurrence is a concern, requiring early detection for timely intervention. Previous studies using AI in brain tumor prognosis have primarily focused on glioblastomas, leaving a gap in research regarding metastatic brain tumors post-Gamma Knife therapy. This study aims to address this by developing predictive models for recurrence risk.
DESIGN/METHODS: The study utilized the Brain Tumor Radiotherapy GammaKnife dataset from The Cancer Imaging Archive (TCIA), including MRI images, lesion annotations, and radiation dose details. Data preprocessing involved normalizing MRI images, extracting lesion-specific radiation doses, and applying data augmentation. A 3D Convolutional Neural Network was designed with multiple convolutional layers and trained using stratified sampling. The model was trained for 50 epochs with a batch size of 16 and optimized using the Adam optimizer.
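Stratified sampling, used here for training, keeps class proportions equal across splits, which matters when recurrence is the minority class. A small index-level sketch (the 80/20 toy labels and 0.2 test fraction are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def stratified_split(labels, test_frac=0.2):
    """Train/test index split that preserves per-class proportions."""
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

labels = np.array([0] * 80 + [1] * 20)   # imbalanced toy cohort: 20% recurrence
tr, te = stratified_split(labels, 0.2)
```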
RESULTS: The proof-of-concept model successfully integrated multimodal data and identified stable tumors with an accuracy of 79.5% and a specificity of 84.4%. However, true negative rates were low, indicating difficulty in predicting recurrence. To reduce overfitting and improve generalization, techniques such as augmentation, dropout layers, model checkpoints, and cross-validation were employed. Further steps include feature extraction from complex radiomic profiles to enhance model robustness and prediction accuracy.
CONCLUSIONS: Our study demonstrates the feasibility of using AI to predict brain tumor recurrence post-Gamma Knife radiotherapy. While initial results are promising, further refinement, including the addition of radiomic features and model tuning, is set to improve recurrence prediction and aid in clinical decision-making. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Mr. Pandya has nothing to disclose. Mr. Patel has nothing to disclose. Miss Anand has nothing to disclose.
PMID:40193918 | DOI:10.1212/WNL.0000000000208805
Perivascular Tau in Autopsy Cases with Definite Cerebral Amyloid Angiopathy (P10-3.007)
Neurology. 2025 Apr 8;104(7_Supplement_1):1900. doi: 10.1212/WNL.0000000000208815. Epub 2025 Apr 7.
ABSTRACT
OBJECTIVE: The main aims of this study were to quantify the burden of tau-pathology in an autopsy cohort of clinical cases with cerebral amyloid angiopathy (CAA) and to characterize the presence of perivascular tau (PVT) accumulation and its relationship with CAA.
BACKGROUND: CAA and Alzheimer's Disease Neuropathological Changes, such as tau-tangles, often coexist. The role of tau pathology in the pathophysiology of CAA remains to be determined.
DESIGN/METHODS: Autopsy cases with a neuropathologically confirmed clinical diagnosis of CAA were evaluated. Samples were taken from cortical areas and underwent immunohistochemistry against amyloid-β (Aβ) and phosphorylated tau (At8). Deep-learning based models were created and applied to the samples to quantify 1) density of intraneuronal tau-tangles; 2) percentage area of cortical CAA and Aβ-plaques using the Aiforia® platform; 3) percentage area of total cortical tau-burden, using QuPath. Linear-mixed effects models were applied to assess the association between tau and CAA burden. The presence of dyshoric CAA (flamelike Aβ deposits that radiate into the perivascular neuropil), and PVT (accentuated accumulation of tau around the vessel) were visually identified on Aβ vs. At8-stained sections respectively. Single-vessel analysis was performed to determine whether there was an association between PVT and CAA using Chi-square tests.
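The single-vessel association between PVT and CAA is assessed with a chi-square test on a 2x2 contingency table; a minimal Pearson version without continuity correction (the example counts are invented, not the study's):

```python
import numpy as np

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table,
    e.g. PVT presence vs. CAA presence per vessel."""
    t = np.asarray(table, float)
    row = t.sum(axis=1, keepdims=True)
    col = t.sum(axis=0, keepdims=True)
    expected = row @ col / t.sum()          # expected counts under independence
    return float(((t - expected) ** 2 / expected).sum())

# Toy counts: rows = PVT present/absent, cols = CAA present/absent
table = [[30, 10], [10, 30]]
stat = chi_square_2x2(table)
```

With 1 degree of freedom, the statistic is compared against the chi-square distribution to obtain the p-values reported in the abstract.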
RESULTS: A total of 76 sections in 19 CAA cases (median age-at-death 76 years [64-88]; 7 females) were analyzed. Higher tau-tangles and total tau-burden were observed in the temporal cortex versus the occipital cortex (p=0.05). CAA burden was not associated with tau-tangles or total tau-burden. Dyshoric CAA was observed around at least one vessel in 71 (93.4%) of the sections and PVT in 32 (34%) of the sections. In single-vessel analysis, PVT was significantly associated with both dyshoric CAA (p=0.004) and any CAA (p=0.0005).
CONCLUSIONS: Tau was not regionally associated with CAA in this autopsy-cohort. Accumulation of PVT was significantly associated with CAA in the single-vessel analysis. Disclosure: Prof. Farias Da Guarda has nothing to disclose. Ms. vom Eigen has nothing to disclose. Dr. van Harten has nothing to disclose. Ms. Auger has nothing to disclose. Dr. Greenberg has received personal compensation in the range of $5,000-$9,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Bayer. Dr. Greenberg has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Bristol Myers Squib. The institution of Dr. Greenberg has received personal compensation in the range of $10,000-$49,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Alnylam. Dr. Greenberg has received research support from National Institutes of Health. Dr. Greenberg has received publishing royalties from a publication relating to health care. Dr. Viswanathan has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Alnylam Pharmaceuticals. Dr. Viswanathan has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Biogen. Dr. Viswanathan has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Roche Pharmaceuticals. Dr. van Veluw has received personal compensation in the range of $500-$4,999 for serving as a Consultant for Eisai. The institution of Dr. van Veluw has received research support from NIH. The institution of Dr. van Veluw has received research support from Sanofi. The institution of Dr. van Veluw has received research support from Leducq Foundation. The institution of Dr. van Veluw has received research support from American Heart Association. 
The institution of Dr. van Veluw has received research support from Frechette Family Foundation. The institution of Dr. van Veluw has received research support from BrightFocus Foundation. The institution of Dr. van Veluw has received research support from Therini Bio. Dr. Perosa has nothing to disclose. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff.
PMID:40193903 | DOI:10.1212/WNL.0000000000208815
Classifying Parkinson's Disease Patients from Healthy Controls Using a ResNet18 Convolutional Neural Network Model of T1-Weighted MRI (P9-5.020)
Neurology. 2025 Apr 8;104(7_Supplement_1):4989. doi: 10.1212/WNL.0000000000212057. Epub 2025 Apr 7.
ABSTRACT
OBJECTIVE: Evaluating the efficacy of 2D and 3D ResNet18-based convolutional neural network models in classifying Parkinson's Disease patients from healthy controls using T1-weighted MRI images.
BACKGROUND: Parkinson's Disease (PD) is a neurodegenerative disorder affecting millions worldwide, with progressive motor and non-motor symptoms. Early diagnosis is critical for optimal management, but current neuroimaging techniques can be complex and time-consuming. Recently, deep learning techniques and advancements have shown potential in automating diagnostic processes. This study aimed to assess the performance of 2D and 3D ResNet18-based convolutional neural network (CNN) models in distinguishing PD patients from healthy controls using T1-weighted magnetic resonance imaging (MRI).
DESIGN/METHODS: We developed two CNN models: a 2D model that utilized mid-sagittal T1-weighted MRI slices, and a 3D model based on volumetric brain data. Preprocessing included data augmentation and transfer learning to enhance model performance. Data were split into 85% for training and 15% for testing, with performance evaluated through accuracy, sensitivity, specificity, and area under the curve (AUC). The models were trained and validated using GPU acceleration for optimized computational efficiency.
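The reported accuracy, sensitivity, specificity, and AUC can all be computed from predicted scores and true labels; a NumPy sketch using the rank (Mann-Whitney) form of the AUC, which ignores score ties for brevity:

```python
import numpy as np

def binary_metrics(y_true, scores, thresh=0.5):
    """Accuracy, sensitivity, specificity, and AUC from predicted scores."""
    y = np.asarray(y_true)
    s = np.asarray(scores, float)
    pred = (s >= thresh).astype(int)
    tp = np.sum((pred == 1) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    acc = (tp + tn) / len(y)
    sens = tp / (tp + fn)                      # true positive rate
    spec = tn / (tn + fp)                      # true negative rate
    # AUC via the rank (Mann-Whitney) formulation, no tie correction
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n1, n0 = y.sum(), len(y) - y.sum()
    auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
    return acc, sens, spec, auc
```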
RESULTS: The 3D CNN model achieved an accuracy of 94%, outperforming the 2D CNN model, which had an accuracy of 91%. The 3D model also exhibited superior sensitivity (92% vs. 89%) and AUC (0.94 vs. 0.92). Confusion matrices revealed higher specificity and reduced false positives for the 3D model, highlighting its superior diagnostic performance.
CONCLUSIONS: Our results demonstrate that the 3D ResNet18-based CNN significantly outperforms its 2D counterpart in classifying PD patients from T1-weighted MRI images, achieving higher accuracy, sensitivity, and AUC. The superior performance of the 3D model can be attributed to its ability to capture more complex anatomical features, enhancing its diagnostic capability. Further studies should aim to validate the findings across larger, more diverse populations and explore hybrid models that integrate 2D and 3D approaches. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Dr. Negida has nothing to disclose. Dr. Azzam has nothing to disclose. Dr. Serag has nothing to disclose. Dr. Hassan has nothing to disclose. Rehab Diab has nothing to disclose. Dr. Diab has nothing to disclose. Dr. Hefnawy has nothing to disclose. Mr. Ali has nothing to disclose. Dr. Berman has received personal compensation in the range of $5,000-$9,999 for serving as an officer or member of the Board of Directors for International Parkinson and Movement Disorder Society. The institution of Dr. Berman has received research support from Dystonia Medical Research Foundation. The institution of Dr. Berman has received research support from Administration for Community Living. The institution of Dr. Berman has received research support from The Parkinson Foundation. The institution of Dr. Berman has received research support from National Institutes of Health. Dr. Barrett has received personal compensation in the range of $500-$4,999 for serving as a Consultant for Springer Healthcare LLC. The institution of Dr. Barrett has received research support from Kyowa Kirin. The institution of Dr. Barrett has received research support from NIH.
PMID:40193876 | DOI:10.1212/WNL.0000000000212057
Quantitative Retinal Vascular Features as Biomarkers for CADASIL: A Case-Control Study from the UK Biobank (P9-13.003)
Neurology. 2025 Apr 8;104(7_Supplement_1):4963. doi: 10.1212/WNL.0000000000212039. Epub 2025 Apr 7.
ABSTRACT
OBJECTIVE: To assess the potential of quantitative retinal vascular features as biomarkers of CADASIL.
BACKGROUND: Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is the most common inherited cerebral small vessel disease, yet there is currently no biomarker for early detection. Given the anatomical and embryological similarities, the retina has been regarded as a window to cerebral microcirculation. We hypothesized that patients with CADASIL have different quantitative fundoscopic retinal features than matched controls.
DESIGN/METHODS: We conducted a cross-sectional, matched case-control study involving 49 CADASIL cases and 49 age- and sex-matched controls using genetic data from the UK Biobank between 2006 and 2010. Retinal fundoscopic images obtained from the UK Biobank were analyzed using AutoMorph, a deep learning-based tool that measures retinal vascular features, including fractal dimension, tortuosity, and vessel width. Baseline characteristics including age, sex, hypertension, diabetes, and smoking status were compared using the chi-square or Mann-Whitney U test, as appropriate. The Wilcoxon signed-rank test or paired t-test was used, as appropriate, to compare retinal features between cases and controls.
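For the paired case-control comparison, a Wilcoxon signed-rank test with the large-sample normal approximation can be sketched as below (no tie or continuity correction, unlike full implementations such as scipy.stats.wilcoxon; the inputs are toy paired measurements):

```python
import numpy as np

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank test for paired samples (e.g. a retinal
    feature in each case vs. its matched control), returning the
    positive-rank sum and a normal-approximation z statistic."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                         # drop zero differences
    n = len(d)
    order = np.argsort(np.abs(d))
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)    # rank by |difference|
    w_pos = ranks[d > 0].sum()
    mu = n * (n + 1) / 4
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return w_pos, (w_pos - mu) / sigma
```

A |z| near zero, as in the symmetric toy input below, corresponds to the non-significant differences reported here.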
RESULTS: Our analysis included 49 CADASIL cases (mean age of 52.5 ± 7.9, 49% females) and 49 controls (mean age of 52.8 ± 7.9, 49% females). Vascular risk factors including hypertension, diabetes and smoking status were similar between the two groups (p>0.05). No statistically significant differences were observed between CADASIL cases and controls in fractal dimension (p=0.665), average width (p=0.104) or tortuosity measures like distance tortuosity (p=0.423), squared curvature tortuosity (p=0.925) and tortuosity density (p=0.870).
CONCLUSIONS: Quantitative retinal vascular features analyzed in this study did not significantly differentiate CADASIL cases from controls. This could reflect the potential inclusion of CADASIL patients in the early or asymptomatic stages, where retinal changes may not yet be apparent. Furthermore, healthy volunteer bias in the UK Biobank might have influenced these findings. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Mr. Vallamchetla has nothing to disclose. Dr. badr has nothing to disclose. Mr. Abdelkader has nothing to disclose. Mr. Shourav has nothing to disclose. Xin Li has received personal compensation for serving as an employee of Arizona State University. Yalin Wang has nothing to disclose. The institution of Dr. Meschia has received research support from NINDS. The institution of Dr. Meschia has received research support from NINDS. Dr. Dumitrascu has nothing to disclose. Dr. Lin has nothing to disclose.
PMID:40193854 | DOI:10.1212/WNL.0000000000212039
An integrated AI knowledge graph framework of bacterial enzymology and metabolism
Proc Natl Acad Sci U S A. 2025 Apr 15;122(15):e2425048122. doi: 10.1073/pnas.2425048122. Epub 2025 Apr 7.
ABSTRACT
The study of bacterial metabolism holds immense significance for improving human health and advancing agricultural practices. The prospective applications of genomically encoded bacterial metabolism present a compelling opportunity, particularly in the light of the rapid expansion of genomic sequencing data. Current metabolic inference tools face challenges in scaling with large datasets, leading to increased computational demands, and often exhibit limited inter-relatability and interoperability. Here, we introduce the Integrated Biosynthetic Inference Suite (IBIS), which employs deep learning models and a knowledge graph to facilitate rapid, scalable bacterial metabolic inference. This system leverages a series of Transformer based models to generate high quality, meaningful embeddings for individual enzymes, biosynthetic domains, and metabolic pathways. These embedded representations enable rapid, large-scale comparisons of metabolic proteins and pathways, surpassing the capabilities of conventional methodologies. The examination of evolutionary and functionally conserved metabolites across diverse bacterial species is facilitated by integrating the predictive capabilities of IBIS into a graph database enriched with comprehensive metadata. The consideration of both primary and specialized metabolism, combined with an embedding logic for enzyme discovery, uniquely positions IBIS to identify potential novel metabolic pathways. With the expansion of genomic data necessitating transformative approaches to advance molecular metabolism research, IBIS delivers an AI-driven holistic investigation of bacterial metabolism.
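The large-scale comparisons the knowledge graph supports reduce, at their core, to nearest-neighbour queries over embedding vectors; a minimal cosine-similarity sketch (the vectors are toys, not IBIS embeddings):

```python
import numpy as np

def cosine_sim_matrix(E):
    """Pairwise cosine similarity between embedding rows."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E @ E.T

def nearest_enzymes(E, query_idx, k=3):
    """Return indices of the k embeddings most similar to the query,
    the kind of lookup a metadata-enriched graph database can index."""
    sims = cosine_sim_matrix(E)[query_idx]
    order = np.argsort(-sims)
    return [int(i) for i in order if i != query_idx][:k]

# Toy embeddings: row 1 is deliberately close to the query row 0
E = np.array([[1.0, 0.0, 0.0],
              [0.95, 0.30, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
hits = nearest_enzymes(E, query_idx=0)
```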
PMID:40193601 | DOI:10.1073/pnas.2425048122
NA_mCNN: Classification of Sodium Transporters in Membrane Proteins by Integrating Multi-Window Deep Learning and ProtTrans for Their Therapeutic Potential
J Proteome Res. 2025 Apr 7. doi: 10.1021/acs.jproteome.4c00884. Online ahead of print.
ABSTRACT
Sodium transporters maintain cellular homeostasis by transporting ions, minerals, and nutrients across the membrane, and Na+/K+ ATPases facilitate the cotransport of solutes in neurons, muscle cells, and epithelial cells. Sodium transporters are important for many physiological processes, and their dysfunction leads to diseases such as hypertension, diabetes, neurological disorders, and cancer. The NA_mCNN computational method highlights the functional diversity and significance of sodium transporters in membrane proteins using embeddings from protein language models (PLMs) and multiple-window scanning deep learning models. This work investigates PLMs including TAPE, ProtTrans, ESM-1b-1280, and ESM-2-128 to achieve higher accuracy in sodium transporter classification. Five-fold cross-validation and independent testing demonstrate the robustness of the ProtTrans embeddings. In cross-validation, ProtTrans achieved an AUC of 0.9939, a sensitivity of 0.9829, and a specificity of 0.9889, demonstrating its ability to distinguish positive and negative samples. In independent testing, ProtTrans maintained a sensitivity of 0.9765, a specificity of 0.9991, and an AUC of 0.9975, indicating a high level of discrimination. This study advances the understanding of sodium transporter diversity and function, as well as their role in human pathophysiology. Our goal is to use deep learning techniques and protein language models to identify sodium transporters, accelerating identification and the development of new therapeutic interventions.
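Multiple-window scanning applies several convolutional window sizes over the per-residue embedding matrix and max-pools each to a fixed-length feature vector; a NumPy sketch with random filters standing in for trained ones (window sizes, filter counts, and dimensions are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

def multi_window_scan(emb, window_sizes=(3, 5, 7), n_filters=8):
    """Scan a per-residue embedding matrix (L, d) with several window
    sizes, apply ReLU, and global max-pool each filter bank,
    mimicking an mCNN feature extractor (filters are random stand-ins)."""
    L, d = emb.shape
    feats = []
    for w in window_sizes:
        filt = rng.standard_normal((n_filters, w * d)) / np.sqrt(w * d)
        patches = np.stack([emb[i:i + w].ravel() for i in range(L - w + 1)])
        act = np.maximum(patches @ filt.T, 0)   # (L - w + 1, n_filters)
        feats.append(act.max(axis=0))           # global max-pool per filter
    return np.concatenate(feats)                # len = 3 * n_filters here

emb = rng.standard_normal((120, 32))   # toy stand-in for ProtTrans embeddings
vec = multi_window_scan(emb)
```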
PMID:40193588 | DOI:10.1021/acs.jproteome.4c00884
Deep learning analysis of hematoxylin and eosin-stained benign breast biopsies to predict future invasive breast cancer
JNCI Cancer Spectr. 2025 Apr 7:pkaf037. doi: 10.1093/jncics/pkaf037. Online ahead of print.
ABSTRACT
BACKGROUND: Benign breast disease (BBD) is an important risk factor for breast cancer (BC) development. In this study, we analyzed hematoxylin and eosin-stained whole slide images (WSIs) from diagnostic BBD biopsies using different deep learning (DL) approaches to predict those who subsequently developed breast cancer (cases) and those who did not (controls).
METHODS: We randomly divided cases and controls from a nested case-control study of 946 women with BBD into training (331 cases, 331 controls) and test (142 cases, 142 controls) sets. We employed customized VGG-16 and AutoML models for image-only classification using WSIs; logistic regression for classification using only clinico-pathological characteristics; and a multimodal network combining WSIs and clinico-pathological characteristics for classification.
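The multimodal network's final stage is typically late fusion: WSI-derived features and clinico-pathological covariates are concatenated ahead of a logistic output. A minimal sketch with placeholder weights (all dimensions and features are invented; a real model would learn the weights end to end):

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def multimodal_predict(img_feat, clin_feat, W_img, W_clin, b=0.0):
    """Late fusion: concatenate image features with clinical covariates
    and apply one logistic head (weights are placeholders)."""
    fused = np.concatenate([img_feat, clin_feat], axis=1)
    W = np.concatenate([W_img, W_clin])
    return sigmoid(fused @ W + b)

img_feat = rng.standard_normal((6, 128))   # e.g. pooled WSI features (toy)
clin_feat = rng.standard_normal((6, 5))    # e.g. standardized covariates (toy)
p = multimodal_predict(img_feat, clin_feat,
                       rng.standard_normal(128) * 0.05,
                       rng.standard_normal(5) * 0.05)
```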
RESULTS: Both the image-only models (area under the receiver operating characteristic curve, AUROC, of 0.83 (standard error, SE: 0.001) for the customized VGG-16 and 0.78 (SE: 0.001) for AutoML) and the multimodal network (AUROC of 0.89 (SE: 0.03)) had high discriminatory accuracy for BC. The model using clinico-pathological characteristics only had the lowest AUROC, 0.54 (SE: 0.03). Additionally, compared to the customized VGG-16, which performed better than AutoML, the multimodal network had improved accuracy (0.89 (SE: 0.03) vs 0.83 (SE: 0.02)), sensitivity (0.93 (SE: 0.04) vs 0.83 (SE: 0.003)), and specificity (0.86 (SE: 0.03) vs 0.84 (SE: 0.003)).
CONCLUSION: This study opens promising avenues for BC risk assessment in women with benign breast disease. Integrating whole slide images and clinico-pathological characteristics through a multimodal approach significantly improved predictive model performance. Future research will explore DL techniques to understand BBD progression to invasive BC.
PMID:40193520 | DOI:10.1093/jncics/pkaf037
Active learning regression quality prediction model and grinding mechanism for ceramic bearing grinding processing
PLoS One. 2025 Apr 7;20(4):e0320494. doi: 10.1371/journal.pone.0320494. eCollection 2025.
ABSTRACT
The study aims to explore quality prediction in ceramic bearing grinding, with particular focus on the effect of grinding parameters on surface roughness. An active learning regression model is used for model construction and optimization, together with empirical analysis of surface quality under different grinding conditions. In addition, various deep learning models are used in experiments on grinding quality prediction. The experimental setup covers a variety of grinding parameters, including grinding wheel linear speed, grinding depth, and feed rate, to ensure the accuracy and reliability of the model under different conditions. According to the experimental results, when the grinding depth increases to 21 μm, the average training loss of the model decreases further to 0.03622, and the surface roughness Ra value drops significantly to 0.1624 μm. The experiments also found that increasing the grinding wheel linear velocity and moderately adjusting the grinding depth can significantly improve machining quality; for example, at a wheel linear velocity of 45 m/s and a grinding depth of 0.015 mm, the Ra value drops to 0.1876 μm. These results provide theoretical support for the grinding of ceramic bearings and a basis for optimizing grinding parameters in production, and thus have important industrial application value.
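An active learning regression loop queries the grinding settings where the current model is least certain; a self-contained sketch using a bootstrap ensemble of linear fits as the uncertainty estimate (the roughness surrogate, parameter ranges, and labeling budget are invented, and the real speed/depth/feed relationship is nonlinear):

```python
import numpy as np

rng = np.random.default_rng(6)

def true_ra(x):
    # Toy linear surrogate for Ra vs. (wheel speed, depth, feed); hypothetical
    v, d, f = x.T
    return 0.5 - 0.004 * v + 8.0 * d + 0.3 * f

def fit_linear(X, y):
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.c_[X, np.ones(len(X))] @ coef

# Candidate pool of grinding settings: speed 20-45 m/s, depth 0-0.03 mm, feed 0-1
pool = np.c_[rng.uniform(20, 45, 200), rng.uniform(0, 0.03, 200), rng.uniform(0, 1, 200)]
labeled = list(rng.choice(200, 5, replace=False))  # small initial labeled set

for _ in range(10):  # query where bootstrap models disagree most
    preds = []
    for _ in range(5):
        idx = rng.choice(labeled, len(labeled), replace=True)
        preds.append(predict(fit_linear(pool[idx], true_ra(pool[idx])), pool))
    var = np.var(preds, axis=0)
    var[labeled] = -1                      # never re-query a labeled point
    labeled.append(int(var.argmax()))

coef = fit_linear(pool[labeled], true_ra(pool[labeled]))
rmse = np.sqrt(np.mean((predict(coef, pool) - true_ra(pool)) ** 2))
```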
PMID:40193368 | DOI:10.1371/journal.pone.0320494
TractCloud-FOV: Deep Learning-Based Robust Tractography Parcellation in Diffusion MRI With Incomplete Field of View
Hum Brain Mapp. 2025 Apr 1;46(5):e70201. doi: 10.1002/hbm.70201.
ABSTRACT
Tractography parcellation classifies streamlines reconstructed from diffusion MRI into anatomically defined fiber tracts for clinical and research applications. However, clinical scans often have incomplete fields of view (FOV) where brain regions are partially imaged, leading to partial, or truncated fiber tracts. To address this challenge, we introduce TractCloud-FOV, a deep learning framework that robustly parcellates tractography under conditions of incomplete FOV. We propose a novel training strategy, FOV-Cut Augmentation (FOV-CA), in which we synthetically cut tractograms to simulate a spectrum of real-world inferior FOV cutoff scenarios. This data augmentation approach enriches the training set with realistic truncated streamlines, enabling the model to achieve superior generalization. We evaluate the proposed TractCloud-FOV on both synthetically cut tractography and two real-life datasets with incomplete FOV. TractCloud-FOV significantly outperforms several state-of-the-art methods on all testing datasets in terms of streamline classification accuracy, generalization ability, tract anatomical depiction, and computational efficiency. Overall, TractCloud-FOV achieves efficient and consistent tractography parcellation in diffusion MRI with incomplete FOV.
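FOV-Cut Augmentation can be imitated geometrically by clipping each streamline at a synthetic inferior axial cutoff; a sketch on random toy streamlines (the handling of nearly empty streamlines, and keeping all points above the cutoff rather than truncating at the first crossing, are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def fov_cut(streamlines, z_cut):
    """Simulate an incomplete field of view: drop streamline points
    below an inferior axial cutoff and discard streamlines left with
    fewer than 2 points."""
    out = []
    for sl in streamlines:
        kept = sl[sl[:, 2] >= z_cut]   # column 2 = z (inferior-superior)
        if len(kept) >= 2:
            out.append(kept)
    return out

# 20 toy streamlines of 100 (x, y, z) points each
streamlines = [rng.uniform(-50, 50, (100, 3)) for _ in range(20)]
cut = fov_cut(streamlines, z_cut=-20.0)
```

Training on pairs of intact and cut tractograms is what lets the classifier stay robust to real-world truncation.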
PMID:40193105 | DOI:10.1002/hbm.70201
Phantom-based evaluation of image quality in Transformer-enhanced 2048-matrix CT imaging at low and ultralow doses
Jpn J Radiol. 2025 Apr 7. doi: 10.1007/s11604-025-01755-z. Online ahead of print.
ABSTRACT
PURPOSE: To compare the quality of standard 512-matrix, standard 1024-matrix, and Swin2SR-based 2048-matrix phantom images under different scanning protocols.
MATERIALS AND METHODS: The Catphan 600 phantom was scanned using a multidetector CT scanner under two protocols: 120 kV/100 mA (CT dose index volume = 3.4 mGy) to simulate low-dose CT, and 70 kV/40 mA (0.27 mGy) to simulate ultralow-dose CT. Raw data were reconstructed into standard 512-matrix images using three methods: filtered back projection (FBP), adaptive statistical iterative reconstruction at 40% intensity (ASIR-V), and deep learning image reconstruction at high intensity (DLIR-H). The Swin2SR super-resolution model was used to generate 2048-matrix images (Swin2SR-2048), and the super-resolution convolutional neural network (SRCNN) model was used to generate comparison 2048-matrix images (SRCNN-2048); the quality of the images produced by the two models was compared. Image quality was evaluated with ImQuest software (v7.2.0.0, Duke University) based on line-pair clarity, task-based transfer function (TTF), image noise, and noise power spectrum (NPS).
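Noise and the noise power spectrum were measured with ImQuest; as a rough illustration of what an NPS measurement computes, here is a simplified single-ROI version. Real protocols ensemble-average many ROIs and detrend more carefully, and the `pixel_mm` value and normalization below are assumptions, not ImQuest's implementation:

```python
import numpy as np

def noise_power_spectrum(roi, pixel_mm):
    """Simplified single-ROI noise power spectrum: mean-detrend, 2-D FFT,
    normalize by ROI area, then radially average; NPSpeak is the maximum
    of the radial profile."""
    ny, nx = roi.shape
    noise = roi - roi.mean()
    nps2d = (pixel_mm ** 2 / (nx * ny)) * np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    yy, xx = np.indices(roi.shape)
    r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.maximum(counts, 1)
    return radial, float(radial.max())

rng = np.random.default_rng(1)
roi = 40.0 + 5.0 * rng.standard_normal((128, 128))  # uniform phantom region + noise
radial, nps_peak = noise_power_spectrum(roi, pixel_mm=0.5)
```

A lower NPSpeak, as reported for the Swin2SR-2048 images, means less noise power concentrated at any single spatial frequency.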
RESULTS: At equivalent radiation doses and reconstruction methods, Swin2SR-2048 images resolved more line pairs than both standard-512 and standard-1024 images. Except for the 0.27 mGy/DLIR-H/standard kernel sequence, the TTF-50% of Teflon increased after super-resolution processing. Statistically significant differences in TTF-50% were observed among the standard-512, standard-1024, and Swin2SR-2048 images (all p < 0.05). Swin2SR-2048 images exhibited lower image noise and NPSpeak than both standard 512- and 1024-matrix images, with significant differences observed among all three matrix types (all p < 0.05). Swin2SR-2048 images also demonstrated superior quality compared to SRCNN-2048, with significant differences in image noise (p < 0.001), NPSpeak (p < 0.05), and the TTF-50% of Teflon (p < 0.05).
CONCLUSION: Transformer-enhanced 2048-matrix CT images improve spatial resolution and reduce image noise compared to standard-512 and -1024 matrix images.
PMID:40193009 | DOI:10.1007/s11604-025-01755-z
Artificial intelligence to predict treatment response in rheumatoid arthritis and spondyloarthritis: a scoping review
Rheumatol Int. 2025 Apr 7;45(4):91. doi: 10.1007/s00296-025-05825-3.
ABSTRACT
To analyse the types and applications of artificial intelligence (AI) technologies used to predict treatment response in rheumatoid arthritis (RA) and spondyloarthritis (SpA). A comprehensive search in the Medline, Embase, and Cochrane databases (up to August 2024) identified studies using AI to predict treatment response in RA and SpA. Data on study design, AI methodologies, data sources, and outcomes were extracted and synthesized, and findings were summarized descriptively. Of the 4257 articles identified, 89 studies met the inclusion criteria (74 on RA, 7 on SpA, 4 on psoriatic arthritis, and 4 on a mix of these). AI models primarily employed supervised machine learning techniques (e.g., random forests, support vector machines), unsupervised clustering, and deep learning. Data sources included electronic medical records, clinical biomarkers, genetic and proteomic data, and imaging. Predictive performance varied by methodology, with accuracy ranging from 60 to 70% and AUC values between 0.63 and 0.92. Multi-omics approaches and imaging-based models showed promising results in predicting responses to biologic DMARDs and JAK inhibitors, but methodological heterogeneity limited generalizability. AI technologies exhibit substantial potential for predicting treatment responses in RA and SpA, enhancing personalized medicine. However, challenges such as methodological variability, data integration, and external validation remain. Future research should focus on refining AI models, ensuring their robustness across diverse patient populations, and facilitating their integration into clinical practice to optimize therapeutic decision-making in rheumatology.
PMID:40192881 | DOI:10.1007/s00296-025-05825-3
AI-based automatic estimation of single-kidney glomerular filtration rate and split renal function using non-contrast CT
Insights Imaging. 2025 Apr 7;16(1):84. doi: 10.1186/s13244-025-01959-x.
ABSTRACT
OBJECTIVES: To address the radiation exposure, complexity, and cost of SPECT for measuring renal function, this study employs artificial intelligence (AI) with non-contrast CT to estimate single-kidney glomerular filtration rate (GFR) and split renal function (SRF).
METHODS: 245 patients with atrophic kidney or hydronephrosis were included from two centers (training set: 128 patients from Center I; test set: 117 patients from Center II). The renal parenchyma and hydronephrosis regions in non-contrast CT were automatically segmented by deep learning. Radiomic features were extracted and combined with clinical characteristics using multivariable linear regression (MLR) to obtain a radiomics-clinical-estimated GFR (rcGFR). The relative contribution of each single-kidney rcGFR to the overall rcGFR, the percent renal parenchymal volume, and the percent renal hydronephrosis volume were combined by MLR to estimate SRF (rcphSRF). The Pearson correlation coefficient (r), mean absolute error (MAE), and Lin's concordance coefficient (CCC) were calculated to evaluate the correlations, differences, and agreement between estimations and SPECT-based measurements.
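Agreement was scored with Pearson's r, MAE, and Lin's CCC. The two less familiar metrics can be sketched directly from their standard definitions; the GFR values below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def mae(est, ref):
    """Mean absolute error between estimates and reference values."""
    return float(np.mean(np.abs(est - ref)))

def lins_ccc(est, ref):
    """Lin's concordance correlation coefficient: penalizes both weak
    correlation and systematic bias, unlike Pearson's r."""
    mx, my = est.mean(), ref.mean()
    cov = float(np.mean((est - mx) * (ref - my)))
    return 2.0 * cov / (est.var() + ref.var() + (mx - my) ** 2)

# Hypothetical single-kidney GFR estimates vs SPECT reference (mL/min/1.73 m2).
ref = np.array([55.0, 62.0, 48.0, 70.0, 35.0, 80.0])
est = ref + np.array([8.0, -6.0, 10.0, -9.0, 7.0, -5.0])
err = mae(est, ref)       # average absolute disagreement, here 7.5
ccc = lins_ccc(est, ref)  # below 1 because agreement is imperfect
```

CCC equals 1 only for perfect agreement on the identity line, which is why it is reported alongside r: a biased estimator can still have high r but will have a lower CCC.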
RESULTS: Compared to manual segmentation, deep learning-based automatic segmentation reduced the average segmentation time by a factor of 434.6, to 3.4 s. Compared to single-kidney GFR measured by SPECT, rcGFR showed a significant correlation of r = 0.75 (p < 0.001), an MAE of 10.66 mL/min/1.73 m2, and a CCC of 0.70. Compared to SRF measured by SPECT, rcphSRF showed a significant correlation of r = 0.92 (p < 0.001), an MAE of 7.87%, and a CCC of 0.88.
CONCLUSIONS: The non-contrast CT and AI methods are feasible to estimate single-kidney GFR and SRF in patients with atrophic kidney or hydronephrosis.
CRITICAL RELEVANCE STATEMENT: For patients with an atrophic kidney or hydronephrosis, non-contrast CT and artificial intelligence methods can be used to estimate single-kidney glomerular filtration rate and split renal function, which may minimize the radiation risk, enhance diagnostic efficiency, and reduce costs.
KEY POINTS: Renal function can be assessed using non-contrast CT and AI. Estimated renal function significantly correlated with the SPECT-based measurements. The efficiency of renal function estimation can be refined by the proposed method.
PMID:40192862 | DOI:10.1186/s13244-025-01959-x
Cutting-edge computational approaches to plant phenotyping
Plant Mol Biol. 2025 Apr 7;115(2):56. doi: 10.1007/s11103-025-01582-w.
ABSTRACT
Precision agriculture methods can achieve the highest yield by applying the optimum amount of water, selecting appropriate pesticides, and managing crops in a way that minimises environmental impact. Computer vision and deep learning, a rapidly emerging research area, play a significant role in effective crop management, including superior genotype selection, plant classification, weed and pest detection, root localization, fruit counting and ripeness detection, and yield prediction. Plant phenotyping also involves analysing characteristics such as chlorophyll content, leaf size, growth rate, leaf surface temperature, photosynthesis efficiency, leaf count, emergence time, shoot biomass, and germination time. This article presents an exhaustive study of recent computer vision and deep learning techniques in plant science, with examples. It covers the imaging parameters frequently used for plant image analysis, with formulae; the most popular deep neural networks for plant classification, detection, and object counting; and various applications. Furthermore, we discuss publicly available plant image datasets for disease detection, weed control, and fruit detection, along with evaluation metrics, tools and frameworks, and future advancements and challenges in machine learning and deep learning models.
PMID:40192856 | DOI:10.1007/s11103-025-01582-w
Real-life benefit of artificial intelligence-based fracture detection in a pediatric emergency department
Eur Radiol. 2025 Apr 7. doi: 10.1007/s00330-025-11554-9. Online ahead of print.
ABSTRACT
OBJECTIVES: This study aimed to evaluate the performance of an artificial intelligence (AI)-based software for fracture detection in pediatric patients within a real-life clinical setting. Specifically, it sought to assess (1) the stand-alone AI performance in a real-life cohort and in a selected set of medicolegally relevant fractures and (2) its influence on the diagnostic performance of inexperienced emergency room physicians.
MATERIALS AND METHODS: The retrospective study involved 1672 radiographs of children under 18 years, obtained consecutively (real-life cohort) and selectively (medicolegal cohort) in a tertiary pediatric emergency department. On these images, the stand-alone performance of a commercially available, deep learning-based software was determined. Additionally, three pediatric residents independently reviewed the radiographs before and after AI assistance, and the impact on their diagnostic accuracy was assessed.
RESULTS: In our cohort (median age 10.9 years, 59% male), the AI demonstrated a sensitivity of 92%, specificity of 83%, and accuracy of 87%. For medicolegally relevant fractures, the AI achieved a sensitivity of 100% for proximal tibia fractures, but only 68% for radial condyle fractures. AI assistance improved the residents' patient-wise sensitivity from 84 to 87%, specificity from 91 to 92%, and diagnostic accuracy from 88 to 90%. In 2% of cases, the readers, with the assistance of AI, erroneously discarded their correct diagnosis.
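The reported rates follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts chosen only so the rates land near those in the abstract; the study's actual counts are not given here:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts: 460 of 500 fractures flagged, 415 of 500 normals cleared.
sens, spec, acc = diagnostic_metrics(tp=460, fn=40, tn=415, fp=85)
```

With a balanced case mix like this, accuracy sits between sensitivity and specificity, matching the pattern of the reported 92%/83%/87%.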
CONCLUSION: The AI exhibited strong stand-alone performance in a pediatric setting and can modestly enhance the diagnostic accuracy of inexperienced physicians. However, the economic implications must be weighed against the potential benefits in patient safety.
KEY POINTS: Question Does an artificial intelligence-based software for fracture detection influence inexperienced physicians in a real-life pediatric trauma population? Findings Addition of a well-performing artificial intelligence-based software led to a limited increase in the diagnostic accuracy of inexperienced human readers. Clinical relevance Diagnosing fractures in children is especially challenging for less experienced physicians. High-performing artificial intelligence-based software, used as a "second set of eyes," enhances diagnostic accuracy in a common pediatric emergency room setting.
PMID:40192806 | DOI:10.1007/s00330-025-11554-9
Skull CT metadata for automatic bone age assessment by using three-dimensional deep learning framework
Int J Legal Med. 2025 Apr 7. doi: 10.1007/s00414-025-03469-3. Online ahead of print.
ABSTRACT
Bone age assessment (BAA) is a challenging task in forensic science, especially in extreme situations such as when only a skull is found. This study aimed to develop an accurate three-dimensional deep learning (DL) framework for BAA from skull CT metadata and to explore new skull markers. Retrospective data comprising 385,175 skull CT slices from 1,085 patients aged 16.32 to 90.56 years were obtained. The cohort was randomly split into a training set (90%, N = 976) and a test set (10%, N = 109). An additional 101 patients were collected from another center as an external validation set. Evaluations and comparisons with other state-of-the-art DL models and traditional machine learning (ML) models based on hand-crafted features were performed hierarchically, with the mean absolute error (MAE) as the primary metric. A total of 1,186 patients (mean age ± SD: 54.72 ± 14.91 years; 603 males and 583 females) were evaluated. The best MAEs on the training, test, and external validation sets were 6.51, 5.70, and 8.86 years in males and 6.10, 7.84, and 10.56 years in females, respectively. In the test set, the MAEs of the other 2D and 3D models and of the ML methods based on manual features ranged from 10.12 to 14.12 years. The model showed a tendency toward larger errors in the elderly group. These results suggest that the proposed three-dimensional DL framework performs better than existing DL and manual methods. Furthermore, the framework explored new skeletal markers for BAA and could serve as a backbone for extracting features from three-dimensional skull CT metadata in a professional manner.
PMID:40192774 | DOI:10.1007/s00414-025-03469-3
A Raman spectroscopy algorithm based on convolutional neural networks and multilayer perceptrons: qualitative and quantitative analyses of chemical warfare agent simulants
Analyst. 2025 Apr 7. doi: 10.1039/d5an00075k. Online ahead of print.
ABSTRACT
Rapid and reliable detection of chemical warfare agents (CWAs) is essential for military defense and counter-terrorism operations. Although Raman spectroscopy provides a non-destructive method for on-site detection, existing methods struggle to cope with complex spectral overlap and concentration changes when analyzing highly complex mixtures and mixtures containing trace components. Building on convolutional neural networks and multilayer perceptrons, this study proposes a deep learning-based algorithm for qualitative and quantitative Raman spectroscopy analysis (RS-MLP). A reference feature library is built from pure-substance spectral features, multi-head attention adaptively captures mixture weights, and the MLP-Mixer then performs hierarchical feature matching for qualitative identification and quantitative analysis. The recognition rate on the four combination types used for validation reached 100%, with an average root mean square error (RMSE) below 0.473% for the concentration prediction of three components. The model remained robust even under highly overlapping spectra, and its interpretability is also enhanced. With excellent accuracy and robustness in component identification and concentration prediction for complex mixtures, the model provides a practical solution for rapid, non-contact detection of persistent chemicals in complex environments.
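For context, the classical baseline that attention-based methods like RS-MLP improve on is linear least-squares unmixing of the mixture spectrum against the pure-substance reference library. A minimal sketch with synthetic Gaussian-band spectra; this is the baseline technique, not the RS-MLP model itself:

```python
import numpy as np

def gaussian_band(x, center, width):
    """Single synthetic Raman band."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def unmix(mixture, references):
    """Least-squares unmixing of a mixture spectrum against a library of
    pure-component reference spectra, with the recovered concentrations
    clipped to be nonnegative."""
    coef, *_ = np.linalg.lstsq(references.T, mixture, rcond=None)
    return np.clip(coef, 0.0, None)

shift = np.linspace(200.0, 1800.0, 400)  # Raman shift axis (cm^-1)
ref_a = gaussian_band(shift, 700.0, 15.0) + 0.5 * gaussian_band(shift, 1200.0, 20.0)
ref_b = gaussian_band(shift, 950.0, 15.0) + 0.8 * gaussian_band(shift, 1450.0, 20.0)
refs = np.stack([ref_a, ref_b])
mixture = 0.3 * ref_a + 0.7 * ref_b  # simulant concentrations to recover
conc = unmix(mixture, refs)
```

This works exactly on clean, linearly independent spectra, but degrades under heavy band overlap and noise, which is the regime where the learned attention weighting is claimed to help.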
PMID:40192710 | DOI:10.1039/d5an00075k