Deep learning

ConfuseNN: Interpreting convolutional neural network inferences in population genomics with data shuffling

Tue, 2025-04-08 06:00

bioRxiv [Preprint]. 2025 Mar 27:2025.03.24.644668. doi: 10.1101/2025.03.24.644668.

ABSTRACT

Convolutional neural networks (CNNs) have become powerful tools for population genomic inference, yet understanding which genomic features drive their performance remains challenging. We introduce ConfuseNN, a method that systematically shuffles input haplotype matrices to disrupt specific population genetic features and evaluate their contribution to CNN performance. By sequentially removing signals from linkage disequilibrium, allele frequency, and other population genetic patterns in test data, we evaluate how each feature contributes to CNN performance. We applied ConfuseNN to three published CNNs for demographic history and selection inference, confirming the importance of specific data features and identifying limitations of network architecture and of simulated training and testing data design. ConfuseNN provides an accessible biologically motivated framework for interpreting CNN behavior across different tasks in population genetics, helping bridge the gap between powerful deep learning approaches and traditional population genetic theory.
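
As a rough illustration of the shuffling idea described above (not the authors' exact shuffling schemes), the sketch below permutes haplotypes independently at each site, which preserves per-site allele frequencies but destroys linkage disequilibrium, and then measures the resulting drop in test accuracy; `model_predict` and the array layout are assumed placeholders.

```python
import numpy as np

def shuffle_within_sites(hap_matrix, rng):
    """Permute haplotypes independently at each site (column).

    This preserves per-site allele frequencies but destroys
    linkage disequilibrium between sites."""
    shuffled = hap_matrix.copy()
    for j in range(shuffled.shape[1]):
        shuffled[:, j] = rng.permutation(shuffled[:, j])
    return shuffled

def accuracy_drop(model_predict, X_test, y_test, shuffle_fn, seed=0):
    """Contribution of the destroyed signal, measured as the drop in
    test accuracy after shuffling each haplotype matrix."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_predict(X_test) == y_test)
    X_shuffled = np.stack([shuffle_fn(x, rng) for x in X_test])
    return baseline - np.mean(model_predict(X_shuffled) == y_test)
```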

PMID:40196528 | PMC:PMC11974698 | DOI:10.1101/2025.03.24.644668

Categories: Literature Watch

Point-SPV: end-to-end enhancement of object recognition in simulated prosthetic vision using synthetic viewing points

Tue, 2025-04-08 06:00

Front Hum Neurosci. 2025 Mar 24;19:1549698. doi: 10.3389/fnhum.2025.1549698. eCollection 2025.

ABSTRACT

Prosthetic vision systems aim to restore functional sight for visually impaired individuals by replicating visual perception through phosphenes induced via electrical stimulation of the visual cortex, yet challenges remain in visual representation strategies, such as incorporating gaze information and task-dependent optimization. In this paper, we introduce Point-SPV, an end-to-end deep learning model designed to enhance object recognition in simulated prosthetic vision. Point-SPV takes an initial step toward gaze-based optimization by simulating viewing points, representing potential gaze locations, and training the model on patches surrounding these points. Our approach prioritizes task-oriented representation, aligning visual outputs with object recognition needs. A behavioral gaze-contingent object discrimination experiment demonstrated that Point-SPV outperformed a conventional edge detection method, enabling observers to achieve higher recognition accuracy, faster reaction times, and more efficient visual exploration. Our work highlights how task-specific optimization may enhance representations in prosthetic vision, offering a foundation for future exploration and application.

PMID:40196449 | PMC:PMC11973266 | DOI:10.3389/fnhum.2025.1549698

Categories: Literature Watch

The role of trustworthy and reliable AI for multiple sclerosis

Tue, 2025-04-08 06:00

Front Digit Health. 2025 Mar 24;7:1507159. doi: 10.3389/fdgth.2025.1507159. eCollection 2025.

ABSTRACT

This paper investigates the importance of Trustworthy Machine Learning (ML) in the context of Multiple Sclerosis (MS) research and care. Because of the complex and individual nature of MS, reliable and trustworthy ML models are essential. Key aspects of trustworthy ML, such as out-of-distribution generalization, explainability, uncertainty quantification, and calibration, are explored, highlighting their significance for healthcare applications. Challenges in integrating these ML tools into clinical workflows are also addressed, including the difficulty of interpreting AI outputs, data diversity, and the need for comprehensive, high-quality data. The paper calls for collaborative efforts among researchers, clinicians, and policymakers to develop ML solutions that are technically sound, clinically relevant, and patient-centric.

PMID:40196398 | PMC:PMC11973328 | DOI:10.3389/fdgth.2025.1507159

Categories: Literature Watch

A study on early diagnosis for fracture non-union prediction using deep learning and bone morphometric parameters

Tue, 2025-04-08 06:00

Front Med (Lausanne). 2025 Mar 24;12:1547588. doi: 10.3389/fmed.2025.1547588. eCollection 2025.

ABSTRACT

BACKGROUND: Early diagnosis of non-union fractures is vital for treatment planning, yet studies using bone morphometric parameters for this purpose are scarce. This study aims to create a fracture micro-CT image dataset, design a deep learning algorithm for fracture segmentation, and develop an early diagnosis model for fracture non-union.

METHODS: Using fracture animal models, micro-CT images from 12 rats at various healing stages (days 1, 7, 14, 21, 28, and 35) were analyzed. Fracture lesion frames were annotated to create a high-resolution dataset. We proposed the Vision Mamba Triplet Attention and Edge Feature Decoupling Module UNet (VM-TE-UNet) for fracture area segmentation. We then extracted bone morphometric parameters to establish an early diagnostic evaluation system for fracture non-union.

RESULTS: A dataset comprising 2,448 micro-CT images of rat fracture lesions with fracture Region of Interest (ROI), bone callus, and healing characteristics was established and used to train and test the proposed VM-TE-UNet, which achieved a Dice Similarity Coefficient of 0.809, an improvement over the baseline's 0.765, and reduced the 95th-percentile Hausdorff Distance to 13.1. Through ablation studies, comparative experiments, and result analysis, the algorithm's effectiveness and superiority were validated. Significant differences (p < 0.05) were observed between the fracture and fracture non-union groups during the inflammatory and repair phases. Key indices, such as the average CT values of hematoma and cartilage tissues, BS/TS and BS/TV of mineralized cartilage, BS/TV of osteogenic tissue, and BV/TV of osteogenic tissue, align with clinical methods for diagnosing fracture non-union by assessing callus presence and local soft tissue swelling. On day 14, the early diagnosis model achieved an AUC of 0.995, demonstrating its ability to diagnose fracture non-union during the soft-callus phase.
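
For reference, the Dice Similarity Coefficient reported above is the standard overlap measure between a predicted and a ground-truth mask; a minimal NumPy version (independent of the VM-TE-UNet implementation, whose code is not given here) is:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2*|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    p = np.asarray(pred_mask, dtype=bool)
    t = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(p, t).sum()
    return (2.0 * intersection + eps) / (p.sum() + t.sum() + eps)
```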

CONCLUSION: This study proposed the VM-TE-UNet for fracture area segmentation, extracted micro-CT indices, and established an early diagnostic model for fracture non-union. We believe that the prediction model can effectively screen out samples of poor fracture rehabilitation caused by blood supply limitations in rats 14 days after fracture, rather than at the widely accepted 35 or 40 days. This provides an important reference for the clinical prediction of fracture non-union and early intervention treatment.

PMID:40196347 | PMC:PMC11973290 | DOI:10.3389/fmed.2025.1547588

Categories: Literature Watch

Transformer-based artificial intelligence on single-cell clinical data for homeostatic mechanism inference and rational biomarker discovery

Tue, 2025-04-08 06:00

medRxiv [Preprint]. 2025 Mar 25:2025.03.24.25324556. doi: 10.1101/2025.03.24.25324556.

ABSTRACT

Artificial intelligence (AI) applied to single-cell data has the potential to transform our understanding of biological systems by revealing patterns and mechanisms that simpler traditional methods miss. Here, we develop a general-purpose, interpretable AI pipeline consisting of two deep learning models: the Multi-Input Set Transformer++ (MIST) model for prediction and the single-cell FastShap model for interpretability. We apply this pipeline to a large set of routine clinical data containing single-cell measurements of circulating red blood cells (RBC), white blood cells (WBC), and platelets (PLT) to study population fluxes and homeostatic hematological mechanisms. We find that MIST can use these single-cell measurements to explain 70-82% of the variation in blood cell population sizes among patients (RBC count, PLT count, WBC count), compared to 5-20% explained with current approaches. MIST's accuracy implies that substantial information on cellular production and clearance is present in the single-cell measurements. MIST identified substantial crosstalk among RBC, WBC, and PLT populations, suggesting co-regulatory relationships that we validated and investigated using interpretability maps generated by single-cell FastShap. The maps identify granular single-cell subgroups most important for each population's size, enabling generation of evidence-based hypotheses for co-regulatory mechanisms. The interpretability maps also enable rational discovery of a single-WBC biomarker, "Down Shift", that complements an existing marker of inflammation and strengthens diagnostic associations with diseases including sepsis, heart disease, and diabetes. This study illustrates how single-cell data can be leveraged for mechanistic inference with potential clinical relevance and how this AI pipeline can be applied to power scientific discovery.

PMID:40196278 | PMC:PMC11974774 | DOI:10.1101/2025.03.24.25324556

Categories: Literature Watch

Artificial Intelligence Prediction of Age from Echocardiography as a Marker for Cardiovascular Disease

Tue, 2025-04-08 06:00

medRxiv [Preprint]. 2025 Mar 26:2025.03.25.25324627. doi: 10.1101/2025.03.25.25324627.

ABSTRACT

Accurate understanding of biological aging and the impact of environmental stressors is crucial for understanding cardiovascular health and identifying patients at risk for adverse outcomes. Chronological age stands as perhaps the most universal risk predictor across virtually all populations and diseases. While chronological age is readily discernible, efforts to distinguish between biologically older versus younger individuals can, in turn, potentially identify individuals with accelerated versus delayed cardiovascular aging. This study presents a deep learning artificial intelligence (AI) approach to predict age from echocardiogram videos, leveraging 2,610,266 videos from 166,508 studies from 90,738 unique patients and using the trained models to identify features of accelerated and delayed aging. Leveraging multi-view echocardiography, our AI age prediction model achieved a mean absolute error (MAE) of 6.76 (6.65 - 6.87) years and a coefficient of determination (R²) of 0.732 (0.72 - 0.74). Stratification by age prediction revealed associations with increased risk of coronary artery disease, heart failure, and stroke. The age prediction can also identify heart transplant recipients, as a discontinuity in predicted age is seen before and after transplantation. Guided backpropagation visualizations highlighted the model's focus on the mitral valve, mitral apparatus, and basal inferior wall as crucial for the assessment of age. These findings underscore the potential of computer vision-based assessment of echocardiography in enhancing cardiovascular risk assessment and understanding biological aging in the heart.
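
The reported MAE and R², and the notion of accelerated versus delayed aging, reduce to comparisons between predicted and chronological age; a minimal sketch (with hypothetical arrays, not the study's data) is:

```python
import numpy as np

def age_prediction_summary(age_true, age_pred):
    """MAE, R^2, and per-patient age gap (predicted minus chronological).

    A positive age gap flags potentially accelerated cardiac aging,
    a negative gap potentially delayed aging."""
    age_true = np.asarray(age_true, dtype=float)
    age_pred = np.asarray(age_pred, dtype=float)
    mae = np.mean(np.abs(age_pred - age_true))
    ss_res = np.sum((age_true - age_pred) ** 2)
    ss_tot = np.sum((age_true - age_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    age_gap = age_pred - age_true
    return mae, r2, age_gap
```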

PMID:40196275 | PMC:PMC11974980 | DOI:10.1101/2025.03.25.25324627

Categories: Literature Watch

Vision Transformer Autoencoders for Unsupervised Representation Learning: Capturing Local and Non-Local Features in Brain Imaging to Reveal Genetic Associations

Tue, 2025-04-08 06:00

medRxiv [Preprint]. 2025 Mar 25:2025.03.24.25324549. doi: 10.1101/2025.03.24.25324549.

ABSTRACT

The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and improved personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract endophenotypes from brain imaging using a convolutional neural network (CNN) autoencoder, and conducted brain imaging GWAS on UK Biobank (UKBB). In this work, we leverage a vision transformer (ViT) model due to its different inductive bias and its ability to potentially capture unique patterns through its pairwise attention mechanism. Our approach, based on 128 endophenotypes derived from average pooling, discovered 10 loci not reported by the CNN-based UDIP model, 3 of which had no previously reported associations with brain structure in the GWAS catalog. Our interpretation results demonstrate the ViT's capability to capture non-local patterns, such as left-right hemisphere symmetry, within brain MRI data by leveraging its attention mechanism and positional embeddings. Our results highlight the advantages of transformer-based architectures in feature extraction and representation for genetic discovery.
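
Since the abstract describes deriving 128 endophenotypes by average pooling over a ViT's encoded patch tokens, a toy PyTorch encoder illustrating that pipeline is sketched below; the 2D patching, input size, and all dimensions are simplifying assumptions, not the authors' model (which operates on brain MRI).

```python
import torch
import torch.nn as nn

class ViTEndophenotypeEncoder(nn.Module):
    """Toy ViT-style encoder: patch embedding + positional embeddings ->
    transformer encoder -> average pooling over tokens -> 128-D endophenotypes."""

    def __init__(self, img_size=224, patch=16, in_channels=1,
                 dim=256, depth=4, heads=8, n_endophenotypes=128):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(in_channels, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_endo = nn.Linear(dim, n_endophenotypes)

    def forward(self, x):                                        # x: (B, 1, 224, 224)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = tokens + self.pos_embed
        pooled = self.encoder(tokens).mean(dim=1)                # average pooling
        return self.to_endo(pooled)                              # 128 endophenotypes
```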

PMID:40196251 | PMC:PMC11974795 | DOI:10.1101/2025.03.24.25324549

Categories: Literature Watch

Deep learning-based generation of DSC MRI parameter maps using DCE MRI data

Mon, 2025-04-07 06:00

AJNR Am J Neuroradiol. 2025 Apr 7:ajnr.A8768. doi: 10.3174/ajnr.A8768. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Perfusion and perfusion-related parameter maps obtained using dynamic susceptibility contrast (DSC) MRI and dynamic contrast enhanced (DCE) MRI are both useful for clinical diagnosis and research. However, using both DSC and DCE MRI in the same scan session requires two doses of gadolinium contrast agent. The objective was to develop deep-learning based methods to synthesize DSC-derived parameter maps from DCE MRI data.

MATERIALS AND METHODS: Independent analysis of data collected in previous studies was performed. The database contained sixty-four participants, including patients with and without brain tumors. The reference parameter maps were measured from DSC MRI performed following DCE MRI. A conditional generative adversarial network (cGAN) was designed and trained to generate synthetic DSC-derived maps from DCE MRI data. The median parameter values and distributions between synthetic and real maps were compared using linear regression and Bland-Altman plots.
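
As a hedged sketch of how such a conditional GAN is commonly trained (a pix2pix-style objective combining an adversarial term with an L1 reconstruction term; the paper's actual architecture and loss weighting are not specified here), a single generator update in PyTorch might look like the following, with `generator`, `discriminator`, and the tensors as placeholders.

```python
import torch
import torch.nn.functional as F

def generator_step(generator, discriminator, dce_input, dsc_target,
                   g_optimizer, l1_weight=100.0):
    """One generator update for a pix2pix-style conditional GAN:
    adversarial loss (fool D on the conditioned pair) plus L1 to the real map."""
    g_optimizer.zero_grad()
    fake_dsc = generator(dce_input)                       # synthetic DSC-derived maps
    d_fake = discriminator(torch.cat([dce_input, fake_dsc], dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))                  # want D to call it "real"
    recon_loss = F.l1_loss(fake_dsc, dsc_target)          # pixel-wise fidelity
    loss = adv_loss + l1_weight * recon_loss
    loss.backward()
    g_optimizer.step()
    return loss.item()
```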

RESULTS: Using cGAN, realistic DSC parameter maps could be synthesized from DCE MRI data. For controls without brain tumors, the synthesized parameters had distributions similar to the ground truth values. For patients with brain tumors, the synthesized parameters in the tumor region correlated linearly with the ground truth values. In addition, areas not visible due to susceptibility artifacts in real DSC maps could be visualized using DCE-derived DSC maps.

CONCLUSIONS: DSC-derived parameter maps could be synthesized using DCE MRI data, including susceptibility-artifact-prone regions. This shows the potential to obtain both DSC and DCE parameter maps from DCE MRI using a single dose of contrast agent.

ABBREVIATIONS: cGAN=conditional generative adversarial network; Ktrans=volume transfer constant; rCBV=relative cerebral blood volume; rCBF=relative cerebral blood flow; Ve=extravascular extracellular volume; Vp=plasma volume.

PMID:40194853 | DOI:10.3174/ajnr.A8768

Categories: Literature Watch

Severity Classification of Pediatric Spinal Cord Injuries Using Structural MRI Measures and Deep Learning: A Comprehensive Analysis Across All Vertebral Levels

Mon, 2025-04-07 06:00

AJNR Am J Neuroradiol. 2025 Apr 7:ajnr.A8770. doi: 10.3174/ajnr.A8770. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Spinal cord injury (SCI) in the pediatric population presents a unique challenge in diagnosis and prognosis due to the complexity of performing clinical assessments on children. Accurate evaluation of structural changes in the spinal cord is essential for effective treatment planning. This study aims to evaluate structural characteristics in pediatric patients with SCI by comparing cross-sectional area (CSA), anterior-posterior (AP) width, and right-left (RL) width across all vertebral levels of the spinal cord between typically developing (TD) and participants with SCI. We employed deep learning techniques to utilize these measures for detecting SCI cases and determining their injury severity.

MATERIALS AND METHODS: Sixty-one pediatric participants (ages 6-18), including 20 with chronic SCI and 41 TD, were enrolled and scanned using a 3T MRI scanner. All SCI participants underwent the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) test to assess their neurological function and determine their American Spinal Injury Association (ASIA) Impairment Scale (AIS) category. T2-weighted MRI scans were utilized to measure CSA, AP width, and RL width along the entire cervical and thoracic cord. These measures were automatically extracted at every vertebral level of the spinal cord using the SCT toolbox. Deep convolutional neural networks (CNNs) were utilized to classify participants into SCI or TD groups and determine their AIS classification based on structural parameters and demographic factors such as age and height.

RESULTS: Significant differences (p<0.05) were found in CSA, AP width, and RL width between SCI and TD participants, indicating notable structural alterations due to SCI. The CNN-based models demonstrated high performance, achieving 96.59% accuracy in distinguishing SCI from TD participants. Furthermore, the models determined AIS category classification with 94.92% accuracy.

CONCLUSIONS: The study demonstrates the effectiveness of integrating cross-sectional structural imaging measures with deep learning methods for classification and severity assessment of pediatric SCI. The deep learning approach outperforms traditional machine learning models in diagnostic accuracy, offering potential improvements in patient care in pediatric SCI management.

ABBREVIATIONS: SCI = Spinal Cord Injury, TD = Typically Developing, CSA = Cross-Sectional Area, AP = Anterior-Posterior, RL = Right-Left, ASIA = American Spinal Injury Association, AIS = ASIA Impairment Scale, CNN = Convolutional Neural Network.

PMID:40194851 | DOI:10.3174/ajnr.A8770

Categories: Literature Watch

ENsiRNA: A multimodality method for siRNA-mRNA and modified siRNA efficacy prediction based on geometric graph neural network

Mon, 2025-04-07 06:00

J Mol Biol. 2025 Apr 5:169131. doi: 10.1016/j.jmb.2025.169131. Online ahead of print.

ABSTRACT

With the rise of small interfering RNA (siRNA) as a therapeutic tool, effective siRNA design is crucial. Current methods often emphasize sequence-related features, overlooking structural information. To address this, we introduce ENsiRNA, a multimodal approach utilizing a geometric graph neural network to predict the efficacy of both standard and modified siRNA. ENsiRNA integrates sequence features from a pretrained RNA language model, structural characteristics, and thermodynamic data or chemical modification to enhance prediction accuracy. Our results indicate that ENsiRNA outperforms existing methods, achieving over a 13% improvement in Pearson Correlation Coefficient (PCC) for standard siRNA across various tests. For modified siRNA, despite challenges associated with RNA folding methods, ENsiRNA still demonstrates competitive performance in different datasets. This novel method highlights the significance of structural information and multimodal strategies in siRNA prediction, advancing the field of therapeutic design.

PMID:40194620 | DOI:10.1016/j.jmb.2025.169131

Categories: Literature Watch

Enhanced inhibitor-kinase affinity prediction via integrated multimodal analysis of drug molecule and protein sequence features

Mon, 2025-04-07 06:00

Int J Biol Macromol. 2025 Apr 5:142871. doi: 10.1016/j.ijbiomac.2025.142871. Online ahead of print.

ABSTRACT

The accurate prediction of inhibitor-kinase binding affinity is pivotal for advancing drug development and precision medicine. In this study, we developed predictive models for human kinases, including cyclin-dependent kinases (CDKs), mitogen-activated protein kinases (MAP kinases), glycogen synthase kinases (GSKs), CDK-like kinases (CMGC kinase group) and receptor tyrosine kinases (RTKs)-key regulators of cellular signaling and disease progression. These kinases serve as primary drug targets in cancer and other critical diseases. To enhance affinity prediction precision, we introduce an innovative multimodal fusion model, KinNet. The model integrates the GraphKAN network, which effectively captures both local and global structural features of drug molecules. Furthermore, it leverages kernel functions and learnable activation functions to dynamically optimize node and edge feature representations. Additionally, the model incorporates the Conv-Enhanced Mamba module, combining Conv1D's ability to capture local features with Mamba's strength in processing long sequences, facilitating comprehensive feature extraction from protein sequences and molecular fingerprints. Experimental results confirm that the KinNet model achieves superior prediction accuracy compared to existing approaches, underscoring its potential to elucidate inhibitor-kinase binding mechanisms. This model serves as a robust computational framework to support drug discovery and the development of kinase-targeted therapies.

PMID:40194581 | DOI:10.1016/j.ijbiomac.2025.142871

Categories: Literature Watch

Sensitivity of a deep-learning-based breast cancer risk prediction model

Mon, 2025-04-07 06:00

Phys Med Biol. 2025 Apr 7. doi: 10.1088/1361-6560/adc9f8. Online ahead of print.

ABSTRACT

When it comes to the implementation of Deep-Learning (DL) based Breast Cancer Risk (BCR) prediction models in clinical settings, it is important to be aware that these models could be sensitive to various factors, especially those arising from the acquisition process. In this work, we investigated how sensitive the state-of-the-art BCR prediction model is to realistic image alterations that can occur as a result of different positioning during the acquisition process.

APPROACH: 5076 mammograms (1269 exams, 650 participants) from the Slovenian and Belgian (University Hospital Leuven) Breast Cancer Screening Programs were collected. The Original MIRAI model was used for 1-5-year BCR estimation. First, BCR was predicted for the original, unaltered mammograms. Then, a series of image alteration techniques was applied, such as swapping left and right breasts, removing tissue below the inframammary fold, translations, cropping, rotations, registration, and pectoral muscle removal. In addition, a subset of 81 exams, in which at least one of the mammograms had to be retaken due to inadequate image quality, served as an approximation of a test-retest experiment. Bland-Altman plots were used to determine prediction bias and 95% limits of agreement (LOA). Additionally, the mean absolute difference in BCR (Mean AD) was calculated. The impact on overall discrimination performance was evaluated with the AUC.

RESULTS: Swapping left and right breasts had no impact on the predicted BCR. The removal of skin tissue below the inframammary fold had minimal impact on the predicted BCR (1-5-year LOA: [-0.02, 0.01]). The model was sensitive to translation, rotation, registration, and cropping, where LOAs of up to ±0.1 were observed. Partial pectoral muscle removal did not have a major impact on predicted BCR, while complete removal of the pectoral muscle introduced substantial prediction bias and LOAs (1-year LOA: [-0.07, 0.04], 5-year LOA: [-0.06, 0.03]). The approximation of a real test-retest experiment resulted in LOAs similar to those of the simulated image alterations. None of the alterations impacted the overall BCR discrimination performance; the initial 1-year AUC (0.90 [0.88, 0.92]) and 5-year AUC (0.77 [0.75, 0.80]) remained unchanged.

SIGNIFICANCE: While the tested image alterations do not impact overall BCR discrimination performance, substantial changes in predicted 1-5-year BCR can occur on an individual basis.
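
The bias and 95% limits of agreement quoted above follow the standard Bland-Altman construction; a minimal NumPy version applied to risk predictions before and after an alteration (variable names are illustrative) is:

```python
import numpy as np

def bland_altman(risk_original, risk_altered):
    """Bias, 95% limits of agreement, and mean absolute difference
    between two sets of BCR predictions for the same exams."""
    a = np.asarray(risk_original, dtype=float)
    b = np.asarray(risk_altered, dtype=float)
    diff = b - a
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    mean_abs_diff = np.abs(diff).mean()
    return bias, loa, mean_abs_diff
```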

PMID:40194545 | DOI:10.1088/1361-6560/adc9f8

Categories: Literature Watch

A deep learning approach for quantifying CT perfusion parameters in stroke

Mon, 2025-04-07 06:00

Biomed Phys Eng Express. 2025 Apr 7. doi: 10.1088/2057-1976/adc9b6. Online ahead of print.

ABSTRACT

Computed tomography perfusion (CTP) imaging is widely used for assessing acute ischemic stroke. However, conventional methods for quantifying CTP images, such as singular value decomposition (SVD), often lead to oscillations in the estimated residue function and underestimation of tissue perfusion. In addition, the use of a global arterial input function (AIF) potentially leads to erroneous parameter estimates. We aim to develop a method for accurately estimating physiological parameters from CTP images.

APPROACH: We introduced a Transformer-based network to learn voxel-wise temporal features of CTP images. With the global AIF and the concentration time curve (CTC) of brain tissue as inputs, the network estimated the local AIF and the flow-scaled residue function. The derived parameters, including cerebral blood flow (CBF) and bolus arrival delay (BAD), were validated on both simulated and patient data (ISLES18 dataset) and compared with multiple SVD-based methods, including standard SVD (sSVD), block-circulant SVD (cSVD), and oscillation-index SVD (oSVD).

MAIN RESULTS: On data simulating multiple scenarios, the local AIF estimated by the proposed method correlated with the true AIF with a coefficient of 0.97±0.04 (P<0.001), estimated CBF with a mean error of 4.95 ml/100 g/min, and estimated BAD with a mean error of 0.51 s; the latter two errors were significantly lower than those of the SVD-based methods (P<0.001). The CBF estimated by the SVD-based methods was underestimated by 10%-15%. For patient data, the CBF estimates of the proposed method were significantly higher than those of the sSVD method in both normally perfused and ischemic tissues, by 13.83 ml/100 g/min or 39.33% and 8.55 ml/100 g/min or 57.73% (P<0.001), respectively, which was in agreement with the simulation results.

SIGNIFICANCE: The proposed method is capable of estimating local AIF and perfusion parameters from CTP images with high accuracy, potentially improving CTP's performance and efficiency in diagnosing and treating ischemic stroke.
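
For context, the sSVD baseline that the network is compared against deconvolves the tissue curve from the AIF by inverting the discretized convolution model C = Δt·A·k (where k = CBF·R is the flow-scaled residue function) with a truncated-SVD pseudoinverse; a minimal sketch of that standard formulation (not the paper's code, and with a typical 20% truncation threshold as an assumption) is:

```python
import numpy as np

def ssvd_deconvolution(aif, ctc, dt, lambda_rel=0.2):
    """Standard (non-circulant) SVD deconvolution for CTP.

    Solves ctc = dt * A @ k for the flow-scaled residue function k(t),
    where A[i, j] = aif[i - j] is the lower-triangular convolution matrix;
    singular values below lambda_rel * max(s) are truncated. CBF is taken
    as the peak of k (assuming no delay and R(0) = 1)."""
    aif = np.asarray(aif, dtype=float)
    ctc = np.asarray(ctc, dtype=float)
    n = len(ctc)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    U, s, Vt = np.linalg.svd(dt * A)
    s_inv = np.where(s > lambda_rel * s.max(), 1.0 / s, 0.0)   # truncation
    k = Vt.T @ (s_inv * (U.T @ ctc))                           # CBF * R(t)
    return k, k.max()
```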

PMID:40194529 | DOI:10.1088/2057-1976/adc9b6

Categories: Literature Watch

Optimized Glaucoma Detection Using HCCNN with PSO-Driven Hyperparameter Tuning

Mon, 2025-04-07 06:00

Biomed Phys Eng Express. 2025 Apr 7. doi: 10.1088/2057-1976/adc9b7. Online ahead of print.

ABSTRACT

This study is focused on creating an effective glaucoma detection system employing a Hybrid Centric Convolutional Neural Network (HCCNN) model. By using Particle Swarm Optimization (PSO), classification accuracy is increased and computational complexity is reduced. A modified U-Net is also used to segment the optic disc (OD) and optic cup (OC) regions of classified glaucoma images in order to determine the severity of glaucoma.

METHODS: The proposed HCCNN model can extract features from fundus images that show signs of glaucoma. To improve model performance, hyperparameters such as the dropout rate, learning rate, and number of units in the dense layer are optimized using the PSO method. The PSO algorithm iteratively assesses and modifies these parameters to minimise classification error. The classified glaucoma image is subjected to channel separation to enhance the visibility of relevant features. This channel-separated image is segmented using the modified U-Net to delineate the OC and OD regions.

RESULTS: Experimental findings indicate that the PSO-HCCNN model attains classification accuracies of 94% and 97% on the DRISHTI-GS and RIM-ONE datasets. Performance criteria including accuracy, sensitivity, specificity, and AUC are employed to assess the system's efficacy, demonstrating a notable enhancement in early detection rates of glaucoma. To evaluate segmentation performance, parameters such as the Dice coefficient and Jaccard index are computed.

CONCLUSION: The integration of PSO with the HCCNN model considerably enhances glaucoma detection from fundus images by optimising essential parameters and accurate OD and OC segmentation, resulting in a robust and precise classification model. This method has potential uses in ophthalmology and may help physicians detect glaucoma early and accurately.
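
A minimal sketch of a PSO loop over the three hyperparameters mentioned (dropout rate, learning rate, dense-layer units) is shown below; `evaluate_model`, the search bounds, and the swarm settings are assumptions, not the authors' configuration.

```python
import numpy as np

def pso_search(evaluate_model, n_particles=10, n_iters=20,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm search over (dropout, log10 learning rate, dense units).

    evaluate_model(dropout, lr, units) -> validation error to minimise."""
    rng = np.random.default_rng(seed)
    low = np.array([0.1, -4.0, 32.0])        # dropout, log10(lr), dense units
    high = np.array([0.6, -2.0, 512.0])
    pos = rng.uniform(low, high, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_err = pos[0].copy(), np.inf

    for _ in range(n_iters):
        for i in range(n_particles):
            dropout, log_lr, units = pos[i]
            err = evaluate_model(dropout, 10.0 ** log_lr, int(round(units)))
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i].copy(), err
            if err < gbest_err:
                gbest, gbest_err = pos[i].copy(), err
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)

    best = {"dropout": gbest[0], "learning_rate": 10.0 ** gbest[1],
            "dense_units": int(round(gbest[2]))}
    return best, gbest_err
```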

PMID:40194525 | DOI:10.1088/2057-1976/adc9b7

Categories: Literature Watch

Dimensionality Reduction of Genetic Data using Contrastive Learning

Mon, 2025-04-07 06:00

Genetics. 2025 Apr 7:iyaf068. doi: 10.1093/genetics/iyaf068. Online ahead of print.

ABSTRACT

We introduce a framework for using contrastive learning for dimensionality reduction on genetic datasets to create PCA-like population visualizations. Contrastive learning is a self-supervised deep learning method that uses similarities between samples to train the neural network to discriminate between samples. Many of the advances in these types of models have been made for computer vision, but some common methodology does not translate well from image to genetic data. We define a loss function that outperforms loss functions commonly used in contrastive learning, and a data augmentation scheme tailored specifically towards SNP genotype datasets. We compare the performance of our method to PCA and contemporary non-linear methods with respect to how well they preserve local and global structure, and how well they generalize to new data. Our method displays good preservation of global structure and has improved generalization properties over t-SNE, UMAP, and popvae, while preserving relative distances between individuals to a high extent. A strength of the deep learning framework is the possibility of projecting new samples and fine-tuning to new datasets using a pre-trained model without access to the original training data, and the ability to incorporate more domain-specific information in the model. We show examples of population classification on two datasets of dog and human genotypes.
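
Since the abstract does not specify the custom loss or augmentation scheme, the sketch below only illustrates the general recipe: a random genotype-masking augmentation and a standard NT-Xent contrastive loss in PyTorch as stand-ins for the paper's tailored versions.

```python
import torch
import torch.nn.functional as F

def mask_genotypes(genotypes, mask_prob=0.1, missing_value=9):
    """Augmentation: randomly set a fraction of SNP genotypes to a 'missing' code."""
    mask = torch.rand_like(genotypes, dtype=torch.float) < mask_prob
    return genotypes.masked_fill(mask, missing_value)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard NT-Xent loss over two augmented views of the same samples
    (a placeholder for the paper's own loss, which is not given here)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d) unit vectors
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```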

PMID:40194517 | DOI:10.1093/genetics/iyaf068

Categories: Literature Watch

Deep Learning for the Prediction of Time-to-Seizure in Epilepsy using Routine EEG (P3-9.003)

Mon, 2025-04-07 06:00

Neurology. 2025 Apr 8;104(7_Supplement_1):2403. doi: 10.1212/WNL.0000000000209122. Epub 2025 Apr 7.

ABSTRACT

OBJECTIVE: To develop and validate a deep learning model to predict time-to-seizure in patients with epilepsy (PWE) from routine EEG.

BACKGROUND: While interictal epileptiform discharges (IEDs) on EEG are associated with higher seizure recurrence, routine EEG has low sensitivity for IEDs and is prone to overinterpretation. Deep learning can extract features from EEG beyond IEDs and map them to complex outcomes, such as seizure risk through time, offering valuable information to guide epilepsy management.

DESIGN/METHODS: We selected all PWE undergoing routine EEG at our institution from 2018-2019, using EEGs recorded after July 2019 as the testing set. Patients with unclear epilepsy diagnoses or seizure during the EEG were excluded. Medical charts were reviewed for the date of the first seizure after the EEG (exact date or extrapolated from seizure frequency) and the date of last follow-up. EEGs were segmented into overlapping 30-second windows and input into a deep transformer model alongside the following clinical features: age, sex, epilepsy type, epilepsy duration, seizure frequency prior to EEG, focal lesion on neuroimaging, family history of epilepsy, and history of febrile seizures. A random survival forest (RSF) using clinical features only was used as a baseline. Models were trained to predict seizure hazards over 18 months at logarithmically spaced intervals and evaluated on the testing set using Uno's concordance index.
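
A minimal sketch of a discrete-time hazard output head of the kind described (per-interval seizure hazards, with survival obtained as the cumulative product of 1 - hazard) follows; the embedding source and the number of intervals are placeholders, not the study's exact configuration.

```python
import torch
import torch.nn as nn

class DiscreteTimeHazardHead(nn.Module):
    """Maps an EEG/clinical embedding to per-interval seizure hazards."""

    def __init__(self, embed_dim, n_intervals=18):
        super().__init__()
        self.fc = nn.Linear(embed_dim, n_intervals)

    def forward(self, embedding):
        hazards = torch.sigmoid(self.fc(embedding))       # h_k in (0, 1) per interval
        survival = torch.cumprod(1.0 - hazards, dim=-1)   # P(no seizure by t_k)
        return hazards, survival
```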

RESULTS: We included 504 EEGs from 451 patients for training and 92 EEGs from 83 patients for testing. The deep learning model achieved a concordance index of 0.67, compared to 0.63 for the clinical-only RSF model. Including IEDs as a predictor did not improve the RSF model's performance.

CONCLUSIONS: Deep learning can extract complex information from routine EEG to predict time-to-seizure, outperforming traditional predictors. This suggests a potential role of automated EEG analysis in the follow-up of PWE. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Dr. Lemoine has received research support from Canadian Institute of Health Research. Dr. Lemoine has received research support from Fonds de Recherche du Québec -- Santé. Dr. Lesage has stock in Labeo Technologies Inc.. Dr. Nguyen has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Paladin Pharma. Dr. Nguyen has received personal compensation in the range of $500-$4,999 for serving on a Speakers Bureau for Paladin Pharma. The institution of Prof. Bou Assi has received research support from NSERC. The institution of Prof. Bou Assi has received research support from FRQS. The institution of Prof. Bou Assi has received research support from Brain Canada Foundation . The institution of Prof. Bou Assi has received research support from Epilepsy Canada Foundation. The institution of Prof. Bou Assi has received research support from Savoy Foundation.

PMID:40194014 | DOI:10.1212/WNL.0000000000209122

Categories: Literature Watch

Ensemble deep learning for Alzheimer's disease diagnosis using MRI: Integrating features from VGG16, MobileNet, and InceptionResNetV2 models

Mon, 2025-04-07 06:00

PLoS One. 2025 Apr 7;20(4):e0318620. doi: 10.1371/journal.pone.0318620. eCollection 2025.

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disorder characterized by the accumulation of amyloid plaques and neurofibrillary tangles in the brain, leading to distinctive patterns of neuronal dysfunction and the cognitive decline emblematic of dementia. Currently, over 5 million individuals aged 65 and above are living with AD in the United States, a number projected to rise by 2050. Traditional diagnostic methods are fraught with challenges, including low accuracy and a significant propensity for misdiagnosis. In response to these diagnostic challenges, our study develops and evaluates an innovative deep learning (DL) ensemble model that integrates features from three pre-trained models (VGG16, MobileNet, and InceptionResNetV2) for the precise identification of AD markers from MRI scans. This approach aims to overcome the limitations of individual models in handling varying image shapes and textures, thereby improving diagnostic accuracy. The ultimate goal is to support primary radiologists by streamlining the diagnostic process, facilitating early detection, and enabling timely treatment of AD. Upon rigorous evaluation, our ensemble model demonstrated superior performance over contemporary classifiers, achieving a notable accuracy of 97.93%, along with a specificity of 98.04%, sensitivity of 95.89%, precision of 95.94%, and an F1-score of 87.50%. These results not only underscore the efficacy of the ensemble approach but also highlight the potential for DL to revolutionize AD diagnosis, offering a promising pathway to more accurate, early detection and intervention.
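
A minimal Keras sketch of the feature-level ensemble idea (concatenating globally pooled features from the three named backbones) is shown below; the input size, frozen ImageNet weights, class count, and dense head are assumptions rather than the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, MobileNet, InceptionResNetV2

def build_ensemble(input_shape=(224, 224, 3), n_classes=4):
    """Concatenate globally pooled features from three frozen backbones."""
    inputs = layers.Input(shape=input_shape)
    backbones = [
        VGG16(include_top=False, weights="imagenet", pooling="avg"),
        MobileNet(include_top=False, weights="imagenet", pooling="avg"),
        InceptionResNetV2(include_top=False, weights="imagenet", pooling="avg"),
    ]
    feats = []
    for net in backbones:
        net.trainable = False                  # use as fixed feature extractors
        feats.append(net(inputs))
    x = layers.Concatenate()(feats)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inputs, outputs)
```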

PMID:40193955 | DOI:10.1371/journal.pone.0318620

Categories: Literature Watch

Artificial Intelligence-powered Prediction of Brain Tumor Recurrence After Gamma Knife Radiotherapy: A Neural Network Approach (P3-6.004)

Mon, 2025-04-07 06:00

Neurology. 2025 Apr 8;104(7_Supplement_1):1881. doi: 10.1212/WNL.0000000000208805. Epub 2025 Apr 7.

ABSTRACT

OBJECTIVE: To develop and evaluate a deep learning model for predicting brain tumor recurrence following Gamma Knife radiotherapy using multimodal MRI images, radiation therapy details, and clinical parameters.

BACKGROUND: Brain metastases are common, with over 150,000 new cases annually in the U.S. Gamma Knife radiotherapy is a widely used treatment for brain tumors. However, recurrence is a concern, requiring early detection for timely intervention. Previous studies using AI in brain tumor prognosis have primarily focused on glioblastomas, leaving a gap in research regarding metastatic brain tumors post-Gamma Knife therapy. This study aims to address this by developing predictive models for recurrence risk.

DESIGN/METHODS: The study utilized the Brain Tumor Radiotherapy GammaKnife dataset from The Cancer Imaging Archive (TCIA), including MRI images, lesion annotations, and radiation dose details. Data preprocessing involved normalizing MRI images, extracting lesion-specific radiation doses, and applying data augmentation. A 3D Convolutional Neural Network was designed with multiple convolutional layers and trained using stratified sampling. The model was trained for 50 epochs with a batch size of 16 and optimized using the Adam optimizer.
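
A minimal PyTorch sketch of a 3D CNN of the general kind described (stacked 3D convolution blocks with dropout and a binary recurrence output) follows; the channel counts, depth, and two-channel MRI-plus-dose input are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Recurrence3DCNN(nn.Module):
    """Small 3D CNN for binary recurrence prediction from an MRI/dose volume."""

    def __init__(self, in_channels=2, dropout=0.3):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
        self.features = nn.Sequential(block(in_channels, 16),
                                      block(16, 32),
                                      block(32, 64))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Dropout(dropout), nn.Linear(64, 1))

    def forward(self, volume):                    # volume: (B, C, D, H, W)
        return self.head(self.features(volume))   # recurrence logit
```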

RESULTS: The proof-of-concept model successfully integrated multimodal data and identified stable tumors with an accuracy of 79.5% and a specificity of 84.4%. However, true negative rates were low, indicating difficulty in predicting recurrence. To reduce overfitting, techniques such as augmentation, dropout layers, model checkpoints, and cross-validation have been employed to improve generalization. Further steps include feature extraction from complex radiomic profiles to enhance model robustness and prediction accuracy.

CONCLUSIONS: Our study demonstrates the feasibility of using AI to predict brain tumor recurrence post-Gamma Knife radiotherapy. While initial results are promising, further refinement, including the addition of radiomic features and model tuning, is set to improve recurrence prediction and aid in clinical decision-making. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Mr. Pandya has nothing to disclose. Mr. Patel has nothing to disclose. Miss Anand has nothing to disclose.

PMID:40193918 | DOI:10.1212/WNL.0000000000208805

Categories: Literature Watch

Perivascular Tau in Autopsy Cases with Definite Cerebral Amyloid Angiopathy (P10-3.007)

Mon, 2025-04-07 06:00

Neurology. 2025 Apr 8;104(7_Supplement_1):1900. doi: 10.1212/WNL.0000000000208815. Epub 2025 Apr 7.

ABSTRACT

OBJECTIVE: The main aims of this study were to quantify the burden of tau-pathology in an autopsy cohort of clinical cases with cerebral amyloid angiopathy (CAA) and to characterize the presence of perivascular tau (PVT) accumulation and its relationship with CAA.

BACKGROUND: CAA and Alzheimer's Disease Neuropathological Changes, such as tau-tangles, often coexist. The role of tau pathology in the pathophysiology of CAA remains to be determined.

DESIGN/METHODS: Autopsy cases with a neuropathologically confirmed clinical diagnosis of CAA were evaluated. Samples were taken from cortical areas and underwent immunohistochemistry against amyloid-β (Aβ) and phosphorylated tau (At8). Deep learning-based models were created and applied to the samples to quantify 1) density of intraneuronal tau-tangles; 2) percentage area of cortical CAA and Aβ-plaques using the Aiforia® platform; 3) percentage area of total cortical tau-burden, using QuPath. Linear mixed-effects models were applied to assess the association between tau and CAA burden. The presence of dyshoric CAA (flamelike Aβ deposits that radiate into the perivascular neuropil) and PVT (accentuated accumulation of tau around the vessel) were visually identified on Aβ- and At8-stained sections, respectively. Single-vessel analysis was performed to determine whether there was an association between PVT and CAA using Chi-square tests.

RESULTS: A total of 76 sections in 19 CAA cases (median age-at-death 76 years [64-88]; 7 females) were analyzed. Higher tau-tangle density and total tau-burden were observed in the temporal cortex versus the occipital cortex (p=0.05). CAA burden was not associated with tau-tangle density or total tau-burden. Dyshoric CAA was observed around at least one vessel in 71 (93.4%) of the sections and PVT in 32 (34%) of the sections. In single-vessel analysis, PVT was significantly associated with both dyshoric CAA (p=0.004) and any CAA (p=0.0005).

CONCLUSIONS: Tau was not regionally associated with CAA in this autopsy-cohort. Accumulation of PVT was significantly associated with CAA in the single-vessel analysis. Disclosure: Prof. Farias Da Guarda has nothing to disclose. Ms. vom Eigen has nothing to disclose. Dr. van Harten has nothing to disclose. Ms. Auger has nothing to disclose. Dr. Greenberg has received personal compensation in the range of $5,000-$9,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Bayer. Dr. Greenberg has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Bristol Myers Squib. The institution of Dr. Greenberg has received personal compensation in the range of $10,000-$49,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Alnylam. Dr. Greenberg has received research support from National Institutes of Health. Dr. Greenberg has received publishing royalties from a publication relating to health care. Dr. Viswanathan has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Alnylam Pharmaceuticals. Dr. Viswanathan has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Biogen. Dr. Viswanathan has received personal compensation in the range of $500-$4,999 for serving on a Scientific Advisory or Data Safety Monitoring board for Roche Pharmaceuticals. Dr. van Veluw has received personal compensation in the range of $500-$4,999 for serving as a Consultant for Eisai. The institution of Dr. van Veluw has received research support from NIH. The institution of Dr. van Veluw has received research support from Sanofi. The institution of Dr. van Veluw has received research support from Leducq Foundation. The institution of Dr. van Veluw has received research support from American Heart Association. The institution of Dr. van Veluw has received research support from Frechette Family Foundation. The institution of Dr. van Veluw has received research support from BrightFocus Foundation. The institution of Dr. van Veluw has received research support from Therini Bio. Dr. Perosa has nothing to disclose. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff.

PMID:40193903 | DOI:10.1212/WNL.0000000000208815

Categories: Literature Watch

Classifying Parkinson's Disease Patients from Healthy Controls Using a ResNet18 Convolutional Neural Network Model of T1-Weighted MRI (P9-5.020)

Mon, 2025-04-07 06:00

Neurology. 2025 Apr 8;104(7_Supplement_1):4989. doi: 10.1212/WNL.0000000000212057. Epub 2025 Apr 7.

ABSTRACT

OBJECTIVE: To evaluate the efficacy of 2D and 3D ResNet18-based convolutional neural network models in classifying Parkinson's Disease patients from healthy controls using T1-weighted MRI images.

BACKGROUND: Parkinson's Disease (PD) is a neurodegenerative disorder affecting millions worldwide, with progressive motor and non-motor symptoms. Early diagnosis is critical for optimal management, but current neuroimaging techniques can be complex and time-consuming. Recently, deep learning techniques and advancements have shown potential in automating diagnostic processes. This study aimed to assess the performance of 2D and 3D ResNet18-based convolutional neural network (CNN) models in distinguishing PD patients from healthy controls using T1-weighted magnetic resonance imaging (MRI).

DESIGN/METHODS: We developed two CNN models: a 2D model that utilized mid-sagittal T1-weighted MRI slices, and a 3D model based on volumetric brain data. Preprocessing included data augmentation and transfer learning to enhance model performance. Data were split into 85% for training and 15% for testing, with performance evaluated through accuracy, sensitivity, specificity, and area under the curve (AUC). The models were trained and validated using GPU acceleration for optimized computational efficiency.
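
A minimal transfer-learning setup consistent with the description (a torchvision ResNet18 with pretrained weights and a new two-class output layer) might look like the following; the exact training configuration is not specified in the abstract.

```python
import torch.nn as nn
from torchvision import models

def build_resnet18_classifier(pretrained=True):
    """ResNet18 backbone with a binary PD-vs-control output layer."""
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, 2)   # PD vs. healthy control
    return model
```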

RESULTS: The 3D CNN model achieved an accuracy of 94%, outperforming the 2D CNN model, which had an accuracy of 91%. The 3D model also exhibited superior sensitivity (92% vs. 89%) and AUC (0.94 vs. 0.92). Confusion matrices revealed higher specificity and reduced false positives for the 3D model, highlighting its superior diagnostic performance.

CONCLUSIONS: Our results demonstrate that the 3D ResNet18-based CNN significantly outperforms its 2D counterpart in classifying PD patients from T1-weighted MRI images, achieving higher accuracy, sensitivity, and AUC. The superior performance of the 3D model can be attributed to its ability to capture more complex anatomical features, enhancing its diagnostic capability. Further studies should aim to validate the findings across larger, more diverse populations and explore hybrid models that integrate 2D and 3D approaches. Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff. Disclosure: Dr. Negida has nothing to disclose. Dr. Azzam has nothing to disclose. Dr. Serag has nothing to disclose. Dr. Hassan has nothing to disclose. Rehab Diab has nothing to disclose. Dr. Diab has nothing to disclose. Dr. Hefnawy has nothing to disclose. Mr. Ali has nothing to disclose. Dr. Berman has received personal compensation in the range of $5,000-$9,999 for serving as an officer or member of the Board of Directors for International Parkinson and Movement Disorder Society. The institution of Dr. Berman has received research support from Dystonia Medical Research Foundation. The institution of Dr. Berman has received research support from Administration for Community Living. The institution of Dr. Berman has received research support from The Parkinson Foundation. The institution of Dr. Berman has received research support from National Institutes of Health. Dr. Barrett has received personal compensation in the range of $500-$4,999 for serving as a Consultant for Springer Healthcare LLC. The institution of Dr. Barrett has received research support from Kyowa Kirin. The institution of Dr. Barrett has received research support from NIH.

PMID:40193876 | DOI:10.1212/WNL.0000000000212057

Categories: Literature Watch
