Deep learning

scAMZI: attention-based deep autoencoder with zero-inflated layer for clustering scRNA-seq data

Tue, 2025-04-08 06:00

BMC Genomics. 2025 Apr 7;26(1):350. doi: 10.1186/s12864-025-11511-2.

ABSTRACT

BACKGROUND: Clustering scRNA-seq data plays a vital role in scRNA-seq data analysis and downstream analyses. Many computational methods have been proposed and have achieved remarkable results. However, these methods have several limitations. First, they do not fully exploit cellular features. Second, they are built on gene expression information alone and lack the flexibility to integrate intercellular relationships. Finally, their performance is affected by dropout events.

RESULTS: We propose a novel deep learning (DL) model based on an attention autoencoder and a zero-inflated (ZI) layer, named scAMZI, to cluster scRNA-seq data. scAMZI is composed chiefly of SimAM (a Simple, parameter-free Attention Module), an autoencoder, the ZINB (Zero-Inflated Negative Binomial) model, and a ZI layer. Building on the ZINB model, we introduce the autoencoder and SimAM to reduce the dimensionality of the data and to learn feature representations of cells and the relationships between them, while the ZI layer handles zero values in the data. We compare the performance of scAMZI with nine methods (three shallow learning algorithms and six state-of-the-art DL-based methods) on fourteen benchmark scRNA-seq datasets of various sizes (from hundreds to tens of thousands of cells) with known cell types. Experimental results demonstrate that scAMZI outperforms the competing methods.
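The ZINB likelihood underlying models of this kind can be written down compactly: each observed count is either a "dropout" zero (with probability π) or a draw from a negative binomial with mean μ and dispersion θ. A minimal, generic sketch of the per-entry negative log-likelihood, not the authors' implementation:

```python
import math

def zinb_nll(x, mu, theta, pi, eps=1e-10):
    """Per-entry negative log-likelihood of a zero-inflated negative binomial.

    x: observed count, mu: NB mean, theta: NB dispersion,
    pi: probability that a zero came from the dropout component.
    """
    nb_zero = (theta / (theta + mu)) ** theta  # NB probability of a zero count
    if x == 0:
        # a zero can arise from dropout (pi) or from the NB itself
        return -math.log(pi + (1.0 - pi) * nb_zero + eps)
    log_nb = (math.lgamma(x + theta) - math.lgamma(theta) - math.lgamma(x + 1)
              + theta * math.log(theta / (theta + mu))
              + x * math.log(mu / (theta + mu)))
    return -(math.log(1.0 - pi + eps) + log_nb)
```

A nonzero dropout probability makes observed zeros more likely (lower NLL), which is how the ZI layer absorbs dropout events instead of forcing the NB mean toward zero.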

CONCLUSIONS: scAMZI outperforms competing methods and can facilitate downstream analyses such as cell annotation, marker gene discovery, and cell trajectory inference. The scAMZI package is made freely available at https://doi.org/10.5281/zenodo.13131559.

PMID:40197174 | DOI:10.1186/s12864-025-11511-2

Categories: Literature Watch

Deep Learning Applications in Imaging of Acute Ischemic Stroke: A Systematic Review and Narrative Summary

Tue, 2025-04-08 06:00

Radiology. 2025 Apr;315(1):e240775. doi: 10.1148/radiol.240775.

ABSTRACT

Background Acute ischemic stroke (AIS) is a major cause of morbidity and mortality, requiring swift and precise clinical decisions based on neuroimaging. Recent advances in deep learning-based computer vision and language artificial intelligence (AI) models have demonstrated transformative performance for several stroke-related applications. Purpose To evaluate deep learning applications for imaging in AIS in adult patients, providing a comprehensive overview of the current state of the technology and identifying opportunities for advancement. Materials and Methods A systematic literature review was conducted following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A comprehensive search of four databases from January 2016 to January 2024 was performed, targeting deep learning applications for imaging of AIS, including automated detection of large vessel occlusion and measurement of Alberta Stroke Program Early CT Score. Articles were selected based on predefined inclusion and exclusion criteria, focusing on convolutional neural networks and transformers. The top-represented areas were addressed, and the relevant information was extracted and summarized. Results Of 380 studies included, 171 (45.0%) focused on stroke lesion segmentation, 129 (33.9%) on classification and triage, 31 (8.2%) on outcome prediction, 15 (3.9%) on generative AI and large language models, and 11 (2.9%) on rapid or low-dose imaging specific to stroke applications. Detailed data extraction was performed for 68 studies. Public AIS datasets are also highlighted for researchers developing AI models for stroke imaging. Conclusion Deep learning applications have permeated AIS imaging, particularly for stroke lesion segmentation. However, challenges remain, including the need for standardized protocols and test sets, larger public datasets, and performance validation in real-world settings. © RSNA, 2025 Supplemental material is available for this article.

PMID:40197098 | DOI:10.1148/radiol.240775

Categories: Literature Watch

Unified Deep Learning of Molecular and Protein Language Representations with T5ProtChem

Tue, 2025-04-08 06:00

J Chem Inf Model. 2025 Apr 8. doi: 10.1021/acs.jcim.5c00051. Online ahead of print.

ABSTRACT

Deep learning has revolutionized difficult tasks in chemistry and biology, yet existing language models often treat these domains separately, relying on concatenated architectures and independently pretrained weights. These approaches fail to fully exploit the shared atomic foundations of molecular and protein sequences. Here, we introduce T5ProtChem, a unified model based on the T5 architecture, designed to process molecular and protein sequences simultaneously. Using a new pretraining objective, ProtiSMILES, T5ProtChem bridges the molecular and protein domains, enabling efficient, generalizable protein-chemical modeling. The model achieves state-of-the-art performance in tasks such as binding affinity prediction and reaction prediction, while also performing strongly in protein function prediction. Additionally, it supports novel applications, including covalent binder classification and sequence-level adduct prediction. These results demonstrate the versatility of unified language models for drug discovery, protein engineering, and other interdisciplinary efforts in computational biology and chemistry.

PMID:40197028 | DOI:10.1021/acs.jcim.5c00051

Categories: Literature Watch

Deciphering the Scattering of Mechanically Driven Polymers Using Deep Learning

Tue, 2025-04-08 06:00

J Chem Theory Comput. 2025 Apr 8. doi: 10.1021/acs.jctc.5c00409. Online ahead of print.

ABSTRACT

We present a deep learning approach for analyzing two-dimensional scattering data of semiflexible polymers under external forces. In our framework, scattering functions are compressed into a three-dimensional latent space using a Variational Autoencoder (VAE), and two converter networks establish a bidirectional mapping between the polymer parameters (bending modulus, stretching force, and steady shear) and the scattering functions. The training data are generated using off-lattice Monte Carlo simulations to avoid the orientational bias inherent in lattice models, ensuring robust sampling of polymer conformations. The feasibility of this bidirectional mapping is demonstrated by the organized distribution of polymer parameters in the latent space. By integrating the converter networks with the VAE, we obtain a generator that produces scattering functions from given polymer parameters and an inferrer that directly extracts polymer parameters from scattering data. While the generator can be utilized in a traditional least-squares fitting procedure, the inferrer produces comparable results in a single pass and operates 3 orders of magnitude faster. This approach offers a scalable automated tool for polymer scattering analysis and provides a promising foundation for extending the method to other scattering models, experimental validation, and the study of time-dependent scattering data.

PMID:40197011 | DOI:10.1021/acs.jctc.5c00409

Categories: Literature Watch

HTRecNet: a deep learning study for efficient and accurate diagnosis of hepatocellular carcinoma and cholangiocarcinoma

Tue, 2025-04-08 06:00

Front Cell Dev Biol. 2025 Mar 24;13:1549811. doi: 10.3389/fcell.2025.1549811. eCollection 2025.

ABSTRACT

BACKGROUND: Hepatocellular carcinoma (HCC) and cholangiocarcinoma (CCA) represent the primary liver cancer types. Traditional diagnostic techniques, reliant on radiologist interpretation, are both time-intensive and often inadequate for detecting the less prevalent CCA. There is a pressing need to explore automated diagnostic methods using deep learning to address these challenges.

METHODS: This study introduces HTRecNet, a novel deep learning framework for enhanced diagnostic precision and efficiency. The model incorporates sophisticated data augmentation strategies to optimize feature extraction, ensuring robust performance even with constrained sample sizes. A comprehensive dataset of 5,432 histopathological images was divided into 5,096 for training and validation, and 336 for external testing. Evaluation was conducted using five-fold cross-validation and external validation, applying metrics such as accuracy, area under the receiver operating characteristic curve (AUC), and Matthews correlation coefficient (MCC) against established clinical benchmarks.
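For reference, the Matthews correlation coefficient used in the evaluation can be computed directly from a confusion matrix. A minimal binary sketch is shown below (the study itself is three-class, where a multiclass generalization of MCC applies); this is a generic illustration, not the paper's code:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when a marginal is empty
```

Unlike plain accuracy, MCC stays informative under class imbalance, which matters here since CCA images are far rarer than HCC or normal tissue.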

RESULTS: The training and validation cohorts comprised 1,536 images of normal liver tissue, 3,380 of HCC, and 180 of CCA. HTRecNet showed exceptional efficacy, consistently achieving AUC values over 0.99 across all categories. In external testing, the model reached an accuracy of 0.97 and an MCC of 0.95, affirming its reliability in distinguishing between normal, HCC, and CCA tissues.

CONCLUSION: HTRecNet markedly enhances the capability for early and accurate differentiation of HCC and CCA from normal liver tissues. Its high diagnostic accuracy and efficiency position it as an invaluable tool in clinical settings, potentially transforming liver cancer diagnostic protocols. This system offers substantial support for refining diagnostic workflows in healthcare environments focused on liver malignancies.

PMID:40196844 | PMC:PMC11973358 | DOI:10.3389/fcell.2025.1549811

Categories: Literature Watch

Transfer learning improves performance in volumetric electron microscopy organelle segmentation across tissues

Tue, 2025-04-08 06:00

Bioinform Adv. 2025 Apr 2;5(1):vbaf021. doi: 10.1093/bioadv/vbaf021. eCollection 2025.

ABSTRACT

MOTIVATION: Volumetric electron microscopy (VEM) enables nanoscale resolution three-dimensional imaging of biological samples. Identification and labeling of organelles, cells, and other structures in the image volume is required for image interpretation, but manual labeling is extremely time-consuming. This can be automated using deep learning segmentation algorithms, but these traditionally require substantial manual annotation for training and typically these labeled datasets are unavailable for new samples.

RESULTS: We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles at high performance, yet require a relatively small amount of new training data. We benchmark our method on three published VEM datasets and a new rat liver dataset we imaged over a 56×56×11 μm volume measuring 7000×7000×219 px using serial block-face scanning electron microscopy, with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.

AVAILABILITY AND IMPLEMENTATION: Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.

PMID:40196751 | PMC:PMC11974384 | DOI:10.1093/bioadv/vbaf021

Categories: Literature Watch

Generative frame interpolation enhances tracking of biological objects in time-lapse microscopy

Tue, 2025-04-08 06:00

bioRxiv [Preprint]. 2025 Mar 26:2025.03.23.644838. doi: 10.1101/2025.03.23.644838.

ABSTRACT

Object tracking in microscopy videos is crucial for understanding biological processes. While existing methods often require fine-tuning tracking algorithms to fit the image dataset, here we explored an alternative paradigm: augmenting the image time-lapse dataset to fit the tracking algorithm. To test this approach, we evaluated whether generative video frame interpolation can augment the temporal resolution of time-lapse microscopy and facilitate object tracking in multiple biological contexts. We systematically compared the capacity of the Latent Diffusion Model for Video Frame Interpolation (LDMVFI), Real-time Intermediate Flow Estimation (RIFE), Compression-Driven Frame Interpolation (CDFI), and Frame Interpolation for Large Motion (FILM) to generate synthetic microscopy images derived from interpolating real images. Our test image time series ranged from fluorescently labeled nuclei to bacteria, yeast, cancer cells, and organoids. We showed that the off-the-shelf frame interpolation algorithms produced bio-realistic image interpolation even without dataset-specific retraining, as judged by high structural image similarity and the capacity to produce segmentations that closely resemble results from real images. Using a simple tracking algorithm based on mask overlap, we confirmed that frame interpolation significantly improved tracking across several datasets without requiring extensive parameter tuning, capturing complex trajectories that were difficult to resolve in the original image time series. Taken together, our findings highlight the potential of generative frame interpolation to improve tracking in time-lapse microscopy across diverse scenarios, suggesting that a generalist tracking algorithm for microscopy could be developed by combining deep learning segmentation models with generative frame interpolation.
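A "simple tracking algorithm based on mask overlap" can be illustrated with a greedy IoU matcher; the sketch below is a hypothetical minimal version (masks represented as sets of pixel coordinates), not the authors' code. The intuition behind the paper's result: interpolated frames reduce apparent motion between consecutive frames, so masks of the same object overlap more and the matching step fails less often.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks given as sets of (row, col)."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 0.0

def link_frames(prev_masks, next_masks, thr=0.3):
    """Greedily link each object in the previous frame to the unclaimed
    next-frame mask with the highest IoU above a threshold."""
    links, taken = {}, set()
    for i, a in prev_masks.items():
        best_j, best_iou = None, thr
        for j, b in next_masks.items():
            if j in taken:
                continue
            score = iou(a, b)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j is not None:
            links[i] = best_j
            taken.add(best_j)
    return links
```

Running it on two toy frames where each object drifts by one pixel links the correct identities; with larger drift (no interpolated frames in between), the IoU can drop below the threshold and tracks fragment.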

PMID:40196554 | PMC:PMC11974701 | DOI:10.1101/2025.03.23.644838

Categories: Literature Watch

ConfuseNN: Interpreting convolutional neural network inferences in population genomics with data shuffling

Tue, 2025-04-08 06:00

bioRxiv [Preprint]. 2025 Mar 27:2025.03.24.644668. doi: 10.1101/2025.03.24.644668.

ABSTRACT

Convolutional neural networks (CNNs) have become powerful tools for population genomic inference, yet understanding which genomic features drive their performance remains challenging. We introduce ConfuseNN, a method that systematically shuffles input haplotype matrices to disrupt specific population genetic features and evaluate their contribution to CNN performance. By sequentially removing signals from linkage disequilibrium, allele frequency, and other population genetic patterns in test data, we evaluate how each feature contributes to CNN performance. We applied ConfuseNN to three published CNNs for demographic history and selection inference, confirming the importance of specific data features and identifying limitations of network architecture and of simulated training and testing data design. ConfuseNN provides an accessible biologically motivated framework for interpreting CNN behavior across different tasks in population genetics, helping bridge the gap between powerful deep learning approaches and traditional population genetic theory.
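The core operation, shuffling the input to destroy one population genetic signal at a time, can be sketched as follows. This is a generic illustration under the assumption that haplotypes are rows and variant sites are columns, not ConfuseNN's exact shuffling scheme: permuting each site independently across haplotypes destroys linkage disequilibrium between sites while leaving every site's allele frequency untouched.

```python
import random

def shuffle_sites(haplotypes, seed=0):
    """Permute each site (column) independently across haplotypes (rows).

    Destroys linkage disequilibrium between sites while preserving each
    site's allele frequency, so a drop in CNN accuracy on the shuffled
    data isolates how much the network relied on LD."""
    rng = random.Random(seed)
    n_hap, n_site = len(haplotypes), len(haplotypes[0])
    shuffled = [row[:] for row in haplotypes]
    for j in range(n_site):
        col = [haplotypes[i][j] for i in range(n_hap)]
        rng.shuffle(col)
        for i in range(n_hap):
            shuffled[i][j] = col[i]
    return shuffled
```

Comparing a trained CNN's accuracy on original versus shuffled test matrices then attributes performance to the destroyed feature, without retraining the network.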

PMID:40196528 | PMC:PMC11974698 | DOI:10.1101/2025.03.24.644668

Categories: Literature Watch

Point-SPV: end-to-end enhancement of object recognition in simulated prosthetic vision using synthetic viewing points

Tue, 2025-04-08 06:00

Front Hum Neurosci. 2025 Mar 24;19:1549698. doi: 10.3389/fnhum.2025.1549698. eCollection 2025.

ABSTRACT

Prosthetic vision systems aim to restore functional sight for visually impaired individuals by inducing phosphenes through electrical stimulation of the visual cortex, yet challenges remain in visual representation strategies, such as incorporating gaze information and task-dependent optimization. In this paper, we introduce Point-SPV, an end-to-end deep learning model designed to enhance object recognition in simulated prosthetic vision. Point-SPV takes an initial step toward gaze-based optimization by simulating viewing points, representing potential gaze locations, and training the model on patches surrounding these points. Our approach prioritizes task-oriented representation, aligning visual outputs with object recognition needs. A behavioral gaze-contingent object discrimination experiment demonstrated that Point-SPV outperformed a conventional edge detection method, enabling observers to achieve higher recognition accuracy, faster reaction times, and more efficient visual exploration. Our work highlights how task-specific optimization may enhance representations in prosthetic vision, offering a foundation for future exploration and application.

PMID:40196449 | PMC:PMC11973266 | DOI:10.3389/fnhum.2025.1549698

Categories: Literature Watch

The role of trustworthy and reliable AI for multiple sclerosis

Tue, 2025-04-08 06:00

Front Digit Health. 2025 Mar 24;7:1507159. doi: 10.3389/fdgth.2025.1507159. eCollection 2025.

ABSTRACT

This paper investigates the importance of Trustworthy Machine Learning (ML) in the context of Multiple Sclerosis (MS) research and care. Due to the complex and individual nature of MS, reliable and trustworthy ML models are essential. Key aspects of trustworthy ML, such as out-of-distribution generalization, explainability, uncertainty quantification, and calibration, are explored, highlighting their significance for healthcare applications. Challenges in integrating these ML tools into clinical workflows are addressed, including the difficulties in interpreting AI outputs, data diversity, and the need for comprehensive, high-quality data. The paper calls for collaborative efforts among researchers, clinicians, and policymakers to develop ML solutions that are technically sound, clinically relevant, and patient-centric.
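One of the calibration notions discussed can be made concrete with the expected calibration error (ECE): bin predictions by confidence and average the gap between confidence and observed accuracy. A minimal sketch, not tied to any particular MS model:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: occupancy-weighted average gap between
    mean predicted confidence and observed accuracy within each bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    ece, n = 0.0, len(confidences)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += len(b) / n * abs(avg_conf - accuracy)
    return ece
```

A well-calibrated model scores near zero; an overconfident one (say, 95% confidence but 50% accuracy) scores the size of that gap, which is exactly the failure mode that matters when clinicians act on model outputs.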

PMID:40196398 | PMC:PMC11973328 | DOI:10.3389/fdgth.2025.1507159

Categories: Literature Watch

A study on early diagnosis for fracture non-union prediction using deep learning and bone morphometric parameters

Tue, 2025-04-08 06:00

Front Med (Lausanne). 2025 Mar 24;12:1547588. doi: 10.3389/fmed.2025.1547588. eCollection 2025.

ABSTRACT

BACKGROUND: Early diagnosis of non-union fractures is vital for treatment planning, yet studies using bone morphometric parameters for this purpose are scarce. This study aims to create a fracture micro-CT image dataset, design a deep learning algorithm for fracture segmentation, and develop an early diagnosis model for fracture non-union.

METHODS: Using fracture animal models, micro-CT images from 12 rats at various healing stages (days 1, 7, 14, 21, 28, and 35) were analyzed. Fracture lesion frames were annotated to create a high-resolution dataset. We proposed the Vision Mamba Triplet Attention and Edge Feature Decoupling Module UNet (VM-TE-UNet) for fracture area segmentation, and we then extracted bone morphometric parameters to establish an early diagnostic evaluation system for fracture non-union.

RESULTS: A dataset comprising 2,448 micro-CT images of rat fracture lesions with fracture Region of Interest (ROI), bone callus, and healing characteristics was established and used to train and test the proposed VM-TE-UNet, which achieved a Dice Similarity Coefficient of 0.809, an improvement over the baseline's 0.765, and reduced the 95th-percentile Hausdorff Distance to 13.1. Through ablation studies, comparative experiments, and result analysis, the algorithm's effectiveness and superiority were validated. Significant differences (p < 0.05) were observed between the fracture and fracture non-union groups during the inflammatory and repair phases. Key indices, such as the average CT values of hematoma and cartilage tissues, BS/TS and BS/TV of mineralized cartilage, BS/TV of osteogenic tissue, and BV/TV of osteogenic tissue, align with clinical methods for diagnosing fracture non-union by assessing callus presence and local soft tissue swelling. On day 14, the early diagnosis model achieved an AUC of 0.995, demonstrating its ability to diagnose fracture non-union during the soft-callus phase.
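The Dice Similarity Coefficient reported above measures overlap between a predicted and a ground-truth segmentation mask. A minimal sketch with masks as sets of pixel coordinates, for illustration only (segmentation toolkits compute this over dense arrays):

```python
def dice(pred, target):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = len(pred & target)
    total = len(pred) + len(target)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree perfectly
```

Dice equals 1.0 for identical masks and 0.0 for disjoint ones, so the jump from 0.765 to 0.809 reflects a substantial gain in voxel-level overlap with the annotated fracture regions.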

CONCLUSION: This study proposed the VM-TE-UNet for fracture area segmentation, extracted micro-CT indices, and established an early diagnostic model for fracture non-union. We believe the prediction model can effectively screen out samples of poor fracture rehabilitation caused by blood supply limitations in rats 14 days after fracture, rather than at the widely accepted 35 or 40 days. This provides an important reference for the clinical prediction of fracture non-union and early intervention treatment.

PMID:40196347 | PMC:PMC11973290 | DOI:10.3389/fmed.2025.1547588

Categories: Literature Watch

Transformer-based artificial intelligence on single-cell clinical data for homeostatic mechanism inference and rational biomarker discovery

Tue, 2025-04-08 06:00

medRxiv [Preprint]. 2025 Mar 25:2025.03.24.25324556. doi: 10.1101/2025.03.24.25324556.

ABSTRACT

Artificial intelligence (AI) applied to single-cell data has the potential to transform our understanding of biological systems by revealing patterns and mechanisms that simpler traditional methods miss. Here, we develop a general-purpose, interpretable AI pipeline consisting of two deep learning models: the Multi-Input Set Transformer++ (MIST) model for prediction and the single-cell FastShap model for interpretability. We apply this pipeline to a large set of routine clinical data containing single-cell measurements of circulating red blood cells (RBC), white blood cells (WBC), and platelets (PLT) to study population fluxes and homeostatic hematological mechanisms. We find that MIST can use these single-cell measurements to explain 70-82% of the variation in blood cell population sizes among patients (RBC count, PLT count, WBC count), compared to 5-20% explained with current approaches. MIST's accuracy implies that substantial information on cellular production and clearance is present in the single-cell measurements. MIST identified substantial crosstalk among RBC, WBC, and PLT populations, suggesting co-regulatory relationships that we validated and investigated using interpretability maps generated by single-cell FastShap. The maps identify granular single-cell subgroups most important for each population's size, enabling generation of evidence-based hypotheses for co-regulatory mechanisms. The interpretability maps also enable rational discovery of a single-WBC biomarker, "Down Shift", that complements an existing marker of inflammation and strengthens diagnostic associations with diseases including sepsis, heart disease, and diabetes. This study illustrates how single-cell data can be leveraged for mechanistic inference with potential clinical relevance and how this AI pipeline can be applied to power scientific discovery.

PMID:40196278 | PMC:PMC11974774 | DOI:10.1101/2025.03.24.25324556

Categories: Literature Watch

Artificial Intelligence Prediction of Age from Echocardiography as a Marker for Cardiovascular Disease

Tue, 2025-04-08 06:00

medRxiv [Preprint]. 2025 Mar 26:2025.03.25.25324627. doi: 10.1101/2025.03.25.25324627.

ABSTRACT

Accurate understanding of biological aging and the impact of environmental stressors is crucial for understanding cardiovascular health and identifying patients at risk for adverse outcomes. Chronological age stands as perhaps the most universal risk predictor across virtually all populations and diseases. While chronological age is readily discernible, efforts to distinguish biologically older from younger individuals can, in turn, identify individuals with accelerated or delayed cardiovascular aging. This study presents a deep learning artificial intelligence (AI) approach to predict age from echocardiogram videos, leveraging 2,610,266 videos from 166,508 studies of 90,738 unique patients, and uses the trained models to identify features of accelerated and delayed aging. Leveraging multi-view echocardiography, our AI age prediction model achieved a mean absolute error (MAE) of 6.76 (6.65-6.87) years and a coefficient of determination (R²) of 0.732 (0.72-0.74). Stratification by age prediction revealed associations with increased risk of coronary artery disease, heart failure, and stroke. The age prediction can also identify heart transplant recipients, as a discontinuity in predicted age is seen before and after a heart transplant. Guided backpropagation visualizations highlighted the model's focus on the mitral valve, mitral apparatus, and basal inferior wall as crucial for the assessment of age. These findings underscore the potential of computer vision-based assessment of echocardiography in enhancing cardiovascular risk assessment and understanding biological aging in the heart.
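The two headline metrics, MAE and the coefficient of determination, are standard regression measures; a minimal sketch for reference (not the study's code):

```python
def mae(y_true, y_pred):
    """Mean absolute error, e.g. between true and predicted ages in years."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An R² of 0.732 means the model explains about 73% of the variance in chronological age; the residual, the gap between predicted and actual age, is what the study interprets as accelerated or delayed cardiac aging.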

PMID:40196275 | PMC:PMC11974980 | DOI:10.1101/2025.03.25.25324627

Categories: Literature Watch

Vision Transformer Autoencoders for Unsupervised Representation Learning: Capturing Local and Non-Local Features in Brain Imaging to Reveal Genetic Associations

Tue, 2025-04-08 06:00

medRxiv [Preprint]. 2025 Mar 25:2025.03.24.25324549. doi: 10.1101/2025.03.24.25324549.

ABSTRACT

The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and improve personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract endophenotypes from brain imaging using a convolutional neural network (CNN) autoencoder, and conducted brain imaging GWAS on the UK Biobank (UKBB). In this work, we leverage a vision transformer (ViT) model due to its different inductive bias and its ability to potentially capture unique patterns through its pairwise attention mechanism. Our approach, based on 128 endophenotypes derived from average pooling, discovered 10 loci previously unreported by the CNN-based UDIP model, 3 of which had no prior associations with brain structure in the GWAS catalog. Our interpretation results demonstrate the ViT's capability to capture non-local patterns, such as left-right hemisphere symmetry within brain MRI data, by leveraging its attention mechanism and positional embeddings. Our results highlight the advantages of transformer-based architectures in feature extraction and representation for genetic discovery.

PMID:40196251 | PMC:PMC11974795 | DOI:10.1101/2025.03.24.25324549

Categories: Literature Watch

Deep learning-based generation of DSC MRI parameter maps using DCE MRI data

Mon, 2025-04-07 06:00

AJNR Am J Neuroradiol. 2025 Apr 7:ajnr.A8768. doi: 10.3174/ajnr.A8768. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Perfusion and perfusion-related parameter maps obtained using dynamic susceptibility contrast (DSC) MRI and dynamic contrast-enhanced (DCE) MRI are both useful for clinical diagnosis and research. However, using both DSC and DCE MRI in the same scan session requires two doses of gadolinium contrast agent. The objective was to develop deep learning-based methods to synthesize DSC-derived parameter maps from DCE MRI data.

MATERIALS AND METHODS: Independent analysis of data collected in previous studies was performed. The database contained sixty-four participants, including patients with and without brain tumors. The reference parameter maps were measured from DSC MRI performed following DCE MRI. A conditional generative adversarial network (cGAN) was designed and trained to generate synthetic DSC-derived maps from DCE MRI data. The median parameter values and distributions between synthetic and real maps were compared using linear regression and Bland-Altman plots.
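The Bland-Altman comparison used above summarizes paired measurements (here, synthetic versus real parameter values) by their mean difference and 95% limits of agreement. A generic sketch of the underlying statistics, not the study's analysis code:

```python
import math

def bland_altman_limits(a, b):
    """Bland-Altman agreement statistics for paired measurements:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias ± 1.96 × SD of the pairwise differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A bias near zero with narrow limits indicates the synthetic maps neither systematically over- nor under-estimate the real DSC values, which is the agreement criterion a Bland-Altman plot visualizes.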

RESULTS: Using cGAN, realistic DSC parameter maps could be synthesized from DCE MRI data. For controls without brain tumors, the synthesized parameters had distributions similar to the ground truth values. For patients with brain tumors, the synthesized parameters in the tumor region correlated linearly with the ground truth values. In addition, areas not visible due to susceptibility artifacts in real DSC maps could be visualized using DCE-derived DSC maps.

CONCLUSIONS: DSC-derived parameter maps could be synthesized using DCE MRI data, including susceptibility-artifact-prone regions. This shows the potential to obtain both DSC and DCE parameter maps from DCE MRI using a single dose of contrast agent.

ABBREVIATIONS: cGAN=conditional generative adversarial network; Ktrans=volume transfer constant; rCBV=relative cerebral blood volume; rCBF=relative cerebral blood flow; Ve=extravascular extracellular volume; Vp=plasma volume.

PMID:40194853 | DOI:10.3174/ajnr.A8768

Categories: Literature Watch

Severity Classification of Pediatric Spinal Cord Injuries Using Structural MRI Measures and Deep Learning: A Comprehensive Analysis Across All Vertebral Levels

Mon, 2025-04-07 06:00

AJNR Am J Neuroradiol. 2025 Apr 7:ajnr.A8770. doi: 10.3174/ajnr.A8770. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Spinal cord injury (SCI) in the pediatric population presents a unique challenge in diagnosis and prognosis due to the complexity of performing clinical assessments on children. Accurate evaluation of structural changes in the spinal cord is essential for effective treatment planning. This study evaluates structural characteristics in pediatric patients with SCI by comparing cross-sectional area (CSA), anterior-posterior (AP) width, and right-left (RL) width across all vertebral levels of the spinal cord between typically developing (TD) participants and participants with SCI. We employed deep learning techniques to use these measures for detecting SCI cases and determining injury severity.

MATERIALS AND METHODS: Sixty-one pediatric participants (ages 6-18), including 20 with chronic SCI and 41 TD, were enrolled and scanned using a 3T MRI scanner. All SCI participants underwent the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) test to assess their neurological function and determine their American Spinal Injury Association (ASIA) Impairment Scale (AIS) category. T2-weighted MRI scans were utilized to measure CSA, AP width, and RL widths along the entire cervical and thoracic cord. These measures were automatically extracted at every vertebral level of the spinal cord using the SCT toolbox. Deep convolutional neural networks (CNNs) were utilized to classify participants into SCI or TD groups and determine their AIS classification based on structural parameters and demographic factors such as age and height.

RESULTS: Significant differences (p<0.05) were found in CSA, AP width, and RL width between SCI and TD participants, indicating notable structural alterations due to SCI. The CNN-based models demonstrated high performance, achieving 96.59% accuracy in distinguishing SCI from TD participants. Furthermore, the models determined AIS category classification with 94.92% accuracy.

CONCLUSIONS: The study demonstrates the effectiveness of integrating cross-sectional structural imaging measures with deep learning methods for classification and severity assessment of pediatric SCI. The deep learning approach outperforms traditional machine learning models in diagnostic accuracy, offering potential improvements in patient care in pediatric SCI management.

ABBREVIATIONS: SCI = Spinal Cord Injury, TD = Typically Developing, CSA = Cross-Sectional Area, AP = Anterior-Posterior, RL = Right-Left, ASIA = American Spinal Injury Association, AIS = ASIA Impairment Scale, CNN = Convolutional Neural Network.

PMID:40194851 | DOI:10.3174/ajnr.A8770

Categories: Literature Watch

ENsiRNA: A multimodality method for siRNA-mRNA and modified siRNA efficacy prediction based on geometric graph neural network

Mon, 2025-04-07 06:00

J Mol Biol. 2025 Apr 5:169131. doi: 10.1016/j.jmb.2025.169131. Online ahead of print.

ABSTRACT

With the rise of small interfering RNA (siRNA) as a therapeutic tool, effective siRNA design is crucial. Current methods often emphasize sequence-related features, overlooking structural information. To address this, we introduce ENsiRNA, a multimodal approach utilizing a geometric graph neural network to predict the efficacy of both standard and modified siRNA. ENsiRNA integrates sequence features from a pretrained RNA language model, structural characteristics, and thermodynamic data or chemical modifications to enhance prediction accuracy. Our results indicate that ENsiRNA outperforms existing methods, achieving over a 13% improvement in Pearson Correlation Coefficient (PCC) for standard siRNA across various tests. For modified siRNA, despite challenges associated with RNA folding methods, ENsiRNA still demonstrates competitive performance across different datasets. This novel method highlights the significance of structural information and multimodal strategies in siRNA prediction, advancing the field of therapeutic design.

PMID:40194620 | DOI:10.1016/j.jmb.2025.169131

Categories: Literature Watch

Enhanced inhibitor-kinase affinity prediction via integrated multimodal analysis of drug molecule and protein sequence features

Mon, 2025-04-07 06:00

Int J Biol Macromol. 2025 Apr 5:142871. doi: 10.1016/j.ijbiomac.2025.142871. Online ahead of print.

ABSTRACT

The accurate prediction of inhibitor-kinase binding affinity is pivotal for advancing drug development and precision medicine. In this study, we developed predictive models for human kinases, including cyclin-dependent kinases (CDKs), mitogen-activated protein kinases (MAP kinases), glycogen synthase kinases (GSKs), CDK-like kinases (CMGC kinase group) and receptor tyrosine kinases (RTKs)-key regulators of cellular signaling and disease progression. These kinases serve as primary drug targets in cancer and other critical diseases. To enhance affinity prediction precision, we introduce an innovative multimodal fusion model, KinNet. The model integrates the GraphKAN network, which effectively captures both local and global structural features of drug molecules. Furthermore, it leverages kernel functions and learnable activation functions to dynamically optimize node and edge feature representations. Additionally, the model incorporates the Conv-Enhanced Mamba module, combining Conv1D's ability to capture local features with Mamba's strength in processing long sequences, facilitating comprehensive feature extraction from protein sequences and molecular fingerprints. Experimental results confirm that the KinNet model achieves superior prediction accuracy compared to existing approaches, underscoring its potential to elucidate inhibitor-kinase binding mechanisms. This model serves as a robust computational framework to support drug discovery and the development of kinase-targeted therapies.

PMID:40194581 | DOI:10.1016/j.ijbiomac.2025.142871

Categories: Literature Watch

Sensitivity of a deep-learning-based breast cancer risk prediction model

Mon, 2025-04-07 06:00

Phys Med Biol. 2025 Apr 7. doi: 10.1088/1361-6560/adc9f8. Online ahead of print.

ABSTRACT

When it comes to the implementation of Deep-Learning (DL) based Breast Cancer Risk (BCR) prediction models in clinical settings, it is important to be aware that these models can be sensitive to various factors, especially those arising from the acquisition process. In this work, we investigated how sensitive a state-of-the-art BCR prediction model is to realistic image alterations that can occur as a result of different positioning during acquisition.

Approach: 5076 mammograms (1269 exams, 650 participants) from the Slovenian and Belgian (University Hospital Leuven) Breast Cancer Screening Programs were collected. The original MIRAI model was used for 1-5-year BCR estimation. First, BCR was predicted for the original, unaltered mammograms. Then, a series of image alteration techniques was applied: swapping left and right breasts, removing tissue below the inframammary fold, translation, cropping, rotation, registration, and pectoral muscle removal. In addition, a subset of 81 exams in which at least one mammogram had to be retaken due to inadequate image quality served as an approximation of a test-retest experiment. Bland-Altman plots were used to determine prediction bias and 95% limits of agreement (LOA). Additionally, the mean absolute difference in BCR (Mean AD) was calculated. The impact on overall discrimination performance was evaluated with the AUC.

Results: Swapping left and right breasts had no impact on the predicted BCR. Removing skin tissue below the inframammary fold had minimal impact on the predicted BCR (1-5-year LOA: [-0.02, 0.01]). The model was sensitive to translation, rotation, registration, and cropping, where LOAs of up to ±0.1 were observed. Partial pectoral muscle removal did not have a major impact on predicted BCR, while complete removal of the pectoral muscle introduced substantial prediction bias and LOAs (1-year LOA: [-0.07, 0.04]; 5-year LOA: [-0.06, 0.03]). The approximation of a real test-retest experiment resulted in LOAs similar to those of the simulated image alterations. None of the alterations affected the overall BCR discrimination performance: the initial 1-year AUC (0.90 [0.88, 0.92]) and 5-year AUC (0.77 [0.75, 0.80]) remained unchanged.

Significance: While the tested image alterations do not affect overall BCR discrimination performance, substantial changes in predicted 1-5-year BCR can occur on an individual basis.
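The Bland-Altman quantities used throughout this abstract (bias, 95% LOA, Mean AD) follow a standard recipe; a minimal sketch of that computation (the function name is assumed here, not taken from the paper):

```python
import numpy as np

def bland_altman(original, altered):
    """Bland-Altman agreement between BCR predictions on original
    vs. altered mammograms: bias, 95% limits of agreement (LOA),
    and mean absolute difference (Mean AD)."""
    original = np.asarray(original, dtype=float)
    altered = np.asarray(altered, dtype=float)
    diff = altered - original
    bias = diff.mean()                    # systematic shift in predicted risk
    sd = diff.std(ddof=1)                 # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    mean_ad = np.abs(diff).mean()         # Mean AD reported in the paper
    return bias, loa, mean_ad
```

An LOA of ±0.1, as observed for translation and rotation, means an individual's predicted 1-5-year risk can shift by up to roughly 10 percentage points even though population-level AUC is unchanged.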

PMID:40194545 | DOI:10.1088/1361-6560/adc9f8

Categories: Literature Watch

A deep learning approach for quantifying CT perfusion parameters in stroke

Mon, 2025-04-07 06:00

Biomed Phys Eng Express. 2025 Apr 7. doi: 10.1088/2057-1976/adc9b6. Online ahead of print.

ABSTRACT

Computed tomography perfusion (CTP) imaging is widely used for assessing acute ischemic stroke. However, conventional methods for quantifying CTP images, such as singular value decomposition (SVD), often lead to oscillations in the estimated residue function and underestimation of tissue perfusion. In addition, the use of a global arterial input function (AIF) can lead to erroneous parameter estimates. We aim to develop a method for accurately estimating physiological parameters from CTP images.

Approach: We introduced a Transformer-based network to learn voxel-wise temporal features of CTP images. With the global AIF and the concentration time curve (CTC) of brain tissue as inputs, the network estimated the local AIF and the flow-scaled residue function. The derived parameters, including cerebral blood flow (CBF) and bolus arrival delay (BAD), were validated on both simulated and patient data (ISLES18 dataset) and compared with multiple SVD-based methods, including standard SVD (sSVD), block-circulant SVD (cSVD), and oscillation-index SVD (oSVD).

Main results: On data simulating multiple scenarios, the local AIF estimated by the proposed method correlated with the true AIF with a coefficient of 0.97±0.04 (P<0.001); CBF was estimated with a mean error of 4.95 ml/100 g/min and BAD with a mean error of 0.51 s, both errors significantly lower than those of the SVD-based methods (P<0.001). The SVD-based methods underestimated CBF by 10%-15%. For patient data, the CBF estimates of the proposed method were significantly higher than those of the sSVD method in both normally perfused and ischemic tissues, by 13.83 ml/100 g/min (39.33%) and 8.55 ml/100 g/min (57.73%) (P<0.001), respectively, in agreement with the simulation results.

Significance: The proposed method can estimate the local AIF and perfusion parameters from CTP images with high accuracy, potentially improving CTP's performance and efficiency in diagnosing and treating ischemic stroke.
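The sSVD baseline the paper compares against solves the deconvolution C(t) = AIF ⊛ k(t) for the flow-scaled residue function k(t) = CBF·R(t) by truncating small singular values of the AIF convolution matrix; a minimal sketch of that baseline (function and parameter names assumed here, and a relative truncation threshold used as one common convention):

```python
import numpy as np

def ssvd_deconvolve(aif, ctc, dt=1.0, lam=0.2):
    """Truncated-SVD deconvolution of a tissue concentration time curve.

    Recovers the flow-scaled residue function k(t) = CBF * R(t) from
    C(t) = dt * (AIF convolved with k); CBF is read off as the peak of k.
    lam is the relative singular-value truncation threshold."""
    aif = np.asarray(aif, dtype=float)
    ctc = np.asarray(ctc, dtype=float)
    n = len(aif)
    # lower-triangular Toeplitz matrix implementing discrete convolution with the AIF
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    A *= dt
    U, S, Vt = np.linalg.svd(A)
    # zero out small singular values instead of inverting them (regularization)
    S_inv = np.where(S > lam * S.max(), 1.0 / S, 0.0)
    k = Vt.T @ (S_inv * (U.T @ ctc))
    cbf = k.max()  # CBF taken as the maximum of the residue function
    return k, cbf
```

The truncation suppresses noise but also clips genuine high-frequency content, which is one source of the residue-function oscillations and 10%-15% CBF underestimation noted above.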

PMID:40194529 | DOI:10.1088/2057-1976/adc9b6

Categories: Literature Watch
