Deep learning
Integrative Computational Analysis of Common EXO5 Haplotypes: Impact on Protein Dynamics, Genome Stability, and Cancer Progression
J Chem Inf Model. 2025 Mar 21. doi: 10.1021/acs.jcim.5c00067. Online ahead of print.
ABSTRACT
Understanding the impact of common germline variants on protein structure, function, and disease progression is crucial in cancer research. This study presents a comprehensive analysis of the EXO5 gene, which encodes a DNA exonuclease involved in DNA repair that was previously associated with cancer susceptibility. We employed an integrated approach combining genomic and clinical data analysis, deep learning variant effect prediction, and molecular dynamics (MD) simulations to investigate the effects of common EXO5 haplotypes on protein structure, dynamics, and cancer outcomes. We characterized the haplotype structure of EXO5 across diverse human populations, identifying five common haplotypes, and studied their impact on the EXO5 protein. Extensive, all-atom MD simulations revealed significant structural and dynamic differences among the EXO5 protein variants, particularly in their catalytic region. The L151P EXO5 protein variant exhibited the most substantial conformational changes, potentially disruptive for EXO5's function and nuclear localization. Analysis of The Cancer Genome Atlas data showed that cancer patients carrying L151P EXO5 had significantly shorter progression-free survival in prostate and pancreatic cancers and exhibited increased genomic instability. This study highlights the strength of our methodology in uncovering the effects of common genetic variants on protein function and their implications for disease outcomes.
PMID:40115981 | DOI:10.1021/acs.jcim.5c00067
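For readers who want to reproduce the kind of trajectory analysis described above, the following is a minimal sketch of a per-residue Cα RMSF calculation with the MDAnalysis library; the file names, atom selection, and alignment choices are placeholder assumptions, not the authors' actual pipeline.

```python
# Illustrative per-residue flexibility analysis (Cα RMSF) of the kind used to
# compare protein variants from all-atom MD trajectories. Paths are placeholders.
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

u = mda.Universe("exo5_variant.pdb", "exo5_variant_traj.xtc")   # topology + trajectory (placeholders)
ref = mda.Universe("exo5_variant.pdb")

# Align the trajectory on Cα atoms so RMSF reflects internal motion, not global drift
align.AlignTraj(u, ref, select="protein and name CA", in_memory=True).run()

ca = u.select_atoms("protein and name CA")
rmsf = rms.RMSF(ca).run()
for resid, value in zip(ca.resids, rmsf.results.rmsf):
    print(resid, round(float(value), 2))        # higher values = more flexible residues
```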
Artificial intelligence algorithm was used to establish and verify the prediction model of portal hypertension in hepatocellular carcinoma based on clinical parameters and imaging features
J Gastrointest Oncol. 2025 Feb 28;16(1):159-175. doi: 10.21037/jgo-2024-931. Epub 2025 Feb 26.
ABSTRACT
BACKGROUND: Portal hypertension (PHT) is an important factor leading to a poor prognosis in patients with hepatocellular carcinoma (HCC). Identifying patients with PHT for individualized treatment is of great clinical significance, and a prediction model for PHT in HCC is urgently needed in clinical practice. Combining clinical parameters and imaging features can improve prediction accuracy, and artificial intelligence algorithms can further exploit these data, optimize the prediction model, and provide strong support for early intervention and personalized treatment of PHT. This study aimed to establish a prediction model of PHT based on clinicopathological features and computed tomography features of the non-tumorous liver region in the portal venous phase.
METHODS: A total of 884 patients were enrolled in this study, and randomly divided into a training set of 707 patients (of whom 89 had PHT) and a validation set of 177 patients (of whom 23 had PHT) at a ratio of 8:2. Univariate and multivariate logistic regression analyses were performed to screen the clinical features. Radiomics and deep-learning features were extracted from the non-tumorous liver regions. Feature selection was conducted using t-tests, correlation analyses, and least absolute shrinkage and selection operator regression models. Finally, a predictive model for PHT in HCC patients was constructed by combining clinical features with the selected radiomics and deep-learning features.
RESULTS: Portal vein diameter (PVD), Child-Pugh score, and fibrosis 4 (FIB-4) score were identified as independent risk factors for PHT. The predictive model that incorporated clinical features, radiomics features from non-tumorous liver regions, and deep-learning features had an area under the curve (AUC) of 0.966 [95% confidence interval (CI): 0.954-0.979] and a sensitivity of 0.966 in the training set, and an AUC of 0.698 (95% CI: 0.565-0.831) and a sensitivity of 0.609 in the validation set.
CONCLUSIONS: The preoperative evaluation showed that increased PVD, higher Child-Pugh score, and increased FIB-4 score were independent risk factors for PHT in patients with HCC. To predict the occurrence of PHT more effectively, we constructed a comprehensive prediction model that combines clinical parameters, radiomic features, and deep-learning features. This fusion of multi-modal features enables the model to capture complex information related to PHT more comprehensively, achieving high prediction accuracy and practicability.
PMID:40115915 | PMC:PMC11921233 | DOI:10.21037/jgo-2024-931
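As a rough illustration of the feature-screening and fusion steps named in the methods (t-test screening, LASSO selection, then logistic modelling with AUC evaluation), here is a hedged scikit-learn sketch; the feature matrix, feature counts, and prevalence are synthetic placeholders rather than the study's data.

```python
# Hedged sketch: t-test screening -> LASSO selection -> logistic model -> AUC.
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(884, 60))                 # clinical + radiomics + deep-learning features (toy)
y = rng.binomial(1, 0.13, size=884)            # PHT label with toy prevalence

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
scaler = StandardScaler().fit(X_tr)
X_tr, X_va = scaler.transform(X_tr), scaler.transform(X_va)

# t-test screening: keep features that differ between PHT and non-PHT cases
_, p = stats.ttest_ind(X_tr[y_tr == 1], X_tr[y_tr == 0], axis=0)
keep = np.argsort(p)[:20]                      # 20 lowest p-values (threshold is a design choice)

# LASSO retains a sparse subset of the screened features
lasso = LassoCV(cv=5, random_state=0).fit(X_tr[:, keep], y_tr)
selected = keep[lasso.coef_ != 0] if np.any(lasso.coef_ != 0) else keep

clf = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_va, clf.predict_proba(X_va[:, selected])[:, 1])
print(f"validation AUC: {auc:.3f}")
```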
AM-MTEEG: multi-task EEG classification based on impulsive associative memory
Front Neurosci. 2025 Mar 6;19:1557287. doi: 10.3389/fnins.2025.1557287. eCollection 2025.
ABSTRACT
Electroencephalogram-based brain-computer interfaces (BCIs) hold promise for healthcare applications but are hindered by cross-subject variability and limited data. This article proposes a multi-task (MT) classification model, AM-MTEEG, which integrates deep learning-based convolutional and impulsive networks with bidirectional associative memory (AM) for cross-subject EEG classification. AM-MTEEG treats the EEG classification of each subject as an independent task and exploits features shared across subjects. The model is built from a convolutional encoder-decoder and a population of impulsive neurons that extract shared features across subjects, together with a Hebbian-learned bidirectional associative memory matrix that classifies EEG within each subject. Experimental results on two BCI competition datasets demonstrate that AM-MTEEG improves average accuracy over state-of-the-art methods and reduces performance variance across subjects. Visualization of neuronal impulses in the bidirectional associative memory network reveals a precise mapping between hidden-layer neuron activities and specific movements. Given four motor imagery categories, the reconstructed waveforms resemble real event-related potentials, highlighting the biological interpretability of the model beyond classification.
PMID:40115889 | PMC:PMC11922916 | DOI:10.3389/fnins.2025.1557287
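The Hebbian-learned bidirectional associative memory at the core of AM-MTEEG can be sketched in a few lines; the bipolar toy patterns, dimensions, and recall loop below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Hebbian-learned bidirectional associative memory (BAM).
import numpy as np

def train_bam(X, Y):
    """Hebbian outer-product learning: W accumulates sum_i x_i y_i^T."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def recall(W, x, steps=5):
    """Bipolar recall: iterate x -> y -> x for a few steps."""
    for _ in range(steps):
        y = np.sign(W.T @ x)
        x = np.sign(W @ y)
    return y

# Toy bipolar patterns: 8-dim "feature" vectors paired with 4-dim class codes
X = [np.sign(np.random.randn(8)) for _ in range(4)]
Y = [np.where(np.eye(4)[i] > 0, 1.0, -1.0) for i in range(4)]
W = train_bam(X, Y)
print(recall(W, X[2]))  # ideally recovers the class code paired with X[2]
```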
Improvement of BCI performance with bimodal SSMVEPs: enhancing response intensity and reducing fatigue
Front Neurosci. 2025 Mar 6;19:1506104. doi: 10.3389/fnins.2025.1506104. eCollection 2025.
ABSTRACT
Steady-state visual evoked potential (SSVEP) is a widely used brain-computer interface (BCI) paradigm, valued for its multi-target capability and limited EEG electrode requirements. Conventional SSVEP methods frequently lead to visual fatigue and decreased recognition accuracy because of the flickering light stimulation. To address these issues, we developed an innovative steady-state motion visual evoked potential (SSMVEP) paradigm that integrates motion and color stimuli, designed specifically for augmented reality (AR) glasses. Our study aimed to enhance SSMVEP response intensity and reduce visual fatigue. Experiments were conducted under controlled laboratory conditions. EEG data were analyzed using the EEGNet deep learning algorithm and the fast Fourier transform (FFT) to calculate classification accuracy and assess response intensity. Experimental results showed that the bimodal motion-color integrated paradigm significantly outperformed both the single-motion SSMVEP and single-color SSVEP paradigms, achieving the highest accuracy of 83.81% ± 6.52% under medium brightness (M) and an area ratio C of 0.6. Enhanced signal-to-noise ratio (SNR) and reduced visual fatigue were also observed, as confirmed by objective measures and subjective reports. The findings establish the bimodal paradigm as a novel application in SSVEP-based BCIs, enhancing both brain response intensity and user comfort.
PMID:40115888 | PMC:PMC11922886 | DOI:10.3389/fnins.2025.1506104
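As a hedged illustration of the FFT-based response-intensity assessment mentioned above, the following computes the SNR at an assumed 10 Hz stimulation frequency on a synthetic single-channel trial; the sampling rate, duration, and neighbour-bin count are placeholder choices, not the study's settings.

```python
# Illustrative FFT-based SNR at the stimulation frequency for one synthetic trial.
import numpy as np

fs, f_stim, T = 250.0, 10.0, 4.0                 # sampling rate, stimulus freq, duration (assumed)
t = np.arange(0, T, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + np.random.randn(t.size)  # synthetic channel

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

k = np.argmin(np.abs(freqs - f_stim))            # bin closest to the stimulation frequency
neighbours = np.r_[spectrum[k - 5:k], spectrum[k + 1:k + 6]]
snr = spectrum[k] / neighbours.mean()            # response relative to neighbouring bins
print(f"SNR at {f_stim:.1f} Hz: {snr:.2f}")
```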
Fast aberration correction in 3D transcranial photoacoustic computed tomography via a learning-based image reconstruction method
Photoacoustics. 2025 Feb 20;43:100698. doi: 10.1016/j.pacs.2025.100698. eCollection 2025 Jun.
ABSTRACT
Transcranial photoacoustic computed tomography (PACT) holds significant potential as a neuroimaging modality. However, compensating for skull-induced aberrations in reconstructed images remains a challenge. Although optimization-based image reconstruction methods (OBRMs) can account for the relevant wave physics, they are computationally demanding and generally require accurate estimates of the skull's viscoelastic parameters. To circumvent these issues, a learning-based image reconstruction method was investigated for three-dimensional (3D) transcranial PACT. The method was systematically assessed in virtual imaging studies that involved stochastic 3D numerical head phantoms and applied to experimental data acquired by use of a physical head phantom that involved a human skull. The results demonstrated that the learning-based method yielded accurate images and exhibited robustness to errors in the assumed skull properties, while substantially reducing computational times compared to an OBRM. To the best of our knowledge, this is the first demonstration of a learned image reconstruction method for 3D transcranial PACT.
PMID:40115737 | PMC:PMC11923815 | DOI:10.1016/j.pacs.2025.100698
(KAUH-BCMD) dataset: advancing mammographic breast cancer classification with multi-fusion preprocessing and residual depth-wise network
Front Big Data. 2025 Mar 6;8:1529848. doi: 10.3389/fdata.2025.1529848. eCollection 2025.
ABSTRACT
The categorization of benign and malignant patterns in digital mammography is a critical step in the diagnosis of breast cancer, facilitating early detection and potentially saving many lives. Diverse breast tissue architectures often obscure and conceal breast lesions, and classifying worrying regions (benign and malignant patterns) in digital mammograms is a significant challenge for radiologists. Even for specialists, the first visual indicators are nuanced and irregular, complicating identification. Radiologists therefore need an advanced classifier to assist in identifying breast cancer and categorizing regions of concern. This study presents an enhanced technique for the classification of breast cancer using mammography images. The collection comprises real-world data from King Abdullah University Hospital (KAUH) at Jordan University of Science and Technology, consisting of 7,205 images from 5,000 patients aged 18-75. After being classified as benign or malignant, the images underwent preprocessing by rescaling, normalization, and augmentation. Multi-fusion approaches, such as high-boost filtering and contrast-limited adaptive histogram equalization (CLAHE), were used to improve image quality. We created a unique Residual Depth-wise Network (RDN) to enhance the precision of breast cancer detection. The proposed RDN model was compared with several prominent models, including MobileNetV2, VGG16, VGG19, ResNet50, InceptionV3, Xception, and DenseNet121. The RDN model exhibited superior performance, achieving an accuracy of 97.82%, precision of 96.55%, recall of 99.19%, specificity of 96.45%, F1 score of 97.85%, and validation accuracy of 96.20%. The findings indicate that the proposed RDN model is an excellent tool for early diagnosis using mammography images and significantly improves breast cancer detection when integrated with multi-fusion and efficient preprocessing approaches.
PMID:40115240 | PMC:PMC11922913 | DOI:10.3389/fdata.2025.1529848
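A minimal sketch of the multi-fusion preprocessing named above (high-boost filtering followed by CLAHE) on a grayscale mammogram, using OpenCV; the file path, kernel size, boost factor, and CLAHE parameters are assumptions for illustration, not the study's exact settings.

```python
# Sketch of high-boost filtering + CLAHE preprocessing for a grayscale mammogram.
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

# High-boost filtering: (1 + k) * original - k * blurred sharpens fine structure
k = 1.5
blurred = cv2.GaussianBlur(img, (5, 5), 0)
high_boost = cv2.addWeighted(img, 1.0 + k, blurred, -k, 0)

# CLAHE: contrast-limited adaptive histogram equalization on small tiles
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(high_boost)

# Rescale and normalize before feeding the classifier
resized = cv2.resize(enhanced, (224, 224))
normalized = resized.astype(np.float32) / 255.0
```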
Overview of the Head and Neck Tumor Segmentation for Magnetic Resonance Guided Applications (HNTS-MRG) 2024 Challenge
Head Neck Tumor Segm MR Guid Appl (2024). 2025;15273:1-35. doi: 10.1007/978-3-031-83274-1_1. Epub 2025 Mar 3.
ABSTRACT
Magnetic resonance (MR)-guided radiation therapy (RT) is enhancing head and neck cancer (HNC) treatment through superior soft tissue contrast and longitudinal imaging capabilities. However, manual tumor segmentation remains a significant challenge, spurring interest in artificial intelligence (AI)-driven automation. To accelerate innovation in this field, we present the Head and Neck Tumor Segmentation for MR-Guided Applications (HNTS-MRG) 2024 Challenge, a satellite event of the 27th International Conference on Medical Image Computing and Computer Assisted Intervention. This challenge addresses the scarcity of large, publicly available AI-ready adaptive RT datasets in HNC and explores the potential of incorporating multi-timepoint data to enhance RT auto-segmentation performance. Participants tackled two HNC segmentation tasks: automatic delineation of primary gross tumor volume (GTVp) and gross metastatic regional lymph nodes (GTVn) on pre-RT (Task 1) and mid-RT (Task 2) T2-weighted scans. The challenge provided 150 HNC cases for training and 50 for final testing hosted on grand-challenge.org using a Docker submission framework. In total, 19 independent teams from across the world qualified by submitting both their algorithms and corresponding papers, resulting in 18 submissions for Task 1 and 15 submissions for Task 2. Evaluation using the mean aggregated Dice Similarity Coefficient showed top-performing AI methods achieved scores of 0.825 in Task 1 and 0.733 in Task 2. These results surpassed clinician interobserver variability benchmarks, marking significant strides in automated tumor segmentation for MR-guided RT applications in HNC.
PMID:40115167 | PMC:PMC11925392 | DOI:10.1007/978-3-031-83274-1_1
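For reference, the per-case Dice Similarity Coefficient underlying the challenge metric can be computed as below (the challenge itself reported a mean aggregated variant); the toy masks are illustrative only.

```python
# Dice Similarity Coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
ref = np.zeros((64, 64), dtype=bool); ref[25:45, 25:45] = True
print(f"DSC = {dice(pred, ref):.3f}")
```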
Segment Like A Doctor: Learning reliable clinical thinking and experience for pancreas and pancreatic cancer segmentation
Med Image Anal. 2025 Mar 13;102:103539. doi: 10.1016/j.media.2025.103539. Online ahead of print.
ABSTRACT
Pancreatic cancer is a lethal invasive tumor with one of the worst prognoses. Accurate and reliable segmentation of the pancreas and pancreatic cancer on computed tomography (CT) images is vital for clinical diagnosis and treatment. Although certain deep learning-based techniques have been tentatively applied to this task, the current performance of pancreatic cancer segmentation falls far short of clinical needs owing to the tiny size, irregular shape, and extremely uncertain boundary of the cancer. Moreover, most existing studies are built on black-box models that only learn the annotation distribution rather than the logical thinking and diagnostic experience of high-level medical experts, which is more credible and interpretable. To alleviate these issues, we propose a novel Segment-Like-A-Doctor (SLAD) framework to learn reliable clinical thinking and experience for pancreas and pancreatic cancer segmentation on CT images. Specifically, SLAD simulates the essential logical thinking and experience of doctors in the progressive diagnostic stages of pancreatic cancer: the organ, lesion, and boundary stages. First, in the organ stage, an Anatomy-aware Masked AutoEncoder (AMAE) is introduced to model doctors' overall cognition of the anatomical distribution of abdominal organs on CT images through self-supervised pretraining. Second, in the lesion stage, a Causality-driven Graph Reasoning Module (CGRM) is designed to learn doctors' global judgment for lesion detection by exploring topological feature differences between the causal lesion and the non-causal organ. Finally, in the boundary stage, a Diffusion-based Discrepancy Calibration Module (DDCM) is developed to fit doctors' refined understanding of the uncertain boundary of pancreatic cancer by inferring the ambiguous segmentation discrepancy based on the trustworthy lesion core. Experimental results on three independent datasets demonstrate that our approach boosts pancreatic cancer segmentation accuracy by 4%-9% compared with state-of-the-art methods. Additionally, a tumor-vascular involvement analysis is conducted to verify the superiority of our method in clinical applications. Our source code will be publicly available at https://github.com/ZouLiwen-1999/SLAD.
PMID:40112510 | DOI:10.1016/j.media.2025.103539
Generative T2*-weighted images as a substitute for true T2*-weighted images on brain MRI in patients with acute stroke
Diagn Interv Imaging. 2025 Mar 19:S2211-5684(25)00048-8. doi: 10.1016/j.diii.2025.03.004. Online ahead of print.
ABSTRACT
PURPOSE: The purpose of this study was to validate a deep learning algorithm that generates T2*-weighted images from diffusion-weighted (DW) images and to compare its performance with that of true T2*-weighted images for hemorrhage detection on MRI in patients with acute stroke.
MATERIALS AND METHODS: This single-center, retrospective study included DW- and T2*-weighted images obtained less than 48 hours after symptom onset in consecutive patients admitted for acute stroke. Datasets were divided into training (60 %), validation (20 %), and test (20 %) sets, with stratification by stroke type (hemorrhagic/ischemic). A generative adversarial network was trained to produce generative T2*-weighted images using DW images. Concordance between true T2*-weighted images and generative T2*-weighted images for hemorrhage detection was independently graded by two readers into three categories (parenchymal hematoma, hemorrhagic infarct or no hemorrhage), and discordances were resolved by consensus reading. Sensitivity, specificity and accuracy of generative T2*-weighted images were estimated using true T2*-weighted images as the standard of reference.
RESULTS: A total of 1491 MRI sets from 939 patients (487 women, 452 men) with a median age of 71 years (first quartile, 57; third quartile, 81; range: 21-101) were included. In the test set (n = 300), there were no differences between true T2*-weighted images and generative T2*-weighted images for intraobserver reproducibility (κ = 0.97 [95 % CI: 0.95-0.99] vs. 0.95 [95 % CI: 0.92-0.97]; P = 0.27) and interobserver reproducibility (κ = 0.93 [95 % CI: 0.90-0.97] vs. 0.92 [95 % CI: 0.88-0.96]; P = 0.64). After consensus reading, concordance between true T2*-weighted images and generative T2*-weighted images was excellent (κ = 0.92; 95 % CI: 0.91-0.96). Generative T2*-weighted images achieved 90 % sensitivity (73/81; 95 % CI: 81-96), 97 % specificity (213/219; 95 % CI: 94-99) and 95 % accuracy (286/300; 95 % CI: 92-97) for the diagnosis of any cerebral hemorrhage (hemorrhagic infarct or parenchymal hemorrhage).
CONCLUSION: Generative T2*-weighted images and true T2*-weighted images have statistically comparable diagnostic performance for hemorrhage detection in patients with acute stroke, and generative images may be used to shorten MRI protocols.
PMID:40113490 | DOI:10.1016/j.diii.2025.03.004
Automated Detection of Microcracks Within Second Harmonic Generation Images of Cartilage Using Deep Learning
J Orthop Res. 2025 Mar 20. doi: 10.1002/jor.26071. Online ahead of print.
ABSTRACT
Articular cartilage, essential for smooth joint movement, can sustain micrometer-scale microcracks in its collagen network from low-energy impacts previously considered non-injurious. These microcracks may propagate under cyclic loading, impairing cartilage function and potentially initiating osteoarthritis (OA). Detecting and analyzing microcracks is crucial for understanding early cartilage damage but traditionally relies on manual analysis of second harmonic generation (SHG) images, which is labor-intensive, limits scalability, and delays insights. To address these challenges, we established and validated a YOLOv8-based deep learning model to automate the detection, segmentation, and quantification of cartilage microcracks from SHG images. Data augmentation during training improved model robustness, while evaluation metrics, including precision, recall, and F1-score, confirmed high accuracy and reliability, achieving a true positive rate of 95%. Our model consistently outperformed human annotators, demonstrating superior accuracy and repeatability while reducing labor demands. Error analyses indicated precise predictions for microcrack length and width, with moderate variability in estimates of orientation. Our results demonstrate the transformative potential of deep learning in cartilage research, enabling large-scale studies, accelerating analyses, and providing insights into soft tissue damage and engineered material mechanics. Expanding our dataset to include diverse anatomical regions and disease stages will further enhance the performance and generalization of our YOLOv8-based model. By automating microcrack detection, this study advances understanding of microdamage in cartilage and potential mechanisms of OA progression. Our publicly available model and dataset empower researchers to develop personalized therapies and preventive strategies, ultimately advancing joint health and preserving quality of life.
PMID:40113341 | DOI:10.1002/jor.26071
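A hedged sketch of training and running a YOLOv8 segmentation model with the Ultralytics API, analogous to the microcrack pipeline described above; the dataset YAML, checkpoint, image path, and hyperparameters are placeholders, not the authors' released assets.

```python
# Minimal YOLOv8 segmentation training/inference sketch with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                     # pretrained segmentation checkpoint
model.train(data="microcracks.yaml", epochs=100, imgsz=640)   # dataset YAML: paths + class names

results = model("shg_image.png")                   # inference on one SHG image (placeholder path)
for r in results:
    print(r.boxes.xywh)                            # detected microcrack boxes (x, y, w, h)
    if r.masks is not None:
        print(r.masks.data.shape)                  # per-crack segmentation masks
```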
SERS-ATB: a comprehensive database server for antibiotic SERS spectral visualization and deep-learning identification
Environ Pollut. 2025 Mar 18:126083. doi: 10.1016/j.envpol.2025.126083. Online ahead of print.
ABSTRACT
The rapid and accurate identification of antibiotics in environmental samples is critical for addressing the growing concern of antibiotic pollution, particularly in water sources. Antibiotic contamination poses a significant risk to ecosystems and human health by contributing to the spread of antibiotic resistance. Surface-enhanced Raman spectroscopy (SERS), known for its high sensitivity and specificity, is a powerful tool for antibiotic identification. However, its broader application is constrained by the lack of a large-scale antibiotic spectral database, which is crucial for environmental and clinical use. To address this need, we systematically collected 12,800 SERS spectra for 200 environmentally relevant antibiotics and developed an open-access, web-based database at http://sers.test.bniu.net/. We compared six machine learning algorithms with a convolutional neural network (CNN) model; the CNN achieved the highest accuracy at 98.94%, making it the preferred database model. In external validation, the CNN demonstrated an accuracy of 82.8%, underscoring its reliability and practicality for real-world applications. The SERS database and CNN prediction model represent a novel resource for environmental monitoring, offering significant advantages in accessibility, speed, and scalability. This study establishes a large-scale, public SERS spectral database for antibiotics, facilitating the integration of SERS into environmental programs, with the potential to improve antibiotic detection, pollution management, and resistance mitigation.
PMID:40113206 | DOI:10.1016/j.envpol.2025.126083
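In the spirit of the database's CNN classifier, here is an illustrative 1D convolutional network for assigning SERS spectra to antibiotic classes; the spectrum length, layer sizes, and 200-class head are assumptions for the sketch, not the published architecture.

```python
# Illustrative 1D CNN for classifying SERS spectra into antibiotic classes (PyTorch).
import torch
import torch.nn as nn

class SpectraCNN(nn.Module):
    def __init__(self, n_points: int = 1000, n_classes: int = 200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_points)
        return self.classifier(self.features(x).squeeze(-1))

model = SpectraCNN()
logits = model(torch.randn(8, 1, 1000))        # 8 synthetic spectra
print(logits.shape)                            # torch.Size([8, 200])
```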
Geometric deep learning and multiple-instance learning for 3D cell-shape profiling
Cell Syst. 2025 Mar 19;16(3):101229. doi: 10.1016/j.cels.2025.101229.
ABSTRACT
The three-dimensional (3D) morphology of cells emerges from complex cellular and environmental interactions, serving as an indicator of cell state and function. In this study, we used deep learning to discover morphology representations and understand cell states. This study introduced MorphoMIL, a computational pipeline combining geometric deep learning and attention-based multiple-instance learning to profile 3D cell and nuclear shapes. We used 3D point-cloud input and captured morphological signatures at single-cell and population levels, accounting for phenotypic heterogeneity. We applied these methods to over 95,000 melanoma cells treated with clinically relevant and cytoskeleton-modulating chemical and genetic perturbations. The pipeline accurately predicted drug perturbations and cell states. Our framework revealed subtle morphological changes associated with perturbations, key shapes correlating with signaling activity, and interpretable insights into cell-state heterogeneity. MorphoMIL demonstrated superior performance and generalized across diverse datasets, paving the way for scalable, high-throughput morphological profiling in drug discovery. A record of this paper's transparent peer review process is included in the supplemental information.
PMID:40112779 | DOI:10.1016/j.cels.2025.101229
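A compact sketch of attention-based multiple-instance learning pooling, the aggregation idea MorphoMIL combines with a point-cloud encoder; the dimensions, two-class head, and bag size are illustrative, not the paper's configuration.

```python
# Attention-based MIL pooling: weight each cell's embedding, then classify the bag.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, d_in: int = 128, d_attn: int = 64, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(d_in, d_attn), nn.Tanh(), nn.Linear(d_attn, 1))
        self.head = nn.Linear(d_in, n_classes)

    def forward(self, instances):                        # instances: (n_cells, d_in) per bag
        a = torch.softmax(self.attn(instances), dim=0)   # attention weight per cell
        bag = (a * instances).sum(dim=0)                 # weighted bag embedding
        return self.head(bag), a.squeeze(-1)

bag = torch.randn(500, 128)                              # 500 single-cell shape embeddings (toy)
logits, weights = AttentionMIL()(bag)
print(logits.shape, weights.shape)                       # torch.Size([2]) torch.Size([500])
```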
Evaluation of De Vries et al.: Quantifying cellular shapes and how they correlate to cellular responses
Cell Syst. 2025 Mar 19;16(3):101242. doi: 10.1016/j.cels.2025.101242.
ABSTRACT
One snapshot of the peer review process for "Geometric deep learning and multiple instance learning for 3D cell shape profiling" (De Vries et al., 2025).
PMID:40112776 | DOI:10.1016/j.cels.2025.101242
Identification of heart failure subtypes using transformer-based deep learning modelling: a population-based study of 379,108 individuals
EBioMedicine. 2025 Mar 19;114:105657. doi: 10.1016/j.ebiom.2025.105657. Online ahead of print.
ABSTRACT
BACKGROUND: Heart failure (HF) is a complex syndrome with varied presentations and progression patterns. Traditional classification systems based on left ventricular ejection fraction (LVEF) have limitations in capturing the heterogeneity of HF. We aimed to explore the application of deep learning, specifically a Transformer-based approach, to analyse electronic health records (EHR) for a refined subtyping of patients with HF.
METHODS: We utilised linked EHR from primary and secondary care, sourced from the Clinical Practice Research Datalink (CPRD) Aurum, which encompassed health data of over 30 million individuals in the UK. Individuals aged 35 and above with incident reports of HF between January 1, 2005, and January 1, 2018, were included. We proposed a Transformer-based approach to cluster patients based on all clinical diagnoses, procedures, and medication records in EHR. Statistical machine learning (ML) methods were used for comparative benchmarking. The models were trained on a derivation cohort and assessed for their ability to delineate distinct clusters and prognostic value by comparing one-year all-cause mortality and HF hospitalisation rates among the identified subgroups in a separate validation cohort. Association analyses were conducted to elucidate the clinical characteristics of the derived clusters.
FINDINGS: A total of 379,108 patients were included in the HF subtyping analysis. The Transformer-based approach outperformed alternative methods, delineating more distinct and prognostically valuable clusters. This approach identified seven unique HF patient clusters characterised by differing patterns of mortality, hospitalisation, and comorbidities. These clusters were labelled based on the dominant clinical features present at the initial diagnosis of HF: early-onset, hypertension, ischaemic heart disease, metabolic problems, chronic obstructive pulmonary disease (COPD), thyroid dysfunction, and late-onset clusters. The Transformer-based subtyping approach successfully captured the multifaceted nature of HF.
INTERPRETATION: This study identified seven distinct subtypes, including COPD-related and thyroid dysfunction-related subgroups, which are two high-risk subgroups not recognised in previous subtyping analyses. These insights lay the groundwork for further investigations into tailored and effective management strategies for HF.
FUNDING: British Heart Foundation, European Union - Horizon Europe, and Novo Nordisk Research Centre Oxford.
PMID:40112740 | DOI:10.1016/j.ebiom.2025.105657
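As a loose, hypothetical illustration of the subtyping idea (encode each patient's sequence of EHR codes with a Transformer encoder, then cluster the patient embeddings), the following PyTorch/scikit-learn sketch uses placeholder vocabulary, dimensions, and k=7 clusters; it is not the study's model.

```python
# Hypothetical sketch: Transformer-encoded EHR sequences clustered into subtypes.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

vocab_size, d_model, max_len = 5000, 128, 256          # assumed sizes
embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True), num_layers=2
)

codes = torch.randint(1, vocab_size, (64, max_len))    # toy batch of patient code sequences
with torch.no_grad():
    patient_emb = encoder(embed(codes)).mean(dim=1)    # mean-pool over the sequence

clusters = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(patient_emb.numpy())
print(clusters[:20])                                   # cluster label per toy patient
```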
Intelligent monitoring of fruit and vegetable freshness in supply chain based on 3D printing and lightweight deep convolutional neural networks (DCNN)
Food Chem. 2025 Mar 15;480:143886. doi: 10.1016/j.foodchem.2025.143886. Online ahead of print.
ABSTRACT
In this study, an innovative intelligent system for supervising the quality of fresh produce was proposed, which combined 3D printing technology and deep convolutional neural networks (DCNN). Through 3D printing technology, sensitive, lightweight, and customizable dual-color CO2 monitoring labels were fabricated using bromothymol blue and methyl red as indicators. These labels were applied to sensitively monitor changes in CO2 levels during the storage of vegetables such as green vegetables, cucumbers, okras, plums, and jujubes. The ΔE of the labels was found to have a significant positive correlation with CO2 levels and weight loss rate, while showing a strong inverse relationship with hardness, indirectly reflecting the freshness of the produce. In addition, four lightweight DCNN models (GhostNet, MobileNetv2, ShuffleNet, and Xception) were applied to recognize label images from different storage days, with MobileNetv2 achieving the best performance. The classification accuracy for three freshness levels of okra was 96.06 %, 91.12 %, and 93.86 %, respectively. A mobile application was developed based on this model, which demonstrated excellent performance in recognizing labels at different storage stages, making it suitable for practical applications and effectively distinguishing freshness levels. By combining the novel labels with advanced DCNN models, the accuracy and real-time capabilities of food monitoring can be significantly improved.
PMID:40112721 | DOI:10.1016/j.foodchem.2025.143886
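The colorimetric readout behind the label can be illustrated with the CIE76 color difference, ΔE = sqrt((ΔL*)² + (Δa*)² + (Δb*)²); the CIELAB values below are placeholders, and the abstract does not state which ΔE formula the study used.

```python
# Sketch of a CIE76 color-difference readout for a CO2-indicating label.
import numpy as np

def delta_e(lab1, lab2):
    """CIE76: deltaE = sqrt((dL)^2 + (da)^2 + (db)^2)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

lab_day0 = (62.0, -8.0, 31.0)    # L*, a*, b* of a fresh label (placeholder values)
lab_day5 = (55.0, 14.0, 22.0)    # after CO2-driven indicator color change (placeholder values)
print(f"deltaE = {delta_e(lab_day0, lab_day5):.1f}")
```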
Light scattering imaging modal expansion cytometry for label-free single-cell analysis with deep learning
Comput Methods Programs Biomed. 2025 Mar 15;264:108726. doi: 10.1016/j.cmpb.2025.108726. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Single-cell imaging plays a key role in various fields, including drug development, disease diagnosis, and personalized medicine. To obtain multi-modal information from a single-cell image, especially for label-free cells, this study develops modal expansion cytometry for label-free single-cell analysis.
METHODS: The study utilizes a deep learning-based architecture to expand single-mode light scattering images into multi-modality images, including bright-field (non-fluorescent) and fluorescence images, for label-free single-cell analysis. By combining adversarial loss, L1 distance loss, and VGG perceptual loss, a new network optimization method is proposed. The effectiveness of this method is verified by experiments on simulated images, standard spheres of different sizes, and multiple cell types (such as cervical cancer and leukemia cells). Additionally, the capability of this method in single-cell analysis is assessed through multi-modal cell classification experiments, such as cervical cancer subtypes.
RESULTS: The approach was demonstrated using both cervical cancer cells and leukemia cells. The expanded bright-field and fluorescence images derived from the light scattering images align closely with those obtained through conventional microscopy, showing a contour ratio near 1 for both the whole cell and its nucleus. Using machine learning, the subtyping of cervical cancer cells achieved 92.85 % accuracy with the modal-expansion images, an improvement of nearly 20 % over single-mode light scattering images.
CONCLUSIONS: This study demonstrates that light scattering imaging modal expansion cytometry with deep learning can expand a single-mode light scattering image into artificial multimodal images of label-free single cells, which not only provides visualization of the cells but also aids cell classification, showing great potential in single-cell analysis applications such as cancer cell diagnosis.
PMID:40112688 | DOI:10.1016/j.cmpb.2025.108726
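A hedged sketch of a generator objective combining the three loss terms named in the methods (adversarial loss, L1 distance, and VGG perceptual loss); the networks, channel counts, and loss weights below are placeholder assumptions rather than the paper's exact optimization.

```python
# Sketch: combined adversarial + L1 + VGG perceptual generator loss (PyTorch).
import torch
import torch.nn as nn
from torchvision.models import vgg16

vgg_features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()   # frozen perceptual extractor
for p in vgg_features.parameters():
    p.requires_grad_(False)

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def generator_loss(fake, real, disc_logits_on_fake, w_adv=1.0, w_l1=100.0, w_vgg=10.0):
    """Total G loss = adversarial + weighted L1 + weighted VGG perceptual terms."""
    adv = bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    pixel = l1(fake, real)
    perceptual = l1(vgg_features(fake), vgg_features(real))   # expects 3-channel inputs
    return w_adv * adv + w_l1 * pixel + w_vgg * perceptual

# Toy usage with 3-channel 64x64 tensors standing in for expanded-modality images
fake, real = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(generator_loss(fake, real, torch.randn(2, 1)).item())
```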
The impact of training image quality with a novel protocol on artificial intelligence-based LGE-MRI image segmentation for potential atrial fibrillation management
Comput Methods Programs Biomed. 2025 Mar 15;264:108722. doi: 10.1016/j.cmpb.2025.108722. Online ahead of print.
ABSTRACT
BACKGROUND: Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting up to 2 % of the population. Catheter ablation is a promising treatment for AF, particularly for paroxysmal AF patients, but it often has high recurrence rates. Developing in silico models of patients' atria during the ablation procedure using cardiac MRI data may help reduce these rates.
OBJECTIVE: This study aims to develop an effective automated deep learning-based segmentation pipeline by compiling a specialized dataset and employing standardized labeling protocols to improve segmentation accuracy and efficiency. In doing so, we aim to achieve the highest possible accuracy and generalization ability while minimizing the burden on clinicians involved in manual data segmentation.
METHODS: We collected LGE-MRI data from VMRC and the cDEMRIS database. Two specialists manually labeled the data using standardized protocols to reduce subjective errors. Neural network (nnU-Net and smpU-Net++) performance was evaluated using statistical tests, including sensitivity and specificity analysis. A new database of LGE-MRI images, based on manual segmentation, was created (VMRC).
RESULTS: Our approach with consistent labeling protocols achieved a Dice coefficient of 92.4 % ± 0.8 % for the cavity and 64.5 % ± 1.9 % for LA walls. Using the pre-trained RIFE model, we attained a Dice score of approximately 89.1 % ± 1.6 % for atrial LGE-MRI imputation, outperforming classical methods. Sensitivity and specificity values demonstrated substantial enhancement in the performance of neural networks trained with the new protocol.
CONCLUSION: Standardized labeling and RIFE applications significantly improved machine learning tool efficiency for constructing 3D LA models. This novel approach supports integrating state-of-the-art machine learning methods into broader in silico pipelines for predicting ablation outcomes in AF patients.
PMID:40112687 | DOI:10.1016/j.cmpb.2025.108722
An improved Artificial Protozoa Optimizer for CNN architecture optimization
Neural Netw. 2025 Mar 13;187:107368. doi: 10.1016/j.neunet.2025.107368. Online ahead of print.
ABSTRACT
In this paper, we propose a novel neural architecture search (NAS) method called MAPOCNN, which leverages an enhanced version of the Artificial Protozoa Optimizer (APO) to optimize the architecture of Convolutional Neural Networks (CNNs). The APO is known for its rapid convergence, high stability, and minimal parameter involvement. To further improve its performance, we introduce MAPO (Modified Artificial Protozoa Optimizer), which incorporates the phototaxis behavior of protozoa. This addition helps mitigate the risk of premature convergence, allowing the algorithm to explore a broader range of possible CNN architectures and ultimately identify more optimal solutions. Through rigorous experimentation on benchmark datasets, including Rectangle and Mnist-random, we demonstrate that MAPOCNN not only achieves faster convergence times but also performs competitively when compared to other state-of-the-art NAS algorithms. The results highlight the effectiveness of MAPOCNN in efficiently discovering CNN architectures that outperform existing methods in terms of both speed and accuracy. This work presents a promising direction for optimizing deep learning architectures using biologically inspired optimization techniques.
PMID:40112636 | DOI:10.1016/j.neunet.2025.107368
REDInet: a temporal convolutional network-based classifier for A-to-I RNA editing detection harnessing million known events
Brief Bioinform. 2025 Mar 4;26(2):bbaf107. doi: 10.1093/bib/bbaf107.
ABSTRACT
A-to-I ribonucleic acid (RNA) editing detection is still a challenging task. Current bioinformatics tools rely on empirical filters and whole genome sequencing or whole exome sequencing data to remove background noise, sequencing errors, and artifacts, and they sometimes make use of cumbersome and time-consuming computational procedures. Here, we present REDInet, a temporal convolutional network-based deep learning algorithm, to profile RNA editing in human RNA sequencing (RNAseq) data. It has been trained on REDIportal RNA editing sites, the largest collection of human A-to-I changes, derived from >8000 RNAseq datasets of the Genotype-Tissue Expression (GTEx) project. REDInet can classify editing events with high accuracy by harnessing RNAseq nucleotide frequencies of 101-base windows, without the need for matched genomic data.
PMID:40112338 | DOI:10.1093/bib/bbaf107
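An illustrative temporal convolutional block over a 101-base window, in the spirit of REDInet's input encoding (per-position nucleotide frequencies); the channel counts, dilations, and single-logit editing head are assumptions, not the published network.

```python
# Sketch of a dilated temporal convolutional classifier over 101-base windows.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, c_in, c_out, dilation):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=3, padding=dilation, dilation=dilation),
            nn.ReLU(),
            nn.Conv1d(c_out, c_out, kernel_size=3, padding=dilation, dilation=dilation),
            nn.ReLU(),
        )
        self.skip = nn.Conv1d(c_in, c_out, kernel_size=1)

    def forward(self, x):
        return self.net(x) + self.skip(x)        # residual connection preserves length

model = nn.Sequential(
    TCNBlock(4, 32, dilation=1),                 # 4 channels: A/C/G/T frequencies per position
    TCNBlock(32, 32, dilation=2),
    TCNBlock(32, 32, dilation=4),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),   # editing vs. non-editing logit
)
windows = torch.rand(16, 4, 101)                 # 16 windows of 101 positions (toy input)
print(model(windows).shape)                      # torch.Size([16, 1])
```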
Deep learning analysis of magnetic resonance imaging accurately detects early-stage perihilar cholangiocarcinoma in patients with primary sclerosing cholangitis
Hepatology. 2025 Mar 20. doi: 10.1097/HEP.0000000000001314. Online ahead of print.
ABSTRACT
BACKGROUND AND AIMS: Among those with primary sclerosing cholangitis (PSC), perihilar cholangiocarcinoma (pCCA) is often diagnosed at a late stage and is a leading source of mortality. Detection of pCCA in PSC at a point when curative action can be taken is challenging. Our aim was to create a deep learning model that analyzed magnetic resonance imaging (MRI) to detect early-stage pCCA and to compare its diagnostic performance with that of expert radiologists.
APPROACH AND RESULTS: We conducted a multicenter, international, retrospective cohort study involving adults with large-duct PSC who underwent contrast-enhanced MRI. Senior abdominal radiologists reviewed the images. All patients with pCCA had early-stage cancer and were registered for liver transplantation. We trained a 3D DenseNet-121 model, a form of deep learning, using MRI images and assessed its performance in a separate test cohort. The study included 398 patients (training cohort n=150; test cohort n=248). pCCA was present in 230 individuals (training cohort n=64; test cohort n=166). In the test cohort, the respective performances of the model compared to the radiologists were: sensitivity 87.9% versus 50.0%, p<0.001; specificity 84.1% versus 100.0%, p<0.001; area under the receiver operating characteristic curve 86.0% versus 75.0%, p<0.001. Even when a mass was absent, the model had a higher sensitivity for pCCA than the radiologists (91.6% vs. 50.6%, p<0.001) and maintained good specificity (84.1%).
CONCLUSION: The 3D DenseNet-121 MRI model effectively detects early-stage pCCA in PSC patients. Compared to expert radiologists, the model missed fewer cases of cancer.
PMID:40112296 | DOI:10.1097/HEP.0000000000001314
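A minimal sketch of a 3D DenseNet-121 classifier as available in the MONAI library, matching the architecture named above; the input volume size, channel count, and two-class head are placeholders, not the study's training protocol.

```python
# Sketch: 3D DenseNet-121 classifier from MONAI applied to one toy MRI volume.
import torch
from monai.networks.nets import DenseNet121

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)  # pCCA vs. no pCCA (assumed head)
volume = torch.rand(1, 1, 96, 96, 96)        # one single-channel volume with a toy size
logits = model(volume)
print(torch.softmax(logits, dim=1))          # class probabilities for the toy input
```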