Deep learning
Development and Evaluation of a Deep Learning-Based Pulmonary Hypertension Screening Algorithm Using a Digital Stethoscope
J Am Heart Assoc. 2025 Feb 3:e036882. doi: 10.1161/JAHA.124.036882. Online ahead of print.
ABSTRACT
BACKGROUND: Despite the poor outcomes associated with pulmonary hypertension, it often goes undiagnosed, in part because of low clinical suspicion and limited access to screening tools such as echocardiography. A new, readily available screening tool to identify elevated pulmonary artery systolic pressures is needed to support prognosis and timely treatment of underlying causes such as heart failure or pulmonary vascular remodeling. We developed a deep learning-based method that uses phonocardiograms (PCGs) for the detection of elevated pulmonary artery systolic pressure, an indicator of pulmonary hypertension.
METHODS: Approximately 6000 PCG recordings with the corresponding echocardiogram-based estimated pulmonary artery systolic pressure values, as well as ≈169 000 PCG recordings without associated echocardiograms, were used for training a deep convolutional network to detect pulmonary artery systolic pressures ≥40 mm Hg in a semisupervised manner. Each 15-second PCG, recorded using a digital stethoscope, was processed to generate 5-second mel-spectrograms. An additional labeled data set of 196 patients was used for testing. GradCAM++ was used to visualize high importance segments contributing to the network decision.
RESULTS: An average area under the receiver operator characteristic curve of 0.79 was obtained across 5 cross-validation folds. The testing data set gave a sensitivity of 0.71 and a specificity of 0.73, with pulmonic and left subclavicular locations having higher sensitivities. GradCAM++ technique highlighted physiologically meaningful PCG segments in example pulmonary hypertension recordings.
CONCLUSIONS: We demonstrated the feasibility of using digital stethoscopes in conjunction with deep learning algorithms as a low-cost, noninvasive, and easily accessible screening tool for early detection of pulmonary hypertension.
PMID:39895552 | DOI:10.1161/JAHA.124.036882
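The preprocessing described in the abstract above (15-second PCG recordings split into 5-second segments, each converted to a mel-spectrogram) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sampling rate, FFT size, hop length, and mel-filter count are assumptions.

```python
import numpy as np

def hz_to_mel(f):
    # standard HTK mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mel_spectrogram(x, sr, n_fft=512, hop=128, n_mels=40):
    # frame the signal, window, FFT, then project power onto mel filters
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    return mel_filterbank(n_mels, n_fft, sr) @ power.T  # (n_mels, n_frames)

# split a 15-second recording into three 5-second segments, as in the abstract
sr = 4000                                  # assumed stethoscope sampling rate
pcg = np.random.randn(15 * sr)
segments = [pcg[i * 5 * sr:(i + 1) * 5 * sr] for i in range(3)]
specs = [mel_spectrogram(s, sr) for s in segments]
print(specs[0].shape)
```

The resulting 2-D arrays are what a convolutional network would consume as image-like inputs.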
CT-based radiomics: A potential indicator of KRAS mutation in pulmonary adenocarcinoma
Tumori. 2025 Feb 2:3008916251314659. doi: 10.1177/03008916251314659. Online ahead of print.
ABSTRACT
PURPOSE: This study aimed to validate a CT-based radiomics signature for predicting Kirsten rat sarcoma (KRAS) mutation status in lung adenocarcinoma (LADC).
MATERIALS AND METHODS: A total of 815 LADC patients were included. Radiomics features were extracted from non-contrast-enhanced CT (NECT) and contrast-enhanced CT (CECT) images using Pyradiomics. CT-based radiomics were combined with clinical features to distinguish KRAS mutation status. Four feature selection methods and four deep learning classifiers were employed. Data was split into 70% training and 30% test sets, with SMOTE addressing imbalance in the training set. Model performance was evaluated using AUC, accuracy, precision, F1 score, and recall.
RESULTS: The analysis revealed that 10.4% of patients showed KRAS mutations. The study extracted 1061 radiomics features and combined them with 17 clinical features. After feature selection, two signatures were constructed using the top 10, 20, and 50 features. The best performance was achieved using a Multilayer Perceptron with 20 features. For CECT, it showed 66% precision, 76% recall, 69% F1-score, 84% accuracy, and AUCs of 93.3% and 87.4% for the train and test sets, respectively. For NECT, accuracy was 85% and 82%, with AUCs of 90.7% and 87.6% for the train and test sets, respectively.
CONCLUSIONS: CT-based radiomics signature is a noninvasive method that can predict KRAS mutation status of LADC when mutational profiling is unavailable.
PMID:39894961 | DOI:10.1177/03008916251314659
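The class imbalance reported above (10.4% KRAS-mutated) is the situation SMOTE addresses. A minimal SMOTE-style oversampler, written from scratch for illustration (the study presumably used a library implementation such as imblearn; this sketch only shows the interpolation idea):

```python
import numpy as np

def smote_like(X_min, n_new, k=5, rng=None):
    # SMOTE-style oversampling: each synthetic sample interpolates between a
    # minority sample and one of its k nearest minority-class neighbours
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]    # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy minority class: 20 "mutated" cases with 8 radiomics features each
rng = np.random.default_rng(0)
X_minority = rng.normal(size=(20, 8))
X_new = smote_like(X_minority, n_new=60, rng=1)
print(X_new.shape)                             # 60 synthetic samples
```

Only the training set is oversampled, as the abstract notes, so the test set keeps the real class distribution.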
Unveiling encephalopathy signatures: A deep learning approach with locality-preserving features and hybrid neural network for EEG analysis
Neurosci Lett. 2025 Jan 31:138146. doi: 10.1016/j.neulet.2025.138146. Online ahead of print.
ABSTRACT
EEG signals exhibit spatio-temporal characteristics due to the dispersion of neural activity across the brain and the dynamic temporal patterns of electrical activity in neurons. This study leverages the spatio-temporal nature of EEG signals to diagnose encephalopathy, using a combination of novel locality-preserving feature extraction with Local Binary Patterns (LBP) and a custom fine-tuned Long Short-Term Memory (LSTM) neural network. A carefully curated primary EEG dataset is used to assess the effectiveness of the technique for the diagnosis of encephalopathies. EEG signals from all electrodes are mapped onto a spatial matrix, from which the custom feature extraction method isolates spatial features of the signals. These spatial features are then passed to the neural network, which learns to combine the spatial information with temporal dynamics, summarizing pertinent details from the raw EEG data. Such a unified representation is key to reliable disease classification at the output layer of the neural network, leading to a robust classification system that can potentially improve diagnosis and treatment. The proposed method shows promising potential for enhancing the automated diagnosis of encephalopathy, with a remarkable accuracy rate of 90.5%. To the best of our knowledge, this is the first attempt to compress and represent both spatial and temporal features in a single vector for encephalopathy detection, simplifying visual diagnosis and providing a robust feature for automated predictions. This advancement holds significant promise for early detection and intervention strategies in the clinical environment, which in turn enhances patient care.
PMID:39894198 | DOI:10.1016/j.neulet.2025.138146
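The LBP operator used above is simple to state: each interior grid cell is encoded by comparing its 8 neighbours against its own value and packing the comparison bits into one byte. A minimal sketch (the neighbour ordering and the mapping of electrodes to a grid are illustrative choices, not details from the paper):

```python
def lbp_code(patch):
    # 3x3 local binary pattern: compare the 8 neighbours (clockwise from
    # top-left) against the centre value and pack the bits into one byte
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code

def lbp_map(img):
    # apply the operator to every interior cell of a 2-D grid; in the paper
    # the grid would hold EEG electrode values arranged spatially
    h, w = len(img), len(img[0])
    return [[lbp_code([row[c - 1:c + 2] for row in img[r - 1:r + 2]])
             for c in range(1, w - 1)] for r in range(1, h - 1)]

grid = [[5, 3, 1],
        [4, 4, 7],
        [9, 2, 6]]
print(lbp_map(grid))   # one interior cell -> one LBP code
```

The flattened LBP map is the kind of locality-preserving spatial feature vector that the LSTM then consumes over time.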
NLP for Analyzing Electronic Health Records and Clinical Notes in Cancer Research: A Review
J Pain Symptom Manage. 2025 Jan 31:S0885-3924(25)00037-5. doi: 10.1016/j.jpainsymman.2025.01.019. Online ahead of print.
ABSTRACT
This review examines the application of natural language processing (NLP) techniques in cancer research using electronic health records (EHRs) and clinical notes. It addresses gaps in existing literature by providing a broader perspective than previous studies focused on specific cancer types or applications. A comprehensive literature search in the Scopus database identified 94 relevant studies published between 2019 and 2024. The analysis revealed a growing trend in NLP applications for cancer research, with information extraction (47 studies) and text classification (40 studies) emerging as predominant NLP tasks, followed by named entity recognition (7 studies). Among cancer types, breast, lung, and colorectal cancers were found to be the most studied. A significant shift from rule-based and traditional machine learning approaches to advanced deep learning techniques and transformer-based models was observed. It was found that dataset sizes used in existing studies varied widely, ranging from small, manually annotated datasets to large-scale EHRs. The review highlighted key challenges, including the limited generalizability of proposed solutions and the need for improved integration into clinical workflows. While NLP techniques show significant potential in analyzing EHRs and clinical notes for cancer research, future work should focus on improving model generalizability, enhancing robustness in handling complex clinical language, and expanding applications to understudied cancer types. The integration of NLP tools into palliative medicine and addressing ethical considerations remain crucial for utilizing the full potential of NLP in enhancing cancer diagnosis, treatment, and patient outcomes. This review provides valuable insights into the current state and future directions of NLP applications in cancer research.
PMID:39894080 | DOI:10.1016/j.jpainsymman.2025.01.019
ABIET: An explainable transformer for identifying functional groups in biological active molecules
Comput Biol Med. 2025 Feb 1;187:109740. doi: 10.1016/j.compbiomed.2025.109740. Online ahead of print.
ABSTRACT
Recent advancements in deep learning have revolutionized the field of drug discovery, with Transformer-based models emerging as powerful tools for molecular design and property prediction. However, the lack of explainability in such models remains a significant challenge. In this study, we introduce ABIET (Attention-Based Importance Estimation Tool), an explainable Transformer model designed to identify the most critical regions for drug-target interactions - functional groups (FGs) - in biologically active molecules. Functional groups play a pivotal role in determining chemical behavior and biological interactions. Our approach leverages attention weights from Transformer-encoder architectures trained on SMILES representations to assess the relative importance of molecular subregions. By processing attention scores using a specific strategy - considering bidirectional interactions, layer-based extraction, and activation transformations - we effectively distinguish FGs from non-FG atoms. Experimental validation on diverse datasets targeting pharmacological receptors, including VEGFR2, AA2A, GSK3, JNK3, and DRD2, demonstrates the model's robustness and interpretability. Comparative analysis with state-of-the-art gradient-based and perturbation-based methods confirms ABIET's superior performance, with functional groups receiving statistically higher importance scores. This work enhances the transparency of Transformer predictions, providing critical insights for molecular design, structure-activity analysis, and targeted drug development.
PMID:39894011 | DOI:10.1016/j.compbiomed.2025.109740
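The core idea in the abstract above, scoring molecular subregions from Transformer attention weights, can be illustrated with a toy per-token importance function. This is a generic sketch of attention-based importance with "bidirectional interactions" (attention received plus attention given), not ABIET's actual scoring strategy:

```python
import numpy as np

def token_importance(attn):
    # attn: (n_tokens, n_tokens) row-stochastic attention matrix from one
    # head/layer; score each token by the attention it receives (column sum)
    # plus the attention it gives (row sum), then normalise to sum to 1
    received = attn.sum(axis=0)
    given = attn.sum(axis=1)
    score = received + given
    return score / score.sum()

# toy 4-token "SMILES" attention map: token 2 draws most of the attention
attn = np.array([[0.1, 0.1, 0.7, 0.1],
                 [0.2, 0.1, 0.6, 0.1],
                 [0.1, 0.2, 0.5, 0.2],
                 [0.1, 0.1, 0.7, 0.1]])
imp = token_importance(attn)
print(imp.argmax())   # the heavily-attended token ranks highest
```

In the paper's setting, tokens belonging to functional groups would be expected to receive systematically higher scores than non-FG atoms.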
Attention-based deep learning models for predicting anomalous shock of wastewater treatment plants
Water Res. 2025 Jan 23;275:123192. doi: 10.1016/j.watres.2025.123192. Online ahead of print.
ABSTRACT
Rapidly estimating time-consuming water quality indicators (WQIs) such as total nitrogen (TN) and total phosphorus (TP) in influent is an essential prerequisite for wastewater treatment plants (WWTPs) to respond promptly to sudden shock loads. Soft detection methods based on machine learning models, especially deep learning models, perform well in predicting the normal fluctuations of these time-consuming WQIs but can hardly predict their sudden fluctuations, mainly due to the lack of extreme-fluctuation data for model training. This work employs attention mechanisms to help deep learning models learn patterns of anomalous water quality. The lack of interpretability has always hindered deep learning models from being optimized for different application scenarios. Therefore, local and global sensitivity analyses are performed on the best-performing attention-based deep learning and ordinary machine learning models, respectively, allowing reliable feature-importance quantification with a small computational burden. In the case study, three types of attention-based deep learning models were developed: attention-based multilayer perceptron (A-MLP), a Transformer composed of stacked A-MLP encoder and A-MLP decoder, and feature-temporal attention-based long short-term memory (FTA-LSTM) with encoder-decoder architecture. These attention-based deep learning models consistently outperform their corresponding baseline models in predicting the testing sets of TN, TP, and chemical oxygen demand (COD) time series and the anomalous values therein, clearly demonstrating the positive effect of the integrated attention mechanism. Among them, FTA-LSTM outperforms A-MLP and Transformer (2.01-38.48% higher R2, 0-85.14% higher F1-score, 0-62.57% higher F2-score).
Predicting anomalous water quality using attention-based deep learning models is a novel attempt that drives WWTP operation towards being safer, cleaner, and more cost-efficient.
PMID:39893907 | DOI:10.1016/j.watres.2025.123192
Deep learning to decode sites of RNA translation in normal and cancerous tissues
Nat Commun. 2025 Feb 2;16(1):1275. doi: 10.1038/s41467-025-56543-0.
ABSTRACT
The biological process of RNA translation is fundamental to cellular life and has wide-ranging implications for human disease. Accurate delineation of RNA translation variation represents a significant challenge due to the complexity of the process and technical limitations. Here, we introduce RiboTIE, a transformer model-based approach designed to enhance the analysis of ribosome profiling data. Unlike existing methods, RiboTIE leverages raw ribosome profiling counts directly to robustly detect translated open reading frames (ORFs) with high precision and sensitivity, evaluated on a diverse set of datasets. We demonstrate that RiboTIE successfully recapitulates known findings and provides novel insights into the regulation of RNA translation in both normal brain and medulloblastoma cancer samples. Our results suggest that RiboTIE is a versatile tool that can significantly improve the accuracy and depth of Ribo-Seq data analysis, thereby advancing our understanding of protein synthesis and its implications in disease.
PMID:39894899 | DOI:10.1038/s41467-025-56543-0
3D convolutional deep learning for nonlinear estimation of body composition from whole body morphology
NPJ Digit Med. 2025 Feb 2;8(1):79. doi: 10.1038/s41746-025-01469-6.
ABSTRACT
Body composition prediction from 3D optical imagery has previously been studied with linear algorithms. In this study, we present a novel application of deep 3D convolutional graph networks and nonlinear Gaussian process regression for human body shape parameterization and body composition estimation. We trained and tested linear and nonlinear models with ablation studies on a novel ensemble body shape dataset containing 4286 scans. Nonlinear GPR produced up to a 20% reduction in prediction error and up to a 30% increase in precision over linear regression for both sexes in 10 tested body composition variables. Deep shape features produced 6-8% reduction in prediction error over linear PCA features for males only, and a 4-14% reduction in precision error for both sexes. All coefficients of determination (R2) for all predicted variables were above 0.86 and achieved lower estimation RMSEs than all previous work on 10 metrics of body composition.
PMID:39894882 | DOI:10.1038/s41746-025-01469-6
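The nonlinear Gaussian process regression (GPR) compared against linear models above has a compact closed form for its posterior mean. A minimal sketch with an RBF kernel, on synthetic data (the kernel choice, length scale, and noise level are assumptions for illustration, not the study's settings):

```python
import numpy as np

def gpr_predict(X_train, y_train, X_test, length=1.0, noise=1e-2):
    # GPR posterior mean: K_* (K + noise*I)^-1 y, with an RBF kernel
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# toy 1-D "shape feature" -> "body composition" mapping with a nonlinear trend
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
X_new = np.array([[0.25], [0.75]])
pred = gpr_predict(X, y, X_new, length=0.2)
print(pred.round(2))   # close to sin at the query points
```

A linear regressor fit to the same data would miss this trend entirely, which is the kind of gap the reported 20% error reduction reflects.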
Functional feature extraction and validation from twelve-lead electrocardiograms to identify atrial fibrillation
Commun Med (Lond). 2025 Feb 2;5(1):32. doi: 10.1038/s43856-025-00749-2.
ABSTRACT
BACKGROUND: Deep learning methods applied to standard 12-lead electrocardiograms (ECGs) can identify individuals at high risk for the development of atrial fibrillation. However, the process remains a "black box" and does not help clinicians understand the electrocardiographic changes at an individual level. We propose a nonparametric feature extraction approach to identify features that are associated with the development of atrial fibrillation (AF).
METHODS: We apply functional principal component analysis to the raw ECG tracings collected in the Chronic Renal Insufficiency Cohort (CRIC) study. We define and select the features using ECGs from participants enrolled in Phase I (2003-2008) of the study. Cox proportional hazards models are used to evaluate the association of selected ECG features and their changes with the incident risk of AF during study follow-up. The findings are then validated in ECGs from participants enrolled in Phase III (2013-2015).
RESULTS: We identify four features that are related to the P-wave amplitude, QRS complex and ST segment. Both their initial measurement and 3-year changes are associated with the development of AF. In particular, one standard deviation in the 3-year decline of the P-wave amplitude is independently associated with a 29% increased risk of incident AF in the multivariable model (HR: 1.29, 95% CI: [1.16, 1.43]).
CONCLUSIONS: Compared with deep learning methods, our features are intuitive and can provide insights into the longitudinal ECG changes at an individual level that precede the development of AF.
PMID:39894874 | DOI:10.1038/s43856-025-00749-2
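Functional principal component analysis, as applied to raw ECG tracings above, can be approximated on discretised curves with an SVD: the leading right-singular vectors act as functional components, and each curve's scores are its projections onto them. An illustrative sketch on synthetic beats (the data and component count are assumptions):

```python
import numpy as np

def functional_pcs(curves, n_components=4):
    # discretised functional PCA: centre the curves, take an SVD, and use
    # the leading right-singular vectors as functional principal components
    mean = curves.mean(axis=0)
    centred = curves - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:n_components]       # (n_components, n_timepoints)
    scores = centred @ components.T      # (n_curves, n_components)
    return mean, components, scores

# toy "ECG beats": a fixed template with random amplitude variation
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) / 0.05) ** 2)   # crude R-wave bump
beats = np.array([(1 + 0.3 * rng.standard_normal()) * template
                  for _ in range(50)])
mean, comps, scores = functional_pcs(beats, n_components=4)
print(scores.shape)
```

The per-patient scores (here one row per beat) are the interpretable features that can then enter a Cox proportional hazards model, as in the study.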
Optimization of sparse-view CT reconstruction based on convolutional neural network
Med Phys. 2025 Feb 2. doi: 10.1002/mp.17636. Online ahead of print.
ABSTRACT
BACKGROUND: Sparse-view CT shortens scan time and reduces radiation dose but results in severe streak artifacts due to insufficient sampling data. Deep learning methods can now suppress these artifacts and improve image quality in sparse-view CT reconstruction.
PURPOSE: The quality of sparse-view CT reconstructed images can still be improved. Additionally, the interpretability of deep learning-based optimization methods for these reconstruction images is lacking, and the role of different network layers in artifact removal requires further study. Moreover, the optimization capability of these methods for reconstruction images from various sparse views needs enhancement. This study aims to improve the network's optimization ability for sparse-view reconstructed images, enhance interpretability, and boost generalization by establishing multiple network structures and datasets.
METHODS: In this paper, we developed a sparse-view CT reconstruction images improvement network (SRII-Net) based on U-Net. We added a copy pathway in the network and designed a residual image output block to boost the network's performance. Multiple networks with different connectivity structures were established using SRII-Net to analyze the contribution of each layer to artifact removal, improving the network's interpretability. Additionally, we created multiple datasets with reconstructed images of various sampling views to train and test the proposed network, investigating how these datasets from different sampling views affect the network's generalization ability.
RESULTS: The results show that the proposed method outperforms current networks, with significant improvements in metrics such as PSNR and SSIM. Image optimization time is at the millisecond level. By comparing the performance of different network structures, we identified the impact of the various hierarchical structures: the image detail information learned by shallow layers and the high-level abstract feature information learned by deep layers both play a crucial role in optimizing sparse-view CT reconstruction images. Training the network with multiple mixed datasets revealed that, for a given amount of data, selecting appropriate categories of sampling views and their corresponding samples can effectively enhance the network's ability to optimize reconstructed images from different sampling views.
CONCLUSIONS: The network in this paper effectively suppresses artifacts in reconstructed images with different sparse views, improving generalization. We have also created diverse network structures and datasets to deepen the understanding of artifact removal in deep learning networks, offering insights for noise reduction and image enhancement in other imaging methods.
PMID:39894762 | DOI:10.1002/mp.17636
Multi-Dimensional Features Extraction for Voice Pathology Detection Based on Deep Learning Methods
J Voice. 2025 Feb 1:S0892-1997(24)00486-7. doi: 10.1016/j.jvoice.2024.12.048. Online ahead of print.
ABSTRACT
PURPOSE: Voice pathology detection is a rapidly evolving field of scientific research focused on the identification and diagnosis of voice disorders. Early detection and diagnosis of these disorders is critical, as it increases the likelihood of effective treatment and reduces the burden on medical professionals.
METHODS: The objective of this paper is to develop a comprehensive model that utilizes various deep learning techniques to improve the detection of voice pathology. To achieve this, the paper employs several techniques to extract a set of sensitive features from the original voice signal by analyzing its time-frequency characteristics. As a means of extracting these features, a state-of-the-art approach combining Gammatonegram features with Teager-Kaiser Energy Operator (TKEO) scalogram features is proposed; the proposed feature extraction scheme is named Combined Gammatonegram with TKEO Scalogram (CGT Scalogram). A ResNet deep learning model is used to distinguish healthy voices from pathological voices. To evaluate the performance of the proposed model, it is trained and tested using the Saarbrucken voice database.
RESULTS: In the end, the proposed system yielded impressive results with an accuracy of 96%, a precision of 96.3%, and a recall of 96.1% for binary classification and an accuracy of 94.4%, a precision of 94.5%, and a recall of 94% for multi-class.
CONCLUSION: The results of the experiments demonstrate the effectiveness of the feature selection technique in maximizing the prediction accuracy in both binary and multi-class classifications.
PMID:39894721 | DOI:10.1016/j.jvoice.2024.12.048
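The Teager-Kaiser Energy Operator named above has a one-line definition: psi[n] = x[n]^2 - x[n-1]*x[n+1]. It tracks instantaneous signal energy and reacts strongly to the abrupt amplitude and frequency changes typical of pathological voices. A minimal sketch (the sampling rate and test tone are illustrative):

```python
import math

def tkeo(x):
    # Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# for a pure sine, the discrete TKEO is exactly constant: sin^2(omega)
sr, f = 8000, 200
signal = [math.sin(2 * math.pi * f * n / sr) for n in range(100)]
energy = tkeo(signal)
print(round(min(energy), 6), round(max(energy), 6))
```

On a steady tone the output is flat; on a disordered voice it fluctuates, which is what makes TKEO-derived features discriminative.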
Microsatellite stable gastric cancer can be classified into two molecular subtypes with different immunotherapy response and prognosis based on gene sequencing and computational pathology
Lab Invest. 2025 Jan 31:104101. doi: 10.1016/j.labinv.2025.104101. Online ahead of print.
ABSTRACT
Most gastric cancer (GC) patients exhibit microsatellite stability (MSS), yet comprehensive subtyping for prognostic prediction and clinical treatment decisions for MSS GC is lacking. In this work, RNA-sequencing gene expression data and clinical information of MSS GC patients were obtained from The Cancer Genome Atlas (TCGA) and the Gene Expression Omnibus (GEO) databases. We employed several machine learning methods to develop and validate a signature based on immune-related genes (IRGs) for subtyping MSS GC patients. Moreover, two deep learning models based on the Vision Transformer (ViT) architecture were developed to predict GC tumor tiles and identify MSS GC subtypes from digital pathology slides. Microsatellite status was evaluated by immunohistochemistry, and prognostic data as well as H&E whole slide images were collected from 105 MSS GC patients to serve as an independent validation cohort. A signature comprising five IRGs was established and validated, stratifying MSS GC patients into high-risk (MSS-HR) and low-risk (MSS-LR) groups. This signature demonstrated consistent performance, with areas under the receiver operating characteristic (ROC) curve (AUC) of 0.65, 0.70, and 0.70 at 1, 3, and 5 years in the TCGA cohort, and 0.70, 0.60, and 0.62 in the GEO cohort, respectively. The MSS-HR subtype exhibited higher levels of tumor immune dysfunction and exclusion, suggesting a greater potential for immune escape compared to the MSS-LR subtype. Moreover, the MSS-HR/LR subtypes showed differential sensitivities to various therapeutic drugs. Leveraging morphological differences, the tumor recognition segmentation model (TRSM) achieved an impressive AUC of 0.97, while the MSS-HR/LR identification model (MSSIM) effectively classified MSS-HR/LR subtypes with an AUC of 0.94. Both models demonstrated promising results in classifying MSS GC patients in the external validation cohort, highlighting the strong ability to accurately differentiate between MSS GC subtypes. 
The IRG-related MSS-HR/LR subtypes have potential to enhance outcome prediction accuracy and guide treatment strategies. This research may optimize precision treatment and improve the prognosis for MSS GC patients.
PMID:39894411 | DOI:10.1016/j.labinv.2025.104101
Deep learning assisted prediction of osteogenic capability of orthopedic implant surfaces based on early cell morphology
Acta Biomater. 2025 Jan 31:S1742-7061(25)00079-0. doi: 10.1016/j.actbio.2025.01.059. Online ahead of print.
ABSTRACT
The surface modification of titanium (Ti) and its alloys is crucial for improving their osteogenic capability, as their bio-inert nature limits effective osseointegration despite their prevalent use in orthopedic implants. However, these modification methods produce varied surface properties, making it challenging to standardize criteria for assessing the osteogenic capacity of implant surfaces. Additionally, traditional evaluation experiments are time-consuming and inefficient. To overcome these limitations, this study introduced a high-throughput, efficient screening method for assessing the osteogenic capability of implant surfaces based on early cell morphology and deep learning. The Orthopedic Implants-Osteogenic Differentiation Network (OIODNet) was developed using early cell morphology images and corresponding alkaline phosphatase (ALP) activity values from cells cultured on Ti and its alloy surfaces, achieving performance metrics exceeding 0.98 across all six evaluation parameters. Validation through metal-polyphenol network (MPN) coatings and cell experiments demonstrated a strong correlation between OIODNet's predictions and actual ALP activity outcomes, confirming its accuracy in predicting osteogenic potential based on early cell morphology. The Osteogenic Predictor application offers an intuitive tool for predicting the osteogenic capacity of implant surfaces. Overall, this research highlights the potential to accelerate progress at the intersection of artificial intelligence and biomaterials, paving the way for more efficient screening of osteogenic capabilities in orthopedic implants. STATEMENT OF SIGNIFICANCE: By leveraging deep learning, this study introduces the Orthopedic Implants-Osteogenic Differentiation Network (OIODNet), which utilizes early cell morphology data and alkaline phosphatase (ALP) activity values to provide a high-throughput, accurate method for predicting osteogenic capability. 
With performance metrics exceeding 0.98, OIODNet's accuracy was further validated through experiments involving metal-polyphenol network (MPN) coatings, showing a strong correlation between the model's predictions and experimental outcomes. This research offers a powerful tool for more efficient screening of implant surfaces, marking a transformative step in the integration of artificial intelligence and biomaterials, while opening new avenues for advancing orthopedic implant technologies.
PMID:39894326 | DOI:10.1016/j.actbio.2025.01.059
Automated Measurement of Pelvic Parameters Using Convolutional Neural Network in Complex Spinal Deformities: Overcoming Challenges in Coronal Deformity Cases
Spine J. 2025 Jan 31:S1529-9430(25)00053-1. doi: 10.1016/j.spinee.2025.01.020. Online ahead of print.
ABSTRACT
BACKGROUND CONTEXT: Accurate and consistent measurement of sagittal alignment is challenging, particularly in patients with severe coronal deformities, including degenerative lumbar scoliosis (DLS).
PURPOSE: This study aimed to develop and validate an artificial intelligence (AI)-based system for automating the measurement of key sagittal parameters, including lumbar lordosis, pelvic incidence, pelvic tilt, and sacral slope, with a focus on its applicability across a wide range of deformities, including severe coronal deformities, such as DLS.
DESIGN: Retrospective observational study.
PATIENT SAMPLE: A total of 1,011 standing lumbar lateral radiographs, including DLS.
OUTCOME MEASURE: Interclass and intraclass correlation coefficients (CC), and Bland-Altman plots.
METHODS: The model utilizes a deep-learning framework, incorporating a U-Net for segmentation and a Keypoint Region-based Convolutional Neural Network for keypoint detection. The ground truth masks were annotated by an experienced orthopedic specialist. The performance of the model was evaluated against ground truth measurements and assessments from two expert raters using interclass and intraclass CC, and Bland-Altman plots.
RESULTS: In the test set of 113 patients, 39 (34.5%) had DLS, with a mean Cobb's angle of 14.8° ± 4.4°. The AI model achieved an intraclass CC of 1.00 across all parameters, indicating perfect consistency. Interclass CCs comparing the AI model to ground truth ranged from 0.96 to 0.99, outperforming experienced orthopedic surgeons. Bland-Altman analysis revealed no significant systemic bias, with most differences falling within clinically acceptable ranges. A 5-fold cross-validation further demonstrated robust performance, with interclass CCs ranging from 0.96 to 0.99 across diverse subsets.
CONCLUSION: This AI-based system offers a reliable and efficient automated measurement of sagittal parameters in spinal deformities, including severe coronal deformities. The superior performance of the model compared with that of expert raters highlights its potential for clinical applications.
PMID:39894276 | DOI:10.1016/j.spinee.2025.01.020
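The Bland-Altman analysis used above to compare the AI model against human raters reduces to two numbers: the mean difference (bias) and the 95% limits of agreement. A minimal sketch with made-up readings (the values are illustrative, not from the study):

```python
import statistics

def bland_altman(a, b):
    # Bland-Altman agreement: bias and 95% limits of agreement
    # (bias +/- 1.96 * SD of the paired differences)
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy pelvic-incidence readings (degrees): AI model vs. human rater
ai    = [52.0, 48.5, 60.2, 55.1, 47.9, 63.0]
human = [51.5, 49.0, 59.8, 55.6, 48.2, 62.1]
bias, (lo, hi) = bland_altman(ai, human)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

"No significant systemic bias" in the abstract means the bias is near zero and the limits of agreement fall within a clinically acceptable range.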
Multiscale deep learning radiomics for predicting recurrence-free survival in pancreatic cancer: A multicenter study
Radiother Oncol. 2025 Jan 31:110770. doi: 10.1016/j.radonc.2025.110770. Online ahead of print.
ABSTRACT
PURPOSE: This multicenter study aimed to develop and validate a multiscale deep learning radiomics nomogram for predicting recurrence-free survival (RFS) in patients with pancreatic ductal adenocarcinoma (PDAC).
MATERIALS AND METHODS: A total of 469 PDAC patients from four hospitals were divided into training and test sets. Handcrafted radiomics and deep learning (DL) features were extracted from optimal regions of interest, encompassing both intratumoral and peritumoral areas. Univariate Cox regression, LASSO regression, and multivariate Cox regression selected features for three image signatures (intratumoral, peritumoral radiomics, and DL). A multiscale nomogram was constructed and validated against the 8th AJCC staging system.
RESULTS: The 4 mm peritumoral VOI yielded the best radiomics prediction, while a 15 mm expansion was optimal for deep learning. The multiscale nomogram demonstrated a C-index of 0.82 (95% CI: 0.78-0.85) in the training set and 0.70 (95% CI: 0.64-0.76) in external test 1 (high-volume hospital), with external test 2 (low-volume hospital) showing a C-index of 0.78 (95% CI: 0.65-0.91). These outperformed the AJCC system's C-index (0.54-0.57). The area under the curve (AUC) for recurrence prediction at 0.5, 1, and 2 years was 0.89, 0.94, and 0.89 in the training set, with AUC values ranging from 0.75 to 0.97 in the external validation sets, consistently surpassing the AJCC system across all sets. Kaplan-Meier analysis confirmed significant differences in prognosis between high- and low-risk groups (P < 0.01 across all cohorts).
CONCLUSION: The multiscale nomogram effectively stratifies recurrence risk in PDAC patients, enhancing presurgical clinical decision-making and potentially improving patient outcomes.
PMID:39894259 | DOI:10.1016/j.radonc.2025.110770
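The C-index reported throughout the abstract above is Harrell's concordance index: the fraction of comparable patient pairs in which the patient with the higher predicted risk recurred first. A from-scratch sketch on toy data (illustrative values only):

```python
def c_index(times, events, risks):
    # Harrell's concordance index: among comparable pairs (the earlier time
    # had an observed event), count pairs where the higher predicted risk
    # failed first; ties in predicted risk count half
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# toy recurrence data: months to recurrence/censoring, event flag, model risk
times  = [6, 12, 18, 24, 30]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.4, 0.7, 0.5, 0.1]
print(c_index(times, events, risks))
```

A C-index of 0.5 is chance-level ranking, which puts the AJCC system's 0.54-0.57 in context against the nomogram's 0.70-0.82.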
Deep-ER: Deep Learning ECCENTRIC Reconstruction for fast high-resolution neurometabolic imaging
Neuroimage. 2025 Jan 31:121045. doi: 10.1016/j.neuroimage.2025.121045. Online ahead of print.
ABSTRACT
INTRODUCTION: Altered neurometabolism is an important pathological mechanism in many neurological diseases and brain cancer, and it can be mapped non-invasively by Magnetic Resonance Spectroscopic Imaging (MRSI). Advanced MRSI using non-Cartesian compressed-sense acquisition enables fast high-resolution metabolic imaging but has lengthy reconstruction times that limit throughput and require expert user interaction. Here, we present a robust and efficient deep learning reconstruction embedded in a physical model within an end-to-end automated processing pipeline to obtain high-quality metabolic maps.
METHODS: Fast high-resolution whole-brain metabolic imaging was performed at 3.4 mm³ isotropic resolution with acquisition times between 4:11-9:21 min:s using the ECCENTRIC pulse sequence on a 7T MRI scanner. Data were acquired in a high-resolution phantom and 27 human participants, including 22 healthy volunteers and 5 glioma patients. A deep neural network using recurring interlaced convolutional layers with joint dual-space feature representation was developed for deep learning ECCENTRIC reconstruction (Deep-ER). 21 subjects were used for training and 6 subjects for testing. Deep-ER performance was compared to iterative compressed sensing Total Generalized Variation reconstruction using image and spectral quality metrics.
RESULTS: Deep-ER demonstrated 600-fold faster reconstruction than conventional methods, providing improved spatial-spectral quality and metabolite quantification, with 12%-45% (P<0.05) higher signal-to-noise ratio and 8%-50% (P<0.05) smaller Cramér-Rao lower bounds. Metabolic images clearly visualize glioma tumor heterogeneity and boundaries. Deep-ER generalizes reliably to unseen data.
CONCLUSION: Deep-ER provides efficient and robust reconstruction for sparse-sampled MRSI. The accelerated acquisition-reconstruction MRSI is compatible with high-throughput imaging workflow. It is expected that such improved performance will facilitate basic and clinical MRSI applications for neuroscience and precision medicine.
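The "joint dual-space" idea above, alternating between an image-space prior and consistency with the measured k-space samples, can be illustrated with a toy unrolled loop. This is a minimal sketch of the general concept only, not the Deep-ER architecture: a crude smoothing filter stands in for the learned network, and a Cartesian 2-D FFT stands in for the non-Cartesian ECCENTRIC encoding; all data are synthetic.

```python
import numpy as np

def data_consistency(image, sampled_kspace, mask):
    """Project back onto the measurements: at sampled k-space
    locations, replace the estimate with the acquired data."""
    k = np.fft.fft2(image)
    k = np.where(mask, sampled_kspace, k)
    return np.fft.ifft2(k)

def unrolled_recon(sampled_kspace, mask, n_iter=10, step=0.5):
    """Toy unrolled reconstruction alternating an image-space
    smoothing 'prior' (stand-in for a trained CNN block) with a
    k-space data-consistency projection."""
    image = np.fft.ifft2(sampled_kspace)  # zero-filled initialization
    for _ in range(n_iter):
        # 4-neighbor average as a crude denoising prior
        smoothed = 0.25 * (np.roll(image, 1, 0) + np.roll(image, -1, 0)
                           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
        image = (1 - step) * image + step * smoothed
        image = data_consistency(image, sampled_kspace, mask)
    return image

rng = np.random.default_rng(0)
truth = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.5      # 50% random undersampling
kspace = np.fft.fft2(truth) * mask
recon = unrolled_recon(kspace, mask)
print(recon.shape)
```

In a trained network such as Deep-ER, the smoothing step is replaced by learned convolutional layers operating jointly on both domains, which is what makes the reconstruction both fast and robust.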
PMID:39894238 | DOI:10.1016/j.neuroimage.2025.121045
Detecting living microalgae in ship ballast water based on stained microscopic images and deep learning
Mar Pollut Bull. 2025 Feb 1;213:117608. doi: 10.1016/j.marpolbul.2025.117608. Online ahead of print.
ABSTRACT
Motivated by the need for rapid detection of living microalgae cells in ship ballast water, this study determines the activity of microalgae from stained microscopic images and detects living cells with image-processing algorithms. The selective staining of living cells by neutral red dye is used to distinguish the activity of microalgae. A deep-learning-based detection model was designed and tested using microscopic images of stained microalgae cells. The results showed that the deep learning model achieved high accuracies when the activity of microalgae was not considered: the model's average precisions (APs) on Platymonas helgolandica tsingtaoensis and Alexandrium catenella were 99.3% and 98.3%, respectively. In contrast, the detection accuracies for living microalgae cells were slightly lower: the model's APs on living Platymonas helgolandica tsingtaoensis and Alexandrium catenella were 91.7% and 91.9%, respectively. The model thus achieved high detection accuracy while also determining the activity of microalgae cells.
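The average precision (AP) scores reported above summarize a detector's precision-recall curve. A minimal sketch of how AP is computed from confidence-ranked detections, assuming each detection has already been labeled true/false positive by IoU matching against ground truth; the scores and labels below are invented for illustration:

```python
def average_precision(scores, labels):
    """Area under the precision-recall curve for detections ranked
    by confidence. labels[i] = 1 marks a true positive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    ap = 0.0
    for i in order:
        if labels[i]:
            tp += 1
            # precision at this recall step, weighted by 1/total_pos
            ap += tp / (tp + fp) / total_pos
        else:
            fp += 1
    return ap

# toy ranked detections: confidence scores with TP(1)/FP(0) labels
scores = [0.95, 0.90, 0.80, 0.60]
labels = [1, 1, 0, 1]
print(average_precision(scores, labels))  # (1 + 1 + 3/4) / 3 ≈ 0.917
```

The drop from ~99% to ~92% AP for living cells reflects the harder task of jointly localizing a cell and judging its staining-based viability.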
PMID:39893717 | DOI:10.1016/j.marpolbul.2025.117608
Unraveling Human Hepatocellular Responses to PFAS and Aqueous Film-Forming Foams (AFFFs) for Molecular Hazard Prioritization and In Vivo Translation
Environ Sci Technol. 2025 Feb 2. doi: 10.1021/acs.est.4c10595. Online ahead of print.
ABSTRACT
Aqueous film-forming foams (AFFFs) are complex product mixtures that often contain per- and polyfluorinated alkyl substances (PFAS) to enhance fire suppression and protect firefighters. However, PFAS have been associated with a range of adverse health effects (e.g., liver and thyroid disease and cancer), and innovative approach methods to better understand their toxicity potential and identify safer alternatives are needed. In this study, we investigated a set of 30 substances (e.g., AFFF, PFAS, and clinical drugs) using differentiated cultures of human hepatocytes (HepaRG, 2D), high-throughput transcriptomics, deep learning of cell morphology images, and liver enzyme leakage assays with benchmark dose analysis to (1) predict the potency ranges for human liver injury, (2) delineate gene- and pathway-level transcriptomic points-of-departure for molecular hazard characterization and prioritization, (3) characterize human hepatocellular response similarities to inform regulatory read-across efforts, and (4) introduce an innovative approach to translate mechanistic hepatocellular response data to predict the potency ranges for PFAS-induced hepatomegaly in vivo. Collectively, these data fill important mechanistic knowledge gaps with PFAS/AFFF and represent a scalable platform to address the thousands of PFAS in commerce for greener chemistries and next-generation risk assessments.
PMID:39893674 | DOI:10.1021/acs.est.4c10595
Protocol for functional screening of CFTR-targeted genetic therapies in patient-derived organoids using DETECTOR deep-learning-based analysis
STAR Protoc. 2025 Jan 31;6(1):103593. doi: 10.1016/j.xpro.2024.103593. Online ahead of print.
ABSTRACT
Here, we present a protocol for the rapid functional screening of gene editing and addition strategies in patient-derived organoids using the deep-learning-based tool DETECTOR (detection of targeted editing of cystic fibrosis transmembrane conductance regulator [CFTR] in organoids). We describe steps for wet-lab experiments, image acquisition, and CFTR function analysis by DETECTOR. We also detail procedures for applying pre-trained models and training custom models on new customized datasets. For complete details on the use and execution of this protocol, refer to Bulcaen et al.1.
PMID:39893642 | DOI:10.1016/j.xpro.2024.103593
End-To-End Deep Learning Explains Antimicrobial Resistance in Peak-Picking-Free MALDI-MS Data
Anal Chem. 2025 Feb 2. doi: 10.1021/acs.analchem.4c05113. Online ahead of print.
ABSTRACT
Mass spectrometry is used to determine infectious microbial species in thousands of clinical laboratories across the world. The vast amount of data allows modern data analysis methods that harvest more information and potentially answer new questions. Here, we present an end-to-end deep learning model for predicting antibiotic resistance from raw matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) data. We used a 1-dimensional convolutional neural network to model (almost) raw data, skipping conventional peak-picking and directly predicting resistance. The model's performance is state-of-the-art, with AUCs between 0.93 and 0.99 across all antimicrobial resistance phenotypes, and it validates across time and location. Feature attribution values highlight important insights into the model and how the end-to-end workflow can be improved further. This study showcases that reliable resistance phenotyping from MALDI-MS data is attainable and highlights the gains of using end-to-end deep learning for spectrometry data.
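The end-to-end idea here is that a 1-D convolution slides learned filters directly over the raw intensity trace, so no peak-picking is needed. A minimal NumPy sketch of such a forward pass with random (untrained) weights on a synthetic spectrum; the architecture and sizes are invented for illustration, not the paper's model:

```python
import numpy as np

def conv1d(x, kernels, stride=4):
    """Valid strided 1-D convolution of a spectrum with a filter bank."""
    k = kernels.shape[1]
    n_out = (len(x) - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for c, w in enumerate(kernels):
        for i in range(n_out):
            out[c, i] = np.dot(x[i * stride : i * stride + k], w)
    return out

def tiny_spectrum_classifier(spectrum, conv_w, fc_w, fc_b):
    """Forward pass: conv -> ReLU -> global average pool -> logistic."""
    h = np.maximum(conv1d(spectrum, conv_w), 0.0)  # ReLU feature maps
    pooled = h.mean(axis=1)                        # global average pooling
    logit = pooled @ fc_w + fc_b
    return 1.0 / (1.0 + np.exp(-logit))            # resistance probability

rng = np.random.default_rng(1)
spectrum = rng.random(2000)                  # stand-in for a raw MALDI-MS trace
conv_w = rng.standard_normal((8, 16)) * 0.1  # 8 kernels of width 16
fc_w = rng.standard_normal(8) * 0.1
prob = tiny_spectrum_classifier(spectrum, conv_w, fc_w, 0.0)
print(round(float(prob), 3))
```

Because the filters see the raw trace, feature attribution methods can map the model's decision back to specific m/z regions, which is how the study extracts insight from the end-to-end workflow.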
PMID:39893590 | DOI:10.1021/acs.analchem.4c05113