Deep learning
Development and validation of MRI-derived deep learning score for non-invasive prediction of PD-L1 expression and prognostic stratification in head and neck squamous cell carcinoma
Cancer Imaging. 2025 Feb 16;25(1):14. doi: 10.1186/s40644-025-00837-5.
ABSTRACT
BACKGROUND: Immunotherapy has revolutionized the treatment landscape for head and neck squamous cell carcinoma (HNSCC), and the PD-L1 combined positive score (CPS) is recommended as a biomarker for immunotherapy. Therefore, this study aimed to develop an MRI-based deep learning score (DLS) to non-invasively assess PD-L1 expression status in HNSCC patients and to evaluate its potential for prognostic stratification following treatment with immune checkpoint inhibitors (ICI).
METHODS: In this study, we collected data from four patient cohorts comprising a total of 610 HNSCC patients from two separate institutions. We developed deep learning models based on the ResNet-101 convolutional neural network to analyze three MRI sequences (T1WI, T2WI, and contrast-enhanced T1WI). Tumor regions were manually segmented, and features extracted from different MRI sequences were fused using a transformer-based model incorporating attention mechanisms. The model's performance in predicting PD-L1 expression was evaluated using the area under the curve (AUC), sensitivity, specificity, and calibration metrics. Survival analyses were conducted using Kaplan-Meier survival curves and log-rank tests to evaluate the prognostic significance of the DLS.
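The abstract specifies the backbone (ResNet-101) and the fusion strategy (transformer with attention over per-sequence features) but no further implementation detail. A minimal PyTorch sketch of that pattern is shown below; the layer sizes, the pooled-feature projection, and the single-logit DLS head are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class MultiSequenceDLS(nn.Module):
    """Sketch: per-sequence ResNet-101 encoders fused by a transformer."""
    def __init__(self, n_sequences=3, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        # One ResNet-101 backbone per MRI sequence (T1WI, T2WI, CE-T1WI).
        self.encoders = nn.ModuleList([
            nn.Sequential(*list(resnet101(weights=None).children())[:-1])
            for _ in range(n_sequences)
        ])
        self.proj = nn.Linear(2048, d_model)  # ResNet-101 pools to 2048-d
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)     # PD-L1 positive vs. negative logit

    def forward(self, xs):                    # xs: list of (B, 3, H, W) tensors
        tokens = [self.proj(enc(x).flatten(1)) for enc, x in zip(self.encoders, xs)]
        seq = torch.stack(tokens, dim=1)      # (B, n_sequences, d_model)
        fused = self.fusion(seq).mean(dim=1)  # attention-based fusion
        return self.head(fused).squeeze(-1)   # deep learning score (DLS)
```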
RESULTS: The DLS demonstrated high predictive accuracy for PD-L1 expression, achieving AUCs of 0.981, 0.860, and 0.803 in the training, internal validation, and external validation cohorts, respectively. Patients with a higher DLS demonstrated significantly improved progression-free survival (PFS) in both the internal validation cohort (hazard ratio: 0.491; 95% CI, 0.270-0.892; P = 0.005) and the external validation cohort (hazard ratio: 0.617; 95% CI, 0.391-0.973; P = 0.040). In the ICI-treated cohort, the DLS achieved an AUC of 0.739 for predicting durable clinical benefit (DCB).
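The survival analyses described (Kaplan-Meier curves, log-rank tests, and hazard ratios for DLS-stratified groups) follow a standard pattern; a minimal sketch using the lifelines library is below. The synthetic cohort and the median-DLS cutoff are assumptions, since the abstract does not state the actual threshold.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in cohort: DLS, PFS in months, and event indicator
rng = np.random.default_rng(0)
df = pd.DataFrame({"dls": rng.random(200),
                   "pfs_months": rng.exponential(12, size=200),
                   "event": rng.integers(0, 2, size=200)})
high = df["dls"] >= df["dls"].median()        # assumed cutoff

# Log-rank test between high- and low-DLS groups
res = logrank_test(df.loc[high, "pfs_months"], df.loc[~high, "pfs_months"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print("log-rank p =", res.p_value)

# Hazard ratio for the high-DLS group from a univariate Cox model
cox_df = df.assign(high_dls=high.astype(int))[["pfs_months", "event", "high_dls"]]
cph = CoxPHFitter().fit(cox_df, duration_col="pfs_months", event_col="event")
cph.print_summary()

# Kaplan-Meier curve per group
for label, grp in df.groupby(high):
    KaplanMeierFitter().fit(grp["pfs_months"], grp["event"],
                            label=f"high DLS = {label}").plot_survival_function()
```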
CONCLUSIONS: The proposed DLS offered a non-invasive and accurate approach for assessing PD-L1 expression in patients with HNSCC and effectively stratified HNSCC patients to benefit from immunotherapy based on PFS.
PMID:39956910 | DOI:10.1186/s40644-025-00837-5
AI and Neurology
Neurol Res Pract. 2025 Feb 17;7(1):11. doi: 10.1186/s42466-025-00367-2.
ABSTRACT
BACKGROUND: Artificial Intelligence is influencing medicine on all levels. Neurology, one of the most complex and progressive medical disciplines, is no exception. No longer limited to neuroimaging, where data-driven approaches first took hold, machine and deep learning methodologies are bringing neurologic diagnostics, prognostication, prediction, decision-making, and even therapy to very promising levels.
MAIN BODY: In this review, the basic principles of different types of Artificial Intelligence and the options to apply them to neurology are summarized. Examples of noteworthy studies on such applications are presented from the fields of acute and intensive care neurology, stroke, epilepsy, and movement disorders. Finally, these potentials are matched with risks and challenges jeopardizing ethics, safety and equality, that need to be heeded by neurologists welcoming Artificial Intelligence to their field of expertise.
CONCLUSION: Artificial intelligence is and will be changing neurology. Studies need to be taken to the prospective level, and algorithms need to undergo federated learning to reach generalizability. Neurologists need to master not only the benefits but also the risks to safety, ethics, and equity of such a data-driven form of medicine.
PMID:39956906 | DOI:10.1186/s42466-025-00367-2
Advanced prognostic modeling with deep learning: assessing long-term outcomes in liver transplant recipients from deceased and living donors
J Transl Med. 2025 Feb 16;23(1):188. doi: 10.1186/s12967-025-06183-1.
ABSTRACT
BACKGROUND: Predicting long-term outcomes in liver transplantation remains a challenging endeavor. This research aims to harness the power of deep learning to develop an advanced prognostic model for assessing long-term outcomes, with a specific focus on distinguishing between deceased and living donor transplantation.
METHODS: A comprehensive dataset from UNOS encompassing clinical, demographic, and transplant-related variables of liver transplant recipients from deceased and living donors was utilized. The main dataset was split into deceased donor-recipient and living donor-recipient datasets. After manual feature extraction, dimensionality reduction was performed with principal component analysis (PCA) in both datasets, and the 23 top-ranked attributes were retained. A Deeplearning4j multilayer perceptron (Dl4jMLP) classifier was employed, and long-term survival analysis was conducted using liver follow-up data. Performance was evaluated separately for each dataset, including survival probabilities over 23 years.
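Deeplearning4j is a Java library, so the pipeline below is a rough Python stand-in using scikit-learn: PCA to 23 components followed by a multilayer perceptron, mirroring the PCA + Dl4jMLP workflow described. The synthetic data and network sizes are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for donor/recipient features and graft survival labels
X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# PCA down to 23 components, then an MLP (scikit-learn stand-in for Dl4jMLP)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=23),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```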
RESULTS: The UNOS database comprises 410 attributes and 353,589 records from 1998 to 2023. The model's output was compared with actual graft survival to verify accuracy. Trained on the 23 selected attributes, the model achieved sensitivity, specificity, and accuracy of 99.9%, 99.9%, and 99.91% on the living donor-recipient dataset, and 99.7%, 99.7%, and 99.86% on the deceased donor-recipient dataset. Short- and long-term survival prediction after liver transplantation was performed successfully with the Dl4jMLP classifier, given appropriate attribute selection, irrespective of donor type. Notably, the findings suggest that the distinction between deceased and living donor transplantation does not significantly affect survival prediction after liver transplantation.
CONCLUSIONS: The utility of the Deeplearning4j model in survival prediction after liver transplantation has been validated in this study. Based on the findings, deceased donor transplantation could be promoted over living donor transplantation.
PMID:39956905 | DOI:10.1186/s12967-025-06183-1
Helmet material design for mitigating traumatic axonal injuries through AI-driven constitutive law enhancement
Commun Eng. 2025 Feb 16;4(1):22. doi: 10.1038/s44172-025-00370-0.
ABSTRACT
Sports helmets provide incomplete protection against brain injuries. Here we aim to improve helmet liner efficiency through a novel approach that optimizes liner material properties. By exploiting a finite element model that simulates head impacts, we developed deep learning models that predict the peak rotational velocity and acceleration of a dummy head protected by various liner materials. The deep learning models exhibited a remarkable correlation coefficient of 0.99 on the testing dataset, with mean absolute errors of 0.8 rad/s and 0.6 krad/s², respectively, highlighting their predictive ability. Deep learning-based material optimization demonstrated a significant reduction in the risk of brain injuries, ranging from 5% to 65%, for impact energies between 250 and 500 joules. This result emphasizes the effectiveness of material design in mitigating sport-related brain injury risks. This research introduces promising avenues for optimizing helmet designs to enhance their protective capabilities.
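As a rough illustration of the surrogate-plus-optimization workflow described (not the authors' implementation), the sketch below fits a small MLP to map liner material parameters to a peak-kinematics target and then searches the design space for the candidate minimizing the predicted value; all data and parameter meanings are synthetic.

```python
import torch
import torch.nn as nn

# Synthetic surrogate data: 3 liner material parameters -> peak rotational
# velocity (toy target; the real data would come from FE impact simulations).
X = torch.rand(2000, 3)
y = (2.0 + 30 * X[:, 0] - 10 * X[:, 1] * X[:, 2]).unsqueeze(1)

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):                            # fit the surrogate
    opt.zero_grad()
    loss = nn.functional.l1_loss(net(X), y)     # MAE, as reported in the paper
    loss.backward()
    opt.step()

# Material "optimization": pick the candidate minimizing the predicted
# peak rotational velocity over a random sample of the design space.
cands = torch.rand(10000, 3)
with torch.no_grad():
    best = cands[net(cands).argmin()]
print("best liner parameters:", best.tolist())
```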
PMID:39956866 | DOI:10.1038/s44172-025-00370-0
Predicting visual field global and local parameters from OCT measurements using explainable machine learning
Sci Rep. 2025 Feb 16;15(1):5685. doi: 10.1038/s41598-025-89557-1.
ABSTRACT
Glaucoma is characterised by progressive vision loss due to retinal ganglion cell deterioration, leading to gradual visual field (VF) impairment. The standard VF test may be impractical in some cases, where optical coherence tomography (OCT) can offer predictive insights into VF for multimodal diagnosis. However, predicting VF measures from OCT data remains challenging. To address this, five regression models were developed to predict VF measures from OCT, Shapley Additive exPlanations (SHAP) analysis was performed for interpretability, and a clinical software tool called OCT to VF Predictor was developed. To evaluate the models, a total of 268 glaucomatous eyes (86 early, 72 moderate, 110 advanced) and 226 normal eyes were included. The machine learning models outperformed recent OCT-based VF prediction deep learning studies, with correlation coefficients of 0.76, 0.80, and 0.76 for mean deviation, visual field index, and pattern standard deviation, respectively. Introducing a pointwise normalisation and step-size concept, a mean absolute error of 2.51 dB was obtained in pointwise sensitivity prediction, and the grayscale prediction model yielded a mean structural similarity index of 77%. The SHAP-based analysis provided critical insights into the features most relevant for glaucoma diagnosis, showing promise in assisting eye care practitioners through an explainable AI tool.
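The SHAP step described can be reproduced in miniature as below; the regressor choice, the synthetic features, and the use of TreeExplainer are assumptions, since the abstract does not name the five models.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for OCT features (e.g., sector thicknesses) predicting
# a global VF index such as mean deviation (MD).
X, y = make_regression(n_samples=400, n_features=12, noise=5.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP values quantify each OCT feature's contribution per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```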
PMID:39956834 | DOI:10.1038/s41598-025-89557-1
RNA-protein interaction prediction using network-guided deep learning
Commun Biol. 2025 Feb 16;8(1):247. doi: 10.1038/s42003-025-07694-9.
ABSTRACT
Accurate computational determination of RNA-protein interactions remains challenging, particularly for unknown RNAs and proteins. The limited number of known RNAs and their flexibility constrain the effectiveness of deep-learning models for RNA-protein interaction prediction. Here, we introduce ZHMolGraph, which integrates graph neural networks and unsupervised large language models to predict RNA-protein interactions. We validate ZHMolGraph predictions on two benchmark datasets and outperform the current best methods. On a dataset of entirely unknown RNAs and proteins, ZHMolGraph achieves a high AUROC of 79.8% and AUPRC of 82.0%, a substantial improvement of 7.1%-28.7% in AUROC and 4.6%-30.0% in AUPRC over other methods. We utilize ZHMolGraph to enhance predictions for the challenging SARS-CoV-2 RNA-protein interactions and unbound RNA-protein complexes. Such enhancements make ZHMolGraph a reliable option for genome-wide RNA-protein interaction prediction. ZHMolGraph holds broad potential for modeling and designing RNA-protein complexes.
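ZHMolGraph's exact architecture is not specified in the abstract beyond combining graph neural networks with language-model embeddings, so the sketch below is only a generic illustration of that pattern: GNN message passing over an RNA-protein graph whose node features stand in for pretrained LM embeddings, scored per candidate pair. The dimensions and the bilinear scorer are assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv

class RPIGraphNet(nn.Module):
    """Sketch: GNN over an RNA-protein interaction graph whose node features
    are pretrained language-model embeddings projected to one dimension."""
    def __init__(self, in_dim=640, hid=128):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid)
        self.conv2 = SAGEConv(hid, hid)
        self.scorer = nn.Bilinear(hid, hid, 1)     # pairwise interaction score

    def forward(self, x, edge_index, pairs):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        rna, prot = h[pairs[:, 0]], h[pairs[:, 1]]
        return self.scorer(rna, prot).squeeze(-1)  # logit per candidate pair

# Toy usage: 10 nodes with random "embeddings", a few known edges,
# and 4 candidate RNA-protein pairs to score.
x = torch.randn(10, 640)
edge_index = torch.tensor([[0, 1, 2, 3], [5, 6, 7, 8]])
pairs = torch.tensor([[0, 5], [1, 6], [2, 7], [3, 8]])
logits = RPIGraphNet()(x, edge_index, pairs)
```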
PMID:39956833 | DOI:10.1038/s42003-025-07694-9
A dataset for surface defect detection on complex structured parts based on photometric stereo
Sci Data. 2025 Feb 16;12(1):276. doi: 10.1038/s41597-025-04454-6.
ABSTRACT
Automated Optical Inspection (AOI) technology is crucial for industrial defect detection but struggles with shadows and surface reflectivity, resulting in false positives and missed detections, especially on non-planar parts. To address these issues, a novel defect detection technique based on deep learning and photometric stereo vision was proposed, along with the creation of the Metal Surface Defect Dataset (MSDD). The proposed Stroboscopic Illuminant Image Acquisition (SIIA) method uses a specially arranged illuminant setup and a Taylor Series Channel Mixer (TSCM) to blend multi-angle illumination images into pseudo-color images. This approach enables end-to-end defect detection using universal object detectors. The method involves mapping color space transformations to spatial domain transformations and utilizing hue randomization for data augmentation. Four object detection methods (FCOS, YOLOv5, YOLOv8, and RT-DETR) were validated on the MSDD, achieving an mAP of 86.1%, surpassing traditional methods. The MSDD includes 138,585 single-channel images and 9,239 mixed images, covering eight defect types. This dataset is essential for automated visual inspection of metal surfaces and is freely accessible for research purposes.
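The Taylor Series Channel Mixer itself is not specified in the abstract, so the sketch below shows only the general idea of blending multi-angle, single-channel illumination images into one pseudo-color image, with a plain linear mixer as a simplified stand-in for the TSCM.

```python
import numpy as np

def mix_to_pseudocolor(imgs, weights):
    """Blend N single-channel, multi-angle illumination images into a
    3-channel pseudo-color image via a linear channel mixer (a simplified
    stand-in for the paper's Taylor Series Channel Mixer)."""
    stack = np.stack(imgs, axis=-1).astype(np.float32)   # (H, W, N)
    rgb = stack @ weights                                # (H, W, 3)
    return np.clip(rgb / rgb.max() * 255, 0, 255).astype(np.uint8)

# Toy usage: four illumination angles mixed with an assumed (N, 3) matrix
imgs = [np.random.rand(64, 64) for _ in range(4)]
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
pseudo = mix_to_pseudocolor(imgs, W)
```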
PMID:39956811 | DOI:10.1038/s41597-025-04454-6
Topological identification and interpretation for single-cell epigenetic regulation elucidation in multi-tasks using scAGDE
Nat Commun. 2025 Feb 16;16(1):1691. doi: 10.1038/s41467-025-57027-x.
ABSTRACT
Single-cell ATAC-seq technology advances our understanding of single-cell heterogeneity in gene regulation by enabling exploration of epigenetic landscapes and regulatory elements. However, low sequencing depth per cell leads to data sparsity and high dimensionality, limiting the characterization of gene regulatory elements. Here, we develop scAGDE, a single-cell chromatin accessibility model-based deep graph representation learning method that simultaneously learns representation and clustering through explicit modeling of data generation. Our evaluations demonstrated that scAGDE outperforms existing methods in cell segregation, key marker identification, and visualization across diverse datasets while mitigating dropout events and unveiling hidden chromatin-accessible regions. We find that scAGDE preferentially identifies enhancer-like regions and elucidates complex regulatory landscapes, pinpointing putative enhancers regulating the constitutive expression of CTLA4 and the transcriptional dynamics of CD8A in immune cells. When applied to human brain tissue, scAGDE successfully annotated cis-regulatory element-specified cell types and revealed functional diversity and regulatory mechanisms of glutamatergic neurons.
PMID:39956806 | DOI:10.1038/s41467-025-57027-x
CT-Based Deep Learning Predicts Prognosis in Esophageal Squamous Cell Cancer Patients Receiving Immunotherapy Combined with Chemotherapy
Acad Radiol. 2025 Feb 15:S1076-6332(25)00101-1. doi: 10.1016/j.acra.2025.01.046. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: Immunotherapy combined with chemotherapy has improved outcomes for some esophageal squamous cell carcinoma (ESCC) patients, but accurate pre-treatment risk stratification remains a critical gap. This study constructed a deep learning (DL) model to predict survival outcomes in ESCC patients receiving immunotherapy combined with chemotherapy.
MATERIALS AND METHODS: A DL model was developed to predict survival outcomes in ESCC patients receiving immunotherapy and chemotherapy. Retrospective data from 482 patients across three institutions were split into training (N=322), internal test (N=79), and external test (N=81) sets. Unenhanced computed tomography (CT) scans were processed to analyze tumor and peritumoral regions. The model evaluated multiple input configurations: original tumor regions of interest (ROIs), ROI subregions, and ROIs expanded by 1 and 3 pixels. Performance was assessed using Harrell's C-index and receiver operating characteristic (ROC) curves. A multimodal model combined DL-derived risk scores with five key clinical and laboratory features. The Shapley Additive Explanations (SHAP) method elucidated the contribution of individual features to model predictions.
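The 1- and 3-pixel ROI expansions can be read as morphological dilation of the tumor mask (an assumption, since the abstract does not define the operation); a minimal sketch with SciPy:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def expand_roi(mask, pixels=1):
    """Expand a binary tumor ROI by `pixels` to include the peritumoral
    rim, as in the paper's 1- and 3-pixel expanded inputs."""
    return binary_dilation(mask, iterations=pixels)

tumor = np.zeros((128, 128), dtype=bool)
tumor[60:70, 55:70] = True
roi_1px = expand_roi(tumor, 1)          # tumor plus a 1-pixel margin
peritumoral_rim = roi_1px & ~tumor      # the added margin alone
```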
RESULTS: The DL model with 1-pixel peritumoral expansion achieved the best accuracy, yielding a C-index of 0.75 for the internal test set and 0.60 for the external test set. The hazard ratio for high-risk patients was 1.82 (95% CI: 1.19-2.46; P=0.02) in the internal test set. The multimodal model achieved C-indices of 0.74 and 0.61 for the internal and external test sets, respectively. Kaplan-Meier analysis revealed significant survival differences between high- and low-risk groups (P<0.05). SHAP analysis identified tumor response, risk score, and age as critical contributors to predictions.
CONCLUSION: This DL model demonstrates efficacy in stratifying ESCC patients by survival risk, particularly when integrating peritumoral imaging and clinical features. The model could serve as a valuable pre-treatment tool to facilitate the implementation of personalized treatment strategies for ESCC patients undergoing immunotherapy and chemotherapy.
PMID:39956748 | DOI:10.1016/j.acra.2025.01.046
Comment on "A deep learning approach for the screening of referable age-related macular degeneration - Model development and external validation"
J Formos Med Assoc. 2025 Feb 15:S0929-6646(25)00059-2. doi: 10.1016/j.jfma.2025.02.017. Online ahead of print.
NO ABSTRACT
PMID:39956680 | DOI:10.1016/j.jfma.2025.02.017
Application of artificial intelligence in the detection of Borrmann type 4 advanced gastric cancer in upper endoscopy (with video)
Cancer. 2025 Feb 15;131(4):e35768. doi: 10.1002/cncr.35768.
ABSTRACT
BACKGROUND: Borrmann type-4 (B-4) advanced gastric cancer is challenging to diagnose through routine endoscopy, leading to a poor prognosis. The objective of this study was to develop an artificial intelligence (AI)-based system capable of detecting B-4 gastric cancers using upper endoscopy.
METHODS: Endoscopic images from 259 patients who were diagnosed with B-4 gastric cancer and 595 controls who had benign conditions were retrospectively collected from Seoul National University Hospital for training and testing. Internal validation involved prospectively collected endoscopic videos from eight patients with B-4 gastric cancer and 148 controls. For external validation, endoscopic images and videos from patients with B-4 gastric cancer and controls at the Seoul National University Bundang Hospital were used. To calculate patient-based accuracy, sensitivity, and specificity, a diagnosis of B-4 was made for patients in whom greater than 50% of the images were identified as B-4 gastric cancer.
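The patient-level decision rule is stated explicitly (call B-4 when more than 50% of a patient's images are flagged); a minimal sketch follows, with the per-image probability threshold of 0.5 being an assumption.

```python
from typing import List

def patient_level_diagnosis(image_probs: List[float], thr: float = 0.5) -> bool:
    """Patient-based rule from the paper: diagnose B-4 gastric cancer when
    more than 50% of a patient's images are classified as B-4."""
    calls = [p >= thr for p in image_probs]          # per-image decisions
    return sum(calls) / len(calls) > 0.5             # majority of frames

# Toy usage: 6 frames, 4 flagged as B-4 -> patient classified as B-4
print(patient_level_diagnosis([0.9, 0.8, 0.7, 0.6, 0.2, 0.1]))
```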
RESULTS: The accuracy of the patient-based diagnosis was highest in the internal image test set, with accuracy, sensitivity, and specificity of 93.22%, 92.86%, and 93.39%, respectively. The accuracy of the model in the internal validation videos, the external validation images, and the external validation videos was 91.03%, 91.86%, and 86.71%, respectively. Notably, in both the internal and external video sets, the AI model demonstrated 100% sensitivity for diagnosing patients who had B-4 gastric cancer.
CONCLUSIONS: An innovative AI-based model was developed to identify B-4 gastric cancer using endoscopic images. This AI model is specialized for the highly sensitive detection of rare B-4 gastric cancer and is expected to assist clinicians in real-time endoscopy.
PMID:39955610 | DOI:10.1002/cncr.35768
Development of a diagnostic classification model for lateral cephalograms based on multitask learning
BMC Oral Health. 2025 Feb 15;25(1):246. doi: 10.1186/s12903-025-05588-0.
ABSTRACT
OBJECTIVES: This study aimed to develop a cephalometric classification method based on multitask learning for eight diagnostic classifications.
METHODS: This study was retrospective. A total of 3,310 lateral cephalograms were collected to construct a dataset. Eight clinical classifications were employed, including sagittal and vertical skeletal facial patterns, maxillary and mandibular anteroposterior positions, and the inclinations and anteroposterior positions of the upper and lower incisors. The images were manually annotated for initial classification, which was verified by senior orthodontists. The data were randomly divided into training, validation, and test sets at a ratio of approximately 8:1:1. The multitask learning classification model was constructed based on the ResNeXt50_32x4d network and consisted of shared layers and task-specific layers. The performance of the model was evaluated using classification accuracy, precision, sensitivity, specificity, and area under the curve (AUC).
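A minimal PyTorch sketch of the described shared-trunk/multi-head design follows; the per-task class counts and head sizes are assumptions, since the abstract lists the eight diagnostic items but not their class structures.

```python
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d

class CephMultiTask(nn.Module):
    """Sketch: shared ResNeXt50_32x4d trunk with one classification head
    per diagnostic item; class counts per task are illustrative."""
    def __init__(self, n_classes_per_task=(3, 3, 3, 3, 3, 3, 3, 3)):
        super().__init__()
        trunk = resnext50_32x4d(weights=None)
        feat_dim = trunk.fc.in_features           # 2048
        trunk.fc = nn.Identity()                  # keep only the shared layers
        self.trunk = trunk
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in n_classes_per_task])

    def forward(self, x):                         # x: (B, 3, H, W)
        h = self.trunk(x)
        return [head(h) for head in self.heads]   # eight sets of logits

# Training would sum one cross-entropy loss per task over these outputs.
```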
RESULTS: The model performed all eight clinical diagnostic classifications on a cephalogram within an average of 0.0096 s. Accuracy was 0.8-0.9 for six of the classifications and 0.75-0.8 for the remaining two. The overall AUC values for each classification exceeded 0.9.
CONCLUSIONS: An automatic diagnostic classification model for lateral cephalograms was established based on multitask learning to achieve simultaneous classification of eight common clinical diagnostic items. The multitask learning model achieved better classification performance and reduced the computational costs, providing a novel perspective and reference for addressing such problems.
PMID:39955570 | DOI:10.1186/s12903-025-05588-0
Machine learning via DARTS-optimized MobileViT models for pancreatic cancer diagnosis with graph-based deep learning
BMC Med Inform Decis Mak. 2025 Feb 15;25(1):81. doi: 10.1186/s12911-025-02923-x.
ABSTRACT
The diagnosis of pancreatic cancer presents a significant challenge due to the asymptomatic nature of the disease and the fact that it is frequently detected at an advanced stage. This study presents a novel approach combining graph-based data representation with DARTS-optimised MobileViT models, with the objective of enhancing diagnostic accuracy and reliability. Pancreatic CT images were transformed into graph structures using the Harris Corner Detection algorithm, which enables the capture of complex spatial relationships. The graph representations were then processed using MobileViT models optimised with Differentiable Architecture Search (DARTS), enabling dynamic architectural adaptation. To further enhance classification accuracy, advanced machine learning algorithms, including K-Nearest Neighbours (KNN), Support Vector Machines (SVM), Random Forest (RF), and XGBoost, were applied. The MobileViTv2_150 and MobileViTv2_200 models demonstrated remarkable performance, with an accuracy of 97.33% and an F1 score of 96.25%, surpassing traditional CNN and Vision Transformer models. This innovative integration of graph-based deep learning and machine learning techniques demonstrates the potential of the proposed method to establish a new standard for early pancreatic cancer diagnosis. Furthermore, the study highlights the scalability of this approach for broader applications in medical imaging, which could lead to improved patient outcomes.
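The abstract names Harris corner detection as the basis for the graph construction but not the exact graph topology; the sketch below illustrates one plausible reading (corners as nodes, k-nearest-neighbour edges), with the response threshold and k as assumptions.

```python
import cv2
import numpy as np
from scipy.spatial import cKDTree

def image_to_graph(gray, k=5, thr_ratio=0.01):
    """Sketch of a graph construction step: detect Harris corners, keep
    strong responses as nodes, and connect each node to its k nearest
    neighbours to capture spatial relationships."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(resp > thr_ratio * resp.max())
    nodes = np.stack([xs, ys], axis=1)                 # (N, 2) corner coords
    _, idx = cKDTree(nodes).query(nodes, k=k + 1)      # self + k neighbours
    edges = [(i, j) for i, nbrs in enumerate(idx) for j in nbrs[1:]]
    return nodes, edges

gray = np.random.randint(0, 255, (224, 224), dtype=np.uint8)  # stand-in CT slice
nodes, edges = image_to_graph(gray)
```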
PMID:39955532 | DOI:10.1186/s12911-025-02923-x
Breaking barriers: noninvasive AI model for BRAFV600E mutation identification
Int J Comput Assist Radiol Surg. 2025 Feb 15. doi: 10.1007/s11548-024-03290-0. Online ahead of print.
ABSTRACT
OBJECTIVE: BRAFV600E is the most common mutation found in thyroid cancer and is particularly associated with papillary thyroid carcinoma (PTC). Currently, genetic mutation detection relies on invasive procedures. This study aimed to extract radiomic features and utilize deep transfer learning (DTL) from ultrasound images to develop a noninvasive artificial intelligence model for identifying BRAFV600E mutations.
MATERIALS AND METHODS: Regions of interest (ROI) were manually annotated in the ultrasound images, and radiomic and DTL features were extracted. These were combined in a joint DTL-radiomics (DTLR) model. Fourteen DTL models were employed, and feature selection was performed using LASSO regression. Eight machine learning methods were used to construct predictive models. Model performance was evaluated primarily using the area under the curve (AUC), accuracy, sensitivity, and specificity. The interpretability of the model was visualized using gradient-weighted class activation maps (Grad-CAM).
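The LASSO selection step over the concatenated radiomic + DTL features can be sketched with scikit-learn as below; the synthetic feature matrix and the cross-validated regularization are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for concatenated radiomic + deep transfer learning
# (DTL) features per nodule; y is BRAFV600E mutation status.
X, y = make_classification(n_samples=300, n_features=500, n_informative=20,
                           random_state=0)
Xz = StandardScaler().fit_transform(X)

# LASSO drives most coefficients to zero; the survivors form the feature
# signature fed to the downstream machine learning classifiers.
lasso = LassoCV(cv=5, random_state=0).fit(Xz, y)
selected = np.flatnonzero(lasso.coef_)
print(f"kept {selected.size} of {X.shape[1]} features")
```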
RESULTS: Sole reliance on radiomics had limited capability for identifying BRAFV600E mutations, but the optimal DTLR model, built on ResNet152, identified them effectively. In the validation set, the AUC, accuracy, sensitivity, and specificity were 0.833, 80.6%, 76.2%, and 81.7%, respectively. The AUC of the DTLR model was higher than those of the DTL-only and radiomics-only models. Visualization of the ResNet152-based DTLR model revealed its ability to capture and learn ultrasound image features related to BRAFV600E mutations.
CONCLUSION: The ResNet152-based DTLR model demonstrated significant value in identifying BRAFV600E mutations in patients with PTC using ultrasound images. Grad-CAM has the potential to objectively stratify BRAF mutations visually. The findings of this study require further collaboration among more centers and the inclusion of additional data for validation.
PMID:39955452 | DOI:10.1007/s11548-024-03290-0
Self-supervised artificial intelligence predicts poor outcome from primary cutaneous squamous cell carcinoma at diagnosis
NPJ Digit Med. 2025 Feb 15;8(1):105. doi: 10.1038/s41746-025-01496-3.
ABSTRACT
Primary cutaneous squamous cell carcinoma (cSCC) is responsible for ~10,000 deaths annually in the United States. Stratification of the risk of poor outcome at initial biopsy would significantly impact clinical decision-making during the initial postoperative period, when intervention has been shown to be most effective. Using whole-slide images (WSI) from 163 patients from 3 institutions, we developed a self-supervised deep learning model to predict poor outcomes in cSCC patients from histopathological features at initial diagnosis, and validated it using WSI from 563 patients collected from two other academic institutions. For disease-free survival prediction, the model attained a concordance index of 0.73 in the development cohort and 0.84 in the Mayo cohort. The model's interpretability revealed that features like poor differentiation and deep invasion were strongly associated with poor prognosis. Furthermore, the model effectively stratifies risk within BWH T2a and AJCC T2 tumors, stages known for outcome heterogeneity.
PMID:39955424 | DOI:10.1038/s41746-025-01496-3
An explainable and accurate transformer-based deep learning model for wheeze classification utilizing real-world pediatric data
Sci Rep. 2025 Feb 15;15(1):5656. doi: 10.1038/s41598-025-89533-9.
ABSTRACT
Auscultation is a method of listening to sounds from the patient's body, mainly using a stethoscope, to diagnose diseases. The stethoscope allows for non-invasive, real-time diagnosis, and it is ideal for diagnosing respiratory diseases and for first aid. However, accurate interpretation of respiratory sounds using a stethoscope is a subjective process that requires considerable expertise from clinicians. To overcome the shortcomings of existing stethoscopes, research is actively being conducted to develop artificial intelligence deep learning models that can interpret breath sounds recorded through electronic stethoscopes. Most recent studies in this area have focused on CNN-based respiratory sound classification models. However, such CNN models are limited in their ability to accurately interpret sounds that require longer overall context. Therefore, in the present work, we apply the Transformer-based Audio Spectrogram Transformer (AST) model to our actual clinical practice data. This prospective study targeted children who visited the pediatric departments of two university hospitals in South Korea from 2019 to 2020. A pediatric pulmonologist recorded breath sounds, and a pediatric breath sound dataset was constructed through double-blind verification. We then developed a deep learning model that applied the pre-trained weights of the AST model to our data, comprising a total of 194 wheezes and 531 other respiratory sounds. We compared the performance of the proposed model with that of a previously published CNN-based model and also conducted performance tests using previous datasets. To ensure the reliability of the proposed model, we visualized the classification process using Score-Class Activation Mapping (Score-CAM). Our model achieved an accuracy of 91.1%, area under the curve (AUC) of 86.6%, precision of 88.2%, recall of 76.9%, and F1-score of 82.2%. Ultimately, the proposed Transformer-based model showed high accuracy in wheeze detection, and its decision-making process was verified to be reliable. The artificial intelligence deep learning model developed and described in this study is expected to help accurately diagnose pediatric respiratory diseases in real-world clinical practice.
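Applying pre-trained AST weights to a binary wheeze task can be sketched with the Hugging Face transformers API as below; the checkpoint name is one public AST example, not necessarily the one the authors used, and the two-label head is re-initialized.

```python
import torch
from transformers import ASTFeatureExtractor, ASTForAudioClassification

# Hedged sketch: fine-tune a pretrained Audio Spectrogram Transformer
# for two classes (wheeze vs. other respiratory sound).
ckpt = "MIT/ast-finetuned-audioset-10-10-0.4593"
extractor = ASTFeatureExtractor.from_pretrained(ckpt)
model = ASTForAudioClassification.from_pretrained(
    ckpt, num_labels=2, ignore_mismatched_sizes=True)

waveform = torch.randn(16000 * 5).numpy()        # 5 s of mono audio at 16 kHz
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
logits = model(**inputs).logits                  # (1, 2): wheeze vs. other
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([1]))
loss.backward()                                  # one fine-tuning step
```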
PMID:39955399 | DOI:10.1038/s41598-025-89533-9
Classification patterns identification of immunogenic cell death-related genes in heart failure based on deep learning
Sci Rep. 2025 Feb 15;15(1):5633. doi: 10.1038/s41598-025-89333-1.
ABSTRACT
Heart failure (HF) is a complex and prevalent condition, particularly in the elderly, presenting symptoms like chest tightness, shortness of breath, and dyspnea. The study aimed to improve the classification of HF subtypes and identify potential drug targets by exploring the role of Immunogenic Cell Death (ICD), a process known for its role in tumor immunity but underexplored in HF research. Additionally, the study sought to apply deep learning models to enhance HF classification and identify diagnosis-related genes. Various deep learning encoder models were employed to evaluate their effectiveness in clustering HF based on ICD-related genes. Identified HF subtypes were further refined using differentially expressed genes, allowing for the assessment of immune infiltration and functional enrichment. Advanced machine learning techniques were used to identify diagnosis-related genes, and these genes were used to construct nomogram models. The study also explored gene interactions with miRNA and transcription factors. Distinct HF subtypes were identified through clustering based on ICD-related genes. Differentially expressed genes revealed significant variations in immune infiltration and functional enrichment across these subtypes. The diagnostic model showed excellent performance, with an AUC exceeding 0.99 in both internal and external test sets. Diagnosis-related genes were also identified, serving as the foundation for nomogram models and further exploration of their regulatory interactions. This study provides a novel insight into HF by combining the exploration of ICD, the application of deep learning models, and the identification of diagnosis-related genes. These findings contribute to a deeper understanding of HF subtypes and highlight potential therapeutic targets for improving HF classification and treatment.
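The abstract does not name the encoder architectures, so the sketch below shows one generic reading of "deep learning encoder models for clustering": an autoencoder over ICD-related gene expression, with K-means applied in the latent space; the dimensions and cluster count are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ExprAutoencoder(nn.Module):
    """Sketch: encode ICD-related gene expression to a low-dimensional
    latent space, then cluster patients in that space to define subtypes."""
    def __init__(self, n_genes, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, 128), nn.ReLU(),
                                 nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_genes))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

x = torch.randn(200, 50)                 # 200 samples x 50 ICD genes (toy)
model = ExprAutoencoder(50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                     # reconstruction training
    opt.zero_grad()
    recon, _ = model(x)
    nn.functional.mse_loss(recon, x).backward()
    opt.step()

with torch.no_grad():
    _, z = model(x)
subtypes = KMeans(n_clusters=2, n_init=10).fit_predict(z.numpy())
```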
PMID:39955386 | DOI:10.1038/s41598-025-89333-1
Exploration of contemporary modernization in UWSNs in the context of localization including opportunities for future research in machine learning and deep learning
Sci Rep. 2025 Feb 15;15(1):5672. doi: 10.1038/s41598-025-89916-y.
ABSTRACT
The exchange of information in Wireless Sensor Networks (WSNs) across different environments, whether above ground, underground, underwater, or in space, has advanced significantly over time. Among these advancements, precise localization of nodes within the network remains a key and vital challenge. In Underwater Wireless Sensor Networks (UWSNs), localization plays a pivotal role in enabling the efficient execution of diverse underwater applications such as environmental monitoring, disaster management, and military surveillance. This review article focuses on three primary aspects. The first section covers the fundamentals of localization in UWSNs, providing an in-depth discussion of various localization methods; we highlight the two main categories, anchor-based and anchor-free localization, along with their respective subcategories. The second section examines the challenges that may emerge during the implementation of the localization process; these challenges are analyzed and categorized into three main groups: (i) algorithmic challenges, (ii) technical challenges, and (iii) environmental challenges. The third section presents the latest advancements in UWSN localization and then explores how Machine Learning (ML) and Deep Learning (DL) models can contribute to enhancing the localization process. To evaluate the potential benefits of ML and DL techniques, we assessed their performance through simulations, focusing on metrics such as localization error, velocity estimation error, Root Mean Square Error (RMSE), and energy consumption. This review also aims to provide actionable insights and guidelines for future research directions and opportunities for practitioners in the field of UWSN localization, ultimately helping to enhance the performance and reliability of underwater applications by advancing localization techniques and promoting seamless integration.
PMID:39955359 | DOI:10.1038/s41598-025-89916-y
Unified total body CT image with multiple organ specific windowings: validating improved diagnostic accuracy and speed in trauma cases
Sci Rep. 2025 Feb 15;15(1):5654. doi: 10.1038/s41598-024-83346-y.
ABSTRACT
Total-body CT scans are useful in saving trauma patients; however, interpreting numerous images with varied window settings slows injury detection. We developed an algorithm for a "unified total-body CT image with multiple organ-specific windowings (Uni-CT)" and assessed its impact on physicians' accuracy and speed in trauma CT interpretation. From November 7, 2008, to June 19, 2020, 40 cases of total-body CT images for blunt trauma with multiple injuries were collected from the emergency department of Osaka General Medical Center and randomly divided into two groups. In half of the cases, the Uni-CT algorithm, using semantic segmentation, assigned visibility-friendly window settings to each organ. Four physicians with varying levels of experience interpreted 20 cases using the algorithm and 20 cases in conventional settings. Performance was analyzed based on the accuracy, sensitivity, and specificity of the target findings, and on diagnostic speed. In the proposed-algorithm and conventional groups, patients had an average of 2.6 and 2.5 target findings, mean ages of 51.8 and 57.7 years, and male proportions of 60% and 45%, respectively. The agreement rate for physicians' diagnoses was κ = 0.70. Average accuracy, sensitivity, and specificity of target findings were 84.8%, 74.3%, and 96.9% versus 85.5%, 81.2%, and 91.5%, respectively, with no significant differences. Diagnostic speed per case averaged 71.9 and 110.4 s in the two groups (p < 0.05). The Uni-CT algorithm improved the diagnostic speed of total-body CT for trauma while maintaining accuracy comparable to that of conventional settings.
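One plausible reading of the Uni-CT composition step (an assumption; the paper's window settings and organ list are not given in the abstract) is per-organ Hounsfield-unit windowing inside segmentation masks, merged into a single image:

```python
import numpy as np

def apply_window(hu, center, width):
    """Map Hounsfield units to [0, 255] under a given window."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo) * 255

def unified_windowing(hu, organ_labels, windows):
    """Sketch of the Uni-CT idea: apply each organ's preferred window
    inside its segmentation mask, producing one composite image.
    Window values below are common defaults, not the paper's settings."""
    out = np.zeros_like(hu, dtype=np.float32)
    for organ, (center, width) in windows.items():
        mask = organ_labels == organ
        out[mask] = apply_window(hu[mask], center, width)
    return out.astype(np.uint8)

windows = {1: (40, 80),      # brain
           2: (60, 400),     # abdomen / soft tissue
           3: (-600, 1500),  # lung
           4: (400, 1800)}   # bone

# Toy usage with random HU values and a random organ-label map
hu = np.random.randint(-1000, 1500, size=(256, 256))
labels = np.random.randint(1, 5, size=(256, 256))
composite = unified_windowing(hu, labels, windows)
```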
PMID:39955327 | DOI:10.1038/s41598-024-83346-y
Deep learning-based organ-wise dosimetry of (64)Cu-DOTA-rituximab through only one scanning
Sci Rep. 2025 Feb 15;15(1):5627. doi: 10.1038/s41598-025-88498-z.
ABSTRACT
This study aimed to generate a delayed 64Cu-DOTA-rituximab positron emission tomography (PET) image from its early-scanned image by deep learning, to mitigate the inconvenience and cost of estimating absorbed radiopharmaceutical doses. We acquired PET images from six patients with malignancies at 1, 24, and 48 h post-injection (p.i.) of 8 mCi 64Cu-DOTA-rituximab to fit a time-activity curve for dosimetry. We used a paired image-to-image translation (I2I) model based on a generative adversarial network to generate delayed images from early PET images. The image similarity function between the generated image and its ground truth was determined by comparing L1 and perceptual losses. We also applied organ-wise dosimetry to the acquired and generated images using OLINDA/EXM. The quality of the generated images was good, even for tumors, when the L1 loss function was used as an additional loss alongside the adversarial loss. The organ-wise cumulative uptake and corresponding equivalent dose were estimated. Although the absorbed dose in some organs was accurately measured, predictions for organs associated with body clearance were relatively inaccurate. These results suggest that paired I2I can be used to alleviate burdensome dosimetry for radioimmunoconjugates.
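The reported combination of an adversarial objective with an L1 similarity term matches the classic pix2pix-style paired I2I loss; a minimal sketch follows, with the weighting factor being an assumption.

```python
import torch
import torch.nn as nn

# Sketch of the paired I2I generator objective: adversarial loss plus a
# weighted L1 term between the generated delayed PET and the ground truth,
# the combination the paper reports gave good image quality.
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0                       # pix2pix-style weight (assumption)

def generator_loss(disc_fake_logits, fake_img, real_img):
    adv = bce(disc_fake_logits, torch.ones_like(disc_fake_logits))
    return adv + lambda_l1 * l1(fake_img, real_img)

# Toy usage with stand-in tensors
fake, real = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
d_logits = torch.randn(1, 1)            # discriminator output on the fake
loss = generator_loss(d_logits, fake, real)
```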
PMID:39955298 | DOI:10.1038/s41598-025-88498-z