Deep learning
Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning
Med Image Comput Comput Assist Interv. 2024 Oct;15004:102-112. doi: 10.1007/978-3-031-72083-3_10. Epub 2024 Oct 14.
ABSTRACT
Whole slide image (WSI) classification plays a crucial role in digital pathology data analysis. However, the immense size of WSIs and the absence of fine-grained sub-region labels pose significant challenges for accurate WSI classification. Typical classification-driven deep learning methods often struggle to generate informative image representations, which can compromise the robustness of WSI classification. In this study, we address this challenge by combining discriminative and contrastive learning for WSI classification. Unlike existing contrastive learning methods for WSI classification, which primarily rely on pseudo labels assigned to patches from the WSI-level labels, our approach directly constructs positive and negative samples at the WSI level. Specifically, we select a subset of representative image patches to represent each WSI and create WSI-level positive and negative samples, facilitating effective learning of informative image features. Experimental results on two datasets and ablation studies demonstrate that our method significantly improved WSI classification performance compared to state-of-the-art deep learning methods and enabled learning of informative features that promote the robustness of WSI classification.
PMID:40046787 | PMC:PMC11877581 | DOI:10.1007/978-3-031-72083-3_10
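The abstract does not specify the contrastive objective used for the WSI-level positive and negative samples; a minimal InfoNCE-style sketch under that assumption, with slides represented by pooled patch features (all names and parameters hypothetical):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one WSI-level anchor embedding.

    anchor, positive: 1-D feature vectors for two views of the same slide.
    negatives: 2-D array, one row per negative (other-slide) embedding.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    # Cross-entropy with the positive pair as the target class.
    return -np.log(probs[0])

# A slide represented by the mean of its selected patch features.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.01 * rng.normal(size=8)   # augmented view, very similar
negatives = rng.normal(size=(4, 8))             # other slides
print(round(float(info_nce_loss(anchor, positive, negatives)), 3))
```

The loss is small when the anchor is closer to its positive than to all negatives, which is the behavior any WSI-level contrastive scheme of this kind relies on.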
Next-generation approach to skin disorder prediction employing hybrid deep transfer learning
Front Big Data. 2025 Feb 19;8:1503883. doi: 10.3389/fdata.2025.1503883. eCollection 2025.
ABSTRACT
INTRODUCTION: Skin diseases significantly impact individuals' health and mental wellbeing. However, their classification remains challenging due to complex lesion characteristics, overlapping symptoms, and limited annotated datasets. Traditional convolutional neural networks (CNNs) often struggle with generalization, leading to suboptimal classification performance. To address these challenges, this study proposes a Hybrid Deep Transfer Learning Method (HDTLM) that integrates DenseNet121 and EfficientNetB0 for improved skin disease prediction.
METHODS: The proposed hybrid model leverages DenseNet121's dense connectivity for capturing intricate patterns and EfficientNetB0's computational efficiency and scalability. A dataset comprising 19 skin conditions with 19,171 images was used for training and validation. The model was evaluated using multiple performance metrics, including accuracy, precision, recall, and F1-score. Additionally, a comparative analysis was conducted against state-of-the-art models such as DenseNet121, EfficientNetB0, VGG19, MobileNetV2, and AlexNet.
RESULTS: The proposed HDTLM achieved a training accuracy of 98.18% and a validation accuracy of 97.57%. It consistently outperformed baseline models, achieving a precision of 0.95, recall of 0.96, F1-score of 0.95, and an overall accuracy of 98.18%. The results demonstrate the hybrid model's superior ability to generalize across diverse skin disease categories.
DISCUSSION: The findings underscore the effectiveness of the HDTLM in enhancing skin disease classification, particularly in scenarios with significant domain shifts and limited labeled data. By integrating complementary strengths of DenseNet121 and EfficientNetB0, the proposed model provides a robust and scalable solution for automated dermatological diagnostics.
PMID:40046767 | PMC:PMC11879938 | DOI:10.3389/fdata.2025.1503883
Leveraging automated time-lapse microscopy coupled with deep learning to automate colony forming assay
Front Oncol. 2025 Feb 19;15:1520972. doi: 10.3389/fonc.2025.1520972. eCollection 2025.
ABSTRACT
INTRODUCTION: The colony forming assay (CFA) stands as a cornerstone technique for evaluating the clonal expansion ability of single cancer cells and is crucial for assessing drug efficacy. However, traditional CFAs rely on labor-intensive, endpoint manual counting, offering limited insights into the dynamic effects of treatment. To overcome these limitations, we developed an Artificial Intelligence (AI)-assisted automated CFA combining time-lapse microscopy for real-time tracking of colony formation.
METHODS: B-acute lymphoblastic leukemia (B-ALL) cells from an E2A-PBX1 mouse model were cultured in a collagen-based 3D matrix with cytokines under static conditions in a low-volume (60 µl) culture vessel, and this format was validated as comparable to methylcellulose-based media. No significant differences in final colony count or plating efficiency were observed. Our automated platform uses a deep learning and multi-object tracking approach for colony counting. Brightfield images were used to train a YOLOv8 object detection network, which achieved a mAP50 score of 86% for identifying single cells, clusters, and colonies, and 97% accuracy for Z-stack colony identification with a multi-object tracking algorithm. The detection model accurately identified the majority of objects in the dataset.
RESULTS: This AI-assisted CFA was successfully applied for density optimization, enabling the determination of seeding densities that maximize plating efficiency (PE), and for IC50 determination, offering an efficient, less labor-intensive method for testing drug concentrations. In conclusion, our novel AI-assisted automated colony counting platform enables automated, high-throughput analysis of colony dynamics, significantly reducing labor and increasing accuracy. Furthermore, it allows detailed, long-term studies of cell-cell interactions and treatment responses using live-cell imaging and AI-assisted cell tracking.
DISCUSSION: Future integration with a perfusion-based drug screening system promises to enhance personalized cancer therapy by optimizing broad drug screening approaches and enabling real-time evaluation of therapeutic efficacy.
PMID:40046624 | PMC:PMC11879803 | DOI:10.3389/fonc.2025.1520972
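The mAP50 score reported for the YOLOv8 colony detector counts a prediction as correct when its intersection-over-union (IoU) with an annotated box is at least 0.5; a minimal sketch of that IoU computation (box format and values are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Half of each box overlaps, but IoU = 50 / 150 = 1/3 < 0.5,
# so this prediction would NOT count as a hit at mAP50.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

mAP50 then averages precision over recall levels using this 0.5-IoU matching rule per class.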
Deep learning combining imaging, dose and clinical data for predicting bowel toxicity after pelvic radiotherapy
Phys Imaging Radiat Oncol. 2025 Feb 1;33:100710. doi: 10.1016/j.phro.2025.100710. eCollection 2025 Jan.
ABSTRACT
BACKGROUND AND PURPOSE: A comprehensive understanding of radiotherapy toxicity requires analysis of multimodal data. However, it is challenging to develop a model that can analyse both 3D imaging and clinical data simultaneously. In this study, a deep learning model is proposed for simultaneously analysing computed tomography scans, dose distributions, and clinical metadata to predict toxicity, and identify the impact of clinical risk factors and anatomical regions.
MATERIALS AND METHODS: A deep model based on multiple instance learning with feature-level fusion and attention was developed. The study used a dataset of 313 patients treated with 3D conformal radiation therapy and volumetric modulated arc therapy, with heterogeneous cohorts varying in dose, volume, fractionation, concomitant therapies, and follow-up periods. The dataset included 3D computed tomography scans, planned dose distributions to the bowel cavity, and patient clinical data. The model was trained on patient-reported data on late bowel toxicity.
RESULTS: The network identified potential risk factors and critical anatomical regions. Jointly analysing clinical data with imaging and dose improved performance for bowel urgency and faecal incontinence (area under the receiver operating characteristic curve [AUC] of 88% and 78%, respectively), while the best performance for diarrhoea was achieved when analysing clinical features alone (68% AUC).
CONCLUSIONS: Results demonstrated that feature-level fusion along with attention enables the network to analyse multimodal data. This method also provides explanations for each input's contribution to the final result and detects spatial associations of toxicity.
PMID:40046574 | PMC:PMC11880715 | DOI:10.1016/j.phro.2025.100710
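The abstract names multiple instance learning with attention but not the exact layer; a toy sketch in the style of attention-based MIL pooling (Ilse et al.), where each "instance" could be a CT/dose sub-volume and all parameter shapes are hypothetical:

```python
import numpy as np

def attention_mil_pool(instances, w, v):
    """Attention-based MIL pooling: score each instance, softmax, weighted sum.

    instances: (n_instances, d) feature rows.
    v: (d, d_hidden) projection; w: (d_hidden,) scoring vector (toy parameters).
    """
    scores = np.tanh(instances @ v) @ w            # one scalar per instance
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    bag = alpha @ instances                        # weighted bag embedding
    return bag, alpha

rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 4))                    # 5 instances, 4-dim features
bag, alpha = attention_mil_pool(feats, rng.normal(size=3), rng.normal(size=(4, 3)))
print(bag.shape, round(float(alpha.sum()), 6))
```

The attention weights double as a per-instance explanation, which is how such models can point at anatomical regions that drive a toxicity prediction.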
The prognostic value of pathologic lymph node imaging using deep learning-based outcome prediction in oropharyngeal cancer patients
Phys Imaging Radiat Oncol. 2025 Feb 14;33:100733. doi: 10.1016/j.phro.2025.100733. eCollection 2025 Jan.
ABSTRACT
BACKGROUND AND PURPOSE: Deep learning (DL) models can extract prognostic image features from pre-treatment PET/CT scans. The study objective was to explore the potential benefits of incorporating pathologic lymph node (PL) spatial information in addition to that of the primary tumor (PT) in DL-based models for predicting local control (LC), regional control (RC), distant-metastasis-free survival (DMFS), and overall survival (OS) in oropharyngeal cancer (OPC) patients.
MATERIALS AND METHODS: The study included 409 OPC patients treated with definitive (chemo)radiotherapy between 2010 and 2022. Patient data, including PET/CT scans, manually contoured PT (GTVp) and PL (GTVln) structures, clinical variables, and endpoints, were collected. Firstly, a DL-based method was employed to segment tumours in PET/CT, resulting in predicted probability maps for PT (TPMp) and PL (TPMln). Secondly, different combinations of CT, PET, manual contours and probability maps from 300 patients were used to train DL-based outcome prediction models for each endpoint through 5-fold cross validation. Model performance, assessed by concordance index (C-index), was evaluated using a test set of 100 patients.
RESULTS: Including PL improved the C-index results for all endpoints except LC. For LC, comparable C-indices (around 0.66) were observed between models trained using only PT and those incorporating PL as an additional structure. Models trained using PT and PL combined into a single structure achieved the highest C-indices of 0.65 and 0.80 for RC and DMFS prediction, respectively. Models trained using these target structures as separate entities achieved the highest C-index of 0.70 for OS.
CONCLUSION: Incorporating lymph node spatial information improved the prediction performance for RC, DMFS, and OS.
PMID:40046573 | PMC:PMC11880716 | DOI:10.1016/j.phro.2025.100733
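The concordance index (C-index) used to score these outcome models is the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed event times; a minimal Harrell-style sketch with toy data:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    times: observed times; events: 1 if the event occurred, 0 if censored;
    risk_scores: higher score should mean earlier event. Ties count 0.5.
    """
    concordant, ties, comparable = 0.0, 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i had the event before j's observed time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 0.5
    return (concordant + ties) / comparable

# Risks perfectly ordered against event times -> C-index of 1.0
print(concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))
```

A C-index of 0.5 corresponds to random ordering, which is why values around 0.66-0.80 in the abstract indicate modest-to-good discrimination.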
Improvement in positional accuracy of neural-network predicted hydration sites of proteins by incorporating atomic details of water-protein interactions and site-searching algorithm
Biophys Physicobiol. 2025 Jan 30;22(1):e220004. doi: 10.2142/biophysico.bppb-v22.0004. eCollection 2025.
ABSTRACT
Visualization of hydration structures over the entire protein surface is necessary to understand why the aqueous environment is essential for protein folding and function. However, this remains difficult to achieve experimentally. Recently, we developed a convolutional neural network (CNN) to predict the probability distribution of hydration water molecules over protein surfaces and in protein cavities. The network was optimized using solely the distribution patterns of protein atoms surrounding each hydration water molecule in high-resolution X-ray crystal structures and successfully provided probability distributions of hydration water molecules. Despite the effectiveness of these distributions, the hydration sites predicted from their local maxima remained insufficiently accurate in reproducing the hydration sites in the crystal structure models. In this work, we modified the network by subdividing the atomic classes based on the electronic properties of the atoms composing amino acids. In addition, the exclusion volumes of each protein atom and hydration water molecule were taken into account when predicting hydration sites from the probability distribution. This information on the chemical properties of atoms led to an improvement in positional prediction accuracy. We selected the best CNN from 47 CNNs constructed by systematically varying the number of channels and layers. Here, we report the improvements in prediction accuracy achieved by the reorganized CNN, together with details of the architecture, training data, and peak-search algorithm.
PMID:40046557 | PMC:PMC11876803 | DOI:10.2142/biophysico.bppb-v22.0004
Retraction: Risk management system and intelligent decision-making for prefabricated building project under deep learning modified teaching-learning-based optimization
PLoS One. 2025 Mar 5;20(3):e0319589. doi: 10.1371/journal.pone.0319589. eCollection 2025.
NO ABSTRACT
PMID:40043015 | DOI:10.1371/journal.pone.0319589
Deep Learning Enhanced Near Infrared-II Imaging and Image-Guided Small Interfering Ribonucleic Acid Therapy of Ischemic Stroke
ACS Nano. 2025 Mar 5. doi: 10.1021/acsnano.4c18035. Online ahead of print.
ABSTRACT
Small interfering RNA (siRNA) targeting the NOD-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome has emerged as a promising therapeutic strategy to mitigate infarct volume and brain injury following ischemic stroke. However, the clinical translation of siRNA-based therapies is significantly hampered by the formidable blood-brain barrier (BBB), which restricts drug penetration into the central nervous system. To address this challenge, we have developed an innovative long-circulating near-infrared II (NIR-II) nanoparticle platform YWFC NPs, which is meticulously engineered to enhance BBB transcytosis and enable efficient delivery of siRNA targeting NLRP3 (siNLRP3@YWFC NPs) in preclinical models of ischemic stroke. Furthermore, we integrated advanced deep learning neural network algorithms to optimize in vivo NIR-II imaging of the cerebral infarct penumbra, achieving an improved signal-to-background ratio at 72 h poststroke. In vivo studies employing middle cerebral artery occlusion (MCAO) mouse models demonstrated that image-guided therapy with siNLRP3@YWFC NPs, guided by prolonged NIR-II imaging, resulted in significant therapeutic benefits.
PMID:40042964 | DOI:10.1021/acsnano.4c18035
On the Upper Bounds of Number of Linear Regions and Generalization Error of Deep Convolutional Neural Networks
IEEE Trans Pattern Anal Mach Intell. 2025 Mar 5;PP. doi: 10.1109/TPAMI.2025.3548620. Online ahead of print.
ABSTRACT
Understanding how the hyperparameters of the network structure affect the performance of Convolutional Neural Networks (CNNs) remains one of the most fundamental and urgent issues in deep learning, and we address this issue based on the piecewise linear (PWL) nature of CNNs in this paper. Firstly, the convolution, ReLU, and max-pooling operations in a CNN are represented as the multiplication of multiple matrices for a fixed sample in order to obtain an algebraic expression of CNNs; this expression clearly shows that CNNs are PWL functions. Although such a representation has high time complexity, it provides a more convenient and intuitive way to study the mathematical properties of CNNs. Secondly, we develop a tight bound on the number of linear regions and upper bounds on the generalization error of CNNs, both taking into account factors such as the number of layers, the dimension of pooling, and the width of the network. These results provide possible guidance for designing and training CNNs.
PMID:40042958 | DOI:10.1109/TPAMI.2025.3548620
Deep Learning-Based Saturation Compensation for High Dynamic Range Multispectral Fluorescence Lifetime Imaging
IEEE Trans Biomed Eng. 2025 Mar 5;PP. doi: 10.1109/TBME.2025.3548297. Online ahead of print.
ABSTRACT
In multispectral fluorescence lifetime imaging (FLIm), achieving consistent imaging quality across all spectral channels is crucial for accurately identifying a wide range of fluorophores. However, these essential measurements are frequently compromised by saturation artifacts due to the inherently limited dynamic range of detection systems. To address this issue, we present SatCompFLImNet, a deep learning-based network specifically designed to correct saturation artifacts in multispectral FLIm, facilitating high dynamic range applications. Leveraging generative adversarial networks, SatCompFLImNet effectively compensates for saturated fluorescence signals, ensuring accurate lifetime measurements across various levels of saturation. Extensively validated with simulated and real-world data, SatCompFLImNet demonstrates remarkable capability in correcting saturation artifacts, improving signal-to-noise ratios, and maintaining fidelity of lifetime measurements. By enabling reliable fluorescence lifetime measurements under a variety of saturation conditions, SatCompFLImNet paves the way for improved diagnostic tools and a deeper understanding of biological processes, making it a pivotal advancement for research and clinical diagnostics in tissue characterization and disease pathogenesis.
PMID:40042955 | DOI:10.1109/TBME.2025.3548297
mmWave Radar for Sit-to-Stand Analysis: A Comparative Study with Wearables and Kinect
IEEE Trans Biomed Eng. 2025 Mar 5;PP. doi: 10.1109/TBME.2025.3548092. Online ahead of print.
ABSTRACT
This study investigates a novel approach for analyzing Sit-to-Stand (STS) movements using millimeter-wave (mmWave) radar technology, aiming to develop a non-contact, privacy-preserving, and all-day operational solution for healthcare applications. A 60 GHz mmWave radar system was employed to collect radar point cloud data from 45 participants performing STS motions. Using a deep learning-based pose estimation model and Inverse Kinematics (IK), we calculated joint angles, segmented STS motions, and extracted clinically relevant features for fall risk assessment. The extracted features were compared with those obtained from Kinect and wearable sensors. While Kinect provided a reference for motion capture, we acknowledge its limitations compared to the gold-standard VICON system, which is planned for future validation. The results demonstrated that mmWave radar effectively captures general motion patterns and large joint movements (e.g., trunk), though challenges remain for more fine-grained motion analysis. This study highlights the unique advantages and limitations of mmWave radar and other sensors, emphasizing the potential of integrated sensor technologies to enhance the accuracy and reliability of motion analysis in clinical and biomedical research. Future work will expand the scope to more complex movements and incorporate high-precision motion capture systems to further validate the findings.
PMID:40042953 | DOI:10.1109/TBME.2025.3548092
Counterfactual Bidirectional Co-Attention Transformer for Integrative Histology-Genomic Cancer Risk Stratification
IEEE J Biomed Health Inform. 2025 Mar 5;PP. doi: 10.1109/JBHI.2025.3548048. Online ahead of print.
ABSTRACT
Applying deep learning to predict patient prognostic survival outcomes using histological whole-slide images (WSIs) and genomic data is challenging due to the morphological and transcriptomic heterogeneity present in the tumor microenvironment. Existing deep learning-enabled methods often exhibit learning biases, primarily because the genomic knowledge used to guide directional feature extraction from WSIs may be irrelevant or incomplete. This results in a suboptimal and sometimes myopic understanding of the overall pathological landscape, potentially overlooking crucial histological insights. To tackle these challenges, we propose the CounterFactual Bidirectional Co-Attention Transformer framework. By integrating a bidirectional co-attention layer, our framework fosters effective feature interactions between the genomic and histology modalities and ensures consistent identification of prognostic features from WSIs. Using counterfactual reasoning, our model utilizes causality to model unimodal and multimodal knowledge for cancer risk stratification. This approach directly addresses and reduces bias, enables the exploration of 'what-if' scenarios, and offers a deeper understanding of how different features influence survival outcomes. Our framework, validated across eight diverse cancer benchmark datasets from The Cancer Genome Atlas (TCGA), represents a major improvement over current histology-genomic model learning methods. It shows an average 2.5% improvement in c-index performance over 18 state-of-the-art models in predicting patient prognoses across eight cancer types. Our code is released at https://github.com/BusyJzy599/CFBCT-main.
PMID:40042950 | DOI:10.1109/JBHI.2025.3548048
SecProGNN: Predicting Bronchoalveolar Lavage Fluid Secreted Protein Using Graph Neural Network
IEEE J Biomed Health Inform. 2025 Mar 5;PP. doi: 10.1109/JBHI.2025.3548263. Online ahead of print.
ABSTRACT
Bronchoalveolar lavage fluid (BALF) is a liquid obtained from the alveoli and bronchi, often used to study pulmonary diseases. So far, proteomic analyses have identified over three thousand proteins in BALF. However, the comprehensive characterization of these proteins remains challenging due to their complexity and technological limitations. This paper presents a novel deep learning framework called SecProGNN, designed to predict secretory proteins in BALF. Firstly, SecProGNN represents proteins as graph-structured data, with amino acids connected based on their interactions. Then, these graphs are processed through a graph neural network (GNN) to extract graph features. Finally, the extracted feature vectors are fed into a multi-layer perceptron (MLP) module to predict BALF-secreted proteins. Additionally, by utilizing SecProGNN, we investigated potential biomarkers for lung adenocarcinoma and identified 16 promising candidates that may be secreted into BALF.
PMID:40042949 | DOI:10.1109/JBHI.2025.3548263
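SecProGNN's exact architecture is not given in the abstract; a minimal sketch of one graph-convolution layer in the Kipf-Welling style, assuming a residue contact graph with one-hot amino-acid features (all shapes and values are toy examples):

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: normalized neighbor averaging + projection.

    adj: (n, n) adjacency matrix (e.g. a residue contact map),
    features: (n, d) node features, weight: (d, d_out) learnable projection.
    Self-loops are added so each node keeps its own state.
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU activation

# Toy protein graph: 4 residues connected along the chain 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)                                 # one-hot residue features
out = gcn_layer(adj, feats, np.ones((4, 2)))
print(out.shape)
```

Stacking a few such layers and mean-pooling the node features would yield the graph-level vector that an MLP head could then classify, mirroring the GNN-then-MLP pipeline the abstract describes.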
A deep learning framework for automated and generalized synaptic event analysis
Elife. 2025 Mar 5;13:RP98485. doi: 10.7554/eLife.98485.
ABSTRACT
Quantitative information about synaptic transmission is key to our understanding of neural function. Spontaneously occurring synaptic events carry fundamental information about synaptic function and plasticity. However, their stochastic nature and low signal-to-noise ratio present major challenges for the reliable and consistent analysis. Here, we introduce miniML, a supervised deep learning-based method for accurate classification and automated detection of spontaneous synaptic events. Comparative analysis using simulated ground-truth data shows that miniML outperforms existing event analysis methods in terms of both precision and recall. miniML enables precise detection and quantification of synaptic events in electrophysiological recordings. We demonstrate that the deep learning approach generalizes easily to diverse synaptic preparations, different electrophysiological and optical recording techniques, and across animal species. miniML provides not only a comprehensive and robust framework for automated, reliable, and standardized analysis of synaptic events, but also opens new avenues for high-throughput investigations of neural function and dysfunction.
PMID:40042890 | DOI:10.7554/eLife.98485
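The precision and recall figures for event detection methods like miniML are typically computed by matching detected event times to ground-truth times within a tolerance; a sketch of that evaluation step (the matching rule and tolerance here are assumptions, not miniML's documented procedure):

```python
def match_events(detected, ground_truth, tolerance=5.0):
    """Greedily match detected event times to ground-truth event times.

    A detection is a true positive if it lies within `tolerance` (e.g. ms)
    of a not-yet-matched ground-truth event. Returns (precision, recall).
    """
    unmatched = list(ground_truth)
    tp = 0
    for t in detected:
        hits = [g for g in unmatched if abs(g - t) <= tolerance]
        if hits:
            # Consume the closest ground-truth event so it can't match twice.
            unmatched.remove(min(hits, key=lambda g: abs(g - t)))
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# 3 of 4 detections land near a true event; 1 true event is missed.
print(match_events([10, 52, 98, 200], [11, 50, 100, 150]))
```

On the toy data, three detections match and one false positive plus one miss remain, giving precision and recall of 0.75 each.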
A dual-stage framework for segmentation of the brain anatomical regions with high accuracy
MAGMA. 2025 Mar 5. doi: 10.1007/s10334-025-01233-7. Online ahead of print.
ABSTRACT
OBJECTIVE: This study presents a novel deep learning-based framework for precise brain MR region segmentation, aiming to identify the location and the shape details of different anatomical structures within the brain.
MATERIALS AND METHODS: The approach uses a two-stage 3D segmentation technique on a dataset of adult subjects, including cognitively normal participants and individuals with cognitive decline. Stage 1 employs a 3D U-Net to segment 13 brain regions, achieving a mean DSC of 0.904 ± 0.060 and a mean HD95 of 1.52 ± 1.53 mm (a mean DSC of 0.885 ± 0.065 and a mean HD95 of 1.57 ± 1.35 mm for smaller parts). For challenging regions like hippocampus, thalamus, cerebrospinal fluid, amygdala, basal ganglia, and corpus callosum, Stage 2 with SegResNet refines segmentation, improving mean DSC to 0.921 ± 0.048 and HD95 to 1.17 ± 0.69 mm.
RESULTS: Statistical analysis reveals significant improvements (p-value < 0.001) for these regions, with DSC increases ranging from 1.3% to 3.2% and HD95 reductions of 0.06-0.33 mm. Comparisons with recent studies highlight the superior performance of the proposed method.
DISCUSSION: The inclusion of a second stage for refining the segmentation of smaller regions demonstrates substantial improvements, establishing the framework's potential for precise and reliable brain region segmentation across diverse cognitive groups.
PMID:40042762 | DOI:10.1007/s10334-025-01233-7
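The Dice similarity coefficient (DSC) reported throughout this abstract measures voxel overlap between a predicted and a reference segmentation mask; a minimal sketch on a toy mask:

```python
import numpy as np

def dice_coefficient(pred, target):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Two empty masks are conventionally a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 1-voxel-shifted prediction: 3 of 4 voxels overlap -> DSC = 6/8 = 0.75
pred = np.array([[1, 1, 1, 1, 0]])
target = np.array([[0, 1, 1, 1, 1]])
print(dice_coefficient(pred, target))
```

HD95, the other metric used, instead takes the 95th percentile of surface-to-surface distances between the two masks, so it penalizes boundary outliers that DSC barely notices.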
Artificial intelligence for the detection of airway nodules in chest CT scans
Eur Radiol. 2025 Mar 5. doi: 10.1007/s00330-025-11468-6. Online ahead of print.
ABSTRACT
OBJECTIVES: Incidental airway tumors are rare and can easily be overlooked on chest CT, especially at an early stage. Therefore, we developed and assessed a deep learning-based artificial intelligence (AI) system for detecting and localizing airway nodules.
MATERIALS AND METHODS: At a single academic hospital, we retrospectively analyzed cancer diagnoses and radiology reports from patients who received a chest or chest-abdomen CT scan between 2004 and 2020 to find cases presenting as airway nodules. Primary cancers were verified through bronchoscopy with biopsy or cytologic testing. The malignancy status of other nodules was confirmed with bronchoscopy only or follow-up CT scans if such evidence was unavailable. An AI system was trained and evaluated with a ten-fold cross-validation procedure. The performance of the system was assessed with a free-response receiver operating characteristic curve.
RESULTS: We identified 160 patients with airway nodules (median age of 64 years [IQR: 54-70], 58 women) and added a random sample of 160 patients without airway nodules (median age of 60 years [IQR: 48-69], 80 women). The sensitivity of the AI system was 75.1% (95% CI: 67.6-81.6%) for detecting all nodules with an average number of false positives per scan of 0.25 in negative patients and 0.56 in positive patients. At the same operating point, the sensitivity was 79.0% (95% CI: 70.4-86.6%) for the subset of tumors. A subgroup analysis showed that the system detected the majority of subtle tumors.
CONCLUSION: The AI system detects most airway nodules on chest CT with an acceptable false positive rate.
KEY POINTS: Question Incidental airway tumors are rare and are susceptible to being overlooked on chest CT. Findings An AI system can detect most benign and malignant airway nodules with an acceptable false positive rate, including nodules that have very subtle features. Clinical relevance An AI system shows potential for supporting radiologists in detecting airway tumors.
PMID:40042650 | DOI:10.1007/s00330-025-11468-6
Evaluating fusion models for predicting occult lymph node metastasis in tongue squamous cell carcinoma
Eur Radiol. 2025 Mar 5. doi: 10.1007/s00330-025-11473-9. Online ahead of print.
ABSTRACT
OBJECTIVES: This study evaluated and compared the effectiveness of various predictive models for forecasting occult lymph node metastasis (LNM) in tongue squamous cell carcinoma (TSCC) patients.
METHODS: In this retrospective diagnostic experiment, 268 patients were recruited from three medical centers. Based on the different hospitals from which the patients were recruited, they were divided into a training set, an internal testing set, and two external testing sets, comprising 107, 53, 63, and 45 patients, respectively. Several predictive models were developed using patients' contrast-enhanced magnetic resonance imaging (CEMRI), including two-dimensional deep learning (2D DL), conventional radiomics (C-radiomics), and intratumoral heterogeneity radiomics (ITH-radiomics). Univariate and multivariate logistic regression analyses were conducted on the clinical data. Finally, two fusion strategies were used to construct the model.
RESULTS: The ITH-radiomics model exhibited superior discriminative power compared to C-radiomics model. The late fusion model had the highest area under the curve (AUC) across all test sets (0.81-0.85). Compared to the late fusion model, the AUC values for the early fusion, 2D DL, C-radiomics, and ITH-radiomics models in the test sets ranged from 0.77 to 0.82, 0.64 to 0.81, 0.66 to 0.77, and 0.77 to 0.80, respectively. Additionally, the late fusion model demonstrated the highest accuracy (76-89%) and specificity (87-100%) across the test sets.
CONCLUSIONS: The evaluation of the models' effectiveness revealed that the decision-based late fusion model, which integrated 2D DL, C-radiomics, ITH-radiomics, and clinical data, achieved the best results. This predictive approach can more accurately assess patients' conditions and aid in selecting surgical plans.
KEY POINTS: Question How well does fusing multiple models work for predicting occult lymph node metastasis in patients with tongue squamous cell carcinoma? Findings The late fusion model, incorporating two-dimensional deep learning, conventional radiomics, intratumoral heterogeneity radiomics, and clinical features, achieved the best results compared with each individual model. Clinical relevance Patients with a high intratumoral heterogeneity radiomics index exhibit an increased risk of occult lymph node metastasis in tongue squamous cell carcinoma, and the late fusion model achieves superior predictive performance compared to the early fusion model.
PMID:40042648 | DOI:10.1007/s00330-025-11473-9
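The abstract does not state the exact decision-level rule used by the late fusion model; a common and minimal choice is a (weighted) average of each model's predicted probabilities, sketched here with hypothetical outputs:

```python
import numpy as np

def late_fusion(prob_lists, weights=None):
    """Decision-level fusion: weighted average of per-model probabilities.

    prob_lists: (n_models, n_patients) predicted probabilities of occult LNM.
    weights: optional per-model weights; defaults to a uniform average.
    """
    probs = np.asarray(prob_lists, dtype=float)
    if weights is None:
        weights = np.full(probs.shape[0], 1.0 / probs.shape[0])
    return weights @ probs

# Three hypothetical models (2D DL, C-radiomics, ITH-radiomics) on 4 patients.
p = [[0.9, 0.2, 0.6, 0.1],
     [0.8, 0.3, 0.5, 0.2],
     [0.7, 0.1, 0.7, 0.3]]
print(late_fusion(p))
```

Late fusion of this kind contrasts with early fusion, where the feature vectors themselves are concatenated before a single classifier is trained.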
Advancing methodologies for assessing the impact of land use changes on water quality: a comprehensive review and recommendations
Environ Geochem Health. 2025 Mar 5;47(4):101. doi: 10.1007/s10653-025-02413-z.
ABSTRACT
Although substantial research has examined the ramifications of land use changes for water quality, the findings demonstrate pronounced spatial variability, and the research methodologies employed are heterogeneous. To address this critical gap, this review offers a rigorous evaluation of the strengths and limitations of current research methodologies, providing targeted recommendations for refinement. It systematically assesses the existing body of literature concerning the influence of land use changes on water quality, with particular emphasis on the spatial heterogeneity of research results and the uniformity of employed methodologies. Despite variations in geographical contexts and research subjects, the methodological paradigms remain largely consistent, typically encompassing the acquisition and analysis of water quality and land use data, the delineation of buffer zones, and the application of correlation and regression analyses. However, these approaches encounter limitations in addressing regional disparities, nonlinear interactions, and real-time monitoring complexities. The review advocates for methodological advancements, such as the integration of automated monitoring systems and IoT technologies, alongside the fusion of deep learning algorithms with remote sensing techniques, to enhance both the precision and efficiency of data collection. Furthermore, it recommends the standardization of buffer zone delineation, the reinforcement of foundational water quality assessments, and the utilization of catchment-scale analyses to more accurately capture the influence of land use changes on water quality. Future inquiries should prioritize the development of interdisciplinary ecological models to elucidate the interaction and feedback mechanisms between land use, water quality, and climate change.
PMID:40042544 | DOI:10.1007/s10653-025-02413-z
A deep-learning retinal aging biomarker for cognitive decline and incident dementia
Alzheimers Dement. 2025 Mar;21(3):e14601. doi: 10.1002/alz.14601.
ABSTRACT
INTRODUCTION: The utility of retinal photography-derived aging biomarkers for predicting cognitive decline remains under-explored.
METHODS: A memory-clinic cohort in Singapore was followed up for 5 years. RetiPhenoAge, a retinal aging biomarker, was derived from retinal photographs using deep learning. Using competing risk analysis, we determined the associations of RetiPhenoAge with cognitive decline and dementia, with the UK Biobank used as the replication cohort. The associations of RetiPhenoAge with MRI markers (cerebral small vessel disease [CSVD] and neurodegeneration) and its underlying plasma proteomic profile were evaluated.
RESULTS: Of 510 memory-clinic subjects (N = 155 with cognitive decline), RetiPhenoAge was associated with incident cognitive decline (subdistribution hazard ratio [SHR] 1.34, 95% confidence interval [CI] 1.10-1.64, p = 0.004) and incident dementia (SHR 1.43, 95% CI 1.02-2.01, p = 0.036). In the UK Biobank (N = 33,495), RetiPhenoAge similarly predicted incident dementia (SHR 1.25, 95% CI 1.09-1.41, p = 0.008). RetiPhenoAge was significantly associated with CSVD, brain atrophy, and plasma proteomic signatures related to aging.
DISCUSSION: RetiPhenoAge may provide a non-invasive prognostic screening tool for cognitive decline and dementia.
HIGHLIGHTS: RetiPhenoAge, a retinal aging marker, was studied in an Asian memory clinic cohort. Older RetiPhenoAge predicted future cognitive decline and incident dementia. It also linked to neuropathological markers, and plasma proteomic profiles of aging. UK Biobank replication found that RetiPhenoAge predicted 12-year incident dementia. Future studies should validate RetiPhenoAge as a prognostic biomarker for dementia.
PMID:40042460 | DOI:10.1002/alz.14601
Deep-Learning-Based Approaches for Rational Design of Stapled Peptides With High Antimicrobial Activity and Stability
Microb Biotechnol. 2025 Mar;18(3):e70121. doi: 10.1111/1751-7915.70121.
ABSTRACT
Antimicrobial peptides (AMPs) face stability and toxicity challenges in clinical use. Stapled modification enhances their stability and effectiveness, but its application in peptide design is rarely reported. This study built ten prediction models for stapled AMPs using deep and machine learning, tested their accuracy with an independent data set and wet lab experiments, and characterised stapled loop structures using structural, sequence and amino acid descriptors. AlphaFold improved stapled peptide structure prediction. The support vector machine model performed best, while two deep learning models achieved the highest accuracy of 1.0 on an external test set. Designed cysteine- and lysine-stapled peptides inhibited various bacteria with low concentrations and showed good serum stability and low haemolytic activity. This study highlights the potential of the deep learning method in peptide modification and design.
PMID:40042163 | DOI:10.1111/1751-7915.70121