Deep learning
Risk score stratification of cutaneous melanoma patients based on whole slide images analysis by deep learning
J Eur Acad Dermatol Venereol. 2025 Jan 24. doi: 10.1111/jdv.20538. Online ahead of print.
ABSTRACT
BACKGROUND: There is a need to improve risk stratification of primary cutaneous melanomas to better guide adjuvant therapy. Given that haematoxylin and eosin (HE)-stained tumour tissue contains a huge amount of clinically unexploited morphological information, we developed a weakly supervised deep-learning approach, SmartProg-MEL, to predict survival outcomes in stage I to III melanoma patients from HE-stained whole slide images (WSIs).
METHODS: We designed a deep neural network that extracts morphological features from WSIs to predict 5-y overall survival (OS) and assign a survival risk score to each patient. The model was trained and validated on a discovery cohort of primary cutaneous melanomas (IHP-MEL-1, n = 342). Performance was tested on two external and independent datasets (IHP-MEL-2, n = 161; TCGA cohort, n = 63) and compared with well-established prognostic factors, using the concordance index (c-index) as the metric.
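For reference, the concordance index used here can be computed as in the sketch below, which implements Harrell's pairwise definition on toy data; the values and the simple O(n^2) loop are illustrative only and do not reflect the study's implementation.

```python
import numpy as np

def harrell_c_index(risk, time, event):
    """Harrell's concordance index: fraction of comparable pairs in which the
    patient with the higher predicted risk has the shorter observed survival."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i had the event before j's follow-up ended
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# toy example (hypothetical values)
risk  = [0.9, 0.4, 0.7, 0.2]          # model risk scores
time  = [12, 60, 24, 58]              # months of follow-up
event = [1, 0, 1, 1]                  # 1 = death observed, 0 = censored
print(round(harrell_c_index(risk, time, event), 2))   # 0.83
```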
RESULTS: On the discovery cohort, SmartProg-MEL predicted 5-y OS with a c-index of 0.78 on the cross-validation data and 0.72 on the cross-testing series. In the external cohorts, the model achieved a c-index of 0.71 and 0.69 for the IHP-MEL-2 and TCGA datasets, respectively. Furthermore, SmartProg-MEL was an independent and the most powerful prognostic factor in multivariate analysis (HR = 1.84, p-value < 0.005). Finally, the model was able to dichotomize patients into two groups, a low-risk and a high-risk group, each associated with a significantly different 5-y OS (p-value < 0.001 for IHP-MEL-1 and p-value = 0.01 for IHP-MEL-2).
CONCLUSION: Our fully automated SmartProg-MEL model outperforms current clinicopathological factors in predicting 5-y OS and stratifying the risk of cutaneous melanoma patients. Incorporating SmartProg-MEL into the clinical workflow could guide decision-making by improving the identification of patients who may benefit from adjuvant therapy.
PMID:39853986 | DOI:10.1111/jdv.20538
Deep-Learning Generated Synthetic Material Decomposition Images Based on Single-Energy CT to Differentiate Intracranial Hemorrhage and Contrast Staining Within 24 Hours After Endovascular Thrombectomy
CNS Neurosci Ther. 2025 Jan;31(1):e70235. doi: 10.1111/cns.70235.
ABSTRACT
AIMS: To develop a transformer-based generative adversarial network (trans-GAN) that can generate synthetic material decomposition images from single-energy CT (SECT) for real-time detection of intracranial hemorrhage (ICH) after endovascular thrombectomy.
MATERIALS: We retrospectively collected data from two hospitals, consisting of 237 dual-energy CT (DECT) scans, including matched iodine overlay maps, virtual noncontrast, and simulated SECT images. These scans were randomly divided into a training set (n = 190) and an internal validation set (n = 47) in a 4:1 ratio based on the proportion of ICH. Additionally, 26 SECT scans were included as an external validation set. We compared our trans-GAN with state-of-the-art generation methods using several physical metrics of the generated images and evaluated the diagnostic efficacy of the generated images for differentiating ICH from contrast staining.
RESULTS: In comparison with other generation methods, the images generated by trans-GAN exhibited superior quantitative performance. Meanwhile, in terms of ICH detection, the use of generated images from both the internal and external validation sets resulted in a higher area under the receiver operating characteristic curve (0.88 vs. 0.68 and 0.69 vs. 0.54, respectively) and kappa values (0.83 vs. 0.56 and 0.51 vs. 0.31, respectively) compared with input SECT images.
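For context, diagnostic comparisons of the kind reported above (AUC and kappa for generated versus input images) can be computed as in the sketch below; the labels and scores are hypothetical and only illustrate the metrics, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# hypothetical reference labels: 1 = intracranial hemorrhage, 0 = contrast staining
truth           = np.array([1, 0, 1, 1, 0, 0, 1, 0])
score_generated = np.array([0.92, 0.20, 0.81, 0.67, 0.35, 0.10, 0.74, 0.44])  # from synthetic DECT-like images
score_sect      = np.array([0.60, 0.55, 0.52, 0.70, 0.58, 0.40, 0.49, 0.65])  # from input SECT images

print("AUC (generated):", roc_auc_score(truth, score_generated))
print("AUC (SECT):     ", roc_auc_score(truth, score_sect))

# kappa measures agreement of binarized calls with the reference standard
print("kappa (generated):", cohen_kappa_score(truth, (score_generated >= 0.5).astype(int)))
print("kappa (SECT):     ", cohen_kappa_score(truth, (score_sect >= 0.5).astype(int)))
```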
CONCLUSION: Our proposed trans-GAN provides a new SECT-based approach for real-time differentiation of ICH and contrast staining in hospitals without DECT capability.
PMID:39853936 | DOI:10.1111/cns.70235
Prediction of facial nerve outcomes after surgery for vestibular schwannoma using machine learning-based models: a systematic review and meta-analysis
Neurosurg Rev. 2025 Jan 24;48(1):79. doi: 10.1007/s10143-025-03230-9.
ABSTRACT
Postoperative facial nerve (FN) dysfunction has a significant impact on patients' quality of life and can result in psychological stress and disorders such as depression and social isolation. Preoperative prediction of FN outcomes can play a critical role in the care of patients with vestibular schwannomas (VSs). Several studies have developed machine learning (ML)-based models to predict FN outcomes following resection of VS. This systematic review and meta-analysis aimed to evaluate the diagnostic accuracy of ML-based models in predicting FN outcomes following resection in the setting of VS. On December 12, 2024, four electronic databases (PubMed, Embase, Scopus, and Web of Science) were systematically searched. Studies that evaluated the performance of ML-based predictive models were included. The pooled sensitivity, specificity, area under the curve (AUC), and diagnostic odds ratio (DOR) were calculated using R. Five studies with 807 individuals with VS, encompassing 35 models, were included. The meta-analysis showed a pooled sensitivity of 82% (95%CI: 76-87%), specificity of 79% (95%CI: 74-84%), and DOR of 12.94 (95%CI: 8.65-19.34), with an AUC of 0.841. The meta-analysis of the best-performing models demonstrated a pooled sensitivity of 91% (95%CI: 80-96%), specificity of 87% (95%CI: 82-91%), and DOR of 46.84 (95%CI: 19.8-110.8). Additionally, the analysis demonstrated an AUC of 0.92, a sensitivity of 0.884, and a false positive rate of 0.136 for the best-performing models. ML-based models possess promising diagnostic accuracy in predicting FN outcomes following resection.
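As a quick reference, the diagnostic odds ratio relates to sensitivity and specificity as in the sketch below; note that a DOR computed from the pooled sensitivity and specificity will not exactly reproduce the separately pooled DOR reported above, so the printed numbers are illustrative rather than a re-analysis.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = LR+ / LR- = (sens * spec) / ((1 - sens) * (1 - spec))."""
    return (sensitivity * specificity) / ((1 - sensitivity) * (1 - specificity))

# pooled estimates across all 35 models (the separately pooled DOR in the paper was 12.94)
print(round(diagnostic_odds_ratio(0.82, 0.79), 1))   # ~17.1 from pooled sens/spec
# best-performing models (the separately pooled DOR was 46.84)
print(round(diagnostic_odds_ratio(0.91, 0.87), 1))   # ~67.7 from pooled sens/spec
```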
PMID:39853510 | DOI:10.1007/s10143-025-03230-9
Scanner-based real-time three-dimensional brain + body slice-to-volume reconstruction for T2-weighted 0.55-T low-field fetal magnetic resonance imaging
Pediatr Radiol. 2025 Jan 24. doi: 10.1007/s00247-025-06165-x. Online ahead of print.
ABSTRACT
BACKGROUND: Motion correction methods based on slice-to-volume registration (SVR) for fetal magnetic resonance imaging (MRI) allow reconstruction of three-dimensional (3-D) isotropic images of the fetal brain and body. However, all existing SVR methods are confined to research settings, which limits clinical integration. Furthermore, there have been no reported SVR solutions for low-field 0.55-T MRI.
OBJECTIVE: To integrate automated SVR motion correction methods directly into the fetal MRI scanning process via the Gadgetron framework, enabling automated T2-weighted (T2W) 3-D fetal brain and body reconstruction on a low-field 0.55-T MRI scanner within the duration of the scan.
MATERIALS AND METHODS: A fully automated deep learning pipeline was developed for T2W 3-D rigid and deformable (D/SVR) reconstruction of the fetal brain and body from 0.55-T T2W datasets. It was then integrated into the 0.55-T low-field MRI scanner environment via a Gadgetron workflow that launches the reconstruction process directly during scanning, in real time.
RESULTS: During prospective testing on 12 cases (22-40 weeks gestational age), the fetal brain and body reconstructions were available on average 6:42 ± 3:13 min after the acquisition of the final stack and could be assessed and archived on the scanner console during the ongoing fetal MRI scan. The output image data quality was rated as good to acceptable for interpretation. The retrospective testing of the pipeline on 83 0.55-T datasets demonstrated stable reconstruction quality for low-field MRI.
CONCLUSION: The proposed pipeline allows scanner-based prospective T2W 3-D motion correction for low-field 0.55-T fetal MRI via direct online integration into the scanner environment.
PMID:39853394 | DOI:10.1007/s00247-025-06165-x
Deep-Learning-Assisted Digital Fluorescence Immunoassay on Magnetic Beads for Ultrasensitive Determination of Protein Biomarkers
Anal Chem. 2025 Jan 24. doi: 10.1021/acs.analchem.4c05877. Online ahead of print.
ABSTRACT
Digital fluorescence immunoassay (DFI) based on randomly dispersed magnetic beads (MBs) is one of the most powerful methods for ultrasensitive determination of protein biomarkers. However, improving the limit of detection (LOD) of DFI is challenging because the signal-to-background ratio is low and manual bead counting is slow. Herein, we developed a deep-learning network (ATTBeadNet), built on a new hybrid attention mechanism within a UNet3+ framework, for accurate and fast counting of MBs, and we propose a DFI using CdS quantum dots (QDs) with a narrow emission peak and good optical stability, reported here for the first time. ATTBeadNet was applied to counting the MBs, yielding an F1 score (95.91%) higher than those of other methods (ImageJ, 68.33%; computer vision-based, 92.99%; fully convolutional network, 75.00%; mask region-based convolutional neural network, 70.34%). As a proof of principle, a sandwich MB-based DFI was developed in which human interleukin-6 (IL-6) was taken as a model protein biomarker, antibody-bound streptavidin-coated MBs were used as capture MBs, and antibody-HRP-tyramide-functionalized CdS QDs were used as the binding reporter. When ATTBeadNet was applied to the MB-based DFI of IL-6 (20 μL), a linear range from 5 to 100 fM and an LOD of 3.1 fM were achieved, better than those obtained using the ImageJ method (linear range from 30 to 100 fM and LOD of 20 fM). This work demonstrates that integrating a deep-learning network with DFI is a promising strategy for highly sensitive and accurate determination of protein biomarkers.
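To illustrate how a linear range and LOD are typically derived from a digital calibration, the sketch below fits counted-bead signal versus concentration and estimates the detection limit with the common 3.3 x sigma/slope convention; the data points and the blank standard deviation are invented, and the convention is an assumption rather than necessarily the one used in this paper.

```python
import numpy as np

# hypothetical calibration: IL-6 concentration (fM) vs. fraction of "on" beads counted
conc     = np.array([5, 10, 25, 50, 75, 100], dtype=float)
signal   = np.array([0.012, 0.021, 0.049, 0.095, 0.142, 0.188])
blank_sd = 0.002                       # standard deviation of blank replicates (assumed)

slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration fit
r = np.corrcoef(conc, signal)[0, 1]

lod = 3.3 * blank_sd / slope           # common LOD convention
print(f"slope={slope:.4f} per fM, R^2={r**2:.4f}, LOD~{lod:.1f} fM")
```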
PMID:39853309 | DOI:10.1021/acs.analchem.4c05877
Deep learning-based design and experimental validation of a medicine-like human antibody library
Brief Bioinform. 2024 Nov 22;26(1):bbaf023. doi: 10.1093/bib/bbaf023.
ABSTRACT
Antibody generation requires the use of one or more time-consuming methods, namely animal immunization and in vitro display technologies. However, the recent availability of large amounts of antibody sequence and structural data in the public domain, along with the advent of generative deep learning algorithms, raises the possibility of computationally generating novel antibody sequences with desirable developability attributes. Here, we describe a deep learning model for computationally generating libraries of highly human antibody variable regions whose intrinsic physicochemical properties resemble those of the variable regions of marketed antibody-based biotherapeutics (medicine-likeness). We generated 100,000 variable region sequences of antigen-agnostic human antibodies belonging to the IGHV3-IGKV1 germline pair using a training dataset of 31,416 human antibodies that satisfied our computational developability criteria. The in-silico generated antibodies recapitulate intrinsic sequence, structural, and physicochemical properties of the training antibodies, and compare favorably with the experimentally measured biophysical attributes of 100 variable regions of marketed and clinical-stage antibody-based biotherapeutics. A sample of 51 highly diverse in-silico generated antibodies with >90th percentile medicine-likeness and >90% humanness was evaluated by two independent experimental laboratories. Our data show that the in-silico generated sequences exhibit high expression, monomer content, and thermal stability, along with low hydrophobicity, self-association, and non-specific binding when produced as full-length monoclonal antibodies. The ability to computationally generate developable human antibody libraries is a first step towards enabling in-silico discovery of antibody-based biotherapeutics. These findings are expected to accelerate in-silico discovery of antibody-based biotherapeutics and expand the druggable antigen space to include targets refractory to conventional antibody discovery methods requiring in vitro antigen production.
PMID:39851074 | DOI:10.1093/bib/bbaf023
Characterization of saffron from different origins by HS-GC-IMS and authenticity identification combined with deep learning
Food Chem X. 2024 Nov 13;24:101981. doi: 10.1016/j.fochx.2024.101981. eCollection 2024 Dec 30.
ABSTRACT
With the rising demand for saffron, it is essential to standardize confirmation of its origin and to identify adulteration in order to maintain a high-quality market. However, a rapid and reliable strategy for identifying adulterated saffron is still lacking. Herein, a combination of headspace-gas chromatography-ion mobility spectrometry (HS-GC-IMS) and a convolutional neural network (CNN) was developed. Sixty-nine volatile compounds (VOCs), including 7 groups of isomers, were detected rapidly and directly. A CNN prediction model based on GC-IMS data was proposed. Owing to its minimal data preprocessing and automatic feature extraction, GC-IMS images were input directly to the CNN model. Origin prediction achieved an average accuracy of about 90%, higher than traditional methods such as PCA (61%) and SVM (71%). The established CNN also identified counterfeit saffron with a high accuracy of 98% and can therefore be used to authenticate saffron.
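A minimal sketch of the kind of image-classification CNN described here is shown below, assuming GC-IMS spectra exported as single-channel images and a handful of origin classes; the layer sizes and class count are placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class GCIMSClassifier(nn.Module):
    """Small CNN that maps a 1-channel GC-IMS image to origin-class logits."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # works for any input resolution
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = GCIMSClassifier(n_classes=4)
dummy = torch.randn(8, 1, 128, 128)           # batch of 8 GC-IMS images
print(model(dummy).shape)                     # torch.Size([8, 4])
```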
PMID:39850938 | PMC:PMC11754009 | DOI:10.1016/j.fochx.2024.101981
Detecting anomalies in smart wearables for hypertension: a deep learning mechanism
Front Public Health. 2025 Jan 15;12:1426168. doi: 10.3389/fpubh.2024.1426168. eCollection 2024.
ABSTRACT
INTRODUCTION: The growing demand for real-time, affordable, and accessible healthcare has underscored the need for advanced technologies that can provide timely health monitoring. One such area is predicting arterial blood pressure (BP) using non-invasive methods, which is crucial for managing cardiovascular diseases. This research aims to address the limitations of current healthcare systems, particularly in remote areas, by leveraging deep learning techniques in Smart Health Monitoring (SHM).
METHODS: This paper introduces a novel neural network architecture, ResNet-LSTM, to predict BP from physiological signals such as electrocardiogram (ECG) and photoplethysmogram (PPG). The combination of ResNet's feature extraction capabilities and LSTM's sequential data processing offers improved prediction accuracy. Comprehensive error analysis was conducted, and the model was validated using Leave-One-Out (LOO) cross-validation and an additional dataset.
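The general ResNet-plus-LSTM pattern described here can be sketched roughly as below: a 1-D residual CNN extracts local waveform features from a PPG/ECG window and an LSTM models their temporal order before regressing systolic and diastolic BP; the channel widths and window length are illustrative guesses, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """1-D residual block with two convolutions and an identity skip."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 5, padding=2), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 5, padding=2), nn.BatchNorm1d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class ResNetLSTM(nn.Module):
    def __init__(self, in_ch=1, ch=32, hidden=64):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(in_ch, ch, 7, stride=2, padding=3), nn.ReLU())
        self.res = nn.Sequential(ResBlock1d(ch), ResBlock1d(ch))
        self.lstm = nn.LSTM(ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # [systolic, diastolic]

    def forward(self, x):                           # x: (batch, 1, samples)
        f = self.res(self.stem(x)).transpose(1, 2)  # -> (batch, time, channels)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                # prediction from the last time step

model = ResNetLSTM()
print(model(torch.randn(4, 1, 1000)).shape)         # torch.Size([4, 2])
```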
RESULTS: The ResNet-LSTM model showed superior performance, particularly with PPG data, achieving a mean absolute error (MAE) of 6.2 mmHg and a root mean square error (RMSE) of 8.9 mmHg for BP prediction. Despite the higher computational cost (~4,375 FLOPs), the improved accuracy and generalization across datasets demonstrate the model's robustness and suitability for continuous BP monitoring.
DISCUSSION: The results confirm the potential of integrating ResNet-LSTM into SHM for accurate and non-invasive BP prediction. This approach also highlights the need for accurate anomaly detection in continuous monitoring systems, especially for wearable devices. Future work will focus on enhancing cloud-based infrastructures for real-time analysis and refining anomaly detection models to improve patient outcomes.
PMID:39850864 | PMC:PMC11755415 | DOI:10.3389/fpubh.2024.1426168
Dynamic-budget superpixel active learning for semantic segmentation
Front Artif Intell. 2025 Jan 9;7:1498956. doi: 10.3389/frai.2024.1498956. eCollection 2024.
ABSTRACT
INTRODUCTION: Active learning can significantly decrease the labeling cost of deep learning workflows by directing the limited labeling budget to the data points with the highest positive impact on model accuracy. It is especially useful for semantic segmentation tasks, where we can selectively label only a few high-impact regions within these high-impact images. Most established regional active learning algorithms deploy a static-budget querying strategy in which a fixed percentage of regions is queried in each image. A static budget can result in over- or under-labeling, as the number of high-impact regions varies from image to image.
METHODS: In this paper, we present a novel dynamic-budget superpixel querying strategy that queries an appropriate number of high-uncertainty superpixels in each image, improving the querying efficiency of regional active learning algorithms designed for semantic segmentation.
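A rough sketch of the dynamic-budget idea follows: instead of taking a fixed top-k per image, superpixels are queried wherever their uncertainty exceeds a global threshold chosen to spend the total budget; this thresholding rule is an illustrative assumption, not the paper's exact strategy.

```python
import numpy as np

def dynamic_budget_query(uncertainty_per_image, total_budget):
    """uncertainty_per_image: list of 1-D arrays, one entry per superpixel.
    Returns, for each image, the indices of superpixels to label, letting the
    number per image vary while respecting the overall labeling budget."""
    flat = np.concatenate(uncertainty_per_image)
    # global threshold = uncertainty of the budget-th most uncertain superpixel
    threshold = np.sort(flat)[::-1][min(total_budget, len(flat)) - 1]
    return [np.where(u >= threshold)[0] for u in uncertainty_per_image]

# toy example: 3 images with different numbers of informative regions
imgs = [np.array([0.9, 0.1, 0.8]), np.array([0.2, 0.15]), np.array([0.85, 0.7, 0.05, 0.6])]
picked = dynamic_budget_query(imgs, total_budget=4)
print([p.tolist() for p in picked])   # [[0, 2], [], [0, 1]] -- the budget concentrates on images 1 and 3
```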
RESULTS: For two distinct datasets, we show that by allowing a dynamic budget for each image, the active learning algorithm is more effective compared to static-budget querying at the same low total labeling budget. We investigate both low- and high-budget scenarios and the impact of superpixel size on our dynamic active learning scheme. In a low-budget scenario, our dynamic-budget querying outperforms static-budget querying by 5.6% mIoU on a specialized agriculture field image dataset and 2.4% mIoU on Cityscapes.
DISCUSSION: The presented dynamic-budget querying strategy is simple, effective, and can be easily adapted to other regional active learning algorithms to further improve the data efficiency of semantic segmentation tasks.
PMID:39850848 | PMC:PMC11754207 | DOI:10.3389/frai.2024.1498956
Study on the application of deep learning artificial intelligence techniques in the diagnosis of nasal bone fracture
Int J Burns Trauma. 2024 Dec 15;14(6):125-132. doi: 10.62347/VCJP9652. eCollection 2024.
ABSTRACT
PURPOSE: To evaluate the identification of nasal bone fractures on three-dimensional (3D) reconstructions of maxillofacial computed tomography (CT) images, and its clinical diagnostic significance, using artificial intelligence (AI) with deep learning (DL).
METHODS: CT maxillofacial 3D reconstruction images of 39 patients with normal nasal bones and 43 patients with nasal bone fractures were retrospectively analysed; a total of 247 images were obtained in three directions: the orthostatic, left lateral, and right lateral positions. The CT scan images of all patients were reviewed by two senior specialists to confirm the presence or absence of nasal fractures. Binary classification prediction was performed using a YOLOX detection model combined with a GhostNetv2 classification model as the DL algorithm. Accuracy, sensitivity, and specificity were used to evaluate the efficacy of the AI model. Manual independent review and AI model-assisted manual independent review were used to identify nasal fractures.
RESULTS: Compared with manual independent detection, the accuracy, sensitivity, and specificity of AI-assisted reading improved for both junior and senior physicians. The differences were statistically significant (P<0.05), and all values were higher than those of manual independent detection.
CONCLUSIONS: Based on deep learning methods, an artificial intelligence model can be used to assist in the diagnosis of nasal bone fractures, which helps to promote the practical clinical application of deep learning methods.
PMID:39850782 | PMC:PMC11751554 | DOI:10.62347/VCJP9652
Fully automated coronary artery calcium score and risk categorization from chest CT using deep learning and multiorgan segmentation: A validation study from National Lung Screening Trial (NLST)
Int J Cardiol Heart Vasc. 2025 Jan 2;56:101593. doi: 10.1016/j.ijcha.2024.101593. eCollection 2025 Feb.
ABSTRACT
BACKGROUND: The National Lung Screening Trial (NLST) has shown that screening with low-dose CT in a high-risk population was associated with a reduction in lung cancer mortality. These patients are also at high risk of coronary artery disease, so we used a deep learning model to automatically detect, quantify, and risk-categorize the coronary artery calcium score (CACS) from non-ECG-gated chest CT scans.
MATERIALS AND METHODS: Automated calcium quantification was performed using a neural network based on Mask regions with convolutional neural networks (Mask R-CNN) for multiorgan segmentation. Manual evaluation of calcium was carried out using proprietary software. Eighty patients were used to train the segmentation model, and 1442 randomly selected patients were used to validate the algorithm. We compared the model-generated results with the ground truth.
RESULTS: The automatic cardiac and aortic segmentation model performed well (mean Dice score: 0.91). Cohen's kappa coefficient between the reference and the computed predictive categories on the test set was 0.72 (95% CI: 0.61-0.83). Our method correctly classified the risk group (i.e., placed subjects in the same category) in 78.8% of cases. The F-score was 0.78, 0.71, 0.81, 0.82, and 0.92 in calcium score categories 0 (CS: 0), I (1-99), II (100-400), III (400-1000), and IV (>1000), respectively. 79% of predicted scores lay in the same category, 20% were one category up or down, and only 1.2% of patients were more than one category off. For the presence/absence of coronary artery calcifications, our deep learning model achieved a sensitivity of 90% and a specificity of 94%.
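For context, the risk categorization and agreement statistic used above can be reproduced from per-patient calcium scores roughly as follows; the category boundaries mirror those quoted in the results, while the scores and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cac_category(score):
    """Map an Agatston-style calcium score to the categories used above:
    0, I (1-99), II (100-400), III (400-1000), IV (>1000)."""
    bins = [0, 1, 100, 400, 1000]
    return int(np.digitize(score, bins)) - 1   # 0..4

# hypothetical reference vs. model-predicted scores for a few patients
reference = [0, 15, 250, 620, 1500, 80]
predicted = [0, 22, 180, 480, 1250, 130]

ref_cat  = [cac_category(s) for s in reference]
pred_cat = [cac_category(s) for s in predicted]
print(ref_cat, pred_cat)
print("kappa:", cohen_kappa_score(ref_cat, pred_cat))
```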
CONCLUSION: The fully automated model shows good agreement with reference standards. Automating the process could improve diagnostic ability and risk categorization, facilitate primary prevention interventions, improve morbidity and mortality outcomes, and decrease healthcare costs.
PMID:39850777 | PMC:PMC11754490 | DOI:10.1016/j.ijcha.2024.101593
MambaTab: A Plug-and-Play Model for Learning Tabular Data
Proc (IEEE Conf Multimed Inf Process Retr). 2024 Aug;2024:369-375. doi: 10.1109/mipr62202.2024.00065. Epub 2024 Oct 15.
ABSTRACT
Despite the prevalence of images and text in machine learning, tabular data remains widely used across various domains. Existing deep learning models, such as convolutional neural networks and transformers, perform well; however, they demand extensive preprocessing and tuning, limiting accessibility and scalability. This work introduces an innovative approach for tabular data based on a structured state-space model (SSM), MambaTab. SSMs have strong capabilities for efficiently extracting effective representations from data with long-range dependencies. MambaTab leverages Mamba, an emerging SSM variant, for end-to-end supervised learning on tables. Compared with state-of-the-art baselines, MambaTab delivers superior performance while requiring significantly fewer parameters, as empirically validated on diverse benchmark datasets. MambaTab's efficiency, scalability, generalizability, and predictive gains make it a lightweight, "plug-and-play" solution for diverse tabular data, with promise for enabling wider practical applications.
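The general idea of treating a Mamba block as a plug-and-play backbone for tabular data might be sketched as below; this assumes the open-source mamba_ssm package (GPU required), and the column-as-token embedding, sizes, and pooling are illustrative choices rather than MambaTab's actual architecture.

```python
import torch
import torch.nn as nn
# Assumes the open-source `mamba_ssm` package; any sequence module with the same
# (batch, length, dim) interface could be substituted for experimentation.
from mamba_ssm import Mamba

class MambaTabSketch(nn.Module):
    """Rough plug-and-play tabular learner: embed each column as a token,
    run an SSM over the column sequence, and classify from a pooled state."""
    def __init__(self, n_features: int, d_model: int = 32, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)              # one scalar feature -> one token
        self.ssm = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                               # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))            # (batch, n_features, d_model)
        h = self.norm(self.ssm(tokens))
        return self.head(h.mean(dim=1))                 # mean-pool over columns

model = MambaTabSketch(n_features=12).cuda()
logits = model(torch.randn(4, 12, device="cuda"))
print(logits.shape)                                     # torch.Size([4, 2])
```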
PMID:39850741 | PMC:PMC11755428 | DOI:10.1109/mipr62202.2024.00065
Optimizing predictions of environmental variables and species distributions on tidal flats by combining Sentinel-2 images and their deep-learning features with OBIA
Int J Remote Sens. 2024 Nov 19;46(2):811-834. doi: 10.1080/01431161.2024.2423909. eCollection 2025.
ABSTRACT
Tidal flat ecosystems are in steady decline due to anthropogenic pressures, including sea level rise and climate change. Monitoring and managing these coastal systems requires accurate and up-to-date mapping. Sediment characteristics and macrozoobenthos are major indicators of the environmental status of tidal flats, but field monitoring of these indicators is often restricted by low accessibility and high costs. Despite limitations in spectral contrast, integrating remote sensing with deep learning has proved efficient for deriving macrozoobenthos and sediment properties. In this study, we combined deep-learning features derived from Sentinel-2 images with Object-Based Image Analysis (OBIA) to explicitly include spatial aspects in the prediction of sediment and macrozoobenthos properties of tidal flats, as well as the distribution of four benthic species. The deep-learning features, extracted from a convolutional autoencoder model, were analysed with OBIA to include spatial, textural, and contextual information. Object sets of varying sizes and shapes, based on the spectral bands and/or the deep-learning features, served as the spatial units. These object sets and the field-collected points were used to train a Random Forest prediction model. Predictions were made for the tidal basins Pinkegat and Zoutkamperlaag in the Dutch Wadden Sea for 2018 to 2020. The overall prediction scores of the environmental variables ranged between 0.31 and 0.54, and the species-distribution model achieved accuracies ranging from 0.54 to 0.68 for the four benthic species. Predictions using objects with deep-learning features improved on pixel-based predictions with only the spectral bands by an average of 21 percentage points. The mean spatial unit that best captured the patterns ranged between 0.3 ha and 13 ha for the different variables. Overall, using both OBIA and deep-learning features consistently improved the predictions, making the combination valuable for monitoring these important environmental variables of coastal regions.
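A bare-bones version of the object-based prediction step might look like the sketch below: per-pixel features (spectral bands plus autoencoder features) are averaged within each object segment and fed to a Random Forest; the segmentation, feature stack, and target values are placeholders rather than the study's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# hypothetical inputs: a label image of object ids and a per-pixel feature stack
rng = np.random.default_rng(0)
objects  = rng.integers(0, 50, size=(100, 100))            # 50 image objects
features = rng.normal(size=(100, 100, 14))                 # e.g. 10 bands + 4 deep-learning features
target   = rng.uniform(0, 1, size=50)                      # e.g. mud content per object (from field points)

# mean feature vector per object (the OBIA spatial unit)
obj_feats = np.stack([features[objects == i].mean(axis=0) for i in range(50)])

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(obj_feats[:40], target[:40])                        # train on 40 objects
print("R^2 on held-out objects:", rf.score(obj_feats[40:], target[40:]))
```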
PMID:39850715 | PMC:PMC11755323 | DOI:10.1080/01431161.2024.2423909
Artificial Intelligence in Diagnosis and Management of Nail Disorders: A Narrative Review
Indian Dermatol Online J. 2024 Dec 11;16(1):40-49. doi: 10.4103/idoj.idoj_460_24. eCollection 2025 Jan-Feb.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) is revolutionizing healthcare by enabling systems to perform tasks traditionally requiring human intelligence. In healthcare, AI encompasses various subfields, including machine learning, deep learning, natural language processing, and expert systems. In the specific domain of onychology, AI presents a promising avenue for diagnosing nail disorders, analyzing intricate patterns, and improving diagnostic accuracy. This review provides a comprehensive overview of the current applications of AI in onychology, focusing on its role in diagnosing onychomycosis, subungual melanoma, nail psoriasis, nail fold capillaroscopy, and nail involvement in systemic diseases.
MATERIALS AND METHODS: A literature review on AI in nail disorders was conducted via PubMed and Google Scholar, yielding relevant studies. AI algorithms, particularly deep convolutional neural networks (CNNs), have demonstrated high sensitivity and specificity in interpreting nail images, aiding differential diagnosis and enhancing the efficiency of diagnostic processes in a busy clinical setting. In studies evaluating onychomycosis, AI has shown the ability to distinguish between normal nails, fungal infections, and other differentials, including nail psoriasis, with high accuracy. AI systems have proven effective in identifying subungual melanoma. For nail psoriasis, AI has been used to automate the scoring of disease severity, reducing the time and effort required. AI applications in nail fold capillaroscopy have aided the diagnosis and prognosis of connective tissue diseases. AI applications have also been extended to recognizing nail manifestations of systemic diseases by analyzing changes in nail morphology and coloration. AI also facilitates the management of nail disorders by offering tools for personalized treatment planning, remote care, treatment monitoring, and patient education.
CONCLUSION: Despite these advancements, challenges such as data scarcity, image heterogeneity, interpretability issues, regulatory compliance, and poor workflow integration hinder the seamless adoption of AI in onychology practice. Ongoing research and collaboration between AI developers and nail experts are crucial to realizing the full potential of AI in improving patient outcomes in onychology.
PMID:39850679 | PMC:PMC11753549 | DOI:10.4103/idoj.idoj_460_24
EquiRank: Improved protein-protein interface quality estimation using protein language-model-informed equivariant graph neural networks
Comput Struct Biotechnol J. 2024 Dec 30;27:160-170. doi: 10.1016/j.csbj.2024.12.015. eCollection 2025.
ABSTRACT
Quality estimation of the predicted interaction interface of protein complex structural models is not only important for complex model evaluation and selection but also useful for protein-protein docking. Despite recent progress fueled by symmetry-aware deep learning architectures and pretrained protein language models (pLMs), existing methods for estimating protein complex quality have yet to fully exploit the collective potential of these advances for accurate estimation of protein-protein interfaces. Here we present EquiRank, an improved protein-protein interface quality estimation method that leverages the strength of a symmetry-aware E(3) equivariant graph neural network (EGNN) and integrates pLM embeddings from the pretrained ESM-2 model. Our method estimates the quality of the protein-protein interface through an effective graph-based representation of interacting residue pairs, incorporating a diverse set of features, including ESM-2 embeddings, and then learning the representation with symmetry-aware EGNNs. Our experimental results demonstrate improved ranking performance on diverse datasets over the latest existing protein complex quality estimation methods, including the top-performing CASP15 method VoroIF_GNN and the self-assessment module of AlphaFold-Multimer repurposed for protein complex scoring, across different performance evaluation metrics. Additionally, our ablation studies demonstrate the contributions of both pLMs and the equivariant nature of EGNNs to the improved protein-protein interface quality estimation performance. EquiRank is freely available at https://github.com/mhshuvo1/EquiRank.
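For readers unfamiliar with E(3)-equivariant graph networks, the sketch below implements a single generic EGNN message-passing layer of the kind such methods build on (invariant messages from squared distances, coordinate updates along relative-position vectors); it follows the standard EGNN formulation rather than EquiRank's specific architecture or feature set.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One E(3)-equivariant layer: invariant messages from squared distances,
    equivariant coordinate updates along relative position vectors."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.coord = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 1))

    def forward(self, h, x, edges):
        # h: (N, dim) node features; x: (N, 3) coordinates; edges: (E, 2) index pairs
        src, dst = edges[:, 0], edges[:, 1]
        rel = x[src] - x[dst]                                   # (E, 3) relative positions
        d2 = (rel ** 2).sum(-1, keepdim=True)                   # invariant squared distance
        m = self.msg(torch.cat([h[src], h[dst], d2], dim=-1))   # (E, dim) messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)         # sum messages at receivers
        h_new = h + self.upd(torch.cat([h, agg], dim=-1))
        x_shift = torch.zeros_like(x).index_add_(0, dst, rel * self.coord(m))
        return h_new, x + x_shift

h, x = torch.randn(6, 32), torch.randn(6, 3)
edges = torch.tensor([[0, 1], [1, 0], [2, 3], [3, 2], [4, 5], [5, 4]])
h2, x2 = EGNNLayer()(h, x, edges)
print(h2.shape, x2.shape)      # torch.Size([6, 32]) torch.Size([6, 3])
```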
PMID:39850657 | PMC:PMC11755013 | DOI:10.1016/j.csbj.2024.12.015
Photoacoustic Imaging with Attention-Guided Deep Learning for Predicting Axillary Lymph Node Status in Breast Cancer
Acad Radiol. 2025 Jan 22:S1076-6332(24)00968-1. doi: 10.1016/j.acra.2024.12.020. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: Preoperative assessment of axillary lymph node (ALN) status is essential for breast cancer management. This study explores the use of photoacoustic (PA) imaging combined with attention-guided deep learning (DL) for precise prediction of ALN status.
MATERIALS AND METHODS: This retrospective study included patients with histologically confirmed early-stage breast cancer from 2022 to 2024, randomly divided (8:2) into training and test cohorts. All patients underwent preoperative dual-modal photoacoustic-ultrasound (PA-US) examination, were treated with surgery and sentinel lymph node biopsy or ALN dissection, and were pathologically examined to determine ALN status. An attention-guided DL model was developed using PA-US images to predict ALN status. A clinical model, constructed via multivariate logistic regression, served as the baseline for comparison. Subsequently, a nomogram incorporating the DL model and independent clinical parameters was developed. The performance of the models was evaluated through discrimination, calibration, and clinical applicability.
RESULTS: A total of 324 patients (mean age ± standard deviation, 51.0 ± 10.9 years) were included in the study and were divided into a development cohort (n = 259 [79.9%]) and a test cohort (n = 65 [20.1%]). The clinical model incorporating three independent clinical parameters yielded an area under the curve (AUC) of 0.775 (95% confidence interval [CI], 0.711-0.829) in the training cohort and 0.783 (95% CI, 0.654-0.897) in the test cohort for predicting ALN status. In comparison, the nomogram showed superior predictive performance, with an AUC of 0.906 (95% CI, 0.867-0.940) in the training cohort and 0.868 (95% CI, 0.769-0.954) in the test cohort. Decision curve analysis further confirmed the nomogram's clinical applicability, demonstrating a better net benefit across relevant threshold probabilities.
CONCLUSION: This study highlights the effectiveness of attention-guided PA imaging in breast cancer patients, providing novel nomograms for individualized clinical decision-making in predicting ALN status.
PMID:39848886 | DOI:10.1016/j.acra.2024.12.020
Advanced Distance-Resolved Evaluation of the Perienhancing Tumor Areas with FLAIR Hyperintensity Indicates Different ADC Profiles by MGMT Promoter Methylation Status in Glioblastoma
AJNR Am J Neuroradiol. 2025 Jan 23. doi: 10.3174/ajnr.A8493. Online ahead of print.
ABSTRACT
BACKGROUND AND PURPOSE: Whether differences in the O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status of glioblastoma (GBM) are reflected in MRI markers remains largely unknown. In this work, we analyze the ADC in the perienhancing infiltration zone of GBM according to the corresponding MGMT status by using a novel distance-resolved 3D evaluation.
MATERIALS AND METHODS: One hundred one patients with IDH wild-type GBM were retrospectively analyzed. GBM was segmented in 3D with deep learning. Tissue with FLAIR hyperintensity around the contrast-enhanced tumor was divided into concentric distance-resolved subvolumes. Mean ADC was calculated for the 3D tumor core and for the distance-resolved volumes around the core. Differences in group mean ADC between patients with MGMT promoter methylated (mMGMT, n = 43) and MGMT promoter unmethylated (uMGMT, n = 58) GBM were analyzed with the Wilcoxon signed-rank test.
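The distance-resolved evaluation can be pictured with the sketch below: the FLAIR-hyperintense region around the segmented core is divided into concentric one-voxel shells using a distance transform, and mean ADC is computed per shell; the masks and ADC values here are synthetic stand-ins, not the study's segmentations.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# synthetic volumes standing in for the tumor core, FLAIR hyperintensity, and ADC map
shape = (64, 64, 64)
zz, yy, xx = np.ogrid[:64, :64, :64]
r = np.sqrt((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2)
core  = r < 8                      # contrast-enhancing tumor core
flair = (r >= 8) & (r < 20)        # perienhancing FLAIR-hyperintense zone
adc   = 800 + 10 * r + np.random.default_rng(0).normal(0, 20, shape)  # ADC rising with distance

# distance (in voxels) from the core surface, evaluated everywhere outside the core
dist = distance_transform_edt(~core)

# mean ADC in concentric 1-voxel shells within the FLAIR zone
for shell in range(1, 6):
    mask = flair & (dist > shell - 1) & (dist <= shell)
    print(f"shell {shell}: mean ADC = {adc[mask].mean():.0f}")
```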
RESULTS: For both mMGMT and uMGMT GBM, mean ADC values around the tumor core significantly increased as a function of distance from the core toward the periphery of the perienhancing FLAIR hyperintensity (approximately 10% increase within 5 voxels, P < .001). While group mean ADC in the tumor core was not significantly different, the distance-resolved ADC profile around the core was approximately 10% higher in mMGMT than in uMGMT GBM (P < 10⁻⁸ at a 5-voxel distance from the tumor core).
CONCLUSIONS: Distance-resolved volumetric ADC analysis around the tumor core reveals tissue signatures of GBM imperceptible to the human eye on conventional MRI. The different ADC profiles around the core suggest epigenetically influenced differences in perienhancing tissue characteristics between mMGMT and uMGMT GBM.
PMID:39848779 | DOI:10.3174/ajnr.A8493
Contrast-enhanced ultrasound-based AI model for multi-classification of focal liver lesions
J Hepatol. 2025 Jan 21:S0168-8278(25)00018-2. doi: 10.1016/j.jhep.2025.01.011. Online ahead of print.
ABSTRACT
BACKGROUND & AIMS: Accurate multi-classification is the prerequisite for reasonable management of focal liver lesions (FLLs). Ultrasound is the most common imaging examination but lacks accuracy for this task. Contrast-enhanced ultrasound (CEUS) offers better performance but relies heavily on experience. Therefore, we aimed to develop a CEUS-based artificial intelligence (AI) model for FLL multi-classification and evaluate its performance in multicenter clinical tests.
METHODS: From January 2017 to December 2023, CEUS videos, immunohistochemical biomarkers, and clinical information of solid FLLs >1 cm in adults were collected from 52 centers to build and test the model, which classifies FLLs into six types: hepatocellular carcinoma, hepatic metastasis, intrahepatic cholangiocarcinoma, hepatic hemangioma, hepatic abscess, and others. First, Module-Disease, Module-Biomarker, and Module-Clinic were built on training set A and the validation set. Then, the three modules were aggregated as Model-DCB on training set B and the internal test set. Model-DCB performance was compared with that of CEUS and MRI radiologists on three external test sets.
RESULTS: In total, 3725 FLLs from 52 centers were divided into training set A (n=2088), the validation set (n=592), training set B (n=234), the internal test set (n=110), and external test sets A (n=113), B (n=276), and C (n=312). On external test sets A, B, and C, Model-DCB achieved significantly better performance (accuracy 0.85-0.86) than junior CEUS radiologists (0.59-0.73), and performance comparable to that of senior CEUS radiologists (0.79-0.85) and senior MRI radiologists (0.82-0.86). In multiple subgroup analyses of demographic characteristics, tumor characteristics, and ultrasound devices, its accuracy ranged from 0.79 to 0.92.
CONCLUSIONS: CEUS-based Model-DCB provides accurate multi-classification of FLLs. It holds promise to benefit a wide range of patients, especially those in remote suburban areas who have difficulty accessing MRI.
IMPACT AND IMPLICATIONS: Ultrasound is the most common imaging examination for screening focal liver lesions (FLLs), but it lacks accuracy for multi-classification, which is the prerequisite for reasonable management. Contrast-enhanced ultrasound (CEUS) offers better diagnostic performance but relies heavily on the radiologist's experience. We developed a CEUS-based model (Model-DCB) that can assist junior CEUS radiologists in achieving diagnostic ability comparable to that of senior CEUS radiologists and senior MRI radiologists. The combination of an ultrasound device, CEUS examination, and Model-DCB enables even patients in remote areas to obtain excellent diagnostic performance through examination by junior radiologists.
CLINICAL TRIAL: NCT04682886.
PMID:39848548 | DOI:10.1016/j.jhep.2025.01.011
Structural and functional alterations in hypothalamic subregions in male patients with alcohol use disorder
Drug Alcohol Depend. 2025 Jan 15;268:112554. doi: 10.1016/j.drugalcdep.2025.112554. Online ahead of print.
ABSTRACT
BACKGROUND: The hypothalamus is involved in stress regulation and reward processing, with its various nuclei exhibiting unique functions and connections. However, human neuroimaging studies of hypothalamic subregions in drug addiction are limited. This study examined the volumes and functional connectivity of hypothalamic subregions in individuals with alcohol use disorder (AUD).
METHOD: The study included 24 male patients with AUD who had maintained abstinence and 24 healthy male controls, all of whom underwent brain structural and resting-state functional magnetic resonance imaging. The hypothalamus was segmented into five subunits using a deep learning-based algorithm, with comparisons of volumes and functional connectivity (FC) between the two groups. The relationships between these measures and alcohol-related characteristics were examined in the AUD group.
RESULTS: Findings indicated lower volumes in the anterior-superior (corrected-p < 0.001) and tuberal-superior subunits (corrected-p = 0.002) and altered FC of these and the anterior-inferior subunit among AUD patients (corrected-p < 0.05). Moreover, greater disease severity and a longer history of heavy drinking correlated with lower volumes in the anterior-superior (r = -0.42, p = 0.045) and tuberal-superior subregions (r = -0.61, p = 0.013), respectively. Conversely, a longer abstinence duration was associated with larger volumes in the anterior-superior (r = 0.56, p = 0.008) and tuberal-superior subunits (r = 0.40, p = 0.048) and with higher FC between the tuberal-superior hypothalamus and the thalamus, caudate, and anterior cingulate cortex (r = 0.55, p = 0.014).
CONCLUSIONS: Our results suggest that specific regional alterations within the hypothalamus, particularly the superior subregions, are associated with AUD, and more importantly, that these alterations may be reversible with prolonged abstinence.
PMID:39848134 | DOI:10.1016/j.drugalcdep.2025.112554
Multiscale feature enhanced gating network for atrial fibrillation detection
Comput Methods Programs Biomed. 2025 Jan 20;261:108606. doi: 10.1016/j.cmpb.2025.108606. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Atrial fibrillation (AF) is a significant cause of life-threatening heart disease due to its potential to lead to stroke and heart failure. Although deep learning-assisted diagnosis of AF based on ECG holds significance in clinical settings, it remains unsatisfactory due to insufficient consideration of noise and redundant features. In this work, we propose a novel multiscale feature-enhanced gating network (MFEG Net) for AF diagnosis.
METHOD: The network integrates multiscale convolution, adaptive feature enhancement (FE), and dynamic temporal processing. The multiscale convolution helps capture global and local information. The FE module consists of a soft-threshold residual shrinkage component, a dilated convolution module, and a Squeeze-and-Excitation (SE) module, eliminating redundant features and emphasizing effective features. The design allows the network to focus on the most relevant AF features, thereby enhancing its robustness and accuracy in the presence of noise and irrelevant information. The dynamic temporal module helps the network learn and recognize the time dependence associated with AF. The novel design endows the model with excellent robustness to cope with random noise in real-world environments.
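As a concrete reference for one ingredient of the FE module, the sketch below shows a standard 1-D Squeeze-and-Excitation block as commonly defined in the literature; the channel count and reduction ratio are arbitrary, and the block is generic rather than MFEG Net's exact implementation.

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-Excitation for 1-D ECG feature maps: global-average-pool each
    channel, learn per-channel gates, and rescale the feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, length)
        w = self.gate(x.mean(dim=-1))          # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)             # reweight channels

feat = torch.randn(2, 64, 300)                 # e.g. 64-channel features over an ECG segment
print(SEBlock1d(64)(feat).shape)               # torch.Size([2, 64, 300])
```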
RESULT: Compared with the state-of-the-art methods, our model exhibits excellent classification performance with an accuracy of 0.930, an F1 score of 0.883, and remarkable resilience to noise interference on the PhysioNet Challenge 2017 dataset. Moreover, the model was trained on the CinC2017 database and validated on the CPSC2018 database and AFDB database, achieving accuracies of 0.908 and 0.938, respectively.
CONCLUSION: The excellent classification performance of MFEG Net, coupled with its robustness in processing noisy electrocardiogram signals, makes it a powerful method for automatic atrial fibrillation detection. This method represents significant progress over state-of-the-art approaches and may alleviate the burden of manual diagnosis for clinicians.
PMID:39847993 | DOI:10.1016/j.cmpb.2025.108606