Deep learning

Effects of Deep Learning-Based Reconstruction on the Quality of Accelerated Contrast-Enhanced Neck MRI

Wed, 2025-04-30 06:00

Korean J Radiol. 2025 May;26(5):446-445. doi: 10.3348/kjr.2024.1059.

ABSTRACT

OBJECTIVE: To compare the quality of deep learning-reconstructed turbo spin-echo (DL-TSE) and conventionally interpolated turbo spin-echo (Conv-TSE) techniques in contrast-enhanced MRI of the neck.

MATERIALS AND METHODS: Contrast-enhanced T1-weighted DL-TSE and Conv-TSE images were acquired from 106 patients using 3T scanners. DL-TSE employed a closed-source, 'work-in-progress' (WIP No. 1062, iTSE, version 10; Siemens Healthineers) algorithm for interpolation and denoising to achieve the same in-plane resolution (axial: 0.26 × 0.26 mm²; coronal: 0.29 × 0.29 mm²) while reducing scan times by 15.9% and 52.6% for axial and coronal scans, respectively. The full width at half maximum (FWHM) and percent signal ghosting were measured using stationary and flow phantom scans, respectively. In patient images, non-uniformity (NU), contrast-to-noise ratio (CNR), and regional mucosal FWHM were evaluated. Two neuroradiologists visually rated the patient images for overall quality, sharpness, regional mucosal conspicuity, artifacts, and lesions using a 5-point Likert scale.
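
To make the sharpness metric concrete, the sketch below shows one common way to measure FWHM from a 1-D intensity profile drawn across a phantom structure, using linear interpolation at the half-maximum crossings; the function name, sampling assumptions, and the Gaussian example are illustrative and not taken from the study.

    import numpy as np

    def fwhm(profile, spacing_mm=1.0):
        """FWHM of a 1-D intensity profile, in millimetres, using linear
        interpolation at the half-maximum crossings."""
        profile = np.asarray(profile, dtype=float)
        profile = profile - profile.min()            # remove baseline
        half = profile.max() / 2.0
        above = np.where(profile >= half)[0]
        if above.size < 2:
            return 0.0
        left, right = above[0], above[-1]
        # interpolate the left and right half-maximum crossings between samples
        x_left = (left - 1) + (half - profile[left - 1]) / (profile[left] - profile[left - 1]) if left > 0 else float(left)
        x_right = right + (profile[right] - half) / (profile[right] - profile[right + 1]) if right < profile.size - 1 else float(right)
        return (x_right - x_left) * spacing_mm

    # Example: a Gaussian profile with sigma = 2 samples and 0.26 mm pixel spacing
    x = np.arange(-20, 21)
    print(round(fwhm(np.exp(-x**2 / 8.0), spacing_mm=0.26), 2))   # roughly 2.355 * 2 * 0.26 = 1.2 mm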

RESULTS: FWHM in the stationary phantom scan was consistently sharper in DL-TSE. The percent signal ghosting outside the flow phantom was lower in DL-TSE (0.06% vs. 0.14%) but higher within the phantom (8.92% vs. 1.75%) compared to Conv-TSE. In patient scans, DL-TSE showed non-inferior NU and higher CNR. Regional mucosal FWHM was significantly better in DL-TSE, particularly in the oropharynx (coronal: 1.08 ± 0.31 vs. 1.52 ± 0.46 mm) and hypopharynx (coronal: 1.26 ± 0.35 vs. 1.91 ± 0.56 mm) (both P < 0.001). DL-TSE demonstrated higher overall image quality (axial: 4.61 ± 0.49 vs. 3.32 ± 0.54) and sharpness (axial: 4.40 ± 0.56 vs. 3.11 ± 0.53) (both P < 0.001). In addition, mucosal conspicuity was improved, especially in the oropharynx (axial: 4.41 ± 0.67 vs. 3.40 ± 0.69) and hypopharynx (axial: 4.45 ± 0.58 vs. 3.58 ± 0.63) (both P < 0.001). Extracorporeal ghost artifacts were reduced in DL-TSE (axial: 4.32 ± 0.60 vs. 3.90 ± 0.71, P < 0.001) but artifacts overlapping anatomical structures were slightly more pronounced (axial: 3.78 ± 0.74 vs. 3.95 ± 0.72, P < 0.001). Lesions were detected with higher confidence in DL-TSE.

CONCLUSION: DL-based reconstruction applied to accelerated neck MRI improves overall image quality, sharpness, mucosal conspicuity in motion-prone regions, and lesion detection confidence. Despite more pronounced ghost artifacts overlapping anatomical structures, DL-TSE enables substantial scan time reduction while enhancing diagnostic performance.

PMID:40307199 | DOI:10.3348/kjr.2024.1059

Categories: Literature Watch

Real-time morphological and dosimetric adaptation in nasopharyngeal carcinoma radiotherapy: insights from autosegmented fractional fan-beam CT

Wed, 2025-04-30 06:00

Radiat Oncol. 2025 Apr 30;20(1):68. doi: 10.1186/s13014-025-02643-6.

ABSTRACT

BACKGROUND: To quantify morphological and dosimetric variations in nasopharyngeal carcinoma (NPC) radiotherapy via autosegmented fan-beam computed tomography (FBCT) and to inform decision-making regarding appropriate objectives and optimal timing for adaptive radiotherapy (ART).

METHODS: This retrospective study analyzed 23 NPC patients (681 FBCT scans) treated at Sun Yat-sen Cancer Center from August 2022 to May 2024. The inclusion criterion was as follows: ≥1 weekly FBCT via a CT-linac with ≤ 2 fractions between scans. Four deep learning-based autosegmentation models were developed to assess weekly volume, Dice similarity coefficient (DSC), and dose variations in organs at risk (OARs) and target volumes.
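
As a point of reference for the weekly DSC tracking described above, a minimal Dice similarity coefficient computation over binary masks might look like the following; the function and variable names are illustrative, not from the study.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient (DSC) between two binary masks, e.g. an
        autosegmented OAR on a weekly FBCT versus the planning contour."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom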

RESULTS: A systematic review of autosegmentation on FBCT scans demonstrated satisfactory accuracy overall, and missegmentations were manually corrected. Linear decreases in volume and/or DSC were observed in the parotid glands, submandibular glands, thyroid, spinal cord, and target volumes (R² > 0.7). Linear dose variations included coverage of the low-risk planning target volume (-3.01%), the mean dose to the parotid glands (+2.45 Gy) and thyroid (+1.18 Gy), the D1% of the brainstem (+0.56 Gy), and the maximum dose to the spinal cord (+1.12 Gy). The greatest reduction in target volume coverage was noted in PGTVns, reaching 7.15%. The most significant dose changes occurred during weeks 3-6.

CONCLUSIONS: During NPC radiotherapy, the progressive dose deviations may not be corrected through repositioning alone, necessitating ART intervention. As dose variations in OARs rarely exceed 3 Gy and target coverage fluctuations remain within 10%, ART does not need to be performed frequently, and weeks 3-6 represent the most appropriate window.

PMID:40307931 | DOI:10.1186/s13014-025-02643-6

Categories: Literature Watch

MSRP-TODNet: a multi-scale reinforced region wise analyser for tiny object detection

Wed, 2025-04-30 06:00

BMC Res Notes. 2025 Apr 30;18(1):200. doi: 10.1186/s13104-025-07263-7.

ABSTRACT

OBJECTIVE: Detecting small, faraway objects in real-time surveillance is challenging due to limited pixel representation, affecting classifier performance. Deep Learning (DL) techniques generate feature maps to enhance detection, but conventional methods suffer from high computational costs. To address this, we propose Multi-Scale Region-wise Pixel Analysis with GAN for Tiny Object Detection (MSRP-TODNet). The model is trained and tested on VisDrone VID 2019 and MS-COCO datasets. First, images undergo two-fold pre-processing using Improved Wiener Filter (IWF) for artifact removal and Adjusted Contrast Enhancement Method (ACEM) for blurring correction. The Multi-Agent Reinforcement Learning (MARL) algorithm splits the pre-processed image into four regions, analyzing each pixel to generate feature maps. These are processed by the Enhanced Feature Pyramid Network (EFPN), which merges them into a single feature map. Finally, a Generative Adversarial Network (GAN) detects objects with bounding boxes.
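
As a rough illustration of the pre-processing and region-wise split described above, the sketch below substitutes a standard Wiener filter for the Improved Wiener Filter, a simple percentile contrast stretch for ACEM, and a fixed quadrant split for the MARL-driven region analysis; it is a simplified stand-in, not the authors' implementation.

    import numpy as np
    from scipy.signal import wiener

    def preprocess_and_split(image):
        """Simplified stand-in for the described pipeline: plain Wiener filtering
        (instead of IWF), a percentile contrast stretch (instead of ACEM), and a
        fixed quadrant split (instead of the MARL-driven region analysis)."""
        img = np.asarray(image, dtype=float)
        denoised = wiener(img, mysize=5)                       # noise/artifact suppression
        lo, hi = np.percentile(denoised, (1, 99))
        enhanced = np.clip((denoised - lo) / (hi - lo + 1e-8), 0.0, 1.0)
        h, w = enhanced.shape
        return [enhanced[:h // 2, :w // 2], enhanced[:h // 2, w // 2:],
                enhanced[h // 2:, :w // 2], enhanced[h // 2:, w // 2:]]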

RESULTS: Experimental results on the DOTA dataset demonstrate that MSRP-TODNet outperforms existing state-of-the-art methods. Specifically, it achieves an mAP@0.5 of 84.2%, mAP@0.5:0.95 of 54.1%, and an F1-Score of 84.0%, surpassing improved TPH-YOLOv5, YOLOv7-Tiny, and DRDet by margins of 1.7%-6.1% in detection performance. These results demonstrate the framework's effectiveness for accurate, real-time small object detection in UAV surveillance and aerial imagery.

PMID:40307915 | DOI:10.1186/s13104-025-07263-7

Categories: Literature Watch

Improving the accuracy of prediction models for small datasets of Cytochrome P450 inhibition with deep learning

Wed, 2025-04-30 06:00

J Cheminform. 2025 Apr 30;17(1):66. doi: 10.1186/s13321-025-01015-2.

ABSTRACT

The cytochrome P450 (CYP) superfamily metabolises a wide range of compounds; however, drug-induced CYP inhibition can lead to adverse interactions. Identifying potential CYP inhibitors is crucial for safe drug administration. This study investigated the application of deep learning techniques to the prediction of CYP inhibition, focusing on the challenges posed by limited datasets for the CYP2B6 and CYP2C8 isoforms. To tackle these limitations, we leveraged larger datasets for related CYP isoforms, compiling comprehensive data from public databases containing IC50 values for 12,369 compounds that target seven CYP isoforms. We constructed single-task, fine-tuning, multitask, and multitask models incorporating data imputation of the missing values. Notably, the multitask models with data imputation demonstrated significant improvement in CYP inhibition prediction over the single-task models. Using the most accurate prediction models, we evaluated the inhibitory activity of approved drugs against CYP2B6 and CYP2C8. Among the 1,808 approved drugs analysed, our multitask models with data imputation identified 161 and 154 potential inhibitors of CYP2B6 and CYP2C8, respectively. This study underscores the significant potential of multitask deep learning, particularly when utilising a graph convolutional network with data imputation, to enhance the accuracy of CYP inhibition predictions under conditions of limited data availability. Scientific contribution: This study demonstrates that accurate prediction models can be constructed even with small datasets by utilising related data effectively. In addition, our imputation techniques for missing values significantly improved the prediction accuracy of CYP2B6 and CYP2C8 inhibition.
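
To illustrate the general shape of a multitask setup over isoforms with incomplete labels, here is a minimal PyTorch sketch of a shared prediction head with a masked loss; the paper's actual models are graph convolutional networks with data imputation, and all class, function, and dimension names below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class MultiTaskCYPHead(nn.Module):
        """Shared features -> one inhibition logit per CYP isoform (illustrative;
        the paper's models are graph convolutional networks)."""
        def __init__(self, in_dim, n_isoforms=7):
            super().__init__()
            self.head = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                      nn.Linear(128, n_isoforms))

        def forward(self, shared_features):
            return self.head(shared_features)          # logits, shape (batch, n_isoforms)

    def masked_multitask_loss(logits, labels, label_mask):
        """Binary cross-entropy averaged over observed labels only; isoforms with
        no measurement for a compound get zero weight (imputed values could
        instead be supplied as labels with a full mask)."""
        per_label = nn.functional.binary_cross_entropy_with_logits(logits, labels, reduction="none")
        return (per_label * label_mask).sum() / label_mask.sum().clamp(min=1.0)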

PMID:40307863 | DOI:10.1186/s13321-025-01015-2

Categories: Literature Watch

Artificial intelligence in retinal image analysis for hypertensive retinopathy diagnosis: a comprehensive review and perspective

Wed, 2025-04-30 06:00

Vis Comput Ind Biomed Art. 2025 May 1;8(1):11. doi: 10.1186/s42492-025-00194-x.

ABSTRACT

Hypertensive retinopathy (HR) occurs when the blood vessels of the retina, the photosensitive layer at the back of the eye, are injured owing to high blood pressure. Artificial intelligence (AI) in retinal image analysis (RIA) for HR diagnosis involves the use of advanced computational algorithms and machine learning (ML) strategies to recognize and evaluate signs of HR in retinal images automatically. This review aims to advance the field of HR diagnosis by investigating the latest ML and deep learning techniques, and highlighting their efficacy and capability for early diagnosis and intervention. By analyzing recent advancements and emerging trends, this study seeks to inspire further innovation in automated RIA. In this context, AI shows significant potential for enhancing the accuracy, effectiveness, and consistency of HR diagnoses. This will eventually lead to better clinical results by enabling earlier intervention and precise management of the condition. Overall, the integration of AI into RIA represents a considerable step forward in the early identification and treatment of HR, offering substantial benefits to both healthcare providers and patients.

PMID:40307650 | DOI:10.1186/s42492-025-00194-x

Categories: Literature Watch

Improved Image Quality of Virtual Monochromatic Images with Deep Learning Image Reconstruction Algorithm on Dual-Energy CT in Patients with Pancreatic Ductal Adenocarcinoma

Wed, 2025-04-30 06:00

J Imaging Inform Med. 2025 Apr 30. doi: 10.1007/s10278-025-01514-6. Online ahead of print.

ABSTRACT

This study aimed to evaluate the image quality of virtual monochromatic images (VMIs) reconstructed with deep learning image reconstruction (DLIR) using dual-energy CT (DECT) to diagnose pancreatic ductal adenocarcinoma (PDAC). Fifty patients with histologically confirmed PDAC who underwent multiphasic contrast-enhanced DECT between 2019 and 2022 were retrospectively analyzed. VMIs at 40-100 keV were reconstructed using hybrid iterative reconstruction (ASiR-V 30% and ASiR-V 50%) and DLIR (TFI-M) algorithms. Quantitative analyses included contrast-to-noise ratios (CNR) of the major abdominal vessels, liver, pancreas, and the PDAC. Qualitative image quality assessments included image noise, soft-tissue sharpness, vessel contrast, and PDAC conspicuity. Noise power spectrum (NPS) analysis was performed to examine the variance and spatial frequency characteristics of image noise using a phantom. TFI-M significantly improved image quality compared to ASiR-V 30% and ASiR-V 50%, especially at lower keV levels. VMIs with TFI-M showed reduced image noise and higher pancreas-to-tumor CNR at 40 keV. Qualitative evaluations confirmed DLIR's superiority in noise reduction, tissue sharpness, and vessel conspicuity, with substantial interobserver agreement (κ = 0.61-0.78). NPS analysis demonstrated effective noise reduction across spatial frequencies. DLIR significantly improved the image quality of VMIs on DECT by reducing image noise and increasing CNR, particularly at lower keV levels. These improvements may enhance PDAC detection and assessment, making DLIR a valuable tool for pancreatic cancer imaging.
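
As context for the CNR figures above, one common definition of contrast-to-noise ratio is sketched below, with the noise taken as the background standard deviation; the study may use a different noise estimate, and the names are illustrative.

    import numpy as np

    def contrast_to_noise_ratio(image, roi_mask, background_mask):
        """CNR of a region of interest (e.g. pancreas or tumour) against a reference
        region, with the background standard deviation as the noise estimate."""
        roi, bg = image[roi_mask], image[background_mask]
        return (roi.mean() - bg.mean()) / bg.std(ddof=1)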

PMID:40307592 | DOI:10.1007/s10278-025-01514-6

Categories: Literature Watch

Evaluation of deliverable artificial intelligence-based automated volumetric arc radiation therapy planning for whole pelvic radiation in gynecologic cancer

Wed, 2025-04-30 06:00

Sci Rep. 2025 Apr 30;15(1):15219. doi: 10.1038/s41598-025-99717-y.

ABSTRACT

This study aimed to develop a deep learning (DL)-based deliverable whole pelvic volumetric arc radiation therapy (VMAT) planning workflow for patients with gynecologic cancer using a prototype DL-based automated planning support system, named RatoGuide, and to evaluate its clinical validity. In our hospital, 110 patients with gynecologic cancer were registered. The prescribed dose was 50.4 Gy/28 fr. A DL-based three-dimensional dose prediction model was first trained on the dose distribution and structure data of whole pelvic VMAT plans (n = 100) created on the Monaco treatment planning system (TPS). The structure data of the test data (n = 10) were then input to RatoGuide, and RatoGuide predicted the dose distribution of the whole pelvic VMAT plan (PreDose). We established deliverable plans with the Monaco and Eclipse TPS (DeliDose) based on PreDose and vendor-supplied optimization objectives. Medical physicists then manually developed plans (CliDose) for the test data. Finally, we evaluated and compared the dose distributions and dose constraints of PreDose, DeliDose, and CliDose. In both Eclipse and Monaco, DeliDose was comparable to PreDose for most dose constraints, and planning target volume (PTV) coverage and the maximum doses (Dmax) to the bladder, rectum, and bowel bag were better for DeliDose than for PreDose. Additionally, DeliDose showed no significant difference from CliDose in most dose constraints. The blinded average scores of radiation oncologists for DeliDose and CliDose were 4.2 ± 0.4 and 4.3 ± 0.5, respectively, in Eclipse, and 4.0 ± 0.6 and 3.9 ± 0.5, respectively, in Monaco (5 is the maximum score and 3 is clinically acceptable). These results indicate that RatoGuide can reduce variation in plan quality between hospitals for whole pelvic VMAT irradiation and help develop VMAT plans in a short time.
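
For readers unfamiliar with the dose constraints compared above, the sketch below computes two typical DVH-style quantities, PTV coverage at the prescription dose plus D95, and an OAR Dmax, from a dose grid and structure masks; the metric set and names are illustrative and not the study's exact constraint definitions.

    import numpy as np

    def ptv_coverage(dose, ptv_mask, prescription_gy=50.4):
        """V100 (fraction of the PTV receiving at least the prescription) and D95
        (dose covering 95% of the PTV volume) from a 3-D dose grid in Gy."""
        ptv_dose = dose[ptv_mask]
        return {"V100": float((ptv_dose >= prescription_gy).mean()),
                "D95_Gy": float(np.percentile(ptv_dose, 5))}

    def organ_dmax(dose, organ_mask):
        """Maximum point dose to an organ at risk (bladder, rectum, bowel bag, ...)."""
        return float(dose[organ_mask].max())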

PMID:40307456 | DOI:10.1038/s41598-025-99717-y

Categories: Literature Watch

Deep learning-based classification of coronary arteries and left ventricle using multimodal data for autonomous protocol selection or adjustment in angiography

Wed, 2025-04-30 06:00

Sci Rep. 2025 Apr 30;15(1):15186. doi: 10.1038/s41598-025-99651-z.

ABSTRACT

Optimal selection of X-ray imaging parameters is crucial in coronary angiography and structural cardiac procedures to ensure optimal image quality and minimize radiation exposure. These anatomy-dependent parameters are organized into customizable organ programs, but manual selection of the programs increases workload and complexity. Our research introduces a deep learning algorithm that autonomously detects three target anatomies: the left coronary artery (LCA), right coronary artery (RCA), and left ventricle (LV), based on single X-ray frames without visible vessel structure, and enables adjustment of imaging parameters by choosing the appropriate organ program. We compared three deep learning architectures: ResNet-50 for image data, a Multilayer Perceptron (MLP) for angulation data, and a multimodal approach combining both. The dataset for training and validation included 275 radiographic sequences from clinical examinations, incorporating coronary angiography, left ventriculography, and the corresponding C-arm angulation, using only the first non-contrast frame of each sequence so that the system could be adapted before the actual contrast injection. The dataset was acquired from multiple sites, ensuring variation in acquisition and patient statistics. An independent test set of 146 sequences was used for evaluation. The multimodal model outperformed the others, achieving an average F1 score of 0.82 and an AUC of 0.87, matching expert evaluations. The model effectively classified cardiac anatomies based on pre-contrast angiographic frames without visible coronary or ventricular structures. The proposed deep learning model accurately predicts cardiac anatomy for cine acquisitions, enabling the potential for quick and automatic selection of imaging parameters to optimize image quality and reduce radiation exposure. This model has the potential to streamline clinical workflows, improve diagnostic accuracy, and enhance safety for both patients and operators.
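
A minimal sketch of the multimodal idea described above follows: ResNet-50 image features from a single frame are concatenated with an MLP encoding of the C-arm angulation before classification. Layer sizes, the number of angle inputs, and all names are assumptions for illustration, not the paper's architecture details.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class AnatomyClassifier(nn.Module):
        """Illustrative fusion model: ResNet-50 features from a single non-contrast
        frame concatenated with an MLP encoding of the C-arm angulation, classifying
        LCA vs. RCA vs. LV."""
        def __init__(self, n_classes=3, n_angles=2):
            super().__init__()
            backbone = resnet50(weights=None)
            backbone.fc = nn.Identity()                 # expose the 2048-d image features
            self.image_encoder = backbone
            self.angle_encoder = nn.Sequential(nn.Linear(n_angles, 64), nn.ReLU(),
                                               nn.Linear(64, 64), nn.ReLU())
            self.classifier = nn.Linear(2048 + 64, n_classes)

        def forward(self, frame, angles):
            img_feat = self.image_encoder(frame)        # (batch, 2048)
            ang_feat = self.angle_encoder(angles)       # (batch, 64)
            return self.classifier(torch.cat([img_feat, ang_feat], dim=1))

    # logits = AnatomyClassifier()(torch.randn(4, 3, 224, 224), torch.randn(4, 2))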

PMID:40307429 | DOI:10.1038/s41598-025-99651-z

Categories: Literature Watch

Targeted molecular generation with latent reinforcement learning

Wed, 2025-04-30 06:00

Sci Rep. 2025 Apr 30;15(1):15202. doi: 10.1038/s41598-025-99785-0.

ABSTRACT

Computational methods for generating molecules with specific physiochemical properties or biological activity can greatly assist drug discovery efforts. Deep learning generative models constitute a significant step towards that direction. We introduce a novel approach that utilizes a Reinforcement Learning paradigm, called proximal policy optimization, for optimizing molecules in the latent space of a pretrained generative model. Working in the latent space of a generative model lets us bypass the need for explicitly defining chemical rules when computationally designing molecules. The generation of molecules is achieved through navigating the latent space for identifying regions that correspond to molecules with desired properties. Proximal policy optimization is a state-of-the-art policy gradient algorithm capable of operating in continuous high-dimensional spaces in a sample-efficient manner. We have paired our optimization framework with the latent spaces of two different architectures of autoencoder models showing that the method is agnostic to the underlying architecture. We present results on commonly used benchmarks for molecule optimization that demonstrate that our method has comparable or even superior performance to state-of-the-art approaches. We additionally show how our method can generate molecules that contain a pre-specified substructure while simultaneously optimizing for molecular properties, a task highly relevant to real drug discovery scenarios.
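
As a small illustration of how a property objective can drive search in a generative model's latent space, the sketch below scores a decoded molecule with RDKit's QED drug-likeness metric; the decode_to_smiles callable is a hypothetical stand-in for the pretrained autoencoder, and the paper's actual optimizer is proximal policy optimization rather than this bare reward function.

    from rdkit import Chem
    from rdkit.Chem import QED

    def reward_from_latent(z, decode_to_smiles):
        """Score a latent vector by decoding it with the pretrained generative model
        ('decode_to_smiles' is a hypothetical callable) and evaluating drug-likeness
        with QED; invalid molecules receive zero reward."""
        smiles = decode_to_smiles(z)
        mol = Chem.MolFromSmiles(smiles) if smiles else None
        return 0.0 if mol is None else QED.qed(mol)     # QED is in [0, 1]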

PMID:40307420 | DOI:10.1038/s41598-025-99785-0

Categories: Literature Watch

A deep learning based framework for enhanced reference evapotranspiration estimation: evaluating accuracy and forecasting strategies

Wed, 2025-04-30 06:00

Sci Rep. 2025 Apr 30;15(1):15136. doi: 10.1038/s41598-025-99713-2.

ABSTRACT

Affordable and efficient agricultural methods enhance crop yield and water management by optimizing resources. Precise irrigation relies on accurate estimation of reference evapotranspiration (ETo). Numerous analytical and empirical methods exist to compute ETo, but they are costly, time-consuming, and perform poorly when meteorological data are limited. This study first evaluated the performance of three deep learning sequential models for predicting daily ETo from its temporal characteristics: long short-term memory (LSTM), Neural Basis Expansion Analysis for Time Series (N-BEATS), and the Temporal Convolutional Network (TCN), with TCN serving as the baseline for comparison. Because TCN performed best, it was further used to evaluate two ETo prediction strategies, which constitutes the second objective of the paper. In the standard approach, historical data are used to predict future ETo directly with the TCN. In the recursive approach, the TCN predicts future climatological data, from which ETo is then computed; this supports irrigation planning in data-scarce situations. The results demonstrate that the TCN model performed satisfactorily, with Nash-Sutcliffe Efficiency (NSE) = 0.99, Theil's U2 = 0.005, RMSE = 0.092, and MAE = 0.048. ETo values computed with the recursive strategy were also more accurate than those from the standard approach. The comparative study of sequential architectures thus revealed that TCN outperformed the LSTM and N-BEATS models, is an efficient method for predicting the ETo time series, and could assist in the precise management of water resources under data scarcity.
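
For reference, the evaluation metrics quoted above can be computed from paired observed and predicted ETo series as sketched below (NSE, RMSE, and MAE; Theil's U2 is omitted); the function name is illustrative.

    import numpy as np

    def forecast_metrics(observed, predicted):
        """Nash-Sutcliffe Efficiency, RMSE, and MAE for a daily ETo forecast."""
        obs = np.asarray(observed, dtype=float)
        pred = np.asarray(predicted, dtype=float)
        resid = obs - pred
        return {"NSE": 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2),
                "RMSE": float(np.sqrt(np.mean(resid**2))),
                "MAE": float(np.mean(np.abs(resid)))}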

PMID:40307385 | DOI:10.1038/s41598-025-99713-2

Categories: Literature Watch

Effective reduction of unnecessary biopsies through a deep-learning-assisted aggressive prostate cancer detector

Wed, 2025-04-30 06:00

Sci Rep. 2025 Apr 30;15(1):15211. doi: 10.1038/s41598-025-99795-y.

ABSTRACT

Despite being one of the most prevalent cancers, prostate cancer (PCa) shows a significantly high survival rate, provided there is timely detection and treatment. Currently, several screening and diagnostic tests are required to detect PCa. These tests are often invasive, requiring either a biopsy (Gleason score and ISUP grade) or blood tests (PSA). Computational methods have been shown to help this process, using multiparametric MRI (mpMRI) data to detect PCa and effectively providing value during the diagnosis and monitoring stages. While delineating lesions requires a high degree of experience and expertise from radiologists and is subject to considerable inter-observer variability, often leading to inconsistent readings, computational models can leverage the information in mpMRI to locate lesions with a high degree of certainty. By considering as positive samples only those with ISUP ≥ 2, we can train aggressive index lesion detection models. The main advantage of this approach is that, by focusing only on aggressive disease, the output of such a model can also serve as an indication for biopsy, effectively reducing unnecessary biopsies. In this work, we utilize both the highly heterogeneous ProstateNet dataset and the PI-CAI dataset to develop accurate aggressive disease detection models.

PMID:40307379 | DOI:10.1038/s41598-025-99795-y

Categories: Literature Watch

A multimodal and fully automated system for prediction of pathological complete response to neoadjuvant chemotherapy in breast cancer

Wed, 2025-04-30 06:00

Sci Adv. 2025 May 2;11(18):eadr1576. doi: 10.1126/sciadv.adr1576. Epub 2025 Apr 30.

ABSTRACT

Accurately predicting pathological complete response (pCR) before neoadjuvant chemotherapy (NAC) is crucial for patients with breast cancer. In this study, we developed a multimodal integrated fully automated pipeline system (MIFAPS) for forecasting pCR to NAC, using a multicenter and prospective dataset of 1004 patients with locally advanced breast cancer, incorporating pretreatment magnetic resonance imaging, whole slide images, and clinical risk factors. The results demonstrated that MIFAPS offered a favorable predictive performance in both the pooled external test set [area under the curve (AUC) = 0.882] and the prospective test set (AUC = 0.909). In addition, MIFAPS significantly outperformed single-modality models (P < 0.05). Furthermore, the high deep learning scores were associated with immune-related pathways and the promotion of antitumor cells in the microenvironment during biological basis exploration. Overall, our study demonstrates a promising approach for improving the prediction of pCR to NAC in patients with breast cancer through the integration of multimodal data.

PMID:40305609 | DOI:10.1126/sciadv.adr1576

Categories: Literature Watch

Massive experimental quantification allows interpretable deep learning of protein aggregation

Wed, 2025-04-30 06:00

Sci Adv. 2025 May 2;11(18):eadt5111. doi: 10.1126/sciadv.adt5111. Epub 2025 Apr 30.

ABSTRACT

Protein aggregation is a pathological hallmark of more than 50 human diseases and a major problem for biotechnology. Methods have been proposed to predict aggregation from sequence, but these have been trained and evaluated on small and biased experimental datasets. Here we directly address this data shortage by experimentally quantifying the aggregation of >100,000 protein sequences. This unprecedented dataset reveals the limited performance of existing computational methods and allows us to train CANYA, a convolution-attention hybrid neural network that accurately predicts aggregation from sequence. We adapt genomic neural network interpretability analyses to reveal CANYA's decision-making process and learned grammar. Our results illustrate the power of massive experimental analysis of random sequence-spaces and provide an interpretable and robust neural network model to predict aggregation.
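
To give a sense of what a convolution-attention hybrid over amino-acid sequences can look like, here is a generic PyTorch sketch; it is not the published CANYA architecture, and all layer sizes, the vocabulary size, and names are assumptions.

    import torch
    import torch.nn as nn

    class ConvAttentionClassifier(nn.Module):
        """Generic convolution-attention hybrid for amino-acid sequences: embed
        residues, pick up local motifs with a 1-D convolution, mix positions with
        self-attention, pool, and classify."""
        def __init__(self, vocab_size=21, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
            self.conv = nn.Conv1d(d_model, d_model, kernel_size=5, padding=2)
            self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
            self.out = nn.Linear(d_model, 1)              # aggregation propensity logit

        def forward(self, tokens):                        # tokens: (batch, seq_len) int codes
            x = self.embed(tokens)                        # (batch, seq_len, d_model)
            x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
            x, _ = self.attn(x, x, x)
            return self.out(x.mean(dim=1)).squeeze(-1)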

PMID:40305601 | DOI:10.1126/sciadv.adt5111

Categories: Literature Watch

Comparison of Multimodal Deep Learning Approaches for Predicting Clinical Deterioration in Ward Patients: An Observational Cohort Study

Wed, 2025-04-30 06:00

J Med Internet Res. 2025 Apr 30. doi: 10.2196/75340. Online ahead of print.

ABSTRACT

BACKGROUND: Implementing machine learning models to identify clinical deterioration on the wards is associated with decreased morbidity and mortality. However, these models have high false positive rates and only use structured data.

OBJECTIVE: We aim to compare models with and without information from clinical notes for predicting deterioration.

METHODS: Adults admitted to the wards at the University of Chicago (development cohort) and University of Wisconsin-Madison (external validation cohort) were included. Predictors consisted of structured and unstructured variables extracted from notes as Concept Unique Identifiers (CUIs). We parameterized CUIs in five ways: Standard Tokenization (ST), ICD Rollup using Tokenization (ICDR-T), ICD Rollup using Binary Variables (ICDR-BV), CUIs as SapBERT Embeddings (SE), and CUI Clustering using SapBERT embeddings (CC). Each parameterization method combined with structured data and structured data-only were compared for predicting intensive care unit transfer or death in the next 24 hours using deep recurrent neural networks.
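
As a rough illustration of the SapBERT-based parameterizations (SE and CC), the sketch below embeds CUI preferred terms with a SapBERT checkpoint via Hugging Face transformers; the model name is the publicly released checkpoint as far as we are aware, the [CLS]-vector choice is one common option, and the function name is illustrative rather than from the study.

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Publicly released SapBERT checkpoint (name assumed; substitute a local copy if needed).
    MODEL = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModel.from_pretrained(MODEL).eval()

    def embed_cui_terms(terms):
        """Embed the preferred terms of a batch of CUIs; the [CLS] vector of each
        term is used as the CUI embedding (one common choice)."""
        batch = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state     # (batch, seq_len, 768)
        return hidden[:, 0, :]

    # vectors = embed_cui_terms(["Dialysis procedure", "Chemotherapy"])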

RESULTS: The development (UC) cohort included 284,302 patients, while the external validation (UW) cohort included 248,055. In total, 4.9% (N=26,281) of patients experienced the outcome. The SE model achieved the highest AUPRC (0.208), followed by CC (0.199) and the structured-only model (0.199), ICDR-BV (0.194), ICDR-T (0.166), and ST (0.158). The CC and structured-only models achieved the highest AUROC (0.870), followed by ICDR-T (0.867), ICDR-BV (0.866), ST (0.860), and SE (0.859). In terms of sensitivity and positive predictive value, the CC model achieved the greatest positive predictive value (12.53%) and sensitivity (52.15%) at the cutoff that flagged 5% of the observations in the test set. At the 15% cutoff, the ICDR-T, CC, and ICDR-BV models tied for the highest positive predictive value at 5.67%, while their sensitivities were 70.95%, 70.92%, and 70.86%, respectively. All models were well calibrated, achieving Brier scores in the range of 0.011-0.012. The modified IG method revealed that CUIs corresponding to terms such as "NPO - Nothing by mouth", "Chemotherapy", "Transplanted tissue", and "Dialysis procedure" were most predictive of deterioration.

CONCLUSIONS: A multimodal model combining structured data with embeddings using SapBERT had the highest AUPRC, but performance was similar between models with and without CUIs. Although the addition of CUIs from notes to structured data did not meaningfully improve model performance for predicting clinical deterioration, models using CUIs could provide clinicians with relevant information and additional clinical context for supporting decision-making.

PMID:40305429 | DOI:10.2196/75340

Categories: Literature Watch

Optimizing Immunotherapy: The Synergy of Immune Checkpoint Inhibitors with Artificial Intelligence in Melanoma Treatment

Wed, 2025-04-30 06:00

Biomolecules. 2025 Apr 16;15(4):589. doi: 10.3390/biom15040589.

ABSTRACT

Immune checkpoint inhibitors (ICIs) have transformed melanoma treatment; however, predicting patient responses remains a significant challenge. This study reviews the potential of artificial intelligence (AI) to optimize ICI therapy in melanoma by integrating various diagnostic tools. Through a comprehensive literature review, we analyzed studies on AI applications in melanoma immunotherapy, focusing on predictive modeling, biomarker identification, and treatment response prediction. Key findings highlight the efficacy of AI in improving ICI outcomes. Machine learning models successfully identified prognostic cytokine signatures linked to nivolumab clearance. The combination of AI with RNA-seq analysis showed potential for the development of personalized ICI treatment. A machine learning-based approach was able to assess the risk-benefit ratio for predicting immune-related adverse events (irAEs) using electronic health record (EHR) data. Deep learning algorithms demonstrated high accuracy in tumor microenvironment analysis, including tumor region identification and lymphocyte detection. AI-assisted quantification of tumor-infiltrating lymphocytes (TILs) proved prognostically valuable in primary melanoma and predictive of anti-PD-1 therapy response in metastatic cases. Integrating multiple diagnostic modalities, such as CT imaging and laboratory data, modestly enhanced predictive performance for 1-year survival in advanced cancers treated with immunotherapy. These findings underscore the potential of AI-driven approaches to refine biomarker identification, treatment prediction, and patient stratification in melanoma immunotherapy. While promising, these approaches still face clinical validation and implementation challenges.

PMID:40305346 | DOI:10.3390/biom15040589

Categories: Literature Watch

Heterogeneous Riemannian Few-Shot Learning Network

Wed, 2025-04-30 06:00

IEEE Trans Neural Netw Learn Syst. 2025 Apr 30;PP. doi: 10.1109/TNNLS.2025.3561930. Online ahead of print.

ABSTRACT

How to learn and accurately distinguish new concepts from few samples, as humans do, is a long-standing concern in artificial intelligence (AI). Studies in brain science and neuroscience have shown that human brain perception is based on nonlinear manifolds, and high-dimensional manifolds can facilitate concept learning in neural circuits. Based on this inspiration, in this paper, we propose a heterogeneous Riemannian few-shot learning network (HRFL-Net), which is the first few-shot learning method to perform end-to-end deep learning on heterogeneous Riemannian manifolds. Specifically, to enhance the geometric invariance of the image representation, the image features are projected into three heterogeneous Riemannian manifold spaces. Then, the implicit Riemannian kernel function maps the manifolds to the separable high-dimensional reproducing Hilbert space. It is assumed that the embedded kernel features of the complementary manifolds are mapped to the same common subspace. Thus, a novel neural network-based Riemannian metric learning method is designed to solve the subspace feature vectors by imposing orthogonal normalized projection, which overcomes the data extension limitation of the Riemannian metric. Finally, with the optimization objective of increasing the interclass distance and decreasing the intraclass distance in Hilbert space, the HRFL-Net is trained with end-to-end stochastic optimization, and the optimal aggregation subspace is learned during the gradient descent process. Thus, the proposed HRFL-Net can be easily generalized to challenging nonconvex data. The evaluation of four public datasets shows that the proposed HRFL-Net has significant superiority and also achieves competitive results compared with the state-of-the-art methods.

PMID:40305249 | DOI:10.1109/TNNLS.2025.3561930

Categories: Literature Watch

Deep Rib Fracture Instance Segmentation and Classification from CT on the RibFrac Challenge

Wed, 2025-04-30 06:00

IEEE Trans Med Imaging. 2025 Apr 30;PP. doi: 10.1109/TMI.2025.3565514. Online ahead of print.

ABSTRACT

Rib fractures are a common and potentially severe injury that can be challenging and labor-intensive to detect in CT scans. While there have been efforts to address this field, the lack of large-scale annotated datasets and evaluation benchmarks has hindered the development and validation of deep learning algorithms. To address this issue, the RibFrac Challenge was introduced, providing a benchmark dataset of over 5,000 rib fractures from 660 CT scans, with voxel-level instance mask annotations and diagnosis labels for four clinical categories (buckle, nondisplaced, displaced, or segmental). The challenge includes two tracks: a detection (instance segmentation) track evaluated by an FROC-style metric and a classification track evaluated by an F1-style metric. During the MICCAI 2020 challenge period, 243 results were evaluated, and seven teams were invited to participate in the challenge summary. The analysis revealed that several top rib fracture detection solutions achieved performance comparable to or even better than that of human experts. Nevertheless, the current rib fracture classification solutions are hardly clinically applicable, which remains an interesting area for future work. As an active benchmark and research resource, the data and online evaluation of the RibFrac Challenge are available at the challenge website (https://ribfrac.grand-challenge.org/). In addition, we further analyzed the impact of two post-challenge advancements, large-scale pretraining and rib segmentation, based on our internal baseline for rib fracture detection. These findings lay a foundation for future research and development in AI-assisted rib fracture diagnosis.

PMID:40305244 | DOI:10.1109/TMI.2025.3565514

Categories: Literature Watch

Molecular Modelling in Bioactive Peptide Discovery and Characterisation

Wed, 2025-04-30 06:00

Biomolecules. 2025 Apr 3;15(4):524. doi: 10.3390/biom15040524.

ABSTRACT

Molecular modelling is a vital tool in the discovery and characterisation of bioactive peptides, providing insights into their structural properties and interactions with biological targets. Many models predicting bioactive peptide function or structure rely on their intrinsic properties, including the influence of amino acid composition, sequence, and chain length, which impact stability, folding, aggregation, and target interaction. Homology modelling predicts peptide structures based on known templates. Peptide-protein interactions can be explored using molecular docking techniques, but there are challenges related to the inherent flexibility of peptides, which can be addressed by more computationally intensive approaches that consider their movement over time, called molecular dynamics (MD). Virtual screening of many peptides, usually against a single target, enables rapid identification of potential bioactive peptides from large libraries, typically using docking approaches. The integration of artificial intelligence (AI) has transformed peptide discovery by leveraging large amounts of data. AlphaFold is a general protein structure prediction tool based on deep learning that has greatly improved the predictions of peptide conformations and interactions, in addition to providing estimates of model accuracy at each residue which greatly guide interpretation. Peptide function and structure prediction are being further enhanced using Protein Language Models (PLMs), which are large deep-learning-derived statistical models that learn computer representations useful to identify fundamental patterns of proteins. Recent methodological developments are discussed in the context of canonical peptides, as well as those with modifications and cyclisations. In designing potential peptide therapeutics, the main outstanding challenge for these methods is the incorporation of diverse non-canonical amino acids and cyclisations.

PMID:40305228 | DOI:10.3390/biom15040524

Categories: Literature Watch

Multi-task Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer

Wed, 2025-04-30 06:00

Cancer Res. 2025 Apr 30. doi: 10.1158/0008-5472.CAN-24-4190. Online ahead of print.

ABSTRACT

Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer (LAGC) receiving neoadjuvant chemotherapy (NAC), providing timely guidance for clinical decision-making. However, current approaches to evaluate LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 LAGC patients to develop and validate a multi-task deep learning model named co-attention tri-oriented spatial Mamba (CTSMamba) to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and the performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all of the cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, potentially providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies.
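
To make the multi-task idea concrete, the sketch below combines a binary cross-entropy term for LNM status with a Cox partial-likelihood term (Breslow-style handling of ties) for overall survival; this illustrates a generic joint objective only, is not CTSMamba's actual loss, and the names and weighting are assumptions.

    import torch
    import torch.nn as nn

    def neg_cox_partial_log_likelihood(risk, time, event):
        """Negative Cox partial log-likelihood (Breslow-style) for overall survival.
        risk: predicted log-hazard per patient; time: follow-up time; event: 1 = death."""
        order = torch.argsort(time, descending=True)       # risk sets via cumulative sums
        risk = risk[order]
        event = event[order].float()
        log_cum_hazard = torch.logcumsumexp(risk, dim=0)
        ll = ((risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1.0)
        return -ll

    def multitask_loss(lnm_logit, lnm_label, risk, time, event, alpha=0.5):
        """Joint objective: BCE for lymph node metastasis plus a Cox survival term."""
        bce = nn.functional.binary_cross_entropy_with_logits(lnm_logit, lnm_label)
        return alpha * bce + (1.0 - alpha) * neg_cox_partial_log_likelihood(risk, time, event)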

PMID:40305075 | DOI:10.1158/0008-5472.CAN-24-4190

Categories: Literature Watch

X-ray CT metal artifact reduction using neural attenuation field prior

Wed, 2025-04-30 06:00

Med Phys. 2025 Apr 30. doi: 10.1002/mp.17859. Online ahead of print.

ABSTRACT

BACKGROUND: The presence of metal objects in computed tomography (CT) imaging introduces severe artifacts that degrade image quality and hinder accurate diagnosis. While several deep learning-based metal artifact reduction (MAR) methods have been proposed, they often exhibit poor performance on unseen data and require large datasets to train neural networks.

PURPOSE: In this work, we propose a sinogram inpainting method for metal artifact reduction that leverages a neural attenuation field (NAF) as a prior. This new method, dubbed NAFMAR, operates in a self-supervised manner by optimizing a model-based neural field, thus eliminating the need for large training datasets.

METHODS: NAF is optimized to generate prior images, which are then used to inpaint metal traces in the original sinogram. To address the corruption of x-ray projections caused by metal objects, a 3D forward projection of the original corrupted image is performed to identify metal traces. Consequently, NAF is optimized using a metal trace-masked ray sampling strategy that selectively utilizes uncorrupted rays to supervise the network. Moreover, a metal-aware loss function is proposed to prioritize metal-associated regions during optimization, thereby helping the network learn more informed representations of anatomical features. After optimization, the NAF images are rendered to generate NAF prior images, which serve as priors to correct the original projections through interpolation. Experiments are conducted to compare NAFMAR with other prior-based inpainting MAR methods.
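
One plausible reading of the masked, metal-aware supervision is sketched below: rays passing through metal are excluded, and uncorrupted rays bordering the metal trace are up-weighted. This is an assumed form for illustration only, not the paper's loss; the weighting scheme, dilation, and names are our own.

    import torch
    import torch.nn.functional as F

    def metal_aware_projection_loss(pred_proj, meas_proj, metal_trace, boost=4.0):
        """Assumed form of a masked, metal-aware objective: rays through metal
        (metal_trace == 1) are excluded from supervision, and uncorrupted rays
        bordering the trace are up-weighted so anatomy next to implants is fitted
        more carefully. pred_proj, meas_proj, and metal_trace are (H, W) arrays."""
        valid = (metal_trace == 0).float()
        # dilate the trace by one detector pixel to find its immediate neighbourhood
        near = F.max_pool2d(metal_trace.float()[None, None], kernel_size=3,
                            stride=1, padding=1)[0, 0] * valid
        weights = valid + (boost - 1.0) * near             # 0 on metal, `boost` next to it
        return (weights * (pred_proj - meas_proj) ** 2).sum() / weights.sum().clamp(min=1.0)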

RESULTS: The proposed method provides an accurate prior without requiring extensive datasets. Images corrected using NAFMAR showed sharp features and preserved anatomical structures. Our comprehensive evaluation, involving simulated dental CT and clinical pelvic CT images, demonstrated the effectiveness of the NAF prior compared with other priors, including linear interpolation and data-driven convolutional neural networks (CNNs). NAFMAR outperformed all compared baselines in terms of structural similarity index measure (SSIM) values, and its peak signal-to-noise ratio (PSNR) value was comparable to that of the dual-domain CNN method.

CONCLUSIONS: NAFMAR presents an effective, high-fidelity solution for metal artifact reduction in 3D tomographic imaging without the need for large datasets.

PMID:40305006 | DOI:10.1002/mp.17859

Categories: Literature Watch
