Deep learning

Assessment of Elapsed Time Between Dental Radiographs Using Siamese Network

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:1418-1422. doi: 10.3233/SHTI250636.

ABSTRACT

Recently, machine learning methods have emerged to predict dental disease progression, often relying on costly annotated datasets and frequently exhibiting low generalization performance. This study evaluates the application of Siamese networks for detecting subtle changes in longitudinal dental x-rays and predicting the time span category between dental treatments using periapical radiographs and patient demographic data. We assume that the ability of these models to detect the time intervals between dental treatments would ensure their capability to identify more complex patterns related to disease progression. The baseline models based on CNNs and MLP achieved moderate performance, while the Siamese network models demonstrated significant improvements, with the highest-performing model achieving an accuracy of 86.32% ± 1.60%. Moreover, the introduction of demographic features such as age and gender into the model led to a significant reduction in performance variance. These results underscore the effectiveness of Siamese networks in capturing subtle temporal changes in dental radiographs in longitudinal settings, offering the potential to integrate these models into clinical workflows. Future research will explore self-supervised learning models for dental disease progression, especially in clinical settings with limited labeled data.
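
The weight-sharing idea behind a Siamese comparison can be sketched in a few lines of NumPy: one shared embedding function is applied to both radiographs, and a classifier operates on the element-wise difference of the two embeddings. This is an illustrative sketch only, not the paper's architecture; the embedding is a random linear projection and the three time-span classes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared embedding: one random linear projection applied to BOTH inputs
# (the weight sharing is what makes the network "Siamese").
W = rng.standard_normal((64, 16))

def embed(x):
    # x: flattened radiograph features, shape (64,)
    return np.tanh(x @ W)

def time_span_logits(xray_a, xray_b, head):
    # Compare the two visits via the absolute difference of embeddings,
    # then map the difference to logits over time-span categories.
    diff = np.abs(embed(xray_a) - embed(xray_b))
    return diff @ head

head = rng.standard_normal((16, 3))   # 3 hypothetical time-span classes
a, b = rng.standard_normal(64), rng.standard_normal(64)
logits = time_span_logits(a, b, head)
```

Because the head sees only the absolute difference of the shared embeddings, the prediction is symmetric in the two radiographs.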

PMID:40380739 | DOI:10.3233/SHTI250636

Categories: Literature Watch

Medication Recommender System for ICU Patients Using Autoencoders

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:1343-1347. doi: 10.3233/SHTI250621.

ABSTRACT

Patients admitted to the intensive care unit (ICU) are often treated with multiple high-risk medications. Over- and underprescribing of indicated medications, and inappropriate choice of medications frequently occur in the ICU. This risk has to be minimized. We evaluate the performance of recommendation methods in suggesting appropriate medications and examine whether incorporating clinical patient data beyond the medication list improves recommendations. Using the MIMIC-III dataset, we formulate medication list completion as a recommendation task. Our analysis includes four autoencoder-based approaches and two strong baselines. We used as inputs either only known medications, or medications together with patient data. We showed that medication recommender systems based on autoencoders may successfully recommend medications in the ICU.
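
Medication list completion with an autoencoder can be illustrated with the linear special case, where the autoencoder reduces to a truncated-SVD reconstruction: encode the known medication vector, decode it, and rank medications not yet on the list by their reconstruction score. The matrix and dimensions below are synthetic stand-ins, not MIMIC-III data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy patient-by-medication matrix (1 = medication on the list).
X = (rng.random((50, 12)) < 0.3).astype(float)

# A linear autoencoder is equivalent to PCA: encode with the top-k right
# singular vectors, decode by projecting back.
k = 4
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                       # (12, k) encoder/decoder weights

def complete(med_list):
    # med_list: binary vector of known medications, shape (12,)
    scores = (med_list @ V) @ V.T  # reconstruction = decode(encode(x))
    # Rank only medications NOT already on the list.
    scores = np.where(med_list == 1, -np.inf, scores)
    return np.argsort(scores)[::-1]

patient = X[0].copy()
ranking = complete(patient)        # medication indices, best first
```

A nonlinear autoencoder replaces the projection with learned encoder/decoder networks, but the recommend-by-reconstruction logic is the same.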

PMID:40380724 | DOI:10.3233/SHTI250621

Categories: Literature Watch

A Deep-Learning Framework for Ovarian Cancer Subtype Classification Using Whole Slide Images

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:1290-1294. doi: 10.3233/SHTI250606.

ABSTRACT

Ovarian cancer, a leading cause of cancer-related deaths among women, comprises distinct subtypes each requiring different treatment approaches. This paper presents a deep-learning framework for classifying ovarian cancer subtypes using Whole Slide Imaging (WSI). Our method contains three stages: image tiling, feature extraction, and multi-instance learning. Our approach is trained and validated on a public dataset from 80 distinct patients, achieving up to 89.8% accuracy with a notable improvement in computational efficiency. The results demonstrate the potential of our framework to augment diagnostic precision in clinical settings, offering a scalable solution for the accurate classification of ovarian cancer subtypes.
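
The multi-instance learning stage can be sketched with attention-based pooling over tile features: each tile in the slide's "bag" receives a weight, and the slide-level embedding is the weighted sum. The weights below are random placeholders, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_mil(tile_features, w_att, w_cls):
    # Attention-style pooling over a bag of tile features: each tile gets
    # a weight, and the slide embedding is their weighted sum.
    att = softmax(np.tanh(tile_features @ w_att).ravel())
    slide_embedding = att @ tile_features          # (d,)
    return slide_embedding @ w_cls, att            # subtype logits, weights

tiles = rng.standard_normal((30, 8))    # 30 tiles, 8-dim features each
w_att = rng.standard_normal((8, 1))
w_cls = rng.standard_normal((8, 5))     # 5 hypothetical subtype classes
logits, att = attention_mil(tiles, w_att, w_cls)
```

The attention weights also act as a built-in explanation: they indicate which tiles drove the slide-level prediction.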

PMID:40380710 | DOI:10.3233/SHTI250606

Categories: Literature Watch

Energy-Efficient AI for Medical Diagnostics: Performance and Sustainability Analysis of ResNet and MobileNet

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:1225-1229. doi: 10.3233/SHTI250585.

ABSTRACT

Artificial intelligence (AI) has transformed medical diagnostics by enhancing the accuracy of disease detection, particularly through deep learning models to analyze medical imaging data. However, the energy demands of training these models, such as ResNet and MobileNet, are substantial and often overlooked, as researchers mainly focus on improving model accuracy. This study compares the energy use of these two models for classifying thoracic diseases using the well-known CheXpert dataset. We calculate power and energy consumption during training using the EnergyEfficientAI library. Results demonstrate that MobileNet outperforms ResNet by consuming less power and completing training faster, resulting in lower overall energy costs. This study highlights the importance of prioritizing energy efficiency in AI model development, promoting sustainable, eco-friendly approaches to advance medical diagnosis.
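
The underlying accounting is simply energy = average power × training time. The EnergyEfficientAI library's actual API is not reproduced here; this is a generic sketch with hypothetical wattages and runtimes.

```python
def training_energy_kwh(mean_power_watts, training_seconds):
    """Energy = average power x time; 1 kWh = 3.6e6 joules."""
    return mean_power_watts * training_seconds / 3.6e6

# Hypothetical numbers for illustration: a model drawing 250 W on average
# for 2 hours versus one drawing 180 W and finishing in 1.25 hours.
resnet_kwh = training_energy_kwh(250, 2 * 3600)       # 0.5 kWh
mobilenet_kwh = training_energy_kwh(180, 1.25 * 3600)  # 0.225 kWh
```

A lighter model wins twice: lower draw per second and fewer seconds of training.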

PMID:40380690 | DOI:10.3233/SHTI250585

Categories: Literature Watch

Leveraging Vision Transformers in Multimodal Models for Retinal OCT Analysis

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:1135-1139. doi: 10.3233/SHTI250567.

ABSTRACT

Optical Coherence Tomography (OCT) has become an indispensable imaging modality in ophthalmology, providing high-resolution cross-sectional images of the retina. Accurate classification of OCT images is crucial for diagnosing retinal diseases such as Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). This study explores the efficacy of various deep learning models, including convolutional neural networks (CNNs) and Vision Transformers (ViTs), in classifying OCT images. We also investigate the impact of integrating metadata (patient age, sex, eye laterality, and year) into the classification process, even when a significant portion of metadata is missing. Our results demonstrate that multimodal models leveraging both image and metadata inputs, such as the Multimodal ResNet18, can achieve competitive performance compared to image-only models, such as DenseNet121. Notably, DenseNet121 and Multimodal ResNet18 achieved the highest accuracy of 95.16%, with DenseNet121 showing a slightly higher F1-score of 0.9313. The multimodal ViT-based model also demonstrated promising results, achieving an accuracy of 93.22%, indicating the potential of Vision Transformers (ViTs) in medical image analysis, especially for handling complex multimodal data.
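
A common way to combine image features with partially missing metadata, as this study requires, is to concatenate the image feature vector with value/missing-indicator pairs. A minimal sketch; the 512-dimensional feature size and the encoding are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def fuse(image_features, age, sex, laterality, year):
    # Concatenate CNN/ViT image features with encoded metadata; missing
    # metadata (None) becomes a 0 value plus a "missing" indicator, so the
    # model can still use partially annotated samples.
    meta = []
    for value in (age, sex, laterality, year):
        missing = value is None
        meta.extend([0.0 if missing else float(value), float(missing)])
    return np.concatenate([image_features, np.array(meta)])

img = np.zeros(512)   # stand-in for penultimate-layer image features
x = fuse(img, age=63, sex=1, laterality=None, year=2019)
```

The indicator columns let the downstream classifier learn to down-weight metadata exactly when it is absent, rather than treating an imputed 0 as a real value.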

PMID:40380672 | DOI:10.3233/SHTI250567

Categories: Literature Watch

Long Short-Term Memory Network for Accelerometer-Based Hypertension Classification

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:914-918. doi: 10.3233/SHTI250505.

ABSTRACT

This study investigates the application of a Long Short-Term Memory (LSTM) architecture for classifying hypertension using accelerometer data, specifically focusing on physical activity and sleep from the publicly available NHANES 2011-2012 dataset. The LSTM model captures the sequential patterns in this data, providing insights into behavioral patterns related to hypertension. The performance of the LSTM model is compared to traditional machine learning methods as well as other commonly used sequence models, including Recurrent Neural Networks (RNN), Transformers (TF), and 1D Convolutional Networks (Conv1D). The results show that the LSTM model achieves superior accuracy at 96.37%, outperforming the RNN (75.67%), TF (77.10%), and Conv1D (89.34%), as well as the other machine learning models, which range from 60.92% to 64.75%. These findings underscore the potential of LSTM models for integration into wearable health monitoring systems, enabling early detection or management of hypertension.
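
The gating that lets an LSTM capture such sequential patterns can be written out directly. Below is a single-cell NumPy sketch with random weights; the hidden size and inputs are illustrative, not the study's model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM step: the gates are computed from the current accelerometer
    # sample x and the previous hidden state h; c is the cell (memory) state.
    z = W @ x + U @ h + b                  # stacked pre-activations, (4H,)
    H = h.size
    i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # forget old, write new
    h_new = sigmoid(o) * np.tanh(c_new)                # expose gated memory
    return h_new, c_new

rng = np.random.default_rng(0)
H, D = 16, 3                               # hidden size, 3-axis accelerometer
W = rng.standard_normal((4 * H, D))
U = rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.standard_normal((20, D)):     # 20 accelerometer samples
    h, c = lstm_step(x, h, c, W, U, b)
```

The final hidden state summarizes the sequence and would feed a classification head in a full model.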

PMID:40380612 | DOI:10.3233/SHTI250505

Categories: Literature Watch

Artificial Intelligence Powered Audiomics: The Futuristic Biomarker in Pulmonary Medicine - A State-of-the-Art Review

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:884-885. doi: 10.3233/SHTI250491.

ABSTRACT

AI-driven "audiomics" leverages voice and respiratory sounds as non-invasive biomarkers to diagnose and manage pulmonary conditions, including COVID-19, tuberculosis, ILD, asthma, and COPD. By analyzing acoustic features, machine and deep learning enhance diagnostic accuracy and track disease progression. Key applications include cough-based TB detection, smartphone COVID-19 screening, and speech analysis for asthma and COPD monitoring. Ethical challenges like data privacy and standardization remain barriers to clinical adoption. With ongoing research, audiomics holds promise for transforming respiratory diagnostics and personalized care.

PMID:40380599 | DOI:10.3233/SHTI250491

Categories: Literature Watch

Patient Survival Prediction by Analyzing Pathological Images of Patients After Liver Transplantation

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:657-661. doi: 10.3233/SHTI250430.

ABSTRACT

Predicting whether a patient will develop cancer using nuclear features on pathological images is important for decision making regarding patient treatment after liver transplantation or hepatectomy. Unlike manual segmentation to extract nuclei from pathology images, we performed the entire process of predicting patient survival automatically. In addition, we established a method to predict survival correctly even in cases where the amount of data is small. After segmenting nuclei from pathological images extracted from patients who underwent liver transplantation, we trained a deep learning model to distinguish survival/death by overlapping the segmented mask image and the original image. The cohort was collected from the liver transplantation group (n=67). Approximately two pathological images were collected from each patient, and each large pathological image was split into an average of 30 small images to train the classification model. The ViT (Vision Transformer) model provided by the Python timm library was used to classify whether the pathological images showed recurrent cancer. The methods used for survival analysis were the CoxPH and Kaplan-Meier models, and the survival results obtained from the deep learning model were compared with other patient variables to determine how well they predicted patient survival. The indicators measured for comparison were the C-index and AUC; NRI and HR were also calculated. As the number of patients being diagnosed, and the number of images resulting from them, grows larger and more complex, experts may make misjudgments. Artificial intelligence technology judges this complex and large amount of data quickly and accurately.
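
The mask/image overlap step can be as simple as a multiplicative overlay that zeroes everything outside the segmented nuclei, so the classifier sees only nuclear regions. A sketch under that assumption, not necessarily the authors' exact preprocessing:

```python
import numpy as np

rng = np.random.default_rng(0)

def overlay(image, nuclei_mask):
    # Keep pixel intensities only inside the segmented nuclei; the rest
    # is zeroed out before the image is fed to the classifier.
    return image * nuclei_mask

img = rng.random((224, 224))                     # toy pathology tile
mask = np.zeros((224, 224))
mask[50:100, 50:100] = 1.0                       # toy nuclei region
x = overlay(img, mask)
```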

PMID:40380539 | DOI:10.3233/SHTI250430

Categories: Literature Watch

Challenging Black-Box Models: Interpretable Explanations for ECG Classification

Sat, 2025-05-17 06:00

Stud Health Technol Inform. 2025 May 15;327:587-588. doi: 10.3233/SHTI250405.

ABSTRACT

Deep learning methods achieve high performance while often lacking explainability, hindering application in the field. We propose the use of a logistic regression classifier based on temporally aligned electrocardiograms, and the utilisation of interpretable feature importance. This work suggests that non-deep-learning classifiers achieve comparable performance and introduce new opportunities for on-the-fly counterfactual explanations. The code, pretrained model, and extracted kernels are available under github.com/imi-ms/rlign.
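
On temporally aligned beats, every sample index corresponds to the same beat phase, so logistic regression weights read directly as per-phase importances. A synthetic sketch of that idea; the data and the label rule are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for temporally aligned ECG beats: after alignment,
# a given sample index always refers to the same beat phase.
n, length = 300, 120
X = rng.standard_normal((n, length))
y = (X[:, 40] + X[:, 41] > 0).astype(int)   # two phases drive the label

clf = LogisticRegression(max_iter=1000).fit(X, y)
importance = np.abs(clf.coef_.ravel())       # one weight per beat phase
top = importance.argsort()[::-1][:5]         # most influential phases
```

Because the mapping is linear, flipping the prediction amounts to perturbing the highest-weight phases, which is what makes on-the-fly counterfactuals cheap.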

PMID:40380515 | DOI:10.3233/SHTI250405

Categories: Literature Watch

Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study

Fri, 2025-05-16 06:00

JMIR Med Inform. 2025 May 16;13:e66917. doi: 10.2196/66917.

ABSTRACT

BACKGROUND: The capabilities of large language models (LLMs) to self-assess their own confidence in answering questions within the biomedical realm remain underexplored.

OBJECTIVE: This study evaluates the confidence levels of 12 LLMs across 5 medical specialties to assess LLMs' ability to accurately judge their own responses.

METHODS: We used 1965 multiple-choice questions that assessed clinical knowledge in the following areas: internal medicine, obstetrics and gynecology, psychiatry, pediatrics, and general surgery. Models were prompted to provide answers and to also provide their confidence for the correct answers (score: range 0%-100%). We calculated the correlation between each model's mean confidence score for correct answers and the overall accuracy of each model across all questions. The confidence scores for correct and incorrect answers were also analyzed to determine the mean difference in confidence, using 2-sample, 2-tailed t tests.
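
The two statistical steps above reduce to a Pearson correlation across models and a two-sample t test per model. A sketch with synthetic stand-in numbers, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-model accuracy and mean confidence on correct answers.
accuracy = np.array([74, 70, 65, 60, 55, 50, 46], dtype=float)
confidence = np.array([63, 66, 68, 70, 72, 74, 76], dtype=float)

# Correlation across models: here negative, mimicking an inverse relation.
r, p = stats.pearsonr(confidence, accuracy)

# Per model: do confidence scores differ between correct and incorrect
# answers? (2-sample, 2-tailed t test.)
conf_correct = rng.normal(70, 10, size=200)
conf_incorrect = rng.normal(68, 10, size=150)
t, p_t = stats.ttest_ind(conf_correct, conf_incorrect)
```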

RESULTS: The correlation between the mean confidence scores for correct answers and model accuracy was inverse and statistically significant (r=-0.40; P=.001), indicating that worse-performing models exhibited paradoxically higher confidence. For instance, a top-performing model, GPT-4o, had a mean accuracy of 74% (SD 9.4%) with a mean confidence of 63% (SD 8.3%), whereas a low-performing model, Qwen2-7B, showed a mean accuracy of 46% (SD 10.5%) but a mean confidence of 76% (SD 11.7%). The mean difference in confidence between correct and incorrect responses was low for all models, ranging from 0.6% to 5.4%, with GPT-4o having the highest mean difference (5.4%, SD 2.3%; P=.003).

CONCLUSIONS: Better-performing LLMs show more aligned overall confidence levels. However, even the most accurate models still show minimal variation in confidence between right and wrong answers. This may limit their safe use in clinical settings. Addressing overconfidence could involve refining calibration methods, performing domain-specific fine-tuning, and involving human oversight when decisions carry high risks. Further research is needed to improve these strategies before broader clinical adoption of LLMs.

PMID:40378406 | DOI:10.2196/66917

Categories: Literature Watch

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning

Fri, 2025-05-16 06:00

Neurology. 2025 Jun 10;104(11):e213691. doi: 10.1212/WNL.0000000000213691. Epub 2025 May 16.

ABSTRACT

BACKGROUND AND OBJECTIVES: Focal cortical dysplasia (FCD) is a common pathology for pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection.

METHODS: We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age-matched and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed from a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 z-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on MRF GM and WM maps. A no-new U-Net model was trained using various input combinations, with performance evaluated through leave-one-patient-out cross-validation. We compared model performance using various input combinations from clinical MRI and MRF to assess the impact of different input types on model effectiveness.
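
The z-score maps described here are a voxel-wise standardization of each patient's map against the healthy-control distribution. A NumPy sketch with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stack of co-registered T1 maps from healthy controls:
# shape (n_controls, x, y, z); values are toy stand-ins.
hc_t1 = rng.normal(1000, 50, size=(67, 4, 4, 4))

mean_hc = hc_t1.mean(axis=0)   # voxel-wise mean across controls
sd_hc = hc_t1.std(axis=0)      # voxel-wise SD across controls

def z_map(patient_map):
    # Deviation of the patient's map from the normative HC distribution;
    # large |z| flags voxels with atypical tissue properties.
    return (patient_map - mean_hc) / sd_hc

patient = rng.normal(1000, 50, size=(4, 4, 4))
z = z_map(patient)
```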

RESULTS: We included 40 patients with FCD (mean age 28.1 years, 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient. Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FPs. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap.

DISCUSSION: The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for developing a deep-learning tool capable of detecting subtle epileptic lesions.

PMID:40378378 | DOI:10.1212/WNL.0000000000213691

Categories: Literature Watch

Multiplexing and Sensing with Fluorescence Lifetime Imaging Microscopy Empowered by Phasor U-Net

Fri, 2025-05-16 06:00

Anal Chem. 2025 May 16. doi: 10.1021/acs.analchem.5c02028. Online ahead of print.

ABSTRACT

Fluorescence lifetime imaging microscopy (FLIM) has been widely used as an essential multiplexing and sensing tool in frontier fields such as materials science and life sciences. However, the accuracy of lifetime estimation is compromised by limited time-correlated photon counts, and data processing is time-demanding due to the large data volume. Here, we introduce Phasor U-Net, a deep learning method designed for rapid and accurate FLIM imaging. Phasor U-Net incorporates two lightweight U-Net subnetworks to perform denoising and deconvolution to reduce the noise and calibrate the data caused by the instrumental response function, thus facilitating the downstream phasor analysis. Phasor U-Net is solely trained on computer-generated datasets, circumventing the necessity for large experimental datasets. The method reduced the modified Kullback-Leibler divergence on the phasor plots by 1.5-8-fold compared with the direct phasor method and reduced the mean absolute error of the lifetime images by 1.18-4.41-fold. We then show that this method can be used for multiplexed imaging on small intestine samples of mice labeled with two fluorescent dyes with almost identical emission spectra. We further demonstrate that the size of quantum dots can be better estimated with measured lifetime information. This general method will open a new paradigm for more fundamental research with FLIM.
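
The downstream phasor analysis maps each decay to coordinates (g, s); a single-exponential decay with lifetime τ lands on the universal semicircle at g = 1/(1+(ωτ)²), s = ωτ/(1+(ωτ)²). A sketch in which the laser frequency and lifetime are illustrative:

```python
import numpy as np

def phasor(decay, t, omega):
    # Phasor transform of a fluorescence decay I(t):
    # g = sum(I cos(wt)) / sum(I),  s = sum(I sin(wt)) / sum(I).
    total = decay.sum()
    g = (decay * np.cos(omega * t)).sum() / total
    s = (decay * np.sin(omega * t)).sum() / total
    return g, s

tau = 2e-9                        # 2 ns lifetime (illustrative)
omega = 2 * np.pi * 80e6          # 80 MHz repetition rate (illustrative)
t = np.linspace(0, 100e-9, 200000)
g, s = phasor(np.exp(-t / tau), t, omega)
```

Mixtures of species fall inside the semicircle on the line joining their pure-species phasors, which is what enables phasor-based multiplexing.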

PMID:40378347 | DOI:10.1021/acs.analchem.5c02028

Categories: Literature Watch

An efficient leukemia prediction method using machine learning and deep learning with selected features

Fri, 2025-05-16 06:00

PLoS One. 2025 May 16;20(5):e0320669. doi: 10.1371/journal.pone.0320669. eCollection 2025.

ABSTRACT

Leukemia is a serious disease affecting both children and adults, leading to death if left untreated. Leukemia is a kind of blood cancer characterized by the rapid proliferation of abnormal blood cells. Early, trustworthy, and precise identification of leukemia is important for treating and saving patients' lives. Acute lymphocytic, acute myelogenous, chronic lymphocytic, and chronic myelogenous leukemia are the four kinds of leukemia. Manual inspection of microscopic images is frequently used to identify these malignant cells. Leukemia symptoms include fatigue, a lack of enthusiasm, a dull appearance, recurring illnesses, and easy blood loss. Identifying subtypes of leukemia for specialized therapy is one of the hurdles in this area. The suggested work predicts and classifies leukemia subtypes in the CuMiDa (GSE9476) gene expression data using feature selection and ML techniques. The Curated Microarray Database (CuMiDa) provides 64 samples representing five classes of leukemia genes out of 22,283 genes. The proposed approach utilizes the 25 most discriminating selected features for classification using machine and deep learning techniques. This study achieved a classification accuracy of 96.15% using Random Forest, 92.30% using Linear Regression, 96.15% using SVM, and 100% using LSTM. Deep learning methods have been shown to outperform traditional methods in leukemia gene classification by utilizing selected features.
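
The select-then-classify pipeline can be sketched as a univariate filter keeping the 25 most discriminating features followed by a classifier. Synthetic data stands in for CuMiDa below, and the ANOVA F-test is an assumed choice of selection criterion, not necessarily the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Synthetic stand-in for microarray data: few samples, many genes.
X, y = make_classification(n_samples=64, n_features=500, n_informative=25,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=0)

# Keep the 25 most class-discriminating features (ANOVA F-test),
# then fit a Random Forest on the reduced matrix.
selector = SelectKBest(f_classif, k=25).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Fitting the selector on all 64 samples, as here, is only for brevity; in practice selection should be nested inside cross-validation to avoid optimistic accuracy.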

PMID:40378164 | DOI:10.1371/journal.pone.0320669

Categories: Literature Watch

DeepMBEnzy: An AI-Driven Database of Mycotoxin Biotransformation Enzymes

Fri, 2025-05-16 06:00

J Agric Food Chem. 2025 May 16. doi: 10.1021/acs.jafc.5c02477. Online ahead of print.

ABSTRACT

Mycotoxins are toxic fungal metabolites that pose significant health risks. Enzyme biotransformation is a promising option for detoxifying mycotoxins and for elucidating their intracellular metabolism. However, few mycotoxin-biotransformation enzymes have been identified thus far. Here, we developed an enzyme promiscuity prediction for mycotoxin biotransformation (EPP-MB) model by fine-tuning a pretrained model using a cold protein data-splitting approach. The EPP-MB model leverages deep learning to predict enzymes capable of mycotoxin biotransformation, achieving a validation accuracy of 79% against a data set of experimentally confirmed mycotoxin-biotransforming enzymes. We applied the model to predict potential biotransformation enzymes for over 4000 mycotoxins and compiled these into the DeepMBEnzy database, which archives the predicted enzymes and related information for each mycotoxin, providing researchers with a user-friendly, publicly accessible interface at https://synbiodesign.com/DeepMBEnzy/. DeepMBEnzy is designed to facilitate the exploration and utilization of enzyme candidates in mycotoxin biotransformation, supporting further advancements in mycotoxin detoxification research and applications.

PMID:40378051 | DOI:10.1021/acs.jafc.5c02477

Categories: Literature Watch

A deep learning-based approach to automated rib fracture detection and CWIS classification

Fri, 2025-05-16 06:00

Int J Comput Assist Radiol Surg. 2025 May 16. doi: 10.1007/s11548-025-03390-5. Online ahead of print.

ABSTRACT

PURPOSE: Trauma-induced rib fractures are a common injury. The number and characteristics of these fractures influence whether a patient is treated nonoperatively or surgically. Rib fractures are typically diagnosed using CT scans, yet 19.2-26.8% of fractures are still missed during assessment. Another challenge in managing rib fractures is the interobserver variability in their classification. The purpose of this study was to develop and assess an automated method that detects rib fractures in CT scans and classifies them according to the Chest Wall Injury Society (CWIS) classification.

METHODS: 198 CT scans were collected, of which 170 were used for training and internal validation, and 28 for external validation. Fractures and their classifications were manually annotated in each of the scans. A detection and classification network was trained for each of the three components of the CWIS classifications. In addition, a rib number labeling network was trained for obtaining the rib number of a fracture. Experiments were performed to assess the method performance.

RESULTS: On the internal test set, the method achieved a detection sensitivity of 80%, at a precision of 87% and an F1-score of 83%, with a mean number of false positives per scan (FPPS) of 1.11. Classification sensitivity varied, with the lowest being 25% for complex fractures and the highest being 97% for posterior fractures. The correct rib number was assigned to 94% of the detected fractures. The custom-trained nnU-Net correctly labeled 95.5% of all ribs and 98.4% of fractured ribs in 30 patients. The detection and classification performance on the external validation dataset was slightly better, with a fracture detection sensitivity of 84%, a precision of 85%, an F1-score of 84%, and an FPPS of 0.96; 95% of the fractures were assigned the correct rib number.
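
The reported detection metrics relate through standard formulas: sensitivity = TP/(TP+FN), precision = TP/(TP+FP), F1 = 2TP/(2TP+FP+FN), and FPPS = FP/number of scans. The counts below are illustrative, chosen to roughly reproduce the internal-test figures; the paper does not report raw counts.

```python
def detection_metrics(tp, fp, fn, n_scans):
    # Standard detection metrics from true/false positive and
    # false negative counts over a set of scans.
    sensitivity = tp / (tp + fn)          # recall over annotated fractures
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fpps = fp / n_scans                   # false positives per scan
    return sensitivity, precision, f1, fpps

# Hypothetical counts for illustration only.
sens, prec, f1, fpps = detection_metrics(tp=160, fp=24, fn=40, n_scans=28)
```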

CONCLUSION: The method developed is able to accurately detect and classify rib fractures in CT scans, although there is room for improvement for the rare and underrepresented classes in the training set.

PMID:40377883 | DOI:10.1007/s11548-025-03390-5

Categories: Literature Watch

Impact of sarcopenia and obesity on mortality in older adults with SARS-CoV-2 infection: automated deep learning body composition analysis in the NAPKON-SUEP cohort

Fri, 2025-05-16 06:00

Infection. 2025 May 16. doi: 10.1007/s15010-025-02555-3. Online ahead of print.

ABSTRACT

INTRODUCTION: Severe respiratory infections pose a major challenge in clinical practice, especially in older adults. Body composition analysis could play a crucial role in risk assessment and therapeutic decision-making. This study investigates whether obesity or sarcopenia has a greater impact on mortality in patients with severe respiratory infections. The study focuses on the National Pandemic Cohort Network (NAPKON-SUEP) cohort, which includes patients over 60 years of age with confirmed severe COVID-19 pneumonia. An innovative approach was adopted, using pre-trained deep learning models for automated analysis of body composition based on routine thoracic CT scans.

METHODS: The study included 157 hospitalized patients (mean age 70 ± 8 years, 41% women, mortality rate 39%) from the NAPKON-SUEP cohort at 57 study sites. A pre-trained deep learning model was used to analyze body composition (muscle, bone, fat, and intramuscular fat volumes) from thoracic CT images of the NAPKON-SUEP cohort. Binary logistic regression was performed to investigate the association between obesity, sarcopenia, and mortality.

RESULTS: Non-survivors exhibited lower muscle volume (p = 0.043), higher intramuscular fat volume (p = 0.041), and a higher BMI (p = 0.031) compared to survivors. Among all body composition parameters, muscle volume adjusted to weight was the strongest predictor of mortality in the logistic regression model, even after adjusting for factors such as sex, age, diabetes, chronic lung disease, and chronic kidney disease (odds ratio = 0.516). In contrast, BMI did not show significant differences after adjustment for comorbidities.
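
The reported odds ratio connects to the logistic regression coefficient via OR = exp(β), so an OR of 0.516 for weight-adjusted muscle volume corresponds to a negative coefficient: more muscle, lower odds of death.

```python
import numpy as np

def odds_ratio(beta):
    # In binary logistic regression, exp(beta) is the multiplicative
    # change in the odds of the outcome per one-unit increase
    # of the predictor.
    return float(np.exp(beta))

# The coefficient implied by the reported odds ratio of 0.516.
beta = float(np.log(0.516))
```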

CONCLUSION: This study identifies muscle volume derived from routine CT scans as a major predictor of survival in patients with severe respiratory infections. The results underscore the potential of AI supported CT-based body composition analysis for risk stratification and clinical decision making, not only for COVID-19 patients but also for all patients over 60 years of age with severe acute respiratory infections. The innovative application of pre-trained deep learning models opens up new possibilities for automated and standardized assessment in clinical practice.

PMID:40377852 | DOI:10.1007/s15010-025-02555-3

Categories: Literature Watch

Development and validation of clinical-radiomics deep learning model based on MRI for endometrial cancer molecular subtypes classification

Fri, 2025-05-16 06:00

Insights Imaging. 2025 May 16;16(1):107. doi: 10.1186/s13244-025-01966-y.

ABSTRACT

OBJECTIVES: This study aimed to develop and validate a clinical-radiomics deep learning (DL) model based on MRI for endometrial cancer (EC) molecular subtypes classification.

METHODS: This multicenter retrospective study included EC patients undergoing surgery, MRI, and molecular pathology diagnosis across three institutions from January 2020 to March 2024. Patients were divided into training, internal, and external validation cohorts. A total of 386 handcrafted radiomics features were extracted from each MR sequence, and MoCo-v2 was employed for contrastive self-supervised learning to extract 2048 DL features per patient. After feature selection, the retained features were fed into 12 machine learning methods. Model performance was evaluated with the AUC.

RESULTS: A total of 526 patients were included (mean age, 55.01 ± 11.07). The radiomics model and clinical model demonstrated comparable performance across the internal and external validation cohorts, with macro-average AUCs of 0.70 vs 0.69 and 0.70 vs 0.67 (p = 0.51), respectively. The radiomics DL model, compared to the radiomics model, improved AUCs for POLEmut (0.68 vs 0.79), NSMP (0.71 vs 0.74), and p53abn (0.76 vs 0.78) in the internal validation (p = 0.08). The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model (macro-average AUC = 0.79 vs 0.69 and 0.73 in the internal validation [p = 0.02]; 0.74 vs 0.67 and 0.69 in the external validation [p = 0.04]).

CONCLUSIONS: The clinical-radiomics DL model based on MRI effectively distinguished EC molecular subtypes and demonstrated strong potential, with robust validation across multiple centers. Future research should explore larger datasets to further uncover DL's potential.

CRITICAL RELEVANCE STATEMENT: Our clinical-radiomics DL model based on MRI has the potential to distinguish EC molecular subtypes. This insight aids in guiding clinicians in tailoring individualized treatments for EC patients.

KEY POINTS: Accurate classification of EC molecular subtypes is crucial for prognostic risk assessment. The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model. The MRI features exhibited better diagnostic performance for POLEmut and p53abn.

PMID:40377781 | DOI:10.1186/s13244-025-01966-y

Categories: Literature Watch

Geospatial artificial intelligence for detection and mapping of small water bodies in satellite imagery

Fri, 2025-05-16 06:00

Environ Monit Assess. 2025 May 16;197(6):657. doi: 10.1007/s10661-025-14066-7.

ABSTRACT

Remote sensing (RS) data is extensively used in the observation and management of surface water and the detection of water bodies for studying ecological and hydrological processes. Small waterbodies are often neglected because of their tiny footprint in the image, yet being very large in number, they significantly impact the ecosystem. However, the detection of small waterbodies in satellite images is challenging because of their varying sizes and tones. In this work, a geospatial artificial intelligence (GeoAI) approach is proposed to detect small water bodies in RS images and generate a spatial map of them along with area statistics. The proposed approach aims to detect waterbodies of different shapes and sizes, including those with vegetation cover. For this purpose, a deep neural network (DNN) is trained using the Indian Space Research Organization's (ISRO) Cartosat-3 multispectral satellite images, which effectively extracts the boundaries of small water bodies with a mean precision of 0.92 and an overall accuracy of over 96%. A comparative analysis with other popular existing methods using the same data demonstrates the superior performance of the proposed method. The proposed GeoAI approach efficiently generates a map of small water bodies automatically from the input satellite image, which can be utilized for monitoring and management of these micro water resources.

PMID:40377752 | DOI:10.1007/s10661-025-14066-7

Categories: Literature Watch

New approaches to lesion assessment in multiple sclerosis

Fri, 2025-05-16 06:00

Curr Opin Neurol. 2025 May 19. doi: 10.1097/WCO.0000000000001378. Online ahead of print.

ABSTRACT

PURPOSE OF REVIEW: To summarize recent advancements in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research.

RECENT FINDINGS: Artificial intelligence, particularly deep learning approaches, are revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. Artificial intelligence-based tools now enable automated detection not only of T2-hyperintense white matter lesions, but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling their heterogeneities and clinical relevance.

SUMMARY: Artificial intelligence-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in the clinical scenario, thus improving MS diagnosis, monitoring, and treatment response assessment. Emerging neuroimaging modalities may contribute to advancing the understanding of MS pathophysiology, providing more specific markers of disease progression and novel potential therapeutic targets.

PMID:40377692 | DOI:10.1097/WCO.0000000000001378

Categories: Literature Watch

Automated CT segmentation for lower extremity tissues in lymphedema evaluation using deep learning

Fri, 2025-05-16 06:00

Eur Radiol. 2025 May 16. doi: 10.1007/s00330-025-11673-3. Online ahead of print.

ABSTRACT

OBJECTIVES: Clinical assessment of lymphedema, particularly for lymphedema severity and fluid-fibrotic lesions, remains challenging with traditional methods. We aimed to develop and validate a deep learning segmentation tool for automated tissue component analysis in lower extremity CT scans.

MATERIALS AND METHODS: For development datasets, lower extremity CT venography scans were collected in 118 patients with gynecologic cancers for algorithm training. Reference standards were created by segmentation of fat, muscle, and fluid-fibrotic tissue components using 3D slicer. A deep learning model based on the Unet++ architecture with an EfficientNet-B7 encoder was developed and trained. Segmentation accuracy of the deep learning model was validated in an internal validation set (n = 10) and an external validation set (n = 10) using Dice similarity coefficient (DSC) and volumetric similarity (VS). A graphical user interface (GUI) tool was developed for the visualization of the segmentation results.
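
The two overlap measures used for validation are defined as DSC = 2|A∩B|/(|A|+|B|) and VS = 1 - ||A|-|B||/(|A|+|B|). A NumPy sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    # DSC = 2|A intersect B| / (|A| + |B|) on binary masks.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volumetric_similarity(a, b):
    # VS = 1 - | |A| - |B| | / (|A| + |B|): agreement of volumes only,
    # regardless of where the voxels overlap.
    return 1.0 - abs(int(a.sum()) - int(b.sum())) / (a.sum() + b.sum())

ref = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True     # reference mask, 36 voxels
pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 2:8] = True    # predicted mask, 30 voxels
```

A high VS with a lower DSC would indicate the right amount of tissue segmented in the wrong place, which is why both are reported.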

RESULTS: Our deep learning algorithm achieved high segmentation accuracy. Mean DSCs for each component and all components ranged from 0.945 to 0.999 in the internal validation set and 0.946 to 0.999 in the external validation set. Similar performance was observed in the VS, with mean VSs for all components ranging from 0.97 to 0.999. In volumetric analysis, mean volumes of the entire leg and each component did not differ significantly between reference standard and deep learning measurements (p > 0.05). Our GUI displays lymphedema mapping, highlighting segmented fat, muscle, and fluid-fibrotic components in the entire leg.

CONCLUSION: Our deep learning algorithm provides an automated segmentation tool enabling accurate segmentation, volume measurement of tissue components, and lymphedema mapping.

KEY POINTS: Question Clinical assessment of lymphedema remains challenging, particularly for tissue segmentation and quantitative severity evaluation. Findings A deep learning algorithm achieved DSCs > 0.95 and VS > 0.97 for fat, muscle, and fluid-fibrotic components in internal and external validation datasets. Clinical relevance The developed deep learning tool accurately segments and quantifies lower extremity tissue components on CT scans, enabling automated lymphedema evaluation and mapping with high segmentation accuracy.

PMID:40377677 | DOI:10.1007/s00330-025-11673-3

Categories: Literature Watch
