Deep learning

Addressing bias in biomarker discovery for inflammatory bowel diseases: A multi-faceted analytical approach

Sun, 2025-07-20 06:00

Int Immunopharmacol. 2025 Jul 19;163:115238. doi: 10.1016/j.intimp.2025.115238. Online ahead of print.

ABSTRACT

Xiang-Guang et al. investigate the identification of novel biomarkers linked to M1 macrophage infiltration in inflammatory bowel diseases (IBD). Utilizing advanced bioinformatics and machine learning techniques, the researchers developed predictive models and employed the SHAP algorithm to assess feature importance, revealing that the top ten features corresponded exclusively to host genes. However, significant concerns regarding the model-specific nature of SHAP assessments raise doubts about the reliability of the reported feature importance rankings. To address these issues, we advocate for a multifaceted approach combining feature agglomeration (FA), highly variable gene selection (HVGS), and Spearman's correlation for a more accurate analysis. This integrated methodology aims to enhance our understanding of biological factors in IBD and improve diagnostic and therapeutic strategies.
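To make the proposed combination concrete, here is a minimal sketch (not the authors' pipeline) of how feature agglomeration, highly variable gene selection, and Spearman's correlation can be used together to cross-check model-specific importance rankings; the data are synthetic stand-ins for real expression profiles, and the thresholds are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): FA + HVGS, then a Spearman
# check of whether two model families agree on feature importance.
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import FeatureAgglomeration
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))            # 200 samples x 500 genes (synthetic)
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# Highly variable gene selection: keep the top 100 genes by variance.
hvg_idx = np.argsort(X.var(axis=0))[-100:]
X_hvg = X[:, hvg_idx]

# Feature agglomeration: merge correlated genes into 20 cluster features.
fa = FeatureAgglomeration(n_clusters=20)
X_fa = fa.fit_transform(X_hvg)

# Fit two different model families and compare their importance rankings;
# a low Spearman correlation flags model-specific (unstable) importances.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_fa, y)
gb = GradientBoostingClassifier(random_state=0).fit(X_fa, y)
rho, p = spearmanr(rf.feature_importances_, gb.feature_importances_)
print(f"Cross-model importance agreement: rho={rho:.2f} (p={p:.3f})")
```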

PMID:40684706 | DOI:10.1016/j.intimp.2025.115238

Categories: Literature Watch

Continual source-free active domain adaptation for nasopharyngeal carcinoma tumor segmentation across multiple hospitals

Sun, 2025-07-20 06:00

Neural Netw. 2025 Jul 13;192:107869. doi: 10.1016/j.neunet.2025.107869. Online ahead of print.

ABSTRACT

Nasopharyngeal carcinoma (NPC) is a common malignant tumor, and precise gross tumor volume (GTV) delineation is crucial for effective NPC radiotherapy. Deep learning techniques have enabled automated GTV segmentation; nevertheless, model performance often degrades due to domain shifts in multi-center data scenarios. Recent source-free active domain adaptation methods have achieved promising results; however, they are still limited by several issues: (i) dependency on source data features, (ii) inappropriate selection of biased and redundant samples, and (iii) catastrophic forgetting. In the current investigation, we propose a novel continual source-free active domain adaptation (CSFADA) framework for GTV segmentation of NPC. Inspired by self-supervised and cross-correlation learning, we introduce a domain reference and invariants selection strategy to address the first two challenges. The strategy first acquires target domain knowledge in a self-supervised manner, then computes a domain-distance and a domain-invariance score for each sample, thereby selecting informative samples. To address the third challenge, we develop a dual-stage recurrent distillation strategy based on clinical practice. Specifically, stage I employs self-supervised learning to learn generalizable representations and preserve source domain knowledge, while stage II decouples classical knowledge distillation to avoid optimization conflicts and thus better preserve source domain information. We conduct extensive experiments on datasets from three centers for GTV segmentation of NPC. The experimental results demonstrate the superiority of our proposed methods. Our code is publicly available at https://github.com/YZC-99/CSFADA.git.
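The sample-scoring step can be pictured with a schematic sketch; this is an assumption-laden illustration of the general idea (distance to the target-domain centroid plus augmentation invariance), not the paper's exact formulation, and `encoder` and `augment` are placeholders for any feature extractor and augmentation function.

```python
# Schematic sketch (assumptions, not the paper's exact scoring): rank target
# samples for active selection by domain distance and augmentation invariance.
import torch
import torch.nn.functional as F

def select_informative(encoder, images, augment, k=16):
    """Rank unlabeled target samples; return indices of the top-k."""
    with torch.no_grad():
        feats = F.normalize(encoder(images), dim=1)          # (N, D)
        feats_aug = F.normalize(encoder(augment(images)), dim=1)
    centroid = F.normalize(feats.mean(dim=0), dim=0)
    domain_distance = 1 - feats @ centroid                   # far from the bulk
    invariance = (feats * feats_aug).sum(dim=1)              # cosine self-consistency
    # Prefer samples that are far from the domain centroid yet stable under
    # augmentation (informative rather than noisy).
    score = domain_distance + invariance
    return torch.topk(score, k).indices

# Usage with any feature extractor, e.g.:
# idx = select_informative(model.backbone, target_batch, my_augment_fn)
```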

PMID:40684700 | DOI:10.1016/j.neunet.2025.107869

Categories: Literature Watch

Recognition of microplastic aging features based on multimodal data fusion and attention mechanisms

Sun, 2025-07-20 06:00

J Hazard Mater. 2025 Jul 17;496:139301. doi: 10.1016/j.jhazmat.2025.139301. Online ahead of print.

ABSTRACT

Microplastics undergo complex physicochemical changes during aging, which traditional single-modality methods struggle to explain. We analyzed 1371 samples across seven aging types using a deep learning model integrating SEM images and FT-IR data via multimodal fusion and attention mechanisms. The model achieved 96.4% validation accuracy, surpassing single-image (85.3%) and single-spectroscopy (47.8%) models. Attention mechanisms highlighted key features: chemical aging linked the C=O peak (1700-1750 cm⁻¹) to surface etching; UV aging associated the O-H peak (3300-3500 cm⁻¹) with dense cracks; physical aging connected C=C vibrations (1650-1680 cm⁻¹) to wear marks. The model performed robustly on complex aging samples, achieving an 80.9% dual-attribution success rate in UV scenarios. It identified UV degradation as the primary factor in natural aging (78.6% frequency) and indicated potential chemical degradation risks in paddy fields. Joint features were visualized via t-SNE and validated using Mahalanobis distance-based metric learning. This approach enhances our understanding of microplastic aging mechanisms and provides a foundation for linking laboratory observations with natural environmental conditions, supporting the development of methods for lifecycle management and ecological risk assessment of microplastics.
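An illustrative sketch of the fusion pattern described above: a CNN branch for the SEM image and a dense branch for the FT-IR spectrum, combined by a simple per-modality attention weighting. All shapes and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (assumed shapes, not the authors' model): attention-
# weighted fusion of an image branch and a spectral branch.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, n_classes=7):                    # seven aging types
        super().__init__()
        self.img_branch = nn.Sequential(                # SEM image -> 128-d
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128))
        self.spec_branch = nn.Sequential(               # FT-IR spectrum -> 128-d
            nn.Linear(1000, 256), nn.ReLU(), nn.Linear(256, 128))
        self.attn = nn.Linear(128, 1)                   # per-modality attention
        self.head = nn.Linear(128, n_classes)

    def forward(self, img, spec):
        z = torch.stack([self.img_branch(img), self.spec_branch(spec)], dim=1)
        w = torch.softmax(self.attn(z), dim=1)          # (B, 2, 1) weights
        return self.head((w * z).sum(dim=1))            # attention-weighted fusion

model = MultimodalFusion()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1000))
print(logits.shape)  # torch.Size([4, 7])
```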

PMID:40684512 | DOI:10.1016/j.jhazmat.2025.139301

Categories: Literature Watch

Deep learning-based eye sign communication system for people with speech impairments

Sun, 2025-07-20 06:00

Disabil Rehabil Assist Technol. 2025 Jul 20:1-22. doi: 10.1080/17483107.2025.2532698. Online ahead of print.

ABSTRACT

Objective: People with motor difficulties and speech impairments often struggle to communicate their needs and views. Augmentative and Alternative Communication (AAC) offers solutions through gestures, body language, or specialized equipment. However, eye gaze and eye signs remain the sole communication method for some individuals. While existing eye-gaze devices leverage deep learning, their pre-calibration techniques can be unreliable and susceptible to lighting conditions. Research into eye sign-based communication, on the other hand, is still at a very early stage.

Methods: In this research, we propose an eye sign-based communication system that operates on deep learning principles and accepts eye sign patterns from speech-impaired or paraplegic individuals via a standard webcam. The system converts the eye signs into alphabets, words, or sentences and displays the resulting text visually on the screen. In addition, it provides a vocal prompt for the user and the caretaker. It functions effectively in various lighting conditions without requiring calibration and integrates a text prediction function for user convenience.

Impact: Experiments conducted with participants aged between 18 and 35 years yielded average accuracy rates of 98%, 99%, and 99% for alphabet, word, and sentence formation, respectively. These results demonstrate the system's robustness and potential to significantly benefit individuals with speech impairments.

PMID:40684450 | DOI:10.1080/17483107.2025.2532698

Categories: Literature Watch

Enhancing cardiac disease detection via a fusion of machine learning and medical imaging

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26269. doi: 10.1038/s41598-025-12030-6.

ABSTRACT

Cardiovascular illnesses continue to be a predominant cause of mortality globally, underscoring the necessity for prompt and precise diagnosis to mitigate consequences and healthcare expenditures. This work presents a complete hybrid methodology that integrates machine learning techniques with medical image analysis to improve the identification of cardiovascular diseases. This research integrates multiple imaging modalities such as echocardiography, cardiac MRI, and chest radiographs with patient health records, enhancing diagnostic accuracy beyond standard techniques that depend exclusively on numerical clinical data. During the preprocessing phase, essential visual features are extracted from medical images using image processing methods and convolutional neural networks (CNNs). These are subsequently integrated with clinical characteristics and input into various machine learning classifiers, including Support Vector Machines (SVM), Random Forest (RF), XGBoost, and Deep Neural Networks (DNNs), to differentiate between healthy individuals and patients with cardiovascular disease. The proposed method attained a diagnostic accuracy of up to 96%, exceeding models reliant exclusively on clinical data. This study highlights the capability of integrating artificial intelligence with medical imaging to create a highly accurate and non-invasive diagnostic instrument for cardiovascular disease.
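The fusion step reduces to concatenating image-derived features with clinical variables before classification; a hedged sketch follows, with simulated CNN embeddings and random labels standing in for real data (so the printed scores are chance-level, not the paper's results).

```python
# Hedged sketch of early fusion: CNN-derived image features (simulated here)
# concatenated with clinical variables, then fed to standard classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
img_feats = rng.normal(size=(300, 64))     # e.g. pooled CNN embeddings
clinical = rng.normal(size=(300, 10))      # e.g. age, blood pressure, lipids
y = rng.integers(0, 2, size=300)           # healthy vs. cardiovascular disease

X = np.hstack([img_feats, clinical])       # early fusion by concatenation
for name, clf in [("SVM", make_pipeline(StandardScaler(), SVC())),
                  ("RF", RandomForestClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")            # random labels => chance level here
```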

PMID:40683984 | DOI:10.1038/s41598-025-12030-6

Categories: Literature Watch

Advancing EEG based stress detection using spiking neural networks and convolutional spiking neural networks

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26267. doi: 10.1038/s41598-025-10270-0.

ABSTRACT

Accurate and efficient analysis of electroencephalogram (EEG) signals is crucial for applications like neurological diagnosis and Brain-Computer Interfaces (BCI). Traditional methods often fall short in capturing the intricate temporal dynamics inherent in EEG data. This paper explores the use of Convolutional Spiking Neural Networks (CSNNs) to enhance EEG signal classification. We apply the Discrete Wavelet Transform (DWT) for feature extraction and evaluate CSNN performance on the PhysioNet EEG dataset, benchmarking it against traditional deep learning and machine learning methods. The findings indicate that CSNNs achieve high accuracy, reaching 98.75% in 10-fold cross-validation, and an F1 score of 98.60%. Notably, this F1 score represents an improvement over previous benchmarks, highlighting the effectiveness of our approach. Along with offering advantages in temporal precision and energy efficiency, CSNNs emerge as a promising solution for next-generation EEG analysis systems.
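The DWT feature-extraction step is standard and easy to sketch (PyWavelets assumed); the spiking-network classifier itself is omitted, and the wavelet, level, and statistics below are illustrative choices rather than the paper's settings.

```python
# Minimal sketch of DWT feature extraction for one EEG channel.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Per-subband energy and standard deviation from a multilevel DWT."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                         # [cA_L, cD_L, ..., cD_1]
        feats += [np.sum(c ** 2), np.std(c)]
    return np.array(feats)

eeg_channel = np.random.randn(1600)          # stand-in for one EEG channel
print(dwt_features(eeg_channel).shape)       # (10,) = 5 subbands x 2 stats
```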

PMID:40683976 | DOI:10.1038/s41598-025-10270-0

Categories: Literature Watch

A novel hybrid convolutional and transformer network for lymphoma classification

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26259. doi: 10.1038/s41598-025-11277-3.

ABSTRACT

Lymphoma poses a critical health challenge worldwide, demanding computer-aided solutions for diagnosis, treatment, and research to significantly enhance patient outcomes and combat this pervasive disease. Accurate classification of lymphoma subtypes from Whole Slide Images (WSIs) remains a complex challenge due to morphological similarities among subtypes and the limitations of models that fail to jointly capture local and global features. Traditional diagnostic methods, limited by subjectivity and inconsistencies, highlight the need for advanced, Artificial Intelligence (AI)-driven solutions. This study proposes a hybrid deep learning framework, the Hybrid Convolutional and Transformer Network for Lymphoma Classification (HCTN-LC), designed to enhance the precision and interpretability of lymphoma subtype classification. The model employs a dual-pathway architecture that combines a lightweight SqueezeNet for local feature extraction with a Vision Transformer (ViT) for capturing global context. A Feature Fusion and Enhancement Module (FFEM) is introduced to dynamically integrate features from both pathways. The model is trained and evaluated on a large WSI dataset encompassing three lymphoma subtypes: CLL, FL, and MCL. HCTN-LC achieves superior performance with an overall accuracy of 99.87%, sensitivity of 99.87%, specificity of 99.93%, and AUC of 0.9991, outperforming several recent hybrid models. Grad-CAM visualizations confirm the model's focus on diagnostically relevant regions. The proposed HCTN-LC demonstrates strong potential for real-time and low-resource clinical deployment, offering a robust and interpretable AI tool for hematopathological diagnosis.
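A conceptual sketch of the dual-pathway idea using off-the-shelf torchvision backbones; this is not the published HCTN-LC, and the FFEM is replaced here by plain concatenation plus a small MLP head for simplicity.

```python
# Conceptual sketch (torchvision stand-ins): SqueezeNet for local features,
# ViT for global context, concatenation in place of the paper's FFEM.
import torch
import torch.nn as nn
from torchvision.models import squeezenet1_1, vit_b_16

class DualPathway(nn.Module):
    def __init__(self, n_classes=3):                    # CLL, FL, MCL
        super().__init__()
        self.local = squeezenet1_1(weights=None).features   # (B, 512, h, w)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.glob = vit_b_16(weights=None)
        self.glob.heads = nn.Identity()                      # (B, 768)
        self.fuse = nn.Sequential(nn.Linear(512 + 768, 256),
                                  nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, x):                               # x: (B, 3, 224, 224)
        local = self.pool(self.local(x)).flatten(1)     # local texture cues
        glob = self.glob(x)                             # global context
        return self.fuse(torch.cat([local, glob], dim=1))

print(DualPathway()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 3])
```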

PMID:40683974 | DOI:10.1038/s41598-025-11277-3

Categories: Literature Watch

Development of an optimized deep learning model for predicting slope stability in nano silica stabilized soils

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26213. doi: 10.1038/s41598-025-11497-7.

ABSTRACT

Assessing the stability of infinite slopes is a critical challenge in geotechnical engineering, particularly when Nano-silica (NS) stabilization is introduced, which changes soil properties and increases mechanical strength. Traditional slope stability analyses, such as limit equilibrium methods (LEM) and finite element methods (FEM), often require extensive computational resources and fail to capture the non-linear relations between soil stabilization and slope failure mechanisms. This study proposes a hybrid deep learning classification model, integrating convolutional neural networks (CNN), long short-term memory (LSTM), and recurrent neural networks (RNN), optimized with Optuna to predict the stability of NS-stabilized infinite slopes. The model was trained and validated on a dataset containing 3,159 slope cases with varying NS percentages and geotechnical parameters. The results show that the RNN-CNN-LSTM model, optimized through Optuna, outperforms conventional machine learning models and achieves an accuracy of 99.4% on unseen test data, supported by stable validation trends and robust predictive performance. Soil Index (SI), Unit Weight (γ), Curing Days (CD), Nano-Silica Content (NS%), Cohesion (c), Internal Friction Angle (Ø), Slope Height (H), Slope Angle (β), and Pore Water Pressure Ratio (ru) were the input features. Explainable Artificial Intelligence (XAI) and SHAP (Shapley Additive Explanations) techniques were employed to enhance model interpretability, revealing that c, NS%, and β are the most influential factors governing slope stability. This research shows that hybrid deep learning models combined with optimization and interpretability techniques provide a powerful and efficient tool for geotechnical engineers to assess slope stability, reduce computational effort, and improve predictive accuracy. The proposed framework can be integrated into real-time early warning and monitoring platforms, enhancing risk assessment and infrastructure resilience in landslide-prone regions.
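The Optuna-driven search has a simple shape worth illustrating; the search space below is hypothetical (the paper's exact hyperparameters are not reproduced), and the training call is stubbed so the sketch runs end to end.

```python
# Hedged sketch of hyperparameter search with Optuna; `train_hybrid_model`
# would train the RNN-CNN-LSTM and return validation accuracy.
import optuna

def objective(trial):
    params = {
        "conv_filters": trial.suggest_categorical("conv_filters", [16, 32, 64]),
        "lstm_units": trial.suggest_int("lstm_units", 32, 256, log=True),
        "dropout": trial.suggest_float("dropout", 0.0, 0.5),
        "lr": trial.suggest_float("lr", 1e-4, 1e-2, log=True),
    }
    # val_acc = train_hybrid_model(params)   # hypothetical training routine
    val_acc = 1.0 - params["dropout"] * 0.1  # stub so the sketch executes
    return val_acc

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```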

PMID:40683973 | DOI:10.1038/s41598-025-11497-7

Categories: Literature Watch

Influence of high-performance image-to-image translation networks on clinical visual assessment and outcome prediction: utilizing ultrasound to MRI translation in prostate cancer

Sat, 2025-07-19 06:00

Int J Comput Assist Radiol Surg. 2025 Jul 19. doi: 10.1007/s11548-025-03481-3. Online ahead of print.

ABSTRACT

PURPOSE: Image-to-image (I2I) translation networks have emerged as promising tools for generating synthetic medical images; however, their clinical reliability and ability to preserve diagnostically relevant features remain underexplored. This study evaluates the performance of state-of-the-art 2D/3D I2I networks for converting ultrasound (US) images to synthetic MRI in prostate cancer (PCa) imaging. The novelty lies in combining radiomics, expert clinical evaluation, and classification performance to comprehensively benchmark these models for potential integration into real-world diagnostic workflows.

METHODS: A dataset of 794 PCa patients was analyzed using ten leading I2I networks to synthesize MRI from US input. Radiomics feature (RF) analysis was performed using Spearman correlation to assess whether high-performing networks (SSIM > 0.85) preserved quantitative imaging biomarkers. A qualitative evaluation by seven experienced physicians assessed the anatomical realism, presence of artifacts, and diagnostic interpretability of synthetic images. Additionally, classification tasks using synthetic images were conducted using two machine learning and one deep learning model to assess the practical diagnostic benefit.
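The two quantitative checks described above (the SSIM > 0.85 screening gate and the Spearman test of radiomics preservation) can be sketched minimally; synthetic arrays stand in for real and synthetic MRI, and the rho > 0.7 preservation threshold is an assumption, not the paper's criterion.

```python
# Minimal sketch of the SSIM gate and Spearman-based feature preservation.
import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
real_mri = rng.random((128, 128))
synth_mri = real_mri + rng.normal(scale=0.05, size=(128, 128))

# Network screening: keep models whose synthetic output exceeds SSIM 0.85.
score = ssim(real_mri, synth_mri, data_range=synth_mri.max() - synth_mri.min())
print(f"SSIM = {score:.3f}, passes 0.85 gate: {score > 0.85}")

# Radiomics preservation: a feature counts as preserved if its values across
# patients correlate strongly between real and synthetic images.
rf_real = rng.random((50, 186))            # 50 patients x 186 features
rf_synth = rf_real + rng.normal(scale=0.3, size=rf_real.shape)
preserved = 0
for j in range(rf_real.shape[1]):
    rho, _ = spearmanr(rf_real[:, j], rf_synth[:, j])
    if rho > 0.7:                          # preservation threshold assumed
        preserved += 1
print(f"{preserved}/186 features preserved at rho > 0.7")
```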

RESULTS: Among all networks, 2D-Pix2Pix achieved the highest SSIM (0.855 ± 0.032). RF analysis showed that 76 out of 186 features were preserved post-translation, while the remainder were degraded or lost. Qualitative feedback revealed consistent issues with low-level feature preservation and artifact generation, particularly in lesion-rich regions. These evaluations were conducted to assess whether synthetic MRI retained clinically relevant patterns, supported expert interpretation, and improved diagnostic accuracy. Importantly, classification performance using synthetic MRI significantly exceeded that of US-based input, achieving an average accuracy and AUC of ~0.93 ± 0.05.

CONCLUSION: Although 2D-Pix2Pix showed the best overall performance in similarity and partial RF preservation, improvements are still required in lesion-level fidelity and artifact suppression. The combination of radiomics, qualitative, and classification analyses offered a holistic view of the current strengths and limitations of I2I models, supporting their potential in clinical applications pending further refinement and validation.

PMID:40683943 | DOI:10.1007/s11548-025-03481-3

Categories: Literature Watch

Application of image guided analyses to monitor fecal microbial composition and diversity in a human cohort

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26237. doi: 10.1038/s41598-025-10629-3.

ABSTRACT

The critical role of gut microbiota in human health and disease has been increasingly illustrated over the past decades, with a significant amount of research demonstrating an unmet need for self-monitoring of fecal microbial composition in an easily accessible, rapid manner. In this study, we employed a tool for Smartphone Microbiome Evaluation and Analysis in Rapid-time (SMEAR) that uses images of fecal smears to predict microbial compositional characteristics in a human cohort. A subset of human fecal samples was randomly retrieved from the second wave of data collection in the Healthy Life in an Urban Setting (HELIUS) study cohort. Per sample, 16S rRNA gene sequencing data were generated in addition to an image of a fecal smear spread on a standard A4 paper. Sequencing-paired images were used to validate a computer vision-based technology that classifies whether a sample has low or high relative abundance of each of the 50 most abundant genera, as well as low or high α-diversity (Shannon index). In total, 888 fecal samples were used in this application of the SMEAR technology. SMEAR accurately predicted whether a fecal sample had low or high relative abundance of Sporobacter, Oscillibacter, and Intestinimonas (very good performance, AUC > 0.8, p-value < 0.001 for all models), as well as Neglecta, Megasphaera, Lachnospira, Methanobrevibacter, Harryflintia, Roseburia, and Dialister (good performance, AUC > 0.75, p-value < 0.001 for all models). Likewise, SMEAR could classify whether a fecal sample was of low or high α-diversity (AUC = 0.83, p-value < 0.001). Our study demonstrates that SMEAR robustly predicts microbial composition and diversity from digital images of fecal smears in a human cohort. These findings establish SMEAR as a new benchmark for rapid, cost-effective microbiome diagnostics and pave the way for its direct application in research settings and clinical validation.
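For reference, the Shannon index that SMEAR dichotomizes into low versus high is computed directly from relative abundances; a minimal example:

```python
# Shannon index (alpha-diversity) from taxon counts.
import numpy as np

def shannon_index(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                 # relative abundances, zeros dropped
    return float(-(p * np.log(p)).sum())

print(shannon_index([40, 30, 20, 10]))     # ~1.28; higher = more diverse
```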

PMID:40683926 | DOI:10.1038/s41598-025-10629-3

Categories: Literature Watch

Deep learning to identify stroke within 4.5 h using DWI and FLAIR in a prospective multicenter study

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26262. doi: 10.1038/s41598-025-10804-6.

ABSTRACT

To enhance thrombolysis eligibility in acute ischemic stroke, we developed a deep learning model to estimate stroke onset within 4.5 h using diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) images. Given the variability in human interpretation, our multimodal Res-U-Net (mRUNet) model integrates a modified U-Net and ResNet-34 to classify stroke onset as < 4.5 or ≥ 4.5 h. Using DWI and FLAIR images from patients scanned within 24 h of symptom onset, the modified U-Net generated a DWI-FLAIR mismatch image, while ResNet-34 performed the final classification. mRUNet was evaluated against ResNet-34 and DenseNet-121 on an internal test set (n = 123) and two external test sets: a single-center set (n = 468) and a multi-center set (n = 1151). mRUNet achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.903 on the internal set, and 0.910 and 0.868 on the external sets, significantly outperforming ResNet-34 and DenseNet-121. Our mRUNet model demonstrated robust and consistent classification of the 4.5-h onset-time window across datasets. By leveraging DWI and FLAIR images as a tissue clock, this model may support timely and individualized thrombolysis in patients with unclear stroke onset, such as those with wake-up stroke, in clinical settings.
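A structural sketch of the two-stage idea (assumed components, not the released mRUNet): a segmentation network maps concatenated DWI and FLAIR to a mismatch map, and a ResNet-34 then classifies it. A single convolution stands in for the U-Net so the sketch runs.

```python
# Structural sketch: U-Net-style mismatch map, then ResNet-34 classification.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class OnsetClassifier(nn.Module):
    def __init__(self, unet):
        super().__init__()
        self.unet = unet                          # any 2-in/1-out segmentation net
        self.cls = resnet34(weights=None, num_classes=2)
        self.cls.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)

    def forward(self, dwi, flair):
        mismatch = self.unet(torch.cat([dwi, flair], dim=1))  # tissue-clock map
        return self.cls(mismatch)                 # logits: <4.5 h vs >=4.5 h

toy_unet = nn.Conv2d(2, 1, 3, padding=1)          # stand-in for a real U-Net
logits = OnsetClassifier(toy_unet)(torch.randn(2, 1, 224, 224),
                                   torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```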

PMID:40683923 | DOI:10.1038/s41598-025-10804-6

Categories: Literature Watch

A deep learning-based prognostic approach for predicting turbofan engine degradation and remaining useful life

Sat, 2025-07-19 06:00

Sci Rep. 2025 Jul 19;15(1):26251. doi: 10.1038/s41598-025-09155-z.

ABSTRACT

Predicting the Remaining Useful Life (RUL) of turbofan engines can prevent air disasters caused by component degradation and is an important procedure in prognostics and health management (PHM). Therefore, a deep learning-based RUL prediction approach is proposed. The CMAPSS benchmark dataset is used to determine the RUL of aviation engines, focusing specifically on the FD001 and FD003 sub-datasets. In this study, we propose CAELSTM (Convolutional Autoencoder and Attention-based LSTM), a hybrid model for RUL prediction. First, the sub-datasets are preprocessed and a piecewise linear degradation model is applied. The proposed model uses an autoencoder followed by an LSTM layer with an attention mechanism, which focuses on the most relevant components of the sequences. A fully connected convolutional neural network layer is used to further process the important features. Finally, the proposed model is evaluated and compared with other approaches. The results show that the model surpasses state-of-the-art methods, achieving RMSE values of 14.44 and 13.40 for FD001 and FD003, respectively. Other evaluation criteria, such as MAE and scoring, were also used: MAE reached 10.49 and 10.68, and the score 282.38 and 264.47, for FD001 and FD003, respectively. These results highlight the model's promise for improving PHM systems, offering a dependable tool for predictive maintenance in aerospace and related fields, and demonstrate the effectiveness of the model in enhancing aviation safety.
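The piecewise linear degradation model mentioned above is commonly realized on CMAPSS by capping RUL at a constant before the linear countdown; the cap value below (125) is a conventional choice and an assumption here, not necessarily the paper's.

```python
# Piecewise linear RUL labeling: flat at `cap`, then linear to 0 at failure.
import numpy as np

def piecewise_rul(cycle_count, cap=125):
    """RUL labels for an engine run of `cycle_count` cycles."""
    rul = cycle_count - 1 - np.arange(cycle_count)   # linear countdown
    return np.minimum(rul, cap)

print(piecewise_rul(200)[:3], piecewise_rul(200)[-3:])  # [125 125 125] [2 1 0]
```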

PMID:40683914 | DOI:10.1038/s41598-025-09155-z

Categories: Literature Watch

2.5D Deep Learning-Based Prediction of Pathological Grading of Clear Cell Renal Cell Carcinoma Using Contrast-Enhanced CT: A Multicenter Study

Sat, 2025-07-19 06:00

Acad Radiol. 2025 Jul 19:S1076-6332(25)00636-1. doi: 10.1016/j.acra.2025.06.056. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: To develop and validate a deep learning model based on arterial phase-enhanced CT for predicting the pathological grading of clear cell renal cell carcinoma (ccRCC).

MATERIALS AND METHODS: Data from 564 patients diagnosed with ccRCC from five distinct hospitals were retrospectively analyzed. Patients from centers 1 and 2 were randomly divided into a training set (n=283) and an internal test set (n=122). Patients from centers 3, 4, and 5 served as external validation sets 1 (n=60), 2 (n=38), and 3 (n=61), respectively. A 2D model, a 2.5D model (three-slice input), and a radiomics-based multi-layer perceptron (MLP) model were developed. Model performance was evaluated using the area under the curve (AUC), accuracy, and sensitivity.
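For context, a "2.5D (three-slice input)" model typically stacks adjacent slices as channels so a 2D CNN sees local through-plane context; a minimal sketch under that reading (slice choice and volume shape assumed):

```python
# One common realization of a three-slice 2.5D input for a 2D CNN.
import numpy as np

def make_25d_input(volume, center_idx):
    """Stack slices [i-1, i, i+1] of a (D, H, W) CT volume into 3 channels."""
    i = int(np.clip(center_idx, 1, volume.shape[0] - 2))
    return np.stack([volume[i - 1], volume[i], volume[i + 1]], axis=0)

ct = np.random.rand(40, 256, 256)          # stand-in arterial-phase volume
x = make_25d_input(ct, center_idx=20)
print(x.shape)                             # (3, 256, 256) -> feeds a 2D CNN
```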

RESULTS: The 2.5D model outperformed the 2D and MLP models. Its AUCs were 0.959 (95% CI: 0.9438-0.9738) for the training set, 0.879 (95% CI: 0.8401-0.9180) for the internal test set, and 0.870 (95% CI: 0.8076-0.9334), 0.862 (95% CI: 0.7581-0.9658), and 0.849 (95% CI: 0.7766-0.9216) for the three external validation sets, respectively. The corresponding accuracy values were 0.895, 0.836, 0.827, 0.825, and 0.839. Compared to the MLP model, the 2.5D model achieved significantly higher AUCs (increases of 0.150 [p<0.05], 0.112 [p<0.05], and 0.088 [p<0.05]) and accuracies (increases of 0.077 [p<0.05], 0.075 [p<0.05], and 0.101 [p<0.05]) in the external validation sets.

CONCLUSION: The 2.5D model, based on three-slice arterial-phase CT input, demonstrated improved predictive performance for the WHO/ISUP grading of ccRCC.

PMID:40683765 | DOI:10.1016/j.acra.2025.06.056

Categories: Literature Watch

A Multisite Fusion-Based Deep Convolutional Neural Network for Classification of Helicobacter pylori Infection Status Using Endoscopic Images: A Multicenter Study

Sat, 2025-07-19 06:00

J Gastroenterol Hepatol. 2025 Jul 19. doi: 10.1111/jgh.70004. Online ahead of print.

ABSTRACT

BACKGROUND AND AIM: We aimed to develop a deep convolutional neural network (DCNN) that integrates features from multiple sites of the stomach to classify Helicobacter pylori (Hp) infection status, distinguishing between uninfected, previously infected, and currently infected.

METHODS: Ten deep learning architectures were employed to develop DCNN models using a training dataset comprising 3380 white-light images collected from 676 subjects across eight centers. External validation was conducted with a separate dataset consisting of images from 126 individuals. External testing was subsequently performed to assess and compare the diagnostic efficacy between single-site and multisite fusion DCNN models.

RESULTS: Among these models, the DCNN model using Wide-ResNet emerged as the top performer, achieving an accuracy of 68.11% (95% confidence interval [CI]: 63.36%-73.09%), with an area under the curve (AUC) of 75.06% (95% CI: 70.22%-80.24%) for noninfection, 69.18% (95% CI: 64.51%-74.03%) for past infection, and 77.04% (95% CI: 72.12%-82.39%) for current infection, using images from a single site on the lesser gastric curvature. In comparison, the voting-based multisite fusion DCNN model demonstrated superior accuracy (73.83%, 95% CI: 69.12%-78.65%) and AUC (77.51%, 95% CI: 72.89%-82.59%), particularly notable for noninfection and current infection. Additionally, the DCNN model exhibited higher sensitivity, specificity, and precision than experienced endoscopists.
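Voting-based multisite fusion has a simple core worth sketching: each single-site model predicts a class and the majority label wins. The site list and vote rule below are hypothetical illustrations, not the study's exact configuration.

```python
# Sketch of majority-vote fusion across per-site model predictions.
import numpy as np

def majority_vote(site_predictions):
    """site_predictions: (n_sites, n_patients) integer class labels;
    0 = uninfected, 1 = past infection, 2 = current infection."""
    preds = np.asarray(site_predictions)
    return np.apply_along_axis(lambda v: np.bincount(v, minlength=3).argmax(),
                               axis=0, arr=preds)

votes = [[0, 1, 2, 2],                     # lesser curvature model
         [0, 1, 1, 2],                     # antrum model (hypothetical site)
         [0, 2, 2, 2]]                     # fundus model (hypothetical site)
print(majority_vote(votes))                # [0 1 2 2]
```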

CONCLUSIONS: The voting-based multisite fusion DCNN model performed strongly, excelling in classifying Hp infection status as uninfected or currently infected.

PMID:40682425 | DOI:10.1111/jgh.70004

Categories: Literature Watch

Latent Class Analysis Identifies Distinct Patient Phenotypes Associated With Mistaken Treatment Decisions and Adverse Outcomes in Coronary Artery Disease

Sat, 2025-07-19 06:00

Angiology. 2025 Jul 19:33197251350182. doi: 10.1177/00033197251350182. Online ahead of print.

ABSTRACT

This study aimed to identify patient characteristics linked to mistaken treatments and major adverse cardiovascular events (MACE) in percutaneous coronary intervention (PCI) for coronary artery disease (CAD) using deep learning-based fractional flow reserve (DEEPVESSEL-FFR, DVFFR). A retrospective cohort of 3,840 PCI patients was analyzed using latent class analysis (LCA) based on eight factors. Mistaken treatment was defined as negative DVFFR patients undergoing revascularization or positive DVFFR patients not receiving it. MACE included all-cause mortality, rehospitalization for unstable angina, and non-fatal myocardial infarction. Patients were classified into comorbidities (Class 1), smoking-drinking (Class 2), and relatively healthy (Class 3) groups. Mistaken treatment was highest in Class 2 (15.4% vs. 6.7%, P < .001), while MACE was highest in Class 1 (7.0% vs. 4.8%, P < .001). Adjusted analyses showed increased mistaken treatment risk in Class 1 (OR 1.96; 95% CI 1.49-2.57) and Class 2 (OR 1.69; 95% CI 1.28-2.25) compared with Class 3. Class 1 also had higher MACE risk (HR 1.53; 95% CI 1.10-2.12). In conclusion, comorbidities and smoking-drinking classes had higher mistaken treatment and MACE risks compared with the relatively healthy class.
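The "mistaken treatment" label defined above reduces to a simple rule (effectively an exclusive-or of DVFFR result and revascularization); a minimal encoding, with hypothetical variable names:

```python
# Mistaken treatment per the definition above: negative DVFFR but
# revascularized, or positive DVFFR but not revascularized.
def mistaken_treatment(dvffr_positive: bool, revascularized: bool) -> bool:
    return dvffr_positive != revascularized    # XOR of the two flags

print(mistaken_treatment(dvffr_positive=False, revascularized=True))   # True
print(mistaken_treatment(dvffr_positive=True, revascularized=True))    # False
```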

PMID:40682405 | DOI:10.1177/00033197251350182

Categories: Literature Watch

Emerging Role of MRI-Based Artificial Intelligence in Individualized Treatment Strategies for Hepatocellular Carcinoma: A Narrative Review

Sat, 2025-07-19 06:00

J Magn Reson Imaging. 2025 Jul 19. doi: 10.1002/jmri.70048. Online ahead of print.

ABSTRACT

Hepatocellular carcinoma (HCC) is the most common subtype of primary liver cancer, with significant variability in patient outcomes even within the same stage according to the Barcelona Clinic Liver Cancer staging system. Accurately predicting patient prognosis and potential treatment response prior to therapy initiation is crucial for personalized clinical decision-making. This review focuses on the application of artificial intelligence (AI) in magnetic resonance imaging for guiding individualized treatment strategies in HCC management. Specifically, we emphasize AI-based tools for pre-treatment prediction of therapeutic response and prognosis. AI techniques such as radiomics and deep learning have shown strong potential in extracting high-dimensional imaging features to characterize tumors and liver parenchyma, predict treatment outcomes, and support prognostic stratification. These advances contribute to more individualized and precise treatment planning. However, challenges remain in model generalizability, interpretability, and clinical integration, highlighting the need for standardized imaging datasets and multi-omics fusion to fully realize the potential of AI in personalized HCC care. Evidence level: 5. Technical efficacy: 4.

PMID:40682357 | DOI:10.1002/jmri.70048

Categories: Literature Watch

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs

Sat, 2025-07-19 06:00

Clin Oral Implants Res. 2025 Jul 18. doi: 10.1111/clr.70003. Online ahead of print.

ABSTRACT

OBJECTIVES: To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region.

MATERIALS AND METHODS: Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools. The segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference. The virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies.
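One way to quantify the discrepancy after superimposition is the RMS of nearest-neighbor distances between the two surfaces; the sketch below assumes already-registered point clouds and does not reproduce the planning software's exact pipeline.

```python
# Hedged sketch: RMS surface deviation between segmented and reference points.
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_deviation(seg_points, ref_points):
    """seg_points (N, 3), ref_points (M, 3): registered surface points in mm."""
    dists, _ = cKDTree(ref_points).query(seg_points)   # closest reference point
    return float(np.sqrt(np.mean(dists ** 2)))

rng = np.random.default_rng(7)
ref = rng.random((5000, 3)) * 10                       # reference scan surface
seg = ref + rng.normal(scale=0.2, size=ref.shape)      # segmentation + noise
print(f"RMS deviation: {rms_surface_deviation(seg, ref):.2f} mm")
```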

RESULTS: The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in lower RMS deviation than both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed among all investigated segmentation methods, both for the overall tooth area and for each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05).

CONCLUSIONS: AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.

PMID:40682303 | DOI:10.1111/clr.70003

Categories: Literature Watch

AI-Driven segmentation and morphogeometric profiling of epicardial adipose tissue in type 2 diabetes

Fri, 2025-07-18 06:00

Cardiovasc Diabetol. 2025 Jul 18;24(1):294. doi: 10.1186/s12933-025-02829-y.

ABSTRACT

BACKGROUND: Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D.

METHODS: A total of 90 participants (45 with T2D and 45 age- and sex-matched controls) underwent cardiac 3D Dixon MRI, enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation and fragmentation index, were extracted and correlated with PLS-DA latent variables using Pearson correlation. Features with high correlation were identified as key differentiators and evaluated using a Random Forest classifier.
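A signed distance map of the kind used for shape-aware supervision can be computed from a binary mask with two Euclidean distance transforms; a minimal sketch (the sign convention, positive outside and negative inside, is an assumption):

```python
# Minimal SDM from a binary mask: zero on the boundary, signed elsewhere.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """mask: binary array; positive outside the object, negative inside."""
    inside = distance_transform_edt(mask)              # distance to background
    outside = distance_transform_edt(1 - mask)         # distance to foreground
    return outside - inside

m = np.zeros((64, 64)); m[20:44, 20:44] = 1
sdm = signed_distance_map(m)
print(sdm.min(), sdm.max())                            # negative in, positive out
```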

RESULTS: EAT-Seg achieved a DSC of 0.881, a HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between T2D and control groups. Morphogeometric feature analysis identified volume and thickness gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703.

CONCLUSIONS: This AI-based framework enables accurate segmentation for structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.

PMID:40682091 | DOI:10.1186/s12933-025-02829-y

Categories: Literature Watch

Open-access ultrasonic diaphragm dataset and an automatic diaphragm measurement using deep learning network

Fri, 2025-07-18 06:00

Respir Res. 2025 Jul 18;26(1):251. doi: 10.1186/s12931-025-03325-3.

ABSTRACT

BACKGROUND: The assessment of diaphragm function is crucial for effective clinical management and the prevention of complications associated with diaphragmatic dysfunction. However, current measurement methodologies rely on manual techniques that are susceptible to human error. How does the performance of an automatic diaphragm measurement system based on a segmentation neural network, focusing on diaphragm thickness and excursion, compare with existing methodologies?

METHODS: The proposed system integrates segmentation and parameter measurement, leveraging a newly established ultrasound diaphragm dataset. This dataset comprises B-mode ultrasound images and videos for diaphragm thickness assessment, as well as M-mode images and videos for movement measurement. We introduce a novel deep learning-based segmentation network, the Multi-ratio Dilated U-Net (MDRU-Net), to enable accurate diaphragm measurements. The system additionally incorporates a comprehensive implementation plan for automated measurement.
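The "multi-ratio dilated" idea can be illustrated with parallel dilated convolutions at one U-Net level; the dilation rates and layer sizes below are assumptions for illustration, not the published MDRU-Net.

```python
# Illustrative multi-ratio dilated block: parallel dilated convolutions widen
# the receptive field, and their outputs are concatenated and merged.
import torch
import torch.nn as nn

class MultiRatioDilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.merge = nn.Conv2d(out_ch * len(rates), out_ch, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.merge(y))

block = MultiRatioDilatedBlock(1, 32)
print(block(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 32, 128, 128])
```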

RESULTS: Automatic measurement results are compared against manual assessments conducted by clinicians, revealing an average error of 8.12% in diaphragm thickening fraction measurements and a 4.3% average relative error in diaphragm excursion measurements. The results indicate overall minor discrepancies and enhanced potential for clinical detection of diaphragmatic conditions. Additionally, we design a user-friendly automatic measurement system for assessing diaphragm parameters and an accompanying method for measuring ultrasound-derived diaphragm parameters.

CONCLUSIONS: In this paper, we constructed a diaphragm ultrasound dataset of thickness and excursion. Based on the U-Net architecture, we developed an automatic diaphragm segmentation algorithm and designed an automatic parameter measurement scheme. A comparative error analysis was conducted against manual measurements. Overall, the proposed diaphragm ultrasound segmentation algorithm demonstrated high segmentation performance and efficiency. The automatic measurement scheme based on this algorithm exhibited high accuracy, eliminating subjective influence and enhancing the automation of diaphragm ultrasound parameter assessment, thereby providing new possibilities for diaphragm evaluation.

PMID:40682068 | DOI:10.1186/s12931-025-03325-3

Categories: Literature Watch
