Deep learning

Development and validation of prediction models for stroke and myocardial infarction in type 2 diabetes based on health insurance claims: does machine learning outperform traditional regression approaches?

Tue, 2025-02-18 06:00

Cardiovasc Diabetol. 2025 Feb 18;24(1):80. doi: 10.1186/s12933-025-02640-9.

ABSTRACT

BACKGROUND: Digitalization and big health system data open new avenues for targeted prevention and treatment strategies. We aimed to develop and validate prediction models for stroke and myocardial infarction (MI) in patients with type 2 diabetes based on routinely collected high-dimensional health insurance claims and compared predictive performance of traditional regression with state-of-the-art machine learning including deep learning methods.

METHODS: We used German health insurance claims from 2014 to 2019 with 287 potentially relevant literature-derived variables to predict 3-year risk of MI and stroke. Following a train-test split approach, we compared the performance of logistic regression with and without forward selection, LASSO regularization, random forests (RF), gradient boosting (GB), multi-layer perceptrons (MLP), and feature-tokenizer transformers (FTT). We assessed discrimination (areas under the precision-recall and receiver-operating characteristic curves, AUPRC and AUROC) and calibration.
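The evaluation protocol described here rests on AUPRC and AUROC computed on a held-out test set. A minimal sketch of that comparison, with synthetic data standing in for the claims features and illustrative model settings (none of these values come from the study):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for claims data: a rare outcome (~3.5%), many features.
X, y = make_classification(n_samples=5000, n_features=50, weights=[0.965],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    # Fit on the training split, score probabilities on the held-out split.
    p = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUPRC={average_precision_score(y_te, p):.3f}, "
          f"AUROC={roc_auc_score(y_te, p):.3f}")
```

Note that a null model's AUPRC equals the outcome prevalence, which is why the null baselines reported below (0.035 and 0.034) sit at the 3.5% event rate.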

RESULTS: Among n = 371,006 patients with type 2 diabetes (mean age: 67.2 years), 3.5% (n = 13,030) had MIs and 3.4% (n = 12,701) strokes. AUPRCs were 0.035 (MI) and 0.034 (stroke) for a null model, between 0.082 (MLP) and 0.092 (GB) for MI, and between 0.061 (MLP) and 0.073 (GB) for stroke. AUROCs were 0.5 for null models, between 0.70 (RF, MLP, FTT) and 0.71 (all other models) for MI, and between 0.66 (MLP) and 0.69 (GB) for stroke. All models were well calibrated.

CONCLUSIONS: Discrimination performance of claims-based models reached a ceiling at around 0.09 AUPRC and 0.7 AUROC. While for AUROC this performance was comparable to existing epidemiological models incorporating clinical information, comparison of other, potentially more relevant metrics, such as AUPRC, sensitivity and Positive Predictive Value was hampered by lack of reporting in the literature. The fact that machine learning including deep learning methods did not outperform more traditional approaches may suggest that feature richness and complexity were exploited before the choice of algorithm could become critical to maximize performance. Future research might focus on the impact of different feature derivation approaches on performance ceilings. In the absence of other more powerful screening alternatives, applying transparent regression-based models in routine claims, though certainly imperfect, remains a promising scalable low-cost approach for population-based cardiovascular risk prediction and stratification.

PMID:39966813 | DOI:10.1186/s12933-025-02640-9

Categories: Literature Watch

Integrating ultrasound radiomics and clinicopathological features for machine learning-based survival prediction in patients with nonmetastatic triple-negative breast cancer

Tue, 2025-02-18 06:00

BMC Cancer. 2025 Feb 18;25(1):291. doi: 10.1186/s12885-025-13635-w.

ABSTRACT

OBJECTIVE: This study aimed to evaluate the predictive value of implementing machine learning models based on ultrasound radiomics and clinicopathological features in the survival analysis of triple-negative breast cancer (TNBC) patients.

METHODS AND MATERIALS: All patients, including a retrospective cohort (training cohort, n = 306; internal validation cohort, n = 77) and a prospective external validation cohort (n = 82), were diagnosed with locoregional TNBC and underwent pre-intervention sonographic evaluation in this multi-center study. A thorough chart review was conducted for each patient to collect clinicopathological and sonographic features, and ultrasound radiomics features were obtained with PyRadiomics. Deep learning algorithms were utilized to delineate regions of interest (ROIs) on ultrasound images. Radiomics analysis pipeline modules were developed for analyzing features. Radiomic scores, clinical scores, and combined nomograms were analyzed to predict 2-year, 3-year, and 5-year overall survival (OS) and disease-free survival (DFS). Receiver operating characteristic (ROC) curves, calibration curves, and decision curves were used to evaluate prediction performance.

FINDINGS: Both clinical and radiomic scores showed good performance for overall survival and disease-free survival prediction in internal (median AUC of 0.82 and 0.72, respectively) and external validation (median AUC of 0.70 and 0.74, respectively). The combined nomograms had AUCs of 0.80-0.93 and 0.73-0.89 in the internal and external validation and had the best predictive performance in all tasks (p < 0.05), especially for 5-year OS (p < 0.01). Across the overall evaluation of six tasks, the combined models obtained better performance than the clinical and radiomic scores [AUCs of 0.83 (0.73, 0.93), 0.81 (0.72, 0.93), and 0.70 (0.61, 0.85), respectively].

INTERPRETATION: The combined nomograms based on pre-intervention ultrasound radiomics and clinicopathological features demonstrated strong performance in survival analysis. The new models may allow us to non-invasively classify TNBC patients with various disease outcomes.

PMID:39966783 | DOI:10.1186/s12885-025-13635-w

Categories: Literature Watch

Predicting mother and newborn skin-to-skin contact using a machine learning approach

Tue, 2025-02-18 06:00

BMC Pregnancy Childbirth. 2025 Feb 18;25(1):182. doi: 10.1186/s12884-025-07313-9.

ABSTRACT

BACKGROUND: Despite the known benefits of skin-to-skin contact (SSC), limited data exists on its implementation, especially its influencing factors. The current study was designed to use machine learning (ML) to identify the predictors of SSC.

METHODS: This study implemented predictive SSC approaches based on data obtained from the "Iranian Maternal and Neonatal Network (IMaN Net)" from January 2020 to January 2022. A predictive model was built using nine statistical learning models (linear regression, logistic regression, decision tree classification, random forest classification, a deep learning feedforward network, an extreme gradient boosting model, a light gradient boosting model, a support vector machine, and permutation feature classification with k-nearest neighbors). Demographic, obstetric, and maternal and neonatal clinical factors were considered as potential predictors and were extracted from the patients' medical records. The area under the receiver operating characteristic curve (AUROC), accuracy, precision, recall, and F1 score were measured to evaluate diagnostic performance.
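The metrics named above are standard and can be computed with scikit-learn; a toy example with invented labels (not study data) shows how each is derived from predictions and scores:

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Toy binary predictions, purely for illustration (hypothetical values).
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]          # observed SSC yes/no
y_pred  = [1, 0, 0, 1, 0, 1, 1, 0]          # hard class predictions
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]  # predicted probabilities

# TP=3, FP=1, FN=1, TN=3 for these vectors.
print("accuracy :", accuracy_score(y_true, y_pred))   # 6/8 = 0.75
print("precision:", precision_score(y_true, y_pred))  # 3/4 = 0.75
print("recall   :", recall_score(y_true, y_pred))     # 3/4 = 0.75
print("F1       :", f1_score(y_true, y_pred))         # 0.75
print("AUROC    :", roc_auc_score(y_true, y_score))   # 15/16 = 0.9375
```

AUROC is computed from the ranking scores, not the hard predictions, which is why it is reported separately from accuracy, precision, recall, and F1.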

RESULTS: Of 8031 eligible mothers, 3759 (46.8%) experienced SSC. The algorithms created by deep learning (AUROC: 0.81, accuracy: 0.75, precision: 0.67, recall: 0.77, F1 score: 0.73) and linear regression (AUROC: 0.80, accuracy: 0.75, precision: 0.66, recall: 0.75, F1 score: 0.71) had the highest performance in predicting SSC. Doula support, neonatal weight, gestational age, attending childbirth classes, and maternal age were the critical predictors of SSC based on the top two algorithms with superior performance.

CONCLUSIONS: Although this study found that the ML models performed well in predicting SSC, more research is needed to draw firmer conclusions about their performance.

PMID:39966775 | DOI:10.1186/s12884-025-07313-9

Categories: Literature Watch

Segmentation methods and dosimetric evaluation of 3D-printed immobilization devices in head and neck radiotherapy

Tue, 2025-02-18 06:00

BMC Cancer. 2025 Feb 18;25(1):289. doi: 10.1186/s12885-025-13669-0.

ABSTRACT

BACKGROUND: Treatment planning systems (TPS) often exclude immobilization devices from optimization and calculation, potentially leading to inaccurate dose estimates. This study employed deep learning methods to automatically segment 3D-printed head and neck immobilization devices and evaluate their dosimetric impact in head and neck VMAT.

METHODS: Computed tomography (CT) positioning images from 49 patients were used to train the Mask2Former model to segment 3D-printed headrests and MFIFs. Based on the results, four body structure sets were generated for each patient to evaluate the impact on dose distribution in volumetric modulated arc therapy (VMAT) plans: S (without immobilization devices), S_MF (with MFIFs), S_3D (with 3D-printed headrests), and S_3D+MF (with both). VMAT plans (P, P_MF, P_3D, and P_3D+MF) were created for each structure set. Dose-volume histogram (DVH) data and dose distribution of the four plans were compared to assess the impact of the 3D-printed headrests and MFIFs on target and normal tissue doses. Gafchromic EBT3 film measurements were used for patient-specific verification to validate dose calculation accuracy.

RESULTS: The Mask2Former model achieved a mean average precision (mAP) of 0.898 and 0.895, with a Dice index of 0.956 and 0.939 for the 3D-printed headrest on the validation and test sets, respectively. For the MFIF, the Dice index was 0.980 and 0.981 on the validation and test sets, respectively. Compared to P, P_MF reduced the V100% for PGTVnx, PGTVnd, PGTVrpn, PTV1, and PTV2 by 5.99%, 6.51%, 5.93%, 2.24%, and 1.86%, respectively (P ≤ 0.004). P_3D reduced the same targets by 1.78%, 2.56%, 1.75%, 1.16%, and 1.48% (P < 0.001), with a 31.3% increase in skin dose (P < 0.001). P_3D+MF reduced the V100% by 9.15%, 10.18%, 9.16%, 3.36%, and 3.28% (P < 0.001), respectively, while increasing the skin dose by 31.6% (P < 0.001). EBT3 film measurements showed that the P_3D+MF dose distribution was more aligned with actual measurements, achieving a mean gamma pass rate of 92.14% under the 3%/3 mm criteria.
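The Dice index reported above measures mask overlap, 2|A∩B| / (|A| + |B|); a minimal numpy sketch on toy binary masks (not the study's segmentations):

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 4x4 masks: predicted region covers 4 pixels, reference covers 6.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
ref  = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True

print(dice(pred, ref))  # 2*4 / (4+6) = 0.8
print(dice(ref, ref))   # perfect overlap = 1.0
```

Values near 0.95-0.98, as reported for the headrest and MFIF, indicate near-complete overlap between predicted and reference masks.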

CONCLUSIONS: This study highlights the potential of Mask2Former for automating 3D-printed headrest and MFIF segmentation, providing a novel approach to enhance the accuracy of personalized radiation therapy plans. The attenuation effects of 3D-printed headrests and MFIFs reduce V100% and Dmean for PTVs in head and neck cancer patients, while the buildup effect of 3D-printed headrests increases the skin dose (by 31.3%). Challenges such as segmentation inaccuracies for small targets and artifacts from metal fasteners in MFIFs highlight the need for model optimization and validation on larger, more diverse datasets.

PMID:39966735 | DOI:10.1186/s12885-025-13669-0

Categories: Literature Watch

Letter to the Editor: "A Deep Learning System to Predict Epithelial Dysplasia in Oral Leukoplakia"

Tue, 2025-02-18 06:00

J Dent Res. 2025 Feb 18:220345251317097. doi: 10.1177/00220345251317097. Online ahead of print.

NO ABSTRACT

PMID:39966688 | DOI:10.1177/00220345251317097

Categories: Literature Watch

Deep learning-based classification of diffusion-weighted imaging-fluid-attenuated inversion recovery mismatch

Tue, 2025-02-18 06:00

Sci Rep. 2025 Feb 18;15(1):5924. doi: 10.1038/s41598-025-90214-w.

ABSTRACT

The presence of a diffusion-weighted imaging (DWI)-fluid-attenuated inversion recovery (FLAIR) mismatch holds potential value in identifying candidates for recanalization treatment. However, visual assessment of the DWI-FLAIR mismatch is subject to limitations due to inter-rater variability, which affects accuracy and consistency. To overcome these challenges, we aimed to develop and validate a deep learning-based classifier to categorize the mismatch. We screened consecutive acute ischemic stroke patients who underwent DWI and FLAIR imaging from four stroke centers. Two centers were used for model development and internal testing (derivation cohort), while two independent centers served as external validation cohorts. We developed convolutional neural network-based classifiers for two binary classifications: DWI-FLAIR match versus non-match (Label Set I) and match versus mismatch (Label Set II). A total of 2369 patients from the derivation set and 679 patients from two external validation sets (350 and 329 patients) were included in the analysis. For Label Set I, the internal test set AUC was 0.862 (95% CI 0.841-0.884), with external validation AUCs of 0.829 (0.785-0.873) and 0.835 (0.790-0.879). Label Set II showed higher performance, with an internal test AUC of 0.934 (0.911-0.957) and external validation AUCs of 0.883 (0.829-0.938) and 0.913 (0.876-0.951). A deep learning-based classifier for the DWI-FLAIR mismatch can be used to diminish subjectivity and support targeted decision-making in the treatment of acute stroke patients.
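The AUCs above are reported with 95% confidence intervals, which are commonly obtained by percentile bootstrap over the test set. A sketch on synthetic scores; the resampling scheme and all numbers here are assumptions for illustration, not the authors' procedure:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic labels and scores standing in for a classifier's test-set output.
y = rng.integers(0, 2, size=500)
scores = y * 0.8 + rng.normal(0, 0.5, size=500)

boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))   # resample cases with replacement
    if y[idx].min() == y[idx].max():        # skip degenerate one-class draws
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC={roc_auc_score(y, scores):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```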

PMID:39966647 | DOI:10.1038/s41598-025-90214-w

Categories: Literature Watch

HDL-ACO hybrid deep learning and ant colony optimization for ocular optical coherence tomography image classification

Tue, 2025-02-18 06:00

Sci Rep. 2025 Feb 18;15(1):5888. doi: 10.1038/s41598-025-89961-7.

ABSTRACT

Optical Coherence Tomography (OCT) plays a crucial role in diagnosing ocular diseases, yet conventional CNN-based models face limitations such as high computational overhead, noise sensitivity, and data imbalance. This paper introduces HDL-ACO, a novel Hybrid Deep Learning (HDL) framework that integrates Convolutional Neural Networks with Ant Colony Optimization (ACO) to enhance classification accuracy and computational efficiency. The proposed methodology involves pre-processing the OCT dataset using discrete wavelet transform and ACO-optimized augmentation, followed by multiscale patch embedding to generate image patches of varying sizes. The hybrid deep learning model leverages ACO-based hyperparameter optimization to enhance feature selection and training efficiency. Furthermore, a Transformer-based feature extraction module integrates content-aware embeddings, multi-head self-attention, and feedforward neural networks to improve classification performance. Experimental results demonstrate that HDL-ACO outperforms state-of-the-art models, including ResNet-50, VGG-16, and XGBoost, achieving 95% training accuracy and 93% validation accuracy. The proposed framework offers a scalable, resource-efficient solution for real-time clinical OCT image classification.
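ACO-based hyperparameter optimization, as named in the abstract, can be sketched as pheromone-weighted sampling over a candidate grid with deposit and evaporation steps. The toy objective, grid, and all constants below are illustrative assumptions, not the HDL-ACO configuration:

```python
import math
import random

random.seed(0)
grid = [1e-4, 1e-3, 1e-2, 1e-1]      # candidate learning rates (hypothetical)
pher = [1.0] * len(grid)             # pheromone trail per candidate

def score(lr: float) -> float:
    """Toy validation-accuracy surrogate peaking at lr = 1e-3."""
    return 1.0 - abs(math.log10(lr) + 3) / 4

for _ in range(30):                  # ant iterations
    for _ant in range(5):            # each ant samples by pheromone weight
        i = random.choices(range(len(grid)), weights=pher)[0]
        pher[i] += score(grid[i])    # deposit proportional to solution quality
    pher = [0.9 * p for p in pher]   # evaporation keeps exploration alive

best = grid[max(range(len(grid)), key=lambda i: pher[i])]
print("selected candidate:", best)
```

The positive-feedback loop (deposit on good candidates, global evaporation) is the core ACO mechanism; in the paper it would drive choices such as augmentation and training hyperparameters rather than a single learning rate.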

PMID:39966596 | DOI:10.1038/s41598-025-89961-7

Categories: Literature Watch

Hybrid Greylag Goose deep learning with layered sparse network for women nutrition recommendation during menstrual cycle

Tue, 2025-02-18 06:00

Sci Rep. 2025 Feb 18;15(1):5959. doi: 10.1038/s41598-025-88728-4.

ABSTRACT

A complex biological process, the menstrual cycle involves physical changes and hormonal fluctuations. Traditional nutrition recommendation models often offer general guidelines but fail to address women's specific requirements during the various stages of the menstrual cycle. This paper proposes a novel Optimization Hybrid Deep Learning (OdriHDL) model to provide personalized health nutrition recommendations for women during their menstrual cycle. It involves pre-processing the data through missing-value imputation, Z-score normalization, and one-hot encoding. Next, feature extraction is accomplished using a Layered Sparse Autoencoder Network. The extracted features are then utilized by the Hybrid Attention-based Bidirectional Convolutional Greylag Goose Gated Recurrent Network (HABi-ConGRNet) for nutrient recommendation. Hyper-parameter tuning of HABi-ConGRNet is carried out using the Greylag Goose Optimization Algorithm to enhance model performance. Simulations on the collected data were implemented in Python, and several performance metrics were employed to analyze the results. The OdriHDL model demonstrates superior performance, achieving a maximum accuracy of 97.52% and a higher precision rate than existing methods such as RNN, CNN-LSTM, and attention-based GRU. The findings suggest that OdriHDL captures complex patterns between nutritional needs and menstrual symptoms and provides robust solutions to the unique physiological changes experienced by women.

PMID:39966547 | DOI:10.1038/s41598-025-88728-4

Categories: Literature Watch

A web-based artificial intelligence system for label-free virus classification and detection of cytopathic effects

Tue, 2025-02-18 06:00

Sci Rep. 2025 Feb 18;15(1):5904. doi: 10.1038/s41598-025-89639-0.

ABSTRACT

Identifying viral replication within cells demands labor-intensive isolation methods, requiring specialized personnel and additional confirmatory tests. To facilitate this process, we developed an AI-powered automated system called AI Recognition of Viral CPE (AIRVIC), specifically designed to detect and classify label-free cytopathic effects (CPEs) induced by SARS-CoV-2, BAdV-1, BPIV3, BoAHV-1, and two strains of BoGHV-4 in Vero and MDBK cell lines. AIRVIC utilizes convolutional neural networks, with ResNet50 as the primary architecture, trained on 40,369 microscopy images at various magnifications. AIRVIC demonstrated strong CPE detection, achieving 100% accuracy for the BoGHV-4 DN-599 strain in MDBK cells, the highest among tested strains. In contrast, the BoGHV-4 MOVAR 33/63 strain in Vero cells showed a lower accuracy of 87.99%, the lowest among all models tested. For virus classification, a multi-class accuracy of 87.61% was achieved for bovine viruses in MDBK cells; however, it dropped to 63.44% when the virus was identified without specifying the cell line. To the best of our knowledge, this is the first research article published in English to utilize AI for distinguishing animal virus infections in cell culture. AIRVIC's hierarchical structure highlights its adaptability to virological diagnostics, providing unbiased infectivity scoring and facilitating viral isolation and antiviral efficacy testing. Additionally, AIRVIC is accessible as a web-based platform, allowing global researchers to leverage its capabilities in viral diagnostics and beyond.

PMID:39966536 | DOI:10.1038/s41598-025-89639-0

Categories: Literature Watch

Predicting Satisfaction With Chat-Counseling at a 24/7 Chat Hotline for the Youth: Natural Language Processing Study

Tue, 2025-02-18 06:00

JMIR AI. 2025 Feb 18;4:e63701. doi: 10.2196/63701.

ABSTRACT

BACKGROUND: Chat-based counseling services are popular for the low-threshold provision of mental health support to youth. In addition, they are particularly suitable for the utilization of natural language processing (NLP) for improved provision of care.

OBJECTIVE: Consequently, this paper evaluates the feasibility of such a use case, namely, the NLP-based automated evaluation of satisfaction with the chat interaction. This preregistered approach could be used for evaluation and quality-control procedures, which are particularly relevant for those services.

METHODS: The consultations of 2609 young chatters (around 140,000 messages) and corresponding feedback were used to train and evaluate classifiers to predict whether a chat was perceived as helpful or not. On the one hand, we trained a word vectorizer in combination with an extreme gradient boosting (XGBoost) classifier, applying cross-validation and extensive hyperparameter tuning. On the other hand, we trained several transformer-based models, comparing model types, preprocessing, and over- and undersampling techniques. For both model types, we selected the best-performing approach on the training set for a final performance evaluation on the 522 users in the final test set.
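A word-vectorizer-plus-gradient-boosting pipeline of the kind described can be sketched as follows. Scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the chat snippets and labels are invented for illustration (the real data are confidential); cross-validation and hyperparameter tuning are omitted:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import matthews_corrcoef
from sklearn.pipeline import make_pipeline

# Invented toy chat snippets labeled helpful (1) / not helpful (0).
texts = ["that really helped thank you", "thanks this was useful",
         "i feel better now thank you", "this did not help at all",
         "no thanks not helpful", "that exercise was not for me"] * 10
labels = [1, 1, 1, 0, 0, 0] * 10

# Word vectorizer feeding a gradient-boosted classifier, as in the abstract.
clf = make_pipeline(TfidfVectorizer(),
                    GradientBoostingClassifier(random_state=0))
clf.fit(texts, labels)
pred = clf.predict(texts)
print("training MCC:", matthews_corrcoef(labels, pred))
```

In the study, the analogous pipeline reached an AUROC of 0.69 and an MCC of 0.25 on genuinely unseen users, a far harder setting than this separable toy.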

RESULTS: The fine-tuned XGBoost classifier achieved an area under the receiver operating characteristic score of 0.69 (P<.001), as well as a Matthews correlation coefficient of 0.25 on the previously unseen test set. The selected Longformer-based model did not outperform this baseline, scoring 0.68 (P=.69). A Shapley additive explanations explainability approach suggested that help seekers rating a consultation as helpful commonly expressed their satisfaction already within the conversation. In contrast, the rejection of offered exercises predicted perceived unhelpfulness.

CONCLUSIONS: Chat conversations include relevant information regarding the perceived quality of an interaction that can be used by NLP-based prediction approaches. However, determining whether the moderate predictive performance translates into meaningful service improvements requires randomized trials. Further, our results highlight the relevance of contrasting pretrained models with simpler baselines to avoid implementing unnecessarily complex models.

TRIAL REGISTRATION: Open Science Framework SR4Q9; https://osf.io/sr4q9.

PMID:39965198 | DOI:10.2196/63701

Categories: Literature Watch

Integrating State-Space Modeling, Parameter Estimation, Deep Learning, and Docking Techniques in Drug Repurposing: A Case Study on COVID-19 Cytokine Storm

Tue, 2025-02-18 06:00

J Am Med Inform Assoc. 2025 Feb 18:ocaf035. doi: 10.1093/jamia/ocaf035. Online ahead of print.

ABSTRACT

OBJECTIVE: This study addresses the significant challenges posed by emerging SARS-CoV-2 variants, particularly in developing diagnostics and therapeutics. Drug repurposing is investigated by identifying critical regulatory proteins impacted by the virus, providing rapid and effective therapeutic solutions for better disease management.

MATERIALS AND METHODS: We employed a comprehensive approach combining mathematical modeling and efficient parameter estimation to study the transient responses of regulatory proteins in both normal and virus-infected cells. Proportional-integral-derivative (PID) controllers were used to pinpoint specific protein targets for therapeutic intervention. Additionally, advanced deep learning models and molecular docking techniques were applied to analyze drug-target and drug-drug interactions, ensuring both efficacy and safety of the proposed treatments. This approach was applied to a case study focused on the cytokine storm in COVID-19, centering on Angiotensin-converting enzyme 2 (ACE2), which plays a key role in SARS-CoV-2 infection.

RESULTS: Our findings suggest that activating ACE2 presents a promising therapeutic strategy, whereas inhibiting the angiotensin II type 1 receptor (AT1R) appears less effective. Deep learning models, combined with molecular docking, identified Lomefloxacin and Fostamatinib as stable drugs with no significant thermodynamic interactions, suggesting their safe concurrent use in managing COVID-19-induced cytokine storms.

DISCUSSION: The results highlight the potential of ACE2 activation in mitigating lung injury and severe inflammation caused by SARS-CoV-2. This integrated approach accelerates the identification of safe and effective treatment options for emerging viral variants.

CONCLUSION: This framework provides an efficient method for identifying critical regulatory proteins and advancing drug repurposing, contributing to the rapid development of therapeutic strategies for COVID-19 and future global pandemics.

PMID:39965087 | DOI:10.1093/jamia/ocaf035

Categories: Literature Watch

Multi-agent deep reinforcement learning-based robotic arm assembly research

Tue, 2025-02-18 06:00

PLoS One. 2025 Feb 18;20(2):e0311550. doi: 10.1371/journal.pone.0311550. eCollection 2025.

ABSTRACT

Due to the complexity and variability of application scenarios and the increasing demands of assembly tasks, single-agent algorithms often face challenges in convergence and exhibit poor performance in robotic arm assembly processes. To address these issues, this paper proposes a method that employs a multi-agent reinforcement learning algorithm for the shaft-hole assembly of robotic arms, with a specific focus on square shaft-hole assemblies. First, we analyze the hole-seeking, alignment, and insertion stages of the shaft-hole assembly process, based on a comprehensive study of the interactions between shafts and holes. Next, a reward function is designed by integrating the decoupled multi-agent deep deterministic policy gradient (DMDDPG) algorithm. Finally, a simulation environment is created in Gazebo, using circular and square shaft-holes as experimental subjects to model the robotic arm's shaft-hole assembly. The simulation results indicate that the proposed algorithm, which models the first three and last three joints of the robotic arm as multi-agents, demonstrates not only enhanced adaptability but also faster and more stable convergence.
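A staged reward of the kind the abstract describes (hole-seeking, alignment, insertion) can be sketched as a simple shaped function; the stage thresholds, weights, and units below are illustrative assumptions, not the paper's values:

```python
import math

def assembly_reward(dist_xy: float, tilt: float, depth: float) -> float:
    """Toy shaped reward for shaft-hole assembly (all constants hypothetical).
    dist_xy: lateral distance to hole centre (m); tilt: axis misalignment
    (rad); depth: achieved insertion depth (m)."""
    if dist_xy > 0.01:                      # stage 1: hole-seeking
        return -dist_xy                     # penalize distance to the hole
    if tilt > math.radians(2):              # stage 2: alignment
        return 0.5 - tilt                   # reward reaching the hole, penalize tilt
    return 1.0 + 10.0 * depth               # stage 3: reward insertion progress

print(assembly_reward(0.05, 0.0, 0.0))    # seeking: -0.05
print(assembly_reward(0.005, 0.1, 0.0))   # aligning: 0.4
print(assembly_reward(0.001, 0.0, 0.02))  # inserting: 1.2
```

Shaping of this kind gives the agents a dense signal in each stage; the actual DMDDPG reward design in the paper is more involved.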

PMID:39965012 | DOI:10.1371/journal.pone.0311550

Categories: Literature Watch

Unsupervised neural network-based image stitching method for bladder endoscopy

Tue, 2025-02-18 06:00

PLoS One. 2025 Feb 18;20(2):e0311637. doi: 10.1371/journal.pone.0311637. eCollection 2025.

ABSTRACT

Bladder endoscopy enables the observation of intravesical lesion characteristics, making it an essential tool in urology. Image stitching techniques are commonly employed to expand the field of view of bladder endoscopy. Traditional image stitching methods rely on feature matching. In recent years, deep-learning techniques have garnered significant attention in the field of computer vision. However, the commonly employed supervised learning approaches often require a substantial amount of labeled data, which can be challenging to acquire, especially in the context of medical data. To address this limitation, this study proposes an unsupervised neural network-based image stitching method for bladder endoscopy, which eliminates the need for labeled datasets. The method comprises two modules: an unsupervised alignment network and an unsupervised fusion network. In the unsupervised alignment network, we employed feature convolution, regression networks, and linear transformations to align images. In the unsupervised fusion network, we achieved image fusion from features to pixels by simultaneously eliminating artifacts and enhancing the resolution. Experiments demonstrated our method's consistent stitching success rate of 98.11% and robust image stitching accuracy at various resolutions. Our method eliminates sutures and flocculent debris from cystoscopy images, presenting good image smoothness while preserving rich textural features. Moreover, our method could successfully stitch challenging scenes such as dim and blurry scenes. Our application of unsupervised deep learning methods in the field of cystoscopy image stitching was successfully validated, laying the foundation for real-time panoramic stitching of bladder endoscopic video images. This advancement provides opportunities for the future development of computer-vision-assisted diagnostic systems for bladder cavities.

PMID:39964991 | DOI:10.1371/journal.pone.0311637

Categories: Literature Watch

Toward equitable major histocompatibility complex binding predictions

Tue, 2025-02-18 06:00

Proc Natl Acad Sci U S A. 2025 Feb 25;122(8):e2405106122. doi: 10.1073/pnas.2405106122. Epub 2025 Feb 18.

ABSTRACT

Deep learning tools that predict peptide binding by major histocompatibility complex (MHC) proteins play an essential role in developing personalized cancer immunotherapies and vaccines. In order to ensure equitable health outcomes from their application, MHC binding prediction methods must work well across the vast landscape of MHC alleles observed across human populations. Here, we show that there are alarming disparities across individuals in different racial and ethnic groups in how much binding data are associated with their MHC alleles. We introduce a machine learning framework to assess the impact of this data imbalance for predicting binding for any given MHC allele, and apply it to develop a state-of-the-art MHC binding prediction model that additionally provides per-allele performance estimates. We demonstrate that our MHC binding model successfully mitigates much of the data disparities observed across racial groups. To address remaining inequities, we devise an algorithmic strategy for targeted data collection. Our work lays the foundation for further development of equitable MHC binding models for use in personalized immunotherapies.

PMID:39964728 | DOI:10.1073/pnas.2405106122

Categories: Literature Watch

Deep learning for retinal vessel segmentation: a systematic review of techniques and applications

Tue, 2025-02-18 06:00

Med Biol Eng Comput. 2025 Feb 18. doi: 10.1007/s11517-025-03324-y. Online ahead of print.

ABSTRACT

Ophthalmic diseases are a leading cause of vision loss, with retinal damage being irreversible. Retinal blood vessels are vital for diagnosing eye conditions, as even subtle changes in their structure can signal underlying issues. Retinal vessel segmentation is key for early detection and treatment of eye diseases. Traditionally, ophthalmologists manually segmented vessels, a time-consuming process based on clinical and geometric features. However, deep learning advancements have led to automated methods with impressive results. This systematic review, following PRISMA guidelines, examines 79 studies on deep learning-based retinal vessel segmentation published between 2020 and 2024 from four databases: Web of Science, Scopus, IEEE Xplore, and PubMed. The review focuses on datasets, segmentation models, evaluation metrics, and emerging trends. U-Net and Transformer architectures have shown success, with U-Net's encoder-decoder structure preserving details and Transformers capturing global context through self-attention mechanisms. Despite their effectiveness, challenges remain, suggesting future research should explore hybrid models combining U-Net, Transformers, and GANs to improve segmentation accuracy. This review offers a comprehensive look at the current landscape and future directions in retinal vessel segmentation.

PMID:39964659 | DOI:10.1007/s11517-025-03324-y

Categories: Literature Watch

TongueTransUNet: toward effective tongue contour segmentation using well-managed dataset

Tue, 2025-02-18 06:00

Med Biol Eng Comput. 2025 Feb 18. doi: 10.1007/s11517-024-03278-7. Online ahead of print.

ABSTRACT

In modern telehealth and healthcare information systems, medical image analysis is essential for understanding the context and complex structure of images drawn from large, inconsistent-quality, distributed datasets. Achieving the desired results with deep learning faces several challenges, including data size, labeling, balancing, training, and feature extraction. These challenges make AI models complex and expensive to build and difficult to understand, turning them into black boxes that in some cases produce hysteresis and irrelevant, illegal, or unethical output. In this article, lingual ultrasound is studied to extract the tongue contour in order to understand language behavior and language signature and to utilize it as biofeedback for different applications. The article introduces a design strategy that works effectively with a well-managed, dynamic-size dataset. It includes a hybrid architecture using UNet, Vision Transformer (ViT), and a contrastive loss in latent space to build a foundation model cumulatively. The process starts with building a reference representation in the embedding space, validated by human experts, against which any new input for training data is checked. UNet and ViT encoders are used to extract the input feature representations. The contrastive loss then compares each new feature embedding with the reference in the embedding space. A UNet-based decoder reconstructs the image to its original size. Before the final results are released, quality control assesses the segmented contour; if it is rejected, the algorithm requests a human expert to annotate the contour manually. The results show improved accuracy over traditional techniques, as the model retains only high-quality, relevant features.
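The validation step described, comparing a new embedding against a reference representation in latent space, can be sketched as a cosine-distance gate; the vectors and acceptance threshold below are illustrative assumptions, not the paper's:

```python
import numpy as np

def contrastive_distance(z_new: np.ndarray, z_ref: np.ndarray) -> float:
    """Cosine distance between a new embedding and the reference embedding."""
    cos = z_new @ z_ref / (np.linalg.norm(z_new) * np.linalg.norm(z_ref))
    return 1.0 - float(cos)

def accept(z_new: np.ndarray, z_ref: np.ndarray, threshold: float = 0.3) -> bool:
    """Gate candidate training data: accept only embeddings near the reference."""
    return contrastive_distance(z_new, z_ref) < threshold

ref = np.array([1.0, 0.0, 0.0])                 # expert-validated reference
print(accept(np.array([0.9, 0.1, 0.0]), ref))   # near the reference -> True
print(accept(np.array([0.0, 1.0, 0.0]), ref))   # orthogonal -> False
```

In the article's pipeline this gate would sit between the UNet/ViT encoders and the decoder, routing rejected inputs to a human expert for manual annotation.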

PMID:39964658 | DOI:10.1007/s11517-024-03278-7

Categories: Literature Watch

Exploring the potential performance of 0.2 T low-field unshielded MRI scanner using deep learning techniques

Tue, 2025-02-18 06:00

MAGMA. 2025 Feb 18. doi: 10.1007/s10334-025-01234-6. Online ahead of print.

ABSTRACT

OBJECTIVE: Using deep learning-based techniques to overcome physical limitations and explore the potential performance of 0.2 T low-field unshielded MRI in terms of imaging quality and speed.

METHODS: First, fast and high-quality unshielded imaging is achieved using active electromagnetic shielding and basic super-resolution. Then, the speed of basic super-resolution imaging is further improved by reducing the number of excitations. Next, the feasibility of using cross-field super-resolution to map low-field low-resolution images to high-field ultra-high-resolution images is analyzed. Finally, by cascading basic and cross-field super-resolution, the quality of the low-field low-resolution image is improved to the level of the high-field ultra-high-resolution image.
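The cascade of basic and cross-field super-resolution can be sketched as two composed mappings. The sketch below is purely structural: nearest-neighbour 2x upsampling stands in for the trained networks, and the 128/256/512 matrix sizes are illustrative assumptions, not the scanner's actual acquisition parameters.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling, a stand-in for a trained SR network."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def basic_sr(lowfield_img):
    # Stage 1: basic super-resolution within the low-field domain.
    return upsample2x(lowfield_img)

def cross_field_sr(img):
    # Stage 2: cross-field mapping toward the high-field
    # ultra-high-resolution domain.
    return upsample2x(img)

lowfield = np.random.rand(128, 128)   # 0.2 T low-resolution acquisition
stage1 = basic_sr(lowfield)           # intermediate 256 x 256 image
stage2 = cross_field_sr(stage1)       # 512 x 512, the target matrix size
print(stage2.shape)
```

The point of the cascade is that neither stage has to bridge the full gap alone: the first improves resolution in-domain, and the second maps the result toward the high-field appearance.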

RESULTS: Under unshielded conditions, our 0.2 T scanner can achieve image quality comparable to that of a 1.5 T scanner (acquisition matrix of 512 × 512, spatial resolution of 0.45 mm²) and a single-orientation imaging time of less than 3.3 min.

DISCUSSION: The proposed strategy overcomes the physical limitations of the hardware and rapidly acquires images close to the high-field level on a low-field unshielded MRI scanner. These findings have significant practical implications for the advances in MRI technology, supporting the shift from conventional scanners to point-of-care imaging systems.

PMID:39964601 | DOI:10.1007/s10334-025-01234-6

Categories: Literature Watch

Genetic insights into the shared molecular mechanisms of Crohn's disease and breast cancer: a Mendelian randomization and deep learning approach

Tue, 2025-02-18 06:00

Discov Oncol. 2025 Feb 18;16(1):198. doi: 10.1007/s12672-025-01978-6.

ABSTRACT

The objective of this study was to explore the potential genetic link between Crohn's disease and breast cancer, with a focus on identifying druggable genes that may have therapeutic relevance. We assessed the causal relationship between these diseases through Mendelian randomization and investigated gene-drug interactions using computational predictions. This study sought to identify common genetic pathways possibly involved in immune responses and cancer progression, providing a foundation for future targeted treatment research. The dataset comprises single nucleotide polymorphisms used as instrumental variables for Crohn's disease, analyzed to explore their possible impact on breast cancer risk. Gene ontology and pathway enrichment analyses were conducted to identify genes shared between the two conditions, supported by protein-protein interaction networks, colocalization analyses, and deep learning-based predictions of gene-drug interactions. The identified hub genes and predicted gene-drug interactions offer preliminary insights into possible therapeutic targets for breast cancer and immune-related conditions. This dataset may be valuable for researchers studying genetic links between autoimmune diseases and cancer and for those interested in the early identification of potential drug targets.
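The instrumental-variable logic behind two-sample Mendelian randomization can be made concrete with per-SNP Wald ratios combined by inverse-variance weighting. The summary statistics below are synthetic, chosen only to illustrate the arithmetic; they are not values from this study.

```python
import numpy as np

# Synthetic per-SNP summary statistics (illustrative only):
# beta_exp: SNP effect on the exposure (Crohn's disease)
# beta_out: SNP effect on the outcome (breast cancer), se_out: its SE
beta_exp = np.array([0.12, 0.08, 0.15, 0.10])
beta_out = np.array([0.024, 0.018, 0.033, 0.019])
se_out = np.array([0.010, 0.012, 0.011, 0.009])

# Per-SNP Wald ratio estimate of the causal effect.
wald = beta_out / beta_exp

# Inverse-variance weighted (IVW) combined estimate and its SE.
weights = (beta_exp / se_out) ** 2
ivw = np.sum(weights * wald) / np.sum(weights)
se_ivw = 1.0 / np.sqrt(np.sum(weights))
print(round(ivw, 3), round(se_ivw, 3))
```

Each SNP contributes one noisy estimate of the causal effect; the IVW step simply down-weights the SNPs whose outcome associations are least precisely estimated.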

PMID:39964572 | DOI:10.1007/s12672-025-01978-6

Categories: Literature Watch

Deep learning-based time-of-flight (ToF) enhancement of non-ToF PET scans for different radiotracers

Tue, 2025-02-18 06:00

Eur J Nucl Med Mol Imaging. 2025 Feb 18. doi: 10.1007/s00259-025-07119-z. Online ahead of print.

ABSTRACT

AIM: To evaluate a deep learning-based time-of-flight (DLToF) model trained to enhance the image quality of non-ToF PET images for different tracers, reconstructed using BSREM algorithm, towards ToF images.

METHODS: A 3D residual U-NET model was trained using 8 different tracers (FDG: 75% and non-FDG: 25%) from 11 sites from US, Europe and Asia. A total of 309 training and 33 validation datasets scanned on GE Discovery MI (DMI) ToF scanners were used for development of DLToF models of three strengths: low (L), medium (M) and high (H). The training and validation pairs consisted of target ToF and input non-ToF BSREM reconstructions using site-preferred regularisation parameters (beta values). The contrast and noise properties of each model were defined by adjusting the beta value of the target ToF images. A total of 60 DMI datasets, consisting of a set of 4 tracers (18F-FDG, 18F-PSMA, 68Ga-PSMA, 68Ga-DOTATATE) and 15 exams each, were collected for testing and quantitative analysis of the models based on standardized uptake value (SUV) in regions of interest (ROI) placed in lesions, lungs and liver. Each dataset includes 5 image series: ToF and non-ToF BSREM and three DLToF images. The image series (300 in total) were scored blind by 4 readers on a 5-point Likert scale based on lesion detectability, diagnostic confidence, and image noise/quality.

RESULTS: In lesion SUVmax quantification with respect to ToF BSREM, DLToF-H achieved the best results among the three models by reducing the non-ToF BSREM errors from -39% to -6% for 18F-FDG (38 lesions); from -42% to -7% for 18F-PSMA (35 lesions); from -34% to -4% for 68Ga-PSMA (23 lesions) and from -34% to -12% for 68Ga-DOTATATE (32 lesions). Quantification results in liver and lung also showed ToF-like performance of the DLToF models. Clinical reader results showed that DLToF-H improved lesion detectability on average for all four radiotracers, whereas DLToF-L achieved the highest scores for image quality (noise level). DLToF-M, however, offered the best trade-off between lesion detection and noise level and hence achieved the highest score for diagnostic confidence on average for all radiotracers.
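The percent-deviation metric behind the reported error reductions is straightforward to reproduce. The SUVmax values below are hypothetical numbers chosen only to match the reported -39% (non-ToF) and -6% (DLToF-H) 18F-FDG errors; they are not measurements from the study.

```python
def pct_error(suv_model, suv_tof):
    """Percent deviation of a model's lesion SUVmax from the ToF BSREM reference."""
    return 100.0 * (suv_model - suv_tof) / suv_tof

# Hypothetical lesion SUVmax values consistent with the reported 18F-FDG errors.
suv_tof = 10.0       # ToF BSREM reference
suv_nontof = 6.1     # non-ToF BSREM: about -39%
suv_dltof_h = 9.4    # after DLToF-H enhancement: about -6%
print(round(pct_error(suv_nontof, suv_tof)),
      round(pct_error(suv_dltof_h, suv_tof)))  # → -39 -6
```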

CONCLUSION: This study demonstrated that the DLToF models are suitable for both FDG and non-FDG tracers and could be utilized on digital BGO PET/CT scanners to provide image quality and lesion detectability close to that of ToF.

PMID:39964543 | DOI:10.1007/s00259-025-07119-z

Categories: Literature Watch

Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study

Tue, 2025-02-18 06:00

Eur J Nucl Med Mol Imaging. 2025 Feb 18. doi: 10.1007/s00259-025-07132-2. Online ahead of print.

ABSTRACT

PURPOSE: Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, the pairs of PET/CT and MR images are not always available, and their alignment is challenging if their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating CT of PET/CT to synthetic MR images (MRSYN) and performing automated quantitative regional analysis using MRSYN-derived segmentation.

METHODS: In this retrospective study, 139 subjects who underwent brain [18F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MRSYN; subsequently, a separate model was trained to segment MRSYN into 95 regions. Regional and composite standardised uptake value ratio (SUVr) was calculated in [18F]FBB PET images using the acquired ROIs. For evaluation of MRSYN, quantitative measurements including structural similarity index measure (SSIM) were employed, while for MRSYN-based segmentation evaluation, Dice similarity coefficient (DSC) was calculated. Wilcoxon signed-rank test was performed for SUVrs computed using MRSYN and ground-truth MR (MRGT).
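The Dice similarity coefficient used to evaluate the MRSYN-based segmentation is defined as 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy binary masks (illustrative stand-ins for an MRSYN-derived ROI and its ground-truth MR counterpart):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 masks: 4 predicted voxels, 3 ground-truth voxels, 3 overlapping.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice(pred, truth), 3))  # 2*3 / (4+3) → 0.857
```

The reported mean DSC of 0.733 across 95 regions would be the average of this per-region quantity over all segmented structures.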

RESULTS: Compared to MRGT, the mean SSIM of MRSYN was 0.974 ± 0.005. The MRSYN-based segmentation achieved a mean DSC of 0.733 across the 95 regions. No statistically significant difference (P > 0.05) in SUVr was found between the ROIs from MRSYN and those from MRGT, except for the precuneus.

CONCLUSION: We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MRSYN. Our proposed framework can benefit patients who have difficulty undergoing an MRI scan.

PMID:39964542 | DOI:10.1007/s00259-025-07132-2

Categories: Literature Watch
