Deep learning

A ViTUNeT-based model using YOLOv8 for efficient LVNC diagnosis and automatic cleaning of dataset

Tue, 2025-06-03 06:00

J Integr Bioinform. 2025 Jun 4. doi: 10.1515/jib-2024-0048. Online ahead of print.

ABSTRACT

Left ventricular non-compaction is a cardiac condition marked by excessive trabeculae in the left ventricle's inner wall. Although various methods exist to measure these structures, the medical community still lacks consensus on the best approach. Previously, we developed DL-LVTQ, a tool based on a UNet neural network, to quantify trabeculae in this region. In this study, we expand the dataset to include new patients with Titin cardiomyopathy and healthy individuals with fewer trabeculae, requiring retraining of our models to enhance predictions. We also propose ViTUNeT, a neural network architecture combining U-Net and Vision Transformers to segment the left ventricle more accurately. Additionally, we train a YOLOv8 model to detect the ventricle and integrate it with the ViTUNeT model to focus on the region of interest. Results from ViTUNeT and YOLOv8 are similar to those of DL-LVTQ, suggesting that dataset quality limits further accuracy improvements. To test this, we analyze MRI images and develop a method using two YOLOv8 models to identify and remove problematic images, leading to better results. Combining YOLOv8 with deep learning networks offers a promising approach for improving cardiac image analysis and segmentation.
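As a rough illustration of the detect-then-segment idea described above, the sketch below runs a YOLOv8 detector to localize the left ventricle and passes the cropped region to a separate segmentation network. The weight files, the choice of the first detection, and the `vitunet` model object are placeholders for illustration, not the authors' released pipeline.

```python
# Minimal sketch of a detect-then-segment pipeline: a YOLOv8 detector proposes
# the left-ventricle bounding box, and the crop is passed to a segmentation net.
import torch
from ultralytics import YOLO

detector = YOLO("lv_detector.pt")          # hypothetical YOLOv8 weights
vitunet = torch.load("vitunet.pt")         # hypothetical segmentation model
vitunet.eval()

def segment_left_ventricle(image_path):
    results = detector(image_path)                        # run detection
    boxes = results[0].boxes.xyxy.cpu().numpy()           # (N, 4) xyxy boxes
    if len(boxes) == 0:
        return None                                       # no ventricle found
    x1, y1, x2, y2 = boxes[0].astype(int)                 # take top detection
    image = results[0].orig_img                           # original image array
    crop = image[y1:y2, x1:x2]                            # region of interest
    tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255
    with torch.no_grad():
        mask = vitunet(tensor)                            # per-pixel trabeculae mask
    return mask
```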

PMID:40460443 | DOI:10.1515/jib-2024-0048

Categories: Literature Watch

Upper Airway Volume Predicts Brain Structure and Cognition in Adolescents

Tue, 2025-06-03 06:00

Am J Respir Crit Care Med. 2025 Jun 3. Online ahead of print.

ABSTRACT

RATIONALE: One in ten children experiences sleep-disordered breathing (SDB). Untreated SDB is associated with poor cognition, but the underlying mechanisms are less understood.

OBJECTIVE: We assessed the relationship between magnetic resonance imaging (MRI)-derived upper airway volume and children's cognition and regional cortical gray matter volumes.

METHODS: We used five-year data from the Adolescent Brain Cognitive Development study (n=11,875 children, 9-10 years at baseline). Upper airway volumes were derived using a deep learning model applied to 5,552,640 brain MRI slices. The primary outcome was the Total Cognition Composite score from the National Institutes of Health Toolbox (NIH-TB). Secondary outcomes included other NIH-TB measures and cortical gray matter volumes.

RESULTS: The habitual snoring group had significantly smaller airway volumes than non-snorers (mean difference=1.2 cm3; 95% CI, 1.0-1.4 cm3; P<0.001). Deep learning-derived airway volume predicted the Total Cognition Composite score (estimated mean difference=3.68 points; 95% CI, 2.41-4.96; P<0.001) per one-unit increase in the natural log of airway volume (~2.7-fold raw volume increase). This airway volume increase was also associated with an average 0.02 cm3 increase in right temporal pole volume (95% CI, 0.01-0.02 cm3; P<0.001). Similarly, airway volume predicted most NIH-TB domain scores and multiple frontal and temporal gray matter volumes. These brain volumes mediated the relationship between airway volume and cognition.
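The effect size above is scaled per one-unit increase in the natural log of airway volume; the quick check below (an illustrative aside, not from the paper) confirms that one log-unit corresponds to an e-fold, roughly 2.7-fold, change in raw volume.

```python
import math

# One unit on the natural-log scale multiplies the raw airway volume by e.
fold_change = math.exp(1.0)
print(round(fold_change, 2))  # 2.72, i.e. the ~2.7-fold increase cited above
```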

CONCLUSIONS: We demonstrate a novel application of deep learning-based airway segmentation in a large pediatric cohort. Upper airway volume is a potential biomarker for cognitive outcomes in pediatric SDB, offers insights into neurobiological mechanisms, and informs future studies on risk stratification. This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).

PMID:40460372

Categories: Literature Watch

A dynamic early-warning method for bridge structural safety based on data reconstruction and depth prediction

Tue, 2025-06-03 06:00

PLoS One. 2025 Jun 3;20(6):e0324816. doi: 10.1371/journal.pone.0324816. eCollection 2025.

ABSTRACT

The structural response of bridges involves a complex interplay of various coupled effects, rendering the identification of long-term variation trends inherently challenging. Consequently, effectively detecting and issuing alerts for abnormal monitoring data from bridge structures under complex coupled loads remains difficult. To address this issue, this study proposes a dynamic early-warning method for bridge structural safety, leveraging data reconstruction and deep learning-based prediction. First, the singular value decomposition (SVD) algorithm is employed to decompose and reconstruct the monitoring data based on the contribution rate of influencing factors, thereby decoupling the data from various coupled effects. Second, a deep learning architecture utilizing a long short-term memory (LSTM) network is applied to establish a prediction model for each group of decomposed monitoring data, significantly enhancing prediction accuracy. Building on this foundation, the dynamic early-warning system for bridge structural safety is realized by integrating anomaly diagnosis theory with both predicted and measured data. A validation case using measured strain data demonstrates that the proposed method accurately predicts bridge strain data and calculates real-time adaptive thresholds, enabling real-time detection of anomalous monitoring data.
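A minimal sketch of the two ingredients named in the abstract, assuming a sensors-by-time strain matrix: an SVD-based low-rank reconstruction driven by a contribution-rate threshold, and an LSTM one-step-ahead predictor for a decomposed series. The rank threshold, window handling, and network sizes are illustrative choices, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

# --- Step 1: SVD-based decomposition/reconstruction of monitoring data ---
# X: (sensors x time) matrix of strain measurements; keep components whose
# cumulative singular-value "contribution rate" reaches a threshold.
def svd_reconstruct(X, contribution=0.9):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, contribution)) + 1
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # low-rank reconstruction

# --- Step 2: LSTM one-step-ahead predictor for a decomposed series ---
class StrainLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next value

# Residuals between predicted and measured strain can then be compared with an
# adaptive threshold (e.g. mean +/- 3 sigma over a sliding window) to flag anomalies.
```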

PMID:40460166 | DOI:10.1371/journal.pone.0324816

Categories: Literature Watch

Geometric Deep Learning for Multimodal Data in CKD

Tue, 2025-06-03 06:00

J Am Soc Nephrol. 2025 Jun 3. doi: 10.1681/ASN.0000000778. Online ahead of print.

NO ABSTRACT

PMID:40459949 | DOI:10.1681/ASN.0000000778

Categories: Literature Watch

Deep learning model for differentiating thyroid eye disease and orbital myositis on computed tomography (CT) imaging

Tue, 2025-06-03 06:00

Orbit. 2025 Jun 3:1-9. doi: 10.1080/01676830.2025.2510587. Online ahead of print.

ABSTRACT

PURPOSE: To develop a deep learning model using orbital computed tomography (CT) imaging to accurately distinguish thyroid eye disease (TED) and orbital myositis, two conditions with overlapping clinical presentations.

METHODS: Retrospective, single-center cohort study spanning 12 years including normal controls, TED, and orbital myositis patients with orbital imaging and examination by an oculoplastic surgeon. A deep learning model employing a Visual Geometry Group-16 network was trained on various binary combinations of TED, orbital myositis, and controls using single slices of coronal orbital CT images.
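A hedged sketch of the kind of VGG-16 setup the methods describe: an ImageNet-pretrained backbone with its final layer replaced by a 2-way head for single coronal CT slices. The pretrained weights, input size, and optimizer are generic assumptions rather than the authors' training recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained VGG-16 and replace the final classifier layer
# with a 2-way head (e.g. TED vs. orbital myositis).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (batch, 3, 224, 224) CT slices replicated to 3 channels
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```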

RESULTS: A total of 1628 images from 192 patients (110 TED, 51 orbital myositis, 31 controls) were included. The primary model comparing orbital myositis and TED had accuracy of 98.4% and area under the receiver operating characteristic curve (AUC) of 0.999. In detecting orbital myositis, it had a sensitivity, specificity, and F1 score of 0.964, 0.994, and 0.984, respectively.

CONCLUSIONS: Deep learning models can differentiate TED and orbital myositis based on a single, coronal orbital CT image with high accuracy. Their ability to distinguish these conditions based not only on extraocular muscle enlargement but also other salient features suggests potential applications in diagnostics and treatment beyond these conditions.

PMID:40459922 | DOI:10.1080/01676830.2025.2510587

Categories: Literature Watch

Knowledge enhanced protein subcellular localization prediction from 3D fluorescence microscope images

Tue, 2025-06-03 06:00

Bioinformatics. 2025 Jun 3:btaf331. doi: 10.1093/bioinformatics/btaf331. Online ahead of print.

ABSTRACT

MOTIVATION: Pinpointing the subcellular location of proteins is essential for studying protein function and related diseases. Advances in spatial proteomics have shown that automatic recognition of protein subcellular localization from images could highly facilitate protein translocation analysis and biomarker discovery, but existing machine learning works have been mostly limited to processing 2D images. By contrast, 3D images have higher spatial resolution and allow researchers to observe cellular structures in their natural context, but currently there are only a few studies of 3D image processing for protein distribution analysis due to the lack of data and complexity of modeling.

RESULTS: We develop a knowledge-enhanced protein subcellular localization model, KE3DLoc, which could recognize distribution patterns in 3D fluorescence microscope images using deep learning methods. The model includes an image feature extraction module that incorporates information from 3D and 2D projected cells, and applies an asymmetric loss and confidence weights to address data imbalance and weak cell annotation issues. In addition, considering that the biological knowledge in the Gene Ontology (GO) database can provide valuable support for protein location understanding, the KE3DLoc model incorporates a novel knowledge enhancement module that optimizes the protein representation by related knowledge graphs derived from the GO. Since the image module and the knowledge module calculate features at different levels, KE3DLoc designs protein ID aggregation to enhance the consistency of protein features across different cells. Experimental results on three public datasets demonstrate that KE3DLoc significantly outperforms existing methods and provides valuable insights for spatial proteomics research.
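The abstract names an asymmetric loss for handling label imbalance; the sketch below shows the commonly used asymmetric focal-style formulation for multi-label outputs, which may differ in detail from KE3DLoc's exact loss. The gamma and clipping values are illustrative.

```python
import torch

def asymmetric_multilabel_loss(logits, targets, gamma_pos=1.0, gamma_neg=4.0, clip=0.05):
    """Asymmetric focal-style loss: negatives are down-weighted more aggressively
    than positives, which helps when most labels are absent for most samples."""
    p = torch.sigmoid(logits)
    p_neg = (p - clip).clamp(min=0)   # probability shifting for easy negatives
    loss_pos = targets * (1 - p) ** gamma_pos * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p_neg ** gamma_neg * torch.log((1 - p_neg).clamp(min=1e-8))
    return -(loss_pos + loss_neg).mean()
```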

AVAILABILITY: All datasets and codes used in this study are available at GitHub: https://github.com/PRBioimages/KE3DLoc.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:40459878 | DOI:10.1093/bioinformatics/btaf331

Categories: Literature Watch

Effect of contrast enhancement on diagnosis of interstitial lung abnormality in automatic quantitative CT measurement

Tue, 2025-06-03 06:00

Eur Radiol. 2025 Jun 3. doi: 10.1007/s00330-025-11715-w. Online ahead of print.

ABSTRACT

OBJECTIVE: To investigate the effect of contrast enhancement on the diagnosis of interstitial lung abnormalities (ILA) in automatic quantitative CT measurement in patients with paired pre- and post-contrast scans.

MATERIALS AND METHODS: Patients who underwent chest CT for thoracic surgery between April 2017 and December 2020 were retrospectively analyzed. ILA quantification was performed using deep learning-based automated software. Cases were categorized as ILA or non-ILA according to the Fleischner Society's definition, based on the quantification results or radiologist assessment (reference standard). Measurement variability, agreement, and diagnostic performance between the pre- and post-contrast scans were evaluated.

RESULTS: In 1134 included patients, post-contrast scans quantified a slightly larger volume of nonfibrotic ILA (mean difference: -0.2%), due to increased ground-glass opacity and reticulation volumes (-0.2% and -0.1%), whereas the fibrotic ILA volume remained unchanged (0.0%). ILA was diagnosed in 15 (1.3%), 22 (1.9%), and 40 (3.5%) patients by pre- and post-contrast scans and radiologists, respectively. The agreement between the pre- and post-contrast scans was substantial (κ = 0.75), but both pre-contrast (κ = 0.46) and post-contrast (κ = 0.54) scans demonstrated moderate agreement with the radiologist. The sensitivity for ILA (32.5% vs. 42.5%, p = 0.221) and specificity for non-ILA (99.8% vs. 99.5%, p = 0.248) were comparable between pre- and post-contrast scans. Radiologist's reclassification for equivocal ILA due to unilateral abnormalities increased the sensitivity for ILA (67.5% and 75.0%, respectively) in both pre- and post-contrast scans.
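For readers reproducing the reported statistics, the sketch below computes Cohen's kappa between paired scan classifications and sensitivity/specificity against the radiologist reference from binary ILA calls; the arrays are placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Binary ILA calls (1 = ILA, 0 = non-ILA); placeholders standing in for the
# pre-contrast, post-contrast, and radiologist (reference) classifications.
pre = np.array([0, 0, 1, 0, 1, 0])
post = np.array([0, 1, 1, 0, 1, 0])
reference = np.array([0, 1, 1, 0, 0, 0])

kappa_pre_post = cohen_kappa_score(pre, post)      # agreement between scans

tn, fp, fn, tp = confusion_matrix(reference, post).ravel()
sensitivity = tp / (tp + fn)                       # detected ILA among true ILA
specificity = tn / (tn + fp)                       # correct non-ILA calls
print(kappa_pre_post, sensitivity, specificity)
```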

CONCLUSION: Applying automated quantification on post-contrast scans appears to be acceptable in terms of agreement and diagnostic performance; however, radiologists may need to reclassify equivocal ILA to improve sensitivity.

KEY POINTS: Question The effect of contrast enhancement on the automated quantification of interstitial lung abnormality (ILA) remains unknown. Findings Automated quantification measured slightly larger ground-glass opacity and reticulation volumes on post-contrast scans than on pre-contrast scans; however, contrast enhancement did not affect the sensitivity for interstitial lung abnormality. Clinical relevance Applying automated quantification on post-contrast scans appears to be acceptable in terms of agreement and diagnostic performance.

PMID:40459739 | DOI:10.1007/s00330-025-11715-w

Categories: Literature Watch

Deep learning-based automatic segmentation of arterial vessel walls and plaques in MR vessel wall images for quantitative assessment

Tue, 2025-06-03 06:00

Eur Radiol. 2025 Jun 3. doi: 10.1007/s00330-025-11697-9. Online ahead of print.

ABSTRACT

OBJECTIVES: To develop and validate a deep-learning-based automatic method for vessel walls and atherosclerotic plaques segmentation for quantitative evaluation in MR vessel wall images.

MATERIALS AND METHODS: A total of 193 patients (107 patients for training and validation, 39 patients for internal test, 47 patients for external test) with atherosclerotic plaque from five centers underwent T1-weighted MRI scans and were included in the dataset. The first step of the proposed method was constructing a purely learning-based convolutional neural network (CNN) named Vessel-SegNet to segment the lumen and the vessel wall. The second step was to use vessel wall priors (including a manual prior and a Tversky-loss-based automatic prior) to improve plaque segmentation, exploiting the morphological similarity between the vessel wall and the plaque. The Dice similarity coefficient (DSC), intraclass correlation coefficient (ICC), etc., were used to evaluate the similarity, agreement, and correlations.
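A minimal sketch of a Tversky loss of the kind used here to build the automatic vessel wall prior; the alpha/beta weighting shown is a common illustrative choice (penalizing false negatives more), not necessarily the paper's setting.

```python
import torch

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary segmentation. With beta > alpha, false negatives
    are penalized more heavily than false positives, which favors recall of
    thin structures such as the vessel wall."""
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index
```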

RESULTS: Most of the DSCs for lumen and vessel wall segmentation were above 90%. The introduction of vessel wall priors can increase the DSC for plaque segmentation by over 10%, reaching 88.45%. Compared to dice-loss-based vessel wall priors, the Tversky-loss-based priors can further improve DSC by nearly 3%, reaching 82.84%. Most of the ICC values between the Vessel-SegNet and manual methods in the 6 quantitative measurements are greater than 85% (p-value < 0.001).

CONCLUSION: The proposed CNN-based segmentation model can quickly and accurately segment vessel walls and plaques for quantitative evaluation. Due to the lack of testing with other equipment, populations, and anatomical studies, the reliability of the research results still requires further exploration.

KEY POINTS: Question How can the accuracy and efficiency of vessel component segmentation for quantification, including the lumen, vessel wall, and plaque, be improved? Findings Improved CNN models, manual/automatic vessel wall priors, and Tversky loss can improve the performance of semi-automatic/automatic vessel components segmentation for quantification. Clinical relevance Manual segmentation of vessel components is a time-consuming yet important process. Rapid and accurate segmentation of the lumen, vessel walls, and plaques for quantification assessment helps patients obtain more accurate, efficient, and timely stroke risk assessments and clinical recommendations.

PMID:40459736 | DOI:10.1007/s00330-025-11697-9

Categories: Literature Watch

AI-Driven Biomarker Discovery and Personalized Allergy Treatment: Utilizing Machine Learning and NGS

Tue, 2025-06-03 06:00

Curr Allergy Asthma Rep. 2025 Jun 3;25(1):27. doi: 10.1007/s11882-025-01207-8.

ABSTRACT

PURPOSE OF REVIEW: This review explores the transformative potential of artificial intelligence (AI) and next-generation sequencing (NGS) in allergy diagnostics and treatment. It focuses on leveraging these technologies to enhance precision in biomarker discovery, patient stratification, and personalized management strategies for allergic diseases.

RECENT FINDINGS: AI-driven algorithms, particularly machine learning and deep learning, have enabled the identification of complex molecular patterns and predictive markers in allergies, such as IgE levels and cytokine profiles. Integration with NGS techniques, including single-cell RNA sequencing, has uncovered unique immune response signatures, providing insights into molecular mechanisms driving allergic reactions. These innovations have advanced diagnostic accuracy, treatment personalization, and real-time monitoring capabilities, especially in allergen immunotherapy. Combining AI and NGS technologies represents a paradigm shift in allergy research and clinical practice. These advancements facilitate precision diagnostics and personalized treatments, ensuring safer and more effective interventions tailored to individual patient profiles. Despite data integration and clinical implementation challenges, these technologies promise improved outcomes and quality of life for allergy sufferers.

PMID:40459653 | DOI:10.1007/s11882-025-01207-8

Categories: Literature Watch

Comparison of AI-Automated and Manual Subfoveal Choroidal Thickness Measurements in an Elderly Population Using Optical Coherence Tomography

Tue, 2025-06-03 06:00

Transl Vis Sci Technol. 2025 Jun 2;14(6):9. doi: 10.1167/tvst.14.6.9.

ABSTRACT

PURPOSE: To evaluate the agreement and correlation between manual and automated measurements of subfoveal choroidal thickness (SFCT) using enhanced depth imaging spectral-domain optical coherence tomography in an elderly population and to investigate the factors influencing measurement discrepancies.

METHODS: Based on the Beijing Eye Study, SFCT was measured manually using Heidelberg Eye Explorer software and automatically via a TransUNet-based deep learning model. Agreement between manual and automated SFCT measurements was assessed using Bland-Altman plots, intraclass correlation coefficients (ICC), and Pearson correlation coefficients.
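A small sketch of the agreement statistics named above, Pearson correlation plus Bland-Altman bias and limits of agreement, computed on placeholder paired measurements; ICC is omitted here for brevity.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder paired measurements (micrometers): manual vs. automated SFCT.
manual = np.array([210.0, 305.0, 180.0, 260.0, 330.0])
automated = np.array([220.0, 310.0, 190.0, 262.0, 345.0])

r, p_value = pearsonr(manual, automated)       # correlation between methods

diff = manual - automated                      # Bland-Altman statistics
bias = diff.mean()                             # fixed bias (manual minus automated)
loa = 1.96 * diff.std(ddof=1)                  # half-width of limits of agreement
print(f"r={r:.3f}, bias={bias:.1f} um, LoA=({bias - loa:.1f}, {bias + loa:.1f}) um")
```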

RESULTS: Among 2896 participants, automated and manual measurements of SFCT demonstrated strong correlation (ICC = 0.971; 95% confidence interval [CI], 0.969-0.973; Pearson = 0.974, P < 0.001). Subgroup analyses showed similarly high correlation across participants aged ≥60 years (ICC = 0.954, Pearson = 0.974), aged <60 years (ICC = 0.971; Pearson = 0.953), with axial length ≥23 mm (ICC = 0.969; Pearson = 0.974), and axial length <23 mm (ICC = 0.959; Pearson = 0.963). Participants with SFCT <300 µm showed higher consistency (ICC = 0.942; Pearson = 0.944) compared to those with SFCT ≥300 µm (ICC = 0.867; Pearson = 0.868). Significant fixed and proportional biases were observed in all subgroups (P < 0.001), with manual measurements consistently lower than automated values.

CONCLUSIONS: Despite the presence of systematic biases, automated SFCT measurements showed excellent consistency and strong correlation with manual measurements across a large elderly population. These findings support the potential utility of AI-assisted SFCT measurement in clinical settings.

TRANSLATIONAL RELEVANCE: This study validates AI-based SFCT measurement in a large elderly cohort, enhancing diagnostic accuracy and bridging research with practice.

PMID:40459523 | DOI:10.1167/tvst.14.6.9

Categories: Literature Watch

Pollen morphology, deep learning, phylogenetics, and the evolution of environmental adaptations in Podocarpus

Tue, 2025-06-03 06:00

New Phytol. 2025 Jun 3. doi: 10.1111/nph.70250. Online ahead of print.

ABSTRACT

Podocarpus pollen morphology is shaped by both phylogenetic history and the environment. We analyzed the relationship between pollen traits quantified using deep learning and environmental factors within a comparative phylogenetic framework. We investigated the influence of mean annual temperature, annual precipitation, altitude, and solar radiation in driving morphological change. We used trait-environment regression models to infer the temperature tolerances of 31 Neotropical Podocarpidites fossils. Ancestral state reconstructions were applied to the Podocarpus phylogeny with and without the inclusion of fossils. Our results show that temperature and solar radiation influence pollen morphology, with thermal stress driving an increase in pollen size and higher ultraviolet B radiation selecting for thicker corpus walls. Fossil temperature tolerances inferred from trait-environment models aligned with paleotemperature estimates from global paleoclimate models. Incorporating fossils into ancestral state reconstructions revealed that early ancestral Podocarpus lineages were likely adapted to warm climates, with cool-temperature tolerance evolving independently in high-latitude and high-altitude species. Our results highlight the importance of deep learning-derived features in advancing our understanding of plant environmental adaptations over evolutionary timescales. Deep learning allows us to quantify subtle interspecific differences in pollen morphology and link these traits to environmental preferences through statistical and phylogenetic analyses.

PMID:40458972 | DOI:10.1111/nph.70250

Categories: Literature Watch

Automated Classification of Cervical Spinal Stenosis using Deep Learning on CT Scans

Tue, 2025-06-03 06:00

Spine (Phila Pa 1976). 2025 Jun 3. doi: 10.1097/BRS.0000000000005414. Online ahead of print.

ABSTRACT

STUDY DESIGN: Retrospective study.

OBJECTIVE: To develop and validate a computed tomography-based deep learning(DL) model for diagnosing cervical spinal stenosis(CSS).

SUMMARY OF BACKGROUND DATA: Although magnetic resonance imaging (MRI) is widely used for diagnosing CSS, its inherent limitations, including prolonged scanning time, limited availability in resource-constrained settings, and contraindications for patients with metallic implants, make computed tomography (CT) a critical alternative in specific clinical scenarios. The development of CT-based DL models for CSS detection holds promise in transcending the diagnostic efficacy limitations of conventional CT imaging, thereby serving as an intelligent auxiliary tool to optimize healthcare resource allocation.

METHODS: Paired CT/MRI images were collected. CT images were divided into training, validation, and test sets in an 8:1:1 ratio. The two-stage model architecture comprised (1) a Faster R-CNN-based detection model for localization, annotation, and extraction of regions of interest (ROI), and (2) a stenosis classification model selected by comparing 16 convolutional neural network (CNN) architectures. The evaluation metrics included accuracy, F1-score, and Cohen's κ coefficient, with comparisons made against diagnostic results from physicians with varying years of experience.
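A hedged sketch of the two-stage structure described above: a torchvision Faster R-CNN proposes the region of interest on a CT slice, and the crop is passed to a CNN classifier (a ResNet-18 stand-in for the 16 candidates compared in the study). The pretrained weights, class count, and resizing are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

# Stage 1: detector proposing the region of interest on a CT slice.
detector = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
detector.eval()

# Stage 2: stenosis-grade classifier on the cropped ROI (4 classes as a placeholder).
classifier = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
classifier.fc = nn.Linear(classifier.fc.in_features, 4)
classifier.eval()

def grade_stenosis(ct_slice):
    # ct_slice: (3, H, W) float tensor scaled to [0, 1]
    with torch.no_grad():
        detections = detector([ct_slice])[0]
        if detections["boxes"].numel() == 0:
            return None
        x1, y1, x2, y2 = detections["boxes"][0].round().int().tolist()
        roi = ct_slice[:, y1:y2, x1:x2].unsqueeze(0)
        roi = torch.nn.functional.interpolate(roi, size=(224, 224), mode="bilinear")
        return classifier(roi).argmax(dim=1).item()
```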

RESULTS: In the multiclass classification task, four high-performing models (DL1-b0, DL2-121, DL3-101, and DL4-26d) achieved accuracies of 88.74%, 89.40%, 89.40%, and 88.08%, respectively. All models demonstrated >80% consistency with senior physicians and >70% consistency with junior physicians. In the binary classification task, the models achieved accuracies of 94.70%, 96.03%, 96.03%, and 94.70%, respectively. All four models demonstrated consistency rates slightly below 90% with junior physicians. However, when compared with senior physicians, three models (excluding DL4-26d) exhibited consistency rates exceeding 90%.

CONCLUSIONS: The DL model developed in this study demonstrated high accuracy in CT image analysis of CSS, with a diagnostic performance comparable to that of senior physicians.

PMID:40458958 | DOI:10.1097/BRS.0000000000005414

Categories: Literature Watch

Deep Learning Pipeline for Automated Assessment of Distances Between Tonsillar Tumors and the Internal Carotid Artery

Tue, 2025-06-03 06:00

Head Neck. 2025 Jun 3. doi: 10.1002/hed.28200. Online ahead of print.

ABSTRACT

BACKGROUND: Evaluating the minimum distance (dTICA) between the internal carotid artery (ICA) and tonsillar tumors (TT) on imaging is essential for preoperative planning; we propose a tool to automatically extract dTICA.

METHODS: CT scans of 96 patients with TT were selected from The Cancer Imaging Archive. nnU-Net, a deep learning framework, was implemented to automatically segment both the TT and ICA from these scans. Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were used to evaluate the performance of the nnU-Net. Thereafter, an automated tool was built to calculate the magnitude of dTICA from these segmentations.
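A minimal sketch, under stated assumptions, of how dTICA can be computed from two binary 3-D masks: the minimum voxel-to-voxel Euclidean distance with voxel spacing applied, using a KD-tree for the nearest-neighbour search. The exact surface definition used by the authors' tool may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_distance_mm(tumor_mask, ica_mask, spacing=(1.0, 1.0, 1.0)):
    """Minimum Euclidean distance (mm) between two binary 3-D masks.
    spacing is the (z, y, x) voxel size in millimetres."""
    spacing = np.asarray(spacing, dtype=float)
    tumor_pts = np.argwhere(tumor_mask) * spacing     # voxel indices -> mm coordinates
    ica_pts = np.argwhere(ica_mask) * spacing
    if len(tumor_pts) == 0 or len(ica_pts) == 0:
        return np.nan                                 # one structure missing
    tree = cKDTree(ica_pts)                           # nearest-neighbour lookup on the ICA
    dists, _ = tree.query(tumor_pts, k=1)
    return float(dists.min())
```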

RESULTS: The average DSC and AHD were 0.67 and 2.44 mm for the TT, and 0.83 and 0.49 mm for the ICA, respectively. The mean dTICA was 6.66 mm and varied significantly by tumor T stage (p = 0.00456).

CONCLUSION: The proposed pipeline can accurately and automatically capture dTICA, potentially assisting clinicians in preoperative evaluation.

PMID:40458868 | DOI:10.1002/hed.28200

Categories: Literature Watch

Artificial intelligence for detecting traumatic intracranial haemorrhage with CT: A workflow-oriented implementation

Tue, 2025-06-03 06:00

Neuroradiol J. 2025 Jun 3:19714009251346477. doi: 10.1177/19714009251346477. Online ahead of print.

ABSTRACT

The objective of this study was to assess the performance of an artificial intelligence (AI) algorithm in detecting intracranial haemorrhages (ICHs) on non-contrast CT scans (NCCT). Another objective was to gauge the department's acceptance of the algorithm. Surveys conducted at three and nine months post-implementation revealed that radiologists' acceptance of the AI tool increased as its performance improved. However, a significant portion still preferred an additional physician given comparable cost. Our findings emphasize the importance of careful software implementation into a robust IT architecture.

PMID:40458857 | DOI:10.1177/19714009251346477

Categories: Literature Watch

A Multihead Attention Deep Learning Algorithm to Detect Amblyopia Using Fixation Eye Movements

Tue, 2025-06-03 06:00

Ophthalmol Sci. 2025 Mar 27;5(5):100775. doi: 10.1016/j.xops.2025.100775. eCollection 2025 Sep-Oct.

ABSTRACT

OBJECTIVE: To develop an attention-based deep learning (DL) model based on eye movements acquired during a simple visual fixation task to detect amblyopic subjects across different types and severity from controls.

DESIGN: An observational study.

SUBJECTS: We recruited 40 controls and 95 amblyopic subjects (anisometropic = 32; strabismic = 29; and mixed = 34) at the Cleveland Clinic from 2020 to 2024.

METHODS: Binocular horizontal and vertical eye positions were recorded using infrared video-oculography during binocular and monocular viewing. Amblyopic subjects were classified as those without nystagmus (n = 42) and those with nystagmus with fusion maldevelopment nystagmus (FMN) or nystagmus that did not meet the criteria of FMN or infantile nystagmus syndrome (n = 53). A multihead attention-based transformer encoder model was trained and cross-validated on deblinked and denoised eye position data acquired during fixation.
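A hedged sketch of a multihead-attention encoder over fixation eye-position traces, in the spirit of the model described above; the four input channels (horizontal/vertical positions of both eyes), model width, pooling, and two-class head are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class FixationTransformer(nn.Module):
    """Multihead-attention encoder over eye-position samples.
    Input: (batch, time, channels) with channels = horizontal/vertical
    positions of both eyes; output: control-vs-amblyopia logits."""
    def __init__(self, channels=4, d_model=64, nhead=4, layers=2, classes=2):
        super().__init__()
        self.embed = nn.Linear(channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(d_model, classes)

    def forward(self, x):
        h = self.encoder(self.embed(x))      # (batch, time, d_model)
        return self.head(h.mean(dim=1))      # mean-pool over time, then classify

model = FixationTransformer()
logits = model(torch.randn(8, 500, 4))       # 8 traces, 500 samples, 4 channels
```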

MAIN OUTCOME MEASURES: Detection of amblyopia across types (anisometropia, strabismus, or mixed) and severity (treated, mild, moderate, or severe) and subjects with and without nystagmus was evaluated with the area under the receiver operating characteristic curve, area under the precision-recall curve (AUPRC), and accuracy.

RESULTS: Areas under the receiver operating characteristic curve for classification of subjects per type were 0.70 ± 0.16 for anisometropia (AUPRC: 0.72 ± 0.08), 0.78 ± 0.15 for strabismus (AUPRC: 0.81 ± 0.16), and 0.80 ± 0.13 for mixed (AUPRC: 0.82 ± 0.15). Areas under the receiver operating characteristic curve for classification of amblyopia subjects per severity were 0.77 ± 0.12 for treated/mild (AUPRC: 0.76 ± 0.18), and 0.78 ± 0.09 for moderate/severe (AUPRC: 0.79 ± 0.16). The area under the receiver operating characteristic curve for classification of subjects with nystagmus was 0.83 ± 0.11 (AUPRC: 0.81 ± 0.18), and the area under the receiver operating characteristic curve for those without nystagmus was 0.75 ± 0.15 (AUPRC: 0.76 ± 0.09).

CONCLUSIONS: The multihead transformer DL model classified amblyopia subjects regardless of the type, severity, and presence of nystagmus. The model's ability to identify amblyopia using eye movements alone demonstrates the feasibility of using eye-tracking data in clinical settings to perform objective classifications and complement traditional amblyopia evaluations.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

PMID:40458668 | PMC:PMC12127649 | DOI:10.1016/j.xops.2025.100775

Categories: Literature Watch

Revolutionizing precision oncology: the role of artificial intelligence in personalized pediatric cancer care

Tue, 2025-06-03 06:00

Front Med (Lausanne). 2025 May 19;12:1555893. doi: 10.3389/fmed.2025.1555893. eCollection 2025.

ABSTRACT

Artificial intelligence (AI) has recently garnered significant public attention. Among the various fields where AI can be applied, medicine stands out as one with immense potential. In particular, AI is transforming precision oncology by providing innovative approaches to customize cancer treatments for individual patients. This article examines the latest developments in AI-powered tools designed to improve cancer diagnosis accuracy and predict treatment outcomes. The integration of AI into precision oncology is transforming cancer care by enabling more personalized and effective treatments, minimizing treatment-related toxicities, and enhancing patient survival rates. As AI advances, it will be pivotal in developing more targeted and successful cancer therapies. The field is still in its early stages, and future progress will benefit from establishing standards and guidelines to promote rigorous methodological design and uphold ethical principles. This research highlights the transformative potential of AI in addressing the challenges posed by cancer heterogeneity.

PMID:40458648 | PMC:PMC12127379 | DOI:10.3389/fmed.2025.1555893

Categories: Literature Watch

Ventricular volume adjustment of brain regions depicts brain changes associated with HIV infection and aging better than intracranial volume adjustment

Tue, 2025-06-03 06:00

Front Neurol. 2025 May 19;16:1516168. doi: 10.3389/fneur.2025.1516168. eCollection 2025.

ABSTRACT

INTRODUCTION: While the adjustment of intracranial volume (ICV) is reported to have a significant influence on the outcomes of analyses of brain structural measures, our study offers a paradigm shift, positing that adjusting for lateral ventricle (LV) inter-individual variability may reveal more atrophic patterns that might be overlooked in analyses without this adjustment, and that such LV-adjusted atrophic patterns may reduce discrepancies observed in earlier studies and better elucidate complex conditions associated with HIV, such as HAND.

METHODS: To test this hypothesis, we employed a number of adjustment strategies on MRI T1-image-derived data extracted using deep learning models and compared their ability to identify the presence and extent of HIV-specific atrophic patterns based on statistical measures and strength.
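One common adjustment strategy of the kind compared here is residual adjustment, regressing each regional volume on the chosen covariate (ICV or LV volume) and analyzing the residuals; the sketch below illustrates this with placeholder values and is not the study's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def residual_adjust(regional_vol, covariate_vol):
    """Residual-based adjustment: remove the linear effect of a covariate volume
    (e.g. intracranial or lateral-ventricle volume) from a regional volume."""
    X = np.asarray(covariate_vol, dtype=float).reshape(-1, 1)
    y = np.asarray(regional_vol, dtype=float)
    model = LinearRegression().fit(X, y)
    return y - model.predict(X)          # adjusted volumes (residuals)

# Example with placeholder values (cm^3): thalamic volume adjusted for LV volume.
thalamus = [6.1, 5.8, 6.4, 5.5, 6.0]
lateral_ventricle = [18.0, 25.0, 15.0, 30.0, 21.0]
print(residual_adjust(thalamus, lateral_ventricle))
```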

RESULTS: Our results show that both ICV adjustments may be effective in identifying atrophic patterns associated with either aging or HIV in areas of the thalamus, basal ganglia, ventral DC, and lateral ventricle, some of which may be overlooked without these adjustments. We also report that LV adjustments detect most atrophic patterns associated with HIV and HAND across multiple subcortical regions with stronger statistical strength, especially in the areas of the basal ganglia (putamen, pallidum, caudate nucleus), hippocampus, thalamus, ventral DC, basal forebrain, third ventricle, fourth ventricle, and inferior lateral ventricle. The analyses of LV-adjusted metrics also show that atrophic patterns observed in the hippocampus, thalamus, and pallidum were strongly correlated with HAND (especially dysfunction in executive function) and clinical markers (i.e., CD4/CD8 ratio).

CONCLUSION: We conclude that models that control for individual variability in intracranial and ventricular volumes have the potential to minimize discrepancies and variations in structural reports of HIV, improving the diagnostic power of identified patterns and fostering greater consistency across research studies. More importantly, adjusting for LV may not only detect atrophic patterns that could be overlooked in analyses performed without any adjustments, but the outcomes obtained from the adjustments may better explain HIV-associated conditions such as HAND and underlying immunological issues often observed in subjects with HIV treated with combination antiretroviral therapy, considering that the adjustments account for certain aspects of regional interaction.

PMID:40458466 | PMC:PMC12127162 | DOI:10.3389/fneur.2025.1516168

Categories: Literature Watch

An efficient non-parametric feature calibration method for few-shot plant disease classification

Tue, 2025-06-03 06:00

Front Plant Sci. 2025 May 19;16:1541982. doi: 10.3389/fpls.2025.1541982. eCollection 2025.

ABSTRACT

The temporal and spatial irregularity of plant diseases results in insufficient image data for certain diseases, challenging traditional deep learning methods that rely on large amounts of manually annotated data for training. Few-shot learning has emerged as an effective solution to this problem. This paper proposes a method based on the Feature Adaptation Score (FAS) metric, which calculates the FAS for each feature layer in the Swin Transformer V2 structure. By leveraging the strict positive correlation between FAS scores and test accuracy, we can identify the Swin Transformer V2-F6 network structure suitable for few-shot plant disease classification without training the network. Furthermore, based on this network structure, we designed the Plant Disease Feature Calibration (PDFC) algorithm, which uses extracted features from the PlantVillage dataset to calibrate features from other datasets. Experiments demonstrate that the combination of the Swin Transformer V2-F6 network structure and the PDFC algorithm significantly improves the accuracy of few-shot plant disease classification, surpassing existing state-of-the-art models. Our research provides an efficient and accurate solution for few-shot plant disease classification, offering significant practical value.
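The PDFC algorithm itself is not specified in the abstract; as a loosely related illustration only, the sketch below shows a generic non-parametric calibration in the same spirit, standardizing few-shot features with statistics from a large base dataset (PlantVillage in the paper) before nearest-centroid classification.

```python
import numpy as np

def calibrate_and_classify(base_features, support_features, support_labels, query_features):
    """Generic feature-calibration baseline for few-shot classification (not the
    paper's PDFC): z-score all features with base-dataset statistics, then assign
    each query to the nearest calibrated class centroid."""
    mu = base_features.mean(axis=0)
    sigma = base_features.std(axis=0) + 1e-8
    support = (support_features - mu) / sigma
    query = (query_features - mu) / sigma

    classes = np.unique(support_labels)
    centroids = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]
```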

PMID:40458225 | PMC:PMC12127352 | DOI:10.3389/fpls.2025.1541982

Categories: Literature Watch

Study on the relationship between vaginal dose and radiation-induced vaginal injury following cervical cancer radiotherapy, and model development

Tue, 2025-06-03 06:00

Front Public Health. 2025 May 19;13:1585481. doi: 10.3389/fpubh.2025.1585481. eCollection 2025.

ABSTRACT

OBJECTIVE: This study investigates the relationship between vaginal radiation dose and radiation-induced vaginal injury in cervical cancer patients, with the aim of developing a risk prediction model to support personalized treatment strategies.

METHODS: A retrospective analysis was performed on the clinical data of 66 cervical cancer patients treated between December 2022 and December 2023. The Synthetic Minority Over-sampling Technique (SMOTE) was employed for data augmentation. Univariate and multivariate analyses were conducted to identify key factors influencing radiation-induced vaginal injury, and five distinct algorithms were applied to develop predictive models. The AUC/ROC metric was used to assess the performance of the models.
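A minimal sketch of the imbalance-handling and evaluation steps named in the methods: SMOTE oversampling of the training split, a small neural network (scikit-learn's MLP as a stand-in for the paper's final model), and ROC AUC evaluation; the feature matrix and labels are synthetic placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# X: placeholder feature matrix (e.g. PIBS point dose, brachytherapy dose, age,
# external beam dose, vaginal involvement); y: injury (1) vs. no injury (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 5))
y = (rng.random(66) < 0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)   # balance classes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_res, y_res)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.2f}")
```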

RESULTS: Univariate analysis revealed significant associations between the posterior-inferior border of the symphysis (PIBS) point dose and brachytherapy dose with radiation-induced vaginal injury (p < 0.05). Multivariate analysis confirmed PIBS point dose, brachytherapy dose, age, external beam radiation dose, and vaginal involvement as significant factors. A neural network algorithm was chosen to construct the radiation-induced vaginal injury model, which was subsequently developed into an online tool.

CONCLUSION: The developed predictive model can assess the risk of radiation-induced vaginal injury, thereby facilitating the development of individualized radiotherapy plans.

PMID:40458090 | PMC:PMC12128087 | DOI:10.3389/fpubh.2025.1585481

Categories: Literature Watch

Comparison of Deep Learning Models for Objective Auditory Brainstem Response Detection: A Multicenter Validation Study

Tue, 2025-06-03 06:00

Trends Hear. 2025 Jan-Dec;29:23312165251347773. doi: 10.1177/23312165251347773. Epub 2025 Jun 3.

ABSTRACT

Auditory brainstem response (ABR) interpretation in clinical practice often relies on visual inspection by audiologists, which is prone to inter-practitioner variability. While deep learning (DL) algorithms have shown promise in objectifying ABR detection in controlled settings, their applicability to real-world clinical data is hindered by small datasets and insufficient heterogeneity. This study evaluates the generalizability of nine DL models for ABR detection using large, multicenter datasets. The primary dataset analyzed, Clinical Dataset I, comprises 128,123 labeled ABRs from 13,813 participants across a wide range of ages and hearing levels, and was divided into a training set (90%) and a held-out test set (10%). The models included convolutional neural networks (CNNs; AlexNet, VGG, ResNet), transformer-based architectures (Transformer, Patch Time Series Transformer [PatchTST], Differential Transformer, and Differential PatchTST), and hybrid CNN-transformer models (ResTransformer, ResPatchTST). Performance was assessed on the held-out test set and four external datasets (Clinical II, Southampton, PhysioNet, Mendeley) using accuracy and area under the receiver operating characteristic curve (AUC). ResPatchTST achieved the highest performance on the held-out test set (accuracy: 91.90%, AUC: 0.976). Transformer-based models, particularly PatchTST, showed superior generalization to external datasets, maintaining robust accuracy across diverse clinical settings. Additional experiments highlighted the critical role of dataset size and diversity in enhancing model robustness. We also observed that incorporating acquisition parameters and demographic features as auxiliary inputs yielded performance gains in cross-center generalization. These findings underscore the potential of DL models, especially transformer-based architectures, for accurate and generalizable ABR detection, and highlight the necessity of large, diverse datasets in developing clinically reliable systems.

PMID:40457875 | DOI:10.1177/23312165251347773

Categories: Literature Watch
