Deep learning

A novel framework for the automated characterization of Gram-stained blood culture slides using a large-scale vision transformer

Mon, 2025-02-24 06:00

J Clin Microbiol. 2025 Feb 24:e0151424. doi: 10.1128/jcm.01514-24. Online ahead of print.

ABSTRACT

This study introduces a new framework for the artificial intelligence-based characterization of Gram-stained whole-slide images (WSIs). As a test for the diagnosis of bloodstream infections, Gram stains provide critical early data to inform patient treatment in conjunction with data from rapid molecular tests. In this work, we developed a novel transformer-based model for Gram-stained WSI classification, which is more scalable to large data sets than previous convolutional neural network-based methods as it does not require patch-level manual annotations. We also introduce a large Gram stain data set from Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire, USA) to evaluate our model, exploring the classification of five major categories of Gram-stained WSIs: gram-positive cocci in clusters, gram-positive cocci in pairs/chains, gram-positive rods, gram-negative rods, and slides with no bacteria. Our model achieves a classification accuracy of 0.858 (95% CI: 0.805, 0.905) and an area under the receiver operating characteristic curve (AUC) of 0.952 (95% CI: 0.922, 0.976) using fivefold nested cross-validation on our 475-slide data set, demonstrating the potential of large-scale transformer models for Gram stain classification. Results were measured against the final clinical laboratory Gram stain report after growth of the organism in culture. We further demonstrate the generalizability of our trained model by applying it without additional fine-tuning to a second 27-slide external data set from Stanford Health (Palo Alto, California, USA), where it achieves a binary classification accuracy of 0.926 (95% CI: 0.885, 0.960) and an AUC of 0.8651 (95% CI: 0.6337, 0.9917) while distinguishing gram-positive from gram-negative bacteria.
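The abstract reports point estimates with 95% confidence intervals but does not state how the intervals were computed. A percentile bootstrap over slides is one common way to obtain such a CI for accuracy; a minimal sketch under that assumption:

```python
import random

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for classification accuracy.

    Resamples the per-slide correctness indicators with replacement and
    takes the alpha/2 and 1-alpha/2 percentiles of the resampled accuracies.
    """
    rng = random.Random(seed)
    n = len(y_true)
    correct = [int(t == p) for t, p in zip(y_true, y_pred)]
    point = sum(correct) / n
    stats = []
    for _ in range(n_boot):
        sample = [correct[rng.randrange(n)] for _ in range(n)]
        stats.append(sum(sample) / n)
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return point, (lo, hi)
```

With 475 slides at 0.858 accuracy, an interval of roughly (0.805, 0.905) is the expected order of magnitude for binomial uncertainty at that sample size.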

IMPORTANCE: This study introduces a scalable transformer-based deep learning model for automating Gram-stained whole-slide image classification. It surpasses previous methods by eliminating the need for manual annotations and demonstrates high accuracy and generalizability across multiple data sets, enhancing the speed and reliability of Gram stain analysis.

PMID:39992156 | DOI:10.1128/jcm.01514-24

Categories: Literature Watch

MRI-Based Topology Deep Learning Model for Noninvasive Prediction of Microvascular Invasion and Assisting Prognostic Stratification in HCC

Mon, 2025-02-24 06:00

Liver Int. 2025 Mar;45(3):e16205. doi: 10.1111/liv.16205.

ABSTRACT

BACKGROUND & AIMS: Microvascular invasion (MVI) is associated with poor prognosis in hepatocellular carcinoma (HCC). Topology may improve the predictive performance and interpretability of deep learning (DL). We aimed to develop and externally validate an MRI-based topology DL model for preoperative prediction of MVI.

METHODS: This dual-centre retrospective study included consecutive surgically treated HCC patients from two tertiary care hospitals. Automatic liver and tumour segmentations were performed with DL methods. A pure convolutional neural network (CNN) model, a topology-CNN (TopoCNN) model and a topology-CNN-clinical (TopoCNN+Clinic) model were developed and externally validated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). Cox regression analyses were conducted to identify risk factors for recurrence-free survival within 2 years (early RFS) and overall survival (OS).

RESULTS: In total, 589 patients were included (292 [49.6%] with pathologically confirmed MVI). The AUCs of the TopoCNN and TopoCNN+Clinic models were 0.890 and 0.895 for the internal test dataset and 0.871 and 0.879 for the external test dataset, respectively. For tumours ≤ 3.0 cm, the AUCs of the TopoCNN and TopoCNN+Clinic models were 0.879 and 0.929 for the internal test dataset, and 0.763 and 0.758 for the external test dataset. The TopoCNN-derived MVI prediction probability was an independent risk factor for early RFS (hazard ratio 6.64) and OS (hazard ratio 13.33).

CONCLUSIONS: The MRI topological DL model based on automatic liver and tumour segmentation could accurately predict MVI and effectively stratify postoperative early RFS and OS, which may assist in personalised treatment decision-making.

PMID:39992060 | DOI:10.1111/liv.16205

Categories: Literature Watch

Exploring Structure Diversity in Atomic Resolution Microscopy With Graph

Mon, 2025-02-24 06:00

Adv Mater. 2025 Feb 23:e2417478. doi: 10.1002/adma.202417478. Online ahead of print.

ABSTRACT

The emergence of deep learning (DL) has provided great opportunities for the high-throughput analysis of atomic-resolution micrographs. However, DL models trained on fixed-size image patches generally lack efficiency and flexibility when processing micrographs containing diverse atomic configurations. Herein, inspired by the similarity between atomic structures and graphs, a few-shot learning framework based on an equivariant graph neural network (EGNN) is described for analyzing a library of atomic structures (e.g., vacancies, phases, grain boundaries, and doping). Compared with image-driven DL models, it shows markedly improved robustness and a three-orders-of-magnitude reduction in computing parameters, which is especially evident for aggregated vacancy lines with flexible lattice distortion. Moreover, the intuitiveness of graphs enables quantitative and straightforward extraction of atomic-scale structural features in batches, statistically unveiling the self-assembly dynamics of vacancy lines under electron-beam irradiation. A versatile model toolkit is established by integrating EGNN sub-models for single-structure recognition into a task chain that processes images involving varied configurations, leading to the discovery of novel doping configurations with superior electrocatalytic properties for the hydrogen evolution reaction. This work provides a powerful tool to explore structure diversity in a fast, accurate, and intelligent manner.

PMID:39988855 | DOI:10.1002/adma.202417478

Categories: Literature Watch

ProCeSa: Contrast-Enhanced Structure-Aware Network for Thermostability Prediction with Protein Language Models

Mon, 2025-02-24 06:00

J Chem Inf Model. 2025 Feb 23. doi: 10.1021/acs.jcim.4c01752. Online ahead of print.

ABSTRACT

Proteins play a fundamental role in biology, and their thermostability is essential for their proper functionality. The precise measurement of thermostability is crucial, traditionally relying on resource-intensive experiments. Recent advances in deep learning, particularly in protein language models (PLMs), have significantly accelerated the progress in protein thermostability prediction. These models utilize various biological characteristics or deep representations generated by PLMs to represent the protein sequences. However, effectively incorporating structural information, based on the PLM embeddings, while not considering atomic protein structures, remains an open and formidable challenge. Here, we propose a novel Protein Contrast-enhanced Structure-Aware (ProCeSa) model that seamlessly integrates both sequence and structural information extracted from PLMs to enhance thermostability prediction. Our model employs a contrastive learning scheme guided by the categories of amino acid residues, allowing it to discern intricate patterns within protein sequences. Rigorous experiments conducted on publicly available data sets establish the superiority of our method over state-of-the-art approaches, excelling in both classification and regression tasks. Our results demonstrate that ProCeSa addresses the complex challenge of predicting protein thermostability by utilizing PLM-derived sequence embeddings, without requiring access to atomic structural data.
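The contrastive scheme "guided by the categories of amino acid residues" is, in spirit, a supervised contrastive loss: embeddings that share a category label are treated as positives and pulled together. A minimal NumPy sketch of that general idea (illustrative only; ProCeSa's actual loss formulation and hyperparameters are not given in the abstract):

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    For each anchor, same-label samples are positives; the loss is the
    mean negative log-probability of picking a positive under a
    temperature-scaled cosine-similarity softmax (Khosla et al.-style).
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    mask_self = ~np.eye(n, dtype=bool)          # exclude self-similarity
    exp_sim = np.exp(sim) * mask_self
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    losses = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if positives:
            losses.append(-np.mean([log_prob[i, j] for j in positives]))
    return float(np.mean(losses))
```

The loss is small when same-category embeddings cluster tightly and large when positives are scattered across the embedding space.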

PMID:39988825 | DOI:10.1021/acs.jcim.4c01752

Categories: Literature Watch

Deep-learning approach for developing bilayered electromagnetic interference shielding composite aerogels based on multimodal data fusion neural networks

Sun, 2025-02-23 06:00

J Colloid Interface Sci. 2025 Feb 20;688:79-92. doi: 10.1016/j.jcis.2025.02.133. Online ahead of print.

ABSTRACT

A non-experimental approach to developing high-performance electromagnetic interference (EMI) shielding materials is urgently needed to reduce costs and manpower. In this investigation, a multimodal data fusion neural network model is proposed to predict the EMI shielding performance of silver-modified four-pronged zinc oxide/waterborne polyurethane/barium ferrite (Ag@F-ZnO/WPU/BF) aerogels. First, 16 Ag@F-ZnO/WPU/BF samples with varying Ag@F-ZnO and BF contents were successfully prepared using pre-casting and directional freezing techniques. The experimental results demonstrate that these aerogels achieve an average EMI shielding effectiveness (SET) of up to 78.6 dB and an absorption coefficient as high as 0.96. On the basis of composite ingredients and microstructural images, the established multimodal neural network model can effectively predict the EMI shielding performance of Ag@F-ZnO/WPU/BF aerogels. Notably, the multimodal combination of a fully connected neural network (FCNN) and a residual neural network (ResNet) using the GatedFusion method yields the best root mean squared error (RMSE) of 0.7626, mean absolute error (MAE) of 0.4918, and correlation coefficient (R) of 0.9885. In addition, this multimodal model successfully predicts the EMI performance of four new aerogels with an average error of less than 5%, demonstrating its strong generalization capability. The accuracy and efficiency of material property prediction based on the multimodal neural network model are largely improved by integrating multiple data sources, offering new possibilities for reducing experimental burdens, accelerating the development of new materials, and gaining a deeper understanding of material mechanisms.
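The three regression metrics reported above (RMSE, MAE, and the correlation coefficient R) have standard definitions; a self-contained reference implementation for checking such values:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (RMSE, MAE, Pearson R) for predicted vs. measured values."""
    n = len(y_true)
    err = [p - t for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in err) / n)          # root mean squared error
    mae = sum(abs(e) for e in err) / n                     # mean absolute error
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    sd_t = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sd_p = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    r = cov / (sd_t * sd_p)                                # Pearson correlation
    return rmse, mae, r
```

Note that R near 1 can coexist with a nonzero RMSE (e.g., a constant offset), which is why all three metrics are reported together.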

PMID:39987843 | DOI:10.1016/j.jcis.2025.02.133

Categories: Literature Watch

Deep learning and electrocardiography: systematic review of current techniques in cardiovascular disease diagnosis and management

Sun, 2025-02-23 06:00

Biomed Eng Online. 2025 Feb 23;24(1):23. doi: 10.1186/s12938-025-01349-w.

ABSTRACT

This paper reviews the recent advancements in the application of deep learning combined with electrocardiography (ECG) within the domain of cardiovascular diseases, systematically examining 198 high-quality publications. Through meticulous categorization and hierarchical segmentation, it provides an exhaustive depiction of the current landscape across various cardiovascular ailments. Our study aspires to furnish interested readers with a comprehensive guide, thereby igniting enthusiasm for further, in-depth exploration and research in this realm.

PMID:39988715 | DOI:10.1186/s12938-025-01349-w

Categories: Literature Watch

Deep learning algorithms for detecting fractured instruments in root canals

Sun, 2025-02-23 06:00

BMC Oral Health. 2025 Feb 23;25(1):293. doi: 10.1186/s12903-025-05652-9.

ABSTRACT

BACKGROUND: Identifying fractured endodontic instruments (FEIs) in periapical radiographs (PAs) is a critical yet challenging aspect of root canal treatment (RCT) due to anatomical complexities and overlapping structures. Deep learning (DL) models offer potential solutions, yet their comparative performance in this domain remains underexplored.

METHODS: A dataset of 700 annotated PAs, including 381 teeth with FEIs, was divided into training, validation, and test sets (60/20/20 split). Five DL models (DenseNet201, EfficientNet B0, ResNet-18, VGG-19, and MaxVit-T) were trained using transfer learning and data augmentation techniques. Performance was evaluated using accuracy, the area under the curve (AUC), and the Matthews correlation coefficient (MCC). Statistical analysis included the Friedman test with post-hoc corrections.

RESULTS: DenseNet201 achieved the highest AUC (0.900) and MCC (0.810), outperforming other models in FEI detection. ResNet-18 demonstrated robust results, while EfficientNet B0 and VGG-19 provided moderate performance. MaxVit-T underperformed, with metrics near random guessing. Statistical analysis revealed significant differences among models (p < 0.05), but pairwise comparisons were not significant.
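MCC, the metric on which DenseNet201 leads here, is a single correlation-like score computed from the full binary confusion matrix, which makes it more informative than accuracy on imbalanced data (381 FEI teeth out of 700 radiographs). Its standard definition:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix.

    Ranges from -1 (total disagreement) through 0 (random guessing)
    to +1 (perfect prediction). Returns 0.0 when any marginal is empty.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

An MCC near 0, as reported for MaxVit-T, corresponds to performance "near random guessing" regardless of how skewed the class balance is.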

CONCLUSIONS: DenseNet201's superior performance highlights its clinical potential for FEI detection, while ResNet-18 offers a balance between accuracy and computational efficiency. The findings highlight the need for model-task alignment and optimization in medical imaging applications.

PMID:39988714 | DOI:10.1186/s12903-025-05652-9

Categories: Literature Watch

Retinal vascular alterations in cognitive impairment: A multicenter study in China

Sun, 2025-02-23 06:00

Alzheimers Dement. 2025 Feb;21(2):e14593. doi: 10.1002/alz.14593.

ABSTRACT

INTRODUCTION: Foundational models suggest Alzheimer's disease (AD) can be diagnosed using retinal images, but the specific structural features remain poorly understood. This study investigates retinal vascular changes in individuals with cognitive impairment in three East Asian regions.

METHODS: A multicenter study was conducted in Shanghai, Hong Kong, and Ningxia, collecting retinal images from 176 patients with mild cognitive impairment (MCI) or AD and 264 controls. The VC-Net deep learning model segmented arterial/venous networks, extracting 36 vascular features.

RESULTS: Significant reductions in vessel length, segment number, and vascular density were observed in cognitively impaired patients, while venous structure and complexity were correlated with the level of cognitive function.

DISCUSSION: Retinal vascular changes may serve as indicators of cognitive impairment, requiring validation in larger cohorts and exploration of the underlying mechanisms.

HIGHLIGHTS: A deep learning segmentation model extracted diverse retinal vascular features. Significant alterations in the structure of retinal arterial/venous networks were identified. Partitioning vessel-rich retinal zones improved detection of vascular changes. Decreases in vessel length, segment number, and vascular density were found in cognitively impaired individuals.

PMID:39988572 | DOI:10.1002/alz.14593

Categories: Literature Watch

Ventilator pressure prediction employing voting regressor with time series data of patient breaths

Sun, 2025-02-23 06:00

Health Informatics J. 2025 Jan-Mar;31(1):14604582241295912. doi: 10.1177/14604582241295912.

ABSTRACT

Objectives: Mechanical ventilators play a vital role in saving millions of lives. During the COVID-19 pandemic, patients with severe symptoms needed ventilators to survive, and studies have reported mortality rates ranging from 50% to 97% among those requiring mechanical ventilation. Pumping air into a patient's lungs with a ventilator requires a particular air pressure: pressure that is too high damages the lungs, while pressure that is too low delivers insufficient oxygen, so either extreme can cost the patient's life. Consequently, precise prediction of ventilator pressure is a task of great significance. The primary aim of this study is to predict the airway pressure in the ventilator respiratory circuit during a breath. Methods: A novel hybrid ventilator pressure predictor (H-VPP) approach is proposed. Exploratory analysis of the ventilator data reveals that high values of the lung attributes R and C during the initial time steps are the prominent causes of high ventilator pressure. Results: Experiments indicate that H-VPP achieves an R² of 0.78, a mean absolute error of 0.028, and a mean squared error of 0.003. These results are better than those of the other machine learning and deep learning models employed in this study. Conclusion: Extensive experimentation indicates the superior performance of the proposed approach for ventilator pressure prediction with high accuracy. Furthermore, performance comparison with state-of-the-art studies corroborates the superior performance of the proposed approach.
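The abstract does not detail H-VPP's internals, but the core of any voting regressor is a (possibly weighted) average of fitted base-model predictions. A minimal sketch with hypothetical stand-in models (real base learners would be, e.g., gradient-boosted trees or neural networks trained on the breath time series):

```python
class ConstantModel:
    """Hypothetical stand-in for a fitted base regressor."""
    def __init__(self, value):
        self.value = value

    def predict(self, X):
        return [self.value for _ in X]

def voting_regressor_predict(models, X, weights=None):
    """Weighted average of base-model predictions: the voting ensemble step."""
    preds = [m.predict(X) for m in models]
    if weights is None:
        weights = [1.0] * len(models)
    total = sum(weights)
    return [
        sum(w * p[i] for w, p in zip(weights, preds)) / total
        for i in range(len(X))
    ]
```

Averaging reduces the variance of individual regressors, which is the usual motivation for voting ensembles on noisy physiological signals.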

PMID:39988551 | DOI:10.1177/14604582241295912

Categories: Literature Watch

Artificial Intelligence non-invasive methods for neonatal jaundice detection: A review

Sun, 2025-02-23 06:00

Artif Intell Med. 2025 Feb 19:103088. doi: 10.1016/j.artmed.2025.103088. Online ahead of print.

ABSTRACT

Neonatal jaundice is a common and potentially fatal condition in neonates, especially in low- and middle-income countries, where it contributes considerably to neonatal morbidity and death. Traditional diagnostic approaches, such as total serum bilirubin (TSB) testing, are invasive and can lead to discomfort, infection risk, and diagnostic delays. As a result, there is rising interest in non-invasive approaches for detecting jaundice early and accurately. This review presents an in-depth analysis of non-invasive techniques for detecting neonatal jaundice, exploring AI-driven techniques such as machine learning (ML) and deep learning (DL), which have demonstrated the ability to enhance diagnostic accuracy by evaluating complex patterns in neonatal skin color and other relevant features. AI models incorporating variants of neural networks are found to achieve accuracy rates of over 90% in detecting jaundice compared with traditional methods. Furthermore, mobile-based applications that use smartphone cameras to estimate bilirubin levels have demonstrated satisfactory outcomes in field settings, providing a practical alternative for resource-constrained areas. The review evaluates the potential impact of AI-based solutions on reducing neonatal morbidity and mortality, with a focus on real-world clinical challenges, highlighting the effectiveness and practicality of AI-based strategies as assistive tools in revolutionizing neonatal care through early jaundice diagnosis, while also addressing the ethical and practical implications of integrating these technologies into clinical practice. The paper recommends future research areas such as the development of new imaging technologies and the incorporation of wearable sensors for real-time bilirubin monitoring.

PMID:39988547 | DOI:10.1016/j.artmed.2025.103088

Categories: Literature Watch

Incorporating indirect MRI information in a CT-based deep learning model for prostate auto-segmentation

Sun, 2025-02-23 06:00

Radiother Oncol. 2025 Feb 21:110806. doi: 10.1016/j.radonc.2025.110806. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Computed tomography (CT) imaging poses challenges for delineation of soft tissue structures for prostate cancer external beam radiotherapy. Guidelines require the input of magnetic resonance imaging (MRI) information. We developed a deep learning (DL) prostate and organ-at-risk contouring model designed to find the MRI-truth in CT imaging.

MATERIAL AND METHODS: The study utilized CT scan data from 165 prostate cancer patients, with 136 scans for training and 29 for testing. The research focused on contouring five regions of interest (ROIs): the clinical target volume of the prostate including the venous plexus (VP) (CTV-iVP) and excluding the VP (CTV-eVP), the bladder, the anorectum, and the whole seminal vesicles (SV), according to the European Society for Radiotherapy and Oncology (ESTRO) and Advisory Committee on Radiation Oncology Practice (ACROP) contouring guidelines. Human delineation included fusion of MRI imaging with the planning CT scans, but the model itself was never shown MRI images during its development. Model training involved a three-dimensional U-Net architecture. A qualitative review was independently performed by two clinicians scoring the model on time-based criteria, and the DL segmentation results were compared to manual adaptations using the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95).

RESULTS: The qualitative review of DL segmentations for CTV-iVP and CTV-eVP scored 2 or 3 out of 3 in 96% of cases, indicating minimal manual adjustments were needed by clinicians. The DL model demonstrated comparable quantitative performance in delineating CTV-iVP and CTV-eVP, with a DSC of 89% and a standard deviation of 3.3%. HD95 was 4 mm for CTV-iVP and 4.1 mm for CTV-eVP, with a standard deviation of 2.1 mm for both contours. Anorectum, bladder, and SV scored 3 out of 3 in the qualitative analysis in 62%, 72%, and 55% of cases, respectively. DSC and HD95 were 90% and 5.5 mm for the anorectum, 96% and 2.9 mm for the bladder, and 81% and 4.6 mm for the seminal vesicles.
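For readers unfamiliar with the two reported metrics: DSC measures volumetric overlap of binary masks, while HD95 measures the 95th-percentile surface distance between contours. A compact NumPy sketch of both (brute-force pairwise distances, fine for illustration but not for large 3D volumes):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (any shape)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g., contour voxel coordinates), in the coordinates' own units."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)   # and vice versa
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))
```

Using the 95th percentile rather than the maximum makes the distance robust to a few outlier voxels on an otherwise well-matched contour.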

CONCLUSION: To our knowledge, this is the first DL model designed to implement MRI contouring guidelines in CT imaging and the first model trained according to ESTRO-ACROP contouring guidelines. This CT-based DL model presents a valuable tool for aiding prostate delineation without requiring the actual MRI information.

PMID:39988305 | DOI:10.1016/j.radonc.2025.110806

Categories: Literature Watch

DENSE-SIM: A modular pipeline for the evaluation of cine DENSE images with sub-voxel ground-truth strain

Sun, 2025-02-23 06:00

J Cardiovasc Magn Reson. 2025 Feb 21:101866. doi: 10.1016/j.jocmr.2025.101866. Online ahead of print.

ABSTRACT

BACKGROUND: Myocardial strain is a valuable biomarker for diagnosing and predicting cardiac conditions, offering additional prognostic information to traditional metrics like ejection fraction. While cardiovascular magnetic resonance (CMR) methods, particularly cine displacement encoding with stimulated echoes (DENSE), are the gold standard for strain estimation, evaluation of regional strain estimation requires precise ground truth. This study introduces DENSE-Sim, an open-source simulation pipeline for generating realistic cine DENSE images with high-resolution known ground truth strain, enabling evaluation of accuracy and precision in strain analysis pipelines.

METHODS: This pipeline is a modular tool designed for simulating cine DENSE images and evaluating strain estimation performance. It comprises four main modules: 1) anatomy generation, for creating end-diastolic cardiac shapes; 2) motion generation, to produce myocardial deformations over time and Lagrangian strain; 3) DENSE image generation, using Bloch equation simulations with realistic noise, spiral sampling, and phase-cycling; and 4) strain evaluation. To illustrate the pipeline, a synthetic dataset of 180 short-axis slices was created and analysed using the commonly used DENSEanalysis tool. The impact of the spatial regularization parameter (k) in DENSEanalysis was evaluated against the ground-truth pixel strain, particularly to assess the resulting bias and variance characteristics.

RESULTS: Simulated strain profiles were generated with a myocardial SNR ranging from 3.9 to 17.7. For end-systolic radial strain, the DENSEanalysis average signed error (ASE) in Green strain ranged from 0.04 ± 0.09 (true − calculated, mean ± std) at a typical regularization (k=0.9) to −0.01 ± 0.21 at low regularization (k=0.1). Circumferential strain ASE ranged from −0.00 ± 0.04 at k=0.9 to −0.01 ± 0.10 at k=0.1. This demonstrates that the circumferential strain closely matched the ground truth, while the radial strain showed more substantial underestimation, particularly near the endocardium. A lower regularization parameter, from 0.3 to 0.6 depending on the myocardial SNR, would be more appropriate for estimating the radial strain, as a compromise between noise compensation and global strain accuracy.
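Green (Green-Lagrange) strain, the quantity in which the errors above are expressed, is defined from the deformation gradient F as E = ½(FᵀF − I). For example, a 10% uniaxial stretch gives E_xx = ½(1.1² − 1) = 0.105, the same order as the radial-strain magnitudes discussed:

```python
import numpy as np

def green_strain(F):
    """Green-Lagrange strain tensor E = 1/2 (F^T F - I) from a 2x2 or 3x3
    deformation gradient F (identity F means no deformation, zero strain)."""
    F = np.asarray(F, float)
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))
```

Unlike small-strain measures, Green strain is exact for large deformations, which matters for radial strain values of 0.4 or more seen in healthy myocardium.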

CONCLUSION: Generating realistic cine DENSE images with high-resolution ground-truth strain and myocardial segmentation enables accurate evaluation of strain analysis tools, while reproducing key in vivo acquisition features, and will facilitate the future development of deep-learning models for myocardial strain analysis, enhancing clinical CMR workflows.

PMID:39988298 | DOI:10.1016/j.jocmr.2025.101866

Categories: Literature Watch

Missing-modality enabled multi-modal fusion architecture for medical data

Sun, 2025-02-23 06:00

J Biomed Inform. 2025 Feb 21:104796. doi: 10.1016/j.jbi.2025.104796. Online ahead of print.

ABSTRACT

BACKGROUND: Fusion of multi-modal data can improve the performance of deep learning models. However, missing modalities are common in medical data due to patient specificity, which is detrimental to the performance of multi-modal models in applications. Therefore, it is critical to adapt the models to missing modalities.

OBJECTIVE: This study aimed to develop an effective multi-modal fusion architecture for medical data that was robust to missing modalities and further improved the performance for clinical tasks.

METHODS: X-ray chest radiographs for the image modality, radiology reports for the text modality, and structured value data for the tabular data modality were fused in this study. Each modality pair was fused with a Transformer-based bi-modal fusion module, and the three bi-modal fusion modules were then combined into a tri-modal fusion framework. Additionally, multivariate loss functions were introduced into the training process to improve the models' robustness to missing modalities during inference. Finally, we designed comparison and ablation experiments to validate the effectiveness of the fusion, the robustness to missing modalities, and the enhancements from each key component. Experiments were conducted on the MIMIC-IV and MIMIC-CXR datasets with a 14-label disease diagnosis task and a patient in-hospital mortality prediction task. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were used to evaluate model performance.

RESULTS: Our proposed architecture showed superior predictive performance, achieving an average AUROC and AUPRC of 0.916 and 0.551 in the 14-label classification task and 0.816 and 0.392 in the mortality prediction task, while the best average AUROC and AUPRC among the comparison methods were 0.876 and 0.492 in the 14-label classification task and 0.806 and 0.366 in the mortality prediction task. Both metrics decreased only slightly when tested with modal-incomplete data. Different levels of enhancement were achieved through the three key components.
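AUROC, used throughout these results, equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney U interpretation), which gives a simple reference implementation:

```python
def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly, counting ties as 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))
```

Because AUROC is insensitive to class prevalence while AUPRC is not, reporting both (as done above) gives a fuller picture on imbalanced clinical labels.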

CONCLUSIONS: The proposed multi-modal fusion architecture effectively fused three modalities and showed strong robustness to missing modalities. This architecture holds promise for scaling up to more modalities to enhance the clinical practicality of the model.

PMID:39988001 | DOI:10.1016/j.jbi.2025.104796

Categories: Literature Watch

Explainable paroxysmal atrial fibrillation diagnosis using an artificial intelligence-enabled electrocardiogram

Sun, 2025-02-23 06:00

Korean J Intern Med. 2025 Feb 21. doi: 10.3904/kjim.2024.130. Online ahead of print.

ABSTRACT

BACKGROUND/AIMS: Atrial fibrillation (AF) significantly contributes to global morbidity and mortality. Paroxysmal atrial fibrillation (PAF) is particularly common among patients with cryptogenic strokes or transient ischemic attacks and has a silent nature. This study aims to develop reliable artificial intelligence (AI) algorithms to detect early signs of AF in patients with normal sinus rhythm (NSR) using a 12-lead electrocardiogram (ECG).

METHODS: Between 2013 and 2020, 552,372 ECG traces from 318,321 patients were collected and split into training (n = 331,422), validation (n = 110,475), and test sets (n = 110,475). Deep neural networks were then trained to predict AF onset within one month of NSR. Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC). An explainable AI technique was employed to identify the inference evidence underlying the predictions of deep learning models.

RESULTS: The AUROC for early diagnosis of PAF was 0.905 ± 0.007. The findings reveal that the vicinity of the T wave, including the ST segment and S-peak, significantly influences the ability of the trained neural network to diagnose PAF. Additionally, comparing the summarized ECG in NSR with those in PAF revealed that nonspecific ST-T abnormalities and inverted T waves were associated with PAF.

CONCLUSIONS: Deep learning can predict AF onset from NSR while detecting key features that influence decisions. This suggests that identifying undetected AF may serve as a predictive tool for PAF screening, offering valuable insights into cardiac dysfunction and stroke risk.

PMID:39987899 | DOI:10.3904/kjim.2024.130

Categories: Literature Watch

Multi-atlas multi-modality morphometry analysis of the South Texas Alzheimer's Disease Research Center postmortem repository

Sun, 2025-02-23 06:00

Neuroimage Clin. 2025 Feb 18;45:103752. doi: 10.1016/j.nicl.2025.103752. Online ahead of print.

ABSTRACT

Histopathology provides critical insights into the neurological processes inducing neurodegenerative diseases and their impact on the brain, but brain banks combining histology and neuroimaging data are difficult to create. As part of an ongoing global effort to establish new brain banks providing both high-quality neuroimaging scans and detailed histopathology examinations, the South Texas Alzheimer's Disease Research Center postmortem repository was recently created with the specific purpose of studying comorbid dementias. As the repository is reaching a milestone of two hundred brain donations and a hundred curated MRI sessions are ready for processing, robust statistical analyses can now be conducted. In this work, we report the very first morphometry analysis conducted with this new data set. We describe the processing pipelines that were specifically developed to exploit the available MRI sequences, and we explain how we addressed several postmortem neuroimaging challenges, such as the separation of brain tissues from fixative fluids, the need for updated brain atlases, and the tissue contrast changes induced by brain fixation. In general, our results establish that a combination of structural MRI sequences can provide enough information for state-of-the-art Deep Learning algorithms to almost perfectly separate brain tissues from a formalin buffered solution. Regional brain volumes are challenging to measure in postmortem scans, but robust estimates sensitive to sex differences and age trends, reflecting clinical diagnosis, neuropathology findings, and the shrinkage induced by tissue fixation can be obtained. We hope that the new processing methods developed in this work, such as the lightweight Deep Networks we used to identify the formalin signal in multimodal MRI scans and the MRI synthesis tools we used to fix our anisotropic resolution brain scans, will inspire other research teams working with postmortem MRI scans.

PMID:39987858 | DOI:10.1016/j.nicl.2025.103752

Categories: Literature Watch

Deep learning imputes DNA methylation states in single cells and enhances the detection of epigenetic alterations in schizophrenia

Sat, 2025-02-22 06:00

Cell Genom. 2025 Feb 15:100774. doi: 10.1016/j.xgen.2025.100774. Online ahead of print.

ABSTRACT

DNA methylation (DNAm) is a key epigenetic mark with essential roles in gene regulation, mammalian development, and human diseases. Single-cell technologies enable profiling DNAm at cytosines in individual cells, but they often suffer from low coverage for CpG sites. We introduce scMeFormer, a transformer-based deep learning model for imputing DNAm states at each CpG site in single cells. Comprehensive evaluations across five single-nucleus DNAm datasets from human and mouse demonstrate scMeFormer's superior performance over alternative models, achieving high-fidelity imputation even with coverage reduced to 10% of original CpG sites. Applying scMeFormer to a single-nucleus DNAm dataset from the prefrontal cortex of patients with schizophrenia and controls identified thousands of schizophrenia-associated differentially methylated regions that would have remained undetectable without imputation and added granularity to our understanding of epigenetic alterations in schizophrenia. We anticipate that scMeFormer will be a valuable tool for advancing single-cell DNAm studies.

PMID:39986279 | DOI:10.1016/j.xgen.2025.100774

Categories: Literature Watch

Genetic association studies using disease liabilities from deep neural networks

Sat, 2025-02-22 06:00

Am J Hum Genet. 2025 Feb 19:S0002-9297(25)00019-9. doi: 10.1016/j.ajhg.2025.01.019. Online ahead of print.

ABSTRACT

The case-control study is a widely used method for investigating the genetic underpinnings of binary traits. However, long-term, prospective cohort studies often grapple with absent or evolving health-related outcomes. Here, we propose two methods, liability and meta, for conducting genome-wide association studies (GWASs) that leverage disease liabilities calculated from deep patient phenotyping. Analyzing 38 common traits in ∼300,000 UK Biobank participants, we identified an increased number of loci in comparison to the number identified by the conventional case-control approach, and there were high replication rates in larger external GWASs. Further analyses confirmed the disease specificity of the genetic architecture; the meta method demonstrated higher robustness when phenotypes were imputed with low accuracy. Additionally, polygenic risk scores based on disease liabilities more effectively predicted newly diagnosed cases in the 2022 dataset, which were controls in the earlier 2019 dataset. Our findings demonstrate that integrating high-dimensional phenotypic data into deep neural networks enhances genetic association studies while capturing disease-relevant genetic architecture.
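The "liability" method, as described, replaces the binary case-control outcome with a continuous disease liability in the per-SNP association test. A minimal sketch of that idea with simulated dosages and a synthetic liability score (hypothetical data; real GWAS pipelines add covariates, relatedness corrections, and genome-wide multiple-testing control):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 5  # individuals, SNPs (toy scale)

# Genotype dosages (0/1/2); a continuous "liability" stands in for the deep
# phenotyping output. Only SNP 0 truly influences the liability here.
geno = rng.integers(0, 3, size=(n, m)).astype(float)
liability = 0.15 * geno[:, 0] + rng.normal(0.5, 0.2, n)

def gwas_linear(genotypes, y):
    """Per-SNP simple linear regression of liability on dosage;
    returns (beta, z) for each SNP."""
    y_c = y - y.mean()
    results = []
    for g in genotypes.T:
        g_c = g - g.mean()
        beta = (g_c @ y_c) / (g_c @ g_c)
        resid = y_c - beta * g_c
        se = np.sqrt((resid @ resid) / (len(y) - 2) / (g_c @ g_c))
        results.append((beta, beta / se))
    return results

results = gwas_linear(geno, liability)
```

The causal SNP yields a large z-score while null SNPs stay near zero; using a continuous outcome in this way is what lets the approach recover signal even when hard case-control labels are missing or evolving.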

PMID:39986278 | DOI:10.1016/j.ajhg.2025.01.019

Categories: Literature Watch

Electrocardiographic-Driven artificial intelligence Model: A new approach to predicting One-Year mortality in heart failure with reduced ejection fraction patients

Sat, 2025-02-22 06:00

Int J Med Inform. 2025 Feb 19;197:105843. doi: 10.1016/j.ijmedinf.2025.105843. Online ahead of print.

ABSTRACT

BACKGROUND: Despite the proliferation of heart failure (HF) mortality prediction models, their practical utility is limited. Addressing this, we utilized a significant dataset to develop and validate a deep learning artificial intelligence (AI) model for predicting one-year mortality in heart failure with reduced ejection fraction (HFrEF) patients. The study's focus was to assess the effectiveness of an AI algorithm, trained on an extensive collection of ECG data, in predicting one-year mortality in HFrEF patients.

METHODS: We selected HFrEF patients who had high-quality baseline ECGs from two hospital visits between September 2016 and May 2021. A total of 3,894 HFrEF patients (64% male, mean age 64.3, mean ejection fraction 29.8%) were included. Using this ECG data, we developed a deep learning model and evaluated its performance using the area under the receiver operating characteristic curve (AUROC).

RESULTS: The model, validated against 16,228 independent ECGs from the original cohort, achieved an AUROC of 0.826 (95% CI, 0.794-0.859). It displayed a high sensitivity of 99.0%, positive predictive value of 16.6%, and negative predictive value of 98.4%. Importantly, the deep learning algorithm emerged as an independent predictor of one-year mortality in HFrEF patients, with an adjusted hazard ratio of 4.12 (95% CI 2.32-7.33, p < 0.001).
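The reported AUROC summarizes ranking performance across all thresholds; it can be computed directly from the Mann-Whitney U statistic. A small self-contained helper (not the authors' evaluation code, and without tie handling):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative.
    (No tie handling, for brevity.)"""
    s = np.concatenate([scores_pos, scores_neg])
    ranks = s.argsort().argsort() + 1.0  # 1-based ranks
    n1, n2 = len(scores_pos), len(scores_neg)
    r_pos = ranks[:n1].sum()
    return (r_pos - n1 * (n1 + 1) / 2) / (n1 * n2)
```

Perfectly separated scores give 1.0, reversed scores give 0.0, and random scores hover near 0.5, which is why an AUROC of 0.826 indicates strong discrimination.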

CONCLUSIONS: The depth and quality of our dataset and our AI-driven ECG analysis model significantly enhance the prediction of one-year mortality in HFrEF patients. This promises a more personalized, future-focused approach in HF patient management.

PMID:39986123 | DOI:10.1016/j.ijmedinf.2025.105843

Categories: Literature Watch

Specific glycomacropeptide detection via polyacrylamide gel electrophoresis with dual imaging and signal-fusion deep learning

Sat, 2025-02-22 06:00

Food Chem. 2025 Feb 12;476:143293. doi: 10.1016/j.foodchem.2025.143293. Online ahead of print.

ABSTRACT

Herein, we report a sodium dodecyl-sulfate polyacrylamide gel electrophoresis (SDS-PAGE) method featuring dual imaging and signal-fusion deep learning for the specific identification and analysis of glycomacropeptide (GMP) in milk samples. Conventional SDS-PAGE methods lack specificity because of the single staining of protein bands and the overlap between GMP and β-lactoglobulin (βLg). Our dual imaging method generated a pair of complementary detection signals by recruiting intrinsic fluorescence imaging (IFI) and silver staining. Comparing the IFI image with the staining image highlighted the presence of GMP and differentiated it from βLg. Additionally, we trained a signal-fusion deep learning model to improve the quantitative performance of our method. The model fused the features extracted from the paired detection signals (IFI and staining) and accurately classified them into different mixing ratios (the proportion of GMP-containing whey in the sample), indicating the potential for quantitative analysis of the mixing ratio of GMP added to a whey sample. The developed method has the merits of specificity, sensitivity, and simplicity, and holds potential for the analysis of proteins/peptides with unique IFI properties in food safety, basic research, biopharming, etc.
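Signal fusion here means combining features from the paired IFI and staining images before classification. A toy sketch of early fusion by feature concatenation, with synthetic lane features, hypothetical mixing ratios, and a nearest-centroid classifier standing in for the paper's trained deep network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the paired signals: each gel lane yields one feature
# vector from intrinsic fluorescence (IFI) and one from silver staining.
# The class labels are hypothetical mixing ratios of GMP-containing whey.
ratios = [0.0, 0.25, 0.5, 1.0]

def make_lane(ratio):
    ifi = ratio + rng.normal(0, 0.05, 4)             # GMP-specific channel
    stain = 0.5 + 0.5 * ratio + rng.normal(0, 0.05, 4)
    return np.concatenate([ifi, stain])              # early fusion: concatenate

centroids = {r: np.mean([make_lane(r) for _ in range(20)], axis=0) for r in ratios}

def classify(lane):
    """Nearest centroid on the fused (concatenated) feature vector."""
    return min(ratios, key=lambda r: np.linalg.norm(lane - centroids[r]))
```

Concatenating the two signals lets the classifier exploit their complementarity: the IFI channel is GMP-specific while the staining channel reflects total protein, so together they disambiguate classes that either signal alone would confuse.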

PMID:39986063 | DOI:10.1016/j.foodchem.2025.143293

Categories: Literature Watch

Building rooftop extraction from high resolution aerial images using multiscale global perceptron with spatial context refinement

Sat, 2025-02-22 06:00

Sci Rep. 2025 Feb 22;15(1):6499. doi: 10.1038/s41598-025-91206-6.

ABSTRACT

Building rooftop extraction has been applied in various fields, such as cartography, urban planning, automatic driving, and intelligent city construction. Automatic building detection and extraction algorithms using high-spatial-resolution aerial images can provide precise location and geometry information, significantly reducing time, costs, and labor. Recently, deep learning algorithms, especially convolutional neural networks (CNNs) and Transformers, have shown robust local or global feature extraction ability, achieving advanced performance in intelligent interpretation compared with conventional methods. However, buildings often exhibit scale variation, spectral heterogeneity and similarity, and complex geometric shapes. Hence, building rooftop extraction results from these methods suffer from fragmentation and lack spatial detail. To address these issues, this study developed a multi-scale global perceptron network based on Transformer and CNN using novel encoder-decoders to enhance the contextual representation of buildings. Specifically, an improved multi-head-attention encoder is employed, constructing multi-scale tokens to enhance global semantic correlations. Meanwhile, a context refinement decoder is developed that synergistically uses high-level semantic representations and shallow features to restore spatial details. Overall, quantitative analysis and visual experiments confirmed that the proposed model is more efficient than and superior to other state-of-the-art methods, with a 95.18% F1 score on the WHU dataset and a 93.29% F1 score on the Massub dataset.
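The multi-scale token idea can be illustrated in a few lines: pool a patch-token sequence at several strides, concatenate the pooled tokens, and let attention mix information across scales. This is a simplified stand-in for the paper's improved multi-head-attention encoder; the shapes, strides, and single-head attention are all assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention (single head, for brevity)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def multiscale_tokens(feat, scales=(1, 2, 4)):
    """Pool an (N, C) token sequence at several strides and concatenate,
    so attention keys/values cover several spatial scales at once."""
    pooled = []
    for s in scales:
        n = (len(feat) // s) * s
        pooled.append(feat[:n].reshape(-1, s, feat.shape[1]).mean(axis=1))
    return np.concatenate(pooled, axis=0)

feat = np.random.default_rng(3).normal(size=(16, 8))  # 16 patch tokens, dim 8
tokens = multiscale_tokens(feat)                      # 16 + 8 + 4 = 28 tokens
out = attention(feat, tokens, tokens)                 # queries attend across scales
```

Because the coarser pooled tokens summarize wider spatial context, each query token can weigh both fine-grained and aggregated evidence, which is the intuition behind enhancing global semantic correlations at multiple scales.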

PMID:39987354 | DOI:10.1038/s41598-025-91206-6

Categories: Literature Watch
