Deep learning

Artificial intelligence-assisted automated heart failure detection and classification from electronic health records

Fri, 2024-05-03 06:00

ESC Heart Fail. 2024 May 3. doi: 10.1002/ehf2.14828. Online ahead of print.

ABSTRACT

AIMS: Electronic health records (EHR) linked to Digital Imaging and Communications in Medicine (DICOM) data, biological specimens, and deep learning (DL) algorithms could potentially improve patient care through automated case detection and surveillance. We hypothesized that by applying keyword searches to routinely stored EHR, in conjunction with AI-powered automated reading of DICOM echocardiography images and analysis of biomarkers from routinely stored plasma samples, we could identify heart failure (HF) patients.

METHODS AND RESULTS: We used EHR data collected between 1993 and 2021 from Tayside and Fife (~20% of the Scottish population). We applied a keyword search strategy to the EHR data set, complemented by filtering based on International Classification of Diseases (ICD) codes and prescription data. We then applied DL for the automated interpretation of echocardiographic DICOM images. These methods were integrated with the analysis of routinely stored plasma samples to identify and categorize patients into HF with reduced ejection fraction (HFrEF), HF with preserved ejection fraction (HFpEF), and controls without HF. The final diagnosis was verified through a manual review of medical records, natriuretic peptides measured in stored blood samples, and a comparison of clinical outcomes among groups. The patient cohort was selected through an algorithmic workflow that started with 60,850 EHR records and resulted in a final cohort of 578 patients (186 controls, 236 with HFpEF, and 156 with HFrEF) after excluding individuals with mismatched data or significant valvular heart disease. The analysis of baseline characteristics revealed that, compared with controls, patients with HFrEF and HFpEF were generally older, had higher BMI, and showed a greater prevalence of co-morbidities such as diabetes, COPD, and CKD. Echocardiographic analysis, enhanced by DL, provided high coverage and detailed insights into cardiac function, showing significant differences in parameters such as left ventricular diameter, ejection fraction, and myocardial strain among the groups. Clinical outcomes highlighted a higher risk of hospitalization and mortality for HF patients compared with controls, with particularly elevated risk ratios for both the HFrEF and HFpEF groups. The concordance between algorithmic selection and manual validation demonstrated high accuracy, supporting the effectiveness of our approach in identifying and classifying HF subtypes, which could significantly impact future HF diagnosis and management strategies.
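
As an illustration of the kind of keyword, ICD-code, and prescription filtering described above, the following sketch uses pandas with hypothetical column names, keywords, code prefixes, and drug lists; it is not the study's actual search strategy.

```python
# A minimal sketch, assuming a flat EHR table with free-text, ICD-10, and prescription
# columns. All keywords, codes, and drugs below are illustrative placeholders.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "free_text": ["severe heart failure, LVEF 30%", "routine check", "HFpEF suspected", "asthma review"],
    "icd10": ["I50.1", "J45", "I50.3", "J45"],
    "prescriptions": ["furosemide;ramipril", "salbutamol", "spironolactone", "salbutamol"],
})

keywords = ["heart failure", "hfpef", "hfref", "lvef"]       # illustrative search terms
hf_drugs = {"furosemide", "spironolactone"}                  # supporting prescription evidence

kw_hit = ehr["free_text"].str.lower().str.contains("|".join(keywords))
icd_hit = ehr["icd10"].str.startswith("I50")                 # ICD-10 heart-failure block
rx_hit = ehr["prescriptions"].str.split(";").apply(lambda ds: bool(hf_drugs & set(ds)))

candidates = ehr[kw_hit | (icd_hit & rx_hit)]                # records flagged for downstream review
print(candidates[["patient_id", "icd10"]])
```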

CONCLUSIONS: Our study highlights the feasibility of combining keyword searches in EHR, DL automated echocardiographic interpretation, and biobank resources to identify HF subtypes.

PMID:38700133 | DOI:10.1002/ehf2.14828

Categories: Literature Watch

Electron ionization mass spectrometry feature peak relationships combined with deep classification model to assist similarity algorithm for fast and accurate identification of compounds

Fri, 2024-05-03 06:00

Rapid Commun Mass Spectrom. 2024 Jul 15;38(13):e9752. doi: 10.1002/rcm.9752.

ABSTRACT

RATIONALE: Gas chromatography-mass spectrometry (GC-MS) combines chromatography and MS, leveraging the high separation efficiency of GC, the strong qualitative ability of MS, and the high sensitivity of the detector. In GC-MS data processing, identifying the experimental compounds is one of the most important analytical steps, and it is usually realized by one-to-one similarity calculations between the experimental mass spectrum and the spectra in a standard library. Although the accuracy of such algorithms has improved in recent years, it is still difficult to distinguish structurally similar mass spectra, especially isomers. At the same time, library capacity is very large and increases every year, so the algorithm must perform large numbers of calculations against irrelevant compounds in the library to recognize an unknown compound, which significantly reduces efficiency.

METHODS: This work proposes excluding a large number of irrelevant mass spectra by presearching, performing preliminary similarity calculations using similarity algorithms, and finally improving the accuracy of the similarity calculations using deep classification models. The replica library of NIST17 is used as the query data, and the master library is used as the reference database.
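
The presearch-plus-similarity idea can be sketched as follows. The shared-top-peaks presearch rule and the weighted-cosine similarity (with illustrative weighting exponents) are generic library-matching choices, not this work's exact algorithms, and the deep classification stage is omitted.

```python
# A minimal sketch, assuming unit-mass-binned stick spectra stored as NumPy arrays.
import numpy as np

def weighted(spectrum, mz_power=1.0, intensity_power=0.5):
    """Weight each m/z bin by (m/z)^a * intensity^b, as in classic library matching."""
    mz = np.arange(len(spectrum))
    return (mz ** mz_power) * (spectrum ** intensity_power)

def cosine_similarity(query, reference):
    a, b = weighted(query), weighted(reference)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def presearch(query, library, n_top=8, min_shared=3):
    """Keep only library entries sharing enough of the query's strongest peaks."""
    top_q = set(np.argsort(query)[-n_top:])
    keep = []
    for idx, ref in enumerate(library):
        if len(top_q & set(np.argsort(ref)[-n_top:])) >= min_shared:
            keep.append(idx)
    return keep

# Toy usage: a library of 3 random spectra, matched against a noisy copy of entry 1.
rng = np.random.default_rng(1)
library = [rng.random(500) for _ in range(3)]
query = library[1] + 0.05 * rng.random(500)
candidates = presearch(query, library)
scores = {i: cosine_similarity(query, library[i]) for i in candidates}
print(candidates, scores)
```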

RESULTS: Compared with the traditional recognition algorithm, the preprocessing algorithm reduced computation time by 4.2 h, and adding deep learning models 1 and 2 as the final determination step improved recognition accuracy by 1.9% and 6.5%, respectively, over the original algorithm.

CONCLUSIONS: This method improves the recognition efficiency compared to conventional algorithms and at the same time has better recognition accuracy for structurally similar mass spectra and isomers.

PMID:38700125 | DOI:10.1002/rcm.9752

Categories: Literature Watch

Prospective Deep Learning-based Quantitative Assessment of Coronary Plaque by CT Angiography Compared with Intravascular Ultrasound

Fri, 2024-05-03 06:00

Eur Heart J Cardiovasc Imaging. 2024 May 3:jeae115. doi: 10.1093/ehjci/jeae115. Online ahead of print.

ABSTRACT

AIMS: Coronary computed tomography angiography provides noninvasive assessment of coronary stenosis severity and flow impairment. Automated artificial intelligence analysis may assist in precise quantification and characterization of coronary atherosclerosis, enabling patient-specific risk determination and management strategies. This multicenter international study compared an automated deep-learning-based method for segmenting coronary atherosclerosis in coronary computed tomography angiography (CCTA) against the reference standard of intravascular ultrasound (IVUS).

METHODS AND RESULTS: The study included clinically stable patients with known coronary artery disease from 15 centers in the U.S. and Japan. An artificial intelligence (AI)-enabled plaque analysis service was used to quantify and characterize total plaque volume (TPV) and vessel, lumen, calcified plaque (CP), non-calcified plaque (NCP), and low-attenuation plaque (LAP) volumes derived from CCTA, which were compared with IVUS measurements in a blinded, core laboratory-adjudicated fashion. In 237 patients, 432 lesions were assessed; mean lesion length was 24.5 mm. Mean IVUS-TPV was 186.0 mm3. AI-enabled plaque analysis on CCTA showed strong correlation and high accuracy when compared with IVUS; the correlation coefficient, slope, and Y intercept for TPV were 0.91, 0.99, and 1.87, respectively; for CP volume, 0.91, 1.05, and 5.32, respectively; and for NCP volume, 0.87, 0.98, and 15.24, respectively. Bland-Altman analysis demonstrated strong agreement with little bias for these measurements.
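
The agreement statistics reported here (correlation coefficient, slope, Y intercept, and Bland-Altman bias with limits of agreement) can be computed as in the following NumPy/SciPy sketch on synthetic paired measurements; the numbers are illustrative only.

```python
# A minimal sketch, assuming paired per-lesion volumes from a reference method and an
# index test; the synthetic data loosely mimic the reported slope/intercept.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ivus_tpv = rng.uniform(50, 400, size=100)                  # reference (e.g., IVUS TPV, mm^3)
ccta_tpv = 0.99 * ivus_tpv + 1.9 + rng.normal(0, 15, 100)  # index test (e.g., AI-CCTA TPV)

r, _ = stats.pearsonr(ccta_tpv, ivus_tpv)                  # correlation coefficient
slope, intercept, *_ = stats.linregress(ivus_tpv, ccta_tpv)

diff = ccta_tpv - ivus_tpv                                  # Bland-Altman differences
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"r={r:.2f}, slope={slope:.2f}, intercept={intercept:.2f}, "
      f"bias={bias:.1f} mm^3, LoA={loa[0]:.1f} to {loa[1]:.1f} mm^3")
```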

CONCLUSIONS: Artificial intelligence-enabled CCTA quantification and characterization of atherosclerosis demonstrated strong agreement with IVUS reference standard measurements. This tool may prove effective for accurate evaluation of coronary atherosclerotic burden and cardiovascular risk assessment (ClinicalTrials.gov identifier: NCT05138289).

PMID:38700097 | DOI:10.1093/ehjci/jeae115

Categories: Literature Watch

Automated detection of anterior crossbite on intraoral images and videos utilizing deep learning

Fri, 2024-05-03 06:00

Int J Comput Dent. 2024 May 3;0(0):0. doi: 10.3290/j.ijcd.b5290567. Online ahead of print.

ABSTRACT

AIM: Malocclusion has emerged as a burgeoning global public health concern. Individuals with an anterior crossbite face an elevated risk of exhibiting characteristics such as a concave facial profile, negative overjet, and poor masticatory efficiency. In response to this issue, we proposed a convolutional neural network (CNN)-based model designed for the automated detection and classification of anterior crossbite in intraoral images and videos.

MATERIALS AND METHODS: A total of 1865 intraoral images were included in this study, 1493 (80%) of which were allocated for training and 372 (20%) for testing the CNN. Additionally, we tested the models on 10 videos, spanning a cumulative duration of 124 seconds. To assess the performance of our predictions, metrics including accuracy, sensitivity, specificity, precision, F1-score, area under the precision-recall (AUPR) curve, and area under the receiver operating characteristic (ROC) curve (AUC) were employed.
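
For reference, the listed metrics can be computed with scikit-learn as in this sketch on dummy labels and scores; the 0.5 decision threshold and toy data are assumptions for illustration.

```python
# A minimal sketch of accuracy, sensitivity, specificity, precision, F1-score, AUPR,
# and ROC AUC on a synthetic binary test set of 372 images.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, precision_score,
                             f1_score, average_precision_score, roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 372)                      # e.g., crossbite vs. normal
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 372), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   ", accuracy_score(y_true, y_pred))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("precision  ", precision_score(y_true, y_pred))
print("F1-score   ", f1_score(y_true, y_pred))
print("AUPR       ", average_precision_score(y_true, y_score))
print("ROC AUC    ", roc_auc_score(y_true, y_score))
```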

RESULTS: The trained model exhibited commendable classification performance, achieving an accuracy of 0.965 and an AUC of 0.986. Moreover, it demonstrated superior specificity (0.992 vs. 0.978 and 0.956, P < 0.05) in comparison to assessments by two orthodontists. Conversely, the CNN model displayed diminished sensitivity (0.89 vs. 0.96 and 0.92, P < 0.05) relative to the orthodontists. Notably, the CNN model accomplished a perfect classification rate, successfully identifying 100% of the videos in the test set.

CONCLUSION: The deep learning (DL) model exhibited remarkable classification accuracy in identifying anterior crossbite through both intraoral images and videos. This proficiency holds the potential to expedite the detection of severe malocclusions, facilitating timely classification for appropriate treatment and, consequently, mitigating the risk of complications.

PMID:38700086 | DOI:10.3290/j.ijcd.b5290567

Categories: Literature Watch

Deep learning assisted fluid volume calculation for assessing anti-vascular endothelial growth factor effect in diabetic macular edema

Fri, 2024-05-03 06:00

Heliyon. 2024 Apr 17;10(8):e29775. doi: 10.1016/j.heliyon.2024.e29775. eCollection 2024 Apr 30.

ABSTRACT

OBJECTIVE: To develop an algorithm using deep learning methods to calculate the volume of intraretinal and subretinal fluid in optical coherence tomography (OCT) images for assessing diabetic macular edema (DME) patients' condition changes.

DESIGN: Cross-sectional study.

PARTICIPANTS: Treatment-naive patients diagnosed with DME recruited from April 2020 to November 2021.

METHODS: The deep learning network, which was built for autonomous segmentation utilizing an encoder-decoder network based on the U-Net architecture, was used to calculate the volume of intraretinal fluid (IRF) and subretinal fluid (SRF). The alterations of retinal vessel density and thickness, and the correlation between best-corrected visual acuity (BCVA) and OCT parameters were analyzed.
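
Converting the network's segmentation masks into fluid volumes reduces to counting labelled voxels and multiplying by the voxel size, as in this hedged sketch; the label convention (1 = IRF, 2 = SRF) and the OCT voxel spacing are illustrative assumptions, not the study's parameters.

```python
# A minimal sketch, assuming a 3D label map stacked from per-B-scan U-Net predictions.
import numpy as np

def fluid_volume_mm3(mask, spacing_mm=(0.12, 0.004, 0.012)):
    """mask: (n_bscans, height, width) integer label map; spacing_mm: assumed voxel size."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return {"IRF": int((mask == 1).sum()) * voxel_mm3,
            "SRF": int((mask == 2).sum()) * voxel_mm3}

# Toy usage on a random label volume standing in for the network's output.
rng = np.random.default_rng(0)
pred = rng.integers(0, 3, size=(25, 256, 256))
print(fluid_volume_mm3(pred))
```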

RESULTS: 2,955 OCT images of fourteen eyes from DME patients with IRF and SRF who received anti-vascular endothelial growth factor (VEGF) agents were obtained. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve of the algorithm was 0.993 for IRF and 0.998 for SRF. The volumes of IRF and SRF were significantly decreased from 1.93 ± 0.58/1.14 ± 0.25 mm3 (baseline) to 0.26 ± 0.13/0.26 ± 0.18 mm3 (post-injection), respectively (p = 0.0170 for IRF and p = 0.0004 for SRF). The Spearman correlation demonstrated that the reduction of IRF volume was negatively correlated with age (coefficient = -0.698, p = 0.006).

CONCLUSION: We developed a deep learning assisted fluid volume calculation algorithm with high sensitivity and specificity for assessing the volume of IRF and SRF in DME patients. Key words: deep learning; diabetic macular edema; optical coherence tomography.

PMID:38699726 | PMC:PMC11063453 | DOI:10.1016/j.heliyon.2024.e29775

Categories: Literature Watch

Spectral analysis and Bi-LSTM deep network-based approach in detection of mild cognitive impairment from electroencephalography signals

Fri, 2024-05-03 06:00

Cogn Neurodyn. 2024 Apr;18(2):597-614. doi: 10.1007/s11571-023-10010-y. Epub 2023 Oct 3.

ABSTRACT

Mild cognitive impairment (MCI) is a neuropsychological syndrome characterized by cognitive impairments. It typically affects adults 60 years of age and older, manifests as a noticeable decline in the patient's cognitive function, and, if left untreated, converts to Alzheimer's disease (AD). For that reason, early diagnosis of MCI is important, as it slows down the conversion of the disease to AD. Early and accurate diagnosis of MCI requires recognition of the clinical characteristics of the disease, extensive testing, and long-term observation. These observations and tests can be subjective, expensive, incomplete, or inaccurate. Electroencephalography (EEG) is a powerful choice for the diagnosis of such diseases, with advantages including being non-invasive, findings-based, less costly, and able to produce results in a short time. In this study, a new EEG-based model is developed that can effectively detect MCI patients with higher accuracy. For this purpose, a dataset consisting of EEG signals recorded from a total of 34 subjects, 18 MCI patients and 16 controls aged 40 to 77 years, was used. To conduct the experiment, the EEG signals were denoised using multiscale principal component analysis (MSPCA), and a data augmentation (DA) method was applied to increase the size of the dataset. Tenfold cross-validation was used to validate the model, and the power spectral density (PSD) of the EEG signals was extracted using three spectral analysis methods: the periodogram, Welch, and multitaper methods. The PSD graphs showed differences between the control and MCI groups, indicating that the signal power of MCI patients is lower than that of controls. To classify the subjects, the bidirectional long short-term memory (Bi-LSTM) network, one of the best-performing deep learning algorithms, was used alongside several machine learning algorithms, namely decision tree (DT), support vector machine (SVM), and k-nearest neighbor (KNN). These algorithms were trained and tested using the feature vectors extracted from the control and MCI groups, and their coefficient matrices were compared and evaluated against the performance evaluation metrics to determine which one performed best overall. According to the experimental results, the proposed deep learning model combining the multitaper spectral analysis approach with the Bi-LSTM algorithm attained the highest number of correctly classified samples for diagnosing MCI patients and achieved remarkable accuracy compared with the other proposed models. The classification results of the deep learning model are 98.97% accuracy, 98.34% sensitivity, 99.67% specificity, 99.70% precision, 99.02% F1-score, and 97.94% Matthews correlation coefficient (MCC).
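
As a rough illustration of the PSD feature extraction step, the sketch below computes Welch band powers with SciPy on a synthetic signal; the sampling rate, band edges, and window length are assumptions, and the multitaper variant would substitute a multitaper estimator for scipy.signal.welch.

```python
# A minimal sketch, assuming a single-channel EEG segment sampled at 256 Hz.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # Welch power spectral density

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {}
for name, (lo, hi) in bands.items():
    sel = (freqs >= lo) & (freqs < hi)
    features[name] = float(trapezoid(psd[sel], freqs[sel]))   # band power
print(features)   # band powers usable as inputs to Bi-LSTM / SVM / KNN / DT classifiers
```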

PMID:38699612 | PMC:PMC11061085 | DOI:10.1007/s11571-023-10010-y

Categories: Literature Watch

A real-time constellation image classification method of wireless communication signals based on the lightweight network MobileViT

Fri, 2024-05-03 06:00

Cogn Neurodyn. 2024 Apr;18(2):659-671. doi: 10.1007/s11571-023-10015-7. Epub 2023 Oct 10.

ABSTRACT

Automatic modulation classification (AMC) is a challenging topic in the development of cognitive radio, which can sense and learn the surrounding electromagnetic environment and help make corresponding decisions. In this paper, we propose to perform real-time AMC by constructing a lightweight neural network, MobileViT, driven by clustered constellation images. First, the clustered constellation images are generated from I/Q sequences to help extract robust and discriminative features. Then the lightweight MobileViT network is developed for real-time constellation image classification. Experimental results on the public dataset RadioML 2016.10a with an edge computing platform demonstrate the superiority and efficiency of MobileViT. Furthermore, extensive ablation tests prove the robustness of the proposed method to the learning rate and batch size. To the best of our knowledge, this is the first attempt to deploy a deep learning model for real-time classification of the modulation schemes of received signals at the edge.
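
The I/Q-to-constellation-image step can be illustrated by binning the complex samples into a 2D histogram, as in the sketch below on synthetic QPSK data; the image size, axis limits, and the absence of an explicit clustering step are illustrative simplifications rather than the paper's pipeline.

```python
# A minimal sketch, assuming a noisy QPSK burst; the resulting grayscale image is the
# kind of input a MobileViT-style classifier could consume.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.integers(0, 4, 1024)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))            # ideal QPSK points
iq = qpsk + 0.1 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))

def constellation_image(iq, bins=64, lim=1.5):
    """Bin complex samples on the I/Q plane into a normalized 2D histogram image."""
    img, _, _ = np.histogram2d(iq.real, iq.imag, bins=bins,
                               range=[[-lim, lim], [-lim, lim]])
    return img / img.max()

img = constellation_image(iq)
print(img.shape, img.max())   # (64, 64), 1.0
```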

PMID:38699610 | PMC:PMC11061089 | DOI:10.1007/s11571-023-10015-7

Categories: Literature Watch

Optimizing feature subset for schizophrenia detection using multichannel EEG signals and rough set theory

Fri, 2024-05-03 06:00

Cogn Neurodyn. 2024 Apr;18(2):431-446. doi: 10.1007/s11571-023-10011-x. Epub 2024 Jan 8.

ABSTRACT

Schizophrenia (SZ) is a mental disorder that causes lifelong impairment characterized by delusions, cognitive deficits, and hallucinations. SZ diagnosis by visual assessment is time-consuming and complicated; brain states are revealed more effectively by electroencephalogram (EEG) signals, which are therefore widely used in SZ diagnosis. Existing deep learning methods for SZ detection are effective for classifying 2-dimensional images but require substantial computational resources, so dimensionality reduction is necessary for SZ diagnosis using EEG signals. To reduce the dimensionality of the data, an improved CAO (ICAO) dimensionality reduction method is proposed, which integrates horizontal and vertical crossover approaches with the Archimedes optimization algorithm (AOA). The optimal feature subset is obtained by satisfying the ICAO conditions, and a fitness function based on rough sets is evaluated for improved accuracy in feature selection. On this basis, a crossover-boosted AOA with rough sets for schizophrenia detection (CAORS-SD) is proposed using multichannel EEG signals from both SZ patients and normal controls. The signals are decomposed using multivariate empirical mode decomposition into multivariate intrinsic mode functions (MIMFs). Entropy metrics such as spectral entropy, permutation entropy, approximate entropy, sample entropy, and SVD entropy are evaluated on the MIMF domain to detect SZ. The processing time of the kernel support vector machine classifier is minimized with fewer features, reducing the risk of overfitting. Accuracy, sensitivity, specificity, precision, and F1-score of the CAORS-SD model were evaluated for SZ diagnosis. The proposed CAORS-SD method achieves accuracy, sensitivity, specificity, precision, and F1-score values of 96.34%, 98.95%, 96.86%, 98.52%, and 96.74%, respectively. In addition, the CAORS-SD method minimizes the error rate and significantly reduces the execution time.
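
Two of the entropy features named above can be sketched as follows on a single mode function; the order, delay, and window parameters are illustrative choices, and this is not the CAORS-SD implementation.

```python
# A minimal sketch of spectral entropy (from a Welch PSD) and permutation entropy,
# computed on a synthetic intrinsic mode function.
import numpy as np
from math import factorial
from scipy.signal import welch

def spectral_entropy(x, fs, nperseg=256):
    _, psd = welch(x, fs=fs, nperseg=nperseg)
    p = psd / psd.sum()
    return float(-np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p)))  # normalized to [0, 1]

def permutation_entropy(x, order=3, delay=1):
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(order)))  # normalized to [0, 1]

rng = np.random.default_rng(0)
imf = np.sin(2 * np.pi * 6 * np.arange(0, 4, 1 / 256)) + 0.3 * rng.normal(size=1024)
print(spectral_entropy(imf, fs=256), permutation_entropy(imf))
```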

PMID:38699607 | PMC:PMC11061098 | DOI:10.1007/s11571-023-10011-x

Categories: Literature Watch

Machine learning approaches in the prediction of positive axillary lymph nodes post neoadjuvant chemotherapy using MRI, CT, or ultrasound: A systematic review

Fri, 2024-05-03 06:00

Eur J Radiol Open. 2024 Apr 24;12:100561. doi: 10.1016/j.ejro.2024.100561. eCollection 2024 Jun.

ABSTRACT

BACKGROUND AND OBJECTIVE: Neoadjuvant chemotherapy is a standard treatment approach for locally advanced breast cancer. Conventional imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound, have been used for axillary lymph node evaluation which is crucial for treatment planning and prognostication. This systematic review aims to comprehensively examine the current research on applying machine learning algorithms for predicting positive axillary lymph nodes following neoadjuvant chemotherapy utilizing imaging modalities, including MRI, CT, and ultrasound.

METHODS: A systematic search was conducted across databases, including PubMed, Scopus, and Web of Science, to identify relevant studies published up to December 2023. Articles employing machine learning algorithms to predict positive axillary lymph nodes using MRI, CT, or ultrasound data after neoadjuvant chemotherapy were included. The review follows the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines, encompassing data extraction and quality assessment.

RESULTS: Seven studies were included, comprising 1502 patients. Four studies used MRI, two used CT, and one applied ultrasound. Two studies developed deep-learning models, while five used classic machine-learning models mainly based on multiple regression. Across the studies, the models showed high predictive accuracy, with the best-performing models combining radiomics and clinical data.

CONCLUSION: This systematic review demonstrated the potential of utilizing advanced data analysis techniques, such as deep learning radiomics, in improving the prediction of positive axillary lymph nodes in breast cancer patients following neoadjuvant chemotherapy.

PMID:38699592 | PMC:PMC11063585 | DOI:10.1016/j.ejro.2024.100561

Categories: Literature Watch

Pine wilt disease detection algorithm based on improved YOLOv5

Fri, 2024-05-03 06:00

Front Plant Sci. 2024 Apr 18;15:1302361. doi: 10.3389/fpls.2024.1302361. eCollection 2024.

ABSTRACT

Pine wilt disease (PWD) poses a significant threat to forests due to its high infectivity and lethality. The absence of an effective treatment underscores the importance of timely detection and isolation of infected trees for effective prevention and control. While deep learning techniques combined with unmanned aerial vehicle (UAV) remote sensing images offer promise for accurate identification of diseased pine trees in their natural environments, they often demand extensive prior professional knowledge and struggle with efficiency. This paper proposes a detection model, YOLOv5L-s-SimAM-ASFF, which achieves remarkable precision, maintains a lightweight structure, and facilitates real-time detection of diseased pine trees in UAV RGB images under natural conditions. This is achieved through the integration of the ShuffleNetV2 network, a simple parameter-free attention module known as SimAM, and adaptively spatial feature fusion (ASFF). The model boasts a mean average precision (mAP) of 95.64% and a recall rate of 91.28% in detecting pine wilt diseased trees, while operating at an impressive 95.70 frames per second (FPS). Furthermore, it significantly reduces model size and parameter count compared to the original YOLOv5-Lite. These findings indicate that the proposed YOLOv5L-s-SimAM-ASFF model is well suited for real-time, high-accuracy, and lightweight detection of PWD-infected trees. This capability is crucial for precise localization and quantification of infected trees, thereby providing valuable guidance for effective management and eradication efforts.

PMID:38699534 | PMC:PMC11063304 | DOI:10.3389/fpls.2024.1302361

Categories: Literature Watch

Out-of-Distribution Detection Algorithms for Robust Insect Classification

Fri, 2024-05-03 06:00

Plant Phenomics. 2024 Apr 30;6:0170. doi: 10.34133/plantphenomics.0170. eCollection 2024.

ABSTRACT

Plants encounter a variety of beneficial and harmful insects during their growth cycle. Accurate identification (i.e., detecting insects' presence) and classification (i.e., determining the type or class) of these insect species is critical for implementing prompt and suitable mitigation strategies. Such timely actions carry substantial economic and environmental implications. Deep learning-based approaches have produced models with good insect classification accuracy. Researchers aiming to deploy identification and classification models in agriculture face challenges when input images markedly deviate from the training distribution (e.g., images of vehicles or humans, blurred images, or insect classes not included in training). Out-of-distribution (OOD) detection algorithms provide an exciting avenue to overcome these challenges, as they ensure that a model abstains from making incorrect classification predictions on images that belong to non-insect and/or untrained insect classes. As far as we know, no prior in-depth exploration has been conducted on the role of OOD detection algorithms in addressing agricultural issues. Here, we generate and evaluate the performance of state-of-the-art OOD algorithms on insect detection classifiers. These algorithms represent a diversity of methods for addressing an OOD problem. Specifically, we focus on extrusive algorithms, i.e., algorithms that wrap around a well-trained classifier without the need for additional co-training. We compared three OOD detection algorithms: (a) maximum softmax probability, which uses the softmax value as a confidence score; (b) a Mahalanobis distance (MAH)-based algorithm, which uses a generative classification approach; and (c) an energy-based algorithm, which maps the input data to a scalar value, called energy. We performed an extensive series of evaluations of these OOD algorithms across three performance axes: (a) base model accuracy: how does the accuracy of the classifier impact OOD performance? (b) domain dissimilarity: how does the level of dissimilarity to the training domain impact OOD performance? and (c) data imbalance: how sensitive is OOD performance to imbalance in per-class sample size? Evaluating OOD algorithms across these performance axes provides practical guidelines to ensure the robust performance of well-trained models in the wild, which is a key consideration for agricultural applications. Based on this analysis, we propose the most effective OOD algorithm as a wrapper for the insect classifier with the highest accuracy and report its OOD detection performance. Our results indicate that OOD detection algorithms can significantly enhance user trust in insect pest classification by abstaining from classification under uncertain conditions.
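
The three scores compared above can be sketched as follows; the NumPy implementation, the tied-covariance Mahalanobis variant, and the toy data are illustrative choices rather than the authors' exact code, and thresholds for abstention are left to the user.

```python
# A minimal sketch of three extrusive OOD scores: maximum softmax probability (MSP),
# Mahalanobis distance to class-conditional Gaussians fit on in-distribution features,
# and the energy score. Higher score = more in-distribution for all three.
import numpy as np

def msp_score(logits):
    """logits: (n, n_classes). Returns the maximum softmax probability per sample."""
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def energy_score(logits, T=1.0):
    """Negative free energy: T * logsumexp(logits / T), computed stably."""
    z = logits / T
    m = z.max(axis=1, keepdims=True)
    return T * (m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1)))

def fit_mahalanobis(features, labels):
    """Class means and shared (tied) precision matrix from in-distribution features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_score(features, means, prec):
    """Negative minimum squared Mahalanobis distance to any class mean."""
    d = [np.einsum('nd,dk,nk->n', features - mu, prec, features - mu)
         for mu in means.values()]
    return -np.min(np.stack(d, axis=1), axis=1)

# Toy usage with random in-distribution statistics and test logits/features.
rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(200, 16)), rng.integers(0, 5, 200)
means, prec = fit_mahalanobis(feats, labels)
test_logits, test_feats = rng.normal(size=(4, 5)), rng.normal(size=(4, 16))
print(msp_score(test_logits), energy_score(test_logits), mahalanobis_score(test_feats, means, prec))
```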

PMID:38699404 | PMC:PMC11065417 | DOI:10.34133/plantphenomics.0170

Categories: Literature Watch

Contribution of de novo retroelements to birth defects and childhood cancers

Fri, 2024-05-03 06:00

medRxiv [Preprint]. 2024 Apr 16:2024.04.15.24305733. doi: 10.1101/2024.04.15.24305733.

ABSTRACT

Insertion of active retroelements (L1s, Alus, and SVAs) can disrupt proper genome function and lead to various disorders, including cancer. However, the role of de novo retroelements (DNRTs) in birth defects and childhood cancers has not been well characterized due to the lack of adequate data and efficient computational tools. Here, we examine whole-genome sequencing data of 3,244 trios from 12 birth defect and childhood cancer cohorts in the Gabriella Miller Kids First Pediatric Research Program. Using an improved version of our tool xTea (x-Transposable element analyzer) that incorporates a deep-learning module, we identified 162 DNRTs, as well as 2 pseudogene insertions. Several variants are likely to be causal, such as a de novo Alu insertion that led to the ablation of a whole exon of the NF1 gene in a proband with a brain tumor. We observe a high de novo SVA insertion burden in loss-of-function-intolerant genes and exons, as well as more frequent de novo Alu insertions of paternal origin. We also identify potential mosaic DNRTs arising from embryonic stages. Our study reveals the important roles of DNRTs in causing birth defects and predisposition to childhood cancers.

PMID:38699361 | PMC:PMC11065029 | DOI:10.1101/2024.04.15.24305733

Categories: Literature Watch

Using Deep learning to Predict Cardiovascular Magnetic Resonance Findings from Echocardiography Videos

Fri, 2024-05-03 06:00

medRxiv [Preprint]. 2024 Apr 19:2024.04.16.24305936. doi: 10.1101/2024.04.16.24305936.

ABSTRACT

BACKGROUND: Echocardiography is the most common modality for assessing cardiac structure and function. While cardiac magnetic resonance (CMR) imaging is less accessible, CMR can provide unique tissue characterization including late gadolinium enhancement (LGE), T1 and T2 mapping, and extracellular volume (ECV) which are associated with tissue fibrosis, infiltration, and inflammation. While deep learning has been shown to uncover findings not recognized by clinicians, it is unknown whether CMR-based tissue characteristics can be derived from echocardiography videos using deep learning. We hypothesized that deep learning applied to echocardiography could predict CMR-based measurements.

METHODS: In a retrospective single-center study, adult patients with CMRs and echocardiography studies within 30 days were included. A video-based convolutional neural network was trained on echocardiography videos to predict CMR-derived labels including wall motion abnormality (WMA) presence, LGE presence, and abnormal T1, T2 or ECV across echocardiography views. The model performance was evaluated in a held-out test dataset not used for training.

RESULTS: The study population included 1,453 adult patients (mean age 56±18 years, 42% female) with 2,556 paired echocardiography studies occurring on average 2 days after CMR (interquartile range 2 days prior to 6 days after). The model had high predictive capability for the presence of WMA (AUC 0.873 [95% CI 0.816-0.922]); however, it was unable to reliably detect the presence of LGE (AUC 0.699 [0.613-0.780]) or abnormal native T1 (AUC 0.614 [0.500-0.715]), T2 (AUC 0.553 [0.420-0.692]), or ECV (AUC 0.564 [0.455-0.691]).

CONCLUSIONS: Deep learning applied to echocardiography accurately identified CMR-based WMA, but was unable to predict tissue characteristics, suggesting that signal for these tissue characteristics may not be present within ultrasound videos, and that the use of CMR for tissue characterization remains essential within cardiology.

CLINICAL PERSPECTIVE: Tissue characterization of the heart muscle is useful for clinical diagnosis and prognosis by identifying myocardial fibrosis, inflammation, and infiltration, and can be measured using cardiac MRI. While echocardiography is highly accessible and provides excellent functional information, its ability to provide tissue characterization information is limited at this time. Our study using a deep learning approach to predict cardiac MRI-based tissue characteristics from echocardiography showed limited ability to do so, suggesting that alternative approaches, including non-deep learning methods should be considered in future research.

PMID:38699330 | PMC:PMC11065018 | DOI:10.1101/2024.04.16.24305936

Categories: Literature Watch

SCIseg: Automatic Segmentation of T2-weighted Intramedullary Lesions in Spinal Cord Injury

Fri, 2024-05-03 06:00

medRxiv [Preprint]. 2024 Apr 21:2024.01.03.24300794. doi: 10.1101/2024.01.03.24300794.

ABSTRACT

PURPOSE: To develop a deep learning tool for the automatic segmentation of T2-weighted intramedullary lesions in spinal cord injury (SCI).

MATERIAL AND METHODS: This retrospective study included a cohort of SCI patients from three sites enrolled between July 2002 and February 2023. A deep learning model, SCIseg, was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The data consisted of T2-weighted MRI acquired using different scanner manufacturers, with heterogeneous image resolutions (isotropic/anisotropic), orientations (axial/sagittal), lesion etiologies (traumatic/ischemic/hemorrhagic), and lesions spread across the cervical, thoracic, and lumbar spine. The segmentations from the proposed model were visually and quantitatively compared with other open-source baselines. The Wilcoxon signed-rank test was used to compare quantitative MRI biomarkers (lesion volume, lesion length, and maximal axial damage ratio) computed from manual lesion masks and those obtained automatically from SCIseg predictions.
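
The three quantitative biomarkers mentioned (lesion volume, lesion length, and maximal axial damage ratio) can be derived from binary lesion and cord masks as in the sketch below; the axis convention, voxel spacing, and the per-slice area definition of the damage ratio are assumptions for illustration, not SCIseg internals.

```python
# A minimal sketch, assuming 3D binary masks with the superior-inferior direction on
# the last axis and isotropic voxel spacing.
import numpy as np

def lesion_metrics(lesion, cord, spacing_mm=(0.8, 0.8, 0.8), si_axis=2):
    """lesion, cord: 3D binary masks; spacing_mm: voxel size; si_axis: superior-inferior axis."""
    in_plane = tuple(a for a in range(3) if a != si_axis)
    voxel_mm3 = float(np.prod(spacing_mm))
    volume = lesion.sum() * voxel_mm3
    slices_with_lesion = np.where(lesion.any(axis=in_plane))[0]
    length = (np.ptp(slices_with_lesion) + 1) * spacing_mm[si_axis] if slices_with_lesion.size else 0.0
    lesion_area = lesion.sum(axis=in_plane)                 # lesion voxels per axial slice
    cord_area = cord.sum(axis=in_plane)                     # cord voxels per axial slice
    ratio = np.where(cord_area > 0, lesion_area / np.maximum(cord_area, 1), 0.0)
    return {"volume_mm3": float(volume), "length_mm": float(length),
            "max_axial_damage_ratio": float(ratio.max())}

# Toy usage: a synthetic cord with a lesion occupying part of a few slices.
cord = np.zeros((32, 32, 40), dtype=bool); cord[12:20, 12:20, :] = True
lesion = np.zeros_like(cord); lesion[14:18, 14:18, 18:24] = True
print(lesion_metrics(lesion, cord))
```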

RESULTS: MRI data from 191 SCI patients (mean age, 48.1 years ± 17.9 [SD]; 142 males) were used for model training and evaluation. SCIseg achieved the best segmentation performance for both the cord and lesions. There was no statistically significant difference between lesion length and maximal axial damage ratio computed from manually annotated lesions and those obtained using SCIseg.

CONCLUSION: Automatic segmentation of intramedullary lesions commonly seen in SCI replaces the tedious manual annotation process and enables the extraction of relevant lesion morphometrics in large cohorts. The proposed model segments lesions across different etiologies, scanner manufacturers, and heterogeneous image resolutions. SCIseg is open-source and accessible through the Spinal Cord Toolbox.

SUMMARY: Automatic segmentation of the spinal cord and T2-weighted lesions in spinal cord injury on MRI scans across different treatment strategies, lesion etiologies, sites, scanner manufacturers, and heterogeneous image resolutions.

KEY RESULTS: An open-source, automatic method, SCIseg, was trained and evaluated on a dataset of 191 spinal cord injury patients from three sites for the segmentation of the spinal cord and T2-weighted lesions. SCIseg generalizes across traumatic and non-traumatic lesions, scanner manufacturers, and heterogeneous image resolutions, enabling the automatic extraction of lesion morphometrics in large multi-site cohorts. Quantitative MRI biomarkers, namely lesion length and maximal axial damage ratio derived from the automatic predictions, showed no statistically significant difference when compared with manual ground truth, implying reliability in SCIseg's predictions.

PMID:38699309 | PMC:PMC11065035 | DOI:10.1101/2024.01.03.24300794

Categories: Literature Watch

Identifying autism spectrum disorder from multi-modal data with privacy-preserving

Thu, 2024-05-02 06:00

Npj Ment Health Res. 2024 May 2;3(1):15. doi: 10.1038/s44184-023-00050-x.

ABSTRACT

The application of deep learning models to precision medical diagnosis often requires the aggregation of large amounts of medical data to effectively train high-quality models. However, data privacy protection mechanisms make it difficult to collect medical data from different medical institutions. In autism spectrum disorder (ASD) diagnosis, automatic diagnosis using multimodal information from heterogeneous data has not yet achieved satisfactory performance. To address the privacy-preservation issue as well as to improve ASD diagnosis, we propose a deep learning framework using multimodal feature fusion and hypergraph neural networks for disease prediction in federated learning (FedHNN). By introducing the federated learning strategy, each local model is trained and computed independently in a distributed manner without data sharing, allowing rapid scaling of medical datasets to achieve robust and scalable deep learning predictive models. To further improve performance while preserving privacy, we improve the hypergraph model for multimodal fusion to make it suitable for ASD diagnosis tasks by capturing the complementarity and correlation between modalities through a hypergraph fusion strategy. The results demonstrate that our proposed federated learning-based prediction model is superior to all local models and outperforms other deep learning models. Overall, the proposed FedHNN performs well in using multi-site data to improve ASD identification.
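
The distributed-training idea can be illustrated with plain federated averaging, as sketched below on a toy logistic-regression "model"; the hypergraph fusion network itself is not reproduced here, and weighting clients by sample count is an illustrative choice rather than the paper's aggregation rule.

```python
# A minimal sketch of one federated-averaging setup: each site trains locally on its
# own data and only model weights are shared and averaged.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """Toy local training: logistic-regression gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-data @ w))
        w -= lr * data.T @ (p - labels) / len(labels)
    return w

def fedavg(global_w, clients):
    """One communication round: average client updates weighted by local sample count."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(n, 8)), rng.integers(0, 2, n)) for n in (50, 80, 30)]  # 3 sites
w = np.zeros(8)
for _ in range(10):
    w = fedavg(w, clients)
print(w)
```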

PMID:38698164 | DOI:10.1038/s44184-023-00050-x

Categories: Literature Watch

An automated approach for real-time informative frames classification in laryngeal endoscopy using deep learning

Thu, 2024-05-02 06:00

Eur Arch Otorhinolaryngol. 2024 May 2. doi: 10.1007/s00405-024-08676-z. Online ahead of print.

ABSTRACT

PURPOSE: Informative image selection in laryngoscopy has the potential to improve automatic data extraction alone, for selective data storage and a faster review process, or in combination with other artificial intelligence (AI) detection or diagnosis models. This paper aims to demonstrate the feasibility of AI in providing automatic informative laryngoscopy frame selection that is also capable of working in real time, providing visual feedback to guide the otolaryngologist during the examination.

METHODS: Several deep learning models were trained and tested on an internal dataset (n = 5147 images) and then tested on an external test set (n = 646 images) composed of both white light and narrow band images. Four videos were used to assess the real-time performance of the best-performing model.

RESULTS: ResNet-50, pre-trained with the pretext strategy, reached a precision of 95% vs. 97%, a recall of 97% vs. 89%, and an F1-score of 96% vs. 93% on the internal and external test sets, respectively (p = 0.062). The four testing videos are provided in the supplemental materials.

CONCLUSION: The deep learning model demonstrated excellent performance in identifying diagnostically relevant frames within laryngoscopic videos. With its solid accuracy and real-time capabilities, the system is promising for its development in a clinical setting, either autonomously for objective quality control or in conjunction with other algorithms within a comprehensive AI toolset aimed at enhancing tumor detection and diagnosis.

PMID:38698163 | DOI:10.1007/s00405-024-08676-z

Categories: Literature Watch

Constructing personalized characterizations of structural brain aberrations in patients with dementia using explainable artificial intelligence

Thu, 2024-05-02 06:00

NPJ Digit Med. 2024 May 2;7(1):110. doi: 10.1038/s41746-024-01123-7.

ABSTRACT

Deep learning approaches for clinical predictions based on magnetic resonance imaging data have shown great promise as a translational technology for diagnosis and prognosis in neurological disorders, but its clinical impact has been limited. This is partially attributed to the opaqueness of deep learning models, causing insufficient understanding of what underlies their decisions. To overcome this, we trained convolutional neural networks on structural brain scans to differentiate dementia patients from healthy controls, and applied layerwise relevance propagation to procure individual-level explanations of the model predictions. Through extensive validations we demonstrate that deviations recognized by the model corroborate existing knowledge of structural brain aberrations in dementia. By employing the explainable dementia classifier in a longitudinal dataset of patients with mild cognitive impairment, we show that the spatially rich explanations complement the model prediction when forecasting transition to dementia and help characterize the biological manifestation of disease in the individual brain. Overall, our work exemplifies the clinical potential of explainable artificial intelligence in precision medicine.

PMID:38698139 | DOI:10.1038/s41746-024-01123-7

Categories: Literature Watch

A novel transformer-based DL model enhanced by position-sensitive attention and gated hierarchical LSTM for aero-engine RUL prediction

Thu, 2024-05-02 06:00

Sci Rep. 2024 May 2;14(1):10061. doi: 10.1038/s41598-024-59095-3.

ABSTRACT

Accurate prediction of remaining useful life (RUL) for aircraft engines is essential for proactive maintenance and safety assurance. However, existing methods such as physics-based models, classical recurrent neural networks, and convolutional neural networks face limitations in capturing long-term dependencies and modeling complex degradation patterns. In this study, we propose a novel deep-learning model based on the Transformer architecture to address these limitations. Specifically, to address the issue of insensitivity to local context in the attention mechanism employed by the Transformer encoder, we introduce a position-sensitive self-attention (PSA) unit to enhance the model's ability to incorporate local context by attending to the positional relationships of the input data at each time step. Additionally, a gated hierarchical long short-term memory network (GHLSTM) is designed to perform regression prediction at different time scales on the latent features, thereby improving the accuracy of RUL estimation for mechanical equipment. Experiments on the C-MAPSS dataset demonstrate that the proposed model outperforms existing methods in RUL prediction, showcasing its effectiveness in modeling complex degradation patterns and long-term dependencies.
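
One generic way to make self-attention position-sensitive is to add a learnable relative-position bias to the attention scores, as in the PyTorch sketch below; this is an illustration of the general idea, not the paper's PSA unit, and the GHLSTM branch is omitted.

```python
# A minimal sketch (assumed single-head attention, bias indexed by relative offset).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionBiasedSelfAttention(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learnable bias per relative offset in [-(max_len - 1), max_len - 1].
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_len - 1))
        self.max_len = max_len

    def forward(self, x):                                   # x: (batch, seq_len, d_model)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5         # (b, t, t)
        idx = torch.arange(t, device=x.device)
        rel = idx[None, :] - idx[:, None] + self.max_len - 1  # relative offsets -> indices
        scores = scores + self.rel_bias[rel]                 # position-dependent bias
        attn = F.softmax(scores, dim=-1)
        return self.out(attn @ v)

# Usage on a dummy batch of sensor windows already projected to d_model = 64.
layer = PositionBiasedSelfAttention(d_model=64)
y = layer(torch.randn(8, 30, 64))
print(y.shape)  # torch.Size([8, 30, 64])
```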

PMID:38698017 | DOI:10.1038/s41598-024-59095-3

Categories: Literature Watch

Comparison between R2'-based and R2*-based χ-separation methods: A clinical evaluation in individuals with multiple sclerosis

Thu, 2024-05-02 06:00

NMR Biomed. 2024 May 2:e5167. doi: 10.1002/nbm.5167. Online ahead of print.

ABSTRACT

Susceptibility source separation, or χ-separation, estimates diamagnetic (χdia) and paramagnetic (χpara) susceptibility signals in the brain using local field and R2' (= R2* - R2) maps. Recently proposed R2*-based χ-separation methods allow for χ-separation using only multi-echo gradient echo (ME-GRE) data, eliminating the need for additional data acquisition for R2 mapping. Although this approach reduces scan time and enhances clinical utility, the impact of the missing R2 information remains a subject of exploration. In this study, we evaluate the viability of two previously proposed R2*-based χ-separation methods as alternatives to their R2'-based counterparts: model-based R2*-χ-separation versus χ-separation, and deep learning-based χ-sepnet-R2* versus χ-sepnet-R2'. Their performances are assessed in individuals with multiple sclerosis (MS), comparing each method with its corresponding R2'-based counterpart. The evaluations encompass qualitative visual assessments by experienced neuroradiologists and quantitative analyses, including region-of-interest analyses and linear regression analyses. Qualitatively, R2*-χ-separation tends to report higher χpara and χdia values than χ-separation, leading to less distinct lesion contrasts, while χ-sepnet-R2* closely aligns with χ-sepnet-R2'. Quantitative analysis reveals a robust correlation between both R2*-based methods and their R2'-based counterparts (r ≥ 0.88). Specifically, in whole-brain voxels, χ-sepnet-R2* exhibits higher correlation and better linearity than R2*-χ-separation (χdia/χpara from R2*-χ-separation: r = 0.88/0.90, slope = 0.79/0.86; χdia/χpara from χ-sepnet-R2*: r = 0.90/0.92, slope = 0.99/0.97). In MS lesions, both R2*-based methods display comparable correlation and linearity (χdia/χpara from R2*-χ-separation: r = 0.90/0.91, slope = 0.98/0.91; χdia/χpara from χ-sepnet-R2*: r = 0.88/0.88, slope = 0.91/0.95). Notably, χ-sepnet-R2* demonstrates negligible offsets, whereas R2*-χ-separation exhibits relatively large offsets (0.02 ppm in the whole brain and 0.01 ppm in the MS lesions), potentially indicating the false presence of myelin or iron in MS lesions. Overall, both R2*-based χ-separation methods demonstrated their viability as alternatives to their R2'-based counterparts. χ-sepnet-R2* showed better alignment with its R2'-based counterpart, with minimal susceptibility offsets, whereas R2*-χ-separation reported higher χpara and χdia values than R2'-based χ-separation.
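
The R2' relation used above, R2' = R2* - R2, and the regression statistics (r, slope, offset) used to compare maps can be sketched as follows; the synthetic maps and the non-negativity clip are illustrative assumptions, not the evaluated pipelines.

```python
# A minimal sketch: build an R2' map from R2* and R2 maps, then compare two surrogate
# susceptibility maps voxel-wise with linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r2star = rng.uniform(15, 40, size=(64, 64))      # 1/s, e.g., from multi-echo GRE fitting
r2 = rng.uniform(8, 20, size=(64, 64))           # 1/s, e.g., from spin-echo mapping
r2prime = np.clip(r2star - r2, 0, None)          # R2' = R2* - R2, clipped at zero

# Compare a surrogate "R2*-based" map against an "R2'-based" reference map.
chi_ref = 0.01 * r2prime + rng.normal(0, 0.005, r2prime.shape)
chi_alt = 0.99 * chi_ref + 0.002 + rng.normal(0, 0.003, chi_ref.shape)
slope, offset, r, *_ = stats.linregress(chi_ref.ravel(), chi_alt.ravel())
print(f"R2' mean = {r2prime.mean():.1f} 1/s, r = {r:.2f}, slope = {slope:.2f}, offset = {offset:.3f} ppm")
```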

PMID:38697612 | DOI:10.1002/nbm.5167

Categories: Literature Watch

ECG-surv: A deep learning-based model to predict time to 1-year mortality from 12-lead electrocardiogram

Thu, 2024-05-02 06:00

Biomed J. 2024 Apr 30:100732. doi: 10.1016/j.bj.2024.100732. Online ahead of print.

ABSTRACT

BACKGROUND: Electrocardiogram (ECG) abnormalities have demonstrated potential as prognostic indicators of patient survival. However, the traditional statistical approach is constrained by structured data input, limiting its ability to fully leverage the predictive value of ECG data in prognostic modeling.

METHODS: This study aims to introduce and evaluate a deep-learning model to simultaneously handle censored data and unstructured ECG data for survival analysis. We herein introduce a novel deep neural network called ECG-surv, which includes a feature extraction neural network and a time-to-event analysis neural network. The proposed model is specifically designed to predict the time to 1-year mortality by extracting and analyzing unique features from 12-lead ECG data. ECG-surv was evaluated using both an independent test set and an external set, which were collected using different ECG devices.

RESULTS: The performance of ECG-surv surpassed that of the Cox proportional hazards model, which included demographics and ECG waveform parameters, in predicting 1-year all-cause mortality, with a significantly higher concordance index (C-index) in ECG-surv than in the Cox model using both the independent test set (0.860 [95% CI: 0.859-0.861] vs. 0.796 [95% CI: 0.791-0.800]) and the external test set (0.813 [95% CI: 0.807-0.814] vs. 0.764 [95% CI: 0.755-0.770]). ECG-surv also demonstrated exceptional predictive ability for cardiovascular death (C-index of 0.891 [95% CI: 0.890-0.893]), outperforming the Framingham risk Cox model (C-index of 0.734 [95% CI: 0.715-0.752]).
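
The concordance index used above can be computed with a plain Harrell-style pairwise implementation, as in this sketch on toy data; confidence intervals (e.g., by bootstrap) are omitted.

```python
# A minimal sketch of Harrell's C-index: higher predicted risk should correspond to
# shorter observed survival, with pairs anchored on observed events.
import numpy as np

def concordance_index(time, event, risk):
    """time: follow-up times; event: 1 if death observed, 0 if censored; risk: predicted risk."""
    concordant, permissible = 0.0, 0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue                         # pairs must be anchored on an observed event
        for j in range(n):
            if time[j] > time[i]:            # subject j survived longer than subject i
                permissible += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / permissible

time = np.array([5, 12, 9, 20, 3, 15])
event = np.array([1, 0, 1, 0, 1, 1])
risk = np.array([0.9, 0.2, 0.6, 0.1, 0.95, 0.4])
print(concordance_index(time, event, risk))   # 1.0 for this perfectly ranked toy example
```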

CONCLUSION: ECG-surv effectively utilized unstructured ECG data in a survival analysis. It outperformed traditional statistical approaches in predicting 1-year all-cause mortality and cardiovascular death, which makes it a valuable tool for predicting patient survival.

PMID:38697480 | DOI:10.1016/j.bj.2024.100732

Categories: Literature Watch
