Deep learning

Detection of Sleep Apnea Using Wearable AI: Systematic Review and Meta-Analysis

Tue, 2024-09-10 06:00

J Med Internet Res. 2024 Sep 10;26:e58187. doi: 10.2196/58187.

ABSTRACT

BACKGROUND: Early detection of sleep apnea, the health condition in which airflow episodically ceases or decreases during sleep, is crucial for initiating timely interventions and avoiding complications. Wearable artificial intelligence (AI), the integration of AI algorithms into wearable devices to collect and analyze data to offer various functionalities and insights, can efficiently detect sleep apnea due to its convenience, accessibility, affordability, objectivity, and real-time monitoring capabilities, thereby addressing the limitations of traditional approaches such as polysomnography.

OBJECTIVE: The objective of this systematic review was to examine the effectiveness of wearable AI in detecting sleep apnea, its type, and its severity.

METHODS: Our search was conducted in 6 electronic databases. This review included English research articles evaluating wearable AI's performance in identifying sleep apnea, distinguishing its type, and gauging its severity. Two researchers independently conducted study selection, extracted data, and assessed the risk of bias using an adapted Quality Assessment of Studies of Diagnostic Accuracy-Revised tool. We used both narrative and statistical techniques for evidence synthesis.

RESULTS: Among 615 studies, 38 (6.2%) met the eligibility criteria for this review. The pooled mean accuracy, sensitivity, and specificity of wearable AI in detecting apnea events in respiration (apnea and nonapnea events) were 0.893, 0.793, and 0.947, respectively. The pooled mean accuracy of wearable AI in differentiating types of apnea events in respiration (normal, obstructive sleep apnea, central sleep apnea, mixed apnea, and hypopnea) was 0.815. The pooled mean accuracy, sensitivity, and specificity of wearable AI in detecting sleep apnea were 0.869, 0.938, and 0.752, respectively. The pooled mean accuracy of wearable AI in identifying the severity level of sleep apnea (normal, mild, moderate, and severe) and estimating the severity score (Apnea-Hypopnea Index) was 0.651 and 0.877, respectively. Subgroup analyses found different moderators of wearable AI performance for different outcomes, such as the type of algorithm, type of data, type of sleep apnea, and placement of wearable devices.
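
For readers who want to see how pooled means like those above are obtained, the sketch below computes a simple sample-size-weighted pooled accuracy across studies. This is illustrative only: the study list and weighting scheme are hypothetical, and the review's actual meta-analytic model (e.g., random-effects pooling) may differ.

```python
# Illustrative sketch (not the authors' code): pooling per-study accuracy
# with a sample-size-weighted mean; the review's actual meta-analysis may
# use a different (e.g., random-effects) model.
import numpy as np

# hypothetical per-study results: (accuracy, number of participants)
studies = [(0.91, 120), (0.87, 60), (0.90, 200)]

acc = np.array([a for a, _ in studies])
n = np.array([m for _, m in studies])

pooled_mean = np.average(acc, weights=n)   # weighted pooled mean accuracy
print(f"Pooled mean accuracy: {pooled_mean:.3f}")
```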

CONCLUSIONS: Wearable AI shows potential in identifying and classifying sleep apnea, but its current performance is suboptimal for routine clinical use. We recommend concurrent use with traditional assessments until improved evidence supports its reliability. Certified commercial wearables are needed for effectively detecting sleep apnea, predicting its occurrence, and delivering proactive interventions. Researchers should conduct further studies on detecting central sleep apnea, prioritize deep learning algorithms, incorporate self-reported and nonwearable data, evaluate performance across different device placements, and provide detailed findings for effective meta-analyses.

PMID:39255014 | DOI:10.2196/58187

Categories: Literature Watch

Efficacy of compressed sensing and deep learning reconstruction for adult female pelvic MRI at 1.5 T

Tue, 2024-09-10 06:00

Eur Radiol Exp. 2024 Sep 10;8(1):103. doi: 10.1186/s41747-024-00506-5.

ABSTRACT

BACKGROUND: We aimed to compare the capabilities of compressed sensing (CS) and deep learning reconstruction (DLR) with those of conventional parallel imaging (PI) for improving image quality while reducing examination time in female pelvic 1.5-T magnetic resonance imaging (MRI).

METHODS: Fifty-two consecutive female patients with various pelvic diseases underwent MRI with T1- and T2-weighted sequences using CS and PI. All CS data were reconstructed with and without DLR. Signal-to-noise ratio (SNR) of muscle and contrast-to-noise ratio (CNR) between fat tissue and iliac muscle on T1-weighted images (T1WI) and between myometrium and straight muscle on T2-weighted images (T2WI) were determined through region-of-interest measurements. Overall image quality (OIQ) and diagnostic confidence level (DCL) were evaluated on 5-point scales. SNRs and CNRs were compared using Tukey's test, and qualitative indexes using the Wilcoxon signed-rank test.
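
For readers unfamiliar with the ROI-based metrics, the minimal sketch below shows one common way SNR and CNR are computed from region-of-interest statistics. The exact definitions used in the study are not given in the abstract, and all pixel values here are synthetic.

```python
# Minimal sketch of common ROI-based SNR/CNR definitions; the study's exact
# formulas are not given in the abstract, so these are illustrative.
import numpy as np

def snr(roi_signal, roi_noise):
    """SNR = mean signal in tissue ROI / SD of background-noise ROI."""
    return roi_signal.mean() / roi_noise.std()

def cnr(roi_a, roi_b, roi_noise):
    """CNR = |mean(A) - mean(B)| / SD of background-noise ROI."""
    return abs(roi_a.mean() - roi_b.mean()) / roi_noise.std()

# hypothetical pixel values drawn from three ROIs
fat = np.random.normal(800, 30, 500)
muscle = np.random.normal(300, 30, 500)
noise = np.random.normal(0, 15, 500)

print(f"SNR(muscle) = {snr(muscle, noise):.1f}, "
      f"CNR(fat vs muscle) = {cnr(fat, muscle, noise):.1f}")
```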

RESULTS: SNRs of T1WI and T2WI obtained using CS with DLR were higher than those obtained using CS without DLR or conventional PI (p < 0.010). CNRs of T1WI and T2WI obtained using CS with DLR were higher than those obtained using CS without DLR or conventional PI (p < 0.003). OIQ scores of T1WI and T2WI obtained using CS with DLR were higher than those obtained using CS without DLR or conventional PI (p < 0.001). DCL of T2WI obtained using CS with DLR was higher than that obtained using conventional PI or CS without DLR (p < 0.001).

CONCLUSION: CS with DLR provided better image quality and shorter examination time than those obtainable with PI for female pelvic 1.5-T MRI.

RELEVANCE STATEMENT: CS with DLR can be considered effective for attaining better image quality and shorter examination time for female pelvic MRI at 1.5 T compared with those obtainable with PI.

KEY POINTS: Patients underwent MRI with T1- and T2-weighted sequences using CS and PI. All CS data were reconstructed with and without DLR. CS with DLR allowed for examination times significantly shorter than those of PI and provided significantly higher SNRs and CNRs, as well as higher OIQ.

PMID:39254920 | DOI:10.1186/s41747-024-00506-5

Categories: Literature Watch

Applying deep learning-based ensemble model to [(18)F]-FDG-PET-radiomic features for differentiating benign from malignant parotid gland diseases

Tue, 2024-09-10 06:00

Jpn J Radiol. 2024 Sep 10. doi: 10.1007/s11604-024-01649-6. Online ahead of print.

ABSTRACT

OBJECTIVES: To develop and identify machine learning (ML) models using pretreatment 2-deoxy-2-[18F]fluoro-D-glucose ([18F]-FDG)-positron emission tomography (PET)-based radiomic features to differentiate benign from malignant parotid gland diseases (PGDs).

MATERIALS AND METHODS: This retrospective study included 62 patients with 63 PGDs who underwent pretreatment [18F]-FDG-PET/computed tomography (CT). The lesions were assigned to the training (n = 44) and testing (n = 19) cohorts. In total, 49 [18F]-FDG-PET-based radiomic features were utilized to differentiate benign from malignant PGDs using five conventional ML models (random forest, neural network, k-nearest neighbors, logistic regression, and support vector machine) and a deep learning (DL)-based ensemble ML model. In the training cohort, each conventional ML model was constructed using the five most important features selected by the recursive feature elimination method with tenfold cross-validation and the synthetic minority oversampling technique. The DL-based ensemble ML model was constructed from the five most important features using bagging and multilayer stacking methods. Areas under the receiver operating characteristic curve (AUCs) and accuracies were used to compare predictive performances.
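
A rough sketch of that kind of feature-selection pipeline (SMOTE oversampling, recursive feature elimination with 10-fold cross-validation, random forest) using scikit-learn and imbalanced-learn is shown below. The feature matrix, hyperparameters, and the RFECV variant used here are placeholders, not the authors' configuration.

```python
# Sketch of the kind of pipeline described (SMOTE oversampling, recursive
# feature elimination with 10-fold CV, random forest); feature values and
# hyperparameters are placeholders, not the authors' settings.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import RFECV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(44, 49))          # 44 training lesions, 49 radiomic features
y = rng.integers(0, 2, size=44)        # benign (0) vs malignant (1)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    min_features_to_select=5,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    scoring="roc_auc",
)
selector.fit(X_res, y_res)
print("Selected feature indices:", np.where(selector.support_)[0])
```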

RESULTS: In total, 24 benign and 39 malignant PGDs were identified. Metabolic tumor volume and four gray-level size zone matrix (GLSZM) features (GLSZM_ZSE, GLSZM_SZE, GLSZM_GLNU, and GLSZM_ZSNU) were the five most important radiomic features. All five features except GLSZM_SZE were significantly higher in malignant PGDs than in benign ones (each p < 0.05). The DL-based ensemble ML model was the best-performing classifier in the training and testing cohorts (AUC = 1.000, accuracy = 1.000 and AUC = 0.976, accuracy = 0.947, respectively).

CONCLUSIONS: The DL-based ensemble ML model using [18F]-FDG-PET-based radiomic features can be useful for differentiating benign from malignant PGDs, overcoming the previously reported limitation of [18F]-FDG-PET/CT for this distinction, and can provide useful information for managing PGDs.

PMID:39254903 | DOI:10.1007/s11604-024-01649-6

Categories: Literature Watch

Bilinear Perceptual Fusion Algorithm Based on Brain Functional and Structural Data for ASD Diagnosis and Regions of Interest Identification

Tue, 2024-09-10 06:00

Interdiscip Sci. 2024 Sep 10. doi: 10.1007/s12539-024-00651-w. Online ahead of print.

ABSTRACT

Autism spectrum disorder (ASD) is a serious mental disorder with a complex pathogenesis mechanism and variable presentation among individuals. Although many deep learning algorithms have been used to diagnose ASD, most of them focus on a single modality of data, resulting in limited information extraction and poor stability. In this paper, we propose a bilinear perceptual fusion (BPF) algorithm that leverages data from multiple modalities. In our algorithm, different schemes are used to extract features according to the characteristics of functional and structural data. Through bilinear operations, the associations between the functional and structural features of each region of interest (ROI) are captured. The associations are then used to integrate the feature representations. Graph convolutional neural networks (GCNs) can effectively utilize topology and node features in brain network analysis. Therefore, we design a deep learning framework called BPF-GCN and conduct experiments on a publicly available ASD dataset. The results show that the classification accuracy of BPF-GCN reached 82.35%, surpassing existing methods. This demonstrates its superior classification performance, and the framework can also extract ROIs related to ASD. Our work provides a valuable reference for the timely diagnosis and treatment of ASD.
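
For illustration, a generic per-ROI bilinear fusion layer of the kind described above can be written in PyTorch as follows. This is a sketch of the idea, not the authors' BPF-GCN implementation, and all dimensions are assumed.

```python
# Generic bilinear fusion of per-ROI functional and structural features in
# PyTorch; a rough sketch of the idea, not the authors' BPF-GCN code.
import torch
import torch.nn as nn

class BilinearROIFusion(nn.Module):
    def __init__(self, func_dim, struct_dim, fused_dim):
        super().__init__()
        # learns pairwise interactions between the two modalities per ROI
        self.bilinear = nn.Bilinear(func_dim, struct_dim, fused_dim)

    def forward(self, func_feats, struct_feats):
        # func_feats, struct_feats: (batch, n_rois, feature_dim)
        fused = self.bilinear(func_feats, struct_feats)
        return torch.relu(fused)          # (batch, n_rois, fused_dim)

fusion = BilinearROIFusion(func_dim=64, struct_dim=32, fused_dim=48)
f = torch.randn(8, 116, 64)               # e.g. 116 atlas ROIs
s = torch.randn(8, 116, 32)
print(fusion(f, s).shape)                  # torch.Size([8, 116, 48])
```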

PMID:39254805 | DOI:10.1007/s12539-024-00651-w

Categories: Literature Watch

Understanding Learning from EEG Data: Combining Machine Learning and Feature Engineering Based on Hidden Markov Models and Mixed Models

Tue, 2024-09-10 06:00

Neuroinformatics. 2024 Sep 10. doi: 10.1007/s12021-024-09690-6. Online ahead of print.

ABSTRACT

Theta oscillations, ranging from 4 to 8 Hz, play a significant role in spatial learning and memory functions during navigation tasks, and frontal theta oscillations are thought to be particularly important for spatial navigation and memory. Electroencephalography (EEG) datasets are very complex, making any changes in the neural signal related to behaviour difficult to interpret. However, multiple analytical methods are available to examine complex data structures, especially machine learning-based techniques. These methods have shown high classification performance, and their combination with feature engineering enhances their capability. This paper proposes using hidden Markov and linear mixed effects models to extract features from EEG data. Based on the engineered features obtained from frontal theta EEG data during a spatial navigation task in two key trials (first, last) and between two conditions (learner and non-learner), we analysed the performance of six machine learning methods in classifying learner and non-learner participants. We also analysed how different standardisation methods used to pre-process the EEG data contribute to classification performance. We compared the classification performance in each trial with that obtained from the same subjects using solely coordinate-based features, such as idle time and average speed. We found that more of the machine learning methods performed better when classifying with the coordinate-based data. However, only deep neural networks achieved an area under the ROC curve higher than 80% using the theta EEG data alone. Our findings suggest that standardising the theta EEG data and using deep neural networks enhances the classification of learner and non-learner subjects in a spatial learning task.
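
A rough sketch of HMM-based feature engineering followed by classification, in the spirit of the approach described, is shown below. The hmmlearn model, number of hidden states, state-occupancy features, and logistic-regression classifier are assumptions for illustration, not the paper's pipeline, and the data are synthetic.

```python
# Rough sketch of HMM-based feature engineering on frontal theta power
# followed by classification; model choices and sizes are illustrative,
# not those used in the paper.
import numpy as np
from hmmlearn import hmm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def hmm_features(theta_power, n_states=3):
    """Fit a Gaussian HMM to one subject's theta-power time series and
    return the fraction of time spent in each hidden state."""
    model = hmm.GaussianHMM(n_components=n_states, n_iter=100, random_state=0)
    model.fit(theta_power)
    states = model.predict(theta_power)
    return np.bincount(states, minlength=n_states) / len(states)

# hypothetical data: 20 subjects x 500 time points x 1 channel
subjects = [rng.normal(size=(500, 1)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)       # learner vs non-learner

X = np.vstack([hmm_features(s) for s in subjects])
clf = LogisticRegression().fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```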

PMID:39254794 | DOI:10.1007/s12021-024-09690-6

Categories: Literature Watch

Prediction of antibiotic resistance mechanisms using a protein language model

Tue, 2024-09-10 06:00

Bioinformatics. 2024 Sep 10:btae550. doi: 10.1093/bioinformatics/btae550. Online ahead of print.

ABSTRACT

MOTIVATION: Antibiotic resistance has emerged as a major global health threat, with an increasing number of bacterial infections becoming difficult to treat. Predicting the underlying resistance mechanisms of antibiotic resistance genes (ARGs) is crucial for understanding and combating this problem. However, existing methods struggle to accurately predict resistance mechanisms for ARGs with low similarity to known sequences and lack sufficient interpretability of the prediction models.

RESULTS: In this study, we present a novel approach for predicting ARG resistance mechanisms using ProteinBERT, a protein language model based on deep learning. Our method outperforms state-of-the-art techniques on diverse ARG datasets, including those with low homology to the training data, highlighting its potential for predicting the resistance mechanisms of unknown ARGs. Attention analysis of the model reveals that it considers biologically relevant features, such as conserved amino acid residues and antibiotic target binding sites, when making predictions. These findings provide valuable insights into the molecular basis of antibiotic resistance and demonstrate the interpretability of protein language models, offering a new perspective on their application in bioinformatics.

AVAILABILITY: The source code is available for free at https://github.com/hmdlab/ARG-BERT. The output results of the model are published at https://waseda.box.com/v/ARG-BERT-suppl.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:39254573 | DOI:10.1093/bioinformatics/btae550

Categories: Literature Watch

Application of artificial intelligence in glaucoma. Part 2. Neural networks and machine learning in the monitoring and treatment of glaucoma

Tue, 2024-09-10 06:00

Vestn Oftalmol. 2024;140(4):80-85. doi: 10.17116/oftalma202414004180.

ABSTRACT

The second part of the literature review on the application of artificial intelligence (AI) methods for screening, diagnosing, monitoring, and treating glaucoma describes how AI methods enhance the effectiveness of glaucoma monitoring and treatment, and presents technologies that use machine learning, including neural networks, to predict disease progression and determine the need for anti-glaucoma surgery. The article also discusses methods of personalized treatment based on projection machine learning methods and outlines the problems and prospects of using AI in screening, diagnosing, and treating glaucoma.

PMID:39254394 | DOI:10.17116/oftalma202414004180

Categories: Literature Watch

Recognition of Molecular Structure of Phosphonium Salts from the Visual Appearance of Material with Deep Learning Can Reveal Subtle Homologs

Tue, 2024-09-10 06:00

Small. 2024 Sep 10:e2403423. doi: 10.1002/smll.202403423. Online ahead of print.

ABSTRACT

Determining molecular structures is foundational in chemistry and biology. The notion of discerning molecular structures simply from the visual appearance of a material remained almost unthinkable until the advent of machine learning. This paper introduces a pioneering approach bridging the visual appearance of materials (both at the micro- and nanostructural levels) with traditional chemical structure analysis methods. Quaternary phosphonium salts were chosen as the model compounds, given their significant roles in diverse chemical and medicinal fields and their ability to form homologs with only minute intermolecular variances. This research results in the successful creation of a neural network model capable of recognizing molecular structures from visual electron microscopy images of the material. The performance of the model is evaluated and related to the chemical nature of the studied compounds. Additionally, unsupervised domain transfer is tested as a way to apply the resulting model to optical microscopy images, and models trained directly on optical images are also tested. The robustness of the method is further tested using a complex system of phosphonium salt mixtures. To the best of the authors' knowledge, this study offers the first evidence of the feasibility of discerning nearly indistinguishable molecular structures.

PMID:39254289 | DOI:10.1002/smll.202403423

Categories: Literature Watch

MetFinder: A Tool for Automated Quantitation of Metastatic Burden in Histological Sections From Preclinical Models

Tue, 2024-09-10 06:00

Pigment Cell Melanoma Res. 2024 Sep 10. doi: 10.1111/pcmr.13195. Online ahead of print.

ABSTRACT

As efforts to study the mechanisms of melanoma metastasis and novel therapeutic approaches multiply, researchers need accurate, high-throughput methods to evaluate the effects on tumor burden resulting from specific interventions. We show that automated quantification of tumor content from whole slide images is a compelling solution to assess in vivo experiments. In order to increase the throughput of data collection from preclinical studies, we assembled a large dataset with annotations and trained a deep neural network for the quantitative analysis of melanoma tumor content on histopathological sections of murine models. After assessing its performance in segmenting these images, we found that the tool produced results consistent with an orthogonal method (bioluminescence) of measuring metastasis in an experimental setting. This AI-based algorithm, made freely available to academic laboratories through a web interface called MetFinder, promises to become an asset for melanoma researchers and pathologists interested in accurate, quantitative assessment of metastasis burden.

PMID:39254030 | DOI:10.1111/pcmr.13195

Categories: Literature Watch

A Framework for Measuring Tree Rings Based on Panchromatic Images and Deep Learning

Tue, 2024-09-10 06:00

Plant Cell Environ. 2024 Sep 10. doi: 10.1111/pce.15091. Online ahead of print.

ABSTRACT

Tree-ring data are pivotal for decoding the age and growth patterns of trees, reflecting the impact of environmental factors over time. Addressing the significant shortcomings of traditional, labour-intensive and resource-demanding methods, we propose an innovative automated technique that utilizes panchromatic images and deep learning for measuring tree rings. The method utilizes convolutional neural networks to enhance image quality, precisely delineate tree rings through segmentation and perform ring counting and width calculation in the post-processing stage. We compiled an extensive data set from diverse sources, including Beijing Forestry University and the Summer Palace, to train our algorithm. The performance of our method was validated empirically, demonstrating its potential to transform tree-ring analysis and provide deeper insights into ecological and climatological research.
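
As a post-processing illustration only, the sketch below counts rings and measures widths along a single radial profile of a binary ring-boundary mask. The paper's actual ring-counting and width-calculation steps may differ, and the mask, center, and pixel spacing here are synthetic assumptions.

```python
# Illustrative post-processing only: given a binary ring-boundary mask from
# a segmentation step, count rings and measure widths along one radial
# profile. The paper's actual post-processing may differ.
import numpy as np

def count_rings_along_ray(boundary_mask, center, angle_deg=0.0, px_per_mm=10.0):
    h, w = boundary_mask.shape
    theta = np.deg2rad(angle_deg)
    n = int(min(h, w) // 2)
    rows = (center[0] + np.arange(n) * np.sin(theta)).astype(int)
    cols = (center[1] + np.arange(n) * np.cos(theta)).astype(int)
    profile = boundary_mask[rows.clip(0, h - 1), cols.clip(0, w - 1)]
    # radii (in pixels) where the ray crosses a ring boundary
    crossings = np.flatnonzero(np.diff(profile.astype(int)) == 1)
    widths_mm = np.diff(crossings) / px_per_mm
    return len(crossings), widths_mm

# synthetic mask with four concentric ring boundaries
mask = np.zeros((400, 400), dtype=bool)
yy, xx = np.ogrid[:400, :400]
r = np.hypot(yy - 200, xx - 200)
for radius in (40, 80, 120, 160):
    mask |= np.abs(r - radius) < 1.5

n_rings, widths = count_rings_along_ray(mask, center=(200, 200))
print(n_rings, widths)
```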

PMID:39253958 | DOI:10.1111/pce.15091

Categories: Literature Watch

Predicting Late Gadolinium Enhancement of Acute Myocardial Infarction in Contrast-Free Cardiac Cine MRI Using Deep Generative Learning

Tue, 2024-09-10 06:00

Circ Cardiovasc Imaging. 2024 Sep 10:e016786. doi: 10.1161/CIRCIMAGING.124.016786. Online ahead of print.

ABSTRACT

BACKGROUND: Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) is a standard technique for diagnosing myocardial infarction (MI), which, however, poses risks due to gadolinium contrast usage. Techniques enabling MI assessment based on contrast-free CMR are desirable to overcome the limitations associated with contrast enhancement.

METHODS: We introduce a novel deep generative learning method, termed cine-generated enhancement (CGE), which transforms standard contrast-free cine CMR into LGE-equivalent images for MI assessment. CGE features a multislice spatiotemporal feature extractor, enhancement contrast modulation, and a sophisticated loss function. Data from 430 patients with acute MI from 3 centers were collected. After image quality control, 1525 pairs (289 patients) from center I were used for training, and 293 slices (52 patients) from the same center were reserved for internal testing. The 40 patients (401 slices) from the other 2 centers were used for external testing. The robustness of CGE was further tested in 20 normal subjects from a public cine CMR data set. CGE images were compared with LGE for image quality assessment and MI quantification regarding scar size and transmurality.

RESULTS: The CGE method produced images of superior quality to LGE in both internal and external data sets. There was a significant (P<0.001) correlation between CGE and LGE measurements of scar size (Pearson correlation, 0.79/0.80; intraclass correlation coefficient, 0.79/0.77) and transmurality (Pearson correlation, 0.76/0.64; intraclass correlation coefficient, 0.76/0.63) in the internal/external data sets. Considering all data sets, CGE demonstrated high sensitivity (91.27%) and specificity (95.83%) in detecting scars. Realistic enhancement images were obtained for the normal subjects in the public data set, with no false-positive subjects.
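
The agreement metrics quoted above (Pearson correlation for scar size, sensitivity and specificity for scar detection) can be computed with standard tooling, as sketched below on synthetic data with a hypothetical 5% scar-size threshold, which is not the study's criterion.

```python
# Sketch of the agreement metrics reported in the abstract (Pearson r for
# scar size, sensitivity/specificity for scar detection); data are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
lge_scar = rng.uniform(0, 30, 100)                      # reference scar size (%)
cge_scar = lge_scar + rng.normal(0, 5, 100)             # model estimate

r, p = pearsonr(lge_scar, cge_scar)

y_true = (lge_scar > 5).astype(int)                     # scar present on LGE
y_pred = (cge_scar > 5).astype(int)                     # scar present on CGE
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"r={r:.2f} (p={p:.3g}), sens={sensitivity:.2%}, spec={specificity:.2%}")
```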

CONCLUSIONS: CGE achieved superior image quality to LGE and accurate scar delineation in patients with acute MI of both internal and external data sets. CGE can significantly simplify the CMR examination, reducing scan times and risks associated with gadolinium-based contrasts, which are crucial for acute patients.

PMID:39253820 | DOI:10.1161/CIRCIMAGING.124.016786

Categories: Literature Watch

Safety and efficiency of a fully automatic workflow for auto-segmentation in radiotherapy using three commercially available deep learning-based applications

Tue, 2024-09-10 06:00

Phys Imaging Radiat Oncol. 2024 Aug 13;31:100627. doi: 10.1016/j.phro.2024.100627. eCollection 2024 Jul.

ABSTRACT

Advancements in radiotherapy auto-segmentation necessitate reliable and efficient workflows. Therefore, a standardized fully automatic workflow was developed for three commercially available deep learning-based auto-segmentation applications and compared to a manual workflow for safety and efficiency. The workflow underwent safety evaluation with failure mode and effects analysis. Notably, eight failure modes were reduced, including seven with severity scores ≥7 (indicating the potential effect on patients) and two with Risk Priority Number values >125 (indicating the relative risk level). Efficiency, measured in mouse clicks, dropped to zero clicks with the automatic workflow. This automation demonstrated improvements in both the safety and efficiency of the workflow.
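
For context on the FMEA quantities cited above (severity ≥7, Risk Priority Number >125), here is a worked example with hypothetical scores; in FMEA the RPN is the product of severity, occurrence, and detectability, each typically scored 1-10.

```python
# Worked example of the FMEA quantities mentioned in the abstract:
# Risk Priority Number (RPN) = severity x occurrence x detectability,
# each typically scored 1-10. Values below are hypothetical.
failure_mode = {"severity": 8, "occurrence": 4, "detectability": 5}
rpn = (failure_mode["severity"]
       * failure_mode["occurrence"]
       * failure_mode["detectability"])
print(rpn)                             # 160 -> exceeds the >125 threshold cited above
print(failure_mode["severity"] >= 7)   # True -> also flagged on the severity criterion
```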

PMID:39253729 | PMC:PMC11381787 | DOI:10.1016/j.phro.2024.100627

Categories: Literature Watch

Addressing Deep Learning Model Calibration Using Evidential Neural Networks And Uncertainty-Aware Training

Tue, 2024-09-10 06:00

Proc IEEE Int Symp Biomed Imaging. 2023 Apr 18;34:1-5. doi: 10.1109/ISBI53787.2023.10230515.

ABSTRACT

In terms of accuracy, deep learning (DL) models have had considerable success in classification problems for medical imaging applications. However, it is well-known that the outputs of such models, which typically utilise the SoftMax function in the final classification layer, can be over-confident, i.e., they are poorly calibrated. Two competing solutions to this problem have been proposed: uncertainty-aware training and evidential neural networks (ENNs). In this paper we perform an investigation into the improvements to model calibration that can be achieved by each of these approaches individually and by their combination. We perform experiments on two classification tasks: a simpler MNIST digit classification task and a more complex and realistic medical imaging artefact detection task using Phase Contrast Cardiac Magnetic Resonance images. The experimental results demonstrate that model calibration can suffer when the task becomes challenging enough to require a higher-capacity model. However, in our complex artefact detection task we saw an improvement in calibration for both a low- and a higher-capacity model when implementing both the ENN and uncertainty-aware training together, indicating that this approach can offer a promising way to improve calibration in such settings. The findings highlight the potential use of these approaches to improve model calibration in a complex application, which would in turn improve clinician trust in DL models.
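
Miscalibration of this kind is commonly quantified with the expected calibration error (ECE). The sketch below shows a minimal ECE computation on synthetic predictions; it is illustrative and not the authors' evaluation code.

```python
# Minimal expected calibration error (ECE) computation, a common way to
# quantify the miscalibration discussed in the abstract; not the authors' code.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

rng = np.random.default_rng(3)
conf = rng.uniform(0.5, 1.0, 1000)                  # predicted max-probabilities
correct = rng.uniform(size=1000) < conf - 0.1       # over-confident: accuracy < confidence
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```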

PMID:39253557 | PMC:PMC7616424 | DOI:10.1109/ISBI53787.2023.10230515

Categories: Literature Watch

Advancing Glaucoma Diagnosis: Employing Confidence-Calibrated Label Smoothing Loss for Model Calibration

Tue, 2024-09-10 06:00

Ophthalmol Sci. 2024 Jun 22;4(6):100555. doi: 10.1016/j.xops.2024.100555. eCollection 2024 Nov-Dec.

ABSTRACT

OBJECTIVE: The aim of our research is to enhance the calibration of machine learning models for glaucoma classification through a specialized loss function named Confidence-Calibrated Label Smoothing (CC-LS) loss. This approach is specifically designed to refine model calibration without compromising accuracy by integrating label smoothing and confidence penalty techniques, tailored to the specifics of glaucoma detection.

DESIGN: This study focuses on the development and evaluation of a calibrated deep learning model.

PARTICIPANTS: The study employs fundus images from both external datasets-the Online Retinal Fundus Image Database for Glaucoma Analysis and Research (482 normal, 168 glaucoma) and the Retinal Fundus Glaucoma Challenge (720 normal, 80 glaucoma)-and an extensive internal dataset (4639 images per category), aiming to bolster the model's generalizability. The model's clinical performance is validated using a comprehensive test set (47 913 normal, 1629 glaucoma) from the internal dataset.

METHODS: The CC-LS loss function seamlessly integrates label smoothing, which tempers extreme predictions to avoid overfitting, with confidence-based penalties. These penalties deter the model from expressing undue confidence in incorrect classifications. Our study trains models using the CC-LS loss and compares their performance with that of models trained using conventional loss functions.
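
A generic PyTorch sketch of a loss that combines label smoothing with an entropy-based confidence penalty, the two ingredients named above, is given below. The exact CC-LS formulation and its weighting are not specified in the abstract, so the function and its parameters here are assumptions for illustration.

```python
# Generic PyTorch sketch combining label smoothing with a confidence
# (low-entropy) penalty; the exact CC-LS formulation and weights may differ.
import torch
import torch.nn.functional as F

def smoothing_with_confidence_penalty(logits, targets, smoothing=0.1, beta=0.1):
    # cross-entropy with label smoothing (built into PyTorch >= 1.10)
    ce = F.cross_entropy(logits, targets, label_smoothing=smoothing)
    # entropy of the predicted distribution; subtracting it penalises
    # over-confident (low-entropy) predictions
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1).mean()
    return ce - beta * entropy

logits = torch.randn(16, 2, requires_grad=True)       # glaucoma vs normal
targets = torch.randint(0, 2, (16,))
loss = smoothing_with_confidence_penalty(logits, targets)
loss.backward()
print(loss.item())
```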

MAIN OUTCOME MEASURES: The model's precision is evaluated using metrics like the Brier score, sensitivity, specificity, and the false positive rate, alongside qualitative heatmap analyses for a holistic accuracy assessment.

RESULTS: Preliminary findings reveal that models employing the CC-LS mechanism exhibit superior calibration metrics, as evidenced by a Brier score of 0.098, along with notable accuracy measures: sensitivity of 81%, specificity of 80%, and weighted accuracy of 80%. Importantly, these enhancements in calibration are achieved without sacrificing classification accuracy.

CONCLUSIONS: The CC-LS loss function presents a significant advancement in the pursuit of deploying machine learning models for glaucoma diagnosis. By improving calibration, the CC-LS ensures that clinicians can interpret and trust the predictive probabilities, making artificial intelligence-driven diagnostic tools more clinically viable. From a clinical standpoint, this heightened trust and interpretability can potentially lead to more timely and appropriate interventions, thereby optimizing patient outcomes and safety.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

PMID:39253549 | PMC:PMC11381854 | DOI:10.1016/j.xops.2024.100555

Categories: Literature Watch

Artificial Intelligence-Based Disease Activity Monitoring to Personalized Neovascular Age-Related Macular Degeneration Treatment: A Feasibility Study

Tue, 2024-09-10 06:00

Ophthalmol Sci. 2024 Jun 17;4(6):100565. doi: 10.1016/j.xops.2024.100565. eCollection 2024 Nov-Dec.

ABSTRACT

PURPOSE: To evaluate the performance of a disease activity (DA) model developed to detect DA in participants with neovascular age-related macular degeneration (nAMD).

DESIGN: Post hoc analysis.

PARTICIPANTS: Patient dataset from the phase III HAWK and HARRIER (H&H) studies.

METHODS: An artificial intelligence (AI)-based DA model was developed to generate a DA score based on measurements of OCT images and other parameters collected from H&H study participants. Disease activity assessments were classified into 3 categories based on the extent of agreement between the DA model's scores and the H&H investigators' decisions: agreement ("easy"), disagreement ("noisy"), and close to the decision boundary ("difficult"). Then, a panel of 10 international retina specialists ("panelists") reviewed a sample of DA assessments of these 3 categories that contributed to the training of the final DA model. A panelists' majority vote on the reviewed cases was used to evaluate the accuracy, sensitivity, and specificity of the DA model.

MAIN OUTCOME MEASURES: The DA model's performance in detecting DA compared with the DA assessments made by the investigators and panelists' majority vote.

RESULTS: A total of 4472 OCT DA assessments were used to develop the model; of these, panelists reviewed 425, categorized as "easy" (17.2%), "noisy" (20.5%), and "difficult" (62.4%). False-positive and false-negative rates of the DA model's assessments decreased after changing the assessment in some cases reviewed by the panelists and retraining the DA model. Overall, the DA model achieved 80% accuracy. For "easy" cases, the DA model reached 96% accuracy and performed as well as the investigators (96% accuracy) and panelists (90% accuracy). For "noisy" cases, the DA model performed similarly to panelists and outperformed the investigators (84%, 86%, and 16% accuracies, respectively). The DA model also outperformed the investigators for "difficult" cases (74% and 53% accuracies, respectively) but underperformed the panelists (86% accuracy) owing to lower specificity. Subretinal and intraretinal fluids were the main clinical parameters driving the DA assessments made by the panelists.

CONCLUSIONS: These results demonstrate the potential of using an AI-based DA model to optimize treatment decisions in the clinical setting and in detecting and monitoring DA in patients with nAMD.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

PMID:39253548 | PMC:PMC11381777 | DOI:10.1016/j.xops.2024.100565

Categories: Literature Watch

Imputation of cancer proteomics data with a deep model that learns from many datasets

Tue, 2024-09-10 06:00

bioRxiv [Preprint]. 2024 Aug 28:2024.08.26.609780. doi: 10.1101/2024.08.26.609780.

ABSTRACT

Missing values are a major challenge in the analysis of mass spectrometry proteomics data. Missing values hinder reproducibility, decrease statistical power for identifying differentially expressed (DE) proteins, and make it challenging to analyze low-abundance proteins. We present Lupine, a deep learning-based method for imputing, or estimating, missing values in tandem mass tag (TMT) proteomics data. Lupine is, to our knowledge, the first imputation method that is designed to learn jointly from many datasets, and we provide evidence that this approach leads to more accurate predictions. We validated Lupine by applying it to TMT data from > 1,000 cancer patient samples spanning ten cancer types from the Clinical Proteomic Tumor Analysis Consortium (CPTAC). Lupine outperforms the state of the art for TMT imputation, identifies more DE proteins than other methods, corrects for TMT batch effects, and learns a meaningful representation of proteins and patient samples. Lupine is implemented as an open source Python package.

PMID:39253518 | PMC:PMC11383014 | DOI:10.1101/2024.08.26.609780

Categories: Literature Watch

Dissecting the regulatory logic of specification and differentiation during vertebrate embryogenesis

Tue, 2024-09-10 06:00

bioRxiv [Preprint]. 2024 Aug 27:2024.08.27.609971. doi: 10.1101/2024.08.27.609971.

ABSTRACT

The interplay between transcription factors and chromatin accessibility regulates cell type diversification during vertebrate embryogenesis. To systematically decipher the gene regulatory logic guiding this process, we generated a single-cell multi-omics atlas of RNA expression and chromatin accessibility during early zebrafish embryogenesis. We developed a deep learning model to predict chromatin accessibility based on DNA sequence and found that a small number of transcription factors underlie cell-type-specific chromatin landscapes. While Nanog is well-established in promoting pluripotency, we discovered a new function in priming the enhancer accessibility of mesendodermal genes. In addition to the classical stepwise mode of differentiation, we describe instant differentiation, where pluripotent cells skip intermediate fate transitions and terminally differentiate. Reconstruction of gene regulatory interactions reveals that this process is driven by a shallow network in which maternally deposited regulators activate a small set of transcription factors that co-regulate hundreds of differentiation genes. Notably, misexpression of these transcription factors in pluripotent cells is sufficient to ectopically activate their targets. This study provides a rich resource for analyzing embryonic gene regulation and reveals the regulatory logic of instant differentiation.
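
As an illustration of the sequence-to-accessibility idea described above, a toy 1D convolutional model over one-hot encoded DNA is sketched below. The architecture, input length, and scalar output are assumptions for illustration, not the authors' model.

```python
# Toy sketch of a sequence-based accessibility model: a 1D CNN over one-hot
# encoded DNA predicting an accessibility score. Sizes are illustrative only.
import torch
import torch.nn as nn

class AccessibilityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=15, padding=7),   # 4 channels: A, C, G, T
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)                        # accessibility score

    def forward(self, x):            # x: (batch, 4, seq_len) one-hot DNA
        return self.head(self.conv(x).squeeze(-1))

model = AccessibilityCNN()
onehot = torch.zeros(2, 4, 500)
onehot[:, 0, :] = 1.0                                       # dummy all-"A" sequences
print(model(onehot).shape)                                  # torch.Size([2, 1])
```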

PMID:39253514 | PMC:PMC11383055 | DOI:10.1101/2024.08.27.609971

Categories: Literature Watch

Vasculature segmentation in 3D hierarchical phase-contrast tomography images of human kidneys

Tue, 2024-09-10 06:00

bioRxiv [Preprint]. 2024 Aug 26:2024.08.25.609595. doi: 10.1101/2024.08.25.609595.

ABSTRACT

Efficient algorithms are needed to segment vasculature in new three-dimensional (3D) medical imaging datasets at scale for a wide range of research and clinical applications. Manual segmentation of vessels in images is time-consuming and expensive. Computational approaches are more scalable but have limitations in accuracy. We organized a global machine learning competition, engaging 1,401 participants, to help develop new deep learning methods for 3D blood vessel segmentation. This paper presents a detailed analysis of the top-performing solutions using manually curated 3D Hierarchical Phase-Contrast Tomography datasets of the human kidney, focusing on the segmentation accuracy and morphological analysis, thereby establishing a benchmark for future studies in blood vessel segmentation within phase-contrast tomography imaging.

PMID:39253466 | PMC:PMC11383006 | DOI:10.1101/2024.08.25.609595

Categories: Literature Watch

The technological assessment of green buildings using artificial neural networks

Tue, 2024-09-10 06:00

Heliyon. 2024 Aug 15;10(16):e36400. doi: 10.1016/j.heliyon.2024.e36400. eCollection 2024 Aug 30.

ABSTRACT

This study aims to construct a comprehensive evaluation model for efficiently assessing appropriate technologies within green buildings. Initially, an Internet of Things (IoT)-based environmental monitoring system is devised and implemented to collect real-time environmental parameters both inside and outside the building. To evaluate the technical suitability of green buildings, this study employs a multifaceted approach encompassing various criteria, including energy efficiency, environmental impact, economic benefits, user comfort, and sustainability. Specifically, it involves real-time monitoring of environmental parameters, analysis of energy consumption data, and indoor environmental quality indicators derived from user satisfaction surveys. Subsequently, a Multi-Layer Perceptron (MLP) is selected as a conventional artificial neural network (ANN) model, while a Long Short-Term Memory (LSTM) model is chosen as an advanced recurrent neural network model in the realm of deep learning. These models are utilized to process and explore the collected data and assess the technical suitability of green buildings. The dataset comprises physical quantities such as temperature, humidity, and light intensity, as well as economic indicators including energy efficiency and building operating costs. Furthermore, the assessment process considers the building's life cycle assessment and indoor environmental quality factors such as health, comfort, and safety. By incorporating these comprehensive criteria, a holistic evaluation of green building technologies is achieved, ensuring the selected technologies' suitability and effectiveness. The model prediction results demonstrate that the proposed hybrid evaluation model exhibits high accuracy and robust stability in predicting building environmental parameters. For instance, the Root Mean Square Error (RMSE) for temperature prediction is 1.2 °C, the Mean Absolute Error (MAE) is 0.9 °C, and the coefficient of determination (R2) reaches 0.95. Similarly, for humidity prediction, the RMSE, MAE, and R2 are 3.5 %, 2.8 %, and 0.88, respectively. Compared to the traditional MLP and LSTM models alone, the proposed hybrid model shows significant improvements in predicting building energy consumption, with approximately 15 % and 12 % reductions in RMSE and MAE, respectively, and an increase in R2 values of approximately 7 percentage points. These findings indicate that, by combining the IoT and ANNs, this study successfully establishes a comprehensive model for accurately assessing technologies suitable for green buildings. This approach offers a novel perspective and methodology for the design and evaluation of green buildings.
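
The prediction-error metrics quoted above (RMSE, MAE, R2) can be computed as in the following sketch, shown here on synthetic temperature data rather than the study's measurements.

```python
# Sketch of the error metrics quoted in the abstract (RMSE, MAE, R2) for a
# temperature-prediction task; the data below are synthetic.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(4)
t_true = rng.uniform(18, 30, 200)                 # measured indoor temperature (deg C)
t_pred = t_true + rng.normal(0, 1.2, 200)         # hypothetical model predictions

rmse = np.sqrt(mean_squared_error(t_true, t_pred))
mae = mean_absolute_error(t_true, t_pred)
r2 = r2_score(t_true, t_pred)
print(f"RMSE={rmse:.2f} degC, MAE={mae:.2f} degC, R2={r2:.2f}")
```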

PMID:39253242 | PMC:PMC11382187 | DOI:10.1016/j.heliyon.2024.e36400

Categories: Literature Watch

Development of deep learning model for diagnosing muscle-invasive bladder cancer on MRI with vision transformer

Tue, 2024-09-10 06:00

Heliyon. 2024 Aug 10;10(16):e36144. doi: 10.1016/j.heliyon.2024.e36144. eCollection 2024 Aug 30.

ABSTRACT

RATIONALE AND OBJECTIVES: To develop and validate a deep learning (DL) model to automatically diagnose muscle-invasive bladder cancer (MIBC) on MRI with Vision Transformer (ViT).

MATERIALS AND METHODS: This multicenter retrospective study included patients with BC who reported to two institutions between January 2016 and June 2020 (training dataset) and a third institution between May 2017 and May 2022 (test dataset). The diagnostic model for MIBC and the segmentation model for BC on MRI were developed using the training dataset with 5-fold cross-validation. ViT- and convolutional neural network (CNN)-based diagnostic models were developed and compared for diagnostic performance using the area under the curve (AUC). The performance of the diagnostic model with manual and auto-generated regions of interest (ROImanual and ROIauto, respectively) was validated on the test dataset and compared to that of radiologists (three senior and three junior radiologists) using Vesical Imaging Reporting and Data System scoring.

RESULTS: The training and test datasets included 170 and 53 patients, respectively. The mean AUC of the top 10 ViT-based models with 5-fold cross-validation was higher than that of the CNN-based models (0.831 ± 0.003 vs. 0.713 ± 0.007-0.812 ± 0.006, p < .001). The diagnostic model with ROImanual achieved an AUC of 0.872 (95 % CI: 0.777, 0.968), which was comparable to that of junior radiologists (AUC = 0.862, 0.873, and 0.930). Semi-automated diagnosis with the diagnostic model with ROIauto achieved an AUC of 0.815 (95 % CI: 0.696, 0.935).
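
For illustration, swapping a ViT backbone for a CNN backbone in a binary classifier and comparing AUCs can be done with the timm library as sketched below. The model names, untrained weights, and dummy data are placeholders, not the study's configuration, so the printed AUCs are meaningless apart from demonstrating the comparison.

```python
# Illustrative comparison of a ViT and a CNN backbone for binary MIBC
# classification using timm; names, weights, and data are placeholders.
import torch
import timm
from sklearn.metrics import roc_auc_score

def auc_for(model_name, images, labels):
    model = timm.create_model(model_name, pretrained=False, num_classes=1)
    model.eval()
    with torch.no_grad():
        scores = torch.sigmoid(model(images)).squeeze(1).numpy()
    return roc_auc_score(labels, scores)

images = torch.randn(8, 3, 224, 224)              # dummy MRI crops
labels = [0, 1, 0, 1, 1, 0, 1, 0]

for name in ("vit_base_patch16_224", "resnet50"):
    print(name, f"AUC = {auc_for(name, images, labels):.2f}")
```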

CONCLUSION: The DL model effectively diagnosed MIBC. The ViT-based model outperformed CNN-based models, highlighting its utility in medical image analysis.

PMID:39253215 | PMC:PMC11381713 | DOI:10.1016/j.heliyon.2024.e36144

Categories: Literature Watch
