Deep learning

Deep learning-based immunohistochemical estimation of breast cancer via ultrasound image applications

Wed, 2024-01-24 06:00

Front Oncol. 2024 Jan 9;13:1263685. doi: 10.3389/fonc.2023.1263685. eCollection 2023.

ABSTRACT

BACKGROUND: Breast cancer is a major global threat to women's health and ranks first by mortality rate. Reducing this mortality and achieving early diagnosis are mainstays of medical research. Immunohistochemical examination is a critical step in the breast cancer treatment process, and its results directly affect physicians' decisions on follow-up medical treatment.

PURPOSE: This study aims to develop a computer-aided diagnosis (CAD) method based on deep learning to classify breast ultrasound (BUS) images according to immunohistochemical results.

METHODS: A new deep learning framework guided by BUS image data analysis was proposed for classifying breast cancer nodes in BUS images. The proposed CAD classification network comprised three main innovations. First, a multilevel feature distillation network (MFD-Net) based on a CNN was designed to extract feature layers of different scales. Then, the image features extracted at different depths were fused to achieve multilevel feature distillation, using depthwise separable convolution and reverse depthwise separable convolution to increase convolution depth. Finally, a new attention module containing two independent submodules, the channel attention module (CAM) and the spatial attention module (SAM), was introduced to improve the model's classification ability in the channel and spatial dimensions.
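
The paper's MFD-Net code is not included here; as a rough illustration of the attention mechanism described, below is a minimal PyTorch sketch of a CBAM-style channel attention module (CAM) and spatial attention module (SAM), which the abstract's description resembles. Class names, the reduction ratio, and kernel size are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention (CAM): reweight feature channels via pooled descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))          # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))           # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Spatial attention (SAM): reweight spatial positions via channel-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w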

RESULTS: A total of 500 axial BUS images were retrieved from 294 patients who underwent BUS examination. These images were detected and cropped to build breast cancer node BUS image datasets, which were labeled according to immunohistochemical findings and randomly split into a training set (70%) and a test set (30%). In the model comparison experiment, the results for the four immune indices were output simultaneously during training and testing. Taking the ER immune indicator as an example, the proposed model achieved a precision of 0.8933, a recall of 0.7563, an F1 score of 0.8191, and an accuracy of 0.8386, significantly outperforming the other models. The ablation experiment also showed that the proposed multilevel feature distillation structure and attention module were key to improving accuracy.
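
For reference, the four reported metrics can be computed as follows; this is a generic scikit-learn sketch on made-up labels, not the study's data or model.

import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Hypothetical ground-truth ER labels (1 = positive) and model predictions for a test split.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))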

CONCLUSION: Extensive experiments verify the efficiency of the proposed method. It is considered the first work in breast cancer image processing to classify images by immunohistochemical results, and it provides an effective aid for postoperative breast cancer treatment, greatly reduces the difficulty of diagnosis for doctors, and improves work efficiency.

PMID:38264739 | PMC:PMC10803514 | DOI:10.3389/fonc.2023.1263685

Categories: Literature Watch

Meet the authors: Hanchuan Peng, Peng Xie, and Feng Xiong

Wed, 2024-01-24 06:00

Patterns (N Y). 2024 Jan 12;5(1):100912. doi: 10.1016/j.patter.2023.100912. eCollection 2024 Jan 12.

ABSTRACT

In a recent paper in Patterns, Hanchuan Peng, Peng Xie, and Feng Xiong from Southeast University describe a deep learning method for characterizing complete single-neuron morphologies, which can discover the projection patterns of diverse cells and learn neuronal morphology representations. In this interview, the authors share the story behind the paper and their research experience. This interview is a companion to the authors' recent paper, "DSM: Deep sequential model for complete neuronal morphology representation and feature extraction."

PMID:38264723 | PMC:PMC10801219 | DOI:10.1016/j.patter.2023.100912

Categories: Literature Watch

DSM: Deep sequential model for complete neuronal morphology representation and feature extraction

Wed, 2024-01-24 06:00

Patterns (N Y). 2023 Dec 13;5(1):100896. doi: 10.1016/j.patter.2023.100896. eCollection 2024 Jan 12.

ABSTRACT

The full morphology of single neurons is indispensable for understanding cell types, the basic building blocks in brains. Projecting trajectories are critical to extracting biologically relevant information from neuron morphologies, as they provide valuable information for both connectivity and cell identity. We developed an artificial intelligence method, deep sequential model (DSM), to extract concise, cell-type-defining features from projections across brain regions. DSM achieves more than 90% accuracy in classifying 12 major neuron projection types without compromising performance when spatial noise is present. Such remarkable robustness enabled us to efficiently manage and analyze several major full-morphology data sources, showcasing how characteristic long projections can define cell identities. We also succeeded in applying our model to both discovering previously unknown neuron subtypes and analyzing exceptional co-expressed genes involved in neuron projection circuits.
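
The DSM architecture itself is not reproduced here; the sketch below only illustrates the general idea of a sequential model over a projection trajectory, treated as an ordered sequence of 3D points classified into one of 12 projection types. The LSTM encoder, class names, and dimensions are illustrative assumptions, not the published model.

import torch
import torch.nn as nn

class ProjectionSequenceClassifier(nn.Module):
    """Toy sequential encoder: a projection trajectory as a sequence of 3D points."""
    def __init__(self, hidden=128, n_types=12):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_types)

    def forward(self, xyz):                 # xyz: (batch, seq_len, 3)
        _, (h, _) = self.lstm(xyz)
        return self.head(h[-1])             # logits over 12 projection types

model = ProjectionSequenceClassifier()
logits = model(torch.randn(4, 200, 3))      # 4 neurons, 200 sampled points each
print(logits.shape)                         # torch.Size([4, 12])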

PMID:38264721 | PMC:PMC10801254 | DOI:10.1016/j.patter.2023.100896

Categories: Literature Watch

Functional microRNA-targeting drug discovery by graph-based deep learning

Wed, 2024-01-24 06:00

Patterns (N Y). 2024 Jan 3;5(1):100909. doi: 10.1016/j.patter.2023.100909. eCollection 2024 Jan 12.

ABSTRACT

MicroRNAs are recognized as key drivers in many cancers, but targeting them with small molecules remains a challenge. We present RiboStrike, a deep-learning framework that identifies small molecules against specific microRNAs. To demonstrate its capabilities, we applied it to microRNA-21 (miR-21), a known driver of breast cancer. To ensure selectivity toward miR-21, we performed counter-screens against miR-122 and DICER. Auxiliary models were used to evaluate toxicity and rank the candidates. Learning from various datasets, we screened a pool of nine million molecules and identified eight candidates, three of which showed anti-miR-21 activity in both reporter assays and RNA sequencing experiments. Target selectivity of these compounds was assessed using microRNA profiling and RNA sequencing analysis. The top candidate was tested in a xenograft mouse model of breast cancer metastasis, demonstrating a significant reduction in lung metastases. These results demonstrate RiboStrike's ability to nominate compounds that target the activity of miRNAs in cancer.
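
RiboStrike's code is not shown in the abstract; as a loose illustration of graph-based scoring of small molecules, here is a toy message-passing encoder in plain PyTorch that aggregates atom features along a bond adjacency matrix and emits a single activity score. All names, dimensions, and the random toy graph are assumptions, not the published model.

import torch
import torch.nn as nn

class SimpleGraphEncoder(nn.Module):
    """Toy message-passing encoder: atom features mixed along bonds, then pooled."""
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.score = nn.Linear(hidden, 1)      # e.g. a predicted anti-miR activity score

    def forward(self, x, adj):                 # x: (n_atoms, in_dim), adj: (n_atoms, n_atoms)
        h = torch.relu(self.lin1(adj @ x))     # one round of neighbour aggregation
        h = torch.relu(self.lin2(adj @ h))     # second round
        return self.score(h.mean(dim=0))       # graph-level readout

enc = SimpleGraphEncoder()
x = torch.randn(9, 16)                         # 9 atoms with 16-dim features (toy)
adj = (torch.rand(9, 9) > 0.7).float()
adj = ((adj + adj.T) > 0).float()              # symmetrised toy bond matrix
print(enc(x, adj))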

PMID:38264717 | PMC:PMC10801238 | DOI:10.1016/j.patter.2023.100909

Categories: Literature Watch

External validation of a deep learning algorithm for automated echocardiographic strain measurements

Wed, 2024-01-24 06:00

Eur Heart J Digit Health. 2023 Nov 20;5(1):60-68. doi: 10.1093/ehjdh/ztad072. eCollection 2024 Jan.

ABSTRACT

AIMS: Echocardiographic strain imaging reflects myocardial deformation and is a sensitive measure of cardiac function and wall-motion abnormalities. Deep learning (DL) algorithms could automate the interpretation of echocardiographic strain imaging.

METHODS AND RESULTS: We developed and trained an automated DL-based algorithm for left ventricular (LV) strain measurements in an internal dataset. Global longitudinal strain (GLS) was validated externally in (i) a real-world Taiwanese cohort of participants with and without heart failure (HF), (ii) a core-lab measured dataset from the multinational prevalence of microvascular dysfunction-HF and preserved ejection fraction (PROMIS-HFpEF) study, and regional strain in (iii) the HMC-QU-MI study of patients with suspected myocardial infarction. Outcomes included measures of agreement [bias, mean absolute difference (MAD), root-mean-squared-error (RMSE), and Pearson's correlation (R)] and area under the curve (AUC) to identify HF and regional wall-motion abnormalities. The DL workflow successfully analysed 3741 (89%) studies in the Taiwanese cohort, 176 (96%) in PROMIS-HFpEF, and 158 (98%) in HMC-QU-MI. Automated GLS showed good agreement with manual measurements (mean ± SD): -18.9 ± 4.5% vs. -18.2 ± 4.4%, respectively, bias 0.68 ± 2.52%, MAD 2.0 ± 1.67, RMSE = 2.61, R = 0.84 in the Taiwanese cohort; and -15.4 ± 4.1% vs. -15.9 ± 3.6%, respectively, bias -0.65 ± 2.71%, MAD 2.19 ± 1.71, RMSE = 2.78, R = 0.76 in PROMIS-HFpEF. In the Taiwanese cohort, automated GLS accurately identified patients with HF (AUC = 0.89 for total HF and AUC = 0.98 for HF with reduced ejection fraction). In HMC-QU-MI, automated regional strain identified regional wall-motion abnormalities with an average AUC = 0.80.
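
The agreement statistics quoted above (bias, MAD, RMSE, Pearson's R) can be reproduced from paired automated and manual GLS values; the snippet below is a generic NumPy/SciPy sketch on made-up numbers, not the study data.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired GLS values (%): automated vs. manual, one pair per study.
auto = np.array([-19.1, -17.8, -21.0, -15.2, -18.6])
manual = np.array([-18.4, -17.1, -20.2, -16.0, -18.0])

diff = auto - manual
bias = diff.mean()                              # mean difference
mad = np.abs(diff).mean()                       # mean absolute difference
rmse = np.sqrt((diff ** 2).mean())              # root-mean-squared error
r, _ = pearsonr(auto, manual)                   # Pearson's correlation

print(f"bias={bias:.2f}  MAD={mad:.2f}  RMSE={rmse:.2f}  R={r:.2f}")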

CONCLUSION: DL algorithms can interpret echocardiographic strain images with accuracy similar to conventional measurements. These results highlight the potential of DL algorithms to democratize the use of cardiac strain measurements and to reduce time spent and costs for echo labs globally.

PMID:38264705 | PMC:PMC10802824 | DOI:10.1093/ehjdh/ztad072

Categories: Literature Watch

Automatic triage of twelve-lead electrocardiograms using deep convolutional neural networks: a first implementation study

Wed, 2024-01-24 06:00

Eur Heart J Digit Health. 2023 Nov 8;5(1):89-96. doi: 10.1093/ehjdh/ztad070. eCollection 2024 Jan.

ABSTRACT

AIMS: Expert knowledge to correctly interpret electrocardiograms (ECGs) is not always readily available. An artificial intelligence (AI)-based triage algorithm (DELTAnet), able to support physicians in ECG prioritization, could help reduce the current logistical burden of overreading ECGs and improve time to treatment for acute and life-threatening disorders. However, the effect of clinical implementation of such AI algorithms is rarely investigated.

METHODS AND RESULTS: Adult patients at non-cardiology departments who underwent ECG testing as a part of routine clinical care were included in this prospective cohort study. DELTAnet was used to classify 12-lead ECGs into one of the following triage classes: normal, abnormal not acute, subacute, and acute. Performance was compared with triage classes based on the final clinical diagnosis. Moreover, the associations between predicted classes and clinical outcomes were investigated. A total of 1061 patients and ECGs were included. Performance was good with a mean concordance statistic of 0.96 (95% confidence interval 0.95-0.97) when comparing DELTAnet with the clinical triage classes. Moreover, zero ECGs that required a change in policy or referral to the cardiologist were missed and there was a limited number of cases predicted as acute that did not require follow-up (2.6%).

CONCLUSION: This study is the first to prospectively investigate the impact of clinical implementation of an ECG-based AI triage algorithm. It shows that DELTAnet is efficacious and safe to be used in clinical practice for triage of 12-lead ECGs in non-cardiology hospital departments.

PMID:38264701 | PMC:PMC10802816 | DOI:10.1093/ehjdh/ztad070

Categories: Literature Watch

Revealing brain connectivity: graph embeddings for EEG representation learning and comparative analysis of structural and functional connectivity

Wed, 2024-01-24 06:00

Front Neurosci. 2024 Jan 8;17:1288433. doi: 10.3389/fnins.2023.1288433. eCollection 2023.

ABSTRACT

This study employs deep learning techniques to present a compelling approach for modeling brain connectivity in EEG motor imagery classification through graph embedding. The strength of this study lies in its combination of graph embedding, deep learning, and different brain connectivity types, which not only enhances classification accuracy but also enriches the understanding of brain function. The approach yields high accuracy, provides valuable insights into brain connections, and has potential applications in understanding neurological conditions. The proposed models consist of two distinct graph-based convolutional neural networks, each leveraging a different type of brain connectivity to enhance classification performance and gain a deeper understanding of brain connections. The first model, the Adjacency-based Convolutional Neural Network Model (Adj-CNNM), utilizes a graph representation based on structural brain connectivity to embed spatial information, distinguishing it from prior spatial filtering approaches dependent on subjects and tasks. Extensive tests on the benchmark dataset IV-2a demonstrate that the Adj-CNNM achieves an accuracy of 72.77%, surpassing baseline and state-of-the-art methods. The second model, the Phase Locking Value Convolutional Neural Network Model (PLV-CNNM), incorporates functional connectivity to overcome the limitations of structural connectivity and identifies connections between distinct brain regions. The PLV-CNNM achieves an overall accuracy of 75.10% across the 1-51 Hz frequency range. In the preferred 8-30 Hz frequency band, known for motor imagery data classification (including α, μ, and β waves), individual accuracies of 91.9%, 90.2%, and 85.8% are attained for α, μ, and β, respectively. Moreover, the model performs well, with 84.3% accuracy, when considering the entire 8-30 Hz band. Notably, the PLV-CNNM reveals robust connections between different brain regions during motor imagery tasks, including the frontal and central cortex and the central and parietal cortex. These findings provide valuable insights into brain connectivity patterns, enriching the comprehension of brain function. Additionally, the study offers a comprehensive comparative analysis of diverse brain connectivity modeling methods.
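
The PLV-CNNM itself is not reproduced here, but the phase locking value underlying its functional-connectivity input is a standard quantity; below is a minimal SciPy sketch of PLV between two channels using the Hilbert transform, on synthetic signals (the sampling rate and frequencies are arbitrary assumptions).

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two equal-length single-channel signals (1-D arrays)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy signals: two noisy 10 Hz oscillations sampled at 250 Hz for 2 s.
t = np.arange(0, 2, 1 / 250)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * np.random.randn(t.size)
print(phase_locking_value(a, b))                # approaches 1 for strongly phase-locked signals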

PMID:38264495 | PMC:PMC10804888 | DOI:10.3389/fnins.2023.1288433

Categories: Literature Watch

A novel ensemble learning method for crop leaf disease recognition

Wed, 2024-01-24 06:00

Front Plant Sci. 2024 Jan 8;14:1280671. doi: 10.3389/fpls.2023.1280671. eCollection 2023.

ABSTRACT

Deep learning models have been widely applied in the field of crop disease recognition. There are various types of crops and diseases, each potentially possessing distinct and effective features. This poses a great challenge to the generalization performance of recognition models and makes it very difficult to build a unified model that achieves optimal recognition performance on all kinds of crops and diseases. To solve this problem, we propose a novel ensemble learning method for crop leaf disease recognition (named ELCDR). Unlike the traditional voting strategy of ensemble learning, ELCDR assigns different weights to the models based on their feature extraction performance. In ELCDR, a model's feature extraction performance is measured by the distribution of the feature vectors of the training set: if a model can distinguish more feature differences between categories, it receives a higher weight during ensemble learning. We conducted experiments on disease images of four kinds of crops. The experimental results show that, compared with the optimal single-model recognition method, ELCDR improves accuracy by as much as 1.5 (apple), 0.88 (corn), 2.25 (grape), and 1.5 (rice) percentage points. Compared with the voting strategy of ensemble learning, ELCDR improves accuracy by as much as 1.75 (apple), 1.25 (corn), 0.75 (grape), and 7 (rice) percentage points. ELCDR also shows improvements on precision, recall, and F1 measure metrics. These experiments provide evidence of the effectiveness of ELCDR for crop leaf disease recognition.
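
ELCDR's exact weighting rule is described only qualitatively above; the sketch below shows the general mechanics of weight-based (rather than plain-vote) ensembling of per-model class probabilities. The weights and probabilities are placeholders, not the paper's feature-separability scores.

import numpy as np

def weighted_ensemble(probabilities, weights):
    """Combine per-model class probabilities with per-model weights.

    probabilities: (n_models, n_samples, n_classes); weights: (n_models,)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalise weights
    fused = np.tensordot(w, probabilities, axes=1)
    return fused.argmax(axis=-1)                 # predicted class per sample

# Toy example: three models, two samples, four disease classes.
probs = np.random.dirichlet(np.ones(4), size=(3, 2))
weights = [0.5, 0.3, 0.2]                        # e.g. derived from feature-separability scores
print(weighted_ensemble(probs, weights))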

PMID:38264019 | PMC:PMC10804852 | DOI:10.3389/fpls.2023.1280671

Categories: Literature Watch

Comparison between a deep-learning and a pixel-based approach for the automated quantification of HIV target cells in foreskin tissue

Wed, 2024-01-24 06:00

Sci Rep. 2024 Jan 23;14(1):1985. doi: 10.1038/s41598-024-52613-3.

ABSTRACT

The availability of target cells expressing the HIV receptors CD4 and CCR5 in genital tissue is a critical determinant of HIV susceptibility during sexual transmission. Quantification of immune cells in genital tissue is therefore an important outcome for studies on HIV susceptibility and prevention. Immunofluorescence microscopy allows for precise visualization of immune cells in mucosal tissues; however, this technique is limited in clinical studies by the lack of an accurate, unbiased, high-throughput image analysis method. Current pixel-based thresholding methods for cell counting struggle in tissue regions with high cell density and autofluorescence, both of which are common features in genital tissue. We describe a deep-learning approach using the publicly available StarDist method to count cells in immunofluorescence microscopy images of foreskin stained for nuclei, CD3, CD4, and CCR5. The accuracy of the model was comparable to manual counting (gold standard) and surpassed the capability of a previously described pixel-based cell counting method. We show that the performance of our deep-learning model is robust in tissue regions with high cell density and high autofluorescence. Moreover, we show that this deep-learning analysis method is both easy to implement and to adapt for the identification of other cell types in genital mucosal tissue.
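
StarDist is a publicly available package; a minimal usage sketch along the lines described, using the pretrained 2D fluorescence model, is shown below. The file name is hypothetical and this is the library's generic API rather than the authors' full pipeline (which additionally phenotypes CD3, CD4, and CCR5).

from stardist.models import StarDist2D
from csbdeep.utils import normalize
from skimage.io import imread

# Pretrained 2D fluorescence model shipped with StarDist.
model = StarDist2D.from_pretrained('2D_versatile_fluo')

img = imread('nuclei_channel.tif')               # hypothetical nuclear-stain image
labels, details = model.predict_instances(normalize(img))
print(f"detected {labels.max()} nuclei")         # instance labels run from 1..N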

PMID:38263439 | DOI:10.1038/s41598-024-52613-3

Categories: Literature Watch

APOLLO 11 Project, Consortium in Advanced Lung Cancer Patients Treated With Innovative Therapies: Integration of Real-World Data and Translational Research

Tue, 2024-01-23 06:00

Clin Lung Cancer. 2023 Dec 22:S1525-7304(23)00269-3. doi: 10.1016/j.cllc.2023.12.012. Online ahead of print.

ABSTRACT

INTRODUCTION: Despite several therapeutic efforts, lung cancer remains a highly lethal disease. Novel therapeutic approaches encompass immune-checkpoint inhibitors, targeted therapeutics and antibody-drug conjugates, with differing results. Several studies have aimed to identify biomarkers able to predict benefit from these therapies and to create prediction models of response; despite this, there is a lack of information to help clinicians choose therapy for lung cancer patients with advanced disease. This is primarily due to the complexity of lung cancer biology, in which one or a few biomarkers are not sufficient to provide enough predictive capability to explain biologic differences; other reasons include the paucity of data collected by single studies performed in heterogeneous, unmatched cohorts and the methodology of analysis. In fact, classical statistical methods are unable to analyze and integrate the magnitude of information from multiple biological and clinical sources (eg, genomics, transcriptomics, and radiomics).

METHODS AND OBJECTIVES: APOLLO11 is an Italian multicentre, observational study involving patients with a diagnosis of advanced lung cancer (NSCLC and SCLC) treated with innovative therapies. Multiomic data will be collected retrospectively and prospectively, including tissue-based biologic material (eg, for genomic and transcriptomic analysis) and blood-based biologic material (eg, ctDNA, PBMC), in addition to clinical and radiological data (eg, for radiomic analysis). The overall aim of the project is to build a consortium integrating different datasets and a virtual biobank from participating Italian lung cancer centers. To handle the large amount of data provided, AI and ML techniques will be applied to manage this large dataset in an effort to build an R-Model, integrating retrospective and prospective population-based data. The ultimate goal is to create a tool able to help physicians and patients make treatment decisions.

CONCLUSION: APOLLO11 aims to propose a breakthrough approach in lung cancer research, replacing the old, monocentric viewpoint with a comprehensive, multiomic, multicenter model. Multicenter cancer datasets that incorporate a common virtual biobank and new methodologic approaches, including artificial intelligence, machine learning, and deep learning, are the road to the future in oncology that this project launches.

PMID:38262770 | DOI:10.1016/j.cllc.2023.12.012

Categories: Literature Watch

AI-based X-ray fracture analysis of the distal radius: accuracy between representative classification, detection and segmentation deep learning models for clinical practice

Tue, 2024-01-23 06:00

BMJ Open. 2024 Jan 23;14(1):e076954. doi: 10.1136/bmjopen-2023-076954.

ABSTRACT

OBJECTIVES: To aid in selecting the optimal artificial intelligence (AI) solution for clinical application, we directly compared performances of selected representative custom-trained or commercial classification, detection and segmentation models for fracture detection on musculoskeletal radiographs of the distal radius by aligning their outputs.

DESIGN AND SETTING: This single-centre retrospective study was conducted on a random subset of emergency department radiographs from 2008 to 2018 of the distal radius in Germany.

MATERIALS AND METHODS: An image set was created to be compatible with training and testing classification and segmentation models by annotating examinations for fractures and overlaying fracture masks, if applicable. Representative classification and segmentation models were trained on 80% of the data. After output binarisation, their derived fracture detection performances as well as that of a standard commercially available solution were compared on the remaining X-rays (20%) using mainly accuracy and area under the receiver operating characteristic (AUROC).

RESULTS: A total of 2856 examinations with 712 (24.9%) fractures were included in the analysis. Accuracies reached up to 0.97 for the classification model, 0.94 for the segmentation model and 0.95 for BoneView. Cohen's kappa was at least 0.80 in pairwise comparisons, while Fleiss' kappa was 0.83 for all models. Fracture predictions were visualised with all three methods at different levels of detail, ranging from a downsampled image region for classification, over a bounding box for detection, to single pixel-level delineation for segmentation.
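
The reported accuracy, AUROC, Cohen's kappa, and Fleiss' kappa can be computed with scikit-learn and statsmodels; the sketch below uses made-up binarised outputs for three models and is not the study data.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical binarised fracture outputs of three models and the ground truth.
y_true = np.array([1, 0, 0, 1, 1, 0, 1, 0])
cls    = np.array([1, 0, 0, 1, 1, 0, 0, 0])      # classification model
seg    = np.array([1, 0, 1, 1, 1, 0, 1, 0])      # segmentation model
com    = np.array([1, 0, 0, 1, 0, 0, 1, 0])      # commercial solution

print("accuracy (cls):", accuracy_score(y_true, cls))
print("AUROC (cls):   ", roc_auc_score(y_true, cls))        # continuous scores would normally be used
print("Cohen's kappa, cls vs seg:", cohen_kappa_score(cls, seg))

table, _ = aggregate_raters(np.stack([cls, seg, com], axis=1))
print("Fleiss' kappa (all models):", fleiss_kappa(table))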

CONCLUSIONS: All three investigated approaches reached high performances for detection of distal radius fractures with simple preprocessing and postprocessing protocols on the custom-trained models. Despite their underlying structural differences, selection of one's fracture analysis AI tool in the frame of this study reduces to the desired flavour of automation: automated classification, AI-assisted manual fracture reading or minimised false negatives.

PMID:38262641 | DOI:10.1136/bmjopen-2023-076954

Categories: Literature Watch

Detection of fibrosing interstitial lung disease-suspected chest radiographs using a deep learning-based computer-aided detection system: a retrospective, observational study

Tue, 2024-01-23 06:00

BMJ Open. 2024 Jan 22;14(1):e078841. doi: 10.1136/bmjopen-2023-078841.

ABSTRACT

OBJECTIVES: To investigate the effectiveness of BMAX, a deep learning-based computer-aided detection system for detecting fibrosing interstitial lung disease (ILD) on chest radiographs among non-expert and expert physicians in the real-world clinical setting.

DESIGN: Retrospective, observational study.

SETTING: This study used chest radiograph images consecutively taken in three medical facilities with various degrees of referral. Three expert ILD physicians interpreted each image and determined whether it was a fibrosing ILD-suspected image (fibrosing ILD positive) or not (fibrosing ILD negative). Interpreters, including non-experts and experts, classified each of 120 images extracted from the pooled data for the reading test into positive or negative for fibrosing ILD without and with the assistance of BMAX.

PARTICIPANTS: Chest radiographs of patients aged 20 years or older with two or more visits that were taken during consecutive periods were accumulated. 1251 chest radiograph images were collected, from which 120 images (24 positive and 96 negative images) were randomly extracted for the reading test. The interpreters for the reading test were 20 non-expert physicians and 5 expert physicians (3 pulmonologists and 2 radiologists).

PRIMARY AND SECONDARY OUTCOME MEASURES: The primary outcome was the comparison of area under the receiver-operating characteristic curve (ROC-AUC) for identifying fibrosing ILD-positive images by non-experts without versus with BMAX. The secondary outcome was the comparison of sensitivity, specificity and accuracy by non-experts and experts without versus with BMAX.

RESULTS: The mean ROC-AUC of non-expert interpreters was 0.795 (95% CI; 0.765 to 0.825) without BMAX and 0.825 (95% CI; 0.799 to 0.850) with BMAX (p=0.005). After using BMAX, sensitivity was improved from 0.744 (95% CI; 0.697 to 0.791) to 0.802 (95% CI; 0.754 to 0.850) among non-experts (p=0.003), but not among experts (p=0.285). Specificity and accuracy were not changed after using BMAX among either non-expert or expert interpreters.

CONCLUSION: BMAX was useful for detecting fibrosing ILD-suspected chest radiographs for non-expert physicians.

TRIAL REGISTRATION NUMBER: jRCT1032220090.

PMID:38262640 | DOI:10.1136/bmjopen-2023-078841

Categories: Literature Watch

Explainable deep learning diagnostic system for prediction of lung disease from medical images

Tue, 2024-01-23 06:00

Comput Biol Med. 2024 Jan 19;170:108012. doi: 10.1016/j.compbiomed.2024.108012. Online ahead of print.

ABSTRACT

Around the globe, respiratory lung diseases pose a severe threat to human survival. Driven by the central goal of reducing transmission from infected to healthy persons, several technologies have evolved for diagnosing lung pathologies. One of the emerging technologies is Artificial Intelligence (AI) based on computer vision for processing a wide variety of medical imaging, but AI methods without explainability are often treated as a black box. With a view to demystifying the rationale behind AI decisions, this paper designed and developed a novel low-cost explainable deep-learning diagnostic tool for predicting lung disease from medical images. We investigated explainable deep learning (DL) models (conventional DL and vision transformers (ViTs)) for predicting the presence of pneumonia, COVID-19, or no disease from both original and data augmentation (DA)-based medical images (from two chest X-ray datasets). The results show that a DA scheme combining cropping, rotation, and horizontal flipping (CROP+ROT+HF) to transform input images, which are then passed to an Inception-V3 architecture, yielded performance surpassing all the ViTs and other conventional DL approaches on most of the evaluated performance metrics. Overall, the results suggest that data augmentation schemes helped the DL methods achieve higher classification accuracies. Furthermore, we compared five different class activation mapping (CAM) algorithms (GradCAM, GradCAM++, EigenGradCAM, AblationCAM, and RandomCAM). The results show that most of the examined CAM algorithms were effective in identifying the attention region indicating pneumonia or COVID-19 in the medical images (chest X-rays). Our low-cost AI diagnostic tool (pilot system) can assist medical experts and radiographers in providing early diagnosis of lung disease. For this, we selected five to seven deep learning models, and the explainability algorithms were deployed on a novel web interface implemented via the Gradio framework.
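
The paper names standard CAM algorithms; as a generic illustration (not the authors' code), the sketch below runs GradCAM from the open-source pytorch-grad-cam package on an Inception-V3 backbone. The chosen target layer, class index, and random input are assumptions.

import torch
from torchvision.models import inception_v3
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

# Randomly initialised Inception-V3 with 3 output classes (no disease / pneumonia / COVID-19).
model = inception_v3(weights=None, aux_logits=False, num_classes=3).eval()
target_layers = [model.Mixed_7c]                  # last inception block (an assumed choice)
cam = GradCAM(model=model, target_layers=target_layers)

x = torch.randn(1, 3, 299, 299)                   # stand-in for a preprocessed chest X-ray
heatmap = cam(input_tensor=x, targets=[ClassifierOutputTarget(1)])  # class 1 = e.g. pneumonia
print(heatmap.shape)                              # (1, 299, 299) attention map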

PMID:38262202 | DOI:10.1016/j.compbiomed.2024.108012

Categories: Literature Watch

Deep learning-based algorithm for the detection of idiopathic full thickness macular holes in spectral domain optical coherence tomography

Tue, 2024-01-23 06:00

Int J Retina Vitreous. 2024 Jan 23;10(1):9. doi: 10.1186/s40942-024-00526-8.

ABSTRACT

BACKGROUND: Automated identification of spectral domain optical coherence tomography (SD-OCT) features can improve retina clinic workflow efficiency as they are able to detect pathologic findings. The purpose of this study was to test a deep learning (DL)-based algorithm for the identification of Idiopathic Full Thickness Macular Hole (IFTMH) features and stages of severity in SD-OCT B-scans.

METHODS: In this cross-sectional study, subjects solely diagnosed with either IFTMH or Posterior Vitreous Detachment (PVD) were identified excluding secondary causes of macular holes, any concurrent maculopathies, or incomplete records. SD-OCT scans (512 × 128) from all subjects were acquired with CIRRUS™ HD-OCT (ZEISS, Dublin, CA) and reviewed for quality. In order to establish a ground truth classification, each SD-OCT B-scan was labeled by two trained graders and adjudicated by a retina specialist when applicable. Two test sets were built based on different gold-standard classification methods. The sensitivity, specificity and accuracy of the algorithm to identify IFTMH features in SD-OCT B-scans were determined. Spearman's correlation was run to examine if the algorithm's probability score was associated with the severity stages of IFTMH.

RESULTS: Six hundred and one SD-OCT cube scans from 601 subjects (299 with IFTMH and 302 with PVD) were used. A total of 76,928 individual SD-OCT B-scans were labeled gradable by the algorithm and yielded an accuracy of 88.5% (test set 1, 33,024 B-scans) and 91.4% (test set 2, 43,904 B-scans) in identifying SD-OCT features of IFTMHs. A Spearman's correlation coefficient of 0.15 was achieved between the algorithm's probability score and the stages of the 299 (47 [15.7%] stage 2, 56 [18.7%] stage 3 and 196 [65.6%] stage 4) IFTMHs cubes studied.
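
The stage-versus-score association uses Spearman's correlation; a generic SciPy sketch on invented values is shown below for reference.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical algorithm probability scores and IFTMH stages (2, 3 or 4), one per cube.
scores = np.array([0.62, 0.81, 0.74, 0.55, 0.90, 0.68])
stages = np.array([2, 3, 4, 2, 4, 3])

rho, p = spearmanr(scores, stages)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")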

CONCLUSIONS: The DL-based algorithm was able to accurately detect IFTMHs features on individual SD-OCT B-scans in both test sets. However, there was a low correlation between the algorithm's probability score and IFTMH severity stages. The algorithm may serve as a clinical decision support tool that assists with the identification of IFTMHs. Further training is necessary for the algorithm to identify stages of IFTMHs.

PMID:38263402 | DOI:10.1186/s40942-024-00526-8

Categories: Literature Watch

Multi-channel feature extraction for virtual histological staining of photon absorption remote sensing images

Tue, 2024-01-23 06:00

Sci Rep. 2024 Jan 23;14(1):2009. doi: 10.1038/s41598-024-52588-1.

ABSTRACT

Accurate and fast histological staining is crucial in histopathology, impacting diagnostic precision and reliability. Traditional staining methods are time-consuming and subjective, causing delays in diagnosis. Digital pathology plays a vital role in advancing and optimizing histology processes to improve efficiency and reduce turnaround times. This study introduces a novel deep learning-based framework for virtual histological staining using photon absorption remote sensing (PARS) images. By extracting features from PARS time-resolved signals using a variant of the K-means method, valuable multi-modal information is captured. The proposed multi-channel cycleGAN model expands on the traditional cycleGAN framework, allowing the inclusion of additional features. Experimental results reveal that specific combinations of features outperform the conventional channels by improving the labeling of tissue structures prior to model training. Applied to human skin and mouse brain tissue, the results underscore the significance of choosing the optimal combination of features, as it reveals a substantial visual and quantitative concurrence between the virtually stained and the gold standard chemically stained hematoxylin and eosin images, surpassing the performance of other feature combinations. Accurate virtual staining is valuable for reliable diagnostic information, aiding pathologists in disease classification, grading, and treatment planning. This study aims to advance label-free histological imaging and opens doors for intraoperative microscopy applications.
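
The paper's K-means variant is not detailed in the abstract; the sketch below only illustrates the generic idea of clustering per-pixel time-resolved signals with scikit-learn K-means to derive an extra input channel. Array sizes and the cluster count are arbitrary assumptions.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stack of time-resolved PARS signals: one trace per image pixel.
rng = np.random.default_rng(0)
traces = rng.normal(size=(4096, 200))             # 4096 pixels, 200 time samples each

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(traces)
feature_channel = kmeans.labels_.reshape(64, 64)  # cluster index per pixel as an extra channel
print(np.bincount(kmeans.labels_))                # pixels assigned to each signal cluster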

PMID:38263394 | DOI:10.1038/s41598-024-52588-1

Categories: Literature Watch

Impact of a deep learning sepsis prediction model on quality of care and survival

Tue, 2024-01-23 06:00

NPJ Digit Med. 2024 Jan 23;7(1):14. doi: 10.1038/s41746-023-00986-6.

ABSTRACT

Sepsis remains a major cause of mortality and morbidity worldwide. Algorithms that assist with the early recognition of sepsis may improve outcomes, but relatively few studies have examined their impact on real-world patient outcomes. Our objective was to assess the impact of a deep-learning model (COMPOSER) for the early prediction of sepsis on patient outcomes. We completed a before-and-after quasi-experimental study at two distinct Emergency Departments (EDs) within the UC San Diego Health System. We included 6217 adult septic patients from 1/1/2021 through 4/30/2023. The exposure tested was a nurse-facing Best Practice Advisory (BPA) triggered by COMPOSER. In-hospital mortality, sepsis bundle compliance, 72-h change in sequential organ failure assessment (SOFA) score following sepsis onset, ICU-free days, and the number of ICU encounters were evaluated in the pre-intervention period (705 days) and the post-intervention period (145 days). The causal impact analysis was performed using a Bayesian structural time-series approach with confounder adjustments to assess the significance of the exposure at the 95% confidence level. The deployment of COMPOSER was significantly associated with a 1.9% absolute reduction (17% relative decrease) in in-hospital sepsis mortality (95% CI, 0.3%-3.5%), a 5.0% absolute increase (10% relative increase) in sepsis bundle compliance (95% CI, 2.4%-8.0%), and a 4% (95% CI, 1.1%-7.1%) reduction in 72-h SOFA change after sepsis onset in causal inference analysis. This study suggests that the deployment of COMPOSER for early prediction of sepsis was associated with a significant reduction in mortality and a significant increase in sepsis bundle compliance.
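
The causal analysis described is in the spirit of the CausalImpact framework; below is a heavily simplified sketch assuming the pycausalimpact package (whose interface mirrors the R CausalImpact library) on simulated daily series. The series names, values, and covariate are invented, not the study data.

import numpy as np
import pandas as pd
from causalimpact import CausalImpact   # assumes the pycausalimpact package is installed

# Toy daily series: in-hospital sepsis mortality (%) with one covariate (sepsis volume).
rng = np.random.default_rng(1)
n_pre, n_post = 705, 145
mortality = np.concatenate([rng.normal(11.0, 1.0, n_pre),
                            rng.normal(9.1, 1.0, n_post)])   # simulated post-deployment drop
volume = rng.normal(8.0, 1.5, n_pre + n_post)
data = pd.DataFrame({"mortality": mortality, "volume": volume})

ci = CausalImpact(data, [0, n_pre - 1], [n_pre, n_pre + n_post - 1])
print(ci.summary())                               # estimated absolute and relative effects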

PMID:38263386 | DOI:10.1038/s41746-023-00986-6

Categories: Literature Watch

Therapy-induced modulation of tumor vasculature and oxygenation in a murine glioblastoma model quantified by deep learning-based feature extraction

Tue, 2024-01-23 06:00

Sci Rep. 2024 Jan 23;14(1):2034. doi: 10.1038/s41598-024-52268-0.

ABSTRACT

Glioblastoma presents characteristically with an exuberant, poorly functional vasculature that causes malperfusion, hypoxia and necrosis. Despite limited clinical efficacy, anti-angiogenesis resulting in vascular normalization remains a promising therapeutic approach. Yet, fundamental questions concerning anti-angiogenic therapy remain unanswered, partly due to the scale and resolution gap between microscopy and clinical imaging and a lack of quantitative data readouts. To what extent does treatment lead to vessel regression or vessel normalization, and does it ameliorate or aggravate hypoxia? Clearly, a better understanding of the underlying mechanisms would greatly benefit the development of desperately needed improved treatment regimens. Here, using orthotopic transplantation of Gli36 cells, a widely used murine glioma model, we present a mesoscopic approach based on light sheet fluorescence microscopic imaging of wholemount stained tumors. Deep learning-based segmentation followed by automated feature extraction allowed quantitative analyses of the entire tumor vasculature and oxygenation status. Unexpectedly in this model, the response to both cytotoxic and anti-angiogenic therapy was dominated by vessel normalization, with little evidence for vessel regression. Equally surprising, only cytotoxic therapy resulted in a significant alleviation of hypoxia. Taken together, we provide and evaluate a quantitative workflow that addresses some of the most urgent mechanistic questions in anti-angiogenic therapy.

PMID:38263339 | DOI:10.1038/s41598-024-52268-0

Categories: Literature Watch

Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

Tue, 2024-01-23 06:00

Sci Rep. 2024 Jan 23;14(1):2032. doi: 10.1038/s41598-024-52063-x.

ABSTRACT

Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures and occur in a highly complex organ topology. There is a high missed-detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test this rigorously, we, together with expert gastroenterologists, curated a multi-centre and multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced Endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic and real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.

PMID:38263232 | DOI:10.1038/s41598-024-52063-x

Categories: Literature Watch

The application value of deep learning in the background of precision medicine in glioblastoma

Tue, 2024-01-23 06:00

Sci Prog. 2024 Jan-Mar;107(1):368504231223353. doi: 10.1177/00368504231223353.

ABSTRACT

Introduction: Glioblastoma is a highly malignant central nervous system tumor (World Health Organization grade IV) and the most common primary malignant tumor of the central nervous system. Because of its specificity and complexity, patients with different molecular subtypes often benefit differently from the current conventional treatment regimen. In the context of precision medicine, applying deep learning to identify the salient features of tumors on brain imaging, and combining this prognostic assessment with clinical data so that each patient gains the maximum benefit from the treatment regimen, is a non-invasive and feasible strategy.

Methods: We conducted a comprehensive review of the existing literature on the role of deep learning in glioblastoma, covering molecular classification, diagnosis, and prognosis assessment.

Results: Data from a variety of magnetic resonance imaging sequences, genetic information, and clinical variables enable noninvasive, predictive diagnosis of glioblastoma and accurate assessment of overall survival and treatment response. For imaging, standardized image acquisition and data extraction techniques can be effectively translated into learning models for clinical practice. However, it must be recognized that deep learning interventions in glioblastoma treatment are still in their infancy and that model robustness remains a challenge, as the current total number of glioblastoma samples is insufficient for large-scale experiments, which directly limits the applicability of such models.

Conclusion: Compared with radiomics and shallow machine learning, deep learning can be a more robust, non-invasive, and effective approach, providing more valuable information as clinicians develop personalized medical protocols for glioblastoma patients.

PMID:38262933 | DOI:10.1177/00368504231223353

Categories: Literature Watch

Deep Learning Enables Automatic Correction of Experimental HDX-MS Data with Applications in Protein Modeling

Tue, 2024-01-23 06:00

J Am Soc Mass Spectrom. 2024 Jan 23. doi: 10.1021/jasms.3c00285. Online ahead of print.

ABSTRACT

Observed mass shifts associated with deuterium incorporation in hydrogen-deuterium exchange mass spectrometry (HDX-MS) frequently deviate from the initial signals due to back and forward exchange. In typical HDX-MS experiments, the impact of these disparities on data interpretation is generally low because relative and not absolute mass changes are investigated. However, for more advanced data processing including optimization, experimental error correction is imperative for accurate results. Here the potential for automatic HDX-MS data correction using models generated by deep neural networks is demonstrated. A multilayer perceptron (MLP) is used to learn a mapping between uncorrected HDX-MS data and data with mass shifts corrected for back and forward exchange. The model is rigorously tested at various levels including peptide level mass changes, residue level protection factors following optimization, and ability to correctly identify native protein folds using HDX-MS guided protein modeling. AI is shown to demonstrate considerable potential for amending HDX-MS data and improving fidelity across all levels. With access to big data, online tools may eventually be able to predict corrected mass shifts in HDX-MS profiles. This should improve throughput in workflows that require the reporting of real mass changes as well as allow retrospective correction of historic profiles to facilitate new discoveries with these data.
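
The paper's MLP is not specified beyond the abstract; the sketch below is a toy PyTorch MLP that maps an uncorrected per-peptide uptake vector to a corrected one, with invented dimensions and synthetic targets standing in for real HDX-MS data.

import torch
import torch.nn as nn

class HDXCorrectionMLP(nn.Module):
    """Toy MLP mapping an uncorrected per-peptide uptake curve to a corrected one."""
    def __init__(self, n_timepoints=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_timepoints, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_timepoints),
        )

    def forward(self, uptake):
        return self.net(uptake)

model = HDXCorrectionMLP()
uncorrected = torch.rand(16, 8)                  # 16 peptides, 8 exchange time points
corrected_true = uncorrected * 0.9 + 0.05        # stand-in "corrected" targets
loss = nn.functional.mse_loss(model(uncorrected), corrected_true)
loss.backward()
print(float(loss))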

PMID:38262924 | DOI:10.1021/jasms.3c00285

Categories: Literature Watch
