Deep learning

SHAP value-based ERP analysis (SHERPA): Increasing the sensitivity of EEG signals with explainable AI methods

Thu, 2024-03-07 06:00

Behav Res Methods. 2024 Mar 7. doi: 10.3758/s13428-023-02335-7. Online ahead of print.

ABSTRACT

Conventionally, event-related potential (ERP) analysis relies on the researcher to identify the sensors and time points where an effect is expected. However, this approach is prone to bias and may limit the ability to detect unexpected effects or to investigate the full range of the electroencephalography (EEG) signal. Data-driven approaches circumvent this limitation; however, the multiple comparison problem and the statistical correction thereof affect both the sensitivity and specificity of the analysis. In this study, we present SHERPA - a novel approach based on explainable artificial intelligence (XAI) designed to provide the researcher with a straightforward and objective method to find relevant latency ranges and electrodes. SHERPA comprises a convolutional neural network (CNN) for classifying the conditions of the experiment and SHapley Additive exPlanations (SHAP) as a post hoc explainer to identify the important temporal and spatial features. A classical EEG face perception experiment is employed to validate the approach by comparing it to the established researcher- and data-driven approaches. Consistent with these approaches, SHERPA identified an occipital cluster close to the expected temporal coordinates of the N170 effect. Most importantly, SHERPA allows quantifying the relevance of an ERP for a psychological mechanism by calculating an "importance score". Hence, SHERPA suggests the presence of a negative selection process at the early and later stages of processing. In conclusion, our new method not only offers an analysis approach suitable in situations with limited prior knowledge of the effect in question but also increased sensitivity capable of distinguishing neural processes with high precision.
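
SHAP approximates Shapley values from cooperative game theory, which assign each feature its average marginal contribution over all feature coalitions. As an illustrative sketch only (the paper applies the SHAP library to CNN inputs; the toy "value function" and feature weights below are hypothetical), exact Shapley values can be computed for a small attribution problem:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: weighted average marginal contribution of
    each feature over all subsets (tractable only for small n)."""
    features = list(range(n_features))
    phi = [0.0] * n_features
    for i in features:
        others = [f for f in features if f != i]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = len(subset)
                weight = factorial(s) * factorial(n_features - s - 1) / factorial(n_features)
                phi[i] += weight * (value_fn(set(subset) | {i}) - value_fn(set(subset)))
    return phi

# Toy additive "model" over three features (e.g., electrode/time-point
# groups); feature 0 carries most of the signal, feature 2 none.
def value_fn(coalition):
    weights = {0: 3.0, 1: 1.0, 2: 0.0}
    return sum(weights[f] for f in coalition)

print(shapley_values(value_fn, 3))  # additive model -> approximately [3.0, 1.0, 0.0]
```

For an additive model the Shapley values recover the feature weights exactly, which is the intuition behind ranking electrodes and latency ranges by an "importance score".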

PMID:38453828 | DOI:10.3758/s13428-023-02335-7

Categories: Literature Watch

Application of deep-learning to the automatic segmentation and classification of lateral lymph nodes on ultrasound images of papillary thyroid carcinoma

Thu, 2024-03-07 06:00

Asian J Surg. 2024 Mar 6:S1015-9584(24)00401-9. doi: 10.1016/j.asjsur.2024.02.140. Online ahead of print.

ABSTRACT

PURPOSE: It is crucial to preoperatively diagnose lateral cervical lymph node (LN) metastases (LNMs) in papillary thyroid carcinoma (PTC) patients. This study aims to develop deep-learning models for the automatic segmentation and classification of LNM on original ultrasound images.

METHODS: This study included 1000 lateral cervical LN ultrasound images (consisting of 512 benign and 558 metastatic LNs) collected from 728 patients at the Chongqing General Hospital between March 2022 and July 2023. Three instance segmentation models (MaskRCNN, SOLO and Mask2Former) were constructed to segment and classify ultrasound images of lateral cervical LNs by recognizing each object individually and in a pixel-by-pixel manner. The segmentation and classification results of the three models were compared with an experienced sonographer in the test set.

RESULTS: Upon completion of a 200-epoch learning cycle, the loss of all three models became negligible. To evaluate the performance of the deep-learning models, the intersection over union threshold was set at 0.75. The mean average precision scores for MaskRCNN, SOLO and Mask2Former were 88.8%, 86.7% and 89.5%, respectively. The segmentation accuracies of the MaskRCNN, SOLO and Mask2Former models and the sonographer were 85.6%, 88.0%, 89.5% and 82.3%, respectively. The classification AUCs of the MaskRCNN, SOLO and Mask2Former models and the sonographer were 0.886, 0.869, 0.902 and 0.852 in the test set, respectively.
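
The intersection over union (IoU) threshold of 0.75 is the standard overlap criterion for deciding whether a predicted segmentation matches the ground truth. A minimal sketch with hypothetical masks (not the study's data):

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks, each given as a set
    of (row, col) pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

# 10x10 predicted mask vs. a ground-truth mask shifted by two columns
pred = {(r, c) for r in range(10) for c in range(10)}
truth = {(r, c) for r in range(10) for c in range(2, 10)}
score = iou(pred, truth)      # 80 shared pixels / 100 total = 0.8
matched = score >= 0.75       # counts as a correct detection at IoU 0.75
print(score, matched)
```

Mean average precision at a fixed IoU threshold then averages precision over detections matched by this criterion.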

CONCLUSIONS: The deep learning models could automatically segment and classify lateral cervical LNs with an AUC of 0.92. This approach may serve as a promising tool to assist sonographers in diagnosing lateral cervical LNMs among patients with PTC.

PMID:38453612 | DOI:10.1016/j.asjsur.2024.02.140

Categories: Literature Watch

Synthesizing Contrast-Enhanced MR Images from Noncontrast MR Images Using Deep Learning

Thu, 2024-03-07 06:00

AJNR Am J Neuroradiol. 2024 Mar 7;45(3):312-319. doi: 10.3174/ajnr.A8107.

ABSTRACT

BACKGROUND AND PURPOSE: Recent developments in deep learning offer a potential solution to the need for alternative imaging methods arising from concerns about the toxicity of gadolinium-based contrast agents. The purpose of this study was to synthesize virtual gadolinium contrast-enhanced T1-weighted MR images from noncontrast multiparametric MR images in patients with primary brain tumors by using deep learning.

MATERIALS AND METHODS: We trained and validated a deep learning network by using MR images from 335 subjects in the Brain Tumor Segmentation Challenge 2019 training data set. A held-out set of 125 subjects from the Brain Tumor Segmentation Challenge 2019 validation data set was used to test the generalization of the model. A residual inception DenseNet network, called T1c-ET, was developed and trained to simultaneously synthesize virtual contrast-enhanced T1-weighted (vT1c) images and segment the enhancing portions of the tumor. Three expert neuroradiologists independently scored the synthesized vT1c images by using a 3-point Likert scale, evaluating image quality and contrast enhancement against ground truth T1c images (1 = poor, 2 = good, 3 = excellent).

RESULTS: The synthesized vT1c images achieved structural similarity index, peak signal-to-noise ratio, and normalized mean square error scores of 0.91, 64.35, and 0.03, respectively. There was moderate interobserver agreement between the 3 raters, regarding the algorithm's performance in predicting contrast enhancement, with a Fleiss kappa value of 0.61. Our model was able to accurately predict contrast enhancement in 88.8% of the cases (scores of 2 to 3 on the 3-point scale).
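
Peak signal-to-noise ratio is derived from the mean squared error between the ground-truth and synthesized images; the reported 64.35 corresponds to a very small MSE at the image's dynamic range. A minimal sketch with made-up intensity values (not the study's data):

```python
import math

def psnr(orig, synth, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two same-sized images,
    given as flat sequences of intensities in [0, max_val]."""
    mse = sum((o - s) ** 2 for o, s in zip(orig, synth)) / len(orig)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Each synthesized intensity is off by 0.01 -> MSE = 1e-4 -> 40 dB
orig = [0.2, 0.4, 0.6, 0.8]
synth = [0.21, 0.39, 0.61, 0.79]
print(round(psnr(orig, synth), 1))  # 40.0
```

Higher PSNR means smaller pixel-wise error; structural similarity (SSIM) complements it by comparing local luminance, contrast, and structure rather than raw differences.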

CONCLUSIONS: We developed a novel deep learning architecture to synthesize virtual postcontrast enhancement by using only conventional noncontrast brain MR images. Our results demonstrate the potential of deep learning methods to reduce the need for gadolinium contrast in the evaluation of primary brain tumors.

PMID:38453408 | DOI:10.3174/ajnr.A8107

Categories: Literature Watch

Global Research Evolution and Frontier Analysis of Artificial Intelligence in Brain Injury: A Bibliometric Analysis

Thu, 2024-03-07 06:00

Brain Res Bull. 2024 Mar 5:110920. doi: 10.1016/j.brainresbull.2024.110920. Online ahead of print.

ABSTRACT

Research on artificial intelligence for brain injury is currently a prominent area of scientific inquiry, and a substantial body of related literature has accumulated. This study aims to identify research hotspots and map research resources through bibliometric visualization analysis, providing ideas and references for related fields. The analysis covers 3000 articles indexed in the Web of Science core database from 1998 to 2023, visualized and analyzed using VOSviewer and CiteSpace. The bibliometric analysis reveals a continuous increase in the number of publications on this topic, with particularly strong growth since 2016. The United States is the leading country in artificial intelligence for brain injury, followed by China, which is rapidly catching up. The core research institutions are primarily universities in developed countries, but cooperation and communication between research groups remain limited. With the development of computer technology, research in this field has progressed in distinct waves: an early stage of applied research based on expert systems, a middle stage of prediction research based on machine learning, and the current phase of diversified research centered on deep learning. Artificial intelligence holds innovative development prospects for brain injury, providing a new orientation for treatment and auxiliary diagnosis in this field.

PMID:38453035 | DOI:10.1016/j.brainresbull.2024.110920

Categories: Literature Watch

Development and validation of a prehospital termination of resuscitation (TOR) rule for out-of-hospital cardiac arrest (OHCA) cases using general purpose artificial intelligence (AI)

Thu, 2024-03-07 06:00

Resuscitation. 2024 Mar 5:110165. doi: 10.1016/j.resuscitation.2024.110165. Online ahead of print.

ABSTRACT

BACKGROUND: Prehospital identification of futile resuscitation efforts (defined as a predicted probability of survival lower than 1%) for out-of-hospital cardiac arrest (OHCA) may reduce unnecessary transport. Reliable prediction variables for OHCA 'termination of resuscitation' (TOR) rules are needed to guide treatment decisions. The Universal TOR rule, which uses only three variables (absence of prehospital ROSC, event not witnessed by EMS, and no shock delivered on the scene), has been externally validated and is used by many EMS systems. Deep learning, an artificial intelligence (AI) platform, is an attractive approach to guide the development of a TOR rule for OHCA. The purpose of this study was to assess the feasibility of developing an AI-TOR rule for neurologically favorable outcomes using general purpose AI and to compare its performance to the Universal TOR rule.

METHODS: We identified OHCA cases of presumed cardiac etiology in patients 18 years of age or older from 2016 to 2019 in the All-Japan Utstein Registry. We divided the dataset into two parts: the first half (2016-2017) was used as a training dataset for rule development and the second half (2018-2019) for validation. The AI software (Prediction One®) created the model using the training dataset with internal cross-validation; it also evaluated prediction accuracy and ranked the influencing variables. We performed validation using the second-half cases and calculated the AUC of the prediction model. The top four of the 11 variables identified in the model were then selected as prognostic factors for an AI-TOR rule, and sensitivity, specificity, positive predictive value, and negative predictive value were calculated from the validation cohort. This performance was then compared to that of the Universal TOR rule using the same dataset.

RESULTS: There were 504,561 OHCA cases in patients 18 years of age or older, of which 302,799 were of presumed cardiac origin. Of these, 149,425 cases were used for the training dataset and 153,374 cases for the validation dataset. The model developed by AI using 11 variables had an AUC of 0.969, and its AUC for the validation dataset was 0.965. The top four influencing variables for neurologically favorable outcome were prehospital ROSC, witnessed by EMS, age (68 years or younger), and nonasystole. The AUC calculated using the four variables for the AI-TOR rule was 0.953, and its AUC for the validation dataset was 0.952 (95% CI 0.949-0.954). Of 80,198 patients in the validation cohort who satisfied all four criteria of the AI-TOR rule, 58 (0.07%) had a neurologically favorable one-month survival. The specificity of the AI-TOR rule was 0.990 and the PPV was 0.999 for predicting lack of neurologically favorable survival; both were higher than those achieved with the Universal TOR rule (0.959 and 0.998, respectively).
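
The reported specificity and PPV follow from the standard 2x2 confusion matrix, where a "positive" prediction means the rule fires (predicting no neurologically favorable survival). A sketch of the calculation; the PPV below uses the 80,198/58 validation figures quoted above, while the counts passed to `tor_metrics` are illustrative only:

```python
def tor_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from 2x2 confusion counts.
    'Positive' = the TOR rule fires (predicts no favorable survival)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 80,198 patients met all four AI-TOR criteria; 58 of them nonetheless
# had a favorable outcome, so PPV = (80198 - 58) / 80198.
rule_fired, favorable_among_fired = 80198, 58
ppv = (rule_fired - favorable_among_fired) / rule_fired
print(round(ppv, 3))  # 0.999
```

A high PPV here means that when the rule recommends termination, a neurologically favorable survival is very rarely missed.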

CONCLUSIONS: The accuracy of prediction models using AI software to determine outcomes in OHCA was excellent, and the AI-TOR rule derived from the prediction model performed better than the Universal TOR rule. External validation of our findings, as well as further research into the utility of AI platforms for TOR prediction in clinical practice, is needed.

PMID:38452995 | DOI:10.1016/j.resuscitation.2024.110165

Categories: Literature Watch

Proteome-scale movements and compartment connectivity during the eukaryotic cell cycle

Thu, 2024-03-07 06:00

Cell. 2024 Mar 1:S0092-8674(24)00180-6. doi: 10.1016/j.cell.2024.02.014. Online ahead of print.

ABSTRACT

Cell cycle progression relies on coordinated changes in the composition and subcellular localization of the proteome. By applying two distinct convolutional neural networks on images of millions of live yeast cells, we resolved proteome-level dynamics in both concentration and localization during the cell cycle, with resolution of ∼20 subcellular localization classes. We show that a quarter of the proteome displays cell cycle periodicity, with proteins tending to be controlled either at the level of localization or concentration, but not both. Distinct levels of protein regulation are preferentially utilized for different aspects of the cell cycle, with changes in protein concentration being mostly involved in cell cycle control and changes in protein localization in the biophysical implementation of the cell cycle program. We present a resource for exploring global proteome dynamics during the cell cycle, which will aid in understanding a fundamental biological process at a systems level.

PMID:38452761 | DOI:10.1016/j.cell.2024.02.014

Categories: Literature Watch

Explainable AI-based Deep-SHAP for mapping the multivariate relationships between regional neuroimaging biomarkers and cognition

Thu, 2024-03-07 06:00

Eur J Radiol. 2024 Mar 2;174:111403. doi: 10.1016/j.ejrad.2024.111403. Online ahead of print.

ABSTRACT

BACKGROUND: Mild cognitive impairment (MCI)/Alzheimer's disease (AD) is associated with cognitive decline beyond normal aging and linked to the alterations of brain volume quantified by magnetic resonance imaging (MRI) and amyloid-beta (Aβ) quantified by positron emission tomography (PET). Yet, the complex relationships between these regional imaging measures and cognition in MCI/AD remain unclear. Explainable artificial intelligence (AI) may uncover such relationships.

METHOD: We integrate the AI-based deep learning neural network and Shapley additive explanations (SHAP) approaches and introduce the Deep-SHAP method to investigate the multivariate relationships between regional imaging measures and cognition. After validating this approach on simulated data, we apply it to real experimental data from MCI/AD patients.

RESULTS: Deep-SHAP significantly predicted cognition using simulated regional features and identified the ground-truth simulated regions as the most significant multivariate predictors. When applied to experimental MRI data, Deep-SHAP revealed that the insula, lateral occipital, medial frontal, temporal pole, and occipital fusiform gyrus are the primary contributors to global cognitive decline in MCI/AD. Furthermore, when applied to experimental amyloid Pittsburgh compound B (PiB)-PET data, Deep-SHAP identified the key brain regions for global cognitive decline in MCI/AD as the inferior temporal, parahippocampal, inferior frontal, supratemporal, and lateral frontal gray matter.

CONCLUSION: The Deep-SHAP method uncovered the multivariate relationships between regional brain features and cognition, offering insights into the most critical modality-specific brain regions involved in MCI/AD mechanisms.

PMID:38452732 | DOI:10.1016/j.ejrad.2024.111403

Categories: Literature Watch

3D-MRI super-resolution reconstruction using multi-modality based on multi-resolution CNN

Thu, 2024-03-07 06:00

Comput Methods Programs Biomed. 2024 Mar 5;248:108110. doi: 10.1016/j.cmpb.2024.108110. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: High-resolution (HR) MR images provide rich structural detail to assist physicians in clinical diagnosis and treatment planning. However, acquiring HR MRI is difficult owing to equipment limitations, scanning time, or patient comfort. Instead, HR MRI can be obtained through a number of computer-assisted post-processing methods that have proven effective and reliable. This paper aims to develop a convolutional neural network (CNN) based super-resolution reconstruction framework for low-resolution (LR) T2w images.

METHOD: In this paper, we propose a novel multi-modal HR MRI generation framework based on deep learning techniques. Specifically, we construct a CNN based on multi-resolution analysis to learn an end-to-end mapping between LR T2w and HR T2w, where HR T1w is fed into the network to offer detailed a priori information to help generate HR T2w. Furthermore, a low-frequency filtering module is introduced to filter out the interference from HR T1w during high-frequency information extraction. Based on the idea of multi-resolution analysis, detailed features extracted from HR T1w and LR T2w are fused at two scales in the network, and HR T2w is then reconstructed by an upsampling and dense connectivity module.

RESULTS: Extensive quantitative and qualitative evaluations demonstrate that the proposed method enhances the recovered HR T2w details and outperforms other state-of-the-art methods. In addition, the experimental results also suggest that our network has a lightweight structure and favorable generalization performance.

CONCLUSION: The results show that the proposed method is capable of reconstructing HR T2w with higher accuracy. Meanwhile, the super-resolution reconstruction results on an additional dataset illustrate the excellent generalization ability of the method.

PMID:38452685 | DOI:10.1016/j.cmpb.2024.108110

Categories: Literature Watch

Medical long-tailed learning for imbalanced data: Bibliometric analysis

Thu, 2024-03-07 06:00

Comput Methods Programs Biomed. 2024 Feb 29;247:108106. doi: 10.1016/j.cmpb.2024.108106. Online ahead of print.

ABSTRACT

BACKGROUND: In the last decade, long-tail learning has become a popular research focus in deep learning applications in medicine; however, no scientometric report has provided a systematic overview of this field. We utilized bibliometric techniques to identify and analyze the literature on long-tailed learning in deep learning applications in medicine and to investigate research trends, core authors, and core journals, aiming to clarify the primary components and principal methodologies of long-tail learning research in the medical field.

METHODS: Web of Science was utilized to collect all articles on long-tailed learning in medicine published until December 2023. The suitability of all retrieved titles and abstracts was evaluated. For bibliometric analysis, all numerical data were extracted. CiteSpace was used to create clustered and visual knowledge graphs based on keywords.

RESULTS: A total of 579 articles met the evaluation criteria. Over the last decade, the annual number of publications and citation frequency both showed significant growth, following a power-law and exponential trend, respectively. Noteworthy contributors to this field include Husanbir Singh Pannu, Fadi Thabtah, and Talha Mahboob Alam, while leading journals such as IEEE ACCESS, COMPUTERS IN BIOLOGY AND MEDICINE, IEEE TRANSACTIONS ON MEDICAL IMAGING, and COMPUTERIZED MEDICAL IMAGING AND GRAPHICS have emerged as pivotal platforms for disseminating research in this area. The core of long-tailed learning research within the medical domain is encapsulated in six principal themes: deep learning for imbalanced data, model optimization, neural networks in image analysis, data imbalance in health records, CNN in diagnostics and risk assessment, and genetic information in disease mechanisms.

CONCLUSION: This study summarizes recent advancements in applying long-tail learning to deep learning in medicine through bibliometric analysis and visual knowledge graphs, identifying emerging trends, sources, core authors, journals, and research hotspots. This field has shown great promise in medical deep learning research, and our findings provide pertinent and valuable insights for future research and clinical practice.

PMID:38452661 | DOI:10.1016/j.cmpb.2024.108106

Categories: Literature Watch

Peanut origin traceability: A hybrid neural network combining an electronic nose system and a hyperspectral system

Thu, 2024-03-07 06:00

Food Chem. 2024 Mar 2;447:138915. doi: 10.1016/j.foodchem.2024.138915. Online ahead of print.

ABSTRACT

Peanuts, sourced from various regions, exhibit noticeable differences in quality owing to the impact of their natural environments. This study proposes a fast and nondestructive detection method to identify peanut quality by combining an electronic nose system with a hyperspectral system. First, the electronic nose and hyperspectral systems are used to gather gas and spectral information from peanuts. Second, a module for extracting gas and spectral information is designed, combining the lightweight multi-head transposed attention mechanism (LMTA) and convolutional computation. The fusion of gas and spectral information is achieved through matrix combination and lightweight convolution. A hybrid neural network, named UnitFormer, is designed based on the information extraction and fusion processes. UnitFormer demonstrates an accuracy of 99.06%, a precision of 99.12%, and a recall of 99.05%. In conclusion, UnitFormer effectively distinguishes quality differences among peanuts from various regions, offering an effective technological solution for quality supervision in the food market.

PMID:38452539 | DOI:10.1016/j.foodchem.2024.138915

Categories: Literature Watch

Automated entry of paper-based patient-reported outcomes: Applying deep learning to the Japanese orthopaedic association back pain evaluation questionnaire

Thu, 2024-03-07 06:00

Comput Biol Med. 2024 Feb 19;172:108197. doi: 10.1016/j.compbiomed.2024.108197. Online ahead of print.

ABSTRACT

BACKGROUND: Health-related patient-reported outcomes (HR-PROs) are crucial for assessing the quality of life among individuals experiencing low back pain. However, manual data entry from paper forms, while convenient for patients, imposes a considerable tallying burden on collectors. In this study, we developed a deep learning (DL) model capable of automatically reading these paper forms.

METHODS: We employed the Japanese Orthopaedic Association Back Pain Evaluation Questionnaire, a globally recognized assessment tool for low back pain. The questionnaire comprised 25 low back pain-related multiple-choice questions and three pain-related visual analog scales (VASs). We collected 1305 forms from an academic medical center as the training set, and 483 forms from a community medical center as the test set. The performance of our DL model for multiple-choice questions was evaluated using accuracy as a categorical classification task. The performance for VASs was evaluated using the correlation coefficient and absolute error as regression tasks.

RESULT: In external validation, the mean accuracy of the categorical questions was 0.997. When outputs for categorical questions with low probability (threshold: 0.9996) were excluded, the accuracy reached 1.000 for the remaining 65% of questions. Regarding the VASs, the average of the correlation coefficients was 0.989, with the mean absolute error being 0.25.
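
Excluding low-probability outputs is a form of selective prediction: accuracy is computed only over the answers the model is confident about, at the cost of reduced coverage. A minimal sketch with hypothetical predictions (the threshold mirrors the 0.9996 used above; the data are made up):

```python
def selective_accuracy(preds, threshold):
    """Keep only predictions whose confidence meets the threshold;
    report (coverage, accuracy) on the kept subset."""
    kept = [(label, truth) for label, truth, p in preds if p >= threshold]
    coverage = len(kept) / len(preds)
    accuracy = sum(l == t for l, t in kept) / len(kept) if kept else float("nan")
    return coverage, accuracy

# (predicted answer, true answer, model confidence)
preds = [("A", "A", 0.9999), ("B", "B", 0.9998),
         ("C", "B", 0.4100), ("D", "D", 0.9997)]
cov, acc = selective_accuracy(preds, 0.9996)
print(cov, acc)  # 0.75 1.0
```

The one wrong answer has low confidence, so rejecting it yields perfect accuracy on the remaining 75% of items, which the excluded items would then need manual entry.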

CONCLUSION: Our DL model demonstrated remarkable accuracy and correlation coefficients when automatically reading paper-based HR-PROs in external validation.

PMID:38452472 | DOI:10.1016/j.compbiomed.2024.108197

Categories: Literature Watch

Transcriptomic signature of cancer cachexia by integration of machine learning, literature mining and meta-analysis

Thu, 2024-03-07 06:00

Comput Biol Med. 2024 Feb 28;172:108233. doi: 10.1016/j.compbiomed.2024.108233. Online ahead of print.

ABSTRACT

BACKGROUND: Cancer cachexia is a severe metabolic syndrome marked by skeletal muscle atrophy. A successful clinical intervention for cancer cachexia is currently lacking. The study of cachexia mechanisms is largely based on preclinical animal models and the availability of high-throughput transcriptomic datasets of cachectic mouse muscles is increasing through the extensive use of next generation sequencing technologies.

METHODS: Cachectic mouse muscle transcriptomic datasets from ten different studies were combined and mined by seven attribute weighting models, which analysed both categorical and numerical variables. The transcriptomic signature of cancer cachexia was identified by attribute weighting algorithms and was used to evaluate the performance of eleven pattern discovery models. The signature was employed to find the best combination of drugs (drug repurposing) for developing cancer cachexia treatment strategies, as well as to evaluate currently used cachexia drugs by literature mining.

RESULTS: Attribute weighting algorithms ranked 26 genes as the transcriptomic signature of muscle from mice with cancer cachexia. Deep Learning and Random Forest models performed better in differentiating cancer cachexia cases based on muscle transcriptomic data. Literature mining revealed that a combination of melatonin and infliximab has negative interactions with 2 key genes (Rorc and Fbxo32) upregulated in the transcriptomic signature of cancer cachexia in muscle.

CONCLUSIONS: The integration of machine learning, meta-analysis and literature mining was found to be an efficient approach to identifying a robust transcriptomic signature for cancer cachexia, with implications for improving clinical diagnosis and management of this condition.

PMID:38452471 | DOI:10.1016/j.compbiomed.2024.108233

Categories: Literature Watch

Bioinspiration from bats and new paradigms for autonomy in natural environments

Thu, 2024-03-07 06:00

Bioinspir Biomim. 2024 Mar 7. doi: 10.1088/1748-3190/ad311e. Online ahead of print.

ABSTRACT

Achieving autonomous operation in complex natural environments remains an unsolved challenge. Conventional engineering approaches to this problem have focused on collecting large amounts of sensory data that are used to create detailed digital models of the environment. However, this merely postpones the challenge of identifying the relevant sensory information and linking it to action control to the domain of the digital world model. Furthermore, it imposes high demands in terms of computing power and introduces large processing latencies that hamper autonomous real-time performance. Certain species of bats that are able to navigate and hunt their prey in dense vegetation could be a biological model system for an alternative approach to addressing the fundamental issues associated with autonomy in complex natural environments. Bats navigating in dense vegetation rely on clutter echoes, i.e., signals that consist of unresolved contributions from many scatterers. Yet, the animals are able to extract the relevant information from these input signals with brains that are often less than one gram in mass. Pilot results indicate that information relevant to location identification and passageway finding can be obtained directly from clutter echoes, opening up the possibility that the bats' skill can be replicated in man-made autonomous systems.

PMID:38452384 | DOI:10.1088/1748-3190/ad311e

Categories: Literature Watch

AACFlow: An end-to-end model based on attention augmented convolutional neural network and flow-attention mechanism for identification of anticancer peptides

Thu, 2024-03-07 06:00

Bioinformatics. 2024 Mar 7:btae142. doi: 10.1093/bioinformatics/btae142. Online ahead of print.

ABSTRACT

MOTIVATION: Anticancer peptides (ACPs) have natural cationic properties and can act on the anionic cell membrane of cancer cells to kill cancer cells. Therefore, ACPs have become a potential anticancer drug with good research value and prospect.

RESULTS: In this paper, we propose AACFlow, an end-to-end model for identification of ACPs based on deep learning. End-to-end models have more freedom to adjust automatically to the data, improving the overall fit and reducing error propagation. The combination of an attention augmented convolutional neural network (AAConv) and a multi-layer convolutional neural network (CNN) forms a deep representation learning module, which is used to obtain global and local information on the sequence. Based on the concept of flow networks, a multi-head flow-attention mechanism is introduced to mine the deep features of the sequence and improve the efficiency of the model. On the independent test dataset, the ACC, Sn, Sp, and AUC values of AACFlow are 83.9%, 83.0%, 84.8%, and 0.892, respectively, which are 4.9%, 1.5%, 8.0%, and 0.016 higher than those of the baseline model. The MCC value is 67.85%. In addition, we visualize the features extracted by each module to enhance the interpretability of the model. Various experiments show that our model is more competitive in predicting ACPs.

AVAILABILITY: The codes and datasets are accessible at https://github.com/z11code/AACFlow.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38452348 | DOI:10.1093/bioinformatics/btae142

Categories: Literature Watch

Deep Learning-based Segmentation of CT Scans Predicts Disease Progression and Mortality in IPF

Thu, 2024-03-07 06:00

Am J Respir Crit Care Med. 2024 Mar 7. doi: 10.1164/rccm.202311-2185OC. Online ahead of print.

ABSTRACT

RATIONALE: Despite evidence demonstrating a prognostic role for CT scans in IPF, image-based biomarkers are not routinely used in clinical practice or trials.

OBJECTIVES: Develop automated imaging biomarkers using deep learning based segmentation of CT scans.

METHODS: We developed segmentation processes for four anatomical biomarkers which were applied to a unique cohort of treatment-naive IPF patients enrolled in the PROFILE study and tested against a further UK cohort. The relationship between CT biomarkers, lung function, disease progression and mortality were assessed.

MEASUREMENTS AND MAIN RESULTS: Data were analysed from 446 PROFILE patients. Median follow-up was 39.1 months (IQR 18.1-66.4), with 277 deaths over five years (cumulative incidence 62.1%). Segmentation was successful on 97.8% of all scans, across multiple imaging vendors at slice thicknesses of 0.5-5 mm. Of the four segmentations, lung volume showed the strongest correlation with FVC (r=0.82, p<0.001). Lung, vascular and fibrosis volumes were consistently associated across cohorts with differential five-year survival, which persisted after adjustment for baseline GAP score. Lower lung volume (HR 0.98, CI 0.96-0.99, p=0.001), increased vascular volume (HR 1.30, CI 1.12-1.51, p=0.001) and increased fibrosis volume (HR 1.17, CI 1.12-1.22, p<0.001) were associated with reduced two-year progression-free survival in the pooled PROFILE cohort. Longitudinally, decreasing lung volume (HR 3.41; 95% CI 1.36-8.54; p=0.009) and increasing fibrosis volume (HR 2.23; 95% CI 1.22-4.08; p=0.009) were associated with differential survival.

CONCLUSIONS: Automated models can rapidly segment IPF CT scans, providing prognostic near and long-term information, which could be used in routine clinical practice or as key trial endpoints. This article is open access and distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

PMID:38452227 | DOI:10.1164/rccm.202311-2185OC

Categories: Literature Watch

Development of a multi-wear-site, deep learning-based physical activity intensity classification algorithm using raw acceleration data

Thu, 2024-03-07 06:00

PLoS One. 2024 Mar 7;19(3):e0299295. doi: 10.1371/journal.pone.0299295. eCollection 2024.

ABSTRACT

BACKGROUND: Accelerometers are widely adopted in research and consumer devices as a tool to measure physical activity. However, existing algorithms used to estimate activity intensity are wear-site-specific, and non-compliance with wear instructions may lead to misspecifications. In this study, we developed deep neural network models to classify device placement and activity intensity based on raw acceleration data. The performance of these models was evaluated by comparison to the ground truth and to results derived from existing count-based algorithms.

METHODS: 54 participants (26 adults, 26.9±8.7 years; 28 children, 12.1±2.3 years) completed a series of activity tasks in a laboratory with accelerometers attached to their hip, wrist, and chest. Their metabolic rates at rest and during activity periods were measured using the portable COSMED K5; the data were then converted to metabolic equivalents (METs) and used as the ground truth for activity intensity. Deep neural networks using the Long Short-Term Memory (LSTM) approach were trained and evaluated on the raw acceleration data. Separate models for classifying wear-site and activity intensity were evaluated.
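The abstract does not state which MET cut-points defined the intensity classes; a minimal sketch using the conventional thresholds (sedentary <1.5, light 1.5-3, moderate 3-6, vigorous ≥6 METs), which the authors may or may not have adopted:

```python
def intensity_class(mets: float) -> str:
    """Map a metabolic-equivalent (MET) value to a conventional
    activity-intensity category (thresholds assumed, not from the paper)."""
    if mets < 1.5:
        return "sedentary"
    if mets < 3.0:
        return "light"
    if mets < 6.0:
        return "moderate"
    return "vigorous"

for m in (1.2, 2.5, 4.0, 7.5):
    print(m, intensity_class(m))
```

Labels produced this way would serve as the classification targets for the LSTM models described above.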

RESULTS: The trained models correctly classified wear-sites and activity intensities over 90% of the time, which outperformed count-based algorithms (wear-site correctly specified: 83% to 85%; wear-site misspecified: 64% to 75%). When additional parameters of age, height and weight of participants were specified, the accuracy of some prediction models surpassed 95%.

CONCLUSIONS: The results suggest that accelerometer placement can be determined prospectively and that non-wear-site-specific algorithms achieve satisfactory accuracy. In terms of intensity classification, these models also outperformed typical count-based algorithms. Without being restricted to one specific wear-site, accelerometer wear protocols could allow participants more autonomy, which may improve their acceptance of and compliance with wear protocols and, in turn, yield more accurate results.

PMID:38452147 | DOI:10.1371/journal.pone.0299295

Categories: Literature Watch

Generalized biomolecular modeling and design with RoseTTAFold All-Atom

Thu, 2024-03-07 06:00

Science. 2024 Mar 7:eadl2528. doi: 10.1126/science.adl2528. Online ahead of print.

ABSTRACT

Deep learning methods have revolutionized protein structure prediction and design but are currently limited to protein-only systems. We describe RoseTTAFold All-Atom (RFAA), which combines a residue-based representation of amino acids and DNA bases with an atomic representation of all other groups to model assemblies containing proteins, nucleic acids, small molecules, metals, and covalent modifications, given their sequences and chemical structures. By fine-tuning on denoising tasks, we obtain RFdiffusionAA, which builds protein structures around small molecules. Starting from random distributions of amino acid residues surrounding target small molecules, we design and experimentally validate, through crystallography and binding measurements, proteins that bind the cardiac disease therapeutic digoxigenin, the enzymatic cofactor heme, and the light-harvesting molecule bilin.

PMID:38452047 | DOI:10.1126/science.adl2528

Categories: Literature Watch

AI-luminating Artificial Intelligence in Inflammatory Bowel Diseases: A Narrative Review on the Role of AI in Endoscopy, Histology, and Imaging for IBD

Thu, 2024-03-07 06:00

Inflamm Bowel Dis. 2024 Mar 7:izae030. doi: 10.1093/ibd/izae030. Online ahead of print.

ABSTRACT

Endoscopy, histology, and cross-sectional imaging serve as fundamental pillars in the detection, monitoring, and prognostication of inflammatory bowel disease (IBD). However, interpretation of these studies often relies on subjective human judgment, which can lead to delays, intra- and interobserver variability, and potential diagnostic discrepancies. With the rising incidence of IBD globally coupled with the exponential digitization of these data, there is a growing demand for innovative approaches to streamline diagnosis and elevate clinical decision-making. In this context, artificial intelligence (AI) technologies emerge as a timely solution to address the evolving challenges in IBD. Early studies using deep learning and radiomics approaches for endoscopy, histology, and imaging in IBD have demonstrated promising results for using AI to detect, diagnose, characterize, phenotype, and prognosticate IBD. Nonetheless, the available literature has inherent limitations and knowledge gaps that need to be addressed before AI can transition into a mainstream clinical tool for IBD. To better understand the potential value of integrating AI in IBD, we review the available literature to summarize our current understanding and identify gaps in knowledge to inform future investigations.

PMID:38452040 | DOI:10.1093/ibd/izae030

Categories: Literature Watch

Tongue feature dataset construction and real-time detection

Thu, 2024-03-07 06:00

PLoS One. 2024 Mar 7;19(3):e0296070. doi: 10.1371/journal.pone.0296070. eCollection 2024.

ABSTRACT

BACKGROUND: Tongue diagnosis in traditional Chinese medicine (TCM) provides clinically important, objective evidence from direct observation of specific features that assist with diagnosis. However, the current interpretation of tongue features requires a significant amount of manpower and time. TCM physicians may have different interpretations of features displayed by the same tongue. An automated interpretation system that interprets tongue features would expedite the interpretation process and yield more consistent results.

MATERIALS AND METHODS: This study applied deep learning visualization to tongue diagnosis. After collecting tongue images and corresponding interpretation reports by TCM physicians in a single teaching hospital, various tongue features such as fissures, tooth marks, and different types of coatings were annotated manually with rectangles. These annotated data and images were used to train a deep learning object detection model. Upon completion of training, the position of each tongue feature was dynamically marked.

RESULTS: A large, high-quality, manually annotated tongue feature dataset was constructed and analyzed. A detection model was trained with average precision (AP) of 47.67%, 58.94%, 71.25%, and 59.78% for fissures, tooth marks, thick coatings, and yellow coatings, respectively. At over 40 frames per second on an NVIDIA GeForce GTX 1060, the model was capable of detecting tongue features from any viewpoint in real time.
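The AP figures above summarize a precision-recall curve over ranked detections. A minimal sketch of AP with the all-point interpolation commonly used in object detection follows; the detections and ground-truth count are hypothetical:

```python
def average_precision(detections, n_ground_truth):
    """Area under the interpolated precision-recall curve.
    `detections` is a list of (confidence, is_true_positive) pairs."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = 0
    recalls, precisions = [], []
    for rank, (_, is_tp) in enumerate(detections, start=1):
        tp += int(is_tp)
        recalls.append(tp / n_ground_truth)
        precisions.append(tp / rank)
    # Interpolate: precision at recall r is the max precision at recall >= r.
    ap, prev_recall = 0.0, 0.0
    for i in range(len(recalls)):
        ap += (recalls[i] - prev_recall) * max(precisions[i:])
        prev_recall = recalls[i]
    return ap

# Two ground-truth features, one false positive ranked second
print(round(average_precision([(0.9, True), (0.8, False), (0.7, True)], 2), 4))  # → 0.8333
```

In practice a detection counts as a true positive only when its IoU with a ground-truth box exceeds a threshold; that matching step is omitted here for brevity.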

CONCLUSIONS/SIGNIFICANCE: This study constructed a tongue feature dataset and trained a deep learning object detection model to locate tongue features in real time. The model provides the interpretability and intuitiveness often lacking in general neural network models and suggests good feasibility for clinical application.

PMID:38452007 | DOI:10.1371/journal.pone.0296070

Categories: Literature Watch

The segmentation and intelligent recognition of structural surfaces in borehole images based on the U2-Net network

Thu, 2024-03-07 06:00

PLoS One. 2024 Mar 7;19(3):e0299471. doi: 10.1371/journal.pone.0299471. eCollection 2024.

ABSTRACT

Structural planes decrease the strength and stability of rock masses, severely affecting their mechanical properties and their deformation and failure characteristics. Investigation and analysis of structural planes are therefore crucial tasks in mining rock mechanics. Borehole cameras capture high-definition images of deep structural planes in rock masses, providing an important data source for their analysis. To address the high workload, low efficiency, high subjectivity, and poor accuracy of manual borehole image analysis, this paper presents an intelligent segmentation study of structural planes in borehole images based on the U2-Net network. From 20 boreholes imaged in regions of differing lithology, a dataset of 1,013 borehole images annotated with structural plane type, lithology, and color was established. Data augmentation methods such as image flipping, color jittering, blurring, and mixup expanded the dataset to 12,421 images, meeting the data requirements for deep network training. Using the PyTorch deep learning framework, the initial U2-Net weights were set, the learning rate was set to 0.001, the training batch size was 4, and the Adam optimizer adaptively adjusted the learning rate during training. The resulting segmentation model achieved a maximum F-measure of 0.749 at a confidence threshold of 0.7, with an accuracy of up to 0.85 over the recall range above 0.5. Overall, the model segments structural planes with high accuracy and very low mean absolute error, indicating good segmentation accuracy and a degree of generalization. The method presented here can serve as a reference for intelligent identification of structural planes in borehole images.
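The F-measure at a confidence threshold and the mean absolute error reported above are standard binary-segmentation metrics. A minimal sketch on hypothetical per-pixel data follows; β² = 1 is assumed here, though saliency work built on U2-Net often uses β² = 0.3, and the paper does not specify:

```python
def f_measure_and_mae(pred, truth, threshold=0.7, beta_sq=1.0):
    """F-measure at a confidence threshold, plus mean absolute error,
    for a predicted probability mask against a binary ground-truth mask."""
    binary = [1 if p >= threshold else 0 for p in pred]
    tp = sum(1 for b, t in zip(binary, truth) if b and t)
    fp = sum(1 for b, t in zip(binary, truth) if b and not t)
    fn = sum(1 for b, t in zip(binary, truth) if t and not b)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = ((1 + beta_sq) * precision * recall /
         (beta_sq * precision + recall)) if precision + recall else 0.0
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)
    return f, mae

pred = [0.9, 0.8, 0.3, 0.1]   # hypothetical per-pixel probabilities
truth = [1, 1, 1, 0]          # hypothetical binary mask
f, mae = f_measure_and_mae(pred, truth)
print(f"F-measure {f:.3f}, MAE {mae:.3f}")
```

Sweeping `threshold` over [0, 1] and taking the maximum F yields the "maximum F-measure" quoted in the abstract.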

PMID:38451909 | DOI:10.1371/journal.pone.0299471

Categories: Literature Watch
