Deep learning

DMFGAN: a multifeature data augmentation method for grape leaf disease identification

Thu, 2024-10-24 06:00

Plant J. 2024 Oct 24. doi: 10.1111/tpj.17042. Online ahead of print.

ABSTRACT

The use of deep learning techniques to identify grape leaf diseases relies on large, high-quality datasets. However, large image collections consume considerable computing resources, and training on them is prone to mode collapse. In this paper, a depth-separable multifeature generative adversarial network (DMFGAN) is proposed to augment grape leaf disease data. First, a multifeature extraction block (MFEB) based on a four-channel feature fusion strategy is designed to improve the quality of the generated images and to avoid the poor feature learning that single-channel feature extraction imposes on the generative adversarial network. Second, a depth-based D-discriminator is designed to improve discriminator capability and reduce the number of model parameters. Third, the SELU activation function is substituted for the DCGAN activation function, which does not fit grape leaf disease image data well. Finally, an MFLoss function with a gradient penalty term is proposed to reduce mode collapse during the training of generative adversarial networks. Comparing visual and quantitative metrics of the images generated by different models, and verifying the augmented grape disease data with a recognition network, the results show that the method is effective for augmenting grape leaf disease data. Under the same experimental conditions, DMFGAN generates higher-quality and more diverse images with fewer parameters than other generative adversarial networks, and mode collapse occurs less often during training, making it more effective in practical applications.
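
As context for the activation swap above: SELU (the scaled exponential linear unit) has a fixed closed form, with constants chosen to make activations self-normalizing. A minimal NumPy sketch, illustrative only and not the paper's implementation:

```python
import numpy as np

# SELU constants from Klambauer et al. (2017), "Self-Normalizing Neural Networks".
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x: np.ndarray) -> np.ndarray:
    """SELU: scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```

Unlike the ReLU/LeakyReLU pair used in DCGAN, SELU saturates smoothly toward -scale*alpha (about -1.76) for large negative inputs.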

PMID:39446313 | DOI:10.1111/tpj.17042

Categories: Literature Watch

Complementary value of molecular, phenotypic, and functional aging biomarkers in dementia prediction

Thu, 2024-10-24 06:00

Geroscience. 2024 Oct 24. doi: 10.1007/s11357-024-01376-w. Online ahead of print.

ABSTRACT

DNA methylation age (MA), brain age (BA), and frailty index (FI) are putative aging biomarkers linked to dementia risk. We investigated their relationship and combined potential for prediction of cognitive impairment and future dementia risk using the ADNI database. Of several MA algorithms, DunedinPACE and GrimAge2, associated with memory, were combined in a composite MA alongside BA and a data-driven FI in predictive analyses. Pairwise correlations between age- and sex-adjusted measures for MA (aMA), aBA, and aFI were low. FI outperformed BA and MA in all diagnostic tasks. A model including age, sex, and aFI achieved an area under the curve (AUC) of 0.94 for differentiating cognitively normal controls (CN) from dementia patients in a held-out test set. When combined with clinical biomarkers (apolipoprotein E ε4 allele count, memory, executive function), a model including aBA and aFI predicted 5-year dementia risk among MCI patients with an out-of-sample AUC of 0.88. In the prognostic model, BA and FI offered complementary value (both β = 0.50). The tested MAs did not improve predictions. Results were consistent across FI algorithms, with data-driven health deficit selection yielding the best performance. FI had a stronger adverse effect on prognosis in males, while BA's impact was greater in females. Our findings highlight the complementary value of BA and FI in dementia prediction. The results support a multidimensional view of dementia, including an intertwined relationship between the biomarkers, sex, and prognosis. The tested MAs' limited contribution suggests caution in their use for individual risk assessment of dementia.
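
As background on the frailty index used above: an FI is conventionally the proportion of assessed health deficits an individual exhibits (Rockwood-style); the paper's data-driven deficit selection is not reproduced here. A minimal sketch:

```python
def frailty_index(deficits):
    """Frailty index: fraction of assessed health deficits present.

    `deficits` maps item name -> value in [0, 1] (0 = absent, 1 = fully
    present); items coded as None (not assessed) are excluded from the
    denominator, following the usual FI convention.
    """
    assessed = [v for v in deficits.values() if v is not None]
    if not assessed:
        raise ValueError("no assessed deficits")
    return sum(assessed) / len(assessed)
```

For example, one full deficit and one half deficit out of three assessed items gives an FI of 0.5.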

PMID:39446224 | DOI:10.1007/s11357-024-01376-w

Forecasting dominance of SARS-CoV-2 lineages by anomaly detection using deep AutoEncoders

Thu, 2024-10-24 06:00

Brief Bioinform. 2024 Sep 23;25(6):bbae535. doi: 10.1093/bib/bbae535.

ABSTRACT

The COVID-19 pandemic is marked by the successive emergence of new SARS-CoV-2 variants, lineages, and sublineages that outcompete earlier strains, largely due to factors like increased transmissibility and immune escape. We propose DeepAutoCoV, an unsupervised deep learning anomaly detection system, to predict future dominant lineages (FDLs). We define FDLs as viral (sub)lineages that will constitute >10% of all the viral sequences added to GISAID, a public database supporting viral genetic sequence sharing, in a given week. DeepAutoCoV is trained and validated by assembling global and country-specific data sets from over 16 million Spike protein sequences sampled over a period of ~4 years. DeepAutoCoV successfully flags FDLs at very low frequencies (0.01%-3%), with median lead times of 4-17 weeks, and predicts FDLs between ~5 and ~25 times better than a baseline approach. For example, the B.1.617.2 vaccine reference strain was flagged as FDL when its frequency was only 0.01%, more than a year before it was considered for an updated COVID-19 vaccine. Furthermore, DeepAutoCoV outputs interpretable results by pinpointing specific mutations potentially linked to increased fitness and may provide significant insights for the optimization of public health 'pre-emptive' intervention strategies.
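
The anomaly-detection idea behind DeepAutoCoV — train an autoencoder on "normal" sequences and flag inputs with high reconstruction error — can be sketched with a linear autoencoder (equivalent to PCA). This is a deliberately simplified stand-in; the actual system uses a deep autoencoder over Spike protein features:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a k-dimensional linear autoencoder (equivalent to PCA) on 'normal' data."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    W = vt[:k].T                       # encoder/decoder weights, shape (d, k)
    return mu, W

def reconstruction_error(X, mu, W):
    """Per-sample squared reconstruction error; high error flags anomalies."""
    Z = (X - mu) @ W                   # encode into the k-dim latent space
    Xhat = mu + Z @ W.T                # decode back to input space
    return np.sum((X - Xhat) ** 2, axis=1)
```

Samples resembling the training distribution reconstruct well; sequences with novel feature patterns (candidate FDLs) yield large errors and are flagged.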

PMID:39446192 | DOI:10.1093/bib/bbae535

MicroHDF: predicting host phenotypes with metagenomic data using a deep forest-based framework

Thu, 2024-10-24 06:00

Brief Bioinform. 2024 Sep 23;25(6):bbae530. doi: 10.1093/bib/bbae530.

ABSTRACT

The gut microbiota plays a vital role in human health, and significant effort has been made to predict human phenotypes, especially diseases, from the microbiota as a promising indicator using machine learning (ML) methods. However, prediction accuracy from metagenomic data is affected by many factors, e.g. small sample sizes, class imbalance, and high-dimensional features. To address these challenges, we propose MicroHDF, an interpretable deep learning framework for predicting host phenotypes, in which cascaded layers of deep forest units are designed to handle sample class imbalance and high-dimensional features. The experimental results show that the performance of MicroHDF is competitive with that of existing state-of-the-art methods on 13 publicly available datasets of six different diseases. In particular, it performs best for inflammatory bowel disease (IBD) and liver cirrhosis, with areas under the receiver operating characteristic curve of 0.9182 ± 0.0098 and 0.9469 ± 0.0076, respectively. MicroHDF also shows better performance and robustness in cross-study validation. Furthermore, MicroHDF is applied to two high-risk diseases, IBD and autism spectrum disorder, as case studies to identify potential biomarkers. In conclusion, our method provides effective and reliable prediction of host phenotypes and discovers informative features with biological insights.
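
One standard way to handle the class imbalance mentioned above is balanced bootstrap resampling before training forest-based learners. This hypothetical helper illustrates only that idea, not MicroHDF's cascade itself:

```python
import random

def balanced_bootstrap(samples, labels, seed=0):
    """Resample a class-imbalanced dataset so each class contributes equally.

    Each class is sampled with replacement up to the size of the largest
    class — a common way to feed balanced training sets to forest learners.
    Returns a list of (sample, label) pairs.
    """
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out = []
    for y, group in sorted(by_class.items()):
        out += [(rng.choice(group), y) for _ in range(target)]
    return out
```

With an 8:2 class ratio, both classes end up with 8 examples each in the resampled set.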

PMID:39446191 | DOI:10.1093/bib/bbae530

An optimized siamese neural network with deep linear graph attention model for gynaecological abdominal pelvic masses classification

Thu, 2024-10-24 06:00

Abdom Radiol (NY). 2024 Oct 24. doi: 10.1007/s00261-024-04633-w. Online ahead of print.

ABSTRACT

An adnexal mass, also known as a pelvic mass, is a growth that develops in or near the uterus, ovaries, fallopian tubes, and supporting tissues. For women suspected of having ovarian cancer, timely and accurate detection of a malignant pelvic mass is crucial for effective triage, referral, and follow-up therapy. While various deep learning techniques have been proposed for identifying pelvic masses, current methods are often not accurate enough and can be computationally intensive. To address these issues, this manuscript introduces an optimized Siamese circle-inspired neural network with deep linear graph attention (SCINN-DLGN) model designed for pelvic mass classification. The SCINN-DLGN model is intended to classify pelvic masses into three categories: benign, malignant, and healthy. Initially, real-time MRI pelvic mass images undergo pre-processing using semantic-aware structure-preserving median morpho-filtering to enhance image quality. Following this, the region of interest (ROI) within the pelvic mass images is segmented using an EfficientNet-based U-Net framework, which reduces noise and improves the accuracy of segmentation. The segmented images are then analysed using the SCINN-DLGN model, which extracts geometric features from the ROI. These features are classified into benign, malignant, or healthy categories using a deep clustering algorithm integrated into the linear graph attention model. The proposed system is implemented on a Python platform, and its performance is evaluated using real-time MRI pelvic mass datasets. The SCINN-DLGN model achieves an impressive 99.9% accuracy and 99.8% recall, demonstrating superior efficiency compared to existing methods and highlighting its potential for further advancement in the field.
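
The pre-processing step above builds on median filtering. A plain k x k median filter looks like the following; the paper's "semantic-aware structure-preserving median morpho-filtering" is a more elaborate, structure-aware variant:

```python
import numpy as np

def median_filter(img, k=3):
    """Plain k x k median filter for denoising (edge pixels keep their value)."""
    out = img.copy().astype(float)
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            # Replace each interior pixel with the median of its k x k window.
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out
```

Median filtering removes isolated speckle noise while preserving edges better than mean filtering, which is why it is a common MRI pre-processing choice.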

PMID:39446167 | DOI:10.1007/s00261-024-04633-w

Diagnostic accuracy of deep learning-based algorithms in laryngoscopy: a systematic review and meta-analysis

Thu, 2024-10-24 06:00

Eur Arch Otorhinolaryngol. 2024 Oct 24. doi: 10.1007/s00405-024-09049-2. Online ahead of print.

ABSTRACT

PURPOSE: Laryngoscopy is routinely used to evaluate suspicious vocal cord lesions, but its diagnostic performance is limited. A growing body of studies has demonstrated the promise of deep learning for processing medical imaging. In this study, we performed a systematic review and meta-analysis to investigate the diagnostic utility of deep learning in laryngoscopy.

METHODS: The study was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We comprehensively retrieved articles from PubMed, Scopus, Embase, and Web of Science up to July 14, 2024. Eligible studies applying deep learning algorithms to laryngoscopy were assessed and enrolled by two independent investigators. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio with 95% confidence intervals (CIs) were calculated using a random-effects model.
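
For intuition, pooling proportions such as per-study sensitivities under a random-effects model is often done on the logit scale with DerSimonian-Laird weighting. This simplified univariate sketch ignores the sensitivity-specificity correlation that full bivariate diagnostic meta-analysis models capture:

```python
import math

def pool_random_effects(props, ns):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale.

    props: per-study sensitivities (or specificities); ns: per-study sample
    sizes. Returns the pooled proportion back-transformed from the logit scale.
    """
    thetas, ws = [], []
    for p, n in zip(props, ns):
        x = p * n
        theta = math.log((x + 0.5) / (n - x + 0.5))     # logit, continuity-corrected
        var = 1.0 / (x + 0.5) + 1.0 / (n - x + 0.5)     # approximate logit variance
        thetas.append(theta)
        ws.append(1.0 / var)
    wsum = sum(ws)
    mean_fe = sum(w * t for w, t in zip(ws, thetas)) / wsum
    q = sum(w * (t - mean_fe) ** 2 for w, t in zip(ws, thetas))   # Cochran's Q
    c = wsum - sum(w * w for w in ws) / wsum
    tau2 = max(0.0, (q - (len(thetas) - 1)) / c) if c > 0 else 0.0
    ws_re = [1.0 / (1.0 / w + tau2) for w in ws]        # add between-study variance
    mean_re = sum(w * t for w, t in zip(ws_re, thetas)) / sum(ws_re)
    return 1.0 / (1.0 + math.exp(-mean_re))             # inverse logit
```

When studies agree, the pooled estimate simply recovers the common proportion; heterogeneous studies inflate tau-squared and flatten the weights.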

RESULTS: We retained 9 eligible studies comprising 106,175 endoscopic images for the meta-analysis. The pooled sensitivity and specificity for diagnosing laryngeal cancer were 0.95 (95% CI: 0.85-0.98) and 0.96 (95% CI: 0.91-0.98), respectively. The area under the curve of deep learning was 0.99 (95% CI: 0.97-0.99).

CONCLUSION: Deep learning demonstrated excellent diagnostic efficacy in assessing laryngeal cancer from laryngoscope images in current studies, highlighting its potential to aid endoscopists in laryngeal cancer diagnosis and clinical decision making.

PMID:39446141 | DOI:10.1007/s00405-024-09049-2

Individual cognitive traits can be predicted from task-based dynamic functional connectivity with a deep convolutional-recurrent model

Thu, 2024-10-24 06:00

Cereb Cortex. 2024 Oct 3;34(10):bhae412. doi: 10.1093/cercor/bhae412.

ABSTRACT

There has been increased interest in understanding the neural substrates of intelligence and several human traits from neuroimaging data. Deep learning can be used to predict different cognitive measures, such as general and fluid intelligence, from different functional magnetic resonance imaging experiments, providing information about the main brain areas involved in these predictions. Using neuroimaging and behavioral data from 874 subjects provided by the Human Connectome Project, we predicted various cognitive scores using dynamic functional connectivity derived from language and working memory functional magnetic resonance imaging task states, using a 360-region multimodal atlas. The deep model joins multiscale convolutional and long short-term memory layers and was trained under 10-fold stratified cross-validation. We removed the confounding effects of gender, age, total brain volume, motion, and the multiband reconstruction algorithm using multiple linear regression. We can explain 17.1% and 16% of general intelligence variance for the working memory and language tasks, respectively. We showed that task-based dynamic functional connectivity has more predictive power than resting-state dynamic functional connectivity when compared to the literature, and that removing confounders significantly reduces the prediction performance. No specific cortical network showed significant relevance in the prediction of general and fluid intelligence, suggesting a spatially homogeneous distribution of the intelligence construct in the brain.
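
Dynamic functional connectivity of the kind fed to such convolutional-recurrent models is typically computed with sliding-window correlations over regional time series. A minimal sketch (window and step sizes here are illustrative, not the study's settings):

```python
import numpy as np

def dynamic_fc(ts, win, step):
    """Sliding-window dynamic functional connectivity.

    ts: (time, regions) array of BOLD time series. Returns an array of
    region-by-region correlation matrices, one per window — the usual
    input sequence for convolutional/recurrent predictors.
    """
    t, _ = ts.shape
    mats = []
    for start in range(0, t - win + 1, step):
        window = ts[start:start + win]
        mats.append(np.corrcoef(window, rowvar=False))
    return np.array(mats)
```

With 50 time points, a window of 20, and a step of 10, this yields 4 connectivity matrices per scan.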

PMID:39445422 | DOI:10.1093/cercor/bhae412

Subject-aware PET Denoising with Contrastive Adversarial Domain Generalization

Thu, 2024-10-24 06:00

IEEE Nucl Sci Symp Conf Rec (1997). 2024 Oct-Nov;2024. doi: 10.1109/nss/mic/rtsd57108.2024.10656150. Epub 2024 Sep 25.

ABSTRACT

Recent advances in deep learning (DL) have greatly improved positron emission tomography (PET) denoising. However, DL model performance can vary considerably across subjects because of the large variability in count levels and spatial distributions. A generalizable DL model that mitigates subject-wise variation is highly desirable for a reliable and trustworthy clinical system. In this work, we propose a contrastive adversarial learning framework for subject-wise domain generalization (DG). Specifically, we configure a contrastive discriminator in addition to the UNet-based denoising module to check for subject-related information in the bottleneck feature, while the denoising module is adversarially trained to enforce the extraction of subject-invariant features. Sampled low-count realizations from the list-mode data are used as anchor-positive pairs to be pulled close to each other, while other subjects are used as negative samples to be pushed far away. We evaluated the method on 97 18F-MK6240 tau PET studies, each having 20 noise realizations with 25% fractions of events. Training, validation, and testing were implemented using 1400, 120, and 420 pairs of 3D image volumes in a subject-independent manner. The proposed contrastive adversarial DG demonstrated denoising performance superior to a conventional UNet without subject-wise DG and to cross-entropy-based adversarial DG.
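
The anchor-positive/negative scheme above is the standard contrastive setup. An InfoNCE-style loss — a common choice for such pairs, not necessarily the paper's exact formulation — can be sketched as:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor feature vector.

    Pulls the anchor toward its positive (e.g. another noise realization of
    the same subject) and pushes it from negatives (other subjects).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits /= temperature
    logits -= logits.max()                       # numerical stability
    # Cross-entropy with the positive as the "correct class".
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The loss is near zero when the anchor matches its positive and large when the anchor is closer to a negative.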

PMID:39445307 | PMC:PMC11497478 | DOI:10.1109/nss/mic/rtsd57108.2024.10656150

Prediction of benign and malignant ground glass pulmonary nodules based on multi-feature fusion of attention mechanism

Thu, 2024-10-24 06:00

Front Oncol. 2024 Oct 9;14:1447132. doi: 10.3389/fonc.2024.1447132. eCollection 2024.

ABSTRACT

OBJECTIVES: The purpose of this study was to develop and validate a new feature fusion algorithm to improve the classification performance of benign and malignant ground-glass nodules (GGNs) based on deep learning.

METHODS: We retrospectively collected 385 cases of GGNs confirmed by surgical pathology from three hospitals. We utilized 239 GGNs from Hospital 1 as the training and internal validation set, and 115 and 31 GGNs from Hospital 2 and Hospital 3, respectively, as external test sets 1 and 2. Among these GGNs, 172 were benign and 203 were malignant. First, we evaluated clinical and morphological features of GGNs at baseline chest CT and simultaneously extracted whole-lung radiomics features. Then, deep convolutional neural networks (CNNs) and backpropagation neural networks (BPNNs) were applied to extract deep features from whole-lung CT images, clinical, morphological features, and whole-lung radiomics features separately. Finally, we integrated these four types of deep features using an attention mechanism. Multiple metrics were employed to evaluate the predictive performance of the model.
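
Attention-based fusion of several feature streams, as in the final integration step above, commonly reduces to softmax-weighted averaging of per-modality embeddings. A minimal sketch with hypothetical learned relevance scores:

```python
import numpy as np

def attention_fuse(features, scores):
    """Fuse equal-length feature vectors with softmax attention weights.

    features: list of per-modality embeddings (e.g. image, clinical,
    morphological, radiomics); scores: one learned relevance logit each.
    Returns the fused vector and the attention weights.
    """
    w = np.exp(scores - np.max(scores))
    w = w / w.sum()                       # softmax: weights sum to 1
    fused = sum(wi * f for wi, f in zip(w, features))
    return fused, w
```

Equal scores reduce to a plain average; in training, the scores are learned so that more informative modalities receive larger weights.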

RESULTS: The deep learning model integrating clinical, morphological, radiomics and whole lung CT image features with attention mechanism (CMRI-AM) achieved the best performance, with area under the curve (AUC) values of 0.941 (95% CI: 0.898-0.972), 0.861 (95% CI: 0.823-0.882), and 0.906 (95% CI: 0.878-0.932) on the internal validation set, external test set 1, and external test set 2, respectively. The AUC differences between the CMRI-AM model and other feature combination models were statistically significant in all three groups (all p<0.05).

CONCLUSION: Our experimental results demonstrated that (1) applying an attention mechanism to fuse whole-lung CT images, radiomics features, and clinical and morphological features is feasible; (2) clinical, morphological, and radiomics features provide supplementary information for the classification of benign and malignant GGNs based on CT images; and (3) utilizing baseline whole-lung CT features to predict whether GGNs are benign or malignant is an effective approach. Therefore, optimizing the fusion of baseline whole-lung CT features can effectively improve the classification performance of GGNs.

PMID:39445066 | PMC:PMC11496306 | DOI:10.3389/fonc.2024.1447132

The development and validation of a prognostic prediction modeling study in acute myocardial infarction patients after percutaneous coronary intervention: hemorrhea and major cardiovascular adverse events

Thu, 2024-10-24 06:00

J Thorac Dis. 2024 Sep 30;16(9):6216-6228. doi: 10.21037/jtd-24-1362. Epub 2024 Sep 26.

ABSTRACT

BACKGROUND: Percutaneous coronary intervention (PCI) is one of the most important diagnostic and therapeutic techniques in cardiology. At present, traditional models for predicting postoperative events after PCI are ineffective, but machine learning has great potential for risk identification and prediction. Machine learning can reduce overfitting through regularization techniques, cross-validation, and ensemble learning, making models more accurate when predicting large amounts of complex unknown data. This study sought to identify the risk of hemorrhea and major adverse cardiovascular events (MACEs) in patients after PCI through machine learning.

METHODS: The entire study population consisted of 7,931 individual patients who underwent PCI at Jiangsu Provincial Hospital and The Affiliated Wuxi Second People's Hospital from January 2007 to January 2022. The risk of postoperative hemorrhea and MACE (including cardiac death and in-stent restenosis) was predicted by 53 clinical features after admission. The population was assigned to the training set and the validation set in a specific ratio by simple randomization. Different machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), random forest (RF), and deep learning neural network (DNN), were trained to build prediction models. A 5-fold cross-validation was applied to correct errors. Several evaluation indexes, including the area under the receiver operating characteristic (ROC) curve (AUC), accuracy (Acc), sensitivity (Sens), specificity (Spec), and net reclassification improvement (NRI), were used to compare the predictive performance. To improve the interpretability of the model and identify risk factors individually, SHapley Additive exPlanation (SHAP) was introduced.
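
Among the metrics listed, the net reclassification improvement (NRI) is the least standard. The two-category form compares how often a new model moves predicted risks across a threshold in the right direction, relative to an old model:

```python
def net_reclassification_improvement(old_risk, new_risk, events, threshold=0.5):
    """Two-category NRI: net fraction of correct reclassifications.

    NRI = P(up|event) - P(down|event) + P(down|non-event) - P(up|non-event),
    where "up"/"down" means the predicted risk crosses `threshold` when
    moving from the old model to the new one.
    """
    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for o, n, y in zip(old_risk, new_risk, events):
        up = o < threshold <= n
        down = n < threshold <= o
        if y:
            n_e += 1
            up_e += up
            down_e += down
        else:
            n_n += 1
            up_n += up
            down_n += down
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n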

RESULTS: In this study, 306 patients (3.9%) experienced hemorrhea, 107 patients (1.3%) experienced cardiac death, and 218 patients (2.7%) developed in-stent restenosis. In the training set and validation set, there were no significant differences except for previous PCI and statins. XGBoost was observed to be the best predictor of every event, namely hemorrhea [AUC: 0.921, 95% confidence interval (CI): 0.864-0.978, Acc: 0.845, Sens: 0.851, Spec: 0.837 and NRI: 0.140], cardiac death (AUC: 0.939, 95% CI: 0.903-0.975, Acc: 0.914, Sens: 0.950, Spec: 0.800 and NRI: 0.148), and in-stent restenosis (AUC: 0.915, 95% CI: 0.863-0.967, Acc: 0.834, Sens: 0.778, Spec: 0.902 and NRI: 0.077). SHAP showed that the number of stents had the greatest influence on hemorrhea, while age and drug-coated balloon were the main factors in cardiac death and in-stent restenosis (all P<0.05).

CONCLUSIONS: The XGBoost model (machine learning) performed better than the traditional logistic regression model in identifying hemorrhea and MACE after PCI. Machine learning models can be used as a tool for risk prediction. The machine learning model described in this study can personalize the prediction of hemorrhea and MACE after PCI for specific patients, helping clinicians adjust intervenable features.

PMID:39444902 | PMC:PMC11494537 | DOI:10.21037/jtd-24-1362

Optimising ovarian tumor classification using a novel CT sequence selection algorithm

Thu, 2024-10-24 06:00

Sci Rep. 2024 Oct 23;14(1):25010. doi: 10.1038/s41598-024-75555-2.

ABSTRACT

Gynaecological cancers, especially ovarian cancer, remain a critical public health issue, particularly in regions like India, where there are challenges related to cancer awareness, variable pathology, and limited access to screening facilities. These challenges often lead to diagnosis at advanced stages, resulting in poorer outcomes for patients. The goal of this study is to enhance the accuracy of classifying ovarian tumours, with a focus on distinguishing between malignant and early-stage cases, by applying advanced deep learning methods. In our approach, we utilized three pre-trained deep learning models (Xception, ResNet50V2, and ResNet50V2FPN) to classify ovarian tumours using publicly available Computed Tomography (CT) scan data. To further improve performance, we developed a novel CT Sequence Selection Algorithm, which optimises the use of CT images for more precise classification of ovarian tumours. The models were trained and evaluated on selected TIFF images. Comparative evaluation of the ResNet50V2FPN model with and without the CT Sequence Selection Algorithm demonstrates the superiority of the proposed algorithm over existing state-of-the-art methods. This research presents a promising approach for improving the early detection and management of gynaecological cancers, with potential benefits for patient outcomes, especially in areas with limited healthcare resources.

PMID:39443517 | DOI:10.1038/s41598-024-75555-2

sChemNET: a deep learning framework for predicting small molecules targeting microRNA function

Wed, 2024-10-23 06:00

Nat Commun. 2024 Oct 23;15(1):9149. doi: 10.1038/s41467-024-49813-w.

ABSTRACT

MicroRNAs (miRNAs) have been implicated in human disorders, from cancers to infectious diseases. Targeting miRNAs or their target genes with small molecules offers opportunities to modulate dysregulated cellular processes linked to diseases. Yet, predicting small molecules associated with miRNAs remains challenging due to the small size of small molecule-miRNA datasets. Herein, we develop a generalized deep learning framework, sChemNET, for predicting small molecules affecting miRNA bioactivity based on chemical structure and sequence information. sChemNET overcomes the limitation of sparse chemical information by an objective function that allows the neural network to learn chemical space from a large body of chemical structures yet unknown to affect miRNAs. We experimentally validated small molecules predicted to act on miR-451 or its targets and tested their role in erythrocyte maturation during zebrafish embryogenesis. We also tested small molecules targeting the miR-181 network and other miRNAs using in-vitro and in-vivo experiments. We demonstrate that our machine-learning framework can predict bioactive small molecules targeting miRNAs or their targets in humans and other mammalian organisms.

PMID:39443444 | DOI:10.1038/s41467-024-49813-w

SpliceTransformer predicts tissue-specific splicing linked to human diseases

Wed, 2024-10-23 06:00

Nat Commun. 2024 Oct 23;15(1):9129. doi: 10.1038/s41467-024-53088-6.

ABSTRACT

We present SpliceTransformer (SpTransformer), a deep-learning framework that predicts tissue-specific RNA splicing alterations linked to human diseases based on genomic sequence. SpTransformer outperforms all previous methods on splicing prediction. Application to approximately 1.3 million genetic variants in the ClinVar database reveals that splicing alterations account for 60% of intronic and synonymous pathogenic mutations, and occur at different frequencies across tissue types. Importantly, tissue-specific splicing alterations match their clinical manifestations independent of gene expression variation. We validate the enrichment in three brain disease datasets involving over 164,000 individuals. Additionally, we identify single nucleotide variations that cause brain-specific splicing alterations, and find disease-associated genes harboring these single nucleotide variations with distinct expression patterns involved in diverse biological processes. Finally, SpTransformer analysis of whole exon sequencing data from blood samples of patients with diabetic nephropathy predicts kidney-specific RNA splicing alterations with 83% accuracy, demonstrating the potential to infer disease-causing tissue-specific splicing events. SpTransformer provides a powerful tool to guide biological and clinical interpretations of human diseases.

PMID:39443442 | DOI:10.1038/s41467-024-53088-6

Anatomical landmark detection on bi-planar radiographs for predicting spinopelvic parameters

Wed, 2024-10-23 06:00

Spine Deform. 2024 Oct 23. doi: 10.1007/s43390-024-00990-0. Online ahead of print.

ABSTRACT

INTRODUCTION: Accurate landmark detection is essential for precise analysis of anatomical structures, supporting diagnosis, treatment planning, and monitoring in patients with spinal deformities. Conventional methods rely on laborious landmark identification by medical experts, which motivates automation. The proposed deep learning pipeline processes bi-planar radiographs to determine spinopelvic parameters and Cobb angles without manual supervision.

METHODS: The dataset used for training and evaluation consisted of 555 bi-planar radiographs from un-instrumented patients, which were manually annotated by medical professionals. The pipeline performed a pre-processing step to determine regions of interest, including the cervical spine, thoracolumbar spine, sacrum, and pelvis. For each ROI, a segmentation network was trained to identify vertebral bodies and pelvic landmarks. The U-Net architecture was trained on 455 bi-planar radiographs using binary cross-entropy loss. The post-processing algorithm determined spinal alignment and angular parameters based on the segmentation output. We evaluated the pipeline on a test set of 100 previously unseen bi-planar radiographs, using the mean absolute difference between annotated and predicted landmarks as the performance metric. The spinopelvic parameter predictions of the pipeline were compared to the measurements of two experienced medical professionals using intraclass correlation coefficient (ICC) and mean absolute deviation (MAD).
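
The binary cross-entropy loss named above for the U-Net training has a simple pixel-wise form; a NumPy sketch (the actual training of course uses a framework's differentiable implementation):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between predicted probabilities and a
    binary target mask, averaged over pixels. `eps` clipping avoids log(0)."""
    p = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))
```

A perfect prediction gives a loss near zero, while a confidently wrong pixel contributes roughly -log(eps).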

RESULTS: The pipeline was able to successfully predict the Cobb angles in 61% of all test cases and achieved mean absolute differences of 3.3° (3.6°) and averaged ICC of 0.88. For thoracic kyphosis, lumbar lordosis, sagittal vertical axis, sacral slope, pelvic tilt, and pelvic incidence, the pipeline produced reasonable outputs in 69%, 58%, 86%, 85%, 84%, and 84% of the cases. The MAD was 5.6° (7.8°), 4.7° (4.3°), 2.8 mm (3.0 mm), 4.5° (7.2°), 1.8° (1.8°), and 5.3° (7.7°), while the ICC was measured at 0.69, 0.82, 0.99, 0.61, 0.96, and 0.70, respectively.

CONCLUSION: Despite limitations in patients with severe pathologies and high BMI, the pipeline automatically predicted coronal and sagittal spinopelvic parameters, which has the potential to simplify clinical routines and large-scale retrospective data analysis.

PMID:39443425 | DOI:10.1007/s43390-024-00990-0

Identification of nitric oxide-mediated necroptosis as the predominant death route in Parkinson's disease

Wed, 2024-10-23 06:00

Mol Biomed. 2024 Oct 24;5(1):44. doi: 10.1186/s43556-024-00213-y.

ABSTRACT

Parkinson's disease (PD) involves multiple forms of neuronal cell death, but the dominant pathway involved in disease progression remains unclear. This study employed RNA sequencing (RNA-seq) of brain tissue to explore gene expression patterns across different stages of PD. Using the Scaden deep learning algorithm, we predicted neurocyte subtypes and modelled dynamic interactions for five classic cell death pathways to identify the predominant routes of neuronal death during PD progression. Our cell type-specific analysis revealed an increasing shift towards necroptosis, which was strongly correlated with nitric oxide synthase (NOS) expression across most neuronal subtypes. In vitro experiments confirmed that nitric oxide (NO) is a key mediator of necroptosis, leading to nuclear shrinkage and decreased mitochondrial membrane potential via phosphorylation of the RIPK1/RIPK3/MLKL signalling cascade. Importantly, specific necroptosis inhibitors significantly mitigated neuronal damage in both in vitro and in vivo PD models. Further analysis revealed that NO-mediated necroptosis is prevalent in early-onset Alzheimer's disease (AD) and amyotrophic lateral sclerosis (ALS) and across multiple brain regions, but not in brain tumours. Our findings suggest that NO-mediated necroptosis is a critical pathway in PD and other neurodegenerative disorders, providing potential targets for therapeutic intervention.

PMID:39443410 | DOI:10.1186/s43556-024-00213-y

Deep Learning Segmentation of Chromogenic Dye RNAscope From Breast Cancer Tissue

Wed, 2024-10-23 06:00

J Imaging Inform Med. 2024 Oct 23. doi: 10.1007/s10278-024-01301-9. Online ahead of print.

ABSTRACT

RNAscope staining of breast cancer tissue allows pathologists to deduce genetic characteristics of the cancer by inspection at the microscopic level, which can lead to better diagnosis and treatment. Chromogenic RNAscope staining is easy to fit into existing pathology workflows, but manually analyzing the resulting tissue samples is time-consuming. There is also a lack of peer-reviewed, performant solutions for automated analysis of chromogenic RNAscope staining. This paper covers the development and optimization of a novel deep learning method focused on accurate segmentation of RNAscope dots (which signify gene expression) from breast cancer tissue. The deep learning network is convolutional and uses ConvNeXt as its backbone. The upscaling portions of the network use custom, heavily regularized blocks to prevent overfitting and early convergence on suboptimal solutions. The resulting network is modest in size for a segmentation network and able to function well with little training data. The network was also able to outperform manual expert annotation at finding the positions of RNAscope dots, achieving a final F1-score of 0.745; in comparison, the expert inter-rater F1-score was 0.596.
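
The F1-scores reported above combine precision and recall over detected dots. Given true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    """F1: harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 3 correctly detected dots with 1 spurious detection and 1 missed dot gives precision = recall = 0.75 and hence F1 = 0.75.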

PMID:39443395 | DOI:10.1007/s10278-024-01301-9

Gut metagenome-derived image augmentation and deep learning improve prediction accuracy of metabolic disease classification

Wed, 2024-10-23 06:00

Yi Chuan. 2024 Oct;46(10):886-896. doi: 10.16288/j.yczz.24-086.

ABSTRACT

In recent years, statistics and machine learning methods have been widely used to analyze the relationship between the human gut metagenome and metabolic diseases, which is of great significance for the functional annotation and development of microbial communities. In this study, we proposed a new and scalable framework for gut metagenome image augmentation and deep learning that can be used in the classification of human metabolic diseases. Each data sample in three representative human gut metagenome datasets was transformed into an image and augmented, then fed into the machine learning models of logistic regression (LR), support vector machine (SVM), Bayesian network (BN), and random forest (RF), and the deep learning models of multilayer perceptron (MLP) and convolutional neural network (CNN). Overall model performance for disease prediction was evaluated by accuracy (A), precision (P), recall (R), F1 score (F1), area under the ROC curve (AUC), and 10-fold cross-validation. The results showed that the overall performance of the MLP model was better than that of CNN, LR, SVM, BN, RF, and PopPhy-CNN, and the performance of the MLP and CNN models improved further after data augmentation (random rotation and added salt-and-pepper noise). The accuracy of the MLP model in disease prediction improved by a further 4%-11%, F1 by 1%-6%, and AUC by 5%-10%. These results show that gut metagenome image augmentation and deep learning can accurately extract microbial characteristics and effectively predict host disease phenotypes. The source code and datasets used in this study are publicly available at https://github.com/HuaXWu/GM_ML_Classification.git.
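
The augmentations named above (random rotation and salt-and-pepper noise) can be sketched as follows; the noise fraction and the restriction to 90-degree rotations are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def augment(img, noise_frac=0.05, seed=0):
    """Random 90-degree rotation plus salt-and-pepper noise on a grayscale image."""
    rng = np.random.default_rng(seed)
    out = np.rot90(img, k=rng.integers(4)).copy()
    mask = rng.random(out.shape) < noise_frac    # pixels to corrupt
    salt = rng.random(out.shape) < 0.5           # half salt (255), half pepper (0)
    out[mask & salt] = 255
    out[mask & ~salt] = 0
    return out
```

Applied to a metagenome-derived image, this perturbs a small fraction of pixels to the extremes while leaving the overall abundance pattern intact.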

PMID:39443316 | DOI:10.16288/j.yczz.24-086

Categories: Literature Watch

Automated quantification of cerebral microbleeds in susceptibility-weighted MRI: association with vascular risk factors, white matter hyperintensity burden, and cognitive function

Wed, 2024-10-23 06:00

AJNR Am J Neuroradiol. 2024 Oct 23:ajnr.A8552. doi: 10.3174/ajnr.A8552. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: To train and validate a deep learning (DL)-based segmentation model for cerebral microbleeds (CMB) on susceptibility-weighted MRI, and to find associations between CMB, cognitive impairment, and vascular risk factors.

MATERIALS AND METHODS: Participants in this single-institution retrospective study underwent brain MRI to evaluate cognitive impairment between January and September 2023. For training the DL model, the nnU-Net framework was used without modifications. The DL model's performance was evaluated on independent internal and external validation datasets. Linear regression analysis was used to find associations between log-transformed CMB numbers, cognitive function (mini-mental status examination [MMSE]), white matter hyperintensity (WMH) burden, and clinical vascular risk factors (age, sex, hypertension, diabetes, lipid profiles, and body mass index).

RESULTS: Training of the DL model (n = 287) resulted in a robust segmentation performance with an average Dice score of 0.73 (95% CI, 0.67-0.79) in an internal validation set (n = 67) and modest performance in an external validation set (Dice score = 0.46, 95% CI, 0.33-0.59, n = 68). In a temporally independent clinical dataset (n = 448), older age, hypertension, and WMH burden were significantly associated with CMB numbers in all distributions (total, lobar, deep, and cerebellar; all P <.01). MMSE was significantly associated with hyperlipidemia (β = 1.88, 95% CI, 0.96-2.81, P <.001), WMH burden (β = -0.17 per 1% WMH burden, 95% CI, -0.27 to -0.08, P <.001), and total CMB number (β = -0.01 per 1 CMB, 95% CI, -0.02 to -0.001, P = .04) after adjusting for age and sex.
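The Dice score reported for the segmentation model is a standard overlap metric between a predicted and a ground-truth binary mask. A minimal NumPy sketch (the toy masks below are illustrative, not from the study):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

gt = np.zeros((8, 8), dtype=int)
gt[2:5, 2:5] = 1            # 9-voxel ground-truth "microbleed"
pred = np.zeros((8, 8), dtype=int)
pred[3:6, 3:6] = 1          # overlapping 9-voxel prediction
print(round(dice_score(pred, gt), 3))   # overlap = 4 voxels -> 2*4/18 ≈ 0.444
```

A score of 1.0 means perfect overlap; the study's 0.73 internal vs. 0.46 external scores quantify how much performance drops on out-of-institution data.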

CONCLUSIONS: The DL model showed a robust segmentation performance for CMB. In all distributions, CMB had significant positive associations with WMH burden. Increased WMH burden and CMB numbers were associated with decreased cognitive function.

ABBREVIATIONS: CMB = cerebral microbleed; DL = deep learning; DSC = dice similarity coefficient; MMSE = mini-mental status examination; SVD = small vessel disease; SWI = susceptibility-weighted image; WMH = white matter hyperintensity.

PMID:39443150 | DOI:10.3174/ajnr.A8552

Categories: Literature Watch

Empowering informed choices: How computer vision can assist consumers in making decisions about meat quality

Wed, 2024-10-23 06:00

Meat Sci. 2024 Sep 21;219:109675. doi: 10.1016/j.meatsci.2024.109675. Online ahead of print.

ABSTRACT

Consumers often find it challenging to assess meat sensory quality, which is influenced by tenderness and intramuscular fat (IMF). This study aims to develop a computer vision system (CVS) using smartphone images to (1) classify beef and pork steak tenderness, (2) predict shear force (SF) and IMF content, and (3) perform a comparative evaluation between consumer assessments and the method's output. The dataset consisted of 924 beef and 514 pork steaks (one image per steak). We trained a deep neural network for image classification and regression. The model achieved an F1-score of 68.1 % in classifying beef as tender. After re-categorizing the dataset into 'tender' and 'tough', the F1-score for identifying tender increased to 76.6 %. For pork loin tenderness, the model achieved an F1-score of 81.4 %. This score slightly improved to 81.5 % after re-categorization into two classes. The regression models for predicting SF and IMF in beef steak achieved R2 values of 0.64 and 0.62, respectively, with root mean squared prediction errors (RMSEP) of 16.9 N and 2.6 %. For pork loin, the neural network predicted SF with an R2 value of 0.76 and an RMSEP of 9.15 N, and IMF with an R2 value of 0.54 and an RMSEP of 1.22 %. In 1000 paired comparisons, the neural network correctly identified the more tender beef steak in 76.5 % of cases, compared with a 46.7 % accuracy rate for human assessments. These findings suggest that a CVS can provide a more objective method for evaluating meat tenderness and IMF before purchase, potentially enhancing consumer satisfaction.
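The two regression metrics the abstract reports, R2 and RMSEP, have simple closed forms and can be sketched directly in NumPy. The shear-force values below are made-up toy numbers, not data from the study:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean squared prediction error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# toy shear-force measurements (N) and imperfect model predictions
y_true = np.array([40.0, 55.0, 70.0, 85.0, 100.0])
y_pred = np.array([45.0, 50.0, 72.0, 80.0, 105.0])
print(r2(y_true, y_pred), rmsep(y_true, y_pred))
```

RMSEP is in the units of the target (newtons for SF, percent for IMF), which is why the paper reports 16.9 N for beef SF alongside the unitless R2 of 0.64.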

PMID:39442454 | DOI:10.1016/j.meatsci.2024.109675

Categories: Literature Watch

Target-specified reference-based deep learning network for joint image deblurring and resolution enhancement in surgical zoom lens camera calibration

Wed, 2024-10-23 06:00

Comput Biol Med. 2024 Oct 22;183:109309. doi: 10.1016/j.compbiomed.2024.109309. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: For the augmented reality of surgical navigation, which overlays a 3D model of the surgical target on an image, accurate camera calibration is imperative. However, when the checkerboard images for calibration are captured with a surgical microscope at high magnification, blur arises from the narrow depth of focus, and blocking artifacts appear around fine edges owing to limited resolution. These artifacts strongly affect the localization of the checkerboard corner points, resulting in inaccurate calibration and, in turn, large displacement in the augmented reality overlay. To solve this problem, in this study we proposed a novel target-specific deep learning network that simultaneously corrects the blur and enhances the spatial resolution of images for surgical zoom lens camera calibration.

METHODS: As an end-to-end convolutional deep neural network, the proposed network is specifically designed to enhance the checkerboard images used in camera calibration. Through the symmetric architecture of the network, which consists of encoding and decoding layers, the distinctive spatial features of the encoding layers are transferred and merged with the outputs of the decoding layers. Additionally, by integrating a multi-frame framework, including subpixel motion estimation and an ideal reference image, with the symmetric architecture, joint image deblurring and resolution enhancement were achieved efficiently.
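The multi-frame motion estimation mentioned in the methods is commonly done with phase correlation, which recovers the translation between two frames from the phase of their cross-power spectrum; subpixel variants refine the peak of the same correlation surface. A minimal integer-shift sketch with NumPy (not the paper's implementation):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation of `moved` relative to
    `ref` from the peak of the inverse FFT of the normalized
    cross-power spectrum."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moved)
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulate a shifted frame
print(phase_correlation_shift(ref, moved))        # (3, -5)
```

Aligning frames this way is what lets a multi-frame network fuse several blurry observations of the same checkerboard into one sharper estimate.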

RESULTS: Experimental comparisons verified the capability of the proposed method to improve both the subjective and objective performance of surgical microscope calibration. Furthermore, we confirmed that the augmented reality overlap ratio, which quantitatively indicates augmented reality accuracy, obtained from calibration with images enhanced by the proposed method is higher than that of previous methods.

CONCLUSIONS: These findings suggest that the proposed network produces sharp, high-resolution images from blurry, low-resolution inputs. Furthermore, it demonstrates superior performance in camera calibration with surgical microscope images, showing its potential for practical surgical navigation.

PMID:39442443 | DOI:10.1016/j.compbiomed.2024.109309

Categories: Literature Watch
