Deep learning

The development and validation of a prognostic prediction modeling study in acute myocardial infarction patients after percutaneous coronary intervention: hemorrhea and major cardiovascular adverse events

Thu, 2024-10-24 06:00

J Thorac Dis. 2024 Sep 30;16(9):6216-6228. doi: 10.21037/jtd-24-1362. Epub 2024 Sep 26.

ABSTRACT

BACKGROUND: Percutaneous coronary intervention (PCI) is one of the most important diagnostic and therapeutic techniques in cardiology. At present, traditional models for predicting postoperative events after PCI perform poorly, whereas machine learning has great potential for risk identification and prediction. Machine learning can reduce overfitting through regularization, cross-validation, and ensemble learning, making models more accurate when predicting on large volumes of complex, unseen data. This study sought to identify the risk of hemorrhea and major adverse cardiovascular events (MACEs) in patients after PCI through machine learning.

METHODS: The study population consisted of 7,931 patients who underwent PCI at Jiangsu Provincial Hospital and The Affiliated Wuxi Second People's Hospital from January 2007 to January 2022. The risk of postoperative hemorrhea and MACE (including cardiac death and in-stent restenosis) was predicted from 53 clinical features recorded after admission. The population was split into a training set and a validation set by simple randomization. Different machine learning algorithms, including eXtreme Gradient Boosting (XGBoost), random forest (RF), and a deep neural network (DNN), were trained to build prediction models, with 5-fold cross-validation applied to reduce error. Predictive performance was compared using several evaluation indexes: the area under the receiver operating characteristic (ROC) curve (AUC), accuracy (Acc), sensitivity (Sens), specificity (Spec), and net reclassification improvement (NRI). To improve model interpretability and identify risk factors individually, SHapley Additive exPlanations (SHAP) was introduced.
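
The evaluation indexes named above are standard binary-classification metrics. As a rough illustration (not the authors' code), sensitivity, specificity, and AUC can be computed from labels and predicted scores as follows, with AUC in its rank-based (Mann-Whitney) formulation:

```python
def sensitivity_specificity(y_true, y_pred):
    # Sens = TP/(TP+FN); Spec = TN/(TN+FP), for 0/1 labels and predictions.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    # AUC as the probability that a randomly chosen positive is scored
    # higher than a randomly chosen negative (ties count one half).
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives AUC = 1.0; a random one averages 0.5, which is why values above 0.9, as reported below, indicate strong discrimination.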

RESULTS: In this study, 306 patients (3.9%) experienced hemorrhea, 107 (1.3%) experienced cardiac death, and 218 (2.7%) developed in-stent restenosis. Baseline characteristics did not differ significantly between the training and validation sets, except for previous PCI and statin use. XGBoost was the best predictor of every event: hemorrhea [AUC: 0.921, 95% confidence interval (CI): 0.864-0.978; Acc: 0.845; Sens: 0.851; Spec: 0.837; NRI: 0.140], cardiac death (AUC: 0.939, 95% CI: 0.903-0.975; Acc: 0.914; Sens: 0.950; Spec: 0.800; NRI: 0.148), and in-stent restenosis (AUC: 0.915, 95% CI: 0.863-0.967; Acc: 0.834; Sens: 0.778; Spec: 0.902; NRI: 0.077). SHAP showed that the number of stents had the greatest influence on hemorrhea, while age and drug-coated balloon use were the main factors in cardiac death and in-stent restenosis (all P<0.05).

CONCLUSIONS: The XGBoost (machine learning) model outperformed the traditional logistic regression model in identifying hemorrhea and MACE after PCI. Machine learning models can serve as tools for risk prediction. The model described in this study can personalize the prediction of hemorrhea and MACE after PCI for specific patients, helping clinicians adjust modifiable features.

PMID:39444902 | PMC:PMC11494537 | DOI:10.21037/jtd-24-1362

Categories: Literature Watch

Optimising ovarian tumor classification using a novel CT sequence selection algorithm

Thu, 2024-10-24 06:00

Sci Rep. 2024 Oct 23;14(1):25010. doi: 10.1038/s41598-024-75555-2.

ABSTRACT

Gynaecological cancers, especially ovarian cancer, remain a critical public health issue, particularly in regions like India, where challenges related to cancer awareness, variable pathology, and limited access to screening facilities often lead to diagnosis at advanced stages and poorer outcomes for patients. The goal of this study is to enhance the accuracy of classifying ovarian tumours, with a focus on distinguishing between malignant and early-stage cases, by applying advanced deep learning methods. We utilized three pre-trained deep learning models, Xception, ResNet50V2, and ResNet50V2FPN, to classify ovarian tumours using publicly available computed tomography (CT) scan data. To further improve performance, we developed a novel CT Sequence Selection Algorithm, which optimises the use of CT images for more precise classification of ovarian tumours. The models were trained and evaluated on selected TIFF images. Comparative evaluation of the ResNet50V2FPN model, with and without the CT Sequence Selection Algorithm, demonstrates the superiority of the proposed algorithm over existing state-of-the-art methods. This research presents a promising approach for improving the early detection and management of gynaecological cancers, with potential benefits for patient outcomes, especially in areas with limited healthcare resources.

PMID:39443517 | DOI:10.1038/s41598-024-75555-2

sChemNET: a deep learning framework for predicting small molecules targeting microRNA function

Wed, 2024-10-23 06:00

Nat Commun. 2024 Oct 23;15(1):9149. doi: 10.1038/s41467-024-49813-w.

ABSTRACT

MicroRNAs (miRNAs) have been implicated in human disorders, from cancers to infectious diseases. Targeting miRNAs or their target genes with small molecules offers opportunities to modulate dysregulated cellular processes linked to disease. Yet predicting small molecules associated with miRNAs remains challenging because small molecule-miRNA datasets are small. Herein, we develop a generalized deep learning framework, sChemNET, for predicting small molecules affecting miRNA bioactivity based on chemical structure and sequence information. sChemNET overcomes the limitation of sparse chemical information through an objective function that allows the neural network to learn chemical space from a large body of chemical structures not yet known to affect miRNAs. We experimentally validated small molecules predicted to act on miR-451 or its targets and tested their role in erythrocyte maturation during zebrafish embryogenesis. We also tested small molecules targeting the miR-181 network and other miRNAs using in vitro and in vivo experiments. We demonstrate that our machine-learning framework can predict bioactive small molecules targeting miRNAs or their targets in humans and other mammalian organisms.

PMID:39443444 | DOI:10.1038/s41467-024-49813-w

SpliceTransformer predicts tissue-specific splicing linked to human diseases

Wed, 2024-10-23 06:00

Nat Commun. 2024 Oct 23;15(1):9129. doi: 10.1038/s41467-024-53088-6.

ABSTRACT

We present SpliceTransformer (SpTransformer), a deep-learning framework that predicts tissue-specific RNA splicing alterations linked to human diseases based on genomic sequence. SpTransformer outperforms all previous methods on splicing prediction. Application to approximately 1.3 million genetic variants in the ClinVar database reveals that splicing alterations account for 60% of intronic and synonymous pathogenic mutations, and occur at different frequencies across tissue types. Importantly, tissue-specific splicing alterations match their clinical manifestations independent of gene expression variation. We validate the enrichment in three brain disease datasets involving over 164,000 individuals. Additionally, we identify single nucleotide variations that cause brain-specific splicing alterations, and find disease-associated genes harboring these single nucleotide variations with distinct expression patterns involved in diverse biological processes. Finally, SpTransformer analysis of whole exome sequencing data from blood samples of patients with diabetic nephropathy predicts kidney-specific RNA splicing alterations with 83% accuracy, demonstrating the potential to infer disease-causing tissue-specific splicing events. SpTransformer provides a powerful tool to guide biological and clinical interpretations of human diseases.

PMID:39443442 | DOI:10.1038/s41467-024-53088-6

Anatomical landmark detection on bi-planar radiographs for predicting spinopelvic parameters

Wed, 2024-10-23 06:00

Spine Deform. 2024 Oct 23. doi: 10.1007/s43390-024-00990-0. Online ahead of print.

ABSTRACT

INTRODUCTION: Accurate landmark detection is essential for precise analysis of anatomical structures, supporting diagnosis, treatment planning, and monitoring in patients with spinal deformities. Conventional methods rely on laborious landmark identification by medical experts, which motivates automation. The proposed deep learning pipeline processes bi-planar radiographs to determine spinopelvic parameters and Cobb angles without manual supervision.

METHODS: The dataset used for training and evaluation consisted of 555 bi-planar radiographs from un-instrumented patients, which were manually annotated by medical professionals. The pipeline performed a pre-processing step to determine regions of interest (ROIs), including the cervical spine, thoracolumbar spine, sacrum, and pelvis. For each ROI, a segmentation network was trained to identify vertebral bodies and pelvic landmarks. The U-Net architecture was trained on 455 bi-planar radiographs using binary cross-entropy loss. The post-processing algorithm determined spinal alignment and angular parameters based on the segmentation output. We evaluated the pipeline on a test set of 100 previously unseen bi-planar radiographs, using the mean absolute difference between annotated and predicted landmarks as the performance metric. The spinopelvic parameter predictions of the pipeline were compared to the measurements of two experienced medical professionals using the intraclass correlation coefficient (ICC) and mean absolute deviation (MAD).

RESULTS: The pipeline successfully predicted the Cobb angles in 61% of all test cases, achieving a mean absolute difference of 3.3° (3.6°) and an averaged ICC of 0.88. For thoracic kyphosis, lumbar lordosis, sagittal vertical axis, sacral slope, pelvic tilt, and pelvic incidence, the pipeline produced reasonable outputs in 69%, 58%, 86%, 85%, 84%, and 84% of the cases, respectively. The MAD was 5.6° (7.8°), 4.7° (4.3°), 2.8 mm (3.0 mm), 4.5° (7.2°), 1.8° (1.8°), and 5.3° (7.7°), while the ICC was measured at 0.69, 0.82, 0.99, 0.61, 0.96, and 0.70, respectively.

CONCLUSION: Despite limitations in patients with severe pathologies and high BMI, the pipeline automatically predicted coronal and sagittal spinopelvic parameters, which has the potential to simplify clinical routines and large-scale retrospective data analysis.

PMID:39443425 | DOI:10.1007/s43390-024-00990-0

Identification of nitric oxide-mediated necroptosis as the predominant death route in Parkinson's disease

Wed, 2024-10-23 06:00

Mol Biomed. 2024 Oct 24;5(1):44. doi: 10.1186/s43556-024-00213-y.

ABSTRACT

Parkinson's disease (PD) involves multiple forms of neuronal cell death, but the dominant pathway involved in disease progression remains unclear. This study employed RNA sequencing (RNA-seq) of brain tissue to explore gene expression patterns across different stages of PD. Using the Scaden deep learning algorithm, we predicted neurocyte subtypes and modelled dynamic interactions for five classic cell death pathways to identify the predominant routes of neuronal death during PD progression. Our cell type-specific analysis revealed an increasing shift towards necroptosis, which was strongly correlated with nitric oxide synthase (NOS) expression across most neuronal subtypes. In vitro experiments confirmed that nitric oxide (NO) is a key mediator of necroptosis, leading to nuclear shrinkage and decreased mitochondrial membrane potential via phosphorylation of the RIPK1/RIPK3/MLKL signalling cascade. Importantly, specific necroptosis inhibitors significantly mitigated neuronal damage in both in vitro and in vivo PD models. Further analysis revealed that NO-mediated necroptosis is prevalent in early-onset Alzheimer's disease (AD) and amyotrophic lateral sclerosis (ALS) and across multiple brain regions, but not in brain tumours. Our findings suggest that NO-mediated necroptosis is a critical pathway in PD and other neurodegenerative disorders, providing potential targets for therapeutic intervention.

PMID:39443410 | DOI:10.1186/s43556-024-00213-y

Deep Learning Segmentation of Chromogenic Dye RNAscope From Breast Cancer Tissue

Wed, 2024-10-23 06:00

J Imaging Inform Med. 2024 Oct 23. doi: 10.1007/s10278-024-01301-9. Online ahead of print.

ABSTRACT

RNAscope staining of breast cancer tissue allows pathologists to deduce genetic characteristics of the cancer by inspection at the microscopic level, which can lead to better diagnosis and treatment. Chromogenic RNAscope staining is easy to fit into existing pathology workflows, but manually analyzing the resulting tissue samples is time-consuming, and there is a lack of peer-reviewed, performant solutions for automated analysis of chromogenic RNAscope staining. This paper covers the development and optimization of a novel deep learning method focused on accurate segmentation of RNAscope dots (which signify gene expression) from breast cancer tissue. The deep learning network is convolutional and uses ConvNeXt as its backbone. The upscaling portions of the network use custom, heavily regularized blocks to prevent overfitting and early convergence on suboptimal solutions. The resulting network is modest in size for a segmentation network and functions well with little training data. The network was also able to outperform manual expert annotation at finding the positions of RNAscope dots, with a final F1-score of 0.745; in comparison, the expert inter-rater F1-score was 0.596.
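
The F1-scores quoted above combine precision and recall into a single number. As a minimal sketch (not the paper's matching procedure, which must first pair detected dots with annotated ones), the computation from true positives, false positives, and false negatives is:

```python
def f1_score(tp, fp, fn):
    # F1 = harmonic mean of precision (tp / all detections) and
    # recall (tp / all ground-truth dots).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

The harmonic mean penalizes imbalance, so a detector cannot reach a high F1 by flooding the image with detections (recall 1, precision near 0) or by detecting almost nothing.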

PMID:39443395 | DOI:10.1007/s10278-024-01301-9

Gut metagenome-derived image augmentation and deep learning improve prediction accuracy of metabolic disease classification

Wed, 2024-10-23 06:00

Yi Chuan. 2024 Oct;46(10):886-896. doi: 10.16288/j.yczz.24-086.

ABSTRACT

In recent years, statistics and machine learning methods have been widely used to analyze the relationship between the human gut microbial metagenome and metabolic diseases, which is of great significance for the functional annotation and development of microbial communities. In this study, we proposed a new, scalable framework for gut metagenome image augmentation and deep learning that can be used in the classification of human metabolic diseases. Each data sample in three representative human gut metagenome datasets was transformed into an image and augmented, then put into the machine learning models of logistic regression (LR), support vector machine (SVM), Bayesian network (BN), and random forest (RF), and the deep learning models of multilayer perceptron (MLP) and convolutional neural network (CNN). Model performance in disease prediction was verified by accuracy (A), precision (P), recall (R), F1 score (F1), area under the ROC curve (AUC), and 10-fold cross-validation. The results showed that the overall performance of the MLP model was better than that of CNN, LR, SVM, BN, RF, and PopPhy-CNN, and the performance of the MLP and CNN models was further improved after data augmentation (random rotation and addition of salt-and-pepper noise). The accuracy of the MLP model in disease prediction improved by a further 4-11%, F1 by 1-6%, and AUC by 5-10%. These results showed that gut metagenome image augmentation and deep learning can accurately extract microbial characteristics and effectively predict the host disease phenotype. The source code and datasets used in this study are publicly accessible at https://github.com/HuaXWu/GM_ML_Classification.git.
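
The framework's first step, mapping each metagenome sample to an image, is described only at a high level. A hypothetical minimal version of such a transform (zero-pad a 1-D abundance vector and reshape it row-wise into a grid; this is an illustrative scheme, not the paper's exact mapping) might look like:

```python
def vector_to_image(abundances, width):
    # Zero-pad the taxon-abundance vector to a multiple of `width`,
    # then slice it row-wise into a 2-D grid suitable for a CNN input.
    padded = abundances + [0.0] * (-len(abundances) % width)
    return [padded[i:i + width] for i in range(0, len(padded), width)]
```

Augmentations such as random rotation and salt-and-pepper noise, mentioned above, are then applied to these 2-D grids rather than to the raw vectors.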

PMID:39443316 | DOI:10.16288/j.yczz.24-086

Automated quantification of cerebral microbleeds in susceptibility-weighted MRI: association with vascular risk factors, white matter hyperintensity burden, and cognitive function

Wed, 2024-10-23 06:00

AJNR Am J Neuroradiol. 2024 Oct 23:ajnr.A8552. doi: 10.3174/ajnr.A8552. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: To train and validate a deep learning (DL)-based segmentation model for cerebral microbleeds (CMB) on susceptibility-weighted MRI; and to find associations between CMB, cognitive impairment, and vascular risk factors.

MATERIALS AND METHODS: Participants in this single-institution retrospective study underwent brain MRI to evaluate cognitive impairment between January and September 2023. For training the DL model, the nnU-Net framework was used without modifications. The DL model's performance was evaluated on independent internal and external validation datasets. Linear regression analysis was used to find associations between log-transformed CMB numbers, cognitive function (mini-mental status examination [MMSE]), white matter hyperintensity (WMH) burden, and clinical vascular risk factors (age, sex, hypertension, diabetes, lipid profiles, and body mass index).

RESULTS: Training of the DL model (n = 287) resulted in robust segmentation performance, with an average Dice score of 0.73 (95% CI, 0.67-0.79) in an internal validation set (n = 67) and modest performance in an external validation set (Dice score = 0.46; 95% CI, 0.33-0.59; n = 68). In a temporally independent clinical dataset (n = 448), older age, hypertension, and WMH burden were significantly associated with CMB numbers in all distributions (total, lobar, deep, and cerebellar; all P < .01). MMSE was significantly associated with hyperlipidemia (β = 1.88; 95% CI, 0.96 to 2.81; P < .001), WMH burden (β = -0.17 per 1% WMH burden; 95% CI, -0.27 to -0.08; P < .001), and total CMB number (β = -0.01 per 1 CMB; 95% CI, -0.02 to -0.001; P = .04) after adjusting for age and sex.
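
The Dice score used above measures the overlap between a predicted and a reference segmentation mask. On flattened binary masks it reduces to the following (an illustration of the metric, not the study's pipeline):

```python
def dice(mask_a, mask_b):
    # Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|),
    # for masks given as flat sequences of 0/1 values.
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))
```

A score of 1.0 means perfect overlap; the drop from 0.73 (internal) to 0.46 (external) quantifies how much the model degrades on out-of-distribution scanners and protocols.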

CONCLUSIONS: The DL model showed a robust segmentation performance for CMB. In all distributions, CMB had significant positive associations with WMH burden. Increased WMH burden and CMB numbers were associated with decreased cognitive function.

ABBREVIATIONS: CMB = cerebral microbleed; DL = deep learning; DSC = Dice similarity coefficient; MMSE = mini-mental status examination; SVD = small vessel disease; SWI = susceptibility-weighted image; WMH = white matter hyperintensity.

PMID:39443150 | DOI:10.3174/ajnr.A8552

Empowering informed choices: How computer vision can assist consumers in making decisions about meat quality

Wed, 2024-10-23 06:00

Meat Sci. 2024 Sep 21;219:109675. doi: 10.1016/j.meatsci.2024.109675. Online ahead of print.

ABSTRACT

Consumers often find it challenging to assess meat sensory quality, which is influenced by tenderness and intramuscular fat (IMF). This study aimed to (1) develop a computer vision system (CVS) using smartphone images to classify beef and pork steak tenderness, (2) predict shear force (SF) and IMF content, and (3) compare consumer assessments against the method's output. The dataset consisted of 924 beef and 514 pork steaks (one image per steak). We trained a deep neural network for image classification and regression. The model achieved an F1-score of 68.1% in classifying beef as tender. After re-categorizing the dataset into 'tender' and 'tough', the F1-score for identifying tender beef increased to 76.6%. For pork loin tenderness, the model achieved an F1-score of 81.4%, which improved slightly to 81.5% after re-categorization into two classes. The regression models for predicting SF and IMF in beef steak achieved R2 values of 0.64 and 0.62, respectively, with root mean squared prediction errors (RMSEP) of 16.9 N and 2.6%. For pork loin, the neural network predicted SF with an R2 of 0.76 and an RMSEP of 9.15 N, and IMF with an R2 of 0.54 and an RMSEP of 1.22%. In 1,000 paired comparisons, the neural network correctly identified the more tender beef steak in 76.5% of cases, compared with a 46.7% accuracy rate for human assessments. These findings suggest that a CVS can provide a more objective method for evaluating meat tenderness and IMF before purchase, potentially enhancing consumer satisfaction.
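
The regression metrics reported above, R2 and RMSEP, can be sketched as follows (a generic illustration of the formulas, not the study's code):

```python
def r2_rmsep(y_true, y_pred):
    # R2 = 1 - SS_res / SS_tot (variance explained by the model);
    # RMSEP = square root of the mean squared prediction error.
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5
```

RMSEP is in the units of the target (newtons for SF, percentage points for IMF), which is why the 16.9 N beef error and 9.15 N pork error are directly comparable to typical shear-force ranges.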

PMID:39442454 | DOI:10.1016/j.meatsci.2024.109675

Target-specified reference-based deep learning network for joint image deblurring and resolution enhancement in surgical zoom lens camera calibration

Wed, 2024-10-23 06:00

Comput Biol Med. 2024 Oct 22;183:109309. doi: 10.1016/j.compbiomed.2024.109309. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: For the augmented reality of surgical navigation, which overlays a 3D model of the surgical target on an image, accurate camera calibration is imperative. However, when the checkerboard images for calibration are captured using a surgical microscope at high magnification, blur owing to the narrow depth of focus and blocking artifacts caused by limited resolution around fine edges occur. These artifacts strongly affect the localization of the checkerboard corner points in these images, resulting in inaccurate calibration and, in turn, large displacement in the augmented reality overlay. To solve this problem, we propose a novel target-specific deep learning network that simultaneously reduces blur and enhances the spatial resolution of images for surgical zoom lens camera calibration.

METHODS: As a scheme of an end-to-end convolutional deep neural network, the proposed network is specifically intended for the checkerboard image enhancement used in camera calibration. Through the symmetric architecture of the network, which consists of encoding and decoding layers, the distinctive spatial features of the encoding layers are transferred and merged with the output of the decoding layers. Additionally, by integrating a multi-frame framework including subpixel motion estimation and ideal reference image with the symmetric architecture, joint image deblurring and enhanced resolution were efficiently achieved.

RESULTS: From experimental comparisons, we verified the capability of the proposed method to improve the subjective and objective performances of surgical microscope calibration. Furthermore, we confirmed that the augmented reality overlap ratio, which quantitatively indicates augmented reality accuracy, from calibration with the enhanced image of the proposed method is higher than that of the previous methods.

CONCLUSIONS: These findings suggest that the proposed network provides sharp high-resolution images from blurry low-resolution inputs. Furthermore, we demonstrate superior performance in camera calibration by using surgical microscopic images, thus showing its potential applications in the field of practical surgical navigation.

PMID:39442443 | DOI:10.1016/j.compbiomed.2024.109309

The evaluation of a novel single-lead biopotential device for home sleep testing

Wed, 2024-10-23 06:00

Sleep. 2024 Oct 23:zsae248. doi: 10.1093/sleep/zsae248. Online ahead of print.

ABSTRACT

STUDY OBJECTIVES: This paper reports on the clinical evaluation of the sleep staging performance of a novel single-lead biopotential device.

METHODS: 133 patients suspected of obstructive sleep apnea were included in a multi-site cohort. All patients underwent polysomnography (PSG) and received the study device, a single-lead biopotential measurement device attached to the forehead. Clinical endpoint parameters were selected to evaluate the device's ability to determine sleep stages. Finally, the device's performance was compared to the clinical study results of comparable devices.

RESULTS: Concurrent PSG and study device data were successfully acquired for 106 of the 133 included patients. The results demonstrated significant similarity in overall sleep staging performance (5-stage Cohen's kappa of 0.70) to the best-performing reduced-lead biopotential device to which it was compared (5-stage Cohen's kappa of 0.73). Unlike the comparator devices, the study device reported a higher Cohen's kappa for REM (0.78) than for N3 (0.61), which can be explained by its particular measuring electrode placement (diagonally across the lateral cross-section of the eye). This placement was optimized to ensure the polarity of rapid eye movements could be adequately captured, enhancing the capacity to discriminate between N3 and REM sleep when using only a single-lead setup.
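
Cohen's kappa, the agreement statistic quoted above, corrects the observed epoch-by-epoch agreement between two scorers for the agreement expected by chance. A minimal illustration:

```python
def cohens_kappa(labels_a, labels_b, classes):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected
    # by chance agreement p_e, computed from each rater's marginal
    # class frequencies.
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in classes)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is preferred over raw accuracy for sleep staging because stage distributions are highly imbalanced (N2 dominates most nights), so chance agreement alone can be substantial.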

CONCLUSIONS: The results of this study demonstrate the feasibility of incorporating a single-lead biopotential extension in a reduced-channel home sleep apnea testing setup. Such incorporation could narrow the gap in the functionality of reduced-channel home sleep testing and in-lab polysomnography without compromising the patient's ease of use and comfort.

PMID:39441980 | DOI:10.1093/sleep/zsae248

Uncertainty Quantification for Deep Learning-Based Elementary Reaction Property Prediction

Wed, 2024-10-23 06:00

J Chem Inf Model. 2024 Oct 23. doi: 10.1021/acs.jcim.4c01358. Online ahead of print.

ABSTRACT

The prediction of the thermodynamic and kinetic properties of elementary reactions has shown rapid improvement due to the implementation of deep learning (DL) methods. While various studies have reported success in predicting reaction properties, the quantification of prediction uncertainty has seldom been investigated, compromising the confidence in using these predicted properties in practical applications. Here, we integrated graph convolutional neural networks (GCNN) with three uncertainty prediction techniques, including deep ensembles, Monte Carlo (MC)-dropout, and evidential learning, to provide insights into uncertainty quantification and utility. The deep ensemble model outperforms the others in accuracy and shows the highest reliability in estimating prediction uncertainty across all elementary reaction property datasets. We also verified that the deep ensemble model shows a satisfactory capability in recognizing epistemic and aleatoric uncertainties. Additionally, we adopted a Monte Carlo Tree Search method for extracting explainable reaction substructures, providing a chemical explanation for DL-predicted properties and corresponding uncertainties. Finally, to demonstrate the utility of uncertainty quantification in practical applications, we performed an uncertainty-guided calibration of the DL-constructed kinetic model, which achieved a 25% higher hit ratio in identifying dominant reaction pathways compared to calibration without uncertainty guidance.
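
Of the three techniques compared, the deep ensemble is the simplest to illustrate: several models are trained independently, and the spread of their predictions serves as the uncertainty estimate. A generic sketch (not the paper's GCNN setup):

```python
def ensemble_estimate(member_preds):
    # Deep-ensemble prediction for one input: the mean across
    # independently trained members is the point estimate, and the
    # variance across members is the uncertainty proxy.
    n = len(member_preds)
    mean = sum(member_preds) / n
    var = sum((p - mean) ** 2 for p in member_preds) / n
    return mean, var
```

When the members disagree strongly (high variance), the input likely lies outside the training distribution, which is the epistemic component the abstract reports the ensemble recognizing well.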

PMID:39441973 | DOI:10.1021/acs.jcim.4c01358

EuDockScore: Euclidean graph neural networks for scoring protein-protein interfaces

Wed, 2024-10-23 06:00

Bioinformatics. 2024 Oct 23:btae636. doi: 10.1093/bioinformatics/btae636. Online ahead of print.

ABSTRACT

MOTIVATION: Protein-protein interactions are essential for a variety of biological phenomena, including mediating biochemical reactions, cell signaling, and the immune response. Proteins seek to form interfaces that reduce overall system energy. Although determination of single-chain protein structures has been revolutionized by deep learning, complex prediction has still not been perfected. Additionally, experimentally determining structures is extremely resource- and time-intensive. An alternative is computational docking, which takes the solved individual structures of proteins and produces candidate interfaces (decoys). Decoys are then scored using a mathematical function that assesses the quality of the system, known as a scoring function. Beyond docking, scoring functions are a critical component of assessing structures produced by many protein generative models, and they serve as a final filtering step in many generative deep learning pipelines, including those that generate antibody binders and those that perform docking.

RESULTS: In this work, we present improved scoring functions for protein-protein interactions that utilize cutting-edge Euclidean graph neural network architectures to assess protein-protein interfaces. These Euclidean docking score models are known as EuDockScore and EuDockScore-Ab, with the latter being specific to antibody-antigen docking. Finally, we provide EuDockScore-AFM, a model trained on antibody-antigen outputs from AlphaFold-Multimer, which proves useful in re-ranking large numbers of AlphaFold-Multimer outputs.

AVAILABILITY: The code for these models is available at https://gitlab.com/mcfeemat/eudockscore.

PMID:39441796 | DOI:10.1093/bioinformatics/btae636

An ensemble deep learning model for medical image fusion with Siamese neural networks and VGG-19

Wed, 2024-10-23 06:00

PLoS One. 2024 Oct 23;19(10):e0309651. doi: 10.1371/journal.pone.0309651. eCollection 2024.

ABSTRACT

Multimodal medical image fusion methods, which combine complementary information from multiple imaging modalities, are among the most important and practical approaches in numerous clinical applications. Various conventional techniques have been developed for multimodality image fusion, but complex weight-map computation, fixed fusion strategies, and a lack of contextual understanding remain difficult in conventional and machine learning approaches, usually resulting in artefacts that degrade image quality. This work proposes an efficient hybrid learning model for medical image fusion that combines a pre-trained and a non-pre-trained network, VGG-19 and a Siamese neural network (SNN), via a stacking ensemble method. By leveraging the unique capabilities of each architecture, the model can effectively preserve detailed information with high visual quality across numerous combinations of image modalities, with notably improved contrast, increased resolution, and fewer artefacts. Additionally, this ensemble model is more robust when fusing various combinations of source images publicly available from the Harvard Medical Image Fusion datasets, GitHub, and Kaggle. Our proposed model is superior in visual quality and performance metrics to existing fusion methods in the literature, such as PCA+DTCWT, NSCT, DWT, DTCWT+NSCT, GADCT, CNN, and VGG-19.

PMID:39441782 | DOI:10.1371/journal.pone.0309651

Grade-Skewed Domain Adaptation via Asymmetric Bi-Classifier Discrepancy Minimization for Diabetic Retinopathy Grading

Wed, 2024-10-23 06:00

IEEE Trans Med Imaging. 2024 Oct 23;PP. doi: 10.1109/TMI.2024.3485064. Online ahead of print.

ABSTRACT

Diabetic retinopathy (DR) is a leading cause of preventable blindness worldwide. Deep learning has exhibited promising performance in the grading of DR. Certain deep learning strategies have facilitated convenient regular eye check-ups, which are crucial for managing DR and preventing severe visual impairment. However, the generalization performance on cross-center, cross-vendor, and cross-user test datasets is compromised due to domain shift. Furthermore, the presence of small lesions and the imbalanced grade distribution, resulting from the characteristics of DR grading (e.g., the progressive nature of DR disease and the design of grading standards), complicates image-level domain adaptation for DR grading. The general predictions of the models trained on grade-skewed source domains will be significantly biased toward the majority grades, which further increases the adaptation difficulty. We formulate this problem as a grade-skewed domain adaptation challenge. Under the grade-skewed domain adaptation problem, we propose a novel method for image-level supervised DR grading via Asymmetric Bi-Classifier Discrepancy Minimization (ABiD). First, we propose optimizing the feature extractor by minimizing the discrepancy between the predictions of the asymmetric bi-classifier based on two classification criteria to encourage the exploration of crucial features in adjacent grades and stretch the distribution of adjacent grades in the latent space. Moreover, the classifier difference is maximized by using the forward and inverse distribution compensation mechanism to locate easily confused instances, which avoids pseudolabel bias on the target domain. The experimental results on two public DR datasets and one private DR dataset demonstrate that our method outperforms state-of-the-art methods significantly.

PMID:39441682 | DOI:10.1109/TMI.2024.3485064

Categories: Literature Watch

Robust Myocardial Perfusion MRI Quantification with DeepFermi

Wed, 2024-10-23 06:00

IEEE Trans Biomed Eng. 2024 Oct 23;PP. doi: 10.1109/TBME.2024.3485233. Online ahead of print.

ABSTRACT

Stress perfusion cardiac magnetic resonance is an important technique for examining and assessing the blood supply of the myocardium. Currently, the majority of clinical perfusion scans are evaluated based on visual assessment by experienced clinicians. This makes the process subjective, and to this end, quantitative methods have been proposed to offer a more user-independent assessment of perfusion. These methods, however, rely on time-consuming deconvolution analysis and are susceptible to data outliers caused by artifacts due to cardiac or respiratory motion. In our work, we introduce a novel deep-learning method that integrates the commonly used Fermi function with a neural network architecture for fast, accurate, and robust myocardial perfusion quantification. This approach employs the Fermi model to ensure that the perfusion maps are consistent with measured data, while also utilizing a prior based on a 3D convolutional neural network to generalize spatio-temporal information across different patient data. Our network is trained within a self-supervised learning framework, which circumvents the need for ground-truth perfusion labels that are challenging to obtain. Furthermore, we extend this training methodology by adopting a technique that ensures estimations are resistant to data outliers, thereby improving robustness against motion artifacts. Our simulation experiments demonstrated an overall improvement in the accuracy and robustness of perfusion parameter estimation, consistently outperforming traditional deconvolution analysis algorithms across varying signal-to-noise ratio scenarios in the presence of data outliers. For the in vivo studies, our method generated robust perfusion estimates that aligned with clinical diagnoses, while being approximately five times faster than conventional algorithms.
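The Fermi model referenced here constrains the tissue impulse response to a sigmoid-shaped curve. The sketch below uses one common parameterization of the Fermi impulse response; the symbols, parameter values, and toy arterial input function are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fermi_impulse_response(t, A, t0, k, delay=0.0):
    """Fermi-function impulse response used in model-constrained perfusion
    deconvolution (one common parameterization):
        R(t) = A / (1 + exp(k * (t - delay - t0)))  for t >= delay, else 0.
    The initial amplitude R(delay) is typically read off as a measure of
    myocardial blood flow."""
    t = np.asarray(t, dtype=float)
    r = A / (1.0 + np.exp(k * (t - delay - t0)))
    return np.where(t >= delay, r, 0.0)

t = np.linspace(0, 60, 121)                     # seconds
R = fermi_impulse_response(t, A=1.0, t0=10.0, k=0.3)

# The measured tissue curve is the impulse response convolved with the
# arterial input function (AIF); a toy Gaussian bolus stands in for it here.
aif = np.exp(-0.5 * ((t - 15.0) / 4.0) ** 2)
dt = t[1] - t[0]
tissue = np.convolve(aif, R)[: t.size] * dt
```

Conventional quantification fits `A`, `t0`, and `k` per pixel by deconvolution; the paper's contribution is to let a self-supervised 3D CNN regress these parameters directly while the Fermi model keeps the maps consistent with the measured tissue curves.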

PMID:39441677 | DOI:10.1109/TBME.2024.3485233


Clinical Utility of Deep Learning Assistance for Detecting Various Abnormal Findings in Color Retinal Fundus Images: A Reader Study

Wed, 2024-10-23 06:00

Transl Vis Sci Technol. 2024 Oct 1;13(10):34. doi: 10.1167/tvst.13.10.34.

ABSTRACT

PURPOSE: To evaluate the clinical usefulness of a deep learning-based detection device for multiple abnormal findings on retinal fundus photographs for readers with varying expertise.

METHODS: Fourteen ophthalmologists (six residents, eight specialists) assessed 399 fundus images with respect to 12 major ophthalmologic findings, with or without the assistance of a deep learning algorithm, in two separate reading sessions. Sensitivity, specificity, and reading time per image were compared.
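The per-finding reader metrics compared in this study reduce to standard confusion-matrix ratios. A minimal sketch (the function name and example counts are illustrative, not from the study):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Per-finding reader metrics:
    sensitivity = TP / (TP + FN)  -- fraction of present findings detected
    specificity = TN / (TN + FP)  -- fraction of absent findings correctly ruled out
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one finding read by one ophthalmologist:
sens, spec = sensitivity_specificity(tp=80, fp=10, tn=90, fn=20)
```

Each reader contributes one such pair per finding and per session (with and without assistance), which is what the paired comparisons below operate on.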

RESULTS: With algorithmic assistance, readers significantly improved in sensitivity for all 12 findings (P < 0.05) but tended to be less specific (P < 0.05) for hemorrhage, drusen, membrane, and vascular abnormality, more profoundly in residents. Sensitivity without algorithmic assistance was significantly lower in residents (23.1%∼75.8%) than in specialists (55.1%∼97.1%) for nine findings, but it improved to similar levels with algorithmic assistance (67.8%∼99.4% in residents, 83.2%∼99.5% in specialists), with only hemorrhage remaining statistically significantly lower. Variances in sensitivity were significantly reduced for all findings. Reading time decreased for images with fewer than three findings, more profoundly in residents. When simulated based on images acquired from a health screening center, average reading time was estimated to be reduced by 25.9% (from 16.4 seconds to 12.1 seconds per image) for residents, and by 2.0% (from 9.6 seconds to 9.4 seconds) for specialists.

CONCLUSIONS: Deep learning-based computer-assisted detection devices increase sensitivity, reduce inter-reader variance in sensitivity, and reduce reading time in less complicated images.

TRANSLATIONAL RELEVANCE: This study evaluated the influence that algorithmic assistance in detecting abnormal findings on retinal fundus photographs has on clinicians, possibly predicting its influence on clinical application.

PMID:39441571 | DOI:10.1167/tvst.13.10.34


Deep learning-based approach for acquisition time reduction in ventilation SPECT in patients after lung transplantation

Wed, 2024-10-23 06:00

Radiol Phys Technol. 2024 Oct 23. doi: 10.1007/s12194-024-00853-3. Online ahead of print.

ABSTRACT

We aimed to evaluate the image quality and the diagnostic performance for chronic lung allograft dysfunction (CLAD) of lung ventilation single-photon emission computed tomography (SPECT) images acquired over a short time and enhanced with a convolutional neural network (CNN) in patients after lung transplantation, and to explore the feasibility of short acquisition times. We retrospectively identified 93 consecutive lung-transplant recipients who underwent ventilation SPECT/computed tomography (CT). We employed a CNN to predict full-time images from those acquired in a short time. Image quality was evaluated using the structural similarity index (SSIM) loss and the normalized mean square error (NMSE). The correlation between the functional volume/morphological volume (F/M) ratios of full-time SPECT images and predicted SPECT images was evaluated. Differences in the F/M ratio were evaluated using Bland-Altman plots, and diagnostic performance was compared using the area under the curve (AUC). The learning curve, obtained using MSE, converged within 100 epochs. The NMSE was significantly lower (P < 0.001) and the SSIM was significantly higher (P < 0.001) for the CNN-predicted SPECT images compared to the short-time SPECT images. The F/M ratios of full-time SPECT images and predicted SPECT images showed a significant correlation (r = 0.955, P < 0.0001). The Bland-Altman plot revealed a bias of -7.90% in the F/M ratio. The AUC values were 0.942 for full-time SPECT images, 0.934 for predicted SPECT images, and 0.872 for short-time SPECT images. Our findings suggest that a deep-learning-based approach can significantly curtail the acquisition time of ventilation SPECT while preserving image quality and diagnostic accuracy for CLAD.
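The two image-quality metrics reported here are straightforward to compute. A minimal sketch: NMSE as defined is standard, while the SSIM shown is a single-window (global) simplification of the usual locally windowed metric, so both functions are illustrative rather than the study's exact implementation.

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error: ||pred - ref||^2 / ||ref||^2."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM. The standard metric averages this
    statistic over local windows; this simplification illustrates the
    luminance/contrast/structure comparison only."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Lower NMSE and higher SSIM against the full-time reference are what the abstract reports for the CNN-predicted images versus the raw short-time images.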

PMID:39441494 | DOI:10.1007/s12194-024-00853-3


Deep learning super-resolution reconstruction for fast and high-quality cine cardiovascular magnetic resonance

Wed, 2024-10-23 06:00

Eur Radiol. 2024 Oct 23. doi: 10.1007/s00330-024-11145-0. Online ahead of print.

ABSTRACT

OBJECTIVES: To compare standard-resolution balanced steady-state free precession (bSSFP) cine images with cine images acquired at low resolution but reconstructed with a deep learning (DL) super-resolution algorithm.

MATERIALS AND METHODS: Cine cardiovascular magnetic resonance (CMR) datasets (short-axis and 4-chamber views) were prospectively acquired in healthy volunteers and patients at normal resolution (cineNR: 1.89 × 1.96 mm2, reconstructed at 1.04 × 1.04 mm2) and at low resolution (2.98 × 3.00 mm2, reconstructed at 1.04 × 1.04 mm2). Low-resolution images were reconstructed using compressed-sensing DL denoising and resolution upscaling (cineDL). Left ventricular ejection fraction (LVEF), end-diastolic volume index (LVEDVi), and strain were assessed. Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Subjective image quality was assessed on a 5-point Likert scale. Student's paired t-test, Wilcoxon matched-pairs signed-rank test, and intraclass correlation coefficient (ICC) were used for statistical analysis.
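The "apparent" SNR and CNR used here are commonly computed from region-of-interest statistics; they are called apparent because DL and compressed-sensing reconstructions alter the noise distribution, so the ratios are not true SNR. A minimal sketch under that common ROI-based definition (the exact ROIs used in the study are not specified here, so these are assumptions):

```python
import numpy as np

def apparent_snr(signal_roi, noise_roi):
    """aSNR = mean signal intensity / SD of an apparently homogeneous
    (or signal-free) region."""
    return np.mean(signal_roi) / np.std(noise_roi)

def apparent_cnr(roi_a, roi_b, noise_roi):
    """aCNR = |mean(A) - mean(B)| / SD of the noise region,
    e.g. blood pool vs myocardium."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

# Hypothetical ROI intensities: blood pool ~100, myocardium ~50, noise SD 5
blood = np.full(10, 100.0)
myo = np.full(10, 50.0)
noise = np.array([-5.0, 5.0, -5.0, 5.0])
snr = apparent_snr(blood, noise)      # 20.0
cnr = apparent_cnr(blood, myo, noise) # 10.0
```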

RESULTS: Thirty participants were analyzed (37 ± 16 years; 20 healthy volunteers and 10 patients). The whole-stack short-axis acquisition duration was shorter for cineDL than for cineNR (57.5 ± 8.7 vs 98.7 ± 12.4 s; p < 0.0001). No differences were noted for: LVEF (59 ± 7 vs 59 ± 7%; ICC: 0.95 [95% confidence interval: 0.94, 0.99]; p = 0.17), LVEDVi (85.0 ± 13.5 vs 84.4 ± 13.7 mL/m2; ICC: 0.99 [0.98, 0.99]; p = 0.12), longitudinal strain (-19.5 ± 4.3 vs -19.8 ± 3.9%; ICC: 0.94 [0.88, 0.97]; p = 0.52), short-axis aSNR (81 ± 49 vs 69 ± 38; p = 0.32), aCNR (53 ± 31 vs 45 ± 27; p = 0.33), or subjective image quality (5.0 [IQR 4.9, 5.0] vs 5.0 [IQR 4.7, 5.0]; p = 0.99).

CONCLUSION: Deep-learning reconstruction of cine images acquired at a lower spatial resolution led to a decrease in acquisition times of 42% with shorter breath-holds without affecting volumetric results or image quality.

KEY POINTS:
Question: Cine CMR acquisitions are time-intensive and vulnerable to artifacts.
Findings: Low-resolution upscaled reconstructions using DL super-resolution decreased acquisition times by 35-42% without a significant difference in volumetric results or subjective image quality.
Clinical relevance: DL super-resolution reconstructions of bSSFP cine images acquired at a lower spatial resolution reduce acquisition times while preserving diagnostic accuracy, improving the clinical feasibility of cine imaging by decreasing breath-hold duration.

PMID:39441391 | DOI:10.1007/s00330-024-11145-0

