Deep learning

Correction: Deep learning assists early-detection of hypertension-mediated heart change on ECG signals

Mon, 2024-12-16 06:00

Hypertens Res. 2024 Dec 16. doi: 10.1038/s41440-024-01980-5. Online ahead of print.

NO ABSTRACT

PMID:39681651 | DOI:10.1038/s41440-024-01980-5

Categories: Literature Watch

Novel approach for Arabic fake news classification using embedding from large language features with CNN-LSTM ensemble model and explainable AI

Mon, 2024-12-16 06:00

Sci Rep. 2024 Dec 16;14(1):30463. doi: 10.1038/s41598-024-82111-5.

ABSTRACT

The widespread circulation of fake news challenges the management of low-quality information, making effective detection strategies necessary. This study addresses this critical issue by advancing fake news detection in Arabic and overcoming limitations in existing approaches. Several deep learning models are employed for text classification: Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), EfficientNetB4, Inception, Xception, ResNet, ConvLSTM, and a novel voting ensemble framework combining CNN and LSTM. The proposed framework integrates the ELMo word embedding technique, whose contextual representation capabilities are compared with GloVe, BERT, FastText, and FastText subword embeddings. Comprehensive experiments demonstrate that the proposed voting ensemble, combined with ELMo word embeddings, consistently outperforms previous approaches. It achieves an accuracy of 98.42%, precision of 98.54%, recall of 99.5%, and an F1 score of 98.93%, offering an efficient and highly effective solution for text classification tasks. Benchmarked against state-of-the-art transformer architectures, including BERT and RoBERTa, the proposed framework demonstrates competitive performance with significantly reduced inference time and enhanced interpretability, validated with a 5-fold cross-validation technique. Furthermore, this research utilizes the LIME XAI technique to provide deeper insight into the contribution of each feature in predicting a specific target class. These findings show the proposed framework's effectiveness in dealing with the issues of detecting false news, particularly in Arabic text. By generating higher performance metrics and displaying comparable results, this work opens the way for more reliable and interpretable text classification solutions.
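
The abstract does not specify how the CNN and LSTM votes are combined; a minimal sketch of soft voting (averaging class probabilities, one common reading of "voting ensemble") with entirely hypothetical model outputs:

```python
import numpy as np

def soft_vote(prob_sets):
    """Average class probabilities from several models and pick the argmax.

    prob_sets: list of (n_samples, n_classes) arrays, one per model.
    """
    avg = np.mean(prob_sets, axis=0)  # element-wise mean over the models
    return avg.argmax(axis=1)         # predicted class index per sample

# Hypothetical probabilities from a CNN and an LSTM for 3 articles
# (class 0 = real, class 1 = fake)
cnn_probs  = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
lstm_probs = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])

predictions = soft_vote([cnn_probs, lstm_probs])
```

Hard voting (majority over predicted labels) is the other common variant; the paper's exact scheme may differ.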

PMID:39681596 | DOI:10.1038/s41598-024-82111-5

Categories: Literature Watch

Artificial Intelligence for Dental Implant Classification and Peri-Implant Pathology Identification in 2D Radiographs: A Systematic Review

Mon, 2024-12-16 06:00

J Dent. 2024 Dec 14:105533. doi: 10.1016/j.jdent.2024.105533. Online ahead of print.

ABSTRACT

OBJECTIVE: This systematic review aimed to summarize and evaluate the available information regarding the performance of artificial intelligence on dental implant classification and peri-implant pathology identification in 2D radiographs.

DATA SOURCES: Electronic databases (Medline, Embase, and Cochrane) were searched up to September 2024 for relevant observational studies and both randomized and controlled clinical trials. The search was limited to studies published in English from the last 7 years. Two reviewers independently conducted both study selection and data extraction. Risk of bias assessment was also performed individually by both operators using the Quality Assessment Diagnostic Tool (QUADAS-2).

STUDY SELECTION: Of the 1,465 records identified, 29 references were selected for qualitative analysis. The study characteristics were tabulated in a self-designed table. The QUADAS-2 tool identified 10 studies with a high risk of bias and 15 with an unclear risk, while only four were categorized as low risk of bias. Overall, accuracy rates for dental implant classification ranged from 67% to 99%. Peri-implant pathology identification showed detection accuracy rates above 78.6%.

CONCLUSIONS: While AI-based models, particularly convolutional neural networks, have shown high accuracy in dental implant classification and peri-implant pathology detection, several limitations must be addressed before widespread clinical application. More advanced AI techniques, such as Federated Learning should be explored to improve the generalizability and efficiency of these models in clinical practice.

CLINICAL SIGNIFICANCE: AI-based models can help clinicians accurately classify unknown dental implants and enable early detection of peri-implantitis, improving patient outcomes and streamlining treatment planning.

PMID:39681182 | DOI:10.1016/j.jdent.2024.105533

Categories: Literature Watch

Predicting molecular subtypes of breast cancer based on multi-parametric MRI dataset using deep learning method

Mon, 2024-12-16 06:00

Magn Reson Imaging. 2024 Dec 14:110305. doi: 10.1016/j.mri.2024.110305. Online ahead of print.

ABSTRACT

PURPOSE: To develop a multi-parametric MRI model for the prediction of molecular subtypes of breast cancer using five types of breast cancer preoperative MRI images.

METHODS: In this study, we retrospectively analyzed clinical data and five types of MRI images (FS-T1WI, T2WI, Contrast-enhanced T1-weighted imaging (T1-C), DWI, and ADC) from 325 patients with pathologically confirmed breast cancer. Using the five types of MRI images as inputs to the ResNeXt50 model respectively, five base models were constructed, and then the outputs of the five base models were fused using an ensemble learning approach to develop a multi-parametric MRI model. Breast cancer was classified into four molecular subtypes based on immunohistochemical results: luminal A, luminal B, human epidermal growth factor receptor 2-positive (HER2-positive), and triple-negative (TN). The whole dataset was randomly divided into a training set (n = 260; 76 luminal A, 80 luminal B, 50 HER2-positive, 54 TN) and a testing set (n = 65; 20 luminal A, 20 luminal B, 12 HER2-positive, 13 TN). Accuracy, sensitivity, specificity, receiver operating characteristic curve (ROC) and area under the curve (AUC) were calculated to assess the predictive performance of the models.
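
The evaluation metrics listed above reduce to simple confusion-matrix arithmetic (computed one-vs-rest per molecular subtype); a quick sketch with illustrative counts, not the paper's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from a 2x2 confusion matrix."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical one-vs-rest counts for one subtype in a 65-case testing set
acc, sens, spec = diagnostic_metrics(tp=18, fp=3, tn=40, fn=4)
```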

RESULTS: In the testing set, for the assessment of the four molecular subtypes of breast cancer, the multi-parametric MRI model yielded AUCs of 0.859-0.912; the FS-T1WI, T2WI, T1-C, DWI, and ADC base models achieved AUCs of 0.632-0.814, 0.641-0.788, 0.621-0.709, 0.620-0.701, and 0.611-0.785, respectively.

CONCLUSION: The multi-parametric MRI model we developed outperformed the base models in predicting breast cancer molecular subtypes. Our study also showed the potential of FS-T1WI base model in predicting breast cancer molecular subtypes.

PMID:39681144 | DOI:10.1016/j.mri.2024.110305

Categories: Literature Watch

Pose analysis in free-swimming adult zebrafish, Danio rerio: "fishy" origins of movement design

Mon, 2024-12-16 06:00

Brain Behav Evol. 2024 Dec 16:1-26. doi: 10.1159/000543081. Online ahead of print.

ABSTRACT

INTRODUCTION: Movement requires maneuvers that generate thrust to either make turns or move the body forward in physical space. The computational space for perpetually controlling the relative position of every point on the body surface can be vast. We hypothesize the evolution of an efficient movement design that minimizes active (neural) control by leveraging the passive (reactive) forces between the body and the surrounding medium. To test our hypothesis, we investigate the presence of stereotypical postures during free swimming in adult zebrafish, Danio rerio.

METHODS: We perform markerless tracking using DeepLabCut, a deep learning pose estimation toolkit, to track geometric relationships between body parts. To identify putative clusters of postural configurations obtained from twelve freely behaving zebrafish, we use unsupervised multivariate time-series analysis (B-SOiD machine learning software).

RESULTS: When applied to single individuals, this method reveals a best fit of 36 to 50 clusters, in contrast to 86 clusters for data pooled from all 12 animals. The centroids of each cluster obtained over 14,000 sequential frames recorded for a single fish represent an a priori classification into relatively stable "target body postures" and inter-pose "transitional postures" that lead towards and away from a target pose. We use multidimensional scaling of mean parameter values for each cluster to map cluster centroids within two dimensions of postural space. From an a posteriori visual analysis, we condense neighboring postural variants into 15 superclusters, or core body configurations. We develop a nomenclature specifying the anteroposterior level(s) involved (upper, mid, and lower) and the degree of bending.

CONCLUSION: Our results suggest that constraining bends to mainly three levels in adult zebrafish preempts the neck, fore- and hindlimb design for maneuverability in land vertebrates.

PMID:39681106 | DOI:10.1159/000543081

Categories: Literature Watch

Ultra-fast prediction of D-pi-A organic dye absorption maximum with advanced ensemble deep learning models

Mon, 2024-12-16 06:00

Spectrochim Acta A Mol Biomol Spectrosc. 2024 Dec 9;329:125536. doi: 10.1016/j.saa.2024.125536. Online ahead of print.

ABSTRACT

The quick and precise estimation of D-π-A organic dye absorption maxima in different solvents is an important challenge for the efficient design of novel chemical structures that could improve the performance of dye-sensitized solar cells (DSSCs) and related technologies. Time-Dependent Density Functional Theory (TD-DFT) has often been employed for these predictions, but it has limitations, including high computing costs and functional dependence, particularly for solvent interactions. In this study, we introduce a rapid, high-accuracy deep-learning ensemble method using Daylight fingerprints as chemical descriptors to predict the absorption maxima (λmax) of D-π-A organic dyes in 18 different solvent environments. The approach leverages an ensemble of 10 deep learning models spanning multiple neural architectures, including convolutional networks, and demonstrates exceptional predictive power in capturing the complex relationships between molecular structure, solvent interaction, and absorption maximum. Leveraging a comprehensive range of molecular descriptors derived from organic dye fingerprints, we developed a highly accurate ensemble model with an R2 of 0.94 and a mean absolute error (MAE) of 8.6 nm, which enhances predictive accuracy and significantly reduces computational time. Additionally, we developed a user-friendly web-based platform that allows for quick prediction of absorption maxima, including solvent effects. This tool, which directly uses SMILES representations and advanced deep learning techniques, offers significant potential for accelerating the discovery of efficient dye candidates for various applications, including solar energy, environmental solutions, and medical research. This research opens the door to more effective next-generation dye design, facilitating rapid testing in a variety of fields and the design of efficient new materials.
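
The reported R² and MAE can be computed directly from predicted versus measured λmax values; a self-contained sketch with hypothetical wavelengths in nm:

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical measured and predicted absorption maxima (nm)
y_true = [450.0, 480.0, 510.0, 540.0]
y_pred = [455.0, 474.0, 512.0, 538.0]

error = mae(y_true, y_pred)
fit = r_squared(y_true, y_pred)
```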

PMID:39681030 | DOI:10.1016/j.saa.2024.125536

Categories: Literature Watch

Descriptive overview of AI applications in x-ray imaging and radiotherapy

Mon, 2024-12-16 06:00

J Radiol Prot. 2024 Dec 16. doi: 10.1088/1361-6498/ad9f71. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) is transforming medical radiation applications by handling complex data, learning patterns, and making accurate predictions, leading to improved patient outcomes. This article examines the use of AI in optimizing radiation doses for X-ray imaging, improving radiotherapy outcomes, and briefly addresses the benefits, challenges, and limitations of AI integration into clinical workflows. In diagnostic radiology, AI plays a pivotal role in optimizing radiation exposure, reducing noise, enhancing image contrast, and lowering radiation doses, especially in high-dose procedures like computed tomography. Deep learning-powered CT reconstruction methods have already been incorporated into clinical routine. Moreover, AI-powered methodologies have been developed to provide real-time, patient-specific radiation dose estimates. These AI-driven tools have the potential to streamline workflows and potentially become integral parts of imaging practices. In radiotherapy, AI's ability to automate and enhance the precision of treatment planning is emphasized. Traditional methods, such as manual contouring, are time-consuming and prone to variability. AI-driven techniques, particularly deep learning models, are automating the segmentation of organs and tumors, improving the accuracy of radiation delivery, and minimizing damage to healthy tissues. Moreover, AI supports adaptive radiotherapy, allowing continuous optimization of treatment plans based on changes in a patient's anatomy over time, ensuring the highest accuracy in radiation delivery and better therapeutic outcomes. Some of these methods have been validated and integrated into radiation treatment systems, while others are not yet ready for routine clinical use mainly due to challenges in validation, particularly ensuring reliability across diverse patient populations and clinical settings. 
Despite the potential of AI, there are challenges in fully integrating these technologies into clinical practice. Issues such as data protection, privacy, data quality, model validation, and the need for large and diverse datasets are crucial to ensuring the reliability of AI systems.

PMID:39681008 | DOI:10.1088/1361-6498/ad9f71

Categories: Literature Watch

Development of A Low-Dose Strategy for Propagation-based Imaging Helical Computed Tomography (PBI-HCT): High Image Quality and Reduced Radiation Dose

Mon, 2024-12-16 06:00

Biomed Phys Eng Express. 2024 Dec 16. doi: 10.1088/2057-1976/ad9f66. Online ahead of print.

ABSTRACT

Propagation-based imaging computed tomography (PBI-CT) has recently emerged for visualizing low-density materials owing to its excellent image contrast and high resolution. Building on this, PBI-CT with a helical acquisition mode (PBI-HCT) offers superior imaging quality (e.g., fewer ring artifacts) and dose uniformity, making it ideal for biomedical imaging applications. However, the high radiation dose associated with high-resolution PBI-HCT may harm the objects or hosts being imaged, especially in live animal imaging, creating a strong need to reduce radiation dose.
Methods:
In this study, we strategically integrated Sparse2Noise (a deep learning approach) with PBI-HCT imaging to reduce radiation dose without compromising image quality. Sparse2Noise uses paired low-dose noisy images with different photon fluxes and projection numbers for high-quality reconstruction via a convolutional neural network (CNN). Then, we examined the imaging quality and radiation dose of PBI-HCT imaging using Sparse2Noise, as compared to when Sparse2Noise was used in low-dose PBI-CT imaging (circular scanning mode). Furthermore, we conducted a comparison study on using Sparse2Noise versus two other state-of-the-art low-dose imaging algorithms (i.e., Noise2Noise and Noise2Inverse) for imaging low-density materials using PBI-HCT at equivalent dose levels.
Results:
Sparse2Noise allowed for a 90% dose reduction in PBI-HCT imaging while maintaining high image quality. Compared with PBI-CT imaging, the use of Sparse2Noise in PBI-HCT imaging was more effective, providing an additional 30%-36% dose reduction. Furthermore, the helical scanning mode also enhanced the performance of the existing low-dose algorithms (Noise2Noise and Noise2Inverse); nevertheless, Sparse2Noise showed a significantly higher signal-to-noise ratio (SNR) than Noise2Noise and Noise2Inverse at the same radiation dose level.
Conclusions and significance:
Our proposed low-dose imaging strategy, Sparse2Noise, can be effectively applied to the PBI-HCT imaging technique, achieving acceptable image quality at a substantially lower dose. This represents a significant advance for low-density materials imaging and for future live animal imaging applications.

PMID:39681007 | DOI:10.1088/2057-1976/ad9f66

Categories: Literature Watch

Validation of a Rapid Algorithm for Repeated Intensity Modulated Radiation Therapy Dose Calculations

Mon, 2024-12-16 06:00

Biomed Phys Eng Express. 2024 Dec 16. doi: 10.1088/2057-1976/ad9f6a. Online ahead of print.

ABSTRACT

As adaptive radiotherapy workflows and deep learning model training rise in popularity, the need for repeated application of a rapid dose calculation algorithm increases. In this work we evaluate the feasibility of a simple algorithm that can calculate dose directly from MLC positions in near real-time. Given the necessary machine parameters, intensity modulated radiation therapy (IMRT) doses are calculated and can be used in optimization, deep learning model training, or other cases where fast repeated segment dose calculations are needed. The algorithm uses normalized beamlets to modify a pre-calculated patient-specific open field into any MLC segment shape. The algorithm was validated on 91 prostate IMRT plans as well as 20 lung IMRT plans generated for the Elekta Unity MR-Linac. IMRT plans calculated using the proposed method matched reference Monte Carlo calculated dose with gamma pass rates of 98.02±0.84% and 96.57±2.41% for prostate and lung patients, respectively, under a 3%/2 mm criterion. After the patient-specific open field calculation, the algorithm can calculate the dose of a 9-field IMRT plan in 1.016±0.284 s for a single patient, or 0.264 ms per patient for a parallelized batch of 24 patients relevant for deep learning training. The presented algorithm demonstrates an alternative rapid IMRT dose calculator that does not rely on training a deep learning model while remaining competitive in speed and accuracy, making it a compelling choice in cases where repetitive dose calculation is desired.
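
The abstract gives no implementation detail beyond "normalized beamlets modify a pre-calculated open field into any MLC segment shape"; a toy numpy sketch of that idea, with all names and geometry hypothetical and a binary fluence mask standing in for the normalized beamlet weights the actual algorithm would use:

```python
import numpy as np

def segment_dose(open_field_dose, beamlet_x, left_leaves, right_leaves):
    """Toy segment dose: mask out beamlet columns blocked by the MLC leaves.

    open_field_dose: (n_rows, n_cols) pre-calculated patient-specific open-field
                     dose, one row per leaf pair, one column per beamlet position.
    left_leaves / right_leaves: per-row aperture edges, same units as beamlet_x.
    """
    fluence = np.zeros_like(open_field_dose)
    for row, (lo, hi) in enumerate(zip(left_leaves, right_leaves)):
        open_cols = (beamlet_x >= lo) & (beamlet_x <= hi)
        fluence[row, open_cols] = 1.0
    return open_field_dose * fluence  # open field restricted to the aperture

# Toy geometry: 2 leaf pairs, 3 beamlet positions at x = -1, 0, 1 cm
beamlet_x = np.array([-1.0, 0.0, 1.0])
open_field = np.ones((2, 3))
dose = segment_dose(open_field, beamlet_x,
                    left_leaves=[-1.0, 0.0], right_leaves=[0.0, 1.0])
```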

PMID:39681005 | DOI:10.1088/2057-1976/ad9f6a

Categories: Literature Watch

Bidirectional Interaction Directional Variance Attention Model Based on Increased-Transformer for Thyroid Nodule Classification

Mon, 2024-12-16 06:00

Biomed Phys Eng Express. 2024 Dec 16. doi: 10.1088/2057-1976/ad9f68. Online ahead of print.

ABSTRACT

Malignant thyroid nodules are closely linked to cancer, making the precise classification of thyroid nodules into benign and malignant categories highly significant. However, the subtle differences in contour between benign and malignant thyroid nodules, combined with the texture features obscured by the inherent noise in ultrasound images, often result in low classification accuracy in most models. To address this, we propose a Bidirectional Interaction Directional Variance Attention Model based on Increased-Transformer, named IFormer-DVNet. This paper proposes the Increased-Transformer, which enables global feature modeling of feature maps extracted by the Convolutional Feature Extraction Module (CFEM). This design maximally alleviates noise interference in ultrasound images. The Bidirectional Interaction Directional Variance Attention module (BIDVA) dynamically calculates attention weights using the variance of input tensors along both vertical and horizontal directions. This allows the model to focus more effectively on regions with rich information in the image. The vertical and horizontal features are interactively combined to enhance the model's representational capability. During the model training process, we designed a Multi-Dimensional Loss function (MD Loss) to stretch the boundary distance between different classes and reduce the distance between samples of the same class. Additionally, the MD Loss function helps mitigate issues related to class imbalance in the dataset. We evaluated our network model using the public TNCD dataset and a private dataset. The results show that our network achieved an accuracy of 76.55% on the TNCD dataset and 93.02% on the private dataset. Compared to other state-of-the-art classification networks, our model outperformed them across all evaluation metrics.
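
The BIDVA module is described only at a high level; one hypothetical reading of "attention weights from the variance of input tensors along both directions", with the variance along each axis converted to softmax weights that rescale the feature map:

```python
import numpy as np

def directional_variance_attention(x):
    """Toy directional variance attention (construction hypothetical):
    weight rows and columns of a 2D feature map by the softmax of their variance."""
    row_var = x.var(axis=1)  # variance along the horizontal direction, per row
    col_var = x.var(axis=0)  # variance along the vertical direction, per column
    row_w = np.exp(row_var) / np.exp(row_var).sum()  # softmax over rows
    col_w = np.exp(col_var) / np.exp(col_var).sum()  # softmax over columns
    # Bidirectional interaction: combine both weightings multiplicatively
    return x * row_w[:, None] * col_w[None, :]

# Toy feature map: the second row and third column vary the most
x = np.array([[0.0, 0.0, 0.0],
              [0.0, 3.0, 6.0]])
out = directional_variance_attention(x)
```

High-variance rows and columns (information-rich regions) are upweighted relative to flat ones, which is the stated intent of the module.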

PMID:39681000 | DOI:10.1088/2057-1976/ad9f68

Categories: Literature Watch

Blood Pressure Estimation Using Explainable Deep-Learning Models Based on Photoplethysmography

Mon, 2024-12-16 06:00

Anesth Analg. 2025 Jan 1;140(1):119-128. doi: 10.1213/ANE.0000000000007295. Epub 2024 Dec 16.

ABSTRACT

BACKGROUND: Due to their invasiveness, arterial lines are not typically used in routine monitoring, despite their superior responsiveness in hemodynamic monitoring and in detecting intraoperative hypotension. To address this issue, noninvasive, continuous arterial pressure monitoring is necessary. We developed a deep-learning model that reconstructs continuous mean arterial pressure (MAP) from the photoplethysmography (PPG) signal and compared it to the arterial line gold standard.

METHODS: We analyzed high-frequency PPG signals from 117 patients in neuroradiology and digestive surgery, with a median of 2201 (interquartile range [IQR], 788-4775) measurements per patient. We compared models with different combinations of convolutional and recurrent layers, using as inputs to our neural network the high-frequency PPG signal and derived features including dicrotic notch relative amplitude, perfusion index, and heart rate. Mean absolute error (MAE) was used as the performance metric. Explainability of the deep-learning model was assessed with Grad-CAM, a visualization technique that uses saliency maps to highlight the parts of an input that are significant to a deep-learning model's decision-making process.

RESULTS: An MAP baseline model, which consisted only of standard cuff measures, reached an MAE of 6.1 (± 14.5) mm Hg. In contrast, the deep-learning model achieved an MAE of 3.5 (± 4.4) mm Hg on the external test set (a 42.6% improvement). This model also achieved the narrowest confidence intervals and met international standards used within the community (grade A). The saliency map revealed that the deep-learning model primarily extracts information near the dicrotic notch region.

CONCLUSIONS: Our deep-learning model noninvasively estimates arterial pressure with high accuracy. This model may show potential as a decision-support tool in operating-room settings, particularly in scenarios where invasive blood pressure monitoring is unavailable.

PMID:39680992 | DOI:10.1213/ANE.0000000000007295

Categories: Literature Watch

Correction: Deep Learning to Estimate Cardiovascular Risk From Chest Radiographs

Mon, 2024-12-16 06:00

Ann Intern Med. 2024 Dec 17. doi: 10.7326/ANNALS-24-03386. Online ahead of print.

NO ABSTRACT

PMID:39680924 | DOI:10.7326/ANNALS-24-03386

Categories: Literature Watch

Deep Learning for Predicting Acute Exacerbation and Mortality of Interstitial Lung Disease

Mon, 2024-12-16 06:00

Ann Am Thorac Soc. 2024 Dec 16. doi: 10.1513/AnnalsATS.202403-284OC. Online ahead of print.

ABSTRACT

RATIONALE: Some patients with interstitial lung disease (ILD) have a high mortality rate or experience acute exacerbation of ILD (AE-ILD) that results in increased mortality. Early identification of these high-risk patients and accurate prediction of the onset of these events are essential for determining treatment strategies. Although the various factors that affect disease behavior among patients with ILD hinder accurate prediction of these events, the use of longitudinal information may enable better prediction.

OBJECTIVES: To develop a deep-learning (DL) model to predict composite outcomes defined as the first occurrence of AE-ILD and mortality using longitudinal data.

METHODS: Longitudinal clinical and environmental data were retrospectively collected from consecutive patients with ILD at two specialty centers between January 2008 and December 2015. A DL model was developed to predict composite outcomes using longitudinal data from 80% of patients from the first center, which was then validated using data from the remaining 20% patients and second center. The developed model was compared with the univariate Cox proportional hazard (CPH) model using the ILD gender-age-physiology (ILD-GAP) score and multivariate CPH model at the time of ILD diagnosis.

MEASUREMENTS AND MAIN RESULTS: AE-ILD was reported in 218 patients among the 1,175 patients enrolled, whereas 380 died without developing AE-ILD. The truncated concordance index (C-index) values of univariate/multivariate CPH models for composite outcomes within 12, 24, and 36 months after prediction were 0.789/0.843, 0.788/0.853, and 0.787/0.853 in internal validation, and 0.650/0.718, 0.652/0.756, and 0.640/0.756 in external validation, respectively. At 12 months after ILD diagnosis, the DL model outperformed the univariate CPH model and multivariate CPH model for composite outcomes within 12 months, with C-index values of 0.842, 0.840, and 0.839 in internal validation, and 0.803, 0.744, and 0.746 in external validation, respectively. Neutrophils, C-reactive protein, ILD-GAP score, and exposure to suspended particulate matter were strongly associated with the composite outcomes.
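
The C-index reported above measures how often the model ranks predicted risks consistently with observed outcomes; a plain-Python sketch of Harrell's C on toy data (the paper uses a truncated variant evaluated at fixed horizons):

```python
def concordance_index(times, events, risks):
    """Harrell's C: among comparable pairs, the fraction where the model assigns
    higher risk to the subject whose event occurs earlier (ties count 0.5).

    times:  observed follow-up times
    events: 1 if the event (AE-ILD/death here) was observed, 0 if censored
    risks:  model-predicted risk scores
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i's event precedes time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: events at months 2 and 4, one subject censored at month 6
c = concordance_index(times=[2, 4, 6], events=[1, 1, 0], risks=[0.9, 0.5, 0.1])
```

A perfectly anti-concordant risk ordering gives 0, random ordering about 0.5, and perfect ordering 1.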

CONCLUSIONS: The DL model can accurately predict the incidence of AE-ILD or mortality using longitudinal data.

PMID:39680875 | DOI:10.1513/AnnalsATS.202403-284OC

Categories: Literature Watch

CATALYZE: A DEEP LEARNING APPROACH FOR CATARACT ASSESSMENT AND GRADING ON SS-OCT ANTERION IMAGES

Mon, 2024-12-16 06:00

J Cataract Refract Surg. 2024 Dec 16. doi: 10.1097/j.jcrs.0000000000001598. Online ahead of print.

ABSTRACT

PURPOSE: To assess a new objective deep learning cataract grading method based on Swept-Source Optical Coherence Tomography (SS-OCT) scans provided by the Anterion® (Heidelberg, Germany).

SETTING: Single centre study at the Rothschild Foundation, Paris, France.

DESIGN: Prospective cross-sectional study.

METHODS: All patients consulting for cataract evaluation and consenting to study participation were included. History of previous ocular surgery, corneal or retinal disorders, and ocular dryness were exclusion criteria. Our CATALYZE pipeline was applied to Anterion® images, providing layer-wise cataract metrics and an overall Clinical Significance Index of cataract (CSI). The ocular scatter index (OSI) was also measured with a double-pass aberrometer (OQAS®) and compared with our CSI.

RESULTS: Five hundred forty-eight eyes of 315 patients aged 19-85 years (mean ± SD: 50 ± 21 years) were included: 331 in the development set (48 with cataract and 283 controls) and 217 in the validation set (85 with cataract and 132 controls). The CSI correlated with the OSI (r2 = 0.87, P < 0.01). The CSI area under the ROC curve (AUROC) was comparable to the OSI AUROC (0.985 vs 0.981, respectively; P > 0.05), with 95% sensitivity and 95% specificity.

CONCLUSIONS: Our deep learning pipeline CATALYZE based on Anterion® SS-OCT is a reliable and comprehensive objective cataract grading method.

PMID:39680679 | DOI:10.1097/j.jcrs.0000000000001598

Categories: Literature Watch

DFASGCNS: A prognostic model for ovarian cancer prediction based on dual fusion channels and stacked graph convolution

Mon, 2024-12-16 06:00

PLoS One. 2024 Dec 16;19(12):e0315924. doi: 10.1371/journal.pone.0315924. eCollection 2024.

ABSTRACT

Ovarian cancer is a malignant tumor with different clinicopathological and molecular characteristics. Due to its nonspecific early symptoms, the majority of patients are diagnosed with local or extensive metastasis, severely affecting treatment and prognosis. The occurrence of ovarian cancer is influenced by multiple complex mechanisms, including genomics, transcriptomics, and proteomics. Integrating multiple types of omics data aids in predicting the survival rate of ovarian cancer patients. However, existing methods only fuse multi-omics data at the feature level, neglecting the shared and complementary neighborhood information among samples of multi-omics data, and failing to consider the potential interactions between different omics data at the molecular level. In this paper, we propose a prognostic model for ovarian cancer prediction named Dual Fusion Channels and Stacked Graph Convolutional Neural Network (DFASGCNS). The DFASGCNS utilizes dual fusion channels to learn feature representations of different omics data and the associations between samples. A stacked graph convolutional network is used to comprehensively learn the deep and intricate correlation networks present in multi-omics data, enhancing the model's ability to represent multi-omics data. An attention mechanism is introduced to allocate different weights to important features of different omics data, optimizing the feature representation of multi-omics data. Experimental results demonstrate that compared to existing methods, the DFASGCNS model exhibits significant advantages in ovarian cancer prognosis prediction and survival analysis. Kaplan-Meier curve analysis results indicate significant differences in the survival subgroups predicted by the DFASGCNS model, contributing to a deeper understanding of the pathogenesis of ovarian cancer and providing more reliable auxiliary diagnostic information for the prognosis assessment of ovarian cancer patients.

PMID:39680618 | DOI:10.1371/journal.pone.0315924

Categories: Literature Watch

Dataset augmentation with multiple contrasts images in super-resolution processing of T1-weighted brain magnetic resonance images

Mon, 2024-12-16 06:00

Radiol Phys Technol. 2024 Dec 16. doi: 10.1007/s12194-024-00871-1. Online ahead of print.

ABSTRACT

This study investigated the effectiveness of augmenting datasets for super-resolution processing of T1-weighted images (T1WIs) of brain magnetic resonance imaging (MRI) using deep learning. By incorporating images with different contrasts from the same subject, this study sought to improve network performance and assess its impact on image quality metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: the Pure-Dataset group, comprising T1WIs, and the Mixed-Dataset group, comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution network (EDSR) were trained on these datasets. Objective image quality analysis was performed using PSNR and SSIM. Statistical analyses, including paired t tests and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. PSNR values ranged from 29.84 to 30.26 dB for U-Net trained on mixed datasets, and SSIM values ranged from 0.9858 to 0.9868. Similarly, PSNR values ranged from 32.34 to 32.64 dB for EDSR trained on mixed datasets, and SSIM values ranged from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets. Pearson's correlation coefficient indicated a strong positive correlation between dataset size and image quality metrics. Using diverse image data obtained from the same subject can improve the performance of deep-learning models in medical image super-resolution tasks.
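
PSNR, one of the two image-quality metrics above, follows directly from the mean squared error between a reference and a test image; a minimal sketch over flattened 8-bit pixel lists with toy values:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images
    (passed here as flat pixel lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return 10 * math.log10(max_val ** 2 / mse)  # higher dB = closer match

# Toy 2x2 images flattened to lists; small pixel errors only
value = psnr([0, 255, 255, 0], [0, 250, 255, 5])
```

SSIM is more involved (local means, variances, and covariances over sliding windows) and is usually taken from a library such as scikit-image rather than hand-rolled.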

PMID:39680317 | DOI:10.1007/s12194-024-00871-1

Categories: Literature Watch

SpatialCVGAE: Consensus Clustering Improves Spatial Domain Identification of Spatial Transcriptomics Using VGAE

Mon, 2024-12-16 06:00

Interdiscip Sci. 2024 Dec 16. doi: 10.1007/s12539-024-00676-1. Online ahead of print.

ABSTRACT

The advent of spatially resolved transcriptomics (SRT) has provided critical insights into the spatial context of tissue microenvironments. Spatial clustering is a fundamental aspect of analyzing spatial transcriptomics data. However, spatial clustering methods often suffer from instability caused by the sparsity and high noise of SRT data. To address this challenge, we propose SpatialCVGAE, a consensus clustering framework designed for SRT data analysis. SpatialCVGAE adopts the expression of highly variable genes from different dimensions along with multiple spatial graphs as inputs to variational graph autoencoders (VGAEs), learning multiple latent representations for clustering. These clustering results are then integrated using a consensus clustering approach, which enhances the model's stability and robustness by combining multiple clustering outcomes. Experiments demonstrate that SpatialCVGAE effectively mitigates the instability typically associated with non-ensemble deep learning methods, significantly improving both the stability and accuracy of the results. Compared to previous non-ensemble methods in representation learning and post-processing, our method fully leverages the diversity of multiple representations to accurately identify spatial domains, showing superior robustness and adaptability. All code and public datasets used in this paper are available at https://github.com/wenwenmin/SpatialCVGAE .
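
Consensus clustering of the kind described is often built on a co-association matrix, which records how frequently each pair of samples lands in the same cluster across runs; a small numpy sketch of that construction (an assumed mechanism for illustration, not necessarily SpatialCVGAE's exact one):

```python
import numpy as np

def coassociation_matrix(labelings):
    """Fraction of clusterings in which each pair of samples shares a cluster.

    labelings: (n_runs, n_samples) array of cluster labels, one row per run.
    """
    labelings = np.asarray(labelings)
    n = labelings.shape[1]
    coassoc = np.zeros((n, n))
    for labels in labelings:
        coassoc += labels[:, None] == labels[None, :]  # 1 where pair co-clusters
    return coassoc / len(labelings)

# Three toy clustering runs over 4 spots
runs = [[0, 0, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
C = coassociation_matrix(runs)
```

The final consensus partition is then typically obtained by clustering the samples again using 1 - C as a distance matrix (e.g., hierarchical clustering).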

PMID:39680300 | DOI:10.1007/s12539-024-00676-1

Categories: Literature Watch

Diagnostic performance of neural network algorithms in skull fracture detection on CT scans: a systematic review and meta-analysis

Mon, 2024-12-16 06:00

Emerg Radiol. 2024 Dec 16. doi: 10.1007/s10140-024-02300-7. Online ahead of print.

ABSTRACT

BACKGROUND AND AIM: The intricacy of skull fractures and the complexity of the underlying anatomy pose diagnostic hurdles for radiologists evaluating computed tomography (CT) scans. The shortage of radiologists and the growing demand for rapid and accurate fracture diagnosis highlight the need for automated diagnostic tools. Convolutional Neural Networks (CNNs), which use deep learning (DL), are a promising class of medical imaging technologies for improving diagnostic accuracy. The objective of this systematic review and meta-analysis is to assess how well CNN models diagnose skull fractures on CT images.

METHODS: PubMed, Scopus, and Web of Science were searched for studies published before February 2024 that used CNN models to detect skull fractures on CT scans. Meta-analyses were conducted for area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. Egger's and Begg's tests were used to assess publication bias.

RESULTS: Meta-analysis was performed for 11 studies with 20,798 patients. The pooled average AUC for CNN models that implemented pre-training for transfer learning in their architectures was 0.96 ± 0.02. The pooled sensitivity and specificity across studies were 1.0 and 0.93, respectively, and the pooled accuracy was 0.92 ± 0.04. The studies showed heterogeneity, which was explained by differences in model topologies, training strategies, and validation techniques. No significant publication bias was detected.
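Pooled estimates like the AUC above are typically obtained by inverse-variance weighting: each study contributes in proportion to the precision of its estimate. The abstract does not state whether a fixed- or random-effects model was used, so the sketch below shows only the simpler fixed-effect version with hypothetical numbers; the function name is illustrative.

```python
import numpy as np

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2  # precision weights
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se
```

With equal standard errors the pooled estimate reduces to the plain mean, and the pooled standard error shrinks by a factor of √k for k studies; given the heterogeneity reported here, a random-effects model (e.g. DerSimonian-Laird) would widen those intervals.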

CONCLUSION: CNN models perform well in identifying skull fractures on CT scans. Although there is considerable heterogeneity and possibly publication bias, the results suggest that CNNs have the potential to improve diagnostic accuracy in the imaging of acute skull trauma. To further enhance these models' practical applicability, future studies could concentrate on the utility of DL models in prospective clinical trials.

PMID:39680295 | DOI:10.1007/s10140-024-02300-7

Categories: Literature Watch

Interpretable deep learning survival predictions in sporadic Creutzfeldt-Jakob disease

Mon, 2024-12-16 06:00

J Neurol. 2024 Dec 16;272(1):62. doi: 10.1007/s00415-024-12815-1.

ABSTRACT

BACKGROUND: Sporadic Creutzfeldt-Jakob disease (sCJD) is a rapidly progressive and fatal prion disease with significant public health implications. Survival is heterogenous, posing challenges for prognostication and care planning. We developed a survival model using diagnostic data from comprehensive UK sCJD surveillance.

METHODS: Using national CJD surveillance data from the United Kingdom (UK), we included 655 cases of probable or definite sCJD meeting the 2017 international consensus diagnostic criteria between 01/2017 and 01/2022. Data included symptoms at diagnosis, CSF RT-QuIC and 14-3-3, MRI and EEG findings, as well as sex, age, PRNP codon 129 polymorphism, CSF total protein, and S100b. An artificial neural network-based multitask logistic regression was used for survival analysis. Model-agnostic interpretation methods were used to assess the contribution of individual features to the model outcome.

RESULTS: Our algorithm achieved a c-index of 0.732, an IBS of 0.079, and AUCs at 5 and 10 months of 0.866 and 0.872, respectively. This modestly improved on the Cox proportional hazards model (c-index 0.730, IBS 0.083, AUCs 0.852 and 0.863), but the improvement was not statistically significant. Both models identified the codon 129 polymorphism and CSF 14-3-3 as significant predictive features.
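The c-index reported above (Harrell's concordance index) measures the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival times, where a pair is comparable only if the shorter time ended in an observed event. A minimal quadratic-time sketch, purely illustrative of the metric rather than the paper's pipeline:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's c-index: fraction of comparable pairs ordered correctly.

    A pair (i, j) is comparable when subject i has the shorter follow-up time
    and experienced an event (events[i] truthy); ties in risk count as half.
    """
    concordant, tied, comparable = 0.0, 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1  # higher predicted risk died sooner
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ranking, so the reported 0.732 versus 0.730 shows why the two models were statistically indistinguishable.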

CONCLUSIONS: sCJD survival can be predicted using routinely collected clinical data at diagnosis. Our analysis pipeline performs at a level similar to classical methods and provides clinically meaningful interpretations that help deepen clinical understanding of the condition. Further development and clinical validation will facilitate improvements in prognostication, care planning, and stratification for clinical trials.

PMID:39680177 | DOI:10.1007/s00415-024-12815-1

Categories: Literature Watch

Primary angle-closed diseases recognition through artificial intelligence-based anterior segment-optical coherence tomography imaging

Mon, 2024-12-16 06:00

Graefes Arch Clin Exp Ophthalmol. 2024 Dec 16. doi: 10.1007/s00417-024-06709-1. Online ahead of print.

ABSTRACT

PURPOSE: In this study, artificial intelligence (AI) was used to learn a deep classification of anterior segment-optical coherence tomography (AS-OCT) images. The AI system automatically analyzed the angular structure of AS-OCT images and automatically classified the anterior chamber angle, improving the efficiency of AS-OCT image analysis.

METHODS: Subjects were drawn from a glaucoma screening and prevention project for elderly people in Shanghai communities. Each scan contained 72 cross-sectional AS-OCT frames. We developed deep learning-based software for automatic anterior chamber angle analysis of AS-OCT images. Classifier performance was evaluated against glaucoma experts' grading of AS-OCT images as the reference standard. Outcome evaluation included accuracy (ACC) and the area under the receiver operating characteristic curve (AUC).

RESULTS: A total of 94,895 AS-OCT images were collected from 687 participants, of which 69,243 were annotated as open, 16,433 as closed, and 9,219 as non-gradable. Class-balanced training data, containing 22,393 images (11,127 open, 11,256 closed), were formed by randomly extracting the same number of open-angle images as closed-angle images. The best-performing classifier was developed by applying transfer learning to the ResNet-50 architecture. Against experts' grading, this classifier achieved an AUC of 0.9635.
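The class-balancing step described above, randomly undersampling the majority class until every class is the size of the smallest, is straightforward to express in NumPy. This is a generic sketch of the technique with an illustrative function name, not the study's actual preprocessing code:

```python
import numpy as np

def balanced_subsample(labels, rng=None):
    """Randomly undersample each class to the minority class size.

    Returns the indices of the balanced subset; pass a seed via `rng`
    for reproducibility.
    """
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()  # size of the smallest class
    keep = []
    for c in classes:
        idx = np.where(labels == c)[0]
        keep.append(rng.choice(idx, size=n_min, replace=False))
    return np.concatenate(keep)
```

Applied to the counts above (69,243 open vs. 16,433 closed, before excluding non-gradable frames), this kind of undersampling prevents the classifier from trivially favoring the open-angle majority.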

CONCLUSION: Deep learning classifiers effectively detect angle closure through automated analysis of AS-OCT images. This system could automate clinical evaluation of the anterior chamber angle and improve the efficiency of interpreting AS-OCT images. The results demonstrate the potential of the deep learning system for rapid identification of populations at high risk of primary angle-closed diseases (PACD).

PMID:39680113 | DOI:10.1007/s00417-024-06709-1

Categories: Literature Watch
