Deep learning
Correction: Deep Learning to Estimate Cardiovascular Risk From Chest Radiographs
Ann Intern Med. 2024 Dec 17. doi: 10.7326/ANNALS-24-03386. Online ahead of print.
NO ABSTRACT
PMID:39680924 | DOI:10.7326/ANNALS-24-03386
Deep Learning for Predicting Acute Exacerbation and Mortality of Interstitial Lung Disease
Ann Am Thorac Soc. 2024 Dec 16. doi: 10.1513/AnnalsATS.202403-284OC. Online ahead of print.
ABSTRACT
RATIONALE: Some patients with interstitial lung disease (ILD) have a high mortality rate or experience acute exacerbation of ILD (AE-ILD) that results in increased mortality. Early identification of these high-risk patients and accurate prediction of the onset of these events are important for determining treatment strategies. Although the various factors that affect disease behavior among patients with ILD hinder accurate prediction of these events, the use of longitudinal information may enable better prediction.
OBJECTIVES: To develop a deep-learning (DL) model to predict composite outcomes defined as the first occurrence of AE-ILD and mortality using longitudinal data.
METHODS: Longitudinal clinical and environmental data were retrospectively collected from consecutive patients with ILD at two specialty centers between January 2008 and December 2015. A DL model was developed to predict composite outcomes using longitudinal data from 80% of the patients from the first center; it was then validated using data from the remaining 20% of patients and from the second center. The developed model was compared with a univariate Cox proportional hazards (CPH) model using the ILD gender-age-physiology (ILD-GAP) score and with a multivariate CPH model at the time of ILD diagnosis.
MEASUREMENTS AND MAIN RESULTS: AE-ILD was reported in 218 patients among the 1,175 patients enrolled, whereas 380 died without developing AE-ILD. The truncated concordance index (C-index) values of univariate/multivariate CPH models for composite outcomes within 12, 24, and 36 months after prediction were 0.789/0.843, 0.788/0.853, and 0.787/0.853 in internal validation, and 0.650/0.718, 0.652/0.756, and 0.640/0.756 in external validation, respectively. At 12 months after ILD diagnosis, the DL model outperformed the univariate CPH model and multivariate CPH model for composite outcomes within 12 months, with C-index values of 0.842, 0.840, and 0.839 in internal validation, and 0.803, 0.744, and 0.746 in external validation, respectively. Neutrophils, C-reactive protein, ILD-GAP score, and exposure to suspended particulate matter were strongly associated with the composite outcomes.
CONCLUSIONS: The DL model can accurately predict the incidence of AE-ILD or mortality using longitudinal data.
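For context on the baseline, a minimal sketch of fitting a univariate Cox proportional hazards model on the ILD-GAP score and computing a concordance index with the lifelines library; the file and column names (ild_cohort.csv, time_months, event, ild_gap_score) are hypothetical placeholders, and this computes Harrell's C-index rather than the truncated variant reported above:

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical cohort table: one row per patient
df = pd.read_csv("ild_cohort.csv")

# Univariate Cox proportional hazards model on the ILD-GAP score
cph = CoxPHFitter()
cph.fit(df[["time_months", "event", "ild_gap_score"]],
        duration_col="time_months", event_col="event")

# Concordance: higher predicted hazard should pair with shorter survival,
# hence the negated risk score
risk = cph.predict_partial_hazard(df)
print(f"C-index: {concordance_index(df['time_months'], -risk, df['event']):.3f}")
```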
PMID:39680875 | DOI:10.1513/AnnalsATS.202403-284OC
CATALYZE: A DEEP LEARNING APPROACH FOR CATARACT ASSESSMENT AND GRADING ON SS-OCT ANTERION IMAGES
J Cataract Refract Surg. 2024 Dec 16. doi: 10.1097/j.jcrs.0000000000001598. Online ahead of print.
ABSTRACT
PURPOSE: To assess a new objective deep learning cataract grading method based on swept-source optical coherence tomography (SS-OCT) scans provided by the Anterion® (Heidelberg, Germany).
SETTING: Single-centre study at the Rothschild Foundation, Paris, France.
DESIGN: Prospective cross-sectional study.
METHODS: All patients consulting for cataract evaluation and consenting to study participation were included. History of previous ocular surgery, corneal or retinal disorders, and ocular dryness were exclusion criteria. Our CATALYZE pipeline was applied to Anterion® images, providing layer-wise cataract metrics and an overall Clinical Significance Index of cataract (CSI). The ocular scatter index (OSI) was also measured with a double-pass aberrometer (OQAS®) and compared with the CSI.
RESULTS: Five hundred forty-eight eyes of 315 patients aged 19-85 years (mean ± SD: 50 ± 21 years) were included: 331 in the development set (48 with cataract and 283 controls) and 217 in the validation set (85 with cataract and 132 controls). The CSI correlated with the OSI (r2 = 0.87, P < 0.01). The area under the receiver operating characteristic curve (AUROC) for the CSI was comparable to that for the OSI (0.985 vs 0.981, respectively; P > 0.05), with 95% sensitivity and 95% specificity.
CONCLUSIONS: Our deep learning pipeline CATALYZE based on Anterion® SS-OCT is a reliable and comprehensive objective cataract grading method.
PMID:39680679 | DOI:10.1097/j.jcrs.0000000000001598
DFASGCNS: A prognostic model for ovarian cancer prediction based on dual fusion channels and stacked graph convolution
PLoS One. 2024 Dec 16;19(12):e0315924. doi: 10.1371/journal.pone.0315924. eCollection 2024.
ABSTRACT
Ovarian cancer is a malignant tumor with diverse clinicopathological and molecular characteristics. Because its early symptoms are nonspecific, the majority of patients are diagnosed with local or extensive metastasis, severely affecting treatment and prognosis. The occurrence of ovarian cancer is influenced by multiple complex mechanisms spanning genomics, transcriptomics, and proteomics, and integrating multiple types of omics data aids in predicting the survival of ovarian cancer patients. However, existing methods fuse multi-omics data only at the feature level, neglecting the shared and complementary neighborhood information among samples and failing to consider potential molecular-level interactions between different omics data. In this paper, we propose a prognostic model for ovarian cancer prediction named Dual Fusion Channels and Stacked Graph Convolutional Neural Network (DFASGCNS). DFASGCNS uses dual fusion channels to learn feature representations of different omics data and the associations between samples. A stacked graph convolutional network is used to comprehensively learn the deep and intricate correlation networks present in multi-omics data, enhancing the model's ability to represent such data, and an attention mechanism allocates different weights to important features of different omics data, optimizing the feature representation. Experimental results demonstrate that, compared with existing methods, DFASGCNS exhibits significant advantages in ovarian cancer prognosis prediction and survival analysis. Kaplan-Meier curve analysis indicates significant differences between the survival subgroups predicted by the model, contributing to a deeper understanding of the pathogenesis of ovarian cancer and providing more reliable auxiliary diagnostic information for prognosis assessment.
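As an illustration of two mechanisms named in the abstract — stacked graph convolution over a sample-similarity graph and attention-weighted fusion across omics views — here is a hedged PyTorch sketch; layer sizes, graph construction, and the dual fusion channels of the published DFASGCNS are assumptions, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat a
    normalized sample-similarity adjacency matrix."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, a_hat, h):
        return F.relu(a_hat @ self.lin(h))

class StackedGCN(nn.Module):
    """Several GCN layers stacked to learn deeper sample-graph correlations."""
    def __init__(self, d_in, d_hidden, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [GCNLayer(d_in if i == 0 else d_hidden, d_hidden)
             for i in range(n_layers)]
        )

    def forward(self, a_hat, x):
        for layer in self.layers:
            x = layer(a_hat, x)
        return x

class AttentionFusion(nn.Module):
    """Weight each omics view by a learned softmax score, then sum."""
    def __init__(self, d_hidden):
        super().__init__()
        self.score = nn.Linear(d_hidden, 1)

    def forward(self, views):  # views: list of (n_samples, d_hidden) tensors
        stacked = torch.stack(views, dim=1)            # (n, n_views, d)
        w = torch.softmax(self.score(stacked), dim=1)  # (n, n_views, 1)
        return (w * stacked).sum(dim=1)                # (n, d)
```

In this sketch each omics view would be encoded by its own StackedGCN over its own sample graph, with AttentionFusion combining the per-view embeddings before a survival prediction head.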
PMID:39680618 | DOI:10.1371/journal.pone.0315924
Dataset augmentation with multiple contrasts images in super-resolution processing of T1-weighted brain magnetic resonance images
Radiol Phys Technol. 2024 Dec 16. doi: 10.1007/s12194-024-00871-1. Online ahead of print.
ABSTRACT
This study investigated the effectiveness of augmenting datasets for super-resolution processing of brain magnetic resonance imaging (MRI) T1-weighted images (T1WIs) using deep learning. By incorporating images with different contrasts from the same subject, this study sought to improve network performance and assess its impact on image quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: the Pure-Dataset group, comprising T1WIs, and the Mixed-Dataset group, comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution network (EDSR) were trained on these datasets. Objective image quality analysis was performed using PSNR and SSIM, and statistical analyses, including paired t tests and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. PSNR values ranged from 29.84 to 30.26 dB for U-Net trained on mixed datasets, with SSIM values ranging from 0.9858 to 0.9868; for EDSR trained on mixed datasets, PSNR values ranged from 32.34 to 32.64 dB and SSIM values from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets, and Pearson's correlation coefficient indicated a strong positive correlation between dataset size and image quality metrics. Using diverse image data obtained from the same subject can thus improve the performance of deep-learning models in medical image super-resolution tasks.
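A minimal sketch of the objective image-quality evaluation described above, using scikit-image; it assumes a ground-truth slice and its super-resolved counterpart as 2D NumPy arrays:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference: np.ndarray, upscaled: np.ndarray):
    """Return PSNR (dB) and SSIM between a ground-truth slice and its
    super-resolved output."""
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, upscaled, data_range=data_range)
    ssim = structural_similarity(reference, upscaled, data_range=data_range)
    return psnr, ssim
```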
PMID:39680317 | DOI:10.1007/s12194-024-00871-1
SpatialCVGAE: Consensus Clustering Improves Spatial Domain Identification of Spatial Transcriptomics Using VGAE
Interdiscip Sci. 2024 Dec 16. doi: 10.1007/s12539-024-00676-1. Online ahead of print.
ABSTRACT
The advent of spatially resolved transcriptomics (SRT) has provided critical insights into the spatial context of tissue microenvironments. Spatial clustering is a fundamental aspect of analyzing SRT data; however, spatial clustering methods often suffer from instability caused by the sparsity and high noise of SRT data. To address this challenge, we propose SpatialCVGAE, a consensus clustering framework for SRT data analysis. SpatialCVGAE takes the expression of highly variable genes from different dimensions, along with multiple spatial graphs, as inputs to variational graph autoencoders (VGAEs), learning multiple latent representations for clustering. These clustering results are then integrated via a consensus clustering approach, which enhances the model's stability and robustness by combining multiple clustering outcomes. Experiments demonstrate that SpatialCVGAE effectively mitigates the instability typically associated with non-ensemble deep learning methods, significantly improving both the stability and the accuracy of the results. Compared with previous non-ensemble methods in representation learning and post-processing, our method fully leverages the diversity of multiple representations to accurately identify spatial domains, showing superior robustness and adaptability. All code and public datasets used in this paper are available at https://github.com/wenwenmin/SpatialCVGAE.
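For readers unfamiliar with the ensemble step, a hedged sketch of one standard consensus-clustering recipe — a co-association matrix over several clustering runs, followed by hierarchical clustering — which illustrates the general idea rather than SpatialCVGAE's exact procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_labels(label_runs: list, n_clusters: int) -> np.ndarray:
    """Combine several label vectors (one per clustering run) into one
    consensus labeling via a co-association matrix."""
    n = len(label_runs[0])
    coassoc = np.zeros((n, n))
    for labels in label_runs:
        labels = np.asarray(labels)
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(label_runs)            # fraction of runs that agree
    dist = 1.0 - coassoc                  # disagreement as a distance
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=n_clusters, criterion="maxclust")
```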
PMID:39680300 | DOI:10.1007/s12539-024-00676-1
Diagnostic performance of neural network algorithms in skull fracture detection on CT scans: a systematic review and meta-analysis
Emerg Radiol. 2024 Dec 16. doi: 10.1007/s10140-024-02300-7. Online ahead of print.
ABSTRACT
BACKGROUND AND AIM: The potential intricacy of skull fractures, as well as the complexity of the underlying anatomy, poses diagnostic hurdles for radiologists evaluating computed tomography (CT) scans. The shortage of radiologists and the growing demand for rapid and accurate fracture diagnosis have brought the necessity for automated diagnostic tools to light. Convolutional neural networks (CNNs) are a promising class of medical imaging technologies that use deep learning (DL) to improve diagnostic accuracy. The objective of this systematic review and meta-analysis is to assess how well CNN models diagnose skull fractures on CT images.
METHODS: PubMed, Scopus, and Web of Science were searched for studies published before February 2024 that used CNN models to detect skull fractures on CT scans. Meta-analyses were conducted for area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. Egger's and Begg's tests were used to assess publication bias.
RESULTS: The meta-analysis included 11 studies with 20,798 patients. The pooled average AUC of CNN models that incorporated pre-training for transfer learning in their architectures was 0.96 ± 0.02. The pooled sensitivity and specificity across studies were 1.0 and 0.93, respectively, and the pooled accuracy was 0.92 ± 0.04. The studies showed heterogeneity, which was explained by differences in model topologies, training models, and validation techniques. No significant publication bias was detected.
CONCLUSION: CNN models perform well in identifying skull fractures on CT scans. Although there is considerable heterogeneity and possibly publication bias, the results suggest that CNNs have the potential to improve diagnostic accuracy in the imaging of acute skull trauma. To further enhance these models' practical applicability, future studies could concentrate on the utility of DL models in prospective clinical trials.
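To make the pooling arithmetic concrete, a minimal fixed-effect inverse-variance sketch with placeholder numbers; note that with the heterogeneity reported above, a random-effects model would usually be preferred:

```python
import numpy as np

def pooled_fixed_effect(estimates, standard_errors):
    """Fixed-effect inverse-variance pooling of per-study estimates."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# Placeholder per-study AUCs and standard errors, not the review's data
auc, se = pooled_fixed_effect([0.94, 0.97, 0.96], [0.02, 0.01, 0.03])
print(f"pooled AUC = {auc:.3f} ± {1.96 * se:.3f} (95% CI half-width)")
```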
PMID:39680295 | DOI:10.1007/s10140-024-02300-7
Interpretable deep learning survival predictions in sporadic Creutzfeldt-Jakob disease
J Neurol. 2024 Dec 16;272(1):62. doi: 10.1007/s00415-024-12815-1.
ABSTRACT
BACKGROUND: Sporadic Creutzfeldt-Jakob disease (sCJD) is a rapidly progressive and fatal prion disease with significant public health implications. Survival is heterogeneous, posing challenges for prognostication and care planning. We developed a survival model using diagnostic data from comprehensive UK sCJD surveillance.
METHODS: Using national CJD surveillance data from the United Kingdom (UK), we included 655 cases of probable or definite sCJD, according to 2017 international consensus diagnostic criteria, diagnosed between 01/2017 and 01/2022. Data included symptoms at diagnosis, CSF RT-QuIC and 14-3-3, MRI and EEG findings, as well as sex, age, PRNP codon 129 polymorphism, CSF total protein, and S100b. An artificial neural network-based multitask logistic regression was used for survival analysis, and model-agnostic interpretation methods were used to assess the contribution of individual features to model outcome.
RESULTS: Our algorithm had a c-index of 0.732, an IBS of 0.079, and AUCs at 5 and 10 months of 0.866 and 0.872, respectively. This modestly improved on a Cox proportional hazards model (c-index 0.730, IBS 0.083, AUCs 0.852 and 0.863), although the difference was not statistically significant. Both models identified codon 129 polymorphism and CSF 14-3-3 as significant predictive features.
CONCLUSIONS: sCJD survival can be predicted using routinely collected clinical data at diagnosis. Our analysis pipeline performs at a level similar to classical methods and provides clinically meaningful interpretations that help deepen understanding of the condition. Further development and clinical validation will facilitate improvements in prognostication, care planning, and stratification for clinical trials.
PMID:39680177 | DOI:10.1007/s00415-024-12815-1
Primary angle-closure disease recognition through artificial intelligence-based anterior segment-optical coherence tomography imaging
Graefes Arch Clin Exp Ophthalmol. 2024 Dec 16. doi: 10.1007/s00417-024-06709-1. Online ahead of print.
ABSTRACT
PURPOSE: In this study, artificial intelligence (AI) was used to learn the classification of anterior segment optical coherence tomography (AS-OCT) images via deep learning. The AI system automatically analyzed the angle structures in AS-OCT images and automatically classified the anterior chamber angle, with the aim of improving the efficiency of AS-OCT image analysis.
METHODS: The subjects were drawn from a glaucoma screening and prevention project for elderly people in a Shanghai community. Each scan contained 72 cross-sectional AS-OCT frames. We developed deep learning-based software for automatic anterior chamber angle analysis of AS-OCT images. Classifier performance was evaluated against glaucoma experts' grading of AS-OCT images as the reference standard. Outcome measures included accuracy (ACC) and area under the receiver operating characteristic curve (AUC).
RESULTS: A total of 94,895 AS-OCT images were collected from 687 participants, of which 69,243 images were annotated as open angle, 16,433 as closed angle, and 9,219 as non-gradable. A class-balanced training set of 22,393 images (11,127 open, 11,256 closed) was formed by randomly extracting the same number of open-angle images as closed-angle images. The best-performing classifier was developed by applying transfer learning to the ResNet-50 architecture; against the experts' grading, this classifier achieved an AUC of 0.9635.
CONCLUSION: Deep learning classifiers effectively detect angle closure based on automated analysis of AS-OCT images. This system could be used to automate clinical evaluation of the anterior chamber angle and improve the efficiency of interpreting AS-OCT images. The results demonstrate the potential of the deep learning system for rapid recognition of populations at high risk of primary angle-closure disease (PACD).
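A hedged sketch of the transfer-learning setup the abstract describes: an ImageNet-pretrained ResNet-50 with its final layer replaced for the three AS-OCT classes. The class head and freezing strategy are illustrative assumptions, not the authors' training code:

```python
import torch.nn as nn
from torchvision import models

# Pretrained backbone; replace the classification head with three classes
# (open, closed, non-gradable)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 3)

# Optionally freeze the pretrained backbone and train only the new head
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")
```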
PMID:39680113 | DOI:10.1007/s00417-024-06709-1
Enhancing diabetic retinopathy and macular edema detection through multi-scale feature fusion using a deep learning model
Graefes Arch Clin Exp Ophthalmol. 2024 Dec 16. doi: 10.1007/s00417-024-06687-4. Online ahead of print.
ABSTRACT
BACKGROUND: This work tackles the growing problem of early identification of diabetic retinopathy and diabetic macular edema. The deep neural network design uses multi-scale feature fusion to improve automated diagnostic accuracy.
METHODS: The approach uses convolutional neural networks (CNNs) designed to combine higher-level semantic inputs with low-level textural characteristics. Complementary contextual and localized abstract representations are combined via a unique fusion technique.
RESULTS: The MESSIDOR dataset, which comprises retinal images labeled with pathological annotations, was used for model training and validation to ensure robust algorithm development. The proposed model shows 98% overall precision and good performance for diabetic retinopathy, and achieves nearly 100% exactness for diabetic macular edema, with particularly high accuracy (0.99).
CONCLUSION: This consistent performance increases the likelihood that vision can be preserved through public screening and broad clinical integration.
PMID:39680112 | DOI:10.1007/s00417-024-06687-4
Deep learning can detect elbow disease in dogs screened for elbow dysplasia
Vet Radiol Ultrasound. 2025 Jan;66(1):e13465. doi: 10.1111/vru.13465.
ABSTRACT
Medical image analysis based on deep learning is a rapidly advancing field in veterinary diagnostics. The aim of this retrospective diagnostic accuracy study was to develop and assess a convolutional neural network (CNN, EfficientNet) to evaluate elbow radiographs from dogs screened for elbow dysplasia. An auto-cropping tool based on the deep learning model RetinaNet was developed for radiograph preprocessing, cropping each radiograph to the region of interest around the elbow joint. A total of 7,229 radiographs with corresponding International Elbow Working Group scores were included for training (n = 4,000), validation (n = 1,000), and testing (n = 2,229) of CNN models for elbow diagnostics. The radiographs were classified in a binary manner as normal (negative class) or abnormal (positive class), where abnormal radiographs had various severities of osteoarthrosis and/or visible primary elbow dysplasia lesions. Explainable artificial intelligence analyses were performed on both correctly and incorrectly classified radiographs using VarGrad heatmaps to visualize regions of importance for the CNN model's predictions. The highest-performing CNN model showed excellent test accuracy, sensitivity, and specificity, each achieving a value of 0.98. Explainability analysis showed frequent highlighting along the margins of the anconeal process in both correctly and incorrectly classified radiographs. Uncertainty estimation using entropy to characterize the uncertainty of the model predictions showed that radiographs with ambiguous predictions could be flagged for human evaluation. Our study demonstrates robust performance of CNNs for detecting abnormal elbow joints in dogs screened for elbow dysplasia.
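A minimal sketch of the entropy-based flagging described above: predictions whose softmax entropy exceeds a threshold are routed to a human reader. The threshold value is an illustrative assumption:

```python
import torch

def flag_uncertain(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Return a boolean mask of radiographs needing human review, based on
    the entropy of the predicted class distribution."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)
    return entropy > threshold
```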
PMID:39679734 | DOI:10.1111/vru.13465
Automated Bone Cancer Detection Using Deep Learning on X-Ray Images
Surg Innov. 2024 Dec 16:15533506241299886. doi: 10.1177/15533506241299886. Online ahead of print.
ABSTRACT
Bone cancer is a life-threatening health issue. Physicians use CT, X-ray, or MRI images to recognize bone cancer, but techniques are still required to increase precision and reduce human labor. Existing methods face challenges such as high costs, time consumption, and the risk of misdiagnosis due to the complexity of bone tumor appearances. It is therefore essential to establish an automated system to distinguish healthy bones from cancerous ones. In this regard, artificial intelligence, and deep learning in particular, has attracted increasing attention in medical image analysis. This research presents a new Golden Search Optimization with Deep Learning Enabled Computer Aided Diagnosis for Bone Cancer Classification (GSODL-CADBCC) on X-ray images. The aim of the GSODL-CADBCC approach is to accurately distinguish input X-ray images as healthy or cancerous. The technique leverages bilateral filtering to remove noise, uses the SqueezeNet model to generate feature vectors, and efficiently selects hyperparameters with the GSO algorithm. Finally, the extracted features are classified by improved cuckoo search with a long short-term memory model. The experimental results demonstrate that the GSODL-CADBCC approach attains the highest performance, with an average accuracy of 95.52% on the training set and 94.79% on the testing set. This automated approach not only reduces the need for manual interpretation but also minimizes the risk of diagnostic errors and provides a viable option for precise medical imaging-based bone cancer screening.
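A hedged sketch of two steps the abstract names — bilateral filtering for denoising and SqueezeNet feature extraction — with illustrative parameter values and a hypothetical input file; the GSO hyperparameter search and the cuckoo search/LSTM classifier are not reproduced:

```python
import cv2
import torch
from torchvision import models, transforms

# Hypothetical input radiograph; bilateral filtering smooths noise while
# preserving edges
img = cv2.imread("xray.png")
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
rgb = cv2.cvtColor(denoised, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR

# SqueezeNet as a frozen feature extractor
squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()
prep = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
with torch.no_grad():
    features = squeezenet.features(prep(rgb).unsqueeze(0))  # feature maps
```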
PMID:39679470 | DOI:10.1177/15533506241299886
A self-attention-driven deep learning framework for inference of transcriptional gene regulatory networks
Brief Bioinform. 2024 Nov 22;26(1):bbae639. doi: 10.1093/bib/bbae639.
ABSTRACT
The interactions between transcription factors (TFs) and their target genes provide a basis for constructing gene regulatory networks (GRNs) for mechanistic understanding of complex biological processes. From gene expression data, particularly single-cell transcriptomic data containing rich cell-to-cell variation, it is highly desirable to infer TF-gene interactions (TGIs) using deep learning technologies. Numerous models and software tools, including deep learning-based algorithms, have been designed to identify transcriptional regulatory relationships between TFs and downstream genes. However, these methods do not significantly improve TGI prediction, owing to limitations in how they construct the underlying interactive structures linking regulatory components. In this study, we introduce a deep learning framework, DeepTGI, that encodes gene expression profiles from single-cell and/or bulk transcriptomic data and predicts TGIs with high accuracy. Our approach fuses features extracted by an autoencoder with a self-attention mechanism and by other networks, and transforms multihead attention modules to define representative features. Compared with other models and methods, DeepTGI identifies more potential TGIs and better reconstructs GRNs, and could therefore provide broader perspectives for the discovery of biologically meaningful TGIs and for understanding transcriptional gene regulatory mechanisms.
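A minimal sketch of a multihead self-attention block over gene-embedding tensors of the kind DeepTGI's description suggests; the dimensions are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

# Illustrative shapes: a batch of 32 expression profiles, 256 genes,
# 64-dimensional embedding per gene
embed = torch.randn(32, 256, 64)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

# Self-attention: query, key, and value are all the same embedding
out, weights = attn(embed, embed, embed)
print(out.shape, weights.shape)  # (32, 256, 64), (32, 256, 256)
```

The attention weight matrix gives a pairwise gene-to-gene influence score per profile, which is one way such models expose candidate regulatory structure.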
PMID:39679439 | DOI:10.1093/bib/bbae639
Fast and customizable image formation model for optical coherence tomography
Biomed Opt Express. 2024 Nov 13;15(12):6783-6798. doi: 10.1364/BOE.534263. eCollection 2024 Dec 1.
ABSTRACT
Optical coherence tomography (OCT) is a technique that performs high-resolution, three-dimensional imaging of semi-transparent, scattering biological tissues. Models of OCT image formation are needed for applications such as aiding image interpretation and validating OCT signal processing techniques. Existing image formation models generally trade off model realism against computation time; in particular, the most realistic models tend to be highly computationally demanding, which becomes a limiting factor when simulating C-scan generation. Here we present an OCT image formation model based on the first-order Born approximation that is significantly faster than existing models whilst maintaining a high degree of realism. The model is made more powerful by its amenability to simulation of phase-sensitive OCT, making it applicable to scenarios where sample displacement is of interest, such as optical coherence elastography (OCE) or Doppler OCT. Its low computational cost also makes it suitable for creating the large OCT data sets needed to train deep learning OCT signal processing models. We present details of our novel image formation model and demonstrate its accuracy and computational efficiency.
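For reference, the scalar first-order Born approximation on which such models are built expresses the scattered field as a single-scattering integral over the sample's refractive-index contrast (a textbook form, not necessarily the paper's exact formulation):

```latex
U_s(\mathbf{r}) = \int V(\mathbf{r}')\, U_i(\mathbf{r}')\, G(\mathbf{r}-\mathbf{r}')\, \mathrm{d}^3 r',
\qquad
V(\mathbf{r}) = k^2\left(n^2(\mathbf{r}) - 1\right),
\qquad
G(\mathbf{r}) = \frac{e^{ik|\mathbf{r}|}}{4\pi|\mathbf{r}|},
```

where the total field inside the integral is replaced by the incident field U_i, an approximation valid for weakly scattering samples, which is why it yields fast single-pass computation.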
PMID:39679414 | PMC:PMC11640576 | DOI:10.1364/BOE.534263
The impact of body mass index on rehabilitation outcomes after lower limb amputation
PM R. 2024 Dec 16. doi: 10.1002/pmrj.13292. Online ahead of print.
ABSTRACT
PURPOSE: To determine the effect of obesity on physical function and clinical outcome measures in patients who received inpatient rehabilitation services for lower extremity amputation.
METHODS: A retrospective review was performed on patients with lower extremity amputation (n = 951). Patients were stratified into five body mass index (BMI) categories, adjusted for limb-loss mass, spanning healthy BMI, overweight, and obesity. Outcomes included Inpatient Rehabilitation Facility Patient Assessment Instrument functional scores (GG section), discharge home, length of stay (LOS), therapy time, discharge location, medical complications, and acute care readmissions. Deep learning neural networks (DLNNs) were developed to learn the relationships between adjusted BMI and discharge home.
RESULTS: The severely obese group (BMI > 40 kg/m2) demonstrated 7%-13% lower toileting hygiene functional scores at discharge compared with the remaining groups (p < .001). The severely obese group also demonstrated 8%-9% lower sit-to-lying and lying-to-sitting bed mobility scores than the other groups (both p < .001). Sit-to-stand scores were 16%-21% worse and toilet transfer scores 12%-20% worse in the BMI > 40 kg/m2 group than in the other groups (all p < .001). Walking 50 ft with two turns was most difficult for the BMI > 40 kg/m2 group, with mean scores 7%-27% lower than the other BMI groups (p = .011). Wheelchair mobility scores for propelling 150 ft were worst for the severely obese group (4.9 points vs. 5.1-5.5 points for all other groups; p = .021). The LOS was longest in the BMI > 40 group and shortest in the BMI < 25 group (15.0 days vs. 13.3 days; p = .032). Logistic regression analysis indicated that BMI > 40 kg/m2 was associated with lower odds of discharge to home (odds ratio [OR] = 0.504 [0.281-0.904]; p = .022). DLNNs ranked adjusted BMI and BMI category 11th and 12th out of 90 model variables in predicting discharge home.
CONCLUSION: Patients with severe obesity (BMI > 40 kg/m2) achieved lower functional independence on several tasks and were less likely to be discharged home, despite receiving more therapy than the other groups. When a patient is discharged home, obesity poses unique demands on caregivers, and resources can be put in place to help reintegrate the patient into daily life.
PMID:39676648 | DOI:10.1002/pmrj.13292
Making sense of missense: challenges and opportunities in variant pathogenicity prediction
Dis Model Mech. 2024 Dec 1;17(12):dmm052218. doi: 10.1242/dmm.052218. Epub 2024 Dec 16.
ABSTRACT
Computational tools for predicting variant pathogenicity are widely used to support clinical variant interpretation. Recently, several models, which do not rely on known variant classifications during training, have been developed. These approaches can potentially overcome biases of current clinical databases, such as misclassifications, and can potentially better generalize to novel, unclassified variants. AlphaMissense is one such model, built on the highly successful protein structure prediction model, AlphaFold. AlphaMissense has shown great performance in benchmarks of functional and clinical data, outperforming many supervised models that were trained on similar data. However, like other in silico predictors, AlphaMissense has notable limitations. As a large deep learning model, it lacks interpretability, does not assess the functional impact of variants, and provides pathogenicity scores that are not disease specific. Improving interpretability and precision in computational tools for variant interpretation remains a promising area for advancing clinical genetics.
PMID:39676521 | DOI:10.1242/dmm.052218
Automatic Segmentation of Sylvian Fissure in Brain Ultrasound Images of Pre-Term Infants Using Deep Learning Models
Ultrasound Med Biol. 2024 Dec 14:S0301-5629(24)00440-X. doi: 10.1016/j.ultrasmedbio.2024.11.016. Online ahead of print.
ABSTRACT
OBJECTIVE: Segmentation of brain sulci in pre-term infants is crucial for monitoring their development. While magnetic resonance imaging has been used for this purpose, cranial ultrasound (cUS) is the primary imaging technique used in clinical practice. Here, we present the first study aiming to automate brain sulci segmentation in pre-term infants using ultrasound images.
METHODS: Our study focused on segmentation of the Sylvian fissure in a single cUS plane (C3), although this approach could be extended to other sulci and planes. We evaluated the performance of deep learning models, specifically U-Net and ResU-Net, in automating the segmentation process in two scenarios. First, we conducted cross-validation on images acquired from the same ultrasound machine. Second, we applied fine-tuning techniques to adapt the models to images acquired from different vendors.
RESULTS: The ResU-Net approach achieved Dice and sensitivity scores of 0.777 and 0.784, respectively, in the cross-validation experiment. When the models were applied to external datasets, results varied with similarity to the training images: similar images yielded comparable results, while dissimilar images showed a drop in performance. This study also highlighted the advantages of ResU-Net over U-Net, suggesting that residual connections enhance the model's ability to learn and represent complex anatomical structures.
CONCLUSION: This study demonstrated the feasibility of using deep learning models to automatically segment the Sylvian fissure in cUS images. Accurate sonographic characterisation of cerebral sulci can improve the understanding of brain development and aid in identifying infants with different developmental trajectories, potentially impacting later functional outcomes.
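A minimal sketch of the Dice overlap metric used to evaluate the segmentations above, for binary masks where 1 marks the Sylvian fissure:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between a predicted and a ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))
```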
PMID:39676003 | DOI:10.1016/j.ultrasmedbio.2024.11.016
A deep learning approach for the screening of referable age-related macular degeneration - Model development and external validation
J Formos Med Assoc. 2024 Dec 14:S0929-6646(24)00567-9. doi: 10.1016/j.jfma.2024.12.008. Online ahead of print.
ABSTRACT
PURPOSE: To develop deep learning image assessment software, VeriSee™ AMD, and to validate its accuracy in diagnosing referable age-related macular degeneration (AMD).
METHODS: For model development, a total of 6,801 judgable 45-degree color fundus images from patients aged 50 years and over were collected. These images were assessed for AMD severity by ophthalmologists according to the Age-Related Eye Disease Study (AREDS) AMD categories, with referable AMD defined as category three (intermediate) or four (advanced). Of these images, 6,123 were used for model training and validation; the other 678 were used to test the accuracy of VeriSee™ AMD relative to the ophthalmologists. The area under the receiver operating characteristic curve (AUC) for VeriSee™ AMD, and the sensitivities and specificities for VeriSee™ AMD and the ophthalmologists, were calculated. For external validation, another 937 color fundus images were used to test the accuracy of VeriSee™ AMD.
RESULTS: During model development, the AUC for VeriSee™ AMD in diagnosing referable AMD was 0.961. The testing accuracy of VeriSee™ AMD was 92.04% (sensitivity 90.0%, specificity 92.43%), while the mean accuracy of the ophthalmologists in diagnosing referable AMD was 85.8% (range: 75.93%-97.31%). During external validation, VeriSee™ AMD achieved a sensitivity of 90.03%, a specificity of 96.44%, and an accuracy of 92.04%.
CONCLUSIONS: VeriSee™ AMD demonstrated good sensitivity and specificity in diagnosing referable AMD from color fundus images. The findings of this study support the use of VeriSee™ AMD in assisting with the clinical screening of intermediate and advanced AMD using color fundus photography.
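A minimal sketch of the reported evaluation metrics using scikit-learn; the label and score arrays are placeholders, not study data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1])               # 1 = referable AMD
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.9])   # model probabilities

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```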
PMID:39675993 | DOI:10.1016/j.jfma.2024.12.008
Conditional generative diffusion deep learning for accelerated diffusion tensor and kurtosis imaging
Magn Reson Imaging. 2024 Dec 13:110309. doi: 10.1016/j.mri.2024.110309. Online ahead of print.
ABSTRACT
PURPOSE: The purpose of this study was to develop DiffDL, a generative diffusion probabilistic model designed to produce high-quality diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI) metrics from a reduced set of diffusion-weighted images (DWIs). This model addresses the challenge of prolonged data acquisition times in diffusion MRI while preserving metric accuracy.
METHODS: DiffDL was trained using data from the Human Connectome Project, including 300 training/validation subjects and 50 testing subjects. High-quality DTI and DKI metrics were generated using many DWIs and combined with subsets of DWIs to form training pairs. A UNet architecture was used for denoising, trained over 500 epochs with a linear noise schedule. Performance was evaluated against conventional DTI/DKI modeling and a reference UNet model using normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and Pearson correlation coefficient (PCC).
RESULTS: DiffDL showed significant improvements in the quality and accuracy of fractional anisotropy (FA) and mean diffusivity (MD) maps compared to conventional methods and the baseline UNet model. For DKI metrics, DiffDL outperformed conventional DKI modeling and the UNet model across various acceleration scenarios. Quantitative analysis demonstrated superior NMAE, PSNR, and PCC values for DiffDL, capturing the full dynamic range of DTI and DKI metrics. The generative nature of DiffDL allowed for multiple predictions, enabling uncertainty quantification and enhancing performance.
CONCLUSION: The DiffDL framework demonstrated the potential to significantly reduce data acquisition times in diffusion MRI while maintaining high metric quality. Future research should focus on optimizing computational demands and validating the model with clinical cohorts and standard MRI scanners.
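A minimal sketch of the linear noise schedule and forward-diffusion sampling step that DDPM-style models such as DiffDL train against; the beta range and number of steps are common defaults, assumed rather than taken from the paper:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I),
    the noised input the denoising UNet learns to invert."""
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * torch.randn_like(x0)
```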
PMID:39675686 | DOI:10.1016/j.mri.2024.110309
Online monitoring of Haematococcus lacustris cell cycle using machine and deep learning techniques
Bioresour Technol. 2024 Dec 13:131976. doi: 10.1016/j.biortech.2024.131976. Online ahead of print.
ABSTRACT
Optimal control and process optimization of astaxanthin production from Haematococcus lacustris is directly linked to its complex cell cycle, which ranges from vegetative green cells to astaxanthin-rich cysts. This study developed an automated online monitoring system that classifies four different cell cycle stages using a scanning microscope. Decision-tree-based machine learning and deep learning convolutional neural network algorithms were developed, validated, and evaluated, and SHapley Additive exPlanations (SHAP) was used to examine the most important system requirements for accurate image classification. The models achieved accuracies on unseen data of 92.4% and 90.9%, respectively. Furthermore, both models were applied to a photobioreactor culturing H. lacustris, effectively monitoring the transition from a green culture in the exponential growth phase to a stationary red culture. Online image analysis using artificial intelligence models therefore has great potential for process optimization and as a data-driven decision-support tool during microalgae cultivation.
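A hedged sketch of the SHAP analysis step named above, applied to a decision-tree-based classifier with stand-in features and labels:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # stand-in image-derived features
y = (X[:, 0] + X[:, 3] > 0).astype(int)     # stand-in cell-stage label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)           # ranks the most influential inputs
```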
PMID:39675638 | DOI:10.1016/j.biortech.2024.131976