Deep learning
Machine Learning Methods in Protein-Protein Docking
Methods Mol Biol. 2024;2780:107-126. doi: 10.1007/978-1-0716-3985-6_7.
ABSTRACT
An exponential increase in the number of publications addressing artificial intelligence (AI) usage in the life sciences has been observed in recent years, while new modeling techniques are constantly being reported. The potential of these methods is vast: from understanding fundamental cellular processes to discovering new drugs and breakthrough therapies. Computational studies of protein-protein interactions, crucial for understanding the operation of biological systems, are no exception in this field. However, despite the rapid development of technology and the progress in developing new approaches, many aspects remain challenging to solve, such as predicting conformational changes in proteins, or more "trivial" issues such as obtaining high-quality data in large quantities. Therefore, this chapter begins with a short introduction to various AI approaches to studying protein-protein interactions, followed by a description of the most up-to-date algorithms and programs used for this purpose. Yet, given the considerable pace of development in this hot area of computational science, by the time you read this chapter, further development of the algorithms described, or the emergence of new (and better) ones, should come as no surprise.
PMID:38987466 | DOI:10.1007/978-1-0716-3985-6_7
Selection of pre-trained weights for transfer learning in automated cytomegalovirus retinitis classification
Sci Rep. 2024 Jul 10;14(1):15899. doi: 10.1038/s41598-024-67121-7.
ABSTRACT
Cytomegalovirus retinitis (CMVR) is a significant cause of vision loss. Regular screening is crucial but challenging in resource-limited settings. A convolutional neural network is a state-of-the-art deep learning technique to generate automatic diagnoses from retinal images. However, there are limited numbers of CMVR images to train the model properly. Transfer learning (TL) is a strategy to train a model with a scarce dataset. This study explores the efficacy of TL with different pre-trained weights for automated CMVR classification using retinal images. We utilised a dataset of 955 retinal images (524 CMVR and 431 normal) from Siriraj Hospital, Mahidol University, collected between 2005 and 2015. Images were processed using Kowa VX-10i or VX-20 fundus cameras and augmented for training. We employed DenseNet121 as a backbone model, comparing the performance of TL with weights pre-trained on ImageNet, APTOS2019, and CheXNet datasets. The models were evaluated based on accuracy, loss, and other performance metrics, with the depth of fine-tuning varied across different pre-trained weights. The study found that TL significantly enhances model performance in CMVR classification. The best results were achieved with weights sequentially transferred from ImageNet to APTOS2019 dataset before application to our CMVR dataset. This approach yielded the highest mean accuracy (0.99) and lowest mean loss (0.04), outperforming other methods. The class activation heatmaps provided insights into the model's decision-making process. The model with APTOS2019 pre-trained weights offered the best explanation and highlighted the pathologic lesions resembling human interpretation. Our findings demonstrate the potential of sequential TL in improving the accuracy and efficiency of CMVR diagnosis, particularly in settings with limited data availability. They highlight the importance of domain-specific pre-training in medical image classification. 
This approach streamlines the diagnostic process and paves the way for broader applications in automated medical image analysis, offering a scalable solution for early disease detection.
PMID:38987446 | DOI:10.1038/s41598-024-67121-7
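The sequential transfer learning described in this abstract (ImageNet weights adapted on APTOS2019, then fine-tuned on the CMVR data, with the depth of fine-tuning varied) can be illustrated with a small sketch. The layer names and the `freeze_until` parameter below are hypothetical illustrations of the staging logic, not the paper's implementation.

```python
# Sketch of staged transfer learning with variable fine-tuning depth.
# Layer names and the freeze_until parameter are illustrative only.

LAYERS = ["block1", "block2", "block3", "block4", "classifier"]

def set_trainable(layers, freeze_until):
    """Freeze every layer before `freeze_until`; train the rest.

    freeze_until=0 trains everything; a layer name freezes all
    layers below it, mimicking a shallower fine-tuning depth.
    """
    boundary = layers.index(freeze_until) if isinstance(freeze_until, str) else freeze_until
    return {name: i >= boundary for i, name in enumerate(layers)}

def staged_transfer(stages):
    """Apply fine-tuning stages in order (e.g. APTOS2019 -> CMVR)."""
    history = []
    for dataset, freeze_until in stages:
        flags = set_trainable(LAYERS, freeze_until)
        history.append((dataset, flags))
        # ...train the trainable layers on `dataset` here...
    return history

# Stage 1: adapt on APTOS2019 with only the top block and head trainable;
# Stage 2: fine-tune the whole network on the CMVR images.
plan = staged_transfer([("APTOS2019", "block4"), ("CMVR", 0)])
```

In a real DenseNet121 pipeline the same idea is expressed by toggling layer `trainable` flags between training stages; the point here is only the staging order and variable depth.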
A deep learning based cognitive model to probe the relation between psychophysics and electrophysiology of flicker stimulus
Brain Inform. 2024 Jul 10;11(1):18. doi: 10.1186/s40708-024-00231-0.
ABSTRACT
The flicker stimulus is a visual stimulus of intermittent illumination. A flicker stimulus can appear flickering or steady to a human subject, depending on the physical parameters associated with the stimulus. When the flickering light appears steady, flicker fusion is said to have occurred. This work aims to bridge the gap between the psychophysics of flicker fusion and the electrophysiology associated with flicker stimulus through a Deep Learning based computational model of flicker perception. Convolutional Recurrent Neural Networks (CRNNs) were trained with psychophysics data of flicker stimulus obtained from a human subject. We claim that many of the reported features of electrophysiology of the flicker stimulus, including the presence of fundamentals and harmonics of the stimulus, can be explained as the result of a temporal convolution operation on the flicker stimulus. We further show that the convolution layer output of a CRNN trained with psychophysics data is more responsive to specific frequencies as in human EEG response to flicker, and the convolution layer of a trained CRNN can give a nearly sinusoidal output for 10 hertz flicker stimulus as reported for some human subjects.
PMID:38987386 | DOI:10.1186/s40708-024-00231-0
Automatic wild bird repellent system that is based on deep-learning-based wild bird detection and integrated with a laser rotation mechanism
Sci Rep. 2024 Jul 10;14(1):15924. doi: 10.1038/s41598-024-66920-2.
ABSTRACT
Wild bird repulsion is critical in agriculture because it helps avoid agricultural food losses and mitigates the risk of avian influenza. Wild birds transmit avian influenza in poultry farms and thus cause large economic losses. In this study, we developed an automatic wild bird repellent system that is based on deep-learning-based wild bird detection and integrated with a laser rotation mechanism. When a wild bird appears at a farm, the proposed system detects the bird's position in an image captured by its detection unit and then uses a laser beam to repel the bird. The wild bird detection model of the proposed system was optimized for detecting small pixel targets, and trained through a deep learning method by using wild bird images captured at different farms. Various wild bird repulsion experiments were conducted using the proposed system at an outdoor duck farm in Yunlin, Taiwan. The statistical test results of our experimental data indicated that the proposed automatic wild bird repellent system effectively reduced the number of wild birds in the farm. The experimental results indicated that the developed system effectively repelled wild birds, with a high repulsion rate of 40.3% each day.
PMID:38987345 | DOI:10.1038/s41598-024-66920-2
Computational tools to predict context-specific protein complexes
Curr Opin Struct Biol. 2024 Jul 9;88:102883. doi: 10.1016/j.sbi.2024.102883. Online ahead of print.
ABSTRACT
Interactions between thousands of proteins define cells' protein-protein interaction (PPI) network. Some of these interactions lead to the formation of protein complexes. It is challenging to identify a protein complex in a haystack of protein-protein interactions, and it is even more difficult to predict all protein complexes of the complexome. Simulations and machine learning approaches try to crack these problems by looking at the PPI network or predicted protein structures. Clustering of PPI networks led to the first protein complex predictions, while most recently, atomistic models of protein complexes and deep-learning-based structure prediction methods have also emerged. The simulation of PPI level interactions even enables the quantitative prediction of protein complexes. These methods, the required data sources, and their potential future developments are discussed in this review.
PMID:38986166 | DOI:10.1016/j.sbi.2024.102883
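The review notes that clustering of PPI networks produced the first protein complex predictions. A deliberately simple stand-in for dedicated clustering algorithms (such as MCL) is to take connected components of a toy interaction graph; the interaction list below is hypothetical.

```python
# Toy complex prediction by clustering a PPI graph.
# Connected components stand in for real clustering algorithms (e.g. MCL);
# the edge list is a hypothetical example, not real interaction data.
from collections import defaultdict

def complexes(ppi_edges):
    """Return connected components of the PPI graph as candidate complexes."""
    adj = defaultdict(set)
    for a, b in ppi_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

edges = [("A", "B"), ("B", "C"), ("D", "E")]
found = complexes(edges)  # two candidate complexes: {A,B,C} and {D,E}
```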
Data modeling analysis of GFRP tubular filled concrete column based on small sample deep meta learning method
PLoS One. 2024 Jul 10;19(7):e0305038. doi: 10.1371/journal.pone.0305038. eCollection 2024.
ABSTRACT
The meta-learning method proposed in this paper addresses the issue of small-sample regression in the application of engineering data analysis, which is a highly promising direction for research. By integrating traditional regression models with optimization-based data augmentation from meta-learning, the proposed deep neural network demonstrates excellent performance in optimizing glass fiber reinforced plastic (GFRP) for wrapping concrete short columns. When compared with traditional regression models, such as Support Vector Regression (SVR), Gaussian Process Regression (GPR), and Radial Basis Function Neural Networks (RBFNN), the meta-learning method proposed here performs better in modeling small data samples. The success of this approach illustrates the potential of deep learning in dealing with limited amounts of data, offering new opportunities in the field of material data analysis.
PMID:38985781 | DOI:10.1371/journal.pone.0305038
HAPI: An efficient Hybrid Feature Engineering-based Approach for Propaganda Identification in social media
PLoS One. 2024 Jul 10;19(7):e0302583. doi: 10.1371/journal.pone.0302583. eCollection 2024.
ABSTRACT
Social media platforms serve as communication tools where users freely share information regardless of its accuracy. Propaganda on these platforms refers to the dissemination of biased or deceptive information aimed at influencing public opinion, encompassing forms such as political campaigns, fake news, and conspiracy theories. This study introduces a Hybrid Feature Engineering Approach for Propaganda Identification (HAPI), designed to detect propaganda in text-based content like news articles and social media posts. HAPI combines conventional feature engineering methods with machine learning techniques to achieve high accuracy in propaganda detection. The study is conducted on data collected from Twitter via its API, and an annotation scheme is proposed to categorize tweets into binary classes (propaganda and non-propaganda). Hybrid feature engineering entails the amalgamation of various features, including Term Frequency-Inverse Document Frequency (TF-IDF), Bag of Words (BoW), sentiment features, and tweet length, among others. Multiple machine learning classifiers were trained and evaluated using the proposed methodology, leveraging a selection of 40 pertinent features identified through the hybrid feature selection technique. All the selected algorithms, including Multinomial Naive Bayes (MNB), Support Vector Machine (SVM), Decision Tree (DT), and Logistic Regression (LR), achieved promising results. The SVM-based HAPI (SVM-HAPI) exhibited superior performance among the traditional algorithms, achieving precision, recall, F-measure, and overall accuracy of 0.69, 0.69, 0.69, and 69.2%, respectively. Furthermore, the proposed approach was compared with well-known existing approaches and outperformed most of them on several evaluation metrics. This research contributes to the development of a comprehensive system tailored for propaganda identification in textual content.
Nonetheless, the purview of propaganda detection transcends textual data alone. Deep learning algorithms like Artificial Neural Networks (ANN) offer the capability to manage multimodal data, incorporating text, images, audio, and video, thereby considering not only the content itself but also its presentation and contextual nuances during dissemination.
PMID:38985703 | DOI:10.1371/journal.pone.0302583
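The TF-IDF component of the hybrid feature set above can be illustrated with a minimal from-scratch implementation. Real pipelines would typically use a library vectorizer (e.g. scikit-learn's `TfidfVectorizer`), and smoothing/normalization conventions vary; the tiny corpus below is hypothetical.

```python
# Minimal TF-IDF: term frequency x inverse document frequency.
# Uses raw counts and log(N / df); library implementations differ
# in smoothing and normalization. The corpus is illustrative only.
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per tokenized document."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)                # raw term frequency
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["fake", "news", "alert"], ["news", "today"], ["fake", "claim"]]
w = tfidf(docs)
# Terms appearing in fewer documents ("alert") get higher weight
# than common terms ("news") within the same document.
```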
AI-based Fully-Automated Stress Left Ventricular Ejection Fraction as a Prognostic Marker in Patients Undergoing Stress-CMR
Eur Heart J Cardiovasc Imaging. 2024 Jul 10:jeae168. doi: 10.1093/ehjci/jeae168. Online ahead of print.
ABSTRACT
AIM: To determine in patients undergoing stress CMR whether fully automated stress artificial intelligence (AI)-based left ventricular ejection fraction (LVEFAI) can provide incremental prognostic value to predict death above traditional prognosticators.
METHODS AND RESULTS: Between 2016 and 2018, we conducted a longitudinal study that included all consecutive patients referred for vasodilator stress CMR. LVEFAI was assessed using an AI algorithm that combines multiple deep learning networks for LV segmentation. The primary outcome was all-cause death assessed using the French National Registry of Death. Cox regression was used to evaluate the association of stress LVEFAI with death after adjustment for traditional risk factors and CMR findings. In 9,712 patients (66±15 years, 67% men), there was an excellent correlation between stress LVEFAI and LVEF measured by experts (LVEFexpert) (r=0.94, p<0.001). Stress LVEFAI was associated with death (median [IQR] follow-up 4.5 [3.7-5.2] years) before and after adjustment for risk factors (adjusted hazard ratio [HR], 0.84 [95% CI, 0.82-0.87] per 5% increment, p<0.001). Stress LVEFAI had a similar significant association with death occurrence compared with LVEFexpert. After adjustment, stress LVEFAI showed the greatest improvement in model discrimination and reclassification over and above traditional risk factors and stress CMR findings (C-statistic improvement: 0.11; NRI=0.250; IDI=0.049, all p<0.001; LR-test p<0.001), with incremental prognostic value over LVEFAI determined at rest.
CONCLUSION: AI-based fully automated LVEF measured at stress is independently associated with the occurrence of death in patients undergoing stress CMR, with an additional prognostic value above traditional risk factors, inducible ischemia and LGE.
PMID:38985691 | DOI:10.1093/ehjci/jeae168
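The reported hazard ratio of 0.84 per 5% LVEF increment compounds multiplicatively for larger increments. A short sketch of that arithmetic (the HR value is from the abstract; the conversion to other increments is an illustration of the standard interpretation, not a result of the study):

```python
# Interpreting a hazard ratio expressed per fixed covariate increment.
# An HR of 0.84 per 5% LVEF increase compounds multiplicatively.

def hr_for_increase(hr_per_unit, units):
    """Hazard ratio corresponding to `units` increments of the covariate."""
    return hr_per_unit ** units

hr_5 = 0.84                       # per 5% LVEF increment (from the abstract)
hr_10 = hr_for_increase(hr_5, 2)  # implied HR per 10% increment
hr_20 = hr_for_increase(hr_5, 4)  # implied HR per 20% increment
```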
SKGC: A General Semantic-level Knowledge Guided Classification Framework for Fetal Congenital Heart Disease
IEEE J Biomed Health Inform. 2024 Jul 10;PP. doi: 10.1109/JBHI.2024.3426068. Online ahead of print.
ABSTRACT
Congenital heart disease (CHD) is the most common congenital disability affecting healthy development and growth, even resulting in pregnancy termination or fetal death. Recently, deep learning techniques have made remarkable progress in assisting with the diagnosis of CHD. One very popular method is directly classifying fetal ultrasound images as abnormal or normal, which tends to focus more on global features and neglects semantic knowledge of anatomical structures. The other approach is segmentation-based diagnosis, which requires a large number of pixel-level annotation masks for training. However, detailed pixel-level segmentation annotation is costly or even unavailable. Based on the above analysis, we propose SKGC, a universal framework to identify normal or abnormal four-chamber heart (4CH) images, guided by only a few annotation masks, while improving accuracy remarkably. SKGC consists of a semantic-level knowledge extraction module (SKEM), a multi-knowledge fusion module (MFM), and a classification module (CM). SKEM is responsible for obtaining high-level semantic knowledge, serving as an abstract representation of the anatomical structures that obstetricians focus on. MFM is a lightweight but efficient module that fuses semantic-level knowledge with the original specific knowledge in ultrasound images. CM classifies the fused knowledge and can be replaced by any advanced classifier. Moreover, we design a new loss function that enhances the constraint between the foreground and background predictions, improving the quality of the semantic-level knowledge. Experimental results on the collected real-world NA-4CH dataset and the publicly available FEST dataset show that SKGC achieves impressive performance, with best accuracies of 99.68% and 95.40%, respectively. Notably, the accuracy improves from 74.68% to 88.14% using only 10 labeled masks.
PMID:38985556 | DOI:10.1109/JBHI.2024.3426068
Unsupervised Domain Adaptation for Low-Dose CT Reconstruction via Bayesian Uncertainty Alignment
IEEE Trans Neural Netw Learn Syst. 2024 Jul 10;PP. doi: 10.1109/TNNLS.2024.3409573. Online ahead of print.
ABSTRACT
Low-dose computed tomography (LDCT) image reconstruction techniques can reduce patient radiation exposure while maintaining acceptable imaging quality. Deep learning (DL) is widely used in this problem, but the performance of testing data (also known as target domain) is often degraded in clinical scenarios due to the variations that were not encountered in training data (also known as source domain). Unsupervised domain adaptation (UDA) of LDCT reconstruction has been proposed to solve this problem through distribution alignment. However, existing UDA methods fail to explore the usage of uncertainty quantification, which is crucial for reliable intelligent medical systems in clinical scenarios with unexpected variations. Moreover, existing direct alignment for different patients would lead to content mismatch issues. To address these issues, we propose to leverage a probabilistic reconstruction framework to conduct a joint discrepancy minimization between source and target domains in both the latent and image spaces. In the latent space, we devise a Bayesian uncertainty alignment to reduce the epistemic gap between the two domains. This approach reduces the uncertainty level of target domain data, making it more likely to render well-reconstructed results on target domains. In the image space, we propose a sharpness-aware distribution alignment (SDA) to achieve a match of second-order information, which can ensure that the reconstructed images from the target domain have similar sharpness to normal-dose CT (NDCT) images from the source domain. Experimental results on two simulated datasets and one clinical low-dose imaging dataset show that our proposed method outperforms other methods in quantitative and visualized performance.
PMID:38985555 | DOI:10.1109/TNNLS.2024.3409573
Multiscale Bowel Sound Event Spotting in Highly Imbalanced Wearable Monitoring Data: Algorithm Development and Validation Study
JMIR AI. 2024 Jul 10;3:e51118. doi: 10.2196/51118.
ABSTRACT
BACKGROUND: Abdominal auscultation (i.e., listening to bowel sounds (BSs)) can be used to analyze digestion. An automated retrieval of BS would be beneficial to assess gastrointestinal disorders noninvasively.
OBJECTIVE: This study aims to develop a multiscale spotting model to detect BSs in continuous audio data from a wearable monitoring system.
METHODS: We designed a spotting model based on the Efficient-U-Net (EffUNet) architecture to analyze 10-second audio segments at a time and spot BSs with a temporal resolution of 25 ms. Evaluation data were collected across different digestive phases from 18 healthy participants and 9 patients with inflammatory bowel disease (IBD). Audio data were recorded in a daytime setting with a smart T-Shirt that embeds digital microphones. The data set was annotated by independent raters with substantial agreement (Cohen κ between 0.70 and 0.75), resulting in 136 hours of labeled data. In total, 11,482 BSs were analyzed, with a BS duration ranging between 18 ms and 6.3 seconds. The share of BSs in the data set (BS ratio) was 0.0089. We analyzed the performance depending on noise level, BS duration, and BS event rate. We also report spotting timing errors.
RESULTS: Leave-one-participant-out cross-validation of BS event spotting yielded a median F1-score of 0.73 for both healthy volunteers and patients with IBD. EffUNet detected BSs under different noise conditions with 0.73 recall and 0.72 precision. In particular, for a signal-to-noise ratio over 4 dB, more than 83% of BSs were recognized, with precision of 0.77 or more. EffUNet recall dropped below 0.60 for BS duration of 1.5 seconds or less. At a BS ratio greater than 0.05, the precision of our model was over 0.83. For both healthy participants and patients with IBD, insertion and deletion timing errors were the largest, with a total of 15.54 minutes of insertion errors and 13.08 minutes of deletion errors over the total audio data set. On our data set, EffUNet outperformed existing BS spotting models that provide similar temporal resolution.
CONCLUSIONS: The EffUNet spotter is robust against background noise and can retrieve BSs with varying duration. EffUNet outperforms previous BS detection approaches in unmodified audio data, containing highly sparse BS events.
PMID:38985504 | DOI:10.2196/51118
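The spotting metrics reported above (recall 0.73, precision 0.72, median F1 0.73) follow the standard definitions from true positive, false positive, and false negative event counts. A minimal sketch with illustrative counts (not the study's actual confusion matrix):

```python
# Standard event-spotting metrics from a confusion-matrix summary.
# The counts below are hypothetical, chosen only to exercise the formulas.

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1-score from event counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative: 72 correctly spotted bowel sounds, 28 false alarms, 27 misses.
p, r, f1 = precision_recall_f1(72, 28, 27)
```

F1 is the harmonic mean of precision and recall, so it always lies between the two, which is why the reported 0.73 median F1 sits next to 0.72 precision and 0.73 recall.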
Radiomics incorporating deep features for predicting Parkinson's disease in 123I-Ioflupane SPECT
EJNMMI Phys. 2024 Jul 10;11(1):60. doi: 10.1186/s40658-024-00651-1.
ABSTRACT
PURPOSE: 123I-Ioflupane SPECT is an effective tool for the diagnosis and progression assessment of Parkinson's disease (PD). Radiomics and deep learning (DL) can be used to track and analyze the underlying image texture and features to predict the Hoehn-Yahr stages (HYS) of PD. In this study, we aim to predict HYS at year 0 and year 4 after the first diagnosis with combined imaging, radiomics and DL-based features using 123I-Ioflupane SPECT images at year 0.
METHODS: In this study, 161 subjects from the Parkinson's Progressive Marker Initiative database underwent baseline 3T MRI and 123I-Ioflupane SPECT, with HYS assessment at years 0 and 4 after first diagnosis. Conventional imaging features (IF) and radiomic features (RaF) for striatum uptakes were extracted from SPECT images using MRI- and SPECT-based (SPECT-V and SPECT-T) segmentations respectively. A 2D DenseNet was used to predict HYS of PD, and simultaneously generate deep features (DF). The random forest algorithm was applied to develop models based on DF, RaF, IF and combined features to predict HYS (stage 0, 1 and 2) at year 0 and (stage 0, 1 and ≥ 2) at year 4, respectively. Model predictive accuracy and receiver operating characteristic (ROC) analysis were assessed for various prediction models.
RESULTS: For diagnostic accuracy at year 0, DL (0.696) outperformed most models, except DF + IF in SPECT-V (0.704), which was significantly superior based on paired t-tests. For year 4, the accuracy of the DF + RaF model in the MRI-based method was the highest (0.835), significantly better than the DF + IF, IF + RaF, RaF, and IF models, and DL (0.820) surpassed the models in both SPECT-based methods. The area under the ROC curve (AUC) highlighted the DF + RaF model (0.854) in the MRI-based method at year 0 and the DF + RaF model (0.869) in the SPECT-T method at year 4, each outperforming the DL models. There were no significant differences between SPECT-based and MRI-based segmentation methods except for the imaging-feature models.
CONCLUSION: The combination of radiomic and deep features enhances the prediction accuracy of PD HYS compared to radiomics or DL alone. This suggests the potential for further advancements in predictive model performance for PD HYS at year 0 and year 4 after first diagnosis using 123I-Ioflupane SPECT images at year 0, thereby facilitating early diagnosis and treatment for PD patients. No significant difference was observed between MRI- and SPECT-based striatum segmentations for radiomic and deep features.
PMID:38985382 | DOI:10.1186/s40658-024-00651-1
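The combined-feature models above (e.g. DF + RaF) rest on feature-level fusion: per-subject vectors from each source are concatenated before being fed to the random forest. A small sketch of that step; the feature values are hypothetical.

```python
# Sketch of feature-level fusion before a downstream classifier.
# Deep features (DF), radiomic features (RaF), and imaging features (IF)
# are concatenated per subject; the values below are hypothetical.

def fuse(*feature_sets):
    """Concatenate per-subject feature vectors from several sources."""
    n = len(feature_sets[0])
    assert all(len(fs) == n for fs in feature_sets), "one vector per subject"
    return [sum((list(fs[i]) for fs in feature_sets), []) for i in range(n)]

df  = [[0.1, 0.4], [0.3, 0.2]]   # deep features, 2 subjects
raf = [[1.5], [2.0]]             # radiomic features, same 2 subjects
fused = fuse(df, raf)            # 3 features per subject, ready for a forest
```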
Artificial intelligence-enhanced opportunistic screening of osteoporosis in CT scan: a scoping Review
Osteoporos Int. 2024 Jul 10. doi: 10.1007/s00198-024-07179-1. Online ahead of print.
ABSTRACT
PURPOSE: This scoping review aimed to assess current research on artificial intelligence (AI)-enhanced opportunistic screening approaches for stratifying osteoporosis and osteopenia risk by evaluating vertebral trabecular bone structure in CT scans.
METHODS: PubMed, Scopus, and Web of Science databases were systematically searched for studies published between 2018 and December 2023. Inclusion criteria encompassed articles focusing on AI techniques for classifying osteoporosis/osteopenia or determining bone mineral density using CT scans of vertebral bodies. Data extraction included study characteristics, methodologies, and key findings.
RESULTS: Fourteen studies met the inclusion criteria. Three main approaches were identified: fully automated deep learning solutions, hybrid approaches combining deep learning and conventional machine learning, and non-automated solutions using manual segmentation followed by AI analysis. Studies demonstrated high accuracy in bone mineral density prediction (86-96%) and classification of normal versus osteoporotic subjects (AUC 0.927-0.984). However, significant heterogeneity was observed in methodologies, workflows, and ground truth selection.
CONCLUSIONS: The review highlights AI's promising potential in enhancing opportunistic screening for osteoporosis using CT scans. While the field is still in its early stages, with most solutions at the proof-of-concept phase, the evidence supports increased efforts to incorporate AI into radiologic workflows. Addressing knowledge gaps, such as standardizing benchmarks and increasing external validation, will be crucial for advancing the clinical application of these AI-enhanced screening methods. Integration of such technologies could lead to improved early detection of osteoporotic conditions at a low economic cost.
PMID:38985200 | DOI:10.1007/s00198-024-07179-1
Deep learning in pulmonary nodule detection and segmentation: a systematic review
Eur Radiol. 2024 Jul 10. doi: 10.1007/s00330-024-10907-0. Online ahead of print.
ABSTRACT
OBJECTIVES: The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques to fill methodological gaps and biases in the existing literature.
METHODS: This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching the PubMed, Embase, Web of Science Core Collection, and Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess the risk of bias, adjusted with the Checklist for Artificial Intelligence in Medical Imaging. The study analyzed and extracted model performance, data sources, and task-focus information.
RESULTS: After screening, we included nine studies meeting our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative and Lung Nodule Analysis 2016 being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing Convolutional Neural Networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient.
CONCLUSIONS: This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research.
CLINICAL RELEVANCE STATEMENT: Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility.
KEY POINTS: Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases present in the existing literature. Factors such as external validation and transparency affect the clinical application.
PMID:38985185 | DOI:10.1007/s00330-024-10907-0
Stepwise Transfer Learning for Expert-Level Pediatric Brain Tumor MRI Segmentation in a Limited Data Scenario
Radiol Artif Intell. 2024 Jul 10:e230254. doi: 10.1148/ryai.230254. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop, externally test, and evaluate the clinical acceptability of a deep learning (DL) pediatric brain tumor segmentation model using stepwise transfer learning. Materials and Methods In this retrospective study, the authors leveraged two T2-weighted MRI datasets (May 2001-December 2015) from a national brain tumor consortium (n = 184; median age, 7 years [range: 1-23 years]; 94 male) and a pediatric cancer center (n = 100; median age, 8 years [range: 1-19 years]; 47 male) to develop and evaluate DL neural networks for pediatric low-grade glioma segmentation using a novel stepwise transfer learning approach to maximize performance in a limited data scenario. The best model was externally tested on an independent test set and subjected to randomized, blinded evaluation by three clinicians, who assessed the clinical acceptability of expert- and artificial intelligence (AI)-generated segmentations via 10-point Likert scales and Turing tests. Results The best AI model used in-domain, stepwise transfer learning (median DSC: 0.88 [IQR 0.72-0.91] versus 0.812 [0.56-0.89] for the baseline model; P = .049). On external testing, the AI model yielded excellent accuracy using reference standards from three clinical experts (Expert-1: 0.83 [0.75-0.90]; Expert-2: 0.81 [0.70-0.89]; Expert-3: 0.81 [0.68-0.88]; mean accuracy: 0.82). On clinical benchmarking (n = 100 scans), experts rated AI-based segmentations higher on average than other experts' segmentations (median Likert score: 9 [IQR 7-9] versus 7 [IQR 7-9]) and rated more AI segmentations as clinically acceptable (80.2% versus 65.4%).
Experts correctly predicted the origin of AI segmentations in an average of 26.0% of cases. Conclusion Stepwise transfer learning enabled expert-level, automated pediatric brain tumor auto-segmentation and volumetric measurement with a high level of clinical acceptability. ©RSNA, 2024.
PMID:38984985 | DOI:10.1148/ryai.230254
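The DSC values reported for segmentation quality follow the standard Dice similarity coefficient: twice the overlap between two binary masks divided by their total size. A minimal sketch on toy masks (the masks below are hypothetical, not study data):

```python
# Dice similarity coefficient between two binary segmentation masks,
# here represented as flat lists of 0/1 voxel labels.

def dice(mask_a, mask_b):
    """DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 for identical non-empty masks."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred   = [0, 1, 1, 1, 0]   # model segmentation (toy example)
expert = [0, 1, 1, 0, 0]   # expert reference segmentation
score = dice(pred, expert)
```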
Artificial Intelligence Outcome Prediction in Neonates with Encephalopathy (AI-OPiNE)
Radiol Artif Intell. 2024 Jul 10:e240076. doi: 10.1148/ryai.240076. Online ahead of print.
ABSTRACT
Purpose To develop a deep learning algorithm to predict 2-year neurodevelopmental outcomes in neonates with hypoxic-ischemic encephalopathy (HIE) using MRI and basic clinical data. Materials and Methods In this study, MRI data of term neonates with encephalopathy in the High Dose Erythropoietin for Asphyxia (HEAL) trial (ClinicalTrials.gov: NCT02811263), who were enrolled from 17 institutions between January 25, 2017 and October 9, 2019, were retrospectively analyzed. The harmonized MRI protocol included T1-weighted, T2-weighted, and diffusion tensor imaging. Deep learning classifiers were trained to predict the primary outcome of the HEAL trial (death or any neurodevelopmental impairment [NDI] at 2 years) using multisequence MRI and basic clinical variables, including sex and gestational age at birth. Model performance was evaluated on test sets comprising 10% of cases from 15 institutions (in-distribution test set, n = 41) and 100% of cases from 2 institutions (out-of-distribution test set, n = 41). Model performance in predicting additional secondary outcomes, including death alone, was also assessed. Results Of the 414 neonates (mean gestational age, 39 weeks ± 1.4; 232 males, 182 females) in the study cohort, 198 (48%) died or had any NDI at 2 years. The deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.74 (95% CI: 0.60-0.86) and 63% accuracy on the in-distribution test set and an AUC of 0.77 (95% CI: 0.63-0.90) and 78% accuracy on the out-of-distribution test set. Performance was similar or better for predicting secondary outcomes.
Conclusion Deep learning analysis of neonatal brain MRI yielded high performance for predicting 2-year neurodevelopmental outcomes. ©RSNA, 2024.
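The evaluation protocol above (AUC with a 95% CI plus accuracy on each held-out test set) can be sketched as follows. The abstract does not state how the CIs were obtained or what decision threshold was used, so scikit-learn's `roc_auc_score`, a simple bootstrap CI, and a 0.5 threshold are assumptions standing in for the authors' method:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

def evaluate(y_true, y_prob, n_boot=2000, seed=0):
    """AUC with a bootstrap 95% CI, plus accuracy at a 0.5 threshold."""
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_prob)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(set(y_true[idx])) < 2:  # AUC needs both classes in the resample
            continue
        boots.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))
    return auc, (lo, hi), acc
```

Run once per test set (in-distribution and out-of-distribution) to reproduce the style of figures reported above.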
PMID:38984984 | DOI:10.1148/ryai.240076
Deep Learning-Based Vascular Aging Prediction From Retinal Fundus Images
Transl Vis Sci Technol. 2024 Jul 1;13(7):10. doi: 10.1167/tvst.13.7.10.
ABSTRACT
PURPOSE: The purpose of this study was to establish and validate a deep learning model to screen vascular aging using retinal fundus images. Although vascular aging is considered a novel cardiovascular risk factor, the assessment methods are currently limited and often only available in developed regions.
METHODS: We used 8865 retinal fundus images and clinical parameters of 4376 patients from two independent datasets to train a deep learning algorithm. The gold standard for vascular aging was defined as a pulse wave velocity ≥1400 cm/s. The predicted probability of vascular aging was defined as the deep learning retinal vascular aging (Reti-aging) score. We compared the performance of the deep learning model and clinical parameters by calculating the area under the receiver operating characteristic curve (AUC). We recruited clinical specialists, including ophthalmologists and geriatricians, to assess vascular aging in patients from retinal fundus images, in order to compare the diagnostic performance of the deep learning model with that of clinical specialists. Finally, the potential of the Reti-aging score to identify new-onset hypertension (NH) and new-onset carotid artery plaque (NCP) in the subsequent three years was examined.
RESULTS: The Reti-aging score model achieved an AUC of 0.826 (95% confidence interval [CI] = 0.793-0.855) and 0.779 (95% CI = 0.765-0.794) in the internal and external datasets, respectively. It predicted vascular aging better than models based on clinical parameters. The average accuracy of ophthalmologists (66.3%) was lower than that of the Reti-aging score model, whereas geriatricians were unable to make predictions from retinal fundus images. The Reti-aging score was associated with the risk of NH and NCP (P < 0.05).
CONCLUSIONS: The Reti-aging score model might serve as a novel method to predict vascular aging through analysis of retinal fundus images. Reti-aging score provides a novel indicator to predict new-onset cardiovascular diseases.
TRANSLATIONAL RELEVANCE: Given the robust performance of our model, it provides a new and reliable method for screening vascular aging, especially in less-developed areas.
PMID:38984914 | DOI:10.1167/tvst.13.7.10
Proton spot dose estimation based on positron activity distributions with neural network
Med Phys. 2024 Jul 10. doi: 10.1002/mp.17297. Online ahead of print.
ABSTRACT
BACKGROUND: Positron emission tomography (PET) has been investigated for its ability to reconstruct proton-induced positron activity distributions in proton therapy. This technique holds potential for range verification in clinical practice. Recently, deep learning-based dose estimation from positron activity distributions shows promise for in vivo proton dose monitoring and guided proton therapy.
PURPOSE: This study evaluates the effectiveness of three classical neural network models, the recurrent neural network (RNN), U-Net, and Transformer, for proton dose estimation. It also investigates the characteristics of these models, providing valuable insights for selecting an appropriate model in clinical practice.
METHODS: Proton dose calculations for spot beams were simulated using Geant4. Computed tomography (CT) images from four head cases were utilized, with three for training neural networks and the remaining one for testing. The neural networks were trained with one-dimensional (1D) positron activity distributions as inputs and generated 1D dose distributions as outputs. The impact of the number of training samples on the networks was examined, and their dose prediction performance in both homogeneous brain and heterogeneous nasopharynx sites was evaluated. Additionally, the effect of positron activity distribution uncertainty on dose prediction performance was investigated. To quantitatively evaluate the models, mean relative error (MRE) and absolute range error (ARE) were used as evaluation metrics.
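The two evaluation metrics can be illustrated with a minimal sketch over 1D depth-dose curves. The abstract does not give exact definitions, so normalizing MRE by the true dose maximum and defining range as the distal 80% falloff depth (R80) are assumptions chosen for illustration:

```python
import numpy as np

def mean_relative_error(dose_pred, dose_true):
    """Mean relative error, normalized by the true dose maximum (one common convention)."""
    return np.mean(np.abs(dose_pred - dose_true)) / dose_true.max()

def distal_range(dose, depths, level=0.8):
    """Depth distal to the peak where dose falls to `level` of its maximum,
    found by linear interpolation between the bracketing samples."""
    d_max = dose.max()
    i_peak = int(dose.argmax())
    for i in range(i_peak, len(dose) - 1):
        if dose[i] >= level * d_max > dose[i + 1]:
            frac = (dose[i] - level * d_max) / (dose[i] - dose[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    return depths[-1]

def absolute_range_error(dose_pred, dose_true, depths):
    """ARE: absolute difference between predicted and true distal ranges."""
    return abs(distal_range(dose_pred, depths) - distal_range(dose_true, depths))
```

With these definitions, a 0.5 mm shift of the distal falloff yields an ARE of 0.5 mm regardless of the dose shape, which matches the way the results below are reported.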
RESULTS: The U-Net exhibited a notable advantage in range verification with a smaller number of training samples, achieving approximately 75% of AREs below 0.5 mm using only 500 training samples. The networks performed better in the homogeneous brain site compared to the heterogeneous nasopharyngeal site. In the homogeneous brain site, all networks exhibited small AREs, with approximately 90% of the AREs below 0.5 mm. The Transformer exhibited the best overall dose distribution prediction, with approximately 92% of MREs below 3%. In the heterogeneous nasopharyngeal site, all networks demonstrated acceptable AREs, with approximately 88% of AREs below 3 mm. The Transformer maintained the best overall dose distribution prediction, with approximately 85% of MREs below 5%. The performance of all three networks in dose prediction declined as the uncertainty of positron activity distribution increased, and the Transformer consistently outperformed the other networks in all cases.
CONCLUSIONS: Both the U-Net and the Transformer have certain advantages in the proton dose estimation task. The U-Net is well suited for range verification when training samples are limited, while the Transformer outperforms the other models in overall dose prediction for dose-guided proton therapy.
PMID:38984805 | DOI:10.1002/mp.17297
Geometric Epitope and Paratope Prediction
Bioinformatics. 2024 Jul 10:btae405. doi: 10.1093/bioinformatics/btae405. Online ahead of print.
ABSTRACT
MOTIVATION: Identifying the binding sites of antibodies is essential for developing vaccines and synthetic antibodies. In this paper, we investigate the optimal representation for predicting the binding sites in the two molecules and emphasize the importance of geometric information.
RESULTS: Specifically, we compare different geometric deep learning methods applied to proteins' inner (I-GEP) and outer (O-GEP) structures. We incorporate 3D coordinates and spectral geometric descriptors as input features to fully leverage the geometric information. Our research suggests that different geometric representations are useful for different tasks: surface-based models are more efficient at predicting epitope binding, while graph models are better at paratope prediction, both achieving significant performance improvements. Moreover, we analyse the impact of structural changes in antibodies and antigens resulting from conformational rearrangements or reconstruction errors. Through this investigation, we showcase the robustness of geometric deep learning methods and spectral geometric descriptors to such perturbations.
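The abstract does not specify which spectral geometric descriptors are used; one common family that could serve as an illustration is the heat-kernel signature computed from graph-Laplacian eigenpairs of a distance-cutoff contact graph. This is a hypothetical stand-in, not the paper's exact pipeline (see the linked repository for that):

```python
import numpy as np

def heat_kernel_signature(coords, cutoff=8.0, times=(0.1, 1.0, 10.0), k=10):
    """Per-node spectral descriptor from the combinatorial graph Laplacian of a
    distance-cutoff contact graph over 3D coordinates (one common choice)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = ((d < cutoff) & (d > 0)).astype(float)   # contact graph adjacency
    lap = np.diag(adj.sum(1)) - adj                # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(lap)
    vals, vecs = vals[:k], vecs[:, :k]             # keep low-frequency spectrum
    # HKS(i, t) = sum_j exp(-lambda_j * t) * phi_j(i)^2
    return np.stack([(np.exp(-vals * t) * vecs**2).sum(1) for t in times], axis=1)
```

Descriptors of this kind are intrinsic to the shape, which is one reason spectral features can remain stable under the conformational perturbations studied above.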
AVAILABILITY AND IMPLEMENTATION: The python code for the models and the processing pipeline is open-source and available at https://github.com/Marco-Peg/GEP.
SUPPLEMENTARY INFORMATION: The supplementary material includes comprehensive details about the proposed method and additional results.
PMID:38984742 | DOI:10.1093/bioinformatics/btae405
The utilization of artificial intelligence in glaucoma: diagnosis versus screening
Front Ophthalmol (Lausanne). 2024 Mar 6;4:1368081. doi: 10.3389/fopht.2024.1368081. eCollection 2024.
ABSTRACT
With advances in the implementation of artificial intelligence (AI) across ophthalmology disciplines, AI continues to have a significant impact on glaucoma diagnosis and screening. This article explores the distinct roles of AI in specialized ophthalmology clinics and general practice, highlighting the critical balance between sensitivity and specificity in diagnostic and screening models. Screening models prioritize sensitivity to detect potential glaucoma cases efficiently, while diagnostic models emphasize specificity to confirm disease with high accuracy. AI applications, primarily using machine learning (ML) and deep learning (DL), have been successful in detecting glaucomatous optic neuropathy from colored fundus photographs and other retinal imaging modalities. Diagnostic models integrate data extracted from various modalities (including tests that assess structural optic nerve damage as well as those evaluating functional damage) to provide a more nuanced, accurate, and thorough approach to diagnosing glaucoma. As AI continues to evolve, the collaboration between technology and clinical expertise should focus on improving the specificity of glaucoma diagnostic models to assist ophthalmologists, revolutionizing glaucoma diagnosis and improving patient care.
PMID:38984126 | PMC:PMC11182276 | DOI:10.3389/fopht.2024.1368081