Deep learning
Application of Deep Neural Networks in the Manufacturing Process of Mesenchymal Stem Cells Therapeutics
Int J Stem Cells. 2024 Sep 26. doi: 10.15283/ijsc24070. Online ahead of print.
ABSTRACT
Current image-based analysis methods for monitoring cell confluency and status depend on individual interpretations, which can lead to wide variations in the quality of cell therapeutics. To overcome these limitations, images of mesenchymal stem cells cultured adherently in various types of culture vessels were captured and analyzed using a deep neural network. Among the various deep learning methods, a classification and detection algorithm was selected to verify cell confluency and status. We confirmed that the image classification algorithm demonstrates significant accuracy for both single- and multistack images. Abnormal cells could be detected exclusively in single-stack images, as multistack culture was performed only when abnormal cells were absent in the single-stack culture. This study is the first to analyze cell images based on a deep learning method that directly impacts yield and quality, which are important product parameters in stem cell therapeutics.
PMID:39322430 | DOI:10.15283/ijsc24070
Cardiovascular care with digital twin technology in the era of generative artificial intelligence
Eur Heart J. 2024 Sep 26:ehae619. doi: 10.1093/eurheartj/ehae619. Online ahead of print.
ABSTRACT
Digital twins, which are in silico replications of an individual and its environment, have advanced clinical decision-making and prognostication in cardiovascular medicine. The technology enables personalized simulations of clinical scenarios, prediction of disease risk, and strategies for clinical trial augmentation. Current applications of cardiovascular digital twins have integrated multi-modal data into mechanistic and statistical models to build physiologically accurate cardiac replicas to enhance disease phenotyping, enrich diagnostic workflows, and optimize procedural planning. Digital twin technology is rapidly evolving in the setting of newly available data modalities and advances in generative artificial intelligence, enabling dynamic and comprehensive simulations unique to an individual. These twins fuse physiologic, environmental, and healthcare data into machine learning and generative models to build real-time patient predictions that can model interactions with the clinical environment to accelerate personalized patient care. This review summarizes digital twins in cardiovascular medicine and their potential future applications by incorporating new personalized data modalities. It examines the technical advances in deep learning and generative artificial intelligence that broaden the scope and predictive power of digital twins. Finally, it highlights the individual and societal challenges as well as ethical considerations that are essential to realizing the future vision of incorporating cardiology digital twins into personalized cardiovascular care.
PMID:39322420 | DOI:10.1093/eurheartj/ehae619
PreAlgPro: Prediction of allergenic proteins with pre-trained protein language model and efficient neural network
Int J Biol Macromol. 2024 Sep 23:135762. doi: 10.1016/j.ijbiomac.2024.135762. Online ahead of print.
ABSTRACT
Allergy is a prevalent phenomenon, involving allergens such as nuts and milk. Avoiding exposure to allergens is the most effective preventive measure against allergic reactions. However, current homology-based methods for identifying allergenic proteins encounter challenges when dealing with non-homologous data. Traditional machine learning approaches rely on manually extracted features, which lack important protein functional characteristics, including evolutionary information. Consequently, there is still considerable room for improvement in existing methods. In this study, we present PreAlgPro, a method for identifying allergenic proteins based on pre-trained protein language models and deep learning techniques. Specifically, we employed the ProtT5 model to extract protein embedding features, replacing the manual feature extraction step. Furthermore, we devised an Attention-CNN neural network architecture to identify potential features that contribute to the classification of allergenic proteins. The performance of our model was evaluated on four independent test sets, and the experimental results demonstrate that PreAlgPro surpasses existing state-of-the-art methods. Additionally, we collected allergenic protein samples to validate the robustness of the model and conducted an analysis of model interpretability.
PMID:39322150 | DOI:10.1016/j.ijbiomac.2024.135762
Deep learning for enhanced spectral analysis of MA-XRF datasets of paintings
Sci Adv. 2024 Sep 27;10(39):eadp6234. doi: 10.1126/sciadv.adp6234. Epub 2024 Sep 25.
ABSTRACT
Recent advancements of noninvasive imaging techniques applied for the study and conservation of paintings have driven a rapid development of cutting-edge computational methods. Macro x-ray fluorescence (MA-XRF), a well-established tool in this domain, generates complex and voluminous datasets that pose analytical challenges. To address this, we have incorporated machine learning strategies specifically designed for the analysis as they allow for identification of nontrivial dependencies and classification within these high-dimensional data, thereby promising comprehensive interrogation. We introduce a deep learning algorithm trained on a synthetic dataset that allows for fast and accurate analysis of the XRF spectra in MA-XRF datasets. This approach successfully overcomes the limitations commonly associated with traditional deconvolution methods. Applying this methodology to a painting by Raphael, we demonstrate that our model not only achieves superior accuracy in quantifying the fluorescence line intensities but also effectively eliminates the artifacts typically observed in elemental maps generated through conventional analysis methods.
PMID:39321288 | DOI:10.1126/sciadv.adp6234
Deep learning for blood glucose level prediction: How well do models generalize across different data sets?
PLoS One. 2024 Sep 25;19(9):e0310801. doi: 10.1371/journal.pone.0310801. eCollection 2024.
ABSTRACT
Deep learning-based models for predicting blood glucose levels in diabetic patients can facilitate proactive measures to prevent critical events and are essential for closed-loop control therapy systems. However, selecting appropriate models from the literature may not always yield conclusive results, as the choice could be influenced by biases or misleading evaluations stemming from different methodologies, datasets, and preprocessing techniques. This study aims to compare and comprehensively analyze the performance of various deep learning models across diverse datasets to assess their applicability and generalizability across a broader spectrum of scenarios. Commonly used deep learning models for blood glucose level forecasting, such as the feed-forward neural network (FFN), convolutional neural network, long short-term memory network (LSTM), temporal convolutional neural network, and self-attention network (SAN), are considered in this study. To evaluate the generalization capabilities of each model, four datasets of varying sizes, encompassing samples from different age groups and conditions, are utilized. Performance metrics include Root Mean Square Error (RMSE), Mean Absolute Difference (MAD), and Coefficient of Determination (CoD) for analytical assessment, the Clarke Error Grid (CEG) for clinical assessment, the Kolmogorov-Smirnov (KS) test for statistical analysis, and generalization ability evaluations to obtain both coarse and granular insights. The experimental findings indicate that the LSTM model demonstrates superior performance, with the lowest root mean square error and highest generalization capability among all models, closely followed by SAN. The ability of LSTM and SAN to capture long-term dependencies in blood glucose data and their correlations with various influencing factors and events contributes to their enhanced performance. Despite its lower predictive performance, the FFN was able to capture patterns and trends in the data, suggesting its applicability in forecasting future direction. Moreover, this study helps in identifying the optimal model based on specific objectives, whether prioritizing generalization or accuracy.
PMID:39321157 | DOI:10.1371/journal.pone.0310801
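The analytical metrics named in the abstract above (RMSE, MAD, CoD) follow their standard definitions; a minimal illustrative sketch with hypothetical glucose readings (not code or data from the study):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mad(y_true, y_pred):
    """Mean Absolute Difference."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def cod(y_true, y_pred):
    """Coefficient of Determination (R^2)."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical blood glucose readings in mg/dL
true_bg = [110.0, 150.0, 90.0, 200.0]
pred_bg = [115.0, 145.0, 95.0, 190.0]
print(rmse(true_bg, pred_bg))  # ≈6.61
print(mad(true_bg, pred_bg))   # 6.25
```

Lower RMSE/MAD and higher CoD indicate a better fit; RMSE penalizes large excursions (e.g., missed hypo- or hyperglycemic events) more heavily than MAD.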
UTSRMorph: A Unified Transformer and Superresolution Network for Unsupervised Medical Image Registration
IEEE Trans Med Imaging. 2024 Sep 25;PP. doi: 10.1109/TMI.2024.3467919. Online ahead of print.
ABSTRACT
Complicated image registration is a key issue in medical image analysis, and deep learning-based methods have achieved better results than traditional methods. The methods include ConvNet-based and Transformer-based methods. Although ConvNets can effectively utilize local information to reduce redundancy via small neighborhood convolution, the limited receptive field results in the inability to capture global dependencies. Transformers can establish long-distance dependencies via a self-attention mechanism; however, the intense calculation of the relationships among all tokens leads to high redundancy. We propose a novel unsupervised image registration method named the unified Transformer and superresolution (UTSRMorph) network, which can enhance feature representation learning in the encoder and generate detailed displacement fields in the decoder to overcome these problems. We first propose a fusion attention block to integrate the advantages of ConvNets and Transformers, which inserts a ConvNet-based channel attention module into a multihead self-attention module. The overlapping attention block, a novel cross-attention method, uses overlapping windows to obtain abundant correlations with match information of a pair of images. Then, the blocks are flexibly stacked into a new powerful encoder. The decoder generation process of a high-resolution deformation displacement field from low-resolution features is considered as a superresolution process. Specifically, the superresolution module was employed to replace interpolation upsampling, which can overcome feature degradation. UTSRMorph was compared to state-of-the-art registration methods in the 3D brain MR (OASIS, IXI) and MR-CT datasets (abdomen, craniomaxillofacial). The qualitative and quantitative results indicate that UTSRMorph achieves relatively better performance. The code and datasets used are publicly available at https://github.com/Runshi-Zhang/UTSRMorph.
PMID:39321000 | DOI:10.1109/TMI.2024.3467919
An Explainable Unified Framework of Spatio-Temporal Coupling Learning with Application to Dynamic Brain Functional Connectivity Analysis
IEEE Trans Med Imaging. 2024 Sep 25;PP. doi: 10.1109/TMI.2024.3467384. Online ahead of print.
ABSTRACT
Time-series data such as fMRI and MEG carry a wealth of inherent spatio-temporal coupling relationship, and their modeling via deep learning is essential for uncovering biological mechanisms. However, current machine learning models for mining spatio-temporal information usually overlook this intrinsic coupling association, in addition to poor explainability. In this paper, we present an explainable learning framework for spatio-temporal coupling. Specifically, this framework constructs a deep learning network based on spatio-temporal correlation, which can well integrate the time-varying coupled relationships between node representation and inter-node connectivity. Furthermore, it explores spatio-temporal evolution at each time step, providing a better explainability of the analysis results. Finally, we apply the proposed framework to brain dynamic functional connectivity (dFC) analysis. Experimental results demonstrate that it can effectively capture the variations in dFC during brain development and the evolution of spatio-temporal information at the resting state. Two distinct developmental functional connectivity (FC) patterns are identified. Specifically, the connectivity among regions related to emotional regulation decreases, while the connectivity associated with cognitive activities increases. In addition, children and young adults display notable cyclic fluctuations in resting-state brain dFC.
PMID:39320999 | DOI:10.1109/TMI.2024.3467384
The Development and Application of KinomePro-DL: A Deep Learning Based Online Small Molecule Kinome Selectivity Profiling Prediction Platform
J Chem Inf Model. 2024 Sep 25. doi: 10.1021/acs.jcim.4c00595. Online ahead of print.
ABSTRACT
Characterizing the kinome selectivity profiles of kinase inhibitors is essential in the early stages of novel small-molecule drug discovery. This characterization is critical for interpreting potential adverse events caused by off-target polypharmacology effects and provides unique pharmacological insights for drug repurposing development of existing kinase inhibitor drugs. However, experimental profiling of whole kinome selectivity is still time-consuming and resource-demanding. Here, we report a deep learning classification model using an in-house built data set of inhibitors against 191 well-representative kinases constructed based on a novel strategy by systematically cleaning and integrating six public data sets. This model, a multitask deep neural network, predicts the kinome selectivity profiles of compounds with novel structures. The model demonstrates excellent predictive performance, with auROC, prc-AUC, Accuracy, and Binary_cross_entropy of 0.95, 0.92, 0.90, and 0.37, respectively. It also performs well in a priori testing for inhibitors targeting different categories of proteins from internal compound collections, significantly improving over similar models on data sets from practical application scenarios. Integrated to subsequent machine learning-enhanced virtual screening workflow, novel CDK2 kinase inhibitors with potent kinase inhibitory activity and excellent kinome selectivity profiles are successfully identified. Additionally, we developed a free online web server, KinomePro-DL, to predict the kinome selectivity profiles and kinome-wide polypharmacology effects of small molecules (available on kinomepro-dl.pharmablock.com). Uniquely, our model allows users to quickly fine-tune it with their own training data sets, enhancing both prediction accuracy and robustness.
PMID:39320984 | DOI:10.1021/acs.jcim.4c00595
Accurate prediction of discontinuous crack paths in random porous media via a generative deep learning model
Proc Natl Acad Sci U S A. 2024 Oct;121(40):e2413462121. doi: 10.1073/pnas.2413462121. Epub 2024 Sep 25.
ABSTRACT
Pore structures provide extra freedoms for the design of porous media, leading to desirable properties, such as high catalytic rate, energy storage efficiency, and specific strength. This unfortunately makes porous media susceptible to failure. A deep understanding of the failure mechanism in microstructures is key to customizing high-performance crack-resistant porous media. However, solving the fracture problem of porous materials is computationally intractable due to the highly complicated configurations of microstructures. To bridge the structural configurations and fracture responses of random porous media, a unique generative deep learning model is developed. A two-step strategy is proposed to deconstruct the fracture process, sequentially corresponding to elastic deformation and crack propagation. The geometry of the microstructure is translated into a scalar elastic field as an intermediate variable, and then the crack path is predicted. The neural network precisely characterizes the strong interactions among pore structures, the multiscale behaviors of fracture, and the discontinuous essence of crack propagation. Crack paths in random porous media are accurately predicted simply from images of the targets, without any additional physical information as input. The model achieves an outstanding prediction accuracy of 90.25%, the highest reported so far, possesses robust generalization capability, and completes a prediction within a second. This study opens an avenue to high-throughput evaluation of the fracture behaviors of heterogeneous materials with complex geometries.
PMID:39320916 | DOI:10.1073/pnas.2413462121
Deep learning in Cobb angle automated measurement on X-rays: a systematic review and meta-analysis
Spine Deform. 2024 Sep 25. doi: 10.1007/s43390-024-00954-4. Online ahead of print.
ABSTRACT
PURPOSE: This study aims to provide an overview of different deep learning algorithms (DLAs), identify the limitations, and summarize potential solutions to improve the performance of DLAs.
METHODS: We reviewed eligible studies on DLAs for automated Cobb angle estimation on X-rays and conducted a meta-analysis. A systematic literature search was conducted in six databases up until September 2023. Our meta-analysis included an evaluation of reported circular mean absolute error (CMAE) from the studies, as well as a subgroup analysis of implementation strategies. Risk of bias was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). This study was registered in PROSPERO prior to initiation (CRD42023403057).
RESULTS: We identified 120 articles from our systematic search (n = 3022), eventually including 50 studies in the systematic review and 17 studies in the meta-analysis. The overall estimate for CMAE was 2.99 (95% CI 2.61-3.38), with high heterogeneity (94%, p < 0.01). Segmentation-based methods showed greater accuracy (p < 0.01), with a CMAE of 2.40 (95% CI 1.85-2.95), compared to landmark-based methods, which had a CMAE of 3.31 (95% CI 2.89-3.72).
CONCLUSIONS: According to our limited meta-analysis results, DLAs have shown relatively high accuracy for automated Cobb angle measurement. In terms of CMAE, segmentation-based methods may perform better than landmark-based methods. We also summarize potential ways to improve model design in future studies. It is important to follow quality guidelines when reporting on DLAs.
PMID:39320698 | DOI:10.1007/s43390-024-00954-4
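The pooled CMAE above is a circular statistic. A minimal sketch of one common way to compute a circular mean absolute error, assuming angles in degrees with differences wrapped into (-180°, 180°]; the exact definition used across the reviewed studies may differ:

```python
def circular_mae(true_deg, pred_deg):
    """Circular mean absolute error in degrees: each angular difference
    is wrapped into (-180, 180] so that, e.g., 359 deg and 1 deg differ by 2 deg."""
    diffs = []
    for t, p in zip(true_deg, pred_deg):
        d = (p - t + 180.0) % 360.0 - 180.0  # wrap the signed difference
        diffs.append(abs(d))
    return sum(diffs) / len(diffs)

# Hypothetical Cobb angle measurements (degrees)
print(circular_mae([10.0, 45.0, 359.0], [12.0, 42.0, 1.0]))  # ≈2.33
```

The wrapping step is what distinguishes CMAE from a plain MAE; for the small Cobb angles typical of scoliosis measurement the two usually coincide.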
Classification of diabetic retinopathy algorithm based on a novel dual-path multi-module model
Med Biol Eng Comput. 2024 Sep 25. doi: 10.1007/s11517-024-03194-w. Online ahead of print.
ABSTRACT
Diabetic retinopathy is a chronic eye disease caused by diabetes. As the disease progresses, the blood vessels in the retina undergo changes such as dilation, leakage, and new vessel formation. Early detection and treatment of the lesions are vital for preventing and reducing vision loss. This paper proposes a new dual-path multi-module network algorithm for diabetic retinopathy classification, aiming to accurately classify the diabetic retinopathy stage to facilitate early diagnosis and intervention. For data augmentation, the algorithm first enhances retinal lesion features using color-correction and multi-scale fusion algorithms. It then refines local information via a multi-path multiplexing structure with convolutional kernels of different sizes. Finally, a multi-feature fusion module is used to improve the accuracy of the diabetic retinopathy classification model. Two public datasets and a real hospital dataset are used to validate the algorithm, yielding accuracies of 98.9%, 99.3%, and 98.3%, respectively. The experimental results not only confirm the algorithm's advancement and practicality for automatic DR diagnosis but also suggest broad application prospects in clinical settings, where it is expected to provide strong technical support for the early screening and treatment of diabetic retinopathy.
PMID:39320579 | DOI:10.1007/s11517-024-03194-w
Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound
Insights Imaging. 2024 Sep 19;15(1):227. doi: 10.1186/s13244-024-01810-9.
ABSTRACT
OBJECTIVES: To noninvasively estimate three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) and enhance performance and interpretability via multi-task deep learning.
METHODS: The study included 388 breast cancer patients who underwent 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models, a single-task and a multi-task, were developed; the former predicts biomarker expression, while the latter combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and DeLong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM++.
RESULTS: All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction, the single-task and multi-task models achieved respective AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, as observed in the test set. In the overall evaluation, the multi-task model demonstrated superior performance in the test set, achieving a higher macro AUC of 0.733, in contrast to 0.708 for the single-task model. The Grad-CAM++ method revealed that the multi-task model exhibited a stronger focus on diseased tissue areas, improving the interpretability of how the model works.
CONCLUSION: Both models demonstrated impressive performance, with the multi-task model excelling in accuracy and offering improved interpretability on noninvasive 3DWBUS images using Grad-CAM++.
CRITICAL RELEVANCE STATEMENT: The multi-task deep learning model exhibits effective prediction for breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening.
KEY POINTS: Tumoral biomarkers are paramount for determining breast cancer treatment. The multi-task model can improve prediction performance, and improve interpretability in clinical practice. The 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.
PMID:39320560 | DOI:10.1186/s13244-024-01810-9
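The macro AUC reported above is typically the unweighted mean of the per-biomarker AUCs. A minimal pure-Python sketch using the Mann-Whitney formulation of AUC, with hypothetical labels and scores (not the study's data):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive is scored above a random negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(label_sets, score_sets):
    """Unweighted mean of per-task AUCs (e.g., one task each for ER, PR, HER2)."""
    return sum(auc(l, s) for l, s in zip(label_sets, score_sets)) / len(label_sets)

# Hypothetical two-biomarker example
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```

Averaging AUCs this way weights each biomarker equally regardless of its positive-class prevalence, which is why a model can trail on one task (e.g., ER) yet still lead on the macro score.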
An ensemble machine learning model assists in the diagnosis of gastric ectopic pancreas and gastric stromal tumors
Insights Imaging. 2024 Sep 19;15(1):225. doi: 10.1186/s13244-024-01809-2.
ABSTRACT
OBJECTIVE: To develop an ensemble machine learning (eML) model using multiphase computed tomography (MPCT) for distinguishing between gastric ectopic pancreas (GEP) and gastric stromal tumors (GIST) in lesions < 3 cm.
METHODS: In this study, we retrospectively collected MPCT images from 138 patients between April 2017 and June 2023 across two centers. Cohort 1 comprised 94 patients divided into a training cohort and an internal validation cohort, while the 44 patients from Cohort 2 constituted the external validation cohort. Deep learning (DL) models were constructed based on the lesion region, and radiomics features were extracted to develop radiomics models, which were later integrated into the fusion model. Model performance was assessed by analyzing the area under the receiver operating characteristic curve (AUROC). The diagnostic efficacy of the optimal model was compared with that of radiologists. Additionally, the radiologists, assisted by the eML model, provided a secondary diagnosis to assess the model's potential clinical value.
RESULTS: After evaluation using an external validation cohort, the radiomics model demonstrated the highest performance in the venous phase, achieving AUROC of 0.87. The DL model showed optimal performance in the non-contrast phase, with AUROC of 0.81. The eML achieved the best performance across all models, with AUROC of 0.90. The use of eML-assisted analysis resulted in a significant improvement in the junior radiologist's accuracy, rising from 0.77 to 0.93 (p < 0.05). However, the senior radiologist's accuracy, while improving from 0.86 to 0.95, did not exhibit a statistically significant difference.
CONCLUSION: eML model based on MPCT can effectively distinguish between GEPs and GISTs < 3 cm.
CRITICAL RELEVANCE STATEMENT: The multiphase CT-based fusion model, incorporating radiomics and DL technology, proves effective in distinguishing between GEP and gastric stromal tumors, serving as a valuable tool to enhance diagnoses and offering references for clinical decision-making.
KEY POINTS: No prior studies have differentiated these tumors via radiomics or DL. Radiomics and DL methodologies unveil potentially distinct phenotypes within lesions. Quantitative CT analysis distinguishes GIST from ectopic pancreas. Ensemble learning aids accurate diagnosis, assisting treatment decisions.
PMID:39320559 | DOI:10.1186/s13244-024-01809-2
A Comprehensive study on the different types of soil desiccation cracks and their implications for soil identification using deep learning techniques
Eur Phys J E Soft Matter. 2024 Sep 25;47(9):57. doi: 10.1140/epje/s10189-024-00453-4.
ABSTRACT
Rapid drying of soil leads to fracture. The cracks left behind are best seen in fine-textured soils, such as clays, that shrink on drying, but they appear in other soils too. Different soils from the same region show different characteristic desiccation cracks, which can thus be used to identify the soil type. In this paper, three types of soil, namely clay, silt, and sandy clay loam from the Brahmaputra river basin in India, are studied for their crack patterns, using both conventional analyses of hierarchical crack patterns via Euler numbers and fractal dimensions, and deep-learning techniques applied to the images. Fractal dimension analysis is found to be a useful pre-processing tool for deep-learning image analysis. Experiments with feed-forward neural networks, with and without data augmentation and with the use of filters and noise, suggest that data augmentation increases the robustness and improves the accuracy of the model. Even with noise introduced to mimic real-life conditions, 92.09% accuracy in soil identification was achieved, proving the combination of conventional desiccation-crack image analysis with deep-learning algorithms to be an effective tool for identifying real soil types.
PMID:39320558 | DOI:10.1140/epje/s10189-024-00453-4
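Box counting is the usual way to estimate the fractal dimension used above as a pre-processing step. The sketch below, with a hypothetical binary crack image, illustrates the idea; the authors' exact procedure may differ:

```python
import math

def box_count_dimension(img, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting fractal dimension of a square binary image:
    count boxes of side s containing any foreground (crack) pixel, then fit
    log N(s) against log(1/s) by least squares; the slope is the dimension.
    Assumes at least one foreground pixel at every box size."""
    n = len(img)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(img[a][b] for a in range(i, min(i + s, n))
                                 for b in range(j, min(j + s, n))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    # least-squares slope of log N(s) vs. log(1/s)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A fully filled image recovers dimension 2, a straight line about 1; crack networks fall in between, which is what makes the estimate a useful discriminative feature.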
Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence
J Imaging Inform Med. 2024 Sep 25. doi: 10.1007/s10278-024-01267-8. Online ahead of print.
ABSTRACT
Ultrasound-guided quadratus lumborum block (QLB) technology has become a widely used perioperative analgesia method during abdominal and pelvic surgeries. Due to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks heavily rely on anesthesiologist experience. Therefore, using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected data from 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique muscle, the internal oblique muscle, and the transversus abdominis muscle (collectively referred to as the EIT), as well as the bones. On evaluation, Q-VUM demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. The IoU, recall, precision, and dice coefficient achieved for the QLM were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, the Q-VUM predictions showed that 85% of the pixels in the blocked area fell within the actual blocked area. Finally, our model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, we propose a model named Q-VUM that can accurately identify the anatomical structure of the quadratus lumborum in real time. This model aids anesthesiologists in precisely locating the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
PMID:39320548 | DOI:10.1007/s10278-024-01267-8
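The mIoU and Dice figures above follow the standard overlap definitions for segmentation masks. A minimal sketch for binary masks, purely illustrative and not the Q-VUM evaluation code:

```python
def iou_and_dice(pred_mask, true_mask):
    """IoU and Dice coefficient for binary masks given as flat lists of 0/1;
    the same per-pixel formulas apply to 2D images after flattening."""
    inter = sum(p & t for p, t in zip(pred_mask, true_mask))
    p_sum = sum(pred_mask)
    t_sum = sum(true_mask)
    iou = inter / (p_sum + t_sum - inter)   # overlap / union
    dice = 2 * inter / (p_sum + t_sum)      # 2*overlap / total foreground
    return iou, dice

# Hypothetical 4-pixel masks
print(iou_and_dice([1, 1, 1, 0], [0, 1, 1, 1]))  # IoU 0.5, Dice ≈0.667
```

Dice is always at least as large as IoU for the same prediction (Dice = 2*IoU/(1+IoU)), which is why papers commonly report both; mean IoU then averages the per-class values.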
Cross-site Validation of AI Segmentation and Harmonization in Breast MRI
J Imaging Inform Med. 2024 Sep 25. doi: 10.1007/s10278-024-01266-9. Online ahead of print.
ABSTRACT
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common: 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologist (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
PMID:39320547 | DOI:10.1007/s10278-024-01266-9
Predicting Gene Comutation of EGFR and TP53 by Radiomics and Deep Learning in Patients With Lung Adenocarcinomas
J Thorac Imaging. 2024 Sep 25. doi: 10.1097/RTI.0000000000000817. Online ahead of print.
ABSTRACT
PURPOSE: This study was designed to construct progressive binary classification models based on radiomics and deep learning to predict the presence of epidermal growth factor receptor (EGFR) and TP53 mutations and to assess the models' capacities to identify patients who are suitable for TKI-targeted therapy and those with poor prognoses.
MATERIALS AND METHODS: A total of 267 patients with lung adenocarcinomas who underwent genetic testing and noncontrast chest computed tomography from our hospital were retrospectively included. Clinical information and imaging characteristics were gathered, and high-throughput feature acquisition on all defined regions of interest (ROIs) was carried out. We selected features and constructed clinical models, radiomics models, deep learning models, and ensemble models to predict EGFR status with all patients and TP53 status with EGFR-positive patients, respectively. The validity and reliability of each model were expressed as the area under the curve (AUC), sensitivity, specificity, accuracy, precision, and F1 score.
RESULTS: We constructed 7 kinds of models for 2 different dichotomies, namely, the clinical model, the radiomics model, the DL model, the rad-clin model, the DL-clin model, the DL-rad model, and the DL-rad-clin model. For EGFR- and EGFR+, the DL-rad-clin model got the highest AUC value of 0.783 (95% CI: 0.677-0.889), followed by the rad-clin model, the DL-clin model, and the DL-rad model. In the group with an EGFR mutation, for TP53- and TP53+, the rad-clin model got the highest AUC value of 0.811 (95% CI: 0.651-0.972), followed by the DL-rad-clin model and the DL-rad model.
CONCLUSION: Our progressive binary classification models based on radiomics and deep learning may provide a good reference and complement for the clinical identification of TKI responders and those with poor prognoses.
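The AUC values used to rank these models reduce to the Mann-Whitney statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch, independent of any of the study's models (toy labels and scores):

```python
import numpy as np

def auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive case is scored above a random negative one (ties get half credit)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise positive-vs-negative comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUC of 0.5 corresponds to chance-level ranking, which is why values around 0.78-0.81 represent a meaningful but imperfect separation of mutation status.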
PMID:39319553 | DOI:10.1097/RTI.0000000000000817
Using 3D facial information to predict malnutrition and consequent complications
Nutr Clin Pract. 2024 Sep 25. doi: 10.1002/ncp.11215. Online ahead of print.
ABSTRACT
BACKGROUND: Phase angle (PhA) correlates with body composition and could predict the nutrition status of patients and disease prognosis. We aimed to explore the feasibility of predicting PhA-diagnosed malnutrition using facial image information based on deep learning (DL).
METHODS: From August 2021 to April 2022, inpatients were enrolled from the surgery, gastroenterology, and oncology departments of a tertiary hospital. Subjective global assessment was used as the gold standard for malnutrition diagnosis. The PhA value yielding the highest Youden index was selected as the cutoff point. We developed a multimodal DL framework to automatically analyze three-dimensional (3D) facial data and accurately determine patients' PhA categories. The framework was trained and validated using a cross-validation approach and tested on an independent dataset.
RESULTS: Four hundred eighty-two patients were included in the final dataset, 176 of whom were malnourished. In male patients, the PhA value with the highest Youden index was 5.55° (AUC = 0.68); in female patients, it was 4.88° (AUC = 0.69). Inpatients with low PhA had a higher incidence of infectious complications during the hospital stay (P = 0.003). The DL model trained with 4096 points extracted from the 3D facial data performed best, showing fair performance in predicting PhA with an AUC of 0.77 and an accuracy of 0.74.
CONCLUSION: Predicting the PhA of inpatients from facial images is feasible and can be used for malnutrition assessment and prognostic prediction.
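Cutoff selection by the highest Youden index, as described in the methods, can be sketched as follows (toy PhA values and labels, not the study's cohort; low PhA is treated as the positive, malnourished class):

```python
import numpy as np

def youden_cutoff(values, labels):
    """Select the marker cutoff maximizing Youden's J = sensitivity + specificity - 1.
    Values at or below the cutoff are called positive (low PhA flags malnutrition)."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=bool)   # True = malnourished
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):             # every observed value is a candidate cutoff
        pred = values <= cut
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Toy cohort: three malnourished patients with low PhA, three well-nourished with high PhA
cutoff, j = youden_cutoff([4.0, 4.5, 5.0, 6.0, 6.5, 7.0], [1, 1, 1, 0, 0, 0])
# cutoff == 5.0 separates the two groups perfectly (J == 1.0)
```

On real data the maximal J is well below 1, which is consistent with the modest AUCs (0.68-0.69) the study reports for the sex-specific cutoffs.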
PMID:39319394 | DOI:10.1002/ncp.11215
Proceedings of the 2024 Transplant AI Symposium
Front Transplant. 2024 Aug 29;3:1399324. doi: 10.3389/frtra.2024.1399324. eCollection 2024.
ABSTRACT
With recent advancements in deep learning (DL) techniques, the use of artificial intelligence (AI) has become increasingly prevalent in all fields. Currently valued at 9.01 billion USD, the AI market is growing rapidly, projected to increase by 40% per annum. There has been great interest in how AI could transform the practice of medicine, with the potential to improve all healthcare spheres from workflow management, accessibility, and cost efficiency to enhanced diagnostics with improved prognostic accuracy, allowing the practice of precision medicine. The applicability of AI is particularly promising for transplant medicine, in which it can help navigate the complex interplay of a myriad of variables and improve patient care. However, caution must be exercised when developing DL models, ensuring they are trained with large, reliable, and diverse datasets to minimize bias and increase generalizability. There must be transparency in the methodology and extensive validation of the model, including randomized controlled trials, to demonstrate performance and cultivate trust among physicians and patients. Furthermore, there is a need to regulate this rapidly evolving field, with updated policies for the governance of AI-based technologies. Taking this into consideration, we summarize the latest transplant AI developments from the Ajmera Transplant Center's inaugural symposium.
PMID:39319335 | PMC:PMC11421390 | DOI:10.3389/frtra.2024.1399324
The potential impact of AI innovations on US occupations
PNAS Nexus. 2024 Sep 24;3(9):pgae320. doi: 10.1093/pnasnexus/pgae320. eCollection 2024 Sep.
ABSTRACT
An occupation is composed of interconnected tasks, and it is these tasks, not occupations themselves, that are affected by Artificial Intelligence (AI). To evaluate how tasks may be impacted, previous approaches relied on manual annotations or coarse-grained matching. Leveraging recent advancements in machine learning, we replace coarse-grained matching with more precise deep learning approaches. Introducing the AI Impact measure, we employ deep learning natural language processing to automatically identify, at scale, AI patents that may impact various occupational tasks. Our methodology relies on a comprehensive dataset of 17,879 task descriptions and quantifies AI's potential impact through analysis of 24,758 AI patents filed with the United States Patent and Trademark Office between 2015 and 2022. Our results reveal that some occupations will potentially be impacted, and that the impact is intricately linked to specific skills. These include not only routine tasks (codified as a series of steps), as previously thought, but also nonroutine ones (e.g. diagnosing health conditions, programming computers, and tracking flight routes). However, AI's impact on labor is limited by the fact that some of the affected occupations are augmented rather than replaced (e.g. neurologists, software engineers, air traffic controllers), and some of the affected sectors are experiencing labor shortages (e.g. IT, healthcare, transport).
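The task-patent matching idea can be illustrated with a deliberately simplified stand-in: the study uses deep language models, but cosine similarity over bag-of-words vectors shows the same scoring structure. The task and patent strings below are invented for illustration:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector for a short description."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical occupational task and patent abstracts
task = "diagnose and treat health conditions in patients"
patents = [
    "neural network system for diagnosing medical conditions",
    "apparatus for routing aircraft flight paths",
]
scores = [cosine(bow(task), bow(p)) for p in patents]
# the medical patent scores higher against the medical task than the aviation patent
```

Word-overlap matching of this kind is exactly the coarse-grained baseline the paper moves away from: deep language models additionally match "diagnose" to "diagnosing" and capture meaning beyond shared surface tokens.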
PMID:39319327 | PMC:PMC11421150 | DOI:10.1093/pnasnexus/pgae320