Deep learning
An Intelligent Early Warning System for Harmful Algal Blooms: Harnessing the Power of Big Data and Deep Learning
Environ Sci Technol. 2024 Mar 4. doi: 10.1021/acs.est.3c03906. Online ahead of print.
ABSTRACT
Harmful algal blooms (HABs) pose a significant ecological threat and economic detriment to freshwater environments. To develop an intelligent early warning system for HABs, big data and deep learning models were harnessed in this study. Data collection was achieved utilizing the vertical aquatic monitoring system (VAMS). Subsequently, the analysis and stratification of the vertical aquatic layer were conducted employing the "DeepDPM-Spectral Clustering" method. This approach drastically reduced the number of predictive models and enhanced the adaptability of the system. The Bloomformer-2 model was developed to conduct both single-step and multistep predictions of Chl-a, integrating the "Alert Level Framework" issued by the World Health Organization to accomplish early warning for HABs. The case study conducted in Taihu Lake revealed that during the winter of 2018 the water column could be partitioned into four clusters (Groups W1-W4), while in the summer of 2019 it could be partitioned into five clusters (Groups S1-S5). Moreover, in a subsequent predictive task, Bloomformer-2 exhibited superior performance across all clusters for both the winter of 2018 and the summer of 2019 (MAE: 0.175-0.394, MSE: 0.042-0.305, and MAPE: 0.228-2.279 for single-step prediction; MAE: 0.184-0.505, MSE: 0.101-0.378, and MAPE: 0.243-4.011 for multistep prediction). The 3-day prediction indicated that Group W1 was in a Level I alert state at all times, whereas Group S1 was mainly under a Level I alert, with seven specific time points escalating to a Level II alert. Furthermore, the end-to-end architecture of this system, coupled with the automation of its various processes, minimized human intervention, endowing it with intelligent characteristics. This research highlights the transformative potential of integrating big data and artificial intelligence in environmental management and emphasizes the importance of model interpretability in machine learning applications.
PMID:38436579 | DOI:10.1021/acs.est.3c03906
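For readers unfamiliar with the two building blocks described above, the sketch below illustrates the general idea under stated assumptions: scikit-learn spectral clustering as a stand-in for the "DeepDPM-Spectral Clustering" stratification step, and a simple threshold function mapping a predicted Chl-a value to an alert level. The 10 and 50 µg/L cutoffs are placeholders rather than the exact values of the WHO Alert Level Framework, and the function names are hypothetical.

```python
# Illustrative sketch, not the authors' code: stratify vertical profiles with
# spectral clustering and map a predicted Chl-a value to an alert level.
import numpy as np
from sklearn.cluster import SpectralClustering

def stratify_water_column(profiles: np.ndarray, n_clusters: int) -> np.ndarray:
    """Cluster depth-wise sensor profiles (n_depths x n_features) into vertical layers."""
    model = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                               random_state=0)
    return model.fit_predict(profiles)

def alert_level(chl_a_ug_per_l: float) -> str:
    """Map a predicted Chl-a concentration to an alert level (illustrative cutoffs only)."""
    if chl_a_ug_per_l < 10.0:    # assumed Level I cutoff
        return "Level I"
    if chl_a_ug_per_l < 50.0:    # assumed Level II cutoff
        return "Level II"
    return "Level III"

# Fifty synthetic depth profiles with three features each (e.g., Chl-a, turbidity, DO)
profiles = np.random.default_rng(0).random((50, 3))
layers = stratify_water_column(profiles, n_clusters=4)
print(layers[:10], alert_level(12.3))
```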
Multitask deep learning for prediction of microvascular invasion and recurrence-free survival in hepatocellular carcinoma based on MRI images
Liver Int. 2024 Mar 4. doi: 10.1111/liv.15870. Online ahead of print.
ABSTRACT
BACKGROUND AND AIMS: Accurate preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) is vital for personalised hepatocellular carcinoma (HCC) management. We developed a multitask deep learning model to predict MVI and RFS using preoperative MRI scans.
METHODS: Utilising a retrospective dataset of 725 HCC patients from seven institutions, we developed and validated a multitask deep learning model focused on predicting MVI and RFS. The model employs a transformer architecture to extract critical features from preoperative MRI scans. It was trained on a set of 234 patients and internally validated on a set of 58 patients. External validation was performed using three independent sets (n = 212, 111, 110).
RESULTS: The multitask deep learning model yielded high MVI prediction accuracy, with AUC values of 0.918 for the training set and 0.800 for the internal test set. In external test sets, AUC values were 0.837, 0.815 and 0.800. Radiologists' sensitivity and inter-rater agreement for MVI prediction improved significantly when their assessments were integrated with the model. For RFS, the model achieved a C-index of 0.763 in the training set and C-index values ranging between 0.628 and 0.728 in the external test sets. Notably, postoperative adjuvant transarterial chemoembolization (PA-TACE) improved RFS only in patients predicted to have high MVI risk and low survival scores (p < .001).
CONCLUSIONS: Our deep learning model allows accurate MVI and survival prediction in HCC patients. Prospective studies are warranted to assess the clinical utility of this model in guiding personalised treatment in conjunction with clinical criteria.
PMID:38436551 | DOI:10.1111/liv.15870
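A minimal sketch of how a multitask objective of this kind can be set up, assuming image-derived features have already been extracted; the module, the cross-entropy head for MVI, and the Cox partial-likelihood survival loss below are illustrative stand-ins, not the paper's transformer architecture.

```python
# Hedged sketch of a two-head multitask objective: a shared encoder feeds a binary
# MVI head (BCE loss) and a survival head trained with a Cox partial-likelihood loss.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mvi_head = nn.Linear(hidden, 1)    # logit for microvascular invasion
        self.risk_head = nn.Linear(hidden, 1)   # log-risk score for survival

    def forward(self, x):
        z = self.encoder(x)
        return self.mvi_head(z).squeeze(-1), self.risk_head(z).squeeze(-1)

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood (no tie handling)."""
    order = torch.argsort(time, descending=True)      # latest event times first
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)      # log-sum over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

model = MultiTaskHead(in_dim=128)
x = torch.randn(32, 128)                              # stand-in MRI-derived features
mvi = torch.randint(0, 2, (32,)).float()
time = torch.rand(32) * 60.0                          # follow-up in months (synthetic)
event = torch.randint(0, 2, (32,)).float()            # 1 = recurrence observed
mvi_logit, risk = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(mvi_logit, mvi) \
       + cox_ph_loss(risk, time, event)
loss.backward()
```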
Detection and coverage estimation of purple nutsedge in turf with image classification neural networks
Pest Manag Sci. 2024 Mar 4. doi: 10.1002/ps.8055. Online ahead of print.
ABSTRACT
BACKGROUND: Accurate detection of weeds and estimation of their coverage is crucial for implementing precision herbicide applications. Deep learning (DL) techniques are typically used for weed detection and coverage estimation by analyzing information at the pixel or individual plant level, which requires a substantial amount of annotated data for training. This study aims to evaluate the effectiveness of using image classification neural networks (NNs) for detecting and estimating weed coverage in bermudagrass turf.
RESULTS: The weed detection NNs, including DenseNet, GoogLeNet, and ResNet, exhibited high overall accuracy and F1 scores (≥0.971) throughout the k-fold cross-validation. DenseNet outperformed GoogLeNet and ResNet with the highest overall accuracy and F1 scores (0.977). Among the evaluated NNs, DenseNet showed the highest overall accuracy and F1 scores (0.996) in the validation and testing datasets for estimating weed coverage. The inference speed of ResNet was similar to that of GoogLeNet but noticeably faster than that of DenseNet. Considering both accuracy and inference speed, ResNet was the most efficient and accurate deep convolutional neural network (DCNN) for weed detection and coverage estimation.
CONCLUSION: These results demonstrated that the developed NNs could effectively detect weeds and estimate their coverage in bermudagrass turf, allowing the calculation of herbicide requirements for variable-rate herbicide applications. The proposed method can be employed in a machine vision-based autonomous site-specific spraying system of smart sprayers.
PMID:38436512 | DOI:10.1002/ps.8055
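As a rough illustration of the transfer-learning setup such studies typically use (an assumed stand-in, not the authors' training code), the snippet below adapts an ImageNet-pretrained DenseNet from torchvision to a two-class weed/turf task by replacing the classification head.

```python
# Hedged fine-tuning sketch: swap the ImageNet head of a pretrained DenseNet for a
# two-class (weed vs. turf) classifier and run one illustrative training step.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int = 2) -> nn.Module:
    net = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    net.classifier = nn.Linear(net.classifier.in_features, num_classes)  # new head
    return net

model = build_classifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)          # stand-in for a batch of turf images
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```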
Pan-cancer image segmentation based on feature pyramids and Mask R-CNN framework
Med Phys. 2024 Mar 4. doi: 10.1002/mp.17014. Online ahead of print.
ABSTRACT
BACKGROUND: Cancer, a disease with a high mortality rate, poses a great threat to patients' physical and mental health and can lead to huge medical costs and emotional damage. With the continuous development of artificial intelligence technologies, deep learning-based cancer image segmentation techniques are becoming increasingly important in cancer detection and accurate diagnosis. However, in segmentation tasks, segmentation efficiency differs between large and small objects, and segmentation of objects at particular sizes remains limited. Previous segmentation frameworks still have room for improvement in multi-scale collaboration when segmenting objects.
PURPOSE: This paper proposes training a deep learning segmentation framework on a dataset processed with a feature pyramid to improve the average precision (AP) index and to realize multi-scale cooperation in target segmentation.
OBJECTIVE: The Pan-Cancer Histology Dataset for Nuclei Instance Segmentation and Classification (PanNuke) was selected; it includes approximately 7500 pathology images of cells from 19 different tissue types, with five cell classifications: cancer, non-cancer, inflammation, death, and connective tissue.
METHODS: First, the method uses whole-slide images from the PanNuke dataset, combined with the mask region-based convolutional neural network (Mask R-CNN) segmentation framework and an improved loss function, to segment and detect each cellular tissue in cancerous sections. Second, to address the problem of non-synergistic object segmentation at different scales in cancerous tissue segmentation, a scheme using feature pyramids to process the dataset was adopted as part of the feature extraction module.
RESULTS: Extensive experimental results on this dataset show that the method in this paper yields 0.269 AP and a boost of about 4% compared to the original Mask R-CNN framework.
CONCLUSIONS: Using a feature pyramid to process the dataset is an effective and feasible way to improve medical image segmentation.
PMID:38436455 | DOI:10.1002/mp.17014
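For orientation, the snippet below runs an off-the-shelf feature-pyramid Mask R-CNN from torchvision on a stand-in tile; it is a hedged analogue of the framework described above, without the paper's improved loss function or its feature-pyramid dataset preprocessing, and the class count of six (five nucleus classes plus background) is an assumption.

```python
# Sketch of instance segmentation with torchvision's FPN-based Mask R-CNN
# (an off-the-shelf analogue, not the paper's modified framework).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=6)  # assumed: 5 classes + background
model.eval()
image = torch.rand(3, 256, 256)              # stand-in for a PanNuke tile
with torch.no_grad():
    outputs = model([image])                 # list of dicts: boxes, labels, scores, masks
print(outputs[0]["masks"].shape)             # per-instance soft masks
```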
Assessment of machine-learning predictions for the Mediator complex subunit MED25 ACID domain interactions with transactivation domains
FEBS Lett. 2024 Mar 4. doi: 10.1002/1873-3468.14837. Online ahead of print.
ABSTRACT
The human Mediator complex subunit MED25 binds transactivation domains (TADs) present in various cellular and viral proteins using two binding interfaces, named H1 and H2, which are found on opposite sides of its ACID domain. Here, we use and compare deep learning methods to characterize human MED25-TAD interfaces and assess the predicted models against published experimental data. For the H1 interface, AlphaFold produces predictions with high reliability scores that agree well with experimental data, while the H2 interface predictions appear inconsistent, preventing the identification of reliable binding modes. Despite these limitations, we experimentally assess the validity of MED25 interface predictions with the viral transcriptional activators Lana-1 and IE62. AlphaFold predictions also suggest the existence of a unique hydrophobic pocket for the Arabidopsis MED25 ACID domain.
PMID:38436147 | DOI:10.1002/1873-3468.14837
Wood identification based on macroscopic images using deep and transfer learning approaches
PeerJ. 2024 Feb 28;12:e17021. doi: 10.7717/peerj.17021. eCollection 2024.
ABSTRACT
Identifying forest types is vital for evaluating the ecological, economic, and social benefits provided by forests, and for protecting, managing, and sustaining them. Although identification has traditionally been based on expert observation, recent developments have increased the use of technologies such as artificial intelligence (AI). The use of advanced methods such as deep learning will make forest species recognition faster and easier. In this study, the deep network models ResNet18, GoogLeNet, VGG19, Inceptionv3, MobileNetv2, DenseNet201, InceptionResNetv2, EfficientNet and ShuffleNet, pre-trained on the ImageNet dataset, were adapted to a new dataset using transfer learning. These models have different architectures that allow a wide range of performance evaluation. Model performance was evaluated by accuracy, recall, precision, F1-score, specificity and the Matthews correlation coefficient. ShuffleNet was proposed as a lightweight network model that achieves high performance with low computational power and resource requirements; with customisation, it was efficient and reached an accuracy close to that of the other models. This study reveals that deep network models are an effective tool for forest species recognition and makes an important contribution to the conservation and management of forests.
PMID:38436000 | PMC:PMC10908261 | DOI:10.7717/peerj.17021
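The metrics listed above can all be derived from a confusion matrix; the sketch below shows one way to compute them with scikit-learn and NumPy, using made-up labels rather than the study's data.

```python
# Illustrative computation of accuracy, recall, precision, specificity, F1 and MCC
# from a multi-class confusion matrix (synthetic labels, not the study's results).
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])   # stand-in species labels
y_pred = np.array([0, 1, 1, 1, 1, 2, 2, 0, 2])

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()
recall = np.diag(cm) / cm.sum(axis=1)             # per-class sensitivity
precision = np.diag(cm) / cm.sum(axis=0)
# Per-class specificity: TN / (TN + FP) = (total - row_i - col_i + diag_i) / (total - row_i)
specificity = [(cm.sum() - cm[i, :].sum() - cm[:, i].sum() + cm[i, i])
               / (cm.sum() - cm[i, :].sum()) for i in range(cm.shape[0])]
print(accuracy, recall, precision, specificity,
      f1_score(y_true, y_pred, average="macro"), matthews_corrcoef(y_true, y_pred))
```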
Prospective Evaluation of Automated Contouring for CT-Based Brachytherapy for Gynecologic Malignancies
Adv Radiat Oncol. 2023 Dec 10;9(4):101417. doi: 10.1016/j.adro.2023.101417. eCollection 2024 Apr.
ABSTRACT
PURPOSE: The use of deep learning to auto-contour organs at risk (OARs) in gynecologic radiation treatment is well established. Yet, there is limited data investigating the prospective use of auto-contouring in clinical practice. In this study, we assess the accuracy and efficiency of auto-contouring OARs for computed tomography-based brachytherapy treatment planning of gynecologic malignancies.
METHODS AND MATERIALS: An in-house contouring tool automatically delineated 5 OARs in gynecologic radiation treatment planning: the bladder, small bowel, sigmoid, rectum, and urethra. Accuracy of each auto-contour was evaluated using a 5-point Likert scale: a score of 5 indicated the contour could be used without edits, while a score of 1 indicated the contour was unusable. During scoring, automated contours were edited and subsequently used for treatment planning. Dice similarity coefficient, mean surface distance, 95% Hausdorff distance, Hausdorff distance, and dosimetric changes between original and edited contours were calculated. Contour approval time and total planning time of a prospective auto-contoured (AC) cohort were compared with times from a retrospective manually contoured (MC) cohort.
RESULTS: Thirty AC cases from January 2022 to July 2022 and 31 MC cases from July 2021 to January 2022 were included. The mean (±SD) Likert score for each OAR was as follows: bladder 4.77 (±0.58), small bowel 3.96 (±0.91), sigmoid colon 3.92 (±0.81), rectum 4.6 (±0.71), and urethra 4.27 (±0.78). No ACs required major edits. All OARs had a mean Dice similarity coefficient > 0.86, mean surface distance < 0.48 mm, 95% Hausdorff distance < 3.2 mm, and Hausdorff distance < 10.32 mm between original and edited contours. There was no significant difference in dose-volume histogram metrics (D2.0 cc/D0.1 cc) between original and edited contours (P values > .05). The average time to plan approval in the AC cohort was 19% less than in the MC cohort (AC vs MC, 117.0 ± 18.0 minutes vs 144.9 ± 64.5 minutes, P = .045).
CONCLUSIONS: Automated contouring is useful and accurate in clinical practice. Auto-contouring OARs streamlines radiation treatment workflows and decreases time required to design and approve gynecologic brachytherapy plans.
PMID:38435965 | PMC:PMC10906166 | DOI:10.1016/j.adro.2023.101417
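The geometric agreement metrics reported above can be computed from binary masks as sketched below (an illustrative implementation, not the clinical tool); the example uses 2D masks and SciPy's directed Hausdorff distance.

```python
# Hedged sketch of contour-comparison metrics: Dice similarity coefficient and
# Hausdorff distance between an automated and an edited binary mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    pa, pb = np.argwhere(a), np.argwhere(b)          # pixel coordinates of each mask
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True   # automated contour
edit = np.zeros((64, 64), dtype=bool); edit[22:42, 20:40] = True   # edited contour
print(f"DSC={dice(auto, edit):.3f}, HD={hausdorff(auto, edit):.2f} pixels")
```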
Toward image-based personalization of glioblastoma therapy: A clinical and biological validation study of a novel, deep learning-driven tumor growth model
Neurooncol Adv. 2023 Dec 27;6(1):vdad171. doi: 10.1093/noajnl/vdad171. eCollection 2024 Jan-Dec.
ABSTRACT
BACKGROUND: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between the experimental state and clinical implementation.
METHODS: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and infiltration (Dw) as well as proliferation (ρ) parameters stemming from a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans.
RESULTS: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without a significant increase in radiation volume when utilizing model-derived target volumes instead of standard radiation plans.
CONCLUSIONS: Identifying a significant correlation between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
PMID:38435962 | PMC:PMC10907005 | DOI:10.1093/noajnl/vdad171
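The growth model referenced above is a Fisher-Kolmogorov reaction-diffusion equation with an infiltration coefficient D and a proliferation rate ρ. The sketch below integrates a 1D version with an explicit finite-difference scheme, using arbitrary parameter values, purely to illustrate how the two parameters shape the simulated tumor cell density profile.

```python
# Illustrative 1-D Fisher-Kolmogorov simulation: dc/dt = D * d2c/dx2 + rho * c * (1 - c).
# Parameter values are arbitrary and the scheme is deliberately simple.
import numpy as np

def fisher_kolmogorov_1d(D=0.1, rho=0.05, L=100.0, nx=200, T=100.0, dt=0.05):
    dx = L / nx
    c = np.zeros(nx)
    c[nx // 2] = 1.0                          # seed the tumor at the domain center
    for _ in range(int(T / dt)):
        lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (D * lap + rho * c * (1.0 - c))
    return np.clip(c, 0.0, 1.0)

density = fisher_kolmogorov_1d()
print("cells above 1% density threshold:", (density > 0.01).sum())
```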
AI-organoid integrated systems for biomedical studies and applications
Bioeng Transl Med. 2024 Jan 20;9(2):e10641. doi: 10.1002/btm2.10641. eCollection 2024 Mar.
ABSTRACT
In this review, we explore the growing role of artificial intelligence (AI) in advancing the biomedical applications of human pluripotent stem cell (hPSC)-derived organoids. Stem cell-derived organoids, miniature organ replicas, have become essential tools for disease modeling, drug discovery, and regenerative medicine. However, analyzing the vast and intricate datasets generated from these organoids can be inefficient and error-prone. AI techniques offer a promising solution to efficiently extract insights and make predictions from the diverse data types generated by microscopy imaging, transcriptomics, metabolomics, and proteomics. This review offers a brief overview of organoid characterization and fundamental concepts in AI while focusing on a comprehensive exploration of AI applications in organoid-based disease modeling and drug evaluation. It provides insights into the future possibilities of AI in enhancing the quality control of organoid fabrication, label-free organoid recognition, and three-dimensional image reconstruction of complex organoid structures. This review also presents the challenges and potential solutions in AI-organoid integration, focusing on the establishment of reliable AI model decision-making processes and the standardization of organoid research.
PMID:38435826 | PMC:PMC10905559 | DOI:10.1002/btm2.10641
An AI-assisted integrated, scalable, single-cell phenomic-transcriptomic platform to elucidate intratumor heterogeneity against immune response
Bioeng Transl Med. 2024 Jan 2;9(2):e10628. doi: 10.1002/btm2.10628. eCollection 2024 Mar.
ABSTRACT
We present a novel framework combining single-cell phenotypic data with single-cell transcriptomic analysis to identify factors underpinning heterogeneity in antitumor immune response. We developed a pairwise, tumor-immune discretized interaction assay between natural killer (NK-92MI) cells and patient-derived head and neck squamous cell carcinoma (HNSCC) cell lines on a microfluidic cell-trapping platform. Furthermore, we generated a deep-learning computer vision algorithm capable of automating the acquisition and analysis of a large live-cell imaging data set (>1 million) of paired tumor-immune interactions spanning a time course of 24 h across multiple HNSCC lines (n = 10). Finally, we combined the response data measured by Kaplan-Meier survival analysis against NK-mediated killing with downstream single-cell transcriptomic analysis to interrogate molecular signatures associated with NK-effector response. As proof of concept for the proposed framework, we efficiently identified MHC class I-driven cytotoxic resistance as a key mechanism for immune evasion in nonresponders, while enhanced expression of cell adhesion molecules was found to be correlated with sensitivity to NK-mediated cytotoxicity. We conclude that this integrated, data-driven phenotypic approach holds tremendous promise in advancing the rapid identification of new mechanisms and therapeutic targets related to immune evasion and response.
PMID:38435825 | PMC:PMC10905538 | DOI:10.1002/btm2.10628
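As a hedged illustration of the Kaplan-Meier analysis step (assumed data and variable names, not the authors' pipeline), the snippet below compares tumor-cell survival against NK-mediated killing for two hypothetical cell lines using the lifelines package.

```python
# Sketch of Kaplan-Meier analysis of tumor-cell survival under NK-mediated killing,
# with synthetic death times censored at a 24 h imaging window.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_resp = rng.exponential(6.0, 200)        # hours to tumor-cell death, "responder" line
t_nonresp = rng.exponential(18.0, 200)    # "nonresponder" line (stand-in values)

d_r, e_r = np.clip(t_resp, None, 24.0), t_resp < 24.0        # censor at 24 h
d_n, e_n = np.clip(t_nonresp, None, 24.0), t_nonresp < 24.0

km = KaplanMeierFitter().fit(d_r, event_observed=e_r, label="responder")
print("median tumor-cell survival (h):", km.median_survival_time_)
print("log-rank p:", logrank_test(d_r, d_n, event_observed_A=e_r,
                                  event_observed_B=e_n).p_value)
```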
Unleashing the potential of fNIRS with machine learning: classification of fine anatomical movements to empower future brain-computer interface
Front Hum Neurosci. 2024 Feb 16;18:1354143. doi: 10.3389/fnhum.2024.1354143. eCollection 2024.
ABSTRACT
In this study, we explore the potential of using functional near-infrared spectroscopy (fNIRS) signals in conjunction with modern machine-learning techniques to classify specific anatomical movements, with the aim of increasing the number of control commands for possible fNIRS-based brain-computer interface (BCI) applications. The study focuses on individual finger-tapping, a task well known from fNIRS and fMRI studies but usually limited to left/right discrimination or a few fingers. Twenty-four right-handed participants performed the individual finger-tapping task. Data were recorded using sixteen sources and detectors placed over the motor cortex according to the 10-10 international system. The event-averaged oxygenated (ΔHbO) and deoxygenated (ΔHbR) hemoglobin data were used as features to assess the performance of diverse machine learning (ML) models in a challenging multi-class classification setting. These methods include LDA, QDA, MNLR, XGBoost, and RF. A new deep learning (DL) model named "Hemo-Net" was proposed, consisting of multiple parallel convolution layers with different filters for feature extraction. This paper aims to explore the efficacy of using fNIRS together with ML/DL methods in a multi-class classification task. Complex models such as RF, XGBoost, and Hemo-Net produced relatively higher test set accuracy than LDA, MNLR, and QDA. Hemo-Net showed superior performance, achieving the highest test set accuracy of 76%; however, the aim of this work is not to maximize model accuracy but to explore whether fNIRS carries the neural signatures needed for modern ML/DL methods to succeed in multi-class classification, which could lead to applications such as brain-computer interfaces. Fine anatomical movements, such as individual finger movements, are difficult to classify from fNIRS data. Traditional ML models like MNLR and LDA show inferior performance compared to the ensemble-based methods RF and XGBoost. The DL-based Hemo-Net outperforms all methods evaluated in this study and demonstrates a promising future for fNIRS-based BCI applications.
PMID:38435744 | PMC:PMC10904609 | DOI:10.3389/fnhum.2024.1354143
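A minimal sketch of the classical-model comparison described above, assuming event-averaged ΔHbO/ΔHbR features have already been arranged into a trial-by-feature matrix; the data here are random stand-ins, and XGBoost and Hemo-Net are omitted for brevity.

```python
# Hedged comparison of classical multi-class classifiers on synthetic fNIRS features.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 32))            # 240 trials x 32 channel features (ΔHbO/ΔHbR)
y = rng.integers(0, 5, size=240)          # 5 finger-tapping classes

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "MNLR": LogisticRegression(max_iter=1000),   # multinomial logistic regression
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (SD {scores.std():.3f})")
```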
Machine learning based prediction of image quality in prostate MRI using rapid localizer images
J Med Imaging (Bellingham). 2024 Mar;11(2):026001. doi: 10.1117/1.JMI.11.2.026001. Epub 2024 Mar 1.
ABSTRACT
PURPOSE: Diagnostic performance of prostate MRI depends on high-quality imaging. Prostate MRI quality is inversely proportional to the amount of rectal gas and distention. Early detection of poor-quality MRI may enable intervention to remove gas or exam rescheduling, saving time. We developed machine learning based prediction of the quality of yet-to-be-acquired MRI images based solely on the MRI rapid localizer sequence, which can be acquired in a few seconds.
APPROACH: The dataset consists of 213 prostate sagittal T2-weighted (T2W) MRI localizer images (147 for training and 64 for testing) with rectal content manually labeled by an expert radiologist. Each MRI localizer contains seven two-dimensional (2D) slices of the patient, accompanied by manual segmentations of the rectum for each slice. Cascaded and end-to-end deep learning models were used to predict the quality of yet-to-be-acquired T2W, DWI, and apparent diffusion coefficient (ADC) MRI images. Predictions were compared to quality scores determined by the experts using the area under the receiver operating characteristic curve and the intra-class correlation coefficient.
RESULTS: In the test set of 64 patients, optimal versus suboptimal exams occurred in 95.3% (61/64) versus 4.7% (3/64) for T2W, 90.6% (58/64) versus 9.4% (6/64) for DWI, and 89.1% (57/64) versus 10.9% (7/64) for ADC. The best performing segmentation model was 2D U-Net with ResNet-34 encoder and ImageNet weights. The best performing classifier was the radiomics based classifier.
CONCLUSIONS: A radiomics based classifier applied to localizer images achieves accurate diagnosis of subsequent image quality for T2W, DWI, and ADC prostate MRI sequences.
PMID:38435711 | PMC:PMC10905647 | DOI:10.1117/1.JMI.11.2.026001
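The best-performing segmentation model above, a 2D U-Net with a ResNet-34 encoder and ImageNet weights, can be instantiated for example with the segmentation_models_pytorch package, as in the hedged sketch below (an assumed stand-in for the authors' implementation, with made-up input dimensions).

```python
# Sketch of the rectum-segmentation step: a 2D U-Net with a ResNet-34 encoder and
# ImageNet weights, applied to a single-channel localizer slice.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=1, classes=1)          # single-channel T2W localizer slice
localizer_slice = torch.randn(1, 1, 256, 256)       # stand-in input
with torch.no_grad():
    rectum_prob = torch.sigmoid(model(localizer_slice))
print(rectum_prob.shape)                            # (1, 1, 256, 256) probability map
```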
A Deep Learning Model Combining Multimodal Factors to Predict the Overall Survival of Transarterial Chemoembolization
J Hepatocell Carcinoma. 2024 Feb 26;11:385-397. doi: 10.2147/JHC.S443660. eCollection 2024.
ABSTRACT
BACKGROUND: To develop and validate an overall survival (OS) prediction model for transarterial chemoembolization (TACE).
METHODS: In this retrospective study, 301 patients with hepatocellular carcinoma (HCC) who received TACE from 2012 to 2015 were collected. The residual network was used to extract prognostic information from CT images, which was then combined with the clinical factors adjusted by COX regression to predict survival using a modified deep learning model (DLOPCombin). The DLOPCombin model was compared with the residual network model (DLOPCTR), multiple COX regression model (DLOPCox), Radiomic model (Radiomic), and clinical model.
RESULTS: In the validation cohort, DLOPCombin showed the highest TD AUC of all models, outperforming the Radiomic (TD AUC: 0.96 vs 0.63) and clinical (TD AUC: 0.96 vs 0.62) models. DLOPCombin showed a significant difference in C index compared with the DLOPCTR and DLOPCox models (P < 0.05). Moreover, DLOPCombin showed good calibration and overall net benefit. Patients with a DLOPCombin model score ≤ 0.902 had better OS (33 months vs 15.5 months, P < 0.0001).
CONCLUSION: The deep learning model can effectively predict the overall survival of patients undergoing TACE.
PMID:38435683 | PMC:PMC10906280 | DOI:10.2147/JHC.S443660
FV-EffResNet: an efficient lightweight convolutional neural network for finger vein recognition
PeerJ Comput Sci. 2024 Feb 15;10:e1837. doi: 10.7717/peerj-cs.1837. eCollection 2024.
ABSTRACT
Several deep neural networks have been introduced for finger vein recognition over time, and these networks have demonstrated high levels of performance. However, most current state-of-the-art deep learning systems use networks with increasing layers and parameters, resulting in greater computational costs and complexity. This can make them impractical for real-time implementation, particularly on embedded hardware. To address these challenges, this article concentrates on developing a lightweight convolutional neural network (CNN) named FV-EffResNet for finger vein recognition, aiming to find a balance between network size, speed, and accuracy. The key improvement lies in the utilization of the proposed novel convolution block named the Efficient Residual (EffRes) block, crafted to facilitate efficient feature extraction while minimizing the parameter count. The block decomposes the convolution process, employing pointwise and depthwise convolutions with a specific rectangular dimension realized in two layers (n × 1) and (1 × m) for enhanced handling of finger vein data. The approach achieves computational efficiency through a combination of squeeze units, depthwise convolution, and a pooling strategy. The hidden layers of the network use the Swish activation function, which has been shown to enhance performance compared to conventional functions like ReLU or Leaky ReLU. Furthermore, the article adopts cyclical learning rate techniques to expedite the training process of the proposed network. The effectiveness of the proposed pipeline is demonstrated through comprehensive experiments conducted on four benchmark databases, namely FV-USM, SDUMLA, MMCBNU_600, and NUPT-FV. The experimental results reveal that the EffRes block has a remarkable impact on finger vein recognition. The proposed FV-EffResNet achieves state-of-the-art performance in both identification and verification settings, leveraging the benefits of being lightweight and incurring low computational costs.
PMID:38435623 | PMC:PMC10909234 | DOI:10.7717/peerj-cs.1837
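The description above (pointwise squeeze, depthwise (n × 1) and (1 × m) convolutions, Swish activation, residual connection) can be approximated by a block like the one sketched below; this is a loose PyTorch interpretation under stated assumptions, not the authors' exact EffRes block.

```python
# Hedged sketch of a depthwise-separable residual block with rectangular kernels
# and Swish (SiLU) activation, loosely following the EffRes description.
import torch
import torch.nn as nn

class EffResLikeBlock(nn.Module):
    def __init__(self, channels: int, n: int = 3, m: int = 7):
        super().__init__()
        c = channels // 2
        self.squeeze = nn.Conv2d(channels, c, kernel_size=1)               # pointwise squeeze
        self.dw_vertical = nn.Conv2d(c, c, kernel_size=(n, 1),
                                     padding=(n // 2, 0), groups=c)        # depthwise n x 1
        self.dw_horizontal = nn.Conv2d(c, c, kernel_size=(1, m),
                                       padding=(0, m // 2), groups=c)      # depthwise 1 x m
        self.expand = nn.Conv2d(c, channels, kernel_size=1)                # pointwise expand
        self.act = nn.SiLU()                                               # Swish activation

    def forward(self, x):
        y = self.act(self.squeeze(x))
        y = self.act(self.dw_vertical(y))
        y = self.act(self.dw_horizontal(y))
        return self.act(self.expand(y) + x)                                # residual connection

block = EffResLikeBlock(channels=32)
print(block(torch.randn(2, 32, 64, 128)).shape)     # torch.Size([2, 32, 64, 128])
```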
Improving image quality of triple-low-protocol renal artery CT angiography with deep-learning image reconstruction: a comparative study with standard-dose single-energy and dual-energy CT with adaptive statistical iterative reconstruction
Clin Radiol. 2024 Jan 23:S0009-9260(24)00028-X. doi: 10.1016/j.crad.2024.01.008. Online ahead of print.
ABSTRACT
AIM: To investigate the improvement in image quality of triple-low-protocol (low radiation, low contrast medium dose, low injection speed) renal artery computed tomography (CT) angiography (RACTA) using deep-learning image reconstruction (DLIR), in comparison with standard-dose single- and dual-energy CT (DECT) using adaptive statistical iterative reconstruction-Veo (ASIR-V) algorithm.
MATERIALS AND METHODS: Ninety patients for RACTA were divided into different groups: standard-dose single-energy CT (S group) using ASIR-V at 60% strength (60%ASIR-V), DECT (DE group) with 60%ASIR-V including virtual monochromatic images at 40 keV (DE40 group) and 70 keV (DE70 group), and the triple-low protocol single-energy CT (L group) with DLIR at high level (DLIR-H). The effective dose (ED), contrast medium dose, injection speed, standard deviation (SD), signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of abdominal aorta (AA), and left/right renal artery (LRA, RRA), and subjective scores were compared among the different groups.
RESULTS: The L group significantly reduced ED by 37.6% and 31.2%, contrast medium dose by 33.9% and 30.5%, and injection speed by 30% and 30%, respectively, compared to the S and DE groups. The L group had the lowest SD values for all arteries compared to the other groups (p<0.001). The SNR of the RRA and LRA in the L group and the CNR of all arteries in the DE40 group were the highest among the groups (p<0.05). The L group had the best comprehensive score, with good consistency (p<0.05).
CONCLUSIONS: The triple-low protocol RACTA with DLIR-H significantly reduces the ED, contrast medium doses, and injection speed, while providing good comprehensive image quality.
PMID:38433041 | DOI:10.1016/j.crad.2024.01.008
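The image-quality metrics above are typically computed from region-of-interest (ROI) statistics. The sketch below uses common definitions (SNR as ROI mean over ROI SD, CNR as the attenuation difference over background SD) on synthetic HU values; these definitions and numbers are assumptions, not necessarily the study's exact formulas.

```python
# Illustrative SNR/CNR computation from ROI statistics (assumed definitions).
import numpy as np

def snr(roi: np.ndarray) -> float:
    return roi.mean() / roi.std()

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    return (roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(0)
aorta_roi = rng.normal(400.0, 15.0, 500)        # synthetic HU values inside the abdominal aorta
muscle_roi = rng.normal(50.0, 12.0, 500)        # adjacent background tissue
print(f"SNR={snr(aorta_roi):.1f}, CNR={cnr(aorta_roi, muscle_roi):.1f}")
```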
High cell density cultivation of Corynebacterium glutamicum by deep learning-assisted medium design and the subsequent feeding strategy
J Biosci Bioeng. 2024 Mar 2:S1389-1723(24)00038-0. doi: 10.1016/j.jbiosc.2024.01.018. Online ahead of print.
ABSTRACT
To improve the cell productivity of Corynebacterium glutamicum, its initial specific growth rate was increased by optimizing the medium using deep neural network (DNN)-assisted design with Bayesian optimization (BO) and a genetic algorithm (GA). To obtain training data for the DNN, an experimental design with an orthogonal array was set up using a chemically defined basal medium (GC XII). Based on the cultivation results for the training data, specific growth rates were observed between 0.04 and 0.3/h. The resulting DNN model estimated the test data with high accuracy (test R2 ≥ 0.98). According to the validation cultivation, specific growth rates in the optimal media components estimated by DNN-BO and DNN-GA increased from 0.242 to 0.355/h. Using the optimal medium (UCB_3), the specific growth rate, along with other parameters, was evaluated in batch culture. The specific growth rate reached 0.371/h from 3 to 12 h, and the dry cell weight was 28.0 g/L at 22.5 h. From the cultivation, the cell yields against glucose, ammonium ion, phosphate ion, sulfate ion, potassium ion, and magnesium ion were calculated. The cell yield calculation was used to estimate the required amounts of each component, and magnesium was found to limit cell growth. However, in the follow-up fed-batch cultivation, glucose and magnesium addition was required to achieve the high initial specific growth rate, while appropriate feeding of glucose and magnesium during cultivation maintained the high specific growth rate and gave a cell yield of 80 g/L_ini.
PMID:38433040 | DOI:10.1016/j.jbiosc.2024.01.018
Developing a GNN-based AI model to predict mitochondrial toxicity using the bagging method
J Toxicol Sci. 2024;49(3):117-126. doi: 10.2131/jts.49.117.
ABSTRACT
Mitochondrial toxicity has been implicated in the development of various toxicities, including hepatotoxicity. Therefore, mitochondrial toxicity has become a major screening factor in the early discovery phase of drug development. Several models have been developed to predict mitochondrial toxicity based on chemical structures. However, they only provide a binary classification of positive or negative results and do not indicate the substructures that contribute to a positive decision. Therefore, we developed an artificial intelligence (AI) model to predict mitochondrial toxicity and visualize structural alerts. To construct the model, we used the open-source software library kMoL, which employs a graph neural network approach that allows learning from chemical structure data. We also utilized the integrated gradient method, which enables the visualization of substructures that contribute to positive results. The dataset used to construct the AI model exhibited a significant imbalance, with far more negative than positive data. To address this, we employed the bagging method, which resulted in a model with high predictive performance, as evidenced by an F1 score of 0.839. This model can also be used to visualize substructures that contribute to mitochondrial toxicity using the integrated gradient method. Our AI model predicts mitochondrial toxicity based on chemical structures and may contribute to screening for mitochondrial toxicity in the early stages of drug discovery.
PMID:38432954 | DOI:10.2131/jts.49.117
DTox: A deep neural network-based in visio lens for large scale toxicogenomics data
J Toxicol Sci. 2024;49(3):105-115. doi: 10.2131/jts.49.105.
ABSTRACT
With the advancement of large-scale omics technologies, particularly the transcriptomics data sets on drug and treatment response available in public repositories, toxicogenomics has emerged as a key field in safety pharmacology and chemical risk assessment. Traditional statistics-based bioinformatics analysis poses challenges in its application across multidimensional toxicogenomic data, including administration time, dosage, and gene expression levels. Motivated by the visual inspection workflow that field experts use to screen significant genes and derive meaningful insights, together with the ability of deep neural architectures to learn from image signals, we developed DTox, a deep neural network-based in visio approach. Using the Percellome toxicogenomics database, DTox did not use the numerical gene expression values of the transcripts (microarray gene probes) for dose-time combinations; instead, it learned image representations of 3D surface plots of the distinct time and dosage data points to train the classifier on the experts' labels of gene probe significance. DTox outperformed statistical threshold-based bioinformatics and machine learning approaches based on numerical expression values. This result shows the ability of image-driven neural networks to overcome the limitations of classical numeric value-based approaches. Further, by augmenting the model with explainability modules, our study showed the potential to reveal the visual analysis process of human experts in toxicogenomics through the model weights. While the current work demonstrates the application of the DTox model in toxicogenomic studies, it can be further generalized as an in visio approach for multidimensional numeric data, with applications in various fields of medical data science.
PMID:38432953 | DOI:10.2131/jts.49.105
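To make the in visio idea concrete, the hedged sketch below renders a dose by time expression surface as an image with Matplotlib, the kind of representation that could then be fed to an image classifier; the grid sizes and values are invented, and this is not the DTox pipeline.

```python
# Illustrative rendering of a dose x time expression surface as an image file.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

doses = np.arange(4)                    # e.g., control, low, mid, high
times = np.arange(4)                    # e.g., four sampling time points
D, T = np.meshgrid(doses, times)
expression = np.random.default_rng(0).random((4, 4)) * 10   # stand-in probe signal

fig = plt.figure(figsize=(3, 3))
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(D, T, expression, cmap="viridis")
ax.set_xlabel("dose"); ax.set_ylabel("time"); ax.set_zlabel("expression")
fig.savefig("probe_surface.png", dpi=100)   # image that an image classifier could consume
plt.close(fig)
```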
Characterising the distribution of mangroves along the southern coast of Vietnam using multi-spectral indices and a deep learning model
Sci Total Environ. 2024 Mar 1:171367. doi: 10.1016/j.scitotenv.2024.171367. Online ahead of print.
ABSTRACT
Mangroves are an ecologically and economically valuable ecosystem that provides a range of ecological services, including habitat for a diverse range of plant and animal species, protection of coastlines from erosion and storms, carbon sequestration, and improvement of water quality. Despite their significant ecological role, large-scale losses have occurred in many areas, including Vietnam, although restoration efforts have been underway. Understanding the scale of the loss and the efficacy of restoration requires high-resolution temporal monitoring of mangrove cover at large scales. We produced a time series of 10-m-resolution mangrove cover maps using the Multispectral Instrument on the Sentinel-2 satellites and with this tool measured the changes in mangrove distribution on the Vietnamese Southern Coast (VSC). We extracted the annual mangrove cover from 2016 to 2023 using a deep learning model with a U-Net architecture based on 17 spectral indices. Additionally, misclassifications by the model were compared with those of global products; the U-Net architecture based on the highest-performing spectral indices demonstrated superior performance compared with experiments using the multispectral bands of Sentinel-2 and time series of Sentinel-1 data. The performance metrics, including overall accuracy, precision, recall, and F1-score, were above 90 % for all years. Water indices were found to be the most important variables for mangrove extraction. Our study revealed some misclassifications by global products such as World Cover and Global Mangrove Watch and highlighted the significance of local analysis. While we observed a loss of 34,778 ha (42.2 %) of mangrove area in the region, 47,688 ha (57.8 %) of new mangrove area appeared, resulting in a net gain of 12,910 ha (15.65 %) over the eight-year period of the study. The majority of new mangrove areas were concentrated in the Ca Mau peninsula and within estuaries undergoing recovery programs and natural recovery processes. Mangrove loss occurred in regions where industrial development, wind farm projects, land reclamation, and shrimp pond expansion are occurring. Our study provides a theoretical framework as well as up-to-date data for mapping and monitoring mangrove cover change that can be readily applied at other sites.
PMID:38432378 | DOI:10.1016/j.scitotenv.2024.171367
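Two of the water and vegetation indices commonly used in such mangrove-mapping work can be computed from Sentinel-2 reflectance bands as sketched below; the band layout and the choice of NDVI/NDWI are illustrative assumptions, not the study's full set of 17 indices.

```python
# Minimal sketch of spectral-index computation from Sentinel-2 reflectance tiles.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-6)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (green - nir) / (green + nir + 1e-6)

# Stand-in 10 m reflectance tiles: B8 (NIR), B4 (red), B3 (green)
rng = np.random.default_rng(0)
b8, b4, b3 = (rng.random((512, 512)) for _ in range(3))
stack = np.stack([ndvi(b8, b4), ndwi(b3, b8)], axis=0)   # index stack fed to a U-Net
print(stack.shape)
```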
Deep learning for automatic gross tumor volumes contouring in esophageal cancer based on contrast-enhanced CT images: a multi-institutional study
Int J Radiat Oncol Biol Phys. 2024 Mar 1:S0360-3016(24)00350-X. doi: 10.1016/j.ijrobp.2024.02.035. Online ahead of print.
ABSTRACT
PURPOSE: To develop and externally validate an automatic Artificial Intelligence (AI) tool for delineating gross tumor volume (GTV) in esophageal squamous cell carcinoma (ESCC) patients, which can assist in the neo-adjuvant or radical radiation therapy treatment planning.
METHODS AND MATERIALS: In this multi-institutional study, contrast-enhanced CT images from 580 eligible ESCC patients were retrospectively collected. The GTV contours delineated by two experts via consensus were used as ground truth. A three-dimensional deep learning model was developed for GTV contouring in the training cohort and internally and externally validated in three validation cohorts. The AI tool was compared against twelve board-certified experts in 25 patients randomly selected from the external validation cohort to evaluate its assistance in improving contouring performance and reducing variation. Contouring performance was measured using dice similarity coefficient (DSC) and average surface distance (ASD). Additionally, our previously established radiomics model for predicting pathological complete response (pCR) was utilized to compare AI-generated and ground truth contours, in order to assess the potential of the AI contouring tool in radiomics analysis.
RESULTS: The AI tool demonstrated good GTV contouring performance in the multi-center validation cohorts, with median DSC values of 0.865, 0.876, and 0.866, and median ASD values of 0.939 mm, 0.789 mm, and 0.875 mm, respectively. Furthermore, the AI tool significantly improved contouring performance for half of the twelve board-certified experts (DSC values, 0.794-0.835 vs 0.856-0.881, P = 0.003-0.048), reduced the intra- and inter-observer variations by 37.4% and 55.2%, respectively, and saved contouring time by 77.6%. In the radiomics analysis, 88.7% of radiomic features from ground truth and AI-generated contours demonstrated stable reproducibility, and similar pCR prediction performance for these contours (P = 0.430) was observed.
CONCLUSIONS: Our AI contouring tool can improve GTV contouring performance and facilitate radiomics analysis in ESCC patients, indicating its potential for GTV contouring during radiation therapy treatment planning and for radiomics studies.
PMID:38432286 | DOI:10.1016/j.ijrobp.2024.02.035