Deep learning
Automated Nuclear Morphometry: A Deep Learning Approach for Prognostication in Canine Pulmonary Carcinoma to Enhance Reproducibility
Vet Sci. 2024 Jun 17;11(6):278. doi: 10.3390/vetsci11060278.
ABSTRACT
The integration of deep learning-based tools into diagnostic workflows is increasingly prevalent due to their efficiency and reproducibility in various settings. We investigated the utility of automated nuclear morphometry for assessing nuclear pleomorphism (NP), a criterion of malignancy in the current grading system for canine pulmonary carcinoma (cPC), and its prognostic implications. We developed a deep learning-based algorithm for evaluating NP (variation in nuclear size, i.e., anisokaryosis, and/or shape) using a segmentation model. Its performance was evaluated on 46 cPC cases with comprehensive follow-up data regarding its accuracy in nuclear segmentation and its prognostic ability. Its assessment of NP was compared to manual morphometry and established prognostic tests (pathologists' NP estimates (n = 11), mitotic count, histological grading, and TNM stage). The standard deviation (SD) of the nuclear area, indicative of anisokaryosis, exhibited good discriminatory ability for tumor-specific survival, with an area under the curve (AUC) of 0.80 and a hazard ratio (HR) of 3.38. The algorithm achieved values comparable to manual morphometry. In contrast, the pathologists' estimates of anisokaryosis resulted in HR values ranging from 0.86 to 34.8, with only slight inter-observer reproducibility (κ = 0.204). The other conventional tests had no significant prognostic value in our study cohort. Fully automated morphometry promises a time-efficient and reproducible assessment of NP with high prognostic value. Further refinement of the algorithm, particularly to address undersegmentation, and application to a larger study population are required.
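The prognostic measurement at the center of this abstract, the standard deviation of the nuclear area as an anisokaryosis metric, can be sketched in a few lines. The per-nucleus area values below are hypothetical illustrations, not data from the study:

```python
import statistics

def anisokaryosis_score(nuclear_areas_um2):
    """SD of per-nucleus areas (µm²): larger SD means more anisokaryosis."""
    return statistics.stdev(nuclear_areas_um2)

# hypothetical area measurements produced by a nuclear segmentation model
uniform_case = [32.1, 33.5, 31.8, 32.9, 33.0, 32.4]
pleomorphic_case = [20.4, 55.2, 31.7, 78.9, 12.3, 44.0]

assert anisokaryosis_score(pleomorphic_case) > anisokaryosis_score(uniform_case)
```

In the study, this score (computed over all segmented nuclei in a case) is what was thresholded to discriminate tumor-specific survival.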
PMID:38922025 | DOI:10.3390/vetsci11060278
Accurate prediction of CDR-H3 loop structures of antibodies with deep learning
Elife. 2024 Jun 26;12:RP91512. doi: 10.7554/eLife.91512.
ABSTRACT
Accurate prediction of the structurally diverse complementarity determining region heavy chain 3 (CDR-H3) loop structure remains a primary and long-standing challenge for antibody modeling. Here, we present the H3-OPT toolkit for predicting the 3D structures of monoclonal antibodies and nanobodies. H3-OPT combines the strengths of AlphaFold2 with a pre-trained protein language model and achieves a 2.24 Å average Cα RMSD between predicted and experimentally determined CDR-H3 loops, thus outperforming other current computational methods on our non-redundant high-quality dataset. The model was validated by experimentally solving three structures of anti-VEGF nanobodies predicted by H3-OPT. We examined the potential applications of H3-OPT by analyzing antibody surface properties and antibody-antigen interactions. This structural prediction tool can be used to optimize antibody-antigen binding and engineer therapeutic antibodies with biophysical properties tailored to specialized drug administration routes.
PMID:38921957 | DOI:10.7554/eLife.91512
Computed Tomography Effective Dose and Image Quality in Deep Learning Image Reconstruction in Intensive Care Patients Compared to Iterative Algorithms
Tomography. 2024 Jun 7;10(6):912-921. doi: 10.3390/tomography10060069.
ABSTRACT
Deep learning image reconstruction (DLIR) algorithms employ convolutional neural networks (CNNs) for CT image reconstruction to produce CT images with a very low noise level, even at a low radiation dose. The aim of this study was to assess whether the DLIR algorithm reduces the CT effective dose (ED) and improves CT image quality in comparison with filtered back projection (FBP) and iterative reconstruction (IR) algorithms in intensive care unit (ICU) patients. We identified all consecutive patients referred to the ICU of a single hospital who underwent at least two consecutive chest and/or abdominal contrast-enhanced CT scans within a time period of 30 days using DLIR and subsequently the FBP or IR algorithm (Advanced Modeled Iterative Reconstruction [ADMIRE] model-based algorithm or Adaptive Iterative Dose Reduction 3D [AIDR 3D] hybrid algorithm) for CT image reconstruction. The radiation ED, noise level, and signal-to-noise ratio (SNR) were compared between the different CT scanners. The non-parametric Wilcoxon test was used for statistical comparison. Statistical significance was set at p < 0.05. A total of 83 patients (mean age, 59 ± 15 years [standard deviation]; 56 men) were included. DLIR vs. FBP reduced the ED (18.45 ± 13.16 mSv vs. 22.06 ± 9.55 mSv, p < 0.05), while DLIR vs. FBP and vs. ADMIRE and AIDR 3D IR algorithms reduced image noise (8.45 ± 3.24 vs. 14.85 ± 2.73 vs. 14.77 ± 32.77 and 11.17 ± 32.77, p < 0.05) and increased the SNR (11.53 ± 9.28 vs. 3.99 ± 1.23 vs. 5.84 ± 2.74 and 3.58 ± 2.74, p < 0.05). CT scanners employing DLIR improved the SNR compared to CT scanners using FBP or IR algorithms in ICU patients while also maintaining a reduced ED.
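The image-quality metrics compared in this study follow the usual definitions: noise is the standard deviation of attenuation values in a homogeneous region of interest, and SNR is the mean attenuation divided by that noise. A generic sketch (not the study's exact measurement protocol, and with hypothetical HU values):

```python
import numpy as np

def noise_and_snr(roi_hu):
    """Noise = sample SD of HU values in a homogeneous ROI; SNR = mean / noise."""
    roi = np.asarray(roi_hu, dtype=float)
    noise = roi.std(ddof=1)
    return noise, roi.mean() / noise

# hypothetical HU samples drawn from a homogeneous liver ROI
noise, snr = noise_and_snr([62.0, 58.0, 61.0, 59.0, 60.0])
```

Lower-noise reconstructions (as DLIR produces) directly raise SNR under this definition, which is the effect the study quantifies.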
PMID:38921946 | DOI:10.3390/tomography10060069
Breast Cancer Diagnosis Method Based on Cross-Mammogram Four-View Interactive Learning
Tomography. 2024 Jun 1;10(6):848-868. doi: 10.3390/tomography10060065.
ABSTRACT
Computer-aided diagnosis systems play a crucial role in the diagnosis and early detection of breast cancer. However, most current methods focus primarily on the dual-view analysis of a single breast, thereby neglecting the potentially valuable information between bilateral mammograms. In this paper, we propose a Four-View Correlation and Contrastive Joint Learning Network (FV-Net) for the classification of bilateral mammogram images. Specifically, FV-Net focuses on extracting and matching features across the four views of bilateral mammograms while maximizing both their similarities and dissimilarities. Through the Cross-Mammogram Dual-Pathway Attention Module, feature matching between bilateral mammogram views is achieved, capturing the consistency and complementary features across mammograms and effectively reducing feature misalignment. In the reconstituted feature maps derived from bilateral mammograms, the Bilateral-Mammogram Contrastive Joint Learning module performs associative contrastive learning on positive and negative sample pairs within each local region. This aims to maximize the correlation between similar local features and enhance the differentiation between dissimilar features across the bilateral mammogram representations. Our experimental results on a test set comprising 20% of the combined Mini-DDSM and Vindr-mamo datasets, as well as on the INbreast dataset, show that our model exhibits superior performance in breast cancer classification compared to competing methods.
PMID:38921942 | DOI:10.3390/tomography10060065
Artificial Intelligence in Sports Medicine: Reshaping Electrocardiogram Analysis for Athlete Safety-A Narrative Review
Sports (Basel). 2024 May 26;12(6):144. doi: 10.3390/sports12060144.
ABSTRACT
Artificial Intelligence (AI) is redefining electrocardiogram (ECG) analysis in pre-participation examination (PPE) of athletes, enhancing the detection and monitoring of cardiovascular health. Cardiovascular concerns, including sudden cardiac death, pose significant risks during sports activities. Traditional ECG, essential yet limited, often fails to distinguish between benign cardiac adaptations and serious conditions. This narrative review investigates the application of machine learning (ML) and deep learning (DL) in ECG interpretation, aiming to improve the detection of arrhythmias, channelopathies, and hypertrophic cardiomyopathies. A literature review over the past decade, sourcing from PubMed and Google Scholar, highlights the growing adoption of AI in sports medicine for its precision and predictive capabilities. AI algorithms excel at identifying complex cardiac patterns, potentially overlooked by traditional methods, and are increasingly integrated into wearable technologies for continuous monitoring. Overall, by offering a comprehensive overview of current innovations and outlining future advancements, this review supports sports medicine professionals in merging traditional screening methods with state-of-the-art AI technologies. This approach aims to enhance diagnostic accuracy and efficiency in athlete care, promoting early detection and more effective monitoring through AI-enhanced ECG analysis within athlete PPEs.
PMID:38921838 | DOI:10.3390/sports12060144
Video-Based Sign Language Recognition via ResNet and LSTM Network
J Imaging. 2024 Jun 20;10(6):149. doi: 10.3390/jimaging10060149.
ABSTRACT
Sign language recognition technology can help people with hearing impairments communicate with hearing people. With the rapid development of the field, deep learning now provides substantial technical support for sign language recognition. In sign language recognition tasks, traditional convolutional neural networks used to extract spatio-temporal features from sign language videos suffer from insufficient feature extraction, resulting in low recognition rates. Moreover, large video-based sign language datasets require significant computing resources for training while ensuring the generalization of the network, which poses a challenge for recognition. In this paper, we present a video-based sign language recognition method based on a Residual Network (ResNet) and Long Short-Term Memory (LSTM). As the number of network layers increases, ResNet can effectively mitigate the degradation problem and obtain better time-series features. We use the ResNet convolutional network as the backbone model: ResNet extracts the sign language features, and the learned feature space is then used as the input of the LSTM network, which uses gates to control unit states and update the output feature values of sequences, to obtain long-sequence features. The method effectively extracts the spatio-temporal features in sign language videos and improves the recognition rate of sign language actions. An extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed method, with an accuracy of 85.26%, an F1-score of 84.98%, and a precision of 87.77% on the Argentine Sign Language dataset (LSA64).
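The CNN-then-LSTM pipeline described here, per-frame spatial features from a backbone fed through a recurrent cell for temporal modeling, can be illustrated with a minimal NumPy LSTM step. The random "frame features" stand in for a trained ResNet's outputs, and the dimensions are arbitrary placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and a candidate, all computed
    from the current input x and the previous hidden state h."""
    z = W @ x + U @ h + b                  # stacked pre-activations, shape (4H,)
    i, f, o, g = np.split(z, 4)
    i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))  # sigmoid gates
    g = np.tanh(g)                         # candidate cell update
    c = f * c + i * g                      # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c

D_feat, H = 8, 4   # hypothetical backbone feature size and hidden size
W = rng.normal(size=(4 * H, D_feat))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)

# each row stands in for one frame's ResNet feature vector
frames = rng.normal(size=(10, D_feat))
h, c = np.zeros(H), np.zeros(H)
for x in frames:                           # roll the LSTM over the video frames
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarizes the whole sequence and would feed a classification head
```

The final hidden state plays the role of the "long-sequence features" the abstract describes, which a softmax layer would map to sign classes.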
PMID:38921626 | DOI:10.3390/jimaging10060149
Automatic Detection of Post-Operative Clips in Mammography Using a U-Net Convolutional Neural Network
J Imaging. 2024 Jun 19;10(6):147. doi: 10.3390/jimaging10060147.
ABSTRACT
BACKGROUND: After breast conserving surgery (BCS), surgical clips indicate the tumor bed and, thereby, the most probable area for tumor relapse. The aim of this study was to investigate whether a U-Net-based deep convolutional neural network (dCNN) may be used to detect surgical clips in follow-up mammograms after BCS.
METHODS: 884 mammograms and 517 tomosynthetic images depicting surgical clips and calcifications were manually segmented and classified. A U-Net-based segmentation network was trained with 922 images and validated with 394 images. An external test dataset consisting of 39 images was annotated by two radiologists with up to 7 years of experience in breast imaging. The network's performance was compared to that of human readers using accuracy and interrater agreement (Cohen's Kappa).
RESULTS: The overall classification accuracy on the validation set after 45 epochs ranged between 88.2% and 92.6%, indicating that the model's performance is comparable to the decisions of a human reader. In 17.4% of cases, calcifications were misclassified as post-operative clips. The interrater reliability of the model compared to the radiologists showed substantial agreement (κ_reader1 = 0.72, κ_reader2 = 0.78), while the two readers compared to each other reached a Cohen's kappa of 0.84, showing near-perfect agreement.
CONCLUSIONS: With this study, we show that surgical clips can be adequately identified by an AI technique. A potential application of the proposed technique is patient triage, as well as the automatic exclusion of post-operative cases from PGMI (Perfect, Good, Moderate, Inadequate) evaluation, thus improving the quality-management workflow.
PMID:38921624 | DOI:10.3390/jimaging10060147
U-Net Convolutional Neural Network for Mapping Natural Vegetation and Forest Types from Landsat Imagery in Southeastern Australia
J Imaging. 2024 Jun 13;10(6):143. doi: 10.3390/jimaging10060143.
ABSTRACT
Accurate and comparable annual mapping is critical to understanding changing vegetation distribution and informing land use planning and management. A U-Net convolutional neural network (CNN) model was used to map natural vegetation and forest types based on annual Landsat geomedian reflectance composite images for a 500 km × 500 km study area in southeastern Australia. The CNN was developed using 2018 imagery. Label data were a ten-class natural vegetation and forest classification (i.e., Acacia, Callitris, Casuarina, Eucalyptus, Grassland, Mangrove, Melaleuca, Plantation, Rainforest and Non-Forest) derived by combining current best-available regional-scale maps of Australian forest types, natural vegetation and land use. The best CNN, generated using six Landsat geomedian bands as input, produced better results than a pixel-based random forest algorithm, with a higher overall accuracy (OA) and weighted mean F1 score for all vegetation classes (93 vs. 87% in both cases) and a higher Kappa score (86 vs. 74%). The trained CNN was used to generate annual vegetation maps for 2000-2019 and evaluated for an independent test area of 100 km × 100 km using statistics describing accuracy relative to the label data and temporal stability. Seventy-six percent of pixels did not change over the 20 years (2000-2019), and year-on-year results were highly correlated (94-97% OA). The accuracy of the CNN model was further verified for the study area using 3456 independent vegetation survey plots where the species of interest had ≥ 50% crown cover. The CNN showed an 81% OA compared with the plot data. The model's accuracy was also higher than that of the label data (76%), which suggests that imperfect training data may not be a major obstacle to CNN-based mapping. Applying the CNN to other regions would help to test the spatial transferability of these techniques and whether they can support the automated production of accurate and comparable annual maps of natural vegetation and forest types required for national reporting.
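The overall accuracy and Kappa scores reported above are both derived from a class confusion matrix. A compact sketch (the example matrix is hypothetical, not the study's):

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix whose rows
    are reference labels and columns are predicted labels."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)                         # kappa corrects for chance

# hypothetical 2-class example: 40 + 45 pixels correct out of 100
oa, kappa = oa_and_kappa([[40, 10], [5, 45]])
```

Kappa is lower than OA whenever agreement could partly arise by chance, which is why the paper reports both (93% OA vs. 86% Kappa for the CNN).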
PMID:38921620 | DOI:10.3390/jimaging10060143
Residual-Based Multi-Stage Deep Learning Framework for Computer-Aided Alzheimer's Disease Detection
J Imaging. 2024 Jun 11;10(6):141. doi: 10.3390/jimaging10060141.
ABSTRACT
Alzheimer's Disease (AD) poses a significant health risk globally, particularly among the elderly population. Recent studies underscore its prevalence, with over 50% of elderly Japanese facing a lifetime risk of dementia, primarily attributed to AD. As the most prevalent form of dementia, AD gradually erodes brain cells, leading to severe neurological decline. It is therefore important to develop automatic AD-detection systems, and many researchers have worked toward this by taking advantage of advances in deep learning (DL), which has shown promising results in various domains, including medical image analysis. However, existing approaches for AD detection often suffer from limited performance due to the complexities associated with training hierarchical convolutional neural networks (CNNs). In this paper, we introduce a novel multi-stage deep neural network architecture based on residual functions to address these limitations. Inspired by the success of residual networks (ResNets) in image-classification tasks, our proposed system comprises five stages, each explicitly formulated to enhance feature effectiveness while maintaining model depth. Following feature extraction, a deep learning-based feature-selection module is applied to mitigate overfitting, incorporating batch normalization, dropout and fully connected layers. Subsequently, machine learning (ML)-based classification algorithms, including Support Vector Machines (SVM), Random Forest (RF) and SoftMax, are employed for classification. Comprehensive evaluations conducted on three benchmark datasets, namely ADNI1: Complete 1Yr 1.5T, MIRAID and OASIS Kaggle, demonstrate the efficacy of our proposed model. Impressively, our model achieves accuracy rates of 99.47%, 99.10% and 99.70% on the ADNI1: Complete 1Yr 1.5T, MIRAID and OASIS datasets, respectively, outperforming existing systems on binary classification problems. Our proposed model thus represents a significant advancement in the AD-analysis domain.
PMID:38921618 | DOI:10.3390/jimaging10060141
Predicting the Temperature Dependence of Surfactant CMCs Using Graph Neural Networks
J Chem Theory Comput. 2024 Jun 26. doi: 10.1021/acs.jctc.4c00314. Online ahead of print.
ABSTRACT
The critical micelle concentration (CMC) of surfactant molecules is an essential property for surfactant applications in industry. Recently, classical quantitative structure-property relationship (QSPR) models and graph neural networks (GNNs), a deep learning technique, have been successfully applied to predict the CMC of surfactants at room temperature. However, these models have not yet considered the temperature dependence of the CMC, which is highly relevant to practical applications. We herein develop a GNN model for temperature-dependent CMC prediction of surfactants. We collected about 1400 data points from public sources for all surfactant classes, i.e., ionic, nonionic, and zwitterionic, at multiple temperatures. We test the predictive quality of the model in two scenarios: (i) CMC data for a surfactant are present in the training set at at least one other temperature, and (ii) CMC data for the surfactant are absent from training, i.e., generalizing to unseen surfactants. In both test scenarios, our model exhibits a high predictive performance of R² ≥ 0.95 on test data. We also find that the model performance varies with the surfactant class. Finally, we evaluate the model for sugar-based surfactants with complex molecular structures, as these represent a more sustainable alternative to synthetic surfactants and are therefore of great interest for future applications in the personal and home care industries.
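A minimal message-passing layer of the kind such molecular GNNs are built from can be sketched as follows. The adjacency matrix, atom features, weights, and the way temperature is appended before the regression head are all illustrative placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(A, X, W):
    """One message-passing round: mean-aggregate neighbor features, combine
    with each node's own features, then apply a linear map + ReLU."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)   # avoid divide-by-zero
    agg = (A @ X) / deg                                 # mean over neighbors
    return np.maximum((X + agg) @ W, 0.0)

# toy molecular graph: 4 atoms, symmetric adjacency A, 5-dim atom features X
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = rng.normal(size=(4, 5))
W = rng.normal(size=(5, 5))

Hn = gnn_layer(A, gnn_layer(A, X, W), W)   # two rounds of message passing
mol = Hn.mean(axis=0)                      # graph-level readout (mean pooling)
# for temperature-dependent prediction, temperature joins the molecular
# embedding before a regression head maps it to, e.g., log CMC
features = np.concatenate([mol, [25.0 / 100.0]])       # 25 °C, crudely scaled
```

Concatenating a scaled temperature to the graph embedding is one simple way to make a per-molecule prediction temperature-dependent; the paper's exact conditioning mechanism may differ.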
PMID:38920084 | DOI:10.1021/acs.jctc.4c00314
Development of a predictive model for 1-year postoperative recovery in patients with lumbar disk herniation based on deep learning and machine learning
Front Neurol. 2024 Jun 11;15:1255780. doi: 10.3389/fneur.2024.1255780. eCollection 2024.
ABSTRACT
BACKGROUND: The aim of this study is to develop a predictive model utilizing deep learning and machine learning techniques that will inform clinical decision-making by predicting the 1-year postoperative recovery of patients with lumbar disk herniation.
METHODS: The clinical data of 470 inpatients who underwent tubular microdiscectomy (TMD) between January 2018 and January 2021 were retrospectively analyzed as variables. The dataset was randomly divided into a training set (n = 329) and a test set (n = 141) using a 10-fold cross-validation technique. Various deep learning and machine learning algorithms including Random Forests, Extreme Gradient Boosting, Support Vector Machines, Extra Trees, K-Nearest Neighbors, Logistic Regression, Light Gradient Boosting Machine, and MLP (Artificial Neural Networks) were employed to develop predictive models for the recovery of patients with lumbar disk herniation 1 year after surgery. The cure rate score of lumbar JOA score 1 year after TMD was used as an outcome indicator. The primary evaluation metric was the area under the receiver operating characteristic curve (AUC), with additional measures including decision curve analysis (DCA), accuracy, sensitivity, specificity, and others.
RESULTS: The heat map of the correlation matrix revealed low inter-feature correlation. The predictive model employing both machine learning and deep learning algorithms was constructed using 15 variables after feature engineering. Among the eight algorithms utilized, the MLP algorithm demonstrated the best performance.
CONCLUSION: Our study findings demonstrate that the MLP algorithm provides superior predictive performance for the recovery of patients with lumbar disk herniation 1 year after surgery.
PMID:38919973 | PMC:PMC11197993 | DOI:10.3389/fneur.2024.1255780
Development of a deep learning model for predicting recurrence of hepatocellular carcinoma after liver transplantation
Front Med (Lausanne). 2024 Jun 11;11:1373005. doi: 10.3389/fmed.2024.1373005. eCollection 2024.
ABSTRACT
BACKGROUND: Liver transplantation (LT) is one of the main curative treatments for hepatocellular carcinoma (HCC). The Milan criteria have long been applied to candidate LT patients with HCC but fail to precisely identify patients at risk of recurrence. We therefore aimed to establish and validate a deep learning model, compare it with the Milan criteria, and better guide post-LT treatment.
METHODS: A total of 356 HCC patients who received LT with complete follow-up data were evaluated. The entire cohort was randomly divided into training set (n = 286) and validation set (n = 70). Multi-layer-perceptron model provided by pycox library was first used to construct the recurrence prediction model. Then tabular neural network (TabNet) that combines elements of deep learning and tabular data processing techniques was utilized to compare with Milan criteria and verify the performance of the model we proposed.
RESULTS: Patients with tumor size over 7 cm, poorer tumor differentiation, and multiple tumors were first classified as being at high risk of recurrence. We trained a classification model with TabNet, and our proposed model performed better than the Milan criteria in terms of accuracy (0.95 vs. 0.86, p < 0.05). In addition, our model showed better performance in terms of AUC, NRI and hazard ratio, demonstrating its robustness.
CONCLUSION: We propose a prognostic model based on applying TabNet to various parameters from HCC patients. The model performed well in post-LT recurrence prediction and in identifying high-risk subgroups.
PMID:38919938 | PMC:PMC11196752 | DOI:10.3389/fmed.2024.1373005
BrainCDNet: a concatenated deep neural network for the detection of brain tumors from MRI images
Front Hum Neurosci. 2024 Jun 11;18:1405586. doi: 10.3389/fnhum.2024.1405586. eCollection 2024.
ABSTRACT
INTRODUCTION: Brain cancer is a frequently occurring disease around the globe, mostly developing due to the presence of tumors in or around the brain. Generally, the prevalence and incidence of brain cancer are much lower than those of other cancer types (breast, skin, lung, etc.). However, brain cancers are associated with high mortality rates, especially in adults, due to false identification of tumor types and delays in diagnosis. Therefore, minimizing false detection of brain tumor types and enabling early diagnosis play crucial roles in improving patient survival. To achieve this, many researchers have recently developed deep learning (DL)-based approaches, which have shown remarkable performance, particularly in classification tasks.
METHODS: This article proposes a novel DL architecture named BrainCDNet. The model concatenates pooling layers and addresses overfitting by initializing layer weights with 'He normal' initialization, together with batch normalization and global average pooling (GAP). Initially, we sharpen the input images using a Nimble filter, which preserves edges and fine details. We then employ the proposed BrainCDNet for feature extraction and classification. Two magnetic resonance imaging (MRI) databases are used in the experiments: a binary one (healthy vs. pathological) and a multiclass one (glioma vs. meningioma vs. pituitary).
RESULTS AND DISCUSSION: Empirical evidence suggests that the presented model attains higher accuracy than state-of-the-art approaches on both datasets: 99.45% (binary) and 96.78% (multiclass). Hence, the proposed model can be used as a decision-support tool for radiologists during the diagnosis of brain cancer patients.
PMID:38919881 | PMC:PMC11196409 | DOI:10.3389/fnhum.2024.1405586
A semi-automatic deep learning model based on biparametric MRI scanning strategy to predict bone metastases in newly diagnosed prostate cancer patients
Front Oncol. 2024 Jun 11;14:1298516. doi: 10.3389/fonc.2024.1298516. eCollection 2024.
ABSTRACT
OBJECTIVE: To develop a semi-automatic model integrating radiomics, deep learning, and clinical features for Bone Metastasis (BM) prediction in prostate cancer (PCa) patients using Biparametric MRI (bpMRI) images.
METHODS: A retrospective study included 414 PCa patients (BM, n=136; NO-BM, n=278) from two institutions (Center 1, n=318; Center 2, n=96) between January 2016 and December 2022. BM status was confirmed via PET-CT or ECT before treatment. Tumor areas on bpMRI images were delineated as the tumor region of interest (ROI) using auto-delineation tumor models, evaluated with the Dice similarity coefficient (DSC). Samples were auto-sketched, refined, and used to train the ResNet BM prediction model. Clinical, radiomics, and deep learning features were combined into the ResNet-C model, evaluated using receiver operating characteristic (ROC) analysis.
RESULTS: The auto-segmentation model achieved a DSC of 0.607. Clinical BM prediction's internal validation had an accuracy (ACC) of 0.650 and area under the curve (AUC) of 0.713; external cohort had an ACC of 0.668 and AUC of 0.757. The deep learning model yielded an ACC of 0.875 and AUC of 0.907 for the internal, and ACC of 0.833 and AUC of 0.862 for the external cohort. The Radiomics model registered an ACC of 0.819 and AUC of 0.852 internally, and ACC of 0.885 and AUC of 0.903 externally. ResNet-C demonstrated the highest ACC of 0.902 and AUC of 0.934 for the internal, and ACC of 0.885 and AUC of 0.903 for the external cohort.
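The AUC values compared across these models can be computed from raw prediction scores with the rank-sum (Mann-Whitney) formulation. A generic sketch with hypothetical scores and labels:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as half a win)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# hypothetical model scores for 3 BM-positive and 2 BM-negative patients
auc = roc_auc([0.9, 0.4, 0.3, 0.6, 0.7], [1, 1, 0, 1, 0])
```

An AUC of 0.934 (as ResNet-C achieves internally) means the model ranks a random BM-positive patient above a random BM-negative one 93.4% of the time.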
CONCLUSION: The ResNet-C model, utilizing bpMRI scanning strategy, accurately assesses bone metastasis (BM) status in newly diagnosed prostate cancer (PCa) patients, facilitating precise treatment planning and improving patient prognoses.
PMID:38919538 | PMC:PMC11196796 | DOI:10.3389/fonc.2024.1298516
Developing and comparing deep learning and machine learning algorithms for osteoporosis risk prediction
Front Artif Intell. 2024 Jun 11;7:1355287. doi: 10.3389/frai.2024.1355287. eCollection 2024.
ABSTRACT
INTRODUCTION: Osteoporosis, characterized by low bone mineral density (BMD), is an increasingly serious public health issue. So far, several traditional regression models and machine learning (ML) algorithms have been proposed for predicting osteoporosis risk. However, these models have shown relatively low accuracy in clinical implementation. Recently proposed deep learning (DL) approaches, such as deep neural network (DNN), which can discover knowledge from complex hidden interactions, offer a new opportunity to improve predictive performance. In this study, we aimed to assess whether DNN can achieve a better performance in osteoporosis risk prediction.
METHODS: Utilizing hip BMD and extensive demographic and routine clinical data from 8,134 subjects aged over 40 from the Louisiana Osteoporosis Study (LOS), we developed a novel DNN framework for predicting osteoporosis risk and compared its performance with four conventional ML models, namely random forest (RF), artificial neural network (ANN), k-nearest neighbors (KNN), and support vector machine (SVM), as well as a traditional regression model, the osteoporosis self-assessment tool (OST). Model performance was assessed by the area under the receiver operating characteristic curve (AUC) and accuracy.
RESULTS: By using 16 discriminative variables, we observed that the DNN approach achieved the best predictive performance (AUC = 0.848) in classifying osteoporosis (hip BMD T-score ≤ -1.0) and non-osteoporosis (hip BMD T-score > -1.0) subjects, compared to the other approaches. Feature importance analysis showed that the top 10 most important variables identified by the DNN model were weight, age, gender, grip strength, height, beer drinking, diastolic pressure, alcohol drinking, smoking years, and economic level. Furthermore, we performed a subsampling analysis to assess the effects of varying the sample size and the number of variables on the predictive performance of the tested models. Notably, the DNN model performed equally well (AUC = 0.846) even when utilizing only the top 10 most important variables for osteoporosis risk prediction. Meanwhile, the DNN model could still achieve a high predictive performance (AUC = 0.826) when the sample size was reduced to 50% of the original dataset.
CONCLUSION: We developed a novel DNN model that may serve as an effective tool for early diagnosis of and intervention in osteoporosis in the aging population.
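The binary labeling rule used above (osteoporosis risk if hip BMD T-score ≤ -1.0) is easy to state in code. The young-adult reference mean and SD below are hypothetical placeholders; real values come from a reference population for the specific densitometer and site:

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """T-score: how many reference SDs the measured BMD lies from the
    young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def at_risk(bmd_g_cm2, young_adult_mean=0.94, young_adult_sd=0.12):
    """Study's binary label: at risk if hip BMD T-score <= -1.0.
    Reference values here are illustrative, not from the study."""
    return t_score(bmd_g_cm2, young_adult_mean, young_adult_sd) <= -1.0

assert at_risk(0.70) and not at_risk(0.95)
```

The DNN then learns to predict this label from demographic and clinical variables, without needing the BMD measurement itself at inference time.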
PMID:38919268 | PMC:PMC11196804 | DOI:10.3389/frai.2024.1355287
Rapid Mold Detection in Chinese Herbal Medicine Using Enhanced Deep Learning Technology
J Med Food. 2024 Jun 26. doi: 10.1089/jmf.2024.k.0004. Online ahead of print.
ABSTRACT
Mold contamination poses a significant challenge in the processing and storage of Chinese herbal medicines (CHM), leading to quality degradation and reduced efficacy. To address this issue, we propose a rapid and accurate detection method for molds in CHM, with a specific focus on Atractylodes macrocephala, using electronic nose (e-nose) technology. The proposed method introduces an eccentric temporal convolutional network (ETCN) model, which effectively captures temporal and spatial information from the e-nose data, enabling efficient and precise mold detection in CHM. In our approach, we employ the stochastic resonance (SR) technique to eliminate noise from the raw e-nose data. By comprehensively analyzing data from eight sensors, the SR-enhanced ETCN (SR-ETCN) method achieves an impressive accuracy of 94.3%, outperforming seven comparative models while using only the 7.0-second response window before the rise phase. The experimental results showcase the ETCN model's accuracy and efficiency, providing a reliable solution for mold detection in Chinese herbal medicine. This study contributes significantly to expediting the assessment of herbal medicine quality, thereby helping to ensure the safety and efficacy of traditional medicinal practices.
PMID:38919153 | DOI:10.1089/jmf.2024.k.0004
Engineering a Robust UDP-Glucose Pyrophosphorylase for Enhanced Biocatalytic Synthesis via ProteinMPNN and Ancestral Sequence Reconstruction
J Agric Food Chem. 2024 Jun 25. doi: 10.1021/acs.jafc.4c03126. Online ahead of print.
ABSTRACT
UDP-glucose is a key metabolite in carbohydrate metabolism and plays a vital role in glycosyl transfer reactions. Its significance spans the food and agricultural industries. This study focuses on UDP-glucose synthesis via multienzyme catalysis using dextrin, incorporating UTP production and ATP regeneration modules to reduce costs. To address the thermal stability limitations of the key enzyme, UDP-glucose pyrophosphorylase (UGP), a deep learning-based protein sequence design approach and ancestral sequence reconstruction are employed to engineer a thermally stable UGP variant. The engineered UGP variant is 500-fold more thermally stable at 60 °C than the wild-type enzyme, with a half-life of 49.8 h. MD simulations and umbrella sampling calculations provide insights into the mechanism behind the enhanced thermal stability. Experimental validation demonstrates that the engineered UGP variant can produce 52.6 mM UDP-glucose within 6 h in an in vitro cascade reaction. This study offers practical insights for efficient UDP-glucose synthesis methods.
PMID:38918953 | DOI:10.1021/acs.jafc.4c03126
Analyzing variation of water inflow to inland lakes under climate change: Integrating deep learning and time series data mining
Environ Res. 2024 Jun 23:119478. doi: 10.1016/j.envres.2024.119478. Online ahead of print.
ABSTRACT
The alarming depletion of global inland lakes in recent decades makes it essential to predict the trend of water inflow from rivers to lakes (WIRL) and to unveil its dominant driver, particularly in the context of climate change. Raw time series data contain multiple components (i.e., long-term trend, seasonal periodicity, and random noise), which makes it challenging for traditional machine/deep learning techniques to effectively capture long-term trend information. In this study, a novel FactorConvSTLnet (FCS) method is developed by integrating STL decomposition, convolutional neural networks (CNN), and factorial analysis into a general framework. FCS is more robust in long-term WIRL trend prediction because it separates trend information into a dedicated modeling predictor, and it can also unveil the predominant drivers. FCS is applied to typical inland lakes in Central Asia (the Aral Sea and Lake Balkhash), and the results indicate that FCS (Nash-Sutcliffe efficiency = 0.88, root mean squared error = 67 m³/s, mean relative error = 10%) outperforms the traditional CNN. The main findings are: (i) during 1960-1990, reservoir water storage (WSR) was the dominant driver for the two lakes, contributing 71% and 49%, respectively; during 1991-2014 and 2015-2099, evaporation (EVAP) would be the dominant driver, with contributions of 30% and 47%; (ii) climate change would shift the dominant driver from human activities to natural factors, with EVAP and surface snow amount (SNW) having an increasing influence on WIRL; (iii) compared to SSP1-2.6, the SNW contribution would decrease by 26% under SSP5-8.5, while the EVAP contribution would increase by 9%. These findings reveal the main drivers of the inland lakes' shrinkage and provide a scientific basis for promoting regional ecological sustainability.
PMID:38917931 | DOI:10.1016/j.envres.2024.119478
A deep learning model integrating a wind direction-based dynamic graph network for ozone prediction
Sci Total Environ. 2024 Jun 23:174229. doi: 10.1016/j.scitotenv.2024.174229. Online ahead of print.
ABSTRACT
Ozone pollution is an important environmental issue in many countries. Accurate forecasting of ozone concentration enables relevant authorities to enact timely policies to mitigate adverse impacts. This study develops a novel hybrid deep learning model, the wind direction-based dynamic spatio-temporal graph network (WDDSTG-Net), for hourly ozone concentration prediction. The model uses a dynamic directed graph structure based on hourly changing wind direction data to capture evolving spatial relationships between air quality monitoring stations. It applies a graph attention mechanism to compute dynamic weights between connected stations, thereby aggregating neighborhood information adaptively. For temporal modeling, it utilizes a sequence-to-sequence model with an attention mechanism to extract long-range temporal dependencies. Additionally, it integrates meteorological predictions to guide the ozone forecasting. The model achieves mean absolute errors of 6.69 μg/m³ and 18.63 μg/m³ for 1-h and 24-h prediction, respectively, outperforming several classic models. Its IAQI prediction accuracy exceeds 75% at all stations, with a maximum of 81.74%. It also exhibits strong capability in predicting severe ozone pollution events, with a 24-h true positive rate of 0.77. Compared to traditional static graph models, WDDSTG-Net demonstrates the importance of incorporating short-term wind fluctuations and transport dynamics into data-driven air quality modeling. In principle, it may serve as an effective data-driven approach for predicting the concentrations of other airborne pollutants.
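One simple way to build such a wind-conditioned directed graph is to connect station j to station i when j lies upwind of i within an angular tolerance. The construction below, including the angle convention, threshold, and coordinates, is an illustrative assumption; the abstract does not specify WDDSTG-Net's actual graph-building rule:

```python
import numpy as np

def wind_directed_adjacency(coords, wind_dir_deg, half_angle=45.0):
    """Directed adjacency: A[i, j] = 1 if station j lies upwind of station i,
    i.e. the bearing from j to i is within `half_angle` degrees of the wind
    direction (taken here as the direction the wind blows TOWARD).
    Illustrative sketch only, not the WDDSTG-Net implementation."""
    n = len(coords)
    A = np.zeros((n, n))
    wind = np.deg2rad(wind_dir_deg)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            bearing = np.arctan2(dy, dx)
            diff = np.angle(np.exp(1j * (bearing - wind)))  # wrap to (-pi, pi]
            if abs(diff) <= np.deg2rad(half_angle):
                A[i, j] = 1.0
    return A

# Three stations on a line; wind blowing toward +x.
coords = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
A = wind_directed_adjacency(coords, wind_dir_deg=0.0)
# Pollutant transport follows the wind: A[1,0] = A[2,0] = A[2,1] = 1.
```

Recomputing this adjacency every hour as the wind shifts is what makes the graph "dynamic"; the attention mechanism then weights each incoming edge rather than treating all upwind neighbors equally.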
PMID:38917895 | DOI:10.1016/j.scitotenv.2024.174229
HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge
Radiother Oncol. 2024 Jun 23:110410. doi: 10.1016/j.radonc.2024.110410. Online ahead of print.
ABSTRACT
BACKGROUND AND PURPOSE: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge.
MATERIALS AND METHODS: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test.
RESULTS: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9 % and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR-to-CT registration combined with network entry-level concatenation of both modalities.
CONCLUSION: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
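The two challenge metrics have standard definitions that are straightforward to compute for binary masks and point sets. The following is a plain NumPy/SciPy sketch of those definitions, not the challenge's official evaluation code (which typically measures HD95 on surface voxels in physical millimeters):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred, gt):
    """Dice similarity coefficient for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred_pts, gt_pts):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    d = cdist(pred_pts, gt_pts)
    d_pg = np.percentile(d.min(axis=1), 95)  # pred -> gt directed distances
    d_gp = np.percentile(d.min(axis=0), 95)  # gt -> pred directed distances
    return max(d_pg, d_gp)

# Two overlapping 5x5 squares, offset by one voxel in each direction.
pred = np.zeros((10, 10), dtype=int); pred[2:7, 2:7] = 1
gt = np.zeros((10, 10), dtype=int);   gt[3:8, 3:8] = 1
print(round(dice(pred, gt), 3))  # 2*16 / (25+25) -> 0.64

# Here all mask voxels are used; in segmentation practice, surface voxels are.
pred_pts, gt_pts = np.argwhere(pred), np.argwhere(gt)
print(round(hd95(pred_pts, gt_pts), 2))  # -> 1.0
```

The DSC rewards volumetric overlap while HD95 penalizes boundary outliers (with the worst 5% discarded), which is why challenges such as HaN-Seg report both.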
PMID:38917883 | DOI:10.1016/j.radonc.2024.110410