Deep learning
AlphaFold2 in biomedical research: facilitating the development of diagnostic strategies for disease
Front Mol Biosci. 2024 Jul 30;11:1414916. doi: 10.3389/fmolb.2024.1414916. eCollection 2024.
ABSTRACT
Proteins, as the primary executors of physiological activity, are a key factor in disease diagnosis and treatment. Research into their structures, functions, and interactions is essential to better understand disease mechanisms and potential therapies. DeepMind's AlphaFold2, a deep-learning protein structure prediction model, has proven remarkably accurate and is widely employed in diagnostic research, for example in the study of disease biomarkers, microbial pathogenicity, antigen-antibody structures, and missense mutations. AlphaFold2 thus serves as an exceptional tool for bridging fundamental protein research with breakthroughs in disease diagnosis, the development of diagnostic strategies, the design of novel therapeutic approaches, and advances in precision medicine. This review outlines the architecture, highlights, and limitations of AlphaFold2, placing particular emphasis on its applications within diagnostic research grounded in disciplines such as immunology, biochemistry, molecular biology, and microbiology.
PMID:39139810 | PMC:PMC11319189 | DOI:10.3389/fmolb.2024.1414916
Multi-Quantifying Maxillofacial Traits via a Demographic Parity-Based AI Model
BME Front. 2024 Aug 13;5:0054. doi: 10.34133/bmef.0054. eCollection 2024.
ABSTRACT
Objective and Impact Statement: Multi-quantification of distinct, individualized maxillofacial traits, that is, quantifying multiple indices, is vital for diagnosis, decision-making, and prognosis in maxillofacial surgery. Introduction: Because the discrete and demographically disproportionate distributions of these indices restrict the generalization ability of artificial intelligence (AI)-based automatic analysis, this study presents a demographic-parity strategy for AI-based multi-quantification. Methods: For the aesthetically important maxillary alveolar basal bone, for which a total of 9 indices must be quantified across the length and width dimensions, this study collected 4,000 cone-beam computed tomography (CBCT) sagittal images and developed a deep learning model composed of a backbone and multiple regression heads with fully shared parameters to predict these quantitative metrics. Through auditing of the primary generalization result, the sensitive attribute was identified and the dataset was subdivided to train new submodels. Submodels trained on the respective subsets were then ensembled for final generalization. Results: The primary generalization result showed that the AI model underperformed in quantifying major basal bone indices. Sex proved to be the sensitive attribute. The final model, an ensemble of the male and female submodels, yielded equal performance between genders, low error, high consistency, a satisfactory correlation coefficient, and highly focused attention. The ensemble model showed high similarity to clinicians with minimal processing time. Conclusion: This work validates that the demographic-parity strategy gives the AI algorithm greater generalization ability, even for highly variable traits, which benefits appearance-focused maxillofacial surgery.
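As a rough illustration of the subgroup-ensemble pattern described above (a shared-backbone multi-head regressor plus per-sex submodels routed at inference), here is a PyTorch sketch; the class names, backbone, and dimensions are hypothetical assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiHeadRegressor(nn.Module):
    """Shared feature backbone feeding several regression heads (one per index)."""
    def __init__(self, n_indices: int = 9):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for an image CNN backbone
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList([nn.Linear(16, 1) for _ in range(n_indices)])

    def forward(self, x):
        feats = self.backbone(x)
        return torch.cat([h(feats) for h in self.heads], dim=1)   # (batch, n_indices)

class GroupRoutedEnsemble(nn.Module):
    """Route each sample to the submodel trained on its demographic subgroup."""
    def __init__(self, submodels: dict, n_indices: int = 9):
        super().__init__()
        self.submodels = nn.ModuleDict(submodels)
        self.n_indices = n_indices

    def forward(self, x, groups):
        out = torch.empty(x.size(0), self.n_indices)
        for name, model in self.submodels.items():
            mask = torch.tensor([g == name for g in groups])
            if mask.any():
                out[mask] = model(x[mask])
        return out

# usage sketch with random CBCT-sized patches
ensemble = GroupRoutedEnsemble({"male": MultiHeadRegressor(), "female": MultiHeadRegressor()})
preds = ensemble(torch.randn(4, 1, 64, 64), groups=["male", "female", "female", "male"])
```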
PMID:39139805 | PMC:PMC11319927 | DOI:10.34133/bmef.0054
Linking genetic markers and crop model parameters using neural networks to enhance genomic prediction of integrative traits
Front Plant Sci. 2024 Jul 30;15:1393965. doi: 10.3389/fpls.2024.1393965. eCollection 2024.
ABSTRACT
INTRODUCTION: Predicting the performance (yield or other integrative traits) of cultivated plants is complex because it involves estimating not only the genetic value of the candidates for selection and the interactions between genotype and environment (GxE), but also the epistatic interactions between genomic regions for a given trait and the interactions between the traits contributing to the integrative trait. Classical Genomic Prediction (GP) models mostly account for additive effects and are not suitable for estimating non-additive effects such as epistasis. Therefore, the use of machine learning and deep learning methods has previously been proposed to model these non-linear effects.
METHODS: In this study, we propose a type of Artificial Neural Network (ANN) called Convolutional Neural Network (CNN) and compare it to two classical GP regression methods for their ability to predict an integrative trait of sorghum: aboveground fresh weight accumulation. We also suggest that the use of a crop growth model (CGM) can enhance predictions of integrative traits by decomposing them into more heritable intermediate traits.
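A minimal PyTorch sketch of the kind of 1D CNN a genomic-prediction setup like this might use, with SNP markers coded 0/1/2 as input and a single quantitative trait as output; the architecture and hyperparameters are illustrative assumptions, not the study's.

```python
import torch
import torch.nn as nn

class SNPCNN(nn.Module):
    """1D CNN over a sequence of SNP markers (coded 0/1/2) predicting one quantitative trait."""
    def __init__(self, n_markers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):              # x: (batch, n_markers), values in {0, 1, 2}
        return self.net(x.unsqueeze(1).float()).squeeze(1)

model = SNPCNN(n_markers=10_000)
genotypes = torch.randint(0, 3, (8, 10_000))          # synthetic marker matrix
loss = nn.functional.mse_loss(model(genotypes), torch.randn(8))
loss.backward()
```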
RESULTS: The results show that the CNN outperformed both the LASSO and Bayes C methods in accuracy, suggesting that CNNs are better suited to predicting integrative traits. Furthermore, the predictive ability of the combined CGM-GP approach surpassed that of GP without CGM integration, irrespective of the regression method used.
DISCUSSION: These results are consistent with recent works aiming to develop Genome-to-Phenotype models and advocate for the use of non-linear prediction methods, and the use of combined CGM-GP to enhance the prediction of crop performances.
PMID:39139722 | PMC:PMC11319263 | DOI:10.3389/fpls.2024.1393965
M³: using mask-attention and multi-scale for multi-modal brain MRI classification
Front Neuroinform. 2024 Jul 29;18:1403732. doi: 10.3389/fninf.2024.1403732. eCollection 2024.
ABSTRACT
INTRODUCTION: Brain diseases, particularly the classification of gliomas and brain metastases and the prediction of hemorrhagic transformation (HT) in stroke, pose significant challenges in healthcare. Existing methods, relying predominantly on clinical data or imaging-based techniques such as radiomics, often fall short of satisfactory classification accuracy. They fail to adequately capture the nuanced features crucial for accurate diagnosis and are often hindered by noise and an inability to integrate information across scales.
METHODS: We propose a novel approach, termed M³, that combines mask attention mechanisms with multi-scale feature fusion for multimodal brain disease classification tasks, aiming to extract features highly relevant to the disease. The extracted features are then dimensionally reduced using Principal Component Analysis (PCA) and classified with a Support Vector Machine (SVM) to obtain the predictive results.
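The back end of the described pipeline (deep features reduced with PCA, then classified with an SVM) is standard; here is a scikit-learn sketch under the assumption that the mask-attention features have already been extracted upstream, with synthetic placeholders for the data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assume `features` are deep features already extracted per patient (n_samples, n_features)
# and `labels` are the diagnostic classes (e.g., glioma vs. metastasis).
rng = np.random.default_rng(0)
features = rng.normal(size=(120, 512))
labels = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(), PCA(n_components=32), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, features, labels, cv=5).mean())   # cross-validated accuracy
```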
RESULTS: Our methodology underwent rigorous testing on multi-parametric MRI datasets for both brain tumors and strokes. The results demonstrate a significant improvement in addressing critical clinical challenges, including the classification of gliomas, brain metastases, and the prediction of hemorrhagic stroke transformations. Ablation studies further validate the effectiveness of our attention mechanism and feature fusion modules.
DISCUSSION: These findings underscore the potential of our approach to meet and exceed current clinical diagnostic demands, offering promising prospects for enhancing healthcare outcomes in the diagnosis and treatment of brain diseases.
PMID:39139696 | PMC:PMC11320416 | DOI:10.3389/fninf.2024.1403732
Artificial Intelligence to Facilitate Clinical Trial Recruitment in Age-Related Macular Degeneration
Ophthalmol Sci. 2024 Jun 19;4(6):100566. doi: 10.1016/j.xops.2024.100566. eCollection 2024 Nov-Dec.
ABSTRACT
OBJECTIVE: Recent developments in artificial intelligence (AI) have positioned it to transform several stages of the clinical trial process. In this study, we explore the role of AI in clinical trial recruitment of individuals with geographic atrophy (GA), an advanced stage of age-related macular degeneration, amidst numerous ongoing clinical trials for this condition.
DESIGN: Cross-sectional study.
SUBJECTS: Retrospective dataset from the INSIGHT Health Data Research Hub at Moorfields Eye Hospital in London, United Kingdom, including 306 651 patients (602 826 eyes) with suspected retinal disease who underwent OCT imaging between January 1, 2008 and April 10, 2023.
METHODS: A deep learning model was trained on OCT scans to identify patients potentially eligible for GA trials, using AI-generated segmentations of retinal tissue. This method's efficacy was compared against a traditional keyword-based electronic health record (EHR) search. A clinical validation with fundus autofluorescence (FAF) images was performed to calculate the positive predictive value of this approach, by comparing AI predictions with expert assessments.
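The clinical validation hinges on a positive predictive value with a 95% confidence interval; below is a small sketch of how such a value could be computed with a Wilson score interval, using illustrative counts rather than the study's data.

```python
import math

def ppv_wilson_ci(tp: int, fp: int, z: float = 1.96):
    """Positive predictive value with a Wilson score 95% confidence interval."""
    n = tp + fp
    p = tp / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# illustrative counts only (not the study's raw data)
print(ppv_wilson_ci(tp=95, fp=55))
```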
MAIN OUTCOME MEASURES: The primary outcomes included the positive predictive value of AI in identifying trial-eligible patients, and the secondary outcome was the intraclass correlation between GA areas segmented on FAF by experts and AI-segmented OCT scans.
RESULTS: The AI system shortlisted a larger number of eligible patients with greater precision (1139, positive predictive value: 63%; 95% confidence interval [CI]: 54%-71%) compared with the EHR search (693, positive predictive value: 40%; 95% CI: 39%-42%). A combined AI-EHR approach identified 604 eligible patients with a positive predictive value of 86% (95% CI: 79%-92%). Intraclass correlation of GA area segmented on FAF versus AI-segmented area on OCT was 0.77 (95% CI: 0.68-0.84) for cases meeting trial criteria. The AI also adjusts to the distinct imaging criteria from several clinical trials, generating tailored shortlists ranging from 438 to 1817 patients.
CONCLUSIONS: This study demonstrates the potential for AI in facilitating automated prescreening for clinical trials in GA, enabling site feasibility assessments, data-driven protocol design, and cost reduction. Once treatments are available, similar AI systems could also be used to identify individuals who may benefit from treatment.
FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
PMID:39139546 | PMC:PMC11321286 | DOI:10.1016/j.xops.2024.100566
AlphaFold 2-based stacking model for protein solubility prediction and its transferability on seed storage proteins
Int J Biol Macromol. 2024 Aug 11:134601. doi: 10.1016/j.ijbiomac.2024.134601. Online ahead of print.
ABSTRACT
Accurate protein solubility prediction is crucial for screening suitable candidates for food applications. Existing models often rely only on sequences, overlooking important structural details. In this study, a regression model for protein solubility was developed using both the sequences and predicted structures of 2983 E. coli proteins. Sequence- and structure-level properties of the proteins were extracted bioinformatically and fed into a multilayer perceptron (MLP). In addition, residue-level features and contact maps were used to construct a graph convolutional network (GCN). The out-of-fold predictions of the two models were combined and fed into multiple meta-regressors to create a stacking model. The stacking model with a support vector regressor (SVR) achieved R2 of 0.502 and 0.468 on the test and external validation datasets, respectively, outperforming existing regression models. The improvement over its base models indicates that the stacking model effectively captured their strengths as well as the significance of the different features used. Furthermore, the model's transferability was indirectly validated on a dataset of seed storage proteins using the Osborne definition, as well as in a case study using molecular dynamics simulation, showing potential for application beyond microbial proteins to food- and agriculture-related ones.
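The stacking step (out-of-fold predictions from two base models fed to an SVR meta-regressor) can be sketched with scikit-learn; here MLPRegressor stands in for both the MLP and the GCN base models, and the feature matrices are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_tabular = rng.normal(size=(300, 40))   # protein-level descriptors (stand-in for the MLP input)
X_graph   = rng.normal(size=(300, 64))   # pooled residue/graph embeddings (stand-in for the GCN)
y         = rng.normal(size=300)         # measured solubility values

base_a = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
base_b = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)

# Out-of-fold predictions keep the meta-regressor from seeing training-fold leakage.
oof_a = cross_val_predict(base_a, X_tabular, y, cv=5)
oof_b = cross_val_predict(base_b, X_graph, y, cv=5)

meta = SVR(kernel="rbf", C=1.0)
meta.fit(np.column_stack([oof_a, oof_b]), y)   # SVR meta-regressor on stacked predictions
```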
PMID:39137857 | DOI:10.1016/j.ijbiomac.2024.134601
Spatio-temporal learning and explaining for dynamic functional connectivity analysis: Application to depression
J Affect Disord. 2024 Aug 11:S0165-0327(24)01228-X. doi: 10.1016/j.jad.2024.08.014. Online ahead of print.
ABSTRACT
BACKGROUND: Functional connectivity has been shown to fluctuate over time. The present study aimed to identify major depressive disorder (MDD) from dynamic functional connectivity (dFC) derived from resting-state fMRI data, which would help produce tools for early depression diagnosis and enhance our understanding of depressive etiology.
METHODS: Resting-state fMRI data were collected from 178 subjects, including 89 patients with MDD and 89 healthy controls. We propose a spatio-temporal learning and explaining framework for dFC analysis. An effective spatio-temporal model is developed to classify MDD from healthy controls using dFCs. The model is a stacking neural network model that learns network-structure information with a multi-layer perceptron-based spatial encoder and learns time-varying patterns with a Transformer-based temporal encoder. We explain the spatio-temporal model with a two-stage method: important-feature extraction and disorder-relevant pattern exploration. The layer-wise relevance propagation (LRP) method is introduced to extract the most relevant input features of the model, and the attention mechanism with LRP is applied to extract the important time steps of the dFCs. The disorder-relevant functional connections, brain regions, and brain states in the model are further explored and identified.
RESULTS: We achieved the best classification performance in identifying MDD from healthy controls with dFC data. The most important functional connections, brain regions, and dynamic states closely related to MDD were identified.
LIMITATIONS: The data preprocessing may affect the classification performance of the model, and this study needs further validation in a larger patient population.
CONCLUSIONS: The experimental results demonstrate that the proposed spatio-temporal model can effectively classify MDD and uncover structural and temporal patterns of dFCs in depression.
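A minimal PyTorch sketch of the general architecture described (an MLP spatial encoder applied per dFC window, a Transformer temporal encoder over the window sequence, and a classification head); the dimensions and mean pooling are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SpatioTemporalClassifier(nn.Module):
    """Encode each dFC window with an MLP, model the window sequence with a Transformer."""
    def __init__(self, n_connections: int, d_model: int = 128, n_classes: int = 2):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(n_connections, 256), nn.ReLU(),
                                     nn.Linear(256, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, dfc):                       # dfc: (batch, n_windows, n_connections)
        tokens = self.spatial(dfc)                # (batch, n_windows, d_model)
        encoded = self.temporal(tokens)           # (batch, n_windows, d_model)
        return self.head(encoded.mean(dim=1))     # pool over time -> class logits

model = SpatioTemporalClassifier(n_connections=4005)   # e.g. upper triangle of a 90x90 FC matrix
logits = model(torch.randn(2, 30, 4005))                # 2 subjects, 30 sliding windows
```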
PMID:39137835 | DOI:10.1016/j.jad.2024.08.014
Transformer for low concentration image denoising in magnetic particle imaging
Phys Med Biol. 2024 Aug 13. doi: 10.1088/1361-6560/ad6ede. Online ahead of print.
ABSTRACT
OBJECTIVE: Magnetic particle imaging (MPI) is an emerging tracer-based in vivo imaging technology. The use of MPI at low Superparamagnetic Iron Oxide Nanoparticle (SPION) concentrations has the potential to be a promising area of clinical application due to the inherent safety for humans. However, low tracer concentrations reduce the signal-to-noise ratio (SNR) of the magnetization signal, leading to severe noise artifacts in the reconstructed MPI images. Hardware improvements have high complexity, while traditional methods lack robustness to different noise levels, making it difficult to improve the quality of low concentration MPI images.
APPROACH: Here, we propose a novel deep learning method for MPI image denoising and quality enhancement based on a sparse lightweight transformer model. The proposed residual-local transformer structure reduces model complexity to avoid overfitting, and an information retention block facilitates feature extraction for image details. In addition, we design a noisy-concentration dataset to train our model. We then evaluate our method on both simulated and real MPI image data.
MAIN RESULTS: Simulation experiment results show that our method can achieve the best performance compared with the existing deep learning methods for MPI image denoising. More importantly, our method is effectively performed on the real MPI image of samples with an Fe concentration down to 67 μgFe/mL.
SIGNIFICANCE: Our method provides great potential for obtaining high quality MPI images at low concentrations.
PMID:39137818 | DOI:10.1088/1361-6560/ad6ede
Predicting time-of-flight with Cerenkov light in BGO: a three-stage network approach with multiple timing kernels prior
Phys Med Biol. 2024 Aug 13. doi: 10.1088/1361-6560/ad6ed8. Online ahead of print.
ABSTRACT
In the quest for enhanced image quality in positron emission tomography (PET) reconstruction, introducing Time-of-Flight (TOF) constraints into TOF-PET reconstruction offers a superior signal-to-noise ratio (SNR). By employing BGO detectors capable of simultaneously emitting prompt Cerenkov light and scintillation light, this approach combines the high time resolution of prompt photons with the high energy resolution of scintillation light, presenting a promising avenue for acquiring more precise TOF information. In Stage One, we train a raw method capable of predicting TOF information from coincidence waveform pairs. In Stage Two, the data are categorized into 25 classes based on signal rise time, and the pre-trained raw method is used to obtain a TOF kernel for each of the 25 classes, thereby generating prior knowledge. In Stage Three, our proposed deep learning (DL) module, combined with a bias fine-tuning module, uses the kernel prior to provide bias-compensation values for the data, refining the first-stage outputs and yielding more accurate TOF predictions. The three-stage network built upon the LED method resulted in improvements of 11.7 ps and 41.8 ps in full width at half maximum (FWHM) and full width at tenth maximum (FWTM), respectively. Optimal performance was achieved, with an FWHM of 128.2 ps and an FWTM of 286.6 ps, when a CNN and a Transformer were used in Stages One and Three, respectively. Further improvements of 2.3 ps and 3.5 ps in FWHM and FWTM were attained through data augmentation. This study employs neural networks to compensate for the timing delays in mixed (Cerenkov and scintillation photon) signals, combining multiple timing kernels as prior knowledge with deep learning models. This integration yields optimal predictive performance, offering a superior solution for TOF-PET research utilizing Cerenkov signals.
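The reported FWHM and FWTM figures can be read off a histogram of timing residuals; below is a NumPy sketch under that assumption, using a synthetic Gaussian residual distribution rather than real detector data.

```python
import numpy as np

def width_at_fraction(residuals_ps: np.ndarray, fraction: float, bin_ps: float = 5.0) -> float:
    """Width of the timing-residual histogram at `fraction` of its peak (0.5 -> FWHM, 0.1 -> FWTM)."""
    edges = np.arange(residuals_ps.min(), residuals_ps.max() + bin_ps, bin_ps)
    counts, edges = np.histogram(residuals_ps, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    above = centres[counts >= fraction * counts.max()]   # bin centres above the threshold
    return float(above.max() - above.min())

# synthetic residuals: a Gaussian with sigma ~54.5 ps has FWHM ~128 ps
residuals = np.random.default_rng(0).normal(0.0, 54.5, size=100_000)
print(width_at_fraction(residuals, 0.5), width_at_fraction(residuals, 0.1))
```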
PMID:39137808 | DOI:10.1088/1361-6560/ad6ed8
Turn-table micro-CT scanner for dynamic perfusion imaging in mice: design, implementation, and evaluation
Phys Med Biol. 2024 Aug 13. doi: 10.1088/1361-6560/ad6edd. Online ahead of print.
ABSTRACT
OBJECTIVE: This study introduces a novel desktop micro-CT scanner designed for dynamic perfusion imaging in mice, aimed at enhancing preclinical imaging capabilities with high resolution and low radiation doses.
APPROACH: The micro-CT system features a custom-built rotating table capable of both circular and helical scans, enabled by a small-bore slip ring for continuous rotation. Images were reconstructed with a temporal resolution of 3.125 seconds and an isotropic voxel size of 65 µm, with potential for higher resolution scanning. The system's static performance was validated using standard quality assurance phantoms. Dynamic performance was assessed with a custom 3D-bioprinted tissue-mimetic phantom simulating single-compartment vascular flow. Flow measurements ranged from 1.5 mL/min to 9 mL/min, with perfusion metrics such as time-to-peak (TTP), mean transit time (MTT), and blood flow index (BFI) calculated. In vivo experiments involved mice with different genetic risk factors for Alzheimer's and cardiovascular diseases to showcase the system's capabilities for perfusion imaging.
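A sketch of how TTP and MTT might be derived from a single time-intensity curve sampled at the 3.125-s frame rate; approximating MTT by the first moment of the enhancement curve is an assumption, not necessarily the study's exact definition, and the curve here is synthetic.

```python
import numpy as np

def perfusion_metrics(t_s: np.ndarray, curve: np.ndarray):
    """Time-to-peak and a first-moment mean-transit-time estimate from a time-intensity curve."""
    baseline = curve[:3].mean()                  # pre-contrast baseline from the first frames
    c = np.clip(curve - baseline, 0, None)       # enhancement above baseline
    ttp = float(t_s[np.argmax(c)])
    mtt = float(np.trapz(t_s * c, t_s) / np.trapz(c, t_s))   # first moment of the curve
    return ttp, mtt

t = np.arange(0, 60, 3.125)                      # one frame per 3.125-s rotation
curve = 100 + 40 * np.exp(-0.5 * ((t - 20) / 6) ** 2)        # synthetic enhancement curve
print(perfusion_metrics(t, curve))
```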
MAIN RESULTS: The static performance validation confirmed that the system meets standard quality metrics, such as spatial resolution and uniformity. The dynamic evaluation with the 3D-bioprinted phantom demonstrated linearity in hemodynamic flow measurements and effective quantification of perfusion metrics. In vivo experiments highlighted the system's potential to capture detailed perfusion maps of the brain, lungs, and kidneys. The observed differences in perfusion characteristics between genotypic mice illustrated the system's capability to detect physiological variations, though the small sample size precludes definitive conclusions.
SIGNIFICANCE: The turn-table micro-CT system represents a significant advancement in preclinical imaging, providing high-resolution, low-dose dynamic imaging for a range of biological and medical research applications. Future work will focus on improving temporal resolution, expanding spectral capabilities, and integrating deep learning techniques for enhanced image reconstruction and analysis.
PMID:39137802 | DOI:10.1088/1361-6560/ad6edd
A lightweight intelligent laryngeal cancer detection system for rural areas
Am J Otolaryngol. 2024 Aug 8;45(6):104474. doi: 10.1016/j.amjoto.2024.104474. Online ahead of print.
ABSTRACT
OBJECTIVE: Early diagnosis of laryngeal cancer (LC) is crucial, particularly in rural areas. Despite existing studies on deep learning models for LC identification, challenges remain in selecting suitable models for rural areas with shortages of laryngologists and limited computer resources. We present the intelligent laryngeal cancer detection system (ILCDS), a deep learning-based solution tailored for effective LC screening in resource-constrained rural areas.
METHODS: We compiled a dataset comprising 2023 laryngoscopic images and applied data augmentation techniques to expand it. Subsequently, we evaluated eight deep learning models (AlexNet, VGG, ResNet, DenseNet, MobileNet, ShuffleNet, Vision Transformer, and Swin Transformer) for LC identification. A comprehensive evaluation of their performance and efficiency was conducted, and the most suitable model was selected to assemble the ILCDS.
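A minimal transfer-learning sketch for a lightweight backbone of this kind; MobileNetV2 and the two-class head are assumptions for illustration, not the paper's exact configuration or training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assume MobileNetV2 as the lightweight backbone (the abstract does not pin the variant).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, 2)   # e.g. benign vs. suspicious frame

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of normalized 224x224 RGB laryngoscopy frames."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])))
```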
RESULTS: Regarding performance, all models attained an average accuracy exceeding 90 % on the test set. Particularly noteworthy are VGG, DenseNet, and MobileNet, which exceeded an accuracy of 95 %, with scores of 95.32 %, 95.75 %, and 95.99 %, respectively. Regarding efficiency, MobileNet excels owing to its compact size and fast inference speed, making it an ideal model for integration into ILCDS.
CONCLUSION: The ILCDS demonstrated promising accuracy in LC detection while maintaining modest computational resource requirements, indicating its potential to enhance LC screening accuracy and alleviate the workload on otolaryngologists in rural areas.
PMID:39137696 | DOI:10.1016/j.amjoto.2024.104474
Discovery of potential antidiabetic peptides using deep learning
Comput Biol Med. 2024 Aug 12;180:109013. doi: 10.1016/j.compbiomed.2024.109013. Online ahead of print.
ABSTRACT
Antidiabetic peptides (ADPs), peptides with potential antidiabetic activity, hold significant importance in the treatment and control of diabetes. Despite their therapeutic potential, the discovery and prediction of ADPs remain challenging due to limited data, the complex nature of peptide functions, and the expensive and time-consuming nature of traditional wet lab experiments. This study aims to address these challenges by exploring methods for the discovery and prediction of ADPs using advanced deep learning techniques. Specifically, we developed two models: a single-channel CNN and a three-channel neural network (CNN + RNN + Bi-LSTM). ADPs were primarily gathered from the BioDADPep database, alongside thousands of non-ADPs sourced from anticancer, antibacterial, and antiviral peptide datasets. Subsequently, data preprocessing was performed with the evolutionary scale model (ESM-2), followed by model training and evaluation through 10-fold cross-validation. Furthermore, this work collected a series of newly published ADPs as an independent test set through literature review, and found that the CNN model achieved the highest accuracy (90.48 %) in predicting the independent test set, surpassing existing ADP prediction tools. Finally, the application of the model was considered. SeqGAN was used to generate new candidate ADPs, followed by screening with the constructed CNN model. Selected peptides were then evaluated using physicochemical property prediction and structural forecasts for pharmaceutical potential. In summary, this study not only established robust ADP prediction models but also employed these models to screen a batch of potential ADPs, addressing a critical need in the field of peptide-based antidiabetic research.
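A minimal sketch of a 1D CNN classifier over precomputed per-residue ESM-2 embeddings; the embedding dimension, pooling, and layer sizes are illustrative assumptions, and the embedding step itself is assumed to happen upstream.

```python
import torch
import torch.nn as nn

class PeptideCNN(nn.Module):
    """1D CNN over per-residue ESM-2 embeddings (precomputed upstream), binary ADP/non-ADP output."""
    def __init__(self, emb_dim: int = 320):       # e.g. a small 320-d ESM-2 model; an assumption
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64, 2)

    def forward(self, emb):                        # emb: (batch, seq_len, emb_dim)
        return self.fc(self.conv(emb.transpose(1, 2)))

model = PeptideCNN()
logits = model(torch.randn(8, 30, 320))            # 8 peptides of length 30
```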
PMID:39137670 | DOI:10.1016/j.compbiomed.2024.109013
CNN-BLSTM based deep learning framework for eukaryotic kinome classification: An explainability based approach
Comput Biol Chem. 2024 Aug 8;112:108169. doi: 10.1016/j.compbiolchem.2024.108169. Online ahead of print.
ABSTRACT
Classification of protein families from their sequences is an enduring task in proteomics and related studies. Numerous deep-learning models have been built to tackle this challenge, but owing to their black-box character they still fall short in reliability. Here, we present a novel explainability pipeline that explains the pivotal decisions of a deep learning model for classification of the eukaryotic kinome. Based on a comparative and experimental analysis of cutting-edge deep learning algorithms, the best-performing model, CNN-BLSTM, was chosen to classify eukaryotic kinase sequences into their eight corresponding families. In place of the conventional class-activation-map-based interpretation of CNN models in this domain, we cascaded Grad-CAM and Integrated Gradients (IG) for improved and more accountable explanations. To test the trustworthiness of the classifier, we masked the kinase domain traces identified by the explainability pipeline and observed a class-specific drop in F1-score from 0.96 to 0.76. In compliance with the explainable AI paradigm, our results are promising and contribute to enhancing the trustworthiness of deep learning models for biological-sequence studies.
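Integrated Gradients can be approximated directly in PyTorch with a Riemann sum over a straight-line path from a baseline to the input; the sketch below is generic (for a model taking embedded sequences) and is not the authors' exact Grad-CAM/IG cascade, and the names in the usage comment are hypothetical.

```python
import torch

def integrated_gradients(model, inputs, target_class, baseline=None, steps=50):
    """Riemann-sum Integrated Gradients for a model that takes embedded sequences (B, L, D)."""
    if baseline is None:
        baseline = torch.zeros_like(inputs)
    total_grads = torch.zeros_like(inputs)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        point = (baseline + alpha * (inputs - baseline)).detach().requires_grad_(True)
        score = model(point)[:, target_class].sum()
        total_grads += torch.autograd.grad(score, point)[0]
    # (x - baseline) times the average path gradient = attribution per position/feature
    return (inputs - baseline) * total_grads / steps

# usage sketch (hypothetical names): summing over the embedding axis gives per-residue importances
# attributions = integrated_gradients(kinase_classifier, embedded_seqs, target_class=3)
# per_residue = attributions.sum(dim=-1)
```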
PMID:39137619 | DOI:10.1016/j.compbiolchem.2024.108169
A deep learning approach to identify the fetal head position using transperineal ultrasound during labor
Eur J Obstet Gynecol Reprod Biol. 2024 Aug 9;301:147-153. doi: 10.1016/j.ejogrb.2024.08.012. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop a deep learning (DL)-model using convolutional neural networks (CNN) to automatically identify the fetal head position at transperineal ultrasound in the second stage of labor.
MATERIAL AND METHODS: Prospective, multicenter study including singleton, term, cephalic pregnancies in the second stage of labor. We assessed the fetal head position using transabdominal ultrasound and subsequently obtained an image of the fetal head in the axial plane using transperineal ultrasound, labeling it according to the transabdominal ultrasound findings. The ultrasound images were randomly allocated into three datasets containing similar proportions of each subtype of fetal head position (occiput anterior, posterior, right and left transverse): the training dataset included 70 %, the validation dataset 15 %, and the testing dataset 15 % of the acquired images. The pre-trained ResNet18 model was employed as the foundational framework for feature extraction and classification. CNN1 was trained to differentiate between occiput anterior (OA) and non-OA positions, CNN2 classified fetal head malpositions into occiput posterior (OP) or occiput transverse (OT) positions, and CNN3 classified the remaining images as right or left OT. The DL-model was constructed from these three convolutional neural networks (CNNs) working together to classify the fetal head position. The performance of the algorithm was evaluated in terms of accuracy, sensitivity, specificity, F1-score and Cohen's kappa.
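A sketch of the cascaded decision logic with three binary ResNet18-based classifiers; the label ordering, preprocessing, and weights are illustrative assumptions, not the trained DL-model itself.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_binary_resnet18():
    """ResNet18 backbone with a 2-class head, as a stand-in for each stage of the cascade."""
    m = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    m.fc = nn.Linear(m.fc.in_features, 2)
    return m.eval()

cnn1, cnn2, cnn3 = make_binary_resnet18(), make_binary_resnet18(), make_binary_resnet18()

@torch.no_grad()
def classify_head_position(image: torch.Tensor) -> str:
    """Cascade: OA vs. non-OA, then OP vs. OT, then left vs. right OT (illustrative label order)."""
    if cnn1(image).argmax(1).item() == 0:
        return "occiput anterior"
    if cnn2(image).argmax(1).item() == 0:
        return "occiput posterior"
    return "left occiput transverse" if cnn3(image).argmax(1).item() == 0 else "right occiput transverse"

print(classify_head_position(torch.randn(1, 3, 224, 224)))
```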
RESULTS: Between February 2018 and May 2023, 2154 transperineal images were included from eligible participants across 16 collaborating centers. The overall performance of the model for the classification of the fetal head position in the axial plane at transperineal ultrasound was excellent, with an accuracy of 94.5 % (95 % CI 92.0-97.0), a sensitivity of 95.6 % (95 % CI 96.8-100.0), a specificity of 91.2 % (95 % CI 87.3-95.1), an F1-score of 0.92 and a Cohen's kappa of 0.90. The best performance was achieved by CNN1 (OA position vs. fetal head malpositions), with an accuracy of 98.3 % (95 % CI 96.9-99.7), followed by CNN2 (OP vs. OT positions), with an accuracy of 93.9 % (95 % CI 89.6-98.2), and finally CNN3 (right vs. left OT position), with an accuracy of 91.3 % (95 % CI 83.5-99.1).
CONCLUSIONS: We have developed a DL-model capable of assessing fetal head position using transperineal ultrasound during the second stage of labor with an excellent overall accuracy. Future studies should validate our DL model using larger datasets and real-time patients before introducing it into routine clinical practice.
PMID:39137593 | DOI:10.1016/j.ejogrb.2024.08.012
ConvColor DL: Concatenated convolutional and handcrafted color features fusion for beef quality identification
Food Chem. 2024 Aug 8;460(Pt 3):140795. doi: 10.1016/j.foodchem.2024.140795. Online ahead of print.
ABSTRACT
Beef is an important food product in human nutrition, and evaluating its quality and safety deserves careful attention. Non-destructive determination of beef quality by image processing methods shows great potential for food safety, as it helps prevent wastage. Traditionally, beef quality determination by image processing has been based on handcrafted color features; however, it is difficult to determine meat quality from a color space model alone. This study introduces an effective beef quality classification approach that concatenates learning-based global features with handcrafted color features. According to the experimental results, the convVGG16 + HLS + HSV + RGB + Bi-LSTM model achieved high performance: its accuracy, precision, recall, F1-score, AUC, Jaccard index, and MCC values were 0.989, 0.990, 0.989, 0.990, 0.992, 0.979, and 0.983, respectively.
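A sketch of the general fusion idea (handcrafted RGB/HSV/HLS histograms concatenated with pooled CNN features before a downstream classifier); the use of VGG16's pooled convolutional features, 16-bin histograms, and simple scaling is an assumption, not the paper's exact ConvColor DL architecture.

```python
import cv2
import numpy as np
import torch
from torchvision import models

def color_histograms(bgr: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel histograms in RGB, HSV, and HLS color spaces."""
    feats = []
    for code in (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2HLS):
        converted = cv2.cvtColor(bgr, code)
        for ch in range(3):
            hist, _ = np.histogram(converted[..., ch], bins=bins, range=(0, 256))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)           # 3 spaces x 3 channels x 16 bins = 144 values

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

@torch.no_grad()
def fused_features(bgr: np.ndarray) -> np.ndarray:
    rgb = cv2.cvtColor(cv2.resize(bgr, (224, 224)), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255
    deep = vgg(x).mean(dim=(2, 3)).squeeze(0).numpy()      # 512-d pooled VGG16 features
    return np.concatenate([deep, color_histograms(bgr)])   # input to the downstream classifier

# feats = fused_features(cv2.imread("beef_sample.jpg"))    # hypothetical image path
```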
PMID:39137577 | DOI:10.1016/j.foodchem.2024.140795
Geometric deep learning for molecular property predictions with chemical accuracy across chemical space
J Cheminform. 2024 Aug 13;16(1):99. doi: 10.1186/s13321-024-00895-0.
ABSTRACT
Chemical engineers heavily rely on precise knowledge of physicochemical properties to model chemical processes. Despite the growing popularity of deep learning, it is only rarely applied for property prediction due to data scarcity and limited accuracy for compounds in industrially-relevant areas of the chemical space. Herein, we present a geometric deep learning framework for predicting gas- and liquid-phase properties based on novel quantum chemical datasets comprising 124,000 molecules. Our findings reveal that the necessity for quantum-chemical information in deep learning models varies significantly depending on the modeled physicochemical property. Specifically, our top-performing geometric model meets the most stringent criteria for "chemically accurate" thermochemistry predictions. We also show that by carefully selecting the appropriate model featurization and evaluating prediction uncertainties, the reliability of the predictions can be strongly enhanced. These insights represent a crucial step towards establishing deep learning as the standard property prediction workflow in both industry and academia.
SCIENTIFIC CONTRIBUTION: We propose a flexible property prediction tool that can handle two-dimensional and three-dimensional molecular information. A thermochemistry prediction methodology that achieves high-level quantum chemistry accuracy for a broad application range is presented. Trained deep learning models and large novel molecular databases of real-world molecules are provided to offer a directly usable and fast property prediction solution to practitioners.
PMID:39138560 | DOI:10.1186/s13321-024-00895-0
A novel hierarchical network-based approach to unveil the complexity of functional microbial genome
BMC Genomics. 2024 Aug 14;25(1):786. doi: 10.1186/s12864-024-10692-6.
ABSTRACT
Biological networks serve a crucial role in elucidating intricate biological processes. While interspecies environmental interactions have been extensively studied, the exploration of gene interactions within species, particularly among individual microorganisms, is less developed. The increasing amount of microbiome genomic data necessitates a more nuanced analysis of microbial genome structures and functions. In this context, we introduce a complex structure using higher-order network theory, "Solid Motif Structures (SMS)", via a hierarchical biological network analysis of genomes within the same genus, effectively linking microbial genome structure with its function. Leveraging 162 high-quality genomes of Microcystis, a key freshwater cyanobacterium within microbial ecosystems, we established a genome structure network. Employing deep learning techniques, such as adaptive graph encoder, we uncovered 27 critical functional subnetworks and their associated SMSs. Incorporating metagenomic data from seven geographically distinct lakes, we conducted an investigation into Microcystis' functional stability under varying environmental conditions, unveiling unique functional interaction models for each lake. Our work compiles these insights into an extensive resource repository, providing novel perspectives on the functional dynamics within Microcystis. This research offers a hierarchical network analysis framework for understanding interactions between microbial genome structures and functions within the same genus.
PMID:39138557 | DOI:10.1186/s12864-024-10692-6
Accurate, automated classification of radiographic knee osteoarthritis severity using a novel method of deep learning: Plug-in modules
Knee Surg Relat Res. 2024 Aug 13;36(1):24. doi: 10.1186/s43019-024-00228-3.
ABSTRACT
BACKGROUND: Fine-grained classification deals with data with a large degree of similarity, such as cat or bird species, and similarly, knee osteoarthritis severity classification [Kellgren-Lawrence (KL) grading] is one such fine-grained classification task. Recently, a plug-in module (PIM) that can be integrated into convolutional neural-network-based or transformer-based networks has been shown to provide strong discriminative regions for fine-grained classification, with results that outperformed the previous deep learning models. PIM utilizes each pixel of an image as an independent feature and can subsequently better classify images with minor differences. It was hypothesized that, as a fine-grained classification task, knee osteoarthritis severity may be classified well using PIMs. The aim of the study was to develop this automated knee osteoarthritis classification model.
METHODS: A deep learning model that classifies knee osteoarthritis severity of a radiograph was developed utilizing PIMs. A retrospective analysis on prospectively collected data was performed. The model was trained and developed using the Osteoarthritis Initiative dataset and was subsequently tested on an independent dataset, the Multicenter Osteoarthritis Study (test set size: 17,040). The final deep learning model was designed through an ensemble of four different PIMs.
RESULTS: The accuracy of the model was 84%, 43%, 70%, 81%, and 96% for KL grade 0, 1, 2, 3, and 4, respectively, with an overall accuracy of 75.7%.
CONCLUSIONS: The ensemble of PIMs classified knee osteoarthritis severity from plain radiographs with good accuracy. Although further improvements are needed, the model shows potential for clinical use.
PMID:39138550 | DOI:10.1186/s43019-024-00228-3
Application of artificial intelligence in dental crown prosthesis: a scoping review
BMC Oral Health. 2024 Aug 13;24(1):937. doi: 10.1186/s12903-024-04657-0.
ABSTRACT
BACKGROUND: In recent years, artificial intelligence (AI) has made remarkable advancements and achieved significant accomplishments across the entire field of dentistry. Notably, efforts to apply AI in prosthodontics are continually progressing. This scoping review aims to present the applications and performance of AI in dental crown prostheses and related topics.
METHODS: We conducted a literature search of PubMed, Scopus, Web of Science, Google Scholar, and IEEE Xplore databases from January 2010 to January 2024. The included articles addressed the application of AI in various aspects of dental crown treatment, including fabrication, assessment, and prognosis.
RESULTS: The initial electronic literature search yielded 393 records, which were reduced to 315 after eliminating duplicate references. The application of inclusion criteria led to analysis of 12 eligible publications in the qualitative review. The AI-based applications included in this review were related to detection of dental crown finish line, evaluation of AI-based color matching, evaluation of crown preparation, evaluation of dental crown designed by AI, identification of a dental crown in an intraoral photo, and prediction of debonding probability.
CONCLUSIONS: AI has the potential to increase efficiency in processes such as fabricating and evaluating dental crowns, with a high level of accuracy reported in most of the analyzed studies. However, a significant number of studies focused on designing crowns using AI-based software, and these studies had a small number of patients and did not always present their algorithms. Standardized protocols for reporting and evaluating AI studies are needed to increase the evidence and effectiveness.
PMID:39138474 | DOI:10.1186/s12903-024-04657-0
Reproducibility and across-site transferability of an improved deep learning approach for aneurysm detection and segmentation in time-of-flight MR-angiograms
Sci Rep. 2024 Aug 13;14(1):18749. doi: 10.1038/s41598-024-68805-w.
ABSTRACT
This study aimed to (1) replicate a deep-learning-based model for cerebral aneurysm segmentation in TOF-MRAs, (2) improve the approach by testing various fully automatic pre-processing pipelines, and (3) rigorously validate the model's transferability on independent, external test datasets. A convolutional neural network was trained on 235 TOF-MRAs acquired on local scanners from a single vendor to segment intracranial aneurysms. Different pre-processing pipelines, including bias field correction, resampling, cropping, and intensity normalization, were compared regarding their effect on model performance. The models were tested on independent, external same-vendor and other-vendor test datasets, each comprising 70 TOF-MRAs, including patients with and without aneurysms. The best-performing model achieved excellent results on the external same-vendor test dataset, surpassing the results of the previous publication with an improved sensitivity (0.97 vs. ~0.86), a higher Dice similarity coefficient (DSC, 0.60 ± 0.25 vs. 0.53 ± 0.31), and an improved false-positive rate (0.87 ± 1.35 vs. ~2.7 FPs/case). The model also showed excellent performance on the external other-vendor test datasets (DSC 0.65 ± 0.26; sensitivity 0.92; 0.96 ± 2.38 FPs/case). Specificity was 0.38 and 0.53, respectively. Raising the voxel size from 0.5 × 0.5 × 0.5 mm to 1 × 1 × 1 mm reduced the false-positive rate seven-fold. This study successfully replicated the core principles of a previous approach for detecting and segmenting cerebral aneurysms in TOF-MRAs with a robust, fully automatable pre-processing pipeline. The model demonstrated robust transferability on two independent external datasets using TOF-MRAs from the same scanner vendor as the training dataset and from other vendors. These findings are very encouraging regarding the clinical application of such an approach.
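A sketch of a fully automatic pre-processing pipeline of the kind compared here (N4 bias-field correction, isotropic resampling, z-score intensity normalization) using SimpleITK; the parameter choices and the file name in the usage comment are assumptions, not the study's exact pipeline.

```python
import SimpleITK as sitk

def preprocess_tof_mra(path: str, out_spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
    """Bias-field correction, isotropic resampling, and z-score intensity normalization."""
    img = sitk.ReadImage(path, sitk.sitkFloat32)

    # N4 bias-field correction with an Otsu foreground mask
    mask = sitk.OtsuThreshold(img, 0, 1, 200)
    img = sitk.N4BiasFieldCorrectionImageFilter().Execute(img, mask)

    # resample to isotropic voxels (1 mm here; the study also evaluates 0.5 mm)
    old_spacing, old_size = img.GetSpacing(), img.GetSize()
    new_size = [int(round(sz * sp / nsp)) for sz, sp, nsp in zip(old_size, old_spacing, out_spacing)]
    rf = sitk.ResampleImageFilter()
    rf.SetOutputSpacing(out_spacing)
    rf.SetSize(new_size)
    rf.SetOutputOrigin(img.GetOrigin())
    rf.SetOutputDirection(img.GetDirection())
    rf.SetInterpolator(sitk.sitkLinear)
    img = rf.Execute(img)

    # z-score normalization of intensities
    stats = sitk.StatisticsImageFilter()
    stats.Execute(img)
    return (img - stats.GetMean()) / stats.GetSigma()

# volume = preprocess_tof_mra("subject_tof_mra.nii.gz")   # hypothetical file name
```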
PMID:39138338 | DOI:10.1038/s41598-024-68805-w