Deep learning
BioStructNet: Structure-Based Network with Transfer Learning for Predicting Biocatalyst Functions
J Chem Theory Comput. 2024 Dec 20. doi: 10.1021/acs.jctc.4c01391. Online ahead of print.
ABSTRACT
Enzyme-substrate interactions are essential to both biological processes and industrial applications. Advanced machine learning techniques have significantly accelerated biocatalysis research, revolutionizing the prediction of biocatalytic activities and facilitating the discovery of novel biocatalysts. However, the limited availability of data for specific enzyme functions, such as conversion efficiency and stereoselectivity, presents challenges for prediction accuracy. In this study, we developed BioStructNet, a structure-based deep learning network that integrates both protein and ligand structural data to capture the complexity of enzyme-substrate interactions. Benchmarking studies with different algorithms showed the enhanced predictive accuracy of BioStructNet. To further optimize the prediction accuracy for the small data set, we implemented transfer learning in the framework, training a source model on a large data set and fine-tuning it on a small, function-specific data set, using the CalB data set as a case study. The model performance was validated by comparing the attention heat maps generated by the BioStructNet interaction module with the enzyme-substrate interactions revealed by molecular dynamics simulations of enzyme-substrate complexes. BioStructNet should accelerate the discovery of functional enzymes for industrial use, particularly in cases where the training data sets for machine learning are small.
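The source-model / fine-tuning recipe described above can be sketched in miniature. The two-layer network, synthetic regression data, and freeze-the-features choice below are illustrative assumptions, not BioStructNet's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer regression network standing in for a "source model";
# illustrates pretrain-then-fine-tune, not BioStructNet itself.
def init_net(d_in, d_hid):
    return {"W1": rng.normal(0, 0.1, (d_in, d_hid)),
            "W2": rng.normal(0, 0.1, (d_hid, 1))}

def forward(net, X):
    h = np.tanh(X @ net["W1"])
    return h, h @ net["W2"]

def train(net, X, y, lr=0.05, steps=500, freeze_features=False):
    for _ in range(steps):
        h, pred = forward(net, X)
        err = pred - y
        net["W2"] -= lr * h.T @ err / len(X)
        if not freeze_features:              # feature layer frozen when fine-tuning
            gh = (err @ net["W2"].T) * (1 - h ** 2)
            net["W1"] -= lr * X.T @ gh / len(X)
    return net

# 1) Train the source model on a large synthetic data set.
Xs = rng.normal(size=(2000, 8))
ys = Xs[:, :4].sum(axis=1, keepdims=True)
source = train(init_net(8, 16), Xs, ys)

# 2) Fine-tune only the output head on a small, related target data set.
Xt = rng.normal(size=(40, 8))
yt = 1.2 * Xt[:, :4].sum(axis=1, keepdims=True)
tuned = train(source, Xt, yt, freeze_features=True)

def mse(net, X, y):
    return float(np.mean((forward(net, X)[1] - y) ** 2))

# Held-out target-task data: even with only 40 fine-tuning examples, the
# transferred model should beat a trivial predict-the-mean baseline.
Xv = rng.normal(size=(400, 8))
yv = 1.2 * Xv[:, :4].sum(axis=1, keepdims=True)
print(mse(tuned, Xv, yv), float(np.var(yv)))
```

Freezing the pretrained feature layer and updating only the head is one common transfer recipe; full fine-tuning at a lower learning rate is another.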
PMID:39705058 | DOI:10.1021/acs.jctc.4c01391
Deep learning model for low-dose CT late iodine enhancement imaging and extracellular volume quantification
Eur Radiol. 2024 Dec 20. doi: 10.1007/s00330-024-11288-0. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop and validate deep learning (DL)-models that denoise late iodine enhancement (LIE) images and enable accurate extracellular volume (ECV) quantification.
METHODS: This study retrospectively included patients with chest discomfort who underwent CT myocardial perfusion + CT angiography + LIE at two hospitals. Two DL models, a residual dense network (RDN) and a conditional generative adversarial network (cGAN), were developed and validated. 423 patients were randomly divided into training (182 patients), tuning (48 patients), internal validation (92 patients), and external validation (101 patients) groups. LIEsingle (single-stack image), LIEaveraging (averaging of multiple-stack images), LIERDN (single-stack image denoised by RDN) and LIEGAN (single-stack image denoised by cGAN) were generated. We compared the image quality score, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the four LIE sets. The identifiability of denoised images for positive LIE and increased ECV (> 30%) was assessed.
RESULTS: The image quality of LIEGAN (SNR: 13.3 ± 1.9; CNR: 4.5 ± 1.1) and LIERDN (SNR: 20.5 ± 4.7; CNR: 7.5 ± 2.3) images was markedly better than that of LIEsingle (SNR: 4.4 ± 0.7; CNR: 1.6 ± 0.4). At per-segment level, the area under the curve (AUC) of LIERDN images for LIE evaluation was significantly improved compared with those of LIEGAN and LIEsingle images (p = 0.040 and p < 0.001, respectively). Meanwhile, the AUC and accuracy of ECVRDN were significantly higher than those of ECVGAN and ECVsingle at per-segment level (p < 0.001 for all).
CONCLUSIONS: RDN model generated denoised LIE images with markedly higher SNR and CNR than the cGAN-model and original images, which significantly improved the identifiability of visual analysis. Moreover, using denoised single-stack images led to accurate CT-ECV quantification.
KEY POINTS: Question Can the developed models denoise CT-derived late iodine enhancement images and improve the signal-to-noise ratio? Findings The residual dense network model significantly improved image quality for late iodine enhancement and enabled accurate CT-extracellular volume quantification. Clinical relevance The residual dense network model generates denoised late iodine enhancement images with the highest signal-to-noise ratio and enables accurate quantification of extracellular volume.
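For reference, SNR and CNR figures like those reported above are conventionally computed from region-of-interest statistics. The synthetic patches and HU values below are illustrative, not the study's ROI protocol:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 32x32 HU patches standing in for myocardium, blood pool, and a
# noise-only background region (made-up values, not the study's data).
myocardium = 120 + 10 * rng.standard_normal((32, 32))
blood_pool = 200 + 10 * rng.standard_normal((32, 32))
background = 10 * rng.standard_normal((32, 32))

noise_sd = background.std()
snr = myocardium.mean() / noise_sd                            # signal-to-noise ratio
cnr = abs(blood_pool.mean() - myocardium.mean()) / noise_sd   # contrast-to-noise ratio
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```

Denoising lowers `noise_sd` while (ideally) preserving the ROI means, which is why both SNR and CNR rise for the RDN-denoised images.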
PMID:39704803 | DOI:10.1007/s00330-024-11288-0
Correction to: A review of multimodal deep learning methods for genomic-enabled prediction in plant breeding
Genetics. 2024 Dec 20:iyae200. doi: 10.1093/genetics/iyae200. Online ahead of print.
NO ABSTRACT
PMID:39704758 | DOI:10.1093/genetics/iyae200
Research on the effectiveness of multi-view slice correction strategy based on deep learning in high pitch helical CT reconstruction
J Xray Sci Technol. 2024 Dec 19. doi: 10.3233/XST-240128. Online ahead of print.
ABSTRACT
BACKGROUND: Recent studies have explored layered correction strategies, employing a slice-by-slice approach to mitigate the prominent limited-view artifacts present in reconstructed images from high-pitch helical CT scans. However, challenges persist in determining the angles, quantity, and sequencing of slices.
OBJECTIVE: This study aims to explore the optimal slicing method for high pitch helical scanning 3D reconstruction. We investigate the impact of slicing angle, quantity, order, and model on correction effectiveness, aiming to offer valuable insights for the clinical application of deep learning methods.
METHODS: In this study, we constructed and developed a series of data-driven slice correction strategies for 3D high pitch helical CT images using slice theory, and conducted extensive experiments by adjusting the order, increasing the number, and replacing the model.
RESULTS: The experimental results indicate that indiscriminately augmenting the number of correction directions does not significantly enhance the quality of 3D reconstruction. Instead, optimal reconstruction outcomes are attained by aligning the final corrected slice direction with the observation direction.
CONCLUSIONS: The data-driven slice correction strategy can effectively solve the problem of artifacts in high-pitch helical scanning. Increasing the number of slices does not significantly improve reconstruction quality; instead, the best reconstruction quality is achieved by ensuring that the final correction angle is consistent with the observation angle.
PMID:39704749 | DOI:10.3233/XST-240128
A reconstruction method for ptychography based on residual dense network
J Xray Sci Technol. 2024 Dec 18. doi: 10.3233/XST-240114. Online ahead of print.
ABSTRACT
BACKGROUND: Coherent diffraction imaging (CDI) is an important lens-free imaging method. As a variant of CDI, ptychography enables the imaging of objects with arbitrary lateral sizes. However, traditional phase retrieval methods are time-consuming for ptychographic imaging of large-size objects, e.g., integrated circuits (IC). Especially when ptychography is combined with computed tomography (CT) or computed laminography (CL), time consumption increases greatly.
OBJECTIVE: In this work, we aim to propose a new deep learning-based approach to implement a quick and robust reconstruction of ptychography.
METHODS: Inspired by the strong advantages of the residual dense network for computer vision tasks, we propose a dense residual two-branch network (RDenPtycho) based on the ptychography two-branch reconstruction architecture for the fast and robust reconstruction of ptychography. The network relies on the residual dense block to construct mappings from diffraction patterns to amplitudes and phases. In addition, we integrate the physical processes of ptychography into the training of the network to further improve the performance.
RESULTS: The proposed RDenPtycho is evaluated using the publicly available ptychography dataset from the Advanced Photon Source. The results show that the proposed method can faithfully and robustly recover the detailed information of the objects. Ablation experiments demonstrate the effectiveness of the components in the proposed method for performance enhancement.
SIGNIFICANCE: The proposed method enables fast, accurate, and robust reconstruction of ptychography, and is of potential significance for 3D ptychography. The proposed method and experiments can resolve similar problems in other fields.
PMID:39704747 | DOI:10.3233/XST-240114
scGraph2Vec: a deep generative model for gene embedding augmented by graph neural network and single-cell omics data
Gigascience. 2024 Jan 2;13:giae108. doi: 10.1093/gigascience/giae108.
ABSTRACT
BACKGROUND: Exploring the cellular processes of genes from the aspects of biological networks is of great interest to understanding the properties of complex diseases and biological systems. Biological networks, such as protein-protein interaction networks and gene regulatory networks, provide insights into the molecular basis of cellular processes and often form functional clusters in different tissue and disease contexts.
RESULTS: We present scGraph2Vec, a deep learning framework for generating informative gene embeddings. scGraph2Vec extends the variational graph autoencoder framework and integrates single-cell datasets and gene-gene interaction networks. We demonstrate that the gene embeddings are biologically interpretable and enable the identification of gene clusters representing functional or tissue-specific cellular processes. In comparisons with similar tools, scGraph2Vec more clearly distinguished different gene clusters and aggregated more biologically functional genes. scGraph2Vec can be widely applied in diverse biological contexts. We illustrated that the embeddings generated by scGraph2Vec can infer disease-associated genes from genome-wide association study data (e.g., COVID-19 and Alzheimer's disease), identify additional driver genes in lung adenocarcinoma, and reveal regulatory genes responsible for maintaining or transitioning melanoma cell states.
CONCLUSIONS: scGraph2Vec not only reconstructs tissue-specific gene networks but also obtains a latent representation of genes implying their biological functions.
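A variational graph autoencoder's encoder stacks graph-convolution steps in which each node's embedding mixes its own features with its network neighbors'. A single symmetric-normalized propagation step can be sketched as follows; the tiny 4-gene interaction network and random weights are hypothetical, not scGraph2Vec's trained model:

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-gene interaction network.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                                  # one-hot gene features

A_hat = A + np.eye(4)                          # add self-loops
deg = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))   # D^{-1/2} (A + I) D^{-1/2}

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 2))          # learnable weights (random here)
H = np.tanh(A_norm @ X @ W)                    # 2-d embedding per gene
print(H.shape)
```

In the full VGAE, two such stacks produce a mean and log-variance per node, embeddings are sampled from the resulting Gaussians, and a decoder reconstructs the adjacency matrix from embedding inner products.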
PMID:39704704 | DOI:10.1093/gigascience/giae108
Artificial intelligence guided search for van der Waals materials with high optical anisotropy
Mater Horiz. 2024 Dec 20. doi: 10.1039/d4mh01332h. Online ahead of print.
ABSTRACT
The exploration of van der Waals (vdW) materials, renowned for their unique optical properties, is pivotal for advanced photonics. These materials exhibit exceptional optical anisotropy, both in-plane and out-of-plane, making them an ideal platform for novel photonic applications. However, the manual search for vdW materials with giant optical anisotropy is a labor-intensive process unsuitable for the fast screening of materials with unique properties. Here, we leverage geometrical and machine learning (ML) approaches to streamline this search, employing deep learning architectures, including the recently developed Atomistic Line Graph Neural Network. Within the geometrical approach, we clustered vdW materials based on in-plane and out-of-plane birefringence values and correlated optical anisotropy with crystallographic parameters. The more accurate ML model demonstrates high predictive capability, validated through density functional theory and ellipsometry measurements. Experimental verification with 2H-MoTe2 and CdPS3 confirms the theoretical predictions, underscoring the potential of ML in discovering and optimizing vdW materials with unprecedented optical performance.
PMID:39704611 | DOI:10.1039/d4mh01332h
Improved deep learning-based IVIM parameter estimation via the use of more "realistic" simulated brain data
Med Phys. 2024 Dec 20. doi: 10.1002/mp.17583. Online ahead of print.
ABSTRACT
BACKGROUND: Due to the low signal-to-noise ratio (SNR) and the limited number of b-values, precise parameter estimation of intravoxel incoherent motion (IVIM) imaging remains an open issue to date, especially for brain imaging where the relatively small difference between D and D* easily leads to outliers and obvious graininess in estimated results.
PURPOSE: To propose a synthetic data driven supervised learning method (SDD-IVIM) for improving precision and noise robustness in IVIM parameter estimation without relying on real-world data for neural network training.
METHODS: On account of the absence of standard IVIM parametric maps from real-world data, a novel model-based method for generating synthetic human brain IVIM data was introduced. Initially, the parameter values of synthetic IVIM parametric maps were sampled from complex distributions composed of a series of simple, uniform distributions. Subsequently, these parametric maps were modulated with human brain texture to imitate brain tissue structure. Finally, they were used to generate synthetic human brain multi-b-value diffusion-weighted (DW) images based on the IVIM bi-exponential model. With the proposed data synthesis method, an ordinary U-Net with spatial smoothness was employed for IVIM parameter mapping within a supervised learning framework. The performance of SDD-IVIM was evaluated on both a numerical phantom and 20 glioma patients. The estimated IVIM parametric maps were compared to those derived from five state-of-the-art methods.
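The IVIM bi-exponential model used to generate the synthetic DW signals can be written and fitted directly. The b-values and tissue parameters below are typical brain magnitudes chosen for illustration, not the paper's settings:

```python
import numpy as np
from scipy.optimize import curve_fit

# IVIM bi-exponential signal model: S(b) = S0 * [f*exp(-b*D*) + (1-f)*exp(-b*D)],
# with perfusion fraction f, pseudo-diffusion D*, and tissue diffusion D.
def ivim(b, s0, f, d_star, d):
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b_values = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800, 1000.0])  # s/mm^2
signal = ivim(b_values, 1.0, 0.10, 10e-3, 0.7e-3)   # noiseless synthetic voxel

# Bounded least-squares fit; the constraint D* > D keeps the two terms identifiable.
p0 = [1.0, 0.15, 5e-3, 1e-3]
bounds = ([0, 0, 1e-3, 1e-4], [2, 0.5, 100e-3, 3e-3])
(s0, f, d_star, d), _ = curve_fit(ivim, b_values, signal, p0=p0, bounds=bounds)
print(f"f={f:.3f}  D*={d_star * 1e3:.2f}e-3  D={d * 1e3:.2f}e-3 mm^2/s")
```

With noise added at realistic SNR, such voxel-wise fits become unstable (the outliers and graininess noted above), which is the gap the supervised network is meant to close.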
RESULTS: In numerical phantom experiments, SDD-IVIM method produces IVIM parametric maps with lower mean absolute error, lower mean bias, and higher structural similarity compared to the other five methods, especially when the SNR of DW images is low. In glioma patient experiments, SDD-IVIM method offers lower coefficient of variation and more reasonable contrast-to-noise ratio between tumor and contralateral normal appearing white matter than the other five methods.
CONCLUSION: Our method achieves superior parametric map quality, parameter estimation precision, and lesion characterization in IVIM parameter estimation, with strong robustness to noise.
PMID:39704604 | DOI:10.1002/mp.17583
Assessment of ComBat Harmonization Performance on Structural Magnetic Resonance Imaging Measurements
Hum Brain Mapp. 2024 Dec 15;45(18):e70085. doi: 10.1002/hbm.70085.
ABSTRACT
Data aggregation across multiple research centers is gaining importance in the context of MRI research, driving diverse high-dimensional datasets to form large-scale heterogeneous samples, increasing statistical power and the relevance of machine learning and deep learning algorithms. Site-related effects have been demonstrated to introduce bias in MRI features and confound subsequent analyses. Although the Combating Batch (ComBat) technique has recently been reported to successfully harmonize multi-scale neuroimaging features, its performance assessments are still limited and largely based on qualitative visualizations and statistical analyses. In this study, we stand out by using a robust cross-validation approach to assess ComBat performance applied to volume- and surface-based measures acquired across three sites. A machine learning approach based on a Multi-Class Gaussian Process Classifier was applied to predict imaging site from raw and harmonized brain features, providing quantitative insights into ComBat effectiveness and verifying the association between biological covariates and harmonized brain features. Our findings showed differences in ComBat performance across measures of regional brain morphology, demonstrating tissue-specific site-effect modeling. ComBat adjustment of site effects also varied across the regional level of each specific volume-based and surface-based measure. ComBat effectively eliminates unwanted site-related variability in the data while maintaining or even enhancing the association of the data with biological factors. Of note, ComBat demonstrated flexibility and robustness when applied to unseen independent gray matter volume data from the same sites.
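The assessment logic above — try to predict the acquisition site from features before and after harmonization — can be sketched on toy data. The harmonization below is a simplified per-site location/scale adjustment in the spirit of ComBat, without ComBat's empirical-Bayes shrinkage or covariate preservation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multi-site data: 3 sites, additive per-feature site offsets on top of a
# shared signal (illustrative, not real neuroimaging features).
n_per_site, n_feat = 60, 20
sites = np.repeat([0, 1, 2], n_per_site)
offsets = rng.normal(0, 2.0, (3, n_feat))
X = rng.standard_normal((180, n_feat)) + offsets[sites]

def harmonize(X, sites):
    # Remove each site's location and scale, restore the pooled ones.
    Xh = np.empty_like(X)
    grand_mean, grand_sd = X.mean(axis=0), X.std(axis=0)
    for s in np.unique(sites):
        m = sites == s
        Xh[m] = (X[m] - X[m].mean(axis=0)) / X[m].std(axis=0) * grand_sd + grand_mean
    return Xh

def site_accuracy(X, sites):
    # Leave-one-out nearest-centroid site classifier: accuracy near chance
    # (1/3) after harmonization indicates the site effect has been removed.
    hits = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        cents = [X[keep & (sites == s)].mean(axis=0) for s in (0, 1, 2)]
        hits += int(np.argmin([np.linalg.norm(X[i] - c) for c in cents]) == sites[i])
    return hits / len(X)

acc_raw = site_accuracy(X, sites)
acc_harmonized = site_accuracy(harmonize(X, sites), sites)
print(acc_raw, acc_harmonized)
```

The study applies the same idea with a Multi-Class Gaussian Process Classifier and real ComBat, which additionally protects biological covariates while removing site effects.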
PMID:39704541 | DOI:10.1002/hbm.70085
Classifying Alzheimer's Disease Using a Finite Basis Physics Neural Network
Microsc Res Tech. 2024 Dec 20. doi: 10.1002/jemt.24727. Online ahead of print.
ABSTRACT
Amyloid plaques, neurofibrillary tangles, synaptic dysfunction, and neuronal death gradually accumulate throughout Alzheimer's disease (AD), resulting in cognitive decline and functional disability. The challenges of dataset quality, interpretability, ethical integration, population variety, and image standardization must be addressed when using deep learning for functional magnetic resonance imaging (MRI) classification of AD in order to guarantee a trustworthy and practical clinical application. In this manuscript, classifying AD using a finite basis physics neural network (CAD-FBPINN) is proposed. Initially, images are collected from the AD Neuroimaging Initiative (ADNI) dataset and fed to a preprocessing stage, where the reverse lognormal Kalman filter (RLKF) is used to enhance the input images. The preprocessed images are then passed to feature extraction, performed by the Newton-time-extracting wavelet transform (NTEWT), which extracts statistical features such as the mean, kurtosis, and skewness. Finally, the extracted features are given to the FBPINN to classify AD into early mild cognitive impairment (EMCI), AD, mild cognitive impairment (MCI), late mild cognitive impairment (LMCI), normal control (NC), and subjective memory complaints (SMC). In general, the FBPINN does not include an optimization strategy for determining the optimal factors needed to ensure correct AD classification; hence, the sea-horse optimization algorithm (SHOA) is used to optimize the FBPINN so that it classifies AD accurately. The proposed technique is implemented in Python, and the efficacy of the CAD-FBPINN technique is assessed with numerous performance measures, including accuracy, precision, recall, F1-score, specificity, and negative predictive value (NPV).
The proposed CAD-FBPINN method attains 30.53%, 23.34%, and 32.64% higher accuracy; 20.53%, 25.34%, and 29.64% higher precision; and 20.53%, 25.34%, and 29.64% higher NPV than existing methods for classifying AD stages through brain modifications using FBPINNs optimized with the sea-horse optimizer. The effectiveness of the CAD-FBPINN technique is compared with methods currently in use, such as AD diagnosis and classification using a convolutional neural network algorithm (DC-AD-AlexNet) and predicting diagnosis 4 years before Alzheimer's disease incidence (PDP-ADI-GCNN).
PMID:39704389 | DOI:10.1002/jemt.24727
Protein stability models fail to capture epistatic interactions of double point mutations
Protein Sci. 2025 Jan;34(1):e70003. doi: 10.1002/pro.70003.
ABSTRACT
There is strong interest in accurate methods for predicting changes in protein stability resulting from amino acid mutations to the protein sequence. Recombinant proteins must often be stabilized to be used as therapeutics or reagents, and destabilizing mutations are implicated in a variety of diseases. Due to increased data availability and improved modeling techniques, recent studies have shown advancements in predicting changes in protein stability when a single-point mutation is made. Less focus has been directed toward predicting changes in protein stability when there are two or more mutations. Here, we analyze the largest available dataset of double point mutation stability and benchmark several widely used protein stability models on this and other datasets. We find that additive models of protein stability perform surprisingly well on this task, achieving similar performance to comparable non-additive predictors according to most metrics. Accordingly, we find that neither artificial intelligence-based nor physics-based protein stability models consistently capture epistatic interactions between single mutations. We observe one notable deviation from this trend, which is that epistasis-aware models provide marginally better predictions than additive models on stabilizing double point mutations. We develop an extension of the ThermoMPNN framework for double mutant modeling, as well as a novel data augmentation scheme, which mitigates some of the limitations in currently available datasets. Collectively, our findings indicate that current protein stability models fail to capture the nuanced epistatic interactions between concurrent mutations due to several factors, including training dataset limitations and insufficient model sensitivity.
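The additive baseline benchmarked above, and the epistatic residual it leaves behind, are simple to state: predict the double-mutant ΔΔG as the sum of the single-mutant ΔΔGs. The mutation names and values below are hypothetical, not from the paper's dataset:

```python
# Additive baseline for double mutants: ΔΔG(A+B) ≈ ΔΔG(A) + ΔΔG(B);
# the residual (observed - additive) is the epistatic term that stability
# models are asked to capture. Values in kcal/mol, hypothetical.
single_ddg = {"A45G": 1.2, "L90F": -0.4, "T131I": 0.8}

def additive_prediction(mut_a, mut_b):
    return single_ddg[mut_a] + single_ddg[mut_b]

measured_double = {("A45G", "L90F"): 1.3, ("A45G", "T131I"): 1.5}

for (a, b), observed in measured_double.items():
    predicted = additive_prediction(a, b)
    epistasis = observed - predicted
    print(f"{a}/{b}: additive={predicted:+.1f}, observed={observed:+.1f}, "
          f"epistasis={epistasis:+.1f}")
```

The paper's finding is that learned predictors rarely beat this sum by much, i.e., the epistasis term is largely unmodeled except for some stabilizing double mutants.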
PMID:39704075 | DOI:10.1002/pro.70003
Deep Learning-Enabled STEM Imaging for Precise Single-Molecule Identification in Zeolite Structures
Adv Sci (Weinh). 2024 Dec 20:e2408629. doi: 10.1002/advs.202408629. Online ahead of print.
ABSTRACT
Observing chemical reactions in complex structures such as zeolites involves a major challenge in precisely capturing single-molecule behavior at ultra-high spatial resolution. To address this, a sophisticated deep learning framework has been developed, tailored for integrated Differential Phase Contrast Scanning Transmission Electron Microscopy (iDPC-STEM) imaging under low-dose conditions. The framework utilizes a denoising super-resolution model (Denoising Inference Variational Autoencoder Super-Resolution (DIVAESR)) to effectively mitigate shot noise and thereby obtain substantially clearer atomic-resolved iDPC-STEM images. It supports advanced single-molecule detection and analysis, such as conformation matching and elemental clustering, by incorporating object detection and Density Functional Theory (DFT) configurational matching for precise molecular analysis. The model's performance is demonstrated by a significant improvement in standard image quality evaluation metrics, including Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Tests conducted using synthetic datasets show its robustness and extended applicability to real iDPC-STEM images, highlighting its potential in elucidating dynamic behaviors of single molecules in real space. This study lays critical groundwork for the advancement of deep learning applications within electron microscopy, particularly in unraveling chemical dynamics through precise material characterization and analysis.
PMID:39703985 | DOI:10.1002/advs.202408629
From Images to Loci: Applying 3D Deep Learning to Enable Multivariate and Multitemporal Digital Phenotyping and Mapping the Genetics Underlying Nitrogen Use Efficiency in Wheat
Plant Phenomics. 2024 Dec 19;6:0270. doi: 10.34133/plantphenomics.0270. eCollection 2024.
ABSTRACT
The selection and promotion of high-yielding and nitrogen-efficient wheat varieties can reduce nitrogen fertilizer application while ensuring wheat yield and quality, contributing to the sustainable development of agriculture. The mining and localization of nitrogen use efficiency (NUE) genes is therefore particularly important, but localizing NUE genes requires the support of a large amount of phenotypic data. In view of this, we propose the use of low-altitude aerial photography to acquire field images at a large scale and generate 3-dimensional (3D) point clouds and multispectral images of wheat plots; we present a wheat 3D plot segmentation dataset, quantify plot canopy height in combination with PointNet++, and generate 4 nitrogen-utilization-related vegetation indices via index calculations. Six height-related and 24 vegetation-index-related dynamic digital phenotypes were extracted from the digital phenotypes collected at different time points and fitted to generate dynamic curves. We applied the height-derived dynamic numerical phenotypes to genome-wide association studies of 160 wheat cultivars (660,000 single-nucleotide polymorphisms) and found that we were able to locate reliable loci associated with height and NUE, some of which were consistent with published studies. Finally, the dynamic phenotypes derived from plant indices can also be applied to genome-wide association studies and ultimately locate NUE- and growth-related loci. In conclusion, we believe that our work demonstrates valuable advances in 3D digital dynamic phenotyping for locating NUE genes in wheat and provides breeders with accurate phenotypic data for the selection and breeding of nitrogen-efficient wheat varieties.
PMID:39703939 | PMC:PMC11658601 | DOI:10.34133/plantphenomics.0270
Deep Learning Methods Using Imagery from a Smartphone for Recognizing Sorghum Panicles and Counting Grains at a Plant Level
Plant Phenomics. 2024 Aug 28;6:0234. doi: 10.34133/plantphenomics.0234. eCollection 2024.
ABSTRACT
High-throughput phenotyping is the bottleneck for advancing field trait characterization and yield improvement in major field crops. Specifically for sorghum (Sorghum bicolor L.), rapid plant-level yield estimation is highly dependent on characterizing the number of grains within a panicle. In this context, the integration of computer vision and artificial intelligence algorithms with traditional field phenotyping can be a critical solution to reduce labor costs and time. Therefore, this study aims to improve sorghum panicle detection and grain number estimation from smartphone-captured images under field conditions. A preharvest benchmark dataset was collected at field scale (2023 season, Kansas, USA), with 648 images of sorghum panicles retrieved via a smartphone device and grain numbers counted. Each sorghum panicle image was manually labeled, and the images were augmented. Two models were trained using the Detectron2 and Yolov8 frameworks for detection and segmentation, with an average precision of 75% and 89%, respectively. For grain number, 3 models were trained: MCNN (multiscale convolutional neural network), TCNN-Seed (two-column CNN-Seed), and Sorghum-Net (developed in this study). The Sorghum-Net model showed a mean absolute percentage error of 17%, surpassing the other models. Lastly, a simple equation was presented to relate the count from the model (using images from only one side of the panicle) to the field-derived observed number of grains per sorghum panicle. The resulting framework obtained an estimation of grain number with a 17% error. The proposed framework lays the foundation for the development of a more robust application to estimate sorghum yield from smartphone images at the plant level.
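The mean absolute percentage error reported for Sorghum-Net is computed as follows; the grain counts below are made-up examples, not the study's data:

```python
# Mean absolute percentage error (MAPE): average of |observed - predicted| / observed,
# expressed as a percentage. A 17% MAPE means counts are off by ~17% on average.
def mape(observed, predicted):
    return 100.0 * sum(abs(o - p) / o for o, p in zip(observed, predicted)) / len(observed)

observed_counts = [1850, 2100, 1600, 2400]    # hypothetical grains per panicle
predicted_counts = [1700, 2300, 1500, 2500]   # hypothetical model outputs
print(f"MAPE = {mape(observed_counts, predicted_counts):.1f}%")
```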
PMID:39703938 | PMC:PMC11658820 | DOI:10.34133/plantphenomics.0234
Advancing cybersecurity and privacy with artificial intelligence: current trends and future research directions
Front Big Data. 2024 Dec 5;7:1497535. doi: 10.3389/fdata.2024.1497535. eCollection 2024.
ABSTRACT
INTRODUCTION: The rapid escalation of cyber threats necessitates innovative strategies to enhance cybersecurity and privacy measures. Artificial Intelligence (AI) has emerged as a promising tool poised to enhance the effectiveness of cybersecurity strategies by offering advanced capabilities for intrusion detection, malware classification, and privacy preservation. However, this work addresses the significant lack of a comprehensive synthesis of AI's use in cybersecurity and privacy across the vast literature, aiming to identify existing gaps and guide further progress.
METHODS: This study employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework for a comprehensive literature review, analyzing over 9,350 publications from 2004 to 2023. Utilizing BERTopic modeling, 14 key themes in AI-driven cybersecurity were identified. Topics were clustered and validated through a combination of algorithmic and expert-driven evaluations, focusing on semantic relationships and coherence scores.
RESULTS: AI applications in cybersecurity are concentrated around intrusion detection, malware classification, federated learning in privacy, IoT security, UAV systems and DDoS mitigation. Emerging fields such as adversarial machine learning, blockchain and deep learning are gaining traction. Analysis reveals that AI's adaptability and scalability are critical for addressing evolving threats. Global trends indicate significant contributions from the US, India, UK, and China, highlighting geographical diversity in research priorities.
DISCUSSION: While AI enhances cybersecurity efficacy, challenges such as computational resource demands, adversarial vulnerabilities, and ethical concerns persist. Further research on trustworthy AI, standardization of AI-driven methods, and legislation for robust privacy protection, among other areas, is emphasized. The study also highlights key current and future areas of focus, including quantum machine learning, explainable AI, humanized AI integration, and deepfakes.
PMID:39703783 | PMC:PMC11656524 | DOI:10.3389/fdata.2024.1497535
Are we ready to integrate advanced artificial intelligence models in clinical laboratory?
Biochem Med (Zagreb). 2025 Feb 15;35(1):010501. doi: 10.11613/BM.2025.010501. Epub 2024 Dec 15.
ABSTRACT
The application of advanced artificial intelligence (AI) models and algorithms in clinical laboratories is an inevitable new stage in the development of laboratory medicine, since in the future, diagnostic and prognostic panels specific to certain diseases will be created from large amounts of laboratory data. Thanks to machine learning (ML), it is possible to analyze a large amount of structured numerical data as well as unstructured digitized images in the fields of hematology, cytology, and histopathology. Numerous studies have tested ML models for screening various diseases, detecting damage to organ systems, diagnosing malignant diseases, and longitudinally monitoring various biomarkers to enable prediction of each patient's treatment outcome. The main advantages of advanced AI in the clinical laboratory are: faster diagnosis using diagnostic and prognostic algorithms, individualization of treatment plans, personalized medicine, better patient treatment outcomes, and easier and more precise longitudinal monitoring of biomarkers. Disadvantages relate to the lack of standardization, the questionable quality of the entered data and their interpretability, potential over-reliance on technology, new financial investments, privacy concerns, and ethical and legal aspects. Further integration of advanced AI will take place gradually, on the basis of the knowledge of specialists in laboratory and clinical medicine, experts in information technology and biostatistics, and evidence-based laboratory medicine. Clinical laboratories will be ready for the full and successful integration of advanced AI once a balance has been established between its potential and the resolution of existing obstacles.
PMID:39703759 | PMC:PMC11654238 | DOI:10.11613/BM.2025.010501
A survey of detection of Parkinson's disease using artificial intelligence models with multiple modalities and various data preprocessing techniques
J Educ Health Promot. 2024 Oct 28;13:388. doi: 10.4103/jehp.jehp_1777_23. eCollection 2024.
ABSTRACT
Parkinson's disease (PD) is a neurodegenerative brain disorder that causes symptoms such as tremors, sleeplessness, behavioral problems, sensory abnormalities, and impaired mobility, according to the World Health Organization (WHO). Artificial intelligence, machine learning (ML), and deep learning (DL) have been used in recent studies (2015-2023) to improve PD diagnosis by categorizing patients and healthy controls based on similar clinical presentations. This study investigates the datasets, modalities, and data preprocessing techniques used in the collected literature. Open issues are also addressed, with suggestions for future PD research involving subgrouping and connection analysis using magnetic resonance imaging (MRI), dopamine transporter scan (DaTscan), and single-photon emission computed tomography (SPECT) data. We used models such as a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) for detecting PD at an early stage, applied them to 3D brain images from the Parkinson's Progression Markers Initiative (PPMI) dataset, and achieved accuracies of 86.67% and 94.02% for the two models, respectively.
PMID:39703622 | PMC:PMC11657906 | DOI:10.4103/jehp.jehp_1777_23
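The abstract above does not specify the network architectures, but the core operation a CNN applies to volumetric data such as 3D brain MRI is a 3D convolution. The following is an illustrative sketch of that operation in plain NumPy (not the authors' model; the volume, kernel, and sizes are hypothetical):

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (strictly cross-correlation, as in CNNs)."""
    d, h, w = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output voxel is a weighted sum over a local 3D patch.
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

# A 3x3x3 averaging kernel over a constant 5x5x5 volume leaves values unchanged
# and shrinks each spatial dimension by kernel_size - 1.
vol = np.ones((5, 5, 5))
kern = np.full((3, 3, 3), 1 / 27)
feat = conv3d(vol, kern)
print(feat.shape)  # (3, 3, 3)
```

In a real classifier, many such learned kernels are stacked with nonlinearities and pooling before a final layer distinguishes PD patients from healthy controls.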
Artificial Intelligence in Uveitis: Innovations in Diagnosis and Therapeutic Strategies
Clin Ophthalmol. 2024 Dec 14;18:3753-3766. doi: 10.2147/OPTH.S495307. eCollection 2024.
ABSTRACT
In the dynamic field of ophthalmology, artificial intelligence (AI) is emerging as a transformative tool in managing complex conditions like uveitis. Characterized by diverse inflammatory responses, uveitis presents significant diagnostic and therapeutic challenges. This systematic review explores the role of AI in advancing diagnostic precision, optimizing therapeutic approaches, and improving patient outcomes in uveitis care. A comprehensive search of PubMed, Scopus, Google Scholar, Web of Science, and Embase identified over 10,000 articles using primary and secondary keywords related to AI and uveitis. Rigorous screening based on predefined criteria reduced the pool to 52 high-quality studies, categorized into six themes: diagnostic support algorithms, screening algorithms, standardization of Uveitis Nomenclature (SUN), AI applications in management, systemic implications of AI, and limitations with future directions. AI technologies, including machine learning (ML) and deep learning (DL), demonstrated proficiency in anterior chamber inflammation detection, vitreous haze grading, and screening for conditions like ocular toxoplasmosis. Despite these advancements, challenges such as dataset quality, algorithmic transparency, and ethical concerns persist. Future research should focus on developing robust, multimodal AI systems and fostering collaboration between academia and industry to ensure equitable, ethical, and effective AI applications. The integration of AI heralds a new era in uveitis management, emphasizing precision medicine and enhanced care delivery.
PMID:39703602 | PMC:PMC11656483 | DOI:10.2147/OPTH.S495307
Graph neural networks and transfer entropy enhance forecasting of mesozooplankton community dynamics
Environ Sci Ecotechnol. 2024 Nov 26;23:100514. doi: 10.1016/j.ese.2024.100514. eCollection 2025 Jan.
ABSTRACT
Mesozooplankton are critical components of marine ecosystems, acting as key intermediaries between primary producers and higher trophic levels by grazing on phytoplankton and influencing fish populations. They play pivotal roles in the pelagic food web and export production, affecting the biogeochemical cycling of carbon and nutrients. Therefore, accurately modeling and visualizing mesozooplankton community dynamics is essential for understanding marine ecosystem patterns and informing effective management strategies. However, modeling these dynamics remains challenging due to the complex interplay among physical, chemical, and biological factors, and the detailed parameterization and feedback mechanisms are not fully understood in theory-driven models. Graph neural network (GNN) models offer a promising approach to forecast multivariate features and define correlations among input variables. The high interpretive power of GNNs provides deep insights into the structural relationships among variables, serving as a connection matrix in deep learning algorithms. However, there is insufficient understanding of how interactions between input variables affect model outputs during training. Here we investigate how the graph structure of ecosystem dynamics used to train GNN models affects their forecasting accuracy for mesozooplankton species. We find that forecasting accuracy is closely related to interactions within ecosystem dynamics. Notably, increasing the number of nodes does not always enhance model performance; closely connected species tend to produce similar forecasting outputs in terms of trend and peak timing. Therefore, we demonstrate that incorporating the graph structure of ecosystem dynamics can improve the accuracy of mesozooplankton modeling by providing influential information about species of interest. 
These findings will provide insights into the influential factors affecting mesozooplankton species and emphasize the importance of constructing appropriate graphs for forecasting these species.
PMID:39703568 | PMC:PMC11655696 | DOI:10.1016/j.ese.2024.100514
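The abstract above describes the ecosystem graph serving as a "connection matrix" that constrains which variables can influence each node's forecast. A minimal NumPy sketch of one such masked message-passing step (the graph, weights, and four-variable system are hypothetical, not the study's data):

```python
import numpy as np

# Toy ecosystem graph: 4 variables (e.g. 3 plankton species + 1 environmental
# driver). Directed edges could be inferred from transfer entropy;
# adj[i, j] = 1 means variable j is allowed to influence variable i.
adj = np.array([
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

rng = np.random.default_rng(0)
x = rng.random(4)              # current abundances / driver value
W = rng.random((4, 4)) * 0.1   # weights; learned in a real GNN

# One message-passing step: element-wise masking by adj zeroes out weights
# for non-neighbours, so each node aggregates only from its graph neighbours.
h = np.tanh((adj * W) @ x)
print(h.shape)  # (4,)
```

This makes the abstract's observation concrete: adding a node (a new column of ones in `adj`) only helps if its edge carries genuinely informative signal, and nodes sharing many neighbours receive similar aggregated inputs, which is consistent with closely connected species producing similar forecast trends.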
Evaluating enrichment use in group-housed rhesus macaques (Macaca mulatta): A machine learning approach
Anim Welf. 2024 Dec 9;33:e59. doi: 10.1017/awf.2024.65. eCollection 2024.
ABSTRACT
Environmental enrichment programmes are widely used to improve the welfare of captive and laboratory animals, especially non-human primates. Monitoring enrichment use over time is crucial, as animals may habituate and reduce their interaction with it. In this study we aimed to monitor interaction with enrichment items in groups of rhesus macaques (Macaca mulatta), each consisting of an average of ten individuals, living in a breeding colony. To streamline the time-intensive task of assessing enrichment programmes, we automated the evaluation process using machine learning technologies. We built two computer vision-based pipelines to evaluate the monkeys' interactions with different enrichment items: a white drum containing raisins and a non-food-based puzzle. The first pipeline analyses use of the drum in nine groups, both when it contains food and when it is empty. The second pipeline counts the number of monkeys interacting with the puzzle across twelve groups. The data derived from the two pipelines reveal that the macaques consistently express interest in the food-based white drum enrichment, even several months after its introduction. The puzzle enrichment was monitored for one month, showing a gradual decline in interaction over time. These pipelines are valuable for assessing enrichment by minimising the time spent on animal observation and data analysis; this study demonstrates that automated methods can consistently monitor macaque engagement with enrichments, systematically tracking habituation responses and long-term effectiveness. Such advancements have significant implications for enhancing animal welfare, enabling the discontinuation of ineffective enrichments and the adaptation of enrichment plans to meet the animals' needs.
PMID:39703214 | PMC:PMC11655280 | DOI:10.1017/awf.2024.65
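The abstract above does not detail how "interacting" is scored, but a common post-detection step in such pipelines is to count animals whose detected bounding box lies close to the enrichment item. A simple distance-threshold sketch (the boxes, coordinates, and threshold are hypothetical illustrations, not the study's method):

```python
import math

def box_center(box):
    """Centre of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def count_interacting(monkey_boxes, item_box, max_dist=50.0):
    """Count monkeys whose box centre lies within max_dist pixels of the item."""
    ix, iy = box_center(item_box)
    n = 0
    for box in monkey_boxes:
        mx, my = box_center(box)
        if math.hypot(mx - ix, my - iy) <= max_dist:
            n += 1
    return n

item = (100, 100, 140, 140)          # detected puzzle, centre (120, 120)
monkeys = [(110, 110, 150, 150),     # centre (130, 130) -> within threshold
           (300, 300, 340, 340)]     # centre (320, 320) -> too far away
print(count_interacting(monkeys, item))  # 1
```

Aggregating such per-frame counts over days or weeks yields the interaction time series from which habituation trends, like the puzzle's gradual decline in use, can be read off.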