Deep learning

Development of a Novel Microphysiological System for Peripheral Neurotoxicity Prediction Using Human iPSC-Derived Neurons with Morphological Deep Learning

Tue, 2024-11-26 06:00

Toxics. 2024 Nov 11;12(11):809. doi: 10.3390/toxics12110809.

ABSTRACT

A microphysiological system (MPS) is an in vitro culture technology that reproduces the physiological microenvironment and functionality of human tissues and is expected to be applied to drug screening. In this study, we developed an MPS for the structured culture of human iPSC-derived sensory neurons and then predicted drug-induced neurotoxicity by morphological deep learning. After the administration of representative anti-cancer drugs to the human iPSC-derived sensory neurons, the toxic effects on soma and axons were evaluated by an AI model using neurite images. Significant toxicity was detected for the positive drugs and could be classified by differential effects on soma or axons, suggesting that the current method provides an effective evaluation of chemotherapy-induced peripheral neuropathy. The changes in neurofilament light chain expression in the MPS device also agreed with clinical reports. Therefore, the present MPS combined with morphological deep learning is a useful platform for in vitro peripheral neurotoxicity assessment.

PMID:39590989 | DOI:10.3390/toxics12110809

Categories: Literature Watch

Evolving and Novel Applications of Artificial Intelligence in Abdominal Imaging

Tue, 2024-11-26 06:00

Tomography. 2024 Nov 18;10(11):1814-1831. doi: 10.3390/tomography10110133.

ABSTRACT

Advancements in artificial intelligence (AI) have significantly transformed the field of abdominal radiology, improving diagnostic and disease-management capabilities. This narrative review evaluates the current standing of AI in abdominal imaging, with a focus on recent literature contributions. This work explores the diagnosis and characterization of hepatobiliary, pancreatic, gastric, colonic, and other pathologies. In addition, AI has been shown to help differentiate renal, adrenal, and splenic disorders. Furthermore, workflow optimization strategies and quantitative imaging techniques used for the measurement and characterization of tissue properties, including radiomics and deep learning, are highlighted. An assessment of how these advancements enable more precise diagnosis, tumor description, and body composition evaluation is presented, which ultimately advances the clinical effectiveness and productivity of radiology. Despite these advancements, technical, ethical, and legal challenges persist; these challenges, as well as opportunities for future development, are highlighted.

PMID:39590942 | DOI:10.3390/tomography10110133

Categories: Literature Watch

Video WeAther RecoGnition (VARG): An Intensity-Labeled Video Weather Recognition Dataset

Tue, 2024-11-26 06:00

J Imaging. 2024 Nov 5;10(11):281. doi: 10.3390/jimaging10110281.

ABSTRACT

Adverse weather (rain, snow, and fog) can negatively impact computer vision tasks by introducing noise in sensor data; therefore, it is essential to recognize weather conditions when building safe and robust autonomous systems in the agricultural and autonomous driving/drone sectors. The performance degradation in computer vision tasks due to adverse weather depends on the type of weather and its intensity, which influences the amount of noise in the sensor data. However, existing weather recognition datasets often lack intensity labels, limiting their effectiveness. To address this limitation, we present VARG, a novel video-based weather recognition dataset with weather intensity labels. The dataset comprises a diverse set of short video sequences collected from various social media platforms and videos recorded by the authors, processed into usable clips, and categorized into three major weather categories (rain, fog, and snow), each with three intensity classes: absent/no, moderate, and high. The dataset contains 6742 annotated clips from 1079 videos, with 5159 clips in the training set and 1583 clips in the test set. Two sets of annotations are provided for training: the first to train models as a multi-label weather intensity classifier, and the second to train models as a multi-class classifier for the three weather scenarios. This paper describes the dataset characteristics and presents an evaluation study using several deep learning-based video recognition approaches for weather intensity prediction.
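
The abstract does not give the exact label encoding, but a common way to realize the multi-label weather intensity formulation it describes is to assign each clip one intensity class (absent/no, moderate, high) per weather category (rain, fog, snow). Below is a minimal sketch of such an encoding with hypothetical clip annotations; the field names and values are illustrative assumptions, not the published VARG schema.

```python
# Illustrative encoding of per-category weather intensity labels for video clips.
# The annotation fields and values below are hypothetical, not the official VARG schema.
from dataclasses import dataclass

CATEGORIES = ["rain", "fog", "snow"]
INTENSITIES = {"absent": 0, "moderate": 1, "high": 2}

@dataclass
class ClipAnnotation:
    clip_id: str
    intensity: dict  # weather category -> intensity name

    def multilabel_vector(self) -> list:
        """One intensity index per category: target for a multi-label intensity classifier."""
        return [INTENSITIES[self.intensity[c]] for c in CATEGORIES]

    def multiclass_label(self) -> str:
        """Single dominant weather class: target for a 3-way multi-class classifier."""
        present = {c: INTENSITIES[self.intensity[c]] for c in CATEGORIES if self.intensity[c] != "absent"}
        return max(present, key=present.get) if present else "clear"

clip = ClipAnnotation("clip_0001", {"rain": "moderate", "fog": "absent", "snow": "absent"})
print(clip.multilabel_vector())   # [1, 0, 0]
print(clip.multiclass_label())    # "rain"
```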

PMID:39590745 | DOI:10.3390/jimaging10110281

Categories: Literature Watch

A Real-Time End-to-End Framework with a Stacked Model Using Ultrasound Video for Cardiac Septal Defect Decision-Making

Tue, 2024-11-26 06:00

J Imaging. 2024 Nov 3;10(11):280. doi: 10.3390/jimaging10110280.

ABSTRACT

Echocardiography is the gold standard for the comprehensive diagnosis of cardiac septal defects (CSDs). Currently, echocardiography diagnosis is primarily based on expert observation, which is laborious and time-consuming. With digitization, deep learning (DL) can be used to improve the efficiency of diagnosis. This study presents a real-time end-to-end framework tailored for pediatric ultrasound video analysis for CSD decision-making. The framework employs an advanced real-time architecture based on You Only Look Once (Yolo) techniques for CSD decision-making with high accuracy. Leveraging the state-of-the-art Yolov8l (large) architecture, the proposed model achieves robust real-time performance. The experiments yielded a mean average precision (mAP) exceeding 89%, indicating the framework's effectiveness in accurately diagnosing CSDs from ultrasound (US) videos. The Yolov8l model performed precisely in real-time testing on pediatric patients from Mohammad Hoesin General Hospital in Palembang, Indonesia. On 222 US videos, the proposed model achieved 95.86% accuracy, 96.82% sensitivity, and 98.74% specificity. During real-time testing in the hospital, the model achieved 97.17% accuracy, 95.80% sensitivity, and 98.15% specificity; only 3 of the 53 US videos in the real-time process were diagnosed incorrectly. This comprehensive approach holds promise for enhancing clinical decision-making and improving patient outcomes in pediatric cardiology.
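
The paper frames CSD decision-making as real-time detection on ultrasound video with a YOLOv8l model. A minimal sketch of frame-by-frame inference with the Ultralytics API follows; the weight file name, confidence threshold, video path, class names, and the majority-vote decision rule are assumptions, not the authors' clinical pipeline.

```python
# Sketch: run a YOLOv8 detector frame by frame over an ultrasound video clip.
# "csd_yolov8l.pt", the class names, and the decision rule are illustrative assumptions.
from collections import Counter
from ultralytics import YOLO

model = YOLO("csd_yolov8l.pt")          # hypothetical fine-tuned YOLOv8l weights
votes = Counter()

# stream=True yields one Results object per frame without loading the whole video.
for result in model.predict(source="echo_clip.mp4", conf=0.25, stream=True):
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        votes[cls_name] += 1            # accumulate per-frame detections

# Simple majority vote across frames as one possible clip-level decision rule.
decision = votes.most_common(1)[0][0] if votes else "no defect detected"
print(decision, dict(votes))
```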

PMID:39590744 | DOI:10.3390/jimaging10110280

Categories: Literature Watch

Evaluating the reproducibility of a deep learning algorithm for the prediction of retinal age

Tue, 2024-11-26 06:00

Geroscience. 2024 Nov 26. doi: 10.1007/s11357-024-01445-0. Online ahead of print.

ABSTRACT

Recently, a deep learning algorithm (DLA) has been developed to predict chronological age from retinal images. The Retinal Age Gap (RAG), the deviation between the age predicted from retinal images (Retinal Age, RA) and chronological age, correlates with mortality and age-related diseases. This study evaluated the reliability and accuracy of RA predictions and analyzed various factors that may influence them. We analyzed two groups of participants, Intervisit and Intravisit, both imaged by color fundus photography. RA was predicted using an established algorithm. The Intervisit group comprised 26 subjects imaged in two sessions. The Intravisit group comprised 41 subjects, each of whose eyes was photographed twice in one session. The mean absolute test-retest difference in predicted RA was 2.39 years for Intervisit and 2.13 years for Intravisit, with the latter showing higher prediction variability. Chronological age was predicted accurately from fundus photographs. Subsetting image pairs based on differential image quality reduced test-retest discrepancies by up to 50%, but mean image quality was not correlated with retest outcomes. Marked diurnal oscillations in RA predictions were observed, with a significant overestimation in the afternoon compared to the morning in the Intravisit cohort. The order of image acquisition across imaging sessions did not influence RA prediction, and subjective age perception did not predict RAG. Inter-eye consistency exceeded 3 years. Our study is the first to explore the reliability of RA predictions. Consistent image quality enhances retest outcomes. The observed diurnal variations in RA predictions highlight the need for standardized imaging protocols, but RAG could soon be a reliable metric in clinical investigations.
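
The two quantities at the core of this study, the retinal age gap and the test-retest difference in predicted retinal age, are simple to compute once the model's predictions are available. The short sketch below illustrates them on made-up numbers; it assumes one RA prediction per image and is not the authors' analysis code.

```python
# Sketch: retinal age gap (RAG) and mean absolute test-retest difference,
# computed on made-up predictions (not data from the study).
import numpy as np

chronological_age = np.array([55.0, 62.0, 48.0, 71.0])
ra_session1 = np.array([57.1, 60.3, 50.2, 73.5])   # predicted retinal age, first imaging
ra_session2 = np.array([59.0, 62.8, 47.9, 70.9])   # predicted retinal age, repeat imaging

# RAG = predicted retinal age minus chronological age (positive = "older" retina).
rag = ra_session1 - chronological_age
print("RAG per subject:", rag)

# Test-retest reliability: mean absolute difference between repeated RA predictions.
mad = np.mean(np.abs(ra_session1 - ra_session2))
print(f"Mean absolute test-retest difference: {mad:.2f} years")
```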

PMID:39589693 | DOI:10.1007/s11357-024-01445-0

Categories: Literature Watch

Bumblebee social learning outcomes correlate with their flower-facing behaviour

Tue, 2024-11-26 06:00

Anim Cogn. 2024 Nov 26;27(1):80. doi: 10.1007/s10071-024-01918-x.

ABSTRACT

Previous studies suggest that social learning in bumblebees can occur through second-order conditioning, with conspecifics functioning as first-order reinforcers. However, the behavioural mechanisms underlying bumblebees' acquisition of socially learned associations remain largely unexplored. Investigating these mechanisms requires detailed quantification and analysis of the observation process. Here we designed a new 2D paradigm suitable for simple top-down high-speed video recording and analysed bumblebees' observational learning process using a deep-learning-based pose-estimation framework. Two groups of bumblebees observed live conspecifics foraging from either blue or yellow flowers during a single foraging bout and were subsequently tested for their socially learned colour preferences. Both groups successfully learned the colour indicated by the demonstrators and spent more time facing rewarding flowers, whether occupied by demonstrators or not, than non-rewarding flowers. While both groups showed a negative correlation between time spent facing non-rewarding flowers and learning outcomes, observer bees in the blue group benefited from time spent facing occupied rewarding flowers, whereas in the yellow group the time observer bees spent facing unoccupied rewarding flowers correlated positively with their learning outcomes. These results suggest that socially influenced colour preferences are shaped by the interplay of different types of observations rather than merely by observing a conspecific at a single colour. Together, these findings provide direct evidence of the dynamic viewing process of observer bees during social observation, opening up new opportunities for exploring the details of more complex social learning in bumblebees and other insects.
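
The abstract reports time spent "facing" flowers derived from a pose-estimation framework but does not spell out the geometry. One common way to operationalize this from top-down keypoints is to test whether a flower falls within an angular cone around the bee's body axis; the sketch below illustrates that idea with hypothetical keypoints and an assumed 45-degree threshold, and is not the authors' published analysis.

```python
# Sketch: decide whether a bee is "facing" a flower from top-down 2D keypoints.
# Keypoint names, coordinates, and the 45-degree cone are illustrative assumptions.
import numpy as np

def is_facing(thorax_xy, head_xy, flower_xy, max_angle_deg=45.0):
    """True if the flower lies within max_angle_deg of the thorax-to-head axis."""
    heading = np.asarray(head_xy, float) - np.asarray(thorax_xy, float)    # body axis
    to_flower = np.asarray(flower_xy, float) - np.asarray(head_xy, float)  # direction to flower
    cos_angle = np.dot(heading, to_flower) / (np.linalg.norm(heading) * np.linalg.norm(to_flower))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg

# Per-frame facing decisions can then be summed into facing time per flower type.
frames = [((10, 10), (12, 11), (20, 15)),   # (thorax, head, flower) per frame
          ((10, 10), (12, 11), (5, 30))]
facing_frames = sum(is_facing(t, h, f) for t, h, f in frames)
print(f"Facing the flower in {facing_frames} of {len(frames)} frames")
```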

PMID:39589587 | DOI:10.1007/s10071-024-01918-x

Categories: Literature Watch

Cross-shaped windows transformer with self-supervised pretraining for clinically significant prostate cancer detection in bi-parametric MRI

Tue, 2024-11-26 06:00

Med Phys. 2024 Nov 26. doi: 10.1002/mp.17546. Online ahead of print.

ABSTRACT

BACKGROUND: Bi-parametric magnetic resonance imaging (bpMRI) has demonstrated promising results in prostate cancer (PCa) detection. Vision transformers have achieved competitive performance compared to convolutional neural networks (CNNs) in deep learning, but they need abundant annotated data for training. Self-supervised learning can effectively leverage unlabeled data to extract useful semantic representations without annotation and its associated costs.

PURPOSE: This study proposes a novel self-supervised learning framework and a transformer model to enhance PCa detection using prostate bpMRI.

METHODS AND MATERIALS: We introduce a novel end-to-end Cross-Shaped windows (CSwin) transformer UNet model, CSwin UNet, to detect clinically significant prostate cancer (csPCa) in prostate bpMRI. We also propose a multitask self-supervised learning framework to leverage unlabeled data and improve network generalizability. Using a large prostate bpMRI dataset (PI-CAI) with 1476 patients, we first pretrain the CSwin transformer using multitask self-supervised learning to improve data efficiency and network generalizability. We then finetune using lesion annotations to perform csPCa detection. We also test network generalization using a separate bpMRI dataset with 158 patients (Prostate158).

RESULTS: Five-fold cross validation shows that self-supervised CSwin UNet achieves 0.888 ± 0.010 area under the receiver operating characteristic curve (AUC) and 0.545 ± 0.060 Average Precision (AP) on the PI-CAI dataset, significantly outperforming comparable models (nnFormer, Swin UNETR, DynUNet, Attention UNet, UNet). In the external generalizability test, self-supervised CSwin UNet achieves 0.79 AUC and 0.45 AP, still outperforming all other comparable methods and demonstrating good generalization to external data.

CONCLUSIONS: This study proposes CSwin UNet, a new transformer-based model for end-to-end detection of csPCa, with self-supervised pretraining to improve network generalizability. We employ an automatic weighted loss (AWL) to unify the pretext tasks, improving representation learning. Evaluated on two multi-institutional public datasets, our method surpasses existing methods in detection metrics and demonstrates good generalization to external data.
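
The conclusions mention an automatic weighted loss (AWL) used to unify the self-supervised pretext tasks, but the abstract does not give its formulation. A widely used form of automatic task weighting learns one parameter s_i per task and combines losses as L_total = sum_i [exp(-s_i) * L_i + s_i]. The sketch below shows that generic scheme in PyTorch under the assumption that this is what AWL refers to; the task names and losses are placeholders.

```python
# Sketch: generic automatic (uncertainty-based) weighting of multiple pretext-task losses.
# This is an assumption about what "AWL" denotes; the task losses here are placeholders.
import torch
import torch.nn as nn

class AutomaticWeightedLoss(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable parameter s_i per task; exp(-s_i) acts as the task weight.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, *task_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s   # weighted loss plus regularizer
        return total

awl = AutomaticWeightedLoss(num_tasks=3)
# Placeholder pretext losses (e.g., rotation prediction, masked-patch reconstruction, contrastive).
l_rot, l_recon, l_contrast = torch.tensor(0.8), torch.tensor(1.5), torch.tensor(0.3)
print(awl(l_rot, l_recon, l_contrast))   # combined pretraining objective
# In practice, awl.parameters() would be optimized jointly with the backbone's parameters.
```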

PMID:39589390 | DOI:10.1002/mp.17546

Categories: Literature Watch

Tumor aware recurrent inter-patient deformable image registration of computed tomography scans with lung cancer

Tue, 2024-11-26 06:00

Med Phys. 2024 Nov 26. doi: 10.1002/mp.17536. Online ahead of print.

ABSTRACT

BACKGROUND: Voxel-based analysis (VBA) for population level radiotherapy (RT) outcomes modeling requires topology preserving inter-patient deformable image registration (DIR) that preserves tumors on moving images while avoiding unrealistic deformations due to tumors occurring on fixed images.

PURPOSE: We developed a tumor-aware recurrent registration (TRACER) deep learning (DL) method and evaluated its suitability for VBA.

METHODS: TRACER consists of encoder layers implemented with a stacked 3D convolutional long short-term memory network (3D-CLSTM), followed by decoder and spatial transform layers to compute a dense deformation vector field (DVF). Multiple CLSTM steps are used to compute a progressive sequence of deformations. Input conditioning was applied by including tumor segmentations with the 3D image pairs as input channels. Bidirectional tumor rigidity, image similarity, and deformation smoothness losses were used to optimize the network in an unsupervised manner. TRACER and multiple DL methods were trained with 204 3D computed tomography (CT) image pairs from patients with lung cancers (LC) and evaluated using (a) Dataset I (N = 308 pairs) with DL-segmented LCs, (b) Dataset II (N = 765 pairs) with manually delineated LCs, and (c) Dataset III with 42 LC patients treated with RT.
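
The abstract names three unsupervised training objectives (bidirectional tumor rigidity, image similarity, and deformation smoothness) without defining them. The sketch below shows one plausible, single-direction composition of such terms in PyTorch: a simple intensity similarity term, a gradient-based smoothness penalty on the DVF, and a rigidity penalty that discourages deformation variation inside the tumor mask. The term definitions and weights are assumptions, not the published TRACER losses.

```python
# Sketch: one plausible composition of the three unsupervised loss terms named in the
# abstract. Term definitions and weights are assumptions, not the published TRACER losses.
import torch
import torch.nn.functional as F

def similarity_loss(warped, fixed):
    """Image similarity as mean squared intensity error (a simple stand-in for NCC)."""
    return F.mse_loss(warped, fixed)

def smoothness_loss(dvf):
    """Penalize spatial gradients of the deformation vector field (B, 3, D, H, W)."""
    dz = (dvf[:, :, 1:] - dvf[:, :, :-1]).pow(2).mean()
    dy = (dvf[:, :, :, 1:] - dvf[:, :, :, :-1]).pow(2).mean()
    dx = (dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]).pow(2).mean()
    return dz + dy + dx

def tumor_rigidity_loss(dvf, tumor_mask):
    """Discourage deformation variation inside the tumor so the tumor moves rigidly."""
    masked = dvf * tumor_mask                       # displacements inside the tumor only
    mean_disp = masked.sum(dim=(2, 3, 4), keepdim=True) / tumor_mask.sum().clamp(min=1)
    return (((dvf - mean_disp) * tumor_mask) ** 2).mean()

# Toy tensors standing in for a warped moving image, fixed image, DVF, and tumor mask.
B, D, H, W = 1, 8, 8, 8
warped, fixed = torch.rand(B, 1, D, H, W), torch.rand(B, 1, D, H, W)
dvf = torch.rand(B, 3, D, H, W)
mask = (torch.rand(B, 1, D, H, W) > 0.9).float()

loss = similarity_loss(warped, fixed) + 0.1 * smoothness_loss(dvf) + 1.0 * tumor_rigidity_loss(dvf, mask)
print(loss)
```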

RESULTS: TRACER accurately aligned normal tissues. It best preserved tumors, indicated by the smallest tumor volume differences of 0.24%, 0.40%, and 0.13% and mean square errors in CT intensities of 0.005, 0.005, and 0.004, computed between the original and resampled moving-image tumors for Datasets I, II, and III, respectively. It also resulted in the smallest planned RT tumor dose difference between the original and resampled moving images, 0.01 Gy and 0.013 Gy when using a female and a male reference, respectively.

CONCLUSIONS: TRACER is a suitable method for inter-patient registration involving LC occurring in both fixed and moving images and applicable to voxel-based analysis methods.

PMID:39589333 | DOI:10.1002/mp.17536

Categories: Literature Watch

Multi-objective non-intrusive hearing-aid speech assessment model

Tue, 2024-11-26 06:00

J Acoust Soc Am. 2024 Nov 1;156(5):3574-3587. doi: 10.1121/10.0034362.

ABSTRACT

Because a reference signal is often unavailable in real-world scenarios, reference-free speech quality and intelligibility assessment models are important for many speech processing applications. Although many deep-learning models have been applied to build non-intrusive speech assessment approaches and have achieved promising performance, studies focusing on hearing-impaired (HI) subjects are limited. This paper presents HASA-Net+, a multi-objective non-intrusive hearing-aid speech assessment model building upon our previous work, HASA-Net. HASA-Net+ improves on HASA-Net in several ways: (1) inclusivity for both normal-hearing and HI listeners, (2) integration with pre-trained speech foundation models and fine-tuning techniques, (3) expansion of predictive capabilities to cover speech quality and intelligibility in diverse conditions, including noisy, denoised, reverberant, dereverberated, and vocoded speech, thereby evaluating its robustness, and (4) validation of the generalization capability using an out-of-domain dataset.
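
At a high level, a multi-objective non-intrusive assessment model of this kind takes only the degraded speech, extracts representations from a pre-trained speech foundation model, conditions on a listener hearing-loss description, and regresses both a quality and an intelligibility score. The sketch below shows that generic structure with a placeholder assessment head; the dimensions, pooling, and the way hearing-loss information is injected are assumptions, not the HASA-Net+ architecture.

```python
# Sketch: a generic multi-objective non-intrusive assessment head that maps frame-level
# speech-foundation-model features plus a listener hearing-loss vector to quality and
# intelligibility scores. Dimensions and structure are assumptions, not HASA-Net+.
import torch
import torch.nn as nn

class MultiObjectiveAssessor(nn.Module):
    def __init__(self, feat_dim=768, audiogram_dim=8, hidden=128):
        super().__init__()
        self.proj = nn.Linear(feat_dim + audiogram_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.quality_head = nn.Linear(hidden, 1)          # predicted quality score
        self.intelligibility_head = nn.Linear(hidden, 1)  # predicted intelligibility score

    def forward(self, feats, audiogram):
        # feats: (B, T, feat_dim) from a pre-trained speech model; audiogram: (B, audiogram_dim).
        aud = audiogram.unsqueeze(1).expand(-1, feats.size(1), -1)
        x = torch.relu(self.proj(torch.cat([feats, aud], dim=-1)))
        h, _ = self.rnn(x)
        pooled = h.mean(dim=1)                             # utterance-level representation
        return self.quality_head(pooled), self.intelligibility_head(pooled)

model = MultiObjectiveAssessor()
feats = torch.randn(2, 50, 768)      # e.g., frame features from a speech foundation model
audiogram = torch.randn(2, 8)        # listener hearing-loss descriptor (zeros = normal hearing)
q, i = model(feats, audiogram)
print(q.shape, i.shape)              # torch.Size([2, 1]) torch.Size([2, 1])
```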

PMID:39589329 | DOI:10.1121/10.0034362

Categories: Literature Watch

Introducing GUIDE for quantitative imaging via generalized uncertainty-driven inference using deep learning

Tue, 2024-11-26 06:00

Elife. 2024 Nov 26;13:RP101069. doi: 10.7554/eLife.101069.

ABSTRACT

This work proposes µGUIDE: a general Bayesian framework to estimate posterior distributions of tissue microstructure parameters from any given biophysical model or signal representation, with an exemplar demonstration in diffusion-weighted magnetic resonance imaging. Harnessing a new deep learning architecture for automatic signal feature selection combined with simulation-based inference and efficient sampling of the posterior distributions, µGUIDE bypasses the high computational and time cost of conventional Bayesian approaches and does not rely on acquisition constraints to define model-specific summary statistics. The obtained posterior distributions make it possible to highlight degeneracies present in the model definition and to quantify the uncertainty and ambiguity of the estimated parameters.
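
The abstract describes the general recipe of simulation-based inference: simulate parameter-signal pairs from the forward model, learn a mapping from signals to an approximate posterior, then amortize inference over new data without per-voxel sampling. The sketch below illustrates that recipe in its simplest form, a network that outputs a Gaussian mean and standard deviation for a single parameter of a toy simulator; it is a conceptual illustration only, not the µGUIDE architecture or its feature-selection stage.

```python
# Sketch: minimal amortized simulation-based inference for one toy microstructure
# parameter. A network learns to map simulated signals to a Gaussian posterior
# (mean, std). This is conceptual only, not the µGUIDE model.
import torch
import torch.nn as nn

def simulator(theta, n_meas=16):
    """Toy forward model: exponential signal decay with parameter theta, plus noise."""
    b = torch.linspace(0.0, 3.0, n_meas)
    return torch.exp(-theta.unsqueeze(-1) * b) + 0.02 * torch.randn(theta.shape[0], n_meas)

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # outputs (mean, log_std)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta = torch.rand(256) * 2.0            # sample parameters from the prior U(0, 2)
    x = simulator(theta)                     # simulate the corresponding signals
    mean, log_std = net(x).unbind(dim=-1)
    # Negative log-likelihood of the true theta under the predicted Gaussian posterior.
    loss = (log_std + 0.5 * ((theta - mean) / log_std.exp()) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized inference: one forward pass gives a posterior estimate for a new signal.
with torch.no_grad():
    x_new = simulator(torch.tensor([0.7]))
    mean, log_std = net(x_new).unbind(dim=-1)
print(f"posterior ~ N({mean.item():.2f}, {log_std.exp().item():.2f})")
```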

PMID:39589260 | DOI:10.7554/eLife.101069

Categories: Literature Watch

Deep Learning Assessment of Small Renal Masses

Tue, 2024-11-26 06:00

Radiology. 2024 Nov;313(2):e241619. doi: 10.1148/radiol.241619.

NO ABSTRACT

PMID:39589240 | DOI:10.1148/radiol.241619

Categories: Literature Watch

Data-Quality-Navigated Machine Learning Strategy with Chemical Intuition to Improve Generalization

Tue, 2024-11-26 06:00

J Chem Theory Comput. 2024 Nov 26. doi: 10.1021/acs.jctc.4c00969. Online ahead of print.

ABSTRACT

Generalizing to real-world data has been one of the most difficult challenges for the application of machine learning (ML) in practice. Most ML work has focused on improvements in algorithms and feature representations. However, data quality, the foundation of ML, has been largely overlooked, which has also led to an absence of data evaluation and processing methods in the ML field. Motivated by this challenge and need, we selected the important but difficult task of reorganization energy (RE) prediction, an important parameter for the charge mobility of organic semiconductors (OSCs), as a test platform and propose a data-quality-navigated strategy with chemical intuition. We developed a data diversity evaluation based on the structure characteristics of OSC molecules, a reliability evaluation method based on prediction accuracy, a data filtering method based on the uncertainty of K-fold division, and a data split technique using clustering and stratified sampling based on four molecular descriptor-associated REs. Consequently, a representative RE data set (15,989 molecules) with high reliability and diversity was obtained. For the feature representation, a complementary strategy is proposed that considers the chemical nature of REs and the structure characteristics of OSC molecules as well as the model algorithm. In addition, an ensemble framework consisting of two deep learning models is constructed to avoid the risk of local optimization of a single model. The robustness and generalization of our model are strongly validated against different OSC-like molecules with diverse structures and a wide range of REs, as well as real OSC molecules, greatly outperforming eight adversarial controls. Collectively, our work not only provides a quick and reliable tool to screen efficient OSCs but also offers methodological guidelines for improving the generalization of ML.
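
One of the data-quality components described, filtering based on the uncertainty of K-fold division, can be realized by repeating cross-validated predictions under different fold assignments and flagging samples whose prediction error varies strongly across repeats. The sketch below shows that generic idea with scikit-learn on synthetic data; the random-forest surrogate model, the number of repeats, and the variance threshold are assumptions, not the authors' exact procedure.

```python
# Sketch: flag samples whose cross-validated prediction error fluctuates strongly
# across repeated K-fold splits. The surrogate model, data, and threshold are
# illustrative assumptions, not the paper's exact filtering procedure.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)

errors = []
for seed in range(5):                                    # repeat K-fold with different splits
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    preds = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=seed),
                              X, y, cv=cv)
    errors.append(np.abs(preds - y))                     # per-sample absolute error per repeat

errors = np.stack(errors)                                # shape (n_repeats, n_samples)
uncertainty = errors.std(axis=0)                         # split-to-split error variability

threshold = np.percentile(uncertainty, 90)               # keep the 90% most stable samples
keep = uncertainty <= threshold
print(f"Retained {keep.sum()} of {len(y)} samples after uncertainty filtering")
```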

PMID:39589234 | DOI:10.1021/acs.jctc.4c00969

Categories: Literature Watch

Augmenting Human Expertise in Weighted Ensemble Simulations through Deep Learning-Based Information Bottleneck

Tue, 2024-11-26 06:00

J Chem Theory Comput. 2024 Nov 26. doi: 10.1021/acs.jctc.4c00919. Online ahead of print.

ABSTRACT

The weighted ensemble (WE) method stands out as a widely used segment-based sampling technique renowned for its rigorous treatment of kinetics. The WE framework typically involves first mapping the configuration space onto a low-dimensional collective variable (CV) space and then partitioning it into bins. The efficacy of WE simulations heavily depends on the selection of CVs and binning schemes. The recently proposed state predictive information bottleneck (SPIB) method has emerged as a promising tool for automatically constructing CVs from data and guiding enhanced sampling in an iterative manner. In this work, we advance this data-driven pipeline by incorporating prior expert knowledge. Our hybrid approach combines SPIB-learned CVs to enhance sampling in explored regions with expert-based CVs to guide exploration in regions of interest, synergizing the strengths of both methods. Through benchmarking on the alanine dipeptide and chignolin systems, we demonstrate that our hybrid approach effectively guides WE simulations to sample states of interest and reduces run-to-run variances. Moreover, our integration of the SPIB model also enhances the analysis and interpretation of WE simulation data by effectively identifying metastable states and pathways and offering direct visualization of the dynamics.
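
The essence of the weighted ensemble scheme this work builds on is bin-based resampling: trajectory segments carry statistical weights, and at each iteration segments in under-populated CV bins are split (weights halved) while excess segments in crowded bins are merged (weights summed), keeping the kinetics unbiased. The sketch below shows only that resampling step on scalar CV values; the bin edges, the target of two walkers per bin, and the merge rule are simplifications, and the SPIB or expert CV construction is not included.

```python
# Sketch: one weighted-ensemble resampling step over bins of a 1D collective variable.
# Bin edges, the target of 2 walkers per bin, and the merge rule are simplifications.
import random

def resample(walkers, bin_edges, target_per_bin=2):
    """walkers: list of (cv_value, weight). Split/merge so each occupied bin has target_per_bin."""
    bins = {}
    for cv, w in walkers:
        idx = sum(cv >= edge for edge in bin_edges)       # locate the walker's bin
        bins.setdefault(idx, []).append([cv, w])

    new_walkers = []
    for members in bins.values():
        while len(members) < target_per_bin:              # split: duplicate the heaviest walker
            members.sort(key=lambda m: m[1], reverse=True)
            cv, w = members[0]
            members[0][1] = w / 2
            members.append([cv, w / 2])
        while len(members) > target_per_bin:              # merge: combine the two lightest walkers
            members.sort(key=lambda m: m[1])
            (cv1, w1), (cv2, w2) = members[0], members[1]
            keep_cv = cv1 if random.random() < w1 / (w1 + w2) else cv2
            members = [[keep_cv, w1 + w2]] + members[2:]
        new_walkers.extend((cv, w) for cv, w in members)
    return new_walkers

walkers = [(0.1, 0.25), (0.15, 0.25), (0.2, 0.25), (0.9, 0.25)]
resampled = resample(walkers, bin_edges=[0.5])
print(resampled, "total weight:", sum(w for _, w in resampled))   # total weight is conserved
```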

PMID:39589127 | DOI:10.1021/acs.jctc.4c00919

Categories: Literature Watch

REDIportal: toward an integrated view of the A-to-I editing

Tue, 2024-11-26 06:00

Nucleic Acids Res. 2024 Nov 26:gkae1083. doi: 10.1093/nar/gkae1083. Online ahead of print.

ABSTRACT

A-to-I RNA editing is the most common non-transient epitranscriptome modification. It plays several roles in human physiology and has been linked to several disorders. Large-scale deep transcriptome sequencing has fostered the characterization of A-to-I editing at the single-nucleotide level and the development of dedicated computational resources. REDIportal is a unique and specialized database collecting ∼16 million putative A-to-I editing sites, designed to face the current challenges of epitranscriptomics. Its current release has been enriched with sites from the TCGA project (using data from 31 studies). REDIportal provides an accurate, sustainable, and accessible tool enriched with interconnections to widespread ELIXIR core resources such as Ensembl, RNAcentral, UniProt, and PRIDE. Additionally, REDIportal now includes information regarding RNA editing in putative double-stranded RNAs, relevant for the immune-related roles of editing, as well as an extended catalog of recoding events. Finally, we report a per-site reliability score calculated using a deep learning model trained on a large collection of positive and negative instances. REDIportal is available at http://srv00.recas.ba.infn.it/atlas/.

PMID:39588754 | DOI:10.1093/nar/gkae1083

Categories: Literature Watch

A scoping review of magnetic resonance angiography and perfusion image synthesis

Tue, 2024-11-26 06:00

Front Dement. 2024 Nov 11;3:1408782. doi: 10.3389/frdem.2024.1408782. eCollection 2024.

ABSTRACT

INTRODUCTION: Deregulation of the cerebrovascular system has been linked to neurodegeneration as part of a putative causal pathway toward etiologies such as Alzheimer's disease (AD). In medical imaging, time-of-flight magnetic resonance angiography (TOF-MRA) and perfusion MRI are the most common modalities used to study this system. However, due to a lack of resources, many large-scale studies of AD do not acquire these images; this creates a conundrum, as the lack of evidence limits our knowledge of the interaction between the cerebrovascular system and AD. Deep learning approaches have recently been used to generate synthetic medical images from existing contrasts. In this review, we study the use of artificial intelligence for generating synthetic TOF-MRA and perfusion-related images from existing neuroanatomical and neurovascular acquisitions for the study of the cerebrovascular system.

METHOD: Following the PRISMA reporting guidelines, we conducted a scoping review of 729 studies relating to image synthesis of TOF-MRA or perfusion imaging, of which 13 met our criteria.

RESULTS: Studies showed that T1-w, T2-w, and FLAIR images can be used to synthesize perfusion maps and TOF-MRA. Other studies demonstrated that synthetic images can have a greater signal-to-noise ratio than real images and that some models trained on healthy subjects can generalize their outputs to unseen populations, such as stroke patients.

DISCUSSION: These findings suggest that generating TOF-MRA and perfusion MRI images holds significant potential for enhancing neurovascular studies, particularly in cases where direct acquisition is not feasible. This approach could provide valuable insights for retrospective studies of several cerebrovascular diseases such as stroke and AD. While promising, these methods require further research to assess their sensitivity and specificity and to ensure their applicability across diverse populations. The use of models that generate TOF-MRA and perfusion MRI from commonly acquired data could be the key to the retrospective study of the cerebrovascular system and could elucidate its role in the development of dementia.

PMID:39588202 | PMC:PMC11586219 | DOI:10.3389/frdem.2024.1408782

Categories: Literature Watch

A Review of Datasets, Optimization Strategies, and Learning Algorithms for Analyzing Alzheimer's Dementia Detection

Tue, 2024-11-26 06:00

Neuropsychiatr Dis Treat. 2024 Nov 20;20:2203-2225. doi: 10.2147/NDT.S496307. eCollection 2024.

ABSTRACT

Alzheimer's Dementia (AD) is a progressive neurological disorder that affects memory and cognitive function, necessitating early detection for its effective management; it poses a significant challenge to global public health. The early and accurate detection of dementia is crucial for several reasons. First, timely detection facilitates early intervention and treatment planning. Second, precise diagnostic methods are essential for distinguishing dementia from other cognitive disorders and medical conditions that may present with similar symptoms. Continuous analysis and improvement of detection methods have contributed to advancements in medical research, helping to identify new biomarkers, refine existing diagnostic tools, and foster the development of innovative technologies, ultimately leading to more accurate and efficient diagnostic approaches for dementia. This paper presents a critical analysis of multimodal imaging datasets, learning algorithms, and optimisation techniques utilised in the context of Alzheimer's dementia detection. The focus is on understanding the advancements and challenges in employing diverse imaging modalities, such as MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), and EEG (ElectroEncephaloGram). This study evaluated various machine learning algorithms, deep learning models, transfer learning techniques, and generative adversarial networks for the effective analysis of multi-modality imaging data for dementia detection. In addition, a critical examination of optimisation techniques, encompassing optimisation algorithms and hyperparameter tuning strategies for processing and analysing images, is presented to discern their influence on model performance and generalisation. Thorough examination and enhancement of methods for dementia detection are fundamental to addressing the healthcare challenges posed by dementia, facilitating timely interventions, improving diagnostic accuracy, and advancing research in neurodegenerative diseases.

PMID:39588176 | PMC:PMC11586527 | DOI:10.2147/NDT.S496307

Categories: Literature Watch

Variation and evolution analysis of SARS-CoV-2 using self-game sequence optimization

Tue, 2024-11-26 06:00

Front Microbiol. 2024 Nov 11;15:1485748. doi: 10.3389/fmicb.2024.1485748. eCollection 2024.

ABSTRACT

INTRODUCTION: The evolution of SARS-CoV-2 has precipitated the emergence of new mutant strains, some exhibiting enhanced transmissibility and immune evasion capabilities, thus escalating the infection risk and diminishing vaccine efficacy. Given the continuous impact of SARS-CoV-2 mutations on global public health, the economy, and society, a profound comprehension of potential variations is crucial to effectively mitigate the impact of viral evolution. Yet, this task still faces considerable challenges.

METHODS: This study introduces DARSEP, a method based on Deep learning Associates with Reinforcement learning for SARS-CoV-2 Evolution Prediction, which combines self-game sequence optimization with a RetNet-based model.

RESULTS: DARSEP accurately predicts evolutionary sequences and investigates the virus's evolutionary trajectory. It filters spike protein sequences with optimal fitness values from an extensive mutation space, selectively identifies those with a higher likelihood of evading immune detection, and devises a superior evolutionary analysis model for SARS-CoV-2 spike protein sequences. Comprehensive downstream task evaluations corroborate the model's efficacy in predicting potential mutation sites, elucidating SARS-CoV-2's evolutionary direction, and analyzing the development trends of Omicron variant strains through semantic changes.

CONCLUSION: Overall, DARSEP enriches our understanding of the dynamic evolution of SARS-CoV-2 and provides robust support for addressing present and future epidemic challenges.

PMID:39588108 | PMC:PMC11586374 | DOI:10.3389/fmicb.2024.1485748

Categories: Literature Watch

Predicting alfalfa leaf area index by non-linear models and deep learning models

Tue, 2024-11-26 06:00

Front Plant Sci. 2024 Nov 11;15:1458337. doi: 10.3389/fpls.2024.1458337. eCollection 2024.

ABSTRACT

The leaf area index (LAI) of alfalfa is a crucial indicator of its growth status and a predictor of yield. Alfalfa LAI is influenced by environmental factors, and the limitations of non-linear models in integrating these factors affect the accuracy of LAI predictions. This study explores the potential of classical non-linear models and deep learning for predicting alfalfa LAI. Initially, Logistic, Gompertz, and Richards models were fitted as functions of growth days to assess the applicability of non-linear models for alfalfa LAI prediction. This study then incorporates environmental factors such as temperature and soil moisture and proposes a time series prediction model based on a mutation point detection method and an encoder-attention-decoder BiLSTM network (TMEAD-BiLSTM). The model's performance was analyzed and evaluated against LAI data from different years and cuts. The results indicate that the TMEAD-BiLSTM model achieved the highest prediction accuracy (R² > 0.99), while the non-linear models exhibited lower accuracy (R² > 0.78). The TMEAD-BiLSTM model overcomes the limitations of non-linear models in integrating environmental factors, enabling rapid and accurate predictions of alfalfa LAI, which can provide valuable references for alfalfa growth monitoring and the establishment of field management practices.
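
The three classical growth curves used as baselines have standard forms, commonly parameterized as Logistic LAI(t) = K / (1 + a*exp(-r*t)), Gompertz LAI(t) = K*exp(-a*exp(-r*t)), and Richards LAI(t) = K / (1 + a*exp(-r*t))^(1/v). The sketch below fits the logistic form to made-up LAI-versus-growth-day data with SciPy; the data points and initial parameter guesses are placeholders, not the study's measurements.

```python
# Sketch: fit a logistic growth curve LAI(t) = K / (1 + a*exp(-r*t)) to synthetic
# LAI-vs-growth-day data. Data points and initial guesses are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, a, r):
    return K / (1.0 + a * np.exp(-r * t))

days = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
lai = np.array([0.4, 0.9, 1.8, 3.0, 4.1, 4.8, 5.1, 5.2])   # made-up alfalfa LAI values

params, _ = curve_fit(logistic, days, lai, p0=[5.5, 20.0, 0.2])
K, a, r = params
pred = logistic(days, *params)
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(f"K={K:.2f}, a={a:.2f}, r={r:.3f}, R^2={r2:.3f}")
```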

PMID:39588090 | PMC:PMC11586217 | DOI:10.3389/fpls.2024.1458337

Categories: Literature Watch

Diagnosing Allergic Contact Dermatitis Using Deep Learning: Single-Arm, Pragmatic Clinical Trial with an Observer Performance Study to Compare Artificial Intelligence Performance with Human Reader Performance

Tue, 2024-11-26 06:00

Dermatitis. 2024 Nov 26. doi: 10.1089/derm.2024.0302. Online ahead of print.

ABSTRACT

Background: Allergic contact dermatitis is a common, pruritic, debilitating skin disease affecting at least 20% of the population. Objective: To prospectively validate a computer vision algorithm across all Fitzpatrick skin types. Methods: Each participant was exposed to 10 allergens. The reference criterion was obtained 5 days after initial patch placement by a board-certified dermatologist. The algorithm processed photographs of the test site obtained on Day 5. Human performance in reading the photographs was also evaluated. Results: A total of 206 evaluable participants [mean age 39 years, 66% (136/206) female, and 47% with Fitzpatrick skin types IV-VI] completed testing. Forty-two percent (87/206) of participants experienced one or more allergic reactions, resulting in a total of 132 allergic reactions. The model provided high discrimination (AUROC 0.86, 95% CI: 0.82-0.90) and specificity (93%, 95% CI: 92%-94%) but lower sensitivity (58%, 95% CI: 49%-67%). Human performance in interpreting the photographs ranged from similar to the algorithm for individual readers to superior when readings were combined across readers. There were no serious adverse events. Conclusions: The combination of smartphone capture of patch testing sites with deep learning yielded high discrimination across a diverse sample.

PMID:39587877 | DOI:10.1089/derm.2024.0302

Categories: Literature Watch

Interpret Gaussian Process Models by Using Integrated Gradients

Tue, 2024-11-26 06:00

Mol Inform. 2024 Nov 26:e202400051. doi: 10.1002/minf.202400051. Online ahead of print.

ABSTRACT

Gaussian process regression (GPR) is a nonparametric probabilistic model capable of computing not only the predicted mean but also the predicted standard deviation, which represents the confidence level of predictions. It offers great flexibility, as it can be non-linearized by designing the kernel function, made robust against outliers by altering the likelihood function, and extended to classification models. Recently, models combining deep learning with GPR, such as Deep Kernel Learning GPR, have been proposed and reported to achieve higher accuracy than GPR. However, due to its nonparametric nature, GPR is challenging to interpret. While Explainable AI (XAI) methods like LIME or kernel SHAP can interpret the predicted mean, interpreting the predicted standard deviation remains difficult. In this study, we propose a novel method to interpret the predictions of GPR by evaluating the importance of explanatory variables. We combine the GPR model with the Integrated Gradients (IG) method to assess the contribution of each feature to the prediction. By evaluating the standard deviation of the posterior distribution, we show that the IG approach provides a detailed decomposition of the predictive uncertainty, attributing it to the uncertainty in individual feature contributions. This methodology not only highlights the variables that are most influential in the prediction but also provides insights into the reliability of the model by quantifying the uncertainty associated with each feature. Through this, we can obtain a deeper understanding of the model's behavior and foster trust in its predictions, especially in domains where interpretability is as crucial as accuracy.
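
Integrated Gradients attributes a prediction F(x) to input features via IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + alpha*(x - x')) d alpha, approximated by a Riemann sum over inputs interpolated between a baseline x' and x. The sketch below applies this generic recipe with PyTorch autograd to an arbitrary differentiable scalar function, which in the paper's setting would be the GP posterior mean or posterior standard deviation; the toy function and zero baseline here are placeholders, not the paper's GPR model.

```python
# Sketch: Integrated Gradients for any differentiable scalar predictor f(x).
# In the GPR setting, f would be the posterior mean or posterior standard deviation;
# the quadratic toy function and zero baseline below are placeholders.
import torch

def integrated_gradients(f, x, baseline=None, steps=64):
    baseline = torch.zeros_like(x) if baseline is None else baseline
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    # Interpolate between the baseline and the input, then accumulate gradients of f.
    path = baseline + alphas * (x - baseline)          # (steps, n_features)
    path.requires_grad_(True)
    f(path).sum().backward()                           # d f / d path at every interpolation point
    avg_grad = path.grad.mean(dim=0)                   # Riemann approximation of the integral
    return (x - baseline) * avg_grad                   # per-feature attributions

def toy_predictor(x):
    """Stand-in for a GP posterior mean/std: a smooth scalar function of the features."""
    return x[:, 0] ** 2 + 0.5 * x[:, 1] + torch.sin(x[:, 2])

x = torch.tensor([1.5, -2.0, 0.7])
attributions = integrated_gradients(toy_predictor, x)
print(attributions)                     # contribution of each feature to f(x) - f(baseline)
print(toy_predictor(x.unsqueeze(0)))    # value being explained
```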

PMID:39587873 | DOI:10.1002/minf.202400051

Categories: Literature Watch
