Deep learning

Generative T2*-weighted images as a substitute for true T2*-weighted images on brain MRI in patients with acute stroke

Thu, 2025-03-20 06:00

Diagn Interv Imaging. 2025 Mar 19:S2211-5684(25)00048-8. doi: 10.1016/j.diii.2025.03.004. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of this study was to validate a deep learning algorithm that generates T2*-weighted images from diffusion-weighted (DW) images and to compare its performance with that of true T2*-weighted images for hemorrhage detection on MRI in patients with acute stroke.

MATERIALS AND METHODS: This single-center, retrospective study included DW- and T2*-weighted images obtained less than 48 hours after symptom onset in consecutive patients admitted for acute stroke. Datasets were divided into training (60 %), validation (20 %), and test (20 %) sets, with stratification by stroke type (hemorrhagic/ischemic). A generative adversarial network was trained to produce generative T2*-weighted images using DW images. Concordance between true T2*-weighted images and generative T2*-weighted images for hemorrhage detection was independently graded by two readers into three categories (parenchymal hematoma, hemorrhagic infarct or no hemorrhage), and discordances were resolved by consensus reading. Sensitivity, specificity and accuracy of generative T2*-weighted images were estimated using true T2*-weighted images as the standard of reference.

RESULTS: A total of 1491 MRI sets from 939 patients (487 women, 452 men) with a median age of 71 years (first quartile, 57; third quartile, 81; range: 21-101) were included. In the test set (n = 300), there were no differences between true T2*-weighted images and generative T2*-weighted images for intraobserver reproducibility (κ = 0.97 [95 % CI: 0.95-0.99] vs. 0.95 [95 % CI: 0.92-0.97]; P = 0.27) and interobserver reproducibility (κ = 0.93 [95 % CI: 0.90-0.97] vs. 0.92 [95 % CI: 0.88-0.96]; P = 0.64). After consensus reading, concordance between true T2*-weighted images and generative T2*-weighted images was excellent (κ = 0.92; 95 % CI: 0.91-0.96). Generative T2*-weighted images achieved 90 % sensitivity (73/81; 95 % CI: 81-96), 97 % specificity (213/219; 95 % CI: 94-99) and 95 % accuracy (286/300; 95 % CI: 92-97) for the diagnosis of any cerebral hemorrhage (hemorrhagic infarct or parenchymal hemorrhage).
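The headline performance figures follow directly from the counts reported above; a minimal sketch reproducing the point estimates (the 95 % confidence intervals quoted in the abstract are not recomputed here):

```python
# Point estimates for generative T2*-weighted images against the reference
# standard, using the counts reported for the test set (n = 300).

def percentage(numerator: int, denominator: int) -> int:
    """Proportion expressed as a percentage rounded to the nearest integer."""
    return round(100 * numerator / denominator)

sensitivity = percentage(73, 81)    # detected hemorrhages / all hemorrhages
specificity = percentage(213, 219)  # correct negatives / all non-hemorrhages
accuracy = percentage(286, 300)     # correct calls / all MRI sets

print(sensitivity, specificity, accuracy)  # 90 97 95
```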

CONCLUSION: Generative T2*-weighted images and true T2*-weighted images showed no difference in diagnostic performance for hemorrhage detection in patients with acute stroke, and generative images may be used to shorten MRI protocols.

PMID:40113490 | DOI:10.1016/j.diii.2025.03.004

Categories: Literature Watch

Automated Detection of Microcracks Within Second Harmonic Generation Images of Cartilage Using Deep Learning

Thu, 2025-03-20 06:00

J Orthop Res. 2025 Mar 20. doi: 10.1002/jor.26071. Online ahead of print.

ABSTRACT

Articular cartilage, essential for smooth joint movement, can sustain micrometer-scale microcracks in its collagen network from low-energy impacts previously considered non-injurious. These microcracks may propagate under cyclic loading, impairing cartilage function and potentially initiating osteoarthritis (OA). Detecting and analyzing microcracks is crucial for understanding early cartilage damage but traditionally relies on manual analyses of second harmonic generation (SHG) images, which are labor-intensive, limit scalability, and delay insights. To address these challenges, we established and validated a YOLOv8-based deep learning model to automate the detection, segmentation, and quantification of cartilage microcracks from SHG images. Data augmentation during training improved model robustness, while evaluation metrics, including precision, recall, and F1-score, confirmed high accuracy and reliability, achieving a true positive rate of 95%. Our model consistently outperformed human annotators, demonstrating superior accuracy and repeatability while reducing labor demands. Error analyses indicated precise predictions for microcrack length and width, with moderate variability in estimations of orientation. Our results demonstrate the transformative potential of deep learning in cartilage research, enabling large-scale studies, accelerating analyses, and providing insights into soft tissue damage and engineered material mechanics. Expanding our dataset to include diverse anatomical regions and disease stages will further enhance the performance and generalization of our YOLOv8-based model. By automating microcrack detection, this study advances understanding of microdamage in cartilage and potential mechanisms of OA progression. Our publicly available model and dataset empower researchers to develop personalized therapies and preventive strategies, ultimately advancing joint health and preserving quality of life.

PMID:40113341 | DOI:10.1002/jor.26071

Categories: Literature Watch

SERS-ATB: a comprehensive database server for antibiotic SERS spectral visualization and deep-learning identification

Thu, 2025-03-20 06:00

Environ Pollut. 2025 Mar 18:126083. doi: 10.1016/j.envpol.2025.126083. Online ahead of print.

ABSTRACT

The rapid and accurate identification of antibiotics in environmental samples is critical for addressing the growing concern of antibiotic pollution, particularly in water sources. Antibiotic contamination poses a significant risk to ecosystems and human health by contributing to the spread of antibiotic resistance. Surface-enhanced Raman spectroscopy (SERS), known for its high sensitivity and specificity, is a powerful tool for antibiotic identification. However, its broader application is constrained by the lack of a large-scale antibiotic spectral database crucial for environmental and clinical use. To address this need, we systematically collected 12,800 SERS spectra for 200 environmentally relevant antibiotics and developed an open-access, web-based database at http://sers.test.bniu.net/. We compared six machine learning algorithms with a convolutional neural network (CNN) model, which achieved the highest accuracy at 98.94%, making it the preferred database model. For external validation, the CNN demonstrated an accuracy of 82.8%, underscoring its reliability and practicality for real-world applications. The SERS database and CNN prediction model represent a novel resource for environmental monitoring, offering significant advantages in accessibility, speed, and scalability. This study establishes a large-scale, public SERS spectral database for antibiotics, facilitating the integration of SERS into environmental programs, with the potential to improve antibiotic detection, pollution management, and resistance mitigation.

PMID:40113206 | DOI:10.1016/j.envpol.2025.126083

Categories: Literature Watch

Geometric deep learning and multiple-instance learning for 3D cell-shape profiling

Thu, 2025-03-20 06:00

Cell Syst. 2025 Mar 19;16(3):101229. doi: 10.1016/j.cels.2025.101229.

ABSTRACT

The three-dimensional (3D) morphology of cells emerges from complex cellular and environmental interactions, serving as an indicator of cell state and function. In this study, we used deep learning to discover morphology representations and understand cell states, introducing MorphoMIL, a computational pipeline combining geometric deep learning and attention-based multiple-instance learning to profile 3D cell and nuclear shapes. We used 3D point-cloud input and captured morphological signatures at single-cell and population levels, accounting for phenotypic heterogeneity. We applied these methods to over 95,000 melanoma cells treated with clinically relevant and cytoskeleton-modulating chemical and genetic perturbations. The pipeline accurately predicted drug perturbations and cell states. Our framework revealed subtle morphological changes associated with perturbations, key shapes correlating with signaling activity, and interpretable insights into cell-state heterogeneity. MorphoMIL demonstrated superior performance and generalized across diverse datasets, paving the way for scalable, high-throughput morphological profiling in drug discovery. A record of this paper's transparent peer review process is included in the supplemental information.

PMID:40112779 | DOI:10.1016/j.cels.2025.101229

Categories: Literature Watch

Evaluation of De Vries et al.: Quantifying cellular shapes and how they correlate to cellular responses

Thu, 2025-03-20 06:00

Cell Syst. 2025 Mar 19;16(3):101242. doi: 10.1016/j.cels.2025.101242.

ABSTRACT

One snapshot of the peer review process for "Geometric deep learning and multiple instance learning for 3D cell shape profiling" (De Vries et al., 2025).

PMID:40112776 | DOI:10.1016/j.cels.2025.101242

Categories: Literature Watch

Identification of heart failure subtypes using transformer-based deep learning modelling: a population-based study of 379,108 individuals

Thu, 2025-03-20 06:00

EBioMedicine. 2025 Mar 19;114:105657. doi: 10.1016/j.ebiom.2025.105657. Online ahead of print.

ABSTRACT

BACKGROUND: Heart failure (HF) is a complex syndrome with varied presentations and progression patterns. Traditional classification systems based on left ventricular ejection fraction (LVEF) have limitations in capturing the heterogeneity of HF. We aimed to explore the application of deep learning, specifically a Transformer-based approach, to analyse electronic health records (EHR) for a refined subtyping of patients with HF.

METHODS: We utilised linked EHR from primary and secondary care, sourced from the Clinical Practice Research Datalink (CPRD) Aurum, which encompassed health data of over 30 million individuals in the UK. Individuals aged 35 and above with incident reports of HF between January 1, 2005, and January 1, 2018, were included. We proposed a Transformer-based approach to cluster patients based on all clinical diagnoses, procedures, and medication records in EHR. Statistical machine learning (ML) methods were used for comparative benchmarking. The models were trained on a derivation cohort and assessed for their ability to delineate distinct clusters and prognostic value by comparing one-year all-cause mortality and HF hospitalisation rates among the identified subgroups in a separate validation cohort. Association analyses were conducted to elucidate the clinical characteristics of the derived clusters.

FINDINGS: A total of 379,108 patients were included in the HF subtyping analysis. The Transformer-based approach outperformed alternative methods, delineating more distinct and prognostically valuable clusters. This approach identified seven unique HF patient clusters characterised by differing patterns of mortality, hospitalisation, and comorbidities. These clusters were labelled based on the dominant clinical features present at the initial diagnosis of HF: early-onset, hypertension, ischaemic heart disease, metabolic problems, chronic obstructive pulmonary disease (COPD), thyroid dysfunction, and late-onset clusters. The Transformer-based subtyping approach successfully captured the multifaceted nature of HF.

INTERPRETATION: This study identified seven distinct subtypes, including COPD-related and thyroid dysfunction-related subgroups, which are two high-risk subgroups not recognised in previous subtyping analyses. These insights lay the groundwork for further investigations into tailored and effective management strategies for HF.

FUNDING: British Heart Foundation, European Union - Horizon Europe, and Novo Nordisk Research Centre Oxford.

PMID:40112740 | DOI:10.1016/j.ebiom.2025.105657

Categories: Literature Watch

Intelligent monitoring of fruit and vegetable freshness in supply chain based on 3D printing and lightweight deep convolutional neural networks (DCNN)

Thu, 2025-03-20 06:00

Food Chem. 2025 Mar 15;480:143886. doi: 10.1016/j.foodchem.2025.143886. Online ahead of print.

ABSTRACT

In this study, an innovative intelligent system for supervising the quality of fresh produce was proposed, which combined 3D printing technology and deep convolutional neural networks (DCNN). Through 3D printing technology, sensitive, lightweight, and customizable dual-color CO2 monitoring labels were fabricated using bromothymol blue and methyl red as indicators. These labels were applied to sensitively monitor changes in CO2 levels during the storage of produce such as green vegetables, cucumbers, okra, plums, and jujubes. The ΔE of the labels was found to have a significant positive correlation with CO2 levels and weight loss rate, while showing a strong inverse relationship with hardness, indirectly reflecting the freshness of the produce. In addition, four lightweight DCNN models (GhostNet, MobileNetv2, ShuffleNet, and Xception) were applied to recognize label images from different storage days, with MobileNetv2 achieving the best performance. The classification accuracy for three freshness levels of okra was 96.06 %, 91.12 %, and 93.86 %, respectively. A mobile application was developed based on this model, which demonstrated excellent performance in recognizing labels at different storage stages, making it suitable for practical applications and effectively distinguishing freshness levels. By combining the novel labels with advanced DCNN models, the accuracy and real-time capabilities of food monitoring can be significantly improved.
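The abstract does not state which ΔE formula was used to track label color change; assuming the common CIE76 definition (Euclidean distance in CIELAB space), the computation looks like this, with hypothetical label readings:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB colors (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings of a dual-color CO2 label: day 0 vs. after storage.
fresh = (62.0, -8.0, 30.0)
stored = (55.0, 14.0, 22.0)
print(round(delta_e_cie76(fresh, stored), 2))  # 24.43
```

A rising ΔE over storage days is what the labels correlate with CO2 accumulation and weight loss.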

PMID:40112721 | DOI:10.1016/j.foodchem.2025.143886

Categories: Literature Watch

Light scattering imaging modal expansion cytometry for label-free single-cell analysis with deep learning

Thu, 2025-03-20 06:00

Comput Methods Programs Biomed. 2025 Mar 15;264:108726. doi: 10.1016/j.cmpb.2025.108726. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Single-cell imaging plays a key role in various fields, including drug development, disease diagnosis, and personalized medicine. To obtain multi-modal information from a single-cell image, especially for label-free cells, this study develops modal expansion cytometry for label-free single-cell analysis.

METHODS: The study utilizes a deep learning-based architecture to expand single-mode light scattering images into multi-modality images, including bright-field (non-fluorescent) and fluorescence images, for label-free single-cell analysis. By combining adversarial loss, L1 distance loss, and VGG perceptual loss, a new network optimization method is proposed. The effectiveness of this method is verified by experiments on simulated images, standard spheres of different sizes, and multiple cell types (such as cervical cancer and leukemia cells). Additionally, the capability of this method in single-cell analysis is assessed through multi-modal cell classification experiments, such as cervical cancer subtypes.
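The composite objective described above (adversarial, L1 distance, and VGG perceptual terms) is a weighted sum of three losses. A minimal sketch follows; the weights and the stand-in feature extractor are illustrative assumptions, not the authors' values (the paper uses VGG activations as the feature space):

```python
import numpy as np

def l1_loss(pred, target):
    return float(np.mean(np.abs(pred - target)))

def adversarial_loss(disc_scores):
    # Non-saturating generator loss -log D(G(x)); D outputs lie in (0, 1).
    return float(-np.mean(np.log(disc_scores + 1e-8)))

def perceptual_loss(pred, target, feature_fn):
    # L2 distance in a feature space; feature_fn stands in for a VGG network.
    return float(np.mean((feature_fn(pred) - feature_fn(target)) ** 2))

def generator_loss(pred, target, disc_scores, feature_fn,
                   w_adv=1.0, w_l1=100.0, w_perc=10.0):
    # Illustrative weights; the abstract does not report the actual values.
    return (w_adv * adversarial_loss(disc_scores)
            + w_l1 * l1_loss(pred, target)
            + w_perc * perceptual_loss(pred, target, feature_fn))

# Toy usage with an image-gradient "feature" as the stand-in extractor.
rng = np.random.default_rng(0)
pred = rng.random((8, 8))
target = rng.random((8, 8))
scores = rng.uniform(0.4, 0.9, size=4)
loss = generator_loss(pred, target, scores, lambda im: np.diff(im, axis=0))
print(loss > 0)  # True
```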

RESULTS: The method was demonstrated using both cervical cancer cells and leukemia cells. The expanded bright-field and fluorescence images derived from the light scattering images align closely with those obtained through conventional microscopy, showing a contour ratio near 1 for both the whole cell and its nucleus. Using machine learning, the subtyping of cervical cancer cells achieved 92.85 % accuracy with the modal expansion images, an improvement of nearly 20 % over single-mode light scattering images.

CONCLUSIONS: This study demonstrates that light scattering imaging modal expansion cytometry with deep learning can expand single-mode light scattering images into artificial multimodal images of label-free single cells, which not only provides visualization of cells but also aids cell classification, showing great potential for single-cell analysis tasks such as cancer cell diagnosis.

PMID:40112688 | DOI:10.1016/j.cmpb.2025.108726

Categories: Literature Watch

The impact of training image quality with a novel protocol on artificial intelligence-based LGE-MRI image segmentation for potential atrial fibrillation management

Thu, 2025-03-20 06:00

Comput Methods Programs Biomed. 2025 Mar 15;264:108722. doi: 10.1016/j.cmpb.2025.108722. Online ahead of print.

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting up to 2 % of the population. Catheter ablation is a promising treatment for AF, particularly for paroxysmal AF patients, but it often has high recurrence rates. Developing in silico models of patients' atria during the ablation procedure using cardiac MRI data may help reduce these rates.

OBJECTIVE: This study aims to develop an effective automated deep learning-based segmentation pipeline by compiling a specialized dataset and employing standardized labeling protocols to improve segmentation accuracy and efficiency. In doing so, we aim to achieve the highest possible accuracy and generalization ability while minimizing the burden on clinicians involved in manual data segmentation.

METHODS: We collected LGE-MRI data from VMRC and the cDEMRIS database. Two specialists manually labeled the data using standardized protocols to reduce subjective errors. Neural network (nnU-Net and smpU-Net++) performance was evaluated using statistical tests, including sensitivity and specificity analysis. A new database of LGE-MRI images, based on manual segmentation, was created (VMRC).

RESULTS: Our approach with consistent labeling protocols achieved a Dice coefficient of 92.4 % ± 0.8 % for the cavity and 64.5 % ± 1.9 % for LA walls. Using the pre-trained RIFE model, we attained a Dice score of approximately 89.1 % ± 1.6 % for atrial LGE-MRI imputation, outperforming classical methods. Sensitivity and specificity values demonstrated substantial enhancement in the performance of neural networks trained with the new protocol.
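The Dice coefficient used above to score the cavity and LA-wall segmentations is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2 * intersection / (pred.sum() + truth.sum())

# Toy 1D "masks" standing in for predicted vs. manual LA-wall labels.
pred = np.array([0, 1, 1, 1, 0, 0])
truth = np.array([0, 0, 1, 1, 1, 0])
print(dice(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

Identical masks score 1.0; disjoint masks score 0.0, which is why the metric is reported as a percentage in the results.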

CONCLUSION: Standardized labeling and RIFE applications significantly improved machine learning tool efficiency for constructing 3D LA models. This novel approach supports integrating state-of-the-art machine learning methods into broader in silico pipelines for predicting ablation outcomes in AF patients.

PMID:40112687 | DOI:10.1016/j.cmpb.2025.108722

Categories: Literature Watch

An improved Artificial Protozoa Optimizer for CNN architecture optimization

Thu, 2025-03-20 06:00

Neural Netw. 2025 Mar 13;187:107368. doi: 10.1016/j.neunet.2025.107368. Online ahead of print.

ABSTRACT

In this paper, we propose a novel neural architecture search (NAS) method called MAPOCNN, which leverages an enhanced version of the Artificial Protozoa Optimizer (APO) to optimize the architecture of Convolutional Neural Networks (CNNs). The APO is known for its rapid convergence, high stability, and minimal parameter involvement. To further improve its performance, we introduce MAPO (Modified Artificial Protozoa Optimizer), which incorporates the phototaxis behavior of protozoa. This addition helps mitigate the risk of premature convergence, allowing the algorithm to explore a broader range of possible CNN architectures and ultimately identify more optimal solutions. Through rigorous experimentation on benchmark datasets, including Rectangle and Mnist-random, we demonstrate that MAPOCNN not only achieves faster convergence times but also performs competitively when compared to other state-of-the-art NAS algorithms. The results highlight the effectiveness of MAPOCNN in efficiently discovering CNN architectures that outperform existing methods in terms of both speed and accuracy. This work presents a promising direction for optimizing deep learning architectures using biologically inspired optimization techniques.

PMID:40112636 | DOI:10.1016/j.neunet.2025.107368

Categories: Literature Watch

REDInet: a temporal convolutional network-based classifier for A-to-I RNA editing detection harnessing million known events

Thu, 2025-03-20 06:00

Brief Bioinform. 2025 Mar 4;26(2):bbaf107. doi: 10.1093/bib/bbaf107.

ABSTRACT

A-to-I ribonucleic acid (RNA) editing detection is still a challenging task. Current bioinformatics tools rely on empirical filters and whole genome sequencing or whole exome sequencing data to remove background noise, sequencing errors, and artifacts, and sometimes make use of cumbersome, time-consuming computational procedures. Here, we present REDInet, a temporal convolutional network-based deep learning algorithm, to profile RNA editing in human RNA sequencing (RNAseq) data. It has been trained on REDIportal RNA editing sites, the largest collection of human A-to-I changes, drawn from more than 8000 RNAseq datasets of the Genotype-Tissue Expression (GTEx) project. REDInet can classify editing events with high accuracy, harnessing RNAseq nucleotide frequencies of 101-base windows without the need for coupled genomic data.
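The 101-base windows of per-position nucleotide frequencies that REDInet classifies can be illustrated as follows; this is a schematic sketch of the input representation, not the tool's actual preprocessing code:

```python
def nucleotide_window(pileup, center, half_width=50):
    """Per-position A/C/G/T frequencies for a 101-base window.

    pileup: list of dicts mapping base -> read count at each position,
    a schematic stand-in for counts derived from aligned RNAseq reads.
    """
    window = pileup[center - half_width : center + half_width + 1]
    freqs = []
    for pos in window:
        total = sum(pos.values()) or 1
        freqs.append([pos.get(base, 0) / total for base in "ACGT"])
    return freqs  # 101 rows x 4 frequencies per candidate site

# Schematic pileup: 200 reference-A positions, with one mixed A/G site,
# the editing-like signal such a classifier is trained to recognize.
pileup = [{"A": 30} for _ in range(200)]
pileup[100] = {"A": 18, "G": 12}
features = nucleotide_window(pileup, center=100)
print(len(features), features[50])  # 101 [0.6, 0.0, 0.4, 0.0]
```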

PMID:40112338 | DOI:10.1093/bib/bbaf107

Categories: Literature Watch

Deep learning analysis of magnetic resonance imaging accurately detects early-stage perihilar cholangiocarcinoma in patients with primary sclerosing cholangitis

Thu, 2025-03-20 06:00

Hepatology. 2025 Mar 20. doi: 10.1097/HEP.0000000000001314. Online ahead of print.

ABSTRACT

BACKGROUND AND AIMS: Among those with primary sclerosing cholangitis (PSC), perihilar cholangiocarcinoma (pCCA) is often diagnosed at a late stage and is a leading source of mortality. Detection of pCCA in PSC when curative action can be taken is challenging. Our aim was to create a deep learning model that analyzed magnetic resonance imaging (MRI) to detect early-stage pCCA and compare its diagnostic performance with that of expert radiologists.

APPROACH AND RESULTS: We conducted a multicenter, international, retrospective cohort study involving adults with large duct PSC who underwent contrast-enhanced MRI. Senior abdominal radiologists reviewed the images. All patients with pCCA had early-stage cancer and were registered for liver transplantation. We trained a 3D DenseNet-121 model, a form of deep learning, using MRI images and assessed its performance in a separate test cohort. The study included 398 patients (training cohort n=150; test cohort n=248). pCCA was present in 230 individuals (training cohort n=64; test cohort n=166). In the test cohort, the respective performances of the model compared to the radiologists were: sensitivity 87.9% versus 50.0%, p<0.001; specificity 84.1% versus 100.0%, p<0.001; area under the receiver operating characteristic curve 86.0% versus 75.0%, p<0.001. Even when a mass was absent, the model had a higher sensitivity for pCCA than radiologists (91.6% vs. 50.6%, p<0.001) and maintained good specificity (84.1%).

CONCLUSION: The 3D DenseNet-121 MRI model effectively detects early-stage pCCA in PSC patients. Compared to expert radiologists, the model missed fewer cases of cancer.

PMID:40112296 | DOI:10.1097/HEP.0000000000001314

Categories: Literature Watch

Utility-based Analysis of Statistical Approaches and Deep Learning Models for Synthetic Data Generation With Focus on Correlation Structures: Algorithm Development and Validation

Thu, 2025-03-20 06:00

JMIR AI. 2025 Mar 20;4:e65729. doi: 10.2196/65729.

ABSTRACT

BACKGROUND: Recent advancements in Generative Adversarial Networks and large language models (LLMs) have significantly advanced the synthesis and augmentation of medical data. These and other deep learning-based methods offer promising potential for generating high-quality, realistic datasets crucial for improving machine learning applications in health care, particularly in contexts where data privacy and availability are limiting factors. However, challenges remain in accurately capturing the complex associations inherent in medical datasets.

OBJECTIVE: This study evaluates the effectiveness of various Synthetic Data Generation (SDG) methods in replicating the correlation structures inherent in real medical datasets. In addition, it examines their performance in downstream tasks using Random Forests (RFs) as the benchmark model. To provide a comprehensive analysis, alternative models such as eXtreme Gradient Boosting and Gated Additive Tree Ensembles are also considered. We compare the following SDG approaches: Synthetic Populations in R (synthpop), copula, copulagan, Conditional Tabular Generative Adversarial Network (ctgan), tabular variational autoencoder (tvae), and tabula for LLMs.

METHODS: We evaluated synthetic data generation methods using both real-world and simulated datasets. Simulated data consist of 10 Gaussian variables and one binary target variable with varying correlation structures, generated via Cholesky decomposition. Real-world datasets include the body performance dataset with 13,393 samples for fitness classification, the Wisconsin Breast Cancer dataset with 569 samples for tumor diagnosis, and the diabetes dataset with 768 samples for diabetes prediction. Data quality is evaluated by comparing correlation matrices, the propensity score mean-squared error (pMSE) for general utility, and F1-scores for downstream tasks as a specific utility metric, using training on synthetic data and testing on real data.
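The Cholesky construction of correlated Gaussian simulation data mentioned above works by mixing independent standard normals through the lower-triangular factor of the target correlation matrix. A minimal sketch, with an assumed 3-variable matrix rather than the study's 10:

```python
import numpy as np

rng = np.random.default_rng(42)

# Target correlation matrix (illustrative values).
corr = np.array([
    [1.0, 0.6, 0.3],
    [0.6, 1.0, 0.5],
    [0.3, 0.5, 1.0],
])

# Lower-triangular factor with L @ L.T == corr.
L = np.linalg.cholesky(corr)

# Independent standard normals become correlated after mixing through L.
z = rng.standard_normal((100_000, 3))
x = z @ L.T

empirical = np.corrcoef(x, rowvar=False)
print(np.round(empirical, 2))  # close to the target matrix
```

Varying the off-diagonal entries is how a simulation like this sweeps "correlation complexity" while keeping the marginals Gaussian.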

RESULTS: Our simulation study, supplemented with real-world data analyses, shows that the statistical methods copula and synthpop consistently outperform deep learning approaches across various sample sizes and correlation complexities, with synthpop being the most effective. Deep learning methods, including LLMs, show mixed performance, particularly with smaller datasets or limited training epochs. LLMs often struggle to replicate numerical dependencies effectively. In contrast, methods like tvae with 10,000 epochs perform comparably well. On the body performance dataset, copulagan achieves the best performance in terms of pMSE. The results also highlight that model utility depends more on the relative correlations between features and the target variable than on the absolute magnitude of correlation matrix differences.

CONCLUSIONS: Statistical methods, particularly synthpop, demonstrate superior robustness and utility preservation for synthetic tabular data compared with deep learning approaches. Copula methods show potential but face limitations with integer variables, and deep learning methods underperform in this context. Overall, these findings underscore the dominance of statistical methods for synthetic tabular data generation, while highlighting the niche potential of deep learning approaches for highly complex datasets, provided adequate resources and tuning.

PMID:40112290 | DOI:10.2196/65729

Categories: Literature Watch

Performance evaluation of reduced complexity deep neural networks

Thu, 2025-03-20 06:00

PLoS One. 2025 Mar 20;20(3):e0319859. doi: 10.1371/journal.pone.0319859. eCollection 2025.

ABSTRACT

Deep Neural Networks (DNN) have achieved state-of-the-art performance in medical image classification and are increasingly being used for disease diagnosis. However, these models are complex, which necessitates reducing their complexity for use in increasingly common low-power edge applications. Model complexity reduction techniques typically comprise time-consuming operations and are often associated with a loss of model performance in proportion to the model size reduction. In this paper, we propose a simplified model complexity reduction technique based on reducing the number of channels for any DNN, and demonstrate the complexity reduction approaches for integrating the ResNet-50 model into low-power devices. The performance of the proposed models was evaluated for multiclass classification of CXR images into normal, pneumonia, and COVID-19 classes. We demonstrate successive size reductions of 75%, 87%, and 93% with acceptable classification performance reductions of 0.5%, 0.5%, and 0.8%, respectively. We also provide results for model generalization and visualization with Grad-CAM at an acceptable performance and interpretability level. In addition, a theoretical VLSI architecture for the best performing model is presented.
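Reducing channel counts shrinks convolutional parameter counts roughly quadratically, since a k x k convolution carries k·k·C_in·C_out weights. The sketch below illustrates this with an arbitrary chain of layer widths, not the paper's exact ResNet-50 configuration:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution, ignoring biases."""
    return k * k * c_in * c_out

def total_params(channels, k=3):
    """Parameters of a simple chain of conv layers with the given widths."""
    return sum(conv_params(k, a, b) for a, b in zip(channels, channels[1:]))

full = [64, 128, 256, 512]        # baseline channel widths (illustrative)
halved = [c // 2 for c in full]   # every width halved

base = total_params(full)
small = total_params(halved)
print(f"{1 - small / base:.0%} fewer parameters")  # 75% fewer parameters
```

Halving both input and output channels of every layer divides each layer's weight count by four, which is why modest channel reductions yield the large size reductions reported above.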

PMID:40112278 | DOI:10.1371/journal.pone.0319859

Categories: Literature Watch

Psychedelic Drugs in Mental Disorders: Current Clinical Scope and Deep Learning-Based Advanced Perspectives

Thu, 2025-03-20 06:00

Adv Sci (Weinh). 2025 Mar 20:e2413786. doi: 10.1002/advs.202413786. Online ahead of print.

ABSTRACT

Mental disorders are a representative type of brain disorder, including anxiety, major depressive disorder (MDD), and autism spectrum disorder (ASD), that are caused by multiple etiologies, including genetic heterogeneity, epigenetic dysregulation, and aberrant morphological and biochemical conditions. Psychedelic drugs such as psilocybin and lysergic acid diethylamide (LSD) have re-emerged as compelling treatment options and have gradually demonstrated potential therapeutic effects in mental disorders. However, the multifaceted conditions of psychiatric disorders, resulting from individuality, complex genetic interplay, and intricate neural circuits, affect the systemic pharmacology of psychedelics and disturb the integration of mechanisms, which may result in dissimilar medicinal efficiency. The precise prescription of psychedelic drugs remains unclear, and advanced approaches are needed to optimize drug development. Here, recent studies demonstrating the diverse pharmacological effects of psychedelics in mental disorders are reviewed, and emerging perspectives on structural function, the microbiota-gut-brain axis, and the transcriptome are discussed. Moreover, the applicability of deep learning is highlighted for the development of drugs on the basis of big data. These approaches may provide insight into pharmacological mechanisms and interindividual factors to enhance drug discovery and development for advanced precision medicine.

PMID:40112231 | DOI:10.1002/advs.202413786

Categories: Literature Watch

Uncovering water conservation patterns in semi-arid regions through hydrological simulation and deep learning

Thu, 2025-03-20 06:00

PLoS One. 2025 Mar 20;20(3):e0319540. doi: 10.1371/journal.pone.0319540. eCollection 2025.

ABSTRACT

Under the increasing pressure of global climate change, water conservation (WC) in semi-arid regions is experiencing unprecedented levels of stress. WC involves complex, nonlinear interactions among ecosystem components like vegetation, soil structure, and topography, complicating research. This study introduces a novel approach combining InVEST modeling, spatiotemporal transfer of Water Conservation Reserves (WCR), and deep learning to uncover regional WC patterns and driving mechanisms. The InVEST model evaluates Xiong'an New Area's WC characteristics from 2000 to 2020, showing a 74% average increase in WC depth with an inverted "V" spatial distribution. Spatiotemporal analysis identifies temporal changes, spatial patterns of WCR and land use, and key protection areas, revealing that the WCR in Xiong'an New Area primarily shifts from the lowest WCR areas to lower WCR areas. The potential enhancement areas of WCR are concentrated in the northern region. Deep learning quantifies data complexity, highlighting critical factors like land use, precipitation, and drought influencing WC. This detailed approach enables the development of personalized WC zones and strategies, offering new insights into managing complex spatial and temporal WC data.

PMID:40112018 | DOI:10.1371/journal.pone.0319540

Categories: Literature Watch

Extreme heat prediction through deep learning and explainable AI

Thu, 2025-03-20 06:00

PLoS One. 2025 Mar 20;20(3):e0316367. doi: 10.1371/journal.pone.0316367. eCollection 2025.

ABSTRACT

Extreme heat waves are prompting widespread calls for comprehensive studies on their ecological and societal implications. With the ongoing rise in global temperatures, precise forecasting of heatwaves becomes increasingly crucial for proactive planning and ensuring safety. This study investigates the efficacy of deep learning (DL) models, including the Artificial Neural Network (ANN), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM), using five years of meteorological data from the Pakistan Meteorological Department (PMD), integrating Explainable AI (XAI) techniques to enhance the interpretability of the models. Although weather forecasting has advanced in predicting sunshine, rain, clouds, and general weather patterns, the study of extreme heat, particularly using advanced computer models, remains largely unexplored; overlooking this gap risks significant disruptions in daily life. Our study addresses this gap by collecting a five-year weather dataset and developing a comprehensive framework integrating DL and XAI models for extreme heat prediction. Key variables such as temperature, pressure, humidity, wind, and precipitation are examined. Our findings demonstrate that the LSTM model outperforms the others at a lead time of 1-3 days with minimal error metrics, achieving an accuracy of 96.2%. Through the SHAP and LIME XAI methods, we elucidate the significance of humidity and maximum temperature in accurately predicting extreme heat events. Overall, this study emphasizes the importance of investigating intricate DL models that integrate XAI for the prediction of extreme heat. Making these models interpretable allows us to identify important parameters, improving heatwave forecasting accuracy and guiding risk-reduction strategies.
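SHAP and LIME attribute a model's predictions to its input features. A simpler permutation-importance sketch conveys the same idea of ranking meteorological drivers; the toy linear "model" and synthetic data below are illustrative assumptions, not the study's LSTM or dataset:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: a heat index driven mainly by temperature and humidity.
n = 5_000
temp = rng.normal(35, 5, n)
humidity = rng.normal(60, 10, n)
wind = rng.normal(10, 3, n)
X = np.column_stack([temp, humidity, wind])
y = 0.7 * temp + 0.3 * humidity + rng.normal(0, 1, n)

def model(X):
    # Known linear rule standing in for a trained network.
    return 0.7 * X[:, 0] + 0.3 * X[:, 1]

def permutation_importance(model, X, y):
    """MSE increase when each feature column is shuffled in turn."""
    base = np.mean((model(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((model(Xp) - y) ** 2) - base)
    return scores

imp = permutation_importance(model, X, y)
print([round(s, 1) for s in imp])  # temp > humidity > wind (wind: 0.0)
```

Shuffling an influential feature degrades predictions sharply, while shuffling an unused one (wind here) changes nothing, mirroring how the XAI analysis singles out humidity and maximum temperature.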

PMID:40111979 | DOI:10.1371/journal.pone.0316367

Categories: Literature Watch

Data-driven cultural background fusion for environmental art image classification: Technical support of the dual Kernel squeeze and excitation network

Thu, 2025-03-20 06:00

PLoS One. 2025 Mar 20;20(3):e0313946. doi: 10.1371/journal.pone.0313946. eCollection 2025.

ABSTRACT

This study explores a data-driven cultural background fusion method to improve the accuracy of environmental art image classification. A novel Dual Kernel Squeeze and Excitation Network (DKSE-Net) model is proposed for the complex cultural backgrounds and diverse visual representations found in environmental art images. The model combines the adaptive receptive-field adjustment of the Selective Kernel Network (SKNet) with the channel-feature enhancement of the Squeeze and Excitation Network (SENet). The resulting DKSE module comprehensively extracts both global and local image features and employs several techniques, such as dilated convolution, L2 regularization, and Dropout, across its multi-layer convolution process. Firstly, dilated convolution is introduced in the initial layer of the model to enhance feature capture from the original art images. Secondly, the pointwise convolution is constrained by L2 regularization, improving the accuracy and stability of the convolution. Finally, Dropout randomly discards feature maps before and after global average pooling to prevent overfitting and improve the model's generalization ability. On this basis, the Rectified Linear Unit activation function and depthwise convolution are introduced after the second convolutional layer, with batch normalization applied to improve the efficiency and robustness of feature extraction. The experimental results indicate that the proposed DKSE-Net model significantly outperforms traditional Convolutional Neural Networks (CNNs) and other state-of-the-art models on environmental art image classification. Specifically, DKSE-Net achieves a classification accuracy of 92.7%, 3.5 percentage points higher than the comparative models. Moreover, when processing images with complex cultural backgrounds, DKSE-Net effectively integrates different cultural features, achieving higher classification accuracy and stability. This performance improvement provides an important reference for image classification research based on the fusion of cultural backgrounds and demonstrates the broad potential of deep learning in the environmental art field.
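The squeeze-and-excitation mechanism that DKSE-Net builds on can be sketched in a few lines of numpy (a generic SE block, not the paper's implementation; the layer sizes and reduction ratio are illustrative assumptions): global average pooling "squeezes" each channel to a scalar, two small fully connected layers plus a sigmoid produce per-channel weights, and those weights rescale the feature map.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention (numpy sketch).

    feature_map: (C, H, W) activations; w1: (C//r, C); w2: (C, C//r).
    Squeeze: global average pooling per channel.
    Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel
    weights in (0, 1), which rescale the original feature map.
    """
    squeeze = feature_map.mean(axis=(1, 2))             # (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # (C,)
    return feature_map * excite[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2  # channels and reduction ratio (illustrative)
fmap = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = se_block(fmap, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the excitation weights lie in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.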

PMID:40111961 | DOI:10.1371/journal.pone.0313946

Categories: Literature Watch

A Unified Framework for Dynamics Modeling and Control Design Using Deep Learning With Side Information on Stabilizability

Thu, 2025-03-20 06:00

IEEE Trans Neural Netw Learn Syst. 2025 Mar 20;PP. doi: 10.1109/TNNLS.2025.3543926. Online ahead of print.

ABSTRACT

This article presents a unified framework for dynamics modeling and control design using deep learning, focusing on incorporating prior side information on stabilizability. Control theory provides systematic techniques for designing feedback systems while ensuring fundamental properties such as stabilizability, which are crucial for practical control applications. However, conventional data-driven approaches often overlook or struggle to explicitly incorporate such control properties into learned models. To address this, we introduce a novel neural network (NN)-based approach that concurrently learns the system dynamics, a stabilizing feedback controller, and a Lyapunov function for the closed-loop system, thereby explicitly guaranteeing stabilizability in the learned model. Our proposed deep learning framework is versatile and applicable across a wide range of control problems, including safety control, L2-gain control, passivation, and solutions to Hamilton-Jacobi inequalities. By embedding stabilizability as a core property within the learning process, our method allows for the development of learned models that are not only data-driven but also grounded in control-theoretic guarantees, greatly enhancing their utility in real-world control applications. This article includes examples that demonstrate the effectiveness of this approach, showcasing the stability and control performance improvements achieved in various control scenarios. The methods proposed in this article can also be applied to modeling alone, without control design. The code has been open-sourced and is available at https://github.com/kashctrl/Deep_Stabilizable_Models.
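The core certificate the paper learns jointly with the dynamics can be illustrated with a minimal numpy sketch (a generic discrete-time check, not the paper's method): given a candidate closed-loop dynamics matrix and a quadratic Lyapunov candidate V(x) = x^T P x, verify that V strictly decreases along simulated trajectories. The matrices and initial state below are toy assumptions.

```python
import numpy as np

def lyapunov_decrease(A_cl, P, x0, steps=20):
    """Check V(x) = x^T P x decreases along x_{k+1} = A_cl x_k.

    Returning True means every step strictly decreased the candidate
    Lyapunov value along this trajectory, the discrete-time analogue
    of the condition enforced jointly on the learned dynamics,
    controller, and Lyapunov function.
    """
    x = np.asarray(x0, dtype=float)
    V = x @ P @ x
    for _ in range(steps):
        x = A_cl @ x
        V_next = x @ P @ x
        if V_next >= V:
            return False
        V = V_next
    return True

A_cl = np.array([[0.5, 0.1], [0.0, 0.6]])  # stable toy closed loop
P = np.eye(2)                              # Lyapunov candidate
print(lyapunov_decrease(A_cl, P, x0=[1.0, -1.0]))  # True
```

In the learned setting, the decrease condition is imposed as a training-time constraint over sampled states rather than checked a posteriori on single trajectories.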

PMID:40111782 | DOI:10.1109/TNNLS.2025.3543926

Categories: Literature Watch

Multi-modal deep representation learning accurately identifies and interprets drug-target interactions

Thu, 2025-03-20 06:00

IEEE J Biomed Health Inform. 2025 Mar 20;PP. doi: 10.1109/JBHI.2025.3553217. Online ahead of print.

ABSTRACT

Deep learning offers efficient solutions for drug-target interaction prediction, but current methods often fail to capture the full complexity of multi-modal data (i.e., sequences, graphs, and three-dimensional structures), limiting both performance and generalization. Here, we present UnitedDTA, a novel explainable deep learning framework capable of integrating multi-modal biomolecule data to improve binding affinity prediction, especially for novel (unseen) drugs and targets. UnitedDTA enables the automatic learning of unified discriminative representations from multi-modal data via contrastive learning, with cross-attention mechanisms for cross-modality alignment and integration. Comparative results on multiple benchmark datasets show that UnitedDTA significantly outperforms state-of-the-art drug-target affinity prediction methods and exhibits better generalization in predicting unseen drug-target pairs. More importantly, unlike most "black-box" deep learning methods, our model offers better interpretability, enabling us to directly infer the important substructures of drug-target complexes that influence binding activity and thus providing insights into binding preferences. Moreover, by extending UnitedDTA to other downstream tasks (e.g., molecular property prediction), we show that the proposed multi-modal representation learning captures latent molecular representations closely associated with molecular properties, demonstrating broad application potential for advancing the drug discovery process.
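The cross-attention mechanism used for cross-modality alignment can be sketched generically in numpy (not UnitedDTA's implementation; the token counts and embedding dimension are illustrative assumptions): each token from one modality, e.g. a drug substructure embedding, attends over all tokens of the other, e.g. target residue embeddings, yielding a fused representation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head cross-attention (numpy sketch).

    queries: (n_q, d), e.g. drug substructure embeddings;
    keys/values: (n_k, d), e.g. target residue embeddings.
    Each query token attends over all key tokens, producing a
    drug representation aligned with the target modality.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # (n_q, n_k)
    weights = softmax(scores, axis=-1)      # each row sums to 1
    return weights @ values                 # (n_q, d)

rng = np.random.default_rng(1)
drug = rng.standard_normal((5, 16))    # 5 drug tokens (toy)
target = rng.standard_normal((9, 16))  # 9 target tokens (toy)
fused = cross_attention(drug, target, target)
print(fused.shape)  # (5, 16)
```

The attention weight matrix is also what makes such models inspectable: high-weight (drug token, target token) pairs point at the substructures driving a predicted interaction.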

PMID:40111772 | DOI:10.1109/JBHI.2025.3553217

Categories: Literature Watch
