Deep learning
CodonTransformer: a multispecies codon optimizer using context-aware neural networks
Nat Commun. 2025 Apr 3;16(1):3205. doi: 10.1038/s41467-025-58588-7.
ABSTRACT
Degeneracy in the genetic code allows many possible DNA sequences to encode the same protein. Optimizing codon usage within a sequence to meet organism-specific preferences faces a combinatorial explosion. Nevertheless, natural sequences optimized through evolution provide a rich source of data for machine learning algorithms to explore the underlying rules. Here, we introduce CodonTransformer, a multispecies deep learning model trained on over 1 million DNA-protein pairs from 164 organisms spanning all domains of life. The model demonstrates context awareness thanks to its Transformer architecture and to our sequence representation strategy that combines organism, amino acid, and codon encodings. CodonTransformer generates host-specific DNA sequences with natural-like codon distribution profiles and a minimal number of negative cis-regulatory elements. This work introduces the strategy of Shared Token Representation and Encoding with Aligned Multi-masking (STREAM) and provides a codon optimization framework with a customizable open-access model and a user-friendly Google Colab interface.
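The combined organism/amino-acid/codon representation can be pictured with a toy sketch. The token format, organism tag, and function name below are illustrative assumptions, not CodonTransformer's actual vocabulary:

```python
# Illustrative sketch (not the model's real token scheme): each position is a
# combined amino-acid/codon token, with an organism tag prepended so the model
# can condition on the host species; "___" marks a codon to be predicted.
def stream_tokens(protein, organism, codons=None):
    tokens = [f"[{organism}]"]                    # host-species token
    for i, aa in enumerate(protein):
        codon = codons[i] if codons else "___"    # mask unknown codons
        tokens.append(f"{aa}_{codon}")
    return tokens

# Inference-time input: codons unknown, to be filled in by the model
masked = stream_tokens("MKF", "E.coli")
# Training-time input: codons known from a natural coding sequence
paired = stream_tokens("MK", "E.coli", ["ATG", "AAA"])
```

The point of the shared representation is that masked codon slots still carry the amino-acid identity, so the model predicts codons jointly rather than one at a time.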
PMID:40180930 | DOI:10.1038/s41467-025-58588-7
Efficacy of a deep learning-based software for chest X-ray analysis in an emergency department
Diagn Interv Imaging. 2025 Apr 3:S2211-5684(25)00067-1. doi: 10.1016/j.diii.2025.03.007. Online ahead of print.
ABSTRACT
PURPOSE: The purpose of this study was to evaluate the efficacy of a deep learning (DL)-based computer-aided detection (CAD) system for the detection of abnormalities on chest X-rays performed in an emergency department setting, where readers have access to relevant clinical information.
MATERIALS AND METHODS: Four hundred and four consecutive chest X-rays performed over a two-month period in patients presenting to an emergency department with respiratory symptoms were retrospectively collected. Five readers (two radiologists, three emergency physicians) with access to clinical information were asked to identify five abnormalities (i.e., consolidation, lung nodule, pleural effusion, pneumothorax, mediastinal/hilar mass) in the dataset without assistance, and then, after a 2-week period, with the assistance of a DL-based CAD system. The reference standard was a chest X-ray consensus review by two experienced radiologists. Reader performances were compared between the reading sessions, and interobserver agreement was assessed using Fleiss' kappa.
RESULTS: The dataset included 118 occurrences of the five abnormalities in 103 chest X-rays. The CAD system improved sensitivity for consolidation, pleural effusion, and nodule, with absolute differences of 8.3 % (95 % CI: 3.8-12.7; P < 0.001), 7.9 % (95 % CI: 1.7-14.1; P = 0.012), and 29.5 % (95 % CI: 19.8-38.2; P < 0.001), respectively. Specificity was greater than 89 % for all abnormalities and showed a minimal but significant decrease with DL for nodules and mediastinal/hilar masses (-1.8 % [95 % CI: -2.7 to -0.9]; P < 0.001 and -0.8 % [95 % CI: -1.5 to -0.2]; P = 0.005). Inter-observer agreement improved with DL, with kappa values ranging from 0.40 [95 % CI: 0.37-0.43] for mediastinal/hilar mass to 0.84 [95 % CI: 0.81-0.87] for pneumothorax.
CONCLUSION: Our results suggest that DL-assisted reading increases the sensitivity for detecting important chest X-ray abnormalities in the emergency department, even when clinical information is available to the radiologist.
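Interobserver agreement of the kind reported above is computed with Fleiss' kappa. A minimal pure-Python sketch (the example rating matrices are invented):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a ratings matrix where counts[i][j] is the number
    of raters assigning subject i to category j (equal raters per subject)."""
    N = len(counts)                       # subjects
    k = len(counts[0])                    # categories
    n = sum(counts[0])                    # raters per subject
    # overall proportion of assignments per category
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # per-subject observed agreement
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)         # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Five raters, two subjects, two categories: perfect agreement -> kappa = 1.0
kappa = fleiss_kappa([[5, 0], [0, 5]])
```

Values near 0.4 (as for mediastinal/hilar mass above) indicate moderate agreement; values above 0.8 (pneumothorax) indicate almost perfect agreement on the common Landis-Koch scale.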
PMID:40180796 | DOI:10.1016/j.diii.2025.03.007
GCN-BBB: Deep Learning Blood-Brain Barrier (BBB) Permeability PharmacoAnalytics with Graph Convolutional Neural (GCN) Network
AAPS J. 2025 Apr 3;27(3):73. doi: 10.1208/s12248-025-01059-0.
ABSTRACT
The Blood-Brain Barrier (BBB) is a selective barrier between the Central Nervous System (CNS) and the peripheral system, regulating the distribution of molecules. BBB permeability has been crucial in CNS-targeting drug development, such as glioblastoma-related drug discovery. Other CNS diseases, such as Alzheimer's Disease (AD) and drug abuse, still present significant challenges. Conversely, cannabinoid drugs that do not cross the BBB are needed to avoid off-target CNS psychotropic effects. In vitro and in vivo experiments measuring BBB permeability are costly and low throughput. Computational pharmacoanalytics modeling, particularly using deep-learning Graph Neural Networks (GNNs), offers a promising alternative. GNNs excel at capturing intricate relationships in graph-structured information, such as small-molecule structures. In this study, we developed GNN models for BBB permeability using graph representations of drugs and compared them with other algorithms using molecular fingerprints or physicochemical descriptors. With a dataset of 1924 molecules, the best GNN model, a graph convolutional network using a normalized Laplacian matrix (GCN_2), achieved a precision of 0.94, recall of 0.96, F1 score of 0.95, and MCC of 0.77, outperforming machine learning algorithms based on molecular fingerprints. The findings indicate that the graph representation of small molecules combined with a GNN architecture is powerful in predicting BBB permeability with high precision and recall. The developed GNN model can be utilized in the initial screening stage of new drug development.
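The core operation of a graph convolutional layer such as the GCN_2 described above can be sketched with NumPy. The toy molecular graph and identity weights are illustrative, and the paper's exact normalization may differ:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 · H · W).
    A: adjacency (n x n), H: node features (n x f), W: weights (f x f')."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# Toy 3-atom chain (e.g. a fragment of a drug molecule), 2 features per atom
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.eye(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
```

Stacking a few such layers lets each atom's representation absorb information from progressively larger neighborhoods before a readout predicts BBB permeability for the whole molecule.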
PMID:40180695 | DOI:10.1208/s12248-025-01059-0
Advancing Visual Perception Through VCANet-Crossover Osprey Algorithm: Integrating Visual Technologies
J Imaging Inform Med. 2025 Apr 3. doi: 10.1007/s10278-025-01467-w. Online ahead of print.
ABSTRACT
Diabetic retinopathy (DR) is a significant vision-threatening condition, necessitating accurate and efficient automated screening methods. Traditional deep learning (DL) models struggle to detect subtle lesions and suffer from high computational complexity. Existing models primarily mimic the primary visual cortex (V1) of the human visual system, neglecting other higher-order processing regions. To overcome these limitations, this research introduces the vision core-adapted network-based crossover osprey algorithm (VCANet-COP) for subtle lesion recognition with better computational efficiency. The model integrates sparse autoencoders (SAEs) to extract vascular structures and lesion-specific features at the pixel level for improved abnormality detection. The front-end network in the VCANet emulates the V1, V2, V4, and inferotemporal (IT) regions to derive subtle lesions effectively and improve lesion detection accuracy. Additionally, the COP algorithm, which leverages the osprey optimization algorithm (OOA) with a crossover strategy, optimizes hyperparameters and network configurations to ensure better computational efficiency, faster convergence, and enhanced performance in lesion recognition. Experimental assessment of the VCANet-COP model on multiple DR datasets, namely Diabetic_Retinopathy_Data (DR-Data), the Structured Analysis of the Retina (STARE) dataset, the Indian Diabetic Retinopathy Image Dataset (IDRiD), the Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and the Retinal Fundus Multi-disease Image Dataset (RFMiD), demonstrates superior performance over baseline methods, namely EDLDR, FFU_Net, LSTM_MFORG, fundus-DeepNet, and CNN_SVD, achieving average outcomes of 98.14% accuracy, 97.9% sensitivity, 98.08% specificity, 98.4% precision, 98.1% F1-score, 96.2% kappa coefficient, 2.0% false positive rate (FPR), 2.1% false negative rate (FNR), and 1.5-s execution time. By addressing these critical limitations, VCANet-COP provides a scalable and robust solution for real-world DR screening and clinical decision support.
PMID:40180632 | DOI:10.1007/s10278-025-01467-w
Opportunities and Barriers to Artificial Intelligence Adoption in Palliative/Hospice Care for Underrepresented Groups: A Technology Acceptance Model-Based Review
J Hosp Palliat Nurs. 2025 Apr 2. doi: 10.1097/NJH.0000000000001120. Online ahead of print.
ABSTRACT
Underrepresented groups (URGs) in the United States, including African Americans, Latino/Hispanic Americans, Asian Pacific Islanders, and Native Americans, face significant barriers to accessing hospice and palliative care. Factors such as language barriers, cultural perceptions, and mistrust in healthcare systems contribute to the underutilization of these services. Recent advancements in artificial intelligence (AI) offer potential solutions to these challenges by enhancing cultural sensitivity, improving communication, and personalizing care. This article aims to synthesize the literature on AI in palliative/hospice care for URGs through the Technology Acceptance Model (TAM), highlighting current research and application in practice. The scoping review methodology, based on the framework developed by Arksey and O'Malley, was applied to rapidly map the field of AI in palliative and hospice care. A systematic search was conducted in 9 databases to identify studies examining AI applications in hospice and palliative care for URGs. Articles were independently assessed by 2 reviewers and then synthesized via narrative review through the lens of the TAM framework, which focuses on technology acceptance factors such as perceived ease of use and usefulness. Seventeen studies were identified. Findings suggest that AI has the potential to improve decision-making, enhance timely palliative care referrals, and bridge language and cultural gaps. Artificial intelligence tools were found to improve predictive accuracy, support serious illness communication, and assist in addressing language barriers, thus promoting equitable care for URGs. However, barriers such as limited generalizability, biases in data, and challenges in infrastructure were noted, hindering the full adoption of AI in hospice settings. 
Artificial intelligence has transformative potential to improve hospice care for URGs by enhancing cultural sensitivity, improving communication, and enabling more timely interventions. However, to fully realize its potential, AI solutions must address data biases, infrastructure limitations, and cultural nuances. Future research should prioritize developing culturally competent AI tools that are transparent, explainable, and scalable to ensure equitable access to hospice and palliative care services for all populations.
PMID:40179379 | DOI:10.1097/NJH.0000000000001120
SpaMask: Dual masking graph autoencoder with contrastive learning for spatial transcriptomics
PLoS Comput Biol. 2025 Apr 3;21(4):e1012881. doi: 10.1371/journal.pcbi.1012881. eCollection 2025 Apr.
ABSTRACT
Understanding the spatial locations of cells within tissues is crucial for unraveling the organization of cellular diversity. Recent advancements in spatially resolved transcriptomics (SRT) have enabled the analysis of gene expression while preserving the spatial context within tissues. Spatial domain characterization is a critical first step in SRT data analysis, providing the foundation for subsequent analyses and insights into biological implications. Graph neural networks (GNNs) have emerged as a common tool for addressing this challenge due to the structural nature of SRT data. However, current graph-based deep learning approaches often overlook the instability caused by the high sparsity of SRT data. Masking mechanisms, as an effective self-supervised learning strategy, can enhance the robustness of these models. To this end, we propose SpaMask, a dual-masking graph autoencoder with contrastive learning for SRT analysis. Unlike previous GNNs, SpaMask masks a portion of spot nodes and spot-to-spot edges to enhance its performance and robustness. SpaMask combines Masked Graph Autoencoder (MGAE) and Masked Graph Contrastive Learning (MGCL) modules: MGAE uses node masking to leverage spatial neighbors for improved clustering accuracy, while MGCL applies edge masking to create a contrastive loss framework that tightens embeddings of adjacent nodes based on spatial proximity and feature similarity. We conducted a comprehensive evaluation of SpaMask on eight datasets from five different platforms. Compared to existing methods, SpaMask achieves superior clustering accuracy and effective batch correction.
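The dual-masking idea can be sketched in a few lines; the ratios and the uniform random strategy here are illustrative assumptions, not SpaMask's actual implementation:

```python
import random

def mask_graph(n_nodes, edges, node_ratio=0.3, edge_ratio=0.3, seed=0):
    """Dual-masking sketch: hide a fraction of spot nodes (reconstruction
    targets for an MGAE-style branch) and drop a fraction of spot-to-spot
    edges (to build the corrupted view used by an MGCL-style branch)."""
    rng = random.Random(seed)
    masked_nodes = set(rng.sample(range(n_nodes), int(n_nodes * node_ratio)))
    kept_edges = [e for e in edges if rng.random() > edge_ratio]
    return masked_nodes, kept_edges

masked, kept = mask_graph(10, [(0, 1), (1, 2), (2, 3)], seed=0)
```

Reconstructing the hidden nodes from their surviving spatial neighbors is what forces the encoder to become robust to the sparsity the abstract highlights.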
PMID:40179332 | DOI:10.1371/journal.pcbi.1012881
3D Hyperspectral Data Analysis with Spatially Aware Deep Learning for Diagnostic Applications
Anal Chem. 2025 Apr 3. doi: 10.1021/acs.analchem.4c05549. Online ahead of print.
ABSTRACT
With the rise of artificial intelligence (AI), deep learning algorithms play an increasingly important role in many traditional fields of research and have recently spread into data analysis for Raman spectroscopy. However, most current methods classify 1-dimensional (1D) spectra individually, without considering neighboring information in space. Despite some successes, this type of method wastes the 3-dimensional (3D) structure of Raman hyperspectral scans. Therefore, to investigate the feasibility of preserving spatial information in Raman spectroscopic data analysis, spatially aware deep learning algorithms were applied to a colorectal tissue dataset with 3D Raman hyperspectral scans. This dataset contains Raman spectra from normal, hyperplasia, adenoma, and carcinoma tissues as well as artifacts. First, a modified version of 3D U-Net was utilized for segmentation; second, a convolutional neural network (CNN) using 3D Raman patches was utilized for pixel-wise classification. Both methods were compared with the conventional 1D CNN method, which served as the baseline. Based on the results of both epithelial tissue detection and colorectal cancer detection, using spatially neighboring information in 3D Raman scans can increase the performance of deep learning models, although it might also increase the complexity of network training. Beyond the colorectal tissue dataset, experiments were also conducted on a cholangiocarcinoma dataset to verify generalizability. The findings of this study can potentially be applied to future spectroscopic data analysis tasks, especially for improving model performance in a spatially aware way.
PMID:40179245 | DOI:10.1021/acs.analchem.4c05549
Using deep learning artificial intelligence for sex identification and taxonomy of sand fly species
PLoS One. 2025 Apr 3;20(4):e0320224. doi: 10.1371/journal.pone.0320224. eCollection 2025.
ABSTRACT
Sandflies are vectors for several tropical diseases such as leishmaniasis, bartonellosis, and sandfly fever. Moreover, sandflies exhibit species-specificity in transmitting particular pathogen species, with females being responsible for disease transmission. Thus, effective classification of sandfly species and the corresponding sex identification are important for disease surveillance and control, managing breeding populations, research and development, and conducting epidemiological studies. This is typically performed manually by observing internal morphological features, which may be an error-prone, tedious process. In this work, we developed a deep learning artificial intelligence system to determine the sex of, and to differentiate between, three species of two sandfly subgenera (i.e., Phlebotomus alexandri, Phlebotomus papatasi, and Phlebotomus sergenti). Using samples field-caught and prepared locally over a period of two years, and based on convolutional neural networks, transfer learning, and early fusion of genital and pharynx images, we achieved exceptional classification accuracy (greater than 95%) across multiple performance metrics and a wide range of pre-trained convolutional neural network models. This study not only contributes to the field of medical entomology by providing an automated and accurate solution for sandfly sex identification and taxonomy, but also establishes a framework for leveraging deep learning techniques in similar vector-borne disease research and control efforts.
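Early fusion of the two anatomical views amounts to channel-wise concatenation before a single shared CNN backbone. A minimal sketch with dummy arrays (the image shapes are assumptions, not the study's actual input size):

```python
import numpy as np

def early_fusion(genital_img, pharynx_img):
    """Early-fusion sketch: stack the two body-part images along the channel
    axis so one CNN backbone sees both views at once. Inputs are (H, W, C)
    arrays of identical spatial size."""
    return np.concatenate([genital_img, pharynx_img], axis=-1)

# Dummy RGB crops of the two dissected structures
a = np.zeros((224, 224, 3))
b = np.ones((224, 224, 3))
fused = early_fusion(a, b)   # a single (224, 224, 6) input tensor
```

The alternative, late fusion, would run each view through its own backbone and merge the feature vectors; early fusion keeps a single network and lets the first convolution mix the views directly.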
PMID:40179129 | DOI:10.1371/journal.pone.0320224
Advancing enterprise risk management with deep learning: A predictive approach using the XGBoost-CNN-BiLSTM model
PLoS One. 2025 Apr 3;20(4):e0319773. doi: 10.1371/journal.pone.0319773. eCollection 2025.
ABSTRACT
Enterprise risk management is a key element in ensuring the sustainable and steady development of enterprises. However, traditional risk management methods have limitations when facing complex market environments and diverse risk events. This study introduces a deep learning-based risk management model utilizing the XGBoost-CNN-BiLSTM framework to enhance the prediction and detection of risk events. The model combines the structured data processing capabilities of XGBoost, the feature extraction capabilities of CNNs, and the time series processing capabilities of BiLSTM to more comprehensively capture the key characteristics of risk events. Through experimental verification on multiple datasets, the model achieved significant advantages in key indicators such as accuracy, recall, F1 score, and AUC. For example, on the S&P 500 historical dataset, it achieved a precision of 93.84% and a recall of 95.75%, further verifying its effectiveness in predicting risk events. These experimental results demonstrate the robustness and superiority of the model. This research not only provides a more reliable risk management method for enterprises, but also offers useful insight into the application of deep learning in the field of risk management.
PMID:40179109 | DOI:10.1371/journal.pone.0319773
Revisiting Supervised Learning-Based Photometric Stereo Networks
IEEE Trans Pattern Anal Mach Intell. 2025 Apr 3;PP. doi: 10.1109/TPAMI.2025.3557498. Online ahead of print.
ABSTRACT
Deep learning has significantly propelled the development of photometric stereo by handling the challenges posed by unknown reflectance and global illumination effects. However, how supervised learning-based photometric stereo networks resolve these challenges remains to be elucidated. In this paper, we aim to reveal how existing methods address these challenges by revisiting their deep features, deep feature encoding strategies, and network architectures. Based on the insights gained from our analysis, we propose ESSENCE-Net, which effectively encodes deep shading features with an easy-first encoding strategy, enhances shading features with shading supervision, and accurately decodes surface normals with spatial context-aware attention. The experimental results verify that the proposed method outperforms state-of-the-art methods on three benchmark datasets, whether with dense or sparse inputs. The code is available at https://github.com/wxy-zju/ESSENCE-Net.
PMID:40178960 | DOI:10.1109/TPAMI.2025.3557498
Towards Better Cephalometric Landmark Detection with Diffusion Data Generation
IEEE Trans Med Imaging. 2025 Apr 3;PP. doi: 10.1109/TMI.2025.3557430. Online ahead of print.
ABSTRACT
Cephalometric landmark detection is essential for orthodontic diagnostics and treatment planning. Nevertheless, the scarcity of samples in data collection and the extensive effort required for manual annotation have significantly impeded the availability of diverse datasets. This limitation has restricted the effectiveness of deep learning-based detection methods, particularly those based on large-scale vision models. To address these challenges, we have developed an innovative data generation method capable of producing diverse cephalometric X-ray images along with corresponding annotations without human intervention. Our approach begins by constructing new cephalometric landmark annotations using anatomical priors. Then, we employ a diffusion-based generator to create realistic X-ray images that correspond closely with these annotations. To achieve precise control in producing samples with different attributes, we introduce a novel cephalometric X-ray image dataset paired with text prompts. This dataset includes real cephalometric X-ray images and detailed medical text prompts describing the images. By leveraging these detailed prompts, our method improves the generation process to control different styles and attributes. Facilitated by the large, diverse generated data, we introduce large-scale vision detection models into the cephalometric landmark detection task to improve accuracy. Experimental results demonstrate that training with the generated data substantially enhances performance. Compared to methods that do not use the generated data, our approach improves the Success Detection Rate (SDR) by 6.5%, attaining a notable 82.2%. All code and data are available at: https://um-lab.github.io/cepha-generation/.
PMID:40178956 | DOI:10.1109/TMI.2025.3557430
Artificial Intelligence for the Detection of Patient-Ventilator Asynchrony
Respir Care. 2025 Apr 3. doi: 10.1089/respcare.12540. Online ahead of print.
ABSTRACT
Patient-ventilator asynchrony (PVA) is a challenge in invasive mechanical ventilation characterized by misalignment of ventilatory support and patient respiratory effort. PVA is highly prevalent and associated with adverse clinical outcomes, including increased work of breathing, oxygen consumption, and risk of barotrauma. Artificial intelligence (AI) is a potentially transformative solution offering capabilities for automated detection of PVA. This narrative review characterizes the landscape of AI models designed for PVA detection and quantification. A comprehensive literature search identified 13 studies, spanning diverse settings and patient populations. Machine learning (ML) techniques, derivation datasets, types of asynchronies detected, and performance metrics were assessed to provide a contemporary view of AI in this domain. We reviewed 166 articles published between 1989 and April 2024, of which 13 were included, encompassing 332 participants and analyzing more than 5.8 million breaths. Patient counts ranged from 8 to 107, and breath counts ranged from 1,375 to 4.2 million. The indication for invasive mechanical ventilation was ARDS in three articles, whereas the remainder covered other indications. Various ML methods as well as newer deep learning techniques were used to address different PVA types. Sensitivity and specificity were above 0.9 in 10 of the 13 models, and 8 models reported accuracy above 0.9. AI models have significant potential to address PVA in invasive mechanical ventilation, displaying high accuracy across various populations and asynchrony types, which showcases their potential to accurately detect and quantify PVA. Future work should focus on model validation in diverse clinical settings and patient populations.
PMID:40178919 | DOI:10.1089/respcare.12540
Hyaluronan network remodeling by ZEB1 and ITIH2 enhances the motility and invasiveness of cancer cells
J Clin Invest. 2025 Apr 3:e180570. doi: 10.1172/JCI180570. Online ahead of print.
ABSTRACT
Hyaluronan (HA) in the extracellular matrix promotes epithelial-to-mesenchymal transition (EMT) and metastasis; however, the mechanism by which the HA network constructed by cancer cells regulates cancer progression and metastasis in the tumor microenvironment (TME) remains largely unknown. In this study, inter-alpha-trypsin inhibitor heavy chain 2 (ITIH2), an HA-binding protein, was confirmed to be secreted from mesenchymal-like lung cancer cells when co-cultured with cancer-associated fibroblasts. ITIH2 expression is transcriptionally upregulated by the EMT-inducing transcription factor ZEB1, along with HA synthase 2 (HAS2), which positively correlates with ZEB1 expression. Depletion of ITIH2 and HAS2 reduced HA matrix formation and the migration and invasion of lung cancer cells. Furthermore, ZEB1 facilitates alternative splicing and isoform expression of CD44, an HA receptor, and CD44 knockdown suppresses the motility and invasiveness of lung cancer cells. Using a deep learning-based drug-target interaction algorithm, we identified an ITIH2 inhibitor (sincalide) that inhibited HA matrix formation and migration of lung cancer cells, preventing metastatic colonization of lung cancer cells in mouse models. These findings suggest that ZEB1 remodels the HA network in the TME through the regulation of ITIH2, HAS2, and CD44, presenting a strategy for targeting this network to suppress lung cancer progression.
PMID:40178908 | DOI:10.1172/JCI180570
Arterial phase CT radiomics for non-invasive prediction of Ki-67 proliferation index in pancreatic solid pseudopapillary neoplasms
Abdom Radiol (NY). 2025 Apr 3. doi: 10.1007/s00261-025-04921-z. Online ahead of print.
ABSTRACT
BACKGROUND: This study aimed to preoperatively predict Ki-67 proliferation levels in patients with pancreatic solid pseudopapillary neoplasm (pSPN) using radiomics features extracted from arterial phase helical CT images.
METHODS: We retrospectively analyzed 92 patients (Ningbo Medical Center Lihuili Hospital: n = 64; Taizhou Central Hospital: n = 28) with pathologically confirmed pSPN from June 2015 to June 2023. Ki-67 positivity > 3% was considered high. Radiomics features were extracted using PyRadiomics, with patients divided into a training cohort (n = 64) and a validation cohort (n = 28). A radiomics signature was constructed, and a CT radiomics score (CTscore) was calculated. Deep learning models were employed for prediction, with early stopping to prevent overfitting.
RESULTS: Seven key radiomics features were selected via LASSO regression with cross-validation. The deep learning model demonstrated improved accuracy when demographic variables and the CTscore were included, with features such as Morphology and CTscore contributing significantly to predictive accuracy. The best-performing models, including GBM and deep learning algorithms, achieved high predictive performance with an AUC of up to 0.946 in the training cohort.
CONCLUSIONS: We developed a robust deep learning-based radiomics model using arterial phase CT images to predict Ki-67 levels in pSPN patients, identifying CTscore and Morphology as key predictors. This non-invasive approach has potential utility in guiding personalized preoperative treatment strategies.
CLINICAL TRIAL NUMBER: Not applicable.
PMID:40178588 | DOI:10.1007/s00261-025-04921-z
Free-breathing, Highly Accelerated, Single-beat, Multisection Cardiac Cine MRI with Generative Artificial Intelligence
Radiol Cardiothorac Imaging. 2025 Apr;7(2):e240272. doi: 10.1148/ryct.240272.
ABSTRACT
Purpose To develop and evaluate a free-breathing, highly accelerated, multisection, single-beat cine sequence for cardiac MRI. Materials and Methods This prospective study, conducted from July 2022 to December 2023, included participants with various cardiac conditions as well as healthy participants who were imaged using a 3-T MRI system. A single-beat sequence was implemented, collecting data for each section in one heartbeat. Images were acquired with an in-plane spatiotemporal resolution of 1.9 × 1.9 mm2 and 37 msec and reconstructed using resolution enhancement generative adversarial inline neural network (REGAIN), a deep learning model. Multibreath-hold k-space-segmented (4.2-fold acceleration) and free-breathing single-beat (14.8-fold acceleration) cine images were collected, both reconstructed with REGAIN. Left ventricular (LV) and right ventricular (RV) parameters between the two methods were evaluated with linear regression, Bland-Altman analysis, and Pearson correlation. Three expert cardiologists independently scored diagnostic and image quality. Scan and rescan reproducibility was evaluated in a subset of participants 1 year apart using the intraclass correlation coefficient (ICC). Results This study included 136 participants (mean age [SD], 54 years ± 15; 69 female, 67 male), 40 healthy and 96 with cardiac conditions. k-Space-segmented and single-beat scan times were 2.6 minutes ± 0.8 and 0.5 minute ± 0.1, respectively. Strong correlations (P < .001) were observed between k-space-segmented and single-beat cine parameters in both LV (r = 0.97-0.99) and RV (r = 0.89-0.98). Scan and rescan reproducibility of single-beat cine was excellent (ICC, 0.97-1.0). Agreement among readers was high, with 125 of 136 (92%) images consistently assessed as diagnostic and 133 of 136 (98%) consistently rated as having good image quality by all readers. 
Conclusion Free-breathing 30-second single-beat cardiac cine MRI yielded accurate biventricular measurements, reduced scan time, and maintained high diagnostic and image quality compared with conventional multibreath-hold k-space-segmented cine images. Keywords: MR-Imaging, Cardiac, Heart, Imaging Sequences, Comparative Studies, Technology Assessment Supplemental material is available for this article. © RSNA, 2025.
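The Bland-Altman agreement between the two cine methods reduces to a mean bias and 95% limits of agreement over paired measurements. A small sketch with invented ventricular volumes (not values from the study):

```python
import statistics

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 * SD of the
    paired differences) between two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical LV end-diastolic volumes (mL): k-space-segmented vs single-beat
bias, (lo, hi) = bland_altman([150.0, 152.0, 148.0], [149.0, 151.0, 147.0])
```

Narrow limits around a near-zero bias are what justify treating the accelerated single-beat sequence as interchangeable with the conventional one.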
PMID:40178397 | DOI:10.1148/ryct.240272
CoupleVAE: coupled variational autoencoders for predicting perturbational single-cell RNA sequencing data
Brief Bioinform. 2025 Mar 4;26(2):bbaf126. doi: 10.1093/bib/bbaf126.
ABSTRACT
With the rapid advances in single-cell sequencing technology, it is now feasible to conduct in-depth genetic analysis in individual cells. Studying the dynamics of single cells in response to perturbations is of great significance for understanding the functions and behaviors of living organisms. However, the acquisition of post-perturbation cellular states via biological experiments is frequently cost-prohibitive. Predicting single-cell perturbation responses thus poses a critical challenge in computational biology. In this work, we propose a novel deep learning method called coupled variational autoencoders (CoupleVAE), devised to predict post-perturbation single-cell RNA-seq data. CoupleVAE is composed of two VAEs connected by a coupler: it initially extracts latent features for control and perturbed cells via two encoders, subsequently performs mutual translation within the latent space through two nonlinear mappings in the coupler, and ultimately generates control and perturbed data via two separate decoders that process the encoded and translated features. CoupleVAE facilitates a more intricate state transformation of single cells within the latent space. Experiments on three real datasets covering infection, stimulation, and cross-species prediction show that CoupleVAE surpasses existing comparative models in effectively predicting single-cell RNA-seq data for perturbed cells, achieving superior accuracy.
PMID:40178283 | DOI:10.1093/bib/bbaf126
Data imbalance in drug response prediction: multi-objective optimization approach in deep learning setting
Brief Bioinform. 2025 Mar 4;26(2):bbaf134. doi: 10.1093/bib/bbaf134.
ABSTRACT
Drug response prediction (DRP) methods tackle the complex task of associating the effectiveness of small molecules with the specific genetic makeup of a patient. Anti-cancer DRP is a particularly challenging task requiring costly experiments, as the underlying pathogenic mechanisms are broad and associated with multiple genomic pathways. The scientific community has exerted significant effort to generate public drug screening datasets, opening a path for various machine learning models that attempt to reason over the complex data space of small compounds and biological characteristics of tumors. However, the data depth is still lacking compared to application domains like computer vision or natural language processing, limiting current learning capabilities. To combat this issue and improve the generalizability of DRP models, we explore strategies that explicitly address the imbalance in DRP datasets. We reframe the problem as a multi-objective optimization across multiple drugs to maximize deep learning model performance. We implement this approach by constructing a Multi-Objective Optimization Regularized by Loss Entropy loss function and plugging it into a deep learning model. We demonstrate the utility of the proposed methods and make suggestions for their further application toward desirable outcomes in the healthcare field.
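The entropy-regularization idea can be illustrated with a toy multi-objective loss. This is only a guess at the general shape (rewarding a uniform distribution of per-drug losses so no single drug dominates training), not the paper's actual formulation:

```python
import math

def entropy_regularized_loss(per_drug_losses, lam=0.1):
    """Hypothetical sketch: average the per-drug losses, then subtract an
    entropy bonus computed over the normalized loss distribution, so a
    balanced (high-entropy) spread of losses scores lower than a skewed one."""
    total = sum(per_drug_losses)
    p = [l / total for l in per_drug_losses]           # losses as a distribution
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return total / len(per_drug_losses) - lam * entropy

balanced = entropy_regularized_loss([1.0, 1.0])   # uniform per-drug losses
skewed = entropy_regularized_loss([1.9, 0.1])     # one drug dominates
```

With equal mean loss, the balanced case gets the larger entropy bonus, nudging the optimizer away from sacrificing under-represented drugs.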
PMID:40178282 | DOI:10.1093/bib/bbaf134
DOMSCNet: a deep learning model for the classification of stomach cancer using multi-layer omics data
Brief Bioinform. 2025 Mar 4;26(2):bbaf115. doi: 10.1093/bib/bbaf115.
ABSTRACT
The rapid advancement of next-generation sequencing (NGS) technology and the expanding availability of NGS datasets have led to a significant surge in biomedical research. NGS data analysis is crucial for understanding the molecular processes underlying cancer development and for supporting its diagnosis, prediction, and therapy. However, NGS multi-layer omics datasets are high-dimensional and highly complex. Several computational methods have recently been developed for cancer omics data interpretation, but many struggle to account for the diverse types of cancer omics data and to extract informative features for the integrated identification of core units. To address these challenges, we propose a hybrid feature selection (HFS) technique to detect optimal features from multi-layer omics datasets. Building on this, the study proposes DOMSCNet, a novel hybrid deep recurrent neural network-based model to classify stomach cancer. The proposed model was made generic across all four multi-layer omics datasets. To assess the robustness of DOMSCNet, the model was validated on eight external datasets. Experimental results showed that the SelectKBest-maximum relevancy minimum redundancy-Boruta (SMB) HFS technique outperformed all other HFS techniques. Across the four multi-layer omics datasets and the validation datasets, the proposed DOMSCNet model outperformed existing classifiers as well as the other proposed classifiers.
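The maximum-relevancy-minimum-redundancy ingredient of the SMB pipeline can be illustrated with a toy score: a feature is valued for correlating with the label (relevance) and penalized for correlating with features already selected (redundancy). The feature values, labels, and greedy selection below are made-up illustrations; the actual SMB technique also combines SelectKBest and Boruta.

```python
# Toy mRMR-style score: relevance to the label minus average redundancy
# against already-selected features, using Pearson correlation throughout.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def mrmr_score(feature, label, selected):
    relevance = abs(pearson(feature, label))
    if not selected:
        return relevance
    redundancy = sum(abs(pearson(feature, s)) for s in selected) / len(selected)
    return relevance - redundancy

label = [0, 0, 1, 1]
f1 = [0.1, 0.2, 0.9, 1.0]   # strongly label-correlated
f2 = [0.1, 0.2, 0.8, 1.1]   # informative but redundant with f1
f3 = [0.5, 0.4, 0.5, 0.4]   # weakly informative
# Greedy pick: f1 first (highest relevance), then score the rest against it.
print(mrmr_score(f2, label, [f1]), mrmr_score(f3, label, [f1]))
```

After f1 is selected, f2's redundancy mostly cancels its relevance, while f3 contributes no relevance at all, so the greedy criterion still prefers f2.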
PMID:40178281 | DOI:10.1093/bib/bbaf115
Application of Deep Learning to Predict the Persistence, Bioaccumulation, and Toxicity of Pharmaceuticals
J Chem Inf Model. 2025 Apr 3. doi: 10.1021/acs.jcim.4c02293. Online ahead of print.
ABSTRACT
This study investigates the application of a deep learning (DL) model, specifically a message-passing neural network (MPNN) implemented through Chemprop, to predict the persistence, bioaccumulation, and toxicity (PBT) characteristics of compounds, with a focus on pharmaceuticals. We employed a clustering strategy to provide a fair assessment of model performance. By applying the resulting model to a set of pharmaceutically relevant molecules, we aim to highlight potential PBT chemicals and extract PBT-relevant substructures. These substructures can serve as structural flags, alerting drug designers to potential environmental issues from the earliest stages of the drug discovery process. Incorporating these findings into pharmaceutical development workflows is expected to drive significant advancements in creating more environmentally friendly drug candidates while preserving their therapeutic efficacy.
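The core operation of an MPNN of the kind Chemprop implements can be sketched on a toy molecular graph: each atom repeatedly aggregates messages from its bonded neighbors, so substructure information spreads one bond per step. The graph, scalar features, and the simple sum-then-add update below are simplified assumptions, not Chemprop's actual API or learned update.

```python
# One message-passing step on a toy 3-atom chain 0-1-2 with scalar features.
def message_pass(features, adjacency):
    # Each atom's new feature = its own feature + sum of neighbors' features.
    updated = []
    for i, fi in enumerate(features):
        msg = sum(features[j] for j in adjacency[i])
        updated.append(fi + msg)
    return updated

adjacency = {0: [1], 1: [0, 2], 2: [1]}   # bonds of the chain
features = [1.0, 2.0, 3.0]                # initial atom features
step1 = message_pass(features, adjacency)
print(step1)   # -> [3.0, 6.0, 5.0]; after enough steps, every atom "sees" the whole molecule
```

Stacking such steps is what lets the model associate PBT behavior with multi-atom substructures rather than single atoms.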
PMID:40178174 | DOI:10.1021/acs.jcim.4c02293
Early Colon Cancer Prediction from Histopathological Images Using Enhanced Deep Learning with Confidence Scoring
Cancer Invest. 2025 Apr 3:1-19. doi: 10.1080/07357907.2025.2483302. Online ahead of print.
ABSTRACT
Colon Cancer (CC) arises from abnormal cell growth in the colon, which severely impacts a person's health and quality of life. Detecting CC through histopathological images for early diagnosis offers substantial benefits in medical diagnostics. This study proposes NalexNet, a hybrid deep-learning classifier, to enhance classification accuracy and computational efficiency. The research methodology involves Vahadane stain normalization for preprocessing and Watershed segmentation for accurate tissue separation. The Teamwork Optimization Algorithm (TOA) is employed for optimal feature selection to reduce redundancy and improve classification performance. Furthermore, the NalexNet model is structured with convolutional layers and normal and reduction cells, ensuring efficient feature representation and high classification accuracy. Experimental results demonstrate that the proposed model achieves a precision of 99.9% and an accuracy of 99.5%, significantly outperforming existing models. This study contributes to the development of an automated and computationally efficient CC classification system, which has the potential for real-world clinical implementation, aiding pathologists in early and accurate diagnosis.
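The "confidence scoring" in the title can be sketched as a standard softmax-plus-threshold scheme: convert classifier logits to probabilities and flag low-confidence cases for pathologist review. The class names, logits, and the 0.9 threshold are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: softmax confidence with a review flag for uncertain cases.
import math

def softmax(logits):
    exps = [math.exp(l - max(logits)) for l in logits]  # numerically stable
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, classes, threshold=0.9):
    probs = softmax(logits)
    conf = max(probs)
    label = classes[probs.index(conf)]
    needs_review = conf < threshold     # route uncertain cases to a pathologist
    return label, conf, needs_review

classes = ["benign", "malignant"]
print(predict_with_confidence([0.2, 3.1], classes))   # confident call
print(predict_with_confidence([1.0, 1.2], classes))   # flagged for review
```

A scheme like this is one way an automated classifier can support, rather than replace, pathologist review in a clinical workflow.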
PMID:40178023 | DOI:10.1080/07357907.2025.2483302