Deep learning
DeepRNA-Twist: language-model-guided RNA torsion angle prediction with attention-inception network
Brief Bioinform. 2025 May 1;26(3):bbaf199. doi: 10.1093/bib/bbaf199.
ABSTRACT
RNA torsion and pseudo-torsion angles are critical in determining the three-dimensional conformation of RNA molecules, which in turn governs their biological functions. However, current methods are limited by RNA's structural complexity and flexibility, with experimental techniques being costly and computational approaches struggling to capture the intricate sequence dependencies needed for accurate predictions. To address these challenges, we introduce DeepRNA-Twist, a novel deep learning framework designed to predict RNA torsion and pseudo-torsion angles directly from sequence. DeepRNA-Twist utilizes RNA language model embeddings, which provide rich, context-aware feature representations of RNA sequences. Additionally, it introduces the 2A3IDC module (Attention Augmented Inception Inside Inception with Dilated CNN), combining inception networks with dilated convolutions and a multi-head attention mechanism. The dilated convolutions capture long-range dependencies in the sequence without requiring a large number of parameters, while the multi-head attention mechanism enhances the model's ability to focus on both local and global structural features simultaneously. DeepRNA-Twist was rigorously evaluated on benchmark datasets, including RNA-Puzzles, CASP-RNA, and SPOT-RNA-1D, and demonstrated significant improvements over existing methods, achieving state-of-the-art accuracy. Source code is available at https://github.com/abrarrahmanabir/DeepRNA-Twist.
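The abstract's claim that dilated convolutions capture long-range context with few parameters can be illustrated with a minimal NumPy sketch. This is a generic stacked dilated 1-D convolution, not the authors' 2A3IDC implementation; kernel sizes and dilation rates here are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution with the given dilation (no padding, stride 1)."""
    k = len(kernel)
    span = (k - 1) * dilation          # samples covered by one layer
    out_len = len(x) - span
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

# Stack three 3-tap layers with dilations 1, 2, 4: the receptive field grows
# to 1 + 2*(1+2+4) = 15 samples while using only 9 weights in total.
x = np.random.default_rng(0).normal(size=64)
kernel = np.array([0.25, 0.5, 0.25])
h = x
for d in (1, 2, 4):
    h = dilated_conv1d(h, kernel, d)

print(len(x) - len(h))  # samples consumed = receptive field - 1 = 14
```

Doubling the dilation at each layer is what makes the receptive field grow exponentially while the parameter count grows only linearly with depth.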
PMID:40315431 | DOI:10.1093/bib/bbaf199
Traffic accident risk prediction based on deep learning and spatiotemporal features of vehicle trajectories
PLoS One. 2025 May 2;20(5):e0320656. doi: 10.1371/journal.pone.0320656. eCollection 2025.
ABSTRACT
With the acceleration of urbanization and the increase in traffic volume, frequent traffic accidents have significantly impacted public safety and socio-economic conditions. Traditional methods for predicting traffic accidents often overlook spatiotemporal features and the complexity of traffic networks, leading to insufficient prediction accuracy in complex traffic environments. To address this, this paper proposes a deep learning model that combines Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and Graph Neural Networks (GNN) for traffic accident risk prediction using vehicle spatiotemporal trajectory data. The model extracts spatial features such as vehicle speed, acceleration, and lane-changing distance through the CNN, captures temporal dependencies in trajectories using the LSTM, and effectively models the complex spatial structure of traffic networks with the GNN, thereby improving prediction accuracy. The main contributions of this paper are as follows: First, an innovative combined model is proposed that comprehensively considers spatiotemporal features and road network relationships, significantly improving prediction accuracy. Second, the model's strong generalization ability across multiple traffic scenarios is validated, improving on the accuracy of traditional prediction methods. Finally, a new technical approach is provided, offering theoretical support for the implementation of real-time traffic accident warning systems. Experimental results demonstrate that the model can effectively predict accident risks in various complex traffic scenarios, providing robust support for intelligent traffic management and public safety.
PMID:40315419 | DOI:10.1371/journal.pone.0320656
Ultra-stable and high-performance squeezed vacuum source enabled via artificial intelligence control
Sci Adv. 2025 May 2;11(18):eadu4888. doi: 10.1126/sciadv.adu4888. Epub 2025 May 2.
ABSTRACT
Squeezed states are crucial for advancing quantum metrology beyond the classical limit. Despite this, generating high-performance squeezed light with long-term stability remains a challenge due to system complexity and quantum fragility. We experimentally achieved a record-breaking squeezing level of 4.3 decibels (lossless, 5.9 decibels) using polarization self-rotation (PSR) in atomic vapor, maintaining stability for hours despite environmental disturbances. To overcome the limitations of the PSR theory model's optimization guidance, which arise from the mutual interference of multiple parameters at this squeezing level, we developed an artificial intelligence (AI) control (AIC) system that harnesses deep learning to discern and manage these complex relationships, thereby enabling self-adaptation to external environments. This integrated approach represents a concrete step toward the practical application of quantum metrology and information processing, illustrating the synergy between AI and fundamental science in breaking complexity constraints.
PMID:40315327 | DOI:10.1126/sciadv.adu4888
Enhancing privacy in biosecurity with watermarked protein design
Bioinformatics. 2025 May 2:btaf141. doi: 10.1093/bioinformatics/btaf141. Online ahead of print.
ABSTRACT
MOTIVATION: Biosecurity concerns have arisen as the capability of deep learning-based protein design has rapidly increased in recent years. Current regulatory procedures for DNA synthesis focus on biosecurity but ignore data privacy.
RESULTS: We propose a general framework for adding watermarks to protein sequences designed by various autoregressive deep learning models. Compared with current regulatory procedures, watermarks ensure robust traceability for biosecurity while preserving the privacy of designed sequences through local verification. Benchmarked against other watermarking techniques, the watermark detection efficiency of our method is substantially higher, making it more practical in real-world scenarios. Moreover, it provides a convenient way for researchers to claim their intellectual property, since the designer's information can be embedded into the sequence with our framework.
AVAILABILITY AND IMPLEMENTATION: The implementation of the protein watermark framework is freely available to non-commercial users at https://github.com/poseidonchan/ProteinWatermark.
CONTACT AND SUPPLEMENTARY INFORMATION: Contact authors: Yanshuo Chen (cys@umd.edu) and Heng Huang (heng@umd.edu). The step-by-step tutorials of adding and detecting watermarks are also included in the repository at: https://github.com/poseidonchan/ProteinWatermark/tree/main/tutorials.
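As an illustration of the general idea of sequence watermarking, here is a toy, green-list-style sketch in pure Python. It is not the ProteinWatermark algorithm; the keyed-hash scheme, bias strength, and detection threshold are all assumptions for demonstration. A keyed hash of the previous residue selects a "green" half of the amino acid alphabet, generation is biased toward green residues, and detection simply counts how often they appear.

```python
import hashlib
import random

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def green_set(prev, key, frac=0.5):
    """Keyed pseudo-random 'green' subset of the alphabet, seeded by the previous residue."""
    seed = int(hashlib.sha256(f"{key}:{prev}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(AMINO, int(len(AMINO) * frac)))

def watermark_sequence(length, key, seed=0):
    """Toy 'designer': sample each residue, strongly biased toward the green set."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO)]
    for _ in range(length - 1):
        greens = green_set(seq[-1], key)
        # pick from the green set 90% of the time (sorted for determinism)
        pool = sorted(greens) if rng.random() < 0.9 else sorted(set(AMINO) - greens)
        seq.append(rng.choice(pool))
    return "".join(seq)

def green_fraction(seq, key):
    """Detector: fraction of residues falling in their keyed green set."""
    hits = sum(seq[i + 1] in green_set(seq[i], key) for i in range(len(seq) - 1))
    return hits / (len(seq) - 1)

seq = watermark_sequence(200, key="lab-secret")
print(round(green_fraction(seq, "lab-secret"), 2))  # well above the 0.5 chance level
print(round(green_fraction(seq, "wrong-key"), 2))   # near 0.5: no signal without the key
```

Only the holder of the key can verify the watermark, which is the property that enables local verification without revealing the sequence to a central authority.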
PMID:40315154 | DOI:10.1093/bioinformatics/btaf141
Graph Anomaly Detection in Time Series: A Survey
IEEE Trans Pattern Anal Mach Intell. 2025 May 2;PP. doi: 10.1109/TPAMI.2025.3566620. Online ahead of print.
ABSTRACT
With the recent advances in technology, a wide range of systems continue to collect a large amount of data over time and thus generate time series. Time-Series Anomaly Detection (TSAD) is an important task in various time-series applications such as e-commerce, cybersecurity, vehicle maintenance, and healthcare monitoring. However, this task is very challenging as it requires considering both the intra-variable dependency (relationships within a variable over time) and the inter-variable dependency (relationships between multiple variables) existing in time-series data. Recent graph-based approaches have made impressive progress in tackling the challenges of this field. In this survey, we conduct a comprehensive and up-to-date review of TSAD using graphs, referred to as G-TSAD. First, we explore the significant potential of graph representations for time-series data and their contributions to facilitating anomaly detection. Then, we review state-of-the-art graph anomaly detection techniques, mostly leveraging deep learning architectures, in the context of time series. For each method, we discuss its strengths, limitations, and the specific applications where it excels. Finally, we address both the technical and application challenges currently facing the field, and suggest potential future directions for advancing research and improving practical outcomes.
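A common first step in G-TSAD, turning a multivariate series into a graph over variables, can be sketched as follows. The correlation-threshold construction here is one generic choice among many covered by such surveys, not a method from any particular paper, and the threshold value is an illustrative assumption.

```python
import numpy as np

def correlation_graph(X, threshold=0.6):
    """Build an inter-variable adjacency matrix from a (T, V) multivariate series:
    an edge links two variables whose absolute Pearson correlation exceeds the threshold."""
    C = np.corrcoef(X.T)               # (V, V) correlation between variables
    A = (np.abs(C) > threshold).astype(int)
    np.fill_diagonal(A, 0)             # no self-loops
    return A

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
X = np.stack([
    np.sin(t) + 0.1 * rng.normal(size=t.size),   # var 0
    np.sin(t) + 0.1 * rng.normal(size=t.size),   # var 1: tracks var 0
    rng.normal(size=t.size),                     # var 2: independent noise
], axis=1)

A = correlation_graph(X)
print(A[0, 1], A[0, 2])  # 1 0 — correlated pair linked, noise variable isolated
```

Graph anomaly detectors then score nodes, edges, or whole snapshots of such graphs; a broken inter-variable dependency shows up as a change in this adjacency structure over time.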
PMID:40315075 | DOI:10.1109/TPAMI.2025.3566620
Brain tissue biomarkers impact bone age in central precocious puberty more than hormones: a quantitative synthetic magnetic resonance study
Jpn J Radiol. 2025 May 2. doi: 10.1007/s11604-025-01792-8. Online ahead of print.
ABSTRACT
OBJECTIVE: To investigate which brain tissue component volume (BTCV) biomarkers may be more effective than hormones in influencing bone age development in central precocious puberty (CPP).
METHODS: This retrospective study included 84 children with CPP and 84 controls. Data on cranial synthetic magnetic resonance (SyMR), X-ray bone age, and three hormones were collected. BTCVs-myelin content (MyC), white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and non-WM/GM/MyC/CSF (NoN)-were obtained from SyMRI. A deep learning model assessed Tanner-Whitehouse III (TW3) bone age scores (TW3-RUS, TW3-Carpal). We evaluated the correlation between BTCVs, bone age scores, luteinizing hormone (LH), LH after gonadotropin-releasing hormone (GnRH) stimulation, and follicle-stimulating hormone (FSH).
RESULTS: Children with CPP had lower MyC, WM, and GM than controls. The TW3-RUS score did not correlate with BTCVs or hormones. The TW3-Carpal score was positively correlated with MyC (r = 0.397, P < 0.001) but not with WM, GM, CSF, NoN, or hormones. The regression model showed a positive correlation between the TW3-Carpal score and MyC (β = 0.077, P < 0.001), while LH correlated with GM and NoN (β = -16.66, P = 0.019; β = 24.62, P = 0.019).
CONCLUSION: The TW3-Carpal score in CPP positively correlates with MyC, while neither TW3 score correlates with hormone levels, suggesting that myelin has a greater impact on bone age development than hormones. MyC may serve as a potential BTCV biomarker for CPP.
PMID:40314875 | DOI:10.1007/s11604-025-01792-8
Should end-to-end deep learning replace handcrafted radiomics?
Eur J Nucl Med Mol Imaging. 2025 May 2. doi: 10.1007/s00259-025-07314-y. Online ahead of print.
NO ABSTRACT
PMID:40314811 | DOI:10.1007/s00259-025-07314-y
Deep learning for automatic volumetric bowel segmentation on body CT images
Eur Radiol. 2025 May 2. doi: 10.1007/s00330-025-11623-z. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop a deep neural network for automatic bowel segmentation and assess its applicability for estimating large bowel length (LBL) in individuals with constipation.
MATERIALS AND METHODS: We utilized contrast-enhanced and non-enhanced abdominal, chest, and whole-body CT images for model development. External testing involved paired pre- and post-contrast abdominal CT images from another hospital. We developed 3D nnU-Net models to segment the gastrointestinal tract and separate it into the esophagus, stomach, small bowel, and large bowel. Segmentation accuracy was evaluated using the Dice similarity coefficient (DSC) against radiologists' segmentations. We employed the network to estimate LBL in individuals undergoing abdominal CT for health check-ups, and the height-corrected LBL was compared between groups with and without constipation.
RESULTS: One hundred thirty-three CT scans (88 patients; age, 63.6 ± 10.6 years; 39 men) were used for model development, and 60 for external testing (30 patients; age, 48.9 ± 15.8 years; 16 men). In the external dataset, the mean DSC for the entire gastrointestinal tract was 0.985 ± 0.008. The mean DSCs for four-part separation exceeded 0.95, outperforming TotalSegmentator, except for the esophagus (DSC, 0.807 ± 0.173). For LBL measurements, 100 CT scans from 51 patients were used (age, 67.0 ± 6.9 years; 59 scans from men; 59 with constipation). The height-corrected LBL was significantly longer in the constipation group on both a per-exam (79.1 ± 12.4 vs 88.8 ± 15.8 cm/m, p = 0.001) and a per-subject basis (77.6 ± 13.6 vs 86.9 ± 17.1 cm/m, p = 0.04).
CONCLUSION: Our model accurately segmented the entire gastrointestinal tract and its major compartments from CT scans and enabled the noninvasive estimation of LBL in individuals with constipation.
KEY POINTS: Question: Automated bowel segmentation is a first step for algorithms including bowel tracing and length measurement, but the complexity of the gastrointestinal tract limits its accuracy. Findings: Our 3D nnU-Net model showed high performance in segmentation and four-part separation of the GI tract (DSC > 0.95), except for the esophagus. Clinical relevance: Our model accurately segments the gastrointestinal tract and separates it into major compartments, and potentially has use in various clinical applications, including semi-automated measurement of LBL in individuals with constipation.
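The Dice similarity coefficient reported throughout these results is straightforward to compute from binary masks; the NumPy version below is a minimal illustrative sketch, not the authors' evaluation code.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Two toy 2-D masks with 4 foreground pixels each, overlapping on 3 of them.
a = np.zeros((4, 4), int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), int); b[1:3, 1:3] = 1; b[1, 1] = 0; b[0, 0] = 1
print(dice(a, b))  # 2*3 / (4+4) = 0.75
```

A DSC of 0.985, as reported for the whole gastrointestinal tract, therefore means that the predicted and reference masks overlap on nearly all of their combined foreground.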
PMID:40314787 | DOI:10.1007/s00330-025-11623-z
A Novel Deep Learning-based Pathomics Score for Prognostic Stratification in Pancreatic Ductal Adenocarcinoma
Pancreas. 2025 May 1;54(5):e430-e441. doi: 10.1097/MPA.0000000000002463.
ABSTRACT
BACKGROUND AND OBJECTIVES: Accurate survival prediction for pancreatic ductal adenocarcinoma (PDAC) is crucial for personalized treatment strategies. This study aims to construct a novel pathomics indicator using hematoxylin and eosin-stained whole slide images and deep learning to enhance PDAC prognosis prediction.
METHODS: A retrospective, 2-center study analyzed 864 PDAC patients diagnosed between January 2015 and March 2022. Using weakly supervised and multiple instance learning, pathologic features predicting 2-year survival were extracted. Pathomics features, including probability histograms and TF-IDF, were selected through random survival forests. Survival analysis was conducted using Kaplan-Meier curves, log-rank tests, and Cox regression, with AUROC and C-index used to assess model discrimination.
RESULTS: The study cohort comprised 489 patients for training, 211 for validation, and 164 in the neoadjuvant therapy (NAT) group. A pathomics score was developed using 7 features, dividing patients into high-risk and low-risk groups based on the median score of 131.11. Significant survival differences were observed between groups (P<0.0001). The pathomics score was a robust independent prognostic factor [Training: hazard ratio (HR)=3.90; Validation: HR=3.49; NAT: HR=4.82; all P<0.001]. Subgroup analyses revealed higher survival rates for early-stage low-risk patients and NAT responders compared to high-risk counterparts (both P<0.05 and P<0.0001). The pathomics model surpassed clinical models in predicting 1-, 2-, and 3-year survival.
CONCLUSIONS: The pathomics score serves as a cost-effective and precise prognostic tool, functioning as an independent prognostic indicator that enables precise stratification and enhances the prediction of prognosis when combined with traditional pathologic features. This advancement has the potential to significantly impact PDAC treatment planning and improve patient outcomes.
PMID:40314741 | DOI:10.1097/MPA.0000000000002463
Deep learning-based automatic cranial implant design through direct defect shape prediction and its comparison study
Med Biol Eng Comput. 2025 May 2. doi: 10.1007/s11517-025-03363-5. Online ahead of print.
ABSTRACT
Defects of the human cranium are one kind of head bone damage, and cranial implants can be used to repair the defective cranium. Automating the implant design process is crucial for reducing the corresponding therapy time. Treating cranial implant design as a special kind of shape completion task, we propose an automatic cranial implant design workflow consisting of a deep neural network that directly predicts the shape of the missing part of the defective cranium, followed by conventional post-processing steps that refine the automatically generated implant. To evaluate the proposed workflow, we employ cross-validation and report an average Dice Similarity Score and boundary Dice Similarity Score of 0.81 and 0.81, respectively. We also measure the surface distance error using the 95th percentile of the Hausdorff Distance, which yields an average of 3.01 mm. Comparison with the manual cranial implant design procedure also revealed the convenience of the proposed workflow. In addition, a plugin implementing the proposed workflow is developed for 3D Slicer to assist end-users.
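The 95th-percentile Hausdorff distance used above can be computed from two boundary point sets as follows. This is a generic brute-force NumPy sketch, not the paper's evaluation pipeline, and the shifted-line example is purely illustrative.

```python
import numpy as np

def hd95(A, B):
    """95th-percentile symmetric Hausdorff distance between point sets A (N,d) and B (M,d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    d_ab = D.min(axis=1)   # each point of A to its nearest point of B
    d_ba = D.min(axis=0)   # each point of B to its nearest point of A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# A sampled line of points vs the same line shifted by 1 mm: every nearest-point
# distance equals the shift, so HD95 equals 1.0.
A = np.stack([np.linspace(0, 10, 101), np.zeros(101)], axis=1)
B = A + np.array([0.0, 1.0])
print(hd95(A, B))  # 1.0
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier surface points, which is why it is preferred over the plain Hausdorff distance in implant and segmentation evaluation.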
PMID:40314711 | DOI:10.1007/s11517-025-03363-5
Automatic ultrasound image alignment for diagnosis of pediatric distal forearm fractures
Int J Comput Assist Radiol Surg. 2025 May 2. doi: 10.1007/s11548-025-03361-w. Online ahead of print.
ABSTRACT
PURPOSE: The study aims to develop an automatic method to align ultrasound images of the distal forearm for diagnosing pediatric fractures. This approach seeks to bypass the reliance on X-rays for fracture diagnosis, thereby minimizing radiation exposure and making the process less painful, as well as creating a more child-friendly diagnostic pathway.
METHODS: We present a fully automatic pipeline to align paired POCUS images. We first leverage a deep learning model to delineate bone boundaries, from which we obtain key anatomical landmarks. These landmarks are finally used to guide the optimization-based alignment process, for which we propose three optimization constraints: aligning specific points, ensuring parallel orientation of the bone segments, and matching the bone widths.
RESULTS: The method demonstrated high alignment accuracy compared to reference X-rays in terms of boundary distances. A morphology experiment including fracture classification and angulation measurement presents comparable performance when based on the merged ultrasound images and conventional X-rays, justifying the effectiveness of our method in these cases.
CONCLUSIONS: The study introduced an effective and fully automatic pipeline for aligning ultrasound images, showing potential to replace X-rays for diagnosing pediatric distal forearm fractures. Initial tests show that surgeons find many of our results sufficient for diagnosis. Future work will focus on increasing dataset size to improve diagnostic accuracy and reliability.
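The landmark-guided, optimization-based alignment described in the methods is not spelled out here in detail; as a hedged sketch of the point-alignment constraint alone, the Kabsch algorithm below recovers the least-squares rigid transform between paired landmarks. All data are synthetic and the 2-D setting is an illustrative assumption.

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch: best rotation R and translation t mapping landmark set P onto Q (least squares)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1, d]) @ U.T
    return R, cq - R @ cp

# Landmarks rotated by 30 degrees and shifted: recover the transform exactly.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 1.0], [0.0, 1.0]])
Q = P @ R_true.T + np.array([2.0, -1.0])

R, t = rigid_align(P, Q)
err = np.abs(P @ R.T + t - Q).max()
print(err < 1e-9)  # True
```

In practice such a closed-form fit would be only one term of the paper's objective, alongside the parallel-orientation and bone-width constraints.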
PMID:40314702 | DOI:10.1007/s11548-025-03361-w
Evolutionary Dynamics and Functional Differences in Clinically Relevant Pen β-Lactamases from Burkholderia spp
J Chem Inf Model. 2025 May 2. doi: 10.1021/acs.jcim.5c00271. Online ahead of print.
ABSTRACT
Antimicrobial resistance (AMR) is a global threat, with Burkholderia species contributing significantly to difficult-to-treat infections. Pen family β-lactamases are produced by all Burkholderia spp., and their mutation or overproduction leads to resistance to β-lactam antibiotics. Here we investigate the dynamic differences among four Pen β-lactamases (PenA, PenI, PenL, and PenP) using machine-learning-driven enhanced-sampling molecular dynamics simulations, Markov State Models (MSMs), convolutional variational autoencoder-based deep learning (CVAE), and the BindSiteS-CNN model. Despite sharing the same catalytic mechanism, these enzymes exhibit distinct dynamic features due to low sequence identity, resulting in different substrate profiles and catalytic turnover. The BindSiteS-CNN model further reveals local active-site dynamics, offering insights into the evolutionary adaptation of Pen β-lactamases. Our findings identify critical mutations and propose new hot spots affecting Pen β-lactamase flexibility and function, which can be used to fight emerging resistance in these enzymes.
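A Markov State Model of the kind mentioned here is, at its simplest, a row-normalized transition-count matrix estimated from a discretized trajectory. The NumPy toy below is a generic non-reversible maximum-likelihood estimate on a synthetic two-state trajectory, not the study's simulation pipeline; state count, lag time, and stay probabilities are illustrative assumptions.

```python
import numpy as np

def msm_transition_matrix(traj, n_states, lag=1):
    """Row-normalized count matrix of jumps traj[t] -> traj[t+lag]:
    a maximum-likelihood (non-reversible) MSM transition matrix estimate."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy 2-state trajectory that mostly stays put: the diagonal should dominate.
rng = np.random.default_rng(3)
traj = [0]
for _ in range(5000):
    stay = 0.9 if traj[-1] == 0 else 0.8
    traj.append(traj[-1] if rng.random() < stay else 1 - traj[-1])

T = msm_transition_matrix(traj, 2)
print(T.sum(axis=1))                    # rows sum to 1
print(T[0, 0] > 0.85, T[1, 1] > 0.75)   # estimates near the true 0.9 / 0.8
```

Real MSMs for proteins use many conformational microstates and a carefully chosen lag time, but the estimator is the same row-normalized counting shown here.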
PMID:40314617 | DOI:10.1021/acs.jcim.5c00271
Deep Learning Radiopathomics for Predicting Tumor Vasculature and Prognosis in Hepatocellular Carcinoma
Radiol Imaging Cancer. 2025 May;7(3):e250141. doi: 10.1148/rycan.250141.
NO ABSTRACT
PMID:40314587 | DOI:10.1148/rycan.250141
Detection of precancerous lesions in cervical images of perimenopausal women using U-net deep learning
Afr J Reprod Health. 2025 Apr 23;29(4):108-119. doi: 10.29063/ajrh2025/v29i4.10.
ABSTRACT
Due to physiological changes during the perimenopausal period, the morphology of cervical cells undergoes certain alterations. Accurate cell image segmentation and lesion identification are of great significance for the early detection of precancerous lesions. Traditional detection methods have certain limitations, creating an urgent need for more effective models. This study aimed to develop a highly efficient and accurate cervical cell image segmentation and recognition model, based on the U-shaped Network (U-Net) and the Residual Network (ResNet), to enhance the detection of precancerous lesions in perimenopausal women. The model integrates U-Net with the Segmentation Network (SegNet) and incorporates the Squeeze-and-Excitation (SE) attention mechanism to create the 2Se/U-Net segmentation model. Additionally, ResNet is optimized with a local discriminant loss function (LD-loss) and deep residual learning (DRL) blocks to develop the LD/ResNet lesion recognition model. The performance of the models is evaluated using 103 cytology images from perimenopausal women, focusing on segmentation metrics such as mean pixel accuracy (MPA) and mean intersection over union (mIoU), as well as lesion detection metrics such as accuracy (Acc), precision (Pre), recall (Re), and F1-score (F1). Results show that the 2Se/U-Net model achieves an MPA of 92.63% and an mIoU of 96.93%, outperforming U-Net by 12.48% and 9.47%, respectively. The LD/ResNet model demonstrates over 97.09% accuracy in recognizing cervical cells and achieves high detection performance for precancerous lesions, with Acc, Pre, and Re at 98.95%, 99.36%, and 98.89%, respectively. The model shows great potential for enhancing cervical cancer screening in clinical settings.
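The Squeeze-and-Excitation mechanism incorporated into the 2Se/U-Net model reweights feature channels by learned gates. Below is a minimal NumPy sketch of a single SE block with random illustrative weights, not the paper's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, W1, W2):
    """Squeeze-and-Excitation: global-average-pool each channel, pass the channel
    descriptor through a small bottleneck MLP, and rescale channels by the 0-1 gates."""
    s = x.mean(axis=(1, 2))                  # squeeze: (C,) channel descriptor
    e = sigmoid(W2 @ np.maximum(W1 @ s, 0))  # excitation: (C,) gates in (0, 1)
    return x * e[:, None, None], e

rng = np.random.default_rng(0)
C, r = 8, 2                                  # channels and reduction ratio
x = rng.normal(size=(C, 16, 16))             # one (C, H, W) feature map
W1 = rng.normal(size=(C // r, C)) * 0.1      # bottleneck weights (illustrative)
W2 = rng.normal(size=(C, C // r)) * 0.1
y, gates = se_block(x, W1, W2)

print(y.shape == x.shape, bool(np.all((gates > 0) & (gates < 1))))  # True True
```

The block adds only two small weight matrices per stage, which is why SE attention can be dropped into a U-Net encoder with negligible parameter cost.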
PMID:40314307 | DOI:10.29063/ajrh2025/v29i4.10
Semantical and geometrical protein encoding toward enhanced bioactivity and thermostability
Elife. 2025 May 2;13:RP98033. doi: 10.7554/eLife.98033.
ABSTRACT
Protein engineering is a pivotal aspect of synthetic biology, involving the modification of amino acids within existing protein sequences to achieve novel or enhanced functionalities and physical properties. Accurate prediction of protein variant effects requires a thorough understanding of protein sequence, structure, and function. Deep learning methods have demonstrated remarkable performance in guiding protein modification for improved functionality. However, existing approaches predominantly rely on protein sequences, which face challenges in efficiently encoding the geometric aspects of amino acids' local environment and often fall short in capturing crucial details related to protein folding stability, internal molecular interactions, and bio-functions. Furthermore, existing methods lack a fundamental evaluation of their ability to predict protein thermostability, although it is a key physical property that is frequently investigated in practice. To address these challenges, this article introduces a novel pre-training framework that integrates sequential and geometric encoders for protein primary and tertiary structures. This framework guides mutation directions toward desired traits by simulating natural selection on wild-type proteins and evaluates variant effects based on their fitness to perform specific functions. We assess the proposed approach using three benchmarks comprising over 300 deep mutational scanning assays. The prediction results showcase exceptional performance across extensive experiments compared to other zero-shot learning methods, all while maintaining a minimal cost in terms of trainable parameters. This study not only proposes an effective framework for more accurate and comprehensive predictions to facilitate efficient protein engineering, but also enhances the in silico assessment system for future deep learning models to better align with empirical requirements.
The PyTorch implementation is available at https://github.com/ai4protein/ProtSSN.
PMID:40314227 | DOI:10.7554/eLife.98033
A depression detection approach leveraging transfer learning with single-channel EEG
J Neural Eng. 2025 May 2;22(3). doi: 10.1088/1741-2552/adcfc8.
ABSTRACT
Objective. Major depressive disorder (MDD) is a widespread mental disorder that affects health. Many methods combining electroencephalography (EEG) with machine learning or deep learning have been proposed to objectively distinguish between MDD patients and healthy individuals. However, most current methods detect depression based on multichannel EEG signals, which constrains their application in daily life. The context in which EEG is obtained can vary in terms of study design and EEG equipment settings, and the available depression EEG data are limited, which could also lessen the efficacy of a model in differentiating between MDD and healthy subjects. To solve these challenges, a depression detection model leveraging transfer learning with single-channel EEG is proposed. Approach. We utilized a pretrained ResNet152V2 network to which a flattening layer and a dense layer were appended. Feature extraction was applied, meaning that all layers within ResNet152V2 were frozen and only the parameters of the newly added layers were adjustable during training. Given the superiority of deep neural networks in image processing, the temporal sequences of EEG signals are first converted into images, transforming the problem of EEG signal categorization into an image classification task. Subsequently, a cross-subject experimental strategy was adopted for model training and performance evaluation. Main results. The model was capable of precisely (approaching 100% accuracy) identifying depression in other individuals by employing single-channel EEG samples obtained from a limited number of subjects. Furthermore, the model exhibited superior performance across four publicly available depression EEG datasets, demonstrating good adaptability to context-induced variations in EEG. Significance. This research not only highlights the impressive potential of deep transfer learning techniques in EEG signal analysis but also paves the way for innovative technical approaches to facilitate early diagnosis of associated mental disorders in the future.
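Converting EEG temporal sequences into images, so a pretrained image network like ResNet152V2 can treat detection as image classification, is commonly done with a spectrogram. The STFT sketch below is one generic way to do it; the paper's exact image transform is not specified here, so the window length, hop size, and test signal are assumptions.

```python
import numpy as np

def signal_to_image(x, win=64, hop=32):
    """Turn a 1-D EEG-like signal into a 2-D time-frequency 'image' by taking
    magnitude spectra of Hann-windowed frames (a plain STFT spectrogram)."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

fs = 256
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 8 * t)            # an 8 Hz alpha-band oscillation
img = signal_to_image(x)

peak_bin = img.mean(axis=1).argmax()
freq_res = fs / 64                        # 4 Hz per rfft bin with a 64-sample window
print(img.shape, peak_bin * freq_res)     # peak energy recovered at 8.0 Hz
```

The resulting 2-D array can then be rescaled and tiled into the three-channel input a frozen ImageNet backbone expects.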
PMID:40314182 | DOI:10.1088/1741-2552/adcfc8
Correction to: DOMSCNet: a deep learning model for the classification of stomach cancer using multi-layer omics data
Brief Bioinform. 2025 May 1;26(3):bbaf218. doi: 10.1093/bib/bbaf218.
NO ABSTRACT
PMID:40314061 | DOI:10.1093/bib/bbaf218
Radiomics-driven neuro-fuzzy framework for rule generation to enhance explainability in MRI-based brain tumor segmentation
Front Neuroinform. 2025 Apr 17;19:1550432. doi: 10.3389/fninf.2025.1550432. eCollection 2025.
ABSTRACT
INTRODUCTION: Brain tumors are a leading cause of mortality worldwide, with early and accurate diagnosis being essential for effective treatment. Although Deep Learning (DL) models offer strong performance in tumor detection and segmentation using MRI, their black-box nature hinders clinical adoption due to a lack of interpretability.
METHODS: We present a hybrid AI framework that integrates a 3D U-Net Convolutional Neural Network for MRI-based tumor segmentation with radiomic feature extraction. Dimensionality reduction is performed using machine learning, and an Adaptive Neuro-Fuzzy Inference System (ANFIS) is employed to produce interpretable decision rules. Each experiment is constrained to a small set of high-impact radiomic features to enhance clarity and reduce complexity.
RESULTS: The framework was validated on the BraTS2020 dataset, achieving an average DICE Score of 82.94% for tumor core segmentation and 76.06% for edema segmentation. Classification tasks yielded accuracies of 95.43% for binary (healthy vs. tumor) and 92.14% for multi-class (healthy vs. tumor core vs. edema) problems. A concise set of 18 fuzzy rules was generated to provide clinically interpretable outputs.
DISCUSSION: Our approach balances high diagnostic accuracy with enhanced interpretability, addressing a critical barrier to applying DL models in clinical settings. The integration of ANFIS and radiomics supports transparent decision-making, facilitating greater trust and applicability in real-world medical diagnostic assistance.
PMID:40313917 | PMC:PMC12043696 | DOI:10.3389/fninf.2025.1550432
Primer on machine learning applications in brain immunology
Front Bioinform. 2025 Apr 17;5:1554010. doi: 10.3389/fbinf.2025.1554010. eCollection 2025.
ABSTRACT
Single-cell and spatial technologies have transformed our understanding of brain immunology, providing unprecedented insights into immune cell heterogeneity and spatial organisation within the central nervous system. These methods have uncovered complex cellular interactions, rare cell populations, and the dynamic immune landscape in neurological disorders. This review highlights recent advances in single-cell "omics" data analysis and discusses their applicability for brain immunology. Traditional statistical techniques, adapted for single-cell omics, have been crucial in categorizing cell types and identifying gene signatures, overcoming challenges posed by increasingly complex datasets. We explore how machine learning, particularly deep learning methods like autoencoders and graph neural networks, is addressing these challenges by enhancing dimensionality reduction, data integration, and feature extraction. Newly developed foundation models present exciting opportunities for uncovering gene expression programs and predicting genetic perturbations. Focusing on brain development, we demonstrate how single-cell analyses have resolved immune cell heterogeneity, identified temporal maturation trajectories, and uncovered potential therapeutic links to various pathologies, including brain malignancies and neurodegeneration. The integration of single-cell and spatial omics has elucidated the intricate cellular interplay within the developing brain. This mini-review is intended for wet lab biologists at all career stages, offering a concise overview of the evolving landscape of single-cell omics in the age of widely available artificial intelligence.
PMID:40313869 | PMC:PMC12043695 | DOI:10.3389/fbinf.2025.1554010
Decentralized EEG-based detection of major depressive disorder via transformer architectures and split learning
Front Comput Neurosci. 2025 Apr 16;19:1569828. doi: 10.3389/fncom.2025.1569828. eCollection 2025.
ABSTRACT
INTRODUCTION: Major Depressive Disorder (MDD) remains a critical mental health concern, necessitating accurate detection. Traditional approaches to diagnosing MDD often rely on manual Electroencephalography (EEG) analysis to identify potential disorders. However, the inherent complexity of EEG signals, along with human error in interpreting these readings, calls for more reliable, automated methods of detection.
METHODS: This study utilizes EEG signals to classify MDD patients and healthy individuals through a combination of machine learning, deep learning, and split learning approaches. State-of-the-art machine learning models, i.e., Random Forest, Support Vector Machine, and Gradient Boosting, are utilized, while deep learning models such as Transformers and Autoencoders are selected for their robust feature-extraction capabilities. Traditional methods for training machine learning and deep learning models raise data privacy concerns and require significant computational resources. To address these issues, the study applies a split learning framework in which an ensemble learning technique combines the best-performing machine and deep learning models.
RESULTS: Results demonstrate a commendable classification performance with certain ensemble methods, and a Transformer-Random Forest combination achieved 99% accuracy. In addition, to address data-sharing constraints, a split learning framework is implemented across three clients, yielding high accuracy (over 95%) while preserving privacy. The best client recorded 96.23% accuracy, underscoring the robustness of combining Transformers with Random Forest under resource-constrained conditions.
DISCUSSION: These findings demonstrate that distributed deep learning pipelines can deliver precise MDD detection from EEG data without compromising data security. The proposed framework keeps data on local nodes and exchanges only intermediate representations. This approach meets institutional privacy requirements while providing robust classification outcomes.
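The split learning arrangement, where clients keep raw EEG locally and transmit only an intermediate representation to the party holding the rest of the model, can be sketched in a few lines. This NumPy toy is illustrative only; the layer sizes and the client/server cut point are assumptions, not the study's architecture.

```python
import numpy as np

def client_forward(x, W_client):
    """Client-side half of the network: raw data never leaves the client;
    only this intermediate representation ('smashed data') is transmitted."""
    return np.maximum(x @ W_client, 0)       # one ReLU layer

def server_forward(h, W_server):
    """Server-side half: completes the forward pass from the representation."""
    logits = h @ W_server
    return 1.0 / (1.0 + np.exp(-logits))     # sigmoid depression score

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                 # 5 EEG feature vectors held on the client
W_client = rng.normal(size=(16, 8)) * 0.3
W_server = rng.normal(size=(8, 1)) * 0.3

h = client_forward(X, W_client)              # only h crosses the network boundary
scores = server_forward(h, W_server)
print(h.shape, scores.shape)                 # (5, 8) (5, 1)
```

During training, gradients flow back across the same cut: the server returns the gradient with respect to `h`, and each side updates only its own weights, which is what keeps raw recordings and the full model from ever co-residing.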
PMID:40313734 | PMC:PMC12044669 | DOI:10.3389/fncom.2025.1569828