Deep learning
Cross-Database Evaluation of Deep Learning Methods for Intrapartum Cardiotocography Classification
IEEE J Transl Eng Health Med. 2025 Mar 5;13:123-135. doi: 10.1109/JTEHM.2025.3548401. eCollection 2025.
ABSTRACT
Continuous monitoring of fetal heart rate (FHR) and uterine contractions (UC), otherwise known as cardiotocography (CTG), is often used to assess the risk of fetal compromise during labor. However, interpreting CTG recordings visually is challenging for clinicians, given the complexity of CTG patterns, leading to poor sensitivity. Efforts to address this issue have focused on data-driven deep-learning methods to detect fetal compromise automatically. However, their progress is impeded by limited CTG training datasets and the absence of a standardized evaluation workflow, hindering algorithm comparisons. In this study, we use a private dataset of 9,887 CTG recordings with pH measurements and 552 CTG recordings from the open-access CTU-UHB dataset to conduct a cross-database evaluation of six deep-learning models for fetal compromise detection. We explore the impact of input selection of FHR and UC signals, signal pre-processing, downsampling frequency, and the influence of removing intermediate pH samples from the training dataset. Our findings reveal that using only FHR, and pre-processing FHR with artefact removal and interpolation, significantly improves classification performance for some model architectures, while excluding intermediate pH samples did not significantly improve performance for any model. Of the six models compared, ResNet exhibited the strongest fetal compromise classification performance across both databases at a downsampling rate of 1 Hz. Finally, class activation maps from the ResNet model highlighted highly contributing signal regions that aligned with clinical knowledge of compromised FHR patterns, underscoring the model's interpretability. These insights may serve as a standardized reference for developing and comparing future works in this domain. Clinical and Translational Impact: This study provides a standardized workflow for comparing deep-learning methods for CTG classification.
Ensuring new methods show generalizability and interpretability will improve their robustness and applicability in clinical settings.
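The class activation maps mentioned above weight each convolutional feature map by the classifier weight for the target class, then project the result back onto the input signal. As an illustrative sketch only (not the paper's implementation, and with made-up toy data), a 1D CAM for an FHR-length signal could look like:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights, signal_len):
    """1D class activation map: weight each feature map by the classifier
    weight for the target class, sum, rectify, and upsample to the input
    signal length via linear interpolation."""
    cam = np.maximum(0.0, np.tensordot(class_weights, feature_maps, axes=1))
    x_old = np.linspace(0.0, 1.0, cam.size)
    x_new = np.linspace(0.0, 1.0, signal_len)
    cam = np.interp(x_new, x_old, cam)
    return cam / cam.max() if cam.max() > 0 else cam

# toy example: 4 feature maps of length 30, "FHR signal" of length 240
rng = np.random.default_rng(0)
maps = rng.random((4, 30))
weights = np.array([0.5, -0.2, 0.8, 0.1])  # hypothetical class weights
cam = class_activation_map(maps, weights, 240)
```

High-CAM regions then mark the signal segments that contributed most to the compromise prediction.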
PMID:40657532 | PMC:PMC12250915 | DOI:10.1109/JTEHM.2025.3548401
Survival Prediction of Esophageal Cancer Using 3D CT Imaging: A Context-Aware Approach With Non-Local Feature Aggregation and Graph-Based Spatial Interaction
IEEE J Transl Eng Health Med. 2025 Apr 21;13:202-213. doi: 10.1109/JTEHM.2025.3562724. eCollection 2025.
ABSTRACT
Accurate prediction of survival rates in esophageal cancer (EC) is crucial for guiding personalized treatment decisions. Deep learning-based survival models have gained increasing attention due to their powerful ability to capture complex embeddings in medical data. However, the primary limitation of current survival prediction frameworks lies in their lack of attention to the contextual interactions between tumor and lymph node regions, which are vital for survival prediction. In the current study, we aimed to develop an effective EC survival risk prediction model using only 3D computed tomography (CT) images. The proposed model consists of two essential components: 1) a non-local feature aggregation module (NFAM) that integrates visual features from tumor and lymph nodes at both local and global scales, and 2) a graph-based spatial interaction module (GSIM) that explores the latent contextual interactions between tumors and lymph nodes. The experimental results demonstrate that our model achieves superior performance compared to state-of-the-art survival prediction methods, emphasizing its robust predictive capability. Moreover, we found that retaining lymph nodes with major axis [Formula: see text] mm yields the best predictive results (C-index: 0.725), offering valuable guidance on choosing prognostic factors for esophageal cancer. For EC survival prediction using solely 3D CT images, integrating lymph node information with tumor information helps to improve the predictive performance of deep learning models. Clinical impact: The American Joint Committee on Cancer (TNM) classification serves as the primary framework for risk stratification, prognostic evaluation, and therapeutic decision-making in oncology. Nevertheless, this prognostic tool has demonstrated limited predictive accuracy in assessing long-term survival for esophageal carcinoma patients undergoing multimodal therapeutic regimens.
Notably, even among those categorized within identical staging parameters, significant outcome heterogeneity persists, with survival trajectories diverging substantially across clinically matched populations. Our model serves as a complementary tool to the TNM staging system. By stratifying patients into distinct risk categories, this approach enables accurate prognosis assessment and provides critical guidance for postoperative adjuvant therapy decisions (such as whether to administer adjuvant radiotherapy or chemotherapy), thereby facilitating personalized treatment recommendations.
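The C-index reported above (0.725) is the standard concordance measure for survival models: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient with the shorter observed survival. A minimal sketch of Harrell's C-index on toy data (not the paper's evaluation code):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (the earlier time must be
    an observed event, not censored), count pairs where the higher predicted
    risk belongs to the shorter survival time; risk ties count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [5, 8, 12, 20]        # toy survival times (months)
events = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
risks  = [0.9, 0.7, 0.4, 0.1]  # model risk scores, perfectly ordered here
ci = concordance_index(times, events, risks)  # perfect ordering -> 1.0
```

A C-index of 0.5 corresponds to random ranking; 1.0 to perfect risk ordering.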
PMID:40657529 | PMC:PMC12251060 | DOI:10.1109/JTEHM.2025.3562724
Auxiliary diagnosis of hyperpigmented skin diseases using multimodal deep learning
Chin Med J (Engl). 2025 Jul 14. doi: 10.1097/CM9.0000000000003637. Online ahead of print.
NO ABSTRACT
PMID:40653928 | DOI:10.1097/CM9.0000000000003637
A semi-automated workflow for cohort-wise preparation of radiotherapy data for dose-response modeling, including autosegmentation of organs at risk
J Appl Clin Med Phys. 2025 Jul;26(7):e70152. doi: 10.1002/acm2.70152.
ABSTRACT
BACKGROUND: Preparing retrospective dose data for risk modeling using large study cohorts can be time consuming as it often requires patient-wise manual interventions. This is especially the case when considering organs at risk (OARs) not systematically delineated historically. Therefore, we aimed to develop and test a semi-automated workflow for cohort-wise preparation of radiotherapy data from the oncology information system (OIS), including OAR autosegmentation, for risk modeling purposes.
METHODS: A semi-automated workflow, including cohort-wise data extraction from a clinical OIS, cleanup, autosegmentation, quality controls (QCs), and data injection into a research OIS, was iteratively developed using 106 patient cases. We evaluated two deep learning (DL)-based methods and compared them with four atlas-based methods that could be integrated into the workflow for autosegmentation of the proximal bronchial tree (PBT), the heart, and the esophagus. One method was an in-house DL-based model trained on OARs manually contoured by experts for 100 cases. Geometric and dosimetric agreement with manually contoured OARs was evaluated for 20 independent cases. The final workflow was tested on 50 independent cases.
RESULTS: The DL-based methods were better than the atlas-based at segmenting the PBT (mean Dice similarity coefficient (DSC) 0.81-0.83 versus 0.59-0.80) and the esophagus (mean DSC 0.76-0.77 versus 0.39-0.46). The methods performed similarly for the heart (mean DSC 0.90-0.95 (DL-based) and 0.84-0.90 (atlas-based)). Our in-house autosegmentation model had the highest mean DSC for all OARs. The final version of the workflow successfully prepared data for 80% of the test cases without case-specific manual interventions.
CONCLUSIONS: The semi-automated workflow enabled efficient cohort-wise preparation of OIS data for risk modeling purposes. Our in-house DL-based segmentation model outperformed the other methods.
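The Dice similarity coefficient (DSC) used throughout the results above measures volumetric overlap between an automatic and a manual contour. A minimal numpy sketch on toy binary masks (illustrative only):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|); 1.0 for identical masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 pixels, 4 overlapping
score = dice(a, b)  # 2*4 / (4+6) = 0.8
```

The same formula extends unchanged to 3D voxel masks such as the OAR segmentations evaluated here.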
PMID:40653785 | DOI:10.1002/acm2.70152
Impact of three-dimensional prostate models during robot-assisted radical prostatectomy on surgical margins and functional outcomes
BJU Int. 2025 Jul 13. doi: 10.1111/bju.16850. Online ahead of print.
ABSTRACT
BACKGROUND: Robot-assisted radical prostatectomy (RARP) is the standard surgical procedure for the treatment of prostate cancer. RARP requires a trade-off between performing a wider resection in order to reduce the risk of positive surgical margins (PSMs) and performing minimal resection of the nerve bundles that determine functional outcomes, such as incontinence and potency, which affect patients' quality of life. In order to achieve favourable outcomes, a precise understanding of the three-dimensional (3D) anatomy of the prostate, nerve bundles and tumour lesion is needed.
STUDY DESIGN: This is the protocol for a single-centre feasibility study including a prospective two-arm interventional group (a 3D virtual and a 3D printed prostate model), and a prospective control group.
ENDPOINTS: The primary endpoint will be PSM status and the secondary endpoint will be functional outcomes, including incontinence and sexual function.
PATIENTS AND METHODS: The study will consist of a total of 270 patients: 54 patients will be included in each of the interventional groups (3D virtual, 3D printed models), 54 in the retrospective control group and 108 in the prospective control group. Automated segmentation of prostate gland and lesions will be conducted on multiparametric magnetic resonance imaging (mpMRI) using 'AutoProstate' and 'AutoLesion' deep learning approaches, while manual annotation of the neurovascular bundles, urethra and external sphincter will be conducted on mpMRI by a radiologist. This will result in masks that will be post-processed to generate 3D printed/virtual models. Patients will be allocated to either interventional arm and the surgeon will be given either a 3D printed or a 3D virtual model at the start of the RARP procedure. At the 6-week follow-up, the surgeon will meet with the patient to present PSM status and capture functional outcomes from the patient via questionnaires. We will capture these measures as endpoints for analysis. These questionnaires will be re-administered at 3, 6 and 12 months postoperatively.
PMID:40653671 | DOI:10.1111/bju.16850
The Power of Hellmann-Feynman Theorem: Kohn-Sham DFT Energy Derivatives with Respect to the Parameters of the Exchange-Correlation Functional at Linear Cost
J Phys Chem A. 2025 Jul 13. doi: 10.1021/acs.jpca.5c01771. Online ahead of print.
ABSTRACT
Efficient methods for computing derivatives with respect to the parameters of scientific models are crucial for applications in machine learning. These methods matter when training uses gradient-based optimization algorithms or when the model is integrated with deep learning, as they speed up the backpropagation pass. In the present work, we applied the Hellmann-Feynman theorem to calculate the derivatives of Kohn-Sham DFT energies with respect to the parameters of the exchange-correlation functional. This approach was implemented in a prototype program on the basis of the Python package PySCF. Using the LDA and GGA functionals as examples, we have shown that this approach scales approximately linearly with system size for a series of n-alkanes (CnH2n+2, n = 4...64) with a double-zeta basis set. We demonstrated a significant speedup in the derivative calculations compared with widely used automatic differentiation approaches such as the PyTorch-based DQC, which has a computational complexity of O(n2.0)-O(n2.5).
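The Hellmann-Feynman theorem states that for a normalized eigenstate, dE/dλ = ⟨ψ| dH/dλ |ψ⟩, so the parameter derivative of an energy needs only the wavefunction, not the response of the wavefunction itself. A toy matrix-eigenvalue sketch (not DFT, and far simpler than the paper's PySCF implementation) that checks this against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
H0 = rng.random((5, 5)); H0 = (H0 + H0.T) / 2  # symmetric "Hamiltonian"
V  = rng.random((5, 5)); V  = (V + V.T) / 2    # dH/dlam (parameter direction)

def ground_energy(lam):
    """Lowest eigenvalue of H(lam) = H0 + lam * V."""
    return np.linalg.eigvalsh(H0 + lam * V)[0]

lam = 0.3
w, vecs = np.linalg.eigh(H0 + lam * V)
psi = vecs[:, 0]          # normalized ground-state eigenvector
hf = psi @ V @ psi        # Hellmann-Feynman derivative dE0/dlam

h = 1e-6                  # central finite difference for comparison
fd = (ground_energy(lam + h) - ground_energy(lam - h)) / (2 * h)
```

The two numbers agree to high precision (assuming the eigenvalue is non-degenerate), which is exactly why the theorem avoids the cost of differentiating through the whole eigensolver.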
PMID:40653651 | DOI:10.1021/acs.jpca.5c01771
Diagnosing pathologic myopia by identifying morphologic patterns using ultra widefield images with deep learning
NPJ Digit Med. 2025 Jul 13;8(1):435. doi: 10.1038/s41746-025-01849-y.
ABSTRACT
Pathologic myopia is a leading cause of visual impairment and blindness. While deep learning-based approaches aid in recognizing pathologic myopia using color fundus photography, they often rely on implicit patterns that lack clinical interpretability. This study aims to diagnose pathologic myopia by identifying clinically significant morphologic patterns, specifically posterior staphyloma and myopic maculopathy, leveraging ultra-widefield (UWF) images that provide a broad retinal field of view. We curate a large-scale, multi-source UWF myopia dataset called PSMM and introduce RealMNet, an end-to-end lightweight framework designed to identify these challenging patterns. Built on a fast pretraining distillation backbone, RealMNet comprises only 21 million parameters, facilitating deployment on medical devices. Extensive experiments conducted across three different protocols demonstrate the robustness and generalizability of RealMNet. RealMNet achieves an F1 score of 0.7970 (95% CI 0.7612-0.8328), mAP of 0.8497 (95% CI 0.8058-0.8937), and AUROC of 0.9745 (95% CI 0.9690-0.9801), showcasing promise in clinical applications.
PMID:40653573 | DOI:10.1038/s41746-025-01849-y
Model predictive control of nonlinear dynamical systems based on long sequence stable Koopman network
ISA Trans. 2025 Jul 7:S0019-0578(25)00352-0. doi: 10.1016/j.isatra.2025.07.003. Online ahead of print.
ABSTRACT
In recent years, the Koopman method has found numerous applications in the field of nonlinear control due to its ability to map nonlinear states into high-dimensional spaces, thereby transforming nonlinear control problems into linear or bilinear problems. However, Koopman methods based on deep learning suffer from slow convergence, and the Koopman coefficients obtained through iterative processes cannot guarantee long-term prediction stability in the high-dimensional mapped space. To address these issues, we propose a Stable Deep Koopman Network with Model Predictive Control (SDKN-MPC) method for nonlinear control. The SDKN-MPC method utilizes the Stable Koopman Solver Algorithm to solve for a stable Koopman operator. It incorporates neural network training for embedding functions, with both training processes interleaved until convergence is achieved towards a unified stable solution. Subsequently, Model Predictive Control (MPC) is employed to control the high-dimensional linear system mapped through the Koopman operator, yielding high-dimensional desired inputs. These inputs undergo further processing through an auxiliary network to obtain the actual predictive control inputs. The proposed method is subjected to long-term predictive performance testing across multiple typical nonlinear control tasks and is compared with existing deep learning-based approaches. The results demonstrate that our method can extract more effective nonlinear features, converges rapidly, and exhibits superior predictive performance compared to existing methods.
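The core Koopman idea above is that a nonlinear system can become linear after lifting the state through a dictionary of observables. A minimal extended-DMD-style sketch (a standard textbook construction, not the SDKN-MPC method itself) on a system whose lifting is exactly linear:

```python
import numpy as np

mu, lam, c = 0.9, 0.5, 0.4

def step(x):
    """Nonlinear map; lifting with [x1, x2, x1**2] makes it exactly linear."""
    return np.array([mu * x[0], lam * x[1] + c * x[0] ** 2])

def lift(x):
    return np.array([x[0], x[1], x[0] ** 2])

# collect snapshot pairs and fit the Koopman matrix by least squares
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Psi  = np.array([lift(x) for x in X])
PsiP = np.array([lift(step(x)) for x in X])
K, *_ = np.linalg.lstsq(Psi, PsiP, rcond=None)  # Psi @ K ~= PsiP

# multi-step prediction entirely in the lifted (linear) space
x0 = np.array([0.8, -0.3])
z = lift(x0)
for _ in range(5):
    z = z @ K
truth = x0
for _ in range(5):
    truth = step(truth)
err = np.abs(z[:2] - truth)
```

Once the dynamics are linear in the lifted coordinates, standard linear MPC machinery can be applied to them, which is the control route the abstract describes; the paper's contribution is additionally enforcing stability of the learned operator.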
PMID:40653404 | DOI:10.1016/j.isatra.2025.07.003
A multimodule graph-based neural network for accurate drug-target interaction prediction via genomic, proteomic, and structural data fusion
Int J Biol Macromol. 2025 Jul 11:145907. doi: 10.1016/j.ijbiomac.2025.145907. Online ahead of print.
ABSTRACT
PURPOSE: Drug-target interaction (DTI) prediction is a critical step in accelerating drug discovery and repurposing. However, experimental determination of DTIs is time-consuming and costly. Existing computational approaches typically rely on single data modalities, limiting their ability to fully capture the complex biological mechanisms involved. There is therefore significant scope for a deep learning framework that unifies different data types and provides a more comprehensive understanding of the molecular mechanisms underlying drug-target interactions.
OUTCOMES: To address this, we developed GINCOVNET, a graph-based neural network for DTI prediction that integrates multiple data modalities, including molecular structure information, the target sequence, and molecule- and target-perturbed gene expression. Our evaluation demonstrated that the multi-data fusion model outperformed previous studies, with an R2 of 0.976 and an MAE of 0.053. Our ablation study shows that incorporating gene expression data improves the model's capabilities compared to using molecule-target data alone. Furthermore, molecular docking of randomly selected molecule-target pairs validates the reliability of our model in identifying potential interactions and its usefulness for identifying repurposed drug interactions and novel therapeutic candidates.
PMID:40653240 | DOI:10.1016/j.ijbiomac.2025.145907
CSCE: Cross Supervising and Confidence Enhancement pseudo-labels for semi-supervised subcortical brain structure segmentation
J Neurosci Methods. 2025 Jul 11:110522. doi: 10.1016/j.jneumeth.2025.110522. Online ahead of print.
ABSTRACT
Robust and accurate segmentation of subcortical structures in brain MR images lays the foundation for observation, analysis, and treatment planning of various brain diseases. Deep learning techniques based on Deep Neural Networks (DNNs) have achieved remarkable results in medical image segmentation using abundant labeled data. However, because acquiring high-quality annotations of brain subcortical structures is time-consuming and expensive, semi-supervised algorithms are attractive in practice. In this paper, we propose a novel framework for semi-supervised subcortical brain structure segmentation based on pseudo-label Cross Supervising and Confidence Enhancement (CSCE). Our framework comprises dual student-teacher models, specifically a U-Net and a TransUNet. For unlabeled data training, the TransUNet teacher generates pseudo-labels to supervise the U-Net student, while the U-Net teacher generates pseudo-labels to supervise the TransUNet student. This mutual supervision between the two models promotes and enhances their performance synergistically. We design two mechanisms to enhance the confidence of pseudo-labels and thereby improve the reliability of cross-supervision: a) using information entropy to describe uncertainty quantitatively; b) designing an auxiliary detection task that performs uncertainty detection on the pseudo-labels output by the teacher model, then screens out reliable pseudo-labels for cross-supervision. Finally, we construct an end-to-end deep brain structure segmentation network using only one teacher network (U-Net or TransUNet) for inference; segmentation results are significantly improved without increasing the parameter count or segmentation time compared with supervised U-Net- or TransUNet-based segmentation algorithms. Comprehensive experiments are performed on two public benchmark brain MRI datasets.
The proposed method achieves the best Dice scores and MHD values on both datasets compared to several recent state-of-the-art semi-supervised segmentation methods.
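Mechanism (a) above, quantifying pseudo-label uncertainty with information entropy, can be sketched directly: compute H = -Σ p_c log p_c over the class probabilities and keep only predictions below an entropy threshold. An illustrative numpy version (the threshold and toy probabilities are made up, not the paper's values):

```python
import numpy as np

def entropy_filter(probs, threshold):
    """Keep pseudo-labels whose predictive entropy H = -sum_c p_c log p_c
    falls below the threshold; uncertain predictions are screened out."""
    eps = 1e-12
    H = -np.sum(probs * np.log(probs + eps), axis=-1)
    labels = probs.argmax(axis=-1)
    return labels, H < threshold

# 3 voxels, 3 classes: confident, near-uniform (uncertain), confident
probs = np.array([[0.97, 0.02, 0.01],
                  [0.34, 0.33, 0.33],
                  [0.05, 0.05, 0.90]])
labels, keep = entropy_filter(probs, threshold=0.5)  # middle voxel rejected
```

Only the voxels passing the filter would then contribute to the cross-supervision loss between the two student-teacher pairs.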
PMID:40653056 | DOI:10.1016/j.jneumeth.2025.110522
Advancing Rare Neurological Disorder Diagnosis: Addressing Challenges with Systematic Reviews and AI-Driven MRI Meta-Trans Learning Framework for NeuroDegenerative Disorders
Ageing Res Rev. 2025 Jul 11:102831. doi: 10.1016/j.arr.2025.102831. Online ahead of print.
ABSTRACT
Neurological Disorders (ND) affect a large portion of the global population, impacting the brain, spinal cord, and nerves. These disorders fall into categories such as NeuroDevelopmental (NDD), NeuroBiological (NBD), and NeuroDegenerative (NDe) disorders, which range from common to rare conditions. While Artificial Intelligence (AI) has advanced healthcare diagnostics, training Machine Learning (ML) and Deep Learning (DL) models for early detection of rare neurological disorders remains a challenge due to limited patient data. This data scarcity poses a significant public health issue. Meta-Trans Learning (MTAL), which integrates Meta-Learning (MtL) and Transfer Learning (TL), offers a promising solution by leveraging small datasets to extract expert patterns, generalize findings, and reduce AI bias in healthcare. This research systematically reviews studies from 2018 to 2024 to explore how ML and MTAL techniques are applied in diagnosing NDD, NBD, and NDe disorders. It also provides statistical and parametric analysis of ML and DL methods for neurological disorder diagnosis. Lastly, the study introduces an MRI-based NDe-MTAL framework to aid healthcare professionals in the early detection of rare neurological disorders, aiming to enhance diagnostic accuracy and advance healthcare practice.
PMID:40653053 | DOI:10.1016/j.arr.2025.102831
TECM-ChI: A TECM network-based method for chromatin interaction prediction
Gene. 2025 Jul 11:149656. doi: 10.1016/j.gene.2025.149656. Online ahead of print.
ABSTRACT
Chromatin interactions refer to regulatory relationships formed between chromatin regions through physical contact or spatial proximity, playing a crucial role in genome function, structure, and the development of diseases. In cancer research, for example, thinking of chromatin as a gel can help explain the spread of cancer. Traditional experimental methods, such as Hi-C and ChIA-PET, are costly, time-consuming, and applicable to only a limited number of cell lines. Increasing evidence shows that DNA sequences and genomic features (e.g., CTCF motifs, sequence conservation, and chromatin-associated proteins) are essential predictors of chromatin interactions. However, existing computational methods based on these features suffer from data imbalance and low prediction accuracy, which limits their broader application in biomedical research. To address this, we propose an entirely new model, called TECM-ChI, that predicts chromatin interactions from DNA sequences and genomic features. In this model, we first design the FCR (Forward Combine Reverse) method to balance the positive and negative samples in the K562, IMR90, and GM12878 datasets to achieve a 1:1 ratio. Additionally, to fully extract meaningful information from the gene sequences, we develop a preprocessing Three-Encoding module that uses three encoding methods to concatenate each nucleotide into a 45-dimensional vector. Next, we propose the CMANet network model, which combines multi-layer convolution with multiple attention mechanisms. CMANet effectively extracts local features within sequence information and enhances focus on key regions, improving the ability to recognize chromatin interactions. To evaluate TECM-ChI's effectiveness, we conducted model variant experiments, loss performance analysis, and comparative analysis with existing computational methods across three cell lines.
Experimental results demonstrate that, compared to the current best models, TECM-ChI achieves accuracy improvements of 4.68%, 1.31%, and 2.41% on the K562, IMR90, and GM12878 datasets, respectively, proving its effectiveness and generalization ability in predicting chromatin interactions. The source code for TECM-ChI is available at https://github.com/Fated-2/TECM-ChI.git.
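The abstract does not detail how the FCR (Forward Combine Reverse) balancing works, but reverse-complement augmentation is a common way to expand a DNA sequence class toward a 1:1 ratio. A purely hypothetical sketch of that general idea (the function names and the balancing policy are assumptions, not the paper's method):

```python
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Reverse-complement a DNA sequence (complement bases, then reverse)."""
    return seq.translate(COMP)[::-1]

def balance_with_rc(pos, neg):
    """Hypothetical sketch: augment the minority class with reverse
    complements of its own sequences until the classes reach 1:1."""
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    aug, i = list(minority), 0
    while len(aug) < len(majority):
        aug.append(reverse_complement(minority[i % len(minority)]))
        i += 1
    return (aug, majority) if len(pos) < len(neg) else (majority, aug)

pos = ["ACGTT", "GGCAT"]                      # toy minority class
neg = ["TTTTA", "CCGGA", "ATATC", "GACTG"]    # toy majority class
pos_b, neg_b = balance_with_rc(pos, neg)      # now 4 vs 4
```

Balancing of this kind addresses the data-imbalance problem the abstract identifies in prior computational methods.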
PMID:40652995 | DOI:10.1016/j.gene.2025.149656
Less might be more: Enhancing clinical translation of DenseNet for OPC prognosis through selective imaging fusion
Radiother Oncol. 2025 Jul 11:111043. doi: 10.1016/j.radonc.2025.111043. Online ahead of print.
NO ABSTRACT
PMID:40652967 | DOI:10.1016/j.radonc.2025.111043
A roadmap for T cell receptor-peptide-bound major histocompatibility complex binding prediction by machine learning: glimpse and foresight
Brief Bioinform. 2025 Jul 2;26(4):bbaf327. doi: 10.1093/bib/bbaf327.
ABSTRACT
Cytotoxic T lymphocytes (CTLs) play a key role in the defense against cancer and infectious diseases. CTLs are mainly activated through T cell receptors (TCRs) after recognizing the peptide-bound class I major histocompatibility complex (pMHC), and subsequently kill virus-infected cells and tumor cells. Therefore, identification of antigen-specific CTLs and their TCRs is a promising route to T cell-based interventions. Currently, experimental identification and validation of antigen-specific CTLs is well established but extremely resource-intensive. Machine learning methods for TCR-pMHC prediction are attracting growing interest, particularly with advances in single-cell technologies. This review clarifies the key biological processes involved in TCR-pMHC binding. After comprehensively comparing the advantages and disadvantages of several state-of-the-art machine learning algorithms for TCR-pMHC prediction, we point out the discrepancies among these machine learning methods under specific disease conditions. Finally, we propose a roadmap for TCR-pMHC prediction. This roadmap would enable more accurate TCR-pMHC binding prediction through improved data quality, encoding and embedding methods, training models, and application context. This review could facilitate the development of T cell-based vaccines and therapies.
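The encoding step the roadmap highlights usually begins with turning TCR and peptide amino-acid sequences into numeric arrays. A minimal one-hot sketch (a generic baseline, not any specific model from the review; the epitope below is just a well-known toy example):

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids
IDX = {a: i for i, a in enumerate(AA)}

def one_hot(seq, max_len):
    """One-hot encode a peptide as a (max_len, 20) matrix,
    zero-padded beyond the sequence length."""
    out = np.zeros((max_len, len(AA)))
    for i, a in enumerate(seq[:max_len]):
        out[i, IDX[a]] = 1.0
    return out

x = one_hot("SIINFEKL", max_len=12)  # classic 8-mer model epitope
```

Learned embeddings and physicochemical encodings are common refinements of this baseline, and the review discusses how such choices affect prediction accuracy.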
PMID:40652935 | DOI:10.1093/bib/bbaf327
Prediction of major adverse cardiovascular events in patients with hypertrophic cardiomyopathy by deep learning and radiomics
Cardiology. 2025 Jul 11:1-20. doi: 10.1159/000547232. Online ahead of print.
ABSTRACT
INTRODUCTION: Hypertrophic cardiomyopathy (HCM) patients may be at risk for major adverse cardiovascular events (MACE), making risk stratification essential for implementing interventions in high-risk individuals. Deep transfer learning (DTL) and radiomics have made significant advances in the medical field; however, to date, no studies have combined echocardiography in HCM patients with DTL and radiomics to develop predictive models for identifying individuals at risk of MACE.
METHODS: This study is a retrospective analysis of 210 HCM patients, with a mean follow-up of 29.44 ± 16.21 months. Of these patients, 59 experienced MACE and 151 did not. The patients were randomly divided into training and validation sets in an 8:2 ratio. We collected parasternal left ventricular long-axis and short-axis images, with the left ventricular myocardial region defined as the region of interest (ROI). Radiomics features were extracted using the Pyradiomics package, and DTL features were obtained through a pre-trained ResNet50 model. These radiomics and DTL features were then combined, and feature selection was conducted using the Least Absolute Shrinkage and Selection Operator (LASSO). The selected features were used to construct the DTL-RAD predictive model with machine learning algorithms. The model's diagnostic performance was evaluated using the Receiver Operating Characteristic (ROC) curve and Decision Curve Analysis (DCA). Finally, we compared the prediction performance of the DTL-RAD model with that of models built using only radiomics features or only DTL features.
RESULTS: The diagnostic performance of the DTL-RAD model in both the training and validation sets was excellent, with AUC values of 0.936 and 0.918, specificity values of 0.852 and 0.767, and sensitivity values of 0.892 and 0.929, respectively. It significantly outperformed models using only radiomics or only DTL features. Furthermore, the DCA demonstrated that the DTL-RAD model exhibited superior clinical applicability and effectiveness, surpassing the other models.
CONCLUSION: The DTL-RAD model demonstrated exceptional performance in identifying HCM patients at risk of MACE, accurately detecting high-risk individuals at an early stage. This provides a basis for precise clinical intervention, effectively reducing the incidence of MACE in HCM patients.
PMID:40652933 | DOI:10.1159/000547232
A preprocessing method based on 3D U-Net for abdomen segmentation
Comput Biol Med. 2025 Jul 12;196(Pt A):110709. doi: 10.1016/j.compbiomed.2025.110709. Online ahead of print.
ABSTRACT
Deep learning methods have made significant progress in biomedical automatic segmentation but remain open to improvement, especially because preprocessing methods are underused. In this study, a pre-processing step is proposed both to improve segmentation performance and to produce faster segmentation results. In this context, the abdomen region of interest (ROI) is obtained using 3D U-Net, which has been shown to be effective in numerous studies. The presented work involves training a 3D U-Net on the CHAOS dataset and samples from the AbdomenCT-1K dataset, comprising a training set of 6998 slices. Afterwards, the network was tested exclusively on samples from the AbdomenCT-1K dataset, consisting of 1311 slices, to showcase its generalizability across diverse datasets. The study systematically examined the impact of fine-tuning parameters, including the k value for k-fold cross-validation (CV), batch size (bs), and learning rate (lr), on overall segmentation performance. Additionally, the study extended to training 3D U-Net with distinct loss functions, specifically Dice, Focal Dice, and Focal Tversky, to evaluate their respective effects on segmentation outcomes. Across various scenarios, the best Dice score recorded was 99.71%. Using the best models obtained from classical training and CV, each case in the test dataset was evaluated for Hausdorff Distance (HD), 95th percentile of Hausdorff Distance (HD95), and Average Symmetric Surface Distance (ASSD). Following the segmentation process, the abdomen ROI was identified for each 2D slice using Connected Components Analysis (CCA) and defined by the largest bounding box to mitigate information loss. After applying CCA to the predictions of the best model, an average reduction of 33.34% in dimensionality was achieved for the entire test dataset.
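The CCA-plus-bounding-box step described above can be sketched concretely: label the connected foreground components of a binary slice, keep the largest, and crop to its bounding box, which is where the dimensionality reduction comes from. A toy pure-numpy illustration (not the paper's code; the mask and reduction figure below are made up):

```python
import numpy as np
from collections import deque

def largest_component_bbox(mask):
    """4-connectivity connected components on a 2D binary mask; return the
    bounding box (rmin, rmax, cmin, cmax) of the largest component."""
    visited = np.zeros_like(mask, dtype=bool)
    best = None
    for r0, c0 in zip(*np.nonzero(mask)):
        if visited[r0, c0]:
            continue
        comp, q = [], deque([(r0, c0)])
        visited[r0, c0] = True
        while q:                      # BFS flood fill of one component
            r, c = q.popleft()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not visited[rr, cc]):
                    visited[rr, cc] = True
                    q.append((rr, cc))
        if best is None or len(comp) > len(best):
            best = comp
    rows = [p[0] for p in best]
    cols = [p[1] for p in best]
    return min(rows), max(rows), min(cols), max(cols)

mask = np.zeros((10, 10), dtype=bool)
mask[2:7, 3:8] = True                 # large "abdomen" component
mask[9, 0] = True                     # small stray component (ignored)
rmin, rmax, cmin, cmax = largest_component_bbox(mask)
roi_area = (rmax - rmin + 1) * (cmax - cmin + 1)
reduction = 1 - roi_area / mask.size  # fraction of the slice discarded
```

Cropping every slice to such a box is what shrinks the input volume passed to downstream organ segmentation.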
PMID:40652756 | DOI:10.1016/j.compbiomed.2025.110709
TNF-α-NF-κB activation through pathological α-Synuclein disrupts the BBB and exacerbates axonopathy
Cell Rep. 2025 Jul 12;44(7):116001. doi: 10.1016/j.celrep.2025.116001. Online ahead of print.
ABSTRACT
Dysfunction of the blood-brain barrier (BBB) is recognized as a key factor in the progression of neurodegenerative diseases (NDs), but the detailed mechanisms behind its pathogenesis and impact on neurodegeneration remain elusive. This study aimed to reveal the pathological effects of α-Synuclein (α-Syn), an aggregation-prone protein in synucleinopathy, on BBB integrity and function, and to identify therapeutic targets for α-Syn-related vasculopathy. Using a brain endothelial cell model, we investigated the pathological effect of preformed fibril α-Syn (PFF) on BBB integrity, employing generative adversarial network (GAN) deep learning to analyze pathological changes. We found that PFF activates immune responses, increasing endothelial monolayer permeability via the TNF-α-NF-κB pathway. Further in vivo studies with PFF-induced α-synucleinopathy and a transgenic animal model (G2-3) revealed that α-Syn aggregation disrupts the BBB, leading to axonal degeneration that was mitigated by treatment with etanercept, a non-BBB-penetrating TNF-α inhibitor. These findings suggest that targeting brain endothelial TNF-α signaling could be a potential therapeutic approach for synucleinopathy-related NDs.
PMID:40652513 | DOI:10.1016/j.celrep.2025.116001
Artificial intelligence-based action recognition and skill assessment in robotic cardiac surgery simulation: a feasibility study
J Robot Surg. 2025 Jul 13;19(1):384. doi: 10.1007/s11701-025-02563-3.
ABSTRACT
To create a deep neural network capable of recognizing basic surgical actions and categorizing surgeons by skill using video data only. Nineteen surgeons with varying levels of robotic experience performed three wet lab tasks on a porcine model: robotic-assisted atrial closure, mitral stitches, and dissection of the thoracic artery. We used temporal labeling to mark two surgical actions: suturing and dissection. Each complete recording was annotated as either "novice" or "expert" based on the operator's experience. The network architecture combined a Convolutional Neural Network for extracting spatial features with a Long Short-Term Memory layer to incorporate temporal information. A total of 435 recordings were analyzed. Fivefold cross-validation yielded a mean accuracy of 98% for the action recognition (AR) network and 79% for the skill assessment (SA) network. On held-out data, the AR model achieved an accuracy of 93%, with average recall, precision, and F1-score all at 93%, while the SA network had an accuracy of 56% at a predictive certainty of 95%. Gradient-weighted Class Activation Mapping revealed that the algorithm focused on the needle, suture, and instrument tips during suturing, and on the tissue during dissection. The AR network demonstrated high accuracy and predictive certainty, even with a limited dataset. The SA network requires more data to become a valuable tool for performance evaluation. When combined, these deep learning models can serve as a foundation for AI-based automated post-procedural assessments in robotic cardiac surgery simulation. ClinicalTrials.gov (NCT05043064).
PMID:40652436 | DOI:10.1007/s11701-025-02563-3
CoBdock-2: enhancing blind docking performance through hybrid feature selection combining ensemble and multimodel feature selection approaches
J Comput Aided Mol Des. 2025 Jul 13;39(1):48. doi: 10.1007/s10822-025-00629-w.
ABSTRACT
Identifying orthosteric binding sites and predicting small molecule affinities remains a key challenge in virtual screening. While blind docking explores the entire protein surface, its precision is hindered by the vast search space. Cavity detection-guided docking improves accuracy by narrowing focus to predicted pockets, but its effectiveness depends heavily on the quality of cavity detection tools. To overcome these limitations, we developed Consensus Blind Dock (CoBDock), a machine learning-based blind docking method that integrates molecular docking and cavity detection results to enhance binding site and pose prediction. Building on this, CoBDock-2 replaces traditional docking tools by extracting 1D numerical representations from protein, ligand, and interaction structural features, and applying advanced ensemble feature selection techniques. By evaluating 21 feature selection methods across 9,598 features, CoBDock-2 identifies key molecular characteristics of orthosteric binding sites. CoBDock-2 demonstrates consistent improvements over the original CoBDock across benchmark datasets (PDBBind v2020-general, MTi, ADS, DUD-E, CASF-2016), achieving 77% binding site identification accuracy (within 8 Å), 55% ligand pose prediction accuracy (RMSD ≤ 2 Å), a 19% reduction in the mean distance to ground truth ligands within the binding site, and an 18.5% decrease in the mean pose RMSD. Statistical analysis across the combined benchmark set confirms the significance of these improvements (p < 0.05). Notably, the Weighted Hybrid Feature Selection variant in CoBDock-2 further increases binding site accuracy to 79.8%, demonstrating the benefit of combining multimodel and ensemble feature selection strategies. Variability in predictions also decreased significantly, highlighting enhanced reliability and generalizability. Additionally, a low-bias hypothetical comparison with the state-of-the-art DiffDock + NMDN method was conducted to position CoBDock-2 relative to modern deep learning-based docking strategies.
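The ensemble feature selection idea central to CoBDock-2 (running multiple selection methods and aggregating their rankings into a consensus) can be illustrated with a small rank-aggregation sketch. The two univariate filters and the Borda-style summation below are simplified stand-ins for the paper's 21 methods and weighted hybrid scheme; the data are synthetic, with only features 0-2 carrying signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 400 samples, 20 features; only features 0-2 are informative.
X = rng.normal(size=(400, 20))
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=400) > 0).astype(float)

def ranks(scores):
    # Rank features by score, 0 = most informative.
    order = np.argsort(-scores)
    r = np.empty(len(scores), dtype=int)
    r[order] = np.arange(len(scores))
    return r

# Two simple univariate filters standing in for the 21 selection methods.
corr = np.abs(np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]))
mean_gap = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))

# Borda-style consensus: sum per-method ranks, keep the smallest totals.
total = ranks(corr) + ranks(mean_gap)
selected = np.argsort(total)[:5]
print(sorted(selected.tolist()))
```

The design rationale mirrors the abstract's claim about reduced variability: features that one noisy selector over- or under-ranks are corrected by the consensus, so the aggregated ranking is more stable than any single method's.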
PMID:40652425 | DOI:10.1007/s10822-025-00629-w
iALP: Identification of Allergenic Proteins Based on Large Language Model and Gate Linear Unit
Interdiscip Sci. 2025 Jul 13. doi: 10.1007/s12539-025-00734-2. Online ahead of print.
ABSTRACT
The rising incidence of allergic disorders has emerged as a pressing public health issue worldwide, underscoring the need for intensified research and effective intervention measures. Accurate identification of allergenic proteins (ALPs) is essential for preventing allergic reactions and mitigating health risks at the individual level. Although machine learning and deep learning techniques have been widely applied to ALP identification, existing methods are often limited in capturing the complex features of these proteins. In response, we introduce iALP, a novel method that leverages the ProtT5 large language model and a gated linear unit (GLU) for highly effective ALP identification. The rich representations from ProtT5 enable an in-depth analysis of the complex characteristics of ALPs, while the GLU captures the intricate nonlinear features hidden within these proteins. The results demonstrate that iALP achieves an accuracy and F1-score of 0.957 on the test set, and it outperforms leading predictors on an independent dataset. We also provide a detailed discussion of model performance on protein sequences shorter than 100 amino acids. We hope that iALP will facilitate accurate ALP prediction, thereby supporting effective allergy symptom prevention and the implementation of allergen prevention and treatment strategies. The iALP source code and datasets for the prediction tasks can be accessed from the GitHub repository at https://github.com/xialab-ahu/iALP.git.
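The gated linear unit used on top of the ProtT5 embeddings has a simple closed form: a linear branch multiplied elementwise by a sigmoid gate, GLU(x) = (xW + b) ⊙ σ(xV + c). The sketch below shows this layer in numpy; the 1024-dimensional input (a typical ProtT5 per-protein embedding size), the 256-dimensional output, and the random weights are illustrative assumptions, not iALP's actual configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glu(x, W, V, b, c):
    # Gated linear unit: a linear branch modulated elementwise by a sigmoid
    # gate, letting the layer pass or suppress individual features.
    return (x @ W + b) * sigmoid(x @ V + c)

rng = np.random.default_rng(0)
d_in, d_out = 1024, 256                  # illustrative dimensions
W, V = rng.normal(0, 0.02, (2, d_in, d_out))
b, c = np.zeros(d_out), np.zeros(d_out)

embedding = rng.normal(size=(1, d_in))   # stand-in for a ProtT5 embedding
out = glu(embedding, W, V, b, c)
print(out.shape)
```

Because the gate is bounded in (0, 1), the layer behaves like a learned soft feature selector over the embedding, which is the nonlinearity the abstract credits with capturing complex ALP characteristics.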
PMID:40652417 | DOI:10.1007/s12539-025-00734-2