Deep learning

CT-based radiomics-deep learning model predicts occult lymph node metastasis in early-stage lung adenocarcinoma patients: A multicenter study

Thu, 2025-03-13 06:00

Chin J Cancer Res. 2025 Jan 30;37(1):12-27. doi: 10.21147/j.issn.1000-9604.2025.01.02.

ABSTRACT

OBJECTIVE: The neglect of occult lymph node metastasis (OLNM) is one of the pivotal causes of early non-small cell lung cancer (NSCLC) recurrence after local treatments such as stereotactic body radiotherapy (SBRT) or surgery. This study aimed to develop and validate a computed tomography (CT)-based radiomics and deep learning (DL) fusion model for the non-invasive prediction of OLNM.

METHODS: Patients with radiologically node-negative lung adenocarcinoma from two centers were retrospectively analyzed. We developed clinical, radiomics, and radiomics-clinical models using logistic regression. A DL model was established using a three-dimensional squeeze-and-excitation residual network-34 (3D SE-ResNet34), and a fusion model was created by integrating selected clinical, radiomics, and DL features. Model performance was assessed using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, calibration curves, and decision curve analysis (DCA). Five predictive models were compared; SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) were employed for visualization and interpretation.
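
As a rough illustration of the fusion step described above (not the authors' implementation), the sketch below concatenates deep features from a 3D CNN backbone with selected radiomics/clinical features ahead of a small classification head; the backbone stand-in, feature dimensions, and head are assumptions.

```python
# Minimal sketch (assumptions only): fuse hand-crafted radiomics/clinical features
# with deep features from a 3D CNN backbone, then predict OLNM probability.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dl_backbone: nn.Module, dl_dim: int, handcrafted_dim: int):
        super().__init__()
        self.backbone = dl_backbone          # e.g. a 3D SE-ResNet-34 trunk returning (B, dl_dim) features (assumed)
        self.head = nn.Sequential(
            nn.Linear(dl_dim + handcrafted_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                # logit for OLNM
        )

    def forward(self, ct_volume, handcrafted):
        deep_feats = self.backbone(ct_volume)               # (B, dl_dim)
        fused = torch.cat([deep_feats, handcrafted], dim=1) # deep + selected radiomics/clinical features
        return torch.sigmoid(self.head(fused))
```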

RESULTS: Overall, 358 patients were included: 186 in the training cohort, 48 in the internal validation cohort, and 124 in the external testing cohort. The DL fusion model incorporating 3D SE-ResNet34 achieved the highest AUC of 0.947 in the training dataset, with strong performance in the internal and external cohorts (AUCs of 0.903 and 0.907, respectively), outperforming single-modal DL models, clinical models, radiomics models, and radiomics-clinical combined models (DeLong test: P<0.05). DCA confirmed its clinical utility, and calibration curves demonstrated excellent agreement between predicted and observed OLNM probabilities. Feature interpretation highlighted the importance of textural characteristics and the surrounding tumor regions in stratifying OLNM risk.

CONCLUSIONS: The DL fusion model reliably and accurately predicts OLNM in early-stage lung adenocarcinoma, offering a non-invasive tool to refine staging and guide personalized treatment decisions. These results may aid clinicians in optimizing surgical and radiotherapy strategies.

PMID:40078558 | PMC:PMC11893343 | DOI:10.21147/j.issn.1000-9604.2025.01.02

Categories: Literature Watch

A spatial and temporal transformer-based EEG emotion recognition in VR environment

Thu, 2025-03-13 06:00

Front Hum Neurosci. 2025 Feb 26;19:1517273. doi: 10.3389/fnhum.2025.1517273. eCollection 2025.

ABSTRACT

With the rapid development of deep learning, electroencephalography (EEG) emotion recognition has played a significant role in affective brain-computer interfaces, and many advanced emotion recognition models have achieved excellent results. However, current research mostly relies on emotion induction in laboratory settings, which lacks sufficient ecological validity and differs significantly from real-world scenarios. Moreover, emotion recognition models are typically trained and tested on datasets collected in laboratory environments, with little validation of their effectiveness in real-world situations. VR, which provides a highly immersive and realistic experience, is an ideal tool for emotion research. In this paper, we collect EEG data from participants while they watch VR videos. We propose a purely Transformer-based method, EmoSTT, which uses two separate Transformer modules to comprehensively model the temporal and spatial information of EEG signals. We validate the effectiveness of EmoSTT on a passive-paradigm emotion dataset collected in a laboratory environment and an active-paradigm emotion dataset collected in a VR environment. Compared with state-of-the-art methods, our method achieves robust emotion classification performance and transfers well between different emotion elicitation paradigms.
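
For a concrete picture of the two-module design, the following is a hedged sketch of one way to model temporal and spatial EEG information with separate Transformer encoders; channel counts, pooling, and the classifier are illustrative assumptions rather than the published EmoSTT architecture.

```python
# Hedged sketch: one Transformer encoder attends over time steps, another over
# electrodes (spatial); the two pooled embeddings are concatenated for classification.
import torch
import torch.nn as nn

class SpatialTemporalEEG(nn.Module):
    def __init__(self, n_channels=62, n_times=200, d_model=64, n_classes=3):
        super().__init__()
        self.time_proj = nn.Linear(n_channels, d_model)   # tokens = time steps
        self.chan_proj = nn.Linear(n_times, d_model)      # tokens = electrodes
        make_enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.temporal = make_enc()
        self.spatial = make_enc()
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):                                  # x: (B, n_channels, n_times)
        t = self.temporal(self.time_proj(x.transpose(1, 2))).mean(dim=1)
        s = self.spatial(self.chan_proj(x)).mean(dim=1)
        return self.classifier(torch.cat([t, s], dim=1))
```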

PMID:40078487 | PMC:PMC11897567 | DOI:10.3389/fnhum.2025.1517273

Categories: Literature Watch

Deep-Learning-Assisted Understanding of the Self-Assembly of Miktoarm Star Block Copolymers

Wed, 2025-03-12 06:00

ACS Nano. 2025 Mar 12. doi: 10.1021/acsnano.5c00811. Online ahead of print.

ABSTRACT

The self-assembly of topologically complex block copolymers, especially ABn-type miktoarm star copolymers, is a fascinating topic in the soft matter field, as these systems show self-assembly behaviors analogous to those of biological membranes. However, their diverse topological asymmetries and versatile spontaneous curvatures result in rather complex phase separations that deviate significantly from the common mechanisms. Studying their assemblies therefore requires numerous trial-and-error experiments spanning a tremendous parameter space with intricate relationships. Herein, we applied deep learning to decipher the phase behaviors of the miktoarm star block copolymer PEO-s-PS2 in an evaporation-induced self-assembly system. A neural network model was trained on practical experimental data, with two polymer properties and three synthesis-condition parameters as input variables; it successfully predicted a three-dimensional (3D) synthesis-field diagram and mined the relationship between the input parameters and the obtained structures. The model demonstrated the highly flexible structure-modulation directions of the miktoarm star block copolymer, revealing the correlation between polymer parameters, synthesis conditions, and output structures, which arises from the strong influence of these variables on spontaneous curvatures. This work demonstrates the efficiency of deep learning in uncovering the underlying rules of complex self-assembly systems, providing valuable insights for the exploration of soft matter science.
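
A toy sketch of the kind of mapping described, from two polymer properties plus three synthesis-condition parameters to a predicted structure class, using a small neural network; the feature names, class labels, and data here are invented placeholders, not the trained model.

```python
# Illustrative-only sketch: five input variables -> predicted self-assembled structure.
from sklearn.neural_network import MLPClassifier
import numpy as np

X = np.random.rand(200, 5)        # [MW, f_PEO, concentration, solvent_ratio, temperature] (hypothetical)
y = np.random.randint(0, 3, 200)  # e.g. 0=lamellae, 1=cylinders, 2=vesicles (hypothetical labels)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
# Predicting over a dense grid of the three synthesis parameters would yield a
# 3D synthesis-field diagram analogous to the one described in the abstract.
```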

PMID:40074545 | DOI:10.1021/acsnano.5c00811

Categories: Literature Watch

RPT: An integrated root phenotyping toolbox for segmenting and quantifying root system architecture

Wed, 2025-03-12 06:00

Plant Biotechnol J. 2025 Mar 12. doi: 10.1111/pbi.70040. Online ahead of print.

ABSTRACT

Dissecting the genetic architecture of the rice root system depends largely on phenotyping techniques, and high-throughput root phenotyping poses a great challenge. In this study, we established a cost-effective root phenotyping platform capable of analysing 1680 root samples within 2 h. To efficiently process large numbers of root images, we developed the root phenotyping toolbox (RPT), built on an enhanced SegFormer algorithm, and used it for root segmentation and the quantification of root phenotypic traits. Based on this root phenotyping platform and RPT, we screened 18 candidate quantitative trait locus (QTL) regions from 219 rice recombinant inbred lines under drought stress and validated the drought-resistance function of the gene OsIAA8 identified from these QTL regions. This study confirmed that RPT has great application potential for processing images from various sources and for mining stress-resistance genes of rice cultivars. Our root phenotyping platform and RPT software significantly improve high-throughput root phenotyping efficiency, enabling large-scale root trait analysis, which will promote the genetic improvement of drought-resistant cultivars and crop breeding research in the future.
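
The trait-quantification stage can be pictured with a generic sketch like the following (not the RPT code), which derives simple architecture traits from a binary root mask produced by a segmentation model; the trait definitions and scaling are assumptions.

```python
# Generic sketch: quantify simple root-system-architecture traits from a binary mask.
import numpy as np
from skimage.morphology import skeletonize

def root_traits(mask: np.ndarray, mm_per_px: float) -> dict:
    """Rough traits from a binary root mask (placeholder definitions)."""
    skeleton = skeletonize(mask > 0)
    total_length_mm = skeleton.sum() * mm_per_px          # skeleton pixels as a length proxy
    ys, xs = np.nonzero(mask)
    depth_mm = (ys.max() - ys.min()) * mm_per_px if ys.size else 0.0
    width_mm = (xs.max() - xs.min()) * mm_per_px if xs.size else 0.0
    return {"total_length_mm": float(total_length_mm),
            "depth_mm": float(depth_mm),
            "width_mm": float(width_mm)}
```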

PMID:40074292 | DOI:10.1111/pbi.70040

Categories: Literature Watch

Deep learning based estimation of heart surface potentials

Wed, 2025-03-12 06:00

Artif Intell Med. 2025 Mar 5;163:103093. doi: 10.1016/j.artmed.2025.103093. Online ahead of print.

ABSTRACT

Electrocardiographic imaging (ECGI) aims to noninvasively estimate heart surface potentials starting from body surface potentials. This is classically based on geometric information on the torso and the heart from imaging, which complicates clinical application. In this study, we aim to develop a deep learning framework to estimate heart surface potentials solely from body surface potentials, enabling wider clinical use. The framework introduces two main components: the transformation of 3D torso and heart geometries into standard 2D representations, and the development of a customized deep learning network model. The 2D torso and heart representations maintain a consistent layout across different subjects, making the proposed framework applicable to different torso-heart geometries. With spatial information incorporated in the 2D representations, the torso-heart physiological relationship can be learnt by the network. The deep learning model is based on a Pix2Pix network, adapted to work with 2.5D data in our task, i.e., 2D body surface potential maps (BSPMs) and 2D heart surface potential maps (HSPMs) with time-sequential information. We propose a new loss function tailored to this specific task, which uses cosine similarity and different weights for different inputs. BSPMs and HSPMs from 11 healthy subjects (8 females and 3 males) and 29 idiopathic ventricular fibrillation (IVF) patients (11 females and 18 males) were used in this study. Performance was assessed on a test set by measuring the similarity and error between the output of the proposed model and the solution provided by mainstream ECGI, by comparing HSPMs, the concatenated electrograms (EGMs), and the estimated activation time (AT) and recovery time (RT). The mean of the mean absolute error (MAE) for the HSPMs was 0.012 ± 0.011, and the mean of the corresponding structural similarity index measure (SSIM) was 0.984 ± 0.026. The mean of the MAE for the EGMs was 0.004 ± 0.004, and the mean of the corresponding Pearson correlation coefficient (PCC) was 0.643 ± 0.352. Results suggest that the model is able to precisely capture the structural and temporal characteristics of the HSPMs. The mean of the absolute time differences between estimated and reference activation times was 6.048 ± 5.188 ms, and the mean of the absolute differences for recovery times was 18.768 ± 17.299 ms. Overall, results show similar performance between the proposed model and standard ECGI, exhibiting low error and consistent clinical patterns, without the need for CT/MRI. The model proves effective across diverse torso-heart geometries, and it successfully integrates temporal information in the input. This in turn suggests the possible use of this model in cost-effective clinical scenarios such as patient screening or post-operative follow-up.
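
The loss described, cosine similarity with input-dependent weights, might look roughly like the following sketch; the exact weighting scheme and any additional terms are assumptions, not the paper's definition.

```python
# Hedged sketch of a weighted cosine-similarity loss for predicted HSPM sequences.
import torch
import torch.nn.functional as F

def hspm_loss(pred, target, weights):
    """pred, target: (B, T, H, W) heart-surface potential maps; weights: (B,) per-input weights (assumed)."""
    cos = F.cosine_similarity(pred.flatten(1), target.flatten(1), dim=1)  # (B,) shape similarity
    mae = (pred - target).abs().flatten(1).mean(dim=1)                    # (B,) amplitude error (assumed extra term)
    return (weights * (1.0 - cos + mae)).mean()
```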

PMID:40073713 | DOI:10.1016/j.artmed.2025.103093

Categories: Literature Watch

PocketDTA: A pocket-based multimodal deep learning model for drug-target affinity prediction

Wed, 2025-03-12 06:00

Comput Biol Chem. 2025 Mar 6;117:108416. doi: 10.1016/j.compbiolchem.2025.108416. Online ahead of print.

ABSTRACT

Drug-target affinity prediction is a fundamental task in the field of drug discovery. Extracting and integrating structural information from proteins effectively is crucial to enhance the accuracy and generalization of prediction, which remains a substantial challenge. This paper proposes a pocket-based multimodal deep learning model named PocketDTA for drug-target affinity prediction, based on the principle of "structure determines function". PocketDTA introduces a pocket graph structure whose nodes encode protein residue features pretrained using a biological language model, while edges represent sequence and spatial distances between residues. This approach overcomes the lack of spatial information in traditional prediction models that use only protein sequence input. Furthermore, PocketDTA employs relational graph convolutional networks at both atomic and residue levels to extract structural features from drugs and proteins. By integrating multimodal information through deep neural networks, PocketDTA combines sequence and structural data to improve affinity prediction accuracy. Experimental results demonstrate that PocketDTA outperforms state-of-the-art prediction models across multiple benchmark datasets, showing strong generalization under more realistic data splits and confirming the effectiveness of pocket-based methods for affinity prediction.
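
A minimal sketch of a residue-level pocket encoder in the spirit described, using relational graph convolutions over two edge types (sequence vs. spatial); dimensions, the number of relations, and pooling are assumptions, and this is not PocketDTA's code.

```python
# Hedged sketch: relational GCN over a pocket graph with language-model node features.
import torch
from torch_geometric.nn import RGCNConv, global_mean_pool

class PocketEncoder(torch.nn.Module):
    def __init__(self, plm_dim=1280, hidden=128, num_relations=2):
        super().__init__()
        self.conv1 = RGCNConv(plm_dim, hidden, num_relations)
        self.conv2 = RGCNConv(hidden, hidden, num_relations)

    def forward(self, x, edge_index, edge_type, batch):
        # x: per-residue embeddings; edge_type distinguishes sequence vs. spatial edges (assumed)
        h = self.conv1(x, edge_index, edge_type).relu()
        h = self.conv2(h, edge_index, edge_type).relu()
        return global_mean_pool(h, batch)   # one embedding per pocket
```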

PMID:40073710 | DOI:10.1016/j.compbiolchem.2025.108416

Categories: Literature Watch

Rapid diagnosis of lung cancer by multi-modal spectral data combined with deep learning

Wed, 2025-03-12 06:00

Spectrochim Acta A Mol Biomol Spectrosc. 2025 Mar 6;335:125997. doi: 10.1016/j.saa.2025.125997. Online ahead of print.

ABSTRACT

Lung cancer is a malignant tumor that poses a serious threat to human health. Existing lung cancer diagnostic techniques face the challenges of high cost and slow diagnosis. Early and rapid diagnosis and treatment are essential to improve the outcome of lung cancer. In this study, a deep learning-based multi-modal spectral information fusion (MSIF) network is proposed for lung adenocarcinoma cell detection. First, multi-modal data of Fourier transform infrared spectra, UV-vis absorbance spectra, and fluorescence spectra of normal and patient cells were collected. Subsequently, the spectral text data were efficiently processed by a one-dimensional convolutional neural network. The global and local features of the spectral images are deeply mined by a hybrid ResNet-Transformer model. An adaptive depth-wise convolution (ADConv) is introduced for feature extraction, overcoming the shortcomings of conventional convolution. To enable feature learning across modalities, a cross-modal interaction fusion (CMIF) module is designed. This module fuses the extracted spectral image and text features through multi-faceted interaction, enabling full utilization of multi-modal features through feature sharing. The method demonstrated excellent performance on the test sets of Fourier transform infrared spectra, UV-vis absorbance spectra and fluorescence spectra, achieving 95.83%, 97.92% and 100% accuracy, respectively. In addition, experiments validate the superiority of multi-modal spectral data and the robustness of the model's generalization capability. This study not only provides strong technical support for the early diagnosis of lung cancer, but also opens a new chapter for the application of multi-modal data fusion in spectroscopy.
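
One plausible reading of the CMIF idea is bidirectional cross-attention between the two modalities' feature tokens, as in the hedged sketch below; the dimensions, attention configuration, and classifier are assumptions rather than the published module.

```python
# Illustrative sketch: cross-modal interaction between spectral-image tokens and
# 1D-spectrum ("text") tokens via bidirectional multi-head attention.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, n_classes=2):
        super().__init__()
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, img_tokens, txt_tokens):      # (B, Ni, dim), (B, Nt, dim)
        a, _ = self.txt_to_img(txt_tokens, img_tokens, img_tokens)  # text attends to image
        b, _ = self.img_to_txt(img_tokens, txt_tokens, txt_tokens)  # image attends to text
        fused = torch.cat([a.mean(1), b.mean(1)], dim=1)
        return self.classifier(fused)
```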

PMID:40073660 | DOI:10.1016/j.saa.2025.125997

Categories: Literature Watch

Online assessment of soluble solids content in strawberries using a developed Vis/NIR spectroscopy system with a hanging grasper

Wed, 2025-03-12 06:00

Food Chem. 2025 Mar 7;478:143671. doi: 10.1016/j.foodchem.2025.143671. Online ahead of print.

ABSTRACT

Online detection of the internal quality of strawberries presents challenges, particularly concerning fruit damage, detection accuracy, and processing efficiency. This study explores the feasibility of using Vis/NIR spectroscopy for online detection of soluble solids content (SSC) in strawberries during hanging transportation. After analyzing the SSC distribution in strawberries, an optical sensing system was developed, and optimal configurations were identified using PLSR models. When employing a horizontal optical beam through the strawberry center, the PLSR model combined with SNV preprocessing and CARS feature selection achieved the best conventional chemometric results (RPD of 4.793). Additionally, three 1D-CNN approaches were investigated, with the 1D-CNN-LSTM method exhibiting superior performance (Rp2 of 0.963, RMSEP of 0.209 °Brix, RPD of 5.332). These findings demonstrate the excellent capability of our developed system, enhanced by deep learning methods, for online detection of SSC in strawberries. This work may open new avenues for the online assessment of internal quality in small and delicate fruits.
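
The conventional chemometric baseline named here, SNV preprocessing followed by PLSR, can be sketched as follows; CARS wavelength selection is omitted and the data, number of components, and shapes are placeholders.

```python
# Sketch of the chemometric baseline: SNV preprocessing + partial least squares regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: per-spectrum mean-centering and scaling."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X = snv(np.random.rand(120, 512))    # 120 spectra x 512 wavelengths (dummy data)
y = np.random.uniform(5, 12, 120)    # SSC in °Brix (dummy reference values)
pls = PLSRegression(n_components=10).fit(X, y)
ssc_pred = pls.predict(X)            # predicted SSC for new spectra in practice
```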

PMID:40073605 | DOI:10.1016/j.foodchem.2025.143671

Categories: Literature Watch

AI-based association analysis for medical imaging using latent-space geometric confounder correction

Wed, 2025-03-12 06:00

Med Image Anal. 2025 Mar 6;102:103529. doi: 10.1016/j.media.2025.103529. Online ahead of print.

ABSTRACT

This study addresses the challenges of confounding effects and interpretability in artificial-intelligence-based medical image analysis. Whereas existing literature often resolves confounding by removing confounder-related information from latent representations, this strategy risks affecting image reconstruction quality in generative models, thus limiting their applicability in feature visualization. To tackle this, we propose a different strategy that retains confounder-related information in latent representations while finding an alternative confounder-free representation of the image data. Our approach views the latent space of an autoencoder as a vector space, where imaging-related variables, such as the learning target (t) and confounder (c), each have a vector capturing their variability. The confounding problem is addressed by searching for a confounder-free vector that is orthogonal to the confounder-related vector but maximally collinear with the target-related vector. To achieve this, we introduce a novel correlation-based loss that not only performs vector searching in the latent space, but also encourages the encoder to generate latent representations linearly correlated with the variables. Subsequently, we interpret the confounder-free representation by sampling and reconstructing images along the confounder-free vector. The efficacy and flexibility of our proposed method are demonstrated across three applications, accommodating multiple confounders and utilizing diverse image modalities. Results affirm the method's effectiveness in reducing confounder influences, preventing wrong or misleading associations, and offering a unique visual interpretation for in-depth investigations by clinical and epidemiological researchers. The code is released in the following GitLab repository: https://gitlab.com/radiology/compopbio/ai_based_association_analysis.
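
The geometric core of the approach can be illustrated with a closed-form toy example: project the target-related vector onto the orthogonal complement of the confounder-related vector. The paper instead learns this jointly with the encoder via a correlation-based loss; the sketch below only conveys the geometry, not the released code.

```python
# Toy sketch: a direction orthogonal to the confounder vector c but as collinear
# as possible with the target vector t, obtained by Gram-Schmidt-style projection.
import numpy as np

def confounder_free_direction(t: np.ndarray, c: np.ndarray) -> np.ndarray:
    c_hat = c / np.linalg.norm(c)
    v = t - (t @ c_hat) * c_hat       # remove the component of t along c
    return v / np.linalg.norm(v)      # unit vector: orthogonal to c, closest to t
```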

PMID:40073582 | DOI:10.1016/j.media.2025.103529

Categories: Literature Watch

Predicting C- and S-linked Glycosylation sites from protein sequences using protein language models

Wed, 2025-03-12 06:00

Comput Biol Med. 2025 Mar 11;189:109956. doi: 10.1016/j.compbiomed.2025.109956. Online ahead of print.

ABSTRACT

Among various post-translational modifications (PTMs), predicting C-linked and S-linked glycosites is an essential task, yet experimental techniques such as Capillary Electrophoresis (CE), Enzymatic Deglycosylation, and Mass Spectrometry (MS) are expensive. Therefore, computational techniques are required to predict these glycosites. Here, different language model embeddings and sequential features were explored. Two separate feature selection methods, Recursive Feature Elimination (RFE) and Particle Swarm Optimization (PSO), were employed to identify the optimal feature set. Cross-validation results were generated for choosing the final models. Three sampling strategies for handling imbalanced datasets were examined: random undersampling, the Synthetic Minority Over-sampling Technique (SMOTE), and the Adaptive Synthetic Sampling Approach for Imbalanced Learning (ADASYN). In this study, two models, DeepCSEmbed-C and DeepCSEmbed-S, are proposed for C-linked and S-linked glycosylation prediction, respectively. DeepCSEmbed-C is a dual-branch deep learning model comprising a Feedforward Neural Network (FNN) branch and an Inception branch, coupled with a random undersampling strategy. DeepCSEmbed-S is a Categorical Boosting (CAT) model with the SMOTE oversampling strategy. DeepCSEmbed-C outperformed available state-of-the-art (SOTA) methods, achieving 92.9% sensitivity, 95.1% F1-score and 90.6% MCC on the independent dataset. Datasets and Python scripts for training and testing the models are provided and made freely accessible at https://github.com/nafcoder/DeepCSEmbed.
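
As a hedged sketch of the S-linked pipeline summarized above (per-site embeddings, SMOTE oversampling, a CatBoost classifier), the snippet below uses placeholder features; embedding extraction, feature selection, and hyperparameters are assumptions, not the released scripts.

```python
# Sketch: SMOTE oversampling of rare positive glycosites followed by CatBoost.
import numpy as np
from imblearn.over_sampling import SMOTE
from catboost import CatBoostClassifier

X = np.random.rand(1000, 1024)           # per-site language-model embeddings (placeholder)
y = np.random.binomial(1, 0.05, 1000)    # rare positive glycosites (placeholder labels)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # balance the classes
clf = CatBoostClassifier(iterations=500, verbose=False).fit(X_res, y_res)
site_probs = clf.predict_proba(X)[:, 1]  # predicted glycosylation probability per site
```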

PMID:40073495 | DOI:10.1016/j.compbiomed.2025.109956

Categories: Literature Watch

Progressive multi-task learning for fine-grained dental implant classification and segmentation in CBCT image

Wed, 2025-03-12 06:00

Comput Biol Med. 2025 Mar 11;189:109896. doi: 10.1016/j.compbiomed.2025.109896. Online ahead of print.

ABSTRACT

With the ongoing advancement of digital technology, oral medicine is transitioning from traditional diagnostics to computer-assisted diagnosis and treatment. Identifying dental implants in patients without records is complex and time-consuming. Accurate identification of dental implants is crucial for ensuring the sustainability and reliability of implant treatment, particularly in cases where patients lack available medical records. In this paper, we propose a multi-task fine-grained CBCT dental implant classification and segmentation method using deep learning, called MFPT-Net. This method, based on progressive training with multiscale feature extraction and enhancement, can differentiate subtle implant features and easily confused similar features, such as implant threads. It addresses the problem of large intra-class differences and small inter-class differences of implants, achieving automatic, synchronized classification and segmentation of implant systems in CBCT images. Our dataset comprises 437 CBCT sequences with 723 dental implants acquired from three different centers, and is the first instance of utilizing such a comprehensive collection of data for CBCT analysis. Our method achieved satisfactory classification results with an accuracy of 92.98%, average precision of 93.15%, average recall of 93.31%, and average F1 score of 93.18%, exceeding the second-best model by nearly 10%. Moreover, our segmentation Dice similarity coefficient reached 98.04%, which is significantly better than the current state-of-the-art method. External clinical validation with 252 implants confirmed our model's clinical feasibility. These results demonstrate that our proposed method could assist dentists with dental implant classification and segmentation in CBCT images, enhancing efficiency and accuracy in clinical practice.
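
The joint classification-and-segmentation structure can be pictured with a generic multi-task sketch like the one below (not MFPT-Net itself); the backbone, feature shapes, and heads are placeholders.

```python
# Generic multi-task sketch: a shared encoder feeds a segmentation head and a
# fine-grained implant-system classification head.
import torch
import torch.nn as nn

class MultiTaskImplantNet(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, n_systems: int):
        super().__init__()
        self.encoder = encoder                               # 3D CNN trunk returning (B, feat_dim, D, H, W) (assumed)
        self.seg_head = nn.Conv3d(feat_dim, 1, kernel_size=1)
        self.cls_head = nn.Linear(feat_dim, n_systems)

    def forward(self, volume):
        feats = self.encoder(volume)
        seg = torch.sigmoid(self.seg_head(feats))            # voxel-wise implant mask
        cls = self.cls_head(feats.mean(dim=(2, 3, 4)))       # implant-system logits
        return seg, cls
```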

PMID:40073494 | DOI:10.1016/j.compbiomed.2025.109896

Categories: Literature Watch

A 240-target VEP-based BCI system employing narrow-band random sequences

Wed, 2025-03-12 06:00

J Neural Eng. 2025 Mar 12. doi: 10.1088/1741-2552/adbfc1. Online ahead of print.

ABSTRACT

OBJECTIVE: In the field of brain-computer interface (BCI), achieving high information transfer rates (ITR) with a large number of targets remains a challenge. This study aims to address this issue by developing a novel code-modulated visual evoked potential (c-VEP) BCI system capable of handling an extensive instruction set while maintaining high performance.

METHOD: We propose a c-VEP BCI system that employs narrow-band random sequences as visual stimuli and utilizes a convolutional neural network (CNN)-based EEG2Code decoding algorithm. This algorithm predicts corresponding stimulus sequences from EEG data and achieves efficient and accurate classification.
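
A hedged sketch of the EEG2Code idea as described: a CNN predicts the stimulus bit sequence from an EEG epoch, and the target is selected as the candidate code most correlated with the prediction. Channel counts, layers, and code length are assumptions.

```python
# Hedged sketch: EEG epoch -> predicted stimulus bits -> best-matching candidate code.
import torch
import torch.nn as nn

class EEG2Code(nn.Module):
    def __init__(self, n_channels=9, code_len=60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(code_len),
            nn.Conv1d(16, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, eeg):                   # eeg: (B, n_channels, n_times)
        return self.net(eeg).squeeze(1)       # (B, code_len) predicted bit probabilities

def classify(pred_bits, candidate_codes):     # candidate_codes: (n_targets, code_len)
    corr = torch.nn.functional.cosine_similarity(
        pred_bits.unsqueeze(1), candidate_codes.unsqueeze(0), dim=2)
    return corr.argmax(dim=1)                 # index of the selected target per trial
```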

MAIN RESULTS: Offline experiments, conducted in a sequential paradigm, resulted in an average accuracy of 87.66% and a simulated ITR of 260.14 bits/min. In online experiments, the system demonstrated an accuracy of 76.27% and an ITR of 213.80 bits/min in a cued spelling task.

SIGNIFICANCE: This work represents an advancement in c-VEP BCI systems, offering one of the largest known instruction sets in VEP-based BCIs and demonstrating robust performance metrics. The proposed system shows potential for more practical and efficient BCI applications.

PMID:40073451 | DOI:10.1088/1741-2552/adbfc1

Categories: Literature Watch

Deep learning based on ultrasound images predicting cervical lymph node metastasis in postoperative patients with differentiated thyroid carcinoma

Wed, 2025-03-12 06:00

Br J Radiol. 2025 Mar 12:tqaf047. doi: 10.1093/bjr/tqaf047. Online ahead of print.

ABSTRACT

OBJECTIVES: To develop a deep learning (DL) model based on ultrasound (US) images of lymph nodes for predicting cervical lymph node metastasis (CLNM) in postoperative patients with differentiated thyroid carcinoma (DTC).

METHODS: We retrospectively collected 352 lymph nodes from 330 patients with cytopathology findings between June 2021 and December 2023 at our institution. The database was randomly divided into training and test cohorts at an 8:2 ratio. Basic DL models of the longitudinal and cross-sectional views of lymph nodes were each constructed based on ResNet50, and the results of the two basic models were fused (1:1) to construct a longitudinal + cross-sectional DL model. Univariate and multivariate analyses were used to assess US features and construct a conventional US model. Subsequently, a combined model was constructed by integrating DL and US.
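
The 1:1 fusion of the two views can be sketched as follows, with one ResNet50 per view and the predicted probabilities averaged; checkpoints, preprocessing, and the subsequent combination with conventional US features are omitted, and this is not the authors' code.

```python
# Sketch: two-view ResNet50 fusion by averaging predicted CLNM probabilities (1:1).
import torch
from torchvision.models import resnet50

long_net = resnet50(num_classes=2)    # longitudinal-view model
cross_net = resnet50(num_classes=2)   # cross-sectional-view model

def fused_probability(long_img, cross_img):           # each (B, 3, 224, 224)
    p_long = torch.softmax(long_net(long_img), dim=1)[:, 1]
    p_cross = torch.softmax(cross_net(cross_img), dim=1)[:, 1]
    return 0.5 * p_long + 0.5 * p_cross                # fused probability of CLNM
```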

RESULTS: The diagnostic accuracy of the longitudinal + cross-sectional DL model was higher than that of the longitudinal or cross-sectional model alone. The AUC of the combined model (US+DL) was 0.855 (95%CI: 0.767-0.942), and the accuracy, sensitivity and specificity were 0.786 (95%CI: 0.671-0.875), 0.972 (95%CI: 0.855-0.999) and 0.588 (95%CI: 0.407-0.754), respectively. Compared with the US and DL models, the IDI and NRI of the combined model were both positive.

CONCLUSIONS: This study preliminarily shows that the DL model based on US images of lymph nodes has high diagnostic efficacy for predicting CLNM in postoperative patients with DTC, and that the combined US+DL model is superior to conventional US or DL alone for predicting CLNM in this population.

ADVANCES IN KNOWLEDGE: We innovatively used DL of lymph node US images to predict the status of cervical lymph nodes in postoperative patients with DTC.

PMID:40073229 | DOI:10.1093/bjr/tqaf047

Categories: Literature Watch

Genetically supported targets and drug repurposing for brain aging: A systematic study in the UK Biobank

Wed, 2025-03-12 06:00

Sci Adv. 2025 Mar 14;11(11):eadr3757. doi: 10.1126/sciadv.adr3757. Epub 2025 Mar 12.

ABSTRACT

Brain age gap (BAG), the deviation between estimated brain age and chronological age, is a promising marker of brain health. However, the genetic architecture and reliable targets for brain aging remain poorly understood. In this study, we estimate magnetic resonance imaging (MRI)-based brain age using deep learning models trained on the UK Biobank and validated with three external datasets. A genome-wide association study for BAG identified two unreported loci and seven previously reported loci. By integrating Mendelian Randomization (MR) and colocalization analysis on eQTL and pQTL data, we prioritized seven genetically supported druggable genes, including MAPT, TNFSF12, GZMB, SIRPB1, GNLY, NMB, and C1RL, as promising targets for brain aging. We rediscovered 13 potential drugs with evidence from clinical trials of aging and prioritized several drugs with strong genetic support. Our study provides insights into the genetic basis of brain aging, potentially facilitating drug development for brain aging to extend the health span.

PMID:40073132 | DOI:10.1126/sciadv.adr3757

Categories: Literature Watch

FlyVISTA, an integrated machine learning platform for deep phenotyping of sleep in Drosophila

Wed, 2025-03-12 06:00

Sci Adv. 2025 Mar 14;11(11):eadq8131. doi: 10.1126/sciadv.adq8131. Epub 2025 Mar 12.

ABSTRACT

There is great interest in using genetically tractable organisms such as Drosophila to gain insights into the regulation and function of sleep. However, sleep phenotyping in Drosophila has largely relied on simple measures of locomotor inactivity. Here, we present FlyVISTA, a machine learning platform to perform deep phenotyping of sleep in flies. This platform comprises a high-resolution closed-loop video imaging system, coupled with a deep learning network to annotate 35 body parts, and a computational pipeline to extract behaviors from high-dimensional data. FlyVISTA reveals the distinct spatiotemporal dynamics of sleep and wake-associated microbehaviors at baseline, following administration of the sleep-inducing drug gaboxadol, and with dorsal fan-shaped body drivers. We identify a microbehavior ("haltere switch") exclusively seen during quiescence that indicates a deeper sleep stage. These results enable the rigorous analysis of sleep in Drosophila and set the stage for computational analyses of microbehaviors in quiescent animals.

PMID:40073129 | DOI:10.1126/sciadv.adq8131

Categories: Literature Watch

Accelerated Missense Mutation Identification in Intrinsically Disordered Proteins Using Deep Learning

Wed, 2025-03-12 06:00

Biomacromolecules. 2025 Mar 12. doi: 10.1021/acs.biomac.4c01124. Online ahead of print.

ABSTRACT

We use a combination of Brownian dynamics (BD) simulation results and deep learning (DL) strategies for the rapid identification of large structural changes caused by missense mutations in intrinsically disordered proteins (IDPs). We used ∼6500 IDP sequences of length 20-300 from the MobiDB database to obtain gyration radii from BD simulations of a coarse-grained single-bead amino acid model (HPS2 model) used by us and others [Dignon, G. L., PLoS Comput. Biol. 2018, 14, e1005941; Tesei, G., Proc. Natl. Acad. Sci. U.S.A. 2021, 118, e2111696118; Seth, S., J. Chem. Phys. 2024, 160, 014902] to generate the training sets for the DL algorithm. Using the gyration radii ⟨Rg⟩ of the simulated IDPs as the training set, we develop a multilayer perceptron neural network (NN) architecture that, from the sequence and the corresponding HPS model parameters, predicts the gyration radii of 33 IDPs previously studied using BD simulation with 97% accuracy. We then utilize this NN to predict the gyration radii of every permutation of missense mutations in IDPs. Our approach successfully identifies mutation-prone regions that induce significant alterations in the radius of gyration compared with the wild-type IDP sequence. We further validate the predictions by running BD simulations on a subset of the identified mutants. The neural network yields a (10^4-10^6)-fold faster computation in the search space for potentially harmful mutations. Our findings have substantial implications for the rapid identification and understanding of diseases related to missense mutations in IDPs and for the development of potential therapeutic interventions. The method can be extended to accurate predictions of other mutation effects in disordered proteins.
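
A simplified sketch of the workflow described above: featurize the sequence with per-residue coarse-grained parameters, fit an MLP regressor to simulated radii of gyration, then score every single-point mutant. The parameter table, featurization, and data below are placeholders, not the HPS2 parameters or the trained network.

```python
# Toy sketch: sequence featurization -> MLP regression of Rg -> mutation scan.
import numpy as np
from sklearn.neural_network import MLPRegressor

AA = "ACDEFGHIKLMNPQRSTVWY"
HYDROPATHY = {a: i / 19 for i, a in enumerate(AA)}      # stand-in for per-residue HPS2 parameters

def featurize(seq: str, max_len: int = 300) -> np.ndarray:
    vals = [HYDROPATHY[a] for a in seq]
    return np.pad(vals, (0, max_len - len(vals)))       # zero-pad to a fixed length

X = np.stack([featurize("".join(np.random.choice(list(AA), 50))) for _ in range(500)])
y = np.random.uniform(1.0, 5.0, 500)                    # Rg values from BD simulations (dummy)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, y)

def scan_mutants(seq: str):
    """Predicted Rg for every single missense mutant of seq."""
    return {(i, a): model.predict(featurize(seq[:i] + a + seq[i + 1:])[None])[0]
            for i in range(len(seq)) for a in AA if a != seq[i]}
```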

PMID:40072940 | DOI:10.1021/acs.biomac.4c01124

Categories: Literature Watch

An analysis of performance bottlenecks in MRI preprocessing

Wed, 2025-03-12 06:00

Gigascience. 2025 Jan 6;14:giae098. doi: 10.1093/gigascience/giae098.

ABSTRACT

Magnetic resonance imaging (MRI) preprocessing is a critical step for neuroimaging analysis. However, the computational cost of MRI preprocessing pipelines is a major bottleneck for large cohort studies and some clinical applications. While high-performance computing and, more recently, deep learning have been adopted to accelerate the computations, these techniques require costly hardware and are not accessible to all researchers. Therefore, it is important to understand the performance bottlenecks of MRI preprocessing pipelines to improve their performance. Using the Intel VTune profiler, we characterized the bottlenecks of several commonly used MRI preprocessing pipelines from the Advanced Normalization Tools (ANTs), FMRIB Software Library, and FreeSurfer toolboxes. We found that a few functions contributed most of the CPU time and that linear interpolation was the largest contributor. Data access was also a substantial bottleneck. We identified a bug in the Insight Segmentation and Registration Toolkit library that impacts the performance of the ANTs pipeline in single precision and a potential issue with the OpenMP scaling in FreeSurfer recon-all. Our results provide a reference for future efforts to optimize MRI preprocessing pipelines.

PMID:40072903 | DOI:10.1093/gigascience/giae098

Categories: Literature Watch

Insights into phosphorylation-induced influences on conformations and inhibitor binding of CDK6 through GaMD trajectory-based deep learning

Wed, 2025-03-12 06:00

Phys Chem Chem Phys. 2025 Mar 12. doi: 10.1039/d4cp04579c. Online ahead of print.

ABSTRACT

The phosphorylation of residue T177 produces a significant effect on the conformational dynamics of CDK6. Gaussian accelerated molecular dynamics (GaMD) simulations followed by deep learning (DL) are applied to explore the molecular mechanism of the phosphorylation-mediated effect on the conformational dynamics of CDK6 bound by the three inhibitors 6ZV, 6ZZ, and 0RS, of which 6ZV and 6ZZ have been tested for clinical performance. The DL analysis finds that the β-sheets, the αC helix, and the T-loop show clear differences in conformational contacts, suggesting that the T-loop plays a key role in the function of CDK6. Analyses of free energy landscapes (FELs) reveal that the phosphorylation of T177 leads to alterations of the T-loop conformation, and principal component analysis (PCA) indicates that the phosphorylation affects the fluctuation behavior of the β-sheets and the T-loop in CDK6. Interaction networks of the inhibitors with CDK6 were analyzed, revealing that 6ZV contributes more hydrogen-bonding interactions (HBIs) and hot interaction spots with CDK6. Our MM-GBSA calculations suggest that the binding ability of 6ZV to CDK6 is stronger than that of 6ZZ and 0RS. We anticipate that this work could provide useful information for further understanding CDK6 function and developing new promising inhibitors targeting CDK6.

PMID:40072875 | DOI:10.1039/d4cp04579c

Categories: Literature Watch

Fast and Stable Neonatal Brain MR Imaging Using Integrated Learned Subspace Model and Deep Learning

Wed, 2025-03-12 06:00

IEEE Trans Biomed Eng. 2025 Mar 12;PP. doi: 10.1109/TBME.2025.3541643. Online ahead of print.

ABSTRACT

OBJECTIVE: To enable fast and stable neonatal brain MR imaging by integrating learned neonate-specific subspace model and model-driven deep learning.

METHODS: Fast data acquisition is critical for neonatal brain MRI, and deep learning has emerged as an effective tool to accelerate existing fast MRI methods by leveraging prior image information. However, deep learning often requires large amounts of training data to ensure stable image reconstruction, which is not currently available for neonatal MRI applications. In this work, we addressed this problem by utilizing a subspace model-assisted deep learning approach. Specifically, we used a subspace model to capture the spatial features of neonatal brain images. The learned neonate-specific subspace was then integrated with a deep network to reconstruct high-quality neonatal brain images from very sparse k-space data.
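
As a toy illustration of the subspace idea only (not the proposed method), one can learn a low-rank spatial basis from training images and constrain reconstruction from sparsely sampled k-space to that subspace; the Cartesian mask, least-squares solve, and shapes below are simplifications and assumptions.

```python
# Toy sketch: learned spatial subspace + least-squares fit to undersampled k-space.
import numpy as np

def learn_basis(train_imgs: np.ndarray, rank: int) -> np.ndarray:
    """train_imgs: (N, H*W) flattened training images; returns an (H*W, rank) basis."""
    _, _, vt = np.linalg.svd(train_imgs, full_matrices=False)
    return vt[:rank].T

def subspace_recon(kspace: np.ndarray, mask: np.ndarray, basis: np.ndarray, shape):
    """Solve min_c ||mask * F(basis @ c) - mask * kspace||^2 for subspace coefficients c."""
    H, W = shape
    A = np.stack([(mask * np.fft.fft2(b.reshape(H, W))).ravel() for b in basis.T], axis=1)
    c, *_ = np.linalg.lstsq(A, (mask * kspace).ravel(), rcond=None)
    return np.real(basis @ c).reshape(H, W)   # image constrained to the learned subspace
```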

RESULTS: The effectiveness and robustness of the proposed method were validated using both the dHCP dataset and testing data from four independent medical centers, yielding very encouraging results. The stability of the proposed method has been confirmed with different perturbations, all showing remarkably stable reconstruction performance. The flexibility of the learned subspace was also shown when combined with other deep neural networks, yielding improved image reconstruction performance.

CONCLUSION: Fast and stable neonatal brain MR imaging can be achieved using subspace-assisted deep learning with sparse sampling. With further development, the proposed method may improve the practical utility of MRI in neonatal imaging applications.

PMID:40072865 | DOI:10.1109/TBME.2025.3541643

Categories: Literature Watch

ViDDAR: Vision Language Model-Based Task-Detrimental Content Detection for Augmented Reality

Wed, 2025-03-12 06:00

IEEE Trans Vis Comput Graph. 2025 Mar 12;PP. doi: 10.1109/TVCG.2025.3549147. Online ahead of print.

ABSTRACT

In Augmented Reality (AR), virtual content enhances user experience by providing additional information. However, improperly positioned or designed virtual content can be detrimental to task performance, as it can impair users' ability to accurately interpret real-world information. In this paper we examine two types of task-detrimental virtual content: obstruction attacks, in which virtual content prevents users from seeing real-world objects, and information manipulation attacks, in which virtual content interferes with users' ability to accurately interpret real-world information. We provide a mathematical framework to characterize these attacks and create a custom open-source dataset for attack evaluation. To address these attacks, we introduce ViDDAR (Vision language model-based Task-Detrimental content Detector for Augmented Reality), a comprehensive full-reference system that leverages Vision Language Models (VLMs) and advanced deep learning techniques to monitor and evaluate virtual content in AR environments, employing a user-edge-cloud architecture to balance performance with low latency. To the best of our knowledge, ViDDAR is the first system to employ VLMs for detecting task-detrimental content in AR settings. Our evaluation results demonstrate that ViDDAR effectively understands complex scenes and detects task-detrimental content, achieving up to 92.15% obstruction detection accuracy with a detection latency of 533 ms, and an 82.46% information manipulation content detection accuracy with a latency of 9.62 s.

PMID:40072851 | DOI:10.1109/TVCG.2025.3549147

Categories: Literature Watch
