Deep learning
Computational models for prediction of m6A sites using deep learning
Methods. 2025 Apr 21:S1046-2023(25)00108-2. doi: 10.1016/j.ymeth.2025.04.011. Online ahead of print.
ABSTRACT
RNA modifications play a crucial role in enhancing the structural and functional diversity of RNA molecules and regulating various stages of the RNA life cycle. Among these modifications, N6-Methyladenosine (m6A) is the most common internal modification in eukaryotic mRNAs and has been extensively studied over the past decade. Accurate identification of m6A modification sites is essential for understanding their function and underlying mechanisms. Traditional methods predominantly rely on machine learning techniques to recognize m6A sites, which often fail to capture the contextual features of these sites comprehensively. In this study, we comprehensively summarize previously published methods based on machine learning and deep learning. We also validate multiple deep learning approaches on a benchmark dataset, including previously underutilized methods in m6A site prediction, pre-trained models specifically designed for biological sequences, and other basic deep learning methods. Additionally, we further analyze the dataset features and interpret the models' predictions to enhance understanding. Our experimental results clearly demonstrate the effectiveness of the deep learning models, elucidating their strong potential in accurately recognizing m6A modification sites.
PMID:40268153 | DOI:10.1016/j.ymeth.2025.04.011
OrgaMeas: A pipeline that integrates all the processes of organelle image analysis
Biochim Biophys Acta Mol Cell Res. 2025 Apr 21:119964. doi: 10.1016/j.bbamcr.2025.119964. Online ahead of print.
ABSTRACT
Although image analysis has emerged as a key technology in the study of organelle dynamics, the commonly used image-processing methods, such as threshold-based segmentation and manual setting of regions of interest (ROIs), are error-prone and laborious. Here, we present a highly accurate high-throughput image analysis pipeline called OrgaMeas for measuring the morphology and dynamics of organelles. This pipeline mainly consists of two deep learning-based tools: OrgaSegNet and DIC2Cells. OrgaSegNet quantifies many aspects of different organelles by precisely segmenting them. To further process the segmented data at a single-cell level, DIC2Cells automates ROI settings through accurate segmentation of individual cells in differential interference contrast (DIC) images. The pipeline was designed to be low-cost and to require minimal coding, providing an easy-to-use platform. Thus, we believe that OrgaMeas has the potential to be readily applied to basic biomedical research, and hopefully to other practical uses such as drug discovery.
PMID:40268058 | DOI:10.1016/j.bbamcr.2025.119964
The prediction of RNA-small molecule binding sites in RNA structures based on geometric deep learning
Int J Biol Macromol. 2025 Apr 21:143308. doi: 10.1016/j.ijbiomac.2025.143308. Online ahead of print.
ABSTRACT
Biological interactions between RNA and small-molecule ligands play a crucial role in determining the specific functions of RNA, such as catalysis and folding, and are essential for guiding drug design in the medical field. Accurately predicting the binding sites of ligands within RNA structures is therefore of significant importance. To address this challenge, we introduced a computational approach named RLBSIF (RNA-Ligand Binding Surface Interaction Fingerprints) based on geometric deep learning. This model utilizes surface geometric features, including shape index and distance-dependent curvature, combined with chemical features represented by atomic charge, to comprehensively characterize RNA-ligand interactions through MaSIF-based surface interaction fingerprints. Additionally, we employ the ResNet18 network to analyze these fingerprints for identifying ligand binding pockets. Trained on 440 binding pockets, RLBSIF achieves an overall pocket-level classification accuracy of 90%. Through a full-space enumeration method, it can predict binding sites at nucleotide resolution. In two independent tests, RLBSIF outperformed competing models, demonstrating its efficacy in accurately identifying binding sites within complex molecular structures. This method shows promise for drug design and biological product development, providing valuable insights into RNA-ligand interactions and facilitating the design of novel therapeutic interventions. For access to the related source code, please visit RLBSIF on GitHub (https://github.com/ZUSTSTTLAB/RLBSIF).
PMID:40268011 | DOI:10.1016/j.ijbiomac.2025.143308
On factors that influence deep learning-based dose prediction of head and neck tumors
Phys Med Biol. 2025 Apr 23. doi: 10.1088/1361-6560/adcfeb. Online ahead of print.
ABSTRACT
OBJECTIVE: This study investigates key factors influencing deep learning-based dose prediction models for head and neck cancer radiation therapy (RT). The goal is to evaluate model accuracy, robustness, and computational efficiency, and to identify key components necessary for optimal performance.
APPROACH: We systematically analyze the impact of input and dose grid resolution, input type, loss function, model architecture, and noise on model performance. Two datasets are used: a public dataset (OpenKBP) and an in-house clinical dataset (LUMC). Model performance is primarily evaluated using two metrics: dose score and dose-volume histogram (DVH) score.
MAIN RESULTS: High-resolution inputs improve prediction accuracy (dose score and DVH score) by 8.6-13.5% compared to low resolution. Using a combination of CT, planning target volumes (PTVs), and organs-at-risk (OARs) as input significantly enhances accuracy, with improvements of 57.4-86.8% over using CT alone. Integrating mean absolute error (MAE) loss with value-based and criteria-based DVH loss functions further boosts the DVH score by 7.2-7.5% compared to MAE loss alone. In the robustness analysis, most models show minimal degradation under Poisson noise (0-0.3 Gy) but are more susceptible to adversarial noise (0.2-7.8 Gy). Notably, certain models, such as SwinUNETR, demonstrate superior robustness against adversarial perturbations.
SIGNIFICANCE: These findings highlight the importance of optimizing deep learning models and provide valuable guidance for achieving more accurate and reliable radiotherapy dose prediction.
PMID:40267938 | DOI:10.1088/1361-6560/adcfeb
FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis
Comput Biol Med. 2025 Apr 22;192(Pt A):110160. doi: 10.1016/j.compbiomed.2025.110160. Online ahead of print.
ABSTRACT
The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables the calculation of doses, pushing towards Magnetic Resonance Imaging (MRI) guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits generalisation to diverse clinical settings. Moreover, creating centralised multi-centre datasets may pose privacy concerns. To address the aforementioned issues, we introduced FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for MRI-to-sCT in brain imaging. This is among the first applications of FL for MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent dataset from a centre outside the federation. In the case of the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) values for the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively.
The analysis of the results showed acceptable performance of the federated approach, highlighting the potential of FL to enhance MRI-to-sCT generalisability and to advance safe and equitable clinical applications while fostering collaboration and preserving data privacy.
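The image-quality metrics reported above are straightforward to compute; a minimal NumPy sketch follows (the HU data range used for PSNR is an assumed value, and SSIM is omitted for brevity):

```python
import numpy as np

def mae_hu(sct, ct):
    """Mean Absolute Error in Hounsfield units between synthetic and real CT."""
    return float(np.mean(np.abs(sct.astype(float) - ct.astype(float))))

def psnr(sct, ct, data_range=2000.0):
    """Peak Signal-to-Noise Ratio in dB; data_range is an assumed HU span,
    not a value taken from the study."""
    mse = np.mean((sct.astype(float) - ct.astype(float)) ** 2)
    return float(20 * np.log10(data_range) - 10 * np.log10(mse))
```

In practice these would be evaluated per patient and summarised as median (interquartile range), as in the abstract.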
PMID:40267535 | DOI:10.1016/j.compbiomed.2025.110160
Global Trends in Artificial Intelligence and Sepsis-Related Research: A Bibliometric Analysis
Shock. 2025 Apr 22. doi: 10.1097/SHK.0000000000002598. Online ahead of print.
ABSTRACT
BACKGROUND: In the field of bibliometrics, although some studies have conducted literature reviews and analyses on sepsis, these studies mainly focus on specific areas or technologies, such as the relationship between the gut microbiome and sepsis, or immunomodulatory treatments for sepsis. However, there are still few studies that provide comprehensive bibliometric analyses of global scientific publications related to AI in sepsis research.
OBJECTIVE: The aim of this study is to assess the global trend analysis of AI applications in sepsis based on publication output, citations, co-authorship between countries, and co-occurrence of author keywords.
METHODS: A total of 4,382 papers published from 2015 to December 2024 were retrieved from the Science Citation Index Expanded (SCIE) database in the Web of Science (WOS). After restricting document types to articles and reviews and conducting eligibility checks on titles and abstracts, the final bibliometric analysis using VOSviewer and CiteSpace included 4,209 papers.
RESULTS: The number of published papers increased sharply starting in 2021, accounting for 58.14% (2,447/4,209) of all included papers. The United States and China together account for approximately 60.16% (2,532/4,209) of the total publications. Among the top 10 institutions in AI research on sepsis, seven are located in the United States. Rishikesan Kamaleswaran is the most prolific author, and PLOS ONE has received more citations in this field than any other journal. SCIENTIFIC REPORTS is the most influential journal (NP = 106, H-index = 23, IF: 3.8).
CONCLUSION: This study highlights the popular areas of AI research, provides a comprehensive overview of the research trends of AI in sepsis, and offers potential collaboration and future research prospects. To make AI-based clinical research sufficiently persuasive in sepsis practice, collaborative research is needed to improve the maturity and robustness of AI-driven models.
PMID:40267504 | DOI:10.1097/SHK.0000000000002598
Prediction of Reactivation After Antivascular Endothelial Growth Factor Monotherapy for Retinopathy of Prematurity: Multimodal Machine Learning Model Study
J Med Internet Res. 2025 Apr 23;27:e60367. doi: 10.2196/60367.
ABSTRACT
BACKGROUND: Retinopathy of prematurity (ROP) is the leading preventable cause of childhood blindness. A timely intravitreal injection of antivascular endothelial growth factor (anti-VEGF) is required to prevent retinal detachment with consequent vision impairment and loss. However, anti-VEGF has been reported to be associated with ROP reactivation. Therefore, an accurate prediction of reactivation after treatment is urgently needed.
OBJECTIVE: To develop and validate prediction models for reactivation after anti-VEGF intravitreal injection in infants with ROP using multimodal machine learning algorithms.
METHODS: Infants with ROP undergoing anti-VEGF treatment were recruited from 3 hospitals, and conventional machine learning, deep learning, and fusion models were constructed. The areas under the curve (AUCs), accuracy, sensitivity, and specificity were used to show the performances of the prediction models.
RESULTS: A total of 239 cases with anti-VEGF treatment were recruited, including 90 (37.66%) reactivation and 149 (62.34%) nonreactivation cases. The AUCs for the conventional machine learning model were 0.806 and 0.805 in the internal validation and test groups, respectively. The average AUC, sensitivity, and specificity in the test group for the deep learning model were 0.787, 0.800, and 0.570, respectively. For the fusion model, the AUC, sensitivity, and specificity in the test group were 0.822, 0.800, and 0.686, respectively.
CONCLUSIONS: We constructed 3 prediction models for ROP reactivation. The fusion model achieved the best performance. Using this prediction model, we could optimize strategies for treating ROP in infants and develop better screening plans after treatment.
PMID:40267476 | DOI:10.2196/60367
Improved Pine Wood Nematode Disease Diagnosis System Based on Deep Learning
Plant Dis. 2025 Apr 23:PDIS06241221RE. doi: 10.1094/PDIS-06-24-1221-RE. Online ahead of print.
ABSTRACT
Pine wilt disease caused by the pine wood nematode, Bursaphelenchus xylophilus, has profound implications for global forestry ecology. Conventional PCR methods require long operating times and are complicated to perform. The need for rapid and effective detection methodologies to curtail its dissemination and reduce pine felling has become more apparent. This study initially proposed the use of fluorescence recognition for the detection of pine wood nematode disease, accompanied by the development of a dedicated fluorescence detection system based on deep learning. This system possesses the capability to perform excitation, detection, as well as data analysis and transmission of test samples. In exploring fluorescence recognition methodologies, the efficacy of five conventional machine learning algorithms was compared with that of You Only Look Once version 5 (YOLOv5) and You Only Look Once version 10 (YOLOv10), in both the pre- and post-image-processing stages. Moreover, enhancements were introduced to the YOLOv5 model. The network's aptitude for discerning features across varied scales and resolutions was bolstered through the integration of Res2Net. Meanwhile, a SimAM attention mechanism was incorporated into the backbone network, and the original PANet structure was replaced by the Bi-FPN within the Head network to amplify feature fusion capabilities. The enhanced YOLOv5 model demonstrates significant improvements, particularly in the recognition of large-size images, achieving an accuracy improvement of 39.98%. The research presents a novel detection system for pine nematode detection, capable of detecting samples with DNA concentrations as low as 1 fg/μl within 20 min. This system integrates detection instruments, laptops, cloud computing, and smartphones, holding tremendous potential for field application.
PMID:40267359 | DOI:10.1094/PDIS-06-24-1221-RE
Deep Learning Model for Histologic Diagnosis of Dysplastic Barrett's Esophagus: Multisite Cohort External Validation
Am J Gastroenterol. 2025 Apr 23. doi: 10.14309/ajg.0000000000003495. Online ahead of print.
ABSTRACT
INTRODUCTION: The risk of progression to esophageal adenocarcinoma (EAC) in Barrett's esophagus (BE) increases with advancing degrees of dysplasia. There is a critical need to improve the diagnosis of BE dysplasia, given substantial interobserver variability and overcalls of dysplasia during manual community pathologist reads. We aimed to externally validate a previously cross-validated BE dysplasia diagnosis deep learning model (BEDDLM) that predicts dysplasia grade on whole slide images (WSIs).
METHODS: We digitized non-dysplastic BE (NDBE), low-grade (LGD), and high-grade dysplasia (HGD) histology slides from three external academic centers. A consensus read by two expert study pathologists was used as the criterion standard. Slide stain characteristics were normalized using cycle-generative adversarial networks (cGANs). Whole slide images were assessed by BEDDLM using an ensemble approach, combining a "You Only Look Once" (YOLO) model followed by a ResNet101 classifier model.
RESULTS: We included 489 WSIs. Consensus histopathology revealed 232 NDBE, 117 LGD, and 140 HGD WSIs. The mean age (SD) was 66.9 (11.4) years; 413 (84.7%) were male. Using the BEDDLM ensemble model, sensitivity and specificity were 73.3% (95% CI: 67.09-78.85%) and 93.4% (95% CI: 89.62-96.10%) for NDBE; 84.6% (95% CI: 76.78-90.62%) and 80.6% (95% CI: 76.26-84.54%) for LGD; and 80.7% (95% CI: 73.19-86.89%) and 94.8% (95% CI: 91.97-96.91%) for HGD, respectively. The F1 scores were 0.81, 0.69, and 0.83 for NDBE, LGD, and HGD, respectively.
DISCUSSION: Our externally validated deep learning model demonstrates substantial accuracy for the diagnosis of BE dysplasia grade on WSIs.
PMID:40267276 | DOI:10.14309/ajg.0000000000003495
Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers
J Radiat Res. 2025 Apr 23:rraf019. doi: 10.1093/jrr/rraf019. Online ahead of print.
ABSTRACT
This study aimed to clarify the effectiveness of the image-rotation technique and zooming augmentation in improving the accuracy of deep learning (DL)-based dose conversion from pencil beam (PB) to Monte Carlo (MC) algorithms in proton beam therapy (PBT). We included 85 patients with head and neck cancers. The dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. We then trained a DL model that takes a computed tomography (CT) image and the PB dose in a single-proton field as input and outputs the MC dose, applying the image-rotation technique and zooming augmentation. We evaluated the DL-based dose conversion accuracy in a single-proton field. The average γ-passing rates (3%/3 mm criterion) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model. The average range differences for R90 were -1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, -0.5 ± 1.2% for the image-rotation model, and -0.5 ± 1.1% for the data-augmentation model. Both doses and ranges were improved by the image-rotation technique and zooming augmentation, which greatly improved the DL-based dose conversion accuracy from PB to MC. These techniques can be powerful tools for improving the accuracy of DL-based dose calculation in PBT.
PMID:40267259 | DOI:10.1093/jrr/rraf019
Mixing individual and collective behaviors to predict out-of-routine mobility
Proc Natl Acad Sci U S A. 2025 Apr 29;122(17):e2414848122. doi: 10.1073/pnas.2414848122. Epub 2025 Apr 23.
ABSTRACT
Predicting human displacements is crucial for addressing various societal challenges, including urban design, traffic congestion, epidemic management, and migration dynamics. While predictive models like deep learning and Markov models offer insights into individual mobility, they often struggle with out-of-routine behaviors. Our study introduces an approach that dynamically integrates individual and collective mobility behaviors, leveraging collective intelligence to enhance prediction accuracy. Evaluating the model on millions of privacy-preserving trajectories across five US cities, we demonstrate its superior performance in predicting out-of-routine mobility, surpassing even advanced deep learning methods. The spatial analysis highlights the model's effectiveness near urban areas with a high density of points of interest, where collective behaviors strongly influence mobility. During disruptive events like the COVID-19 pandemic, our model retains predictive capabilities, unlike individual-based models. By bridging the gap between individual and collective behaviors, our approach offers transparent and accurate predictions, which are crucial for addressing contemporary mobility challenges.
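As a toy illustration of mixing individual and collective signals (the blending rule, names, and weights here are hypothetical, not the authors' model), an individual's first-order transition frequencies can be combined with population-level visit frequencies, so that the collective term dominates when the individual's history is uninformative:

```python
from collections import Counter

def predict_next(history, population_visits, alpha=0.5):
    """Blend an individual's next-location frequencies with collective
    visitation frequencies. alpha weights the individual component; a low
    alpha leans on collective behaviour, useful for out-of-routine moves."""
    current = history[-1]
    ind = Counter()
    # first-order transitions observed from the current location
    for prev, nxt in zip(history, history[1:]):
        if prev == current:
            ind[nxt] += 1
    ind_total = sum(ind.values())
    pop_total = sum(population_visits.values())
    scores = {}
    for loc in set(ind) | set(population_visits):
        p_ind = ind[loc] / ind_total if ind_total else 0.0
        p_pop = population_visits.get(loc, 0) / pop_total
        scores[loc] = alpha * p_ind + (1 - alpha) * p_pop
    return max(scores, key=scores.get)
```

A dynamic version, as in the paper, would adjust the weighting per prediction rather than fixing alpha globally.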
PMID:40267135 | DOI:10.1073/pnas.2414848122
FedOpenHAR: Federated Multitask Transfer Learning for Sensor-Based Human Activity Recognition
J Comput Biol. 2025 Apr 23. doi: 10.1089/cmb.2024.0631. Online ahead of print.
ABSTRACT
Wearable and mobile devices equipped with motion sensors offer important insights into user behavior. Machine learning and, more recently, deep learning techniques have been applied to analyze sensor data. Typically, the focus is on a single task, such as human activity recognition (HAR), and the data is processed centrally on a server or in the cloud. However, the same sensor data can be leveraged for multiple tasks, and distributed machine learning methods can be employed without the need for transmitting data to a central location. In this study, we introduce the FedOpenHAR framework, which explores federated transfer learning in a multitask setting for both sensor-based HAR and device position identification tasks. This approach utilizes transfer learning by training task-specific and personalized layers in a federated manner. The OpenHAR framework, which includes ten smaller datasets, is used for training the models. The main challenge is developing robust models that are applicable to both tasks across different datasets, which may contain only a subset of label types. Multiple experiments are conducted in the Flower federated learning environment using the DeepConvLSTM architecture. Results are presented for both federated and centralized training under various parameters and constraints. By employing transfer learning and training task-specific and personalized federated models, we achieve a higher accuracy (72.4%) compared to a fully centralized training approach (64.5%), and similar accuracy to a scenario where each client performs individual training in isolation (72.6%). However, the advantage of FedOpenHAR over individual training is that, when a new client joins with a new label type (representing a new task), it can begin training from the already existing common layer. Furthermore, if a new client wants to classify a new class in one of the existing tasks, FedOpenHAR allows training to begin directly from the task-specific layers.
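The core federated-multitask idea, averaging only the shared layers across clients while task-specific and personalized layers stay local, can be sketched as follows (a minimal illustration with NumPy arrays standing in for layer weights, not the Flower/DeepConvLSTM implementation):

```python
import numpy as np

def federated_round(clients):
    """One communication round: average only the shared 'common' layer
    across clients (FedAvg-style); each client's 'head' (task-specific,
    personalized layers) is left untouched.
    Each client is a dict: {'common': ndarray, 'head': ndarray}."""
    avg_common = np.mean([c["common"] for c in clients], axis=0)
    for c in clients:
        c["common"] = avg_common.copy()  # broadcast the global shared layer
        # c['head'] stays local: personalized per client and task
    return clients
```

A new client joining with a new label type would start from the current `common` weights, which is the advantage over isolated per-client training described in the abstract.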
PMID:40267073 | DOI:10.1089/cmb.2024.0631
Depth prediction of urban waterlogging based on BiTCN-GRU modeling
PLoS One. 2025 Apr 23;20(4):e0321637. doi: 10.1371/journal.pone.0321637. eCollection 2025.
ABSTRACT
With China's rapid urbanization and the increasing frequency of extreme weather events, heavy rainfall-induced urban waterlogging has become a persistent and pressing challenge. Accurately predicting waterlogging depth is essential for disaster prevention and loss mitigation. However, existing hydrological models often require extensive data and have complex structures, resulting in low prediction accuracy and limited generalization capabilities. To address these challenges, this paper proposes a hybrid deep learning-based approach, the BiTCN-GRU model, for predicting waterlogging depth in urban flood-prone areas. The model integrates a Bidirectional Temporal Convolutional Network (BiTCN) and Gated Recurrent Units (GRU) to enhance prediction performance: the BiTCN captures informative features of rainfall and waterlogging depth through forward and backward convolution, and its outputs serve as inputs to the GRU, which performs the depth prediction. Experimental results demonstrate the strong performance of the proposed model, achieving MAE, RMSE, and R2 values of 1.56, 3.62, and 88.31% on the Minshan Road dataset, and 3.44, 8.08, and 92.64% on the Huaihe Road dataset, respectively. Compared to models such as GBDT, LSTM, and TCN-LSTM, the BiTCN-GRU model exhibits higher accuracy in predicting waterlogging depth. This hybrid model provides a robust solution for short-term waterlogging prediction, offering valuable scientific insights and theoretical support for urban waterlogging disaster prevention and mitigation.
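The forward-and-backward convolution idea behind a BiTCN feature extractor can be illustrated with a minimal NumPy sketch (the kernel here is fixed for illustration; in the actual model the kernels are learned and the resulting features feed a trained GRU):

```python
import numpy as np

def bitcn_features(x, kernel):
    """Bidirectional temporal convolution: a causal convolution run over the
    series forward and backward, concatenated per time step. A stand-in for
    the trained BiTCN, not the paper's implementation."""
    k = len(kernel)
    # forward pass: left-pad so each output depends only on past values
    xp = np.concatenate([np.zeros(k - 1), x])
    fwd = np.convolve(xp, kernel[::-1], mode="valid")
    # backward pass: same causal convolution on the reversed series
    xr = x[::-1]
    xrp = np.concatenate([np.zeros(k - 1), xr])
    bwd = np.convolve(xrp, kernel[::-1], mode="valid")[::-1]
    return np.stack([fwd, bwd], axis=1)  # (T, 2) features fed to the GRU
```

With a summing kernel, the forward channel at time t aggregates past values (x[t] + x[t-1]) and the backward channel aggregates future ones (x[t] + x[t+1]), which is the bidirectionality the abstract describes.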
PMID:40267055 | DOI:10.1371/journal.pone.0321637
Unsupervised non-small cell lung cancer tumor segmentation using cycled generative adversarial network with similarity-based discriminator
J Appl Clin Med Phys. 2025 Apr 23:e70107. doi: 10.1002/acm2.70107. Online ahead of print.
ABSTRACT
BACKGROUND: Tumor segmentation is crucial for lung disease diagnosis and treatment. Most existing deep learning-based automatic segmentation methods rely on manually annotated data for network training.
PURPOSE: This study aims to develop an unsupervised tumor segmentation network, smic-GAN, using a similarity-driven generative adversarial network trained with a cycle strategy. The proposed method does not rely on any manual annotations and thus reduces the training-data preparation workload.
METHODS: A total of 609 CT scans of lung cancer patients are collected, of which 504 are used for training, 35 for validation, and 70 for testing. Smic-GAN is developed and trained to transform lung CT slices with tumors into synthetic images without tumors. Residual images are obtained by subtracting synthetic images from original CT slices. Thresholding, 3D median filtering, morphological erosion, and dilation operations are implemented to generate binary tumor masks from the residual images. Dice similarity, positive predictive value (PPV), sensitivity (SEN), 95% Hausdorff distance (HD95) and average surface distance (ASD) are used to evaluate the accuracy of tumor contouring.
RESULTS: The smic-GAN method achieved performance comparable to two supervised methods, UNet and Incre-MRRN, and outperformed the unsupervised cycle-GAN. The Dice value for smic-GAN is significantly better than that of cycle-GAN (74.5% ± 11.2% vs. 69.1% ± 16.0%, p < 0.05). The PPV for smic-GAN, UNet, and Incre-MRRN are 83.8% ± 21.5%, 75.1% ± 19.7%, and 78.2% ± 16.6%, respectively. The HD95 are 10.3 ± 7.7, 14.5 ± 14.6, and 6.2 ± 4.0 mm, respectively. The ASD are 3.7 ± 2.7, 4.8 ± 3.8, and 2.4 ± 1.8 mm, respectively.
CONCLUSION: The proposed smic-GAN performs comparably to the existing supervised methods UNet and Incre-MRRN. It does not rely on any manual annotations and can reduce the workload of training data preparation. It can also provide a good start for manual annotation in the training of supervised networks.
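The residual-to-mask post-processing described in the Methods can be sketched with SciPy (threshold and filter sizes here are illustrative, not the paper's settings; the paper applies a 3D median filter, whereas this 2D sketch operates on a single slice):

```python
import numpy as np
from scipy import ndimage

def residual_to_mask(residual, threshold=0.2, median_size=3, struct_iter=1):
    """Turn a residual image (original slice minus tumor-free synthesis)
    into a binary tumor mask: threshold, median-filter to remove speckle,
    then erode and dilate to clean up the boundary."""
    mask = residual > threshold
    mask = ndimage.median_filter(mask.astype(np.uint8), size=median_size)
    mask = ndimage.binary_erosion(mask, iterations=struct_iter)
    mask = ndimage.binary_dilation(mask, iterations=struct_iter)
    return mask.astype(bool)
```

The key property is that isolated noise pixels in the residual are suppressed while a contiguous tumor region survives the filtering.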
PMID:40266997 | DOI:10.1002/acm2.70107
Deep learning-based post hoc denoising for 3D volume-rendered cardiac CT in mitral valve prolapse
Int J Cardiovasc Imaging. 2025 Apr 23. doi: 10.1007/s10554-025-03403-z. Online ahead of print.
ABSTRACT
We hypothesized that deep learning-based post hoc denoising could improve the quality of cardiac CT for the 3D volume-rendered (VR) imaging of mitral valve (MV) prolapse. We aimed to evaluate the quality of denoised 3D VR images for visualizing MV prolapse and assess their diagnostic performance and efficiency. We retrospectively reviewed the cardiac CTs of consecutive patients who underwent MV repair in 2023. The original images were iteratively reconstructed and denoised with a residual dense network. 3D VR images of the "surgeon's view" were created with blood chamber transparency to display the MV leaflets. We compared the 3D VR image quality between the original and denoised images with a 100-point scoring system. Diagnostic confidence for prolapse was evaluated across eight MV segments: A1-3, P1-3, and the anterior and posterior commissures. Surgical findings were used as the reference to assess diagnostic ability with the area under the curve (AUC). The interpretation time for the denoised 3D VR images was compared with that for multiplanar reformat images. For fifty patients (median age 64 years, 30 males), denoising the 3D VR images significantly improved their image quality scores from 50 to 76 (P < .001). The AUC in identifying MV prolapse improved from 0.91 (95% CI 0.87-0.95) to 0.94 (95% CI 0.91-0.98) (P = .009). The denoised 3D VR images were interpreted five times faster than the multiplanar reformat images (P < .001). Deep learning-based denoising enhanced the quality of 3D VR imaging of the MV, improving the performance and efficiency of detecting MV prolapse on cardiac CT.
PMID:40266552 | DOI:10.1007/s10554-025-03403-z
Super-resolution deep learning reconstruction to evaluate lumbar spinal stenosis status on magnetic resonance myelography
Jpn J Radiol. 2025 Apr 23. doi: 10.1007/s11604-025-01787-5. Online ahead of print.
ABSTRACT
PURPOSE: To investigate whether super-resolution deep learning reconstruction (SR-DLR) of MR myelography aids the evaluation of lumbar spinal stenosis.
MATERIALS AND METHODS: In this retrospective study, lumbar MR myelograms of 40 patients (16 males and 24 females; mean age, 59.4 ± 31.8 years) were analyzed. Using the MR imaging data, MR myelography was separately reconstructed via SR-DLR, deep learning reconstruction (DLR), and conventional zero-filling interpolation (ZIP). Three radiologists, blinded to patient background data and MR reconstruction information, independently evaluated the image sets in terms of the following items: the number of levels affected by lumbar spinal stenosis; and cauda equina depiction, sharpness, noise, artifacts, and overall image quality.
RESULTS: The median interobserver agreement for the number of lumbar spinal stenosis levels was 0.819, 0.735, and 0.729 for SR-DLR, DLR, and ZIP images, respectively. The depiction of the cauda equina, image sharpness, noise, and overall image quality on SR-DLR images were rated significantly better than on DLR and ZIP images by all readers (p < 0.001, Wilcoxon signed-rank test). No significant differences in artifacts were observed between SR-DLR and either DLR or ZIP.
CONCLUSIONS: SR-DLR improved the image quality of lumbar MR myelographs compared to DLR and ZIP, and was associated with better interobserver agreement during assessment of lumbar spinal stenosis status.
PMID:40266548 | DOI:10.1007/s11604-025-01787-5
Simultaneous polyclonal antibody sequencing and epitope mapping by cryo-electron microscopy and mass spectrometry
Elife. 2025 Apr 23;14:RP101322. doi: 10.7554/eLife.101322.
ABSTRACT
Antibodies are a major component of adaptive immunity against invading pathogens. Here, we explore possibilities for an analytical approach to characterize the antigen-specific antibody repertoire directly from the secreted proteins in convalescent serum. This approach aims to perform simultaneous antibody sequencing and epitope mapping using a combination of single particle cryo-electron microscopy (cryoEM) and bottom-up proteomics techniques based on mass spectrometry (LC-MS/MS). We evaluate the performance of the deep-learning tool ModelAngelo in determining de novo antibody sequences directly from reconstructed 3D volumes of antibody-antigen complexes. We demonstrate that while map quality is a critical bottleneck, it is possible to sequence antibody variable domains from cryoEM reconstructions with accuracies of up to 80-90%. While the rate of errors exceeds the typical levels of somatic hypermutation, we show that the ModelAngelo-derived sequences can be used to assign the used V-genes. This provides a functional guide to assemble de novo peptides from LC-MS/MS data more accurately and improves the tolerance to a background of polyclonal antibody sequences. Following this proof-of-principle, we discuss the feasibility and future directions of this approach to characterize antigen-specific antibody repertoires.
PMID:40266252 | DOI:10.7554/eLife.101322
Multitask Deep Learning for Automated Detection of Endoleak at Digital Subtraction Angiography during Endovascular Aneurysm Repair
Radiol Artif Intell. 2025 Apr 23:e240392. doi: 10.1148/ryai.240392. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and evaluate a novel multitask deep learning framework for automated detection and localization of endoleaks at aortic digital subtraction angiography (DSA) performed during real-world endovascular aneurysm repair (EVAR) procedures for abdominal aortic aneurysm. Materials and Methods This retrospective study analyzed intraoperative aortic DSA images from EVAR patients (January 2017-December 2021). An expert panel assessed each sequence for endoleaks. Each sequence was processed into three input channels: peak density (PD), time to peak (TTP), and area under the time-density curve (AUC-TD), generating three 2D perfusion maps per patient. These maps served as input into a convolutional neural network (CNN) for binary detection (classification) and localization (regression) of endoleaks through multitask learning. Fivefold cross-validation was performed, with patients split 80:20 into training/testing for each fold. Performance metrics included AUC, F1 score, precision, recall and were compared with human experts. Results The study included 220 patients (181 male; median age, 74 years; IQR, 68-79 years). Endoleaks were visible in 111 out of 220 (50.5%) patients. The model identified and localized endoleaks with an AUC of 0.85 (SD 0.0031), F1 score of 0.78 (SD 0.21), 95% precision, and 73% recall. Compared with the procedural team (94% precision, 63% recall), it had higher values in both metrics, with an F1-score within the human observer range (0.75-0.85). Balancing regression and classification by multitask learning delivered optimal results. 
The interobserver agreement among human experts was moderate (Fleiss' Kappa = 0.404). Conclusion A novel, fully automated deep learning method accurately detected and localized endoleaks on DSA imaging from EVAR procedures. ©RSNA, 2025.
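The three input channels described above are summary statistics of each pixel's time-density curve. A minimal sketch of how they could be derived (this is an illustrative assumption, not the authors' code; the frame times and density values are invented):

```python
# Deriving the three perfusion-map channels (PD, TTP, AUC-TD) from a
# single pixel's time-density curve in a DSA sequence.

def perfusion_features(times, densities):
    """Return (peak density, time to peak, area under the time-density curve)."""
    pd_val = max(densities)                     # PD: maximum contrast density
    ttp = times[densities.index(pd_val)]        # TTP: time at which the peak occurs
    # AUC-TD via the trapezoidal rule over the sampled curve.
    auc_td = sum(
        (times[i + 1] - times[i]) * (densities[i] + densities[i + 1]) / 2
        for i in range(len(times) - 1)
    )
    return pd_val, ttp, auc_td

times = [0.0, 0.5, 1.0, 1.5, 2.0]          # frame times (s), illustrative
densities = [0.0, 40.0, 90.0, 60.0, 20.0]  # contrast density, illustrative

pd_val, ttp, auc_td = perfusion_features(times, densities)
print(pd_val, ttp, auc_td)  # 90.0 1.0 100.0
```

Applying this per pixel across the frame yields the three 2D maps that the multitask CNN consumes as channels.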
PMID:40266029 | DOI:10.1148/ryai.240392
Deep learning-based detection of generalized convulsive seizures using a wrist-worn accelerometer
Epilepsia. 2025 Apr 23. doi: 10.1111/epi.18406. Online ahead of print.
ABSTRACT
OBJECTIVE: To develop and validate a tunable, wrist-worn accelerometer-based deep learning algorithm for the automated detection of generalized or bilateral convulsive seizures (CSs), designed to be integrated with off-the-shelf smartwatches.
METHODS: We conducted a prospective multi-center study across eight European epilepsy monitoring units, collecting data from 384 patients undergoing video electroencephalography (vEEG) monitoring with a wrist-worn three-dimensional (3D) accelerometer sensor. We developed an ensemble-based convolutional neural network architecture with tunable sensitivity through quantile-based aggregation. The model, referred to as Episave, used accelerometer amplitude as input. It was trained on data from 37 patients who had 54 CSs and evaluated on an independent dataset comprising 347 patients, including 33 who had 49 CSs.
RESULTS: Cross-validation on the training set showed that optimal performance was obtained with an aggregation quantile of 60, yielding 98% sensitivity and a false alarm rate (FAR) of 1/6 days. Using this quantile on the independent test set, the model achieved 96% sensitivity (95% confidence interval [CI]: 90%-100%), a FAR of <1/8 days (95% CI: 1/9-1/7 days) with 1 false alarm per 61 nights, and a median detection latency of 26 s. One of the two missed CSs could be explained by the patient's arm, which wore the sensor, being trapped in the bed rail. Other quantiles provided up to 100% sensitivity at the cost of a greater FAR (1/2 days), or a very low FAR (1/100 days) at the cost of lower sensitivity (86%).
SIGNIFICANCE: This Phase 2 clinical validation study suggests that deep learning techniques applied to single-sensor accelerometer data can achieve high CS detection performance while enabling tunable sensitivity.
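The tunable-sensitivity idea can be sketched as follows (an assumption about the mechanism, not the published Episave code; scores and thresholds are invented): each ensemble member emits a seizure score, and the aggregation quantile decides how much weight the most confident members get. Consistent with the results above, a higher quantile tends toward higher sensitivity at the cost of more false alarms, while a lower quantile demands broader agreement.

```python
# Quantile-based aggregation of hypothetical per-model seizure scores.
import math

def quantile(scores, q):
    """q-th percentile (0-100) of scores, with linear interpolation."""
    s = sorted(scores)
    pos = (len(s) - 1) * q / 100.0
    lo, hi = math.floor(pos), math.ceil(pos)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def detect(ensemble_scores, q=60, threshold=0.6):
    """Flag a seizure if the q-th quantile of ensemble scores crosses the threshold."""
    return quantile(ensemble_scores, q) >= threshold

scores = [0.2, 0.4, 0.55, 0.7, 0.9]  # hypothetical per-model scores for one window

print(detect(scores, q=60))  # True: the 60th-quantile score is 0.61
print(detect(scores, q=20))  # False: a low quantile requires broad agreement
```

Sweeping `q` trades sensitivity against the false alarm rate without retraining the ensemble.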
PMID:40265999 | DOI:10.1111/epi.18406
Intelligent Inter- and Intra-Row Early Weed Detection in Commercial Maize Crops
Plants (Basel). 2025 Mar 11;14(6):881. doi: 10.3390/plants14060881.
ABSTRACT
Weed competition in inter- and intra-row zones presents a substantial challenge to crop productivity, with intra-row weeds posing a particularly severe threat: their proximity to crops and higher occlusion rates increase their negative impact on yields. This study examines the efficacy of advanced deep learning architectures, namely Faster R-CNN, RT-DETR, and YOLOv11, in the accurate identification of weeds and crops within commercial maize fields. A comprehensive dataset was compiled under varied field conditions, focusing on three major weed species: Cyperus rotundus L., Echinochloa crus-galli L., and Solanum nigrum L. YOLOv11 demonstrated superior performance among the evaluated models, achieving a mean average precision (mAP) of 97.5% while operating in real time at 34 frames per second (FPS). The Faster R-CNN and RT-DETR models achieved mAPs of 91.9% and 97.2%, respectively, with processing capabilities of 11 and 27 FPS. Subsequent hardware evaluations identified YOLOv11m as the most viable solution for field deployment, demonstrating high precision with a mAP of 94.4% and lower energy consumption. The findings emphasize the feasibility of employing these advanced models for efficient inter- and intra-row weed management, particularly for early-stage weed detection with minimal crop interference. This study underscores the potential of integrating state-of-the-art deep learning technologies into agricultural machinery to enhance weed control, reduce operational costs, and promote sustainable farming practices.
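The mAP figures reported above rest on the standard intersection-over-union (IoU) test that detectors such as Faster R-CNN, RT-DETR, and YOLOv11 use to match predicted boxes to annotations. A minimal sketch (the box coordinates are illustrative, not from the study's dataset):

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).

def iou(a, b):
    """Intersection area divided by union area of boxes a and b."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)   # hypothetical predicted weed box
truth = (20, 20, 60, 60)  # hypothetical annotated box
print(iou(pred, truth))   # 900 / 2300, about 0.39
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), so the overlap above would be scored as a miss; mAP then averages precision over recall levels and classes.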
PMID:40265804 | DOI:10.3390/plants14060881