Deep learning

Torg-Pavlov ratio qualification to diagnose developmental cervical spinal stenosis based on HRViT neural network

Thu, 2025-04-24 06:00

BMC Musculoskelet Disord. 2025 Apr 23;26(1):405. doi: 10.1186/s12891-025-08667-z.

ABSTRACT

BACKGROUND: Developing computer-assisted methods to measure the Torg-Pavlov ratio (TPR), defined as the ratio of the sagittal diameter of the cervical spinal canal to the sagittal diameter of the corresponding vertebral body on lateral radiographs, can reduce subjective influence and speed up processing. The TPR is a critical diagnostic parameter for developmental cervical spinal stenosis (DCSS), as it normalizes variations in radiographic magnification and provides a cost-effective alternative to CT/MRI in resource-limited settings. No study focusing on automatic TPR measurement has been reported. The aim was to develop a deep learning-based model for automatically measuring the TPR and then to establish the TPR distribution in asymptomatic Chinese individuals.

METHODS: A total of 1623 lateral cervical X-ray images from normal individuals were collected; 1466 and 157 images were used as the training and testing datasets, respectively. We adopted a neural network called High-Resolution Vision Transformer (HRViT), which was trained on the annotated X-ray image dataset to automatically locate the landmarks and calculate the TPR. The accuracy of the TPR measurement was evaluated using the mean absolute error (MAE), intra-class correlation coefficient (ICC), r value, and Bland-Altman plots.
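For orientation, a minimal sketch of the ratio computation follows, assuming the landmark model returns the anterior and posterior margins of the vertebral body and of the spinal canal at each level as 2-D pixel coordinates; the dictionary keys and coordinates are hypothetical, not the paper's interface.

```python
import numpy as np

def torg_pavlov_ratio(landmarks: dict) -> float:
    """Compute the TPR for one vertebral level from four 2-D landmarks (pixels).

    Hypothetical keys: 'body_anterior'/'body_posterior' bound the vertebral-body
    sagittal diameter; 'canal_anterior'/'canal_posterior' bound the spinal-canal
    sagittal diameter.  Because the TPR is a ratio of two distances measured on
    the same film, pixel spacing and radiographic magnification cancel out.
    """
    canal = np.linalg.norm(landmarks["canal_posterior"] - landmarks["canal_anterior"])
    body = np.linalg.norm(landmarks["body_posterior"] - landmarks["body_anterior"])
    return canal / body

# Example with made-up coordinates for a single level:
lm = {
    "body_anterior":  np.array([112.0, 240.0]),
    "body_posterior": np.array([168.0, 244.0]),
    "canal_anterior": np.array([168.0, 244.0]),   # canal starts at posterior body margin
    "canal_posterior": np.array([218.0, 248.0]),
}
print(f"TPR ~ {torg_pavlov_ratio(lm):.2f}")       # compare against the study's DCSS cutoff
```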

RESULTS: The TPR at C2-C7 was 1.26, 0.92, 0.90, 0.93, 0.92, and 0.89, respectively. The MAE between HRViT and surgeon R1 was 0.01; between surgeon R1 and surgeon R2, 0.17; and between surgeon R1 and surgeon R3, 0.17. The accuracy of HRViT for DCSS diagnosis was 84.1%, substantially higher than that of surgeon R2 (57.3%) and surgeon R3 (56.7%). The consistency of TPR measurements between HRViT and surgeon R1 was 0.77-0.9 (ICC) and 0.78-0.9 (r value).

CONCLUSIONS: We explored a deep-learning algorithm for automated measurement of the TPR on cervical lateral radiographs to diagnose DCSS, which achieved outstanding performance comparable to that of senior clinicians.

PMID:40269821 | DOI:10.1186/s12891-025-08667-z

Categories: Literature Watch

Comparison of machine learning models with conventional statistical methods for prediction of percutaneous coronary intervention outcomes: a systematic review and meta-analysis

Thu, 2025-04-24 06:00

BMC Cardiovasc Disord. 2025 Apr 23;25(1):310. doi: 10.1186/s12872-025-04746-0.

ABSTRACT

INTRODUCTION: Percutaneous coronary intervention (PCI) has been the main treatment of coronary artery disease (CAD). In this review, we aimed to compare the performance of machine learning (ML) vs. logistic regression (LR) models in predicting different outcomes after PCI.

METHODS: Studies using ML or deep learning (DL) models to predict mortality, MACE, in-hospital bleeding, and acute kidney injury (AKI) after PCI or primary PCI were included. Articles were excluded if they did not provide a c-statistic, solely used ML models for feature selection, were not in English, or only used logistic or LASSO regression models. Best-performing ML and LR-based models (LR model or conventional risk score) from the same studies were pooled separately to directly compare the performance of ML versus LR. Risk of bias was assessed using the PROBAST and CHARMS checklists.

RESULTS: A total of 59 studies were included. Meta-analysis showed that ML models resulted in a higher c-statistic compared to LR for long-term mortality (0.84 vs. 0.79, P = 0.178), short-term mortality (0.91 vs. 0.85, P = 0.149), bleeding (0.81 vs. 0.77, P = 0.261), acute kidney injury (AKI; 0.81 vs. 0.75, P = 0.373), and major adverse cardiac events (MACE; 0.85 vs. 0.75, P = 0.406). PROBAST analysis showed that 93% of long-term mortality, 70% of short-term mortality, 89% of bleeding, 69% of AKI, and 86% of MACE studies had a high risk of bias.
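For context, a minimal sketch of how per-study c-statistics can be pooled and compared follows, using DerSimonian-Laird random-effects weighting; the input numbers are illustrative, and the review's actual pooling software and weighting scheme may differ.

```python
import numpy as np
from scipy.stats import norm

def pool_c_statistics(auc, se):
    """Random-effects (DerSimonian-Laird) pooling of per-study c-statistics.

    auc : per-study c-statistics; se : their standard errors.
    Returns the pooled estimate and its standard error.
    """
    auc, se = np.asarray(auc, float), np.asarray(se, float)
    w = 1.0 / se**2                                   # fixed-effect weights
    mu_fe = np.sum(w * auc) / np.sum(w)
    q = np.sum(w * (auc - mu_fe) ** 2)                # Cochran's Q
    df = len(auc) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * auc) / np.sum(w_re)
    return mu_re, np.sqrt(1.0 / np.sum(w_re))

def compare_pools(mu1, se1, mu2, se2):
    """Two-sided z-test comparing two independently pooled estimates."""
    z = (mu1 - mu2) / np.sqrt(se1**2 + se2**2)
    return 2 * norm.sf(abs(z))

# Illustrative numbers only:
ml_mu, ml_se = pool_c_statistics([0.86, 0.83, 0.80], [0.02, 0.03, 0.04])
lr_mu, lr_se = pool_c_statistics([0.80, 0.78, 0.77], [0.02, 0.03, 0.04])
print(ml_mu, lr_mu, compare_pools(ml_mu, ml_se, lr_mu, lr_se))
```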

CONCLUSION: No statistically significant difference was found between the ML and LR models. In addition, the high risk of bias in ML studies and their complexity of interpretation undermine their validity and may impede their adoption in clinical settings.

PMID:40269704 | DOI:10.1186/s12872-025-04746-0

Categories: Literature Watch

Machine learning assessment of zoonotic potential in avian influenza viruses using PB2 segment

Thu, 2025-04-24 06:00

BMC Genomics. 2025 Apr 23;26(1):395. doi: 10.1186/s12864-025-11589-8.

ABSTRACT

BACKGROUND: Influenza A virus (IAV) is a major global health threat, causing seasonal epidemics and occasional pandemics. Particularly, Influenza A viruses from avian species pose significant zoonotic threats, with PB2 adaptation serving as a critical first step in cross-species transmission. A comprehensive risk assessment framework based on PB2 sequences is necessary, which should encompass detailed analyses of specific residues and mutations while maintaining sufficient generality for application to non-PB2 segments.

RESULTS: In this study, we developed two complementary approaches: a regression-based model for accurately distinguishing among risk groups, and a SHAP-based risk assessment model for more meaningful risk analyses. For the regression-based risk models, we compared various methodologies, including tree ensemble methods, conventional regression models, and deep learning architectures. The optimized regression model, combined with SHAP value analysis, identified and ranked individual residues contributing to zoonotic potential. The SHAP-based risk model enabled intra-class analyses within the zoonotic risk assessment framework and quantified risk yields from specific mutations.
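As a rough illustration of the regression-plus-SHAP workflow described here (not the authors' code), the sketch below trains a Random Forest regressor on integer-encoded PB2 sequences and ranks residue positions by mean absolute SHAP value; the encoding, data, and risk targets are placeholders.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative stand-in for encoded PB2 sequences: one column per residue
# position with integer-encoded amino acids; y is a placeholder risk target.
rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(500, 759))          # PB2 is ~759 residues long
y = rng.random(500)                               # placeholder risk scores

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer gives per-sample, per-position SHAP values; averaging their
# absolute values ranks residue positions by contribution to predicted risk.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("top residue positions (1-based):", ranking[:10] + 1)
```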

CONCLUSION: Experimental analyses demonstrated that the Random Forest regression model outperformed other models in most cases, and we validated the target value settings for risk regression through ablation studies. Our SHAP-based analysis identified key residues (271A, 627K, 591R, 588A, 292I, 684S, 684A, 81M, 199S, and 368Q) and mutations (T271A, Q368R/K, E627K, Q591R, A588T/I/V, and I292V/T) critical for zoonotic risk assessment. Using the SHAP-based risk assessment model, we found that influenza A viruses from Phasianidae showed elevated zoonotic risk scores compared to those from other avian species. Additionally, mutations I292V/T, Q368R, A588T/I, V598A/I/T, and E/V627K were identified as significant mutations in the Phasianidae. These PB2-focused quantitative methods provide a robust and generalizable framework both for rapid screening of the zoonotic potential of avian viruses and for analytical quantification of the risks associated with specific residues or mutations.

PMID:40269678 | DOI:10.1186/s12864-025-11589-8

Categories: Literature Watch

An effective model of hybrid adaptive deep learning with attention mechanism for healthcare data analysis in blockchain-based secure transmission over IoT

Thu, 2025-04-24 06:00

Network. 2025 Apr 23:1-39. doi: 10.1080/0954898X.2025.2492375. Online ahead of print.

ABSTRACT

Existing approaches suffer from scalability and security issues when transmitting data, and a distributed design is required to address these issues and comply with security regulations. Blockchain, a recently emerged technology, provides a platform for secure transmission and has been introduced as an alternative solution to complex and challenging security issues in data storage. Thus, an intelligent blockchain-assisted IoT architecture is presented in this work for secure healthcare data transmission. The first aim of our model is to detect malware attacks in IoT networks: attack-detection data were gathered and fed as input to the Hybrid Adaptive Deep Learning method with Attention Mechanism (HADL-AM), and the FUPOA algorithm performs parameter tuning for further enhancement. A privacy-preservation model is employed to secure the healthcare data by generating an optimal key, which is likewise optimized using FUPOA; the secured data can then be stored in the blockchain to increase data integrity and privacy. Optimal feature selection is also performed with the FUPOA approach, and the selected features are fed to the HADL-AM to predict the data. Experimental analyses were performed and compared against different approaches.

PMID:40269520 | DOI:10.1080/0954898X.2025.2492375

Categories: Literature Watch

Using deep learning models to decode emotional states in horses

Thu, 2025-04-24 06:00

Sci Rep. 2025 Apr 23;15(1):13154. doi: 10.1038/s41598-025-95853-7.

ABSTRACT

In this study, we explore machine learning models for predicting emotional states in ridden horses. We manually label the images to train the models in a supervised manner. We perform data exploration and use different cropping methods, mainly based on YOLO and Faster R-CNN, to create two new datasets: (1) a cropped-body dataset and (2) a cropped-head dataset. We train various convolutional neural network (CNN) models on both cropped and uncropped datasets and compare their performance in emotion prediction of ridden horses. Despite the cropped-head dataset lacking important regions like the tail (commonly annotated by experts), it yields the best results with an accuracy of 87%, precision of 79%, and recall of 97%. Furthermore, we update our models using various techniques, such as transfer learning and fine-tuning, to further improve their performance. Finally, we employ three interpretation methods to analyze the internal workings of our models, finding that LIME effectively identifies features similar to those used by experts for annotation.
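As a rough illustration of the LIME interpretation step (not the study's exact pipeline), the sketch below explains a single image prediction from a torchvision ResNet; the backbone, preprocessing, and the placeholder image are assumptions.

```python
import numpy as np
import torch
from torchvision import models, transforms
from lime import lime_image

model = models.resnet18(weights="IMAGENET1K_V1").eval()   # stand-in classifier
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def predict_fn(images: np.ndarray) -> np.ndarray:
    """LIME passes a batch of HxWx3 arrays; return class probabilities."""
    batch = torch.stack([preprocess(img.astype(np.uint8)) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)  # placeholder image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn, top_labels=1,
                                         hide_color=0, num_samples=500)
# Superpixels most responsible for the top predicted class:
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=True, num_features=5)
```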

PMID:40269006 | DOI:10.1038/s41598-025-95853-7

Categories: Literature Watch

An enhanced ensemble defense framework for boosting adversarial robustness of intrusion detection systems

Wed, 2025-04-23 06:00

Sci Rep. 2025 Apr 23;15(1):14177. doi: 10.1038/s41598-025-94023-z.

ABSTRACT

Machine learning (ML) and deep neural networks (DNN) have emerged as powerful tools for enhancing intrusion detection systems (IDS) in cybersecurity. However, recent studies have revealed their vulnerability to adversarial attacks, where maliciously perturbed traffic samples can deceive trained DNN-based detectors, leading to incorrect classifications and compromised system integrity. While numerous defense mechanisms have been proposed to mitigate these adversarial threats, many fail to achieve a balance between robustness against adversarial attacks, maintaining high detection accuracy on clean data, and preserving the functional integrity of traffic flow features. To address these limitations, this research investigates and integrates a comprehensive ensemble of adversarial defense strategies, implemented in two key phases. During the training phase, adversarial training, label smoothing, and Gaussian augmentation are employed to enhance the model's resilience against adversarial perturbations. Additionally, a proactive preprocessing defense strategy is deployed during the testing phase, utilizing a denoising sparse autoencoder to cleanse adversarial input samples before they are fed into the IDS classifier. Comparative evaluations demonstrate that the proposed ensemble defense framework significantly improves the adversarial robustness and classification performance of DNN-based IDS classifiers. Experimental results, validated on the CICIDS2017 and CICIDS2018 datasets, show that the proposed approach achieves aggregated prediction accuracies of 87.34% and 98.78% under majority voting and weighted average schemes, respectively. These findings underscore the effectiveness of the proposed framework in combating adversarial threats while maintaining robust detection capabilities, thereby advancing the state-of-the-art in adversarial defense for intrusion detection systems.
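As a rough illustration of two of the training-phase defenses named above (not the paper's implementation), the sketch below combines Gaussian augmentation of flow features with label-smoothed cross-entropy; the classifier, feature count, and noise scale are assumptions.

```python
import torch
import torch.nn as nn

class MLPClassifier(nn.Module):
    """Simple stand-in IDS classifier over tabular flow features."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )
    def forward(self, x):
        return self.net(x)

model = MLPClassifier(n_features=78, n_classes=2)          # ~78 CICIDS-style flow features (assumption)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)       # label-smoothing defense
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x: torch.Tensor, y: torch.Tensor, sigma: float = 0.05) -> float:
    noisy_x = x + sigma * torch.randn_like(x)               # Gaussian augmentation
    loss = criterion(model(noisy_x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```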

PMID:40268978 | DOI:10.1038/s41598-025-94023-z

Categories: Literature Watch

Single Molecule Localization Super-resolution Dataset for Deep Learning with Paired Low-resolution Images

Wed, 2025-04-23 06:00

Sci Data. 2025 Apr 23;12(1):682. doi: 10.1038/s41597-025-04979-w.

ABSTRACT

Deep learning super-resolution microscopy has advanced rapidly in recent years. Super-resolution images acquired by single molecule localization microscopy (SMLM) are ideal sources for high-quality datasets. However, the scarcity of public datasets limits the development of deep learning methods. Here, we describe a biological image dataset, DL-SMLM, which provides paired low-resolution fluorescence images and super-resolution SMLM data for training super-resolution models. DL-SMLM consists of six different subcellular structures, including microtubules, lumen and membrane of the endoplasmic reticulum (ER), clathrin-coated pits (CCPs), outer membrane of mitochondria (OMM), and inner membrane of mitochondria (IMM). There are 188 sets of raw SMLM data and 100 signal levels for each low-resolution image. This allows software developers to generate thousands of training pairs through data segmentation. The performance of the imaging system was further evaluated using DNA origami samples. Finally, we demonstrated examples of super-resolution models trained using data from DL-SMLM, highlighting the effectiveness of DL-SMLM for developing deep learning super-resolution microscopy.

PMID:40268962 | DOI:10.1038/s41597-025-04979-w

Categories: Literature Watch

Semantic Consistency Network with Edge Learner and Connectivity Enhancer for Cervical Tumor Segmentation from Histopathology Images

Wed, 2025-04-23 06:00

Interdiscip Sci. 2025 Apr 23. doi: 10.1007/s12539-025-00691-w. Online ahead of print.

ABSTRACT

Accurate tumor grading and regional identification of cervical tumors are important for diagnosis and prognosis. Traditional manual microscopy methods suffer from time-consuming, labor-intensive, and subjective bias problems, so tumor segmentation methods based on deep learning are gradually becoming a hotspot in current research. Cervical tumors have diverse morphologies, which leads to low similarity between the mask edge and ground-truth edge of existing semantic segmentation models. Moreover, the texture and geometric arrangement features of normal tissues and tumors are highly similar, which causes poor pixel connectivity in the mask of the segmentation model. To this end, we propose an end-to-end semantic consistency network with an edge learner and a connectivity enhancer, termed ERNet. First, the edge learner consists of a stacked shallow convolutional neural network, so it can effectively enhance the ability of ERNet to learn and represent polymorphic tumor edges. Second, the connectivity enhancer learns detailed information and contextual information of tumor images, so it can enhance the pixel connectivity of the masks. Finally, edge features and pixel-level features are adaptively coupled, and the segmentation results are additionally optimized by the tumor classification task as a whole. The results show that, compared with those of other state-of-the-art segmentation models, the structural similarity and the mean intersection over union of ERNet are improved to 88.17% and 83.22%, respectively, which reflects the excellent edge similarity and pixel connectivity of the proposed model. Finally, we conduct a generalization experiment on laryngeal tumor images. Therefore, ERNet has strong potential for clinical application and practical value.

PMID:40268829 | DOI:10.1007/s12539-025-00691-w

Categories: Literature Watch

A CVAE-based generative model for generalized B<sub>1</sub> inhomogeneity corrected chemical exchange saturation transfer MRI at 5 T

Wed, 2025-04-23 06:00

Neuroimage. 2025 Apr 21:121202. doi: 10.1016/j.neuroimage.2025.121202. Online ahead of print.

ABSTRACT

Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) has emerged as a powerful tool to image endogenous or exogenous macromolecules. CEST contrast depends strongly on the radiofrequency irradiation B1 level, so spatial inhomogeneity of the B1 field biases CEST measurements. Conventional interpolation-based B1 correction methods require CEST dataset acquisition under multiple B1 levels, substantially prolonging scan time. A recently proposed supervised deep learning approach reconstructs the B1 inhomogeneity-corrected CEST effect only at the same B1 as the training data, hindering its generalization to other B1 levels. In this study, we proposed a Conditional Variational Autoencoder (CVAE)-based generative model to generate B1 inhomogeneity-corrected Z spectra from a single CEST acquisition. The model was trained on pixel-wise source-target paired Z spectra acquired under multiple B1 levels, with the target B1 as a conditional variable. Numerical simulation and healthy human brain imaging at 5 T were performed to evaluate the performance of the proposed model for B1 inhomogeneity-corrected CEST MRI. Results showed that the generated B1-corrected Z spectra agreed well with the reference averaged from regions with subtle B1 inhomogeneity. Moreover, the performance of the proposed model in correcting B1 inhomogeneity in the APT CEST effect, as measured by both MTRasym and [Formula: see text] at 3.5 ppm, was superior to conventional Z/contrast-B1-interpolation and other deep learning methods, especially when the target B1 was not included in the sampling or training dataset. In summary, the proposed model allows generalized B1 inhomogeneity correction, benefiting quantitative CEST MRI in clinical routines.
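As a rough illustration of the generative idea (not the authors' architecture), the sketch below conditions a small VAE on the target B1 value so that a measured Z-spectrum can be mapped to a B1-corrected one; the number of saturation offsets, layer sizes, and loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Maps a measured Z-spectrum plus a target B1 value to a corrected Z-spectrum."""
    def __init__(self, n_offsets: int = 51, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_offsets + 1, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, 128), nn.ReLU(), nn.Linear(128, n_offsets))

    def forward(self, z_spectrum, target_b1):
        h = self.encoder(torch.cat([z_spectrum, target_b1], dim=1))
        mu, logvar = h.chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        recon = self.decoder(torch.cat([z, target_b1], dim=1))    # B1 as conditional variable
        return recon, mu, logvar

def cvae_loss(recon, target, mu, logvar, beta: float = 1e-3):
    mse = nn.functional.mse_loss(recon, target)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + beta * kld

# Minimal usage with placeholder tensors (batch of 8 pixel-wise spectra):
model = CVAE()
spec, b1 = torch.rand(8, 51), torch.rand(8, 1)
recon, mu, logvar = model(spec, b1)
loss = cvae_loss(recon, spec, mu, logvar)
```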

PMID:40268259 | DOI:10.1016/j.neuroimage.2025.121202

Categories: Literature Watch

End-to-end deep learning-based motion correction and reconstruction for accelerated whole-heart joint T(1)/T(2) mapping

Wed, 2025-04-23 06:00

Magn Reson Imaging. 2025 Apr 21:110396. doi: 10.1016/j.mri.2025.110396. Online ahead of print.

ABSTRACT

PURPOSE: To accelerate 3D whole-heart joint T1/T2 mapping for myocardial tissue characterization using an end-to-end deep learning algorithm for joint motion estimation and model-based motion-corrected reconstruction of multi-contrast undersampled data.

METHODS: A free-breathing high-resolution motion-compensated 3D joint T1/T2 water/fat sequence is employed. The sequence consists of the acquisition of four interleaved volumes with 2-echo encoding, resulting in eight volumes with different contrasts. An end-to-end non-rigid motion-corrected reconstruction network is used to estimate high quality motion-corrected reconstructions from the eight multi-contrast undersampled data for subsequent joint T1/T2 mapping. Reconstruction with the proposed approach was compared against state-of-the-art motion-corrected HD-PROST reconstruction.

RESULTS: The proposed approach yields images with good visual agreement compared to the reference reconstructions. The comparison of the quantitative values in the T1 and T2 maps showed the absence of systematic errors, and a small bias of -6.35 ms and -1.8 ms, respectively. The proposed reconstruction time was 24 seconds in comparison to 2.5 hours with motion-corrected HD-PROST, resulting in a reconstruction speed-up of over 370 times.

CONCLUSION: In conclusion, this study presents a promising method for efficient whole-heart myocardial tissue characterization. Specifically, the research highlights the potential of the multi-contrast end-to-end deep learning algorithm for joint motion estimation and model-based motion-corrected reconstruction of multi-contrast undersampled data. The findings underscore its ability to compute T1 and T2 values with good agreement when compared to the reference motion-corrected HD-PROST method, while substantially reducing reconstruction time.

PMID:40268172 | DOI:10.1016/j.mri.2025.110396

Categories: Literature Watch

Computational models for prediction of m6A sites using deep learning

Wed, 2025-04-23 06:00

Methods. 2025 Apr 21:S1046-2023(25)00108-2. doi: 10.1016/j.ymeth.2025.04.011. Online ahead of print.

ABSTRACT

RNA modifications play a crucial role in enhancing the structural and functional diversity of RNA molecules and regulating various stages of the RNA life cycle. Among these modifications, N6-Methyladenosine (m6A) is the most common internal modification in eukaryotic mRNAs and has been extensively studied over the past decade. Accurate identification of m6A modification sites is essential for understanding their function and underlying mechanisms. Traditional methods predominantly rely on machine learning techniques to recognize m6A sites, which often fail to capture the contextual features of these sites comprehensively. In this study, we comprehensively summarize previously published methods based on machine learning and deep learning. We also validate multiple deep learning approaches on a benchmark dataset, including previously underutilized methods in m6A site prediction, pre-trained models specifically designed for biological sequences, and other basic deep learning methods. Additionally, we further analyze the dataset features and interpret the models' predictions to enhance understanding. Our experimental results clearly demonstrate the effectiveness of the deep learning models, elucidating their strong potential in accurately recognizing m6A modification sites.
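As a rough illustration of one of the basic deep learning baselines discussed above (not a method from the reviewed papers), the sketch below applies a 1-D CNN to one-hot encoded RNA windows centred on a candidate adenosine; the window length and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq: str) -> torch.Tensor:
    """One-hot encode an RNA sequence as a (4, L) tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).float().T

class M6ACNN(nn.Module):
    """Minimal 1-D CNN scoring the probability that the central A is m6A-modified."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):                       # x: (batch, 4, window_length)
        return torch.sigmoid(self.classifier(self.features(x).squeeze(-1)))

model = M6ACNN()
window = "GGACU" * 8 + "A"                      # 41-nt placeholder window
prob = model(one_hot(window).unsqueeze(0))      # P(m6A) for the candidate site
```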

PMID:40268153 | DOI:10.1016/j.ymeth.2025.04.011

Categories: Literature Watch

OrgaMeas: A pipeline that integrates all the processes of organelle image analysis

Wed, 2025-04-23 06:00

Biochim Biophys Acta Mol Cell Res. 2025 Apr 21:119964. doi: 10.1016/j.bbamcr.2025.119964. Online ahead of print.

ABSTRACT

Although image analysis has emerged as a key technology in the study of organelle dynamics, the commonly used image-processing methods, such as threshold-based segmentation and manual setting of regions of interest (ROIs), are error-prone and laborious. Here, we present a highly accurate high-throughput image analysis pipeline called OrgaMeas for measuring the morphology and dynamics of organelles. This pipeline mainly consists of two deep learning-based tools: OrgaSegNet and DIC2Cells. OrgaSegNet quantifies many aspects of different organelles by precisely segmenting them. To further process the segmented data at a single-cell level, DIC2Cells automates ROI settings through accurate segmentation of individual cells in differential interference contrast (DIC) images. The pipeline was designed to be low-cost and to require minimal coding, providing an easy-to-use platform. Thus, we believe that OrgaMeas has potential to be readily applied to basic biomedical research, and hopefully to other practical uses such as drug discovery.

PMID:40268058 | DOI:10.1016/j.bbamcr.2025.119964

Categories: Literature Watch

The prediction of RNA-small molecule binding sites in RNA structures based on geometric deep learning

Wed, 2025-04-23 06:00

Int J Biol Macromol. 2025 Apr 21:143308. doi: 10.1016/j.ijbiomac.2025.143308. Online ahead of print.

ABSTRACT

Biological interactions between RNA and small-molecule ligands play a crucial role in determining the specific functions of RNA, such as catalysis and folding, and are essential for guiding drug design in the medical field. Accurately predicting the binding sites of ligands within RNA structures is therefore of significant importance. To address this challenge, we introduced a computational approach named RLBSIF (RNA-Ligand Binding Surface Interaction Fingerprints) based on geometric deep learning. This model utilizes surface geometric features, including shape index and distance-dependent curvature, combined with chemical features represented by atomic charge, to comprehensively characterize RNA-ligand interactions through MaSIF-based surface interaction fingerprints. Additionally, we employ the ResNet18 network to analyze these fingerprints for identifying ligand binding pockets. Trained on 440 binding pockets, RLBSIF achieves an overall pocket-level classification accuracy of 90%. Through a full-space enumeration method, it can predict binding sites at nucleotide resolution. In two independent tests, RLBSIF outperformed competing models, demonstrating its efficacy in accurately identifying binding sites within complex molecular structures. This method shows promise for drug design and biological product development, providing valuable insights into RNA-ligand interactions and facilitating the design of novel therapeutic interventions. For access to the related source code, please visit RLBSIF on GitHub (https://github.com/ZUSTSTTLAB/RLBSIF).

PMID:40268011 | DOI:10.1016/j.ijbiomac.2025.143308

Categories: Literature Watch

On factors that influence deep learning-based dose prediction of head and neck tumors

Wed, 2025-04-23 06:00

Phys Med Biol. 2025 Apr 23. doi: 10.1088/1361-6560/adcfeb. Online ahead of print.

ABSTRACT

OBJECTIVE: This study investigates key factors influencing deep learning-based dose prediction models for head and neck cancer radiation therapy (RT). The goal is to evaluate model accuracy, robustness, and computational efficiency, and to identify key components necessary for optimal performance.

APPROACH: We systematically analyze the impact of input and dose grid resolution, input type, loss function, model architecture, and noise on model performance. Two datasets are used: a public dataset (OpenKBP) and an in-house clinical dataset (LUMC). Model performance is primarily evaluated using two metrics: dose score and dose-volume histogram (DVH) score.

MAIN RESULTS: High-resolution inputs improve prediction accuracy (dose score and DVH score) by 8.6-13.5% compared to low resolution. Using a combination of CT, planning target volumes (PTVs), and organs-at-risk (OARs) as input significantly enhances accuracy, with improvements of 57.4-86.8% over using CT alone. Integrating mean absolute error (MAE) loss with value-based and criteria-based DVH loss functions further boosts the DVH score by 7.2-7.5% compared to MAE loss alone. In the robustness analysis, most models show minimal degradation under Poisson noise (0-0.3 Gy) but are more susceptible to adversarial noise (0.2-7.8 Gy). Notably, certain models, such as SwinUNETR, demonstrate superior robustness against adversarial perturbations.

SIGNIFICANCE: These findings highlight the importance of optimizing deep learning models and provide valuable guidance for achieving more accurate and reliable radiotherapy dose prediction.
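As a rough illustration of combining MAE with a DVH-style loss (not the study's exact formulation), the sketch below adds a soft, sigmoid-based DVH term to the voxel-wise MAE; the thresholds, steepness, and weighting are assumptions.

```python
import torch

def dvh_loss(pred, target, mask, thresholds, steepness: float = 10.0):
    """Soft DVH loss: compare the fraction of a structure receiving >= each threshold.

    pred, target: dose tensors of shape (B, 1, D, H, W) in Gy;
    mask: binary structure mask of the same shape.
    """
    loss = 0.0
    for t in thresholds:
        v_pred = (torch.sigmoid(steepness * (pred - t)) * mask).sum() / mask.sum()
        v_true = (torch.sigmoid(steepness * (target - t)) * mask).sum() / mask.sum()
        loss = loss + (v_pred - v_true).abs()
    return loss / len(thresholds)

def combined_loss(pred, target, mask, w_dvh: float = 0.1):
    """Voxel-wise MAE plus a weighted soft-DVH term (illustrative weighting)."""
    mae = (pred - target).abs().mean()
    return mae + w_dvh * dvh_loss(pred, target, mask, thresholds=[20.0, 40.0, 60.0])
```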

PMID:40267938 | DOI:10.1088/1361-6560/adcfeb

Categories: Literature Watch

FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis

Wed, 2025-04-23 06:00

Comput Biol Med. 2025 Apr 22;192(Pt A):110160. doi: 10.1016/j.compbiomed.2025.110160. Online ahead of print.

ABSTRACT

The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables the calculation of doses, pushing towards Magnetic Resonance Imaging (MRI) guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits generalisation capabilities to diverse clinical settings. Moreover, creating centralised multi-centre datasets may pose privacy concerns. To address the aforementioned issues, we introduced FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for MRI-to-sCT in brain imaging. This is among the first applications of FL for MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent dataset from a centre outside the federation. In the case of the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) for the Structural Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively. The analysis of the results showed acceptable performances of the federated approach, thus highlighting the potential of FL to enhance MRI-to-sCT to improve generalisability and advancing safe and equitable clinical applications while fostering collaboration and preserving data privacy.
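As a rough illustration of the cross-silo aggregation step (not the FedSynthCT-Brain code), the sketch below performs FedAvg-style weighted averaging of locally trained model weights; the weighting by local dataset size and the commented training loop are assumptions.

```python
import copy
import torch

def federated_average(local_states, local_sizes):
    """Average a list of state_dicts, weighting each centre by its dataset size.

    local_states: list of model.state_dict() from each centre after local training;
    local_sizes: number of training samples at each centre.
    Integer buffers (e.g. BatchNorm counters) are averaged as floats and cast back
    on load, which is acceptable for a sketch.
    """
    total = float(sum(local_sizes))
    avg = copy.deepcopy(local_states[0])
    for key in avg:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(local_states, local_sizes)
        )
    return avg

# One federated round (sketch, assuming a hypothetical train_locally helper):
# local_models = [train_locally(copy.deepcopy(global_model), centre_data)
#                 for centre_data in centres]
# new_state = federated_average([m.state_dict() for m in local_models], sizes)
# global_model.load_state_dict(new_state)
```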

PMID:40267535 | DOI:10.1016/j.compbiomed.2025.110160

Categories: Literature Watch

Global Trends in Artificial Intelligence and Sepsis-Related Research: A Bibliometric Analysis

Wed, 2025-04-23 06:00

Shock. 2025 Apr 22. doi: 10.1097/SHK.0000000000002598. Online ahead of print.

ABSTRACT

BACKGROUND: In the field of bibliometrics, although some studies have conducted literature reviews and analyses on sepsis, these studies mainly focus on specific areas or technologies, such as the relationship between the gut microbiome and sepsis, or immunomodulatory treatments for sepsis. However, there are still few studies that provide comprehensive bibliometric analyses of global scientific publications related to AI in sepsis research.

OBJECTIVE: The aim of this study is to assess the global trend analysis of AI applications in sepsis based on publication output, citations, co-authorship between countries, and co-occurrence of author keywords.

METHODS: A total of 4,382 papers published from 2015 to December 2024 were retrieved and downloaded from the SCIE database in WOS. After selecting the document types as articles and reviews, and conducting eligibility checks on titles and abstracts, the final bibliometric analysis using VOSviewer and CiteSpace included 4,209 papers.

RESULTS: The number of published papers increased sharply starting in 2021, accounting for 58.14% (2,447/4,209) of all included papers. The United States and China together account for approximately 60.16% (2,532/4,209) of the total publications. Among the top 10 institutions in AI research on sepsis, seven are located in the United States. Rishikesan Kamaleswaran is the top contributing author, and PLOS ONE has received more citations in this field than other journals, while SCIENTIFIC REPORTS is the most influential journal (NP = 106, H-index = 23, IF: 3.8).

CONCLUSION: This study highlights the popular areas of AI research, provides a comprehensive overview of the research trends of AI in sepsis, and offers potential collaboration and future research prospects. To make AI-based clinical research sufficiently persuasive in sepsis practice, collaborative research is needed to improve the maturity and robustness of AI-driven models.

PMID:40267504 | DOI:10.1097/SHK.0000000000002598

Categories: Literature Watch

Prediction of Reactivation After Antivascular Endothelial Growth Factor Monotherapy for Retinopathy of Prematurity: Multimodal Machine Learning Model Study

Wed, 2025-04-23 06:00

J Med Internet Res. 2025 Apr 23;27:e60367. doi: 10.2196/60367.

ABSTRACT

BACKGROUND: Retinopathy of prematurity (ROP) is the leading preventable cause of childhood blindness. A timely intravitreal injection of antivascular endothelial growth factor (anti-VEGF) is required to prevent retinal detachment with consequent vision impairment and loss. However, anti-VEGF has been reported to be associated with ROP reactivation. Therefore, an accurate prediction of reactivation after treatment is urgently needed.

OBJECTIVE: To develop and validate prediction models for reactivation after anti-VEGF intravitreal injection in infants with ROP using multimodal machine learning algorithms.

METHODS: Infants with ROP undergoing anti-VEGF treatment were recruited from 3 hospitals, and conventional machine learning, deep learning, and fusion models were constructed. The areas under the curve (AUCs), accuracy, sensitivity, and specificity were used to show the performances of the prediction models.

RESULTS: A total of 239 cases with anti-VEGF treatment were recruited, including 90 (37.66%) reactivation and 149 (62.34%) nonreactivation cases. The AUCs for the conventional machine learning model were 0.806 and 0.805 in the internal validation and test groups, respectively. The average AUC, sensitivity, and specificity of the deep learning model in the test group were 0.787, 0.800, and 0.570, respectively. The specificity, AUC, and sensitivity of the fusion model in the test group were 0.686, 0.822, and 0.800, respectively.

CONCLUSIONS: We constructed 3 prediction models for ROP reactivation. The fusion model achieved the best performance. Using this prediction model, we could optimize strategies for treating ROP in infants and develop better screening plans after treatment.

PMID:40267476 | DOI:10.2196/60367

Categories: Literature Watch

Improved Pine Wood Nematode Disease Diagnosis System Based on Deep Learning

Wed, 2025-04-23 06:00

Plant Dis. 2025 Apr 23:PDIS06241221RE. doi: 10.1094/PDIS-06-24-1221-RE. Online ahead of print.

ABSTRACT

Pine wilt disease caused by the pine wood nematode, Bursaphelenchus xylophilus, has profound implications for global forestry ecology. Conventional PCR methods require long operating times and are complicated to perform. The need for rapid and effective detection methodologies to curtail its dissemination and reduce pine felling has become more apparent. This study is the first to propose the use of fluorescence recognition for the detection of pine wood nematode disease, accompanied by the development of a dedicated deep learning-based fluorescence detection system. This system can perform excitation and detection of test samples as well as data analysis and transmission. In exploring fluorescence recognition methodologies, the efficacy of five conventional machine learning algorithms was compared with that of You Only Look Once version 5 (YOLOv5) and You Only Look Once version 10 (YOLOv10), both in the pre- and post-image-processing stages. Moreover, enhancements were introduced to the YOLOv5 model. The network's ability to discern features across varied scales and resolutions was bolstered through the integration of Res2Net. Meanwhile, a SimAM attention mechanism was incorporated into the backbone network, and the original PANet structure was replaced by a Bi-FPN within the Head network to amplify feature fusion capabilities. The enhanced YOLOv5 model demonstrates significant improvements, particularly in the recognition of large-size images, achieving an accuracy improvement of 39.98%. The research presents a novel system for pine wood nematode detection, capable of detecting samples with DNA concentrations as low as 1 fg/μl within 20 min. This system integrates detection instruments, laptops, cloud computing, and smartphones, holding tremendous potential for field application.
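As a rough illustration of the SimAM module mentioned above (the integration points and epsilon value are assumptions, not the paper's exact configuration), the sketch below implements the standard parameter-free SimAM attention that could be inserted into a YOLOv5 backbone.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention: weights each neuron by its inverse energy."""
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, H, W)
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation per neuron
        v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance estimate
        e_inv = d / (4 * (v + self.eps)) + 0.5               # inverse energy
        return x * torch.sigmoid(e_inv)

# Applying the module to a backbone feature map:
features = torch.randn(1, 64, 80, 80)
attended = SimAM()(features)
```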

PMID:40267359 | DOI:10.1094/PDIS-06-24-1221-RE

Categories: Literature Watch

Deep Learning Model for Histologic Diagnosis of Dysplastic Barrett's Esophagus: Multisite Cohort External Validation

Wed, 2025-04-23 06:00

Am J Gastroenterol. 2025 Apr 23. doi: 10.14309/ajg.0000000000003495. Online ahead of print.

ABSTRACT

INTRODUCTION: The risk of progression to esophageal adenocarcinoma (EAC) in Barrett's esophagus (BE) increases with advancing degrees of dysplasia. There is a critical need to improve the diagnosis of BE dysplasia, given substantial interobserver variability and overcalls of dysplasia during manual community pathologist reads. We aimed to externally validate a previously cross-validated BE dysplasia diagnosis deep learning model (BEDDLM) that predicts dysplasia grade on whole slide images (WSIs).

METHODS: We digitized non-dysplastic BE (NDBE), low-grade (LGD), and high-grade dysplasia (HGD) histology slides from three external academic centers. A consensus read by two expert study pathologists was used as the criterion standard. Slide stain characteristics were normalized using cycle-generative adversarial networks (cGANs). Whole slide images were assessed by BEDDLM using an ensemble approach, combining a "You Only Look Once" (YOLO) model followed by a ResNet101 classifier model.

RESULTS: We included 489 WSIs. Consensus histopathology revealed 232 NDBE, 117 LGD, and 140 HGD WSIs. The mean age (SD) was 66.9 (11.4) years; 413 (84.7%) were males. Using the BEDDLM ensemble model, sensitivity and specificity were 73.3% (95% CI: 67.09-78.85%) and 93.4% (95% CI: 89.62-96.10%) for NDBE; 84.6% (95% CI: 76.78-90.62%) and 80.6% (95% CI: 76.26-84.54%) for LGD; and 80.7% (95% CI: 73.19-86.89%) and 94.8% (95% CI: 91.97-96.91%) for HGD, respectively. The F1 scores were 0.81, 0.69, and 0.83 for NDBE, LGD, and HGD, respectively.

DISCUSSION: Our externally validated deep learning model demonstrates substantial accuracy for the diagnosis of BE dysplasia grade on WSIs.

PMID:40267276 | DOI:10.14309/ajg.0000000000003495

Categories: Literature Watch

Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers

Wed, 2025-04-23 06:00

J Radiat Res. 2025 Apr 23:rraf019. doi: 10.1093/jrr/rraf019. Online ahead of print.

ABSTRACT

This study aimed to clarify the effectiveness of an image-rotation technique and zooming augmentation for improving the accuracy of deep learning (DL)-based dose conversion from the pencil beam (PB) algorithm to Monte Carlo (MC) in proton beam therapy (PBT). We included 85 patients with head and neck cancer. The patient dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. We trained a DL model that takes a computed tomography (CT) image and the PB dose of a single-proton field as input and outputs the MC dose, applying the image-rotation technique and zooming augmentation. We evaluated the DL-based dose conversion accuracy in a single-proton field. The average γ-passing rates (3%/3 mm criterion) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model. Moreover, the average range differences for R90 were -1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, -0.5 ± 1.2% for the image-rotation model, and -0.5 ± 1.1% for the data-augmentation model. Both the doses and the ranges were improved by the image-rotation technique and zooming augmentation, which greatly improved the DL-based dose conversion accuracy from PB to MC. These techniques can be powerful tools for improving DL-based dose calculation accuracy in PBT.
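As a rough illustration of the two augmentations named above (not the authors' implementation), the sketch below applies the same random rotation and zoom to the paired CT, PB-dose, and MC-dose volumes so their voxel-wise correspondence is preserved; the angle and zoom ranges are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment_pair(ct, pb_dose, mc_dose, rng=np.random.default_rng()):
    """Apply one random in-plane rotation and isotropic zoom to all paired volumes.

    In practice the zoomed volumes would be cropped or padded back to the
    original grid before being fed to the network.
    """
    angle = rng.uniform(-15.0, 15.0)                    # image-rotation technique
    scale = rng.uniform(0.9, 1.1)                       # zooming augmentation
    out = []
    for vol in (ct, pb_dose, mc_dose):
        v = rotate(vol, angle, axes=(1, 2), reshape=False, order=1)
        v = zoom(v, (1.0, scale, scale), order=1)
        out.append(v)
    return out

# Placeholder volumes (depth, height, width):
ct = np.random.rand(64, 128, 128).astype(np.float32)
pb = np.random.rand(64, 128, 128).astype(np.float32)
mc = np.random.rand(64, 128, 128).astype(np.float32)
ct_a, pb_a, mc_a = augment_pair(ct, pb, mc)
```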

PMID:40267259 | DOI:10.1093/jrr/rraf019

Categories: Literature Watch
