Deep learning
Bond-centric modular design of protein assemblies
Nat Mater. 2025 Jul 31. doi: 10.1038/s41563-025-02297-5. Online ahead of print.
ABSTRACT
Directional interactions that generate regular coordination geometries are a powerful means of guiding molecular and colloidal self-assembly, but implementing such high-level interactions with proteins remains challenging due to their complex shapes and intricate interface properties. Here we describe a modular approach to protein nanomaterial design inspired by the rich chemical diversity that can be generated from the small number of atomic valencies. We design protein building blocks using deep learning-based generative tools, incorporating regular coordination geometries and tailorable bonding interactions that enable the assembly of diverse closed and open architectures guided by simple geometric principles. Experimental characterization confirms the successful formation of more than 20 multicomponent polyhedral protein cages, two-dimensional arrays and three-dimensional protein lattices, with a high (10%-50%) success rate and electron microscopy data closely matching the corresponding design models. Due to modularity, individual building blocks can assemble with different partners to generate distinct regular assemblies, resulting in an economy of parts and enabling the construction of reconfigurable networks for designer nanomaterials.
PMID:40745093 | DOI:10.1038/s41563-025-02297-5
Investigating the Impact of the Stationarity Hypothesis on Heart Failure Detection using Deep Convolutional Scattering Networks and Machine Learning
Sci Rep. 2025 Jul 31;15(1):27902. doi: 10.1038/s41598-025-13510-5.
ABSTRACT
Detection of Cardiovascular Diseases (CVDs) has become crucial, as the World Health Organization (WHO) identifies CVDs as the leading cause of death globally. Moreover, the death rate due to CVDs is expected to rise over the coming years. One of the most valuable contributions that could be made to the cardiology field is the development of a reliable model for early detection of CVDs. This paper presents a new approach aimed at classifying ECG signals into three classes: Normal Sinus Rhythm (NSR), Arrhythmia Rhythm (ARR), and Congestive Heart Failure (CHF). The proposed approach is based on the stationarity hypothesis of rhythms within the same patient's ECG signals, which assumes that if arrhythmias are found in one part of a long ECG signal, they are likely to occur in other parts of the same signal as well. Several contributions are developed with the aim of enhancing automated detection of CVDs under the inter-patient paradigm, including the use of a deep convolutional Wavelet Scattering Network (WSN) in conjunction with different Machine Learning (ML) models and the stationarity hypothesis of ECG signals. In particular, a WSN combined with a Linear Discriminant (LD) classifier and the stationarity hypothesis was implemented to improve classification results under the inter-patient paradigm. The model achieved impressive results, with an overall accuracy of 99.61%, precision of 99.65%, sensitivity of 99.35%, specificity of 99.74%, and F1-score of 99.49% across all three classes.
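As an illustration of the kind of pipeline described above, the following hedged sketch pairs wavelet scattering features with a linear discriminant classifier; the kymatio package, the segment length, and the placeholder data are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): wavelet scattering features + linear
# discriminant classification of ECG segments, assuming the kymatio package.
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

T = 2 ** 13                       # samples per ECG segment (hypothetical length)
X = np.random.randn(300, T)       # placeholder segments; real data would be ECG
y = np.random.randint(0, 3, 300)  # 0=NSR, 1=ARR, 2=CHF (placeholder labels)

scattering = Scattering1D(J=6, shape=T, Q=4)  # invariance scale, filters per octave
Sx = scattering(X)                            # (n_segments, n_paths, T / 2**J)
features = Sx.mean(axis=-1)                   # time-average -> fixed-length vectors

X_tr, X_te, y_tr, y_te = train_test_split(features, y, stratify=y, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```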
PMID:40745013 | DOI:10.1038/s41598-025-13510-5
A novel flexible identity-net with diffusion models for painting-style generation
Sci Rep. 2025 Jul 31;15(1):27896. doi: 10.1038/s41598-025-12434-4.
ABSTRACT
Art's unique style and creativity are essential in defining a work's identity, conveying emotions, and shaping audience perception. Recent advancements in diffusion models have revolutionized art design, animation, and gaming, particularly in generating original artwork and visual identities. However, traditional creative processes face challenges such as slow innovation, high costs, and limited scalability. Consequently, deep learning has emerged as a promising solution for enhancing painting-style creative design. In this paper, we present the Painting-Style Design Assistant Network (PDANet), a groundbreaking network architecture designed for advanced style transformation. Our work is supported by the Painting-42 dataset, a meticulously curated collection of 4055 artworks from 42 illustrious Chinese painters, capturing the aesthetic nuances of Chinese painting and offering invaluable design references. Additionally, we introduce a lightweight Identity-Net, designed to enhance large-scale text-to-image (T2I) models by aligning internal knowledge with external control signals. This innovative Identity-Net seamlessly integrates image prompts into the U-Net encoder, enabling the generation of diverse and consistent images. Through extensive quantitative and qualitative evaluations, our approach has demonstrated superior performance compared to existing methods, producing high-quality, versatile content with broad applicability across various creative domains. Our work not only advances the field of AI-driven art but also offers a new paradigm for the future of creative design. The code and data are available at https://github.com/aigc-hi/PDANet .
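To make the adapter idea concrete, here is a hedged sketch of a lightweight module that encodes an image prompt into multi-scale residuals which could be added to the encoder stages of a frozen text-to-image U-Net; module names, channel sizes, and the injection scheme are illustrative assumptions, not the paper's Identity-Net.

```python
# Hedged sketch of an adapter in the spirit of Identity-Net: a small network that
# turns an image prompt into multi-scale residuals for a T2I U-Net encoder.
import torch
import torch.nn as nn

class IdentityAdapter(nn.Module):
    def __init__(self, channels=(64, 128, 256, 512)):  # illustrative widths
        super().__init__()
        self.stem = nn.Conv2d(3, channels[0], kernel_size=3, padding=1)
        blocks, c_in = [], channels[0]
        for c_out in channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),  # downsample per stage
                nn.SiLU(),
                nn.Conv2d(c_out, c_out, 3, padding=1),
            ))
            c_in = c_out
        self.blocks = nn.ModuleList(blocks)

    def forward(self, image_prompt):
        x = self.stem(image_prompt)
        residuals = []
        for block in self.blocks:
            x = block(x)
            residuals.append(x)  # one residual per U-Net encoder stage
        return residuals

# The residuals would be added to the matching encoder feature maps of a frozen
# text-to-image U-Net during denoising (injection details are an assumption here).
adapter = IdentityAdapter()
feats = adapter(torch.randn(1, 3, 512, 512))
print([tuple(f.shape) for f in feats])
```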
PMID:40744991 | DOI:10.1038/s41598-025-12434-4
Data Collection for Automatic Depression Identification in Spanish Speakers Using Deep Learning Algorithms: Protocol for a Case-Control Study
JMIR Res Protoc. 2025 Jul 31;14:e60439. doi: 10.2196/60439.
ABSTRACT
BACKGROUND: Depression is a mental health condition that affects millions of people worldwide. Although common, it remains difficult to diagnose due to its heterogeneous symptomatology. Mental health questionnaires are currently the most widely used method for depression screening; however, they are inherently subjective because they rely on patients' self-assessment. Researchers have therefore sought an accurate way of identifying depression through an objective biomarker. Recent developments in neural networks and deep learning have made it possible to classify depression through computational analysis of voice recordings. However, this approach depends heavily on the availability of datasets for training and testing deep learning models, and these are scarce, with very few languages represented. This study proposes a protocol for the collection of a new dataset for deep learning research on voice-based depression classification, featuring Spanish speakers, professional and smartphone microphones, and a high-quality recording standard.
OBJECTIVE: This work aims to create a high-quality voice depression dataset by recording Spanish speakers with a professional microphone and strict audio quality standards. The data are captured by a smartphone microphone as well for further research in the use of smartphone recordings for deep learning depression classification.
METHODS: Our methodology involves the strategic collection of depressed and nondepressed voice recordings. A minimum participation of 60 subjects was established and 2 health centers were selected to gather data. A total of 3 types of data are collected: voice recordings, depression labels (using the Patient Health Questionnaire-9), and additional data that could potentially influence speech. Recordings are captured with professional-grade and smartphone microphones simultaneously to ensure versatility and practical applicability. Several considerations and guidelines are described to ensure high audio quality and avoid potential bias in deep learning research.
RESULTS: This data collection effort immediately enables new research topics on depression classification. Some potential uses include deep learning research on Spanish speakers, an evaluation of the impact of audio quality on developing audio classification models, and an evaluation of the applicability of voice depression classification technology on smartphone apps.
CONCLUSIONS: This research marks a significant step toward the objective and automated classification of depression in voice recordings. By focusing on the underrepresented demographic of Spanish speakers, the inclusion of smartphone recordings, and addressing the current data limitations in audio quality, this study lays the groundwork for future advancements in deep learning-driven mental health diagnosis.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/60439.
PMID:40744439 | DOI:10.2196/60439
Deep learning-driven brain tumor classification and segmentation using non-contrast MRI
Sci Rep. 2025 Jul 30;15(1):27831. doi: 10.1038/s41598-025-13591-2.
ABSTRACT
This study aims to enhance the accuracy and efficiency of MRI-based brain tumor diagnosis by leveraging deep learning (DL) techniques applied to multichannel MRI inputs. MRI data were collected from 203 subjects, including 100 normal cases and 103 cases with 13 distinct brain tumor types. Non-contrast T1-weighted (T1w) and T2-weighted (T2w) images were combined with their average to form RGB three-channel inputs, enriching the representation for model training. Several convolutional neural network (CNN) architectures were evaluated for tumor classification, while fully convolutional networks (FCNs) were employed for tumor segmentation. Standard preprocessing, normalization, and training procedures were rigorously followed. The RGB fusion of T1w, T2w, and their average significantly enhanced model performance. The classification task achieved a top accuracy of 98.3% using the Darknet53 model, and segmentation attained a mean Dice score of 0.937 with ResNet50. These results demonstrate the effectiveness of multichannel input fusion and model selection in improving brain tumor analysis. While not yet integrated into clinical workflows, this approach holds promise for future development of DL-assisted decision-support tools in radiological practice.
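A minimal sketch of the described input fusion, stacking non-contrast T1w, T2w, and their voxel-wise average as three channels; the normalization and array names are placeholders rather than the study's exact preprocessing.

```python
# Hedged sketch: build an RGB-like three-channel input [T1w, T2w, mean(T1w, T2w)].
import numpy as np

def fuse_t1_t2(t1w: np.ndarray, t2w: np.ndarray) -> np.ndarray:
    """Return an (H, W, 3) float array from co-registered T1w and T2w slices."""
    def norm(img):
        img = img.astype(np.float32)
        return (img - img.min()) / (img.max() - img.min() + 1e-8)
    t1n, t2n = norm(t1w), norm(t2w)
    return np.stack([t1n, t2n, (t1n + t2n) / 2.0], axis=-1)

rgb = fuse_t1_t2(np.random.rand(256, 256), np.random.rand(256, 256))
print(rgb.shape)  # (256, 256, 3) — ready for a pretrained CNN such as Darknet53
```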
PMID:40745383 | DOI:10.1038/s41598-025-13591-2
A comprehensive multifaceted technical evaluation framework for implementation of auto-segmentation models in radiotherapy
Commun Med (Lond). 2025 Jul 31;5(1):319. doi: 10.1038/s43856-025-01048-6.
ABSTRACT
BACKGROUND: Manual contouring of organs at risk in radiotherapy is time-consuming, taking 1-4 hours per case. Automatic segmentation using deep learning has emerged as a promising solution, with many commercial options now available. However, these methods require rigorous validation before clinical use, and current evaluation approaches lack consistency and comprehensive assessment across publications.
METHODS: We developed the Comprehensive Multifaceted Technical Evaluation framework, which integrates four key assessment components: quantitative geometric measures, qualitative expert evaluation, time efficiency analysis, and dosimetric evaluation. We demonstrated this framework using an in-house automatic segmentation model for brain organs at risk, trained on 100 cases and evaluated by 8 radiation oncology experts from 4 institutions. The evaluation included geometric accuracy measurements, expert ratings of clinical acceptability, time-saving assessments, and dosimetric impact analysis comparing treatment plans.
RESULTS: Here we show that our automatic segmentation model achieved an overall geometric accuracy of 0.78 and outperformed manual inter-rater variability. Expert evaluation revealed that 88% of automatically segmented structures were clinically acceptable with only minor adjustments needed. The evaluation and adjustment process averaged 22 minutes compared to 69 minutes for manual contouring. Dosimetric analysis showed minimal impact on treatment plans, with average dose differences of 0.30 Gray for mean dose and 0.23 Gray for maximum dose.
CONCLUSIONS: The framework provides a robust method for validating automatic segmentation models in radiotherapy. However, establishing standardized benchmarks and consensus guidelines within the radiotherapy community remains essential for proper clinical implementation and comparison of different segmentation tools.
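For the quantitative geometric component described above, overlap metrics such as the Dice coefficient are computed between automatic and manual contours; the following generic sketch (not the framework's code) shows the per-structure calculation from binary masks.

```python
# Generic Dice overlap between an automatic and a manual contour (binary masks).
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    denom = a.sum() + m.sum()
    return 2.0 * inter / denom if denom else 1.0

auto = np.zeros((64, 64, 64), bool); auto[20:40, 20:40, 20:40] = True
manual = np.zeros_like(auto);        manual[22:42, 20:40, 20:40] = True
print(f"Dice = {dice(auto, manual):.3f}")
```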
PMID:40745381 | DOI:10.1038/s43856-025-01048-6
Development of a novel deep learning method that transforms tabular input variables into images for the prediction of SLD
Sci Rep. 2025 Jul 31;15(1):28024. doi: 10.1038/s41598-025-12900-z.
ABSTRACT
Steatotic liver disease (SLD), formerly known as fatty liver disease, has an estimated prevalence of 30-38% in adults. Detection of SLD is important, since prompt initiation of treatment can halt disease progression, reduce adverse outcomes, and lower the economic burden associated with the disease. We report the development of a novel Deep Learning (DL) method for the prediction of SLD, which transforms the input variables from tabular data into images, with the goal of using the pattern recognition power of DL models to reach the best prediction performance. The dataset used in this study includes records from 2,999 patients. The data of each patient, originally represented as a vector, are converted into an image by replicating each variable across rows and columns. Our DL models achieve better results than traditional ML models at various levels of sensitivity and specificity: one DL model reached a sensitivity of 0.9497, a specificity of 0.6417, and an AUCROC of 0.8662. Our DL models also achieve significantly higher AUCROC values than both the traditional ML models and the Hepatic Steatosis Index (HSI).
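A hedged sketch of the described tabular-to-image transform, in which each variable of a patient vector is replicated across rows and columns to form a blocky image that a CNN can consume; the block size and grid layout are assumptions, not the paper's exact mapping.

```python
# Hedged sketch: tile each tabular variable into a square patch on a grid image.
import numpy as np

def vector_to_image(x: np.ndarray, block: int = 8) -> np.ndarray:
    """Replicate each of the n variables into a (block x block) patch on a square grid."""
    n = x.size
    side = int(np.ceil(np.sqrt(n)))
    padded = np.zeros(side * side, dtype=np.float32)
    padded[:n] = x
    grid = padded.reshape(side, side)
    return np.kron(grid, np.ones((block, block), dtype=np.float32))  # replicate values

patient = np.random.rand(12)   # e.g. 12 clinical/laboratory variables (placeholder)
img = vector_to_image(patient)
print(img.shape)               # (32, 32) image fed to the DL model
```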
PMID:40745379 | DOI:10.1038/s41598-025-12900-z
Higher glass transition temperatures reduce thermal stress cracking in aqueous solutions relevant to cryopreservation
Sci Rep. 2025 Jul 31;15(1):27903. doi: 10.1038/s41598-025-13295-7.
ABSTRACT
Cryopreservation by vitrification could transform fields ranging from organ transplantation to wildlife conservation, but critical physical challenges remain in scaling this approach from microscopic to macroscopic systems, including the threat of fracture due to accumulated thermal stresses. Here, we provide experimental and computational evidence that these stresses are strongly dependent on the glass transition temperature (T_g) of the vitrification solution, a property which, given the narrow band of chemistries represented within common vitrification solutions, is seldom investigated in thermomechanical analyses. We develop a custom cryomacroscope platform to image glass cracking in four aqueous solution chemistries spanning > 50 °C in T_g; we process these images using semantic segmentation deep learning algorithms to analyze the extent of cracking in each; and we perform thermomechanical finite element simulations to disentangle the multiphysics effects driving the observed dependency, providing new insights to inform design of next-generation vitrification solutions that minimize thermal cracking risks.
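As a hedged illustration of the image-analysis step, a semantic-segmentation mask can be reduced to a scalar cracking extent, for example the cracked fraction of the glassy region; the class labels below are hypothetical, not the study's annotation scheme.

```python
# Hedged sketch: quantify cracking from a per-pixel segmentation mask of one frame.
import numpy as np

CRACK, GLASS = 1, 2                                    # hypothetical class labels

def crack_fraction(mask: np.ndarray) -> float:
    glassy = np.isin(mask, (CRACK, GLASS)).sum()       # pixels inside the vitrified region
    return float((mask == CRACK).sum()) / glassy if glassy else 0.0

mask = np.random.choice([0, 1, 2], size=(512, 512), p=[0.3, 0.1, 0.6])  # toy mask
print(f"cracked fraction of glass: {crack_fraction(mask):.2%}")
```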
PMID:40745362 | DOI:10.1038/s41598-025-13295-7
Artificial intelligence-integrated video analysis of vessel area changes and instrument motion for microsurgical skill assessment
Sci Rep. 2025 Jul 31;15(1):27898. doi: 10.1038/s41598-025-13522-1.
ABSTRACT
Mastering microsurgical skills is essential for neurosurgical trainees. Video-based analysis of target tissue changes and surgical instrument motion provides an objective, quantitative method for assessing microsurgical proficiency, potentially enhancing training and patient safety. This study evaluates the effectiveness of an artificial intelligence (AI)-based video analysis model in assessing microsurgical performance and examines the correlation between AI-derived parameters and specific surgical skill components. A dual AI framework was developed, integrating a semantic segmentation model for artificial blood vessel analysis with an instrument tip-tracking algorithm. These models quantified dynamic vessel area fluctuation, tissue deformation error count, instrument path distance, and normalized jerk index during a single-stitch end-to-side anastomosis task performed by 14 surgeons with varying experience levels. The AI-derived parameters were validated against traditional criteria-based rating scales assessing instrument handling, tissue respect, efficiency, suture handling, suturing technique, operation flow, and overall performance. Rating scale scores correlated with microsurgical experience, exhibiting a bimodal distribution that classified performance into good and poor groups. Video-based parameters showed strong correlations with various skill categories. Receiver operating characteristic analysis demonstrated that combining these parameters improved the discrimination of microsurgical performance. The proposed method effectively captures technical microsurgical skills and can assess performance.
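One of the motion parameters, the normalized jerk index, can be computed from tracked tip coordinates; the sketch below follows one common dimensionless-jerk convention and is not necessarily the normalization used in this study.

```python
# Hedged sketch: a dimensionless (normalized) jerk from tracked instrument-tip positions.
import numpy as np

def normalized_jerk(xy: np.ndarray, fs: float) -> float:
    """xy: (n_frames, 2) tip positions; fs: frames per second."""
    dt = 1.0 / fs
    vel = np.gradient(xy, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = (len(xy) - 1) * dt
    path = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    integral = np.sum(np.sum(jerk ** 2, axis=1)) * dt
    return float(np.sqrt(0.5 * integral * duration ** 5 / (path ** 2 + 1e-12)))

track = np.cumsum(np.random.randn(600, 2), axis=0)   # placeholder: 20 s at 30 fps
print(normalized_jerk(track, fs=30.0))
```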
PMID:40745359 | DOI:10.1038/s41598-025-13522-1
VCPC: virtual contrastive constraint and prototype calibration for few-shot class-incremental plant disease classification
Plant Methods. 2025 Jul 31;21(1):105. doi: 10.1186/s13007-025-01423-3.
ABSTRACT
Deep learning demonstrates strong generalisation capabilities, driving substantial progress in plant disease recognition systems. However, current methods are predominantly optimised for offline implementation. Real-time crop surveillance systems encounter streaming images containing novel disease classes under few-shot conditions, demanding incrementally adaptive models. This capability is called few-shot class-incremental learning (FSCIL). Here, we introduce VCPC (virtual contrastive constraint with prototype calibration), enabling sustainable plant disease classification under FSCIL conditions. Specifically, our method consists of two phases: a base class training phase and an incremental training phase. During the base class training phase, the virtual contrastive class constraints (VCC) module is utilised to enhance learning from base classes and allocate sufficient embedding space for new plant disease images. In the incremental training phase, the prototype calibration embedding (PCE) module is introduced to distinguish newly arriving plant disease categories from previous ones, thereby optimising the prototype space and enhancing the recognition accuracy of new categories. We evaluated our approach on the PlantVillage dataset, and the experimental results under both 5-way 5-shot and 3-way 5-shot settings demonstrate that our method achieves state-of-the-art accuracy. We also achieved promising performance on the publicly available CIFAR-100 dataset. Furthermore, the visualisation results confirm that our strategy effectively supports fine-grained, sustainable disease recognition, highlighting the potential of our approach to advance FSCIL in plant disease monitoring.
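For readers unfamiliar with the prototype machinery underlying FSCIL methods, the following minimal sketch builds class prototypes as mean few-shot embeddings and classifies queries by nearest prototype; the VCC and PCE modules themselves are not reproduced here.

```python
# Minimal prototype-based few-shot classification sketch (generic, not VCPC itself).
import torch
import torch.nn.functional as F

def build_prototypes(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    classes = labels.unique(sorted=True)
    return torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])

def nearest_prototype(query: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    sims = F.cosine_similarity(query.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)

# 3-way 5-shot toy example with 64-d embeddings from a (frozen) backbone
support = torch.randn(15, 64)
support_y = torch.arange(3).repeat_interleave(5)
protos = build_prototypes(support, support_y)
print(nearest_prototype(torch.randn(4, 64), protos))
```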
PMID:40745353 | DOI:10.1186/s13007-025-01423-3
Impact of large language models and vision deep learning models in predicting neoadjuvant rectal score for rectal cancer treated with neoadjuvant chemoradiation
BMC Med Imaging. 2025 Jul 31;25(1):306. doi: 10.1186/s12880-025-01844-5.
ABSTRACT
This study explores Deep Learning methods, namely Large Language Models (LLMs) and Computer Vision models, to predict the neoadjuvant rectal (NAR) score for locally advanced rectal cancer (LARC) treated with neoadjuvant chemoradiation (NACRT). The NAR score is a validated surrogate endpoint for LARC. A total of 160 CT scans were used in this study, along with 4 different types of radiology reports: 2 generated from CT scans and the other 2 from MRI scans, both before and after NACRT. For the CT scans, two different convolutional neural network approaches were used, processing the 3D scan either as a whole or slice by slice. For the radiology reports, an encoder-architecture LLM was used. Performance was quantified by the Area under the Receiver Operating Characteristic curve (AUC). The two CT-based approaches yielded [Formula: see text] and [Formula: see text], while the LLM trained on post-NACRT MRI reports showed the most predictive potential at [Formula: see text], a statistically significant improvement (p = 0.03) over the baseline clinical approach (from [Formula: see text] to [Formula: see text]). This study showcases the potential of Large Language Models and the inadequacies of CT scans for predicting NAR values. Clinical trial number: not applicable.
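A hedged sketch of the report-based arm: an encoder-style language model fine-tuned to predict a binarized NAR outcome from report text; the model name, label scheme, and inference call are assumptions, not the study's configuration.

```python
# Hedged sketch: encoder LLM for binary classification of radiology report text.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"   # placeholder; a clinical-domain encoder may fit better
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

reports = ["Post-treatment MRI shows marked regression of the rectal tumour ..."]
batch = tok(reports, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)   # [P(low NAR), P(high NAR)] (assumed labels)
print(probs)
```

In practice the classification head would first be fine-tuned on labeled report/NAR pairs; the snippet only shows the inference path.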
PMID:40745280 | DOI:10.1186/s12880-025-01844-5
Multi-head keypoint incorporation in a deep learning model for tropical cyclone centering and intensity classification from geostationary satellite images
Sci Rep. 2025 Jul 31;15(1):27949. doi: 10.1038/s41598-025-12733-w.
ABSTRACT
Hydrometeorological forecasting and early warning involve many hazardous elements, with the estimation of the intensity and center location of tropical cyclones (TCs) being key. This paper proposes a new multitask deep learning model with attention gate mechanisms that works on satellite images and constructs heatmaps for TC center localization and intensity classification. The multi-head keypoint design (MHKD) with a spatial attention mechanism (SAM) is fitted to the decoder layers using multi-resolution inputs from the encoder. In addition, a new loss function based on Euclidean distance is employed to guide the heatmap centers from lower decoder layers toward higher ones, thereby refining keypoints during the early decoding stages. Experiments on a dataset constructed for the Western North Pacific over 2015-2023, collected from the Japanese Himawari-8/9 geostationary satellites and the best track of the World Meteorological Organization (WMO) Regional Specialized Meteorological Center (RSMC) Tokyo - Typhoon Center, indicate that the proposed model successfully detects most TCs in combined images from three infrared channels. The model's accuracy reaches over 72% for the Tropical Depression (TD) grade and over 90% for stronger TCs (Severe Tropical Storm (STS) and Typhoon (TY)). Compared with a typical object detection problem, the main difficulties stem from the complexity of TC cloud patterns, which do not map linearly to actual TC grades, and from discriminating between adjacent grades (transitions from TD to Tropical Storm (TS) and from TS to STS, and the intensification of TCs). The proposed MHKD helps reduce the over-estimation rate for the TD grade and the under-estimation rates for the TS and STS grades; most notably, TC center localization yielded an average error of approximately 34 km with a single-keypoint, one-head attention network (One ATTN) and around 27 km with a three-head attention network (Three ATTN).
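The heatmap formulation can be illustrated with a small sketch in which the TC center becomes a 2-D Gaussian peak and the center error is the Euclidean distance to the predicted peak; grid size and sigma are illustrative values, not the paper's settings.

```python
# Hedged sketch: Gaussian target heatmap for a TC center and Euclidean center error.
import numpy as np

def center_heatmap(h: int, w: int, cy: float, cx: float, sigma: float = 6.0) -> np.ndarray:
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

heat = center_heatmap(128, 128, cy=70.5, cx=41.0)     # target a network would regress
pred_y, pred_x = np.unravel_index(heat.argmax(), heat.shape)
err_px = np.hypot(pred_y - 70.5, pred_x - 41.0)       # Euclidean center error (pixels)
print(heat.shape, round(float(err_px), 2))
```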
PMID:40745273 | DOI:10.1038/s41598-025-12733-w
Image dehazing algorithm based on deep transfer learning and local mean adaptation
Sci Rep. 2025 Jul 31;15(1):27956. doi: 10.1038/s41598-025-13613-z.
ABSTRACT
In recent years, haze has significantly hindered the quality and efficiency of daily tasks by reducing the range of visual perception. Various approaches have emerged to address image dehazing, including image enhancement, image restoration, and deep learning-based dehazing methods. While these methods have improved dehazing performance to some extent, they often struggle in bright regions of the image, leading to distortions and suboptimal dehazing results. Moreover, dehazing models generally exhibit weak noise resistance, with the PSNR of dehazed images typically falling below 30 dB and residual noise degrading visual quality. It remains challenging for dehazing models to simultaneously ensure effective dehazing in bright regions and strong noise suppression. To address both issues, we propose an image dehazing algorithm based on deep transfer learning and local mean adaptation. The framework consists of several key modules: an atmospheric light estimation module based on deep transfer learning, a transmission map estimation module utilizing local mean adaptation, a haze-free image reconstruction module, an image enhancement module, and a noise reduction module. This design ensures stable and accurate atmospheric light estimation, enabling the model to process different regions of hazy images effectively and prevent distortion artifacts, while the image enhancement and noise reduction modules enrich the details of the dehazed images and improve the model's noise resistance. To validate the proposed algorithm, we conducted dehazing experiments on a self-made synthetic hazy dataset, the SOTS (outdoor) dataset, the NH-HAZE dataset, and the O-HAZE dataset. Experimental results demonstrate that the proposed model achieves superior performance across all four datasets: the dehazed images exhibit no color distortion, PSNR values consistently exceed 30 dB, and SSIM consistently exceeds 85%, a significant advantage over mainstream dehazing algorithms. These results indicate that the proposed model effectively handles bright regions, such as the sky, significantly reduces residual noise, and generalizes well across datasets. The proposed method and its theoretical framework can be applied to scenarios such as autonomous driving and intelligent surveillance, contributing to advancements in related fields and promoting further development of haze removal technologies.
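A hedged sketch of the reconstruction step common to such pipelines: once the atmospheric light A and transmission map t are estimated, the haze-free image follows from the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)); the estimation modules themselves are not shown.

```python
# Hedged sketch: scene recovery from the atmospheric scattering model,
# given a hazy image I, atmospheric light A, and transmission map t.
import numpy as np

def recover_scene(I: np.ndarray, A: np.ndarray, t: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    """I: (H, W, 3) hazy image in [0, 1]; A: (3,) atmospheric light; t: (H, W)."""
    t = np.clip(t, t_min, 1.0)[..., None]   # lower bound avoids amplifying noise
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)

hazy = np.random.rand(240, 320, 3)
J = recover_scene(hazy, A=np.array([0.9, 0.9, 0.92]), t=np.full((240, 320), 0.6))
print(J.shape)
```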
PMID:40745205 | DOI:10.1038/s41598-025-13613-z
Application of CycleGAN-based low-light image enhancement algorithm in foreign object detection on belt conveyors in underground mines
Sci Rep. 2025 Jul 31;15(1):27897. doi: 10.1038/s41598-025-10779-4.
ABSTRACT
Clear monitoring images are crucial for the safe operation of belt conveyors in coal mines. However, in underground environments, low illumination and uneven brightness can significantly degrade image quality, affecting the detection of foreign objects in the coal flow and reducing the reliability of safety monitoring equipment. To address this issue, an improved CycleGAN-based low-illumination image enhancement algorithm is proposed, which employs a cycle-consistent generative adversarial network for unsupervised learning. In the generator network, first, a multi-scale convolutional feature enhancement module is designed to extract features from low-light images at multiple scales, obtaining richer feature information. Second, a lighting enhancement module is designed to achieve global brightness equalization and reduce contrast differences between regions. Finally, a dynamic effective self-attention aggregation module is designed to suppress the generation of noise and artifacts. For the discriminator network, a global-local discriminator structure is designed to optimize overall illumination while adaptively enhancing shadow and highlight regions. Additionally, a self-feature preservation loss is introduced to constrain semantic consistency before and after enhancement and avoid detail distortion. Experimental results show that our method achieves PSNR and SSIM values of 27.07 dB and 0.880, respectively, on the CUMT-BelT dataset. In a foreign object detection task coupling YOLOv11n with the proposed image enhancement algorithm, mAP reaches 94.7%, providing a reference for real-time safety monitoring in complex lighting scenarios in underground coal mines.
PMID:40745194 | DOI:10.1038/s41598-025-10779-4
Using artificial intelligence to assess macular edema treatments in retinitis pigmentosa
Retina. 2025 Jul 30. doi: 10.1097/IAE.0000000000004636. Online ahead of print.
ABSTRACT
PURPOSE: This study validates a deep learning-based artificial intelligence (AI) tool for quantifying intraretinal fluid (IRF) volumes in macular edema (ME) associated with retinitis pigmentosa (RP), and, through longitudinal analysis of IRF, provides new insight into treatment efficacy and disease natural history.
METHODS: This retrospective, longitudinal study identified RP patients with ME. A commercially available retinal analysis tool quantified IRF, and was validated for segmentation of ME using spectral-domain optical coherence tomography volume scans. Baseline analysis of IRF versus traditional central subfield thickness (CST), and longitudinal analyses of IRF versus treatment and best-corrected visual acuity (BCVA) were performed.
RESULTS: Forty-four patients were identified. For treatment studies, 52 eyes were in the treated group and 14 eyes in the untreated ME group. Mean follow-up was 5.3 exams (3.7, 6.9) over 2.3 years (1.7, 3.0). Software validation compared automated and manual IRF segmentation of 490 image pairs, finding a Dice coefficient of 0.928 (95% CI: 0.92, 0.99). Cohort mean IRF volume was 230.85 nL (57.42, 403.91) at baseline. IRF change in eyes treated with topical carbonic anhydrase inhibitors (CAIs) was -2.1 nL/year (P=0.81). Oral acetazolamide (AZM)-treated eyes had significant IRF reduction (-33.6 nL/year, P=0.009), and significant improvements in BCVA (logMAR/yr; ETDRS letters equivalent) (-0.041; +2 letters) (P=0.025).
CONCLUSION: A deep learning tool was able to rapidly and accurately quantify IRF in RP-associated ME. Using this analysis tool, we confirmed that treatment with AZM led to significant reduction in long-term IRF. Structural changes (IRF) only translated to significant functional improvements (BCVA) in eyes treated with AZM.
PMID:40743462 | DOI:10.1097/IAE.0000000000004636
Multitask deep learning for the emulation and calibration of an agent-based malaria transmission model
PLoS Comput Biol. 2025 Jul 31;21(7):e1013330. doi: 10.1371/journal.pcbi.1013330. Online ahead of print.
ABSTRACT
Agent-based models of malaria transmission are useful tools for understanding disease dynamics and planning interventions, but they can be computationally intensive to calibrate. We present a multitask deep learning approach for emulating and calibrating a complex agent-based model of malaria transmission. Our neural network emulator was trained on a large suite of simulations from the EMOD malaria model, an agent-based model of malaria transmission dynamics, capturing relationships between immunological parameters and epidemiological outcomes such as age-stratified incidence and prevalence across eight sub-Saharan African study sites. We then use the trained emulator in conjunction with parameter estimation techniques to calibrate the underlying model to reference data. Taken together, this analysis shows the potential of machine learning-guided emulator design for complex scientific processes and their comparison to field data.
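The emulate-then-calibrate idea can be sketched with a small multitask network mapping parameters to several outcomes, followed by gradient-based search over its inputs against reference data; dimensions, architecture, and loss below are placeholders, not the EMOD workflow.

```python
# Hedged sketch: train a surrogate (emulator) on simulator runs, then calibrate
# parameters by optimizing the emulator's inputs against reference targets.
import torch
import torch.nn as nn

emulator = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 16))   # 16 outcomes, e.g. age-binned incidence

# ... here the emulator would be trained on (parameter, simulator-output) pairs ...

target = torch.randn(16)                       # placeholder reference field data
theta = torch.zeros(6, requires_grad=True)     # parameters to calibrate
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((emulator(theta) - target) ** 2).mean()
    loss.backward()
    opt.step()
print("calibrated parameters:", theta.detach())
```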
PMID:40743314 | DOI:10.1371/journal.pcbi.1013330
AI-Driven fetal distress monitoring in SDN-IoMT networks
PLoS One. 2025 Jul 31;20(7):e0328099. doi: 10.1371/journal.pone.0328099. eCollection 2025.
ABSTRACT
The healthcare industry is transforming with the integration of the Internet of Medical Things (IoMT) and AI-powered networks for improved clinical connectivity and advanced monitoring capabilities. However, IoMT devices struggle with traditional network infrastructure due to their complexity and heterogeneity. Software-defined networking (SDN) is a powerful solution for efficiently managing and controlling IoMT, and the integration of artificial intelligence, such as Deep Learning (DL) algorithms, brings intelligence and decision-making capabilities to SDN-IoMT systems. This study focuses on the serious problem of class imbalance in cardiotocography (CTG) features and the clinical data of pregnant women, especially fetal heart rate (FHR) and decelerations. FHR and decelerations are key markers in CTG monitoring, essential for assessing fetal health and preventing complications or death. To improve the performance of prenatal monitoring, this study proposes a framework using Generative Adversarial Networks (GANs), an advanced DL technique, together with an auto-encoder model; the framework addresses the data imbalance problem using reconstruction error and Wasserstein distance-based GANs. The performance of the model is assessed through simulations performed using Mininet, based on metrics such as accuracy, recall, precision, and F1 score. The proposed framework outperforms both basic and advanced DL models, achieving an accuracy of 94.2% and an F1 score of 21.1% on the very small classes. Validation on the CTU-UHB dataset confirms its advantage over state-of-the-art solutions for handling imbalanced CTG data. These findings highlight the potential of AI- and SDN-based IoMT to improve prenatal outcomes.
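As a hedged illustration of the rebalancing idea, a small Wasserstein-style GAN can learn the minority-class feature distribution so that synthetic samples supplement training; the networks, dimensions, and schedule below are toy placeholders, not the proposed framework.

```python
# Hedged toy sketch: Wasserstein GAN (with weight clipping) for minority-class
# oversampling of tabular CTG-like features.
import torch
import torch.nn as nn

FEATS = 21                                              # e.g. tabular CTG features
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, FEATS))
C = nn.Sequential(nn.Linear(FEATS, 64), nn.ReLU(), nn.Linear(64, 1))   # critic
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)

minority = torch.randn(128, FEATS)                      # placeholder minority-class data
for step in range(200):
    # critic: maximize E[C(real)] - E[C(fake)], enforced by weight clipping
    fake = G(torch.randn(128, 16)).detach()
    loss_c = -(C(minority).mean() - C(fake).mean())
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in C.parameters():
        p.data.clamp_(-0.01, 0.01)
    # generator: maximize E[C(fake)]
    loss_g = -C(G(torch.randn(128, 16))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(256, 16)).detach()            # oversample the minority class
print(synthetic.shape)
```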
PMID:40743297 | DOI:10.1371/journal.pone.0328099
Enhanced Detection, Using Deep Learning Technology, of Medial Meniscal Posterior Horn Ramp Lesions in Patients with ACL Injury
J Bone Joint Surg Am. 2025 Jul 31. doi: 10.2106/JBJS.24.01530. Online ahead of print.
ABSTRACT
BACKGROUND: Meniscal ramp lesions can impact knee stability, particularly when associated with anterior cruciate ligament (ACL) injuries. Although magnetic resonance imaging (MRI) is the primary diagnostic tool, its diagnostic accuracy remains suboptimal. We aimed to determine whether deep learning technology could enhance MRI-based ramp lesion detection.
METHODS: We reviewed the records of 236 patients who underwent arthroscopic procedures documenting ACL injuries and the status of the medial meniscal posterior horn. A deep learning model was developed using MRI data for ramp lesion detection. Ramp lesion risk factors among patients who underwent ACL reconstruction were analyzed using logistic regression, extreme gradient boosting (XGBoost), and random forest models and were integrated into a final prediction model using Swin Transformer Large architecture.
RESULTS: The deep learning model using MRI data demonstrated superior overall diagnostic performance to the clinicians' assessment (accuracy of 73.3% compared with 68.1%, specificity of 78.0% compared with 62.9%, and sensitivity of 64.7% compared with 76.4%). Incorporating risk factors (age, posteromedial tibial bone marrow edema, and lateral meniscal tears) improved the model's accuracy to 80.7%, with a sensitivity of 81.8% and a specificity of 80.9%.
CONCLUSIONS: Integrating deep learning with MRI data and risk factors significantly enhanced diagnostic accuracy for ramp lesions, surpassing that of the model using MRI alone and that of clinicians. This study highlights the potential of artificial intelligence to provide clinicians with more accurate diagnostic tools for detecting ramp lesions, potentially enhancing treatment and patient outcomes.
LEVEL OF EVIDENCE: Diagnostic Level III. See Instructions for Authors for a complete description of levels of evidence.
PMID:40743295 | DOI:10.2106/JBJS.24.01530
A bearing fault diagnosis method based on hybrid artificial intelligence models
PLoS One. 2025 Jul 31;20(7):e0327646. doi: 10.1371/journal.pone.0327646. eCollection 2025.
ABSTRACT
The working state of rolling bearings severely affects the performance of industrial equipment. To address the difficulty of extracting features from incipient weak signals, which limits rolling bearing diagnosis accuracy, an efficient bearing fault diagnosis technique based on hybrid artificial intelligence models is proposed, integrating Improved Harris Hawks Optimization (IHHO) into the optimization of a Deep Belief Network-Extreme Learning Machine (DBN-ELM). First, Maximum Second-order Cyclostationary Blind Deconvolution (CYCBD) is employed to filter noise from the bearing vibration signals. Second, because the conventional Harris Hawks Optimization (HHO) algorithm tends to converge prematurely to local optima, IHHO introduces a differential evolution mutation operator and changes the escape energy factor from linear to nonlinear. Then, a double-layer network model based on DBN-ELM is proposed; to avoid setting the number of hidden-layer nodes of the DBN from human experience, IHHO is used to optimize the DBN structure, yielding the IHHO-DBN-ELM method, whose optimal structure is obtained by jointly optimizing the DBN and ELM with IHHO. Finally, the proposed IHHO-DBN-ELM approach is applied to bearing fault detection on the Case Western Reserve University bearing fault dataset. The experimental results demonstrate that the IHHO-DBN-ELM technique successfully extracts fault characteristics from raw time-domain signals, thereby offering enhanced diagnostic accuracy and superior generalization capabilities.
PMID:40743282 | DOI:10.1371/journal.pone.0327646
Deep Generative Modeling of the Canonical Ensemble with Differentiable Thermal Properties
Phys Rev Lett. 2025 Jul 11;135(2):027301. doi: 10.1103/8wx7-kyx8.
ABSTRACT
It is a long-standing challenge to accurately and efficiently compute thermodynamic quantities of many-body systems at thermal equilibrium. Conventional methods, e.g., Markov chain Monte Carlo, require many steps to equilibrate. Recently developed deep learning methods can perform direct sampling, but only work at a single trained temperature point and risk biased sampling. Here, we propose a variational method for canonical ensembles with differentiable temperature, which gives thermodynamic quantities as continuous functions of temperature, akin to an analytical solution. The proposed method is a general framework that works with any tractable-density generative model. At the optimum, the model is theoretically guaranteed to reproduce the unbiased Boltzmann distribution. We validated our method by calculating phase transitions in the Ising and XY models, demonstrating that our direct-sampling simulations are as accurate as Markov chain Monte Carlo methods but more efficient. Moreover, our differentiable free energy aligns closely with the exact one up to the second-order derivative, indicating that the variational model captures the subtle thermal behavior at the phase transitions. This functional dependence on external parameters is a fundamental advancement in combining the exceptional fitting ability of deep learning with rigorous physical analysis.
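The variational objective can be illustrated on a toy potential: for a tractable density q_theta, minimizing F(T) = E_q[U(x)] + T * E_q[log q_theta(x)] drives q_theta toward the Boltzmann distribution exp(-U/T)/Z. The sketch below uses a Gaussian model and a double-well U purely for illustration, not the Ising or XY setting of the paper.

```python
# Hedged toy sketch: variational free-energy minimization with a tractable density.
import torch

def U(x):                        # toy double-well potential (not a spin model)
    return (x ** 2 - 1.0) ** 2

mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
T = torch.tensor(0.5)            # temperature; here fixed for simplicity
opt = torch.optim.Adam([mu, log_sigma], lr=0.02)

for _ in range(2000):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    x = q.rsample((4096,))                        # reparameterized samples from q_theta
    F = (U(x) + T * q.log_prob(x)).mean()         # variational free energy estimate
    opt.zero_grad(); F.backward(); opt.step()

print(float(F), float(mu), float(log_sigma.exp()))
```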
PMID:40743158 | DOI:10.1103/8wx7-kyx8