Deep learning
Integrating advanced deep learning techniques for enhanced detection and classification of citrus leaf and fruit diseases
Sci Rep. 2025 Apr 12;15(1):12659. doi: 10.1038/s41598-025-97159-0.
ABSTRACT
In this study, we evaluate the performance of four deep learning models, EfficientNetB0, ResNet50, DenseNet121, and InceptionV3, for the classification of citrus diseases from images. Extensive experiments were conducted on a dataset of 759 images distributed across 9 disease classes, including Black spot, Canker, Greening, Scab, Melanose, and healthy examples of fruits and leaves. Both InceptionV3 and DenseNet121 achieved a test accuracy of 99.12%, with a macro average F1-score of approximately 0.986 and a weighted average F1-score of 0.991, indicating exceptional precision and recall across the majority of the classes. ResNet50 and EfficientNetB0 attained test accuracies of 84.58% and 80.18%, respectively, reflecting moderate performance in comparison. These results underscore the promise of modern convolutional neural networks for accurate and timely detection of citrus diseases, thereby providing effective tools for farmers and agricultural professionals to implement proactive disease management, reduce crop losses, and improve yield quality.
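The macro and weighted F1 averages reported above differ only in how the per-class scores are combined: the macro average weights every class equally, while the weighted average weights each class by its support. As an illustrative sketch (not the authors' code; function and variable names are our own), both can be computed in plain Python:

```python
from collections import Counter

def f1_scores(y_true, y_pred, classes):
    """Per-class F1 plus macro (unweighted) and support-weighted averages."""
    per_class, support = {}, Counter(y_true)
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        per_class[c] = 2 * tp / denom if denom else 0.0
    macro = sum(per_class.values()) / len(classes)          # every class equal
    weighted = sum(per_class[c] * support[c] for c in classes) / len(y_true)
    return per_class, macro, weighted
```

On a 9-class dataset such as this one, a large gap between the two averages would signal weak performance on rare classes; the reported values (0.986 vs. 0.991) are close, consistent with balanced per-class results.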
PMID:40221550 | DOI:10.1038/s41598-025-97159-0
Computer-aided diagnosis of Haematologic disorders detection based on spatial feature learning networks using blood cell images
Sci Rep. 2025 Apr 12;15(1):12548. doi: 10.1038/s41598-025-85815-4.
ABSTRACT
Analyzing biomedical images is vital for high-performance imaging and numerous medical applications. Determining the diagnosis of a disease is an essential stage in managing patients. Likewise, statistical values from blood tests, patients' personal data, and expert estimation are necessary to diagnose a disease. With the growth of technology, patient-related information is acquired rapidly and in large volumes. Blood cancers, which cause many deaths, are currently evaluated and forecast largely through manual examination of microscopic white blood cell (WBC) images. Machine learning (ML) and deep learning (DL) have aided the classification and discovery of patterns in data, driving the growth of AI methods employed in numerous haematology fields. This study presents a novel Computer-Aided Diagnosis of Haematologic Disorders Detection Based on Spatial Feature Learning Networks with Hybrid Model (CADHDD-SFLNHM) approach using blood cell images. The main aim of the CADHDD-SFLNHM approach is to enhance the detection and classification of haematologic disorders. First, the Sobel filter (SF) technique is applied during preprocessing to improve the quality of blood cell images. Next, a modified LeNet-5 model is used as the feature extractor to capture the essential characteristics of blood cells relevant to disorder classification. A convolutional neural network and bi-directional gated recurrent unit with attention (CNN-BiGRU-A) method is employed to classify and detect haematologic disorders. Finally, the CADHDD-SFLNHM model applies the pelican optimization algorithm (POA) to fine-tune the hyperparameters of the CNN-BiGRU-A method. The experimental results of the CADHDD-SFLNHM model were obtained on a benchmark database.
The performance validation of the CADHDD-SFLNHM model portrayed a superior accuracy value of 97.91% over other techniques.
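The Sobel preprocessing step named in this abstract is a fixed pair of 3x3 convolution kernels that emphasize intensity edges such as cell boundaries. A minimal pure-Python sketch of the Sobel gradient magnitude (illustrative only; the paper's implementation details are not given):

```python
import math

KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image (list of rows); border left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out
```

A vertical edge in the input produces a strong horizontal gradient response along the edge, which is what makes the filtered image a useful input for the downstream feature extractor.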
PMID:40221445 | DOI:10.1038/s41598-025-85815-4
Integrating hybrid bald eagle crow search algorithm and deep learning for enhanced malicious node detection in secure distributed systems
Sci Rep. 2025 Apr 12;15(1):12647. doi: 10.1038/s41598-025-93549-6.
ABSTRACT
A distributed system comprises several independent units, each designed to carry out its own tasks without interacting with the others except through messaging services. This means that a single point of failure cannot incapacitate the system, since no single component performs all essential processes. Malicious node recognition is a crucial aspect of safeguarding the safety and reliability of distributed systems. Numerous models, ranging from anomaly detection techniques to machine learning (ML) methods, are used to examine node behaviour and recognize deviations from usual patterns that may indicate malicious intent. Advanced cryptographic protocols and intrusion detection systems are often combined to improve the resilience of these systems against attacks. Moreover, real-time monitoring and adaptive strategies are vital for quickly identifying and responding to emerging attacks, contributing to the overall robustness of secure distributed systems. This study designs a Hybrid Bald Eagle-Crow Search Algorithm and Deep Learning for Enhanced Malicious Node Detection (HBECSA-DLMND) technique in Secure Distributed Systems. The HBECSA-DLMND technique follows the concept of metaheuristic feature selection with DL-based detection of malicious nodes in distributed systems. To accomplish this, the HBECSA-DLMND technique performs data normalization using the linear scaling normalization (LSN) approach, and the ADASYN approach is employed to handle class-imbalanced data. Besides, the HBECSA-DLMND method utilizes the HBECSA technique to choose a better subset of features. Meanwhile, the convolutional sparse autoencoder (CSAE) model detects malicious nodes. Finally, the dung beetle optimization (DBO) method is employed for the parameter tuning of the CSAE method. The experimental evaluation of the HBECSA-DLMND methodology was examined on the benchmark WSN-DS database.
The performance validation of the HBECSA-DLMND methodology illustrated a superior accuracy value of 98.99% over existing approaches.
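Linear scaling normalization, as named in the abstract above, is ordinarily min-max scaling of each feature into a fixed interval. A small sketch under that assumption (function and parameter names are ours):

```python
def linear_scale(values, lo=0.0, hi=1.0):
    """Min-max (linear scaling) normalization of one feature column into [lo, hi]."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]  # constant feature: map everything to the lower bound
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]
```

Scaling each feature to a common range keeps large-magnitude features (e.g. raw packet counts) from dominating distance- and gradient-based learning downstream.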
PMID:40221436 | DOI:10.1038/s41598-025-93549-6
Detection of surface defects in soybean seeds based on improved Yolov9
Sci Rep. 2025 Apr 12;15(1):12631. doi: 10.1038/s41598-025-92429-3.
ABSTRACT
As one of the important indicators of soybean seed quality, the appearance of soybeans has long been of great concern, and traditional detection relies mainly on the naked eye to check whether the seed surface has defects. Machine learning, particularly deep learning, has undergone rapid advancement, making it possible to detect soybean seed defects automatically. Such methods can effectively replace traditional manual inspection and reduce the human resources consumed by this work, decreasing the expenses associated with agricultural activities. In this paper, we propose a Yolov9-c-ghost-Forward model, improved by introducing GhostConv, a lightweight convolutional module from GhostNet. The pipeline enhances soybean seed images through grayscale conversion, filtering, image segmentation, and morphological operations, which greatly reduce noise and separate the soybean seeds from the original images. Based on the Yolov9 network, soybean seed features are extracted and surface defects are detected. According to the experimental findings, the recall rate reaches 98.6% and the mAP0.5 reaches 99.2%. This shows that the model can provide a solid theoretical foundation and technical support for agricultural breeding screening and agricultural development.
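The mAP0.5 metric reported here counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU helper, as an illustrative sketch (corner-format boxes assumed; not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extent along each axis, clamped at zero when the boxes are disjoint.
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

In a full mAP computation, predictions are sorted by confidence, greedily matched to ground truth at the IoU threshold, and the area under the resulting precision-recall curve is averaged over classes.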
PMID:40221419 | DOI:10.1038/s41598-025-92429-3
TRUSTED: The Paired 3D Transabdominal Ultrasound and CT Human Data for Kidney Segmentation and Registration Research
Sci Data. 2025 Apr 12;12(1):615. doi: 10.1038/s41597-025-04467-1.
ABSTRACT
Inter-modal image registration (IMIR) and image segmentation with abdominal ultrasound (US) data have many important clinical applications, including image-guided surgery, automatic organ measurement, and robotic navigation. However, research is severely limited by the lack of public datasets. We propose TRUSTED (the Tridimensional Renal Ultra Sound TomodEnsitometrie Dataset), comprising paired transabdominal 3DUS and CT kidney images from 48 human patients (96 kidneys), including segmentation and anatomical landmark annotations by two experienced radiographers. Inter-rater segmentation agreement was over 93% (Dice score), and gold-standard segmentations were generated using the STAPLE algorithm. Seven anatomical landmarks were annotated for IMIR systems development and evaluation. To validate the dataset's utility, four competitive deep learning models for kidney segmentation were benchmarked, yielding average Dice scores from 79.63% to 90.09% for CT and 70.51% to 80.70% for US images. Four IMIR methods were benchmarked, and Coherent Point Drift performed best with an average Target Registration Error of 4.47 mm and Dice score of 84.10%. The TRUSTED dataset may be used freely to develop and validate segmentation and IMIR methods.
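The Dice score used throughout this benchmark measures the overlap between two binary masks: twice the intersection divided by the total foreground of both. A compact reference implementation, as an illustrative sketch (not the dataset authors' code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 sequences)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2 * inter / total if total else 1.0
```

Unlike plain accuracy, Dice ignores the (usually vast) background agreement, which is why it is the standard metric for organ segmentation.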
PMID:40221416 | DOI:10.1038/s41597-025-04467-1
The future of Alzheimer's disease risk prediction: a systematic review
Neurol Sci. 2025 Apr 12. doi: 10.1007/s10072-025-08167-x. Online ahead of print.
ABSTRACT
BACKGROUND: Alzheimer's disease is the most prevalent kind of age-associated dementia among older adults globally. Traditional diagnostic models for predicting Alzheimer's disease risk primarily rely on demographic and clinical data to develop policies and assess probabilities. However, recent advancements in machine learning (ML) and other artificial intelligence (AI) techniques have shown promise for developing personalized risk models. These models use specific patient data from medical imaging and related reports. In this systematic review, we comprehensively examined studies on the use of ML with magnetic resonance imaging (MRI), genetics, radiomics, and medical data for Alzheimer's disease risk assessment. We highlight the results of our rigorous analysis of this research and emphasize the potential of ML methods for Alzheimer's disease risk prediction. We also examined current research projects and possible uses of AI-driven methods to enhance Alzheimer's disease risk prediction and enable more efficient investigation and individualized risk-mitigation strategies.
AIM AND METHODS: This review integrates both conventional and AI-based models to thoroughly analyze neuroimaging and non-neuroimaging features used in Alzheimer's disease prediction. This study examined factors related to imaging, radiomics, genetics, and clinical aspects. In addition, this study comprehensively presented machine learning for predicting the risk of Alzheimer's disease detection to benefit both beginner and expert researchers.
RESULTS: A total of 700 publications from 2000 to 2024 were initially retrieved, of which 120 studies met the inclusion criteria and were selected for review. The diagnosis of neurological disorders and the application of deep learning (DL) and machine learning (ML) were central themes in studies on the subject. When analyzing medical implementations or designing innovative models, the various machine learning models applied to neuroimaging and non-neuroimaging data may help researchers and clinicians become better informed. This review provides an extensive guide to the state of Alzheimer's disease risk assessment with AI.
CONCLUSION: By integrating diverse neuroimaging and non-neuroimaging data sources, this study provides researchers with an alternative viewpoint on the application of AI in Alzheimer's disease risk prediction, emphasizing its potential to improve early diagnosis and personalized intervention strategies.
PMID:40220257 | DOI:10.1007/s10072-025-08167-x
Deep Learning-Based Image Restoration and Super-Resolution for Fluorescence Microscopy: Overview and Resources
Methods Mol Biol. 2025;2904:21-50. doi: 10.1007/978-1-0716-4414-0_3.
ABSTRACT
Fluorescence microscopy is a key method for the visualization of cellular, subcellular, and molecular live-cell dynamics, enabling access to novel insights into mechanisms of health and disease. However, effects like phototoxicity, the transient nature of signals, photobleaching, and method-inherent noise can degrade the achievable signal-to-noise ratio and image resolution. In recent years, deep learning (DL) approaches have been increasingly applied to remove these degradations. In this review, we give a brief overview of existing classical and DL approaches for denoising, deconvolution, and computational super-resolution of fluorescence microscopy data. We summarize existing open-source databases within these fields as well as code repositories related to corresponding publications, and further contribute an example project for DL-based image denoising, which provides a low-barrier entry into DL coding and respective applications. In summary, we supply interested researchers with tools to apply or develop DL applications in live-cell imaging and foster research participation in this field.
PMID:40220224 | DOI:10.1007/978-1-0716-4414-0_3
Stable distance regression via spatial-frequency state space model for robot-assisted endomicroscopy
Int J Comput Assist Radiol Surg. 2025 Apr 12. doi: 10.1007/s11548-025-03353-w. Online ahead of print.
ABSTRACT
PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) is a noninvasive technique that enables direct visualization of tissue at a microscopic level in real time. One of the main challenges in using pCLE is maintaining the probe within a micrometer-scale working range. As a result, the probe-tissue distance must be regressed automatically to enable precise robotic tissue scanning.
METHODS: In this paper, we propose the spatial frequency bidirectional structured state space model (SF-BiS4D) for pCLE probe-tissue distance regression. This model advances traditional state space models by processing image sequences bidirectionally and analyzing data in both the frequency and spatial domains. Additionally, we introduce a guided trajectory planning strategy that generates pseudo-distance labels, facilitating the training of sequential models to generate smooth and stable robotic scanning trajectories. To improve inference speed, we also implement a hierarchical guided fine-tuning (GF) approach that efficiently reduces the size of the BiS4D model while maintaining performance.
RESULTS: The performance of our proposed model has been evaluated both qualitatively and quantitatively using the pCLE regression dataset (PRD). In comparison with existing state-of-the-art (SOTA) methods, our approach demonstrated superior performance in terms of accuracy and stability.
CONCLUSION: Our proposed deep learning-based framework effectively improves distance regression for microscopic visual servoing and demonstrates its potential for integration into surgical procedures requiring precise real-time intraoperative imaging.
PMID:40220066 | DOI:10.1007/s11548-025-03353-w
Video-based multi-target multi-camera tracking for postoperative phase recognition
Int J Comput Assist Radiol Surg. 2025 Apr 12. doi: 10.1007/s11548-025-03344-x. Online ahead of print.
ABSTRACT
PURPOSE: Deep learning methods are commonly used to generate context understanding to support surgeons and medical professionals. By expanding the current focus beyond the operating room (OR) to postoperative workflows, new forms of assistance are possible. In this article, we propose a novel multi-target multi-camera tracking (MTMCT) architecture for postoperative phase recognition, location tracking, and automatic timestamp generation.
METHODS: Three RGB cameras were used to create a multi-camera data set containing 19 reenacted postoperative patient flows. Patients and beds were annotated and used to train the custom MTMCT architecture. It includes bed and patient tracking for each camera and a postoperative patient state module to provide the postoperative phase, current location of the patient, and automatically generated timestamps.
RESULTS: The architecture demonstrates robust performance for single- and multi-patient scenarios by embedding medical domain-specific knowledge. In multi-patient scenarios, the state machine representing the postoperative phases has a traversal accuracy of 84.9 ± 6.0%, 91.4 ± 1.5% of timestamps are generated correctly, and the patient tracking IDF1 reaches 92.0 ± 3.6%. Comparative experiments show the effectiveness of using AFLink for matching partial trajectories in postoperative settings.
CONCLUSION: As our approach shows promising results, it lays the foundation for real-time surgeon support, enhancing clinical documentation and ultimately improving patient care.
PMID:40220065 | DOI:10.1007/s11548-025-03344-x
Analysis of RNA translation with a deep learning architecture provides new insight into translation control
Nucleic Acids Res. 2025 Apr 10;53(7):gkaf277. doi: 10.1093/nar/gkaf277.
ABSTRACT
Accurate annotation of coding regions in RNAs is essential for understanding gene translation. We developed a deep neural network to directly predict and analyze translation initiation and termination sites from RNA sequences. Trained on human transcripts, our model learned hidden rules of translation control and achieved near-perfect prediction of canonical translation sites across the entire human transcriptome. Surprisingly, this model revealed a new role of codon usage in regulating translation termination, which was experimentally validated. We also identified thousands of new open reading frames in mRNAs or lncRNAs, some of which were confirmed experimentally. The model trained on human mRNAs achieved high prediction accuracy for canonical translation sites in all eukaryotes and good prediction in polycistronic transcripts from prokaryotes or RNA viruses, suggesting a high degree of conservation in translation control. Collectively, we present TranslationAI (https://www.biosino.org/TranslationAI/), a general and efficient deep learning model for RNA translation that generates new insights into the complexity of translation regulation.
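The open-reading-frame discovery described above starts, at its simplest, from scanning a transcript for AUG start codons paired with their first in-frame stop codons; the deep model's contribution is learning which of these many candidates are actually translated. A naive baseline scan, for illustration only (not the TranslationAI method):

```python
STOPS = {"UAA", "UAG", "UGA"}

def find_orfs(rna):
    """Naive ORF scan: each AUG paired with its first in-frame stop codon.
    Returns half-open (start, end) coordinates that include the stop codon."""
    orfs = []
    for i in range(len(rna) - 2):
        if rna[i:i + 3] != "AUG":
            continue
        # Walk downstream codon by codon in the same reading frame.
        for j in range(i + 3, len(rna) - 2, 3):
            if rna[j:j + 3] in STOPS:
                orfs.append((i, j + 3))
                break
    return orfs
```

Real transcripts yield far more such candidates than true coding regions, which is exactly the gap a learned model of initiation and termination context is meant to close.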
PMID:40219965 | DOI:10.1093/nar/gkaf277
Impact of hypertension on cerebral small vessel disease: A post-mortem study of microvascular pathology from normal-appearing white matter into white matter hyperintensities
J Cereb Blood Flow Metab. 2025 Apr 12:271678X251333256. doi: 10.1177/0271678X251333256. Online ahead of print.
ABSTRACT
Cerebral small vessel disease (SVD) is diagnosed through imaging hallmarks like white matter hyperintensities (WMH). Novel hypotheses imply that endothelial dysfunction, blood-brain barrier (BBB) disruption and neurovascular inflammation contribute to the conversion of normal-appearing white matter (NAWM) into WMH in hypertensive individuals. Aiming to unravel the association between chronic hypertension and the earliest WMH pathogenesis, we characterized microvascular pathology from periventricular NAWM into WMH in post-mortem brains of individuals with and without hypertension. Our second aim was to delineate the NAWM-WMH transition from NAWM towards the center of WMH using deep learning, refining WMH segmentation to capture increases in FLAIR signal. Finally, we aimed to demonstrate whether these processes may synergistically contribute to WMH pathogenesis by performing voxel-wise correlations between MRI and microvascular pathology. Greater endothelial disruption, BBB damage and neurovascular inflammation were observed in individuals with hypertension. We did not observe gradual BBB damage or neurovascular inflammation along the NAWM-WMH transition. We found a strong correlation between BBB damage and neurovascular inflammation in all individuals in both periventricular NAWM and WMH. These novel findings suggest that endothelial disruption, BBB damage and neurovascular inflammation are major contributors to SVD progression and are already present in NAWM in hypertensive individuals.
PMID:40219923 | DOI:10.1177/0271678X251333256
Deep ensemble architecture with improved segmentation model for Alzheimer's disease detection
J Med Eng Technol. 2025 Apr 12:1-25. doi: 10.1080/03091902.2025.2484691. Online ahead of print.
ABSTRACT
Alzheimer's disease (AD), which involves significant cognitive impairment that interferes with day-to-day activities, is the most common cause of dementia. Deep learning techniques have performed well on diagnostic tasks. However, current methods for detecting Alzheimer's disease lack effectiveness, resulting in inaccurate results. To overcome these challenges, a novel deep ensemble architecture for AD classification is proposed in this research. The proposed model involves key phases, including preprocessing, segmentation, feature extraction, and classification. Initially, median filtering is employed for preprocessing. Subsequently, an improved U-Net architecture is employed for segmentation, and then features including Improved Shape Index Histogram (ISIH), Multi Binary Pattern (MBP), and Multi Texton are extracted from the segmented image. Then, an En-LeCILSTM is proposed, which combines the LeNet, CNN, and improved LSTM models. The resultant output is obtained by averaging the intermediate outputs of each model, leading to improved detection accuracy. The proposed model's efficiency is then assessed through various analyses, including classifier comparison and performance-metric evaluation. As a result, the En-LeCILSTM model scored a higher accuracy of 0.963 and an F-measure of 0.908, surpassing the results of traditional methods. The outcomes demonstrate that the proposed model is notably more effective in detecting Alzheimer's disease.
PMID:40219912 | DOI:10.1080/03091902.2025.2484691
Incorporating Respiratory Signals for ML-based Multi-Modal Sleep Stage Classification: A Large-Scale Benchmark Study with Actigraphy and HRV
Sleep. 2025 Apr 11:zsaf091. doi: 10.1093/sleep/zsaf091. Online ahead of print.
ABSTRACT
Insufficient sleep quality is directly linked to various diseases, making reliable sleep monitoring crucial for prevention, diagnosis, and treatment. As sleep laboratories are cost- and resource-prohibitive, wearable sensors offer a promising alternative for long-term unobtrusive sleep monitoring at home. Current unobtrusive sleep detection systems are mostly based on actigraphy (ACT) that tend to overestimate sleep due to a lack of movement in short periods of wakefulness. Previous research established sleep stage classification by combining ACT with cardiac information but has not investigated the incorporation of respiration in large-scale studies. For that reason, this work aims to systematically compare ACT-based sleep-stage classification with multimodal approaches combining ACT, heart rate variability (HRV) as well as respiration rate variability (RRV) using state-of-the-art machine- and deep learning algorithms. The evaluation is performed on a publicly available sleep dataset including more than 1,000 recordings. Respiratory information is introduced through ECG-derived respiration (EDR) features, which are evaluated against traditional respiration belt data. Results show that including RRV features improves the Matthews Correlation Coefficient (MCC), with long short-term memory (LSTM) algorithms performing best. For sleep staging based on AASM standards, the LSTM achieved a median MCC of 0.51 (0.16 IQR). Respiratory information enhanced classification performance, particularly in detecting Wake and Rapid eye movement (REM) sleep epochs. Our findings underscore the potential of including respiratory information in sleep analysis to improve sleep detection algorithms and, thus, help to transfer sleep laboratories into a home monitoring environment. The code used in this work can be found online at https://github.com/mad-lab-fau/sleep_analysis.
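The Matthews Correlation Coefficient reported above is a balanced summary of the confusion matrix that, unlike accuracy, is not inflated by the dominant sleep class. For the binary case it reduces to a few counts; a small sketch (the study's multi-class sleep-stage MCC generalizes this, and the names here are ours):

```python
import math

def mcc(y_true, y_pred):
    """Binary Matthews correlation coefficient from 0/1 label sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Undefined (zero-denominator) cases are conventionally reported as 0.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), so the reported median of 0.51 sits well above chance while leaving clear headroom.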
PMID:40219765 | DOI:10.1093/sleep/zsaf091
Generative evidential synthesis with integrated segmentation framework for MR-only radiation therapy treatment planning
Med Phys. 2025 Apr 11. doi: 10.1002/mp.17828. Online ahead of print.
ABSTRACT
BACKGROUND: Radiation therapy (RT) planning is a time-consuming process involving the contouring of target volumes and organs at risk, followed by treatment plan optimization. CT is typically used as the primary planning image modality as it provides electron density information needed for dose calculation. MRI is widely used for contouring after registration to CT due to its high soft tissue contrast. However, there exists uncertainties in registration, which propagate throughout treatment planning as contouring errors, and lead to dose inaccuracies. MR-only RT planning has been proposed as a solution to eliminate the need for CT scan and image registration, by synthesizing CT from MRI. A challenge in deploying MR-only planning in clinic is the lack of a method to estimate the reliability of a synthetic CT in the absence of ground truth. While methods have used sampling-based approaches to estimate model uncertainty over multiple inferences, such methods suffer from long run time and are therefore inconvenient for clinical use.
PURPOSE: To develop a fast and robust method for the joint synthesis of CT from MRI, estimation of model uncertainty related to the synthesis accuracy, and segmentation of organs at risk (OARs), in a single model inference.
METHODS: In this work, deep evidential regression is applied to MR-only brain RT planning. The proposed framework uses a multi-task vision transformer combining a single joint nested encoder with two distinct convolutional decoder paths for synthesis and segmentation separately. An evidential layer was added at the end of the synthesis decoder to jointly estimate model uncertainty in a single inference. The framework was trained and tested on a dataset of 119 (80 for training, 9 for validation, and 30 for test) paired T1-weighted MRI and CT scans with OARs contours.
RESULTS: The proposed method achieved mean ± SD SSIM of 0.820 ± 0.039, MAE of 47.4 ± 8.49 HU, and PSNR of 23.4 ± 1.13 for the synthesis task, and dice similarity coefficients of 0.799 ± 0.132 (lenses), 0.945 ± 0.020 (eyes), 0.834 ± 0.059 (optic nerves), 0.679 ± 0.148 (chiasm), 0.947 ± 0.014 (temporal lobes), 0.849 ± 0.027 (hippocampus), 0.953 ± 0.024 (brainstem), and 0.752 ± 0.228 (cochleae) for segmentation, in a total run time of 6.71 ± 0.25 s. Additionally, experiments on challenging test cases revealed that the proposed evidential uncertainty estimation highlighted the same uncertain regions as Monte Carlo-based epistemic uncertainty, thus highlighting the reliability of the proposed method.
CONCLUSION: A framework leveraging deep evidential regression to jointly synthesize CT from MRI, predict the related synthesis uncertainty, and segment OARs in a single model inference was developed. The proposed approach has the potential to streamline the planning process and provide clinicians with a measure of the reliability of a synthetic CT in the absence of ground truth.
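Deep evidential regression, as used in this framework, has the network output the parameters of a Normal-Inverse-Gamma distribution per voxel, from which prediction and both uncertainty types follow in closed form in a single forward pass, avoiding the repeated sampling of Monte Carlo approaches. A sketch of that closed-form step, following the standard evidential regression formulation (this is our illustration, not the authors' code):

```python
def nig_uncertainty(gamma, v, alpha, beta):
    """Map Normal-Inverse-Gamma outputs (gamma, v, alpha, beta) to
    (prediction, aleatoric variance, epistemic variance).
    aleatoric = E[sigma^2] = beta / (alpha - 1)
    epistemic = Var[mu]    = beta / (v * (alpha - 1))
    Requires alpha > 1 and v > 0 for the moments to exist."""
    assert alpha > 1 and v > 0
    aleatoric = beta / (alpha - 1)
    epistemic = beta / (v * (alpha - 1))
    return gamma, aleatoric, epistemic
```

Epistemic variance shrinks as the "virtual evidence" v grows, which is why well-supported voxels come out confident while out-of-distribution regions are flagged, matching the Monte Carlo comparison reported above.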
PMID:40219601 | DOI:10.1002/mp.17828
Energy efficient multipath routing in IoT-wireless sensor network via hybrid optimization and deep learning-based energy prediction
Network. 2025 Apr 11:1-50. doi: 10.1080/0954898X.2025.2476081. Online ahead of print.
ABSTRACT
Efficient data transmission in Wireless Sensor Networks (WSNs) is a critical challenge. Traditional routing protocols focus on energy efficiency but do not consider other factors that might degrade performance. This research proposes a novel Hybrid Beluga Whale-Coati Optimization (HBWCO) algorithm to address these issues, focusing on optimizing energy-efficient data transmission. In the proposed approach, sensor nodes and field dimensions are first initialized. Then, K-means clustering is applied to group nodes, and a Deep Q-Net model is used to predict the energy levels of nodes. The cluster head (CH) is selected as the node with the highest energy. Multipath routing is performed through the HBWCO algorithm, which optimally selects the best routing paths by considering factors such as reliability, residual energy, predicted energy, throughput, and traffic intensity. If link breakage occurs, a route maintenance phase is initiated using a Source Link Breakage Warning (SLBW) message strategy to notify the source node to choose another path. This work offers a comprehensive approach to enhancing energy efficiency in networks. The suggested HBWCO approach was compared against traditional methods and achieved the highest reliability of 0.948 and the highest throughput of 3496. Therefore, the HBWCO algorithm offers an effective solution for data transmission and routing reliability.
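The clustering and cluster-head step described in this abstract can be sketched as: assign each node to its nearest cluster centroid, then promote the highest-energy node in each cluster to cluster head. An illustrative toy version (the (x, y, energy) node layout, function names, and the use of fixed centroids are our assumptions, standing in for the paper's K-means and Deep Q-Net energy prediction):

```python
def select_cluster_heads(nodes, centroids):
    """Assign nodes to the nearest centroid, then pick the highest-energy node
    in each cluster as its head. nodes: list of (x, y, energy) tuples."""
    clusters = {}
    for node in nodes:
        x, y, _ = node
        ci = min(range(len(centroids)),
                 key=lambda i: (x - centroids[i][0]) ** 2 + (y - centroids[i][1]) ** 2)
        clusters.setdefault(ci, []).append(node)
    # Energy-aware head selection: the member with the largest (predicted) energy wins.
    return {ci: max(members, key=lambda n: n[2]) for ci, members in clusters.items()}
```

Choosing the highest-energy member as head spreads the relay burden over time, since heads drain fastest and re-election naturally rotates the role.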
PMID:40219585 | DOI:10.1080/0954898X.2025.2476081
Novel CT radiomics models for the postoperative prediction of early recurrence of resectable pancreatic adenocarcinoma: A single-center retrospective study in China
J Appl Clin Med Phys. 2025 Apr 11:e70092. doi: 10.1002/acm2.70092. Online ahead of print.
ABSTRACT
PURPOSE: To assess the predictive capability of CT radiomics features for early recurrence (ER) of pancreatic ductal adenocarcinoma (PDAC).
METHODS: Postoperative PDAC patients were retrospectively selected, all of whom had undergone preoperative CT imaging and surgery. Both patients with resectable or borderline-resectable pancreatic cancer met the eligibility criteria in this study. However, owing to the differences in treatment strategies and such, this research mainly focused on patients with resectable pancreatic cancer. All patients were subject to follow-up assessments for a minimum of 9 months. A total of 250 cases meeting the inclusion criteria were included. A clinical model, a conventional radiomics model, and a deep-radiomics model were constructed for ER prediction (defined as occurring within 9 months) in the training set. A model based on the TNM staging was utilized as a baseline for comparison. Assessment of the models' performance was based on the area under the receiver operating characteristic curve (AUC). Additionally, precision-recall (PR) analysis and calibration assessments were conducted for model evaluation. Furthermore, the clinical utility of the models was evaluated through decision curve analysis (DCA), net reclassification improvement (NRI), and improvement of reclassification index (IRI).
RESULTS: In the test set, the AUC values for ER prediction were as follows: TNM staging, ROC-AUC = 0.673 (95% CI: 0.550, 0.795), PR-AUC = 0.362 (95% CI: 0.493, 0.710); clinical model, ROC-AUC = 0.640 (95% CI: 0.504, 0.775), PR-AUC = 0.481 (95% CI: 0.520, 0.735); radiomics model, ROC-AUC = 0.722 (95% CI: 0.604, 0.839), PR-AUC = 0.575 (95% CI: 0.466, 0.686); and deep-radiomics model, which exhibited the highest ROC-AUC of 0.895 (95% CI: 0.820, 0.970), PR-AUC = 0.834 (95% CI: 0.767, 0.923). The difference in both ROC-AUC and PR-AUC for the deep-radiomics model was statistically significant when compared to the other scores (all p < 0.05). The DCA curve of the deep-radiomics model outperformed the other models. NRI and IRI analyses demonstrated that the deep-radiomics model significantly enhances risk classification compared to the other prediction methods (all p < 0.05).
CONCLUSION: The predictive performance of deep features based on CT images exhibits favorable outcomes in predicting early recurrence.
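The ROC-AUC values compared in this study equal the probability that a randomly chosen early-recurrence case receives a higher model score than a randomly chosen non-recurrence case. A rank-based sketch of that computation (illustrative only; names are ours):

```python
def roc_auc(scores, labels):
    """ROC-AUC as the probability a positive outscores a negative (ties count 1/2).
    labels are 0/1; scores are model outputs of any comparable type."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise view also explains why PR-AUC is reported alongside ROC-AUC above: with imbalanced recurrence rates, ROC-AUC can look strong while precision at clinically useful recall remains modest.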
PMID:40217563 | DOI:10.1002/acm2.70092
Application of the YOLOv11-seg algorithm for AI-based landslide detection and recognition
Sci Rep. 2025 Apr 11;15(1):12421. doi: 10.1038/s41598-025-95959-y.
ABSTRACT
In recent years, landslides have occurred frequently around the world, resulting in significant casualties and property damage. A notable example occurred in 2014, when a landslide in the Argo region of Afghanistan claimed over 2000 lives, becoming one of the most devastating landslide events in history. The increasing frequency and severity of landslides present significant challenges to geological disaster monitoring, making the development of efficient and accurate detection methods critical for disaster mitigation and prevention. This study proposes an intelligent landslide recognition method based on the latest deep learning model, YOLOv11-seg, designed to address the challenges posed by complex terrains and the diverse characteristics of landslides. Using the Bijie-Landslide dataset, the method optimizes the feature extraction and segmentation modules of YOLOv11-seg, enhancing both the accuracy of landslide boundary detection and the pixel-level segmentation of landslide areas. Compared with traditional methods, YOLOv11-seg performs better in detecting complex boundaries and handling occlusion, demonstrating superior detection accuracy and segmentation quality. During the preprocessing phase, various data augmentation techniques, including mirroring, rotation, and color adjustment, were employed, significantly improving the model's generalization performance and robustness across varying terrains, seasons, and lighting conditions. The experimental results indicate that the YOLOv11-seg model excels in several key performance metrics, such as precision, recall, F1 score, and mAP. Specifically, the F1 score reaches 0.8781 for boundary detection and 0.8114 for segmentation, whereas the mAP for bounding-box (B) detection and mask (M) segmentation tasks outperforms traditional methods. These results highlight the high reliability and adaptability of YOLOv11-seg for landslide detection.
This research provides new technological support for intelligent landslide monitoring and risk assessment, highlighting its potential in geological disaster monitoring.
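The preprocessing augmentations named in the abstract (mirroring, rotation, color adjustment) can be sketched as a minimal NumPy routine. This is an illustrative stand-in, not the authors' pipeline; the probability, rotation set, and brightness range are assumptions:

```python
import numpy as np

def augment(image, rng):
    """Randomly mirror, rotate, and brightness-adjust an H x W x 3 uint8
    image, mirroring the augmentation types named in the abstract."""
    out = image
    if rng.random() < 0.5:                     # horizontal mirror
        out = out[:, ::-1, :]
    k = int(rng.integers(0, 4))                # rotate by k * 90 degrees
    out = np.rot90(out, k=k)
    gain = rng.uniform(0.8, 1.2)               # simple colour/brightness jitter
    out = np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
sample = augment(np.zeros((4, 6, 3), dtype=np.uint8), rng)
```

In practice such transforms are applied on the fly during training so each epoch sees a differently perturbed copy of every image.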
PMID:40216897 | DOI:10.1038/s41598-025-95959-y
Predicting PD-L1 status in NSCLC patients using deep learning radiomics based on CT images
Sci Rep. 2025 Apr 11;15(1):12495. doi: 10.1038/s41598-025-91575-y.
ABSTRACT
Radiomics refers to the utilization of automated or semi-automated techniques to extract and analyze numerous quantitative features from medical images, such as computed tomography (CT) or magnetic resonance imaging (MRI) scans. This study aims to develop a deep learning radiomics (DLR)-based approach for predicting programmed death-ligand 1 (PD-L1) expression in patients with non-small cell lung cancer (NSCLC). Data from 352 NSCLC patients with known PD-L1 expression were collected, of whom 48.29% (170/352) tested positive for PD-L1 expression. Tumor regions of interest (ROI) were semi-automatically segmented based on CT images, and DL features were extracted using ResNet50 (Residual Network 50). The least absolute shrinkage and selection operator (LASSO) algorithm was used for feature selection and dimensionality reduction. Seven algorithms were used to build models, and the best-performing ones were identified. A combined model integrating DLR with clinical data was also developed. The predictive performance of each model was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve analysis. The DLR model, based on CT images, demonstrated an AUC of 0.85 (95% confidence interval (CI), 0.82-0.88), sensitivity of 0.80 (0.74-0.85), and specificity of 0.73 (0.70-0.77) for predicting PD-L1 status. The integrated model exhibited superior performance, with an AUC of 0.91 (0.87-0.95), sensitivity of 0.85 (0.82-0.89), and specificity of 0.75 (0.72-0.80). Our findings indicate that the DLR model holds promise as a valuable tool for predicting the PD-L1 status in patients with NSCLC, which can greatly assist in clinical decision-making and the selection of personalized treatment strategies.
PMID:40216830 | DOI:10.1038/s41598-025-91575-y
A hybrid hierarchical health monitoring solution for autonomous detection, localization and quantification of damage in composite wind turbine blades for tinyML applications
Sci Rep. 2025 Apr 11;15(1):12380. doi: 10.1038/s41598-025-95364-5.
ABSTRACT
Composites are widely used in wind turbine blades due to their excellent strength-to-weight ratio and operational flexibility. However, wind turbines often operate in harsh environmental conditions that can lead to various types of damage, including abrasion, corrosion, fractures, cracks, and delamination. Early detection through structural health monitoring (SHM) is essential for maintaining the efficient and reliable operation of wind turbines, minimizing downtime and maintenance costs, and optimizing energy output. Further, damage detection and localization are challenging in curved composites due to their anisotropic nature, edge reflections, and generation of higher harmonics. Previous work has focused on damage localization using deep-learning approaches. However, these models are computationally expensive, and multiple models need to be trained independently for various tasks such as damage classification, localization, and sizing identification. Also, the data generated by AE waveforms sampled at a minimum rate of 1 MSPS is very large, motivating tinyML-enabled hardware that runs real-time ML models and reduces the required cloud storage. TinyML hardware can run ML models efficiently with low power consumption. This paper presents a Hybrid Hierarchical Machine-Learning Model (HHMLM) that leverages acoustic emission (AE) data to identify, classify, and locate different types of damage using a single unified model. The AE data is collected using a single sensor, with damage simulated by artificial AE sources (pencil lead breaks) and low-velocity impacts. Additionally, abrasion on the blade's leading edge was simulated to resemble environmental wear. The HHMLM model achieved 96.4% overall accuracy, compared with 83.8% for separate conventional Convolutional Neural Network (CNN) models, while requiring less computation time.
The developed SHM approach provides an effective and practical solution for in-service monitoring of wind turbine blades, particularly in wind farm settings, with potential for future wireless sensors running tinyML applications.
PMID:40216825 | DOI:10.1038/s41598-025-95364-5
Deep learning assisted analysis of biomarker changes in refractory neovascular AMD after switch to faricimab
Int J Retina Vitreous. 2025 Apr 11;11(1):44. doi: 10.1186/s40942-025-00669-2.
ABSTRACT
BACKGROUND: Artificial intelligence (AI)-driven biomarker segmentation offers an objective and reproducible approach for quantifying key anatomical features in neovascular age-related macular degeneration (nAMD) using optical coherence tomography (OCT). Faricimab, a novel bispecific inhibitor of vascular endothelial growth factor (VEGF) and angiopoietin-2 (Ang-2), offers new potential in the management of nAMD, particularly in treatment-resistant cases. This study utilizes an advanced deep learning-based segmentation algorithm to analyze OCT biomarkers and evaluate the efficacy and durability of Faricimab over nine months in patients with therapy-refractory nAMD.
METHODS: This retrospective real-world study analyzed patients with treatment-resistant nAMD who switched to Faricimab following inadequate responses to ranibizumab or aflibercept. Automated segmentation of key OCT biomarkers - including fibrovascular pigment epithelium detachment (fvPED), intraretinal fluid (IRF), subretinal fluid (SRF), subretinal hyperreflective material (SHRM), choroidal volume, and central retinal thickness (CRT) - was conducted using a deep learning algorithm based on a convolutional neural network.
RESULTS: A total of 46 eyes from 41 patients completed the nine-month follow-up. Significant reductions in SRF, fvPED, and choroidal volume were observed from baseline (mo0) to three months (mo3) and sustained at nine months (mo9). CRT decreased significantly from 342.7 (interquartile range (iqr): 117.1) µm at mo0 to 296.6 (iqr: 84.3) µm at mo3 and 310.2 (iqr: 93.6) µm at mo9. The deep learning model provided precise quantification of biomarkers, enabling reliable tracking of disease progression. The median injection interval extended from 35 (iqr: 15) days at mo0 to 56 (iqr: 20) days at mo9, representing a 60% increase. Visual acuity remained stable throughout the study. Correlation analysis revealed that higher baseline CRT and fvPED volumes were associated with greater best-corrected visual acuity (BCVA) improvements and longer treatment intervals.
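As a quick check, the reported 60% extension of the median injection interval follows directly from the two medians above:

```python
def percent_increase(before, after):
    """Relative change, in percent."""
    return 100.0 * (after - before) / before

# Median injection interval: 35 days at mo0 -> 56 days at mo9.
change = percent_increase(35, 56)  # 60.0
```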
CONCLUSIONS: This study highlights the potential of AI-driven biomarker segmentation as a precise and scalable tool for monitoring disease progression in treatment-resistant nAMD. By enabling objective and reproducible analysis of OCT biomarkers, deep learning algorithms provide critical insights into treatment response. Faricimab demonstrated significant and sustained anatomical improvements, allowing for extended treatment intervals while maintaining disease stability. Future research should focus on refining AI models to improve predictive accuracy and assessing long-term outcomes to further optimize disease management.
TRIAL REGISTRATION: Ethics approval was obtained from the Institutional Review Board of LMU Munich (study ID: 20-0382). This study was conducted in accordance with the Declaration of Helsinki.
PMID:40217505 | DOI:10.1186/s40942-025-00669-2