Deep learning
Applying genetic algorithm to extreme learning machine in prediction of tumbler index with principal component analysis for iron ore sintering
Sci Rep. 2025 Feb 8;15(1):4777. doi: 10.1038/s41598-025-88755-1.
ABSTRACT
As a major component of the blast furnace burden, sinter with the desired quality performance needs to be produced in sinter plants. The tumbler index (TI) is one of the most important indices characterizing sinter quality, and it depends on the raw material proportions, the operating parameters, and the chemical compositions. To accurately predict TI, an integrated model is proposed in this study. First, to reduce the data dimensionality, the sintering production data are processed with principal component analysis (PCA), and the principal components whose accumulated contribution rate stays within 95% are extracted as the inputs of a predictive model based on the Extreme Learning Machine (ELM). Second, a genetic algorithm (GA) is applied to improve the robustness and generalization performance of the original ELM. Finally, the model is examined using one year of actual production data from a sinter plant and compared with a single ELM, GA-BP, and a deep learning method; the comparison confirms the superiority of the proposed model over the traditional models. The results show that the GA-ELM approach improves predictive accuracy, with 81.85% of TI predictions having an absolute error under 0.7%.
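To make the two core steps above concrete, the following is a minimal sketch (not the authors' code): principal components are retained up to the 95% cumulative-contribution threshold with scikit-learn, and a basic ELM is fitted with random hidden-layer weights and closed-form output weights. The GA would search over the hidden-layer weights and biases; all data, sizes, and names here are placeholders.

```python
# Minimal sketch: PCA feature reduction followed by a basic Extreme Learning
# Machine. A GA would tune the hidden-layer weights/biases; here they are
# simply drawn at random.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                                    # placeholder sintering features
y = 70 + X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=500)   # placeholder TI values

# 1) Standardize and keep components up to the 95% cumulative contribution rate.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95).fit(X_std)
Z = pca.transform(X_std)

# 2) Basic ELM: random hidden layer, closed-form (least-squares) output weights.
n_hidden = 50
W = rng.normal(size=(Z.shape[1], n_hidden))     # these weights are what a GA would evolve
b = rng.normal(size=n_hidden)
H = np.tanh(Z @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights

y_pred = np.tanh(Z @ W + b) @ beta
print("Training RMSE:", np.sqrt(np.mean((y - y_pred) ** 2)))
```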
PMID:39922958 | DOI:10.1038/s41598-025-88755-1
BiAF: research on dynamic goat herd detection and tracking based on machine vision
Sci Rep. 2025 Feb 8;15(1):4754. doi: 10.1038/s41598-025-89231-6.
ABSTRACT
As technology advances, rangeland management is rapidly transitioning toward intelligent systems. To optimize grassland resources and implement scientific grazing practices, livestock grazing monitoring has become a pivotal area of research. Traditional methods, such as manual tracking and wearable monitoring, often disrupt the natural movement and feeding behaviors of grazing livestock, posing significant challenges for in-depth studies of grazing patterns. In this paper, we propose a machine vision-based grazing goat herd detection algorithm that enhances the streamlined ELAN module in YOLOv7-tiny, incorporates an optimized CBAM attention mechanism, refines the SPPCSPC module to reduce the parameter count, and improves the anchor boxes to enhance target detection accuracy. The BiAF-YOLOv7 algorithm achieves precision, recall, F1 score, and mAP values of 94.5%, 96.7%, 94.8%, and 96.0%, respectively, on the goat herd dataset. Combined with DeepSORT, our system successfully tracks goat herds, demonstrating the effectiveness of the BiAF-YOLOv7 algorithm as a tool for livestock grazing monitoring. This study not only validates the practicality of the proposed algorithm but also highlights the broader applicability of machine vision-based monitoring in large-scale environments, providing an information-driven route to grassland-livestock balance through monitoring and tracking.
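For readers unfamiliar with the attention mechanism named above, here is a generic re-implementation of a CBAM block of the kind inserted into YOLOv7-tiny; it follows the original CBAM formulation (channel attention followed by spatial attention) and is not the paper's optimized variant.

```python
# Generic CBAM block (channel attention, then spatial attention) in PyTorch.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))          # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))           # global max pooling
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)           # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)            # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

feat = torch.randn(1, 64, 80, 80)                    # a feature map from the backbone
print(CBAM(64)(feat).shape)                          # torch.Size([1, 64, 80, 80])
```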
PMID:39922902 | DOI:10.1038/s41598-025-89231-6
Looking outside the box with a pathology aware AI approach for analyzing OCT retinal images in Stargardt disease
Sci Rep. 2025 Feb 8;15(1):4739. doi: 10.1038/s41598-025-85213-w.
ABSTRACT
Stargardt disease type 1 (STGD1) is a genetic disorder that leads to progressive vision loss, with no approved treatments currently available. The development of effective therapies faces the challenge of identifying appropriate outcome measures that accurately reflect treatment benefits. Optical Coherence Tomography (OCT) provides high-resolution retinal images, serving as a valuable tool for deriving potential outcome measures, such as retinal thickness. However, automated segmentation of OCT images, particularly in regions disrupted by degeneration, remains complex. In this study, we propose a deep learning-based approach that incorporates a pathology-aware loss function to segment retinal sublayers in OCT images from patients with STGD1. This method targets relatively unaffected regions for sublayer segmentation, ensuring accurate boundary delineation in areas with minimal disruption. In severely affected regions, identified by a box detection model, the total retina is segmented as a single layer to avoid errors. Our model significantly outperforms standard models, achieving an average Dice coefficient of [Formula: see text] for total retina and [Formula: see text] for retinal sublayers. The most substantial improvement was in the segmentation of the photoreceptor inner segment, with Dice coefficient increasing by [Formula: see text]. This approach provides a balance between granularity and reliability, making it suitable for clinical application in tracking disease progression and evaluating therapeutic efficacy.
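The paper's exact loss is not given in the abstract; the sketch below illustrates one plausible reading of a "pathology-aware" objective, in which the sublayer Dice term is evaluated only outside regions flagged as severely affected by the box detector. Shapes and the masking scheme are assumptions.

```python
# Illustrative masked soft-Dice loss: sublayer Dice is computed only where a
# validity mask marks relatively unaffected retina. This is an assumption
# about the mechanism, not the authors' implementation.
import torch

def masked_dice_loss(pred, target, valid_mask, eps=1e-6):
    """pred, target: (B, C, H, W) probabilities / one-hot labels.
    valid_mask: (B, 1, H, W), 1 = relatively unaffected region."""
    pred = pred * valid_mask
    target = target * valid_mask
    inter = (pred * target).sum(dim=(2, 3))
    denom = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()

B, C, H, W = 2, 4, 64, 512                     # e.g., 4 retinal sublayers
pred = torch.softmax(torch.randn(B, C, H, W), dim=1)
target = torch.nn.functional.one_hot(
    torch.randint(0, C, (B, H, W)), C).permute(0, 3, 1, 2).float()
valid = torch.ones(B, 1, H, W)
valid[..., 200:300] = 0                        # exclude a severely affected span
print(masked_dice_loss(pred, target, valid))
```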
PMID:39922894 | DOI:10.1038/s41598-025-85213-w
A deep learning-driven multi-layered steganographic approach for enhanced data security
Sci Rep. 2025 Feb 8;15(1):4761. doi: 10.1038/s41598-025-89189-5.
ABSTRACT
In the digital era, ensuring data integrity, authenticity, and confidentiality is critical amid growing interconnectivity and evolving security threats. This paper addresses key limitations of traditional steganographic methods, such as limited payload capacity, susceptibility to detection, and lack of robustness against attacks. A novel multi-layered steganographic framework is proposed, integrating Huffman coding, Least Significant Bit (LSB) embedding, and a deep learning-based encoder-decoder to enhance imperceptibility, robustness, and security. Huffman coding compresses data and obfuscates statistical patterns, enabling efficient embedding within cover images, while the deep learning encoder adds a layer of protection by concealing an image within another. Extensive evaluations using benchmark datasets, including Tiny ImageNet, COCO, and CelebA, demonstrate the approach's superior performance. Key contributions include achieving high visual fidelity with the Structural Similarity Index (SSIM) consistently above 99%, robust data recovery with text recovery accuracy reaching 100% under standard conditions, and enhanced resistance to common attacks such as noise and compression. The proposed framework significantly improves robustness, security, and computational efficiency compared to traditional methods. By balancing imperceptibility and resilience, this paper advances secure communication and digital rights management, addressing modern challenges in data hiding through an innovative combination of compression, adaptive embedding, and deep learning techniques.
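As a rough illustration of the classical layer of such a framework, the sketch below implements LSB embedding and extraction only; zlib stands in for the Huffman coding stage, and the deep learning encoder-decoder is omitted entirely.

```python
# LSB stage only: compressed payload bits are written into the least
# significant bit of each pixel channel of the cover image. zlib is used
# here as a stand-in for Huffman coding.
import numpy as np
import zlib

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    data = zlib.compress(payload)
    data = len(data).to_bytes(4, "big") + data              # 4-byte length header
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray) -> bytes:
    flat = stego.flatten()
    n = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    bits = flat[32 : 32 + 8 * n] & 1
    return zlib.decompress(np.packbits(bits).tobytes())

cover = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
stego = embed_lsb(cover, b"secret message")
assert extract_lsb(stego) == b"secret message"
```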
PMID:39922893 | DOI:10.1038/s41598-025-89189-5
Deep learning based gasket fault detection: a CNN approach
Sci Rep. 2025 Feb 8;15(1):4776. doi: 10.1038/s41598-025-85223-8.
ABSTRACT
Gasket inspection is a critical step in the quality control of a product. The proposed method automates the detection of misaligned or incorrectly fitted gaskets, enabling timely repair action. It uses deep learning to recognize and evaluate radiator images, with a focus on identifying misaligned or incorrectly installed gaskets, employing a convolutional neural network (CNN) for feature extraction and classification. A gasket inspection system based on a CNN architecture is developed in this work. The network consists of two convolution layers, two batch normalization layers, two ReLU layers, a max pooling layer, and finally a fully connected layer for the classification of gasket images. The obtained results indicate that the system has great potential for practical applications in the manufacturing industry. Moreover, it provides a reliable and efficient mechanism for quality control, which can help reduce the risk of defects and ensure product reliability.
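A minimal PyTorch rendering of the architecture described above (two convolution, batch-normalization, and ReLU layers, a max pooling layer, and a fully connected classifier) might look as follows; channel counts and the input resolution are assumptions.

```python
# Sketch of a two-block CNN classifier for gasket images; layer sizes and the
# 128x128 input resolution are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class GasketCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 128x128 -> 64x64
        )
        self.classifier = nn.Linear(32 * 64 * 64, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = GasketCNN()
logits = model(torch.randn(4, 3, 128, 128))          # batch of radiator images
print(logits.shape)                                   # torch.Size([4, 2])
```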
PMID:39922855 | DOI:10.1038/s41598-025-85223-8
Harnessing artificial intelligence for predicting breast cancer recurrence: a systematic review of clinical and imaging data
Discov Oncol. 2025 Feb 8;16(1):135. doi: 10.1007/s12672-025-01908-6.
ABSTRACT
Breast cancer is a leading cause of mortality among women, with recurrence prediction remaining a significant challenge. In this context, artificial intelligence and its resources can serve as powerful tools for analyzing large amounts of data and predicting cancer recurrence, potentially enabling personalized medical treatment and improving patients' quality of life. Thus, this systematic review examines the role of AI in predicting breast cancer recurrence using clinical data, imaging data, and combined datasets. Support Vector Machines (SVMs) and Neural Networks, especially when applied to combined data, demonstrate strong potential for improving prediction accuracy. SVMs are effective with high-dimensional clinical data, while Neural Networks excel in genetic and molecular analysis. Despite these advancements, limitations such as dataset diversity, sample size, and evaluation standardization persist, emphasizing the need for further research. AI integration in recurrence prediction offers promising prospects for personalized care but requires rigorous validation for safe clinical application.
PMID:39921795 | DOI:10.1007/s12672-025-01908-6
Innovative laboratory techniques shaping cancer diagnosis and treatment in developing countries
Discov Oncol. 2025 Feb 8;16(1):137. doi: 10.1007/s12672-025-01877-w.
ABSTRACT
Cancer is a major global health challenge, with approximately 19.3 million new cases and 10 million deaths estimated in 2020. Laboratory advancements in cancer detection have transformed diagnostic capabilities, particularly through the use of biomarkers that play crucial roles in risk assessment, therapy selection, and disease monitoring. Tumor histology, single-cell technology, flow cytometry, molecular imaging, liquid biopsy, immunoassays, and molecular diagnostics have emerged as pivotal tools for cancer detection. The integration of artificial intelligence, particularly deep learning and convolutional neural networks, has enhanced diagnostic accuracy and data analysis capabilities. However, developing countries face significant challenges, including financial constraints, inadequate healthcare infrastructure, and limited access to advanced diagnostic technologies. The impact of COVID-19 has further complicated cancer management in resource-limited settings. Future research should focus on precision medicine and early cancer diagnosis through sophisticated laboratory techniques to improve prognosis and health outcomes. This review examines the evolving landscape of cancer detection, focusing on laboratory research breakthroughs and limitations in developing countries, while providing recommendations for advancing tumor diagnostics in resource-constrained environments.
PMID:39921787 | DOI:10.1007/s12672-025-01877-w
Using deep feature distances for evaluating the perceptual quality of MR image reconstructions
Magn Reson Med. 2025 Feb 8. doi: 10.1002/mrm.30437. Online ahead of print.
ABSTRACT
PURPOSE: Commonly used MR image quality (IQ) metrics have poor concordance with radiologist-perceived diagnostic IQ. Here, we develop and explore deep feature distances (DFDs)-distances computed in a lower-dimensional feature space encoded by a convolutional neural network (CNN)-as improved perceptual IQ metrics for MR image reconstruction. We further explore the impact of distribution shifts between the images used to train the DFD CNN encoders and the images evaluated by the IQ metric.
METHODS: We compare commonly used IQ metrics (PSNR and SSIM) to two "out-of-domain" DFDs with encoders trained on natural images, an "in-domain" DFD trained on MR images alone, and two domain-adjacent DFDs trained on large medical imaging datasets. We additionally compare these with several state-of-the-art but less commonly reported IQ metrics: visual information fidelity (VIF), the noise quality metric (NQM), and the high-frequency error norm (HFEN). IQ metric performance is assessed via correlations with five expert radiologist reader scores of perceived diagnostic IQ of various accelerated MR image reconstructions. We characterize the behavior of these IQ metrics under common distortions expected during image acquisition, including their sensitivity to acquisition noise.
RESULTS: All DFDs and HFEN correlate more strongly with radiologist-perceived diagnostic IQ than SSIM, PSNR, and other state-of-the-art metrics, with correlations being comparable to radiologist inter-reader variability. Surprisingly, out-of-domain DFDs perform comparably to in-domain and domain-adjacent DFDs.
CONCLUSION: A suite of IQ metrics, including DFDs and HFEN, should be used alongside commonly-reported IQ metrics for a more holistic evaluation of MR image reconstruction perceptual quality. We also observe that general vision encoders are capable of assessing visual IQ even for MR images.
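The following sketch illustrates the general idea of a deep feature distance, comparing two reconstructions in the feature space of an ImageNet-trained VGG16 encoder rather than in pixel space; it mirrors the concept described above but is not the study's exact metric or encoder.

```python
# Generic deep feature distance: compare two images in the feature space of a
# pretrained CNN encoder. The encoder choice and normalization are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

# "Out-of-domain" encoder: VGG16 trained on natural images, truncated early.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()

def deep_feature_distance(img_a: torch.Tensor, img_b: torch.Tensor) -> float:
    """img_a, img_b: (1, 1, H, W) magnitude MR images scaled to [0, 1]."""
    with torch.no_grad():
        feats = []
        for img in (img_a, img_b):
            x = img.repeat(1, 3, 1, 1)                        # grayscale -> 3-channel
            x = TF.normalize(x, mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
            feats.append(vgg(x))
        # Normalize each feature map over channels, then take the mean squared difference.
        a, b = (f / (f.norm(dim=1, keepdim=True) + 1e-8) for f in feats)
        return ((a - b) ** 2).mean().item()

ref = torch.rand(1, 1, 256, 256)                # placeholder reference reconstruction
recon = ref + 0.05 * torch.randn_like(ref)      # placeholder accelerated reconstruction
print(deep_feature_distance(ref, recon))
```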
PMID:39921580 | DOI:10.1002/mrm.30437
Deep Learning Combined with Quantitative Structure-Activity Relationship Accelerates De Novo Design of Antifungal Peptides
Adv Sci (Weinh). 2025 Feb 8:e2412488. doi: 10.1002/advs.202412488. Online ahead of print.
ABSTRACT
Novel antifungal drugs that evade resistance are urgently needed for Candida infections. Antifungal peptides (AFPs) are potential candidates due to their specific mechanism of action, which makes them less prone to inducing drug resistance. An AFP de novo design method, Deep Learning-Quantitative Structure‒Activity Relationship Empirical Screening (DL-QSARES), is developed by integrating deep learning and quantitative structure‒activity relationship empirical screening. After generating candidate AFPs (c_AFPs) through the recombination of dominant amino acids and dipeptide compositions, natural language processing models and quantitative structure‒activity relationship (QSAR) approaches based on physicochemical properties are utilized to screen for promising c_AFPs. Forty-nine promising c_AFPs are screened, and their minimum inhibitory concentrations (MICs) against C. albicans are determined to be 3.9-125 µg/mL; four leading c_AFPs (AFP-8, -10, -11, and -13) have MICs of <10 µg/mL against the four tested pathogenic fungi, and AFP-13 shows excellent therapeutic efficacy in an animal model.
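As a small illustration of the physicochemical side of such an empirical screen, the sketch below computes net charge and mean Kyte-Doolittle hydropathy for candidate peptides; the thresholds and sequences are placeholders, not values from the study.

```python
# Simple physicochemical descriptors of the kind a QSAR-style empirical screen
# might use; thresholds and example sequences are hypothetical.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}                      # Kyte-Doolittle hydropathy scale

def net_charge(seq: str) -> int:
    """Approximate net charge at neutral pH (ignores termini and His)."""
    return sum(seq.count(a) for a in "KR") - sum(seq.count(a) for a in "DE")

def mean_hydropathy(seq: str) -> float:
    return sum(KD[a] for a in seq) / len(seq)

def passes_screen(seq: str) -> bool:
    # Cationic, moderately hydrophobic peptides are typical AMP/AFP traits.
    return net_charge(seq) >= 2 and -1.0 <= mean_hydropathy(seq) <= 2.0

for pep in ["GLFDIIKKIAESF", "KKLLKKLKKLLKKL"]:   # hypothetical candidates
    print(pep, net_charge(pep), round(mean_hydropathy(pep), 2), passes_screen(pep))
```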
PMID:39921483 | DOI:10.1002/advs.202412488
A Multi-Task Self-Supervised Strategy for Predicting Molecular Properties and FGFR1 Inhibitors
Adv Sci (Weinh). 2025 Feb 8:e2412987. doi: 10.1002/advs.202412987. Online ahead of print.
ABSTRACT
Studying the molecular properties of drugs and their interactions with human targets aids in better understanding the clinical performance of drugs and guides drug development. In computer-aided drug discovery, it is crucial to utilize effective molecular feature representations for predicting molecular properties and designing ligands with high binding affinity to targets. However, designing an effective multi-task, self-supervised strategy remains a significant challenge for the pretraining framework. In this study, a multi-task self-supervised deep learning framework, MTSSMol, is proposed, which utilizes ≈10 million unlabeled drug-like molecules for pretraining to identify potential inhibitors of fibroblast growth factor receptor 1 (FGFR1). During the pretraining of MTSSMol, molecular representations are learned through a graph neural network (GNN) encoder. A multi-task self-supervised pretraining strategy is proposed to fully capture the structural and chemical knowledge of molecules. Extensive computational tests on 27 datasets demonstrate that MTSSMol exhibits exceptional performance in predicting molecular properties across different domains. Moreover, MTSSMol's capability to identify potential inhibitors of FGFR1 is validated through molecular docking using RoseTTAFold All-Atom (RFAA) and molecular dynamics simulations. Overall, MTSSMol provides an effective algorithmic framework for enhancing molecular representation learning and identifying potential drug candidates, offering a valuable tool to accelerate drug discovery processes. All of the code is freely available online at https://github.com/zhaoqi106/MTSSMol.
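One self-supervised pretraining task of the general kind described above is masked atom-attribute prediction; the sketch below shows that idea with a small PyTorch Geometric GCN encoder. The architecture, masking rate, and toy graph are assumptions, not the MTSSMol implementation.

```python
# Masked atom-type prediction as a self-supervised pretraining task on a toy
# molecular graph; not the MTSSMol code.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class Encoder(nn.Module):
    def __init__(self, in_dim=16, hidden=64, n_atom_types=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_atom_types)    # masked-attribute head

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return h, self.head(h)

# Toy molecular graph: 6 atoms with one-hot atom-type features, chain topology.
atom_types = torch.randint(0, 16, (6,))
x = torch.nn.functional.one_hot(atom_types, 16).float()
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4, 4, 5],
                           [1, 0, 2, 1, 3, 2, 4, 3, 5, 4]])
data = Data(x=x, edge_index=edge_index)

model = Encoder()
mask = torch.rand(data.num_nodes) < 0.25               # mask ~25% of atoms
mask[0] = True                                          # ensure at least one masked node
x_masked = data.x.clone()
x_masked[mask] = 0.0
_, logits = model(x_masked, data.edge_index)
loss = nn.functional.cross_entropy(logits[mask], atom_types[mask])
loss.backward()
print(float(loss))
```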
PMID:39921455 | DOI:10.1002/advs.202412987
Accurately Models the Relationship Between Physical Response and Structure Using Kolmogorov-Arnold Network
Adv Sci (Weinh). 2025 Feb 7:e2413805. doi: 10.1002/advs.202413805. Online ahead of print.
ABSTRACT
Artificial intelligence (AI) in science is a key area of modern research. However, many current machine learning methods lack interpretability, making it difficult to grasp the physical mechanisms behind various phenomena, which hampers progress in related fields. This study focuses on the Poisson's ratio of a hexagonal lattice elastic network as it varies with structural deformation. By employing the Kolmogorov-Arnold Network (KAN), the transition of the network's Poisson's ratio from positive to negative as the hexagonal structural element shifts from a convex to a concave polygon was accurately predicted. The KAN provides a clear mathematical framework that describes this transition, revealing the connection between the Poisson's ratio and the geometric properties of the hexagonal element, and accurately identifying the geometric parameters at which the Poisson's ratio equals zero. This work demonstrates the significant potential of the KAN to clarify the mathematical relationships that underpin physical responses and structural behaviors.
PMID:39921316 | DOI:10.1002/advs.202413805
An Efficient Lightweight Multi Head Attention Gannet Convolutional Neural Network Based Mammograms Classification
Int J Med Robot. 2025 Feb;21(1):e70043. doi: 10.1002/rcs.70043.
ABSTRACT
BACKGROUND: This research aims to use deep learning to create automated systems for better breast cancer detection and categorisation in mammogram images, helping medical professionals overcome challenges such as time consumption, feature extraction issues and limited training models.
METHODS: This research introduces a Lightweight Multihead Attention Gannet Convolutional Neural Network (LMGCNN) to classify mammogram images effectively. It uses Wiener filtering, unsharp masking, and adaptive histogram equalisation to enhance images and remove noise, followed by the Grey-Level Co-occurrence Matrix (GLCM) for feature extraction. Optimal feature selection is performed by a self-adaptive quantum equilibrium optimiser with an artificial bee colony.
RESULTS: The approach was assessed on two datasets, CBIS-DDSM and MIAS, achieving accuracy rates of 98.2% and 99.9%, respectively, which highlights the superior performance of the LMGCNN model in accurately detecting breast cancer compared to previous models.
CONCLUSION: This method illustrates potential in aiding initial and accurate breast cancer detection, possibly leading to improved patient outcomes.
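The GLCM step named in the methods can be reproduced in a few lines with scikit-image, as sketched below; the preprocessing chain (Wiener filtering, unsharp masking, adaptive histogram equalisation) and the LMGCNN classifier itself are not shown, and the region of interest is a placeholder.

```python
# GLCM texture features with scikit-image; the input ROI is a random placeholder.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

mammogram = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder ROI

glcm = graycomatrix(mammogram, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # texture descriptors passed on to feature selection / the classifier
```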
PMID:39921233 | DOI:10.1002/rcs.70043
Deep Learning Enhances Precision of Citrullination Identification in Human and Plant Tissue Proteomes
Mol Cell Proteomics. 2025 Feb 5:100924. doi: 10.1016/j.mcpro.2025.100924. Online ahead of print.
ABSTRACT
Citrullination is a critical yet understudied post-translational modification (PTM) implicated in various biological processes. Exploring its role in health and disease requires a comprehensive understanding of the prevalence of this PTM at a proteome-wide scale. Although mass spectrometry has enabled the identification of citrullination sites in complex biological samples, it faces significant challenges, including limited enrichment tools and a high rate of false positives due to the identical mass shift of citrullination and deamidation (+0.9840 Da) and errors in monoisotopic ion selection. These issues often necessitate manual spectrum inspection, reducing throughput in large-scale studies. In this work, we present a novel data analysis pipeline that incorporates the deep learning model Prosit-Cit into the MS database search workflow to improve both the sensitivity and precision of citrullination site identification. Prosit-Cit, an extension of the existing Prosit model, has been trained on ∼53,000 spectra from ∼2,500 synthetic citrullinated peptides and provides precise predictions for the chromatographic retention time and fragment ion intensities of both citrullinated and deamidated peptides. This enhances the accuracy of identification and reduces false positives. Our pipeline demonstrated high precision on the evaluation dataset, recovering the majority of known citrullination sites in human tissue proteomes and improving sensitivity by identifying up to 14 times more citrullinated sites. Sequence motif analysis revealed consistency with previously reported findings, validating the reliability of our approach. Furthermore, extending the pipeline to a tissue proteome dataset of the model plant Arabidopsis thaliana enabled the identification of ∼200 citrullination sites across 169 proteins from 30 tissues, representing the first large-scale citrullination mapping in plants. This pipeline can be seamlessly applied to existing proteomics datasets, offering a robust tool for advancing biological discoveries and deepening our understanding of protein citrullination across species.
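A common way to compare predicted and observed fragment-ion intensities in such rescoring pipelines is the normalized spectral angle; the sketch below shows that calculation on synthetic intensities and is not the Prosit-Cit code.

```python
# Normalized spectral angle between predicted and observed fragment intensities.
import numpy as np

def spectral_angle(pred: np.ndarray, obs: np.ndarray) -> float:
    """Normalized spectral angle in [0, 1]; 1 = identical intensity pattern."""
    pred = pred / (np.linalg.norm(pred) + 1e-12)
    obs = obs / (np.linalg.norm(obs) + 1e-12)
    cos_sim = np.clip(np.dot(pred, obs), -1.0, 1.0)
    return 1.0 - 2.0 * np.arccos(cos_sim) / np.pi

predicted = np.array([0.9, 0.1, 0.4, 0.0, 0.7, 0.2])   # synthetic b/y-ion intensities
observed = np.array([0.8, 0.0, 0.5, 0.1, 0.6, 0.3])
print(round(spectral_angle(predicted, observed), 3))
```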
PMID:39921205 | DOI:10.1016/j.mcpro.2025.100924
A Dual Energy CT-Guided Intelligent Radiation Therapy Platform
Int J Radiat Oncol Biol Phys. 2025 Feb 5:S0360-3016(25)00085-9. doi: 10.1016/j.ijrobp.2025.01.028. Online ahead of print.
ABSTRACT
PURPOSE: The integration of advanced imaging and artificial intelligence (AI) technologies in radiotherapy has revolutionized cancer treatment by enhancing precision and adaptability. This study introduces a novel Dual Energy CT (DECT)-Guided Intelligent Radiation Therapy (DEIT) platform designed to streamline and optimize the radiotherapy process. The DEIT system combines DECT, a newly designed dual-layer multi-leaf collimator, deep learning algorithms for auto-segmentation, automated planning and QA capabilities.
METHODS: The DEIT system integrates an 80-slice CT scanner with an 87 cm bore size, a linear accelerator delivering four photon and five electron energies, and a flat panel imager optimized for MV Cone Beam CT acquisition. A comprehensive evaluation of the system's accuracy was conducted using end-to-end tests. Virtual monoenergetic CT images and electron density images from the DECT were generated and compared on both phantom and patient data. The system's auto-segmentation algorithms were tested on five cases for each of the 99 organs at risk, and the automated optimization and planning capabilities were evaluated on clinical cases.
RESULTS: The DEIT system demonstrated systematic errors of less than 1 mm for target localization. DECT reconstruction showed electron density mapping deviations ranging from -0.052 to 0.001, with stable HU consistency across monoenergetic levels above 60 keV, except for high-Z materials at lower energies. Auto-segmentation achieved dice similarity coefficients above 0.9 for most organs with inference time less than 2 seconds. Dose-volume histogram (DVH) comparisons showed improved dose conformity indices and reduced doses to critical structures in Auto-plans compared to Manual Plans across various clinical cases. Additionally, high gamma passing rates at 2%/2mm in both 2D (above 97%) and 3D (above 99%) in vivo analyses further validate the accuracy and reliability of treatment plans.
CONCLUSIONS: The DEIT platform represents a viable solution for radiation treatment. The DEIT system utilizes AI-driven automation, real-time adjustments, and CT imaging to enhance the radiotherapy process, improving both efficiency and flexibility.
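For reference, a cumulative dose-volume histogram of the kind compared above can be derived from a dose grid and a structure mask as sketched below; the dose values and target mask are synthetic placeholders.

```python
# Cumulative DVH from a dose grid and a binary structure mask (synthetic data).
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bins: np.ndarray):
    """Fraction of the structure volume receiving at least each dose level."""
    d = dose[mask]
    return np.array([(d >= b).mean() for b in bins])

dose = np.random.gamma(shape=20, scale=2.5, size=(64, 64, 64))    # Gy, placeholder
ptv = np.zeros_like(dose, dtype=bool)
ptv[24:40, 24:40, 24:40] = True                                   # placeholder target

levels = np.linspace(0, 80, 81)                                   # 0-80 Gy in 1 Gy steps
dvh = cumulative_dvh(dose, ptv, levels)
v50 = dvh[np.searchsorted(levels, 50)]                            # fraction receiving >= 50 Gy
print(f"V50Gy ≈ {100 * v50:.1f}% of the target volume")
```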
PMID:39921109 | DOI:10.1016/j.ijrobp.2025.01.028
Advanced AI-driven detection of interproximal caries in bitewing radiographs using YOLOv8
Sci Rep. 2025 Feb 7;15(1):4641. doi: 10.1038/s41598-024-84737-x.
ABSTRACT
Dental caries is a very common chronic disease that may lead to pain, infection, and tooth loss if it remains undetected at an early stage. Traditional methods, such as tactile-visual examination and bitewing radiography, are subject to intrinsic variability due to factors such as examiner experience and image quality, which can result in inconsistent diagnoses. Thus, the present study aimed to develop a deep learning-based AI model using the YOLOv8 algorithm to improve interproximal caries detection in bitewing radiographs. In this retrospective study of 552 radiographs, a total of 1,506 images annotated at Tehran University of Medical Sciences were processed. The YOLOv8 model was trained and evaluated in terms of precision, recall, and F1 score, achieving a precision of 96.03% for enamel caries and 80.06% for dentin caries, for an overall precision of 84.83%, a recall of 79.77%, and an F1 score of 82.22%, demonstrating its reliability in reducing false negatives and improving diagnostic accuracy. YOLOv8 enhances interproximal caries detection, offering a reliable tool for dental professionals to improve diagnostic accuracy and clinical outcomes.
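A standard YOLOv8 fine-tuning run with the ultralytics package looks roughly like the sketch below; the dataset YAML, model size, and hyperparameters are placeholders rather than the study's actual configuration.

```python
# Typical YOLOv8 fine-tuning and inference workflow with ultralytics; the
# dataset file and hyperparameters are hypothetical.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                       # pretrained detection weights

# 'bitewing_caries.yaml' is a hypothetical dataset file listing train/val image
# folders and class names (e.g., enamel_caries, dentin_caries).
model.train(data="bitewing_caries.yaml", epochs=100, imgsz=640, batch=16)

metrics = model.val()                            # precision/recall/mAP on the val split
print(metrics.results_dict)

results = model.predict("example_bitewing.png", conf=0.25)   # inference on one image
print(results[0].boxes.xyxy, results[0].boxes.cls)
```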
PMID:39920198 | DOI:10.1038/s41598-024-84737-x
Evaluation of an artificial intelligence-based system for real-time high-quality photodocumentation during esophagogastroduodenoscopy
Sci Rep. 2025 Feb 8;15(1):4693. doi: 10.1038/s41598-024-83721-9.
ABSTRACT
Complete and high-quality photodocumentation in esophagogastroduodenoscopy (EGD) is essential for accurately diagnosing upper gastrointestinal diseases by reducing blind spot rates. The Automated Photodocumentation Task (APT), an artificial intelligence-based system for real-time photodocumentation during EGD, was developed to help endoscopists focus on observation rather than repetitive capturing tasks. This study aimed to evaluate the completeness and quality of APT's photodocumentation compared to endoscopists. The dataset comprised 37 EGD videos recorded at Seoul National University Hospital between March and June 2023. Virtual endoscopy was conducted by seven endoscopists and APT, capturing 11 anatomical landmarks from the videos. The primary endpoints were the completeness of capturing landmarks and the quality of the images. APT achieved an average accuracy of 98.16% in capturing landmarks. Compared to endoscopists, APT demonstrated similar completeness in photodocumentation (87.72% vs. 85.75%, P = 0.258), and the combined photodocumentation of endoscopists and APT reached higher completeness (91.89% vs. 85.75%, P < 0.001). APT captured images with higher mean opinion scores than those of endoscopists (3.88 vs. 3.41, P < 0.001). In conclusion, APT provides clear, high-quality endoscopic images while minimizing blind spots during EGD in real time.
PMID:39920187 | DOI:10.1038/s41598-024-83721-9
A comprehensive analysis of deep learning and transfer learning techniques for skin cancer classification
Sci Rep. 2025 Feb 7;15(1):4633. doi: 10.1038/s41598-024-82241-w.
ABSTRACT
Accurate and early diagnosis of melanoma is challenging due to its unique characteristics and the varied shapes of skin lesions. To address this issue, the current study examines various deep learning-based approaches and provides an effective approach for classifying dermoscopic images into two categories of skin lesions. This research investigates three approaches for classifying skin cancer images: (1) utilizing three fine-tuned pre-trained networks (VGG19, ResNet-18, and MobileNetV2) as classifiers; (2) employing the same three pre-trained networks as feature extractors in conjunction with four machine learning classifiers (SVM, DT, Naïve Bayes, and KNN); (3) utilizing combinations of the aforementioned pre-trained networks as feature extractors in conjunction with the same machine learning classifiers. All of these algorithms are trained on segmented images obtained with the active contour approach. Prior to segmentation, a preprocessing step is performed that involves scaling, denoising, and enhancing the image. Experimental performance is measured on the ISIC 2018 dataset, which contains 3,300 images of skin disease, including benign and malignant cancer images. 80% of the images from the ISIC 2018 dataset are allocated for training, while the remaining 20% are designated for testing. All approaches are trained using different parameters such as epoch, batch size, and learning rate. The results indicate that concatenating features from the ResNet-18 and MobileNet pre-trained networks with an SVM classifier achieved the maximum accuracy of 92.87%.
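The third strategy above (concatenated pre-trained features plus an SVM) can be sketched as follows; dataset loading, segmentation, and preprocessing are omitted, and the tensors stand in for the ISIC 2018 splits.

```python
# Concatenate features from two pretrained backbones and classify with an SVM;
# the random tensors are placeholders for preprocessed, segmented lesion images.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = nn.Identity()                               # 512-d features
mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
mobilenet.classifier = nn.Identity()                    # 1280-d features
resnet.eval()
mobilenet.eval()

@torch.no_grad()
def extract(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) preprocessed, segmented lesion images."""
    return torch.cat([resnet(images), mobilenet(images)], dim=1)   # (N, 1792)

# Placeholder batches standing in for the 80/20 train/test split.
x_train, y_train = torch.randn(32, 3, 224, 224), torch.randint(0, 2, (32,))
x_test, y_test = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))

clf = SVC(kernel="rbf").fit(extract(x_train).numpy(), y_train.numpy())
print("accuracy:", clf.score(extract(x_test).numpy(), y_test.numpy()))
```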
PMID:39920179 | DOI:10.1038/s41598-024-82241-w
Deep learning-based prediction of autoimmune diseases
Sci Rep. 2025 Feb 7;15(1):4576. doi: 10.1038/s41598-025-88477-4.
ABSTRACT
Autoimmune diseases are a complex group of diseases caused by the immune system mistakenly attacking the body's own tissues. Their etiology involves multiple factors, such as genetics, environmental factors, and abnormalities in immune cells, making prediction and treatment challenging. T cells, as a core component of the immune system, play a critical role in human immunity and have a significant impact on the pathogenesis of autoimmune diseases. Several studies have demonstrated that T-cell receptors (TCRs) may be involved in the pathogenesis of various autoimmune diseases, which provides strong theoretical support and new therapeutic targets for the prediction and treatment of autoimmune diseases. This study focuses on the prediction of several T cell-mediated autoimmune diseases and proposes two models: the AutoY model, based on convolutional neural networks, and the LSTMY model, a bidirectional LSTM network that integrates an attention mechanism. Experimental results show that both models perform well in predicting the four autoimmune diseases, with the AutoY model performing slightly better. In particular, the average area under the ROC curve (AUC) of the AutoY model exceeded 0.93 for all of the diseases, and the AUC reached 0.99 for two diseases, type 1 diabetes and multiple sclerosis. These results demonstrate the high accuracy, stability, and good generalization ability of the two models, making them promising tools for autoimmune disease prediction and supporting the use of the TCR bank for the non-invasive detection of autoimmune diseases.
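A minimal sketch of a bidirectional LSTM with additive attention over encoded TCR CDR3 sequences, in the spirit of the LSTMY model described above, is shown below; the tokenization, layer sizes, and example sequences are assumptions.

```python
# Bidirectional LSTM with attention over amino-acid-encoded TCR CDR3 sequences;
# architecture and hyperparameters are illustrative, not the study's models.
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"
aa_to_idx = {a: i + 1 for i, a in enumerate(AA)}        # 0 reserved for padding

class BiLSTMAttention(nn.Module):
    def __init__(self, emb=32, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(len(AA) + 1, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))             # (B, L, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)           # attention over positions
        context = (w * h).sum(dim=1)                     # weighted sequence summary
        return self.out(context)

def encode(seqs, max_len=20):
    batch = torch.zeros(len(seqs), max_len, dtype=torch.long)
    for i, s in enumerate(seqs):
        idx = [aa_to_idx[a] for a in s[:max_len]]
        batch[i, :len(idx)] = torch.tensor(idx)
    return batch

model = BiLSTMAttention()
logits = model(encode(["CASSLGTDTQYF", "CASSPGQGNYEQYF"]))   # example CDR3 sequences
print(logits.shape)                                           # torch.Size([2, 2])
```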
PMID:39920178 | DOI:10.1038/s41598-025-88477-4
A deep learning approach for automatic 3D segmentation of hip cartilage and labrum from direct hip MR arthrography
Sci Rep. 2025 Feb 7;15(1):4662. doi: 10.1038/s41598-025-86727-z.
ABSTRACT
The objective was to use convolutional neural networks (CNNs) for automatic segmentation of hip cartilage and labrum based on 3D MRI. In this retrospective single-center study, CNNs with a U-Net architecture were used to develop a fully automated segmentation model for hip cartilage and labrum from MRI. Direct hip MR arthrographies (01/2020-10/2021) were selected from 100 symptomatic patients. The institutional routine protocol included a 3D T1 mapping sequence, which was used for manual segmentation of hip cartilage and labrum. 80 hips were used for training and the remaining 20 for testing. Model performance was assessed with six evaluation metrics, including the Dice similarity coefficient (DSC). In addition, model performance was tested on an external dataset (40 patients) with a 3D T2-weighted sequence from a different institution. Inter-rater agreement of manual segmentation served as the benchmark for automatic segmentation performance. 100 patients were included (mean age 30 ± 10 years; 64% female). The mean DSC for cartilage was 0.92 ± 0.02 (95% confidence interval [CI] 0.92-0.93) and 0.83 ± 0.04 (0.81-0.85) for the labrum, comparable (p = 0.232 and 0.297, respectively) to the inter-rater agreement of manual segmentation: DSC cartilage 0.93 ± 0.04 (0.92-0.95); DSC labrum 0.82 ± 0.05 (0.80-0.85). When tested on the external dataset, the DSC was 0.89 ± 0.02 (0.88-0.90) and 0.71 ± 0.04 (0.69-0.73) for cartilage and labrum, respectively. The presented deep learning approach accurately segments hip cartilage and labrum from 3D MRI sequences and can potentially be used in clinical practice to provide rapid and accurate 3D MRI models.
PMID:39920175 | DOI:10.1038/s41598-025-86727-z
Modeling pegcetacoplan treatment effect for atrophic age-related macular degeneration with AI-based progression prediction
Int J Retina Vitreous. 2025 Feb 7;11(1):14. doi: 10.1186/s40942-025-00634-z.
ABSTRACT
BACKGROUND: To illustrate the treatment effect of Pegcetacoplan for atrophy secondary to age-related macular degeneration (AMD), on an individualized topographic progression prediction basis, using a deep learning model.
METHODS: Patients (N = 99) with atrophy secondary to AMD and longitudinal optical coherence tomography (OCT) data were retrospectively analyzed. We used a previously published deep-learning-based atrophy progression prediction algorithm to predict 2-year atrophy progression, including the topographic likelihood of future retinal pigment epithelial and outer retinal atrophy (RORA), from the baseline OCT input. The algorithm output was a stepless, individualized topographic model of RORA growth, allowing the progression line corresponding to 80% growth to be illustrated relative to the natural course of 100% growth.
RESULTS: The treatment effect of Pegcetacoplan was illustrated as the line when 80% of the growth is reached in this continuous model. Besides the well-known variability of atrophy growth rate, our results showed unequal growth according to the fundus location. It became evident that this difference is of potential functional interest for patient outcomes.
CONCLUSIONS: This model based on an 80% growth of RORA after two years illustrates the variable effect of treatment with Pegcetacoplan according to the individual situation, supporting personalized medical care.
PMID:39920843 | DOI:10.1186/s40942-025-00634-z