Deep learning

Leveraging multi-modal feature learning for predictions of antibody viscosity

Fri, 2025-04-11 06:00

MAbs. 2025 Dec;17(1):2490788. doi: 10.1080/19420862.2025.2490788. Epub 2025 Apr 11.

ABSTRACT

The shift toward subcutaneous administration of biologic therapeutics has gained momentum due to its patient-friendly nature, convenience, reduced healthcare burden, and improved compliance compared to traditional intravenous infusions. However, a significant challenge in this transition is managing the viscosity of the administered solutions. High viscosity poses substantial development and manufacturability challenges, and directly affects patients by increasing injection time and causing pain at the injection site. Furthermore, high-viscosity formulations can prolong residence time at the injection site, affecting absorption kinetics and potentially altering the intended pharmacological profile and therapeutic efficacy of the biologic candidate. Here, we report the application of a multi-modal feature-learning workflow for predicting antibody viscosity in therapeutics discovery. It integrates multiple data sources, including sequence, structural, and physicochemical properties, as well as embeddings from a language model. This approach enables the model to learn from various underlying rules, such as physicochemical rules derived from molecular simulations and protein evolutionary patterns captured by large, pre-trained deep learning models. By comparing the effectiveness of this approach against selected published viscosity prediction methods, this study provides insights into their intrinsic viscosity-prediction potential and usability in early-stage therapeutic antibody development pipelines.

PMID:40214197 | DOI:10.1080/19420862.2025.2490788

Categories: Literature Watch

Deep learning-based target spraying control of weeds in wheat fields at tillering stage

Fri, 2025-04-11 06:00

Front Plant Sci. 2025 Mar 27;16:1540722. doi: 10.3389/fpls.2025.1540722. eCollection 2025.

ABSTRACT

In this study, a target spraying decision and hysteresis algorithm is designed in conjunction with deep learning and deployed on a testbed for validation. The overall scheme of the target spraying control system is first proposed. Then, YOLOv5s is made lightweight and improved. On this basis, a target spraying decision and hysteresis algorithm is designed so that the system can precisely control the solenoid valve and differentiate spraying according to the distribution of weeds in different areas, while also resolving the operational hysteresis between hardware components. Finally, the algorithm was deployed on a testbed, and simulated weeds and simulated tillering wheat were selected for bench experiments. Experiments on a dataset of realistic scenarios show that the improved model reduces GFLOPs (computational complexity) and model size by 52.2% and 42.4%, respectively, while achieving mAP and F1 scores of 91.4% and 85.3%, improvements of 0.2% and 0.8% over the original model. Bench experiments showed that the spraying rate in the speed intervals of 0.3-0.4 m/s, 0.4-0.5 m/s, and 0.5-0.6 m/s reached 99.8%, 98.2%, and 95.7%, respectively. The algorithm can therefore provide excellent spraying accuracy for the target spraying system, laying a theoretical foundation for the practical application of target spraying.
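The hysteresis problem the abstract describes amounts to timing the valve so spray lands where the weed was detected despite hardware lag. The paper's actual algorithm is not given in the abstract; the following is a minimal illustrative sketch, with all names, offsets, and lag values assumed for illustration.

```python
# Illustrative sketch only (not the paper's algorithm): schedule a solenoid
# valve from a detection made ahead of the nozzle, compensating hardware lag.

def valve_delay(distance_m: float, speed_m_s: float, valve_lag_s: float) -> float:
    """Seconds to wait after detection before energizing the valve.

    distance_m : assumed camera-to-nozzle offset along the travel direction
    speed_m_s  : forward travel speed (e.g. 0.3-0.6 m/s as in the bench tests)
    valve_lag_s: assumed mechanical/electrical lag of the solenoid valve
    """
    travel_time = distance_m / speed_m_s
    # Trigger early by the hardware lag so the spray lands on target.
    return max(0.0, travel_time - valve_lag_s)
```

For example, at 0.5 m/s with a 0.3 m camera-to-nozzle offset and 100 ms valve lag, the controller would wait 0.5 s after detection.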

PMID:40212873 | PMC:PMC11983634 | DOI:10.3389/fpls.2025.1540722

Categories: Literature Watch

A Multiscale Deep-Learning Model for Atom Identification from Low-Signal-to-Noise-Ratio Transmission Electron Microscopy Images

Fri, 2025-04-11 06:00

Small Sci. 2023 Jun 11;3(8):2300031. doi: 10.1002/smsc.202300031. eCollection 2023 Aug.

ABSTRACT

Recent advancements in transmission electron microscopy (TEM) have enabled the study of atomic structures of materials at unprecedented scales, down to tens of picometers (pm). However, accurately detecting atomic positions in TEM images remains challenging. Traditional Gaussian-fitting and peak-finding algorithms are effective under ideal conditions but perform poorly on images with strong background noise or contamination (appearing as ultrabright or ultradark contrast). Moreover, these traditional algorithms require parameter tuning for different magnifications. To overcome these challenges, AtomID-Net is presented, a deep neural network model for atom detection in multiscale, low-signal-to-noise-ratio (SNR) experimental scanning transmission electron microscopy (STEM) images. The model is trained on real images, which allows robust and efficient detection of atomic positions even in the presence of background noise and contamination. Evaluation on a test set of 50 images at a resolution of 800 × 800 pixels yields an average F1-score of 0.964, a significant improvement over existing peak-finding algorithms.
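For context, the "traditional peak-finding" baseline that AtomID-Net is compared against can be as simple as a thresholded local-maximum search; the intensity threshold below is exactly the kind of magnification-dependent parameter the abstract says such methods require. This is a generic sketch, not code from the paper.

```python
import numpy as np

def find_peaks_2d(img, threshold):
    """Return (row, col) positions of interior pixels that are the unique
    maximum of their 3x3 neighbourhood and exceed an intensity threshold."""
    peaks = []
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            v = img[r, c]
            if v <= threshold:
                continue
            patch = img[r - 1:r + 2, c - 1:c + 2]
            # Unique maximum in the neighbourhood -> a candidate atom column.
            if v >= patch.max() and (patch == v).sum() == 1:
                peaks.append((r, c))
    return peaks
```

On noisy or contaminated images, spurious local maxima pass this test, which is the failure mode motivating the learned detector.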

PMID:40213610 | PMC:PMC11935788 | DOI:10.1002/smsc.202300031

Categories: Literature Watch

Multiscale Computational and Artificial Intelligence Models of Linear and Nonlinear Composites: A Review

Fri, 2025-04-11 06:00

Small Sci. 2024 Mar 19;4(5):2300185. doi: 10.1002/smsc.202300185. eCollection 2024 May.

ABSTRACT

Herein, state-of-the-art multiscale modeling methods are described. This review covers notable molecular-, micro-, meso-, and macroscale models for hard (polymer, metal, yarn, fiber, fiber-reinforced polymer, and polymer matrix composites) and soft (biological tissues such as brain white matter [BWM]) composite materials. These numerical models range from molecular dynamics simulations to finite-element (FE) analyses and machine learning/deep learning surrogate models. Constitutive material models are summarized, including viscoelastic and hyperelastic models and emerging ones such as fractional viscoelastic models. Key challenges are outlined alongside state-of-the-art models: meshing, uncertainty driven by data variability and material nonlinearity, limited availability of computational resources, model fidelity, and repeatability. The latest advancements in FE modeling, involving meshless methods, hybrid ML-FE models, and linear and nonlinear constitutive material models, aim to give readers a clear outlook on future trends in composite multiscale modeling research and development. The data-driven models presented here span varying length and time scales and were developed using advanced mathematical and numerical methods together with large volumes of experimental data. An in-depth discussion of data-driven models provides researchers with the tools needed to build real-time composite structure monitoring and lifecycle prediction models.

PMID:40213577 | PMC:PMC11935080 | DOI:10.1002/smsc.202300185

Categories: Literature Watch

Deep Learning-Based Classification of Histone-DNA Interactions Using Drying Droplet Patterns

Fri, 2025-04-11 06:00

Small Sci. 2024 Aug 10;4(11):2400252. doi: 10.1002/smsc.202400252. eCollection 2024 Nov.

ABSTRACT

Developing scalable and accurate predictive analytical methods for the classification of protein-DNA binding is critical for advancing our understanding of molecular biology, disease mechanisms, and a wide spectrum of biotechnological and medical applications. It is discovered that histone-DNA interactions can be stratified based on stain patterns created by the deposition of various nucleoprotein solutions onto a substrate. In this study, a deep-learning neural network is applied to categorize polarized light microscopy images of drying droplet deposits originating from different histone-DNA mixtures. These DNA stain patterns featured high reproducibility across different species and thus enabled comprehensive DNA categorization (100% accuracy) and accurate prediction of their respective binding affinities to histones. Eukaryotic DNA, which has a higher binding affinity to mammalian histones than prokaryotic DNA, is associated with a higher overall prediction accuracy. For a given species, the average prediction accuracy increased with DNA size. To demonstrate generalizability, a pre-trained CNN is challenged with unknown images that originated from DNA samples of species not included in the training set. The CNN classified these unknown histone-DNA samples as either strong or medium binders with 84.4% and 96.25% accuracy, respectively.

PMID:40213456 | PMC:PMC11935254 | DOI:10.1002/smsc.202400252

Categories: Literature Watch

A deep learning model for clinical outcome prediction using longitudinal inpatient electronic health records

Fri, 2025-04-11 06:00

JAMIA Open. 2025 Apr 10;8(2):ooaf026. doi: 10.1093/jamiaopen/ooaf026. eCollection 2025 Apr.

ABSTRACT

OBJECTIVES: Recent advances in deep learning show significant potential in analyzing continuous monitoring electronic health records (EHR) data for clinical outcome prediction. We aim to develop a Transformer-based, Encounter-level Clinical Outcome (TECO) model to predict mortality in the intensive care unit (ICU) using inpatient EHR data.

MATERIALS AND METHODS: The TECO model was developed using multiple baseline and time-dependent clinical variables from 2579 hospitalized COVID-19 patients to predict ICU mortality and was validated externally in an acute respiratory distress syndrome cohort (n = 2799) and a sepsis cohort (n = 6622) from the Medical Information Mart for Intensive Care IV (MIMIC-IV). Model performance was evaluated based on the area under the receiver operating characteristic curve (AUC) and compared with the Epic Deterioration Index (EDI), random forest (RF), and extreme gradient boosting (XGBoost).
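The AUC used for all model comparisons here has an exact rank interpretation: it is the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney U statistic). A minimal NumPy sketch, independent of any of the compared models:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2 (Mann-Whitney U, normalized)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Pairwise comparisons between every positive and every negative.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(P*N) and is meant for intuition; production code typically uses a sorted-rank implementation such as scikit-learn's `roc_auc_score`.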

RESULTS: In the COVID-19 development dataset, TECO achieved higher AUC (0.89-0.97) across various time intervals compared to EDI (0.86-0.95), RF (0.87-0.96), and XGBoost (0.88-0.96). In the 2 MIMIC testing datasets (EDI not available), TECO yielded higher AUC (0.65-0.77) than RF (0.59-0.75) and XGBoost (0.59-0.74). In addition, TECO was able to identify clinically interpretable features that were correlated with the outcome.

DISCUSSION: The TECO model outperformed a proprietary metric and conventional machine learning models in predicting ICU mortality among patients with COVID-19, and generalized to external cohorts with acute respiratory distress syndrome and sepsis.

CONCLUSION: The TECO model demonstrates a strong capability for predicting ICU mortality using continuous monitoring data. While further validation is needed, TECO has the potential to serve as a powerful early warning tool across various diseases in inpatient settings.

PMID:40213364 | PMC:PMC11984207 | DOI:10.1093/jamiaopen/ooaf026

Categories: Literature Watch

Transformers in RNA structure prediction: A review

Fri, 2025-04-11 06:00

Comput Struct Biotechnol J. 2025 Mar 17;27:1187-1203. doi: 10.1016/j.csbj.2025.03.021. eCollection 2025.

ABSTRACT

The Transformer is a deep neural network based on the self-attention mechanism, designed to handle sequential data. Given its tremendous advantages in natural language processing, it has gained traction for other applications. As the primary structure of RNA is a sequence of nucleotides, researchers have applied Transformers to predict secondary and tertiary structures from RNA sequences. The number of Transformer-based models in structure prediction tasks is rapidly increasing, as they have performed on par with or better than other deep learning networks, such as Convolutional and Recurrent Neural Networks. This article thoroughly examines Transformer-based RNA structure prediction models. Through an in-depth analysis of the models, we aim to explain how their architectural innovations improve their performances and what they still lack. As Transformer-based techniques for RNA structure prediction continue to evolve, this review serves as both a record of past achievements and a guide for future avenues.
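The self-attention mechanism at the core of all the reviewed models is scaled dot-product attention. A generic single-head sketch in NumPy (not code from any particular reviewed model):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X:
    softmax(Q K^T / sqrt(d_k)) V, with Q = X Wq, K = X Wk, V = X Wv."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise position affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights
```

For an RNA sequence of n nucleotide embeddings, the n x n weight matrix is what lets every position attend to every other position directly, including the long-range base-pairing interactions that convolutional and recurrent networks capture only indirectly.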

PMID:40213272 | PMC:PMC11982051 | DOI:10.1016/j.csbj.2025.03.021

Categories: Literature Watch

Nature's best vs. bruised: A veggie edibility evaluation database

Fri, 2025-04-11 06:00

Data Brief. 2025 Mar 19;60:111483. doi: 10.1016/j.dib.2025.111483. eCollection 2025 Jun.

ABSTRACT

In the realm of evaluating vegetable freshness, automated methods that assess external morphology, texture, and colour have emerged as efficient and cost-effective tools. These methods play a crucial role in sorting high-quality vegetables for both export and local consumption, significantly impacting the revenue of the food industry worldwide. Researchers have recognized the importance of this area, leading to the development of various automated techniques, particularly leveraging advanced deep learning technologies to categorize vegetables into specific classes. However, the effectiveness of these methods heavily relies on the databases used for training and validation, posing a challenge due to the lack of suitable datasets.

PMID:40213046 | PMC:PMC11985062 | DOI:10.1016/j.dib.2025.111483

Categories: Literature Watch

Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-Guided Radiotherapy

Fri, 2025-04-11 06:00

Head Neck Tumor Segm MR Guid Appl (2024). 2025;15273:136-153. doi: 10.1007/978-3-031-83274-1_10. Epub 2025 Mar 3.

ABSTRACT

Radiation therapy (RT) is essential in treating head and neck cancer (HNC), with magnetic resonance imaging (MRI)-guided RT offering superior soft-tissue contrast and functional imaging. However, manual tumor segmentation is time-consuming and complex, and therefore remains a challenge. In this study, we present our solution as team TUMOR to the HNTS-MRG24 MICCAI Challenge, which focuses on automated segmentation of primary gross tumor volumes (GTVp) and metastatic lymph node gross tumor volumes (GTVn) in pre-RT and mid-RT MRI images. We utilized the HNTS-MRG2024 dataset, which consists of 150 MRI scans from patients diagnosed with HNC, including original and registered pre-RT and mid-RT T2-weighted images with corresponding segmentation masks for GTVp and GTVn. We employed two state-of-the-art deep learning models, nnUNet and MedNeXt. For Task 1, we pretrained models on pre-RT registered and mid-RT images, followed by fine-tuning on original pre-RT images. For Task 2, we combined registered pre-RT images, registered pre-RT segmentation masks, and mid-RT data as a multi-channel input for training. Our solution for Task 1 achieved 1st place in the final test phase with an aggregated Dice Similarity Coefficient of 0.8254, and our solution for Task 2 ranked 8th with a score of 0.7005. The proposed solution is publicly available in a GitHub repository.
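The aggregated Dice Similarity Coefficient used for ranking pools intersections and volumes across all cases before dividing, so cases with small or absent tumors do not each contribute an all-or-nothing score. A sketch assuming that standard aggregated-Dice definition (this is not the challenge organizers' evaluation code):

```python
import numpy as np

def dsc_agg(preds, refs):
    """Aggregated Dice over a list of boolean masks:
    2 * sum_i |P_i & G_i|  /  sum_i (|P_i| + |G_i|)."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, refs))
    total = sum(p.sum() + g.sum() for p, g in zip(preds, refs))
    # Convention: score 1.0 if every prediction and reference is empty.
    return 2.0 * inter / total if total else 1.0
```

Contrast this with the mean of per-case Dice scores, where a single empty-ground-truth case scored imperfectly can dominate the average.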

PMID:40213035 | PMC:PMC11982674 | DOI:10.1007/978-3-031-83274-1_10

Categories: Literature Watch

Head and Neck Tumor Segmentation for MRI-Guided Radiation Therapy Using Pre-trained STU-Net Models

Fri, 2025-04-11 06:00

Head Neck Tumor Segm MR Guid Appl (2024). 2025;15273:65-74. doi: 10.1007/978-3-031-83274-1_4. Epub 2025 Mar 3.

ABSTRACT

Accurate segmentation of tumors in MRI-guided radiation therapy (RT) is crucial for effective treatment planning, particularly for complex malignancies such as head and neck cancer (HNC). This study presents a comparative analysis of two state-of-the-art deep learning models, nnU-Net v2 and STU-Net, for automatic tumor segmentation in pre-RT MRI images. While both models are designed for medical image segmentation, STU-Net introduces critical improvements in scalability and transferability, with parameter sizes ranging from 14 million to 1.4 billion. Leveraging large-scale pre-training on datasets such as TotalSegmentator, STU-Net captures complex and variable tumor structures more effectively. We modified the default nnU-Net v2 by adding convolutional layers to both the encoder and decoder, improving its performance on MRI data. In our experiments, STU-Net outperformed nnU-Net v2 in the head and neck tumor segmentation challenge. These findings suggest that integrating advanced models like STU-Net into clinical workflows could markedly enhance the precision of RT planning, potentially improving patient outcomes. The fine-tuned STU-Net-B model submitted for the final evaluation phase of Task 1 achieved a DSCagg-GTVp of 0.76, a DSCagg-GTVn of 0.85, and an overall DSCagg-mean score of 0.81, securing ninth place in the Task 1 rankings. This solution is by team SZTU-SingularMatrix for the Head and Neck Tumor Segmentation for MR-Guided Applications (HNTS-MRG) 2024 challenge. Link to the trained model weights: https://github.com/Duskwang/Weight/releases.

PMID:40213034 | PMC:PMC11983000 | DOI:10.1007/978-3-031-83274-1_4

Categories: Literature Watch

Machine learning and artificial intelligence in type 2 diabetes prediction: a comprehensive 33-year bibliometric and literature analysis

Fri, 2025-04-11 06:00

Front Digit Health. 2025 Mar 27;7:1557467. doi: 10.3389/fdgth.2025.1557467. eCollection 2025.

ABSTRACT

BACKGROUND: Type 2 Diabetes Mellitus (T2DM) remains a critical global health challenge, necessitating robust predictive models to enable early detection and personalized interventions. This study presents a comprehensive bibliometric and systematic review of 33 years (1991-2024) of research on machine learning (ML) and artificial intelligence (AI) applications in T2DM prediction. It highlights the growing complexity of the field and identifies key trends, methodologies, and research gaps.

METHODS: A systematic methodology guided the literature selection process, starting with keyword identification using Term Frequency-Inverse Document Frequency (TF-IDF) and expert input. Based on these refined keywords, literature was systematically selected using PRISMA guidelines, resulting in a dataset of 2,351 articles from Web of Science and Scopus databases. Bibliometric analysis was performed on the entire selected dataset using tools such as VOSviewer and Bibliometrix, enabling thematic clustering, co-citation analysis, and network visualization. To assess the most impactful literature, a dual-criteria methodology combining relevance and impact scores was applied. Articles were qualitatively assessed on their alignment with T2DM prediction using a four-point relevance scale and quantitatively evaluated based on citation metrics normalized within subject, journal, and publication year. Articles scoring above a predefined threshold were selected for detailed review. The selected literature spans four time periods: 1991-2000, 2001-2010, 2011-2020, and 2021-2024.
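The TF-IDF weighting used for keyword identification above is easy to state precisely: a term's weight in a document is its relative frequency, discounted by how many documents contain it. A minimal stdlib sketch of one common variant (real toolkits differ in smoothing and normalization):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF for a list of tokenized documents (lists of terms).
    tf = count / doc length; idf = ln(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))          # count each term once per document
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out
```

Terms that appear in every document (here, the obvious ones like "diabetes") get idf = ln(1) = 0, which is exactly why TF-IDF surfaces the discriminative keywords needed to refine a search query.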

RESULTS: The bibliometric findings reveal exponential growth in publications since 2010, with the USA and UK leading contributions, followed by emerging players like Singapore and India. Key thematic clusters include foundational ML techniques, epidemiological forecasting, predictive modelling, and clinical applications. Ensemble methods (e.g., Random Forest, Gradient Boosting) and deep learning models (e.g., Convolutional Neural Networks) dominate recent advancements. The literature analysis reveals that early studies primarily used demographic and clinical variables, while recent efforts integrate genetic, lifestyle, and environmental predictors. It also highlights advances in integrating real-world datasets, emerging trends like federated learning, and explainability tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations).

CONCLUSION: Future work should address gaps in generalizability, interdisciplinary T2DM prediction research, and psychosocial integration, while also focusing on clinically actionable solutions and real-world applicability to combat the growing diabetes epidemic effectively.

PMID:40212895 | PMC:PMC11983615 | DOI:10.3389/fdgth.2025.1557467

Categories: Literature Watch

Plant stem and leaf segmentation and phenotypic parameter extraction using neural radiance fields and lightweight point cloud segmentation networks

Fri, 2025-04-11 06:00

Front Plant Sci. 2025 Mar 27;16:1491170. doi: 10.3389/fpls.2025.1491170. eCollection 2025.

ABSTRACT

High-quality 3D reconstruction and accurate 3D organ segmentation of plants are crucial prerequisites for automatically extracting phenotypic traits. In this study, we first extract a dense point cloud from the implicit representation obtained by reconstructing maize plants in 3D with the Nerfacto neural radiance field model. Second, we propose a lightweight point cloud segmentation network (PointSegNet) specifically for stem and leaf segmentation. This network includes a Global-Local Set Abstraction (GLSA) module to integrate local and global features and an Edge-Aware Feature Propagation (EAFP) module to enhance edge awareness. Experimental results show that PointSegNet achieves impressive performance compared to five other state-of-the-art deep learning networks, reaching 93.73%, 97.25%, 96.21%, and 96.73% in terms of mean Intersection over Union (mIoU), precision, recall, and F1-score, respectively. Even on tomato and soybean plants, which have more complex structures, PointSegNet achieves the best metrics. We further use principal component analysis (PCA) principal vectors to obtain parameters such as leaf length and leaf width. Finally, the maize stem thickness, stem height, leaf length, and leaf width obtained from our measurements are compared with manual measurements, yielding R² values of 0.99, 0.84, 0.94, and 0.87, respectively. These results indicate that our method has high accuracy and reliability for phenotypic parameter extraction. This study, spanning the entire process from 3D reconstruction of maize plants through point cloud segmentation to phenotypic parameter extraction, provides a reliable and objective method for acquiring plant phenotypic parameters and will advance plant phenotyping in smart agriculture.
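The PCA step above, measuring leaf length and width along principal vectors, can be sketched generically: project the segmented leaf's point cloud onto its principal axes and take the extent along each. This is an illustrative sketch of the standard technique, not the authors' implementation.

```python
import numpy as np

def pca_extents(points):
    """Extent (max - min) of a point cloud along each PCA principal axis.
    For a roughly planar leaf, the first two extents approximate
    leaf length and leaf width."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Rows of vt are the principal axes, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T
    return proj.max(axis=0) - proj.min(axis=0)
```

Because the axes are ordered by variance, the longest dimension (length) always comes first, regardless of the leaf's orientation in the scanner frame.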

PMID:40212877 | PMC:PMC11983422 | DOI:10.3389/fpls.2025.1491170

Categories: Literature Watch

Pre-trained molecular representations enable antimicrobial discovery

Thu, 2025-04-10 06:00

Nat Commun. 2025 Apr 10;16(1):3420. doi: 10.1038/s41467-025-58804-4.

ABSTRACT

The rise in antimicrobial resistance poses a worldwide threat, reducing the efficacy of common antibiotics. Determining the antimicrobial activity of new chemical compounds through experimental methods remains time-consuming and costly. While compound-centric deep learning models promise to accelerate this search and prioritization process, current strategies require large amounts of custom training data. Here, we introduce a lightweight computational strategy for antimicrobial discovery that builds on MolE (Molecular representation through redundancy reduced Embedding), a self-supervised deep learning framework that leverages unlabeled chemical structures to learn task-independent molecular representations. By combining MolE representation learning with available, experimentally validated compound-bacteria activity data, we design a general predictive model that enables assessing compounds with respect to their antimicrobial potential. Our model correctly identifies recent growth-inhibitory compounds that are structurally distinct from current antibiotics. Using this approach, we discover de novo, and experimentally confirm, three human-targeted drugs as growth inhibitors of Staphylococcus aureus. This framework offers a viable, cost-effective strategy to accelerate antibiotic discovery.

PMID:40210659 | DOI:10.1038/s41467-025-58804-4

Categories: Literature Watch

Transforming pulmonary healthcare: the role of artificial intelligence in diagnosis and treatment

Thu, 2025-04-10 06:00

Expert Rev Respir Med. 2025 Apr 10. doi: 10.1080/17476348.2025.2491723. Online ahead of print.

ABSTRACT

INTRODUCTION: Respiratory diseases like pneumonia, asthma, and COPD are major global health concerns, significantly impacting morbidity and mortality rates worldwide.

AREAS COVERED: A selective search on PubMed, Google Scholar, and ScienceDirect (up to 2024) focused on AI in diagnosing and treating respiratory conditions like asthma, pneumonia, and COPD. Studies were chosen for their relevance to prediction models, AI-driven diagnostics, and personalized treatments. This narrative review highlights technological advancements, clinical applications, and challenges in integrating AI into standard practice, with emphasis on predictive tools, deep learning for imaging, and patient outcomes.

EXPERT OPINION: Despite these advancements, significant challenges remain in fully integrating AI into pulmonary healthcare. The need for large, diverse datasets to train AI models is critical, and concerns around data privacy, algorithmic transparency, and potential biases must be carefully managed. Regulatory frameworks also need to evolve to address the unique challenges posed by AI in healthcare. However, with continued research and collaboration between technology developers, clinicians, and policymakers, AI has the potential to revolutionize pulmonary healthcare, ultimately leading to more effective, efficient, and personalized care for patients.

PMID:40210489 | DOI:10.1080/17476348.2025.2491723

Categories: Literature Watch

Integrating artificial intelligence with endoscopic ultrasound in the early detection of bilio-pancreatic lesions: Current advances and future prospects

Thu, 2025-04-10 06:00

Best Pract Res Clin Gastroenterol. 2025 Feb;74:101975. doi: 10.1016/j.bpg.2025.101975. Epub 2025 Jan 4.

ABSTRACT

The integration of Artificial Intelligence (AI) in endoscopic ultrasound (EUS) represents a transformative advancement in the early detection and management of biliopancreatic lesions. This review highlights the current state of AI-enhanced EUS (AI-EUS) for diagnosing solid and cystic pancreatic lesions, as well as biliary diseases. AI-driven models, including machine learning (ML) and deep learning (DL), have shown significant improvements in diagnostic accuracy, particularly in distinguishing pancreatic ductal adenocarcinoma (PDAC) from benign conditions and in the characterization of pancreatic cystic neoplasms. Advanced algorithms, such as convolutional neural networks (CNNs), enable precise image analysis, real-time lesion classification, and integration with clinical and genomic data for personalized care. In biliary diseases, AI-assisted systems enhance bile duct visualization and streamline diagnostic workflows, minimizing operator dependency. Emerging applications, such as AI-guided EUS fine-needle aspiration (FNA) and biopsy (FNB), improve diagnostic yields while reducing errors. Despite these advancements, challenges remain, including data standardization, model interpretability, and ethical concerns regarding data privacy. Future developments aim to integrate multimodal imaging, real-time procedural support, and predictive analytics to further refine the diagnostic and therapeutic potential of AI-EUS. AI-driven innovation in EUS stands poised to revolutionize pancreatico-biliary diagnostics, facilitating earlier detection, enhancing precision, and paving the way for personalized medicine in gastrointestinal oncology and beyond.

PMID:40210329 | DOI:10.1016/j.bpg.2025.101975

Categories: Literature Watch

EcoTaskSched: a hybrid machine learning approach for energy-efficient task scheduling in IoT-based fog-cloud environments

Thu, 2025-04-10 06:00

Sci Rep. 2025 Apr 10;15(1):12296. doi: 10.1038/s41598-025-96974-9.

ABSTRACT

The widespread adoption of cloud services has posed several challenges, primarily revolving around energy and resource efficiency. Integrating cloud and fog resources can help address these challenges by improving fog-cloud computing environments. Nevertheless, the search for optimal task allocation and energy management in such environments continues. Existing studies have introduced notable solutions; however, efficiently utilizing heterogeneous cloud resources and achieving energy-efficient task scheduling in fog-cloud-of-things environments remains challenging. To tackle these challenges, we propose a novel ML-based EcoTaskSched model, which leverages deep learning for energy-efficient task scheduling in fog-cloud networks. The proposed hybrid model integrates Convolutional Neural Networks (CNNs) with Bidirectional Long Short-Term Memory (BiLSTM) to enhance energy-efficient schedulability and reduce energy usage while ensuring QoS provisioning. The CNN efficiently extracts workload features from tasks and resources, while the BiLSTM captures complex sequential information, predicting optimal task placement sequences. A real fog-cloud environment is implemented using the COSCO framework for the simulation setup, together with four physical nodes from the Azure B2s plan, to test the proposed model. The DeFog benchmark is used to generate task workloads, and data were collected for both normal and intense workload scenarios. During preprocessing, the data were normalized, subjected to feature engineering and augmentation, and then split into training and test sets. In the performance evaluation, the EcoTaskSched model demonstrated superiority by significantly reducing energy consumption and improving job completion rates compared to baseline models, maintaining a high job completion rate of 85% and outperforming GGCN and BiGGCN. It also achieved lower average response times, lower SLA violation rates, higher throughput, and lower execution cost than the other baselines. In its optimal configuration, the EcoTaskSched model can be applied to fog-cloud computing environments, increasing task-handling efficiency and reducing energy consumption while maintaining the required QoS parameters. Future studies will focus on long-term testing of the EcoTaskSched model in real-world IoT environments and on integrating other ML models, which could provide further insights for optimizing scheduling algorithms across diverse fog-cloud settings.
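To make the task-placement objective concrete: a non-learned baseline for the same problem is a greedy scheduler that puts each task on the feasible node with the lowest estimated energy cost. This sketch is NOT the EcoTaskSched model (which learns placements with a CNN-BiLSTM); the node attributes and linear energy model are made-up assumptions for illustration.

```python
# Illustrative greedy energy-aware placement baseline (not EcoTaskSched).

def greedy_place(tasks, nodes):
    """tasks: list of (task_id, load); nodes: dict name -> state dict with
    'capacity', 'power_per_load' (assumed linear energy model), 'used'.
    Returns a {task_id: node_name} placement; mutates nodes' 'used'."""
    placement = {}
    for tid, load in sorted(tasks, key=lambda t: -t[1]):  # largest tasks first
        feasible = [n for n, s in nodes.items() if s["used"] + load <= s["capacity"]]
        if not feasible:
            raise RuntimeError(f"no capacity for task {tid}")
        # Pick the feasible node with the lowest incremental energy estimate.
        best = min(feasible, key=lambda n: nodes[n]["power_per_load"] * load)
        nodes[best]["used"] += load
        placement[tid] = best
    return placement
```

A learned scheduler improves on this kind of myopic rule by anticipating future arrivals and sequential workload structure, which is what the BiLSTM component is described as capturing.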

PMID:40211053 | DOI:10.1038/s41598-025-96974-9

Categories: Literature Watch

Enhancing neurological disease diagnostics: fusion of deep transfer learning with optimization algorithm for acute brain stroke prediction using facial images

Thu, 2025-04-10 06:00

Sci Rep. 2025 Apr 10;15(1):12334. doi: 10.1038/s41598-025-97034-y.

ABSTRACT

Stroke is a major threat to life and health in modern society, particularly among the aging population. Also known as a cerebrovascular accident, stroke is a neurological condition that can result from haemorrhage or ischemia of the cerebral blood vessels, and it commonly leads to assorted motor and cognitive impairments that compromise daily functionality. Screening for stroke comprises physical examination, history taking, and assessment of risk factors such as age or certain cardiovascular illnesses. Signs and symptoms of stroke include facial weakness. Although computed tomography (CT) and magnetic resonance imaging (MRI) are the standard diagnostic techniques, artificial intelligence (AI) systems have been built on these methods to deliver fast detection. AI is attracting considerable attention and is being integrated into numerous areas of medicine to enhance diagnostic accuracy and the quality of patient care. This paper proposes an enhancing neurological disease diagnostics fusion of transfer learning for acute brain stroke prediction using facial images (ENDDFTL-ABSPFI) method. The proposed ENDDFTL-ABSPFI method aims to enhance brain stroke detection and classification using facial imaging. Initially, the image pre-processing stage applies a fuzzy-based median filter (FMF) model to eliminate noise in the input image data. Fusion models such as Inception-V3 and EfficientNet-B0 then perform feature extraction. Moreover, a hybrid convolutional neural network and bidirectional long short-term memory (CNN-BiLSTM) model is employed for the brain stroke classification process. Finally, a multi-objective sailfish optimization (MOSFO)-based hyperparameter selection process is carried out to optimize the classification outcomes of the CNN-BiLSTM model. The simulation validation of the ENDDFTL-ABSPFI technique is investigated on the Kaggle dataset with respect to various measures. In comparative evaluation, the ENDDFTL-ABSPFI technique achieved a superior accuracy of 98.60% over existing methods.
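The pre-processing stage above relies on median-style denoising. A minimal NumPy sketch of a plain (non-fuzzy) median filter is shown below, assuming a 2-D grayscale array; the paper's fuzzy-based variant additionally weights neighborhood pixels via membership functions, which is omitted here.

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter (edge-padded) to a 2-D image array."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Median over the k x k neighborhood centered on (i, j).
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-and-pepper spike on a flat image is removed entirely.
img = np.full((8, 8), 100.0)
img[2, 2] = 255.0  # noise spike
den = median_filter(img)
```

With a 3x3 window, the lone 255-valued spike is outvoted by its eight 100-valued neighbors, so the median restores the flat background.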

PMID:40210979 | DOI:10.1038/s41598-025-97034-y

Categories: Literature Watch

Restricted Boltzmann machine with Sobel filter dense adversarial noise secured layer framework for flower species recognition

Thu, 2025-04-10 06:00

Sci Rep. 2025 Apr 10;15(1):12315. doi: 10.1038/s41598-025-95564-z.

ABSTRACT

Recognition is a high-level computer vision task that primarily involves categorizing objects by identifying and evaluating their key distinguishing characteristics. Categorization is important in botany because it makes the relationships between various flower species easier to organize and comprehend. Since there is a great deal of variability among flower species, and some species may resemble one another, classifying flowers can be difficult. An appropriate classification technique based on deep learning is therefore vital to categorize flower species effectively. This motivates the proposed Sobel Restricted Boltzmann VGG19 (SRB-VGG19) model, which is highly effective at classifying flower species and is inspired by the VGG19 architecture. This research makes three primary contributions. The first deals with dataset preparation via feature extraction, using the Sobel filter and the Restricted Boltzmann Machine (RBM) neural network approach through unsupervised learning. The second focuses on improving the VGG19 and DenseNet models for supervised learning, which are used to classify flowers into five species groups. The third addresses data-poisoning attacks that apply the Fast Gradient Sign Method (FGSM) to the input samples; the FGSM attack was countered by adding an Adversarial Noise Layer to the dense block. The Flowers Recognition KAGGLE dataset was preprocessed to extract only the important features using the Sobel filter, which computes the image intensity gradient at every pixel. The Sobel-filtered images were then passed to the RBM to generate RBM Component Vectorized Flower images (RBMCV), which were divided into 3400 training and 850 testing images. To determine the best CNN, existing CNN models were fitted to the training images.
Experimental results show that VGG19 and DenseNet can classify floral species with an accuracy above 80%, so both were fine-tuned to design the proposed SRB-VGG19 model. The novelty of this research lies in designing two sub-models, the SRB-VGG FCL model and the SRB-VGG Dense model, and validating the model's security countermeasure through an FGSM attack. The proposed SRB-VGG19 begins by forming the RBMCV input images, which include only the essential flower edges. The RBMCV flower images are trained with the SRB-VGG FCL and SRB-VGG Dense models, and a performance analysis was conducted. Compared to current deep learning models, the implementation results show that the proposed SRB-VGG19 Dense model classifies flower species with a high accuracy of 98.65%.
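The Sobel step described above can be sketched in plain NumPy. The 3x3 kernels and the step-edge example below are the standard textbook construction, not details taken from the paper:

```python
import numpy as np

# Standard 3x3 Sobel kernels: KX responds to horizontal gradients, KY to vertical.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude at each interior pixel via the 3x3 Sobel kernels."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(KX * patch)
            gy = np.sum(KY * patch)
            out[i, j] = np.hypot(gx, gy)
    return out

# A vertical step edge produces strong responses along the edge column only.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

Flat regions yield zero magnitude, so only edge pixels survive, which is why this filter is a natural front end for edge-based feature extraction.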

PMID:40210949 | DOI:10.1038/s41598-025-95564-z

Categories: Literature Watch

Bearing fault diagnosis based on efficient cross space multiscale CNN transformer parallelism

Thu, 2025-04-10 06:00

Sci Rep. 2025 Apr 10;15(1):12344. doi: 10.1038/s41598-025-95895-x.

ABSTRACT

Fault diagnosis of wind turbine bearings is crucial for ensuring operational safety and reliability. However, traditional serial-structured deep learning models often fail to simultaneously extract spatio-temporal features from fault signals in noisy environments, leading to critical information loss. To address this limitation, this paper proposes a Wind Turbine Bearing Fault Diagnosis Model Based on Efficient Cross Space Multiscale CNN Transformer Parallelism (ECMCTP). The model first transforms one-dimensional vibration signals into two-dimensional time-frequency images using Continuous Wavelet Transform (CWT). Subsequently, parallel branches are employed to extract spatio-temporal features: the Convolutional Neural Network (CNN) branch integrates a multiscale feature extraction module, a Reversed Residual Structure (RRS), and an Efficient Multiscale Attention (EMA) mechanism to enhance local and global feature extraction capabilities; the Transformer branch combines Bidirectional Gated Recurrent Units (BiGRU) and Transformer to capture both local temporal dynamics and long-term dependencies. Finally, the features from both branches are concatenated along the channel dimension and classified using a softmax classifier. Experimental results on two publicly available bearing datasets demonstrate that the proposed model achieves 100% accuracy under noise-free conditions and maintains superior noise robustness under low signal-to-noise ratio (SNR) conditions, showcasing excellent robustness and generalization capabilities.
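The core structural idea above, running two branches in parallel and concatenating their features along the channel dimension, can be illustrated with a toy NumPy sketch. The branch bodies below are deliberately simple stand-ins (a channel mean and a scaled dot-product attention pass), not the paper's multiscale CNN, RRS/EMA, or BiGRU-Transformer modules:

```python
import numpy as np

def cnn_branch(x):
    """Stand-in for the CNN branch: a crude local/spatial summary feature."""
    return x.mean(axis=-1, keepdims=True)

def transformer_branch(x):
    """Stand-in for the Transformer branch: one scaled dot-product attention pass."""
    scores = x @ x.T / np.sqrt(x.shape[-1])          # token-to-token similarity
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ x                               # attention-weighted mix

x = np.random.rand(4, 8)  # 4 tokens (e.g. time-frequency patches), 8 channels
# Channel-dimension fusion: CNN features (4, 1) + Transformer features (4, 8).
fused = np.concatenate([cnn_branch(x), transformer_branch(x)], axis=-1)
```

The point of the parallel layout is that neither branch gates the other: each sees the raw input, and the classifier sees both feature sets side by side.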

PMID:40210923 | DOI:10.1038/s41598-025-95895-x

Categories: Literature Watch

DeepATsers: a deep learning framework for one-pot SERS biosensor to detect SARS-CoV-2 virus

Thu, 2025-04-10 06:00

Sci Rep. 2025 Apr 10;15(1):12245. doi: 10.1038/s41598-025-96557-8.

ABSTRACT

The integration of Artificial Intelligence (AI) techniques with medical kits has revolutionized disease diagnosis, enabling rapid and accurate identification of various conditions. We developed a novel deep learning model, DeepATsers, based on a combination of a CNN and a GAN, to employ a one-pot SERS biosensor for rapid detection of COVID-19 infection. The model accurately identifies each SARS-CoV-2 protein (S protein, N protein, VLP protein, Streptavidin protein, and blank signal) from the experimental fingerprint-like spectral data introduced in this study. Several augmentation techniques, such as EMSA, Gaussian noise, GAN, and K-fold cross-validation, as well as their combinations, were utilized to generalize the SERS spectral dataset and prevent model overfitting. The original experimental dataset of 126 spectra was augmented to 780 spectra that resembled the original set by using a GAN, with a low KL divergence value of 0.02. This significantly improved the average accuracy of protein classification from 0.6000 to 0.9750. The deep learning model used optimal hyperparameters and outperformed supervised machine learning methods such as RF, GBM, SVM, and KNN on most measurements, both with and without augmented spectral datasets. For model training, both the whole range of spectral wavenumbers ([Formula: see text] to [Formula: see text]) and only the wavenumbers ([Formula: see text] and [Formula: see text]) of the fingerprint peak spectra were employed. The former led to highly accurate predictions of 0.9750, compared to 0.4318 for the latter. Finally, independent experimental spectra of the SARS-CoV-2 Omicron variant were used for model verification. Thus, DeepATsers can be considered a robust, generalized, and generative deep learning framework for 1D SERS spectral datasets of SARS-CoV-2.
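Of the augmentation techniques listed, additive Gaussian noise is the simplest to sketch. The copy count, bin count, and noise level below are illustrative placeholders, not the paper's settings (the 126-to-780 expansion reported above came from the GAN, not from this method):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_spectra(spectra, copies=5, sigma=0.01):
    """Stack the original 1-D spectra with `copies` Gaussian-noised versions."""
    out = [spectra]
    for _ in range(copies):
        out.append(spectra + rng.normal(0.0, sigma, size=spectra.shape))
    return np.concatenate(out, axis=0)

base = rng.random((126, 512))           # 126 spectra, 512 wavenumber bins (illustrative)
aug = augment_spectra(base, copies=5)   # 126 originals + 5 noisy copies each
```

Keeping the clean originals in the augmented set, as done here, lets the classifier see both the measured and the perturbed versions of every spectrum.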

PMID:40210912 | DOI:10.1038/s41598-025-96557-8

Categories: Literature Watch
