Deep learning

Hybrid AI Framework for the Early Detection of Heart Failure: Integrating Traditional Machine Learning and Generative Language Models With Clinical Data

Thu, 2025-07-10 06:00

Cureus. 2025 Jun 9;17(6):e85638. doi: 10.7759/cureus.85638. eCollection 2025 Jun.

ABSTRACT

Cardiovascular disease (CVD) remains the leading cause of mortality globally, necessitating innovative approaches for early detection and risk stratification. This study introduces a hybrid artificial intelligence (AI) model that synergistically combines Convolutional Neural Networks (CNNs) and Large Language Models (LLMs) to enhance the accuracy of heart failure (HF) prediction. The CNN component effectively captures spatial patterns from structured clinical data, while the LLM component interprets complex, unstructured information, enabling a comprehensive analysis of patient health records. Our hybrid model achieved a superior accuracy of 95.1%, outperforming standalone models and demonstrating high precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC) metrics. Key predictive features identified (risk factors, symptoms, signs, and electrocardiogram (ECG) findings) include Chest Pain Type, Maximum Heart Rate (maxHR), and Exercise-Induced Angina, aligning with established clinical indicators of cardiac risk. Integrating explainable AI (XAI) techniques, such as SHapley Additive exPlanations (SHAP), provides transparency into the model's decision-making process, fostering trust and facilitating clinical adoption. These findings underscore the potential of hybrid AI models to revolutionize cardiovascular diagnostics by providing accurate, interpretable, and clinically relevant predictions, thereby supporting healthcare professionals in making informed decisions and improving patient outcomes.
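
As a rough illustration of the SHAP step described above (not the authors' code), the snippet below computes SHAP attributions for a simple tabular classifier; the feature names and the gradient-boosting stand-in model are assumptions for demonstration only.

```python
# Minimal sketch: SHAP feature attributions for a tabular heart-failure
# classifier. Feature names and the GradientBoosting stand-in are illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["chest_pain_type", "max_hr", "exercise_angina", "age", "cholesterol"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape: (n_samples, n_features)

# Rank features by mean absolute SHAP value (global importance).
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```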

PMID:40636608 | PMC:PMC12240570 | DOI:10.7759/cureus.85638

Categories: Literature Watch

Applications of neural networks in liver transplantation

Thu, 2025-07-10 06:00

ILIVER. 2022 Aug 9;1(2):101-110. doi: 10.1016/j.iliver.2022.07.002. eCollection 2022 Jun.

ABSTRACT

The use of neural networks (NNs) as a cutting-edge technique in the medical field has drawn considerable attention. NN models "learn" from a large amount of data and then find corresponding clinical patterns that are challenging for clinicians to recognize. In this study, we focus on liver transplantation (LT), which is an effective treatment for end-stage liver diseases. The management before and after LT produces a massive quantity of medical data, which can be fully processed by NNs. We describe recent progress in the clinical application of NNs to LT in five respects: pre-transplantation evaluation of the donor and recipient, recipient outcome prediction, allocation system development, operation monitoring, and post-transplantation complication prediction. This review provides clinicians and researchers with a description of forefront applications of NNs in the field of LT and discusses prospects and pitfalls.

PMID:40636422 | PMC:PMC12212597 | DOI:10.1016/j.iliver.2022.07.002

Categories: Literature Watch

When liver disease diagnosis encounters deep learning: Analysis, challenges, and prospects

Thu, 2025-07-10 06:00

ILIVER. 2023 Mar 4;2(1):73-87. doi: 10.1016/j.iliver.2023.02.002. eCollection 2023 Mar.

ABSTRACT

The liver is the second-largest organ in the human body and is essential for digesting food and removing toxic substances. Viruses, obesity, alcohol use, and other factors can damage the liver and cause liver disease. The diagnosis of liver disease used to depend on the clinical experience of doctors, which made it subjective, difficult, and time-consuming. Deep learning has made breakthroughs in various fields; thus, there is a growing interest in using deep learning methods to solve problems in liver research to assist doctors in diagnosis and treatment. In this paper, we provide an overview of deep learning in liver research using 139 papers from the last 5 years. We also show the relationship between data modalities, liver topics, and applications in liver research using Sankey diagrams and summarize the deep learning methods used for each liver topic, in addition to the relations and trends between these methods. Finally, we discuss the challenges of and expectations for deep learning in liver research.

PMID:40636411 | PMC:PMC12212720 | DOI:10.1016/j.iliver.2023.02.002

Categories: Literature Watch

Application of biological big data and radiomics in hepatocellular carcinoma

Thu, 2025-07-10 06:00

ILIVER. 2023 Feb 4;2(1):41-49. doi: 10.1016/j.iliver.2023.01.003. eCollection 2023 Mar.

ABSTRACT

Hepatocellular carcinoma (HCC), one of the most common gastrointestinal cancers, has been considered a worldwide threat due to its high incidence and poor prognosis. In recent years, with the continuous emergence and promotion of new sequencing technologies, omics approaches such as genomics, transcriptomics, proteomics, and liquid biopsy have been used to assess HCC heterogeneity from different perspectives and have become a hotspot in the field of tumor precision medicine. In addition, with the continuous improvement of machine learning and deep learning algorithms, radiomics has made great progress in ultrasound, CT, and MRI for HCC. This article reviews the research progress of biological big data and radiomics in HCC and provides new methods and ideas for the diagnosis, prognosis, and therapy of HCC.

PMID:40636408 | PMC:PMC12212726 | DOI:10.1016/j.iliver.2023.01.003

Categories: Literature Watch

Fourier convolutional decoder: reconstructing solar flare images via deep learning

Thu, 2025-07-10 06:00

Neural Comput Appl. 2025;37(20):15573-15604. doi: 10.1007/s00521-025-11283-6. Epub 2025 May 27.

ABSTRACT

Reconstructing images from observational data is a complex and time-consuming process, particularly in astronomy, where traditional algorithms like CLEAN require extensive computational resources and expert interpretation to distinguish genuine features from artifacts, especially without ground truth data. To address these challenges, we developed the Fourier convolutional decoder (FCD), a custom-made overcomplete autoencoder trained on simulated data with available ground truth. This enables the network to generate outputs that closely approximate the expected ground truth. The model's versatility was demonstrated on both simulated and observational datasets, with a specific application to data from the Spectrometer/Telescope for Imaging X-rays (STIX) on Solar Orbiter. In the simulated environment, FCD's performance was evaluated using multiple image-reconstruction metrics, demonstrating its ability to produce accurate images with minimal artifacts. For observational data, FCD was compared with benchmark algorithms, focusing on reconstruction metrics related to Fourier components. Our evaluation found that FCD is the fastest imaging method, with runtimes on the order of milliseconds. Its computational cost is comparable to the most efficient reconstruction algorithm and 280× faster than the slowest imaging method for single-image reconstruction on a CPU. Additionally, its runtime can be reduced by an order of magnitude for multiple-image reconstruction on a GPU. FCD outperforms or matches state-of-the-art methods on simulated data, achieving a mean MS-SSIM of 0.97, LPIPS of 0.04, PSNR of 35.70 dB, a Dice coefficient of 0.83, and a Hausdorff distance of 5.08. Finally, on experimental STIX observations, FCD remains competitive with top methods despite reduced performance compared to simulated data.
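
As an illustration of the decoder idea described above, the sketch below maps a small vector of Fourier components to an image with a few transposed-convolution layers; the input dimension, layer sizes, and non-negativity constraint are assumptions, not the published FCD architecture.

```python
# Illustrative sketch: a small decoder mapping Fourier components
# (visibilities) to a reconstructed image, in the spirit of the FCD.
import torch
import torch.nn as nn

class FourierDecoder(nn.Module):
    def __init__(self, n_vis: int = 30, img_size: int = 64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * n_vis, 256),      # real + imaginary parts
            nn.ReLU(),
            nn.Linear(256, 128 * 8 * 8),
            nn.ReLU(),
        )
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),    # 32 -> 64
            nn.Softplus(),  # flare intensities are non-negative
        )

    def forward(self, visibilities: torch.Tensor) -> torch.Tensor:
        x = self.fc(visibilities).view(-1, 128, 8, 8)
        return self.deconv(x)

# One forward pass on a batch of 4 simulated visibility vectors.
model = FourierDecoder()
images = model(torch.randn(4, 60))
print(images.shape)  # torch.Size([4, 1, 64, 64])
```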

PMID:40636402 | PMC:PMC12234595 | DOI:10.1007/s00521-025-11283-6

Categories: Literature Watch

Deep learning-based feature selection for detection of autism spectrum disorder

Thu, 2025-07-10 06:00

Front Artif Intell. 2025 Jun 25;8:1594372. doi: 10.3389/frai.2025.1594372. eCollection 2025.

ABSTRACT

INTRODUCTION: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by challenges in communication, social interactions, and repetitive behaviors. The heterogeneity of symptoms across individuals complicates diagnosis. Neuroimaging techniques, particularly resting-state functional MRI (rs-fMRI), have shown potential for identifying neural signatures of ASD, though challenges such as high dimensionality, noise, and small sample sizes hinder their clinical application.

METHODS: This study proposes a novel approach for ASD detection utilizing deep learning and advanced feature selection techniques. A hybrid model combining a Stacked Sparse Denoising Autoencoder (SSDAE) and a Multi-Layer Perceptron (MLP) is employed to extract relevant features from rs-fMRI data in the ABIDE I dataset, which was preprocessed using the CPAC pipeline. Feature selection is enhanced through an optimized Hiking Optimization Algorithm (HOA) that integrates Dynamic Opposites Learning (DOL) and Double Attractors to improve convergence toward the optimal subset of features.

RESULTS: The proposed model is evaluated using multiple ASD datasets. The performance metrics include an average accuracy of 0.735, sensitivity of 0.765, and specificity of 0.752, surpassing the results of existing state-of-the-art methods.

DISCUSSION: The findings demonstrate the effectiveness of the hybrid deep learning approach for ASD detection. The enhanced feature selection process, coupled with the hybrid model, addresses limitations in current neuroimaging analyses and offers a promising direction for more accurate and clinically applicable ASD detection models.
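
To make the feature-extraction idea concrete, here is a minimal sketch of a single denoising-autoencoder layer followed by an MLP classifier, a simplified stand-in for the SSDAE-MLP pipeline described above; the connectivity-feature dimensionality and layer sizes are assumptions.

```python
# Simplified stand-in (not the published SSDAE-MLP): one denoising autoencoder
# layer that compresses rs-fMRI connectivity features, plus an MLP classifier.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features: int = 19900, hidden: int = 512, noise: float = 0.2):
        super().__init__()
        self.noise = noise
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        corrupted = x + self.noise * torch.randn_like(x)   # inject Gaussian noise
        return self.decoder(self.encoder(corrupted))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

classifier = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))

# Typical two-stage use: pretrain the autoencoder with a reconstruction loss,
# then train the MLP on the encoded features with cross-entropy.
ae = DenoisingAE()
x = torch.randn(8, 19900)                 # 8 subjects, flattened connectivity
recon_loss = nn.functional.mse_loss(ae(x), x)
logits = classifier(ae.encode(x))
print(recon_loss.item(), logits.shape)
```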

PMID:40636395 | PMC:PMC12237974 | DOI:10.3389/frai.2025.1594372

Categories: Literature Watch

Artificial intelligence in the diagnosis of temporomandibular joint disorders using cone-beam computed tomography (CBCT)

Thu, 2025-07-10 06:00

Bioinformation. 2025 Apr 30;21(4):805-808. doi: 10.6026/973206300210805. eCollection 2025.

ABSTRACT

Temporomandibular joint disorders (TMDs) hinder the proper functioning of the TMJ and cause pain-related problems. Therefore, it is of interest to analyse 150 CBCT scans using AI-based methods for TMD diagnosis. The AI model achieved 92.4% accuracy, 90.8% sensitivity, and 93.7% specificity with an AUC of 0.95, and its agreement with radiologists was κ = 0.89. AI-assisted diagnostics reduced diagnostic assessment time, delivering higher efficiency and greater consistency. AI-assisted CBCT analysis appears promising but requires additional validation before broader clinical use.

PMID:40636185 | PMC:PMC12236576 | DOI:10.6026/973206300210805

Categories: Literature Watch

Advances in artificial intelligence techniques drive the application of radiomics in the clinical research of hepatocellular carcinoma

Thu, 2025-07-10 06:00

ILIVER. 2022 Mar 10;1(1):49-54. doi: 10.1016/j.iliver.2022.02.005. eCollection 2022 Mar.

ABSTRACT

Hepatocellular carcinoma (HCC) remains one of the most common malignancies threatening public health globally. With advances in artificial intelligence techniques, radiomics for HCC management provides a novel perspective on unmet needs in clinical settings, revealing pixel-level radiological information in medical imaging big data and correlating the radiological phenotype with targeted clinical issues. Conventional radiomics pipelines depend on handcrafted, engineered features, while deep learning-based radiomics pipelines are supplemented with deep features calculated via self-learning strategies. During the past decade, radiomics has been widely applied to accurate diagnosis, evaluation of pathological or biological behavior, and prognosis prediction. In this review, we systematically introduce the main pipelines of artificial intelligence-based radiomics and their efficacy in clinical studies of HCC.
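
For readers unfamiliar with the conventional, handcrafted-feature pipeline mentioned above, the following sketch uses the pyradiomics package with default settings; the image and mask file names are placeholders, and the snippet is not tied to any specific study covered in the review.

```python
# Sketch of the handcrafted-feature radiomics step: extract first-order and
# texture features from a segmented lesion. File paths are placeholders.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()  # default: all feature classes
features = extractor.execute("hcc_ct.nii.gz", "tumor_mask.nii.gz")

# Keep only the computed feature values (skip the diagnostic metadata keys).
handcrafted = {k: v for k, v in features.items() if k.startswith("original_")}
print(len(handcrafted), "handcrafted features extracted")
```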

PMID:40636134 | PMC:PMC12212591 | DOI:10.1016/j.iliver.2022.02.005

Categories: Literature Watch

Accurate classification of benign and malignant breast tumors in ultrasound imaging with an enhanced deep learning model

Thu, 2025-07-10 06:00

Front Bioeng Biotechnol. 2025 Jun 25;13:1526260. doi: 10.3389/fbioe.2025.1526260. eCollection 2025.

ABSTRACT

BACKGROUND: Breast cancer is the most common malignant tumor in women worldwide, and early detection is crucial for improving patient prognosis. However, traditional ultrasound examination relies heavily on physician judgment, and diagnostic results are easily influenced by individual experience, leading to frequent misdiagnosis or missed diagnosis. There is therefore a pressing need for an automated, highly accurate diagnostic method to support the detection and classification of breast cancer. This study aims to build a reliable deep learning model for benign-malignant classification of breast ultrasound images to improve the accuracy and consistency of diagnosis.

METHODS: This study proposed an innovative deep learning model, RcdNet. RcdNet combines depthwise separable convolution with Convolutional Block Attention Module (CBAM) attention to enhance the identification of key lesion areas in ultrasound images. The model was internally validated and externally independently tested, and compared with commonly used models such as ResNet, MobileNet, RegNet, ViT, and ResNeXt to verify its performance advantage in the benign-malignant classification task. In addition, the model's attention regions were analyzed by heat map visualization to evaluate its clinical interpretability.
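
For context on the CBAM block referenced in the methods, the following is a generic sketch of channel-then-spatial attention in PyTorch, not the authors' RcdNet implementation; the reduction ratio and kernel size follow common defaults and are assumptions here.

```python
# Generic CBAM sketch: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-wise avg and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)          # channel attention
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(spatial_in))        # spatial attention

feat = torch.randn(2, 64, 56, 56)      # feature map from a conv backbone
print(CBAM(64)(feat).shape)            # torch.Size([2, 64, 56, 56])
```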

RESULTS: The experimental results show that RcdNet outperforms other mainstream deep learning models, including ResNet, MobileNet, and ResNeXt, across all key evaluation metrics. On the external test set, RcdNet achieved an accuracy of 0.9351, a precision of 0.9168, a recall of 0.9495, and an F1-score of 0.9290, demonstrating superior classification performance and strong generalization ability. Furthermore, heat map visualizations confirm that RcdNet accurately attends to clinically relevant features such as tumor edges and irregular structures, aligning well with radiologists' diagnostic focus and enhancing the interpretability and credibility of the model in clinical applications.

CONCLUSION: The RcdNet model proposed in this study performs well in the classification of benign and malignant breast ultrasound images, with high classification accuracy, strong generalization ability and good interpretability. RcdNet can be used as an auxiliary diagnostic tool to help physicians quickly and accurately screen breast cancer, improve the consistency and reliability of diagnosis, and provide strong support for early detection and precise diagnosis and treatment of breast cancer. Future work will focus on integrating RcdNet into real-time ultrasound diagnostic systems and exploring its potential in multi-modal imaging workflows.

PMID:40635689 | PMC:PMC12237964 | DOI:10.3389/fbioe.2025.1526260

Categories: Literature Watch

Deep learning-based automatic detection and grading of disk herniation in lumbar magnetic resonance images

Wed, 2025-07-09 06:00

Sci Rep. 2025 Jul 9;15(1):24700. doi: 10.1038/s41598-025-10401-7.

ABSTRACT

Magnetic resonance imaging of the lumbar spine is a key technique for clarifying the cause of disease. The greatest challenges today are the repetitive, time-consuming process of interpreting these complex MR images and the variability of diagnostic results among physicians with different levels of experience. To address these issues, this study developed an improved YOLOv8 model (GE-YOLOv8) that combines a gradient search (GS) module and efficient channel attention (ECA). To address the difficulty of intervertebral disc feature extraction, the GS module was introduced into the backbone network; it enhances feature learning for key structures through a gradient-splitting strategy and reduces the number of parameters by 2.1%. The ECA module optimizes the weights of the feature channels and enhances sensitivity to small-target lesions, improving mAP50 by 4.4% over YOLOv8. The improvement of GE-YOLOv8 over the YOLOv8 baseline was statistically significant (P < .001). Experimental results on a dataset from the Pingtan Branch of Union Hospital of Fujian Medical University and an external test dataset show that the model has excellent accuracy.
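
As a concrete reference for the efficient channel attention component mentioned above, the snippet below is a generic ECA block (global average pooling followed by a 1-D convolution across channels); the kernel size is an assumed default, and this is not the GE-YOLOv8 code.

```python
# Generic ECA sketch: lightweight channel reweighting via a 1-D convolution.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = x.mean(dim=(2, 3)).view(b, 1, c)          # (B, 1, C) channel descriptor
        y = torch.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * y                                  # reweight channels

x = torch.randn(2, 128, 40, 40)                       # e.g. a backbone feature map
print(ECA()(x).shape)                                 # torch.Size([2, 128, 40, 40])
```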

PMID:40634500 | DOI:10.1038/s41598-025-10401-7

Categories: Literature Watch

Transformer optimization with meta learning on pathology images for breast cancer lymph node micrometastasis

Wed, 2025-07-09 06:00

NPJ Digit Med. 2025 Jul 9;8(1):421. doi: 10.1038/s41746-025-01833-6.

ABSTRACT

Lymph node micro-metastasis represents the initial stage of breast cancer spread or metastasis. However, the limited size of these hidden lesions restricts dataset expansion, presenting a significant challenge for manual examination and conventional deep learning techniques. By harnessing the power of meta-learning on limited datasets, we developed a novel network named MetaTrans, equipped with a 34-category dataset (MT-MCD) to effectively pinpoint micro-metastases in lymph nodes from pathological images. MetaTrans demonstrated superior performance on two different multi-center datasets and excelled in the 0-shot task for intraoperative frozen section diagnosis. Beyond breast cancer, MetaTrans efficiently identifies micro-metastases in thyroid and colorectal cancers and can be directly applied to recognize images captured by digital cameras under a microscope. Across all clinical validation scenarios, our method surpasses state-of-the-art baselines, exhibiting robust cross-domain adaptation and task-specific reliability, which highlight its translational potential in diverse pathological settings.

PMID:40634485 | DOI:10.1038/s41746-025-01833-6

Categories: Literature Watch

Enhanced detection of Mpox using federated learning with hybrid ResNet-ViT and adaptive attention mechanisms

Wed, 2025-07-09 06:00

Sci Rep. 2025 Jul 9;15(1):24728. doi: 10.1038/s41598-025-05391-5.

ABSTRACT

Monkeypox (Mpox), caused by the monkeypox virus, has become a global concern due to rising case numbers and its resemblance to other rash-causing diseases such as chickenpox and measles. Traditional diagnostic methods, including visual examination and PCR tests, face limitations such as misdiagnosis, high cost, and unavailability in resource-limited areas. Existing deep learning-based approaches, while effective, often rely on centralized datasets, raising privacy and scalability concerns. To address these challenges, this study proposes ResViT-FLBoost, a federated learning-based framework integrating ResNet and Vision Transformer (ViT) architectures with the ensemble classifiers XGBoost and LightGBM. The system enables decentralized training across healthcare facilities, preserving data privacy while improving classification performance. The Monkeypox Skin Lesion Dataset (MSLD), consisting of 3192 augmented images, is used for training and testing. The framework, implemented in Python, leverages federated learning to train models collaboratively without data sharing and adaptive attention mechanisms to focus on critical lesion features. Results demonstrate a detection accuracy of 98.78%, significantly outperforming traditional and existing methods in precision, recall, and robustness. ResViT-FLBoost combines ResNet convolutional features with ViT contextual representations, weighted by dynamic attention components, and its federated learning architecture protects patient privacy by allowing distributed model training across hospital sites without moving sensitive health information to a central server. The ensemble classifiers XGBoost and LightGBM improve diagnostic outcomes by merging local and global features in their classification decisions. These technical innovations provide strong diagnostic performance together with privacy-safe deployment in real healthcare infrastructure, making the framework a scalable, privacy-preserving solution for Mpox detection that is particularly suitable for resource-constrained settings.
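
To illustrate the decentralized training idea underlying such a framework, the sketch below shows a plain federated-averaging step that combines locally trained model weights without sharing images; the model, site count, and dataset sizes are toy assumptions, not details from the paper.

```python
# Minimal federated-averaging sketch: each site trains locally, then the
# server averages parameter tensors weighted by local dataset size.
import copy
import torch

def federated_average(site_states, site_sizes):
    """Average state_dicts from several sites, weighted by local dataset size."""
    total = sum(site_sizes)
    avg = copy.deepcopy(site_states[0])
    for key in avg:
        avg[key] = sum(state[key].float() * (n / total)
                       for state, n in zip(site_states, site_sizes))
    return avg

# Toy example: a tiny shared model "trained" at three hospitals.
model = torch.nn.Linear(4, 2)
states = [copy.deepcopy(model.state_dict()) for _ in range(3)]
global_state = federated_average(states, site_sizes=[1200, 900, 1092])
model.load_state_dict(global_state)   # broadcast the averaged weights back
```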

PMID:40634415 | DOI:10.1038/s41598-025-05391-5

Categories: Literature Watch

Deep ensemble learning with transformer models for enhanced Alzheimer's disease detection

Wed, 2025-07-09 06:00

Sci Rep. 2025 Jul 9;15(1):24720. doi: 10.1038/s41598-025-08362-y.

ABSTRACT

The progression of Alzheimer's disease (AD) is relentless, leading to a worsening of mental faculties over time, and there is currently no cure. Accurate detection and prompt intervention are pivotal in mitigating the progression of the disease. Researchers have recently been developing methods for detecting Alzheimer's at earlier stages, including genetic testing, blood biomarker tests, and cognitive assessments; the latter involve a series of tests measuring memory, language, attention, and other brain functions. For disease detection, optimal performance requires both high accuracy and efficient computation. We first augment the textual data and then deploy our proposed BERT-based deep learning model to exploit its capabilities for feature extraction and text comprehension. The proposed model is a two-branch network. The first branch is based on a BERT encoder that encodes the text data; the BERT output is fed into a convolution layer, followed by an LSTM layer and a fully connected layer that predicts whether a patient has AD. The second branch is a recurrent convolutional neural network that also takes the text data as input and produces its own classification output. The two branches are then fused with an ensemble learning approach to give a more robust and accurate prediction. The proposed model is trained on a corpus of clinical notes from patients with AD and healthy control subjects. Evaluated on the Cookie Theft subset of the DementiaBank Pitt Corpus, our ensemble achieves 94.98% accuracy, a 0.9523 F1-score, and 0.93 AUC. The results show that the proposed model outperforms state-of-the-art models for the early diagnosis of AD from text.
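
As a sketch of the first branch described above (BERT encoder, then convolution, LSTM, and a fully connected layer), the code below uses Hugging Face Transformers and PyTorch; the hidden sizes, kernel size, and pooling choice are assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch of a BERT -> Conv1d -> LSTM -> FC branch.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertConvLstmBranch(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", n_classes: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.conv = nn.Conv1d(768, 128, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(128, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state  # (B, T, 768)
        x = self.conv(hidden.transpose(1, 2)).transpose(1, 2)                # (B, T, 128)
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])                                              # (B, n_classes)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["the boy is taking a cookie from the jar"],
                  return_tensors="pt", padding=True, truncation=True)
logits = BertConvLstmBranch()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)   # torch.Size([1, 2])
```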

PMID:40634379 | DOI:10.1038/s41598-025-08362-y

Categories: Literature Watch

A hybrid deep learning model EfficientNet with GRU for breast cancer detection from histopathology images

Wed, 2025-07-09 06:00

Sci Rep. 2025 Jul 9;15(1):24633. doi: 10.1038/s41598-025-00930-6.

ABSTRACT

Breast cancer remains a significant global health challenge among women, with histopathological image analysis playing a critical role in early detection. However, existing diagnostic models often struggle to extract complex patterns from high-resolution tissue images, limiting their diagnostic accuracy and generalization. This study aims to develop a high-performance deep learning framework for accurate classification of breast cancer in histopathology images, addressing limitations in feature extraction and spatial dependency modeling. A hybrid deep learning model is proposed, integrating EfficientNetV2 for multi-scale feature extraction with a Gated Recurrent Unit (GRU) enhanced by an attention mechanism to model sequential dependencies. The model is trained and evaluated using the BreakHis and Camelyon17 datasets at 200× magnification. Evaluation metrics include precision, recall, F1-score, specificity, Intersection over Union (IoU), and accuracy. The proposed model achieved superior performance compared to existing architectures such as AlexNet, DenseNet, MobileNetV3, and EfficientNet. It attained a precision of 98.15%, recall of 95.68%, F1-score of 96.82%, specificity of 96%, IoU of 93.99%, and accuracy of 95.72% on the test set. The integration of EfficientNetV2 with GRU and attention mechanisms enables effective learning of spatial and contextual features, enhancing the accuracy and interpretability of breast cancer classification from histopathology images. This framework shows strong potential for aiding pathologists in real-time diagnostic workflows.
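
To make the hybrid design concrete, here is a minimal sketch in which EfficientNetV2 feature maps are flattened into a sequence, passed through a GRU, and pooled with a simple attention layer; the backbone variant, hidden sizes, and attention form are assumptions, not the published model.

```python
# Illustrative EfficientNetV2 + GRU + attention sketch (not the paper's model).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

class EffNetGRU(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.backbone = efficientnet_v2_s(weights=None).features   # (B, 1280, 7, 7) for 224x224 input
        self.gru = nn.GRU(1280, 256, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(512, 1)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fmap = self.backbone(x)                                    # (B, 1280, H', W')
        seq = fmap.flatten(2).transpose(1, 2)                      # (B, H'*W', 1280)
        out, _ = self.gru(seq)                                     # (B, T, 512)
        weights = torch.softmax(self.attn(out), dim=1)             # attention over positions
        pooled = (weights * out).sum(dim=1)
        return self.fc(pooled)

logits = EffNetGRU()(torch.randn(2, 3, 224, 224))
print(logits.shape)    # torch.Size([2, 2])
```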

PMID:40634313 | DOI:10.1038/s41598-025-00930-6

Categories: Literature Watch

Development of a deep learning-based MRI diagnostic model for human Brucella spondylitis

Wed, 2025-07-09 06:00

Biomed Eng Online. 2025 Jul 9;24(1):87. doi: 10.1186/s12938-025-01404-6.

ABSTRACT

INTRODUCTION: Brucella spondylitis (BS) and tuberculous spondylitis (TS) are prevalent spinal infections with distinct treatment protocols. Rapid and accurate differentiation between these two conditions is crucial for effective clinical management; however, current imaging and pathogen-based diagnostic methods fall short of fully meeting clinical requirements. This study explores the feasibility of employing deep learning (DL) models based on conventional magnetic resonance imaging (MRI) to differentiate BS and TS.

METHODS: A total of 310 subjects were enrolled at our hospital, comprising 209 with BS and 101 with TS. The participants were randomly divided into a training set (n = 217) and a test set (n = 93), and 74 subjects from another hospital formed the external validation set. The Convolutional Block Attention Module (CBAM) was integrated into the ResNeXt-50 architecture, and the model was trained on sagittal T2-weighted images (T2WI). Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC), and diagnostic accuracy was compared against general models such as ResNet50, GoogleNet, EfficientNetV2, and VGG16.

RESULTS: The CBAM-ResNeXt model showed superior performance, with accuracy, precision, recall, F1-score, and AUC of 0.942, 0.940, 0.928, 0.934, and 0.953, respectively. These metrics outperformed those of the general models.

CONCLUSIONS: The proposed model offers promising potential for the diagnosis of BS and TS using conventional MRI. It could serve as an invaluable tool in clinical practice, providing a reliable reference for distinguishing between these two diseases.

PMID:40635011 | DOI:10.1186/s12938-025-01404-6

Categories: Literature Watch

Deep learning for predicting myopia severity classification method

Wed, 2025-07-09 06:00

Biomed Eng Online. 2025 Jul 9;24(1):85. doi: 10.1186/s12938-025-01416-2.

ABSTRACT

BACKGROUND: Myopia is a major cause of vision impairment. To improve the efficiency of myopia screening, this paper proposes a deep learning model, X-ENet, which combines the advantages of depthwise separable convolution and dynamic convolution to classify different severities of myopia. The proposed model not only enables precise extraction of detailed features from fundus images but also achieves lightweight processing, thereby improving both computational efficiency and classification accuracy.

APPROACH: First, fundus images are enhanced and preprocessed to improve feature extraction effectiveness and enhance the model's generalization capability. Then, the model is trained using fivefold cross-validation, leveraging dynamic convolution and depthwise separable convolution to extract features from each fundus image and classify the severity of myopia. Next, Grad-CAM is employed to visualize the model's decision-making process, highlighting the regions contributing to classification. Finally, a user-friendly GUI interface is developed to intuitively present the classification results, thereby enhancing the system's usability and practical applicability.
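
As a generic illustration of the Grad-CAM step mentioned in the approach, the snippet below computes a class activation map for a stand-in CNN using forward and backward hooks; the ResNet-18 backbone and target layer are placeholders, not the X-ENet model.

```python
# Generic Grad-CAM sketch: weight the target layer's activations by the
# class gradient, combine, and upsample to the input resolution.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4                      # last conv block (stand-in choice)

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

x = torch.randn(1, 3, 224, 224)                  # a preprocessed fundus image stand-in
logits = model(x)
logits[0, logits.argmax()].backward()            # gradient of the predicted class

# Channel weights = average gradient; CAM = ReLU of the weighted sum.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)   # torch.Size([1, 1, 224, 224])
```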

RESULTS: The experimental results show that the proposed method achieves an accuracy of 0.9104, a precision of 0.8154, a recall of 0.8177, an F1-score of 0.8147, and a specificity of 0.9376 in the classification of myopia severity.

SIGNIFICANCE: The model significantly outperforms existing conventional deep learning models in terms of accuracy, demonstrating strong effectiveness and reliability.

PMID:40634962 | DOI:10.1186/s12938-025-01416-2

Categories: Literature Watch

Intelligent quality assessment of ultrasound images for fetal nuchal translucency measurement during the first trimester of pregnancy based on deep learning models

Wed, 2025-07-09 06:00

BMC Pregnancy Childbirth. 2025 Jul 10;25(1):741. doi: 10.1186/s12884-025-07863-y.

NO ABSTRACT

PMID:40634883 | DOI:10.1186/s12884-025-07863-y

Categories: Literature Watch

Exploring single-head and multi-head CNN and LSTM-based models for road surface classification using on-board vehicle multi-IMU data

Wed, 2025-07-09 06:00

Sci Rep. 2025 Jul 9;15(1):24595. doi: 10.1038/s41598-025-10573-2.

ABSTRACT

Accurate road surface monitoring is essential for ensuring vehicle and pedestrian safety, and it relies on robust data acquisition and analysis methods. This study examines the classification of road surface conditions using single- and multi-head deep learning architectures, specifically Convolutional Neural Networks (CNNs) and CNNs combined with Long Short-Term Memory (LSTM) layers, applied to data from Inertial Measurement Units (IMUs) mounted on the vehicle's sprung and unsprung masses. Various model architectures were tested, incorporating IMU data from different positions and utilizing both acceleration and angular velocity features. A grid search was conducted to fine-tune the architectures' hyperparameters, including the number of filters, kernel sizes, and LSTM units. Results show that CNN+LSTM models generally outperformed CNN-only models. The highest-performing model, which used data from three IMUs in a single-head architecture, achieved a macro F1-score of 0.9338. The study highlights the effectiveness of combining IMU data in a single-head architecture and suggests that further improvements in classification accuracy can be achieved by refining the architectures and enhancing the dataset, particularly for more challenging road surface classes.
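
For a concrete picture of the single-head CNN+LSTM setup, the sketch below classifies fixed-length windows of concatenated multi-IMU signals; the channel count (three IMUs with six signals each), window length, class count, and layer sizes are assumptions.

```python
# Sketch of a single-head CNN+LSTM classifier for windows of multi-IMU data.
import torch
import torch.nn as nn

class CnnLstmRoadClassifier(nn.Module):
    def __init__(self, n_channels: int = 18, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); all IMUs concatenated on the channel axis.
        feats = self.cnn(x).transpose(1, 2)       # (batch, time', 128)
        _, (h_n, _) = self.lstm(feats)
        return self.fc(h_n[-1])

window = torch.randn(8, 18, 256)                  # 8 windows of 256 samples each
print(CnnLstmRoadClassifier()(window).shape)      # torch.Size([8, 4])
```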

PMID:40634639 | DOI:10.1038/s41598-025-10573-2

Categories: Literature Watch

Applying deep learning techniques to identify tonsilloliths in panoramic radiography

Wed, 2025-07-09 06:00

Sci Rep. 2025 Jul 9;15(1):24773. doi: 10.1038/s41598-025-10489-x.

ABSTRACT

Tonsilloliths can be seen on panoramic radiographs (PRs) as deposits located on the middle portion of the ramus of the mandible. Although tonsilloliths are clinically harmless, the high risk of misdiagnosis leads to unnecessary advanced examinations and interventions, jeopardizing patient safety and increasing unnecessary resource use in the healthcare system. Therefore, this study aims to meet an important clinical need by providing accurate and rapid diagnostic support. The dataset consisted of 275 PRs in total: 125 without tonsilloliths and 150 with tonsilloliths. ResNet and EfficientNet CNN models were assessed during model selection, and each model's learning capacity, complexity, and suitability for the problem were evaluated. The effectiveness of the models was assessed using accuracy, recall, precision, and F1-score measures after the training phase. Both ResNet18 and EfficientNetB0 differentiated between tonsillolith-present and tonsillolith-absent conditions with an average accuracy of 89%, whereas ResNet101 underperformed compared with the other models. EfficientNetB1 showed satisfactory accuracy in both categories. The EfficientNetB0 model achieved 93% precision, 87% recall, a 90% F1-score, and 89% accuracy. This study indicates that AI-powered deep learning techniques could significantly improve the clinical diagnosis of tonsilloliths.

PMID:40634633 | DOI:10.1038/s41598-025-10489-x

Categories: Literature Watch

The role of metabolism in shaping enzyme structures over 400 million years

Wed, 2025-07-09 06:00

Nature. 2025 Jul 9. doi: 10.1038/s41586-025-09205-6. Online ahead of print.

ABSTRACT

Advances in deep learning and AlphaFold2 have enabled the large-scale prediction of protein structures across species, opening avenues for studying protein function and evolution [1]. Here we analyse 11,269 predicted and experimentally determined enzyme structures that catalyse 361 metabolic reactions across 225 pathways to investigate metabolic evolution over 400 million years in the Saccharomycotina subphylum [2]. By linking sequence divergence in structurally conserved regions to a variety of metabolic properties of the enzymes, we reveal that metabolism shapes structural evolution across multiple scales, from species-wide metabolic specialization to network organization and the molecular properties of the enzymes. Although positively selected residues are distributed across various structural elements, enzyme evolution is constrained by reaction mechanisms, interactions with metal ions and inhibitors, metabolic flux variability and biosynthetic cost. Our findings uncover hierarchical patterns of structural evolution, in which structural context dictates amino acid substitution rates, with surface residues evolving most rapidly and small-molecule-binding sites evolving under selective constraints without cost optimization. By integrating structural biology with evolutionary genomics, we establish a model in which enzyme evolution is intrinsically governed by catalytic function and shaped by metabolic niche, network architecture, cost and molecular interactions.

PMID:40634610 | DOI:10.1038/s41586-025-09205-6

Categories: Literature Watch
