Deep learning
A transformer based generative chemical language AI model for structural elucidation of organic compounds
J Cheminform. 2025 Jul 12;17(1):103. doi: 10.1186/s13321-025-01016-1.
ABSTRACT
For over half a century, computer-aided structural elucidation (CASE) systems for organic compounds have relied on complex expert systems with explicitly programmed algorithms. These systems are often computationally inefficient for complex compounds due to the vast chemical structural space that must be explored and filtered. In this study, we present a proof-of-concept transformer-based generative chemical language artificial intelligence (AI) model, an innovative end-to-end architecture designed to replace the logic and workflow of the classic CASE framework for ultra-fast and accurate spectroscopy-based structural elucidation. Our model employs an encoder-decoder architecture and self-attention mechanisms, similar to those in large language models, to directly generate the most probable chemical structures that match the input spectroscopic data. Trained on ~102k IR, UV, and 1H NMR spectra, it performs structural elucidation of molecules with up to 29 atoms in just a few seconds on a modern CPU, achieving a top-15 accuracy of 83%. This approach demonstrates the potential of transformer-based generative AI to accelerate traditional scientific problem-solving processes. The model's ability to iterate quickly based on new data highlights its potential for rapid advancements in structural elucidation.
PMID:40652284 | DOI:10.1186/s13321-025-01016-1
Establishing an AI-based diagnostic framework for pulmonary nodules in computed tomography
BMC Pulm Med. 2025 Jul 12;25(1):339. doi: 10.1186/s12890-025-03806-7.
ABSTRACT
BACKGROUND: Pulmonary nodules seen by computed tomography (CT) can be benign or malignant, and early detection is important for optimal management. The existing manual methods of identifying nodules have limitations, such as being time-consuming and erroneous.
OBJECTIVE: This study aims to develop an Artificial Intelligence (AI) diagnostic scheme that improves the performance of identifying and categorizing pulmonary nodules using CT scans.
METHOD: The proposed deep learning framework used convolutional neural networks, and the image database totaled 1,056 3D-DICOM CT images. The framework first performed preprocessing, including lung segmentation, followed by nodule detection and classification. Nodule detection was done using the Retina-UNet model, while the features were classified using a Support Vector Machine (SVM). Performance measures, including accuracy, sensitivity, specificity, and the AUROC, were used to evaluate the model's performance during training and validation.
RESULTS: Overall, the developed AI model achieved an AUROC of 0.9058. The diagnostic accuracy was 90.58%, with an overall positive predictive value of 89% and an overall negative predictive value of 86%. The algorithm effectively handled the CT images at the preprocessing stage, and the deep learning model performed well in detecting and classifying nodules.
CONCLUSION: The application of the new diagnostic framework based on AI algorithms increased the accuracy of the diagnosis compared with the traditional approach. It also provides high reliability for detecting pulmonary nodules and classifying the lesions, thus minimizing intra-observer differences and improving clinical outcomes. Future work may include increasing the size of the annotated dataset and fine-tuning the model to address detection issues with non-solitary nodules.
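The accuracy, PPV, and NPV reported above all derive from confusion-matrix counts. A minimal sketch; the counts below are hypothetical, chosen only to yield predictive values near those reported:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts (not the study's data).
m = diagnostic_metrics(tp=89, fp=11, tn=86, fn=14)
print({k: round(v, 3) for k, v in m.items()})
```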
PMID:40652218 | DOI:10.1186/s12890-025-03806-7
Deep learning algorithm for identifying osteopenia/osteoporosis using cervical radiography
Sci Rep. 2025 Jul 12;15(1):25274. doi: 10.1038/s41598-025-11285-3.
ABSTRACT
Due to symptomatic gait imbalance and a high incidence of falls, patients with cervical disease-including degenerative cervical myelopathy-have a significantly increased risk of fragility fractures. To prevent such fractures in patients with cervical disease, treating osteoporosis is an important strategy. This study aimed to validate the diagnostic yield of a deep learning algorithm for detecting osteopenia/osteoporosis using cervical radiography and compare its diagnostic accuracy with that of spine surgeons. Samples were divided into training (n = 200) and test (n = 30) datasets. The deep learning algorithm, designed to detect T-scores of the femoral neck or lumbar spine < -1.0 using cervical radiography, was constructed using a convolutional neural network model. The number of correct diagnoses was compared between the algorithm and nine spine surgeons using the independent test dataset. The results indicated that the algorithm's diagnostic accuracy, sensitivity, and specificity in the independent test dataset were 0.800, 0.818, and 0.750, respectively. The rate of correct answers by the deep learning algorithm was significantly higher than that of the nine spine surgeons in the test dataset (80.0% vs. 60.6%; p = 0.032). In conclusion, the diagnostic yield of the algorithm was higher than that of spine surgeons.
PMID:40652099 | DOI:10.1038/s41598-025-11285-3
Development and validation of a prognostic model for predicting survival and immunotherapy benefits in melanoma based on metabolism-relevant genes
Discov Oncol. 2025 Jul 12;16(1):1321. doi: 10.1007/s12672-025-03186-8.
ABSTRACT
Skin cutaneous melanoma (SKCM) is a fatal form of skin cancer. Metabolism-related genes (MRGs) comprise a group of genes that modulate and regulate metabolic pathways. In this study, the expression levels of MRGs were used to classify SKCM patients into three molecular subtypes. The differentially expressed genes (DEGs) among the three MRG molecular clusters were then subjected to LASSO and Cox regression analysis to identify five signature genes. Furthermore, we developed a prognostic model to predict the prognosis of SKCM patients and evaluate their response to immunotherapy based on the expression of the signature genes. Pathological features were extracted using the ResNet50 deep learning framework and CellProfiler software, and feature selection was performed using elastic-net regression and univariate Cox analysis to obtain 13 pathological features related to the MRG prognostic model. Single-cell RNA sequencing (scRNA-seq) analysis identified the expression of MRGs in multiple cell types and found that SLC5A3+ malignant cells may mediate potential communication with tumor-associated fibroblasts through the PI3K-AKT pathway and the cholesterol metabolism pathway. Notably, through in vitro experiments, including western blot (WB), quantitative PCR (qPCR), and immunohistochemistry (IHC), we found differences in the expression levels of the signature genes between normal melanocytes and SKCM cells. In addition, suppressing the expression of the signature gene SLC5A3 in a SKCM cell line using small interfering RNAs (siRNAs) inhibited cell proliferation and migration, as evidenced by colony formation, CCK-8, and cell transfection assays.
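Prognostic models built from Cox-selected signature genes typically score each patient with the Cox linear predictor and dichotomize at the median risk score. A minimal sketch; the coefficients and expression values below are invented for illustration:

```python
import statistics

def risk_scores(expressions, coefficients):
    """Cox linear predictor: sum of coefficient * gene expression per patient."""
    return [sum(c * x for c, x in zip(coefficients, row)) for row in expressions]

def median_split(scores):
    """Dichotomize patients into high- and low-risk groups at the median score."""
    med = statistics.median(scores)
    return ["high" if s > med else "low" for s in scores]

coefs = [0.8, -0.5, 0.3, 0.2, -0.1]            # 5 signature genes (illustrative)
expr = [[1.2, 0.5, 2.0, 1.1, 0.4],             # one row of expressions per patient
        [0.3, 1.8, 0.2, 0.9, 1.5],
        [2.1, 0.2, 1.4, 0.3, 0.8]]
print(median_split(risk_scores(expr, coefs)))
```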
PMID:40652057 | DOI:10.1007/s12672-025-03186-8
A deep learning approach for heart disease detection using a modified multiclass attention mechanism with BiLSTM
Sci Rep. 2025 Jul 12;15(1):25273. doi: 10.1038/s41598-025-09594-8.
ABSTRACT
Heart disease remains the leading cause of death globally, largely due to delayed diagnosis and indeterminate categorization. Many traditional ML/DL methods suffer from misclassification, overlapping features, limited training data, heavy computation, and noise disturbance. This study proposes a novel methodology, a Modified Multiclass Attention Mechanism based on Deep Bidirectional Long Short-Term Memory (M2AM with Deep BiLSTM). The model incorporates class-aware attention weights, which dynamically modulate the focus of attention on input features according to their importance for a specific heart disease class. By emphasizing informative data, M2AM improves feature representation and mitigates the problems of misclassification, overlapping features, and fragility against noise. We utilized a large dataset of 6,000 samples with 14 features drawn from the MIT-BIH and INCART databases. Applying an Improved Adaptive Band-Pass Filter (IABPF) to the signals resulted in noticeable noise reduction and enhanced signal quality. Additionally, wavelet transforms were employed to achieve accurate segmentation, allowing the model to discern the complex patterns present in the data. The proposed mechanism achieved high performance, with accuracy of 98.82%, precision of 97.20%, recall of 98.34%, and F-measure of 98.92%. It surpassed methods such as the classic Deep BiLSTM (SD-BiLSTM) and the standard approaches of Naive Bayes (NB), DNN-Taylos (DNNT), multilayer perceptron (MLP-NN), and convolutional neural network (CNN). This work addresses significant limitations of current methods and improves classification accuracy, indicating substantial progress toward accurate diagnosis of heart diseases.
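The core idea of class-aware attention is a per-class weight vector, passed through a softmax, that re-weights a shared feature vector. A minimal sketch in plain Python; the feature values, class names, and weights are illustrative, not the paper's parameters:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def class_aware_attention(features, class_weights):
    """Re-weight a shared feature vector with a per-class attention vector.

    Each disease class has its own raw weight vector; softmax turns it into
    attention scores that modulate the input features for that class.
    """
    return {cls: [a * f for a, f in zip(softmax(w), features)]
            for cls, w in class_weights.items()}

feats = [0.9, 0.1, 0.5, 0.3]                       # e.g. 4 ECG-derived features
weights = {"normal": [1.0, 0.2, 0.1, 0.1],
           "arrhythmia": [0.1, 2.0, 0.3, 0.2]}
attended = class_aware_attention(feats, weights)
print(round(sum(attended["normal"]), 3))
```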
PMID:40652020 | DOI:10.1038/s41598-025-09594-8
Accurate and real-time brain tumour detection and classification using optimized YOLOv5 architecture
Sci Rep. 2025 Jul 12;15(1):25286. doi: 10.1038/s41598-025-07773-1.
ABSTRACT
Brain tumours originate in the brain or its surrounding structures, such as the pituitary and pineal glands, and can be benign or malignant. While benign tumours may grow into neighbouring tissues, metastatic tumours occur when cancer from other organs spreads to the brain. Identification and staging of such tumours are critical, because nearly every aspect of a patient's care depends on accurate diagnosis and staging. Image segmentation is highly valuable in medical imaging, as it makes it possible to simulate surgical operations and supports disease diagnosis and anatomical and pathological analysis. This study performs prediction and classification of brain tumours in MRI using a proposed combined classification and localization framework connecting a Fully Convolutional Neural Network (FCNN) and You Only Look Once version 5 (YOLOv5). The FCNN model is designed to classify images into four categories: benign, glial, pituitary adenoma-related, and meningeal. It utilizes a derivative of Root Mean Square Propagation (RMSProp) optimization to boost the classification rate, and performance was evaluated with the standard measures of precision, recall, F1-score, specificity, and accuracy. Subsequently, the YOLOv5 architecture is incorporated for more accurate detection of tumours, with the FCNN then used to create segmentation masks of the tumours. The analysis shows that the suggested approach is more accurate than existing systems, with 98.80% average accuracy in the identification and categorization of brain tumours. This integration of detection and segmentation models presents an effective technique for enhancing the diagnostic performance of the system and adds value to the medical imaging field.
On the basis of these findings, it can be concluded that advances in deep learning architectures can improve tumour diagnosis while contributing to the fine-tuning of clinical management.
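Detector quality in frameworks like YOLOv5 is scored by Intersection-over-Union between predicted and ground-truth boxes. A minimal sketch with illustrative pixel coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Predicted vs. ground-truth tumour box (pixel coordinates, illustrative).
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))
```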
PMID:40651993 | DOI:10.1038/s41598-025-07773-1
A crack detection and quantification method using matched filter and photograph reconstruction
Sci Rep. 2025 Jul 12;15(1):25266. doi: 10.1038/s41598-025-08280-z.
ABSTRACT
Crack detection is a critical task for bridge maintenance and management. While popular deep learning algorithms have shown promise, their reliance on large, high-quality training datasets, which are often unavailable in engineering practice, limits their applicability. By contrast, traditional digital image processing methods offer low computational costs and strong interpretability, making continued research in this area highly valuable. This study proposes an automatic crack detection and quantification approach based on digital image processing combined with unmanned aerial vehicle (UAV) flight parameters. First, the characteristics of the bridge images collected by the UAVs were thoroughly analyzed. An enhanced matched-filter algorithm was designed to achieve crack segmentation. Morphological methods were employed to extract the skeletons of the segmented cracks, enabling the calculation of actual crack lengths. Finally, a 3D model was constructed by integrating the detection results with the image-shooting parameters. This 3D model, annotated with detected cracks, provides an intuitive and comprehensive representation of bridge damage, facilitating informed decision-making in maintenance planning and resource allocation. To verify the accuracy of the enhanced matched-filter algorithm, it was compared with other digital image processing methods on public datasets, achieving average results of 97.9% for Pixel Accuracy (PA), 72.5% for the F1-score, and 58.1% for Intersection over Union (IoU) across three typical sub-datasets. Moreover, the proposed methodologies were successfully applied to an arch bridge with an error of only 2%, thereby demonstrating their applicability to real-world scenarios.
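Converting a skeleton's pixel length into an actual crack length hinges on the ground sampling distance, a standard photogrammetry quantity derived from the UAV flight parameters. A sketch; the camera and flight values below are generic illustrations, not the study's configuration:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             flight_height_m, image_width_px):
    """Ground distance covered by one pixel (m/px), standard photogrammetry formula:
    GSD = (sensor width * flight height) / (focal length * image width)."""
    return (sensor_width_mm * flight_height_m) / (focal_length_mm * image_width_px)

def crack_length_m(skeleton_px, gsd_m_per_px):
    """Actual crack length: skeleton pixel count scaled by the GSD."""
    return skeleton_px * gsd_m_per_px

# Illustrative UAV camera: 13.2 mm sensor, 8.8 mm lens, 10 m altitude, 5472-px frame.
gsd = ground_sampling_distance(sensor_width_mm=13.2, focal_length_mm=8.8,
                               flight_height_m=10.0, image_width_px=5472)
print(round(crack_length_m(1200, gsd), 3))  # length of a 1200-px skeleton, in metres
```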
PMID:40651987 | DOI:10.1038/s41598-025-08280-z
Content oriented 3D-CNN sequence learning architecture for academic activities recognition using a realistic CAD dataset
Sci Rep. 2025 Jul 12;15(1):25250. doi: 10.1038/s41598-025-07620-3.
ABSTRACT
In computer vision, video analytics researchers have been developing techniques for human activity recognition across several application domains. Academic institutions possess rich video repositories generated by the surveillance systems on their respective campuses. One major requirement is to develop lightweight, adaptable models capable of recognizing academic activities, enabling effective decision making in various application domains. This research article proposes a lightweight 3D-CNN architecture for recognizing a novel set of academic activities using a realistic campus video dataset. The proposed sequence learning model utilizes spatial and temporal video information, enabling accurate classification of the target activity sequences. The proposed model is compared with the LSTM model, a state-of-the-art algorithm for time-series and sequence-learning problems, through extensive experimentation. Experimental results demonstrate that the proposed 3D-CNN model outperforms other variants, achieving the highest accuracy of 95%, a minimum computational cost of 13.3 GFLOPs, and a low memory overhead of 18,464 KB. These performance indicators make the proposed model an efficient and effective classifier for the proposed academic activity recognition task.
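A 3D-CNN's compute and memory footprint follows directly from its layer output shapes, which the standard convolution size formula gives per axis. A sketch with illustrative clip and kernel sizes (not the paper's architecture):

```python
def conv3d_output_shape(d, h, w, k=3, stride=1, pad=0):
    """Output spatial dims of a 3D convolution; the same formula
    (n + 2*pad - k) // stride + 1 applies to depth, height, and width."""
    f = lambda n: (n + 2 * pad - k) // stride + 1
    return f(d), f(h), f(w)

# A 16-frame clip of 112x112 frames through a 3x3x3 conv, stride 1, no padding.
print(conv3d_output_shape(16, 112, 112))  # (14, 110, 110)
```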
PMID:40651985 | DOI:10.1038/s41598-025-07620-3
Multimodal deep learning for cephalometric landmark detection and treatment prediction
Sci Rep. 2025 Jul 12;15(1):25205. doi: 10.1038/s41598-025-06229-w.
ABSTRACT
In orthodontics and maxillofacial surgery, accurate cephalometric analysis and treatment outcome prediction are critical for clinical decision-making. Traditional approaches rely on manual landmark identification, which is time-consuming and subject to inter-observer variability, while existing automated methods typically utilize single imaging modalities with limited accuracy. This paper presents DeepFuse, a novel multi-modal deep learning framework that integrates information from lateral cephalograms, CBCT volumes, and digital dental models to simultaneously perform landmark detection and treatment outcome prediction. The framework employs modality-specific encoders, an attention-guided fusion mechanism, and dual-task decoders to leverage complementary information across imaging techniques. Extensive experiments on three clinical datasets demonstrate that DeepFuse achieves a mean radial error of 1.21 mm for landmark detection, representing a 13% improvement over state-of-the-art methods, with a clinical acceptability rate of 92.4% at the 2 mm threshold. For treatment outcome prediction, the framework attains an overall accuracy of 85.6%, significantly outperforming both conventional prediction models and experienced clinicians. The proposed approach enhances diagnostic precision and treatment planning while providing interpretable visualization of decision factors, demonstrating significant potential for clinical integration in orthodontic and maxillofacial practice.
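The two landmark-detection figures quoted above, mean radial error and the 2 mm clinical acceptability rate, are both simple functions of predicted-vs-true landmark distances. A minimal sketch with made-up coordinates:

```python
import math

def mean_radial_error(pred, truth):
    """Mean Euclidean distance (mm) between predicted and true landmark positions."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists)

def success_rate(pred, truth, threshold_mm=2.0):
    """Fraction of landmarks falling within the clinical acceptability threshold."""
    return sum(math.dist(p, t) <= threshold_mm
               for p, t in zip(pred, truth)) / len(pred)

# Illustrative 2D landmark coordinates in mm.
pred = [(10.0, 20.0), (31.5, 40.0), (55.0, 61.0)]
truth = [(10.0, 21.0), (30.0, 40.0), (55.0, 60.0)]
print(round(mean_radial_error(pred, truth), 3), success_rate(pred, truth))
```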
PMID:40651957 | DOI:10.1038/s41598-025-06229-w
Highly adaptable deep-learning platform for automated detection and analysis of vesicle exocytosis
Nat Commun. 2025 Jul 12;16(1):6450. doi: 10.1038/s41467-025-61579-3.
ABSTRACT
Activity recognition in live-cell imaging is labor-intensive and requires significant human effort. Existing automated analysis tools are largely limited in versatility. We present the Intelligent Vesicle Exocytosis Analysis (IVEA) platform, an ImageJ plugin for automated, reliable analysis of fluorescence-labeled vesicle fusion events and other burst-like activity. IVEA includes three specialized modules for detecting: (1) synaptic transmission in neurons, (2) single-vesicle exocytosis in any cell type, and (3) nano-sensor-detected exocytosis. Each module uses distinct techniques, including deep learning, allowing the detection of rare events often missed by humans at a speed estimated to be approximately 60 times faster than manual analysis. IVEA's versatility can be expanded by refining or training new models via an integrated interface. With its impressive speed and remarkable accuracy, IVEA represents a seminal advancement in exocytosis image analysis and other burst-like fluorescence fluctuations applicable to a wide range of microscope types and fluorescent dyes.
PMID:40651941 | DOI:10.1038/s41467-025-61579-3
Computational exploration of global venoms for antimicrobial discovery with Venomics artificial intelligence
Nat Commun. 2025 Jul 12;16(1):6446. doi: 10.1038/s41467-025-60051-6.
ABSTRACT
The rise of antibiotic-resistant pathogens, particularly gram-negative bacteria, highlights the urgent need for novel therapeutics. Drug-resistant infections now contribute to approximately 5 million deaths annually, yet traditional antibiotic discovery has significantly stagnated. Venoms form an immense and largely untapped reservoir of bioactive molecules with antimicrobial potential. In this study, we mined global venomics datasets to identify new antimicrobial candidates. Using deep learning, we explored 16,123 venom proteins, generating 40,626,260 venom-encrypted peptides. From these, we identified 386 candidates that are structurally and functionally distinct from known antimicrobial peptides. They display high net charge and elevated hydrophobicity, characteristics conducive to bacterial-membrane disruption. Structural studies revealed that many of these peptides adopt flexible conformations that transition to α-helical conformations in membrane-mimicking environments, supporting their antimicrobial potential. Of the 58 peptides selected for experimental validation, 53 display potent antimicrobial activity. Mechanistic assays indicated that they primarily exert their effects through bacterial-membrane depolarization, mirroring AMP-like mechanisms. In a murine model of Acinetobacter baumannii infection, lead peptides significantly reduced bacterial burden without observable toxicity. Our findings demonstrate that venoms are a rich source of previously hidden antimicrobial scaffolds, and that integrating large-scale computational mining with experimental validation can accelerate the discovery of urgently needed antibiotics.
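The two physicochemical filters named above, net charge and hydrophobicity, are routinely computed directly from peptide sequences. A sketch using the standard Kyte-Doolittle hydropathy scale and a simplified neutral-pH charge rule; the peptide sequence is invented for illustration:

```python
# Kyte-Doolittle hydropathy values (standard published scale).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def net_charge(seq):
    """Approximate net charge at neutral pH: (K + R) - (D + E), His ignored."""
    return sum(seq.count(a) for a in "KR") - sum(seq.count(a) for a in "DE")

def mean_hydropathy(seq):
    """Grand average of hydropathy (GRAVY) on the Kyte-Doolittle scale."""
    return sum(KD[a] for a in seq) / len(seq)

peptide = "KRWWKWLKKLL"  # illustrative cationic candidate, not from the study
print(net_charge(peptide), round(mean_hydropathy(peptide), 2))
```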
PMID:40645962 | DOI:10.1038/s41467-025-60051-6
An investigation of simple neural network models using smartphone signals for recognition of manual industrial tasks
Sci Rep. 2025 Jul 11;15(1):25088. doi: 10.1038/s41598-025-06726-y.
ABSTRACT
This article addresses the challenge of human activity recognition (HAR) in industrial environments, focusing on the effectiveness of various neural network architectures. In particular, simpler Feedforward Neural Networks (FNNs) are examined with the aim of optimizing computational performance without compromising accuracy. Three FNN configurations-FNN1, FNN2, and FNN3-were evaluated alongside a one-dimensional Convolutional Neural Network (CNN 1D) model for comparative analysis. The results indicate that the FNNs achieved accuracy rates ranging from 94.28% to 99.19%, while the CNN 1D exhibited an accuracy of 98.12%. Despite the CNN 1D's efficiency for real-time applications, the FNNs' fast training times and high accuracy make them particularly valuable in resource-constrained environments such as mobile devices. The findings suggest that while more complex models, such as the Long Short-Term Memory (LSTM)-Auto-Encoder configurations previously explored by the same research group, may offer better adaptability, simpler architectures can provide effective results in HAR tasks. Notably, these simpler models can be adopted in cascading systems operating online, serving as detectors of known activities for real-time monitoring and classification.
PMID:40645957 | DOI:10.1038/s41598-025-06726-y
A foundational model for in vitro fertilization trained on 18 million time-lapse images
Nat Commun. 2025 Jul 11;16(1):6235. doi: 10.1038/s41467-025-61116-2.
ABSTRACT
Embryo assessment in in vitro fertilization (IVF) involves multiple tasks-including ploidy prediction, quality scoring, component segmentation, embryo identification, and timing of developmental milestones. Existing methods address these tasks individually, leading to inefficiencies due to high costs and lack of standardization. Here, we introduce FEMI (Foundational IVF Model for Imaging), a foundation model trained on approximately 18 million time-lapse embryo images. We evaluate FEMI on ploidy prediction, blastocyst quality scoring, embryo component segmentation, embryo witnessing, blastulation time prediction, and stage prediction. FEMI attains area under the receiver operating characteristic (AUROC) > 0.75 for ploidy prediction using only image data-significantly outpacing benchmark models. It has higher accuracy than both traditional and deep-learning approaches for overall blastocyst quality and its subcomponents. Moreover, FEMI has strong performance in embryo witnessing, blastulation-time, and stage prediction. Our results demonstrate that FEMI can leverage large-scale, unlabelled data to improve predictive accuracy in several embryology-related tasks in IVF.
PMID:40645954 | DOI:10.1038/s41467-025-61116-2
Pediatric pancreas segmentation from MRI scans with deep learning
Pancreatology. 2025 Jun 16:S1424-3903(25)00119-X. doi: 10.1016/j.pan.2025.06.006. Online ahead of print.
ABSTRACT
OBJECTIVE: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls.
METHODS: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas, and the segmentations were then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement.
RESULTS: Pancreas MRI T2W scans were obtained from 42 children with AP/CP (mean age: 11.73 ± 3.9 years) and 42 healthy children (mean age: 11.19 ± 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis), and intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R² = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability.
CONCLUSION: PanSegNet represents the first validated deep learning solution for pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. This tool, along with our annotated dataset, is freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
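The DSC and HD95 metrics used to validate PanSegNet have compact definitions; a minimal sketch over toy binary masks and point sets (not the study's data or implementation):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    d_ab = [min(np.linalg.norm(a - b) for b in points_b) for a in points_a]
    d_ba = [min(np.linalg.norm(b - a) for a in points_a) for b in points_b]
    return np.percentile(d_ab + d_ba, 95)

# Toy 8x8 masks: two overlapping 4x4 squares.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), bool); truth[3:7, 3:7] = True
print(round(dice(pred, truth), 3))
```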
PMID:40645819 | DOI:10.1016/j.pan.2025.06.006
Deep learning applications in orthopaedics: a systematic review and future directions
Acta Ortop Mex. 2025 May-Jun;39(3):152-163.
ABSTRACT
INTRODUCTION: artificial intelligence and deep learning in orthopedics have gained mass interest in recent years. In prior studies, researchers have demonstrated different applications, from radiographic assessment to bone tumor diagnosis. The purpose of this review is to analyze the current literature on AI and deep learning tools to identify the most used tools in the risk assessment, outcome assessment, imaging, and basic science fields.
MATERIAL AND METHODS: searches were conducted in PubMed, EMBASE and Google Scholar from January 2020 up to October 31st, 2023. We identified 862 studies, 595 of which were included in the systematic review. A total of 281 studies about radiographic assessment, 102 about spine-oriented surgery, 95 about outcome assessment, 84 about fundamental AI orthopedic education, and 33 basic science applications were included. Primary outcomes were diagnostic accuracy, study design and reporting standards reported in the literature. Estimates were pooled using random effects meta-analysis.
RESULTS: 53 different imaging methods were used to measure radiographic aspects. A total of 185 different machine learning algorithms were used, with the convolutional neural network architecture being the most common (73%). Improved diagnostic accuracy and speed were the most commonly reported outcomes (62%).
CONCLUSION: heterogeneity was high among the studies, and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms for medical imaging. There is an immediate need for the development of artificial intelligence-specific guidelines to provide guidance around key issues in this field.
PMID:40645786
AI-based Toxicity Prediction Models using ToxCast Data: Current Status and Future Directions for Explainable Models
Toxicology. 2025 Jul 9:154230. doi: 10.1016/j.tox.2025.154230. Online ahead of print.
ABSTRACT
Artificial intelligence (AI) offers new opportunities for developing toxicity prediction models to screen environmental chemicals. The U.S. EPA's ToxCast program provides one of the largest toxicological databases and has consequently become the most widely used data source for developing AI-driven models. In this review, we analyzed 93 peer-reviewed papers published since 2015 to provide an overview of ToxCast data-based AI models. We surveyed the current landscape in terms of database structure, target endpoints, molecular representations, and learning algorithms. Most models focus on data-rich endpoints and organ-specific toxicity mechanisms, particularly endocrine disruption and hepatotoxicity. While conventional molecular fingerprints and descriptors are still common, recent studies employ alternative representations-graphs, images, and text-leveraging advances in deep learning. Likewise, traditional supervised machine-learning algorithms remain prevalent, but newer work increasingly adopts semi- and unsupervised approaches to tackle data-sparsity challenges. Beyond classical structure-based QSAR, ToxCast data are also being used as biological features to predict in vivo toxicity. We conclude by discussing current limitations and future directions for applying ToxCast-based AI models to accelerate next-generation risk assessment (NGRA).
PMID:40645553 | DOI:10.1016/j.tox.2025.154230
Artificial Intelligence in Airway Management: A Systematic Review and Meta-Analysis
Anaesth Crit Care Pain Med. 2025 Jul 9:101589. doi: 10.1016/j.accpm.2025.101589. Online ahead of print.
ABSTRACT
BACKGROUND: Airway management is the cornerstone of anesthesia care. Complications of difficult airways are usually fatal to patients. Artificial intelligence (AI) has shown promising results in enhancing clinicians' performance in various settings. We therefore aimed to summarize the current evidence on the use of AI models in the prediction of a difficult airway.
METHODS: We searched two databases, PubMed and ScienceDirect, for all relevant articles published until March 2025. R statistical software (version 4.4.2) was then used to meta-analyze areas under the receiver operating characteristic curve (AUROC) to identify the best-performing models.
RESULTS: After the eligibility assessment, 13 studies met the inclusion criteria and were included in the review. Only two studies developed models for patients in the emergency department, and the remaining 11 studies developed models for patients undergoing different surgeries under general anesthesia. The deep learning model with the best discriminative ability for difficult airways was VGG (AUC 0.84; 95% CI [0.83, 0.84]; I2 = 0%). Among the traditional machine learning models, those with good discriminative ability for difficult airways included SVM (AUC 0.80; 95% CI [0.65, 0.96]; I2 = 99.7%) and NB (AUC 0.81; 95% CI [0.51, 1.10]; I2 = 99.3%).
CONCLUSIONS: Our study found that while some AI models have good discriminative ability (AUC ≥ 0.80) for difficult airways, most have only average discriminative ability (AUC < 0.80). This indicates a need to develop models with better discriminative ability and to validate the developed models.
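Pooling per-study AUCs under a random-effects model is commonly done with the DerSimonian-Laird estimator of between-study variance. A compact sketch; the AUCs and standard errors below are invented, not the review's data:

```python
import math

def dl_pool(estimates, std_errors):
    """Random-effects pooling via the DerSimonian-Laird estimator.
    Returns the pooled estimate and its 95% confidence interval."""
    w = [1 / se**2 for se in std_errors]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_re = [1 / (se**2 + tau2) for se in std_errors]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

aucs = [0.84, 0.83, 0.85]          # per-study AUCs (illustrative)
ses = [0.01, 0.02, 0.015]          # per-study standard errors (illustrative)
pooled, ci = dl_pool(aucs, ses)
print(round(pooled, 3))
```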
PMID:40645499 | DOI:10.1016/j.accpm.2025.101589
Artificial intelligence in nutrition and ageing research - A primer on the benefits
Maturitas. 2025 Jul 7;200:108662. doi: 10.1016/j.maturitas.2025.108662. Online ahead of print.
ABSTRACT
Artificial intelligence (AI) is increasingly impacting multiple domains. The application of AI in nutrition and ageing research has significant potential to transform healthcare outcomes for the ageing population. This review provides critical insights into how AI techniques-such as machine learning, natural language processing, and deep learning-are used in the context of care for older people to predict health outcomes, identify risk factors, and enhance dietary assessments. Trained on large datasets, AI models have demonstrated high accuracy in diagnosing malnutrition, predicting bone mineral density abnormalities, and forecasting risks of chronic diseases, thereby addressing significant gaps in early detection and intervention strategies. In addition, we review novel applications of AI in automating dietary intake assessments through image recognition and analysing eating behaviours; these offer innovative tools for personalised nutrition interventions. The review also discusses and showcases the integration of AI in research logistics, such as AI-assisted literature screening and data synthesis, which can accelerate scientific discovery in this domain. Despite these promising advancements, there are critical challenges hindering the widespread adoption of AI, including issues around data quality, ethical considerations, and the interpretability of AI models. By addressing these barriers, the review underscores the necessity for interdisciplinary collaboration to best harness AI's potential. Our goal is for this review to serve as a guide for researchers and practitioners aiming to understand and leverage AI technologies in nutrition and healthy ageing. By bridging the gap between AI's promise and its practical applications, this review directs future innovations that could positively affect the health and well-being of the ageing population.
PMID:40645039 | DOI:10.1016/j.maturitas.2025.108662
RMDNet: RNA-aware dung beetle optimization-based multi-branch integration network for RNA-protein binding sites prediction
BMC Bioinformatics. 2025 Jul 11;26(1):176. doi: 10.1186/s12859-025-06197-y.
ABSTRACT
RNA-binding proteins (RBPs) play crucial roles in gene regulation. Their dysregulation has been increasingly linked to neurodegenerative diseases, liver cancer, and lung cancer. Although experimental methods like CLIP-seq accurately identify RNA-protein binding sites, they are time-consuming and costly. To address this, we propose RMDNet, a deep learning framework that integrates CNN, CNN-Transformer, and ResNet branches to capture features at multiple sequence scales. These features are fused with structural representations derived from RNA secondary structure graphs. The graphs are processed using a graph neural network with DiffPool. To optimize feature integration, we incorporate an improved dung beetle optimization algorithm, which adaptively assigns fusion weights during inference. Evaluations on the RBP-24 benchmark show that RMDNet outperforms state-of-the-art models including GraphProt, DeepRKE, and DeepDW across multiple metrics. On the RBP-31 dataset, it demonstrates strong generalization ability, while ablation studies on RBPsuite2.0 validate the contributions of individual modules. We assess biological interpretability by extracting candidate binding motifs from the first-layer CNN kernels. Several motifs closely match experimentally validated RBP motifs, confirming the model's capacity to learn biologically meaningful patterns. A downstream case study on YTHDF1 focuses on analyzing interpretable spatial binding patterns, using a large-scale prediction dataset and CLIP-seq peak alignment. The results confirm that the model captures localized binding signals and spatial consistency with experimental annotations. Overall, RMDNet is a robust and interpretable tool for predicting RNA-protein binding sites. It has broad potential in disease mechanism research and therapeutic target discovery. The source code is available at https://github.com/cskyan/RMDNet.
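The core idea of weighted multi-branch fusion can be illustrated in a few lines: each branch produces a feature vector, an optimizer proposes one raw weight per branch, and a softmax turns those weights into a convex combination before the weighted sum. This is a minimal sketch of that pattern only; the function names and the toy feature vectors are illustrative, not taken from RMDNet, and the actual fusion weights there are tuned by a dung beetle optimization algorithm rather than fixed by hand.

```python
import math

def softmax(raw):
    """Normalize raw branch weights to a convex combination (non-negative, sum to 1)."""
    m = max(raw)
    exps = [math.exp(w - m) for w in raw]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_branches(branch_feats, raw_weights):
    """Weighted sum of equal-length per-branch feature vectors.

    branch_feats: list of feature vectors, one per branch
    raw_weights:  one scalar per branch, e.g. proposed by a search/optimization loop
    """
    w = softmax(raw_weights)
    dim = len(branch_feats[0])
    return [sum(w[b] * branch_feats[b][i] for b in range(len(branch_feats)))
            for i in range(dim)]

# Three hypothetical branch outputs (stand-ins for CNN, CNN-Transformer, ResNet features)
feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
fused = fuse_branches(feats, [0.0, 0.0, 0.0])  # equal raw weights reduce to a plain average
```

In a real system the optimizer would score each candidate weight vector by downstream validation performance and keep the best one; the softmax just guarantees every candidate remains a valid mixture.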
PMID:40646507 | DOI:10.1186/s12859-025-06197-y
Interpretable MRI Subregional Radiomics-Deep Learning Model for Preoperative Lymphovascular Invasion Prediction in Rectal Cancer: A Dual-Center Study
J Imaging Inform Med. 2025 Jul 11. doi: 10.1007/s10278-025-01586-4. Online ahead of print.
ABSTRACT
We developed a fusion model based on explainable machine learning, combining multiparametric MRI subregional radiomics and deep learning, to preoperatively predict lymphovascular invasion (LVI) status in rectal cancer (RC). We collected data from RC patients with histopathological confirmation from two medical centers, with 301 patients used as a training set and 75 patients as an external validation set. Using K-means clustering techniques, we meticulously divided the tumor areas into multiple subregions and extracted crucial radiomic features from them. Additionally, we employed an advanced Vision Transformer (ViT) deep learning model to extract features. These features were integrated to construct the SubViT model. To better understand the decision-making process of the model, we used the SHapley Additive exPlanations (SHAP) tool to evaluate the model's interpretability. Finally, we comprehensively assessed the performance of the SubViT model through receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and the Delong test, comparing it with other models. In this study, the SubViT model demonstrated outstanding predictive performance in the training set, achieving an area under the curve (AUC) of 0.934 (95% confidence interval: 0.9074 to 0.9603). It also performed well in the external validation set, with an AUC of 0.884 (95% confidence interval: 0.8055 to 0.9616), outperforming both subregion radiomics and imaging-based models. Furthermore, DCA indicated that the SubViT model provides higher clinical utility compared to other models. As an advanced composite model, the SubViT model demonstrated its efficiency in the non-invasive assessment of LVI in rectal cancer.
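Subregional radiomics rests on a simple preprocessing step: cluster tumor voxels by intensity (or other local features) so that each cluster defines a habitat from which features are extracted separately. The toy sketch below runs 1-D K-means on a flat list of voxel intensities; it is an illustration of the clustering step only, with made-up data, and is not the SubViT pipeline (which clusters multiparametric MRI features and feeds the subregions to radiomics and ViT extractors).

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Toy 1-D K-means: partition voxel intensities into k intensity subregions.

    Returns (centers, clusters), where clusters[i] holds the values
    currently assigned to centers[i].
    """
    rng = random.Random(seed)
    centers = rng.sample(values, k)  # initialize centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:  # assign each voxel to its nearest center
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        # recompute each center as the mean of its assigned voxels
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Synthetic intensities with two clearly separated habitats
voxels = [1.0, 1.1, 0.9, 10.0, 10.1, 9.9]
centers, clusters = kmeans_1d(voxels, k=2)
```

On this data the two centers converge to roughly 1.0 and 10.0; per-subregion radiomic features (e.g. mean, variance, texture statistics) would then be computed within each cluster rather than over the whole tumor.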
PMID:40646374 | DOI:10.1007/s10278-025-01586-4