Deep learning
Comparison of the diagnostic accuracy of VSBONE BSI versions for detecting bone metastases in breast and prostate carcinoma patients using conventional and CZT detector gamma cameras
Ann Nucl Med. 2025 Feb 14. doi: 10.1007/s12149-025-02020-z. Online ahead of print.
ABSTRACT
OBJECTIVE: Bone scintigraphy is widely employed for detecting bone metastases, with the bone scan index (BSI) gaining traction as a quantitative tool in this domain. VSBONE BSI, an automated image analysis software, identifies abnormal hyperaccumulation areas in bone scintigraphy and computes BSI scores. The software, originally developed using data from conventional gamma cameras (C-Camera), has undergone two upgrades. This study hypothesized that the upgrades enhance the diagnostic accuracy for bone metastases and assessed the software's applicability to images obtained using a cadmium-zinc-telluride detector gamma camera (CZT-Camera). The aim was to compare the diagnostic accuracy of VSBONE BSI across software versions using both conventional and CZT detectors and to evaluate its utility.
METHODS: A total of 287 patients with breast or prostate carcinoma who underwent whole-body bone scintigraphy were included. VSBONE BSI automatically analyzed the scans and calculated the BSI. For each software version, the analysis results were compared with the presence or absence of metastases, stratified by camera detector type, and diagnostic agreement was evaluated.
RESULTS: Receiver operating characteristic analysis showed an area under the curve (AUC) exceeding 0.7 across all groups, indicating good diagnostic performance. AUC values significantly increased with version upgrades for all patients and for breast carcinoma patients. In metastasis-negative cases, BSI values decreased with each software version upgrade, with the reduction being more pronounced in breast carcinoma patients scanned with the CZT-Camera.
CONCLUSIONS: With VSBONE BSI, versions 2 and 3 showed a higher rate of diagnostic concordance with the clinical prognosis than version 1. In metastasis-negative patients, newer software versions yielded lower BSI values, especially for breast carcinoma patients scanned using the CZT-Camera, highlighting the improved diagnostic accuracy of the updated software.
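The AUC figures above have a direct probabilistic reading: AUC equals the chance that a randomly chosen metastasis-positive patient scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch with hypothetical BSI values (the function name and data are illustrative, not from the study):

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores above a random negative one, ties counted
    as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical BSI values for metastasis-positive and -negative patients
positives = [1.8, 2.4, 0.9, 3.1]
negatives = [0.2, 0.5, 1.0, 0.1]
print(round(auc(positives, negatives), 3))  # → 0.938
```

An AUC above 0.7, as reported for all groups, means a positive case outranks a negative one in more than 70% of such pairings.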
PMID:39951220 | DOI:10.1007/s12149-025-02020-z
Insights from the eyes: a systematic review and meta-analysis of the intersection between eye-tracking and artificial intelligence in dementia
Aging Ment Health. 2025 Feb 14:1-9. doi: 10.1080/13607863.2025.2464704. Online ahead of print.
ABSTRACT
OBJECTIVES: Dementia can change oculomotor behavior, which is detectable through eye-tracking. This study aims to systematically review and conduct a meta-analysis of current literature on the intersection between eye-tracking and artificial intelligence (AI) in detecting dementia.
METHOD: PubMed, Embase, Scopus, Web of Science, Cochrane, and IEEE databases were searched up to July 2023. All types of studies that utilized eye-tracking and AI to detect dementia and reported performance metrics were included. Data on dementia type, performance, artificial intelligence, and eye-tracking paradigms were extracted. The registered protocol is available online on PROSPERO (ID: CRD42023451996).
RESULTS: Nine studies were included, with sample sizes ranging from 57 to 583 participants. Alzheimer's disease (AD) was the most common dementia type. Six studies used a machine learning model while three used a deep learning model. Meta-analysis yielded pooled accuracy, sensitivity, and specificity for detecting dementia with eye-tracking and artificial intelligence of 88% [95% CI (83%-92%)], 85% [95% CI (75%-93%)], and 86% [95% CI (79%-93%)], respectively.
CONCLUSION: Eye-tracking coupled with AI shows promising results for dementia detection. Further studies should incorporate larger sample sizes, follow standardized guidelines, and include other dementia types.
PMID:39950960 | DOI:10.1080/13607863.2025.2464704
A combination of deep learning models and type-2 fuzzy for EEG motor imagery classification through spatiotemporal-frequency features
J Med Eng Technol. 2025 Feb 14:1-14. doi: 10.1080/03091902.2025.2463577. Online ahead of print.
ABSTRACT
Developing a robust and effective technique is crucial for accurately interpreting a user's brainwave signals in the realm of biomedical signal processing. The variability and uncertainty present in EEG patterns over time, compounded by noise, pose notable challenges, particularly in mental tasks like motor imagery. Introducing fuzzy components can enhance the system's ability to withstand noisy environments. The emergence of deep learning has significantly impacted artificial intelligence and data analysis, prompting extensive exploration into assessing and understanding brain signals. This work introduces a hybrid series architecture called FCLNET, which combines Compact-CNN to extract frequency and spatial features with an LSTM network for temporal feature extraction. The activation functions in the CNN architecture were implemented using type-2 fuzzy functions to tackle uncertainties. Hyperparameters of the FCLNET model are tuned by the Bayesian optimisation algorithm. The efficacy of this approach is assessed on the BCI Competition IV-2a and BCI Competition IV-1 databases. By incorporating type-2 fuzzy activation functions and employing Bayesian optimisation for tuning, the proposed architecture achieves competitive classification accuracy compared with previously reported results, suggesting that integrating fuzzy units into other classifiers could advance motor imagery-based BCI systems.
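The abstract does not specify FCLNET's type-2 fuzzy activations; as a hedged illustration of the general idea only, an interval type-2 fuzzy sigmoid can be sketched as an upper and a lower membership function whose average serves as the activation. The slopes and the averaging type-reduction below are assumptions, not FCLNET's actual design:

```python
import math

def sigmoid(x, slope=1.0):
    return 1.0 / (1.0 + math.exp(-slope * x))

def it2_fuzzy_sigmoid(x, lower_slope=0.5, upper_slope=2.0):
    """Interval type-2 fuzzy sigmoid: an uncertainty band is formed by an
    upper and a lower membership function (sigmoids with different slopes);
    a simple type reduction averages the two bounds into a crisp output."""
    lower = sigmoid(x, lower_slope)
    upper = sigmoid(x, upper_slope)
    return 0.5 * (lower + upper)

print(it2_fuzzy_sigmoid(0.0))  # both bounds are 0.5 at x = 0
```

The width of the band between the two membership functions is what lets the unit absorb input uncertainty such as EEG noise.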
PMID:39950750 | DOI:10.1080/03091902.2025.2463577
Triboelectric Sensors Based on Glycerol/PVA Hydrogel and Deep Learning Algorithms for Neck Movement Monitoring
ACS Appl Mater Interfaces. 2025 Feb 14. doi: 10.1021/acsami.4c20821. Online ahead of print.
ABSTRACT
Prolonged use of digital devices and sedentary lifestyles have led to an increase in the prevalence of cervical spondylosis among young people, highlighting the urgent need for preventive measures. Recent advancements in triboelectric nanogenerators (TENGs) have shown their potential as self-powered sensors. In this study, we introduce a novel, flexible, and stretchable TENG for neck movement detection. The proposed TENG utilizes a glycerol/poly(vinyl alcohol) (GL/PVA) hydrogel and silicone rubber (GH-TENG). Through optimization of its concentration and thickness parameters and the use of environmentally friendly dopants, the sensitivity of the GH-TENG was improved to 4.50 V/kPa. Subsequently, we developed a smart neck ring with the proposed sensor for human neck movement monitoring. By leveraging the convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) algorithm, sensor data can be efficiently analyzed in both spatial and temporal dimensions, achieving a promising recognition accuracy of 97.14%. Additionally, we developed a neck motion monitoring system capable of accurately identifying and recording neck movements. The system promptly alerts users if they maintain the same neck posture for more than 30 min and provides corresponding recommendations. Deployed on a Raspberry Pi 4B, the system offers a portable and efficient solution for cervical health protection.
PMID:39950449 | DOI:10.1021/acsami.4c20821
Genomic prediction with NetGP based on gene network and multi-omics data in plants
Plant Biotechnol J. 2025 Feb 14. doi: 10.1111/pbi.14577. Online ahead of print.
ABSTRACT
Genomic selection (GS) is a breeding strategy in which traits are predicted from genome-wide markers. However, the prediction accuracy of traditional whole-genome models remains limited because they cannot fully reflect the intricate nonlinear interactions between genotypes and traits. Here, a novel single nucleotide polymorphism (SNP) feature extraction technique based on Pearson-Collinearity Selection (PCS) is first presented, improving prediction accuracy across several established models. Furthermore, the gene network prediction model (NetGP) is a novel deep learning approach designed for phenotypic prediction. It utilizes a transcriptomic dataset (Trans), a genomic dataset (SNP), and a multi-omics dataset (Trans + SNP). The NetGP model demonstrated better performance than other models in genomic, transcriptomic, and multi-omics predictions, and the NetGP multi-omics model performed better than the independent genomic or transcriptomic prediction models. Evaluations on data from several other plant species showed good generalizability for NetGP. Taken together, our study not only offers a novel and effective tool for plant genomic selection but also points to new avenues for future plant breeding research.
PMID:39950326 | DOI:10.1111/pbi.14577
A deep-learning model for predicting tyrosine kinase inhibitor response from histology in gastrointestinal stromal tumor
J Pathol. 2025 Feb 14. doi: 10.1002/path.6399. Online ahead of print.
ABSTRACT
Over 90% of gastrointestinal stromal tumors (GISTs) harbor mutations in KIT or PDGFRA that can predict response to tyrosine kinase inhibitor (TKI) therapies, as recommended by NCCN (National Comprehensive Cancer Network) guidelines. However, gene sequencing for mutation testing is expensive and time-consuming and is susceptible to a variety of preanalytical factors. To overcome the challenges associated with genetic screening by sequencing, in the current study we developed an artificial intelligence-based deep-learning (DL) model that uses convolutional neural networks (CNN) to analyze digitized hematoxylin and eosin staining in tumor histological sections to predict potential response to imatinib or avapritinib treatment in GIST patients. Assessment with an independent testing set showed that our DL model could predict imatinib sensitivity with an area under the curve (AUC) of 0.902 in case-wise analysis and 0.807 in slide-wise analysis. Case-level AUCs for predicting imatinib-dose-adjustment cases, avapritinib-sensitive cases, and wildtype GISTs were 0.920, 0.958, and 0.776, respectively; the corresponding slide-level AUCs were 0.714, 0.922, and 0.886. Our model showed comparable or better prediction of actual response to TKI than sequencing-based screening (accuracy 0.9286 versus 0.8929; DL model versus sequencing), while predictions of nonresponse to imatinib/avapritinib showed markedly higher accuracy than sequencing (0.7143 versus 0.4286). These results demonstrate the potential of a DL model to improve predictions of treatment response to TKI therapy from histology in GIST patients. © 2025 The Pathological Society of Great Britain and Ireland.
PMID:39950223 | DOI:10.1002/path.6399
Use of artificial intelligence for gestational age estimation: a systematic review and meta-analysis
Front Glob Womens Health. 2025 Jan 30;6:1447579. doi: 10.3389/fgwh.2025.1447579. eCollection 2025.
ABSTRACT
INTRODUCTION: Estimating a reliable gestational age (GA) is essential in providing appropriate care during pregnancy. With advancements in data science, there are several publications on the use of artificial intelligence (AI) models to estimate GA using ultrasound (US) images. The aim of this meta-analysis is to assess the accuracy of AI models in assessing GA against US as the gold standard.
METHODS: A literature search was performed in PubMed, CINAHL, Wiley Cochrane Library, Scopus, and Web of Science databases. Studies that reported use of AI models for GA estimation with US as the reference standard were included. Risk of bias assessment was performed using Quality Assessment for Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Mean error in GA was estimated using STATA version-17 and subgroup analysis on trimester of GA assessment, AI models, study design, and external validation was performed.
RESULTS: Out of the 1,039 studies screened, 17 were included in the review, and of these 10 studies were included in the meta-analysis. Five (29%) studies were from high-income countries (HICs), four (24%) from upper-middle-income countries (UMICs), one (6%) from low- and middle-income countries (LMICs), and the remaining seven studies (41%) used data across different income regions. The pooled mean error in GA estimation based on 2D images (n = 6) and blind sweep videos (n = 4) was 4.32 days (95% CI: 2.82, 5.83; I²: 97.95%) and 2.55 days (95% CI: -0.13, 5.23; I²: 100%), respectively. On subgroup analysis based on 2D images, the mean error in GA estimation in the first trimester was 7.00 days (95% CI: 6.08, 7.92), 2.35 days (95% CI: 1.03, 3.67) in the second, and 4.30 days (95% CI: 4.10, 4.50) in the third trimester. In studies using deep learning for 2D images, those employing CNN reported a mean error of 5.11 days (95% CI: 1.85, 8.37) in gestational age estimation, while one using DNN indicated a mean error of 5.39 days (95% CI: 5.10, 5.68). Most studies exhibited an unclear or low risk of bias in various domains, including patient selection, index test, reference standard, flow and timing, and applicability.
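Pooled estimates like those above are obtained by inverse-variance weighting. The review reports random-effects pooling with very high I², so the fixed-effect sketch below is a simplification for illustration only, with each study's standard error recovered from its reported 95% CI; the input numbers are reused per-trimester figures, not the review's per-study data:

```python
def pooled_mean(estimates):
    """Fixed-effect inverse-variance pooling. Each study is (mean, lo, hi),
    where (lo, hi) is its 95% CI; the standard error is recovered as
    (hi - lo) / (2 * 1.96) and each study is weighted by 1 / SE^2."""
    weights, weighted = [], []
    for mean, lo, hi in estimates:
        se = (hi - lo) / (2 * 1.96)
        w = 1.0 / se ** 2
        weights.append(w)
        weighted.append(w * mean)
    return sum(weighted) / sum(weights)

# Illustrative per-study mean GA errors in days with 95% CIs
studies = [(7.0, 6.08, 7.92), (2.35, 1.03, 3.67), (4.3, 4.1, 4.5)]
print(round(pooled_mean(studies), 2))  # → 4.38
```

Note how the study with the narrowest CI dominates the pooled value, which is exactly why heterogeneity (I²) must be reported alongside the pooled mean.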
CONCLUSION: Preliminary experience with AI models shows good accuracy in estimating GA. This holds tremendous potential for pregnancy dating, especially in resource-poor settings where trained interpreters may be limited.
SYSTEMATIC REVIEW REGISTRATION: PROSPERO, identifier (CRD42022319966).
PMID:39950139 | PMC:PMC11821921 | DOI:10.3389/fgwh.2025.1447579
COVID-19 recognition from chest X-ray images by combining deep learning with transfer learning
Digit Health. 2025 Feb 13;11:20552076251319667. doi: 10.1177/20552076251319667. eCollection 2025 Jan-Dec.
ABSTRACT
OBJECTIVE: Based on the current research status, this paper proposes a deep learning model named Covid-DenseNet for COVID-19 detection from CXR (chest X-ray) images, aiming to build a model with smaller computational complexity, stronger generalization ability, and excellent performance on benchmark datasets as well as on datasets with different sample distributions and sizes.
METHODS: The proposed model first extracts and obtains features of multiple scales from the input image through transfer learning, followed by assigning internal weights to the extracted features through the attention mechanism to enhance important features and suppress irrelevant features; finally, the model fuses these features of different scales through the multi-scale fusion architecture we designed to obtain richer semantic information and improve modeling efficiency.
RESULTS: We evaluated our model against advanced models on three publicly available chest radiology datasets of different types. On the baseline dataset used to construct Covid-DenseNet, recognition accuracy on the test set was 96.89%; on the other two public datasets, recognition accuracy reached 98.02% and 96.21%, outperforming other advanced models. In addition, performance was further evaluated on external test sets: models were trained on datasets with balanced (experiment 1) and unbalanced (experiment 2) sample distributions, evaluated on the same external test set, and compared with DenseNet121. Recognition accuracy in experiments 1 and 2 was 80% and 77.5%, respectively, 3.33% and 4.17% higher than DenseNet121 on the external test set. On this basis, we also varied the number of training samples in both experiments to assess the impact of training-set size on external-test accuracy. The results showed that as the number of samples increased and sample features became more abundant, the trained Covid-DenseNet performed better on the external test set and became more robust.
CONCLUSION: Compared with other advanced models, our model achieves better results on multiple datasets and also performs well on external test sets, showing good generalization and robustness. As sample features are enriched, robustness improves further, supporting the model's suitability for clinical practice.
PMID:39949849 | PMC:PMC11822832 | DOI:10.1177/20552076251319667
Universal representation learning for multivariate time series using the instance-level and cluster-level supervised contrastive learning
Data Min Knowl Discov. 2024 May;38(3):1493-1519. doi: 10.1007/s10618-024-01006-1. Epub 2024 Feb 9.
ABSTRACT
The multivariate time series classification (MTSC) task aims to predict a class label for a given time series. Recently, modern deep learning-based approaches have achieved promising performance over traditional methods for MTSC tasks. The success of these approaches relies on access to massive amounts of labeled data (i.e., each sample annotated with its corresponding category). However, obtaining massive amounts of labeled data is usually very time-consuming and expensive in many real-world applications such as medicine, because it requires domain experts' knowledge to annotate data. Insufficient labeled data prevents these models from learning discriminative features, resulting in poor margins that reduce generalization performance. To address this challenge, we propose a novel approach: supervised contrastive learning for time series classification (SupCon-TSC). This approach improves classification performance by learning discriminative low-dimensional representations of multivariate time series, and its end-to-end structure allows for interpretable outcomes. It is based on the supervised contrastive (SupCon) loss to learn the inherent structure of multivariate time series. First, two separate augmentation families, including strong and weak augmentation methods, are utilized to generate augmented data for the source and target networks, respectively. Second, we propose instance-level and cluster-level SupCon learning approaches to capture contextual information and learn discriminative, universal representations for multivariate time series datasets. In the instance-level SupCon learning approach, for each anchor instance from the source network, the low-variance output encodings from the target network are sampled as positive and negative instances based on their labels.
In contrast, the cluster-level approach operates between each instance and cluster centers across batches: the cluster-level SupCon loss attempts to maximize the similarities between each instance and the cluster centers. We tested this novel approach on two small cardiopulmonary exercise testing (CPET) datasets and the real-world UEA multivariate time series archive. The results of the SupCon-TSC model on the CPET datasets indicate its capability to learn more discriminative features than existing approaches when the dataset is small. Moreover, the results on the UEA archive show that training a classifier on top of the universal representation features learned by our proposed method outperforms the state-of-the-art approaches.
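The SupCon loss that this method builds on can be sketched in a few lines for a toy batch. Embedding values, labels, and the temperature below are illustrative; the paper's augmentation pipeline and source/target networks are omitted:

```python
import math

def supcon_loss(embeddings, labels, tau=0.5):
    """Supervised contrastive (SupCon) loss on L2-normalised embeddings:
    for each anchor, positives are all other samples sharing its label,
    and the denominator runs over every other sample in the batch."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    z = [norm(v) for v in embeddings]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    total, n = 0.0, len(z)
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        total -= sum(math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
                     for p in positives) / len(positives)
    return total / n

# Toy batch of 2-D embeddings: class-0 vectors point one way, class-1 another
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-0.1, 1.0]]
labels = [0, 0, 1, 1]
print(supcon_loss(emb, labels))
```

With the correct labels the loss is small because same-class embeddings are already similar; shuffling the labels so positives point in different directions makes it much larger, which is the gradient signal that pulls same-class representations together.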
PMID:39949582 | PMC:PMC11825059 | DOI:10.1007/s10618-024-01006-1
Benefits, limits, and risks of ChatGPT in medicine
Front Artif Intell. 2025 Jan 30;8:1518049. doi: 10.3389/frai.2025.1518049. eCollection 2025.
ABSTRACT
ChatGPT represents a transformative technology in healthcare, with demonstrated impacts across clinical practice, medical education, and research. Studies show significant efficiency gains, including 70% reduction in administrative time for discharge summaries and achievement of medical professional-level performance on standardized tests (60% accuracy on USMLE, 78.2% on PubMedQA). ChatGPT offers personalized learning platforms, automated scoring, and instant access to vast medical knowledge in medical education, addressing resource limitations and enhancing training efficiency. It streamlines clinical workflows by supporting triage processes, generating discharge summaries, and alleviating administrative burdens, allowing healthcare professionals to focus more on patient care. Additionally, ChatGPT facilitates remote monitoring and chronic disease management, providing personalized advice, medication reminders, and emotional support, thus bridging gaps between clinical visits. Its ability to process and synthesize vast amounts of data accelerates research workflows, aiding in literature reviews, hypothesis generation, and clinical trial designs. This paper aims to gather and analyze published studies involving ChatGPT, focusing on exploring its advantages and disadvantages within the healthcare context. To aid in understanding and progress, our analysis is organized into six key areas: (1) Information and Education, (2) Triage and Symptom Assessment, (3) Remote Monitoring and Support, (4) Mental Healthcare Assistance, (5) Research and Decision Support, and (6) Language Translation. Realizing ChatGPT's full potential in healthcare requires addressing key limitations, such as its lack of clinical experience, inability to process visual data, and absence of emotional intelligence. Ethical, privacy, and regulatory challenges further complicate its integration. 
Future improvements should focus on enhancing accuracy, developing multimodal AI models, improving empathy through sentiment analysis, and safeguarding against artificial hallucination. While not a replacement for healthcare professionals, ChatGPT can serve as a powerful assistant, augmenting their expertise to improve efficiency, accessibility, and quality of care. This collaboration ensures responsible adoption of AI in transforming healthcare delivery. While ChatGPT demonstrates significant potential in healthcare transformation, systematic evaluation of its implementation across different healthcare settings reveals varying levels of evidence quality, from robust randomized trials in medical education to preliminary observational studies in clinical practice. This heterogeneity in evidence quality necessitates a structured approach to future research and implementation.
PMID:39949509 | PMC:PMC11821943 | DOI:10.3389/frai.2025.1518049
A Tutorial on the Use of Artificial Intelligence Tools for Facial Emotion Recognition in R
Multivariate Behav Res. 2025 Feb 14:1-15. doi: 10.1080/00273171.2025.2455497. Online ahead of print.
ABSTRACT
Automated detection of facial emotions has been an interesting topic in social and behavioral research for decades but has become feasible only recently. In this tutorial, we review three popular artificial intelligence-based emotion detection programs accessible to R programmers: Google Cloud Vision, Amazon Rekognition, and Py-Feat. We present their advantages and disadvantages and provide sample code so that researchers can immediately begin designing, collecting, and analyzing emotion data. Furthermore, we provide an introductory-level explanation of the machine learning, deep learning, and computer vision algorithms that underlie most emotion detection programs, in order to improve literacy of explainable artificial intelligence in the social and behavioral science literature.
PMID:39949325 | DOI:10.1080/00273171.2025.2455497
An arrhythmia classification using a deep learning and optimisation-based methodology
J Med Eng Technol. 2025 Feb 14:1-9. doi: 10.1080/03091902.2025.2463574. Online ahead of print.
ABSTRACT
The work proposes a methodology for classifying five classes of ECG signals. The methodology uses a moving average filter and the discrete wavelet transform to remove baseline wander and powerline interference. The preprocessed signals are segmented via R-peak detection, after which greyscale and scalogram images are formed. Image features are extracted using the EfficientNet-B0 deep learning model, normalised with z-score normalisation, and optimal features are then selected using a hybrid feature selection method constructed from two filter methods and the Self Adaptive Bald Eagle Search (SABES) optimisation algorithm. Applied to ECG signals for classifying the five beat types, the methodology achieved an accuracy of 99.31%.
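Baseline wander removal with a moving average, as in the preprocessing step above, amounts to estimating the slow drift and subtracting it from the signal. A minimal sketch — the window length and the toy drift signal are assumptions, not the paper's settings:

```python
def remove_baseline(signal, window=5):
    """Estimate slow baseline wander with a centred moving average and
    subtract it from the signal. At the edges the window is clamped to
    the signal bounds, so edge estimates use fewer samples."""
    half = window // 2
    baseline = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        seg = signal[lo:hi]
        baseline.append(sum(seg) / len(seg))
    return [s - b for s, b in zip(signal, baseline)]

# A signal that is pure linear drift: detrending should flatten the middle
drift = [0.1 * i for i in range(11)]
cleaned = remove_baseline(drift, window=5)
print([round(x, 3) for x in cleaned])
```

In practice the window must be much longer than one cardiac cycle so the QRS complexes survive while the sub-1 Hz wander is removed; wavelet decomposition then handles the narrowband powerline component.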
PMID:39949269 | DOI:10.1080/03091902.2025.2463574
A fully automated U-net based ROIs localization and bone age assessment method
Math Biosci Eng. 2025 Jan 3;22(1):138-151. doi: 10.3934/mbe.2025007.
ABSTRACT
Bone age assessment (BAA) is a widely used clinical practice for evaluating the biological development of adolescents. The Tanner-Whitehouse (TW) method is a traditionally mainstream method that manually extracts multiple regions of interest (ROIs) related to skeletal maturity to infer bone age. In this paper, we propose a deep learning-based method for fully automatic ROI localization and BAA. The method consists of two parts. First, a U-net-based backbone, selected for its strong performance in semantic segmentation, enables precise and efficient localization without the need for complex pre- or post-processing, achieving a localization precision of 99.1% on the public RSNA dataset. Second, an InceptionResNetV2 network is utilized for feature extraction from both the ROIs and the whole image, as it effectively captures both local and global features, making it well-suited for bone age prediction. The BAA neural network combines the advantages of ROI-based methods (TW3 method) and global feature-based methods (GP method), providing high interpretability and accuracy. Numerical experiments demonstrate that the method achieves a mean absolute error (MAE) of 0.38 years for males and 0.45 years for females on the public RSNA dataset, and 0.41 years for males and 0.44 years for females on an in-house dataset, validating the accuracy of both localization and prediction.
PMID:39949166 | DOI:10.3934/mbe.2025007
Epileptic seizure detection in EEG signals via an enhanced hybrid CNN with an integrated attention mechanism
Math Biosci Eng. 2025 Jan;22(1):73-105. doi: 10.3934/mbe.2025004. Epub 2024 Dec 25.
ABSTRACT
Epileptic seizures, a prevalent neurological condition, necessitate precise and prompt identification for optimal care. Nevertheless, the intricate characteristics of electroencephalography (EEG) signals, noise, and the need for real-time analysis demand improved, dependable detection approaches. Despite advances in machine learning and deep learning, capturing the intricate spatial and temporal patterns in EEG data remains challenging. This study introduced a novel deep learning framework combining a convolutional neural network (CNN), bidirectional gated recurrent unit (BiGRU), and convolutional block attention module (CBAM). The CNN extracts spatial features, the BiGRU captures long-term temporal dependencies, and the CBAM emphasizes critical spatial and temporal regions, creating a hybrid architecture optimized for EEG pattern recognition. Evaluation on a public EEG dataset revealed superior performance compared to existing methods. The model achieved 99.00% accuracy in binary classification, 96.20% in three-class tasks, 92.00% in four-class scenarios, and 89.00% in five-class classification. High sensitivity (89.00-99.00%) and specificity (89.63-99.00%) across all tasks highlighted the model's robust ability to identify diverse EEG patterns. This approach supports healthcare professionals in diagnosing epileptic seizures accurately and promptly, improving patient outcomes and quality of life.
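The CBAM component reweights feature maps so that informative channels dominate. A stripped-down, hedged sketch of channel attention only — the real CBAM also combines max-pooling with average-pooling and passes the squeezed values through a shared MLP, all omitted here, and the values are toy data:

```python
import math

def channel_attention(feature_maps):
    """Minimal channel-attention gate in the spirit of CBAM: squeeze each
    channel to its global average, map it through a sigmoid to a weight in
    (0, 1), and rescale the channel by that weight."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    gated = []
    for channel in feature_maps:            # channel: list of activations
        weight = sigmoid(sum(channel) / len(channel))
        gated.append([weight * a for a in channel])
    return gated

# Two toy "channels" of EEG-derived features: the strongly activated
# channel keeps most of its magnitude, the weak one is damped
maps = [[2.0, 2.0, 2.0], [-2.0, -2.0, -2.0]]
out = channel_attention(maps)
print([[round(a, 3) for a in ch] for ch in out])
```

The same squeeze-gate-rescale pattern applied along the time axis instead of the channel axis gives the spatial/temporal half of the attention module.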
PMID:39949163 | DOI:10.3934/mbe.2025004
Revolutionizing Brain Tumor Detection Using Explainable AI in MRI Images
NMR Biomed. 2025 Mar;38(3):e70001. doi: 10.1002/nbm.70001.
ABSTRACT
Due to the complex structure of the brain, variations in tumor shapes and sizes, and the resemblance between tumor and healthy tissues, the reliable and efficient identification of brain tumors through magnetic resonance imaging (MRI) presents a persistent challenge. Given that manual identification of tumors is often time-consuming and prone to errors, there is a clear need for advanced automated procedures to enhance detection accuracy and efficiency. Our study addresses the difficulty by creating an improved convolutional neural network (CNN) framework derived from DenseNet121 to augment the accuracy of brain tumor detection. The proposed model was comprehensively evaluated against 12 baseline CNN models and 5 state-of-the-art architectures, namely Vision Transformer (ViT), ConvNeXt, MobileNetV3, FastViT, and InternImage. The proposed model achieved exceptional accuracy rates of 98.4% and 99.3% on two separate datasets, outperforming all 17 models evaluated. Our improved model was integrated with Explainable AI (XAI) techniques, particularly Grad-CAM++, facilitating accurate diagnosis and localization of complex tumor instances, including small metastatic lesions and nonenhancing low-grade gliomas. The XAI framework distinctly highlights essential areas signifying tumor presence, hence enhancing the model's accuracy and interpretability. The results highlight the potential of our method as a reliable diagnostic instrument that not only supports healthcare practitioners in comprehending and confirming artificial intelligence (AI)-driven predictions but also brings transparency to the model's decision-making process, ultimately improving patient outcomes. This advancement signifies a significant progression in the use of AI in neuro-oncology, enhancing diagnostic interpretability and precision.
PMID:39948696 | DOI:10.1002/nbm.70001
In vivo electrophysiology recordings and computational modeling can predict octopus arm movement
Bioelectron Med. 2025 Feb 14;11(1):4. doi: 10.1186/s42234-025-00166-9.
ABSTRACT
The octopus has many features that make it advantageous for revealing principles of motor circuits and control and predicting behavior. Here, an array of carbon electrodes providing single-unit electrophysiology recordings was implanted into the octopus anterior nerve cord. The number of spikes and the arm movements in response to stimulation at different locations along the arm were recorded. We observed that the number of spikes occurring within the first 100 ms after stimulation was predictive of the resultant movement response. Machine learning models showed that temporal electrophysiological features could be used to predict whether an arm movement occurred with 88.64% confidence, and whether it was a lateral arm movement or a grasping motion with 75.45% confidence. Both supervised and unsupervised methods were applied to gain streaming measurements of octopus arm movements and of how their motor circuitry produces rich movement types in real time. For kinematic analysis, deep learning models and unsupervised dimensionality reduction identified a consistent set of features that could be used to distinguish different types of arm movements. The neural circuits and the computational models identified here generated predictions for how to evoke a particular, complex movement in an orchestrated sequence for an individual motor circuit. This study demonstrates how real-time motor behaviors can be predicted and distinguished, contributing to the development of brain-machine interfaces. The ability to accurately model and predict complex movement patterns has broad implications for advancing technologies in robotics, neuroprosthetics, and artificial intelligence, paving the way for more sophisticated and adaptable systems.
PMID:39948616 | DOI:10.1186/s42234-025-00166-9
Prediction of cognitive conversion within the Alzheimer's disease continuum using deep learning
Alzheimers Res Ther. 2025 Feb 13;17(1):41. doi: 10.1186/s13195-025-01686-x.
ABSTRACT
BACKGROUND: Early diagnosis and accurate prognosis of cognitive decline in Alzheimer's disease (AD) are important for timely assignment to optimal treatment modes. We aimed to develop a deep learning model to predict cognitive conversion and guide re-assignment to more intensive therapies where needed.
METHODS: Longitudinal data covering five variable sets (demographics, medical history, neuropsychological outcomes, laboratory results, and neuroimaging results) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort were analyzed. We first developed a deep learning model to predict cognitive conversion using all five variable sets. We then gradually removed variable sets to obtain parsimonious models for four forecasting horizons after baseline, within an acceptable reduction in overall model fit (AUC remaining > 0.8).
RESULTS: A total of 607 individuals were included at baseline, of whom 538 participants were followed up at 12 months, 482 at 24 months, 268 at 36 months and 280 at 48 months. Predictive performance was excellent, with AUCs ranging from 0.87 to 0.92 when all variable sets were considered. Parsimonious prediction models that retained good performance (AUC 0.80-0.84) were established, each including only two variable sets. Neuropsychological outcomes were included in all parsimonious models; in addition, laboratory data were included at years 1 and 2, imaging data at year 3, and demographics at year 4. Under our pre-set threshold, the rate of upgrade to more intensive therapies based on predicted cognitive conversion was always higher than that based on actual cognitive conversion, thereby lowering the false-negative rate, i.e., the proportion of patients who actually needed an upgraded treatment but would have missed it under the prognostic model.
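The parsimonious-model search described above (drop variable sets while AUC stays above 0.8) can be sketched as a greedy backward elimination. The `fit_auc` oracle and its lookup table below are toy stand-ins, not values from the paper; in the study, each candidate would be a retrained deep learning model:

```python
# Sketch: greedy backward elimination of variable sets under an AUC floor.
# The AUC oracle below is hypothetical, for illustration only.

def parsimonious_sets(all_sets, fit_auc, floor=0.80):
    """Drop variable sets one at a time while the refit AUC stays above `floor`."""
    kept = list(all_sets)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for s in list(kept):
            trial = [k for k in kept if k != s]
            if fit_auc(frozenset(trial)) > floor:
                kept = trial
                improved = True
                break
    return kept

# Toy AUC lookup standing in for retraining the model on each subset.
AUCS = {
    frozenset({"demo", "history", "neuro", "lab", "imaging"}): 0.90,
    frozenset({"history", "neuro", "lab", "imaging"}): 0.89,
    frozenset({"neuro", "lab", "imaging"}): 0.87,
    frozenset({"neuro", "lab"}): 0.83,
    frozenset({"neuro"}): 0.78,
}

def fit_auc(sets):
    return AUCS.get(sets, 0.0)

print(parsimonious_sets(["demo", "history", "neuro", "lab", "imaging"], fit_auc))
# -> ['neuro', 'lab']: a two-set model, mirroring the abstract's pattern of
# neuropsychological outcomes plus one other variable set.
```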
CONCLUSIONS: Neuropsychological tests combined with other indicator sets that vary along the AD continuum can aid clinical treatment decisions, leading to improved management of the disease.
TRIAL REGISTRATION INFORMATION: ClinicalTrials.gov Identifier: NCT00106899 (Registration Date: 31 March 2005).
PMID:39948600 | DOI:10.1186/s13195-025-01686-x
A multicentre implementation trial of an Artificial Intelligence-driven biomarker to inform Shared decisions for androgen deprivation therapy in men undergoing prostate radiotherapy: the ASTuTE protocol
BMC Cancer. 2025 Feb 13;25(1):250. doi: 10.1186/s12885-025-13622-1.
ABSTRACT
BACKGROUND: Androgen deprivation therapy (ADT) improves outcomes in men undergoing definitive radiotherapy for prostate cancer but carries significant toxicities. Clinical parameters alone are insufficient to accurately identify patients who will derive the most benefit, highlighting the need for improved patient selection tools to minimize unnecessary exposure to ADT's side effects while ensuring optimal oncological outcomes. The ArteraAI Prostate Test, incorporating a multimodal artificial intelligence (MMAI)-driven digital histopathology-based biomarker, offers prognostic and predictive information to aid in this selection. However, its clinical utility in real-world settings has yet to be measured prospectively.
METHODS: This multicentre implementation trial aims to collect real-world data on the use of the previously validated Artera MMAI-driven prognostic and predictive biomarkers in men with intermediate-risk prostate cancer undergoing curative radiotherapy. The prognostic biomarker estimates the 10-year risk of metastasis, while the predictive biomarker determines the likely benefit from short-term ADT (ST-ADT). A total of 800 participants considering ST-ADT in conjunction with curative radiotherapy will be recruited from multiple Australian centers. Eligible patients with intermediate-risk prostate cancer, as defined by the National Comprehensive Cancer Network, will be asked to participate. The primary endpoint is the percentage of patients for whom testing led to a change in the shared ST-ADT recommendation, analyzed using descriptive statistics and McNemar's test comparing recommendations before and after biomarker testing. Secondary endpoints include the impact on quality of life and 5-year disease control, assessed through linkage with the Prostate Cancer Outcomes Registry. The sample size will be re-evaluated at an interim analysis after 200 patients.
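The primary endpoint above compares each patient's ST-ADT recommendation before and after biomarker testing with McNemar's test, which depends only on the discordant pairs. A minimal sketch, with invented counts for illustration:

```python
# Sketch: McNemar's test statistic for paired before/after recommendations.
# The counts b and c below are hypothetical, not trial data.

def mcnemar_statistic(b, c, continuity=True):
    """Chi-square statistic (1 df) for paired binary outcomes.
    b = ADT recommended before but not after testing; c = the reverse."""
    if b + c == 0:
        return 0.0
    num = (abs(b - c) - 1) ** 2 if continuity else (b - c) ** 2
    return num / (b + c)

b, c = 30, 12  # hypothetical discordant pairs out of 800 participants
stat = mcnemar_statistic(b, c)
print(round(stat, 3))  # compare against the chi-square(1) critical value 3.841
```

Only recommendation *changes* enter the statistic; concordant pairs (same recommendation before and after) drop out, which is why the endpoint focuses on the percentage of patients whose recommendation changed.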
DISCUSSION: ASTuTE will determine the impact of a novel prognostic and predictive biomarker on shared decision-making in the short term, and both quality of life and disease control in the medium term. If the biomarker demonstrates a significant impact on treatment decisions, it could lead to more personalized treatment strategies for men with intermediate-risk prostate cancer, potentially reducing overtreatment and improving quality of life. A potential limitation is the variability in clinical practice across different centers inherent in real-world studies.
TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry, ACTRN12623000713695p. Registered 5 July 2023.
PMID:39948585 | DOI:10.1186/s12885-025-13622-1
Robust CRW crops leaf disease detection and classification in agriculture using hybrid deep learning models
Plant Methods. 2025 Feb 13;21(1):18. doi: 10.1186/s13007-025-01332-5.
ABSTRACT
Plant diseases are a major problem: they degrade crop quality and reduce crop production. Many researchers have applied machine learning (ML) and deep learning (DL) techniques, typically configuring convolutional neural network (CNN) models for specific crops to diagnose plant diseases. Such crop-specific models are hard to justify in practice, as farmers are resource-poor and often have low digital literacy. This study presents Slender-CNN, a model for plant disease detection in corn (C), rice (R) and wheat (W) crops. The designed architecture incorporates parallel convolution layers of different dimensions in order to accurately localize lesions at multiple scales. Experimental results show that the designed network achieves 88.54% accuracy and outperforms several benchmark CNN models: VGG19, EfficientNetb6, ResNeXt, DenseNet201, AlexNet, YOLOv5 and MobileNetV3. In addition, the validated model demonstrates its effectiveness as a multi-purpose tool by correctly categorizing the healthy and infected classes of individual crop types, providing 99.81%, 87.11%, and 98.45% accuracy for the C, R and W crops, respectively. Furthermore, given its strong performance and compactness, the proposed model can be employed for on-farm identification of diseased crops, even in resource-limited settings.
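The multi-scale idea behind the architecture, parallel convolution branches with different kernel sizes whose feature maps are combined so lesions of several sizes are captured at once, can be sketched generically. This is an inception-style illustration with random kernels, not the published Slender-CNN:

```python
# Sketch: parallel convolution branches with different kernel sizes.
# Kernels are random; a real model would learn them.
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2D convolution (cross-correlation form, as in DL)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def parallel_branches(img, kernel_sizes=(3, 5, 7), seed=0):
    """Apply one kernel per size in parallel and stack the feature maps."""
    rng = np.random.default_rng(seed)
    maps = [conv2d_same(img, rng.standard_normal((k, k))) for k in kernel_sizes]
    return np.stack(maps)  # shape: (branches, H, W)

leaf = np.random.default_rng(1).random((16, 16))  # stand-in leaf patch
features = parallel_branches(leaf)
print(features.shape)  # (3, 16, 16)
```

Small kernels respond to fine lesion texture while larger ones cover broader blotches, which is the motivation for running them side by side rather than in sequence.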
PMID:39948565 | DOI:10.1186/s13007-025-01332-5
Applications of digital health technologies and artificial intelligence algorithms in COPD: systematic review
BMC Med Inform Decis Mak. 2025 Feb 13;25(1):77. doi: 10.1186/s12911-025-02870-7.
ABSTRACT
BACKGROUND: Chronic Obstructive Pulmonary Disease (COPD) represents a significant global health challenge, placing considerable burdens on healthcare systems. The rise of digital health technologies (DHTs) and artificial intelligence (AI) algorithms offers new opportunities to improve COPD predictive capabilities, diagnostic accuracy, and patient management. This systematic review explores the types of data in COPD under DHTs, the AI algorithms employed for data analysis, and identifies key application areas reported in the literature.
METHODS: A systematic search was conducted in PubMed and Web of Science for studies published up to December 2024 that applied AI algorithms in digital health for COPD management. Inclusion criteria focused on original research utilizing AI algorithms and digital health technologies for COPD, while review articles were excluded. Two independent reviewers screened the studies, resolving discrepancies through consensus.
RESULTS: From an initial pool of 265 studies, 41 met the inclusion criteria. Analysis of these studies highlighted a diverse range of data types and modalities collected from DHTs in the COPD context, including clinical data, patient-reported outcomes, and environmental/lifestyle data. Machine learning (ML) algorithms were employed in 34 studies, and deep learning (DL) algorithms in 16. Support vector machines and boosting were the most frequently used ML models, while deep neural networks (DNN) and convolutional neural networks (CNN) were the most commonly used DL models. The review identified three key application domains for AI in COPD: screening and diagnosis (10 studies), exacerbation prediction (22 studies), and patient monitoring (9 studies). Disease progression prediction was a prevalent focus across all three domains, with promising accuracy and performance metrics reported.
CONCLUSIONS: Digital health technologies and AI algorithms have a wide range of applications and hold promise for COPD management. ML models, in particular, show great potential for improving digital health solutions for COPD. Future research should focus on enhancing global collaboration to explore the cost-effectiveness and data-sharing capabilities of DHTs, improving the interpretability of AI models, and validating these algorithms through clinical trials to facilitate their safe integration into routine COPD management.
PMID:39948530 | DOI:10.1186/s12911-025-02870-7