Deep learning
Artificial Intelligence Applications in Cardio-Oncology: A Comprehensive Review
Curr Cardiol Rep. 2025 Feb 19;27(1):56. doi: 10.1007/s11886-025-02215-w.
ABSTRACT
PURPOSE OF REVIEW: This review explores the role of artificial intelligence (AI) in cardio-oncology, focusing on its latest applications in the diagnosis, prognosis, risk stratification, and management of cardiovascular (CV) complications in cancer patients. It also highlights multi-omics analysis, explainable AI, and real-time decision-making, while addressing challenges such as data heterogeneity and ethical concerns.
RECENT FINDINGS: AI can advance cardio-oncology by leveraging imaging, electronic health records (EHRs), electrocardiograms (ECGs), and multi-omics data for early cardiotoxicity detection, risk stratification, and long-term risk prediction. Novel AI-ECG models and imaging techniques improve diagnostic accuracy, while multi-omics analysis identifies biomarkers for personalized treatment. However, significant barriers, including data heterogeneity, lack of transparency, and regulatory challenges, hinder widespread adoption. AI significantly enhances early detection and intervention in cardio-oncology. Future efforts should address the impact of AI technologies on clinical outcomes and the associated ethical challenges, to enable broader clinical adoption and improve patient care.
PMID:39969610 | DOI:10.1007/s11886-025-02215-w
Comparison of different dental age estimation methods with deep learning: Willems, Cameriere-European, London Atlas
Int J Legal Med. 2025 Feb 19. doi: 10.1007/s00414-025-03452-y. Online ahead of print.
ABSTRACT
This study compared dental age estimates from the Willems, Cameriere-European, London Atlas, and deep learning methods on panoramic radiographs of Turkish children. The dental ages of 1169 children (613 girls, 556 boys) who agreed to participate in the study were determined with the four methods; the convolutional neural network models for the deep learning method were implemented in the TensorFlow library. Simple correlations and intraclass correlations between the children's chronological ages and the dental age estimates were calculated, and goodness of fit was assessed from the estimation errors using the Akaike Information Criterion, the Bayesian-Schwarz Criterion, the Root Mean Squared Error, and the coefficient of determination, with smaller criterion values indicating better fit. Dental age correlated with chronological age for all four methods (p < 0.001). However, mean dental age estimates differed significantly from chronological age for all methods except the London Atlas in boys (p = 0.179) and for all four methods in girls (p < 0.001). Intraclass correlations between chronological age and each method showed almost perfect agreement, and the predictions of the methods were similar to each other within each gender and overall (ICC_W = 0.92, ICC_CE = 0.94, ICC_LA = 0.95, ICC_DL = 0.89 for all children). Thus, only the London Atlas is suitable for predicting the age of Turkish boys; the Willems and Cameriere-European formulas and the deep learning method need revision.
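The goodness-of-fit comparison above (AIC, Bayesian-Schwarz Criterion, RMSE, coefficient of determination) can be sketched in a few lines. The least-squares AIC/BIC forms and the sample ages below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def fit_metrics(chron_ages, dental_ages, k=1):
    """Goodness-of-fit metrics for a dental age estimation method.

    AIC/BIC use the least-squares form n*ln(SSE/n) + penalty with k free
    parameters; the study's exact formulation may differ.
    """
    n = len(chron_ages)
    errors = [d - c for c, d in zip(chron_ages, dental_ages)]
    sse = sum(e * e for e in errors)          # sum of squared errors
    rmse = math.sqrt(sse / n)
    aic = n * math.log(sse / n) + 2 * k
    bic = n * math.log(sse / n) + k * math.log(n)
    mean_c = sum(chron_ages) / n
    ss_tot = sum((c - mean_c) ** 2 for c in chron_ages)
    r2 = 1 - sse / ss_tot                     # coefficient of determination
    return {"rmse": rmse, "aic": aic, "bic": bic, "r2": r2}
```

Lower RMSE/AIC/BIC and higher R^2 would favor one estimation method over another.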
PMID:39969569 | DOI:10.1007/s00414-025-03452-y
Robust and generalizable artificial intelligence for multi-organ segmentation in ultra-low-dose total-body PET imaging: a multi-center and cross-tracer study
Eur J Nucl Med Mol Imaging. 2025 Feb 19. doi: 10.1007/s00259-025-07156-8. Online ahead of print.
ABSTRACT
PURPOSE: Positron Emission Tomography (PET) is a powerful molecular imaging tool that visualizes radiotracer distribution to reveal physiological processes. Recent advances in total-body PET have enabled low-dose, CT-free imaging; however, accurate organ segmentation using PET-only data remains challenging. This study develops and validates a deep learning model for multi-organ PET segmentation across varied imaging conditions and tracers, addressing critical needs for fully PET-based quantitative analysis.
MATERIALS AND METHODS: This retrospective study employed a 3D deep learning-based model for automated multi-organ segmentation on PET images acquired under diverse conditions, including low-dose and non-attenuation-corrected scans. Using a dataset of 798 patients from multiple centers with varied tracers, model robustness and generalizability were evaluated via multi-center and cross-tracer tests. Ground-truth labels for 23 organs were generated from CT images, and segmentation accuracy was assessed using the Dice similarity coefficient (DSC).
RESULTS: In the multi-center dataset from four different institutions, our model achieved average DSC values of 0.834, 0.825, 0.819, and 0.816 across varying dose reduction factors and correction conditions for FDG PET images. In the cross-tracer dataset, the model reached average DSC values of 0.737, 0.573, 0.830, 0.661, and 0.708 for DOTATATE, FAPI, FDG, Grazytracer, and PSMA, respectively.
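The Dice similarity coefficient used to score each organ can be computed directly from two binary masks. This is a generic sketch of the metric, not the study's implementation:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks:
    2*|A intersect B| / (|A| + |B|); defined as 1.0 when both are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2 * inter / size if size else 1.0
```

A per-organ average of this value over the test set yields the DSC figures reported above.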
CONCLUSION: The proposed model demonstrated effective, fully PET-based multi-organ segmentation across a range of imaging conditions, centers, and tracers, achieving high robustness and generalizability. These findings underscore the model's potential to enhance clinical diagnostic workflows by supporting ultra-low dose PET imaging.
CLINICAL TRIAL NUMBER: Not applicable. This is a retrospective study based on collected data, which has been approved by the Research Ethics Committee of Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine.
PMID:39969540 | DOI:10.1007/s00259-025-07156-8
Prediction of adverse pathology in prostate cancer using a multimodal deep learning approach based on [(18)F]PSMA-1007 PET/CT and multiparametric MRI
Eur J Nucl Med Mol Imaging. 2025 Feb 19. doi: 10.1007/s00259-025-07134-0. Online ahead of print.
ABSTRACT
PURPOSE: Accurate prediction of adverse pathology (AP) in prostate cancer (PCa) patients is crucial for formulating effective treatment strategies. This study aims to develop and evaluate a multimodal deep learning model based on [18F]PSMA-1007 PET/CT and multiparametric MRI (mpMRI) to predict the presence of AP, and investigate whether the model that integrates [18F]PSMA-1007 PET/CT and mpMRI outperforms the individual PET/CT or mpMRI models in predicting AP.
METHODS: A total of 341 PCa patients who underwent radical prostatectomy (RP) and had both mpMRI and PET/CT scans were retrospectively analyzed. We generated a deep learning signature from mpMRI and PET/CT using a multimodal deep learning model (MPC) based on convolutional neural networks and a transformer, and subsequently combined it with clinical characteristics to construct an integrated model (MPCC). These models were compared with clinical models and with single-modality mpMRI or PET/CT models.
RESULTS: The MPCC model showed the best performance in predicting AP (AUC, 0.955 [95% CI: 0.932-0.975]), higher than that of the MPC model (AUC, 0.930 [95% CI: 0.901-0.955]). The MPC model in turn outperformed the single PET/CT (AUC, 0.813 [95% CI: 0.780-0.845]) and mpMRI (AUC, 0.865 [95% CI: 0.829-0.901]) models. The MPCC model was also effective in predicting individual adverse pathological features.
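The AUC values reported above can be understood through the Mann-Whitney formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A generic sketch (labels and scores below are made up for illustration):

```python
def auc(labels, scores):
    """AUC as the probability that a positive outranks a negative
    (Mann-Whitney formulation; ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```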
CONCLUSION: The deep learning model that integrates mpMRI and [18F]PSMA-1007 PET/CT enhances the predictive capabilities for the presence of AP in PCa patients. This improvement aids physicians in making informed preoperative decisions, ultimately enhancing patient prognosis.
PMID:39969539 | DOI:10.1007/s00259-025-07134-0
Deep Learning-based Brain Age Prediction Using MRI to Identify Fetuses with Cerebral Ventriculomegaly
Radiol Artif Intell. 2025 Feb 19:e240115. doi: 10.1148/ryai.240115. Online ahead of print.
ABSTRACT
Fetal ventriculomegaly (VM), its severity, and associated central nervous system (CNS) abnormalities are important indicators of high risk for impaired neurodevelopmental outcomes. Recently, a novel fetal brain age prediction method using a 2D single-channel convolutional neural network (CNN) with multiplanar MRI slices showed potential to detect fetuses with VM. This study examined the diagnostic performance of the deep learning-based fetal brain age prediction model in distinguishing fetuses with VM (n = 317) from typically developing fetuses (n = 183), and its relation to VM severity and the presence of associated CNS abnormalities. The predicted age difference (PAD) was measured by subtracting the predicted brain age from the gestational age in fetuses with VM and in typically developing fetuses. PAD and the absolute value of PAD (AAD) were compared between VM and typically developing fetuses, as well as between subgroups defined by VM severity and by the presence of associated CNS abnormalities. Fetuses with VM showed significantly larger AAD than typically developing fetuses (P < .001), and fetuses with severe VM showed larger AAD than those with moderate VM (P = .004). Fetuses with VM and associated CNS abnormalities had significantly lower PAD than fetuses with isolated VM (P = .005). These findings suggest that fetal brain age prediction with the 2D single-channel CNN can assist in identifying not only ventricular enlargement but also the presence of associated CNS abnormalities. ©RSNA, 2025.
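The PAD and AAD measures reduce to simple arithmetic. A sketch following the abstract's stated definition (PAD = gestational age minus predicted brain age); the ages used below are hypothetical:

```python
def predicted_age_difference(gestational_age, predicted_brain_age):
    """PAD as defined in the abstract: gestational age minus predicted
    brain age (both in weeks)."""
    return gestational_age - predicted_brain_age

def mean_aad(pairs):
    """Mean absolute predicted age difference over (GA, predicted) pairs;
    larger values indicate brain maturation deviating from gestational age."""
    diffs = [abs(predicted_age_difference(g, p)) for g, p in pairs]
    return sum(diffs) / len(diffs)
```

Group-level comparisons of these quantities (e.g., VM vs. typically developing) underlie the reported P values.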
PMID:39969279 | DOI:10.1148/ryai.240115
ACU-Net: Attention-based convolutional U-Net model for segmenting brain tumors in fMRI images
Digit Health. 2025 Feb 17;11:20552076251320288. doi: 10.1177/20552076251320288. eCollection 2025 Jan-Dec.
ABSTRACT
OBJECTIVE: Accurate segmentation of brain tumors in medical imaging is essential for diagnosis and treatment planning. Current techniques often struggle with capturing complex tumor features and are computationally demanding, limiting their clinical application. This study introduces the attention-based convolutional U-Net (ACU-Net) model, designed to improve segmentation accuracy and efficiency in fMRI images by incorporating attention mechanisms that selectively highlight critical features while preserving spatial context.
METHODS: The ACU-Net model combines convolutional neural networks (CNNs) with attention mechanisms to enhance feature extraction and spatial coherence. We evaluated ACU-Net on the BraTS 2018 and BraTS 2020 fMRI datasets using rigorous data splitting for training, validation, and testing. Performance metrics, particularly Dice scores, were used to assess segmentation accuracy across different tumor regions, including whole tumor (WT), tumor core (TC), and enhancing tumor (ET) classes.
RESULTS: ACU-Net demonstrated high segmentation accuracy, achieving Dice scores of 99.23%, 99.27%, and 96.99% for WT, TC, and ET, respectively, on the BraTS 2018 dataset, and 98.72%, 98.40%, and 97.66% for WT, TC, and ET on the BraTS 2020 dataset. These results indicate that ACU-Net effectively captures tumor boundaries and subregions with precision, surpassing traditional segmentation approaches.
CONCLUSION: The ACU-Net model shows significant potential to enhance clinical diagnosis and treatment planning by providing precise and efficient brain tumor segmentation in fMRI images. The integration of attention mechanisms within a CNN architecture proves beneficial for identifying complex tumor structures, suggesting that ACU-Net can be a valuable tool in medical imaging applications.
PMID:39968528 | PMC:PMC11833834 | DOI:10.1177/20552076251320288
Short-Term Associations Between Ambient Ozone and Acute Myocardial Infarction Onset Among Younger Patients: Results From the VIRGO Study
Geohealth. 2025 Feb 18;9(2):e2024GH001234. doi: 10.1029/2024GH001234. eCollection 2025 Feb.
ABSTRACT
The association between ambient ozone (O3) and acute myocardial infarction (AMI) onset is unclear, particularly for younger patients and AMI subtypes. This study examined the short-term association of O3 with AMI onset in patients aged 18-55 years and explored differences by AMI subtypes and patient characteristics. We analyzed 2,322 AMI patients admitted to 103 US hospitals (2008-2012). Daily maximum 8-hr O3 concentrations estimated using a spatiotemporal deep learning approach were assigned to participants' home addresses. We used a time-stratified case-crossover design with conditional logistic regression to assess the association between O3 and AMI, adjusting for fine particulate matter, air temperature, and relative humidity. We conducted stratified analyses to examine associations for AMI subtypes and effect modification by sociodemographic status, lifestyle factors, and medical history. An interquartile range (16.6 ppb) increase in O3 concentrations was associated with an increased AMI risk at lag 4 days (odds ratio [OR] = 1.21, 95% confidence interval [CI]: 1.08-1.34) and lag 5 days (OR = 1.11, 95% CI: 1.00-1.24). The association was more pronounced for non-ST-segment elevation AMI and type 2 AMI compared with ST-segment elevation AMI and type 1 AMI, respectively. Stronger O3-AMI associations were observed in non-Hispanic Blacks than in non-Hispanic Whites. Our study provides evidence that short-term O3 exposure is associated with increased AMI risk in younger patients, with varying associations across AMI subtypes. The effect modification by race/ethnicity highlights the need for population-specific intervention strategies.
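The reported odds ratios per interquartile-range increase follow from the per-unit log-odds coefficient of the conditional logistic regression. A minimal sketch; the coefficient and standard error values in the test are hypothetical, not the study's estimates:

```python
import math

def or_per_iqr(beta, iqr, se=None, z=1.96):
    """Odds ratio for an IQR increase in exposure, given a per-unit
    log-odds coefficient `beta` from (conditional) logistic regression.

    Returns the OR alone, or (OR, (low, high)) when a standard error
    is supplied for a normal-approximation 95% CI.
    """
    or_ = math.exp(beta * iqr)
    if se is None:
        return or_
    low = math.exp((beta - z * se) * iqr)
    high = math.exp((beta + z * se) * iqr)
    return or_, (low, high)
```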
PMID:39968338 | PMC:PMC11833228 | DOI:10.1029/2024GH001234
Identifying somatic driver mutations in cancer with a language model of the human genome
Comput Struct Biotechnol J. 2025 Jan 17;27:531-540. doi: 10.1016/j.csbj.2025.01.011. eCollection 2025.
ABSTRACT
Somatic driver mutations play important roles in cancer and must be precisely identified to advance our understanding of tumorigenesis and its promotion and progression. However, identifying somatic driver mutations remains challenging in Homo sapiens genomics due to the random nature of mutations and the high cost of qualitative experiments. Building on the powerful sequence interpretation capabilities of language models, we propose a self-attention-based contextualized pretrained language model for somatic driver mutation identification. We pretrained the model with the Homo sapiens reference genome to equip it with the ability to understand genome sequences and then fine-tuned it for oncogene and tumor suppressor gene prediction tasks, enabling it to extract features related to driver genes from the original genome sequence. The fine-tuned model was used to obtain the mutations' carcinogenic effect characteristics to further identify whether the mutation is a driver or a passenger. Compared with other computational algorithms, our method achieved excellent somatic driver mutation identification performance on the test set, with an absolute improvement of 4.31% in AUROC over the best comparison method. The strong performance of our method indicates that it can provide new insights into the discovery of cancer drivers.
PMID:39968174 | PMC:PMC11833646 | DOI:10.1016/j.csbj.2025.01.011
Role of artificial intelligence in smart grid - a mini review
Front Artif Intell. 2025 Feb 4;8:1551661. doi: 10.3389/frai.2025.1551661. eCollection 2025.
ABSTRACT
A smart grid is a system that regulates, operates, and utilizes integrated energy sources through smart communication and computing techniques. The operation and maintenance of smart grids now depend extensively on artificial intelligence methods. From improving load forecasting accuracy to optimizing power distribution and guaranteeing fault identification, artificial intelligence is enabling more dependable, efficient, and sustainable energy systems. Substituting artificial intelligence for manual tasks can yield an intelligent smart grid with high efficiency, dependability, and affordability across the energy supply chain, from production to consumption. Collecting a large diversity of data is vital for effective decision-making: artificial intelligence applications operate by processing abundant data samples with advanced computing and strong communication collaboration. The development of appropriate infrastructure resources, including big data, cloud computing, and other collaboration platforms, must therefore be enhanced to support this type of operation. In this paper, an attempt has been made to summarize the artificial intelligence techniques used in various aspects of smart grid systems.
PMID:39968172 | PMC:PMC11832663 | DOI:10.3389/frai.2025.1551661
Exploring autonomous methods for deepfake detection: A detailed survey on techniques and evaluation
Heliyon. 2025 Jan 25;11(3):e42273. doi: 10.1016/j.heliyon.2025.e42273. eCollection 2025 Feb 15.
ABSTRACT
The rapid progress of deepfake technology has blurred the boundary between reality and deceit, raising substantial concerns over the authenticity of digital media content. Deepfakes, which involve the manipulation of images, audio, and video to produce highly convincing yet completely fabricated content, present significant risks to media, politics, and personal well-being. To address this growing problem, our comprehensive survey investigates the advancement and evaluation of autonomous techniques for identifying and assessing deepfake media. This paper provides an in-depth analysis of state-of-the-art techniques and tools for identifying deepfakes, encompassing image-, video-, and audio-based content. We examine the underlying technologies, such as deep learning models, and evaluate their efficacy in differentiating real from manipulated media. In addition, we explore novel detection methods that employ sophisticated machine learning, computer vision, and audio analysis techniques. Our study covers exclusively the most recent research, published between 2018 and 2024, representing the newest developments in the area. In an era where distinguishing fact from fiction is paramount, we aim to enhance the security and awareness of the digital ecosystem by advancing the understanding of autonomous detection and evaluation methods.
PMID:39968137 | PMC:PMC11834059 | DOI:10.1016/j.heliyon.2025.e42273
Diagnostic Performance of a Computer-aided System for Tuberculosis Screening in Two Philippine Cities
Acta Med Philipp. 2025 Jan 31;59(2):33-40. doi: 10.47895/amp.vi0.8950. eCollection 2025.
ABSTRACT
BACKGROUND AND OBJECTIVES: The Philippines faces challenges in tuberculosis (TB) screening, among them a shortage of health workers who are trained and authorized to screen for TB. Deep learning neural networks (DLNNs) have shown potential in TB screening using chest radiographs (CXRs), but local studies on AI-based TB screening are limited. This study evaluated the diagnostic performance of qXR3.0 technology for TB screening in Filipino adults aged 15 and older. Specifically, we evaluated the sensitivity and specificity of qXR3.0 compared with radiologists' impressions and determined whether it meets World Health Organization (WHO) standards.
METHODS: A prospective cohort design was used to compare the screening and diagnostic accuracies of qXR3.0 and two radiologists' gradings, in accordance with the Standards for Reporting Diagnostic Accuracy (STARD). Subjects seeking consultation at two Metro Manila clinics equipped with qXR3.0 at the time of the study were invited to participate and had CXRs and sputum samples collected. The radiologists' and qXR3.0's readings and impressions were compared against the reference standard, the Xpert MTB/RIF assay, and diagnostic accuracy measures were calculated.
RESULTS: With 82 participants, qXR3.0 demonstrated 100% sensitivity and 72.7% specificity with respect to the reference standard. There was a strong agreement between qXR3.0 and radiologists' readings as exhibited by the 0.7895 (between qXR 3.0 and CXRs read by at least one radiologist), 0.9362 (qXR 3.0 and CXRs read by both radiologists), and 0.9403 (qXR 3.0 and CXRs read as not suggestive of TB by at least one radiologist) concordance indices.
CONCLUSIONS: qXR3.0 demonstrated high sensitivity in identifying the presence of TB among patients and meets the WHO standard of at least 70% specificity for detecting true TB infection. This shows immense potential for the tool to compensate for the shortage of radiologists for TB screening in the country. Future research may consider larger sample sizes to confirm these findings and explore the economic value of mainstream adoption of qXR3.0 for TB screening.
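Sensitivity and specificity against the Xpert MTB/RIF reference reduce to ratios over the confusion counts. A generic sketch; the counts in the test are hypothetical, not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity of a screening tool against a
    reference standard, from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positives among reference-positives
    specificity = tn / (tn + fp)   # true negatives among reference-negatives
    return sensitivity, specificity
```

The WHO target product profile for a triage test is checked by comparing these two values against the required thresholds.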
PMID:39967706 | PMC:PMC11831083 | DOI:10.47895/amp.vi0.8950
RAG_MCNNIL6: A Retrieval-Augmented Multi-Window Convolutional Network for Accurate Prediction of IL-6 Inducing Epitopes
J Chem Inf Model. 2025 Feb 19. doi: 10.1021/acs.jcim.4c02144. Online ahead of print.
ABSTRACT
Interleukin-6 (IL-6) is a critical cytokine involved in immune regulation, inflammation, and the pathogenesis of various diseases, including autoimmune disorders, cancer, and the cytokine storm associated with severe COVID-19. Identifying IL-6 inducing epitopes, the short peptide fragments that trigger IL-6 production, is crucial for developing epitope-based vaccines and immunotherapies. However, traditional methods for epitope prediction often lack accuracy and efficiency. This study presents RAG_MCNNIL6, a novel deep learning framework that integrates Retrieval-augmented generation (RAG) with multiwindow convolutional neural networks (MCNNs) for accurate and rapid prediction of IL-6 inducing epitopes. RAG_MCNNIL6 leverages ProtTrans, a state-of-the-art pretrained protein language model, to generate rich embedding representations of peptide sequences. By incorporating a RAG-based similarity retrieval and embedding augmentation strategy, RAG_MCNNIL6 effectively captures both local and global sequence patterns relevant for IL-6 induction, significantly improving prediction performance compared to existing methods. We demonstrate the superior performance of RAG_MCNNIL6 on benchmark data sets, highlighting its potential for advancing research and therapeutic development for IL-6-mediated diseases.
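The retrieval-and-augmentation step can be illustrated as cosine-similarity retrieval over reference embeddings followed by a convex blend of the query with the retrieved mean. The fusion rule and the `alpha` weight here are assumptions for illustration; the abstract does not specify RAG_MCNNIL6's exact mechanism:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def retrieve_and_augment(query_emb, reference_embs, k=2, alpha=0.5):
    """Augment a peptide embedding with the mean of its top-k most
    similar reference embeddings (hypothetical fusion rule)."""
    ranked = sorted(reference_embs,
                    key=lambda r: cosine(query_emb, r), reverse=True)
    top = ranked[:k]
    mean = [sum(vals) / len(top) for vals in zip(*top)]
    return [(1 - alpha) * q + alpha * m for q, m in zip(query_emb, mean)]
```

The augmented embedding would then feed the multi-window convolutional classifier.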
PMID:39967508 | DOI:10.1021/acs.jcim.4c02144
A deep-learning based system for diagnosing multitype gastric lesions under white-light endoscopy
Chin Med J (Engl). 2025 Feb 19. doi: 10.1097/CM9.0000000000003421. Online ahead of print.
NO ABSTRACT
PMID:39967314 | DOI:10.1097/CM9.0000000000003421
Hybrid deep learning for computational precision in cardiac MRI segmentation: Integrating Autoencoders, CNNs, and RNNs for enhanced structural analysis
Comput Biol Med. 2025 Mar;186:109597. doi: 10.1016/j.compbiomed.2024.109597. Epub 2025 Jan 1.
ABSTRACT
Recent advancements in cardiac imaging have been significantly enhanced by integrating deep learning models, offering transformative potential in early diagnosis and patient care. The research paper explores the application of hybrid deep learning methodologies, focusing on the roles of Autoencoders, Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs) in enhancing cardiac image analysis. The study implements a comprehensive approach, combining traditional algorithms such as Sobel, Watershed, and Otsu's Thresholding with advanced deep learning models to achieve precise and accurate imaging outcomes. The Autoencoder model, developed for image enhancement and feature extraction, achieved a notable accuracy of 99.66% on the test data. Optimized for image recognition tasks, the CNN model demonstrated a high precision rate of 98.9%. The RNN model, utilized for sequential data analysis, showed a prediction accuracy of 98%, further underscoring the robustness of the hybrid framework. The research drew upon a diverse range of academic databases and pertinent publications within cardiac imaging and deep learning, focusing on peer-reviewed articles and studies published in the past five years. Models were implemented using the TensorFlow and Keras frameworks. The proposed methodology was evaluated in the clinical validation phase using advanced imaging protocols, including the QuickScan technique and balanced steady-state free precession (bSSFP) imaging. The validation metrics were promising: the Signal-to-Noise Ratio (SNR) was improved by 15%, the Contrast-to-Noise Ratio (CNR) saw an enhancement of 12%, and the ejection fraction (EF) analysis provided a 95% correlation with manually segmented data. These metrics confirm the efficacy of the models, showing significant improvements in image quality and diagnostic accuracy. 
The integration of adversarial defense strategies, such as adversarial training and model ensembling, has been analyzed to enhance model robustness against malicious inputs. The model's reliability was also investigated, comparing its ability to maintain clinical integrity even under adversarial attacks that could otherwise compromise segmentation outcomes. These findings indicate that integrating Autoencoders, CNNs, and RNNs within a hybrid deep-learning framework is promising for enhancing cardiac MRI segmentation and early diagnosis. The study contributes to the field by demonstrating the applicability of these advanced techniques in clinical settings, paving the way for improved patient outcomes through more accurate and timely diagnoses.
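Of the traditional algorithms the study combines with deep models, Otsu's thresholding is the most self-contained. A minimal pure-Python sketch for 8-bit intensities, not the paper's implementation:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the gray level maximizing the between-class
    variance of the two resulting intensity classes."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                 # weight of class 0 (intensity <= t)
        if w0 == 0:
            continue
        w1 = total - w0               # weight of class 1 (intensity > t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold form the foreground mask that a deep model can then refine.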
PMID:39967188 | DOI:10.1016/j.compbiomed.2024.109597
High-throughput, rapid, and non-destructive detection of common foodborne pathogens via hyperspectral imaging coupled with deep neural networks and support vector machines
Food Res Int. 2025 Feb;202:115598. doi: 10.1016/j.foodres.2024.115598. Epub 2025 Jan 7.
ABSTRACT
Foodborne pathogens such as Bacillus cereus, Staphylococcus aureus, and Escherichia coli are major causes of gastrointestinal diseases worldwide and frequently contaminate dairy products. Compared to nucleic acid detection and MALDI-TOF MS, hyperspectral imaging (HSI) offers advantages such as multiband, rapid, minimally damaging, non-contact, and non-destructive detection. However, current HSI methods require agar plate cultures, which are time-consuming and labor-intensive. This study is the first to collect HSI spectra from bacterial broth in a 24-well plate, combined with machine learning for enhanced feature extraction and classification. After data augmentation and dimensionality reduction via principal component analysis (PCA) and linear discriminant analysis (LDA), deep neural networks and support vector machines (DNN-SVM) achieved prediction accuracies of 100% on the training set, 98.31% on the testing set, and 93.33% on the validation set for classifying B. cereus, E. coli, and S. aureus. The resulting high-throughput, rapid, and non-destructive detection method is expected to process 24 bacterial broth samples in under ten minutes, indicating the potential of HSI as a feasible, robust, and non-destructive solution for real-time monitoring of microbial pathogens in food.
PMID:39967133 | DOI:10.1016/j.foodres.2024.115598
Linear regressive weighted Gaussian kernel liquid neural network for brain tumor disease prediction using time series data
Sci Rep. 2025 Feb 18;15(1):5912. doi: 10.1038/s41598-025-89249-w.
ABSTRACT
A brain tumor is an abnormal growth of cells within the brain or surrounding tissues, which can be either benign or malignant. Brain tumors develop in various regions of the brain, each affecting different functions such as movement, speech, and vision, depending on their location. Early prediction of brain tumors is crucial for improving survival rates and treatment outcomes. Advanced techniques, including medical imaging and machine learning, are widely used for early diagnosis. However, conventional machine learning and deep learning detection models face challenges in achieving high accuracy in brain tumor disease prediction while minimizing time complexity. To address this, a novel Linear Regressive Weighted Gaussian Kernel Liquid Neural Network (LRWGKLNN) model is developed. The proposed LRWGKLNN model comprises four major steps, namely data acquisition, preprocessing, feature selection, and classification. In the initial step, a large volume of time-series data samples is collected from a comprehensive dataset. Following data collection, preprocessing is performed, involving two key processes: handling missing data and outlier detection. First, the proposed LRWGKLNN model handles missing values using a linear regression method. After the imputation process, outlier data is identified and removed using the Generalized Extreme Studentized Deviation test. Once preprocessing is complete, the Cosine Congruence Weighted Majority Algorithm is employed to select significant features from the dataset while removing irrelevant features. This step helps minimize the brain tumor disease prediction time. Finally, the classification process is performed using the selected significant features with the Gaussian Kernelized Liquid Neural Network. This approach enhances the accuracy of brain tumor disease prediction using time-series data samples. 
The experimental evaluation is carried out using various performance metrics, including accuracy, precision, recall, specificity, F1 score, and disease prediction time, with respect to the number of time-series data samples. The results demonstrate that the proposed LRWGKLNN model achieves 4%, 4%, 5%, 4%, and 4% higher accuracy, precision, recall, specificity, and F1 score, respectively, in brain tumor disease prediction. Furthermore, with feature selection, the LRWGKLNN model reduces time consumption by 16% compared to existing deep learning methods.
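The linear regressive imputation step can be sketched as fitting a least-squares line to the observed time points and filling gaps from it. This is a simple stand-in under that assumption, not the authors' exact method:

```python
def impute_linear(series):
    """Fill None gaps in a time series using a least-squares line fitted
    to the observed (index, value) points."""
    xs = [i for i, v in enumerate(series) if v is not None]
    ys = [series[i] for i in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    intercept = my - slope * mx
    return [v if v is not None else slope * i + intercept
            for i, v in enumerate(series)]
```

After imputation, the pipeline would apply the outlier test and feature selection to the completed series.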
PMID:39966518 | DOI:10.1038/s41598-025-89249-w
Accelerating veterinary low field MRI acquisitions using the deep learning based denoising solution HawkAI
Sci Rep. 2025 Feb 18;15(1):5846. doi: 10.1038/s41598-025-88822-7.
ABSTRACT
Magnetic resonance imaging (MRI) has changed veterinary diagnosis, but its long sequence times can be problematic, especially because animals must be sedated during the exam. Unfortunately, shorter scan times imply a fall in overall image quality and diagnostic reliability. We therefore developed a Generative Adversarial Network-based denoising algorithm called HawkAI. In this study, a Standard-Of-Care (SOC) MRI sequence and then a faster sequence were acquired, and HawkAI was applied to the latter. Radiologists were asked to qualitatively evaluate the two proposed images on several factors using a Likert scale (from 1, strong preference for HawkAI, to 5, strong preference for SOC). The aim was to show that they had at least no preference between the two sequences in terms of Signal-to-Noise Ratio (SNR) and diagnosis. They slightly preferred HawkAI in terms of SNR (confidence interval (CI) [1.924-2.176]), had no preference in terms of artifact presence, diagnostic pertinence, and lesion conspicuity (respective CIs [2.933-3.113], [2.808-3.132], and [2.941-3.119]), and had a very slight preference for SOC in terms of spatial resolution and image contrast (respective CIs [3.153-3.453] and [3.072-3.348]). This shows the possibility of acquiring images twice as fast without any major drawback compared to a longer acquisition.
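Confidence intervals for mean Likert scores like those above can be reproduced with a normal approximation. A sketch with hypothetical ratings; the study's exact interval method is not stated:

```python
import math

def likert_ci(scores, z=1.96):
    """Normal-approximation 95% CI for the mean Likert score
    (here 1 = strong preference for HawkAI, 5 = strong preference for SOC)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean - half, mean + half
```

An interval lying entirely below 3 indicates a preference for HawkAI; one straddling 3 indicates no clear preference.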
PMID:39966480 | DOI:10.1038/s41598-025-88822-7
Change analysis of surface water clarity in the Persian Gulf and the Oman Sea by remote sensing data and an interpretable deep learning model
Environ Sci Pollut Res Int. 2025 Feb 18. doi: 10.1007/s11356-025-36018-x. Online ahead of print.
ABSTRACT
The health of an ecosystem and the quality of its water can be gauged by water clarity. The Persian Gulf and the Oman Sea have a unique ecosystem, and monitoring their water clarity is necessary for sustainable development. Here, criteria including hue angle, chlorophyll-a, the Forel-Ule index, organic carbon (OC), precipitation, sea surface salinity (SSS), Secchi disk depth (SDD), and sea surface temperature (SST) were analyzed from 2002 to 2018 using MODIS-Aqua imagery, statistical tests, and deep learning (DL) models to monitor the water clarity of the Persian Gulf and the Oman Sea. The criteria differed across regions, with coastal areas showing higher Forel-Ule index and chlorophyll-a values. Positive trends were observed for the Forel-Ule index and OC in the Persian Gulf and the Oman Sea, while negative trends were seen for SSS and SST in the Persian Gulf. The convolutional neural network (CNN) model predicted water clarity better than the long short-term memory (LSTM) model. Interpretation techniques were used to rank the importance of the criteria in monitoring water clarity, with the Forel-Ule index, hue angle, and OC showing the greatest interaction. Sensitivity analysis revealed that chlorophyll-a and SSS had the most significant impact on water clarity prediction. Overall, this combination of DL models and MODIS-Aqua imagery can help improve water quality monitoring and protect the environment.
PMID:39966320 | DOI:10.1007/s11356-025-36018-x
Enhancing diabetic retinopathy diagnosis: automatic segmentation of hyperreflective foci in OCT via deep learning
Int Ophthalmol. 2025 Feb 18;45(1):79. doi: 10.1007/s10792-025-03439-z.
ABSTRACT
PURPOSE: Hyperreflective foci (HRF) are small, punctate lesions ranging from 20 to 50 μm with high reflective intensity in optical coherence tomography (OCT) images of patients with diabetic retinopathy (DR). This study aims to develop a model that precisely identifies and segments HRF in OCT images of DR patients. Accurate segmentation of HRF is essential for assisting ophthalmologists in early diagnosis and in assessing the effectiveness of treatment and prognosis.
METHODS: We introduce an HRF segmentation algorithm based on the KiU-Net architecture. The model comprises two branches: a Kite-Net branch that uses up-sampling coding to capture detailed information, and a three-layer U-Net branch that extracts high-level semantic information. To enhance the capacity of the network, we designed a cross-attention block (CAB) that combines the information extracted from both branches, effectively integrating detail and semantic features.
RESULTS: Experimental results demonstrate that our model significantly reduces the number of parameters while improving performance. The sensitivity (SE) and Dice similarity coefficient (DSC) of our model improve to 72.90% and 66.84%, respectively. Considering the SE and precision (P) of the segmentation, as well as the recall ratio and precision of HRF detection, our model outperforms most advanced medical image segmentation algorithms.
CONCLUSION: The proposed HRF segmentation algorithm effectively identifies and segments HRF in OCT images of DR patients, outperforming existing methods. This advancement can significantly alleviate the burden on ophthalmologists by aiding in early diagnosis and treatment evaluation, ultimately improving patient outcomes.
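The SE and DSC figures quoted above are standard overlap metrics between a predicted mask and ground truth. A minimal sketch on toy flattened binary masks (not OCT data or the authors' evaluation code):

```python
# Illustrative sketch: Dice similarity coefficient (DSC) and sensitivity (SE)
# between a predicted binary mask and a ground-truth mask, both flattened to
# lists of 0/1. Toy data, not OCT segmentations.
def dice_and_sensitivity(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false positives
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)    # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    se = tp / (tp + fn) if (tp + fn) else 1.0
    return dsc, se

pred  = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0, 1, 0]
dsc, se = dice_and_sensitivity(pred, truth)
print(f"DSC={dsc:.2%}  SE={se:.2%}")  # DSC=75.00%  SE=75.00%
```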
PMID:39966317 | DOI:10.1007/s10792-025-03439-z
Sway frequencies may predict postural instability in Parkinson's disease: a novel convolutional neural network approach
J Neuroeng Rehabil. 2025 Feb 18;22(1):29. doi: 10.1186/s12984-025-01570-7.
ABSTRACT
BACKGROUND: Postural instability greatly reduces quality of life in people with Parkinson's disease (PD). Early and objective detection of postural impairments is crucial to facilitate interventions. Our aim was to use a convolutional neural network (CNN) to differentiate people with early to mid-stage PD from healthy age-matched individuals based on spectrogram images obtained from their body sway. We hypothesized the time-frequency content of body sway to be predictive of PD, even when impairments are not yet clinically apparent.
METHODS: 18 people with idiopathic PD and 15 healthy controls (HC) participated in the study. We tracked participants' center of pressure (COP) using a Wii Balance Board and their full-body motion using a Microsoft Kinect, from which we calculated the trajectory of their center of mass (COM). We used 30-s snippets of motion data, from which we computed wavelet-based time-frequency spectrograms that were fed into a custom-built CNN as labeled images. We used binary classification to have the network differentiate between individuals with PD and controls (n = 15 per group).
RESULTS: Classification performance was best when the medio-lateral motion of the COM was considered. Here, our network reached a predictive accuracy, sensitivity, specificity, precision, and F1-score of 100% each, with a receiver operating characteristic area under the curve of 1.0. Moreover, an explainable AI approach revealed that high frequencies in the postural sway data were most distinct between the two groups.
CONCLUSION: Given our small and heterogeneous sample, our findings suggest that a CNN classifier based on cost-effective and conveniently obtainable posturographic data is a promising approach for detecting postural impairments in early to mid-stage PD and for gaining novel insight into the subtle characteristics of impairments at this stage of the disease.
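The five metrics reported for the classifier all derive from the binary confusion matrix. A minimal sketch with invented counts (not the study's data):

```python
# Illustrative sketch: accuracy, sensitivity, specificity, precision, and
# F1-score from binary confusion-matrix counts. Counts are hypothetical.
def binary_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall for the PD class
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# A perfect classifier over 15 PD and 15 control spectrograms scores 100% on
# every metric, the best case reported in the abstract.
print(binary_metrics(tp=15, tn=15, fp=0, fn=0))  # (1.0, 1.0, 1.0, 1.0, 1.0)
```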
PMID:39966853 | DOI:10.1186/s12984-025-01570-7