Deep learning
Deep-Learning-Assisted Self-Powered Microfluidic Bionic Electronic Tongues
ACS Appl Mater Interfaces. 2025 Feb 24. doi: 10.1021/acsami.4c22067. Online ahead of print.
ABSTRACT
Inspired by the natural mechanism of taste perception, artificial bionic electronic tongues have successfully enabled the detection and classification of various tastes. The liquid-solid contact electrification (LSCE) effect has emerged as a highly effective approach for developing self-powered electronic tongues. However, droplet-based sensing structures often face challenges related to internal and environmental interferences, compromising their stability and repeatability. In this work, we developed a monolithically integrated self-powered microfluidic bionic electronic tongue (SMET), combining the LSCE effect with deep learning algorithms to achieve highly reliable and intelligent sample identification and concentration detection. The incorporation of a multiplexed microchannel structure significantly reduced the required liquid sample volume while simultaneously increasing the electrical output amplitude (up to 10 V under multitone wave excitation), thereby enhancing sensitivity. Instead of micropumps, miniaturized exciters were employed as SMET drivers to generate multiple excitation waveforms, producing various signal types to improve the accuracy of the recognition algorithms. The SMET achieved over 93% classification accuracy for five taste element samples (glacial acetic acid, anhydrous dextrose, quinine, edible chili essence, sodium chloride) and five concentrations of sodium chloride solutions using a single waveform signal, reaching 100% accuracy with the fusion of multiple waveform signals. Furthermore, the SMET was used to detect more than ten different taste samples, each exhibiting distinct signal variations. Owing to its ultrahigh sensitivity to the electrical properties of liquids, the SMET enables accurate, rapid, and reliable analysis of liquid samples, positioning it as a promising tool for rapid liquid detection.
PMID:39992874 | DOI:10.1021/acsami.4c22067
Deep learning methods for clinical workflow phase-based prediction of procedure duration: a benchmark study
Comput Assist Surg (Abingdon). 2025 Dec;30(1):2466426. doi: 10.1080/24699322.2025.2466426. Epub 2025 Feb 24.
ABSTRACT
This study evaluates the performance of deep learning models in the prediction of the end time of procedures performed in the cardiac catheterization laboratory (cath lab). We employed only the clinical phases derived from video analysis as input to the algorithms. Our results show that InceptionTime and LSTM-FCN yielded the most accurate predictions. InceptionTime achieves Mean Absolute Error (MAE) values below 5 min and Symmetric Mean Absolute Percentage Error (SMAPE) under 6% at 60-s sampling intervals. In contrast, LSTM with attention mechanism and standard LSTM models have higher error rates, indicating challenges in handling both long-term and short-term dependencies. CNN-based models, especially InceptionTime, excel at feature extraction across different scales, making them effective for time-series predictions. We also analyzed training and testing times. CNN models, despite higher computational costs, significantly reduce prediction errors. The Transformer model has the fastest inference time, making it ideal for real-time applications. An ensemble model obtained by averaging the two best-performing algorithms also achieved low MAE and SMAPE, although it required longer training. Future research should validate these findings across different procedural contexts and explore ways to optimize training times without losing accuracy. Integrating these models into clinical scheduling systems could improve efficiency in cath labs. Our research demonstrates that the models we implemented can form the basis of an automated tool, which predicts the optimal time to call the next patient with an average error of approximately 30 s. These findings show the effectiveness of deep learning models, especially CNN-based architectures, in accurately predicting procedure end times.
PMID:39992712 | DOI:10.1080/24699322.2025.2466426
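The MAE and SMAPE figures quoted above are standard time-series error metrics. Below is a minimal sketch of how they could be computed for remaining-procedure-time predictions sampled at 60-second intervals; the function names, values, and the SMAPE variant (mean of absolute error over the average of absolute true and predicted values) are illustrative assumptions, not taken from the study.

```python
# Illustrative sketch: MAE and SMAPE for procedure end-time predictions.
# All values are placeholders, not data from the benchmark study.
import numpy as np

def mae_minutes(y_true, y_pred):
    """Mean Absolute Error in minutes."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def smape_percent(y_true, y_pred):
    """Symmetric Mean Absolute Percentage Error in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)

# Remaining procedure time (minutes) at a few 60-s sampling steps.
true_remaining = np.array([42.0, 35.0, 27.0, 18.0, 9.0])
predicted      = np.array([45.5, 33.0, 29.5, 17.0, 10.5])

print(f"MAE   = {mae_minutes(true_remaining, predicted):.2f} min")
print(f"SMAPE = {smape_percent(true_remaining, predicted):.2f} %")
```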
Beyond human perception: challenges in AI interpretability of orangutan artwork
Primates. 2025 Feb 24. doi: 10.1007/s10329-025-01185-5. Online ahead of print.
ABSTRACT
Drawings serve as a profound medium of expression for both humans and apes, offering unique insights into the cognitive and emotional landscapes of the artists, regardless of their species. This study employs artificial intelligence (AI), specifically Convolutional Neural Networks (CNNs) and the interpretability tool Captum, to analyse non-figurative drawings by Molly, an orangutan. The research utilizes VGG19 and ResNet18 models to decode seasonal nuances in the drawings, achieving notable accuracy in seasonal classification and revealing complex influences beyond human-centric methods. Techniques such as occlusion, integrated gradients, PCA, t-SNE, and Louvain clustering highlight critical areas and elements influencing seasonal recognition, providing deeper insights into the drawings. This approach not only advances the analysis of non-human art but also demonstrates the potential of AI to enrich our understanding of non-human cognitive and emotional expressions, with significant implications for fields like evolutionary anthropology and comparative psychology.
PMID:39992583 | DOI:10.1007/s10329-025-01185-5
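The abstract names Captum's occlusion and integrated-gradients attributions applied to VGG19/ResNet18 classifiers. The sketch below shows these two Captum calls on an ImageNet-pretrained VGG19 with a random placeholder image and its predicted class; it is a hedged illustration of the attribution workflow, not the authors' pipeline, and the window/stride sizes are assumptions.

```python
# Minimal attribution sketch (not the study's code): integrated gradients and
# occlusion on a pretrained VGG19, using a placeholder input image.
import torch
from torchvision import models
from captum.attr import IntegratedGradients, Occlusion

model = models.vgg19(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224)            # placeholder for a drawing image
pred_class = model(image).argmax(dim=1).item()

# Integrated gradients: pixel-level attribution toward the predicted class.
ig_attr = IntegratedGradients(model).attribute(image, target=pred_class)

# Occlusion: slide a patch over the image and measure the score change.
occ_attr = Occlusion(model).attribute(
    image,
    target=pred_class,
    sliding_window_shapes=(3, 32, 32),        # assumed window size
    strides=(3, 16, 16),                      # assumed stride
)
print(ig_attr.shape, occ_attr.shape)          # both (1, 3, 224, 224)
```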
Subclinical tremor differentiation using long short-term memory networks
Phys Eng Sci Med. 2025 Feb 24. doi: 10.1007/s13246-025-01526-0. Online ahead of print.
ABSTRACT
Subclinical amplitudes complicate the differentiation between essential tremor (ET) and Parkinson's disease (PD) tremor, which is uncertain even when the tremors are apparent. Despite their prevalence (up to 30% of PD cases exhibit subclinical tremors), these tremors remain inadequately studied. This study therefore explores the potential of artificial intelligence (AI) to address this diagnostic uncertainty. Our objective is to develop a deep learning model that can differentiate among subclinical tremors due to PD, ET, and normal physiological tremor. Subclinical tremor data were obtained from inertial sensors placed on the hands and arms of 51 PD, 15 ET, and 58 normal subjects. The AI architecture was designed using a long short-term memory network (LSTM) and was trained on short-time Fourier transformed subclinical tremor data as the input features. The network was trained separately, first to differentiate between PD and ET tremors and then among PD, ET, and physiological tremors, yielding accuracies of 95% and 93%, respectively. Comparative analysis with an existing convolutional LSTM demonstrated the superior performance of our work, with 30-50% higher accuracy than the reference method when classifying low-amplitude tremors. Future enhancements aim to improve model interpretability and to validate the approach on larger, more diverse datasets, including action tremors. The proposed work can potentially serve as a valuable tool for clinicians, aiding in the differentiation of the subclinical tremors common in Parkinson's disease, which in turn enhances diagnostic accuracy and informs treatment decisions.
PMID:39992543 | DOI:10.1007/s13246-025-01526-0
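The pipeline described above feeds short-time Fourier transform (STFT) features of inertial-sensor signals into an LSTM classifier. A hedged sketch of that input pipeline is shown below; the sampling rate, window length, network sizes, and the random placeholder signal are assumptions, not the study's configuration.

```python
# Sketch: STFT magnitude features of a tremor recording fed to an LSTM with
# three output classes (PD, ET, physiological). Placeholder data and sizes.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 100.0                                   # assumed sampling rate (Hz)
signal = np.random.randn(10 * int(fs))       # placeholder 10-s tremor recording

# STFT magnitude: frequency bins become the LSTM's per-step input features.
_, _, Zxx = stft(signal, fs=fs, nperseg=64)
features = torch.tensor(np.abs(Zxx).T, dtype=torch.float32).unsqueeze(0)  # (1, T, F)

class TremorLSTM(nn.Module):
    def __init__(self, n_freq_bins, n_classes=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)           # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])            # class logits

model = TremorLSTM(n_freq_bins=features.shape[-1])
print(model(features).shape)                 # torch.Size([1, 3])
```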
Electroencephalogram (EEG) Based Fuzzy Logic and Spiking Neural Networks (FLSNN) for Advanced Multiple Neurological Disorder Diagnosis
Brain Topogr. 2025 Feb 24;38(3):33. doi: 10.1007/s10548-025-01106-1.
ABSTRACT
Neurological disorders are a major global health concern with a substantial impact on mortality and quality of life. Owing to inherent data uncertainties and overlapping electroencephalogram (EEG) patterns, conventional EEG-based diagnostic methods frequently struggle to identify multiple diseases accurately. This paper proposes a novel framework that integrates fuzzy logic and spiking neural networks (FLSNN) to enhance the accuracy and robustness of multiple neurological disorder detection from EEG signals. The primary motivation is to overcome the limitations of existing methods, which cannot handle the complex and overlapping nature of EEG signals, and to provide a unified, automated solution for detecting multiple neurological disorders, such as epilepsy, Parkinson's disease, Alzheimer's disease, schizophrenia, and stroke, in a single framework. In the FLSNN framework, EEG data are first preprocessed to remove noise and artifacts; a fuzzy logic model then handles uncertainties before a spiking neural network analyzes the temporal dynamics of the signals. The framework processes EEG data three times faster than traditional techniques and achieves 97.46% accuracy in binary classification and 98.87% accuracy in multi-class classification. This research represents a significant advancement in EEG-based diagnosis of multiple neurological disorders, improving both the quality and speed of diagnostics and advancing AI-based medical diagnostics. The source code will be made publicly available at https://github.com/jainshraddha12/FLSNN .
PMID:39992458 | DOI:10.1007/s10548-025-01106-1
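To make the two stages concrete, here is a deliberately simplified toy sketch of the idea the abstract combines: a fuzzy membership function softening uncertain EEG amplitudes, followed by a leaky integrate-and-fire (LIF) spiking neuron. Both functions, all parameters, and the random signal are assumptions for illustration only; this is not the authors' FLSNN implementation.

```python
# Toy sketch: fuzzy weighting of EEG amplitudes, then a LIF spiking neuron.
# Placeholder data and parameters; not the FLSNN code.
import numpy as np

def triangular_membership(x, low, peak, high):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    return np.clip(np.minimum((x - low) / (peak - low), (high - x) / (high - peak)), 0.0, 1.0)

def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input, leak, spike and reset at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return np.array(spikes)

eeg_amplitudes = np.abs(np.random.randn(200))           # placeholder EEG feature stream
fuzzy_weighted = eeg_amplitudes * triangular_membership(eeg_amplitudes, 0.0, 1.0, 3.0)
print(lif_spikes(fuzzy_weighted).sum(), "spikes emitted")
```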
Intelligent Recognition and Segmentation of Blunt Craniocerebral Injury CT Images Based on DeepLabV3+ Model
Fa Yi Xue Za Zhi. 2024 Oct 25;40(5):419-429. doi: 10.12116/j.issn.1004-5619.2024.440801.
ABSTRACT
OBJECTIVES: To achieve intelligent recognition and segmentation of common craniocerebral injuries (hereinafter referred to as "segmentation") by training the convolutional neural network DeepLabV3+ model on CT images of blunt craniocerebral injury (BCI), and to explore the value of deep learning in the automated diagnosis of BCI in forensic medicine.
METHODS: A total of 5,486 CT images of BCI from living persons were collected as the training, validation, and test sets for model training and performance evaluation. Another 255 CT images of BCI and 156 normal craniocerebral CT images from living persons were collected as a blind test set to evaluate the model's ability to segment five types of craniocerebral injury: scalp hematoma, skull fracture, epidural hematoma, subdural hematoma, and brain contusion. A further 340 BCI and 120 normal craniocerebral CT images from cadavers were collected as a second blind test set to explore the value of the model, trained on CT images of living persons, for segmenting BCI in cadavers. The CT images of the five injury types in all datasets except the blind test sets were manually labeled, and each dataset was then input into the model for training. Model performance was evaluated and optimized based on the loss and accuracy curves of the training and validation sets, and generalization ability was evaluated using the Dice value on the test set. The segmentation performance of the model for the five types of BCI was evaluated according to the accuracy, precision, and F1 value on the blind test sets.
RESULTS: After training and optimization, the average Dice values of the final optimal model for scalp hematoma, skull fracture, epidural hematoma, subdural hematoma, and brain contusion segmentation were 0.7664, 0.8123, 0.9387, 0.7827, and 0.8581, respectively, all greater than 0.75 and meeting the expected requirements. External validation showed F1 values of 93.02%, 89.80%, 87.80%, 92.93%, and 86.57% on living CT images, and 83.92%, 44.90%, 76.47%, 64.29%, and 48.89% on cadaveric CT images, respectively. These results indicate that the model accurately segments the various types of craniocerebral injury on living CT images; its performance on cadaveric CT images is poorer, although it can still accurately segment scalp hematoma, epidural hematoma, and subdural hematoma.
CONCLUSIONS: A deep learning model trained on CT images can be used for BCI segmentation. However, directly applying a model trained on living persons' images to the identification of cadaveric BCI has limitations. This study provides a new approach for the intelligent segmentation of virtual anatomical data in BCI.
PMID:39992333 | DOI:10.12116/j.issn.1004-5619.2024.440801
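The study's acceptance criterion is a per-class Dice value of at least 0.75. A minimal sketch of the Dice coefficient computed from predicted and ground-truth binary masks is shown below; the masks are random placeholders, and the code is not the authors' evaluation pipeline.

```python
# Sketch: per-class Dice coefficient from binary masks (placeholder data).
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2|A∩B| / (|A|+|B|) for binary masks of one injury class."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

pred = np.random.rand(512, 512) > 0.5        # placeholder predicted mask
true = np.random.rand(512, 512) > 0.5        # placeholder ground-truth mask
print(f"Dice = {dice_coefficient(pred, true):.4f}")  # values >= 0.75 met the study's target
```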
Shapley-based saliency maps improve interpretability of vertebral compression fractures classification: multicenter study
Radiol Med. 2025 Feb 24. doi: 10.1007/s11547-025-01968-2. Online ahead of print.
ABSTRACT
PURPOSE: To evaluate the classification performance and interpretability of the Vision Transformer (ViT) model for acute and chronic vertebral compression fractures using Shapley saliency maps.
MATERIALS AND METHODS: This retrospective study utilized medical imaging data from December 2018 to December 2023 from three hospitals in China. The study included 942 patients, with imaging data comprising X-rays, CTs, and MRIs. Patients were divided into training, validation, and test sets with a ratio of 7:2:1. The ViT model variant, SimpleViT, was fine-tuned on the training dataset. Statistical analyses were performed using the PixelMedAI platform, focusing on metrics such as ROC curves, sensitivity, specificity, and AUC values, with statistical significance assessed using the DeLong test.
RESULTS: A total of 942 patients (mean age 69.17 ± 10.61 years) were included, with 1076 vertebral fractures analyzed (705 acute, 371 chronic). In the test set, the ViT model demonstrated superior performance over the ResNet18 model, with an accuracy of 0.880 and an AUC of 0.901 compared to 0.843 and 0.833, respectively. The use of ViT Shapley saliency maps significantly enhanced diagnostic sensitivity and specificity, reaching 0.883 (95% CI: 0.800, 0.963) and 0.950 (95% CI: 0.891, 1.00), respectively.
CONCLUSION: In vertebral compression fracture classification, the Vision Transformer outperformed the convolutional neural network, providing more effective Shapley-based saliency maps that were favored by radiologists over GradCAM.
PMID:39992331 | DOI:10.1007/s11547-025-01968-2
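For readers unfamiliar with Shapley-based saliency maps, the sketch below shows one generic way to obtain them with Captum's ShapleyValueSampling, treating image patches as the "players" whose contributions are estimated. The tiny CNN stands in for the fine-tuned SimpleViT, and the image, patch size, and sample count are assumptions; this is not the study's implementation.

```python
# Generic Shapley saliency sketch: patch-level attributions via Captum's
# ShapleyValueSampling on a placeholder two-class classifier.
import torch
import torch.nn as nn
from captum.attr import ShapleyValueSampling

model = nn.Sequential(                        # placeholder classifier (acute vs. chronic)
    nn.Conv2d(1, 8, 3, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
).eval()

image = torch.rand(1, 1, 64, 64)              # placeholder vertebral image

# One Shapley "player" per 16x16 patch: build an integer mask of patch indices.
patch = 16
ids = torch.arange((64 // patch) ** 2).reshape(64 // patch, 64 // patch)
feature_mask = ids.repeat_interleave(patch, 0).repeat_interleave(patch, 1)[None, None]

svs = ShapleyValueSampling(model)
attr = svs.attribute(image, target=1, feature_mask=feature_mask, n_samples=25)
print(attr.shape)                             # (1, 1, 64, 64), constant within each patch
```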
Recent topics in musculoskeletal imaging focused on clinical applications of AI: How should radiologists approach and use AI?
Radiol Med. 2025 Feb 24. doi: 10.1007/s11547-024-01947-z. Online ahead of print.
ABSTRACT
The advances in artificial intelligence (AI) technology in recent years have been remarkable, and the field of radiology is at the forefront of applying and implementing these technologies in daily clinical practice. Radiologists must keep up with this trend and continually update their knowledge. This narrative review discusses the application of artificial intelligence in the field of musculoskeletal imaging. For image generation, we focused on the clinical application of deep learning reconstruction and the recently emerging MRI-based cortical bone imaging. For automated diagnostic support, we provided an overview of qualitative diagnosis, including classifications essential for daily practice, and quantitative diagnosis, which can serve as imaging biomarkers for treatment decision making and prognosis prediction. Finally, we discussed current issues in the use of AI, the application of AI in the diagnosis of rare diseases, and the role of AI-based diagnostic imaging in preventive medicine as part of our outlook for the future.
PMID:39992330 | DOI:10.1007/s11547-024-01947-z
Ectopic, intra-thyroid parathyroid adenoma better visualised by deep learning enhanced choline PET/CT
QJM. 2025 Feb 24:hcaf057. doi: 10.1093/qjmed/hcaf057. Online ahead of print.
NO ABSTRACT
PMID:39992255 | DOI:10.1093/qjmed/hcaf057
A novel framework for the automated characterization of Gram-stained blood culture slides using a large-scale vision transformer
J Clin Microbiol. 2025 Feb 24:e0151424. doi: 10.1128/jcm.01514-24. Online ahead of print.
ABSTRACT
This study introduces a new framework for the artificial intelligence-based characterization of Gram-stained whole-slide images (WSIs). As a test for the diagnosis of bloodstream infections, Gram stains provide critical early data to inform patient treatment in conjunction with data from rapid molecular tests. In this work, we developed a novel transformer-based model for Gram-stained WSI classification, which is more scalable to large data sets than previous convolutional neural network-based methods as it does not require patch-level manual annotations. We also introduce a large Gram stain data set from Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire, USA) to evaluate our model, exploring the classification of five major categories of Gram-stained WSIs: gram-positive cocci in clusters, gram-positive cocci in pairs/chains, gram-positive rods, gram-negative rods, and slides with no bacteria. Our model achieves a classification accuracy of 0.858 (95% CI: 0.805, 0.905) and an area under the receiver operating characteristic curve (AUC) of 0.952 (95% CI: 0.922, 0.976) using fivefold nested cross-validation on our 475-slide data set, demonstrating the potential of large-scale transformer models for Gram stain classification. Results were measured against the final clinical laboratory Gram stain report after growth of organism in culture. We further demonstrate the generalizability of our trained model by applying it without additional fine-tuning on a second 27-slide external data set from Stanford Health (Palo Alto, California, USA) where it achieves a binary classification accuracy of 0.926 (95% CI: 0.885, 0.960) and an AUC of 0.8651 (95% CI: 0.6337, 0.9917) while distinguishing gram-positive from gram-negative bacteria.
IMPORTANCE: This study introduces a scalable transformer-based deep learning model for automating Gram-stained whole-slide image classification. It surpasses previous methods by eliminating the need for manual annotations and demonstrates high accuracy and generalizability across multiple data sets, enhancing the speed and reliability of Gram stain analysis.
PMID:39992156 | DOI:10.1128/jcm.01514-24
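The evaluation above relies on fivefold nested cross-validation with AUC scoring. Below is an illustrative sketch of that protocol on simulated stand-ins for slide-level feature vectors: an inner loop tunes hyperparameters, an outer loop estimates performance. The classifier, parameter grid, and data are assumptions, not the study's pipeline.

```python
# Sketch: fivefold nested cross-validation with ROC-AUC scoring (simulated data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=475, n_features=64, random_state=0)  # stand-in for slide embeddings

inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

tuned_clf = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=inner_cv,
    scoring="roc_auc",
)
outer_auc = cross_val_score(tuned_clf, X, y, cv=outer_cv, scoring="roc_auc")
print(f"nested-CV AUC: {outer_auc.mean():.3f} +/- {outer_auc.std():.3f}")
```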
MRI-Based Topology Deep Learning Model for Noninvasive Prediction of Microvascular Invasion and Assisting Prognostic Stratification in HCC
Liver Int. 2025 Mar;45(3):e16205. doi: 10.1111/liv.16205.
ABSTRACT
BACKGROUND & AIMS: Microvascular invasion (MVI) is associated with poor prognosis in hepatocellular carcinoma (HCC). Topology may improve the predictive performance and interpretability of deep learning (DL). We aimed to develop and externally validate an MRI-based topology DL model for preoperative prediction of MVI.
METHODS: This dual-centre retrospective study included consecutive surgically treated HCC patients from two tertiary care hospitals. Automatic liver and tumour segmentations were performed with DL methods. A pure convolutional neural network (CNN) model, a topology-CNN (TopoCNN) model and a topology-CNN-clinical (TopoCNN+Clinic) model were developed and externally validated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). Cox regression analyses were conducted to identify risk factors for recurrence-free survival within 2 years (early RFS) and overall survival (OS).
RESULTS: In total, 589 patients were included (292 [49.6%] with pathologically confirmed MVI). The AUCs of the TopoCNN and TopoCNN+Clinic models were 0.890 and 0.895 for the internal test dataset and 0.871 and 0.879 for the external test dataset, respectively. For tumours ≤ 3.0 cm, the AUCs of the TopoCNN and TopoCNN+Clinic models were 0.879 and 0.929 for the internal test dataset, and 0.763 and 0.758 for the external test dataset. The TopoCNN-derived MVI prediction probability was an independent risk factor for early RFS (hazard ratio 6.64) and OS (hazard ratio 13.33).
CONCLUSIONS: The MRI topological DL model based on automatic liver and tumour segmentation could accurately predict MVI and effectively stratify postoperative early RFS and OS, which may assist in personalised treatment decision-making.
PMID:39992060 | DOI:10.1111/liv.16205
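The survival analysis step above relates a model-derived MVI probability to recurrence-free and overall survival via Cox regression. A hedged sketch of that step with the lifelines library is shown below; the data frame is simulated and the column names are assumptions, not the study's variables.

```python
# Sketch: Cox proportional-hazards regression of a predicted MVI probability
# against recurrence-free survival (simulated data, assumed column names).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "mvi_probability": rng.uniform(0, 1, n),           # TopoCNN-style MVI prediction
    "time_months": rng.exponential(24, n),             # follow-up time
    "recurrence": rng.integers(0, 2, n),                # event indicator (1 = recurred)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="recurrence")
print(cph.hazard_ratios_)                               # hazard ratio per covariate
```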
Exploring Structure Diversity in Atomic Resolution Microscopy With Graph
Adv Mater. 2025 Feb 23:e2417478. doi: 10.1002/adma.202417478. Online ahead of print.
ABSTRACT
The emergence of deep learning (DL) has provided great opportunities for the high-throughput analysis of atomic-resolution micrographs. However, DL models trained on fixed-size image patches generally lack efficiency and flexibility when processing micrographs containing diversified atomic configurations. Herein, inspired by the similarity between atomic structures and graphs, a few-shot learning framework based on an equivariant graph neural network (EGNN) is described for analyzing a library of atomic structures (e.g., vacancies, phases, grain boundaries, and doping), showing significantly improved robustness and a three-orders-of-magnitude reduction in model parameters compared to image-driven DL models; the advantage is especially evident for aggregated vacancy lines with flexible lattice distortion. Besides, the intuitiveness of graphs enables quantitative and straightforward extraction of atomic-scale structural features in batches, thus statistically unveiling the self-assembly dynamics of vacancy lines under electron beam irradiation. A versatile model toolkit is established by integrating EGNN sub-models for single-structure recognition to process images involving varied configurations in the form of a task chain, leading to the discovery of novel doping configurations with superior electrocatalytic properties for hydrogen evolution reactions. This work provides a powerful tool to explore structure diversity in a fast, accurate, and intelligent manner.
PMID:39988855 | DOI:10.1002/adma.202417478
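The graph representation central to the work above starts from detected atomic-column coordinates. The sketch below shows one common way to turn such coordinates into a k-nearest-neighbour graph (node features = positions, edge list = neighbour pairs) that a graph network could consume; the coordinates and the value of k are placeholders, and this is not the authors' EGNN pipeline.

```python
# Sketch: k-nearest-neighbour graph from atomic-column positions (placeholder data).
import numpy as np
from scipy.spatial import cKDTree

positions = np.random.rand(100, 2) * 10.0      # placeholder atomic-column positions (nm)
k = 4                                          # assumed number of neighbours per atom

tree = cKDTree(positions)
_, neighbours = tree.query(positions, k=k + 1)  # first hit is the atom itself

edges = [(i, j) for i, row in enumerate(neighbours) for j in row[1:]]
print(f"{positions.shape[0]} nodes, {len(edges)} directed edges")
```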
ProCeSa: Contrast-Enhanced Structure-Aware Network for Thermostability Prediction with Protein Language Models
J Chem Inf Model. 2025 Feb 23. doi: 10.1021/acs.jcim.4c01752. Online ahead of print.
ABSTRACT
Proteins play a fundamental role in biology, and their thermostability is essential for their proper functionality. Precise measurement of thermostability is crucial but has traditionally relied on resource-intensive experiments. Recent advances in deep learning, particularly protein language models (PLMs), have significantly accelerated progress in protein thermostability prediction. These models represent protein sequences using various biological characteristics or deep representations generated by PLMs. However, effectively incorporating structural information from PLM embeddings alone, without access to atomic protein structures, remains an open and formidable challenge. Here, we propose a novel Protein Contrast-enhanced Structure-Aware (ProCeSa) model that seamlessly integrates both sequence and structural information extracted from PLMs to enhance thermostability prediction. Our model employs a contrastive learning scheme guided by the categories of amino acid residues, allowing it to discern intricate patterns within protein sequences. Rigorous experiments conducted on publicly available datasets establish the superiority of our method over state-of-the-art approaches, excelling in both classification and regression tasks. Our results demonstrate that ProCeSa addresses the complex challenge of predicting protein thermostability by utilizing PLM-derived sequence embeddings, without requiring access to atomic structural data.
PMID:39988825 | DOI:10.1021/acs.jcim.4c01752
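The abstract mentions a contrastive scheme guided by residue categories. As a generic illustration of label-guided contrastive learning on embedding vectors, the sketch below implements a supervised-contrastive (SupCon-style) loss; the temperature, embedding sizes, and labels are assumptions, and this is not the ProCeSa code.

```python
# Sketch: supervised contrastive loss over labeled embeddings (placeholder data).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull same-label embeddings together, push different-label ones apart."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                        # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask

    # Log-softmax over all other samples, then average over the positives.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    pos_counts = pos_mask.sum(1).clamp(min=1)
    return -(pos_log_prob.sum(1) / pos_counts).mean()

emb = torch.randn(16, 128)                             # placeholder PLM embeddings
lab = torch.randint(0, 4, (16,))                       # placeholder residue categories
print(supervised_contrastive_loss(emb, lab).item())
```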
Deep-learning approach for developing bilayered electromagnetic interference shielding composite aerogels based on multimodal data fusion neural networks
J Colloid Interface Sci. 2025 Feb 20;688:79-92. doi: 10.1016/j.jcis.2025.02.133. Online ahead of print.
ABSTRACT
A non-experimental approach to developing high-performance electromagnetic interference (EMI) shielding materials is urgently needed to reduce costs and manpower. In this investigation, a multimodal data fusion neural network model is proposed to predict the EMI shielding performance of silver-modified four-pronged zinc oxide/waterborne polyurethane/barium ferrite (Ag@F-ZnO/WPU/BF) aerogels. First, 16 Ag@F-ZnO/WPU/BF samples with varying Ag@F-ZnO and BF contents were successfully prepared using the pre-casting and directional freezing techniques. The experimental results demonstrate that these aerogels perform well, with an average EMI shielding effectiveness (SET) of up to 78.6 dB and an absorption coefficient as high as 0.96. On the basis of composite ingredients and microstructural images, the established multimodal neural network model can effectively predict the EMI shielding performance of Ag@F-ZnO/WPU/BF aerogels. Notably, the multimodal model combining a fully connected neural network (FCNN) and a residual neural network (ResNet) via the GatedFusion method yields the best root mean squared error (RMSE) of 0.7626, mean absolute error (MAE) of 0.4918, and correlation coefficient (R) of 0.9885. In addition, this multimodal model successfully predicts the EMI performance of four new aerogels with an average error of less than 5%, demonstrating its strong generalization capability. By integrating multiple data sources, the multimodal neural network model greatly improves the accuracy and efficiency of material property prediction, offering new possibilities for reducing experimental burdens, accelerating the development of new materials, and gaining a deeper understanding of material mechanisms.
PMID:39987843 | DOI:10.1016/j.jcis.2025.02.133
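To illustrate the gated-fusion idea named above, here is a hedged sketch of a small module that combines an image-branch feature vector with a composition-branch feature vector through a learned sigmoid gate before a regression head. The dimensions, layer choices, and output meaning are assumptions for illustration, not the paper's architecture.

```python
# Sketch: gated fusion of image and tabular features for a regression target.
import torch
import torch.nn as nn

class GatedFusionRegressor(nn.Module):
    def __init__(self, img_dim=512, tab_dim=16, hidden=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)     # e.g. ResNet feature vector
        self.tab_proj = nn.Linear(tab_dim, hidden)     # e.g. FCNN on ingredient fractions
        self.gate = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden, 1)               # predicted shielding effectiveness (dB)

    def forward(self, img_feat, tab_feat):
        a, b = self.img_proj(img_feat), self.tab_proj(tab_feat)
        g = self.gate(torch.cat([a, b], dim=1))        # per-feature mixing weights in (0, 1)
        return self.head(g * a + (1 - g) * b)

model = GatedFusionRegressor()
print(model(torch.randn(4, 512), torch.randn(4, 16)).shape)   # torch.Size([4, 1])
```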
Deep learning and electrocardiography: systematic review of current techniques in cardiovascular disease diagnosis and management
Biomed Eng Online. 2025 Feb 23;24(1):23. doi: 10.1186/s12938-025-01349-w.
ABSTRACT
This paper reviews the recent advancements in the application of deep learning combined with electrocardiography (ECG) within the domain of cardiovascular diseases, systematically examining 198 high-quality publications. Through meticulous categorization and hierarchical segmentation, it provides an exhaustive depiction of the current landscape across various cardiovascular ailments. Our study aspires to furnish interested readers with a comprehensive guide, thereby igniting enthusiasm for further, in-depth exploration and research in this realm.
PMID:39988715 | DOI:10.1186/s12938-025-01349-w
Deep learning algorithms for detecting fractured instruments in root canals
BMC Oral Health. 2025 Feb 23;25(1):293. doi: 10.1186/s12903-025-05652-9.
ABSTRACT
BACKGROUND: Identifying fractured endodontic instruments (FEIs) in periapical radiographs (PAs) is a critical yet challenging aspect of root canal treatment (RCT) due to anatomical complexities and overlapping structures. Deep learning (DL) models offer potential solutions, yet their comparative performance in this domain remains underexplored.
METHODS: A dataset of 700 annotated PAs, including 381 teeth with FEIs, was divided into training, validation, and test sets (60/20/20 split). Five DL models (DenseNet201, EfficientNet B0, ResNet-18, VGG-19, and MaxVit-T) were trained using transfer learning and data augmentation techniques. Performance was evaluated using accuracy, the area under the curve (AUC), and the Matthews correlation coefficient (MCC). Statistical analysis included the Friedman test with post-hoc corrections.
RESULTS: DenseNet201 achieved the highest AUC (0.900) and MCC (0.810), outperforming other models in FEI detection. ResNet-18 demonstrated robust results, while EfficientNet B0 and VGG-19 provided moderate performance. MaxVit-T underperformed, with metrics near random guessing. Statistical analysis revealed significant differences among models (p < 0.05), but pairwise comparisons were not significant.
CONCLUSIONS: DenseNet201's superior performance highlights its clinical potential for FEI detection, while ResNet-18 offers a balance between accuracy and computational efficiency. The findings underscore the need for model-task alignment and optimization in medical imaging applications.
PMID:39988714 | DOI:10.1186/s12903-025-05652-9
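A hedged sketch of the transfer-learning setup described above is shown below: an ImageNet-pretrained DenseNet201 with its classifier head replaced for a binary FEI / no-FEI decision, evaluated with the Matthews correlation coefficient. The frozen backbone, input batch, and labels are assumptions, not the study's training recipe.

```python
# Sketch: DenseNet201 transfer learning for binary FEI detection + MCC metric.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import matthews_corrcoef

model = models.densenet201(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 2)   # FEI vs. no FEI

# One common choice (assumed here): freeze the backbone, train only the new head.
for p in model.features.parameters():
    p.requires_grad = False

x = torch.rand(8, 3, 224, 224)                # placeholder batch of radiograph crops
y_true = torch.randint(0, 2, (8,))            # placeholder labels

model.eval()
with torch.no_grad():
    y_pred = model(x).argmax(dim=1)
print("MCC:", matthews_corrcoef(y_true.numpy(), y_pred.numpy()))
```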
Retinal vascular alterations in cognitive impairment: A multicenter study in China
Alzheimers Dement. 2025 Feb;21(2):e14593. doi: 10.1002/alz.14593.
ABSTRACT
INTRODUCTION: Foundational models suggest Alzheimer's disease (AD) can be diagnosed using retinal images, but the specific structural features remain poorly understood. This study investigates retinal vascular changes in individuals with cognitive impairment in three East Asian regions.
METHODS: A multicenter study was conducted in Shanghai, Hong Kong, and Ningxia, collecting retinal images from 176 patients with mild cognitive impairment (MCI) or AD and 264 controls. The VC-Net deep learning model segmented arterial/venous networks, extracting 36 vascular features.
RESULTS: Significant reductions in vessel length, segment number, and vascular density were observed in cognitively impaired patients, while venous structure and complexity were correlated with the level of cognitive function.
DISCUSSION: Retinal vascular changes may serve as indicators of cognitive impairment, requiring validation in larger cohorts and exploration of the underlying mechanisms.
HIGHLIGHTS: A deep learning segmentation model extracted diverse retinal vascular features. Significant alterations in the structure of retinal arterial/venous networks were identified. Partitioning vessel-rich retinal zones improved detection of vascular changes. Decreases in vessel length, segment number, and vascular density were found in cognitively impaired individuals.
PMID:39988572 | DOI:10.1002/alz.14593
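Two of the simpler vascular features named above, vascular density and total vessel length, can be computed directly from a binary vessel segmentation mask, as sketched below. The mask, pixel size, and the skeleton-based length estimate are placeholders and simplifications; the VC-Net feature set used in the study is far richer.

```python
# Sketch: vascular density and approximate vessel length from a binary mask.
import numpy as np
from skimage.morphology import skeletonize

vessel_mask = np.zeros((256, 256), bool)
vessel_mask[100:156, 20:236] = True             # placeholder segmented vessel region

density = vessel_mask.mean()                    # fraction of image covered by vessels
skeleton = skeletonize(vessel_mask)             # one-pixel-wide centreline
pixel_size_um = 10.0                            # assumed physical pixel size
total_length_um = skeleton.sum() * pixel_size_um

print(f"vascular density ~= {density:.3f}, vessel length ~= {total_length_um:.0f} um")
```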
Ventilator pressure prediction employing voting regressor with time series data of patient breaths
Health Informatics J. 2025 Jan-Mar;31(1):14604582241295912. doi: 10.1177/14604582241295912.
ABSTRACT
OBJECTIVES: Mechanical ventilators play a vital role in saving millions of lives, and patients with COVID-19 symptoms often needed a ventilator to survive during the pandemic. Studies have reported that mortality rates rose from 50% to 97% in those requiring mechanical ventilation during COVID-19. Pumping air into a patient's lungs with a ventilator requires a precise air pressure: excessively high pressure causes lung damage, while insufficient pressure delivers too little oxygen, and either can cost the patient's life. Consequently, precise prediction of ventilator pressure is a task of great significance. The primary aim of this study is to predict the airway pressure in the ventilator respiratory circuit during a breath.
METHODS: A novel hybrid ventilator pressure predictor (H-VPP) approach is proposed. Exploratory data analysis of the ventilator data reveals that high values of the lung attributes R and C during the initial time steps are the prominent causes of high ventilator pressure.
RESULTS: Experiments indicate that the proposed H-VPP achieves an R2 of 0.78, a mean absolute error of 0.028, and a mean squared error of 0.003. These results are better than those of the other machine learning and deep learning models employed in this study.
CONCLUSION: Extensive experimentation indicates the superior performance of the proposed approach for ventilator pressure prediction with high accuracy. Furthermore, performance comparison with state-of-the-art studies corroborates the superior performance of the proposed approach.
PMID:39988551 | DOI:10.1177/14604582241295912
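The H-VPP internals are not spelled out in the abstract, so the sketch below shows a generic voting-regressor ensemble evaluated with the same R2 / MAE / MSE metrics reported above. The base estimators and the simulated features standing in for the lung attributes R, C, and time step are assumptions.

```python
# Sketch: voting-regressor ensemble with R2, MAE, and MSE on simulated data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=5, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
])
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)

print(f"R2  = {r2_score(y_te, pred):.3f}")
print(f"MAE = {mean_absolute_error(y_te, pred):.3f}")
print(f"MSE = {mean_squared_error(y_te, pred):.3f}")
```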
Artificial Intelligence non-invasive methods for neonatal jaundice detection: A review
Artif Intell Med. 2025 Feb 19:103088. doi: 10.1016/j.artmed.2025.103088. Online ahead of print.
ABSTRACT
Neonatal jaundice is a common and potentially fatal condition, especially in low- and middle-income countries, where it contributes considerably to neonatal morbidity and death. Traditional diagnostic approaches, such as Total Serum Bilirubin (TSB) testing, are invasive and can lead to discomfort, infection risk, and diagnostic delays. As a result, there is rising interest in non-invasive approaches for detecting jaundice early and accurately. This review presents an in-depth analysis of non-invasive techniques for detecting neonatal jaundice, exploring several AI-driven approaches, such as Machine Learning (ML) and Deep Learning (DL), which have demonstrated the ability to enhance diagnostic accuracy by evaluating complex patterns in neonatal skin color and other relevant features. AI models incorporating variants of neural networks achieve accuracy rates of over 90% in detecting jaundice compared with traditional methods. Furthermore, mobile-based applications that use smartphone cameras to estimate bilirubin levels have demonstrated satisfactory outcomes in field settings, providing a practical alternative for resource-constrained areas. The review evaluates the potential impact of AI-based solutions on reducing neonatal morbidity and mortality, with a focus on real-world clinical challenges, highlighting the effectiveness and practicality of AI-based strategies as assistive tools for revolutionizing neonatal care through early jaundice diagnosis, while also addressing the ethical and practical implications of integrating these technologies into clinical practice. Future research directions, such as the development of new imaging technologies and the incorporation of wearable sensors for real-time bilirubin monitoring, are also recommended.
PMID:39988547 | DOI:10.1016/j.artmed.2025.103088
Incorporating indirect MRI information in a CT-based deep learning model for prostate auto-segmentation
Radiother Oncol. 2025 Feb 21:110806. doi: 10.1016/j.radonc.2025.110806. Online ahead of print.
ABSTRACT
BACKGROUND AND PURPOSE: Computed tomography (CT) imaging poses challenges for delineation of soft tissue structures for prostate cancer external beam radiotherapy. Guidelines require the input of magnetic resonance imaging (MRI) information. We developed a deep learning (DL) prostate and organ-at-risk contouring model designed to find the MRI-truth in CT imaging.
MATERIAL AND METHODS: The study utilized CT scan data from 165 prostate cancer patients, with 136 scans for training and 29 for testing. The research focused on contouring five regions of interest (ROIs) according to the European Society for Radiotherapy and Oncology (ESTRO) and Advisory Committee on Radiation Oncology Practice (ACROP) contouring guidelines: the clinical target volume of the prostate including the venous plexus (VP) (CTV-iVP) and excluding the VP (CTV-eVP), the bladder, the anorectum, and the whole seminal vesicles (SV). Human delineation included fusion of MRI images with the planning CT scans, but the model itself was never shown MRI images during its development. Model training involved a three-dimensional U-Net architecture. A qualitative review was performed independently by two clinicians, who scored the model on time-based criteria, and the DL segmentation results were compared with manual adaptations using the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95).
RESULTS: The qualitative review of DL segmentations for CTV-iVP and CTV-eVP showed scores of 2 or 3 out of 3 in 96% of cases, indicating that minimal manual adjustments were needed by clinicians. The DL model demonstrated comparable quantitative performance in delineating CTV-iVP and CTV-eVP, with a DSC of 89% (standard deviation 3.3%). HD95 was 4 mm for CTV-iVP and 4.1 mm for CTV-eVP (standard deviation 2.1 mm for both contours). The anorectum, bladder, and SV scored 3 out of 3 in the qualitative analysis in 62%, 72%, and 55% of cases, respectively. DSC and HD95 were 90% and 5.5 mm for the anorectum, 96% and 2.9 mm for the bladder, and 81% and 4.6 mm for the seminal vesicles.
CONCLUSION: To our knowledge, this is the first DL model designed to implement MRI contouring guidelines in CT imaging and the first model trained according to ESTRO-ACROP contouring guidelines. This CT-based DL model presents a valuable tool for aiding prostate delineation without requiring the actual MRI information.
PMID:39988305 | DOI:10.1016/j.radonc.2025.110806
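The two evaluation metrics used above, DSC and the 95th percentile Hausdorff distance, can be computed from binary 3-D masks as sketched below. The masks, voxel spacing, and the surface-distance approximation of HD95 are placeholders and assumptions, not the study's evaluation code.

```python
# Sketch: DSC and 95th percentile Hausdorff distance on binary 3-D masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc(pred, true):
    pred, true = pred.astype(bool), true.astype(bool)
    return 2.0 * np.logical_and(pred, true).sum() / (pred.sum() + true.sum())

def hd95(pred, true, spacing=(1.0, 1.0, 1.0)):
    pred, true = pred.astype(bool), true.astype(bool)
    pred_surface = pred ^ binary_erosion(pred)        # boundary voxels of prediction
    true_surface = true ^ binary_erosion(true)        # boundary voxels of ground truth
    dist_to_true = distance_transform_edt(~true_surface, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surface, sampling=spacing)
    return max(np.percentile(dist_to_true[pred_surface], 95),
               np.percentile(dist_to_pred[true_surface], 95))

pred = np.zeros((32, 64, 64), bool); pred[8:24, 16:48, 16:48] = True   # placeholder contour
true = np.zeros((32, 64, 64), bool); true[9:25, 18:50, 17:49] = True
print(f"DSC  = {dsc(pred, true):.3f}")
print(f"HD95 = {hd95(pred, true, spacing=(3.0, 1.0, 1.0)):.1f} mm")
```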