Literature Watch

Development of an Artificial Intelligence-Enabled Electrocardiography to Detect 23 Cardiac Arrhythmias and Predict Cardiovascular Outcomes

Deep learning - Mon, 2025-04-21 06:00

J Med Syst. 2025 Apr 22;49(1):51. doi: 10.1007/s10916-025-02177-0.

ABSTRACT

Arrhythmias are common and can affect individuals with or without structural heart disease. Deep learning models (DLMs) have shown the ability to recognize arrhythmias using 12-lead electrocardiograms (ECGs). However, the limited range of arrhythmia types covered and concerns about dataset robustness have hindered widespread adoption. This study aimed to develop a DLM capable of detecting various arrhythmias across diverse datasets. This algorithm development study utilized 22,130 ECGs, divided into development, tuning, validation, and competition sets. External validation was conducted on three open datasets (CODE-test, PTB-XL, CPSC2018) comprising 32,495 ECGs. The study also assessed the long-term risks of new-onset atrial fibrillation (AF), heart failure (HF), and mortality in individuals with false-positive AF detection by the DLM. In the validation set, the DLM achieved areas under the receiver operating characteristic curve above 0.97 and sensitivity/specificity exceeding 90% across most arrhythmia classes. It demonstrated cardiologist-level performance, ranking first in balanced accuracy in a human-machine competition. External validation confirmed comparable performance. Individuals with false-positive AF detection had a significantly higher risk of new-onset AF (hazard ratio [HR]: 1.69, 95% confidence interval [CI]: 1.11-2.59), HF (HR: 1.73, 95% CI: 1.20-2.51), and mortality (HR: 1.40, 95% CI: 1.02-1.92) compared to true-negative individuals after adjusting for age and sex. We developed an accurate DLM capable of detecting 23 cardiac arrhythmias across multiple datasets. This DLM serves as a valuable screening tool to aid physicians in identifying high-risk patients, with potential implications for early intervention and risk stratification.
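The sensitivity/specificity and ROC-curve metrics quoted above can be computed from first principles. A minimal pure-Python sketch; the labels and scores below are invented toy data, not values from the study:

```python
# Toy illustration (not the paper's model): sensitivity, specificity,
# and ROC AUC for a binary arrhythmia classifier on made-up data.

def sensitivity_specificity(labels, preds):
    """labels/preds are 0/1 lists; returns (sensitivity, specificity)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive is scored higher than a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1]
preds  = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(labels, preds)
auc = roc_auc(labels, scores)
```

In practice these come from a library such as scikit-learn; the point is only that the reported numbers are ordinary confusion-matrix and ranking statistics.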

PMID:40259136 | DOI:10.1007/s10916-025-02177-0

Categories: Literature Watch

Design and experimental research of on-device style transfer models for mobile environments

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 21;15(1):13724. doi: 10.1038/s41598-025-98545-4.

ABSTRACT

This study develops a neural style transfer (NST) model optimized for real-time execution on mobile devices through on-device AI, eliminating reliance on cloud servers. By embedding AI models directly into mobile hardware, this approach reduces operational costs and enhances user privacy. However, designing deep learning models for mobile deployment presents a trade-off between computational efficiency and visual quality, as reducing model size often leads to performance degradation. To address this challenge, we propose a set of lightweight NST models incorporating depthwise separable convolutions, residual bottlenecks, and optimized upsampling techniques inspired by MobileNet and ResNet architectures. Five model variations are designed and evaluated based on parameters, floating-point operations, memory usage, and image transformation quality. Experimental results demonstrate that our optimized models achieve a balance between efficiency and performance, enabling high-quality real-time style transfer on resource-constrained mobile environments. These findings highlight the feasibility of deploying NST applications on mobile devices, paving the way for advancements in real-time artistic image processing in mobile photography, augmented reality, and creative applications.
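The depthwise separable convolutions mentioned above are the main lever for shrinking such models. A back-of-envelope parameter-count comparison; the kernel size and channel counts are chosen arbitrarily for illustration, not taken from the paper:

```python
# Why depthwise separable convolutions (as in MobileNet) shrink a model:
# compare parameter counts for one layer.

def standard_conv_params(k, c_in, c_out):
    # A full convolution mixes all input channels for every output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise stage: one k x k filter per input channel.
    # Pointwise stage: 1x1 convolution to mix channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)        # 3*3*64*128 = 73728
sep = depthwise_separable_params(3, 64, 128)  # 576 + 8192 = 8768
reduction = std / sep                         # roughly 8x fewer parameters
```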

PMID:40259046 | DOI:10.1038/s41598-025-98545-4

Categories: Literature Watch

DSIT-UNet: a dual-stream iterative transformer-based UNet architecture for segmenting brain tumors from FLAIR MRI images

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 22;15(1):13815. doi: 10.1038/s41598-025-98464-4.

ABSTRACT

Brain tumor segmentation remains challenging in medical imaging with conventional therapies and rehabilitation owing to the complex morphology and heterogeneous nature of tumors. Although convolutional neural networks (CNNs) have advanced medical image segmentation, they struggle with long-range dependencies because of their limited receptive fields. We propose Dual-Stream Iterative Transformer UNet (DSIT-UNet), a novel framework that combines Iterative Transformer (IT) modules with a dual-stream encoder-decoder architecture. Our model incorporates a transformed spatial-hybrid attention optimization (TSHAO) module to enhance multiscale feature interactions and balance local details with the global context. We evaluated DSIT-UNet using three benchmark datasets: The Cancer Imaging Archive (TCIA) collection from The Cancer Genome Atlas (TCGA), BraTS2020, and BraTS2021. On TCIA, our model achieved a mean Intersection over Union (IoU) of 95.21%, mean Dice coefficient (mDice) of 96.23%, precision of 95.91%, and recall of 96.55%. On BraTS2020, it attained a mean IoU of 95.88%, mDice of 96.32%, precision of 96.21%, and recall of 96.44%, surpassing existing methods. The superior results of DSIT-UNet demonstrate its effectiveness in capturing tumor boundaries and improving segmentation robustness through hierarchical attention mechanisms and multiscale feature extraction. This architecture advances automated brain tumor segmentation, with potential applications in clinical neuroimaging and future extensions to 3D volumetric segmentation.
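The Dice coefficient and IoU reported above are simple overlap ratios between predicted and ground-truth masks. A sketch on tiny flat binary masks (the masks are invented; real evaluations run over full 2D/3D volumes):

```python
# Dice coefficient and Intersection-over-Union for binary masks,
# represented here as flat 0/1 lists for illustration only.

def dice(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def iou(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
d = dice(pred, truth)   # 2*2 / (3+2) = 0.8
j = iou(pred, truth)    # 2 / 3
```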

PMID:40259039 | DOI:10.1038/s41598-025-98464-4

Categories: Literature Watch

The development of CC-TF-BiGRU model for enhancing accuracy in photovoltaic power forecasting

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 21;15(1):13790. doi: 10.1038/s41598-025-99109-2.

ABSTRACT

In the face of escalating global energy crises and the pressing challenges of environmental pollution, the imperative for sustainable energy solutions has never been more pronounced. Photovoltaic (PV) power generation is recognized as a cornerstone of the transition towards a clean energy paradigm. This study introduces a short-term PV power forecasting methodology based on teacher forcing (TF) integrated with a bi-directional gated recurrent unit (BiGRU). First, chaotic feature extraction is employed in conjunction with the C-C method to discern the pivotal factors that shape the dynamics of PV power, complemented by the inclusion of solar radiation data as an additional input. Second, a fusion of gradient boosting decision trees (GBDT) and BiGRU is leveraged to process the time series data. Finally, teacher forcing is integrated into the model to bolster forecasting accuracy and stability. Experimental validations demonstrate the strong performance of the proposed method under complex and diverse weather conditions, offering a practical technical approach and theoretical framework for PV power forecasting.
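Teacher forcing, the core training trick named in this abstract, is easiest to see with a toy one-step predictor: during training, the input at step t is the ground-truth output from step t-1 rather than the model's own (possibly wrong) prediction. The "model" and target series below are stand-ins invented for illustration, not the paper's BiGRU:

```python
# Conceptual, framework-free sketch of teacher forcing.
# The toy "model" (2x + 1) is deliberately imperfect so that errors
# compound when the model feeds on its own predictions.

def model(x):
    return 2 * x + 1  # stand-in for one decoder/recurrent step

targets = [2, 4, 8, 16]  # invented ground-truth sequence

def rollout(targets, teacher_forcing):
    preds, prev = [], 1  # seed value
    for truth in targets:
        preds.append(model(prev))
        # Teacher forcing feeds the true value back in;
        # free running feeds the model's own last prediction.
        prev = truth if teacher_forcing else preds[-1]
    return preds

tf_preds   = rollout(targets, teacher_forcing=True)   # errors stay local
free_preds = rollout(targets, teacher_forcing=False)  # errors compound
```

With teacher forcing the per-step error stays bounded; in the free-running rollout each mistake feeds the next step and the trajectory diverges, which is exactly the instability TF is meant to suppress during training.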

PMID:40258997 | DOI:10.1038/s41598-025-99109-2

Categories: Literature Watch

Improving deep learning-based neural distinguisher with multiple ciphertext pairs for Speck and Simon

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 21;15(1):13696. doi: 10.1038/s41598-025-98251-1.

ABSTRACT

The neural network-based differential distinguisher has attracted significant interest from researchers due to its high efficiency in cryptanalysis since its introduction by Gohr in 2019. However, the accuracy of existing neural distinguishers remains limited for high-round-reduced cryptosystems. In this work, we explore the design principles of neural networks and propose a novel neural distinguisher based on a multi-scale convolutional block and dense residual connections. Two different ablation schemes are designed to verify the efficiency of the proposed neural distinguisher. Additionally, the concept of a linear attack is introduced to optimize the input dataset for the neural distinguisher. By combining ciphertext pairs, the differences between ciphertext pairs, the keys, and the differences between the keys, a novel dataset model is designed. The results show that the accuracy of the proposed neural distinguisher, utilizing the novel neural network and dataset, is 0.15-0.45% higher than Gohr's distinguisher for Speck 32/64 when using a single ciphertext pair as input. When using multiple ciphertext pairs as input, it is 1.24-3.5% higher than the best distinguishers for Speck 32/64 and 0.32-1.83% higher than the best distinguishers for Simon 32/64. Finally, a key recovery attack based on the proposed neural distinguisher using a single ciphertext pair is implemented, achieving a success rate of 61.8%, which is 9.7% higher than the distinguisher proposed by Gohr. Therefore, the proposed neural distinguisher demonstrates significant advantages in both accuracy and key recovery rate.

PMID:40258982 | DOI:10.1038/s41598-025-98251-1

Categories: Literature Watch

Securing the CAN bus using deep learning for intrusion detection in vehicles

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 22;15(1):13820. doi: 10.1038/s41598-025-98433-x.

ABSTRACT

The Controller Area Network (CAN) bus protocol is the essential communication backbone in vehicles within the Intelligent Transportation System (ITS), enabling interaction between electronic control units (ECUs). However, CAN messages lack authentication and security, making the system vulnerable to attacks such as DoS, fuzzing, impersonation, and spoofing. This paper evaluates deep learning methods to detect intrusions in the CAN bus network. Using the Car Hacking, Survival Analysis, and OTIDS datasets, we train and test models to identify automotive cyber threats. We explore recurrent neural network (RNN) variants, including LSTM and GRU, alongside the convolutional VGG-16, to analyze temporal and spatial features in the data. LSTMs and GRUs handle long-term dependencies in sequential data, making them suitable for analyzing CAN messages. Bi-LSTMs enhance this by processing sequences in both directions, learning from past and future contexts to improve anomaly detection. Our results show that LSTM achieves 99.89% accuracy in binary classification, while VGG-16 reaches 100% accuracy in multiclass classification. These findings demonstrate the potential of deep learning techniques in improving the security and resilience of ITS by effectively detecting and mitigating CAN bus network attacks.
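Sequence models like the LSTMs above typically consume fixed-length windows of CAN traffic. A minimal windowing sketch over an invented stream of CAN arbitration IDs; real pipelines also encode payload bytes and inter-frame timing:

```python
# Slice a stream of CAN IDs into fixed-length, overlapping windows,
# the usual input shape for sequence classifiers.

def sliding_windows(frames, size, step):
    return [frames[i:i + size]
            for i in range(0, len(frames) - size + 1, step)]

# Invented traffic: normal IDs plus a 0x000 frame typical of DoS-style
# flooding (high-priority ID), which a trained model should flag.
can_ids = [0x130, 0x131, 0x140, 0x130, 0x131, 0x140, 0x000]
windows = sliding_windows(can_ids, size=4, step=2)
```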

PMID:40258975 | DOI:10.1038/s41598-025-98433-x

Categories: Literature Watch

Mitigating side-channel attacks on FPGA through deep learning and dynamic partial reconfiguration

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 21;15(1):13745. doi: 10.1038/s41598-025-98473-3.

ABSTRACT

This paper introduces a framework that combines Deep Learning (DL) models and Dynamic Partial Reconfiguration (DPR) in Field Programmable Gate Arrays (FPGA) to mitigate Side Channel Attacks (SCA). Traditional static defense mechanisms often fail to fully mitigate SCA because they lack the ability to adapt dynamically to attacks. The proposed approach overcomes this limitation by adaptively reconfiguring the FPGA resources in real-time, disrupting the SCA patterns, and reducing the effectiveness of potential attacks. One of the notable advantages of this approach is its ability to defend against side-channel attacks while the FPGA design is operational. The framework accomplishes this by reconfiguring the FPGA resources to optimize response times, achieving latency levels beyond the reach of traditional static defense mechanisms. In particular, this study concentrates on mitigating power side-channel attacks, highlighting the resilience of the DL-DPR integration. Beyond its demonstrated efficacy against power SCA, the proposed framework can be extended to be adaptable to other types of side-channel attacks, making it a potential solution for hardware security. The integration of DL models allows for sophisticated threat analysis, while DPR provides the flexibility to implement countermeasures dynamically. Experimental results show that the latency from detection to mitigation is within 20 clock cycles. This combination represents a paradigm shift in securing hardware systems, moving from reactive to proactive defense mechanisms. The framework's real-time adaptability ensures it stays ahead of attackers, continuously evolving to neutralize new threats. The findings presented in this paper underscore the potential of combining Artificial Intelligence (AI) and FPGA technologies to redefine hardware security. By addressing detection and mitigation in a unified framework, the proposed methodology significantly enhances the resilience of FPGA designs and lays the groundwork for future research in adaptive security mechanisms.

PMID:40258964 | DOI:10.1038/s41598-025-98473-3

Categories: Literature Watch

Using deep learning for estimation of time-since-injury in pediatric accidental fractures

Deep learning - Mon, 2025-04-21 06:00

Pediatr Radiol. 2025 Apr 22. doi: 10.1007/s00247-025-06223-4. Online ahead of print.

ABSTRACT

BACKGROUND: Estimating time-since-injury of healing fractures is imprecise, encompassing excessively wide timeframes. Most injured children are evaluated at non-children's hospitals, yet pediatric radiologists can disagree with up to one in six skeletal imaging interpretations from referring community hospitals. There is a need to improve image interpretation by considering additional methods for fracture dating.

OBJECTIVE: To train and validate deep learning models to correctly estimate the age of pediatric accidental long bone fractures.

MATERIALS AND METHODS: This secondary data analysis used radiographic images of accidental long bone fractures in children <6 years at the time of injury seen at a large Midwestern children's hospital between 2000-2016. We built deep learning models both to classify fracture images into different age groups and to directly estimate fracture age (time-since-injury). We used cross-validation to evaluate model performance across various metrics, including confusion matrices, sensitivity/specificity, and activation maps for age classification, and mean absolute error (MAE) and root mean squared error (RMSE) for age estimation.
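The MAE and RMSE metrics named in the methods are straightforward to compute; the day counts below are invented examples, not study data:

```python
# Mean absolute error and root mean squared error for fracture-age
# estimates, in plain Python.
import math

def mae(truth, pred):
    return sum(abs(t - p) for t, p in zip(truth, pred)) / len(truth)

def rmse(truth, pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth))

true_days = [3, 10, 21, 7]   # invented true time-since-injury (days)
est_days  = [5, 8, 28, 7]    # invented model estimates
m = mae(true_days, est_days)   # (2 + 2 + 7 + 0) / 4 = 2.75
r = rmse(true_days, est_days)  # sqrt(57 / 4)
```

RMSE penalizes large misses more heavily than MAE, which is why both are worth reporting for dating tasks where occasional gross errors matter clinically.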

RESULTS: Our study cohort contained 2,328 radiographs from 399 patients. Overall, our models performed above baselines for fracture age classification and estimation, both when trained/validated across all bones and on specific bone types. The best model was able to estimate fracture age for any long bone with an MAE of 6.2 days and with 68% of estimates falling within 7 days of the correct fracture age.

CONCLUSION: Our study successfully demonstrated that, for radiographic dating of accidental fractures of long bones, deep learning models can estimate time-since-injury with above-baseline accuracy.

PMID:40258953 | DOI:10.1007/s00247-025-06223-4

Categories: Literature Watch

A novel deep learning approach to classify 3D foot types of diabetic patients

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 22;15(1):13819. doi: 10.1038/s41598-025-98471-5.

ABSTRACT

Diabetes mellitus is a worldwide epidemic that leads to significant changes in foot shape, deformities, and ulcers. Precise classification of the diabetic foot not only helps identify foot abnormalities but also facilitates personalized treatment and preventive measures through the engineering design of foot orthoses. In this study, we propose a novel deep learning method based on DiffusionNet which incorporates a self-attention mechanism and external features to classify the foot types of diabetic patients into six categories by using simple 3D foot images directly. Our approach achieves a high accuracy of 82.9%, surpassing existing machine and deep learning methods. The proposed model offers a cost-effective way to analyse foot shapes and facilitate the customization process for both the footwear industry and medical applications.

PMID:40258927 | DOI:10.1038/s41598-025-98471-5

Categories: Literature Watch

Bio-inspired multi-agent system for distributed power and interference management in MIMO-OFDM networks

Deep learning - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 21;15(1):13740. doi: 10.1038/s41598-025-97944-x.

ABSTRACT

MIMO-OFDM systems are essential for high-capacity wireless networks, offering improved data throughput and spectral efficiency necessary for dense user environments. Effective power and interference management are pivotal for maintaining signal quality and enhancing resource utilization. Existing techniques for resource allocation and interference control in massive MIMO-OFDM networks face challenges related to scalability, adaptability, and energy efficiency. To address these limitations, this work proposes a novel bio-inspired Termite Colony Optimization-based Multi-Agent System (TCO-MAS) integrated with an LSTM model for predictive adaptability. The deep learning LSTM model aids agents in forecasting future network conditions, enabling dynamic adjustment of pheromone levels for optimized power allocation and interference management. By simulating termite behavior, agents utilize pheromone-based feedback to achieve localized optimization decisions with minimal communication overhead. Experimental analyses evaluated the proposed TCO-MAS across key metrics such as Sum Rate, Energy Efficiency, Spectral Efficiency, Latency, and Fairness Index. Results demonstrate that TCO-MAS outperformed conventional algorithms, achieving a 20% higher sum rate and 15% better energy efficiency under high-load conditions. Limitations include dependency on specific pheromone adjustment parameters, which may require fine-tuning for diverse scenarios. Practical implications highlight its potential for scalable and adaptive deployment in ultra-dense wireless networks, though additional field testing is recommended to ensure robustness in varied real-world environments.
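The pheromone-based feedback the abstract alludes to follows the usual evaporation-plus-deposit update from ant/termite colony optimization. The evaporation rate and reward values below are illustrative assumptions, not the paper's parameters:

```python
# Generic colony-optimization pheromone update: each round, all trails
# evaporate a little and well-performing choices receive a deposit.

def update_pheromone(tau, rho, deposits):
    """tau: current pheromone per option; rho: evaporation rate in (0,1);
    deposits: reward added to each option this round."""
    return [(1 - rho) * t + d for t, d in zip(tau, deposits)]

tau = [1.0, 1.0, 1.0]  # three candidate power-allocation options
# Suppose option 1 gave the best sum rate this round (invented reward):
tau = update_pheromone(tau, rho=0.1, deposits=[0.0, 0.5, 0.0])
best = max(range(len(tau)), key=lambda i: tau[i])  # agents now favor option 1
```

In the paper's scheme an LSTM forecast would modulate the deposits, biasing agents toward allocations expected to perform well under predicted network conditions.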

PMID:40258916 | DOI:10.1038/s41598-025-97944-x

Categories: Literature Watch

Polypharmacy and potentially inappropriate medication (PIM) use among older veterans with idiopathic pulmonary fibrosis (IPF) - a retrospective cohort study

Idiopathic Pulmonary Fibrosis - Mon, 2025-04-21 06:00

BMC Pulm Med. 2025 Apr 21;25(1):186. doi: 10.1186/s12890-025-03611-2.

ABSTRACT

BACKGROUND: Idiopathic pulmonary fibrosis (IPF) is a deadly respiratory disease of older patients. IPF therapies (antifibrotics) are efficacious in slowing disease progression, but they are critically underutilized. Potential barriers to antifibrotic use are polypharmacy and potentially inappropriate medications (PIM). We examined the frequency of these factors for older patients with IPF.

METHODS: We retrospectively analyzed records of Veterans ≥ 65 years old in the Durham Veterans Affairs Health Care System who received a diagnosis of IPF and received care between 11 April 2023 and 9 September 2024. We analyzed medication profiles from the Corporate Data Warehouse including total medication counts, polypharmacy (≥ 5 medications), severe polypharmacy (> 15 medications), and prescription of a PIM in the anticholinergic, antidepressant, sedative, and antipsychotic classes using published geriatric guidelines (2023 Beers criteria, Screening Tool of Older People's Potentially Inappropriate Prescriptions [STOPP] version 3). Identified PIMs underwent protocolized review to categorize them further as likely appropriate or inappropriate.
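The polypharmacy thresholds defined above (≥ 5 medications; > 15 for severe polypharmacy) map directly to a small classifier; the medication counts below are fabricated example patients, not study data:

```python
# Flag polypharmacy and severe polypharmacy from total medication counts,
# using the thresholds stated in the methods.

def polypharmacy_flags(med_count):
    return {
        "polypharmacy": med_count >= 5,
        "severe_polypharmacy": med_count > 15,
    }

# Four invented patients:
flags = [polypharmacy_flags(n) for n in [3, 5, 14, 22]]
n_poly = sum(f["polypharmacy"] for f in flags)            # 3 of 4
n_severe = sum(f["severe_polypharmacy"] for f in flags)   # 1 of 4
```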

RESULTS: We identified 367 Veterans ≥ 65 years old with a diagnosis of IPF during our study period. Total medication count was high for older Veterans (mean 14.2, SD 7.0). Veterans commonly had polypharmacy (350/367, 95.4%), severe polypharmacy (161/367, 43.9%), and ≥ 1 PIM (97/367, 26.4%). After protocolized review, 5.7% (21/367) of older Veterans with IPF had a likely inappropriate medication without documentation of a failed preferred alternative.

CONCLUSION: For older Veterans with IPF, polypharmacy and PIM use were common and represent likely barriers to effective initiation of IPF pharmacotherapy. Interventions that target these factors, such as deprescribing, could improve antifibrotic use.

CLINICAL TRIAL NUMBER: Not applicable.

PMID:40259309 | DOI:10.1186/s12890-025-03611-2

Categories: Literature Watch

Targeted immunotherapy rescues pulmonary fibrosis by reducing activated fibroblasts and regulating alveolar cell profile

Idiopathic Pulmonary Fibrosis - Mon, 2025-04-21 06:00

Nat Commun. 2025 Apr 21;16(1):3748. doi: 10.1038/s41467-025-59093-7.

ABSTRACT

Idiopathic pulmonary fibrosis (IPF) is a severe lung disease occurring throughout the world; however, few clinical therapies are available for treating this disorder. Overactivated fibroblasts drive abnormal fibrosis accumulation to maintain a dynamic balance between inflammation and extracellular matrix (ECM) stiffness. Given that pulmonary cells can regenerate, the lung may possess self-repairing abilities if fibrosis is removed via clearance of overactivated fibroblasts. The aim of this study was to evaluate the therapeutic activity of transient antifibrotic chimeric antigen receptor (CAR) T cells (generated via a newly designed lipid nanoparticle-messenger RNA (LNP-mRNA) system) and to explore the regeneration mechanisms of the lung in a male mouse model of bleomycin-induced pulmonary fibrosis. Here we found that fibrosis-induced ECM stiffening impaired alveolar epithelial cell compensation. The proposed LNP-mRNA therapy eliminated overactivated fibroblasts to rescue pulmonary fibrosis. The restored ECM environment regulated the cellular profile. The elevated plasticity of AT2 and Pclaf+ cells increased the AT1 cell population via polarization. Apoe+ macrophages and increased numbers of effector T cells were shown to reestablish pulmonary immunity. Hence, LNP-mRNA treatment for fibrosis can restore pulmonary structure and function to a degree similar to that of a healthy lung. This therapy is a potential treatment for IPF patients.

PMID:40258811 | DOI:10.1038/s41467-025-59093-7

Categories: Literature Watch

Microbial Community Shifts and Nitrogen Utilization in Peritidal Microbialites: The Role of Salinity and pH in Microbially Induced Carbonate Precipitation

Systems Biology - Mon, 2025-04-21 06:00

Microb Ecol. 2025 Apr 22;88(1):31. doi: 10.1007/s00248-025-02532-1.

ABSTRACT

Microbialites have the potential to record environmental changes and act as biosignatures of past geochemical conditions. As such, they could be used as indicators to decipher ancient rock records. Modern microbialites are primarily found in environments where competitors and destructors are absent or where biogeochemical conditions favor their continuous formation. Many previous studies have essentially focused on the role of photosynthetic microbes in controlling pH and carbonate speciation and potentially overlooked alternative non-photosynthetic pathways of carbonate precipitation. Given that microbial activity induces subtle geochemical changes, microbially induced carbonate precipitation (MICP) can involve several mechanisms, from extracellular polymeric substances (EPS), sulfate reduction, and anaerobic oxidation of methane to nitrogen cycling processes, such as ammonification, ureolysis, and denitrification. Moreover, the peritidal zone, where temperate microbialites are mostly found today, is under the influence of both freshwater and seawater, arguing for successive biogeochemical processes leading to mineral saturation and questioning interpretations of fossil records. This study investigates microbialites in three tide pools from the peritidal zone of Fongchueisha, Hengchun, Taiwan, to address the influence of salinity on microbial community composition and carbonate precipitation mechanisms. Microbial samples were collected across varying salinity gradients at multiple time points and analyzed using next-generation sequencing (NGS) of bacterial 16S and eukaryotic 18S rRNA genes. Our results indicate that dominant bacterial groups, including Cyanobacteria and Alphaproteobacteria, were largely influenced by salinity variations, although pH exhibited a stronger correlation with community composition. Combining our results on geochemistry and taxonomic diversity over time, we inferred a shift in the trophic mode under high-salinity conditions, during which the use of urea and amino acids as a nitrogen source outcompetes diazotrophy, with ureolysis and ammonification of amino acids reinforcing carbonate precipitation dynamics by triggering an increase in both pH and dissolved inorganic carbon.

PMID:40259028 | DOI:10.1007/s00248-025-02532-1

Categories: Literature Watch

Intersecting impact of CAG repeat and huntingtin knockout in stem cell-derived cortical neurons

Systems Biology - Mon, 2025-04-21 06:00

Neurobiol Dis. 2025 Apr 19:106914. doi: 10.1016/j.nbd.2025.106914. Online ahead of print.

ABSTRACT

Huntington's Disease (HD) is caused by a CAG repeat expansion in the gene encoding Huntingtin (HTT). While normal HTT function appears impacted by the mutation, the specific pathways unique to CAG repeat expansion versus loss of normal function are unclear. To understand the impact of the CAG repeat expansion, we evaluated biological signatures of HTT knockout (HTT KO) versus those that occur from the CAG repeat expansion by applying multi-omics, live cell imaging, survival analysis and a novel feature-based pipeline to study cortical neurons (eCNs) derived from an isogenic human embryonic stem cell series (RUES2). HTT KO and the CAG repeat expansion influence developmental trajectories of eCNs, with opposing effects on growth. Network analyses of differentially expressed genes and proteins associated with enriched epigenetic motifs identified subnetworks common to CAG repeat expansion and HTT KO that include neuronal differentiation, cell cycle regulation, and mechanisms related to transcriptional repression and may represent gain-of-function mechanisms that cannot be explained by HTT loss of function alone. A combination of dominant and loss-of-function mechanisms is likely involved in the aberrant neurodevelopmental and neurodegenerative features of HD; understanding them can help inform therapeutic strategies.

PMID:40258535 | DOI:10.1016/j.nbd.2025.106914

Categories: Literature Watch

A theoretical model for detecting drug interaction with awareness of timing of exposure

Drug-induced Adverse Events - Mon, 2025-04-21 06:00

Sci Rep. 2025 Apr 21;15(1):13693. doi: 10.1038/s41598-025-98528-5.

ABSTRACT

Drug-drug interaction-induced (DDI-induced) adverse drug events (ADEs) are a significant public health burden. The risk of an ADE can be related to timing of exposure (TOE), such as initiating two drugs concurrently or adding one drug to an existing drug. Thus, real-world-data-based DDI detection should be expanded to investigate precise adverse DDIs with special awareness of TOE. We developed a Sensitive and Timing-awarE Model (STEM), which was able to optimize the probability of detection and control the false positive rate when mining all two-drug combinations under a case-crossover design, in particular for DDIs with TOE-dependent risk. We analyzed large-scale US administrative claims data and conducted performance evaluation analyses. Using STEM, we identified signals of DDIs, in particular DDIs with TOE-dependent risk. We also observed that STEM identified significantly more signals than conditional logistic regression model-based (CLRM-based) methods and the Benjamini-Hochberg procedure. In the performance evaluation, STEM demonstrated proper false positive control and achieved a higher probability of detection than CLRM-based methods and the Benjamini-Hochberg procedure. STEM has a high probability of identifying signals of DDIs in high-throughput DDI mining while controlling the false positive rate, in particular for DDIs with TOE-dependent risk.
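STEM itself is not specified in this abstract, but the case-crossover design it builds on can be sketched: each patient contributes exposure status in a hazard window (just before the event) and an earlier control window, and a crude odds ratio comes from the discordant windows only. The records below are invented:

```python
# Minimal case-crossover (self-matched) odds ratio: only patients whose
# hazard and control windows disagree on exposure carry information.

def case_crossover_or(records):
    """records: list of (exposed_in_hazard, exposed_in_control) booleans."""
    b = sum(1 for h, c in records if h and not c)  # exposed in hazard only
    c = sum(1 for h, c in records if c and not h)  # exposed in control only
    return b / c

# Invented cohort: 12 hazard-only, 4 control-only, 9 concordant patients.
records = [(True, False)] * 12 + [(False, True)] * 4 + [(True, True)] * 9
or_hat = case_crossover_or(records)  # 12 / 4 = 3.0
```

An OR well above 1 suggests the exposure pattern clusters just before the event; a timing-aware method like STEM additionally distinguishes, for example, concurrent initiation from add-on exposure.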

PMID:40258952 | DOI:10.1038/s41598-025-98528-5

Categories: Literature Watch

Sharper insights: Adaptive ellipse-template for robust fovea localization in challenging retinal landscapes

Deep learning - Mon, 2025-04-21 06:00

Comput Biol Med. 2025 Apr 20;191:110125. doi: 10.1016/j.compbiomed.2025.110125. Online ahead of print.

ABSTRACT

Automated identification of retinal landmarks, particularly the fovea, is crucial for diagnosing diabetic retinopathy and other ocular diseases. However, accurate identification is challenging due to varying contrast, color irregularities, anatomical structure, and the presence of lesions near the macula in fundus images. Existing methods often struggle to maintain accuracy in these complex conditions, particularly when lesions obscure vital regions. To overcome these limitations, this paper introduces a novel adaptive ellipse-template-based approach for fovea localization, leveraging mathematical modeling of blood vessel (BV) trajectories and optic disc (OD) positioning. Unlike traditional fixed-template models, our method dynamically adjusts the ellipse parameters based on OD diameter, ensuring a generalized and adaptable template. This flexibility enables consistent detection performance, even in challenging images with significant lesion interference. Extensive validation on ten publicly available databases, including MESSIDOR, DRIVE, DIARETDB0, DIARETDB1, HRF, IDRiD, HEIMED, ROC, GEI, and NETRALAYA, demonstrates a superior detection efficiency of 99.5%. Additionally, the method achieves a low mean Euclidean distance of 13.48 pixels with a standard deviation of 15.5 pixels between the actual and detected fovea locations, highlighting its precision and reliability. The proposed approach significantly outperforms conventional template-based and deep learning methods, particularly in lesion-rich and low-contrast conditions. It is computationally efficient, interpretable, and robust, making it a valuable tool for automated retinal image analysis in clinical settings.
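The mean Euclidean distance reported above is simply the average pixel distance between detected and ground-truth fovea coordinates. A sketch with made-up coordinates, not values from the validation databases:

```python
# Fovea localization error: mean Euclidean distance (in pixels)
# between ground-truth and detected coordinates.
import math

true_pts = [(100, 120), (90, 80)]   # invented ground-truth fovea centers
det_pts  = [(103, 124), (90, 85)]   # invented detections

dists = [math.dist(p, q) for p, q in zip(true_pts, det_pts)]
mean_err = sum(dists) / len(dists)
```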

PMID:40258324 | DOI:10.1016/j.compbiomed.2025.110125

Categories: Literature Watch

Advances in artificial intelligence for diabetes prediction: insights from a systematic literature review

Deep learning - Mon, 2025-04-21 06:00

Artif Intell Med. 2025 Apr 15;164:103132. doi: 10.1016/j.artmed.2025.103132. Online ahead of print.

ABSTRACT

Diabetes mellitus (DM), a prevalent metabolic disorder, has significant global health implications. The advent of machine learning (ML) has revolutionized the ability to predict and manage diabetes early, offering new avenues to mitigate its impact. This systematic review examined 53 articles on ML applications for diabetes prediction, focusing on datasets, algorithms, training methods, and evaluation metrics. Various datasets, such as the Singapore National Diabetic Retinopathy Screening Program, REPLACE-BG, National Health and Nutrition Examination Survey (NHANES), and Pima Indians Diabetes Database (PIDD), have been explored, highlighting their unique features and challenges, such as class imbalance. This review assesses the performance of various ML algorithms, such as Convolutional Neural Networks (CNN), Support Vector Machines (SVM), Logistic Regression, and XGBoost, for the prediction of diabetes outcomes from multiple datasets. In addition, it explores explainable AI (XAI) methods such as Grad-CAM, SHAP, and LIME, which improve the transparency and clinical interpretability of AI models in assessing diabetes risk and detecting diabetic retinopathy. Techniques such as cross-validation, data augmentation, and feature selection are discussed in terms of their influence on the versatility and robustness of the model. Some evaluation techniques involving k-fold cross-validation, external validation, and performance indicators such as accuracy, area under curve, sensitivity, and specificity are presented. The findings highlight the usefulness of ML in addressing the challenges of diabetes prediction, the value of sourcing different data types, the need to make models explainable, and the need to keep models clinically relevant. 
This study highlights significant implications for healthcare professionals, policymakers, technology developers, patients, and researchers, advocating interdisciplinary collaboration and ethical considerations when implementing ML-based diabetes prediction models. By consolidating existing knowledge, this systematic literature review outlines future research directions aimed at improving diagnostic accuracy, patient care, and healthcare efficiency through advanced ML applications. This comprehensive review contributes to the ongoing efforts to utilize artificial intelligence technology for better prediction of diabetes, ultimately aiming to reduce the global burden of this widespread disease.
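The evaluation metrics named in the abstract (accuracy, sensitivity, specificity) all derive from a binary confusion matrix. A minimal sketch, using purely hypothetical counts rather than figures from any reviewed study:

```python
# Hedged sketch: computing the binary evaluation metrics named above
# from confusion-matrix counts. The counts are illustrative only.
def binary_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive (diabetic) class
    specificity = tn / (tn + fp)   # recall on the negative class
    return accuracy, sensitivity, specificity

# Hypothetical test-set counts for illustration
acc, sens, spec = binary_metrics(tp=80, fp=10, tn=90, fn=20)
```

In a k-fold cross-validation setting, these metrics would be computed per fold and then averaged, which is what makes the fold-wise standard deviation a useful robustness indicator.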

PMID:40258308 | DOI:10.1016/j.artmed.2025.103132

Categories: Literature Watch

An interpretable artificial intelligence approach to differentiate between blastocysts with similar or same morphological grades

Deep learning - Mon, 2025-04-21 06:00

Hum Reprod. 2025 Apr 21:deaf066. doi: 10.1093/humrep/deaf066. Online ahead of print.

ABSTRACT

STUDY QUESTION: Can a quantitative method be developed to differentiate between blastocysts with similar or same inner cell mass (ICM) and trophectoderm (TE) grades, while also reflecting their potential for live birth?

SUMMARY ANSWER: We developed BlastScoringNet, an interpretable deep-learning model that quantifies blastocyst ICM and TE morphology with continuous scores, enabling finer differentiation between blastocysts with similar or same grades, with higher scores significantly correlating with higher live birth rates.

WHAT IS KNOWN ALREADY: While the Gardner grading system is widely used by embryologists worldwide, blastocysts with similar or identical ICM and TE grades pose challenges for embryologists in decision-making. Furthermore, human assessment is subjective and inconsistent in predicting which blastocysts have higher potential to result in live birth.

STUDY DESIGN, SIZE, DURATION: The study design consists of three main steps. First, BlastScoringNet was developed using a grading dataset of 2760 blastocysts with majority-voted Gardner grades. Second, the model was applied to a live birth dataset of 15 228 blastocysts with known live birth outcomes to generate blastocyst scores. Finally, the correlation between these scores and live birth outcomes was assessed. The blastocysts were collected from patients who underwent IVF treatments between 2016 and 2018. For external application study, an additional grading dataset of 1455 blastocysts and a live birth dataset of 476 blastocysts were collected from patients who underwent IVF treatments between 2021 and 2023 at an external IVF institution.

PARTICIPANTS/MATERIALS, SETTING, METHODS: In this retrospective study, we developed BlastScoringNet, an interpretable deep-learning model which outputs an expansion degree grade and continuous scores quantifying a blastocyst's ICM morphology and TE morphology, based on the Gardner grading system. The continuous ICM and TE scores were calculated by weighting each base grade's predicted probability and summing the weighted probabilities. To represent each blastocyst's overall potential for live birth, we combined the ICM and TE scores using their odds ratios (ORs) for live birth. We further assessed the correlation between live birth rates and the ICM score, TE score, and the OR-combined score (adjusted for expansion degree) by applying BlastScoringNet to blastocysts with known live birth outcomes. To test its generalizability, we also applied BlastScoringNet to an external IVF institution, accounting for variations in imaging conditions, live birth rates, and embryologists' experience levels.
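The scoring idea described above can be sketched in a few lines: a continuous score is the expectation of numeric weights assigned to the Gardner base grades under the model's predicted probabilities, and the ICM and TE scores are then combined on the log-odds scale. The grade weights, probabilities, and odds ratios below are illustrative assumptions, not the paper's fitted values:

```python
# Hedged sketch of probability-weighted continuous grading and
# OR-based score combination. All numeric values are hypothetical.
import math

GRADE_WEIGHTS = {"A": 2.0, "B": 1.0, "C": 0.0}  # assumed ordinal weights

def continuous_score(probs):
    # expectation of the grade weights under the predicted probabilities
    return sum(GRADE_WEIGHTS[g] * p for g, p in probs.items())

def or_combined(icm_score, te_score, or_icm=1.5, or_te=1.3):
    # combine on the log-odds scale: each score contributes log(OR) per unit
    return icm_score * math.log(or_icm) + te_score * math.log(or_te)

icm = continuous_score({"A": 0.7, "B": 0.25, "C": 0.05})  # between B and A
te = continuous_score({"A": 0.2, "B": 0.6, "C": 0.2})     # near a solid B
combined = or_combined(icm, te)
```

The practical point is that two blastocysts both graded "B" discretely can receive different continuous scores when their predicted probability mass leans toward "A" versus "C".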

MAIN RESULTS AND THE ROLE OF CHANCE: BlastScoringNet was developed using data from 2760 blastocysts with majority-voted grades for expansion degree, ICM, and TE. The model achieved mean area under the receiver operating characteristic curve values of 0.997 (SD 0.004) for expansion degree, 0.903 (SD 0.031) for ICM, and 0.943 (SD 0.040) for TE, based on predicted probabilities for each base grade. From these predicted probabilities, BlastScoringNet generated continuous ICM and TE scores, as well as expansion degree grades, for an additional 15 228 blastocysts with known live birth outcomes. Higher ICM and TE scores, along with their OR-combined scores, were significantly correlated with increased live birth rates (P < 0.0001). By fine-tuning, BlastScoringNet was applied to an external IVF institution, where higher OR-combined ICM and TE scores also significantly correlated with increased live birth rates (P = 0.00078), demonstrating consistent results across both institutions.

LIMITATIONS, REASONS FOR CAUTION: This study is limited by its retrospective nature. Further prospective randomized trials are required to confirm the clinical impact of BlastScoringNet in assisting embryologists in blastocyst selection.

WIDER IMPLICATIONS OF THE FINDINGS: BlastScoringNet provides an interpretable and quantitative method for evaluating blastocysts, aligned with the widely used Gardner grading system. Higher OR-combined ICM and TE scores, representing each blastocyst's overall potential for live birth, were significantly correlated with increased live birth rates. The model's demonstrated generalizability across two institutions further supports its clinical utility. These findings suggest that BlastScoringNet is a valuable tool for assisting embryologists in selecting blastocysts with the highest potential for live birth. The code and pre-trained models are publicly available to facilitate further research and widespread implementation.

STUDY FUNDING/COMPETING INTEREST(S): This work was supported by the Vector Institute and the Temerty Faculty of Medicine at the University of Toronto, Toronto, Ontario, Canada, via a Clinical AI Integration Grant, and the Natural Science Foundation of Hunan Province of China (2023JJ30714). The authors declare no competing interests.

TRIAL REGISTRATION NUMBER: N/A.

PMID:40258298 | DOI:10.1093/humrep/deaf066

Categories: Literature Watch

Use of deep learning model for paediatric elbow radiograph binomial classification: initial experience, performance and lessons learnt

Deep learning - Mon, 2025-04-21 06:00

Singapore Med J. 2025 Apr 1;66(4):208-214. doi: 10.4103/singaporemedj.SMJ-2022-078. Epub 2023 Nov 29.

ABSTRACT

INTRODUCTION: In this study, we aimed to compare the performance of a convolutional neural network (CNN)-based deep learning model that was trained on a dataset of normal and abnormal paediatric elbow radiographs with that of paediatric emergency department (ED) physicians on a binomial classification task.

METHODS: A total of 1,314 paediatric elbow lateral radiographs (patient mean age 8.2 years) were retrospectively retrieved and classified based on annotation as normal or abnormal (with pathology). They were then randomly partitioned into a development set (993 images); first and second tuning (validation) sets (109 and 100 images, respectively); and a test set (112 images). An artificial intelligence (AI) model was trained on the development set using the EfficientNet B1 network architecture. Its performance on the test set was compared to that of five physicians (inter-rater agreement: fair) using the McNemar test.
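The McNemar test used above compares two raters on the same cases, so only the discordant pairs matter. A minimal sketch of the exact (binomial) form; the discordant counts are hypothetical, not taken from this study:

```python
# Hedged sketch of an exact two-sided McNemar test for paired
# classifier comparison. b = cases the AI got right and the physicians
# got wrong; c = the reverse. The counts here are illustrative only.
from math import comb

def mcnemar_exact(b, c):
    # Under H0, the discordant count b follows Binomial(b + c, 0.5);
    # the two-sided p-value doubles the smaller tail.
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)

p_value = mcnemar_exact(b=12, c=4)
```

With small discordant counts the exact form is preferred over the chi-squared approximation, which is a common situation for modest test sets like the 112-image set here.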

RESULTS: The accuracy of the AI model on the test set was 80.4% (95% confidence interval [CI] 71.8%-87.3%), and the area under the receiver operating characteristic curve (AUROC) was 0.872 (95% CI 0.831-0.947). The performance of the AI model vs. the physician group on the test set was: sensitivity 79.0% (95% CI: 68.4%-89.5%) vs. 64.9% (95% CI: 52.5%-77.3%; P = 0.088); and specificity 81.8% (95% CI: 71.6%-92.0%) vs. 87.3% (95% CI: 78.5%-96.1%; P = 0.439).

CONCLUSION: The AI model showed a good AUROC and higher sensitivity than the physician group, with the sensitivity difference approaching statistical significance (P = 0.088).

PMID:40258236 | DOI:10.4103/singaporemedj.SMJ-2022-078

Categories: Literature Watch

NeuroPred-AIMP: Multimodal Deep Learning for Neuropeptide Prediction via Protein Language Modeling and Temporal Convolutional Networks

Deep learning - Mon, 2025-04-21 06:00

J Chem Inf Model. 2025 Apr 21. doi: 10.1021/acs.jcim.5c00444. Online ahead of print.

ABSTRACT

Neuropeptides are key signaling molecules that regulate fundamental physiological processes ranging from metabolism to cognitive function. However, their accurate identification remains a major challenge due to sequence heterogeneity, obscured functional motifs, and limited experimentally validated data. Accurate identification of neuropeptides is critical for advancing neurological disease therapeutics and peptide-based drug design. Existing neuropeptide identification methods rely on handcrafted features combined with traditional machine learning, which struggle to capture deep sequence patterns. To address these limitations, we propose NeuroPred-AIMP (adaptive integrated multimodal predictor), an interpretable model that synergizes the global semantic representations of a protein language model (ESM) with the multiscale structural features of a temporal convolutional network (TCN). The model introduces a residual-enhanced adaptive feature fusion mechanism that dynamically recalibrates feature contributions, achieving robust integration of evolutionary and local sequence information. Experimental results demonstrated excellent overall performance on the independent test set, with an accuracy of 92.3% and an AUROC of 0.974. The model was also well balanced in identifying positive and negative samples, with a sensitivity of 92.6% and a specificity of 92.1%, a difference of less than 0.5%. These results confirm the effectiveness of the multimodal feature strategy for neuropeptide recognition.
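The fusion idea described in the abstract can be sketched abstractly: gate weights recalibrate the contribution of the global (ESM-style) and local (TCN-style) feature vectors, and a residual connection preserves the unweighted signal. Everything below is an illustrative assumption; the paper's actual architecture, dimensions, and learned gates are not specified in this abstract:

```python
# Hedged sketch of residual-enhanced adaptive feature fusion.
# Gate logits stand in for learned values; vectors are toy 2-D examples.
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def adaptive_fuse(global_feat, local_feat, gate_logits=(0.5, -0.5)):
    a_g, a_l = softmax(list(gate_logits))  # adaptive contribution weights
    fused = [a_g * g + a_l * l for g, l in zip(global_feat, local_feat)]
    # residual enhancement: add the original features back to the
    # gated combination so neither modality can be fully suppressed
    return [f + g + l for f, g, l in zip(fused, global_feat, local_feat)]

out = adaptive_fuse([1.0, 0.0], [0.0, 1.0])
```

The design intuition is that the gate adapts the modality balance per input, while the residual path guarantees a gradient route through both feature streams during training.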

PMID:40258183 | DOI:10.1021/acs.jcim.5c00444

Categories: Literature Watch
