Deep learning

Artificial Intelligence (AI)-Based Computer-Assisted Detection and Diagnosis for Mammography: An Evidence-Based Review of Food and Drug Administration (FDA)-Cleared Tools for Screening Digital Breast Tomosynthesis (DBT)

Fri, 2025-04-04 06:00

AI Precis Oncol. 2024 Aug 19;1(4):195-206. doi: 10.1089/aipo.2024.0022. eCollection 2024 Aug.

ABSTRACT

In recent years, the emergence of new-generation deep learning-based artificial intelligence (AI) tools has reignited enthusiasm about the potential of computer-assisted detection (CADe) and diagnosis (CADx) for screening mammography. For screening mammography, digital breast tomosynthesis (DBT) combined with acquired digital 2D mammography or synthetic 2D mammography is widely used throughout the United States. As of this writing in July 2024, there are six Food and Drug Administration (FDA)-cleared AI-based CADe/x tools for DBT. These tools detect suspicious lesions on DBT and provide corresponding scores at the lesion and examination levels that reflect the likelihood of malignancy. In this article, we review the evidence supporting the use of AI-based CADe/x for DBT. The published literature on this topic consists of multireader, multicase studies, retrospective analyses, and two "real-world" evaluations. These studies suggest that AI-based CADe/x could lead to improvements in sensitivity without compromising specificity and to improvements in efficiency. However, the overall published evidence is limited and includes only two small postimplementation clinical studies. Prospective studies and careful postimplementation clinical evaluation will be necessary to fully understand the impact of AI-based CADe/x on screening DBT outcomes.

PMID:40182614 | PMC:PMC11963389 | DOI:10.1089/aipo.2024.0022

Facing the challenges of autoimmune pancreatitis diagnosis: The answer from artificial intelligence

Fri, 2025-04-04 06:00

World J Gastroenterol. 2025 Mar 28;31(12):102950. doi: 10.3748/wjg.v31.i12.102950.

ABSTRACT

Current diagnosis of autoimmune pancreatitis (AIP) is challenging and often requires combining multiple dimensions of evidence. There is a need to explore new methods for diagnosing AIP. Artificial intelligence (AI) is developing rapidly and is believed to have potential in the clinical diagnosis of AIP. This article aims to list the current diagnostic difficulties of AIP, describe existing AI applications, and suggest directions for future AI use in AIP diagnosis.

PMID:40182594 | PMC:PMC11962844 | DOI:10.3748/wjg.v31.i12.102950

Automated inflammatory bowel disease detection using wearable bowel sound event spotting

Fri, 2025-04-04 06:00

Front Digit Health. 2025 Mar 13;7:1514757. doi: 10.3389/fdgth.2025.1514757. eCollection 2025.

ABSTRACT

INTRODUCTION: Inflammatory bowel disorders may result in abnormal Bowel Sound (BS) characteristics during auscultation. We employ pattern spotting to detect rare BS events in continuous abdominal recordings using a smart T-shirt with embedded miniaturised microphones. Subsequently, we investigate the clinical relevance of BS spotting in a classification task to distinguish patients diagnosed with inflammatory bowel disease (IBD) from healthy controls.

METHODS: Abdominal recordings were obtained from 24 patients with IBD with varying disease activity and 21 healthy controls across different digestive phases. In total, approximately 281 h of audio data were inspected by expert raters, of which 136 h were manually annotated for BS events. A deep-learning-based audio pattern spotting algorithm was trained to retrieve BS events. Subsequently, features were extracted around detected BS events and a Gradient Boosting Classifier was trained to classify patients with IBD vs. healthy controls. We further explored classification window size, feature relevance, and the link between BS-based IBD classification performance and IBD activity.

RESULTS: Stratified group K-fold cross-validation experiments yielded a mean area under the receiver operating characteristic curve ≥0.83 regardless of whether BS were manually annotated or detected by the BS spotting algorithm.
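
As a concrete illustration of the classification stage, the sketch below trains a gradient-boosting classifier on per-event features under stratified group K-fold cross-validation, so that windows from one subject never appear in both training and test folds. The feature matrix, labels, and subject IDs are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedGroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(450, 32))          # placeholder features around BS events
y = rng.integers(0, 2, size=450)        # 1 = IBD, 0 = healthy control (toy)
groups = rng.integers(0, 45, size=450)  # subject ID per window

aucs = []
cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y, groups):
    clf = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print(f"mean ROC AUC: {np.mean(aucs):.3f}")  # ~0.5 on this random toy data
```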

DISCUSSION: Automated BS retrieval and our BS event classification approach have the potential to support diagnosis and treatment of patients with IBD.

PMID:40182584 | PMC:PMC11965935 | DOI:10.3389/fdgth.2025.1514757

An enhanced lightweight model for apple leaf disease detection in complex orchard environments

Fri, 2025-04-04 06:00

Front Plant Sci. 2025 Mar 13;16:1545875. doi: 10.3389/fpls.2025.1545875. eCollection 2025.

ABSTRACT

Automated detection of apple leaf diseases is crucial for predicting and preventing losses and for enhancing apple yields. However, in complex natural environments, factors such as light variations, shading from branches and leaves, and overlapping disease spots often result in reduced accuracy in detecting apple diseases. To address the challenges of detecting small-target diseases on apple leaves in complex backgrounds and the difficulty of mobile deployment, we propose an enhanced lightweight model, ELM-YOLOv8n. To mitigate the high consumption of computational resources in real-time deployment of existing models, we integrate the FasterNet Block into the C2f of the backbone network and neck network, effectively reducing the parameter count and the computational load of the model. In order to enhance the network's anti-interference ability in complex backgrounds and its capacity to differentiate between similar diseases, we incorporate an Efficient Multi-Scale Attention (EMA) module within the deep structure of the network for in-depth feature extraction. Additionally, we design a detail-enhanced shared convolutional scaling detection head (DESCS-DH) to enable the model to effectively capture edge information of diseases and address issues such as poor performance in object detection across different scales. Finally, we employ the NWD loss function to replace the CIoU loss function, allowing the model to locate and identify small targets more accurately and further enhance its robustness, thereby facilitating rapid and precise identification of apple leaf diseases. Experimental results demonstrate ELM-YOLOv8n's effectiveness, achieving an F1 score of 94.0% and an mAP50 of 96.7%, a significant improvement over YOLOv8n. Furthermore, the parameter count and computational load are reduced by 44.8% and 39.5%, respectively. The ELM-YOLOv8n model is better suited for deployment on mobile devices while maintaining high accuracy.
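
For readers unfamiliar with the NWD loss mentioned above, here is a minimal sketch of the Normalized Wasserstein Distance for small boxes as introduced by Wang et al. (2021), which we take to be the NWD the abstract refers to; the normalizing constant C and the (cx, cy, w, h) box format are assumptions of this illustration.

```python
import torch

def nwd_loss(boxes1: torch.Tensor, boxes2: torch.Tensor, C: float = 12.8) -> torch.Tensor:
    """1 - NWD between box sets given as (cx, cy, w, h), shape (N, 4)."""
    # Each box is modelled as a 2D Gaussian N((cx, cy), diag(w^2/4, h^2/4));
    # the squared 2-Wasserstein distance between two such Gaussians reduces to
    # the squared Euclidean distance between (cx, cy, w/2, h/2) vectors.
    p1 = torch.stack([boxes1[:, 0], boxes1[:, 1], boxes1[:, 2] / 2, boxes1[:, 3] / 2], dim=1)
    p2 = torch.stack([boxes2[:, 0], boxes2[:, 1], boxes2[:, 2] / 2, boxes2[:, 3] / 2], dim=1)
    w2_sq = ((p1 - p2) ** 2).sum(dim=1)
    nwd = torch.exp(-torch.sqrt(w2_sq) / C)  # normalize to (0, 1]; C is dataset-dependent
    return (1.0 - nwd).mean()

pred = torch.tensor([[10.0, 10.0, 4.0, 4.0]])  # toy predicted box
gt = torch.tensor([[11.0, 10.5, 4.0, 5.0]])    # toy ground-truth box
print(nwd_loss(pred, gt))
```

Unlike IoU-based losses, this distance stays smooth even when small boxes barely overlap, which is why it suits tiny disease spots.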

PMID:40182549 | PMC:PMC11965912 | DOI:10.3389/fpls.2025.1545875

CTDA: an accurate and efficient cherry tomato detection algorithm in complex environments

Fri, 2025-04-04 06:00

Front Plant Sci. 2025 Mar 13;16:1492110. doi: 10.3389/fpls.2025.1492110. eCollection 2025.

ABSTRACT

INTRODUCTION: In the natural harvesting environment of cherry tomatoes, robotic vision faces challenges such as lighting variation, overlap, and occlusion among various environmental factors. To ensure accuracy and efficiency in detecting cherry tomatoes in complex environments, this study proposes a precise, real-time, and robust target detection algorithm, the CTDA model, to support robotic harvesting operations in unstructured environments.

METHODS: The model, based on YOLOv8, introduces a lightweight downsampling method to restructure the backbone network, incorporating adaptive weights and receptive field spatial characteristics to ensure that low-dimensional small-target features are not completely lost. By replacing MaxPool with SoftPool in the SPPF module, a new SPPFS is constructed, achieving efficient feature utilization and richer multi-scale feature fusion. Additionally, by incorporating a dynamic head driven by the attention mechanism, the recognition precision of cherry tomatoes in complex scenarios is enhanced through more effective feature capture across different scales.
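
SoftPool (Stergiou et al., 2021) replaces the hard maximum with an exponentially weighted average over each pooling window. Below is a minimal PyTorch sketch of the operator using the standard average-pooling trick; it shows only the pooling substitution, not the authors' full SPPFS module.

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
    """SoftPool: per-window softmax-weighted sum, sum_i e^{x_i} x_i / sum_i e^{x_i}."""
    e = torch.exp(x)  # note: exp() can overflow for large activations; a production
                      # version would subtract a running max for numerical stability
    # avg_pool of e*x divided by avg_pool of e cancels the 1/k^2 factors,
    # leaving exactly the exponentially weighted window average.
    return F.avg_pool2d(x * e, kernel_size, stride) / F.avg_pool2d(e, kernel_size, stride)

x = torch.randn(1, 3, 8, 8)
print(soft_pool2d(x).shape)  # torch.Size([1, 3, 4, 4])
```

Compared with MaxPool, this keeps a graded contribution from every activation in the window, which helps preserve small-target detail.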

RESULTS: CTDA demonstrates good adaptability and robustness in complex scenarios. Its detection accuracy reaches 94.3%, with recall and average precision of 91.5% and 95.3%, respectively, while achieving a mAP@0.5:0.95 of 76.5% and a speed of 154.1 frames per second (FPS). Compared to YOLOv8, it improves mAP by 2.9% while maintaining detection speed, with a model size of 6.7M.

DISCUSSION: Experimental results validate the effectiveness of the CTDA model in cherry tomato detection under complex environments. While improving detection accuracy, the model also enhances adaptability to lighting variations, occlusion, and dense small target scenarios, and can be deployed on edge devices for rapid detection, providing strong support for automated cherry tomato picking.

PMID:40182545 | PMC:PMC11965914 | DOI:10.3389/fpls.2025.1492110

Breast cancer histopathology image classification using transformer with discrete wavelet transform

Thu, 2025-04-03 06:00

Med Eng Phys. 2025 Apr;138:104317. doi: 10.1016/j.medengphy.2025.104317. Epub 2025 Feb 26.

ABSTRACT

Early diagnosis of breast cancer using pathological images is essential to effective treatment. With the development of deep learning techniques, breast cancer histopathology image classification methods based on neural networks have developed rapidly. However, these methods usually capture features in the spatial domain and rarely consider frequency feature distributions, which limits classification performance to some extent. This paper proposes a novel breast cancer histopathology image classification network, called DWNAT-Net, which introduces the Discrete Wavelet Transform (DWT) to the Neighborhood Attention Transformer (NAT). DWT decomposes inputs into different frequency bands through iterative filtering and downsampling, and it can extract frequency information while retaining spatial information. NAT utilizes Neighborhood Attention (NA) to confine the attention computation to a local neighborhood around each token to enable efficient modeling of local dependencies. The proposed method was evaluated on the BreakHis and BACH datasets, yielding impressive image-level recognition accuracy rates: 99.66% on the BreakHis dataset and 91.25% on the BACH dataset, demonstrating competitive performance compared to state-of-the-art methods.
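
To make the DWT side of the design concrete, the sketch below performs one level of 2D wavelet decomposition with PyWavelets, producing the low-frequency (LL) and high-frequency (LH, HL, HH) bands that a frequency-aware branch could consume alongside spatial features. The Haar wavelet and patch size are assumptions, as the abstract does not specify them.

```python
import numpy as np
import pywt

patch = np.random.rand(224, 224)             # placeholder histopathology patch
LL, (LH, HL, HH) = pywt.dwt2(patch, "haar")  # one decomposition level; each band 112 x 112
bands = np.stack([LL, LH, HL, HH])           # 4-channel frequency-aware input
print(bands.shape)                           # (4, 112, 112)
```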

PMID:40180530 | DOI:10.1016/j.medengphy.2025.104317

Multi-scale feature fusion model for real-time blood glucose monitoring and hyperglycemia prediction based on wearable devices

Thu, 2025-04-03 06:00

Med Eng Phys. 2025 Apr;138:104312. doi: 10.1016/j.medengphy.2025.104312. Epub 2025 Mar 1.

ABSTRACT

Accurate monitoring of blood glucose levels and the prediction of hyperglycemia are critical for the management of diabetes and the enhancement of medical efficiency. The primary challenge lies in uncovering the correlations among physiological information, nutritional intake, and other features, and in addressing the issue of scale disparity among these features, in addition to considering the impact of individual differences on the model's accuracy. This paper introduces a universal, wearable device-assisted, multi-scale feature fusion model for real-time blood glucose monitoring and hyperglycemia prediction. It aims to more effectively capture the local correlations between diverse features and their inherent temporal relationships, overcoming the challenges of physiological data redundancy at large time scales and the incompleteness of nutritional intake data at smaller time scales. Furthermore, we have devised a personalized tuner strategy to enhance the model's accuracy and stability by continuously collecting personal data from users of the wearable devices to fine-tune the generic model, thereby accommodating individual differences and providing patients with more precise health management services. The model's performance, assessed using public datasets, indicates that the real-time monitoring error in terms of Mean Squared Error (MSE) is 0.22 mmol/L, with a prediction accuracy for hyperglycemia occurrences of 96.75%. The implementation of the personalized tuner strategy yielded an average improvement rate of 1.96% on individual user datasets. This study on blood glucose monitoring and hyperglycemia prediction, facilitated by wearable devices, assists users in better managing their blood sugar levels and holds significant clinical application prospects.
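
The "personalized tuner" idea can be pictured as fine-tuning a small head on a user's own data while freezing the shared backbone. The sketch below is one plausible reading under assumed architecture and signal shapes, not the paper's implementation.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv1d(4, 32, 5, padding=2), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten())
head = nn.Linear(32, 1)                  # predicts blood glucose (mmol/L)
model = nn.Sequential(backbone, head)

for p in backbone.parameters():          # freeze the generic, shared features
    p.requires_grad = False
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(16, 4, 60)               # 16 windows, 4 wearable signals, 60 steps (toy)
target = torch.randn(16, 1)              # toy per-window glucose targets
loss = nn.functional.mse_loss(model(x), target)
loss.backward()                           # gradients flow only into the head
opt.step()
```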

PMID:40180525 | DOI:10.1016/j.medengphy.2025.104312

Using Explainable Machine Learning to Predict the Irritation and Corrosivity of Chemicals on Eyes and Skin

Thu, 2025-04-03 06:00

Toxicol Lett. 2025 Apr 1:S0378-4274(25)00057-8. doi: 10.1016/j.toxlet.2025.03.008. Online ahead of print.

ABSTRACT

Contact with specific chemicals often results in corrosive and irritative responses in the eyes and skin, and assessing these responses plays a pivotal role in evaluating the potential hazards of personal care products, cosmetics, and industrial chemicals to human health. While traditional animal testing can provide valuable information, its high costs, ethical controversies, and significant demand for animals limit its extensive use, particularly during preliminary screening stages. To address these issues, we adopted a computational modeling approach, integrating 3,316 experimental data points on eye irritation and 3,080 data points on skin irritation, to develop various machine learning and deep learning models. Under the evaluation of the external validation set, the best-performing models for the two tasks achieved balanced accuracies (BAC) of 73.0% and 75.1%, respectively. Furthermore, interpretability analyses were conducted at the dataset level, molecular level, and atomic level to provide insights into the prediction outcomes. Analysis of substructure frequencies identified structural alert fragments within the datasets. This information serves as a reference for identifying potentially irritating chemicals. Additionally, a user-friendly visualization interface was developed, enabling non-specialists to easily predict eye and skin irritation potential. In summary, our study provides a new avenue for the assessment of irritancy potential in chemicals used in pesticides, cosmetics, and ophthalmic drugs.
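
As a hedged illustration of such a pipeline, the sketch below featurizes molecules with RDKit Morgan fingerprints (one common descriptor choice; the abstract does not state the paper's featurization) and reports the balanced accuracy (BAC) metric used above. Molecules and labels are toy examples.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN"]   # toy molecules
labels = [0, 1, 0, 1]                            # 1 = irritant (toy labels)
X = np.array([np.array(AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(s), 2, nBits=1024)) for s in smiles])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(balanced_accuracy_score(labels, clf.predict(X)))  # resubstitution score, toy only
```

BAC averages per-class recall, so it is the right headline metric when irritants and non-irritants are imbalanced.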

PMID:40180199 | DOI:10.1016/j.toxlet.2025.03.008

Multi-Class Brain Malignant Tumor Diagnosis in Magnetic Resonance Imaging Using Convolutional Neural Networks

Thu, 2025-04-03 06:00

Brain Res Bull. 2025 Apr 1:111329. doi: 10.1016/j.brainresbull.2025.111329. Online ahead of print.

ABSTRACT

To reduce the clinical misdiagnosis rate of glioblastoma (GBM), primary central nervous system lymphoma (PCNSL), and brain metastases (BM), which are common malignant brain tumors with similar radiological features, we propose a new CNN-based model, FoTNet. The model integrates a frequency-based channel attention layer and Focal Loss to address the class imbalance issue caused by the limited data available for PCNSL. A multi-center MRI dataset was constructed by collecting and integrating data from Zhejiang University School of Medicine's Sir Run Run Shaw Hospital, along with public datasets from UPENN and TCGA. The dataset includes T1-weighted contrast-enhanced (T1-CE) MRI images from 58 GBM, 82 PCNSL, and 269 BM cases, which were divided into training and testing sets in a 5:2 ratio. FoTNet achieved a classification accuracy of 92.5% and an average AUC of 0.9754 on the test set, significantly outperforming existing machine learning and deep learning methods in distinguishing between GBM, PCNSL, and BM. Through multiple validations, FoTNet has proven to be an effective and robust tool for accurately classifying these brain tumors, providing strong support for preoperative diagnosis and assisting clinicians in making more informed treatment decisions.
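
The Focal Loss ingredient is standard (Lin et al., 2017); a minimal multi-class sketch follows, with conventional default gamma and alpha rather than the paper's settings, and a scalar alpha as a simplification.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, target: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    ce = F.cross_entropy(logits, target, reduction="none")  # per-sample -log p_t
    p_t = torch.exp(-ce)                                    # model prob of true class
    # (1 - p_t)^gamma down-weights easy examples, focusing training on the
    # rare, hard class (here, the under-represented PCNSL cases).
    return (alpha * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 3)              # 3 classes: GBM, PCNSL, BM (toy batch)
target = torch.randint(0, 3, (8,))
print(focal_loss(logits, target))
```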

PMID:40180191 | DOI:10.1016/j.brainresbull.2025.111329

An enhanced CNN-Bi-Transformer based framework for detection of neurological illnesses through neurocardiac data fusion

Thu, 2025-04-03 06:00

Sci Rep. 2025 Apr 3;15(1):11379. doi: 10.1038/s41598-025-96052-0.

ABSTRACT

Classical approaches to diagnosis frequently rely on self-reported symptoms or clinician observations, which can make it difficult to examine mental health conditions due to their subjective and complicated nature. In this work, we offer an innovative methodology for predicting mental illnesses such as epilepsy, sleep disorders, bipolar disorder, eating disorders, and depression using a multimodal deep learning framework that integrates neurocardiac data fusion. The proposed framework combines MEG, EEG, and ECG signals to create a more comprehensive understanding of brain and cardiac function in individuals with mental disorders. The multimodal deep learning approach uses an integrated CNN-Bi-Transformer, i.e., CardioNeuroFusionNet, which can process multiple types of inputs simultaneously, allowing for the fusion of various modalities and improving the performance of the predictive representation. The proposed framework has undergone testing on data from the Deep BCI Scalp Database and was further validated on the Kymata Atlas dataset to assess its generalizability. The model achieved promising results with high accuracy (98.54%) and sensitivity (97.77%) in predicting these disorders, including both neurological and psychiatric conditions. The neurocardiac data fusion has been found to provide additional insights into the relationship between brain and cardiac function in neurological conditions, which could potentially lead to more accurate diagnosis and personalized treatment options. The suggested method overcomes the shortcomings of earlier studies, which tended to concentrate on single-modality data, lacked thorough neurocardiac data fusion, and made use of less advanced machine learning algorithms. The comprehensive experimental findings, which show an average improvement in accuracy of 2.72%, demonstrate that the suggested work performs better than other cutting-edge AI techniques and generalizes effectively across diverse datasets.
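
A hedged sketch of the fusion idea only: one lightweight encoder per modality, concatenated embeddings, and a shared classification head. Channel counts, window lengths, and the class count are placeholders, not CardioNeuroFusionNet itself.

```python
import torch
import torch.nn as nn

def branch(in_ch: int) -> nn.Module:
    """One per-modality encoder producing a fixed-size embedding."""
    return nn.Sequential(nn.Conv1d(in_ch, 16, 7, padding=3), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten())

eeg_enc, meg_enc, ecg_enc = branch(32), branch(64), branch(1)
classifier = nn.Linear(16 * 3, 5)          # 5 example disorder classes

eeg = torch.randn(8, 32, 256)              # batch of 8 windows per modality (toy)
meg = torch.randn(8, 64, 256)
ecg = torch.randn(8, 1, 256)
fused = torch.cat([eeg_enc(eeg), meg_enc(meg), ecg_enc(ecg)], dim=1)  # late fusion
print(classifier(fused).shape)             # torch.Size([8, 5])
```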

PMID:40181122 | DOI:10.1038/s41598-025-96052-0

Efficient fault diagnosis in rolling bearings using a lightweight hybrid model

Thu, 2025-04-03 06:00

Sci Rep. 2025 Apr 3;15(1):11514. doi: 10.1038/s41598-025-96285-z.

ABSTRACT

To address the issue of low efficiency in feature extraction and model training when traditional deep learning methods handle long time-series data, this paper proposes a Time-Series Lightweight Transformer (TSL-Transformer) model. According to the data characteristics of bearing fault diagnosis tasks, the model makes lightweight improvements to the traditional Transformer model, focusing on the encoder module (the core feature extraction module) and introducing a multi-head attention mechanism and a feedforward neural network to efficiently extract complex features of vibration signals. Considering the rich temporal features present in vibration signals, a Long Short-Term Memory (LSTM) module is introduced in parallel to the encoder module of the improved lightweight Transformer model. This enhancement further strengthens the model's ability to capture temporal features, thereby improving diagnostic accuracy. Experimental results demonstrate that the proposed TSL-Transformer model achieves a fault diagnosis accuracy of 99.2% on the CWRU dataset. Through dimensionality reduction and visualization analysis using the t-SNE method, the effectiveness of different network structures within the proposed TSL-Transformer model is elucidated.
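
The parallel arrangement described above can be sketched as a Transformer encoder branch and an LSTM branch reading the same embedded vibration windows, with their outputs concatenated for classification; all sizes here are illustrative assumptions, not the TSL-Transformer's configuration.

```python
import torch
import torch.nn as nn

d_model, n_classes = 64, 10
enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                       dim_feedforward=128, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)   # lightweight encoder branch
lstm = nn.LSTM(d_model, d_model, batch_first=True)         # parallel temporal branch
head = nn.Linear(2 * d_model, n_classes)

x = torch.randn(8, 100, d_model)          # 8 windows, 100 time steps, pre-embedded
enc_feat = encoder(x).mean(dim=1)         # pooled Transformer features
_, (h, _) = lstm(x)                       # last LSTM hidden state
logits = head(torch.cat([enc_feat, h[-1]], dim=1))
print(logits.shape)                       # torch.Size([8, 10])
```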

PMID:40181056 | DOI:10.1038/s41598-025-96285-z

Difficulty aware programming knowledge tracing via large language models

Thu, 2025-04-03 06:00

Sci Rep. 2025 Apr 3;15(1):11475. doi: 10.1038/s41598-025-96540-3.

ABSTRACT

Knowledge Tracing (KT) assesses students' mastery of specific knowledge concepts and predicts their problem-solving abilities by analyzing their interactions with intelligent tutoring systems. Although recent years have seen significant improvements in tracking accuracy with the introduction of deep learning and graph neural network techniques, existing research has not sufficiently focused on the impact of difficulty on knowledge state. The text understanding difficulty and knowledge concept difficulty of programming problems are crucial for students' responses; thus, accurately assessing these two types of difficulty and applying them to knowledge state prediction is a key challenge. To address this challenge, we propose Difficulty-aware Programming Knowledge Tracing via Large Language Models (DPKT) to extract the text understanding difficulty and knowledge concept difficulty of programming problems. Specifically, we analyze the relationship between knowledge concept difficulty and text understanding difficulty using an attention mechanism, allowing for dynamic updates to students' knowledge states. This model combines an update gate mechanism with a graph attention network, significantly improving the assessment accuracy of programming problem difficulty and the spatiotemporal reflection capability of the knowledge state. Experimental results demonstrate that this model performs excellently across various language datasets, validating its application value in programming education. This model provides an innovative solution for programming knowledge tracing and offers educators a powerful tool to promote personalized learning.
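
The attention step invoked above is presumably of the standard scaled dot-product kind; a minimal sketch with toy difficulty embeddings follows, since the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Standard scaled dot-product attention."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v

text_difficulty = torch.randn(1, 12, 32)     # per-problem text-difficulty features (toy)
concept_difficulty = torch.randn(1, 12, 32)  # per-problem concept-difficulty features (toy)
updated = attention(text_difficulty, concept_difficulty, concept_difficulty)
print(updated.shape)                         # torch.Size([1, 12, 32])
```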

PMID:40181055 | DOI:10.1038/s41598-025-96540-3

An interpretable deep learning model for the accurate prediction of mean fragmentation size in blasting operations

Thu, 2025-04-03 06:00

Sci Rep. 2025 Apr 3;15(1):11515. doi: 10.1038/s41598-025-96005-7.

ABSTRACT

Fragmentation size is an important indicator for evaluating blasting effectiveness. To address the limitations of conventional blasting fragmentation size prediction methods in terms of prediction accuracy and applicability, this study proposes an NRBO-CNN-LSSVM model for predicting mean fragmentation size, which integrates Convolutional Neural Networks (CNN), Least Squares Support Vector Machines (LSSVM), and the Newton-Raphson Optimizer (NRBO). The study is based on a database containing 105 samples derived from both previous research and field collection. Additionally, several machine learning prediction models, including CNN-LSSVM, CNN, LSSVM, Support Vector Machine (SVM), and Support Vector Regression (SVR), are developed for comparative analysis. The results showed that the NRBO-CNN-LSSVM model achieved remarkable prediction accuracy on the training dataset, with a coefficient of determination (R2) as high as 0.9717 and a root mean square error (RMSE) as low as 0.0285. On the test set, the model maintained high prediction accuracy, with an R2 value of 0.9105 and an RMSE of 0.0403. SHapley Additive exPlanations (SHAP) analysis revealed that the modulus of elasticity (E) was a key variable influencing the prediction of mean fragmentation size. Partial Dependence Plots (PDP) analysis further disclosed a significant positive correlation between the modulus of elasticity (E) and mean fragmentation size. In contrast, a distinct negative correlation was observed between the powder factor (Pf) and mean fragmentation size. To enhance the convenience of the model in practical applications, we developed an interactive Graphical User Interface (GUI), allowing users to input relevant variables and obtain instant prediction results.
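
The interpretability step can be reproduced in spirit on any fitted regressor. The sketch below computes SHAP global importances and a partial-dependence plot on a generic gradient-boosting stand-in; the synthetic features play the role of inputs such as elastic modulus (E) and powder factor (Pf), and nothing here is the paper's NRBO-CNN-LSSVM.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 5))                   # 105 samples, as in the paper's database
y = 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.1, size=105)  # synthetic target
model = GradientBoostingRegressor().fit(X, y)

explainer = shap.Explainer(model, X)            # dispatches to TreeExplainer here
shap_values = explainer(X)
print(np.abs(shap_values.values).mean(axis=0))  # mean |SHAP| = global feature importance

PartialDependenceDisplay.from_estimator(model, X, features=[0])  # PDP for feature 0 ("E")
```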

PMID:40181054 | DOI:10.1038/s41598-025-96005-7

Linking sequence restoration capability of shuffled coronary angiography to coronary artery disease diagnosis

Thu, 2025-04-03 06:00

Sci Rep. 2025 Apr 3;15(1):11413. doi: 10.1038/s41598-025-95640-4.

ABSTRACT

The potential of the sequence in Coronary Angiography (CA) frames for diagnosing coronary artery disease (CAD) has been largely overlooked. Our study aims to reveal the "Sequence Value" embedded within these frames and to explore methods for its application in diagnostics. We conduct a survey via Amazon MTurk (Mechanical Turk) to evaluate the effectiveness of Sequence Restoration Capability in indicating CAD. Furthermore, we develop a self-supervised deep learning model to automatically assess this capability. Additionally, we ensure the robustness of our results by varying the selection of coronary angiographies/modules for statistical analysis. Our self-supervised deep learning model achieves an average AUC of 80.1% across five-fold validation, demonstrating robustness against static data noise and efficiency, with calculations completed within 30 s. This study uncovers significant insights into CAD diagnosis through the sequence value in coronary angiography. We successfully illustrate methodologies for harnessing this potential, contributing valuable knowledge to the field.
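
One way to picture the self-supervised setup, as we read the abstract: shuffle the frame order of an angiography clip and train a model to recover the permutation, so that restoration quality becomes the diagnostic signal. The data-preparation sketch below uses synthetic frames; the prediction head and training loop are omitted.

```python
import numpy as np

def make_example(clip: np.ndarray, rng: np.random.Generator):
    """Return (shuffled clip, permutation target) for one training example."""
    perm = rng.permutation(len(clip))
    return clip[perm], perm            # a model learns to predict `perm` and undo it

rng = np.random.default_rng(0)
clip = np.random.rand(8, 64, 64)       # 8 angiography frames (toy)
shuffled, target = make_example(clip, rng)
print(target)                          # e.g. [7 4 0 ...]: original index of each frame
```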

PMID:40181050 | DOI:10.1038/s41598-025-95640-4

Genetically regulated eRNA expression predicts chromatin contact frequency and reveals genetic mechanisms at GWAS loci

Thu, 2025-04-03 06:00

Nat Commun. 2025 Apr 3;16(1):3193. doi: 10.1038/s41467-025-58023-x.

ABSTRACT

The biological functions of extragenic enhancer RNAs and their impact on disease risk remain relatively underexplored. In this work, we develop in silico models of genetically regulated expression of enhancer RNAs across 49 cell and tissue types, characterizing their degree of genetic control. Leveraging the estimated genetically regulated expression for enhancer RNAs and canonical genes in a large-scale DNA biobank (N > 70,000) and high-resolution Hi-C contact data, we train a deep learning-based model of pairwise three-dimensional chromatin contact frequency for enhancer-enhancer and enhancer-gene pairs in cerebellum and whole blood. Notably, the use of genetically regulated expression of enhancer RNAs provides substantial tissue-specific predictive power, supporting a role for these transcripts in modulating spatial chromatin organization. We identify schizophrenia-associated enhancer RNAs independent of GWAS loci using enhancer RNA-based TWAS and determine the causal effects of these enhancer RNAs using Mendelian randomization. Using enhancer RNA-based TWAS, we generate a comprehensive resource of tissue-specific enhancer associations with complex traits in the UK Biobank. Finally, we show that a substantially greater proportion (63%) of GWAS associations colocalize with causal regulatory variation when enhancer RNAs are included.

PMID:40180945 | DOI:10.1038/s41467-025-58023-x

CodonTransformer: a multispecies codon optimizer using context-aware neural networks

Thu, 2025-04-03 06:00

Nat Commun. 2025 Apr 3;16(1):3205. doi: 10.1038/s41467-025-58588-7.

ABSTRACT

Degeneracy in the genetic code allows many possible DNA sequences to encode the same protein. Optimizing codon usage within a sequence to meet organism-specific preferences faces combinatorial explosion. Nevertheless, natural sequences optimized through evolution provide a rich source of data for machine learning algorithms to explore the underlying rules. Here, we introduce CodonTransformer, a multispecies deep learning model trained on over 1 million DNA-protein pairs from 164 organisms spanning all domains of life. The model demonstrates context-awareness thanks to its Transformer architecture and to our sequence representation strategy that combines organism, amino acid, and codon encodings. CodonTransformer generates host-specific DNA sequences with natural-like codon distribution profiles and with minimal negative cis-regulatory elements. This work introduces the strategy of Shared Token Representation and Encoding with Aligned Multi-masking (STREAM) and provides a codon optimization framework with a customizable open-access model and a user-friendly Google Colab interface.
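
Purely as a speculative illustration of the combined-encoding idea: each sequence position could carry organism, amino acid, and codon information in a single token. The string format below is hypothetical; the abstract does not disclose CodonTransformer's actual token scheme.

```python
def tokens(organism: str, protein: str, dna: str) -> list[str]:
    """Build one combined organism|amino-acid|codon token per position (hypothetical format)."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return [f"{organism}|{aa}|{codon}" for aa, codon in zip(protein, codons)]

print(tokens("E.coli", "MK", "ATGAAA"))
# ['E.coli|M|ATG', 'E.coli|K|AAA']  (ATG encodes Met, AAA encodes Lys)
```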

PMID:40180930 | DOI:10.1038/s41467-025-58588-7

Efficacy of a deep learning-based software for chest X-ray analysis in an emergency department

Thu, 2025-04-03 06:00

Diagn Interv Imaging. 2025 Apr 3:S2211-5684(25)00067-1. doi: 10.1016/j.diii.2025.03.007. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of this study was to evaluate the efficacy of a deep learning (DL)-based computer-aided detection (CAD) system for the detection of abnormalities on chest X-rays performed in an emergency department setting, where readers have access to relevant clinical information.

MATERIALS AND METHODS: Four hundred and four consecutive chest X-rays performed over a two-month period in patients presenting to an emergency department with respiratory symptoms were retrospectively collected. Five readers (two radiologists, three emergency physicians) with access to clinical information were asked to identify five abnormalities (i.e., consolidation, lung nodule, pleural effusion, pneumothorax, mediastinal/hilar mass) in the dataset without assistance, and then after a 2-week period, with the assistance of a DL-based CAD system. The reference standard was a chest X-ray consensus review by two experienced radiologists. Reader performances were compared between the reading sessions, and interobserver agreement was assessed using Fleiss' kappa test.
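
For reference, the agreement statistic named above can be computed with statsmodels as sketched below; the five readers' binary calls are synthetic stand-ins for the study's ratings.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(404, 5))  # 404 X-rays x 5 readers, 0/1 calls (toy)
table, _ = aggregate_raters(ratings)         # per-subject counts for each category
print(fleiss_kappa(table))                   # ~0 for random toy ratings
```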

RESULTS: The dataset included 118 occurrences of the five abnormalities in 103 chest X-rays. The CAD system improved sensitivity for consolidation, pleural effusion, and nodule, with absolute differences of 8.3% (95% CI: 3.8-12.7; P < 0.001), 7.9% (95% CI: 1.7-14.1; P = 0.012), and 29.5% (95% CI: 19.8-38.2; P < 0.001), respectively. Specificity was greater than 89% for all abnormalities and showed a minimal but significant decrease with DL for nodules and mediastinal/hilar masses (-1.8% [95% CI: -2.7 to -0.9]; P < 0.001 and -0.8% [95% CI: -1.5 to -0.2]; P = 0.005). Inter-observer agreement improved with DL, with kappa values ranging from 0.40 [95% CI: 0.37-0.43] for mediastinal/hilar mass to 0.84 [95% CI: 0.81-0.87] for pneumothorax.

CONCLUSION: Our results suggest that DL-assisted reading increases the sensitivity for detecting important chest X-ray abnormalities in the emergency department, even when clinical information is available to the radiologist.

PMID:40180796 | DOI:10.1016/j.diii.2025.03.007

GCN-BBB: Deep Learning Blood-Brain Barrier (BBB) Permeability PharmacoAnalytics with Graph Convolutional Neural (GCN) Network

Thu, 2025-04-03 06:00

AAPS J. 2025 Apr 3;27(3):73. doi: 10.1208/s12248-025-01059-0.

ABSTRACT

The Blood-Brain Barrier (BBB) is a selective barrier between the Central Nervous System (CNS) and the peripheral system, regulating the distribution of molecules. BBB permeability has been crucial in CNS-targeting drug development, such as glioblastoma-related drug discovery. Many other CNS diseases also present significant challenges, for instance neurological disorders such as Alzheimer's Disease (AD) and drug abuse. Conversely, cannabinoid drugs that do not cross the BBB are needed to avoid off-target CNS psychotropic effects. In vitro and in vivo experiments measuring BBB permeability are costly and low throughput. Computational pharmacoanalytics modeling, particularly using deep-learning Graph Neural Networks (GNNs), offers a promising alternative. GNNs excel at capturing intricate relationships in graph-based information, such as small molecular structures. In this study, we developed GNN models for BBB permeability using the graph representation of drugs. The GNNs were compared with other algorithms using molecular fingerprints or physicochemical descriptors. With a dataset of 1924 molecules, the best GNN model, a convolutional graph neural network using a normalized Laplacian matrix (GCN_2), achieved a precision of 0.94, recall of 0.96, F1 score of 0.95, and MCC score of 0.77. This outperformed other machine learning algorithms with molecular fingerprints. The findings indicate that the graph representation of small molecules combined with a GNN architecture is powerful in predicting BBB permeability with high accuracy and recall. The developed GNN model can be utilized in the initial screening stage for new drug development.
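
The propagation rule underlying such a GCN layer (Kipf and Welling) is H' = D^{-1/2}(A + I)D^{-1/2} H W over the self-loop-augmented adjacency, i.e., the symmetric normalization associated with the normalized Laplacian. A minimal NumPy sketch on a toy three-atom graph follows; it is generic, not the paper's GCN_2.

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN propagation step with ReLU: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(len(A))                   # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)              # symmetric (Laplacian-style) normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-atom chain: bonds as edges
H = np.random.rand(3, 4)                          # initial per-atom features
W = np.random.rand(4, 8)                          # learnable weights
print(gcn_layer(A, H, W).shape)                   # (3, 8)
```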

PMID:40180695 | DOI:10.1208/s12248-025-01059-0

Advancing Visual Perception Through VCANet-Crossover Osprey Algorithm: Integrating Visual Technologies

Thu, 2025-04-03 06:00

J Imaging Inform Med. 2025 Apr 3. doi: 10.1007/s10278-025-01467-w. Online ahead of print.

ABSTRACT

Diabetic retinopathy (DR) is a significant vision-threatening condition, necessitating accurate and efficient automated screening methods. Traditional deep learning (DL) models struggle to detect subtle lesions and also suffer from high computational complexity. Existing models primarily mimic the primary visual cortex (V1) of the human visual system, neglecting other higher-order processing regions. To overcome these limitations, this research introduces the vision core-adapted network-based crossover osprey algorithm (VCANet-COP) for subtle lesion recognition with better computational efficiency. The model integrates sparse autoencoders (SAEs) to extract vascular structures and lesion-specific features at a pixel level for improved abnormality detection. The front-end network in the VCANet emulates the V1, V2, V4, and inferotemporal (IT) regions to capture subtle lesions effectively and improve lesion detection accuracy. Additionally, the COP algorithm leveraging the osprey optimization algorithm (OOA) with a crossover strategy optimizes hyperparameters and network configurations to ensure better computational efficiency, faster convergence, and enhanced performance in lesion recognition. The experimental assessment of the VCANet-COP model on multiple DR datasets, namely Diabetic_Retinopathy_Data (DR-Data), Structured Analysis of the Retina (STARE) dataset, Indian Diabetic Retinopathy Image Dataset (IDRiD), Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and Retinal fundus multi-disease image dataset (RFMID) demonstrates superior performance over baseline works, namely EDLDR, FFU_Net, LSTM_MFORG, fundus-DeepNet, and CNN_SVD by achieving average outcomes of 98.14% accuracy, 97.9% sensitivity, 98.08% specificity, 98.4% precision, 98.1% F1-score, 96.2% kappa coefficient, 2.0% false positive rate (FPR), 2.1% false negative rate (FNR), and 1.5-s execution time. By addressing critical limitations, VCANet-COP provides a scalable and robust solution for real-world DR screening and clinical decision support.
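
The sparse autoencoder ingredient is standard: an L1 penalty on the code drives most units toward zero so that only salient vessel- and lesion-like structure survives reconstruction. A generic PyTorch sketch follows, with illustrative sizes and penalty weight, not VCANet's configuration.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU())
decoder = nn.Linear(256, 64 * 64)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

x = torch.rand(32, 64 * 64)                 # flattened fundus patches (toy)
code = encoder(x)
recon = decoder(code)
# Reconstruction term keeps information; the L1 term enforces sparsity in the code.
loss = nn.functional.mse_loss(recon, x) + 1e-3 * code.abs().mean()
loss.backward()
opt.step()
```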

PMID:40180632 | DOI:10.1007/s10278-025-01467-w

Opportunities and Barriers to Artificial Intelligence Adoption in Palliative/Hospice Care for Underrepresented Groups: A Technology Acceptance Model-Based Review

Thu, 2025-04-03 06:00

J Hosp Palliat Nurs. 2025 Apr 2. doi: 10.1097/NJH.0000000000001120. Online ahead of print.

ABSTRACT

Underrepresented groups (URGs) in the United States, including African Americans, Latino/Hispanic Americans, Asian Pacific Islanders, and Native Americans, face significant barriers to accessing hospice and palliative care. Factors such as language barriers, cultural perceptions, and mistrust in healthcare systems contribute to the underutilization of these services. Recent advancements in artificial intelligence (AI) offer potential solutions to these challenges by enhancing cultural sensitivity, improving communication, and personalizing care. This article aims to synthesize the literature on AI in palliative/hospice care for URGs through the Technology Acceptance Model (TAM), highlighting current research and application in practice. The scoping review methodology, based on the framework developed by Arksey and O'Malley, was applied to rapidly map the field of AI in palliative and hospice care. A systematic search was conducted in 9 databases to identify studies examining AI applications in hospice and palliative care for URGs. Articles were independently assessed by 2 reviewers and then synthesized via narrative review through the lens of the TAM framework, which focuses on technology acceptance factors such as perceived ease of use and usefulness. Seventeen studies were identified. Findings suggest that AI has the potential to improve decision-making, enhance timely palliative care referrals, and bridge language and cultural gaps. Artificial intelligence tools were found to improve predictive accuracy, support serious illness communication, and assist in addressing language barriers, thus promoting equitable care for URGs. However, barriers such as limited generalizability, biases in data, and challenges in infrastructure were noted, hindering the full adoption of AI in hospice settings. Artificial intelligence has transformative potential to improve hospice care for URGs by enhancing cultural sensitivity, improving communication, and enabling more timely interventions. However, to fully realize its potential, AI solutions must address data biases, infrastructure limitations, and cultural nuances. Future research should prioritize developing culturally competent AI tools that are transparent, explainable, and scalable to ensure equitable access to hospice and palliative care services for all populations.

PMID:40179379 | DOI:10.1097/NJH.0000000000001120
