Deep learning

Examining the development, effectiveness, and limitations of computer-aided diagnosis systems for retained surgical items detection: a systematic review

Thu, 2025-04-10 06:00

Ergonomics. 2025 Apr 10:1-16. doi: 10.1080/00140139.2025.2487558. Online ahead of print.

ABSTRACT

Retained surgical items (RSIs) can lead to severe complications and infections, with morbidity rates of up to 84.32%. Computer-aided detection (CAD) systems offer a promising means of enhancing RSI detection. This systematic review aims to summarise the characteristics of CAD systems developed for RSI detection, evaluate their development, effectiveness, and limitations, and propose opportunities for enhancement. The review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. Studies that developed and evaluated CAD systems for identifying RSIs were eligible for inclusion. Five electronic databases were searched from inception to March 2023, and eleven studies were found eligible. The sensitivity of the CAD systems ranged from 0.61 to 1, and specificity varied between 0.73 and 1. Most studies used synthesised RSI radiographs to develop their CAD systems, which raises generalisability concerns. Moreover, the deep learning-based CAD systems did not incorporate explainable artificial intelligence techniques to ensure decision transparency.

PMID:40208001 | DOI:10.1080/00140139.2025.2487558

Categories: Literature Watch

The Potential Diagnostic Application of Artificial Intelligence in Breast Cancer

Thu, 2025-04-10 06:00

Curr Pharm Des. 2025 Apr 8. doi: 10.2174/0113816128369168250311172823. Online ahead of print.

ABSTRACT

Breast cancer poses a significant global health challenge, necessitating improved diagnostic and treatment strategies. This review explores the role of artificial intelligence (AI) in enhancing breast cancer pathology, emphasizing risk assessment, early detection, and analysis of histopathological and mammographic data. AI platforms show promise in predicting breast cancer risk and identifying tumors up to three years before clinical diagnosis. Deep learning techniques, particularly convolutional neural networks (CNNs), effectively classify cancer subtypes and grade tumor risk, achieving accuracy comparable to that of expert radiologists. Despite these advancements, challenges persist, such as the need for high-quality datasets and integration into clinical workflows. Continued research on AI technologies is essential for advancing breast cancer detection and improving patient outcomes.

PMID:40207818 | DOI:10.2174/0113816128369168250311172823

Categories: Literature Watch

The Future of Medicine: AI and ML Driven Drug Discovery Advancements

Thu, 2025-04-10 06:00

Curr Top Med Chem. 2025 Apr 8. doi: 10.2174/0115680266346722250401191232. Online ahead of print.

ABSTRACT

The field of drug design has evolved from conventional approaches relying on empirical evidence to advanced approaches such as Computer-Aided Drug Design (CADD). CADD aids in intricate phases of drug discovery, such as target discovery, lead optimization, and clinical trials, establishing a safe, rapid, and cost-effective pipeline. Structure-based drug design (SBDD), ligand-based drug design (LBDD), and pharmacophore modelling, the most widely used CADD techniques, play a major role in establishing the road map for discovery. Artificial intelligence (AI) and machine learning (ML) have advanced the field through the incorporation of big data, thereby enhancing the efficacy and accuracy of CADD. Deep learning (DL), a branch of AI, helps process complex, non-linear data, decreasing complexity, increasing resource utilization, and enhancing drug-target interaction prediction. These approaches have revolutionized healthcare by enhancing diagnostic precision and predicting drug behavior. AI/ML approaches have become crucial for rapidly discovering novel insights and transforming healthcare areas like diagnostics, clinical research, and critical care. In drug development, techniques such as PBPK modeling and advanced nano-QSAR improve understanding of drug behavior and predict nanomaterial toxicity, leading to safe and effective therapeutic predictions and interventions. Continued advancement of AI/ML techniques will bring greater accuracy, efficacy, and more patient-tailored responses to the drug development field.

PMID:40207759 | DOI:10.2174/0115680266346722250401191232

Categories: Literature Watch

Validity and accuracy of artificial intelligence-based dietary intake assessment methods: a systematic review

Thu, 2025-04-10 06:00

Br J Nutr. 2025 Apr 10:1-13. doi: 10.1017/S0007114525000522. Online ahead of print.

ABSTRACT

One of the most significant challenges in research related to nutritional epidemiology is the achievement of high accuracy and validity of dietary data to establish an adequate link between dietary exposure and health outcomes. Recently, the emergence of artificial intelligence (AI) in various fields has filled this gap with advanced statistical models and techniques for nutrient and food analysis. We aimed to systematically review available evidence regarding the validity and accuracy of AI-based dietary intake assessment methods (AI-DIA). In accordance with PRISMA guidelines, an exhaustive search of the EMBASE, PubMed, Scopus and Web of Science databases was conducted to identify relevant publications from their inception to 1 December 2024. Thirteen studies that met the inclusion criteria were included in this analysis. Of the studies identified, 61·5 % were conducted in preclinical settings. Likewise, 46·2 % used AI techniques based on deep learning and 15·3 % on machine learning. Correlation coefficients of over 0·7 were reported in six articles concerning the estimation of calories between the AI and traditional assessment methods. Similarly, six studies obtained a correlation above 0·7 for macronutrients. In the case of micronutrients, four studies achieved the correlation mentioned above. A moderate risk of bias was observed in 61·5 % (n 8) of the articles analysed, with confounding bias being the most frequently observed. AI-DIA methods are promising, reliable and valid alternatives for nutrient and food estimations. However, more research comparing different populations is needed, as well as larger sample sizes, to ensure the validity of the experimental designs.
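The review's headline validity criterion is a correlation above 0·7 between AI-estimated and traditionally assessed intakes. A minimal sketch of that check in pure Python; the calorie figures below are invented purely for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical calorie estimates: AI-based method vs. traditional 24-h recall.
ai_kcal     = [1850, 2100, 1620, 2400, 1980, 2250]
recall_kcal = [1900, 2050, 1700, 2350, 2020, 2150]

r = pearson_r(ai_kcal, recall_kcal)
print(f"r = {r:.3f}")  # above the review's 0.7 acceptability threshold for these data
```

In a real validation study the two series would be per-participant intakes from the AI tool and a reference method such as a weighed food record.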

PMID:40207441 | DOI:10.1017/S0007114525000522

Categories: Literature Watch

NeuroFusionNet: cross-modal modeling from brain activity to visual understanding

Thu, 2025-04-10 06:00

Front Comput Neurosci. 2025 Mar 26;19:1545971. doi: 10.3389/fncom.2025.1545971. eCollection 2025.

ABSTRACT

In recent years, the integration of machine vision and neuroscience has provided a new perspective for deeply understanding visual information. This paper proposes an innovative deep learning model, NeuroFusionNet, designed to enhance the understanding of visual information by integrating fMRI signals with image features. Specifically, images are processed by a visual model to extract region-of-interest (ROI) features and contextual information, which are then encoded through fully connected layers. The fMRI signals are passed through 1D convolutional layers to extract features, effectively preserving spatial information and improving computational efficiency. Subsequently, the fMRI features are embedded into a 3D voxel representation to capture the brain's activity patterns in both spatial and temporal dimensions. To accurately model the brain's response to visual stimuli, this paper introduces a Multi-scale fMRI Timeformer module, which processes fMRI signals at different scales to extract both fine details and global responses. To further optimize the model's performance, we introduce a novel loss function called the fMRI-guided loss. Experimental results show that NeuroFusionNet effectively integrates image and brain activity information, providing more precise and richer visual representations for machine vision systems, with broad potential applications.

PMID:40207297 | PMC:PMC11978827 | DOI:10.3389/fncom.2025.1545971

Categories: Literature Watch

Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise

Thu, 2025-04-10 06:00

Med Image Comput Comput Assist Interv. 2024 Oct;15011:37-47. doi: 10.1007/978-3-031-72120-5_4. Epub 2024 Oct 3.

ABSTRACT

The robustness of supervised deep learning-based medical image classification is significantly undermined by label noise in the training data. Although several methods have been proposed to enhance classification performance in the presence of noisy labels, they face some challenges: 1) a struggle with class-imbalanced datasets, leading to the frequent overlooking of minority classes as noisy samples; 2) a singular focus on maximizing performance using noisy datasets, without incorporating experts-in-the-loop for actively cleaning the noisy labels. To mitigate these challenges, we propose a two-phase approach that combines Learning with Noisy Labels (LNL) and active learning. This approach not only improves the robustness of medical image classification in the presence of noisy labels but also iteratively improves the quality of the dataset by relabeling the important incorrect labels, under a limited annotation budget. Furthermore, we introduce a novel Variance of Gradients approach in the LNL phase, which complements the loss-based sample selection by also sampling under-represented examples. Using two imbalanced noisy medical classification datasets, we demonstrate that our proposed technique is superior to its predecessors at handling class imbalance by not misidentifying clean samples from minority classes as mostly noisy samples. Code available at: https://github.com/Bidur-Khanal/imbalanced-medical-active-label-cleaning.git.
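The Variance of Gradients idea can be illustrated in miniature: samples whose gradient magnitudes fluctuate across training epochs are flagged for expert relabeling. The sketch below is a simplification of the paper's method (which computes gradients of the pre-softmax outputs); the per-epoch magnitudes and sample names here are invented:

```python
from statistics import pvariance

def variance_of_gradients(grad_history):
    """grad_history: {sample_id: [per-epoch gradient magnitudes]}.
    Returns each sample's variance of gradients (VoG)."""
    return {sid: pvariance(mags) for sid, mags in grad_history.items()}

def select_for_relabeling(grad_history, budget):
    """Pick the `budget` samples with the highest VoG: the unstable
    ones most likely to be noisy or under-represented."""
    vog = variance_of_gradients(grad_history)
    return sorted(vog, key=vog.get, reverse=True)[:budget]

# Hypothetical gradient magnitudes logged over 4 epochs.
history = {
    "img_001": [0.90, 0.10, 0.85, 0.15],  # unstable -> candidate for cleaning
    "img_002": [0.20, 0.18, 0.22, 0.19],  # stable, likely clean
    "img_003": [0.05, 0.06, 0.05, 0.07],  # stable, easy sample
}
print(select_for_relabeling(history, budget=1))  # ['img_001']
```

The budget parameter mirrors the paper's limited annotation budget: only the top-ranked samples are sent to the experts-in-the-loop.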

PMID:40207034 | PMC:PMC11981598 | DOI:10.1007/978-3-031-72120-5_4

Categories: Literature Watch

Gait Speed and Task Specificity in Predicting Lower-Limb Kinematics: A Deep Learning Approach Using Inertial Sensors

Thu, 2025-04-10 06:00

Mayo Clin Proc Digit Health. 2024 Nov 27;3(1):100183. doi: 10.1016/j.mcpdig.2024.11.004. eCollection 2025 Mar.

ABSTRACT

OBJECTIVE: To develop a deep learning framework to predict lower-limb joint kinematics from inertial measurement unit (IMU) data across multiple gait tasks (walking, jogging, and running) and evaluate the impact of dynamic time warping (DTW) on reducing prediction errors.

PATIENTS AND METHODS: Data were collected from 18 participants fitted with IMUs and an optical motion capture system between May 25, 2023, and May 30, 2023. A long short-term memory autoencoder supervised regression model was developed. The model consisted of multiple long short-term memory and convolution layers. Acceleration and gyroscope data from the IMUs in 3 axes and their magnitude for the proximal and distal sensors of each joint (hip, knee, and ankle) were inputs to the model. Optical motion capture kinematics were considered ground truth and used as an output to train the prediction model.

RESULTS: The deep learning models achieved a root-mean-square error (RMSE) of less than 6° for hip, knee, and ankle joint sagittal-plane angles, with the ankle showing the lowest error (5.1°). Task-specific models showed enhanced performance during certain gait phases, such as knee flexion during running. Applying DTW reduced RMSE across all tasks by at least 3° to 4°. External validation on independent data confirmed the model's generalizability.
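The RMSE reduction attributed to DTW comes from comparing warped rather than strictly time-aligned curves. A toy illustration of the classic dynamic-programming DTW, with invented knee-angle traces in which the prediction simply lags the ground truth by one sample:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Hypothetical knee-angle traces (degrees): prediction lags ground truth.
truth = [0, 5, 20, 45, 60, 45, 20, 5, 0]
pred  = [0, 0, 5, 20, 45, 60, 45, 20, 5]

print(dtw_distance(truth, pred))                      # small: shapes match after warping
print(sum(abs(t - p) for t, p in zip(truth, pred)))   # large pointwise error
```

The gap between the two printed numbers is exactly the effect the study exploits: errors that are mostly timing offsets shrink dramatically once the curves are aligned.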

CONCLUSION: Our findings underscore the potential of IMU-based deep learning models for joint kinematic predictions, offering a practical solution for remote and continuous biomechanical assessments in health care and sports science.

PMID:40207006 | PMC:PMC11975825 | DOI:10.1016/j.mcpdig.2024.11.004

Categories: Literature Watch

Leveraging Comprehensive Echo Data to Power Artificial Intelligence Models for Handheld Cardiac Ultrasound

Thu, 2025-04-10 06:00

Mayo Clin Proc Digit Health. 2025 Jan 10;3(1):100194. doi: 10.1016/j.mcpdig.2025.100194. eCollection 2025 Mar.

ABSTRACT

OBJECTIVE: To develop a fully end-to-end deep learning framework capable of estimating left ventricular ejection fraction (LVEF), estimating patient age, and classifying patient sex from echocardiographic videos, including videos collected using handheld cardiac ultrasound (HCU).

PATIENTS AND METHODS: Deep learning models were trained using retrospective transthoracic echocardiography (TTE) data collected in Mayo Clinic Rochester and surrounding Mayo Clinic Health System sites (training: 6432 studies and internal validation: 1369 studies). Models were then evaluated using retrospective TTE data from the 3 Mayo Clinic sites (Rochester, n=1970; Arizona, n=1367; Florida, n=1562) before being applied to a prospective dataset of handheld ultrasound and TTE videos collected from 625 patients. Study data were collected between January 1, 2018 and February 29, 2024.

RESULTS: Models showed strong performance on the retrospective TTE datasets (LVEF regression: root mean squared error (RMSE)=6.83%, 6.53%, and 6.95% for Rochester, Arizona, and Florida cohorts, respectively; classification of LVEF ≤40% versus LVEF > 40%: area under curve (AUC)=0.962, 0.967, and 0.980 for Rochester, Arizona, and Florida, respectively; age: RMSE=9.44% for Rochester; sex: AUC=0.882 for Rochester), and performed comparably for prospective HCU versus TTE data (LVEF regression: RMSE=6.37% for HCU vs 5.57% for TTE; LVEF classification: AUC=0.974 vs 0.981; age: RMSE=10.35% vs 9.32%; sex: AUC=0.896 vs 0.933).
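The AUC values reported for the LVEF ≤40% classifier have a simple rank interpretation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A small sketch of that computation, with invented model scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability a positive scores above a negative
    (Mann-Whitney U statistic; ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for LVEF <= 40% (positive) vs. LVEF > 40% cases.
pos = [0.92, 0.80, 0.65, 0.55]
neg = [0.40, 0.30, 0.60, 0.20]

print(f"AUC = {auc(pos, neg):.4f}")
```

This O(n·m) form is fine for illustration; production code would use a rank-based O(n log n) implementation such as scikit-learn's `roc_auc_score`.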

CONCLUSION: Robust TTE datasets can be used to effectively power HCU deep learning models, which in turn demonstrates that focused diagnostic images can be obtained with handheld devices.

PMID:40207004 | PMC:PMC11975991 | DOI:10.1016/j.mcpdig.2025.100194

Categories: Literature Watch

Optimizing Input Selection for Cardiac Model Training and Inference: An Efficient 3D Convolutional Neural Networks-Based Approach to Automate Coronary Angiogram Video Selection

Thu, 2025-04-10 06:00

Mayo Clin Proc Digit Health. 2025 Jan 21;3(1):100195. doi: 10.1016/j.mcpdig.2025.100195. eCollection 2025 Mar.

ABSTRACT

OBJECTIVE: To develop an efficient and automated method for selecting appropriate coronary angiography videos for training deep learning models, thereby improving the accuracy and efficiency of medical image analysis.

PATIENTS AND METHODS: We developed deep learning models using 232 coronary angiographic studies from the Mayo Clinic. We utilized 2 state-of-the-art convolutional neural networks (CNN: ResNet and X3D) to identify low-quality angiograms through binary classification (satisfactory/unsatisfactory). Ground truth for the quality of the input angiogram was determined by 2 experienced cardiologists. We validated the developed model in an independent dataset of 3208 procedures from 3 Mayo sites.

RESULTS: The 3D-CNN models outperformed their 2D counterparts, with the X3D-L model achieving superior performance across all metrics (AUC 0.98, accuracy 0.96, precision 0.87, and F1 score 0.92). Two-dimensional architectures are generally smaller and less computationally complex than 3D models; nevertheless, despite its 3D architecture, the X3D-L model had a lower computational demand (19.34 giga multiply-accumulate operations, GMACs) and parameter count (5.34 M) than the 2D models. When validating the models on the independent dataset, slight decreases in all metrics were observed, but AUC and accuracy remained robust (0.95 and 0.92, respectively, for the X3D-L model).

CONCLUSION: We developed a rapid and effective method for automating the selection of coronary angiogram video clips using 3D-CNNs, potentially improving model accuracy and efficiency in clinical applications. The X3D-L model reports a balanced trade-off between computational efficiency and complexity, making it suitable for real-life clinical applications.

PMID:40206993 | PMC:PMC11975815 | DOI:10.1016/j.mcpdig.2025.100195

Categories: Literature Watch

Deep learning-enabled transformation of anterior segment images to corneal fluorescein staining images for enhanced corneal disease screening

Thu, 2025-04-10 06:00

Comput Struct Biotechnol J. 2025 Mar 7;28:94-105. doi: 10.1016/j.csbj.2025.02.039. eCollection 2025.

ABSTRACT

Corneal diseases present a significant challenge to global health. Given the uneven distribution of ophthalmic resources, the development of a system to facilitate remote diagnosis of corneal diseases is particularly crucial. In this study, we developed an artificial intelligence system named Gancor, based on a large-scale clinical dataset comprising 9669 anterior segment (AS) images and corresponding corneal fluorescein staining (CFS) images from the Affiliated Eye Hospital of Nanchang University, as well as 967 pairs of AS-CFS images captured via smartphone from the Jiangxi Province Division of National Clinical Research Center for Ocular Diseases. The system utilizes Generative Adversarial Networks (GANs) to convert AS images into CFS images for the screening of 11 common corneal diseases. Objective assessments of the generated CFS images were conducted using Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM), along with subjective evaluations by three experienced ophthalmologists, confirming the high quality and diagnostic relevance of the synthesized images. In terms of diagnostic performance for corneal diseases, the accuracy rate exceeded 75 %, and the Area Under the Curve (AUC) value reached above 0.90. This innovative approach not only provides images with greater diagnostic value for telemedicine but also enhances the efficiency of remote diagnosis, offering an effective tool for achieving the goal of comprehensive, equitable, and accessible eye care services.
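Two of the objective image-quality metrics used here, MAE and PSNR, are straightforward to compute from pixel values; SSIM is omitted below because it requires windowed local statistics. The pixel values are invented for illustration:

```python
import math

def mae(a, b):
    """Mean absolute error between two images (flat lists of pixels)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

# Hypothetical 4-pixel patches: GAN-generated CFS image vs. real CFS image.
real      = [120, 130, 140, 150]
generated = [118, 133, 139, 152]

print(f"MAE  = {mae(real, generated):.2f}")
print(f"PSNR = {psnr(real, generated):.1f} dB")
```

Lower MAE and higher PSNR indicate that the generated CFS image is pixel-wise closer to the real staining image; in practice these are averaged over the full test set.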

PMID:40206787 | PMC:PMC11981786 | DOI:10.1016/j.csbj.2025.02.039

Categories: Literature Watch

Identification of FDFT1 and PGRMC1 as New Biomarkers in Nonalcoholic Steatohepatitis (NASH)-Related Hepatocellular Carcinoma by Deep Learning

Thu, 2025-04-10 06:00

J Hepatocell Carcinoma. 2025 Apr 5;12:685-704. doi: 10.2147/JHC.S505752. eCollection 2025.

ABSTRACT

BACKGROUND: With the global epidemic of obesity and diabetes, non-alcoholic fatty liver disease (NAFLD) is becoming the most common chronic liver disease, and NASH is increasingly becoming a major risk factor for hepatocellular carcinoma. Therefore, it is essential to explore novel biomarkers in NASH-related HCC.

METHODS: Deep Learning (DL) methods are a promising and encouraging tool widely used in genomics by automatically applying neural networks (NNs). Therefore, DL, "limma package", weighted gene co-expression network analysis (WGCNA), and Protein-Protein Interaction Networks (PPI) were used to screen feature genes. Real-time quantitative PCR was used to validate the expression of feature genes in the NAFLD mice model. Enrichment and single-cell sequencing analyses of single genes were performed to investigate the role of feature genes in NASH-related HCC.

RESULTS: By combining the core genes screened by DL in NAFLD with important genes in metabolic syndrome, six feature genes (FDFT1, TNFSF10, DNAJC16, RDH11, PGRMC1, and MYC) were obtained. ROC analysis demonstrated the model's superiority, with an AUC of 0.983 (0.9241-0.98885). Animal experiments based on NAFLD mouse models also showed that FDFT1, TNFSF10, DNAJC16, RDH11, and PGRMC1 are expressed at higher levels in NAFLD livers. Among the feature genes, FDFT1 and PGRMC1 showed significant expression trends and outstanding diagnostic value in NASH-HCC.

CONCLUSION: FDFT1 and PGRMC1 are key enzymes in the cholesterol synthesis pathway; our study therefore validates the important role of cholesterol metabolism in NAFLD from another perspective, implying that they may be new prognostic and diagnostic markers for NASH-HCC.

PMID:40206734 | PMC:PMC11980943 | DOI:10.2147/JHC.S505752

Categories: Literature Watch

Analyzing handwriting legibility through hand kinematics

Thu, 2025-04-10 06:00

Front Artif Intell. 2025 Mar 26;8:1426455. doi: 10.3389/frai.2025.1426455. eCollection 2025.

ABSTRACT

INTRODUCTION: Handwriting is a complex skill requiring coordination among the human motor system, sensory perception, cognitive processing, memory retrieval, and linguistic proficiency. Various aspects of hand and stylus kinematics can affect the legibility of handwritten text. Assessing handwriting legibility is challenging because variations in experts' cultural and academic backgrounds introduce subjectivity biases into evaluations.

METHODS: In this paper, we utilize a deep-learning model to analyze kinematic features influencing the legibility of handwriting based on temporal convolutional networks (TCN). Fifty subjects are recruited to complete a 26-word paragraph handwriting task, designed to include all possible orthographic combinations of Arabic characters, during which the hand and stylus movements are recorded. A total of 117 different spatiotemporal features are recorded, and the data collected are used to train the model. Shapley values are used to determine the important hand and stylus kinematics features toward evaluating legibility. Three experts are recruited to label the produced text into different legibility scores. Statistical analysis of the top 6 features is conducted to investigate the differences between features associated with high and low legibility scores.

RESULTS: Although the model trained on stylus kinematics features demonstrates relatively high accuracy (around 76%), where the number of legibility classes can vary between 7 and 8 depending on the expert, the addition of hand kinematics features significantly increases the model accuracy by approximately 10%. Explainability analysis revealed that pressure variability, pen slant (altitude, azimuth), and hand speed components are the most prominent for evaluating legibility across the three experts.
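Shapley values attribute the model's accuracy to individual feature groups by averaging each group's marginal contribution over all orderings. A minimal exact computation, with an invented additive "accuracy" function standing in for the trained TCN (real Shapley analyses use sampling approximations over many more features):

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values by averaging each feature's marginal
    contribution over all orderings (fine for a handful of features)."""
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        coalition = set()
        for f in order:
            before = value(coalition)
            coalition = coalition | {f}
            phi[f] += value(coalition) - before
    return {f: v / len(orders) for f, v in phi.items()}

# Hypothetical model "accuracy" as a function of which kinematic
# feature groups are available (numbers invented for illustration).
def accuracy(coalition):
    base = 0.50
    if "stylus_pressure" in coalition:
        base += 0.15
    if "pen_slant" in coalition:
        base += 0.08
    if "hand_speed" in coalition:
        base += 0.05
    return base

phi = shapley_values(["stylus_pressure", "pen_slant", "hand_speed"], accuracy)
print(phi)  # additive contributions summing to the full-model gain over baseline
```

Because the toy value function is additive, each feature's Shapley value equals its own increment; real models produce interaction effects, which is precisely why the ordering average matters.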

DISCUSSION: The model learns meaningful stylus and hand kinematics features associated with the legibility of handwriting. The hand kinematics features are important for accurate assessment of handwriting legibility. The proposed approach can be used in handwriting learning tools for personalized handwriting skill acquisition as well as for pathology detection and rehabilitation.

PMID:40206709 | PMC:PMC11979204 | DOI:10.3389/frai.2025.1426455

Categories: Literature Watch

Internet of things driven hybrid neuro-fuzzy deep learning building energy management system for cost and schedule optimization

Thu, 2025-04-10 06:00

Front Artif Intell. 2025 Mar 26;8:1544183. doi: 10.3389/frai.2025.1544183. eCollection 2025.

ABSTRACT

Optimizing building energy consumption holds significant untapped potential, particularly in a developing economy such as India. Existing solutions have yet to deliver a methodology that is cost-effective, small-scale, precise, and driven by open-source data. In response, we implemented an automated, DL-enabled approach to predict energy consumption, with the goal of enabling cost and schedule optimization. For two years, from December 2021 to December 2023, energy consumption and twenty-seven associated energy parameters were monitored via a purpose-built IoT-enabled BEMS. The collected data were preprocessed, cleaned, transformed, and used to train a machine learning model. Building on previous literature, a hybrid DL model was developed using artificial neural networks and fuzzy logic, integrating fuzzy layers into the deep neural architecture. The collected electrical data were used for training, hyperparameter tuning, and testing of the hybrid DL model. When tested on an out-of-sample dataset, the proposed model achieved error and performance metrics comparable to those of other state-of-the-art models. On deployment on a university campus, the BEMS achieved a 20% reduction in the electricity bill, highlighting its effectiveness and efficacy.
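The fuzzy layers in a neuro-fuzzy hybrid of this kind typically begin with fuzzification: mapping a crisp sensor reading to membership degrees in linguistic terms before the neural layers combine them. A minimal sketch, with invented membership ranges for instantaneous electrical load:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzification of an instantaneous load reading (kW)
# into linguistic terms; the breakpoints are invented for illustration.
load = 6.5
memberships = {
    "low":    tri_membership(load, 0, 2, 5),
    "medium": tri_membership(load, 3, 6, 9),
    "high":   tri_membership(load, 7, 10, 13),
}
print(memberships)
```

A trained system would learn or tune these breakpoints; downstream dense layers then operate on the membership vector rather than the raw reading.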

PMID:40206707 | PMC:PMC11979119 | DOI:10.3389/frai.2025.1544183

Categories: Literature Watch

Novel deep learning algorithm based MRI radiomics for predicting lymph node metastases in rectal cancer

Wed, 2025-04-09 06:00

Sci Rep. 2025 Apr 9;15(1):12089. doi: 10.1038/s41598-025-96618-y.

ABSTRACT

To explore the value of an MRI-based radiomic nomogram for predicting lymph node metastasis (LNM) in rectal cancer (RC). This retrospective analysis used data from 430 patients with RC from two medical centers. Patients were categorized as LNM negative (LNM-) or LNM positive (LNM+) according to their surgical pathology results. We developed a physician model by selecting independent clinical predictors through physician assessments. Additionally, we developed deep learning radscore (DLRS) models by extracting deep features from multiparametric MRI (mpMRI) images. A nomogram model was constructed by combining the physician model and the DLRS models. Among the patients, 192 (44.65%, 192/430) were LNM+. Six prediction models were developed: the physician model, three sequence models, the DLRS, and the nomogram. The physician model achieved areas under the receiver operating characteristic curve (AUCs) of 0.78, 0.79, and 0.70, whereas the sequence, DLRS, and nomogram models achieved AUC values ranging from 0.83 to 0.99. The predictive performance of the DLRS and nomogram models was superior to that of the physician model. DLRS and nomogram models based on mpMRI predicted LNM status in patients with RC more accurately than the other models.
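A nomogram of this kind usually reduces to a logistic combination of the clinical and radiomics predictors. A sketch with invented coefficients and predictor names (a fitted model would supply the real ones):

```python
import math

def nomogram_probability(intercept, weights, predictors):
    """Logistic combination of predictors, the usual way a nomogram
    fuses a clinical score with radiomics scores."""
    z = intercept + sum(w * predictors[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients and per-patient scores, for illustration only.
weights = {"clinical_score": 1.2, "dlrs_t2": 0.9, "dlrs_dwi": 1.1}
patient = {"clinical_score": 0.4, "dlrs_t2": 0.7, "dlrs_dwi": 0.6}

p = nomogram_probability(intercept=-1.5, weights=weights, predictors=patient)
print(f"Predicted probability of LNM: {p:.2f}")
```

The printed nomogram output is simply the points-to-probability mapping drawn on the paper chart; each weighted predictor corresponds to one axis of the nomogram.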

PMID:40204902 | DOI:10.1038/s41598-025-96618-y

Categories: Literature Watch

Comprehensive evaluation of U-Net based transcranial magnetic stimulation electric field estimations

Wed, 2025-04-09 06:00

Sci Rep. 2025 Apr 9;15(1):12204. doi: 10.1038/s41598-025-95767-4.

ABSTRACT

Transcranial Magnetic Stimulation (TMS) is a non-invasive method of modulating neural activity by inducing an electric field in the human brain. Computational models are an important tool for informing TMS targeting and dosing. State-of-the-art modeling techniques use numerical methods, such as the finite element method (FEM), to produce highly accurate simulation results. However, these methods carry a high computational cost, limiting real-time integration and high-throughput applications. Deep learning (DL) methods, particularly U-Nets, are being investigated for TMS electric field estimation. However, their performance across large datasets and whole-head stimulation conditions has not been systematically evaluated. Here, we develop a DL framework to estimate TMS-induced electric fields directly from an anatomical magnetic resonance image (MRI) and TMS coil parameters. We perform a comprehensive evaluation of our U-Net approach against the FEM gold standard. We selected a dataset of 100 MRI scans from a demographically diverse population (ethnicity, gender, age) made available by the Human Connectome Project. For each MRI, we generated a FEM head model and simulated the electric fields for 13 TMS coil orientations and 1206 positions (a total of 15,678 coil configurations per participant). We trained a modified U-Net architecture to predict individual TMS-induced electric fields in the brain from an input T1-weighted MRI scan and stimulation parameters. We characterized the model's performance in terms of computational efficiency and simulation accuracy relative to FEM using an independent testing dataset. The U-Net achieved an accelerated electric field modeling speed of 0.8 s per simulation (a roughly 97,000-fold acceleration over the FEM-based approach). Sampling stimulation conditions across the whole brain yielded an average Dice coefficient of 0.71 ± 0.06 and an average center-of-gravity deviation of 7.52 ± 4.06 mm relative to the FEM-based approach. Our findings indicate that while deep learning can significantly accelerate electric field predictions, the precision it achieves needs to be evaluated for each specific TMS application.
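The two accuracy metrics reported, Dice overlap and center-of-gravity deviation, can be computed directly from binary masks of the suprathreshold field region. A toy example with invented 4×4 masks:

```python
def dice(a, b):
    """Dice coefficient between two binary masks (flat lists of 0/1)."""
    inter = sum(x * y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def center_of_gravity(mask, width):
    """Centroid (row, col) of a binary mask laid out row-major."""
    pts = [(i // width, i % width) for i, v in enumerate(mask) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# Two hypothetical 4x4 masks: FEM "gold standard" vs. a U-Net estimate
# shifted one column to the right.
fem  = [0, 1, 1, 0,
        0, 1, 1, 0,
        0, 0, 0, 0,
        0, 0, 0, 0]
unet = [0, 0, 1, 1,
        0, 0, 1, 1,
        0, 0, 0, 0,
        0, 0, 0, 0]

print(f"Dice = {dice(fem, unet):.2f}")
cg_f, cg_u = center_of_gravity(fem, 4), center_of_gravity(unet, 4)
deviation = ((cg_f[0] - cg_u[0]) ** 2 + (cg_f[1] - cg_u[1]) ** 2) ** 0.5
print(f"CoG deviation = {deviation:.2f} px")
```

In the study these are computed in 3-D voxel space, with the CoG deviation expressed in millimetres; the one-pixel shift here yields a Dice of 0.5 and a deviation of one pixel.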

PMID:40204769 | DOI:10.1038/s41598-025-95767-4

Categories: Literature Watch

Deep learning for cerebral vascular occlusion segmentation: A novel ConvNeXtV2 and GRN-integrated U-Net framework for diffusion-weighted imaging

Wed, 2025-04-09 06:00

Neuroscience. 2025 Apr 7:S0306-4522(25)00287-8. doi: 10.1016/j.neuroscience.2025.04.010. Online ahead of print.

ABSTRACT

Cerebral vascular occlusion is a serious condition that can lead to stroke and permanent neurological damage due to insufficient oxygen and nutrients reaching brain tissue. Early diagnosis and accurate segmentation are critical for effective treatment planning. Owing to its high soft-tissue contrast, Magnetic Resonance Imaging (MRI) is commonly used for detecting these occlusions and resulting conditions such as ischemic stroke. However, challenges such as low contrast, noise, and heterogeneous lesion structures in MRI images complicate manual segmentation and often lead to misinterpretations. As a result, deep learning-based Computer-Aided Diagnosis (CAD) systems are essential for faster and more accurate diagnosis and treatment, although they can face challenges such as high computational costs and difficulty segmenting small or irregular lesions. This study proposes a novel U-Net architecture enhanced with ConvNeXtV2 blocks and GRN-based Multi-Layer Perceptrons (MLPs) to address these challenges in cerebral vascular occlusion segmentation; it is the first application of ConvNeXtV2 in this domain. The proposed model significantly improves segmentation accuracy, even in low-contrast regions, while maintaining the high computational efficiency that is crucial for real-world clinical applications. To reduce false positives and improve overall accuracy, small lesions (≤5 pixels) were removed in the preprocessing step with the support of expert clinicians. Experimental results on the ISLES 2022 dataset showed superior performance, with an Intersection over Union (IoU) of 0.8015 and a Dice coefficient of 0.8894. Comparative analyses indicate that the proposed model achieves higher segmentation accuracy than existing U-Net variants and other methods, offering a promising solution for clinical use.
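The preprocessing step of discarding lesions of 5 pixels or fewer is, in effect, a connected-component size filter. A self-contained sketch on an invented binary mask, assuming 4-connectivity (the paper does not specify the connectivity used):

```python
from collections import deque

def remove_small_lesions(mask, min_size=6):
    """Zero out 4-connected components smaller than `min_size` pixels,
    i.e. drop lesions of min_size-1 pixels or fewer.
    `mask` is a list of rows of 0/1; returns a cleaned copy."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for sr in range(h):
        for sc in range(w):
            if out[sr][sc] and not seen[sr][sc]:
                comp, queue = [], deque([(sr, sc)])
                seen[sr][sc] = True
                while queue:  # BFS over one connected component
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < h and 0 <= nc < w \
                                and out[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                if len(comp) < min_size:
                    for r, c in comp:  # drop the tiny component
                        out[r][c] = 0
    return out

# One 6-pixel lesion (kept) and one 2-pixel speck (removed).
mask = [[1, 1, 0, 0, 1],
        [1, 1, 0, 0, 1],
        [1, 1, 0, 0, 0]]
cleaned = remove_small_lesions(mask)
print(cleaned)
```

Production pipelines would typically use `scipy.ndimage.label` for this; the study additionally had expert clinicians confirm which small components were safe to discard.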

PMID:40204150 | DOI:10.1016/j.neuroscience.2025.04.010

Categories: Literature Watch

Application of artificial intelligence in the diagnosis of malignant digestive tract tumors: focusing on opportunities and challenges in endoscopy and pathology

Wed, 2025-04-09 06:00

J Transl Med. 2025 Apr 9;23(1):412. doi: 10.1186/s12967-025-06428-z.

ABSTRACT

BACKGROUND: Malignant digestive tract tumors are highly prevalent and fatal tumor types globally, often diagnosed at advanced stages due to atypical early symptoms, causing patients to miss optimal treatment opportunities. Traditional endoscopic and pathological diagnostic processes are highly dependent on expert experience, facing problems such as high misdiagnosis rates and significant inter-observer variations. With the development of artificial intelligence (AI) technologies such as deep learning, real-time lesion detection with endoscopic assistance and automated pathological image analysis have shown potential in improving diagnostic accuracy and efficiency. However, relevant applications still face challenges including insufficient data standardization, inadequate interpretability, and weak clinical validation.

OBJECTIVE: This study aims to systematically review the current applications of artificial intelligence in diagnosing malignant digestive tract tumors, focusing on the progress and bottlenecks in two key areas: endoscopic examination and pathological diagnosis, and to provide feasible ideas and suggestions for subsequent research and clinical translation.

METHODS: A systematic literature search strategy was adopted to screen relevant studies published between 2017 and 2024 from databases including PubMed, Web of Science, Scopus, and IEEE Xplore, supplemented with searches of early classical literature. Inclusion criteria included studies on malignant digestive tract tumors such as esophageal cancer, gastric cancer, or colorectal cancer, involving the application of artificial intelligence technology in endoscopic diagnosis or pathological analysis. The effects and main limitations of AI diagnosis were summarized through comprehensive analysis of research design, algorithmic methods, and experimental results from relevant literature.

RESULTS: In the field of endoscopy, multiple deep learning models have significantly improved detection rates in real-time polyp detection, early gastric cancer, and esophageal cancer screening, with some commercialized systems successfully entering clinical trials. However, the scale and quality of data across different studies vary widely, and the generalizability of models to multi-center, multi-device environments remains to be verified. In pathological analysis, convolutional neural networks, multimodal pre-trained models, and related methods enable automatic tissue segmentation, tumor grading, and assisted diagnosis, and show good scalability in interactive question answering. Nevertheless, clinical implementation still faces obstacles such as non-uniform data standards, lack of large-scale prospective validation, and insufficient model interpretability and continuous learning mechanisms.

CONCLUSION: Artificial intelligence provides new technological opportunities for endoscopic and pathological diagnosis of malignant digestive tract tumors, achieving positive results in early lesion identification and assisted decision-making. However, to achieve the transition from research to widespread clinical application, data standardization, model reliability, and interpretability still need to be improved through multi-center joint research, and a complete regulatory and ethical system needs to be established. In the future, artificial intelligence will play a more important role in the standardization and precision management of diagnosis and treatment of digestive tract tumors.

PMID:40205603 | DOI:10.1186/s12967-025-06428-z

Categories: Literature Watch

Radiation and contrast dose reduction in coronary computed tomography angiography for slender patients with 70 kV tube voltage and deep learning image reconstruction

Wed, 2025-04-09 06:00

Br J Radiol. 2025 Apr 9:tqaf077. doi: 10.1093/bjr/tqaf077. Online ahead of print.

ABSTRACT

OBJECTIVE: To evaluate the radiation and contrast dose reduction potential of combining 70 kV with deep learning image reconstruction (DLIR) in coronary computed tomography angiography (CCTA) for slender patients with body mass index (BMI) ≤ 25 kg/m2.

METHODS: Sixty patients for CCTA were randomly divided into two groups: group A with 120 kV and contrast agent dose of 0.8 ml/kg, and group B with 70 kV and contrast agent dose of 0.5 ml/kg. Group A used adaptive statistical iterative reconstruction-V (ASIR-V) with 50% strength level (50% ASIR-V), while group B used 50% ASIR-V, DLIR of low level (DLIR-L), DLIR of medium level (DLIR-M), and DLIR of high level (DLIR-H) for image reconstruction. The CT values and SD values of coronary arteries and pericardial fat were measured, and signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. The image quality was subjectively evaluated by two radiologists using a five-point scoring system. The effective radiation dose (ED) and contrast dose were calculated and compared.
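The SNR and CNR figures compared in this study can be computed from ROI measurements. The following is not the authors' code, only a sketch of one common convention (SNR as ROI mean over ROI SD; CNR as the vessel-fat attenuation difference over the fat SD); exact definitions vary between papers:

```python
# Illustrative only: SNR/CNR from ROI attenuation samples (HU), one common convention.
import statistics

def snr(roi_values):
    """SNR = mean attenuation / SD within the same ROI."""
    return statistics.mean(roi_values) / statistics.stdev(roi_values)

def cnr(vessel_values, fat_values):
    """CNR = (vessel mean - fat mean) / fat SD."""
    return (statistics.mean(vessel_values) - statistics.mean(fat_values)) / statistics.stdev(fat_values)

vessel = [480, 500, 520]    # hypothetical HU samples in a coronary ROI
fat    = [-90, -100, -110]  # hypothetical HU samples in pericardial fat
print(snr(vessel))          # 500 / 20 = 25.0
print(cnr(vessel, fat))     # (500 - (-100)) / 10 = 60.0
```

This also illustrates why 70 kV can preserve CNR at a lower contrast dose: the lower tube voltage raises iodine attenuation (the vessel mean), partly offsetting the increased noise.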

RESULTS: Group B significantly reduced radiation dose by 75.6% and contrast dose by 32.9% compared to group A. Group B exhibited higher CT values of coronary arteries than group A, and DLIR-L, DLIR-M, and DLIR-H in group B provided higher SNR values, CNR values, and subjective scores, among which DLIR-H had the lowest noise and the highest subjective scores.

CONCLUSION: Using 70 kV combined with DLIR significantly reduces radiation and contrast dose while improving image quality in CCTA for slender patients, with DLIR-H providing the greatest improvement in image quality.

ADVANCES IN KNOWLEDGE: A 70 kV protocol combined with DLIR-H may be used in CCTA for slender patients to significantly reduce radiation dose and contrast dose while improving image quality.

PMID:40205479 | DOI:10.1093/bjr/tqaf077

Categories: Literature Watch

Systematic review of AI/ML applications in multi-domain robotic rehabilitation: trends, gaps, and future directions

Wed, 2025-04-09 06:00

J Neuroeng Rehabil. 2025 Apr 9;22(1):79. doi: 10.1186/s12984-025-01605-z.

ABSTRACT

Robotic technology is expected to transform rehabilitation settings by providing precise, repetitive, and task-specific interventions, thereby potentially improving patients' clinical outcomes. Artificial intelligence (AI) and machine learning (ML) have been widely applied in different areas to support robotic rehabilitation, from controlling robot movements to real-time patient assessment. To provide an overview of the current landscape and the impact of AI/ML use in robotic rehabilitation, we performed a systematic review focusing on the use of AI and robotics in rehabilitation from a broad perspective, encompassing different pathologies and body districts, and considering both motor and neurocognitive rehabilitation. We searched the Scopus and IEEE Xplore databases, focusing on studies involving human participants. After article retrieval, a tagging phase was carried out to devise a comprehensive and easily interpretable taxonomy: its categories include the aim of the AI/ML within the rehabilitation system, the type of algorithms used, and the location of robots and sensors. The 201 selected articles span multiple domains and diverse aims, such as movement classification, trajectory prediction, and patient evaluation, demonstrating the potential of ML to revolutionize personalized therapy and improve patient engagement. ML is reported as highly effective in predicting movement intentions, assessing clinical outcomes, and detecting compensatory movements, providing insights into the future of personalized rehabilitation interventions. Our analysis also reveals pitfalls in the current use of AI/ML in this area, such as potential explainability issues and poor generalization ability when these systems are applied in real-world settings.

PMID:40205472 | DOI:10.1186/s12984-025-01605-z

Categories: Literature Watch

Preoperative assessment in lymph node metastasis of pancreatic ductal adenocarcinoma: a transformer model based on dual-energy CT

Wed, 2025-04-09 06:00

World J Surg Oncol. 2025 Apr 9;23(1):135. doi: 10.1186/s12957-025-03774-6.

ABSTRACT

BACKGROUND: Deep learning (DL) models can significantly improve discrimination of lymph node metastasis (LNM) of pancreatic ductal adenocarcinoma (PDAC), but have not been systematically assessed.

PURPOSE: To develop and test a transformer model utilizing dual-energy computed tomography (DECT) for predicting LNM in patients with PDAC.

MATERIALS AND METHODS: This retrospective study examined patients who had undergone surgical resection and had pathologically confirmed PDAC, with DECT performed between August 2016 and October 2022. Six predictive models were constructed: a DECT report model, a clinical model, a 100 keV DL model, a 150 keV DL model, a combined 100 + 150 keV DL model, and a model that integrated clinical information with DL-derived signatures. Multivariable logistic regression analysis was employed to develop the integrated model. The efficacy of these models was assessed by comparing their areas under the receiver operating characteristic curve (AUC) using the DeLong test. Survival analysis was conducted using Kaplan-Meier curves.
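The AUC values being compared here have a simple probabilistic reading: the chance that a randomly chosen LNM-positive patient receives a higher model score than a randomly chosen LNM-negative one (the Mann-Whitney formulation). The sketch below is not from the study, only an illustration of that equivalence with made-up scores:

```python
# Illustrative only: AUC as the Mann-Whitney probability of correct ranking.

def auc(scores_pos, scores_neg):
    """Probability a positive case outranks a negative one; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.7]    # hypothetical scores for LNM-positive patients
neg = [0.4, 0.6, 0.75]   # hypothetical scores for LNM-negative patients
print(auc(pos, neg))     # 8 of 9 pairs correctly ranked ≈ 0.889
```

On this reading, the integrated model's AUC of 0.93 reported below means it correctly ranks a positive above a negative in about 93% of such pairs, versus roughly a coin flip for the 0.60 radiologist assessment.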

RESULTS: In brief, 223 patients (mean age, 57 years ± 11 standard deviation; 93 men) were evaluated. All patients were divided into training (n = 160) and test (n = 63) sets. Patients with LNM accounted for 96 of the 223 patients (43%). In the test set, the integrated model, which combined DECT parameters such as IC and Z, CA-199 levels, DECT reports, and DL signatures, demonstrated the highest performance in predicting LNM, with an AUC of 0.93. In contrast, the radiologists' assessment and the clinical model yielded AUCs of 0.60 and 0.62, respectively. Integrated model-predicted positive LNM was associated with worse overall survival (hazard ratio, 1.75; 95% confidence interval: 1.22-2.83; P = .023).

CONCLUSION: A transformer-based model outperformed radiologists and the clinical model for prediction of LNM at DECT in patients with PDAC.

PMID:40205450 | DOI:10.1186/s12957-025-03774-6

Categories: Literature Watch
