Deep learning

Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study

15 hours 35 min ago

JMIR Med Inform. 2025 May 16;13:e66917. doi: 10.2196/66917.

ABSTRACT

BACKGROUND: The capability of large language models (LLMs) to assess their own confidence when answering questions in the biomedical realm remains underexplored.

OBJECTIVE: This study evaluates the confidence levels of 12 LLMs across 5 medical specialties to assess LLMs' ability to accurately judge their own responses.

METHODS: We used 1965 multiple-choice questions assessing clinical knowledge in the following areas: internal medicine, obstetrics and gynecology, psychiatry, pediatrics, and general surgery. Models were prompted to provide answers along with their confidence that each answer was correct (score range 0%-100%). We calculated the correlation between each model's mean confidence score for correct answers and the overall accuracy of each model across all questions. The confidence scores for correct and incorrect answers were also analyzed to determine the mean difference in confidence, using 2-sample, 2-tailed t tests.
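The two analyses described above (per-model confidence-accuracy correlation, and a 2-sample t test on confidence split by correctness) can be sketched as follows; all numbers are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Synthetic per-model summaries (not the study's data): overall accuracy and
# mean confidence on correct answers for five hypothetical models.
accuracy = np.array([0.74, 0.68, 0.61, 0.55, 0.46])
mean_conf_correct = np.array([0.63, 0.66, 0.70, 0.73, 0.76])

# Pearson correlation between mean confidence (correct answers) and accuracy.
r, p = stats.pearsonr(mean_conf_correct, accuracy)

# For one model: per-question confidence split by correctness, compared with
# a 2-sample, 2-tailed t test.
rng = np.random.default_rng(0)
conf_correct = rng.normal(0.65, 0.08, size=500)
conf_incorrect = rng.normal(0.60, 0.08, size=300)
t, p_diff = stats.ttest_ind(conf_correct, conf_incorrect)
mean_diff = conf_correct.mean() - conf_incorrect.mean()
```

With worse-performing models given higher confidence, `r` comes out negative, mirroring the inverse relationship the study reports.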

RESULTS: The correlation between the mean confidence scores for correct answers and model accuracy was inverse and statistically significant (r=-0.40; P=.001), indicating that worse-performing models exhibited paradoxically higher confidence. For instance, a top-performing model, GPT-4o, had a mean accuracy of 74% (SD 9.4%) with a mean confidence of 63% (SD 8.3%), whereas a low-performing model, Qwen2-7B, showed a mean accuracy of 46% (SD 10.5%) but a mean confidence of 76% (SD 11.7%). The mean difference in confidence between correct and incorrect responses was low for all models, ranging from 0.6% to 5.4%, with GPT-4o having the highest mean difference (5.4%, SD 2.3%; P=.003).

CONCLUSIONS: Better-performing LLMs show more aligned overall confidence levels. However, even the most accurate models still show minimal variation in confidence between right and wrong answers. This may limit their safe use in clinical settings. Addressing overconfidence could involve refining calibration methods, performing domain-specific fine-tuning, and involving human oversight when decisions carry high risks. Further research is needed to improve these strategies before broader clinical adoption of LLMs.

PMID:40378406 | DOI:10.2196/66917

Categories: Literature Watch

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning

15 hours 35 min ago

Neurology. 2025 Jun 10;104(11):e213691. doi: 10.1212/WNL.0000000000213691. Epub 2025 May 16.

ABSTRACT

BACKGROUND AND OBJECTIVES: Focal cortical dysplasia (FCD) is a common pathology for pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection.

METHODS: We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age-matched and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed from a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 z-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on MRF GM and WM maps. A no-new U-Net model was trained using various input combinations, with performance evaluated through leave-one-patient-out cross-validation. We compared model performance using various input combinations from clinical MRI and MRF to assess the impact of different input types on model effectiveness.
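The voxel-wise z-score maps described above can be sketched in a few lines of NumPy; array shapes and T1 values here are hypothetical placeholders, not MRF data:

```python
import numpy as np

# Hypothetical shapes/values, not MRF data: 20 healthy controls (HCs), each a
# small 3D T1 map already registered to a common space.
rng = np.random.default_rng(42)
hc_t1 = rng.normal(1000.0, 50.0, size=(20, 4, 4, 4))    # T1 values in ms
patient_t1 = rng.normal(1000.0, 50.0, size=(4, 4, 4))
patient_t1[0, 0, 0] = 1400.0                            # a lesion-like outlier voxel

mean_map = hc_t1.mean(axis=0)        # voxel-wise mean across HCs
sd_map = hc_t1.std(axis=0, ddof=1)   # voxel-wise SD across HCs

# z-score map: subtract the mean HC map, divide by the SD HC map.
z_map = (patient_t1 - mean_map) / sd_map
```

Voxels whose tissue properties deviate from the healthy-control distribution, like the outlier above, stand out with large |z| values.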

RESULTS: We included 40 patients with FCD (mean age 28.1 years, 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient. Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FPs. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap.

DISCUSSION: The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for developing a deep-learning tool capable of detecting subtle epileptic lesions.

PMID:40378378 | DOI:10.1212/WNL.0000000000213691

Categories: Literature Watch

Multiplexing and Sensing with Fluorescence Lifetime Imaging Microscopy Empowered by Phasor U-Net

15 hours 35 min ago

Anal Chem. 2025 May 16. doi: 10.1021/acs.analchem.5c02028. Online ahead of print.

ABSTRACT

Fluorescence lifetime imaging microscopy (FLIM) has been widely used as an essential multiplexing and sensing tool in frontier fields such as materials science and life sciences. However, the accuracy of lifetime estimation is compromised by limited time-correlated photon counts, and data processing is time-demanding due to the large data volume. Here, we introduce Phasor U-Net, a deep learning method designed for rapid and accurate FLIM imaging. Phasor U-Net incorporates two lightweight U-Net subnetworks that perform denoising and deconvolution, reducing noise and correcting for distortion introduced by the instrument response function, thus facilitating the downstream phasor analysis. Phasor U-Net is trained solely on computer-generated datasets, circumventing the need for large experimental datasets. The method reduced the modified Kullback-Leibler divergence on the phasor plots by 1.5-8-fold compared with the direct phasor method and reduced the mean absolute error of the lifetime images by 1.18-4.41-fold. We then show that this method can be used for multiplexed imaging of mouse small intestine samples labeled with two fluorescent dyes with almost identical emission spectra. We further demonstrate that the size of quantum dots can be better estimated with the measured lifetime information. This general method will open a new paradigm for more fundamental research with FLIM.
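For context, the downstream phasor analysis that Phasor U-Net feeds into projects each decay histogram onto cosine and sine components at the laser repetition frequency; a minimal sketch with an ideal mono-exponential decay (the repetition rate and lifetime below are assumed values, not the paper's):

```python
import numpy as np

# Direct phasor transform of a single FLIM decay histogram I(t):
#   g = sum(I * cos(w*t)) / sum(I),  s = sum(I * sin(w*t)) / sum(I)
f = 80e6                      # hypothetical 80 MHz laser repetition rate
omega = 2 * np.pi * f
tau_true = 2.0e-9             # assumed 2 ns fluorescence lifetime

t = np.linspace(0, 12.5e-9, 256, endpoint=False)   # one laser period, 256 bins
decay = np.exp(-t / tau_true)                      # ideal mono-exponential decay

g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)

# For a single-exponential decay, s/g = omega * tau, so the lifetime is:
tau_est = s / (g * omega)
```

Every pixel maps to a point (g, s) on the phasor plot; noise and instrument-response distortion scatter these points, which is what the denoising and deconvolution subnetworks are designed to mitigate.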

PMID:40378347 | DOI:10.1021/acs.analchem.5c02028

Categories: Literature Watch

An efficient leukemia prediction method using machine learning and deep learning with selected features

15 hours 35 min ago

PLoS One. 2025 May 16;20(5):e0320669. doi: 10.1371/journal.pone.0320669. eCollection 2025.

ABSTRACT

Leukemia is a serious disease affecting both children and adults, leading to death if left untreated. Leukemia is a kind of blood cancer characterized by the rapid proliferation of abnormal blood cells. Early, trustworthy, and precise identification of leukemia is important for treating and saving patients' lives. Acute lymphocytic, acute myelogenous, chronic lymphocytic, and chronic myelogenous leukemia are the four kinds of leukemia. Manual inspection of microscopic images is frequently used to identify these malignant cells. Leukemia symptoms include fatigue, a lack of enthusiasm, a dull appearance, recurring illnesses, and easy blood loss. Identifying subtypes of leukemia for specialized therapy is one of the hurdles in this area. The suggested work predicts and classifies leukemia subtypes in the CuMiDa (GSE9476) gene expression dataset using feature selection and ML techniques. The Curated Microarray Database (CuMiDa) provides 64 samples representing five classes of leukemia, with 22,283 genes each. The proposed approach utilizes the 25 most differentiating selected features for classification using machine and deep learning techniques. This study achieved a classification accuracy of 96.15% using Random Forest, 92.30% using Linear Regression, 96.15% using SVM, and 100% using LSTM. Deep learning methods have been shown to outperform traditional methods in leukemia gene classification by utilizing selected features.
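The select-then-classify pipeline described above (25 most differentiating features, then ML classifiers) can be sketched as follows; the data are synthetic stand-ins for CuMiDa, and in rigorous practice feature selection should be nested inside cross-validation to avoid leakage:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 64 samples x 2000 synthetic "genes", 5 classes: a small-n, large-p setting
# mimicking (but not reproducing) the CuMiDa GSE9476 shape.
X, y = make_classification(n_samples=64, n_features=2000, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# Keep the 25 most differentiating features by ANOVA F-test.
selector = SelectKBest(f_classif, k=25)
X_sel = selector.fit_transform(X, y)

# Classify on the selected features (Random Forest shown; SVM etc. analogous).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_sel, y, cv=5)
```

Note the caveat in the comment: selecting features on the full dataset before cross-validation optimistically biases the scores; a `Pipeline` wrapping selector and classifier avoids this.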

PMID:40378164 | DOI:10.1371/journal.pone.0320669

Categories: Literature Watch

DeepMBEnzy: An AI-Driven Database of Mycotoxin Biotransformation Enzymes

15 hours 35 min ago

J Agric Food Chem. 2025 May 16. doi: 10.1021/acs.jafc.5c02477. Online ahead of print.

ABSTRACT

Mycotoxins are toxic fungal metabolites that pose significant health risks. Enzyme biotransformation is a promising option for detoxifying mycotoxins and for elucidating their intracellular metabolism. However, few mycotoxin-biotransformation enzymes have been identified thus far. Here, we developed an enzyme promiscuity prediction for mycotoxin biotransformation (EPP-MB) model by fine-tuning a pretrained model using a cold protein data-splitting approach. The EPP-MB model leverages deep learning to predict enzymes capable of mycotoxin biotransformation, achieving a validation accuracy of 79% against a data set of experimentally confirmed mycotoxin-biotransforming enzymes. We applied the model to predict potential biotransformation enzymes for over 4000 mycotoxins and compiled these into the DeepMBEnzy database, which archives the predicted enzymes and related information for each mycotoxin, providing researchers with a user-friendly, publicly accessible interface at https://synbiodesign.com/DeepMBEnzy/. DeepMBEnzy is designed to facilitate the exploration and utilization of enzyme candidates in mycotoxin biotransformation, supporting further advancements in mycotoxin detoxification research and applications.

PMID:40378051 | DOI:10.1021/acs.jafc.5c02477

Categories: Literature Watch

A deep learning-based approach to automated rib fracture detection and CWIS classification

15 hours 35 min ago

Int J Comput Assist Radiol Surg. 2025 May 16. doi: 10.1007/s11548-025-03390-5. Online ahead of print.

ABSTRACT

PURPOSE: Trauma-induced rib fractures are a common injury. The number and characteristics of these fractures influence whether a patient is treated nonoperatively or surgically. Rib fractures are typically diagnosed using CT scans, yet 19.2-26.8% of fractures are still missed during assessment. Another challenge in managing rib fractures is the interobserver variability in their classification. The purpose of this study was to develop and assess an automated method that detects rib fractures in CT scans and classifies them according to the Chest Wall Injury Society (CWIS) classification.

METHODS: 198 CT scans were collected, of which 170 were used for training and internal validation, and 28 for external validation. Fractures and their classifications were manually annotated in each of the scans. A detection and classification network was trained for each of the three components of the CWIS classification. In addition, a rib number labeling network was trained to obtain the rib number of each fracture. Experiments were performed to assess the method's performance.

RESULTS: On the internal test set, the method achieved a detection sensitivity of 80%, a precision of 87%, and an F1-score of 83%, with a mean of 1.11 false positives per scan (FPPS). Classification sensitivity varied, with the lowest being 25% for complex fractures and the highest being 97% for posterior fractures. The correct rib number was assigned to 94% of the detected fractures. The custom-trained nnU-Net correctly labeled 95.5% of all ribs and 98.4% of fractured ribs in 30 patients. The detection and classification performance on the external validation dataset was slightly better, with a fracture detection sensitivity of 84%, precision of 85%, F1-score of 84%, and FPPS of 0.96; 95% of the fractures were assigned the correct rib number.

CONCLUSION: The method developed is able to accurately detect and classify rib fractures in CT scans, although there is room for improvement for the rare, underrepresented classes in the training set.

PMID:40377883 | DOI:10.1007/s11548-025-03390-5

Categories: Literature Watch

Impact of sarcopenia and obesity on mortality in older adults with SARS-CoV-2 infection: automated deep learning body composition analysis in the NAPKON-SUEP cohort

15 hours 35 min ago

Infection. 2025 May 16. doi: 10.1007/s15010-025-02555-3. Online ahead of print.

ABSTRACT

INTRODUCTION: Severe respiratory infections pose a major challenge in clinical practice, especially in older adults. Body composition analysis could play a crucial role in risk assessment and therapeutic decision-making. This study investigates whether obesity or sarcopenia has a greater impact on mortality in patients with severe respiratory infections. The study focuses on the National Pandemic Cohort Network (NAPKON-SUEP) cohort, which includes patients over 60 years of age with confirmed severe COVID-19 pneumonia. An innovative approach was adopted, using pre-trained deep learning models for automated analysis of body composition based on routine thoracic CT scans.

METHODS: The study included 157 hospitalized patients (mean age 70 ± 8 years, 41% women, mortality rate 39%) from the NAPKON-SUEP cohort at 57 study sites. A pre-trained deep learning model was used to analyze body composition (muscle, bone, fat, and intramuscular fat volumes) from thoracic CT images of the NAPKON-SUEP cohort. Binary logistic regression was performed to investigate the association between obesity, sarcopenia, and mortality.

RESULTS: Non-survivors exhibited lower muscle volume (p = 0.043), higher intramuscular fat volume (p = 0.041), and a higher BMI (p = 0.031) compared to survivors. Among all body composition parameters, muscle volume adjusted for weight was the strongest predictor of mortality in the logistic regression model, even after adjusting for factors such as sex, age, diabetes, chronic lung disease, and chronic kidney disease (odds ratio = 0.516). In contrast, BMI did not show significant differences after adjustment for comorbidities.

CONCLUSION: This study identifies muscle volume derived from routine CT scans as a major predictor of survival in patients with severe respiratory infections. The results underscore the potential of AI-supported, CT-based body composition analysis for risk stratification and clinical decision-making, not only for COVID-19 patients but for all patients over 60 years of age with severe acute respiratory infections. The innovative application of pre-trained deep learning models opens up new possibilities for automated and standardized assessment in clinical practice.

PMID:40377852 | DOI:10.1007/s15010-025-02555-3

Categories: Literature Watch

Development and validation of clinical-radiomics deep learning model based on MRI for endometrial cancer molecular subtypes classification

15 hours 35 min ago

Insights Imaging. 2025 May 16;16(1):107. doi: 10.1186/s13244-025-01966-y.

ABSTRACT

OBJECTIVES: This study aimed to develop and validate a clinical-radiomics deep learning (DL) model based on MRI for endometrial cancer (EC) molecular subtypes classification.

METHODS: This multicenter retrospective study included EC patients undergoing surgery, MRI, and molecular pathology diagnosis across three institutions from January 2020 to March 2024. Patients were divided into training, internal, and external validation cohorts. A total of 386 handcrafted radiomics features were extracted from each MR sequence, and MoCo-v2 was employed for contrastive self-supervised learning to extract 2048 DL features per patient. After feature selection, the retained features were fed into 12 machine learning methods. Model performance was evaluated with the AUC.

RESULTS: A total of 526 patients were included (mean age 55.01 ± 11.07 years). The radiomics model and clinical model demonstrated comparable performance across the internal and external validation cohorts, with macro-average AUCs of 0.70 vs 0.69 and 0.70 vs 0.67 (p = 0.51), respectively. The radiomics DL model, compared to the radiomics model, improved AUCs for POLEmut (0.68 vs 0.79), NSMP (0.71 vs 0.74), and p53abn (0.76 vs 0.78) in the internal validation (p = 0.08). The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model (macro-average AUC = 0.79 vs 0.69 and 0.73 in the internal validation [p = 0.02]; 0.74 vs 0.67 and 0.69 in the external validation [p = 0.04]).

CONCLUSIONS: The clinical-radiomics DL model based on MRI effectively distinguished EC molecular subtypes and demonstrated strong potential, with robust validation across multiple centers. Future research should explore larger datasets to further uncover DL's potential.

CRITICAL RELEVANCE STATEMENT: Our clinical-radiomics DL model based on MRI has the potential to distinguish EC molecular subtypes. This insight aids in guiding clinicians in tailoring individualized treatments for EC patients.

KEY POINTS: Accurate classification of EC molecular subtypes is crucial for prognostic risk assessment. The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model. The MRI features exhibited better diagnostic performance for POLEmut and p53abn.

PMID:40377781 | DOI:10.1186/s13244-025-01966-y

Categories: Literature Watch

Geospatial artificial intelligence for detection and mapping of small water bodies in satellite imagery

15 hours 35 min ago

Environ Monit Assess. 2025 May 16;197(6):657. doi: 10.1007/s10661-025-14066-7.

ABSTRACT

Remote sensing (RS) data is extensively used in the observation and management of surface water and the detection of water bodies for studying ecological and hydrological processes. Small waterbodies are often neglected because of their tiny footprint in the image, but being very large in number, they significantly impact the ecosystem. However, the detection of small waterbodies in satellite images is challenging because of their varying sizes and tones. In this work, a geospatial artificial intelligence (GeoAI) approach is proposed to detect small water bodies in RS images and generate a spatial map of them along with area statistics. The proposed approach aims to detect waterbodies of different shapes and sizes, including those with vegetation cover. For this purpose, a deep neural network (DNN) is trained using the Indian Space Research Organization's (ISRO) Cartosat-3 multispectral satellite images, which effectively extracts the boundaries of small water bodies with a mean precision of 0.92 and an overall accuracy of over 96%. A comparative analysis with other popular existing methods using the same data demonstrates the superior performance of the proposed method. The proposed GeoAI approach efficiently generates a map of small water bodies automatically from the input satellite image, which can be utilized for monitoring and management of these micro water resources.

PMID:40377752 | DOI:10.1007/s10661-025-14066-7

Categories: Literature Watch

New approaches to lesion assessment in multiple sclerosis

15 hours 35 min ago

Curr Opin Neurol. 2025 May 19. doi: 10.1097/WCO.0000000000001378. Online ahead of print.

ABSTRACT

PURPOSE OF REVIEW: To summarize recent advancements in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research.

RECENT FINDINGS: Artificial intelligence approaches, particularly deep learning, are revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. Artificial intelligence-based tools now enable automated detection not only of T2-hyperintense white matter lesions, but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling their heterogeneity and clinical relevance.

SUMMARY: Artificial intelligence-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in clinical practice, thus improving MS diagnosis, monitoring, and treatment response assessment. Emerging neuroimaging modalities may help advance the understanding of MS pathophysiology, provide more specific markers of disease progression, and reveal novel potential therapeutic targets.

PMID:40377692 | DOI:10.1097/WCO.0000000000001378

Categories: Literature Watch

Automated CT segmentation for lower extremity tissues in lymphedema evaluation using deep learning

15 hours 35 min ago

Eur Radiol. 2025 May 16. doi: 10.1007/s00330-025-11673-3. Online ahead of print.

ABSTRACT

OBJECTIVES: Clinical assessment of lymphedema, particularly for lymphedema severity and fluid-fibrotic lesions, remains challenging with traditional methods. We aimed to develop and validate a deep learning segmentation tool for automated tissue component analysis in lower extremity CT scans.

MATERIALS AND METHODS: For development datasets, lower extremity CT venography scans were collected in 118 patients with gynecologic cancers for algorithm training. Reference standards were created by segmentation of fat, muscle, and fluid-fibrotic tissue components using 3D slicer. A deep learning model based on the Unet++ architecture with an EfficientNet-B7 encoder was developed and trained. Segmentation accuracy of the deep learning model was validated in an internal validation set (n = 10) and an external validation set (n = 10) using Dice similarity coefficient (DSC) and volumetric similarity (VS). A graphical user interface (GUI) tool was developed for the visualization of the segmentation results.

RESULTS: Our deep learning algorithm achieved high segmentation accuracy. Mean DSCs for each component and all components ranged from 0.945 to 0.999 in the internal validation set and 0.946 to 0.999 in the external validation set. Similar performance was observed in the VS, with mean VSs for all components ranging from 0.97 to 0.999. In volumetric analysis, mean volumes of the entire leg and each component did not differ significantly between reference standard and deep learning measurements (p > 0.05). Our GUI displays lymphedema mapping, highlighting segmented fat, muscle, and fluid-fibrotic components in the entire leg.
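The two overlap metrics reported above can be defined compactly; this is a generic sketch, not the authors' evaluation code. Note that VS compares volumes only, so a spatially shifted mask can score a perfect VS while losing DSC:

```python
import numpy as np

def dice(ref, pred):
    """Dice similarity coefficient (DSC) between two binary masks."""
    ref, pred = ref.astype(bool), pred.astype(bool)
    inter = np.logical_and(ref, pred).sum()
    return 2.0 * inter / (ref.sum() + pred.sum())

def volumetric_similarity(ref, pred):
    """VS = 1 - |V_ref - V_pred| / (V_ref + V_pred); ignores spatial overlap."""
    a, b = ref.astype(bool).sum(), pred.astype(bool).sum()
    return 1.0 - abs(a - b) / (a + b)

ref = np.zeros((10, 10, 10), dtype=bool)
pred = np.zeros_like(ref)
ref[2:8, 2:8, 2:8] = True     # 216-voxel reference component
pred[3:9, 2:8, 2:8] = True    # same volume, shifted by one slice

d = dice(ref, pred)                   # 2*180 / (216 + 216) = 5/6
v = volumetric_similarity(ref, pred)  # volumes match exactly, so VS = 1.0
```

This is why the abstract reports both metrics: DSC penalizes misplacement, while VS isolates pure volume agreement.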

CONCLUSION: Our deep learning algorithm provides an automated segmentation tool enabling accurate segmentation, volume measurement of tissue components, and lymphedema mapping.

KEY POINTS: Question Clinical assessment of lymphedema remains challenging, particularly for tissue segmentation and quantitative severity evaluation. Findings A deep learning algorithm achieved DSCs > 0.95 and VS > 0.97 for fat, muscle, and fluid-fibrotic components in internal and external validation datasets. Clinical relevance The developed deep learning tool accurately segments and quantifies lower extremity tissue components on CT scans, enabling automated lymphedema evaluation and mapping with high segmentation accuracy.

PMID:40377677 | DOI:10.1007/s00330-025-11673-3

Categories: Literature Watch

Development of a Deep Learning-Based System for Supporting Medical Decision-Making in PI-RADS Score Determination

15 hours 35 min ago

Urologiia. 2024 Dec;(6):5-11.

ABSTRACT

AIM: to explore the development of a computer-aided diagnosis (CAD) system based on deep learning (DL) neural networks aimed at minimizing human error in PI-RADS grading and supporting medical decision-making.

MATERIALS AND METHODS: This retrospective multicenter study included a cohort of 136 patients, comprising 108 cases of prostate cancer (PCa; PI-RADS score 4-5) and 28 cases of benign conditions (PI-RADS score 1-2). The 3D U-Net architecture was applied to process T2-weighted images (T2W), diffusion-weighted images (DWI), and dynamic contrast-enhanced images (DCE). Statistical analysis was conducted using Python libraries to assess diagnostic performance, including sensitivity, specificity, Dice similarity coefficients, and the area under the receiver operating characteristic curve (AUC).

RESULTS: The DL-CAD system achieved an average accuracy of 78%, sensitivity of 60%, and specificity of 84% for detecting lesions in the prostate. The Dice similarity coefficient for prostate segmentation was 0.71, and the AUC was 81.16%. The system demonstrated high specificity in reducing false-positive results, which, after further optimization, could help minimize unnecessary biopsies and overtreatment.

CONCLUSION: The DL-CAD system shows potential in supporting clinical decision-making for patients with clinically significant PCa by improving diagnostic accuracy, particularly in minimizing intra- and inter-observer variability. Despite its high specificity, improvements in sensitivity and segmentation accuracy are needed, which could be achieved by using larger datasets and advanced deep learning techniques. Further multicenter validation is required for accelerated integration of this system into clinical practice.

PMID:40377545

Categories: Literature Watch

Accuracy and Reliability of Multimodal Imaging in Diagnosing Knee Sports Injuries

15 hours 35 min ago

Curr Med Imaging. 2025 May 15. doi: 10.2174/0115734056360665250506115221. Online ahead of print.

ABSTRACT

BACKGROUND: Due to differences in doctors' subjective experience and professional level, as well as inconsistent diagnostic criteria, the accuracy and reliability of single-modality imaging diagnosis of knee joint injuries are limited.

OBJECTIVE: To address these issues, magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) are combined in this article for ensemble learning, together with deep learning (DL) for automated analysis.

METHODS: Through steps such as image enhancement, noise elimination, and tissue segmentation, the quality of the image data is improved, and convolutional neural networks (CNNs) are then used to automatically identify and classify injury types. The experimental results show that the DL model exhibits high sensitivity and specificity in the diagnosis of different types of injuries, such as anterior cruciate ligament tear, meniscus injury, cartilage injury, and fracture.

RESULTS: The diagnostic accuracy of anterior cruciate ligament tear exceeds 90%, and the highest diagnostic accuracy of cartilage injury reaches 95.80%. In addition, compared with traditional manual image interpretation, the DL model has significant advantages in time efficiency, with a significant reduction in average interpretation time per case. The diagnostic consistency experiment shows that the DL model has high consistency with doctors' diagnosis results, with an overall error rate of less than 2%.

CONCLUSION: The model has high accuracy and strong generalization ability when dealing with different types of joint injuries. These data indicate that combining multiple imaging technologies and the DL algorithm can effectively improve the accuracy and efficiency of diagnosing sports injuries of knee joints.

PMID:40377156 | DOI:10.2174/0115734056360665250506115221

Categories: Literature Watch

ASOptimizer: optimizing chemical diversity of antisense oligonucleotides through deep learning

15 hours 35 min ago

Nucleic Acids Res. 2025 May 16:gkaf392. doi: 10.1093/nar/gkaf392. Online ahead of print.

ABSTRACT

Antisense oligonucleotides (ASOs) are a promising class of gene therapies that can modulate gene expression. However, designing ASOs manually is resource-intensive and time-consuming. To address this, we introduce a user-friendly web server for ASOptimizer, a deep learning-based computational framework for optimizing ASO sequences and chemical modifications. Given a user-provided ASO sequence, the web server systematically explores modification sites within the nucleic acid and returns a ranked list of promising modification patterns. With an intuitive interface requiring no expertise in deep learning tools, the platform makes ASOptimizer easily accessible to the broader research community. The web server is freely available at https://asoptimizer.s-core.ai/.

PMID:40377084 | DOI:10.1093/nar/gkaf392

Categories: Literature Watch

Construction of Sonosensitizer-Drug Co-Assembly Based on Deep Learning Method

15 hours 35 min ago

Small. 2025 May 16:e2502328. doi: 10.1002/smll.202502328. Online ahead of print.

ABSTRACT

Drug co-assemblies have attracted extensive attention due to their advantages of easy preparation, adjustable performance, and co-delivery of drug components. However, the lack of a clear and rational co-assembly strategy has hindered the wide application and promotion of drug co-assembly. This paper introduces a deep learning-based sonosensitizer-drug interaction (SDI) model to predict the particle size of the drug mixture. To analyze the factors influencing the particle size after mixing, a graph neural network is employed to capture the atomic, bond, and structural features of the molecules. A multi-scale cross-attention mechanism is designed to integrate the feature representations of different-scale substructures of the two drugs, which not only improves prediction accuracy but also allows analysis of the impact of molecular structures on the predictions. Ablation experiments evaluate the impact of molecular properties, and comparisons with other machine and deep learning methods show the model's superiority, achieving 90.00% precision, 96.00% recall, and a 91.67% F1-score. Furthermore, the SDI model predicts the co-assembly of the chemotherapy drug methotrexate (MET) and the sonosensitizer emodin (EMO) to form the nanomedicine NanoME. This prediction is further validated through experiments, demonstrating that NanoME can be used for fluorescence imaging of liver cancer and sonodynamic/chemotherapy anticancer therapy.

PMID:40376918 | DOI:10.1002/smll.202502328

Categories: Literature Watch

YOLOv8 framework for COVID-19 and pneumonia detection using synthetic image augmentation

15 hours 35 min ago

Digit Health. 2025 May 14;11:20552076251341092. doi: 10.1177/20552076251341092. eCollection 2025 Jan-Dec.

ABSTRACT

OBJECTIVE: Early and accurate detection of COVID-19 and pneumonia through medical imaging is critical for effective patient management. This study aims to develop a robust framework that integrates synthetic image augmentation with advanced deep learning (DL) models to address dataset imbalance, improve diagnostic accuracy, and enhance trust in artificial intelligence (AI)-driven diagnoses through Explainable AI (XAI) techniques.

METHODS: The proposed framework benchmarks state-of-the-art models (InceptionV3, DenseNet, ResNet) for initial performance evaluation. Synthetic images are generated using Feature Interpolation through Linear Mapping and principal component analysis to enrich dataset diversity and balance class distribution. YOLOv8 and InceptionV3 models, fine-tuned via transfer learning, are trained on the augmented dataset. Grad-CAM is used for model explainability, while large language models (LLMs) support visualization analysis to enhance interpretability.
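PCA-based synthetic augmentation in the spirit described above can be sketched by interpolating between real samples in a PCA latent space; the exact Feature Interpolation through Linear Mapping procedure may differ, and the image data below are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 40 flattened 64x64 grayscale images of a hypothetical minority class.
minority = rng.random((40, 64 * 64))

# Project the class onto a low-dimensional PCA latent space.
pca = PCA(n_components=20).fit(minority)
Z = pca.transform(minority)

# Linearly interpolate between random pairs of latent codes.
i = rng.integers(0, len(Z), 50)
j = rng.integers(0, len(Z), 50)
alpha = rng.uniform(0.2, 0.8, size=(50, 1))
Z_new = alpha * Z[i] + (1 - alpha) * Z[j]

# Map back to pixel space: 50 synthetic images for the minority class.
synthetic = pca.inverse_transform(Z_new)
```

Because interpolation stays on the line between two real samples in the latent space, the synthetic images remain plausible members of the class, which is what lets this kind of augmentation rebalance skewed datasets.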

RESULTS: YOLOv8 achieved superior performance with 97% accuracy, precision, recall, and F1-score, outperforming benchmark models. Synthetic data generation effectively reduced class imbalance and improved recall for underrepresented classes. Comparative analysis demonstrated significant advancements over existing methodologies. XAI visualizations (Grad-CAM heatmaps) highlighted anatomically plausible focus areas aligned with clinical markers of COVID-19 and pneumonia, thereby validating the model's decision-making process.

CONCLUSION: The integration of synthetic data generation, advanced DL, and XAI significantly enhances the detection of COVID-19 and pneumonia while fostering trust in AI systems. YOLOv8's high accuracy, coupled with interpretable Grad-CAM visualizations and LLM-driven analysis, promotes transparency crucial for clinical adoption. Future research will focus on developing a clinically viable, human-in-the-loop diagnostic workflow, further optimizing performance through the integration of transformer-based language models to improve interpretability and decision-making.

PMID:40376574 | PMC:PMC12078974 | DOI:10.1177/20552076251341092

Categories: Literature Watch

Neurovision: A deep learning driven web application for brain tumour detection using weight-aware decision approach

15 hours 35 min ago

Digit Health. 2025 May 14;11:20552076251333195. doi: 10.1177/20552076251333195. eCollection 2025 Jan-Dec.

ABSTRACT

Appropriate diagnosis of brain tumours is a crucial task in modern medical systems, yet identifying a potential brain tumour is challenging owing to the complex behaviour and structure of the human brain. To address this issue, a deep learning-driven framework comprising four pre-trained models, namely DenseNet169, VGG-19, Xception, and EfficientNetV2B2, is developed to classify potential brain tumours from magnetic resonance images. First, the deep learning models are trained and fine-tuned on the training dataset, and the validation scores obtained for the trained models are used as model-wise weights. The trained models are then evaluated on the test dataset to generate model-specific predictions. In the weight-aware decision module, the class bucket of a probable output class is incremented by the weights of the deep models whose predictions match that class. Finally, the bucket with the highest aggregated value is selected as the final output class for the input image. This novel weight-aware decision mechanism is a key feature of the framework: it resolves tie situations in multi-class classification that conventional majority-voting techniques cannot. The framework obtained promising accuracies of 98.7%, 97.52%, and 94.94% on three different datasets, and it is seamlessly integrated into an end-to-end web application for user convenience. The source code, dataset, and other particulars are publicly released at https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app [Rishik Sai Santhosh, "Brain Tumour Image Classification Application"] for academic, research, and other non-commercial usage.
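The weight-aware decision module lends itself to a short sketch: each model's validation score is its weight, every predicted class accumulates the weights of the models that chose it, and the heaviest bucket wins. Model names match the abstract; the scores, class labels, and function name below are illustrative, not taken from the paper.

```python
from collections import defaultdict

def weight_aware_decision(predictions, weights):
    """Weighted ensemble vote: sum each model's validation-score weight
    into the bucket of its predicted class, then return the heaviest
    bucket. Breaks ties that plain majority voting cannot."""
    buckets = defaultdict(float)
    for model, cls in predictions.items():
        buckets[cls] += weights[model]
    return max(buckets, key=buckets.get)

# A 2-2 tie under majority voting, resolved by validation-score weights.
preds = {"DenseNet169": "glioma", "VGG-19": "glioma",
         "Xception": "meningioma", "EfficientNetV2B2": "meningioma"}
val_scores = {"DenseNet169": 0.95, "VGG-19": 0.93,
              "Xception": 0.97, "EfficientNetV2B2": 0.96}
print(weight_aware_decision(preds, val_scores))  # meningioma (1.93 vs 1.88)
```

With equal weights this example would be undecidable by majority vote, which is exactly the tie situation the framework's mechanism is designed to handle.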

PMID:40376570 | PMC:PMC12078957 | DOI:10.1177/20552076251333195

Categories: Literature Watch

The application of ultrasound artificial intelligence in the diagnosis of endometrial diseases: Current practice and future development

15 hours 35 min ago

Digit Health. 2025 May 14;11:20552076241310060. doi: 10.1177/20552076241310060. eCollection 2025 Jan-Dec.

ABSTRACT

Diagnosis and treatment of endometrial diseases are crucial for women's health. Over the past decade, ultrasound has emerged as a non-invasive, safe, and cost-effective imaging tool, significantly contributing to endometrial disease diagnosis and generating extensive datasets. The introduction of artificial intelligence has enabled the application of machine learning and deep learning to extract valuable information from these datasets, enhancing ultrasound diagnostic capabilities. This paper reviews the progress of artificial intelligence in ultrasound image analysis for endometrial diseases, focusing on applications in diagnosis, decision support, and prognosis analysis. We also summarize current research challenges and propose potential solutions and future directions to advance ultrasound artificial intelligence technology in endometrial disease diagnosis, ultimately improving women's health through digital tools.

PMID:40376569 | PMC:PMC12078975 | DOI:10.1177/20552076241310060

Categories: Literature Watch

Making sense of blobs, whorls, and shades: methods for label-free, inverse imaging in bright-field optical microscopy

15 hours 35 min ago

Biophys Rev. 2025 Mar 18;17(2):335-345. doi: 10.1007/s12551-025-01301-1. eCollection 2025 Apr.

ABSTRACT

Despite its long history and widespread use, conventional bright-field optical microscopy has received recent attention as an excellent option to perform accurate, label-free imaging of biological objects. As with any imaging system, bright-field produces an ill-defined representation of the specimen, in this case characterized by intertwined phase and amplitude in image formation, invisibility of phase objects at exact focus, and both positive and negative contrast present in images. These drawbacks have prevented the application of bright-field to the accurate imaging of unlabeled specimens. To address these challenges, a variety of methods using hardware, software, or both have been developed, with the goal of providing solutions to the inverse imaging problem set in bright-field. We review the main operating principles and characteristics of bright-field microscopy, followed by a discussion of the solutions (and potential limitations) to reconstruction in two dimensions (2D). We focus on methods based on conventional optics, including defocusing microscopy, transport of intensity, ptychography, and deconvolution. Advances toward three-dimensional (3D) bright-field imaging are presented, including methods that exploit multi-view reconstruction, physical modeling, deep learning, and conventional digital image processing. Among these techniques, optical sectioning in bright-field microscopy (OSBM) constitutes a direct approach that captures z-image stacks using a standard microscope and applies digital filters in the spatial domain, yielding inverse-imaging solutions in 3D. Finally, additional techniques that expand the capabilities of bright-field are discussed. Label-free, inverse imaging in conventional optical microscopy thus emerges as a powerful biophysical tool for accurate 2D and 3D imaging of biological samples.

PMID:40376420 | PMC:PMC12075049 | DOI:10.1007/s12551-025-01301-1

Categories: Literature Watch

Providing a Prostate Cancer Detection and Prevention Method With Developed Deep Learning Approach

15 hours 35 min ago

Prostate Cancer. 2025 May 8;2025:2019841. doi: 10.1155/proc/2019841. eCollection 2025.

ABSTRACT

Introduction: Prostate cancer is the second most common cancer among men worldwide. It has become especially prominent in Iran, where its incidence in men has risen in recent years, a rise attributed to declining rates of marriage and sexual activity as well as unregulated hormone abuse in sports. Methods: Histopathology images from a treatment center are used to diagnose prostate cancer with the help of deep learning methods built around two characteristics: tiling and Grad-CAM. The approach of this research is to present a prostate cancer diagnosis model that achieves strong performance on histopathology images using a developed deep learning method based on the manifold model. Results: In addition to diagnosing prostate cancer, methods for preventing the disease were examined in the literature review, and after simulation, the factors indicative of prostate cancer were determined. Conclusions: The simulation results indicate that the proposed method outperforms other state-of-the-art methods, with an accuracy of up to 97.41%.
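The tiling characteristic mentioned in the methods is the standard preprocessing step of splitting a large histopathology image into fixed-size patches before feeding a network. A minimal sketch, assuming a non-overlapping grid; the tile size, stride, and function name are illustrative and not from the paper.

```python
import numpy as np

def tile_image(img, tile=256, stride=None):
    """Split a large histopathology image into fixed-size tiles
    (non-overlapping by default). Edge remainders smaller than a
    full tile are dropped in this simple sketch."""
    stride = stride or tile
    h, w = img.shape[:2]
    tiles = [img[y:y + tile, x:x + tile]
             for y in range(0, h - tile + 1, stride)
             for x in range(0, w - tile + 1, stride)]
    return np.stack(tiles)

# Toy example: a 512x512 RGB "slide" yields a 2x2 grid of 256x256 tiles.
slide = np.zeros((512, 512, 3))
print(tile_image(slide).shape)  # (4, 256, 256, 3)
```

Each tile is then classified independently, and Grad-CAM heatmaps can be computed per tile to localize the regions driving the model's decision.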

PMID:40376132 | PMC:PMC12081159 | DOI:10.1155/proc/2019841

Categories: Literature Watch
