Deep learning

AI-powered skin spectral imaging enables instant sepsis diagnosis and outcome prediction in critically ill patients

Fri, 2025-07-18 06:00

Sci Adv. 2025 Jul 18;11(29):eadw1968. doi: 10.1126/sciadv.adw1968. Epub 2025 Jul 18.

ABSTRACT

With sepsis remaining a leading cause of mortality, early identification of patients with sepsis and those at high risk of death is a challenge of high socioeconomic importance. Given the potential of hyperspectral imaging (HSI) to monitor microcirculatory alterations, we propose a deep learning approach to automated sepsis diagnosis and mortality prediction using a single HSI cube acquired within seconds. In a prospective observational study, we collected HSI data from the palms and fingers of more than 480 intensive care unit patients. Neural networks applied to HSI measurements predicted sepsis and mortality with areas under the receiver operating characteristic curve (AUROCs) of 0.80 and 0.72, respectively. Performance improved substantially with additional clinical data, reaching AUROCs of 0.94 for sepsis and 0.83 for mortality. We conclude that deep learning-based HSI analysis enables rapid and noninvasive prediction of sepsis and mortality, with potential clinical value for enhancing diagnosis and treatment.
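
As a rough illustration of the modeling setup described above (not the authors' published architecture), the following sketch shows a small PyTorch network that treats the spectral bands of a single HSI cube as input channels and outputs sepsis and mortality probabilities, with optional fusion of clinical variables; the band count, layer sizes, and clinical feature count are assumptions.

```python
# Minimal sketch, assuming an HSI cube arranged as (batch, bands, height, width);
# not the study's model; all sizes are illustrative only.
import torch
import torch.nn as nn

class HSINet(nn.Module):
    def __init__(self, n_bands=100, n_clinical=0):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + n_clinical, 2)  # [sepsis, mortality] logits

    def forward(self, cube, clinical=None):
        feats = self.backbone(cube)
        if clinical is not None:                    # optional clinical-data fusion
            feats = torch.cat([feats, clinical], dim=1)
        return torch.sigmoid(self.head(feats))      # per-task probabilities

model = HSINet(n_bands=100, n_clinical=4)
probs = model(torch.randn(2, 100, 64, 64), torch.randn(2, 4))
print(probs.shape)  # torch.Size([2, 2])
```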

PMID:40680113 | DOI:10.1126/sciadv.adw1968

Categories: Literature Watch

Fault analysis of chemical equipment based on an improved hybrid model

Fri, 2025-07-18 06:00

PLoS One. 2025 Jul 18;20(7):e0326370. doi: 10.1371/journal.pone.0326370. eCollection 2025.

ABSTRACT

The safety and reliability of chemical equipment are crucial to industrial production, as they directly impact production efficiency, environmental protection, and personnel safety. However, traditional fault detection techniques often exhibit limitations when applied to the complex operational conditions, varying environmental factors, and multimodal data encountered in chemical equipment. These conventional methods typically rely on a single signal source or shallow feature extraction, which makes it difficult to effectively capture the deep, implicit information within the equipment's operating state. Moreover, their accuracy and robustness are easily compromised when confronted with noisy signals or large, diverse datasets. Therefore, designing an intelligent fault detection method that integrates multimodal data, efficiently extracts deep features, and demonstrates strong generalization capability has become a key challenge in current research. This paper proposes an innovative fault detection method for chemical equipment aimed at improving detection accuracy and efficiency, providing technical support for intelligent and predictive maintenance. The method combines Variational Mode Decomposition (VMD), Least Mean Squares (LMS) processing, an asymmetric attention mechanism, and a pre-activation ResNet-BiGRU model to create an efficient framework for multimodal data fusion and analysis. First, the VMD-LMS process handles complex non-stationary signals, addressing the issue of mode mixing. Next, an asymmetric attention mechanism optimizes the ResNet, enhancing feature representation capabilities through deep learning. The pre-activation mechanism introduced in the residual blocks of ResNet improves training efficiency and model stability. Subsequently, the BiGRU model is used to model the extracted features in the time domain, capturing complex temporal dependencies. Experimental results demonstrate that the proposed method performs exceptionally well in chemical equipment fault detection, significantly enhancing diagnostic timeliness and reliability, achieving a classification accuracy of 99.78%, and providing an effective fault detection solution for industrial production.
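
A minimal sketch of the ResNet-BiGRU pairing named above, assuming a 1D signal that has already been denoised by the VMD-LMS stage; the asymmetric attention mechanism is omitted, and all channel sizes and class counts are illustrative rather than the paper's configuration.

```python
# Pre-activation 1D residual blocks feeding a bidirectional GRU (a sketch under
# the assumptions stated in the lead-in, not the paper's exact model).
import torch
import torch.nn as nn

class PreActResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # "pre-activation": BatchNorm + ReLU come before each convolution
        self.body = nn.Sequential(
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity shortcut

class ResNetBiGRU(nn.Module):
    def __init__(self, in_channels=1, channels=32, n_classes=10):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, channels, 7, padding=3)
        self.res = nn.Sequential(PreActResBlock1d(channels), PreActResBlock1d(channels))
        self.gru = nn.GRU(channels, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, 1, time)
        h = self.res(self.stem(x))             # (batch, channels, time)
        out, _ = self.gru(h.transpose(1, 2))   # (batch, time, 128)
        return self.fc(out[:, -1])             # fault-class logits from last step

logits = ResNetBiGRU()(torch.randn(4, 1, 256))
print(logits.shape)  # torch.Size([4, 10])
```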

PMID:40680025 | DOI:10.1371/journal.pone.0326370

Categories: Literature Watch

Automated identification of sedimentary structures in core images using object detection algorithms

Fri, 2025-07-18 06:00

PLoS One. 2025 Jul 18;20(7):e0327738. doi: 10.1371/journal.pone.0327738. eCollection 2025.

ABSTRACT

Manual interpretation of sedimentary structures in core-based analyses is critical for understanding subsurface geology but remains time-intensive, expert-dependent, and susceptible to bias. This study investigates the use of convolutional neural networks (CNNs) to automate structure identification in core images, focusing on siliciclastic deposits from deltaic, shoreface, fluvial, and lacustrine environments. Two object detection models, YOLOv4 and Faster R-CNN, were trained on annotated datasets comprising 15 sedimentary structure types. YOLOv4 achieved high precision (up to 95%) with faster training and shorter inference times (3.2 s/image) compared to Faster R-CNN (2.5 s/image) under consistent batch size and hardware conditions. Although Faster R-CNN reached a higher mean average precision (94.44%), it exhibited lower recall, particularly for frequently occurring structures. Both models faced challenges in distinguishing morphologically similar features, such as mud drapes and bioturbated media. Performance declined slightly in tests involving previously unseen datasets (Split III), indicating limitations in generalization across varied core imagery. Despite these challenges, the results demonstrate the promise of deep learning for streamlining core interpretation, reducing manual effort, and enhancing reproducibility. This study establishes a robust framework for advancing automated facies analysis in sedimentological research and geoscientific applications.
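
For readers who want a concrete starting point, the sketch below adapts torchvision's off-the-shelf Faster R-CNN head to 15 structure classes; this reflects an assumption about the general setup (torchvision ≥ 0.13), not the authors' training pipeline, dataset, or weights.

```python
# Swapping the box predictor of a pretrained Faster R-CNN for a 15-class problem
# (15 sedimentary structure types + background), then running dummy inference.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 15 + 1  # 15 structure types + background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    # one dummy core-image tensor (C, H, W) scaled to [0, 1]
    detections = model([torch.rand(3, 512, 512)])
print(detections[0]["boxes"].shape, detections[0]["labels"].shape)
```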

PMID:40680021 | DOI:10.1371/journal.pone.0327738

Categories: Literature Watch

Synergistic fusion: An integrated pipeline of CLAHE, YOLO models, and advanced super-resolution for enhanced thermal eye detection

Fri, 2025-07-18 06:00

PLoS One. 2025 Jul 18;20(7):e0328227. doi: 10.1371/journal.pone.0328227. eCollection 2025.

ABSTRACT

Accurate eye detection in thermal images is essential for diverse applications, including biometrics, healthcare, driver monitoring, and human-computer interaction. However, achieving this accuracy is often hindered by the inherent limitations of thermal data, such as low resolution and poor contrast. This work addresses these challenges by proposing a novel, multifaceted approach that combines both deep learning and image processing techniques. We first introduce a unique dataset of thermal facial images captured with meticulous eye location annotations. To improve image clarity, we employ Contrast Limited Adaptive Histogram Equalization (CLAHE). Subsequently, we explore the effectiveness of advanced YOLO models (YOLOv8 and YOLOv9) for accurate eye detection. Our experiments reveal that YOLOv8 with CLAHE-enhanced images achieved the highest accuracy (precision and recall of 1, mAP50 of 0.995, and mAP50-95 of 0.801); the YOLOv9 model also demonstrated excellent performance with a precision of 0.998, recall of 0.998, mAP50 of 0.995, and mAP50-95 of 0.753. Furthermore, to enhance the resolution of detected eye regions, we investigate various super-resolution techniques, ranging from traditional methods like Bicubic interpolation to cutting-edge approaches like generative adversarial networks (BSRGAN, ESRGAN) and advanced models like Real-ESRGAN, SwinIR, and SwinIR-Large with ResShift. The performance of these techniques is evaluated using both objective and subjective quality measures. Overall, this work demonstrates the effectiveness of our proposed pipeline, which seamlessly integrates image enhancement, deep learning, and super-resolution techniques. This synergistic fusion significantly improves the contrast, accuracy of eye detection, and overall resolution of thermal images, paving the way for potential applications across various fields.
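
A minimal sketch of the CLAHE step with OpenCV, assuming an 8-bit single-channel thermal image; the clip limit, tile size, and file names are illustrative values, not the study's settings.

```python
# Contrast Limited Adaptive Histogram Equalization before detection.
import cv2

thermal = cv2.imread("thermal_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
assert thermal is not None, "image not found"

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(thermal)
cv2.imwrite("thermal_face_clahe.png", enhanced)

# The enhanced image would then be passed to a trained YOLOv8/YOLOv9 detector,
# e.g. via the ultralytics package: YOLO("eye_detector.pt").predict(enhanced)
```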

PMID:40679961 | DOI:10.1371/journal.pone.0328227

Categories: Literature Watch

Transfer Learning for Predicting ncRNA-Protein Interactions

Fri, 2025-07-18 06:00

J Chem Inf Model. 2025 Jul 18. doi: 10.1021/acs.jcim.5c00914. Online ahead of print.

ABSTRACT

Noncoding RNAs (ncRNAs) interact with proteins, playing a crucial role in regulating gene expression and cellular functions. Accurate prediction of these interactions is essential for understanding biological processes and developing novel therapeutic agents. However, identifying ncRNA-protein interactions (ncRPI) through experimental methods is often costly and time-consuming. Although numerous machine learning and deep learning approaches have been developed for ncRPI prediction, their accuracy is often limited by the small size of available data sets. To address this challenge, we present Transfer-RPI, a transfer learning-based framework designed to enhance generalization and improve prediction performance through deep feature learning. Transfer-RPI leverages the RiNALMo and ESM models to extract comprehensive features from RNA and protein sequences, respectively. By integrating these rich and informative feature sets, Transfer-RPI fine-tunes the embedded complex interaction patterns, thereby enhancing performance even when trained on small data sets. Our results demonstrate that deep learning architectures augmented with intricate feature representations and transfer learning significantly boost prediction accuracy. Under 5-fold cross-validation, Transfer-RPI outperforms existing methods, achieving accuracies of 80.1, 89.3, 94.3, 94.4, and 95.4% on the RPI369, RPI488, RPI1807, RPI2241, and NPInter v2.0 data sets, respectively. These findings highlight the potential of transfer learning to overcome data limitations and enhance prediction performance. By harnessing advanced feature representations, Transfer-RPI offers a powerful tool for uncovering ncRPI, paving the way for deeper insights into molecular biology and novel therapeutic innovations.
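
A minimal sketch of the fusion idea, assuming the RNA and protein embeddings have already been extracted with the respective language models; the embedding dimensions and the head layout are placeholders, not the released Transfer-RPI code.

```python
# Concatenate precomputed ncRNA and protein embeddings and train a small
# classification head on the interaction label (sketch only).
import torch
import torch.nn as nn

class RPIHead(nn.Module):
    def __init__(self, rna_dim=640, prot_dim=1280):   # dims are assumptions
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(rna_dim + prot_dim, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, 1),
        )

    def forward(self, rna_emb, prot_emb):
        return self.mlp(torch.cat([rna_emb, prot_emb], dim=1))  # interaction logit

# hypothetical embeddings for a batch of 8 ncRNA-protein pairs
rna_emb, prot_emb = torch.randn(8, 640), torch.randn(8, 1280)
labels = torch.ones(8)                                  # 1 = interacting pair
head = RPIHead()
loss = nn.BCEWithLogitsLoss()(head(rna_emb, prot_emb).squeeze(1), labels)
loss.backward()
```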

PMID:40679953 | DOI:10.1021/acs.jcim.5c00914

Categories: Literature Watch

AI Prognostication in Nonsmall Cell Lung Cancer: A Systematic Review

Fri, 2025-07-18 06:00

Am J Clin Oncol. 2025 Jul 18. doi: 10.1097/COC.0000000000001238. Online ahead of print.

ABSTRACT

A systematic literature review was performed on the use of artificial intelligence (AI) algorithms in nonsmall cell lung cancer (NSCLC) prognostication. Studies were evaluated for the type of input data (histology and whether CT, PET, and MRI were used), cancer therapy intervention, prognosis performance, and comparisons to clinical prognosis systems such as TNM staging. Further comparisons were drawn between different types of AI, such as machine learning (ML) and deep learning (DL). Syntheses of therapeutic interventions and algorithm input modalities were performed for comparison purposes. The review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The initial database search identified 3880 results, which were reduced to 513 after automatic screening and to 309 after applying the exclusion criteria. The prognostic performance of AI for NSCLC has been investigated using histology and genetic data, and CT, PET, and MR imaging for surgery, immunotherapy, and radiation therapy patients with and without chemotherapy. Studies per therapy intervention were 13 for immunotherapy, 10 for radiotherapy, 14 for surgery, and 34 for other, multiple, or no specific therapy. The results of this systematic review demonstrate that AI-based prognostication methods consistently achieve higher prognostic performance for NSCLC, especially when directly compared with traditional prognostication techniques such as TNM staging. The use of DL outperforms ML-based prognostication techniques. DL-based prognostication demonstrates the potential for personalized precision cancer therapy as a supplementary decision-making tool. Before it is fully utilized in clinical practice, it is recommended that it be thoroughly validated through well-designed clinical trials.

PMID:40679809 | DOI:10.1097/COC.0000000000001238

Categories: Literature Watch

Deep learning-based automatic detection of pancreatic ductal adenocarcinoma ≤ 2 cm with high-resolution computed tomography: impact of the combination of tumor mass detection and indirect indicator evaluation

Fri, 2025-07-18 06:00

Jpn J Radiol. 2025 Jul 18. doi: 10.1007/s11604-025-01836-z. Online ahead of print.

ABSTRACT

PURPOSE: Detecting small pancreatic ductal adenocarcinomas (PDAC) is challenging because they are difficult to identify as distinct tumor masses. This study assesses the diagnostic performance of a three-dimensional convolutional neural network for the automatic detection of small PDAC using both automatic tumor mass detection and indirect indicator evaluation.

MATERIALS AND METHODS: High-resolution contrast-enhanced computed tomography (CT) scans from 181 patients diagnosed with PDAC (diameter ≤ 2 cm) between January 2018 and December 2023 were analyzed. The D/P ratio, defined as the ratio of the cross-sectional area of the main pancreatic duct (MPD) to that of the pancreatic parenchyma, was used as an indirect indicator. A total of 204 patient data sets including 104 normal controls were analyzed for automatic tumor mass detection and D/P ratio evaluation. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were evaluated for tumor mass detection. The sensitivity of the software for PDAC detection was compared with that of radiologists, and tumor localization accuracy was validated against endoscopic ultrasonography (EUS) findings.
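
A minimal sketch of the D/P ratio as defined above, assuming binary segmentation masks of the main pancreatic duct and the pancreatic parenchyma on a single axial slice; the mask shapes and pixel spacing are illustrative.

```python
# D/P ratio: duct cross-sectional area divided by parenchyma cross-sectional area.
import numpy as np

def dp_ratio(duct_mask: np.ndarray, parenchyma_mask: np.ndarray,
             pixel_area_mm2: float = 1.0) -> float:
    duct_area = duct_mask.sum() * pixel_area_mm2
    parenchyma_area = parenchyma_mask.sum() * pixel_area_mm2
    return duct_area / parenchyma_area if parenchyma_area > 0 else float("nan")

# toy masks standing in for per-slice segmentations
duct = np.zeros((128, 128), dtype=bool); duct[60:64, 60:70] = True
parenchyma = np.zeros((128, 128), dtype=bool); parenchyma[40:90, 40:100] = True
print(f"D/P ratio: {dp_ratio(duct, parenchyma):.3f}")
```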

RESULTS: The sensitivity, specificity, PPV, and NPV for tumor mass detection were 77.0%, 76.0%, 75.5%, and 77.5%, respectively; for D/P ratio detection, 87.0%, 94.2%, 93.5%, and 88.3%, respectively; and for combined tumor mass and D/P ratio detections, 96.0%, 70.2%, 75.6%, and 94.8%, respectively. No significant difference was observed between the software's sensitivity and that of the radiologist's report (software, 96.0%; radiologist, 96.0%; p = 1). The concordance rate between software findings and EUS was 96.0%.

CONCLUSIONS: Combining indirect indicator evaluation with tumor mass detection may improve small PDAC detection accuracy.

PMID:40679757 | DOI:10.1007/s11604-025-01836-z

Categories: Literature Watch

Deep learning reconstruction enhances image quality in contrast-enhanced CT venography for deep vein thrombosis

Fri, 2025-07-18 06:00

Emerg Radiol. 2025 Jul 18. doi: 10.1007/s10140-025-02366-x. Online ahead of print.

ABSTRACT

PURPOSE: This study aimed to evaluate and compare the diagnostic performance and image quality of deep learning reconstruction (DLR) with hybrid iterative reconstruction (Hybrid IR) and filtered back projection (FBP) in contrast-enhanced CT venography for deep vein thrombosis (DVT).

METHODS: A retrospective analysis was conducted on 51 patients who underwent lower limb CT venography, including 20 with DVT lesions and 31 without DVT lesions. CT images were reconstructed using DLR, Hybrid IR, and FBP. Quantitative image quality metrics, such as contrast-to-noise ratio (CNR) and image noise, were measured. Three radiologists independently assessed DVT lesion detection, depiction of DVT lesions and normal structures, subjective image noise, artifacts, and overall image quality using scoring systems. Diagnostic performance was evaluated using sensitivity and area under the receiver operating characteristic curve (AUC). The paired t-test and Wilcoxon signed-rank test compared the results for continuous variables and ordinal scales, respectively, between DLR and Hybrid IR as well as between DLR and FBP.
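
A minimal sketch of the quantitative comparison described here, under assumptions about how the ROI statistics are obtained: CNR from vessel and background ROIs, a paired t-test for the continuous metric, and a Wilcoxon signed-rank test for ordinal scores, all with SciPy; the numbers are invented.

```python
import numpy as np
from scipy import stats

def cnr(roi_vessel: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio from pixel values of two regions of interest."""
    return abs(roi_vessel.mean() - roi_background.mean()) / roi_background.std()

# hypothetical per-patient CNR values for DLR vs. Hybrid IR reconstructions
cnr_dlr = np.array([12.1, 10.8, 13.5, 11.9, 12.7])
cnr_hybrid = np.array([9.4, 8.9, 10.2, 9.8, 10.1])
t_stat, p_cont = stats.ttest_rel(cnr_dlr, cnr_hybrid)       # continuous metric

# hypothetical paired ordinal image-quality scores (DLR vs. Hybrid IR)
scores_dlr = [5, 4, 5, 4, 5, 5, 4, 5, 4, 5]
scores_hybrid = [4, 3, 4, 3, 4, 3, 3, 4, 3, 4]
w_stat, p_ord = stats.wilcoxon(scores_dlr, scores_hybrid)    # ordinal scale
print(p_cont, p_ord)
```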

RESULTS: DLR significantly improved CNR and reduced image noise compared to Hybrid IR and FBP (p < 0.001). AUC and sensitivity for DVT detection were not statistically different across reconstruction methods. Two readers reported improved lesion visualization with DLR. DLR was also rated superior in image quality, normal structure depiction, and noise suppression by all readers (p < 0.001).

CONCLUSIONS: DLR enhances image quality and anatomical clarity in CT venography. These findings support the utility of DLR in improving diagnostic confidence and image interpretability in DVT assessment.

PMID:40679754 | DOI:10.1007/s10140-025-02366-x

Categories: Literature Watch

Deep learning reconstruction for improving image quality of pediatric abdomen MRI using a 3D T1 fast spoiled gradient echo acquisition

Fri, 2025-07-18 06:00

Pediatr Radiol. 2025 Jul 18. doi: 10.1007/s00247-025-06313-3. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning (DL) reconstructions have shown utility for improving image quality of abdominal MRI in adult patients, but a paucity of literature exists in children.

OBJECTIVE: To compare image quality between three-dimensional fast spoiled gradient echo (SPGR) abdominal MRI acquisitions reconstructed conventionally and using a prototype method based on a commercial DL algorithm in a pediatric cohort.

MATERIALS AND METHODS: Pediatric patients (age < 18 years) who underwent abdominal MRI from 10/2023-3/2024 including gadolinium-enhanced accelerated 3D SPGR 2-point Dixon acquisitions (LAVA-Flex, GE HealthCare) were identified. Images were retrospectively generated using a prototype reconstruction method leveraging a commercial deep learning algorithm (AIR™ Recon DL, GE HealthCare) with the 75% noise reduction setting. For each case/reconstruction, three radiologists independently scored DL and non-DL image quality (overall and of selected structures) on a 5-point Likert scale (1-nondiagnostic, 5-excellent) and indicated reconstruction preference. The signal-to-noise ratio (SNR) and mean number of edges (inverse correlate of image sharpness) were also quantified. Image quality metrics and preferences were compared using Wilcoxon signed-rank, Fisher exact, and paired t-tests. Interobserver agreement was evaluated with the Kendall rank correlation coefficient (W).
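
As a small illustration of the interobserver statistic mentioned above (not the study's analysis code), the sketch below computes Kendall's coefficient of concordance (W) from a reader-by-case matrix of Likert scores; ties are ranked arbitrarily for simplicity and the scores are invented.

```python
import numpy as np

def kendalls_w(ratings: np.ndarray) -> float:
    """Kendall's W for a (n_raters, n_cases) matrix of ordinal scores (no tie correction)."""
    m, n = ratings.shape
    ranks = np.apply_along_axis(lambda r: r.argsort().argsort() + 1, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# hypothetical Likert scores from 3 readers over 6 cases
scores = np.array([[5, 4, 4, 3, 5, 4],
                   [5, 4, 3, 3, 5, 4],
                   [4, 4, 4, 3, 5, 5]])
print(f"Kendall's W = {kendalls_w(scores):.2f}")
```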

RESULTS: The final cohort consisted of 38 patients (23 males) with a mean ± standard deviation age of 8.6 ± 5.7 years. Mean image quality scores for evaluated structures ranged from 3.8 ± 1.1 to 4.6 ± 0.6 in the DL group, compared to 3.1 ± 1.1 to 3.9 ± 0.6 in the non-DL group (all P < 0.001). All radiologists preferred DL in most cases (32-37/38, P < 0.001). DL images showed a 2.3-fold increase in SNR and a 3.9% reduction in the mean number of edges compared to non-DL images (both P < 0.001). In all scored anatomic structures except the spine and non-DL adrenals, interobserver agreement was moderate to substantial (W = 0.41-0.74, all P < 0.01).

CONCLUSION: In a broad spectrum of pediatric patients undergoing contrast-enhanced Dixon abdominal MRI acquisitions, the prototype deep learning reconstruction is generally preferred over conventional reconstruction, with improved image quality across a wide range of structures.

PMID:40679617 | DOI:10.1007/s00247-025-06313-3

Categories: Literature Watch

Investigating brain tumor classification using MRI: a scientometric analysis of selected articles from 2015 to 2024

Fri, 2025-07-18 06:00

Neuroradiology. 2025 Jul 18. doi: 10.1007/s00234-025-03685-z. Online ahead of print.

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) is a non-invasive method widely used to evaluate abnormal tissues, especially in the brain. While many studies have examined brain tumor classification using MRI, a comprehensive scientometric analysis remains limited.

OBJECTIVE: This study aimed to investigate brain tumor classification based on magnetic resonance imaging (MRI), using scientometric approaches, from 2015 to 2024.

METHODS: A total of 348 peer-reviewed articles were extracted from the Scopus database. Tools such as CiteSpace and VOSviewer were employed to analyze key metrics, including citation frequency, author collaboration, and publication trends.

RESULTS: The analysis revealed top authors, top-cited journals, and international collaborations. Co-occurrence networks identified the top research topics and bibliometric coupling revealed knowledge advancements in the domain.

CONCLUSION: Deep learning methods are increasingly used in brain tumor classification research. This study outlines the current trends, uncovers research gaps, and suggests future directions for researchers in the domain of MRI-based brain tumor classification.

PMID:40679613 | DOI:10.1007/s00234-025-03685-z

Categories: Literature Watch

Artificial intelligence-enabled electrocardiography and echocardiography to track preclinical progression of transthyretin amyloid cardiomyopathy

Fri, 2025-07-18 06:00

Eur Heart J. 2025 Jul 18:ehaf450. doi: 10.1093/eurheartj/ehaf450. Online ahead of print.

ABSTRACT

BACKGROUND AND AIMS: The diagnosis of transthyretin amyloid cardiomyopathy (ATTR-CM) requires advanced imaging, precluding large-scale preclinical testing. Artificial intelligence (AI)-enabled transthoracic echocardiography (TTE) and electrocardiography (ECG) may provide a scalable strategy for preclinical monitoring.

METHODS: This was a retrospective analysis of individuals referred for nuclear cardiac amyloid testing at the Yale-New Haven Health System (YNHHS, internal cohort) and Houston Methodist Hospitals (HMH, external cohort). Deep learning models trained to discriminate ATTR-CM from age/sex-matched controls on TTE videos (AI-Echo) and ECG images (AI-ECG) were deployed to generate study-level ATTR-CM probabilities (0%-100%). Longitudinal trends in AI-derived probabilities were examined using age/sex-adjusted linear mixed models, and their discrimination of future disease was evaluated across preclinical stages.

RESULTS: Among 984 participants at YNHHS (median age 74 years, 44.3% female) and 806 at HMH (median age 69 years, 34.5% female), 112 (11.4%) and 174 (21.6%) tested positive for ATTR-CM, respectively. Across cohorts and modalities, AI-derived ATTR-CM probabilities from 7352 TTEs and 32 205 ECGs diverged as early as 3 years before diagnosis in cases vs controls (p for time × group interaction ≤ .004). Among those with both AI-Echo and AI-ECG probabilities available 1 to 3 years before nuclear testing [n = 433 (YNHHS) and 174 (HMH)], a double-negative screen at a 0.05 threshold [164 (37.9%) and 66 (37.9%), vs all else] had 90.9% and 85.7% sensitivity (specificity of 40.3% and 41.2%), whereas a double-positive screen [78 (18.0%) and 26 (14.9%), vs all else] had 85.5% and 88.9% specificity (sensitivity of 60.6% and 42.9%).
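
A minimal sketch of the dual-modality screening rule described here, under assumptions about the data layout: a case counts as double negative when both the AI-Echo and AI-ECG probabilities fall below the 0.05 threshold and as double positive when both reach it; the example values are invented.

```python
import numpy as np

def screen_metrics(p_echo, p_ecg, y_true, thr=0.05):
    p_echo, p_ecg, y = map(np.asarray, (p_echo, p_ecg, y_true))
    double_neg = (p_echo < thr) & (p_ecg < thr)
    double_pos = (p_echo >= thr) & (p_ecg >= thr)
    # sensitivity when everything that is NOT double negative is flagged (rule-out screen)
    sens_rule_out = ((~double_neg) & (y == 1)).sum() / (y == 1).sum()
    # specificity when only double positives are flagged (rule-in screen)
    spec_rule_in = ((~double_pos) & (y == 0)).sum() / (y == 0).sum()
    return sens_rule_out, spec_rule_in

sens, spec = screen_metrics([0.02, 0.60, 0.40], [0.03, 0.70, 0.02], [0, 1, 1])
print(sens, spec)
```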

CONCLUSIONS: Artificial intelligence-enabled echocardiography and electrocardiography may enable scalable risk stratification of ATTR-CM during its preclinical course.

PMID:40679604 | DOI:10.1093/eurheartj/ehaf450

Categories: Literature Watch

Clinical Translation of Integrated PET-MRI for Neurodegenerative Disease

Fri, 2025-07-18 06:00

J Magn Reson Imaging. 2025 Jul 18. doi: 10.1002/jmri.70046. Online ahead of print.

ABSTRACT

The prevalence of Alzheimer's disease and other dementias is increasing as populations live longer lifespans. Imaging is becoming a key component of the workup for patients with cognitive impairment or dementia. Integrated PET-MRI provides a unique opportunity for same-session multimodal characterization with many practical benefits to patients, referring physicians, radiologists, and researchers. The impact of integrated PET-MRI on clinical practice for early adopters of this technology can be profound. Classic imaging findings with integrated PET-MRI are illustrated for common neurodegenerative diseases or clinical-radiological syndromes. This review summarizes recent technical innovations that are being introduced into PET-MRI clinical practice and research for neurodegenerative disease. More recent MRI-based attenuation correction now performs similarly to PET-CT (e.g., whole-brain bias < 0.5%) such that early concerns about accurate PET tracer quantification with integrated PET-MRI appear resolved. Head motion is common in this patient population. MRI- and PET data-driven motion correction appear ready for routine use and should substantially improve PET-MRI image quality. PET-MRI by definition eliminates the radiation dose from CT (roughly 50% of the total). Multiple hardware and software techniques for improving image quality with lower counts are reviewed (including motion correction). These methods can lower radiation to patients (and staff), increase scanner throughput, and generate better temporal resolution for dynamic PET. Deep learning has been broadly applied to PET-MRI. Deep learning analysis of PET and MRI data may provide accurate classification of different stages of Alzheimer's disease or predict progression to dementia. Over the past 5 years, clinical imaging of neurodegenerative disease has changed due to imaging research and the introduction of anti-amyloid immunotherapy; integrated PET-MRI is best suited for imaging these patients, and its use appears poised for rapid growth outside academic medical centers. Evidence level: 5. Technical efficacy: Stage 3.

PMID:40679171 | DOI:10.1002/jmri.70046

Categories: Literature Watch

Evaluation of the False Discovery Rate in Library-Free Search by DIA-NN Using <em>In Vitro</em> Human Proteome

Fri, 2025-07-18 06:00

J Proteome Res. 2025 Jul 18. doi: 10.1021/acs.jproteome.5c00036. Online ahead of print.

ABSTRACT

Recently, deep-learning-based in silico spectral libraries have gained increasing attention. Several data-independent acquisition (DIA) software tools have integrated this feature, known as a library-free search, thereby making DIA analysis more accessible. However, controlling the false discovery rate (FDR) is challenging owing to the vast amount of peptide information in in silico libraries. In this study, we introduced a stringent method to evaluate FDR control using DIA software. Recombinant proteins were synthesized from full-length human cDNA libraries and analyzed by using liquid chromatography-mass spectrometry and DIA software. The results were compared with known protein sequences to calculate the FDR. Notably, we compared the identification performance of DIA-NN versions 1.8.1, 1.9.2, and 2.1.0. Versions 1.9.2 and 2.1.0 identified more peptides than version 1.8.1, and versions 1.9.2 and 2.1.0 used a more conservative identification approach, thus significantly improving the FDR control. Across the synthesized recombinant protein mixtures, the average FDR at the precursor level was 0.538% for version 1.8.1, 0.389% for version 1.9.2, and 0.385% for version 2.1.0; at the protein level, the FDRs were 2.85%, 1.81%, and 1.81%, respectively. Collectively, our data set provides valuable insights for comparing FDR controls across DIA software and aiding bioinformaticians in enhancing their tools.
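
A minimal sketch of the evaluation idea (not the study's scripts): a precursor-level FDR computed by checking each identified peptide against the known sequences of the synthesized recombinant proteins; the sequences and identifications below are invented.

```python
def precursor_fdr(identified_peptides, known_proteins):
    """Fraction of identified peptides not found in any known protein sequence."""
    false_hits = sum(
        not any(pep in prot for prot in known_proteins)
        for pep in identified_peptides
    )
    return false_hits / len(identified_peptides) if identified_peptides else 0.0

known = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MDSKGSSQKGSRLLLLLVVSNLLLCQGVVS"]
ids = ["AYIAKQRQ", "LLLLVVSN", "PEPTIDEX"]  # third one is a false identification
print(f"FDR = {precursor_fdr(ids, known):.2%}")  # 1 of 3 unmatched -> 33.33%
```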

PMID:40679152 | DOI:10.1021/acs.jproteome.5c00036

Categories: Literature Watch

Deep Learning-Based Multimodal Fusion Approach for Predicting Acute Dermal Toxicity

Fri, 2025-07-18 06:00

J Chem Inf Model. 2025 Jul 18. doi: 10.1021/acs.jcim.5c01128. Online ahead of print.

ABSTRACT

Acute dermal toxicity testing is essential for assessing the safety of chemicals used in pharmaceuticals, pesticides, cosmetics, and industrial chemicals. Conventional toxicity testing methods rely significantly on animal tests, which are resource-intensive and time-consuming and raise ethical issues. To address these issues and support the 3Rs principle (replacement, reduction, and refinement) in animal testing, this study investigates whether a multimodal deep learning framework based on the fusion of heterogeneous molecular representations can yield a reliable and accurate model for the prediction of acute dermal toxicity. This study proposes TriModalToxNet, a novel architecture that extracts features from three distinct molecular representations: 2D molecular images through a 2D convolutional neural network, SMILES embeddings via a 1D convolutional neural network, and molecular fingerprints via a fully connected neural network. These extracted features are then concatenated and passed into a deep neural network for classification. For comparative purposes, this study also evaluates BiModalToxNet, a baseline model using only 2D molecular images and fingerprints. The models are trained and tested on a curated data set consisting of 3845 compounds derived from experimental rat and rabbit acute dermal toxicity studies. The proposed model is evaluated using multiple standard performance metrics such as area under the receiver operating characteristic curve, sensitivity, Matthews correlation coefficient, and accuracy derived from stratified 10-fold cross-validation and external validation. TriModalToxNet achieved an area under the receiver operating characteristic curve of 95% and a sensitivity of 91.2% in cross-validation. External validation was also conducted to further demonstrate the robustness and generalizability of the model. These results show that multimodal methods can attain better predictive performance than traditional single-modality methods. This TriModalToxNet framework highlights the potential for integration into regulatory frameworks, pharmaceutical screening pipelines, and advancing the field toward more ethical and efficient chemical safety assessment.
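
A minimal sketch of the three-branch fusion described above, under assumptions about layer sizes and SMILES tokenization (not the published TriModalToxNet configuration): a 2D CNN over molecular images, a 1D CNN over SMILES token embeddings, and a fully connected branch over fingerprints, concatenated and classified by a shared head.

```python
import torch
import torch.nn as nn

class TriModalToxNet(nn.Module):
    def __init__(self, vocab_size=64, fp_dim=2048):
        super().__init__()
        self.img_branch = nn.Sequential(                      # 2D molecular image
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> 32 features
        )
        self.smiles_embed = nn.Embedding(vocab_size, 32)      # SMILES tokens
        self.smiles_branch = nn.Sequential(
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),            # -> 64 features
        )
        self.fp_branch = nn.Sequential(nn.Linear(fp_dim, 128), nn.ReLU())  # -> 128
        self.head = nn.Sequential(nn.Linear(32 + 64 + 128, 64), nn.ReLU(),
                                  nn.Linear(64, 1))           # toxicity logit

    def forward(self, image, smiles_tokens, fingerprint):
        f_img = self.img_branch(image)
        f_smi = self.smiles_branch(self.smiles_embed(smiles_tokens).transpose(1, 2))
        f_fp = self.fp_branch(fingerprint)
        return self.head(torch.cat([f_img, f_smi, f_fp], dim=1))

model = TriModalToxNet()
logit = model(torch.rand(2, 3, 128, 128),           # molecular images
              torch.randint(0, 64, (2, 80)),         # tokenized SMILES
              torch.rand(2, 2048))                    # fingerprints
print(logit.shape)  # torch.Size([2, 1])
```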

PMID:40679078 | DOI:10.1021/acs.jcim.5c01128

Categories: Literature Watch

Artificial intelligence in healthcare

Fri, 2025-07-18 06:00

Klin Mikrobiol Infekc Lek. 2025 Mar;31(1):22-26.

ABSTRACT

Artificial intelligence (AI) is no longer confined to the realm of science fiction; it has become an integral part of many fields, including healthcare. This article provides a concise overview of AI's history, operating principles, and specific applications in medicine, particularly in imaging techniques, medical documentation analysis, and clinical decision support. Although AI offers numerous benefits, such as faster diagnosis and improved predictive accuracy, its use faces significant challenges, including the potential for errors, ethical dilemmas, and the risk of misuse. Successful implementation hinges on rigorous validation, transparency, and integration with expert clinical judgment. Future developments will likely focus on improving algorithm accuracy, strengthening resilience against bias, and ensuring the safe application of AI for patient benefit, all through multidisciplinary collaboration. Keywords: artificial intelligence, machine learning, deep learning, neural networks, healthcare, diagnosis, imaging techniques, data analysis, clinical decision-making, AI ethics, safety, medical informatics.

PMID:40678962

Categories: Literature Watch

Can artificial intelligence in spine imaging affect current practice? Practical developments and their clinical status

Fri, 2025-07-18 06:00

N Am Spine Soc J. 2025 May 27;23:100621. doi: 10.1016/j.xnsj.2025.100621. eCollection 2025 Sep.

ABSTRACT

BACKGROUND: As artificial intelligence (AI) increases its footprint in spine imaging, gauging the clinical relevance of new developments poses an increasingly difficult challenge, especially given that the majority of developments reflect experimental or early-stage work. This summary of available AI tools, focusing on those in clinical use, explains the benefits of AI in spine imaging so that radiologists and surgeons can understand its current state and make informed decisions about adopting AI in clinical practice.

METHODS: Through a narrative review of publications relating to "artificial intelligence" and "spine imaging" in the PubMed database, this article provides an update on AI applications in spine imaging being utilized in current clinical practice.

RESULTS: Current applications of AI in spine imaging include deep learning image reconstruction and denoising, spine segmentation and biometry, radiological report generation, surgical outcome prediction, surgical planning, and intraoperative assistance. Developments in deep learning reconstruction (DLR) are the most mature and demonstrate improvements in imaging speed and interpretability compared to non-AI alternatives. While clinical implementations exist for the other use cases, their performance either remains under active investigation or is comparable to that of a human reader.

CONCLUSIONS: Uses of AI in spine imaging span multiple applications with early clinical implementation in most areas, suggesting a promising future ahead.

PMID:40678684 | PMC:PMC12269973 | DOI:10.1016/j.xnsj.2025.100621

Categories: Literature Watch

Deep learning assisted non-invasive lymph node burden evaluation and CDK4/6i administration in luminal breast cancer

Fri, 2025-07-18 06:00

iScience. 2025 Jun 7;28(7):112849. doi: 10.1016/j.isci.2025.112849. eCollection 2025 Jul 18.

ABSTRACT

Precise lymph node evaluation is fundamental to optimizing CDK4/6 inhibitor (CDK4/6i) therapy in luminal breast cancer, particularly given contemporary trends toward axillary surgery de-escalation that may compromise traditional lymph node staging for recurrence risk evaluation. The lymph node prediction network (LNPN) was developed as a multi-modal model incorporating both clinicopathological parameters and ultrasonographic characteristics for lymph node burden differentiation. In a multicenter cohort of 411 patients, LNPN demonstrated robust performance, achieving an AUC of 0.92 for binary lymph node burden classification (N0 vs. N+) and 0.82 for ternary lymph node burden classification (N0/N1-3/N ≥ 4). Notably, among patients undergoing sentinel lymph node biopsy (SLNB) with confirmed 1-2 metastatic lymph nodes, LNPN predicted high-burden metastases (N ≥ 4) with an AUC of 0.77. LNPN provided a non-invasive method to assess lymph node metastasis and recurrence risk, potentially reducing unnecessary axillary lymph node dissection (ALND) and facilitating decision-making regarding CDK4/6i intervention in luminal breast cancer patients.

PMID:40678544 | PMC:PMC12268571 | DOI:10.1016/j.isci.2025.112849

Categories: Literature Watch

Deep learning enhanced deciphering of brain activity maps for discovery of therapeutics for brain disorders

Fri, 2025-07-18 06:00

iScience. 2025 Jun 10;28(7):112868. doi: 10.1016/j.isci.2025.112868. eCollection 2025 Jul 18.

ABSTRACT

This study presents an artificial intelligence-enhanced in vivo screening platform, DeepBAM, which enables deep learning of large-scale whole brain activity maps (BAMs) from living, drug-responsive larval zebrafish for neuropharmacological prediction. Automated microfluidics and high-speed microscopy are utilized to achieve high-throughput in vivo phenotypic screening for generating the BAM library. Deep learning is applied to deconvolve the pharmacological information from the BAM library and to predict the therapeutic potential of non-clinical compounds without any prior information about the chemicals. For a validation set composed of blinded clinical neuro-drugs, several potent anti-Parkinson's disease and anti-epileptic drugs are predicted with nearly 45% accuracy. The prediction capability of DeepBAM is further tested with a set of nonclinical compounds, revealing pharmaceutical potential in 80% of the anti-epileptic and 36% of the anti-Parkinson predictions. These data support the notion of systems-level phenotyping in combination with machine learning to aid therapeutics discovery for brain disorders.

PMID:40678509 | PMC:PMC12268937 | DOI:10.1016/j.isci.2025.112868

Categories: Literature Watch

druglikeFilter 1.0: An AI powered filter for collectively measuring the drug-likeness of compounds

Fri, 2025-07-18 06:00

J Pharm Anal. 2025 Jun;15(6):101298. doi: 10.1016/j.jpha.2025.101298. Epub 2025 Apr 9.

ABSTRACT

Advancements in artificial intelligence (AI) and emerging technologies are rapidly expanding the exploration of chemical space, facilitating innovative drug discovery. However, the transformation of novel compounds into safe and effective drugs remains a lengthy, high-risk, and costly process. Comprehensive early-stage evaluation is essential for reducing costs and improving the success rate of drug development. Despite this need, no comprehensive tool currently supports systematic evaluation and efficient screening. Here, we present druglikeFilter, a deep learning-based framework designed to assess drug-likeness across four critical dimensions: (1) physicochemical rules evaluated by systematic determination, (2) toxicity alerts investigated from multiple perspectives, (3) binding affinity measured by dual-path analysis, and (4) compound synthesizability assessed by retro-route prediction. By enabling automated, multidimensional filtering of compound libraries, druglikeFilter not only streamlines the drug development process but also plays a crucial role in advancing research efforts toward viable drug candidates. The tool can be freely accessed at https://idrblab.org/drugfilter/.

PMID:40678482 | PMC:PMC12268052 | DOI:10.1016/j.jpha.2025.101298

Categories: Literature Watch
