Deep learning

Deep Learning-Based Point Cloud Compression: An In-Depth Survey and Benchmark

Thu, 2025-07-31 06:00

IEEE Trans Pattern Anal Mach Intell. 2025 Jul 31;PP. doi: 10.1109/TPAMI.2025.3594355. Online ahead of print.

ABSTRACT

With the maturity of 3D capture technology, the explosive growth of point cloud data has burdened storage and transmission. Traditional hybrid point cloud compression (PCC) tools relying on handcrafted priors offer limited compression performance and are increasingly unable to cope with the burden induced by data growth. Recently, deep learning-based PCC methods have been introduced and continue to push the PCC performance boundary. Given the rapid progress of deep PCC, the community urgently needs a systematic overview that summarizes past progress and presents future research directions. In this paper, we present a detailed review covering popular point cloud datasets, algorithm evolution, benchmarking analysis, and future trends. Concretely, we first introduce several widely used PCC datasets according to their major properties. We then review the algorithm evolution of existing studies on deep PCC, including lossy and lossless methods proposed for various point cloud types. Beyond academic studies, we also investigate the development of relevant international standards (i.e., MPEG and JPEG standards). To enable an in-depth understanding of the progress of deep PCC, we select a representative set of methods and conduct extensive experiments on multiple datasets. Comprehensive benchmarking comparisons and analysis reveal the pros and cons of previous methods. Finally, based on this analysis, we highlight the challenges and future trends of deep learning-based PCC, paving the way for further study. Related source code can be found at https://openi.pcl.ac.cn/OpenPointCloud.

PMID:40742848 | DOI:10.1109/TPAMI.2025.3594355

Categories: Literature Watch

A Trust-Guided Approach to MR Image Reconstruction with Side Information

Thu, 2025-07-31 06:00

IEEE Trans Med Imaging. 2025 Jul 31;PP. doi: 10.1109/TMI.2025.3594363. Online ahead of print.

ABSTRACT

Reducing MRI scan times can improve patient care and lower healthcare costs. Many acceleration methods are designed to reconstruct diagnostic-quality images from sparse k-space data, via an ill-posed or ill-conditioned linear inverse problem (LIP). To address the resulting ambiguities, it is crucial to incorporate prior knowledge into the optimization problem, e.g., in the form of regularization. Another form of prior knowledge less commonly used in medical imaging is the readily available auxiliary data (a.k.a. side information) obtained from sources other than the current acquisition. In this paper, we present the Trust-Guided Variational Network (TGVN), an end-to-end deep learning framework that effectively and reliably integrates side information into LIPs. We demonstrate its effectiveness in multi-coil, multi-contrast MRI reconstruction, where incomplete or low-SNR measurements from one contrast are used as side information to reconstruct high-quality images of another contrast from heavily under-sampled data. TGVN is robust across different contrasts, anatomies, and field strengths. Compared to baselines utilizing side information, TGVN achieves superior image quality while preserving subtle pathological features even at challenging acceleration levels, drastically speeding up acquisition while minimizing hallucinations. Source code and dataset splits are available on github.com/sodicksonlab/TGVN.

PMID:40742840 | DOI:10.1109/TMI.2025.3594363

Categories: Literature Watch

Hybrid protein-ligand binding residue prediction with protein language models: Does the structure matter?

Thu, 2025-07-31 06:00

Bioinformatics. 2025 Jul 31:btaf431. doi: 10.1093/bioinformatics/btaf431. Online ahead of print.

ABSTRACT

MOTIVATION: Predicting protein-ligand binding sites is crucial in studying protein interactions with applications in biotechnology and drug discovery. Two distinct paradigms have emerged for this purpose: sequence-based methods, which leverage protein sequence information, and structure-based methods, which rely on the three-dimensional (3D) structure of the protein. Here, we analyze a hybrid approach that combines the strengths of both paradigms by integrating two recent deep learning architectures: protein language models (pLMs) from the sequence-based paradigm and Graph Neural Networks (GNNs) from the structure-based paradigm. Specifically, we construct a residue-level Graph Attention Network (GAT) model based on the protein's 3D structure that uses pre-trained pLM embeddings as node features. This integration enables us to study the interplay between the sequential information encoded in the protein sequence and the spatial relationships within the protein structure, and its effect on model performance.
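
The structural half of this hybrid can be illustrated with a toy, dependency-free sketch of single-head graph attention (this is not the paper's GAT model, which uses trained weight matrices and high-dimensional pLM embeddings): each residue node re-weights its neighbours' transformed features by a softmax over compatibility scores. The scalar weight `w` and attention vector `a` below are illustrative stand-ins for learned parameters.

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_layer(features, neighbors, w, a):
    """One simplified graph-attention step: each node re-weights its
    neighbours' (linearly transformed) features by a softmax score."""
    # Linear transform h_i -> w * h_i (a scalar stands in for a weight matrix)
    h = {i: [w * x for x in f] for i, f in features.items()}
    out = {}
    for i in neighbors:
        nbrs = neighbors[i] + [i]          # include a self-loop
        # attention logits e_ij = LeakyReLU(a . [h_i || h_j])
        logits = [leaky_relu(sum(ak * hk for ak, hk in zip(a, h[i] + h[j])))
                  for j in nbrs]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]      # numerically stable softmax
        z = sum(exps)
        alphas = [e / z for e in exps]
        # attention-weighted aggregation of neighbour features
        out[i] = [sum(al * h[j][k] for al, j in zip(alphas, nbrs))
                  for k in range(len(h[i]))]
    return out

# Three residues with 2-d "embedding" features; edges from a toy contact map
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
out = gat_layer(features, neighbors, w=1.0, a=[0.5, 0.5, 0.5, 0.5])
print(out[0])  # node 0 averages its own and node 1's features equally here
```

In the hybrid design discussed above, the node features would be pLM embeddings rather than hand-set vectors, which is exactly why richer embeddings can absorb part of the benefit the graph structure otherwise provides.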

RESULTS: Using a benchmark dataset spanning a range of ligands and ligand types, we show that incorporating structure information consistently enhances the predictive power of the baselines in absolute terms. Nevertheless, as more complex pLMs are used to represent node features, the relative impact of the structure information captured by the GNN architecture diminishes. These observations suggest that although using the experimental protein structure almost always improves binding-site prediction accuracy, complex pLMs already encode structural information that yields good predictive performance even without the 3D structure.

AVAILABILITY: The datasets generated and/or analyzed during the current study, as well as pretrained models are available in the following Zenodo link https://zenodo.org/records/15184302. The source code that was used to generate the results of the current study is available in the following GitHub repository https://github.com/hamzagamouh/pt-lm-gnn as well as in the following Zenodo link https://zenodo.org/records/15192327.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics Journal online.

PMID:40742755 | DOI:10.1093/bioinformatics/btaf431

Categories: Literature Watch

Artificial intelligence in hepatopancreatobiliary surgery for clinical outcome prediction: current perspective and future direction

Thu, 2025-07-31 06:00

J Robot Surg. 2025 Jul 31;19(1):438. doi: 10.1007/s11701-025-02617-6.

ABSTRACT

The expanding and evolving role of artificial intelligence (AI) in surgery has been enhanced by the adoption of robotic-assisted surgery (RAS), which provides a platform to facilitate the integration and utilisation of AI technologies. One area where AI is likely to be particularly valuable is outcome prediction using deep learning models (DLMs). This narrative review examines DLMs in hepatopancreatobiliary (HPB) surgery, highlighting their role in predicting postoperative complications and surgical complexity with an increased level of accuracy compared with traditional methods. In addition to reviewing existing literature, this article offers a forward-looking perspective on emerging innovations such as real-time intraoperative guidance, federated learning for global collaboration, and the development of explainable AI frameworks. By addressing challenges related to data quality, model generalisability, and ethical implementation, AI has the potential to transform HPB surgery and deliver more personalised, precise, and equitable care.

PMID:40742577 | DOI:10.1007/s11701-025-02617-6

Categories: Literature Watch

A Meta-Learning Approach for Multicenter and Small-Data Single-Cell Image Analysis

Thu, 2025-07-31 06:00

Anal Chem. 2025 Jul 31. doi: 10.1021/acs.analchem.5c01810. Online ahead of print.

ABSTRACT

The application of algorithm-based single-cell imaging techniques can visualize and analyze cellular heterogeneity. However, these techniques are severely limited by the high workload required to label single-cell images and the high variation of cells from different sources. Herein, we propose a meta-learning approach for multicenter, small-data single-cell image analysis, combining meta-learning with automated wide-field fluorescence microscopy in an integrated hardware and software system for analyzing cellular heterogeneity. We verified that the meta-learning single-cell imaging platform extracts shared information across multiple data centers through training, reducing the workload required to label single-cell images. The results show that classification accuracy on the target task can reach about 92% using only 60% of the labeled single-cell images, whereas traditional deep learning requires 100% of the labeled images to achieve the same accuracy. Moreover, the accuracy achieved by our platform surpasses that of traditional deep learning methods even when the labeled data volume is reduced to 5%, meaning our platform can significantly reduce the volume of single-cell image labeling and the manual labeling workload, thereby enhancing work efficiency and reducing costs. Furthermore, the platform's robustness to single-cell images from different sources has been verified through knowledge-migration experiments on public datasets. This robustness should instill confidence in the applicability of our platform across various research settings and data sources.

PMID:40742562 | DOI:10.1021/acs.analchem.5c01810

Categories: Literature Watch

MMPK: A Multimodal Deep Learning Framework to Predict Human Oral Pharmacokinetic Parameters

Thu, 2025-07-31 06:00

J Med Chem. 2025 Jul 31. doi: 10.1021/acs.jmedchem.5c01522. Online ahead of print.

ABSTRACT

Accurate prediction of in vivo pharmacokinetic (PK) profiles is crucial for assessing drug safety and efficacy, optimizing dosage regimens, and understanding interactions between the human body and drugs. Using machine learning to predict PK parameters has the potential to considerably save time and resources during drug development. In this study, we constructed a human oral PK data set containing over 1,200 unique compounds and more than 5,000 compound-dose combinations. Building on this data set, we developed a multimodal deep learning framework named MMPK, integrating molecular graphs, substructure graphs, and SMILES sequences to capture multiscale molecular information. MMPK employs multitask learning and data imputation to improve data efficiency and model robustness. Comparative evaluations confirm that MMPK outperforms baseline models, achieving an average geometric mean fold error (GMFE) of 2.895 and root mean squared logarithmic error (RMSLE) of 0.599 across eight PK parameters. The MMPK model is freely accessible at https://lmmd.ecust.edu.cn/mmpk/.
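
The two reported error metrics can be made concrete with a small, dependency-free sketch. The exact logarithm base is an assumption here (base-10 is a common convention for GMFE in PK modeling; RMSLE conventions vary between papers), and the clearance values are invented toy numbers, not MMPK's data:

```python
import math

def gmfe(pred, obs):
    """Geometric mean fold error: 10 ** mean(|log10(pred/obs)|).
    A GMFE of 2 means predictions are within 2-fold of observed on average."""
    errs = [abs(math.log10(p / o)) for p, o in zip(pred, obs)]
    return 10 ** (sum(errs) / len(errs))

def rmsle(pred, obs):
    """Root mean squared logarithmic error on log10-transformed values
    (assumed convention; some papers use natural log or log1p instead)."""
    sq = [(math.log10(p) - math.log10(o)) ** 2 for p, o in zip(pred, obs)]
    return math.sqrt(sum(sq) / len(sq))

# Toy PK parameter predictions vs. observed values (arbitrary units)
pred = [2.0, 5.0, 12.0]
obs  = [1.0, 10.0, 12.0]
print(round(gmfe(pred, obs), 3))   # 1.587: within ~1.6-fold on average
print(round(rmsle(pred, obs), 3))  # perfect predictions would give 0.0
```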

PMID:40741939 | DOI:10.1021/acs.jmedchem.5c01522

Categories: Literature Watch

Artificial Intelligence and the Evolving Landscape of Immunopeptidomics

Thu, 2025-07-31 06:00

Proteomics Clin Appl. 2025 Jul 31:e70018. doi: 10.1002/prca.70018. Online ahead of print.

ABSTRACT

BACKGROUND: Immunopeptidomics is the large-scale study of peptides presented by major histocompatibility complex (MHC) molecules and plays a central role in neoantigen discovery and cancer immunotherapy. However, the complexity of mass spectrometry data, the diversity of peptide sources, and variability in immune responses present major challenges in this field.

REVIEW FOCUS: In recent years, artificial intelligence (AI)-based methods have become central to advancing key steps in immunopeptidomics. It has enabled advances in de novo sequencing, peptide-spectrum matching, spectrum prediction, MHC binding prediction, and T cell recognition modeling. In this review, we examine these applications in detail, highlighting how AI is integrated into each stage of the immunopeptidomics workflow.

CASE STUDY: This review presents a focused case study on breast cancer, a heterogeneous and historically less immunogenic tumor type, to examine how AI may help overcome limitations in identifying actionable neoantigens.

CHALLENGES AND FUTURE PERSPECTIVES: We discuss current bottlenecks, including challenges in modeling noncanonical peptides, accounting for antigen processing defects, and avoiding on-target off-tumor toxicity. Finally, we outline future directions for improving AI models to support both personalized and off-the-shelf immunotherapy strategies.

SUMMARY: Artificial intelligence (AI) is reshaping the immunopeptidomics landscape by overcoming challenges in peptide identification, immunogenicity prediction, and neoantigen prioritization. This review highlights how AI-based tools enhance the detection of MHC-bound peptides, including low-abundance, noncanonical, and post-translationally modified epitopes, and improve peptide-spectrum matching and T-cell epitope prediction. Through a case study on applications in breast cancer, we illustrate the potential of AI to reveal hidden immunogenic features in tumors previously considered immunologically "cold." These advancements open new opportunities for expanding neoantigen discovery pipelines and optimizing cancer immunotherapies. Looking ahead, the application of deep learning, transfer learning, and integrated multi-omics models may further elevate the accuracy and scalability of immunopeptidomics, enabling more effective and inclusive vaccine and T-cell therapy development.

PMID:40741879 | DOI:10.1002/prca.70018

Categories: Literature Watch

Hybrid framework for automated generation of mammography radiology reports

Thu, 2025-07-31 06:00

Comput Struct Biotechnol J. 2025 Jul 16;27:3229-3239. doi: 10.1016/j.csbj.2025.07.018. eCollection 2025.

ABSTRACT

Breast cancer remains a significant health concern for women at various stages of life, impacting both productivity and reproductive health. Recent advancements in deep learning (DL) have enabled substantial progress in the automation of radiological reports, offering potential support to radiologists and streamlining examination processes. This study introduces a framework for automated clinical text generation aimed at assisting radiologists in mammography examinations. Rather than replacing medical expertise, the system provides pre-processed evidence and automatic diagnostic suggestions for radiologist validation. The framework leverages an encoder-decoder architecture for natural language generation (NLG) models, trained and fine-tuned on a corpus of Spanish radiological text. Additionally, we incorporate an image intensity enhancement technique to address the issue of image quality variability and assess its impact on report generation outcomes. A comparative analysis using NLG metrics is conducted to identify the optimal feature extraction method. Furthermore, named entity recognition (NER) techniques are employed to extract key clinical concepts and automate precision evaluations. Our results demonstrate that the proposed framework could be a solid starting point for systematizing and implementing automated clinical report generation based on medical images.

PMID:40741541 | PMC:PMC12309959 | DOI:10.1016/j.csbj.2025.07.018

Categories: Literature Watch

Deep learning-based prediction of rheumatoid arthritis-associated deformity on MRI

Thu, 2025-07-31 06:00

Brain Spine. 2025 Jul 12;5:104328. doi: 10.1016/j.bas.2025.104328. eCollection 2025.

ABSTRACT

INTRODUCTION: While the prevalence of surgery to correct atlantoaxial subluxation (AAS), subaxial subluxation (SAS) and vertical translocation (VT) in patients with rheumatoid arthritis (RA) has declined, cervical deformity is still observed regularly.

RESEARCH QUESTION: The objective of this study is to develop a deep learning-based algorithm to predict RA-associated upper cervical spine deformity in patients before or close to RA diagnosis, with the purpose of early risk stratification.

MATERIALS AND METHODS: Patients with RA for whom follow-up cervical MRI studies (at least 3 years apart) were available were identified retrospectively in two tertiary care centers. Patients without definitive deformity at baseline were included. Patients were assessed for RA-associated cervical spine deformity, defined as the presence of pannus and/or degeneration of the facet joints of C0-C1 and/or C1-C2 on follow-up MRI.

RESULTS: Of 3248 patients identified, 220 were included in this study, of whom 33 developed cervical spine deformity. 153 patients were used for training and 67 for validation of the deep learning-based prediction model. The accuracy of the model was 0.84, with a positive predictive value of 0.56 and a negative predictive value of 0.92.
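
The three reported metrics all follow from a standard 2x2 confusion matrix. As an illustration, the counts below are hypothetical (the abstract reports only the derived metrics, not the underlying matrix), chosen so that a 67-patient validation set reproduces the stated 0.84 / 0.56 / 0.92 values:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, PPV, and NPV as defined in the abstract above."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)          # positive predictive value (precision)
    npv = tn / (tn + fn)          # negative predictive value
    return accuracy, ppv, npv

# Hypothetical confusion matrix for a 67-patient validation set
# (consistent with, but not taken from, the reported results)
acc, ppv, npv = confusion_metrics(tp=9, fp=7, tn=47, fn=4)
print(f"accuracy={acc:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
# prints "accuracy=0.84 PPV=0.56 NPV=0.92"
```

The gap between PPV and NPV is typical when the positive class (here, deformity: 33 of 220 patients) is rare: false positives weigh heavily on PPV even at high overall accuracy.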

DISCUSSION AND CONCLUSION: A deep learning model was developed to predict the development of pannus and/or facet joint deformity at the craniocervical junction of patients with RA. Future research should focus on large-scale validation of this model with diverse sites and identifying the role of the subaxial spine in the risk of deformity at the level of the craniocervical junction during the course of disease.

PMID:40741519 | PMC:PMC12309277 | DOI:10.1016/j.bas.2025.104328

Categories: Literature Watch

Deep-learning-based 3D content-based image retrieval system on chest HRCT: Performance assessment for interstitial lung diseases and usual interstitial pneumonia

Thu, 2025-07-31 06:00

Eur J Radiol Open. 2025 Jul 23;15:100670. doi: 10.1016/j.ejro.2025.100670. eCollection 2025 Dec.

ABSTRACT

BACKGROUND: Diffuse parenchymal lung diseases encompass a wide range of conditions and CT imaging findings. Differentiating interstitial lung diseases (ILDs) and determining the presence or absence of usual interstitial pneumonia (UIP) can be challenging, even for experienced radiologists. To address this challenge, we developed a 3D content-based image retrieval (CBIR) system and investigated its clinical usefulness.

METHODS: Using deep learning technology, we developed a prototype system that analyzes thin-slice whole lung HRCT images, automatically registers them in a database, and retrieves similar images. To evaluate search performance, we used a database of 2058 cases and assessed image similarity between query and retrieved cases using a 5-point visual score (5: Similar, 4: Somewhat similar, 3: Neither, 2: Somewhat dissimilar, 1: Dissimilar). To assess clinical usefulness, we evaluated the concordance of labels (ILD/non-ILD, with/without UIP) between query and retrieved cases, using a database of 301 cases across 57 diseases.

RESULTS: For search performance, the mean score of visual similarity between 70 queries and their top 5 retrieved cases was 4.37 ± 0.83. For clinical usefulness, label concordance between 25 queries and their top 5 retrieved cases was assessed across 4 labels. For ILD, the mean concordance of labels was 0.94 ± 0.15, while for non-ILD, it was 0.64 ± 0.31. For cases with UIP, the mean concordance of labels was 0.86 ± 0.17, while for cases without UIP, it was 0.83 ± 0.24.

CONCLUSIONS: Our CBIR system showed high accuracy for identifying cases with/without UIP, suggesting its potential to support UIP differentiation in clinical practice.

PMID:40741449 | PMC:PMC12309587 | DOI:10.1016/j.ejro.2025.100670

Categories: Literature Watch

Edge learning applications in the prediction and classification of combined hepatocellular-cholangiocarcinoma: A comprehensive narrative review

Thu, 2025-07-31 06:00

World J Clin Oncol. 2025 Jul 24;16(7):107246. doi: 10.5306/wjco.v16.i7.107246.

ABSTRACT

Combined hepatocellular-cholangiocarcinoma (cHCC-CCA) is a rare, heterogeneous primary malignant liver tumor containing both hepatocellular and cholangiocarcinoma features. The complex presentation of cHCC-CCA is often poorly investigated, and the information derived from traditional diagnostic techniques (histopathology and radiological imaging) is often suboptimal. Because cHCC-CCA is difficult to diagnose owing to its complex histopathological features, leading to treatment delays and poor prognosis, incorporating advanced artificial intelligence approaches such as edge learning can improve patient outcomes. Artificial intelligence, particularly deep learning, has recently opened new doorways for improving diagnostic accuracy. Edge learning deploys artificial intelligence models on local devices, providing real-time processing, improved data privacy, and reduced bandwidth usage. This narrative review investigates the conceptual formulation of edge learning, its opportunities for clinical application in the prediction and classification of cHCC-CCA, the technical solution strategies, the clinical benefits it offers, and associated challenges and future directions.

PMID:40741203 | PMC:PMC12305016 | DOI:10.5306/wjco.v16.i7.107246

Categories: Literature Watch

Automated removal of corrupted tilts in cryo-electron tomography

Thu, 2025-07-31 06:00

J Struct Biol X. 2025 Jul 17;12:100130. doi: 10.1016/j.yjsbx.2025.100130. eCollection 2025 Dec.

ABSTRACT

Cryo-electron tomography (cryo-ET) enables the visualization of macromolecular structures in their near-native cellular environment. However, acquired tilt series are often compromised by image corruption due to drift, contamination, and ice reflections. Manually identifying and removing corrupted tilts is subjective and time-consuming, making an automated approach necessary. In this study, we present a deep learning-based method for automatically removing corrupted tilts. We evaluated 13 different neural network architectures, including convolutional neural networks (CNNs) and transformers. Using a dataset of 435 annotated tilt series, we trained models for both binary and multiclass classification of corrupted tilts. We demonstrate the high efficiency and reliability of these automated approaches for removing corrupted tilts in cryo-ET and provide a framework, including models trained on cryo-ET data, that allows users to apply these models directly to their tilt series, improving the quality and consistency of downstream cryo-ET data processing.

PMID:40741136 | PMC:PMC12309593 | DOI:10.1016/j.yjsbx.2025.100130

Categories: Literature Watch

Deep learning-based localization and lesion detection in capsule endoscopy for patients with suspected small-bowel bleeding

Thu, 2025-07-31 06:00

World J Gastroenterol. 2025 Jul 21;31(27):106819. doi: 10.3748/wjg.v31.i27.106819.

ABSTRACT

BACKGROUND: Small-bowel capsule endoscopy (SBCE) is widely used to evaluate obscure gastrointestinal bleeding; however, its interpretation is time-consuming and reader-dependent. Although artificial intelligence (AI) has emerged to address these limitations, few models simultaneously perform small-bowel (SB) localization and abnormality detection.

AIM: To develop an AI model that automatically distinguishes the SB from the stomach and colon and diagnoses SB abnormalities.

METHODS: We developed an AI model using 87005 CE images (11925, 33781, and 41299 from the stomach, SB, and colon, respectively) for SB localization and 28405 SBCE images (1337 erosions/ulcers, 126 angiodysplasia, 494 bleeding, and 26448 normal) for abnormality detection. The diagnostic performances of AI-assisted reading and conventional reading were compared using 32 SBCE videos in patients with suspicious SB bleeding.

RESULTS: Regarding organ localization, the AI model achieved an area under the receiver operating characteristic curve (AUC) and accuracy exceeding 0.99 and 97%, respectively. For SB abnormality detection, the performance was as follows: Erosion/ulcer: 99.4% accuracy (AUC, 0.98); angiodysplasia: 99.8% accuracy (AUC, 0.99); bleeding: 99.9% accuracy (AUC, 0.99); normal: 99.3% accuracy (AUC, 0.98). In external validation, AI-assisted reading (8.7 minutes) was significantly faster than conventional reading (53.9 minutes; P < 0.001). The SB localization accuracies (88.6% vs 72.7%, P = 0.07) and SB abnormality detection rates (77.3% vs 77.3%, P = 1.00) of the conventional reading and AI-assisted reading were comparable.
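
The AUC values reported above have a useful probabilistic reading: the AUC equals the probability that a randomly chosen abnormal frame receives a higher model score than a randomly chosen normal one (the Mann-Whitney interpretation). A minimal sketch, using invented toy scores rather than the study's model outputs:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model scores for frames with vs. without a bleeding lesion
pos = [0.95, 0.80, 0.99, 0.70]
neg = [0.10, 0.40, 0.75]
print(auc(pos, neg))  # 11 of 12 pairs ranked correctly
```

An AUC above 0.99, as reported here, means the model mis-ranks almost no abnormal/normal frame pairs, which is consistent with the near-ceiling per-class accuracies.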

CONCLUSION: Our AI model decreased SBCE reading time and achieved performance comparable to that of experienced endoscopists, suggesting that AI integration into SBCE reading enables efficient and reliable SB abnormality detection.

PMID:40741104 | PMC:PMC12305051 | DOI:10.3748/wjg.v31.i27.106819

Categories: Literature Watch

Development and validation of deep learning- and ensemble learning-based biological ages in the NHANES study

Thu, 2025-07-31 06:00

Front Aging Neurosci. 2025 Jul 16;17:1532884. doi: 10.3389/fnagi.2025.1532884. eCollection 2025.

ABSTRACT

INTRODUCTION: Conventional machine learning (ML) approaches for constructing biological age (BA) have predominantly relied on blood-based markers, limiting their scope. This study aims to develop and validate novel ML-based BA models using a comprehensive set of clinical, behavioral, and socioeconomic factors and evaluate their predictive performance for mortality.

METHODS: We analyzed data from 24,985 participants in the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010, with follow-up extending to 31 December 2019, or until death or loss to follow-up. Thirty features, including blood and urine biochemistry, physical examination data, behavioral traits, and socioeconomic factors, were selected using the Least Absolute Shrinkage and Selection Operator (LASSO). These features were utilized to train deep neural networks (DNN) and ensemble learning models, specifically the Deep Biological Age (DBA) and Ensemble Biological Age (EnBA), with chronological age (CA) as the reference label. Model performance was assessed using mean absolute error (MAE), while interpretability was explored using Shapley Additive exPlanation (SHAP). Predictive accuracy of DBA and EnBA for mortality was compared with Phenotypic Age (PhenoAge) using the area under the curve (AUC) derived from Cox proportional hazards models and hazard ratios (HR), adjusted for demographics and lifestyle factors. Sensitivity analyses were performed to ensure robustness.

RESULTS: DBA and EnBA accurately predicted actual age (MAE = 2.98 and 3.58 years, respectively) and demonstrated strong predictive capability for all-cause mortality, with AUCs of 0.896 (95% CI: 0.891-0.898) for DBA and 0.889 (95% CI: 0.884-0.894) for EnBA. Higher DBA and EnBA accelerations were significantly associated with increased mortality risk (HR = 1.059 and 1.039, respectively). SHAP analysis highlighted prescription medication usage, hepatitis B surface antibody status, and vigorous physical activity as the most influential features contributing to DBA predictions. Furthermore, BA acceleration was linked to elevated risk of death from specific chronic conditions, including cardiovascular and cerebrovascular diseases and cancer.

DISCUSSION: Our study successfully developed and validated two ML-based BA models capable of accurately predicting both all-cause and cause-specific mortality. These findings suggest that the DBA and EnBA models hold promise for early identification of high-risk individuals, potentially facilitating timely preventive interventions and improving population health outcomes.

PMID:40741049 | PMC:PMC12307447 | DOI:10.3389/fnagi.2025.1532884

Categories: Literature Watch

Evaluation of non-motor symptoms in Parkinson's disease using multiparametric MRI with the multiplex sequence

Thu, 2025-07-31 06:00

Front Aging Neurosci. 2025 Jul 16;17:1602245. doi: 10.3389/fnagi.2025.1602245. eCollection 2025.

ABSTRACT

BACKGROUND: Non-motor symptoms (NMS) in Parkinson's disease (PD) often precede motor manifestations and are challenging to detect with conventional MRI. This study investigates the use of the Multi-Flip-Angle and Multi-Echo Gradient Echo Sequence (MULTIPLEX) in MRI to detect previously undetectable microstructural changes in brain tissue associated with NMS in PD.

METHODS: A prospective study was conducted on 37 patients diagnosed with PD. Anxiety and depression levels were assessed using the Hamilton Anxiety Scale (HAMA) and Hamilton Depression Scale (HAMD), respectively. MRI techniques, including 3D T1-weighted imaging (3D T1WI) and MULTIPLEX - which encompasses T2*-mapping, T1-mapping, proton density-mapping, and quantitative susceptibility mapping (QSM)-were performed. Brain subregions were automatically segmented using deep learning, and their volume and quantitative parameters were correlated with NMS-related assessment scales using Spearman's rank correlation coefficient.

RESULTS: Correlations were observed between QSM and T2* values of certain subregions within the left frontal and bilateral temporal lobes and both anxiety and depression (absolute r-values ranging from 0.358 to 0.480, p < 0.05). Additionally, volume measurements of regions within the bilateral frontal, temporal, and insular lobes exhibited negative correlations with anxiety and depression (absolute r-values ranging from 0.354 to 0.658, p < 0.05). In T1-mapping and proton density-mapping, no specific brain regions were found to be significantly associated with the NMS of PD under investigation.
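
Spearman's rank correlation, used here to relate quantitative MRI parameters to the HAMA/HAMD scales, is simply the Pearson correlation of the rank vectors and can be sketched in dependency-free Python. The QSM and HAMA values below are invented toy data, not the study's measurements:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                          # extend the block of tied values
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Toy data: regional QSM values vs. HAMA anxiety scores
qsm  = [0.021, 0.035, 0.018, 0.040, 0.027]
hama = [12, 20, 9, 25, 15]
print(spearman(qsm, hama))  # perfectly monotone toy data -> rho = 1.0
```

Because it depends only on ranks, Spearman's rho is robust to the skewed, non-normal distributions typical of regional susceptibility values, which is presumably why it was preferred over Pearson correlation here.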

CONCLUSION: Quantitative parameters derived from MULTIPLEX MRI show significant associations with clinical evaluations of NMS in PD. Multiparametric MR neuroimaging may serve as a potential early diagnostic tool for PD.

PMID:40741048 | PMC:PMC12307406 | DOI:10.3389/fnagi.2025.1602245

Categories: Literature Watch

Quantitative assessment of brain glymphatic imaging features using deep learning-based EPVS segmentation and DTI-ALPS analysis in Alzheimer's disease

Thu, 2025-07-31 06:00

Front Aging Neurosci. 2025 Jul 16;17:1621106. doi: 10.3389/fnagi.2025.1621106. eCollection 2025.

ABSTRACT

BACKGROUND: This study aimed to quantitatively evaluate brain glymphatic imaging features in patients with Alzheimer's disease (AD), amnestic mild cognitive impairment (aMCI), and normal controls (NC) by applying a deep learning-based method for the automated segmentation of enlarged perivascular space (EPVS) and diffusion tensor imaging analysis along perivascular spaces (DTI-ALPS) indices.

METHODS: A total of 89 patients with AD, 24 aMCI, and 32 NCs were included. EPVS were automatically segmented from T1WI and T2WI images using a VB-Net-based model. Quantitative metrics, including total EPVS volume, number, and regional volume fractions were extracted, and segmentation performance was evaluated using the Dice similarity coefficient. Bilateral ALPS indices were also calculated. Group comparisons were conducted for all imaging metrics, and correlations with cognitive scores were analyzed.

RESULTS: The VB-Net segmentation model demonstrated high accuracy, with mean Dice coefficients exceeding 0.90. Compared with the NC group, both the AD and aMCI groups exhibited significantly increased EPVS volume and number, along with reduced ALPS indices (all P < 0.05). Partial correlation analysis revealed strong associations between ALPS and EPVS metrics and cognitive performance. The combined imaging features showed good discriminative performance among diagnostic groups.
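
The Dice similarity coefficient used to evaluate the segmentations has a compact definition: twice the overlap of two masks divided by their total size. A minimal sketch on invented toy masks (not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|); 1.0 is a perfect overlap."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0                 # convention: two empty masks agree
    return 2 * len(a & b) / (len(a) + len(b))

# Flattened toy voxel masks: predicted vs. manual EPVS segmentation
pred   = [1, 1, 1, 0, 0, 1, 0, 0]
manual = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(pred, manual))  # 3 shared voxels out of 4 + 4 -> 0.75
```

A mean Dice above 0.90 for structures as small and scattered as EPVS is demanding, since each mismatched voxel is a large fraction of a tiny mask.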

CONCLUSION: The integration of deep learning-based EPVS segmentation and DTI-ALPS analysis enables multidimensional assessment of glymphatic system alterations, offering potential value for early diagnosis and translation in neurodegenerative diseases.

PMID:40741044 | PMC:PMC12307369 | DOI:10.3389/fnagi.2025.1621106

Categories: Literature Watch

Explaining care need assessment surveys: qualitative and quantitative evaluation of state-of-the-art local and global explainable artificial intelligence methods

Thu, 2025-07-31 06:00

JAMIA Open. 2025 Jul 29;8(4):ooaf064. doi: 10.1093/jamiaopen/ooaf064. eCollection 2025 Aug.

ABSTRACT

OBJECTIVE: With extended life expectancy, the number of people in need of care has been growing. To support them optimally, it is important to know the patterns and conditions of their daily life that influence the need for support and, thus, the classification of the care need. In this study, we utilize a large corpus of care benefits applications to perform an exploratory analysis of factors affecting care need, supporting the tedious expert work of gathering reliable criteria for a care need assessment.

MATERIALS AND METHODS: We compare state-of-the-art methods from explainable artificial intelligence (XAI) as means to extract such patterns from over 72 000 German care benefits applications. We train transformer models to predict assessment results as decided by a Medical Service Unit from accompanying text notes. To understand the key factors for care need assessment and its constituent modules (such as mobility and self-therapy), we apply feature attribution methods to extract the key phrases for each prediction. These local explanations are then aggregated into global insights to derive key phrases for different modules and severity of care need over the dataset.
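The local-to-global aggregation step described above could, for instance, sum phrase-level attribution scores across documents and rank the totals. The abstract does not specify the aggregation function, so the following pure-Python sketch, with invented phrases and scores, is only illustrative:

```python
from collections import defaultdict

def aggregate_attributions(local_explanations, top_k=3):
    """Aggregate per-document (phrase, score) attributions into a
    global key-phrase ranking by summing scores across documents."""
    totals = defaultdict(float)
    for doc in local_explanations:
        for phrase, score in doc:
            totals[phrase] += score
    return sorted(totals, key=totals.get, reverse=True)[:top_k]

# Hypothetical local explanations for two applications
docs = [
    [("needs help walking", 0.9), ("dementia", 0.7)],
    [("needs help walking", 0.8), ("lives alone", 0.3)],
]
print(aggregate_attributions(docs, top_k=2))
```

In practice one would also normalize for phrase frequency; a plain sum favors common phrases, which is exactly the rare-vs-common balance issue the RESULTS section discusses.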

RESULTS: Our experiments show that transformer-based models perform slightly better than traditional bag-of-words baselines in predicting care need. We find that the bag-of-words baseline also provides useful care-relevant phrases, whereas phrases obtained through transformer explanations better balance rare and common phrases, such as diagnoses mentioned only once, and are better at assigning the correct assessment module.

DISCUSSION: Even though XAI results can become unwieldy, they let us get an understanding of thousands of documents with no extra annotations other than existing assessment outcomes.

CONCLUSION: This work provides a systematic application and comparison of both traditional and state-of-the-art deep learning based XAI approaches to extract insights from a large corpus of text. Both traditional and deep learning approaches provide useful phrases, and we recommend using both to explore and understand large text corpora better. We will make our code available at https://github.com/oguzserbetci/explainer.

PMID:40741010 | PMC:PMC12307913 | DOI:10.1093/jamiaopen/ooaf064

Categories: Literature Watch

Ultrasound derived deep learning features for predicting axillary lymph node metastasis in breast cancer using graph convolutional networks in a multicenter study

Wed, 2025-07-30 06:00

Sci Rep. 2025 Jul 30;15(1):27796. doi: 10.1038/s41598-025-13086-0.

ABSTRACT

The purpose of this study was to create and validate an ultrasound-based graph convolutional network (US-based GCN) model for the prediction of axillary lymph node metastasis (ALNM) in patients with breast cancer. A total of 820 eligible patients with breast cancer who underwent preoperative breast ultrasonography (US) between April 2016 and June 2022 were retrospectively enrolled. The training cohort consisted of 621 patients, whereas validation cohort 1 included 112 patients, and validation cohort 2 included 87 patients. A US-based GCN model was built using US deep learning features. In validation cohort 1, the US-based GCN model performed satisfactorily, with an AUC of 0.88 and an accuracy of 0.76. In validation cohort 2, the model also performed well, with an AUC of 0.84 and an accuracy of 0.75. This approach has the potential to help guide optimal ALNM management in breast cancer patients, particularly by preventing overtreatment. In conclusion, we developed a US-based GCN model to assess the ALN status of breast cancer patients prior to surgery. The US-based GCN model can provide a possible noninvasive method for detecting ALNM and aid in clinical decision-making. High-level evidence for clinical use in later studies is anticipated to be obtained through prospective studies.
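The abstract does not detail the GCN architecture, but a single graph-convolution step in the common symmetric-normalization formulation can be sketched in NumPy. The graph, feature dimensions, and weights below are arbitrary stand-ins, not the paper's model:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # symmetric normalization
    return np.maximum(norm @ features @ weights, 0.0)   # ReLU

rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])            # toy graph: 3 nodes (e.g. patients)
features = rng.normal(size=(3, 4))        # 4 deep US features per node
weights = rng.normal(size=(4, 2))         # layer weight matrix
out = gcn_layer(adj, features, weights)
print(out.shape)  # → (3, 2)
```

Each node's new representation mixes its own deep US features with those of its graph neighbors, which is the mechanism such a model would use to share information across similar cases.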

PMID:40738938 | DOI:10.1038/s41598-025-13086-0

Categories: Literature Watch

Deep learning for property prediction of natural fiber polymer composites

Wed, 2025-07-30 06:00

Sci Rep. 2025 Jul 30;15(1):27837. doi: 10.1038/s41598-025-10841-1.

ABSTRACT

The increasing availability of diverse experimental and computational data has accelerated the application of deep learning (DL) techniques for predicting polymer properties. A literature review was conducted to show recent advances in DL applied to this field. For example, Li et al. (2023) achieved an [Formula: see text] for predicting stiffness tensors of carbon fiber composites using a hybrid CNN-MLP model trained on microstructure images and two-point statistics. Aligning with this approach, Xue et al. (2023) compared DNN performance with genetic programming and minimax probability machine regression in predicting the lateral confinement coefficient for CFRP-wrapped RC columns, showing competitive predictive capability. These studies demonstrate that specialized architectures, including hybrid CNN-MLP models, feedforward ANNs, graph convolutional networks, and DNNs, provide high accuracy in predicting mechanical, thermal, and chemical properties of polymer composites and biodegradable plastics. Among these, DNNs have consistently shown superior performance in capturing complex nonlinear relationships within heterogeneous datasets, highlighting their suitability for materials characterization and optimization tasks. Building on these insights, this study investigates the effects of four natural fibers (flax, cotton, sisal, hemp) with densities around 1.48-1.54 g/cm[Formula: see text], incorporated at 30 wt.% into three polymer matrices (PLA, PP, epoxy resin) with varying surface treatments (untreated, alkaline, silane). Samples were prepared via extrusion and injection molding (or casting for epoxy) under controlled processing conditions. Mechanical properties (tensile strength, modulus, elongation at break, impact toughness) were measured per ASTM standards, and density was determined by Archimedes' method. Using 180 experimental samples, augmented to 1500 via a bootstrap technique, several regression models (linear, random forest, gradient boosting, and DNNs) were developed to predict mechanical behavior. The best DNN architecture, obtained through hyperparameter optimization with Optuna (four hidden layers of 128-64-32-16 neurons, ReLU activation, 20% dropout, a batch size of 64, and the AdamW optimizer with a learning rate of [Formula: see text]), delivered the best performance (R[Formula: see text] up to 0.89) and reduced MAE by 9-12% compared to gradient boosting, driven by the DNN's ability to capture nonlinear synergies between fiber-matrix interactions, surface treatments, and processing parameters while aligning architectural complexity with multiscale material behavior.
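The bootstrap augmentation step (growing 180 experimental samples to 1500 by resampling with replacement) can be sketched as follows; the feature count and targets below are placeholders, not the study's data:

```python
import numpy as np

def bootstrap_augment(X, y, n_target, seed=0):
    """Grow a dataset to n_target rows by sampling rows with replacement."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_target)
    return X[idx], y[idx]

X = np.random.default_rng(1).normal(size=(180, 8))  # 180 samples, 8 descriptors
y = np.random.default_rng(2).normal(size=180)       # e.g. tensile strength
X_aug, y_aug = bootstrap_augment(X, y, n_target=1500)
print(X_aug.shape, y_aug.shape)  # → (1500, 8) (1500,)
```

Note that plain bootstrapping only reweights existing observations; it adds no new physical information, so the reported gains still rest on the original 180 experiments.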

PMID:40738916 | DOI:10.1038/s41598-025-10841-1

Categories: Literature Watch

Histopathological-based brain tumor grading using 2D-3D multi-modal CNN-transformer combined with stacking classifiers

Wed, 2025-07-30 06:00

Sci Rep. 2025 Jul 30;15(1):27764. doi: 10.1038/s41598-025-11754-9.

ABSTRACT

Reliability in diagnosing and treating brain tumors depends on the accurate grading of histopathological images. However, current grading methods suffer from limited scalability, adaptability, and interpretability, and struggle to accurately capture the complex spatial relationships in histopathological images. This highlights the need for new approaches to overcome these shortcomings. This paper proposes a comprehensive hybrid learning architecture for brain tumor grading. Our pipeline uses complementary feature extraction techniques to capture domain-specific knowledge related to brain tumor morphology, such as texture and intensity patterns. A 2D-3D hybrid convolutional neural network (CNN) efficiently learns hierarchical patterns within the tissue, extracting contextual and spatial features. A vision transformer (ViT) additionally learns global relationships between image regions by concentrating on high-level semantic representations from image patches. Finally, the concatenated features are fed to a stacking ensemble machine learning classifier, allowing it to exploit the individual models' strengths and potentially enhance generalization. Our model's performance is evaluated using two publicly accessible datasets: TCGA and DeepHisto. Extensive experiments with ablation studies and cross-dataset evaluation validate the model's effectiveness, demonstrating significant gains in accuracy, precision, and specificity under cross-validation. In total, our brain tumor grading model outperforms existing methods, achieving an average accuracy, precision, and specificity of 97.1%, 97.1%, and 97.0%, respectively, on the TCGA dataset, and 95%, 94%, and 95% on the DeepHisto dataset. The reported results demonstrate how the proposed architecture, which blends deep learning (DL) with domain expertise, achieves reliable and accurate brain tumor grading.
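The stacking idea, where base-model outputs become meta-features for a final classifier, can be shown in miniature. The base rules and meta-rule below are toy stand-ins operating on a fused feature vector, not the paper's CNN/ViT learners:

```python
import numpy as np

def stacked_predict(x, base_models, meta_model):
    """Stacking ensemble: each base model emits a score for input x;
    the meta-model combines the stacked scores into one prediction."""
    meta_features = np.array([m(x) for m in base_models])
    return meta_model(meta_features)

# Toy base learners scoring a fused (e.g. CNN+ViT) feature vector
base = [lambda x: float(x.mean() > 0),
        lambda x: float(x.max() > 1.5),
        lambda x: float((x > 0).mean())]
meta = lambda f: float(f.mean() > 0.5)   # simple majority-style meta rule

x = np.array([0.2, -0.1, 2.0, 0.5])
print(stacked_predict(x, base, meta))  # → 1.0
```

In a real pipeline the base models would be trained classifiers and the meta-model would itself be fit on held-out base predictions, so that it learns which base learner to trust in which regime.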

PMID:40739310 | DOI:10.1038/s41598-025-11754-9

Categories: Literature Watch
