Deep learning

SpaMask: Dual masking graph autoencoder with contrastive learning for spatial transcriptomics

Thu, 2025-04-03 06:00

PLoS Comput Biol. 2025 Apr 3;21(4):e1012881. doi: 10.1371/journal.pcbi.1012881. eCollection 2025 Apr.

ABSTRACT

Understanding the spatial locations of cells within tissues is crucial for unraveling the organization of cellular diversity. Recent advancements in spatially resolved transcriptomics (SRT) have enabled the analysis of gene expression while preserving the spatial context within tissues. Spatial domain characterization is a critical first step in SRT data analysis, providing the foundation for subsequent analyses and insights into biological implications. Graph neural networks (GNNs) have emerged as a common tool for addressing this challenge due to the structural nature of SRT data. However, current graph-based deep learning approaches often overlook the instability caused by the high sparsity of SRT data. Masking mechanisms, as an effective self-supervised learning strategy, can enhance the robustness of these models. To this end, we propose SpaMask, a dual masking graph autoencoder with contrastive learning for SRT analysis. Unlike previous GNNs, SpaMask masks a portion of spot nodes and spot-to-spot edges to enhance its performance and robustness. SpaMask combines a Masked Graph Autoencoder (MGAE) module and a Masked Graph Contrastive Learning (MGCL) module: MGAE uses node masking to leverage spatial neighbors for improved clustering accuracy, while MGCL applies edge masking to build a contrastive loss that tightens the embeddings of adjacent nodes based on spatial proximity and feature similarity. We conducted a comprehensive evaluation of SpaMask on eight datasets from five different platforms. Compared to existing methods, SpaMask achieves superior clustering accuracy and effective batch correction.
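
For readers who want a concrete picture of the node-masking idea behind the MGAE module, here is a minimal, illustrative PyTorch sketch; the layer sizes, mask ratio, learnable mask token, and loss weighting are assumptions for illustration, not SpaMask's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    """One graph-convolution layer over a dense, row-normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return F.relu(self.lin(adj @ x))          # aggregate neighboring spots, then transform

class MaskedGraphAutoencoder(nn.Module):
    """Node masking: hide a fraction of spots and reconstruct them from spatial neighbors."""
    def __init__(self, in_dim, hid_dim=128, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, in_dim))   # learnable [MASK] token
        self.encoder = DenseGCNLayer(in_dim, hid_dim)
        self.decoder = DenseGCNLayer(hid_dim, in_dim)

    def forward(self, x, adj):
        n = x.size(0)
        masked_idx = torch.randperm(n)[: int(self.mask_ratio * n)]   # spots to hide
        x_masked = x.clone()
        x_masked[masked_idx] = self.mask_token                       # replace their expression
        z = self.encoder(x_masked, adj)                              # neighbors fill in the gaps
        x_rec = self.decoder(z, adj)
        loss = F.mse_loss(x_rec[masked_idx], x[masked_idx])          # score only the masked spots
        return z, loss

# Toy usage: 100 spots, 50 genes, a sparse symmetric spatial-neighbor graph.
x = torch.randn(100, 50)
adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.T) > 0).float() + torch.eye(100)   # symmetrize and add self-loops
adj = adj / adj.sum(dim=1, keepdim=True)             # row-normalize
embeddings, loss = MaskedGraphAutoencoder(in_dim=50)(x, adj)
```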

PMID:40179332 | DOI:10.1371/journal.pcbi.1012881

Categories: Literature Watch

3D Hyperspectral Data Analysis with Spatially Aware Deep Learning for Diagnostic Applications

Thu, 2025-04-03 06:00

Anal Chem. 2025 Apr 3. doi: 10.1021/acs.analchem.4c05549. Online ahead of print.

ABSTRACT

With the rise of artificial intelligence (AI), deep learning algorithms play an increasingly important role in various traditional fields of research, and they have recently spread into data analysis for Raman spectroscopy. However, most current methods classify 1-dimensional (1D) spectral data alone, without considering any neighboring information in space. Despite some successes, this type of method discards the 3-dimensional (3D) structure of Raman hyperspectral scans. Therefore, to investigate the feasibility of preserving spatial information in Raman spectroscopy for data analysis, spatially aware deep learning algorithms were applied to a colorectal tissue data set with 3D Raman hyperspectral scans. This data set contains Raman spectra from normal, hyperplasia, adenoma, and carcinoma tissues, as well as artifacts. First, a modified version of 3D U-Net was utilized for segmentation; second, another convolutional neural network (CNN) using 3D Raman patches was utilized for pixel-wise classification. Both methods were compared with the conventional 1D CNN method, which served as the baseline. Based on the results of both epithelial tissue detection and colorectal cancer detection, it is shown that using spatially neighboring information in 3D Raman scans can increase the performance of deep learning models, although it may also increase the complexity of network training. Apart from the colorectal tissue data set, experiments were also conducted on a cholangiocarcinoma data set to verify generalizability. The findings of this study can potentially be applied to future spectroscopic data analysis tasks, especially for improving model performance in a spatially aware way.
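
As an illustration of the patch-wise 3D approach described above, here is a minimal PyTorch sketch of a 3D-patch classifier; the patch size, channel counts, class count, and layer choices are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RamanPatch3DCNN(nn.Module):
    """Classifies the center pixel of a small spatial patch of Raman spectra.
    Input shape: (batch, 1, spectral_channels, height, width)."""
    def __init__(self, num_classes=5):   # e.g., normal, hyperplasia, adenoma, carcinoma, artifact
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),        # pool only along the spectral axis
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # global pooling -> (batch, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A 3 x 3 spatial neighborhood with 512 spectral channels per pixel.
patch = torch.randn(8, 1, 512, 3, 3)
logits = RamanPatch3DCNN()(patch)    # -> (8, 5)
```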

PMID:40179245 | DOI:10.1021/acs.analchem.4c05549

Categories: Literature Watch

Using deep learning artificial intelligence for sex identification and taxonomy of sand fly species

Thu, 2025-04-03 06:00

PLoS One. 2025 Apr 3;20(4):e0320224. doi: 10.1371/journal.pone.0320224. eCollection 2025.

ABSTRACT

Sandflies are vectors for several tropical diseases such as leishmaniasis, bartonellosis, and sandfly fever. Moreover, sandflies exhibit species-specificity in transmitting particular pathogen species, with females being responsible for disease transmission. Thus, effective classification of sandfly species and the corresponding sex identification are important for disease surveillance and control, managing breeding/populations, research and development, and conducting epidemiological studies. This is typically performed manually by observing internal morphological features, which may be an error-prone, tedious process. In this work, we developed a deep learning artificial intelligence system to determine the sex and to differentiate between three species of two sandfly subgenera (i.e., Phlebotomus alexandri, Phlebotomus papatasi, and Phlebotomus sergenti). Using locally field-caught and prepared samples over a period of two years, and based on convolutional neural networks, transfer learning, and early fusion of genital and pharynx images, we achieved exceptional classification accuracy (greater than 95%) across multiple performance metrics and using a wide range of pre-trained convolutional neural network models. This study not only contributes to the field of medical entomology by providing an automated and accurate solution for sandfly sex identification and taxonomy, but also establishes a framework for leveraging deep learning techniques in similar vector-borne disease research and control efforts.
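
A hedged sketch of one common way to implement early fusion of two image views with a pretrained CNN is shown below; the backbone (ResNet-18), the 6-channel input-level fusion, and the joint species-by-sex label space are assumptions for illustration, since the paper evaluated a wide range of pretrained models.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_fusion_model(num_classes=6):   # 3 species x 2 sexes as one joint label space (assumption)
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    old_conv = backbone.conv1
    # Widen conv1 from 3 to 6 input channels, reusing pretrained weights for both image streams.
    new_conv = nn.Conv2d(6, old_conv.out_channels, kernel_size=old_conv.kernel_size,
                         stride=old_conv.stride, padding=old_conv.padding, bias=False)
    with torch.no_grad():
        new_conv.weight.copy_(torch.cat([old_conv.weight, old_conv.weight], dim=1) / 2)
    backbone.conv1 = new_conv
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)   # new classification head
    return backbone

genital = torch.randn(4, 3, 224, 224)
pharynx = torch.randn(4, 3, 224, 224)
model = build_fusion_model()
logits = model(torch.cat([genital, pharynx], dim=1))   # early fusion at the input level
```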

PMID:40179129 | DOI:10.1371/journal.pone.0320224

Categories: Literature Watch

Advancing enterprise risk management with deep learning: A predictive approach using the XGBoost-CNN-BiLSTM model

Thu, 2025-04-03 06:00

PLoS One. 2025 Apr 3;20(4):e0319773. doi: 10.1371/journal.pone.0319773. eCollection 2025.

ABSTRACT

Enterprise risk management is key to ensuring the sustainable and steady development of enterprises. However, traditional risk management methods have certain limitations when facing complex market environments and diverse risk events. This study introduces a deep learning-based risk management model utilizing the XGBoost-CNN-BiLSTM framework to enhance the prediction and detection of risk events. This model combines the structured data processing capabilities of XGBoost, the feature extraction capabilities of CNN, and the time series processing capabilities of BiLSTM to more comprehensively capture the key characteristics of risk events. In experiments on multiple data sets, our model achieved significant advantages in key indicators such as accuracy, recall, F1 score, and AUC. For example, on the S&P 500 historical data set, our model achieved a precision of 93.84% and a recall of 95.75%, further verifying its effectiveness in predicting risk events. These results demonstrate the robustness and superiority of our model. Our research not only provides a more reliable risk management method for enterprises but also offers useful guidance for applying deep learning in the field of risk management.
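
A minimal sketch of how the CNN and BiLSTM components might be combined with an XGBoost score is shown below; the window length, feature counts, hidden sizes, and the late concatenation of the XGBoost output are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_features, xgb_dim=1, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(                 # local pattern extraction over the time axis
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden + xgb_dim, 1)   # fuse sequence features with an XGBoost score

    def forward(self, x, xgb_score):
        # x: (batch, time, features); xgb_score: (batch, xgb_dim) from a separately fitted XGBoost model
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # -> (batch, time, 32)
        out, _ = self.bilstm(h)
        last = out[:, -1]                                  # final time step (both directions)
        return torch.sigmoid(self.head(torch.cat([last, xgb_score], dim=1)))

x = torch.randn(16, 30, 8)    # 16 samples, 30-step window, 8 market/firm features
xgb = torch.rand(16, 1)       # e.g., a risk probability predicted by XGBoost on structured data
risk_prob = CNNBiLSTM(n_features=8)(x, xgb)
```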

PMID:40179109 | DOI:10.1371/journal.pone.0319773

Categories: Literature Watch

Revisiting Supervised Learning-Based Photometric Stereo Networks

Thu, 2025-04-03 06:00

IEEE Trans Pattern Anal Mach Intell. 2025 Apr 3;PP. doi: 10.1109/TPAMI.2025.3557498. Online ahead of print.

ABSTRACT

Deep learning has significantly propelled the development of photometric stereo by handling the challenges posed by unknown reflectance and global illumination effects. However, how supervised learning-based photometric stereo networks resolve these challenges remains to be elucidated. In this paper, we aim to reveal how existing methods address these challenges by revisiting their deep features, deep feature encoding strategies, and network architectures. Based on the insights gained from our analysis, we propose ESSENCE-Net, which effectively encodes deep shading features with an easy-first encoding strategy, enhances shading features with shading supervision, and accurately decodes surface normals with spatial context-aware attention. The experimental results verify that the proposed method outperforms state-of-the-art methods on three benchmark datasets, whether with dense or sparse inputs. The code is available at https://github.com/wxy-zju/ESSENCE-Net.

PMID:40178960 | DOI:10.1109/TPAMI.2025.3557498

Categories: Literature Watch

Towards Better Cephalometric Landmark Detection with Diffusion Data Generation

Thu, 2025-04-03 06:00

IEEE Trans Med Imaging. 2025 Apr 3;PP. doi: 10.1109/TMI.2025.3557430. Online ahead of print.

ABSTRACT

Cephalometric landmark detection is essential for orthodontic diagnostics and treatment planning. Nevertheless, the scarcity of samples in data collection and the extensive effort required for manual annotation have significantly impeded the availability of diverse datasets. This limitation has restricted the effectiveness of deep learning-based detection methods, particularly those based on large-scale vision models. To address these challenges, we have developed an innovative data generation method capable of producing diverse cephalometric X-ray images along with corresponding annotations without human intervention. To achieve this, our approach begins by constructing new cephalometric landmark annotations using anatomical priors. Then, we employ a diffusion-based generator to create realistic X-ray images that correspond closely with these annotations. To achieve precise control in producing samples with different attributes, we introduce a novel prompt-annotated cephalometric X-ray image dataset. This dataset includes real cephalometric X-ray images and detailed medical text prompts describing the images. By leveraging these detailed prompts, our method refines the generation process to control different styles and attributes. Facilitated by the large, diverse generated data, we introduce large-scale vision detection models into the cephalometric landmark detection task to improve accuracy. Experimental results demonstrate that training with the generated data substantially enhances performance. Compared to methods without the generated data, our approach improves the Success Detection Rate (SDR) by 6.5%, attaining a notable 82.2%. All code and data are available at: https://um-lab.github.io/cepha-generation/.

PMID:40178956 | DOI:10.1109/TMI.2025.3557430

Categories: Literature Watch

Artificial Intelligence for the Detection of Patient-Ventilator Asynchrony

Thu, 2025-04-03 06:00

Respir Care. 2025 Apr 3. doi: 10.1089/respcare.12540. Online ahead of print.

ABSTRACT

Patient-ventilator asynchrony (PVA) is a challenge to invasive mechanical ventilation characterized by misalignment of ventilatory support and patient respiratory effort. PVA is highly prevalent and associated with adverse clinical outcomes, including increased work of breathing, oxygen consumption, and risk of barotrauma. Artificial intelligence (AI) is a potentially transformative solution offering capabilities for automated detection of PVA. This narrative review characterizes the landscape of AI models designed for PVA detection and quantification. A comprehensive literature search identified 13 studies spanning diverse settings and patient populations. Machine learning (ML) techniques, derivation datasets, types of asynchronies detected, and performance metrics were assessed to provide a contemporary view of AI in this domain. We reviewed 166 articles published between 1989 and April 2024, of which 13 were included, encompassing 332 participants and analyzing >5.8 million breaths. Patient counts ranged from 8 to 107, and breath counts ranged from 1,375 to 4.2 million per study. The indication for invasive mechanical ventilation was ARDS in three articles, whereas the remaining studies reported other indications. Various ML methods as well as newer deep learning techniques were used to address PVA types. Sensitivity and specificity were >0.9 for 10 of the 13 models, and 8 models reported accuracy of >0.9. AI models show significant potential to accurately detect and quantify PVA in invasive mechanical ventilation, displaying high accuracy across various populations and asynchrony types. Future work should focus on model validation in diverse clinical settings and patient populations.

PMID:40178919 | DOI:10.1089/respcare.12540

Categories: Literature Watch

Hyaluronan network remodeling by ZEB1 and ITIH2 enhances the motility and invasiveness of cancer cells

Thu, 2025-04-03 06:00

J Clin Invest. 2025 Apr 3:e180570. doi: 10.1172/JCI180570. Online ahead of print.

ABSTRACT

Hyaluronan (HA) in the extracellular matrix promotes epithelial-to-mesenchymal transition (EMT) and metastasis; however, the mechanism by which the HA network constructed by cancer cells regulates cancer progression and metastasis in the tumor microenvironment (TME) remains largely unknown. In this study, inter-alpha-trypsin inhibitor heavy chain 2 (ITIH2), an HA-binding protein, was confirmed to be secreted from mesenchymal-like lung cancer cells when co-cultured with cancer-associated fibroblasts. ITIH2 expression is transcriptionally upregulated by the EMT-inducing transcription factor ZEB1, along with HA synthase 2 (HAS2), which positively correlates with ZEB1 expression. Depletion of ITIH2 and HAS2 reduced HA matrix formation and the migration and invasion of lung cancer cells. Furthermore, ZEB1 facilitates alternative splicing and isoform expression of CD44, an HA receptor, and CD44 knockdown suppresses the motility and invasiveness of lung cancer cells. Using a deep learning-based drug-target interaction algorithm, we identified an ITIH2 inhibitor (sincalide) that inhibited HA matrix formation and migration of lung cancer cells, preventing metastatic colonization of lung cancer cells in mouse models. These findings suggest that ZEB1 remodels the HA network in the TME through the regulation of ITIH2, HAS2, and CD44, presenting a strategy for targeting this network to suppress lung cancer progression.

PMID:40178908 | DOI:10.1172/JCI180570

Categories: Literature Watch

Arterial phase CT radiomics for non-invasive prediction of Ki-67 proliferation index in pancreatic solid pseudopapillary neoplasms

Thu, 2025-04-03 06:00

Abdom Radiol (NY). 2025 Apr 3. doi: 10.1007/s00261-025-04921-z. Online ahead of print.

ABSTRACT

BACKGROUND: This study aimed to preoperatively predict Ki-67 proliferation levels in patients with pancreatic solid pseudopapillary neoplasm (pSPN) using radiomics features extracted from arterial phase helical CT images.

METHODS: We retrospectively analyzed 92 patients (Ningbo Medical Center Lihuili Hospital: n = 64, Taizhou Central Hospital: n = 28) with pathologically confirmed pSPN from June 2015 to June 2023. Ki-67 positivity > 3% was considered high. Radiomics features were extracted using PyRadiomics, and patients were divided into a training cohort (n = 64) and a validation cohort (n = 28). A radiomics signature was constructed, and a CT radiomics score (CTscore) was calculated. Deep learning models were employed for prediction, with early stopping to prevent overfitting.

RESULTS: Seven key radiomics features were selected via LASSO regression with cross-validation. The deep learning model demonstrated improved accuracy when demographics and the CTscore were included, with features such as Morphology and the CTscore contributing significantly to predictive accuracy. The best-performing models, including GBM and deep learning algorithms, achieved high predictive performance with an AUC of up to 0.946 in the training cohort.
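
A generic sketch of LASSO-style radiomics feature selection with cross-validation is shown below; the feature matrix is synthetic, an L1-penalized logistic regression stands in for the LASSO step, and the penalty grid, fold count, and "CTscore" construction are illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 100))          # 64 training patients x 100 radiomics features (synthetic)
y = rng.integers(0, 2, size=64)         # 1 = Ki-67 > 3% (high), 0 = low

# L1-penalized logistic regression with 5-fold CV to choose the penalty strength.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear", max_iter=5000),
)
model.fit(X, y)

coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
selected = np.flatnonzero(coefs)        # features with nonzero coefficients survive
# A radiomics score is then typically the linear combination of the selected features.
ct_score = X[:, selected] @ coefs[selected]
print(f"{selected.size} features selected")
```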

CONCLUSIONS: We developed a robust deep learning-based radiomics model using arterial phase CT images to predict Ki-67 levels in pSPN patients, identifying CTscore and Morphology as key predictors. This non-invasive approach has potential utility in guiding personalized preoperative treatment strategies.

CLINICAL TRIAL NUMBER: Not applicable.

PMID:40178588 | DOI:10.1007/s00261-025-04921-z

Categories: Literature Watch

Free-breathing, Highly Accelerated, Single-beat, Multisection Cardiac Cine MRI with Generative Artificial Intelligence

Thu, 2025-04-03 06:00

Radiol Cardiothorac Imaging. 2025 Apr;7(2):e240272. doi: 10.1148/ryct.240272.

ABSTRACT

Purpose: To develop and evaluate a free-breathing, highly accelerated, multisection, single-beat cine sequence for cardiac MRI.

Materials and Methods: This prospective study, conducted from July 2022 to December 2023, included participants with various cardiac conditions as well as healthy participants who were imaged using a 3-T MRI system. A single-beat sequence was implemented, collecting data for each section in one heartbeat. Images were acquired with an in-plane spatiotemporal resolution of 1.9 × 1.9 mm2 and 37 msec and reconstructed using resolution enhancement generative adversarial inline neural network (REGAIN), a deep learning model. Multibreath-hold k-space-segmented (4.2-fold acceleration) and free-breathing single-beat (14.8-fold acceleration) cine images were collected, both reconstructed with REGAIN. Left ventricular (LV) and right ventricular (RV) parameters between the two methods were evaluated with linear regression, Bland-Altman analysis, and Pearson correlation. Three expert cardiologists independently scored diagnostic and image quality. Scan and rescan reproducibility was evaluated in a subset of participants 1 year apart using the intraclass correlation coefficient (ICC).

Results: This study included 136 participants (mean age [SD], 54 years ± 15; 69 female, 67 male), 40 healthy and 96 with cardiac conditions. k-Space-segmented and single-beat scan times were 2.6 minutes ± 0.8 and 0.5 minute ± 0.1, respectively. Strong correlations (P < .001) were observed between k-space-segmented and single-beat cine parameters in both LV (r = 0.97-0.99) and RV (r = 0.89-0.98). Scan and rescan reproducibility of single-beat cine was excellent (ICC, 0.97-1.0). Agreement among readers was high, with 125 of 136 (92%) images consistently assessed as diagnostic and 133 of 136 (98%) consistently rated as having good image quality by all readers.

Conclusion: Free-breathing 30-second single-beat cardiac cine MRI yielded accurate biventricular measurements, reduced scan time, and maintained high diagnostic and image quality compared with conventional multibreath-hold k-space-segmented cine images.

Keywords: MR-Imaging, Cardiac, Heart, Imaging Sequences, Comparative Studies, Technology Assessment. Supplemental material is available for this article. © RSNA, 2025.
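
For reference, the Bland-Altman agreement computation used to compare the two cine methods is straightforward; the sketch below uses synthetic values, not study data, and the 1.96-sigma limits of agreement are the standard textbook definition.

```python
import numpy as np

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# e.g., LV ejection fraction (%) from k-space-segmented vs single-beat cine (synthetic numbers)
segmented   = np.array([58.0, 61.2, 55.4, 63.0, 49.8])
single_beat = np.array([57.1, 60.9, 56.0, 62.2, 50.5])
bias, (lo, hi) = bland_altman(segmented, single_beat)
print(f"bias={bias:.2f}%, limits of agreement=({lo:.2f}%, {hi:.2f}%)")
```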

PMID:40178397 | DOI:10.1148/ryct.240272

Categories: Literature Watch

CoupleVAE: coupled variational autoencoders for predicting perturbational single-cell RNA sequencing data

Thu, 2025-04-03 06:00

Brief Bioinform. 2025 Mar 4;26(2):bbaf126. doi: 10.1093/bib/bbaf126.

ABSTRACT

With the rapid advances in single-cell sequencing technology, it is now feasible to conduct in-depth genetic analysis in individual cells. Studying the dynamics of single cells in response to perturbations is of great significance for understanding the functions and behaviors of living organisms. However, the acquisition of post-perturbation cellular states via biological experiments is frequently cost-prohibitive. Predicting single-cell perturbation responses therefore poses a critical challenge in computational biology. In this work, we propose a novel deep learning method called coupled variational autoencoders (CoupleVAE), devised to predict post-perturbation single-cell RNA-seq data. CoupleVAE is composed of two coupled VAEs connected by a coupler: two encoders first extract latent features for control and perturbed cells, the coupler then translates between the two latent spaces through two nonlinear mappings, and two separate decoders finally generate control and perturbed data from the encoded and translated features. CoupleVAE facilitates a more intricate state transformation of single cells within the latent space. Experiments on three real datasets covering infection, stimulation, and cross-species prediction show that CoupleVAE surpasses existing models in predicting single-cell RNA-seq data for perturbed cells, achieving superior accuracy.
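
The schematic PyTorch sketch below illustrates the coupled-VAE structure described above, two VAEs whose latent codes are translated by a nonlinear coupler; the dimensions, layer choices, and omitted training losses are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_genes, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent), nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, n_genes))

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return z, mu, logvar

class CoupledVAEs(nn.Module):
    def __init__(self, n_genes, latent=32):
        super().__init__()
        self.vae_ctrl = VAE(n_genes, latent)     # VAE for control cells
        self.vae_pert = VAE(n_genes, latent)     # VAE for perturbed cells
        # Coupler: nonlinear mappings between the two latent spaces, one per direction.
        self.ctrl_to_pert = nn.Sequential(nn.Linear(latent, latent), nn.ReLU(), nn.Linear(latent, latent))
        self.pert_to_ctrl = nn.Sequential(nn.Linear(latent, latent), nn.ReLU(), nn.Linear(latent, latent))

    def predict_perturbed(self, x_ctrl):
        """Translate control-cell latent codes into the perturbed latent space and decode."""
        z_ctrl, _, _ = self.vae_ctrl.encode(x_ctrl)
        return self.vae_pert.dec(self.ctrl_to_pert(z_ctrl))

x_ctrl = torch.randn(128, 2000)                  # 128 control cells x 2000 genes (toy data)
x_pert_hat = CoupledVAEs(n_genes=2000).predict_perturbed(x_ctrl)
```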

PMID:40178283 | DOI:10.1093/bib/bbaf126

Categories: Literature Watch

Data imbalance in drug response prediction: multi-objective optimization approach in deep learning setting

Thu, 2025-04-03 06:00

Brief Bioinform. 2025 Mar 4;26(2):bbaf134. doi: 10.1093/bib/bbaf134.

ABSTRACT

Drug response prediction (DRP) methods tackle the complex task of associating the effectiveness of small molecules with the specific genetic makeup of the patient. Anti-cancer DRP is a particularly challenging task requiring costly experiments, as the underlying pathogenic mechanisms are broad and associated with multiple genomic pathways. The scientific community has exerted significant effort to generate public drug screening datasets, giving a path to various machine learning models that attempt to reason over the complex data space of small compounds and biological characteristics of tumors. However, data depth is still lacking compared with application domains like computer vision or natural language processing, limiting current learning capabilities. To combat this issue and improve the generalizability of DRP models, we explore strategies that explicitly address the imbalance in DRP datasets. We reframe the problem as a multi-objective optimization across multiple drugs to maximize deep learning model performance. We implement this approach by constructing a Multi-Objective Optimization Regularized by Loss Entropy loss function and plugging it into a deep learning model. We demonstrate the utility of the proposed methods and make suggestions for further potential applications of this work to achieve desirable outcomes in the healthcare field.
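
The abstract does not give the exact formulation, but the sketch below shows one plausible way to regularize a multi-drug objective by the entropy of the per-drug loss distribution; the MSE base loss, the normalization, and the weighting term lam are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def entropy_regularized_multidrug_loss(preds, targets, drug_ids, lam=0.1):
    """preds/targets: (batch,) response values; drug_ids: (batch,) integer drug labels."""
    per_sample = F.mse_loss(preds, targets, reduction="none")
    drugs = drug_ids.unique()
    per_drug = torch.stack([per_sample[drug_ids == d].mean() for d in drugs])
    # Treat normalized per-drug losses as a distribution; high entropy means no drug dominates.
    p = per_drug / per_drug.sum().clamp_min(1e-8)
    entropy = -(p * (p + 1e-8).log()).sum()
    # Minimize the mean per-drug loss while encouraging a balanced (high-entropy) loss profile,
    # hence the negative sign on the entropy term.
    return per_drug.mean() - lam * entropy

preds = torch.randn(64)
targets = torch.randn(64)
drug_ids = torch.randint(0, 8, (64,))   # 8 hypothetical drugs in the batch
loss = entropy_regularized_multidrug_loss(preds, targets, drug_ids)
```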

PMID:40178282 | DOI:10.1093/bib/bbaf134

Categories: Literature Watch

DOMSCNet: a deep learning model for the classification of stomach cancer using multi-layer omics data

Thu, 2025-04-03 06:00

Brief Bioinform. 2025 Mar 4;26(2):bbaf115. doi: 10.1093/bib/bbaf115.

ABSTRACT

The rapid advancement of next-generation sequencing (NGS) technology and the expanding availability of NGS datasets have led to a significant surge in biomedical research. To better understand the molecular processes underlying cancer and to support the development of its diagnosis, prediction, and therapy, NGS data analysis is crucial. However, NGS multi-layer omics datasets are high-dimensional and highly complex. In recent times, some computational methods have been developed for cancer omics data interpretation. However, various existing methods face challenges in accounting for diverse types of cancer omics data and struggle to effectively extract informative features for the integrated identification of core units. To address these challenges, we proposed a hybrid feature selection (HFS) technique to detect optimal features from multi-layer omics datasets. Subsequently, this study proposes DOMSCNet, a novel hybrid deep recurrent neural network-based model, to classify stomach cancer. The proposed model was made generic for all four multi-layer omics datasets. To observe the robustness of the DOMSCNet model, it was validated with eight external datasets. Experimental results showed that the SelectKBest-maximum relevance minimum redundancy-Boruta (SMB) HFS technique outperformed all other HFS techniques. Across the four multi-layer omics datasets and the validation datasets, the proposed DOMSCNet model outperformed existing classifiers as well as the other proposed classifiers.
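
The sketch below illustrates only the first, SelectKBest stage of a hybrid feature-selection pipeline like the SMB technique named above; the synthetic data, the ANOVA F-score criterion, and k = 500 are assumptions, and the subsequent mRMR and Boruta pruning stages are omitted.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5000))        # 200 samples x 5000 multi-omics features (synthetic)
y = rng.integers(0, 2, size=200)        # stomach cancer vs. control labels

selector = SelectKBest(score_func=f_classif, k=500)    # keep the top 500 features by ANOVA F-score
X_filtered = selector.fit_transform(X, y)
candidate_idx = selector.get_support(indices=True)      # indices handed on to mRMR/Boruta stages
print(X_filtered.shape)                                  # (200, 500)
```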

PMID:40178281 | DOI:10.1093/bib/bbaf115

Categories: Literature Watch

Application of Deep Learning to Predict the Persistence, Bioaccumulation, and Toxicity of Pharmaceuticals

Thu, 2025-04-03 06:00

J Chem Inf Model. 2025 Apr 3. doi: 10.1021/acs.jcim.4c02293. Online ahead of print.

ABSTRACT

This study investigates the application of a deep learning (DL) model, specifically a message-passing neural network (MPNN) implemented through Chemprop, to predict the persistence, bioaccumulation, and toxicity (PBT) characteristics of compounds, with a focus on pharmaceuticals. We employed a clustering strategy to provide a fair assessment of the model performances. By applying the generated model to a set of pharmaceutically relevant molecules, we aim to highlight potential PBT chemicals and extract PBT-relevant substructures. These substructures can serve as structural flags, alerting drug designers to potential environmental issues from the earliest stages of the drug discovery process. Incorporating these findings into pharmaceutical development workflows is expected to drive significant advancements in creating more environmentally friendly drug candidates while preserving their therapeutic efficacy.
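
As context for the message-passing neural network mentioned above, the sketch below shows a generic message-passing step over a molecular graph; this is not Chemprop's API, and the state dimension, GRU update, and toy bond list are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from a source/destination atom pair
        self.upd = nn.GRUCell(dim, dim)      # recurrent update of each atom's hidden state

    def forward(self, h, edges):
        # h: (num_atoms, dim) atom states; edges: (num_edges, 2) directed bond index pairs
        src, dst = edges[:, 0], edges[:, 1]
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=1)))   # per-bond messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)                # sum incoming messages per atom
        return self.upd(agg, h)

h = torch.randn(9, 64)                                   # e.g., 9 atoms with 64-dim states
edges = torch.tensor([[0, 1], [1, 0], [1, 2], [2, 1]])   # directed bonds of a toy molecule
h = MessagePassingLayer()(h, edges)
# Molecule-level PBT predictions would pool the final atom states and pass them to an MLP head.
```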

PMID:40178174 | DOI:10.1021/acs.jcim.4c02293

Categories: Literature Watch

Early Colon Cancer Prediction from Histopathological Images Using Enhanced Deep Learning with Confidence Scoring

Thu, 2025-04-03 06:00

Cancer Invest. 2025 Apr 3:1-19. doi: 10.1080/07357907.2025.2483302. Online ahead of print.

ABSTRACT

Colon Cancer (CC) arises from abnormal cell growth in the colon, which severely impacts a person's health and quality of life. Detecting CC through histopathological images for early diagnosis offers substantial benefits in medical diagnostics. This study proposes NalexNet, a hybrid deep-learning classifier, to enhance classification accuracy and computational efficiency. The research methodology involves Vahadane stain normalization for preprocessing and Watershed segmentation for accurate tissue separation. The Teamwork Optimization Algorithm (TOA) is employed for optimal feature selection to reduce redundancy and improve classification performance. Furthermore, the NalexNet model is structured with convolutional layers and normal and reduction cells, ensuring efficient feature representation and high classification accuracy. Experimental results demonstrate that the proposed model achieves a precision of 99.9% and an accuracy of 99.5%, significantly outperforming existing models. This study contributes to the development of an automated and computationally efficient CC classification system, which has the potential for real-world clinical implementation, aiding pathologists in early and accurate diagnosis.
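
The sketch below illustrates a standard marker-based watershed step of the kind named in the preprocessing pipeline above, using scikit-image; the Otsu threshold, footprint size, and the assumption that Vahadane stain normalization has already been applied are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_tissue(gray):
    """gray: 2D float image (stain-normalized histopathology tile converted to grayscale)."""
    mask = gray < threshold_otsu(gray)                  # tissue assumed darker than background
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, footprint=np.ones((15, 15)), labels=mask)
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per local maximum
    return watershed(-distance, markers, mask=mask)           # separate touching tissue regions

labels = segment_tissue(np.random.rand(256, 256))   # toy input; real tiles come from the slide scanner
```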

PMID:40178023 | DOI:10.1080/07357907.2025.2483302

Categories: Literature Watch

CMV2U-Net: A U-shaped network with edge-weighted features for detecting and localizing image splicing

Thu, 2025-04-03 06:00

J Forensic Sci. 2025 Apr 3. doi: 10.1111/1556-4029.70033. Online ahead of print.

ABSTRACT

The practice of cutting and pasting portions of one image into another, known as "image splicing," is commonplace in the field of image manipulation. Image splicing detection using deep learning has been a hot research topic for the past few years. However, current deep learning implementations have two problems: first, feature fusion is not good enough, and second, only simple models are used for feature extraction and encoding, which makes the models vulnerable to overfitting. To tackle these problems, this research proposes CMV2U-Net, an edge-weighted U-shaped network-based image splicing forgery localization approach. The first step is the development of a feature extraction module that can process two streams of input images simultaneously, allowing the simultaneous extraction of semantically connected and semantically agnostic features. A hierarchical fusion approach has been devised to prevent the loss of information in shallow features that are either semantically related or semantically irrelevant, and a channel attention mechanism monitors manipulation traces across multiple levels. Extensive experiments on numerous public datasets show that CMV2U-Net provides high AUC and F1 in localizing tampered regions, outperforming state-of-the-art techniques, and that it remains robust to post-processing operations such as noise, Gaussian blur, and JPEG compression.
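
The channel attention mechanism referred to above is commonly implemented as a squeeze-and-excitation-style block; the minimal PyTorch sketch below shows that pattern, with the reduction ratio and its placement inside CMV2U-Net being assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool per channel
        return x * w[:, :, None, None]           # excite: reweight the feature channels

feat = torch.randn(2, 64, 128, 128)              # e.g., a fused two-stream feature map
attended = ChannelAttention(64)(feat)
```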

PMID:40177991 | DOI:10.1111/1556-4029.70033

Categories: Literature Watch

Deep Learning-Powered Colloidal Digital SERS for Precise Monitoring of Cell Culture Media

Thu, 2025-04-03 06:00

Nano Lett. 2025 Apr 3. doi: 10.1021/acs.nanolett.5c01071. Online ahead of print.

ABSTRACT

Maintaining consistent quality in biomanufacturing is essential for producing high-quality complex biologics. Yet, current process analytical technologies (PAT) often fall short in achieving rapid and accurate monitoring of small-molecule critical process parameters and critical quality attributes. Surface-enhanced Raman spectroscopy (SERS) holds great promise but faces challenges like intensity fluctuations, compromising reproducibility. Herein, we propose a deep learning-powered colloidal digital SERS platform. This innovation converts SERS spectra into binary "ON/OFF" signals based on defined intensity thresholds, which allows single-molecule event visualization and reduces false positives. Through integration with deep learning, this platform enables detection of a broad range of analytes, unlimited by the lack of characteristic SERS peaks. Furthermore, we demonstrate its accuracy and reproducibility for studying AMBIC 1.1 mammalian cell culture media. These results highlight its rapidity, accuracy, and precision, paving the way for widespread adoption and scale-up as a novel PAT tool in biomanufacturing and diagnostics.
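
The digital "ON/OFF" conversion described above can be pictured with a short NumPy sketch: each acquired spectrum is binarized against an intensity threshold and the ON events are counted; the synthetic spectra, the chosen Raman-shift window, and the mean + 3*sigma threshold rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
spectra = rng.normal(loc=100, scale=10, size=(2000, 1024))   # 2000 acquisitions x 1024 Raman shifts
band = slice(480, 500)                                       # channels around a hypothetical analyte peak

signal = spectra[:, band].max(axis=1)                        # peak intensity per acquisition
threshold = signal.mean() + 3 * signal.std()                 # e.g., mean + 3*sigma of a blank measurement
digital = signal > threshold                                 # binary ON/OFF per acquisition
print(f"digital counts: {digital.sum()} / {len(digital)} spectra ON")
```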

PMID:40177940 | DOI:10.1021/acs.nanolett.5c01071

Categories: Literature Watch

ChatGPT for speech-impaired assistance

Thu, 2025-04-03 06:00

Disabil Rehabil Assist Technol. 2025 Apr 3:1-3. doi: 10.1080/17483107.2025.2483300. Online ahead of print.

ABSTRACT

Background: Speech and language impairments, though the terms are often used interchangeably, are two very distinct types of challenges. A speech impairment may lead to an impaired ability to produce speech sounds, while communication may be affected by a lack of fluency or articulation of words. Consequently, an impaired ability to articulate may affect academic achievement, social development, and progress in life. ChatGPT (Generative Pretrained Transformer) is an open-access AI (artificial intelligence) tool developed by OpenAI, based on large language models (LLMs), with the ability to respond to human prompts and generate text using supervised and unsupervised machine learning (ML) algorithms. This article explores the current role and future perspectives of the ChatGPT AI tool for speech-impaired assistance.

Methods: A cumulative search strategy using databases of PubMed, Google Scholar, Scopus and grey literature was conducted to generate this narrative review.

Results: A spectrum of enabling technologies for speech and language impairment has been explored. Augmentative and alternative communication (AAC) technology, integration with neuroprosthesis technology, and speech therapy applications offer considerable potential to aid speech- and language-impaired individuals.

Conclusion: Current applications of AI, ChatGPT, and other LLMs offer promising solutions for enhancing communication in people affected by speech and language impairment. However, further research and development are required to ensure the affordability, accessibility, and authenticity of these AI tools in clinical practice.

PMID:40177878 | DOI:10.1080/17483107.2025.2483300

Categories: Literature Watch

Age-sex-specific burden of urological cancers attributable to risk factors in China and its provinces, 1990-2021, and forecasts with scenarios simulation: a systematic analysis for the Global Burden of Disease Study 2021

Thu, 2025-04-03 06:00

Lancet Reg Health West Pac. 2025 Mar 18;56:101517. doi: 10.1016/j.lanwpc.2025.101517. eCollection 2025 Mar.

ABSTRACT

BACKGROUND: As global aging intensifies, urological cancers pose increasing health and economic burdens. In China, home to one-fifth of the world's population, monitoring the distribution and determinants of these cancers and simulating the effects of health interventions are crucial for global and national health.

METHODS: With Global Burden of Disease (GBD) China database, the present study analyzed age-sex-specific patterns of incidence, prevalence, mortality, disability-adjusted life years (DALYs), years lived with disability (YLDs), and years of life lost (YLLs) in China and its 34 provinces as well as the association between gross domestic product per capita (GDPPC) and these patterns. Importantly, a multi-attentive deep learning pipeline (iTransformer) was pioneered to model the spatiotemporal patterns of urological cancers, risk factors, GDPPC, and population, to provide age-sex-location-specific long-term forecasts of urological cancer burdens, and to investigate the impacts of risk-factor-directed interventions on their future burdens.

FINDINGS: From 1990 to 2021, the incidence and prevalence of urological cancers in China have increased, leading to 266,887 new cases (95% confidence interval: 205,304-346,033) and 159,506,067 (122,360,000-207,447,070) prevalent cases in 2021, driven primarily by males aged 55+ years. In 2021, Taiwan, Beijing, and Zhejiang had the highest age-standardized incidence rate (ASIR) and age-standardized prevalence rate of urological cancer in China, highlighting significant regional disparities in the disease burden. Conversely, the national age-standardized mortality rate (ASMR) declined from 6.5 (5.1-7.8) per 100,000 population in 1990 to 5.6 (4.4-7.2) in 2021, notably in Jilin [-166.7% (-237 to -64.6)], Tibet [-135.4% (-229.1 to 4.4)], and Heilongjiang [-118.5% (-206.5 to -4.6)]. Specifically, the national ASMR for bladder and testicular cancers changed by -32.1% (-47.9 to 1.9) and -31.1% (-50.2 to 7.2), respectively, whereas that for prostate and kidney cancers rose by 7.9% (-18.4 to 43.6) and 9.2% (-12.2 to 36.5). Age-standardized DALYs, YLDs, and YLLs for urological cancers were consistent with the ASMR. Males suffered higher burdens of urological cancers than females in all populations, except those aged <5 years. Regionally and provincially, provinces with high GDPPC have the highest burden of prostate cancer, while the main burden in other provinces is bladder cancer. The main risk factors for urological cancers in 2021 were smoking [accounting for 55.1% (42.7-67.4)], high body mass index [13.9% (5.3-22.4)], and high fasting plasma glucose [5.9% (-0.8 to 13.4)] for both males and females, with smoking particularly affecting males and high body mass index particularly affecting females. Between 2022 and 2040, the ASIR of urological cancers is projected to increase from 10.09 (9.19-10.99) to 14.42 (14.30-14.54), despite a decreasing ASMR. Notably, prostate cancer is projected to surpass bladder cancer as the primary subcategory, with those aged 55+ years showing the highest increase in ASIR, highlighting the aging-related transformation of the urological cancer burden. In simulations of targeted interventions, smoking control achieved the greatest reduction in urological cancer burden, mainly affecting male bladder cancer (a 45.8% decline). In females, controlling smoking and high fasting plasma glucose reduced urological cancer ASMR by 5.3% and 5.8%, respectively. Finally, the averaged mean squared percentage error, absolute percentage error, and root-mean-square logarithmic error of the forecasting model were 0.54 ± 0.22, 1.51 ± 1.26, and 0.15 ± 0.07, respectively, indicating that the model performs well.
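
The three forecast-error metrics quoted at the end of the findings can be computed as in the short sketch below; these are the usual textbook definitions applied to toy numbers, and the study's exact averaging scheme over ages, sexes, and provinces is not reproduced here.

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    pct = (y_true - y_pred) / y_true
    mspe = np.mean(pct ** 2)                                   # mean squared percentage error
    ape = np.mean(np.abs(pct))                                 # mean absolute percentage error
    rmsle = np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))  # root-mean-square log error
    return mspe, ape, rmsle

# Toy ASIR-style values (per 100,000) for observed vs. forecast years.
mspe, ape, rmsle = forecast_errors([10.1, 12.4, 14.0], [9.8, 13.0, 14.6])
print(mspe, ape, rmsle)
```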

INTERPRETATION: Urological cancers exhibit an aging trend, with increased incidence rates among the population aged 55+ years, making prostate cancer the most burdensome subcategory. Moreover, urological cancer burden is imbalanced by age, sex, and province. Based on our findings, authorities and policymakers could refine or tailor population-specific health strategies, including promoting smoking cessation, weight reduction, and blood sugar control.

FUNDING: Bill & Melinda Gates Foundation.

PMID:40177596 | PMC:PMC11964562 | DOI:10.1016/j.lanwpc.2025.101517

Categories: Literature Watch

The promise and limitations of artificial intelligence in CTPA-based pulmonary embolism detection

Thu, 2025-04-03 06:00

Front Med (Lausanne). 2025 Mar 19;12:1514931. doi: 10.3389/fmed.2025.1514931. eCollection 2025.

ABSTRACT

Computed tomography pulmonary angiography (CTPA) is an essential diagnostic tool for identifying pulmonary embolism (PE). The integration of AI has significantly advanced CTPA-based PE detection, enhancing diagnostic accuracy and efficiency. This review investigates the growing role of AI in the diagnosis of pulmonary embolism using CTPA imaging. The review examines the capabilities of AI algorithms, particularly deep learning models, in analyzing CTPA images for PE detection. It assesses their sensitivity and specificity compared to human radiologists. AI systems, using large datasets and complex neural networks, demonstrate remarkable proficiency in identifying subtle signs of PE, aiding clinicians in timely and accurate diagnosis. In addition, AI-powered CTPA analysis shows promise in risk stratification, prognosis prediction, and treatment optimization for PE patients. Automated image interpretation and quantitative analysis facilitate rapid triage of suspected cases, enabling prompt intervention and reducing diagnostic delays. Despite these advancements, several limitations remain, including algorithm bias, interpretability issues, and the necessity for rigorous validation, which hinder widespread adoption in clinical practice. Furthermore, integrating AI into existing healthcare systems requires careful consideration of regulatory, ethical, and legal implications. In conclusion, AI-driven CTPA-based PE detection presents unprecedented opportunities to enhance diagnostic precision and efficiency. However, addressing the associated limitations is critical for safe and effective implementation in routine clinical practice. Successful utilization of AI in revolutionizing PE care necessitates close collaboration among researchers, medical professionals, and regulatory organizations.

PMID:40177281 | PMC:PMC11961422 | DOI:10.3389/fmed.2025.1514931

Categories: Literature Watch
