Deep learning
AI Image Generation Technology in Ophthalmology: Use, Misuse and Future Applications
Prog Retin Eye Res. 2025 Mar 17:101353. doi: 10.1016/j.preteyeres.2025.101353. Online ahead of print.
ABSTRACT
BACKGROUND: AI-powered image generation technology holds the potential to dramatically reshape clinical ophthalmic practice. The adoption of this technology relies on clinician acceptance, yet it is an unfamiliar technology for both ophthalmic researchers and clinicians. In this work we present a literature review on the application of image generation technology in ophthalmology to discuss its theoretical applications and future role.
METHODS: First, we explore the key model designs used for image synthesis, including generative adversarial networks, autoencoders, and diffusion models. We then perform a survey of the literature for image generation technology in ophthalmology prior to September 2024, collecting the type of model used, as well as its clinical application, for each study. Finally, we discuss the limitations of this technology, the risks of its misuse and the future directions of research in this field.
RESULTS: Applications of this technology include improving diagnostic model performance, inter-modality image transformation, treatment and disease prognosis, image denoising, and education. Key challenges for integration of this technology into ophthalmic clinical practice include bias in generative models, risk to patient data security, computational and logistical barriers to model development, challenges with model explainability, inconsistent use of validation metrics between studies and misuse of synthetic images. Looking forward, researchers are placing a further emphasis on clinically grounded metrics, the development of image generation foundation models and the implementation of methods to ensure data provenance.
CONCLUSION: It is evident that image generation technology has the potential to benefit the field of ophthalmology across many tasks; however, compared to other medical applications of AI, it is still in its infancy. This review aims to enable ophthalmic researchers to identify the optimal model and methodology to best take advantage of this technology.
PMID:40107410 | DOI:10.1016/j.preteyeres.2025.101353
Comparison of the characteristics between machine learning and deep learning algorithms for ablation site classification in a novel cloud-based system
Heart Rhythm. 2025 Mar 17:S1547-5271(25)02192-7. doi: 10.1016/j.hrthm.2025.03.1955. Online ahead of print.
ABSTRACT
BACKGROUND: CARTONET is a cloud-based system for the analysis of ablation procedures using the CARTO system. The current CARTONET R14 model employs deep learning, but its accuracy and positive predictive value (PPV) remain under-evaluated.
OBJECTIVE: This study aimed to compare the characteristics of the CARTONET system between the R12.1 and the R14 models.
METHODS: Data from 396 atrial fibrillation ablation cases were analyzed. Using the CARTONET R14 model, we investigated the sensitivity and PPV of the automated anatomical location model, along with the distribution of potential reconnection sites and the confidence level for each site. We also compared these data between the CARTONET R14 model and the previous version, CARTONET R12.1.
RESULTS: We analyzed the overall tags of 39,169 points and the gap predictions for 625 segments using the CARTONET R14 model. The sensitivity and PPV of the R14 model improved significantly over those of the R12.1 model (R12.1 vs. R14: sensitivity, 71.2% vs. 77.5%, p<0.0001; PPV, 85.6% vs. 86.2%, p=0.0184). Reconnections were observed most frequently in the posterior area of the RPVs and LPVs (RPV, 98/238 [41.2%]; LPV, 190/387 [49.1%]). In contrast, the estimated probability of reconnection was highest in the roof area for both the RPVs and LPVs (%; RPV, 14 [5.5-41]; LPV, 16 [8-22]).
CONCLUSION: The R14 model significantly improved sensitivity and PPV compared to the R12.1 model. The tendency for predicting potential reconnection sites was similar to that of the previous version, the R12.1 model.
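The sensitivity and PPV compared above are standard confusion-matrix ratios. A minimal sketch, using hypothetical counts chosen only to reproduce the reported rates (the study reports rates, not raw counts):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value (precision): TP / (TP + FP)."""
    return tp / (tp + fp)

# Illustrative only: a model recovering 775 of 1000 true ablation sites,
# with 124 false calls, matches the reported R14 rates.
print(round(sensitivity(775, 225), 3))  # 0.775, i.e. the reported 77.5%
print(round(ppv(775, 124), 3))          # 0.862, i.e. the reported 86.2%
```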
PMID:40107403 | DOI:10.1016/j.hrthm.2025.03.1955
A multimodal framework for assessing the link between pathomics, transcriptomics, and pancreatic cancer mutations
Comput Med Imaging Graph. 2025 Mar 15;123:102526. doi: 10.1016/j.compmedimag.2025.102526. Online ahead of print.
ABSTRACT
In Pancreatic Ductal Adenocarcinoma (PDAC), predicting genetic mutations directly from histopathological images using deep learning can provide valuable insights. Combining several omics can provide further knowledge of the mechanisms underlying tumor biology. This study aimed to develop an explainable multimodal pipeline to predict mutations in the KRAS, TP53, SMAD4, and CDKN2A genes, integrating pathomic features with transcriptomics from two independent datasets: TCGA-PAAD, used as the training set, and CPTAC-PDA, used as the external validation set. Large and small configurations of CLAM (Clustering-constrained Attention Multiple Instance Learning) models were evaluated with three different feature extractors (ResNet50, UNI, and CONCH). RNA-seq data were pre-processed both conventionally and using three autoencoder architectures. The processed transcript panels were input into machine learning (ML) models for mutation classification. Attention maps and SHAP were employed to highlight significant features from both data modalities. A fusion layer or a voting mechanism combined the outputs from the pathomic and transcriptomic models to obtain a multimodal prediction. Performance was assessed by the Area Under the Receiver Operating Characteristic (AUROC) and Precision-Recall (AUPRC) curves. On the validation set, for KRAS, multimodal ML achieved an AUROC of 0.92 and an AUPRC of 0.98. For TP53, the multimodal voting model achieved an AUROC of 0.75 and an AUPRC of 0.85. For SMAD4 and CDKN2A, transcriptomic ML models achieved AUROCs of 0.71 and 0.65, while multimodal ML showed AUPRCs of 0.39 and 0.37, respectively. This approach demonstrates the potential of combining pathomics with transcriptomics, offering an interpretable framework for predicting key genetic mutations in PDAC.
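The AUROC reported above has a simple rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting one half). A minimal pure-Python sketch of that Mann-Whitney formulation, not the pipeline's actual evaluation code:

```python
def auroc(scores, labels):
    """AUROC as the fraction of positive/negative pairs the scores rank
    correctly (ties count 0.5) -- the Mann-Whitney U formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0: perfect ranking
print(auroc([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1]))  # 0.75: one mis-ranked pair
```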
PMID:40107149 | DOI:10.1016/j.compmedimag.2025.102526
CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation
Comput Med Imaging Graph. 2025 Mar 13;123:102525. doi: 10.1016/j.compmedimag.2025.102525. Online ahead of print.
ABSTRACT
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, manifested primarily in the difficulty of localizing tumors and delineating blurred boundaries. Additionally, the black-box nature of deep learning models leaves the confidence of their results insufficiently quantified, preventing users from directly understanding how much to trust a model's predictions and severely impeding clinical application. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the insufficient quantification of confidence in NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, aiding users in understanding the uncertainty risks associated with model outputs. To address the difficulty of localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist in edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that our proposed method is effective, superior to existing state-of-the-art models, and of considerable clinical application value.
PMID:40107148 | DOI:10.1016/j.compmedimag.2025.102525
Deep learning method for malaria parasite evaluation from microscopic blood smear
Artif Intell Med. 2025 Mar 15;163:103114. doi: 10.1016/j.artmed.2025.103114. Online ahead of print.
ABSTRACT
OBJECTIVE: Malaria remains a leading cause of global morbidity and mortality, responsible for approximately 597,000 deaths according to the World Malaria Report 2024. The study aims to systematically review current methodologies for automated analysis of the Plasmodium genus in malaria diagnostics. Specifically, it focuses on computer-assisted methods, examining databases, blood smear types, staining techniques, and diagnostic models used for malaria characterization, while identifying the limitations and contributions of recent studies.
METHODS: A systematic literature review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Peer-reviewed and published studies from 2020 to 2024 were retrieved from Web of Science and Scopus. Inclusion criteria focused on studies utilizing deep learning and machine learning models for automated malaria detection from microscopic blood smears. The review considered various blood smear types, staining techniques, and diagnostic models, providing a comprehensive evaluation of the automated diagnostic landscape for malaria.
RESULTS: The NIH database is the standardized and most widely tested database for malaria diagnostics. Giemsa-stained thin blood smears are the most efficient diagnostic medium for detecting and observing the Plasmodium lifecycle. This review identified three categories of ML models most suitable for digital malaria diagnosis: most accurate, ResNet and VGG, with a peak accuracy of 99.12%; most popular, custom CNN-based models, used by 58% of studies; and least complex, CADx models. A few pre- and post-processing techniques, such as Gaussian filtering and autoencoders for noise reduction, are also discussed as means of improving model accuracy.
CONCLUSION: Automated methods for malaria diagnostics show considerable promise in improving diagnostic accuracy and reducing human error. While deep learning models have demonstrated high performance, challenges remain in data standardization and real-world application. Addressing these gaps could lead to more reliable and scalable diagnostic tools, aiding global malaria control efforts.
PMID:40107120 | DOI:10.1016/j.artmed.2025.103114
Histopathology image classification based on semantic correlation clustering domain adaptation
Artif Intell Med. 2025 Mar 17;163:103110. doi: 10.1016/j.artmed.2025.103110. Online ahead of print.
ABSTRACT
Deep learning has been successfully applied to histopathology image classification tasks. However, the performance of deep models is data-driven, and the acquisition and annotation of pathological image samples are difficult, which limits model performance. Compared to whole slide images (WSIs) of patients, histopathology image datasets of animal models are easier to acquire and annotate. Therefore, this paper proposes an unsupervised domain adaptation method based on semantic correlation clustering for histopathology image classification. The aim is to utilize the Minmice model histopathology image dataset to achieve classification and recognition of human WSIs. First, the multi-scale fused features extracted from the source and target domains are normalized and mapped. In the new feature space, the cosine distance between class centers is used to measure the semantic correlation between categories. Then, the domain centers, class centers, and sample distributions are aligned under self-imposed constraints. Multi-granular information is applied to achieve cross-domain semantic correlation knowledge transfer between classes. Finally, a probabilistic heatmap is used to visualize the model's predictions and annotate the cancerous regions in WSIs. Experimental results show that the proposed method achieves high classification accuracy for WSIs and produces annotations close to manual ones, indicating its potential for clinical applications.
PMID:40107119 | DOI:10.1016/j.artmed.2025.103110
Predicting infant brain connectivity with federated multi-trajectory GNNs using scarce data
Med Image Anal. 2025 Mar 13;102:103541. doi: 10.1016/j.media.2025.103541. Online ahead of print.
ABSTRACT
The understanding of the convoluted evolution of infant brain networks during the first postnatal year is pivotal for identifying the dynamics of early brain connectivity development. Leveraging valuable insights into brain anatomy, existing deep learning frameworks have focused on forecasting the brain evolution trajectory from a single baseline observation. While yielding remarkable results, they suffer from three major limitations. First, they cannot generalize to multi-trajectory prediction tasks, where each graph trajectory corresponds to a particular imaging modality or connectivity type (e.g., T1-w MRI). Second, existing models require extensive training datasets to achieve satisfactory performance, and such datasets are often challenging to obtain. Third, they do not efficiently utilize incomplete time-series data. To address these limitations, we introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network. Leveraging federation, we aggregate local learnings among diverse hospitals with limited datasets. As a result, we enhance the performance of each hospital's local generative model while preserving data privacy. The three key innovations of FedGmTE-Net++ are: (i) presenting the first federated learning framework specifically designed for brain multi-trajectory evolution prediction in a data-scarce environment, (ii) incorporating an auxiliary regularizer in the local objective function to exploit all the longitudinal brain connectivity within the evolution trajectory and maximize data utilization, and (iii) introducing a two-step imputation process, comprising a preliminary K-Nearest Neighbours-based precompletion followed by an imputation refinement step that employs regressors to improve similarity scores and refine imputations. Our comprehensive experimental results showed that FedGmTE-Net++ outperforms benchmark methods in brain multi-trajectory prediction from a single baseline graph.
Our source code is available at https://github.com/basiralab/FedGmTE-Net-plus.
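The K-Nearest Neighbours precompletion named in innovation (iii) can be sketched in miniature, under simplifying assumptions (Euclidean distance over jointly observed features, mean of the k nearest donors; the paper's regressor-based refinement step is omitted):

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that column over the k nearest
    rows. A toy sketch of KNN precompletion, not the paper's implementation."""
    def dist(a, b):
        # Euclidean distance over columns observed in both rows,
        # normalized by the number of shared columns.
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not shared:
            return float("inf")
        return math.sqrt(sum((x - y) ** 2 for x, y in shared) / len(shared))

    filled = [row[:] for row in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is None:
                # Rank the rows that observe column j by distance to row i.
                donors = sorted(
                    (dist(row, other), other[j])
                    for other in rows
                    if other is not row and other[j] is not None
                )[:k]
                filled[i][j] = sum(val for _, val in donors) / len(donors)
    return filled

data = [[1.0, 2.0], [1.1, None], [5.0, 6.0]]
print(knn_impute(data, k=1))  # the missing value is taken from the nearest row
```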
PMID:40107118 | DOI:10.1016/j.media.2025.103541
Development of an artificial intelligence-generated, explainable treatment recommendation system for urothelial carcinoma and renal cell carcinoma to support multidisciplinary cancer conferences
Eur J Cancer. 2025 Mar 15;220:115367. doi: 10.1016/j.ejca.2025.115367. Online ahead of print.
ABSTRACT
BACKGROUND: Decisions on the best available treatment in clinical oncology are based on expert opinions in multidisciplinary cancer conferences (MCC). Artificial intelligence (AI) could increase evidence-based treatment by generating additional treatment recommendations (TR). We aimed to develop such an AI system for urothelial carcinoma (UC) and renal cell carcinoma (RCC).
METHODS: Comprehensive data of patients with histologically confirmed UC and RCC who received MCC recommendations in the years 2015-2022 were transformed into machine-readable representations. A two-step process was developed to train a classifier to mimic TR, followed by identification of superordinate and detailed categories of TR. Machine learning (CatBoost, XGBoost, Random Forest) and deep learning (TabPFN, TabNet, SoftOrdering CNN, FCN) techniques were trained. Performance was measured by weighted F1-scores.
RESULTS: AI training was performed with 1617 (UC) and 880 (RCC) MCC recommendations (77 and 76 patient input parameters, respectively). The AI system generated fully automated TR with excellent F1-scores for UC (e.g., 'Surgery' 0.81, 'Anti-cancer drug' 0.83, 'Gemcitabine/Cisplatin' 0.88) and RCC (e.g., 'Anti-cancer drug' 0.92, 'Nivolumab' 0.78, 'Pembrolizumab/Axitinib' 0.89). Explainability is provided by clinical features and their importance scores. Finally, TR and explainability were visualized on a dashboard.
CONCLUSION: This study demonstrates for the first time AI-generated, explainable TR in UC and RCC with excellent performance results as a potential support tool for high-quality, evidence-based TR in MCC. The comprehensive technical and clinical development sets global reference standards for future AI developments in MCC recommendations in clinical oncology. Next, prospective validation of the results is mandatory.
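A weighted F1-score of the kind used for evaluation above combines per-class F1 (the harmonic mean of precision and recall) weighted by class support. A sketch with hypothetical per-class numbers, not the study's data:

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if p + r else 0.0

def weighted_f1(scores_supports):
    """Support-weighted average of per-class (F1, support) pairs."""
    total = sum(n for _, n in scores_supports)
    return sum(f * n for f, n in scores_supports) / total

# Hypothetical per-class results: (precision, recall) -> F1, with supports.
print(round(weighted_f1([(f1(0.85, 0.78), 120), (f1(0.90, 0.86), 60)]), 3))
```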
PMID:40107091 | DOI:10.1016/j.ejca.2025.115367
Neuro_DeFused-Net: A novel multi-scale 2DCNN architecture assisted diagnostic model for Parkinson's disease diagnosis using deep feature-level fusion of multi-site multi-modality neuroimaging data
Comput Biol Med. 2025 Mar 18;190:110029. doi: 10.1016/j.compbiomed.2025.110029. Online ahead of print.
ABSTRACT
BACKGROUND: Neurological disorders, particularly Parkinson's Disease (PD), are serious and progressive conditions that significantly impact patients' motor functions and overall quality of life. Accurate and timely diagnosis is still crucial, but it is quite challenging. Understanding the changes in the brain linked to PD requires using neuroimaging modalities like magnetic resonance imaging (MRI). Artificial intelligence (AI), particularly deep learning (DL) methods, can potentially improve the precision of diagnosis.
METHOD: In the current study, we present a novel approach that integrates T1-weighted structural MRI and resting-state functional MRI using multi-site, multi-modality neuroimaging data. To maximize the richness of the data, our approach integrates deep feature-level fusion across these modalities. We propose a custom multi-scale 2D Convolutional Neural Network (CNN) architecture that captures features at different spatial scales, enhancing the model's capacity to learn PD-related complex patterns.
RESULTS: With an accuracy of 97.12%, sensitivity of 97.26%, F1-score of 97.63%, Area Under the Curve (AUC) of 0.99, mean average precision (mAP) of 99.53%, and Dice coefficient of 0.97, the proposed Neuro_DeFused-Net diagnostic model performs exceptionally well. These results highlight the model's robust ability to distinguish PD patients from controls (normal), even across a variety of datasets and neuroimaging modalities.
CONCLUSIONS: Our findings demonstrate the transformational ability of AI-driven models to facilitate the early diagnosis of PD. The proposed Neuro_DeFused-Net model enables the rapid detection of health markers through fast analysis of complicated neuroimaging data. Thus, timely intervention and individualized treatment strategies lead to improved patient outcomes and quality of life.
PMID:40107026 | DOI:10.1016/j.compbiomed.2025.110029
Emerging Trends and Innovations in Radiologic Diagnosis of Thoracic Diseases
Invest Radiol. 2025 Mar 20. doi: 10.1097/RLI.0000000000001179. Online ahead of print.
ABSTRACT
Over the past decade, Investigative Radiology has published numerous studies that have fundamentally advanced the field of thoracic imaging. This review summarizes key developments in imaging modalities, computational tools, and clinical applications, highlighting major breakthroughs in thoracic diseases (lung cancer, pulmonary nodules, interstitial lung disease (ILD), chronic obstructive pulmonary disease (COPD), COVID-19 pneumonia, and pulmonary embolism) and outlining future directions. Artificial intelligence (AI)-driven computer-aided detection systems and radiomic analyses have notably improved the detection and classification of pulmonary nodules, while photon-counting detector CT (PCD-CT) and low-field MRI offer enhanced resolution or radiation-free strategies. For lung cancer, CT texture analysis and perfusion imaging refine prognostication and therapy planning. ILD assessment benefits from automated diagnostic tools and innovative imaging techniques, such as PCD-CT and functional MRI, which reduce the need for invasive diagnostic procedures while improving accuracy. In COPD, dual-energy CT-based ventilation/perfusion assessment and dark-field radiography enable earlier detection and staging of emphysema, complemented by deep learning approaches for improved quantification. COVID-19 research has underscored the clinical utility of chest CT, radiographs, and AI-based algorithms for rapid triage, disease severity evaluation, and follow-up. Furthermore, tuberculosis remains a significant global health concern, highlighting the importance of AI-assisted chest radiography for early detection and management. Meanwhile, advances in CT pulmonary angiography, including dual-energy reconstructions, allow more sensitive detection of pulmonary emboli. Collectively, these innovations demonstrate the power of merging novel imaging technologies, quantitative functional analysis, and AI-driven tools to transform thoracic disease management.
Ongoing progress promises more precise and personalized diagnostic and therapeutic strategies for diverse thoracic diseases.
PMID:40106831 | DOI:10.1097/RLI.0000000000001179
Using Deep Learning to Perform Automatic Quantitative Measurement of Masseter and Tongue Muscles in Persons With Dementia: Cross-Sectional Study
JMIR Aging. 2025 Mar 19;8:e63686. doi: 10.2196/63686.
ABSTRACT
BACKGROUND: Sarcopenia (loss of muscle mass and strength) increases the risk of adverse outcomes and contributes to cognitive decline in older adults. Accurate methods to quantify muscle mass and predict adverse outcomes, particularly in older persons with dementia, are still lacking.
OBJECTIVE: This study's main objective was to assess the feasibility of using deep learning techniques for segmentation and quantification of musculoskeletal tissues in magnetic resonance imaging (MRI) scans of the head in patients with neurocognitive disorders. This study aimed to pave the way for using automated techniques for opportunistic detection of sarcopenia in patients with neurocognitive disorder.
METHODS: In a cross-sectional analysis of 53 participants, we used 7 U-Net-like deep learning models to segment 5 different tissues in head MRI images and used the Dice similarity coefficient and average symmetric surface distance as main assessment techniques to compare results. We also analyzed the relationship between BMI and muscle and fat volumes.
RESULTS: Our framework accurately quantified the masseter muscles and subcutaneous fat on the left and right sides of the head, as well as the tongue muscle (mean Dice similarity coefficient 92.4%). Significant correlations were found between BMI and the area and volume of the tongue and left masseter muscles.
CONCLUSIONS: Our study demonstrates the successful application of a deep learning model to quantify muscle volumes in head MRI in patients with neurocognitive disorders. This is a promising first step toward clinically applicable artificial intelligence and deep learning methods for estimating masseter and tongue muscle and predicting adverse outcomes in this population.
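The Dice similarity coefficient used as the main assessment metric above reduces to twice the overlap divided by the total mask size. A minimal sketch for flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are treated as a perfect match.
    return 2 * inter / total if total else 1.0

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1 / (2+1) ≈ 0.667
```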
PMID:40106819 | DOI:10.2196/63686
Reducing hepatitis C diagnostic disparities with a fully automated deep learning-enabled microfluidic system for HCV antigen detection
Sci Adv. 2025 Mar 21;11(12):eadt3803. doi: 10.1126/sciadv.adt3803. Epub 2025 Mar 19.
ABSTRACT
Viral hepatitis remains a major global health issue, with chronic hepatitis B (HBV) and hepatitis C (HCV) causing approximately 1 million deaths annually, primarily due to liver cancer and cirrhosis. More than 1.5 million people contract HCV each year, disproportionately affecting vulnerable populations, including American Indians and Alaska Natives (AI/AN). While direct-acting antivirals (DAAs) are highly effective, timely and accurate HCV diagnosis remains a challenge, particularly in resource-limited settings. The current two-step HCV testing process is costly and time-intensive, often leading to patient loss before treatment. Point-of-care (POC) HCV antigen (Ag) testing offers a promising alternative, but no FDA-approved test meets the required sensitivity and specificity. To address this, we developed a fully automated, smartphone-based POC HCV Ag assay using platinum nanoparticles, deep learning image processing, and microfluidics. With an overall accuracy of 94.59%, this cost-effective, portable device has the potential to reduce HCV-related health disparities, particularly among AI/AN populations, improving accessibility and equity in care.
PMID:40106555 | DOI:10.1126/sciadv.adt3803
Evaluating and implementing machine learning models for personalised mobile health app recommendations
PLoS One. 2025 Mar 19;20(3):e0319828. doi: 10.1371/journal.pone.0319828. eCollection 2025.
ABSTRACT
This paper focuses on the evaluation and recommendation of healthcare applications in the mHealth field. The increase in the use of health applications, supported by an expanding mHealth market, highlights the importance of this research. In this study, a dataset including app descriptions, ratings, reviews, and other relevant attributes from various health app platforms was selected. The main goal was to design a recommendation system that leverages app attributes, especially descriptions, to provide users with relevant contextual suggestions. A comprehensive pre-processing regime was carried out, including one-hot encoding, standardisation, and feature engineering. The feature "Rating_Reviews" was introduced to capture the cumulative influence of ratings and reviews. The variable "Category" was chosen as the target to discern different health contexts such as "Weight loss" and "Medical". Various machine learning and deep learning models were evaluated, from a baseline Random Forest Classifier to the sophisticated BERT model. The results highlighted the efficiency of transfer learning, especially BERT, which achieved an accuracy of approximately 90% after hyperparameter tuning. A final recommendation system was designed, which uses cosine similarity to rank apps based on their relevance to user queries.
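The cosine-similarity ranking described for the final recommender can be illustrated with a toy bag-of-words version. The app names and descriptions below are invented, and the study presumably ranks learned embeddings rather than raw term counts:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors (dicts)."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def rank_apps(query, descriptions):
    """Rank app descriptions by cosine similarity to the query terms."""
    q = Counter(query.lower().split())
    sims = {name: cosine(q, Counter(text.lower().split()))
            for name, text in descriptions.items()}
    return sorted(sims, key=sims.get, reverse=True)

apps = {
    "FitTrack": "weight loss calorie tracker and workout log",
    "MediDose": "medication reminder and dose tracker",
}
print(rank_apps("calorie counter for weight loss", apps))  # FitTrack first
```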
PMID:40106462 | DOI:10.1371/journal.pone.0319828
Mouse-Geneformer: A deep learning model for mouse single-cell transcriptome and its cross-species utility
PLoS Genet. 2025 Mar 19;21(3):e1011420. doi: 10.1371/journal.pgen.1011420. Online ahead of print.
ABSTRACT
Deep learning techniques are increasingly utilized to analyze large-scale single-cell RNA sequencing (scRNA-seq) data, offering valuable insights from complex transcriptome datasets. Geneformer, a pre-trained model using a Transformer Encoder architecture and human scRNA-seq datasets, has demonstrated remarkable success in human transcriptome analysis. However, given the prominence of the mouse, Mus musculus, as a primary mammalian model in biological and medical research, there is an acute need for a mouse-specific version of Geneformer. In this study, we developed a mouse-specific Geneformer (mouse-Geneformer) by constructing a large transcriptome dataset consisting of 21 million mouse scRNA-seq profiles and pre-training Geneformer on this dataset. The mouse-Geneformer effectively models the mouse transcriptome and, upon fine-tuning for downstream tasks, enhances the accuracy of cell type classification. In silico perturbation experiments using mouse-Geneformer successfully identified disease-causing genes that have been validated in in vivo experiments. These results demonstrate the feasibility of analyzing mouse data with mouse-Geneformer and highlight the robustness of the Geneformer architecture, applicable to any species with large-scale transcriptome data available. Furthermore, we found that mouse-Geneformer can analyze human transcriptome data in a cross-species manner. After the ortholog-based gene name conversion, the analysis of human scRNA-seq data using mouse-Geneformer, followed by fine-tuning with human data, achieved cell type classification accuracy comparable to that obtained using the original human Geneformer. In in silico simulation experiments using human disease models, we obtained results similar to human-Geneformer for the myocardial infarction model but only partially consistent results for the COVID-19 model, a trait unique to humans (laboratory mice are not susceptible to SARS-CoV-2). 
These findings suggest the potential for cross-species application of the Geneformer model while emphasizing the importance of species-specific models for capturing the full complexity of disease mechanisms. Despite the existence of the original Geneformer tailored for humans, human research could benefit from mouse-Geneformer through its inclusion of samples that are ethically or technically inaccessible in humans, such as embryonic tissues and certain disease models. Additionally, this cross-species approach indicates potential utility for non-model organisms, for which obtaining large-scale single-cell transcriptome data is challenging.
PMID:40106407 | DOI:10.1371/journal.pgen.1011420
Structural assembly of the PAS domain drives the catalytic activation of metazoan PASK
Proc Natl Acad Sci U S A. 2025 Mar 25;122(12):e2409685122. doi: 10.1073/pnas.2409685122. Epub 2025 Mar 19.
ABSTRACT
PAS domains are ubiquitous sensory modules that transduce environmental signals into cellular responses through tandem PAS folds and PAS-associated C-terminal (PAC) motifs. While this conserved architecture underpins their regulatory roles, here we uncover a structural divergence in the metazoan PAS domain-regulated kinase (PASK). By integrating evolutionary-scale domain mapping with deep learning-based structural models, we identified two PAS domains in PASK, namely PAS-B and PAS-C, in addition to the previously known PAS-A domain. Unlike canonical PAS domains, the PAS fold and PAC motif in the PAS-C domain are spatially segregated by an unstructured linker, yet a functional PAS module is assembled through intramolecular interactions. We demonstrate that this assembly is nutrient responsive and serves to remodel the quaternary structure of PASK that positions the PAS-A domain near the kinase activation loop. This nutrient-sensitive spatial arrangement stabilizes the activation loop, enabling catalytic activation of PASK. These findings revealed an alternative mode of regulatory control in PAS sensory proteins, where the structural assembly of PAS domains links environmental sensing to enzymatic activity. By demonstrating that PAS domains integrate signals through dynamic structural rearrangements, this study broadens the understanding of their functional and regulatory roles and highlights potential opportunities for targeting PAS domain-mediated pathways in therapeutic applications.
PMID:40106358 | DOI:10.1073/pnas.2409685122
Synthetic Data-Driven Approaches for Chinese Medical Abstract Sentence Classification: Computational Study
JMIR Form Res. 2025 Mar 19;9:e54803. doi: 10.2196/54803.
ABSTRACT
BACKGROUND: Medical abstract sentence classification is crucial for enhancing medical database searches, literature reviews, and generating new abstracts. However, Chinese medical abstract classification research is hindered by a lack of suitable datasets. Given the vastness of Chinese medical literature and the unique value of traditional Chinese medicine, precise classification of these abstracts is vital for advancing global medical research.
OBJECTIVE: This study aims to address the data scarcity issue by generating a large volume of labeled Chinese abstract sentences without manual annotation, thereby creating new training datasets. Additionally, we seek to develop more accurate text classification algorithms to improve the precision of Chinese medical abstract classification.
METHODS: We developed 3 training datasets (dataset #1, dataset #2, and dataset #3) and a test dataset to evaluate our model. Dataset #1 contains 15,000 abstract sentences translated from the PubMed dataset into Chinese. Datasets #2 and #3, each with 15,000 sentences, were generated using GPT-3.5 from 40,000 Chinese medical abstracts in the CSL database. Dataset #2 used titles and keywords for pseudolabeling, while dataset #3 aligned abstracts with category labels. The test dataset includes 87,000 sentences from 20,000 abstracts. We used SBERT embeddings for deeper semantic analysis and evaluated our model using clustering (SBERT-DocSCAN) and supervised methods (SBERT-MEC). Extensive ablation studies and feature analyses were conducted to validate the model's effectiveness and robustness.
RESULTS: Our experiments involved training both clustering and supervised models on the 3 datasets, followed by comprehensive evaluation using the test dataset. The outcomes demonstrated that our models outperformed the baseline metrics. Specifically, when trained on dataset #1, the SBERT-DocSCAN model registered an accuracy and F1-score of 89.85% on the test dataset. Concurrently, the SBERT-MEC algorithm exhibited comparable performance with an accuracy of 89.38% and an identical F1-score. Training on dataset #2 yielded similarly positive results for the SBERT-DocSCAN model, achieving an accuracy and F1-score of 89.83%, while the SBERT-MEC algorithm recorded an accuracy of 86.73% and an F1-score of 86.51%. Notably, training with dataset #3 allowed the SBERT-DocSCAN model to attain its best results, with an accuracy and F1-score of 91.30%, whereas the SBERT-MEC algorithm also showed robust performance, obtaining an accuracy of 90.39% and an F1-score of 90.35%. Ablation analysis highlighted the critical role of integrated features and methodologies in improving classification efficiency.
CONCLUSIONS: Our approach addresses the challenge of limited datasets for Chinese medical abstract classification by generating novel datasets. The deployment of SBERT-DocSCAN and SBERT-MEC models significantly enhances the precision of classifying Chinese medical abstracts, even when using synthetic datasets with pseudolabels.
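The pseudolabeling idea in the methods above — assigning an unlabeled sentence the class whose reference examples it most resembles in embedding space — can be sketched as a nearest-centroid rule. This is an illustrative reconstruction, not the authors' pipeline: the study used SBERT embeddings and GPT-3.5, while `assign_pseudolabels`, the cosine-similarity rule, and the toy 2-D vectors below are hypothetical stand-ins.

```python
import numpy as np

def assign_pseudolabels(embeddings, seed_embeddings, seed_labels):
    """Assign each sentence embedding the label of its nearest class centroid.

    Centroids are computed from a small set of weakly labeled seed
    embeddings (e.g., labels derived from titles and keywords, as in
    dataset #2); similarity is cosine similarity.
    """
    labels = sorted(set(seed_labels))
    centroids = np.stack([
        np.mean([e for e, l in zip(seed_embeddings, seed_labels) if l == lab], axis=0)
        for lab in labels
    ])
    # L2-normalize so the dot product equals cosine similarity
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cen = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = emb @ cen.T
    return [labels[i] for i in sims.argmax(axis=1)]

# toy 2-D embeddings standing in for SBERT vectors
seed = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
seed_labels = ["methods", "results", "results", "methods"][:0] or ["methods", "methods", "results", "results"]
new = np.array([[0.95, 0.05], [0.05, 0.95]])
print(assign_pseudolabels(new, seed, seed_labels))
```

In the actual study the vectors would be high-dimensional SBERT embeddings rather than 2-D toys, and the resulting pseudolabels would feed the SBERT-DocSCAN and SBERT-MEC training described above.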
PMID:40106267 | DOI:10.2196/54803
Generating Inverse Feature Space for Class Imbalance in Point Cloud Semantic Segmentation
IEEE Trans Pattern Anal Mach Intell. 2025 Mar 19;PP. doi: 10.1109/TPAMI.2025.3553051. Online ahead of print.
ABSTRACT
Point cloud semantic segmentation can enhance the understanding of the production environment and is a crucial component of vision tasks. The efficacy and generalization prowess of deep learning-based segmentation models are inherently contingent upon the quality and nature of the data employed in their training. However, it is often challenging to obtain data with inter-class balance, and training an intelligent segmentation network with imbalanced data may cause cognitive bias. In this paper, a network framework InvSpaceNet is proposed, which generates an inverse feature space to alleviate the cognitive bias caused by imbalanced data. Specifically, we design a dual-branch training architecture that combines the superior feature representations derived from instance-balanced sampling data with the cognitive corrections introduced by the proposed inverse sampling data. In the inverse feature space of the point cloud generated by the auxiliary branch, the central points aggregated by class are constrained by the contrastive loss. To refine the class cognition in the inverse feature space, features are used to generate point cloud class prototypes through momentum update. These class prototypes from the inverse space are utilized to generate feature maps and structure maps that are aligned with the positive feature space of the main branch segmentation network. The training of the main branch is dynamically guided through gradients backpropagated from different losses. Extensive experiments conducted on four large benchmarks (i.e., S3DIS, ScanNet v2, Toronto-3D, and SemanticKITTI) demonstrate that the proposed method can effectively mitigate point cloud imbalance issues and improve segmentation performance.
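Two ingredients of the framework described above lend themselves to a short sketch: inverse-frequency sampling weights for the auxiliary branch, and momentum-updated class prototypes. The functions below are a minimal illustration only, assuming an exponential-moving-average update with coefficient `m = 0.9`; the paper's exact sampling scheme and update rule are not reproduced here.

```python
import numpy as np

def inverse_sampling_weights(class_counts):
    """Per-class sampling weights proportional to inverse class frequency,
    so rare classes are drawn more often in the inverse (auxiliary) branch."""
    inv = 1.0 / np.asarray(class_counts, dtype=float)
    return inv / inv.sum()

def momentum_update(prototype, batch_mean, m=0.9):
    """Exponential-moving-average update of a class prototype from the
    mean feature of the current batch (m = 0.9 is an assumed value)."""
    return m * prototype + (1.0 - m) * batch_mean

# a 90/10 class imbalance flips into a 10/90 sampling distribution
print(inverse_sampling_weights([90, 10]))
```

Prototypes updated this way change slowly across batches, which keeps the contrastive constraint on class centers stable while the feature extractor is still training.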
PMID:40106253 | DOI:10.1109/TPAMI.2025.3553051
High sensitivity photoacoustic imaging by learning from noisy data
IEEE Trans Med Imaging. 2025 Mar 19;PP. doi: 10.1109/TMI.2025.3552692. Online ahead of print.
ABSTRACT
Photoacoustic imaging (PAI) is a high-resolution biomedical imaging technology for the non-invasive detection of a broad range of chromophores at multiple scales and depths. However, limited by low chromophore concentration, weak signals in deep tissue, or various sources of noise, the signal-to-noise ratio of photoacoustic images may be compromised in many biomedical applications. Although improvements in hardware and computational methods have been made to address this problem, they have not been readily available due to either high costs or an inability to generalize across different datasets. Here, we present a self-supervised deep learning method to increase the signal-to-noise ratio of photoacoustic images using noisy data only. Because this method does not require expensive ground truth data for training, it can be easily implemented across scanning microscopic and computed tomographic data acquired with various photoacoustic imaging systems. In vivo results show that our method makes the vascular details that were completely submerged in noise become clearly visible, increases the signal-to-noise ratio by up to 12-fold, doubles the imaging depth, and enables high-contrast imaging of deep tumors. We believe this method can be readily applied to many preclinical and clinical applications.
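The abstract does not detail the training objective, but a common self-supervised strategy for denoising from noisy data only is blind-spot masking (Noise2Void-style): selected pixels are replaced by random neighbors, and a network is trained to predict the original noisy values at exactly those positions. The sketch below shows only this masking step; `blind_spot_mask`, the 3×3 neighborhood, and the omission of the network and loss are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def blind_spot_mask(image, n_masked, rng):
    """Build a blind-spot training input: n_masked randomly chosen pixels
    are overwritten with a random pixel from their 3x3 neighbourhood.
    Returns the masked image and the masked coordinates, which are the
    only positions where the self-supervised loss would be evaluated.
    """
    h, w = image.shape
    masked = image.copy()
    ys = rng.integers(0, h, n_masked)
    xs = rng.integers(0, w, n_masked)
    for y, x in zip(ys, xs):
        ny = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
        nx = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
        masked[y, x] = image[ny, nx]
    return masked, list(zip(ys.tolist(), xs.tolist()))
```

Because the network never sees the true value at a masked pixel, it cannot learn the identity mapping and instead learns to infer signal from context, which is what removes pixel-wise noise without clean ground truth.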
PMID:40106247 | DOI:10.1109/TMI.2025.3552692
TPNET: A time-sensitive small sample multimodal network for cardiotoxicity risk prediction
IEEE J Biomed Health Inform. 2025 Mar 19;PP. doi: 10.1109/JBHI.2025.3552819. Online ahead of print.
ABSTRACT
Cancer therapy-related cardiac dysfunction (CTRCD) is a potential complication associated with cancer treatment, particularly in patients with breast cancer, requiring monitoring of cardiac health during the treatment process. Tissue Doppler imaging (TDI) is a remarkable technique that can provide a comprehensive reflection of the left ventricle's physiological status. We hypothesized that the combination of TDI features with deep learning techniques could be utilized to predict CTRCD. To evaluate the hypothesis, we developed a temporal-multimodal pattern network for efficient training (TPNET) model to predict the incidence of CTRCD over a 24-month period based on TDI, function, and clinical data from 270 patients. Our model achieved an area under the curve (AUC) of 0.83 and sensitivity of 0.88, demonstrating greater robustness compared to other existing visual models. To further translate our model's findings into practical applications, we used integrated gradients (IG) attribution to perform a detailed evaluation of all the features. This analysis identified key pathogenic signs that may have remained unnoticed, providing a viable option for implementing our model in preoperative breast cancer patients. Additionally, our findings demonstrate the potential of TPNET in discovering new causative agents for CTRCD.
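Integrated gradients, used above for feature attribution, averages the model's gradients along a straight path from a baseline to the input and scales by the input-baseline difference, so the attributions sum to the change in model output. A minimal sketch, assuming a caller-supplied `grad_fn` and 50 path steps (both hypothetical choices, not taken from the paper):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate integrated gradients attributions for one input.

    grad_fn(z) must return the gradient of the scalar model output
    with respect to the input z.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    # gradients evaluated at interpolated points along the baseline->x path
    path_grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

# linear toy model f(x) = 2*x0 - x1, whose gradient is constant
grad_fn = lambda z: np.array([2.0, -1.0])
print(integrated_gradients(grad_fn, np.array([1.0, 1.0]), np.zeros(2)))
```

For a linear model the attributions recover the weights exactly; for a deep model like TPNET the same completeness property lets per-feature scores be ranked, which is how the key pathogenic signs mentioned above would be surfaced.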
PMID:40106240 | DOI:10.1109/JBHI.2025.3552819
Closing Gaps in Diabetic Retinopathy Screening in India Using a Deep Learning System
JAMA Netw Open. 2025 Mar 3;8(3):e250991. doi: 10.1001/jamanetworkopen.2025.0991.
NO ABSTRACT
PMID:40105846 | DOI:10.1001/jamanetworkopen.2025.0991