Deep learning
Artificial intelligence for the analysis of intracoronary optical coherence tomography images: a systematic review
Eur Heart J Digit Health. 2025 Jan 28;6(2):270-284. doi: 10.1093/ehjdh/ztaf005. eCollection 2025 Mar.
ABSTRACT
Intracoronary optical coherence tomography (OCT) is a valuable tool for, among other applications, periprocedural guidance of percutaneous coronary revascularization and the assessment of stent failure. However, manual OCT image interpretation is challenging and time-consuming, which limits widespread clinical adoption. Automated analysis of OCT frames using artificial intelligence (AI) offers a potential solution. For example, AI can be employed for automated OCT image interpretation, plaque quantification, and clinical event prediction. Many AI models for these purposes have been proposed in recent years, but they have not been systematically evaluated in terms of model characteristics, performance, and bias. We performed a systematic review of AI models developed for OCT analysis to evaluate trends and performance, including a systematic evaluation of potential sources of bias in model development and evaluation.
PMID:40110224 | PMC:PMC11914731 | DOI:10.1093/ehjdh/ztaf005
Sudden cardiac arrest prediction via deep learning electrocardiogram analysis
Eur Heart J Digit Health. 2025 Feb 25;6(2):170-179. doi: 10.1093/ehjdh/ztae088. eCollection 2025 Mar.
ABSTRACT
AIMS: Sudden cardiac arrest (SCA) is a commonly fatal event that often occurs without prior indications. To improve outcomes and enable preventative strategies, the electrocardiogram (ECG) in conjunction with deep learning was explored as a potential screening tool.
METHODS AND RESULTS: A publicly available data set containing 10 s of 12-lead ECGs from individuals who did and did not have an SCA, together with the time from ECG to arrest and age and sex, was used to predict SCA for each individual with deep convolutional neural network models. The base model, which included age and sex, ECGs recorded within 1 day prior to arrest, and data sampled from 720 ms windows around the R-waves from 221 individuals with SCA and 1046 controls, had an area under the receiver operating characteristic curve of 0.77. With sensitivity set at 95%, base model specificity was 31%, which is not clinically applicable. Gradient-weighted class activation mapping showed that the model relied mostly on the QRS complex to make predictions. However, models with ECGs recorded between 1 day and 1 month and between 1 month and 1 year prior to arrest also demonstrated predictive capability.
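As a rough illustration of the kind of architecture described above, the sketch below builds a 1D convolutional network over 12-lead R-wave windows with age and sex concatenated before the output layer. The 500 Hz sampling rate (360 samples per 720 ms window), layer sizes, and kernel sizes are illustrative assumptions, not the authors' published configuration.

    import tensorflow as tf

    def build_sca_model(window_samples=360, n_leads=12):
        # 720 ms window around each R-wave; 360 samples assumes 500 Hz sampling (illustrative).
        ecg_in = tf.keras.Input(shape=(window_samples, n_leads), name="ecg_window")
        meta_in = tf.keras.Input(shape=(2,), name="age_sex")

        x = ecg_in
        for filters in (32, 64, 128):
            x = tf.keras.layers.Conv1D(filters, kernel_size=7, padding="same", activation="relu")(x)
            x = tf.keras.layers.MaxPooling1D(2)(x)
        x = tf.keras.layers.GlobalAveragePooling1D()(x)

        x = tf.keras.layers.Concatenate()([x, meta_in])  # fuse ECG features with age and sex
        x = tf.keras.layers.Dense(64, activation="relu")(x)
        out = tf.keras.layers.Dense(1, activation="sigmoid", name="sca_probability")(x)

        model = tf.keras.Model([ecg_in, meta_in], out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auroc")])
        return model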
CONCLUSION: Deep learning models processing ECG data are a promising means of screening for SCA, and this method explains differences in SCAs due to age and sex. Model performance improved when ECGs were nearer in time to SCAs, although ECG data up to a year prior had predictive value. Sudden cardiac arrest prediction was more dependent upon QRS complex data compared to other ECG segments.
PMID:40110219 | PMC:PMC11914729 | DOI:10.1093/ehjdh/ztae088
An ensemble learning model for detection of pulmonary hypertension using electrocardiogram, chest X-ray, and brain natriuretic peptide
Eur Heart J Digit Health. 2025 Jan 16;6(2):209-217. doi: 10.1093/ehjdh/ztae097. eCollection 2025 Mar.
ABSTRACT
AIMS: Delayed diagnosis of pulmonary hypertension (PH) is a known cause of poor patient prognosis. We aimed to develop an artificial intelligence (AI) model using an ensemble learning method to detect PH from electrocardiography (ECG), chest X-ray (CXR), and brain natriuretic peptide (BNP), facilitating accurate detection and prompting further examinations.
METHODS AND RESULTS: We developed a convolutional neural network model to predict PH using labelled ECG data from seven institutions. Logistic regression was used for the BNP prediction model, and a CXR deep learning model based on ResNet18 was referenced. Outputs from each of the three models were integrated into a three-layer fully connected multimodal model. Ten cardiologists participated in an interpretation test, detecting PH from patients' ECG, CXR, and BNP data both with and without the ensemble learning model. The areas under the receiver operating characteristic curves of the ECG, CXR, BNP, and ensemble learning models were 0.818 [95% confidence interval (CI), 0.808-0.828], 0.823 (95% CI, 0.780-0.866), 0.724 (95% CI, 0.668-0.780), and 0.872 (95% CI, 0.829-0.915), respectively. Cardiologists' average accuracy was 65.0 ± 4.7% without the AI model and 74.0 ± 2.7% with the AI model, a statistically significant improvement (P < 0.01).
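A minimal sketch of a three-layer fully connected fusion head of the kind described above, taking the per-patient outputs of the ECG, CXR, and BNP base models (assumed here to be probabilities in [0, 1]) as input; the layer widths are illustrative assumptions.

    import tensorflow as tf

    def build_fusion_head(n_inputs=3):
        # Inputs: outputs of the ECG CNN, the CXR ResNet18 model, and the BNP logistic
        # regression model for one patient (assumed to be probabilities).
        inp = tf.keras.Input(shape=(n_inputs,), name="base_model_outputs")
        x = tf.keras.layers.Dense(16, activation="relu")(inp)   # widths are illustrative
        x = tf.keras.layers.Dense(8, activation="relu")(x)
        out = tf.keras.layers.Dense(1, activation="sigmoid", name="ph_probability")(x)
        model = tf.keras.Model(inp, out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auroc")])
        return model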
CONCLUSION: Our ensemble learning model improved doctors' accuracy in detecting PH from ECG, CXR, and BNP examinations. This suggests that earlier and more accurate PH diagnosis is possible, potentially improving patient prognosis.
PMID:40110214 | PMC:PMC11914732 | DOI:10.1093/ehjdh/ztae097
Prediction of time averaged wall shear stress distribution in coronary arteries' bifurcation varying in morphological features via deep learning
Front Physiol. 2025 Mar 4;16:1518732. doi: 10.3389/fphys.2025.1518732. eCollection 2025.
ABSTRACT
INTRODUCTION: Understanding the hemodynamics of blood circulation is crucial to reveal the processes contributing to stenosis and atherosclerosis development.
METHOD: Computational fluid dynamics (CFD) facilitates this understanding by simulating blood flow patterns in coronary arteries. Nevertheless, applying CFD in fast-response scenarios presents a challenge due to its high computational cost. To overcome this challenge, we integrate a deep learning (DL) method to improve efficiency and responsiveness. This study presents a DL approach for predicting Time-Averaged Wall Shear Stress (TAWSS) values in coronary artery bifurcations.
RESULTS: To prepare the dataset, 1800 idealized models with varying morphological parameters were created. We then designed a CNN-based U-net architecture to predict TAWSS from the point cloud of the geometries, implemented in TensorFlow 2.3.0. Our results indicate that the proposed algorithm can generate predictions in less than one second, showcasing its computational efficiency and suitability for fast-response applications.
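The following is a compact sketch of a CNN-based U-net regressor in TensorFlow, assuming (as an illustration only) that the bifurcation point cloud is resampled onto a fixed 2D parameter grid with coordinate channels; the grid size, depth, and filter counts are assumptions, not the paper's exact network.

    import tensorflow as tf

    def build_tawss_unet(grid=(64, 64), channels=3):
        # Input: point cloud resampled to a (64, 64) grid with (x, y, z) channels (assumed).
        inp = tf.keras.Input(shape=(*grid, channels))

        def conv_block(x, f):
            x = tf.keras.layers.Conv2D(f, 3, padding="same", activation="relu")(x)
            return tf.keras.layers.Conv2D(f, 3, padding="same", activation="relu")(x)

        c1 = conv_block(inp, 32)
        p1 = tf.keras.layers.MaxPooling2D()(c1)
        c2 = conv_block(p1, 64)
        p2 = tf.keras.layers.MaxPooling2D()(c2)
        b = conv_block(p2, 128)                                   # bottleneck
        u2 = tf.keras.layers.UpSampling2D()(b)
        c3 = conv_block(tf.keras.layers.Concatenate()([u2, c2]), 64)
        u1 = tf.keras.layers.UpSampling2D()(c3)
        c4 = conv_block(tf.keras.layers.Concatenate()([u1, c1]), 32)
        out = tf.keras.layers.Conv2D(1, 1, activation="linear", name="tawss")(c4)  # per-point TAWSS

        model = tf.keras.Model(inp, out)
        model.compile(optimizer="adam", loss="mae")
        return model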
DISCUSSION: Furthermore, the DL-based predictions demonstrate strong agreement with results from CFD simulations, with a normalized mean absolute error of only 2.53% across various cases.
PMID:40110184 | PMC:PMC11920710 | DOI:10.3389/fphys.2025.1518732
GDP prediction of The Gambia using generative adversarial networks
Front Artif Intell. 2025 Mar 5;8:1546398. doi: 10.3389/frai.2025.1546398. eCollection 2025.
ABSTRACT
Predicting Gross Domestic Product (GDP) is one of the most crucial tasks in analyzing a nation's economy and growth. The primary goal of this study is to forecast GDP using factors such as government spending, inflation, official development aid, remittance inflows, and Foreign Direct Investment (FDI). Additionally, the paper aims to provide an alternative perspective on the Generative Adversarial Networks (GAN) method and demonstrate how such a deep learning technique can enhance the accuracy of GDP predictions for a small-data economy like The Gambia. We implemented a GAN to predict GDP from these economic factors over the period 1970 to 2022. Performance metrics, including the coefficient of determination (R2), mean absolute error (MAE), mean absolute percentage error (MAPE), and root-mean-square error (RMSE), were used to evaluate accuracy. Among the models tested - Random Forest Regression (RF), XGBoost (XGB), and Support Vector Regression (SVR) - the GAN model demonstrated superior performance, achieving the highest prediction accuracy of 99%. Although the GAN was the most dependable model for capturing intricate correlations between GDP and its contributing factors, RF and XGBoost also achieved accuracies of 98% each. This makes the GAN the most desirable model for GDP prediction in our study. Through this analysis, the study aims to provide actionable insights to support strategies that sustain economic growth. This approach enables the generation of accurate GDP forecasts, offering a valuable tool for policymakers and stakeholders.
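A small sketch of how the listed regression metrics (R2, MAE, MAPE, RMSE) could be computed for two of the baseline models on this kind of five-feature annual series; the data below are synthetic placeholders, the chronological split is an assumption, and the GAN and XGBoost models are omitted from the sketch.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.metrics import (r2_score, mean_absolute_error,
                                 mean_absolute_percentage_error, mean_squared_error)

    # Hypothetical feature matrix: government spending, inflation, official development
    # aid, remittance inflows, and FDI per year; y is GDP. Replace with the 1970-2022 series.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(53, 5))
    y = X @ np.array([0.5, -0.2, 0.1, 0.3, 0.4]) + rng.normal(scale=0.05, size=53)

    for name, model in [("RF", RandomForestRegressor(random_state=0)), ("SVR", SVR())]:
        model.fit(X[:45], y[:45])                      # simple chronological split (illustrative)
        pred = model.predict(X[45:])
        print(name,
              "R2=%.3f" % r2_score(y[45:], pred),
              "MAE=%.3f" % mean_absolute_error(y[45:], pred),
              "MAPE=%.3f" % mean_absolute_percentage_error(y[45:], pred),
              "RMSE=%.3f" % np.sqrt(mean_squared_error(y[45:], pred)))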
PMID:40110175 | PMC:PMC11920123 | DOI:10.3389/frai.2025.1546398
Predicting implicit concept embeddings for singular relationship discovery replication of closed literature-based discovery
Front Res Metr Anal. 2025 Mar 5;10:1509502. doi: 10.3389/frma.2025.1509502. eCollection 2025.
ABSTRACT
OBJECTIVE: Literature-based Discovery (LBD) identifies new knowledge by leveraging existing literature. It exploits interconnecting implicit relationships to build bridges between isolated sets of non-interacting literatures. It has been used to facilitate drug repurposing, new drug discovery, and study adverse event reactions. Within the last decade, LBD systems have transitioned from using statistical methods to exploring deep learning (DL) to analyze semantic spaces between non-interacting literatures. Recent works explore knowledge graphs (KG) to represent explicit relationships. These works envision LBD as a knowledge graph completion (KGC) task and use DL to generate implicit relationships. However, these systems require the researcher to have domain-expert knowledge when submitting relevant queries for novel hypothesis discovery.
METHODS: Our method explores a novel approach to identify all implicit hypotheses given the researcher's search query and expedites the knowledge discovery process. We revise the KGC task as the task of predicting interconnecting vertex embeddings within the graph. We train our model using a similarity learning objective and compare our model's predictions against all known vertices within the graph to determine the likelihood of an implicit relationship (i.e., connecting edge). We also explore three approaches to represent edge connections between vertices within the KG: average, concatenation, and Hadamard. Lastly, we explore an approach to induce inductive biases and expedite model convergence (i.e., input representation scaling).
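The three edge-representation variants and the similarity-based ranking against all known vertices described above can be sketched as follows; the embedding source and dimensionality are left open, and the ranking uses plain cosine similarity as a stand-in for the trained similarity objective.

    import numpy as np

    def edge_representation(u, v, mode="hadamard"):
        # Three ways to represent an edge from the embeddings of its two vertices,
        # mirroring the average / concatenation / Hadamard variants described above.
        if mode == "average":
            return (u + v) / 2.0
        if mode == "concatenation":
            return np.concatenate([u, v])
        if mode == "hadamard":
            return u * v
        raise ValueError(mode)

    def rank_candidates(query_vec, vertex_embeddings):
        # Cosine similarity between a predicted linking-vertex embedding and every known
        # vertex; higher similarity suggests a more likely implicit relationship.
        norms = np.linalg.norm(vertex_embeddings, axis=1) * np.linalg.norm(query_vec)
        sims = vertex_embeddings @ query_vec / np.clip(norms, 1e-12, None)
        return np.argsort(-sims)  # vertex indices, most similar first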
RESULTS: We evaluate our method by replicating five known discoveries within the Hallmark of Cancer (HOC) datasets and compare our method to two existing works. Our results show no significant difference in reported ranks or model convergence rate between models trained with and without input representation scaling. Compared with previous works, our method achieves optimal performance on two of the five datasets and comparable performance on the remaining datasets. We further analyze our results using statistical significance testing to demonstrate the efficacy of our method.
CONCLUSION: We found our similarity-based learning objective predicts linking vertex embeddings for single relationship closed discovery replication. Our method also provides a ranked list of linking vertices between a set of inputs. This approach reduces researcher burden and allows further exploration of generated hypotheses.
PMID:40110121 | PMC:PMC11920161 | DOI:10.3389/frma.2025.1509502
A Hybrid Energy-Based and AI-Based Screening Approach for the Discovery of Novel Inhibitors of AXL
ACS Med Chem Lett. 2025 Feb 10;16(3):410-419. doi: 10.1021/acsmedchemlett.4c00511. eCollection 2025 Mar 13.
ABSTRACT
AXL, a member of the TAM receptor tyrosine kinase family, plays a significant role in the growth and survival of various tissues and tumors, making it a critical target for cancer therapy. This study introduces a novel high-throughput virtual screening (HTVS) methodology that merges an AI-enhanced graph neural network, PLANET, with a geometric deep learning algorithm, DeepDock. Using this approach, we identified potent AXL inhibitors from our database. Notably, compound 9, with an IC50 of 9.378 nM, showed excellent inhibitory activity, suggesting its potential as a candidate for further research. We also performed molecular dynamics simulations to explore the interactions between compound 9 and AXL, providing insights for future enhancements. This hybrid screening method proves effective in finding promising AXL inhibitors and advances the development of new cancer therapies.
PMID:40110119 | PMC:PMC11921171 | DOI:10.1021/acsmedchemlett.4c00511
Artificial intelligence models for periodontitis classification: A systematic review
J Dent. 2025 Mar 17:105690. doi: 10.1016/j.jdent.2025.105690. Online ahead of print.
ABSTRACT
OBJECTIVES: The graded diagnosis of periodontitis has long been a challenge for dentists. This systematic review aimed to investigate the performance of artificial intelligence (AI) models for periodontitis classification.
DATA: This review includes original studies that explore the application of AI in periodontitis classification systems.
SOURCES: Two reviewers independently conducted a comprehensive search of literature published up to April 2024 in databases including PubMed, Web of Science, MEDLINE, Scopus, and Cochrane Library.
STUDY SELECTION: A total of 28 articles were eventually included in this study, from which 10 mapping parameters were extracted and evaluated separately for each article.
RESULTS: AI's diagnostic capabilities are comparable to those of a general dentist/periodontist, achieving an overall diagnostic accuracy rate of over 70% for periodontitis classification, with some reaching 80-90%. Variations in diagnosis accuracy rates were observed across different stages of periodontitis.
CONCLUSIONS: The AI model provides a novel and relatively reliable method for periodontitis classification. However, several key issues remain to be addressed, including access to and quality of data, interpretation of the decision-making process of the model, the ability of the model to generalize, and ethical and privacy considerations.
CLINICAL SIGNIFICANCE: The development of AI models for periodontitis classification is expected to assist dentists in improving diagnostic efficiency and enhancing diagnostic accuracy, and further development is expected to assist telemedicine and home self-testing.
PMID:40107599 | DOI:10.1016/j.jdent.2025.105690
AI Image Generation Technology in Ophthalmology: Use, Misuse and Future Applications
Prog Retin Eye Res. 2025 Mar 17:101353. doi: 10.1016/j.preteyeres.2025.101353. Online ahead of print.
ABSTRACT
BACKGROUND: AI-powered image generation technology holds the potential to dramatically reshape clinical ophthalmic practice. The adoption of this technology relies on clinician acceptance, yet it is an unfamiliar technology for both ophthalmic researchers and clinicians. In this work we present a literature review on the application of image generation technology in ophthalmology to discuss its theoretical applications and future role.
METHODS: First, we explore the key model designs used for image synthesis, including generative adversarial networks, autoencoders, and diffusion models. We then perform a survey of the literature for image generation technology in ophthalmology prior to September 2024, collecting the type of model used, as well as its clinical application, for each study. Finally, we discuss the limitations of this technology, the risks of its misuse and the future directions of research in this field.
RESULTS: Applications of this technology include improving diagnostic model performance, inter-modality image transformation, treatment and disease prognosis, image denoising, and education. Key challenges for integration of this technology into ophthalmic clinical practice include bias in generative models, risk to patient data security, computational and logistical barriers to model development, challenges with model explainability, inconsistent use of validation metrics between studies and misuse of synthetic images. Looking forward, researchers are placing a further emphasis on clinically grounded metrics, the development of image generation foundation models and the implementation of methods to ensure data provenance.
CONCLUSION: It is evident that image generation technology has the potential to benefit the field of ophthalmology across many tasks; however, compared with other medical applications of AI, it is still in its infancy. This review aims to enable ophthalmic researchers to identify the optimal model and methodology to best take advantage of this technology.
PMID:40107410 | DOI:10.1016/j.preteyeres.2025.101353
Comparison of the characteristics between machine learning and deep learning algorithms for ablation site classification in a novel cloud-based system
Heart Rhythm. 2025 Mar 17:S1547-5271(25)02192-7. doi: 10.1016/j.hrthm.2025.03.1955. Online ahead of print.
ABSTRACT
BACKGROUND: CARTONET is a cloud-based system for the analysis of ablation procedures using the CARTO system. The current CARTONET R14 model employs deep learning, but its accuracy and positive predictive value (PPV) remain under-evaluated.
OBJECTIVE: This study aimed to compare the characteristics of the CARTONET system between the R12.1 and the R14 models.
METHODS: Data from 396 atrial fibrillation ablation cases were analyzed. Using a CARTONET R14 model, the sensitivity and PPV of the automated anatomical location model were investigated. The distribution of potential reconnection sites and confidence level for each site were investigated. We also compared the difference in that data between the CARTONET R12.1, the previous CARTONET version, and the CARTONET R14 models.
RESULTS: We analyzed 39,169 tagged points overall and the gap prediction of 625 segments using the CARTONET R14 model. The sensitivity and PPV of the R14 model improved significantly compared with the R12.1 model (R12.1 vs. R14: sensitivity, 71.2% vs. 77.5%, p<0.0001; PPV, 85.6% vs. 86.2%, p=0.0184). Reconnections were most frequently observed in the posterior area of the RPVs and LPVs (RPV, 98/238 [41.2%]; LPV, 190/387 [49.1%]). On the other hand, the likelihood of reconnection was highest in the roof area for both the RPVs and LPVs (%; RPV, 14 [5.5-41]; LPV, 16 [8-22]).
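For reference, the two headline metrics compared above are simple functions of the confusion counts; the counts in the usage line are illustrative placeholders, not the study's data.

    def sensitivity_ppv(tp, fp, fn):
        # Sensitivity = TP / (TP + FN); positive predictive value (PPV) = TP / (TP + FP).
        sensitivity = tp / (tp + fn)
        ppv = tp / (tp + fp)
        return sensitivity, ppv

    # Illustrative values only (not the study's confusion counts):
    print(sensitivity_ppv(tp=310, fp=50, fn=90))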
CONCLUSION: The R14 model significantly improved sensitivity and PPV compared with the R12.1 model. The tendency in predicting potential reconnection sites was similar to that of the previous version, the R12.1 model.
PMID:40107403 | DOI:10.1016/j.hrthm.2025.03.1955
A multimodal framework for assessing the link between pathomics, transcriptomics, and pancreatic cancer mutations
Comput Med Imaging Graph. 2025 Mar 15;123:102526. doi: 10.1016/j.compmedimag.2025.102526. Online ahead of print.
ABSTRACT
In Pancreatic Ductal Adenocarcinoma (PDAC), predicting genetic mutations directly from histopathological images using deep learning can provide valuable insights. Combining several omics can provide further knowledge of the mechanisms underlying tumor biology. This study aimed to develop an explainable multimodal pipeline to predict mutations in the KRAS, TP53, SMAD4, and CDKN2A genes, integrating pathomic features with transcriptomics from two independent datasets: TCGA-PAAD, used as the training set, and CPTAC-PDA, used as the external validation set. Large and small configurations of CLAM (Clustering-constrained Attention Multiple Instance Learning) models were evaluated with three different feature extractors (ResNet50, UNI, and CONCH). RNA-seq data were pre-processed both conventionally and using three autoencoder architectures. The processed transcript panels were input into machine learning (ML) models for mutation classification. Attention maps and SHAP were employed to highlight significant features from both data modalities. A fusion layer or a voting mechanism combined the outputs from the pathomic and transcriptomic models to obtain a multimodal prediction. Performance was assessed by the Area Under the Receiver Operating Characteristic (AUROC) and Precision-Recall (AUPRC) curves. On the validation set, for KRAS, multimodal ML achieved an AUROC of 0.92 and an AUPRC of 0.98. For TP53, the multimodal voting model achieved an AUROC of 0.75 and an AUPRC of 0.85. For SMAD4 and CDKN2A, transcriptomic ML models achieved AUROCs of 0.71 and 0.65, while multimodal ML showed AUPRCs of 0.39 and 0.37, respectively. This approach demonstrates the potential of combining pathomics with transcriptomics, offering an interpretable framework for predicting key genetic mutations in PDAC.
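A minimal sketch of a late-fusion soft vote between unimodal mutation probabilities, scored with AUROC and AUPRC; the weighting, the example predictions, and the labels below are illustrative assumptions and only a simple stand-in for the fusion layer or voting mechanism described above.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    def soft_vote(p_pathomics, p_transcriptomics, w=0.5):
        # Weighted average of the two unimodal mutation probabilities (w is an assumption).
        return w * np.asarray(p_pathomics) + (1 - w) * np.asarray(p_transcriptomics)

    # Hypothetical predictions for a handful of validation cases (illustrative only):
    y_true = np.array([1, 0, 1, 1, 0, 0, 1])
    p_path = np.array([0.8, 0.3, 0.6, 0.7, 0.4, 0.2, 0.9])
    p_rna = np.array([0.7, 0.2, 0.8, 0.6, 0.5, 0.1, 0.8])
    p_multi = soft_vote(p_path, p_rna)
    print("AUROC=%.2f" % roc_auc_score(y_true, p_multi),
          "AUPRC=%.2f" % average_precision_score(y_true, p_multi))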
PMID:40107149 | DOI:10.1016/j.compmedimag.2025.102526
CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation
Comput Med Imaging Graph. 2025 Mar 13;123:102525. doi: 10.1016/j.compmedimag.2025.102525. Online ahead of print.
ABSTRACT
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, primarily manifested in the difficulty of tumor localization and the challenges in delineating blurred boundaries. Additionally, the black-box nature of deep learning models leads to insufficient quantification of the confidence in the results, preventing users from directly understanding the model's confidence in its predictions, which severely impacts the clinical application of deep learning models. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the issue of insufficient confidence quantification in NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, aiding users in understanding the uncertainty risks associated with model outputs. To address the difficulty in localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist in edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that our proposed method is effective and superior to existing state-of-the-art models, possessing considerable clinical application value.
PMID:40107148 | DOI:10.1016/j.compmedimag.2025.102525
Deep learning method for malaria parasite evaluation from microscopic blood smear
Artif Intell Med. 2025 Mar 15;163:103114. doi: 10.1016/j.artmed.2025.103114. Online ahead of print.
ABSTRACT
OBJECTIVE: Malaria remains a leading cause of global morbidity and mortality, responsible for approximately 597,000 deaths according to the World Malaria Report 2024. The study aims to systematically review current methodologies for automated analysis of the Plasmodium genus in malaria diagnostics. Specifically, it focuses on computer-assisted methods, examining databases, blood smear types, staining techniques, and diagnostic models used for malaria characterization while identifying the limitations and contributions of recent studies.
METHODS: A systematic literature review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Peer-reviewed and published studies from 2020 to 2024 were retrieved from Web of Science and Scopus. Inclusion criteria focused on studies utilizing deep learning and machine learning models for automated malaria detection from microscopic blood smears. The review considered various blood smear types, staining techniques, and diagnostic models, providing a comprehensive evaluation of the automated diagnostic landscape for malaria.
RESULTS: The NIH database is the standard and most widely tested database for malaria diagnostics. Giemsa-stained thin blood smears are the most efficient diagnostic method for detecting and observing the Plasmodium lifecycle. This study identified three categories of ML models most suitable for digital diagnosis of malaria: most accurate, ResNet and VGG, with a peak accuracy of 99.12%; most popular, custom CNN-based models, used by 58% of studies; and least complex, the CADx model. A few pre- and post-processing techniques, such as Gaussian filtering and autoencoder-based noise reduction, are also discussed for improving model accuracy.
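A brief sketch of the kind of pipeline reviewed above: Gaussian-filter pre-processing of a smear image followed by a transfer-learning classifier with a ResNet50 backbone. The sigma value, image size, frozen backbone, and binary parasitized-vs-uninfected labelling are illustrative assumptions rather than any single study's setup.

    import tensorflow as tf
    from scipy.ndimage import gaussian_filter

    def preprocess(image_array, sigma=1.0):
        # Gaussian filtering for noise reduction on an H x W x C image;
        # sigma 0 on the channel axis keeps channels independent (illustrative choice).
        return gaussian_filter(image_array, sigma=(sigma, sigma, 0))

    def build_classifier(input_shape=(224, 224, 3)):
        # Transfer learning with a ResNet50 backbone; a VGG backbone could be swapped in.
        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=input_shape, pooling="avg")
        base.trainable = False
        out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)  # parasitized vs. uninfected
        model = tf.keras.Model(base.input, out)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model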
CONCLUSION: Automated methods for malaria diagnostics show considerable promise in improving diagnostic accuracy and reducing human error. While deep learning models have demonstrated high performance, challenges remain in data standardization and real-world application. Addressing these gaps could lead to more reliable and scalable diagnostic tools, aiding global malaria control efforts.
PMID:40107120 | DOI:10.1016/j.artmed.2025.103114
Histopathology image classification based on semantic correlation clustering domain adaptation
Artif Intell Med. 2025 Mar 17;163:103110. doi: 10.1016/j.artmed.2025.103110. Online ahead of print.
ABSTRACT
Deep learning has been successfully applied to histopathology image classification tasks. However, the performance of deep models is data-driven, and the acquisition and annotation of pathological image samples are difficult, which limits model performance. Compared to whole slide images (WSI) of patients, histopathology image datasets from animal models are easier to acquire and annotate. Therefore, this paper proposes an unsupervised domain adaptation method based on semantic correlation clustering for histopathology image classification. The aim is to utilize the Minmice model histopathology image dataset to achieve classification and recognition of human WSIs. Firstly, the multi-scale fused features extracted from the source and target domains are normalized and mapped. In the new feature space, the cosine distance between class centers is used to measure the semantic correlation between categories. Then, the domain centers, class centers, and sample distributions are aligned in a self-constrained manner. Multi-granular information is applied to achieve cross-domain semantic correlation knowledge transfer between classes. Finally, a probabilistic heatmap is used to visualize the model's predictions and annotate the cancerous regions in WSIs. Experimental results show that the proposed method has high classification accuracy for WSIs, and the annotated results are close to manual annotation, indicating its potential for clinical applications.
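The class-center and cosine-distance computation described above can be sketched as follows; the feature source and normalization scheme are left to the surrounding pipeline and are not specified here.

    import numpy as np

    def class_centers(features, labels):
        # Mean feature vector (class center) for each category in the mapped feature space.
        classes = np.unique(labels)
        return classes, np.stack([features[labels == c].mean(axis=0) for c in classes])

    def semantic_correlation(centers):
        # Pairwise cosine distance between class centers, used above as a measure of
        # how semantically related two categories are across domains.
        normed = centers / np.linalg.norm(centers, axis=1, keepdims=True)
        cosine_similarity = normed @ normed.T
        return 1.0 - cosine_similarity  # 0 = identical direction, larger = less related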
PMID:40107119 | DOI:10.1016/j.artmed.2025.103110
Predicting infant brain connectivity with federated multi-trajectory GNNs using scarce data
Med Image Anal. 2025 Mar 13;102:103541. doi: 10.1016/j.media.2025.103541. Online ahead of print.
ABSTRACT
The understanding of the convoluted evolution of infant brain networks during the first postnatal year is pivotal for identifying the dynamics of early brain connectivity development. Thanks to the valuable insights into the brain's anatomy, existing deep learning frameworks focused on forecasting the brain evolution trajectory from a single baseline observation. While yielding remarkable results, they suffer from three major limitations. First, they lack the ability to generalize to multi-trajectory prediction tasks, where each graph trajectory corresponds to a particular imaging modality or connectivity type (e.g., T1-w MRI). Second, existing models require extensive training datasets to achieve satisfactory performance which are often challenging to obtain. Third, they do not efficiently utilize incomplete time series data. To address these limitations, we introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network. Using the power of federation, we aggregate local learnings among diverse hospitals with limited datasets. As a result, we enhance the performance of each hospital's local generative model, while preserving data privacy. The three key innovations of FedGmTE-Net++ are: (i) presenting the first federated learning framework specifically designed for brain multi-trajectory evolution prediction in a data-scarce environment, (ii) incorporating an auxiliary regularizer in the local objective function to exploit all the longitudinal brain connectivity within the evolution trajectory and maximize data utilization, (iii) introducing a two-step imputation process, comprising a preliminary K-Nearest Neighbours based precompletion followed by an imputation refinement step that employs regressors to improve similarity scores and refine imputations. Our comprehensive experimental results showed the outperformance of FedGmTE-Net++ in brain multi-trajectory prediction from a single baseline graph in comparison with benchmark methods. Our source code is available at https://github.com/basiralab/FedGmTE-Net-plus.
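As an illustration of the federation idea above, the sketch below performs a FedAvg-style, sample-count-weighted average of local model parameters across hospitals. This is one common aggregation rule, not necessarily the exact rule used by FedGmTE-Net++, and the two-hospital weights in the usage lines are placeholders.

    import numpy as np

    def federated_average(local_weight_sets, sample_counts):
        # Weight each hospital's model parameters by its local sample count, then
        # average layer by layer; raw data never leaves the hospital.
        total = float(sum(sample_counts))
        coefficients = [n / total for n in sample_counts]
        aggregated = []
        for layer_weights in zip(*local_weight_sets):            # iterate layer by layer
            aggregated.append(sum(c * w for c, w in zip(coefficients, layer_weights)))
        return aggregated

    # Usage with two hypothetical hospitals, each holding a two-layer model:
    hospital_a = [np.ones((3, 3)), np.zeros(3)]
    hospital_b = [np.zeros((3, 3)), np.ones(3)]
    global_weights = federated_average([hospital_a, hospital_b], sample_counts=[40, 10])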
PMID:40107118 | DOI:10.1016/j.media.2025.103541
Development of an artificial intelligence-generated, explainable treatment recommendation system for urothelial carcinoma and renal cell carcinoma to support multidisciplinary cancer conferences
Eur J Cancer. 2025 Mar 15;220:115367. doi: 10.1016/j.ejca.2025.115367. Online ahead of print.
ABSTRACT
BACKGROUND: Decisions on the best available treatment in clinical oncology are based on expert opinions in multidisciplinary cancer conferences (MCC). Artificial intelligence (AI) could increase evidence-based treatment by generating additional treatment recommendations (TR). We aimed to develop such an AI system for urothelial carcinoma (UC) and renal cell carcinoma (RCC).
METHODS: Comprehensive data of patients with histologically confirmed UC and RCC who received MCC recommendations in the years 2015-2022 were transformed into machine-readable representations. A two-step process was developed to train a classifier to mimic TR, followed by identification of superordinate and detailed categories of TR. Machine learning (CatBoost, XGBoost, Random Forest) and deep learning (TabPFN, TabNet, SoftOrdering CNN, FCN) techniques were trained. Results were measured by F1-scores for accuracy weights.
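A minimal sketch of one of the listed tabular learners (a random forest) trained on machine-readable MCC features and scored with a weighted F1; the synthetic feature matrix, the three-way label coding, and the train/test split are placeholders, not the study's data or protocol.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # Hypothetical machine-readable MCC representation: one row per case, 77 input
    # parameters, label = superordinate treatment category (e.g. 0 = surgery,
    # 1 = anti-cancer drug, 2 = other). Values are synthetic placeholders.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1617, 77))
    y = rng.integers(0, 3, size=1617)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("weighted F1 = %.2f" % f1_score(y_te, clf.predict(X_te), average="weighted"))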
RESULTS: AI training was performed with 1617 (UC) and 880 (RCC) MCC recommendations (77 and 76 patient input parameters, respectively). The AI system generated fully automated TR with excellent F1-scores for UC (e.g. 'Surgery' 0.81, 'Anti-cancer drug' 0.83, 'Gemcitabine/Cisplatin' 0.88) and RCC (e.g. 'Anti-cancer drug' 0.92, 'Nivolumab' 0.78, 'Pembrolizumab/Axitinib' 0.89). Explainability is provided by clinical features and their importance scores. Finally, TR and explainability were visualized on a dashboard.
CONCLUSION: This study demonstrates for the first time AI-generated, explainable TR in UC and RCC with excellent performance results as a potential support tool for high-quality, evidence-based TR in MCC. The comprehensive technical and clinical development sets global reference standards for future AI developments in MCC recommendations in clinical oncology. Next, prospective validation of the results is mandatory.
PMID:40107091 | DOI:10.1016/j.ejca.2025.115367
Neuro_DeFused-Net: A novel multi-scale 2DCNN architecture assisted diagnostic model for Parkinson's disease diagnosis using deep feature-level fusion of multi-site multi-modality neuroimaging data
Comput Biol Med. 2025 Mar 18;190:110029. doi: 10.1016/j.compbiomed.2025.110029. Online ahead of print.
ABSTRACT
BACKGROUND: Neurological disorders, particularly Parkinson's Disease (PD), are serious and progressive conditions that significantly impact patients' motor functions and overall quality of life. Accurate and timely diagnosis is still crucial, but it is quite challenging. Understanding the changes in the brain linked to PD requires using neuroimaging modalities like magnetic resonance imaging (MRI). Artificial intelligence (AI), particularly deep learning (DL) methods, can potentially improve the precision of diagnosis.
METHOD: In the current study, we present a novel approach that integrates T1-weighted structural MRI and resting-state functional MRI using multi-site-cum-multi-modality neuroimaging data. To maximize the richness of the data, our approach integrates deep feature-level fusion across these modalities. We propose a custom multi-scale 2D Convolutional Neural Network (CNN) architecture that captures features at different spatial scales, enhancing the model's capacity to learn PD-related complex patterns.
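The sketch below illustrates the two ideas named above, multi-scale 2D convolutions and feature-level fusion of two MRI-derived inputs; the 128 x 128 single-channel inputs, kernel sizes, and filter counts are illustrative assumptions, not the published Neuro_DeFused-Net configuration.

    import tensorflow as tf

    def multi_scale_branch(inp, name):
        # Parallel convolutions with different kernel sizes capture features at
        # several spatial scales; filter counts are illustrative.
        scales = [tf.keras.layers.Conv2D(16, k, padding="same", activation="relu",
                                         name=f"{name}_k{k}")(inp) for k in (3, 5, 7)]
        x = tf.keras.layers.Concatenate()(scales)
        x = tf.keras.layers.MaxPooling2D()(x)
        return tf.keras.layers.GlobalAveragePooling2D()(x)

    def build_fused_model(shape=(128, 128, 1)):
        t1_in = tf.keras.Input(shape=shape, name="t1_mri_slice")
        fmri_in = tf.keras.Input(shape=shape, name="rs_fmri_map")
        fused = tf.keras.layers.Concatenate()([multi_scale_branch(t1_in, "t1"),
                                               multi_scale_branch(fmri_in, "fmri")])  # feature-level fusion
        x = tf.keras.layers.Dense(64, activation="relu")(fused)
        out = tf.keras.layers.Dense(1, activation="sigmoid", name="pd_probability")(x)
        model = tf.keras.Model([t1_in, fmri_in], out)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model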
RESULTS: With an accuracy of 97.12%, sensitivity of 97.26%, F1-score of 97.63%, Area Under the Curve (AUC) of 0.99, mean average precision (mAP) of 99.53%, and Dice coefficient of 0.97, the proposed Neuro_DeFused-Net diagnostic model performs exceptionally well. These results highlight the model's robust ability to distinguish PD patients from controls, even across a variety of datasets and neuroimaging modalities.
CONCLUSIONS: Our findings demonstrate the transformational ability of AI-driven models to facilitate the early diagnosis of PD. The proposed Neuro_DeFused-Net model enables the rapid detection of health markers through fast analysis of complicated neuroimaging data. Thus, timely intervention and individualized treatment strategies lead to improved patient outcomes and quality of life.
PMID:40107026 | DOI:10.1016/j.compbiomed.2025.110029
Emerging Trends and Innovations in Radiologic Diagnosis of Thoracic Diseases
Invest Radiol. 2025 Mar 20. doi: 10.1097/RLI.0000000000001179. Online ahead of print.
ABSTRACT
Over the past decade, Investigative Radiology has published numerous studies that have fundamentally advanced the field of thoracic imaging. This review summarizes key developments in imaging modalities, computational tools, and clinical applications, highlighting major breakthroughs in thoracic diseases - lung cancer, pulmonary nodules, interstitial lung disease (ILD), chronic obstructive pulmonary disease (COPD), COVID-19 pneumonia, and pulmonary embolism - and outlining future directions.
Artificial intelligence (AI)-driven computer-aided detection systems and radiomic analyses have notably improved the detection and classification of pulmonary nodules, while photon-counting detector CT (PCD-CT) and low-field MRI offer enhanced resolution or radiation-free strategies. For lung cancer, CT texture analysis and perfusion imaging refine prognostication and therapy planning. ILD assessment benefits from automated diagnostic tools and innovative imaging techniques, such as PCD-CT and functional MRI, which reduce the need for invasive diagnostic procedures while improving accuracy. In COPD, dual-energy CT-based ventilation/perfusion assessment and dark-field radiography enable earlier detection and staging of emphysema, complemented by deep learning approaches for improved quantification. COVID-19 research has underscored the clinical utility of chest CT, radiographs, and AI-based algorithms for rapid triage, disease severity evaluation, and follow-up. Furthermore, tuberculosis remains a significant global health concern, highlighting the importance of AI-assisted chest radiography for early detection and management. Meanwhile, advances in CT pulmonary angiography, including dual-energy reconstructions, allow more sensitive detection of pulmonary emboli.
Collectively, these innovations demonstrate the power of merging novel imaging technologies, quantitative functional analysis, and AI-driven tools to transform thoracic disease management. Ongoing progress promises more precise and personalized diagnostic and therapeutic strategies for diverse thoracic diseases.
PMID:40106831 | DOI:10.1097/RLI.0000000000001179
Using Deep Learning to Perform Automatic Quantitative Measurement of Masseter and Tongue Muscles in Persons With Dementia: Cross-Sectional Study
JMIR Aging. 2025 Mar 19;8:e63686. doi: 10.2196/63686.
ABSTRACT
BACKGROUND: Sarcopenia (loss of muscle mass and strength) increases adverse outcomes risk and contributes to cognitive decline in older adults. Accurate methods to quantify muscle mass and predict adverse outcomes, particularly in older persons with dementia, are still lacking.
OBJECTIVE: This study's main objective was to assess the feasibility of using deep learning techniques for segmentation and quantification of musculoskeletal tissues in magnetic resonance imaging (MRI) scans of the head in patients with neurocognitive disorders. The aim was to pave the way for using automated techniques for opportunistic detection of sarcopenia in patients with neurocognitive disorders.
METHODS: In a cross-sectional analysis of 53 participants, we used 7 U-Net-like deep learning models to segment 5 different tissues in head MRI images and used the Dice similarity coefficient and average symmetric surface distance as main assessment techniques to compare results. We also analyzed the relationship between BMI and muscle and fat volumes.
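For reference, the Dice similarity coefficient used as the main assessment metric above compares a predicted and a reference binary mask; a minimal implementation is sketched below (the small epsilon guards against empty masks and is an illustrative choice).

    import numpy as np

    def dice_coefficient(pred_mask, true_mask, eps=1e-7):
        # Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|).
        pred = np.asarray(pred_mask, dtype=bool)
        true = np.asarray(true_mask, dtype=bool)
        intersection = np.logical_and(pred, true).sum()
        return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)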
RESULTS: Our framework accurately quantified masseter and subcutaneous fat on the left and right sides of the head and tongue muscle (mean Dice similarity coefficient 92.4%). A significant correlation exists between the area and volume of tongue muscle, left masseter muscle, and BMI.
CONCLUSIONS: Our study demonstrates the successful application of a deep learning model to quantify muscle volumes in head MRI in patients with neurocognitive disorders. This is a promising first step toward clinically applicable artificial intelligence and deep learning methods for estimating masseter and tongue muscle and predicting adverse outcomes in this population.
PMID:40106819 | DOI:10.2196/63686
Reducing hepatitis C diagnostic disparities with a fully automated deep learning-enabled microfluidic system for HCV antigen detection
Sci Adv. 2025 Mar 21;11(12):eadt3803. doi: 10.1126/sciadv.adt3803. Epub 2025 Mar 19.
ABSTRACT
Viral hepatitis remains a major global health issue, with chronic hepatitis B (HBV) and hepatitis C (HCV) causing approximately 1 million deaths annually, primarily due to liver cancer and cirrhosis. More than 1.5 million people contract HCV each year, disproportionately affecting vulnerable populations, including American Indians and Alaska Natives (AI/AN). While direct-acting antivirals (DAAs) are highly effective, timely and accurate HCV diagnosis remains a challenge, particularly in resource-limited settings. The current two-step HCV testing process is costly and time-intensive, often leading to patient loss before treatment. Point-of-care (POC) HCV antigen (Ag) testing offers a promising alternative, but no FDA-approved test meets the required sensitivity and specificity. To address this, we developed a fully automated, smartphone-based POC HCV Ag assay using platinum nanoparticles, deep learning image processing, and microfluidics. With an overall accuracy of 94.59%, this cost-effective, portable device has the potential to reduce HCV-related health disparities, particularly among AI/AN populations, improving accessibility and equity in care.
PMID:40106555 | DOI:10.1126/sciadv.adt3803