Deep learning

MutualDTA: An Interpretable Drug-Target Affinity Prediction Model Leveraging Pretrained Models and Mutual Attention

Wed, 2025-01-29 06:00

J Chem Inf Model. 2025 Jan 29. doi: 10.1021/acs.jcim.4c01893. Online ahead of print.

ABSTRACT

Efficient and accurate drug-target affinity (DTA) prediction can significantly accelerate the drug development process. Recently, deep learning models have been widely applied to DTA prediction and have achieved notable success. However, existing methods often encounter several common issues: first, the data representations lack sufficient information; second, the extracted features are not comprehensive; and third, most methods lack interpretability when modeling drug-target binding. To overcome these problems, we propose an interpretable deep learning model called MutualDTA for predicting DTA. MutualDTA leverages the power of pretrained models to obtain accurate representations of drugs and targets. It also employs well-designed modules to extract hidden features from these representations. Furthermore, the interpretability of MutualDTA is realized by the Mutual-Attention module, which (i) establishes relationships between drugs and proteins from the perspective of intermolecular interactions between drug atoms and protein amino acid residues and (ii) allows MutualDTA to capture the binding sites based on attention scores. Test results on two benchmark data sets show that MutualDTA achieves the best performance compared with 12 state-of-the-art models. Attention visualization experiments show that MutualDTA can capture partial interaction sites, which not only helps drug developers reduce the search space for binding sites but also demonstrates the interpretability of MutualDTA. Finally, the trained MutualDTA is applied to screen for high-affinity drugs targeting Alzheimer's disease (AD)-related proteins, and some of the screened drugs are present in an anti-AD drug library. These results demonstrate the reliability of MutualDTA in drug development.
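
The abstract gives no code, but the core mutual-attention idea, drug atoms attending over protein residues and vice versa, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under assumed shapes and a standard scaled-dot-product scoring, not the authors' implementation; the score matrix is the kind of atom-residue map one could inspect for candidate binding sites.

```python
import torch
import torch.nn.functional as F

def mutual_attention(drug, protein):
    """Toy mutual attention between drug atoms and protein residues.

    drug:    (n_atoms, d)    per-atom embeddings
    protein: (n_residues, d) per-residue embeddings
    Returns attended features plus the atom-residue score matrix.
    """
    d = drug.size(-1)
    scores = drug @ protein.T / d ** 0.5           # (n_atoms, n_residues)
    drug_ctx = F.softmax(scores, dim=1) @ protein  # atoms attend to residues
    prot_ctx = F.softmax(scores.T, dim=1) @ drug   # residues attend to atoms
    return drug_ctx, prot_ctx, scores

atoms = torch.randn(30, 64)      # e.g., 30 atoms, 64-dim features
residues = torch.randn(250, 64)  # e.g., 250 residues
_, _, attn = mutual_attention(atoms, residues)
print(attn.shape)  # torch.Size([30, 250])
```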

PMID:39878060 | DOI:10.1021/acs.jcim.4c01893

Categories: Literature Watch

Noncoding variants and sulcal patterns in congenital heart disease: Machine learning to predict functional impact

Wed, 2025-01-29 06:00

iScience. 2024 Dec 28;28(2):111707. doi: 10.1016/j.isci.2024.111707. eCollection 2025 Feb 21.

ABSTRACT

Neurodevelopmental impairments associated with congenital heart disease (CHD) may arise from perturbations in brain developmental pathways, including the formation of sulcal patterns. While genetic factors contribute to sulcal features, the association of noncoding de novo variants (ncDNVs) with sulcal patterns in people with CHD remains poorly understood. Leveraging deep learning models, we examined the predicted impact of ncDNVs on gene regulatory signals. Predicted impact was compared between participants with CHD and a jointly called cohort without CHD. We then assessed the relationship of the predicted impact of ncDNVs with their sulcal folding patterns. ncDNVs predicted to increase H3K9me2 modification were associated with larger disruptions in right parietal sulcal patterns in the CHD cohort. Genes predicted to be regulated by these ncDNVs were enriched for functions related to neuronal development. This highlights the potential of deep learning models to generate hypotheses about the role of noncoding variants in brain development.

PMID:39877905 | PMC:PMC11772982 | DOI:10.1016/j.isci.2024.111707

Categories: Literature Watch

A Deep Learning Framework for Automated Classification and Archiving of Orthodontic Diagnostic Documents

Wed, 2025-01-29 06:00

Cureus. 2024 Dec 28;16(12):e76530. doi: 10.7759/cureus.76530. eCollection 2024 Dec.

ABSTRACT

BACKGROUND: Orthodontic diagnostic workflows often rely on manual classification and archiving of large volumes of patient images, a process that is both time-consuming and prone to errors such as mislabeling and incomplete documentation. These challenges can compromise treatment accuracy and overall patient care. To address these issues, we propose an artificial intelligence (AI)-driven deep learning framework based on convolutional neural networks (CNNs) to automate the classification and archiving of orthodontic diagnostic images. Our AI-based framework enhances workflow efficiency and reduces human error. This study is an initial step toward fully automated orthodontic diagnosis and treatment planning systems, focusing specifically on automating the classification of orthodontic diagnostic records.

METHODS: This study employed a dataset comprising 61,842 images collected from three dental clinics, distributed across 13 categories. A sequential classification approach was developed, starting with a primary model that categorized images into three main groups: extraoral, intraoral, and radiographic. Secondary models were then applied within each group to perform the final classification. The proposed model, enhanced with attention modules, was trained and compared with pre-trained models such as ResNet50 (Microsoft Corporation, Redmond, Washington, United States) and InceptionV3 (Google LLC, Mountain View, California, United States). External validation was performed using 13,729 new samples to assess the AI system's accuracy and generalizability compared with expert assessments.

RESULTS: The deep learning framework achieved an accuracy of 99.24% on the external validation set, demonstrating performance almost on par with human experts. Additionally, the model demonstrated significantly faster processing times compared with manual methods. Gradient-weighted class activation mapping (Grad-CAM) visualizations confirmed that the model effectively focused on clinically relevant features during classification, further supporting its clinical applicability.

CONCLUSION: This study introduces a deep learning framework for automating the classification and archiving of orthodontic diagnostic images. The model achieved high accuracy and demonstrated clinically relevant feature focus through Grad-CAM visualizations. Beyond its accuracy, the framework offers significant improvements in processing speed, making it a viable tool for real-time applications in orthodontics. This approach not only reduces the workload in healthcare settings but also lays the foundation for future automated diagnostic and treatment planning systems in digital orthodontics.
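
The sequential (primary-then-secondary) routing described in the methods reduces to a simple dispatch pattern. Below is a hedged Python sketch of that control flow; the mock models and label names are placeholders, not the study's trained CNNs.

```python
import numpy as np

GROUPS = ["extraoral", "intraoral", "radiographic"]

class MockModel:
    """Stand-in for a trained CNN: predict(images) -> array of labels."""
    def __init__(self, label):
        self.label = label
    def predict(self, images):
        return np.array([self.label] * len(images))

def classify_sequentially(image, primary_model, secondary_models):
    """Two-stage routing: the primary model picks the broad group,
    then that group's secondary model assigns the final category."""
    group = GROUPS[int(primary_model.predict(image[np.newaxis])[0])]
    final_label = secondary_models[group].predict(image[np.newaxis])[0]
    return group, final_label

primary = MockModel(2)  # toy primary model that always says "radiographic"
secondary = {g: MockModel(f"{g}-class-0") for g in GROUPS}
print(classify_sequentially(np.zeros((128, 128)), primary, secondary))
```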

PMID:39877794 | PMC:PMC11774544 | DOI:10.7759/cureus.76530

Categories: Literature Watch

AI-guided virtual biopsy: Automated differentiation of cerebral gliomas from other benign and malignant MRI findings using deep learning

Wed, 2025-01-29 06:00

Neurooncol Adv. 2025 Jan 20;7(1):vdae225. doi: 10.1093/noajnl/vdae225. eCollection 2025 Jan-Dec.

ABSTRACT

BACKGROUND: This study aimed to develop an automated algorithm to noninvasively distinguish gliomas from other intracranial pathologies, preventing misdiagnosis and ensuring accurate analysis before further glioma assessment.

METHODS: A cohort of 1280 patients with a variety of intracranial pathologies was included. It comprised 218 gliomas (mean age 54.76 ± 13.74 years; 136 males, 82 females), 514 patients with brain metastases (mean age 59.28 ± 12.36 years; 228 males, 286 females), 366 patients with inflammatory lesions (mean age 41.94 ± 14.57 years; 142 males, 224 females), 99 intracerebral hemorrhages (mean age 62.68 ± 16.64 years; 56 males, 43 females), and 83 meningiomas (mean age 63.99 ± 13.31 years; 25 males, 58 females). Radiomic features were extracted from fluid-attenuated inversion recovery (FLAIR), contrast-enhanced, and noncontrast T1-weighted MR sequences. Subcohorts, with 80% for training and 20% for testing, were established for model validation. Machine learning models, primarily XGBoost, were trained to distinguish gliomas from other pathologies.
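
As a rough illustration of the modeling step (an 80/20 split with an XGBoost classifier evaluated by AUC), here is a minimal sketch using synthetic stand-ins for the radiomic feature matrix; it is not the study's pipeline, and the feature extraction from FLAIR/T1 sequences is omitted.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Placeholder data: rows are patients, columns are radiomic features;
# y = 1 for glioma, 0 for other intracranial pathology.
rng = np.random.default_rng(0)
X = rng.normal(size=(1280, 100))
y = rng.integers(0, 2, size=1280)

# 80/20 split mirroring the paper's training/testing subcohorts.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```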

RESULTS: The study demonstrated promising results in distinguishing gliomas from various intracranial pathologies. The best-performing model consistently achieved high area-under-the-curve (AUC) values, indicating strong discriminatory power across multiple distinctions: gliomas versus metastases (AUC = 0.96), gliomas versus inflammatory lesions (AUC = 1.0), gliomas versus intracerebral hemorrhages (AUC = 0.99), and gliomas versus meningiomas (AUC = 0.98). When distinguishing gliomas from all other entities combined, the model achieved an AUC of 0.94.

CONCLUSIONS: The study presents an automated approach that effectively distinguishes gliomas from common intracranial pathologies. This can serve as a quality control upstream to further artificial-intelligence-based genetic analysis of cerebral gliomas.

PMID:39877747 | PMC:PMC11773384 | DOI:10.1093/noajnl/vdae225

Categories: Literature Watch

Deep learning driven silicon wafer defect segmentation and classification

Wed, 2025-01-29 06:00

MethodsX. 2025 Jan 6;14:103158. doi: 10.1016/j.mex.2025.103158. eCollection 2025 Jun.

ABSTRACT

Integrated circuits are built from transistors embedded on silicon wafers; these wafers are difficult to process and hence prone to defects. Detecting these defects manually is a time-consuming and labour-intensive task, so automation is necessary. A deep learning approach is well suited here, as it can generalize across defect types if trained properly, and it offers a solution for automatic segmentation and classification of defects. The segmentation model in this study achieved a Mean Absolute Error (MAE) of 0.0036, a Root Mean Squared Error (RMSE) of 0.0576, a Dice Index (DSC) of 0.7731, and an Intersection over Union (IoU) of 0.6590. The classification model achieved 0.9705 accuracy, 0.9678 precision, 0.9705 recall, and a 0.9676 F1 score. To make the process more interactive, an LLM with Q&A capabilities was integrated to answer questions about wafer defects. This approach helps automate the detection process, improving the quality of the end product.
• Successful and precise defect segmentation and classification using deep learning was achieved.
• High-intensity regions after post-processing.
• An LLM offering defect analysis and guidance was streamlined.
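
The four reported metrics (MAE, RMSE, Dice, IoU) are standard and easy to reproduce. The following NumPy sketch shows one common way to compute them from a predicted probability map and a ground-truth mask; thresholds and conventions are assumptions, not the paper's exact evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, truth, eps=1e-8):
    """MAE/RMSE on the probability map, Dice and IoU on binarized masks."""
    pred = pred.astype(float)
    truth = truth.astype(float)
    mae = np.abs(pred - truth).mean()
    rmse = np.sqrt(((pred - truth) ** 2).mean())
    p, t = pred > 0.5, truth > 0.5              # assumed 0.5 threshold
    inter = np.logical_and(p, t).sum()
    dice = 2 * inter / (p.sum() + t.sum() + eps)
    iou = inter / (np.logical_or(p, t).sum() + eps)
    return dict(MAE=mae, RMSE=rmse, Dice=dice, IoU=iou)

pred = np.random.rand(64, 64)                   # toy probability map
truth = np.random.rand(64, 64) > 0.5            # toy ground-truth mask
print(segmentation_metrics(pred, truth))
```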

PMID:39877475 | PMC:PMC11773255 | DOI:10.1016/j.mex.2025.103158

Categories: Literature Watch

EyeLiner: A Deep Learning Pipeline for Longitudinal Image Registration Using Fundus Landmarks

Wed, 2025-01-29 06:00

Ophthalmol Sci. 2024 Nov 28;5(2):100664. doi: 10.1016/j.xops.2024.100664. eCollection 2025 Mar-Apr.

ABSTRACT

OBJECTIVE: Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, "EyeLiner," for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.

DESIGN: EyeLiner registers a "moving" image to a "fixed" image using a DL-based keypoint matching algorithm.

PARTICIPANTS: We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).

METHODS: Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.

MAIN OUTCOME MEASURES: We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.
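
The mean distance (MD) outcome as defined here is a simple quantity: the average Euclidean distance between corresponding annotated keypoints. A minimal NumPy sketch, with toy coordinates standing in for real annotations:

```python
import numpy as np

def mean_keypoint_distance(fixed_pts, moving_pts):
    """Mean Euclidean distance (pixels) between corresponding keypoints
    on the fixed and the (registered) moving image; shape (n, 2) each."""
    return float(np.linalg.norm(fixed_pts - moving_pts, axis=1).mean())

# Toy usage: distances should shrink after registration.
fixed = np.array([[120.0, 88.0], [300.0, 240.0]])
before = np.array([[150.0, 60.0], [330.0, 270.0]])
after = np.array([[121.5, 87.0], [299.0, 242.5]])
print(mean_keypoint_distance(fixed, before),
      mean_keypoint_distance(fixed, after))
```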

RESULTS: EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, from 9.86 to 2.03 pixels for CORIS, and from 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC = 0.76 on FIRE, 0.83 on CORIS, and 0.74 on SIGF).

CONCLUSIONS: Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

PMID:39877463 | PMC:PMC11773051 | DOI:10.1016/j.xops.2024.100664

Categories: Literature Watch

Detecting autism in children through drawing characteristics using the visual-motor integration test

Wed, 2025-01-29 06:00

Health Inf Sci Syst. 2025 Jan 26;13(1):18. doi: 10.1007/s13755-025-00338-6. eCollection 2025 Dec.

ABSTRACT

This study introduces a novel classification method to distinguish children with autism from typically developing children. We recruited 50 school-age children in Taiwan (44 boys and 6 girls aged 6 to 12 years) and asked them to draw patterns from a visual-motor integration test to collect data and train deep learning classification models. Ensemble learning was adopted, significantly improving the classification accuracy to 0.934. Moreover, we identified the five patterns that most effectively differentiate the drawing performance of children with and without ASD. From these five patterns, we found that children with ASD had difficulty producing patterns that involve circles and spatial relationships. These results align with previous findings on the visual-motor perception of individuals with autism. Our results offer a potential cross-cultural tool to detect autism, which can further promote early detection of and intervention in autism.
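
The abstract does not specify the ensembling scheme; soft voting (averaging class probabilities across models) is one common choice, sketched below with hypothetical model outputs for four drawings.

```python
import numpy as np

def soft_vote(prob_list):
    """Average class-probability outputs of several trained models and
    take the argmax; the paper's exact ensembling strategy may differ."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Three hypothetical models' probabilities for 4 drawings, 2 classes (TD / ASD).
p1 = np.array([[0.8, 0.2], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7], [0.9, 0.1], [0.4, 0.6]])
p3 = np.array([[0.7, 0.3], [0.5, 0.5], [0.6, 0.4], [0.3, 0.7]])
print(soft_vote([p1, p2, p3]))  # [0 1 0 1]
```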

PMID:39877430 | PMC:PMC11769875 | DOI:10.1007/s13755-025-00338-6

Categories: Literature Watch

TrimNN: Characterizing cellular community motifs for studying multicellular topological organization in complex tissues

Wed, 2025-01-29 06:00

Res Sq [Preprint]. 2025 Jan 17:rs.3.rs-5584635. doi: 10.21203/rs.3.rs-5584635/v1.

ABSTRACT

The spatial arrangement of cells plays a pivotal role in shaping tissue functions in various biological systems and diseased microenvironments. However, the topological rules by which different cell types coordinate into tissue spatial patterns remain under-investigated. Here, we introduce the Triangulation cellular community motif Neural Network (TrimNN), a bottom-up approach to estimate the prevalence of sizeable, conserved cell organization patterns as Cellular Community (CC) motifs in spatial transcriptomics and proteomics. Unlike classical top-down analyses that cluster cell-type composition, TrimNN differentiates cellular niches as countable topological blocks in recurring interconnections of various types, representing multicellular neighborhoods with interpretability and generalizability. This graph-based deep learning framework adopts an inductive bias in CCs and uses a semi-divide-and-conquer approach in the triangulated space. In spatial omics studies, CC motifs of various sizes identified by TrimNN robustly reveal relations between spatially distributed cell-type patterns and diverse phenotypical biological functions.
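
The triangulation step that underlies this approach is easy to illustrate: Delaunay-triangulate cell coordinates, then census the cell-type composition of each triangle as the smallest neighborhood unit. The sketch below is a toy version with SciPy and synthetic data, not TrimNN's neural motif estimation.

```python
from collections import Counter

import numpy as np
from scipy.spatial import Delaunay

# Toy spatial-omics slice: 2-D cell coordinates plus a cell-type label each.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))
cell_types = rng.choice(["neuron", "microglia", "astrocyte"], size=200)

# Triangulate the tissue; each simplex is a 3-cell neighborhood, the
# smallest building block from which larger CC motifs would be assembled.
tri = Delaunay(coords)
motifs = Counter(
    tuple(sorted(cell_types[v] for v in simplex)) for simplex in tri.simplices
)
for motif, count in motifs.most_common(5):
    print(motif, count)
```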

PMID:39877090 | PMC:PMC11774463 | DOI:10.21203/rs.3.rs-5584635/v1

Categories: Literature Watch

LEHP-DETR: A model with backbone improved and hybrid encoding innovated for flax capsule detection

Wed, 2025-01-29 06:00

iScience. 2024 Dec 9;28(1):111558. doi: 10.1016/j.isci.2024.111558. eCollection 2025 Jan 17.

ABSTRACT

Flax, a functional crop rich in essential fatty acids and nutrients, is important in nutrition and industrial applications. However, the current process of flax seed detection relies mainly on manual operation, which is not only inefficient but also prone to error. The development of computer vision and deep learning techniques offers a new way to solve this problem. In this study, based on RT-DETR, we introduced the RepNCSPELAN4, ADown, Context Aggregation, and TFE modules, designed the HWD-ADown, HiLo-AIFI, and DSSFF modules, and propose an improved model called LEHP-DETR. Experimental results show that LEHP-DETR achieves significant performance improvements on the flax dataset and comprehensively outperforms the comparison models. Compared to the base model, LEHP-DETR reduces the number of parameters by 67.3%, the model size by 66.3%, and the FLOPs by 37.6%, while the average detection accuracies mAP50 and mAP50:95 increased by 2.6% and 3.5%, respectively.

PMID:39877068 | PMC:PMC11773470 | DOI:10.1016/j.isci.2024.111558

Categories: Literature Watch

Using machine and deep learning to predict short-term complications following trigger digit release surgery

Wed, 2025-01-29 06:00

J Hand Microsurg. 2024 Oct 28;17(1):100171. doi: 10.1016/j.jham.2024.100171. eCollection 2025 Jan.

ABSTRACT

BACKGROUND: Trigger finger is a common disorder of the hand characterized by pain and locking of the digits during flexion or extension. In cases refractory to nonoperative management, surgical release of the A1 pulley can be performed. This study evaluates the ability of machine learning (ML) techniques to predict short-term complications following trigger digit release surgery.

METHODS: A retrospective study was conducted using data for trigger digit release from the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP), years 2005-2020. Outcomes of interest were 30-day complications and 30-day return to the operating room. Three ML algorithms were evaluated: a Random Forest (RF), Elastic-Net Regression (ENet), and an Extreme Gradient Boosted Tree (XGBoost), along with a deep learning Neural Network (NN). Feature importance analysis was performed in the highest-performing model for each outcome to identify the predictors with the greatest contributions.
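
The model-comparison protocol (cross-validated AUC per algorithm, then feature importances from the best model) can be sketched with scikit-learn and XGBoost. Everything below is synthetic and illustrative; it is not the NSQIP data or the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1209, 20))    # placeholder preoperative features
y = rng.integers(0, 2, size=1209)  # 30-day complication yes/no

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "ENet": LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.5, C=1.0, max_iter=5000),
    "XGBoost": XGBClassifier(n_estimators=300, eval_metric="logloss"),
}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} ± {auc.std():.2f}")

# Feature importance from the fitted tree ensemble, as in the paper's analysis.
rf = models["RF"].fit(X, y)
print("Top feature index:", int(np.argmax(rf.feature_importances_)))
```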

RESULTS: We included a total of 1209 cases of trigger digit release. The best algorithm for predicting wound complications was the RF, with an AUC of 0.64 ± 0.04. The XGBoost algorithm was best performing for medical complications (AUC: 0.70 ± 0.06) and reoperations (AUC: 0.60 ± 0.07). All three models had performance significantly above the AUC benchmark of 0.50 ± 0.00. On our feature importance analysis, age was distinctively the highest contributing predictor of wound complications.

CONCLUSIONS: Machine learning can be successfully used for risk stratification in surgical patients. Moving forwards, it is imperative for hand surgeons to continue evaluating applications of ML in the field.

PMID:39876951 | PMC:PMC11770221 | DOI:10.1016/j.jham.2024.100171

Categories: Literature Watch

Leveraging synthetic data to improve regional sea level predictions

Tue, 2025-01-28 06:00

Sci Rep. 2025 Jan 28;15(1):3546. doi: 10.1038/s41598-025-88078-1.

ABSTRACT

The rapid increase in sea levels driven by climate change presents serious risks to coastal communities around the globe. Traditional prediction models frequently concentrate on developed regions with extensive tide gauge networks, leaving a significant gap in data and forecasts for developing countries where tide gauges are sparse. This study presents a novel deep learning approach that combines TimesGAN with ConvLSTM to enhance regional sea level predictions using the more widely available satellite altimetry data. By generating synthetic training data with TimesGAN, we can significantly improve the predictive accuracy of the ConvLSTM model. Our method is tested across three developed regions (Shanghai, New York, and Lisbon) and three developing regions (Liberia, Gabon, and Somalia). The results reveal that integrating TimesGAN reduces the average mean squared error of the ConvLSTM prediction by approximately 66.1%, 76.6%, 64.5%, 78.2%, 81.7%, and 85.1% for Shanghai, New York, Lisbon, Liberia, Gabon, and Somalia, respectively. This underscores the effectiveness of synthetic data in enhancing sea level prediction accuracy across all regions studied.
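
A ConvLSTM maps a sequence of gridded fields to a spatial prediction. The Keras sketch below is a minimal stand-in for such a regressor; all shapes are assumptions, and the TimesGAN generator (trained separately to produce synthetic frame sequences mixed into the training set) is omitted.

```python
import tensorflow as tf

# Minimal ConvLSTM regressor: a sequence of gridded sea-level anomaly
# maps in, the next map out. Shapes are illustrative, not the paper's.
T, H, W = 12, 32, 32  # 12 monthly altimetry frames of a 32x32 region

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, H, W, 1)),
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=False),
    tf.keras.layers.Conv2D(1, kernel_size=3, padding="same"),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```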

PMID:39875524 | DOI:10.1038/s41598-025-88078-1

Categories: Literature Watch

Using partially shared radiomics features to simultaneously identify isocitrate dehydrogenase mutation status and epilepsy in glioma patients from MRI images

Tue, 2025-01-28 06:00

Sci Rep. 2025 Jan 28;15(1):3591. doi: 10.1038/s41598-025-87778-y.

ABSTRACT

Prediction of isocitrate dehydrogenase (IDH) mutation status and of epilepsy occurrence is important for glioma patients. Although machine learning models have been constructed for both tasks, the correlation between them has not been explored. Our study aimed to exploit this correlation to improve the performance of both the IDH mutation status identification and the epilepsy diagnosis models in patients with grade II-IV glioma. 399 patients were retrospectively enrolled and divided into a training (n = 279) and an independent test (n = 120) cohort. A multi-center dataset (n = 228) from The Cancer Imaging Archive (TCIA) was used as an external test set for IDH mutation status identification. Regions of interest comprising the entire tumor and peritumoral edema were automatically segmented using a pre-trained deep learning model. Radiomic features were extracted from T1-weighted, T2-weighted, post-gadolinium T1-weighted, and T2 fluid-attenuated inversion recovery images. We proposed an iterative approach derived from LASSO to select features shared by the two tasks and features specific to each task, before using them to construct the final models. Receiver operating characteristic (ROC) analysis was employed to evaluate the models. The IDH mutation identification model achieved area under the ROC curve (AUC) values of 0.948, 0.946, and 0.860 on the training, internal test, and external test cohorts, respectively. The epilepsy diagnosis model achieved AUCs of 0.924 and 0.880 on the training and internal test cohorts, respectively. The proposed models identify IDH status and epilepsy with fewer features, giving them better interpretability and a lower risk of overfitting. This not only improves their chance of application in clinical settings, but also provides a new scheme for studying multiple correlated clinical tasks.
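
The shared/task-specific feature split can be illustrated with plain L1-penalized fits: select features per task, intersect the supports for the shared set, and keep the differences as task-specific sets. This is only a simplified stand-in for the paper's iterative LASSO-derived approach, on synthetic data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(279, 400))       # placeholder radiomic features
y_idh = rng.integers(0, 2, size=279)  # IDH mutation status
y_epi = rng.integers(0, 2, size=279)  # epilepsy occurrence

def lasso_support(X, y, C=0.1):
    """Indices of features kept by an L1-penalized logistic fit."""
    m = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    return set(np.flatnonzero(m.coef_[0]))

s_idh, s_epi = lasso_support(X, y_idh), lasso_support(X, y_epi)
shared = s_idh & s_epi                 # features useful for both tasks
idh_only, epi_only = s_idh - shared, s_epi - shared
print(len(shared), len(idh_only), len(epi_only))
```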

PMID:39875517 | DOI:10.1038/s41598-025-87778-y

Categories: Literature Watch

hERGAT: predicting hERG blockers using graph attention mechanism through atom- and molecule-level interaction analyses

Tue, 2025-01-28 06:00

J Cheminform. 2025 Jan 28;17(1):11. doi: 10.1186/s13321-025-00957-x.

ABSTRACT

The human ether-a-go-go-related gene (hERG) channel plays a critical role in the electrical activity of the heart, and its blockers can cause serious cardiotoxic effects. Thus, screening for hERG channel blockers is a crucial step in the drug development process. Many in silico models have been developed to predict hERG blockers, which can efficiently save time and resources. However, previous methods have found it hard to achieve high performance and to interpret the predictive results. To overcome these challenges, we propose hERGAT, a graph neural network model with an attention mechanism that considers compound interactions at the atomic and molecular levels. In the atom-level interaction analysis, we applied a graph attention mechanism (GAT) that integrates information from neighboring nodes and their extended connections. hERGAT employs a gated recurrent unit (GRU) with the GAT to learn information between more distant atoms. To confirm this, we performed clustering analysis and visualized a correlation heatmap, verifying that interactions between distant atoms were considered during the training process. In the molecule-level interaction analysis, the attention mechanism enables the target node to focus on the most relevant information, highlighting the molecular substructures that play crucial roles in predicting hERG blockers. Through a literature review, we confirmed that the highlighted substructures have a significant role in determining the chemical and biological characteristics related to hERG activity. Furthermore, we integrated physicochemical properties into our hERGAT model to improve its performance. Our model achieved an area under the receiver operating characteristic curve of 0.907 and an area under the precision-recall curve of 0.904, demonstrating its effectiveness in modeling hERG activity and offering a reliable framework for optimizing drug safety in early development stages.

Scientific contribution: hERGAT is a deep learning model for predicting hERG blockers that combines a GAT and a GRU, enabling it to capture complex interactions at the atomic and molecular levels. We improve the model's interpretability by analyzing the highlighted molecular substructures, providing valuable insights into their roles in determining hERG activity. The model achieves high predictive performance, confirming its potential as a preliminary tool for early cardiotoxicity assessment and enhancing the reliability of the results.
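
One way to picture the GAT-plus-GRU combination is a layer in which attention-weighted neighbor messages feed a GRU cell that updates atom states, so information spreads to more distant atoms over repeated steps. The PyTorch sketch below is a hedged toy of that idea, not the authors' architecture; adjacency, dimensions, and the single-head attention are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATGRULayer(nn.Module):
    """Sketch: GAT-style attention over bonded neighbors, with a GRUCell
    updating atom states so repeated steps reach more distant atoms."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (n, n) scores
        e = e.masked_fill(adj == 0, float("-inf"))      # bonded atoms only
        msg = F.softmax(e, dim=1) @ h                   # aggregate neighbors
        return self.gru(msg, h)                         # update atom states

h = torch.randn(9, 32)                  # 9 atoms, 32-dim features
adj = (torch.rand(9, 9) > 0.6).float()  # toy bond/adjacency matrix
adj.fill_diagonal_(1)                   # self-loops keep softmax defined
layer = GATGRULayer(32)
for _ in range(3):                      # 3 propagation steps
    h = layer(h, adj)
print(h.shape)  # torch.Size([9, 32])
```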

PMID:39875959 | DOI:10.1186/s13321-025-00957-x

Categories: Literature Watch

Impacted lower third molar classification and difficulty index assessment: comparisons among dental students, general practitioners and deep learning model assistance

Tue, 2025-01-28 06:00

BMC Oral Health. 2025 Jan 28;25(1):152. doi: 10.1186/s12903-025-05425-4.

ABSTRACT

BACKGROUND: Assessing the difficulty of impacted lower third molar (ILTM) surgical extraction is crucial for predicting postoperative complications and estimating procedure duration. The aim of this study was to evaluate the effectiveness of a convolutional neural network (CNN) in determining the angulation, position, classification and difficulty index (DI) of ILTM. Additionally, we compared these parameters and the time required for interpretation among deep learning (DL) models, sixth-year dental students (DSs), and general dental practitioners (GPs) with and without CNN assistance.

MATERIALS AND METHODS: The dataset included cropped panoramic radiographs of 1200 ILTMs. The parameters examined were ILTM angulation, class, and position. A subset of the radiographs was randomly set aside as the test dataset, while the remaining images were used for training and validation. Data augmentation techniques were applied. Another set of radiographs was used to compare the accuracy of human experts against the best-performing CNN. This dataset was also given to the DSs and GPs, who were instructed to classify the ILTM parameters both with and without the aid of the best-performing CNN model. The results, as well as the Pederson DI and the time taken for both groups with and without CNN assistance, were statistically analyzed.
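
The accuracy and kappa comparisons reported below reduce to standard scikit-learn metrics over rater labels. A minimal sketch with hypothetical angulation labels (abbreviated label names are placeholders):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-image angulation labels on the comparison set:
# ground truth vs. one rater working without and then with CNN assistance.
truth         = ["mesio", "horiz", "verti", "disto", "mesio", "horiz"]
rater_alone   = ["mesio", "verti", "verti", "disto", "horiz", "horiz"]
rater_with_ai = ["mesio", "horiz", "verti", "disto", "mesio", "horiz"]

for name, preds in [("alone", rater_alone), ("with CNN", rater_with_ai)]:
    print(name,
          "accuracy:", round(accuracy_score(truth, preds), 2),
          "kappa:", round(cohen_kappa_score(truth, preds), 2))
```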

RESULTS: All the selected CNN models successfully classified ILTM angulation, class, and position. Within the DS and GP groups, the accuracy and kappa scores were significantly greater when CNN assistance was used. Among the groups, performance tests without CNN assistance revealed no significant differences in any category. However, compared with DSs, GPs took significantly less time for the class and total time, a trend that persisted when CNN assistance was used. With the CNN, the GPs achieved significantly higher accuracy and kappa scores for class classification than the DSs did (p = 0.035 and 0.010). Conversely, the DS group, with the CNN, exhibited higher accuracy and kappa scores for position classification than did the GP group (p < 0.001).

CONCLUSION: The CNN achieved accuracies ranging from 87 to 96% for ILTM classification. With the assistance of the CNN, both DSs and GPs exhibited significantly higher accuracy in ILTM classification. Additionally, both with and without CNN assistance, GPs took significantly less time than DSs for class assessment and overall.

PMID:39875882 | DOI:10.1186/s12903-025-05425-4

Categories: Literature Watch

Virtual biopsy for non-invasive identification of follicular lymphoma histologic transformation using radiomics-based imaging biomarker from PET/CT

Tue, 2025-01-28 06:00

BMC Med. 2025 Jan 29;23(1):49. doi: 10.1186/s12916-025-03893-7.

ABSTRACT

BACKGROUND: This study aimed to construct a radiomics-based imaging biomarker for the non-invasive identification of transformed follicular lymphoma (t-FL) using PET/CT images.

METHODS: A total of 784 follicular lymphoma (FL), diffuse large B-cell lymphoma, and t-FL patients from 5 independent medical centers were included. The unsupervised EMFusion method was applied to fuse PET and CT images. Deep radiomic features were extracted from the fusion images using a deep learning model (ResNet18). These features, along with handcrafted radiomic features, were utilized to construct a radiomic signature (R-signature) using automatic machine learning in the training and internal validation cohorts. The R-signature was then tested for its predictive ability in the t-FL test cohort. Subsequently, this R-signature was combined with clinical parameters and SUVmax to develop a t-FL scoring system.
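
Extracting deep features with a ResNet18 backbone is a standard step that can be sketched with torchvision; the fused-image preprocessing and the AutoML signature construction are omitted, and the random tensor below merely stands in for fused PET/CT crops.

```python
import torch
from torchvision import models

# Deep radiomic features from fused PET/CT images via a ResNet18 backbone
# (weights=None here; pretrained weights could be loaded instead).
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep 512-d features
backbone.eval()

fused = torch.randn(4, 3, 224, 224)  # 4 fused image crops, 3-channel
with torch.no_grad():
    deep_features = backbone(fused)  # (4, 512) deep feature descriptors
print(deep_features.shape)
```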

RESULTS: The R-signature demonstrated high accuracy, with mean AUC values as 0.994 in the training cohort and 0.976 in the internal validation cohort. In the t-FL test cohort, the R-signature achieved an AUC of 0.749, with an accuracy of 75.2%, sensitivity of 68.0%, and specificity of 77.5%. Furthermore, the t-FL scoring system, incorporating the R-signature along with clinical parameters (age, LDH, and ECOG PS) and SUVmax, achieved an AUC of 0.820, facilitating the stratification of patients into low, medium, and high transformation risk groups.

CONCLUSIONS: This study offers a promising approach for identifying t-FL non-invasively by radiomics analysis on PET/CT images. The developed t-FL scoring system provides a valuable tool for clinical decision-making, potentially improving patient management and outcomes.

PMID:39875864 | DOI:10.1186/s12916-025-03893-7

Categories: Literature Watch

Artificial intelligence methods applied to longitudinal data from electronic health records for prediction of cancer: a scoping review

Tue, 2025-01-28 06:00

BMC Med Res Methodol. 2025 Jan 28;25(1):24. doi: 10.1186/s12874-025-02473-w.

ABSTRACT

BACKGROUND: Early detection and diagnosis of cancer are vital to improving outcomes for patients. Artificial intelligence (AI) models have shown promise in the early detection and diagnosis of cancer, but there is limited evidence on methods that fully exploit the longitudinal data stored within electronic health records (EHRs). This review aims to summarise methods currently utilised for prediction of cancer from longitudinal data and provides recommendations on how such models should be developed.

METHODS: The review was conducted following PRISMA-ScR guidance. Six databases (MEDLINE, EMBASE, Web of Science, IEEE Xplore, PubMed and SCOPUS) were searched for relevant records published before 2/2/2024. Search terms related to the concepts "artificial intelligence", "prediction", "health records", "longitudinal", and "cancer". Data were extracted for several areas of each article: (1) publication details, (2) study characteristics, (3) input data, (4) model characteristics, (5) reproducibility, and (6) quality assessment using the PROBAST tool. Models were evaluated against a framework for terminology relating to reporting of cancer detection and risk prediction models.

RESULTS: Of 653 records screened, 33 were included in the review; 10 predicted risk of cancer, 18 performed either cancer detection or early detection, 4 predicted recurrence, and 1 predicted metastasis. The most common cancers predicted in the studies were colorectal (n = 9) and pancreatic cancer (n = 9). Sixteen studies used feature engineering to represent temporal data, with the most common features representing trends. Eighteen used deep learning models that take a direct sequential input, most commonly recurrent neural networks, but also including convolutional neural networks and transformers. Prediction windows and lead times varied greatly between studies, even for models predicting the same cancer. A high risk of bias was found in 90% of the studies, often introduced by inappropriate study design (n = 26) and sample size (n = 26).

CONCLUSION: This review highlights the breadth of approaches to cancer prediction from longitudinal data. We identify areas where reporting of methods could be improved, particularly regarding where in a patient's trajectory the model is applied. The review shows opportunities for further work, including comparison of these approaches and their applications in other cancers.

PMID:39875808 | DOI:10.1186/s12874-025-02473-w

Categories: Literature Watch

Whole slide image based deep learning refines prognosis and therapeutic response evaluation in lung adenocarcinoma

Tue, 2025-01-28 06:00

NPJ Digit Med. 2025 Jan 29;8(1):69. doi: 10.1038/s41746-025-01470-z.

ABSTRACT

Existing prognostic models are useful for estimating the prognosis of lung adenocarcinoma patients, but there remains room for improvement. In the current study, we developed a deep learning model based on histopathological images to predict the recurrence risk of lung adenocarcinoma patients. The efficiency of the model was then evaluated in independent multicenter cohorts. The model-defined high- and low-risk groups successfully stratified the prognosis of the entire cohort. Moreover, multivariable Cox analysis identified the model-defined risk groups as an independent predictor of disease-free survival. Importantly, combining TNM stage with the established model helped to distinguish subgroups of patients with high-risk stage II and stage III disease who are highly likely to benefit from adjuvant chemotherapy. Overall, our study highlights the significant value of the constructed model as a complementary biomarker for survival stratification and adjuvant therapy selection for lung adenocarcinoma patients after resection.
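
The multivariable Cox step (testing whether the model-defined risk group predicts disease-free survival independently of stage) can be reproduced with the lifelines package. The table below is entirely fabricated toy data for illustration.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy disease-free-survival table: model-defined risk group (1 = high),
# TNM stage, follow-up in months, and recurrence indicator.
df = pd.DataFrame({
    "risk_group": [1, 0, 1, 0, 1, 0, 1, 0],
    "stage":      [2, 1, 3, 2, 3, 1, 2, 2],
    "months":     [14, 60, 9, 48, 20, 55, 30, 50],
    "recurred":   [1, 0, 1, 1, 0, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
cph.print_summary()  # hazard ratio for risk_group adjusted for stage
```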

PMID:39875799 | DOI:10.1038/s41746-025-01470-z

Categories: Literature Watch

MHNet: Multi-view High-Order Network for Diagnosing Neurodevelopmental Disorders Using Resting-State fMRI

Tue, 2025-01-28 06:00

J Imaging Inform Med. 2025 Jan 28. doi: 10.1007/s10278-025-01399-5. Online ahead of print.

ABSTRACT

Deep learning models have shown promise in diagnosing neurodevelopmental disorders (NDD) like ASD and ADHD. However, many models either use graph neural networks (GNN) to construct single-level brain functional networks (BFNs) or employ spatial convolution filtering for local information extraction from rs-fMRI data, often neglecting high-order features crucial for NDD classification. We introduce a Multi-view High-order Network (MHNet) to capture hierarchical and high-order features from multi-view BFNs derived from rs-fMRI data for NDD prediction. MHNet has two branches: the Euclidean Space Features Extraction (ESFE) module and the Non-Euclidean Space Features Extraction (Non-ESFE) module, followed by a Feature Fusion-based Classification (FFC) module for NDD identification. ESFE includes a Functional Connectivity Generation (FCG) module and a High-order Convolutional Neural Network (HCNN) module to extract local and high-order features from BFNs in Euclidean space. Non-ESFE comprises a Generic Internet-like Brain Hierarchical Network Generation (G-IBHN-G) module and a High-order Graph Neural Network (HGNN) module to capture topological and high-order features in non-Euclidean space. Experiments on three public datasets show that MHNet outperforms state-of-the-art methods using both AAL1 and Brainnetome Atlas templates. Extensive ablation studies confirm the superiority of MHNet and the effectiveness of using multi-view fMRI information and high-order features. Our study also offers atlas options for constructing more sophisticated hierarchical networks and explains the association between key brain regions and NDD. MHNet leverages multi-view feature learning from both Euclidean and non-Euclidean spaces, incorporating high-order information from BFNs to enhance NDD classification performance.

PMID:39875742 | DOI:10.1007/s10278-025-01399-5

Categories: Literature Watch

End-to-End Deep Learning Prediction of Neoadjuvant Chemotherapy Response in Osteosarcoma Patients Using Routine MRI

Tue, 2025-01-28 06:00

J Imaging Inform Med. 2025 Jan 28. doi: 10.1007/s10278-025-01424-7. Online ahead of print.

ABSTRACT

This study aims to develop an end-to-end deep learning (DL) model to predict neoadjuvant chemotherapy (NACT) response in osteosarcoma (OS) patients using routine magnetic resonance imaging (MRI). We retrospectively analyzed data from 112 patients with histologically confirmed OS who underwent NACT prior to surgery. Multi-sequence MRI data (including T2-weighted and contrast-enhanced T1-weighted images) and physician annotations were utilized to construct an end-to-end DL model. The model integrates ResUNet for automatic tumor segmentation and 3D-ResNet-18 for predicting NACT efficacy. Model performance was assessed using area under the curve (AUC) and accuracy (ACC). Among the 112 patients, 51 exhibited a good NACT response, while 61 showed a poor response. No statistically significant differences were found in age, sex, alkaline phosphatase levels, tumor size, or location between these groups (P > 0.05). The ResUNet model achieved robust performance, with an average Dice coefficient of 0.579 and average Intersection over Union (IoU) of 0.463. The T2-weighted 3D-ResNet-18 classification model demonstrated superior performance in the test set with an AUC of 0.902 (95% CI: 0.766-1), ACC of 0.783, sensitivity of 0.909, specificity of 0.667, and F1 score of 0.800. Our proposed end-to-end DL model can effectively predict NACT response in OS patients using routine MRI, offering a potential tool for clinical decision-making.

PMID:39875741 | DOI:10.1007/s10278-025-01424-7

Categories: Literature Watch

Enhancing quantitative coronary angiography (QCA) with advanced artificial intelligence: comparison with manual QCA and visual estimation

Tue, 2025-01-28 06:00

Int J Cardiovasc Imaging. 2025 Jan 29. doi: 10.1007/s10554-025-03342-9. Online ahead of print.

ABSTRACT

Artificial intelligence-based quantitative coronary angiography (AI-QCA) was introduced to address the limitations of manual QCA in reproducibility and in the correction process. The present study aimed to assess the performance of an updated AI-QCA solution (MPXA-2000) in lesion detection and quantification using manual QCA as the reference standard, and to demonstrate its superiority over visual estimation. This multi-center retrospective study analyzed 1,076 coronary angiography images obtained from 420 patients, comparing AI-QCA and visual estimation against manual QCA as the reference standard. A lesion was classified as 'detected' when the minimum lumen diameter (MLD) identified by manual QCA fell within the boundaries of the lesion delineated by AI-QCA or visual estimation. Detected lesions were evaluated in terms of diameter stenosis (DS), MLD, and lesion length (LL). AI-QCA accurately detected lesions with a sensitivity of 93% (1705/1828) and showed strong correlations with manual QCA for DS, MLD, and LL (R² = 0.65, 0.83 and 0.71, respectively). In views targeting the major vessels, the proportion of lesions undetected by AI-QCA was less than 4% (56/1492). AI-QCA also demonstrated high sensitivity (> 92%) for lesions in the side branches. Compared to visual estimation, AI-QCA showed significantly better lesion detection (93% vs. 69%, p < 0.001) and a higher probability of detecting all lesions in images with multiple lesions (86% vs. 33%, p < 0.001). The updated AI-QCA demonstrated robust performance in lesion detection and quantification without operator intervention, enabling reproducible vessel analysis. The automated workflow of AI-QCA has the potential to optimize angiography-guided interventions by providing quantitative metrics.
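
The 'detected' criterion defined in the abstract is an interval-containment test, and sensitivity follows by counting reference lesions matched by at least one AI-delineated segment. A minimal Python sketch with made-up centerline positions:

```python
def lesion_detected(mld_position, lesion_start, lesion_end):
    """A lesion counts as 'detected' when the manual-QCA minimum lumen
    diameter (MLD) location falls inside the lesion segment delineated
    by AI-QCA (positions along the vessel centerline; illustrative)."""
    return lesion_start <= mld_position <= lesion_end

# Toy sensitivity computation over reference lesions.
reference_mlds = [12.4, 33.0, 57.8]                  # manual QCA MLD positions
ai_lesions = [(10.0, 15.5), (30.2, 36.9), (70, 75)]  # AI-QCA lesion bounds
hits = sum(
    any(lesion_detected(m, s, e) for s, e in ai_lesions)
    for m in reference_mlds
)
print(f"sensitivity: {hits}/{len(reference_mlds)}")  # sensitivity: 2/3
```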

PMID:39875702 | DOI:10.1007/s10554-025-03342-9

Categories: Literature Watch
