Deep learning

Application of MRI-based tumor heterogeneity analysis for identification and pathologic staging of breast phyllodes tumors

Thu, 2025-01-09 06:00

Magn Reson Imaging. 2025 Jan 7:110325. doi: 10.1016/j.mri.2025.110325. Online ahead of print.

ABSTRACT

OBJECTIVE: To explore the application value of MRI-based radiomics and deep learning models in the identification and classification of breast phyllodes tumors.

METHODS: Seventy-seven patients diagnosed with breast phyllodes tumors or fibroadenomas by pathological examination were retrospectively analyzed, and traditional radiomics features, subregion radiomics features, and deep learning features were extracted from MRI images. Features were screened and modeled using the variance-selection method, statistical tests, random forest importance ranking, Spearman correlation analysis, and the least absolute shrinkage and selection operator (LASSO). The efficacy of each model was assessed using the receiver operating characteristic (ROC) curve, differences in AUC values between models were assessed with the DeLong test, clinical benefit was assessed with decision curve analysis (DCA), and predictive accuracy was assessed with calibration curves.
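
As a rough illustration of this kind of feature-screening and ROC workflow, the sketch below combines variance filtering, LASSO-based selection, and AUC evaluation in scikit-learn; the synthetic feature matrix, thresholds, and final classifier are placeholder assumptions, not the study's actual pipeline.

```python
# Minimal sketch of variance filtering + LASSO feature screening + ROC-AUC
# evaluation, assuming a generic feature matrix (synthetic data used here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder for extracted radiomics/deep features (rows: patients, columns: features).
X, y = make_classification(n_samples=77, n_features=200, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: drop near-constant features (variance selection).
vt = VarianceThreshold(threshold=1e-3)
X_tr_v, X_te_v = vt.fit_transform(X_tr), vt.transform(X_te)

# Step 2: standardize, then keep features with non-zero LASSO coefficients.
scaler = StandardScaler().fit(X_tr_v)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_tr_v), y_tr)
kept = np.flatnonzero(lasso.coef_)
if kept.size == 0:  # fall back to the strongest coefficients if LASSO zeroes everything
    kept = np.argsort(np.abs(lasso.coef_))[-10:]

# Step 3: fit a simple classifier on the retained features and report ROC-AUC.
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr_v)[:, kept], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te_v)[:, kept])[:, 1])
print(f"retained {kept.size} features, test ROC-AUC = {auc:.2f}")
```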

RESULTS: Among the models constructed for classification of breast phyllodes tumors, the fusion model (AUC: 0.97) showed the best diagnostic efficacy and the highest clinical benefit. The traditional radiomics model (AUC: 0.81) had better diagnostic efficacy than the subregion radiomics model (AUC: 0.70). By the DeLong test, the differences between the fusion model, the traditional radiomics model, and the subregion radiomics model were statistically significant in the training group. Among the models constructed to distinguish phyllodes tumors from fibroadenomas in the breast, the TDT_CIDL model (AUC: 0.974) had the best predictive efficacy and the highest clinical benefit. By the DeLong test, the TDT_CI combination model differed significantly from the remaining five models in the training group.

CONCLUSION: Traditional radiomics models, subregion radiomics models, and deep learning models based on MRI sequences can help differentiate benign from borderline phyllodes tumors and phyllodes tumors from fibroadenomas, supporting personalized treatment for patients.

PMID:39788394 | DOI:10.1016/j.mri.2025.110325

Categories: Literature Watch

Apnet: Lightweight network for apricot tree disease and pest detection in real-world complex backgrounds

Thu, 2025-01-09 06:00

Plant Methods. 2025 Jan 9;21(1):4. doi: 10.1186/s13007-025-01324-5.

ABSTRACT

Apricot trees, serving as critical agricultural resources, hold a significant role within the agricultural domain. Conventional methods for detecting pests and diseases in these trees are notably labor-intensive. Many conditions affecting apricot trees manifest distinct visual symptoms that are ideally suited for precise identification and classification via deep learning techniques. Despite this, the academic realm currently lacks extensive, realistic datasets and deep learning strategies specifically crafted for apricot trees. This study introduces ATZD01, a publicly accessible dataset encompassing 11 categories of apricot tree pests and diseases, meticulously compiled under genuine field conditions. Furthermore, we introduce an innovative detection algorithm founded on convolutional neural networks, specifically devised for the management of apricot tree pests and diseases. To enhance the accuracy of detection, we have developed a novel object detection framework, APNet, alongside a dedicated module, the Adaptive Thresholding Algorithm (ATA), tailored for the detection of apricot tree afflictions. Experimental evaluations reveal that our proposed algorithm attains an accuracy rate of 87.1% on ATZD01, surpassing the performance of all other leading algorithms tested, thereby affirming the effectiveness of our dataset and model. The code and dataset will be made available at https://github.com/meanlang/ATZD01.

PMID:39789617 | DOI:10.1186/s13007-025-01324-5

Categories: Literature Watch

Prediction of urinary tract infection using machine learning methods: a study for finding the most-informative variables

Thu, 2025-01-09 06:00

BMC Med Inform Decis Mak. 2025 Jan 9;25(1):13. doi: 10.1186/s12911-024-02819-2.

ABSTRACT

BACKGROUND: Urinary tract infection (UTI) is a frequent health-threatening condition. Early, reliable diagnosis of UTI helps to prevent the misuse or overuse of antibiotics and hence prevent antibiotic resistance. The gold standard for UTI diagnosis is urine culture, which is a time-consuming and error-prone method. In this regard, complementary methods are in demand. In the recent decade, machine learning strategies, which apply mathematical models to a dataset to extract the most informative hidden patterns, have become the center of interest for prediction and diagnosis purposes.

METHOD: In this study, machine learning approaches were used to find the important variables for a reliable prediction of UTI. Several types of models, including classical machine learning and deep learning models, were used for this purpose.

RESULTS: Eighteen features selected from urine tests, blood tests, and demographic data were found to be the most informative. Urine factors such as WBC, nitrite, leukocytes, clarity, color, blood, bilirubin, and urobilinogen; blood-test factors such as mean platelet volume, lymphocyte count, glucose, red blood cell distribution width, and potassium; and demographic data such as age, gender, and previous use of antibiotics were the determinative factors for UTI prediction. An ensemble of XGBoost, decision tree, and light gradient boosting machine classifiers with a voting scheme obtained the highest accuracy for UTI prediction (AUC: 88.53 (0.25), accuracy: 85.64 (0.20)%) using the selected features. Furthermore, the results showed the importance of gender and age for UTI prediction.
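
The voting ensemble described above can be sketched as follows; the synthetic data, hyperparameters, and soft-voting choice are assumptions made for illustration, and the xgboost and lightgbm packages are assumed to be installed.

```python
# Minimal sketch of a soft-voting ensemble of XGBoost, a decision tree, and
# LightGBM; the data and hyperparameters are placeholders, not the study's.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier      # assumes the xgboost package is installed
from lightgbm import LGBMClassifier    # assumes the lightgbm package is installed

X, y = make_classification(n_samples=2000, n_features=18, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("dt", DecisionTreeClassifier(max_depth=6)),
        ("lgbm", LGBMClassifier(n_estimators=200)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba), "accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```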

CONCLUSION: This study highlighted the potential of machine learning strategies for UTI prediction.

PMID:39789596 | DOI:10.1186/s12911-024-02819-2

Categories: Literature Watch

Estimation of TP53 mutations for endometrial cancer based on diffusion-weighted imaging deep learning and radiomics features

Thu, 2025-01-09 06:00

BMC Cancer. 2025 Jan 9;25(1):45. doi: 10.1186/s12885-025-13424-5.

ABSTRACT

OBJECTIVES: To construct a prediction model based on deep learning (DL) and radiomics features of diffusion-weighted imaging (DWI), together with clinical variables, for evaluating TP53 mutations in endometrial cancer (EC).

METHODS: DWI and clinical data from 155 EC patients were included in this study, consisting of 80 in the training set, 35 in the test set, and 40 in the external validation set. Radiomics features, convolutional neural network-based DL features, and clinical variables were analyzed. Feature selection was performed using the Mann-Whitney U test, LASSO regression, and SelectKBest. Prediction models were established with Gaussian process (GP) and decision tree (DT) algorithms and evaluated by the area under the receiver operating characteristic curve (AUC), net reclassification index (NRI), calibration curves, and decision curve analysis (DCA).
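
A minimal sketch of the univariate screening plus Gaussian process classification steps is shown below; the synthetic data, choice of k, and RBF kernel are illustrative assumptions, not the study's actual settings.

```python
# Minimal sketch of univariate screening (Mann-Whitney U + SelectKBest) followed
# by a Gaussian process classifier evaluated with ROC-AUC.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=155, n_features=60, n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Keep features whose class-wise distributions differ (Mann-Whitney U), then
# narrow the set further with SelectKBest.
p_vals = np.array([mannwhitneyu(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue
                   for j in range(X_tr.shape[1])])
candidates = np.flatnonzero(p_vals < 0.05)
if candidates.size == 0:  # guard for the toy example
    candidates = np.arange(X_tr.shape[1])
selector = SelectKBest(f_classif, k=min(11, candidates.size)).fit(X_tr[:, candidates], y_tr)
cols = candidates[selector.get_support()]

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=1)
gp.fit(X_tr[:, cols], y_tr)
print("test AUC:", roc_auc_score(y_te, gp.predict_proba(X_te[:, cols])[:, 1]))
```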

RESULTS: Compared with the DL (AUC: training = 0.830, test = 0.779, validation = 0.711), radiomics (AUC: training = 0.810, test = 0.710, validation = 0.839), and clinical (AUC: training = 0.780, test = 0.685, validation = 0.695) models, the combined model based on the GP algorithm, which consisted of four DL features, five radiomics features, and two clinical variables, not only demonstrated the highest diagnostic efficacy (AUC: training = 0.949, test = 0.877, validation = 0.914) but also improved risk reclassification of the TP53 mutation (NRI: training = 66.38%, 56.98%, and 83.48%; test = 50.72%, 80.43%, and 89.49%; validation = 64.58%, 87.50%, and 120.83%, respectively). In addition, the combined model exhibited good agreement in the calibration curves and good clinical utility in the DCA analyses.

CONCLUSIONS: A prediction model based on the GP algorithm and consisting of DL and radiomics features of DWI as well as clinical variables can effectively assess TP53 mutation in EC.

PMID:39789538 | DOI:10.1186/s12885-025-13424-5

Categories: Literature Watch

Automated stenosis estimation of coronary angiographies using end-to-end learning

Thu, 2025-01-09 06:00

Int J Cardiovasc Imaging. 2025 Jan 9. doi: 10.1007/s10554-025-03324-x. Online ahead of print.

ABSTRACT

The initial evaluation of stenosis during coronary angiography is typically performed by visual assessment. Visual assessment has limited accuracy compared to fractional flow reserve and quantitative coronary angiography, which are more time-consuming and costly. Applying deep learning might yield a faster and more accurate stenosis assessment. We developed a deep learning model to classify cine loops into left or right coronary artery (LCA/RCA) or "other". Data were obtained by manual annotation. Using these classifications, cine loops before revascularization were identified and curated automatically. Separate deep learning models for LCA and RCA were developed to estimate stenosis using these identified cine loops. From a cohort of 19,414 patients and 332,582 cine loops, we identified cine loops for 13,480 patients for model development and 5056 for internal testing. External testing was conducted using automatically identified cine loops from 608 patients. For identification of significant stenosis (visual assessment of diameter stenosis > 70%), our model obtained a receiver operating characteristic (ROC) area under the curve (ROC-AUC) of 0.903 (95% CI: 0.900-0.906) on the internal test set. The performance was evaluated on the external test set against visual assessment, 3D quantitative coronary angiography, and fractional flow reserve (≤ 0.80), obtaining ROC-AUC values of 0.833 (95% CI: 0.814-0.852), 0.798 (95% CI: 0.741-0.842), and 0.780 (95% CI: 0.743-0.817), respectively. The deep-learning-based stenosis estimation models showed promising results for predicting stenosis. Compared to previous work, our approach demonstrates a performance increase, includes all 16 segments, does not exclude revascularized patients, is externally tested, and is simpler, using fewer steps.

PMID:39789341 | DOI:10.1007/s10554-025-03324-x

Categories: Literature Watch

Deep Learning Models for Automatic Classification of Anatomic Location in Abdominopelvic Digital Subtraction Angiography

Thu, 2025-01-09 06:00

J Imaging Inform Med. 2025 Jan 9. doi: 10.1007/s10278-024-01351-z. Online ahead of print.

ABSTRACT

PURPOSE: To explore the information in routine digital subtraction angiography (DSA) and evaluate deep learning algorithms for automated identification of anatomic location in DSA sequences.

METHODS: DSA of the abdominal aorta, celiac, superior mesenteric, inferior mesenteric, and bilateral external iliac arteries was labeled with the anatomic location from retrospectively collected endovascular procedures performed between 2010 and 2020 at a tertiary care medical center. "Key" images within each sequence demonstrating the parent vessel and the first bifurcation were additionally labeled. Mode models aggregating single image predictions, trained with the full or "key" datasets, and a multiple instance learning (MIL) model were developed for location classification of the DSA sequences. Model performance was evaluated with a primary endpoint of multiclass classification accuracy and compared by McNemar's test.
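
A minimal sketch of the "mode" aggregation idea (sequence label = majority vote over per-image predictions) and a McNemar comparison between two such models is given below; the per-frame predictions are simulated placeholders, not real DSA model outputs.

```python
# Minimal sketch of mode aggregation over per-frame predictions plus a
# McNemar comparison of two aggregation schemes on simulated data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_sequences, n_frames, n_classes = 200, 12, 6
truth = rng.integers(0, n_classes, size=n_sequences)

def frame_predictions(correct_rate):
    """Simulate noisy per-frame class predictions for each sequence."""
    preds = rng.integers(0, n_classes, size=(n_sequences, n_frames))
    hit = rng.random((n_sequences, n_frames)) < correct_rate
    preds[hit] = np.repeat(truth[:, None], n_frames, axis=1)[hit]
    return preds

def mode_aggregate(preds):
    """Majority vote across the frames of each sequence."""
    return np.apply_along_axis(lambda row: np.bincount(row, minlength=n_classes).argmax(), 1, preds)

pred_full = mode_aggregate(frame_predictions(0.55))  # e.g. aggregating all frames
pred_key = mode_aggregate(frame_predictions(0.80))   # e.g. aggregating "key" frames only

correct_full, correct_key = pred_full == truth, pred_key == truth
# 2x2 agreement table between the two models, sequence by sequence.
table = [[np.sum(correct_full & correct_key), np.sum(correct_full & ~correct_key)],
         [np.sum(~correct_full & correct_key), np.sum(~correct_full & ~correct_key)]]
print("accuracies:", correct_full.mean(), correct_key.mean())
print(mcnemar(table, exact=True))
```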

RESULTS: A total of 819 unique angiographic sequences from 205 patients and 276 procedures were included in the training, validation, and testing data and split into partitions at the patient level to preclude data leakage. The data demonstrate substantial information sparsity as a minority of the images were designated as "key" with sufficient information for localization by a domain expert. A Mode model, trained and tested with "key" images, demonstrated an overall multiclass classification accuracy of 0.975 (95% CI 0.941-1). A MIL model, trained and tested with all data, demonstrated an overall multiclass classification accuracy of 0.966 (95% CI 0.932-0.992). Both the Mode model with "key" images (p < 0.001) and MIL model (p < 0.001) significantly outperformed a Mode model trained and tested with the full dataset. The MIL model additionally automatically identified a set of top-5 images with an average overlap of 92.5% to manually labelled "key" images.

CONCLUSION: Deep learning algorithms can identify anatomic locations in abdominopelvic DSA with high fidelity using manual or automatic methods to manage information sparsity.

PMID:39789320 | DOI:10.1007/s10278-024-01351-z

Categories: Literature Watch

Machine learning-based prediction model integrating ultrasound scores and clinical features for the progression to rheumatoid arthritis in patients with undifferentiated arthritis

Thu, 2025-01-09 06:00

Clin Rheumatol. 2025 Jan 10. doi: 10.1007/s10067-025-07304-3. Online ahead of print.

ABSTRACT

OBJECTIVES: Predicting rheumatoid arthritis (RA) progression in undifferentiated arthritis (UA) patients remains a challenge. Traditional approaches combining clinical assessments and ultrasonography (US) often lack accuracy due to the complex interaction of clinical variables, and routine extensive US is impractical. Machine learning (ML) models, particularly those integrating the 18-joint ultrasound scoring system (US18), have shown potential to address these issues but remain underexplored. This study aims to evaluate ML models integrating US18 with clinical data to improve early identification of high-risk patients and support personalized treatment strategies.

METHODS: In this prospective cohort, 432 UA patients were followed for 1 year to track progression to RA. Four ML algorithms and one deep learning model were developed using baseline clinical and US18 data. Comparative experiments on a testing cohort identified the optimal model. SHAP (SHapley Additive exPlanations) analysis highlighted key variables, validated through an ablation experiment.
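
A minimal sketch of a Random Forest classifier with SHAP-based variable ranking, in the spirit of the RnFr plus SHAP workflow above; the feature matrix is synthetic and the shap package is assumed to be installed.

```python
# Minimal sketch of a Random Forest classifier ranked with SHAP values.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=432, n_features=25, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))

# TreeExplainer returns per-feature SHAP values; the mean |value| ranks features.
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_te)
# Depending on the shap version, sv is a list (one array per class) or a 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
ranking = np.abs(sv_pos).mean(axis=0).argsort()[::-1]
print("most influential feature indices:", ranking[:5])
```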

RESULTS: Of the 432 patients, 152 (35.2%) progressed to the RA group, while 280 (64.8%) remained in the non-RA group. The Random Forest (RnFr) model demonstrated the highest accuracy and sensitivity. SHAP analysis identified joint counts at US18 Grade 2, total US18 score, and swollen joint count as the most influential variables. The ablation experiment confirmed the importance of US18 in enhancing early RA detection.

CONCLUSIONS: Integrating the US18 assessment with clinical data in an RnFr model significantly improves early detection of RA progression in UA patients, offering potential for earlier and more personalized treatments.

Key Points:
• A machine learning model integrating clinical and ultrasound features effectively predicts rheumatoid arthritis progression in undifferentiated arthritis patients.
• The 18-joint ultrasound scoring system (US18) enhances predictive accuracy, particularly when incorporated with clinical variables in a Random Forest model.
• SHAP analysis underscores that joint severity levels in US18 contribute significantly to early identification of high-risk patients.
• This study offers a feasible and efficient approach for clinical implementation, supporting more personalized and timely RA treatment strategies.

PMID:39789318 | DOI:10.1007/s10067-025-07304-3

Categories: Literature Watch

G-SET-DCL: a guided sequential episodic training with dual contrastive learning approach for colon segmentation

Thu, 2025-01-09 06:00

Int J Comput Assist Radiol Surg. 2025 Jan 9. doi: 10.1007/s11548-024-03319-4. Online ahead of print.

ABSTRACT

PURPOSE: This article introduces a novel deep learning approach to substantially improve the accuracy of colon segmentation even with limited data annotation, which enhances the overall effectiveness of the CT colonography pipeline in clinical settings.

METHODS: The proposed approach integrates 3D contextual information via guided sequential episodic training in which a query CT slice is segmented by exploiting its previous labeled CT slice (i.e., support). Segmentation starts by detecting the rectum using a Markov Random Field-based algorithm. Then, supervised sequential episodic training is applied to the remaining slices, while contrastive learning is employed to enhance feature discriminability, thereby improving segmentation accuracy.

RESULTS: The proposed method, evaluated on 98 abdominal scans of prepped patients, achieved a Dice coefficient of 97.3% and a polyp information preservation accuracy of 98.28%. Statistical analysis, including 95% confidence intervals, underscores the method's robustness and reliability. Clinically, this high level of accuracy is vital for ensuring the preservation of critical polyp details, which are essential for accurate automatic diagnostic evaluation. The proposed method performs reliably in scenarios with limited annotated data. This is demonstrated by achieving a Dice coefficient of 97.15% when the model was trained on a smaller number of annotated CT scans (e.g., 10 scans) than the testing dataset (e.g., 88 scans).
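
For reference, the Dice coefficient reported above can be computed as follows; the masks here are toy binary arrays, not CT colonography segmentations.

```python
# Minimal sketch of the Dice coefficient used to score segmentation overlap.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 2-D masks.
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1
print(f"Dice = {dice_coefficient(a, b):.3f}")
```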

CONCLUSIONS: The proposed sequential segmentation approach achieves promising results in colon segmentation. A key strength of the method is its ability to generalize effectively, even with limited annotated datasets, a common challenge in medical imaging.

PMID:39789205 | DOI:10.1007/s11548-024-03319-4

Categories: Literature Watch

Deep learning model for automatic detection of different types of microaneurysms in diabetic retinopathy

Thu, 2025-01-09 06:00

Eye (Lond). 2025 Jan 9. doi: 10.1038/s41433-024-03585-1. Online ahead of print.

ABSTRACT

PURPOSE: This study aims to develop a deep-learning-based software capable of detecting and differentiating microaneurysms (MAs) as hyporeflective or hyperreflective on structural optical coherence tomography (OCT) images in patients with non-proliferative diabetic retinopathy (NPDR).

METHODS: A retrospective cohort of 249 patients (498 eyes) diagnosed with NPDR was analysed. Structural OCT scans were obtained using the Heidelberg Spectralis HRA + OCT device. Manual segmentation of MAs was performed by five masked readers, with an expert grader ensuring consistent labeling. Two deep learning models, YOLO (You Only Look Once) and DETR (DEtection TRansformer), were trained using the annotated OCT images. Detection and classification performance were evaluated using the area under the receiver operating characteristic (ROC) curves.

RESULTS: The YOLO model performed poorly with an AUC of 0.35 for overall MA detection, with AUCs of 0.33 and 0.24 for hyperreflective and hyporeflective MAs, respectively. The DETR model had an AUC of 0.86 for overall MA detection, but AUCs of 0.71 and 0.84 for hyperreflective and hyporeflective MAs, respectively. Post-hoc review revealed that discrepancies between automated and manual grading were often due to the automated method's selection of normal retinal vessels.

CONCLUSIONS: The choice of deep learning model is critical to achieving accuracy in detecting and classifying MAs in structural OCT images. An automated approach may assist clinicians in the early detection and monitoring of diabetic retinopathy, potentially improving patient outcomes.

PMID:39789187 | DOI:10.1038/s41433-024-03585-1

Categories: Literature Watch

An optimized LSTM-based deep learning model for anomaly network intrusion detection

Thu, 2025-01-09 06:00

Sci Rep. 2025 Jan 10;15(1):1554. doi: 10.1038/s41598-025-85248-z.

ABSTRACT

The increasing prevalence of network connections is driving a continuous surge in the requirement for network security and safeguarding against cyberattacks. This has triggered the need to develop and implement intrusion detection systems (IDSs), a key component of the network perimeter aimed at thwarting and alleviating the issues presented by network invaders. Over time, intrusion detection systems have been instrumental in identifying network breaches and deviations. Several researchers have recommended the implementation of machine learning approaches in IDSs to counteract the menace posed by network intruders. Nevertheless, most previously recommended IDSs exhibit a notable false alarm rate. To mitigate this challenge, exploring deep learning methodologies emerges as a viable solution, leveraging their demonstrated efficacy across various domains. Hence, this article proposes an optimized Long Short-Term Memory (LSTM) model for identifying anomalies in network traffic. The presented model uses three optimization methods, i.e., Particle Swarm Optimization (PSO), JAYA, and the Salp Swarm Algorithm (SSA), to optimize the hyperparameters of the LSTM. In this study, the NSL-KDD, CICIDS, and BoT-IoT datasets are considered. To evaluate the efficacy of the proposed model, several performance indicators, including Accuracy, Precision, Recall, F-score, True Positive Rate (TPR), False Positive Rate (FPR), and the Receiver Operating Characteristic (ROC) curve, have been chosen. A comparative analysis of PSO-LSTMIDS, JAYA-LSTMIDS, and SSA-LSTMIDS is conducted. The simulation results demonstrate that SSA-LSTMIDS surpasses all the models examined in this study across all three datasets.
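
A minimal sketch of PSO over two LSTM hyperparameters (hidden units and learning rate) is given below; the fitness function is a synthetic placeholder standing in for "train the LSTM-IDS and return its validation error", not the paper's actual objective.

```python
# Minimal sketch of particle swarm optimization (PSO) over two hyperparameters.
import numpy as np

rng = np.random.default_rng(42)

def fitness(params):
    """Placeholder objective: pretend the best LSTM has ~128 units and lr ~1e-3."""
    hidden, log_lr = params
    return (hidden - 128.0) ** 2 / 1e4 + (log_lr + 3.0) ** 2

# Search space: hidden units in [16, 256], log10(learning rate) in [-5, -1].
lower = np.array([16.0, -5.0])
upper = np.array([256.0, -1.0])
n_particles, n_iter = 20, 50
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients

pos = rng.uniform(lower, upper, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best hidden units ≈ {gbest[0]:.0f}, best learning rate ≈ {10 ** gbest[1]:.1e}")
```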

PMID:39789143 | DOI:10.1038/s41598-025-85248-z

Categories: Literature Watch

Deep learning-integrated MRI brain tumor analysis: feature extraction, segmentation, and Survival Prediction using Replicator and volumetric networks

Thu, 2025-01-09 06:00

Sci Rep. 2025 Jan 9;15(1):1437. doi: 10.1038/s41598-024-84386-0.

ABSTRACT

The most prevalent malignant tumors that originate in the brain are known as gliomas. In order to diagnose, treat, and identify risk factors, it is crucial to have precise and resilient segmentation of the tumors, along with an estimation of the patients' overall survival rate. Therefore, we have introduced a deep learning approach that employs a combination of MRI scans to accurately segment brain tumors and predict survival in patients with gliomas. To ensure strong and reliable tumor segmentation, we employ 2D volumetric convolutional neural network architectures that utilize a majority rule. This method helps to significantly decrease model bias and improve performance. Additionally, in order to predict survival rates, we extract radiomic features from the segmented tumor regions and then use a Deep Learning Inspired 3D replicator neural network to identify the most effective features. The model presented in this study was successful in segmenting brain tumors and predicting the outcome of enhancing tumor and real enhancing tumor. The model was evaluated using the BraTS2020 benchmark dataset, and the obtained results are quite satisfactory and promising.

PMID:39789043 | DOI:10.1038/s41598-024-84386-0

Categories: Literature Watch

Assessing Artificial Intelligence in Oral Cancer Diagnosis: A Systematic Review

Thu, 2025-01-09 06:00

J Craniofac Surg. 2024 Oct 29. doi: 10.1097/SCS.0000000000010663. Online ahead of print.

ABSTRACT

BACKGROUND: With the use of machine learning algorithms, artificial intelligence (AI) has become a viable diagnostic and treatment tool for oral cancer. AI can assess a variety of information, including histopathology slides and intraoral pictures.

AIM: The purpose of this systematic review is to evaluate the efficacy and accuracy of AI technology in the detection and diagnosis of oral cancer between 2020 and 2024.

METHODOLOGY: With an emphasis on AI applications in oral cancer diagnostics, a thorough search approach was used to find pertinent publications published between 2020 and 2024. Databases such as PubMed, Scopus, and Web of Science were searched using specific keywords associated with AI, oral cancer, and diagnostic imaging. The selection criteria included original English-language research papers that assessed the effectiveness of AI models in diagnosing oral cancer. Three impartial reviewers extracted data, evaluated quality, and compiled the findings using a narrative synthesis technique.

RESULTS: Twelve papers that demonstrated a range of AI applications in the diagnosis of oral cancer satisfied the inclusion criteria. These studies showed encouraging results in lesion identification and prognostic prediction, using machine learning and deep learning algorithms to evaluate intraoral images and histopathology slides. The results demonstrated how AI-driven technologies might enhance diagnostic precision and enable early intervention in cases of oral cancer.

CONCLUSION: Unprecedented prospects to transform oral cancer diagnosis and detection are provided by artificial intelligence. More resilient AI systems in oral oncology can be achieved by joint research and innovation efforts, even in the face of constraints like data set variability and regulatory concerns.

PMID:39787481 | DOI:10.1097/SCS.0000000000010663

Categories: Literature Watch

The potential role of machine learning and deep learning in differential diagnosis of Alzheimer's disease and FTD using imaging biomarkers: A review

Thu, 2025-01-09 06:00

Neuroradiol J. 2025 Jan 9:19714009251313511. doi: 10.1177/19714009251313511. Online ahead of print.

ABSTRACT

INTRODUCTION: The prevalence of neurodegenerative diseases has significantly increased, necessitating a deeper understanding of their symptoms, diagnostic processes, and prevention strategies. Frontotemporal dementia (FTD) and Alzheimer's disease (AD) are two prominent neurodegenerative conditions that present diagnostic challenges due to overlapping symptoms. To address these challenges, experts utilize a range of imaging techniques, including magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), functional MRI (fMRI), positron emission tomography (PET), and single-photon emission computed tomography (SPECT). These techniques facilitate a detailed examination of the manifestations of these diseases. Recent research has demonstrated the potential of artificial intelligence (AI) in automating the diagnostic process, generating significant interest in this field.

MATERIALS AND METHODS: This narrative review aims to compile and analyze articles related to the AI-assisted diagnosis of FTD and AD. We reviewed 31 articles published between 2012 and 2024, with 23 focusing on machine learning techniques and 8 on deep learning techniques. The studies utilized features extracted from both single imaging modalities and multi-modal approaches, and evaluated the performance of various classification models.

RESULTS: Among the machine learning studies, Support Vector Machines (SVM) exhibited the most favorable performance in classifying FTD and AD. In deep learning studies, the ResNet convolutional neural network outperformed other networks.

CONCLUSION: This review highlights the utility of different imaging modalities as diagnostic aids in distinguishing between FTD and AD. However, it emphasizes the importance of incorporating clinical examinations and patient symptom evaluations to ensure comprehensive and accurate diagnoses.

PMID:39787363 | DOI:10.1177/19714009251313511

Categories: Literature Watch

Improving the Reliability of Language Model-Predicted Structures as Docking Targets through Geometric Graph Learning

Thu, 2025-01-09 06:00

J Med Chem. 2025 Jan 9. doi: 10.1021/acs.jmedchem.4c02740. Online ahead of print.

ABSTRACT

Applying artificial intelligence techniques to flexibly model the binding between a ligand and a protein has attracted extensive interest in recent years, but the applicability of such methods remains to be improved. In this study, we have developed CarsiDock-Flex, a novel two-step flexible docking paradigm that generates binding poses directly from predicted structures. CarsiDock-Flex consists of an equivariant deep learning-based model termed CarsiInduce to refine ESMFold-predicted protein pockets with the induction of specific ligands and our existing CarsiDock algorithm to redock the ligand into the induced binding pockets. Extensive evaluations demonstrate the effectiveness of CarsiInduce, which can successfully guide the transition of ESMFold-predicted pockets into their holo-like conformations for numerous cases, thus leading to the superior docking accuracy of CarsiDock-Flex even on unseen sequences. Overall, our approach offers a novel design for flexible modeling of protein-ligand binding poses, paving the way for a deeper understanding of protein-ligand interactions that account for protein flexibility.

PMID:39787296 | DOI:10.1021/acs.jmedchem.4c02740

Categories: Literature Watch

Multi-region infectious disease prediction modeling based on spatio-temporal graph neural network and the dynamic model

Thu, 2025-01-09 06:00

PLoS Comput Biol. 2025 Jan 9;21(1):e1012738. doi: 10.1371/journal.pcbi.1012738. eCollection 2025 Jan.

ABSTRACT

Human mobility between different regions is a major factor in large-scale outbreaks of infectious diseases. Deep learning models that incorporate infectious disease transmission dynamics to predict the spread of multi-regional outbreaks driven by human mobility have become a hot research topic. In this study, we incorporate the Graph Transformer Neural Network and graph learning mechanisms into a metapopulation SIR model to build a hybrid framework, the Metapopulation Graph Transformer Neural Network (M-Graphormer), for high-dimensional parameter estimation and multi-regional epidemic prediction. The framework addresses the problem that existing models may lose hidden spatial dependencies in the data when dealing with the dynamic graph structure induced by human mobility. We performed multi-wave infectious disease prediction in multiple regions based on real epidemic data. The results show that the framework is capable of performing high-dimensional parameter estimation and accurately predicting epidemic transmission dynamics in multiple regions even with low data quality. In addition, we retrospectively extrapolate the temporal evolution patterns of the contact rate under different interventions implemented in different regions, reflecting the dynamics of intervention intensity and the need for flexibility in adjusting interventions across regions. To provide early warning of infectious disease transmission, we retrospectively predicted the arrival time of infectious diseases using data from the early stages of outbreaks.
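
A minimal sketch of the mechanistic component such a hybrid framework builds on, a discrete-time metapopulation SIR update coupled by a mobility matrix, is shown below; the region count, rates, and mobility values are invented for illustration.

```python
# Minimal sketch of a metapopulation SIR step with a human mobility matrix.
import numpy as np

def metapop_sir_step(S, I, R, beta, gamma, M, dt=1.0):
    """One Euler step of a metapopulation SIR model.

    M[i, j] is the fraction of region i's population moving to region j per
    unit time (rows sum to <= 1); beta and gamma are per-region rates.
    """
    N = S + I + R
    new_inf = beta * S * I / np.maximum(N, 1.0)            # local transmission
    dS = -new_inf + M.T @ S - M.sum(axis=1) * S            # mobility in/out
    dI = new_inf - gamma * I + M.T @ I - M.sum(axis=1) * I
    dR = gamma * I + M.T @ R - M.sum(axis=1) * R
    return S + dt * dS, I + dt * dI, R + dt * dR

n_regions = 4
S = np.full(n_regions, 9_990.0)
I = np.array([10.0, 0.0, 0.0, 0.0])   # seed an outbreak in region 0
R = np.zeros(n_regions)
beta = np.full(n_regions, 0.35)
gamma = np.full(n_regions, 0.1)
M = np.full((n_regions, n_regions), 0.01)
np.fill_diagonal(M, 0.0)

for day in range(60):
    S, I, R = metapop_sir_step(S, I, R, beta, gamma, M)
print("infections per region after 60 days:", np.round(I, 1))
```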

PMID:39787070 | DOI:10.1371/journal.pcbi.1012738

Categories: Literature Watch

Widespread use of ChatGPT and other Artificial Intelligence tools among medical students in Uganda: A cross-sectional study

Thu, 2025-01-09 06:00

PLoS One. 2025 Jan 9;20(1):e0313776. doi: 10.1371/journal.pone.0313776. eCollection 2025.

ABSTRACT

BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT) is a 175-billion-parameter natural language processing model that uses deep learning algorithms trained on vast amounts of data to generate human-like texts such as essays. Consequently, it has introduced new challenges and threats to medical education. We assessed the use of ChatGPT and other AI tools among medical students in Uganda.

METHODS: We conducted a descriptive cross-sectional study among medical students at four public universities in Uganda from 1st November 2023 to 20th December 2023. Participants were recruited by stratified random sampling. We used a semi-structured questionnaire to collect data on participants' socio-demographics and use of AI tools such as ChatGPT. Our outcome variable was use of AI tools. Data were analyzed descriptively in Stata version 17.0. We conducted a modified Poisson regression to explore the association between use of AI tools and various exposures.
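
A minimal sketch of a modified Poisson regression (a Poisson GLM with robust standard errors) for estimating adjusted prevalence ratios of a binary outcome is given below; the covariates are simulated stand-ins rather than the study's data, and the statsmodels package is assumed.

```python
# Minimal sketch of modified Poisson regression with robust (HC0) errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 564
df = pd.DataFrame({
    "older": rng.integers(0, 2, n),         # 1 = older student (illustrative)
    "university_A": rng.integers(0, 2, n),  # 1 = attends a hypothetical university A
})
# Simulate a binary "uses AI tools" outcome whose probability depends on covariates.
p = 1 / (1 + np.exp(-(1.0 - 0.4 * df["older"] + 0.5 * df["university_A"])))
df["uses_ai"] = rng.random(n) < p

X = sm.add_constant(df[["older", "university_A"]])
# Poisson family + robust covariance yields prevalence ratios for a binary outcome.
model = sm.GLM(df["uses_ai"].astype(float), X, family=sm.families.Poisson())
result = model.fit(cov_type="HC0")
print(np.exp(result.params))            # adjusted prevalence ratios
print(result.conf_int().apply(np.exp))  # 95% CIs on the ratio scale
```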

RESULTS: A total of 564 students participated. Almost all (93%) had heard about AI tools, and more than two-thirds (75.7%) had ever used them. Regarding the AI tools used, the majority (72.2%) had ever used ChatGPT, followed by SnapChat AI (14.9%), Bing AI (11.5%), and Bard AI (6.9%). Most students used AI tools to complete assignments (55.5%), prepare for tutorials (39.9%), prepare for exams (34.8%), and write research (24.8%). Students also reported the use of AI tools for non-academic purposes, including emotional support, recreation, and spiritual growth. Older students were 31% less likely to use AI tools compared to younger ones (Adjusted Prevalence Ratio (aPR): 0.69; 95% CI: [0.62, 0.76]). Students at Makerere University were 66% more likely to use AI tools compared to students at Gulu University (aPR: 1.66; 95% CI: [1.64, 1.69]).

CONCLUSION: The use of ChatGPT and other AI tools was widespread among medical students in Uganda. AI tools were used for both academic and non-academic purposes. Younger students were more likely to use AI tools compared to older students. There is a need to promote AI literacy in institutions to empower older students with essential skills for the digital age. Further, educators should assume students are using AI and adjust their way of teaching and setting exams to suit this new reality. Our research adds further evidence to existing voices calling for regulatory frameworks for AI in medical education.

PMID:39787055 | DOI:10.1371/journal.pone.0313776

Categories: Literature Watch

Deep learning model for identifying acute heart failure patients using electrocardiography in the emergency room

Thu, 2025-01-09 06:00

Eur Heart J Acute Cardiovasc Care. 2025 Jan 9:zuaf001. doi: 10.1093/ehjacc/zuaf001. Online ahead of print.

ABSTRACT

BACKGROUND: Acute heart failure (AHF) poses significant diagnostic challenges in the emergency room (ER) because of its varied clinical presentation and limitations of traditional diagnostic methods. This study aimed to develop and evaluate a deep-learning model using electrocardiogram (ECG) data to enhance AHF identification in the ER.

METHODS: In this retrospective cohort study, we analyzed the ECG data of 19,285 patients who visited the ERs of three hospitals between 2016 and 2020; 9,119 patients with available left ventricular ejection fraction and N-terminal prohormone of brain natriuretic peptide data who were diagnosed with AHF were included in the study. We extracted morphological and clinical parameters from the ECG data to train and validate four machine learning models: a baseline linear regression model and the more advanced XGBoost, LightGBM, and CatBoost models.
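
A minimal sketch of fitting a CatBoost classifier and reporting AUROC on held-out data, loosely mirroring the model comparison above; the features are synthetic and the catboost package is assumed to be installed.

```python
# Minimal sketch of training a CatBoost classifier and evaluating AUROC.
from catboost import CatBoostClassifier  # assumes the catboost package is installed
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1,
                           eval_metric="AUC", verbose=False, random_seed=0)
model.fit(X_tr, y_tr, eval_set=(X_te, y_te))
print("test AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```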

RESULTS: The CatBoost algorithm outperformed the other models, showing superior diagnostic accuracy in terms of the area under the receiver operating characteristic curve and the area under the precision-recall curve across both the internal (0.89 ± 0.01 and 0.89 ± 0.01) and external (0.90 and 0.89) validation datasets, respectively. The model demonstrated high accuracy, precision, recall, and F1 score, indicating robust performance in AHF identification.

CONCLUSION: The developed machine learning model significantly enhanced AHF detection in the ER using conventional 12-lead ECGs combined with clinical data. These findings suggest that ECGs, a common tool in the ER, can effectively help screen for AHF.

PMID:39787045 | DOI:10.1093/ehjacc/zuaf001

Categories: Literature Watch

Transcription factor prediction using protein 3D secondary structures

Thu, 2025-01-09 06:00

Bioinformatics. 2025 Jan 9:btae762. doi: 10.1093/bioinformatics/btae762. Online ahead of print.

ABSTRACT

MOTIVATION: Transcription factors (TFs) are DNA-binding proteins that regulate gene expression. Traditional methods predict a protein as a TF if the protein contains any DNA-binding domains (DBDs) of known TFs. However, this approach fails to identify a novel TF that does not contain any known DBDs. Recently proposed TF prediction methods do not rely on DBDs. Such methods use features of protein sequences to train a machine learning model, and then use the trained model to predict whether a protein is a TF or not. Because the 3-dimensional (3D) structure of a protein captures more information than its sequence, using 3D protein structures will likely allow for more accurate prediction of novel TFs.

RESULTS: We propose a deep learning-based TF prediction method (StrucTFactor), which is the first method to utilize 3D secondary structural information of proteins. We compare StrucTFactor with recent state-of-the-art TF prediction methods based on ∼525 000 proteins across 12 datasets, capturing different aspects of data bias (including sequence redundancy) possibly influencing a method's performance. We find that StrucTFactor significantly (p-value<0.001) outperforms the existing TF prediction methods, improving the performance over its closest competitor by up to 17% based on Matthews correlation coefficient.
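
For reference, the Matthews correlation coefficient used above for method comparison can be computed with scikit-learn; the labels below are toy values.

```python
# Minimal sketch of the Matthews correlation coefficient (MCC) on toy labels.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print("MCC:", matthews_corrcoef(y_true, y_pred))
```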

AVAILABILITY: Data and source code are available at https://github.com/lieboldj/StrucTFactor and on our website at https://apps.cosy.bio/StrucTFactor/.

SUPPLEMENTARY INFORMATION: Included.

PMID:39786868 | DOI:10.1093/bioinformatics/btae762

Categories: Literature Watch

Multiple constraint network classification reveals functional brain networks distinguishing 0-back and 2-back task

Thu, 2025-01-09 06:00

Can J Exp Psychol. 2025 Jan 9. doi: 10.1037/cep0000360. Online ahead of print.

ABSTRACT

Working memory is associated with general intelligence and is crucial for performing complex cognitive tasks. Neuroimaging investigations have recognized that working memory is supported by a distribution of activity in regions across the entire brain. Identification of these regions has come primarily from general linear model analyses of statistical parametric maps to reveal brain regions whose activation is linearly related to working memory task conditions. This approach can fail to detect nonlinear task differences or differences reflected in distributed patterns of activity. In this study, we take advantage of the increased sensitivity of multivariate pattern analysis in a multiple-constraint deep learning classifier to analyze patterns of whole-brain blood oxygen level dependent (BOLD) activity in children performing two different conditions of the emotional n-back task. Regional (supervoxel) whole-brain activation patterns from functional imaging runs of 20 children were used to train a set of neural network classifiers to identify task category (0-back vs. 2-back) and activation co-occurrence probability, which encoded functional connectivity. These simultaneous constraints promote the discovery of coherent networks that contribute towards task performance in each memory load condition. Permutation analyses discovered the global activation patterns and interregional coactivations that distinguish memory load. Examination of model weights identified the brain regions most predictive of memory load and the functional networks integrating these regions. Community detection analyses identified functional networks integrating task-predictive regions and found distinct patterns of network activation for each task type. Comparisons to functional network literature suggest more focused attentional network activation during the 2-back task. (PsycInfo Database Record (c) 2025 APA, all rights reserved).

PMID:39786863 | DOI:10.1037/cep0000360

Categories: Literature Watch

FlowPacker: Protein side-chain packing with torsional flow matching

Thu, 2025-01-09 06:00

Bioinformatics. 2025 Jan 9:btaf010. doi: 10.1093/bioinformatics/btaf010. Online ahead of print.

ABSTRACT

MOTIVATION: Accurate prediction of protein side-chain conformations is necessary to understand protein folding, protein-protein interactions and facilitate de novo protein design.

RESULTS: Here we apply torsional flow matching and equivariant graph attention to develop FlowPacker, a fast and performant model to predict protein side-chain conformations conditioned on the protein sequence and backbone. We show that FlowPacker outperforms previous state-of-the-art baselines across most metrics with improved runtime. We further show that FlowPacker can be used to inpaint missing side-chain coordinates and also for multimeric targets, and exhibits strong performance on a test set of antibody-antigen complexes.

AVAILABILITY: Code is available at https://gitlab.com/mjslee0921/flowpacker.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:39786861 | DOI:10.1093/bioinformatics/btaf010

Categories: Literature Watch
