Deep learning

Efficient anomaly detection in tabular cybersecurity data using large language models

Mon, 2025-01-27 06:00

Sci Rep. 2025 Jan 27;15(1):3344. doi: 10.1038/s41598-025-88050-z.

ABSTRACT

In cybersecurity, anomaly detection in tabular data is essential for ensuring information security. While traditional machine learning and deep learning methods have shown some success, they continue to face significant challenges in terms of generalization. To address these limitations, this paper presents an innovative method for tabular data anomaly detection based on large language models, called "Tabular Anomaly Detection via Guided Prompts" (TAD-GP). This approach utilizes a 7-billion-parameter open-source model and incorporates strategies such as data sample introduction, anomaly type recognition, chain-of-thought reasoning, multi-turn dialogue, and key information reinforcement. Experimental results indicate that the TAD-GP framework improves F1 scores by 79.31%, 97.96%, and 59.09% on the CICIDS2017, KDD Cup 1999, and UNSW-NB15 datasets, respectively. Furthermore, the smaller-scale TAD-GP model outperforms larger models across multiple datasets, demonstrating its practical potential in environments with constrained computational resources and requirements for private deployment. This method addresses a critical gap in research on anomaly detection in cybersecurity, specifically using small-scale open-source models.
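
As a concrete illustration of the prompting strategies listed above, here is a minimal sketch of a guided-prompt builder. The paper's exact prompt wording is not given in the abstract, so the field names, anomaly types, and phrasing below are illustrative assumptions; further turns of the multi-turn dialogue would be appended to the returned message list.

```python
# Hypothetical guided-prompt builder in the spirit of TAD-GP (illustrative only).
def build_tad_gp_prompt(sample: dict, anomaly_types: list[str], examples: list[str]) -> list[dict]:
    """Assemble the opening turn of a multi-turn chat for a 7B instruct model."""
    row = ", ".join(f"{k}={v}" for k, v in sample.items())
    system = (
        "You are a network-security analyst. Decide whether a tabular traffic "
        "record is NORMAL or ANOMALOUS. Think step by step."   # chain-of-thought cue
    )
    user = (
        f"Known anomaly types: {', '.join(anomaly_types)}.\n"  # anomaly type recognition
        "Labeled examples:\n" + "\n".join(examples) + "\n"     # data sample introduction
        f"Record to classify: {row}\n"
        "Focus on the fields most indicative of attacks."      # key information reinforcement
    )
    return [{"role": "system", "content": system}, {"role": "user", "content": user}]

messages = build_tad_gp_prompt(
    {"duration": 0, "src_bytes": 491, "flag": "SF"},           # e.g., a KDD Cup 1999 row
    anomaly_types=["DoS", "Probe", "R2L", "U2R"],
    examples=["duration=0, src_bytes=0, flag=S0 -> ANOMALOUS (DoS)"],
)
```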

PMID:39870811 | DOI:10.1038/s41598-025-88050-z

Categories: Literature Watch

Multistage deep learning methods for automating radiographic sharp score prediction in rheumatoid arthritis

Mon, 2025-01-27 06:00

Sci Rep. 2025 Jan 27;15(1):3391. doi: 10.1038/s41598-025-86073-0.

ABSTRACT

The Sharp-van der Heijde score (SvH) is crucial for assessing joint damage in rheumatoid arthritis (RA) through radiographic images. However, manual scoring is time-consuming and subject to variability. This study proposes a multistage deep learning model to predict the Overall Sharp Score (OSS) from hand X-ray images. The framework involves four stages: image preprocessing, hand segmentation with UNet, joint identification via YOLOv7, and OSS prediction utilizing a custom Vision Transformer (ViT). Evaluation metrics included Intersection over Union (IoU), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Huber loss, and Intraclass Correlation Coefficient (ICC). The model was trained using stratified group 3-fold cross-validation on a dataset of 679 patients and tested externally on 291 subjects. The joint identification model achieved 99% accuracy. The ViT model achieved the best OSS prediction for patients with Sharp scores < 50, with a Huber loss of 4.9, an RMSE of 9.73, and an MAE of 5.35, demonstrating a strong correlation with expert scores (ICC = 0.702, P < 0.001). This study is the first to apply a ViT for OSS prediction in RA, presenting an efficient, automated alternative for overall damage assessment that may reduce reliance on manual scoring.
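
For reference, the Huber loss reported above is the standard piecewise-quadratic form; the threshold δ used in the paper is not stated in the abstract:

```latex
L_\delta(y,\hat{y}) =
\begin{cases}
\tfrac{1}{2}(y-\hat{y})^2, & |y-\hat{y}| \le \delta,\\
\delta\left(|y-\hat{y}| - \tfrac{1}{2}\delta\right), & \text{otherwise,}
\end{cases}
```

so it behaves like the squared error for small residuals and like the absolute error for large ones, making it robust to occasional badly mis-scored joints.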

PMID:39870749 | DOI:10.1038/s41598-025-86073-0

Categories: Literature Watch

Deep learning based decision-making and outcome prediction for adolescent idiopathic scoliosis patients with posterior surgery

Mon, 2025-01-27 06:00

Sci Rep. 2025 Jan 27;15(1):3389. doi: 10.1038/s41598-025-87370-4.

ABSTRACT

With the emergence of numerous classifications, surgical treatment for adolescent idiopathic scoliosis (AIS) can be guided more effectively. However, surgical decision-making and optimal strategies still lack standardization and personalized customization. Our study aims to devise deep learning (DL) models that incorporate key factors influencing surgical outcomes on the coronal plane in AIS patients, to facilitate surgical decision-making and predict surgical results. A total of 425 AIS patients who underwent posterior spinal fixation were collected. Variables such as age, gender, preoperative and final follow-up horizontal and vertical coordinate vectors, and screw positioning data were preprocessed by parameterizing the image data and transforming the various data types into a unified, continuous high-dimensional feature space. Four deep learning models were designed: a Multi-Layer Perceptron model, an Encoder-Decoder model, a CNN-LSTM Attention model, and a DeepFM model. For the implementation of deep learning, 70% of the data was adopted for training and 30% for evaluation. The mean square error (MSE), mean absolute error (MAE), and curve fitting between the predicted and corresponding real postoperative spinal coordinates of the test set were adopted to validate and compare the efficacy of the DL models. The 425 enrolled patients had an average age of 14.60 ± 2.08 years and comprised 77 males and 348 females; Lenke type 1 and type 5 AIS patients accounted for the majority. The results showed that the Multi-Layer Perceptron model achieved the best performance among the four DL models, with a mean square error of 2.77 × 10⁻⁵ and a mean absolute error of 0.00350 on the validation set. Moreover, the coordinates predicted by the Multi-Layer Perceptron model closely matched the actual positions on the original postoperative images of Lenke type 1 and type 5 AIS patients. Deep learning models can provide alternative and effective decision-making support for AIS patients undergoing surgery. Regarding the learning curve and data volume, the optimal DL models should be adjusted and refined to meet future demands.
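
A minimal sketch of the best-performing approach described above: an MLP mapping preoperative feature vectors to postoperative spinal coordinates, evaluated with MSE and MAE. The layer sizes and input/output dimensions are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (illustrative dimensions): an MLP mapping preoperative feature
# vectors to postoperative spinal coordinates, evaluated with MSE and MAE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateMLP(nn.Module):
    def __init__(self, in_dim: int = 128, out_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),   # flattened postoperative (x, y) coordinates
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = CoordinateMLP()
x = torch.randn(8, 128)                   # batch of preoperative feature vectors
target = torch.randn(8, 64)               # corresponding postoperative coordinates
pred = model(x)
mse, mae = F.mse_loss(pred, target), F.l1_loss(pred, target)   # the study's metrics
```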

PMID:39870730 | DOI:10.1038/s41598-025-87370-4

Categories: Literature Watch

An endoscopic ultrasound-based interpretable deep learning model and nomogram for distinguishing pancreatic neuroendocrine tumors from pancreatic cancer

Mon, 2025-01-27 06:00

Sci Rep. 2025 Jan 27;15(1):3383. doi: 10.1038/s41598-024-84749-7.

ABSTRACT

To retrospectively develop and validate an interpretable deep learning model and nomogram utilizing endoscopic ultrasound (EUS) images to distinguish pancreatic neuroendocrine tumors (PNETs) from pancreatic cancer. Following confirmation via pathological examination, a retrospective analysis was performed on a cohort of 266 patients, comprising 115 individuals diagnosed with PNETs and 151 with pancreatic cancer. These patients were randomly assigned to the training or test group in a 7:3 ratio. The least absolute shrinkage and selection operator (LASSO) algorithm was employed to reduce the dimensionality of deep learning (DL) features extracted from pre-standardized EUS images. The retained nonzero-coefficient features were subsequently used to develop eight predictive DL models based on distinct machine learning algorithms. The optimal DL model was identified and used to establish a clinical signature, which subsequently informed the construction and evaluation of a nomogram. Gradient-weighted Class Activation Mapping (Grad-CAM) and Shapley Additive Explanations (SHAP) were implemented to interpret and visualize the model outputs. A total of 2048 DL features were initially extracted, of which only 27 features with nonzero coefficients were retained. The support vector machine (SVM) DL model demonstrated exceptional performance, achieving area under the curve (AUC) values of 0.948 and 0.795 in the training and test groups, respectively. Additionally, a nomogram incorporating both the DL and clinical signatures was developed and visually represented for practical application. Finally, the calibration curves, decision curve analysis (DCA) plots, and clinical impact curves (CIC) of the DL model and nomogram indicated high accuracy, and the application of Grad-CAM and SHAP enhanced the interpretability of these models. These methodologies contributed substantial net benefits to clinical decision-making processes. A novel interpretable DL model and nomogram were developed and validated using EUS images in combination with machine learning algorithms. This approach demonstrates significant potential for enhancing the clinical applicability of EUS in distinguishing PNETs from pancreatic cancer, thereby offering valuable insights for future research and implementation.
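
A sketch of the feature-selection and classification steps described above: LASSO shrinks most of the 2048 deep features to zero, and the retained nonzero-coefficient features feed an SVM. Synthetic data stands in for the EUS deep features, and all hyperparameters are assumptions.

```python
# Sketch with synthetic stand-in data: LASSO keeps nonzero-coefficient features,
# which then train an SVM classifier (the best of the eight models above).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(266, 2048))                    # 266 patients x 2048 DL features
w = np.zeros(2048); w[:27] = rng.normal(size=27)    # synthetic signal in 27 features
y = (X @ w + rng.normal(size=266) > 0).astype(int)  # 1 = PNET, 0 = pancreatic cancer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)                  # features with nonzero coefficients
svm = SVC(probability=True).fit(X_tr[:, keep], y_tr)
print(f"{keep.size} features retained; test accuracy {svm.score(X_te[:, keep], y_te):.2f}")
```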

PMID:39870667 | DOI:10.1038/s41598-024-84749-7

Categories: Literature Watch

Development of a CT radiomics prognostic model for post renal tumor resection overall survival based on transformer enhanced K-means clustering

Mon, 2025-01-27 06:00

Med Phys. 2025 Jan 27. doi: 10.1002/mp.17639. Online ahead of print.

ABSTRACT

BACKGROUND: Kidney tumors, common in the urinary system, have widely varying survival rates post-surgery. Current prognostic methods rely on invasive biopsies, highlighting the need for non-invasive, accurate prediction models to assist in clinical decision-making.

PURPOSE: This study aimed to construct a K-means clustering algorithm enhanced by Transformer-based feature transformation to predict the overall survival rate of patients after kidney tumor resection and provide an interpretability analysis of the model to assist in clinical decision-making.

METHODS: This study was based on the publicly available C4KC-KiTS-2019 dataset from the TCIA database, including preoperative computed tomography (CT) images and survival time data of 210 patients. Initially, the radiomics features of the kidney tumor area were extracted using the 3D Slicer software. Feature selection was then conducted using the ICC and mRMR algorithms and LASSO regression to calculate radiomics scores. Subsequently, the selected features were input into a pre-trained Transformer model for feature transformation to obtain a higher-dimensional feature set. K-means clustering was then performed on this feature set, and the model was evaluated using receiver operating characteristic (ROC) and Kaplan-Meier curves. Finally, the SHAP interpretability algorithm was used for feature importance analysis of the K-means clustering results.
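
A minimal sketch of the core idea: pass the selected radiomics features through a Transformer encoder to obtain a higher-dimensional representation, then cluster with K-means. The encoder here is untrained and purely illustrative; the study used a pre-trained model whose architecture the abstract does not specify.

```python
# Illustrative pipeline: Transformer feature transformation, then K-means.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

n_patients, n_feats, d_model = 210, 11, 64
radiomics = torch.randn(n_patients, n_feats)          # the 11 selected features

embed = nn.Linear(1, d_model)                         # each feature becomes a token
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
with torch.no_grad():
    tokens = embed(radiomics.unsqueeze(-1))           # (210, 11, 64)
    transformed = encoder(tokens).mean(dim=1)         # (210, 64) higher-dimensional features

clusters = KMeans(n_clusters=2, n_init=10).fit_predict(transformed.numpy())
```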

RESULTS: Eleven important features were selected from 851 radiomics features. The K-means clustering model after Transformer feature transformation showed AUCs of 0.889, 0.841, and 0.926 for predicting 1-, 3-, and 5-year overall survival rates, respectively, thereby outperforming both the K-means model with original feature inputs and the radiomics score method. A clustering analysis revealed survival prognosis differences among different patient groups, and a SHAP analysis provided insights into the features that had the most significant impacts on the model predictions.

CONCLUSIONS: The K-means clustering algorithm enhanced by the Transformer feature transformation proposed in this study demonstrates promising accuracy and interpretability in predicting the overall survival rate after kidney tumor resection. This method provides a valuable tool for clinical decision-making and contributes to improved management and treatment strategies for patients with kidney tumors.

PMID:39871101 | DOI:10.1002/mp.17639

Categories: Literature Watch

Preserved brain youthfulness: longitudinal evidence of slower brain aging in superagers

Mon, 2025-01-27 06:00

Geroscience. 2025 Jan 27. doi: 10.1007/s11357-025-01531-x. Online ahead of print.

ABSTRACT

BACKGROUND: Superagers, older adults with exceptional cognitive abilities, show preserved brain structure compared to typical older adults. We investigated whether superagers have biologically younger brains based on their structural integrity.

METHODS: A cohort of 153 older adults (aged 61-93) was recruited, with 63 classified as superagers based on superior episodic memory and 90 as typical older adults, of whom 64 were followed up after two years. A deep learning model for brain age prediction, trained on 899 adults of diverse ages (31-100), was adapted to the older adult cohort via transfer learning. The brain age gap (BAG), defined as the difference between predicted and chronological age based on brain structural patterns, and its annual rate of change were calculated to assess brain aging status and speed, respectively, and compared among subgroups.
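
The two quantities compared between groups, as defined in the abstract, reduce to simple arithmetic:

```python
# Brain age gap (BAG) and its annual rate of change, per the definitions above.
def brain_age_gap(predicted_age: float, chronological_age: float) -> float:
    return predicted_age - chronological_age

def bag_annual_change(bag_baseline: float, bag_followup: float, years: float = 2.0) -> float:
    return (bag_followup - bag_baseline) / years

# e.g., predicted 68.1 at chronological age 70 gives BAG = -1.9 (a "younger" brain)
```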

RESULTS: Lower BAGs correlated with more favorable cognitive status in memory and general cognitive function. Superagers exhibited a lower BAG than typical older adults at both baseline and follow-up. Individuals who maintained or attained superager status at follow-up showed a slower annual rate of change in BAG compared to those who remained or became typical older adults.

CONCLUSIONS: Superaging brains manifested maintained neurobiological youthfulness in terms of a more youthful brain aging status and a reduced speed of brain aging. These findings suggest that cognitive resilience, and potentially broader functional resilience, exhibited by superagers during the aging process may be attributable to their younger brains.

PMID:39871070 | DOI:10.1007/s11357-025-01531-x

Categories: Literature Watch

Validation of UniverSeg for Interventional Abdominal Angiographic Segmentation

Mon, 2025-01-27 06:00

J Imaging Inform Med. 2025 Jan 27. doi: 10.1007/s10278-024-01349-7. Online ahead of print.

ABSTRACT

Automatic segmentation of angiographic structures can aid in assessing vascular disease. While recent deep learning models promise automation, they lack validation on interventional angiographic data. This study investigates the feasibility of angiographic segmentation using in-context learning with the UniverSeg model, which is a cross-learning segmentation model that lacks inherent angiographic training. A retrospective review, after IRB approval, identified 234 patients who underwent interventional fluoroscopy of the celiac axis with iodinated contrast from January 1, 2019, to December 31, 2022. From 261 acquisitions, 303 maximum contrast images were selected, each generating a 128 × 128 pixel partition for arterial detail analysis and binary mask creation. Image-mask pairs were divided into three classes of 101 pairs each, based on arterial diameter and bifurcation number. UniverSeg was tested class independently in a fivefold nested cross-validation. Performance analysis for in-context learning determined average model convergence for class sizes from 1 to 81 pairs. The model was further validated by repeating the tests on the inverse segmentation task. Dice similarity coefficients for decreasing diameters were 78.7%, 72.5%, and 59.9% (σ = 5.96, 7.99, 14.29). Balanced average Hausdorff distances were 0.86, 0.71, and 1.16 (σ = 0.37, 0.52, 0.68) pixels, respectively. Inverted mask testing aligned with UniverSeg expectations for out-of-context problem sets. Performance improved with support class size, vessel diameter, and reduced bifurcations, plateauing to within ± 1.34 Dice score at N = 51. This study validates UniverSeg for arterial segmentation in interventional fluoroscopic procedures, supporting vascular disease modeling and imaging research.
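
The primary overlap metric reported above, as a minimal NumPy implementation for binary masks:

```python
# Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.zeros((128, 128), bool); a[30:90, 40:100] = True   # 128 x 128, as in the partitions above
b = np.zeros((128, 128), bool); b[35:95, 40:100] = True
print(f"Dice = {dice(a, b):.3f}")
```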

PMID:39871044 | DOI:10.1007/s10278-024-01349-7

Categories: Literature Watch

In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks

Mon, 2025-01-27 06:00

J Imaging Inform Med. 2025 Jan 27. doi: 10.1007/s10278-024-01174-y. Online ahead of print.

ABSTRACT

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on in vivo confocal microscopy (IVCM) images and to evaluate the performance of the DCNN model and its auxiliary significance for clinical diagnosis and treatment. We extracted 6643 IVCM images from three hospitals' IVCM databases as the training set for the DCNN model and 1661 IVCM images from two other hospitals' IVCM databases as the test set to examine the performance of the model. The DCNN model was constructed using DenseNet-169 and trained with deep learning to identify the IVCM images. The results of MGD classifications by three ophthalmologists were used to calculate the area under the receiver operating characteristic curve (AUROC), accuracy, precision, recall, true negative rate (TNR), true positive rate (TPR), and false positive rate (FPR) of the model. Model accuracy and loss tests showed that the DCNN model had high accuracy, low loss, and no large fluctuations at an epoch of 175, indicating that DenseNet-169 could perform the dichotomous classification stably. The accuracy of each classification on the test set was above 90%, highly consistent with the ophthalmologists' diagnoses. The precision of the groups in each classification was more than 90%, or even close to 100%, except for the meibomian gland atrophy with obstruction group in the fifth classification. The recall ranged from 0.8728 to 0.9981, and the FPR was low in the screening and classification diagnoses. The application of a DCNN can achieve accurate classification and diagnosis of MGD from IVCM images and has great potential for use in medical procedures.
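
A sketch of the backbone setup described above: DenseNet-169 with its classifier head replaced for an MGD classification task. The two-class head and ImageNet initialization are assumptions (the study describes several classification schemes, including a dichotomous one), and the weights string assumes a recent torchvision.

```python
# DenseNet-169 backbone with a replaced classifier head (two classes assumed here).
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet169(weights="IMAGENET1K_V1")            # pretrained backbone
model.classifier = nn.Linear(model.classifier.in_features, 2)  # e.g., MGD vs. normal

logits = model(torch.randn(1, 3, 224, 224))                    # one preprocessed IVCM image
```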

PMID:39871043 | DOI:10.1007/s10278-024-01174-y

Categories: Literature Watch

Diagnostic Accuracy of Artificial Intelligence for Detection of Rib Fracture on X-ray and Computed Tomography Imaging: A Systematic Review

Mon, 2025-01-27 06:00

J Imaging Inform Med. 2025 Jan 27. doi: 10.1007/s10278-025-01412-x. Online ahead of print.

ABSTRACT

Rib pathology is uniquely difficult and time-consuming for radiologists to diagnose. AI can reduce radiologist workload and serve as a tool to improve diagnostic accuracy. To date, no reviews have synthesized the data on AI identification of rib fractures, its diagnostic performance on X-ray and CT scans, and its comparison to physicians. The objectives of this study are to analyze the performance of artificial intelligence in diagnosing rib fractures on X-ray and computed tomography (CT) scans across multiple clinical studies and to compare it to that of physicians. A literature search was conducted on PubMed and Embase for articles regarding the use of artificial intelligence for the detection of rib fractures up until July 2024. The AI model, number of cases, sensitivity, and comparison-to-physician data were collected. A total of 29 studies, comprising 125,364 cases, were included in this review. The pooled sensitivity of the AI models was 0.853. Nineteen of these studies compared their results to those of radiologists, orthopedic surgeons, or anesthesiologists, totaling 61 physicians. In these 19 studies, the radiologists had a pooled sensitivity of 0.750, while the sensitivity of AI was 0.840. The results suggest that artificial intelligence has a promising role in detecting rib fractures on X-ray and CT scans. In our interpretation, the performance of artificial intelligence is similar to, or better than, that of physicians, alluding to its encouraging potential in a clinical setting, as it may reduce physician workload, improve reading efficiency, and lead to better patient outcomes.
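
One way a case-weighted pooled sensitivity of the kind reported above can be computed; the review's exact meta-analytic pooling procedure is not detailed in the abstract, and the counts below are hypothetical.

```python
# Case-weighted pooled sensitivity across studies (hypothetical counts).
def pooled_sensitivity(studies: list[tuple[int, int]]) -> float:
    """Each study contributes (true_positives, false_negatives)."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    return tp / (tp + fn)

print(pooled_sensitivity([(850, 150), (420, 80), (300, 60)]))  # -> 0.844
```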

PMID:39871041 | DOI:10.1007/s10278-025-01412-x

Categories: Literature Watch

Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers

Mon, 2025-01-27 06:00

J Imaging Inform Med. 2025 Jan 27. doi: 10.1007/s10278-025-01416-7. Online ahead of print.

ABSTRACT

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated, so they must be identified in the early stages to prevent progression of eye damage. Deep learning (DL) has been widely used in image recognition for the early detection and treatment of eye diseases. In this study, ResNet50, DenseNet121, Inception-ResNetV2, and six variations of ViT are employed, and their performance in diagnosing glaucoma, cataracts, and diabetic retinopathy is evaluated. In particular, the article uses the vision transformer model as an automated method to diagnose retinal eye diseases, highlighting the accuracy of pre-trained deep transfer learning (DTL) structures. The updated ViT#5 model with the augmented-regularized pre-trained backbone (AugReg ViT-L/16_224) and a learning rate of 0.00002 outperforms state-of-the-art techniques, obtaining an accuracy of 98.1% on a publicly accessible retinal ophthalmoscopy image dataset of 4217 images. In most categories, the model outperforms the other convolutional and ViT models in terms of accuracy, precision, recall, and F1 score. This research contributes significantly to medical image analysis, demonstrating the potential of AI to enhance the precision of eye disease diagnoses and advocating for the integration of artificial intelligence in medical diagnostics.
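
A sketch of fine-tuning an AugReg-pretrained ViT-L/16 at 224-pixel resolution with the learning rate from the abstract (2e-5). The timm model tag and the four-class head (three diseases plus normal) are assumptions.

```python
# Fine-tuning sketch; the timm tag and 4-class head are assumptions.
import timm
import torch
import torch.nn.functional as F

model = timm.create_model("vit_large_patch16_224.augreg_in21k", pretrained=True, num_classes=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)     # learning rate from the abstract

x = torch.randn(2, 3, 224, 224)                # batch of preprocessed ophthalmoscopy images
loss = F.cross_entropy(model(x), torch.tensor([0, 3]))
loss.backward()
optimizer.step()
```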

PMID:39871038 | DOI:10.1007/s10278-025-01416-7

Categories: Literature Watch

Exploring the role of multimodal [18F]F-PSMA-1007 PET/CT and multiparametric MRI data in predicting ISUP grading of primary prostate cancer

Mon, 2025-01-27 06:00

Eur J Nucl Med Mol Imaging. 2025 Jan 28. doi: 10.1007/s00259-025-07099-0. Online ahead of print.

ABSTRACT

PURPOSE: The study explores the role of multimodal imaging techniques, such as [18F]F-PSMA-1007 PET/CT and multiparametric MRI (mpMRI), in predicting the ISUP (International Society of Urological Pathology) grading of prostate cancer. The goal is to enhance diagnostic accuracy and improve clinical decision-making by integrating these advanced imaging modalities with clinical variables. In particular, the study investigates the application of few-shot learning to address the challenge of limited data in prostate cancer imaging, which is often a common issue in medical research.

METHODS: This study conducted a retrospective analysis of 341 prostate cancer patients enrolled between 2019 and 2023, with data collected from five imaging modalities: [18F]F-PSMA-1007 PET, CT, Diffusion Weighted Imaging (DWI), T2 Weighted Imaging (T2WI), and Apparent Diffusion Coefficient (ADC). The study compared the performance of five single-modality data sets, PET/CT dual-modality fusion data, mpMRI tri-modality fusion data, and five-modality fusion data within deep learning networks, analyzing how different modalities impact the accuracy of ISUP grading prediction. To address the issue of limited data, a few-shot deep learning network was employed, enabling training and cross-validation with only a small set of labeled samples. Additionally, the results were compared with those from preoperative biopsies and clinical prediction models to further assess the reliability of the experimental findings.

RESULTS: The experimental results demonstrate that the multimodal model (combining [18F]F-PSMA-1007 PET/CT and multiparametric MRI) significantly outperforms the other models in predicting ISUP grading of prostate cancer. Both the PET/CT dual-modality and mpMRI tri-modality models outperform the single-modality models, with comparable performance between the two fusion models. Furthermore, the experimental data confirm that the few-shot learning network introduced in this study provides reliable predictions, even with limited data.

CONCLUSION: This study highlights the potential of applying multimodal imaging techniques (such as PET/CT and mpMRI) in predicting ISUP grading of prostate cancer. The findings suggest that this integrated approach can enhance the accuracy of prostate cancer diagnosis and contribute to more personalized treatment planning. Furthermore, incorporating few-shot learning into the model development process allows for more robust predictions despite limited data, making this approach highly valuable in clinical settings with sparse data.

PMID:39871017 | DOI:10.1007/s00259-025-07099-0

Categories: Literature Watch

Deep learning-based Monte Carlo dose prediction for heavy-ion online adaptive radiotherapy and fast quality assurance: A feasibility study

Mon, 2025-01-27 06:00

Med Phys. 2025 Jan 27. doi: 10.1002/mp.17628. Online ahead of print.

ABSTRACT

BACKGROUND: Online adaptive radiotherapy (OART) and rapid quality assurance (QA) are essential for effective heavy ion therapy (HIT). However, there is a shortage of deep learning (DL) models and workflows for predicting Monte Carlo (MC) doses in such treatments.

PURPOSE: This study seeks to address this gap by developing a DL model for independent MC dose (MCDose) prediction, aiming to facilitate OART and rapid QA implementation for HIT.

METHODS AND MATERIALS: An MC dose prediction DL model for HIT, called CAM-CHD U-Net, was introduced, based on the GATE/Geant4 MC simulation platform. The proposed model improves upon the original CHD U-Net by adding a Channel Attention Mechanism (CAM). Two experiments were conducted, one with the CHD U-Net (Experiment 1) and another with the CAM-CHD U-Net (Experiment 2), involving data from 120 head and neck cancer patients. Using patient CT images, three-dimensional energy matrices, and ray masks as inputs, the model completes the entire MC dose prediction process within a few seconds.
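
A sketch of a channel attention mechanism in the squeeze-and-excitation style, applied to a 3D feature volume. The abstract does not specify CAM-CHD U-Net's exact attention design, so this is an illustrative variant.

```python
# Squeeze-and-excitation style channel attention over a 3D feature volume.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                     # squeeze: global average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                            # excitation: reweight channels

feat = torch.randn(1, 32, 16, 64, 64)                           # (B, C, D, H, W) feature volume
out = ChannelAttention3D(32)(feat)
```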

RESULTS: In Experiment 2, within the Planned Target Volume (PTV) region, the average gamma passing rate (3%/3 mm) between the predicted dose and the true MC dose reached 99.31%, and 96.48% across all body voxels. Experiment 2 demonstrated a 46.15% reduction in the mean absolute difference in D5 in organs at risk compared to Experiment 1.

CONCLUSIONS: By extracting relevant parameters of radiotherapy plans, the CAM-CHD U-Net model can directly and accurately predict independent MC dose, and has a high gamma passing rate with the ground truth dose (the dose obtained after a complete MC simulation). Our workflow enables the implementation of heavy ion OART, and the predicted MCDose can be used for rapid QA of HIT.

PMID:39871016 | DOI:10.1002/mp.17628

Categories: Literature Watch

A deep learning pipeline for three-dimensional brain-wide mapping of local neuronal ensembles in teravoxel light-sheet microscopy

Mon, 2025-01-27 06:00

Nat Methods. 2025 Jan 27. doi: 10.1038/s41592-024-02583-1. Online ahead of print.

ABSTRACT

Teravoxel-scale, cellular-resolution images of cleared rodent brains acquired with light-sheet fluorescence microscopy have transformed the way we study the brain. Realizing the potential of this technology requires computational pipelines that generalize across experimental protocols and map neuronal activity at the laminar and subpopulation-specific levels, beyond atlas-defined regions. Here, we present artificial intelligence-based cartography of ensembles (ACE), an end-to-end pipeline that employs three-dimensional deep learning segmentation models and advanced cluster-wise statistical algorithms to enable unbiased mapping of local neuronal activity and connectivity. Validation against state-of-the-art segmentation and detection methods on unseen datasets demonstrated ACE's high generalizability and performance. Applying ACE in two distinct neurobiological contexts, we discovered subregional effects missed by existing atlas-based analyses and showcase ACE's ability to reveal localized or laminar neuronal activity brain-wide. Our open-source pipeline enables whole-brain mapping of neuronal ensembles at a high level of precision across a wide range of neuroscientific applications.

PMID:39870865 | DOI:10.1038/s41592-024-02583-1

Categories: Literature Watch

MAI-TargetFisher: A proteome-wide drug target prediction method synergetically enhanced by artificial intelligence and physical modeling

Mon, 2025-01-27 06:00

Acta Pharmacol Sin. 2025 Jan 27. doi: 10.1038/s41401-024-01444-z. Online ahead of print.

ABSTRACT

Computational target identification plays a pivotal role in the drug development process. With significant advancements in deep learning methods for protein structure prediction, the structural coverage of the human proteome has increased substantially. This progress inspired the development of the first genome-wide small-molecule target scanning method. Our method aims to localize drug targets and detect potential off-target effects early in the drug discovery process, thereby improving the success rate of drug development. We have constructed a high-quality database of protein structures with annotated potential binding sites, covering 82% of the protein-coding genome. On the basis of this database, to enhance our search capabilities, we have integrated computational techniques, including both artificial intelligence-based and biophysical model-based methods. This integration led to the development of a target identification method called Multi-Algorithm Integrated Target Fisher (MAI-TargetFisher). MAI-TargetFisher leverages the complementary strengths of various methods while minimizing their weaknesses, enabling precise database navigation to generate a reliably ranked set of candidate targets for an active query molecule. Importantly, our work is the first comprehensive scan of protein surfaces across the entire human genome aimed at evaluating potential small-molecule binding sites on each protein. Through a series of evaluations on benchmarks and a target identification task, the results demonstrate the high hit rates and good reliability of our method under wet-experiment validation. We have also made available a freely accessible web server at https://bailab.siais.shanghaitech.edu.cn/mai-targetfisher for non-commercial use.

PMID:39870848 | DOI:10.1038/s41401-024-01444-z

Categories: Literature Watch

A simple 2D multibody model to better quantify the movement quality of anterior cruciate ligament patients during single leg hop

Mon, 2025-01-27 06:00

Acta Orthop Belg. 2024 Dec;90(4):603-611. doi: 10.52628/90.4.12600.

ABSTRACT

Patients with anterior cruciate ligament reconstruction frequently present asymmetries in sagittal-plane dynamics when performing single leg jumps, but assessing them is inaccessible to health-care professionals as it requires a complex and expensive system. With the development of deep learning methods for human pose detection, kinematics can be quantified from a video, and this study aimed to investigate whether a relatively simple 2D multibody model could predict relevant dynamic biomarkers from the kinematics using inverse dynamics. Six participants performed ten vertical and forward single leg hops while the kinematics and the ground reaction force (GRF) were captured using an optoelectronic system coupled with a force platform. The participants were modelled as a system of seven rigid bodies, and the sagittal-plane kinematics was used as model input. Model outputs were compared to values measured by the force platform using intraclass correlation coefficients (ICC) for seven outcomes: the peak vertical and antero-posterior GRFs, the impulses during the propulsion and landing phases, and the loading ratio. The model reliability is either good or excellent for all outcomes (0.845 ≤ ICC ≤ 0.987). The results are promising for deploying the model on video-based kinematic analyses. This could enable clinicians to assess their patients' jumps more effectively using video recordings made with widely available smartphones, even outside the laboratory.
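
As a sketch of the inverse-dynamics step, the whole-body ground reaction force follows from Newton's second law summed over the seven modelled segments; this is the textbook formulation rather than a detail given in the abstract:

```latex
\vec{F}_{\mathrm{GRF}}(t) = \sum_{i=1}^{7} m_i \left( \ddot{\vec{r}}_i(t) - \vec{g} \right)
```

where each segment's mass and center-of-mass acceleration come from the video kinematics and g is gravitational acceleration; the impulses are the time integrals of this force over the propulsion and landing phases.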

PMID:39869863 | DOI:10.52628/90.4.12600

Categories: Literature Watch

Alzheimer's disease image classification based on enhanced residual attention network

Mon, 2025-01-27 06:00

PLoS One. 2025 Jan 27;20(1):e0317376. doi: 10.1371/journal.pone.0317376. eCollection 2025.

ABSTRACT

With the increasing number of patients with Alzheimer's Disease (AD), the demand for early diagnosis and intervention is becoming increasingly urgent. Traditional detection methods for Alzheimer's disease mainly rely on clinical symptoms, biomarkers, and imaging examinations. However, these methods have limitations in early detection, such as strong subjectivity in diagnostic criteria, high detection costs, and high misdiagnosis rates. To address these issues, this study proposes a deep learning model for detecting Alzheimer's disease, called the Enhanced Residual Attention Network (ERAN), which classifies medical images. By combining residual learning, an attention mechanism, and soft thresholding, the feature representation ability and classification accuracy of the model are improved. The model's accuracy in detecting Alzheimer's disease reached 99.36%, with a loss of only 0.0264. The experimental results indicate that the Enhanced Residual Attention Network achieves excellent performance on the Alzheimer's disease test dataset, providing strong support for early diagnosis and treatment.
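
An illustrative sketch of combining residual learning with soft thresholding in one block, in the spirit of residual shrinkage networks; ERAN's exact block design is not given in the abstract, and the learnable per-channel threshold is an assumption.

```python
# Residual block with learnable per-channel soft thresholding (illustrative sketch).
import torch
import torch.nn as nn

class SoftThresholdResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.tau = nn.Parameter(torch.full((1, channels, 1, 1), 0.1))  # learnable threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)
        h = torch.sign(h) * torch.relu(h.abs() - self.tau)  # soft thresholding denoises features
        return torch.relu(x + h)                            # residual connection

out = SoftThresholdResidualBlock(16)(torch.randn(1, 16, 64, 64))
```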

PMID:39869613 | DOI:10.1371/journal.pone.0317376

Categories: Literature Watch

Dual-hybrid intrusion detection system to detect False Data Injection in smart grids

Mon, 2025-01-27 06:00

PLoS One. 2025 Jan 27;20(1):e0316536. doi: 10.1371/journal.pone.0316536. eCollection 2025.

ABSTRACT

Modernizing power systems into smart grids has introduced numerous benefits, including enhanced efficiency, reliability, and integration of renewable energy sources. However, this advancement has also increased vulnerability to cyber threats, particularly False Data Injection Attacks (FDIAs). Traditional Intrusion Detection Systems (IDS) often fall short in identifying sophisticated FDIAs due to their reliance on predefined rules and signatures. This paper addresses this gap by proposing a novel IDS that utilizes hybrid feature selection and deep learning classifiers to detect FDIAs in smart grids. The main objective is to enhance the accuracy and robustness of IDS in smart grids. The proposed methodology combines Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) for hybrid feature selection, ensuring the selection of the most relevant features for detecting FDIAs. Additionally, the IDS employs a hybrid deep learning classifier that integrates Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks to capture the spatial and temporal features of smart grid data. The dataset used for evaluation, the Industrial Control System (ICS) Cyber Attack Dataset (Power System Dataset), consists of various FDIA scenarios simulated in a smart grid environment. Experimental results demonstrate that the proposed IDS framework significantly outperforms traditional methods. The hybrid feature selection effectively reduces the dimensionality of the dataset, improving computational efficiency and detection performance. The hybrid deep learning classifier performs better in key metrics, including accuracy, recall, precision, and F-measure. Specifically, the proposed approach attains higher accuracy by accurately identifying true positives and minimizing false negatives, ensuring the reliable operation of smart grids. Recall is enhanced by capturing critical features relevant to all attack types, while precision is improved by reducing false positives, leading to fewer unnecessary interventions. The F-measure balances recall and precision, indicating a robust and reliable detection system. This study presents a practical dual-hybrid IDS framework for detecting FDIAs in smart grids, addressing the limitations of existing IDS techniques. Future research should focus on integrating real-world smart grid data for validation, developing adaptive learning mechanisms, exploring other bio-inspired optimization algorithms, and addressing real-time processing and scalability challenges in large-scale deployments.
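
A sketch of the hybrid classifier described above: 1D convolutions extract spatial patterns within each measurement window and an LSTM models the temporal structure. The feature count, window length, and layer sizes are assumptions.

```python
# CNN-LSTM sketch for windowed smart-grid measurements (illustrative dimensions).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features: int = 64, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)      # normal vs. FDIA (binary here)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> Conv1d expects (batch, features, time)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        _, (hn, _) = self.lstm(h)
        return self.head(hn[-1])

logits = CNNLSTM()(torch.randn(8, 50, 64))            # 8 windows of 50 time steps
```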

PMID:39869576 | DOI:10.1371/journal.pone.0316536

Categories: Literature Watch

Enhanced ResNet-50 for garbage classification: Feature fusion and depth-separable convolutions

Mon, 2025-01-27 06:00

PLoS One. 2025 Jan 27;20(1):e0317999. doi: 10.1371/journal.pone.0317999. eCollection 2025.

ABSTRACT

As people's material living standards continue to improve, the types and quantities of household garbage they generate rapidly increase. Therefore, it is urgent to develop a reasonable and effective method for garbage classification. This is important for resource recycling and environmental improvement and contributes to the sustainable development of production and the economy. However, existing deep learning-based garbage image classification models generally suffer from low classification accuracy, insufficient robustness, and slow detection speed due to the large number of model parameters. To this end, a new garbage image classification model is proposed, with the ResNet-50 network as the core architecture. Specifically, first, a redundancy-weighted feature fusion module is proposed, enabling the model to fully leverage valuable feature information, thereby improving its performance. At the same time, the module filters out redundant information from multi-scale features, reducing the number of model parameters. Second, the standard 3×3 convolutions in ResNet-50 are replaced with depth-separable convolutions, significantly improving the model's computational efficiency while preserving the feature extraction capability of the original convolutional structure. Finally, to address the issue of class imbalance, a weighting factor is added to the Focal Loss, aiming to mitigate the negative impact of class imbalance on model performance and enhance the model's robustness. Experimental results on the TrashNet dataset show that the proposed model effectively reduces the number of parameters, improves detection speed, and achieves an accuracy of 94.13%, surpassing the vast majority of existing deep learning-based waste image classification models, demonstrating its solid practical value.
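
A sketch of the class-weighted Focal Loss mentioned above, FL = -alpha_t (1 - p_t)^gamma log(p_t); the alpha weights and gamma value are illustrative, not the paper's.

```python
# Focal loss with a per-class weighting factor to counter class imbalance.
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                        alpha: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p of the true class
    pt = log_pt.exp()
    at = alpha[targets]                                         # per-class weighting factor
    return (-at * (1 - pt) ** gamma * log_pt).mean()

alpha = torch.tensor([1.0, 2.5, 1.2, 0.8, 1.6, 3.0])            # e.g., the 6 TrashNet classes
loss = weighted_focal_loss(torch.randn(4, 6), torch.tensor([0, 2, 5, 1]), alpha)
```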

PMID:39869568 | DOI:10.1371/journal.pone.0317999

Categories: Literature Watch

Deep learning based analysis of G3BP1 protein expression to predict the prognosis of nasopharyngeal carcinoma

Mon, 2025-01-27 06:00

PLoS One. 2025 Jan 27;20(1):e0315893. doi: 10.1371/journal.pone.0315893. eCollection 2025.

ABSTRACT

BACKGROUND: Ras-GTPase-activating protein (GAP)-binding protein 1 (G3BP1) has emerged as a pivotal oncogenic gene across various malignancies, notably including nasopharyngeal carcinoma (NPC). Automated image analysis tools for immunohistochemical (IHC) staining of particular proteins are highly beneficial, as they could reduce the burden on pathologists. Interestingly, no prior studies have examined G3BP1 IHC staining using digital pathology.

METHODS: Whole-slide images (WSIs) were meticulously collected and annotated by experienced pathologists. A model was designed and rigorously tested to yield quantitative data regarding staining intensity and extent. The collective output data were subjected to multivariate analysis, exploring their correlation with prognosis.
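
An illustrative composite staining score of the kind such a model outputs, with intensity graded 0-3 and extent as the positively stained fraction, compared against the 1.5 threshold reported below; the paper's exact scoring formula is not given in the abstract.

```python
# Hypothetical intensity-times-extent staining score (assumed formula).
def staining_score(intensity: int, extent_fraction: float) -> float:
    """intensity in {0, 1, 2, 3}; extent_fraction in [0, 1]."""
    return intensity * extent_fraction

high_g3bp1 = staining_score(intensity=3, extent_fraction=0.6) >= 1.5  # True -> G3BP1-positive
```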

RESULTS: The G3BP1 molecular marker scoring model was successfully established utilizing deep learning methodologies, with a calculated threshold staining score of 1.5. Notably, patients with NPC exhibiting higher expression levels of G3BP1 protein displayed significantly lower overall survival (OS) rates. Multivariate analysis further validated that positive expression of G3BP1 was an independent prognostic factor, indicating a poorer prognosis for NPC patients.

CONCLUSION: Computational pathology emerges as a transformative tool capable of substantially reducing the burden on pathologists while concurrently enhancing diagnostic sensitivity and specificity. The positive expression of G3BP1 protein serves as a valuable, independent biomarker, offering predictive insight into a poor prognosis for patients with NPC.

PMID:39869565 | DOI:10.1371/journal.pone.0315893

Categories: Literature Watch

Classification of CT scan and X-ray dataset based on deep learning and particle swarm optimization

Mon, 2025-01-27 06:00

PLoS One. 2025 Jan 27;20(1):e0317450. doi: 10.1371/journal.pone.0317450. eCollection 2025.

ABSTRACT

In 2019, the novel coronavirus swept the world, exposing the monitoring and early warning problems of the medical system. Computer-aided diagnosis models based on deep learning have good universality and can well alleviate these problems. However, traditional image processing methods may lead to high false positive rates, which is unacceptable in disease monitoring and early warning. This paper proposes a low false positive rate disease detection method based on COVID-19 lung images and establishes a two-stage optimization model. In the first stage, the model is trained using classical gradient descent, and relevant features are extracted; in the second stage, an objective function that minimizes the false positive rate is constructed to obtain a network model with high accuracy and low false positive rate. Therefore, the proposed method has the potential to effectively classify medical images. The proposed model was verified using a public COVID-19 radiology dataset and a public COVID-19 lung CT scan dataset. The results show that the model has made significant progress, with the false positive rate reduced to 11.3% and 7.5%, and the area under the ROC curve increased to 92.8% and 97.01%.
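
A simplified sketch of a second-stage objective of this kind: after ordinary training, choose the decision threshold that minimizes the false positive rate subject to a sensitivity floor. The paper optimizes the network itself against an FPR-minimizing objective, so this threshold search is only an illustration.

```python
# Pick the threshold minimizing FPR while keeping sensitivity above a floor.
import numpy as np

def pick_threshold(scores: np.ndarray, labels: np.ndarray, min_sensitivity: float = 0.9) -> float:
    best_t, best_fpr = 0.5, 1.0
    for t in np.linspace(0.01, 0.99, 99):
        pred = scores >= t
        tp, fn = (pred & (labels == 1)).sum(), (~pred & (labels == 1)).sum()
        fp, tn = (pred & (labels == 0)).sum(), (~pred & (labels == 0)).sum()
        sens, fpr = tp / (tp + fn), fp / (fp + tn)
        if sens >= min_sensitivity and fpr < best_fpr:
            best_t, best_fpr = t, fpr
    return best_t

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 500)
scores = np.clip(labels * 0.4 + rng.normal(0.3, 0.2, 500), 0, 1)  # synthetic classifier scores
print(pick_threshold(scores, labels))
```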

PMID:39869555 | DOI:10.1371/journal.pone.0317450

Categories: Literature Watch
