Deep learning

RAU-Net for precise lung cancer GTV segmentation in radiation therapy planning

Tue, 2025-04-29 06:00

Sci Rep. 2025 Apr 29;15(1):15075. doi: 10.1038/s41598-025-99137-y.

ABSTRACT

Lung cancer is one of the most lethal malignancies worldwide, and its treatment relies heavily on radiation therapy, with about 60-70% of patients requiring it. In radiation therapy planning, precise segmentation of the Gross Tumor Volume (GTV) in CT images is crucial. However, the low contrast between the tumor and surrounding tissues, the small size of the tumor area, and the high heterogeneity of its internal structure pose significant technical challenges for accurate segmentation. To address these limitations, we propose RAU-Net (ROI-Attention U-Net), a two-stage framework that combines target detection for Region of Interest (ROI) localization with a refined U-Net architecture incorporating attention mechanisms. Experiments on Lung Cancer GTV Dataset1 demonstrated that RAU-Net achieved a Dice coefficient of (77.13 ± 0.55)% and a sensitivity of (80.38 ± 0.63)% on the validation set, representing improvements of 4.1% and 6.25%, respectively, over the next best model, and significantly outperforming traditional U-Net and other advanced models. Similarly, on Lung Cancer GTV Dataset2, RAU-Net demonstrated remarkable performance, achieving the highest Dice coefficient of (73.95 ± 0.66)% and the second-highest sensitivity of (66.40 ± 0.92)%, showcasing its overall superiority over other models. Ablation studies further confirmed the crucial roles of the ROI extraction phase, attention mechanism, SE-Res module, and Combined Loss Function (CLoss) in enhancing segmentation performance. This framework provides a clinically viable solution for GTV delineation while offering methodological insights for medical image analysis.
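The abstract does not specify the composition of CLoss; a common choice for segmentation, sketched here purely as an assumption, is a weighted sum of soft Dice loss and binary cross-entropy:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary masks (pred: probabilities in [0, 1])."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy, averaged over pixels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of Dice and BCE; alpha balances region vs. pixel terms."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * bce_loss(pred, target)

# A perfect prediction drives the loss toward zero.
target = np.array([[0, 1], [1, 1]], dtype=float)
loss = combined_loss(target, target)
```

The Dice term counters the class imbalance from small tumor regions, while the BCE term supplies dense per-pixel gradients.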

PMID:40301479 | DOI:10.1038/s41598-025-99137-y

Categories: Literature Watch

High accuracy indoor positioning system using Galois field-based cryptography and hybrid deep learning

Tue, 2025-04-29 06:00

Sci Rep. 2025 Apr 29;15(1):15064. doi: 10.1038/s41598-025-97715-8.

ABSTRACT

In smart manufacturing, logistics, and other indoor settings where the Global Positioning System (GPS) does not work, indoor positioning systems (IPS) are essential. Due to environmental complexity, signal noise, and possible data manipulation, traditional IPS techniques struggle with accuracy, resilience, and security. The suggested indoor positioning system, which employs deep learning and fingerprinting, distinguishes online and offline phases. During the offline phase, mobile devices gather signal strength measurements and contextual data while traversing indoor environments via Wi-Fi, Bluetooth, and magnetometers. Signal processing techniques for noise reduction and data augmentation are applied, followed by fingerprint classification using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The online phase involves extracting features to improve the model's accuracy; these features can be signal-based, spatial-temporal, motion-based, or environmental. The Deep Spatial-Temporal Attention Network (Deep-STAN) is an innovative hybrid model for location classification that combines Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Long Short-Term Memory (LSTM) networks, and attention mechanisms. The model hyperparameters are fine-tuned using hybrid optimization to guarantee optimal performance. The work's main contribution is the incorporation of ECC, an effective encryption and decryption method for signal data, which is based on Galois fields. This cryptographic method is well suited for real-world applications since it guarantees low-latency operations while simultaneously improving data integrity and confidentiality. In addition, an S-box enhances the IPS's resilience and security, alongside QR codes for distinct location marking and blockchain technology for secure and immutable storage of positioning data. Moreover, the suggested model achieves an accuracy of 0.9937, precision of 0.987, sensitivity of 0.9898, and specificity of 0.9878, while when 80% of the data were used it achieved an accuracy of 0.9804, precision of 0.9722, sensitivity of 0.9859, and specificity of 0.9756. These outcomes show that the proposed system is stable and flexible enough to be used in indoor positioning applications.
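The paper's exact field parameters are not given; as a minimal illustration of the Galois-field arithmetic that ECC and S-box constructions rest on, here is carry-less multiplication in GF(2^8) with the well-known AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (the specific polynomial is an assumption, not taken from the paper):

```python
def gf_mul(a: int, b: int, mod: int = 0x11B) -> int:
    """Multiply two elements of GF(2^8), reducing by the AES
    polynomial x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^n) is XOR
        a <<= 1
        if a & 0x100:            # reduce once degree reaches 8
            a ^= mod
        b >>= 1
    return result

# Known test vector from the AES specification: {57} x {83} = {c1}
product = gf_mul(0x57, 0x83)
```

Because every operation is a shift or XOR on small integers, such arithmetic is constant-time friendly and cheap, which is what makes the low-latency claim plausible.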

PMID:40301441 | DOI:10.1038/s41598-025-97715-8

Categories: Literature Watch

A simple yet effective approach for predicting disease spread using mathematically-inspired diffusion-informed neural networks

Tue, 2025-04-29 06:00

Sci Rep. 2025 Apr 29;15(1):15000. doi: 10.1038/s41598-025-98398-x.

ABSTRACT

The COVID-19 outbreak has highlighted the importance of mathematical epidemic models, such as the Susceptible-Infected-Recovered (SIR) model, for understanding disease spread dynamics. However, enhancing their predictive accuracy complicates parameter estimation. To address this, we propose a novel model that integrates traditional mathematical modeling with deep learning, which has shown improved predictive power across diverse fields. The proposed model includes a simple artificial neural network (ANN) for regional disease incidence and a graph convolutional neural network (GCN) to capture spread to adjacent regions. GCNs are a recent class of deep learning algorithms designed to learn spatial relationships from graph-structured data. We applied the model to COVID-19 incidence in Spain to evaluate its performance. It achieved a 0.9679 correlation with the test data, outperforming previous models while using fewer parameters. By leveraging the efficient training methods of deep learning, the model simplifies parameter estimation while maintaining alignment with the mathematical framework to ensure interpretability. The proposed model may enable more robust and insightful analyses by combining the generalization power of deep learning with the theoretical foundations of mathematical models.
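The hybrid model builds on classical SIR dynamics; as background (this is the textbook model, not the paper's ANN/GCN components), one discrete-time Euler step of the SIR equations looks like:

```python
def sir_step(S, I, R, beta, gamma, N):
    """One Euler step of the SIR model:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    new_infections = beta * S * I / N
    recoveries = gamma * I
    return S - new_infections, I + new_infections - recoveries, R + recoveries

# Toy epidemic: 10 initial cases in a population of 10,000
S, I, R = 9990.0, 10.0, 0.0
N = S + I + R
for _ in range(100):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, N=N)
```

In the paper's framework, the transmission dynamics are what the ANN and GCN learn regionally instead of hand-fitting beta and gamma.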

PMID:40301427 | DOI:10.1038/s41598-025-98398-x

Categories: Literature Watch

A Dirichlet Distribution-Based Complex Ensemble Approach for Breast Cancer Classification from Ultrasound Images with Transfer Learning and Multiphase Spaced Repetition Method

Tue, 2025-04-29 06:00

J Imaging Inform Med. 2025 Apr 29. doi: 10.1007/s10278-025-01515-5. Online ahead of print.

ABSTRACT

Breast ultrasound is a useful and rapid diagnostic tool for the early detection of breast cancer. Artificial intelligence-supported computer-aided decision systems, which assist expert radiologists and clinicians, provide reliable and rapid results. Deep learning methods and techniques are widely used in the field of health for early diagnosis, abnormality detection, and disease diagnosis. Therefore, in this study, a deep ensemble learning model based on Dirichlet distribution using pre-trained transfer learning models for breast cancer classification from ultrasound images is proposed. In the study, experiments were conducted using the Breast Ultrasound Images Dataset (BUSI). The dataset, which had an imbalanced class structure, was balanced using data augmentation techniques. DenseNet201, InceptionV3, VGG16, and ResNet152 models were used for transfer learning with fivefold cross-validation. Statistical analyses, including the ANOVA test and Tukey HSD test, were applied to evaluate the model's performance and ensure the reliability of the results. Additionally, Grad-CAM (Gradient-weighted Class Activation Mapping) was used for explainable AI (XAI), providing visual explanations of the deep learning model's decision-making process. The spaced repetition method, commonly used to improve the success of learners in educational sciences, was adapted to artificial intelligence in this study. The results of training with transfer learning models were used as input for further training, and spaced repetition was applied using previously learned information. The use of the spaced repetition method led to increased model success and reduced learning times. The weights obtained from the trained models were input into an ensemble learning system based on Dirichlet distribution with different variations. The proposed model achieved 99.60% validation accuracy on the dataset, demonstrating its effectiveness in breast cancer classification.
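The abstract does not detail how the Dirichlet distribution combines the base models; one plausible realization, sketched here under that assumption, draws ensemble weights from a Dirichlet distribution (via normalized Gamma samples, using only the standard library) and averages the models' class probabilities:

```python
import random

def dirichlet_sample(alpha, rng):
    """Sample weights from Dirichlet(alpha) via normalized Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def ensemble_predict(model_probs, weights):
    """Weighted average of per-model class-probability vectors."""
    n_classes = len(model_probs[0])
    return [sum(w * p[c] for w, p in zip(weights, model_probs))
            for c in range(n_classes)]

rng = random.Random(42)
# Hypothetical softmax outputs of the four base models
# (DenseNet201, InceptionV3, VGG16, ResNet152) over three classes.
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1], [0.5, 0.4, 0.1]]
w = dirichlet_sample([1.0, 1.0, 1.0, 1.0], rng)
fused = ensemble_predict(probs, w)
```

Sampling several weight vectors and keeping the best-validating one is one way to explore "different variations" of the ensemble, as the abstract puts it.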

PMID:40301291 | DOI:10.1007/s10278-025-01515-5

Categories: Literature Watch

Correction: Accurate, automated classification of radiographic knee osteoarthritis severity using a novel method of deep learning: Plug-in modules

Tue, 2025-04-29 06:00

Knee Surg Relat Res. 2025 Apr 29;37(1):17. doi: 10.1186/s43019-025-00268-3.

NO ABSTRACT

PMID:40302005 | DOI:10.1186/s43019-025-00268-3

Categories: Literature Watch

LAGNet: better electron density prediction for LCAO-based data and drug-like substances

Tue, 2025-04-29 06:00

J Cheminform. 2025 Apr 29;17(1):65. doi: 10.1186/s13321-025-01010-7.

ABSTRACT

The electron density is an important object in quantum chemistry that is crucial for many downstream tasks in drug design. Recent deep learning approaches predict the electron density around a molecule from atom types and atom positions. Most of these methods use the plane-wave (PW) numerical method as a source of ground-truth training data. However, the drug design field mostly uses the Linear Combination of Atomic Orbitals (LCAO) method for computation of quantum properties. In this study, we focus on predicting the electron density for drug-like substances and training neural networks with LCAO-based datasets. Our experiments show that proper handling of the large amplitudes of core orbitals is crucial for training on LCAO-based data. We propose storing the electron density on standard grids instead of a uniform grid, which allowed us to reduce the number of probing points per molecule by 43 times and the storage space requirements by 8 times. Finally, we propose a novel architecture based on the DeepDFT model, which we name LAGNet; it is specifically designed and tuned for drug-like substances and the ∇²DFT dataset.

PMID:40301997 | DOI:10.1186/s13321-025-01010-7

Categories: Literature Watch

Deep learning radiopathomics predicts targeted therapy sensitivity in EGFR-mutant lung adenocarcinoma

Tue, 2025-04-29 06:00

J Transl Med. 2025 Apr 29;23(1):482. doi: 10.1186/s12967-025-06480-9.

ABSTRACT

BACKGROUND: Tyrosine kinase inhibitors (TKIs) represent the standard first-line treatment for patients with epidermal growth factor receptor (EGFR)-mutant lung adenocarcinoma. However, not all patients with EGFR mutations respond to TKIs. This study aims to develop a deep learning radiological-pathological-clinical (DLRPC) model that integrates computed tomography (CT) images, hematoxylin and eosin (H&E)-stained aspiration biopsy samples, and clinical data to predict the response of EGFR-mutant lung adenocarcinoma patients undergoing TKI treatment.

METHODS: We retrospectively analyzed data from 214 lung adenocarcinoma patients who received TKI treatment at two medical centers between September 2013 and June 2023. The DLRPC model leverages paired CT images, pathological images, and clinical data, incorporating a clinical-based attention mask to further explore cross-modality associations. To evaluate its diagnostic performance, we compared the DLRPC model against single-modality models and a decision-level fusion model based on Dempster-Shafer theory. Model performance metrics, including area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were used for evaluation. The DeLong test assessed statistically significant differences in AUC among models.

RESULTS: The DLRPC model demonstrated strong performance, achieving an AUC of 0.8424. It outperformed the single-modality models (AUC = 0.6894, 0.7753, and 0.8052 for the CT, pathology, and clinical models, respectively; P < 0.05). Additionally, the DLRPC model surpassed the decision-level fusion model (AUC = 0.8132, P < 0.05).

CONCLUSION: The DLRPC model effectively predicts the response of EGFR-mutant lung adenocarcinoma patients to TKIs, providing a promising tool for personalized treatment decisions in lung cancer management.

PMID:40301933 | DOI:10.1186/s12967-025-06480-9

Categories: Literature Watch

Vision transformer-based diagnosis of lumbar disc herniation with grad-CAM interpretability in CT imaging

Tue, 2025-04-29 06:00

BMC Musculoskelet Disord. 2025 Apr 29;26(1):419. doi: 10.1186/s12891-025-08602-2.

ABSTRACT

BACKGROUND: In this study, a computed tomography (CT)-vision transformer (ViT) framework for diagnosing lumbar disc herniation (LDH) was proposed for the first time, leveraging the multidirectional imaging advantages of CT together with the strengths of a ViT.

METHODS: The proposed ViT model was trained and validated on a dataset consisting of 983 patients, including 2100 CT images. We compared the performance of the ViT model with that of several convolutional neural networks (CNNs), including ResNet18, ResNet50, LeNet, AlexNet, and VGG16, across two primary tasks: vertebra localization and disc abnormality classification.

RESULTS: The integration of a ViT with CT imaging allowed the constructed model to capture the complex spatial relationships and global dependencies within scans, outperforming CNN models and achieving accuracies of 97.13% and 93.63% in terms of vertebra localization and disc abnormality classification, respectively. The performance of the model was further validated via gradient-weighted class activation mapping (Grad-CAM), providing interpretable insights into the regions of the CT scans that contributed to the model predictions.

CONCLUSION: This study demonstrated the potential of a ViT for diagnosing LDH using CT imaging. The results highlight the promising clinical applications of this approach, particularly for enhancing the diagnostic efficiency and transparency of medical AI systems.

PMID:40301802 | DOI:10.1186/s12891-025-08602-2

Categories: Literature Watch

3D tooth identification for forensic dentistry using deep learning

Tue, 2025-04-29 06:00

BMC Oral Health. 2025 Apr 30;25(1):665. doi: 10.1186/s12903-025-06017-y.

ABSTRACT

The classification of intraoral teeth structures is a critical component in modern dental analysis and forensic dentistry. Traditional methods, relying on 2D imaging, often suffer from limitations in accuracy and comprehensiveness due to the complex three-dimensional (3D) nature of dental anatomy. Although 3D imaging introduces the third dimension, offering a more comprehensive view, it also introduces additional challenges due to the irregular nature of the data. Our proposed approach addresses these issues with a novel method that extracts critical representative features from 3D tooth models and transforms them into a 2D image format suitable for detailed analysis. The 2D images are subsequently processed using a recurrent neural network (RNN) architecture, which effectively detects complex patterns essential for accurate classification, while its capability to manage sequential data is further augmented by fully connected layers specifically designed for this purpose. This innovative approach improves accuracy and diagnostic efficiency by reducing manual analysis and speeding up processing time, overcoming the challenges of 3D data irregularity and leveraging its detailed representation, thereby setting a new standard in dental identification.

PMID:40301795 | DOI:10.1186/s12903-025-06017-y

Categories: Literature Watch

PPI-Graphomer: enhanced protein-protein affinity prediction using pretrained and graph transformer models

Tue, 2025-04-29 06:00

BMC Bioinformatics. 2025 Apr 29;26(1):116. doi: 10.1186/s12859-025-06123-2.

ABSTRACT

Protein-protein interactions (PPIs) refer to the phenomenon of protein binding through various types of bonds to execute biological functions. These interactions are critical for understanding biological mechanisms and drug research. Among these, the protein binding interface is a critical region involved in protein-protein interactions, particularly the hotspot residues on it that play a key role in protein interactions. Current deep learning methods trained on large-scale data can characterize proteins to a certain extent, but they often struggle to adequately capture information about protein binding interfaces. To address this limitation, we propose the PPI-Graphomer module, which integrates pretrained features from large-scale language models and inverse folding models. This approach enhances the characterization of protein binding interfaces by defining edge relationships and interface masks on the basis of molecular interaction information. Our model outperforms existing methods across multiple benchmark datasets and demonstrates strong generalization capabilities.

PMID:40301762 | DOI:10.1186/s12859-025-06123-2

Categories: Literature Watch

Application of deep learning reconstruction combined with time-resolved post-processing method to improve image quality in CTA derived from low-dose cerebral CT perfusion data

Tue, 2025-04-29 06:00

BMC Med Imaging. 2025 Apr 29;25(1):139. doi: 10.1186/s12880-025-01623-2.

ABSTRACT

BACKGROUND: To assess the effect of the combination of deep learning reconstruction (DLR) and time-resolved maximum intensity projection (tMIP) or time-resolved average (tAve) post-processing method on image quality of CTA derived from low-dose cerebral CTP.

METHODS: Thirty patients who underwent regular-dose CTP (Group A) and another thirty who underwent low-dose CTP (Group B) were retrospectively enrolled. Group A images were reconstructed with hybrid iterative reconstruction (R-HIR). In Group B, four CTA image datasets were generated: L-HIR, L-DLR, L-DLRtMIP, and L-DLRtAve. CT attenuation, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and subjective image quality were calculated and compared. The Intraclass Correlation coefficient (ICC) between CTA and MRA was calculated for the two subgroups.

RESULTS: The low-dose group achieved a 33% reduction in radiation dose for the single peak arterial phase and an 18% reduction in total compared to the regular-dose group (single phase: 0.12 mSv vs 0.18 mSv; total: 1.91 mSv vs 2.33 mSv). L-DLRtMIP demonstrated higher CT values in vessels than R-HIR (all P < 0.05). The CNR of vessels in L-HIR was statistically inferior to R-HIR (all P < 0.001). There was no significant difference in image noise or vessel CNR between L-DLR and R-HIR (all P > 0.05, except P = 0.05 for the CNR of the ICAs, 77.19 ± 21.64 vs 73.54 ± 37.03). However, L-DLRtMIP and L-DLRtAve presented lower image noise, higher CNR (all P < 0.05), and higher subjective scores (all P < 0.001) for vessels than R-HIR. The diagnostic accuracy in Group B was excellent (ICC = 0.944).

CONCLUSION: Combining DLR with tMIP or tAve allows a reduction in radiation dose of about 33% in the single peak arterial phase and 18% in total for CTP scanning, while further improving the image quality of CTA derived from CTP data compared to HIR.

PMID:40301751 | DOI:10.1186/s12880-025-01623-2

Categories: Literature Watch

Brain tumor detection empowered with ensemble deep learning approaches from MRI scan images

Tue, 2025-04-29 06:00

Sci Rep. 2025 Apr 29;15(1):15002. doi: 10.1038/s41598-025-99576-7.

ABSTRACT

Brain tumor detection is essential for early diagnosis and successful treatment, both of which can significantly enhance patient outcomes. This study investigates a potent artificial intelligence (AI) technique to evaluate brain MRI scans and categorize them into four types: pituitary, meningioma, glioma, and normal. Although AI has been used in the past to detect brain tumors, current techniques still have issues with accuracy and dependability. To address this, our study presents a novel AI technique that combines two distinct deep learning models. When combined, these models improve accuracy and yield more trustworthy outcomes than when used separately. Key performance metrics, including accuracy, precision, and dependability, are used to assess the system once it has been trained on MRI scan images. Our results show that this combined AI approach works better than individual models, particularly in identifying different types of brain tumors. Specifically, the InceptionV3 + Xception combination achieved an accuracy of 98.50% in training and 98.30% in validation. These results support the application of advanced AI techniques in medical imaging and demonstrate that using multiple AI models concurrently can enhance brain tumor detection.

PMID:40301625 | DOI:10.1038/s41598-025-99576-7

Categories: Literature Watch

BiaPy: accessible deep learning on bioimages

Tue, 2025-04-29 06:00

Nat Methods. 2025 Apr 29. doi: 10.1038/s41592-025-02699-y. Online ahead of print.

NO ABSTRACT

PMID:40301624 | DOI:10.1038/s41592-025-02699-y

Categories: Literature Watch

Automated radiography assessment of ankle joint instability using deep learning

Tue, 2025-04-29 06:00

Sci Rep. 2025 Apr 29;15(1):15012. doi: 10.1038/s41598-025-99620-6.

ABSTRACT

This study developed and evaluated a deep learning (DL)-based system for automatically measuring talar tilt and anterior talar translation on weight-bearing ankle radiographs, which are key parameters in diagnosing ankle joint instability. The system was trained and tested using a dataset comprising 1,452 anteroposterior radiographs (mean age ± standard deviation [SD]: 43.70 ± 22.60 years; age range: 6-87 years; males: 733, females: 719) and 2,984 lateral radiographs (mean age ± SD: 44.37 ± 22.72 years; age range: 6-92 years; males: 1,533, females: 1,451) from a total of 4,000 patients, provided by the National Information Society Agency. Patients who underwent joint fusion, bone grafting, or joint replacement were excluded. Statistical analyses, including correlation coefficient analysis and Bland-Altman plots, were conducted to assess the agreement and consistency between the DL-calculated and clinician-assessed measurements. The system demonstrated high accuracy, with strong correlations for talar tilt (Pearson correlation coefficient [r] = 0.798 (p < .001); intraclass correlation coefficient [ICC] = 0.797 [95% CI 0.74, 0.82]; concordance correlation coefficient [CCC] = 0.796 [95% CI 0.69, 0.85]; mean absolute error [MAE] = 1.088° [95% CI 0.06°, 1.14°]; mean square error [MSE] = 1.780° [95% CI 1.69°, 2.73°]; root mean square error [RMSE] = 1.374° [95% CI 1.31°, 1.44°]; 95% limits of agreement [LoA]: −2.3° to 2.0°) and anterior talar translation (r = .862 (p < .001); ICC = 0.861 [95% CI 0.84, 0.89]; CCC = 0.861 [95% CI 0.86, 0.89]; MAE = 0.468 mm [95% CI 0.42 mm, 0.51 mm]; MSE = 0.551 mm [95% CI 0.49 mm, 0.61 mm]; RMSE = 0.742 mm [95% CI 0.69 mm, 0.79 mm]; 95% LoA: −1.3 mm to 1.5 mm). These results demonstrate the system's capability to provide objective and reproducible measurements, supporting clinical interpretation of ankle instability in routine radiographic practice.
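The Bland-Altman limits of agreement reported above are simply the mean difference ± 1.96 SD of the paired differences; a small sketch with hypothetical talar-tilt measurements (the values below are illustrative, not from the study):

```python
import statistics

def bland_altman_loa(a, b):
    """95% limits of agreement between two raters:
    mean difference +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical talar-tilt angles (degrees): DL system vs. clinician
model =     [4.1, 6.8, 2.9, 9.5, 5.2, 7.7]
clinician = [4.5, 6.2, 3.1, 9.0, 5.6, 7.4]
lo, hi = bland_altman_loa(model, clinician)
```

If roughly 95% of the paired differences fall inside (lo, hi) and that interval is clinically acceptable, the two measurement methods can be used interchangeably.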

PMID:40301608 | DOI:10.1038/s41598-025-99620-6

Categories: Literature Watch

Transformer-based deep learning enables improved B-cell epitope prediction in parasitic pathogens: A proof-of-concept study on Fasciola hepatica

Tue, 2025-04-29 06:00

PLoS Negl Trop Dis. 2025 Apr 29;19(4):e0012985. doi: 10.1371/journal.pntd.0012985. Online ahead of print.

ABSTRACT

BACKGROUND: The identification of B-cell epitopes (BCEs) is fundamental to advancing epitope-based vaccine design, therapeutic antibody development, and diagnostics, such as in neglected tropical diseases caused by parasitic pathogens. However, the structural complexity of parasite antigens and the high cost of experimental validation present certain challenges. Advances in Artificial Intelligence (AI)-driven protein engineering, particularly through machine learning and deep learning, offer efficient solutions to enhance prediction accuracy and reduce experimental costs.

METHODOLOGY/PRINCIPAL FINDINGS: Here, we present deepBCE-Parasite, a Transformer-based deep learning model designed to predict linear BCEs from peptide sequences. Leveraging a state-of-the-art self-attention mechanism, the model achieved remarkable predictive performance, with an accuracy of approximately 81% and an AUC of 0.90 in both 10-fold cross-validation and independent testing. Comparative analyses against 12 handcrafted features and four conventional machine learning algorithms (GNB, SVM, RF, and LGBM) highlighted the superior predictive power of the model. As a case study, deepBCE-Parasite predicted eight BCEs from the leucine aminopeptidase (LAP) protein in Fasciola hepatica proteomic data. Dot-blot immunoassays confirmed the specific binding of seven synthetic peptides to positive sera, validating their IgG reactivity and demonstrating the model's efficacy in BCE prediction.

CONCLUSIONS/SIGNIFICANCE: deepBCE-Parasite demonstrates excellent performance in predicting BCEs across diverse parasitic pathogens, offering a valuable tool for advancing the design of epitope-based vaccines, antibodies, and diagnostic applications in parasitology.

PMID:40300022 | DOI:10.1371/journal.pntd.0012985

Categories: Literature Watch

Indoor fire and smoke detection based on optimized YOLOv5

Tue, 2025-04-29 06:00

PLoS One. 2025 Apr 29;20(4):e0322052. doi: 10.1371/journal.pone.0322052. eCollection 2025.

ABSTRACT

Ensuring safety and safeguarding indoor property require reliable fire detection methods. Traditional detection techniques that use smoke, heat, or fire sensors often fail due to false positives and slow response times. Existing deep learning-based object detectors fall short of the accuracy required in indoor settings and for real-time tracking, given the dynamic nature of fire and smoke. This study aimed to address these challenges in indoor fire and smoke detection. It presents a hyperparameter-optimized YOLOv5 (HPO-YOLOv5) model tuned by a genetic algorithm. To cover all prospective scenarios, we created a novel dataset comprising indoor fire and smoke images. The dataset contains 5,000 images, split into training, validation, and testing samples at a ratio of 80:10:10. The study also used the Grad-CAM technique to provide visual explanations for model predictions, ensuring interpretability and transparency. The research combined YOLOv5 with DeepSORT (which uses deep-learning features to improve the tracking of objects over time) to provide real-time monitoring of fire progression, allowing notification of actual fire hazards. With a mean average precision (mAP@0.5) of 92.1%, the HPO-YOLOv5 model outperformed state-of-the-art models, including Faster R-CNN, YOLOv5, YOLOv7, and YOLOv8, achieving a 2.4% improvement in mAP@0.5 over the original YOLOv5 baseline. The research lays the foundation for future developments in fire hazard detection technology that is dependable and effective in indoor scenarios.
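The abstract does not describe the genetic algorithm's operators; a minimal GA for hyperparameter search, sketched here over a toy objective standing in for validation loss (the operators, bounds, and parameter names are all assumptions), might look like:

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal GA: truncation selection, uniform crossover, Gaussian mutation.
    bounds: list of (lo, hi) per hyperparameter; minimizes fitness."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]   # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            for i, (lo, hi) in enumerate(bounds):               # bounded mutation
                if rng.random() < 0.2:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy objective standing in for (1 - mAP) as a function of (lr, momentum);
# its optimum is at lr = 0.01, momentum = 0.9.
def objective(p):
    lr, momentum = p
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2

best = genetic_search(objective, [(0.0, 0.1), (0.5, 1.0)])
```

In practice, each fitness evaluation would be a short YOLOv5 training run scored on validation mAP, which is why GA-based tuning is usually run on reduced schedules.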

PMID:40299940 | DOI:10.1371/journal.pone.0322052

Categories: Literature Watch

Optimizing chemotherapeutic targets in non-small cell lung cancer with transfer learning for precision medicine

Tue, 2025-04-29 06:00

PLoS One. 2025 Apr 29;20(4):e0319499. doi: 10.1371/journal.pone.0319499. eCollection 2025.

ABSTRACT

Non-small cell lung cancer (NSCLC) accounts for the majority of lung cancer cases, making it one of the most fatal diseases worldwide. Accurately predicting NSCLC patients' survival outcomes remains a significant challenge despite advancements in treatment. The difficulty of developing effective drug therapies, which are frequently hampered by severe side effects, drug resistance, and limited effectiveness across diverse patient populations, highlights the complexity of NSCLC. Machine learning (ML) and deep learning (DL) models are beginning to transform NSCLC drug discovery. These methodologies enable the identification of drug targets and the development of personalized treatment strategies that may improve survival outcomes for NSCLC patients. In this paper, we present a drug discovery model for the identification of therapeutic targets using cutting-edge feature extraction and transfer learning. We use a hybrid UNet transformer to extract features from drug and protein sequences, enabling the extraction of deep features that address the issue of false alarms. For dimensionality reduction, the modified Rime optimization (MRO) algorithm is used to select the best among multiple features. In addition, we design a deep transfer learning (DTransL) model to boost drug discovery accuracy for NSCLC therapeutic targets. Benchmark datasets such as Davis, KIBA, and Binding-DB are used to validate the proposed model. Results show that the MRO+DTransL model outperforms existing state-of-the-art models. On the Davis dataset, it achieved an accuracy of 98.398%, outperforming the LSTM model by 9.742%. It reached 98.264% and 97.344% on the KIBA and Binding-DB datasets, respectively, improvements of 8.608% and 8.957% over baseline models.

PMID:40299923 | DOI:10.1371/journal.pone.0319499

Categories: Literature Watch

A deep-learning algorithm using real-time collected intraoperative vital sign signals for predicting acute kidney injury after major non-cardiac surgeries: A modelling study

Tue, 2025-04-29 06:00

PLoS Med. 2025 Apr 29;22(4):e1004566. doi: 10.1371/journal.pmed.1004566. eCollection 2025 Apr.

ABSTRACT

BACKGROUND: Postoperative acute kidney injury (PO-AKI) prediction models for non-cardiac major surgeries typically rely solely on preoperative clinical characteristics.

METHODS AND FINDINGS: In this study, we developed and externally validated a deep-learning-based model that integrates preoperative data with minute-scale intraoperative vital signs to predict PO-AKI. Using data from three hospitals, we constructed a convolutional neural network-based EfficientNet framework to analyze intraoperative data and created an ensemble model incorporating 103 baseline variables of demographics, medication use, comorbidities, and surgery-related characteristics. Model performance was compared with the conventional SPARK model from a previous study. Among 110,696 patients, 51,345 were included in the development cohort, and 59,351 in the external validation cohorts. The median age of the cohorts was 60, 61, and 66 years, respectively, with males comprising 54.9%, 50.8%, and 42.7% of each cohort. The intraoperative vital sign-based model demonstrated comparable predictive power (AUROC (Area Under the Receiver Operating Characteristic Curve): discovery cohort 0.707, validation cohort 0.637 and 0.607) to preoperative-only models (AUROC: discovery cohort 0.724, validation cohort 0.697 and 0.745). Adding 11 key clinical variables (e.g., age, sex, estimated glomerular filtration rate (eGFR), albuminuria, hyponatremia, hypoalbuminemia, anemia, diabetes, renin-angiotensin-aldosterone inhibitors, emergency surgery, and the estimated surgery time) improved the model's performance (AUROC: discovery cohort 0.765, validation cohort 0.716 and 0.761). The ensembled deep-learning model integrating both preoperative and intraoperative data achieved the highest predictive accuracy (AUROC: discovery cohort 0.795, validation cohort 0.762 and 0.786), outperforming the conventional SPARK model. The retrospective design in a single-nation cohort with non-inclusion of some potential AKI-associated variables is the main limitation of this study.

CONCLUSIONS: This deep-learning-based PO-AKI risk prediction model provides a comprehensive approach to evaluating PO-AKI risk prediction by combining preoperative clinical data with real-time intraoperative vital sign information, offering enhanced predictive performance for better clinical decision-making.

PMID:40299885 | DOI:10.1371/journal.pmed.1004566

Categories: Literature Watch

Prediction of stress-strain behavior of rock materials under biaxial compression using a deep learning approach

Tue, 2025-04-29 06:00

PLoS One. 2025 Apr 29;20(4):e0321478. doi: 10.1371/journal.pone.0321478. eCollection 2025.

ABSTRACT

Deep learning has significantly advanced the prediction of stress-strain curves. However, because of the complex mechanical properties of rock materials, existing deep learning methods suffer from insufficient accuracy when predicting the stress-strain curves of such materials. This paper proposes a deep learning method based on a long short-term memory autoencoder (LSTM-AE) for predicting stress-strain curves of rock materials in discrete element numerical simulations. The LSTM-AE approach uses the LSTM network to construct both the encoder and decoder, where the encoder extracts features from the input data and the decoder generates the target sequence for prediction. The mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2) between the predicted and true values are used as the evaluation metrics. The proposed LSTM-AE network is compared with the LSTM network, recurrent neural network (RNN), BP neural network (BPNN), and XGBoost model. The results indicate that the proposed LSTM-AE network outperforms LSTM, RNN, BPNN, and XGBoost in accuracy. Furthermore, the robustness of the LSTM-AE network is confirmed by predicting 10 sets of special samples. However, the scalability of the LSTM-AE network to large datasets and its applicability to laboratory datasets need further verification. Nevertheless, this study provides a valuable reference for improving the prediction accuracy of stress-strain curves in rock materials.
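The four evaluation metrics named in the abstract (MSE, RMSE, MAE, R2) have standard definitions; as a minimal numpy sketch (the function name `curve_metrics` is hypothetical, not from the paper), they can be computed over a predicted stress-strain curve as:

```python
import numpy as np

def curve_metrics(y_true, y_pred):
    # MSE, RMSE, MAE, and R2 between true and predicted stress values
    # sampled along a stress-strain curve.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))                      # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mse, rmse, mae, r2
```

R2 approaches 1 as the predicted curve matches the true curve, while a constant prediction at the mean stress gives R2 = 0.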

PMID:40299820 | DOI:10.1371/journal.pone.0321478

Categories: Literature Watch

Continuous Joint Kinematics Prediction using GAT-LSTM Framework Based on Muscle Synergy and Sparse sEMG

Tue, 2025-04-29 06:00

IEEE Trans Neural Syst Rehabil Eng. 2025 Apr 29;PP. doi: 10.1109/TNSRE.2025.3565305. Online ahead of print.

ABSTRACT

Surface electromyography (sEMG) signals hold significant potential for motion prediction, with promising applications in areas such as rehabilitation, sports training, and human-computer interaction. However, achieving robust prediction accuracy remains a critical challenge, as even minor inaccuracies in motion prediction can severely affect the reliability and practical utility of sEMG-based systems. In this study, we propose a novel framework, muscle synergy (MS)-based graph attention networks (MSGAT-LSTM), specifically designed to address the challenges of continuous motion prediction using sparse sEMG electrodes. By leveraging MS theory and graph-based learning, the framework effectively compensates for the limitations of sparse sEMG setups and achieves significant improvements in prediction accuracy compared to existing methods. Based on MS theory, the framework computes the cosine similarity between sEMG signal features from different muscles to assign edge weights, effectively capturing their coordinated contributions to motion. The proposed framework integrates GAT for relational feature learning with LSTM networks for temporal dependency modeling, leveraging the strengths of both architectures. Experimental results on the public dataset Ninapro DB2 and a self-collected dataset demonstrate that MSGAT-LSTM achieves superior performance, in terms of RMSE and R2, compared to state-of-the-art methods, including the muscle anatomy and MS-based 3DCNN, GCN-LSTM, and classic models such as CNN-LSTM, CNN, and LSTM. Furthermore, experimental results reveal that incorporating MS into the GCN reduces training time by 13% compared to GCN-LSTM, significantly enhancing computational efficiency and scalability. This study highlights the potential of integrating MS theory with graph-based deep learning methods for sEMG-based motion prediction.
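The edge-weighting step described above, cosine similarity between per-muscle sEMG feature vectors, can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' implementation; the function name `cosine_edge_weights` and the feature layout are assumptions.

```python
import numpy as np

def cosine_edge_weights(features, eps=1e-12):
    # features: (n_muscles, n_features) array, one sEMG feature
    # vector per muscle/electrode.
    # Returns an (n_muscles, n_muscles) symmetric matrix of cosine
    # similarities, usable as graph edge weights for a GAT/GCN.
    features = np.asarray(features, dtype=float)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, eps, None)  # row-normalize
    return unit @ unit.T
```

Muscles whose feature vectors point in similar directions (i.e., that co-activate within a synergy) receive edge weights near 1, while uncorrelated muscles receive weights near 0, so the attention layers can focus on coordinated muscle groups.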

PMID:40299730 | DOI:10.1109/TNSRE.2025.3565305

Categories: Literature Watch
