Deep learning
Development of model for identifying homologous recombination deficiency (HRD) status of ovarian cancer with deep learning on whole slide images
J Transl Med. 2025 Mar 4;23(1):267. doi: 10.1186/s12967-025-06234-7.
ABSTRACT
BACKGROUND: Homologous recombination deficiency (HRD) refers to the dysfunction of homologous recombination repair (HRR) at the cellular level. Assessment of HRD status is of significant importance for treatment planning, efficacy evaluation, and prognosis prediction in patients with ovarian cancer.
OBJECTIVES: This study aimed to construct a deep learning-based classifier for identifying tumor regions from whole slide images (WSIs) and stratify the HRD status of patients with ovarian cancer (OC).
METHODS: The deep learning models were trained on 205 H&E-stained sections from 205 ovarian cancer patients, of whom 64 had HRD status and 141 had homologous recombination proficiency (HRP) status, collected at two institutions: Memorial Sloan Kettering Cancer Center (MSKCC) and Zhongda Hospital, Southeast University. The framework comprises tumor region identification by UNet++ and construction of an ovarian cancer subtype classifier. Following the EasyEnsemble strategy, we divided the HRP patients into three subsets. Each subset of HRP patients was combined with the HRD patients to establish three new training groups for subsequent model construction. The three resulting models were integrated into a single model, named the Ensemble Model.
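The EasyEnsemble-style resampling described above can be sketched as follows. This is a generic illustration, not the authors' code; the patient identifiers are invented stand-ins, and only the class counts (64 HRD vs. 141 HRP) come from the abstract.

```python
import random

def easyensemble_groups(minority, majority, n_groups=3, seed=42):
    """Split the majority class into n_groups disjoint subsets and
    pair each subset with the full minority class."""
    rng = random.Random(seed)
    shuffled = majority[:]
    rng.shuffle(shuffled)
    subsets = [shuffled[i::n_groups] for i in range(n_groups)]
    return [minority + subset for subset in subsets]

# 64 HRD vs. 141 HRP patients, as in the study
hrd_ids = [f"HRD_{i}" for i in range(64)]
hrp_ids = [f"HRP_{i}" for i in range(141)]
groups = easyensemble_groups(hrd_ids, hrp_ids)
print([len(g) for g in groups])  # [111, 111, 111]: three roughly balanced groups
```

Each of the three training groups keeps every minority-class patient while seeing only a third of the majority class, which is the usual way EasyEnsemble counters class imbalance before the per-group models are fused.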
RESULTS: The UNet++ algorithm segmented tumor regions with 81.8% accuracy, 85.9% recall, an 83.8% dice score, and 68.3% IoU. The AUC of the Ensemble Model was 0.769 (precision = 0.800, recall = 0.727, F1-score = 0.762). The most discriminative features between HRD and HRP were S_mean_dln_obtuse_ratio, S_mean_dln_acute_ratio, and mean_Graph_T-S_Betweenness_normed.
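For reference, the dice score and IoU reported above follow the standard overlap definitions, which can be computed from binary segmentation masks like this (a generic sketch with toy masks, not the authors' evaluation code):

```python
def dice_and_iou(pred, truth):
    """Dice score and IoU for two binary masks given as sets of pixel coordinates."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    iou = inter / len(pred | truth) if (pred | truth) else 1.0
    return dice, iou

# toy 2x2 example: two of three predicted pixels overlap the ground truth
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 1), (1, 0), (1, 1)}
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 3), round(iou, 3))  # 0.667 0.5
```

Note that dice is always at least as large as IoU for the same masks, which is consistent with the 83.8% vs. 68.3% pair reported above.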
CONCLUSIONS: The models we constructed enable accurate discrimination between tumor and non-tumor tissue in ovarian cancer as well as prediction of HRD status for patients with ovarian cancer.
PMID:40038690 | DOI:10.1186/s12967-025-06234-7
Automated classification of chest X-rays: a deep learning approach with attention mechanisms
BMC Med Imaging. 2025 Mar 4;25(1):71. doi: 10.1186/s12880-025-01604-5.
ABSTRACT
BACKGROUND: Pulmonary diseases such as COVID-19 and pneumonia are life-threatening conditions that require prompt and accurate diagnosis for effective treatment. Chest X-ray (CXR) has become the most common method for detecting pulmonary diseases such as COVID-19, pneumonia, and lung opacity due to its availability, cost-effectiveness, and ability to facilitate comparative analysis. However, the interpretation of CXRs is a challenging task.
METHODS: This study presents an automated deep learning (DL) model that outperforms multiple state-of-the-art methods in diagnosing COVID-19, Lung Opacity, and Viral Pneumonia. Using a dataset of 21,165 CXRs, the proposed framework introduces a seamless combination of the Vision Transformer (ViT) for capturing long-range dependencies, DenseNet201 for powerful feature extraction, and global average pooling (GAP) for retaining critical spatial details. This combination results in a robust classification system, achieving remarkable accuracy.
RESULTS: The proposed methodology delivers outstanding results across all categories: achieving 99.4% accuracy and an F1-score of 98.43% for COVID-19, 96.45% accuracy and an F1-score of 93.64% for Lung Opacity, 99.63% accuracy and an F1-score of 97.05% for Viral Pneumonia, and 95.97% accuracy with an F1-score of 95.87% for Normal subjects.
CONCLUSION: The proposed framework achieves a remarkable overall accuracy of 97.87%, surpassing several state-of-the-art methods with reproducible and objective outcomes. To ensure robustness and minimize variability in train-test splits, our study employs five-fold cross-validation, providing reliable and consistent performance evaluation. For transparency and to facilitate future comparisons, the specific training and testing splits have been made publicly accessible. Furthermore, Grad-CAM-based visualizations are integrated to enhance the interpretability of the model, offering valuable insights into its decision-making process. This innovative framework not only boosts classification accuracy but also sets a new benchmark in CXR-based disease diagnosis.
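The five-fold cross-validation protocol mentioned above can be reproduced with a standard stratified split, which preserves class proportions in every fold. This is a minimal stdlib sketch; the four class labels are stand-ins for the study's CXR categories, and the real pipeline would of course split image files rather than toy labels.

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Yield k (train_idx, test_idx) splits that preserve class proportions."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        # deal each class's indices round-robin across the folds
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(idx for j, f in enumerate(folds) if j != i for idx in f)
        yield train, test

# 20 samples across the four CXR categories
labels = ["covid", "opacity", "pneumonia", "normal"] * 5
splits = list(stratified_kfold(labels))
print(len(splits), len(splits[0][1]))  # 5 folds, 4 test samples each
```

Averaging the per-fold metrics over all five splits is what gives the "reliable and consistent performance evaluation" the abstract claims.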
PMID:40038588 | DOI:10.1186/s12880-025-01604-5
Reconstruction of diploid higher-order human 3D genome interactions from noisy Pore-C data using Dip3D
Nat Struct Mol Biol. 2025 Mar 4. doi: 10.1038/s41594-025-01512-w. Online ahead of print.
ABSTRACT
Differential high-order chromatin interactions between homologous chromosomes affect many biological processes. Traditional chromatin conformation capture genome analysis methods mainly identify two-way interactions and cannot provide comprehensive haplotype information, especially for low-heterozygosity organisms such as humans. Here, we present a pipeline of methods to delineate diploid high-order chromatin interactions from noisy Pore-C outputs. We trained a previously published single-nucleotide variant (SNV)-calling deep learning model, Clair3, on Pore-C data to achieve superior SNV calling, applied a filtering strategy to tag reads for haplotypes, and established a haplotype imputation strategy for high-order concatemers. Learning the haplotype characteristics of high-order concatemers from the high-heterozygosity mouse genome allowed us to devise a progressive haplotype imputation strategy, which improved the haplotype-informative Pore-C contact rate 14.1-fold to 76% in the HG001 cell line. Overall, the diploid three-dimensional (3D) genome interactions we derived using Dip3D surpassed conventional methods in noise reduction and contact distribution uniformity, with better haplotype-informative contact density and genomic coverage rates. Dip3D identified previously unresolved haplotype high-order interactions and clarified their relationship with allele-specific expression, as in X-chromosome inactivation. These results lead us to conclude that Dip3D is a robust pipeline for high-quality reconstruction of diploid high-order 3D genome interactions.
PMID:40038455 | DOI:10.1038/s41594-025-01512-w
Precision diagnosis of burn injuries using imaging and predictive modeling for clinical applications
Sci Rep. 2025 Mar 4;15(1):7604. doi: 10.1038/s41598-025-92096-4.
ABSTRACT
Burn injuries represent a serious clinical problem because their diagnosis and assessment are highly complex. This paper proposes a methodology that combines advanced medical imaging with predictive modeling to improve burn injury assessment. The proposed framework makes use of the Adaptive Complex Independent Components Analysis (ACICA) and Reference Region (TBSA) methods in conjunction with deep learning techniques for precise estimation of burn depth and Total Body Surface Area (TBSA) analysis. It allows for estimation of burn depth with high accuracy, calculation of TBSA, and non-invasive analysis with 96.7% accuracy using an RNN model. Extensive experimentation on DCE-LUV samples validates enhanced diagnostic precision and detailed texture analysis. These technologies provide nuanced insights into burn severity, improving diagnostic accuracy and treatment planning. Our results demonstrate the potential of these methods to revolutionize burn care and optimize patient outcomes.
PMID:40038450 | DOI:10.1038/s41598-025-92096-4
Evolution of AI enabled healthcare systems using textual data with a pretrained BERT deep learning model
Sci Rep. 2025 Mar 4;15(1):7540. doi: 10.1038/s41598-025-91622-8.
ABSTRACT
In the rapidly evolving field of healthcare, Artificial Intelligence (AI) is increasingly driving the transformation of traditional healthcare and improving medical diagnostic decisions. The overall goal is to uncover emerging trends and potential future paths of AI in healthcare by applying text mining to scientific papers and patent information. Using advanced text mining and multiple deep learning algorithms, this study drew on the Web of Science for scientific papers (1,587) and the Derwent Innovations Index for patents (1,314) from 2018 to 2022 to study future trends of emerging AI in healthcare. A novel self-supervised text mining approach, leveraging bidirectional encoder representations from transformers (BERT), is introduced to explore AI trends in healthcare. The findings point to market trends in the Internet of Things, data security, and image processing. This study not only reveals current research hotspots and technological trends in AI for healthcare but also proposes an advanced research method. Moreover, by analysing patent data, it provides an empirical basis for exploring the commercialisation of AI technology, indicating potential transformation directions for future healthcare services. Early technology trend analysis relied heavily on expert judgment; this study is the first to introduce a deep learning self-supervised model to the field of AI in healthcare, effectively improving the accuracy and efficiency of the analysis. These findings provide valuable guidance for researchers, policymakers, and industry professionals, enabling more informed decisions.
PMID:40038367 | DOI:10.1038/s41598-025-91622-8
A visual SLAM loop closure detection method based on lightweight siamese capsule network
Sci Rep. 2025 Mar 4;15(1):7644. doi: 10.1038/s41598-025-90511-4.
ABSTRACT
Loop closure detection is a key module in visual SLAM. During the robot's movement, loop closure detection reduces the robot's cumulative error by providing constraints for back-end pose optimization, allowing the SLAM system to build an accurate map. Traditional loop closure detection algorithms rely on the bag-of-words model, which involves a complex process, loads slowly, and is sensitive to changes in illumination or viewing angle. To address these problems, this paper proposes a deep learning algorithm based on a Siamese capsule neural network. We designed a new feature extractor for capsule networks and, to further reduce the parameter count, performed pruning based on the characteristics of the capsule layer. The algorithm was tested on the CityCentre and New College datasets. Our experimental results show that the proposed algorithm has higher accuracy and robustness than traditional methods and other deep learning methods, and remains robust under changes in illumination and viewing angle. Finally, we evaluated the performance of the complete SLAM system on the KITTI dataset.
PMID:40038350 | DOI:10.1038/s41598-025-90511-4
Efficient CNN architecture with image sensing and algorithmic channeling for dataset harmonization
Sci Rep. 2025 Mar 4;15(1):7552. doi: 10.1038/s41598-025-90616-w.
ABSTRACT
The process of image formulation uses semantic analysis to extract influential vectors from image components. The proposed approach integrates DenseNet with ResNet-50, VGG-19, and GoogLeNet via an innovative bonding process that establishes algorithmic channeling between these models. The goal is compact, efficient image feature vectors that are processed in parallel regardless of color or grayscale input and that work across different datasets and semantic categories. Image patching techniques with corner straddling and isolated responses help detect peaks and junctions while addressing anisotropic noise through curvature-based computations and auto-correlation calculations. An integrated channeled algorithm processes the refined features by uniting local-global features with primitive-parameterized features and regioned feature vectors. K-nearest neighbor indexing methods are then used to analyze and retrieve images from the harmonized signature collection. Extensive experimentation was performed on state-of-the-art datasets including Caltech-101, Cifar-10, Caltech-256, Cifar-100, Corel-10000, 17-Flowers, COIL-100, FTVL Tropical Fruits, Corel-1000, and Zubud. The proposed method delivers strong channeling accuracy together with robust dataset harmonization performance.
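The K-nearest-neighbor retrieval step can be sketched with plain NumPy over a bank of feature vectors. The vectors below are random stand-ins for the harmonized signatures the abstract describes; the retrieval logic itself is the standard Euclidean KNN lookup, not the authors' implementation.

```python
import numpy as np

def knn_retrieve(query, signatures, k=3):
    """Return indices of the k signatures closest to the query (Euclidean distance)."""
    dists = np.linalg.norm(signatures - query, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
signatures = rng.normal(size=(100, 128))              # feature bank, one row per image
query = signatures[42] + 0.01 * rng.normal(size=128)  # slightly perturbed copy of image 42
top = knn_retrieve(query, signatures)
print(top[0])  # 42: the near-duplicate signature is retrieved first
```

In practice the feature bank would be the channeled feature vectors extracted by the fused CNN models, and an approximate index (e.g. a KD-tree) would replace the brute-force distance scan for large collections.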
PMID:40038324 | DOI:10.1038/s41598-025-90616-w
YOLO-BS: a traffic sign detection algorithm based on YOLOv8
Sci Rep. 2025 Mar 4;15(1):7558. doi: 10.1038/s41598-025-88184-0.
ABSTRACT
Traffic signs are pivotal components of traffic management, ensuring the regulation and safety of road traffic. However, existing detection methods often suffer from low accuracy and poor real-time performance in dynamic road environments. This paper reviews traditional traffic sign detection methods and introduces an enhanced detection algorithm (YOLO-BS) based on YOLOv8 (You Only Look Once version 8). This algorithm addresses the challenges of complex backgrounds and small-sized detection targets in traffic sign images. A small object detection layer was incorporated into the YOLOv8 framework to enrich feature extraction. Additionally, a bidirectional feature pyramid network (BiFPN) was integrated into the detection framework to enhance the handling of multi-scale objects and improve the performance in detecting small objects. Experiments were conducted on the TT100K dataset to evaluate key metrics such as model size, recall, mean average precision (mAP), and frames per second (FPS), demonstrating that YOLO-BS surpasses current mainstream models with mAP50 of 90.1% and FPS of 78. Future work will refine YOLO-BS to explore broader applications within intelligent transportation systems.
PMID:40038318 | DOI:10.1038/s41598-025-88184-0
Dual-type deep learning-based image reconstruction for advanced denoising and super-resolution processing in head and neck T2-weighted imaging
Jpn J Radiol. 2025 Mar 5. doi: 10.1007/s11604-025-01756-y. Online ahead of print.
ABSTRACT
PURPOSE: To assess the utility of dual-type deep learning (DL)-based image reconstruction with DL-based image denoising and super-resolution processing by comparing images reconstructed with the conventional method in head and neck fat-suppressed (Fs) T2-weighted imaging (T2WI).
MATERIALS AND METHODS: We retrospectively analyzed the cases of 43 patients who underwent head/neck Fs-T2WI for the assessment of their head and neck lesions. All patients underwent two sets of Fs-T2WI scans with conventional- and DL-based reconstruction. The Fs-T2WI with DL-based reconstruction was acquired based on a 30% reduction of its spatial resolution in both the x- and y-axes with a shortened scan time. Qualitative and quantitative assessments were performed with both the conventional method- and DL-based reconstructions. For the qualitative assessment, we visually evaluated the overall image quality, visibility of anatomical structures, degree of artifact(s), lesion conspicuity, and lesion edge sharpness based on five-point grading. In the quantitative assessment, we measured the signal-to-noise ratio (SNR) of the lesion and the contrast-to-noise ratio (CNR) between the lesion and the adjacent or nearest muscle.
RESULTS: In the qualitative analysis, significant differences were observed between the Fs-T2WI with the conventional- and DL-based reconstruction in all of the evaluation items except the degree of the artifact(s) (p < 0.001). In the quantitative analysis, significant differences were observed in the SNR between the Fs-T2WI with conventional- (21.4 ± 14.7) and DL-based reconstructions (26.2 ± 13.5) (p < 0.001). In the CNR assessment, the CNR between the lesion and adjacent or nearest muscle in the DL-based Fs-T2WI (16.8 ± 11.6) was significantly higher than that in the conventional Fs-T2WI (14.2 ± 12.9) (p < 0.001).
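The SNR and CNR figures above follow the usual ROI-based definitions: lesion signal over noise for SNR, and lesion-muscle signal difference over noise for CNR. A minimal sketch (the ROI statistics below are illustrative numbers, chosen only so the outputs land near the reported means; the abstract does not specify the exact ROI conventions):

```python
def snr(lesion_mean, noise_sd):
    """Signal-to-noise ratio of a lesion ROI."""
    return lesion_mean / noise_sd

def cnr(lesion_mean, muscle_mean, noise_sd):
    """Contrast-to-noise ratio between lesion and reference muscle ROI."""
    return abs(lesion_mean - muscle_mean) / noise_sd

# illustrative ROI statistics (arbitrary signal units)
print(snr(524.0, 20.0))          # 26.2, on the order of the DL-based SNR above
print(cnr(524.0, 188.0, 20.0))   # 16.8, on the order of the DL-based CNR above
```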
CONCLUSION: Dual-type DL-based image reconstruction by an effective denoising and super-resolution process successfully provided high image quality in head and neck Fs-T2WI with a shortened scan time compared to the conventional imaging method.
PMID:40038217 | DOI:10.1007/s11604-025-01756-y
Role of artificial intelligence in data-centric additive manufacturing processes for biomedical applications
J Mech Behav Biomed Mater. 2025 Feb 25;166:106949. doi: 10.1016/j.jmbbm.2025.106949. Online ahead of print.
ABSTRACT
The role of additive manufacturing (AM) in healthcare applications is growing, particularly in the aspiration to meet subject-specific requirements. This article reviews the application of artificial intelligence (AI) to enhance pre-, during-, and post-AM processes to meet a wider range of subject-specific requirements of healthcare interventions. It introduces common AM processes and AI tools, such as supervised learning, unsupervised learning, deep learning, and reinforcement learning. The role of AI in pre-processing is described along core dimensions such as structural design and image reconstruction, material design and formulations, and processing parameters. The role of AI during printing is described in terms of hardware specifications, printing configurations, and core operational parameters such as temperature. Likewise, for post-processing, the article describes the role of AI in surface finishing, dimensional accuracy, curing processes, and the relationship between AM processes and bioactivity. The later sections provide detailed scientometric studies and a thematic evaluation of the topic, and reflect on AI ethics in AM for biomedical applications. This review perceives AI as a robust and powerful tool for AM of biomedical products. From tissue engineering (TE) to prostheses, lab-on-chip to organs-on-a-chip, and additive biofabrication for a range of products, AI holds high potential to screen desired process-property-performance relationships for a resource-efficient pre- to post-AM cycle, developing high-quality healthcare products with enhanced subject-specific compliance.
PMID:40036906 | DOI:10.1016/j.jmbbm.2025.106949
TransHLA: a Hybrid Transformer model for HLA-presented epitope detection
Gigascience. 2025 Jan 6;14:giaf008. doi: 10.1093/gigascience/giaf008.
ABSTRACT
BACKGROUND: Precise prediction of epitope presentation on human leukocyte antigen (HLA) molecules is crucial for advancing vaccine development and immunotherapy. Conventional HLA-peptide binding affinity prediction tools often focus on specific alleles and lack a universal approach for comprehensive HLA site analysis. This limitation hinders efficient filtering of invalid peptide segments.
RESULTS: We introduce TransHLA, a pioneering tool designed for epitope prediction across all HLA alleles, integrating Transformer and Residue CNN architectures. TransHLA utilizes the ESM2 large language model for sequence and structure embeddings, achieving high predictive accuracy. For HLA class I, it reaches an accuracy of 84.72% and an area under the curve (AUC) of 91.95% on IEDB test data. For HLA class II, it achieves 79.94% accuracy and an AUC of 88.14%. Our case studies using datasets like CEDAR and VDJdb demonstrate that TransHLA surpasses existing models in specificity and sensitivity for identifying immunogenic epitopes and neoepitopes.
CONCLUSIONS: TransHLA significantly enhances vaccine design and immunotherapy by efficiently identifying broadly reactive peptides. Our resources, including data and code, are publicly accessible at https://github.com/SkywalkerLuke/TransHLA.
PMID:40036690 | DOI:10.1093/gigascience/giaf008
Machine-learning approach facilitates prediction of whitefly spatiotemporal dynamics in a plant canopy
J Econ Entomol. 2025 Feb 27:toaf035. doi: 10.1093/jee/toaf035. Online ahead of print.
ABSTRACT
Plant-specific insect scouting and prediction remain challenging in most crop systems. In this article, a machine-learning algorithm is proposed to predict whitefly (Bemisia tabaci Gennadius; Hemiptera: Aleyrodidae) populations during scouting and to aid in determining the population distribution of adult whiteflies in cotton plant canopies. The study investigated the main location of adult whiteflies relative to plant nodes (stem points where leaves or branches emerge), population variation within and between canopies, whitefly density variability across fields, the impact of dense nodes on overall canopy populations, and the feasibility of using machine learning for prediction. Daily scouting was conducted on 64 non-pesticide cotton plants, focusing on all leaves of the node with the highest whitefly counts. A linear mixed-effect model assessed distribution over time, and machine-learning model selection identified a suitable forecasting model for the whole-canopy whitefly population. Findings showed that the top 3 to 5 nodes are key habitats, with a single node potentially accounting for 44.4% of the full canopy whitefly population. The Bagging Ensemble Artificial Neural Network Regression model accurately predicted canopy populations (R² = 85.57), with consistency between actual and predicted counts (P-value > 0.05). Strategic sampling of the top nodes could estimate overall plant populations when taking a few samples or transects across a field. The suggested machine-learning model could be integrated into computing devices and automated sensors to predict real-time whitefly population density within the entire plant canopy during scouting operations.
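The bagging-ensemble regression idea above (predict the whole-canopy count from counts on a few top nodes) can be sketched as follows. Everything here is a hedged illustration: the data are synthetic, and the base learner is swapped from the study's artificial neural network to an ordinary least-squares fit for brevity; only the bootstrap-and-average ensemble structure matches what the abstract names.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit with a bias term (stand-in for the study's ANN base learner)."""
    Xb = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def bagging_predict(X, y, X_new, n_estimators=25, seed=0):
    """Average predictions of base learners fit on bootstrap resamples of (X, y)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))  # bootstrap sample with replacement
        coef = fit_ols(X[idx], y[idx])
        preds.append(np.c_[X_new, np.ones(len(X_new))] @ coef)
    return np.mean(preds, axis=0)

# synthetic data: whitefly counts on the top 3 nodes -> whole-canopy count
rng = np.random.default_rng(1)
X = rng.uniform(0, 50, size=(200, 3))             # counts on the top 3 nodes
y = X.sum(axis=1) * 2.25 + rng.normal(0, 2, 200)  # canopy count ~ scaled top-node sum
pred = bagging_predict(X[:150], y[:150], X[150:])
r2 = 1 - np.sum((y[150:] - pred) ** 2) / np.sum((y[150:] - y[150:].mean()) ** 2)
print(round(r2, 2))  # close to 1 on this easy synthetic relationship
```

Averaging over bootstrap resamples stabilizes the regression, which is the property the study relies on when extrapolating canopy totals from a handful of sampled nodes.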
PMID:40036620 | DOI:10.1093/jee/toaf035
CryoTEN: Efficiently Enhancing cryo-EM Density Maps Using Transformers
Bioinformatics. 2025 Feb 27:btaf092. doi: 10.1093/bioinformatics/btaf092. Online ahead of print.
ABSTRACT
MOTIVATION: Cryogenic Electron Microscopy (cryo-EM) is a core experimental technique used to determine the structure of macromolecules such as proteins. However, the effectiveness of cryo-EM is often hindered by the noise and missing density values in cryo-EM density maps caused by experimental conditions such as low contrast and conformational heterogeneity. Although various global and local map sharpening techniques are widely employed to improve cryo-EM density maps, it is still challenging to efficiently improve their quality for building better protein structures from them.
RESULTS: In this study, we introduce CryoTEN, a three-dimensional UNETR++-style transformer for effectively improving cryo-EM maps. CryoTEN is trained using a diverse set of 1,295 cryo-EM maps as inputs and their corresponding simulated maps generated from known protein structures as targets. An independent test set containing 150 maps is used to evaluate CryoTEN, and the results demonstrate that it can robustly enhance the quality of cryo-EM density maps. In addition, automatic de novo protein structure modeling shows that protein structures built from the density maps processed by CryoTEN have substantially better quality than those built from the original maps. Compared to existing state-of-the-art deep learning methods for enhancing cryo-EM density maps, CryoTEN ranks second in improving the quality of density maps, while running more than 10 times faster and requiring much less GPU memory.
AVAILABILITY AND IMPLEMENTATION: The source code and data are freely available at https://github.com/jianlin-cheng/cryoten.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:40036588 | DOI:10.1093/bioinformatics/btaf092
Challenges in AI-driven Biomedical Multimodal Data Fusion and Analysis
Genomics Proteomics Bioinformatics. 2025 Feb 27:qzaf011. doi: 10.1093/gpbjnl/qzaf011. Online ahead of print.
ABSTRACT
The rapid development of biological and medical examination methods has vastly expanded personal biomedical information, including molecular, cellular, image, and electronic health record datasets. Integrating this wealth of information enables precise disease diagnosis, biomarker identification, and treatment design in clinical settings. Artificial intelligence (AI) techniques, particularly deep learning models, have been extensively employed in biomedical applications, demonstrating increased precision, efficiency, and generalization. The success of large language and vision models further extends their biomedical applications. However, challenges remain in learning from these multimodal biomedical datasets, such as data privacy, fusion, and model interpretation. In this review, we provide a comprehensive overview of biomedical data modalities, multimodal representation learning methods, and the applications of AI in integrative biomedical data analysis. Additionally, we discuss the challenges in applying these deep learning methods and how to better integrate them into biomedical scenarios. We then propose future directions for adapting deep learning methods with model pre-training and knowledge integration to advance biomedical research and benefit clinical applications.
PMID:40036568 | DOI:10.1093/gpbjnl/qzaf011
Enhancing Image Retrieval Performance With Generative Models in Siamese Networks
IEEE J Biomed Health Inform. 2025 Feb 20;PP. doi: 10.1109/JBHI.2025.3543907. Online ahead of print.
ABSTRACT
Prostate cancer is a critical healthcare challenge globally and is one of the most prevalent types of cancer in men. Early and accurate diagnosis is essential for effective treatment and improved patient outcomes. In the existing literature, computer-aided diagnosis (CAD) solutions have been developed to assist pathologists in various tasks, including classification, diagnosis, and prostate cancer grading. Content-based image retrieval (CBIR) techniques provide valuable approaches to enhance these computer-aided solutions. This study evaluates how generative deep learning models can improve the quality of retrievals within a CBIR system. Specifically, we propose applying a Siamese Network approach, which enables us to learn how to encode image patches into latent representations for retrieval purposes. We used the ProGleason-GAN framework trained on the SiCAPv2 dataset to create similar pairs of input patches. Our observations indicate that introducing synthetic patches leads to notable improvements in the evaluated metrics, underscoring the utility of generative models within CBIR tasks. Furthermore, this work is the first in the literature where latent representations optimized for CBIR are used to train an attention mechanism for performing Gleason Scoring of a WSI.
PMID:40036556 | DOI:10.1109/JBHI.2025.3543907
Collaborative Deep Learning and Information Fusion of Heterogeneous Latent Variable Models for Industrial Quality Prediction
IEEE Trans Cybern. 2025 Feb 21;PP. doi: 10.1109/TCYB.2025.3537809. Online ahead of print.
ABSTRACT
In the past years, latent variable models have played an important role in various industrial AI systems, among which quality prediction is one of the most representative applications. Inspired by the idea of deep learning, these basic latent variable models have been extended to deep forms, based on which quality prediction performance has been significantly improved. However, different latent variable models have their own strengths and weaknesses; a model that works well in one scenario might not provide satisfactory performance in another. The motivation of this article is based on the viewpoint of information fusion and ensemble learning for heterogeneous latent variable models. Particularly, a collaborative deep learning and model fusion framework is formulated for the purpose of industrial quality prediction. In the first stage of the framework, collaborative layer-by-layer feature extractions are implemented among different latent variable models, through which different patterns of latent variables are identified in different layers of the deep model. Then, in the second stage, an ensemble regression modeling strategy is proposed to fuse the quality prediction results from different latent variable models, based on a well-designed data description method. Two real industrial examples are used for performance evaluation of the proposed method, from which we observe that information fusion in terms of both collaborative layer-by-layer feature extraction and heterogeneous model ensembling has positive effects on prediction accuracy and stability.
PMID:40036535 | DOI:10.1109/TCYB.2025.3537809
Co-Training Broad Siamese-Like Network for Coupled-View Semi-Supervised Learning
IEEE Trans Cybern. 2025 Feb 21;PP. doi: 10.1109/TCYB.2025.3531441. Online ahead of print.
ABSTRACT
Multiview semi-supervised learning is a popular research area in which cross-view knowledge is used to overcome the limitation of labeled data in semi-supervised learning. Existing methods mainly utilize deep neural networks, which are relatively time-consuming due to their complex network structures and back-propagation iterations. In this article, the co-training broad Siamese-like network (Co-BSLN) is proposed for coupled-view semi-supervised classification. Co-BSLN learns knowledge from two-view data and can be applied to multiview data with the help of feature concatenation. Unlike existing deep learning methods, Co-BSLN utilizes a simple shallow network based on the broad learning system (BLS) to simplify the network structure and reduce training time. It replaces back-propagation iterations with a direct pseudo-inverse calculation to further reduce time consumption. In Co-BSLN, different views of the same instance are treated as positive pairs due to cross-view consistency, and predictions of views in positive pairs are used to guide the training of each other through a direct logit vector mapping. This design is fast and effectively utilizes cross-view consistency to improve the accuracy of semi-supervised learning. Evaluation results demonstrate that Co-BSLN is able to improve accuracy and reduce training time on popular datasets.
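The pseudo-inverse calculation that replaces back-propagation in BLS-style networks amounts to one ridge-regularized least-squares solve for the output weights. A minimal sketch under the assumption that H stacks the broad network's feature/enhancement node outputs (the node-generation step itself is omitted; this is not the authors' implementation):

```python
import numpy as np

def bls_output_weights(H, Y, lam=1e-3):
    """Closed-form output weights W = (H^T H + lam*I)^{-1} H^T Y,
    i.e. the ridge-regularized pseudo-inverse solution used in BLS."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ Y)

rng = np.random.default_rng(0)
H = rng.normal(size=(500, 40))     # node outputs for 500 training samples
W_true = rng.normal(size=(40, 3))  # target mapping to 3 output logits
Y = H @ W_true                     # noiseless targets for the demo
W = bls_output_weights(H, Y)
print(np.allclose(W, W_true, atol=1e-2))  # True: weights recovered in one solve
```

A single linear solve like this is why BLS training is fast compared with iterative gradient descent: there are no epochs, only one matrix factorization per (re)training.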
PMID:40036533 | DOI:10.1109/TCYB.2025.3531441
NciaNet: A Non-Covalent Interaction-Aware Graph Neural Network for the Prediction of Protein-Ligand Interaction in Drug Discovery
IEEE J Biomed Health Inform. 2025 Mar 4;PP. doi: 10.1109/JBHI.2025.3547741. Online ahead of print.
ABSTRACT
Precise quantification of protein-ligand interaction is critical in early-stage drug discovery. Artificial intelligence (AI) has gained massive popularity in this area, with deep-learning models used to extract features from ligand and protein molecules. However, these models often fail to capture intermolecular non-covalent interactions, the primary factor influencing binding, leading to lower accuracy and interpretability. Moreover, such models overlook the spatial structure of protein-ligand complexes, resulting in weaker generalization. To address these issues, we propose Non-covalent Interaction-aware Graph Neural Network (NciaNet), a novel method that effectively utilizes intermolecular non-covalent interactions and 3D protein-ligand structure. Our approach achieves excellent predictive performance on multiple benchmark datasets and outperforms competitive baseline models in the binding affinity task, with the benchmark core set v.2016 achieving an RMSE of 1.208 and an R of 0.833, and the core set v.2013 achieving an RMSE of 1.409 and an R of 0.805, under the high-quality refined v.2016 training conditions. Importantly, NciaNet successfully learns vital features related to protein-ligand interactions, providing biochemical insights and demonstrating practical utility and reliability. However, despite these strengths, there may still be limitations in generalizability to unseen protein-ligand complexes, suggesting potential avenues for future work.
PMID:40036511 | DOI:10.1109/JBHI.2025.3547741
An AI-Based Clinical Decision Support System for Antibiotic Therapy in Sepsis (KINBIOTICS): Use Case Analysis
JMIR Hum Factors. 2025 Mar 4;12:e66699. doi: 10.2196/66699.
ABSTRACT
BACKGROUND: Antimicrobial resistance poses significant challenges in health care systems. Clinical decision support systems (CDSSs) represent a potential strategy for promoting a more targeted and guideline-based use of antibiotics. The integration of artificial intelligence (AI) into these systems has the potential to support physicians in selecting the most effective drug therapy for a given patient.
OBJECTIVE: This study aimed to analyze the feasibility of an AI-based CDSS pilot version for antibiotic therapy in sepsis patients and identify facilitating and inhibiting conditions for its implementation in intensive care medicine.
METHODS: The evaluation was conducted in 2 steps, using a qualitative methodology. Initially, expert interviews were conducted, in which intensive care physicians were asked to assess the AI-based recommendations for antibiotic therapy in terms of plausibility, layout, and design. Subsequently, focus group interviews were conducted to examine the technology acceptance of the AI-based CDSS. The interviews were anonymized and evaluated using content analysis.
RESULTS: In terms of feasibility, barriers included variability in previous antibiotic administration practices, which affected the predictive ability of AI recommendations, and the increased effort required to justify deviations from these recommendations. Physicians' confidence in accepting or rejecting recommendations depended on their level of professional experience. The ability to re-evaluate CDSS recommendations and an intuitive, user-friendly system design were identified as factors that enhanced acceptance and usability. Overall, barriers included low levels of digitization in clinical practice, limited availability of cross-sectoral data, and negative previous experiences with CDSSs. Conversely, facilitators of CDSS implementation were potential time savings, physicians' openness to adopting new technologies, and positive previous experiences.
CONCLUSIONS: Early integration of users is beneficial both for identifying relevant context factors and for the further development of an effective CDSS. Overall, the potential of AI-based CDSSs is offset by inhibiting contextual conditions that impede their acceptance and implementation. Advancing AI-based CDSSs and mitigating these inhibiting conditions are crucial for realizing their full potential.
PMID:40036494 | DOI:10.2196/66699
Cone-beam computed tomography (CBCT) image-quality improvement using a denoising diffusion probabilistic model conditioned by pseudo-CBCT of pelvic regions
Radiol Phys Technol. 2025 Mar 4. doi: 10.1007/s12194-025-00892-4. Online ahead of print.
ABSTRACT
Cone-beam computed tomography (CBCT) is widely used in radiotherapy to image the patient's configuration before treatment, but its image quality is lower than that of planning CT due to scattering, motion, and reconstruction methods. This reduces the accuracy of Hounsfield units (HU) and limits its use in adaptive radiation therapy (ART). However, synthetic CT (sCT) generation using deep learning methods for CBCT intensity correction faces challenges due to deformation. To address these issues, we propose enhancing CBCT quality using a conditional denoising diffusion probabilistic model (CDDPM), which is trained on pseudo-CBCT created by adding pseudo-scatter to planning CT. The CDDPM transforms CBCT into high-quality sCT, improving HU accuracy while preserving anatomical configuration. The performance evaluation of the proposed sCT showed a reduction in mean absolute error (MAE) from 81.19 HU for CBCT to 24.89 HU for the sCT. Peak signal-to-noise ratio (PSNR) improved from 31.20 dB for CBCT to 33.81 dB for the sCT. The Dice and Jaccard coefficients between CBCT and sCT for the colon, prostate, and bladder ranged from 0.69 to 0.91. Compared with other deep learning models, the proposed sCT outperformed them in terms of accuracy and anatomical preservation. The dosimetry analysis for prostate cancer revealed a dose error of over 10% with CBCT but nearly 0% with the sCT. Gamma pass rates for the proposed sCT exceeded 90% for all dose criteria, indicating high agreement with CT-based dose distributions. These results show that the proposed sCT improves image quality, dosimetry accuracy, and treatment planning, advancing ART for pelvic cancer.
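The abstract reports image quality as MAE (in HU) and PSNR (in dB) between a CT-like image and the reference planning CT. A minimal sketch of how those two metrics are typically computed on image arrays; the ~2000 HU data range and the toy noise levels are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mae_hu(pred, ref):
    # Mean absolute error in Hounsfield units.
    return float(np.mean(np.abs(pred - ref)))

def psnr_db(pred, ref, data_range=2000.0):
    # PSNR = 10 * log10(data_range^2 / MSE); the data range is an
    # assumed HU window, chosen here only for illustration.
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

rng = np.random.default_rng(0)
ref = rng.uniform(-1000, 1000, (64, 64))        # toy CT slice in HU
noisy = ref + rng.normal(0, 80, ref.shape)      # CBCT-like corruption
denoised = ref + rng.normal(0, 25, ref.shape)   # sCT-like result

# A successful correction shows lower MAE and higher PSNR than CBCT,
# which is the direction of improvement the abstract reports.
improved = (mae_hu(denoised, ref) < mae_hu(noisy, ref)
            and psnr_db(denoised, ref) > psnr_db(noisy, ref))
```

Note that both metrics compare intensities voxel-wise, which is why the paper additionally reports Dice/Jaccard overlap and dosimetric error to check that anatomy and dose, not just intensities, are preserved.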
PMID:40035984 | DOI:10.1007/s12194-025-00892-4