Deep learning
Drug repurposing to identify potential FDA-approved drugs targeting three main angiogenesis receptors through a deep learning framework
Mol Divers. 2025 May 26. doi: 10.1007/s11030-025-11214-6. Online ahead of print.
ABSTRACT
Tumor cell survival depends on the presence of oxygen and nutrients provided by existing blood vessels, particularly when cancer is in its early stage. As a tumor grows in the vicinity of blood vessels, malignant cells require more nutrients; hence, capillary sprouting occurs from parental vessels, a process known as angiogenesis. Although multiple cellular pathways have been identified, controlling them with a single biomolecule acting as a multi-target inhibitor could be an attractive strategy for reducing medication side effects. Three critical pathways in angiogenesis have been identified, which are activated by the vascular endothelial growth factor receptor (VEGFR), fibroblast growth factor receptor (FGFR), and epidermal growth factor receptor (EGFR). This study aimed to develop a methodology to discover multi-target inhibitors among over 2000 FDA-approved drugs. Hence, a novel ensemble approach was employed, comprising classification and regression models. First, three different deep autoencoder classifiers were generated, one for each target. The top 100 trained models were selected for the high-throughput virtual screening step. After that, only molecules with a predicted probability above 0.9 in more than 70% of the models were retained for the regression step. Since the ultimate aim of virtual screening is to discover molecules with the highest success rate in the pharmaceutical industry, various aspects of the molecules in different assays were considered by integrating ten different regression models. In conclusion, this paper contributes to pharmaceutical sciences by introducing eleven diverse scaffolds and eight approved drugs that can potentially be used as inhibitors of the angiogenesis receptors VEGFR, FGFR, and EGFR. Considering three target receptors simultaneously is another central contribution of this work.
This concept could increase the chance of success, while reducing the possibility of resistance to these agents.
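The consensus-filtering step described above (a molecule advances only if its predicted activity probability clears 0.9 in more than 70% of the trained classifiers) can be sketched as follows. The function name, argument layout, and toy probabilities are illustrative assumptions, not code or data from the paper.

```python
def screen_consensus(probabilities, prob_cut=0.9, model_frac=0.7):
    """probabilities: dict mapping molecule id -> list of per-model
    predicted probabilities (one entry per trained classifier).
    Returns the ids passing the consensus criterion."""
    selected = []
    for mol_id, probs in probabilities.items():
        hits = sum(1 for p in probs if p > prob_cut)
        if hits / len(probs) > model_frac:
            selected.append(mol_id)
    return selected

# Toy example with 4 "models" (the paper's ensemble uses 100 per target):
probs = {
    "drug_A": [0.95, 0.92, 0.97, 0.91],  # above 0.9 in 4/4 models
    "drug_B": [0.95, 0.40, 0.97, 0.91],  # above 0.9 in 3/4 = 75%
    "drug_C": [0.95, 0.40, 0.30, 0.91],  # above 0.9 in 2/4 = 50%
}
print(screen_consensus(probs))  # ['drug_A', 'drug_B']
```

In the paper's pipeline this filter is applied per target, so a multi-target candidate must pass the consensus vote of all three receptor-specific ensembles.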
PMID:40418485 | DOI:10.1007/s11030-025-11214-6
Applications of artificial intelligence in abdominal imaging
Abdom Radiol (NY). 2025 May 26. doi: 10.1007/s00261-025-04990-0. Online ahead of print.
ABSTRACT
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and improved risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.
PMID:40418375 | DOI:10.1007/s00261-025-04990-0
Research-based clinical deployment of artificial intelligence algorithm for prostate MRI
Abdom Radiol (NY). 2025 May 26. doi: 10.1007/s00261-025-05014-7. Online ahead of print.
ABSTRACT
PURPOSE: A critical limitation to deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is the actual integration of algorithms directly into the clinical Picture Archiving and Communications Systems (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario.
METHODS: A previously trained, publicly available AI model for segmentation of intra-prostatic findings on multiparametric Magnetic Resonance Imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, a consecutive cohort of patients undergoing diagnostic imaging at our institution and second, a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist. AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized.
RESULTS: In phase one, clinical PACS deployment was successfully executed in 57/58 cases, and results were obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance relative to independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment, and results were imported for biopsy targeting. Cancer detection rates for prostate cancer were 82.1% for ROI targets detected by both AI and the radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone.
CONCLUSIONS: Integration of novel AI algorithms requiring multi-parametric input into clinical PACS environment is feasible and model outputs can be used for downstream clinical tasks.
PMID:40418374 | DOI:10.1007/s00261-025-05014-7
Optimizing MRI sequence classification performance: insights from domain shift analysis
Eur Radiol. 2025 May 26. doi: 10.1007/s00330-025-11671-5. Online ahead of print.
ABSTRACT
BACKGROUND: MRI sequence classification becomes challenging in multicenter studies due to variability in imaging protocols, leading to unreliable metadata and requiring labor-intensive manual annotation. While numerous automated MRI sequence identification models are available, they frequently encounter the issue of domain shift, which detrimentally impacts their accuracy. This study addresses domain shift, particularly from adult to pediatric MRI data, by evaluating the effectiveness of pre-trained models under these conditions.
METHODS: This retrospective, multicentric study explored the ability of a pre-trained convolutional model (ResNet) and a CNN-Transformer hybrid model (MedViT) to handle domain shift. The study involved training ResNet-18 and MedViT models on an adult MRI dataset and testing them on a pediatric dataset, with expert domain knowledge adjustments applied to account for differences in sequence types.
RESULTS: The MedViT model demonstrated superior performance compared to ResNet-18 and benchmark models, achieving an accuracy of 0.893 (95% CI 0.880-0.904). Expert domain knowledge adjustments further improved the MedViT model's accuracy to 0.905 (95% CI 0.893-0.916), showcasing its robustness in handling domain shift.
CONCLUSION: Advanced neural network architectures like MedViT and expert domain knowledge on the target dataset significantly enhance the performance of MRI sequence classification models under domain shift conditions. By combining the strengths of CNNs and transformers, hybrid architectures offer enhanced robustness for reliable automated MRI sequence classification in diverse research and clinical settings.
KEY POINTS: Question Domain shift between adult and pediatric MRI data limits deep learning model accuracy, requiring solutions for reliable sequence classification across diverse patient populations. Findings The MedViT model outperformed ResNet-18 in pediatric imaging; expert domain knowledge adjustment further improved accuracy, demonstrating robustness across diverse datasets. Clinical relevance This study enhances MRI sequence classification by leveraging advanced neural networks and expert domain knowledge to mitigate domain shift, boosting diagnostic precision and efficiency across diverse patient populations in multicenter environments.
PMID:40418319 | DOI:10.1007/s00330-025-11671-5
Multimodal integration of longitudinal noninvasive diagnostics for survival prediction in immunotherapy using deep learning
J Am Med Inform Assoc. 2025 May 26:ocaf074. doi: 10.1093/jamia/ocaf074. Online ahead of print.
ABSTRACT
OBJECTIVES: Immunotherapies have revolutionized the landscape of cancer treatments. However, our understanding of response patterns in advanced cancers treated with immunotherapy remains limited. By leveraging routinely collected noninvasive longitudinal and multimodal data with artificial intelligence, we could unlock the potential to transform immunotherapy for cancer patients, paving the way for personalized treatment approaches.
MATERIALS AND METHODS: In this study, we developed a novel artificial neural network architecture, multimodal transformer-based simple temporal attention (MMTSimTA) network, building upon a combination of recent successful developments. We integrated pre- and on-treatment blood measurements, prescribed medications, and CT-based volumes of organs from a large pan-cancer cohort of 694 patients treated with immunotherapy to predict mortality at 3, 6, 9, and 12 months. Different variants of our extended MMTSimTA network were implemented and compared to baseline methods, incorporating intermediate and late fusion-based integration methods.
RESULTS: The strongest prognostic performance was demonstrated using a variant of the MMTSimTA model, with areas under the curve (AUCs) of 0.84 ± 0.04, 0.83 ± 0.02, 0.82 ± 0.02, and 0.81 ± 0.03 for 3-, 6-, 9-, and 12-month survival prediction, respectively.
DISCUSSION: Our findings show that integrating noninvasive longitudinal data using our novel architecture yields an improved multimodal prognostic performance, especially in short-term survival prediction.
CONCLUSION: Our study demonstrates that multimodal longitudinal integration of noninvasive data using deep learning may offer a promising approach for personalized prognostication in immunotherapy-treated cancer patients.
PMID:40418276 | DOI:10.1093/jamia/ocaf074
EMOCPD: Efficient Attention-Based Models for Computational Protein Design Using Amino Acid Microenvironment
J Chem Inf Model. 2025 May 26. doi: 10.1021/acs.jcim.5c00378. Online ahead of print.
ABSTRACT
Computational protein design (CPD) refers to the use of computational methods to design proteins. Traditional methods relying on energy functions and heuristic algorithms for sequence design are inefficient and do not meet the demands of the big-data era in biomolecules, with their accuracy limited by the energy functions and search algorithms. Existing deep learning methods are constrained by the learning capabilities of the networks, failing to extract effective information from sparse protein structures, which limits the accuracy of protein design. To address these shortcomings, we developed EMOCPD, an efficient attention-based model for computational protein design using the amino acid microenvironment. It predicts the category of each amino acid in a protein by analyzing the three-dimensional atomic environment surrounding that amino acid, and optimizes the protein based on the predicted high-probability amino acid categories. EMOCPD employs a multihead attention mechanism to focus on important features in the sparse protein microenvironment and utilizes an inverse residual structure to optimize the network architecture. In protein design, the thermal stability and expression of the mutants predicted by EMOCPD show significant improvements over the wild type, effectively validating EMOCPD's potential for designing superior proteins. Furthermore, each of the 20 amino acids can influence EMOCPD's predictions positively, negatively, or negligibly, allowing amino acids to be categorized as positive, negative, or neutral. The findings indicate that EMOCPD is better suited to designing proteins with lower contents of negative amino acids.
PMID:40418077 | DOI:10.1021/acs.jcim.5c00378
Advances in Machine Learning-Driven Flexible Strain Sensors: Challenges, Innovations, and Applications
ACS Appl Mater Interfaces. 2025 May 26. doi: 10.1021/acsami.5c06453. Online ahead of print.
ABSTRACT
Flexible strain sensors have garnered significant attention due to their high sensitivity, rapid response, and flexibility. Recent innovations, particularly those incorporating machine learning, have significantly enhanced their stability, sensitivity, and adaptability, positioning these sensors as promising solutions in health monitoring, human-computer interaction, and smart home applications. However, challenges remain in optimizing sensor materials for enhanced responsiveness, durability, and stability. Moreover, the development of machine learning-based strain sensors faces obstacles, including algorithmic limitations, low noise tolerance in complex environments, and limited model interpretability. This review systematically evaluates the latest advancements in flexible strain sensors, emphasizing the critical role of machine learning in performance enhancement. It further explores the shift from traditional machine learning methods to deep learning approaches, elucidating the potential applications that these algorithms facilitate. Finally, we discuss future research trajectories, highlighting both opportunities and challenges that may guide the next wave of innovations in this dynamic field.
PMID:40418062 | DOI:10.1021/acsami.5c06453
Advancements in Structure-based Drug Design Using Geometric Deep Learning
Curr Med Chem. 2025 May 23. doi: 10.2174/0109298673388739250516071228. Online ahead of print.
NO ABSTRACT
PMID:40417758 | DOI:10.2174/0109298673388739250516071228
In-silico molecular investigation of Nannochloropsis microalgae cellulose synthase under salinity conditions and in-vitro evaluation of the proportionate effects on cellulose production
3 Biotech. 2025 Jun;15(6):180. doi: 10.1007/s13205-025-04329-y. Epub 2025 May 21.
ABSTRACT
Nannochloropsis is a microalga whose cell wall contains substantially 60-70% cellulose, making it a potential candidate for sustainable nanocellulose production. This study examined the effects of the salts in seawater and their role in Nannochloropsis gaditana and Nannochloropsis oculata cellulose synthase activity using in-silico and in-vitro approaches for the first time. A deep-learning-based AlphaFold2 prediction was selected as the most reliable 3D structure. Molecular docking results revealed that none of the selected ligands occupied the binding site predicted for the native substrate of the enzyme, uridine diphosphate. To validate the in-silico results, experiments were conducted to investigate the impact of salinity stress (NaCl, NaNO3, and NaHCO3) on cell growth and cellulose production. The assessment tools included a UV-visible spectrophotometer and a hemocytometer, with a modified Jayme-Wise method used for cellulose extraction. The results indicated that NaCl at 0.443, 0.457, and 0.469 mol/L, NaNO3 at 0.072, 0.077, and 0.082 mol/L, and NaHCO3 at 0.0021, 0.0022, and 0.0023 mol/L lowered neither the growth rate nor the cellulose yield of N. oculata, and a notable enhancement in growth was observed in cultures supplemented with 0.0023 mol/L NaHCO3. Furthermore, when NaCl (0.457 mol/L and 0.469 mol/L), NaNO3 (0.082 mol/L), and NaHCO3 (0.0022 mol/L and 0.0023 mol/L) were individually introduced to the culture, cellulose yield increased up to five times compared to the control group.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13205-025-04329-y.
PMID:40417659 | PMC:PMC12095769 | DOI:10.1007/s13205-025-04329-y
Research on target localization and adaptive scrubbing of intelligent bathing assistance system
Front Bioeng Biotechnol. 2025 May 9;13:1550875. doi: 10.3389/fbioe.2025.1550875. eCollection 2025.
ABSTRACT
INTRODUCTION: Bathing is a primary daily activity. Existing bathing systems are limited by their lack of intelligence and adaptability, reliance on caregivers, and the complexity of their control algorithms. Although visual sensors are widely used in intelligent systems, current intelligent bathing systems do not effectively process depth information from these sensors.
METHODS: The scrubbing task of the intelligent bathing assistance system can be divided into a pre-contact localization phase and a post-contact adaptive scrubbing phase. YOLOv5s, known for its ease of deployment and high accuracy, is utilized for multi-region skin detection to identify different body parts. A depth correction algorithm is designed to improve the depth accuracy of RGB-D vision sensors. The 3D position and pose of the target point in the RGB camera coordinate system are modeled and then transformed to the robot base coordinate system by hand-eye calibration. The system localization accuracy is measured when the collaborative robot moves into contact with the target. The self-rotating end scrubber head has flexible bristles with an adjustable length of 10 mm. After the end effector is in contact with the target, the point cloud scrubbing trajectory is optimized using cubic B-spline interpolation. Normal vectors are estimated from approximate dyadic relations of the triangulated surface. Segmented interpolation is proposed to achieve real-time planning and to handle possible unexpected movements of the target. A position and pose updating strategy for the end scrubber head is established.
RESULTS: YOLOv5s enables real-time detection, tolerating variations in skin color, water vapor, occlusion, light, and scene. The localization error is relatively small, with a maximum value of 2.421 mm, a minimum value of 2.081 mm, and an average of 2.186 mm. Sampling the scrubbing curve every 2 mm along the x-axis and comparing actual to desired trajectories, the y-axis shows a maximum deviation of 2.23 mm, which still allows the scrubbing head to conform to the human skin surface.
DISCUSSION: The study does not focus on developing complex control algorithms but instead emphasizes improving the accuracy of depth data to enhance localization precision.
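The cubic B-spline interpolation named in the methods can be illustrated with a minimal evaluator for one uniform cubic segment. The function name and the 1D control values are assumptions for illustration only; the system itself interpolates 3D point-cloud trajectories.

```python
def bspline_point(p0, p1, p2, p3, u):
    """Evaluate a uniform cubic B-spline segment at local parameter
    u in [0, 1], given four consecutive control values. The basis
    weights sum to 6 before the final division, so the curve stays
    inside the convex hull of the control values."""
    b0 = (1 - u) ** 3
    b1 = 3 * u ** 3 - 6 * u ** 2 + 4
    b2 = -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1
    b3 = u ** 3
    return (b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3) / 6.0

# Equally spaced collinear control values yield points on the same line:
print(bspline_point(0.0, 1.0, 2.0, 3.0, 0.0))  # 1.0
print(bspline_point(0.0, 1.0, 2.0, 3.0, 0.5))  # 1.5
```

A B-spline is a common choice for scrubbing trajectories because the curve is smooth (C2-continuous across segments) and moving one control point only affects the curve locally, which suits the paper's segmented, real-time replanning.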
PMID:40416315 | PMC:PMC12098527 | DOI:10.3389/fbioe.2025.1550875
Editorial for Innovative Artificial Intelligence System in the Children's Hospital in Japan
JMA J. 2025 Apr 28;8(2):361-362. doi: 10.31662/jmaj.2025-0076. Epub 2025 Mar 21.
NO ABSTRACT
PMID:40416027 | PMC:PMC12095351 | DOI:10.31662/jmaj.2025-0076
Innovative Artificial Intelligence System in the Children's Hospital in Japan
JMA J. 2025 Apr 28;8(2):354-360. doi: 10.31662/jmaj.2024-0312. Epub 2025 Feb 21.
ABSTRACT
The evolution of innovative artificial intelligence (AI) systems in pediatric hospitals in Japan promises benefits for patients and healthcare providers. We actively contribute to advancements in groundbreaking medical treatments by leveraging deep learning technology and using vast medical datasets. Our team of data scientists closely collaborates with departments within the hospital. Our research themes based on deep learning are wide-ranging, including acceleration of pathological diagnosis using image data, distinguishing of bacterial species, early detection of eye diseases, and prediction of genetic disorders from physical features. Furthermore, we implement Information and Communication Technology to diagnose pediatric cancer. Moreover, we predict immune responses based on genomic data and diagnose autism by quantifying behavior and communication. Our expertise extends beyond research to provide comprehensive AI development services, including data collection, annotation, high-speed computing, utilization of machine learning frameworks, design of web services, and containerization. In addition, as active members of medical AI platform collaboration partnerships, we provide unique data and analytical technologies to facilitate the development of AI development platforms. Furthermore, we address the challenges of securing medical data in the cloud to ensure compliance with stringent confidentiality standards. We will discuss AI's advancements in pediatric hospitals and their challenges.
PMID:40415999 | PMC:PMC12095641 | DOI:10.31662/jmaj.2024-0312
Response to the Letter by Matsubara
JMA J. 2025 Apr 28;8(2):664. doi: 10.31662/jmaj.2024-0420. Epub 2025 Mar 7.
NO ABSTRACT
PMID:40415986 | PMC:PMC12095420 | DOI:10.31662/jmaj.2024-0420
Editorial: Advances in computer vision: from deep learning models to practical applications
Front Neurosci. 2025 May 9;19:1615276. doi: 10.3389/fnins.2025.1615276. eCollection 2025.
NO ABSTRACT
PMID:40415892 | PMC:PMC12098266 | DOI:10.3389/fnins.2025.1615276
Improving annotation efficiency for fully labeling a breast mass segmentation dataset
J Med Imaging (Bellingham). 2025 May;12(3):035501. doi: 10.1117/1.JMI.12.3.035501. Epub 2025 May 21.
ABSTRACT
PURPOSE: Breast cancer remains a leading cause of death for women. Screening programs are deployed to detect cancer at early stages. One current barrier identified by breast imaging researchers is a shortage of labeled image datasets. Addressing this problem is crucial to improve early detection models. We present an active learning (AL) framework for segmenting breast masses from 2D digital mammography, and we publish labeled data. Our method aims to reduce the input needed from expert annotators to reach a fully labeled dataset.
APPROACH: We create a dataset of 1136 mammographic masses with pixel-wise binary segmentation labels, with the test subset labeled independently by two different teams. With this dataset, we simulate a human annotator within an AL framework to develop and compare AI-assisted labeling methods, using a discriminator model and a simulated oracle to collect acceptable segmentation labels. A UNet model is retrained on these labels, generating new segmentations. We evaluate various oracle heuristics using the percentage of segmentations that the oracle relabels and measure the quality of the proposed labels by evaluating the intersection over union over a validation dataset.
RESULTS: Our method reduces expert annotator input by 44%. We present a dataset of 1136 binary segmentation labels approved by board-certified radiologists and make the 143-image validation set public for comparison with other researchers' methods.
CONCLUSIONS: We demonstrate that AL can significantly improve the efficiency and time-effectiveness of creating labeled mammogram datasets. Our framework facilitates the development of high-quality datasets while minimizing manual effort in the domain of digital mammography.
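The oracle heuristic in the active learning framework above can be sketched as a simple accept-or-relabel loop: the simulated oracle accepts a model-proposed mask when its IoU against the reference label clears a threshold, and otherwise falls back to the reference (counting as a manual relabel). The function names, the threshold value, and the flat-list mask encoding are illustrative assumptions, not the paper's implementation.

```python
def iou(a, b):
    """Intersection over union of two binary masks (flat 0/1 lists)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

def oracle_review(proposals, references, iou_accept=0.8):
    """Return the accepted label set and the fraction the oracle
    had to relabel (the efficiency metric reported in the paper)."""
    accepted, relabeled = [], 0
    for prop, ref in zip(proposals, references):
        if iou(prop, ref) >= iou_accept:
            accepted.append(prop)   # model label accepted as-is
        else:
            accepted.append(ref)    # oracle relabels manually
            relabeled += 1
    return accepted, relabeled / len(proposals)

props = [[1, 1, 0, 0], [1, 0, 0, 0]]
refs = [[1, 1, 0, 0], [0, 0, 1, 1]]
labels, frac = oracle_review(props, refs)
print(frac)  # 0.5: one of two proposals needed relabeling
```

In the full loop the accepted labels would then be fed back to retrain the UNet, and the relabel fraction tracks how much expert effort each round still requires.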
PMID:40415867 | PMC:PMC12094908 | DOI:10.1117/1.JMI.12.3.035501
Convolutional variational auto-encoder and vision transformer hybrid approach for enhanced early Alzheimer's detection
J Med Imaging (Bellingham). 2025 May;12(3):034501. doi: 10.1117/1.JMI.12.3.034501. Epub 2025 May 21.
ABSTRACT
PURPOSE: Alzheimer's disease (AD) is becoming more prevalent among the elderly, with projections indicating that it will affect a substantially larger population in the future. Despite substantial research efforts and investments focused on exploring the underlying biological factors, a definitive cure has yet to be discovered. Currently available treatments are only effective in slowing disease progression if it is identified in the early stages. Therefore, early diagnosis has become critical in treating AD.
APPROACH: Recently, the use of deep learning techniques has demonstrated remarkable improvement in enhancing the precision and speed of automatic AD diagnosis through medical image analysis. We propose a hybrid model that integrates a convolutional variational auto-encoder (CVAE) with a vision transformer (ViT). During the encoding phase, the CVAE captures key features from the MRI scans, whereas the decoding phase reduces irrelevant details in MRIs. These refined inputs enhance the ViT's ability to analyze complex patterns through its multihead attention mechanism.
RESULTS: The model was trained and evaluated using 14,000 structural MRI samples from the ADNI and SCAN databases. Compared with three benchmark methods and previous Alzheimer's classification studies, our approach achieved a significant improvement, with a test accuracy of 93.3%.
CONCLUSIONS: Through this research, we identified the potential of the CVAE-ViT hybrid approach in detecting minor structural abnormalities related to AD. Integrating unsupervised feature extraction via CVAE can significantly enhance transformer-based models in distinguishing between stages of cognitive impairment, thereby identifying early indicators of AD.
PMID:40415866 | PMC:PMC12094909 | DOI:10.1117/1.JMI.12.3.034501
Classifying chronic obstructive pulmonary disease status using computed tomography imaging and convolutional neural networks: comparison of model input image types and training data severity
J Med Imaging (Bellingham). 2025 May;12(3):034502. doi: 10.1117/1.JMI.12.3.034502. Epub 2025 May 22.
ABSTRACT
PURPOSE: Convolutional neural network (CNN)-based models using computed tomography images can classify chronic obstructive pulmonary disease (COPD) with high performance, but various input image types have been investigated, and it is unclear what image types are optimal. We propose a 2D airway-optimized topological multiplanar reformat (tMPR) input image and compare its performance with established 2D/3D input image types for COPD classification. As a secondary aim, we examined the impact of training on a dataset with predominantly mild COPD cases and testing on a more severe dataset to assess whether it improves generalizability.
APPROACH: CanCOLD study participants were used for training/internal testing; SPIROMICS participants were used for external testing. Several 2D/3D input image types were adapted from the literature. In the proposed models, 2D airway-optimized tMPR images (to convey shape and interior/contextual information) and 3D output fusion of axial/sagittal/coronal images were investigated. The area-under-the-receiver-operator-curve (AUC) was used to evaluate model performance and Brier scores were used to evaluate model calibration. To further examine how training dataset severity impacts generalization, we compared model performance when trained on the milder CanCOLD dataset versus the more severe SPIROMICS dataset, and vice versa.
RESULTS: A total of n = 742 CanCOLD participants were used for training/validation and n = 309 for testing; n = 448 SPIROMICS participants were used for external testing. On the CanCOLD and SPIROMICS test sets, the proposed 2D tMPR on its own (CanCOLD: AUC = 0.79; SPIROMICS: AUC = 0.94) and combined with the 3D axial/coronal/sagittal lung view (CanCOLD: AUC = 0.82; SPIROMICS: AUC = 0.93) had the highest performance. The combined 2D tMPR and 3D axial/coronal/sagittal lung view had the lowest Brier score (CanCOLD: score = 0.16; SPIROMICS: score = 0.24). Conversely, using SPIROMICS for training/testing and CanCOLD for external testing resulted in lower performance on CanCOLD for 2D tMPR on its own (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.74) and combined with the 3D axial/coronal/sagittal lung view (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.75).
CONCLUSIONS: The CNN-based model with the combined 2D tMPR images and 3D lung view as input image types had the highest performance for COPD classification, highlighting the importance of airway information and that the fusion of different types of information as input image can improve CNN-based model performance. In addition, models trained on CanCOLD demonstrated strong generalization to the more severe SPIROMICS cohort, whereas training on SPIROMICS resulted in lower performance when tested on CanCOLD. These findings suggest that training on milder COPD cases may improve classification performance across the disease spectrum.
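The "3D output fusion of axial/sagittal/coronal images" mentioned above can be sketched as a late-fusion step over per-view probabilities. Averaging is one common late-fusion choice; the abstract does not specify the exact fusion rule, and the function name and probability values below are illustrative assumptions.

```python
def fuse_views(view_probs):
    """Late fusion: average the COPD probability predicted
    independently from each anatomical view."""
    return sum(view_probs.values()) / len(view_probs)

p = fuse_views({"axial": 0.9, "coronal": 0.7, "sagittal": 0.8})
print(round(p, 2))  # 0.8
```

The appeal of output-level fusion is that each view's CNN can be trained and calibrated separately, and a weak prediction from one plane is tempered by the other two.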
PMID:40415865 | PMC:PMC12097752 | DOI:10.1117/1.JMI.12.3.034502
Optimised Hybrid Attention-Based Capsule Network Integrated Three-Pathway Network for Chronic Disease Detection in Retinal Images
J Eval Clin Pract. 2025 Jun;31(4):e70126. doi: 10.1111/jep.70126.
ABSTRACT
BACKGROUND: Over the past 20 years, researchers have concentrated on analyzing retinal images as a means of detecting and classifying chronic diseases. Early diagnosis and treatment are essential to avoid chronic diseases. Manually grading retinal images is time-consuming, prone to errors, and lacks patient-friendliness. Various Deep Learning (DL) algorithms are employed to detect chronic diseases from retinal fundus images. However, these methods have drawbacks such as overfitting and high computational cost.
OBJECTIVE: The proposed research aims to develop an optimized DL-based system for detecting chronic diseases in retinal images while addressing the limitations of existing methods.
METHODOLOGY: Initially, the retinal images are pre-processed to clean and organize the data. Normalization and HSI colour conversion are the techniques used for pre-processing. Inception-V3, ResNet-152, and a Convolutional Vision Transformer (Conv-ViT) are used to perform feature extraction. The classifier is an Optimized Hybrid Attention-based Capsule Network, with an optimization step included to increase the classifier's performance.
RESULTS: The proposed approach attains accuracies of 99.05% and 99.15% on the Diabetic Retinopathy 224 × 224 (2019 Data) and APTOS-2019 datasets, respectively. The superior performance of the proposed technique highlights its effectiveness in this domain.
CONCLUSION: The implementation of such automated methods can significantly improve the efficiency and accuracy of chronic disease diagnosis, benefiting both healthcare providers and patients.
PMID:40415584 | DOI:10.1111/jep.70126
AI in Orthopedic Research: A Comprehensive Review
J Orthop Res. 2025 May 26. doi: 10.1002/jor.26109. Online ahead of print.
ABSTRACT
Artificial intelligence (AI) is revolutionizing orthopedic research and clinical practice by enhancing diagnostic accuracy, optimizing treatment strategies, and streamlining clinical workflows. Recent advances in deep learning have enabled the development of algorithms that detect fractures, grade osteoarthritis, and identify subtle pathologies in radiographic and magnetic resonance images with performance comparable to expert clinicians. These AI-driven systems reduce missed diagnoses and provide objective, reproducible assessments that facilitate early intervention and personalized treatment planning. Moreover, AI has made significant strides in predictive analytics by integrating diverse patient data, including gait and imaging features, to forecast surgical outcomes, implant survivorship, and rehabilitation trajectories. Emerging applications in robotics, augmented reality, digital twin technologies, and exoskeleton control promise to further transform preoperative planning and intraoperative guidance. Despite these promising developments, challenges such as data heterogeneity, algorithmic bias, the "black box" nature of many models, and issues with robust validation remain. This comprehensive review synthesizes current developments, critically examines limitations, and outlines future directions for integrating AI into musculoskeletal care.
PMID:40415515 | DOI:10.1002/jor.26109
Automated landmark-based mid-sagittal plane: reliability for 3-dimensional mandibular asymmetry assessment on head CT scans
Clin Oral Investig. 2025 May 26;29(6):311. doi: 10.1007/s00784-025-06397-z.
ABSTRACT
OBJECTIVE: The determination of the mid-sagittal plane (MSP) on three-dimensional (3D) head imaging is key to the assessment of facial asymmetry. The aim of this study was to evaluate the reliability of an automated landmark-based MSP to quantify mandibular asymmetry on head computed tomography (CT) scans.
MATERIALS AND METHODS: A dataset of 368 CT scans, including orthognathic surgery patients, was automatically annotated with 3D cephalometric landmarks via a previously published deep learning-based method. Five of these landmarks were used to automatically construct an MSP orthogonal to the Frankfurt horizontal plane. The reliability of automatic MSP construction was compared with the reliability of manual MSP construction based on 6 manual localizations by 3 experienced operators on 19 randomly selected CT scans. The mandibular asymmetry of the 368 CT scans with respect to the MSP was calculated and compared with clinical expert judgment.
RESULTS: The construction of the MSP was found to be highly reliable, both manually and automatically. The manual reproducibility 95% limit of agreement was less than 1 mm for y-translation and less than 1.1° for x- and z-rotation, and the automatic measurement lay within the confidence interval of the manual method. The automatic MSP construction was shown to be clinically relevant, with the mandibular asymmetry measures being consistent with the expertly assessed levels of asymmetry.
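The 95% limit of agreement reported for manual reproducibility is the standard Bland-Altman statistic (mean difference ± 1.96 standard deviations of the paired differences). A minimal sketch follows; the measurement values are made-up illustrations, not data from the study.

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two paired
    measurement series (e.g. two operators' MSP parameters)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

op1 = [0.2, -0.1, 0.4, 0.0, 0.3]   # e.g. y-translation, operator 1 (mm)
op2 = [0.1,  0.0, 0.2, 0.1, 0.2]   # same cases, operator 2 (mm)
lo, hi = limits_of_agreement(op1, op2)
```

If both limits fall within a clinically acceptable margin (here, under 1 mm of translation), the two measurement methods can be considered interchangeable, which is the criterion the study applies to manual versus automatic MSP construction.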
CONCLUSION: The proposed automatic landmark-based MSP construction was found to be as reliable as manual construction and clinically relevant in assessing the mandibular asymmetry of 368 head CT scans.
CLINICAL RELEVANCE: Once implemented in a clinical software, fully automated landmark-based MSP construction could be clinically used to assess mandibular asymmetry on head CT scans.
PMID:40415151 | DOI:10.1007/s00784-025-06397-z