Deep learning
Current Research and Development in the Field of Magnetic Resonance Contrast Media
Invest Radiol. 2025 Jul 7. doi: 10.1097/RLI.0000000000001206. Online ahead of print.
ABSTRACT
The next-generation, high-relaxivity, gadolinium-based contrast agents (GBCAs) are discussed, together with new studies of safety, improvements in MR technique, and the ongoing development of additional agents. It is likely that the next-generation agents, gadopiclenol and gadoquatrane, will largely replace the current standards, the macrocyclic gadolinium chelates, despite the excellent safety profile and very high stability of the latter. In the Group of Seven (G7) nations, which include Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, use of the linear gadolinium chelates has largely ceased, due to concerns regarding their relative instability compared with the macrocyclic agents and the deposition of gadolinium that occurs in many tissues, including brain and bone, following their injection. Manganese-based compounds are once again being investigated, a field largely untouched since the initial development of clinical MR contrast media in the 1980s. Their potential impact on clinical imaging is, however, unclear. New information continues to emerge regarding differences in the stability of the gadolinium-based agents. Artificial intelligence and deep learning techniques are maturing and are discussed briefly, given their potential and recent clinical application involving MR contrast media.
PMID:40622723 | DOI:10.1097/RLI.0000000000001206
Performance of a deep-learning-based lung nodule detection system using 0.25-mm thick ultra-high-resolution CT images
Jpn J Radiol. 2025 Jul 7. doi: 10.1007/s11604-025-01828-z. Online ahead of print.
ABSTRACT
PURPOSE: Artificial intelligence (AI) algorithms for lung nodule detection assist radiologists. As their performance using ultra-high-resolution CT (U-HRCT) images has not been evaluated, we investigated the usefulness of 0.25-mm slices at U-HRCT using the commercially available deep-learning-based lung nodule detection (DL-LND) system.
MATERIALS AND METHODS: We enrolled 63 patients who underwent U-HRCT for lung cancer or suspected lung cancer. Two board-certified radiologists identified nodules larger than 4 mm in diameter on 1-mm HRCT slices and established the reference standard by consensus. They recorded all lesions detected by the DL-LND system on 5-, 1-, and 0.25-mm slices. Nodules detected by the system but not initially identified were added to the reference standard. To examine the performance of the DL-LND system, the sensitivity, positive predictive value (PPV), and number of false-positive (FP) nodules were recorded.
RESULTS: The mean number of lesions detected on 5-, 1-, and 0.25-mm slices was 5.1, 7.8, and 7.2 per CT scan, respectively. On 5-mm slices the sensitivity and PPV were 79.8% and 46.4%; on 1-mm slices they were 91.5% and 34.8%; and on 0.25-mm slices they were 86.7% and 36.1%. The sensitivity was significantly higher on 1-mm than on 5-mm slices (p < 0.01), while the PPV was significantly lower on 1-mm than on 5-mm slices (p < 0.01). A slice thickness of 0.25 mm failed to improve the system's performance. The mean number of FP nodules on 5-, 1-, and 0.25-mm slices was 2.8, 5.2, and 4.7 per CT scan, respectively.
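For readers unfamiliar with the metrics, sensitivity and PPV follow directly from per-scan true-positive, false-positive, and false-negative counts; a minimal sketch in Python (the counts below are illustrative, not the study's data):

```python
# Detection metrics from confusion counts. The example numbers are
# hypothetical; the study reports only the aggregate percentages.
def detection_metrics(tp: int, fp: int, fn: int):
    sensitivity = tp / (tp + fn)   # fraction of reference nodules found
    ppv = tp / (tp + fp)           # fraction of detections that are real
    return sensitivity, ppv

sens, ppv = detection_metrics(tp=79, fp=41, fn=7)
print(f"sensitivity={sens:.1%}, PPV={ppv:.1%}")
```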
CONCLUSION: We found that 1 mm was the best slice thickness for U-HRCT images using the commercially available DL-LND system.
PMID:40622614 | DOI:10.1007/s11604-025-01828-z
Usefulness of compressed sensing coronary magnetic resonance angiography with deep learning reconstruction
Jpn J Radiol. 2025 Jul 7. doi: 10.1007/s11604-025-01830-5. Online ahead of print.
ABSTRACT
PURPOSE: Coronary magnetic resonance angiography (CMRA) scans are generally time-consuming. CMRA with compressed sensing (CS) and artificial intelligence (AI) (CSAI CMRA) is expected to shorten the imaging time while maintaining image quality. This study aimed to evaluate the usefulness of CS and AI for non-contrast CMRA.
MATERIALS AND METHODS: Twenty volunteers underwent both CS and conventional CMRA. Conventional CMRA employed parallel imaging (PI) with an acceleration factor of 2. CS CMRA employed a combination of PI and CS with an acceleration factor of 3. Deep learning reconstruction was performed offline on the CS CMRA data after scanning, which was defined as CSAI CMRA. We compared the imaging time, image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and vessel sharpness for each CMRA scan.
RESULTS: The CS CMRA scan time was significantly shorter than that of conventional CMRA (460 s [343, 753 s] vs. 727 s [567, 939 s], p < 0.001). The image quality scores of the left anterior descending artery (LAD) and left circumflex artery (LCX) were significantly higher for conventional CMRA (LAD: 3.3 ± 0.7, LCX: 3.3 ± 0.7) and CSAI CMRA (LAD: 3.7 ± 0.6, LCX: 3.5 ± 0.7) than for CS CMRA (LAD: 2.9 ± 0.6, LCX: 2.9 ± 0.6) (p < 0.05). The right coronary artery scores did not differ among the three groups (p = 0.087). The SNR and CNR were significantly higher for CSAI CMRA (SNR: 12.3 [9.7, 13.7], CNR: 12.3 [10.5, 14.5]) and CS CMRA (SNR: 10.5 [8.2, 12.6], CNR: 9.5 [7.9, 12.6]) than for conventional CMRA (SNR: 9.0 [7.8, 11.1], CNR: 7.7 [6.0, 10.1]) (p < 0.01). Vessel sharpness was significantly higher for CSAI CMRA (LAD: 0.87 [0.78, 0.91]) (p < 0.05), with no significant difference between CS CMRA (LAD: 0.77 [0.71, 0.83]) and conventional CMRA (LAD: 0.77 [0.71, 0.86]).
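SNR and CNR in such image-quality studies are conventionally computed from region-of-interest (ROI) statistics; a minimal sketch under the common mean-over-noise-SD definition (ROI placement and noise estimation vary between protocols, so this is a generic rendition, not the authors' exact method):

```python
import numpy as np

# Conventional ROI-based definitions: SNR = signal mean / noise SD,
# CNR = (signal mean - background mean) / noise SD.
def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    return float(signal_roi.mean() / noise_roi.std())

def cnr(signal_roi: np.ndarray, background_roi: np.ndarray,
        noise_roi: np.ndarray) -> float:
    return float((signal_roi.mean() - background_roi.mean()) / noise_roi.std())
```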
CONCLUSION: CSAI CMRA can shorten the imaging time while maintaining good image quality.
PMID:40622613 | DOI:10.1007/s11604-025-01830-5
Artificial Intelligence-Enabled Point-of-Care Echocardiography: Bringing Precision Imaging to the Bedside
Curr Atheroscler Rep. 2025 Jul 7;27(1):70. doi: 10.1007/s11883-025-01316-9.
ABSTRACT
PURPOSE OF REVIEW: The integration of artificial intelligence (AI) with point-of-care ultrasound (POCUS) is transforming cardiovascular diagnostics by enhancing image acquisition, interpretation, and workflow efficiency. These advancements hold promise in expanding access to cardiovascular imaging in resource-limited settings and enabling early disease detection through screening applications. This review explores the opportunities and challenges of AI-enabled POCUS as it reshapes the landscape of cardiovascular imaging.
RECENT FINDINGS: AI-enabled systems can reduce operator dependency, improve image quality, and support clinicians-both novice and experienced-in capturing diagnostically valuable images, ultimately promoting consistency across diverse clinical environments. However, widespread adoption faces significant challenges, including concerns around algorithm generalizability, bias, explainability, clinician trust, and data privacy. Addressing these issues through standardized development, ethical oversight, and clinician-AI collaboration will be critical to safe and effective implementation. Looking ahead, emerging innovations-such as autonomous scanning, real-time predictive analytics, tele-ultrasound, and patient-performed imaging-underscore the transformative potential of AI-enabled POCUS in reshaping cardiovascular care and advancing equitable healthcare delivery worldwide.
PMID:40622521 | DOI:10.1007/s11883-025-01316-9
Multi-Stage BiSTU Network Combining BiLSTM and Transformer for ABP Waveform Prediction from PPG Signals
Ann Biomed Eng. 2025 Jul 7. doi: 10.1007/s10439-025-03787-y. Online ahead of print.
ABSTRACT
PURPOSE: Cardiovascular disease (CVD) remains a global health issue, and arterial blood pressure (ABP) waveforms provide critical physiological data that aid in the early diagnosis of CVD. However, existing pulse waveform evaluation methods are insufficient for accurately predicting ABP. This study aims to propose a novel U-Net joint network architecture, the BiSTU Sequential Network, to predict high-quality ABP waveforms.
METHODS: The designed BiSTU Sequential Network integrates a Bidirectional Long Short-Term Memory (Bi-LSTM) model to capture temporal dependencies, a Transformer model with multi-head attention mechanisms to extract detailed features, and a MultiRes Convolutional Block Attention Module U-Net (MCBAMU-Net) for multi-scale feature extraction. The model was trained using 12,000 vital sign records from 942 ICU patients.
RESULTS: Experimental results demonstrate that the predicted ABP waveforms closely align with the actual waveforms, achieving a mean absolute error (MAE) of 1.78 ± 2.15 mmHg, a root mean square error (RMSE) of 2.79 mmHg, and an R² of 0.98. The model meets the standards of the Association for the Advancement of Medical Instrumentation (AAMI), with MAEs of 2.94 ± 3.43 mmHg for systolic blood pressure (SBP) and 4.22 ± 5.18 mmHg for diastolic blood pressure (DBP). Under the British Hypertension Society (BHS) standards, the accuracy rates within 5 mmHg are 85.3% for DBP and 72.4% for SBP, and both exceed 97% within 15 mmHg.
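The BHS grading cited above is defined by the fraction of absolute errors falling within 5, 10, and 15 mmHg; a minimal sketch of that computation (the thresholds are the published BHS ones; the input arrays are placeholders):

```python
import numpy as np

# BHS-style accuracy: percentage of absolute BP errors within 5/10/15 mmHg.
# Grade A requires >= 60% within 5, >= 85% within 10, and >= 95% within 15.
def bhs_percentages(predicted: np.ndarray, reference: np.ndarray) -> dict:
    abs_err = np.abs(predicted - reference)
    return {t: float((abs_err <= t).mean() * 100) for t in (5, 10, 15)}
```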
CONCLUSION: The BiSTU Sequential Network exhibits significant potential for accurate, non-invasive prediction of arterial blood pressure. Its predictions closely match actual waveforms and comply with multiple clinical standards, indicating broad application prospects and contributing to the early diagnosis and monitoring of cardiovascular diseases.
PMID:40622504 | DOI:10.1007/s10439-025-03787-y
pyDOSEIA: A Python Package for Radiological Impact Assessment during Long-term or Accidental Atmospheric Releases
Health Phys. 2025 Jul 7. doi: 10.1097/HP.0000000000002014. Online ahead of print.
ABSTRACT
pyDOSEIA is a Python package designed for meteorological data processing and radiological impact assessment in diverse scenarios, including nuclear and radiological accidents. Built upon robust computational models and using modern programming techniques, pyDOSEIA employs the Gaussian Plume Model and follows IAEA and AERB guidelines, offering a comprehensive suite of tools for estimating radiation doses from various exposure pathways, including inhalation, ingestion, groundshine, submersion, and plumeshine. The package enables age-specific, distance-specific, and radionuclide-specific radiation dose computations, providing accurate and reliable calculations for both short-term and long-term exposures. Additionally, pyDOSEIA leverages up-to-date dose conversion factors, features parallel processing capabilities for rapid analysis of large datasets, and facilitates applications in machine learning and deep learning research. With its user-friendly interface and extensive documentation, pyDOSEIA empowers researchers, practitioners, and policymakers to assess radiation risks effectively, aiding in decision making and emergency preparedness efforts. The package is open-source and available on GitHub at https://github.com/BiswajitSadhu/pyDOSEIA.
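The Gaussian Plume Model underlying pyDOSEIA has a standard textbook closed form for ground-level air concentration downwind of a continuous point source; a minimal sketch of that general formula (the dispersion coefficients sigma_y and sigma_z must come from stability-class correlations, which the package handles internally; the values below are placeholders, and this is not pyDOSEIA's API):

```python
import math

# Ground-level Gaussian plume concentration with ground reflection:
# C = Q / (pi * u * sy * sz) * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2))
def plume_concentration(Q, u, sigma_y, sigma_z, y, H):
    """Q: release rate (Bq/s), u: wind speed (m/s), y: crosswind offset (m),
    H: effective release height (m). Returns air concentration in Bq/m^3."""
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

# Placeholder example: 1e9 Bq/s release, 3 m/s wind, 30 m stack, on centerline.
print(plume_concentration(Q=1e9, u=3.0, sigma_y=80.0, sigma_z=40.0, y=0.0, H=30.0))
```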
PMID:40622262 | DOI:10.1097/HP.0000000000002014
Impact of a computed tomography-based artificial intelligence software on radiologists' workflow for detecting acute intracranial hemorrhage
Diagn Interv Radiol. 2025 Jul 7. doi: 10.4274/dir.2025.253301. Online ahead of print.
ABSTRACT
PURPOSE: To assess the impact of a commercially available computed tomography (CT)-based artificial intelligence (AI) software for detecting acute intracranial hemorrhage (AIH) on radiologists' diagnostic performance and workflow in a real-world clinical setting.
METHODS: This retrospective study included a total of 956 non-contrast brain CT scans obtained over a 70-day period, interpreted independently by 2 board-certified general radiologists. Of these, 541 scans were interpreted during the initial 35 days before the implementation of AI software, and the remaining 415 scans were interpreted during the subsequent 35 days, with reference to AIH probability scores generated by the software. To assess the software's impact on radiologists' performance in detecting AIH, performance before and after implementation was compared. Additionally, to evaluate the software's effect on radiologists' workflow, Kendall's Tau was used to assess the correlation between the daily chronological order of CT scans and the radiologists' reading order before and after implementation. The early diagnosis rate for AIH (defined as the proportion of AIH cases read within the first quartile by radiologists) and the median reading order of AIH cases were also compared before and after implementation.
RESULTS: A total of 956 initial CT scans from 956 patients [mean age: 63.14 ± 18.41 years; male patients: 447 (47%)] were included. There were no significant differences in accuracy [from 0.99 (95% confidence interval: 0.99-1.00) to 0.99 (0.98-1.00), P = 0.343], sensitivity [from 1.00 (0.99-1.00) to 1.00 (0.99-1.00), P = 0.859], or specificity [from 1.00 (0.99-1.00) to 0.99 (0.97-1.00), P = 0.252] following the implementation of the AI software. However, the daily correlation between the chronological order of CT scans and the radiologists' reading order significantly decreased [Kendall's Tau, from 0.61 (0.48-0.73) to 0.01 (0.00-0.26), P < 0.001]. Additionally, the early diagnosis rate significantly increased [from 0.49 (0.34-0.63) to 0.76 (0.60-0.93), P = 0.013], and the daily median reading order of AIH cases significantly decreased [from 7.25 (Q1-Q3: 3-10.75) to 1.5 (1-3), P < 0.001] after the implementation.
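The workflow analysis above reduces to correlating two per-day orderings of the same scans; a minimal sketch using scipy's Kendall's Tau (the orders are hypothetical):

```python
from scipy.stats import kendalltau

# Correlation between the order scans arrived and the order they were read.
# Tau near 1 means first-come-first-read; tau near 0 means the reading order
# was reprioritized (e.g., AI-flagged AIH cases pulled to the front).
arrival_order = [1, 2, 3, 4, 5, 6, 7, 8]
reading_order = [3, 1, 6, 2, 8, 4, 7, 5]   # hypothetical post-AI reading order
tau, p_value = kendalltau(arrival_order, reading_order)
print(f"tau={tau:.2f}, p={p_value:.3f}")
```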
CONCLUSION: After the implementation of CT-based AI software for detecting AIH, the radiologists' daily reading order was considerably reprioritized to allow more rapid interpretation of AIH cases without compromising diagnostic performance in a real-world clinical setting.
CLINICAL SIGNIFICANCE: With the increasing number of CT scans and the growing burden on radiologists, optimizing the workflow for diagnosing AIH through CT-based AI software integration may enhance the prompt and efficient treatment of patients with AIH.
PMID:40622194 | DOI:10.4274/dir.2025.253301
A Fundamental Study on the Removal of Vascular Pulsation Artifacts Using U-Net-Based Deep Neural Network
Cureus. 2025 Jun 5;17(6):e85400. doi: 10.7759/cureus.85400. eCollection 2025 Jun.
ABSTRACT
INTRODUCTION: Artifacts caused by vascular pulsation manifest as periodically high signals in the phase direction, often overlapping the target area and hindering accurate observation. Traditionally, these artifacts have been mitigated using flow compensation and presaturation pulses. However, complete removal remains challenging owing to extended imaging times and the need to consider the specific absorption rate. Therefore, we aimed to propose a deep learning network for postprocessing to reduce these artifacts.
MATERIALS AND METHODS: Following approval from the institutional ethics committee, magnetic resonance imaging scans were conducted on 15 adult volunteers to create an image dataset. Short tau inversion recovery (STIR) images of the lower leg, where artifacts were prevalent, were acquired. The same cross-section was imaged under conditions likely to produce artifacts and under conditions designed to minimize them. We propose an artifact reduction network that combines a batch normalization layer and a dropout layer based on the U-Net architecture. Network performance was evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) on the test images. Visual evaluations were conducted using a five-point scale to assess artifact reduction and image resolution. Statistical analyses were performed for each evaluation metric. Profiles of the artifact-prone areas were obtained and assessed before and after artifact reduction.
RESULTS: The average PSNR was 27.83 and 28.57 for the artifact-laden and corrected image groups, respectively. The average SSIM values were 0.869 and 0.882 for the artifact-laden and corrected image groups, respectively. No significant differences were observed between the artifact-laden and corrected image groups for either PSNR (p = 0.315) or SSIM (p = 0.436). The average visual assessment scores for artifact presence were 4.68, 3.52, and 4.34 for the reference, artifact-laden, and corrected image groups, respectively. The average visual assessment scores for image resolution were 4.34, 4.30, and 3.86 for the reference, artifact-laden, and corrected image groups, respectively. No significant difference was observed between the reference and corrected image groups in the presence of artifacts (p = 0.456), although significant differences were noted between these groups and the artifact-laden image group. Furthermore, no significant differences were observed among the three groups regarding resolution evaluation.
CONCLUSION: To our knowledge, this is the first study to apply deep learning to reduce flow artifacts caused by vascular pulsation using STIR images. We proposed a U-Net-based pulsation artifact reduction network and demonstrated its potential utility. Further detailed evaluation is required to develop an approach suitable for clinical application.
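PSNR and SSIM, the quantitative metrics used above, have standard implementations; a minimal sketch with scikit-image on stand-in arrays (not the study's STIR data):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in images: a reference slice and a "corrected" slice with mild noise.
reference = np.random.rand(256, 256)
corrected = reference + 0.05 * np.random.randn(256, 256)

psnr = peak_signal_noise_ratio(reference, corrected, data_range=1.0)
ssim = structural_similarity(reference, corrected, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```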
PMID:40621369 | PMC:PMC12228430 | DOI:10.7759/cureus.85400
Deep learning based time-dependent reliability analysis of an underactuated lower-limb robot exoskeleton for gait rehabilitation
Proc Inst Mech Eng H. 2025 Jul 7:9544119251349362. doi: 10.1177/09544119251349362. Online ahead of print.
ABSTRACT
This study evaluates the reliability of an underactuated wearable lower-limb exoskeleton designed to assist with gait rehabilitation. Recognizing the complexity of system reliability, a deep learning framework augmented with Long Short-Term Memory (LSTM) was utilized for the time-dependent reliability analysis of dynamic systems. The research commenced with the development of a lower-limb gait robot, modeled on a Stephenson III six-bar linkage mechanism. Following the mechanical design, computer-aided design (CAD) tools were employed to conceptualize a lower-limb robotic exoskeleton for rehabilitation purposes. The design incorporated two metallic materials (aluminum and steel) and a composite material (carbon fiber), tested using SolidWorks®. The prototype achieved a lightweight design (~1.63 kg) with the carbon fiber material. An LSTM-enhanced deep neural network algorithm was implemented to predict the time-dependent reliability of joint displacements and end-effector trajectories. Finally, conditional probability methods were applied to complete the time-dependent system reliability assessment. The designed mechanical system for gait rehabilitation demonstrated high reliability (R ≈ 0.87). Over 200 simulation runs, reliability trends showed consistent and robust predictions.
PMID:40621669 | DOI:10.1177/09544119251349362
An Explainable Connectome Convolutional Transformer for Multimodal Autism Spectrum Disorder Classification
Int J Neural Syst. 2025 Aug;35(8):2550043. doi: 10.1142/S0129065725500431.
ABSTRACT
The diagnosis of autism spectrum disorder (ASD) is often hampered by its heterogeneity and reliance on time-consuming behavioral assessments. Automated neuroimaging-based diagnostic tools offer a promising alternative, but multi-site data integration often introduces variability, hindering the achievement of accurate and interpretable results. This study presents the Connectome Convolutional Transformer (CCTF), a multimodal deep learning framework that integrates functional and structural brain connectivity information from fMRI and sMRI modalities. The CCTF enriches feature representation by incorporating diverse functional connectivity metrics and structural covariance networks based on multiple morphological properties. It employs a connectome convolutional embedding module and transformer encoder to capture and refine brain connectivity patterns. In addition, a node-to-graph pooling layer facilitates the identification of potential ASD biomarkers. Evaluation on the multi-site ABIDE dataset demonstrated that CCTF outperformed state-of-the-art methods, achieving accuracies of [Formula: see text] for fMRI, [Formula: see text] for sMRI, and [Formula: see text] for the ensemble fMRI+sMRI model in intra-site cross-validation. In the inter-site leave-one-site-out cross-validation, the CCTF maintained its superiority, with the ensemble model reaching [Formula: see text] accuracy, underscoring its robustness and generalizability across different sites. The identified brain regions are consistent with established ASD neurobiology, underscoring CCTF's potential to advance the understanding of the neural mechanisms underlying this complex disorder.
PMID:40621646 | DOI:10.1142/S0129065725500431
Early warning score and feasible complementary approach using artificial intelligence-based bio-signal monitoring system: a review
Biomed Eng Lett. 2025 Jun 25;15(4):717-734. doi: 10.1007/s13534-025-00486-4. eCollection 2025 Jul.
ABSTRACT
Early warning scores (EWS) have become an essential component of patient safety strategies in healthcare environments worldwide. These systems aim to identify patients at risk of clinical deterioration by evaluating vital signs and other physiological parameters, enabling timely intervention by rapid response teams. Despite proven benefits and widespread adoption, conventional EWS have limitations that may affect their ability to effectively detect and respond to patient deterioration. There is growing interest in integrating continuous multimodal monitoring technologies and advanced analytics, particularly artificial intelligence (AI) and machine learning (ML)-based approaches, to address these limitations and enhance EWS performance. This review provides a comprehensive overview of the current state and potential future directions of AI-based bio-signal monitoring in early warning systems. It examines emerging trends and techniques in AI and ML for bio-signal analysis, exploring the possibilities and potential applications of various bio-signals, such as electroencephalography, electrocardiography, and electromyography, in early warning systems. However, significant challenges exist in developing and implementing AI-based bio-signal monitoring within early warning systems, including data acquisition strategies, data quality and standardization, interpretability and explainability, validation and regulatory approval, integration into clinical workflows, and ethical and legal considerations. Addressing these challenges requires a multidisciplinary approach involving close collaboration between healthcare professionals, data scientists, engineers, and other stakeholders. Future research should focus on developing advanced data fusion techniques, personalized adaptive models, real-time and continuous monitoring, explainable and reliable AI, and regulatory and ethical frameworks. By addressing these challenges and opportunities, the integration of AI and bio-signals into early warning systems can enhance patient monitoring and clinical decision support, ultimately improving healthcare quality and safety. In conclusion, integrating AI and bio-signals into early warning systems represents a promising approach to improving patient care outcomes and supporting clinical decision-making. As research in this field continues to evolve, it is crucial to develop safe, effective, and ethically responsible solutions that can be seamlessly integrated into clinical practice, harnessing the power of innovative technology to enhance patient care and improve individual and population health and well-being.
PMID:40621610 | PMC:PMC12226448 | DOI:10.1007/s13534-025-00486-4
Efficient sparse-view medical image classification for low radiation and rapid COVID-19 diagnosis
Biomed Eng Lett. 2025 May 22;15(4):785-795. doi: 10.1007/s13534-025-00478-4. eCollection 2025 Jul.
ABSTRACT
This study proposes a deep learning-based diagnostic model called the Projection-wise Masked Autoencoder (ProMAE) for rapid and accurate COVID-19 diagnosis using sparse-view CT images. ProMAE employs a column-wise masking strategy during pre-training to effectively learn critical diagnostic features from sinograms, even under extremely sparse conditions. The trained ProMAE can directly classify sparse-view sinograms without requiring CT image reconstruction. Experiments on sparse-view data with 50%, 75%, 85%, 95%, and 99% sparsity show that ProMAE achieves a diagnostic accuracy of over 95% at all sparsity levels and, in particular, outperforms ResNet, ConvNeXt, and conventional MAE models in COVID-19 diagnosis in environments with 85% or higher sparsity. This capability is especially advantageous for the development of portable and flexible imaging systems during large-scale outbreaks such as COVID-19, as it ensures accurate diagnosis while minimizing radiation exposure, making it a vital tool in resource-limited and high-demand settings.
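The column-wise masking strategy can be pictured as zeroing whole sinogram columns, since each column corresponds to one projection view; a minimal sketch of that idea (shapes and masking ratio are illustrative, and ProMAE's encoder/decoder are omitted):

```python
import numpy as np

# Projection-wise masking: each sinogram column is one projection view, so
# masking whole columns mimics the missing views of sparse-view CT.
def mask_projections(sinogram: np.ndarray, mask_ratio: float, seed: int = 0):
    rng = np.random.default_rng(seed)
    n_views = sinogram.shape[1]
    masked_idx = rng.choice(n_views, size=int(round(mask_ratio * n_views)),
                            replace=False)
    masked = sinogram.copy()
    masked[:, masked_idx] = 0.0
    return masked, masked_idx

sino = np.random.rand(367, 180)                 # (detector bins, views)
masked, idx = mask_projections(sino, mask_ratio=0.85)
```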
PMID:40621608 | PMC:PMC12229398 | DOI:10.1007/s13534-025-00478-4
Insights into motor impairment assessment using myographic signals with artificial intelligence: a scoping review
Biomed Eng Lett. 2025 Jun 5;15(4):693-716. doi: 10.1007/s13534-025-00483-7. eCollection 2025 Jul.
ABSTRACT
Myographic signals can effectively detect and assess subtle changes in muscle function; however, their measurement and analysis are often limited in clinical settings compared to inertial measurement units. Recently, the advent of artificial intelligence (AI) has made the analysis of complex myographic signals more feasible. This scoping review aims to examine the use of myographic signals in conjunction with AI for assessing motor impairments and highlight potential limitations and future directions. We conducted a systematic search using specific keywords in the Scopus and PubMed databases. After a thorough screening process, 111 relevant studies were selected for review. These studies were organized based on target applications (measurement modality, measurement location, and AI application task), sample demographics (age, sex, ethnicity, and pathology), and AI models (general approach and algorithm type). Among various myographic measurement modalities, surface electromyography was the most commonly used. In terms of AI approaches, machine learning with feature engineering was the predominant method, with classification tasks being the most common application of AI. Our review also noted a significant bias in participant demographics, with a greater representation of males compared to females and healthy individuals compared to clinical populations. Overall, our findings suggest that integrating myographic signals with AI has the potential to provide more objective and clinically relevant assessments of motor impairments.
PMID:40621607 | PMC:PMC12229422 | DOI:10.1007/s13534-025-00483-7
BoKDiff: best-of-K diffusion alignment for target-specific 3D molecule generation
Bioinform Adv. 2025 Jun 10;5(1):vbaf137. doi: 10.1093/bioadv/vbaf137. eCollection 2025.
ABSTRACT
MOTIVATION: Structure-based drug design (SBDD) leverages the 3D structure of target proteins to guide therapeutic development. While generative models like diffusion models and geometric deep learning show promise in ligand design, challenges such as limited protein-ligand data and poor alignment reduce their effectiveness. We introduce BoKDiff, a domain-adapted framework inspired by alignment strategies in large language and vision models that combines multi-objective optimization with Best-of-K alignment to enhance ligand generation.
RESULTS: Built on DecompDiff, BoKDiff generates diverse ligands and ranks them using a weighted score based on QED, SA, and docking metrics. To overcome alignment issues, we reposition each ligand's center of mass to match its docking pose, enabling more accurate sub-component extraction. We further incorporate a Best-of-N (BoN) sampling strategy to select optimal candidates without model fine-tuning. BoN achieves QED > 0.6, SA > 0.75, and a success rate above 35%. BoKDiff outperforms prior models on the CrossDocked2020 dataset with an average docking score of -8.58 and a 26% valid-molecule generation rate. This is the first study to integrate Best-of-K alignment and BoN sampling into SBDD, demonstrating their potential for practical, high-quality ligand design.
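Best-of-K selection as described reduces to generating K candidates and keeping the one with the best weighted score; a minimal sketch (the weights and sign convention are illustrative, as the abstract does not give the exact weighting):

```python
# Rank K generated ligands by a weighted QED/SA/docking score and keep the best.
# Lower (more negative) docking scores are better, hence the minus sign.
def weighted_score(qed, sa, docking, w=(1.0, 1.0, 1.0)):
    return w[0] * qed + w[1] * sa - w[2] * docking

def best_of_k(candidates):
    """candidates: iterable of dicts with 'qed', 'sa', and 'docking' keys."""
    return max(candidates,
               key=lambda c: weighted_score(c["qed"], c["sa"], c["docking"]))

pool = [{"qed": 0.62, "sa": 0.78, "docking": -8.1},
        {"qed": 0.55, "sa": 0.80, "docking": -9.0}]
print(best_of_k(pool))   # the second ligand wins on its stronger docking score
```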
AVAILABILITY AND IMPLEMENTATION: Code is available at https://github.com/khodabandeh-ali/BoKDiff.git.
PMID:40621602 | PMC:PMC12228967 | DOI:10.1093/bioadv/vbaf137
An Artificial Intelligence Pipeline for Hepatocellular Carcinoma: From Data to Treatment Recommendations
Int J Gen Med. 2025 Jul 2;18:3581-3595. doi: 10.2147/IJGM.S529322. eCollection 2025.
ABSTRACT
Hepatocellular carcinoma (HCC) poses significant clinical challenges, including difficulties in early diagnosis and the complexity of treatment options. Artificial intelligence (AI) technologies are emerging as powerful tools to address these issues through a unified AI pipeline. This pipeline begins with data ingestion and preprocessing, integrating multimodal data such as imaging, genomic and clinical records. Machine learning and deep learning techniques are then applied to analyze these data, improving tumor detection, characterization, and early diagnosis. The pipeline extends to personalized treatment planning, where AI integrates diverse data types to predict patient responses to various therapies. In drug development, AI accelerates the discovery of new treatments through virtual screening and molecular modeling, while also identifying potential new uses for existing drugs. AI further enhances patient management through remote monitoring and intelligent support systems, enabling real-time data analysis and personalized care. In research, AI improves big data analysis and clinical trial design, uncovering new biomarkers and optimizing patient recruitment and outcome prediction. However, challenges such as data quality, standardization, and privacy remain. Future developments in multimodal data integration and edge computing promise to further enhance AI's impact on HCC diagnosis, treatment, and research, leading to improved patient outcomes and more effective management strategies.
PMID:40621598 | PMC:PMC12229156 | DOI:10.2147/IJGM.S529322
Advanced deep learning framework for underwater object detection with multibeam forward-looking sonar
Struct Health Monit. 2024 Mar 24;24(4):1991-2007. doi: 10.1177/14759217241235637. eCollection 2025 Jul.
ABSTRACT
Underwater object detection (UOD) is an essential activity in maintaining and monitoring underwater infrastructure, playing an important role in efficient and low-risk asset management. In underwater environments, sonar, recognized for overcoming the limitations of optical imaging in low-light and turbid conditions, has increasingly gained popularity for UOD. However, due to the low resolution and limited foreground-background contrast of sonar images, existing sonar-based object detection algorithms still face challenges regarding precision and transferability. To address these challenges, this article proposes an advanced deep learning framework for UOD that uses data from multibeam forward-looking sonar. The framework is adapted from the network architecture of YOLOv7, one of the state-of-the-art vision-based object detection algorithms, by incorporating unique optimizations in three key aspects: data preprocessing, feature fusion, and loss functions. These improvements are extensively tested on a dedicated public dataset, showing superior object classification performance compared to selected existing sonar-based methods. Through experiments conducted on an underwater remotely operated vehicle, the proposed framework validates significant enhancements in target classification, localization, and transfer learning capabilities. Since engineering structures have geometric shapes similar to the objects tested in this study, the proposed framework presents potential applicability to underwater structural inspection and monitoring, and autonomous asset management.
PMID:40621572 | PMC:PMC12227173 | DOI:10.1177/14759217241235637
Unsupervised Imputation of Non-ignorably Missing Data Using Importance-Weighted Autoencoders
Stat Biopharm Res. 2025;17(2):222-234. doi: 10.1080/19466315.2024.2368787. Epub 2024 Jul 15.
ABSTRACT
Deep Learning (DL) methods have dramatically increased in popularity in recent years. While their initial success was demonstrated in the classification and manipulation of image data, there has been significant growth in the application of DL methods to problems in the biomedical sciences. However, the greater prevalence and complexity of missing data in biomedical datasets present significant challenges for DL methods. Here, we provide a formal treatment of missing data in the context of Variational Autoencoders (VAEs), a popular unsupervised DL architecture commonly utilized for dimension reduction, imputation, and learning latent representations of complex data. We propose a new VAE architecture, NIMIWAE, that is one of the first to flexibly account for both ignorable and non-ignorable patterns of missingness in input features at training time. Following training, samples drawn from the approximate posterior distribution of the missing data can be used for multiple imputation, facilitating downstream analyses on high-dimensional incomplete datasets. We demonstrate through statistical simulation that our method outperforms existing approaches for unsupervised learning tasks and imputation accuracy. We conclude with a case study of an EHR dataset pertaining to 12,000 ICU patients containing a large number of diagnostic measurements and clinical outcomes, where many features are only partially observed.
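The multiple-imputation step follows the standard pattern: draw several completed datasets from the posterior over the missing values, analyze each, and pool with Rubin's rules; a minimal sketch of the pooling (the posterior draws themselves would come from the trained NIMIWAE model, abstracted away here):

```python
import numpy as np

# Rubin's rules: combine a point estimate and its variance across M imputations.
def pool_estimates(estimates, variances):
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    q_bar = estimates.mean()                    # pooled point estimate
    within = variances.mean()                   # average within-imputation variance
    between = estimates.var(ddof=1)             # between-imputation variance
    total_var = within + (1 + 1 / m) * between  # Rubin's total variance
    return q_bar, total_var

# Hypothetical regression coefficients and variances from 5 imputed datasets:
print(pool_estimates([0.82, 0.79, 0.85, 0.80, 0.83],
                     [0.010, 0.012, 0.011, 0.010, 0.013]))
```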
PMID:40621507 | PMC:PMC12223448 | DOI:10.1080/19466315.2024.2368787
af2rave: protein ensemble generation with physics-based sampling
Digit Discov. 2025 Jul 4. doi: 10.1039/d5dd00201j. Online ahead of print.
ABSTRACT
We introduce af2rave, an open-source Python package that implements an improved and automated version of our previous AlphaFold2-RAVE protocol. AlphaFold2-RAVE integrates machine learning-based structure prediction with physics-driven sampling to generate alternative protein conformations efficiently. It is well established that protein structures are not static but exist as ensembles of conformations, many of which are functionally relevant yet challenging to resolve experimentally. While deep learning models like AlphaFold2 can predict structural ensembles, they lack explicit physical validation. The AlphaFold2-RAVE family of methods addresses this limitation by combining reduced multiple sequence alignment (MSA) AlphaFold2 predictions with biased or unbiased molecular dynamics (MD) simulations to efficiently explore local conformational space. Compared to our previous work, the current workflow significantly reduces the amount of a priori knowledge required about a system, allowing the user to focus on the conformational diversity they would like to sample. This is achieved by a feature selection module that automatically picks up the important collective variables to monitor. The improved workflow was validated on multiple systems with the af2rave package, including E. coli adenosine kinase (ADK) and human DDR1 kinase, successfully identifying distinct functional states with minimal prior biological knowledge. Furthermore, we demonstrate that af2rave achieves conformational sampling efficiency comparable to long unbiased MD simulations on the SARS-CoV-2 spike protein receptor-binding domain while significantly reducing the computational cost. The package provides a streamlined workflow for researchers to generate and analyze alternative protein conformations, offering an accessible tool for drug discovery and structural biology.
PMID:40621440 | PMC:PMC12226971 | DOI:10.1039/d5dd00201j
Deep Learning Multi Modal Melanoma Detection: Algorithm Development and Validation
JMIR AI. 2025 Jul 5. doi: 10.2196/66561. Online ahead of print.
ABSTRACT
BACKGROUND: The visual similarity of melanoma and seborrheic keratosis has made it difficult for elderly patients with disabilities to know when to seek medical attention, contributing to the metastasis of melanoma.
OBJECTIVE: In this paper, we present a novel multi-modal deep learning-based technique to distinguish between melanoma and seborrheic keratosis.
METHODS: Our strategy is three-fold: (1) utilize patient image data to train and test three deep learning models using transfer learning (ResNet50, InceptionV3, and VGG16) and one author-designed model, (2) use patient metadata to train and test a deep learning model, and (3) combine the predictions of the most accurate image model with those of the metadata model, using nonlinear least squares regression to assign ideal weights to each model for a combined prediction, as sketched below.
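Step (3) can be sketched with scipy's least-squares solver fitting combination weights on held-out predictions; this is a generic rendition under assumed variable names, not the authors' code:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit weights w so that w[0]*p_image + w[1]*p_meta best matches the labels,
# then use the weighted sum as the combined melanoma probability.
def fit_combination_weights(p_image, p_meta, labels):
    residuals = lambda w: w[0] * p_image + w[1] * p_meta - labels
    return least_squares(residuals, x0=[0.5, 0.5]).x

# Hypothetical validation-set probabilities from the two models:
p_img = np.array([0.9, 0.2, 0.7, 0.4])
p_met = np.array([0.8, 0.3, 0.6, 0.5])
y = np.array([1.0, 0.0, 1.0, 0.0])
w = fit_combination_weights(p_img, p_met, y)
combined = w[0] * p_img + w[1] * p_met   # weighted ensemble prediction
```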
RESULTS: The accuracy of the combined model was 88% (195/221 classified correctly) on test data from the HAM10000 dataset. Model reliability was assessed by visualizing the output activation map of each model and comparing the diagnosis patterns to that of dermatologists. The addition of metadata to the image dataset was key to reducing the false negative and false positive rate simultaneously, thereby producing better metrics and improving overall model accuracy.
CONCLUSIONS: Results from this experiment could be used to eliminate late diagnosis of melanoma via easy access to an app. Future experiments can utilize text data (subjective data pertaining to how the patient felt over a certain period of time) to allow this model to reflect the real hospital setting to a greater extent.
PMID:40619207 | DOI:10.2196/66561
Progress on prediction models for temporomandibular disorders
Zhonghua Kou Qiang Yi Xue Za Zhi. 2025 Jul 2;60(7):787-792. doi: 10.3760/cma.j.cn112144-20250401-00114. Online ahead of print.
ABSTRACT
Temporomandibular disorders (TMD), a common condition in oral and maxillofacial surgery, significantly impair patients' quality of life. Early prediction and appropriate treatment of TMD are therefore critically important. Research on TMD prediction models has evolved from traditional statistical methods to machine learning and subsequently to deep learning, with each phase offering distinct contributions and limitations. Traditional statistical methods can accurately identify independent risk factors affecting treatment efficacy but generally rely on substantial prior knowledge and assumptions. Machine learning techniques are capable of processing large-scale, high-dimensional data and autonomously learning patterns and regularities within datasets; however, they exhibit strong dependence on data quality and limited model generalization capability. Deep learning approaches excel at automatically extracting temporal patterns and trends from time-series data while effectively capturing complex nonlinear relationships, yet they require extensive training datasets and suffer from interpretability challenges due to their inherent black-box nature. This review synthesizes the applications and outcomes of these methodologies in TMD research, analyzes their respective strengths and constraints, and explores future directions for advancement in this field.
PMID:40619145 | DOI:10.3760/cma.j.cn112144-20250401-00114