Deep learning
Integrating Artificial Intelligence in Next-Generation Sequencing: Advances, Challenges, and Future Directions
Curr Issues Mol Biol. 2025 Jun 19;47(6):470. doi: 10.3390/cimb47060470.
ABSTRACT
The integration of artificial intelligence (AI) into next-generation sequencing (NGS) has revolutionized genomics, offering unprecedented advancements in data analysis, accuracy, and scalability. This review explores the synergistic relationship between AI and NGS, highlighting its transformative impact across genomic research and clinical applications. AI-driven tools, including machine learning and deep learning, enhance every aspect of NGS workflows, from experimental design and wet-lab automation to bioinformatics analysis of the generated raw data. Key applications of AI integration in NGS include variant calling, epigenomic profiling, transcriptomics, and single-cell sequencing, where AI models such as CNNs, RNNs, and hybrid architectures outperform traditional methods. In cancer research, AI enables precise tumor subtyping, biomarker discovery, and personalized therapy prediction, while in drug discovery, it accelerates target identification and repurposing. Despite these advancements, challenges persist, including data heterogeneity, model interpretability, and ethical concerns. This review also discusses the emerging role of AI in third-generation sequencing (TGS), addressing long-read-specific challenges such as fast and accurate basecalling, as well as epigenetic modification detection. Future directions should focus on implementing federated learning to address data privacy, advancing interpretable AI to improve clinical trust, and developing unified frameworks for seamless integration of multi-modal omics data. By fostering interdisciplinary collaboration, AI promises to unlock new frontiers in precision medicine, making genomic insights more actionable and scalable.
PMID:40699869 | DOI:10.3390/cimb47060470
A Multi-Model Machine Learning Framework for Identifying Raloxifene as a Novel RNA Polymerase Inhibitor from FDA-Approved Drugs
Curr Issues Mol Biol. 2025 Apr 28;47(5):315. doi: 10.3390/cimb47050315.
ABSTRACT
RNA-dependent RNA polymerase (RdRP) represents a critical target for antiviral drug development. We developed a multi-model machine learning framework combining five traditional algorithms (ExtraTreesClassifier, RandomForestClassifier, LGBMClassifier, BernoulliNB, and BaggingClassifier) with a CNN deep learning model to identify potential RdRP inhibitors among FDA-approved drugs. Using the PubChem dataset AID 588519, our ensemble models achieved the highest performance with accuracy, ROC-AUC, and F1 scores higher than 0.70, while the CNN model demonstrated complementary predictive value with a specificity of 0.77 on external validation. Molecular docking studies with the norovirus RdRP (PDB: 4NRT) identified raloxifene as a promising candidate, with a binding affinity (-8.8 kcal/mol) comparable to the positive control (-9.2 kcal/mol). The molecular dynamics simulation confirmed stable binding with RMSD values of 0.12-0.15 nm for the protein-ligand complex and consistent hydrogen bonding patterns. Our findings suggest that raloxifene may possess RdRP inhibitory activity, providing a foundation for its experimental validation as a potential broad-spectrum antiviral agent.
PMID:40699714 | DOI:10.3390/cimb47050315
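The multi-model framework above combines class-probability outputs from several base classifiers. The abstract does not give the exact combination rule, so the sketch below shows generic soft voting (weighted averaging of per-model probabilities) with illustrative numbers; function names are assumptions, not from the paper.

```python
import numpy as np

def soft_vote(prob_stack, weights=None):
    """Combine per-model class-probability predictions by weighted averaging.

    prob_stack: array-like of shape (n_models, n_samples, n_classes).
    Returns (predicted_labels, averaged_probabilities).
    """
    probs = np.asarray(prob_stack, dtype=float)
    n_models = probs.shape[0]
    if weights is None:
        w = np.full(n_models, 1.0 / n_models)
    else:
        w = np.asarray(weights, dtype=float) / np.sum(weights)
    avg = np.tensordot(w, probs, axes=1)  # shape (n_samples, n_classes)
    return avg.argmax(axis=-1), avg

# Example: three models scoring two compounds (inactive = 0, active = 1).
stack = [
    [[0.8, 0.2], [0.4, 0.6]],  # model 1
    [[0.7, 0.3], [0.3, 0.7]],  # model 2
    [[0.6, 0.4], [0.2, 0.8]],  # model 3
]
labels, avg = soft_vote(stack)  # labels -> [0, 1]
```

In practice the six models (five traditional classifiers plus the CNN) would each supply one slice of `prob_stack` from their `predict_proba`-style outputs.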
Advances in machine learning for ABCA4-related retinopathy: segmentation and phenotyping
Int Ophthalmol. 2025 Jul 23;45(1):314. doi: 10.1007/s10792-025-03690-4.
ABSTRACT
PURPOSE: Stargardt disease, also called ABCA4-related retinopathy (ABCA4R), is the most common form of juvenile-onset macular dystrophy and yet lacks an FDA-approved treatment. Substantial progress has been made through landmark studies like that of the Progression of Atrophy Secondary to Stargardt Disease (ProgStar), but tasks like image segmentation and phenotyping still pose major challenges in terms of monitoring disease progression and categorizing patient subgroups. Furthermore, these methods are subjective and laborious. Recent advancements in machine learning (ML) and deep learning show considerable promise in automating these processes.
METHODS: This scoping review explores ML applications in ABCA4R, with a focus on segmentation and phenotyping. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, 15 articles were selected from 264, with 12 focused on the task of segmenting atrophic lesions, retinal flecks, retinal layer boundaries, or en-face imaging. Three studies addressed phenotyping based on electroretinography (ERG), visual acuity, and microperimetry.
RESULTS: Several effective approaches were implemented in these studies, including ensemble modeling, self-attention mechanisms, soft-label approaches, and dynamic frameworks that consider the extent of tissue damage. Top-performing models achieved segmentation Dice scores of 0.99 and ERG phenotyping accuracies of 90% and greater. Small datasets and variable presentations remain significant challenges, while advanced methods such as Monte Carlo dropout and active learning improve pipeline efficiency and performance.
CONCLUSION: ML techniques are well on their way to automating key steps in ABCA4R evaluation with excellent performance. These emerging methods have the potential to expedite therapeutic innovation and enhance our understanding of ABCA4R.
PMID:40699379 | DOI:10.1007/s10792-025-03690-4
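The Dice score cited in the results above is a standard overlap metric for binary segmentation masks. A minimal sketch of how it is computed (the function name and epsilon smoothing are conventions, not specified by the paper):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Two pixels predicted, one of which matches the single ground-truth pixel:
d = dice_score([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2 + 1) ≈ 0.667
```

A score of 0.99, as reported above, means near-perfect overlap between the automated and manual atrophy masks.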
Thin-Slice Brain CT Image Quality and Lesion Detection Evaluation in Deep Learning Reconstruction Algorithm
Clin Neuroradiol. 2025 Jul 23. doi: 10.1007/s00062-025-01542-3. Online ahead of print.
ABSTRACT
BACKGROUND: Clinical evaluation of Artificial Intelligence (AI)-based Precise Image (PI) algorithm in brain imaging remains limited. PI is a deep-learning reconstruction (DLR) technique that reduces image noise while maintaining a familiar Filtered Back Projection (FBP)-like appearance at low doses. This study aims to compare PI, Iterative Reconstruction (IR), and FBP in improving image quality and enhancing lesion detection in 1.0 mm thin-slice brain computed tomography (CT) images.
METHODS: A retrospective analysis was conducted on brain non-contrast CT scans from August to September 2024 at our institution. Each scan was reconstructed using four methods: routine 5.0 mm FBP (Group A), thin-slice 1.0 mm FBP (Group B), thin-slice 1.0 mm IR (Group C), and thin-slice 1.0 mm PI (Group D). Subjective image quality was assessed by two radiologists using a 4- or 5‑point Likert scale. Objective metrics included contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and image noise across designated regions of interest (ROIs).
RESULTS: 60 patients (mean age, 65.47 ± 18.40 years; 29 males, 31 females) were included. Among these, 39 patients had lesions, primarily low-density lacunar infarcts. Thin-slice PI images demonstrated the lowest image noise and artifacts, alongside the highest CNR and SNR values (p < 0.001) compared to Groups A, B, and C. Subjective assessments revealed that both PI and IR provided significantly improved image quality over routine FBP (p < 0.05). Specifically, Group D (PI) achieved superior lesion conspicuity and diagnostic confidence, with a 100% detection rate for lacunar lesions, outperforming Groups B and A.
CONCLUSIONS: PI reconstruction significantly enhances image quality and lesion detectability in thin-slice brain CT scans compared to IR and FBP, suggesting its potential as a new clinical standard.
PMID:40699306 | DOI:10.1007/s00062-025-01542-3
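The objective metrics in this study (noise, SNR, CNR over ROIs) follow standard definitions, though exact formulations vary between papers. A sketch under common conventions, with illustrative toy ROI values (not from the study):

```python
import numpy as np

def image_noise(bg_roi):
    """Noise estimated as the standard deviation of a uniform background ROI."""
    return np.std(np.asarray(bg_roi, dtype=float))

def snr(roi, bg_roi):
    """Signal-to-noise ratio: mean ROI signal over background noise."""
    return np.mean(roi) / image_noise(bg_roi)

def cnr(roi_a, roi_b, bg_roi):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / image_noise(bg_roi)

# Toy HU-like values for illustration only:
tissue = np.array([100.0, 102.0, 98.0, 100.0])
lesion = np.array([60.0, 62.0, 58.0, 60.0])
background = np.array([10.0, 12.0, 8.0, 10.0])
noise = image_noise(background)
contrast = cnr(tissue, lesion, background)
```

Lower background noise (as reported for PI) directly raises both SNR and CNR for the same tissue contrast.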
VascX Models: Deep Ensembles for Retinal Vascular Analysis From Color Fundus Images
Transl Vis Sci Technol. 2025 Jul 1;14(7):19. doi: 10.1167/tvst.14.7.19.
ABSTRACT
PURPOSE: To present and validate deep learning model ensembles (VascX) for vessel, artery-vein, optic disc segmentation, and fovea localization for color fundus images (CFIs). VascX preprocessing and inference code and model weights were made publicly available to facilitate research on retinal vasculature.
METHODS: For model training, we combined over 15 published annotated datasets with CFIs from Dutch studies (mainly the Rotterdam Study). This resulted in diverse development sets with a variety of patient characteristics and imaging conditions. We trained UNet model ensembles using a new, more robust preprocessing algorithm and strong data augmentations. We compared VascX segmentation performance (Dice) to models with publicly available weights: AutoMorph and LittleWNet (LWNet). We compared the quality of VascX (and previous models') features by measuring agreement (mean absolute error [MAE] and Pearson correlation) with features extracted from grader segmentations.
RESULTS: Dice scores revealed better performance from VascX across most datasets evaluated, especially for artery-vein and optic disc segmentation. VascX performed more consistently as the quality of the images decreased and for both disc and fovea-centered images. These improvements translated into higher-quality vascular features. Of 24 features evaluated, 14 showed a significant improvement in MAE when compared to AutoMorph and 23 when compared to LWNet. VascX had the highest correlations with ground-truth features in all but two cases.
CONCLUSIONS: VascX models perform well across a variety of conditions, likely due to the size and diversity of our development sets. VascX represents an important improvement in segmentation quality that translates into better vascular features to support more robust analyses of the retinal vasculature.
TRANSLATIONAL RELEVANCE: By making VascX public, we aim to facilitate and improve research linking retinal vascular biomarkers to ophthalmic and systemic conditions, relevant for the detection, prevention, and monitoring of disease.
PMID:40699175 | DOI:10.1167/tvst.14.7.19
Just-in-Time DNP Book Club: An Inclusive Immersion Experience
Nurs Educ Perspect. 2025 Jul 23. doi: 10.1097/01.NEP.0000000000001448. Online ahead of print.
ABSTRACT
A just-in-time faculty-led virtual book club for a Doctor of Nursing Practice program was initiated in response to student interest. Students met virtually to discuss advanced practice nursing implications of a nonfiction book in accordance with the American Association of Colleges of Nursing Essentials. Reading a book independently may be insufficient for deep learning, but individual summary reflections following faculty-led discussions connected domains, competencies, and subcompetencies. This unplanned reactive activity connected a real-world situation to the principles of advanced nursing practice and was deemed valuable by students and faculty.
PMID:40699031 | DOI:10.1097/01.NEP.0000000000001448
MDNN: memetic deep neural network for genomic prediction
Brief Bioinform. 2025 Jul 2;26(4):bbaf352. doi: 10.1093/bib/bbaf352.
ABSTRACT
Genomic prediction (GP) has made significant progress in the field of breeding. Traditional linear models perform well in handling simple traits but have limitations in extracting nonlinear features for complex traits. The introduction of deep learning (DL) techniques has provided a new approach to GP, especially suited for high-dimensional data processing and complex trait prediction. However, traditional DL models require manual design of the network architecture, which necessitates continuous experimentation and modification. In this paper, we propose a new framework, MDNN, that utilizes the memetic algorithm for neural architecture search and automatically optimizes the network architecture. Compared with DNNGP, MDNN achieved a 36.49% improvement in the average Pearson correlation coefficient on the wheat599 dataset and a 12.28% improvement on the wheat2000 dataset.
PMID:40698862 | DOI:10.1093/bib/bbaf352
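The headline numbers above are relative improvements in the mean Pearson correlation between predicted and observed phenotypes. A sketch of both computations, using made-up correlation values for the improvement example (the 0.56/0.41 pair is illustrative, not taken from the paper):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between predicted and observed phenotype values."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

def relative_improvement(r_new, r_old):
    """Percentage improvement of one mean correlation over a baseline."""
    return 100.0 * (r_new - r_old) / r_old

# A perfectly linear prediction gives r = 1.0:
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
# Illustrative: baseline r = 0.41 improved to r = 0.56 is a ~36.6% gain.
gain = relative_improvement(0.56, 0.41)
```

Note that a "36.49% improvement" in this convention is relative to the baseline correlation, not an absolute increase of 0.36 in r.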
A multi-stage 3D convolutional neural network algorithm for CT-based lung segment parcellation
J Appl Clin Med Phys. 2025 Aug;26(8):e70193. doi: 10.1002/acm2.70193.
ABSTRACT
BACKGROUND: Current approaches to lung parcellation utilize established fissures between lobes to provide estimates of lobar volume. However, deep learning segment parcellation provides the ability to better assess regional heterogeneity in ventilation and perfusion.
PURPOSE: We aimed to validate and demonstrate the clinical applicability of CT-based lung segment parcellation using deep learning on a clinical cohort with mixed airways disease.
METHODS: Using a 3D convolutional neural network, airway centerlines were determined using an image-to-image network. Tertiary bronchi were identified on top of the airway centerline, and the pulmonary segments were parcellated based on the spatial relationship with tertiary and subsequent bronchi. The data obtained by following this workflow was used to train a neural network to enable end-to-end lung segment parcellation directly from 123 chest CT images. The performance of the parcellation network was then evaluated quantitatively using expert-defined reference masks on 20 CTs distinct from the training set, where the Dice score and inclusion rate (i.e., percentage of the detected bronchi covered by the correct segment) between the manual segmentation and automatic parcellation results were calculated for each lung segment. Lastly, a qualitative evaluation of external validation was performed on 20 CTs prospectively collected by having two radiologists review the parcellation accuracy in healthy individuals (n = 10) and in patients with chronic obstructive pulmonary disease (COPD) (n = 10).
RESULTS: Means and standard deviation of Dice score and inclusion rate between automatic and manual segmentation of twenty patient CTs were 86.81 (SD = 24.54) and 0.75 (SD = 0.19), respectively, across all lung segments. The mean age of the qualitative dataset was 54.4 years (SD = 16.4 years), with 45% (n = 9) women. There was 99.2% intra-reader agreement on average with the produced segments. Individuals with COPD had greater mismatch compared to healthy controls.
CONCLUSIONS: A deep-learning algorithm can create parcellation masks from chest CT scans, and the quantitative and qualitative evaluations yielded encouraging results for the potential clinical usage of lung analysis at the pulmonary segment level among those with structural airway disease.
PMID:40698834 | DOI:10.1002/acm2.70193
A modular deep learning pipeline for enhanced plane-wave beamforming and B-mode image quality
Med Phys. 2025 Aug;52(8):e17948. doi: 10.1002/mp.17948.
ABSTRACT
BACKGROUND: In ultrasound imaging using plane-wave (PW) techniques, image quality and contrast often suffer, especially when examining anechoic structures. Traditional beamforming methods like Delay-and-Sum or coherent PW compounding face limitations in balancing resolution and frame rate, which can result in suboptimal diagnostic accuracy.
PURPOSE: This study aims to introduce a modular beamforming pipeline that overcomes these challenges and enhances PW image quality. The beamforming process is divided into two modules: a multi-attention U-Net-based model for capturing complex dependencies in time-delayed data, and a super-resolution model for scaling up to the original B-mode image grid. With this design, we seek to improve PW image quality and modularity in the ultrasound imaging process.
METHODS: We implemented a modular beamforming approach, comprising a multi-attention U-Net model and a super-resolution model. We conducted experiments using simulated, experimental, and in vivo data from the PICMUS dataset to evaluate the performance of our pipeline against conventional methods such as PW1, PW9, and U-Net. Key metrics assessed included contrast-to-noise ratio (CNR), contrast ratio (CR), generalized Contrast-to-Noise Ratio (gCNR), and resolution.
RESULTS: Our model demonstrated superior performance across all metrics. In simulated data, the model achieved a 0.99 improvement in CNR, a 3.5 dB increase in CR, and an 18% increase in gCNR compared to PW1. Experimental data showed a 0.7 enhancement in CNR, an 8.6 dB improvement in CR, and a 24% increase in gCNR when compared to PW1. In vivo data also revealed significant improvements, with a 1.9 dB increase in CR and a 0.15 enhancement in CNR over PW1. The enhanced performance in the anechoic cyst region underscores the model's effectiveness in improving image quality.
CONCLUSIONS: The proposed modular beamforming approach offers significant advantages, including adaptability and improved image quality, despite the complexity of managing two models concurrently. The pipeline's flexibility in frame rate and image quality allows for customization based on specific clinical applications, making it a promising alternative to traditional methods.
PMID:40698752 | DOI:10.1002/mp.17948
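The gCNR metric reported above quantifies how separable two regions' pixel-intensity distributions are (0 = fully overlapping, 1 = fully separable), which is why it is favored for anechoic-cyst contrast. A histogram-based sketch of the usual definition, with toy region values:

```python
import numpy as np

def gcnr(region_a, region_b, bins=64):
    """Generalized CNR: 1 minus the overlap of the regions' intensity histograms."""
    a = np.asarray(region_a, dtype=float).ravel()
    b = np.asarray(region_b, dtype=float).ravel()
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    # Shared bin edges so the two histograms are directly comparable.
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return 1.0 - np.minimum(pa, pb).sum()

# Toy data: a perfectly dark cyst vs. uniformly bright tissue.
cyst = np.zeros(256)
tissue = np.full(256, 5.0)
g = gcnr(cyst, tissue)  # fully separable intensities -> 1.0
```

Unlike CNR, gCNR is bounded and insensitive to monotonic grayscale transformations, making cross-method comparisons fairer.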
Machine learning-driven inverse design of puncture needles with tailored mechanics
Minim Invasive Ther Allied Technol. 2025 Jul 23:1-10. doi: 10.1080/13645706.2025.2537927. Online ahead of print.
ABSTRACT
BACKGROUND: In minimally invasive surgery, designing puncture needles with customizable structures to achieve personalized puncture performance is a significant challenge. Existing reverse design methods struggle to capture the complex nonlinear behavior of needle-tissue interactions.
METHODS: This study proposes a machine-learning-based reverse design method aimed at achieving precise customization of needle mechanical behavior. We developed a rapid reverse design framework integrating machine learning and finite element analysis, capable of directly generating optimal structural parameters from target puncture force-penetration depth curves. Through training on large-scale finite element simulation data, deep learning neural network models captured the complex mapping relationship between needle structure and mechanical response.
RESULTS: In rigorous cross-validation, the prediction results showed normalized root mean square errors (NRMSE) of 0.06381 and 0.06234 compared to the target curves and finite element analysis, respectively. The model achieved 98.2% classification accuracy for curve types, with loss functions converging to optimal values after sufficient training epochs.
CONCLUSION: This approach demonstrates high accuracy and robustness in needle-design customization. It not only opens new avenues for rapid, customized design of puncture needles but also provides an innovative paradigm for intelligent design of complex medical devices, potentially advancing precision medicine technologies and shortening design cycles.
PMID:40698677 | DOI:10.1080/13645706.2025.2537927
AlphaBind, a domain-specific model to predict and optimize antibody-antigen binding affinity
MAbs. 2025 Dec;17(1):2534626. doi: 10.1080/19420862.2025.2534626. Epub 2025 Jul 22.
ABSTRACT
Antibodies are versatile therapeutic molecules that use combinatorial sequence diversity to cover a vast fitness landscape. Designing optimal antibody sequences, however, remains a major challenge. Recent advances in deep learning provide opportunities to address this challenge by learning sequence-function relationships to accurately predict fitness landscapes. These models enable efficient in silico prescreening and optimization of antibody candidates. By focusing experimental efforts on the most promising candidates guided by deep learning predictions, antibodies with optimal properties can be designed more quickly and effectively. Here we present AlphaBind, a domain-specific model that uses protein language model embeddings and pre-training on millions of quantitative laboratory measurements of antibody-antigen binding strength to achieve state-of-the-art performance for guided affinity optimization of parental antibodies. We demonstrate that an AlphaBind-powered antibody optimization pipeline can deliver candidates with substantially improved binding affinity across four parental antibodies (some of which were already affinity-matured) and using two different types of training data. The resulting candidates, which include up to 11 mutations from parental sequence, yield a sequence diversity that allows optimization of other biophysical characteristics, all while using only a single round of data generation for each parental antibody. AlphaBind weights and code are publicly available at: https://github.com/A-Alpha-Bio/alphabind.
PMID:40693434 | DOI:10.1080/19420862.2025.2534626
DeepSecMS Advances DIA-Based Selenoproteome Profiling Through Cys-to-Sec Proxy Training
Adv Sci (Weinh). 2025 Jul 22:e04109. doi: 10.1002/advs.202504109. Online ahead of print.
ABSTRACT
Selenoproteins, defined as proteins containing the 21st amino acid, selenocysteine (Sec, U), are functionally important but rare, with only 25 selenoproteins characterized in the entire human proteome to date. To comprehensively analyze selenoproteomes, previously developed selenocysteine-specific mass spectrometry (SecMS) and the selenocysteine insertion sequence (SECIS)-independent selenoprotein database (SIS) have provided effective tools for analyzing the selenoproteome and, more importantly, hold the potential to uncover new selenoproteins. In this study, a deep learning approach is employed to develop the DeepSecMS method. Given the rarity of Sec and its chemical similarity to cysteine (Cys, C), a proxy training strategy is utilized using a large dataset of Cys-containing peptides to generate a large-scale theoretical library of Sec-containing peptides. It is shown that DeepSecMS enables the accurate prediction of critical features of Sec-containing peptides, including MS2, retention time (RT), and ion mobility (IM). By integrating DeepSecMS with data-independent acquisition (DIA) methods, the identification of known selenoproteins is significantly enhanced across diverse cell types and tissues. More importantly, it facilitates the identification of numerous highly scored, potential novel selenoproteins. These findings highlight the powerful potential of DeepSecMS in advancing selenoprotein research. Moreover, the proxy training strategy may be extended to the analysis of other rare post-translational modifications.
PMID:40693414 | DOI:10.1002/advs.202504109
Dual-Network Deep Learning for Accelerated Head and Neck MRI: Enhanced Image Quality and Reduced Scan Time
Head Neck. 2025 Jul 22. doi: 10.1002/hed.28255. Online ahead of print.
ABSTRACT
BACKGROUND: Head-and-neck MRI faces inherent challenges, including motion artifacts and trade-offs between spatial resolution and acquisition time. We aimed to evaluate a dual-network deep learning (DL) super-resolution method for improving image quality and reducing scan time in T1- and T2-weighted head-and-neck MRI.
METHODS: In this prospective study, 97 patients with head-and-neck masses were enrolled at xx from August 2023 to August 2024. After exclusions, 58 participants underwent paired conventional and accelerated T1WI and T2WI MRI sequences, with the accelerated sequences being reconstructed using a dual-network DL framework for super-resolution. Image quality was assessed both quantitatively (signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], contrast ratio [CR]) and qualitatively by two blinded radiologists using a 5-point Likert scale for image sharpness, lesion conspicuity, structure delineation, and artifacts. Wilcoxon signed-rank tests were used to compare paired outcomes.
RESULTS: Among 58 participants (34 men, 24 women; mean age 51.37 ± 13.24 years), DL reconstruction reduced scan times by 46.3% (T1WI) and 26.9% (T2WI). Quantitative analysis showed significant improvements in SNR (T1WI: 26.33 vs. 20.65; T2WI: 14.14 vs. 11.26) and CR (T1WI: 0.20 vs. 0.18; T2WI: 0.34 vs. 0.30; all p < 0.001), with comparable CNR (p > 0.05). Qualitatively, image sharpness, lesion conspicuity, and structure delineation improved significantly (p < 0.05), while artifact scores remained similar (all p > 0.05).
CONCLUSIONS: The dual-network DL method significantly enhanced image quality and reduced scan times in head-and-neck MRI while maintaining diagnostic performance comparable to conventional methods. This approach offers potential for improved workflow efficiency and patient comfort.
PMID:40693394 | DOI:10.1002/hed.28255
Developing an artificial intelligence model for phase recognition in robot-assisted radical prostatectomy
BJU Int. 2025 Jul 22. doi: 10.1111/bju.16862. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop and evaluate a convolutional neural network (CNN)-based model for recognising surgical phases in robot-assisted laparoscopic radical prostatectomy (RARP), with an emphasis on model interpretability and cross-platform validation.
METHODS: A CNN using EfficientNet B7 was trained on video data from 75 RARP cases with the hinotori robotic system. Seven phases were annotated: bladder drop, prostate preparation, bladder neck dissection, seminal vesicle dissection, posterior dissection, apical dissection, and vesicourethral anastomosis. A total of 808 774 video frames were extracted at 1 frame/s for training and testing. Validation was performed on 25 RARP cases using the da Vinci robotic system to assess cross-platform generalisability. Gradient-weighted class activation mapping was used to enhance interpretability by identifying key regions of interest for phase classification.
RESULTS: The CNN achieved 0.90 accuracy on the hinotori test set but dropped to 0.64 on the da Vinci dataset, thus indicating cross-platform limitations. Phase-specific F1 scores ranged from 0.77 to 0.97, with lower performance in the seminal vesicle dissection and apical dissection phases. Gradient-weighted class activation mapping visualisations revealed the model's focus on central pelvic structures rather than transient instruments, enhancing interpretability and insights into phase classification.
CONCLUSIONS: The model demonstrated high accuracy on a single robotic platform but requires further refinement for consistent cross-platform performance. Interpretability techniques will foster clinical trust and integration into workflows, advancing robotic surgery applications.
PMID:40693331 | DOI:10.1111/bju.16862
MSPO: A machine learning hyperparameter optimization method for enhanced breast cancer image classification
Digit Health. 2025 Jul 20;11:20552076251361603. doi: 10.1177/20552076251361603. eCollection 2025 Jan-Dec.
ABSTRACT
As one of the major threats to women's health worldwide, breast cancer requires early diagnosis and accurate classification, since they are key to optimizing therapeutic interventions and ensuring precise prognosis. Recently, deep learning has demonstrated notable advantages in breast cancer image classification. However, their performance heavily relies on the proper configuration of hyperparameters. To overcome the inefficiencies and weaknesses of conventional hyperparameter optimization methods, like limited effectiveness and vulnerability to premature convergence, this research proposes a Multi-Strategy Parrot Optimizer (MSPO) and applies it to breast cancer image classification tasks. Based on the original Parrot Optimizer, MSPO integrates several strategies, including Sobol sequence initialization, nonlinear decreasing inertia weight, and a chaotic parameter to enhance global exploration ability and convergence steadiness. Tests using the CEC 2022 benchmark functions reveal that MSPO surpasses leading algorithms regarding optimization precision and convergence rate. An ablation study was conducted on three variants of MSPO through CEC 2022 to further validate the effectiveness of each key strategy. Furthermore, MSPO is combined with the ResNet18 model and applied to the BreaKHis breast cancer image dataset. Results indicate that the model optimized by MSPO notably surpasses both the non-optimized version and other alternative optimization algorithms using four assessment indicators: accuracy, precision, recall, and F1-score. This validates the promising application potential and practical significance of MSPO in medical image classification tasks.
PMID:40693252 | PMC:PMC12277558 | DOI:10.1177/20552076251361603
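The abstract names three MSPO ingredients (Sobol sequence initialization, a nonlinearly decreasing inertia weight, and a chaotic parameter) without giving the update equations, so the sketch below shows one common form of each ingredient under illustrative parameter choices; it is not the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(n_agents, dim, lower, upper, seed=0):
    """Initialize a population with a Sobol sequence for low-discrepancy coverage."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n_agents)  # points in [0, 1)^dim; n_agents ideally a power of 2
    return qmc.scale(unit, lower, upper)

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Nonlinearly decreasing inertia: strong exploration early, exploitation late."""
    return w_min + (w_max - w_min) * (1.0 - t / t_max) ** 2

def chaotic_step(c):
    """Logistic-map chaotic parameter in (0, 1), used to perturb search steps."""
    return 4.0 * c * (1.0 - c)

# Example: 8 agents searching a 2-D hyperparameter box.
pop = sobol_init(8, 2, lower=[-5.0, -5.0], upper=[5.0, 5.0])
```

In a full optimizer, each iteration would scale velocity-like updates by `inertia_weight(t, t_max)` and inject `chaotic_step` perturbations to escape premature convergence.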
Varying the High-Pass Cut-Off Frequency Influences the Accuracy of the Model for Detection of Mind State Associated with Himalayan Yoga and Vipassana Meditation
Ann Neurosci. 2025 Jul 19:09727531251351067. doi: 10.1177/09727531251351067. Online ahead of print.
ABSTRACT
BACKGROUND: Meditation and Yoga practices are being adopted and gaining considerable interest as tools that prevent the occurrence of numerous ailments. Meditation is well described in several old religious manuscripts and has origins in ancient Indian practices that encourage emotional and personal well-being. Two different classification tasks were performed: one to identify the mind state associated with Vipassana meditation, and another to identify the mind state associated with Himalayan Yoga meditation. The tasks were performed for classifying non-meditative and meditative states with varying cut-off frequencies to obtain the best results.
PURPOSE: This study focuses mainly on how the high-pass cut-off frequency influences the single-trial accuracy of the model. The performance of the model depends on appropriate pre-processing, so the results of the high-pass filter (HPF) at different settings were methodically assessed. The accuracy of the model depends on many factors, such as the HPF, Independent Component Analysis (ICA), model building, and hyperparameter tuning; among these, one important preprocessing step is to choose the filter effectively to improve the classification results.
METHODS: Inception Convolutional Gated Recurrent Neural Network (IC-RNN) and Convolutional Neural Network (CNN) models were designed and compared to examine the varying effects of HPF.
RESULTS AND CONCLUSION: The highest accuracy of 86.19% was attained for the IC-RNN, and 99.45% was achieved for the CNN model, with the filter set at 1 Hz, for the Vipassana meditation classification task. The highest accuracy of 88.15% was attained for the IC-RNN, and 100% was achieved for the CNN model, with the same 1 Hz filter setting, for the Himalayan Yoga meditation classification task. HPF at 1 Hz consistently produced good results. Based on these outcomes, guidelines are suggested for filter settings to increase the performance of the model.
PMID:40693246 | PMC:PMC12276206 | DOI:10.1177/09727531251351067
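The preprocessing step this study varies, high-pass filtering of EEG, can be sketched with a zero-phase Butterworth filter at the 1 Hz cut-off that performed best above. The sampling rate and signal here are illustrative, not from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_eeg(signal, fs, cutoff_hz=1.0, order=4):
    """Zero-phase Butterworth high-pass filter, e.g. to remove slow EEG drift."""
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

fs = 256.0                                  # sampling rate in Hz (illustrative)
t = np.arange(0, 4.0, 1.0 / fs)
raw = 5.0 + np.sin(2 * np.pi * 10.0 * t)    # 10 Hz alpha-like rhythm on a DC offset
clean = highpass_eeg(raw, fs, cutoff_hz=1.0)
```

The 1 Hz cut-off removes the DC offset and slow drifts while leaving the 10 Hz oscillation essentially untouched, which is why filter choice interacts so strongly with downstream classification accuracy.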
Multidisciplinary Evaluation of an AI-Based Pneumothorax Detection Model: Clinical Comparison with Physicians in Edge and Cloud Environments
J Multidiscip Healthc. 2025 Jul 17;18:4099-4111. doi: 10.2147/JMDH.S535405. eCollection 2025.
ABSTRACT
BACKGROUND: Accurate and timely detection of pneumothorax on chest radiographs is critical in emergency and critical care settings. While subtle cases remain challenging for clinicians, artificial intelligence (AI) offers promise as a diagnostic aid. This retrospective diagnostic accuracy study evaluates a deep learning model developed using Google Cloud Vertex AI for pneumothorax detection on chest X-rays.
METHODS: A total of 152 anonymized frontal chest radiographs (76 pneumothorax, 76 normal), confirmed by computed tomography (CT), were collected from a single center between 2023 and 2024. The median patient age was 50 years (range: 18-95), with 67.1% male. The AI model was trained using AutoML Vision and evaluated in both cloud and edge deployment environments. Diagnostic accuracy metrics-including sensitivity, specificity, and F1 score-were compared with those of 15 physicians from four specialties (general practice, emergency medicine, thoracic surgery, radiology), stratified by experience level. Subgroup analysis focused on minimal pneumothorax cases. Confidence intervals were calculated using the Wilson method.
RESULTS: In cloud deployment, the AI model achieved an overall diagnostic accuracy of 0.95 (95% CI: 0.83, 0.99), sensitivity of 1.00 (95% CI: 0.83, 1.00), specificity of 0.89 (95% CI: 0.69, 0.97), and F1 score of 0.95 (95% CI: 0.86, 1.00). Comparable performance was observed in edge mode. The model outperformed junior clinicians and matched or exceeded senior physicians, particularly in detecting minimal pneumothoraces, where AI sensitivity reached 0.93 (95% CI: 0.79, 0.97) compared to 0.55 (95% CI: 0.38, 0.69) to 0.84 (95% CI: 0.69, 0.92) among human readers.
CONCLUSION: The Google Cloud Vertex AI model demonstrates high diagnostic performance for pneumothorax detection, including subtle cases. Its consistent accuracy across edge and cloud settings supports its integration as a second reader or triage tool in diverse clinical workflows, especially in acute care or resource-limited environments.
PMID:40693169 | PMC:PMC12278965 | DOI:10.2147/JMDH.S535405
Modern statistical techniques for cardiothoracic surgeons: Part 8-Bayesian analysis and beyond
Indian J Thorac Cardiovasc Surg. 2025 Aug;41(8):1102-1113. doi: 10.1007/s12055-025-01941-8. Epub 2025 Apr 21.
ABSTRACT
Bayesian analysis is a statistical approach that updates the probability of a hypothesis as new evidence emerges, combining prior knowledge with observed data to produce posterior probabilities. It is particularly useful in adaptive clinical trials and hierarchical modeling, offering flexibility and dynamic decision-making. Machine learning (ML), on the other hand, leverages algorithms to analyze complex patterns in large datasets, providing predictive insights for risk assessment and personalized treatment. Techniques such as deep learning and clustering enhance diagnostic accuracy and treatment optimization. Together, Bayesian methods and ML have the potential to revolutionize cardiothoracic research by integrating prior knowledge with data-driven analytics.
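The prior-to-posterior updating described above is easiest to see with a conjugate example: a Beta prior on a success probability (e.g., a surgical success rate) updated with binomial trial data yields a Beta posterior in closed form. A minimal sketch with illustrative numbers, not drawn from the article:

```python
def beta_binomial_update(a, b, successes, n):
    """Conjugate Bayesian update: a Beta(a, b) prior on a success
    probability, combined with `successes` out of `n` observed trials,
    gives a Beta(a + successes, b + n - successes) posterior."""
    return a + successes, b + (n - successes)

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Weakly informative prior Beta(2, 2); observe 18 successes in 20 trials.
a_post, b_post = beta_binomial_update(2, 2, 18, 20)  # Beta(20, 4)
estimate = beta_mean(a_post, b_post)                 # about 0.833
```

This closed-form update is what makes Bayesian interim analyses in adaptive trials cheap to compute: each new batch of patients simply increments the posterior parameters.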
PMID:40693004 | PMC:PMC12276153 | DOI:10.1007/s12055-025-01941-8
Deep learning for enhancement of low-resolution and noisy scanning probe microscopy images
Beilstein J Nanotechnol. 2025 Jul 16;16:1129-1140. doi: 10.3762/bjnano.16.83. eCollection 2025.
ABSTRACT
In this study, we employed traditional methods and deep learning models to improve the resolution and quality of low-resolution atomic force microscopy (AFM) images acquired under standard ambient scanning conditions. Both approaches were benchmarked quantitatively for fidelity and quality and were evaluated through a survey of AFM experts. The deep learning models outperform the traditional methods. Additionally, some common AFM artifacts, such as streaking, are present in the ground-truth high-resolution images; these artifacts are only partially attenuated by the traditional methods but are completely eliminated by the deep learning models. This work shows deep learning models to be superior for super-resolution tasks and enables a significant reduction in AFM measurement time, whereby low-pixel-resolution AFM images are enhanced in both resolution and fidelity through deep learning.
PMID:40692894 | PMC:PMC12278107 | DOI:10.3762/bjnano.16.83
Classifying office workers with and without cervicogenic headache or neck and shoulder pain using posture-based deep learning models: a multicenter retrospective study
Front Pain Res (Lausanne). 2025 Jul 7;6:1614143. doi: 10.3389/fpain.2025.1614143. eCollection 2025.
ABSTRACT
OBJECTIVE: To develop and evaluate deep learning models for classifying office workers with and without cervicogenic headache (CH) and/or neck and shoulder pain (NSP), based on habitual sitting posture images.
METHODS: This multicenter, retrospective, observational study analyzed 904 digital images of the habitual sitting postures of 531 office workers. Three deep learning models (VGG19, ResNet50, and EfficientNet B5) were trained and evaluated to classify CH, NSP, and combined CH + NSP. Model performance was assessed using 4-fold cross-validation with metrics including area under the curve (AUC), accuracy (ACC), sensitivity (Sen), specificity (Spe), and F1 score. Statistical significance was evaluated using 95% confidence intervals. Class Activation Mapping (CAM) was used to visualize the model focus areas.
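The metrics listed above (apart from AUC) all derive from the same 2x2 confusion matrix. As a reference, a minimal sketch with hypothetical counts, not taken from the study:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a binary confusion matrix:
    tp/fp/tn/fn = true/false positives and negatives."""
    sensitivity = tp / (tp + fn)                # true positive rate (recall)
    specificity = tn / (tn + fp)                # true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

# Hypothetical fold: 90 true positives, 10 false positives,
# 80 true negatives, 20 false negatives.
m = classification_metrics(90, 10, 80, 20)
```

In a k-fold setting like the study's 4-fold cross-validation, these metrics would be computed per fold and then summarized (e.g., mean with a 95% confidence interval across folds).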
RESULTS: Among 531 office workers (135 with CH, 365 with NSP, 108 with both conditions, and 139 controls), ResNet50 achieved the highest performance for CH classification, with an AUC of 0.782 (95% CI: 0.770-0.793) and an accuracy of 0.750 (95% CI: 0.731-0.768). NSP classification showed more modest results, with ResNet50 achieving an accuracy of 0.677 (95% CI: 0.640-0.713). In the combined CH + NSP classification, EfficientNet B5 demonstrated the highest AUC of 0.744 (95% CI: 0.647-0.841). CAM analysis revealed distinct focus areas for each condition: the cervical region for CH, the lower body for NSP, and broader neck and trunk regions for combined CH + NSP.
CONCLUSION: Deep learning models show potential for classifying CH and NSP based on habitual sitting posture images, with varying performances across conditions. The ability of these models to detect subtle postural patterns associated with different musculoskeletal conditions suggests their possible applications for early detection and intervention. However, the complex relationship between static posture and musculoskeletal pain underscores the need for a multimodal assessment approach in clinical practice.
PMID:40692757 | PMC:PMC12277355 | DOI:10.3389/fpain.2025.1614143