Deep learning
NiO/ZnO Nanocomposites for Multimodal Intelligent MEMS Gas Sensors
ACS Sens. 2025 Mar 24. doi: 10.1021/acssensors.4c02789. Online ahead of print.
ABSTRACT
Gas sensor arrays designed for pattern recognition face persistent challenges in achieving high sensitivity and selectivity for multiple volatile organic compounds (VOCs), particularly under varying environmental conditions. To address these limitations, we developed multimodal intelligent MEMS gas sensors by precisely tailoring the nanocomposite ratio of NiO and ZnO components. These sensors exhibit enhanced responses to ethylene glycol (EG) and limonene (LM) at different operating temperatures, demonstrating material-specific selectivity. Additionally, a multitask deep learning model is employed for real-time, quantitative detection of VOCs, accurately predicting their concentration and type. These results showcase the effectiveness of combining material optimization with advanced algorithms for real-world VOC detection, advancing the field of odor analysis tools.
PMID:40126565 | DOI:10.1021/acssensors.4c02789
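A minimal sketch (PyTorch) of the kind of multitask model the abstract mentions: a shared trunk over sensor-array features with one head classifying the VOC type and another regressing its concentration. The input size, layer widths, and two-class setup are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiTaskVOCNet(nn.Module):
    def __init__(self, n_features=8, n_voc_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(                      # shared representation
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.cls_head = nn.Linear(64, n_voc_classes)     # VOC type
        self.reg_head = nn.Linear(64, 1)                 # VOC concentration

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.reg_head(h).squeeze(-1)

model = MultiTaskVOCNet()
x = torch.randn(4, 8)                                    # 4 samples, 8 sensor features
logits, conc = model(x)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 0, 1])) \
     + nn.functional.mse_loss(conc, torch.rand(4))       # joint classification + regression loss
loss.backward()
```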
AI-Derived Blood Biomarkers for Ovarian Cancer Diagnosis: Systematic Review and Meta-Analysis
J Med Internet Res. 2025 Mar 24;27:e67922. doi: 10.2196/67922.
ABSTRACT
BACKGROUND: Emerging evidence underscores the potential application of artificial intelligence (AI) in discovering noninvasive blood biomarkers. However, the diagnostic value of AI-derived blood biomarkers for ovarian cancer (OC) remains inconsistent.
OBJECTIVE: We aimed to evaluate the research quality and the validity of AI-based blood biomarkers in OC diagnosis.
METHODS: A systematic search was performed in the MEDLINE, Embase, IEEE Xplore, PubMed, Web of Science, and the Cochrane Library databases. Studies examining the diagnostic accuracy of AI in discovering OC blood biomarkers were identified. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-AI tool. Pooled sensitivity, specificity, and area under the curve (AUC) were estimated using a bivariate model for the diagnostic meta-analysis.
RESULTS: A total of 40 studies were ultimately included. Most (n=31, 78%) included studies were evaluated as low risk of bias. Overall, the pooled sensitivity, specificity, and AUC were 85% (95% CI 83%-87%), 91% (95% CI 90%-92%), and 0.95 (95% CI 0.92-0.96), respectively. For contingency tables with the highest accuracy, the pooled sensitivity, specificity, and AUC were 95% (95% CI 90%-97%), 97% (95% CI 95%-98%), and 0.99 (95% CI 0.98-1.00), respectively. Stratification by AI algorithms revealed higher sensitivity and specificity in studies using machine learning (sensitivity=85% and specificity=92%) compared to those using deep learning (sensitivity=77% and specificity=85%). In addition, studies using serum reported substantially higher sensitivity (94%) and specificity (96%) than those using plasma (sensitivity=83% and specificity=91%). Stratification by external validation demonstrated significantly higher specificity in studies with external validation (specificity=94%) compared to those without external validation (specificity=89%), while the reverse was observed for sensitivity (74% vs 90%). No publication bias was detected in this meta-analysis.
CONCLUSIONS: AI algorithms demonstrate satisfactory performance in the diagnosis of OC using blood biomarkers and are anticipated to become an effective diagnostic modality in the future, potentially avoiding unnecessary surgeries. Future research is warranted to incorporate external validation into AI diagnostic models, as well as to prioritize the adoption of deep learning methodologies.
TRIAL REGISTRATION: PROSPERO CRD42023481232; https://www.crd.york.ac.uk/PROSPERO/view/CRD42023481232.
PMID:40126546 | DOI:10.2196/67922
Optic Nerve Crush Does Not Induce Retinal Ganglion Cell Loss in the Contralateral Eye
Invest Ophthalmol Vis Sci. 2025 Mar 3;66(3):49. doi: 10.1167/iovs.66.3.49.
ABSTRACT
PURPOSE: Optic nerve crush (ONC) is a model for studying optic nerve trauma. Unilateral ONC induces massive retinal ganglion cell (RGC) degeneration in the affected eye, leading to vision loss within a month. A common assumption has been that the non-injured contralateral eye is unaffected due to the minimal retino-retinal projections of the RGCs at the chiasm. Yet, recently, microglia, the brain-resident macrophages, have shown a responsive phenotype in the contralateral eye after ONC. Whether RGC loss accompanies this phenotype is still controversial.
METHODS: Using the available RGCode algorithm and our own newly developed deep-learning-based RGC-Quant tool, we quantified the total number and density of RGCs across the entire retina after ONC.
RESULTS: We confirm a short-term microglial response in the contralateral eye after ONC, but it did not affect microglia numbers. Furthermore, we could not confirm the previously reported RGC loss between naïve and contralateral retinas 5 weeks after ONC induction in the commonly used Cx3cr1creERT2 and C57BL6/J mouse models. Neither sex nor the choice of RGC marker (Brn3a versus RBPMS; Brn3a co-labeled, on average, 89% of RBPMS+ cells) explained this discrepancy, suggesting that the early microglia-responsive phenotype has no immediate consequences for RGC numbers.
CONCLUSIONS: Our results corroborate that unilateral optic nerve injury elicits a microglial response in the uninjured contralateral eye but without RGC loss. Therefore, the contralateral eye should be treated separately and not as an ONC control.
PMID:40126507 | DOI:10.1167/iovs.66.3.49
A Multi-Input Neural Network Model for Accurate MicroRNA Target Site Detection
Noncoding RNA. 2025 Mar 7;11(2):23. doi: 10.3390/ncrna11020023.
ABSTRACT
(1) Background: MicroRNAs are non-coding RNA sequences that regulate cellular functions by targeting messenger RNAs and inhibiting protein synthesis. Identifying their target sites is vital to understanding their roles, but it is challenging due to the high cost and time demands of experimental methods and the high false-positive rates of computational approaches. (2) Methods: We introduce a Multi-Input Neural Network (MINN) algorithm that integrates diverse biologically relevant features, including the microRNA duplex structure, substructures, minimum free energy, and base-pairing probabilities. For each feature derived from a microRNA target-site duplex, we create a corresponding image. These images are processed in parallel by the MINN algorithm, allowing it to learn a comprehensive and precise representation of the underlying biological mechanisms. (3) Results: On an experimentally validated test set, our method detects target sites with an AUPRC of 0.9373, a Precision of 0.8725, and a Recall of 0.8703, outperforming several commonly used computational methods for microRNA target-site prediction. (4) Conclusions: Incorporating diverse, biologically explainable features, such as duplex structure, substructures, their MFEs, and binding probabilities, enables our model to perform well on experimentally validated test data. These features, rather than nucleotide sequences, enable our model to generalize beyond specific sequence contexts and perform well on sequentially distant samples.
PMID:40126347 | DOI:10.3390/ncrna11020023
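A hedged sketch (PyTorch) of a multi-input network in the spirit of MINN: each biological feature is rendered as an image, passed through its own convolutional branch, and the branch outputs are concatenated for a binary target-site prediction. The image size (1x32x32), branch width, and number of inputs are assumptions for illustration.

```python
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultiInputNet(nn.Module):
    def __init__(self, n_inputs=4):
        super().__init__()
        self.branches = nn.ModuleList(branch() for _ in range(n_inputs))
        self.head = nn.Sequential(nn.Linear(32 * n_inputs, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, images):                   # list of (B, 1, H, W) tensors
        feats = [b(x) for b, x in zip(self.branches, images)]
        return torch.sigmoid(self.head(torch.cat(feats, dim=1))).squeeze(-1)

model = MultiInputNet()
imgs = [torch.randn(2, 1, 32, 32) for _ in range(4)]   # duplex, substructure,
score = model(imgs)                                     # MFE, pairing-probability maps
```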
Secondary-Structure-Informed RNA Inverse Design via Relational Graph Neural Networks
Noncoding RNA. 2025 Feb 26;11(2):18. doi: 10.3390/ncrna11020018.
ABSTRACT
RNA inverse design is an essential part of many RNA therapeutic strategies. To date, there have been great advances in computationally driven RNA design. Current machine learning approaches can predict the sequence of an RNA given its 3D structure with acceptable accuracy and at tremendous speed. The design and engineering of RNA regulators such as riboswitches, however, is often more difficult, partly due to their inherent conformational switching abilities. Although recent state-of-the-art models do incorporate information about the multiple structures that a sequence can fold into, there is great room for improvement in modeling structural switching. In this work, a relational geometric graph neural network is proposed that explicitly incorporates alternative structures to predict an RNA sequence. Converting the RNA structure into a geometric graph, the proposed model uses edge types to distinguish between the primary structure, secondary structure, and spatial positioning of the nucleotides when representing structures. The results show higher native sequence recovery rates than those of gRNAde across different test sets (e.g., 72% vs. 66%) and on a benchmark from the literature (60% vs. 57%). Secondary-structure edge types had a more significant impact on sequence recovery than the spatial edge types as defined in this work. Overall, these results suggest the need for more complex and case-specific characterization of RNA for successful inverse design.
PMID:40126342 | DOI:10.3390/ncrna11020018
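A minimal sketch (PyTorch) of relational message passing over an RNA graph with typed edges (backbone, secondary-structure pairs, spatial neighbours), as the abstract describes at a high level. The feature size, single-layer depth, and toy edge lists below are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):
    def __init__(self, dim, n_edge_types=3):
        super().__init__()
        self.rel = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_edge_types))
        self.self_loop = nn.Linear(dim, dim)

    def forward(self, h, edges):
        # edges: dict {edge_type: (src_indices, dst_indices)}
        out = self.self_loop(h)
        for etype, (src, dst) in edges.items():
            msg = self.rel[etype](h[src])            # edge-type-specific transform
            out = out.index_add(0, dst, msg)         # aggregate messages at destinations
        return torch.relu(out)

n_nodes, dim = 10, 32
h = torch.randn(n_nodes, dim)                        # per-nucleotide features
edges = {
    0: (torch.arange(n_nodes - 1), torch.arange(1, n_nodes)),  # primary (backbone)
    1: (torch.tensor([0, 1]), torch.tensor([9, 8])),           # secondary (base pairs)
    2: (torch.tensor([2]), torch.tensor([7])),                 # spatial contact
}
logits = nn.Linear(dim, 4)(RelationalLayer(dim)(h, edges))     # A/C/G/U logits per node
```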
Smectic-like bundle formation of planktonic bacteria upon nutrient starvation
Soft Matter. 2025 Mar 24. doi: 10.1039/d4sm01117a. Online ahead of print.
ABSTRACT
Bacteria aggregate through various intercellular interactions to build biofilms, but the effect of environmental changes on them remains largely unexplored. Here, by using an experimental device that overcomes past difficulties, we observed the collective response of Escherichia coli aggregates to dynamic changes in the growth conditions. We discovered that nutrient starvation caused bacterial cells to arrange themselves into bundle-shaped clusters, developing a structure akin to that of smectic liquid crystals. The degree of the smectic-like bundle order was evaluated by a deep learning approach. Our experiments suggest that both the depletion attraction by extracellular polymeric substances and the growth arrest are essential for the bundle formation. Since these effects of nutrient starvation at the single-cell level are common to many bacterial species, bundle formation might also be a common collective behavior that bacterial cells may exhibit under harsh environments.
PMID:40126189 | DOI:10.1039/d4sm01117a
Generation of a High-Precision Whole Liver Panorama and Cross-Scale 3D Pathological Analysis for Hepatic Fibrosis
Adv Sci (Weinh). 2025 Mar 24:e2502744. doi: 10.1002/advs.202502744. Online ahead of print.
ABSTRACT
The liver harbors complex cross-scale structures, and fibrosis-related alterations to these structures have a severe impact on the liver's diverse functions. However, the hepatic anatomic structures and their pathological alterations at the whole-liver scale remain to be elucidated. Combining the micro-optical sectioning tomography (MOST) system with liver Nissl staining, the first high-precision whole mouse liver atlas is generated, enabling visualization and analysis of the entire mouse liver. Thus, a detailed 3D panorama of CCl4-induced liver fibrosis pathology is constructed, capturing the 3D details of the central veins, portal veins, arteries, bile ducts, hepatic sinusoids, and liver cells. Pathological changes, including damaged sinusoids, steatotic hepatocytes, and collagen deposition, are region-specific and concentrated in the pericentral areas. Quantitative analysis shows a significantly reduced diameter and increased length density of the central vein. Additionally, a deep learning tool is used to segment steatotic hepatocytes, revealing that the volume proportion of steatotic regions is similar across liver lobes. Steatosis severity increases with proximity to the central vein, independent of central vein diameter. The approach allows cross-scale visualization of multiple structural components in liver research and promotes pathological studies from a 2D to a 3D perspective.
PMID:40126158 | DOI:10.1002/advs.202502744
Leveraging the internet of things and optimized deep residual networks for improved foliar disease detection in apple orchards
Network. 2025 Mar 24:1-37. doi: 10.1080/0954898X.2025.2472626. Online ahead of print.
ABSTRACT
Plant diseases significantly threaten food security by reducing the quantity and quality of agricultural products. This paper presents a deep learning approach for classifying foliar diseases in apple plants using the Tunicate Swarm Sine Cosine Algorithm-based Deep Residual Network (TSSCA-based DRN). Cluster heads in simulated Internet of Things (IoT) networks are selected by Fractional Lion Optimization (FLION), and images are pre-processed with a Gaussian filter and segmented using the DeepJoint model. The TSSCA, combining the Tunicate Swarm Algorithm (TSA) and the Sine Cosine Algorithm (SCA), enhances the classifier's effectiveness. Moreover, the Plant Pathology 2020 - FGVC7 dataset, designed for the classification of foliar diseases in apple trees, is used in this work. The TSSCA-based DRN outperforms other methods, achieving 97% accuracy, 94.666% specificity, 96.888% sensitivity, and 0.0442 J maximal energy, with significant improvements over existing approaches. Additionally, the proposed model demonstrates superior accuracy, outperforming other methods by 8.97%, 6.58%, 2.07%, 1.71%, 1.14%, 1.07%, 0.93%, and 0.64% relative to the Multidimensional Feature Compensation Residual neural network (MDFC - ResNet), Convolutional Neural Network (CNN), Multi-Context Fusion Network (MCFN), Advanced Segmented Dimension Extraction (ASDE), DRN, fuzzy deep convolutional neural network (FCDCNN), ResNet9-SE, Capsule Neural Network (CapsNet), IoT-based scrutinizing model, and Multi-Model Fusion Network (MMF-Net).
PMID:40126079 | DOI:10.1080/0954898X.2025.2472626
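A hedged sketch (NumPy) of the plain Sine Cosine Algorithm (SCA), one of the two metaheuristics the abstract combines into TSSCA for tuning the DRN. Only the standard SCA position update on a toy objective is shown; the tunicate-swarm hybridization and the DRN itself are not reproduced here.

```python
import numpy as np

def sca_minimize(objective, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0, a=2.0):
    rng = np.random.default_rng(0)
    X = rng.uniform(lo, hi, (pop, dim))
    best = X[np.argmin([objective(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * (a / iters)                       # linearly decreasing amplitude
        for i in range(pop):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)
            step = np.where(r4 < 0.5,                  # sine or cosine update per dimension
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lo, hi)
            if objective(X[i]) < objective(best):
                best = X[i].copy()
    return best

best = sca_minimize(lambda x: np.sum(x ** 2))          # toy sphere objective
```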
HUNHODRL: Energy efficient resource distribution in a cloud environment using hybrid optimized deep reinforcement model with HunterPlus scheduler
Network. 2025 Mar 24:1-26. doi: 10.1080/0954898X.2025.2480294. Online ahead of print.
ABSTRACT
This study aims to enhance educational security and legitimacy by addressing the problem of real-time student signature verification. The issue arises from growing concerns about identity theft and academic fraud in schools, which compromise the validity of tests and other academic evaluations. To overcome these problems, the paper presents a deep learning-based method for signature verification built on Convolutional Neural Networks (CNNs). The proposed method uses a VGG19 architecture trained and fine-tuned to handle the unique characteristics of student signatures. The procedure first pre-processes the image, after which the key signature features are extracted. Once these features are passed through the VGG19 network, the signature is classified as either genuine or unreliable. The proposed method offers flexibility and scalability for various educational settings, with the capacity to handle both batch and individual processing. The model's efficacy is demonstrated experimentally through accuracy, precision, and recall values that surpass existing techniques. The method ensures dependable performance under varied conditions, demonstrating resilience to several kinds of noise and distortion. The proposed deep learning model thus paves the way for addressing student signature verification, enhancing the security and legitimacy of academic institutions.
PMID:40126006 | DOI:10.1080/0954898X.2025.2480294
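A minimal transfer-learning sketch (PyTorch/torchvision) in the spirit of the VGG19-based classifier described above: a pretrained backbone with its final layer replaced for a two-class decision. The class count, frozen layers, and preprocessing are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
for p in model.features.parameters():        # freeze convolutional feature extractor
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)     # two-class output (e.g. genuine vs. questionable)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
x = torch.randn(4, 3, 224, 224)              # batch of signature images
loss = nn.functional.cross_entropy(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```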
Parallel convolutional SpinalNet: A hybrid deep learning approach for breast cancer detection using mammogram images
Network. 2025 Mar 24:1-41. doi: 10.1080/0954898X.2025.2480299. Online ahead of print.
ABSTRACT
Breast cancer is the foremost cause of mortality among females. Early diagnosis is necessary to reduce the death rate and offer individuals a better quality of life. Therefore, this work proposes a Parallel Convolutional SpinalNet (PConv-SpinalNet) for the efficient detection of breast cancer using mammogram images. At first, the input image is pre-processed using a Gabor filter. Tumour segmentation is conducted using LadderNet. Then, the segmented tumour samples are augmented using image manipulation, image erasing, and image mixing techniques. After that, the essential features, such as CNN features, Texton, Local Gabor Binary Patterns (LGBP), Scale-Invariant Feature Transform (SIFT), and Local Monotonic Pattern (LMP) with Discrete Cosine Transform (DCT), are extracted in the feature extraction phase. Finally, the detection of breast cancer is performed using PConv-SpinalNet, which is developed by integrating Parallel Convolutional Neural Networks (PCNN) and SpinalNet. The evaluation results show that PConv-SpinalNet achieved an accuracy of 88.5%, a True Positive Rate (TPR) of 89.7%, a True Negative Rate (TNR) of 90.7%, a Positive Predictive Value (PPV) of 91.3%, and a Negative Predictive Value (NPV) of 92.5%.
PMID:40125951 | DOI:10.1080/0954898X.2025.2480299
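A hedged sketch (OpenCV/NumPy) of Gabor-filter pre-processing of a mammogram, the first stage the abstract lists before segmentation and feature extraction. The kernel size and filter parameters are illustrative assumptions, and "mammogram.png" is a placeholder path.

```python
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Bank of Gabor kernels at several orientations; responses are combined by a per-pixel max.
responses = []
for theta in np.arange(0, np.pi, np.pi / 4):
    kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)  # ksize, sigma, theta, lambda, gamma, psi
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
gabor_enhanced = np.max(np.stack(responses), axis=0)                 # orientation-max response
```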
Artificial intelligence for Brugada syndrome diagnosis and gene variants interpretation
Am J Cardiovasc Dis. 2025 Feb 15;15(1):1-12. doi: 10.62347/YQHQ1079. eCollection 2025.
ABSTRACT
Brugada Syndrome (BrS) is a hereditary cardiac condition associated with an elevated risk of lethal arrhythmias, making precise and prompt diagnosis vital to prevent life-threatening outcomes. The diagnosis of BrS is challenging due to the requirement of invasive drug challenge tests, the limited capacity of human visual inspection to detect subtle electrocardiogram (ECG) patterns, and the transient nature of the disease. Artificial intelligence (AI) can detect almost all patterns of BrS in the ECG, some of which are even beyond the capability of expert eyes. AI is subcategorized into several model families, with deep learning considered the most beneficial, achieving the highest accuracy among them. With the capability to discriminate subtle data and analyze extensive datasets, AI has achieved higher accuracy, sensitivity, and specificity than trained cardiologists. Meanwhile, AI's proficiency in managing complex data enables the discovery of unclassified genetic variants. AI can also analyze data extracted from induced pluripotent stem cell-derived cardiomyocytes to distinguish BrS from other inherited cardiac arrhythmias. The aim of this study is to present a synopsis of the evolution of the various artificial intelligence algorithms utilized in the diagnosis of BrS and to compare their diagnostic abilities with those of trained cardiologists. In addition, the application of AI to the classification of BrS gene variants is also briefly discussed.
PMID:40124093 | PMC:PMC11928888 | DOI:10.62347/YQHQ1079
Attention-Enhanced Multi-Task Deep Learning Model for Classification and Segmentation of Esophageal Lesions
ACS Omega. 2025 Mar 4;10(10):10468-10479. doi: 10.1021/acsomega.4c10763. eCollection 2025 Mar 18.
ABSTRACT
Accurate detection and segmentation of esophageal lesions are crucial for diagnosing and treating gastrointestinal diseases. However, early detection of esophageal cancer remains challenging, contributing to a reduced five-year survival rate among patients. This paper introduces a novel multitask deep learning model for automatic diagnosis that integrates classification and segmentation tasks to assist endoscopists effectively. Our approach leverages the MobileNetV2 deep learning architecture enhanced with a mutual attention module, significantly improving the model's performance in determining the locations of esophageal lesions. Unlike traditional models, the proposed model is designed not to replace endoscopists but to empower them to correct false predictions when provided with additional Supporting Information. We evaluated the proposed model on three well-known data sets: Early Esophageal Cancer (EEC), CVC-ClinicDB, and KVASIR. The experimental results demonstrate promising performance, achieving high classification accuracies of 98.72% (F1-score: 98.08%) on CVC-ClinicDB, 98.95% (F1-score: 98.32%) on KVASIR, and 99.12% (F1-score: 99.00%) on our generated EEC data set. Compared to state-of-the-art models, our classification results show significant improvement. For the segmentation task, the model attained a Dice coefficient of 92.73% and an Intersection over Union (IoU) of 91.54%. These findings suggest that the proposed multitask deep learning model can effectively assist endoscopists in evaluating esophageal lesions, thereby alleviating their workload and enhancing diagnostic precision.
PMID:40124037 | PMC:PMC11923690 | DOI:10.1021/acsomega.4c10763
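A minimal sketch (PyTorch/torchvision) of a MobileNetV2 backbone shared by a classification head and a small segmentation decoder, mirroring the multitask layout the abstract describes. The mutual-attention module is omitted, and the head sizes and decoder depth are simplified assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskEsoNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.backbone = models.mobilenet_v2(weights=None).features   # (B, 1280, h, w)
        self.cls_head = nn.Linear(1280, n_classes)                   # lesion class
        self.seg_head = nn.Sequential(                               # coarse lesion-mask decoder
            nn.Conv2d(1280, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        f = self.backbone(x)
        logits = self.cls_head(f.mean(dim=(2, 3)))                   # global average pooling
        mask = nn.functional.interpolate(                            # upsample mask to input size
            self.seg_head(f), size=x.shape[2:], mode="bilinear", align_corners=False)
        return logits, mask

model = MultiTaskEsoNet()
logits, mask = model(torch.randn(1, 3, 224, 224))
```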
Artificial intelligence in obstructive sleep apnea: A bibliometric analysis
Digit Health. 2025 Mar 21;11:20552076251324446. doi: 10.1177/20552076251324446. eCollection 2025 Jan-Dec.
ABSTRACT
OBJECTIVE: To conduct a bibliometric analysis using VOSviewer and CiteSpace to explore the current applications, trends, and future directions of artificial intelligence (AI) in obstructive sleep apnea (OSA).
METHODS: On 13 September 2024, a computer search of the Web of Science Core Collection was conducted to identify literature on the application of AI in OSA published between 1 January 2011 and 30 August 2024. Visualization analysis of countries, institutions, journal sources, authors, co-cited authors, citations, and keywords was performed using VOSviewer and CiteSpace, and descriptive analysis tables were created using Microsoft Excel 2021.
RESULTS: A total of 867 articles were included in this study. The number of publications was low and stable from 2011 to 2016, with a significant increase after 2017. China had the highest number of publications. Alvarez, Daniel, and Hornero, Roberto were the two most prolific authors. Universidad de Valladolid and the IEEE Journal of Biomedical and Health Informatics were the most productive institution and journal, respectively. The top three authors in terms of co-citation frequency were Hassan, Ar; Young, T; and Vicini, C. "Estimation of the global prevalence and burden of obstructive sleep apnoea: a literature-based analysis" was the most frequently cited reference. Keywords such as "OSA," "machine learning," "electrocardiography," and "deep learning" were dominant.
CONCLUSION: AI's application in OSA research is expanding. This study indicates that AI, particularly deep learning, will continue to be a key research area, focusing on diagnosis, identification, personalized treatment, prognosis assessment, telemedicine, and management. Future efforts should enhance international cooperation and interdisciplinary communication to maximize the potential of AI in advancing OSA research, comprehensively empowering sleep health, bringing more precise, convenient, and personalized medical services to patients and ushering in a new era of sleep health.
PMID:40123882 | PMC:PMC11930495 | DOI:10.1177/20552076251324446
uniDINO: Assay-independent feature extraction for fluorescence microscopy images
Comput Struct Biotechnol J. 2025 Feb 24;27:928-936. doi: 10.1016/j.csbj.2025.02.020. eCollection 2025.
ABSTRACT
High-content imaging (HCI) enables the characterization of cellular states through the extraction of quantitative features from fluorescence microscopy images. Despite the widespread availability of HCI data, the development of generalizable feature extraction models remains challenging due to the heterogeneity of microscopy images, as experiments often differ in channel count, cell type, and assay conditions. To address these challenges, we introduce uniDINO, a generalist feature extraction model capable of handling images with an arbitrary number of channels. We train uniDINO on a dataset of over 900,000 single-channel images from diverse experimental contexts and concatenate single-channel features to generate embeddings for multi-channel images. Our extensive validation across varied datasets demonstrates that uniDINO outperforms traditional computer vision methods and transfer learning from natural images, while also providing interpretability through channel attribution. uniDINO offers an out-of-the-box, computationally efficient solution for feature extraction in fluorescence microscopy, with the potential to significantly accelerate the analysis of HCI datasets.
PMID:40123801 | PMC:PMC11930362 | DOI:10.1016/j.csbj.2025.02.020
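A hedged sketch (PyTorch) of the channel-wise embedding strategy the abstract describes: a single-channel encoder is applied to each fluorescence channel independently and the per-channel embeddings are concatenated into one multi-channel embedding. The toy CNN encoder and embedding size below stand in for the actual uniDINO weights.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                         # placeholder single-channel encoder
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 128),
)

def embed_multichannel(img):                     # img: (C, H, W), any channel count C
    feats = [encoder(c.unsqueeze(0).unsqueeze(0)) for c in img]   # one embedding per channel
    return torch.cat(feats, dim=1).squeeze(0)                     # concatenated (C * 128,) vector

embedding = embed_multichannel(torch.randn(5, 256, 256))          # e.g. a 5-channel image
print(embedding.shape)                                            # torch.Size([640])
```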
Gross tumor volume confidence maps prediction for soft tissue sarcomas from multi-modality medical images using a diffusion model
Phys Imaging Radiat Oncol. 2025 Feb 23;33:100734. doi: 10.1016/j.phro.2025.100734. eCollection 2025 Jan.
ABSTRACT
BACKGROUND AND PURPOSE: Accurate delineation of the gross tumor volume (GTV) is essential for radiotherapy of soft tissue sarcomas. However, manual GTV delineation from multi-modality images is time-consuming. Furthermore, GTV delineation is subject to inter- and intra-reader variability, which reduces the reproducibility of treatment planning. To address these issues, this work aims to develop a highly accurate automatic delineation technique modeling reader variability for soft tissue sarcomas using deep learning.
MATERIALS AND METHODS: We employed a publicly available soft tissue sarcoma dataset consisting of Fluorodeoxyglucose Positron Emission Tomography (FDG-PET), X-ray Computed Tomography (CT), and pre-contrast T1-weighted Magnetic Resonance Imaging (MRI) scans for 51 patients, of which 49 were selected for analysis. The GTVs were delineated by six experienced readers, each reader performing GTV contouring multiple times for every patient. The confidence maps were calculated by averaging the labels provided by all readers, resulting in values ranging from 0 to 1. We developed and trained a diffusion model-based neural network to predict confidence maps of GTV for soft tissue sarcomas from multi-modality medical images.
RESULTS: Quantitative analysis showed that the proposed diffusion model performed competitively with U-Net-based models, frequently ranking first or second across five evaluation metrics: Dice Index, Hausdorff Distance, Recall, Precision, and Brier Score. Additionally, experiments evaluating the impact of different imaging modalities demonstrated that incorporating multi-modality image inputs provided improved performance compared to single-modality and dual-modality inputs.
CONCLUSION: The proposed diffusion model is capable of predicting accurate confidence maps of GTV for soft tissue sarcomas from multi-modality inputs.
PMID:40123775 | PMC:PMC11926426 | DOI:10.1016/j.phro.2025.100734
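A small NumPy sketch of two steps stated in the abstract: building a GTV confidence map by averaging the binary contours from multiple readers, and scoring a predicted map with the Brier score (one of the reported evaluation metrics). The array shapes and random masks are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
reader_masks = rng.integers(0, 2, size=(6, 64, 64)).astype(float)   # 6 readers' binary GTV masks

confidence_map = reader_masks.mean(axis=0)          # voxel-wise reader agreement in [0, 1]

predicted_map = rng.uniform(size=(64, 64))          # stand-in for the model's predicted map
brier = np.mean((predicted_map - confidence_map) ** 2)   # Brier-style score; lower is better
```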
LaMoD: Latent Motion Diffusion Model For Myocardial Strain Generation
Shape Med Imaging (2024). 2025;15275:164-177. doi: 10.1007/978-3-031-75291-9_13. Epub 2024 Oct 26.
ABSTRACT
Motion and deformation analysis of cardiac magnetic resonance (CMR) imaging videos is crucial for assessing myocardial strain in patients with abnormal heart function. Recent advances in deep learning-based image registration algorithms have shown promising results in predicting motion fields from routinely acquired CMR sequences. However, their accuracy often diminishes in regions with subtle appearance changes, with errors propagating over time. Advanced imaging techniques, such as displacement encoding with stimulated echoes (DENSE) CMR, offer highly accurate and reproducible motion data but require additional image acquisition, which poses challenges in busy clinical workflows. In this paper, we introduce a novel Latent Motion Diffusion model (LaMoD) to predict highly accurate DENSE motions from standard CMR videos. More specifically, our method first employs an encoder from a pre-trained registration network that learns latent motion features (also considered deformation-based shape features) from image sequences. Supervised by the ground-truth motion provided by DENSE, LaMoD then leverages a probabilistic latent diffusion model to reconstruct accurate motion from these extracted features. Experimental results demonstrate that our proposed method, LaMoD, significantly improves the accuracy of motion analysis in standard CMR images, thereby improving myocardial strain analysis in clinical settings for cardiac patients. Our code is publicly available at https://github.com/jr-xing/LaMoD.
PMID:40123747 | PMC:PMC11929565 | DOI:10.1007/978-3-031-75291-9_13
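A hedged sketch (PyTorch) of the generic DDPM-style training objective behind a latent diffusion model such as LaMoD: latent motion features are noised at a random timestep and a denoiser is trained to predict the added noise. The stand-in MLP denoiser, latent size, and timestep encoding are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative noise schedule

denoiser = nn.Sequential(nn.Linear(128 + 1, 256), nn.ReLU(), nn.Linear(256, 128))

def diffusion_loss(z0):                              # z0: (B, 128) latent motion features
    t = torch.randint(0, T, (z0.size(0),))
    a = alpha_bar[t].unsqueeze(1)
    noise = torch.randn_like(z0)
    zt = a.sqrt() * z0 + (1 - a).sqrt() * noise      # forward noising step
    t_feat = (t.float() / T).unsqueeze(1)            # crude timestep encoding
    pred = denoiser(torch.cat([zt, t_feat], dim=1))
    return nn.functional.mse_loss(pred, noise)       # train the denoiser to predict the noise

loss = diffusion_loss(torch.randn(8, 128))
loss.backward()
```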
Revolutionizing total hip arthroplasty: The role of artificial intelligence and machine learning
J Exp Orthop. 2025 Mar 22;12(1):e70195. doi: 10.1002/jeo2.70195. eCollection 2025 Jan.
ABSTRACT
PURPOSE: There has been substantial growth in the literature describing the effectiveness of artificial intelligence (AI) and machine learning (ML) applications in total hip arthroplasty (THA); these models have shown the potential to predict post-operative outcomes through algorithmic analysis of acquired data and can ultimately optimize clinical decision-making while reducing time, cost, and complexity. The aim of this review is to analyze the most up-to-date articles on AI/ML applications in THA and to present the potential of these tools for optimizing patient care and THA outcomes.
METHODS: A comprehensive search was completed through August 2024, according to the PRISMA guidelines. Publications were searched using the Scopus, Medline, EMBASE, CENTRAL and CINAHL databases. Pertinent findings and patterns in AI/ML methods utilization, as well as their applications, were quantitatively summarized and described using frequencies, averages and proportions. This study used a modified eight-item Methodological Index for Non-Randomized Studies (MINORS) checklist for quality assessment.
RESULTS: Nineteen articles were eligible for this study. The selected studies were published between 2016 and 2024. Out of the various ML algorithms, four models have proven to be particularly significant and were used in almost 20% of the studies, including elastic net penalized logistic regression, artificial neural network, convolutional neural network (CNN) and multiple linear regression. The highest area under the curve (=1) was reported in the preoperative planning outcome variable and utilized CNN. All 20 studies demonstrated a high level of quality and low risk of bias, with a modified MINORS score of at least 7/8 (88%).
CONCLUSIONS: Developments in AI/ML prediction models in THA are rapidly increasing. There is clear potential for these tools to assist in all stages of surgical care as well as in challenges at the broader hospital administrative level and patient-specific level.
LEVEL OF EVIDENCE: Level III.
PMID:40123682 | PMC:PMC11929018 | DOI:10.1002/jeo2.70195
Technical implications of a novel deep learning system in the segmentation and evaluation of computed tomography angiography before transcatheter aortic valve replacement
Ther Adv Cardiovasc Dis. 2025 Jan-Dec;19:17539447251321589. doi: 10.1177/17539447251321589. Epub 2025 Mar 24.
ABSTRACT
OBJECTIVE: The goal of this study was to compare the segmentation results of computed tomography angiography scans obtained with the Cvpilot, 3mensio, and Volume Viewer systems, in order to explore the practicability of the Cvpilot system for automatic segmentation and technical evaluation of the aortic root before transcatheter aortic valve replacement (TAVR).
DESIGN: A total of 154 patients who underwent TAVR at our center from January 2022 to May 2023 were enrolled, and their computed tomography angiography images were analyzed using the Cvpilot, 3mensio, and Volume Viewer systems, respectively.
SETTING: Not applicable.
PARTICIPANTS: Not applicable.
MAIN OUTCOME MEASURES: The reconstructed computed tomography angiography images were evaluated by experts, and the measurements of the aortic roots were analyzed statistically.
RESULTS: Compared with the 3mensio system, 92.2% of patients (n = 142) evaluated with the Cvpilot system reached grade A, 5.2% of patients (n = 8) reached grade B, and 2.6% of patients (n = 4) reached grade C. Compared with the Volume Viewer system, 90.9% of patients (n = 140) evaluated with the Cvpilot system achieved grade A, 7.1% of patients (n = 11) achieved grade B, and 2.0% of patients (n = 3) achieved grade C. Furthermore, there was no significant difference among the measurement results of the Cvpilot, 3mensio, and Volume Viewer systems (all p > 0.05).
CONCLUSION: Overall, the Cvpilot system is effective and reliable. It can accurately complete the segmentation and the measurement of aortic root structures, thereby effectively improving the measurement quality before TAVR.
TRIAL REGISTRATION: Not applicable.
PMID:40123453 | DOI:10.1177/17539447251321589
Enhancing Schizophrenia Diagnosis Through Multi-View EEG Analysis: Integrating Raw Signals and Spectrograms in a Deep Learning Framework
Clin EEG Neurosci. 2025 Mar 23:15500594251328068. doi: 10.1177/15500594251328068. Online ahead of print.
ABSTRACT
Objective: Schizophrenia is a chronic mental disorder marked by symptoms such as hallucinations, delusions, and cognitive impairments, which profoundly affect individuals' lives. Early detection is crucial for improving treatment outcomes, but the diagnostic process remains complex due to the disorder's multifaceted nature. In recent years, EEG data have been increasingly investigated to detect neural patterns linked to schizophrenia. Methods: This study presents a deep learning framework that integrates both raw multi-channel EEG signals and their spectrograms. Our two-branch model processes these complementary data views to capture both temporal dynamics and frequency-specific features while employing depth-wise convolution to efficiently combine spatial dependencies across EEG channels. Results: The model was evaluated on two datasets, consisting of 84 and 28 subjects, achieving classification accuracies of 0.985 and 0.994, respectively. These results highlight the effectiveness of combining raw EEG signals with their time-frequency representations for precise and automated schizophrenia detection. Additionally, an ablation study assessed the contributions of different architectural components. Conclusions: The approach outperformed existing methods in the literature, underscoring the value of utilizing multi-view EEG data in schizophrenia detection. These promising results suggest that our framework could contribute to more effective diagnostic tools in clinical practice.
PMID:40123224 | DOI:10.1177/15500594251328068
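A hedged sketch (PyTorch) of a two-branch model in the spirit described above: one branch applies depth-wise 1D convolutions to raw multi-channel EEG, the other applies a small 2D CNN to per-channel spectrograms, and the fused features are classified. The channel count, window length, and spectrogram size are assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchEEGNet(nn.Module):
    def __init__(self, n_ch=19):
        super().__init__()
        self.raw = nn.Sequential(                        # depth-wise temporal convolution branch
            nn.Conv1d(n_ch, n_ch, kernel_size=7, padding=3, groups=n_ch),
            nn.Conv1d(n_ch, 32, kernel_size=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.spec = nn.Sequential(                       # spectrogram branch
            nn.Conv2d(n_ch, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)                     # schizophrenia vs. control

    def forward(self, raw, spec):
        return self.head(torch.cat([self.raw(raw), self.spec(spec)], dim=1))

model = TwoBranchEEGNet()
logits = model(torch.randn(2, 19, 1000),                 # raw EEG: (batch, channels, time)
               torch.randn(2, 19, 64, 64))               # per-channel spectrograms
```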
Food Freshness Prediction Platform Utilizing Deep Learning-Based Multimodal Sensor Fusion of Volatile Organic Compounds and Moisture Distribution
ACS Sens. 2025 Mar 23. doi: 10.1021/acssensors.5c00254. Online ahead of print.
ABSTRACT
Various sensing methods have been developed for food spoilage research, but in practical applications their accuracy is frequently constrained by the limitations of single-source data and the challenges of cross-validating multimodal data. To address these issues, a new method combining multidimensional sensing technology with deep learning-based dynamic fusion has been developed, which can precisely monitor the spoilage process of beef. This study designs a gas sensor based on surface-enhanced Raman scattering (SERS) to directly analyze volatile organic compounds (VOCs) adsorbed on MIL-101(Cr) with amine-specific adsorption for data collection, while also evaluating the moisture distribution of beef through low-field nuclear magnetic resonance (LF-NMR), providing multidimensional recognition and readings. By introducing a self-attention mechanism and SENet scaling features into the multimodal deep learning model, the system is able to adaptively fuse and focus on the important features of the sensors. After training, the system can predict the storage time of beef under controlled storage conditions, with an R2 value greater than 0.98. Furthermore, it can provide accurate freshness assessments for beef samples under unknown storage conditions. Relative to single-modality methods, accuracy improves from 90% to over 97%. Overall, the newly developed dynamic-fusion deep learning multimodal model effectively integrates multimodal information, enabling fast and reliable monitoring of beef freshness.
PMID:40123082 | DOI:10.1021/acssensors.5c00254
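A hedged sketch (PyTorch) of the two fusion ingredients named in the abstract, squeeze-and-excitation (SE) channel scaling and self-attention across modality embeddings, applied to SERS-VOC and LF-NMR feature vectors. The feature sizes and the final freshness regressor are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):                        # squeeze-and-excitation re-weighting of features
    def __init__(self, dim, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim // r), nn.ReLU(),
                                nn.Linear(dim // r, dim), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)

class FusionModel(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.se_sers = SEBlock(dim)
        self.se_nmr = SEBlock(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 1)            # predicted storage time / freshness score

    def forward(self, sers, nmr):                # each input: (B, dim) modality embedding
        tokens = torch.stack([self.se_sers(sers), self.se_nmr(nmr)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)     # self-attention over the two modalities
        return self.head(fused.mean(dim=1)).squeeze(-1)

model = FusionModel()
pred = model(torch.randn(3, 64), torch.randn(3, 64))
```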