Deep learning

Eye-Rubbing Detection Tool Using Artificial Intelligence on a Smartwatch in the Management of Keratoconus

Thu, 2024-12-12 06:00

Transl Vis Sci Technol. 2024 Dec 2;13(12):16. doi: 10.1167/tvst.13.12.16.

ABSTRACT

PURPOSE: Eye rubbing is considered to play a significant role in the progression of keratoconus and of corneal ectasia following refractive surgery. To our knowledge, no tool performs an objective quantitative evaluation of eye rubbing using a device that is familiar to typical patients. We introduce here an innovative solution for objectively quantifying and preventing eye rubbing. It consists of an application that uses a deep-learning artificial intelligence (AI) algorithm deployed on a smartwatch.

METHODS: A Samsung Galaxy Watch 4 smartwatch collected motion data from eye rubbing and everyday activities, including readings from the gyroscope, accelerometer, and linear acceleration sensors. The training of the model was carried out using two deep-learning algorithms, long short-term memory (LSTM) and gated recurrent unit (GRU), as well as four machine learning algorithms: random forest, K-nearest neighbors (KNN), support vector machine (SVM), and XGBoost.
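The abstract does not disclose the network configuration. As an illustration of the recurrence a GRU applies to one window of the nine motion channels (gyroscope, accelerometer, linear acceleration), here is a minimal NumPy sketch; the hidden size, weight scales, and window length are hypothetical, and the random weights stand in for trained parameters:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: x is the current sensor sample, h the hidden state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))    # candidate state
    return (1 - z) * h + z * h_cand

def encode_window(window, hidden=8, seed=0):
    """Run a GRU over a (T, 9) window of gyro + accel + linear-accel samples."""
    rng = np.random.default_rng(seed)
    d = window.shape[1]
    Wz, Wr, Wh = (rng.normal(0, 0.1, (hidden, d)) for _ in range(3))
    Uz, Ur, Uh = (rng.normal(0, 0.1, (hidden, hidden)) for _ in range(3))
    h = np.zeros(hidden)
    for x in window:
        h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
    return h  # in a real model this feeds a trained classification layer
```

In the deployed application, an embedding like this would be followed by a trained output layer that labels each window as eye rubbing or everyday activity.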

RESULTS: The model achieved an accuracy of 94%. The developed application could recognize, count, and display the number of eye-rubbing episodes performed. The GRU model and the XGBoost algorithm also showed promising performance.

CONCLUSIONS: Automated detection of eye rubbing by deep-learning AI has been proven to be feasible. This approach could radically improve the management of patients with keratoconus and those undergoing refractive surgery. It could detect and quantify eye rubbing and help to reduce it by sending alerts directly to the patient.

TRANSLATIONAL RELEVANCE: This proof of concept could confirm one of the most prominent paradigms in keratoconus management, the role of abnormal eye rubbing, while providing the means to challenge or even negate it by offering the first automated and objective tool for detecting eye rubbing.

PMID:39666356 | DOI:10.1167/tvst.13.12.16

Categories: Literature Watch

Assessment of the stability of intracranial aneurysms using a deep learning model based on computed tomography angiography

Thu, 2024-12-12 06:00

Radiol Med. 2024 Dec 12. doi: 10.1007/s11547-024-01939-z. Online ahead of print.

ABSTRACT

PURPOSE: Assessment of the stability of intracranial aneurysms is important in the clinic but remains challenging. The aim of this study was to construct a deep learning model (DLM) to identify unstable aneurysms on computed tomography angiography (CTA) images.

METHODS: The clinical data of 1041 patients with 1227 aneurysms were retrospectively analyzed from August 2011 to May 2021. Patients with aneurysms were divided into unstable (ruptured, evolving, and symptomatic aneurysms) and stable (incidental, nonevolving, and asymptomatic aneurysms) groups, then randomly split into training (833 patients with 991 aneurysms) and internal validation (208 patients with 236 aneurysms) sets. One hundred ninety-seven patients with 229 aneurysms from another hospital were included in the external validation set. Six models based on a convolutional neural network (CNN) or logistic regression were constructed on the basis of clinical, morphological, and deep learning (DL) features. The area under the curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate the discriminating ability of the models.
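All six models are compared primarily by AUC. As a reminder of what that number measures, here is a small self-contained sketch computing AUC as the Mann-Whitney rank statistic, i.e., the probability that a randomly chosen positive case (here, an unstable aneurysm) is scored higher than a randomly chosen negative one; this is mathematically equivalent to the trapezoidal area under the ROC curve:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC as the Mann-Whitney statistic: P(random positive outranks a
    random negative), with ties counted as 0.5."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

For example, `auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates 4 positive-negative pairs, of which 3 are correctly ordered, giving 0.75.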

RESULTS: The AUCs of Models A (clinical), B (morphological) and C (DL features from the CTA image) in the external validation set were 0.5706, 0.9665 and 0.8453, respectively. The AUCs of Model D (clinical and DL features), Model E (clinical and morphological features) and Model F (clinical, morphological and DL features) in the external validation set were 0.8395, 0.9597 and 0.9696, respectively.

CONCLUSIONS: The CNN-based DLM, which integrates clinical, morphological, and DL features, outperforms the other models in predicting intracranial aneurysm (IA) stability. The DLM has the potential to assess IA stability and support clinical decision-making.

PMID:39666223 | DOI:10.1007/s11547-024-01939-z

Categories: Literature Watch

Machine learning and deep learning algorithms in stroke medicine: a systematic review of hemorrhagic transformation prediction models

Thu, 2024-12-12 06:00

J Neurol. 2024 Dec 12;272(1):37. doi: 10.1007/s00415-024-12810-6.

ABSTRACT

BACKGROUND: Acute ischemic stroke (AIS) is a major cause of morbidity and mortality, with hemorrhagic transformation (HT) further worsening outcomes. Traditional scoring systems have limited predictive accuracy for HT in AIS. Recent research has explored machine learning (ML) and deep learning (DL) algorithms for stroke management. This study evaluates and compares the effectiveness of ML and DL algorithms in predicting HT post-AIS, benchmarking them against conventional models.

METHODS: A systematic search was conducted across PubMed, Embase, Web of Science, Scopus, and IEEE, initially yielding 1421 studies. After screening, 24 studies met the inclusion criteria. The Prediction Model Risk of Bias Assessment Tool (PROBAST) was used to assess the quality of these studies, and a qualitative synthesis was performed due to heterogeneity in the study design.

RESULTS: The included studies featured diverse ML and DL algorithms, with Logistic Regression (LR), Support Vector Machine (SVM), and Random Forest (RF) being the most common. Gradient boosting (GB) showed superior performance. Median Area Under the Curve (AUC) values were 0.91 for GB, 0.83 for RF, 0.77 for LR, and 0.76 for SVM. Neural networks had a median AUC of 0.81 and convolutional neural networks (CNNs) had a median AUC of 0.91. ML techniques outperformed conventional models, particularly those integrating clinical and imaging data.

CONCLUSIONS: ML and DL models significantly surpass traditional scoring systems in predicting HT. These advanced models enhance clinical decision-making and improve patient outcomes. Future research should address data expansion, imaging protocol standardization, and model transparency to enhance stroke outcomes further.

PMID:39666168 | DOI:10.1007/s00415-024-12810-6

Categories: Literature Watch

Fully automated MRI-based convolutional neural network for noninvasive diagnosis of cirrhosis

Thu, 2024-12-12 06:00

Insights Imaging. 2024 Dec 12;15(1):298. doi: 10.1186/s13244-024-01872-9.

ABSTRACT

OBJECTIVES: To develop and externally validate a fully automated diagnostic convolutional neural network (CNN) model for cirrhosis based on liver MRI and serum biomarkers.

METHODS: This multicenter retrospective study included consecutive patients receiving pathological evaluation of liver fibrosis stage and contrast-enhanced liver MRI between March 2010 and January 2024. On the training dataset, an MRI-based CNN model was constructed for cirrhosis against pathology, and then a combined model was developed integrating the CNN model and serum biomarkers. On the testing datasets, the area under the receiver operating characteristic curve (AUC) was computed to compare the diagnostic performance of the combined model with that of aminotransferase-to-platelet ratio index (APRI), fibrosis-4 index (FIB-4), and radiologists. The influence of potential confounders on the diagnostic performance was evaluated by subgroup analyses.
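For reference, the two serum benchmarks the combined model is tested against have simple closed-form definitions; a sketch, with units as conventionally used (AST and ALT in U/L, the AST upper limit of normal from the local laboratory, platelets in 10^9/L):

```python
from math import sqrt

def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    """Aminotransferase-to-platelet ratio index (APRI)."""
    return (ast_u_l / ast_uln_u_l) * 100.0 / platelets_10e9_l

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """Fibrosis-4 index (FIB-4)."""
    return (age_years * ast_u_l) / (platelets_10e9_l * sqrt(alt_u_l))
```

Decision cutoffs (e.g., FIB-4 above 3.25 suggesting advanced fibrosis) are population dependent and deliberately omitted here.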

RESULTS: A total of 1315 patients (median age, 54 years; 1065 men; training, n = 840) were included, 855 (65%) with pathological cirrhosis. The CNN model was constructed on pre-contrast T1- and T2-weighted imaging, and the combined model was developed integrating the CNN model, age, and eight serum biomarkers. On the external testing dataset, the combined model achieved an AUC of 0.86, which outperformed FIB-4, APRI and two radiologists (AUC: 0.67 to 0.73, all p < 0.05). Subgroup analyses revealed comparable diagnostic performances of the combined model in patients with different sizes of focal liver lesions.

CONCLUSION: Based on pre-contrast T1- and T2-weighted imaging, age, and serum biomarkers, the combined model allowed diagnosis of cirrhosis with moderate accuracy, independent of the size of focal liver lesions.

CRITICAL RELEVANCE STATEMENT: The fully automated convolutional neural network model utilizing pre-contrast MR imaging, age and serum biomarkers demonstrated moderate accuracy, outperforming FIB-4, APRI, and radiologists, independent of size of focal liver lesions, potentially facilitating noninvasive diagnosis of cirrhosis pending further validation.

KEY POINTS: This fully automated convolutional neural network (CNN) model, using pre-contrast MRI, age, and serum biomarkers, diagnoses cirrhosis. The CNN model demonstrated an external testing dataset AUC of 0.86, independent of the size of focal liver lesions. The CNN model outperformed aminotransferase-to-platelet ratio index, fibrosis-4 index, and radiologists, potentially facilitating noninvasive diagnosis of cirrhosis.

PMID:39666107 | DOI:10.1186/s13244-024-01872-9

Categories: Literature Watch

The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis

Thu, 2024-12-12 06:00

Insights Imaging. 2024 Dec 12;15(1):297. doi: 10.1186/s13244-024-01869-4.

ABSTRACT

INTRODUCTION: Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in a real-world setting and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging.

METHODS: A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy involved searching for the keywords 'breast radiology' or 'breast imaging' and the various keywords associated with AI such as 'deep learning', 'machine learning,' and 'neural networks'.

RESULTS: From the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The most-cited article, titled 'Artificial Neural Networks In Mammography-Application To Decision-Making In The Diagnosis Of Breast-Cancer', was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n = 22). The most common country of origin was the United States (n = 51). Commonly occurring topics were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction.

CONCLUSION: This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field.

CLINICAL RELEVANCE STATEMENT: This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology. It discusses the most impactful articles and explores the recent trends and topics of research in the field.

KEY POINTS: Multiple studies have been conducted on AI in breast radiology. The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.

PMID:39666106 | DOI:10.1186/s13244-024-01869-4

Categories: Literature Watch

From pixels to patients: the evolution and future of deep learning in cancer diagnostics

Thu, 2024-12-12 06:00

Trends Mol Med. 2024 Dec 11:S1471-4914(24)00310-1. doi: 10.1016/j.molmed.2024.11.009. Online ahead of print.

ABSTRACT

Deep learning has revolutionized cancer diagnostics, shifting from pixel-based image analysis to more comprehensive, patient-centric care. This opinion article explores recent advancements in neural network architectures, highlighting their evolution in biomedical research and their impact on medical imaging interpretation and multimodal data integration. We emphasize the need for domain-specific artificial intelligence (AI) systems capable of handling complex clinical tasks, advocating for the development of multimodal large language models that can integrate diverse data sources. These models have the potential to significantly enhance the precision and efficiency of cancer diagnostics, transforming AI from a supplementary tool into a core component of clinical decision-making, ultimately improving patient outcomes and advancing cancer care.

PMID:39665958 | DOI:10.1016/j.molmed.2024.11.009

Categories: Literature Watch

Vocal Biomarkers for Parkinson's Disease Classification Using Audio Spectrogram Transformers

Thu, 2024-12-12 06:00

J Voice. 2024 Dec 10:S0892-1997(24)00388-6. doi: 10.1016/j.jvoice.2024.11.008. Online ahead of print.

ABSTRACT

Parkinson's disease (PD) is a neurodegenerative disorder affecting motor and non-motor functions, including speech. This study evaluates the effectiveness of the audio spectrogram transformer (AST) model in detecting PD through vocal biomarkers, hypothesizing that its self-attention mechanism would capture PD-related speech impairments better than traditional deep learning approaches. Speech recordings from 150 participants (100 from the Parkinson's Colombian - Grupo de Investigación en Telecomunicaciones Aplicadas (PC-GITA) corpus: 50 PD, 50 healthy controls (HC); 50 from the Italian Parkinson's voice and speech (ITA) corpus: 28 PD, 22 HC) were analyzed using the AST model and compared against established architectures, including VGG16, VGG19, ResNet18, ResNet34, the vision transformer, and the Swin transformer. Audio preprocessing included standardizing the sampling rate to 16 kHz and amplitude normalization. The AST model achieved superior classification performance across all datasets: 97.14% accuracy on ITA, 91.67% on PC-GITA, and 92.73% on the combined dataset. Performance remained consistent across different speech tasks, with particularly strong results in sustained vowel analysis (precision: 0.97 ± 0.03, recall: 0.96 ± 0.03). The model demonstrated robust cross-lingual generalization, outperforming traditional architectures by 5%-10% in accuracy. These results suggest that the AST model provides a reliable, non-invasive method for PD detection through voice analysis, with strong performance across different languages and speech tasks. The model's success in cross-lingual generalization indicates potential for broader clinical application, though validation across more diverse populations is needed before clinical implementation.
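The preprocessing the authors describe (resampling to 16 kHz and amplitude normalization) can be sketched minimally in NumPy. Linear interpolation stands in for whatever resampler was actually used; a polyphase or band-limited resampler would be preferable in practice:

```python
import numpy as np

def preprocess(signal, sr_in, sr_out=16_000):
    """Resample to 16 kHz (naive linear interpolation) and peak-normalize
    to the range [-1, 1]."""
    duration = len(signal) / sr_in
    n_out = int(round(duration * sr_out))
    t_in = np.arange(len(signal)) / sr_in
    t_out = np.arange(n_out) / sr_out
    resampled = np.interp(t_out, t_in, signal)
    peak = np.max(np.abs(resampled))
    return resampled / peak if peak > 0 else resampled
```

The normalized waveform would then be converted to a spectrogram before being fed to the AST; that step is not reproduced here.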

PMID:39665946 | DOI:10.1016/j.jvoice.2024.11.008

Categories: Literature Watch

Non-invasive Prediction of Lymph Node Metastasis in NSCLC Using Clinical, Radiomics, and Deep Learning Features From (18)F-FDG PET/CT Based on Interpretable Machine Learning

Thu, 2024-12-12 06:00

Acad Radiol. 2024 Dec 10:S1076-6332(24)00882-1. doi: 10.1016/j.acra.2024.11.037. Online ahead of print.

ABSTRACT

PURPOSE: This study aimed to develop and evaluate a machine learning model combining clinical, radiomics, and deep learning features derived from PET/CT imaging to predict lymph node metastasis (LNM) in patients with non-small cell lung cancer (NSCLC). The model's interpretability was enhanced using Shapley additive explanations (SHAP).

METHODS: A total of 248 NSCLC patients who underwent preoperative PET/CT scans were included and divided into training, test, and external validation sets. Radiomics features were extracted from segmented tumor regions on PET/CT images, and deep learning features were generated using the ResNet50 architecture. Feature selection was performed using the minimum-redundancy maximum-relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) algorithms. Four models (clinical, radiomics, deep learning radiomics [DL_radiomics], and combined) were constructed using the XGBoost algorithm and evaluated based on diagnostic performance metrics, including area under the receiver operating characteristic curve (AUC), accuracy, F1 score, sensitivity, and specificity. SHAP was used for model interpretability.
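The mRMR step greedily trades feature relevance against redundancy with already-selected features. A toy sketch, substituting absolute Pearson correlation for the mutual-information criterion mRMR is usually defined with (the correlation proxy, k, and the data are illustrative only):

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy minimum-redundancy maximum-relevance selection, using
    absolute Pearson correlation as a cheap stand-in for mutual information."""
    n_feat = X.shape[1]
    # relevance: |corr(feature, target)|
    corr_y = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    # redundancy: |corr(feature, feature)|
    corr_xx = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(corr_y))]
    while len(selected) < k:
        rest = [j for j in range(n_feat) if j not in selected]
        scores = [corr_y[j] - corr_xx[j, selected].mean() for j in rest]
        selected.append(rest[int(np.argmax(scores))])
    return selected
```

In the study this selection was followed by LASSO and an XGBoost classifier; neither is reproduced here.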

RESULTS: The combined model achieved the highest AUC in the test set (AUC=0.853), outperforming the clinical (AUC=0.758), radiomics (AUC=0.831), and DL_radiomics (AUC=0.834) models. Decision curve analysis (DCA) demonstrated that the combined model offered greater clinical net benefits. SHAP was used for global interpretation, and the summary plot indicated that the features ct_original_glrlm_LongRunHighGrayLevelEmphasis, and pet_gradient_glcm_lmc1 were the most important for the model's predictions.

CONCLUSION: The combined model, combining clinical, radiomics, and deep learning features from PET/CT, significantly improved the accuracy of LNM prediction in NSCLC patients. SHAP-based interpretability provided valuable insights into the model's decision-making process, enhancing its potential clinical application for preoperative decision-making in NSCLC.

PMID:39665892 | DOI:10.1016/j.acra.2024.11.037

Categories: Literature Watch

Evaluating the Cumulative Benefit of Inspiratory CT, Expiratory CT, and Clinical Data for COPD Diagnosis and Staging through Deep Learning

Thu, 2024-12-12 06:00

Radiol Cardiothorac Imaging. 2024 Dec;6(6):e240005. doi: 10.1148/ryct.240005.

ABSTRACT

PURPOSE: To measure the benefit of single-phase CT, inspiratory-expiratory CT, and clinical data for convolutional neural network (CNN)-based chronic obstructive pulmonary disease (COPD) staging.

MATERIALS AND METHODS: This retrospective study included inspiratory and expiratory lung CT images and spirometry measurements acquired between November 2007 and April 2011 from 8893 participants (mean age, 59.6 years ± 9.0 [SD]; 53.3% [4738 of 8893] male) in the COPDGene phase I cohort (ClinicalTrials.gov: NCT00608764). CNNs were trained to predict spirometry measurements (forced expiratory volume in 1 second [FEV1], FEV1 percent predicted, and ratio of FEV1 to forced vital capacity [FEV1/FVC]) using clinical data and either single-phase or multiphase CT. Spirometry predictions were then used to predict Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage. Agreement between CNN-predicted and reference standard spirometry measurements and GOLD stage was assessed using the intraclass correlation coefficient (ICC) and compared using bootstrapping. Accuracy was calculated for predicting GOLD stage, within-one GOLD stage, and GOLD 0 versus 1-4.

RESULTS: CNN-predicted and reference standard spirometry measurements showed moderate to good agreement (ICC, 0.66-0.79), which improved with the inclusion of clinical data (ICC, 0.70-0.85; P ≤ .04), except for FEV1/FVC in the inspiratory-phase CNN model with clinical data (P = .35) and FEV1 in the expiratory-phase CNN model with clinical data (P = .33). Single-phase CNN accuracies for GOLD stage, within-one stage, and diagnosis ranged from 59.8% to 84.1% (682-959 of 1140), with moderate to good agreement (ICC, 0.68-0.70). Accuracies of CNN models using inspiratory and expiratory images ranged from 60.0% to 86.3% (684-984 of 1140), with moderate to good agreement (ICC, 0.72). Inclusion of clinical data improved agreement and accuracy for both the single-phase CNNs (ICC, 0.72; P ≤ .001; accuracy, 65.2%-85.8% [743-978 of 1140]) and the inspiratory-expiratory CNNs (ICC, 0.77-0.78; P ≤ .001; accuracy, 67.6%-88.0% [771-1003 of 1140]), except the expiratory CNN with clinical data (no change in GOLD stage ICC; P = .08).

CONCLUSION: CNN-based COPD diagnosis and staging using single-phase CT provides accuracy comparable with inspiratory-expiratory CT when provided clinical data relevant to staging.

Keywords: Convolutional Neural Network, Chronic Obstructive Pulmonary Disease, CT, Severity Staging, Attention Map. Supplemental material is available for this article. © RSNA, 2024.
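The final step, mapping predicted spirometry to a GOLD stage, is a fixed rule rather than a learned one. A sketch of the standard spirometric grading, using the convention (as in COPDGene) that FEV1/FVC of at least 0.70 is labeled GOLD 0, i.e., no airflow obstruction:

```python
def gold_stage(fev1_pct_pred, fev1_fvc):
    """GOLD stage from FEV1 percent predicted and the FEV1/FVC ratio."""
    if fev1_fvc >= 0.70:
        return 0                       # no obstruction: GOLD 0
    if fev1_pct_pred >= 80:
        return 1                       # mild
    if fev1_pct_pred >= 50:
        return 2                       # moderate
    if fev1_pct_pred >= 30:
        return 3                       # severe
    return 4                           # very severe
```

Because staging is deterministic given spirometry, any error in the CNN's predicted FEV1 percent predicted or FEV1/FVC near these thresholds translates directly into a staging error, which is why the paper reports within-one-stage accuracy as well.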

PMID:39665633 | DOI:10.1148/ryct.240005

Categories: Literature Watch

Grand canonical Monte Carlo and deep learning assisted enhanced sampling to characterize the distribution of Mg2+ and influence of the Drude polarizable force field on the stability of folded states of the twister ribozyme

Thu, 2024-12-12 06:00

J Chem Phys. 2024 Dec 14;161(22):225102. doi: 10.1063/5.0241246.

ABSTRACT

Molecular dynamics simulations are crucial for understanding the structural and dynamical behavior of biomolecular systems, including the impact of their environment. However, there is a gap between the time scale of these simulations and that of real-world experiments. To address this problem, various enhanced simulation methods have been developed. In addition, there has been a significant advancement of the force fields used for simulations associated with the explicit treatment of electronic polarizability. In this study, we apply oscillating chemical potential grand canonical Monte Carlo and machine learning methods to determine reaction coordinates combined with metadynamics simulations to explore the role of Mg2+ distribution and electronic polarizability in the context of the classical Drude oscillator polarizable force field on the stability of the twister ribozyme. The introduction of electronic polarizability along with the details of the distribution of Mg2+ significantly stabilizes the simulations with respect to sampling the crystallographic conformation. The introduction of electronic polarizability leads to increased stability over that obtained with the additive CHARMM36 FF reported in a previous study, allowing for a distribution of a wider range of ions to stabilize twister. Specific interactions contributing to stabilization are identified, including both those observed in the crystal structures and additional experimentally unobserved interactions. Interactions of Mg2+ with the bases are indicated to make important contributions to stabilization. Notably, the presence of specific interactions between the Mg2+ ions and bases or the non-bridging phosphate oxygens (NBPOs) leads to enhanced dipole moments of all three moieties. Mg2+-NBPO interactions led to enhanced dipoles of the phosphates but, interestingly, not in all the participating ions. The present results further indicate the importance of electronic polarizability in stabilizing RNA in molecular simulations and the complicated nature of the relationship of Mg2+-RNA interactions with the polarization response of the bases and phosphates.

PMID:39665326 | DOI:10.1063/5.0241246

Categories: Literature Watch

AI-Powered Multimodal Modeling of Personalized Hemodynamics in Aortic Stenosis

Thu, 2024-12-12 06:00

Adv Sci (Weinh). 2024 Dec 12:e2404755. doi: 10.1002/advs.202404755. Online ahead of print.

ABSTRACT

Aortic stenosis (AS) is the most common valvular heart disease in developed countries. High-fidelity preclinical models can improve AS management by enabling therapeutic innovation, early diagnosis, and tailored treatment planning. However, their use is currently limited by complex workflows necessitating lengthy expert-driven manual operations. Here, we propose an AI-powered computational framework for accelerated and democratized patient-specific modeling of AS hemodynamics from computed tomography (CT). First, we demonstrate that the automated meshing algorithms can generate task-ready geometries for both computational and benchtop simulations with higher accuracy and 100 times faster than existing approaches. Then, we show that the approach can be integrated with fluid-structure interaction and soft robotics models to accurately recapitulate a broad spectrum of clinical hemodynamic measurements of diverse AS patients. The efficiency and reliability of these algorithms make them an ideal complementary tool for personalized high-fidelity modeling of AS biomechanics, hemodynamics, and treatment planning.

PMID:39665137 | DOI:10.1002/advs.202404755

Categories: Literature Watch

AppleLeafNet: a lightweight and efficient deep learning framework for diagnosing apple leaf diseases

Thu, 2024-12-12 06:00

Front Plant Sci. 2024 Nov 27;15:1502314. doi: 10.3389/fpls.2024.1502314. eCollection 2024.

ABSTRACT

Accurately identifying apple diseases is essential to control their spread and support the industry. Timely and precise detection is crucial for managing the spread of diseases, thereby improving the production and quality of apples. However, the development of algorithms for analyzing complex leaf images remains a significant challenge. Therefore, in this study, a lightweight deep learning model was designed from scratch to identify the apple leaf condition. The developed framework comprises two stages. First, the designed 37-layer model was employed to assess the condition of apple leaves (healthy or diseased). Second, transfer learning was used for further subclassification of the disease class (e.g., rust, complex, scab, and frogeye leaf spot). The trained lightweight model was reused for this stage because a model trained on closely related images facilitates transfer learning for the finer-grained disease classification. A publicly available dataset was used to validate the proposed two-stage framework, resulting in a classification rate of 98.25% for apple leaf condition identification and an accuracy of 98.60% for apple leaf disease diagnosis. Furthermore, the results confirm that the proposed model is lightweight and involves relatively fewer learnable parameters than other pre-trained deep learning models.
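The two-stage decision logic is independent of the CNN internals and can be sketched with stand-in classifiers. The stubs below are hypothetical; in the paper, stage 1 is the 37-layer CNN and stage 2 a transfer-learned copy of it:

```python
def classify_leaf(image, condition_model, disease_model):
    """Stage 1 decides healthy vs. diseased; stage 2 runs only on
    diseased leaves to name the disease."""
    if condition_model(image) == "healthy":
        return "healthy"
    return disease_model(image)  # e.g. rust / complex / scab / frogeye leaf spot

# stand-in models for illustration only
condition_stub = lambda img: "healthy" if img == 0 else "diseased"
disease_stub = lambda img: "rust"
```

One consequence of this cascade design is that stage-2 accuracy is only measured on leaves that stage 1 already flagged as diseased, so the two reported accuracies (98.25% and 98.60%) apply to different subsets of the data.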

PMID:39665107 | PMC:PMC11631600 | DOI:10.3389/fpls.2024.1502314

Categories: Literature Watch

Artificial intelligence-based rapid brain volumetry substantially improves differential diagnosis in dementia

Thu, 2024-12-12 06:00

Alzheimers Dement (Amst). 2024 Dec 11;16(4):e70037. doi: 10.1002/dad2.70037. eCollection 2024 Oct-Dec.

ABSTRACT

INTRODUCTION: This study evaluates the clinical value of a deep learning-based artificial intelligence (AI) system that performs rapid brain volumetry with automatic lobe segmentation and age- and sex-adjusted percentile comparisons.

METHODS: Fifty-five patients (17 with Alzheimer's disease [AD], 18 with frontotemporal dementia [FTD], and 20 healthy controls) underwent cranial magnetic resonance imaging scans. Two board-certified neuroradiologists (BCNR), two board-certified radiologists (BCR), and three radiology residents (RR) assessed the scans twice: first without AI support and then with AI assistance.

RESULTS: AI significantly improved diagnostic accuracy for AD (area under the curve without AI: 0.800, with AI: 0.926; p < 0.05), with more correct diagnoses (p < 0.01) and fewer errors (p < 0.03). BCR and RR showed notable performance gains (BCR: p < 0.04; RR: p < 0.02). For the diagnosis of FTD, the overall consensus (p < 0.01), BCNR (p < 0.02), and BCR (p < 0.05) recorded significantly more correct diagnoses.

DISCUSSION: AI-assisted volumetry improves diagnostic performance in differentiating AD and FTD, benefiting all reader groups, including BCNR.

HIGHLIGHTS: Artificial intelligence (AI)-supported brain volumetry significantly improved diagnostic accuracy for Alzheimer's disease (AD) and frontotemporal dementia (FTD), with notable performance gains across radiologists of varying expertise levels. The presented AI tool is readily available clinically and reduces brain volumetry processing time from 12-24 hours to under 5 minutes, with full integration into picture archiving and communication systems, streamlining the workflow and facilitating real-time clinical decision making. AI-supported rapid brain volumetry has the potential to improve early diagnosis and patient management.

PMID:39665087 | PMC:PMC11632536 | DOI:10.1002/dad2.70037

Categories: Literature Watch

Leveraging transfer learning for predicting total knee arthroplasty failure from post-operative radiographs

Thu, 2024-12-12 06:00

J Exp Orthop. 2024 Dec 11;11(4):e70097. doi: 10.1002/jeo2.70097. eCollection 2024 Oct.

ABSTRACT

PURPOSE: The incidence of both primary and revision total knee arthroplasty (TKA) is expected to rise, making early recognition of TKA failure crucial to prevent extensive revision surgeries. This study aims to develop a deep learning (DL) model to predict TKA failure using radiographic images.

METHODS: Two patient cohorts who underwent primary TKA were retrospectively collected: one was used for model development and the other for external validation. Each cohort encompassed failed and non-failed subjects, according to the need for TKA revision surgery. For each patient, one anteroposterior and one lateral radiographic view obtained during routine TKA follow-up were considered. A transfer learning fine-tuning approach was employed. After pre-processing, the images were analyzed using a convolutional neural network (CNN) that was originally developed for predicting hip prosthesis failure and was based on the DenseNet169 pre-trained on ImageNet. The model was tested on 20% of the images of the first cohort and externally validated on the images of the second cohort. Metrics such as accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for the final assessment.
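The transfer-learning recipe described here (reuse a pretrained backbone, retrain the classifier on the new task) can be illustrated without a deep-learning framework by fitting a logistic head on fixed backbone features. The random 16-dimensional features and toy labels below merely stand in for pooled DenseNet169 activations of radiographs; nothing is taken from the actual study:

```python
import numpy as np

def finetune_head(features, labels, lr=0.5, epochs=300):
    """Train only a new binary head on features from a frozen backbone."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # head forward pass
        err = p - labels                                # log-loss gradient
        w -= lr * features.T @ err / len(labels)
        b -= lr * err.mean()
    return w, b

# stand-in for frozen-backbone features of post-operative radiographs
rng = np.random.default_rng(42)
feats = rng.normal(size=(120, 16))
labels = (feats[:, 0] - feats[:, 3] > 0).astype(float)  # toy failed/non-failed label
w, b = finetune_head(feats, labels)
acc = ((1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5) == labels).mean()
```

Full fine-tuning, as in the paper, would additionally unfreeze and update some backbone layers at a small learning rate; only the frozen-feature variant is sketched here.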

RESULTS: The trained model correctly classified 108 out of 127 images in the test set, providing a classification accuracy of 0.85, sensitivity of 0.80, specificity of 0.89, and AUC of 0.86. Moreover, it correctly classified 1547 out of 1937 images in the external validation set, providing a balanced accuracy of 0.79, sensitivity of 0.80, specificity of 0.78, and AUC of 0.86.

CONCLUSIONS: The present DL model predicts TKA failure with moderate accuracy, regardless of the cause of revision surgery. Additionally, the effectiveness of the transfer learning fine-tuning approach, leveraging a previously developed DL model for hip prosthesis failure, has been successfully demonstrated.

LEVEL OF EVIDENCE: Level III, diagnostic study.

PMID:39664926 | PMC:PMC11633713 | DOI:10.1002/jeo2.70097

Categories: Literature Watch

Development of a deep learning model for automatic detection of narrowed intervertebral disc space sites in caudal thoracic and lumbar lateral X-ray images of dogs

Thu, 2024-12-12 06:00

Front Vet Sci. 2024 Nov 27;11:1453765. doi: 10.3389/fvets.2024.1453765. eCollection 2024.

ABSTRACT

Intervertebral disc disease is the most common spinal cord-related disease in dogs, caused by disc material protrusion or extrusion that compresses the spinal cord, leading to clinical symptoms. Diagnosis involves identifying radiographic signs such as intervertebral disc space narrowing, increased opacity of the intervertebral foramen, and spondylosis deformans, together with magnetic resonance imaging findings such as spinal cord compression and lesions, alongside clinical symptoms and neurological examination findings. Intervertebral disc space narrowing on radiographs is the most common finding in intervertebral disc extrusion. This study aimed to develop a deep learning model to automatically recognize narrowed intervertebral disc spaces on caudal thoracic and lumbar lateral X-ray images of dogs. In total, 241 caudal thoracic and lumbar lateral X-ray images from 142 dogs were used to develop and evaluate the model, which quantified intervertebral disc space distance and detected narrowing using a large-kernel one-dimensional convolutional neural network. When comparing veterinary clinicians and the deep learning model, the kappa value was 0.780, with 81.5% sensitivity and 95.6% specificity, showing substantial agreement. In conclusion, the deep learning model developed in this study automatically and accurately quantified intervertebral disc space distance and detected narrowed sites in dogs, aiding in the initial screening of intervertebral disc disease and lesion localization.
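Downstream of the model's distance estimates, a narrowing decision amounts to comparing each disc space against its neighbors along the spine. A deliberately simple rule-based sketch of that comparison (the 0.75 ratio is arbitrary and not from the paper, which instead learns the decision with a large-kernel 1D CNN):

```python
import numpy as np

def narrowed_sites(distances, ratio=0.75):
    """Flag intervertebral spaces whose measured width falls below `ratio`
    times the median width of the remaining spaces in the same image."""
    d = np.asarray(distances, dtype=float)
    flags = []
    for i in range(len(d)):
        others = np.delete(d, i)                      # all other disc spaces
        flags.append(bool(d[i] < ratio * np.median(others)))
    return flags
```

Using the per-image median as the reference makes the rule invariant to overall scale differences between radiographs, the same normalization problem the learned model must also solve.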

PMID:39664893 | PMC:PMC11631885 | DOI:10.3389/fvets.2024.1453765

Categories: Literature Watch

Deep learning based landmark detection for measuring hock and knee angles in sows

Thu, 2024-12-12 06:00

Transl Anim Sci. 2023 Mar 21;8:txad033. doi: 10.1093/tas/txad033. eCollection 2024.

ABSTRACT

This paper presents a visual deep learning approach to automatically determine hock and knee angles from sow images. Lameness is the second largest reason for culling of breeding herd females and relies on human observers to provide visual scoring for detection which can be slow, subjective, and inconsistent. A deep learning model classified and detected ten and two key body landmarks from the side and rear profile images, respectively (mean average precision = 0.94). Trigonometric-based formulae were derived to calculate hock and knee angles using the features extracted from the imagery. Automated angle measurements were compared with manual results from each image (average root mean square error [RMSE] = 4.13°), where all correlation slopes (average R² = 0.84) were statistically different from zero (P < 0.05); all automated measurements were in statistical agreement with manually collected measurements using the Bland-Altman procedure. This approach will be of interest to animal geneticists, scientists, and practitioners for obtaining objective angle measurements that can be factored into gilt replacement criteria to optimize sow breeding units.
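The trigonometric step above, computing a joint angle from detected landmarks, reduces to the angle at a vertex formed by three 2D points. A minimal sketch under that assumption (the paper's exact formulae are not given in the abstract, so this is illustrative):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b, formed by landmarks a-b-c (e.g. a hock joint)."""
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from joint to first landmark
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from joint to second landmark
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    # atan2 of |cross| and dot is numerically stabler than acos of the cosine
    return math.degrees(math.atan2(abs(cross), dot))
```

Using atan2 rather than acos avoids domain errors when floating-point rounding pushes the cosine slightly outside [-1, 1].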

PMID:39664862 | PMC:PMC11632189 | DOI:10.1093/tas/txad033

Categories: Literature Watch

Triggers and substrate: The whole is more than the sum of its parts-A case of implantable cardioverter-defibrillator shock induced with echocardiography

Thu, 2024-12-12 06:00

HeartRhythm Case Rep. 2024 Jul 25;10(10):757-760. doi: 10.1016/j.hrcr.2024.07.017. eCollection 2024 Oct.

NO ABSTRACT

PMID:39664847 | PMC:PMC11628775 | DOI:10.1016/j.hrcr.2024.07.017

Categories: Literature Watch

Language task-based fMRI analysis using machine learning and deep learning

Thu, 2024-12-12 06:00

Front Radiol. 2024 Nov 27;4:1495181. doi: 10.3389/fradi.2024.1495181. eCollection 2024.

ABSTRACT

INTRODUCTION: Task-based language fMRI is a non-invasive method of identifying brain regions subserving language that is used to plan neurosurgical resections which potentially encroach on eloquent regions. The use of unstructured fMRI paradigms, such as naturalistic fMRI, to map language is of increasing interest. Their analysis necessitates the use of alternative methods such as machine learning (ML) and deep learning (DL) because task regressors may be difficult to define in these paradigms.

METHODS: Using task-based language fMRI as a starting point, this study investigates the use of different categories of ML and DL algorithms to identify brain regions subserving language. Data comprising seven task-based language fMRI paradigms were collected from 26 individuals, and ML and DL models were trained to classify voxel-wise fMRI time series.

RESULTS: The general machine learning and the interval-based methods were the most promising in identifying language areas using fMRI time series classification. The general machine learning method achieved a mean whole-brain Area Under the Receiver Operating Characteristic Curve (AUC) of 0.97 ± 0.03, a mean Dice coefficient of 0.6 ± 0.34, and a mean Euclidean distance of 2.7 ± 2.4 mm between activation peaks across the evaluated regions of interest. The interval-based method achieved a mean whole-brain AUC of 0.96 ± 0.03, a mean Dice coefficient of 0.61 ± 0.33, and a mean Euclidean distance of 3.3 ± 2.7 mm between activation peaks across the evaluated regions of interest.
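The Dice coefficients reported above measure overlap between a predicted language map and a reference map. A minimal sketch for binary voxel masks (the flat-list representation here is illustrative, not the study's data format):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks, e.g. predicted vs. reference language maps."""
    inter = sum(x and y for x, y in zip(mask_a, mask_b))  # overlapping voxels
    total = sum(mask_a) + sum(mask_b)
    # both masks empty: conventionally perfect agreement
    return 2 * inter / total if total else 1.0
```

A Dice value of 1 means identical masks; the mid-0.6 values reported indicate moderate spatial overlap despite high AUC, which is common when activation extents differ even though peak locations agree.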

DISCUSSION: This study demonstrates the utility of different ML and DL methods in classifying task-based language fMRI time series. A potential application of these methods is the identification of language activation from unstructured paradigms.

PMID:39664795 | PMC:PMC11631583 | DOI:10.3389/fradi.2024.1495181

Categories: Literature Watch

Sarcopenia diagnosis using skeleton-based gait sequence and foot-pressure image datasets

Thu, 2024-12-12 06:00

Front Public Health. 2024 Nov 27;12:1443188. doi: 10.3389/fpubh.2024.1443188. eCollection 2024.

ABSTRACT

INTRODUCTION: Sarcopenia is a common age-related disease, defined as a decrease in muscle strength and function owing to reduced skeletal muscle mass. One way to diagnose sarcopenia is through gait analysis and foot-pressure imaging.

MOTIVATION AND RESEARCH GAP: We collected our own multimodal dataset from 100 subjects, consisting of both foot-pressure and skeleton data from real patients, which provides a unique resource for future studies aimed at more comprehensive analyses. While artificial intelligence has been employed for sarcopenia detection, previous studies have predominantly focused on skeleton-based datasets without exploring the combined potential of skeleton and foot-pressure datasets. By conducting separate experiments on the foot-pressure and skeleton datasets, this study demonstrates the potential of each data type for sarcopenia classification.

METHODS: This study had two components. First, we collected skeleton and foot-pressure datasets and classified them into sarcopenia and non-sarcopenia groups based on grip strength, gait performance, and appendicular skeletal muscle mass. Second, we performed classification experiments using a ResNet-18 model on the foot-pressure dataset and a spatiotemporal graph convolutional network (ST-GCN) model on the skeleton dataset to distinguish normal gaits from abnormal gaits due to sarcopenia. For an accurate diagnosis, real-time walking of 100 participants was recorded at 30 fps as RGB + D images. The skeleton dataset was constructed by extracting 3D skeleton information comprising 25 feature points from the images, whereas the foot-pressure dataset was constructed from the pressure exerted on foot-pressure plates.

RESULTS: As a baseline evaluation, the accuracies of sarcopenia classification from foot-pressure images using ResNet-18 and from skeleton sequences using ST-GCN were 77.16% and 78.63%, respectively.

DISCUSSION: The experimental results demonstrated the potential applications of sarcopenia and non-sarcopenia classifications based on foot-pressure images and skeleton sequences.

PMID:39664552 | PMC:PMC11631742 | DOI:10.3389/fpubh.2024.1443188

Categories: Literature Watch

Screening for frequent hospitalization risk among community-dwelling older adult between 2016 and 2023: machine learning-driven item selection, scoring system development, and prospective validation

Thu, 2024-12-12 06:00

Front Public Health. 2024 Nov 27;12:1413529. doi: 10.3389/fpubh.2024.1413529. eCollection 2024.

ABSTRACT

BACKGROUND: Screening for frequent hospitalizations in the community can help prevent super-utilizers from growing in the inpatient population. However, the determinants of frequent hospitalizations have not been systematically examined, their operational definitions have been inconsistent, and tools for screening community members are lacking. Nor do we know whether what determined frequent hospitalizations before COVID-19 continued to do so at the height of the pandemic. Hence, the current study aims to identify determinants of frequent hospitalizations and to develop screening items for them from the Comprehensive Geriatric Assessment (CGA), as our 273-item CGA is too lengthy to administer in full in community or primary care settings. The stability of the identified determinants will be examined in terms of the prospective validity of pre-COVID-selected items administered at the height of the pandemic.

METHODS: Comprehensive Geriatric Assessments (CGAs) were administered between 2016 and 2018 in the homes of 1,611 older adults aged 65+ years. Learning models were deployed to select CGA items that maximize the classification of different operational definitions of frequent hospitalizations, ranging from the most inclusive, wherein two or more hospitalizations occur over 2 years, to the most exclusive, wherein two or more hospitalizations must appear during year two, reflecting different care needs. In addition, the CGA items selected by the best-performing learning model were developed into a random-forest-based scoring system for assessing frequent hospitalization risk, the validity of which was tested in 2018 and again prospectively between 2022 and 2023 in a sample of 329 older adults recruited from a district adjacent to where the CGAs were initially performed.

RESULTS: Seventeen items were selected from the CGA by our best-performing algorithm (DeepBoost), achieving an AUC of 0.90 in classifying operational definitions of frequent hospitalizations differing in temporal distribution and care needs. The number of medications prescribed and the need for assistance with emptying the bowel, housekeeping, transportation, and laundry were selected by the DeepBoost algorithm under all operational definitions of frequent hospitalizations. On the other hand, reliance on walking aids, the ability to balance on one's own, a history of chronic obstructive pulmonary disease (COPD), and usage of social services were selected in the top 10 by all but the operational definitions reflecting the greatest care needs. The prospective validation of the original risk-scoring system, using a sample recruited from a different district during the COVID-19 pandemic, achieved an AUC of 0.82 in differentiating those rehospitalized twice or more over 2 years from those who were not.
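The AUCs reported above (0.90 at selection, 0.82 at prospective validation) have a direct probabilistic reading: the chance that a randomly chosen frequently hospitalized person receives a higher risk score than a randomly chosen person who is not. A minimal rank-based sketch of that computation (illustrative only; the study's own evaluation pipeline is not described at this level):

```python
def auc(pos_scores, neg_scores):
    """Probability a random positive case outranks a random negative one (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.82 thus means the risk score ranks a frequently hospitalized individual above a comparison individual about 82% of the time.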

CONCLUSION: A small subset of CGA items representing independence in (instrumental) activities of daily living, mobility, history of COPD, and social service utilization is sufficient for identifying community members at risk of frequent hospitalization. The determinants of frequent hospitalization represented by this subset of CGA items remained relevant over the course of the COVID-19 pandemic and across sociogeographic settings.

PMID:39664532 | PMC:PMC11632619 | DOI:10.3389/fpubh.2024.1413529

Categories: Literature Watch
