Deep learning
Glu-Ensemble: An ensemble deep learning framework for blood glucose forecasting in type 2 diabetes patients
Heliyon. 2024 Apr 4;10(8):e29030. doi: 10.1016/j.heliyon.2024.e29030. eCollection 2024 Apr 30.
ABSTRACT
Diabetes is a chronic metabolic disorder characterized by elevated blood glucose levels, posing significant health risks such as cardiovascular disease, and nerve, kidney, and eye damage. Effective management of blood glucose is essential for individuals with diabetes to mitigate these risks. This study introduces the Glu-Ensemble, a deep learning framework designed for precise blood glucose forecasting in patients with type 2 diabetes. Unlike other predictive models, Glu-Ensemble addresses challenges related to small sample sizes, data quality issues, reliance on strict statistical assumptions, and the complexity of models. It enhances prediction accuracy and model generalizability by utilizing larger datasets and reduces bias inherent in many predictive models. The framework's unified approach, as opposed to patient-specific models, eliminates the need for initial calibration time, facilitating immediate blood glucose predictions for new patients. The obtained results indicate that Glu-Ensemble surpasses traditional methods in accuracy, as measured by root mean square error, mean absolute error, and error grid analysis. The Glu-Ensemble framework emerges as a promising tool for blood glucose level prediction in type 2 diabetes patients, warranting further investigation in clinical settings for its practical application.
PMID:38638954 | PMC:PMC11024573 | DOI:10.1016/j.heliyon.2024.e29030
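The abstract above gives no implementation details, but the core ensemble pattern it names is straightforward. As a minimal illustrative sketch (not the authors' code), assuming a set of fitted forecasters that each expose a predict() method, the ensemble forecast and the two reported error metrics could look like this in Python/NumPy:

import numpy as np

def ensemble_forecast(models, x):
    # Unweighted mean of the member forecasts; `models` is any iterable
    # of fitted regressors exposing .predict() (a hypothetical interface).
    preds = np.stack([m.predict(x) for m in models])  # (n_models, n_samples)
    return preds.mean(axis=0)

def rmse(y_true, y_pred):
    # Root mean square error, one of the paper's evaluation metrics.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error, the paper's other scalar metric.
    return float(np.mean(np.abs(y_true - y_pred)))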
Machine learning algorithms for detection of visuomotor neural control differences in individuals with PASC and ME
Front Hum Neurosci. 2024 Apr 4;18:1359162. doi: 10.3389/fnhum.2024.1359162. eCollection 2024.
ABSTRACT
The COVID-19 pandemic has affected millions worldwide, giving rise to long-term symptoms known as post-acute sequelae of SARS-CoV-2 infection (PASC), colloquially referred to as long COVID. With an increasing number of people experiencing these symptoms, early intervention is crucial. In this study, we introduce a novel method to detect the likelihood of PASC or Myalgic Encephalomyelitis (ME) using a wearable four-channel headband that collects electroencephalogram (EEG) data. The raw EEG signals are processed using the Continuous Wavelet Transform (CWT) to form a spectrogram-like matrix, which serves as input for various machine learning and deep learning models. We employ models such as ConvLSTM (convolutional long short-term memory), CNN-LSTM, and Bi-LSTM (bidirectional LSTM). Additionally, we test the dataset on traditional machine learning models for comparative analysis. Our results show that the best-performing model, CNN-LSTM, achieved an accuracy of 83%. In addition to the original spectrogram data, we generated synthetic spectrograms using Wasserstein Generative Adversarial Networks (WGANs) to augment our dataset. These synthetic spectrograms contributed to the training phase, addressing challenges such as limited data volume and patient privacy. Impressively, the model trained on synthetic data achieved an average accuracy of 93%, significantly outperforming the original model. These results demonstrate the feasibility and effectiveness of our proposed method in detecting the effects of PASC and ME, paving the way for early identification and management of the condition. The proposed approach holds significant potential for various practical applications, particularly in the clinical domain. It can be utilized for evaluating the current condition of individuals with PASC or ME, and for monitoring the recovery process of those with PASC or the efficacy of interventions in the PASC and ME populations. By implementing this technique, healthcare professionals can facilitate more effective management of chronic PASC or ME effects, ensuring timely intervention and improving the quality of life for those experiencing these conditions.
PMID:38638805 | PMC:PMC11024369 | DOI:10.3389/fnhum.2024.1359162
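For readers unfamiliar with the preprocessing step described above, the following sketch shows how raw EEG can be turned into a CWT-based, spectrogram-like matrix with PyWavelets. The sampling rate, number of scales, and Morlet wavelet are illustrative assumptions; the paper's exact parameters are not stated in the abstract:

import numpy as np
import pywt

def eeg_to_scalogram(eeg, fs=256, n_scales=64, wavelet="morl"):
    # Continuous wavelet transform of one EEG channel; the magnitude of
    # the coefficients forms a (scales x time) spectrogram-like matrix.
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(eeg, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coeffs)

# Four-channel headband: stack per-channel scalograms as input "channels"
# for a downstream CNN-LSTM (synthetic data shown for shapes only).
eeg = np.random.randn(4, 2560)                               # 10 s at 256 Hz
scalograms = np.stack([eeg_to_scalogram(ch) for ch in eeg])  # (4, 64, 2560)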
Full-body pose reconstruction and correction in virtual reality for rehabilitation training
Front Neurosci. 2024 Apr 4;18:1388742. doi: 10.3389/fnins.2024.1388742. eCollection 2024.
ABSTRACT
Existing statistical data indicate that an increasing number of people now require rehabilitation to restore compromised physical mobility. During the rehabilitation process, physical therapists evaluate and guide the movements of patients, aiding a more effective recovery and preventing secondary injuries. However, limited mobility and the high cost of rehabilitation training hinder some patients from timely access to rehabilitation. Virtual reality-based rehabilitation training may alleviate these issues. However, prevalent pose reconstruction algorithms in rehabilitation rely primarily on images, limiting their applicability to virtual reality. Furthermore, existing pose evaluation and correction methods in the field of rehabilitation focus on providing clinical metrics for doctors and fail to offer patients efficient movement guidance. In this paper, a virtual reality-based rehabilitation training method is proposed. Sparse motion signals from virtual reality devices, specifically head-mounted displays and hand controllers, are used to reconstruct full-body poses. Subsequently, the reconstructed poses and standard poses are fed into a natural language processing model, which contrasts the two poses and provides effective pose correction guidance in the form of natural language. Quantitative and qualitative results indicate that the proposed method can accurately reconstruct full-body poses from sparse motion signals in real time. By referencing standard poses, the model generates professional motion correction guidance text. This approach facilitates virtual reality-based rehabilitation training, reducing its cost and enhancing the efficiency of self-rehabilitation training.
PMID:38638693 | PMC:PMC11024313 | DOI:10.3389/fnins.2024.1388742
Predicting small molecules solubility on endpoint devices using deep ensemble neural networks
Digit Discov. 2024 Mar 13;3(4):786-795. doi: 10.1039/d3dd00217a. eCollection 2024 Apr 17.
ABSTRACT
Aqueous solubility is a valuable yet challenging property to predict. Computing solubility using first-principles methods requires accounting for the competing effects of entropy and enthalpy, resulting in long computations for relatively poor accuracy. Data-driven approaches, such as deep learning, offer improved accuracy and computational efficiency but typically lack uncertainty quantification. Additionally, ease of use remains a concern for any computational technique, resulting in the sustained popularity of group-based contribution methods. In this work, we addressed these problems with a deep learning model with predictive uncertainty that runs on a static website (without a server). This approach moves computing needs onto the website visitor without requiring installation, removing the need to pay for and maintain servers. Our model achieves satisfactory results in solubility prediction. Furthermore, we demonstrate how to create molecular property prediction models that balance uncertainty and ease of use. The code is available at https://github.com/ur-whitelab/mol.dev, and the model is usable at https://mol.dev.
PMID:38638648 | PMC:PMC11022985 | DOI:10.1039/d3dd00217a
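The uncertainty quantification mentioned above comes from the deep-ensemble recipe: train several independently initialized networks and read the spread of their predictions as uncertainty. A simplified sketch (homoscedastic variant, hypothetical model objects with a .predict() method):

import numpy as np

def deep_ensemble_predict(models, x):
    # Predictive mean and a simple uncertainty estimate from an ensemble:
    # the standard deviation across members proxies epistemic uncertainty.
    preds = np.stack([m.predict(x) for m in models])  # (n_members, n_samples)
    return preds.mean(axis=0), preds.std(axis=0)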
Sleep-Deep-Learner is taught sleep-wake scoring by the end-user to complete each record in their style
Sleep Adv. 2024 Apr 4;5(1):zpae022. doi: 10.1093/sleepadvances/zpae022. eCollection 2024.
ABSTRACT
Sleep-wake scoring is a time-consuming, tedious, but essential component of clinical and preclinical sleep research. Sleep scoring is even more laborious and challenging in rodents due to the smaller EEG amplitude differences between states and the rapid state transitions, which necessitate scoring in shorter epochs. Although many automated rodent sleep scoring methods exist, they do not perform as well when scoring new datasets, especially those which involve changes in the EEG/EMG profile. Thus, manual scoring by expert scorers remains the gold standard. Here we take a different approach to this problem by using a neural network to accelerate the scoring of expert scorers. Sleep-Deep-Learner creates a bespoke deep convolutional neural network model for individual electroencephalographic (EEG) or local-field-potential (LFP) records via transfer learning of GoogLeNet, by learning from a small subset of manual scores of each EEG/LFP record as provided by the end-user. Sleep-Deep-Learner then automates scoring of the remainder of the EEG/LFP record. A novel REM sleep scoring correction procedure further enhanced accuracy. Sleep-Deep-Learner reliably scores EEG and LFP data and retains sleep-wake architecture in wild-type mice, in sleep induced by the hypnotic zolpidem, in a mouse model of Alzheimer's disease, and in a genetic knock-down study, when compared to manual scoring. Sleep-Deep-Learner reduced manual scoring time to one-twelfth. Since Sleep-Deep-Learner uses transfer learning on each independent recording, it is not biased by previously scored existing datasets. Thus, we find Sleep-Deep-Learner performs well when used on signals altered by a drug, disease model, or genetic modification.
PMID:38638581 | PMC:PMC11025629 | DOI:10.1093/sleepadvances/zpae022
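The transfer-learning step described above (adapting GoogLeNet to a small set of user-provided scores) follows a standard pattern. A minimal PyTorch sketch, assuming three sleep-wake classes and image-like inputs; the paper's actual preprocessing and training schedule are not reproduced:

import torch
import torchvision

# Pretrained GoogLeNet with its classifier replaced by a scoring head
# (3 classes assumed here, e.g. wake / NREM / REM).
model = torchvision.models.googlenet(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 3)

# Freeze the pretrained feature extractor; train only the new head on the
# small subset of manual scores supplied by the end-user.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)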
Music-evoked emotions classification using vision transformer in EEG signals
Front Psychol. 2024 Apr 4;15:1275142. doi: 10.3389/fpsyg.2024.1275142. eCollection 2024.
ABSTRACT
INTRODUCTION: The field of electroencephalogram (EEG)-based emotion identification has received significant attention and has been widely utilized in both human-computer interaction and therapeutic settings. Manually analyzing EEG signals requires a significant investment of time and effort. While machine learning methods have shown promising results in classifying emotions based on EEG data, extracting distinct characteristics from these signals remains a considerable challenge.
METHODS: In this study, we present a deep learning model that incorporates an attention mechanism to effectively extract spatial and temporal information from emotion EEG recordings, addressing an existing gap in the field. Emotion EEG classification is implemented using a global average pooling layer and a fully connected layer, which leverage the discernible characteristics. To assess the effectiveness of the proposed methodology, we first gathered a dataset of EEG recordings related to music-induced emotions.
EXPERIMENTS: Subsequently, we ran comparative tests between the state-of-the-art algorithms and the method given in this study, utilizing this proprietary dataset. Furthermore, a publicly accessible dataset was included in the subsequent comparative trials.
DISCUSSION: The experimental findings provide evidence that the suggested methodology outperforms existing approaches in the categorization of emotion EEG signals, both in binary (positive and negative) and ternary (positive, negative, and neutral) scenarios.
PMID:38638516 | PMC:PMC11024288 | DOI:10.3389/fpsyg.2024.1275142
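The classification stage named in METHODS, a global average pooling layer followed by a fully connected layer, can be sketched in a few lines of PyTorch. The feature-map size and class count below are illustrative; the attention-based extractor that precedes this head is paper-specific and omitted:

import torch
import torch.nn as nn

class EmotionHead(nn.Module):
    # Global average pooling + fully connected classifier head.
    def __init__(self, in_channels=128, n_classes=3):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)   # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Linear(in_channels, n_classes)

    def forward(self, feats):
        pooled = self.gap(feats).flatten(1)  # (B, C)
        return self.fc(pooled)               # logits: pos/neg(/neutral)

logits = EmotionHead()(torch.randn(8, 128, 9, 9))  # shape smoke test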
Performance evaluation in cataract surgery with an ensemble of 2D-3D convolutional neural networks
Healthc Technol Lett. 2024 Feb 17;11(2-3):189-195. doi: 10.1049/htl2.12078. eCollection 2024 Apr-Jun.
ABSTRACT
An important part of surgical training in ophthalmology is understanding how to proficiently perform cataract surgery. Operating skill in cataract surgery is typically assessed by real-time or video-based expert review using a rating scale. This is time-consuming, subjective, and labour-intensive. A typical trainee graduates with over 100 complete surgeries, each of which requires review by the surgical educators. Due to the consistently repetitive nature of this task, it lends itself well to machine learning-based evaluation. Recent studies utilize deep learning models trained on tool motion trajectories obtained using additional equipment or robotic systems. However, the pipeline of extracting frames from videos for tool recognition, followed by phase recognition and then skill assessment, is laborious. This project proposes a deep learning model for skill evaluation using raw surgery videos that is cost-effective and end-to-end trainable. An advanced ensemble of convolutional neural network models is leveraged to model technical skills in cataract surgeries and is evaluated using a large dataset comprising almost 200 surgical trials. The highest accuracy of 0.8494 is observed on the phacoemulsification step data. Our model yielded an average accuracy of 0.8200 and an average AUC score of 0.8800 across all four phase datasets of cataract surgery, demonstrating its robustness across different data. The proposed ensemble model with 2D and 3D convolutional neural networks demonstrated a promising result without using tool motion trajectories to evaluate surgical expertise.
PMID:38638495 | PMC:PMC11022224 | DOI:10.1049/htl2.12078
Calibration-free structured-light-based 3D scanning system in laparoscope for robotic surgery
Healthc Technol Lett. 2024 Mar 8;11(2-3):196-205. doi: 10.1049/htl2.12083. eCollection 2024 Apr-Jun.
ABSTRACT
Accurate 3D shape measurement is crucial for surgical support and alignment in robotic surgery systems. Stereo cameras in laparoscopes offer a potential solution; however, their accuracy in stereo image matching diminishes when the target image has few textures. Although stereo matching with deep learning has gained significant attention, supervised learning requires a large dataset of images with depth annotations, which are scarce for laparoscopes. Thus, there is a strong demand to explore alternative methods for depth reconstruction or annotation for laparoscopes. Active stereo techniques are a promising approach for achieving 3D reconstruction without textures. In this study, a 3D shape reconstruction method is proposed using an ultra-small patterned projector attached to a laparoscopic arm to address these issues. The projector emits structured light with a grid-like pattern featuring node-wise modulation for positional encoding. To scan the target object, multiple images are taken while the projector is in motion, and the relative poses of the projector and a camera are auto-calibrated using a differential rendering technique. In the experiment, the proposed method is evaluated by performing 3D reconstruction using images obtained from a surgical robot and comparing the results with a ground-truth shape obtained from X-ray CT.
PMID:38638488 | PMC:PMC11022229 | DOI:10.1049/htl2.12083
Artificial intelligence-driven prognostic system for conception prediction and management in intrauterine adhesions following hysteroscopic adhesiolysis: a diagnostic study using hysteroscopic images
Front Bioeng Biotechnol. 2024 Apr 4;12:1327207. doi: 10.3389/fbioe.2024.1327207. eCollection 2024.
ABSTRACT
INTRODUCTION: Intrauterine adhesions (IUAs) caused by endometrial injury, commonly occurring in developing countries, can lead to subfertility. This study aimed to develop and evaluate a DeepSurv architecture-based artificial intelligence (AI) system for predicting fertility outcomes after hysteroscopic adhesiolysis.
METHODS: This diagnostic study included 555 patients with intrauterine adhesions (IUAs) treated with hysteroscopic adhesiolysis, with 4,922 second-look hysteroscopic images from a prospective clinical database (IUADB, NCT05381376) and a minimum of 2 years of follow-up. These patients were randomly divided into training, validation, and test groups for model development, tuning, and external validation. Four transfer learning models were built using the DeepSurv architecture, and a code-free AI application for pregnancy prediction was also developed. The primary outcome was the model's ability to predict pregnancy within a year after adhesiolysis. Secondary outcomes were model performance, evaluated using the time-dependent area under the curve (AUC) and C-index, and the benefit of assisted reproductive technology (ART), evaluated by hazard ratio (HR) among different risk groups.
RESULTS: External validation revealed that using the DeepSurv architecture, InceptionV3+ DeepSurv, InceptionResNetV2+ DeepSurv, and ResNet50+ DeepSurv achieved AUCs of 0.94, 0.95, and 0.93, respectively, for one-year pregnancy prediction, outperforming other models and clinical score systems. A code-free AI application was developed to identify candidates for ART. Patients with lower natural conception probability indicated by the application had a higher ART benefit hazard ratio (HR) of 3.13 (95% CI: 1.22-8.02, p = 0.017).
CONCLUSION: InceptionV3+ DeepSurv, InceptionResNetV2+ DeepSurv, and ResNet50+ DeepSurv show potential in predicting the fertility outcomes of IUAs after hysteroscopic adhesiolysis. The code-free AI application based on the DeepSurv architecture facilitates personalized therapy following hysteroscopic adhesiolysis.
PMID:38638324 | PMC:PMC11024240 | DOI:10.3389/fbioe.2024.1327207
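DeepSurv, the architecture all of these models share, trains a network to output a risk score and optimizes the negative Cox partial log-likelihood. A compact PyTorch sketch of that loss (assuming no tied event times; here an "event" would be an observed pregnancy, censored otherwise):

import torch

def cox_ph_loss(risk, times, events):
    # risk: (N,) network outputs (log-hazard); times: (N,) follow-up times;
    # events: (N,) 1 if the event was observed, 0 if censored.
    order = torch.argsort(times, descending=True)  # builds risk sets
    r, e = risk[order], events[order]
    log_cum = torch.logcumsumexp(r, dim=0)         # log-sum over each risk set
    return -torch.sum((r - log_cum) * e) / e.sum().clamp(min=1)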
Pole balancing on the fingertip: model-motivated machine learning forecasting of falls
Front Physiol. 2024 Apr 4;15:1334396. doi: 10.3389/fphys.2024.1334396. eCollection 2024.
ABSTRACT
Introduction: There is increasing interest in developing mathematical and computational models to forecast adverse events in physiological systems. Examples include falls, the onset of fatal cardiac arrhythmias, and adverse surgical outcomes. However, the dynamics of physiological systems are known to be exceedingly complex and perhaps even chaotic. Since no model can be perfect, it becomes important to understand how forecasting can be improved, especially when training data is limited. An adverse event that can be readily studied in the laboratory is the occurrence of stick falls when humans attempt to balance a stick on their fingertips. Over the last 20 years, this task has been extensively investigated experimentally, and presently detailed mathematical models are available. Methods: Here we use a long short-term memory (LSTM) deep learning network to forecast stick falls. We train this model to forecast stick falls in three ways: 1) using only data generated by the mathematical model (synthetic data), 2) using only recordings of stick falls measured with high-speed motion capture (human data), and 3) using transfer learning, which combines a model trained on synthetic data with a small amount of human balancing data. Results: We observe that the LSTM model is much more successful in forecasting a fall using synthetic data than it is when trained with the limited available human data. However, with transfer learning, i.e., the LSTM model pre-trained with synthetic data and re-trained with a small amount of real human balancing data, the ability to forecast impending falls in human data is vastly improved. Indeed, it becomes possible to correctly forecast 60%-70% of real human stick falls up to 2.35 s in advance. Conclusion: These observations support the use of model-generated data and transfer learning techniques to improve the ability of computational models to forecast adverse physiological events.
PMID:38638278 | PMC:PMC11024436 | DOI:10.3389/fphys.2024.1334396
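The transfer-learning recipe in Methods (pre-train an LSTM on abundant synthetic trajectories, then fine-tune on scarce human recordings) can be sketched as below in PyTorch. Feature count, hidden size, and the two-speed learning rates are illustrative assumptions, not the paper's settings:

import torch
import torch.nn as nn

class FallForecaster(nn.Module):
    # LSTM mapping a window of stick/fingertip coordinates to the logit
    # of an impending fall.
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # logit from the last time step

model = FallForecaster()
# Step 1: train on synthetic, model-generated balancing trajectories.
# Step 2: transfer, i.e. fine-tune on the small human dataset, updating
# the pretrained recurrent layers more gently than the head.
optimizer = torch.optim.Adam([
    {"params": model.lstm.parameters(), "lr": 1e-4},
    {"params": model.head.parameters(), "lr": 1e-3},
])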
A Deep-Learning-Based Partial-Volume Correction Method for Quantitative (177)Lu SPECT/CT Imaging
J Nucl Med. 2024 Apr 18:jnumed.123.266889. doi: 10.2967/jnumed.123.266889. Online ahead of print.
ABSTRACT
With the development of new radiopharmaceutical therapies, quantitative SPECT/CT has progressively emerged as a crucial tool for dosimetry. One major obstacle of SPECT is its poor resolution, which results in blurring of the activity distribution. Especially for small objects, this so-called partial-volume effect limits the accuracy of activity quantification. Numerous methods for partial-volume correction (PVC) have been proposed, but most methods have the disadvantage of assuming a spatially invariant resolution of the imaging system, which does not hold for SPECT. Furthermore, most methods require a segmentation based on anatomic information. Methods: We introduce DL-PVC, a methodology for PVC of 177Lu SPECT/CT imaging using deep learning (DL). Training was based on a dataset of 10,000 random activity distributions placed in extended cardiac-torso body phantoms. Realistic SPECT acquisitions were created using the SIMIND Monte Carlo simulation program. SPECT reconstructions without and with resolution modeling were performed using the CASToR and STIR reconstruction software, respectively. The pairs of ground-truth activity distributions and simulated SPECT images were used for training various U-Nets. Quantitative analysis of the performance of these U-Nets was based on metrics such as the structural similarity index measure or normalized root-mean-square error, but also on volume activity accuracy, a new metric that describes the fraction of voxels in which the determined activity concentration deviates from the true activity concentration by less than a certain margin. On the basis of this analysis, the optimal parameters for normalization, input size, and network architecture were identified. Results: Our simulation-based analysis revealed that DL-PVC (0.95/7.8%/35.8% for structural similarity index measure/normalized root-mean-square error/volume activity accuracy) outperforms SPECT without PVC (0.89/10.4%/12.1%) and after iterative Yang PVC (0.94/8.6%/15.1%). Additionally, we validated DL-PVC on 177Lu SPECT/CT measurements of 3-dimensionally printed phantoms of different geometries. Although DL-PVC showed activity recovery similar to that of the iterative Yang method, no segmentation was required. In addition, DL-PVC was able to correct other image artifacts such as Gibbs ringing, making it clearly superior at the voxel level. Conclusion: In this work, we demonstrate the added value of DL-PVC for quantitative 177Lu SPECT/CT. Our analysis validates the functionality of DL-PVC and paves the way for future deployment on clinical image data.
PMID:38637141 | DOI:10.2967/jnumed.123.266889
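The new "volume activity accuracy" metric is defined in the abstract as the fraction of voxels whose determined activity concentration deviates from the truth by less than a certain margin. A direct NumPy sketch of that definition (the 20% margin and the nonzero-truth mask are assumptions, since the paper's exact margin is not given here):

import numpy as np

def volume_activity_accuracy(pred, truth, margin=0.2, mask=None):
    # Fraction of voxels with relative error below `margin`.
    if mask is None:
        mask = truth > 0                 # consider voxels with true activity
    rel_err = np.abs(pred[mask] - truth[mask]) / truth[mask]
    return float(np.mean(rel_err < margin))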
Histomorphometric Image Classifier of Different Grades of Oral Squamous Cell Carcinoma Using Transfer Learning and Convolutional Neural Network
J Stomatol Oral Maxillofac Surg. 2024 Apr 16:101876. doi: 10.1016/j.jormas.2024.101876. Online ahead of print.
ABSTRACT
BACKGROUND: Machine learning is an emerging technology in the health care field that aims to fundamentally revamp the traditional system and aid medical practitioners. The histopathological analysis of oral cancers is crucial for pathologists to ascertain grading. Therefore, this study attempts to grade stained tissue samples from OSCC (oral squamous cell carcinoma) patients using different deep learning models.
METHODS: A dataset of 120 histopathological images of OSCC was collected and classified as well-differentiated (40), moderately differentiated (40), and poorly differentiated (40) according to Broders' grading system. The CNN (convolutional neural network) architectures were based on the pre-trained models VGG16 (Visual Geometry Group 16), VGG19 (Visual Geometry Group 19), RESNET50 (Residual Network 50), and DENSENET121 (Dense Network 121) for image analysis.
RESULTS: At a magnification of 4x, all four models achieved the highest test accuracy of 66.67%. DENSENET121 scored the highest validation accuracy of 81%. At 10x, RESNET50, VGG19, and DENSENET121 achieved the highest test accuracy of 90.9% and VGG19 achieved the highest validation accuracy of 83.3%. At 40x, the highest test accuracy of 70% was achieved by RESNET50 and DENSENET121. The validation accuracy was comparable between RESNET50, VGG16, and VGG19.
CONCLUSION: Deep learning-based grading of digital histopathological images can support timely and effective prognosis and multi-modal treatment protocols for oral cancer patients, thus reducing the operational workload of pathologists. By systematically evaluating model performance and addressing concerns about overfitting, we develop robust models suitable for medical diagnosis.
PMID:38636805 | DOI:10.1016/j.jormas.2024.101876
Automated evaluation of hip abductor muscle quality and size in hip osteoarthritis: Localized muscle regions are strongly associated with overall muscle quality
Magn Reson Imaging. 2024 Apr 16:S0730-725X(24)00138-3. doi: 10.1016/j.mri.2024.04.025. Online ahead of print.
ABSTRACT
Limited information exists regarding abductor muscle quality variation across its length and which locations are most representative of overall muscle quality. This is exacerbated by time-intensive processes for manual muscle segmentation, which limits feasibility of large cohort analyses. The purpose of this study was to develop an automated and localized analysis pipeline that accurately estimates hip abductor muscle quality and size in individuals with mild-to-moderate hip osteoarthritis (OA) and identifies regions of each muscle which provide best estimates of overall muscle quality. Forty-four participants (age 52.7 ± 16.1 years, BMI 23.7 ± 3.4 kg/m2, 14 males) with and without mild-to-moderate radiographic hip OA were recruited for this study. Unilateral hip magnetic resonance (MR) images were acquired on a 3.0 T MR scanner and included axial T1-weighted fast spin echo and 3D axial Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL-IQ) spoiled gradient-recalled echo (SPGR) with multi-peak fat spectrum modeling and single T2* correction. A three dimensional (3D) V-Net convolutional neural network was trained to automatically segment the gluteus medius (GMED), gluteus minimus (GMIN), and tensor fascia lata (TFL) on axial IDEAL-IQ. Agreement between manual and automatic segmentation and associations between axial fat fraction (FF) estimated from IDEAL-IQ and overall muscle FF were evaluated. Dice scores for automatic segmentation were 0.94, 0.87, and 0.91 for GMED, GMIN, and TFL, respectively. GMED, GMIN, and TFL volumetric and FF measures were strongly correlated (r: 0.92-0.99) between automatic and manual segmentations, with 95% limits of agreement of [-1.99%, 2.89%] and [-9.79 cm3, 17.43 cm3], respectively. Axial FF was significantly associated with overall FF with the strongest correlations at 50%, 50%, and 65% the length of the GMED, GMIN, and TFL muscles, respectively (r: 0.93-0.97). An automated and localized analysis can provide efficient and accurate estimates of hip abductor muscle quality and size across muscle length. Specific regions of the muscle may be used to estimate overall muscle quality in an abbreviated evaluation of muscle quality.
PMID:38636675 | DOI:10.1016/j.mri.2024.04.025
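For reference, the Dice scores quoted above measure voxel overlap between automatic and manual segmentations. A minimal NumPy implementation of the coefficient:

import numpy as np

def dice(a, b, eps=1e-8):
    # Dice similarity coefficient between two binary masks:
    # 2 * |intersection| / (|A| + |B|); 1.0 means perfect overlap.
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))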
Suppressing label noise in medical image classification using mixup attention and self-supervised learning
Phys Med Biol. 2024 Apr 18. doi: 10.1088/1361-6560/ad4083. Online ahead of print.
ABSTRACT
Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements heavily depend on large-scale, accurately annotated training data. However, label noise is inevitably introduced in medical image annotation, as the labeling process relies heavily on the expertise and experience of annotators. Meanwhile, DNNs tend to overfit noisy labels, degrading model performance. Therefore, in this work, we devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group attention mixup strategies into vanilla supervised learning. Contrastive learning for the feature extractor helps to enhance the visual representations of DNNs. The intra-group attention mixup module constructs groups, assigns self-attention weights to group-wise samples, and subsequently interpolates numerous noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy medical datasets under various noise levels. Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability to curb label noise and has clear potential for real-world clinical application.
PMID:38636495 | DOI:10.1088/1361-6560/ad4083
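The weighted mixup operation at the heart of the intra-group attention module interpolates samples within a group according to learned attention weights. A minimal NumPy sketch of that interpolation step alone (the grouping and the attention computation are paper-specific and assumed given):

import numpy as np

def weighted_mixup(x, y, weights):
    # x: (G, ...) inputs, y: (G, C) one-hot labels, weights: (G,)
    # self-attention scores for one group. The convex combination
    # down-weights likely-noisy members of the group.
    w = weights / weights.sum()
    x_mix = np.tensordot(w, x, axes=1)   # weighted sum of the inputs
    y_mix = w @ y                        # matching soft label
    return x_mix, y_mix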
Thin-slice elbow MRI with deep learning reconstruction: Superior diagnostic performance of elbow ligament pathologies
Eur J Radiol. 2024 Apr 16;175:111471. doi: 10.1016/j.ejrad.2024.111471. Online ahead of print.
ABSTRACT
PURPOSE: With the slice thickness routinely used in elbow MRI, small or subtle lesions may be overlooked or misinterpreted as insignificant. This study compared 1 mm slice thickness MRI (1 mm MRI) with deep learning reconstruction (DLR) to 3 mm slice thickness MRI (3 mm MRI) without/with DLR and to 1 mm MRI without DLR, regarding image quality and diagnostic performance for elbow tendon and ligament pathologies.
METHODS: This retrospective study included 53 patients who underwent 3 T elbow MRI between February 2021 and January 2022, including T2-weighted fat-saturated coronal 3 mm and 1 mm MRI without/with DLR. Two radiologists independently assessed the four MRI scans for image quality and artefacts and identified pathologies of the five elbow tendons and ligaments. In the 19 patients who underwent elbow surgery after elbow MRI, diagnostic performance was evaluated using surgical records as the reference standard.
RESULTS: For both readers, 3 mm MRI with DLR had significantly higher image quality scores than 3 mm MRI without DLR and 1 mm MRI with DLR (all P < 0.01). For common extensor tendon and elbow ligament pathologies, 1 mm MRI with DLR identified the highest number of pathologies for both readers. The 1 mm MRI with DLR had the highest kappa values for all tendons and ligaments. For reader 1, 1 mm MRI with DLR showed diagnostic performance superior to 3 mm MRI without/with DLR. For reader 2, 1 mm MRI with DLR showed the highest diagnostic performance; however, the difference was not significant.
CONCLUSIONS: One mm MRI with DLR showed the highest diagnostic performance for evaluating elbow tendon and ligament pathologies, with similar subjective image qualities and artefacts.
PMID:38636411 | DOI:10.1016/j.ejrad.2024.111471
Parental status and markers of brain and cellular age: A 3D convolutional network and classification study
Psychoneuroendocrinology. 2024 Apr 2;165:107040. doi: 10.1016/j.psyneuen.2024.107040. Online ahead of print.
ABSTRACT
Recent research shows prominent effects of pregnancy and the parenthood transition on structural brain characteristics in humans. Here, we present a comprehensive study of how parental status and number of children born/fathered links to markers of brain and cellular ageing in 36,323 UK Biobank participants (age range 44.57-82.06 years; 52% female). To assess global effects of parenting on the brain, we trained a 3D convolutional neural network on T1-weighted magnetic resonance images, and estimated brain age in a held-out test set. To investigate regional specificity, we extracted cortical and subcortical volumes using FreeSurfer, and ran hierarchical clustering to group regional volumes based on covariance. Leukocyte telomere length (LTL) derived from DNA was used as a marker of cellular ageing. We employed linear regression models to assess relationships between number of children, brain age, regional brain volumes, and LTL, and included interaction terms to probe sex differences in associations. Lastly, we used the brain measures and LTL as features in binary classification models, to determine if markers of brain and cellular ageing could predict parental status. The results showed associations between a greater number of children born/fathered and younger brain age in both females and males, with stronger effects observed in females. Volume-based analyses showed maternal effects in striatal and limbic regions, which were not evident in fathers. We found no evidence for associations between number of children and LTL. Classification of parental status showed an Area under the ROC Curve (AUC) of 0.57 for the brain age model, while the models using regional brain volumes and LTL as predictors showed AUCs of 0.52. Our findings align with previous population-based studies of middle- and older-aged parents, revealing subtle but significant associations between parental experience and neuroimaging-based surrogate markers of brain health. The findings further corroborate results from longitudinal cohort studies following parents across pregnancy and postpartum, potentially indicating that the parenthood transition is associated with long-term influences on brain health.
PMID:38636355 | DOI:10.1016/j.psyneuen.2024.107040
Deep-learning-based real-time individualization for reduced-order haemodynamic model
Comput Biol Med. 2024 Apr 15;174:108476. doi: 10.1016/j.compbiomed.2024.108476. Online ahead of print.
ABSTRACT
The reduced-order lumped parameter model (LPM) offers great computational efficiency in real-time numerical simulations of haemodynamics but is limited by the accuracy of patient-specific computation. This study proposed a method to achieve individualized LPM modeling with high accuracy, improving the practical clinical applicability of LPMs. Clinical data were collected from two medical centres, comprising haemodynamic indicators from 323 individuals, including brachial artery pressure waveforms, cardiac output data, and internal carotid artery flow waveforms. The data were expanded to 5000 synthesised cases that all fell within the physiological range of each indicator. An LPM of the human blood circulation system was established. A double-path neural network (DPNN) was designed to take as input the waveforms of each haemodynamic indicator and their key features, and to output the individual parameters of the LPM, which were labelled using a conventional optimization algorithm. Clinically collected data from another 100 cases were used as the test set to verify the accuracy of the individual LPM parameters predicted by the DPNN. The results show that the DPNN converged well during training. In the test set, compared with clinical measurements, the mean differences between each haemodynamic indicator and the estimate calculated by the individual LPM based on the DPNN were about 10%. Furthermore, DPNN prediction takes only 4 s for 100 cases. The DPNN proposed in this study permits real-time and accurate individualization of LPMs. For medical issues involving haemodynamics, it lays the foundation for patient-specific numerical simulation, which may be beneficial for potential clinical applications.
PMID:38636328 | DOI:10.1016/j.compbiomed.2024.108476
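A double-path network of the kind described, with one branch for the raw waveforms and one for their key features, fused into a head that regresses the LPM parameters, might be sketched as follows in PyTorch. All layer sizes and the parameter count are illustrative assumptions:

import torch
import torch.nn as nn

class DoublePathNet(nn.Module):
    def __init__(self, n_key=16, n_params=12):
        super().__init__()
        # Path 1: encode a waveform (e.g. brachial pressure) with a 1-D CNN.
        self.wave_path = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),      # -> (B, 128)
        )
        # Path 2: encode hand-picked key features of the indicators.
        self.feat_path = nn.Sequential(nn.Linear(n_key, 64), nn.ReLU())
        # Fused head regresses the individual LPM parameters.
        self.head = nn.Linear(16 * 8 + 64, n_params)

    def forward(self, wave, key_feats):
        z = torch.cat([self.wave_path(wave), self.feat_path(key_feats)], dim=1)
        return self.head(z)

params = DoublePathNet()(torch.randn(2, 1, 512), torch.randn(2, 16))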
One-shot skill assessment in high-stakes domains with limited data via meta learning
Comput Biol Med. 2024 Apr 11;174:108470. doi: 10.1016/j.compbiomed.2024.108470. Online ahead of print.
ABSTRACT
Deep Learning (DL) has achieved robust competency assessment in various high-stakes fields. However, the applicability of DL models is often hampered by their substantial data requirements and confinement to specific training domains. This prevents them from transitioning to new tasks where data is scarce. Therefore, domain adaptation emerges as a critical element for the practical implementation of DL in real-world scenarios. Herein, we introduce A-VBANet, a novel meta-learning model capable of delivering domain-agnostic skill assessment via one-shot learning. Our methodology has been tested by assessing surgical skills on five laparoscopic and robotic simulators and in real-life laparoscopic cholecystectomy. Our model successfully adapted with accuracies up to 99.5% in one-shot and 99.9% in few-shot settings for simulated tasks and 89.7% for laparoscopic cholecystectomy. This study marks the first instance of a domain-agnostic methodology for skill assessment in critical fields, setting a precedent for the broad application of DL across diverse real-life domains with limited data.
PMID:38636326 | DOI:10.1016/j.compbiomed.2024.108470
Abdominal CT metrics in 17,646 patients reveal associations between myopenia, myosteatosis, and medical phenotypes: a phenome-wide association study
EBioMedicine. 2024 Apr 17;103:105116. doi: 10.1016/j.ebiom.2024.105116. Online ahead of print.
ABSTRACT
BACKGROUND: Deep learning facilitates large-scale automated imaging evaluation of body composition. However, associations of body composition biomarkers with medical phenotypes have been underexplored. Phenome-wide association study (PheWAS) techniques search for medical phenotypes associated with biomarkers. A PheWAS integrating large-scale analysis of imaging biomarkers and electronic health record (EHR) data could discover previously unreported associations and validate expected associations. Here we use PheWAS methodology to determine the association of abdominal CT-based skeletal muscle metrics with medical phenotypes in a large North American cohort.
METHODS: An automated deep learning pipeline was used to measure skeletal muscle index (SMI; biomarker of myopenia) and skeletal muscle density (SMD; biomarker of myosteatosis) from abdominal CT scans of adults between 2012 and 2018. A PheWAS was performed with logistic regression using patient sex and age as covariates to assess for associations between CT-derived muscle metrics and 611 common EHR-derived medical phenotypes. PheWAS P values were considered significant at a Bonferroni corrected threshold (α = 0.05/1222).
FINDINGS: 17,646 adults (mean age, 56 years ± 19 [SD]; 57.5% women) were included. CT-derived SMI was significantly associated with 268 medical phenotypes; SMD with 340 medical phenotypes. Previously unreported associations with the highest magnitude of significance included higher SMI with decreased cardiac dysrhythmias (OR [95% CI], 0.59 [0.55-0.64]; P < 0.0001), decreased epilepsy (OR, 0.59 [0.50-0.70]; P < 0.0001), and increased elevated prostate-specific antigen (OR, 1.84 [1.47-2.31]; P < 0.0001), and higher SMD with decreased decubitus ulcers (OR, 0.36 [0.31-0.42]; P < 0.0001), sleep disorders (OR, 0.39 [0.32-0.47]; P < 0.0001), and osteomyelitis (OR, 0.43 [0.36-0.52]; P < 0.0001).
INTERPRETATION: PheWAS methodology reveals previously unreported associations between CT-derived biomarkers of myopenia and myosteatosis and EHR medical phenotypes. The high-throughput PheWAS technique applied on a population scale can generate research hypotheses related to myopenia and myosteatosis and can be adapted to research possible associations of other imaging biomarkers with hundreds of EHR medical phenotypes.
FUNDING: National Institutes of Health, Stanford AIMI-HAI pilot grant, Stanford Precision Health and Integrated Diagnostics, Stanford Cardiovascular Institute, Stanford Center for Digital Health, and Stanford Knight-Hennessy Scholars.
PMID:38636199 | DOI:10.1016/j.ebiom.2024.105116
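The PheWAS scan in METHODS amounts to one covariate-adjusted logistic regression per phenotype with a Bonferroni-corrected threshold. A sketch with statsmodels (phenotype vectors are assumed binary 0/1; the factor of 2 mirrors the study's two muscle metrics, giving α = 0.05/1222 for 611 phenotypes):

import numpy as np
import statsmodels.api as sm

def phewas_scan(biomarker, age, sex, phenotypes, alpha=0.05):
    # Regress each binary EHR phenotype on the CT-derived muscle metric,
    # adjusting for age and sex; `phenotypes` maps name -> 0/1 outcomes.
    X = sm.add_constant(np.column_stack([biomarker, age, sex]))
    threshold = alpha / (2 * len(phenotypes))  # Bonferroni over all tests
    results = {}
    for name, y in phenotypes.items():
        fit = sm.Logit(y, X).fit(disp=0)
        odds_ratio, p = np.exp(fit.params[1]), fit.pvalues[1]
        results[name] = (odds_ratio, p, p < threshold)
    return results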
Attention-based deep convolutional neural network for classification of generalized and focal epileptic seizures
Epilepsy Behav. 2024 Apr 17;155:109732. doi: 10.1016/j.yebeh.2024.109732. Online ahead of print.
ABSTRACT
Epilepsy affects over 50 million people globally. Electroencephalography is critical for epilepsy diagnosis, but manual seizure classification is time-consuming and requires extensive expertise. This paper presents an automated multi-class seizure classification model using EEG signals from the Temple University Hospital Seizure Corpus version 1.5.2. Eleven features, including time-based correlation, time-based eigenvalues, power spectral density, frequency-based correlation, frequency-based eigenvalues, sample entropy, spectral entropy, logarithmic sum, standard deviation, absolute mean, and the ratio of Daubechies D4 wavelet-transformed coefficients, were extracted from 10-second sliding windows across channels. The model combines a multi-head self-attention mechanism with a deep convolutional neural network (CNN) to classify seven subtypes of generalized and focal epileptic seizures. The model achieved 0.921 weighted accuracy and a 0.902 weighted F1 score in classifying focal onset non-motor, generalized onset non-motor, simple partial, complex partial, absence, tonic, and tonic-clonic seizures. In comparison, a CNN model without multi-head attention achieved 0.767 weighted accuracy. Ablation studies were conducted to validate the importance of the transformer encoders and attention. The promising classification results demonstrate the potential of deep learning for handling EEG complexity and improving epilepsy diagnosis. This seizure classification model could enable timely interventions when translated into clinical practice.
PMID:38636140 | DOI:10.1016/j.yebeh.2024.109732
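The architecture described above (windowed hand-crafted features passed through a CNN and a multi-head self-attention stage before a seven-way classifier) can be outlined in PyTorch. Embedding width, head count, and pooling are illustrative choices, not the paper's exact design:

import torch
import torch.nn as nn

class AttnSeizureClassifier(nn.Module):
    def __init__(self, n_feat=11, d_model=64, n_classes=7):
        super().__init__()
        # Embed the 11 per-window features with a 1-D convolution.
        self.embed = nn.Conv1d(n_feat, d_model, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                          batch_first=True)
        self.cls = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, windows, n_feat)
        z = self.embed(x.transpose(1, 2)).transpose(1, 2)  # (B, W, d_model)
        z, _ = self.attn(z, z, z)        # self-attention across windows
        return self.cls(z.mean(dim=1))   # pool windows -> 7-way logits

logits = AttnSeizureClassifier()(torch.randn(2, 30, 11))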