Deep learning
Symmetric spatiotemporal learning network with sparse meter graph for short-term energy-consumption prediction in manufacturing systems
Heliyon. 2024 Jul 10;10(14):e34394. doi: 10.1016/j.heliyon.2024.e34394. eCollection 2024 Jul 30.
ABSTRACT
Short-term energy-consumption prediction is the basis of anomaly detection, real-time scheduling, and energy-saving control in manufacturing systems. Most existing methods focus on single-node energy-consumption prediction and suffer from difficult parameter collection and modelling. Although several methods have been presented for multinode energy-consumption prediction, their prediction performance needs to be improved owing to a lack of appropriate knowledge guidance and learning networks for complex spatiotemporal relationships. This study presents a symmetric spatiotemporal learning network (SSTLN) with a sparse meter graph (SMG), abbreviated SSTLN-SMG, that aims to predict energy consumption at multiple nodes from energy-consumption time series and general process knowledge. The SMG expresses process knowledge by abstracting production nodes, material flows, and energy usage, and provides initial guidance for the SSTLN to extract spatial features. The SSTLN, a symmetrical stack of graph convolutional networks (GCN) and gated linear units (GLU), is devised to achieve a trade-off not only between spatial and temporal feature extraction but also between detail capture and noise suppression. Extensive experiments were performed using datasets from an aluminium profile plant. The experimental results demonstrate that the proposed method achieves multinode energy-consumption prediction with less prediction error than state-of-the-art methods, methods with deformed meter graphs, and methods with deformed learning networks.
PMID:39108905 | PMC:PMC11301346 | DOI:10.1016/j.heliyon.2024.e34394
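The core SSTLN building blocks named in the abstract, a graph convolution guided by the meter graph and a gated linear unit, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' network: the 3-node chain graph, feature sizes, and the identity/ones weight matrices are arbitrary assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    # Graph convolution with symmetric normalization: D^-1/2 (A+I) D^-1/2 X W
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W

def glu(X, W, V):
    # Gated linear unit: (XW) gated elementwise by sigmoid(XV)
    return (X @ W) * (1.0 / (1.0 + np.exp(-(X @ V))))

# Three production nodes linked in a chain by the (sparse) meter graph
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.default_rng(0).normal(size=(3, 4))   # 4 features per node
H = glu(gcn_layer(A, X, np.ones((4, 2))), np.eye(2), np.eye(2))
print(H.shape)  # (3, 2): one 2-dim embedding per node
```

In the paper these layers are stacked symmetrically so that spatial (GCN) and temporal (GLU) extraction alternate; the sketch shows only a single GCN-then-GLU pass.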
A novel method for identifying rice seed purity using hybrid machine learning algorithms
Heliyon. 2024 Jul 6;10(14):e33941. doi: 10.1016/j.heliyon.2024.e33941. eCollection 2024 Jul 30.
ABSTRACT
In the grain industry, identifying seed purity is a crucial task because it is an important factor in evaluating seed quality. For rice seeds, this attribute enables the minimization of unexpected influences of other varieties on rice yield, nutrient composition, and price. However, in practice, rice seeds are often mixed with seeds of other varieties. This study proposes a novel method for automatically identifying the purity of a specific rice variety using hybrid machine learning algorithms. The core concept involves leveraging deep learning architectures to extract pertinent features from raw data, followed by the application of machine learning algorithms for classification. Several experiments are conducted to evaluate the performance of the proposed model through practical implementation. The results demonstrate that the novel method substantially outperformed existing methods, indicating its potential for effective rice seed purity identification systems.
PMID:39108897 | PMC:PMC11301196 | DOI:10.1016/j.heliyon.2024.e33941
Patching-based deep-learning model for the inpainting of Bragg coherent diffraction patterns affected by detector gaps
J Appl Crystallogr. 2024 Jun 18;57(Pt 4):966-974. doi: 10.1107/S1600576724004163. eCollection 2024 Aug 1.
ABSTRACT
A deep-learning algorithm is proposed for the inpainting of Bragg coherent diffraction imaging (BCDI) patterns affected by detector gaps. These regions of missing intensity can compromise the accuracy of reconstruction algorithms, inducing artefacts in the final result. It is thus desirable to restore the intensity in these regions in order to ensure more reliable reconstructions. The key aspect of the method lies in the choice of training the neural network with cropped sections of diffraction data and subsequently patching the predictions generated by the model along the gap, thus completing the full diffraction peak. This approach enables access to a greater amount of experimental data for training and offers the ability to average overlapping sections during patching. As a result, it produces robust and dependable predictions for experimental data arrays of any size. It is shown that the method is able to remove gap-induced artefacts on the reconstructed objects for both simulated and experimental data, which becomes essential in the case of high-resolution BCDI experiments.
PMID:39108812 | PMC:PMC11299604 | DOI:10.1107/S1600576724004163
High temporal resolution prediction of mortality risk for single AML patient via deep learning
iScience. 2024 Jul 5;27(8):110458. doi: 10.1016/j.isci.2024.110458. eCollection 2024 Aug 16.
ABSTRACT
Acute myeloid leukemia (AML) is highly heterogeneous, necessitating personalized prognosis prediction and treatment strategies. Many current patient classifications are based on molecular features. Here, we classified primary AML patients by predicted death risk curves and investigated the molecular features directly related to survival. We developed a deep learning model to predict 5-year continuous-time survival probabilities for each patient and converted them to death risk curves. This method captured disease progression dynamics with high temporal resolution and identified seven patient groups with distinct risk peak timing. Based on clusters of death risk curves, we identified two robust AML prognostic biomarkers and discovered a subgroup within the European LeukemiaNet (ELN) 2017 Favorable category with an extremely poor prognosis. Additionally, we developed a web tool, De novo AML Prognostic Prediction (DAPP), for individualized prognosis prediction and expression perturbation simulation. This study utilized deep learning-based continuous-time risk modeling coupled with clustering of predicted risk distributions, facilitating the dissection of time-specific molecular features of disease progression.
PMID:39108717 | PMC:PMC11301082 | DOI:10.1016/j.isci.2024.110458
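Converting predicted continuous-time survival probabilities into a death risk curve, as described above, amounts to differencing the survival function. A minimal sketch with made-up yearly survival probabilities (the actual model predicts at much finer temporal resolution):

```python
import numpy as np

# Hypothetical per-patient survival probabilities S(t) at yearly horizons 0..5
S = np.array([1.00, 0.85, 0.70, 0.62, 0.58, 0.55])

# Death-risk curve: probability mass of dying in each interval, -dS(t)
risk_curve = -np.diff(S)
peak_year = int(np.argmax(risk_curve)) + 1   # interval with the risk peak
print(risk_curve, peak_year)
```

Clustering patients by the shape and timing of such curves is what yields the risk-peak groups reported in the study.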
Synthetic data generation methods in healthcare: A review on open-source tools and methods
Comput Struct Biotechnol J. 2024 Jul 9;23:2892-2910. doi: 10.1016/j.csbj.2024.07.005. eCollection 2024 Dec.
ABSTRACT
Synthetic data generation has emerged as a promising solution to overcome the challenges posed by data scarcity and privacy concerns, as well as to address the need for training artificial intelligence (AI) algorithms on unbiased data with sufficient sample size and statistical power. Our review explores the application and efficacy of synthetic data methods in healthcare, considering the diversity of medical data. To this end, we systematically searched the PubMed and Scopus databases with a focus on tabular, imaging, radiomics, time-series, and omics data. Studies involving multi-modal synthetic data generation were also explored. The type of method used for the synthetic data generation process was identified in each study and categorized as statistical, probabilistic, machine learning, or deep learning. Emphasis was given to the programming languages used for the implementation of each method. Our evaluation revealed that the majority of the studies utilize synthetic data generators to: (i) reduce the cost and time required for clinical trials for rare diseases and conditions, (ii) enhance the predictive power of AI models in personalized medicine, (iii) ensure the delivery of fair treatment recommendations across diverse patient populations, and (iv) enable researchers to access high-quality, representative multimodal datasets without exposing sensitive patient information, among others. We underline the wide use of deep learning-based synthetic data generators in 72.6% of the included studies, with 75.3% of the generators implemented in Python. A thorough documentation of open-source repositories is finally provided to accelerate research in the field.
PMID:39108677 | PMC:PMC11301073 | DOI:10.1016/j.csbj.2024.07.005
Improving image quality and in-stent restenosis diagnosis with high-resolution "double-low" coronary CT angiography in patients after percutaneous coronary intervention
Front Cardiovasc Med. 2024 Jul 23;11:1330824. doi: 10.3389/fcvm.2024.1330824. eCollection 2024.
ABSTRACT
OBJECTIVE: This study aims to investigate the image quality of high-resolution, low-dose coronary CT angiography (CCTA) with deep learning image reconstruction (DLIR) and the second-generation motion correction algorithm SnapShot Freeze 2 (SSF2), and its diagnostic accuracy for in-stent restenosis (ISR) in patients after percutaneous coronary intervention (PCI), in comparison with standard-dose CCTA in high-definition mode reconstructed with the adaptive statistical iterative reconstruction Veo algorithm (ASIR-V) and the first-generation motion correction algorithm SnapShot Freeze 1 (SSF1).
METHODS: Patients after PCI and suspected of having ISR scheduled for high-resolution CCTA (randomly assigned to 100 kVp low-dose or 120 kVp standard-dose CCTA) and invasive coronary angiography (ICA) were prospectively enrolled in this study. After basic information pairing, a total of 105 patients were divided into the LD group (60 patients underwent 100 kVp low-dose CCTA reconstructed with DLIR and SSF2) and the SD group (45 patients underwent 120 kVp standard-dose CCTA reconstructed with ASIR-V and SSF1). Radiation and contrast medium doses, and objective image quality including CT value, image noise (standard deviation), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) for the aorta, left main artery (LMA), left anterior descending artery (LAD), left circumflex artery (LCX), and right coronary artery (RCA) were compared between the two groups. A five-point scoring system was used for the overall image quality and stent appearance evaluation. Binary ISR was defined as an in-stent neointimal proliferation with diameter stenosis ≥50% to assess the diagnostic performance of the LD group versus the SD group with ICA as the standard reference.
RESULTS: The LD group achieved better objective and subjective image quality than the SD group, even with a 39.1% radiation dose reduction and a 28.0% contrast media reduction. The LD group improved the diagnostic accuracy for coronary ISR to 94.2% from the 83.8% of the SD group at the stent level and decreased the rate of false-positive cases by 19.2%.
CONCLUSION: Compared with standard-dose CCTA with ASIR-V and SSF1, the high-resolution, low-dose CCTA with DLIR and SSF2 reconstruction algorithms further improves the image quality and diagnostic performance for coronary ISR at 39.1% radiation dose reduction and 28.0% contrast dose reduction.
PMID:39108672 | PMC:PMC11300262 | DOI:10.3389/fcvm.2024.1330824
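The objective image-quality metrics compared between the LD and SD groups, SNR and CNR, have standard definitions that can be sketched as follows. The HU samples below are invented for illustration and do not come from the study:

```python
import numpy as np

def snr(roi):
    # Signal-to-noise ratio: mean attenuation divided by image noise (SD)
    return roi.mean() / roi.std()

def cnr(roi, background):
    # Contrast-to-noise ratio between an enhanced vessel and background tissue
    return (roi.mean() - background.mean()) / background.std()

# Hypothetical HU samples from an aortic ROI and perivascular fat
aorta = np.array([400.0, 410.0, 390.0, 400.0])
fat = np.array([50.0, 60.0, 40.0, 50.0])
print(round(snr(aorta), 1), round(cnr(aorta, fat), 1))  # 56.6 49.5
```

Lowering tube voltage raises noise (the denominator), which is why the DLIR reconstruction is needed to keep SNR and CNR competitive at reduced dose.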
Development of an eye-tracking system based on a deep learning model to assess executive function in patients with mental illnesses
Sci Rep. 2024 Aug 6;14(1):18186. doi: 10.1038/s41598-024-68586-2.
ABSTRACT
Patients with mental illnesses, particularly psychosis and obsessive-compulsive disorder (OCD), frequently exhibit deficits in executive function and visuospatial memory. Traditional assessments, such as the Rey-Osterrieth Complex Figure Test (RCFT), performed in clinical settings require time and effort. This study aimed to develop a deep learning model using the RCFT and based on eye tracking to detect impaired executive function during visuospatial memory encoding in patients with mental illnesses. In 96 patients with first-episode psychosis, 49 with clinical high risk for psychosis, 104 with OCD, and 159 healthy controls, eye movements were recorded during a 3-min RCFT figure memorization task, and organization and immediate recall scores were obtained. These scores, along with the fixation points indicating eye-focused locations in the figure, were used to train a Long Short-Term Memory + Attention model for detecting impaired executive function and visuospatial memory. The model distinguished between normal and impaired executive function, with an F1 score of 83.5%, and identified visuospatial memory deficits, with an F1 score of 80.7%, regardless of psychiatric diagnosis. These findings suggest that this eye tracking-based deep learning model can directly and rapidly identify impaired executive function during visuospatial memory encoding, with potential applications in various psychiatric and neurological disorders.
PMID:39107349 | DOI:10.1038/s41598-024-68586-2
A Deep Learning-Based Framework for Predicting Intracerebral Hemorrhage Hematoma Expansion Using Head Non-contrast CT Scan
Acad Radiol. 2024 Aug 5:S1076-6332(24)00472-0. doi: 10.1016/j.acra.2024.07.039. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: Hematoma expansion (HE) in intracerebral hemorrhage (ICH) is a critical factor affecting patient outcomes, yet effective clinical tools for predicting HE are currently lacking. We aim to develop a fully automated framework based on deep learning for predicting HE using only clinical non-contrast CT (NCCT) scans.
MATERIALS AND METHODS: A large retrospective dataset (n = 2484) was collected from 84 centers, while a prospective dataset (n = 500) was obtained from 26 additional centers. Baseline and follow-up NCCT scans were conducted within 6 h and 48 h from symptom onset, respectively. HE was defined as a volume increase of more than 6 mL on the follow-up NCCT. The retrospective dataset was divided into a training set (n = 1876) and a validation set (n = 608) by patient inclusion time. A two-stage framework was trained to predict HE, and its performance was evaluated on both the validation and prospective sets using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.
RESULTS: Our two-stage framework achieved an AUC of 0.760 (95% CI 0.724-0.799) on the retrospective validation set and 0.806 (95% CI 0.750-0.859) on the prospective set, outperforming the commonly used BAT score, which had AUCs of 0.582 and 0.699, respectively.
CONCLUSION: Our framework can automatically and robustly identify ICH patients at high risk of HE using admission head NCCT scans, providing more accurate predictions than the BAT score.
PMID:39107191 | DOI:10.1016/j.acra.2024.07.039
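The study's binary HE label is a simple volumetric rule: hematoma growth of more than 6 mL between the baseline and follow-up NCCT. A sketch of that labeling step (function name and example volumes are illustrative):

```python
def hematoma_expansion(baseline_ml, followup_ml, threshold_ml=6.0):
    """Binary HE label as defined in the study: volume increase of
    more than 6 mL between admission NCCT and follow-up NCCT."""
    return (followup_ml - baseline_ml) > threshold_ml

print(hematoma_expansion(20.0, 27.5))  # growth of 7.5 mL -> True
print(hematoma_expansion(20.0, 24.0))  # growth of 4.0 mL -> False
```

In the framework itself, both volumes come from an automated segmentation stage before this thresholding is applied.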
Preeclampsia and its prediction: traditional versus contemporary predictive methods
J Matern Fetal Neonatal Med. 2024 Dec;37(1):2388171. doi: 10.1080/14767058.2024.2388171. Epub 2024 Aug 6.
ABSTRACT
OBJECTIVE: Preeclampsia (PE) poses a significant threat to maternal and perinatal health, so its early prediction, prevention, and management are of paramount importance to mitigate adverse pregnancy outcomes. This article provides a brief review spanning epidemiology, etiology, pathophysiology, and risk factors associated with PE, mainly discussing the emerging role of Artificial Intelligence (AI) deep learning (DL) technology in predicting PE, to advance the understanding of PE and foster the clinical application of early prediction methods.
METHODS: Our narrative review comprehensively examines the PE epidemiology, etiology, pathophysiology, risk factors and predictive approaches, including traditional models and AI deep learning technology.
RESULTS: Preeclampsia involves a wide range of biological and biochemical risk factors, among which poor uterine artery remodeling, excessive immune response, endothelial dysfunction, and imbalanced angiogenesis play important roles. Traditional PE prediction models exhibit significant limitations in sensitivity and specificity, particularly in predicting late-onset PE, with detection rates ranging from only 30% to 50%. AI models have exhibited a notable level of predictive accuracy and value across various populations and datasets, achieving detection rates of approximately 70%. Particularly, they have shown superior predictive capabilities for late-onset PE, thereby presenting novel opportunities for early screening and management of the condition.
CONCLUSION: AI DL technology holds promise in revolutionizing the prediction and management of PE. AI-based approaches offer a pathway toward more effective risk assessment methods by addressing the shortcomings of traditional prediction models. Ongoing research efforts should focus on expanding databases and validating the performance of AI in diverse populations, leading to the development of more sophisticated prediction models with improved accuracy.
PMID:39107137 | DOI:10.1080/14767058.2024.2388171
Deep-learning-based design of synthetic orthologs of SH3 signaling domains
Cell Syst. 2024 Aug 5:S2405-4712(24)00204-7. doi: 10.1016/j.cels.2024.07.005. Online ahead of print.
ABSTRACT
Evolution-based deep generative models represent an exciting direction in understanding and designing proteins. An open question is whether such models can learn specialized functional constraints that control fitness in specific biological contexts. Here, we examine the ability of generative models to produce synthetic versions of Src-homology 3 (SH3) domains that mediate signaling in the Sho1 osmotic stress response pathway of yeast. We show that a variational autoencoder (VAE) model produces artificial sequences that experimentally recapitulate the function of natural SH3 domains. More generally, the model organizes all fungal SH3 domains such that locality in the model latent space (but not simply locality in sequence space) enriches the design of synthetic orthologs and exposes non-obvious amino acid constraints distributed near and far from the SH3 ligand-binding site. The ability of generative models to design ortholog-like functions in vivo opens new avenues for engineering protein function in specific cellular contexts and environments.
PMID:39106868 | DOI:10.1016/j.cels.2024.07.005
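The paper's key finding is that locality in the VAE latent space, not in raw sequence space, enriches for functional synthetic orthologs. A toy sketch of that design rule, with random stand-in embeddings and an assumed ortholog cluster (none of this is the authors' model or data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2-D latent embeddings of 100 fungal SH3 domains (VAE encoder output)
Z = rng.normal(size=(100, 2))
sho1_idx = np.arange(10)              # assumed Sho1-pathway orthologs
centroid = Z[sho1_idx].mean(axis=0)

# Design rule: pick synthetic candidates nearest the functional cluster
# in LATENT space, then decode them back to sequences
d = np.linalg.norm(Z - centroid, axis=1)
candidates = np.argsort(d)[:20]
print(len(candidates))  # 20
```

The decoded candidates would then be tested experimentally, which is how the study exposes functional constraints far from the ligand-binding site.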
Nanoscale single-vesicle analysis: High-throughput approaches through AI-enhanced super-resolution image analysis
Biosens Bioelectron. 2024 Aug 5;263:116629. doi: 10.1016/j.bios.2024.116629. Online ahead of print.
ABSTRACT
The analysis of membrane vesicles at the nanoscale level is crucial for advancing the understanding of intercellular communication and its implications for health and disease. Despite their significance, the nanoscale analysis of vesicles at the single particle level faces challenges owing to their small size and the complexity of biological fluids. Here, a new vesicle analysis tool is presented that leverages the single-molecule sensitivity of super-resolution microscopy (SRM) and the high-throughput analysis capability of deep-learning algorithms. By comparing classical clustering methods (k-means, DBSCAN, and SR-Tesseler) with deep-learning-based approaches (YOLO, DETR, Deformable DETR, and Faster R-CNN) for the analysis of super-resolution fluorescence images of exosomes, we identified the deep-learning algorithm Deformable DETR as the most effective. It showed superior accuracy and a reduced processing time for detecting individual vesicles from SRM images. Our findings demonstrate that image-based deep-learning-enhanced methods for SRM images significantly outperform traditional coordinate-based clustering techniques in identifying individual vesicles and resolving the challenges related to misidentification and computational demands. Moreover, the application of the combined Deformable DETR and ConvNeXt-S algorithms to differently labeled exosomes revealed its capability to differentiate between them, indicating its potential to dissect the heterogeneity of vesicle populations. This breakthrough in vesicle analysis suggests a paradigm shift towards the integration of AI into super-resolution imaging, which is promising for unlocking new frontiers in vesicle biology, disease diagnostics, and the development of vesicle-based therapeutics.
PMID:39106689 | DOI:10.1016/j.bios.2024.116629
Contrastive learning based method for X-ray and CT registration under surgical equipment occlusion
Comput Biol Med. 2024 Aug 5;180:108946. doi: 10.1016/j.compbiomed.2024.108946. Online ahead of print.
ABSTRACT
Deep learning-based 3D/2D surgical navigation registration techniques have achieved excellent results. However, these methods are limited by the occlusion of surgical equipment, resulting in poor accuracy. We designed a contrastive learning method that treats occluded and unoccluded X-rays as positive samples, maximizing the similarity between the positive samples and reducing interference from occlusion. The designed registration model, ResTrans, uses Transformer residual connections to enhance long-sequence mapping capability; combined with the contrastive learning strategy, ResTrans can adaptively retrieve valid features over the global range to ensure performance under occlusion. Further, a learning-based region of interest (RoI) fine-tuning method is designed to refine the misalignment. We conducted experiments on occluded X-rays that contained different surgical devices. The experimental results show that the mean target registration error (mTRE) of ResTrans is 3.25 mm and the running time is 1.59 s. Compared with state-of-the-art (SOTA) 3D/2D registration methods, our method offers better performance on occluded 3D/2D registration tasks.
PMID:39106676 | DOI:10.1016/j.compbiomed.2024.108946
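The contrastive objective described, treating occluded and unoccluded X-rays of the same scene as a positive pair, is commonly implemented as an InfoNCE-style loss. A minimal NumPy sketch with made-up 2-D embeddings; the temperature value and embeddings are assumptions, not the authors' settings:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE-style loss: pull the occluded/unoccluded pair together,
    # push embeddings of unrelated X-rays apart
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(p[0]))

a = np.array([1.0, 0.0])           # embedding of occluded X-ray
pos = np.array([0.9, 0.1])         # embedding of its unoccluded counterpart
negs = [np.array([0.0, 1.0]), np.array([-1.0, 0.2])]
print(contrastive_loss(a, pos, negs))  # small positive value
```

Minimizing this loss teaches the encoder to produce occlusion-invariant features, which is what lets the registration stay accurate when surgical devices block parts of the image.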
Brain-GCN-Net: Graph-Convolutional Neural Network for brain tumor identification
Comput Biol Med. 2024 Aug 5;180:108971. doi: 10.1016/j.compbiomed.2024.108971. Online ahead of print.
ABSTRACT
BACKGROUND: The intersection of artificial intelligence and medical image analysis has ushered in a new era of innovation and changed the landscape of brain tumor detection and diagnosis. Correct detection and classification of brain tumors based on medical images is crucial for early diagnosis and effective treatment. Convolutional Neural Network (CNN) models are widely used for disease detection. However, they are sometimes unable to sufficiently recognize the complex features of medical images.
METHODS: This paper proposes a fused Deep Learning (DL) model that combines Graph Neural Networks (GNN), which recognize relational dependencies between image regions, and Convolutional Neural Networks (CNN), which capture spatial features, to improve brain tumor detection. By integrating these two architectures, our model achieves a more comprehensive representation of brain tumor images and improves classification performance. The proposed model is evaluated on a public dataset of 10,847 MRI images.
RESULTS: The fused DL model achieves 93.68% accuracy in brain tumor classification. The results indicate that the proposed model outperforms the existing pre-trained models and traditional CNN architectures.
CONCLUSION: The numerical results suggest that the model should be further investigated for potential use in clinical trials to improve clinical decision-making.
PMID:39106672 | DOI:10.1016/j.compbiomed.2024.108971
BertSNR: an interpretable deep learning framework for single nucleotide resolution identification of transcription factor binding sites based on DNA language model
Bioinformatics. 2024 Aug 6:btae461. doi: 10.1093/bioinformatics/btae461. Online ahead of print.
ABSTRACT
MOTIVATION: Transcription factors (TFs) are pivotal in the regulation of gene expression, and accurate identification of transcription factor binding sites (TFBSs) at high resolution is crucial for understanding the mechanisms underlying gene regulation. The task of identifying TFBSs from DNA sequences is a significant challenge in the field of computational biology today. To address this challenge, a variety of computational approaches have been developed. However, these methods face limitations in their ability to achieve high-resolution identification and often lack interpretability.
RESULTS: We propose BertSNR, an interpretable deep learning framework for identifying TFBSs at single nucleotide resolution. BertSNR integrates sequence-level and token-level information by multi-task learning based on pre-trained DNA language models. Benchmarking comparisons show that our BertSNR outperforms the existing state-of-the-art methods in TFBS predictions. Importantly, we enhanced the interpretability of the model through attentional weight visualization and motif analysis, and discovered the subtle relationship between attention weight and motif. Moreover, BertSNR effectively identifies TFBSs in promoter regions, facilitating the study of intricate gene regulation.
AVAILABILITY: The BertSNR source code can be found at https://github.com/lhy0322/BertSNR.
PMID:39107889 | DOI:10.1093/bioinformatics/btae461
Choroidal vascular changes in early-stage myopic maculopathy from deep learning choroidal analysis: a hospital-based SS-OCT study
Eye Vis (Lond). 2024 Aug 6;11(1):32. doi: 10.1186/s40662-024-00398-x.
ABSTRACT
BACKGROUND: The objective of this study is to illustrate the changes in the choroidal vasculature in individuals with diffuse chorioretinal atrophy (DCA, early-stage myopic maculopathy) and investigate the association between them.
METHODS: This study included 1418 highly myopic eyes of 720 participants aged 18-60 years from the Wenzhou High Myopia Cohort Study. These participants underwent comprehensive ophthalmic assessments. Myopic maculopathy was classified following the Meta-PM system, with pathological myopia defined as myopic maculopathy of DCA or worse. Eyes categorized as having no macular lesions (C0), tessellated fundus (C1), or DCA (C2) were enrolled in the analysis. Choroidal images were obtained from swept-source optical coherence tomography (SS-OCT) and processed with a deep learning-based automatic segmentation algorithm and the Niblack auto-local threshold algorithm.
RESULTS: DCA was detected in 247 eyes (17.4%). In comparison to eyes with C0, those with C2 exhibited significant reductions in choroidal thickness (ChT), luminal area (LA), and stromal area (SA) across all evaluated regions (all P < 0.001). An increase in choroidal vascular index (CVI) was observed in all regions except the nasal perifoveal (N2) and inferior perifoveal (I2) regions (all P < 0.01). Multivariable logistic regression analysis revealed a negative association between the presence of DCA and increases in choroidal LA and SA (odds ratio ≤ 0.099, P < 0.001). Multivariable linear regression analysis showed that the mean deviation of the visual field test was positively associated with LA and SA at the vertical meridian (B = 1.512, P < 0.001 for LA; B = 1.956, P < 0.001 for SA). Furthermore, receiver operating characteristic curve analyses showed that the optimal cut-offs to diagnose pathological myopia in the N2 region were a ChT of 82.4 µm, an LA of 0.076 mm², and an SA of 0.049 mm², with areas under the curve of 0.916, 0.908, and 0.895, respectively.
CONCLUSIONS: The results of this study indicated that both the presence of DCA and visual function impairment were associated with reductions in choroidal perfusion and stromal components. Moreover, we established threshold values for choroidal parameters in diagnosing pathological myopia, offering valuable references for clinical diagnosis and management.
PMID:39107859 | DOI:10.1186/s40662-024-00398-x
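The choroidal vascular index (CVI) analyzed above is conventionally defined as the luminal area divided by total choroidal area (LA + SA). A one-line sketch, using the N2-region cut-off areas reported in the abstract purely as example inputs:

```python
def choroidal_vascular_index(luminal_area_mm2, stromal_area_mm2):
    """CVI: luminal area as a fraction of total choroidal area (LA + SA)."""
    return luminal_area_mm2 / (luminal_area_mm2 + stromal_area_mm2)

# Example: the N2-region cut-off areas from the abstract (LA and SA in mm²)
print(round(choroidal_vascular_index(0.076, 0.049), 3))  # 0.608
```

Because CVI is a ratio, it can rise even when both LA and SA shrink, which is consistent with the pattern of reduced areas but increased CVI reported in most regions.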
Development and validation of novel interpretable survival prediction models based on drug exposures for severe heart failure during vulnerable period
J Transl Med. 2024 Aug 6;22(1):743. doi: 10.1186/s12967-024-05544-6.
ABSTRACT
BACKGROUND: Severe heart failure (HF) carries a higher mortality during the vulnerable period, while targeted predictive tools, especially those based on drug exposures, to accurately assess its prognosis remain largely unexplored. Therefore, this study aimed to utilize drug information as the main predictor to develop and validate survival models for severe HF patients during this period.
METHODS: We extracted severe HF patients from the MIMIC-IV database (as training and internal validation cohorts) as well as from the MIMIC-III database and local hospital (as external validation cohorts). Three algorithms, including Cox proportional hazards model (CoxPH), random survival forest (RSF), and deep learning survival prediction (DeepSurv), were applied to incorporate the parameters (partial hospitalization information and exposure durations of drugs) for constructing survival prediction models. The model performance was assessed mainly using area under the receiver operator characteristic curve (AUC), brier score (BS), and decision curve analysis (DCA). The model interpretability was determined by the permutation importance and Shapley additive explanations values.
RESULTS: A total of 11,590 patients were included in this study. Among the 3 models, the CoxPH model ultimately included 10 variables, while RSF and DeepSurv models incorporated 24 variables, respectively. All of the 3 models achieved respectable performance metrics while the DeepSurv model exhibited the highest AUC values and relatively lower BS among these models. The DCA also verified that the DeepSurv model had the best clinical practicality.
CONCLUSIONS: The survival prediction tools established in this study can be applied to severe HF patients during the vulnerable period by mainly inputting drug treatment durations, thus contributing prospectively to optimal clinical decisions.
PMID:39107765 | DOI:10.1186/s12967-024-05544-6
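Of the metrics used to compare CoxPH, RSF, and DeepSurv, the Brier score (BS) is the simplest: the mean squared difference between predicted event probability and observed outcome. The sketch below is the uncensored form; the study's survival setting would normally use a time-dependent, censoring-weighted variant, and the labels and probabilities here are invented:

```python
import numpy as np

def brier_score(y_event, pred_risk):
    # Mean squared error between predicted event probability and outcome;
    # lower is better, 0.25 is the score of an uninformative 0.5 prediction
    y = np.asarray(y_event, float)
    p = np.asarray(pred_risk, float)
    return float(np.mean((p - y) ** 2))

y = [1, 0, 1, 0]                 # observed events within the horizon
p = [0.8, 0.2, 0.6, 0.1]         # model-predicted event probabilities
print(brier_score(y, p))         # 0.0625
```

A model can have a high AUC (good ranking) but a poor BS (bad calibration), which is why the study reports both.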
scMaui: a widely applicable deep learning framework for single-cell multiomics integration in the presence of batch effects and missing data
BMC Bioinformatics. 2024 Aug 6;25(1):257. doi: 10.1186/s12859-024-05880-w.
ABSTRACT
The recent advances in high-throughput single-cell sequencing have created an urgent demand for computational models which can address the high complexity of single-cell multiomics data. Meticulous single-cell multiomics integration models are required to avoid biases towards a specific modality and to overcome sparsity. Batch effects obfuscating biological signals must also be taken into account. Here, we introduce a new single-cell multiomics integration model, Single-cell Multiomics Autoencoder Integration (scMaui), based on variational product-of-experts autoencoders and adversarial learning. scMaui calculates a joint representation of multiple marginal distributions based on a product-of-experts approach, which is especially effective for missing values in the modalities. Furthermore, it overcomes limitations seen in previous VAE-based integration methods with regard to batch effect correction and the restricted range of applicable assays. It handles multiple batch effects independently, accepting both discrete and continuous values, and provides varied reconstruction loss functions to cover all possible assays and preprocessing pipelines. We demonstrate that scMaui achieves superior performance in many tasks compared to other methods. Further downstream analyses also demonstrate its potential in identifying relations between assays and discovering hidden subpopulations.
PMID:39107690 | DOI:10.1186/s12859-024-05880-w
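The product-of-experts step at the heart of scMaui fuses per-modality Gaussian posteriors by precision weighting, which is why modalities with missing data can simply be dropped from the product. A 1-D NumPy sketch of the standard PoE-of-Gaussians formula (modality names and values are illustrative, not the scMaui API):

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussian posteriors q_m(z|x_m) into one Gaussian.
    The fused precision is the sum of precisions; the fused mean is the
    precision-weighted average of the expert means."""
    precisions = np.exp(-np.asarray(logvars))
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (np.asarray(mus) * precisions).sum(axis=0)
    return mu, var

# Two observed modalities (e.g., RNA and ATAC), 1-D latent for illustration
mu, var = product_of_experts([np.array([1.0]), np.array([3.0])],
                             [np.array([0.0]), np.array([0.0])])
print(mu, var)  # [2.] [0.5]: equal-confidence experts average, variance halves
```

If a third modality were missing for a cell, its expert would be omitted from `mus`/`logvars` and the fusion would still be well defined.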
Metal implant segmentation in CT images based on diffusion model
BMC Med Imaging. 2024 Aug 6;24(1):204. doi: 10.1186/s12880-024-01379-1.
ABSTRACT
BACKGROUND: Computed tomography (CT) is widely used in clinics and is affected by metal implants. Metal segmentation is crucial for metal artifact correction, and the common threshold method often fails to segment metals accurately.
PURPOSE: This study aims to segment metal implants in CT images using a diffusion model and further validate it with clinical artifact images and phantom images of known size.
METHODS: A retrospective study was conducted on 100 patients who received radiation therapy without metal artifacts, and simulated artifact data were generated using publicly available mask data. The study utilized 11,280 slices for training and verification, and 2,820 slices for testing. Metal mask segmentation was performed using DiffSeg, a diffusion model incorporating conditional dynamic coding and a global frequency parser (GFParser). Conditional dynamic coding fuses the current segmentation mask and prior images at multiple scales, while GFParser helps eliminate high-frequency noise in the mask. Clinical artifact images and phantom images are also used for model validation.
RESULTS: Compared with the ground truth, DiffSeg achieved an accuracy of 97.89% and a DSC of 95.45% for metal segmentation on simulated data. The mask obtained by threshold segmentation covered the ground truth, with DSCs of 82.92% and 84.19% for thresholds of 2500 HU and 3000 HU, respectively. Evaluation metrics and visualization results show that DiffSeg performs better than other classical deep learning networks, especially on clinical CT, artifact data, and phantom data.
CONCLUSION: DiffSeg efficiently and robustly segments metal masks in artifact data with conditional dynamic coding and GFParser. Future work will involve embedding the metal segmentation model in metal artifact reduction to improve the reduction effect.
PMID:39107679 | DOI:10.1186/s12880-024-01379-1
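The baseline that DiffSeg is compared against, HU thresholding scored with the Dice similarity coefficient (DSC), can be sketched on a toy CT slice. The values are invented; 3000 HU is one of the thresholds mentioned in the abstract:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy CT slice in Hounsfield units: metal is far denser than tissue
ct = np.array([[40, 3500, 3600],
               [30, 3400,   50],
               [20,   45,   35]])
pred = ct > 3000                    # simple threshold baseline
gt = np.zeros_like(ct, bool)        # ground-truth metal mask
gt[0, 1:] = True
gt[1, 1] = True
print(dice(pred, gt))  # 1.0 on this clean toy; real artifacts blur the boundary
```

On real artifact-corrupted slices the metal boundary is smeared, which is where the fixed-threshold DSC drops to the low 80s while the diffusion model stays above 95%.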
Free access via computational cloud to deep learning-based EEG assessment in neonatal hypoxic-ischemic encephalopathy: revolutionary opportunities to overcome health disparities
Pediatr Res. 2024 Aug 6. doi: 10.1038/s41390-024-03427-6. Online ahead of print.
ABSTRACT
In this issue of Pediatric Research, Kota et al. evaluate a novel deep-learning-based EEG trend, the Brain State of the Newborn (BSN), as a bedside marker of encephalopathy severity in 46 neonates with hypoxic-ischemic encephalopathy (HIE) compared with healthy infants. Early BSN distinguished between normal and abnormal outcomes and correlated with the Total Sarnat Score.
PMID:39107521 | DOI:10.1038/s41390-024-03427-6
Markerless Motion Capture to Quantify Functional Performance in Neurodegeneration: Systematic Review
JMIR Aging. 2024 Aug 6;7:e52582. doi: 10.2196/52582.
ABSTRACT
BACKGROUND: Markerless motion capture (MMC) uses video cameras or depth sensors for full body tracking and presents a promising approach for objectively and unobtrusively monitoring functional performance within community settings, to aid clinical decision-making in neurodegenerative diseases such as dementia.
OBJECTIVE: The primary objective of this systematic review was to investigate the application of MMC using full-body tracking, to quantify functional performance in people with dementia, mild cognitive impairment, and Parkinson disease.
METHODS: A systematic search of the Embase, MEDLINE, CINAHL, and Scopus databases was conducted between November 2022 and February 2023, which yielded a total of 1595 results. The inclusion criteria were MMC and full-body tracking. A total of 157 studies were included for full-text screening, out of which 26 eligible studies that met the selection criteria were included in the review.
RESULTS: The selected studies primarily focused on gait analysis (n=24), while other functional tasks, such as sit-to-stand (n=5) and stepping in place (n=1), were also explored. However, activities of daily living were not evaluated in any of the included studies. MMC models varied across the studies, using either depth cameras (n=18) or standard video cameras (n=5) or mobile phone cameras (n=2) with postprocessing using deep learning models. However, only 6 studies conducted rigorous comparisons with established gold-standard motion capture models.
CONCLUSIONS: Despite its potential as an effective tool for analyzing movement and posture in individuals with dementia, mild cognitive impairment, and Parkinson disease, further research is required to establish the clinical usefulness of MMC in quantifying mobility and functional performance in the real world.
PMID:39106477 | DOI:10.2196/52582