Deep learning

Head movement dynamics in dystonia: a multi-centre retrospective study using visual perceptive deep learning

Tue, 2024-06-18 06:00

NPJ Digit Med. 2024 Jun 18;7(1):160. doi: 10.1038/s41746-024-01140-6.

ABSTRACT

Dystonia is a neurological movement disorder characterised by abnormal involuntary movements and postures, particularly affecting the head and neck. However, current clinical assessment methods for dystonia rely on simplified rating scales which lack the ability to capture the intricate spatiotemporal features of dystonic phenomena, hindering clinical management and limiting understanding of the underlying neurobiology. To address this, we developed a visual perceptive deep learning framework that utilizes standard clinical videos to comprehensively evaluate and quantify disease states and the impact of therapeutic interventions, specifically deep brain stimulation. This framework overcomes the limitations of traditional rating scales and offers an efficient and accurate method that is rater-independent for evaluating and monitoring dystonia patients. To evaluate the framework, we leveraged semi-standardized clinical video data collected in three retrospective, longitudinal cohort studies across seven academic centres. We extracted static head angle excursions for clinical validation and derived kinematic variables reflecting naturalistic head dynamics to predict dystonia severity, subtype, and neuromodulation effects. The framework was also applied to a fully independent cohort of generalised dystonia patients for comparison between dystonia sub-types. Computer vision-derived measurements of head angle excursions showed a strong correlation with clinically assigned scores. Across comparisons, we identified consistent kinematic features from full video assessments encoding information critical to disease severity, subtype, and effects of neural circuit interventions, independent of static head angle deviations used in scoring. Our visual perceptive machine learning framework reveals kinematic pathosignatures of dystonia, potentially augmenting clinical management, facilitating scientific translation, and informing personalized precision neurology approaches.

PMID:38890413 | DOI:10.1038/s41746-024-01140-6

Categories: Literature Watch

A hybrid deep approach to recognizing student activity and monitoring health physique based on accelerometer data from smartphones

Tue, 2024-06-18 06:00

Sci Rep. 2024 Jun 18;14(1):14006. doi: 10.1038/s41598-024-63934-8.

ABSTRACT

Smartphone sensors have gained considerable traction in Human Activity Recognition (HAR), drawing attention for their diverse applications. Accelerometer data monitoring holds promise in understanding students' physical activities, fostering healthier lifestyles. This technology tracks exercise routines, sedentary behavior, and overall fitness levels, potentially encouraging better habits, preempting health issues, and bolstering students' well-being. Traditionally, HAR involved analyzing signals linked to physical activities using handcrafted features. However, recent years have witnessed the integration of deep learning into HAR tasks, leveraging digital physiological signals from smartwatches and learning features automatically from raw sensory data. The Long Short-Term Memory (LSTM) network stands out as a potent algorithm for analyzing physiological signals, promising improved accuracy and scalability in automated signal analysis. In this article, we propose a feature analysis framework for recognizing student activity and monitoring health based on smartphone accelerometer data through an edge computing platform. Our objective is to boost HAR performance by accounting for the dynamic nature of human behavior. Nonetheless, the current LSTM network's presetting of hidden units and initial learning rate relies on prior knowledge, potentially leading to suboptimal states. To counter this, we employ Bidirectional LSTM (BiLSTM), enhancing sequence processing models. Furthermore, Bayesian optimization aids in fine-tuning the BiLSTM model architecture. Through fivefold cross-validation on training and testing datasets, our model showcases a classification accuracy of 97.5% on the test dataset. Moreover, edge computing offers real-time processing, reduced latency, enhanced privacy, bandwidth efficiency, offline capabilities, energy efficiency, personalization, and scalability. Extensive experimental results validate that our proposed approach surpasses state-of-the-art methodologies in recognizing human activities and monitoring health based on smartphone accelerometer data.
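
For readers unfamiliar with HAR pipelines: sequence models such as the BiLSTM described above consume fixed-length windows rather than a raw accelerometer stream. A minimal preprocessing sketch, assuming an illustrative 128-sample window with 50% overlap (the abstract does not specify these values):

```python
import numpy as np

def segment_windows(signal, win_len=128, stride=64):
    """Slice a (T, 3) tri-axial accelerometer stream into overlapping
    windows. Window length and stride are illustrative choices; HAR
    pipelines commonly use ~2-3 s windows with 50% overlap."""
    windows = []
    for start in range(0, len(signal) - win_len + 1, stride):
        windows.append(signal[start:start + win_len])
    return np.stack(windows)  # shape: (n_windows, win_len, 3)

# Example: 10 s of synthetic 50 Hz tri-axial data
stream = np.random.randn(500, 3)
batch = segment_windows(stream)
```

Each window in `batch` would then be fed to the sequence model as one training example.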

PMID:38890409 | DOI:10.1038/s41598-024-63934-8

Categories: Literature Watch

Empowering artificial intelligence in characterizing the human primary pacemaker of the heart at single cell resolution

Tue, 2024-06-18 06:00

Sci Rep. 2024 Jun 18;14(1):14041. doi: 10.1038/s41598-024-63542-6.

ABSTRACT

The sinus node (SN) serves as the primary pacemaker of the heart and is the first component of the cardiac conduction system. Due to its anatomical properties and sample scarcity, the cellular composition of the human SN has been historically challenging to study. Here, we employed a novel deep learning deconvolution method, namely Bulk2space, to characterise the cellular heterogeneity of the human SN using existing single-cell datasets of non-human species. As a proof of principle, we used Bulk2Space to profile the cells of the bulk human right atrium using publicly available mouse scRNA-Seq data as a reference. 18 human cell populations were identified, with cardiac myocytes being the most abundant. Each identified cell population correlated to its published experimental counterpart. Subsequently, we applied the deconvolution to the bulk transcriptome of the human SN and identified 11 cell populations, including a population of pacemaker cardiomyocytes expressing pacemaking ion channels (HCN1, HCN4, CACNA1D) and transcription factors (SHOX2 and TBX3). The connective tissue of the SN was characterised by adipocyte and fibroblast populations, as well as key immune cells. Our work unravelled the unique single cell composition of the human SN by leveraging the power of a novel machine learning method.
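
For intuition, bulk deconvolution can be pictured as estimating cell-type proportions from a reference signature matrix. The sketch below is a classical regression-based formulation, not the Bulk2space algorithm itself (which uses a deep generative model); all names and sizes are illustrative:

```python
import numpy as np

def deconvolve(S, b):
    """Estimate cell-type proportions from a bulk expression profile.

    S: (genes, cell_types) mean expression signatures from a reference
       (e.g. a single-cell atlas); b: (genes,) bulk profile.
    Non-negativity is approximated by clipping an unconstrained least
    squares solution, then renormalising to proportions."""
    w, *_ = np.linalg.lstsq(S, b, rcond=None)
    w = np.clip(w, 0, None)
    return w / w.sum()

rng = np.random.default_rng(0)
S = rng.uniform(0.1, 1.0, size=(200, 3))   # 200 genes, 3 cell types
true_w = np.array([0.6, 0.3, 0.1])
bulk = S @ true_w                          # synthetic bulk mixture
est = deconvolve(S, bulk)
```

On this noise-free synthetic mixture the estimate recovers the true proportions; real bulk data adds noise, batch effects, and cross-species signature mismatch, which is where learned methods like Bulk2space come in.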

PMID:38890395 | DOI:10.1038/s41598-024-63542-6

Categories: Literature Watch

Predicting prostate cancer grade reclassification on active surveillance using a deep Learning-Based grading algorithm

Tue, 2024-06-18 06:00

J Natl Cancer Inst. 2024 Jun 18:djae139. doi: 10.1093/jnci/djae139. Online ahead of print.

ABSTRACT

Deep learning (DL)-based algorithms to determine prostate cancer (PCa) Grade Group (GG) on biopsy slides have not been validated by comparison to clinical outcomes. We used a DL-based algorithm, AIRAProstate, to re-grade initial prostate biopsies in two independent PCa active surveillance (AS) cohorts. In a cohort initially diagnosed with GG1 PCa using only systematic biopsies (n = 138), upgrading of the initial biopsy to ≥GG2 by AIRAProstate was associated with rapid or extreme grade reclassification on AS (odds ratio 3.3, p = .04), whereas upgrading of the initial biopsy by contemporary uropathologist reviews was not associated with this outcome. In a contemporary validation cohort that underwent prostate magnetic resonance imaging before initial biopsy (n = 169), upgrading of the initial biopsy (all contemporary GG1 by uropathologist grading) by AIRAProstate was associated with grade reclassification on AS (hazard ratio 1.7, p = .03). These results demonstrate the utility of a DL-based grading algorithm in PCa risk stratification for AS.

PMID:38889303 | DOI:10.1093/jnci/djae139

Categories: Literature Watch

Interpretable deep learning in single-cell omics

Tue, 2024-06-18 06:00

Bioinformatics. 2024 Jun 18:btae374. doi: 10.1093/bioinformatics/btae374. Online ahead of print.

ABSTRACT

MOTIVATION: Single-cell omics technologies have enabled the quantification of molecular profiles in individual cells at an unparalleled resolution. Deep learning, a rapidly evolving sub-field of machine learning, has instilled a significant interest in single-cell omics research due to its remarkable success in analysing heterogeneous high-dimensional single-cell omics data. Nevertheless, the inherent multi-layer nonlinear architecture of deep learning models often makes them 'black boxes' as the reasoning behind predictions is often unknown and not transparent to the user. This has stimulated an increasing body of research for addressing the lack of interpretability in deep learning models, especially in single-cell omics data analyses, where the identification and understanding of molecular regulators are crucial for interpreting model predictions and directing downstream experimental validations.

RESULTS: In this work, we introduce the basics of single-cell omics technologies and the concept of interpretable deep learning. This is followed by a review of the recent interpretable deep learning models applied to various single-cell omics research. Lastly, we highlight the current limitations and discuss potential future directions.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38889275 | DOI:10.1093/bioinformatics/btae374

Categories: Literature Watch

Knowledge Transfer from Macro-world to Micro-world: Enhancing 3D Cryo-ET Classification through Fine-Tuning Video-based Deep Models

Tue, 2024-06-18 06:00

Bioinformatics. 2024 Jun 18:btae368. doi: 10.1093/bioinformatics/btae368. Online ahead of print.

ABSTRACT

MOTIVATION: Deep learning models have achieved remarkable success in a wide range of natural-world tasks, such as vision, language, and speech recognition. These accomplishments are largely attributed to the availability of open-source large-scale datasets. More importantly, pre-trained foundational models exhibit a surprising degree of transferability to downstream tasks, enabling efficient learning even with limited training examples. However, the application of such natural-domain models to the domain of tiny Cryo-Electron Tomography (Cryo-ET) images has been a relatively unexplored frontier. This research is motivated by the intuition that 3D Cryo-ET voxel data can be conceptually viewed as a sequence of progressively evolving video frames.

RESULTS: Leveraging the above insight, we propose a novel approach that involves the utilization of 3D models pre-trained on large-scale video datasets to enhance Cryo-ET subtomogram classification. Our experiments, conducted on both simulated and real Cryo-ET datasets, reveal compelling results. The use of video initialization not only demonstrates improvements in classification accuracy but also substantially reduces training costs. Further analyses provide additional evidence of the value of video initialization in enhancing subtomogram feature extraction. Additionally, we observe that video initialization yields similar positive effects when applied to medical 3D classification tasks, underscoring the potential of cross-domain knowledge transfer from video-based models to advance the state-of-the-art in a wide range of biological and medical data types.
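
The core intuition (reading a subtomogram's z-slices as video frames) can be sketched as a pure reshaping step. Sizes and the 3-channel tiling below are illustrative assumptions, not the paper's actual input pipeline:

```python
import numpy as np

def volume_to_frames(volume):
    """Reinterpret a (D, H, W) Cryo-ET subtomogram as a D-frame clip.

    Each z-slice becomes one grayscale frame, tiled to 3 channels so a
    video backbone expecting (T, H, W, C) RGB clips can consume it."""
    return np.repeat(volume[..., None], 3, axis=-1)

vol = np.random.rand(32, 64, 64)    # illustrative subtomogram
clip = volume_to_frames(vol)        # shape: (32, 64, 64, 3)
```

With the volume in clip form, the weights of a video-pretrained 3D model can be loaded unchanged and fine-tuned on subtomogram labels.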

AVAILABILITY AND IMPLEMENTATION: https://github.com/xulabs/aitom.

PMID:38889274 | DOI:10.1093/bioinformatics/btae368

Categories: Literature Watch

RNA m6A detection using raw current signals and basecalling errors from nanopore direct RNA sequencing reads

Tue, 2024-06-18 06:00

Bioinformatics. 2024 Jun 18:btae375. doi: 10.1093/bioinformatics/btae375. Online ahead of print.

ABSTRACT

MOTIVATION: Nanopore direct RNA sequencing (DRS) enables the detection of RNA N6-methyladenosine (m6A) without additional laboratory techniques. A number of supervised or comparative approaches have been developed to identify m6A from Nanopore DRS reads. However, existing methods typically utilize either statistical features of the current signals or basecalling-error features, ignoring the richer information in the raw signals of DRS reads.

RESULTS: Here, we propose RedNano, a deep-learning method designed to detect m6A from Nanopore DRS reads by utilizing both raw signals and basecalling errors. RedNano processes the raw-signal feature and basecalling-error feature through residual networks. We validated the effectiveness of RedNano using synthesized, Arabidopsis, and human DRS data. The results demonstrate that RedNano surpasses existing methods by achieving higher AUCs and AUPRs in all three datasets. Furthermore, RedNano performs better in cross-species validation, demonstrating its robustness. Additionally, when detecting m6A from an independent dataset of P. trichocarpa, RedNano achieves the highest AUC and AUPR, which are 3.8-9.9% and 5.5-13.8% higher than other methods, respectively.
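
AUC, the headline metric in these comparisons, has a simple rank interpretation worth keeping in mind. A dependency-free sketch (the scores are hypothetical, not RedNano outputs):

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive (m6A) site is scored above a randomly chosen negative
    site; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

score = auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.1])  # hypothetical model scores
```

AUPR is analogous but summarises the precision-recall curve instead, which is why both are reported when positives (m6A sites) are rare.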

AVAILABILITY AND IMPLEMENTATION: The source code of RedNano is freely available at https://github.com/Derryxu/RedNano.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38889266 | DOI:10.1093/bioinformatics/btae375

Categories: Literature Watch

Perfect Match: Radiomics and Artificial Intelligence in Cardiac Imaging

Tue, 2024-06-18 06:00

Circ Cardiovasc Imaging. 2024 Jun;17(6):e015490. doi: 10.1161/CIRCIMAGING.123.015490. Epub 2024 Jun 18.

ABSTRACT

Cardiovascular diseases remain a significant health burden, with imaging modalities like echocardiography, cardiac computed tomography, and cardiac magnetic resonance imaging playing a crucial role in diagnosis and prognosis. However, the inherent heterogeneity of these diseases poses challenges, necessitating advanced analytical methods like radiomics and artificial intelligence. Radiomics extracts quantitative features from medical images, capturing intricate patterns and subtle variations that may elude visual inspection. Artificial intelligence techniques, including deep learning, can analyze these features to generate knowledge, define novel imaging biomarkers, and support diagnostic decision-making and outcome prediction. Radiomics and artificial intelligence thus hold promise for significantly enhancing diagnostic and prognostic capabilities in cardiac imaging, paving the way for more personalized and effective patient care. This review explores the synergies between radiomics and artificial intelligence in cardiac imaging, following the radiomics workflow and introducing concepts from both domains. Potential clinical applications, challenges, and limitations are discussed, along with solutions to overcome them.

PMID:38889216 | DOI:10.1161/CIRCIMAGING.123.015490

Categories: Literature Watch

Modeling 0.6 million genes for the rational design of functional cis-regulatory variants and de novo design of cis-regulatory sequences

Tue, 2024-06-18 06:00

Proc Natl Acad Sci U S A. 2024 Jun 25;121(26):e2319811121. doi: 10.1073/pnas.2319811121. Epub 2024 Jun 18.

ABSTRACT

Rational design of plant cis-regulatory DNA sequences without expert intervention or prior domain knowledge is still a daunting task. Here, we developed PhytoExpr, a deep learning framework capable of predicting both mRNA abundance and plant species using the proximal regulatory sequence as the sole input. PhytoExpr was trained over 17 species representative of major clades of the plant kingdom to enhance its generalizability. Via input perturbation, quantitative functional annotation of the input sequence was achieved at single-nucleotide resolution, revealing an abundance of predicted high-impact nucleotides in conserved noncoding sequences and transcription factor binding sites. Evaluation of maize HapMap3 single-nucleotide polymorphisms (SNPs) by PhytoExpr demonstrates an enrichment of predicted high-impact SNPs in cis-eQTL. Additionally, we provided two algorithms that harnessed the power of PhytoExpr in designing functional cis-regulatory variants, and de novo creation of species-specific cis-regulatory sequences through in silico evolution of random DNA sequences. Our model represents a general and robust approach for functional variant discovery in population genetics and rational design of regulatory sequences for genome editing and synthetic biology.
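
The "input perturbation" step described above is commonly implemented as single-nucleotide in silico mutagenesis: substitute each base, re-run the model, and record the change in prediction. A minimal sketch in which `predict` is a toy stand-in (PhytoExpr's real predictor is a deep network, and the scoring convention here is an assumption):

```python
def predict(seq):
    """Toy surrogate for predicted expression; counts 'GC' occurrences."""
    return seq.count("GC")

def mutagenesis_scores(seq, alphabet="ACGT"):
    """Per-position impact: the largest absolute change in the model's
    output over all single-nucleotide substitutions at that position."""
    ref = predict(seq)
    scores = []
    for i, base in enumerate(seq):
        effects = [predict(seq[:i] + alt + seq[i + 1:]) - ref
                   for alt in alphabet if alt != base]
        scores.append(max(abs(e) for e in effects))
    return scores

scores = mutagenesis_scores("ATGCAT")
```

Positions whose substitutions move the prediction most are the "high-impact nucleotides"; run genome-wide, this yields the single-nucleotide functional annotation described in the abstract.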

PMID:38889146 | DOI:10.1073/pnas.2319811121

Categories: Literature Watch

The potential of the transformer-based survival analysis model, SurvTrace, for predicting recurrent cardiovascular events and stratifying high-risk patients with ischemic heart disease

Tue, 2024-06-18 06:00

PLoS One. 2024 Jun 18;19(6):e0304423. doi: 10.1371/journal.pone.0304423. eCollection 2024.

ABSTRACT

INTRODUCTION: Ischemic heart disease is a leading cause of death worldwide, and its importance is increasing with the aging population. The aim of this study was to evaluate the accuracy of SurvTrace, a survival analysis model based on the Transformer, a state-of-the-art deep learning method, for predicting recurrent cardiovascular events and stratifying high-risk patients. The model's performance was compared to that of a conventional scoring system utilizing real-world data from cardiovascular patients.

METHODS: This study consecutively enrolled patients who underwent percutaneous coronary intervention (PCI) at the Department of Cardiovascular Medicine, University of Tokyo Hospital, between 2005 and 2019. Each patient's initial PCI at our hospital was designated as the index procedure, and a composite of major adverse cardiovascular events (MACE) was monitored for up to two years post-index event. Data regarding patient background, clinical presentation, medical history, medications, and perioperative complications were collected to predict MACE. The performance of two models-a conventional scoring system proposed by Wilson et al. and the Transformer-based model SurvTrace-was evaluated using Harrell's c-index, Kaplan-Meier curves, and log-rank tests.

RESULTS: A total of 3938 cases were included in the study, with 394 used as the test dataset and the remaining 3544 used for model training. SurvTrace exhibited a mean c-index of 0.72 (95% confidence intervals (CI): 0.69-0.76), which indicated higher prognostic accuracy compared with the conventional scoring system's 0.64 (95% CI: 0.64-0.64). Moreover, SurvTrace demonstrated superior risk stratification ability, effectively distinguishing between the high-risk group and other risk categories in terms of event occurrence. In contrast, the conventional system only showed a significant difference between the low-risk and high-risk groups.
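
Harrell's c-index, the metric used to compare the two models above, reduces to counting concordant pairs among comparable ones. A self-contained sketch with toy data (not the study's cohort):

```python
def harrell_c(times, events, risks):
    """Harrell's c-index: among comparable pairs, the fraction where the
    patient with the earlier observed event received the higher predicted
    risk. A pair (i, j) is comparable if i had an event before j's time;
    tied risks count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: risks perfectly ordered by event time -> c = 1.0
c = harrell_c(times=[2, 4, 6], events=[1, 1, 0], risks=[0.9, 0.5, 0.1])
```

A c-index of 0.5 corresponds to random ranking, so SurvTrace's 0.72 versus the conventional score's 0.64 reflects a substantially better ordering of patients by risk.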

CONCLUSION: This study based on real-world cardiovascular patient data underscores the potential of the Transformer-based survival analysis model, SurvTrace, for predicting recurrent cardiovascular events and stratifying high-risk patients.

PMID:38889124 | DOI:10.1371/journal.pone.0304423

Categories: Literature Watch

CAManim: Animating end-to-end network activation maps

Tue, 2024-06-18 06:00

PLoS One. 2024 Jun 18;19(6):e0296985. doi: 10.1371/journal.pone.0296985. eCollection 2024.

ABSTRACT

Deep neural networks have been widely adopted in numerous domains due to their high performance and accessibility to developers and application-specific end-users. Fundamental to image-based applications is the development of Convolutional Neural Networks (CNNs), which possess the ability to automatically extract features from data. However, comprehending these complex models and their learned representations, which typically comprise millions of parameters and numerous layers, remains a challenge for both developers and end-users. This challenge arises due to the absence of interpretable and transparent tools to make sense of black-box models. There exists a growing body of Explainable Artificial Intelligence (XAI) literature, including a collection of methods denoted Class Activation Maps (CAMs), that seek to demystify what representations the model learns from the data, how it informs a given prediction, and why it, at times, performs poorly in certain tasks. We propose a novel XAI visualization method denoted CAManim that seeks to simultaneously broaden and focus end-user understanding of CNN predictions by animating the CAM-based network activation maps through all layers, effectively depicting from end-to-end how a model progressively arrives at the final layer activation. Herein, we demonstrate that CAManim works with any CAM-based method and various CNN architectures. Beyond qualitative model assessments, we additionally propose a novel quantitative assessment that expands upon the Remove and Debias (ROAD) metric, pairing the qualitative end-to-end network visual explanations assessment with our novel quantitative "yellow brick ROAD" assessment (ybROAD). This builds upon prior research to address the increasing demand for interpretable, robust, and transparent model assessment methodology, ultimately improving an end-user's trust in a given model's predictions. Examples and source code can be found at: https://omni-ml.github.io/pytorch-grad-cam-anim/.
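
For readers unfamiliar with CAMs: the original formulation (Zhou et al., 2016) is a class-weighted sum of the final convolutional layer's feature maps, which CAManim extends by rendering such maps at every layer. A minimal NumPy sketch with random stand-in activations:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Vanilla CAM: weighted sum of (C, H, W) feature maps using the
    classifier weights for the target class, rectified and normalised
    to [0, 1] for visualisation."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

fmaps = np.random.rand(8, 7, 7)  # stand-in (channels, H, W) activations
w = np.random.rand(8)            # stand-in class weights
cam = class_activation_map(fmaps, w)
```

Animating this computation layer by layer, as CAManim does, shows how spatial evidence for the prediction accumulates through the network.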

PMID:38889117 | DOI:10.1371/journal.pone.0296985

Categories: Literature Watch

A high-accuracy lightweight network model for X-ray image diagnosis: A case study of COVID detection

Tue, 2024-06-18 06:00

PLoS One. 2024 Jun 18;19(6):e0303049. doi: 10.1371/journal.pone.0303049. eCollection 2024.

ABSTRACT

Coronavirus Disease 2019 (COVID-19) has caused widespread and significant harm globally. To address the urgent demand for a rapid and reliable diagnostic approach to mitigate transmission, deep learning stands as a viable solution. Many existing models are impractical because of excessively large parameter counts, significantly limiting their utility, while models with few parameters fall short of desirable classification accuracy. Motivated by this observation, the present study employs the lightweight network MobileNetV3 as the underlying architecture. This paper incorporates the dense block to capture intricate spatial information in images, as well as a transition layer designed to reduce the size and channel number of the feature map. Furthermore, this paper employs a label smoothing loss to address inter-class similarity effects and uses class weighting to tackle the problem of data imbalance. Additionally, this study applies pruning to eliminate unnecessary structures and further reduce the number of parameters. As a result, the improved model achieves an impressive 98.71% accuracy on an openly accessible database while utilizing only 5.94 million parameters. Compared with previous methods, the maximum accuracy improvement reaches 5.41%, and the parameter count is reduced by a factor of up to 24, showcasing the efficacy of our approach. This demonstrates significant benefits for regions with limited medical resources.
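
The label-smoothing loss mentioned above admits a compact expression: the one-hot target is mixed with a uniform distribution before taking the cross-entropy. A minimal NumPy sketch, assuming a smoothing factor of 0.1 (the paper's exact value is not stated here):

```python
import numpy as np

def label_smoothing_ce(logits, target, eps=0.1):
    """Cross-entropy with label smoothing: the target distribution gives
    (1 - eps) + eps/K to the true class and eps/K to every other class,
    softening penalties between visually similar classes."""
    logits = logits - logits.max()                 # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    k = len(logits)
    smooth = np.full(k, eps / k)
    smooth[target] += 1.0 - eps
    return -(smooth * log_probs).sum()

loss = label_smoothing_ce(np.array([2.0, 0.5, 0.1]), target=0)
```

With eps=0 this reduces to ordinary cross-entropy; a small eps keeps the model from becoming overconfident on any single class, which is the inter-class similarity effect the paper targets.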

PMID:38889106 | DOI:10.1371/journal.pone.0303049

Categories: Literature Watch

TSRNet: A Dual-stream Network for Refining 3D Tooth Segmentation

Tue, 2024-06-18 06:00

IEEE Trans Vis Comput Graph. 2024 Jun 18;PP. doi: 10.1109/TVCG.2024.3413345. Online ahead of print.

ABSTRACT

The field of 3D tooth segmentation has made considerable advances thanks to deep learning, but challenges remain with coarse segmentation boundaries and prediction errors. In this paper, we introduce a novel learnable method to refine coarse results obtained from existing 3D tooth segmentation algorithms. The refinement framework features a dual-stream network called TSRNet (Tooth Segmentation Refinement Network) to rectify defective boundary and distance maps extracted from the coarse segmentation. The boundary map provides explicit boundary information, while the distance map provides gradient information in the form of the shortest geodesic distance between each vertex and the segmentation boundary. Following well-designed rules, the two refined maps are utilized to move the coarse tooth boundaries toward their correct positions through an iterative refinement process. The two-stage refinement method is validated on both 3D tooth and segmentation benchmark datasets. Extensive experiments demonstrate that our method significantly improves upon the coarse results from baseline methods and achieves state-of-the-art performance. (Code will be publicly available at https://github.com/bibi547/TSRNet.git.)

PMID:38889041 | DOI:10.1109/TVCG.2024.3413345

Categories: Literature Watch

Small target tea bud detection based on improved YOLOv5 in complex background

Tue, 2024-06-18 06:00

Front Plant Sci. 2024 Jun 3;15:1393138. doi: 10.3389/fpls.2024.1393138. eCollection 2024.

ABSTRACT

Tea bud detection is the first step in the precise picking of famous teas. Accurate and fast tea bud detection is crucial for achieving intelligent tea bud picking. However, existing detection methods still exhibit limitations in both detection accuracy and speed due to the intricate background of tea buds and their small size. This study uses YOLOv5 as the base network and adds an attention mechanism to obtain more detailed information about tea buds, reducing the false and missed detections caused by their varying sizes. Spatial Pyramid Pooling Fast (SPPF) is added in front of the head to better exploit the attention module's ability to fuse information, and the lightweight Group Shuffle Convolution (GSConv) maintains model efficiency without compromising accuracy. The Mean-Positional-Distance Intersection over Union (MPDIoU) loss effectively accelerates model convergence and reduces training time. The experimental results demonstrate that our proposed method achieves precision (P), recall rate (R) and mean average precision (mAP) of 93.38%, 89.68%, and 95.73%, respectively. Compared with the baseline network, our proposed model's P, R, and mAP have been improved by 3.26%, 11.43%, and 7.68%, respectively. Meanwhile, comparative analyses with other deep learning methods using the same dataset underscore the efficacy of our approach in terms of P, R, mAP, and model size. This method can accurately detect the tea bud area and provides theoretical and technical support for subsequent tea picking.
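
For context on the MPDIoU loss: one published formulation penalises standard IoU by the normalised squared distances between the two boxes' matching corners. The sketch below follows that formulation and may differ in detail from the paper's implementation; boxes are (x1, y1, x2, y2) and `img_w`, `img_h` are the image dimensions:

```python
def mpd_iou(box_a, box_b, img_w, img_h):
    """MPDIoU sketch: IoU minus normalised squared distances between the
    top-left and bottom-right corners of the two boxes. Higher is better;
    the loss is typically 1 - mpd_iou."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union
    d1 = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2   # top-left corner distance
    d2 = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2   # bottom-right corner distance
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm
```

Because the corner-distance terms stay informative even when boxes do not overlap, the gradient signal persists early in training, which is where the faster convergence comes from.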

PMID:38887461 | PMC:PMC11180724 | DOI:10.3389/fpls.2024.1393138

Categories: Literature Watch

Classification of Periapical and Bitewing Radiographs as Periodontally Healthy or Diseased by Deep Learning Algorithms

Tue, 2024-06-18 06:00

Cureus. 2024 May 18;16(5):e60550. doi: 10.7759/cureus.60550. eCollection 2024 May.

ABSTRACT

Objectives The aim of this artificial intelligence (AI) study was to develop a deep learning algorithm capable of automatically classifying periapical and bitewing radiography images as either periodontally healthy or unhealthy and to assess the algorithm's diagnostic success. Materials and methods The sample of the study consisted of 1120 periapical radiographs (560 periodontally healthy, 560 periodontally unhealthy) and 1498 bitewing radiographs (749 periodontally healthy, 749 periodontally unhealthy). From the main datasets of both radiography types, three sub-datasets were randomly created: a training set (80%), a validation set (10%), and a test set (10%). Using these sub-datasets, a deep learning algorithm was developed with the YOLOv8-cls model (Ultralytics, Los Angeles, California, United States) and trained over 300 epochs. The success of the developed algorithm was evaluated using the confusion matrix method. Results The AI algorithm achieved classification accuracies of 75% or higher for both radiograph types. For bitewing radiographs, the sensitivity, specificity, precision, accuracy, and F1 score values were 0.8243, 0.7162, 0.7439, 0.7703, and 0.7821, respectively. For periapical radiographs, the sensitivity, specificity, precision, accuracy, and F1 score were 0.7500, 0.7500, 0.7500, 0.7500, and 0.7500, respectively. Conclusion The AI models developed in this study demonstrated considerable success in classifying periodontal disease. Future applications may involve employing AI algorithms for assessing periodontal status across various types of radiography images and for automated disease detection.
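
All five reported metrics derive from the same binary confusion matrix. A self-contained sketch (the counts below are toy values; the paper reports only the derived metrics, not the matrix itself):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, accuracy and F1 score from a
    binary confusion matrix, as in the evaluation described above."""
    sens = tp / (tp + fn)                    # true positive rate
    spec = tn / (tn + fp)                    # true negative rate
    prec = tp / (tp + fp)                    # positive predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * prec * sens / (prec + sens)     # harmonic mean of P and R
    return sens, spec, prec, acc, f1

# Toy balanced matrix; note it yields identical values for all five
# metrics, the same pattern as the periapical results above.
m = binary_metrics(tp=60, fp=20, tn=60, fn=20)
```

When classes are balanced and errors are symmetric, as in this toy matrix, all five metrics coincide; divergence between sensitivity and specificity (as in the bitewing results) signals asymmetric error rates.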

PMID:38887333 | PMC:PMC11181894 | DOI:10.7759/cureus.60550

Categories: Literature Watch

Non-Invasive Detection of Early-Stage Fatty Liver Disease via an On-Skin Impedance Sensor and Attention-Based Deep Learning

Tue, 2024-06-18 06:00

Adv Sci (Weinh). 2024 Jun 17:e2400596. doi: 10.1002/advs.202400596. Online ahead of print.

ABSTRACT

Early-stage nonalcoholic fatty liver disease (NAFLD) is a silent condition, with most cases going undiagnosed and potentially progressing to liver cirrhosis and cancer. A non-invasive, cost-effective method for detecting early-stage NAFLD is a public health priority but remains challenging. In this study, an adhesive, soft on-skin sensor with low electrode-skin contact impedance for early-stage NAFLD detection is fabricated. A method is developed to synthesize platinum nanoparticles and reduced graphene quantum dots onto the on-skin sensor, reducing electrode-skin contact impedance by increasing double-layer capacitance and thereby enhancing detection accuracy. Furthermore, an attention-based deep learning algorithm is introduced to differentiate impedance signals associated with early-stage NAFLD in high-fat-diet-fed low-density lipoprotein receptor knockout (Ldlr-/-) mice compared to healthy controls. The integration of the low-impedance on-skin sensor with the attention-based deep learning algorithm significantly enhances detection accuracy for early-stage NAFLD, achieving a rate above 97.5% with an area under the receiver operating characteristic curve (AUC) of 1.0. The findings present a non-invasive approach for early-stage NAFLD detection and a strategy for improved early detection through on-skin electronics and deep learning.

PMID:38887178 | DOI:10.1002/advs.202400596

Categories: Literature Watch

Toward understanding the role of genomic repeat elements in neurodegenerative diseases

Tue, 2024-06-18 06:00

Neural Regen Res. 2025 Mar 1;20(3):646-659. doi: 10.4103/NRR.NRR-D-23-01568. Epub 2024 Apr 16.

ABSTRACT

Neurodegenerative diseases cause great medical and economic burdens for both patients and society; however, the complex molecular mechanisms thereof are not yet well understood. With the development of high-coverage sequencing technology, researchers have started to notice that genomic repeat regions, previously neglected in search of disease culprits, are active contributors to multiple neurodegenerative diseases. In this review, we describe the association between repeat element variants and multiple degenerative diseases through genome-wide association studies and targeted sequencing. We discuss the identification of disease-relevant repeat element variants, further powered by the advancement of long-read sequencing technologies and their related tools, and summarize recent findings in the molecular mechanisms of repeat element variants in brain degeneration, such as those causing transcriptional silencing or RNA-mediated gain of toxic function. Furthermore, we describe how in silico predictions using innovative computational models, such as deep learning language models, could enhance and accelerate our understanding of the functional impact of repeat element variants. Finally, we discuss future directions to advance current findings for a better understanding of neurodegenerative diseases and the clinical applications of genomic repeat elements.

PMID:38886931 | DOI:10.4103/NRR.NRR-D-23-01568

Categories: Literature Watch

Deep learning-based classification of erosion, synovitis and osteitis in hand MRI of patients with inflammatory arthritis

Mon, 2024-06-17 06:00

RMD Open. 2024 Jun 17;10(2):e004273. doi: 10.1136/rmdopen-2024-004273.

ABSTRACT

OBJECTIVES: To train, test and validate the performance of a convolutional neural network (CNN)-based approach for the automated assessment of bone erosions, osteitis and synovitis in hand MRI of patients with inflammatory arthritis.

METHODS: Hand MRIs (coronal T1-weighted, T2-weighted fat-suppressed, and T1-weighted fat-suppressed contrast-enhanced sequences) of rheumatoid arthritis (RA) and psoriatic arthritis (PsA) patients from the rheumatology department of Erlangen University Hospital were assessed by two expert rheumatologists using the Outcome Measures in Rheumatology-validated RA MRI Scoring System and PsA MRI Scoring System. These annotated scans were used to train, validate and test CNNs to automatically score erosions, osteitis and synovitis. Scoring performance was compared with human annotations in terms of macro-area under the receiver operating characteristic curve (AUC) and balanced accuracy using fivefold cross-validation. Validation was performed on an independent dataset of MRIs from a second patient cohort.

RESULTS: In total, 211 MRIs from 112 patients (14 906 regions of interest (ROIs)) were included for training/internal validation using cross-validation, and 220 MRIs from 75 patients (11 040 ROIs) for external validation of the networks. The networks achieved a high mean (SD) macro-AUC of 92%±1% for erosions, 91%±2% for osteitis and 85%±2% for synovitis. Compared with human annotation, CNNs achieved a high mean Spearman correlation for erosions (90±2%), osteitis (78±8%) and synovitis (69±7%), which remained consistent in the validation dataset.
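The two headline statistics above, one-vs-rest macro-AUC and Spearman rank correlation, can be computed without any special tooling. The following is a minimal, self-contained sketch of both calculations; the labels and probability scores below are hypothetical toy values for illustration only, not data from the study:

```python
from statistics import mean

def binary_auc(labels, scores):
    """AUC via the Mann-Whitney pairwise-concordance formulation (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative samples")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(y_true, y_prob, n_classes):
    """One-vs-rest AUC per class, averaged with equal weight per class."""
    return mean(
        binary_auc([1 if y == c else 0 for y in y_true], [p[c] for p in y_prob])
        for c in range(n_classes)
    )

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the (tie-averaged) rank vectors."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0.0] * len(x)
        i = 0
        while i < len(order):  # assign average rank across runs of tied values
            j = i
            while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    ra, rb = ranks(a), ranks(b)
    ma, mb = mean(ra), mean(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var = (sum((x - ma) ** 2 for x in ra) * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return cov / var

# Hypothetical ROI-level predictions for a three-grade severity scale.
y_true = [0, 1, 2, 0, 1, 2]
y_prob = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8],
          [0.7, 0.2, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]]
print(macro_auc(y_true, y_prob, 3))              # perfect per-class ranking -> 1.0
print(spearman([0, 1, 2, 3], [10, 20, 30, 40]))  # monotone agreement -> 1.0
```

In practice the same quantities are typically obtained with `sklearn.metrics.roc_auc_score(..., multi_class='ovr', average='macro')` and `scipy.stats.spearmanr`.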

CONCLUSIONS: We developed a CNN-based automated scoring system that allows rapid grading of erosions, osteitis and synovitis with good diagnostic accuracy while using fewer MRI sequences than conventional scoring. This CNN-based approach may help develop standardised, cost-efficient and time-efficient assessments of hand MRIs for patients with arthritis.

PMID:38886001 | DOI:10.1136/rmdopen-2024-004273

Categories: Literature Watch

Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy

Mon, 2024-06-17 06:00

Radiother Oncol. 2024 Jun 15:110387. doi: 10.1016/j.radonc.2024.110387. Online ahead of print.

ABSTRACT

Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), thereby removing the registration uncertainties associated with pairing multi-modality images and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge in achieving widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into clinics and to report the process and its outcomes. This position paper focuses on aspects of sCT development and commissioning, outlining key elements crucial for the safe implementation of an MRI-only RT workflow.

PMID:38885905 | DOI:10.1016/j.radonc.2024.110387

Categories: Literature Watch

Diabetic retinopathy screening through artificial intelligence algorithms: A systematic review

Mon, 2024-06-17 06:00

Surv Ophthalmol. 2024 Jun 15:S0039-6257(24)00051-1. doi: 10.1016/j.survophthal.2024.05.008. Online ahead of print.

ABSTRACT

Diabetic retinopathy (DR) poses a significant challenge in diabetes management, as its progression is often asymptomatic until advanced stages. This underscores the urgent need for cost-effective and reliable screening methods, and the integration of artificial intelligence (AI) tools presents a promising avenue to address this need. We provide an overview of current state-of-the-art results and techniques in DR screening using AI, while also identifying gaps in research for future exploration. By synthesizing the existing literature and pinpointing areas requiring further investigation, this paper seeks to guide the direction of future research in automatic DR screening. There has been a continuous rise in the number of articles describing deep learning methods for automatic DR screening, especially by 2021. Researchers utilized various databases, with a primary focus on the IDRiD dataset, which comprises 516 color fundus images captured at an ophthalmological clinic in India and depicting various stages of diabetic retinopathy and diabetic macular edema. Each of the selected papers concentrates on different DR signs; nevertheless, a significant portion of the authors focused primarily on detecting exudates, which remains insufficient to assess the overall presence of the disease. Various AI methods have been employed to identify DR signs: among the selected papers, 4.7% utilized detection methods, 46.5% employed classification techniques, 41.9% relied on segmentation, and 7% opted for a combination of classification and segmentation. Metrics reported by the 80% of articles employing preprocessing techniques demonstrated the significant benefit of this step for the quality of results. Across classification, detection and segmentation, researchers mostly used YOLO for detection, vision transformers (ViT) for classification and U-Net for segmentation. Another perspective on the evolving landscape of AI models for DR screening lies in the increasing adoption of convolutional neural networks for classification tasks and U-Net architectures for segmentation. However, there is a growing realization within the research community that these techniques, while powerful individually, can be even more effective when integrated. This integration holds promise not only for diagnosing DR but also for accurately classifying its stages, thereby enabling more tailored treatment strategies. Despite this potential, the development of AI models for DR screening is fraught with challenges. Chief among these is the difficulty of obtaining the high-quality, labeled data necessary for training models to perform effectively; this scarcity of data poses a significant barrier to robust performance and can hinder progress in developing accurate screening systems. Managing the complexity of these models, particularly deep neural networks, presents its own set of challenges, and interpreting their outputs and ensuring their reliability in real-world clinical settings remain ongoing concerns. Furthermore, the iterative process of training and adapting these models to specific datasets can be time-consuming and resource-intensive. These challenges underscore the multifaceted nature of developing effective AI models for DR screening; addressing them requires concerted efforts from researchers, clinicians, and technologists to innovate new approaches and overcome existing limitations. By doing so, the full potential of AI to transform DR screening and improve patient outcomes may be realized.
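As a small arithmetic cross-check of the method-category shares quoted above: the review does not state its total paper count, but a hypothetical pool of 43 papers, an assumption made here purely for illustration, reproduces all four one-decimal percentages exactly:

```python
# Sanity check of the reported method-category shares.
# NOTE: the review does not state its total paper count; 43 is a
# hypothetical pool size assumed here because it reproduces the
# reported one-decimal percentages (4.7 / 46.5 / 41.9 / 7.0) exactly.
counts = {"detection": 2, "classification": 20, "segmentation": 18, "combined": 3}
total = sum(counts.values())  # 43 (assumed)
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(shares)  # {'detection': 4.7, 'classification': 46.5, 'segmentation': 41.9, 'combined': 7.0}
```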

PMID:38885761 | DOI:10.1016/j.survophthal.2024.05.008

Categories: Literature Watch
