Deep learning

Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images -- Nevus & Melanoma

Mon, 2024-05-27 06:00

ArXiv [Preprint]. 2024 May 16:arXiv:2405.09851v1.

ABSTRACT

Automated region of interest detection in histopathological image analysis is a challenging and important topic with tremendous potential impact on clinical practice. The deep-learning methods used in computational pathology may help us reduce costs and increase the speed and accuracy of cancer diagnosis. We started with the UNC Melanocytic Tumor Dataset cohort, which contains 160 hematoxylin and eosin whole-slide images of primary melanomas (86) and nevi (74). We randomly assigned 80% (134) of the slides as a training set and built an in-house deep-learning method for slide-level classification of nevi and melanomas. The proposed method performed well on the remaining 20% (26 slides) held out as a test set: slide classification accuracy was 92.3%, and the model also predicted the regions of interest annotated by the pathologists with high fidelity, demonstrating strong performance on melanocytic skin tumors. Although the experiments were limited to this skin tumor dataset, the approach could be extended to other medical image detection problems to benefit the clinical evaluation and diagnosis of different tumors.
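The abstract does not describe the in-house method in detail; a minimal, purely illustrative sketch of a generic patch-based approach to slide-level classification and ROI heatmapping is shown below. The helper names (slide_prediction, roi_heatmap) and the mean-pooling aggregation are assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: generic patch-based aggregation for slide-level
# classification and coarse ROI heatmaps; not the paper's actual method.
import numpy as np

def slide_prediction(patch_probs, threshold=0.5):
    """Aggregate per-patch melanoma probabilities into a slide-level call."""
    patch_probs = np.asarray(patch_probs)
    slide_score = patch_probs.mean()          # simple mean-pooling aggregation (assumed)
    label = "melanoma" if slide_score >= threshold else "nevus"
    return label, slide_score

def roi_heatmap(patch_probs, grid_shape):
    """Reshape per-patch scores into a coarse heatmap highlighting likely ROIs."""
    return np.asarray(patch_probs).reshape(grid_shape)

# Example with dummy scores for a 4x4 grid of patches
probs = np.random.rand(16)
print(slide_prediction(probs))
print(roi_heatmap(probs, (4, 4)))
```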

PMID:38800658 | PMC:PMC11118677

Categories: Literature Watch

drGAT: Attention-Guided Gene Assessment of Drug Response Utilizing a Drug-Cell-Gene Heterogeneous Network

Mon, 2024-05-27 06:00

ArXiv [Preprint]. 2024 May 14:arXiv:2405.08979v1.

ABSTRACT

Drug development is a lengthy process with a high failure rate. Increasingly, machine learning is utilized to facilitate the drug development process. These models aim to enhance our understanding of drug characteristics, including their activity in biological contexts. However, a major challenge in drug response (DR) prediction is model interpretability, as it aids in the validation of findings. This is important in biomedicine, where models need to be understandable in comparison with established knowledge of drug interactions with proteins. drGAT, a graph deep learning model, leverages a heterogeneous graph composed of relationships between proteins, cell lines, and drugs. drGAT is designed with two objectives: DR prediction as a binary sensitivity prediction and elucidation of drug mechanism from attention coefficients. drGAT has demonstrated superior performance over existing models, achieving 78% accuracy (and precision) and a 76% F1 score for 269 DNA-damaging compounds of the NCI60 drug response dataset. To assess the model's interpretability, we conducted a review of drug-gene co-occurrences in PubMed abstracts in comparison to the top 5 genes with the highest attention coefficients for each drug. We also examined whether known relationships were retained in the model by inspecting the neighborhoods of topoisomerase-related drugs. For example, our model retained TOP1 as a highly weighted predictive feature for irinotecan and topotecan, in addition to other genes that could potentially be regulators of the drugs. Our method can be used to accurately predict sensitivity to drugs and may be useful in the identification of biomarkers relating to the treatment of cancer patients.
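A minimal sketch of the interpretability step described above: given a matrix of drug-to-gene attention coefficients, report the top 5 genes per drug. The attention values would come from the trained drGAT model; the arrays below are stand-ins.

```python
# Rank genes per drug by attention coefficient (placeholder values, not drGAT output).
import numpy as np

genes = np.array(["TOP1", "TOP2A", "TP53", "BRCA1", "ATM", "CHEK1", "PARP1"])
drugs = ["irinotecan", "topotecan"]
attention = np.random.rand(len(drugs), len(genes))  # placeholder coefficients

for d, drug in enumerate(drugs):
    top5 = genes[np.argsort(attention[d])[::-1][:5]]  # 5 highest-attention genes
    print(drug, "->", ", ".join(top5))
```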

PMID:38800657 | PMC:PMC11118660

Categories: Literature Watch

Fully Automated OCT-based Tissue Screening System

Mon, 2024-05-27 06:00

ArXiv [Preprint]. 2024 May 15:arXiv:2405.09601v1.

ABSTRACT

This study introduces a groundbreaking optical coherence tomography (OCT) imaging system designed for high-throughput screening applications using ex vivo tissue culture. Leveraging OCT's non-invasive, high-resolution capabilities, the system is equipped with a custom-designed motorized platform and tissue detection capability for automated, successive imaging across samples. Transformer-based deep learning segmentation algorithms further ensure robust, consistent, and efficient readouts that meet the standards for screening assays. Validated using retinal explant cultures from a mouse model of retinal degeneration, the system provides robust, rapid, reliable, unbiased, and comprehensive readouts of tissue response to treatments. This fully automated OCT-based system marks a significant advancement in tissue screening, promising to transform drug discovery as well as other relevant research fields.

PMID:38800655 | PMC:PMC11118679

Categories: Literature Watch

Computational modeling for deciphering tissue microenvironment heterogeneity from spatially resolved transcriptomics

Mon, 2024-05-27 06:00

Comput Struct Biotechnol J. 2024 May 17;23:2109-2115. doi: 10.1016/j.csbj.2024.05.028. eCollection 2024 Dec.

ABSTRACT

Spatial transcriptomics techniques, while measuring gene expression, retain spatial location information, aiding in situ studies of organismal tissue architecture and the progression of pathological processes. These techniques generate vast amounts of omics data, necessitating the development of computational methods to reveal the underlying tissue microenvironment heterogeneity. The main directions in spatial transcriptomics data analysis are spatial domain detection and spatial deconvolution, which can identify spatial functional regions and parse the distribution of cell types in spatial transcriptomics data by integrating single-cell transcriptomics data. In these two research directions, many computational methods have been proposed in succession. This article categorizes them into three types: machine learning-based methods, probabilistic model-based methods, and deep learning-based methods. It lists and discusses representative algorithms of each type along with their advantages and disadvantages, and describes the datasets and evaluation metrics used to assess these computational methods, helping researchers select suitable computational methods according to their research needs. Finally, combining the latest technological developments with the advantages and disadvantages of current algorithms, the article looks forward to future directions in computational method development.
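As a toy illustration of one of the two directions discussed, spatial domain detection, the sketch below clusters spots using both expression and spatial coordinates. The methods reviewed in the article are far more sophisticated; this only shows the basic idea of combining the two information sources, and the weighting of the spatial term is an arbitrary assumption.

```python
# Toy spatial domain detection: cluster spots on expression PCs plus coordinates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.poisson(2.0, size=(500, 2000)).astype(float)  # spots x genes (dummy counts)
coords = rng.uniform(0, 100, size=(500, 2))                    # spot locations

pcs = PCA(n_components=20).fit_transform(StandardScaler().fit_transform(expression))
features = np.hstack([StandardScaler().fit_transform(pcs),
                      0.5 * StandardScaler().fit_transform(coords)])  # weighted spatial term
domains = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(domains))  # spots assigned to each putative spatial domain
```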

PMID:38800634 | PMC:PMC11126885 | DOI:10.1016/j.csbj.2024.05.028

Categories: Literature Watch

Incorporating simulated spatial context information improves the effectiveness of contrastive learning models

Mon, 2024-05-27 06:00

Patterns (N Y). 2024 Mar 26;5(5):100964. doi: 10.1016/j.patter.2024.100964. eCollection 2024 May 10.

ABSTRACT

Visual learning often occurs in a specific context, where an agent acquires skills through exploration and tracking of its location in a consistent environment. The historical spatial context of the agent provides a similarity signal for self-supervised contrastive learning. We present a unique approach, termed environmental spatial similarity (ESS), that complements existing contrastive learning methods. Using images from simulated, photorealistic environments as an experimental setting, we demonstrate that ESS outperforms traditional instance discrimination approaches. Moreover, sampling additional data from the same environment substantially improves accuracy and provides new augmentations. ESS allows remarkable proficiency in room classification and spatial prediction tasks, especially in unfamiliar environments. This learning paradigm has the potential to enable rapid visual learning in agents operating in new environments with unique visual characteristics. Potentially transformative applications span from robotics to space exploration. Our proof of concept demonstrates improved efficiency over methods that rely on extensive, disconnected datasets.
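A sketch of the core idea, under the assumption that ESS treats views captured at nearby locations in the same environment as positives for a standard InfoNCE objective; the exact loss and sampling scheme in the paper may differ.

```python
# InfoNCE with spatially close views as positive pairs (assumed formulation).
import torch
import torch.nn.functional as F

def info_nce(anchor_emb, positive_emb, temperature=0.1):
    """anchor_emb, positive_emb: (N, D) embeddings; row i of each is a positive pair."""
    a = F.normalize(anchor_emb, dim=1)
    p = F.normalize(positive_emb, dim=1)
    logits = a @ p.t() / temperature      # similarity of every anchor to every candidate
    targets = torch.arange(a.size(0))     # the matching (spatially close) view is the positive
    return F.cross_entropy(logits, targets)

# Dummy embeddings for 8 image pairs sampled near each other in one environment
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```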

PMID:38800363 | PMC:PMC11117056 | DOI:10.1016/j.patter.2024.100964

Categories: Literature Watch

Enhancing Precision in Cardiac Segmentation for MR-Guided Radiation Therapy through Deep Learning

Sun, 2024-05-26 06:00

Int J Radiat Oncol Biol Phys. 2024 May 24:S0360-3016(24)00671-0. doi: 10.1016/j.ijrobp.2024.05.013. Online ahead of print.

ABSTRACT

INTRODUCTION: Cardiac substructure dose metrics are more strongly linked to late cardiac morbidities than whole-heart metrics. MR-guided radiation therapy (MRgRT) enables substructure visualization during daily localization, allowing potential for enhanced cardiac sparing. We extend a publicly available state-of-the-art deep learning (DL) framework, nnU-Net, to incorporate self-distillation (nnU-Net.wSD) for substructure segmentation for MRgRT.

METHODS: Eighteen patients (Institute A) who underwent thoracic or abdominal radiation therapy on a 0.35 T MR-guided linac were retrospectively evaluated. On each image, one of two radiation oncologists delineated reference contours of 12 cardiac substructures (chambers, great vessels, and coronary arteries) used to train (n=10), validate (n=3), and test (n=5) nnU-Net.wSD, which leverages a teacher-student network, with comparison to a standard 3D U-Net. The impact of using simulation data or including 3-4 daily images for augmentation during training was evaluated for nnU-Net.wSD. Geometric metrics (Dice similarity coefficient (DSC), mean distance to agreement (MDA), and 95% Hausdorff distance (HD95)), visual inspection, and clinical dose volume histograms (DVHs) were evaluated. To determine generalizability, Institute A's model was tested on an unlabeled dataset from Institute B (n=22) and evaluated via consensus scoring and volume comparisons.
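The Dice similarity coefficient used here is a standard overlap metric; a minimal NumPy version for binary masks is sketched below (not the authors' evaluation code).

```python
# Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_coefficient(pred, ref, eps=1e-8):
    """pred, ref: boolean arrays of the same shape (one substructure's mask)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Toy 3D masks
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 0:3] = True
print(round(dice_coefficient(a, b), 3))
```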

RESULTS: nnU-Net.wSD yielded a DSC (reported as mean ± standard deviation) of 0.65±0.25 across the 12 substructures (Chambers: 0.85±0.05, Great Vessels: 0.67±0.19, and Coronary Arteries: 0.33±0.16; mean MDA <3 mm and mean HD95 <9 mm), outperforming the 3D U-Net (0.583±0.28, p<0.01). Leveraging fractionated data for augmentation improved over a single MR-SIM timepoint (0.579±0.29, p<0.01). Predicted contours yielded DVHs that closely matched the clinical treatment plans, where mean and D0.03cc doses deviated by 0.32±0.5 Gy and 1.42±2.6 Gy, respectively. There were no statistically significant differences between Institute A and B volumes (p>0.05) for 11 of 12 substructures, with larger volumes requiring only minor changes and the coronary arteries exhibiting more variability.

CONCLUSIONS: This work is a critical step to rapid and reliable cardiac substructure segmentation to improve cardiac sparing in low-field MRgRT.

PMID:38797498 | DOI:10.1016/j.ijrobp.2024.05.013

Categories: Literature Watch

nBEST: Deep-learning-based non-human primates Brain Extraction and Segmentation Toolbox across ages, sites and species

Sun, 2024-05-26 06:00

Neuroimage. 2024 May 24:120652. doi: 10.1016/j.neuroimage.2024.120652. Online ahead of print.

ABSTRACT

Accurate processing and analysis of non-human primate (NHP) brain magnetic resonance imaging (MRI) serves an indispensable role in understanding brain evolution, development, aging, and diseases. Despite the accumulation of diverse NHP brain MRI datasets at various developmental stages and from various imaging sites/scanners, existing computational tools designed for human MRI typically perform poorly on NHP data, due to large differences in brain sizes, morphologies, and imaging appearances across species, sites, and ages, highlighting the need for NHP-specialized MRI processing tools. To address this issue, in this paper, we present a robust, generic, and fully automated computational pipeline, called the non-human primates Brain Extraction and Segmentation Toolbox (nBEST), whose main functionality includes brain extraction, non-cerebrum removal, and tissue segmentation. Building on cutting-edge deep learning techniques, employing lifelong learning to flexibly integrate data from diverse NHP populations, and innovatively constructing a 3D U-NeXt architecture, nBEST can handle structural NHP brain MR images well across multiple species, sites, and developmental stages (from neonates to the elderly). We extensively validated nBEST on, to our knowledge, the largest assembled dataset in NHP brain studies, encompassing 1,469 scans from 11 species (e.g., rhesus macaques, cynomolgus macaques, chimpanzees, marmosets, squirrel monkeys, etc.) across 23 independent datasets. Compared to alternative tools, nBEST offers superior precision, applicability, robustness, comprehensiveness, and generalizability, greatly benefiting downstream longitudinal, cross-sectional, and cross-species quantitative analyses. We have made nBEST an open-source toolbox (https://github.com/TaoZhong11/nBEST), and we are committed to its continual refinement through lifelong learning with incoming data to contribute further to the research field.

PMID:38797384 | DOI:10.1016/j.neuroimage.2024.120652

Categories: Literature Watch

Beyond Years: Is AI Ready to Predict Biological Age and Cardiovascular Risk Using Echocardiography?

Sun, 2024-05-26 06:00

J Am Soc Echocardiogr. 2024 May 24:S0894-7317(24)00263-3. doi: 10.1016/j.echo.2024.05.013. Online ahead of print.

NO ABSTRACT

PMID:38797330 | DOI:10.1016/j.echo.2024.05.013

Categories: Literature Watch

Advanced deep learning algorithm for instant discriminating of tea leave stress symptoms by smartphone-based detection

Sun, 2024-05-26 06:00

Plant Physiol Biochem. 2024 May 22;212:108769. doi: 10.1016/j.plaphy.2024.108769. Online ahead of print.

ABSTRACT

The primary challenges of tea production under multiple stress exposures have negatively affected its global market sustainability, so there is an urgent need for an in-field, rapid technique for monitoring tea leaf stresses. Therefore, this study aimed to propose an efficient method for the detection of stress symptoms based on a portable smartphone with deep learning models. Firstly, a database containing over 10,000 images of tea garden canopies in complex natural scenes was developed, which included healthy (no stress) and three types of stress (tea anthracnose (TA), tea blister blight (TB) and sunburn (SB)). Then, YOLOv5m and YOLOv8m algorithms were adapted to discriminate the four types of stress symptoms; the YOLOv8m algorithm achieved better performance in the identification of healthy leaves (98%), TA (92.0%), TB (68.4%) and SB (75.5%). Furthermore, the YOLOv8m algorithm was used to construct a model for differentiation of TA disease severity, with satisfactory results: accuracies for mild, moderate, and severe TA infections were 94%, 96%, and 91%, respectively. Besides, we found that the CNN kernels of YOLOv8m could efficiently extract the texture characteristics of the images at layer 2, and these characteristics can clearly distinguish different types of stress symptoms. This contributes greatly to the YOLOv8m model's ability to achieve high-precision differentiation of the four types of stress symptoms. In conclusion, our study provides an effective system for low-cost, high-precision, fast, in-field diagnosis of tea stress symptoms in complex natural scenes based on smartphone imaging and deep learning algorithms.
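A hedged sketch of how a YOLOv8m model of this kind can be trained and applied with the Ultralytics API; the dataset file name (tea_stress.yaml), the image file, and the training settings are hypothetical stand-ins, not the authors' configuration.

```python
# Sketch: fine-tune a pretrained YOLOv8m detector on a custom stress-symptom dataset.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")                 # pretrained YOLOv8m weights
model.train(data="tea_stress.yaml",        # hypothetical dataset yaml: healthy, TA, TB, SB
            epochs=100, imgsz=640)

# Inference on a smartphone photo of a tea canopy
results = model.predict("canopy_photo.jpg", conf=0.25)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))   # predicted stress class index and confidence
```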

PMID:38797010 | DOI:10.1016/j.plaphy.2024.108769

Categories: Literature Watch

Resting state electroencephalographic brain activity in neonates can predict age and is indicative of neurodevelopmental outcome

Sun, 2024-05-26 06:00

Clin Neurophysiol. 2024 May 10;163:226-235. doi: 10.1016/j.clinph.2024.05.002. Online ahead of print.

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) can be used to estimate neonates' biological brain age. Discrepancies between postmenstrual age and brain age, termed the brain age gap, can potentially quantify maturational deviation. Existing brain age EEG models are not well suited to clinical cot-side use for estimating neonates' brain age gap due to their dependency on relatively large data and pre-processing requirements.

METHODS: We trained a deep learning model on resting state EEG data from preterm neonates with normal neurodevelopmental Bayley Scale of Infant and Toddler Development (BSID) outcomes, using substantially reduced data requirements. We subsequently tested this model in two independent datasets from two clinical sites.

RESULTS: In both test datasets, using only 20 min of resting-state EEG activity from a single channel, the model generated accurate age predictions: mean absolute error = 1.03 weeks (p-value = 0.0001) and 0.98 weeks (p-value = 0.0001). In one test dataset, where 9-month follow-up BSID outcomes were available, the average neonatal brain age gap in the severe abnormal outcome group was significantly larger than that of the normal outcome group: difference in mean brain age gap = 0.50 weeks (p-value = 0.04).
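A small sketch of the two quantities reported above: the mean absolute error of the age predictions and the brain age gap difference between outcome groups. The arrays are dummies, and the choice of a Mann-Whitney U test is an assumption for illustration; the abstract does not specify which test was used.

```python
# Brain age gap = predicted brain age - postmenstrual age; compare groups (dummy data).
import numpy as np
from scipy import stats

predicted_age = np.array([33.1, 35.4, 30.2, 36.8, 31.5])      # weeks, model output (dummy)
postmenstrual_age = np.array([32.0, 36.0, 31.0, 35.5, 30.5])  # weeks (dummy)

brain_age_gap = predicted_age - postmenstrual_age
mae = np.mean(np.abs(predicted_age - postmenstrual_age))
print("MAE (weeks):", round(mae, 2))

# Compare gaps between normal and severe-abnormal outcome groups (dummy grouping)
normal = brain_age_gap[:3]
severe = brain_age_gap[3:]
stat, p = stats.mannwhitneyu(severe, normal, alternative="greater")
print("group difference p-value:", round(p, 3))
```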

CONCLUSIONS: These findings demonstrate that the deep learning model generalises to independent datasets from two clinical sites, and that the model's brain age gap magnitudes differ between neonates with normal and severe abnormal follow-up neurodevelopmental outcomes.

SIGNIFICANCE: The magnitude of neonates' brain age gap, estimated using only 20 min of resting state EEG data from a single channel, can encode information of clinical neurodevelopmental value.

PMID:38797002 | DOI:10.1016/j.clinph.2024.05.002

Categories: Literature Watch

AVBAE-MODFR: A novel deep learning framework of embedding and feature selection on multi-omics data for pan-cancer classification

Sun, 2024-05-26 06:00

Comput Biol Med. 2024 May 14;177:108614. doi: 10.1016/j.compbiomed.2024.108614. Online ahead of print.

ABSTRACT

Integration analysis of cancer multi-omics data for pan-cancer classification has the potential for clinical applications in various aspects such as tumor diagnosis, analyzing clinically significant features, and providing precision medicine. In these applications, embedding and feature selection on high-dimensional multi-omics data are clinically necessary. Recently, deep learning algorithms have become the most promising methods for cancer multi-omics integration analysis, due to their powerful capability of capturing nonlinear relationships. Developing effective deep learning architectures for cancer multi-omics embedding and feature selection remains a challenge for researchers in view of high dimensionality and heterogeneity. In this paper, we propose a novel two-phase deep learning model named AVBAE-MODFR for pan-cancer classification. AVBAE-MODFR achieves embedding via a multi2multi autoencoder based on the adversarial variational Bayes method and further performs feature selection utilizing a dual-net-based feature ranking method. AVBAE-MODFR utilizes AVBAE to pre-train the network parameters, which improves the classification performance and enhances feature ranking stability in MODFR. Firstly, AVBAE learns high-quality representations among multiple omics features for unsupervised pan-cancer classification. We design an efficient discriminator architecture to distinguish the latent distributions when updating the forward variational parameters. Secondly, we propose MODFR to simultaneously evaluate multi-omics feature importance for feature selection by training a designed multi2one selector network, where an efficient evaluation approach based on the average gradient of random mask subsets avoids bias caused by input feature drift. We conduct experiments on the TCGA pan-cancer dataset and compare against four state-of-the-art methods for each phase. The results show the superiority of AVBAE-MODFR over the SOTA methods.

PMID:38796884 | DOI:10.1016/j.compbiomed.2024.108614

Categories: Literature Watch

Impact of imperfect annotations on CNN training and performance for instance segmentation and classification in digital pathology

Sun, 2024-05-26 06:00

Comput Biol Med. 2024 May 14;177:108586. doi: 10.1016/j.compbiomed.2024.108586. Online ahead of print.

ABSTRACT

Segmentation and classification of large numbers of instances, such as cell nuclei, are crucial tasks in digital pathology for accurate diagnosis. However, the availability of high-quality datasets for deep learning methods is often limited due to the complexity of the annotation process. In this work, we investigate the impact of noisy annotations on the training and performance of a state-of-the-art CNN model for the combined task of detecting, segmenting and classifying nuclei in histopathology images. In this context, we investigate the conditions for determining an appropriate number of training epochs to prevent overfitting to annotation noise during training. Our results indicate that the utilisation of a small, correctly annotated validation set is instrumental in avoiding overfitting and maintaining model performance to a large extent. Additionally, our findings underscore the beneficial role of pre-training.
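The practical takeaway can be illustrated with a short early-stopping sketch: monitor loss on a small, correctly annotated validation set and stop before the model starts fitting the annotation noise. The train_one_epoch and evaluate callables are hypothetical placeholders for a user's own training and evaluation loops, and a PyTorch-style model (with state_dict) is assumed.

```python
# Early stopping against a small clean validation set to avoid overfitting noisy labels.
def train_with_clean_val(model, train_loader, clean_val_loader,
                         train_one_epoch, evaluate, max_epochs=100, patience=5):
    best_val, best_state, epochs_without_improvement = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, train_loader)          # annotations may be noisy
        val_loss = evaluate(model, clean_val_loader)  # small but correctly annotated set
        if val_loss < best_val:
            best_val, best_state, epochs_without_improvement = val_loss, model.state_dict(), 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                                  # stop before fitting annotation noise
    if best_state is not None:
        model.load_state_dict(best_state)              # restore the best checkpoint
    return model
```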

PMID:38796882 | DOI:10.1016/j.compbiomed.2024.108586

Categories: Literature Watch

A review of deep learning-based information fusion techniques for multimodal medical image classification

Sun, 2024-05-26 06:00

Comput Biol Med. 2024 May 22;177:108635. doi: 10.1016/j.compbiomed.2024.108635. Online ahead of print.

ABSTRACT

Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, handling incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
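The three fusion schemes named above can be sketched in a few lines of PyTorch for two modalities with feature vectors xa and xb; the layer sizes are arbitrary and the models are toy illustrations of the taxonomy, not architectures from the review.

```python
# Toy illustrations of input fusion, intermediate fusion, and output fusion.
import torch
import torch.nn as nn

class InputFusion(nn.Module):            # concatenate raw inputs, then one network
    def __init__(self, da, db, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(da + db, 64), nn.ReLU(), nn.Linear(64, n_classes))
    def forward(self, xa, xb):
        return self.net(torch.cat([xa, xb], dim=1))

class IntermediateFusion(nn.Module):     # modality-specific encoders, fuse features
    def __init__(self, da, db, n_classes):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(da, 32), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(db, 32), nn.ReLU())
        self.head = nn.Linear(64, n_classes)
    def forward(self, xa, xb):
        return self.head(torch.cat([self.enc_a(xa), self.enc_b(xb)], dim=1))

class OutputFusion(nn.Module):           # independent classifiers, average the logits
    def __init__(self, da, db, n_classes):
        super().__init__()
        self.clf_a = nn.Linear(da, n_classes)
        self.clf_b = nn.Linear(db, n_classes)
    def forward(self, xa, xb):
        return 0.5 * (self.clf_a(xa) + self.clf_b(xb))

xa, xb = torch.randn(4, 10), torch.randn(4, 20)
for m in (InputFusion(10, 20, 3), IntermediateFusion(10, 20, 3), OutputFusion(10, 20, 3)):
    print(m(xa, xb).shape)
```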

PMID:38796881 | DOI:10.1016/j.compbiomed.2024.108635

Categories: Literature Watch

Unlocking the power of AI models: exploring protein folding prediction through comparative analysis

Sun, 2024-05-26 06:00

J Integr Bioinform. 2024 May 27. doi: 10.1515/jib-2023-0041. Online ahead of print.

ABSTRACT

Protein structure determination has made progress with the aid of deep learning models, enabling the prediction of protein folding from protein sequences. However, obtaining accurate predictions becomes essential in certain cases where the protein structure remains undescribed. This is particularly challenging when dealing with rare, diverse structures and complex sample preparation. Different metrics assess prediction reliability and offer insights into result strength, providing a comprehensive understanding of protein structure by combining different models. In a previous study, two proteins named ARM58 and ARM56 were investigated. These proteins contain four domains of unknown function and are present in Leishmania spp.; ARM refers to an antimony resistance marker. The study's main objective is to assess the accuracy of the model's predictions, thereby providing insights into the complexities and supporting metrics underlying these findings. The analysis also extends to the comparison of predictions obtained from other species and organisms. Notably, one of these proteins shares an ortholog with Trypanosoma cruzi and Trypanosoma brucei, lending further significance to our analysis. This effort underscored the importance of evaluating the diverse outputs from deep learning models, facilitating comparisons across different organisms and proteins. This becomes particularly pertinent in cases where no previous structural information is available.

PMID:38797876 | DOI:10.1515/jib-2023-0041

Categories: Literature Watch

Fog-based deep learning framework for real-time pandemic screening in smart cities from multi-site tomographies

Sun, 2024-05-26 06:00

BMC Med Imaging. 2024 May 27;24(1):123. doi: 10.1186/s12880-024-01302-8.

ABSTRACT

The quick proliferation of pandemic diseases has imposed many concerns on the international health infrastructure. To combat pandemic diseases in smart cities, Artificial Intelligence of Things (AIoT) technology, based on the integration of artificial intelligence (AI) with the Internet of Things (IoT), is commonly used to promote efficient control and diagnosis during an outbreak, thereby minimizing possible losses. However, the presence of multi-source institutional data remains one of the major challenges hindering the practical usage of AIoT solutions for pandemic disease diagnosis. This paper presents a novel framework that utilizes multi-site data fusion to boost the accuracy of pandemic disease diagnosis. In particular, we focus on a case study of COVID-19 lesion segmentation, a crucial task for understanding disease progression and optimizing treatment strategies. In this study, we propose a novel multi-decoder segmentation network for efficient segmentation of infections from cross-domain CT scans in smart cities. The multi-decoder segmentation network leverages data from heterogeneous domains and utilizes strong learning representations to accurately segment infections. Performance evaluation of the multi-decoder segmentation network was conducted on three publicly accessible datasets, demonstrating robust results with an average Dice score of 89.9% and an average surface Dice of 86.87%. To address the scalability and latency issues associated with centralized cloud systems, fog computing (FC) emerges as a viable solution. FC brings resources closer to the operator, offering low-latency, energy-efficient data management and processing. In this context, we propose a unique FC technique called PANDFOG to deploy the multi-decoder segmentation network on edge nodes for practical and clinical applications of automated COVID-19 pneumonia analysis. The results of this study highlight the efficacy of the multi-decoder segmentation network in accurately segmenting infections from cross-domain CT scans. Moreover, the proposed PANDFOG system demonstrates the practical deployment of the multi-decoder segmentation network on edge nodes, providing real-time access to COVID-19 segmentation findings for improved patient monitoring and clinical decision-making.
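A compact sketch of the multi-decoder idea as described: one shared encoder with a separate decoder per data domain (site). The real network is far more elaborate; the toy layers below only illustrate the domain routing and are not the paper's architecture.

```python
# Shared encoder + per-domain decoders for cross-domain CT infection segmentation.
import torch
import torch.nn as nn

class MultiDecoderSegNet(nn.Module):
    def __init__(self, n_domains=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # one decoder head per acquisition domain (e.g., per imaging site)
        self.decoders = nn.ModuleList([nn.Conv2d(32, 1, 1) for _ in range(n_domains)])

    def forward(self, ct_slice, domain_id):
        features = self.encoder(ct_slice)
        return torch.sigmoid(self.decoders[domain_id](features))  # infection probability mask

model = MultiDecoderSegNet()
mask = model(torch.randn(1, 1, 64, 64), domain_id=0)
print(mask.shape)  # (1, 1, 64, 64)
```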

PMID:38797827 | DOI:10.1186/s12880-024-01302-8

Categories: Literature Watch

Deep learning nomogram for predicting neoadjuvant chemotherapy response in locally advanced gastric cancer patients

Sun, 2024-05-26 06:00

Abdom Radiol (NY). 2024 May 26. doi: 10.1007/s00261-024-04331-7. Online ahead of print.

ABSTRACT

PURPOSE: To develop and validate a deep learning radiomics nomogram using multi-phase contrast-enhanced computed tomography (CECT) images to predict neoadjuvant chemotherapy (NAC) response in patients with locally advanced gastric cancer (LAGC).

METHODS: This multi-center study retrospectively included 322 patients diagnosed with gastric cancer from January 2013 to June 2023 at two hospitals. A handcrafted radiomics technique and the EfficientNet V2 neural network were applied to arterial, portal venous, and delayed phase CT images to extract two-dimensional handcrafted and deep learning features. A nomogram model was built by integrating the handcrafted signature and the deep learning signature with clinical features. Discriminative ability was assessed using the receiver operating characteristic (ROC) curve and the precision-recall (P-R) curve. Model fitting was evaluated using calibration curves, and clinical utility was assessed through decision curve analysis (DCA).
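The two discrimination metrics used here, the area under the ROC curve and the area under the P-R curve, can be computed with scikit-learn as sketched below on placeholder predictions (not the study's data).

```python
# ROC AUC and P-R AUC from model scores and binary NAC response labels (dummy values).
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # response labels (dummy)
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])  # nomogram outputs (dummy)

roc_auc = roc_auc_score(y_true, y_score)
precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)
print(f"ROC AUC = {roc_auc:.3f}, P-R AUC = {pr_auc:.3f}")
```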

RESULTS: The nomogram exhibited excellent performance. The area under the ROC curve (AUC) was 0.848 (95% confidence interval [CI], 0.793-0.893), 0.802 (95% CI 0.688-0.889), and 0.751 (95% CI 0.652-0.833) for the training, internal validation, and external validation sets, respectively. The AUCs of the P-R curves were 0.838 (95% CI 0.756-0.895), 0.541 (95% CI 0.329-0.740), and 0.556 (95% CI 0.376-0.722) for the corresponding sets. The nomogram outperformed the clinical model and handcrafted signature across all sets (all P < 0.05). The nomogram model demonstrated good calibration and provided greater net benefit within the relevant threshold range compared to other models.

CONCLUSION: This study created a deep learning nomogram using CECT images and clinical data to predict NAC response in LAGC patients undergoing surgical resection, offering personalized treatment insights.

PMID:38796795 | DOI:10.1007/s00261-024-04331-7

Categories: Literature Watch

Enhanced reliability and time efficiency of deep learning-based posterior tibial slope measurement over manual techniques

Sun, 2024-05-26 06:00

Knee Surg Sports Traumatol Arthrosc. 2024 May 26. doi: 10.1002/ksa.12241. Online ahead of print.

ABSTRACT

PURPOSE: Multifaceted factors contribute to inferior outcomes following anterior cruciate ligament (ACL) reconstruction surgery. A particular focus is placed on the posterior tibial slope (PTS). This study introduces the integration of machine learning and artificial intelligence (AI) for efficient measurement of tibial slopes on magnetic resonance (MR) images as a promising solution. This advancement aims to enhance risk stratification, diagnostic insights, intervention prognosis and surgical planning for ACL injuries.

METHODS: Images and demographic information from 120 patients who underwent ACL reconstruction surgery were used for this study. An AI-driven model was developed to measure the posterior lateral tibial slope using the YOLOv8 algorithm. The accuracy of the lateral tibial slope, medial tibial slope and tibial longitudinal axis measurements was assessed, and the results reached high levels of reliability. This study employed machine learning and AI techniques to provide objective, consistent and efficient measurements of tibial slopes on MR images.

RESULTS: Three distinct models were developed to derive AI-based measurements. The study results revealed a substantial correlation between the measurements obtained from the AI models and those obtained by the orthopaedic surgeon across three parameters: lateral tibial slope, medial tibial slope and tibial longitudinal axis. Specifically, the Pearson correlation coefficients were 0.673, 0.850 and 0.839, respectively. The Spearman rank correlation coefficients were 0.736, 0.861 and 0.738, respectively. Additionally, the intraclass correlation coefficients were 0.63, 0.84 and 0.84, respectively.
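The agreement analysis between AI and surgeon measurements can be reproduced in spirit with SciPy as sketched below; the angle values are dummies, and the intraclass correlation coefficient would typically require an additional package (e.g., pingouin), so it is omitted here.

```python
# Pearson and Spearman correlation between AI and surgeon slope measurements (dummy data).
import numpy as np
from scipy import stats

ai_slope = np.array([7.1, 8.4, 6.2, 9.0, 7.8, 5.5])       # degrees, AI model (dummy)
surgeon_slope = np.array([7.4, 8.1, 6.5, 9.3, 7.2, 5.9])  # degrees, orthopaedic surgeon (dummy)

r, _ = stats.pearsonr(ai_slope, surgeon_slope)
rho, _ = stats.spearmanr(ai_slope, surgeon_slope)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```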

CONCLUSION: This study establishes that the deep learning-based method for measuring posterior tibial slopes strongly correlates with the evaluations of expert orthopaedic surgeons. The time efficiency and consistency of this technique suggest its utility in clinical practice, promising to enhance workflow, risk assessment and the customization of patient treatment plans.

LEVEL OF EVIDENCE: Level III, cross-sectional diagnostic study.

PMID:38796728 | DOI:10.1002/ksa.12241

Categories: Literature Watch

Development and evaluation of a deep learning framework for the diagnosis of malnutrition using a 3D facial points cloud: A cross-sectional study

Sun, 2024-05-26 06:00

JPEN J Parenter Enteral Nutr. 2024 May 26. doi: 10.1002/jpen.2643. Online ahead of print.

ABSTRACT

BACKGROUND: The feasibility of diagnosing malnutrition using facial features has been validated. A tool that integrates all facial features associated with malnutrition for disease screening is still needed. This work aims to develop and evaluate a deep learning (DL) framework to accurately determine malnutrition based on a 3D facial points cloud.

METHODS: A group of 482 patients were studied in this prospective work. The 3D facial data were obtained using a 3D camera and represented as a 3D facial points cloud. A DL model, PointNet++, was trained and evaluated using the points cloud as input to classify malnutrition states. The performance was evaluated with the area under the receiver operating characteristic curve, accuracy, specificity, sensitivity, and F1 score.

RESULTS: Among the 482 patients, 150 patients (31.1%) were diagnosed as having moderate malnutrition and 54 patients (11.2%) as having severe malnutrition. The DL model achieved an area under the receiver operating characteristic curve of 0.7240 ± 0.0416.

CONCLUSION: The DL model achieved encouraging performance in accurately classifying nutrition states based on a points cloud of 3D facial information of patients with malnutrition.

PMID:38796717 | DOI:10.1002/jpen.2643

Categories: Literature Watch

DeepFace: Deep learning-based framework to contextualize orofacial cleft-related variants during human embryonic craniofacial development

Sun, 2024-05-26 06:00

HGG Adv. 2024 May 24:100312. doi: 10.1016/j.xhgg.2024.100312. Online ahead of print.

ABSTRACT

Orofacial clefts (OFC) are among the most common human congenital birth defects. Previous multiethnic studies have identified dozens of associated loci for both cleft lip with or without cleft palate (CL/P) and cleft palate alone (CP). Although several nearby genes have been highlighted, the causal variants are largely unknown. Here, we developed DeepFace, a convolutional neural network model, to assess the functional impact of variants by SNP activity difference (SAD) scores. The DeepFace model is trained with 204 epigenomic assays from crucial human embryonic craniofacial developmental stages of post-conception week (pcw) 4 to pcw 10. The median Pearson correlation coefficient between the predicted and actual values ranged from 0.50 to 0.83 across the 12 epigenetic features. Specifically, our model revealed that SNPs significantly associated with OFC tended to exhibit higher SAD scores across various variant categories compared to less-related groups, indicating a context-specific impact of OFC-related SNPs. Notably, we identified six SNPs with a significant linear relationship to SAD scores throughout developmental progression, suggesting that these SNPs could play a temporal regulatory role. Furthermore, our cell-type specificity analysis pinpointed the trophoblast cell as having the highest enrichment of risk signals associated with OFC. Overall, DeepFace can harness distal regulatory signals from extensive epigenomic assays, offering new perspectives for prioritizing OFC variants using contextualized functional genomic features. We expect DeepFace to be instrumental in assessing and predicting the regulatory roles of variants associated with OFC, and the model can be extended to study other complex diseases or traits.
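A conceptual sketch of an SNP activity difference (SAD) score follows: the change in a model's predicted epigenomic signal when the reference allele is swapped for the alternate allele. The one_hot and predict_signal helpers below are hypothetical stand-ins for DeepFace's preprocessing and trained network, and the base weights are arbitrary.

```python
# SAD score = prediction(alternate sequence) - prediction(reference sequence).
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    return np.array([[float(b == base) for base in BASES] for b in seq])

def predict_signal(encoded_seq):
    # placeholder for the trained model's predicted signal for one epigenomic assay
    base_weights = np.array([0.1, 0.2, 0.3, 0.4])  # arbitrary dummy weights
    return float((encoded_seq * base_weights).sum())

def sad_score(ref_seq, alt_seq):
    return predict_signal(one_hot(alt_seq)) - predict_signal(one_hot(ref_seq))

print(sad_score("ACGTACGT", "ACGAACGT"))  # effect of a single-base substitution
```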

PMID:38796699 | DOI:10.1016/j.xhgg.2024.100312

Categories: Literature Watch

Automated Prediction of Malignant Melanoma using Two-Stage Convolutional Neural Network

Sat, 2024-05-25 06:00

Arch Dermatol Res. 2024 May 25;316(6):275. doi: 10.1007/s00403-024-03076-z.

ABSTRACT

PURPOSE: A skin lesion refers to an area of the skin that exhibits anomalous growth or distinctive visual characteristics compared to the surrounding skin. Benign skin lesions are noncancerous and generally pose no threat; these irregular skin growths can vary in appearance. On the other hand, malignant skin lesions correspond to skin cancer, which is the most prevalent form of cancer in the United States. Skin cancer involves the unusual proliferation of skin cells anywhere on the body. The conventional method for detecting skin cancer is comparatively painful.

METHODS: This work involves the automated prediction of skin cancer and its types using a two-stage Convolutional Neural Network (CNN). The first stage of the CNN extracts low-level features and the second stage extracts high-level features. Feature selection is done using these two CNNs and the ABCD (Asymmetry, Border irregularity, Colour variation, and Diameter) technique. The features extracted from the two CNNs are fused with the ABCD features and fed into classifiers for the final prediction. The classifiers employed in this work include ensemble learning methods such as gradient boosting and XGBoost, as well as machine learning classifiers like decision trees and logistic regression. This methodology is evaluated using the International Skin Imaging Collaboration (ISIC) 2018 and 2019 datasets.
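The fusion-and-classify step can be sketched as below: CNN feature vectors concatenated with ABCD features and passed to classical classifiers. The arrays are random placeholders; in the described pipeline the CNN features would come from the two trained stages, and only two of the mentioned classifiers are shown.

```python
# Concatenate CNN features with ABCD features and train classical classifiers (dummy data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(300, 128))   # fused low- and high-level CNN features (dummy)
abcd_features = rng.normal(size=(300, 4))    # asymmetry, border, colour, diameter (dummy)
y = rng.integers(0, 2, size=300)             # benign vs malignant labels (dummy)

X = np.hstack([cnn_features, abcd_features])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for clf in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```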

RESULTS: The first-stage CNN, used to create the new dataset, achieved an accuracy of 97.92%, and the second-stage CNN, used for feature selection, achieved an accuracy of 98.86%. Classification results are reported both with and without feature fusion.

CONCLUSION: The two-stage prediction model therefore achieved better results with feature fusion.

PMID:38796546 | DOI:10.1007/s00403-024-03076-z

Categories: Literature Watch
