Deep learning

A review of traditional Chinese medicine diagnosis using machine learning: Inspection, auscultation-olfaction, inquiry, and palpation

Thu, 2024-02-08 06:00

Comput Biol Med. 2024 Feb 2;170:108074. doi: 10.1016/j.compbiomed.2024.108074. Online ahead of print.

ABSTRACT

Traditional Chinese medicine (TCM) is an essential part of the Chinese medical system and is recognized by the World Health Organization as an important alternative medicine. As a core component of TCM, TCM diagnosis is a method of understanding a patient's illness, analyzing its state, and identifying syndromes. Long-term clinical practice of TCM has produced four fundamental and effective diagnostic methods: inspection, auscultation-olfaction, inquiry, and palpation (IAOIP). However, diagnostic information in TCM is diverse, and the diagnostic process depends on doctors' experience, which introduces a high degree of subjectivity. Research on automated TCM diagnosis based on machine learning is currently booming. Machine learning, which includes deep learning, is an essential part of artificial intelligence (AI) and offers new avenues for objective, AI-driven research on TCM. This paper reviews and summarizes the current state of machine learning research in TCM diagnosis. First, we review key factors for applying machine learning to TCM diagnosis, including data, data preprocessing, machine learning models, and evaluation metrics. Second, we review and summarize the research and applications of machine learning methods in TCM IAOIP and in the synthesis of the four diagnostic methods. Finally, we discuss the challenges and research directions of using machine learning methods for TCM diagnosis.

PMID:38330826 | DOI:10.1016/j.compbiomed.2024.108074

Categories: Literature Watch

The influence of femoral lytic tumors segmentation on autonomous finite element analysis

Thu, 2024-02-08 06:00

Clin Biomech (Bristol, Avon). 2024 Feb 2;112:106192. doi: 10.1016/j.clinbiomech.2024.106192. Online ahead of print.

ABSTRACT

BACKGROUND: The validated CT-based autonomous finite element system Simfini (Yosibash et al., 2020) is used in clinical practice to assist orthopedic oncologists in determining the risk of pathological femoral fractures due to metastatic tumors. The finite element models are created automatically from CT scans; because lytic tumors could not be identified automatically, they were assigned a relatively low stiffness, as if they were low-density bone tissue.

METHODS: The newly developed automatic deep learning algorithm which segments lytic tumors in femurs, presented in (Rachmil et al., 2023), was integrated into Simfini. Finite element models of twenty femurs from ten CT-scans of patients with femoral lytic tumors were analyzed three times using: the original methodology without tumor segmentation, manual segmentation of the lytic tumors, and the new automatic segmentation deep learning algorithm to identify lytic tumors. The influence of explicitly incorporating tumors in the autonomous finite element analysis on computed principal strains is quantified. These serve as an indicator of femoral fracture and are therefore of clinical significance.

FINDINGS: Autonomous finite element models with segmented lytic tumors had generally larger strains in regions affected by the tumor. The deep learning and manual segmentation of tumors resulted in similar average principal strains in 19 regions out of the 23 regions within 15 femurs with lytic tumors. A high dice similarity score of the automatic deep learning tumor segmentation did not necessarily correspond to minor differences compared to manual segmentation.

INTERPRETATION: Automatic tumor segmentation by deep learning allows their incorporation into an autonomous finite element system, resulting generally in elevated averaged principal strains that may better predict pathological femoral fractures.

PMID:38330735 | DOI:10.1016/j.clinbiomech.2024.106192

Categories: Literature Watch

SWISTA-Nets: Subband-adaptive wavelet iterative shrinkage thresholding networks for image reconstruction

Thu, 2024-02-08 06:00

Comput Med Imaging Graph. 2024 Feb 5;113:102345. doi: 10.1016/j.compmedimag.2024.102345. Online ahead of print.

ABSTRACT

Robust and interpretable image reconstruction is central to imaging applications in clinical practice. Prevalent deep networks, despite their strong ability to extract implicit information from the data manifold, still lack prior knowledge drawn from mathematics or physics, leading to instability, poor structural interpretability, and high computational cost. To address this issue, we propose two prior-knowledge-driven networks that combine the good interpretability of mathematical methods with the powerful learnability of deep learning. Incorporating different kinds of prior knowledge, we propose subband-adaptive wavelet iterative shrinkage thresholding networks (SWISTA-Nets), in which almost every network module corresponds one-to-one with a step of the iterative algorithm. Through end-to-end training of the proposed SWISTA-Nets, implicit information can be extracted from training data to guide the tuning of key parameters that have a mathematical definition. The inverse problems associated with two medical imaging modalities, i.e., electromagnetic tomography and X-ray computed tomography, are used to validate the proposed networks. Both visual and quantitative results indicate that SWISTA-Nets outperform mathematical methods and state-of-the-art prior-knowledge-driven networks, with fewer training parameters, interpretable network structures, and good robustness. We expect this analysis to support further investigation of prior-knowledge-driven networks for ill-posed image reconstruction.
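For readers unfamiliar with iterative shrinkage thresholding, the core update that such unrolled networks turn into learnable modules can be sketched as a generic ISTA iteration. This is an illustrative sketch only, not the paper's subband-adaptive wavelet implementation; the names `soft_threshold` and `ista_step` are our own.

```python
import numpy as np

def soft_threshold(x, theta):
    """Soft-thresholding (shrinkage) operator used in ISTA-style updates."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista_step(x, y, A, step, theta):
    """One generic ISTA iteration for min ||Ax - y||^2 + lambda*||x||_1.
    In an unrolled network such as SWISTA-Nets, the step size and the
    (per-subband) thresholds would be learned rather than fixed."""
    grad = A.T @ (A @ x - y)          # gradient of the data-fidelity term
    return soft_threshold(x - step * grad, theta)
```

In an unrolled architecture, each network layer corresponds to one such iteration, which is what gives the model its one-to-one structural interpretability.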

PMID:38330636 | DOI:10.1016/j.compmedimag.2024.102345

Categories: Literature Watch

Deep learning model to differentiate Crohn's disease from intestinal tuberculosis using histopathological whole slide images from intestinal specimens

Thu, 2024-02-08 06:00

Virchows Arch. 2024 Feb 8. doi: 10.1007/s00428-024-03740-9. Online ahead of print.

ABSTRACT

Crohn's disease (CD) and intestinal tuberculosis (ITB) share similar histopathological characteristics, and their differential diagnosis can be a dilemma for pathologists. This study aimed to apply deep learning (DL) to analyze whole slide images (WSI) of surgical resection specimens to distinguish CD from ITB. Overall, 1973 WSI from 85 cases from 3 centers were obtained. The DL model was established on an internal training cohort and validated on an external test cohort, evaluated by the area under the receiver operating characteristic curve (AUC). Diagnostic results of pathologists were compared with those of the DL model using DeLong's test. The DL model had case-level AUCs of 0.886 and 0.893 and slide-level AUCs of 0.954 and 0.827 in the training and test cohorts, respectively. Attention maps highlighted discriminative areas, and the top 10 features were extracted from CD and ITB. The DL model's diagnostic efficiency (AUC = 0.886) was better than that of junior pathologists (pathologist 1: AUC = 0.701, P = 0.088; pathologist 2: AUC = 0.861, P = 0.788) and inferior to that of senior GI pathologists (pathologist 3: AUC = 0.910, P = 0.800; pathologist 4: AUC = 0.946, P = 0.507) in the training cohort. In the test cohort, the model (AUC = 0.893) outperformed senior non-GI pathologists (pathologist 5: AUC = 0.782, P = 0.327; pathologist 6: AUC = 0.821, P = 0.516). We developed a DL model for the classification of CD and ITB that can effectively improve pathological diagnostic accuracy.

PMID:38332051 | DOI:10.1007/s00428-024-03740-9

Categories: Literature Watch

Data-centric artificial olfactory system based on the eigengraph

Thu, 2024-02-08 06:00

Nat Commun. 2024 Feb 8;15(1):1211. doi: 10.1038/s41467-024-45430-9.

ABSTRACT

Recent electronic nose systems tend to waste a significant amount of important data in odor identification. Until now, sensitivity-oriented data composition has made it difficult to discover meaningful data for applying artificial intelligence to in-depth analysis of the odor attributes that specify the identities of gas molecules, ultimately hindering the advancement of artificial olfactory technology. Here, we realize a data-centric approach to implementing standardized artificial olfactory systems inspired by human olfactory mechanisms by formally defining and utilizing the concept of the eigengraph in electrochemistry. The implicit odor attributes of the eigengraphs were mathematically substantiated as Fourier-transform-based Mel-frequency cepstral coefficient feature vectors. Their effectiveness and applicability in deep learning for gas classification are clearly demonstrated through experiments on complex mixed gases and automobile exhaust gases. We suggest that our findings can be widely applied as source technologies for developing standardized artificial olfactory systems.

PMID:38332010 | DOI:10.1038/s41467-024-45430-9

Categories: Literature Watch

Precise individual muscle segmentation in whole thigh CT scans for sarcopenia assessment using U-net transformer

Thu, 2024-02-08 06:00

Sci Rep. 2024 Feb 8;14(1):3301. doi: 10.1038/s41598-024-53707-8.

ABSTRACT

The study aims to develop a deep-learning-based automatic segmentation approach using the UNETR (U-Net Transformer) architecture to quantify the volume of individual thigh muscles (27 muscles in 5 groups) for sarcopenia assessment. By automating the segmentation process, this approach improves the efficiency and accuracy of muscle volume calculation, facilitating a comprehensive understanding of muscle composition and its relationship to sarcopenia. The study utilized a dataset of 72 whole thigh CT scans from hip fracture patients, annotated by two radiologists. The UNETR model was trained to perform precise voxel-level segmentation, and metrics such as dice score, average symmetric surface distance, volume correlation, relative absolute volume difference, and Hausdorff distance were employed to evaluate the model's performance. Additionally, the correlation between sarcopenia and individual thigh muscle volumes was examined. The proposed model demonstrated superior segmentation performance compared to the baseline model, achieving higher dice scores (DC = 0.84) and lower average symmetric surface distances (ASSD = 1.4191 ± 0.91). Individual thigh muscle volumes showed negative associations with sarcopenia in the male group. Furthermore, the correlation analysis of grouped thigh muscles also showed negative associations with sarcopenia in the male participants. This study presents a deep-learning-based automatic segmentation approach for quantifying individual thigh muscle volume in sarcopenia assessment. The results highlight the associations between sarcopenia and specific individual muscles as well as grouped thigh muscle regions, particularly in males. The proposed method improves the efficiency and accuracy of muscle volume calculation, contributing to a comprehensive evaluation of sarcopenia. This research enhances our understanding of muscle composition and performance, providing valuable insights for effective interventions in sarcopenia management.
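Two of the segmentation metrics named in the abstract can be sketched minimally as follows; the function names are our own and the paper's exact implementations may differ.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks:
    2*|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree

def relative_absolute_volume_difference(pred, gt):
    """|V_pred - V_gt| / V_gt, with volumes taken as voxel counts."""
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum()
```

Surface-based metrics such as ASSD and Hausdorff distance additionally require extracting mask boundaries and computing nearest-neighbor distances between them.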

PMID:38331977 | DOI:10.1038/s41598-024-53707-8

Categories: Literature Watch

A Clinical and Imaging Fused Deep Learning Model Matches Expert Clinician Prediction of 90-Day Stroke Outcomes

Thu, 2024-02-08 06:00

AJNR Am J Neuroradiol. 2024 Feb 8. doi: 10.3174/ajnr.A8140. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Predicting long-term clinical outcome in acute ischemic stroke is beneficial for prognosis, clinical trial design, resource management, and patient expectations. This study used a deep learning-based predictive model (DLPD) to predict 90-day mRS outcomes and compared its predictions with those made by physicians.

MATERIALS AND METHODS: A previously developed DLPD that incorporated DWI and clinical data from the acute period was used to predict 90-day mRS outcomes in 80 consecutive patients with acute ischemic stroke from a single-center registry. We assessed the predictions of the model alongside those of 5 physicians (2 stroke neurologists and 3 neuroradiologists provided with the same imaging and clinical information). The primary analysis was the agreement between the ordinal mRS predictions of the model or physician and the ground truth, using the Gwet Agreement Coefficient. We also evaluated the ability to identify unfavorable outcomes (mRS >2) using the area under the curve, sensitivity, and specificity. Noninferiority analyses were undertaken using limits of 0.1 for the Gwet Agreement Coefficient and 0.05 for the area under the curve analysis. Prediction accuracy was also assessed using the mean absolute error, the percentage of predictions within ±1 category of the ground truth (±1 accuracy [ACC]), and the percentage of exact predictions (ACC).
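For readers unfamiliar with the Gwet Agreement Coefficient, a minimal two-rater sketch of the unweighted (nominal) AC1 is shown below. The study's ordinal mRS analysis would use a weighted variant, so this is illustrative only.

```python
from collections import Counter

def gwet_ac1(r1, r2):
    """Gwet's first-order agreement coefficient (AC1) for two raters,
    unweighted version: AC1 = (pa - pe) / (1 - pe)."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    # observed agreement
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    # average marginal proportion per category
    c1, c2 = Counter(r1), Counter(r2)
    pi = {k: (c1[k] + c2[k]) / (2 * n) for k in cats}
    # chance agreement under Gwet's model
    q = len(cats)
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1) if q > 1 else 0.0
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's kappa, AC1's chance-agreement term stays well behaved when category prevalences are highly skewed, which is one reason it is preferred for clinical rating comparisons.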

RESULTS: To predict the specific mRS score, the DLPD yielded a Gwet Agreement Coefficient score of 0.79 (95% CI, 0.71-0.86), surpassing the physicians' score of 0.76 (95% CI, 0.67-0.84), and was noninferior to the readers (P < .001). For identifying unfavorable outcome, the model achieved an area under the curve of 0.81 (95% CI, 0.72-0.89), again noninferior to the readers' area under the curve of 0.79 (95% CI, 0.69-0.87) (P < .005). The mean absolute error, ±1ACC, and ACC were 0.89, 81%, and 36% for the DLPD.

CONCLUSIONS: A deep learning method using acute clinical and imaging data for long-term functional outcome prediction in patients with acute ischemic stroke, the DLPD, was noninferior to that of clinical readers.

PMID:38331959 | DOI:10.3174/ajnr.A8140

Categories: Literature Watch

A deep learning framework for non-functional requirement classification

Thu, 2024-02-08 06:00

Sci Rep. 2024 Feb 8;14(1):3216. doi: 10.1038/s41598-024-52802-0.

ABSTRACT

Analyzing, identifying, and classifying nonfunctional requirements (NFRs) from requirement documents is time-consuming and challenging. Machine-learning-based approaches have been proposed to minimize analysts' effort, labor, and stress. However, the traditional supervised machine learning approach necessitates manual feature extraction, which is time-consuming. This study presents a novel deep-learning framework for NFR classification to overcome these limitations. The framework leverages a deeper architecture that naturally captures feature structures, possesses enhanced representational power, and efficiently captures a broader context than shallower structures. To evaluate the effectiveness of the proposed method, an experiment was conducted on two widely used datasets encompassing 914 NFR instances. Performance analysis was performed on the applied models, and the results were evaluated using various metrics. Notably, the DReqANN model outperforms the other models in classifying NFRs, achieving precision between 81% and 99.8%, recall between 74% and 89%, and F1-score between 83% and 89%. These results highlight the efficacy of the proposed deep learning framework in addressing NFR classification tasks, showcasing its potential for advancing the field of NFR analysis and classification.

PMID:38331920 | DOI:10.1038/s41598-024-52802-0

Categories: Literature Watch

Prediction of emergency department revisits among child and youth mental health outpatients using deep learning techniques

Thu, 2024-02-08 06:00

BMC Med Inform Decis Mak. 2024 Feb 8;24(1):42. doi: 10.1186/s12911-024-02450-1.

ABSTRACT

BACKGROUND: The proportion of Canadian youth seeking mental health support from an emergency department (ED) has risen in recent years. As EDs typically address urgent mental health crises, revisiting an ED may represent unmet mental health needs. Accurate ED revisit prediction could aid early intervention and ensure efficient healthcare resource allocation. We examine the potential accuracy and performance gains of graph neural network (GNN) machine learning models over recurrent neural network (RNN) models and baseline conventional machine learning and regression models for predicting ED revisits in electronic health record (EHR) data.

METHODS: This study used EHR data for children and youth aged 4-17 seeking services at McMaster Children's Hospital's Child and Youth Mental Health Program outpatient service to develop and evaluate GNN and RNN models to predict whether a child/youth with an ED visit had an ED revisit within 30 days. GNN and RNN models were developed and compared against conventional baseline models. Model performance for GNN, RNN, XGBoost, decision tree and logistic regression models was evaluated using F1 scores.

RESULTS: The GNN model outperformed the RNN model by an F1-score increase of 0.0511 and the best-performing conventional machine learning model by an F1-score increase of 0.0470. Precision, recall, receiver operating characteristic (ROC) curves, and positive and negative predictive values showed that the GNN model performed best, while the RNN model performed similarly to the XGBoost model. Performance increases were more noticeable for recall and negative predictive value than for precision and positive predictive value.
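The metrics compared above all derive from a binary confusion matrix; a minimal sketch with illustrative counts (the function name and counts are our own, not from the study):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score, and negative predictive value (NPV)
    from binary confusion-matrix counts."""
    precision = tp / (tp + fp)               # PPV
    recall = tp / (tp + fn)                  # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    npv = tn / (tn + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "npv": npv}
```

Because F1 balances precision and recall, a 0.05 F1 gain driven mainly by recall and NPV, as reported here, indicates fewer missed revisits rather than fewer false alarms.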

CONCLUSIONS: This study demonstrates the improved accuracy and potential utility of GNN models in predicting ED revisits among children and youth, although model performance may not be sufficient for clinical implementation. Given the improvements in recall and negative predictive value, GNN models should be further explored to develop algorithms that can inform clinical decision-making in ways that facilitate targeted interventions, optimize resource allocation, and improve outcomes for children and youth.

PMID:38331816 | DOI:10.1186/s12911-024-02450-1

Categories: Literature Watch

Multimodal Biomedical Image Segmentation using Multi-Dimensional U-Convolutional Neural Network

Thu, 2024-02-08 06:00

BMC Med Imaging. 2024 Feb 8;24(1):38. doi: 10.1186/s12880-024-01197-5.

ABSTRACT

Deep learning has recently achieved advances in the segmentation of medical images. In this regard, U-Net is the predominant deep neural network, and its architecture is the most prevalent in the medical imaging community. Experiments conducted on difficult datasets led us to conclude that the traditional U-Net framework is deficient in certain respects, despite its overall excellence in segmenting multimodal medical images. We therefore propose several modifications to the existing state-of-the-art U-Net model. The technical approach involves applying a multi-dimensional U-convolutional neural network to achieve accurate segmentation of multimodal biomedical images, enhancing precision and comprehensiveness in identifying and analyzing structures across diverse imaging modalities. As a result of these enhancements, we propose a novel framework called the Multi-Dimensional U-Convolutional Neural Network (MDU-CNN) as a potential successor to the U-Net framework. On a large set of multimodal medical images, we compared the proposed MDU-CNN to the classical U-Net. Improvements were small for clean images but substantial for difficult ones. We tested the model on five distinct datasets, each presenting unique challenges, and obtained performance improvements of 1.32%, 5.19%, 4.50%, 10.23%, and 0.87%, respectively.

PMID:38331800 | DOI:10.1186/s12880-024-01197-5

Categories: Literature Watch

Development and application of a multi-task oriented deep learning model for quantifying drivers of air pollutant variations: A case study in Taiyuan, China

Thu, 2024-02-08 06:00

Sci Total Environ. 2024 Feb 6:170777. doi: 10.1016/j.scitotenv.2024.170777. Online ahead of print.

ABSTRACT

Quantitative assessment of the drivers behind variations in six criteria pollutants, namely fine particulate matter (PM2.5), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), particulate matter (PM10), and carbon monoxide (CO), in a warming climate is critical for subsequent decision-making. Here, a novel hybrid multi-task CNN-BiLSTM-Attention model was proposed and applied to Taiyuan during 2015-2020 to synchronously and quickly quantify the impact of anthropogenic and meteorological factors on variations in the six criteria pollutants. Empirical results revealed that the residential and transportation sectors distinctly decreased SO2 by 25% and 22% and CO by 12% and 10%, respectively. Gradual downward trends for PM2.5, PM10, and NO2 were mainly ascribed to the stringent measures implemented in the transportation and power sectors as part of the Blue Sky Defense War, further reinforced by the COVID-19 pandemic. Nevertheless, temperature-dependent adverse meteorological effects (27%) and anthropogenic intervention (12%) jointly increased O3 by 39%. O3-driven pollution events may be inevitable, or even become more prominent, under climate warming. The industrial (5%) and transportation (6%) sectors were mainly responsible for the anthropogenic-driven increases in O3 and its precursor NO2, respectively. Synergistic reduction of precursors (VOCs and NOx) from the industrial and transportation sectors requires coordination with climate actions to mitigate temperature-dependent O3-driven pollution, thereby improving regional air quality. The proposed model can be applied flexibly in various regions to quantify the drivers of pollutant variations in a warming climate, offering valuable insights for improving regional air quality in the near future.

PMID:38331278 | DOI:10.1016/j.scitotenv.2024.170777

Categories: Literature Watch

EHR-BERT: A BERT-based model for effective anomaly detection in electronic health records

Thu, 2024-02-08 06:00

J Biomed Inform. 2024 Feb 6:104605. doi: 10.1016/j.jbi.2024.104605. Online ahead of print.

ABSTRACT

OBJECTIVE: Physicians and clinicians rely on data contained in electronic health records (EHRs), as recorded by health information technology (HIT), to make informed decisions about their patients. The reliability of HIT systems in this regard is critical to patient safety. Consequently, better tools are needed to monitor the performance of HIT systems for potential hazards that could compromise the collected EHRs and, in turn, patient safety. In this paper, we propose a new framework for detecting anomalies in EHRs using sequences of clinical events. This framework, EHR-Bidirectional Encoder Representations from Transformers (EHR-BERT), is motivated by gaps in existing deep-learning-based methods, including high false negatives, sub-optimal accuracy, high computational cost, and the risk of information loss. EHR-BERT is rooted in the BERT architecture and is meticulously tailored to navigate these hurdles, thereby enhancing anomaly detection in EHRs for healthcare applications.

METHODS: The EHR-BERT framework was designed using the Sequential Masked Token Prediction (SMTP) method. This approach treats EHRs as natural language sentences and iteratively masks input tokens during both training and prediction stages. This method facilitates the learning of EHR sequence patterns in both directions for each event and identifies anomalies based on deviations from the normal execution models trained on EHR sequences.
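The iterative masking described above can be sketched as follows. This is an illustrative reading of Sequential Masked Token Prediction in which each event in a sequence is masked in turn; the function name and the exact procedure are our assumptions, not the authors' code.

```python
def smtp_examples(events, mask_token="[MASK]"):
    """Generate masked training examples in the spirit of Sequential
    Masked Token Prediction (SMTP): each clinical event in an EHR
    sequence is masked in turn, and a bidirectional model is trained to
    predict it from the surrounding context in both directions.
    Returns (masked_sequence, masked_index, target) triples."""
    examples = []
    for i, target in enumerate(events):
        masked = list(events)
        masked[i] = mask_token
        examples.append((masked, i, target))
    return examples
```

At detection time, events whose predicted probability under the trained model falls far below that of the normal execution sequences would be flagged as anomalous.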

RESULTS: Extensive experiments on large EHR datasets across various medical domains demonstrate that EHR-BERT markedly improves upon existing models. It significantly reduces the number of false positives and enhances the detection rate, thus bolstering the reliability of anomaly detection in electronic health records. This improvement is attributed to the model's ability to minimize information loss and maximize data utilization effectively.

CONCLUSION: EHR-BERT showcases immense potential in decreasing medical errors related to anomalous clinical events, positioning itself as an indispensable asset for enhancing patient safety and the overall standard of healthcare services. The framework effectively overcomes the drawbacks of earlier models, making it a promising solution for healthcare professionals to ensure the reliability and quality of health data.

PMID:38331082 | DOI:10.1016/j.jbi.2024.104605

Categories: Literature Watch

Extracting adverse drug events from clinical Notes: A systematic review of approaches used

Thu, 2024-02-08 06:00

J Biomed Inform. 2024 Feb 6:104603. doi: 10.1016/j.jbi.2024.104603. Online ahead of print.

ABSTRACT

BACKGROUND: An adverse drug event (ADE) is any unfavorable effect that occurs due to the use of a drug. Extracting ADEs from unstructured clinical notes is essential to biomedical text extraction research because it helps with pharmacovigilance and patient medication studies.

OBJECTIVE: From the considerable amount of clinical narrative text, natural language processing (NLP) researchers have developed methods for extracting ADEs and their related attributes. This work presents a systematic review of current methods.

METHODOLOGY: Two biomedical databases, PubMed and Medline, were searched from June 2022 until December 2023 for publications relevant to this review. Similarly, we searched the multidisciplinary databases IEEE Xplore, Scopus, ScienceDirect, and the ACL Anthology. We adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement guidelines and recommendations for reporting systematic reviews. Initially, we obtained 5,537 articles published between 2015 and 2023 from the various databases. Based on predefined inclusion and exclusion criteria, 100 publications underwent full-text review, of which 82 were included in our analysis.

RESULTS: We determined the general pattern for extracting ADEs from clinical notes, with named entity recognition (NER) and relation extraction (RE) being the two tasks considered. Researchers who tackled both NER and RE simultaneously approached ADE extraction as a "pipeline extraction" problem (n = 22), a "joint task extraction" problem (n = 7), or a "multi-task learning" problem (n = 6), while others tackled only NER (n = 27) or only RE (n = 20). We further grouped the reviewed studies by data extraction approach: rule-based (n = 8), machine learning (n = 11), deep learning (n = 32), comparison of two or more approaches (n = 11), hybrid (n = 12), and large language models (n = 8). The most used datasets are MADE 1.0, TAC 2017, and n2c2 2018.

CONCLUSION: Extracting ADEs is crucial, especially for pharmacovigilance studies and patient medications. This survey showcases advances in ADE extraction research, approaches, datasets, and state-of-the-art performance in them. Challenges and future research directions are highlighted. We hope this review will guide researchers in gaining background knowledge and developing more innovative ways to address the challenges.

PMID:38331081 | DOI:10.1016/j.jbi.2024.104603

Categories: Literature Watch

Automatic segmentation of hepatocellular carcinoma on dynamic contrast-enhanced MRI based on deep learning

Thu, 2024-02-08 06:00

Phys Med Biol. 2024 Feb 8. doi: 10.1088/1361-6560/ad2790. Online ahead of print.

ABSTRACT

Precise hepatocellular carcinoma (HCC) detection is crucial for clinical management. While studies have focused on CT-based automatic algorithms, research on automatic detection based on dynamic contrast-enhanced (DCE) MRI is rare. This study aims to develop an automatic detection and segmentation deep learning model for HCC using DCE.

Approach: DCE images acquired from 2016 to 2021 were retrospectively collected. Then, 382 patients (301 male; 81 female) with 466 pathologically confirmed lesions were included and divided into an 80% training-validation set and a 20% independent test set. For external validation, 51 patients (42 male; 9 female) from another hospital, scanned from 2018 to 2021, were included. The U-Net architecture was modified to accommodate multi-phasic DCE input. The model was trained on the training-validation set using five-fold cross-validation and further evaluated on the independent test set using comprehensive segmentation and detection metrics. The proposed automatic segmentation model consisted of five main steps: phase registration, automatic liver region extraction using a pre-trained model, automatic HCC lesion segmentation using the multi-phasic deep learning model, ensembling of the five-fold predictions, and post-processing using connected component analysis to refine predictions and eliminate false positives.

Main results: The proposed model achieved a mean dice similarity coefficient (DSC) of 0.81 ± 0.11, a sensitivity of 94.41 ± 15.50%, a precision of 94.19 ± 17.32%, and 0.14 ± 0.48 false positive lesions per patient in the independent test set. The model detected 88% (80/91) of HCC lesions at DSC > 0.5, and the DSC per tumor was 0.80 ± 0.13. In the external set, the model detected 92% (58/62) of lesions with 0.12 ± 0.33 false positives per patient, and the DSC per tumor was 0.75 ± 0.10.

Significance: This study developed an automatic detection and segmentation deep learning model for HCC using DCE, which yielded promising post-processed results in accurately identifying and delineating HCC lesions.
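The connected-component post-processing step can be sketched for a 2-D slice as follows. The 4-connectivity and the minimum-size threshold are illustrative assumptions, not the paper's settings, and the function name is our own.

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_size=50):
    """Connected-component filtering on a 2-D binary mask: drop
    components (4-connectivity) smaller than `min_size` pixels, a common
    way to suppress small false-positive detections."""
    mask = np.asarray(mask, dtype=bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                # breadth-first search to collect one component
                comp, q = [], deque([(si, sj)])
                seen[si, sj] = True
                while q:
                    i, j = q.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) >= min_size:
                    for i, j in comp:
                        out[i, j] = True
    return out
```

In a 3-D volumetric pipeline, the same idea applies with 6- or 26-connectivity over voxels, typically via a library routine rather than an explicit search.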

PMID:38330492 | DOI:10.1088/1361-6560/ad2790

Categories: Literature Watch

Recovery of the spatially-variant deformations in dual-panel PET reconstructions using deep-learning

Thu, 2024-02-08 06:00

Phys Med Biol. 2024 Feb 8. doi: 10.1088/1361-6560/ad278e. Online ahead of print.

ABSTRACT

Dual-panel PET systems, such as the Breast-PET (B-PET) scanner, exhibit strong asymmetric and anisotropic spatially-variant deformations in the reconstructed images due to the limited-angle data and strong depth-of-interaction effects for the oblique LORs inherent in such systems. In our previous work, we studied TOF effects and image-based spatially-variant PSF resolution models within dual-panel PET reconstruction to reduce these deformations. The application of PSF-based models led to better and more uniform quantification of small lesions across the field of view. However, the ability of such a model to correct PSF deformation is limited to small objects; large object deformations caused by the limited-angle reconstruction cannot be corrected by PSF modeling alone. In this work, we investigate the ability of deep-learning (DL) networks to recover such strong spatially-variant image deformations, first using simulated PSF deformations in the image space of a generic dual-panel PET system, and then using simulated and acquired phantom reconstructions from the dual-panel B-PET system developed in our lab at the University of Pennsylvania. For the studies using real B-PET data, the network was trained on synthetic data sets providing ground truth for objects resembling the experimentally acquired phantoms, on which the network's deformation corrections were then tested. The synthetic and acquired limited-angle B-PET data were reconstructed using DIRECT-RAMLA, and these reconstructions were used as the network inputs. Our results demonstrate that DL approaches can significantly eliminate the deformations of limited-angle systems and improve their quantitative performance.

PMID:38330448 | DOI:10.1088/1361-6560/ad278e

Categories: Literature Watch

DP2LM: leveraging deep learning approach for estimation and hypothesis testing on mediation effects with high-dimensional mediators and complex confounders

Thu, 2024-02-08 06:00

Biostatistics. 2024 Feb 8:kxad037. doi: 10.1093/biostatistics/kxad037. Online ahead of print.

ABSTRACT

Traditional linear mediation analysis has inherent limitations when it comes to handling high-dimensional mediators. In particular, accurately estimating and rigorously inferring mediation effects are challenging, primarily due to the intertwined nature of the mediator selection issue. Despite recent developments, the existing methods are inadequate for addressing the complex relationships introduced by confounders. To tackle these challenges, we propose a novel approach called DP2LM (Deep neural network-based Penalized Partially Linear Mediation). This approach incorporates deep neural network techniques to account for nonlinear effects in confounders and utilizes the penalized partially linear model to accommodate high dimensionality. Unlike most existing works that concentrate on mediator selection, our method prioritizes estimation and inference on mediation effects. Specifically, we develop test procedures for testing the direct and indirect mediation effects. Theoretical analysis shows that the tests maintain the Type-I error rate. In simulation studies, DP2LM demonstrates its superior performance as a modeling tool for complex data, outperforming existing approaches in a wide range of settings and providing reliable estimation and inference in scenarios involving a considerable number of mediators. Further, we apply DP2LM to investigate the mediation effect of DNA methylation on cortisol stress reactivity in individuals who experienced childhood trauma, uncovering new insights through a comprehensive analysis.
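The partially linear structure at the heart of DP2LM can be illustrated with a small simulation (hypothetical coefficients and confounder functions; the paper's estimation and testing machinery is not reproduced here): the treatment and mediator enter linearly, the nonlinear confounder terms are left to a flexible learner such as a neural network, and the total causal effect decomposes into direct plus indirect parts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical partially linear mediation model:
#   M = a*T + h(X) + noise        (mediator equation)
#   Y = c*T + b*M + g(X) + noise  (outcome equation)
# Direct effect = c, indirect effect = a*b. DP2LM's idea is to let a
# deep network absorb the nonlinear confounder terms h and g while the
# T and M coefficients stay linear and interpretable.
a, b, c = 1.5, 0.8, 0.5
h = lambda x: np.sin(x)        # nonlinear confounder effect on M
g = lambda x: 0.3 * x ** 2     # nonlinear confounder effect on Y

def mean_outcome(t, n=200_000):
    x = rng.normal(size=n)
    m = a * t + h(x) + rng.normal(size=n)
    y = c * t + b * m + g(x) + rng.normal(size=n)
    return y.mean()

# Total effect of T on Y, by simulation; it should match c + a*b = 1.7
total = mean_outcome(1.0) - mean_outcome(0.0)
```

With high-dimensional mediators, the single coefficient b becomes a penalized coefficient vector, which is where the selection and inference difficulties the abstract describes arise.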

PMID:38330064 | DOI:10.1093/biostatistics/kxad037

Categories: Literature Watch

Deep Learning with Physics-embedded Neural Network for Full Waveform Ultrasonic Brain Imaging

Thu, 2024-02-08 06:00

IEEE Trans Med Imaging. 2024 Feb 8;PP. doi: 10.1109/TMI.2024.3363144. Online ahead of print.

ABSTRACT

The convenience, safety, and affordability of ultrasound imaging make it a vital non-invasive diagnostic technique for examining soft tissues. However, significant differences in acoustic impedance between the skull and soft tissues hinder the successful application of traditional ultrasound for brain imaging. In this study, we propose a physics-embedded neural network with deep-learning-based full waveform inversion (PEN-FWI), which can achieve reliable quantitative imaging of brain tissues. The network consists of two fundamental components: a forward convolutional neural network (FCNN) and an inversion sub-neural network (ISNN). The FCNN learns the nonlinear mapping from the brain model to the wavefield, replacing the tedious wavefield calculation based on the finite-difference method. The ISNN implements the mapping from the wavefield back to the model. PEN-FWI comprises three iterative steps, each embedding the FCNN into the ISNN, ultimately achieving tomography from wavefield to brain model. Simulation and laboratory tests indicate that PEN-FWI can produce high-quality imaging of the skull and soft tissues, even starting from a homogeneous water model. PEN-FWI achieves excellent imaging of clot models with a constant uniform velocity distribution, a random Gaussian velocity distribution, and irregularly shaped, randomly distributed velocities. Robust differentiation is also achieved for brain slices of various tissues and skulls, resulting in high-quality imaging. The imaging time for a horizontal cross-sectional image of the brain is only 1.13 seconds. This algorithm can effectively promote ultrasound-based brain tomography and provide feasible solutions in other fields.
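For context, the sketch below is a minimal 1-D version of the finite-difference forward model whose role the FCNN takes over: a second-order explicit scheme for the acoustic wave equation u_tt = v(x)^2 u_xx, recording a trace at a receiver. All parameters are toy values; the paper's solver is 2-D and far more elaborate.

```python
import numpy as np

def fd_wavefield(velocity, src, dt=0.5, dx=1.0, nt=300):
    """Explicit second-order finite-difference solution of the 1-D
    acoustic wave equation with a point source near the left edge and
    a receiver near the right edge (toy sketch; stability requires
    v*dt/dx <= 1)."""
    nx = len(velocity)
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    c2 = (np.asarray(velocity) * dt / dx) ** 2  # squared CFL factor per cell
    record = np.zeros(nt)
    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]  # discrete Laplacian
        u_next = 2 * u - u_prev + c2 * lap
        u_next[1] += dt ** 2 * src(it * dt)       # inject source term
        u_prev, u = u, u_next
        record[it] = u[-2]                        # receiver trace
    return record
```

Full waveform inversion repeatedly runs this kind of forward model inside an optimization loop, which is why replacing it with a trained network can cut imaging time so sharply.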

PMID:38329866 | DOI:10.1109/TMI.2024.3363144

Categories: Literature Watch

Self-Supervised Deep Blind Video Super-Resolution

Thu, 2024-02-08 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Feb 8;PP. doi: 10.1109/TPAMI.2024.3361168. Online ahead of print.

ABSTRACT

Existing deep learning-based video super-resolution (SR) methods usually depend on the supervised learning approach, where the training data is typically generated by a blurring operation with known or predefined kernels (e.g., a bicubic kernel) followed by a decimation operation. However, this does not hold for real applications, as the degradation process is complex and cannot be well approximated by these ideal cases. Moreover, obtaining high-resolution (HR) videos and the corresponding low-resolution (LR) ones in real-world scenarios is difficult. To overcome these problems, we propose a self-supervised learning method to solve the blind video SR problem, which simultaneously estimates blur kernels and HR videos from the LR videos. As directly using LR videos as supervision usually leads to trivial solutions, we develop a simple and effective method to generate auxiliary paired data from the original LR videos according to the image formation model of video SR, so that the networks can be better constrained by the generated paired data for both blur kernel estimation and latent HR video restoration. In addition, we introduce an optical flow estimation module to exploit the information from adjacent frames for HR video restoration. Experiments show that our method performs favorably against state-of-the-art methods on benchmarks and real-world videos.
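The blur-then-decimate image formation assumed by blind SR, which applied to LR frames yields the kind of auxiliary paired data described above, can be sketched as follows (kernel size, sigma, and scale factor are illustrative choices, not the paper's):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.2):
    """Normalized 2-D Gaussian blur kernel (stand-in for the unknown
    kernel a blind-SR method estimates)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / sigma ** 2)
    return k / k.sum()

def degrade(frame, kernel, scale=2):
    """Image formation assumed by blind video SR: LR = (HR * k) downsampled
    by `scale`. Applying the same operator to an LR frame yields an
    auxiliary (LR, LR-downsampled) training pair."""
    h, w = frame.shape
    ksz = kernel.shape[0]
    pad = ksz // 2
    padded = np.pad(frame, pad, mode='edge')
    blurred = np.zeros_like(frame, dtype=float)
    for i in range(ksz):          # direct 2-D convolution, shift-and-add
        for j in range(ksz):
            blurred += kernel[i, j] * padded[i:i + h, j:j + w]
    return blurred[::scale, ::scale]
```

Training on such self-generated pairs constrains the network without ever requiring ground-truth HR frames.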

PMID:38329850 | DOI:10.1109/TPAMI.2024.3361168

Categories: Literature Watch

Inflammatory Knee Synovitis: Evaluation of an Accelerated FLAIR Sequence Compared With Standard Contrast-Enhanced Imaging

Thu, 2024-02-08 06:00

Invest Radiol. 2024 Feb 8. doi: 10.1097/RLI.0000000000001065. Online ahead of print.

ABSTRACT

OBJECTIVES: The aim of this study was to assess the diagnostic value and accuracy of a deep learning (DL)-accelerated fluid attenuated inversion recovery (FLAIR) sequence with fat saturation (FS) in patients with inflammatory synovitis of the knee.

MATERIALS AND METHODS: Patients with suspected knee synovitis were retrospectively included between January and September 2023. All patients underwent a 3 T knee magnetic resonance imaging including a DL-accelerated noncontrast FLAIR FS sequence (acquisition time: 1 minute 38 seconds) and a contrast-enhanced (CE) T1-weighted FS sequence (acquisition time: 4 minutes 50 seconds), which served as reference standard. All knees were scored by 2 radiologists using the semiquantitative modified knee synovitis score, effusion synovitis score, and Hoffa inflammation score. Diagnostic confidence, image quality, and image artifacts were rated on separate Likert scales. Wilcoxon signed rank test was used to compare the semiquantitative scores. Interreader and intrareader reproducibility were calculated using Cohen κ.
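For reference, Cohen κ, the agreement statistic used above, corrects observed rater agreement for agreement expected by chance. A toy unweighted implementation is sketched below (the study may well have used a weighted variant, which is common for ordinal semiquantitative scores):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical scores:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                         # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)   # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)
```

Values of 0.81-1.00 are conventionally read as "almost perfect" agreement, consistent with the range reported in the results below.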

RESULTS: Fifty-five patients (mean age, 52 ± 17 years; 28 females) were included in the study. Twenty-seven patients (49%) had mild to moderate synovitis (synovitis score 6-13), and 17 patients (31%) had severe synovitis (synovitis score >14). No signs of synovitis were detected in 11 patients (20%) (synovitis score <5). Semiquantitative assessment of the whole knee synovitis score showed no significant difference between the DL-accelerated FLAIR sequence and the CE T1-weighted sequence (mean FLAIR score: 10.69 ± 8.83; T1 turbo spin-echo FS: 10.74 ± 10.32; P = 0.521). Both interreader and intrareader reproducibility were excellent (Cohen κ range: 0.82-0.96).

CONCLUSIONS: Assessment of inflammatory knee synovitis using a DL-accelerated noncontrast FLAIR FS sequence was feasible and equivalent to CE T1-weighted FS imaging.

PMID:38329824 | DOI:10.1097/RLI.0000000000001065

Categories: Literature Watch

Self-Supervised Deep Learning-The Next Frontier

Thu, 2024-02-08 06:00

JAMA Ophthalmol. 2024 Feb 8. doi: 10.1001/jamaophthalmol.2023.6650. Online ahead of print.

NO ABSTRACT

PMID:38329770 | DOI:10.1001/jamaophthalmol.2023.6650

Categories: Literature Watch
