Deep learning

TIRPnet: Risk prediction of traditional Chinese medicine ingredients based on a deep neural network

Mon, 2024-02-05 06:00

J Ethnopharmacol. 2024 Feb 3:117860. doi: 10.1016/j.jep.2024.117860. Online ahead of print.

ABSTRACT

ETHNOPHARMACOLOGICAL RELEVANCE: Traditional Chinese medicine (TCM) has a history of over 3000 years of medical practice. Due to the complex ingredients and unclear pharmacological mechanism of TCM, it is very difficult to predict its risks. With the increase in the number and severity of spontaneous reports of adverse drug reactions (ADRs) of TCM, its safety has received widespread attention.

AIM OF THE STUDY: In this study, we proposed a framework based on deep learning to predict the probability of adverse reactions caused by TCM ingredients and validated the model using real-world data.

MATERIALS AND METHODS: The spontaneous reporting data from Jiangsu Province of China was selected as the research data, which included 72,561 ADR reports of TCMs. All the ingredients of these TCMs were collected from a medical website and correlated with the corresponding ADRs. Then, a risk prediction model, named TIRPnet, was constructed based on a deep neural network (DNN). Based on one-hot encoded data, our model achieved optimal performance after fine-tuning several hyperparameters. The ten most commonly used TCM ingredients and their ADRs were collected as a test set to serve as objective criteria for evaluating model performance.
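
The abstract reports a 7-layer DNN over one-hot encoded ingredient data but gives no architectural details, so the following is only a minimal sketch under assumed layer widths and class counts; names such as TIRPnetSketch, N_INGREDIENTS, and N_ADRS are hypothetical, not the published TIRPnet configuration.

```python
# Minimal sketch: 7 linear layers over one-hot ingredient vectors -> ADR probabilities.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

N_INGREDIENTS = 2000   # assumed one-hot input dimension
N_ADRS = 100           # assumed number of ADR classes

class TIRPnetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_INGREDIENTS, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, N_ADRS),        # 7 linear layers in total
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # multi-label ADR probabilities

model = TIRPnetSketch()
x = torch.zeros(1, N_INGREDIENTS)
x[0, [3, 42, 577]] = 1.0                  # multi-hot ingredient encoding
print(model(x).shape)                     # torch.Size([1, 100])
```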

RESULTS: TIRPnet was constructed as a 7-layer DNN. The experimental results showed that TIRPnet performed well on all indicators, with a sensitivity of 0.950, a specificity of 0.995, an accuracy of 0.994, a precision of 0.708, and an F1 score of 0.811.
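
For reference, the reported indicators follow directly from a binary confusion matrix; the sketch below shows the standard formulas with placeholder counts, not the study's data.

```python
# Standard binary classification metrics from confusion-matrix counts (placeholders).
def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                     # recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, precision, f1

print(metrics(tp=95, fp=39, tn=9000, fn=5))
```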

CONCLUSIONS: By learning from a large number of TCM-related spontaneous reports, the proposed TIRPnet can predict the ADRs of a single TCM ingredient, which can help doctors design safe prescriptions and provides technical support for the pharmacovigilance of TCM.

PMID:38316222 | DOI:10.1016/j.jep.2024.117860

Categories: Literature Watch

Variability of the femoral mechanical-anatomical axis angle and its implications in primary and revision total knee arthroplasty

Mon, 2024-02-05 06:00

Bone Jt Open. 2024 Feb 6;5(2):101-108. doi: 10.1302/2633-1462.52.BJO-2023-0056.R1.

ABSTRACT

AIMS: Distal femoral resection in conventional total knee arthroplasty (TKA) utilizes an intramedullary guide to determine coronal alignment, commonly planned for 5° of valgus. However, a standard 5° resection angle may contribute to malalignment in patients with variability in the femoral anatomical and mechanical axis angle. The purpose of the study was to leverage deep learning (DL) to measure the femoral mechanical-anatomical axis angle (FMAA) in a heterogeneous cohort.

METHODS: Patients with full-limb radiographs from the Osteoarthritis Initiative were included. A DL workflow was created to measure the FMAA and validated against human measurements. To reflect potential intramedullary guide placement during manual TKA, two different FMAAs were calculated, using either a line approximating the entire diaphyseal shaft or a line connecting the apex of the femoral intercondylar sulcus to the centre of the diaphysis. The proportion of FMAAs outside a range of 5.0° (SD 2.0°) was calculated for both definitions, and FMAA was compared using univariate analyses across sex, BMI, knee alignment, and femur length.
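
The FMAA itself is simply the angle between two landmark-defined lines; the sketch below shows that calculation on made-up 2D coordinates and is not the authors' DL workflow.

```python
# Angle between the femoral mechanical and anatomical axes from 2D landmarks (illustrative).
import numpy as np

def axis_angle(p1, p2, q1, q2):
    """Unsigned angle (degrees) between line p1->p2 and line q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# mechanical axis: femoral head centre -> knee centre (made-up pixel coordinates)
# anatomical axis (definition one): line approximating the diaphyseal shaft
fmaa = axis_angle((512, 80), (530, 900), (500, 250), (581, 880))
print(f"FMAA ≈ {fmaa:.1f}°")   # roughly 6° for these placeholder points
```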

RESULTS: The algorithm measured 1,078 radiographs at a rate of 12.6 s/image (2,156 unique measurements in 3.8 hours). There was no significant difference or bias between reader and algorithm measurements for the FMAA (p = 0.130 to 0.563). The FMAA was 6.3° (SD 1.0°; 25% outside range of 5.0° (SD 2.0°)) using definition one and 4.6° (SD 1.3°; 13% outside range of 5.0° (SD 2.0°)) using definition two. Differences between males and females were observed using definition two (males more valgus; p < 0.001).

CONCLUSION: We developed a rapid and accurate DL tool to quantify the FMAA. Considerable variation with different measurement approaches for the FMAA supports that patient-specific anatomy and surgeon-dependent technique must be accounted for when correcting for the FMAA using an intramedullary guide. The angle between the mechanical and anatomical axes of the femur fell outside the range of 5.0° (SD 2.0°) for nearly a quarter of patients.

PMID:38316146 | DOI:10.1302/2633-1462.52.BJO-2023-0056.R1

Categories: Literature Watch

Comparison of Sysmex XN-V body fluid mode and deep-learning-based quantification with manual techniques for total nucleated cell count and differential count for equine bronchoalveolar lavage samples

Mon, 2024-02-05 06:00

BMC Vet Res. 2024 Feb 5;20(1):48. doi: 10.1186/s12917-024-03884-5.

ABSTRACT

BACKGROUND: Bronchoalveolar lavage (BAL) is a diagnostic method for the assessment of the lower respiratory airway health status in horses. Differential cell count and sometimes also total nucleated cell count (TNCC) are routinely measured by time-consuming manual methods, while faster automated methods exist. The aims of this study were to compare: 1) the Sysmex XN-V body fluid (BF) mode with the manual techniques for TNCC and two-part differential into mononuclear and polymorphonuclear cells; 2) the Olympus VS200 slide scanner and its software-generated deep-learning-based algorithm with manual techniques for four-part differential cell count into alveolar macrophages, lymphocytes, neutrophils, and mast cells. The methods were compared in 69 clinical BAL samples.

RESULTS: Incorrect gating by the Sysmex BF mode was observed on many scattergrams; therefore, all samples were reanalyzed with manually set gates. For the TNCC, a proportional and systematic bias with a correlation of r = 0.79 was seen when comparing the Sysmex BF mode with the manual methods. For the two-part differential count, a mild constant and proportional bias, a very small mean difference with moderate limits of agreement, and correlations of r = 0.84 and 0.83 were seen when comparing the Sysmex BF mode with the manual methods. The Sysmex BF mode classified significantly more samples as abnormal based on the TNCC and the two-part differential compared to the manual method. When comparing the Olympus VS200 deep-learning-based algorithm with the manual methods for the four-part differential cell count, a very small bias in the regression analysis, a very small mean difference in the difference plot, and correlations of r = 0.85 to 0.92 were observed for all four cell categories. The Olympus VS200 deep-learning-based algorithm also showed better precision than the manual methods for the four-part differential cell count, especially with an increasing number of analyzed cells.
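
Method-comparison studies of this kind typically report a correlation together with a Bland-Altman bias and limits of agreement; the sketch below illustrates those statistics on synthetic values, not the study's BAL measurements.

```python
# Pearson correlation plus Bland-Altman bias and 95% limits of agreement (synthetic data).
import numpy as np
from scipy import stats

manual = np.array([120, 250, 310, 90, 400, 180], dtype=float)   # TNCC, manual count
sysmex = np.array([130, 240, 330, 85, 380, 200], dtype=float)   # TNCC, XN-V BF mode

r, _ = stats.pearsonr(manual, sysmex)
diff = sysmex - manual
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"r = {r:.2f}, bias = {bias:.1f}, 95% limits of agreement = {loa}")
```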

CONCLUSIONS: The Sysmex XN-V BF mode can be used for TNCC and two-part differential count measurements after reanalyzing the samples with manually set gates. The Olympus VS200 deep-learning-based algorithm correlates well with the manual methods while showing better precision, and it can be used for a four-part differential cell count.

PMID:38317167 | DOI:10.1186/s12917-024-03884-5

Categories: Literature Watch

A systematic analysis of deep learning in genomics and histopathology for precision oncology

Mon, 2024-02-05 06:00

BMC Med Genomics. 2024 Feb 5;17(1):48. doi: 10.1186/s12920-024-01796-9.

ABSTRACT

BACKGROUND: Digitized histopathological tissue slides and genomics profiling data are available for many patients with solid tumors. In the last 5 years, Deep Learning (DL) has been broadly used to extract clinically actionable information and biological knowledge from pathology slides and genomic data in cancer. In addition, a number of recent studies have introduced multimodal DL models designed to simultaneously process both images from pathology slides and genomic data as inputs. By comparing patterns from one data modality with those in another, multimodal DL models are capable of achieving higher performance compared to their unimodal counterparts. However, the application of these methodologies across various tumor entities and clinical scenarios lacks consistency.

METHODS: Here, we present a systematic survey of the academic literature from 2010 to November 2023, aiming to quantify the application of DL for pathology, genomics, and the combined use of both data types. After filtering 3048 publications, our search identified 534 relevant articles, which were then evaluated by application type, either basic (diagnosis, grading, subtyping) or advanced (mutation, drug response, and survival prediction), publication year, and the cancer tissue addressed.

RESULTS: Our analysis reveals a predominant application of DL in pathology compared to genomics. However, there is a notable surge in DL incorporation within both domains. Furthermore, while DL applied to pathology primarily targets the identification of histology-specific patterns in individual tissues, DL in genomics is more commonly used in a pan-cancer context. Multimodal DL, by contrast, remains a niche topic, as evidenced by a limited number of publications, primarily focusing on prognosis prediction.

CONCLUSION: In summary, our quantitative analysis indicates that DL not only has a well-established role in histopathology but is also being successfully integrated into both genomic and multimodal applications. In addition, there is considerable potential in multimodal DL for harnessing further advanced tasks, such as predicting drug response. Nevertheless, this review also underlines the need for further research to bridge the existing gaps in these fields.

PMID:38317154 | DOI:10.1186/s12920-024-01796-9

Categories: Literature Watch

DeepLocRNA: an interpretable deep learning model for predicting RNA subcellular localisation with domain-specific transfer-learning

Mon, 2024-02-05 06:00

Bioinformatics. 2024 Feb 5:btae065. doi: 10.1093/bioinformatics/btae065. Online ahead of print.

ABSTRACT

MOTIVATION: Accurate prediction of RNA subcellular localisation plays an important role in understanding cellular processes and functions. Although post-transcriptional processes are governed by trans-acting RNA binding proteins (RBPs) through interaction with cis-regulatory RNA motifs, current methods do not incorporate RBP-binding information.

RESULTS: In this paper, we propose DeepLocRNA, an interpretable deep-learning model that leverages a pre-trained multi-task RBP-binding prediction model to predict the subcellular localisation of RNA molecules via fine-tuning. We constructed DeepLocRNA using a comprehensive dataset with various RNA types and evaluated it on a held-out dataset. Our model achieved state-of-the-art performance in predicting RNA subcellular localisation for mRNA and miRNA. It also demonstrated strong generalization capabilities, performing well on both human and mouse RNA. Additionally, a motif analysis was performed to enhance the interpretability of the model, highlighting signal factors that contributed to the predictions. The proposed model provides general and powerful prediction abilities for different RNA types and species, offering valuable insights into the localisation patterns of RNA molecules and contributing to our understanding of cellular processes at the molecular level. A user-friendly web server is available at: https://biolib.com/KU/DeepLocRNA/.
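
A minimal sketch of the transfer-learning idea described: reuse a pre-trained multi-task RBP-binding encoder and fine-tune a small localisation head, with a lower learning rate on the encoder. The encoder here is a placeholder module, not the DeepLocRNA architecture or its weights.

```python
# Fine-tuning sketch: pre-trained encoder (stand-in) + new localisation head.
import torch
import torch.nn as nn

class PretrainedRBPEncoder(nn.Module):          # placeholder for the real pre-trained encoder
    def __init__(self, vocab=5, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=9, padding=4)

    def forward(self, seq_ids):                 # (batch, length) integer-encoded RNA
        h = self.embed(seq_ids).transpose(1, 2) # (batch, dim, length)
        return torch.relu(self.conv(h)).mean(dim=2)

class LocalisationHead(nn.Module):
    def __init__(self, dim=128, n_compartments=9):
        super().__init__()
        self.fc = nn.Linear(dim, n_compartments)

    def forward(self, h):
        return self.fc(h)                       # one logit per subcellular compartment

encoder = PretrainedRBPEncoder()                # would be loaded from RBP pre-training
head = LocalisationHead()
optimizer = torch.optim.Adam([                  # smaller LR for the pre-trained encoder
    {"params": encoder.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
logits = head(encoder(torch.randint(0, 5, (2, 200))))
print(logits.shape)                             # torch.Size([2, 9])
```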

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38317052 | DOI:10.1093/bioinformatics/btae065

Categories: Literature Watch

AGImpute: Imputation of scRNA-seq data based on a hybrid GAN with dropouts identification

Mon, 2024-02-05 06:00

Bioinformatics. 2024 Feb 5:btae068. doi: 10.1093/bioinformatics/btae068. Online ahead of print.

ABSTRACT

MOTIVATION: Dropout events bring challenges in analyzing single-cell RNA sequencing data as they introduce noise and distort the true distributions of gene expression profiles. Recent studies focus on estimating dropout probability and imputing dropout events by leveraging information from similar cells or genes. However, the number of dropout events differs across cells due to complex factors such as different sequencing protocols, cell types, and batch effects. These differences in dropout events are not fully considered when assessing the similarities between cells and genes, which compromises the reliability of downstream analysis.

RESULTS: This work proposes AGImpute, a hybrid Generative Adversarial Network with dropout identification for imputing single-cell RNA sequencing data. First, the numbers of dropout events in different cells in scRNA-seq data are differentially estimated using a dynamic threshold estimation strategy. Next, the identified dropout events are imputed by a hybrid deep learning model that combines an Autoencoder with a Generative Adversarial Network. To validate the efficiency of AGImpute, it was compared with seven state-of-the-art dropout imputation methods on two simulated datasets and seven real single-cell RNA sequencing datasets. The results show that AGImpute imputes the fewest dropout events among the compared methods. Moreover, AGImpute enhances the performance of downstream analysis, including clustering, identifying cell-specific marker genes, and inferring trajectories in the time-course dataset.
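
A heavily simplified sketch of the two-step idea, dropout identification followed by deep imputation, assuming a toy per-cell threshold rule and a plain autoencoder (the adversarial component is omitted); it is not AGImpute itself.

```python
# Step 1: flag likely dropout zeros per cell; Step 2: impute them with an autoencoder.
import numpy as np
import torch
import torch.nn as nn

expr = np.random.poisson(1.0, size=(50, 200)).astype(np.float32)   # cells x genes (toy data)

# 1) dynamic per-cell rule: cells with more zeros get more entries flagged as dropouts
zero_frac = (expr == 0).mean(axis=1, keepdims=True)
dropout_mask = (expr == 0) & (np.random.rand(*expr.shape) < zero_frac)

# 2) autoencoder imputation, trained only on entries not flagged as dropouts
ae = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 200), nn.ReLU())
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x = torch.from_numpy(expr)
keep = ~torch.from_numpy(dropout_mask)
for _ in range(200):
    loss = ((ae(x) - x)[keep] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

imputed = expr.copy()
imputed[dropout_mask] = ae(x).detach().numpy()[dropout_mask]
```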

AVAILABILITY: The source code can be obtained from https://github.com/xszhu-lab/AGImpute.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38317025 | DOI:10.1093/bioinformatics/btae068

Categories: Literature Watch

Deep learning-aided decision support for diagnosis of skin disease across skin tones

Mon, 2024-02-05 06:00

Nat Med. 2024 Feb 5. doi: 10.1038/s41591-023-02728-3. Online ahead of print.

ABSTRACT

Although advances in deep learning systems for image-based medical diagnosis demonstrate their potential to augment clinical decision-making, the effectiveness of physician-machine partnerships remains an open question, in part because physicians and algorithms are both susceptible to systematic errors, especially for diagnosis of underrepresented populations. Here we present results from a large-scale digital experiment involving board-certified dermatologists (n = 389) and primary-care physicians (n = 459) from 39 countries to evaluate the accuracy of diagnoses submitted by physicians in a store-and-forward teledermatology simulation. In this experiment, physicians were presented with 364 images spanning 46 skin diseases and asked to submit up to four differential diagnoses. Specialists and generalists achieved diagnostic accuracies of 38% and 19%, respectively, but both specialists and generalists were four percentage points less accurate for the diagnosis of images of dark skin as compared to light skin. Fair deep learning system decision support improved the diagnostic accuracy of both specialists and generalists by more than 33%, but exacerbated the gap in the diagnostic accuracy of generalists across skin tones. These results demonstrate that well-designed physician-machine partnerships can enhance the diagnostic accuracy of physicians, illustrating that success in improving overall diagnostic accuracy does not necessarily address bias.

PMID:38317019 | DOI:10.1038/s41591-023-02728-3

Categories: Literature Watch

Author Correction: COVID-19 infection segmentation using hybrid deep learning and image processing techniques

Mon, 2024-02-05 06:00

Sci Rep. 2024 Feb 5;14(1):2970. doi: 10.1038/s41598-024-53425-1.

NO ABSTRACT

PMID:38316974 | DOI:10.1038/s41598-024-53425-1

Categories: Literature Watch

Enhanced Thermal Boundary Conductance across GaN/SiC Interfaces with AlN Transition Layers

Mon, 2024-02-05 06:00

ACS Appl Mater Interfaces. 2024 Feb 5. doi: 10.1021/acsami.3c16905. Online ahead of print.

ABSTRACT

Heat dissipation plays a crucial role in the performance and reliability of high-power GaN-based electronics. While AlN transition layers are commonly employed in the heteroepitaxial growth of GaN-on-SiC substrates, concerns have been raised about their impact on thermal transport across GaN/SiC interfaces. In this study, we present experimental measurements of the thermal boundary conductance (TBC) across GaN/SiC interfaces with varying thicknesses of the AlN transition layer (ranging from 0 to 73 nm) at different temperatures. Our findings reveal that the addition of an AlN transition layer leads to a notable increase in the TBC of the GaN/SiC interface, particularly at elevated temperatures. Structural characterization techniques are employed to understand the influence of the AlN transition layer on the crystalline quality of the GaN layer and its potential effects on interfacial thermal transport. To gain further insights into the trend of TBC, we conduct molecular dynamics simulations using high-fidelity deep learning-based interatomic potentials, which reproduce the experimentally observed enhancement in TBC even for atomically perfect interfaces. These results suggest that the enhanced TBC facilitated by the AlN intermediate layer could result from a combination of improved crystalline quality at the interface and the "phonon bridge" effect provided by AlN that enhances the overlap between the vibrational spectra of GaN and SiC.

PMID:38315970 | DOI:10.1021/acsami.3c16905

Categories: Literature Watch

Detection method of organic light-emitting diodes based on small sample deep learning

Mon, 2024-02-05 06:00

PLoS One. 2024 Feb 5;19(2):e0297642. doi: 10.1371/journal.pone.0297642. eCollection 2024.

ABSTRACT

To address the surface inspection problems of low accuracy, low precision, and lack of automation in the production of late-model display panels, a small-sample deep learning detection model for organic light-emitting diodes, SmartMuraDetection, is proposed. First, to address the difficulty of detecting defects with low surface contrast, a gradient boundary enhancement module is designed to automatically identify and enhance the gray-level difference between defects and the background. Next, to compensate for insufficient small-sample datasets, a Poisson fusion image enhancement module is designed for sample augmentation. A TinyDetection model adapted to small-scale targets is then constructed to improve the detection accuracy of small-scale defects. Finally, a SEMUMaxMin quantization module is proposed as a post-processing step for the result images produced by network inference, and accurate defect data are obtained by applying a threshold filter. The experiment uses 334 sample images. Compared with the traditional algorithm, the proposed model improves the surface defect detection accuracy by 85%. When used for mass-production evaluation at an actual display panel production site, the method reaches a surface defect detection accuracy of 96%, which meets the mass-production requirements of the detection equipment in this process section.
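
Poisson fusion for sample augmentation can be approximated with OpenCV's seamlessClone; the sketch below is an illustration with placeholder file names and coordinates, not the paper's augmentation module.

```python
# Poisson-fusion style augmentation: blend a cropped defect patch into a clean panel image.
import cv2
import numpy as np

panel = cv2.imread("clean_panel.png")          # defect-free background image (placeholder path)
defect = cv2.imread("defect_patch.png")        # small cropped defect sample (placeholder path)

mask = 255 * np.ones(defect.shape[:2], dtype=np.uint8)     # blend the whole patch
center = (panel.shape[1] // 2, panel.shape[0] // 2)        # paste location (x, y)

augmented = cv2.seamlessClone(defect, panel, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("augmented_sample.png", augmented)
```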

PMID:38315697 | DOI:10.1371/journal.pone.0297642

Categories: Literature Watch

Decision Curve Analysis of In-Hospital Mortality Prediction Models: The Relative Value of Pre- and Intraoperative Data For Decision-Making

Mon, 2024-02-05 06:00

Anesth Analg. 2024 Feb 5. doi: 10.1213/ANE.0000000000006874. Online ahead of print.

ABSTRACT

BACKGROUND: Clinical prediction modeling plays a pivotal part in modern clinical care, particularly in predicting the risk of in-hospital mortality. Recent modeling efforts have focused on leveraging intraoperative data sources to improve model performance. However, the individual and collective benefit of pre- and intraoperative data for clinical decision-making remains unknown. We hypothesized that pre- and intraoperative predictors contribute equally to the net benefit in a decision curve analysis (DCA) of in-hospital mortality prediction models that include pre- and intraoperative predictors.

METHODS: Data from the VitalDB database featuring a subcohort of 6043 patients were used. A total of 141 predictors for in-hospital mortality were grouped into preoperative (demographics, intervention characteristics, and laboratory measurements) and intraoperative (laboratory and monitor data, drugs, and fluids) data. Prediction models using either preoperative, intraoperative, or all data were developed with multiple methods (logistic regression, neural network, random forest, gradient boosting machine, and a stacked learner). Predictive performance was evaluated by the area under the receiver-operating characteristic curve (AUROC) and under the precision-recall curve (AUPRC). Clinical utility was examined with a DCA in the predefined risk preference range (denoted by so-called treatment threshold probabilities) between 0% and 20%.
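
Net benefit in a DCA is computed as TP/n − FP/n × pt/(1 − pt) at each treatment threshold probability pt; the sketch below applies this formula to synthetic labels and predictions and compares it with the treat-all strategy. It is not the study's models or the VitalDB data.

```python
# Decision curve analysis: net benefit across treatment thresholds (synthetic example).
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * (threshold / (1 - threshold))

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.09, size=6043)      # ~9% in-hospital mortality, as in the cohort
y_prob = np.clip(0.09 + 0.3 * (y_true - 0.09) + rng.normal(0, 0.05, y_true.size), 0, 1)

for t in (0.05, 0.10, 0.20):
    nb_model = net_benefit(y_true, y_prob, t)
    nb_all = y_true.mean() - (1 - y_true.mean()) * (t / (1 - t))   # "treat all" strategy
    print(f"threshold {t:.0%}: model NB = {nb_model:.4f}, treat-all NB = {nb_all:.4f}")
```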

RESULTS: AUROC performance of the prediction models ranged from 0.53 to 0.78. AUPRC values ranged from 0.02 to 0.25 (compared to the incidence of 0.09 in our dataset) and high AUPRC values resulted from prediction models based on preoperative laboratory values. A DCA of pre- and intraoperative prediction models highlighted that preoperative data provide the largest overall benefit for decision-making, whereas intraoperative values provide only limited benefit for decision-making compared to preoperative data. While preoperative demographics, comorbidities, and surgery-related data provide the largest benefit for low treatment thresholds up to 5% to 10%, preoperative laboratory measurements become the dominant source for decision support for higher thresholds.

CONCLUSIONS: When it comes to predicting in-hospital mortality and subsequent decision-making, preoperative demographics, comorbidities, and surgery-related data provide the largest benefit for clinicians with risk-averse preferences, whereas preoperative laboratory values provide the largest benefit for decision-makers with more moderate risk preferences. Our decision-analytic investigation of different predictor categories moves beyond the question of whether certain predictors provide a benefit in traditional performance metrics (eg, AUROC). It offers a nuanced perspective on for whom these predictors might be beneficial in clinical decision-making. Follow-up studies requiring larger datasets and dedicated deep-learning models to handle continuous intraoperative data are essential to examine the robustness of our results.

PMID:38315623 | DOI:10.1213/ANE.0000000000006874

Categories: Literature Watch

Self-Supervised Learning Improves Accuracy and Data Efficiency for IMU-Based Ground Reaction Force Estimation

Mon, 2024-02-05 06:00

IEEE Trans Biomed Eng. 2024 Feb 5;PP. doi: 10.1109/TBME.2024.3361888. Online ahead of print.

ABSTRACT

OBJECTIVE: Recent deep learning techniques hold promise to enable IMU-driven kinetic assessment; however, they require large extents of ground reaction force (GRF) data to serve as labels for supervised model training. We thus propose using existing self-supervised learning (SSL) techniques to leverage large IMU datasets to pre-train deep learning models, which can improve the accuracy and data efficiency of IMU-based GRF estimation.

METHODS: We performed SSL by masking a random portion of the input IMU data and training a transformer model to reconstruct the masked portion. We systematically compared a series of masking ratios across three pre-training datasets that included real IMU data, synthetic IMU data, or a combination of the two. Finally, we built models that used pre-training and labeled data to estimate GRF during three prediction tasks: overground walking, treadmill walking, and drop landing.
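
A minimal sketch of masked-reconstruction pre-training on IMU windows, assuming illustrative tensor shapes and a small transformer encoder; the 12.5% ratio falls within the abstract's reported optimum, but the model is a stand-in, not the authors' code.

```python
# Self-supervised pre-training sketch: mask random IMU timesteps, reconstruct them.
import torch
import torch.nn as nn

batch, timesteps, channels = 8, 128, 6          # e.g., 3-axis accelerometer + 3-axis gyroscope
x = torch.randn(batch, timesteps, channels)

mask_ratio = 0.125
mask = torch.rand(batch, timesteps) < mask_ratio           # True = masked timestep
x_masked = x.clone()
x_masked[mask] = 0.0                                       # hide the masked portion

proj_in = nn.Linear(channels, 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
proj_out = nn.Linear(64, channels)

recon = proj_out(encoder(proj_in(x_masked)))
loss = ((recon - x)[mask] ** 2).mean()          # reconstruction loss on masked timesteps only
loss.backward()
```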

RESULTS: When using the same amount of labeled data, SSL pre-training significantly improved the accuracy of 3-axis GRF estimation during walking compared to baseline models trained by conventional supervised learning. Fine-tuning SSL model with 1-10% of walking data yielded comparable accuracy to training baseline model with 100% of walking data. The optimal masking ratio for SSL is 6.25-12.5%.

CONCLUSION: SSL leveraged large real and synthetic IMU datasets to increase the accuracy and data efficiency of deep-learning-based GRF estimation, reducing the need for labeled data.

SIGNIFICANCE: This work, with its open-source code and models, may unlock broader use cases of IMU-driven kinetic assessment by mitigating the scarcity of GRF measurements in practical applications.

PMID:38315597 | DOI:10.1109/TBME.2024.3361888

Categories: Literature Watch

A Faithful Deep Sensitivity Estimation for Accelerated Magnetic Resonance Imaging

Mon, 2024-02-05 06:00

IEEE J Biomed Health Inform. 2024 Feb 5;PP. doi: 10.1109/JBHI.2024.3360128. Online ahead of print.

ABSTRACT

Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan time. To alleviate this limitation, advanced fast MRI technology attracts extensive research interest. Recent deep learning has shown great potential in improving image quality and reconstruction speed. Faithful coil sensitivity estimation is vital for MRI reconstruction. However, most deep learning methods still rely on pre-estimated sensitivity maps and ignore their inaccuracy, resulting in significant quality degradation of the reconstructed images. In this work, we propose a Joint Deep Sensitivity estimation and Image reconstruction network, called JDSI. During image artifact removal, it gradually provides more faithful sensitivity maps with high-frequency information, leading to improved image reconstructions. To understand the behavior of the network, the mutual promotion of sensitivity estimation and image reconstruction is revealed through visualization of intermediate network results. Results on in vivo datasets and a radiologist reader study demonstrate that, for both calibration-based and calibrationless reconstruction, the proposed JDSI achieves state-of-the-art performance visually and quantitatively, especially when the acceleration factor is high. Additionally, JDSI shows good robustness across patients and autocalibration signals.
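
For context on why sensitivity maps matter, they enter reconstruction through the standard sensitivity-weighted coil combination; the sketch below shows that combination on random data and is not the JDSI network.

```python
# Sensitivity-weighted coil combination: x = sum_c conj(S_c) * y_c / sum_c |S_c|^2 (toy data).
import numpy as np

coils, h, w = 8, 64, 64
coil_images = np.random.randn(coils, h, w) + 1j * np.random.randn(coils, h, w)
sens_maps = np.random.randn(coils, h, w) + 1j * np.random.randn(coils, h, w)

num = np.sum(np.conj(sens_maps) * coil_images, axis=0)
den = np.sum(np.abs(sens_maps) ** 2, axis=0) + 1e-8       # avoid division by zero
combined = num / den
print(combined.shape)                                     # (64, 64)
```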

PMID:38315596 | DOI:10.1109/JBHI.2024.3360128

Categories: Literature Watch

DeepHealthNet: Adolescent Obesity Prediction System Based on a Deep Learning Framework

Mon, 2024-02-05 06:00

IEEE J Biomed Health Inform. 2024 Feb 5;PP. doi: 10.1109/JBHI.2024.3356580. Online ahead of print.

ABSTRACT

The global prevalence of childhood and adolescent obesity is a major concern due to its association with chronic diseases and long-term health risks. Artificial intelligence technology has been identified as a potential solution to accurately predict obesity rates and provide personalized feedback to adolescents. This study highlights the importance of early identification and prevention of obesity-related health issues. To develop effective algorithms for predicting obesity rates and providing personalized feedback, factors such as height, weight, waist circumference, calorie intake, physical activity levels, and other relevant health information must be taken into account. Therefore, by collecting health datasets from 321 adolescents who used the Would You Do It! application, we propose an adolescent obesity prediction system that provides personalized predictions and assists individuals in making informed health decisions. Our proposed deep learning framework, DeepHealthNet, effectively trains the model using data augmentation techniques, even when daily health data are limited, resulting in improved prediction accuracy (acc: 0.8842). Additionally, the study revealed differences in obesity-rate prediction between boys (acc: 0.9320) and girls (acc: 0.9163), allowing the identification of disparities and the determination of the optimal time to provide feedback. Statistical analysis revealed that the performance of the proposed deep learning framework was significantly better (p < 0.001) than that of other general models. The proposed system has the potential to effectively address childhood and adolescent obesity.
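
The abstract does not specify the augmentation technique, so the sketch below shows one generic option for limited tabular health records, Gaussian jitter of the features, purely as an illustration rather than DeepHealthNet's actual method.

```python
# Generic tabular augmentation: add scaled Gaussian noise to copies of the training features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(321, 6))          # e.g., height, weight, waist, calories, activity (placeholders)
y = rng.integers(0, 2, size=321)       # obesity label (placeholder)

def augment(X, y, copies=3, scale=0.05):
    feats, labels = [X], [y]
    for _ in range(copies):
        feats.append(X + rng.normal(scale=scale, size=X.shape) * X.std(axis=0))
        labels.append(y)
    return np.vstack(feats), np.concatenate(labels)

X_aug, y_aug = augment(X, y)
print(X_aug.shape, y_aug.shape)        # (1284, 6) (1284,)
```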

PMID:38315595 | DOI:10.1109/JBHI.2024.3356580

Categories: Literature Watch

Multimodal Brain Tumor Segmentation Boosted by Monomodal Normal Brain Images

Mon, 2024-02-05 06:00

IEEE Trans Image Process. 2024 Feb 5;PP. doi: 10.1109/TIP.2024.3359815. Online ahead of print.

ABSTRACT

Many deep learning based methods have been proposed for brain tumor segmentation. Most studies focus on the internal structure of deep networks to improve segmentation accuracy, while valuable external information, such as normal brain appearance, is often ignored. Inspired by the fact that radiologists often screen lesion regions while keeping the normal appearance in mind as a reference, in this paper we propose a novel deep framework for brain tumor segmentation, where normal brain images are adopted as a reference and compared with tumor brain images in a learned feature space. In this way, features at tumor regions, i.e., tumor-related features, can be highlighted and enhanced for accurate tumor segmentation. It is known that routine tumor brain images are multimodal, while normal brain images are often monomodal. This makes the feature comparison a significant challenge, i.e., multimodal vs. monomodal. To this end, we present a new feature alignment module (FAM) to make the feature distribution of monomodal normal brain images consistent/inconsistent with multimodal tumor brain images at normal/tumor regions, making the feature comparison effective. Both public (BraTS2022) and in-house tumor brain image datasets are used to evaluate our framework. Experimental results demonstrate that, for both datasets, our framework effectively improves segmentation accuracy and outperforms state-of-the-art segmentation methods. Codes are available at https://github.com/hb-liu/Normal-Brain-Boost-Tumor-Segmentation.

PMID:38315584 | DOI:10.1109/TIP.2024.3359815

Categories: Literature Watch

Development of Medical Imaging Data Standardization for Imaging-Based Observational Research: OMOP Common Data Model Extension

Mon, 2024-02-05 06:00

J Imaging Inform Med. 2024 Feb 5. doi: 10.1007/s10278-024-00982-6. Online ahead of print.

ABSTRACT

The rapid growth of artificial intelligence (AI) and deep learning techniques requires access to large inter-institutional cohorts of data to enable the development of robust models, e.g., targeting the identification of disease biomarkers and quantifying disease progression and treatment efficacy. The Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) has been designed to accommodate a harmonized representation of observational healthcare data. This study proposes the Medical Imaging CDM (MI-CDM) extension, adding two new tables and two vocabularies to the OMOP CDM to address the structural and semantic requirements for supporting imaging research. The tables provide the capability of linking DICOM data sources as well as tracking the provenance of imaging features derived from those images. The implementation of the extension enables phenotype definitions using imaging features and the expansion of standardized, computable imaging biomarkers. This proposal offers a comprehensive and unified approach for conducting imaging research and outcome studies utilizing imaging features.

PMID:38315345 | DOI:10.1007/s10278-024-00982-6

Categories: Literature Watch

Enhancing YOLO5 for the Assessment of Irregular Pelvic Radiographs with Multimodal Information

Mon, 2024-02-05 06:00

J Imaging Inform Med. 2024 Feb 5. doi: 10.1007/s10278-024-00986-2. Online ahead of print.

ABSTRACT

Developmental dysplasia of the hip (DDH) is one of the most common orthopedic disorders in infants and young children. Accurate identification and localization of anatomical landmarks are prerequisites for the diagnosis of DDH. In recent years, various works have employed deep learning algorithms on radiography images for DDH diagnosis. However, none of these works have considered the incorporation of multimodal information. The pelvis exhibits distinct structures at different developmental stages, and there are also gender-based differences. In light of this, this study proposes a method to enhance the performance of deep learning models in diagnosing DDH by incorporating age and gender information into the input channels. The study utilizes YOLO5 to construct a deep learning network for detecting hip joint landmarks. Moreover, a comprehensive dataset of 7750 pelvic X-ray images is established, covering ages from 4 months to 16 years and encompassing various conditions, such as deformities and post-operative cases, which authentically capture the temporal diversity and pathological complexity of DDH. Experimental results show that the YOLO5 model with integrated multimodal information achieves an mAP0.5-0.95 of 83.1% and a diagnostic accuracy of 86.7% on the test dataset. The F1 scores for diagnosing cases of normal (NM), suspected dislocation (SD), mild dislocation (MD), and heavy dislocation (HD) are 90.9%, 79.8%, 63.5%, and 97.4%, respectively. Furthermore, experiments conducted on datasets of different sizes and networks of different sizes demonstrate the beneficial impact of multimodal information on the effectiveness of deep learning in diagnosing DDH.
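
One common way to inject demographic information into an image detector is to append constant-valued planes to the image tensor; the sketch below illustrates that idea with assumed normalisation and channel layout, not the paper's exact scheme.

```python
# Append age and sex as constant-valued channels to a grayscale radiograph tensor.
import torch

def add_demographic_channels(image, age_months, sex_female):
    """image: (1, H, W) grayscale tensor -> (3, H, W) with age and sex planes."""
    _, h, w = image.shape
    age_plane = torch.full((1, h, w), age_months / 192.0)   # assumed scaling by 16 years
    sex_plane = torch.full((1, h, w), float(sex_female))
    return torch.cat([image, age_plane, sex_plane], dim=0)

x = add_demographic_channels(torch.rand(1, 640, 640), age_months=18, sex_female=1)
print(x.shape)   # torch.Size([3, 640, 640]) -> feed to the detector as a 3-channel input
```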

PMID:38315343 | DOI:10.1007/s10278-024-00986-2

Categories: Literature Watch

Automatic caries detection in bitewing radiographs-Part II: experimental comparison

Mon, 2024-02-05 06:00

Clin Oral Investig. 2024 Feb 5;28(2):133. doi: 10.1007/s00784-024-05528-2.

ABSTRACT

OBJECTIVE: The objective of this study was to compare the detection of caries in bitewing radiographs by multiple dentists with an automatic method and to evaluate the detection performance in the absence of a reliable ground truth.

MATERIALS AND METHODS: Four experts and three novices marked caries using bounding boxes in 100 bitewing radiographs. The same dataset was processed by an automatic object detection deep learning method. All annotators were compared in terms of the number of errors and intersection over union (IoU) using pairwise comparisons, with respect to the consensus standard, and with respect to the annotator of the training dataset of the automatic method.
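
Intersection over union between two bounding boxes is the standard overlap measure used in such pairwise comparisons; the sketch below computes it for placeholder boxes in (x1, y1, x2, y2) format.

```python
# IoU between two axis-aligned bounding boxes (placeholder coordinates).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((10, 10, 50, 40), (20, 15, 60, 45)))   # ≈ 0.45
```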

RESULTS: The number of lesions marked by experts in 100 images varied between 241 and 425. Pairwise comparisons showed that the automatic method outperformed all dentists except the original annotator in the mean number of errors, while being among the best in terms of IoU. With respect to a consensus standard, the performance of the automatic method was best in terms of the number of errors and slightly below average in terms of IoU. Compared with the original annotator, the automatic method had the highest IoU and only one expert made fewer errors.

CONCLUSIONS: The automatic method consistently outperformed novices and performed as well as highly experienced dentists.

CLINICAL SIGNIFICANCE: The consensus in caries detection between experts is low. An automatic method based on deep learning can improve both the accuracy and repeatability of caries detection, providing a useful second opinion even for very experienced dentists.

PMID:38315246 | DOI:10.1007/s00784-024-05528-2

Categories: Literature Watch

Machine learning-derived model for predicting poor post-treatment quality of life in Korean cancer survivors

Mon, 2024-02-05 06:00

Support Care Cancer. 2024 Feb 5;32(3):143. doi: 10.1007/s00520-024-08347-z.

ABSTRACT

PURPOSE: A substantial number of cancer survivors have poor quality of life (QOL) even after completing cancer treatment. Thus, in this study, we used machine learning (ML) to develop predictive models for poor QOL in post-treatment cancer survivors in South Korea.

METHODS: This cross-sectional study used online survey data from 1,005 post-treatment cancer survivors in South Korea. The outcome variable was QOL, which was measured using the global QOL subscale of the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire, where a global QOL score < 60.4 was defined as poor QOL. Three ML models (random forest (RF), support vector machine, and extreme gradient boosting) and three deep learning models were used to develop predictive models for poor QOL. Model performance regarding accuracy, area under the receiver operating characteristic curve, F1 score, precision, and recall was evaluated. The SHapley Additive exPlanations (SHAP) method was used to identify important features.
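
A minimal sketch of the RF-plus-SHAP workflow described, run on synthetic data with placeholder features; it is not the study's model, hyperparameters, or survey items.

```python
# Random forest classifier for a binary outcome plus SHAP feature attributions (synthetic data).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1005, 10))                  # survivor features (placeholders)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1005) > 0).astype(int)   # poor QOL label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # per-feature contributions to each prediction
print(np.asarray(shap_values).shape)             # array layout depends on the shap version
```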

RESULTS: Of the 1,005 participants, 65.1% had poor QOL. Among the six models, the RF model had the best performance (accuracy = 0.85, F1 = 0.90). The SHAP method revealed that survivorship concerns (e.g., distress, pain, and fatigue) were the most important factors that affected poor QOL.

CONCLUSIONS: The ML-based prediction model developed to predict poor QOL in Korean post-treatment cancer survivors showed good accuracy. The ML model proposed in this study can be used to support clinical decision-making in identifying survivors at risk of poor QOL.

PMID:38315224 | DOI:10.1007/s00520-024-08347-z

Categories: Literature Watch

Pediatric ECG-Based Deep Learning to Predict Left Ventricular Dysfunction and Remodeling

Mon, 2024-02-05 06:00

Circulation. 2024 Feb 5. doi: 10.1161/CIRCULATIONAHA.123.067750. Online ahead of print.

ABSTRACT

BACKGROUND: Artificial intelligence-enhanced ECG analysis shows promise to detect ventricular dysfunction and remodeling in adult populations. However, its application to pediatric populations remains underexplored.

METHODS: A convolutional neural network was trained on paired ECG-echocardiograms (≤2 days apart) from patients ≤18 years of age without major congenital heart disease to detect greater-than-mild left ventricular (LV) dysfunction, hypertrophy, and dilation, as classified by human experts (individually and as a composite outcome). Model performance was evaluated on single ECG-echocardiogram pairs per patient at Boston Children's Hospital and externally at Mount Sinai Hospital using the area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC).

RESULTS: The training cohort comprised 92 377 ECG-echocardiogram pairs (46 261 patients; median age, 8.2 years). Test groups included internal testing (12 631 patients; median age, 8.8 years; 4.6% composite outcomes), emergency department (2830 patients; median age, 7.7 years; 10.0% composite outcomes), and external validation (5088 patients; median age, 4.3 years; 6.1% composite outcomes) cohorts. Model performance was similar on internal test and emergency department cohorts, with model predictions of LV hypertrophy outperforming the pediatric cardiologist expert benchmark. Adding age and sex to the model added no benefit to model performance. When using quantitative outcome cutoffs, model performance was similar between internal testing (composite outcome: AUROC, 0.88, AUPRC, 0.43; LV dysfunction: AUROC, 0.92, AUPRC, 0.23; LV hypertrophy: AUROC, 0.88, AUPRC, 0.28; LV dilation: AUROC, 0.91, AUPRC, 0.47) and external validation (composite outcome: AUROC, 0.86, AUPRC, 0.39; LV dysfunction: AUROC, 0.94, AUPRC, 0.32; LV hypertrophy: AUROC, 0.84, AUPRC, 0.25; LV dilation: AUROC, 0.87, AUPRC, 0.33), with composite outcome negative predictive values of 99.0% and 99.2%, respectively. Saliency mapping highlighted ECG components that influenced model predictions (precordial QRS complexes for all outcomes; T waves for LV dysfunction). High-risk ECG features include lateral T-wave inversion (LV dysfunction), deep S waves in V1 and V2 and tall R waves in V5 and V6 (LV hypertrophy), and tall R waves in V4 through V6 (LV dilation).
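
AUROC and AUPRC of this kind can be computed with scikit-learn; the sketch below uses synthetic labels and scores at roughly the internal-test prevalence, not the study's data or model outputs.

```python
# AUROC and AUPRC for a rare binary outcome (synthetic labels and scores).
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.046, size=12631)                # ~4.6% composite outcome prevalence
y_score = np.clip(0.05 + 0.4 * y_true + rng.normal(0, 0.15, y_true.size), 0, 1)

print("AUROC:", round(roc_auc_score(y_true, y_score), 3))
print("AUPRC:", round(average_precision_score(y_true, y_score), 3))
```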

CONCLUSIONS: This externally validated algorithm shows promise to inexpensively screen for LV dysfunction and remodeling in children, which may facilitate improved access to care by democratizing the expertise of pediatric cardiologists.

PMID:38314583 | DOI:10.1161/CIRCULATIONAHA.123.067750

Categories: Literature Watch
