Deep learning
Automatic Assessment of Upper Extremity Function and Mobile Application for Self-administered Stroke Rehabilitation
IEEE Trans Neural Syst Rehabil Eng. 2024 Jan 25;PP. doi: 10.1109/TNSRE.2024.3358497. Online ahead of print.
ABSTRACT
Rehabilitation training is essential for a successful recovery of upper extremity function after stroke. Training programs are typically conducted in hospitals or rehabilitation centers, supervised by specialized medical professionals. However, frequent visits to hospitals can be burdensome for stroke patients with limited mobility. We consider a self-administered rehabilitation system based on a mobile application in which patients can periodically upload videos of themselves performing reach-to-grasp tasks to receive recommendations for self-managed exercises or progress reports. Sensing equipment aside from cameras is typically unavailable in the home environment. A key contribution of our work is to propose a deep learning-based assessment model trained only with video data. As all patients carry out identical tasks, a fine-grained assessment of task execution is required. Our model addresses this difficulty by learning RGB and optical flow data in a complementary manner. The correlation between the RGB and optical flow data is captured by a novel module for modality fusion using cross-attention with Transformers. Experiments showed that our model achieved higher accuracy in movement assessment than existing methods for action recognition. Based on the assessment model, we developed a patient-centered, solution-based mobile application for upper extremity exercises for hemiplegia, which can recommend 57 exercises with three levels of difficulty. A prototype of our application was evaluated by potential end-users and achieved a good quality score on the Mobile Application Rating Scale (MARS).
PMID:38271165 | DOI:10.1109/TNSRE.2024.3358497
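The cross-attention fusion described in the abstract above can be illustrated with a minimal NumPy sketch (not the authors' code; the single-head formulation, shapes, and toy features are illustrative assumptions): queries come from one modality's token features and keys/values from the other, so each RGB token aggregates the optical-flow features most correlated with it.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_tokens, context_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: queries from one modality (e.g. RGB),
    keys/values from the other (e.g. optical flow)."""
    Q = query_tokens @ Wq                      # (n_q, d)
    K = context_tokens @ Wk                    # (n_ctx, d)
    V = context_tokens @ Wv                    # (n_ctx, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # (n_q, n_ctx)
    weights = softmax(scores, axis=-1)         # each query attends over context tokens
    return weights @ V                         # fused features, (n_q, d)

rng = np.random.default_rng(0)
d = 8
rgb_tokens = rng.normal(size=(4, d))           # toy per-frame RGB features
flow_tokens = rng.normal(size=(6, d))          # toy optical-flow features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = cross_attention(rgb_tokens, flow_tokens, Wq, Wk, Wv)
```

In a full Transformer module this would be wrapped with residual connections, layer normalization, and multiple heads, and applied symmetrically in both directions.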
Convolutional neural networks for the differentiation between benign and malignant renal tumors with a multicenter international computed tomography dataset
Insights Imaging. 2024 Jan 25;15(1):26. doi: 10.1186/s13244-023-01601-8.
ABSTRACT
OBJECTIVES: To use convolutional neural networks (CNNs) for the differentiation between benign and malignant renal tumors using contrast-enhanced CT images of a multi-institutional, multi-vendor, and multicenter CT dataset.
METHODS: A total of 264 histologically confirmed renal tumors were included, from US and Swedish centers. Images were augmented and divided randomly 70%:30% for algorithm training and testing. Three CNNs (InceptionV3, Inception-ResNetV2, VGG-16) were pretrained with transfer learning and fine-tuned with our dataset to distinguish between malignant and benign tumors. The ensemble consensus decision of the three networks was also recorded. Performance of each network was assessed with receiver operating characteristics (ROC) curves and their area under the curve (AUC-ROC). Saliency maps were created to demonstrate the attention of the highest performing CNN.
RESULTS: Inception-ResNetV2 achieved the highest AUC of 0.918 (95% CI 0.873-0.963), whereas VGG-16 achieved an AUC of 0.813 (95% CI 0.752-0.874). InceptionV3 and the ensemble achieved the same performance with an AUC of 0.894 (95% CI 0.844-0.943). Saliency maps indicated that Inception-ResNetV2 based its decisions on the characteristics of the tumor itself and, in most tumors, on the interface between the tumor and the surrounding renal parenchyma.
CONCLUSION: Deep learning based on a diverse multicenter international dataset can enable accurate differentiation between benign and malignant renal tumors.
CRITICAL RELEVANCE STATEMENT: Convolutional neural networks trained on a diverse CT dataset can accurately differentiate between benign and malignant renal tumors.
KEY POINTS: • Differentiation between benign and malignant tumors based on CT is extremely challenging. • Inception-ResNetV2 trained on a diverse dataset achieved excellent differentiation between tumor types. • Deep learning can be used to distinguish between benign and malignant renal tumors.
PMID:38270726 | DOI:10.1186/s13244-023-01601-8
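The ensemble consensus decision and AUC-ROC evaluation used in the study above can be sketched in NumPy (a schematic stand-in, not the study's code; the scores and labels are invented toy data):

```python
import numpy as np

def auc_roc(scores, labels):
    """AUC as the probability that a random positive outranks a random negative
    (ties count 0.5) -- equivalent to the normalized Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def ensemble_consensus(p1, p2, p3, threshold=0.5):
    """Majority vote over the binarized outputs of three networks."""
    votes = (np.stack([p1, p2, p3]) >= threshold).sum(axis=0)
    return (votes >= 2).astype(int)

labels = np.array([1, 1, 0, 0, 1, 0])          # toy ground truth (1 = malignant)
p1 = np.array([0.9, 0.8, 0.3, 0.4, 0.7, 0.2])  # toy per-network malignancy scores
p2 = np.array([0.6, 0.4, 0.2, 0.6, 0.9, 0.1])
p3 = np.array([0.8, 0.9, 0.4, 0.3, 0.6, 0.4])
consensus = ensemble_consensus(p1, p2, p3)
```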
Preoperative CT-based deep learning radiomics model to predict lymph node metastasis and patient prognosis in bladder cancer: a two-center study
Insights Imaging. 2024 Jan 25;15(1):21. doi: 10.1186/s13244-023-01569-5.
ABSTRACT
OBJECTIVE: To establish a model for predicting lymph node metastasis in bladder cancer (BCa) patients.
METHODS: We retrospectively enrolled 239 patients who underwent three-phase CT and resection for BCa in two centers (training set, n = 185; external test set, n = 54). We reviewed the clinical characteristics and CT features to identify significant predictors for constructing a clinical model. We extracted hand-crafted radiomics features and deep learning features of the lesions. We used the Minimum Redundancy Maximum Relevance algorithm and least absolute shrinkage and selection operator (LASSO) logistic regression to screen features, and nine classifiers to establish the radiomics machine learning signatures. To compensate for the uneven distribution of the data, we used the synthetic minority over-sampling technique to retrain each machine learning classifier. We constructed the combined model from the top-performing radiomics signature and the clinical model, and presented it as a nomogram. We evaluated the combined model's performance using the area under the receiver operating characteristic curve, accuracy, calibration curves, and decision curve analysis, and used Kaplan-Meier survival curves to analyze the prognosis of BCa patients.
RESULTS: The combined model incorporating the radiomics signature and clinical model achieved an area under the receiver operating characteristic curve of 0.834 (95% CI: 0.659-1.000) for the external test set. The calibration curves and decision curve analysis demonstrated exceptional calibration and promising clinical use. The combined model showed good risk stratification performance for progression-free survival.
CONCLUSION: The proposed CT-based combined model is effective and reliable for predicting lymph node status of BCa patients preoperatively.
CRITICAL RELEVANCE STATEMENT: Bladder cancer is a type of urogenital cancer that has a high morbidity and mortality rate. Lymph node metastasis is an independent risk factor for death in bladder cancer patients. This study aimed to investigate the performance of a deep learning radiomics model for preoperatively predicting lymph node metastasis in bladder cancer patients.
KEY POINTS: • Conventional imaging is not sufficiently accurate to determine lymph node status. • Deep learning radiomics model accurately predicted bladder cancer lymph node metastasis. • The proposed method showed satisfactory patient risk stratification for progression-free survival.
PMID:38270647 | DOI:10.1186/s13244-023-01569-5
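The synthetic minority over-sampling technique (SMOTE) used above to rebalance the training data can be sketched as follows (a minimal NumPy version of the core idea, not the study's implementation; the toy minority points are invented):

```python
import numpy as np

def smote(X_minority, n_synthetic, k=3, rng=None):
    """Synthetic Minority Over-sampling TEchnique: each synthetic sample is an
    interpolation between a random minority point and one of its k nearest
    minority neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X_minority)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)
        dist = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]       # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                           # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)

X_min = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
X_new = smote(X_min, n_synthetic=20, k=2)
```

Because every synthetic point lies on a segment between two minority samples, the new points stay inside the minority class's feature range.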
Multi_CycGT: A Deep Learning-Based Multimodal Model for Predicting the Membrane Permeability of Cyclic Peptides
J Med Chem. 2024 Jan 25. doi: 10.1021/acs.jmedchem.3c01611. Online ahead of print.
ABSTRACT
Cyclic peptides are gaining attention for their strong binding affinity, low toxicity, and ability to target "undruggable" proteins; however, their therapeutic potential against intracellular targets is constrained by their limited membrane permeability, and measuring this property in the laboratory is time-consuming and costly. Herein, we propose an innovative multimodal model called Multi_CycGT, which combines a graph convolutional network (GCN) and a transformer to extract one- and two-dimensional features for predicting cyclic peptide permeability. Extensive benchmarking experiments show that our Multi_CycGT model attains state-of-the-art performance, with an average accuracy of 0.8206 and an area under the curve of 0.8650, and demonstrates satisfactory generalization ability on several external data sets. To the best of our knowledge, it is the first deep learning-based attempt to predict the membrane permeability of cyclic peptides, which should help accelerate the design of cyclic peptide drugs in medicinal chemistry and chemical biology applications.
PMID:38270541 | DOI:10.1021/acs.jmedchem.3c01611
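The graph convolutional network (GCN) branch of a model like the one above rests on the propagation rule H' = sigma(D^-1/2 (A + I) D^-1/2 H W). A minimal NumPy sketch (the ring graph, atom features, and weights are toy assumptions, not the authors' peptide featurization):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(len(A))                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)           # ReLU activation

# Toy 5-node ring as a stand-in for a cyclic peptide backbone graph
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
rng = np.random.default_rng(1)
H = rng.normal(size=(n, 4))                          # toy atom features
W = rng.normal(size=(4, 3))                          # layer weights
H_out = gcn_layer(A, H, W)
```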
Deep Learning in High-Resolution Anoscopy: Assessing the Impact of Staining and Therapeutic Manipulation on Automated Detection of Anal Cancer Precursors
Clin Transl Gastroenterol. 2024 Jan 25. doi: 10.14309/ctg.0000000000000681. Online ahead of print.
ABSTRACT
INTRODUCTION: High-resolution anoscopy (HRA) is the gold standard for detecting anal squamous cell cancer (ASCC) precursors. Preliminary studies on the application of artificial intelligence (AI) models to this modality have revealed promising results. However, the impact of staining techniques and anal manipulation on the effectiveness of these algorithms has not been evaluated. We aimed to develop a deep learning system for automatic differentiation of high (HSIL) versus low-grade (LSIL) squamous intraepithelial lesions in HRA images in different subsets of patients (non-stained, acetic acid, lugol, and after manipulation).
METHODS: A convolutional neural network (CNN) was developed to detect and differentiate high and low-grade anal squamous intraepithelial lesions based on 27,770 images from 103 HRA exams performed in 88 patients. Subanalyses were performed to evaluate the algorithm's performance in subsets of images without staining, acetic acid, lugol, and after manipulation of the anal canal. The sensitivity, specificity, accuracy, positive and negative predictive values, and area under the curve (AUC) were calculated.
RESULTS: The CNN achieved an overall accuracy of 98.3%, with a sensitivity and specificity of 97.4% and 99.2%, respectively. The accuracy of the algorithm for differentiating HSIL vs LSIL varied between 91.5% (post-manipulation) and 100% (lugol) across the subanalysis categories. The AUC ranged between 0.95 and 1.00.
DISCUSSION: The introduction of AI to HRA may provide an accurate detection and differentiation of ASCC precursors. Our algorithm showed excellent performance at different staining settings. This is extremely important as real-time AI models during HRA exams can help guide local treatment or detect relapsing disease.
PMID:38270249 | DOI:10.14309/ctg.0000000000000681
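The performance measures reported above (sensitivity, specificity, positive and negative predictive values, accuracy) all derive from the 2x2 confusion matrix; a small NumPy sketch with invented labels:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 confusion matrix."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
    }

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])   # toy HSIL (1) vs LSIL (0) labels
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1])
m = binary_metrics(y_true, y_pred)
```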
Deep Learning-Based Chemical Similarity for Accelerated Organic Light-Emitting Diode Materials Discovery
J Chem Inf Model. 2024 Jan 25. doi: 10.1021/acs.jcim.3c01747. Online ahead of print.
ABSTRACT
Thermally activated delayed fluorescence (TADF) materials have attracted great attention as promising metal-free organic light-emitting diode materials with high theoretical efficiency. To accelerate the discovery of novel TADF materials, computer-aided material design strategies have been developed. However, they have clear limitations because only a few properties are computationally tractable. Here, we propose TADF-likeness, a quantitative score to evaluate the TADF potential of molecules based on a data-driven concept of chemical similarity to existing TADF molecules. We used a deep autoencoder to characterize the common features of existing TADF molecules with common chemical descriptors. The score was highly correlated with the four essential electronic properties of TADF molecules and had a high success rate in large-scale virtual screening of millions of molecules to identify promising candidates at almost no cost, validating its feasibility for accelerating TADF discovery. The concept of TADF-likeness can be extended to other fields of materials discovery.
PMID:38270063 | DOI:10.1021/acs.jcim.3c01747
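The TADF-likeness idea above, scoring a molecule by how well a model trained only on known TADF molecules can reconstruct its descriptor vector, can be sketched with a linear (PCA-based) autoencoder in NumPy. The authors use a deep autoencoder, so this tied-weight linear version with invented descriptor data is a simplified stand-in:

```python
import numpy as np

def fit_linear_autoencoder(X, n_components):
    """PCA as a tied-weight linear autoencoder: encode onto the top principal
    directions of the training set, decode back with the transpose."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_components]
    return mu, W

def likeness(x, mu, W):
    """Higher (closer to 0) = smaller reconstruction error = more similar
    to the training distribution."""
    z = (x - mu) @ W.T          # encode
    x_hat = mu + z @ W          # decode
    return -np.linalg.norm(x - x_hat)

rng = np.random.default_rng(2)
X_train = np.zeros((10, 5))
X_train[:, :2] = rng.normal(size=(10, 2))   # toy "TADF-like" descriptors on a 2-D subspace
mu, W = fit_linear_autoencoder(X_train, n_components=2)
in_dist = np.array([1.0, 2.0, 0.0, 0.0, 0.0])    # lies on the training subspace
off_dist = np.array([0.0, 0.0, 5.0, 0.0, 0.0])   # far off the training subspace
```

Scoring candidates this way is essentially free once the model is fitted, which is what makes large-scale virtual screening cheap.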
New Prediction Model for Incidence of Dementia in Patients with Type 2 Diabetes
Stud Health Technol Inform. 2024 Jan 25;310:1354-1355. doi: 10.3233/SHTI231191.
NO ABSTRACT
PMID:38270040 | DOI:10.3233/SHTI231191
A deep learning approach to remove contrast from contrast-enhanced CT for proton dose calculation
J Appl Clin Med Phys. 2024 Jan 25:e14266. doi: 10.1002/acm2.14266. Online ahead of print.
ABSTRACT
PURPOSE: Non-Contrast Enhanced CT (NCECT) is normally required for proton dose calculation, while Contrast Enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep-learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between the two otherwise separate CT scans.
METHODS: A deep network was developed to convert CECT to NCECT. The network receives a 3D CECT image patch as input and generates a corresponding contrast-removed NCECT patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered image pairs were used to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation.
RESULTS: Our approach achieved a Cosine Similarity score of 0.988 and an MSE value of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and the generated NCECT for five proton patients revealed significant dose differences at the distal end of the beam paths. V100% of PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was ∼4.72, whereas the difference between CECT and the scanned NCECT was ∼64.52, indicating a ∼93% reduction in mean HU difference.
CONCLUSIONS: A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful for the proton dose calculation to reduce uncertainties caused by tissue motion between CECT and NCECT.
PMID:38269961 | DOI:10.1002/acm2.14266
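The cosine similarity and MSE scores used above to compare generated and scanned NCECT images can be sketched in NumPy (toy 2x2 patches stand in for real image volumes):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two images flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a - b) ** 2))

generated = np.array([[0.1, 0.2], [0.3, 0.4]])   # toy generated NCECT patch
reference = np.array([[0.1, 0.2], [0.3, 0.4]])   # toy scanned NCECT patch
```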
CMIR: A Unified Cross-Modality Framework for Preoperative Accurate Prediction of Microvascular Invasion in Hepatocellular Carcinoma
Stud Health Technol Inform. 2024 Jan 25;310:936-940. doi: 10.3233/SHTI231102.
ABSTRACT
Microvascular invasion (MVI) of hepatocellular carcinoma (HCC) is an important factor affecting postoperative recurrence and patient prognosis. Preoperative diagnosis of MVI is of great significance for improving the prognosis of HCC. Currently, MVI is mainly diagnosed by histopathological examination after surgery, which cannot meet the need for preoperative diagnosis; moreover, the sensitivity, specificity, and accuracy of MVI diagnosis based on a single imaging feature are low. In this paper, a robust, high-precision cross-modality unified framework for clinical diagnosis is proposed for the prediction of microvascular invasion of hepatocellular carcinoma. It can effectively extract, fuse, and localize information from multi-phase MR images and clinical data, enrich the semantic context, and comprehensively improve the prediction metrics across different hospitals. The state-of-the-art performance of the approach was validated on a dataset of HCC patients with confirmed pathological types. Moreover, CMIR provides a possible solution for related multimodality tasks in the medical field.
PMID:38269946 | DOI:10.3233/SHTI231102
Whole-Liver Based Deep Learning for Preoperatively Predicting Overall Survival in Patients with Hepatocellular Carcinoma
Stud Health Technol Inform. 2024 Jan 25;310:926-930. doi: 10.3233/SHTI231100.
ABSTRACT
Survival prediction is crucial for treatment decision making in hepatocellular carcinoma (HCC). We aimed to build a fully automated artificial intelligence system (FAIS) that mines whole-liver information to predict overall survival of HCC. We included 215 patients with preoperative contrast-enhanced CT imaging who received curative resection at a hospital in China. The cohort was randomly split into developing and testing subcohorts. The FAIS was constructed with convolutional layers and fully connected layers, and a Cox regression loss was used for training. Models based on clinical and/or tumor-based radiomics features were built for comparison. The FAIS achieved C-indices of 0.81 and 0.72 for the developing and testing sets, outperforming all three other models. In conclusion, our study suggests that more important information can be mined from the whole liver than from the tumor alone. Our whole-liver based FAIS provides a non-invasive and efficient overall survival prediction tool for HCC before surgery.
PMID:38269944 | DOI:10.3233/SHTI231100
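The C-index used above to evaluate survival prediction measures how often the model ranks patients' risks consistently with their observed survival times. A minimal sketch of Harrell's concordance index (toy data; the study's FAIS computes risks with a deep network trained under a Cox regression loss):

```python
def c_index(risk, time, event):
    """Harrell's concordance index: among comparable pairs (the earlier time is
    an observed event), the patient who dies earlier should have the higher
    predicted risk; ties in risk count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.81/0.72 indicate substantially better-than-chance discrimination.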
Explainable Artificial Intelligence for Deep-Learning Based Classification of Cystic Fibrosis Lung Changes in MRI
Stud Health Technol Inform. 2024 Jan 25;310:921-925. doi: 10.3233/SHTI231099.
ABSTRACT
Algorithms that increase the transparency and explainability of neural networks are gaining popularity. Applying them to custom neural network architectures and complex medical problems remains challenging. In this work, several algorithms, such as integrated gradients and Grad-CAM, were used to generate additional explainable outputs for the classification of lung perfusion changes and mucus plugging in cystic fibrosis patients on MRI. The algorithms were applied on top of an already existing deep learning-based classification pipeline. Of six explainability algorithms, four were implemented successfully, and one yielded satisfactory results that might provide support to the radiologist. The areas relevant for the classification were clearly highlighted, emphasizing the applicability of deep learning for the classification of lung changes in CF patients. Using explainability concepts with deep learning could improve clinicians' confidence in deep learning and support the introduction of more diagnostic decision support systems.
PMID:38269943 | DOI:10.3233/SHTI231099
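Integrated gradients, one of the explainability algorithms named above, attributes a model's output to its inputs by integrating the gradient along a path from a baseline to the input. A minimal NumPy sketch using finite differences on a toy scalar function (real use differentiates a neural network with autodiff):

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Midpoint Riemann-sum approximation of integrated gradients:
    IG_i = (x_i - b_i) * integral over a in [0, 1] of df/dx_i at b + a(x - b)."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        p = baseline + a * (x - baseline)
        grad = np.zeros_like(x, dtype=float)
        for i in range(len(x)):                  # central finite differences
            e = np.zeros_like(x, dtype=float)
            e[i] = eps
            grad[i] = (f(p + e) - f(p - e)) / (2 * eps)
        total += grad
    return (x - baseline) * total / steps

f = lambda v: v[0] ** 2 + 3.0 * v[1]             # toy stand-in for a model output
x = np.array([2.0, 1.0])
baseline = np.zeros(2)
ig = integrated_gradients(f, x, baseline)
```

The attributions satisfy the completeness axiom: they sum to f(x) - f(baseline), which is what makes the resulting saliency maps quantitatively interpretable.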
A Deep Learning-Based System for the Assessment of Dental Caries Using Colour Dental Photographs
Stud Health Technol Inform. 2024 Jan 25;310:911-915. doi: 10.3233/SHTI231097.
ABSTRACT
Dental caries remains the most common chronic disease in childhood, affecting almost half of all children globally. Dental care and examination of children living in remote and rural areas is an ongoing challenge that has been compounded by COVID-19. The development of a validated system with the capacity to screen large numbers of children with some degree of automation has the potential to facilitate remote dental screening at low cost. In this study, we aimed to develop and validate a deep learning system for the assessment of dental caries using color dental photos. Three state-of-the-art deep learning networks, namely VGG16, ResNet-50, and Inception-v3, were adopted. A total of 1020 child dental photos were used to train and validate the system. With Inception-v3, we achieved an accuracy of 79%, with a precision of 95% and a recall of 75%, in classifying 'caries' versus 'sound' teeth.
PMID:38269941 | DOI:10.3233/SHTI231097
A Deep Multi-Task Network to Learn Tumor Pathological Representations for Lymph Node Metastasis Prediction
Stud Health Technol Inform. 2024 Jan 25;310:906-910. doi: 10.3233/SHTI231096.
ABSTRACT
Lymph node metastasis is of paramount importance for patient treatment decision-making, prognosis evaluation, and clinical trial enrollment. However, accurate preoperative diagnosis remains challenging. In this study, we proposed a multi-task network to learn the primary tumor pathological features using the pT stage prediction task and leverage these features to facilitate lymph node metastasis prediction. We conducted experiments using electronic medical record data from 681 patients with non-small cell lung cancer. The proposed method achieved a 0.768 area under the receiver operating characteristic curve (AUC) value with a 0.073 standard deviation (SD) and a 0.448 average precision (AP) value with a 0.113 SD for lymph node metastasis prediction, which significantly outperformed the baseline models. Based on the results, we can conclude that the proposed multi-task method can effectively learn representations about tumor pathological conditions to support lymph node metastasis prediction.
PMID:38269940 | DOI:10.3233/SHTI231096
Prognosticating Fetal Growth Restriction and Small for Gestational Age by Medical History
Stud Health Technol Inform. 2024 Jan 25;310:740-744. doi: 10.3233/SHTI231063.
ABSTRACT
This study aimed to develop and externally validate a prognostic prediction model for screening fetal growth restriction (FGR)/small for gestational age (SGA) using medical history. From a nationwide health insurance database (n=1,697,452), we retrospectively selected visits of 12-to-55-year-old females to healthcare providers. This study used machine learning (including deep learning) and 54 medical-history predictors. The best model was a deep-insight visible neural network (DI-VNN), with an area under the receiver operating characteristic curve (AUROC) of 0.742 (95% CI 0.734 to 0.750) and a sensitivity of 49.09% (95% CI 47.60% to 50.58%) at 95% specificity. Our DI-VNN model screened for FGR/SGA using medical history with moderate accuracy. In future work, we will compare this model with those from systematically reviewed previous studies and evaluate whether its use impacts patient outcomes.
PMID:38269907 | DOI:10.3233/SHTI231063
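Reporting sensitivity at a fixed 95% specificity, as above, amounts to choosing the score threshold from the negative-class score distribution and then measuring recall at that cut. A NumPy sketch (toy scores; not the DI-VNN itself):

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_spec=0.95):
    """Pick the threshold at the target-specificity quantile of the negative
    scores, then report sensitivity and realized specificity at that cut."""
    neg_scores = scores[labels == 0]
    thr = np.quantile(neg_scores, target_spec)
    preds = scores > thr
    sensitivity = preds[labels == 1].mean()
    specificity = (~preds)[labels == 0].mean()
    return sensitivity, specificity

neg = np.arange(20) / 20                         # toy scores for 20 negatives
pos = np.full(10, 0.99)                          # toy scores for 10 positives
scores = np.concatenate([neg, pos])
labels = np.array([0] * 20 + [1] * 10)
sens, spec = sensitivity_at_specificity(scores, labels)
```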
Transfer Learning for Mortality Prediction in Non-Small Cell Lung Cancer with Low-Resolution Histopathology Slide Snapshots
Stud Health Technol Inform. 2024 Jan 25;310:735-739. doi: 10.3233/SHTI231062.
ABSTRACT
High-resolution whole slide image scans of histopathology slides have been widely used in recent years for prediction in cancer. However, in some cases, clinical informatics practitioners may only have access to low-resolution snapshots of histopathology slides, not high-resolution scans. We evaluated strategies for training neural network prognostic models in non-small cell lung cancer (NSCLC) based on low-resolution snapshots, using data from the Veterans Affairs Precision Oncology Data Repository. We compared strategies without transfer learning, with transfer learning from general domain images, and with transfer learning from publicly available high-resolution histopathology scans. We found transfer learning from high-resolution scans achieved significantly better performance than other strategies. Our contribution provides a foundation for future development of prognostic models in NSCLC that incorporate data from low-resolution pathology slide snapshots alongside known clinical predictors.
PMID:38269906 | DOI:10.3233/SHTI231062
Enhancing chemical synthesis: a two-stage deep neural network for predicting feasible reaction conditions
J Cheminform. 2024 Jan 24;16(1):11. doi: 10.1186/s13321-024-00805-4.
ABSTRACT
In the field of chemical synthesis planning, the accurate recommendation of reaction conditions is essential for achieving successful outcomes. This work introduces an innovative deep learning approach designed to address the complex task of predicting appropriate reagents, solvents, and reaction temperatures for chemical reactions. Our proposed methodology combines a multi-label classification model with a ranking model to offer tailored reaction condition recommendations based on relevance scores derived from anticipated product yields. To tackle the challenge of limited data for unfavorable reaction contexts, we employed the technique of hard negative sampling to generate reaction conditions that might be mistakenly classified as suitable, forcing the model to refine its decision boundaries, especially in challenging cases. Our developed model excels in proposing conditions where an exact match to the recorded solvents and reagents is found within the top-10 predictions 73% of the time. It also predicts temperatures within ±20 °C of the recorded temperature in 89% of test cases. Notably, the model demonstrates its capacity to recommend multiple viable reaction conditions, with accuracy varying based on the availability of condition records associated with each reaction. What sets this model apart is its ability to suggest alternative reaction conditions beyond the constraints of the dataset. This underscores its potential to inspire innovative approaches in chemical research, presenting a compelling opportunity for advancing chemical synthesis planning and elevating the field of reaction engineering. Scientific contribution: The combination of multi-label classification and ranking models provides tailored recommendations for reaction conditions based on the reaction yields. A novel approach is presented to address the issue of data scarcity in negative reaction conditions through data augmentation.
PMID:38268009 | DOI:10.1186/s13321-024-00805-4
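The two headline metrics above (exact match within the top-10 ranked conditions; temperature within +/-20 degrees of the recorded value) can be sketched directly; the condition strings below are invented examples, not entries from the study's dataset:

```python
import numpy as np

def top_k_accuracy(ranked_conditions, recorded, k=10):
    """Fraction of reactions whose recorded condition appears in the top-k list."""
    hits = [rec in preds[:k] for preds, rec in zip(ranked_conditions, recorded)]
    return float(np.mean(hits))

def temperature_hit_rate(pred_t, true_t, tol=20.0):
    """Fraction of predictions within +/- tol degrees of the recorded temperature."""
    return float(np.mean(np.abs(np.asarray(pred_t) - np.asarray(true_t)) <= tol))

ranked = [["DMF/K2CO3", "THF/NaH", "MeCN/Et3N"],       # toy ranked condition lists
          ["toluene/Pd(PPh3)4", "dioxane/Pd2(dba)3", "DMF/CuI"]]
recorded = ["THF/NaH", "EtOH/NaOEt"]
```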
Improved prostate cancer diagnosis using a modified ResNet50-based deep learning architecture
BMC Med Inform Decis Mak. 2024 Jan 24;24(1):23. doi: 10.1186/s12911-024-02419-0.
ABSTRACT
Prostate cancer, the most common cancer in men, is influenced by age, family history, genetics, and lifestyle factors. Early detection of prostate cancer using screening methods improves outcomes, but the balance between overdiagnosis and early detection remains debated. Using Deep Learning (DL) algorithms for prostate cancer detection offers a promising solution for accurate and efficient diagnosis, particularly in cases where prostate imaging is challenging. In this paper, we propose a Prostate Cancer Detection Model (PCDM) for the automatic diagnosis of prostate cancer and demonstrate its clinical applicability for aiding the early detection and management of prostate cancer in real-world healthcare environments. The PCDM is a modified ResNet50-based architecture that integrates Faster R-CNN and dual optimizers to improve the performance of the detection process. The model was trained on a large dataset of annotated medical images, and the experimental results show that it outperforms both the ResNet50 and VGG19 architectures. Specifically, the proposed model achieves high sensitivity, specificity, precision, and accuracy rates of 97.40%, 97.09%, 97.56%, and 95.24%, respectively.
PMID:38267994 | DOI:10.1186/s12911-024-02419-0
Interpretable artificial intelligence-based app assists inexperienced radiologists in diagnosing biliary atresia from sonographic gallbladder images
BMC Med. 2024 Jan 25;22(1):29. doi: 10.1186/s12916-024-03247-9.
ABSTRACT
BACKGROUND: A previously trained deep learning-based smartphone app provides an artificial intelligence solution to help diagnose biliary atresia from sonographic gallbladder images, but it might be impractical to launch it in real clinical settings. This study aimed to redevelop a new model using original sonographic images and their derived smartphone photos and then test the new model's performance in assisting radiologists with different experiences to detect biliary atresia in real-world mimic settings.
METHODS: A new model was first trained retrospectively using 3659 original sonographic gallbladder images and their derived 51,226 smartphone photos and tested on 11,410 external validation smartphone photos. Afterward, the new model was tested in 333 prospectively collected sonographic gallbladder videos from 207 infants by 14 inexperienced radiologists (9 juniors and 5 seniors) and 4 experienced pediatric radiologists in real-world mimic settings. Diagnostic performance was expressed as the area under the receiver operating characteristic curve (AUC).
RESULTS: The new model outperformed the previously published model in diagnosing biliary atresia (BA) on the external validation set (AUC 0.924 vs 0.908, P = 0.004) with higher consistency (kappa value 0.708 vs 0.609). When tested in real-world mimic settings using 333 sonographic gallbladder videos, the new model performed comparably to experienced pediatric radiologists (average AUC 0.860 vs 0.876) and outperformed junior radiologists (average AUC 0.838 vs 0.773) and senior radiologists (average AUC 0.829 vs 0.749). Furthermore, the new model aided both junior and senior radiologists in improving their diagnostic performance, with the average AUC increasing from 0.773 to 0.835 for junior radiologists and from 0.749 to 0.805 for senior radiologists.
CONCLUSIONS: The interpretable app-based model showed robust and satisfactory performance in diagnosing biliary atresia, and it could help radiologists with limited experience improve their diagnostic performance in real-world mimic settings.
PMID:38267950 | DOI:10.1186/s12916-024-03247-9
Automated evaluation of masseter muscle volume: deep learning prognostic approach in oral cancer
BMC Cancer. 2024 Jan 24;24(1):128. doi: 10.1186/s12885-024-11873-y.
ABSTRACT
BACKGROUND: Sarcopenia has been identified as a potential negative prognostic factor in cancer patients. In this study, our objective was to investigate the relationship between the assessment method for sarcopenia using the masseter muscle volume measured on computed tomography (CT) images and the life expectancy of patients with oral cancer. We also developed a learning model using deep learning to automatically extract the masseter muscle volume and investigated its association with the life expectancy of oral cancer patients.
METHODS: To develop the learning model for masseter muscle volume, we used manually extracted data from CT images of 277 patients. We established the association between manually extracted masseter muscle volume and the life expectancy of oral cancer patients. Additionally, we compared the correlation between the groups of manual and automatic extraction in the masseter muscle volume learning model.
RESULTS: Our findings revealed a significant association between manually extracted masseter muscle volume on CT images and the life expectancy of patients with oral cancer. Notably, the manual and automatic extraction groups in the masseter muscle volume learning model showed a high correlation. Furthermore, the masseter muscle volume automatically extracted using the developed learning model exhibited a strong association with life expectancy.
CONCLUSIONS: The sarcopenia assessment method is useful for predicting the life expectancy of patients with oral cancer. In the future, it is crucial to validate and analyze various factors within the oral surgery field, extending beyond cancer patients.
PMID:38267924 | DOI:10.1186/s12885-024-11873-y
Deep learning in computed tomography to predict endotype in chronic rhinosinusitis with nasal polyps
BMC Med Imaging. 2024 Jan 24;24(1):25. doi: 10.1186/s12880-024-01203-w.
ABSTRACT
BACKGROUND: As treatment strategies differ according to endotype, rhinologists must accurately determine the endotype in patients affected by chronic rhinosinusitis with nasal polyps (CRSwNP) for the appropriate management. In this study, we aim to construct a novel deep learning model using paranasal sinus computed tomography (CT) to predict the endotype in patients with CRSwNP.
METHODS: We included patients diagnosed with CRSwNP between January 1, 2020, and April 30, 2023. The endotype of patients with CRSwNP in this study was classified as eosinophilic or non-eosinophilic. Sinus CT images (29,993 images) were retrospectively collected, including the axial, coronal, and sagittal planes, and randomly divided into training, validation, and testing sets. A residual network-18 was used to construct the deep learning model based on these images. Loss functions, accuracy functions, confusion matrices, and receiver operating characteristic curves were used to assess the predictive performance of the model. Gradient-weighted class activation mapping was performed to visualize and interpret the operating principles of the model.
RESULTS: Among 251 included patients, 86 and 165 had eosinophilic or non-eosinophilic CRSwNP, respectively. The median (interquartile range) patient age was 49 years (37-58 years), and 153 (61.0%) were male. The deep learning model showed good discriminative performance in the training and validation sets, with areas under the curves of 0.993 and 0.966, respectively. To confirm the model generalizability, the receiver operating characteristic curve in the testing set showed good discriminative performance, with an area under the curve of 0.963. The Kappa scores of the confusion matrices in the training, validation, and testing sets were 0.985, 0.928, and 0.922, respectively. Finally, the constructed deep learning model was used to predict the endotype of all patients, resulting in an area under the curve of 0.962.
CONCLUSIONS: The deep learning model developed in this study may provide a novel noninvasive method for rhinologists to evaluate endotypes in patients with CRSwNP and help develop precise treatment strategies.
PMID:38267881 | DOI:10.1186/s12880-024-01203-w
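The Kappa scores reported above measure agreement between the model's predictions and the reference endotype labels corrected for chance agreement. Cohen's kappa in NumPy, with toy labels:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Observed agreement corrected for chance agreement: (p_o - p_e) / (1 - p_e)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)                  # observed agreement
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_o - p_e) / (1 - p_e)
```

Values above roughly 0.8, like the 0.92-0.99 reported here, are conventionally read as almost perfect agreement.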