Deep learning

GoldDigger and Checkers, computational developments in cryo-scanning transmission electron tomography to improve the quality of reconstructed volumes

Mon, 2024-04-15 06:00

Biol Imaging. 2024 Mar 27;4:e6. doi: 10.1017/S2633903X24000047. eCollection 2024.

ABSTRACT

In this work, we present a pair of tools to improve the fiducial tracking and reconstruction quality of cryo-scanning transmission electron tomography (cryo-STET) datasets. We then demonstrate the effectiveness of these two tools on experimental cryo-STET data. The first tool, GoldDigger, improves the tracking of fiducials in cryo-STET by accommodating the changed appearance of highly defocussed fiducial markers. Since defocus effects are much stronger in scanning transmission electron microscopy than in conventional transmission electron microscopy, existing alignment tools do not perform well without manual intervention. The second tool, Checkers, combines image inpainting and unsupervised deep learning for denoising tomograms. Existing tools for denoising cryo-tomography often rely on paired noisy image frames, which are unavailable in cryo-STET datasets, necessitating a new approach. Finally, we make the two software tools freely available to the cryo-STET community.
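As a rough illustration of the masking idea suggested by the tool's name, the sketch below trains a tiny network to inpaint one colour of a checkerboard from the other, scoring the loss only on the hidden pixels so that no clean or paired frames are needed. The architecture, the zero-fill masking, and all names are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of checkerboard-masked, self-supervised denoising in the
# spirit of Checkers (the actual masking/inpainting scheme may differ).
import torch
import torch.nn as nn

def checkerboard_mask(h, w, parity=0):
    """Boolean mask selecting one colour of a checkerboard grid."""
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return (yy + xx) % 2 == parity

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def training_step(model, noisy, optimizer):
    """Hide one checkerboard colour, inpaint it from the other, and score
    the prediction only on the hidden pixels (no clean target needed)."""
    h, w = noisy.shape[-2:]
    mask = checkerboard_mask(h, w).to(noisy.device)
    inp = noisy.clone()
    inp[..., mask] = 0.0                       # zero-fill is a simplification
    pred = model(inp)
    loss = ((pred - noisy) ** 2)[..., mask].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tilt_image = torch.randn(1, 1, 64, 64)         # stand-in for a STEM tilt image
print(training_step(model, tilt_image, opt))
```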

PMID:38617998 | PMC:PMC11016363 | DOI:10.1017/S2633903X24000047

Categories: Literature Watch

Deep learning of pretreatment multiphase CT images for predicting response to lenvatinib and immune checkpoint inhibitors in unresectable hepatocellular carcinoma

Mon, 2024-04-15 06:00

Comput Struct Biotechnol J. 2024 Apr 3;24:247-257. doi: 10.1016/j.csbj.2024.04.001. eCollection 2024 Dec.

ABSTRACT

OBJECTIVES: Combination therapy of lenvatinib and immune checkpoint inhibitors (CLICI) has emerged as a promising approach for managing unresectable hepatocellular carcinoma (HCC). However, the response to such treatment is observed in only a subset of patients, underscoring the pressing need for reliable methods to identify potential responders.

MATERIALS & METHODS: This was a retrospective analysis involving 120 patients with unresectable HCC, divided into training (n = 72) and validation (n = 48) cohorts. We developed an interpretable deep learning model using multiphase computed tomography (CT) images to predict whether patients will respond to CLICI treatment, based on the Response Evaluation Criteria in Solid Tumors, version 1.1 (RECIST v1.1). We evaluated the model's performance and analyzed the impact of each CT phase. Critical regions influencing predictions were identified and visualized through heatmaps.
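The abstract does not specify the network design; the following is a minimal late-fusion sketch in which each CT phase gets its own encoder and the pooled features feed a binary response head. All layer sizes and the phase names are assumptions.

```python
# Minimal sketch of multiphase CT fusion: one CNN encoder per phase,
# features concatenated for a binary responder/non-responder head.
import torch
import torch.nn as nn

class PhaseEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return self.net(x)

class MultiphaseClassifier(nn.Module):
    def __init__(self, phases=("arterial", "portal", "delayed"), dim=64):
        super().__init__()
        self.encoders = nn.ModuleDict({p: PhaseEncoder(dim) for p in phases})
        self.head = nn.Linear(dim * len(phases), 1)
    def forward(self, inputs):                 # inputs: {phase: (B,1,H,W)}
        feats = [self.encoders[p](x) for p, x in inputs.items()]
        return self.head(torch.cat(feats, dim=1))   # logit for "responder"

model = MultiphaseClassifier()
batch = {p: torch.randn(2, 1, 128, 128) for p in ("arterial", "portal", "delayed")}
print(torch.sigmoid(model(batch)).shape)       # (2, 1) response probabilities
```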

RESULTS: The multiphase model outperformed the best biphase and uniphase models, achieving an area under the curve (AUC) of 0.802 (95% CI = 0.780-0.824). The portal phase images were found to significantly enhance the model's predictive accuracy. Heatmaps identified six critical features influencing treatment response, offering valuable insights to clinicians. Additionally, we have made this model accessible via a web server at http://uhccnet.com/ for ease of use.

CONCLUSIONS: The integration of multiphase CT images with deep learning-generated heatmaps for predicting treatment response provides a robust and practical tool for guiding CLICI therapy in patients with unresectable HCC.

PMID:38617891 | PMC:PMC11015163 | DOI:10.1016/j.csbj.2024.04.001

Categories: Literature Watch

Advanced Abdominal MRI Techniques and Problem-Solving Strategies

Mon, 2024-04-15 06:00

J Korean Soc Radiol. 2024 Mar;85(2):345-362. doi: 10.3348/jksr.2023.0067. Epub 2024 Mar 26.

ABSTRACT

MRI plays an important role in abdominal imaging because of its ability to detect and characterize focal lesions. However, MRI examinations pose several challenges, such as comparatively long scan times and the need for motion management through breath-holding maneuvers. Techniques for reducing scan time while preserving acceptable image quality, such as parallel imaging, compressed sensing, and cutting-edge deep learning techniques, have been developed as problem-solving strategies. Additionally, free-breathing techniques for dynamic contrast-enhanced imaging, such as extra-dimensional volumetric interpolated breath-hold examination, golden-angle radial sparse parallel, and liver acceleration volume acquisition Star, can help patients with severe dyspnea, or those under sedation, undergo abdominal MRI. We aimed to present various advanced abdominal MRI techniques for reducing scan time while maintaining image quality, together with free-breathing techniques for dynamic imaging, and to illustrate cases using the techniques mentioned above. A review of these advanced techniques can assist in the appropriate interpretation of sequences.

PMID:38617869 | PMC:PMC11009130 | DOI:10.3348/jksr.2023.0067

Categories: Literature Watch

Toward Perception-based Anticipation of Cortical Breach During K-wire Fixation of the Pelvis

Mon, 2024-04-15 06:00

Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12031:120311N. doi: 10.1117/12.2612989. Epub 2022 Apr 4.

ABSTRACT

Intraoperative imaging using C-arm X-ray systems enables percutaneous management of fractures by providing real-time visualization of tool-to-tissue relationships. However, estimating appropriate positioning of surgical instruments, such as K-wires, relative to safe bony corridors is challenging due to the projective nature of X-ray images: tool pose in the plane containing the principal ray is difficult to assess, necessitating the acquisition of numerous views onto the anatomy. This task is especially demanding in complex anatomy, such as the superior pubic ramus of the pelvis, and results in high cognitive load and repeat attempts even for experienced trauma surgeons. A perception-based algorithm that interprets interventional radiographs during internal fixation to infer the likelihood of cortical breach - especially early on, when the wire has not been advanced - might reduce both the amount of X-rays acquired for verification and the likelihood of repeat attempts. In this manuscript, we present first steps towards developing such an algorithm. We devise a strategy for in silico collection and annotation of X-ray images suitable for detecting cortical breach of a K-wire in the superior pubic ramus, including those with visible fractures. Beginning with minimal manual annotations of correct trajectories, we randomly perturb entry and exit points and project the 3D scene using a physics-based forward model to obtain a large number of 2D X-ray images with and without cortical breach. We report baseline results for anticipating cortical breach at various K-wire insertion depths, achieving an AUROC score of 0.68 for 50% insertion. Code and data are available at github.com/benjamindkilleen/cortical-breach-detection.
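A minimal sketch of the trajectory-perturbation step described above, assuming millimetre units; a hypothetical corridor test stands in for the paper's physics-based 3D scene and forward projection.

```python
# Sketch of the in silico labeling strategy: jitter an annotated entry/exit
# pair, advance the wire to a given insertion depth, and label the pose by
# whether it stays inside the safe corridor (corridor model assumed).
import numpy as np

rng = np.random.default_rng(0)

def perturb_trajectory(entry_pt, exit_pt, sigma_mm=2.0):
    """Randomly perturb a correct K-wire trajectory in 3D (millimetres)."""
    return (entry_pt + rng.normal(0, sigma_mm, 3),
            exit_pt + rng.normal(0, sigma_mm, 3))

def wire_points(entry_pt, exit_pt, depth_fraction, n=50):
    """Points along the wire, advanced to a fraction of full insertion."""
    t = np.linspace(0.0, depth_fraction, n)[:, None]
    return entry_pt + t * (exit_pt - entry_pt)

annotated_entry = np.array([0.0, 0.0, 0.0])
annotated_exit = np.array([60.0, 10.0, 5.0])      # annotated safe trajectory
e, x = perturb_trajectory(annotated_entry, annotated_exit)
pts = wire_points(e, x, depth_fraction=0.5)       # 50% insertion, as in the paper
# breach = not all(corridor_contains(p) for p in pts)  # corridor test assumed
print(pts.shape)
```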

PMID:38617810 | PMC:PMC11016333 | DOI:10.1117/12.2612989

Categories: Literature Watch

Combining Deep Learning and Structural Modeling to Identify Potential Acetylcholinesterase Inhibitors from Hericium erinaceus

Mon, 2024-04-15 06:00

ACS Omega. 2024 Mar 26;9(14):16311-16321. doi: 10.1021/acsomega.3c10459. eCollection 2024 Apr 9.

ABSTRACT

Alzheimer's disease (AD) is the most common type of dementia, affecting over 50 million people worldwide. Currently, most approved medications for AD inhibit the activity of acetylcholinesterase (AChE), but these treatments often come with harmful side effects. There is growing interest in the use of natural compounds for disease prevention, alleviation, and treatment. This trend is driven by the anticipation that these substances may incur fewer side effects than existing medications. This research presents a computational approach combining machine learning with structural modeling to discover compounds from medicinal mushrooms with a high potential to inhibit the activity of AChE. First, we developed a deep neural network capable of rapidly screening a vast number of compounds to indicate their potential to inhibit AChE activity. Subsequently, we applied deep learning models to screen the compounds in the BACMUSHBASE database, which catalogs the bioactive compounds from cultivated and wild mushroom varieties local to Thailand, resulting in the identification of five promising compounds. Next, the five identified compounds underwent molecular docking techniques to calculate the binding energy between the compounds and AChE. This allowed us to refine the selection to two compounds, erinacerin A and hericenone B. Further analysis of the binding energy patterns between these compounds and the target protein revealed that both compounds displayed binding energy profiles similar to the combined characteristics of donepezil and galanthamine, the prescription drugs for AD. We propose that these two compounds, derived from Hericium erinaceus (also known as lion's mane mushroom), are suitable candidates for further research and development into symptom-alleviating AD medications.
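The abstract does not detail the network; below is a minimal sketch of the screening stage, assuming fingerprint-style inputs (random stand-ins here; real descriptors might come from, e.g., RDKit Morgan fingerprints) and a generic feed-forward classifier.

```python
# Minimal sketch of the screening stage: a feed-forward network that maps a
# molecular fingerprint to an AChE-inhibition probability, used to triage
# compounds before the docking step.
import torch
import torch.nn as nn

class AChEScreen(nn.Module):
    def __init__(self, n_bits=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bits, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, fp):
        return self.net(fp)                    # logit: inhibitor vs. not

model = AChEScreen()
fingerprints = torch.randint(0, 2, (4, 2048)).float()   # 4 candidate compounds
scores = torch.sigmoid(model(fingerprints)).squeeze(1)
ranking = scores.argsort(descending=True)      # top hits go on to docking
print(scores, ranking)
```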

PMID:38617639 | PMC:PMC11007777 | DOI:10.1021/acsomega.3c10459

Categories: Literature Watch

Dose-Incorporated Deep Ensemble Learning for Improving Brain Metastasis SRS Outcome Prediction

Sun, 2024-04-14 06:00

Int J Radiat Oncol Biol Phys. 2024 Apr 12:S0360-3016(24)00505-4. doi: 10.1016/j.ijrobp.2024.04.006. Online ahead of print.

ABSTRACT

PURPOSE/OBJECTIVE(S): To develop a novel deep ensemble learning model for accurate prediction of brain metastasis (BM) local control outcomes following stereotactic radiosurgery (SRS).

MATERIALS/METHODS: A total of 114 BMs from 82 patients were evaluated, including 26 BMs that developed biopsy-confirmed local failure post-SRS. The SRS spatial dose distribution (Dmap) of each BM was registered to the planning contrast-enhanced T1 (T1-CE) MR. Axial slices of the Dmap, T1-CE, and PTV segmentation (PTVseg) intersecting the BM center were extracted within a fixed field-of-view determined by the V60% in Dmap. A spherical projection was implemented to transform planar image content onto a spherical surface using multiple projection centers, and the resultant T1-CE/Dmap/PTVseg projections were stacked as a 3-channel variable. Four VGG-19 deep encoders were utilized in an ensemble design, with each sub-model using a different spherical projection formula as input for BM outcome prediction. In each sub-model, clinical features after positional encoding were fused with VGG-19 deep features to generate logit results. The ensemble's outcome was synthesized from the four sub-model results via logistic regression. A total of 10 model versions with random validation sample assignments were trained to study model robustness. Performance was compared to (1) a single VGG-19 encoder; (2) an ensemble with T1-CE MRI as the sole image input after projections; and (3) an ensemble with the same image input design without clinical feature inclusion.
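The final fusion step, synthesizing the four sub-model results via logistic regression, can be illustrated as follows; the sub-model logits are simulated here, and scikit-learn stands in for whatever fitting routine the authors used.

```python
# Sketch of the ensemble's last stage: logistic regression over the four
# sub-model outputs (one VGG-19 branch per spherical projection formula).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_lesions = 114                                   # as in the abstract
labels = rng.integers(0, 2, n_lesions)            # local failure yes/no
# one logit per sub-model, loosely correlated with the labels
sub_logits = labels[:, None] * 1.5 + rng.normal(0, 1.0, (n_lesions, 4))

fuser = LogisticRegression()
fuser.fit(sub_logits, labels)                     # learn per-branch weights
ensemble_prob = fuser.predict_proba(sub_logits)[:, 1]
print(ensemble_prob[:5])
```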

RESULTS: The ensemble model achieved an excellent AUC-ROC of 0.89 ± 0.02, with high sensitivity (0.82 ± 0.05), specificity (0.84 ± 0.11), and accuracy (0.84 ± 0.08). This outperformed the MRI-only VGG-19 encoder (sensitivity: 0.35 ± 0.01, AUC: 0.64 ± 0.08), the MRI-only deep ensemble (sensitivity: 0.60 ± 0.09, AUC: 0.68 ± 0.06), and the 3-channel ensemble without clinical feature fusion (sensitivity: 0.78 ± 0.08, AUC: 0.84 ± 0.03).

CONCLUSION: Facilitated by the spherical image projection method, a deep ensemble model incorporating Dmap and clinical variables demonstrated an excellent performance in predicting BM post-SRS local failure. Our novel approach could improve other radiotherapy outcome models and warrants further evaluation.

PMID:38615888 | DOI:10.1016/j.ijrobp.2024.04.006

Categories: Literature Watch

Deep learning for the automatic detection and segmentation of parotid gland tumors on MRI

Sun, 2024-04-14 06:00

Oral Oncol. 2024 Apr 12;152:106796. doi: 10.1016/j.oraloncology.2024.106796. Online ahead of print.

ABSTRACT

OBJECTIVES: Parotid gland tumors (PGTs) often occur as incidental findings on magnetic resonance imaging (MRI) that may be overlooked. This study aimed to construct and validate a deep learning model to automatically distinguish parotid glands (PGs) containing a PGT from normal PGs and, in those with a PGT, to segment the tumor.

MATERIALS AND METHODS: The nnUNet combined with a PG-specific post-processing procedure was used to develop the deep learning model, trained on T1-weighted images (T1WI) in 311 patients (180 PGs with tumors and 442 normal PGs) and fat-suppressed (FS)-T2WI in 257 patients (125 PGs with tumors and 389 normal PGs), for detecting and segmenting PGTs with five-fold cross-validation. An additional validation set separated by time, comprising T1WI in 34 and FS-T2WI in 41 patients, was used to validate model performance.

RESULTS AND CONCLUSION: To distinguish PGs with tumors from normal PGs using combined T1WI and FS-T2WI, the deep learning model achieved an accuracy, sensitivity, and specificity of 98.2% (497/506), 100% (119/119), and 97.7% (378/387), respectively, in the cross-validation set and 98.5% (67/68), 100% (20/20), and 97.9% (47/48), respectively, in the validation set. For patients with PGTs, automatic segmentation of PGTs on T1WI and FS-T2WI achieved mean Dice coefficients of 86.1% and 84.2%, respectively, in the cross-validation set, and 85.9% and 81.0%, respectively, in the validation set. The proposed deep learning model may assist in the detection and segmentation of PGTs and, by acting as a second pair of eyes, ensure that incidentally detected PGTs on MRI are not missed.
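For reference, the Dice coefficient reported above measures the overlap between the predicted and reference masks; a minimal implementation:

```python
# Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Overlap score in [0, 1]; 1 means the masks coincide exactly."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True    # predicted tumor
truth = np.zeros((64, 64), bool); truth[22:42, 22:42] = True  # reference tumor
print(f"Dice: {dice(pred, truth):.3f}")
```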

PMID:38615586 | DOI:10.1016/j.oraloncology.2024.106796

Categories: Literature Watch

Nondestructive detection of SSC in multiple pear (Pyrus pyrifolia Nakai) cultivars using Vis-NIR spectroscopy coupled with the Grad-CAM method

Sun, 2024-04-14 06:00

Food Chem. 2024 Apr 8;450:139283. doi: 10.1016/j.foodchem.2024.139283. Online ahead of print.

ABSTRACT

Vis-NIR spectroscopy coupled with chemometric models is frequently used for pear soluble solid content (SSC) prediction. However, model robustness is challenged by variation across pear cultivars. This study explored the feasibility of developing universal models for predicting the SSC of multiple pear varieties to improve the model's generalizability. The mature fruits of 6 pear cultivars with green skin (Pyrus pyrifolia Nakai cv. 'Cuiyu', 'Sucui No.1', and 'Cuiguan') and brown skin (Pyrus pyrifolia Nakai cv. 'Hosui', 'Syusui', and 'Wakahikari') were used to establish single-cultivar models and multi-cultivar universal models using convolutional neural network (CNN), partial least squares (PLS), and support vector regression (SVR) approaches. Multi-cultivar universal models were built using full spectra and important variables extracted by gradient-weighted class activation mapping (Grad-CAM), respectively. The universal models based on important variables obtained satisfactory performance, with RMSEPs of 0.76, 0.59, 0.80, 1.64, 0.98, and 1.03 °Brix on the 6 cultivars, respectively.
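A sketch of how Grad-CAM-style importance can be read off a 1D spectral CNN to select wavelengths: weight the last convolutional activations by the gradient of the SSC prediction and keep the highest-scoring variables. The toy model, layer sizes, and top-k cutoff are assumptions, not the paper's configuration.

```python
# Grad-CAM-style variable selection on a toy 1D spectral CNN.
import torch
import torch.nn as nn

conv = nn.Sequential(nn.Conv1d(1, 8, 7, padding=3), nn.ReLU())
head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 256, 1))

spectrum = torch.randn(1, 1, 256, requires_grad=True)    # 256 wavelengths
acts = conv(spectrum)                                    # (1, 8, 256)
acts.retain_grad()
ssc_pred = head(acts)                                    # predicted SSC
ssc_pred.backward()

weights = acts.grad.mean(dim=2, keepdim=True)            # per-channel weight
cam = torch.relu((weights * acts).sum(dim=1)).squeeze()  # importance per wavelength
important = torch.topk(cam, k=20).indices                # candidate variables
print(sorted(important.tolist())[:10])
```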

PMID:38615528 | DOI:10.1016/j.foodchem.2024.139283

Categories: Literature Watch

Estimation of electrical muscle activity during gait using inertial measurement units with convolution attention neural network and small-scale dataset

Sun, 2024-04-14 06:00

J Biomech. 2024 Apr 11;167:112093. doi: 10.1016/j.jbiomech.2024.112093. Online ahead of print.

ABSTRACT

In general, muscle activity can be measured directly using electromyography (EMG) or calculated with musculoskeletal models. However, neither method is suitable for non-technical users or unstructured environments, so more portable and easier-to-use muscle activity estimation methods are desirable. Deep learning (DL) models combined with inertial measurement units (IMUs) have shown great potential for estimating muscle activity. However, clinical scenarios frequently provide only a very small amount of data, which limits the performance of DL models, and augmentation techniques that efficiently expand a small sample size for DL model training are rarely used. The primary aim of the present study was to develop a novel DL model to estimate the EMG envelope during gait using IMUs with high accuracy. A secondary aim was to develop a novel model-based data augmentation method to improve the performance of the estimation model with a small-scale dataset. Therefore, in the present study, a time convolutional network-based generative adversarial network, MuscleGAN, was proposed for data augmentation, and a subject-independent regression DL model was developed to estimate the EMG envelope. Results suggested that the proposed two-stage method has better generalization and estimation performance than commonly used existing methods. The Pearson correlation coefficient and normalized root-mean-square error derived from the proposed method reached 0.72 and 0.13, respectively, and MuscleGAN improved the estimation accuracy of the lower limb EMG envelope from 70% to 72%. Thus, even using only two IMUs and a very small-scale dataset, the proposed model is still capable of accurately estimating the lower limb EMG envelope, demonstrating considerable potential for application in clinical and daily life scenarios.
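For reference, the two reported evaluation metrics can be computed as below; the range normalization used in the NRMSE is one common choice and an assumption here.

```python
# Pearson correlation and normalized RMSE between estimated and measured
# EMG envelopes, the two metrics reported in the abstract.
import numpy as np

def pearson_r(est, meas):
    est, meas = est - est.mean(), meas - meas.mean()
    return (est * meas).sum() / np.sqrt((est ** 2).sum() * (meas ** 2).sum())

def nrmse(est, meas):
    rmse = np.sqrt(np.mean((est - meas) ** 2))
    return rmse / (meas.max() - meas.min())   # range-normalized (assumption)

t = np.linspace(0, 1, 200)
measured = np.abs(np.sin(2 * np.pi * t))      # stand-in EMG envelope
estimated = measured + np.random.default_rng(0).normal(0, 0.1, t.size)
print(pearson_r(estimated, measured), nrmse(estimated, measured))
```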

PMID:38615480 | DOI:10.1016/j.jbiomech.2024.112093

Categories: Literature Watch

CRIECNN: Ensemble convolutional neural network and advanced feature extraction methods for the precise forecasting of circRNA-RBP binding sites

Sun, 2024-04-14 06:00

Comput Biol Med. 2024 Apr 10;174:108466. doi: 10.1016/j.compbiomed.2024.108466. Online ahead of print.

ABSTRACT

Circular RNAs (circRNAs) have surfaced as important non-coding RNA molecules in biology. Understanding interactions between circRNAs and RNA-binding proteins (RBPs) is crucial in circRNA research. Existing prediction models suffer from limited availability and accuracy, necessitating advanced approaches. In this study, we propose CRIECNN (Circular RNA-RBP Interaction predictor using an Ensemble Convolutional Neural Network), a novel ensemble deep learning model that enhances circRNA-RBP binding site prediction accuracy. CRIECNN employs advanced feature extraction methods and evaluates four distinct sequence datasets and encoding techniques (BERT, Doc2Vec, KNF, EIIP). The model consists of an ensemble convolutional neural network, a BiLSTM, and a self-attention mechanism for feature refinement. Our results demonstrate that CRIECNN outperforms state-of-the-art methods in accuracy and performance, effectively predicting circRNA-RBP interactions from both full-length sequences and fragments. This strategy represents a substantial advance in the prediction of circRNA-RBP interactions and improves our understanding of circRNAs and their regulatory roles.
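Of the four encodings named, KNF (k-mer nucleotide frequency) is the simplest to illustrate; a minimal sketch:

```python
# KNF encoding: each RNA sequence becomes a vector of k-mer frequencies.
from itertools import product
import numpy as np

def knf_encode(seq: str, k: int = 3) -> np.ndarray:
    """Frequency of each of the 4^k k-mers in an RNA sequence."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        j = index.get(seq[i : i + k])
        if j is not None:                 # skip k-mers with ambiguous bases
            vec[j] += 1
    total = vec.sum()
    return vec / total if total else vec

print(knf_encode("ACGUACGUGGC").shape)    # (64,) for k = 3
```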

PMID:38615462 | DOI:10.1016/j.compbiomed.2024.108466

Categories: Literature Watch

DermSynth3D: Synthesis of in-the-wild annotated dermatology images

Sun, 2024-04-14 06:00

Med Image Anal. 2024 Mar 26;95:103145. doi: 10.1016/j.media.2024.103145. Online ahead of print.

ABSTRACT

In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.
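A toy sketch of the compositing idea only: DermSynth3D itself blends lesion patterns onto 3D textured meshes through a differentiable renderer, which is not reproduced here.

```python
# Per-pixel alpha compositing of a lesion patch onto skin texture.
import numpy as np

def blend(skin: np.ndarray, lesion: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """alpha = 1 shows the lesion, alpha = 0 shows the underlying skin."""
    return alpha[..., None] * lesion + (1.0 - alpha[..., None]) * skin

skin = np.full((64, 64, 3), 0.8)                 # stand-in skin texture
lesion = np.full((64, 64, 3), 0.4)               # stand-in lesion pattern
alpha = np.zeros((64, 64)); alpha[24:40, 24:40] = 0.9   # lesion footprint
out = blend(skin, lesion, alpha)
print(out.shape, float(out.min()), float(out.max()))
```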

PMID:38615432 | DOI:10.1016/j.media.2024.103145

Categories: Literature Watch

Reconstruction of 3D knee MRI using deep learning and compressed sensing: a validation study on healthy volunteers

Sun, 2024-04-14 06:00

Eur Radiol Exp. 2024 Apr 15;8(1):47. doi: 10.1186/s41747-024-00446-0.

ABSTRACT

BACKGROUND: To investigate the potential of combining compressed sensing (CS) and artificial intelligence (AI), in particular deep learning (DL), for accelerating three-dimensional (3D) magnetic resonance imaging (MRI) sequences of the knee.

METHODS: Twenty healthy volunteers were examined using a 3-T scanner with a fat-saturated 3D proton density sequence at four different acceleration levels (10, 13, 15, and 17). All sequences were accelerated with CS and reconstructed using both the conventional algorithm and a new DL-based algorithm (CS-AI). Subjective image quality was evaluated by two blinded readers using seven criteria on a 5-point Likert scale (overall impression, artifacts, and delineation of the anterior cruciate ligament, posterior cruciate ligament, menisci, cartilage, and bone). Using mixed models, all CS-AI sequences were compared to the clinical standard (a SENSE sequence with an acceleration factor of 2) and to CS sequences with the same acceleration factor.

RESULTS: 3D sequences reconstructed with CS-AI achieved significantly better values for subjective image quality compared to sequences reconstructed with CS with the same acceleration factor (p ≤ 0.001). The images reconstructed with CS-AI showed that tenfold acceleration may be feasible without significant loss of quality when compared to the reference sequence (p ≥ 0.999).

CONCLUSIONS: For 3-T 3D-MRI of the knee, a DL-based algorithm allowed for additional acceleration of acquisition times compared to the conventional approach. This study, however, is limited by its small sample size and inclusion of only healthy volunteers, indicating the need for further research with a more diverse and larger sample.

TRIAL REGISTRATION: DRKS00024156.

RELEVANCE STATEMENT: Using a DL-based algorithm, 54% faster image acquisition (178 s versus 384 s) for 3D-sequences may be possible for 3-T MRI of the knee.

KEY POINTS: • Combination of compressed sensing and DL improved image quality and allows for significant acceleration of 3D knee MRI. • DL-based algorithm achieved better subjective image quality than conventional compressed sensing. • For 3D knee MRI at 3 T, 54% faster image acquisition may be possible.

PMID:38616220 | DOI:10.1186/s41747-024-00446-0

Categories: Literature Watch

SCB-YOLOv5: a lightweight intelligent detection model for athletes' normative movements

Sun, 2024-04-14 06:00

Sci Rep. 2024 Apr 14;14(1):8624. doi: 10.1038/s41598-024-59218-w.

ABSTRACT

Intelligent detection of athlete behavior is beneficial for guiding sports instruction, and existing mature target detection algorithms provide significant support for this task. However, large-scale target detection algorithms often encounter challenges in practical application scenarios. We propose SCB-YOLOv5 to detect the standardized movements of gymnasts. First, the movements of aerobics athletes were captured and labeled using the labelImg software to establish the athlete normative behavior dataset, which was then enhanced by dataset augmentation using Mosaic9. Then, we improved YOLOv5 by (1) incorporating the structures of ShuffleNet V2 and the convolutional block attention module to reconstruct the Backbone, effectively reducing the parameter size while maintaining the network's feature extraction capability; and (2) adding a weighted bidirectional feature pyramid network to the multiscale feature fusion, to acquire precise channel and positional information through the global receptive field of the feature maps. The resulting SCB-YOLOv5 is 56.9% lighter than YOLOv5. Its detection precision is 93.7%, with a recall of 99% and an mAP value of 94.23%, a 3.53% improvement over the original algorithm. Extensive experiments have verified that SCB-YOLOv5 can meet the requirements for on-site athlete action detection. Our code and models are available at https://github.com/qingDu1/SCB-YOLOv5.
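A minimal sketch of the convolutional block attention module (CBAM) mentioned above, in its standard formulation of channel attention followed by spatial attention; SCB-YOLOv5's exact integration into the backbone is not reproduced here.

```python
# CBAM: channel attention (shared MLP over avg/max pooled descriptors),
# then spatial attention (7x7 conv over channel-pooled maps).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared channel MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # channel attention
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True),       # spatial attention
                       x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 40, 40)                     # a backbone feature map
print(CBAM(64)(feat).shape)                           # shape preserved
```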

PMID:38616199 | DOI:10.1038/s41598-024-59218-w

Categories: Literature Watch

An explainable long short-term memory network for surgical site infection identification

Sun, 2024-04-14 06:00

Surgery. 2024 Apr 13:S0039-6060(24)00142-9. doi: 10.1016/j.surg.2024.03.006. Online ahead of print.

ABSTRACT

BACKGROUND: Currently, surgical site infection surveillance relies on labor-intensive manual chart review. Recently suggested solutions involve machine learning to identify surgical site infections directly from the medical record. Deep learning is a form of machine learning that has historically performed better than traditional methods while being harder to interpret. We propose a deep learning model, a long short-term memory network with an attention layer for explainability, for identifying surgical site infection from the medical record.
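A minimal sketch of such an attentive LSTM, assuming a generic per-time-step feature matrix; the authors' feature pipeline, layer sizes, and training details are not given in the abstract.

```python
# LSTM with an attention layer over time steps: the attention weights
# indicate which parts of the record drove the SSI prediction.
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        context = (w * h).sum(dim=1)
        return self.head(context), w             # SSI logit + weights

model = AttentiveLSTM(n_features=32)
record = torch.randn(4, 30, 32)                  # 30 time steps of features
logit, weights = model(record)
print(logit.shape, weights.shape)                # (4, 1), (4, 30, 1)
```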

METHODS: We retrieved structured data and clinical notes from the University of Utah Health System's electronic health care record for operative events randomly selected for manual chart review from January 2016 to June 2021. Surgical site infection occurring within 30 days of surgery was determined according to the National Surgical Quality Improvement Program definition. We trained the long short-term memory model along with traditional machine learning models for comparison. We calculated several performance metrics from a holdout test set and performed additional analyses to understand the performance of the long short-term memory, including an explainability analysis.

RESULTS: Surgical site infection was present in 4.7% of the total 9,185 operative events. The area under the receiver operating characteristic curve and the sensitivity of the long short-term memory were higher (area under the receiver operating characteristic curve: 0.954, sensitivity: 0.920) compared to the top traditional model (area under the receiver operating characteristic curve: 0.937, sensitivity: 0.736). The top 5 features of the long short-term memory included 2 procedure codes and 3 laboratory values.

CONCLUSION: Surgical site infection surveillance is vital for the reduction of surgical site infection rates. Our explainable long short-term memory achieved a comparable area under the receiver operating characteristic curve and greater sensitivity when compared to traditional machine learning methods. With explainable deep learning, automated surgical site infection surveillance could replace burdensome manual chart review processes.

PMID:38616153 | DOI:10.1016/j.surg.2024.03.006

Categories: Literature Watch

Geriatrics and artificial intelligence in Spain (Ger-IA project): talking to ChatGPT, a nationwide survey

Sun, 2024-04-14 06:00

Eur Geriatr Med. 2024 Apr 14. doi: 10.1007/s41999-024-00970-7. Online ahead of print.

ABSTRACT

PURPOSE: The purposes of the study were to describe the degree of agreement between geriatricians and the answers given by an AI tool (ChatGPT) to questions related to different areas of geriatrics, to study the differences between specialists and residents in geriatrics in terms of their degree of agreement with ChatGPT, and to analyse the mean scores obtained by areas of knowledge/domains.

METHODS: An observational study was conducted involving 126 doctors from 41 geriatric medicine departments in Spain. Ten questions about geriatric medicine were posed to ChatGPT, and doctors evaluated the AI's answers using a Likert scale. Sociodemographic variables were included. Questions were categorized into five knowledge domains, and means and standard deviations were calculated for each.

RESULTS: 130 doctors answered the questionnaire; 126 (69.8% women, mean age 41.4 [9.8]) were included in the final analysis. The mean score obtained by ChatGPT was 3.1/5 [0.67]. Specialists rated ChatGPT lower than residents (3.0/5 vs. 3.3/5 points, respectively, P < 0.05). By domain, ChatGPT scored better on general/theoretical questions (M: 3.96; SD: 0.71) than on complex decisions/end-of-life situations (M: 2.50; SD: 0.76), and answers related to diagnosis/performance of complementary tests obtained the lowest scores (M: 2.48; SD: 0.77).

CONCLUSION: Scores varied considerably depending on the area of knowledge. Questions related to theoretical aspects of challenges/the future of geriatrics obtained better scores, whereas for complex decision-making, the appropriateness of therapeutic effort, or decisions about diagnostic tests, professionals indicated poorer performance. AI is likely to be incorporated into some areas of medicine, but it still presents important limitations, mainly in complex medical decision-making.

PMID:38615289 | DOI:10.1007/s41999-024-00970-7

Categories: Literature Watch

Development and validation of a multi-modality fusion deep learning model for differentiating glioblastoma from solitary brain metastases

Sat, 2024-04-13 06:00

Zhong Nan Da Xue Xue Bao Yi Xue Ban. 2024 Jan 28;49(1):58-67. doi: 10.11817/j.issn.1672-7347.2024.230248.

ABSTRACT

OBJECTIVES: Glioblastoma (GBM) and brain metastases (BMs) are the two most common malignant brain tumors in adults. Magnetic resonance imaging (MRI) is a commonly used method for screening and evaluating the prognosis of brain tumors, but the specificity and sensitivity of conventional MRI sequences in the differential diagnosis of GBM and BMs are limited. In recent years, deep neural networks have shown great potential for diagnostic classification and for building clinical decision support systems. This study aims to apply radiomics features extracted by deep learning techniques to explore the feasibility of accurate preoperative classification of newly diagnosed GBM and solitary brain metastases (SBMs), and to further explore the impact of multimodality data fusion on the classification task.

METHODS: Standard-protocol cranial MRI sequence data from 135 newly diagnosed GBM patients and 73 patients with SBMs confirmed by histopathologic or clinical diagnosis were retrospectively analyzed. First, structural T1-weighted, contrast-enhanced T1-weighted (T1C), and T2-weighted images were selected as the 3 inputs to the model; regions of interest (ROIs) were manually delineated on the registered three modal MR images, and multimodality radiomics features were obtained; dimensionality was then reduced using a random forest (RF)-based feature selection method, and the importance of each feature was further analyzed. Second, a contrastive disentanglement method was used to find the shared and complementary features between the different modalities. Finally, the GBM/SBM label of each sample was predicted by fusing the 2 types of features from the different modalities.
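The RF-based feature selection step might look like the following sketch; the features are simulated, and the cutoff of 30 is an illustrative assumption.

```python
# Rank radiomics features by random forest importance and keep the top ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(208, 300))      # 208 patients (135 GBM + 73 SBM) x 300 features
y = rng.integers(0, 2, 208)          # GBM (0) vs. SBM (1) labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]
selected = order[:30]                # keep the 30 most important features
X_reduced = X[:, selected]
print(X_reduced.shape)
```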

RESULTS: The radiomics features combined with machine learning and the multi-modal fusion method had good discriminatory ability for GBM and SBMs. Furthermore, compared with single-modal data, the multimodal fusion models using machine learning algorithms such as support vector machine (SVM), logistic regression, RF, adaptive boosting (AdaBoost), and gradient boosting decision tree (GBDT) achieved significant improvements, with area under the curve (AUC) values of 0.974, 0.978, 0.943, 0.938, and 0.947, respectively. Our contrastive disentangled multi-modal MR fusion method also performed well, with AUC, accuracy (ACC), sensitivity (SEN), and specificity (SPE) of 0.985, 0.984, 0.900, and 0.990, respectively, in the test set; compared with other multi-modal fusion methods, its AUC, ACC, and SEN were all the best. In the ablation experiment verifying the effect of each module component, AUC, ACC, and SEN increased by 1.6%, 10.9%, and 15.0%, respectively, when the 3 loss functions were used simultaneously.

CONCLUSIONS: A deep learning-based contrastive disentangled multi-modal MR radiomics feature fusion technique helps to improve the accuracy of GBM and SBM classification.

PMID:38615167 | DOI:10.11817/j.issn.1672-7347.2024.230248

Categories: Literature Watch

Deep learning evaluation of echocardiograms to identify occult atrial fibrillation

Sat, 2024-04-13 06:00

NPJ Digit Med. 2024 Apr 13;7(1):96. doi: 10.1038/s41746-024-01090-z.

ABSTRACT

Atrial fibrillation (AF) often escapes detection, given its frequent paroxysmal and asymptomatic presentation. Deep learning of transthoracic echocardiograms (TTEs), which have structural information, could help identify occult AF. We created a two-stage deep learning algorithm using a video-based convolutional neural network model that (1) distinguished whether TTEs were in sinus rhythm or AF and then (2) predicted which of the TTEs in sinus rhythm were in patients who had experienced AF within 90 days. Our model, trained on 111,319 TTE videos, distinguished TTEs in AF from those in sinus rhythm with high accuracy in a held-out test cohort (AUC 0.96 (0.95-0.96), AUPRC 0.91 (0.90-0.92)). Among TTEs in sinus rhythm, the model predicted the presence of concurrent paroxysmal AF (AUC 0.74 (0.71-0.77), AUPRC 0.19 (0.16-0.23)). Model discrimination remained similar in an external cohort of 10,203 TTEs (AUC of 0.69 (0.67-0.70), AUPRC 0.34 (0.31-0.36)). Performance held across patients who were women (AUC 0.76 (0.72-0.81)), older than 65 years (0.73 (0.69-0.76)), or had a CHA2DS2VASc ≥2 (0.73 (0.79-0.77)). The model performed better than using clinical risk factors (AUC 0.64 (0.62-0.67)), TTE measurements (0.64 (0.62-0.67)), left atrial size (0.63 (0.62-0.64)), or CHA2DS2VASc (0.61 (0.60-0.62)). An ensemble model in a cohort subset combining the TTE model with an electrocardiogram (ECG) deep learning model performed better than using the ECG model alone (AUC 0.81 vs. 0.79, p = 0.01). Deep learning using TTEs can predict patients with active or occult AF and could be used for opportunistic AF screening that could lead to earlier treatment.

PMID:38615104 | DOI:10.1038/s41746-024-01090-z

Categories: Literature Watch

Deep learning predictions of TCR-epitope interactions reveal epitope-specific chains in dual alpha T cells

Sat, 2024-04-13 06:00

Nat Commun. 2024 Apr 13;15(1):3211. doi: 10.1038/s41467-024-47461-8.

ABSTRACT

T cells have the ability to eliminate infected and cancer cells and play an essential role in cancer immunotherapy. T cell activation is elicited by the binding of the T cell receptor (TCR) to epitopes displayed on MHC molecules, and the TCR specificity is determined by the sequence of its α and β chains. Here, we collect and curate a dataset of 17,715 αβTCRs interacting with dozens of class I and class II epitopes. We use this curated data to develop MixTCRpred, an epitope-specific TCR-epitope interaction predictor. MixTCRpred accurately predicts TCRs recognizing several viral and cancer epitopes. MixTCRpred further provides a useful quality control tool for multiplexed single-cell TCR sequencing assays of epitope-specific T cells and pinpoints a substantial fraction of putative contaminants in public databases. Analysis of epitope-specific dual α T cells demonstrates that MixTCRpred can identify α chains mediating epitope recognition. Applying MixTCRpred to TCR repertoires from COVID-19 patients reveals enrichment of clonotypes predicted to bind an immunodominant SARS-CoV-2 epitope. Overall, MixTCRpred provides a robust tool to predict TCRs interacting with specific epitopes and interpret TCR-sequencing data from both bulk and epitope-specific T cells.

PMID:38615042 | DOI:10.1038/s41467-024-47461-8

Categories: Literature Watch

Fully automated deep learning model for detecting proximity of mandibular third molar root to inferior alveolar canal using panoramic radiographs

Sat, 2024-04-13 06:00

Oral Surg Oral Med Oral Pathol Oral Radiol. 2024 Feb 20:S2212-4403(24)00067-1. doi: 10.1016/j.oooo.2024.02.011. Online ahead of print.

ABSTRACT

OBJECTIVE: This study endeavored to develop a novel, fully automated deep-learning model to determine the topographic relationship between mandibular third molar (MM3) roots and the inferior alveolar canal (IAC) using panoramic radiographs (PRs).

STUDY DESIGN: A total of 1570 eligible subjects with MM3s who had paired PR and cone beam computed tomography (CBCT) from January 2019 to December 2020 were retrospectively collected and randomly grouped into training (80%), validation (10%), and testing (10%) cohorts. The spatial relationship of MM3/IAC was assessed by CBCT and set as the ground truth. MM3-IACnet, a modified deep learning network based on YOLOv5 (You only look once), was trained to detect MM3/IAC proximity using PR. Its diagnostic performance was further compared with dentists, AlexNet, GoogleNet, VGG-16, ResNet-50, and YOLOv5 in another independent cohort with 100 high-risk MM3 defined as root overlapping with IAC on PR.

RESULTS: The MM3-IACnet performed best in predicting the MM3/IAC proximity, as evidenced by the highest accuracy (0.885), precision (0.899), and area under the curve value (0.95), and the least time required compared with the other models. Moreover, our MM3-IACnet outperformed the other models in MM3/IAC risk prediction in high-risk cases.

CONCLUSION: MM3-IACnet model can assist clinicians in MM3s risk assessment and treatment planning by detecting MM3/IAC topographic relationship using PR.

PMID:38614873 | DOI:10.1016/j.oooo.2024.02.011

Categories: Literature Watch

Age and sex estimation in cephalometric radiographs based on multitask convolutional neural networks

Sat, 2024-04-13 06:00

Oral Surg Oral Med Oral Pathol Oral Radiol. 2024 Feb 20:S2212-4403(24)00069-5. doi: 10.1016/j.oooo.2024.02.010. Online ahead of print.

ABSTRACT

OBJECTIVES: Age and sex characteristics are evident in cephalometric radiographs (CRs), yet their accurate estimation remains challenging due to the complexity of these images. This study aimed to harness deep learning to automate age and sex estimation from CRs, potentially simplifying their interpretation.

STUDY DESIGN: We compared the performance of 4 deep learning models (SVM, R-net, VGG16-SingleTask, and our proposed VGG16-MultiTask) in estimating age and sex on the testing dataset, utilizing a VGG16-based multitask deep learning model trained on 4,557 CRs. Gradient-weighted class activation mapping (Grad-CAM) was incorporated to visualize the regions used to identify sex. Performance was assessed using the mean absolute error (MAE), specificity, sensitivity, F1 score, and area under the curve (AUC) in receiver operating characteristic analysis.
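A minimal sketch of the multitask idea: a shared VGG16-style trunk with two heads, one regressing age and one classifying sex, trained with a combined loss. The L1 age loss, equal loss weighting, and the torchvision VGG16 trunk are assumptions; the paper's heads may differ.

```python
# Shared trunk + two task heads, combined regression/classification loss.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGG16MultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = vgg16(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.age_head = nn.Linear(512, 1)    # regression (years)
        self.sex_head = nn.Linear(512, 2)    # classification logits

    def forward(self, x):
        f = self.pool(self.trunk(x)).flatten(1)
        return self.age_head(f), self.sex_head(f)

model = VGG16MultiTask()
imgs = torch.randn(2, 3, 224, 224)           # stand-in cephalometric radiographs
age_pred, sex_logits = model(imgs)
loss = nn.functional.l1_loss(age_pred, torch.tensor([[25.0], [40.0]])) \
     + nn.functional.cross_entropy(sex_logits, torch.tensor([0, 1]))
print(age_pred.shape, sex_logits.shape, loss.item())
```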

RESULTS: The VGG16-MultiTask model outperformed the others, with the lowest MAE (0.864±1.602) and highest sensitivity (0.85), specificity (0.88), F1 score (0.863), and AUC (0.93), demonstrating superior efficacy and robust performance.

CONCLUSIONS: The VGG multitask model demonstrates significant potential in enhancing age and sex estimation from cephalometric analysis, underscoring the role of AI in improving biomedical interpretations.

PMID:38614872 | DOI:10.1016/j.oooo.2024.02.010

Categories: Literature Watch
