Deep learning

Machine Learning-Based Prediction Model for ICU Mortality After Continuous Renal Replacement Therapy Initiation in Children

Tue, 2024-12-17 06:00

Crit Care Explor. 2024 Dec 17;6(12):e1188. doi: 10.1097/CCE.0000000000001188. eCollection 2024 Dec 1.

ABSTRACT

BACKGROUND: Continuous renal replacement therapy (CRRT) is the favored renal replacement therapy in critically ill patients. Predicting clinical outcomes for CRRT patients is difficult due to population heterogeneity, varying clinical practices, and limited sample sizes.

OBJECTIVE: We aimed to predict survival to ICU and hospital discharge in children and young adults receiving CRRT using machine learning (ML) techniques.

DERIVATION COHORT: Patients less than 25 years of age receiving CRRT for acute kidney injury and/or volume overload from 2015 to 2021 (80% of the cohort).

VALIDATION COHORT: Internal validation occurred in a testing group of patients from the dataset (20%).

PREDICTION MODEL: Retrospective international multicenter study utilizing an 80/20 training/testing cohort split and five classifiers (logistic regression with L2 regularization [LR], decision tree, random forest [RF], gradient boosting machine, and support vector machine with a linear kernel) to predict ICU and hospital survival. Model performance was assessed by the area under the receiver operating characteristic curve (AUROC) and, given the imbalance in the dataset, the area under the precision-recall curve (AUPRC).
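
A minimal sketch of this evaluation step, assuming scikit-learn; the synthetic data below stand in for the clinical predictors, so all names and numbers are illustrative:

```python
# Evaluate a binary mortality classifier on an imbalanced 80/20 split,
# reporting both AUROC and AUPRC as the abstract describes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in cohort: ~35% positive class, mirroring the ICU mortality rate.
X, y = make_classification(n_samples=933, weights=[0.65, 0.35], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

print("AUROC:", roc_auc_score(y_te, proba))
# average_precision_score is the usual AUPRC estimate; it is the metric of
# choice here because the outcome classes are imbalanced.
print("AUPRC:", average_precision_score(y_te, proba))
```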

RESULTS: Of the 933 patients included in this study, 538 (54%) were male, with a median age of 8.97 years (interquartile range, 1.81-15.0 yr). ICU mortality was 35% and hospital mortality was 37%. The RF had the best performance for predicting ICU mortality (AUROC, 0.791; AUPRC, 0.878) and LR for hospital mortality (AUROC, 0.777; AUPRC, 0.859). The top two predictors of ICU survival were the Pediatric Logistic Organ Dysfunction-2 score at CRRT initiation and an admission diagnosis of respiratory failure.

CONCLUSIONS: These are the first ML models to predict survival at ICU and hospital discharge in children and young adults receiving CRRT. RF outperformed other models for predicting ICU mortality. Future studies should expand the input variables, conduct a more sophisticated feature selection, and use deep learning algorithms to generate more precise models.

PMID:39688905 | DOI:10.1097/CCE.0000000000001188

Categories: Literature Watch

Geospatial Modeling of Deep Neural Visual Features for Predicting Obesity Prevalence in Missouri: Quantitative Study

Tue, 2024-12-17 06:00

JMIR AI. 2024 Dec 17;3:e64362. doi: 10.2196/64362.

ABSTRACT

BACKGROUND: The global obesity epidemic demands innovative approaches to understand its complex environmental and social determinants. Spatial technologies, such as geographic information systems, remote sensing, and spatial machine learning, offer new insights into this health issue. This study uses deep learning and spatial modeling to predict obesity rates for census tracts in Missouri.

OBJECTIVE: This study aims to develop a scalable method for predicting obesity prevalence using deep convolutional neural networks applied to satellite imagery and geospatial analysis, focusing on 1052 census tracts in Missouri.

METHODS: Our analysis followed 3 steps. First, Sentinel-2 satellite images were processed using the Residual Network-50 model to extract environmental features from 63,592 image chips (224×224 pixels). Second, these features were merged with obesity rate data from the Centers for Disease Control and Prevention for Missouri census tracts. Third, a spatial lag model was used to predict obesity rates and analyze the association between deep neural visual features and obesity prevalence. Spatial autocorrelation was used to identify clusters of obesity rates.

RESULTS: Substantial spatial clustering of obesity rates was found across Missouri, with a Moran's I of 0.68, indicating similar obesity rates among neighboring census tracts. The spatial lag model demonstrated strong predictive performance, with an R2 of 0.93 and a spatial pseudo-R2 of 0.92, explaining 93% of the variation in obesity rates. A local indicators of spatial association (LISA) analysis revealed distinct high and low obesity clusters, which were visualized through choropleth maps.
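
A sketch of this spatial workflow under stated assumptions: it uses the PySAL stack (libpysal, esda, spreg), and the file path, GeoDataFrame, and feature column names are illustrative rather than the paper's:

```python
# Moran's I for spatial autocorrelation, then a maximum-likelihood spatial lag
# model: y = rho * W y + X beta + e.
import geopandas as gpd
from esda.moran import Moran
from libpysal.weights import Queen
from spreg import ML_Lag

tracts = gpd.read_file("mo_census_tracts.gpkg")   # hypothetical file
w = Queen.from_dataframe(tracts)                  # contiguity-based weights
w.transform = "r"                                 # row-standardize

y = tracts[["obesity_rate"]].to_numpy()
X = tracts[["feat_1", "feat_2"]].to_numpy()       # deep visual features (placeholders)

# Global spatial autocorrelation: a value near 0.68 indicates clustering.
mi = Moran(y.flatten(), w)
print("Moran's I:", mi.I, "p-value:", mi.p_sim)

model = ML_Lag(y, X, w=w, name_y="obesity_rate")  # spatial lag regression
print(model.summary)
```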

CONCLUSIONS: This study highlights the effectiveness of integrating deep convolutional neural networks and spatial modeling to predict obesity prevalence based on environmental features from satellite imagery. The model's high accuracy and ability to capture spatial patterns offer valuable insights for public health interventions. Future work should expand the geographical scope and include socioeconomic data to further refine the model for broader applications in obesity research.

PMID:39688897 | DOI:10.2196/64362

Categories: Literature Watch

Advancing miRNA cancer research through artificial intelligence: from biomarker discovery to therapeutic targeting

Tue, 2024-12-17 06:00

Med Oncol. 2024 Dec 17;42(1):30. doi: 10.1007/s12032-024-02579-z.

ABSTRACT

MicroRNAs (miRNAs), a class of small non-coding RNAs, play a vital role in regulating gene expression at the post-transcriptional level. Their discovery has profoundly impacted therapeutic strategies, particularly in cancer treatment, where RNA therapeutics, including miRNA-based targeted therapies, have gained prominence. Advances in RNA sequencing technologies have facilitated a comprehensive exploration of miRNAs, from fundamental research to their diagnostic and prognostic potential in various diseases, notably cancers. However, the manual handling and interpretation of vast RNA datasets pose significant challenges. The advent of artificial intelligence (AI) has revolutionized biological research by efficiently extracting insights from complex data. Machine learning algorithms, particularly deep learning techniques, are effective for identifying critical miRNAs across different cancers and developing prognostic models. Moreover, the integration of AI has led to the creation of comprehensive miRNA databases for identifying mRNA and gene targets, thus facilitating deeper understanding and application in cancer research. This review comprehensively examines current developments in the application of machine learning techniques in miRNA research across diverse cancers. We discuss their roles in identifying biomarkers, elucidating miRNA targets, establishing disease associations, predicting prognostic outcomes, and exploring broader AI applications in cancer research. This review aims to guide researchers in leveraging AI techniques effectively within the miRNA field, thereby accelerating advancements in cancer diagnostics and therapeutics.

PMID:39688780 | DOI:10.1007/s12032-024-02579-z

Categories: Literature Watch

A deep learning method for total-body dynamic PET imaging with dual-time-window protocols

Tue, 2024-12-17 06:00

Eur J Nucl Med Mol Imaging. 2024 Dec 17. doi: 10.1007/s00259-024-07012-1. Online ahead of print.

ABSTRACT

PURPOSE: Prolonged scanning durations are one of the primary barriers to the widespread clinical adoption of dynamic positron emission tomography (PET). In this paper, we developed a deep learning algorithm capable of predicting dynamic images from dual-time-window protocols, thereby shortening the scanning time.

METHODS: This study includes 70 patients (mean age ± standard deviation, 53.61 ± 13.53 years; 32 males) diagnosed with pulmonary nodules or breast nodules between 2022 and 2024. Each patient underwent a 65-min dynamic total-body [18F]FDG PET/CT scan. Acquisitions using early-stop protocols and dual-time-window protocols were simulated to reduce the scanning time. To predict the missing frames, we developed a bidirectional sequence-to-sequence model with an attention mechanism (Bi-AT-Seq2Seq) and compared it with unidirectional or non-attentional models in terms of the mean absolute error (MAE), bias, peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) of the predicted frames. Furthermore, we report the concordance correlation coefficients (CCCs) of the kinetic parameters between the proposed method and traditional methods.
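
A sketch of this frame-level evaluation, assuming NumPy and scikit-image; the random arrays below are illustrative stand-ins for matched predicted and ground-truth frames:

```python
# Compute MAE, bias, PSNR, and SSIM between a predicted and a reference frame.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_metrics(pred, true):
    """pred, true: 2D arrays for one reconstructed late-phase frame."""
    mae = np.mean(np.abs(pred - true))
    bias = np.mean(pred - true)
    rng = true.max() - true.min()
    psnr = peak_signal_noise_ratio(true, pred, data_range=rng)
    ssim = structural_similarity(true, pred, data_range=rng)
    return mae, bias, psnr, ssim

rng_gen = np.random.default_rng(0)
true = rng_gen.random((128, 128))
pred = true + 0.01 * rng_gen.standard_normal((128, 128))
print(frame_metrics(pred, true))
```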

RESULTS: The Bi-AT-Seq2Seq significantly outperformed unidirectional or non-attentional models in terms of MAE, bias, PSNR, and SSIM. Using a dual-time-window protocol consisting of a 10-min early scan followed by a 5-min late scan improved the four metrics of predicted dynamic images by 37.31%, 36.24%, 7.10%, and 0.014%, respectively, compared with the early-stop protocol with a 15-min acquisition. The CCCs of tumor kinetic parameters estimated from recovered full time-activity curves (TACs) were higher than those estimated from abbreviated TACs.

CONCLUSION: The proposed algorithm can accurately generate a complete dynamic acquisition (65 min) from dual-time-window protocols (10 + 5 min).

PMID:39688700 | DOI:10.1007/s00259-024-07012-1

Categories: Literature Watch

Deep learning algorithm enables automated Cobb angle measurements with high accuracy

Tue, 2024-12-17 06:00

Skeletal Radiol. 2024 Dec 17. doi: 10.1007/s00256-024-04853-7. Online ahead of print.

ABSTRACT

OBJECTIVE: To determine the accuracy of automatic Cobb angle measurements by deep learning (DL) on full spine radiographs.

MATERIALS AND METHODS: Full spine radiographs of patients aged > 2 years were screened using the radiology reports to identify radiographs for performing Cobb angle measurements. Two senior musculoskeletal radiologists and one senior orthopedic surgeon independently annotated Cobb angles exceeding 7°, indicating the angle location as either proximal thoracic (apices between T3 and T5), main thoracic (apices between T6 and T11), or thoraco-lumbar (apices between T12 and L4). If at least two readers agreed on the number and location of the angles, and the difference between comparable angles was < 8°, the ground truth was defined as the mean of their measurements. Otherwise, the radiographs were reviewed by the three annotators in consensus. The DL software (BoneMetrics, Gleamer) was evaluated against the manual annotation in terms of mean absolute error (MAE).
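
A sketch of that ground-truth rule as code; each reader's measurements are represented as a dict keyed by curve location, and all values are illustrative:

```python
# Accept the mean of two agreeing readers, otherwise flag for consensus review.
from itertools import combinations

def ground_truth(readings, tol=8.0):
    """readings: list of 3 dicts like {"main_thoracic": 42.0, ...}."""
    for a, b in combinations(readings, 2):
        same_curves = a.keys() == b.keys()            # same number and location
        if same_curves and all(abs(a[k] - b[k]) < tol for k in a):
            return {k: (a[k] + b[k]) / 2 for k in a}  # mean of the agreeing pair
    return None                                       # needs consensus review

readers = [
    {"main_thoracic": 41.0, "thoraco_lumbar": 22.0},
    {"main_thoracic": 44.5, "thoraco_lumbar": 20.0},
    {"main_thoracic": 58.0, "thoraco_lumbar": 30.0},
]
print(ground_truth(readers))  # mean of readers 1 and 2
```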

RESULTS: A total of 345 patients were included in the study (age 33 ± 24 years, 221 women): 179 pediatric patients (< 22 years old) and 166 adult patients (22 to 85 years old). Fifty-three cases were reviewed in consensus. The MAE of the DL algorithm for the main curvature was 2.6° (95% CI [2.0; 3.3]). For the subgroup of pediatric patients, the MAE was 1.9° (95% CI [1.6; 2.2]) versus 3.3° (95% CI [2.2; 4.8]) for adults.

CONCLUSION: The DL algorithm predicted the Cobb angle of scoliotic patients with high accuracy.

PMID:39688663 | DOI:10.1007/s00256-024-04853-7

Categories: Literature Watch

Diffusion model assisted designing self-assembling collagen mimetic peptides as biocompatible materials

Tue, 2024-12-17 06:00

Brief Bioinform. 2024 Nov 22;26(1):bbae622. doi: 10.1093/bib/bbae622.

ABSTRACT

Collagen self-assembly supports its mechanical function, but controlling collagen mimetic peptides (CMPs) to self-assemble into higher-order oligomers with numerous functions remains challenging due to the vast potential amino acid sequence space. Herein, we developed a diffusion model to learn features from different types of human collagens and generate CMPs; 66% of the synthetic CMPs could self-assemble into triple helices. Triple-helical and untwisting states were probed by melting temperature (Tm); hence, we developed a model to predict collagen Tm, achieving a state-of-the-art Pearson's correlation (PC) of 0.95 by cross-validation and a PC of 0.8 for predicting the Tm values of synthetic CMPs. Our chemically synthesized short CMPs and recombinantly expressed long CMPs could self-assemble, with the lowest requirement for hydrogel formation at a concentration of 0.08% (w/v). Five CMPs could promote osteoblast differentiation. Our results demonstrate the potential of computer-aided methods to design functional self-assembling CMPs.
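
A sketch of a cross-validated Pearson-correlation evaluation for a Tm regressor, assuming scikit-learn; the features, targets, and model below are synthetic placeholders, not the paper's:

```python
# Estimate the cross-validated Pearson correlation between predicted and
# measured melting temperatures.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((200, 16))                          # placeholder CMP sequence features
tm = 30 + 40 * X[:, 0] + rng.normal(0, 2, 200)     # synthetic melting temperatures

pred = cross_val_predict(GradientBoostingRegressor(random_state=0), X, tm, cv=5)
print("cross-validated PC:", pearsonr(tm, pred)[0])
```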

PMID:39688478 | DOI:10.1093/bib/bbae622

Categories: Literature Watch

NABP-BERT: NANOBODY®-antigen binding prediction based on bidirectional encoder representations from transformers (BERT) architecture

Tue, 2024-12-17 06:00

Brief Bioinform. 2024 Nov 22;26(1):bbae518. doi: 10.1093/bib/bbae518.

ABSTRACT

Antibody-mediated immunity is crucial in the vertebrate immune system. Nanobodies, also known as VHH or single-domain antibodies (sdAbs), are emerging as promising alternatives to full-length antibodies due to their compact size, precise target selectivity, and stability. However, the limited availability of nanobodies (Nbs) for numerous antigens (Ags) presents a significant obstacle to their widespread application. Understanding the interactions between Nbs and Ags is essential for enhancing their binding affinities and specificities. Experimental identification of these interactions is often costly and time-intensive. To address this issue, we introduce NABP-BERT, a deep-learning model based on the BERT architecture, designed to predict NANOBODY®-Ag binding solely from sequence information. Furthermore, we have developed a general pretrained model with transfer capabilities suitable for protein-related tasks, including protein-protein interaction tasks. NABP-BERT focuses on the surrounding amino acid contexts and outperforms existing methods, achieving an AUROC of 0.986 and an AUPR of 0.985.
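
A sketch of sequence-pair binding prediction with a BERT-style encoder via the Hugging Face transformers API; the public ProtBERT checkpoint and the space-separated residue tokenization are assumptions standing in for NABP-BERT, whose released weights are not used here:

```python
# Encode a nanobody/antigen pair as a two-segment BERT input and score binding.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "Rostlab/prot_bert"  # public protein BERT used as a stand-in
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

nanobody = "Q V Q L V E S G G G L V Q"   # residues as space-separated tokens
antigen = "M K T A Y I A K Q R Q I S"

# The tokenizer joins the pair with [SEP], giving BERT's two-segment input.
inputs = tokenizer(nanobody, antigen, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("P(binding):", torch.softmax(logits, dim=-1)[0, 1].item())
```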

PMID:39688476 | DOI:10.1093/bib/bbae518

Categories: Literature Watch

Metastasis Detection Using True and Artificial T1-Weighted Postcontrast Images in Brain MRI

Tue, 2024-12-17 06:00

Invest Radiol. 2024 Nov 19. doi: 10.1097/RLI.0000000000001137. Online ahead of print.

ABSTRACT

OBJECTIVES: Small lesions are the limiting factor for reducing gadolinium-based contrast agents in brain magnetic resonance imaging (MRI). The purpose of this study was to compare the sensitivity and precision in metastasis detection on true contrast-enhanced T1-weighted (T1w) images and artificial images synthesized by a deep learning method using low-dose images.

MATERIALS AND METHODS: In this prospective, multicenter study (5 centers, 12 scanners), 917 participants underwent brain MRI between October 2021 and March 2023 including T1w low-dose (0.033 mmol/kg) and full-dose (0.1 mmol/kg) images. Forty participants with metastases or unremarkable brain findings were evaluated in a reading (mean age ± SD, 54.3 ± 15.1 years; 24 men). True and artificial T1w images were assessed for metastases in random order with 4 weeks between readings by 2 neuroradiologists. A reference reader reviewed all data to confirm metastases. Performances were compared using mid-P McNemar tests for sensitivity and Wilcoxon signed rank tests for false-positive findings.
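
A sketch of the mid-P McNemar test used for the paired sensitivity comparison, assuming SciPy; the discordant counts below are illustrative:

```python
# Mid-P McNemar test on paired per-lesion detections from the two sequences.
from scipy.stats import binom

def mcnemar_midp(b, c):
    """b: detected on true images only; c: detected on artificial images only."""
    n, k = b + c, min(b, c)
    p_exact = min(1.0, 2.0 * binom.cdf(k, n, 0.5))
    return p_exact - binom.pmf(k, n, 0.5)  # mid-P halves the point probability

# e.g., 20 lesions found only on true images, 8 only on artificial images
print("mid-P:", mcnemar_midp(20, 8))
```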

RESULTS: The reference reader identified 97 metastases. The sensitivity of reader 1 did not differ significantly between sequences (sensitivity [precision]: true, 66.0% [98.5%]; artificial, 61.9% [98.4%]; P = 0.38). With a lower precision than reader 1, reader 2 found significantly more metastases using true images (sensitivity [precision]: true, 78.4% [87.4%]; artificial, 60.8% [80.8%]; P < 0.001). There was no significant difference in sensitivity for metastases ≥5 mm. The number of false-positive findings did not differ significantly between sequences.

CONCLUSIONS: One reader showed a significantly higher overall sensitivity using true images. The similar detection performance for metastases ≥5 mm is promising for applying low-dose imaging in less challenging diagnostic tasks than metastasis detection.

PMID:39688447 | DOI:10.1097/RLI.0000000000001137

Categories: Literature Watch

Encoding matching criteria for cross-domain deformable image registration

Tue, 2024-12-17 06:00

Med Phys. 2024 Dec 17. doi: 10.1002/mp.17565. Online ahead of print.

ABSTRACT

BACKGROUND: Most existing deep learning-based registration methods are trained on single-type images to address same-domain tasks, resulting in performance degradation when applied to new scenarios. Retraining a model for new scenarios requires extra time and data. Therefore, efficient and accurate solutions for cross-domain deformable registration are in demand.

PURPOSE: We argue that the tailor-made matching criteria in traditional registration methods are one of the main reasons they are applicable across different domains. Motivated by this, we devise a registration-oriented encoder to model the matching criteria of image features and structural features, which is beneficial for boosting registration accuracy and adaptability.

METHODS: Specifically, a general feature encoder (Encoder-G) is proposed to capture comprehensive medical image features, while a structural feature encoder (Encoder-S) is designed to encode structural self-similarity into the global representation. Moreover, by updating Encoder-S using one-shot learning, our method can effectively adapt to different domains. The efficacy of our method is evaluated using MRI images from three different domains: brain images (training/testing: 870/90 pairs), abdomen images (training/testing: 1406/90 pairs), and cardiac images (training/testing: 64770/870 pairs). The comparison methods include a traditional method (SyN) and cutting-edge deep networks. The evaluation metrics are the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD).
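
A sketch of the two evaluation metrics, assuming binary NumPy masks on an isotropic grid (voxel spacing is ignored for brevity):

```python
# Dice similarity coefficient and average symmetric surface distance (ASSD).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def assd(a, b):
    # Surface voxels = mask minus its erosion.
    sa = a & ~binary_erosion(a)
    sb = b & ~binary_erosion(b)
    # Distance from each surface voxel to the other surface.
    da = distance_transform_edt(~sb)[sa]   # a-surface -> b-surface distances
    db = distance_transform_edt(~sa)[sb]   # b-surface -> a-surface distances
    return (da.sum() + db.sum()) / (len(da) + len(db))

a = np.zeros((64, 64), bool); a[16:40, 16:40] = True
b = np.zeros((64, 64), bool); b[20:44, 20:44] = True
print("DSC:", dice(a, b), "ASSD:", assd(a, b))
```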

RESULTS: In the single-domain task, our method attains an average DSC of 68.9%/65.2%/72.8%, and ASSD of 9.75/3.82/1.30 mm on abdomen/cardiac/brain images, outperforming the second-best comparison methods by large margins. In the cross-domain task, without one-shot optimization, our method outperforms other deep networks in five out of six cross-domain scenarios and even surpasses symmetric image normalization method (SyN) in two scenarios. By conducting the one-shot optimization, our method successfully surpasses SyN in all six cross-domain scenarios.

CONCLUSIONS: Our method yields favorable results in the single-domain task while ensuring improved generalization and adaptation performance in the cross-domain task, showing its feasibility for the challenging cross-domain registration applications. The code is publicly available at https://github.com/JuliusWang-7/EncoderReg.

PMID:39688347 | DOI:10.1002/mp.17565

Categories: Literature Watch

Development of Deep Learning-Based Virtual Lugol Chromoendoscopy for Superficial Esophageal Squamous Cell Carcinoma

Tue, 2024-12-17 06:00

J Gastroenterol Hepatol. 2024 Dec 17. doi: 10.1111/jgh.16843. Online ahead of print.

ABSTRACT

BACKGROUND: Lugol chromoendoscopy has been shown to increase the sensitivity of detection of esophageal squamous cell carcinoma (ESCC). We aimed to develop a deep learning-based virtual lugol chromoendoscopy (V-LCE) method.

METHODS: We developed still V-LCE images for superficial ESCC using a cycle-consistent generative adversarial network (CycleGAN). Six endoscopists graded the detection and margins of ESCCs using white-light endoscopy (WLE), real lugol chromoendoscopy (R-LCE), and V-LCE on a five-point scale ranging from 1 (poor) to 5 (excellent). We also calculated and compared the color differences between cancerous and non-cancerous areas using WLE, R-LCE, and V-LCE.

RESULTS: Scores for the detection and margins were significantly higher with R-LCE than V-LCE (detection, 4.7 vs. 3.8, respectively; p < 0.001; margins, 4.3 vs. 3.0, respectively; p < 0.001). There were nonsignificant trends towards higher scores with V-LCE than WLE (detection, 3.8 vs. 3.3, respectively; p = 0.089; margins, 3.0 vs. 2.7, respectively; p = 0.130). Color differences were significantly greater with V-LCE than WLE (p < 0.001) and with R-LCE than V-LCE (p < 0.001) (39.6 with R-LCE, 29.6 with V-LCE, and 18.3 with WLE).
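
A sketch of one plausible color-difference computation, assuming a delta-E between the mean CIELAB colors of cancerous and non-cancerous regions (the paper's exact formula is not given here); the image and mask below are illustrative:

```python
# Delta-E (CIEDE2000) between mean colors of cancerous vs. surrounding mucosa.
import numpy as np
from skimage.color import deltaE_ciede2000, rgb2lab

rng = np.random.default_rng(0)
img = rng.random((256, 256, 3))                 # stand-in endoscopic frame (RGB in [0, 1])
cancer_mask = np.zeros((256, 256), bool)
cancer_mask[60:120, 60:120] = True

lab = rgb2lab(img)
mean_cancer = lab[cancer_mask].mean(axis=0)
mean_normal = lab[~cancer_mask].mean(axis=0)
print("delta-E:", deltaE_ciede2000(mean_cancer, mean_normal))
```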

CONCLUSIONS: Our V-LCE performed between R-LCE and WLE in terms of lesion detection, margin delineation, and color difference, suggesting that V-LCE could potentially improve the endoscopic diagnosis of superficial ESCC.

PMID:39687978 | DOI:10.1111/jgh.16843

Categories: Literature Watch

Optimizing the topology of convolutional neural network (CNN) and artificial neural network (ANN) for brain tumor diagnosis (BTD) through MRIs

Tue, 2024-12-17 06:00

Heliyon. 2024 Jul 23;10(16):e35083. doi: 10.1016/j.heliyon.2024.e35083. eCollection 2024 Aug 30.

ABSTRACT

The use of MRI analysis for BTD and tumor type detection has considerable importance within the domain of machine vision. Numerous methodologies have been proposed to address this issue, and significant progress has been achieved via deep learning (DL) approaches. While the majority of approaches using artificial neural networks (ANNs) and deep neural networks (DNNs) demonstrate satisfactory performance in brain tumor diagnosis (BTD), none of these studies can ensure the optimality of the employed learning model structure; put simply, there is room for improvement in the efficiency of these learning models in BTD. This research introduces a novel approach for optimizing the configuration of convolutional neural networks (CNNs) and ANNs to address BTD. The suggested approach employs a CNN to segment brain MRIs, with the model's configurable hyper-parameters tuned by a genetic algorithm (GA). Multi-linear principal component analysis (MPCA) is then used to reduce the dimensionality of the segmented image features. Ultimately, classification is performed by an ANN, in which the GA sets the optimal number of hidden-layer neurons and the appropriate weight vector. The effectiveness of the suggested approach was assessed using the BRATS2014 and BTD20 databases. The results indicate that the proposed method can classify samples from these two databases with average accuracies of 98.6% and 99.1%, respectively, an accuracy improvement of at least 1.1% over preceding methods.
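
A minimal genetic-algorithm sketch for one of the tuned hyper-parameters (the ANN's hidden-layer width), assuming scikit-learn; the real study also tunes the CNN topology and the weight vector, and the dataset here is a placeholder:

```python
# Evolve the hidden-layer width of an MLP by selection, crossover, and mutation.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(n_hidden):
    clf = MLPClassifier(hidden_layer_sizes=(int(n_hidden),), max_iter=200, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.integers(8, 128, size=6)                 # initial population of widths
for gen in range(5):
    scores = np.array([fitness(n) for n in pop])
    parents = pop[np.argsort(scores)[-2:]]         # selection: keep the best two
    children = [(a + b) // 2 for a in parents for b in parents]       # crossover
    mutated = [max(4, int(c) + int(rng.integers(-8, 9))) for c in children]  # mutation
    pop = np.array(list(parents) + mutated[:4])
print("best width:", pop[np.argmax([fitness(n) for n in pop])])
```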

PMID:39687857 | PMC:PMC11647943 | DOI:10.1016/j.heliyon.2024.e35083

Categories: Literature Watch

Smartwatch ECG and artificial intelligence in detecting acute coronary syndrome compared to traditional 12-lead ECG

Tue, 2024-12-17 06:00

Int J Cardiol Heart Vasc. 2024 Dec 1;56:101573. doi: 10.1016/j.ijcha.2024.101573. eCollection 2025 Feb.

ABSTRACT

BACKGROUND: Acute coronary syndromes (ACS) require prompt diagnosis through initial electrocardiograms (ECG), but ECG machines are not always accessible. Meanwhile, smartwatches offering ECG functionality have become widespread. This study evaluates the feasibility of an image-based ECG analysis artificial intelligence (AI) system with smartwatch-based multichannel, asynchronous ECG for diagnosing ACS.

METHODS: Fifty-six patients with ACS and 15 healthy participants were included, and their standard 12-lead and smartwatch-based 9-lead ECGs were analyzed. The ACS group was categorized into ACS with acute total occlusion (ACS-O(+), culprit stenosis ≥99%, n = 44) and ACS without occlusion (ACS-O(-), culprit stenosis 70% to <99%, n = 12) based on coronary angiography. A deep learning-based AI-ECG tool interpreting 2-dimensional ECG images generated probability scores for ST-elevation myocardial infarction (qSTEMI), ACS (qACS), and myocardial injury (qMI: troponin I > 0.1 ng/mL).

RESULTS: The AI-driven qSTEMI, qACS, and qMI demonstrated correlation coefficients of 0.882, 0.874, and 0.872 between standard and smartwatch ECGs (all P < 0.001). The qACS score effectively distinguished ACS-O(±) from control, with AUROC for both ECGs (0.991 for standard and 0.987 for smartwatch, P = 0.745). The AUROC of qSTEMI in identifying ACS-O(+) from control was 0.989 and 0.982 with 12-lead and smartwatch (P = 0.617). Discriminating ACS-O(+) from ACS-O(-) or control presented a slight challenge, with an AUROC for qSTEMI of 0.855 for 12-lead and 0.880 for smartwatch ECGs (P = 0.352).
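
A sketch of the device-agreement analysis, assuming SciPy and paired AI scores from the two recordings; the values below are illustrative:

```python
# Pearson correlation of paired qACS scores from 12-lead vs. smartwatch ECGs.
import numpy as np
from scipy.stats import pearsonr

q_acs_12lead = np.array([0.91, 0.12, 0.88, 0.05, 0.76, 0.33])
q_acs_watch = np.array([0.87, 0.15, 0.90, 0.09, 0.70, 0.30])

r, p = pearsonr(q_acs_12lead, q_acs_watch)
print(f"r = {r:.3f}, p = {p:.3g}")
```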

CONCLUSION: AI-ECG scores from standard and smartwatch-based ECGs showed high concordance, with comparable diagnostic performance in differentiating ACS-O(+) and ACS-O(-). With increasing smartwatch accessibility, they may hold promise for aiding ACS diagnosis, regardless of location.

PMID:39687687 | PMC:PMC11648863 | DOI:10.1016/j.ijcha.2024.101573

Categories: Literature Watch

Improving the generalizability of white blood cell classification with few-shot domain adaptation

Tue, 2024-12-17 06:00

J Pathol Inform. 2024 Nov 7;15:100405. doi: 10.1016/j.jpi.2024.100405. eCollection 2024 Dec.

ABSTRACT

The morphological classification of nucleated blood cells is fundamental for the diagnosis of hematological diseases. Many deep learning algorithms have been implemented to automate this classification task, but most of the time they fail to classify images coming from different sources. This is known as "domain shift". Whereas some research has been conducted in this area, domain adaptation techniques are often computationally expensive and can introduce significant modifications to the initial cell images. In this article, we propose an easy-to-implement workflow in which we trained a model to classify images from two datasets and tested it on images from eight other datasets. An EfficientNet model was trained on a source dataset comprising images from two different datasets. It was afterwards fine-tuned on each of the eight target datasets using 100 or fewer annotated images from those datasets. Images from both the source and the target datasets underwent a color transform to put them into a standardized color style. The importance of the color transform and fine-tuning was evaluated through an ablation study and visually assessed with scatter plots, and an extensive error analysis was carried out. The model achieved an accuracy higher than 80% for every dataset and exceeded 90% for more than half of the datasets. The presented workflow yielded promising results in terms of generalizability, significantly improving performance on target datasets while keeping computational cost low and color transformations consistent. Source code is available at: https://github.com/mc2295/WBC_Generalization.
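
A sketch of the few-shot adaptation step, assuming torchvision; EfficientNet-B0 stands in for the trained source model, and the dataset path, label count, and training schedule are illustrative:

```python
# Fine-tune only the classifier head of an EfficientNet on <=100 target images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # a color-standardization transform would precede this
])
target = datasets.ImageFolder("target_domain_100_images/", transform=tf)
loader = torch.utils.data.DataLoader(target, batch_size=16, shuffle=True)

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, len(target.classes))

for p in model.features.parameters():  # freeze the backbone
    p.requires_grad = False
opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
```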

PMID:39687668 | PMC:PMC11648780 | DOI:10.1016/j.jpi.2024.100405

Categories: Literature Watch

Application of a deep learning algorithm for the diagnosis of HCC

Tue, 2024-12-17 06:00

JHEP Rep. 2024 Sep 16;7(1):101219. doi: 10.1016/j.jhepr.2024.101219. eCollection 2025 Jan.

ABSTRACT

BACKGROUND & AIMS: Hepatocellular carcinoma (HCC) is characterized by a high mortality rate. The Liver Imaging Reporting and Data System (LI-RADS) results in a considerable number of indeterminate observations, rendering an accurate diagnosis difficult.

METHODS: We developed four deep learning models for diagnosing HCC on computed tomography (CT) via a training-validation-testing approach. Thin-slice triphasic CT liver images and relevant clinical information were collected and processed for deep learning. HCC was diagnosed and verified via a 12-month clinical composite reference standard. CT observations among at-risk patients were annotated using LI-RADS. Diagnostic performance was assessed by internal validation and independent external testing. We conducted sensitivity analyses of different subgroups, deep learning explainability evaluation, and misclassification analysis.

RESULTS: From 2,832 patients and 4,305 CT observations, the best-performing model was the Spatio-Temporal 3D Convolution Network (ST3DCN), achieving areas under the receiver-operating-characteristic curve (AUCs) of 0.919 (95% CI, 0.903-0.935) and 0.901 (95% CI, 0.879-0.924) at the observation (n = 1,077) and patient (n = 685) levels, respectively, during internal validation, compared with 0.839 (95% CI, 0.814-0.864) and 0.822 (95% CI, 0.790-0.853), respectively, for standard-of-care radiological interpretation. The negative predictive values of ST3DCN were 0.966 (95% CI, 0.954-0.979) and 0.951 (95% CI, 0.931-0.971), respectively. The observation-level AUCs among at-risk patients, 2-5-cm observations, and singular portovenous-phase analysis of ST3DCN were 0.899 (95% CI, 0.874-0.924), 0.872 (95% CI, 0.838-0.909), and 0.912 (95% CI, 0.895-0.929), respectively. In external testing (551 patients, 717 observations), the AUC of ST3DCN was 0.901 (95% CI, 0.877-0.924), which was non-inferior to radiological interpretation (AUC 0.900; 95% CI, 0.877-0.923).
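
A minimal sketch of a spatio-temporal 3D convolutional classifier over stacked triphasic CT slices, assuming PyTorch; the layer sizes are illustrative and not the published ST3DCN architecture:

```python
# 3D convolutions treat the stacked contrast phases/slices as the depth axis.
import torch
import torch.nn as nn

class TriphasicNet3D(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, phases*slices, H, W)
        return self.head(self.features(x).flatten(1))

x = torch.randn(2, 1, 24, 64, 64)      # e.g., 3 phases x 8 slices, 64x64 crops
print(TriphasicNet3D()(x).shape)       # torch.Size([2, 2])
```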

CONCLUSIONS: ST3DCN achieved strong, robust performance for accurate HCC diagnosis on CT. Thus, deep learning can expedite and improve the process of diagnosing HCC.

IMPACT AND IMPLICATIONS: The clinical applicability of deep learning in HCC diagnosis is potentially huge, especially considering the expected increase in the incidence and mortality of HCC worldwide. Early diagnosis through deep learning can lead to earlier definitive management, particularly for at-risk patients. The model can be broadly deployed for patients undergoing a triphasic contrast CT scan of the liver to reduce the currently high mortality rate of HCC.

PMID:39687602 | PMC:PMC11648772 | DOI:10.1016/j.jhepr.2024.101219

Categories: Literature Watch

Correction methods and applications of ERT in complex terrain

Tue, 2024-12-17 06:00

MethodsX. 2024 Nov 22;13:103012. doi: 10.1016/j.mex.2024.103012. eCollection 2024 Dec.

ABSTRACT

Electrical Resistivity Tomography (ERT) is an efficient geophysical exploration technique widely used in the exploration of groundwater resources, environmental monitoring, engineering geological assessment, and archaeology. However, the undulation of the terrain significantly affects the accuracy of ERT data, potentially leading to false anomalies in the resistivity images and increasing the complexity of interpreting subsurface structures. This paper reviews the progress in the research on terrain correction for resistivity methods since the early 20th century. From the initial physical simulation methods to modern numerical simulation techniques, terrain correction technology has evolved to accommodate a variety of exploration site types. The paper provides a detailed introduction to various terrain correction techniques, including the ratio method, numerical simulation methods (including the finite element method and finite difference method), the angular domain method, conformal transformation method, inversion method, and orthogonal projection method. These methods correct the distortions caused by terrain using different mathematical and physical models, aiming to enhance the interpretative accuracy of ERT data. Although existing correction methods have made progress in mitigating the effects of terrain, they still have limitations such as high computational demands and poor alignment with actual geological conditions. Future research could explore the improvement of existing methods, the enhancement of computational efficiency, the reduction of resource consumption, and the use of advanced technologies like deep learning to improve the precision and reliability of corrections.
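
A sketch of the ratio method named above, in which observed apparent resistivities are scaled by the ratio of flat-earth to terrain forward-model responses over a homogeneous half-space; the arrays stand in for forward-model outputs:

```python
# Ratio-method terrain correction: rho_corr = rho_obs * (rho_flat / rho_terrain).
import numpy as np

rho_observed = np.array([120.0, 95.0, 180.0, 160.0])        # field data (ohm-m)
rho_model_terrain = np.array([110.0, 80.0, 140.0, 130.0])   # homogeneous earth + terrain
rho_model_flat = np.full(4, 100.0)                          # homogeneous flat earth

correction = rho_model_flat / rho_model_terrain
rho_corrected = rho_observed * correction
print(rho_corrected)
```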

PMID:39687593 | PMC:PMC11648252 | DOI:10.1016/j.mex.2024.103012

Categories: Literature Watch

Action recognition in rehabilitation: combining 3D convolution and LSTM with spatiotemporal attention

Tue, 2024-12-17 06:00

Front Physiol. 2024 Dec 2;15:1472380. doi: 10.3389/fphys.2024.1472380. eCollection 2024.

ABSTRACT

This study addresses the limitations of traditional sports rehabilitation, emphasizing the need for improved accuracy and response speed in real-time action detection and recognition in complex rehabilitation scenarios. We propose the STA-C3DL model, a deep learning framework that integrates 3D Convolutional Neural Networks (C3D), Long Short-Term Memory (LSTM) networks, and spatiotemporal attention mechanisms to capture nuanced action dynamics more precisely. Experimental results on multiple datasets, including NTU RGB + D, Smarthome Rehabilitation, UCF101, and HMDB51, show that the STA-C3DL model significantly outperforms existing methods, achieving up to 96.42% accuracy and an F1 score of 95.83% on UCF101, with robust performance across other datasets. The model demonstrates particular strength in handling real-time feedback requirements, highlighting its practical application in enhancing rehabilitation processes. This work provides a powerful, accurate tool for action recognition, advancing the application of deep learning in rehabilitation therapy and offering valuable support to therapists and researchers. Future research will focus on expanding the model's adaptability to unconventional and extreme actions, as well as its integration into a wider range of rehabilitation settings to further support individualized patient recovery.
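
A minimal sketch of the C3D-plus-LSTM pipeline with a simplified, temporal-only attention pooling, assuming PyTorch; the layer sizes are illustrative, not the published STA-C3DL configuration:

```python
# 3D convolutions capture short-range motion, an LSTM models long-range
# dynamics, and learned attention weights pool over time steps.
import torch
import torch.nn as nn

class C3DLSTMAttention(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.c3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),       # keep the temporal axis
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.attn = nn.Linear(64, 1)                  # temporal attention scores
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 3, T, H, W)
        f = self.c3d(x).squeeze(-1).squeeze(-1)       # (batch, 32, T)
        h, _ = self.lstm(f.transpose(1, 2))           # (batch, T, 64)
        w = torch.softmax(self.attn(h), dim=1)        # attend over time steps
        return self.head((w * h).sum(dim=1))

x = torch.randn(2, 3, 16, 64, 64)
print(C3DLSTMAttention()(x).shape)        # torch.Size([2, 10])
```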

PMID:39687520 | PMC:PMC11646842 | DOI:10.3389/fphys.2024.1472380

Categories: Literature Watch

Toward improving reproducibility in neuroimaging deep learning studies

Tue, 2024-12-17 06:00

Front Neurosci. 2024 Dec 2;18:1509358. doi: 10.3389/fnins.2024.1509358. eCollection 2024.

NO ABSTRACT

PMID:39687491 | PMC:PMC11647000 | DOI:10.3389/fnins.2024.1509358

Categories: Literature Watch

Raw dataset of tensile tests in a 3D-printed nylon reinforced with oriented short carbon fibers

Tue, 2024-12-17 06:00

Data Brief. 2024 Nov 20;57:111149. doi: 10.1016/j.dib.2024.111149. eCollection 2024 Dec.

ABSTRACT

This dataset presents the results of tensile tests conducted on 3D-printed nylon composites reinforced with short carbon fibers, commercially known as Onyx™. Specimens were printed using a Markforged™ Mark 2 printer in three different printing orientations (0°, ±45°, and 90°), following the ASTM D638-22 standard for Type IV tensile specimens. The dataset includes mechanical testing data, scanning electron microscope (SEM) images, and digital image correlation (DIC) images. Mechanical test data were collected using an Instron universal testing machine, while SEM images were captured to examine microstructural features and fracture surfaces, both before and after testing. DIC images were obtained using two cameras positioned orthogonally to capture deformation on multiple planes. Limitations include fracture at the radius of the testing region in some 0° specimens and premature failure of 90° specimens, which reduced the number of captured images. These data provide valuable insights into the anisotropic mechanical behavior of 3D-printed composites and can be reused for further research on material behavior under varying conditions, such as multiscale simulations and deep learning applications.

PMID:39687376 | PMC:PMC11647152 | DOI:10.1016/j.dib.2024.111149

Categories: Literature Watch

Dataset of Sentinel-1 SAR and Sentinel-2 RGB-NDVI imagery

Tue, 2024-12-17 06:00

Data Brief. 2024 Nov 20;57:111160. doi: 10.1016/j.dib.2024.111160. eCollection 2024 Dec.

ABSTRACT

This article presents a comprehensive dataset combining Synthetic Aperture Radar (SAR) imagery from the Sentinel-1 mission with optical imagery, including RGB and Normalized Difference Vegetation Index (NDVI), from the Sentinel-2 mission. The dataset consists of 8800 images organized into four folders (SAR_VV, SAR_VH, RGB, and NDVI), each containing 2200 images with dimensions of 512 × 512 pixels. These images were collected from various global locations using random geographic coordinates and strict criteria for cloud cover, snow presence, and water percentage, ensuring high-quality and diverse data. The primary motivation for creating this dataset is to address the limitations of optical sensors, which are often hindered by cloud cover and atmospheric conditions. By integrating SAR data, which is unaffected by these factors, the dataset offers a robust tool for a wide range of applications, including land cover classification, vegetation monitoring, and environmental change detection. The dataset is particularly valuable for training machine learning models that require multimodal inputs, such as translating SAR images to optical imagery or enhancing the quality of noisy data. Additionally, the structure of the dataset and the preprocessing steps applied make it readily usable for various research purposes. The SAR images are processed to Level-1 Ground Range Detected (GRD) format, including radiometric calibration and terrain correction, while the optical images are filtered to ensure minimal cloud interference.
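
A sketch of the NDVI computation behind the dataset's fourth folder, using the standard Sentinel-2 band pairing (B8 = near-infrared, B4 = red); the reflectance values below are illustrative:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

nir = np.array([[0.42, 0.55], [0.30, 0.61]])   # band B8 reflectance
red = np.array([[0.10, 0.12], [0.20, 0.08]])   # band B4 reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # epsilon avoids divide-by-zero
print(ndvi)   # values near 1 indicate dense, healthy vegetation
```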

PMID:39687373 | PMC:PMC11648187 | DOI:10.1016/j.dib.2024.111160

Categories: Literature Watch

Diffusion data augmentation for enhancing Norberg hip angle estimation

Mon, 2024-12-16 06:00

Vet Radiol Ultrasound. 2025 Jan;66(1):e13463. doi: 10.1111/vru.13463.

ABSTRACT

The Norberg angle (NA) plays a crucial role in evaluating hip joint conformation, particularly in canines, by quantifying femoral head subluxation within the hip joint. It is therefore an important metric for evaluating hip joint quality and diagnosing canine hip dysplasia, the most prevalent hereditary orthopedic disorder in dogs. While contemporary tools offer automated quantification of the NA, their use typically entails manual labeling and verification of radiographic images by professional veterinarians. To enhance efficiency and streamline this process, this study aims to develop a tool capable of predicting the NA directly from the image without the need for veterinary intervention. Because acquiring annotated, diverse, high-quality images is challenging, this study introduces diffusion models to expand the training dataset from 219 to 1493 images, encompassing the original images. This augmentation enhances the dataset's diversity and scale, thereby improving the accuracy of Norberg angle estimation. The model predicts four key points (the centers of the left and right femoral heads and the edges of the left and right acetabula), as well as the radii of the femoral heads and the Norberg angles. By evaluating 18 distinct pretrained ImageNet models, we investigated their performance before and after incorporating augmented data from generated images. The results demonstrate a significant enhancement, with an average 35.3% improvement in mean absolute percentage error when utilizing generated images from diffusion models. This study showcases the potential of diffusion modeling in advancing canine hip dysplasia diagnosis and underscores the value of incorporating augmented data to elevate model accuracy.
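
A sketch of deriving one Norberg angle from the predicted key points: the angle at a femoral head center between the line to the contralateral femoral head center and the line to the ipsilateral acetabular edge; the coordinates below are illustrative:

```python
# Norberg angle from three 2D key points, via the dot-product angle formula.
import numpy as np

def norberg_angle(head, other_head, acetabular_edge):
    u = other_head - head
    v = acetabular_edge - head
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

left_head = np.array([100.0, 200.0])
right_head = np.array([300.0, 205.0])
left_rim = np.array([80.0, 120.0])
print("left NA:", norberg_angle(left_head, right_head, left_rim))
```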

PMID:39681980 | DOI:10.1111/vru.13463

Categories: Literature Watch
