Deep learning

FDSR: An Interpretable Frequency Division Stepwise Process Based Single-Image Super-Resolution Network

Wed, 2024-02-28 06:00

IEEE Trans Image Process. 2024 Feb 28;PP. doi: 10.1109/TIP.2024.3368960. Online ahead of print.

ABSTRACT

Deep learning has excelled in single-image super-resolution (SISR) applications, yet the lack of interpretability in most deep learning-based SR networks hinders their applicability, especially in fields like medical imaging that require transparent computation. To address these problems, we present an interpretable frequency division SR network that operates in the image frequency domain. It comprises a frequency division module and a step-wise reconstruction method, which divide the image into different frequency components and perform reconstruction accordingly. We develop a frequency division loss function to ensure that each ReM operates solely at one image frequency. These methods establish an interpretable framework for SR networks, visualizing the image reconstruction process and reducing the black-box nature of SR networks. Additionally, we revisit the subpixel layer upsampling process by deriving its inverse process and designing a displacement generation module. This interpretable upsampling process incorporates subpixel information and is similar to pre-upsampling frameworks. Furthermore, we develop a new ReM based on interpretable Hessian attention to enhance network performance. Extensive experiments demonstrate that our network, without the frequency division loss, outperforms state-of-the-art methods qualitatively and quantitatively. The inclusion of the frequency division loss enhances the network's interpretability and robustness, while only slightly decreasing the PSNR and SSIM metrics by an average of 0.48 dB and 0.0049, respectively.
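
The subpixel upsampling that the abstract revisits, together with its exact inverse, can be illustrated with PyTorch's pixel shuffle pair; a minimal sketch, not the authors' code:

```python
import torch
import torch.nn as nn

# Subpixel upsampling: rearrange r*r channel groups into an r-times larger plane.
r = 2
upsample = nn.PixelShuffle(r)    # (B, C*r^2, H, W) -> (B, C, H*r, W*r)
inverse = nn.PixelUnshuffle(r)   # exact inverse: (B, C, H*r, W*r) -> (B, C*r^2, H, W)

x = torch.randn(1, 3 * r * r, 16, 16)
y = upsample(x)                  # (1, 3, 32, 32)
x_back = inverse(y)              # recovers the original tensor exactly
print(y.shape, torch.allclose(x, x_back))
```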

PMID:38416622 | DOI:10.1109/TIP.2024.3368960

Categories: Literature Watch

Fluid Inverse Volumetric Modeling and Applications from Surface Motion

Wed, 2024-02-28 06:00

IEEE Trans Vis Comput Graph. 2024 Feb 28;PP. doi: 10.1109/TVCG.2024.3370551. Online ahead of print.

ABSTRACT

In this study, we devise a framework for volumetrically reconstructing fluid from observable, measurable free surface motion. Our innovative method amalgamates the benefits of deep learning and conventional simulation to preserve the guiding motion and temporal coherence of the reproduced fluid. We infer surface velocities by encoding and decoding spatiotemporal features of surface sequences, and a 3D CNN is used to generate the volumetric velocity field, which is then combined with 3D labels of obstacles and boundaries. Concurrently, we employ a network to estimate the fluid's physical properties. To progressively evolve the flow field over time, we input the reconstructed velocity field and estimated parameters into the physical simulator as the initial state. Our approach yields promising results for both synthetic fluid generated by different fluid solvers and captured real fluid. The developed framework naturally lends itself to a variety of graphics applications, such as 1) effective reproductions of fluid behaviors visually congruent with the observed surface motion, and 2) physics-guided re-editing of fluid scenes. Extensive experiments affirm that our novel method surpasses state-of-the-art approaches for 3D fluid inverse modeling and animation in graphics.
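
As a rough illustration of the volumetric stage described above (a 3D CNN that maps encoded surface features to a velocity field handed to the simulator), here is a minimal sketch; the channel counts and layers are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class VolumetricVelocityNet(nn.Module):
    """Toy 3D CNN: encoded surface features in, 3-channel (u, v, w) velocity field out."""
    def __init__(self, in_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3, kernel_size=3, padding=1),  # velocity components
        )

    def forward(self, x):
        return self.net(x)

features = torch.randn(1, 8, 32, 32, 32)      # encoded surface features on a 32^3 grid
velocity = VolumetricVelocityNet()(features)  # (1, 3, 32, 32, 32), used as the simulator's initial state
```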

PMID:38416615 | DOI:10.1109/TVCG.2024.3370551

Categories: Literature Watch

Deep Learning-Assisted Automated Multidimensional Single Particle Tracking in Living Cells

Wed, 2024-02-28 06:00

Nano Lett. 2024 Feb 28. doi: 10.1021/acs.nanolett.3c04870. Online ahead of print.

ABSTRACT

The translational and rotational dynamics of anisotropic optical nanoprobes revealed in single particle tracking (SPT) experiments offer molecular-level information about cellular activities. Here, we report an automated high-speed multidimensional SPT system integrated with a deep learning algorithm for tracking the 3D orientation of anisotropic gold nanoparticle probes in living cells with high localization precision (<10 nm) and temporal resolution (0.9 ms), overcoming the limitations of rotational tracking under low signal-to-noise ratio (S/N) conditions. This method can resolve the azimuth (0°-360°) and polar angles (0°-90°) with errors of less than 2° on the experimental and simulated data under S/N of ∼4. Even when the S/N approaches the limit of 1, this method still maintains better robustness and noise resistance than the conventional pattern matching methods. The usefulness of this multidimensional SPT system has been demonstrated with a study of the motions of cargos transported along the microtubules within living cells.

PMID:38416583 | DOI:10.1021/acs.nanolett.3c04870

Categories: Literature Watch

Automatic Detection of 30 Fundus Diseases Using Ultra-Widefield Fluorescein Angiography with Deep Experts Aggregation

Wed, 2024-02-28 06:00

Ophthalmol Ther. 2024 Feb 28. doi: 10.1007/s40123-024-00900-7. Online ahead of print.

ABSTRACT

INTRODUCTION: Inaccurate, untimely diagnoses of fundus diseases lead to vision-threatening complications and even blindness. We built a deep learning platform (DLP) for automatic detection of 30 fundus diseases using ultra-widefield fluorescein angiography (UWFFA) with deep experts aggregation.

METHODS: This retrospective and cross-sectional database study included a total of 61,609 UWFFA images dating from 2016 to 2021, involving more than 3364 subjects in multiple centers across China. All subjects were divided into 30 different groups. The state-of-the-art convolutional neural network architecture ConvNeXt was chosen as the backbone; the proposed system was trained and its receiver operating characteristic (ROC) curve was evaluated on the internal test data and an external test dataset. We compared the classification performance of the proposed system with that of ophthalmologists, including two retinal specialists.

RESULTS: We built a DLP to analyze UWFFA, which can detect up to 30 fundus diseases, with a frequency-weighted average area under the receiver operating characteristic curve (AUC) of 0.940 in the primary test dataset and 0.954 in the external multi-hospital test dataset. The tool shows comparable accuracy with retina specialists in diagnosis and evaluation.

CONCLUSIONS: This is the first study on a large-scale UWFFA dataset for multiple retinal disease classification. We believe that our UWFFA DLP advances artificial intelligence (AI)-based diagnosis of various retinal diseases and could contribute to labor savings and precision medicine, especially in remote areas.
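
The frequency-weighted average AUC reported above can be computed in a few lines; a minimal sketch with scikit-learn, using synthetic labels and scores purely for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_classes = 30                                   # 30 fundus disease groups
y_true = rng.integers(0, n_classes, size=1000)   # synthetic ground-truth labels
scores = rng.random((1000, n_classes))
scores /= scores.sum(axis=1, keepdims=True)      # synthetic per-class probabilities

# 'weighted' averages the per-class one-vs-rest AUCs by class frequency.
auc = roc_auc_score(y_true, scores, multi_class="ovr", average="weighted")
print(f"frequency-weighted AUC: {auc:.3f}")
```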

PMID:38416330 | DOI:10.1007/s40123-024-00900-7

Categories: Literature Watch

Electrochemical random-access memory: recent advances in materials, devices, and systems towards neuromorphic computing

Wed, 2024-02-28 06:00

Nano Converg. 2024 Feb 28;11(1):9. doi: 10.1186/s40580-024-00415-8.

ABSTRACT

Artificial neural networks (ANNs), inspired by the human brain's network of neurons and synapses, enable computing machines and systems to execute cognitive tasks, thus embodying artificial intelligence (AI). Since the performance of ANNs generally improves with the expansion of the network size, and most of the computation time is spent on matrix operations, AI computations have been performed not only on general-purpose central processing units (CPUs) but also on architectures that facilitate parallel computation, such as graphics processing units (GPUs) and custom-designed application-specific integrated circuits (ASICs). Nevertheless, the substantial energy consumption stemming from frequent data transfers between processing units and memory has remained a persistent challenge. In response, a novel approach has emerged: an in-memory computing architecture harnessing analog memory elements. This innovation promises a notable advancement in energy efficiency. The core of this analog AI hardware accelerator lies in expansive arrays of non-volatile memory devices, known as resistive processing units (RPUs). These RPUs facilitate massively parallel matrix operations, leading to significant enhancements in both performance and energy efficiency. Electrochemical random-access memory (ECRAM), leveraging ion dynamics in secondary-ion battery materials, has emerged as a promising candidate for RPUs. ECRAM achieves over 1000 memory states through precise ion movement control, prompting early-stage research into material stacks such as mobile ion species and electrolyte materials. Crucially, the analog states in ECRAMs update symmetrically with pulse number (or voltage polarity), contributing to high network performance. Recent strides in device engineering in planar and three-dimensional structures and the understanding of ECRAM operation physics have marked significant progress in a short research period. This paper aims to review ECRAM material advancements through literature surveys, offering a systematic discussion on engineering assessments for ion control and a physical understanding of array-level demonstrations. Finally, the review outlines future directions for improvements, co-optimization, and multidisciplinary collaboration in circuits, algorithms, and applications to develop energy-efficient, next-generation AI hardware systems.
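
To make the in-memory matrix operation concrete, here is a toy numerical sketch (not tied to any particular ECRAM device data) of an RPU crossbar computing a matrix-vector product from conductance states, followed by a symmetric pulse-driven state update:

```python
import numpy as np

rng = np.random.default_rng(1)

# Crossbar of conductances G (rows = outputs, cols = inputs); Ohm's law plus
# Kirchhoff's current law give the analog matrix-vector product I = G @ V in one step.
G = rng.uniform(0.0, 1.0, size=(4, 8))    # normalized conductance states
V = rng.uniform(-1.0, 1.0, size=8)        # input voltages
I = G @ V                                 # output currents, computed "in memory"

# Symmetric, pulse-driven analog update: each +/- pulse moves a state by one step.
step = 1.0 / 1000                          # >1000 distinguishable states
pulses = rng.integers(-5, 6, size=G.shape) # signed pulse counts from the learning rule
G = np.clip(G + step * pulses, 0.0, 1.0)
print(I)
```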

PMID:38416323 | DOI:10.1186/s40580-024-00415-8

Categories: Literature Watch

Congenital diaphragmatic hernia: automatic lung and liver MRI segmentation with nnU-Net, reproducibility of pyradiomics features, and a machine learning application for the classification of liver herniation

Wed, 2024-02-28 06:00

Eur J Pediatr. 2024 Feb 28. doi: 10.1007/s00431-024-05476-9. Online ahead of print.

ABSTRACT

Prenatal assessment of lung size and liver position is essential to stratify congenital diaphragmatic hernia (CDH) fetuses into risk categories, guiding counseling and patient management. Manual segmentation on fetal MRI provides a quantitative estimation of total lung volume and liver herniation. However, it is time-consuming and operator-dependent. In this study, we utilized a publicly available deep learning (DL) segmentation system (nnU-Net) to automatically contour CDH-affected fetal lungs and liver on MRI sections. Concordance between automatic and manual segmentation was assessed by calculating the Jaccard coefficient. Pyradiomics standard features were then extracted from both manually and automatically segmented regions. The reproducibility of features between the two groups was evaluated through the Wilcoxon rank-sum test and intraclass correlation coefficients (ICCs). We finally tested the reliability of the automatic-segmentation approach by building an ML classifier system for the prediction of liver herniation based on support vector machines (SVM) and trained on shape features computed both in the manual and nnU-Net-segmented organs. We compared the area under the classifier receiver operating characteristic curve (AUC) in the two cases. Pyradiomics features calculated in the manual ROIs were partly reproduced by the same features calculated in the nnU-Net-segmented ROIs and, when used in the ML procedure, predicted liver herniation comparably well (AUC around 0.85 in both cases).

Conclusion: Our results suggest that automatic MRI segmentation is feasible, with good reproducibility of pyradiomics features, and that an ML system for liver herniation prediction offers good reliability.

Trial registration: https://clinicaltrials.gov/ct2/show/NCT04609163?term=NCT04609163&draw=2&rank=1 ; Clinical Trial Identification no. NCT04609163.

What is Known:
• Magnetic resonance imaging (MRI) is crucial for prenatal congenital diaphragmatic hernia (CDH) assessment. It enables the quantification of the total lung volume and the extent of liver herniation, which are essential for stratifying the severity of CDH, guiding counseling, and patient management.
• The manual segmentation of MRI scans is a time-consuming process that is heavily reliant upon the skill set of the operator.

What is New:
• MRI lung and liver automatic segmentation using the deep learning nnU-Net system is feasible, with good Jaccard coefficient values and satisfactory reproducibility of pyradiomics features compared to manual results.
• A feasible ML system for predicting liver herniation could improve prenatal assessments and CDH patient management.
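
The Jaccard coefficient used above to compare manual and nnU-Net masks reduces to a few lines; a sketch with synthetic binary masks:

```python
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 1.0

manual = np.zeros((64, 64, 32), dtype=bool); manual[20:40, 20:40, 10:20] = True
auto   = np.zeros((64, 64, 32), dtype=bool); auto[22:42, 20:40, 10:20] = True
print(f"Jaccard: {jaccard(manual, auto):.3f}")
```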

PMID:38416256 | DOI:10.1007/s00431-024-05476-9

Categories: Literature Watch

Deep Learning Model for Tumor Type Prediction using Targeted Clinical Genomic Sequencing Data

Wed, 2024-02-28 06:00

Cancer Discov. 2024 Feb 27. doi: 10.1158/2159-8290.CD-23-0996. Online ahead of print.

ABSTRACT

Tumor type guides clinical treatment decisions in cancer, but histology-based diagnosis remains challenging. Genomic alterations are highly diagnostic of tumor type, and tumor type classifiers trained on genomic features have been explored, but the most accurate methods are not clinically feasible, relying on features derived from whole genome sequencing (WGS), or predicting across limited cancer types. We use genomic features from a dataset of 39,787 solid tumors sequenced using a clinical targeted cancer gene panel to develop Genome-Derived-Diagnosis Ensemble (GDD-ENS): a hyperparameter ensemble for classifying tumor type using deep neural networks. GDD-ENS achieves 93% accuracy for high-confidence predictions across 38 cancer types, rivalling the performance of WGS-based methods. GDD-ENS can also guide diagnoses of rare types and cancers of unknown primary, and incorporate patient-specific clinical information for improved predictions. Overall, integrating GDD-ENS into prospective clinical sequencing workflows could provide clinically relevant tumor type predictions to guide treatment decisions in real time.
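
GDD-ENS is described as an ensemble of deep neural networks with high-confidence predictions; a minimal sketch of that general pattern (averaging member probabilities and applying a confidence cutoff) follows — the threshold and member outputs are illustrative, not the published configuration:

```python
import numpy as np

def ensemble_predict(member_probs: list[np.ndarray], threshold: float = 0.75):
    """Average per-member class probabilities; flag predictions above a confidence cutoff."""
    probs = np.mean(member_probs, axis=0)             # (n_samples, n_classes)
    pred = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    return pred, confidence, confidence >= threshold  # high-confidence mask

rng = np.random.default_rng(0)
members = [rng.dirichlet(np.ones(38), size=5) for _ in range(10)]  # 10 members, 38 tumor types
pred, conf, is_high_conf = ensemble_predict(members)
```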

PMID:38416134 | DOI:10.1158/2159-8290.CD-23-0996

Categories: Literature Watch

Denoising Multiphase Functional Cardiac CT Angiography Using Deep Learning and Synthetic Data

Wed, 2024-02-28 06:00

Radiol Artif Intell. 2024 Feb 28:e230153. doi: 10.1148/ryai.230153. Online ahead of print.

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Coronary CT angiography (CTA) is increasingly used for cardiac diagnosis. Dose modulation techniques can reduce radiation dose, but resulting functional images are noisy and challenging for functional analysis. This retrospective study describes and evaluates a deep learning method for denoising functional cardiac imaging, taking advantage of multiphase information in a 3D convolutional neural network. Coronary CT angiograms (n = 566) were used to derive synthetic data for training. Deep learning-based image denoising (DLID) was compared with unprocessed images and a standard noise reduction algorithm (BM3D). Noise and signal-to-noise ratio measurements, as well as expert evaluation of image quality were performed. To validate the use of the denoised images for cardiac quantification, threshold-based segmentation was performed, and results were compared with manual measurements on unprocessed images. Deep learning-based denoised images showed significantly improved noise compared with standard denoising-based images (SD of left ventricular blood pool, 20.3 ± 42.5 HU versus 33.4 ± 39.8 HU for DLID versus BM3D, P < .0001). Expert evaluations of image quality were significantly higher in deep learningbased denoised images compared with standard denoising. Semiautomatic left ventricular size measurements on deep learning-based denoised images showed excellent correlation with expert quantification on unprocessed images (intraclass correlation coefficient, 0.97). Deep learning-based denoising using a 3D approach resulted in excellent denoising performance and facilitated valid automatic processing of cardiac functional imaging. ©RSNA, 2024.

PMID:38416035 | DOI:10.1148/ryai.230153

Categories: Literature Watch

Directed Evolution of Escherichia coli Nissle 1917 to Utilize Allulose as Sole Carbon Source

Wed, 2024-02-28 06:00

Small Methods. 2024 Feb 28:e2301385. doi: 10.1002/smtd.202301385. Online ahead of print.

ABSTRACT

Sugar substitutes are popular due to their similar taste and low calorie content. However, excessive use of aspartame and erythritol can have varying effects. While D-allulose is presently deemed a safe alternative to sugar, its excessive consumption is not devoid of cellular stress implications. In this study, the evolution of Escherichia coli Nissle 1917 (EcN) is directed to utilize allulose as the sole carbon source through a combination of adaptive laboratory evolution (ALE) and fluorescence-activated droplet sorting (FADS) techniques. Whole genome sequencing (WGS) and clustered regularly interspaced short palindromic repeats interference (CRISPRi), in conjunction with compensatory expression, showed that genetic mutations in sugar and amino acid metabolic pathways, including glnP, glpF, gmpA, nagE, pgmB, and ybaN, increased allulose assimilation. Enzyme-substrate dynamics simulations and deep learning predict enhanced substrate specificity and catalytic efficiency in nagE A247E and pgmB G12R mutants. The findings evince that these mutations hold considerable promise in enhancing allulose uptake and facilitating its conversion into glycolysis, thus signifying the emergence of a novel metabolic pathway for allulose utilization. These revelations bear immense potential for the sustainable utilization of D-allulose in promoting health and well-being.

PMID:38415955 | DOI:10.1002/smtd.202301385

Categories: Literature Watch

Editorial for "Deep Learning Radiomic Analysis of MRI Combined with Clinical Characteristics Diagnoses Placenta Accreta Spectrum and its Subtypes"

Wed, 2024-02-28 06:00

J Magn Reson Imaging. 2024 Feb 28. doi: 10.1002/jmri.29321. Online ahead of print.

NO ABSTRACT

PMID:38415787 | DOI:10.1002/jmri.29321

Categories: Literature Watch

Deep learning-based harmonization of trabecular bone microstructures between high- and low-resolution CT imaging

Wed, 2024-02-28 06:00

Med Phys. 2024 Feb 28. doi: 10.1002/mp.17003. Online ahead of print.

ABSTRACT

BACKGROUND: Osteoporosis is a bone disease related to increased bone loss and fracture risk. The variability in bone strength is partially explained by bone mineral density (BMD), and the remainder is contributed by bone microstructure. Recently, clinical CT has emerged as a viable option for in vivo bone microstructural imaging. Wide variations in spatial resolution and other imaging features among different CT scanners add inconsistency to derived bone microstructural metrics, underscoring the need for harmonization of image data from different scanners.

PURPOSE: This paper presents a new deep learning (DL) method for the harmonization of bone microstructural images derived from low- and high-resolution CT scanners and evaluates the method's performance at the levels of image data as well as derived microstructural metrics.

METHODS: We generalized a three-dimensional (3D) version of GAN-CIRCLE that applies two generative adversarial networks (GANs) constrained by the identical, residual, and cycle learning ensemble (CIRCLE). Two GAN modules simultaneously learn to map low-resolution CT (LRCT) to high-resolution CT (HRCT) and vice versa. Twenty volunteers were recruited. LRCT and HRCT scans of the distal tibia of their left legs were acquired. Five-hundred pairs of LRCT and HRCT image blocks of 64 × 64 × 64 voxels were sampled for each of the twelve volunteers and used for training in supervised as well as unsupervised setups. LRCT and HRCT images of the remaining eight volunteers were used for evaluation. LRCT blocks were sampled at 32 voxel intervals in each coordinate direction and predicted HRCT blocks were stitched to generate a predicted HRCT image.

RESULTS: Mean ± standard deviation of structural similarity (SSIM) values between predicted and true HRCT using both 3DGAN-CIRCLE-based supervised (0.84 ± 0.03) and unsupervised (0.83 ± 0.04) methods were significantly (p < 0.001) higher than the mean SSIM value between LRCT and true HRCT (0.75 ± 0.03). All Tb measures derived from predicted HRCT by the supervised 3DGAN-CIRCLE showed higher agreement (CCC ∈ [0.956, 0.991]) with the reference values from true HRCT as compared to LRCT-derived values (CCC ∈ [0.732, 0.989]). For all Tb measures, except Tb plate-width (CCC = 0.866), the unsupervised 3DGAN-CIRCLE showed high agreement (CCC ∈ [0.920, 0.964]) with the true HRCT-derived reference measures. Moreover, Bland-Altman plots showed that supervised 3DGAN-CIRCLE predicted HRCT reduces bias and variability in residual values of different Tb measures as compared to LRCT and unsupervised 3DGAN-CIRCLE predicted HRCT. The supervised 3DGAN-CIRCLE method produced significantly improved performance (p < 0.001) for all Tb measures as compared to the two DL-based supervised methods available in the literature.

CONCLUSIONS: 3DGAN-CIRCLE, trained in either unsupervised or supervised fashion, generates HRCT images with high structural similarity to the reference true HRCT images. The supervised 3DGAN-CIRCLE improves agreements of computed Tb microstructural measures with their reference values and outperforms the unsupervised 3DGAN-CIRCLE. 3DGAN-CIRCLE offers a viable DL solution to retrospectively improve image resolution, which may aid in data harmonization in multi-site longitudinal studies where scanner mismatch is unavoidable.
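
The block-wise prediction and stitching described in the METHODS (64 × 64 × 64 blocks sampled at 32-voxel intervals) can be sketched as a sliding window with overlap averaging; the `predict_block` callable below stands in for the trained generator and is purely illustrative:

```python
import numpy as np

def stitch_predictions(lrct: np.ndarray, predict_block, block=64, stride=32):
    """Slide a block window over the volume, predict each block, and average overlaps."""
    out = np.zeros_like(lrct, dtype=np.float64)
    weight = np.zeros_like(lrct, dtype=np.float64)
    zs, ys, xs = [range(0, d - block + 1, stride) for d in lrct.shape]
    for z in zs:
        for y in ys:
            for x in xs:
                patch = lrct[z:z+block, y:y+block, x:x+block]
                out[z:z+block, y:y+block, x:x+block] += predict_block(patch)
                weight[z:z+block, y:y+block, x:x+block] += 1.0
    return out / np.maximum(weight, 1.0)   # guard against uncovered border voxels

# Identity "predictor" just to exercise the stitching logic.
volume = np.random.default_rng(0).random((128, 128, 128)).astype(np.float32)
hrct_pred = stitch_predictions(volume, predict_block=lambda p: p)
```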

PMID:38415781 | DOI:10.1002/mp.17003

Categories: Literature Watch

mSegResRF-SPECT: A Novel Joint Classification Model of Whole Body Bone Scan Images for Bone Metastasis Diagnosis

Wed, 2024-02-28 06:00

Curr Med Imaging. 2024 Feb 26. doi: 10.2174/0115734056288472240129112028. Online ahead of print.

ABSTRACT

BACKGROUND: Whole-body bone scanning is a nuclear medicine technique with high sensitivity used for the diagnosis of bone-related diseases (e.g., bone metastases) that can be obtained by positron emission tomography (PET) or single-photon emission computed tomography (SPECT) imaging, depending on the different radiopharmaceuticals used. In contrast to the high sensitivity of the bone scan, it has low specificity, which leads to misinterpretation, causing adverse effects of unwarranted intervention or interruption to timely treatment.

OBJECTIVE: To address this problem, this paper proposes a joint model called mSegResRF-SPECT, which accomplishes for the first time the task of classifying whole-body bone scan images on a public SPECT dataset (BS-80K) for the diagnosis of bone metastases.

METHODS: The mSegResRF-SPECT adopts a multi-bone-region segmentation algorithm to segment the whole-body image into 13 regions, ResNet34 as a feature extractor for the regional features, and a random forest algorithm as the classifier.

RESULTS: The experimental results show that the average accuracy, sensitivity, and F1 score of the proposed model on the BS-80K dataset reached state-of-the-art (SOTA) levels.

CONCLUSION: The proposed method presents a promising solution for better bone scan classification methods.
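
A rough sketch of the joint pattern described in the METHODS (a ResNet34 backbone extracting per-region features that a random forest then classifies); the feature pooling and synthetic data below are assumptions, not the exact mSegResRF-SPECT pipeline:

```python
import numpy as np
import torch
import torchvision
from sklearn.ensemble import RandomForestClassifier

# ResNet34 backbone with the classification head removed -> 512-d feature vectors.
backbone = torchvision.models.resnet34(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

def region_features(region_batch: torch.Tensor) -> np.ndarray:
    """region_batch: (n_regions, 3, H, W) image crops for one scan."""
    with torch.no_grad():
        feats = backbone(region_batch)   # (n_regions, 512)
    return feats.flatten().numpy()       # concatenate the 13 regional feature vectors

# Synthetic example: 20 scans, 13 regions each, binary metastasis label.
X = np.stack([region_features(torch.randn(13, 3, 224, 224)) for _ in range(20)])
y = np.random.default_rng(0).integers(0, 2, size=20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```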

PMID:38415481 | DOI:10.2174/0115734056288472240129112028

Categories: Literature Watch

Evaluation of Interstitial Lung Diseases with Deep Learning Method of Two Major Computed Tomography Patterns

Wed, 2024-02-28 06:00

Curr Med Imaging. 2024 Feb 26. doi: 10.2174/0115734056279295231229095436. Online ahead of print.

ABSTRACT

BACKGROUND: Interstitial lung diseases (ILD) encompass various disorders characterized by inflammation and/or fibrosis in the lung interstitium. These conditions produce distinct patterns in High-Resolution Computed Tomography (HRCT).

OBJECTIVE: We employ a deep learning method to differentially diagnose the most commonly encountered patterns in ILD.

MATERIALS AND METHODS: Patients were categorized into usual interstitial pneumonia (UIP), nonspecific interstitial pneumonia (NSIP), and normal lung parenchyma groups. VGG16 and VGG19 deep learning architectures were utilized. 85% of each pattern was used as training data for the artificial intelligence model. The models were then tasked with diagnosing the patterns in the test dataset without human intervention. Accuracy rates were calculated for both models.

RESULTS: The VGG16 model achieved 95.02% accuracy in the test phase. On the same data, the VGG19 model achieved 98.05% accuracy in its test phase.

CONCLUSION: Deep Learning models showed high accuracy in distinguishing the two most common patterns of ILD.
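
A minimal transfer-learning sketch in the spirit of the METHODS (a VGG16 backbone with three output classes: UIP, NSIP, and normal parenchyma); the synthetic batch and training details are placeholders, not the authors' setup:

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.vgg16(weights=None)   # or ImageNet weights for transfer learning
model.classifier[-1] = nn.Linear(4096, 3)        # UIP, NSIP, normal parenchyma

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a synthetic batch of HRCT slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```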

PMID:38415479 | DOI:10.2174/0115734056279295231229095436

Categories: Literature Watch

Application Exploration of Medical Image-aided Diagnosis of Breast Tumour Based on Deep Learning

Wed, 2024-02-28 06:00

Curr Med Imaging. 2024 Feb 27. doi: 10.2174/0115734056261997231217085501. Online ahead of print.

ABSTRACT

BACKGROUND: Accurate and timely disease diagnosis and personalized treatment are attracting increasing attention. Because of the uncertainty and latency of its pathogenesis, breast tumour is difficult to detect early. With its higher resolution, magnetic resonance imaging (MRI) has become an important method for early detection of cancer in recent years. Deep learning (DL) technology can now automatically learn imaging features at different depths.

OBJECTIVE: This work aimed to use DL to study medical image-assisted diagnosis.

METHODS: The image data were collected from the patients. A region of interest (ROI) containing the complete tumor area was generated for each medical image. The ROI images were extracted, and the extracted feature data were expanded. A three-dimensional (3D) CNN model was constructed, and evaluation indicators for the breast tumour diagnosis results were proposed. In the experiments, the 3D CNN model and other models were used to diagnose breast tumour medical images.

RESULTS: The 3D CNN model showed good ROI extraction and breast tumor image diagnosis performance, with an average diagnostic accuracy of 0.736, which was much higher than that of the other models, so the model could be applied to breast tumor medical image-aided diagnosis.

CONCLUSION: The 3D CNN model was trained by adapting the two-dimensional CNN training scheme, and evaluation indices for the diagnostic results were established. The experiments verified the model's medical image diagnosis performance, demonstrating strong ROI extraction and breast tumor image diagnosis results.
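
As an illustration of the ROI generation step in the METHODS, a small sketch that crops the bounding box of a tumor mask from a 3D volume; the margin parameter is an assumption:

```python
import numpy as np

def crop_roi(volume: np.ndarray, tumor_mask: np.ndarray, margin: int = 4) -> np.ndarray:
    """Crop the smallest box containing the whole tumor mask, padded by a margin."""
    coords = np.argwhere(tumor_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

vol = np.random.default_rng(0).random((64, 64, 64))
mask = np.zeros_like(vol, dtype=bool); mask[30:40, 25:35, 20:30] = True
roi = crop_roi(vol, mask)   # the ROI is then augmented and fed to the 3D CNN
print(roi.shape)            # (18, 18, 18) with margin 4
```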

PMID:38415468 | DOI:10.2174/0115734056261997231217085501

Categories: Literature Watch

Clinical Application of Automatic Assessment of Scoliosis Cobb Angle Based on Deep Learning

Wed, 2024-02-28 06:00

Curr Med Imaging. 2024 Feb 27. doi: 10.2174/0115734056278130231218073650. Online ahead of print.

ABSTRACT

INTRODUCTION: A recently developed deep-learning-based automatic evaluation model provides reliable and efficient Cobb angle measurements for scoliosis diagnosis. However, few studies have explored its clinical application, and external validation is lacking. Therefore, this study aimed to explore the value of automated assessment models in clinical practice by comparing deep-learning models with manual measurement methods.

METHODS: The 481 spine radiographs from an open-source dataset were divided into training and validation sets, and 119 spine radiographs from a private dataset were used as the test set. The mean Cobb angle values assessed by three physicians in the hospital's PACS system served as the reference standard. The results of Seg4Reg, VFLDN, and manual measurement were statistically analyzed. The intra-class correlation coefficients (ICC) and the Pearson correlation coefficient (PCC) were used to compare their reliability and correlation. The Bland-Altman method was used to compare their agreement. The Kappa statistic was used to compare the consistency of Cobb angles at different severity levels.

RESULTS: The mean Cobb angle values measured were 35.89° ± 9.33° with Seg4Reg, 31.54° ± 9.78° with VFLDN, and 32.23° ± 9.28° with manual measurement. The ICCs for the reliability of Seg4Reg and VFLDN were 0.809 and 0.974, respectively. The PCC and mean absolute difference (MAD) between Seg4Reg and manual measurements were 0.731 (p<0.001) and 6.51°, while those between VFLDN and manual measurements were 0.952 (p<0.001) and 2.36°. The Kappa statistic indicated that VFLDN (κ = 0.686, p < 0.001) was superior to Seg4Reg and manual measurements for Cobb angle severity classification.

CONCLUSION: The deep-learning-based automatic scoliosis Cobb angle assessment model is feasible in clinical practice. Specifically, the keypoint-based VFLDN is more valuable in actual clinical work with higher accuracy, transparency, and interpretability.
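
A sketch of the agreement statistics reported above (Pearson correlation, mean absolute difference, and Bland-Altman limits of agreement) between two sets of Cobb angle measurements; the values are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.normal(32, 9, size=119)           # reference Cobb angles (degrees)
auto = manual + rng.normal(0, 2.5, size=119)   # automatic measurements

pcc, p_value = stats.pearsonr(auto, manual)
mad = np.mean(np.abs(auto - manual))           # mean absolute difference

diff = auto - manual                           # Bland-Altman: bias and 95% limits of agreement
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
print(f"PCC={pcc:.3f} (p={p_value:.3g}), MAD={mad:.2f} deg, "
      f"bias={bias:.2f} deg, LoA=[{bias-loa:.2f}, {bias+loa:.2f}] deg")
```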

PMID:38415463 | DOI:10.2174/0115734056278130231218073650

Categories: Literature Watch

Using machine learning models to predict synchronous genitourinary cancers among gastrointestinal stromal tumor patients

Wed, 2024-02-28 06:00

Urol Ann. 2024 Jan-Mar;16(1):94-97. doi: 10.4103/ua.ua_32_23. Epub 2024 Jan 25.

ABSTRACT

OBJECTIVES: Gastrointestinal stromal tumors (GISTs) can occur synchronously with other neoplasms, including the genitourinary (GU) system. Machine learning (ML) may be a valuable tool in predicting synchronous GU tumors in GIST patients, and thus improving prognosis. This study aims to evaluate the use of ML algorithms to predict synchronous GU tumors among GIST patients in a specialist research center in Saudi Arabia.

MATERIALS AND METHODS: We analyzed data from all patients with histopathologically confirmed GIST at our facility from 2003 to 2020. Patient files were reviewed for the presence of renal cell carcinoma, adrenal tumors, or other GU cancers. Three supervised ML algorithms were used: logistic regression, XGBoost Regressor, and random forests (RFs). A set of variables, including independent attributes, was entered into the models.

RESULTS: A total of 170 patients were included in the study, with 58.8% (n = 100) being male. The median age was 57 (range: 9-91) years. The majority of GISTs were gastric (60%, n = 102) with a spindle cell histology. The most common stage at diagnosis was T2 (27.6%, n = 47) and N0 (20%, n = 34). Six patients (3.5%) had synchronous GU tumors. The RF model achieved the highest accuracy with 97.1%.

CONCLUSION: Our study suggests that the RF model is an effective tool for predicting synchronous GU tumors in GIST patients. Larger multicenter studies, utilizing more powerful algorithms such as deep learning and other artificial intelligence subsets, are necessary to further refine and improve these predictions.
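
A minimal sketch of the model comparison described in the METHODS, fit on a synthetic, similarly imbalanced tabular dataset; scikit-learn's gradient boosting stands in for the XGBoost model here, and everything shown is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical table: 170 patients, rare positive class.
X, y = make_classification(n_samples=170, n_features=12, weights=[0.96], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting (XGBoost stand-in)": GradientBoostingClassifier(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```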

PMID:38415235 | PMC:PMC10896329 | DOI:10.4103/ua.ua_32_23

Categories: Literature Watch

Automatic Quantification of COVID-19 Pulmonary Edema by Self-supervised Contrastive Learning

Wed, 2024-02-28 06:00

Med Image Learn Ltd Noisy Data (2023). 2023 Oct;14307:128-137. doi: 10.1007/978-3-031-44917-8_12. Epub 2023 Oct 8.

ABSTRACT

We proposed a self-supervised machine learning method to automatically rate the severity of pulmonary edema in frontal chest X-ray radiographs (CXR), which could potentially be related to COVID-19 viral pneumonia. For this, we use the modified radiographic assessment of lung edema (mRALE) scoring system. The new model was first optimized with the simple Siamese network (SimSiam) architecture, where a ResNet-50 pretrained on the ImageNet database was used as the backbone. The encoder projected a 2048-dimensional embedding as representation features to a downstream fully connected deep neural network for mRALE score prediction. A 5-fold cross-validation with 2,599 frontal CXRs was used to examine the new model's performance in comparison with a non-pretrained SimSiam encoder and a ResNet-50 trained from scratch. The mean absolute error (MAE) of the new model is 5.05 (95%CI 5.03-5.08), the mean squared error (MSE) is 66.67 (95%CI 66.29-67.06), and the Spearman's correlation coefficient (Spearman ρ) to the expert-annotated scores is 0.77 (95%CI 0.75-0.79). All the performance metrics of the new model are superior to the two comparators (P<0.01), and the MSE and Spearman ρ scores of the two comparators show no statistically significant difference (P>0.05). The model also achieved a prediction probability concordance of 0.811 and a quadratic weighted kappa of 0.739 with the medical expert annotations in external validation. We conclude that the self-supervised contrastive learning method is an effective strategy for mRALE automated scoring. It provides a new approach to improve machine learning performance and minimize the involvement of expert knowledge in quantitative medical image pattern learning.
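
The SimSiam objective used for pretraining here is compact enough to sketch (symmetric negative cosine similarity with a stop-gradient on the projection branch); the tensors below are placeholders, not the paper's network:

```python
import torch
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Symmetric negative cosine similarity; stop-gradient on the projections z."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

# Placeholder projections/predictions for two augmented views of the same CXR batch.
z1, z2 = torch.randn(8, 2048), torch.randn(8, 2048)   # encoder outputs (2048-d, as in ResNet-50)
p1 = torch.randn(8, 2048, requires_grad=True)          # predictor outputs
p2 = torch.randn(8, 2048, requires_grad=True)
loss = simsiam_loss(p1, p2, z1, z2)
loss.backward()
```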

PMID:38415180 | PMC:PMC10896252 | DOI:10.1007/978-3-031-44917-8_12

Categories: Literature Watch

Effect of deep learning image reconstruction with high-definition standard scan mode on image quality of coronary stents and arteries

Wed, 2024-02-28 06:00

Quant Imaging Med Surg. 2024 Feb 1;14(2):1616-1635. doi: 10.21037/qims-23-1064. Epub 2024 Jan 17.

ABSTRACT

BACKGROUND: The high-definition standard (HD-standard) scan mode has been proven to display stents better than the standard (STND) scan mode but with more image noise. Deep learning image reconstruction (DLIR) is capable of reducing image noise. This study examined the impact of HD-standard scan mode with DLIR algorithms on stent and coronary artery image quality in coronary computed tomography angiography (CCTA) via a comparison with conventional STND scan mode and adaptive statistical iterative reconstruction-Veo (ASIR-V) algorithms.

METHODS: The data of 121 patients who underwent HD-standard mode scans (group A: N=47, with coronary stent) or STND mode scans (group B: N=74, without coronary stent) were retrospectively collected. All images were reconstructed with ASIR-V at a level of 50% (ASIR-V50%) and a level of 80% (ASIR-V80%) and with DLIR at medium (DLIR-M) and high (DLIR-H) levels. The noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), artifact index (AI), and in-stent diameter were measured as objective evaluation parameters. Subjective assessment involved a 5-point scale for overall image quality, image noise, stent appearance, stent artifacts, vascular sharpness, and diagnostic confidence. Diagnostic confidence was evaluated based on the presence or absence of significant stenosis (≥50% lumen reduction). Both subjective and objective evaluations were conducted by two radiologists independently, with kappa and intraclass correlation statistics being used to test the interobserver agreement.

RESULTS: There were 76 evaluable stents in group A, and the DLIR-H algorithm significantly outperformed other algorithms, demonstrating the lowest noise (41.6±7.1/41.3±7.2) and AI (32.4±8.9/31.2±10.1), the highest SNR (14.6±3.5/15.0±3.5) and CNR (13.6±3.8/13.9±3.8), and the largest in-stent diameter (2.18±0.61/2.19±0.61) in representing true stent diameter (all P values <0.01), as well as the highest score in each subjective evaluation parameter. In group B, a total of 296 coronary arteries were evaluated, and the DLIR-H algorithm provided the best objective image quality, with statistically superior noise, SNR, and CNR compared with the other algorithms (all P values <0.05). Moreover, the HD-standard mode scan with DLIR provided better image quality and a lower radiation dose than did the STND mode scan with ASIR-V (P<0.01).

CONCLUSIONS: HD-standard scan mode with DLIR-H improves image quality of both stents and coronary arteries on CCTA under a lower radiation dose.
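
A sketch of the ROI-based objective measurements named in the METHODS (noise, SNR, and CNR), computed on a synthetic image; ROI placement and the reference tissue are assumptions:

```python
import numpy as np

def roi_stats(image: np.ndarray, roi: np.ndarray, background: np.ndarray):
    """Noise = SD in the ROI; SNR = mean/SD in the ROI; CNR uses a background ROI."""
    mean, noise = image[roi].mean(), image[roi].std()
    snr = mean / noise
    cnr = (mean - image[background].mean()) / noise
    return noise, snr, cnr

rng = np.random.default_rng(0)
img = rng.normal(40, 25, size=(256, 256))              # background tissue (HU)
img[100:140, 100:140] += 360                           # contrast-enhanced lumen
lumen = np.zeros_like(img, dtype=bool); lumen[105:135, 105:135] = True
fat = np.zeros_like(img, dtype=bool);   fat[10:40, 10:40] = True
print(roi_stats(img, lumen, fat))
```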

PMID:38415168 | PMC:PMC10895123 | DOI:10.21037/qims-23-1064

Categories: Literature Watch

Multi-parametric investigations on the effects of vascular disrupting agents based on a platform of chorioallantoic membrane of chick embryos

Wed, 2024-02-28 06:00

Quant Imaging Med Surg. 2024 Feb 1;14(2):1729-1746. doi: 10.21037/qims-23-1065. Epub 2024 Jan 23.

ABSTRACT

BACKGROUND: Vascular disrupting agents (VDAs) are known to specifically target preexisting tumoural vasculature. However, systemic side effects, namely safety and toxicity issues, have been reported in clinical trials, calling for further preclinical investigations. The purpose was to gain insight into their non-specific off-target effects on normal vasculature and to provide clues for exploring the underlying molecular mechanisms.

METHODS: Based on a recently introduced platform for the evaluation of vasoactive medicines, consisting of laser speckle contrast imaging (LSCI), the chick embryo chorioallantoic membrane (CAM), and assisting deep learning techniques, hemodynamics on embryonic day 12 under constant intravascular infusion of two VDAs were qualitatively observed and quantitatively measured in real time for 30 min. Blood perfusion, vessel diameter, vessel density, and vessel total length were further analyzed and compared between blank control and medicine dose groups by using multi-factor analysis of variance (ANOVA) with factorial interactions. Conventional histopathology and fluorescent immunohistochemistry (FIHC) assays for the endothelial cytoskeleton, including ß-tubulin and F-actin, were qualitatively demonstrated, quantitatively analyzed, and further correlated with hemodynamic and vascular parameters.

RESULTS: The normal vasculature was systemically and negatively affected by VDAs with statistical significance (P<0.0001), as evidenced by four positively correlated parameters, which can explain the side effects observed among clinical patients. Such effects appeared to be dose-dependent (P<0.0001). FIHC assays qualitatively and quantitatively verified these results and revealed the underlying molecular mechanisms.

CONCLUSIONS: The LSCI-CAM platform, combined with deep learning techniques, proves useful in preclinical evaluations of vasoactive medications. This new evidence provides a new reference for clinical practice.

PMID:38415159 | PMC:PMC10895113 | DOI:10.21037/qims-23-1065

Categories: Literature Watch

Usefulness of longitudinal nodule-matching algorithm in computer-aided diagnosis of new pulmonary metastases on cancer surveillance CT scans

Wed, 2024-02-28 06:00

Quant Imaging Med Surg. 2024 Feb 1;14(2):1493-1506. doi: 10.21037/qims-23-1174. Epub 2024 Jan 2.

ABSTRACT

BACKGROUND: Detecting new pulmonary metastases by comparing serial computed tomography (CT) scans is crucial, but a repetitive and time-consuming task that burdens the radiologists' workload. This study aimed to evaluate the usefulness of a nodule-matching algorithm with deep learning-based computer-aided detection (DL-CAD) in diagnosing new pulmonary metastases on cancer surveillance CT scans.

METHODS: Among patients who underwent pulmonary metastasectomy between 2014 and 2018, 65 new pulmonary metastases missed by interpreting radiologists on cancer surveillance CT (Time 2) were identified after a retrospective comparison with the previous CT (Time 1). First, DL-CAD detected nodules in Time 1 and Time 2 CT images. All nodules detected at Time 2 were initially considered metastasis candidates. Second, the nodule-matching algorithm was used to assess the correlation between the nodules from the two CT scans and to classify the nodules at Time 2 as "new" or "pre-existing". Pre-existing nodules were excluded from metastasis candidates. We evaluated the performance of DL-CAD with the nodule-matching algorithm, based on its sensitivity, false-metastasis candidates per scan, and positive predictive value (PPV).

RESULTS: A total of 475 lesions were detected by DL-CAD at Time 2. Following a radiologist review, the lesions were categorized as metastases (n=54), benign nodules (n=392), and non-nodules (n=29). Upon comparison of nodules at Time 1 and 2 using the nodule-matching algorithm, all metastases were classified as new nodules without any matching errors. Out of 421 benign lesions, 202 (48.0%) were identified as pre-existing and subsequently excluded from the pool of metastasis candidates through the nodule-matching algorithm. As a result, false-metastasis candidates per CT scan decreased by 47.9% (from 7.1 to 3.7, P<0.001) and the PPV increased from 11.4% to 19.8% (P<0.001), while maintaining sensitivity.

CONCLUSIONS: The nodule-matching algorithm improves the diagnostic performance of DL-CAD for new pulmonary metastases, by lowering the number of false-metastasis candidates without compromising sensitivity.
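
A toy sketch of the matching step described in the METHODS: Time 2 detections are matched to Time 1 detections by centroid distance, and unmatched ones are kept as new metastasis candidates; the distance threshold is an assumption, not the published rule:

```python
import numpy as np

def flag_new_nodules(time1_xyz: np.ndarray, time2_xyz: np.ndarray, max_dist_mm: float = 10.0):
    """Return a boolean per Time 2 nodule: True = new candidate, False = pre-existing."""
    if len(time1_xyz) == 0:
        return np.ones(len(time2_xyz), dtype=bool)
    dists = np.linalg.norm(time2_xyz[:, None, :] - time1_xyz[None, :, :], axis=-1)
    return dists.min(axis=1) > max_dist_mm

t1 = np.array([[30.0, 40.0, 50.0], [80.0, 20.0, 10.0]])   # Time 1 nodule centroids (mm)
t2 = np.array([[31.0, 41.0, 49.0], [120.0, 60.0, 30.0]])  # Time 2 detections
print(flag_new_nodules(t1, t2))   # [False  True]: only the second detection stays a candidate
```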

PMID:38415154 | PMC:PMC10895128 | DOI:10.21037/qims-23-1174

Categories: Literature Watch
