Deep learning

Noninvasive virtual biopsy using micro-registered optical coherence tomography (OCT) in human subjects

Wed, 2024-04-10 06:00

Sci Adv. 2024 Apr 12;10(15):eadi5794. doi: 10.1126/sciadv.adi5794. Epub 2024 Apr 10.

ABSTRACT

Histological hematoxylin and eosin-stained (H&E) tissue sections are used as the gold standard for pathologic detection of cancer, tumor margin detection, and disease diagnosis. Producing H&E sections, however, is invasive and time-consuming. While deep learning has shown promise in virtual staining of unstained tissue slides, true virtual biopsy requires staining of images taken from intact tissue. In this work, we developed a micron-accuracy coregistration method [micro-registered optical coherence tomography (OCT)] that can take a two-dimensional (2D) H&E slide and find the exact corresponding section in a 3D OCT image taken from the original fresh tissue. We trained a conditional generative adversarial network using the paired dataset and showed high-fidelity conversion of noninvasive OCT images to virtually stained H&E slices in both 2D and 3D. Applying these trained neural networks to in vivo OCT images should enable physicians to readily incorporate OCT imaging into their clinical practice, reducing the number of unnecessary biopsy procedures.
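For context, conditional GANs for image-to-image translation are commonly trained with a pix2pix-style objective; the abstract does not state the paper's exact loss, so the formulation below is the standard one rather than the authors' own:

```latex
\min_G \max_D \;
\mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
+ \mathbb{E}_{x}\!\left[\log\!\left(1 - D\big(x, G(x)\big)\right)\right]
+ \lambda \, \mathbb{E}_{x,y}\!\left[\lVert y - G(x) \rVert_1\right]
```

where x is the OCT input, y the micro-registered H&E target, and λ weights the reconstruction term that keeps the virtual stain faithful to the paired slide.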

PMID:38598626 | DOI:10.1126/sciadv.adi5794

Categories: Literature Watch

DeepDynaForecast: Phylogenetic-informed graph deep learning for epidemic transmission dynamic prediction

Wed, 2024-04-10 06:00

PLoS Comput Biol. 2024 Apr 10;20(4):e1011351. doi: 10.1371/journal.pcbi.1011351. Online ahead of print.

ABSTRACT

In the midst of an outbreak or sustained epidemic, reliable prediction of transmission risks and patterns of spread is critical to inform public health programs. Projections of transmission growth or decline among specific risk groups can aid in optimizing interventions, particularly when resources are limited. Phylogenetic trees have been widely used in the detection of transmission chains and high-risk populations. Moreover, tree topology and the incorporation of population parameters (phylodynamics) can be useful in reconstructing the evolutionary dynamics of an epidemic across space and time among individuals. We now demonstrate the utility of phylodynamic trees for transmission modeling and forecasting, developing a phylogeny-based deep learning system, referred to as DeepDynaForecast. Our approach leverages a primal-dual graph learning structure with shortcut multi-layer aggregation, which is suited for the early identification and prediction of transmission dynamics in emerging high-risk groups. We demonstrate the accuracy of DeepDynaForecast using simulated outbreak data and the utility of the learned model using empirical, large-scale data from the human immunodeficiency virus epidemic in Florida between 2012 and 2020. Our framework is available as open-source software (MIT license) at github.com/lab-smile/DeepDynaForcast.
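The core of a phylogeny-informed graph model is repeated aggregation of features along tree edges. The toy below is a minimal sketch of one such aggregation step, not DeepDynaForecast's actual primal-dual architecture; the tree, features, and averaging rule are invented for illustration.

```python
# Illustrative sketch: one round of feature aggregation over a transmission
# tree, the basic building block of phylogeny-informed graph learning.
# Tree topology and feature values are hypothetical toy data.

def aggregate(tree, features):
    """One message-passing step: each node averages its own feature
    with those of its neighbours (parent and children)."""
    out = {}
    for node, neighbours in tree.items():
        vals = [features[node]] + [features[n] for n in neighbours]
        out[node] = sum(vals) / len(vals)
    return out

# A tiny transmission tree: root -> a, b; a -> c (edges stored both ways).
tree = {"root": ["a", "b"], "a": ["root", "c"], "b": ["root"], "c": ["a"]}
features = {"root": 1.0, "a": 2.0, "b": 3.0, "c": 4.0}
print(aggregate(tree, features))
```

Stacking several such steps (with learned weights instead of plain averages) lets information about an emerging cluster propagate across the phylogeny.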

PMID:38598563 | DOI:10.1371/journal.pcbi.1011351

Categories: Literature Watch

From explanation to intervention: Interactive knowledge extraction from Convolutional Neural Networks used in radiology

Wed, 2024-04-10 06:00

PLoS One. 2024 Apr 10;19(4):e0293967. doi: 10.1371/journal.pone.0293967. eCollection 2024.

ABSTRACT

Deep Learning models such as Convolutional Neural Networks (CNNs) are very effective at extracting complex image features from medical X-rays. However, the limited interpretability of CNNs has hampered their deployment in medical settings as they failed to gain trust among clinicians. In this work, we propose an interactive framework to allow clinicians to ask what-if questions and intervene in the decisions of a CNN, with the aim of increasing trust in the system. The framework translates a layer of a trained CNN into a measurable and compact set of symbolic rules. Expert interactions with visualizations of the rules promote the use of clinically-relevant CNN kernels and attach meaning to the rules. The definition and relevance of the kernels are supported by radiomics analyses and permutation evaluations, respectively. CNN kernels that do not have a clinically-meaningful interpretation are removed without affecting model performance. By allowing clinicians to evaluate the impact of adding or removing kernels from the rule set, our approach produces an interpretable refinement of the data-driven CNN in alignment with medical best practice.

PMID:38598468 | DOI:10.1371/journal.pone.0293967

Categories: Literature Watch

Deep learning to predict rapid progression of Alzheimer's disease from pooled clinical trials: A retrospective study

Wed, 2024-04-10 06:00

PLOS Digit Health. 2024 Apr 10;3(4):e0000479. doi: 10.1371/journal.pdig.0000479. eCollection 2024 Apr.

ABSTRACT

The rate of progression of Alzheimer's disease (AD) differs dramatically between patients. Identifying the most rapid progressors (RPs) is critical because when their numbers differ between treated and control groups, it distorts the outcome, making it impossible to tell whether the treatment was beneficial. Much recent effort, then, has gone into identifying RPs. We pooled de-identified placebo-arm data of three randomized controlled trials (RCTs), EXPEDITION, EXPEDITION 2, and EXPEDITION 3, provided by Eli Lilly and Company. After processing, the data included 1603 mild-to-moderate AD patients with 80 weeks of longitudinal observations on neurocognitive health, brain volumes, and amyloid-beta (Aβ) levels. RPs were defined by changes in four neurocognitive/functional health measures. We built deep learning models using recurrent neural networks with attention mechanisms to predict RPs by week 80 based on varying observation periods from baseline (e.g., 12, 28 weeks). Feature importance scores for RP prediction were computed and temporal feature trajectories were compared between RPs and non-RPs. Our evaluation and analysis focused on models trained with 28 weeks of observation. The models achieved robust internal validation areas under the receiver operating characteristic curve (AUROC) ranging from 0.80 (95% CI 0.79-0.82) to 0.82 (0.81-0.83), and areas under the precision-recall curve (AUPRC) from 0.34 (0.32-0.36) to 0.46 (0.44-0.49). External validation AUROCs ranged from 0.75 (0.70-0.81) to 0.83 (0.82-0.84) and AUPRCs from 0.27 (0.25-0.29) to 0.45 (0.43-0.48). Aβ plasma levels, regional brain volumetry, and neurocognitive health emerged as important factors for the model prediction. In addition, the trajectories were stratified between predicted RPs and non-RPs based on factors such as ventricular volumes and neurocognitive domains. Our findings will greatly aid clinical trialists in designing tests for new medications, representing a key step toward identifying effective new AD therapies.
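The attention mechanism mentioned above weights each clinic visit's hidden state before pooling them into a single patient representation. A minimal scalar sketch of that weighting, with invented scores and states (not the study's model):

```python
import math

# Toy sketch of temporal attention: per-visit scores are softmax-normalised
# and used to weight that visit's hidden state. All values are illustrative.

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(hidden_states, scores):
    """Attention-weighted sum of per-visit hidden states (here, scalars)."""
    weights = softmax(scores)
    return sum(w * h for w, h in zip(weights, hidden_states))

# Three visits; the most recent visit receives the highest attention score.
context = attend([0.2, 0.5, 0.9], [0.1, 0.4, 1.5])
print(round(context, 3))
```

In the real model the hidden states come from a recurrent network over longitudinal measurements, and the scores themselves are learned.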

PMID:38598464 | DOI:10.1371/journal.pdig.0000479

Categories: Literature Watch

Deep Learning Model for Quality Assessment of Urinary Bladder Ultrasound Images using Multi-scale and Higher-order Processing

Wed, 2024-04-10 06:00

IEEE Trans Ultrason Ferroelectr Freq Control. 2024 Apr 10;PP. doi: 10.1109/TUFFC.2024.3386919. Online ahead of print.

ABSTRACT

Autonomous Ultrasound Image Quality Assessment (US-IQA) is a promising tool to aid interpretation by practicing sonographers and to enable the future robotization of ultrasound procedures. However, autonomous US-IQA faces several challenges. Ultrasound images contain many spurious artifacts, such as noise due to handheld probe positioning, errors in the selection of probe parameters, and patient respiration during the procedure. Further, these images are highly variable in appearance with respect to the individual patient's physiology. We propose a deep Convolutional Neural Network (CNN), USQNet, which utilizes a Multi-scale and Local-to-Global Second-order Pooling (MS-L2GSoP) classifier to conduct a sonographer-like assessment of image quality. The classifier first extracts features at multiple scales to encode inter-patient anatomical variations, similar to a sonographer's understanding of anatomy. It then uses second-order pooling in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of multi-scale structural and multi-region textural features. The L2GSoP captures higher-order relationships between different spatial locations and provides the seed for correlating local patches, much as a sonographer prioritizes regions across the image. We experimentally validated USQNet on a new dataset of human urinary bladder ultrasound images. The validation involved a subjective assessment against experienced radiologists' annotations, followed by comparison with state-of-the-art CNNs for US-IQA and with ablated counterparts of our network. The results demonstrate that USQNet achieves a remarkable accuracy of 92.4% and outperforms the SOTA models by 3-14% while requiring comparable computation time.
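Second-order pooling replaces the usual mean of feature vectors with the mean of their outer products, so pairwise feature correlations survive the pooling. A tiny sketch with hypothetical 2-D features (not the USQNet layers themselves):

```python
# Minimal sketch of second-order pooling: average the outer products of
# feature vectors rather than the vectors, capturing feature correlations.
# The 2-D feature vectors below are toy values.

def second_order_pool(features):
    """Return the mean outer-product matrix of a list of feature vectors."""
    d = len(features[0])
    acc = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                acc[i][j] += f[i] * f[j]
    n = len(features)
    return [[v / n for v in row] for row in acc]

pooled = second_order_pool([[1.0, 2.0], [3.0, 4.0]])
print(pooled)
```

The resulting d×d matrix is what the network passes onward, which is why second-order pooling is costlier but more expressive than plain averaging.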

PMID:38598406 | DOI:10.1109/TUFFC.2024.3386919

Categories: Literature Watch

Prototype-based Semantic Segmentation

Wed, 2024-04-10 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Apr 10;PP. doi: 10.1109/TPAMI.2024.3387116. Online ahead of print.

ABSTRACT

Deep learning based semantic segmentation solutions have yielded compelling results over the preceding decade. They encompass diverse network architectures (FCN based or attention based), along with various mask decoding schemes (parametric softmax based or pixel-query based). Despite the divergence, they can be grouped within a unified framework by interpreting the softmax weights or query vectors as learnable class prototypes. In light of this prototype view, we reveal inherent limitations within the parametric segmentation regime, and accordingly develop a nonparametric alternative based on non-learnable prototypes. In contrast to previous approaches that entail the learning of a single weight/query vector per class in a fully parametric manner, our approach represents each class as a set of non-learnable prototypes, relying solely upon the mean features of training pixels within that class. The pixel-wise prediction is thus achieved by nonparametric nearest prototype retrieving. This allows our model to directly shape the pixel embedding space by optimizing the arrangement between embedded pixels and anchored prototypes. It is able to accommodate an arbitrary number of classes with a constant number of learnable parameters. Through empirical evaluation with FCN based and Transformer based segmentation models (i.e., HRNet, Swin, SegFormer, Mask2Former) and backbones (i.e., ResNet, HRNet, Swin, MiT), our nonparametric framework shows superior performance on standard segmentation datasets (i.e., ADE20K, Cityscapes, COCO-Stuff), as well as in large-vocabulary semantic segmentation scenarios. We expect that this study will provoke a rethink of the current de facto semantic segmentation model design.
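The non-learnable prototype idea reduces, at inference time, to nearest-prototype retrieval: each class is summarized by the mean feature of its training pixels, and a pixel takes the class of the closest prototype. A self-contained toy sketch (2-D features and class names are invented; real embeddings are high-dimensional):

```python
# Sketch of nonparametric prototype-based segmentation: class prototypes
# are mean training features, prediction is nearest-prototype retrieval.
# Feature vectors and class labels here are toy examples.

def class_prototypes(features_by_class):
    """Mean feature vector per class (one prototype per class here)."""
    protos = {}
    for cls, feats in features_by_class.items():
        d = len(feats[0])
        protos[cls] = [sum(f[i] for f in feats) / len(feats) for i in range(d)]
    return protos

def predict(pixel, protos):
    """Assign the pixel to the class of its nearest prototype."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: dist2(pixel, protos[c]))

protos = class_prototypes({
    "road": [[0.9, 0.1], [1.1, -0.1]],   # mean -> [1.0, 0.0]
    "sky":  [[-0.9, 1.0], [-1.1, 1.0]],  # mean -> [-1.0, 1.0]
})
print(predict([0.8, 0.2], protos))
```

Because the prototypes are computed, not learned, adding a class adds no trainable parameters, which is the constant-parameter property the abstract highlights (the paper uses several prototypes per class; one is used here for brevity).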

PMID:38598386 | DOI:10.1109/TPAMI.2024.3387116

Categories: Literature Watch

Multimodal Drug Target Binding Affinity Prediction using Graph Local Substructure

Wed, 2024-04-10 06:00

IEEE J Biomed Health Inform. 2024 Apr 10;PP. doi: 10.1109/JBHI.2024.3386815. Online ahead of print.

ABSTRACT

Predicting drug-target binding affinity is essential to reducing drug development costs and cycles. Recently, several deep learning-based methods have been proposed that utilize the structural or sequential information of drugs and targets to predict the drug-target binding affinity (DTA). However, methods that rely solely on sequence features do not consider hydrogen atom data, which may result in information loss. Graph-based methods may contain information that is not directly related to the prediction process. Additionally, the lack of structured division can limit the representation of characteristics. To address these issues, we propose a multimodal DTA prediction model using graph local substructures, called MLSDTA. This model comprehensively integrates the graph and sequence modal information from drugs and targets, achieving multimodal fusion through a cross-attention approach for multimodal features. Additionally, adaptive structure aware pooling is applied to generate graphs containing local substructural information. The model also utilizes the DropNode strategy to enhance the distinctions between different molecules. Experiments on two benchmark datasets have shown that MLSDTA outperforms current state-of-the-art models, demonstrating the feasibility of MLSDTA.
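Cross-attention fusion lets one modality's features query the other's. The sketch below is a generic scaled dot-product cross-attention step with made-up vectors, not MLSDTA's actual fusion module:

```python
import math

# Toy sketch of cross-attention fusion between two modalities: a query
# vector from one modality attends over key/value features of the other.
# All vectors are short illustrative examples, not learned features.

def cross_attention(query, keys, values):
    """Scaled dot-product attention of a single query over key/value pairs."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)  # stabilised softmax over the scores
    w = [math.exp(s - m) for s in scores]
    total = sum(w)
    w = [x / total for x in w]
    d_v = len(values[0])
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(d_v)]

# A sequence-modality query attends to two graph-substructure features.
fused = cross_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                        [[2.0, 2.0], [4.0, 4.0]])
print(fused)
```

In practice the queries, keys, and values are linear projections of each modality's feature maps, and attention runs in both directions before the fused representation feeds the affinity head.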

PMID:38598378 | DOI:10.1109/JBHI.2024.3386815

Categories: Literature Watch

Predicting Blood Pressures for Pregnant Women by PPG and Personalized Deep Learning

Wed, 2024-04-10 06:00

IEEE J Biomed Health Inform. 2024 Apr 10;PP. doi: 10.1109/JBHI.2024.3386707. Online ahead of print.

ABSTRACT

This work predicts blood pressure (BP) from photoplethysmography (PPG) data to provide effective early warning of possible preeclampsia in pregnant women. Toward frequent BP measurement, a PPG sensor device is utilized in this study as a solution offering continuous, cuffless blood pressure monitoring for pregnant women. PPG data were collected using a flexible sensor patch from the wrist arteries of 194 subjects, which included 154 normal individuals and 40 pregnant women. Deep-learning models were built and trained in three stages to predict BP. The first stage involves developing a baseline deep-learning BP model using a dataset from common subjects. In the second stage, this model was fine-tuned with data from pregnant women, using a 1-Dimensional Convolutional Neural Network (1D-CNN) with Convolutional Block Attention Modules (CBAMs), followed by bi-directional Gated Recurrent Unit (GRU) layers and attention layers. The fine-tuned model results in a mean error (ME) of -1.40 ± 7.15 (standard deviation, SD) for systolic blood pressure (SBP) and -0.44 (ME) ± 5.06 (SD) for diastolic blood pressure (DBP). The final stage personalizes the model for individual pregnant women using transfer learning again, further enhancing accuracy to -0.17 (ME) ± 1.45 (SD) for SBP and 0.27 (ME) ± 0.64 (SD) for DBP, showing a promising solution for precise, continuous, noninvasive BP monitoring through the proposed three stages of modeling, fine-tuning, and personalization.
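The ME ± SD figures quoted above are the mean and standard deviation of per-reading prediction errors. A small sketch of that computation, with invented BP values:

```python
import math

# Sketch of the reported error statistics: mean error (ME) and standard
# deviation (SD) of predicted minus reference blood pressures.
# The readings below are made-up illustrative numbers in mmHg.

def me_sd(predicted, reference):
    errors = [p - r for p, r in zip(predicted, reference)]
    me = sum(errors) / len(errors)
    var = sum((e - me) ** 2 for e in errors) / len(errors)  # population SD
    return me, math.sqrt(var)

me, sd = me_sd([118.0, 122.0, 125.0], [120.0, 120.0, 124.0])
print(me, sd)
```

A mean error near zero with a small SD is what distinguishes the personalized stage (-0.17 ± 1.45 for SBP) from the merely fine-tuned one (-1.40 ± 7.15).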

PMID:38598377 | DOI:10.1109/JBHI.2024.3386707

Categories: Literature Watch

RefQSR: Reference-based Quantization for Image Super-Resolution Networks

Wed, 2024-04-10 06:00

IEEE Trans Image Process. 2024 Apr 10;PP. doi: 10.1109/TIP.2024.3385276. Online ahead of print.

ABSTRACT

Single image super-resolution (SISR) aims to reconstruct a high-resolution image from its low-resolution observation. Recent deep learning-based SISR models show high performance at the expense of increased computational costs, limiting their use in resource-constrained environments. As a promising solution for computationally efficient network design, network quantization has been extensively studied. However, existing quantization methods developed for SISR have yet to effectively exploit image self-similarity, which is a new direction for exploration in this study. We introduce a novel method called reference-based quantization for image super-resolution (RefQSR) that applies high-bit quantization to several representative patches and uses them as references for low-bit quantization of the rest of the patches in an image. To this end, we design dedicated patch clustering and reference-based quantization modules and integrate them into existing SISR network quantization methods. The experimental results demonstrate the effectiveness of RefQSR on various SISR networks and quantization methods.
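The bit-width contrast at the heart of RefQSR can be illustrated with a plain uniform quantizer: reference patches keep many levels, the remaining patches keep few. This is a generic sketch of mixed-bit quantization, not the paper's reference-conditioning mechanism:

```python
# Toy sketch of mixed-bit quantization: a uniform quantizer at a chosen bit
# width, applied at high precision to a "reference" patch and low precision
# to the others. Patch values are illustrative, not real activations.

def quantize(values, bits, lo, hi):
    """Uniformly quantize values in [lo, hi] to 2**bits levels."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return [lo + round((v - lo) / step) * step for v in values]

patch = [0.12, 0.5, 0.83]
ref_8bit = quantize(patch, 8, 0.0, 1.0)   # high-bit reference patch
other_2bit = quantize(patch, 2, 0.0, 1.0)  # low-bit remaining patches
print(ref_8bit, other_2bit)
```

The 8-bit version stays within half a quantization step (~0.002) of the input, while the 2-bit version is visibly coarser; RefQSR's contribution is letting low-bit patches borrow information from their high-bit cluster reference rather than quantizing them in isolation.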

PMID:38598375 | DOI:10.1109/TIP.2024.3385276

Categories: Literature Watch

A noise robust image reconstruction using slice aware cycle interpolator network for parallel imaging in MRI

Wed, 2024-04-10 06:00

Med Phys. 2024 Apr 10. doi: 10.1002/mp.17066. Online ahead of print.

ABSTRACT

BACKGROUND: Reducing magnetic resonance imaging (MRI) scan time has been an important issue for clinical applications. Scan time is reduced by undersampling k-space data, which is made possible by leveraging additional spatial information from multiple, independent receiver coils, thereby reducing the number of sampled k-space lines.

PURPOSE: The aim of this study is to develop a deep-learning method for parallel imaging with a reduced number of auto-calibration signals (ACS) lines in noisy environments.

METHODS: A cycle interpolator network is developed for robust reconstruction of parallel MRI with a small number of ACS lines in noisy environments. The network estimates missing (unsampled) lines of each coil data, and these estimated missing lines are then utilized to re-estimate the sampled k-space lines. In addition, a slice aware reconstruction technique is developed for noise-robust reconstruction while reducing the number of ACS lines. We conducted an evaluation study using retrospectively subsampled data obtained from three healthy volunteers at 3T MRI, involving three different slice thicknesses (1.5, 3.0, and 4.5 mm) and three different image contrasts (T1w, T2w, and FLAIR).
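The cycle idea can be shown with a deliberately simple stand-in: estimate missing k-space lines from their sampled neighbours, then re-estimate the sampled lines from those interpolated values and measure the disagreement. Plain linear interpolation replaces the learned interpolator here, and the line values are invented:

```python
# Toy sketch of the cycle-interpolation idea on a 1-D strip of k-space
# line magnitudes (None = unsampled). Linear interpolation stands in for
# the learned network; it assumes no two adjacent lines are missing.

def interpolate_missing(lines):
    """Fill None entries as the average of the two adjacent known lines."""
    out = list(lines)
    for i, v in enumerate(out):
        if v is None:
            out[i] = (out[i - 1] + lines[i + 1]) / 2.0
    return out

def cycle_residual(lines):
    """Fill missing lines, then re-estimate sampled lines from neighbours."""
    filled = interpolate_missing(lines)
    residuals = []
    for i in range(1, len(filled) - 1):
        if lines[i] is not None:
            est = (filled[i - 1] + filled[i + 1]) / 2.0
            residuals.append(abs(est - lines[i]))
    return filled, residuals

filled, res = cycle_residual([1.0, None, 3.0, None, 5.0])
print(filled, res)
```

In the actual method the re-estimation of sampled lines from estimated ones provides a consistency signal that makes the reconstruction robust to noise when few ACS lines are available.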

RESULTS: Despite the challenges posed by substantial noise in cases with a limited number of ACS lines and thinner slices, the slice aware cycle interpolator network reconstructs the enhanced parallel images. It outperforms RAKI, effectively eliminating aliasing artifacts. Moreover, the proposed network outperforms GRAPPA and demonstrates the ability to successfully reconstruct brain images even under severe noisy conditions.

CONCLUSIONS: The slice aware cycle interpolator network has the potential to improve reconstruction accuracy for a reduced number of ACS lines in noisy environments.

PMID:38598259 | DOI:10.1002/mp.17066

Categories: Literature Watch

SpiNet-QSM: model-based deep learning with Schatten p-norm regularization for improved quantitative susceptibility mapping

Wed, 2024-04-10 06:00

MAGMA. 2024 Apr 10. doi: 10.1007/s10334-024-01158-7. Online ahead of print.

ABSTRACT

OBJECTIVE: Quantitative susceptibility mapping (QSM) provides an estimate of the magnetic susceptibility of tissue using magnetic resonance (MR) phase measurements. The tissue magnetic susceptibility (source) from the measured magnetic field distribution/local tissue field (effect) inherent in the MR phase images is estimated by numerically solving the inverse source-effect problem. This study aims to develop an effective model-based deep-learning framework to solve the inverse problem of QSM.

MATERIALS AND METHODS: This work proposes a Schatten p-norm-driven model-based deep learning framework for QSM with a learnable norm parameter p to adapt to the data. In contrast to other model-based architectures that enforce the l2-norm or l1-norm for the denoiser, the proposed approach can enforce any p-norm (0 < p ≤ 2) on a trainable regulariser.
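For reference, the standard definition of the Schatten p-norm of a matrix X with singular values σ_i (the paper's own notation is not given in the abstract) is:

```latex
\lVert X \rVert_{S_p} = \left( \sum_i \sigma_i^{\,p} \right)^{1/p},
\qquad 0 < p \le 2,
```

with p = 1 recovering the nuclear norm and p = 2 the Frobenius norm; making p trainable lets the regulariser interpolate across this family.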

RESULTS: The proposed method was compared with deep learning-based approaches, such as QSMnet, and model-based deep learning approaches, such as learned proximal convolutional neural network (LPCNN). Reconstructions performed using 77 imaging volumes with different acquisition protocols and clinical conditions, such as hemorrhage and multiple sclerosis, showed that the proposed approach outperformed existing state-of-the-art methods by a significant margin in terms of quantitative merits.

CONCLUSION: The proposed SpiNet-QSM showed a consistent improvement of at least 5% in terms of the high-frequency error norm (HFEN) and normalized root mean squared error (NRMSE) over other QSM reconstruction methods with limited training data.

PMID:38598165 | DOI:10.1007/s10334-024-01158-7

Categories: Literature Watch

Enhancing robotic telesurgery with sensorless haptic feedback

Wed, 2024-04-10 06:00

Int J Comput Assist Radiol Surg. 2024 Apr 10. doi: 10.1007/s11548-024-03117-y. Online ahead of print.

ABSTRACT

PURPOSE: This paper evaluates user performance in telesurgical tasks with the da Vinci Research Kit (dVRK), comparing unilateral teleoperation, bilateral teleoperation with force sensors and sensorless force estimation.

METHODS: A four-channel teleoperation system with disturbance observers and sensorless force estimation with learning-based dynamic compensation was developed. Palpation experiments were conducted with 12 users who tried to locate tumors hidden in tissue phantoms with their fingers or through handheld or teleoperated laparoscopic instruments with visual, force sensor, or sensorless force estimation feedback. In a peg transfer experiment with 10 users, the contribution of sensorless haptic feedback with/without learning-based dynamic compensation was assessed using NASA TLX surveys, measured free motion speeds and forces, environment interaction forces as well as experiment completion times.

RESULTS: The first study showed a 30% increase in accuracy in detecting tumors with sensorless haptic feedback over visual feedback, with only a 5-10% drop in accuracy compared with sensor feedback or direct instrument contact. The second study showed that sensorless feedback can help reduce interaction forces from incidental contacts by a factor of about 3 compared with unilateral teleoperation. The cost is an increase in free motion forces and physical effort. We show that it is possible to improve this with dynamic compensation.

CONCLUSION: We demonstrate the benefits of sensorless haptic feedback in teleoperated surgery systems, especially with dynamic compensation, and that it can improve surgical performance without hardware modifications.

PMID:38598140 | DOI:10.1007/s11548-024-03117-y

Categories: Literature Watch

Knowledge-based planning for Gamma Knife

Wed, 2024-04-10 06:00

Med Phys. 2024 Apr 10. doi: 10.1002/mp.17058. Online ahead of print.

ABSTRACT

BACKGROUND: Current methods for Gamma Knife (GK) treatment planning utilize either manual forward planning, where planners manually place shots in a tumor to achieve a desired dose distribution, or inverse planning, whereby the dose delivered to a tumor is optimized for multiple objectives based on established metrics. For other treatment modalities like IMRT and VMAT, there has been a recent push to develop knowledge-based planning (KBP) pipelines to address the limitations presented by forward and inverse planning. However, no complete KBP pipeline has been created for GK.

PURPOSE: To develop a novel knowledge-based planning (KBP) pipeline using inverse optimization (IO) with 3D dose predictions for GK.

METHODS: Data were obtained for 349 patients from Sunnybrook Health Sciences Centre. A 3D dose prediction model was trained using 322 patients, based on a previously published deep learning methodology, and dose predictions were generated for the remaining 27 out-of-sample patients. A generalized IO model was developed to learn objective function weights from dose predictions. These weights were then used in an inverse planning model to generate deliverable treatment plans. A dose mimicking (DM) model was also implemented for comparison. The quality of the resulting plans was compared to their clinical counterparts using standard GK quality metrics. The performance of the models was also characterized with respect to the dose predictions.

RESULTS: Across all quality metrics, plans generated using the IO pipeline performed at least as well as or better than the respective clinical plans. The average conformity and gradient indices of IO plans were 0.737 ± 0.158 and 3.356 ± 1.030, respectively, compared with 0.713 ± 0.124 and 3.452 ± 1.123 for the clinical plans. IO plans also performed better than DM plans for five of the six quality metrics. Plans generated using IO also have average treatment times comparable to those of clinical plans. With regard to the dose predictions, predictions with higher conformity tend to result in higher-quality KBP plans.
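The abstract does not specify which conformity metric it reports; a common choice in GK planning is the Paddick conformity index, sketched below under that assumption with hypothetical volumes:

```python
# Sketch of the Paddick conformity index, a standard GK plan-quality
# metric (assumed here; the abstract does not name its exact metric):
#   CI = (TV_PIV)^2 / (TV * PIV)
# TV = target volume, PIV = prescription isodose volume,
# TV_PIV = their overlap. A perfect plan gives CI = 1.

def paddick_ci(target_volume, prescription_volume, overlap_volume):
    return overlap_volume ** 2 / (target_volume * prescription_volume)

# Hypothetical volumes in cm^3.
ci = paddick_ci(10.0, 12.0, 9.0)
print(round(ci, 3))
```

Values around 0.7, as in the reported averages, indicate reasonable but imperfect overlap between the prescription isodose and the target.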

CONCLUSIONS: Plans resulting from an IO KBP pipeline are, on average, of equal or superior quality compared to those obtained through manual planning. The results demonstrate the potential for the use of KBP to generate GK treatment with minimal human intervention.

PMID:38598107 | DOI:10.1002/mp.17058

Categories: Literature Watch

Artificial intelligence in kidney transplant pathology

Wed, 2024-04-10 06:00

Pathologie (Heidelb). 2024 Apr 10. doi: 10.1007/s00292-024-01324-7. Online ahead of print.

ABSTRACT

BACKGROUND: Artificial intelligence (AI) systems have shown promising results in digital pathology, including digital nephropathology and specifically also kidney transplant pathology.

AIM: Summarize the current state of research and limitations in the field of AI in kidney transplant pathology diagnostics and provide a future outlook.

MATERIALS AND METHODS: Literature search in PubMed and Web of Science using the search terms "deep learning", "transplant", and "kidney". Based on these results and studies cited in the identified literature, a selection was made of studies that have a histopathological focus and use AI to improve kidney transplant diagnostics.

RESULTS AND CONCLUSION: Many studies have already made important contributions, particularly to the automation of the quantification of some histopathological lesions in nephropathology. This can likely be extended to automatically quantify all lesions relevant to a kidney transplant, such as Banff lesions. Important limitations and challenges exist in the collection of representative data sets and in the ongoing updates to the Banff classification, making large-scale studies difficult. The already positive study results make future AI support in kidney transplant pathology appear likely.

PMID:38598097 | DOI:10.1007/s00292-024-01324-7

Categories: Literature Watch

Predicting the wicking rate of nitrocellulose membranes from recipe data: a case study using ANN at a membrane manufacturing in South Korea

Wed, 2024-04-10 06:00

Anal Sci. 2024 Apr 10. doi: 10.1007/s44211-024-00540-8. Online ahead of print.

ABSTRACT

Lateral flow assays have been widely used for detecting coronavirus disease 2019 (COVID-19). A lateral flow assay consists of a nitrocellulose (NC) membrane, which must have a specific lateral flow rate for the proteins to react. The wicking rate is conventionally used to assess lateral flow in membranes. We used multiple regression and artificial neural networks (ANN) to predict the wicking rate of NC membranes based on membrane recipe data. The developed ANN predicted the wicking rate with a mean square error of 0.059, whereas multiple regression had a mean square error of 0.503. This research also highlighted the significant impact of the water content on the wicking rate through images obtained from scanning electron microscopy. The findings of this research can significantly cut down the research and development costs of novel NC membranes with a specific wicking rate, as the algorithm can predict the wicking rate from the membrane recipe.
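The model comparison rests on the mean square error between predicted and measured wicking rates. A small sketch of that metric with invented predictions (the 0.059 vs 0.503 figures come from the study; the numbers below do not):

```python
# Sketch of the comparison metric: mean square error (MSE) between
# predicted and measured wicking rates. All values below are invented.

def mse(predicted, measured):
    return sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(predicted)

measured = [1.0, 1.2, 0.8]
ann_pred = [1.1, 1.1, 0.9]         # hypothetical ANN predictions
regression_pred = [1.4, 0.8, 1.2]  # hypothetical regression predictions
print(mse(ann_pred, measured), mse(regression_pred, measured))
```

A roughly tenfold MSE gap, as reported, means the ANN's typical error is about three times smaller than the regression's.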

PMID:38598050 | DOI:10.1007/s44211-024-00540-8

Categories: Literature Watch

Deep Learning Enables Automatic Detection of Joint Damage Progression in Rheumatoid Arthritis-Model Development and External Validation

Wed, 2024-04-10 06:00

Rheumatology (Oxford). 2024 Apr 10:keae215. doi: 10.1093/rheumatology/keae215. Online ahead of print.

ABSTRACT

OBJECTIVES: Although deep learning has demonstrated substantial potential in automatic quantification of joint damage in rheumatoid arthritis (RA), evidence for detecting longitudinal changes at an individual patient level is lacking. Here, we introduce and externally validate our automated RA scoring algorithm (AuRA), and demonstrate its utility for monitoring radiographic progression in a real-world setting.

METHODS: The algorithm, originally developed during the Rheumatoid Arthritis 2-Dialogue for Reverse Engineering Assessment and Methods (RA2-DREAM) challenge, was trained to predict expert-curated Sharp-van der Heijde total scores in hand and foot radiographs from two previous clinical studies (n = 367). We externally validated AuRA against data (n = 205) from Turku University Hospital and compared the performance against two top-performing RA2-DREAM solutions. Finally, for 54 patients, we extracted additional radiograph sets from another control visit to the clinic (average time interval of 4.6 years).

RESULTS: In the external validation cohort, with a root-mean-square-error (RMSE) of 23.6, AuRA outperformed both top-performing RA2-DREAM algorithms (RMSEs 35.0 and 35.6). The improved performance was explained mostly by lower errors at higher expert-assessed scores. The longitudinal changes predicted by our algorithm were significantly correlated with changes in expert-assessed scores (Pearson's R = 0.74, p < 0.001).

CONCLUSION: AuRA had the best external validation performance and demonstrated potential for detecting longitudinal changes in joint damage. Available at https://hub.docker.com/r/elolab/aura, our algorithm can easily be applied for automatic detection of radiographic progression in the future, reducing the need for laborious manual scoring.

PMID:38597875 | DOI:10.1093/rheumatology/keae215

Categories: Literature Watch

Conversion of single-energy computed tomography to parametric maps of dual-energy computed tomography using convolutional neural network

Wed, 2024-04-10 06:00

Br J Radiol. 2024 Apr 10:tqae076. doi: 10.1093/bjr/tqae076. Online ahead of print.

ABSTRACT

OBJECTIVES: We propose a deep learning (DL) multi-task learning framework using a convolutional neural network (CNN) for direct conversion of single-energy CT (SECT) to three different parametric maps of dual-energy CT (DECT): virtual monochromatic image (VMI), effective atomic number (EAN), and relative electron density (RED).

METHODS: We propose VMI-Net for conversion of SECT to 70, 120, and 200 keV VMIs. In addition, EAN-Net and RED-Net were also developed to convert SECT to EAN and RED. We trained and validated our model using 67 patients collected between 2019 and 2020. SECT images with 120 kVp acquired by the DECT (IQon spectral CT, Philips) were used as input, while the VMIs, EAN, and RED acquired by the same device were used as target. The performance of the DL framework was evaluated by absolute difference (AD) and relative difference (RD).

RESULTS: The VMI-Net converted 120 kVp SECT to VMIs with an AD of 9.02 Hounsfield units (HU) and an RD of 0.41% compared with the ground-truth VMIs. The ADs of the converted EAN and RED were 0.29 and 0.96, respectively, while the RDs were 1.99% and 0.50%, respectively.
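The AD and RD figures above are mean absolute and mean relative differences between converted and ground-truth maps. A short sketch of both metrics, with hypothetical HU values:

```python
# Sketch of the evaluation metrics named in the abstract: absolute
# difference (AD) and relative difference (RD) between converted and
# ground-truth parametric values. Example numbers are hypothetical.

def abs_diff(pred, truth):
    """Mean absolute difference."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def rel_diff_percent(pred, truth):
    """Mean relative difference, as a percentage of the ground truth."""
    return 100.0 * sum(abs(p - t) / abs(t)
                       for p, t in zip(pred, truth)) / len(pred)

pred = [102.0, 198.0]   # e.g. converted VMI values in HU
truth = [100.0, 200.0]
print(abs_diff(pred, truth), rel_diff_percent(pred, truth))
```

Reporting both matters because a fixed AD (e.g. 9 HU) corresponds to very different RDs depending on the magnitude of the underlying values.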

CONCLUSIONS: SECT images were directly converted to the three parametric maps of DECT (ie, VMIs, EAN, and RED). By using this model, one can generate the parametric information from SECT images without DECT device. Our model can help investigate the parametric information from SECT retrospectively.

ADVANCES IN KNOWLEDGE: Deep learning framework enables converting SECT to various high-quality parametric maps of DECT.

PMID:38597871 | DOI:10.1093/bjr/tqae076

Categories: Literature Watch

Assessing Biological Age: The Potential of ECG Evaluation Using Artificial Intelligence: JACC Family Series

Wed, 2024-04-10 06:00

JACC Clin Electrophysiol. 2024 Mar 14:S2405-500X(24)00111-7. doi: 10.1016/j.jacep.2024.02.011. Online ahead of print.

ABSTRACT

Biological age may be a more valuable predictor of morbidity and mortality than a person's chronological age. Mathematical models have been used for decades to predict biological age, but recent developments in artificial intelligence (AI) have led to new capabilities in age estimation. Using deep learning methods to train AI models on hundreds of thousands of electrocardiograms (ECGs) to predict age results in a good, but imperfect, age prediction. The error predicting age using ECG, or the difference between AI-ECG-derived age and chronological age (delta age), may be a surrogate measurement of biological age, as the delta age relates to survival, even after adjusting for chronological age and other covariates associated with total and cardiovascular mortality. The relative affordability, noninvasiveness, and ubiquity of ECGs, combined with ease of access and potential to be integrated with smartphone or wearable technology, presents a potential paradigm shift in assessment of biological age.
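The delta-age surrogate described above is simply the gap between the AI-ECG age estimate and calendar age; the ages below are invented examples:

```python
# One-line sketch of the "delta age" surrogate: AI-ECG-predicted age
# minus chronological age. Ages are hypothetical examples in years.

def delta_age(ai_ecg_age, chronological_age):
    """Positive delta: the ECG 'looks older' than the calendar age."""
    return ai_ecg_age - chronological_age

print(delta_age(62.0, 55.0))  # → 7.0
```

A persistently positive delta, per the abstract, is associated with worse survival even after adjusting for chronological age.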

PMID:38597855 | DOI:10.1016/j.jacep.2024.02.011

Categories: Literature Watch

A Semiautonomous Deep Learning System to Reduce False-Positive Findings in Screening Mammography

Wed, 2024-04-10 06:00

Radiol Artif Intell. 2024 Apr 10:e230033. doi: 10.1148/ryai.230033. Online ahead of print.

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate the ability of a semiautonomous artificial intelligence (AI) model to identify screening mammograms not suspicious for breast cancer and reduce the number of false-positive examinations. Materials and Methods The deep learning algorithm was trained using 123,248 2D digital mammograms (6,161 cancers), and a retrospective study was performed on three nonoverlapping datasets of 14,831 screening mammography examinations (1,026 cancers) from two US institutions and one UK institution (2008-2017). The standalone performance of humans and AI was compared. Human+AI performance was simulated to examine reductions in the cancer detection rate, number of examinations, false-positive callbacks, and benign biopsies. Metrics were adjusted to mimic the natural distribution of a screening population, and bootstrapped confidence intervals (CIs) and P values were calculated. Results Retrospective evaluation on all datasets showed minimal changes to the cancer detection rate with use of the AI device (US Dataset 1, P = .02; US Dataset 2, P < .001; UK, P < .001; noninferiority margin of 0.25 cancers per 1000 examinations). On US Dataset 1 (11,592 mammograms, 101 cancers, 3810 female patients, mean age 57.3 years ± 10.0 [SD]), the device reduced screening examinations requiring radiologist interpretation by 41.6% [95% CI: 40.6%, 42.4%] (P < .001), diagnostic examination callbacks by 31.1% [28.7%, 33.4%] (P < .001), and benign needle biopsies by 7.4% [4.1%, 12.4%] (P < .001). US Dataset 2 (1362 mammograms, 330 cancers, 1293 female patients, mean age 55.4 years ± 10.5 [SD]) had reductions of 19.5% [16.9%, 22.1%] (P < .001), 11.9% [8.6%, 15.7%] (P < .001), and 6.5% [0.0%, 19.0%] (P = .08), respectively. The UK dataset (1877 mammograms, 595 cancers, 1491 female patients, mean age 63.5 years ± 7.1 [SD]) had reductions of 36.8% [34.4%, 39.7%] (P < .001), 17.1% [5.9%, 30.1%] (P < .001), and 5.9% [2.9%, 11.5%] (P < .001), respectively. Conclusion This work demonstrates the potential of a semiautonomous breast cancer screening system to reduce false positives, unnecessary procedures, patient anxiety, and medical expenses. Published under a CC BY 4.0 license.
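A semiautonomous rule-out workflow of the kind evaluated above can be sketched as follows: exams the AI scores as clearly non-suspicious are removed from the radiologist's reading list, and the reduction in workload is measured against any cancers lost at that step. The threshold, score distributions, and prevalence below are invented for illustration; the abstract does not disclose the device's operating point.

```python
import numpy as np

def simulate_triage(ai_scores: np.ndarray, labels: np.ndarray, threshold: float):
    """Rule-out simulation: exams scored below `threshold` skip human review.
    Returns the fraction of exams removed from the reading list and the
    number of cancers that the rule-out step would miss."""
    ruled_out = ai_scores < threshold
    workload_reduction = float(ruled_out.mean())
    missed_cancers = int(np.sum(ruled_out & (labels == 1)))
    return workload_reduction, missed_cancers

rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.01).astype(int)            # ~1% cancer prevalence
scores = np.where(labels == 1,
                  rng.uniform(0.6, 1.0, 1000),            # hypothetical cancer scores
                  rng.uniform(0.0, 0.7, 1000))            # hypothetical benign scores
reduction, missed = simulate_triage(scores, labels, threshold=0.3)
```

With these synthetic distributions the rule-out step removes roughly 40% of exams while missing no cancers; real operating points must be validated against a noninferiority margin, as the study does.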

PMID:38597785 | DOI:10.1148/ryai.230033

Categories: Literature Watch

Usformer: A small network for left atrium segmentation of 3D LGE MRI

Wed, 2024-04-10 06:00

Heliyon. 2024 Mar 28;10(7):e28539. doi: 10.1016/j.heliyon.2024.e28539. eCollection 2024 Apr 15.

ABSTRACT

Left atrial (LA) fibrosis plays a vital role as a mediator in the progression of atrial fibrillation. 3D late gadolinium-enhancement (LGE) MRI has been proven effective in identifying LA fibrosis. Image analysis of 3D LA LGE involves manual segmentation of the LA wall, which is both lengthy and challenging. Automated segmentation poses challenges owing to the diverse intensities in data from various vendors, the limited contrast between the LA and surrounding tissues, and the intricate anatomical structures of the LA. Current approaches relying on 3D networks are computationally intensive because both the 3D LGE MRIs and the networks are large. To address this issue, most researchers have adopted two-stage methods: initially identifying the LA center using a scaled-down version of the MRIs and subsequently cropping the full-resolution MRIs around the LA center for final segmentation. We propose a lightweight transformer-based 3D architecture, Usformer, designed to precisely segment the LA volume in a single stage, eliminating the error propagation associated with suboptimal two-stage training. The transposed attention facilitates capturing global context in large 3D volumes without significant computational requirements. Usformer outperforms state-of-the-art supervised learning methods in terms of accuracy and speed. First, with the smallest Hausdorff distance (HD) and average symmetric surface distance (ASSD), it achieved Dice scores of 93.1% and 92.0% on the 2018 Atrial Segmentation Challenge dataset and our local institutional dataset, respectively. Second, the number of parameters and the computational complexity are reduced by 2.8× and 3.8×, respectively. Moreover, Usformer does not require a large dataset: when only 16 labeled MRI scans are used for training, it still achieves a 92.1% Dice score on the challenge dataset. The proposed Usformer delineates the boundaries of the LA wall relatively accurately, which may assist in the clinical translation of LA LGE for planning catheter ablation of atrial fibrillation.
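For reference, the Dice score reported above is the standard overlap metric on binary segmentation masks; the sketch below is a generic implementation on toy 3D masks, not Usformer's code.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), smoothed by eps to avoid 0/0."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 3D masks: a 4x4x4 cube, and a copy shifted to overlap by half
a = np.zeros((8, 8, 8), dtype=bool)
a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool)
b[2:6, 2:6, 4:8] = True
print(round(dice_score(a, a), 3))  # 1.0 (perfect overlap)
print(round(dice_score(a, b), 3))  # 0.5 (half overlap)
```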

PMID:38596055 | PMC:PMC11002571 | DOI:10.1016/j.heliyon.2024.e28539

Categories: Literature Watch
