Deep learning

Predicting central choroidal thickness from colour fundus photographs using deep learning

Fri, 2024-03-29 06:00

PLoS One. 2024 Mar 29;19(3):e0301467. doi: 10.1371/journal.pone.0301467. eCollection 2024.

ABSTRACT

The estimation of central choroidal thickness from colour fundus images can improve disease detection. We developed a deep learning method to estimate central choroidal thickness from colour fundus images at a single institution, using independent datasets from other institutions for validation. A total of 2,548 images from patients who underwent same-day optical coherence tomography examination and colour fundus imaging at the outpatient clinic of Jichi Medical University Hospital were retrospectively analysed. For validation, 393 images from three institutions were used. Patients with signs of subretinal haemorrhage, central serous detachment, retinal pigment epithelial detachment, and/or macular oedema were excluded. All other fundus photographs with a visible pigment epithelium were included. The main outcome measure was the standard deviation of 10-fold cross-validation. Validation was performed using the original algorithm and the algorithm after learning based on images from all institutions. The standard deviation of 10-fold cross-validation was 73 μm. The standard deviation for other institutions was reduced by re-learning. We describe the first application and validation of a deep learning approach for the estimation of central choroidal thickness from fundus images. This algorithm is expected to help graders judge choroidal thickening and thinning.
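The main outcome measure above, the standard deviation under 10-fold cross-validation, can be sketched as a simple evaluation loop. This is a minimal illustration on synthetic data with a placeholder mean-thickness predictor, not the paper's deep learning model; all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 100 "images" reduced to feature vectors, with
# central choroidal thickness targets in micrometres (hypothetical data).
X = rng.normal(size=(100, 8))
y = 250 + 40 * rng.normal(size=100)

def ten_fold_error_sd(X, y, n_folds=10):
    """SD of prediction errors pooled across 10 cross-validation folds."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    errors = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Placeholder model: predict the training-set mean thickness.
        pred = np.full(len(test), y[train].mean())
        errors.append(y[test] - pred)
    return np.concatenate(errors).std()

sd = ten_fold_error_sd(X, y)
```

In the paper's setting the placeholder model would be replaced by the trained network's per-image thickness predictions.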

PMID:38551957 | DOI:10.1371/journal.pone.0301467

Categories: Literature Watch

Fine-tuning TrailMap: The utility of transfer learning to improve the performance of deep learning in axon segmentation of light-sheet microscopy images

Fri, 2024-03-29 06:00

PLoS One. 2024 Mar 29;19(3):e0293856. doi: 10.1371/journal.pone.0293856. eCollection 2024.

ABSTRACT

Light-sheet microscopy has made possible the 3D imaging of both fixed and live biological tissue, with samples as large as the entire mouse brain. However, segmentation and quantification of that data remains a time-consuming manual undertaking. Machine learning methods promise the possibility of automating this process. This study seeks to advance the performance of prior models through optimizing transfer learning. We fine-tuned the existing TrailMap model using expert-labeled data from noradrenergic axonal structures in the mouse brain. By changing the cross-entropy weights and using augmentation, we demonstrate a generally improved adjusted F1-score over using the originally trained TrailMap model within our test datasets.

PMID:38551935 | DOI:10.1371/journal.pone.0293856

Categories: Literature Watch

The impact of large language models on radiology: a guide for radiologists on the latest innovations in AI

Fri, 2024-03-29 06:00

Jpn J Radiol. 2024 Mar 29. doi: 10.1007/s11604-024-01552-0. Online ahead of print.

ABSTRACT

The advent of Deep Learning (DL) has significantly propelled the field of diagnostic radiology forward by enhancing image analysis and interpretation. The introduction of the Transformer architecture, followed by the development of Large Language Models (LLMs), has further revolutionized this domain. LLMs now possess the potential to automate and refine the radiology workflow, extending from report generation to assistance in diagnostics and patient care. The integration of multimodal technology with LLMs could potentially leapfrog these applications to unprecedented levels. However, LLMs come with unresolved challenges such as information hallucinations and biases, which can affect clinical reliability. Despite these issues, the legislative and guideline frameworks have yet to catch up with technological advancements. Radiologists must acquire a thorough understanding of these technologies to leverage LLMs' potential to the fullest while maintaining medical safety and ethics. This review aims to aid in that endeavor.

PMID:38551772 | DOI:10.1007/s11604-024-01552-0

Categories: Literature Watch

Deep Learning-Derived Myocardial Strain

Fri, 2024-03-29 06:00

JACC Cardiovasc Imaging. 2024 Mar 12:S1936-878X(24)00063-9. doi: 10.1016/j.jcmg.2024.01.011. Online ahead of print.

ABSTRACT

BACKGROUND: Echocardiographic strain measurements require extensive operator experience and have significant intervendor variability. Creating an automated, open-source, vendor-agnostic method to retrospectively measure global longitudinal strain (GLS) from standard echocardiography B-mode images would greatly improve post hoc research applications and may streamline patient analyses.

OBJECTIVES: This study sought to develop an automated deep learning strain (DLS) analysis pipeline and validate its performance across multiple applications and populations.

METHODS: Interobserver/-vendor variation of traditional GLS, and simulated effects of variation in contour on speckle-tracking measurements were assessed. The DLS pipeline was designed to take semantic segmentation results from EchoNet-Dynamic and derive longitudinal strain by calculating change in the length of the left ventricular endocardial contour. DLS was evaluated for agreement with GLS on a large external dataset and applied across a range of conditions that result in cardiac hypertrophy.
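The strain derivation described above, the change in the length of the left ventricular endocardial contour, reduces to a short calculation. A minimal numpy sketch, assuming contours are given as (N, 2) point arrays; the toy contours are hypothetical and this is not the EchoNet-Dynamic pipeline itself:

```python
import numpy as np

def contour_length(points):
    """Total polyline length of an (N, 2) endocardial contour."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def longitudinal_strain(contour_ed, contour_es):
    """Percent change in contour length from end-diastole to end-systole.

    Negative values indicate shortening (normal contraction).
    """
    l_ed = contour_length(contour_ed)
    l_es = contour_length(contour_es)
    return 100.0 * (l_es - l_ed) / l_ed

# Toy semicircular contours: end-systole is a uniformly scaled-down copy,
# so the contour shortens by exactly 20%.
theta = np.linspace(0, np.pi, 50)
ed = np.column_stack([np.cos(theta), np.sin(theta)])
es = 0.8 * ed
strain = longitudinal_strain(ed, es)  # -20.0
```

In practice the contours would come from the semantic segmentation output at end-diastole and end-systole rather than analytic curves.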

RESULTS: In patients scanned by 2 sonographers using 2 vendors, GLS had an intraclass correlation of 0.29 (95% CI: -0.01 to 0.53, P = 0.03) between vendor measurements and 0.63 (95% CI: 0.48-0.74, P < 0.001) between sonographers. With minor changes in initial input contour, step-wise pixel shifts resulted in a mean absolute error of 3.48% and proportional strain difference of 13.52% by a 6-pixel shift. In external validation, DLS maintained moderate agreement with 2-dimensional GLS (intraclass correlation coefficient [ICC]: 0.56, P = 0.002) with a bias of -3.31% (limits of agreement: -11.65% to 5.02%). The DLS method showed differences (P < 0.0001) between populations with cardiac hypertrophy and had moderate agreement in a patient population of advanced cardiac amyloidosis: ICC was 0.64 (95% CI: 0.53-0.72), P < 0.001, with a bias of 0.57%, limits of agreement of -4.87% to 6.01% vs 2-dimensional GLS.

CONCLUSIONS: The open-source DLS provides lower variation than human measurements and similar quantitative results. The method is rapid, consistent, vendor-agnostic, publicly released, and applicable across a wide range of imaging qualities.

PMID:38551533 | DOI:10.1016/j.jcmg.2024.01.011

Categories: Literature Watch

Diagnostic Performance of Artificial Intelligence for Detection of Scaphoid and Distal Radius Fractures: A Systematic Review

Fri, 2024-03-29 06:00

J Hand Surg Am. 2024 Mar 27:S0363-5023(24)00054-6. doi: 10.1016/j.jhsa.2024.01.020. Online ahead of print.

ABSTRACT

PURPOSE: To review the existing literature to (1) determine the diagnostic efficacy of artificial intelligence (AI) models for detecting scaphoid and distal radius fractures and (2) compare the efficacy to human clinical experts.

METHODS: PubMed, OVID/Medline, and Cochrane libraries were queried for studies investigating the development, validation, and analysis of AI for the detection of scaphoid or distal radius fractures. Data regarding study design, AI model development and architecture, prediction accuracy/area under the receiver operator characteristic curve (AUROC), and imaging modalities were recorded.

RESULTS: A total of 21 studies were identified, of which 12 (57.1%) used AI to detect fractures of the distal radius and 9 (42.9%) used AI to detect fractures of the scaphoid. AI models demonstrated good diagnostic performance on average, with AUROC values ranging from 0.77 to 0.96 for scaphoid fractures and from 0.90 to 0.99 for distal radius fractures. Accuracy of AI models ranged from 72.0% to 90.3% and from 89.0% to 98.0% for scaphoid and distal radius fractures, respectively. When compared with clinical experts, 13 of 14 (92.9%) studies reported that AI models demonstrated comparable or better performance. The type of fracture influenced model performance, with worse overall performance on occult scaphoid fractures; however, models trained specifically on occult fractures demonstrated substantially improved performance compared with humans.

CONCLUSIONS: AI models demonstrated excellent performance for detecting scaphoid and distal radius fractures, with the majority demonstrating comparable or better performance compared with human experts. Worse performance was demonstrated on occult fractures. However, when trained specifically on difficult fracture patterns, AI models demonstrated improved performance.

CLINICAL RELEVANCE: AI models can help detect commonly missed occult fractures while enhancing workflow efficiency for distal radius and scaphoid fracture diagnoses. As performance varies based on fracture type, future studies focused on wrist fracture detection should clearly define whether the goal is to (1) identify difficult-to-detect fractures or (2) improve workflow efficiency by assisting in routine tasks.

PMID:38551529 | DOI:10.1016/j.jhsa.2024.01.020

Categories: Literature Watch

Multi-scale nested UNet with transformer for colorectal polyp segmentation

Fri, 2024-03-29 06:00

J Appl Clin Med Phys. 2024 Mar 29:e14351. doi: 10.1002/acm2.14351. Online ahead of print.

ABSTRACT

BACKGROUND: Polyp detection and localization are essential tasks for colonoscopy. U-shaped convolutional neural networks have achieved remarkable segmentation performance for biomedical images, but their limited receptive fields constrain the modeling of long-range dependencies.

PURPOSE: Our goal was to develop and test a novel architecture for polyp segmentation, which takes advantage of learning local information with long-range dependencies modeling.

METHODS: A novel architecture combining a multi-scale nested UNet structure with an integrated transformer was developed for polyp segmentation. The proposed network takes advantage of both CNNs and transformers to extract distinct feature information. The transformer layer is embedded between the encoder and decoder of a U-shaped network to learn explicit global context and long-range semantic information. To address the challenge of varying polyp sizes, an MSFF unit was proposed to fuse features at multiple resolutions.

RESULTS: Four public datasets and one in-house dataset were used to train and test model performance. An ablation study was also conducted to verify each component of the model. On the Kvasir-SEG and CVC-ClinicDB datasets, the proposed model achieved mean Dice scores of 0.942 and 0.950, respectively, which were more accurate than the other methods. To assess the generalization of different methods, we performed two cross-dataset validations, in which the proposed model achieved the highest mean Dice score. The results demonstrate that the proposed network has powerful learning and generalization capability, significantly improving segmentation accuracy and outperforming state-of-the-art methods.
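The mean Dice scores reported here follow the standard Dice similarity coefficient, twice the overlap divided by the total mask size. A minimal numpy sketch on toy binary masks (hypothetical values, not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 8x8 masks: a 16-pixel prediction and a 12-pixel target
# overlapping in a 3x3 region, so Dice = 2*9 / (16+12) = 18/28.
pred = np.zeros((8, 8), dtype=int)
target = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1
target[3:6, 1:5] = 1
score = dice_score(pred, target)  # ~0.643
```

The small eps guards against division by zero when both masks are empty.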

CONCLUSIONS: The proposed model produced more accurate polyp segmentation than current methods on four public datasets and one in-house dataset. Its ability to segment polyps of different sizes shows potential for clinical application.

PMID:38551396 | DOI:10.1002/acm2.14351

Categories: Literature Watch

A novel method in COPD diagnosing using respiratory signal generation based on CycleGAN and machine learning

Fri, 2024-03-29 06:00

Comput Methods Biomech Biomed Engin. 2024 Mar 29:1-16. doi: 10.1080/10255842.2024.2329938. Online ahead of print.

ABSTRACT

OBJECTIVE: The main goal of this research is to use distinctive features in respiratory sounds for diagnosing Chronic Obstructive Pulmonary Disease (COPD). This study develops a classification method by utilizing inverse transforms to effectively identify COPD based on unique respiratory features while comparing the classification performance of various optimal algorithms.

METHOD: Respiratory sounds are divided into individual breathing cycles. In the data standardization and augmentation phase, a CycleGAN model enhances data diversity. Comprehensive analyses of these segments are then performed using various wavelet families and different spectral transformations representing the characteristic signals. Advanced convolutional neural networks, including VGG16, ResNet50, and InceptionV3, are used for the classification task.

RESULTS: The results of this study demonstrate the effectiveness of the proposed method. Notably, the best-performing configuration uses the Bior1.3 wavelet after standardization in combination with InceptionV3, achieving a remarkable F1-score of 99.75%.

CONCLUSION: Inverse transformation techniques combined with deep learning models show significant accuracy in detecting COPD disease. These findings suggest the feasibility of early COPD diagnosis through AI-powered characterization of acoustic features.

MOTIVATION AND SIGNIFICANCE: The motivation behind this research stems from the urgent need for early and accurate diagnosis of Chronic Obstructive Pulmonary Disease (COPD). COPD is a respiratory disease that poses many difficulties when detected late, potentially causing severe harm to the patient's quality of life and increasing the healthcare burden. Timely identification and intervention are crucial to reduce the progression of the disease and improve patient outcomes.

PMID:38551327 | DOI:10.1080/10255842.2024.2329938

Categories: Literature Watch

Retracted: Detection and Classification of Colorectal Polyp Using Deep Learning

Fri, 2024-03-29 06:00

Biomed Res Int. 2024 Mar 20;2024:9879585. doi: 10.1155/2024/9879585. eCollection 2024.

ABSTRACT

[This retracts the article DOI: 10.1155/2022/2805607.].

PMID:38550160 | PMC:PMC10977110 | DOI:10.1155/2024/9879585

Categories: Literature Watch

Retracted: Detection of Breast Cancer Using Histopathological Image Classification Dataset with Deep Learning Techniques

Fri, 2024-03-29 06:00

Biomed Res Int. 2024 Mar 20;2024:9863139. doi: 10.1155/2024/9863139. eCollection 2024.

ABSTRACT

[This retracts the article DOI: 10.1155/2022/8363850.].

PMID:38550053 | PMC:PMC10977111 | DOI:10.1155/2024/9863139

Categories: Literature Watch

Multi-step validation of a deep learning-based system with visual explanations for optical diagnosis of polyps with advanced features

Fri, 2024-03-29 06:00

iScience. 2024 Mar 8;27(4):109461. doi: 10.1016/j.isci.2024.109461. eCollection 2024 Apr 19.

ABSTRACT

Artificial intelligence (AI) has been found to assist in the optical differentiation of hyperplastic and adenomatous colorectal polyps. We investigated whether AI can improve the accuracy of endoscopists' optical diagnosis of polyps with advanced features. Our AI system distinguished polyps with advanced features with accuracy above 0.870 in both the internal and external validation datasets. All 19 endoscopists, across experience levels, showed significantly lower diagnostic accuracy (0.410-0.580) than the AI. A prospective randomized controlled study, in which 120 endoscopists optically diagnosed polyps with advanced features with or without AI assistance, showed that AI improved both the proportion of polyps with advanced features correctly sent for histological examination (0.960 versus 0.840, p < 0.001) and the proportion of polyps without advanced features resected and discarded (0.490 versus 0.380, p = 0.007). We thus developed an AI technique that significantly increases the accuracy of optical diagnosis of colorectal polyps with advanced features.

PMID:38550997 | PMC:PMC10973580 | DOI:10.1016/j.isci.2024.109461

Categories: Literature Watch

Deep learning-based characterization of neutrophil activation phenotypes in ex vivo human Candida blood infections

Fri, 2024-03-29 06:00

Comput Struct Biotechnol J. 2024 Mar 18;23:1260-1273. doi: 10.1016/j.csbj.2024.03.006. eCollection 2024 Dec.

ABSTRACT

Early identification of human pathogens is crucial for the effective treatment of bloodstream infections to prevent sepsis. Since pathogens that are present in small numbers are usually difficult to detect directly, we hypothesize that the behavior of the immune cells that are present in large numbers may provide indirect evidence about the causative pathogen of the infection. We previously applied time-lapse microscopy to observe that neutrophils isolated from human whole-blood samples, which had been infected with the human-pathogenic fungus Candida albicans or C. glabrata, indeed exhibited a characteristic morphodynamic behavior. By tracking neutrophil movement and shape dynamics over time and combining these features with a machine learning approach, the accuracy for the differentiation between the two Candida species was about 75%. In this study, the focus is on improving the classification accuracy of the Candida species using advanced deep learning methods. We implemented (i) gated recurrent unit (GRU) networks and transformer-based networks for video data, and (ii) convolutional neural networks (CNNs) for individual frames of the time-lapse microscopy data. While the GRU and transformer-based approaches yielded promising results with 96% and 100% accuracy, respectively, the classification based on videos proved to be very time-consuming and required several hours. In contrast, the CNN model for individual microscopy frames yielded results within minutes, and, utilizing a majority-vote technique, achieved 100% accuracy both in identifying the pathogen-free blood samples and in distinguishing between the Candida species. The applied CNN demonstrates the potential for automatically differentiating bloodstream Candida infections with high accuracy and efficiency.
We further analysed the results of the CNN using explainable artificial intelligence (XAI) techniques to understand the critical features and patterns, thereby shedding light on potential key morphodynamic characteristics of neutrophils in response to different Candida species. This approach could provide new insights into host-pathogen interactions and may facilitate the development of rapid, automated diagnostic tools for differentiating fungal species in blood samples.
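The majority-vote step described above, aggregating per-frame CNN predictions into one video-level call, can be sketched in a few lines. A hedged illustration with hypothetical per-frame labels, not the authors' code:

```python
import numpy as np

# Hypothetical per-frame class predictions from a frame-level CNN for one
# video: 0 = pathogen-free, 1 = C. albicans, 2 = C. glabrata.
frame_predictions = np.array([1, 1, 2, 1, 0, 1, 1, 2, 1, 1])

def majority_vote(labels):
    """Assign the video-level class as the most frequent frame-level label."""
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

video_label = majority_vote(frame_predictions)  # 1 (C. albicans)
```

Because each frame is classified independently, misclassified individual frames are outvoted as long as most frames are labeled correctly.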

PMID:38550973 | PMC:PMC10973576 | DOI:10.1016/j.csbj.2024.03.006

Categories: Literature Watch

SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction

Fri, 2024-03-29 06:00

Shape Med Imaging (2023). 2023 Oct;14350:287-300. doi: 10.1007/978-3-031-46914-5_23. Epub 2023 Oct 31.

ABSTRACT

3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis. While deep learning-based approaches have achieved impressive performance in this area, existing deep networks often fail to effectively utilize the shape structures of objects presented in images. As a result, the topology of reconstructed objects may not be well preserved, leading to the presence of artifacts such as discontinuities, holes, or mismatched connections between different parts. In this paper, we propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues. In contrast to previous methods that primarily rely on spatial correlations of image intensities for 3D reconstruction, our model leverages shape priors learned from the training data to guide the reconstruction process. To achieve this, we develop a joint learning network that simultaneously learns a mean shape under deformation models. Each reconstructed image is then considered as a deformed variant of the mean shape. We validate our model, SADIR, on both brain and cardiac magnetic resonance images (MRIs). Experimental results show that our method outperforms the baselines with lower reconstruction error and better preservation of the shape structure of objects within the images.

PMID:38550968 | PMC:PMC10977919 | DOI:10.1007/978-3-031-46914-5_23

Categories: Literature Watch

Artificial intelligence for early detection of renal cancer in computed tomography: A review

Fri, 2024-03-29 06:00

Camb Prism Precis Med. 2022 Nov 11;1:e4. doi: 10.1017/pcm.2022.9. eCollection 2023.

ABSTRACT

Renal cancer is responsible for over 100,000 yearly deaths and is principally discovered in computed tomography (CT) scans of the abdomen. CT screening would likely increase the rate of early renal cancer detection, and improve general survival rates, but it is expected to have a prohibitively high financial cost. Given recent advances in artificial intelligence (AI), it may be possible to reduce the cost of CT analysis and enable CT screening by automating the radiological tasks that constitute the early renal cancer detection pipeline. This review seeks to facilitate further interdisciplinary research in early renal cancer detection by summarising our current knowledge across AI, radiology, and oncology and suggesting useful directions for future novel work. Initially, this review discusses existing approaches in automated renal cancer diagnosis, and methods across broader AI research, to summarise the existing state of AI cancer analysis. Then, this review matches these methods to the unique constraints of early renal cancer detection and proposes promising directions for future research that may enable AI-based early renal cancer detection via CT screening. The primary targets of this review are clinicians with an interest in AI and data scientists with an interest in the early detection of cancer.

PMID:38550952 | PMC:PMC10953744 | DOI:10.1017/pcm.2022.9

Categories: Literature Watch

Applications of artificial intelligence in dementia research

Fri, 2024-03-29 06:00

Camb Prism Precis Med. 2022 Dec 6;1:e9. doi: 10.1017/pcm.2022.10. eCollection 2023.

ABSTRACT

More than 50 million older people worldwide are suffering from dementia, and this number is estimated to increase to 150 million by 2050. Greater caregiver burdens and financial impacts on the healthcare system are expected as we wait for an effective treatment for dementia. Researchers are constantly exploring new therapies and screening approaches for the early detection of dementia. Artificial intelligence (AI) is widely applied in dementia research, including machine learning and deep learning methods for dementia diagnosis and progression detection. Computerized apps are also convenient tools for patients and caregivers to monitor cognitive function changes. Furthermore, social robots can potentially provide daily life support or guidance for the elderly who live alone. This review aims to provide an overview of AI applications in dementia research. We divided the applications into three categories according to different stages of cognitive impairment: (1) cognitive screening and training, (2) diagnosis and prognosis for dementia, and (3) dementia care and interventions. There are numerous studies on AI applications for dementia research. However, one challenge that remains is comparing the effectiveness of different AI methods in real clinical settings.

PMID:38550934 | PMC:PMC10953738 | DOI:10.1017/pcm.2022.10

Categories: Literature Watch

Erratum: Convolutional neural network (CNN)-enabled electrocardiogram (ECG) analysis: a comparison between standard twelve-lead and single-lead setups

Fri, 2024-03-29 06:00

Front Cardiovasc Med. 2024 Mar 14;11:1396396. doi: 10.3389/fcvm.2024.1396396. eCollection 2024.

ABSTRACT

[This corrects the article DOI: 10.3389/fcvm.2024.1327179.].

PMID:38550518 | PMC:PMC10973542 | DOI:10.3389/fcvm.2024.1396396

Categories: Literature Watch

Review of Deep Learning Based Autosegmentation for Clinical Target Volume: Current Status and Future Directions

Fri, 2024-03-29 06:00

Adv Radiat Oncol. 2024 Feb 8;9(5):101470. doi: 10.1016/j.adro.2024.101470. eCollection 2024 May.

ABSTRACT

PURPOSE: Manual contour work for radiation treatment planning takes significant time to ensure volumes are accurately delineated. In recent years, artificial intelligence with deep learning based autosegmentation (DLAS) models has emerged to alleviate this workload. It is used for organ-at-risk contouring with consistent performance and significant time savings. The purpose of this study was to evaluate the performance reported in currently published data for DLAS of clinical target volume (CTV) contours, identify areas of improvement, and discuss future directions.

METHODS AND MATERIALS: A literature review was performed by using the key words "deep learning" AND ("segmentation" or "delineation") AND "clinical target volume" in an indexed search into PubMed. A total of 154 articles based on the search criteria were reviewed. The review considered the DLAS model used, disease site, targets contoured, guidelines used, and the overall performance.

RESULTS: Of the 53 articles investigating DLAS of CTV, only 6 were published before 2020. Publications have increased in recent years, with 46 articles published between 2020 and 2023. The cervix (n = 19) and the prostate (n = 12) were studied most frequently. Most studies (n = 43) involved a single institution. Median sample size was 130 patients (range, 5-1052). The most common metrics used to measure DLAS performance were the Dice similarity coefficient, followed by the Hausdorff distance. Dosimetric performance was seldom reported (n = 11). There was also variability in the specific guidelines used (Radiation Therapy Oncology Group (RTOG), European Society for Therapeutic Radiology and Oncology (ESTRO), and others). DLAS models had good overall performance for contouring CTV volumes across multiple disease sites, with most studies reporting Dice similarity coefficient values >0.7. DLAS models also delineated CTV volumes faster than manual contouring. However, some DLAS-generated contours still required at least minor edits, indicating room for improvement to be addressed in future studies of DLAS for CTV volumes.
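The Hausdorff distance mentioned alongside the Dice coefficient measures the worst-case disagreement between two contours: the largest distance from any point on one contour to the nearest point on the other. A minimal numpy sketch using brute-force pairwise distances (the toy contours are hypothetical):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points in a of the distance to the nearest point in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two contours, (N,2) and (M,2)."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Toy example: two unit squares of contour points offset by 3 units in x,
# so the worst-case nearest-point distance is exactly 3.
a = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
b = a + np.array([3.0, 0.0])
hd = hausdorff_distance(a, b)  # 3.0
```

Unlike Dice, which rewards bulk overlap, the Hausdorff distance is sensitive to single outlying contour points, which is why the two metrics are usually reported together.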

CONCLUSIONS: DLAS demonstrates capability of completing CTV contour plans with increased efficiency and accuracy. However, most models are developed and validated by single institutions using guidelines followed by the developing institutions. Publications about DLAS of the CTV have increased in recent years. Future studies and DLAS models need to include larger data sets with different patient demographics, disease stages, validation in multi-institutional settings, and inclusion of dosimetric performance.

PMID:38550365 | PMC:PMC10966174 | DOI:10.1016/j.adro.2024.101470

Categories: Literature Watch

Inversion of winter wheat leaf area index from UAV multispectral images: classical vs. deep learning approaches

Fri, 2024-03-29 06:00

Front Plant Sci. 2024 Mar 14;15:1367828. doi: 10.3389/fpls.2024.1367828. eCollection 2024.

ABSTRACT

Precise and timely leaf area index (LAI) estimation for winter wheat is crucial for precision agriculture. The emergence of high-resolution unmanned aerial vehicle (UAV) data and machine learning techniques offers a revolutionary approach for fine-scale estimation of wheat LAI at low cost. While machine learning has proven valuable for LAI estimation, model limitations and variations still impede accurate and efficient LAI inversion. This study explores the potential of classical machine learning models and a deep learning model for estimating winter wheat LAI using multispectral images acquired by drones. Initially, texture features and vegetation indices served as inputs for the partial least squares regression (PLSR) model and random forest (RF) model, which were then combined with ground-measured LAI data to invert winter wheat LAI. In contrast, this study also employed a convolutional neural network (CNN) model that uses only the cropped original image for LAI estimation. The results show that vegetation indices outperform texture features in terms of correlation with LAI and estimation accuracy. However, for both conventional machine learning methods, the highest accuracy is achieved by combining vegetation indices and texture features to invert LAI. Among the three models, the CNN approach yielded the highest LAI estimation accuracy (R² = 0.83), followed by the RF model (R² = 0.82), with the PLSR model exhibiting the lowest accuracy (R² = 0.78). The spatial distribution and values of the estimated results for the RF and CNN models are similar, whereas the PLSR model differs significantly from the other two. This study achieves rapid and accurate winter wheat LAI estimation using classical machine learning and deep learning methods. The findings can serve as a reference for real-time wheat growth monitoring and field management practices.
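The abstract does not list which vegetation indices were used; as a hedged illustration, the widely used NDVI can be computed per pixel from the red and near-infrared bands of a multispectral image and then averaged over a plot before being fed to a regression model. All values below are hypothetical:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patches standing in for UAV multispectral bands.
nir = np.array([[0.6, 0.5],
                [0.4, 0.7]])
red = np.array([[0.1, 0.2],
                [0.2, 0.1]])
vi = ndvi(nir, red)          # per-pixel index values
mean_vi = vi.mean()          # plot-level feature for the regression model
```

Texture features (e.g. from a grey-level co-occurrence matrix) would be computed on the same patches and concatenated with such index values as model inputs.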

PMID:38550285 | PMC:PMC10972960 | DOI:10.3389/fpls.2024.1367828

Categories: Literature Watch

Editorial: Rising stars in PET and SPECT: 2022

Fri, 2024-03-29 06:00

Front Nucl Med. 2023;3:1326549. doi: 10.3389/fnume.2023.1326549. Epub 2023 Nov 10.

NO ABSTRACT

PMID:38550275 | PMC:PMC10976900 | DOI:10.3389/fnume.2023.1326549

Categories: Literature Watch

Spherical convolutional neural networks can improve brain microstructure estimation from diffusion MRI data

Fri, 2024-03-29 06:00

Front Neuroimaging. 2024 Mar 14;3:1349415. doi: 10.3389/fnimg.2024.1349415. eCollection 2024.

ABSTRACT

Diffusion magnetic resonance imaging is sensitive to the microstructural properties of brain tissue. However, estimating clinically and scientifically relevant microstructural properties from the measured signals remains a highly challenging inverse problem that machine learning may help solve. This study investigated if recently developed rotationally invariant spherical convolutional neural networks can improve microstructural parameter estimation. We trained a spherical convolutional neural network to predict the ground-truth parameter values from efficiently simulated noisy data and applied the trained network to imaging data acquired in a clinical setting to generate microstructural parameter maps. Our network performed better than the spherical mean technique and multi-layer perceptron, achieving higher prediction accuracy than the spherical mean technique with less rotational variance than the multi-layer perceptron. Although we focused on a constrained two-compartment model of neuronal tissue, the network and training pipeline are generalizable and can be used to estimate the parameters of any Gaussian compartment model. To highlight this, we also trained the network to predict the parameters of a three-compartment model that enables the estimation of apparent neural soma density using tensor-valued diffusion encoding.
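The spherical mean technique used here as a baseline averages the measured signal over all gradient directions on a b-value shell, which removes the dependence on fibre orientation and is what makes it rotation invariant. A minimal numpy sketch with simulated signals (hypothetical values, not the study's data):

```python
import numpy as np

# Hypothetical diffusion MRI signals: rows are gradient directions on a
# single b-value shell, columns are 3 voxels.
rng = np.random.default_rng(1)
signals = 0.5 + 0.1 * rng.normal(size=(64, 3))  # 64 directions, 3 voxels

def spherical_mean(shell_signals):
    """Per-voxel mean signal over all gradient directions on one shell.

    Averaging over directions discards orientation information, leaving a
    rotation-invariant summary from which microstructural parameters
    can be fitted.
    """
    return shell_signals.mean(axis=0)

means = spherical_mean(signals)  # shape (3,), one value per voxel
```

A spherical CNN instead consumes the full directional signal, so it can exploit angular structure that the spherical mean discards, consistent with the accuracy gains reported above.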

PMID:38550242 | PMC:PMC10972853 | DOI:10.3389/fnimg.2024.1349415

Categories: Literature Watch

Artificial Intelligence Predicts Hospitalization for Acute Heart Failure Exacerbation in Patients Undergoing Myocardial Perfusion Imaging

Thu, 2024-03-28 06:00

J Nucl Med. 2024 Mar 28:jnumed.123.266761. doi: 10.2967/jnumed.123.266761. Online ahead of print.

ABSTRACT

Heart failure (HF) is a leading cause of morbidity and mortality in the United States and worldwide, with a high associated economic burden. This study aimed to assess whether artificial intelligence models incorporating clinical, stress test, and imaging parameters could predict hospitalization for acute HF exacerbation in patients undergoing SPECT/CT myocardial perfusion imaging. Methods: The HF risk prediction model was developed using data from 4,766 patients who underwent SPECT/CT at a single center (internal cohort). The algorithm used clinical risk factors, stress variables, SPECT imaging parameters, and fully automated deep learning-generated calcium scores from attenuation CT scans. The model was trained and validated using repeated hold-out (10-fold cross-validation). External validation was conducted on a separate cohort of 2,912 patients. During a median follow-up of 1.9 y, 297 patients (6%) in the internal cohort were admitted for HF exacerbation. Results: The final model demonstrated a higher area under the receiver-operating-characteristic curve (0.87 ± 0.03) for predicting HF admissions than did stress left ventricular ejection fraction (0.73 ± 0.05, P < 0.0001) or a model developed using only clinical parameters (0.81 ± 0.04, P < 0.0001). These findings were confirmed in the external validation cohort (area under the receiver-operating-characteristic curve: 0.80 ± 0.04 for final model, 0.70 ± 0.06 for stress left ventricular ejection fraction, 0.72 ± 0.05 for clinical model; P < 0.001 for all). Conclusion: Integrating SPECT myocardial perfusion imaging into an artificial intelligence-based risk assessment algorithm improves the prediction of HF hospitalization. The proposed method could enable early interventions to prevent HF hospitalizations, leading to improved patient care and better outcomes.
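The area under the receiver-operating-characteristic curve compared throughout this study can be computed from risk scores and observed outcomes via the Mann-Whitney U statistic. A minimal sketch with hypothetical scores, not the study's model:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive case receives
    a higher risk score than a randomly chosen negative case.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise score comparisons; ties count as half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy risk scores: patients later admitted for HF (label 1) tend to
# score higher, so the AUC is 8 of 9 positive-negative pairs correct.
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
auc = auroc(scores, labels)  # 8/9
```

This pairwise formulation is quadratic in sample size; production code typically uses a rank-based equivalent, but the value is the same.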

PMID:38548351 | DOI:10.2967/jnumed.123.266761

Categories: Literature Watch
