Deep learning
A Deep Learning-based Pipeline for Segmenting the Cerebral Cortex Laminar Structure in Histology Images
Neuroinformatics. 2024 Oct 17. doi: 10.1007/s12021-024-09688-0. Online ahead of print.
ABSTRACT
Characterizing the anatomical structure and connectivity between cortical regions is a critical step towards understanding the information processing properties of the brain and will help provide insight into the nature of neurological disorders. A key feature of the mammalian cerebral cortex is its laminar structure. Identifying these layers in neuroimaging data is important for understanding their global structure and the connectivity patterns of neurons in the brain. We studied Nissl-stained and myelin-stained slice images of the brain of the common marmoset (Callithrix jacchus), a New World monkey that is becoming increasingly popular in the neuroscience community as an object of study. We present a novel computational framework that first acquires cortical labels using AI-based tools and then applies a trained deep learning model to segment the cerebral cortical layers. We obtained a Euclidean distance of 1274.750 ± 156.400 μm for the cortical label acquisition, which is within the acceptable range defined as half of the average cortical thickness (1800.630 μm). We compared our cortical layer segmentation pipeline with the pipeline proposed by Wagstyl et al. (PLoS Biology, 18(4), e3000678, 2020) adapted to 2D data, obtaining a better mean 95th percentile Hausdorff distance (95HD) of 92.150 μm, versus a mean 95HD of 94.170 μm for Wagstyl et al. We also compared our pipeline's performance against theirs on their dataset (the BigBrain dataset). The results again showed better segmentation quality: our pipeline achieved a Jaccard index of 85.318%, compared with the 83.000% stated in their paper.
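As an editorial illustration (not part of the paper), the two headline metrics above can be computed from binary layer masks as sketched below; the array names and the assumption of isotropic pixel spacing are ours, not the authors'.

```python
# Minimal sketch: 95th-percentile Hausdorff distance (95HD) and Jaccard index
# between two 2D segmentation masks, assuming isotropic pixel spacing in μm.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_distances(mask_a: np.ndarray, mask_b: np.ndarray, spacing_um: float) -> np.ndarray:
    """Distances (μm) from the boundary pixels of mask_a to the boundary of mask_b."""
    border_a = mask_a & ~binary_erosion(mask_a)
    border_b = mask_b & ~binary_erosion(mask_b)
    dist_to_b = distance_transform_edt(~border_b) * spacing_um  # distance to nearest boundary pixel of b
    return dist_to_b[border_a]

def hausdorff_95(mask_a, mask_b, spacing_um=1.0):
    d_ab = surface_distances(mask_a, mask_b, spacing_um)
    d_ba = surface_distances(mask_b, mask_a, spacing_um)
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

def jaccard(mask_a, mask_b):
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union else 1.0
```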
PMID:39417954 | DOI:10.1007/s12021-024-09688-0
Artificial intelligence in urolithiasis: a systematic review of utilization and effectiveness
World J Urol. 2024 Oct 17;42(1):579. doi: 10.1007/s00345-024-05268-8.
ABSTRACT
PURPOSE: Mirroring global trends, artificial intelligence is advancing in medicine, notably in urolithiasis. It promises accurate diagnosis, effective treatment, and forecasting of epidemiological risks and stone passage. This systematic review aims to identify the types of AI models utilised in urolithiasis studies and evaluate their effectiveness.
METHODS: The study was registered with PROSPERO. PubMed, EMBASE, Google Scholar, and Cochrane Library databases were searched for relevant literature, using keywords such as 'urology,' 'artificial intelligence,' and 'machine learning.' Only original AI studies on urolithiasis were included, excluding reviews, unrelated studies, and non-English articles. PRISMA guidelines were followed.
RESULTS: Out of 4851 studies initially identified, 71 were included for comprehensive analysis of the application of AI in urolithiasis. AI showed notable proficiency in stone composition analysis in 12 studies, achieving an average precision of 88.2% (range 0.65-1). In the domain of stone detection, the average precision reached 96.9%. AI's accuracy in predicting spontaneous ureteral stone passage averaged 87%, while its performance in treatment modalities such as PCNL and SWL achieved average accuracy rates of 82% and 83%, respectively. These AI models were generally superior to traditional diagnostic and treatment methods.
CONCLUSION: The consolidated data underscores AI's increasing significance in urolithiasis management. Across various dimensions (diagnosis, monitoring, and treatment), AI outperformed conventional methodologies. High precision and accuracy rates indicate that AI is not only effective but also poised for integration into routine clinical practice. Further research is warranted to establish AI's long-term utility and to validate its role as a standard tool in urological care.
PMID:39417840 | DOI:10.1007/s00345-024-05268-8
Automatic detection for bioacoustic research: a practical guide from and for biologists and computer scientists
Biol Rev Camb Philos Soc. 2024 Oct 17. doi: 10.1111/brv.13155. Online ahead of print.
ABSTRACT
Recent years have seen a dramatic rise in the use of passive acoustic monitoring (PAM) for biological and ecological applications, and a corresponding increase in the volume of data generated. However, data sets are often becoming so sizable that analysing them manually is increasingly burdensome and unrealistic. Fortunately, we have also seen a corresponding rise in computing power and the capability of machine learning algorithms, which offer the possibility of performing some of the analysis required for PAM automatically. Nonetheless, the field of automatic detection of acoustic events is still in its infancy in biology and ecology. In this review, we examine the trends in bioacoustic PAM applications, and their implications for the burgeoning amount of data that needs to be analysed. We explore the different methods of machine learning and other tools for scanning, analysing, and extracting acoustic events automatically from large volumes of recordings. We then provide a step-by-step practical guide for using automatic detection in bioacoustics. One of the biggest challenges for the greater use of automatic detection in bioacoustics is that there is often a gulf in expertise between the biological sciences and the field of machine learning and computer science. Therefore, this review first presents an overview of the requirements for automatic detection in bioacoustics, intended to familiarise those from a computer science background with the needs of the bioacoustics community, followed by an introduction to the key elements of machine learning and artificial intelligence that a biologist needs to understand to incorporate automatic detection into their research. We then provide a practical guide to building an automatic detection pipeline for bioacoustic data, and conclude with a discussion of possible future directions in this field.
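As an editorial illustration of the simplest detector family discussed in such guides, the sketch below runs a band-limited energy threshold over a spectrogram; the band limits, threshold, and file handling are assumptions, and real pipelines typically replace the threshold with a learned classifier.

```python
# Minimal sketch: energy-threshold event detector over a spectrogram band.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def detect_events(wav_path, f_low=1000, f_high=8000, threshold_db=-50):
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                          # mix stereo recordings down to mono
        audio = audio.mean(axis=1)
    f, t, sxx = spectrogram(audio.astype(float), fs=rate, nperseg=1024)
    band = sxx[(f >= f_low) & (f <= f_high)].sum(axis=0)   # energy in the target band
    power_db = 10 * np.log10(band / band.max() + 1e-12)
    active = power_db > threshold_db            # frames above the detection threshold
    return t[active]                            # detection times in seconds
```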
PMID:39417330 | DOI:10.1111/brv.13155
scMGATGRN: a multiview graph attention network-based method for inferring gene regulatory networks from single-cell transcriptomic data
Brief Bioinform. 2024 Sep 23;25(6):bbae526. doi: 10.1093/bib/bbae526.
ABSTRACT
The gene regulatory network (GRN) plays a vital role in understanding the structure and dynamics of cellular systems, revealing complex regulatory relationships, and exploring disease mechanisms. Recently, deep learning (DL)-based methods have been proposed to infer GRNs from single-cell transcriptomic data and have achieved impressive performance. However, these methods do not fully utilize graph topological information and high-order neighbor information from multiple receptive fields. To overcome these limitations, we propose a novel model based on a multiview graph attention network, named scMGATGRN, to infer GRNs. scMGATGRN mainly consists of a graph attention network (GAT), a multiview module, and a view-level attention mechanism. The GAT extracts essential features of the gene regulatory network. The multiview module simultaneously utilizes local feature information and high-order neighbor feature information of nodes in the gene regulatory network. The view-level attention mechanism dynamically adjusts the relative importance of node embedding representations and efficiently aggregates the node embeddings from the two views. To verify the effectiveness of scMGATGRN, we compared its performance with 10 methods (five shallow learning algorithms and five state-of-the-art DL-based methods) on seven benchmark single-cell RNA sequencing (scRNA-seq) datasets from five cell lines (two human and three mouse) with four different kinds of ground-truth networks. The experimental results not only show that scMGATGRN outperforms competing methods but also demonstrate the potential of this model in inferring GRNs. The code and data of scMGATGRN are freely available on GitHub (https://github.com/nathanyl/scMGATGRN).
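As an editorial sketch of the view-level attention idea described above (not the released scMGATGRN code), the module below fuses two node-embedding views with learned attention weights; dimensions and names are illustrative.

```python
# Minimal sketch: attention-weighted fusion of two node-embedding views
# (e.g., local-neighbourhood and high-order-neighbourhood GAT outputs).
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        # Small scorer maps each view's node embedding to a scalar attention score.
        self.scorer = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # views: list of [num_nodes, dim] embeddings, one per view.
        stacked = torch.stack(views, dim=1)      # [num_nodes, num_views, dim]
        scores = self.scorer(stacked)            # [num_nodes, num_views, 1]
        weights = torch.softmax(scores, dim=1)   # attention over views
        return (weights * stacked).sum(dim=1)    # fused embedding [num_nodes, dim]

# Usage with two dummy views for 100 genes and 32-dimensional embeddings.
fusion = ViewAttentionFusion(dim=32)
fused = fusion([torch.randn(100, 32), torch.randn(100, 32)])
```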
PMID:39417321 | DOI:10.1093/bib/bbae526
Explainable artificial intelligence and domain adaptation for predicting HIV infection with graph neural networks
Ann Med. 2024 Dec;56(1):2407063. doi: 10.1080/07853890.2024.2407063. Epub 2024 Oct 17.
ABSTRACT
OBJECTIVE: To investigate explainable deep learning methods for graph neural networks to predict HIV infections from social network information, and to perform domain adaptation to evaluate model transferability across different datasets.
METHODS: Network data from two cohorts of younger sexual minority men (SMM) from two U.S. cities (Chicago, IL, and Houston, TX) were collected between 2014 and 2016. Feature importance of graph attention network (GAT) models was determined using GNNExplainer. Domain adaptation was performed to examine model transferability from one city's dataset to the other, training on 100% of the source dataset plus 30% of the target dataset and predicting on the remaining 70% of the target dataset.
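For illustration, the split described above (all source-city samples plus 30% of the target city for training, with prediction on the remaining 70%) can be expressed as follows; the variable names are editorial, not from the study.

```python
# Minimal sketch: domain-adaptation index split (source + 30% target for training).
import numpy as np
from sklearn.model_selection import train_test_split

def domain_adaptation_split(source_idx, target_idx, seed=0):
    target_train, target_test = train_test_split(
        target_idx, train_size=0.30, random_state=seed, shuffle=True)
    train_idx = np.concatenate([source_idx, target_train])   # 100% source + 30% target
    return train_idx, target_test                             # evaluate on remaining 70% of target

train_idx, test_idx = domain_adaptation_split(np.arange(0, 500), np.arange(500, 900))
```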
RESULTS: Domain adaptation showed the ability of GAT to improve prediction over training with single-city datasets. Feature importance analysis of GAT models trained on single cities indicated similar features across the different cities, reinforcing the potential application of GAT models in predicting HIV infections through domain adaptation.
CONCLUSION: GAT models can be used to address the data sparsity issue in HIV study populations. They are powerful tools for predicting individual risk of HIV that can be further explored for better understanding of HIV transmission.
PMID:39417227 | DOI:10.1080/07853890.2024.2407063
Circulating miRNAs and Machine Learning for Lateralizing Primary Aldosteronism
Hypertension. 2024 Oct 17. doi: 10.1161/HYPERTENSIONAHA.124.23418. Online ahead of print.
ABSTRACT
BACKGROUND: Distinguishing between unilateral and bilateral primary aldosteronism, a major cause of secondary hypertension, is crucial due to different treatment approaches. While adrenal venous sampling is the gold standard, its invasiveness, limited availability, and often difficult interpretation pose challenges. This study explores the utility of circulating microRNAs (miRNAs) and machine learning in distinguishing between unilateral and bilateral forms of primary aldosteronism.
METHODS: MiRNA profiling was conducted on an Illumina MiSeq platform using plasma samples from 18 patients with primary aldosteronism taken during adrenal venous sampling. Bioinformatics and machine learning identified 9 miRNAs for validation by reverse transcription real-time quantitative polymerase chain reaction. Validation was performed on a cohort of 108 patients with known subtype. A 30-patient subset of the validation cohort had both adrenal venous and peripheral samples; the rest had only peripheral samples. A neural network model was used for feature selection and for comparing adrenal venous and peripheral samples, while a deep-learning model was used for classification.
RESULTS: Our model identified 10 miRNA combinations achieving >85% accuracy in distinguishing unilateral primary aldosteronism from bilateral adrenal hyperplasia on a 30-sample subset, while also confirming the suitability of peripheral samples for analysis. The best model, involving 6 miRNAs, achieved an area under the curve (AUC) of 87.1%. Deep learning yielded 100% accuracy on the subset and 90.9% sensitivity and 81.8% specificity on all 108 samples, with an AUC of 86.7%.
CONCLUSIONS: Machine learning analysis of circulating miRNAs offers a minimally invasive alternative for primary aldosteronism lateralization. Early identification of bilateral adrenal hyperplasia could expedite treatment initiation without the need for further localization, benefiting both patients and health care providers.
PMID:39417220 | DOI:10.1161/HYPERTENSIONAHA.124.23418
Research progress on machine algorithm prediction of liver cancer prognosis after intervention therapy
Am J Cancer Res. 2024 Sep 25;14(9):4580-4596. doi: 10.62347/BEAO1926. eCollection 2024.
ABSTRACT
The treatment for liver cancer has transitioned from traditional surgical resection to interventional therapies, which have become increasingly popular among patients due to their minimally invasive nature and significant local efficacy. However, with advancements in treatment technologies, accurately assessing patient response and predicting long-term survival has become a crucial research topic. Over the past decade, machine algorithms have made remarkable progress in the medical field, particularly in hepatology and prognosis studies of hepatocellular carcinoma (HCC). Machine algorithms, including deep learning and machine learning, can identify prognostic patterns and trends by analyzing vast amounts of clinical data. Despite significant advancements, several issues remain unresolved in the prognosis prediction of liver cancer using machine algorithms. Key challenges and main controversies include effectively integrating multi-source clinical data to improve prediction accuracy, addressing data privacy and ethical concerns, and enhancing the transparency and interpretability of machine algorithm decision-making processes. This paper aims to systematically review and analyze the current applications and potential of machine algorithms in predicting the prognosis of patients undergoing interventional therapy for liver cancer, providing theoretical and empirical support for future research and clinical practice.
PMID:39417194 | PMC:PMC11477842 | DOI:10.62347/BEAO1926
Comprehensive review of literature on Parkinson's disease diagnosis
Comput Biol Chem. 2024 Sep 28;113:108228. doi: 10.1016/j.compbiolchem.2024.108228. Online ahead of print.
ABSTRACT
Parkinson's disease (PD) is a neurodegenerative illness that affects 1-2 individuals per 1000 people over the age of 60 and has a 1% prevalence rate. It affects both the non-motor and motor aspects of movement, including initiation, execution, and planning. Prior to behavioral and cognitive abnormalities such as dementia, movement-related symptoms including stiffness, tremor, and initiation problems may be observed. Patients with PD experience substantial reductions in social interactions, quality of life (QoL), and familial ties, as well as significant financial burdens at both the individual and societal levels. The healthcare industry mostly applies machine learning (ML) approaches to modalities such as images, signals, and structured data. Therefore, this survey reviews 50 articles on Parkinson's disease diagnosis using different modalities. The survey includes (i) classifying multimodal articles on Parkinson's disease diagnosis (image, signal, data) using various machine learning, deep learning, and other approaches; (ii) analyzing the datasets and simulation tools used in the existing papers; and (iii) examining performance measures, assessing the best performance, and providing a chronological review of the reviewed papers. Finally, the review identifies the research gaps and obstacles in this research topic.
PMID:39413446 | DOI:10.1016/j.compbiolchem.2024.108228
AI in Psoriatic Disease: Scoping Review
JMIR Dermatol. 2024 Oct 16;7:e50451. doi: 10.2196/50451.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) has many applications in numerous medical fields, including dermatology. Although the majority of AI studies in dermatology focus on skin cancer, there is growing interest in the applicability of AI models in inflammatory diseases, such as psoriasis. Psoriatic disease is a chronic, inflammatory, immune-mediated systemic condition with multiple comorbidities and a significant impact on patients' quality of life. Advanced treatments, including biologics and small molecules, have transformed the management of psoriatic disease. Nevertheless, there are still considerable unmet needs. Globally, delays in the diagnosis of the disease and its severity are common due to poor access to health care systems. Moreover, despite the abundance of treatments, we are unable to predict which is the right medication for the right patient, especially in resource-limited settings. AI could be an additional tool to address those needs. In this way, we can improve rates of diagnosis, accurately assess severity, and predict outcomes of treatment.
OBJECTIVE: This study aims to provide an up-to-date literature review on the use of AI in psoriatic disease, including diagnostics and clinical management as well as addressing the limitations in applicability.
METHODS: We searched the databases MEDLINE, PubMed, and Embase using the keywords "AI AND psoriasis OR psoriatic arthritis OR psoriatic disease," "machine learning AND psoriasis OR psoriatic arthritis OR psoriatic disease," and "prognostic model AND psoriasis OR psoriatic arthritis OR psoriatic disease" until June 1, 2023. Reference lists of relevant papers were also cross-examined for other papers not detected in the initial search.
RESULTS: Our literature search yielded 38 relevant papers. AI has been identified as a key component in digital health technologies. Within this field, there is potential to apply specific techniques such as machine learning and deep learning to address several aspects of managing psoriatic disease. This includes diagnosis, which is particularly useful for remote teledermatology via photographs taken by patients, as well as monitoring and estimating severity. Similarly, AI can be used to synthesize the vast data sets already in place through patient registries, which can help identify appropriate biologic treatments for future cohorts and the individuals most likely to develop complications.
CONCLUSIONS: There are multiple advantageous uses for AI and digital health technologies in psoriatic disease. With wider implementation of AI, we need to be mindful of potential limitations, such as validation and standardization or generalizability of results in specific populations, such as patients with darker skin phototypes.
PMID:39413371 | DOI:10.2196/50451
Diffusion probabilistic priors for zero-shot low-dose CT image denoising
Med Phys. 2024 Oct 16. doi: 10.1002/mp.17431. Online ahead of print.
ABSTRACT
BACKGROUND: Denoising low-dose computed tomography (CT) images is a critical task in medical image computing. Supervised deep learning-based approaches have made significant advancements in this area in recent years. However, these methods typically require pairs of low-dose and normal-dose CT images for training, which are challenging to obtain in clinical settings. Existing unsupervised deep learning-based methods often require training with a large number of low-dose CT images or rely on specially designed data acquisition processes to obtain training data.
PURPOSE: To address these limitations, we propose a novel unsupervised method that only utilizes normal-dose CT images during training, enabling zero-shot denoising of low-dose CT images.
METHODS: Our method leverages the diffusion model, a powerful generative model. We begin by training a cascaded unconditional diffusion model capable of generating high-quality normal-dose CT images from low resolution to high resolution. The cascaded architecture makes the training of high-resolution diffusion models more feasible. Subsequently, we introduce low-dose CT images into the reverse process of the diffusion model as the likelihood, combine them with the priors provided by the diffusion model, and iteratively solve multiple maximum a posteriori (MAP) problems to achieve denoising. Additionally, we propose methods to adaptively adjust the coefficients that balance the likelihood and prior in the MAP estimations, allowing for adaptation to different noise levels in low-dose CT images.
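As an editorial sketch only, one MAP-style iteration balancing a Gaussian data-fidelity term against a denoiser-based prior might look like the following; the prior_denoiser callable and the lambda/step constants are placeholders, not the authors' implementation.

```python
# Minimal sketch: one MAP-style update trading off data fidelity against a learned prior.
import torch

def map_step(x, y, prior_denoiser, lam=0.1, step=0.5):
    """One iteration: x is the current estimate, y the low-dose observation."""
    x = x.detach().requires_grad_(True)
    data_fidelity = 0.5 * torch.sum((x - y) ** 2)      # Gaussian likelihood term
    grad = torch.autograd.grad(data_fidelity, x)[0]
    x_data = x - step * grad                           # move toward the observation
    x_prior = prior_denoiser(x_data)                   # pull toward the learned prior (placeholder network)
    return ((1 - lam) * x_data + lam * x_prior).detach()   # lam balances prior vs likelihood
```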
RESULTS: We test our method on low-dose CT datasets of different regions with varying dose levels. The results demonstrate that our method outperforms the state-of-the-art unsupervised method and surpasses several supervised deep learning-based methods. Our method achieves PSNRs of 45.02 dB and 35.35 dB on the abdomen CT dataset and the chest CT dataset, respectively, surpassing Noise2Sim, the best unsupervised algorithm among the comparative methods, by 0.39 dB and 0.85 dB, respectively.
CONCLUSIONS: We propose a novel low-dose CT image denoising method based on a diffusion model. The proposed method requires only normal-dose CT images as training data, greatly alleviating the data scarcity issue faced by most deep learning-based methods. At the same time, as an unsupervised algorithm, our method achieves very good qualitative and quantitative results. The code is available at https://github.com/DeepXuan/Dn-Dp.
PMID:39413369 | DOI:10.1002/mp.17431
Brain Computer Interfaces: An Introduction for Clinical Neurodiagnostic Technologists
Neurodiagn J. 2024 Oct 16:1-14. doi: 10.1080/21646821.2024.2408501. Online ahead of print.
ABSTRACT
Brain-computer interface (BCI) is a term used to describe systems that translate biological information into commands that can control external devices such as computers, prosthetics, and other machinery. While BCI is used in military applications, home control systems, and a wide array of entertainment, much of its modern interest and funding can be attributed to its utility in the medical community, where it has rapidly propelled advancements in the restoration or replacement of critical functions robbed from victims of disease, stroke, and traumatic injury. BCI devices can allow patients to move prosthetic limbs, operate devices such as wheelchairs or computers, and communicate through writing and speech-generating devices. In this article, we aim to provide an introductory summary of the historical context and modern growing utility of BCI, with specific interest in igniting the conversation of where and how the neurodiagnostics community and its associated parties can embrace and contribute to the world of BCI.
PMID:39413360 | DOI:10.1080/21646821.2024.2408501
Mpox outbreak: Time series analysis with multifractal and deep learning network
Chaos. 2024 Oct 1;34(10):101103. doi: 10.1063/5.0236082.
ABSTRACT
This article presents an overview of the mpox epidemiological situation in the most affected regions (Africa, the Americas, and Europe), using fractal interpolation to pre-process the mpox case data. This analysis highlights the irregular and fractal patterns in the trend of mpox transmission. During the current public health emergency of international concern due to the mpox outbreak, an additional contribution of this article is the interpretation of mpox spread in the light of multifractality. A self-similar measure, namely the multifractal measure, is utilized to explore the heterogeneity in the mpox cases. Moreover, a bidirectional long short-term memory neural network is employed to forecast future mpox spread and provide early warning, since the outbreak may be a silent precursor to a global epidemic.
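As an editorial sketch of the forecasting component named above (not the authors' code), a minimal bidirectional LSTM that maps a window of scaled case counts to a next-step prediction could look like this; the window length and layer sizes are assumptions.

```python
# Minimal sketch: bidirectional LSTM forecaster for a univariate case-count series.
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # 2*hidden because both directions are concatenated

    def forward(self, x):                      # x: [batch, window, 1]
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict the next value from the last timestep

model = BiLSTMForecaster()
window = torch.randn(8, 14, 1)                 # e.g. 14-day windows of scaled case counts
next_day = model(window)                       # [8, 1] one-step-ahead forecasts
```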
PMID:39413265 | DOI:10.1063/5.0236082
Radiomics-Based Prediction of Patient Demographic Characteristics on Chest Radiographs: Looking Beyond Deep Learning for Risk of Bias
AJR Am J Roentgenol. 2024 Oct 16. doi: 10.2214/AJR.24.31963. Online ahead of print.
NO ABSTRACT
PMID:39413236 | DOI:10.2214/AJR.24.31963
Prior Visual-guided Self-supervised Learning Enables Color Vignetting Correction for High-throughput Microscopic Imaging
IEEE J Biomed Health Inform. 2024 Oct 16;PP. doi: 10.1109/JBHI.2024.3471907. Online ahead of print.
ABSTRACT
Vignetting constitutes a prevalent optical degradation that significantly compromises the quality of biomedical microscopic imaging. However, a robust and efficient vignetting correction methodology for multi-channel microscopic images remains absent at present. In this paper, we take advantage of prior knowledge about the homogeneity of microscopic images and the radial attenuation property of vignetting to develop a self-supervised deep learning algorithm that achieves complex vignetting removal in color microscopic images. Our proposed method, the vignetting correction lookup table (VCLUT), is trainable on both single and multiple images and employs adversarial learning to effectively transfer good imaging conditions from a user-defined central region of the light field to the entire image. To illustrate its effectiveness, we performed individual correction experiments on data from five distinct biological specimens. The results demonstrate that VCLUT exhibits enhanced performance compared to classical methods. We further examined its performance as a multi-image-based approach on a pathological dataset, revealing its advantage over other state-of-the-art approaches in both qualitative and quantitative measurements. Moreover, it uniquely possesses the capacity to generalize across various levels of vignetting intensity and offers ultra-fast model computation, rendering it well suited for integration into high-throughput imaging pipelines of digital microscopy.
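For context, the classical radial-attenuation view of vignetting that VCLUT is compared against can be illustrated with a simple polynomial flat-field baseline like the one below; this is an editorial sketch, not the VCLUT method, and the polynomial degree is an assumption.

```python
# Minimal sketch: classical radial flat-field correction (baseline, not VCLUT).
import numpy as np

def radial_flatfield(image: np.ndarray, degree: int = 4) -> np.ndarray:
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)     # normalised radius
    intensity = image.mean(axis=-1) if image.ndim == 3 else image
    coeffs = np.polyfit(r.ravel(), intensity.ravel(), degree)         # fit gain vs radius
    gain = np.polyval(coeffs, r)
    gain /= gain.max()                                                 # normalise to the brightest region
    gain = np.clip(gain, 1e-3, None)
    return image / (gain[..., None] if image.ndim == 3 else gain)     # divide out the radial gain
```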
PMID:39412976 | DOI:10.1109/JBHI.2024.3471907
Attention-guided 3D CNN With Lesion Feature Selection for Early Alzheimer's Disease Prediction Using Longitudinal sMRI
IEEE J Biomed Health Inform. 2024 Oct 16;PP. doi: 10.1109/JBHI.2024.3482001. Online ahead of print.
ABSTRACT
Predicting the progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD) is critical for early intervention. Towards this end, various deep learning models have been applied in this domain, typically relying on structural magnetic resonance imaging (sMRI) data from a single time point while neglecting the dynamic changes in brain structure over time. Current longitudinal studies inadequately explore disease evolution dynamics and are burdened by high computational complexity. This paper introduces a novel lightweight 3D convolutional neural network specifically designed to capture the evolution of brain diseases for modeling the progression of MCI. First, a longitudinal lesion feature selection strategy is proposed to extract core features from temporal data, facilitating the detection of subtle differences in brain structure between two time points. Next, to focus the model more strongly on lesion features, a disease trend attention mechanism is introduced to learn the dependencies between overall disease trends and local variation features. Finally, disease prediction visualization techniques are employed to improve the interpretability of the final predictions. Extensive experiments demonstrate that the proposed model achieves state-of-the-art performance in terms of area under the curve (AUC), accuracy, specificity, precision, and F1 score. This study confirms the efficacy of our early diagnostic method, which uses only two follow-up sMRI scans to predict the disease status of MCI patients 24 months later with an AUC of 79.03%.
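As an editorial sketch of the longitudinal idea described above (not the authors' architecture), the toy module below encodes two time points, takes their feature difference, and re-weights it with a lightweight attention gate; all shapes and modules are illustrative.

```python
# Minimal sketch: attention-gated change features between two sMRI time points.
import torch
import torch.nn as nn

class LongitudinalAttention(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool3d(1),
                                  nn.Conv3d(channels, channels, kernel_size=1),
                                  nn.Sigmoid())
        self.classifier = nn.Linear(channels, 2)          # progressive vs stable MCI

    def forward(self, t0, t1):                            # each: [batch, 1, D, H, W]
        diff = self.encoder(t1) - self.encoder(t0)        # change between time points
        weighted = diff * self.gate(diff)                  # channel attention over change features
        pooled = weighted.mean(dim=(2, 3, 4))
        return self.classifier(pooled)

logits = LongitudinalAttention()(torch.randn(2, 1, 16, 16, 16), torch.randn(2, 1, 16, 16, 16))
```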
PMID:39412975 | DOI:10.1109/JBHI.2024.3482001
A novel transfer learning-based CLIP model combined with a self-attention mechanism for differentiating the tumor-stroma ratio in pancreatic ductal adenocarcinoma
Radiol Med. 2024 Oct 16. doi: 10.1007/s11547-024-01902-y. Online ahead of print.
ABSTRACT
PURPOSE: To develop a contrastive language-image pretraining (CLIP) model, based on transfer learning and combined with a self-attention mechanism, to predict the tumor-stroma ratio (TSR) in pancreatic ductal adenocarcinoma (PDAC) on preoperative enhanced CT images, in order to understand the biological characteristics of tumors for risk stratification and to guide feature fusion during artificial intelligence-based model representation.
MATERIAL AND METHODS: This retrospective study collected a total of 207 PDAC patients from three hospitals. TSR was assessed on surgical specimens by pathologists, and patients were divided into high-TSR and low-TSR groups. This study developed a novel CLIP-adapter model that integrates the CLIP paradigm with a self-attention mechanism to better utilize features from multi-phase imaging, thereby enhancing the accuracy and reliability of tumor-stroma ratio predictions. Additionally, models based on clinical variables, traditional radiomics, and deep learning (ResNet50, ResNet101, ViT_Base_32, ViT_Base_16) were constructed for comparison.
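As an editorial sketch of multi-phase feature fusion with self-attention (not the authors' CLIP-adapter code), the module below attends over arterial- and venous-phase feature vectors before classification; dimensions and names are assumptions.

```python
# Minimal sketch: self-attention fusion of arterial- and venous-phase features.
import torch
import torch.nn as nn

class PhaseFusion(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)                 # high vs low tumor-stroma ratio

    def forward(self, arterial, venous):                    # each: [batch, dim]
        tokens = torch.stack([arterial, venous], dim=1)     # [batch, 2, dim] phase tokens
        fused, _ = self.attn(tokens, tokens, tokens)        # self-attention across phases
        return self.classifier(fused.mean(dim=1))

logits = PhaseFusion()(torch.randn(4, 512), torch.randn(4, 512))
```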
RESULTS: The models showed significant efficacy in predicting TSR in PDAC. The performance of the CLIP-adapter model based on multi-phase feature fusion was superior to that based on any single phase (arterial or venous phase). The CLIP-adapter model outperformed traditional radiomics models and deep learning models, with CLIP-adapter_ViT_Base_32 performing the best, achieving the highest AUC (0.978) and accuracy (0.921) in the test set. Kaplan-Meier survival analysis showed longer overall survival in patients with low TSR compared to those with high TSR.
CONCLUSION: The CLIP-adapter model designed in this study provides a safe and accurate method for predicting the TSR in PDAC. The feature fusion module based on multi-modal (image and text) and multi-phase (arterial and venous phase) data significantly improves model performance.
PMID:39412688 | DOI:10.1007/s11547-024-01902-y
Deep learning radiomic nomogram outperforms the clinical model in distinguishing intracranial solitary fibrous tumors from angiomatous meningiomas and can predict patient prognosis
Eur Radiol. 2024 Oct 16. doi: 10.1007/s00330-024-11082-y. Online ahead of print.
ABSTRACT
OBJECTIVES: To evaluate the value of a magnetic resonance imaging (MRI)-based deep learning radiomic nomogram (DLRN) for distinguishing intracranial solitary fibrous tumors (ISFTs) from angiomatous meningioma (AMs) and predicting overall survival (OS) for ISFT patients.
METHODS: In total, 1090 patients from Beijing Tiantan Hospital, Capital Medical University, and 131 from Lanzhou University Second Hospital were categorized as the primary cohort (PC) and the external validation cohort (EVC), respectively. An MRI-based DLRN was developed in the PC to distinguish ISFTs from AMs. We validated the DLRN and compared it with a clinical model (CM) in the EVC. In total, 149 ISFT patients were followed up. We carried out Cox regression analysis on the DLRN score, clinical characteristics, and histological stratification. In addition, we evaluated the association between independent risk factors and OS in the follow-up patients using Kaplan-Meier curves.
RESULTS: The DLRN outperformed the CM in distinguishing ISFTs from AMs (area under the curve [95% confidence interval (CI)]: 0.86 [0.84-0.88] for the DLRN and 0.70 [0.67-0.72] for the CM, p < 0.001) in the EVC. A high DLRN score [per 1 increase; hazard ratio (HR) 1.079, 95% CI: 1.009-1.147, p = 0.019] and subtotal resection (STR) [per 1 increase; HR 2.573, 95% CI: 1.337-4.932, p = 0.004] were associated with shorter OS. A statistically significant difference in OS existed between the high and low DLRN score groups at a cutoff value of 12.19 (p < 0.001). There was also a difference in OS between the gross total resection (GTR) and STR groups (p < 0.001).
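For illustration, the survival analyses named above can be reproduced in outline with the lifelines package; the column names, toy data, and penalizer are editorial assumptions, while the 12.19 cutoff is the value reported above.

```python
# Minimal sketch: Cox regression and Kaplan-Meier stratification with lifelines.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.DataFrame({
    "os_months":  [12, 30, 45, 8, 60, 22],        # toy follow-up times
    "event":      [1, 0, 1, 1, 0, 1],             # 1 = death observed, 0 = censored
    "dlrn_score": [15.2, 9.8, 11.0, 18.4, 7.5, 13.1],
    "str":        [1, 0, 0, 1, 0, 1],             # 1 = subtotal resection
})

cph = CoxPHFitter(penalizer=0.1)                  # small penalizer for stability on toy data
cph.fit(df, duration_col="os_months", event_col="event")   # hazard ratios for dlrn_score and str

km = KaplanMeierFitter()
high = df["dlrn_score"] > 12.19                   # cutoff reported above
km.fit(df.loc[high, "os_months"], df.loc[high, "event"], label="high DLRN score")
```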
CONCLUSION: The proposed DLRN outperforms the CM in distinguishing ISFTs from AMs and can predict OS for ISFT patients.
CLINICAL RELEVANCE STATEMENT: The proposed MRI-based deep learning radiomic nomogram outperforms the clinical model in distinguishing ISFTs from AMs and can predict OS of ISFT patients, which could guide the surgical strategy and predict prognosis for patients.
KEY POINTS: Distinguishing ISFTs from AMs based on conventional radiological signs is challenging. The DLRN outperformed the CM in our study. The DLRN can predict OS for ISFT patients.
PMID:39412667 | DOI:10.1007/s00330-024-11082-y
Assessing deep learning-based image quality enhancements for the BGO-based GE Omni Legend PET/CT
EJNMMI Phys. 2024 Oct 16;11(1):86. doi: 10.1186/s40658-024-00688-2.
ABSTRACT
BACKGROUND: This study investigates the integration of artificial intelligence (AI) to compensate for the lack of time-of-flight (TOF) capability of the GE Omni Legend PET/CT, which utilizes BGO scintillation crystals.
METHODS: The study evaluates the image quality of the GE Omni Legend PET/CT using a NEMA IQ phantom. It investigates the impact of various deep learning precision levels (low, medium, high) on imaging performance across different data acquisition durations. Quantitative analysis was performed using metrics such as the contrast recovery coefficient (CRC), background variability (BV), and contrast-to-noise ratio (CNR). Additionally, patient images reconstructed with the various deep learning precision levels are presented to illustrate the impact on image quality.
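As an editorial sketch (not the study's analysis code), the three phantom metrics named above can be computed from sphere and background ROI statistics as follows; the ROI extraction and the true activity ratio are assumptions.

```python
# Minimal sketch: NEMA-style CRC, BV, and CNR from ROI pixel values.
import numpy as np

def nema_metrics(sphere_roi, background_rois, activity_ratio=4.0):
    c_hot = np.mean(sphere_roi)                                  # mean counts in the hot sphere ROI
    bkg_means = np.array([np.mean(r) for r in background_rois])  # mean counts per background ROI
    c_bkg = bkg_means.mean()
    crc = (c_hot / c_bkg - 1.0) / (activity_ratio - 1.0)         # contrast recovery coefficient
    bv = bkg_means.std(ddof=1) / c_bkg                           # background variability
    cnr = (c_hot - c_bkg) / np.std(np.concatenate([r.ravel() for r in background_rois]))
    return crc, bv, cnr
```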
RESULTS: The deep learning approach significantly reduced background variability, particularly for the smallest region of interest. We observed improvements in background variability of 11.8%, 17.2%, and 14.3% for low, medium, and high precision deep learning, respectively. The results also indicate a significant improvement in larger spheres when considering both background variability and contrast recovery coefficient. The high precision deep learning approach proved advantageous for short scans and exhibited potential in improving detectability of small lesions. The exemplary patient study shows that the noise was suppressed for all deep learning cases, but low precision deep learning also reduced the lesion contrast (about -30%), while high precision deep learning increased the contrast (about 10%).
CONCLUSION: This study conducted a thorough evaluation of deep learning algorithms in the GE Omni Legend PET/CT scanner, demonstrating that these methods enhance image quality, with notable improvements in CRC and CNR, thereby optimizing lesion detectability and offering opportunities to reduce image acquisition time.
PMID:39412633 | DOI:10.1186/s40658-024-00688-2
Automated segment-level coronary artery calcium scoring on non-contrast CT: a multi-task deep-learning approach
Insights Imaging. 2024 Oct 16;15(1):250. doi: 10.1186/s13244-024-01827-0.
ABSTRACT
OBJECTIVES: To develop and evaluate a multi-task deep-learning (DL) model for automated segment-level coronary artery calcium (CAC) scoring on non-contrast computed tomography (CT) for precise localization and quantification of calcifications in the coronary artery tree.
METHODS: This study included 1514 patients (mean age, 60.0 ± 10.2 years; 56.0% female) with stable chest pain from 26 centers participating in the multicenter DISCHARGE trial (NCT02400229). The patients were randomly assigned to a training/validation set (1059) and a test set (455). We developed a multi-task neural network that performs segmentation of calcifications at the segment level as the main task and segmentation of coronary artery segment regions with weak annotations as an auxiliary task. Model performance was evaluated using (micro-average) sensitivity, specificity, F1-score, and weighted Cohen's κ for segment-level agreement based on the Agatston score, and by performing an interobserver variability analysis.
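As an editorial illustration of the agreement metric named above, a weighted Cohen's kappa between model-assigned and observer-assigned per-segment categories can be computed with scikit-learn; the labels and the linear weighting here are dummy assumptions.

```python
# Minimal sketch: weighted Cohen's kappa for segment-level category agreement.
from sklearn.metrics import cohen_kappa_score

model_categories    = [0, 1, 1, 2, 3, 0, 2, 1]   # dummy Agatston-based categories per segment
observer_categories = [0, 1, 2, 2, 3, 0, 1, 1]
kappa = cohen_kappa_score(model_categories, observer_categories, weights="linear")
print(f"weighted Cohen's kappa: {kappa:.3f}")
```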
RESULTS: In the test set of 455 patients with 1797 calcifications, the model assigned 73.2% (1316/1797) to the correct coronary artery segment. The model achieved a micro-average sensitivity of 0.732 (95% CI: 0.710-0.754), a micro-average specificity of 0.978 (95% CI: 0.976-0.980), and a micro-average F1-score of 0.717 (95% CI: 0.695-0.739). The segment-level agreement was good with a weighted Cohen's κ of 0.808 (95% CI: 0.790-0.824), which was only slightly lower than the agreement between the first and second observer (0.809 (95% CI: 0.798-0.845)).
CONCLUSION: Automated segment-level CAC scoring using a multi-task neural network approach showed good agreement on the segment level, indicating that DL has the potential for automated coronary artery calcification classification.
CRITICAL RELEVANCE STATEMENT: Multi-task deep learning can perform automated coronary calcium scoring on the segment level with good agreement and may contribute to the development of new and improved calcium scoring methods.
KEY POINTS: Segment-level coronary artery calcium scoring is a tedious and error-prone task. The proposed multi-task model achieved good agreement with a human observer on the segment level. Deep learning can contribute to the automation of segment-level coronary artery calcium scoring.
PMID:39412613 | DOI:10.1186/s13244-024-01827-0
TRAITER: Transformer-guided diagnosis and prognosis of heart failure using cell nuclear morphology and DNA damage marker
Bioinformatics. 2024 Oct 16:btae610. doi: 10.1093/bioinformatics/btae610. Online ahead of print.
ABSTRACT
MOTIVATION: Heart failure (HF), a major cause of morbidity and mortality, necessitates precise diagnostic and prognostic methods.
RESULTS: This study presents a novel deep learning approach, Transformer-based Analysis of Images of Tissue for Effective Remedy (TRAITER), for HF diagnosis and prognosis. Employing image segmentation techniques and a Vision Transformer, TRAITER predicts the likelihood of HF from cardiac tissue cell nuclear morphology images and the potential for left ventricular reverse remodeling (LVRR) from dual-stained images with cell nuclei and DNA damage markers. In HF prediction using 31,158 images from 9 patients, TRAITER achieved 83.1% accuracy. For LVRR prediction with 231,840 images from 46 patients, TRAITER attained 84.2% accuracy for individual images and 92.9% for individual patients. TRAITER outperformed other neural network models in terms of receiver operating characteristic and precision-recall curves. Our method promises to advance decision-making in personalized HF medicine.
AVAILABILITY: The source code and data are available at the following link: https://github.com/HamanoLaboratory/predict-of-HF-and-LVRR.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:39412446 | DOI:10.1093/bioinformatics/btae610