Deep learning

DEMIST: A Deep-Learning-Based Detection-Task-Specific Denoising Approach for Myocardial Perfusion SPECT

Mon, 2024-05-20 06:00

IEEE Trans Radiat Plasma Med Sci. 2024 Apr;8(4):439-450. doi: 10.1109/trpms.2024.3379215. Epub 2024 Mar 25.

ABSTRACT

There is an important need for methods to process myocardial perfusion imaging (MPI) single-photon emission computed tomography (SPECT) images acquired at lower radiation dose and/or shorter acquisition time such that the processed images improve observer performance on the clinical task of detecting perfusion defects, compared to the low-dose images. To address this need, we build upon concepts from model-observer theory and our understanding of the human visual system to propose a detection-task-specific deep-learning-based approach for denoising MPI SPECT images (DEMIST). The approach, while performing denoising, is designed to preserve features that influence observer performance on detection tasks. We objectively evaluated DEMIST on the task of detecting perfusion defects using a retrospective study with anonymized clinical data from patients who underwent MPI studies on two scanners (N = 338). The evaluation was performed at low-dose levels of 6.25%, 12.5%, and 25% using an anthropomorphic channelized Hotelling observer. Performance was quantified using the area under the receiver operating characteristic curve (AUC). Images denoised with DEMIST yielded significantly higher AUC than the corresponding low-dose images and images denoised with a commonly used task-agnostic deep-learning-based denoising method. Similar results were observed in stratified analyses based on patient sex and defect type. Additionally, DEMIST improved the visual fidelity of the low-dose images, as quantified using root mean squared error and the structural similarity index. A mathematical analysis revealed that DEMIST preserved features that assist in detection tasks while improving the noise properties, resulting in improved observer performance. The results provide strong evidence for further clinical evaluation of DEMIST for denoising low-count MPI SPECT images.
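For readers unfamiliar with task-based evaluation, the sketch below shows how a channelized Hotelling observer AUC is typically computed with NumPy: images are projected onto a small set of channels, a Hotelling template is formed from the class means and covariances, and AUC is estimated from the resulting test statistics. The channel matrix, covariance estimate, and toy data are generic placeholders, not the authors' implementation.

```python
import numpy as np

def channelized_hotelling_auc(defect_imgs, normal_imgs, channels):
    """AUC of a channelized Hotelling observer (CHO) on a binary detection task.

    defect_imgs, normal_imgs : (n_images, n_pixels) arrays of vectorized images
    channels                 : (n_pixels, n_channels) channel matrix (e.g. rotationally
                               symmetric frequency channels of an anthropomorphic observer)
    """
    # Project images onto the channels
    v1 = defect_imgs @ channels          # (n1, n_channels)
    v0 = normal_imgs @ channels          # (n0, n_channels)

    # Hotelling template from class means and the average intra-class covariance
    s = np.cov(v1, rowvar=False) / 2 + np.cov(v0, rowvar=False) / 2
    w = np.linalg.solve(s, v1.mean(axis=0) - v0.mean(axis=0))

    # Scalar test statistic for each image
    t1, t0 = v1 @ w, v0 @ w

    # Wilcoxon (Mann-Whitney) estimate of the area under the ROC curve
    return (t1[:, None] > t0[None, :]).mean() + 0.5 * (t1[:, None] == t0[None, :]).mean()

# Toy usage with random images and random channels
rng = np.random.default_rng(0)
channels = rng.normal(size=(64 * 64, 4))
normal = rng.normal(size=(200, 64 * 64))
defect = rng.normal(loc=0.05, size=(200, 64 * 64))
print(channelized_hotelling_auc(defect, normal, channels))
```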

PMID:38766558 | PMC:PMC11101197 | DOI:10.1109/trpms.2024.3379215

Categories: Literature Watch

Lymphocyte-Infiltrated Periportal Region Detection With Structurally-Refined Deep Portal Segmentation and Heterogeneous Infiltration Features

Mon, 2024-05-20 06:00

IEEE Open J Eng Med Biol. 2024 Mar 20;5:261-270. doi: 10.1109/OJEMB.2024.3379479. eCollection 2024.

ABSTRACT

Goal: The early diagnosis and treatment of hepatitis are essential to reduce hepatitis-related liver function deterioration and mortality. One component of the widely used Ishak grading system for grading periportal interface hepatitis is the percentage of portal borders infiltrated by lymphocytes. Accurate detection of lymphocyte-infiltrated periportal regions is therefore critical in the diagnosis of hepatitis. However, the infiltrating lymphocytes usually produce ambiguous and highly irregular portal boundaries, making precise automated identification of the infiltrated portal boundary regions challenging. This study aims to develop a deep-learning-based automatic detection framework to assist diagnosis. Methods: The present study proposes a framework consisting of a Structurally-REfined Deep Portal Segmentation module and an Infiltrated Periportal Region Detection module based on heterogeneous infiltration features to accurately identify the infiltrated periportal regions in liver whole-slide images. Results: The proposed method achieves an F1-score of 0.725 for lymphocyte-infiltrated periportal region detection. Moreover, statistics of the ratio of the detected infiltrated portal boundary correlate strongly with the Ishak grade (Spearman's correlations greater than 0.87, p-values less than 0.001) and moderately with the liver function indices aspartate aminotransferase and alanine aminotransferase (Spearman's correlations greater than 0.63 and 0.57, respectively, p-values less than 0.001). Conclusions: The study shows that the statistics of the ratio of infiltrated portal boundary correlate with the Ishak grade and liver function indices. The proposed framework provides pathologists with a useful and reliable tool for hepatitis diagnosis.

PMID:38766544 | PMC:PMC11100940 | DOI:10.1109/OJEMB.2024.3379479

Categories: Literature Watch

FetSAM: Advanced Segmentation Techniques for Fetal Head Biometrics in Ultrasound Imagery

Mon, 2024-05-20 06:00

IEEE Open J Eng Med Biol. 2024 Mar 27;5:281-295. doi: 10.1109/OJEMB.2024.3382487. eCollection 2024.

ABSTRACT

Goal: FetSAM is a deep learning model for fetal head ultrasound segmentation intended to improve prenatal diagnostic precision. Methods: Utilizing a comprehensive dataset, the largest to date for fetal head metrics, FetSAM incorporates prompt-based learning. It distinguishes itself with a dual loss mechanism, combining Weighted DiceLoss and Weighted Lovasz Loss, optimized with AdamW and supported by class weight adjustments for better segmentation balance. Performance benchmarks against prominent models such as U-Net, DeepLabV3, and Segformer highlight its efficacy. Results: FetSAM delivers high segmentation accuracy, with a DSC of 0.90117, an HD of 1.86484, and an ASD of 0.46645. Conclusion: FetSAM sets a new benchmark in AI-enhanced prenatal ultrasound analysis, providing a robust and precise tool for clinical applications, supported by its large dataset and strong segmentation performance.
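To make the dual-loss idea concrete, here is a minimal PyTorch sketch of the kind of composite loss the abstract describes: a class-weighted soft Dice term plus a binary Lovasz hinge term, optimized with AdamW. The weighting factors, class weight, and exact formulation are illustrative assumptions, not the FetSAM code.

```python
import torch
import torch.nn.functional as F

def weighted_dice_loss(probs, target, class_weight=1.0, eps=1e-6):
    """Soft Dice loss for a binary mask, scaled by a class weight."""
    inter = (probs * target).sum()
    union = probs.sum() + target.sum()
    return class_weight * (1.0 - (2.0 * inter + eps) / (union + eps))

def lovasz_hinge(logits, target):
    """Binary Lovasz hinge loss (Berman et al.) on flattened logits and labels."""
    logits, target = logits.reshape(-1), target.reshape(-1).float()
    signs = 2.0 * target - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    gt_sorted = target[perm]
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]       # gradient of the Jaccard extension
    return torch.dot(F.relu(errors_sorted), jaccard)

def dual_loss(logits, target, w_dice=0.5, w_lovasz=0.5, class_weight=1.0):
    probs = torch.sigmoid(logits)
    return w_dice * weighted_dice_loss(probs, target.float(), class_weight) \
         + w_lovasz * lovasz_hinge(logits, target)

# Illustrative training step with AdamW, assuming `model` is any segmentation network:
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# loss = dual_loss(model(images), masks); loss.backward(); optimizer.step()
```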

PMID:38766538 | PMC:PMC11100952 | DOI:10.1109/OJEMB.2024.3382487

Categories: Literature Watch

Automated Cytometric Gating with Human-Level Performance Using Bivariate Segmentation

Mon, 2024-05-20 06:00

bioRxiv [Preprint]. 2024 May 9:2024.05.06.592739. doi: 10.1101/2024.05.06.592739.

ABSTRACT

Recent advances in cytometry technology have enabled high-throughput data collection with multiple single-cell protein expression measurements. The substantial biological and technical variance between cytometry samples has long posed a formidable challenge during the gating process, especially for the initial gates, which deal with unpredictable events such as debris and technical artifacts. Even with the same instrument and protocol, the target population, as well as the cell populations that need to be excluded, may vary across measurements. To address this challenge and reduce the labor-intensive manual gating process, we propose a deep learning framework, UNITO, to rigorously identify hierarchical cytometric subpopulations. The UNITO framework transforms a cell-level classification task into an image-based semantic segmentation problem. For reproducibility, the framework was applied to three independent cohorts and successfully detected both the initial gates required to identify single cellular events and the subsequent cell gates. We validated UNITO by comparing its results with previous automated methods and with the consensus of at least four experienced immunologists. UNITO outperformed existing automated methods and differed from the human consensus by no more than any individual human did. Most critically, UNITO functions as a fully automated pipeline after training and does not require human hints or prior knowledge. Unlike existing multi-channel classification or clustering pipelines, UNITO reproduces contours similar to manual gating at each intermediate gate, achieving better interpretability and allowing post hoc visual inspection. Beyond pioneering the use of image segmentation for auto-gating, UNITO provides a fast and interpretable way to assign cell subtype membership, and its speed is not affected by the number of cells in each sample. Pre-gating and gating inference take approximately 2 minutes per sample using our pre-defined nine-gate system, and the framework can also adapt to any sequential prediction with different configurations.
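The core trick of casting per-cell gating as image segmentation can be illustrated with a short NumPy sketch: two marker channels are binned into a 2D density image, a segmentation model predicts a gate mask on that image, and the mask is mapped back to individual events. The bin count, grid size, and segmentation model are placeholders, not UNITO's actual pipeline.

```python
import numpy as np

def events_to_image(x, y, bins=256, x_range=None, y_range=None):
    """Rasterize two marker channels into a log-scaled density image."""
    x_range = x_range or (x.min(), x.max())
    y_range = y_range or (y.min(), y.max())
    hist, xe, ye = np.histogram2d(x, y, bins=bins, range=[x_range, y_range])
    return np.log1p(hist), xe, ye

def mask_to_event_labels(mask, x, y, xe, ye):
    """Map a predicted 2D gate mask back to per-event in/out labels."""
    xi = np.clip(np.digitize(x, xe) - 1, 0, mask.shape[0] - 1)
    yi = np.clip(np.digitize(y, ye) - 1, 0, mask.shape[1] - 1)
    return mask[xi, yi].astype(bool)

# Usage: the image goes to a segmentation network whose binary output defines the gate.
# img, xe, ye = events_to_image(fsc, ssc)
# gate_mask = segmentation_model(img)            # e.g. a U-Net; placeholder
# inside_gate = mask_to_event_labels(gate_mask, fsc, ssc, xe, ye)
```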

PMID:38766268 | PMC:PMC11100732 | DOI:10.1101/2024.05.06.592739

Categories: Literature Watch

Virtual Screening of Molecules via Neural Fingerprint-based Deep Learning Technique

Mon, 2024-05-20 06:00

Res Sq [Preprint]. 2024 May 9:rs.3.rs-4355625. doi: 10.21203/rs.3.rs-4355625/v1.

ABSTRACT

A machine learning-based drug screening technique has been developed and optimized using convolutional neural network-derived fingerprints. The optimization of weights in the neural network-based fingerprinting technique was compared with fixed Morgan fingerprints for binary classification of drug-target binding affinity. The assessment was carried out on six different target proteins using randomly chosen small molecules from the ZINC15 database for training. The new architecture proved more efficient at screening out molecules that bind less favorably to specific targets and retaining molecules that bind favorably to them. Scientific contribution: We have developed a new neural fingerprint-based screening model with a significant ability to capture hits. Despite using a smaller dataset, this model maps chemical space similarly to other contemporary algorithms designed for molecular screening. The novelty of the present algorithm lies in the speed with which the models are trained and tuned before testing their predictive capabilities, making it a significant step forward in machine learning-embedded computational drug discovery.
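As context for the fixed-fingerprint baseline the study compares against, here is a minimal sketch using RDKit Morgan fingerprints and a scikit-learn classifier as a binder/non-binder screen. The SMILES strings, labels, radius, and classifier choice are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def morgan_fp(smiles, radius=2, n_bits=2048):
    """Fixed (non-learned) Morgan fingerprint for one molecule; None if SMILES is invalid."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp)

# Binary binder / non-binder classification against one target protein (toy data)
smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC"]
labels = [0, 1, 1, 0]                       # 1 = favorable binder, 0 = non-binder
X = np.array([morgan_fp(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict_proba(X)[:, 1])           # screening scores used to rank molecules
```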

PMID:38766198 | PMC:PMC11100899 | DOI:10.21203/rs.3.rs-4355625/v1

Categories: Literature Watch

Data-driven fine-grained region discovery in the mouse brain with transformers

Mon, 2024-05-20 06:00

bioRxiv [Preprint]. 2024 May 7:2024.05.05.592608. doi: 10.1101/2024.05.05.592608.

ABSTRACT

Technologies such as spatial transcriptomics offer unique opportunities to define the spatial organization of the mouse brain. We developed an unsupervised training scheme and a novel transformer-based deep learning architecture to detect spatial domains in mouse whole-brain spatial transcriptomics data. Our model learns local representations of molecular and cellular statistical patterns. These local representations can be clustered to identify spatial domains within the brain, from coarse to fine-grained. Discovered domains are spatially regular, even with several hundred spatial clusters. They are also consistent with existing anatomical ontologies such as the Allen Mouse Brain Common Coordinate Framework version 3 (CCFv3) and can be visually interpreted at the cell type or transcript level. We demonstrate that our method can identify previously uncatalogued subregions, such as in the midbrain, where we uncover gradients of inhibitory neuron complexity and abundance. We apply our method to a separate multi-animal whole-brain spatial transcriptomic dataset and observe that including both sagittal and coronal tissue slices in region identification improves the correspondence of spatial domains to the CCF.
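The coarse-to-fine clustering step can be illustrated with a short sketch: per-spot local representations produced by a trained model are re-clustered at increasing numbers of clusters to obtain progressively finer spatial domains. The embeddings, coordinates, and cluster counts below are synthetic placeholders, and KMeans stands in for whatever clustering the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans

# `embeddings`: (n_spots, d) local representations from the trained model,
# `coords`: (n_spots, 2) spatial coordinates of each spot (placeholders here).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 64))
coords = rng.uniform(size=(5000, 2))

# Coarse-to-fine spatial domains: re-cluster the same embeddings at increasing k.
domains = {}
for k in (10, 50, 200):
    domains[k] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)

# Each `domains[k]` assigns every spot to a spatial domain; plotting the labels at
# `coords` shows progressively finer parcellations of the tissue.
```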

PMID:38766132 | PMC:PMC11100623 | DOI:10.1101/2024.05.05.592608

Categories: Literature Watch

New Method of Paired Comparison for Improved Observer Shortage Using Deep Learning Models

Sun, 2024-05-19 06:00

Nihon Hoshasen Gijutsu Gakkai Zasshi. 2024 May 17. doi: 10.6009/jjrt.2024-1446. Online ahead of print.

ABSTRACT

PURPOSE: The aim of this study was to validate the potential of substituting an observer in a paired comparison with a deep-learning observer.

METHODS: Phantom images were obtained using computed tomography. The imaging conditions comprised a standard setting of 120 kVp and 200 mA plus tube currents of 160 mA, 120 mA, 80 mA, 40 mA, and 20 mA, resulting in six different imaging conditions. Fourteen radiologic technologists with more than 10 years of experience conducted pairwise comparisons using Ura's method. After training, VGG16 and VGG19 models were combined to form deep learning models, which were then evaluated for accuracy, recall, precision, specificity, and F1 value. The validation results were used as the standard, and the average degree of preference and the significance tests between images obtained when the deep learning results were incorporated were compared against this standard.
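To show how two convolutional backbones can be combined into a single two-class observer, here is a minimal PyTorch sketch that averages the softmax outputs of fine-tuned VGG16 and VGG19 models. The head size, weight initialization, and preference encoding are illustrative assumptions, not the authors' training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_vgg(ctor):
    """VGG backbone with a 2-class head (preferred vs. not preferred)."""
    net = ctor(weights=None)              # or weights="IMAGENET1K_V1" for pretraining
    net.classifier[6] = nn.Linear(4096, 2)
    return net

vgg16 = make_vgg(models.vgg16)
vgg19 = make_vgg(models.vgg19)

@torch.no_grad()
def ensemble_predict(x):
    """Average the softmax outputs of both models for a batch of images."""
    p16 = torch.softmax(vgg16(x), dim=1)
    p19 = torch.softmax(vgg19(x), dim=1)
    return ((p16 + p19) / 2).argmax(dim=1)

x = torch.randn(4, 3, 224, 224)           # placeholder phantom image batch
print(ensemble_predict(x))
```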

RESULTS: The average accuracy of the deep learning model was 82%, with a maximum difference of 0.13 from the standard regarding the average degree of preference, a minimum difference of 0, and an average difference of 0.05. Significant differences were observed in the test results when replacing human observers with AI counterparts for image pairs with tube currents of 160 mA vs. 120 mA and 200 mA vs. 160 mA.

CONCLUSION: In paired comparisons using a limited phantom (7-point noise evaluation), the results suggest that a deep learning model could potentially serve as one of the observers.

PMID:38763757 | DOI:10.6009/jjrt.2024-1446

Categories: Literature Watch

Generalizability of deep learning in organ-at-risk segmentation: A transfer learning study in cervical brachytherapy

Sun, 2024-05-19 06:00

Radiother Oncol. 2024 May 17:110332. doi: 10.1016/j.radonc.2024.110332. Online ahead of print.

ABSTRACT

PURPOSE: Deep learning can automate delineation in radiation therapy, reducing time and variability. Yet, its efficacy varies across different institutions, scanners, or settings, emphasizing the need for adaptable and robust models in clinical environments. Our study demonstrates the effectiveness of the transfer learning (TL) approach in enhancing the generalizability of deep learning models for auto-segmentation of organs-at-risk (OARs) in cervical brachytherapy.

METHODS: A pre-trained model was developed using 120 scans with a ring and tandem applicator on a 3 T magnetic resonance (MR) scanner (RT3). Four OARs were segmented and evaluated. Segmentation performance was evaluated by the volumetric Dice similarity coefficient (vDSC), 95% Hausdorff distance (HD95), surface DSC, and added path length (APL). The model was fine-tuned on three out-of-distribution target groups. Pre- and post-TL outcomes, and the influence of the number of fine-tuning scans, were compared. A model trained with one group (Single) and a model trained with all four groups (Mixed) were evaluated on both seen and unseen data distributions.
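The transfer learning recipe described here, starting from pre-trained weights and fine-tuning on a handful of target-group scans with a small learning rate, can be sketched generically in PyTorch. The tiny stand-in network, synthetic scans, loss, and hyperparameters below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Tiny stand-in for the pre-trained OAR segmentation network; the real architecture,
# checkpoint, and hyperparameters are not public, so everything here is illustrative.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
# model.load_state_dict(torch.load("pretrained_rt3.pt"))   # source-domain weights

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)   # small LR for fine-tuning
loss_fn = nn.BCEWithLogitsLoss()

# A handful of scans from the new (out-of-distribution) target group
images = torch.randn(5, 1, 64, 64)
masks = (torch.rand(5, 1, 64, 64) > 0.5).float()

model.train()
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
```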

RESULTS: TL enhanced segmentation accuracy across target groups, matching the pre-trained model's performance. The first five fine-tuning scans led to the most noticeable improvements, with performance plateauing with more data. TL outperformed training-from-scratch given the same training data. The Mixed model performed similarly to the Single model on RT3 scans but demonstrated superior performance on unseen data.

CONCLUSIONS: TL can improve a model's generalizability for OAR segmentation in MR-guided cervical brachytherapy, requiring less fine-tuning data and reduced training time. These results provide a foundation for developing adaptable models to accommodate clinical settings.

PMID:38763356 | DOI:10.1016/j.radonc.2024.110332

Categories: Literature Watch

Predicting therapeutic response to neoadjuvant immunotherapy based on an integration model in resectable stage IIIA (N2) non-small cell lung cancer

Sun, 2024-05-19 06:00

J Thorac Cardiovasc Surg. 2024 May 17:S0022-5223(24)00437-9. doi: 10.1016/j.jtcvs.2024.05.006. Online ahead of print.

ABSTRACT

OBJECTIVE: Accurately predicting response during neoadjuvant chemoimmunotherapy for resectable non-small cell lung cancer (NSCLC) remains clinically challenging. In this study, we investigate the effectiveness of blood-based tumor mutational burden (bTMB) and a deep learning (DL) model in predicting major pathologic response (MPR) and survival from a phase II trial.

METHODS: Blood samples were prospectively collected from 45 stage IIIA (N2) NSCLC patients undergoing neoadjuvant chemoimmunotherapy. An integrated model, combining the CT-based DL score, bTMB, and clinical factors, was developed to predict tumor response to neoadjuvant chemoimmunotherapy.

RESULTS: At baseline, bTMB was detectable in 77.8% (35 of 45) of patients. Among these 35 patients, baseline bTMB ≥ 11 Muts/Mb was associated with significantly higher MPR rates (77.8% vs. 38.5%, p = 0.042) and longer disease-free survival (DFS, p = 0.043), but not overall survival (p = 0.131), compared with bTMB < 11 Muts/Mb. The developed DL model achieved an area under the curve (AUC) of 0.703 in all patients. Importantly, the predictive performance improved to an AUC of 0.820 when the DL score was combined with bTMB and clinical factors in the integrated model. Baseline circulating tumor DNA (ctDNA) status was not associated with pathological response or survival. Compared with residual ctDNA, ctDNA clearance before surgery was associated with significantly higher MPR rates (88.2% vs. 11.1%, p < 0.001) and improved DFS (p = 0.010).
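A simple way to combine a CT-based DL score, bTMB, and clinical factors into one response predictor is a logistic regression evaluated by AUC, as sketched below. The synthetic features, cohort size, and the choice of logistic regression are placeholders; the study's actual integration method is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 45
# Synthetic stand-ins: DL score from CT, baseline bTMB (Muts/Mb), and one clinical factor
X = np.column_stack([
    rng.uniform(0, 1, n),        # deep learning score
    rng.gamma(2.0, 6.0, n),      # bTMB
    rng.integers(0, 2, n),       # e.g. a binary clinical factor
])
y = rng.integers(0, 2, n)        # major pathologic response (1) vs. not (0)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```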

CONCLUSIONS: The integrated model shows promise as a predictor of tumor response to neoadjuvant chemoimmunotherapy. Serial ctDNA dynamics provide a reliable tool for monitoring tumor response.

PMID:38763304 | DOI:10.1016/j.jtcvs.2024.05.006

Categories: Literature Watch

Machine learning-based longitudinal prediction for GJB2-related sensorineural hearing loss

Sun, 2024-05-19 06:00

Comput Biol Med. 2024 May 15;176:108597. doi: 10.1016/j.compbiomed.2024.108597. Online ahead of print.

ABSTRACT

BACKGROUND: Recessive GJB2 variants, the most common genetic cause of hearing loss, may contribute to progressive sensorineural hearing loss (SNHL). The aim of this study is to build a realistic predictive model for GJB2-related SNHL using machine learning to enable personalized medical planning for timely intervention.

METHOD: Patients with SNHL with confirmed biallelic GJB2 variants in a nationwide cohort between 2005 and 2022 were included. Different data preprocessing protocols and computational algorithms were combined to construct a prediction model. We randomly divided the dataset into training, validation, and test sets at a ratio of 72:8:20, and repeated this process ten times to obtain an average result. The performance of the models was evaluated using the mean absolute error (MAE), which refers to the discrepancy between the predicted and actual hearing thresholds.

RESULTS: We enrolled 449 patients with 2184 audiograms available for deep learning analysis. SNHL progression was identified in all models and was independent of age, sex, and genotype. The average hearing progression rate was 0.61 dB HL per year. The best MAEs for the linear regression, multilayer perceptron, long short-term memory, and attention models were 4.42, 4.38, 4.34, and 4.76 dB HL, respectively. The long short-term memory model performed best, with an average MAE of 4.34 dB HL and acceptable accuracy for up to 4 years.
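The best-performing approach here, an LSTM trained to predict future hearing thresholds and scored by MAE in dB HL, can be sketched as follows. The number of frequencies, visit layout, and synthetic audiograms are assumptions; this is not the study's model.

```python
import torch
import torch.nn as nn

class AudiogramLSTM(nn.Module):
    """Predict the next audiogram (thresholds at 7 frequencies) from past visits."""
    def __init__(self, n_freq=7, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_freq, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_freq)

    def forward(self, x):                 # x: (batch, visits, n_freq) in dB HL
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # threshold prediction for the next visit

model = AudiogramLSTM()
past = torch.randn(8, 5, 7) * 10 + 40     # synthetic serial audiograms
future = torch.randn(8, 7) * 10 + 45
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    optim.zero_grad()
    loss = nn.functional.l1_loss(model(past), future)   # MAE in dB HL
    loss.backward()
    optim.step()
print("MAE (dB HL):", nn.functional.l1_loss(model(past), future).item())
```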

CONCLUSIONS: We have developed a prognostic model that uses machine learning to approximate realistic hearing progression in GJB2-related SNHL, allowing for the design of individualized medical plans, such as recommending the optimal follow-up interval for this population.

PMID:38763069 | DOI:10.1016/j.compbiomed.2024.108597

Categories: Literature Watch

Linguistic-based Mild Cognitive Impairment detection using Informative Loss

Sun, 2024-05-19 06:00

Comput Biol Med. 2024 May 14;176:108606. doi: 10.1016/j.compbiomed.2024.108606. Online ahead of print.

ABSTRACT

This paper presents a deep learning method using Natural Language Processing (NLP) techniques to distinguish between Mild Cognitive Impairment (MCI) and Normal Cognition (NC) in older adults. We propose a framework that analyzes transcripts generated from video interviews collected within the I-CONECT study, a randomized controlled trial aimed at improving cognitive function through video chats. Our proposed NLP framework consists of two Transformer-based modules: Sentence Embedding (SE) and Sentence Cross Attention (SCA). First, the SE module captures contextual relationships between words within each sentence. Subsequently, the SCA module extracts temporal features from the sequence of sentences. These features are then used by a Multi-Layer Perceptron (MLP) to classify subjects as MCI or NC. To build a robust model, we propose a novel loss function, called InfoLoss, that accounts for the reduction in entropy obtained by observing each sequence of sentences, ultimately enhancing classification accuracy. Evaluation of our model on the I-CONECT dataset shows that the framework can distinguish between MCI and NC with an average area under the curve of 84.75%.
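The two-stage architecture, per-sentence embeddings followed by attention across the sentence sequence and an MLP classifier, can be sketched in PyTorch as below. The embedding dimension, pooling, and plain cross-entropy training implied here are assumptions; the paper's InfoLoss is not reproduced.

```python
import torch
import torch.nn as nn

class SentenceSequenceClassifier(nn.Module):
    """Attend over a sequence of pre-computed sentence embeddings, then classify MCI vs. NC."""
    def __init__(self, emb_dim=384, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=n_heads, batch_first=True)
        self.cross_sentence = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mlp = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, sent_emb):           # sent_emb: (batch, n_sentences, emb_dim)
        h = self.cross_sentence(sent_emb)  # temporal features across sentences
        return self.mlp(h.mean(dim=1))     # mean-pool the sentences, then classify

model = SentenceSequenceClassifier()
logits = model(torch.randn(4, 20, 384))    # 4 transcripts of 20 sentences each
print(logits.shape)                        # torch.Size([4, 2])
```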

PMID:38763068 | DOI:10.1016/j.compbiomed.2024.108606

Categories: Literature Watch

D-TrAttUnet: Toward hybrid CNN-transformer architecture for generic and subtle segmentation in medical images

Sun, 2024-05-19 06:00

Comput Biol Med. 2024 May 11;176:108590. doi: 10.1016/j.compbiomed.2024.108590. Online ahead of print.

ABSTRACT

Over the past two decades, machine analysis of medical imaging has advanced rapidly, opening up significant potential for several important medical applications. As complex diseases become more prevalent and case numbers rise, machine-based imaging analysis has become indispensable. It serves as both a tool and an assistant to medical experts, providing valuable insights and guidance. A particularly challenging task in this area is lesion segmentation, which is difficult even for experienced radiologists and highlights the need for robust machine learning approaches to support medical staff. In response, we present a novel solution: the D-TrAttUnet architecture. The framework is based on the observation that different diseases often target specific organs. The architecture comprises an encoder-decoder structure with a composite Transformer-CNN encoder and dual decoders. The encoder includes two paths: the Transformer path and the Encoders Fusion Module path. The dual-decoder configuration uses two identical decoders, each with attention gates, allowing the model to segment lesions and organs simultaneously and to integrate their segmentation losses. To validate our approach, we performed evaluations on COVID-19 and bone metastasis segmentation tasks. We also investigated the adaptability of the model by testing it without the second decoder on gland and nuclei segmentation. The results confirmed the superiority of our approach, especially for COVID-19 infection and bone metastasis segmentation. In addition, the hybrid encoder showed excellent performance on gland and nuclei segmentation, solidifying its role in modern medical image analysis.
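The dual-decoder idea, one shared encoder feeding two decoders whose segmentation losses are summed, can be illustrated with a minimal PyTorch sketch. The toy encoder, 1x1 decoders, and BCE losses are stand-ins; the attention gates and Transformer-CNN fusion encoder of D-TrAttUnet are not reproduced here.

```python
import torch
import torch.nn as nn

class DualDecoderSeg(nn.Module):
    """Shared encoder with two decoders: one for the lesion mask, one for the organ mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.lesion_decoder = nn.Conv2d(32, 1, 1)
        self.organ_decoder = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.lesion_decoder(feats), self.organ_decoder(feats)

model = DualDecoderSeg()
bce = nn.BCEWithLogitsLoss()
image = torch.randn(2, 1, 64, 64)
lesion_gt = (torch.rand(2, 1, 64, 64) > 0.9).float()
organ_gt = (torch.rand(2, 1, 64, 64) > 0.5).float()

lesion_pred, organ_pred = model(image)
loss = bce(lesion_pred, lesion_gt) + bce(organ_pred, organ_gt)   # integrated losses
loss.backward()
```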

PMID:38763066 | DOI:10.1016/j.compbiomed.2024.108590

Categories: Literature Watch

Deciphering seizure semiology in corpus callosum injuries: A comprehensive systematic review with machine learning insights

Sun, 2024-05-19 06:00

Clin Neurol Neurosurg. 2024 May 7;242:108316. doi: 10.1016/j.clineuro.2024.108316. Online ahead of print.

ABSTRACT

INTRODUCTION: Seizure disorders have often been found to be associated with corpus callosum injuries, but in most cases they remain undiagnosed. Understanding the clinical, electrographic, and neuroradiological alterations can be crucial in delineating this entity.

OBJECTIVE: This systematic review aims to analyze the effects of corpus callosum injuries on seizure semiology, providing insights into the neuroscientific and clinical implications of such injuries.

METHODS: Adhering to the PRISMA guidelines, a comprehensive search across multiple databases, including PubMed/Medline, NIH, Embase, Cochrane Library, and Cross-ref, was conducted until September 25, 2023. Studies on seizures associated with corpus callosum injuries, excluding other cortical or sub-cortical involvements, were included. Machine learning (Random Forest) and deep learning (1D-CNN) algorithms were employed for data classification.
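For the classification step, the Random Forest branch of the analysis can be sketched with scikit-learn on tabular, encoded case features. The synthetic features, labels, and cross-validation scheme below are illustrative assumptions, not the review's actual data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for encoded case features: lesion location, lesion type,
# age, and imaging findings (one row per reported patient).
X = rng.normal(size=(56, 6))
y = rng.integers(0, 3, 56)          # e.g. generalized tonic-clonic vs. complex partial vs. focal

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```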

RESULTS: Initially, 1250 articles were identified from the mentioned databases, and an additional 350 were found through other relevant sources. Of these, 41 studies met the inclusion criteria, collectively encompassing 56 patients. The most frequent clinical manifestations included generalized tonic-clonic seizures, complex partial seizures, and focal seizures. The most common callosal injuries were related to reversible splenial lesion syndrome and cytotoxic lesions. Machine learning and deep learning analyses revealed significant correlations between seizure types, semiological parameters, and callosal injury locations. Complete recovery was reported in the majority of patients post-treatment.

CONCLUSION: Corpus callosum injuries have diverse impacts on seizure semiology. This review highlights the importance of understanding the role of the corpus callosum in seizure propagation and manifestation. The findings emphasize the need for targeted diagnostic and therapeutic strategies in managing seizures associated with callosal injuries. Future research should focus on expanding the data pool and exploring the underlying mechanisms in greater detail.

PMID:38762973 | DOI:10.1016/j.clineuro.2024.108316

Categories: Literature Watch

Deep learning system for malignancy risk prediction in cystic renal lesions: a multicenter study

Sun, 2024-05-19 06:00

Insights Imaging. 2024 May 20;15(1):121. doi: 10.1186/s13244-024-01700-0.

ABSTRACT

OBJECTIVES: To develop an interactive, non-invasive artificial intelligence (AI) system for malignancy risk prediction in cystic renal lesions (CRLs).

METHODS: In this retrospective, multicenter diagnostic study, we evaluated 715 patients. An interactive geodesic-based 3D segmentation model was created for CRL segmentation. A CRL classification model was developed using a spatial encoder temporal decoder (SETD) architecture. The classification model combines a 3D-ResNet50 network for extracting spatial features and a gated recurrent unit (GRU) network for decoding temporal features from multi-phase CT images. We assessed the segmentation model using sensitivity (SEN), specificity (SPE), intersection over union (IOU), and Dice similarity (Dice) metrics. The classification model's performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), and decision curve analysis (DCA).
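The spatial-encoder/temporal-decoder pattern, per-phase 3D CNN features decoded across contrast phases by a GRU, can be sketched as follows. A small 3D CNN stands in for the 3D-ResNet50, and all sizes and the phase count are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SpatialEncoderTemporalDecoder(nn.Module):
    """Per-phase 3D CNN features decoded across contrast phases by a GRU."""
    def __init__(self, feat_dim=64, n_classes=2):
        super().__init__()
        self.spatial = nn.Sequential(                # stand-in for the 3D-ResNet50 encoder
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (batch, phases, 1, D, H, W)
        b, p = x.shape[:2]
        feats = self.spatial(x.flatten(0, 1)).view(b, p, -1)   # one feature vector per phase
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])                 # benign vs. malignant logits

model = SpatialEncoderTemporalDecoder()
ct = torch.randn(2, 3, 1, 32, 64, 64)                # 3 contrast phases per lesion
print(model(ct).shape)                               # torch.Size([2, 2])
```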

RESULTS: From 2012 to 2023, we included 477 CRLs (median age, 57 [IQR: 48-65]; 173 men) in the training cohort, 226 CRLs (median age, 60 [IQR: 52-69]; 77 men) in the validation cohort, and 239 CRLs (median age, 59 [IQR: 53-69]; 95 men) in the testing cohort (external validation cohort 1, cohort 2, and cohort 3). The segmentation model and SETD classifier exhibited excellent performance in both validation (AUC = 0.973, ACC = 0.916, Dice = 0.847, IOU = 0.743, SEN = 0.840, SPE = 1.000) and testing datasets (AUC = 0.998, ACC = 0.988, Dice = 0.861, IOU = 0.762, SEN = 0.876, SPE = 1.000).

CONCLUSION: The AI system demonstrated excellent benign-malignant discriminatory ability across both validation and testing datasets and illustrated improved clinical decision-making utility.

CRITICAL RELEVANCE STATEMENT: In this era when incidental CRLs are prevalent, this interactive, non-invasive AI system will facilitate accurate diagnosis of CRLs, reducing excessive follow-up and overtreatment.

KEY POINTS: The rising prevalence of CRLs necessitates better malignancy prediction strategies. The AI system demonstrated excellent diagnostic performance in identifying malignant CRL. The AI system illustrated improved clinical decision-making utility.

PMID:38763985 | DOI:10.1186/s13244-024-01700-0

Categories: Literature Watch

MRI radiomics based on deep learning automated segmentation to predict early recurrence of hepatocellular carcinoma

Sun, 2024-05-19 06:00

Insights Imaging. 2024 May 20;15(1):120. doi: 10.1186/s13244-024-01679-8.

ABSTRACT

OBJECTIVES: To investigate the utility of deep learning (DL) automated segmentation-based MRI radiomic features and clinical-radiological characteristics in predicting early recurrence after curative resection of single hepatocellular carcinoma (HCC).

METHODS: This single-center, retrospective study included consecutive patients with surgically proven HCC who underwent contrast-enhanced MRI before curative hepatectomy from December 2009 to December 2021. Using 3D U-net-based DL algorithms, automated segmentation of the liver and HCC was performed on six MRI sequences. Radiomic features were extracted from the tumor, tumor border extensions (5 mm, 10 mm, and 20 mm), and the liver. A hybrid model incorporating the optimal radiomic signature and preoperative clinical-radiological characteristics was constructed via Cox regression analyses for early recurrence. Model discrimination was characterized with C-index and time-dependent area under the receiver operating curve (tdAUC) and compared with the widely-adopted BCLC and CNLC staging systems.
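The hybrid model here is a Cox regression over the radiomic signature and clinical-radiological features, judged by the concordance index. A minimal sketch with lifelines is shown below; the synthetic covariates, survival times, and risk stratification rule are placeholders, not the study's data or final model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "radiomic_signature": rng.normal(size=n),      # e.g. tumor + 5 mm rim + liver signature
    "rim_APHE": rng.integers(0, 2, n),             # rim arterial phase hyperenhancement
    "incomplete_capsule": rng.integers(0, 2, n),
    "time_to_recurrence": rng.exponential(24, n),  # months
    "recurred": rng.integers(0, 2, n),             # event indicator for early recurrence
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_recurrence", event_col="recurred")
print("C-index:", cph.concordance_index_)
risk = cph.predict_partial_hazard(df)              # stratify patients by median risk
print("high-risk patients:", (risk > risk.median()).sum())
```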

RESULTS: Four hundred and thirty-four patients (median age, 52.0 years; 376 men) were included. Among all radiomic signatures, HCC with 5 mm tumor border extension and liver showed the optimal predictive performance (training set C-index, 0.696). By incorporating this radiomic signature, rim arterial phase hyperenhancement (APHE), and incomplete tumor "capsule," a hybrid model demonstrated a validation set C-index of 0.706 and superior 2-year tdAUC (0.743) than both the BCLC (0.550; p < 0.001) and CNLC (0.635; p = 0.032) systems. This model stratified patients into two prognostically distinct risk strata (both datasets p < 0.001).

CONCLUSION: A preoperative imaging model incorporating the DL automated segmentation-based radiomic signature with rim APHE and incomplete tumor "capsule" accurately predicted early postsurgical recurrence of a single HCC.

CRITICAL RELEVANCE STATEMENT: The DL automated segmentation-based MRI radiomic model with rim APHE and incomplete tumor "capsule" holds the potential to facilitate individualized risk estimation of early postsurgical recurrence in single HCC.

KEY POINTS: A hybrid model integrating an MRI radiomic signature was constructed for early recurrence prediction in HCC. The hybrid model demonstrated a superior 2-year AUC compared with the BCLC and CNLC systems. The low-risk group identified by the model had longer recurrence-free survival.

PMID:38763975 | DOI:10.1186/s13244-024-01679-8

Categories: Literature Watch

Artificial intelligence for gastric cancer in endoscopy: From diagnostic reasoning to market

Sun, 2024-05-19 06:00

Dig Liver Dis. 2024 May 18:S1590-8658(24)00717-5. doi: 10.1016/j.dld.2024.04.019. Online ahead of print.

ABSTRACT

Recognition of gastric conditions during endoscopy exams, including gastric cancer, usually requires specialized training and a long learning curve. In addition, interobserver variability is frequently high owing to the different morphological characteristics of the lesions and grades of mucosal inflammation. In this context, artificial intelligence tools based on deep learning models have been developed to support physicians in detecting, classifying, and predicting gastric lesions more efficiently. Even though a growing number of studies exists in the literature, there are multiple challenges to bringing a model into practice in this field, such as the need for more robust validation studies and regulatory hurdles. Therefore, the aim of this review is to provide a comprehensive assessment of the current use of artificial intelligence applied to endoscopic imaging for evaluating gastric precancerous and cancerous lesions, as well as the barriers to widespread implementation of this technology in clinical routine.

PMID:38763796 | DOI:10.1016/j.dld.2024.04.019

Categories: Literature Watch

Deep learning-based platform performs high detection sensitivity of intracranial aneurysms in 3D brain TOF-MRA: An external clinical validation study

Sat, 2024-05-18 06:00

Int J Med Inform. 2024 May 16;188:105487. doi: 10.1016/j.ijmedinf.2024.105487. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the diagnostic efficacy of a developed artificial intelligence (AI) platform incorporating deep learning algorithms for the automated detection of intracranial aneurysms in time-of-flight (TOF) magnetic resonance angiography (MRA).

METHOD: This retrospective study encompassed 3D TOF MRA images acquired between January 2023 and June 2023 and aimed to validate the detection of intracranial aneurysms by our developed AI platform. Manual segmentation by experienced neuroradiologists served as the "gold standard". Following annotation of the MRA images by neuroradiologists using InferScholar software, the AI platform performed automatic segmentation of intracranial aneurysms. Accuracy (ACC), balanced ACC, area under the curve (AUC), sensitivity (SE), specificity (SP), F1 score, Brier score, and net benefit were used to evaluate the generalization of the AI platform. Intracranial aneurysm identification performance was compared between the AI platform and six radiologists with 3 to 12 years of experience in interpreting MR images. Additionally, a comparative analysis was carried out between the radiologists' detection performance based on independent visual diagnosis and AI-assisted diagnosis. Subgroup analyses were also performed by aneurysm size and location to explore factors affecting aneurysm detectability.
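The evaluation metrics listed here (SE, SP, ACC, balanced ACC, F1, Brier score, and net benefit) can all be computed from predicted probabilities, as sketched below with scikit-learn. The decision threshold and synthetic labels are assumptions; the study's own threshold and data are not reproduced.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score, f1_score,
                             brier_score_loss, confusion_matrix)

def detection_metrics(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    n = len(y_true)
    return {
        "SE": tp / (tp + fn),
        "SP": tn / (tn + fp),
        "ACC": accuracy_score(y_true, y_pred),
        "balanced_ACC": balanced_accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "Brier": brier_score_loss(y_true, y_prob),
        # Net benefit at the chosen threshold probability
        "net_benefit": tp / n - (fp / n) * (threshold / (1 - threshold)),
    }

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 510)                 # aneurysm present (1) vs. absent (0)
y_prob = np.clip(y_true * 0.6 + rng.uniform(0, 0.5, 510), 0, 1)
print(detection_metrics(y_true, y_prob))
```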

RESULTS: 510 patients were enrolled, including 215 patients (42.16%) with intracranial aneurysms and 295 patients (57.84%) without aneurysms. Compared with the six radiologists, the AI platform showed competitive discrimination power (AUC, 0.96), acceptable calibration (Brier score loss, 0.08), and clinical utility (net benefit, 86.96%). The AI platform demonstrated superior performance in detecting aneurysms, with an overall SE, SP, ACC, balanced ACC, and F1 score of 91.63%, 92.20%, 91.96%, 91.92%, and 90.57%, respectively, outperforming the two resident radiologists. In subgroup analysis by aneurysm size and location, the SE of the AI platform for identifying tiny (diameter < 3 mm), small (3 mm ≤ diameter < 5 mm), medium (5 mm ≤ diameter < 7 mm), and large aneurysms (diameter ≥ 7 mm) was 87.80%, 93.14%, 95.45%, and 100%, respectively. Furthermore, the SE for detecting aneurysms in the anterior circulation was higher than that in the posterior circulation. With AI assistance, the six radiologists (two residents, two attendings, and two professors) achieved statistically significant improvements in mean SE (residents: 71.40% vs. 88.37%; attendings: 82.79% vs. 93.26%; professors: 90.07% vs. 97.44%; P < 0.05) and ACC (residents: 85.29% vs. 94.12%; attendings: 91.76% vs. 97.06%; professors: 95.29% vs. 98.82%; P < 0.05), while no statistically significant change was observed in SP. Overall, the radiologists' mean SE increased by 11.40%, mean SP by 1.86%, mean ACC by 5.88%, mean balanced ACC by 6.63%, mean F1 score by 7.89%, and net benefit by 12.52%, while the mean Brier score decreased by 0.06.

CONCLUSIONS: The deep learning algorithms implemented in the AI platform effectively detected intracranial aneurysms on TOF-MRA and notably enhanced the diagnostic capabilities of radiologists. This indicates that the AI-based auxiliary diagnosis model can provide dependable and precise prediction to improve the diagnostic capacity of radiologists.

PMID:38761459 | DOI:10.1016/j.ijmedinf.2024.105487

Categories: Literature Watch

Histopathologic image-based deep learning classifier for predicting platinum-based treatment responses in high-grade serous ovarian cancer

Sat, 2024-05-18 06:00

Nat Commun. 2024 May 18;15(1):4253. doi: 10.1038/s41467-024-48667-6.

ABSTRACT

Platinum-based chemotherapy is the cornerstone treatment for female high-grade serous ovarian carcinoma (HGSOC), but choosing an appropriate treatment for patients hinges on their responsiveness to it. Currently, no available biomarkers can promptly predict responses to platinum-based treatment. Therefore, we developed the Pathologic Risk Classifier for HGSOC (PathoRiCH), a histopathologic image-based classifier. PathoRiCH was trained on an in-house cohort (n = 394) and validated on two independent external cohorts (n = 284 and n = 136). The PathoRiCH-predicted favorable and poor response groups show significantly different platinum-free intervals in all three cohorts. Combining PathoRiCH with molecular biomarkers provides an even more powerful tool for the risk stratification of patients. The decisions of PathoRiCH are explained through visualization and a transcriptomic analysis, which bolster the reliability of our model's decisions. PathoRiCH exhibits better predictive performance than current molecular biomarkers. PathoRiCH will provide a solid foundation for developing an innovative tool to transform the current diagnostic pipeline for HGSOC.

PMID:38762636 | DOI:10.1038/s41467-024-48667-6

Categories: Literature Watch

Development and validation of machine learning algorithms based on electrocardiograms for cardiovascular diagnoses at the population level

Sat, 2024-05-18 06:00

NPJ Digit Med. 2024 May 18;7(1):133. doi: 10.1038/s41746-024-01130-8.

ABSTRACT

Artificial intelligence-enabled electrocardiogram (ECG) algorithms are gaining prominence for the early detection of cardiovascular (CV) conditions, including those not traditionally associated with conventional ECG measures or expert interpretation. This study develops and validates such models for simultaneous prediction of 15 common CV diagnoses at the population level. We conducted a retrospective study that included 1,605,268 ECGs of 244,077 adult patients presenting to 84 emergency departments or hospitals, who underwent at least one 12-lead ECG from February 2007 to April 2020 in Alberta, Canada, and considered 15 CV diagnoses, as identified by International Classification of Diseases, 10th revision (ICD-10) codes: atrial fibrillation (AF), supraventricular tachycardia (SVT), ventricular tachycardia (VT), cardiac arrest (CA), atrioventricular block (AVB), unstable angina (UA), ST-elevation myocardial infarction (STEMI), non-STEMI (NSTEMI), pulmonary embolism (PE), hypertrophic cardiomyopathy (HCM), aortic stenosis (AS), mitral valve prolapse (MVP), mitral valve stenosis (MS), pulmonary hypertension (PHTN), and heart failure (HF). We employed ResNet-based deep learning (DL) on the ECG tracings and extreme gradient boosting (XGB) on ECG measurements. When evaluated on the first ECGs per episode of 97,631 holdout patients, the DL models had an area under the receiver operating characteristic curve (AUROC) of <80% for 3 CV conditions (PE, SVT, UA), 80-90% for 8 CV conditions (CA, NSTEMI, VT, MVP, PHTN, AS, AF, HF), and >90% for 4 diagnoses (AVB, HCM, MS, STEMI). The DL models outperformed the XGB models, with about 5% higher AUROC on average. Overall, ECG-based prediction models demonstrated good-to-excellent performance in diagnosing common CV conditions.
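The gradient-boosting branch, one binary classifier per diagnosis trained on tabular ECG measurements and scored by AUROC, can be sketched as below. The feature set, labels, diagnosis subset, and hyperparameters are synthetic assumptions, not the study's configuration.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_feat = 5000, 20                      # e.g. intervals, axes, and amplitudes from the ECG
X = rng.normal(size=(n, n_feat))
diagnoses = ["AF", "STEMI", "HF"]         # a subset of the 15 target conditions
Y = rng.integers(0, 2, size=(n, len(diagnoses)))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
for i, dx in enumerate(diagnoses):        # one binary model per diagnosis
    clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    clf.fit(X_tr, Y_tr[:, i])
    auc = roc_auc_score(Y_te[:, i], clf.predict_proba(X_te)[:, 1])
    print(f"{dx}: AUROC = {auc:.3f}")
```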

PMID:38762623 | DOI:10.1038/s41746-024-01130-8

Categories: Literature Watch

Harnessing LSTM and XGBoost algorithms for storm prediction

Sat, 2024-05-18 06:00

Sci Rep. 2024 May 18;14(1):11381. doi: 10.1038/s41598-024-62182-0.

ABSTRACT

Storms can cause significant damage, severe social disruption, and loss of human life, but predicting them is challenging due to their infrequent occurrence. To overcome this problem, a novel deep learning and machine learning approach based on long short-term memory (LSTM) and extreme gradient boosting (XGBoost) was applied to predict storm characteristics and occurrence in Western France. A combination of buoy data and a storm database covering 1996 to 2020 was processed for model training and testing. The models were trained and validated on the data from January 1996 to December 2015, and the trained models were then used to predict storm characteristics and occurrence from January 2016 to December 2020. The LSTM model used to predict storm characteristics showed good accuracy in forecasting temperature and pressure, with challenges observed in capturing extreme values of wave height and wind speed. The trained XGBoost model, on the other hand, performed very well in predicting storm occurrence. The adopted methodology can help reduce the impact of storms on people and infrastructure.

PMID:38762598 | DOI:10.1038/s41598-024-62182-0

Categories: Literature Watch
