Deep learning

Linguistic-based Mild Cognitive Impairment detection using Informative Loss

Sun, 2024-05-19 06:00

Comput Biol Med. 2024 May 14;176:108606. doi: 10.1016/j.compbiomed.2024.108606. Online ahead of print.

ABSTRACT

This paper presents a deep learning method using Natural Language Processing (NLP) techniques to distinguish between Mild Cognitive Impairment (MCI) and Normal Cognitive (NC) conditions in older adults. We propose a framework that analyzes transcripts generated from video interviews collected within the I-CONECT study project, a randomized controlled trial aimed at improving cognitive functions through video chats. Our proposed NLP framework consists of two Transformer-based modules, namely Sentence Embedding (SE) and Sentence Cross Attention (SCA). First, the SE module captures contextual relationships between words within each sentence. Subsequently, the SCA module extracts temporal features from a sequence of sentences. These features are then used by a Multi-Layer Perceptron (MLP) for the classification of subjects into MCI or NC. To build a robust model, we propose a novel loss function, called InfoLoss, that considers the reduction in entropy from observing each sequence of sentences to ultimately enhance the classification accuracy. The results of our comprehensive model evaluation using the I-CONECT dataset show that our framework can distinguish between MCI and NC with an average area under the curve of 84.75%.
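
The abstract gives no code; as a rough illustration of the scaled dot-product attention that a sentence-level module like SCA builds on (the function names, shapes, and toy data here are hypothetical, not from the paper), a minimal numpy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention across a sequence of sentence embeddings."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)   # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)         # each query attends over all keys
    return weights @ values, weights

# toy sequence of 4 "sentence embeddings" of dimension 8
rng = np.random.default_rng(0)
sents = rng.normal(size=(4, 8))
ctx, w = cross_attention(sents, sents, sents)
print(ctx.shape, np.allclose(w.sum(axis=1), 1.0))  # (4, 8) True
```

In a real model the queries, keys, and values would be learned linear projections of the SE-module outputs rather than the raw embeddings used here.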

PMID:38763068 | DOI:10.1016/j.compbiomed.2024.108606

Categories: Literature Watch

D-TrAttUnet: Toward hybrid CNN-transformer architecture for generic and subtle segmentation in medical images

Sun, 2024-05-19 06:00

Comput Biol Med. 2024 May 11;176:108590. doi: 10.1016/j.compbiomed.2024.108590. Online ahead of print.

ABSTRACT

Over the past two decades, machine analysis of medical imaging has advanced rapidly, opening up significant potential for several important medical applications. As complicated diseases increase and the number of cases rises, the role of machine-based imaging analysis has become indispensable. It serves as both a tool and an assistant to medical experts, providing valuable insights and guidance. A particularly challenging task in this area is lesion segmentation, which is difficult even for experienced radiologists. The complexity of this task highlights the urgent need for robust machine learning approaches to support medical staff. In response, we present our novel solution: the D-TrAttUnet architecture. This framework is based on the observation that different diseases often target specific organs. Our architecture includes an encoder-decoder structure with a composite Transformer-CNN encoder and dual decoders. The encoder includes two paths: the Transformer path and the Encoders Fusion Module path. The Dual-Decoder configuration uses two identical decoders, each with attention gates. This allows the model to simultaneously segment lesions and organs and integrate their segmentation losses. To validate our approach, we performed evaluations on the Covid-19 and Bone Metastasis segmentation tasks. We also investigated the adaptability of the model by testing it without the second decoder in the segmentation of glands and nuclei. The results confirmed the superiority of our approach, especially in Covid-19 infections and the segmentation of bone metastases. In addition, the hybrid encoder showed exceptional performance in the segmentation of glands and nuclei, solidifying its role in modern medical image analysis.

PMID:38763066 | DOI:10.1016/j.compbiomed.2024.108590

Categories: Literature Watch

Deciphering seizure semiology in corpus callosum injuries: A comprehensive systematic review with machine learning insights

Sun, 2024-05-19 06:00

Clin Neurol Neurosurg. 2024 May 7;242:108316. doi: 10.1016/j.clineuro.2024.108316. Online ahead of print.

ABSTRACT

INTRODUCTION: Seizure disorders have often been found to be associated with corpus callosum injuries, but in most cases, they remain undiagnosed. Understanding the clinical, electrographic, and neuroradiological alterations can be crucial in delineating this entity.

OBJECTIVE: This systematic review aims to analyze the effects of corpus callosum injuries on seizure semiology, providing insights into the neuroscientific and clinical implications of such injuries.

METHODS: Adhering to the PRISMA guidelines, a comprehensive search across multiple databases, including PubMed/Medline, NIH, Embase, Cochrane Library, and Cross-ref, was conducted until September 25, 2023. Studies on seizures associated with corpus callosum injuries, excluding other cortical or sub-cortical involvements, were included. Machine learning (Random Forest) and deep learning (1D-CNN) algorithms were employed for data classification.

RESULTS: Initially, 1250 articles were identified from the mentioned databases, and an additional 350 were found through other relevant sources. Of these, 41 studies met the inclusion criteria, collectively encompassing 56 patients. The most frequent clinical manifestations included generalized tonic-clonic seizures, complex partial seizures, and focal seizures. The most common callosal injuries were related to reversible splenial lesion syndrome and cytotoxic lesions. Machine learning and deep learning analyses revealed significant correlations between seizure types, semiological parameters, and callosal injury locations. Complete recovery was reported in the majority of patients post-treatment.

CONCLUSION: Corpus callosum injuries have diverse impacts on seizure semiology. This review highlights the importance of understanding the role of the corpus callosum in seizure propagation and manifestation. The findings emphasize the need for targeted diagnostic and therapeutic strategies in managing seizures associated with callosal injuries. Future research should focus on expanding the data pool and exploring the underlying mechanisms in greater detail.

PMID:38762973 | DOI:10.1016/j.clineuro.2024.108316

Categories: Literature Watch

Deep learning system for malignancy risk prediction in cystic renal lesions: a multicenter study

Sun, 2024-05-19 06:00

Insights Imaging. 2024 May 20;15(1):121. doi: 10.1186/s13244-024-01700-0.

ABSTRACT

OBJECTIVES: To develop an interactive, non-invasive artificial intelligence (AI) system for malignancy risk prediction in cystic renal lesions (CRLs).

METHODS: In this retrospective, multicenter diagnostic study, we evaluated 715 patients. An interactive geodesic-based 3D segmentation model was created for CRLs segmentation. A CRLs classification model was developed using spatial encoder temporal decoder (SETD) architecture. The classification model combines a 3D-ResNet50 network for extracting spatial features and a gated recurrent unit (GRU) network for decoding temporal features from multi-phase CT images. We assessed the segmentation model using sensitivity (SEN), specificity (SPE), intersection over union (IOU), and dice similarity (Dice) metrics. The classification model's performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy score (ACC), and decision curve analysis (DCA).
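
As a side note on the reported segmentation metrics: SEN, SPE, IoU, and Dice all derive from the same voxel-wise confusion counts. A minimal numpy sketch with toy masks (not the study's data):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Voxel-wise SEN, SPE, IoU, and Dice for binary masks (bool arrays)."""
    tp = np.logical_and(pred, gt).sum()      # predicted and truly lesion
    tn = np.logical_and(~pred, ~gt).sum()    # predicted and truly background
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sen, spe, iou, dice

gt   = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
pred = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(segmentation_metrics(pred, gt))
```

Note that Dice = 2·IoU/(1+IoU), so the two overlap metrics always move together.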

RESULTS: From 2012 to 2023, we included 477 CRLs (median age, 57 [IQR: 48-65]; 173 men) in the training cohort, 226 CRLs (median age, 60 [IQR: 52-69]; 77 men) in the validation cohort, and 239 CRLs (median age, 59 [IQR: 53-69]; 95 men) in the testing cohort (external validation cohort 1, cohort 2, and cohort 3). The segmentation model and SETD classifier exhibited excellent performance in both validation (AUC = 0.973, ACC = 0.916, Dice = 0.847, IOU = 0.743, SEN = 0.840, SPE = 1.000) and testing datasets (AUC = 0.998, ACC = 0.988, Dice = 0.861, IOU = 0.762, SEN = 0.876, SPE = 1.000).

CONCLUSION: The AI system demonstrated excellent benign-malignant discriminatory ability across both validation and testing datasets and illustrated improved clinical decision-making utility.

CRITICAL RELEVANCE STATEMENT: In this era when incidental CRLs are prevalent, this interactive, non-invasive AI system will facilitate accurate diagnosis of CRLs, reducing excessive follow-up and overtreatment.

KEY POINTS: The rising prevalence of CRLs necessitates better malignancy prediction strategies. The AI system demonstrated excellent diagnostic performance in identifying malignant CRL. The AI system illustrated improved clinical decision-making utility.

PMID:38763985 | DOI:10.1186/s13244-024-01700-0

Categories: Literature Watch

MRI radiomics based on deep learning automated segmentation to predict early recurrence of hepatocellular carcinoma

Sun, 2024-05-19 06:00

Insights Imaging. 2024 May 20;15(1):120. doi: 10.1186/s13244-024-01679-8.

ABSTRACT

OBJECTIVES: To investigate the utility of deep learning (DL) automated segmentation-based MRI radiomic features and clinical-radiological characteristics in predicting early recurrence after curative resection of single hepatocellular carcinoma (HCC).

METHODS: This single-center, retrospective study included consecutive patients with surgically proven HCC who underwent contrast-enhanced MRI before curative hepatectomy from December 2009 to December 2021. Using 3D U-net-based DL algorithms, automated segmentation of the liver and HCC was performed on six MRI sequences. Radiomic features were extracted from the tumor, tumor border extensions (5 mm, 10 mm, and 20 mm), and the liver. A hybrid model incorporating the optimal radiomic signature and preoperative clinical-radiological characteristics was constructed via Cox regression analyses for early recurrence. Model discrimination was characterized with C-index and time-dependent area under the receiver operating curve (tdAUC) and compared with the widely-adopted BCLC and CNLC staging systems.

RESULTS: Four hundred and thirty-four patients (median age, 52.0 years; 376 men) were included. Among all radiomic signatures, the signature combining HCC with 5 mm tumor border extension and the liver showed the optimal predictive performance (training set C-index, 0.696). By incorporating this radiomic signature, rim arterial phase hyperenhancement (APHE), and incomplete tumor "capsule," a hybrid model demonstrated a validation set C-index of 0.706 and a 2-year tdAUC (0.743) superior to that of both the BCLC (0.550; p < 0.001) and CNLC (0.635; p = 0.032) systems. This model stratified patients into two prognostically distinct risk strata (both datasets p < 0.001).
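
Harrell's C-index, the discrimination measure reported above, counts the fraction of comparable patient pairs in which the patient with the shorter observed time also has the higher predicted risk. A minimal pure-Python sketch with toy survival data (not the study's):

```python
from itertools import combinations

def c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.
    events[i] == 1 means the event was observed; 0 means censored."""
    concordant, ties, comparable = 0, 0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i                # make i the earlier observed time
        if not events[i]:
            continue                   # earlier subject censored -> not comparable
        if times[i] == times[j]:
            continue                   # tied times skipped in this simple variant
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            ties += 1
    return (concordant + 0.5 * ties) / comparable

times  = [5, 10, 3, 8]
events = [1, 1, 1, 0]                  # 0 = censored
risks  = [0.9, 0.2, 0.95, 0.4]
print(c_index(times, events, risks))   # → 1.0
```

A value of 0.5 indicates chance-level ranking; the 0.706 validation C-index above means roughly 71% of comparable pairs are ordered correctly by the model.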

CONCLUSION: A preoperative imaging model incorporating the DL automated segmentation-based radiomic signature with rim APHE and incomplete tumor "capsule" accurately predicted early postsurgical recurrence of a single HCC.

CRITICAL RELEVANCE STATEMENT: The DL automated segmentation-based MRI radiomic model with rim APHE and incomplete tumor "capsule" holds the potential to facilitate individualized risk estimation of postsurgical early recurrence in a single HCC.

KEY POINTS: A hybrid model integrating an MRI radiomic signature was constructed for early recurrence prediction of HCC. The hybrid model demonstrated a superior 2-year AUC compared with the BCLC and CNLC systems. The low-risk HCC group categorized by the model carried a longer RFS.

PMID:38763975 | DOI:10.1186/s13244-024-01679-8

Categories: Literature Watch

Artificial intelligence for gastric cancer in endoscopy: From diagnostic reasoning to market

Sun, 2024-05-19 06:00

Dig Liver Dis. 2024 May 18:S1590-8658(24)00717-5. doi: 10.1016/j.dld.2024.04.019. Online ahead of print.

ABSTRACT

Recognition of gastric conditions during endoscopy exams, including gastric cancer, usually requires specialized training and a long learning curve. Besides that, interobserver variability is frequently high due to the varied morphological characteristics of the lesions and grades of mucosal inflammation. In this context, artificial intelligence tools based on deep learning models have been developed to support physicians in detecting, classifying, and predicting gastric lesions more efficiently. Even though a growing number of studies exist in the literature, there are multiple challenges to bringing a model into practice in this field, such as the need for more robust validation studies and regulatory hurdles. Therefore, the aim of this review is to provide a comprehensive assessment of the current use of artificial intelligence applied to endoscopic imaging to evaluate gastric precancerous and cancerous lesions, and of the barriers to widespread implementation of this technology in clinical routine.

PMID:38763796 | DOI:10.1016/j.dld.2024.04.019

Categories: Literature Watch

Deep learning-based platform performs high detection sensitivity of intracranial aneurysms in 3D brain TOF-MRA: An external clinical validation study

Sat, 2024-05-18 06:00

Int J Med Inform. 2024 May 16;188:105487. doi: 10.1016/j.ijmedinf.2024.105487. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the diagnostic efficacy of a developed artificial intelligence (AI) platform incorporating deep learning algorithms for the automated detection of intracranial aneurysms in time-of-flight (TOF) magnetic resonance angiography (MRA).

METHOD: This retrospective study encompassed 3D TOF MRA images acquired between January 2023 and June 2023, aiming to validate the presence of intracranial aneurysms via our developed AI platform. The manual segmentation results by experienced neuroradiologists served as the "gold standard". Following annotation of MRA images by neuroradiologists using InferScholar software, the AI platform conducted automatic segmentation of intracranial aneurysms. Various metrics including accuracy (ACC), balanced ACC, area under the curve (AUC), sensitivity (SE), specificity (SP), F1 score, Brier Score, and Net Benefit were utilized to evaluate the generalization of the AI platform. Comparison of intracranial aneurysm identification performance was conducted between the AI platform and six radiologists with experience ranging from 3 to 12 years in interpreting MR images. Additionally, a comparative analysis was carried out between radiologists' detection performance based on independent visual diagnosis and AI-assisted diagnosis. Subgroup analyses were also performed based on the size and location of the aneurysms to explore factors impacting aneurysm detectability.

RESULTS: 510 patients were enrolled, including 215 patients (42.16 %) with intracranial aneurysms and 295 patients (57.84 %) without aneurysms. Compared with six radiologists, the AI platform showed competitive discrimination power (AUC, 0.96), acceptable calibration (Brier Score loss, 0.08), and clinical utility (Net Benefit, 86.96 %). The AI platform demonstrated superior performance in detecting aneurysms, with an overall SE, SP, ACC, balanced ACC, and F1 score of 91.63 %, 92.20 %, 91.96 %, 91.92 %, and 90.57 % respectively, outperforming the detectability of the two resident radiologists. For subgroup analysis based on aneurysm size and location, we observed that the SE of the AI platform for identifying tiny (diameter < 3 mm), small (3 mm ≤ diameter < 5 mm), medium (5 mm ≤ diameter < 7 mm), and large aneurysms (diameter ≥ 7 mm) was 87.80 %, 93.14 %, 95.45 %, and 100 %, respectively. Furthermore, the SE for detecting aneurysms in the anterior circulation was higher than that in the posterior circulation. With AI assistance, six radiologists (i.e., two residents, two attendings, and two professors) achieved statistically significant improvements in mean SE (residents: 71.40 % vs. 88.37 %; attendings: 82.79 % vs. 93.26 %; professors: 90.07 % vs. 97.44 %; P < 0.05) and ACC (residents: 85.29 % vs. 94.12 %; attendings: 91.76 % vs. 97.06 %; professors: 95.29 % vs. 98.82 %; P < 0.05), while no statistically significant change was observed in SP. Overall, radiologists' mean SE increased by 11.40 %, mean SP increased by 1.86 %, mean ACC increased by 5.88 %, mean balanced ACC improved by 6.63 %, mean F1 score grew by 7.89 %, and Net Benefit rose by 12.52 %, with a concurrent decline of 0.06 in mean Brier score.

CONCLUSIONS: The deep learning algorithms implemented in the AI platform effectively detected intracranial aneurysms on TOF-MRA and notably enhanced the diagnostic capabilities of radiologists. This indicates that the AI-based auxiliary diagnosis model can provide dependable and precise prediction to improve the diagnostic capacity of radiologists.

PMID:38761459 | DOI:10.1016/j.ijmedinf.2024.105487

Categories: Literature Watch

Histopathologic image-based deep learning classifier for predicting platinum-based treatment responses in high-grade serous ovarian cancer

Sat, 2024-05-18 06:00

Nat Commun. 2024 May 18;15(1):4253. doi: 10.1038/s41467-024-48667-6.

ABSTRACT

Platinum-based chemotherapy is the cornerstone treatment for female high-grade serous ovarian carcinoma (HGSOC), but choosing an appropriate treatment for patients hinges on their responsiveness to it. Currently, no available biomarkers can promptly predict responses to platinum-based treatment. Therefore, we developed the Pathologic Risk Classifier for HGSOC (PathoRiCH), a histopathologic image-based classifier. PathoRiCH was trained on an in-house cohort (n = 394) and validated on two independent external cohorts (n = 284 and n = 136). The PathoRiCH-predicted favorable and poor response groups show significantly different platinum-free intervals in all three cohorts. Combining PathoRiCH with molecular biomarkers provides an even more powerful tool for the risk stratification of patients. The decisions of PathoRiCH are explained through visualization and a transcriptomic analysis, which bolster the reliability of our model's decisions. PathoRiCH exhibits better predictive performance than current molecular biomarkers. PathoRiCH will provide a solid foundation for developing an innovative tool to transform the current diagnostic pipeline for HGSOC.

PMID:38762636 | DOI:10.1038/s41467-024-48667-6

Categories: Literature Watch

Development and validation of machine learning algorithms based on electrocardiograms for cardiovascular diagnoses at the population level

Sat, 2024-05-18 06:00

NPJ Digit Med. 2024 May 18;7(1):133. doi: 10.1038/s41746-024-01130-8.

ABSTRACT

Artificial intelligence-enabled electrocardiogram (ECG) algorithms are gaining prominence for the early detection of cardiovascular (CV) conditions, including those not traditionally associated with conventional ECG measures or expert interpretation. This study develops and validates such models for the simultaneous prediction of 15 different common CV diagnoses at the population level. We conducted a retrospective study that included 1,605,268 ECGs of 244,077 adult patients presenting to 84 emergency departments or hospitals, who underwent at least one 12-lead ECG from February 2007 to April 2020 in Alberta, Canada, and considered 15 CV diagnoses, as identified by International Classification of Diseases, 10th revision (ICD-10) codes: atrial fibrillation (AF), supraventricular tachycardia (SVT), ventricular tachycardia (VT), cardiac arrest (CA), atrioventricular block (AVB), unstable angina (UA), ST-elevation myocardial infarction (STEMI), non-STEMI (NSTEMI), pulmonary embolism (PE), hypertrophic cardiomyopathy (HCM), aortic stenosis (AS), mitral valve prolapse (MVP), mitral valve stenosis (MS), pulmonary hypertension (PHTN), and heart failure (HF). We employed ResNet-based deep learning (DL) using ECG tracings and extreme gradient boosting (XGB) using ECG measurements. When evaluated on the first ECGs per episode of 97,631 holdout patients, the DL models had an area under the receiver operating characteristic curve (AUROC) of <80% for 3 CV conditions (PE, SVT, UA), 80-90% for 8 CV conditions (CA, NSTEMI, VT, MVP, PHTN, AS, AF, HF), and an AUROC > 90% for 4 diagnoses (AVB, HCM, MS, STEMI). DL models outperformed XGB models, with about 5% higher AUROC on average. Overall, ECG-based prediction models demonstrated good-to-excellent prediction performance in diagnosing common CV conditions.
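
The AUROC figures above can be understood via the rank-sum (Mann-Whitney U) formulation: the AUROC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal dependency-free sketch with toy labels and scores (not the study's data):

```python
def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # assign average ranks so that tied scores are handled fairly
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

So a reported AUROC > 90% means a diseased patient's ECG outranks a non-diseased patient's score in over 90% of such pairs.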

PMID:38762623 | DOI:10.1038/s41746-024-01130-8

Categories: Literature Watch

Harnessing LSTM and XGBoost algorithms for storm prediction

Sat, 2024-05-18 06:00

Sci Rep. 2024 May 18;14(1):11381. doi: 10.1038/s41598-024-62182-0.

ABSTRACT

Storms can cause significant damage, severe social disturbance and loss of human life, but predicting them is challenging due to their infrequent occurrence. To overcome this problem, a novel deep learning and machine learning approach based on long short-term memory (LSTM) and Extreme Gradient Boosting (XGBoost) was applied to predict storm characteristics and occurrence in Western France. A combination of data from buoys and a storm database between 1996 and 2020 was processed for model training and testing. The models were trained and validated with the dataset from January 1996 to December 2015 and the trained models were then used to predict storm characteristics and occurrence from January 2016 to December 2020. The LSTM model used to predict storm characteristics showed great accuracy in forecasting temperature and pressure, with challenges observed in capturing extreme values for wave height and wind speed. The trained XGBoost model, on the other hand, performed extremely well in predicting storm occurrence. The methodology adopted can help reduce the impact of storms on humans and objects.
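
Training an LSTM on buoy time series as described above requires reshaping the data into fixed-length lookback windows with next-step targets. A minimal numpy sketch of that preprocessing step (the window length, feature count, and toy series are illustrative, not from the paper):

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a (time, features) series into (samples, lookback, features)
    inputs X and next-step targets y, as an LSTM would consume them."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]          # y[i] is the step right after window X[i]
    return X, y

# toy series: 10 time steps of 3 features (e.g. pressure, wind speed, wave height)
series = np.arange(30, dtype=float).reshape(10, 3)
X, y = make_windows(series, lookback=4)
print(X.shape, y.shape)  # (6, 4, 3) (6, 3)
```

Each window X[i] holds the previous four observations and y[i] the one to predict, which is the standard supervised framing for sequence models like LSTMs.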

PMID:38762598 | DOI:10.1038/s41598-024-62182-0

Categories: Literature Watch

Evaluation method for ecology-agriculture-urban spaces based on deep learning

Sat, 2024-05-18 06:00

Sci Rep. 2024 May 18;14(1):11353. doi: 10.1038/s41598-024-61919-1.

ABSTRACT

With the increasing global population and escalating ecological and farmland degradation, challenges to the environment and livelihoods have become prominent. Coordinating urban development, food security, and ecological conservation is crucial for fostering sustainable development. This study focuses on assessing the "Ecology-Agriculture-Urban" (E-A-U) space in Yulin City, China, as a representative case. Following the Chinese framework named "environmental capacity and national space development suitability evaluation" (hereinafter referred to as "Double Evaluation"), we developed a Self-Attention Residual Neural Network (SARes-NET) model to assess the E-A-U space. Spatially, the northwest region is dominated by agriculture, while the southeast is characterized by urban and ecological areas, aligning with regional development patterns. Comparative validations with five other models, including Logistic Regression (LR), Naive Bayes (NB), Gradient Boosting Decision Trees (GBDT), Random Forest (RF), and Artificial Neural Network (ANN), reveal that the SARes-NET model exhibits superior simulation performance, highlighting its ability to capture intricate non-linear relationships and reduce human errors in data processing. This study establishes deep learning-guided E-A-U spatial evaluation as an innovative approach for national spatial planning, holding broader implications for national-level territorial assessments.

PMID:38762514 | DOI:10.1038/s41598-024-61919-1

Categories: Literature Watch

Advancing Automatic Gastritis Diagnosis: An Interpretable Multilabel Deep Learning Framework for the Simultaneous Assessment of Multiple Indicators

Sat, 2024-05-18 06:00

Am J Pathol. 2024 May 16:S0002-9440(24)00175-5. doi: 10.1016/j.ajpath.2024.04.007. Online ahead of print.

ABSTRACT

The evaluation of morphological features such as inflammation, gastric atrophy, and intestinal metaplasia is crucial for diagnosing gastritis. However, artificial intelligence (AI) analysis for nontumor diseases like gastritis is limited. Previous deep learning models have omitted important morphological indicators and cannot simultaneously diagnose gastritis indicators or provide interpretable labels. To address this, an attention-based multi-instance multilabel learning network (AMMNet) was developed to simultaneously achieve the multilabel diagnosis of activity, atrophy, and intestinal metaplasia with only slide-level weak labels. To evaluate AMMNet's real-world performance, a diagnostic test was designed to observe improvements in junior pathologists' diagnostic accuracy and efficiency with and without AMMNet assistance. In this study of 1,096 patients from 7 independent medical centers, AMMNet performed well in assessing activity (area under the curve (AUC): 0.93), atrophy (AUC: 0.97), and intestinal metaplasia (AUC: 0.93). The false-negative rates (FNRs) of these indicators were only 0.04, 0.08, and 0.18, respectively, and junior pathologists had lower FNRs with model assistance (0.15 vs. 0.10). Furthermore, AMMNet reduced the time required per whole-slide image (WSI) from 5.46 minutes to only 2.85 minutes, enhancing diagnostic efficiency. In block-level clustering analysis, AMMNet effectively visualized task-related patches within WSIs, improving interpretability. These findings highlight AMMNet's effectiveness in accurately evaluating gastritis morphological indicators on multicenter datasets. Using multi-instance multilabel learning strategies to support routine diagnostic pathology deserves further evaluation.

PMID:38762117 | DOI:10.1016/j.ajpath.2024.04.007

Categories: Literature Watch

Generative artificial intelligence in ophthalmology

Sat, 2024-05-18 06:00

Surv Ophthalmol. 2024 May 16:S0039-6257(24)00044-4. doi: 10.1016/j.survophthal.2024.04.009. Online ahead of print.

ABSTRACT

Generative AI has revolutionized medicine over the past several years. A generative adversarial network (GAN) is a deep learning framework that has become a powerful technique in medicine, particularly in ophthalmology and image analysis. In this paper we review the current ophthalmic literature involving GANs and highlight key contributions in the field. We briefly touch on ChatGPT, another application of generative AI, and its potential in ophthalmology. We also explore the potential uses for GANs in ocular imaging, with a specific emphasis on 3 primary domains: image enhancement, disease identification, and generation of synthetic data. PubMed, Ovid MEDLINE, and Google Scholar were searched from inception to October 30, 2022 to identify applications of GANs in ophthalmology. A total of 40 papers were included in this review. We cover various applications of GANs in ophthalmic-related imaging, including optical coherence tomography, orbital magnetic resonance imaging, fundus photography, and ultrasound; however, we also highlight several challenges that resulted in the generation of inaccurate and atypical results during certain iterations. Finally, we examine future directions and considerations for generative AI in ophthalmology.

PMID:38762072 | DOI:10.1016/j.survophthal.2024.04.009

Categories: Literature Watch

Subtype-WGME enables whole-genome-wide multi-omics cancer subtyping

Sat, 2024-05-18 06:00

Cell Rep Methods. 2024 May 14:100781. doi: 10.1016/j.crmeth.2024.100781. Online ahead of print.

ABSTRACT

We present an innovative strategy for integrating whole-genome-wide multi-omics data, which facilitates adaptive amalgamation by leveraging hidden-layer features derived from high-dimensional omics data through a multi-task encoder. Empirical evaluations on eight benchmark cancer datasets substantiated that our proposed framework outstripped the comparative algorithms in cancer subtyping, delivering superior subtyping outcomes. Building upon these subtyping results, we establish a robust pipeline for identifying whole-genome-wide biomarkers, unearthing 195 significant biomarkers. Furthermore, we conduct an exhaustive analysis to assess the importance of each omics feature and of non-coding region features at the whole-genome-wide level during cancer subtyping. Our investigation shows that both omics and non-coding region features substantially impact cancer development and survival prognosis. This study emphasizes the potential and practical implications of integrating genome-wide data in cancer research, demonstrating the potency of comprehensive genomic characterization. Additionally, our findings offer insightful perspectives for multi-omics analysis employing deep learning methodologies.

PMID:38761803 | DOI:10.1016/j.crmeth.2024.100781

Categories: Literature Watch

Longitudinal artificial intelligence-based deep learning models for diagnosis and prediction of the future occurrence of polyneuropathy in diabetes and prediabetes

Sat, 2024-05-18 06:00

Neurophysiol Clin. 2024 May 17;54(4):102982. doi: 10.1016/j.neucli.2024.102982. Online ahead of print.

ABSTRACT

OBJECTIVE: The objective of this study was to develop artificial intelligence-based deep learning models and assess their potential utility and accuracy in diagnosing and predicting the future occurrence of diabetic distal sensorimotor polyneuropathy (DSPN) among individuals with type 2 diabetes mellitus (T2DM) and prediabetes.

METHODS: In 394 patients (T2DM = 300, prediabetes = 94), we developed a DSPN diagnostic and predictive model using Random Forest (RF)-based variable selection techniques, specifically incorporating the combined capabilities of the Toronto Clinical Neuropathy Score (TCNS) and nerve conduction studies (NCS) to identify relevant variables. These important variables were then integrated into a deep learning framework comprising Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. To evaluate temporal predictive efficacy, patients were assessed at enrollment and at one-year follow-up.

RESULTS: RF-based variable selection identified key factors for diagnosing DSPN. Numbness scores, sensory test results (vibration), reflexes (knee, ankle), sural nerve attributes (sensory nerve action potential [SNAP] amplitude, nerve conduction velocity [NCV], latency), and peroneal/tibial motor NCV were candidate variables at baseline and over one year. Tibial compound motor action potential amplitudes were used for initial diagnosis, and ulnar SNAP amplitude for subsequent diagnoses. CNNs and LSTMs achieved impressive AUC values of 0.98 for DSPN diagnosis prediction, and 0.93 and 0.89 respectively for predicting the future occurrence of DSPN. RF techniques combined with two deep learning algorithms exhibited outstanding performance in diagnosing and predicting the future occurrence of DSPN. These algorithms have the potential to serve as surrogate measures, aiding clinicians in accurate diagnosis and future prediction of DSPN.

PMID:38761793 | DOI:10.1016/j.neucli.2024.102982

Categories: Literature Watch

A deep learning-based quantitative prediction model for the processing potentials of soybeans as soymilk raw materials

Sat, 2024-05-18 06:00

Food Chem. 2024 May 14;453:139671. doi: 10.1016/j.foodchem.2024.139671. Online ahead of print.

ABSTRACT

Current technologies such as correlation analysis, regression analysis, and classification models exhibit various limitations in evaluating soybean processing potentials, including single-factor and vague evaluation and the inability to predict quantitatively, thereby hindering a more efficient and profitable soymilk industry. To solve this problem, 54 soybean cultivars and their corresponding soymilks were subjected to chemical, textural, and sensory analyses to obtain the soybean physicochemical nature (PN) and the soymilk profit and quality attribute (PQA) datasets. A deep learning-based model was established to quantitatively predict PQA data from PN data. Through 45 rounds of training with stochastic gradient descent optimization, 9 remaining pairs of PN and PQA data were used for model validation. Results suggested that the overall prediction performance of the model improved significantly through iterative training, and the trained model eventually reached satisfactory predictions (|relative error| ≤ 20%, standard deviation of relative error ≤ 40%) on 78% of key soymilk PQAs. Future model training using big data may facilitate better prediction of soymilk odor qualities.
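
One plausible reading of the stated acceptance thresholds (|relative error| ≤ 20%, SD of relative error ≤ 40%) is a per-attribute check over the validation pairs; sketched in numpy with made-up numbers (not the study's data):

```python
import numpy as np

def passes_criterion(predicted, observed, re_limit=0.20, sd_limit=0.40):
    """Check one attribute's predictions against the paper's stated thresholds:
    |mean relative error| <= 20% and SD of relative error <= 40%."""
    rel_err = (np.asarray(predicted) - np.asarray(observed)) / np.asarray(observed)
    return bool(abs(rel_err.mean()) <= re_limit and rel_err.std() <= sd_limit)

# 3 validation samples for one hypothetical attribute (e.g. protein content)
print(passes_criterion([9.5, 10.4, 11.0], [10.0, 10.0, 10.0]))  # → True
```

The abstract does not specify whether the 20% bound applies per sample or to the mean; the sketch above uses the mean-and-spread interpretation.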

PMID:38761740 | DOI:10.1016/j.foodchem.2024.139671

Categories: Literature Watch

Automated marine oil spill detection algorithm based on single-image generative adversarial network and YOLO-v8 under small samples

Sat, 2024-05-18 06:00

Mar Pollut Bull. 2024 May 17;203:116475. doi: 10.1016/j.marpolbul.2024.116475. Online ahead of print.

ABSTRACT

As marine resources and transportation develop, oil spill incidents are increasing, endangering marine ecosystems and human lives. Rapidly and accurately identifying marine oil spills is of utmost importance in protecting marine ecosystems. Marine oil spill detection methods based on deep learning and computer vision have great potential to significantly enhance detection efficiency and accuracy, but their performance is often limited by the scarcity of real oil spill samples, making it challenging to train a precise detection model. This study introduces a detection method specifically designed for scenarios with limited sample sizes. First, a small-sample dataset of marine oil spills captured by the Landsat-8 satellite is used as the training set. Then, a single-image generative adversarial network (SinGAN), capable of training on a single oil spill image, is constructed to expand the dataset, generating diverse marine oil spill samples with different shapes. Finally, a YOLO-v8 model is pretrained via transfer learning and then trained separately on the dataset before and after augmentation for real-time and efficient oil spill detection. Experimental results demonstrate that the YOLO-v8 model trained on the expanded dataset exhibits notable enhancements in recall, precision, and average precision, with improvements of 12.3 %, 6.3 %, and 11.3 % respectively, compared to the unexpanded dataset. This reveals that our marine oil spill detection model based on YOLO-v8 exhibits leading or comparable performance in terms of recall, precision, and AP metrics. The data augmentation technique based on SinGAN also contributes to the performance of other popular object detection algorithms.
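The recall and precision figures behind these improvements follow the standard detection definitions over matched detections. A minimal sketch of the comparison; the true/false-positive counts are hypothetical, not the paper's data:

```python
# Standard detection metrics from counts of matched detections.
# tp = correct detections, fp = spurious detections, fn = missed spills.

def precision(tp, fp):
    """Fraction of predicted spills that are real."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of real spills that were detected."""
    return tp / (tp + fn)

# hypothetical counts: model trained without augmentation
p0, r0 = precision(70, 30), recall(70, 30)
# hypothetical counts: model trained on the SinGAN-expanded dataset
p1, r1 = precision(80, 20), recall(80, 15)

print(f"precision gain: {100 * (p1 - p0):.1f} points")
print(f"recall gain: {100 * (r1 - r0):.1f} points")
```

The paper's 12.3 %, 6.3 %, and 11.3 % improvements are differences of exactly this kind between the two trained models.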

PMID:38761680 | DOI:10.1016/j.marpolbul.2024.116475

Categories: Literature Watch

Point based weakly semi-supervised biomarker detection with cross-scale and label assignment in retinal OCT images

Sat, 2024-05-18 06:00

Comput Methods Programs Biomed. 2024 May 15;251:108229. doi: 10.1016/j.cmpb.2024.108229. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Optical coherence tomography (OCT) is currently one of the most advanced retinal imaging methods. Retinal biomarkers in OCT images are of clinical significance and can assist ophthalmologists in diagnosing lesions. Compared with fundus images, OCT can provide higher-resolution segmentation. However, image annotation at the bounding-box level must be performed carefully by ophthalmologists and is difficult to obtain. In addition, the large variation in shape of different retinal biomarkers and their inconspicuous appearance make it difficult for existing deep learning-based methods to detect them effectively. To overcome the above challenges, we propose a novel network for the detection of retinal biomarkers in OCT images.

METHODS: We first address the issue of labeling cost using a novel weakly semi-supervised object detection method with point annotations, which reduces bounding box-level annotation effort. To extend the method to the detection of biomarkers in OCT images, we propose multiple consistent regularizations for the point-to-box regression network to deal with the shortage of supervision, aiming to learn more accurate regression mappings. Furthermore, in the subsequent fully supervised detection, we propose a cross-scale feature enhancement module to alleviate the detection problems caused by the large scale variation of biomarkers. We also propose a dynamic label assignment strategy to distinguish samples of different importance more flexibly, thereby reducing detection errors due to the indistinguishable appearance of the biomarkers.

RESULTS: When using our detection network, our regressor achieves an AP of 20.83 % when utilizing a 5 % fully labeled dataset partition, surpassing the performance of other comparative methods at 5 % and 10 % and even approaching the 20.87 % achieved by Point DETR under 20 % full labeling conditions. When using Group R-CNN as the point-to-box regressor, our detector achieves 27.21 % AP in the 50 % fully labeled dataset experiment, a 7.42 % AP improvement over our detection network baseline, Faster R-CNN.
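For reference, the AP metric quoted here is conventionally the area under an interpolated precision-recall curve computed over score-ranked detections. A minimal pure-Python sketch, with toy detections rather than the paper's data:

```python
# Average precision (AP) as the area under the interpolated
# precision-recall curve, the standard object-detection metric.

def average_precision(detections, n_gt):
    """detections: list of (score, is_true_positive) pairs;
    n_gt: number of ground-truth boxes. Uses monotone interpolation."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    points = []  # (recall, precision) at each rank
    for _, is_tp in detections:
        tp += is_tp
        fp += not is_tp
        points.append((tp / n_gt, tp / (tp + fp)))
    # make precision monotonically non-increasing from the right
    for i in range(len(points) - 2, -1, -1):
        points[i] = (points[i][0], max(points[i][1], points[i + 1][1]))
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# toy ranked detections against 4 ground-truth boxes
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, True), (0.5, False)]
print(round(average_precision(dets, 4), 3))
```

Values such as the 20.83 % and 27.21 % above are this quantity expressed as a percentage, averaged over the biomarker classes.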

CONCLUSIONS: The experimental findings not only demonstrate the effectiveness of our approach with minimal bounding box annotations but also highlight the enhanced biomarker detection performance of the proposed module. We have included a detailed algorithmic flow in the supplementary material.

PMID:38761413 | DOI:10.1016/j.cmpb.2024.108229

Categories: Literature Watch

Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery

Sat, 2024-05-18 06:00

Int J Comput Assist Radiol Surg. 2024 May 18. doi: 10.1007/s11548-024-03166-3. Online ahead of print.

ABSTRACT

PURPOSE: Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers.

METHODS: In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers, i.e., the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability of and benchmark different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) training and evaluation on BernBypass70; (2) training and evaluation on StrasBypass70; (3) training and evaluation on the joint MultiBypass140 dataset; (4) training on BernBypass70, evaluation on StrasBypass70; (5) training on StrasBypass70, evaluation on BernBypass70; (6) training on MultiBypass140, evaluation on BernBypass70; and (7) training on MultiBypass140, evaluation on StrasBypass70.
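The seven train/evaluation configurations can be tabulated explicitly. The sketch below restates the study design as data for clarity; it is an illustration, not the authors' code:

```python
# The seven experimental configurations: (id, training set, evaluation set).
EXPERIMENTS = [
    (1, "BernBypass70",   "BernBypass70"),    # mono-centric
    (2, "StrasBypass70",  "StrasBypass70"),   # mono-centric
    (3, "MultiBypass140", "MultiBypass140"),  # joint training and evaluation
    (4, "BernBypass70",   "StrasBypass70"),   # cross-center transfer
    (5, "StrasBypass70",  "BernBypass70"),    # cross-center transfer
    (6, "MultiBypass140", "BernBypass70"),    # multi-centric training
    (7, "MultiBypass140", "StrasBypass70"),   # multi-centric training
]

for idx, train_set, eval_set in EXPERIMENTS:
    print(f"exp {idx}: train on {train_set}, evaluate on {eval_set}")
```

Reading the results this way makes the contrast explicit: experiments 4-5 probe cross-center transfer of mono-centric models, while 6-7 probe whether multi-centric training closes that gap.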

RESULTS: The model's performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5) confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)).

CONCLUSION: MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.

PMID:38761319 | DOI:10.1007/s11548-024-03166-3

Categories: Literature Watch

Machine learning, deep learning and hernia surgery. Are we pushing the limits of abdominal core health? A qualitative systematic review

Sat, 2024-05-18 06:00

Hernia. 2024 May 18. doi: 10.1007/s10029-024-03069-x. Online ahead of print.

ABSTRACT

INTRODUCTION: This systematic review aims to evaluate the use of machine learning and artificial intelligence in hernia surgery.

METHODS: The PRISMA guidelines were followed throughout this systematic review. The ROBINS-I and Rob 2 tools were used to perform qualitative assessment of all studies included in this review. Recommendations were then summarized for the following pre-defined key items: protocol, research question, search strategy, study eligibility, data extraction, study design, risk of bias, publication bias, and statistical analysis.

RESULTS: A total of 13 articles were ultimately included in this review, describing the use of machine learning and deep learning for hernia surgery. All studies were published from 2020 to 2023. Articles varied regarding the population studied, the type of machine learning or deep learning model (DLM) used, and the hernia type. All thirteen included studies addressed inguinal, ventral, or incisional hernias. Four studies evaluated recognition of surgical steps in inguinal hernia repair videos. Two studies predicted outcomes using image-based DLMs. Seven studies developed and validated deep learning algorithms to predict outcomes and identify factors associated with postoperative complications.

CONCLUSION: The use of machine learning for abdominal wall reconstruction has been shown to be a promising tool for predicting outcomes and identifying factors that could lead to postoperative complications.

PMID:38761300 | DOI:10.1007/s10029-024-03069-x

Categories: Literature Watch
