Deep learning

Recent deep learning-based brain tumor segmentation models using multi-modality magnetic resonance imaging: a prospective survey

Tue, 2024-08-06 06:00

Front Bioeng Biotechnol. 2024 Jul 22;12:1392807. doi: 10.3389/fbioe.2024.1392807. eCollection 2024.

ABSTRACT

Radiologists face significant challenges when segmenting and characterizing brain tumors in patients, because this information assists in treatment planning. The utilization of artificial intelligence (AI), especially deep learning (DL), has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. This empowers radiologists to better understand tumor biology and provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) images has received considerable attention. In this survey, we first discuss the available magnetic resonance imaging modalities and their properties. Subsequently, we discuss the most recent DL-based models for brain tumor segmentation using multi-modal MRI. We divide this section into three parts based on the architecture: the first covers models built on a convolutional neural network (CNN) backbone, the second covers vision transformer-based models, and the third covers hybrid models that combine CNNs and transformers in their architecture. In addition, an in-depth statistical analysis is performed of recent publications, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, open research challenges are identified and promising future directions are suggested for brain tumor segmentation, to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals to use health technologies for better healthcare delivery and population health management.

PMID:39104626 | PMC:PMC11298476 | DOI:10.3389/fbioe.2024.1392807

Categories: Literature Watch

Exploring Unlabeled Data in Multiple Aspects for Semi-Supervised MRI Segmentation

Tue, 2024-08-06 06:00

Health Data Sci. 2024 Aug 5;4:0166. doi: 10.34133/hds.0166. eCollection 2024.

ABSTRACT

Background: MRI segmentation offers crucial insights for automatic analysis. Although deep learning-based segmentation methods have attained cutting-edge performance, their efficacy heavily relies on vast sets of meticulously annotated data. Methods: In this study, we propose a novel semi-supervised MRI segmentation model that is able to explore unlabeled data in multiple aspects based on various semi-supervised learning technologies. Results: We compared the performance of our proposed method with other deep learning-based methods on 2 public datasets, and the results demonstrated that we have achieved Dice scores of 90.3% and 89.4% on the LA and ACDC datasets, respectively. Conclusions: We explored the synergy of various semi-supervised learning technologies for MRI segmentation, and our investigation will inspire research that focuses on designing MRI segmentation models.
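The Dice scores reported above (90.3% and 89.4%) measure the overlap between predicted and ground-truth segmentation masks. A minimal sketch of the metric in Python (an illustration of the standard formula, not the authors' code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 1x4 masks: perfect overlap gives 1.0, partial overlap gives 2/3
a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 0]])
print(round(dice_score(a, a), 3))  # 1.0
print(round(dice_score(a, b), 3))  # 0.667
```

During training, segmentation models typically optimize a soft (differentiable) variant of this score computed on probability maps rather than thresholded masks.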

PMID:39104600 | PMC:PMC11298716 | DOI:10.34133/hds.0166

Categories: Literature Watch

Phenotyping COVID-19 respiratory failure in spontaneously breathing patients with AI on lung CT-scan

Mon, 2024-08-05 06:00

Crit Care. 2024 Aug 5;28(1):263. doi: 10.1186/s13054-024-05046-3.

ABSTRACT

BACKGROUND: Automated analysis of lung computed tomography (CT) scans may help characterize subphenotypes of acute respiratory illness. We integrated lung CT features measured via deep learning with clinical and laboratory data in spontaneously breathing subjects to enhance the identification of COVID-19 subphenotypes.

METHODS: This is a multicenter observational cohort study in spontaneously breathing patients with COVID-19 respiratory failure exposed to early lung CT within 7 days of admission. We explored lung CT images using deep learning approaches to quantitative and qualitative analyses; latent class analysis (LCA) by using clinical, laboratory and lung CT variables; regional differences between subphenotypes following 3D spatial trajectories.

RESULTS: Complete datasets were available in 559 patients. LCA identified two subphenotypes (subphenotype 1 and 2). As compared with subphenotype 2 (n = 403), subphenotype 1 patients (n = 156) were older, had higher inflammatory biomarkers, and were more hypoxemic. Lungs in subphenotype 1 had a higher density gravitational gradient with a greater proportion of consolidated lungs as compared with subphenotype 2. In contrast, subphenotype 2 had a higher density submantellar-hilar gradient with a greater proportion of ground glass opacities as compared with subphenotype 1. Subphenotype 1 showed higher prevalence of comorbidities associated with endothelial dysfunction and higher 90-day mortality than subphenotype 2, even after adjustment for clinically meaningful variables.

CONCLUSIONS: Integrating lung-CT data in a LCA allowed us to identify two subphenotypes of COVID-19, with different clinical trajectories. These exploratory findings suggest a role of automated imaging characterization guided by machine learning in subphenotyping patients with respiratory failure.

TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT04395482. Registration date: 19/05/2020.

PMID:39103945 | DOI:10.1186/s13054-024-05046-3

Categories: Literature Watch

Predictive ability of hypotension prediction index and machine learning methods in intraoperative hypotension: a systematic review and meta-analysis

Mon, 2024-08-05 06:00

J Transl Med. 2024 Aug 5;22(1):725. doi: 10.1186/s12967-024-05481-4.

ABSTRACT

INTRODUCTION: Intraoperative Hypotension (IOH) poses a substantial risk during surgical procedures. The integration of Artificial Intelligence (AI) in predicting IOH holds promise for enhancing detection capabilities, providing an opportunity to improve patient outcomes. This systematic review and meta-analysis explores the intersection of AI and IOH prediction, addressing the crucial need for effective monitoring in surgical settings.

METHOD: A search of PubMed, Scopus, Web of Science, and Embase was conducted. Screening involved two-phase assessments by independent reviewers, ensuring adherence to predefined PICOS criteria. Included studies focused on AI models predicting IOH in any type of surgery. Due to the high number of studies evaluating the hypotension prediction index (HPI), we conducted two sets of meta-analyses: one involving the HPI studies and one including non-HPI studies. In the HPI studies, the following outcomes were analyzed: cumulative duration of IOH per patient, time-weighted average of mean arterial pressure < 65 (TWA-MAP < 65), area under the threshold of mean arterial pressure (AUT-MAP), and area under the receiver operating characteristics curve (AUROC). In the non-HPI studies, we examined the pooled AUROC of all AI models other than HPI.

RESULTS: 43 studies were included in this review. Studies showed significant reductions in IOH duration, TWA-MAP < 65 mmHg, and AUT-MAP < 65 mmHg in groups where HPI was used. HPI algorithms demonstrated strong predictive performance (AUROC = 0.89). Non-HPI models had a pooled AUROC of 0.79 (95% CI: 0.74-0.83).

CONCLUSION: HPI demonstrated excellent ability to predict hypotensive episodes and hence to reduce the duration of hypotension. Other AI models, particularly those based on deep learning methods, also showed strong ability to predict IOH, although their capacity to reduce IOH-related indices such as duration remains unclear.

PMID:39103852 | DOI:10.1186/s12967-024-05481-4

Categories: Literature Watch

An improved data augmentation approach and its application in medical named entity recognition

Mon, 2024-08-05 06:00

BMC Med Inform Decis Mak. 2024 Aug 5;24(1):221. doi: 10.1186/s12911-024-02624-x.

ABSTRACT

Performing data augmentation in medical named entity recognition (NER) is crucial due to the unique challenges posed by this field. Medical data is characterized by high acquisition costs, specialized terminology, imbalanced distributions, and limited training resources. These factors make achieving high performance in medical NER particularly difficult. Data augmentation methods help to mitigate these issues by generating additional training samples, thus balancing data distribution, enriching the training dataset, and improving model generalization. This paper proposes two data augmentation methods-Contextual Random Replacement based on Word2Vec Augmentation (CRR) and Targeted Entity Random Replacement Augmentation (TER)-aimed at addressing the scarcity and imbalance of data in the medical domain. When combined with a deep learning-based Chinese NER model, these methods can significantly enhance performance and recognition accuracy under limited resources. Experimental results demonstrate that both augmentation methods effectively improve the recognition capability of medical named entities. Specifically, the BERT-BiLSTM-CRF model achieved the highest F1 score of 83.587%, representing a 1.49% increase over the baseline model. This validates the importance and effectiveness of data augmentation in medical NER.
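As a rough illustration of the idea behind Targeted Entity Random Replacement (TER), the sketch below swaps labeled entity mentions for other entities of the same type to generate extra training samples. The entity pool, label scheme, and function names here are invented for this example; the paper's Chinese-language procedure will differ in detail:

```python
import random

# Hypothetical same-type entity pool (invented data for illustration)
ENTITY_POOL = {
    "DRUG": ["aspirin", "ibuprofen", "metformin"],
    "DISEASE": ["diabetes", "hypertension", "asthma"],
}

def ter_augment(tokens, labels, rng):
    """Replace each entity-labeled token with a random same-type entity."""
    out = []
    for tok, lab in zip(tokens, labels):
        if lab in ENTITY_POOL:
            out.append(rng.choice(ENTITY_POOL[lab]))  # swap entity mention
        else:
            out.append(tok)                           # keep context tokens
    return out

tokens = ["patient", "takes", "aspirin", "for", "hypertension"]
labels = ["O", "O", "DRUG", "O", "DISEASE"]
print(ter_augment(tokens, labels, random.Random(0)))
```

Because only entity slots are rewritten, the surrounding context and label sequence stay valid, which is what lets the augmented sentences be used directly as extra NER training data.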

PMID:39103849 | DOI:10.1186/s12911-024-02624-x

Categories: Literature Watch

A Kernel Attention-based Transformer Model for Survival Prediction of Heart Disease Patients

Mon, 2024-08-05 06:00

J Cardiovasc Transl Res. 2024 Aug 5. doi: 10.1007/s12265-024-10537-3. Online ahead of print.

ABSTRACT

Survival analysis is employed to scrutinize time-to-event data, with emphasis on comprehending the duration until the occurrence of a specific event. In this article, we introduce two novel survival prediction models: CosAttnSurv and CosAttnSurv + DyACT. CosAttnSurv model leverages transformer-based architecture and a softmax-free kernel attention mechanism for survival prediction. Our second model, CosAttnSurv + DyACT, enhances CosAttnSurv with Dynamic Adaptive Computation Time (DyACT) control, optimizing computation efficiency. The proposed models are validated using two public clinical datasets related to heart disease patients. When compared to other state-of-the-art models, our models demonstrated an enhanced discriminative and calibration performance. Furthermore, in comparison to other transformer architecture-based models, our proposed models demonstrate comparable performance while exhibiting significant reduction in both time and memory requirements. Overall, our models offer significant advancements in the field of survival analysis and emphasize the importance of computationally effective time-based predictions, with promising implications for medical decision-making and patient care.

PMID:39103715 | DOI:10.1007/s12265-024-10537-3

Categories: Literature Watch

Advancements in triple-negative breast cancer sub-typing, diagnosis and treatment with assistance of artificial intelligence : a focused review

Mon, 2024-08-05 06:00

J Cancer Res Clin Oncol. 2024 Aug 6;150(8):383. doi: 10.1007/s00432-024-05903-2.

ABSTRACT

Triple-negative breast cancer (TNBC) is the most aggressive type of breast cancer, with multiple invasive sub-types, and a leading cause of cancer death among women worldwide. The lack of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER-2) expression allows it to spread rapidly and makes treatment challenging, as it does not respond to anti-HER or endocrine therapy. Advanced therapeutic treatments and strategies are therefore needed to achieve better recovery from TNBC. Artificial intelligence (AI) has emerged as a major contributor to the automated diagnosis and treatment of several diseases, particularly TNBC. AI-based TNBC molecular sub-typing, diagnosis, and therapeutic treatment have now seen considerable success. Therefore, the present review surveys recent advancements in the role and assistance of AI, focusing in particular on molecular sub-typing, diagnosis, and treatment of TNBC. The advantages, certain limitations, and future implications of AI assistance in TNBC diagnosis and treatment are also discussed to give readers a full understanding of this issue.

PMID:39103624 | DOI:10.1007/s00432-024-05903-2

Categories: Literature Watch

Deep learning-based detection and semi-quantitative model for spread through air spaces (STAS) in lung adenocarcinoma

Mon, 2024-08-05 06:00

NPJ Precis Oncol. 2024 Aug 5;8(1):173. doi: 10.1038/s41698-024-00664-0.

ABSTRACT

Tumor spread through air spaces (STAS) is a distinctive metastatic pattern affecting prognosis in lung adenocarcinoma (LUAD) patients. Several challenges are associated with STAS detection, including misdetection, low interobserver agreement, and lack of quantitative analysis. In this research, a total of 489 digital whole slide images (WSIs) were collected. The deep learning-based STAS detection model, named STASNet, was constructed to calculate semi-quantitative parameters associated with STAS density and distance. STASNet demonstrated an accuracy of 0.93 for STAS detection at the tile level and had an AUC of 0.72-0.78 for determining the STAS status at the WSI level. Among the semi-quantitative parameters, T10S, combined with the spatial location information, significantly stratified stage I LUAD patients on disease-free survival. Additionally, STASNet was deployed into a real-time pathological diagnostic environment, which boosted the STAS detection rate and led to the identification of three easily misidentified types of occult STAS.

PMID:39103596 | DOI:10.1038/s41698-024-00664-0

Categories: Literature Watch

Heatmap-Based Active Shape Model for Landmark Detection in Lumbar X-ray Images

Mon, 2024-08-05 06:00

J Imaging Inform Med. 2024 Aug 5. doi: 10.1007/s10278-024-01210-x. Online ahead of print.

ABSTRACT

Medical staff inspect lumbar X-ray images to diagnose lumbar spine diseases, and the analysis process is currently automated using deep-learning techniques. The detection of landmarks is necessary in the automatic process of localizing the position and identifying the morphological features of the vertebrae. However, detection errors may occur owing to the noise and ambiguity of images, as well as individual variations in the shape of the lumbar vertebrae. This study proposes a method to improve the robustness of landmark detection results. The method assumes that landmarks are detected by a convolutional neural network-based two-step model consisting of Pose-Net and M-Net. The model generates a heatmap response to indicate the probable landmark positions. The proposed method then corrects the landmark positions using the heatmap response and an active shape model, which employs statistical information on the landmark distribution. Experiments were conducted using 3600 lumbar X-ray images, and the results showed that the proposed method reduced the landmark detection error. The average value of maximum errors decreased by 5.58% after applying the proposed method, which combines the outstanding image analysis capabilities of deep learning with statistical shape constraints on landmark distribution. The proposed method could also be easily integrated with other techniques that increase the robustness of landmark detection results, such as CoordConv layers and non-directional part affinity fields, resulting in a further enhancement of landmark detection performance. These advantages can improve the reliability of automatic systems used to inspect lumbar X-ray images, benefiting both patients and medical staff by reducing medical expenses and increasing diagnostic efficiency.
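The heatmap-to-landmark step described above can be sketched as simple peak extraction. This is an illustrative simplification, not the Pose-Net/M-Net pipeline itself, and the subsequent active-shape-model correction is omitted:

```python
import numpy as np

def heatmap_to_landmark(heatmap: np.ndarray) -> tuple:
    """Return the (row, col) of the peak response in a 2-D heatmap."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(r), int(c)

# Synthetic heatmap with its maximum response at row 3, column 5
h = np.zeros((8, 8))
h[3, 5] = 1.0
print(heatmap_to_landmark(h))  # (3, 5)
```

In a shape-model correction stage, the coordinates extracted this way would then be projected onto the subspace of plausible vertebra shapes learned from the training distribution.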

PMID:39103566 | DOI:10.1007/s10278-024-01210-x

Categories: Literature Watch

Frontiers of machine learning in smart food safety

Mon, 2024-08-05 06:00

Adv Food Nutr Res. 2024;111:35-70. doi: 10.1016/bs.afnr.2024.06.009. Epub 2024 Jun 22.

ABSTRACT

Integration of machine learning (ML) technologies into the realm of smart food safety represents a rapidly evolving field with significant potential to transform the management and assurance of food quality and safety. This chapter will discuss the capabilities of ML across different segments of the food supply chain, encompassing pre-harvest agricultural activities to post-harvest processes and delivery to the consumers. Three specific examples of applying cutting-edge ML to advance food science are detailed in this chapter, including its use to improve beer flavor, using natural language processing to predict food safety incidents, and leveraging social media to detect foodborne disease outbreaks. Despite advances in both theory and practice, application of ML to smart food safety still suffers from issues such as data availability, model reliability, and transparency. Solving these problems can help realize the full potential of ML in food safety. Development of ML in smart food safety is also driven by social and industry impacts. The improvement and implementation of legal policies bring both opportunities and challenges. The future of smart food safety lies in the strategic implementation of ML technologies, navigating social and industry impacts, and adapting to regulatory changes in the AI era.

PMID:39103217 | DOI:10.1016/bs.afnr.2024.06.009

Categories: Literature Watch

Predicting telomerase reverse transcriptase promoter mutation in glioma: A systematic review and diagnostic meta-analysis on machine learning algorithms

Mon, 2024-08-05 06:00

Neuroradiol J. 2024 Aug 5:19714009241269526. doi: 10.1177/19714009241269526. Online ahead of print.

ABSTRACT

BACKGROUND: Glioma is one of the most common primary brain tumors. The presence of the telomerase reverse transcriptase promoter (pTERT) mutation is associated with a better prognosis. This study aims to investigate the TERT mutation in patients with glioma using machine learning (ML) algorithms on radiographic imaging.

METHOD: This study was prepared according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The electronic databases of PubMed, Embase, Scopus, and Web of Science were searched from inception to August 1, 2023. The statistical analysis was performed using the MIDAS package of STATA v.17.

RESULTS: A total of 22 studies involving 5371 patients were included for data extraction, with data synthesis based on 11 reports. The analysis revealed a pooled sensitivity of 0.86 (95% CI: 0.78-0.92) and a specificity of 0.80 (95% CI 0.72-0.86). The positive and negative likelihood ratios were 4.23 (95% CI: 2.99-5.99) and 0.18 (95% CI: 0.11-0.29), respectively. The pooled diagnostic score was 3.18 (95% CI: 2.45-3.91), with a diagnostic odds ratio 24.08 (95% CI: 11.63-49.87). The Summary Receiver Operating Characteristic (SROC) curve had an area under the curve (AUC) of 0.89 (95% CI: 0.86-0.91).
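The pooled quantities above are related by standard diagnostic-accuracy identities. The sketch below computes them naively from the point estimates; the small differences from the reported pooled values (e.g., LR+ of 4.23 vs. ~4.3 here) arise because the meta-analysis pools each quantity with a bivariate model rather than deriving it from marginal sensitivity and specificity:

```python
# Standard diagnostic-accuracy identities, applied to the pooled point
# estimates reported above (naive calculation for illustration only).
sens, spec = 0.86, 0.80

lr_pos = sens / (1 - spec)   # positive likelihood ratio, ~4.3
lr_neg = (1 - sens) / spec   # negative likelihood ratio, ~0.175
dor = lr_pos / lr_neg        # diagnostic odds ratio, ~24.6

print(lr_pos, lr_neg, dor)
```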

CONCLUSION: The study suggests that ML can predict TERT mutation status in glioma patients. ML models showed high sensitivity (0.86) and moderate specificity (0.80), aiding disease prognosis and treatment planning. However, further development and improvement of ML models are necessary for better performance metrics and increased reliability in clinical practice.

PMID:39103206 | DOI:10.1177/19714009241269526

Categories: Literature Watch

Navigating the frontier of drug-like chemical space with cutting-edge generative AI models

Mon, 2024-08-05 06:00

Drug Discov Today. 2024 Aug 3:104133. doi: 10.1016/j.drudis.2024.104133. Online ahead of print.

ABSTRACT

Deep generative models (GMs) have transformed the exploration of drug-like chemical space (CS) by generating novel molecules through complex, nontransparent processes, bypassing direct structural similarity. This review examines five key architectures for CS exploration: recurrent neural networks (RNNs), variational autoencoders (VAEs), generative adversarial networks (GANs), normalizing flows (NF), and transformers. It discusses molecular representation choices, training strategies for focused CS exploration, evaluation criteria for CS coverage, and related challenges. Future directions include refining models, exploring new notations, improving benchmarks, and enhancing interpretability to better understand biologically relevant molecular properties.

PMID:39103144 | DOI:10.1016/j.drudis.2024.104133

Categories: Literature Watch

Fully Automated Hippocampus Segmentation using T2-informed Deep Convolutional Neural Networks

Mon, 2024-08-05 06:00

Neuroimage. 2024 Aug 3:120767. doi: 10.1016/j.neuroimage.2024.120767. Online ahead of print.

ABSTRACT

Hippocampal atrophy (tissue loss) has become a fundamental outcome parameter in clinical trials on Alzheimer's disease. To accurately estimate hippocampus volume and track its volume loss, a robust and reliable segmentation is essential. Manual hippocampus segmentation is considered the gold standard but is labor-intensive, time-consuming, and prone to rater bias. Therefore, it is often replaced by automated programs like FreeSurfer, one of the most commonly used tools in clinical research. Recently, deep learning-based methods have also been successfully applied to hippocampus segmentation. All of these approaches are based on clinically used T1-weighted whole-brain MR images with approximately 1 mm isotropic resolution. However, such T1 images show low contrast-to-noise ratios (CNRs), particularly for many hippocampal substructures, limiting delineation reliability. To overcome these limitations, high-resolution T2-weighted scans are suggested for better visualization and delineation, as they show higher CNRs and usually allow for higher resolutions. Unfortunately, such time-consuming T2-weighted sequences are not feasible in clinical routine. We propose an automated pipeline leveraging deep learning with T2-weighted MR images for enhanced hippocampus segmentation of clinical T1-weighted images, based on a series of 3D convolutional neural networks and a specifically acquired multi-contrast dataset. This dataset consists of corresponding pairs of high-resolution T1- and T2-weighted images, with the T2 images used only to create more accurate manual ground-truth annotations and to train the segmentation network. The T2-based ground-truth labels were also used to evaluate all experiments by comparing the masks visually and by various quantitative measures. We compared our approach with four established state-of-the-art hippocampus segmentation algorithms (FreeSurfer, ASHS, HippoDeep, HippMapp3r) and demonstrated superior segmentation performance. Moreover, we found that the automated segmentation of T1-weighted images benefits from the T2-based ground-truth data. In conclusion, this work shows the beneficial use of high-resolution, T2-based ground-truth data for training an automated, deep learning-based hippocampus segmentation and provides the basis for a reliable estimation of hippocampal atrophy in clinical studies.

PMID:39103064 | DOI:10.1016/j.neuroimage.2024.120767

Categories: Literature Watch

Effectiveness and efficiency: label-aware hierarchical subgraph learning for protein-protein interaction

Mon, 2024-08-05 06:00

J Mol Biol. 2024 Aug 3:168737. doi: 10.1016/j.jmb.2024.168737. Online ahead of print.

ABSTRACT

The study of protein-protein interactions (PPIs) holds immense significance in understanding various biological activities, as well as in drug discovery and disease diagnosis. Existing deep learning methods for PPI prediction, including graph neural networks (GNNs), have been widely employed, yet they often suffer a decline in performance in real-world settings. Based on our analysis, we argue that the topological shortcut is one of the key problems degrading performance. When PPIs are modeled as a graph with proteins as nodes and interactions as edge types, prevailing models tend to learn the pattern of node degrees rather than intrinsic sequence-structure profiles, a problem we term the topological shortcut. In addition, the rapid growth of PPI data incurs intensive computational costs and strains computing devices, making many methods infeasible in practice. To address these problems, we propose a label-aware hierarchical subgraph learning method (laruGL-PPI) that can effectively infer PPIs while remaining interpretable. Specifically, we introduce edge-based subgraph sampling to effectively alleviate the topological shortcut and high computing costs. Besides, the inner-outer connections of PPIs are modeled as a hierarchical graph, together with the dependencies between interaction types constructed by a label graph. Extensive experiments conducted across PPI datasets of various scales have conclusively demonstrated that laruGL-PPI surpasses the most advanced PPI prediction techniques currently available, particularly in the testing of unseen proteins. Our model can also recognize crucial sites of proteins, such as surface sites for binding and active sites for catalysis.

PMID:39102976 | DOI:10.1016/j.jmb.2024.168737

Categories: Literature Watch

Probing the capacity of a spatiotemporal deep learning model for short-term PM(2.5) forecasts in a coastal urban area

Mon, 2024-08-05 06:00

Sci Total Environ. 2024 Aug 3:175233. doi: 10.1016/j.scitotenv.2024.175233. Online ahead of print.

ABSTRACT

Accurate forecasting of fine particulate matter (PM2.5) is crucial for city air pollution control, yet remains challenging owing to the complex urban atmospheric chemical and physical processes. Recently, deep learning has been routinely applied for better urban PM2.5 forecasts. However, its capacity to represent spatiotemporal urban atmospheric processes remains underexplored, especially compared with traditional approaches such as chemistry-transport models (CTMs) and shallow statistical methods. Here we probe this urban-scale representation capacity of a spatiotemporal deep learning (STDL) model for 24-h short-term PM2.5 forecasts at six urban stations in Rizhao, a coastal city in China. Compared with two operational CTMs and three statistical models, the STDL model shows its superiority with improvements in all five evaluation metrics, notably in root mean square error (RMSE) for forecasts at lead times within 12 h, with reductions of 49.8% and 47.8%, respectively. This demonstrates the STDL model's capacity to represent nonlinear small-scale phenomena, such as street-level emissions and urban meteorology, that are in general not well represented in either CTMs or shallow statistical models. This gain in small-scale representation decreases at increasing lead times, leading to RMSEs similar to those of the statistical methods (linear shallow representations) at about 12 h and to those of the CTMs (mesoscale representations) at 24 h. The STDL model performs especially well in winter, when complex urban physical and chemical processes dominate the frequent severe air pollution, and under humidity conditions fostering hygroscopic growth of particles. The DL-based PM2.5 forecasts align with observed trends under various humidity and wind conditions. Such investigation into the potential and limitations of deep learning representation for urban PM2.5 forecasting could inspire further fusion of distinct representations from CTMs and deep networks to break the conventional limits of short-term PM2.5 forecasts.

PMID:39102955 | DOI:10.1016/j.scitotenv.2024.175233

Categories: Literature Watch

A data augmentation procedure to improve detection of spike ripples in brain voltage recordings

Mon, 2024-08-05 06:00

Neurosci Res. 2024 Aug 3:S0168-0102(24)00096-8. doi: 10.1016/j.neures.2024.07.005. Online ahead of print.

ABSTRACT

Epilepsy is a major neurological disorder characterized by recurrent, spontaneous seizures. For patients with drug-resistant epilepsy, treatments include neurostimulation or surgical removal of the epileptogenic zone (EZ), the brain region responsible for seizure generation. Precise targeting of the EZ requires reliable biomarkers. Spike ripples - high-frequency oscillations that co-occur with large-amplitude epileptic discharges - have gained prominence as a candidate biomarker. However, spike ripple detection remains a challenge. The gold-standard approach requires an expert to manually visualize and interpret brain voltage recordings, which limits reproducibility and high-throughput analysis. Addressing these limitations requires more objective, efficient, and automated methods for spike ripple detection, including approaches that utilize deep neural networks. Despite advancements, dataset heterogeneity and scarcity severely limit machine learning performance. Our study explores long short-term memory (LSTM) neural network architectures for spike ripple detection, leveraging data augmentation to improve classifier performance. We highlight the potential of combining training on augmented and in vivo data for enhanced spike ripple detection, ultimately improving diagnostic accuracy in epilepsy treatment.
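Waveform-level augmentation for 1-D voltage traces is commonly done with noise injection, amplitude scaling, and time shifts. The sketch below illustrates that general idea only; the specific procedure, parameters, and function names are assumptions for this example, not the authors' method:

```python
import numpy as np

def augment_trace(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Produce one augmented copy of a 1-D voltage trace."""
    x = x + rng.normal(0.0, 0.05 * x.std(), size=x.shape)  # additive noise
    x = x * rng.uniform(0.8, 1.2)                          # amplitude scaling
    shift = rng.integers(0, len(x))                        # circular time shift
    return np.roll(x, shift)

rng = np.random.default_rng(42)
trace = np.sin(np.linspace(0, 20 * np.pi, 1000))  # toy oscillatory signal
batch = np.stack([augment_trace(trace, rng) for _ in range(8)])
print(batch.shape)  # (8, 1000)
```

Each augmented copy keeps the original length and overall morphology, so the expanded batch can be fed to an LSTM classifier alongside the real recordings.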

PMID:39102943 | DOI:10.1016/j.neures.2024.07.005

Categories: Literature Watch

Machine learning and deep learning tools for the automated capture of cancer surveillance data

Mon, 2024-08-05 06:00

J Natl Cancer Inst Monogr. 2024 Aug 1;2024(65):145-151. doi: 10.1093/jncimonographs/lgae018.

ABSTRACT

The National Cancer Institute and the Department of Energy strategic partnership applies advanced computing and predictive machine learning and deep learning models to automate the capture of information from unstructured clinical text for inclusion in cancer registries. Applications include extraction of key data elements from pathology reports, determination of whether a pathology or radiology report is related to cancer, extraction of relevant biomarker information, and identification of recurrence. With the growing complexity of cancer diagnosis and treatment, capturing essential information with purely manual methods is increasingly difficult. These new methods for applying advanced computational capabilities to automate data extraction represent an opportunity to close critical information gaps and create a nimble, flexible platform on which new information sources, such as genomics, can be added. This will ultimately provide a deeper understanding of the drivers of cancer and outcomes in the population and increase the timeliness of reporting. These advances will enable better understanding of how real-world patients are treated and the outcomes associated with those treatments in the context of our complex medical and social environment.

PMID:39102883 | DOI:10.1093/jncimonographs/lgae018

Categories: Literature Watch

Downgrading BI-RADS categories in ultrasound using strain elastography and computer-aided diagnosis system: a multicenter, prospective study

Mon, 2024-08-05 06:00

Br J Radiol. 2024 Aug 5:tqae136. doi: 10.1093/bjr/tqae136. Online ahead of print.

ABSTRACT

OBJECTIVE: To determine whether adding elastography strain ratio (SR) and a deep learning based computer-aided diagnosis (CAD) system to breast ultrasound (US) can help reclassify Breast Imaging Reporting and Data System (BI-RADS) 3 & 4a-c categories and avoid unnecessary biopsies.

METHODS: This prospective, multicenter study included 1049 masses (691 benign, 358 malignant) assigned BI-RADS 3 or 4a-c between 2020 and 2022. CAD results were dichotomized as possibly malignant vs. benign. All patients underwent SR and CAD examinations, and histopathological findings were the standard of reference. The outcome measures were the reduction of unnecessary biopsies (biopsies in benign lesions) and missed malignancies after reclassification (new BI-RADS 3) with SR and CAD.

RESULTS: Following the routine conventional breast US assessment, 48.6% (336 of 691 benign masses) underwent unnecessary biopsies. After reclassifying BI-RADS 4a masses (SR cut-off < 2.90, CAD dichotomized possibly benign), 25.62% (177 of 691 masses) underwent unnecessary biopsies, corresponding to a 50.14% (177 vs. 355) reduction in unnecessary biopsies. After reclassification, only 1.72% (9 of 523 masses) of the new BI-RADS 3 group were missed malignancies.
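The reclassification rule implied by these results can be expressed as a simple decision function; the function name and interface below are invented for illustration:

```python
# Hypothetical decision rule matching the reported reclassification:
# a BI-RADS 4a mass is downgraded to category 3 when the strain ratio is
# below the 2.90 cut-off AND the CAD output is "possibly benign".
SR_CUTOFF = 2.90

def reclassify(birads: str, strain_ratio: float, cad_benign: bool) -> str:
    if birads == "4a" and strain_ratio < SR_CUTOFF and cad_benign:
        return "3"
    return birads

print(reclassify("4a", 2.1, True))   # "3"  -> biopsy could be deferred
print(reclassify("4a", 3.5, True))   # "4a" -> biopsy still recommended
print(reclassify("4b", 2.1, True))   # "4b" -> rule applies to 4a only
```

Requiring both signals to agree before downgrading is what keeps the missed-malignancy rate in the new BI-RADS 3 group low while still halving unnecessary biopsies.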

CONCLUSION: Adding SR and CAD to clinical practice showed good performance in reclassifying BI-RADS 4a masses to category 3; 50.14% of masses would benefit from an avoided biopsy, while the rate of undetected malignancies remained acceptable at 1.72%.

ADVANCES IN KNOWLEDGE: Combining SR with CAD holds promise for substantially reducing the biopsy rate for BI-RADS 3 and 4a lesions, to the benefit of patients in this group.
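The downgrading rule and outcome measures described in the abstract can be sketched as follows. This is a minimal illustration, not the study's implementation: the SR cut-off (< 2.90) and the dichotomized CAD label come from the abstract, while the data records and field names are hypothetical.

```python
# Hypothetical sketch of the BI-RADS downgrading rule from the abstract.
SR_CUTOFF = 2.90  # elastography strain ratio threshold reported in the study

def downgrade(birads, sr, cad_benign):
    """Reclassify a BI-RADS 4a mass to category 3 when both the strain
    ratio and the CAD system suggest benignity; otherwise keep the category."""
    if birads == "4a" and sr < SR_CUTOFF and cad_benign:
        return "3"
    return birads

# Illustrative masses (not study data): category, strain ratio, CAD label, truth.
masses = [
    {"birads": "4a", "sr": 1.8, "cad_benign": True,  "malignant": False},
    {"birads": "4a", "sr": 3.5, "cad_benign": True,  "malignant": True},
    {"birads": "3",  "sr": 1.2, "cad_benign": True,  "malignant": False},
]
for m in masses:
    m["new_birads"] = downgrade(m["birads"], m["sr"], m["cad_benign"])

# Outcome measures: unnecessary biopsies are benign masses still rated 4a-c
# (biopsy indicated); missed malignancies are cancers now in BI-RADS 3 (follow-up).
unnecessary = sum(1 for m in masses if not m["malignant"] and m["new_birads"] != "3")
missed = sum(1 for m in masses if m["malignant"] and m["new_birads"] == "3")
```

With the illustrative data above, the benign low-SR mass is downgraded to category 3 (biopsy avoided), while the high-SR malignant mass keeps its 4a rating, so no malignancy is missed.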

PMID:39102827 | DOI:10.1093/bjr/tqae136

Categories: Literature Watch

Linked color imaging with artificial intelligence improves the detection of early gastric cancer

Mon, 2024-08-05 06:00

Dig Dis. 2024 Aug 5. doi: 10.1159/000540728. Online ahead of print.

ABSTRACT

INTRODUCTION: Esophagogastroduodenoscopy (EGD) is the most important tool to detect gastric cancer (GC). In this study, we developed a computer-aided detection system (CADe) to detect GC in white light imaging (WLI) and linked color imaging (LCI) modes and aimed to compare the performance of the CADe with that of endoscopists.

METHODS: The system was developed based on the deep learning framework from 9021 images in 385 patients between 2017 and 2020. A total of 116 LCI and WLI videos from 110 patients between 2017 and 2023 were used to evaluate per-case sensitivity and per-frame specificity.

RESULTS: The per-case sensitivity and per-frame specificity of CADe with a confidence level of 0.5 in detecting GC were 78.6% and 93.4% for WLI and 94.0% and 93.3% for LCI, respectively (P < 0.001). The per-case sensitivities of nonexpert endoscopists for WLI and LCI were 45.8% and 80.4%, whereas those of expert endoscopists were 66.7% and 90.6%, respectively. Regarding detectability between CADe and endoscopists, the per-case sensitivities for WLI and LCI were 78.6% and 94.0% in CADe, respectively, which were significantly higher than those for LCI in experts (90.6%, P = 0.004) and those for WLI and LCI in nonexperts (45.8% and 80.4%, respectively, P < 0.0001); however, no significant difference for WLI was observed between CADe and experts (P = 0.134).

CONCLUSIONS: Our CADe system showed a significantly better sensitivity in detecting GC when used in LCI compared with WLI mode. Moreover, the sensitivity of CADe using LCI is significantly higher than those of expert endoscopists using LCI.
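The two evaluation metrics named in this abstract can be made concrete with a short sketch. This is an illustrative definition under the usual reading of these terms, not code from the study; the frame flags below are hypothetical.

```python
# Per-case sensitivity: fraction of cancer videos in which the system
# fired on at least one frame. Per-frame specificity: fraction of
# cancer-free frames on which the system raised no false alarm.

def per_case_sensitivity(cases):
    """cases: one list of per-frame detection flags (0/1) per cancer video."""
    detected = sum(1 for frames in cases if any(frames))
    return detected / len(cases)

def per_frame_specificity(frames):
    """frames: detection flags on frames known to contain no cancer."""
    true_negatives = sum(1 for fired in frames if not fired)
    return true_negatives / len(frames)

# Illustrative data: 2 of 3 cancer videos detected; 1 false alarm in 5 normal frames.
cancer_cases = [[0, 0, 1, 1], [0, 0, 0, 0], [1, 0, 0, 0]]
normal_frames = [0, 0, 1, 0, 0]
```

Evaluating per case rewards detecting each cancer at least once during a video, while evaluating specificity per frame penalizes every spurious alert, which matches how such a system would burden an endoscopist in practice.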

PMID:39102801 | DOI:10.1159/000540728

Categories: Literature Watch

Public participation in healthcare students' education: An umbrella review

Mon, 2024-08-05 06:00

Health Expect. 2024 Feb;27(1):e13974. doi: 10.1111/hex.13974.

ABSTRACT

BACKGROUND: An often-hidden element in healthcare students' education is the pedagogy of public involvement, yet public participation can result in deep learning for students with positive impacts on the public who participate.

OBJECTIVE: This article aimed to synthesize published literature reviews that described the impact of public participation in healthcare students' education.

SEARCH STRATEGY: We searched MEDLINE, EMBASE, ERIC, PsycINFO, CINAHL, PubMed, JBI Database of Systematic Reviews and Implementation Reports, the Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects and the PROSPERO register for literature reviews on public participation in healthcare students' education.

INCLUSION CRITERIA: Reviews published in the last 10 years were included if they described patient or public participation in healthcare students' education and reported the impacts on students, the public, curricula or healthcare systems.

DATA EXTRACTION AND SYNTHESIS: Data were extracted using a predesigned data extraction form and narratively synthesized.

MAIN RESULTS: Twenty reviews met our inclusion criteria reporting on outcomes related to students, the public, curriculum and future professional practice.

DISCUSSION AND CONCLUSION: Our findings raise awareness of the benefits and challenges of public participation in healthcare students' education and may inform future research exploring how public participation can best be utilized in higher education.

PATIENT OR PUBLIC CONTRIBUTION: This review was inspired by conversations with public healthcare consumers who saw value in public participation in healthcare students' education. Studies included involved public participants, providing a deeper understanding of the impacts of public participation in healthcare students' education.

PMID:39102698 | DOI:10.1111/hex.13974

Categories: Literature Watch