Deep learning

Discovery of Potent Selective HDAC6 Inhibitors with 5-Phenyl-1H-indole Fragment: Virtual Screening, Rational Design, and Biological Evaluation

Tue, 2024-07-23 06:00

J Chem Inf Model. 2024 Jul 23. doi: 10.1021/acs.jcim.4c01052. Online ahead of print.

ABSTRACT

Among the HDAC family, histone deacetylase 6 (HDAC6) has attracted extensive attention due to its unique structure and biological functions. Numerous studies have shown that, compared with broad-spectrum HDAC inhibitors, selective HDAC6 inhibitors achieve favorable efficacy in tumor treatment with minimal toxic side effects, demonstrating promising prospects for clinical application. Herein, we carried out rational drug design by integrating a deep learning model, molecular docking, and molecular dynamics simulation to construct a virtual screening workflow. The designed derivatives bearing a 5-phenyl-1H-indole fragment as the cap group showed desirable cytotoxicity against various tumor cell lines, with IC50 values all below 15 μM (ranging from 0.35 to 14.87 μM); among them, compound 5i had the best antiproliferative activity against HL-60 (IC50 = 0.35 ± 0.07 μM) and arrested HL-60 cells in the G0/G1 phase. In addition, 5i exhibited good isoform selectivity, combining potent inhibition of HDAC6 (IC50 = 5.16 ± 0.25 nM) with reduced inhibitory activity against HDAC1 (selectivity index ≈ 124), which was further verified by immunoblotting. Moreover, the representative binding conformation of 5i on HDAC6 was revealed, and the key residues contributing to 5i's binding were identified via free-energy decomposition analysis. The discovery of lead compound 5i also indicates that virtual screening remains a valuable tool in drug discovery and can provide additional molecular scaffolds with research potential for drug design, making it worthy of widespread application.
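
As a quick illustration of what the reported selectivity index means, the short sketch below back-calculates the implied HDAC1 IC50 from the two figures quoted in the abstract; the HDAC1 value is derived here purely for illustration and is not stated in the abstract.

```python
# Back-of-the-envelope check of the selectivity index (SI) reported for compound 5i.
# SI is the ratio of the off-target IC50 (HDAC1) to the target IC50 (HDAC6).
hdac6_ic50_nm = 5.16           # reported IC50 of 5i against HDAC6 (nM)
selectivity_index = 124        # reported HDAC1/HDAC6 selectivity index

# Implied HDAC1 IC50 (illustrative only; not reported explicitly in the abstract).
hdac1_ic50_nm = hdac6_ic50_nm * selectivity_index
print(f"Implied HDAC1 IC50 ≈ {hdac1_ic50_nm / 1000:.2f} µM")   # ≈ 0.64 µM
```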

PMID:39042494 | DOI:10.1021/acs.jcim.4c01052

Categories: Literature Watch

Machine learning-based estimation of patient body weight from radiation dose metrics in computed tomography

Tue, 2024-07-23 06:00

J Appl Clin Med Phys. 2024 Jul 23:e14467. doi: 10.1002/acm2.14467. Online ahead of print.

ABSTRACT

PURPOSE: Currently, precise patient body weight (BW) at the time of diagnostic imaging cannot always be used for radiation dose management. Various methods have been explored to address this issue, including the application of deep learning to medical imaging and BW estimation using scan parameters. This study develops and evaluates machine learning-based BW prediction models using 11 features related to radiation dose obtained from computed tomography (CT) scans.

METHODS: A dataset was obtained from 3996 patients who underwent positron emission tomography CT scans, and training and test sets were established. Dose metrics and descriptive data were automatically calculated from the CT images or obtained from Digital Imaging and Communications in Medicine metadata. Seven machine-learning models and three simple regression models were employed to predict BW using features such as effective diameter (ED), water equivalent diameter (WED), and mean milliampere-seconds. The mean absolute error (MAE) and correlation coefficient between the estimated BW and the actual BW obtained from each BW prediction model were calculated.
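
As a rough illustration of the modeling setup described here (gradient-boosted regression of body weight on scan-derived dose metrics), the sketch below uses LightGBM with placeholder file and column names; it is an assumption-laden sketch, not the authors' pipeline.

```python
import pandas as pd
from lightgbm import LGBMRegressor
from scipy.stats import spearmanr
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder: one row per CT exam with dose-related features named in the abstract
# (water-equivalent diameter, effective diameter, mean mAs, sex) and measured weight.
df = pd.read_csv("ct_dose_features.csv")            # hypothetical file
features = ["wed_mm", "ed_mm", "mean_mas", "sex"]   # hypothetical columns; sex encoded 0/1
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["body_weight_kg"], test_size=0.2, random_state=0
)

model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rho, _ = spearmanr(y_test, pred)
print("MAE (kg):", mean_absolute_error(y_test, pred))
print("Correlation (rho):", rho)
```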

RESULTS: The highest accuracy was obtained using a light gradient-boosting machine model, which had an MAE of 1.99 kg and a strong positive correlation between estimated and actual BW (ρ = 0.972). The model demonstrated strong predictive power, with 73% of patients falling within a ±5% error range. WED emerged as the most relevant dose metric for BW estimation, followed by ED and sex.

CONCLUSIONS: The proposed machine-learning approach is superior to existing methods, with high accuracy and applicability to radiation dose management. The model's reliance on universal dose metrics that are accessible through radiation dose management software enhances its practicality. In conclusion, this study presents a robust approach for BW estimation based on CT imaging that can potentially improve radiation dose management practices in clinical settings.

PMID:39042480 | DOI:10.1002/acm2.14467

Categories: Literature Watch

Construction of a Multi-Label Classifier for Extracting Multiple Incident Factors From Medication Incident Reports in Residential Care Facilities: Natural Language Processing Approach

Tue, 2024-07-23 06:00

JMIR Med Inform. 2024 Jul 23;12:e58141. doi: 10.2196/58141.

ABSTRACT

BACKGROUND: Medication safety in residential care facilities is a critical concern, particularly when nonmedical staff provide medication assistance. The complex nature of medication-related incidents in these settings, coupled with the psychological impact on health care providers, underscores the need for effective incident analysis and preventive strategies. A thorough understanding of the root causes, typically through incident-report analysis, is essential for mitigating medication-related incidents.

OBJECTIVE: We aimed to develop and evaluate a multilabel classifier using natural language processing to identify factors contributing to medication-related incidents using incident report descriptions from residential care facilities, with a focus on incidents involving nonmedical staff.

METHODS: We analyzed 2143 incident reports, comprising 7121 sentences, from residential care facilities in Japan between April 1, 2015, and March 31, 2016. The incident factors were annotated using sentences based on an established organizational factor model and previous research findings. The following 9 factors were defined: procedure adherence, medicine, resident, resident family, nonmedical staff, medical staff, team, environment, and organizational management. To assess the label criteria, 2 researchers with relevant medical knowledge annotated a subset of 50 reports; the interannotator agreement was measured using Cohen κ. The entire data set was subsequently annotated by 1 researcher. Multiple labels were assigned to each sentence. A multilabel classifier was developed using deep learning models, including 2 Bidirectional Encoder Representations From Transformers (BERT)-type models (Tohoku-BERT and a University of Tokyo Hospital BERT pretrained with Japanese clinical text: UTH-BERT) and an Efficiently Learning Encoder That Classifies Token Replacements Accurately (ELECTRA), pretrained on Japanese text. Both sentence- and report-level training were performed; the performance was evaluated by the F1-score and exact match accuracy through 5-fold cross-validation.
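
To make the multilabel setup concrete, here is a minimal sketch of fine-tuning a BERT-type encoder for multilabel classification with Hugging Face Transformers, assuming the 7 retained labels and report-level input text; the checkpoint name, example text, and thresholds are placeholders, not the authors' configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["procedure_adherence", "medicine", "resident", "nonmedical_staff",
          "team", "environment", "organizational_management"]  # 7 labels retained

# Placeholder checkpoint (a public Japanese BERT; requires fugashi/ipadic for tokenization).
# Not necessarily the exact Tohoku-BERT / UTH-BERT / ELECTRA models used in the study.
checkpoint = "cl-tohoku/bert-base-japanese"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCE-with-logits loss, one output per label
)

report = "Example incident report text ..."             # placeholder report
targets = torch.tensor([[1., 0., 1., 1., 0., 0., 0.]])  # multi-hot label vector

enc = tokenizer(report, truncation=True, padding=True, return_tensors="pt")
out = model(**enc, labels=targets)                       # returns loss and logits
probs = torch.sigmoid(out.logits)                        # per-label probabilities
predicted = (probs > 0.5).int()                          # threshold for exact-match accuracy
print(out.loss.item(), predicted.tolist())
```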

RESULTS: Among all 7121 sentences, 1167, 694, 2455, 23, 1905, 46, 195, 1104, and 195 included "procedure adherence," "medicine," "resident," "resident family," "nonmedical staff," "medical staff," "team," "environment," and "organizational management," respectively. Owing to limited labels, "resident family" and "medical staff" were omitted from the model development process. The interannotator agreement values were higher than 0.6 for each label. A total of 10, 278, and 1855 reports contained no, 1, and multiple labels, respectively. The models trained using the report data outperformed those trained using sentences, with macro F1-scores of 0.744, 0.675, and 0.735 for Tohoku-BERT, UTH-BERT, and ELECTRA, respectively. The report-trained models also demonstrated better exact match accuracy, with 0.411, 0.389, and 0.399 for Tohoku-BERT, UTH-BERT, and ELECTRA, respectively. Notably, the accuracy was consistent even when the analysis was confined to reports containing multiple labels.

CONCLUSIONS: The multilabel classifier developed in our study demonstrated potential for identifying various factors associated with medication-related incidents using incident reports from residential care facilities. Thus, this classifier can facilitate prompt analysis of incident factors, thereby contributing to risk management and the development of preventive strategies.

PMID:39042454 | DOI:10.2196/58141

Categories: Literature Watch

EBC-Net: 3D semi-supervised segmentation of pancreas based on edge-biased consistency regularization in dual perturbation space

Tue, 2024-07-23 06:00

Med Phys. 2024 Jul 23. doi: 10.1002/mp.17323. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning technology has made remarkable progress in pancreatic image segmentation tasks. However, annotating 3D medical images is time-consuming and requires expertise, and existing semi-supervised segmentation methods perform poorly when segmenting organs with blurred edges on contrast-enhanced CT, such as the pancreas.

PURPOSE: To address the challenges of limited labeled data and indistinct boundaries of regions of interest (ROI).

METHODS: We propose Edge-Biased Consistency Regularization (EBC-Net). 3D edge detection is employed to construct edge perturbations and integrate edge prior information into the limited labeled data, aiding the network in learning from unlabeled data. In addition, because a single perturbation space is one-sided, we expand to a dual-level perturbation space spanning both images and features, focusing the model's attention more efficiently on the edges of the ROI. Finally, inspired by the clinical habits of doctors, we propose a 3D Anatomical Invariance Extraction Module and Anatomical Attention to capture anatomy-invariant features.
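
For readers unfamiliar with consistency regularization, the sketch below shows the generic idea of enforcing agreement between predictions under an image-space perturbation and a feature-space perturbation on unlabeled volumes. It is a simplified stand-in, not the paper's edge-biased scheme; the noise level and the use of dropout as the feature perturbation are assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_vol, noise_std=0.05):
    """Unsupervised consistency between two perturbation spaces (generic sketch)."""
    # Image-space perturbation: additive Gaussian noise on the input volume.
    noisy = unlabeled_vol + noise_std * torch.randn_like(unlabeled_vol)
    pred_image_pert = model(noisy)

    # Feature-space perturbation: rely on dropout layers inside the network being
    # active (model.train()), so a second pass yields a stochastically perturbed output.
    pred_feature_pert = model(unlabeled_vol)

    # Push the two softmax predictions to agree on unlabeled data.
    return F.mse_loss(torch.softmax(pred_image_pert, dim=1),
                      torch.softmax(pred_feature_pert, dim=1))
```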

RESULTS: Extensive experiments demonstrate that our method outperforms state-of-the-art methods in semi-supervised pancreas image segmentation. Moreover, it better preserves the morphology of the pancreas and achieves higher accuracy in edge regions.

CONCLUSIONS: Incorporated with edge prior knowledge, our method mixes disturbances in dual-perturbation space, which shifts the network's attention to the fuzzy edge region using a few labeled samples. These ideas have been verified on the pancreas segmentation dataset.

PMID:39042373 | DOI:10.1002/mp.17323

Categories: Literature Watch

Deep learning for accelerated and robust MRI reconstruction

Tue, 2024-07-23 06:00

MAGMA. 2024 Jul 23. doi: 10.1007/s10334-024-01173-8. Online ahead of print.

ABSTRACT

Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.

PMID:39042206 | DOI:10.1007/s10334-024-01173-8

Categories: Literature Watch

Unveil sleep spindles with concentration of frequency and time (ConceFT)

Tue, 2024-07-23 06:00

Physiol Meas. 2024 Jul 23. doi: 10.1088/1361-6579/ad66aa. Online ahead of print.

ABSTRACT

OBJECTIVE: Sleep spindles contain crucial brain dynamics information. We introduce the novel non-linear time-frequency analysis tool "Concentration of Frequency and Time" (ConceFT) to create an interpretable automated algorithm for sleep spindle annotation in EEG data and to measure spindle instantaneous frequencies (IFs).

APPROACH: ConceFT effectively reduces stochastic EEG influence, enhancing spindle visibility in the time-frequency representation. Our automated spindle detection algorithm, ConceFT-Spindle (ConceFT-S), is compared to A7 (non-deep learning) and SUMO (deep learning) using the Dream and MASS benchmark databases. We also quantify spindle IF dynamics.

MAIN RESULTS: ConceFT-S achieves F1 scores of 0.765 in Dream and 0.791 in MASS, which surpass A7 and SUMO. We reveal that spindle IF is generally nonlinear.

SIGNIFICANCE: ConceFT offers an accurate, interpretable EEG-based sleep spindle detection algorithm and enables spindle IF quantification.

PMID:39042095 | DOI:10.1088/1361-6579/ad66aa

Categories: Literature Watch

Ultra-low dose hip CT-based automated measurement of volumetric bone mineral density at proximal femoral subregions

Tue, 2024-07-23 06:00

Med Phys. 2024 Jul 23. doi: 10.1002/mp.17319. Online ahead of print.

ABSTRACT

BACKGROUND: Forty to fifty percent of women and 13%-22% of men experience an osteoporosis-related fragility fracture in their lifetimes. After the age of 50 years, the risk of hip fracture doubles every 10 years. X-ray-based DXA is currently used clinically to diagnose osteoporosis and predict fracture risk. However, it provides only a 2-D representation of bone and is associated with other technical limitations. Thus, alternative methods are needed.

PURPOSE: To develop and evaluate an ultra-low dose (ULD) hip CT-based automated method for assessment of volumetric bone mineral density (vBMD) at proximal femoral subregions.

METHODS: An automated method was developed to segment the proximal femur in ULD hip CT images and delineate femoral subregions. The computational pipeline consists of deep learning (DL)-based computation of femur likelihood map followed by shape model-based femur segmentation and finite element analysis-based warping of a reference subregion labeling onto individual femur shapes. Finally, vBMD is computed over each subregion in the target image using a calibration phantom scan. A total of 100 participants (50 females) were recruited from the Genetic Epidemiology of COPD (COPDGene) study, and ULD hip CT imaging, equivalent to 18 days of background radiation received by U.S. residents, was performed on each participant. Additional hip CT imaging using a clinical protocol was performed on 12 participants and repeat ULD hip CT was acquired on another five participants. ULD CT images from 80 participants were used to train the DL network; ULD CT images of the remaining 20 participants as well as clinical and repeat ULD CT images were used to evaluate the accuracy, generalizability, and reproducibility of segmentation of femoral subregions. Finally, clinical CT and repeat ULD CT images were used to evaluate accuracy and reproducibility of ULD CT-based automated measurements of femoral vBMD.
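
The final step described here, converting CT numbers inside each subregion mask into volumetric BMD using a calibration phantom scanned with the patient, is commonly a linear mapping. Below is a minimal sketch of that step with illustrative phantom HU and density values; these numbers and the function names are assumptions, not the study's calibration.

```python
import numpy as np

# Mean HU measured in each calibration-phantom rod and its known density (mg/cm^3).
# Values are illustrative placeholders, not from the study.
phantom_hu = np.array([-5.0, 55.0, 120.0, 205.0])
phantom_density = np.array([0.0, 75.0, 150.0, 250.0])

# Linear calibration: density = slope * HU + intercept.
slope, intercept = np.polyfit(phantom_hu, phantom_density, deg=1)

def regional_vbmd(ct_volume, subregion_mask):
    """Mean volumetric BMD (mg/cm^3) over one femoral subregion."""
    hu = ct_volume[subregion_mask > 0]
    return float(np.mean(slope * hu + intercept))
```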

RESULTS: Dice scores of accuracy (n = 20), reproducibility (n = 5), and generalizability (n = 12) of ULD CT-based automated subregion segmentation were 0.990, 0.982, and 0.977, respectively, for the femoral head and 0.941, 0.970, and 0.960, respectively, for the femoral neck. ULD CT-based regional vBMD showed Pearson and concordance correlation coefficients of 0.994 and 0.977, respectively, and a root-mean-square coefficient of variation (RMSCV) of 1.39% against the clinical CT-derived reference measure. Rounded to three digits, the Pearson, concordance, and intraclass correlation coefficients (ICC) between baseline and repeat scans were all 0.996, with an RMSCV of 0.72%. Results of ULD CT-based bone analysis on 100 participants (age (mean ± SD) 73.6 ± 6.6 years) show that males have significantly greater (p < 0.01) vBMD at the femoral head and trochanteric regions than females, while females have moderately greater vBMD (p = 0.05) at the medial half of the femoral neck than males.

CONCLUSION: Deep learning, combined with shape model and finite element analysis, offers an accurate, reproducible, and generalizable algorithm for automated segmentation of the proximal femur and anatomic femoral subregions using ULD hip CT images. ULD CT-based regional measures of femoral vBMD are accurate and reproducible and demonstrate regional differences between males and females.

PMID:39042053 | DOI:10.1002/mp.17319

Categories: Literature Watch

Weakly Supervised Deep Learning in Radiology

Tue, 2024-07-23 06:00

Radiology. 2024 Jul;312(1):e232085. doi: 10.1148/radiol.232085.

ABSTRACT

Deep learning (DL) is currently the standard artificial intelligence tool for computer-based image analysis in radiology. Traditionally, DL models have been trained with strongly supervised learning methods. These methods depend on reference standard labels, typically applied manually by experts. In contrast, weakly supervised learning is more scalable. Weak supervision comprises situations in which only a portion of the data are labeled (incomplete supervision), labels refer to a whole region or case as opposed to a precisely delineated image region (inexact supervision), or labels contain errors (inaccurate supervision). In many applications, weak labels are sufficient to train useful models. Thus, weakly supervised learning can unlock a large amount of otherwise unusable data for training DL models. One example of this is using large language models to automatically extract weak labels from free-text radiology reports. Here, we outline the key concepts in weakly supervised learning and provide an overview of applications in radiologic image analysis. With more fundamental and clinical translational work, weakly supervised learning could facilitate the uptake of DL in radiology and research workflows by enabling large-scale image analysis and advancing the development of new DL-based biomarkers.
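
To make the idea of weak labels concrete, the sketch below derives a case-level label from free-text report wording with simple pattern matching; such labels are inexact (they apply to the whole case) and possibly inaccurate. The review points to large language models for this extraction step, so the finding name and regex patterns here are purely illustrative assumptions.

```python
import re

# Illustrative keyword patterns mapping report text to a weak case-level label.
# "Pneumothorax" is a hypothetical example finding, not taken from the article.
PATTERNS = {
    "positive": re.compile(r"\bpneumothorax\b", re.I),
    "negated": re.compile(r"\bno (evidence of )?pneumothorax\b", re.I),
}

def weak_label(report_text: str) -> int:
    """1 = finding present, 0 = absent, based only on report wording (weak label)."""
    if PATTERNS["negated"].search(report_text):
        return 0
    return 1 if PATTERNS["positive"].search(report_text) else 0

print(weak_label("Impression: small apical pneumothorax on the right."))  # 1
print(weak_label("No evidence of pneumothorax or effusion."))             # 0
```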

PMID:39041937 | DOI:10.1148/radiol.232085

Categories: Literature Watch

Identifying and training deep learning neural networks on biomedical-related datasets

Tue, 2024-07-23 06:00

Brief Bioinform. 2024 Jul 23;25(Supplement_1):bbae232. doi: 10.1093/bib/bbae232.

ABSTRACT

This manuscript describes the development of a resources module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical-related datasets are widely used in both research and clinical settings, but the ability of professionally trained clinicians and researchers to interpret datasets becomes difficult as the size and breadth of these datasets increase. Artificial intelligence, and specifically deep learning neural networks, have recently become an important tool in novel biomedical research. However, use is limited due to their computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. This module is subdivided into four submodules that cover classification, augmentation, segmentation, and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease of use of cloud computing for implementing neural networks.

PMID:39041915 | DOI:10.1093/bib/bbae232

Categories: Literature Watch

Artificial Intelligence Applications in Oral Cancer and Oral Dysplasia

Tue, 2024-07-23 06:00

Tissue Eng Part A. 2024 Jul 23. doi: 10.1089/ten.TEA.2024.0096. Online ahead of print.

ABSTRACT

Oral squamous cell carcinoma (OSCC) is a highly unpredictable disease with devastating mortality rates that have not changed over the past decades, despite advancements in treatments and biomarkers that have improved survival for other cancers. Delays in diagnosis are frequent, leading to more disfiguring treatments and poor outcomes in patients. The clinical challenge lies in identifying those patients at highest risk for developing OSCC. Oral epithelial dysplasia (OED) is a precursor of OSCC with highly variable behavior across patients. There is no reliable clinical, pathologic, histologic, or molecular biomarker to determine individual risk in OED patients. Similarly, there are no robust biomarkers to predict treatment outcomes or mortality of OSCC patients. This review aims to highlight advancements in artificial intelligence (AI)-based methods to develop predictive biomarkers of OED transformation to OSCC or predictive biomarkers of OSCC mortality and treatment response. Machine learning-based biomarkers, such as S100A7, show promise for assessing the risk of malignant transformation of OED. Machine learning-enhanced multiplex immunohistochemistry (mIHC) workflows examine immune cell patterns and organization within the tumor immune microenvironment to generate outcome predictions in immunotherapy. Deep learning (DL) is an AI-based method using an extended neural network or related architecture with multiple "hidden" layers of simulated neurons to combine simple visual features into complex patterns. DL-based digital pathology is currently being developed to assess OED and OSCC outcomes. The integration of machine learning in epigenomics aims to examine the epigenetic modification of diseases and improve our ability to detect, classify, and predict outcomes associated with epigenetic marks. Collectively, these tools showcase promising advancements in discovery and technology, which may provide a potential solution to the current limitations in predicting OED transformation and OSCC behavior, both of which are clinical challenges that must be addressed to improve OSCC survival.

PMID:39041628 | DOI:10.1089/ten.TEA.2024.0096

Categories: Literature Watch

The utility of artificial intelligence in identifying radiological evidence of lung cancer and pulmonary tuberculosis in a high-burden tuberculosis setting

Tue, 2024-07-23 06:00

S Afr Med J. 2024 May 31;114(6):e1846. doi: 10.7196/SAMJ.2024.v114i6.1846.

ABSTRACT

BACKGROUND: Artificial intelligence (AI), using deep learning (DL) systems, can be utilised to detect radiological changes of various pulmonary diseases. Settings with a high burden of tuberculosis (TB) and people living with HIV can potentially benefit from the use of AI to augment resource-constrained healthcare systems.

OBJECTIVE: To assess the utility of qXR software (AI) in detecting radiological changes compatible with lung cancer or pulmonary TB (PTB).

METHODS: We performed an observational study in a tertiary institution that serves a population with a high burden of lung cancer and PTB. In total, 382 chest radiographs that had a confirmed diagnosis were assessed: 127 with lung cancer, 144 with PTB and 111 normal. These chest radiographs were de-identified and randomly uploaded by a blinded investigator into qXR software. The output was generated as probability scores from predefined threshold values.

RESULTS: The overall sensitivity of qXR in detecting lung cancer was 84% (95% confidence interval (CI) 80 - 87%), with a specificity of 91% (95% CI 84 - 96%) and a positive predictive value of 97% (95% CI 95 - 99%). For PTB, it had a sensitivity of 90% (95% CI 87 - 93%), a specificity of 79% (95% CI 73 - 84%), and a negative predictive value of 85% (95% CI 79 - 91%).
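
The reported metrics follow the standard confusion-matrix definitions; below is a minimal sketch of computing them from binary ground-truth and prediction arrays. The example arrays are placeholders, not the study data.

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, and NPV from binary labels (1 = disease present)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Placeholder example: 1 = finding present on the radiograph, 0 = not present.
print(screening_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```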

CONCLUSION: The qXR software was sensitive and specific in categorising chest radiographs as consistent with lung cancer or TB, and can potentially aid in the earlier detection and management of these diseases.

PMID:39041503 | DOI:10.7196/SAMJ.2024.v114i6.1846

Categories: Literature Watch

Phenotypic drug discovery promotes research and development of innovative drugs based on traditional Chinese medicine

Tue, 2024-07-23 06:00

Zhongguo Zhong Yao Za Zhi. 2024 Jun;49(12):3125-3131. doi: 10.19540/j.cnki.cjcmm.20240222.601.

ABSTRACT

Traditional Chinese medicine with rich resources in China and definite therapeutic effects on complex diseases demonstrates great development potential. However, the complex composition, the unclear pharmacodynamic substances and mechanisms of action, and the lack of reasonable methods for evaluating clinical safety and efficacy have limited the research and development of innovative drugs based on traditional Chinese medicine. The progress in cutting-edge disciplines such as artificial intelligence and biomimetics, especially the emergence of cell painting and organ-on-a-chip, helps to identify and characterize the active ingredients in traditional Chinese medicine based on the changes in model characteristics, thus providing more accurate guidance for the development and application of traditional Chinese medicine. The application of phenotypic drug discovery in the research and development of innovative drugs based on traditional Chinese medicine is gaining increasing attention. In recent years, the technology for phenotypic drug discovery keeps advancing, which improves the early discovery rate of new drugs and the success rate of drug research and development. Accordingly, phenotypic drug discovery gradually becomes a key tool for the research on new drugs. This paper discusses the enormous potential of traditional Chinese medicine in the discovery and development of innovative drugs and illustrates how the application of phenotypic drug discovery, supported by cutting-edge technologies such as cell painting, deep learning, and organ-on-a-chip, propels traditional Chinese medicine into a new stage of development.

PMID:39041072 | DOI:10.19540/j.cnki.cjcmm.20240222.601

Categories: Literature Watch

GAN-Based Motion Artifact Correction of 3D MR Volumes Using an Image-to-Image Translation Algorithm

Tue, 2024-07-23 06:00

Proc SPIE Int Soc Opt Eng. 2024 Feb;12930:1293024. doi: 10.1117/12.3007743. Epub 2024 Apr 2.

ABSTRACT

The quality of brain MRI volumes is often compromised by motion artifacts arising from intricate respiratory patterns and involuntary head movements, manifesting as blurring and ghosting that markedly degrade imaging quality. In this study, we introduce an innovative approach employing a 3D deep learning framework to restore brain MR volumes afflicted by motion artifacts. The framework integrates a densely connected 3D U-net architecture augmented by generative adversarial network (GAN)-informed training with a novel volumetric reconstruction loss function tailored to 3D GAN to enhance the quality of the volumes. Our methodology is substantiated through comprehensive experimentation involving a diverse set of motion artifact-affected MR volumes. The generated high-quality MR volumes have volumetric signatures comparable to those of motion-free MR volumes after motion correction. This underscores the significant potential of harnessing this 3D deep learning system to aid in the rectification of motion artifacts in brain MR volumes, highlighting a promising avenue for advanced clinical applications.
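
As a rough illustration of GAN-informed training with a reconstruction term, the sketch below performs one discriminator and one generator update. The generator and discriminator modules, the L1 reconstruction term, and the loss weight are placeholders standing in for the paper's densely connected 3D U-net and its tailored volumetric reconstruction loss.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt,
               corrupted_vol, clean_vol, lambda_rec=100.0):
    """One generator/discriminator update for motion-artifact correction (sketch)."""
    # --- Discriminator: distinguish real (motion-free) from generated volumes ---
    d_opt.zero_grad()
    fake = generator(corrupted_vol).detach()
    d_real = discriminator(clean_vol)
    d_fake = discriminator(fake)
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # --- Generator: fool the discriminator + volumetric reconstruction term ---
    g_opt.zero_grad()
    restored = generator(corrupted_vol)
    adv_logits = discriminator(restored)
    adv = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))
    rec = F.l1_loss(restored, clean_vol)   # simple stand-in for the paper's loss
    g_loss = adv + lambda_rec * rec
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```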

PMID:39041007 | PMC:PMC11262355 | DOI:10.1117/12.3007743

Categories: Literature Watch

Siam-VAE: A hybrid deep learning based anomaly detection framework for automated quality control of head CT scans

Tue, 2024-07-23 06:00

Proc SPIE Int Soc Opt Eng. 2023 Feb;12465:124650X. doi: 10.1117/12.2654464. Epub 2023 Apr 7.

ABSTRACT

An automated quality control (QC) system is essential to ensure streamlined head computed tomography (CT) scan interpretations that do not affect subsequent image analysis. Such a system is advantageous compared to current human QC protocols, which are subjective and time-consuming. In this work, we aim to develop a deep learning-based framework to classify a scan as being of usable or unusable quality. Supervised deep learning models have been highly effective in classification tasks, but they are complex and require large annotated datasets for effective training. Additional challenges with QC datasets include (1) class imbalance, as usable cases far exceed unusable ones, and (2) weak labels, as scan-level labels may not match slice-level labels. The proposed framework utilizes these weak labels to augment a standard anomaly detection technique. Specifically, we propose a hybrid model that consists of a variational autoencoder (VAE) and a Siamese Neural Network (SNN). While the VAE is trained to learn how usable scans appear and reconstruct an input scan, the SNN compares how similar this input scan is to its reconstruction and flags the ones that are less similar than a threshold. The proposed method is better suited to capture the differences in non-linear feature structure between the two classes of data than typical anomaly detection methods that depend on intensity-based metrics such as root mean square error (RMSE). Comparison with state-of-the-art anomaly detection methods using multiple classification metrics establishes the superiority of the proposed framework in flagging inferior-quality scans for review by radiologists, thus reducing their workload and establishing a reliable and consistent dataflow.
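
The flagging rule described here can be sketched as follows: a reconstruction model trained only on usable scans, plus a similarity score between the input and its reconstruction that is thresholded to flag likely-unusable scans. In this minimal sketch the similarity is plain cosine similarity in a feature space and the modules and threshold are placeholders; the actual framework learns this comparison with a Siamese network.

```python
import torch
import torch.nn.functional as F

def flag_unusable(vae, embed, scan, threshold=0.90):
    """Flag a scan whose reconstruction is 'too different' from the input (sketch).

    vae   : autoencoder trained only on usable-quality scans (assumed to return
            the reconstruction)
    embed : feature extractor standing in for the learned Siamese branch
    """
    with torch.no_grad():
        reconstruction = vae(scan)
        # Similarity between input and reconstruction in feature space
        # (the paper learns this comparison with a Siamese network).
        sim = F.cosine_similarity(embed(scan).flatten(1),
                                  embed(reconstruction).flatten(1), dim=1)
    return sim < threshold   # True -> flag for radiologist review
```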

PMID:39040978 | PMC:PMC11262463 | DOI:10.1117/12.2654464

Categories: Literature Watch

Editorial: Innovative methods for sleep staging using neuroinformatics

Tue, 2024-07-23 06:00

Front Neuroinform. 2024 Jul 8;18:1448591. doi: 10.3389/fninf.2024.1448591. eCollection 2024.

NO ABSTRACT

PMID:39040838 | PMC:PMC11262009 | DOI:10.3389/fninf.2024.1448591

Categories: Literature Watch

Artificial Intelligence (AI)-Based Detection of Anaemia Using the Clinical Appearance of the Gingiva

Tue, 2024-07-23 06:00

Cureus. 2024 Jun 20;16(6):e62792. doi: 10.7759/cureus.62792. eCollection 2024 Jun.

ABSTRACT

Background and aim Millions suffer from anaemia worldwide, and systemic disorders like anaemia harm oral health. Anaemia is linked to periodontitis, as certain inflammatory cytokines produced during periodontal inflammation can depress erythropoietin production, leading to the development of anaemia. Thus, detecting and treating it is crucial to tooth health. Hence, this study aimed to evaluate three different machine-learning approaches for the automated detection of anaemia using clinical intraoral pictures of a patient's gingiva. Methodology Orange was employed with SqueezeNet embedding models for machine learning. Using 300 intraoral clinical photographs of patients' gingiva, logistic regression, neural network, and naive Bayes classifiers were trained and tested for prediction and detection. Accuracy was measured using a confusion matrix and receiver operating characteristic (ROC) curve. Results In the present study, three convolutional neural network (CNN)-embedded machine-learning algorithms detected and predicted anaemia. For anaemia identification, naive Bayes had an area under the curve (AUC) of 0.77, random forest 0.78, and logistic regression 0.85. Thus, the three machine-learning methods detected anaemia with 77%, 78%, and 85% accuracy, respectively. Conclusion Using artificial intelligence (AI) with clinical intraoral gingiva images can accurately predict and detect anaemia. These findings need to be confirmed with larger samples and additional imaging modalities.
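
Outside the Orange GUI, the same pattern (SqueezeNet image embeddings fed to a logistic-regression classifier, evaluated by AUC) can be sketched as below. The image tensors and labels are random placeholders, and using torchvision's SqueezeNet as a fixed feature extractor is an assumption about the workflow, not the authors' exact setup.

```python
import torch
from torchvision.models import squeezenet1_1
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Pretrained SqueezeNet as a fixed feature extractor (classifier head removed).
backbone = squeezenet1_1(weights="DEFAULT")
backbone.classifier = torch.nn.Identity()
backbone.eval()

def embed(images):                     # images: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return backbone(images).numpy()

# Placeholders for preprocessed gingiva photographs and anaemia labels.
train_imgs, test_imgs = torch.rand(40, 3, 224, 224), torch.rand(10, 3, 224, 224)
train_y, test_y = [0, 1] * 20, [0, 1] * 5

clf = LogisticRegression(max_iter=1000).fit(embed(train_imgs), train_y)
print("AUC:", roc_auc_score(test_y, clf.predict_proba(embed(test_imgs))[:, 1]))
```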

PMID:39040750 | PMC:PMC11260651 | DOI:10.7759/cureus.62792

Categories: Literature Watch

Multicell-Fold: geometric learning in folding multicellular life

Tue, 2024-07-23 06:00

ArXiv [Preprint]. 2024 Jul 9:arXiv:2407.07055v1.

ABSTRACT

How a group of cells folds into specific structures during developmental processes such as embryogenesis is a central question in biology that defines how living organisms form. Establishing tissue-level morphology critically relies on how every single cell decides to position itself relative to its neighboring cells. Despite its importance, it remains a major challenge to understand and predict the behavior of every cell within the living tissue over time during such intricate processes. To tackle this question, we propose a geometric deep learning model that can predict multicellular folding and embryogenesis, accurately capturing the highly convoluted spatial interactions among cells. We demonstrate that multicellular data can be represented with both granular and foam-like physical pictures through a unified graph data structure, considering both cellular interactions and cell junction networks. We successfully use our model to achieve two important tasks: interpretable 4-D morphological sequence alignment, and predicting local cell rearrangements before they occur at single-cell resolution. Furthermore, using an activation map and ablation studies, we demonstrate that cell geometries and cell junction networks together regulate local cell rearrangement, which is critical for embryo morphogenesis. This approach provides a novel paradigm to study morphogenesis, highlighting a unified data structure and harnessing the power of geometric deep learning to accurately model the mechanisms and behaviors of cells during development. It offers a pathway toward creating a unified dynamic morphological atlas for a variety of developmental processes such as embryogenesis.
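
As a toy illustration of the kind of unified graph data structure described here (cells as nodes, with separate edge sets for cell-cell contacts and the junction network), the sketch below uses invented field names and values purely for illustration; it is not the authors' representation.

```python
from dataclasses import dataclass, field

@dataclass
class MulticellGraph:
    """Toy unified graph: nodes are cells, and two edge sets capture the 'granular'
    (cell-cell contact) and 'foam-like' (junction network) views. Illustrative only."""
    cell_features: dict = field(default_factory=dict)   # cell_id -> geometry features
    contact_edges: set = field(default_factory=set)     # (cell_i, cell_j) neighbors
    junction_edges: set = field(default_factory=set)    # (junction_a, junction_b) links

g = MulticellGraph()
g.cell_features[0] = {"centroid": (1.2, 0.8, 0.0), "volume": 310.0}
g.cell_features[1] = {"centroid": (2.0, 1.1, 0.1), "volume": 295.0}
g.contact_edges.add((0, 1))
```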

PMID:39040638 | PMC:PMC11261991

Categories: Literature Watch

Improvement of accumulated dose distribution in combined cervical cancer radiotherapy with deep learning-based dose prediction

Tue, 2024-07-23 06:00

Front Oncol. 2024 Jul 8;14:1407016. doi: 10.3389/fonc.2024.1407016. eCollection 2024.

ABSTRACT

PURPOSE: Difficulties remain in dose optimization and evaluation of cervical cancer radiotherapy that combines external beam radiotherapy (EBRT) and brachytherapy (BT). This study estimates and improves the accumulated dose distribution of EBRT and BT with deep learning-based dose prediction.

MATERIALS AND METHODS: A total of 30 patients treated with combined cervical cancer radiotherapy were enrolled in this study. The dose distributions of EBRT and BT plans were accumulated using commercial deformable image registration. A ResNet-101-based deep learning model was trained to predict pixel-wise dose distributions. To test the role of the predicted accumulated dose in the clinic, each EBRT plan was designed using the conventional method and then redesigned with reference to the predicted accumulated dose distribution. Bladder and rectum dosimetric parameters and normal tissue complication probability (NTCP) values were calculated and compared between the conventional and redesigned accumulated doses.
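
The abstract does not state which NTCP formulation was used; a common choice is the Lyman-Kutcher-Burman (LKB) model, sketched below with placeholder organ parameters and a toy dose-volume histogram purely for illustration.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_gy, volume_fraction, td50=80.0, m=0.11, n=0.13):
    """Lyman-Kutcher-Burman NTCP from a differential DVH (sketch).

    dose_gy, volume_fraction : arrays describing the organ's dose-volume histogram
    td50, m, n               : placeholder organ parameters, not from the study
    """
    # Generalized equivalent uniform dose (gEUD) of the organ at risk.
    geud = np.sum(volume_fraction * dose_gy ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)   # NTCP = Phi(t)

# Toy DVH: 60% of the organ at 30 Gy, 30% at 50 Gy, 10% at 70 Gy.
print(lkb_ntcp(np.array([30.0, 50.0, 70.0]), np.array([0.6, 0.3, 0.1])))
```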

RESULTS: The redesigned accumulated doses showed a decrease in mean values of V50, V60, and D2cc for the bladder (-3.02%, -1.71%, and -1.19 Gy, respectively) and rectum (-4.82%, -1.97%, and -4.13 Gy, respectively). The mean NTCP values for the bladder and rectum were also decreased by 0.02‰ and 0.98%, respectively. All values had statistically significant differences (p < 0.01), except for the bladder D2cc (p = 0.112).

CONCLUSION: This study realized accumulated dose prediction for combined cervical cancer radiotherapy without knowing the BT dose. The predicted dose served as a reference for EBRT treatment planning, leading to a superior accumulated dose distribution and lower NTCP values.

PMID:39040460 | PMC:PMC11260613 | DOI:10.3389/fonc.2024.1407016

Categories: Literature Watch

Unveiling the landscape of pathomics in personalized immunotherapy for lung cancer: a bibliometric analysis

Tue, 2024-07-23 06:00

Front Oncol. 2024 Jul 8;14:1432212. doi: 10.3389/fonc.2024.1432212. eCollection 2024.

ABSTRACT

BACKGROUND: Pathomics has emerged as a promising biomarker that could facilitate personalized immunotherapy in lung cancer. It is essential to elucidate the global research trends and emerging prospects in this domain.

METHODS: The annual distribution, journals, authors, countries, institutions, and keywords of articles published between 2018 and 2023 were visualized and analyzed using CiteSpace and other bibliometric tools.

RESULTS: A total of 109 relevant articles or reviews were included, demonstrating an overall upward trend. Terms such as "deep learning", "tumor microenvironment", "biomarkers", "image analysis", "immunotherapy", and "survival prediction" are hot keywords in this field.

CONCLUSION: In future research endeavors, advanced methodologies involving artificial intelligence and pathomics will be deployed for the digital analysis of tumor tissues and the tumor microenvironment in lung cancer patients, leveraging histopathological tissue sections. Through the integration of comprehensive multi-omics data, this strategy aims to enhance the depth of assessment, characterization, and understanding of the tumor microenvironment, thereby elucidating a broader spectrum of tumor features. Consequently, the development of a multimodal fusion model will ensue, enabling precise evaluation of personalized immunotherapy efficacy and prognosis for lung cancer patients, potentially establishing a pivotal frontier in this domain of investigation.

PMID:39040448 | PMC:PMC11260632 | DOI:10.3389/fonc.2024.1432212

Categories: Literature Watch

Prediction of radiologic outcome-optimized dose plans and post-treatment magnetic resonance images: A proof-of-concept study in breast cancer brain metastases treated with stereotactic radiosurgery

Tue, 2024-07-23 06:00

Phys Imaging Radiat Oncol. 2024 Jun 24;31:100602. doi: 10.1016/j.phro.2024.100602. eCollection 2024 Jul.

ABSTRACT

BACKGROUND AND PURPOSE: Information in multiparametric Magnetic Resonance (mpMR) images can be related to voxel-level tumor response to Radiation Treatment (RT). We have investigated a deep learning framework to predict (i) post-treatment mpMR images from pre-treatment mpMR images and the dose map ("forward models") and (ii) the RT dose map that will produce prescribed changes within the Gross Tumor Volume (GTV) on post-treatment mpMR images ("inverse model"), in Breast Cancer Metastases to the Brain (BCMB) treated with Stereotactic Radiosurgery (SRS).

MATERIALS AND METHODS: Local outcomes, planning computed tomography (CT) images, dose maps, and pre-treatment and post-treatment Apparent Diffusion Coefficient of water (ADC) maps, T1-weighted unenhanced (T1w) and contrast-enhanced (T1wCE), T2-weighted (T2w) and Fluid-Attenuated Inversion Recovery (FLAIR) mpMR images were curated from 39 BCMB patients. mpMR images were co-registered to the planning CT and intensity-calibrated. A 2D pix2pix architecture was used to train 5 forward models (ADC, T2w, FLAIR, T1w, T1wCE) and 1 inverse model on 1940 slices from 18 BCMB patients, and tested on 437 slices from another 9 BCMB patients.

RESULTS: The Root Mean Square Percent Error (RMSPE) within the GTV between predicted and ground-truth post-RT images for the 5 forward models, in 136 test slices containing GTV, was (mean ± SD) 0.12 ± 0.044 (ADC), 0.14 ± 0.066 (T2w), 0.08 ± 0.038 (T1w), 0.13 ± 0.058 (T1wCE), and 0.09 ± 0.056 (FLAIR). The RMSPE within the GTV on the same 136 test slices, between the predicted and ground-truth dose maps, was 0.37 ± 0.20 for the inverse model.
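
A minimal sketch of the reported error metric (root mean square percent error restricted to the GTV mask, returned as a fraction so that 0.12 corresponds to roughly 12%); the exact normalization and masking used by the authors may differ, and the epsilon guard is an added assumption.

```python
import numpy as np

def rmspe_in_mask(predicted, ground_truth, mask, eps=1e-6):
    """Root mean square percent error computed only over voxels inside the GTV mask."""
    p = predicted[mask > 0].astype(float)
    g = ground_truth[mask > 0].astype(float)
    # eps guards against division by zero in near-zero ground-truth voxels (assumption).
    return float(np.sqrt(np.mean(((p - g) / (g + eps)) ** 2)))
```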

CONCLUSIONS: A deep learning-based approach for radiologic outcome-optimized dose planning in SRS of BCMB has been demonstrated.

PMID:39040435 | PMC:PMC11261135 | DOI:10.1016/j.phro.2024.100602

Categories: Literature Watch
