Deep learning

ZleepAnlystNet: a novel deep learning model for automatic sleep stage scoring based on single-channel raw EEG data using separating training

Mon, 2024-04-29 06:00

Sci Rep. 2024 Apr 29;14(1):9859. doi: 10.1038/s41598-024-60796-y.

ABSTRACT

Numerous models for sleep stage scoring from single-channel raw EEG signals have employed CNN and BiLSTM architectures. While these models, which incorporate temporal information for sequence classification, demonstrate superior overall performance, they often exhibit low per-class performance for the N1 stage, necessitating an adjustment of the loss function. However, the efficacy of such adjustment is constrained by the training process. In this study, a pioneering training approach called separating training is introduced, alongside a novel model, to enhance performance. The developed model comprises 15 CNN models with varying loss function weights for feature extraction and one BiLSTM for sequence classification. Due to its architecture, this model cannot be trained end-to-end, necessitating separate training of each component using the Sleep-EDF dataset. Achieving an overall accuracy of 87.02%, MF1 of 82.09%, Kappa of 0.8221, and per-class F1-scores (W 90.34%, N1 54.23%, N2 89.53%, N3 88.96%, and REM 87.40%), our model demonstrates promising performance. Comparison with sleep technicians reveals a Kappa of 0.7015, indicating alignment with the reference sleep stages. Additionally, cross-dataset validation and adaptation through training with the SHHS dataset yield an overall accuracy of 84.40%, MF1 of 74.96%, and Kappa of 0.7785 when tested with the Sleep-EDF-13 dataset. These findings underscore the generalization potential in model architecture design facilitated by our novel training approach.
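
The two-stage idea is straightforward to express in code. Below is a minimal PyTorch sketch of the separating-training pattern, under assumed layer sizes, epoch counts, and class weights (the paper's exact configuration is not given in the abstract): CNN feature extractors are trained independently with class-weighted losses (e.g., upweighting N1), frozen, and their per-epoch features feed a BiLSTM sequence classifier.

```python
import torch
import torch.nn as nn

N_STAGES = 5  # W, N1, N2, N3, REM

class EpochCNN(nn.Module):
    """Per-epoch feature extractor over single-channel raw EEG."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, feat_dim, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(feat_dim, N_STAGES)

    def forward(self, x):                  # x: (batch, 1, samples)
        feats = self.conv(x).squeeze(-1)   # (batch, feat_dim)
        return self.head(feats), feats

def train_cnn(cnn, loader, class_weights, epochs=10):
    """Stage 1: train one CNN with its own class-weighted loss, then freeze."""
    opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss(weight=class_weights)  # e.g. upweight N1
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            logits, _ = cnn(x)
            loss_fn(logits, y).backward()
            opt.step()
    for p in cnn.parameters():             # freeze for stage 2
        p.requires_grad = False
    return cnn

class SequenceBiLSTM(nn.Module):
    """Stage 2: sequence classifier over features from the frozen CNNs.
    in_dim would be 15 * feat_dim if all 15 CNN outputs are concatenated."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, N_STAGES)

    def forward(self, feat_seq):           # (batch, seq_len, in_dim)
        out, _ = self.lstm(feat_seq)
        return self.head(out)              # per-epoch stage logits
```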

PMID:38684765 | DOI:10.1038/s41598-024-60796-y

Categories: Literature Watch

Deep-learning based 3D birefringence image generation using 2D multi-view holographic images

Mon, 2024-04-29 06:00

Sci Rep. 2024 Apr 30;14(1):9879. doi: 10.1038/s41598-024-60023-8.

ABSTRACT

Refractive index is an inherent characteristic of a material, allowing non-invasive exploration of its three-dimensional (3D) interior. Materials with direction-dependent refractive indices produce birefringence, in which incident light is split into two polarization components as it passes through the material. Representative birefringent materials include calcite crystals, liquid crystals (LCs), biological tissues, silk fibers, and polymer films. If the internal 3D structure of these materials can be visualized non-invasively, it can greatly benefit the semiconductor and display industries, optical components and devices, and biomedical diagnosis. This paper introduces a novel approach employing deep learning to generate 3D birefringence images from multi-view holographic interference images. First, we acquired a set of multi-view holographic interference pattern images and a 3D volume image of birefringence directly from a polarizing DTT (dielectric tensor tomography)-based microscope system for each LC droplet sample. The proposed model was trained to generate the 3D volume images of birefringence from the two-dimensional (2D) interference pattern image set. Performance evaluations were conducted against the ground-truth images obtained directly from the DTT microscope. Visualization techniques were applied to describe the refractive index distribution in the generated 3D birefringence images. The results show the proposed method's efficiency in generating the 3D refractive index distribution from multi-view holographic interference images, presenting a novel data-driven alternative to traditional DTT-based methods.
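
As a rough illustration of the 2D-to-3D mapping such a model learns, here is a hedged PyTorch sketch that stacks the multi-view interference images as input channels and decodes them into a refractive index volume. The view count, channel sizes, and output depth are assumptions for illustration; the paper's actual architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

class View2Volume(nn.Module):
    def __init__(self, n_views=8, depth=32):
        super().__init__()
        self.depth = depth
        self.encode = nn.Sequential(          # 2D encoder over the view stack
            nn.Conv2d(n_views, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, depth * 8, 3, padding=1), nn.ReLU(),
        )
        self.refine = nn.Sequential(          # 3D refinement of the volume
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, views):                 # views: (batch, n_views, H, W)
        b, _, h, w = views.shape
        x = self.encode(views)                # (batch, depth*8, H, W)
        x = x.view(b, 8, self.depth, h, w)    # reshape into a coarse volume
        return self.refine(x).squeeze(1)      # (batch, depth, H, W)

# Training would regress against the DTT ground-truth volumes, e.g. with
# an MSE loss: loss = F.mse_loss(model(views), gt_volume).
```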

PMID:38684698 | DOI:10.1038/s41598-024-60023-8

Categories: Literature Watch

Recognition of diabetic retinopathy and macular edema using deep learning

Mon, 2024-04-29 06:00

Med Biol Eng Comput. 2024 Apr 30. doi: 10.1007/s11517-024-03105-z. Online ahead of print.

ABSTRACT

Diabetic retinopathy (DR) and diabetic macular edema (DME) are serious eye conditions associated with diabetes that, if left untreated, can lead to permanent blindness. Traditional methods for screening these conditions rely on manual image analysis by experts, which can be time-consuming and costly due to the scarcity of such experts. To overcome these challenges, we present a modified CornerNet approach with DenseNet-100. This system aims to localize and classify lesions associated with DR and DME. To train our model, we first generate annotations for input samples; these annotations include the location and type of lesions within the retinal images. DenseNet-100 is a deep CNN used for feature extraction, and CornerNet is a one-stage object detection model known for its ability to accurately localize small objects, which makes it suitable for detecting lesions in retinal images. We assessed our technique on two challenging datasets, EyePACS and IDRiD, which contain a diverse range of retinal images. Further, the proposed model was tested in a cross-corpus scenario on two additional challenging datasets, APTOS-2019 and Diaretdb1, to assess its generalizability. According to our analysis, the method outperformed the latest approaches in both qualitative and quantitative results. The ability to effectively localize small abnormalities and to resist overfitting are key strengths of the suggested framework, which can assist practitioners in the timely recognition of such eye ailments.
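
To make the corner-based detection design concrete, here is a hedged PyTorch sketch: a dense backbone produces a feature map, and two small heads predict per-class heatmaps for top-left and bottom-right box corners, which are later paired into boxes. torchvision's densenet121 stands in for DenseNet-100, and the head sizes and class count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class CornerHead(nn.Module):
    def __init__(self, in_ch, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, n_classes, 1),       # per-class corner heatmap
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class CornerDetector(nn.Module):
    def __init__(self, n_classes=2):            # e.g. DR vs DME lesions (assumed)
        super().__init__()
        self.backbone = densenet121(weights=None).features  # 1024-ch features
        self.tl_head = CornerHead(1024, n_classes)  # top-left corners
        self.br_head = CornerHead(1024, n_classes)  # bottom-right corners
    def forward(self, x):                        # x: (batch, 3, H, W)
        feats = self.backbone(x)
        return self.tl_head(feats), self.br_head(feats)
```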

PMID:38684593 | DOI:10.1007/s11517-024-03105-z

Categories: Literature Watch

Two-headed UNetEfficientNets for parallel execution of segmentation and classification of brain tumors: incorporating postprocessing techniques with connected component labelling

Mon, 2024-04-29 06:00

J Cancer Res Clin Oncol. 2024 Apr 29;150(4):220. doi: 10.1007/s00432-024-05718-1.

ABSTRACT

PURPOSE: The purpose of this study is to develop accurate and automated detection and segmentation methods for brain tumors, given their significant fatality rates, with aggressive malignant tumors like Glioblastoma Multiforme (GBM) having a five-year survival rate as low as 5 to 10%. This underscores the urgent need to improve diagnosis and treatment outcomes through innovative approaches in medical imaging and deep learning techniques.

METHODS: In this work, we propose a novel approach utilizing the two-headed UNetEfficientNets model for simultaneous segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) images. The model combines the strengths of EfficientNets and a modified two-headed UNet model. We utilized a publicly available dataset consisting of 3064 brain MR images classified into three tumor classes: meningioma, glioma, and pituitary. To enhance the training process, we performed 12 types of data augmentation on the training dataset. We evaluated the methodology using six deep learning models, ranging from UNetEfficientNet-B0 to UNetEfficientNet-B5, optimizing the segmentation head with binary cross-entropy (BCE) plus Dice loss and the classification head with BCE plus focal loss. Post-processing techniques such as connected component labeling (CCL) and ensemble models were applied to improve segmentation outcomes.
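
The two composite losses named here are standard and easy to sketch. Below is a minimal PyTorch version, assuming equal weighting between the paired terms and a focal-loss gamma of 2.0 (the paper's tuned weights are not given in the abstract):

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice over a sigmoid segmentation map."""
    p = torch.sigmoid(pred).flatten(1)
    t = target.flatten(1)
    inter = (p * t).sum(dim=1)
    return 1 - ((2 * inter + eps) / (p.sum(1) + t.sum(1) + eps)).mean()

def focal_loss(pred, target, gamma=2.0):
    """Binary focal loss: down-weights easy examples."""
    bce = F.binary_cross_entropy_with_logits(pred, target, reduction="none")
    p_t = torch.exp(-bce)                  # probability of the true class
    return ((1 - p_t) ** gamma * bce).mean()

def seg_loss(pred_mask, true_mask):        # segmentation head: BCE + Dice
    return F.binary_cross_entropy_with_logits(pred_mask, true_mask) + \
           dice_loss(pred_mask, true_mask)

def cls_loss(pred_logits, true_onehot):    # classification head: BCE + focal
    return F.binary_cross_entropy_with_logits(pred_logits, true_onehot) + \
           focal_loss(pred_logits, true_onehot)
```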

RESULTS: The proposed UNetEfficientNet-B4 model achieved outstanding results, with an accuracy of 99.4% after postprocessing. Additionally, it obtained high scores for DICE (94.03%), precision (98.67%), and recall (99.00%) after post-processing. The ensemble technique further improved segmentation performance, with a global DICE score of 95.70% and Jaccard index of 91.20%.

CONCLUSION: Our study demonstrates the high efficiency and accuracy of the proposed UNetEfficientNet-B4 model in the automatic and parallel detection and segmentation of brain tumors from MRI images. This approach holds promise for improving diagnosis and treatment planning for patients with brain tumors, potentially leading to better outcomes and prognosis.

PMID:38684578 | DOI:10.1007/s00432-024-05718-1

Categories: Literature Watch

Deep Learning-based Image Enhancement Techniques for Fast MRI in Neuroimaging

Mon, 2024-04-29 06:00

Magn Reson Med Sci. 2024 Apr 27. doi: 10.2463/mrms.rev.2023-0153. Online ahead of print.

ABSTRACT

Despite its superior soft-tissue contrast and non-invasive nature, MRI requires long scan times due to its intrinsic signal acquisition principles, a main drawback that technological advancements in MRI have focused on addressing. Scan time reduction is a particularly natural requirement in neuroimaging, where detailed structures demand high-resolution imaging and often volumetric (3D) acquisitions, and numerous studies have recently attempted to harness deep learning (DL) technology to reduce scan time and improve image quality. Various DL-based image reconstruction products allow for additional scan time reduction on top of existing accelerated acquisition methods without compromising image quality.

PMID:38684425 | DOI:10.2463/mrms.rev.2023-0153

Categories: Literature Watch

scTPC: a novel semi-supervised deep clustering model for scRNA-seq data

Mon, 2024-04-29 06:00

Bioinformatics. 2024 Apr 29:btae293. doi: 10.1093/bioinformatics/btae293. Online ahead of print.

ABSTRACT

MOTIVATION: Continuous advancements in single-cell RNA sequencing (scRNA-seq) technology have enabled researchers to further the study of cell heterogeneity, trajectory inference, identification of rare cell types, and neurology. Accurate clustering of scRNA-seq data is crucial in single-cell sequencing data analysis. However, the high dimensionality, sparsity, and presence of "false" zero values in the data pose challenges to clustering. Furthermore, current unsupervised clustering algorithms have not effectively leveraged prior biological knowledge, making cell clustering even more challenging.

RESULTS: This study presents a semi-supervised clustering model called scTPC, which integrates triplet, pairwise, and cross-entropy constraints based on deep learning. Specifically, the model begins by pre-training a denoising autoencoder based on a zero-inflated negative binomial (ZINB) distribution. Deep clustering is then performed in the learned latent feature space using triplet constraints and pairwise constraints generated from partially labeled cells. Finally, to address imbalanced cell-type datasets, a weighted cross-entropy loss is introduced to optimize the model. A series of experiments on 10 real scRNA-seq datasets and 5 simulated datasets demonstrate that scTPC achieves accurate clustering with a well-designed framework.
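
A minimal sketch of the three constraint losses described here, applied to latent embeddings: a triplet margin term, a pairwise (must-link / cannot-link) term, and an inverse-frequency weighted cross-entropy for imbalanced cell types. The margins and weighting scheme are illustrative assumptions, not scTPC's exact formulation.

```python
import torch
import torch.nn.functional as F

triplet = torch.nn.TripletMarginLoss(margin=1.0)  # anchor/positive/negative

def pairwise_constraint(z_i, z_j, must_link, margin=1.0):
    """Pull must-link pairs together, push cannot-link pairs apart.
    must_link is a 0/1 tensor per pair of latent embeddings."""
    d = F.pairwise_distance(z_i, z_j)
    pull = must_link * d.pow(2)
    push = (1 - must_link) * F.relu(margin - d).pow(2)
    return (pull + push).mean()

def weighted_ce(logits, labels, class_counts):
    """Inverse-frequency weights to counter cell-type imbalance."""
    w = 1.0 / class_counts.float()
    w = w / w.sum() * len(class_counts)
    return F.cross_entropy(logits, labels, weight=w)
```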

AVAILABILITY: scTPC is a Python-based algorithm, and the code is available from https://github.com/LF-Yang/Code or https://zenodo.org/records/10951780.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38684178 | DOI:10.1093/bioinformatics/btae293

Categories: Literature Watch

Physician Assistant Educators' Production Blueprint for Video Pedagogy

Mon, 2024-04-29 06:00

J Physician Assist Educ. 2024 Apr 30. doi: 10.1097/JPA.0000000000000592. Online ahead of print.

ABSTRACT

This article presents a blueprint for effective video media production in physician assistant (PA) education, based on validated pedagogical practices found in the literature. Drawing on cognitive load theory, this practical blueprint aims to improve video production practices and better engage students within a format that improves learning outcomes for a diverse body of PA students. Students are already interacting with videos, and educators have an opportunity to hone video production practices to enhance student learning. A literature review of pedagogical practices in video production guides the proposed blueprint. The practical principles of cognitive load theory improve efficiency in assimilating new information, enhance student engagement, and facilitate active and deep learning for students engaging with instructional video. Based on the literature and the author's experience creating educational videos, a guide in the form of a production blueprint specific to PA education is proposed.

PMID:38684095 | DOI:10.1097/JPA.0000000000000592

Categories: Literature Watch

MassDash: A Web-Based Dashboard for Data-Independent Acquisition Mass Spectrometry Visualization

Mon, 2024-04-29 06:00

J Proteome Res. 2024 Apr 29. doi: 10.1021/acs.jproteome.4c00026. Online ahead of print.

ABSTRACT

With the increased usage and diversity of methods and instruments being applied to analyze Data-Independent Acquisition (DIA) data, visualization is becoming increasingly important for validating automated software results. Here we present MassDash, a cross-platform DIA mass spectrometry visualization and validation software for comparing features and results across popular tools. MassDash provides a web-based interface and Python package for interactive feature visualizations and summary report plots across multiple automated DIA feature detection tools, including OpenSwath, DIA-NN, and dreamDIA. Furthermore, MassDash processes peptides on the fly, enabling interactive visualization of peptides across dozens of runs simultaneously on a personal computer. MassDash supports various multidimensional visualizations across retention time, ion mobility, m/z, and intensity, providing additional insights into the data. The modular framework is easily extendable, enabling rapid development of novel peak-picking algorithms, such as deep learning-based approaches, and refinement of existing tools. MassDash is open-source under a BSD 3-Clause license and freely available at https://github.com/Roestlab/massdash, and a demo version can be accessed at https://massdash.streamlit.app.

PMID:38684072 | DOI:10.1021/acs.jproteome.4c00026

Categories: Literature Watch

UNNT: A novel Utility for comparing Neural Net and Tree-based models

Mon, 2024-04-29 06:00

PLoS Comput Biol. 2024 Apr 29;20(4):e1011504. doi: 10.1371/journal.pcbi.1011504. Online ahead of print.

ABSTRACT

The use of deep learning (DL) is steadily gaining traction in scientific challenges such as cancer research. Advances in data generation, machine learning algorithms, and compute infrastructure have accelerated the use of deep learning in various domains of cancer research, such as drug response problems. In our study, we explored tree-based models to improve the accuracy of a single-drug response model and demonstrate that tree-based models such as XGBoost (eXtreme Gradient Boosting) have advantages over deep learning models, such as convolutional neural networks (CNNs), for single-drug response problems. However, comparing models is not a trivial task. To make training and comparing CNNs and XGBoost more accessible to users, we developed an open-source library called UNNT (A novel Utility for comparing Neural Net and Tree-based models). The case studies in this manuscript focus on cancer drug response datasets; however, the utility can be applied to datasets from other domains, such as chemistry.
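
As a hedged, generic sketch of the comparison UNNT wraps into one utility: fit an XGBoost regressor on tabular drug-response features and score it with the same held-out metric a CNN baseline would use. The synthetic data, feature count, and hyperparameters below are placeholders; UNNT's actual API and configuration are documented in its repository.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64)).astype("float32")    # stand-in features
y = X[:, :4].sum(axis=1) + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("XGBoost MSE:", mean_squared_error(y_te, model.predict(X_te)))

# A CNN baseline would be trained on the same split and scored with the
# same metric -- the side-by-side pairing UNNT automates.
```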

PMID:38683879 | DOI:10.1371/journal.pcbi.1011504

Categories: Literature Watch

Application of deep learning to pressure injury staging

Mon, 2024-04-29 06:00

J Wound Care. 2024 May 2;33(5):368-378. doi: 10.12968/jowc.2024.33.5.368.

ABSTRACT

OBJECTIVE: Accurate assessment of pressure injuries (PIs) is necessary for a good outcome. Junior and non-specialist nurses have less experience with PIs and lack clinical practice, and so have difficulty staging them accurately. In this work, a deep learning-based system for PI staging and tissue classification is proposed to help improve staging accuracy and efficiency in clinical practice and reduce healthcare costs.

METHOD: A total of 1610 cases of PI and their corresponding photographs were collected from clinical practice, and each sample was accurately staged and the tissues labelled by experts for training a Mask Region-based Convolutional Neural Network (Mask R-CNN, Facebook Artificial Intelligence Research, Meta, US) object detection and instance segmentation network. A recognition system was set up to automatically stage and classify the tissues of the remotely uploaded PI photographs.
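
For readers who want to reproduce the general setup, here is a minimal torchvision sketch of fine-tuning Mask R-CNN for detection and instance segmentation. The class count and head sizes are assumptions for illustration, not the study's configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 1 + 6  # background + assumed stage/tissue categories

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box head with one sized for our classes.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, NUM_CLASSES)

# Replace the mask head likewise.
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256,
                                                   NUM_CLASSES)
# Fine-tuning then follows the standard torchvision detection recipe.
```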

RESULTS: On a test set of 100 samples, the average precision of this model for stage recognition reached 0.603, which exceeded that of the medical personnel involved in the comparative evaluation, including an enterostomal therapist.

CONCLUSION: In this study, the deep learning-based PI staging system achieved the evaluation performance of a nurse with professional training in wound care. This low-cost system could help overcome the difficulty of identifying PIs by junior and non-specialist nurses, and provide valuable auxiliary clinical information.

PMID:38683775 | DOI:10.12968/jowc.2024.33.5.368

Categories: Literature Watch

Factors associated with interobserver variation amongst pathologists in the diagnosis of endometrial hyperplasia: A systematic review

Mon, 2024-04-29 06:00

PLoS One. 2024 Apr 29;19(4):e0302252. doi: 10.1371/journal.pone.0302252. eCollection 2024.

ABSTRACT

OBJECTIVE: Reproducible diagnoses of endometrial hyperplasia (EH) remains challenging and has potential implications for patient management. This systematic review aimed to identify pathologist-specific factors associated with interobserver variation in the diagnosis and reporting of EH.

METHODS: Three electronic databases, namely MEDLINE, Embase and Web of Science, were searched from 1st January 2000 to 25th March 2023, using relevant key words and subject headings. Eligible studies reported on pathologist-specific factors or working practices influencing interobserver variation in the diagnosis of EH, using either the World Health Organisation (WHO) 2014 or 2020 classification or the endometrioid intraepithelial neoplasia (EIN) classification system. Quality assessment was undertaken using the QUADAS-2 tool, and findings were narratively synthesised.

RESULTS: Eight studies were identified. Interobserver variation was shown to be significant even amongst specialist gynaecological pathologists in most studies. Few studies investigated pathologist-specific characteristics, but pathologists were shown to have different diagnostic styles, with some more likely to under-diagnose and others likely to over-diagnose EH. Some novel working practices were identified, such as grading the "degree" of nuclear atypia and the incorporation of objective methods of diagnosis such as semi-automated quantitative image analysis/deep learning models.

CONCLUSIONS: This review highlighted the impact of pathologist-specific factors and working practices in the accurate diagnosis of EH, although few studies have been conducted. Further research is warranted in the development of more objective criteria that could improve reproducibility in EH diagnostic reporting, as well as determining the applicability of novel methods such as grading the degree of nuclear atypia in clinical settings.

PMID:38683770 | DOI:10.1371/journal.pone.0302252

Categories: Literature Watch

Learning to Holistically Detect Bridges From Large-Size VHR Remote Sensing Imagery

Mon, 2024-04-29 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Apr 29;PP. doi: 10.1109/TPAMI.2024.3393024. Online ahead of print.

ABSTRACT

Bridge detection in remote sensing images (RSIs) plays a crucial role in various applications, but it poses unique challenges compared to the detection of other objects. In RSIs, bridges exhibit considerable variations in terms of their spatial scales and aspect ratios. Therefore, to ensure the visibility and integrity of bridges, it is essential to perform holistic bridge detection in large-size very-high-resolution (VHR) RSIs. However, the lack of datasets with large-size VHR RSIs limits the deep learning algorithms' performance on bridge detection. Due to the limitation of GPU memory in tackling large-size images, deep learning-based object detection methods commonly adopt the cropping strategy, which inevitably results in label fragmentation and discontinuous prediction. To ameliorate the scarcity of datasets, this paper proposes a large-scale dataset named GLH-Bridge comprising 6,000 VHR RSIs sampled from diverse geographic locations across the globe. These images encompass a wide range of sizes, varying from 2,048 × 2,048 to 16,384 × 16,384 pixels, and collectively feature 59,737 bridges. These bridges span diverse backgrounds, and each of them has been manually annotated, using both an oriented bounding box (OBB) and a horizontal bounding box (HBB). Furthermore, we present an efficient network for holistic bridge detection (HBD-Net) in large-size RSIs. The HBD-Net presents a separate detector-based feature fusion (SDFF) architecture and is optimized via a shape-sensitive sample re-weighting (SSRW) strategy. The SDFF architecture performs inter-layer feature fusion (IFF) to incorporate multi-scale context in the dynamic image pyramid (DIP) of the large-size image, and the SSRW strategy is employed to ensure an equitable balance in the regression weight of bridges with various aspect ratios. Based on the proposed GLH-Bridge dataset, we establish a bridge detection benchmark including the OBB and HBB tasks, and validate the effectiveness of the proposed HBD-Net. Additionally, cross-dataset generalization experiments on two publicly available datasets illustrate the strong generalization capability of the GLH-Bridge dataset. The dataset and source code will be released at https://luo-z13.github.io/GLH-Bridge-page/.
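
The SSRW idea, as described, amounts to re-weighting the box regression loss by aspect ratio so elongated bridges are not dominated by near-square ones. Here is a hedged PyTorch sketch under an assumed weighting function; the paper's exact formula is not given in the abstract.

```python
import torch
import torch.nn.functional as F

def ssrw_weights(widths, heights, alpha=0.5):
    """Weight grows with |log aspect ratio|; alpha sets the strength (assumed)."""
    ar = torch.log(widths / heights).abs()
    return 1.0 + alpha * ar

def weighted_box_loss(pred, target, widths, heights):
    """Smooth-L1 box regression, re-weighted per box by shape."""
    per_box = F.smooth_l1_loss(pred, target, reduction="none").sum(dim=1)
    w = ssrw_weights(widths, heights)
    return (w * per_box).sum() / w.sum()
```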

PMID:38683714 | DOI:10.1109/TPAMI.2024.3393024

Categories: Literature Watch

Searching to Exploit Memorization Effect in Deep Learning with Noisy Labels

Mon, 2024-04-29 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Apr 29;PP. doi: 10.1109/TPAMI.2024.3394552. Online ahead of print.

ABSTRACT

Sample selection approaches are popular in robust learning from noisy labels. However, controlling the selection process properly so that deep networks can benefit from the memorization effect is a hard problem. In this paper, motivated by the success of automated machine learning (AutoML), we propose to control the selection process by bi-level optimization. Specifically, we parameterize the selection process by exploiting the general patterns of the memorization effect in the upper level, and then update these parameters using the prediction accuracy obtained from model training in the lower level. We further introduce semi-supervised learning algorithms to utilize noisy-labeled data as unlabeled data. To solve the bi-level optimization problem efficiently, we consider more information from the validation curvature via the Newton method and the cubic regularization method. We provide convergence analysis for both optimization methods. Results show that while both methods can converge to an (approximately) stationary point, the cubic regularization method can find a better local optimum than the Newton method in less time. Experiments on both benchmark and real-world datasets demonstrate that the proposed searching method leads to significant improvements over existing methods. Compared with existing AutoML approaches, our method is much more efficient at finding a good selection schedule.
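
A minimal sketch of the selection process being controlled: keep the R(t) fraction of smallest-loss samples in each batch, since the memorization effect means clean labels tend to have small loss early in training. The schedule below is a simple hand-written assumption; the paper's contribution is to parameterize such schedules and search them by bi-level optimization rather than fixing them by hand.

```python
import torch

def selection_rate(epoch, tau=0.4, k=10):
    """Assumed schedule: keep everything early, then decay to 1 - tau."""
    return 1.0 - tau * min(epoch / k, 1.0)

def small_loss_selection(losses, epoch):
    """Indices of the lowest-loss samples under the current schedule."""
    n_keep = max(1, int(selection_rate(epoch) * losses.numel()))
    return torch.topk(losses, n_keep, largest=False).indices

# Training step (sketch):
#   losses = F.cross_entropy(logits, noisy_labels, reduction="none")
#   idx = small_loss_selection(losses.detach(), epoch)
#   loss = losses[idx].mean()   # backprop only on the selected samples
```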

PMID:38683712 | DOI:10.1109/TPAMI.2024.3394552

Categories: Literature Watch

AI-based denoising of head impact kinematics measurements with convolutional neural network for traumatic brain injury prediction

Mon, 2024-04-29 06:00

IEEE Trans Biomed Eng. 2024 Apr 29;PP. doi: 10.1109/TBME.2024.3392537. Online ahead of print.

ABSTRACT

OBJECTIVE: Wearable devices are developed to measure head impact kinematics but are intrinsically noisy because of the imperfect interface with human bodies. This study aimed to improve the head impact kinematics measurements obtained from instrumented mouthguards using deep learning to enhance traumatic brain injury (TBI) risk monitoring.

METHODS: We developed one-dimensional convolutional neural network (1D-CNN) models to denoise mouthguard kinematics measurements for tri-axial linear acceleration and tri-axial angular velocity from 163 laboratory dummy head impacts. The performance of the denoising models was evaluated on three levels: kinematics, brain injury criteria, and tissue-level strain and strain rate. Additionally, we performed a blind test on an on-field dataset of 118 college football impacts and a test on 413 post-mortem human subject (PMHS) impacts.
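
A hedged sketch of a 1D-CNN denoiser of the kind described: it maps a noisy six-channel kinematics window (tri-axial linear acceleration plus tri-axial angular velocity) to a denoised window of the same shape. Kernel sizes, depth, and the residual formulation are illustrative assumptions.

```python
import torch.nn as nn

class KinematicsDenoiser(nn.Module):
    def __init__(self, channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, channels, kernel_size=15, padding=7),
        )

    def forward(self, x):        # x: (batch, 6, time_samples)
        return x + self.net(x)   # residual: predict the noise correction

# Trained with e.g. MSE against reference lab-grade kinematics traces.
```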

RESULTS: On the dummy head impacts, the denoised kinematics showed better correlation with reference kinematics, with relative reductions of 36% for pointwise root mean squared error and 56% for peak absolute error. Absolute errors in six brain injury criteria were reduced by a mean of 82%. For maximum principal strain and maximum principal strain rate, the mean error reduction was 35% and 69%, respectively. On the PMHS impacts, similar denoising effects were observed, and the peak kinematics after denoising were more accurate (the relative error reduction for the 10% noisiest impacts was 75.6%).

CONCLUSION: The 1D-CNN denoising models effectively reduced errors in mouthguard-derived kinematics measurements on dummy and PMHS impacts.

SIGNIFICANCE: This study provides a novel approach for denoising head kinematics measurements in dummy and PMHS impacts, which can be further validated on more real-human kinematics data before real-world applications.

PMID:38683703 | DOI:10.1109/TBME.2024.3392537

Categories: Literature Watch

A comparative analysis of computational drug repurposing approaches: proposing a novel tensor-matrix-tensor factorization method

Mon, 2024-04-29 06:00

Mol Divers. 2024 Apr 29. doi: 10.1007/s11030-024-10851-7. Online ahead of print.

ABSTRACT

Efficient drug discovery relies on drug repurposing, an important and open research field. This work has two aims: a novel factorization method and a practical comparison of different approaches for drug repurposing. First, we propose a novel tensor-matrix-tensor (TMT) formulation as a new data array method with a gradient-based factorization procedure. Second, this paper examines and contrasts four computational drug repurposing approaches: factorization-based methods, machine learning methods, deep learning methods, and graph neural networks. We test the strategies on two datasets and assess each approach's performance, drawbacks, and benefits based on the results. The results demonstrate that deep learning techniques work better than the other strategies and that their results may be more reliable. Ultimately, graph neural methods must operate in an inductive manner to yield reliable predictions.
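
The TMT formulation itself is not detailed in the abstract; below is a hedged sketch of only the underlying gradient-based factorization pattern it builds on: approximate a drug-disease association matrix as a low-rank product and fit the factors by gradient descent on observed entries. Rank, steps, and learning rate are placeholders.

```python
import torch

def factorize(assoc, rank=16, steps=500, lr=0.05):
    """assoc: float matrix with NaN marking unobserved drug-disease pairs."""
    n_drugs, n_dis = assoc.shape
    U = torch.randn(n_drugs, rank, requires_grad=True)
    V = torch.randn(n_dis, rank, requires_grad=True)
    opt = torch.optim.Adam([U, V], lr=lr)
    mask = ~torch.isnan(assoc)              # fit only observed entries
    target = torch.nan_to_num(assoc)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((U @ V.T - target)[mask] ** 2).mean()
        loss.backward()
        opt.step()
    return U.detach(), V.detach()           # U @ V.T scores candidate pairs
```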

PMID:38683487 | DOI:10.1007/s11030-024-10851-7

Categories: Literature Watch

Automated abdominal CT contrast phase detection using an interpretable and open-source artificial intelligence algorithm

Mon, 2024-04-29 06:00

Eur Radiol. 2024 Apr 29. doi: 10.1007/s00330-024-10769-6. Online ahead of print.

ABSTRACT

OBJECTIVES: To develop and validate an open-source artificial intelligence (AI) algorithm to accurately detect contrast phases in abdominal CT scans.

MATERIALS AND METHODS: This retrospective study developed an AI algorithm trained on 739 abdominal CT exams from 2016 to 2021, from 200 unique patients, covering 1545 axial series. We segmented five key anatomic structures-aorta, portal vein, inferior vena cava, renal parenchyma, and renal pelvis-using TotalSegmentator, a deep learning-based tool for multi-organ segmentation, with a rule-based approach to extract the renal pelvis. Radiomics features were extracted from the anatomic structures for use in a gradient-boosting classifier to identify four contrast phases: non-contrast, arterial, venous, and delayed. Internal validation was performed using the F1 score and other classification metrics; external validation used the "VinDr-Multiphase CT" dataset.
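
A hedged sketch of the classification stage: simple intensity features (e.g., mean and standard deviation of HU) per segmented structure feed a gradient-boosting classifier over the four phases. The feature names and helper below are placeholders; the study extracts a fuller radiomics feature set from the TotalSegmentator masks.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

PHASES = ["non-contrast", "arterial", "venous", "delayed"]
STRUCTURES = ["aorta", "portal_vein", "ivc", "renal_parenchyma", "renal_pelvis"]

def structure_features(ct_hu, masks):
    """Mean/std HU inside each anatomic mask -> one feature vector.
    ct_hu: HU volume; masks: dict of boolean arrays per structure."""
    feats = []
    for name in STRUCTURES:
        vals = ct_hu[masks[name]]
        feats += [vals.mean(), vals.std()]
    return np.array(feats)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
# X = np.stack([structure_features(ct, m) for ct, m in series])
# clf.fit(X_train, y_train); phase = PHASES[clf.predict(X_new)[0]]
```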

RESULTS: The training dataset consisted of 172 patients (mean age, 70 years ± 8, 22% women), and the internal test set included 28 patients (mean age, 68 years ± 8, 14% women). In internal validation, the classifier achieved an accuracy of 92.3%, with an average F1 score of 90.7%. During external validation, the algorithm maintained an accuracy of 90.1%, with an average F1 score of 82.6%. Shapley feature attribution analysis indicated that renal and vascular radiodensity values were the most important for phase classification.

CONCLUSION: An open-source and interpretable AI algorithm accurately detects contrast phases in abdominal CT scans, with high accuracy and F1 scores in internal and external validation, confirming its generalization capability.

CLINICAL RELEVANCE STATEMENT: Contrast phase detection in abdominal CT scans is a critical step for downstream AI applications, deploying algorithms in the clinical setting, and for quantifying imaging biomarkers, ultimately allowing for better diagnostics and increased access to diagnostic imaging.

KEY POINTS: Digital Imaging and Communications in Medicine labels are inaccurate for determining the abdominal CT scan phase. AI provides great help in accurately discriminating the contrast phase. Accurate contrast phase determination aids downstream AI applications and biomarker quantification.

PMID:38683384 | DOI:10.1007/s00330-024-10769-6

Categories: Literature Watch

Classification of DNA Sequence Based on a Non-gradient Algorithm: Pseudoinverse Learners

Mon, 2024-04-29 06:00

Methods Mol Biol. 2024;2744:359-373. doi: 10.1007/978-1-0716-3581-0_23.

ABSTRACT

This chapter proposes a prototype-based classification approach for analyzing DNA barcodes that uses a spectral representation of DNA sequences and a non-gradient neural network. Biological sequences can be viewed as data with high, non-fixed dimensionality corresponding to the length of the sequences. Numerical encoding plays an important role in DNA sequence evaluation, through computational procedures such as one-hot encoding (OHE). However, the OHE method has some disadvantages: (1) it does not add any information that could yield an additional predictive variable, and (2) if the variable has many classes, OHE significantly expands the feature space. To address these shortcomings, this chapter proposes a computationally efficient framework for classifying DNA sequences of living organisms in the image domain. A multilayer perceptron trained by a pseudoinverse learning autoencoder (PILAE) algorithm is used in the proposed strategy. The learning control parameters and the number of hidden layers do not have to be specified during the PILAE training process. As a result, the PILAE classifier outperforms other deep neural network (DNN) strategies such as the VGG-16 and Xception models.
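
A minimal sketch of the OHE step under discussion: each base maps to a unit vector, so a sequence of length L becomes an L x 4 array, with the feature dimension growing with sequence length, which is the drawback the chapter's spectral representation is meant to avoid.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """One-hot encode a DNA string as an (L, 4) float array."""
    arr = np.zeros((len(seq), 4), dtype=np.float32)
    for i, b in enumerate(seq.upper()):
        if b in BASES:                  # ambiguous bases (e.g. N) stay zero
            arr[i, BASES[b]] = 1.0
    return arr

print(one_hot("ACGT"))                  # a 4 x 4 identity-like matrix
```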

PMID:38683331 | DOI:10.1007/978-1-0716-3581-0_23

Categories: Literature Watch

Exploring Novel Fentanyl Analogues Using a Graph-Based Transformer Model

Mon, 2024-04-29 06:00

Interdiscip Sci. 2024 Apr 29. doi: 10.1007/s12539-024-00623-0. Online ahead of print.

ABSTRACT

The structures of fentanyl and its analogues are easily modified, and few variants have been included in databases so far, which allows criminals to evade regulatory oversight. This paper introduces a molecular graph-based transformer model, combined with a data augmentation method based on substructure replacement, to generate novel fentanyl analogues. 140,000 molecules were generated, and after a series of screening steps, 36,799 potential fentanyl analogues were obtained. We calculated the molecular properties of these 36,799 potential analogues, and the results showed that the model could learn properties of the original fentanyl molecules. We compared the molecules generated by the transformer model with substructure-replacement augmentation against those generated by two other deep learning-based molecular generation models, and found that our model generates more novel potential fentanyl analogues. Finally, the findings indicate that the molecular graph-based transformer model helps explore the structures of potential fentanyl analogues and understand the distribution of the original fentanyl molecules.
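
As a hedged illustration of the substructure-replacement augmentation mechanism, here is an RDKit sketch that swaps a query fragment for an alternative fragment on a deliberately generic molecule (acetanilide, with a phenyl-to-pyridine swap). These fragments are stand-ins chosen for illustration only, not the paper's augmentation rules.

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Nc1ccccc1")     # acetanilide (generic example)
query = Chem.MolFromSmarts("c1ccccc1")           # fragment to replace
repl = Chem.MolFromSmiles("c1ccncc1")            # replacement fragment

# One product per unique match of the query substructure.
variants = Chem.ReplaceSubstructs(mol, query, repl)
for v in variants:
    try:
        Chem.SanitizeMol(v)                      # keep only valid chemistry
        print(Chem.MolToSmiles(v))
    except Exception:
        pass                                     # discard invalid products
```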

PMID:38683279 | DOI:10.1007/s12539-024-00623-0

Categories: Literature Watch

Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review

Mon, 2024-04-29 06:00

Crit Rev Oncog. 2024;29(3):33-65. doi: 10.1615/CritRevOncog.2023050852.

ABSTRACT

Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response in the setting of often confusing imaging findings provides a unique niche for CNNs to help distinguish the treatment response of gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or can falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to resolve this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNNs in the field of neuro-oncology and challenges in model interpretability, data availability, and computational efficiency.

PMID:38683153 | DOI:10.1615/CritRevOncog.2023050852

Categories: Literature Watch

PBAC: A pathway-based attention convolution neural network for predicting clinical drug treatment responses

Mon, 2024-04-29 06:00

J Cell Mol Med. 2024 May;28(9):e18298. doi: 10.1111/jcmm.18298.

ABSTRACT

Precise and personalized drug application is crucial in the clinical treatment of complex diseases. Although neural networks offer a new approach to improving drug strategies, their internal structure is difficult to interpret. Here, we propose PBAC (Pathway-Based Attention Convolution neural network), which integrates a deep learning framework with an attention mechanism to exploit complex biological pathway information, thereby providing a robust, biological function-based drug responsiveness prediction model. PBAC has four layers: a gene-pathway layer, an attention layer, a convolution layer, and a fully connected layer. PBAC improves drug responsiveness prediction by focusing on important pathways, helping us understand the mechanism of drug action in diseases. We validated the PBAC model using data from four chemotherapy drugs (bortezomib, cisplatin, docetaxel, and paclitaxel) and 11 immunotherapy datasets. In the majority of datasets, PBAC exhibits superior performance compared to traditional machine learning methods and other published approaches (area under the curve = 0.81, area under the precision-recall curve = 0.73). Using the PBAC attention layer output, we identified some pathways as potential core cancer regulators, providing good interpretability for drug treatment prediction. In summary, we present PBAC, a powerful tool for predicting drug responsiveness based on biological pathway information and exploring potential cancer-driving pathways.
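
A hedged sketch of a gene-to-pathway layer with attention, the core idea named in the abstract: each pathway node connects only to its member genes via a masked linear map, and an attention vector scores pathway importance. The sizes and the attention form are illustrative assumptions, not PBAC's exact architecture.

```python
import torch
import torch.nn as nn

class PathwayAttention(nn.Module):
    def __init__(self, mask):                  # mask: (n_genes, n_pathways) 0/1
        super().__init__()
        self.register_buffer("mask", mask)
        self.weight = nn.Parameter(torch.randn(mask.shape) * 0.01)
        self.attn = nn.Linear(mask.shape[1], mask.shape[1])

    def forward(self, expr):                   # expr: (batch, n_genes)
        w = self.weight * self.mask            # zero out non-member genes
        pathway = expr @ w                     # (batch, n_pathways)
        scores = torch.softmax(self.attn(pathway), dim=-1)
        return pathway * scores, scores        # weighted pathways + attention

# Downstream, the weighted pathway activations would feed convolution and
# fully connected layers; the attention scores give the interpretability.
```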

PMID:38683133 | DOI:10.1111/jcmm.18298

Categories: Literature Watch
