Deep learning

High-resolution extracellular pH imaging of liver cancer with multiparametric MR using Deep Image Prior

Fri, 2024-03-15 06:00

NMR Biomed. 2024 Mar 15:e5145. doi: 10.1002/nbm.5145. Online ahead of print.

ABSTRACT

Noninvasive extracellular pH (pHe) mapping with Biosensor Imaging of Redundant Deviation in Shifts (BIRDS) using MR spectroscopic imaging (MRSI) has been demonstrated on 3T clinical MR scanners at 8 × 8 × 10 mm³ spatial resolution and applied to study various liver cancer treatments. Although pHe imaging at higher resolution can be achieved by extending the acquisition time, a postprocessing method to increase the resolution is preferable, to minimize the duration spent by the subject in the MR scanner. In this work, we propose to improve the spatial resolution of pHe mapping with BIRDS by incorporating anatomical information in the form of multiparametric MRI and using an unsupervised deep-learning technique, Deep Image Prior (DIP). Specifically, we used high-resolution T1, T2, and diffusion-weighted imaging (DWI) MR images of rabbits with VX2 liver tumors as inputs to a U-Net architecture to provide anatomical information. U-Net parameters were optimized to minimize the difference between the output super-resolution image and the experimentally acquired low-resolution pHe image using the mean-absolute error. In this way, the super-resolution pHe image would be consistent with both anatomical MR images and the low-resolution pHe measurement from the scanner. The method was developed based on data from 49 rabbits implanted with VX2 liver tumors. For evaluation, we also acquired high-resolution pHe images from two rabbits, which were used as ground truth. The results indicate a good match between the spatial characteristics of the super-resolution images and the high-resolution ground truth, supported by the low pixelwise absolute error.
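To illustrate the optimization described above, here is a minimal PyTorch-style sketch of the Deep Image Prior idea: a network driven by the anatomical images is fitted per case so that its downsampled output matches the measured low-resolution pHe map. The network choice, downsampling operator, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fit_dip(net, anat_hr, phe_lr, scale=4, steps=2000, lr=1e-3):
    """Fit an image-to-image network (e.g., a U-Net) to one case, Deep Image Prior style.
    anat_hr: (1, C, H, W) high-resolution anatomical channels (T1, T2, DWI);
    phe_lr: (1, 1, H//scale, W//scale) measured low-resolution pHe map."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        phe_sr = net(anat_hr)                               # super-resolution pHe estimate
        phe_down = F.avg_pool2d(phe_sr, kernel_size=scale)  # assumed downsampling model
        loss = F.l1_loss(phe_down, phe_lr)                  # mean-absolute error, as in the abstract
        loss.backward()
        opt.step()
    return net(anat_hr).detach()
```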

PMID:38488205 | DOI:10.1002/nbm.5145

Categories: Literature Watch

PS(2)MS: A Deep Learning-Based Prediction System for Identifying New Psychoactive Substances Using Mass Spectrometry

Fri, 2024-03-15 06:00

Anal Chem. 2024 Mar 15. doi: 10.1021/acs.analchem.3c05019. Online ahead of print.

ABSTRACT

The rapid proliferation of new psychoactive substances (NPS) poses significant challenges to conventional mass-spectrometry-based identification methods due to the absence of reference spectra for these emerging substances. This paper introduces PS2MS, an AI-powered predictive system designed specifically to address the limitations of identifying the emergence of unidentified novel illicit drugs. PS2MS builds a synthetic NPS database by enumerating feasible derivatives of known substances and uses deep learning to generate mass spectra and chemical fingerprints. When the mass spectrum of an analyte does not match any known reference, PS2MS simultaneously examines the chemical fingerprint and mass spectrum against the putative NPS database using integrated metrics to deduce possible identities. Experimental results affirm the effectiveness of PS2MS in identifying cathinone derivatives within real evidence specimens, signifying its potential for practical use in identifying emerging drugs of abuse for researchers and forensic experts.
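The integrated matching metric is not specified in the abstract; the sketch below shows one plausible combination of spectral and fingerprint similarity for ranking putative NPS candidates. The weighting scheme and helper names are assumptions, not the PS2MS implementation.

```python
import numpy as np

def spectral_cosine(spec_a, spec_b):
    """Cosine similarity between two binned mass spectra (1-D intensity vectors)."""
    return np.dot(spec_a, spec_b) / (np.linalg.norm(spec_a) * np.linalg.norm(spec_b) + 1e-12)

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary chemical fingerprints."""
    fp_a, fp_b = np.asarray(fp_a, bool), np.asarray(fp_b, bool)
    union = np.logical_or(fp_a, fp_b).sum()
    return np.logical_and(fp_a, fp_b).sum() / union if union else 0.0

def combined_score(query_spec, query_fp, cand_spec, cand_fp, w=0.5):
    # Weighted combination of the two similarities; the weight w is illustrative.
    return w * spectral_cosine(query_spec, cand_spec) + (1 - w) * tanimoto(query_fp, cand_fp)
```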

PMID:38488022 | DOI:10.1021/acs.analchem.3c05019

Categories: Literature Watch

Current Trends and Challenges in Drug-Likeness Prediction: Are They Generalizable and Interpretable?

Fri, 2024-03-15 06:00

Health Data Sci. 2023 Nov 10;3:0098. doi: 10.34133/hds.0098. eCollection 2023.

ABSTRACT

Importance: Drug-likeness of a compound is an overall assessment of its potential to succeed in clinical trials, and is essential for economizing research expenditures by filtering compounds with unfavorable properties and poor development potential. To this end, a robust drug-likeness prediction method is indispensable. Various approaches, including discriminative rules, statistical models, and machine learning models, have been developed to predict drug-likeness based on physicochemical properties and structural features. Notably, recent advancements in novel deep learning techniques have significantly advanced drug-likeness prediction, especially in classification performance. Highlights: In this review, we addressed the evolving landscape of drug-likeness prediction, with emphasis on methods employing novel deep learning techniques, and highlighted the current challenges in drug-likeness prediction, specifically regarding the aspects of generalization and interpretability. Moreover, we explored potential remedies and outlined promising avenues for future research. Conclusion: Despite the hurdles of generalization and interpretability, novel deep learning techniques have great potential in drug-likeness prediction and are worthy of further research efforts.

PMID:38487200 | PMC:PMC10880170 | DOI:10.34133/hds.0098

Categories: Literature Watch

SASAN: ground truth for the effective segmentation and classification of skin cancer using biopsy images

Fri, 2024-03-15 06:00

Diagnosis (Berl). 2024 Mar 18. doi: 10.1515/dx-2024-0012. Online ahead of print.

ABSTRACT

OBJECTIVES: Early skin cancer diagnosis can save lives; however, traditional methods rely on expert knowledge and can be time-consuming. This calls for automated systems using machine learning and deep learning. However, existing datasets often focus on flat skin surfaces, neglecting more complex cases on organs or with nearby lesions.

METHODS: This work addresses this gap by proposing a skin cancer diagnosis methodology using a dataset named ASAN that covers diverse skin cancer cases but suffers from noisy features. To overcome the noisy-feature problem, a segmentation dataset named SASAN is introduced, focusing on Region of Interest (ROI) extraction-based classification. This allows models to concentrate on critical areas within the images while avoiding learning from the noisy features.
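As a rough illustration of ROI extraction-based classification, the sketch below crops a bounding box around a predicted lesion mask before handing the crop to a classifier. The helper name and margin are hypothetical and not part of SASAN.

```python
import numpy as np

def crop_roi(image, mask, margin=16):
    """Crop a bounding box (plus margin) around the predicted lesion mask so the
    classifier sees the ROI rather than the noisy background."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return image                                   # nothing segmented; fall back to full image
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, image.shape[1])
    return image[y0:y1, x0:x1]

# Usage: roi = crop_roi(biopsy_image, predicted_mask); label = classifier(roi)
```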

RESULTS: Various deep learning segmentation models such as UNet, LinkNet, PSPNet, and FPN were trained on the SASAN dataset to perform segmentation-based ROI extraction. Classification was then performed using the dataset with and without ROI extraction. The results demonstrate that ROI extraction significantly improves the performance of these models in classification. This implies that SASAN is effective in evaluating performance metrics for complex skin cancer cases.

CONCLUSIONS: This study highlights the importance of expanding datasets to include challenging scenarios and developing better segmentation methods to enhance automated skin cancer diagnosis. The SASAN dataset serves as a valuable tool for researchers aiming to improve such systems and ultimately contribute to better diagnostic outcomes.

PMID:38487874 | DOI:10.1515/dx-2024-0012

Categories: Literature Watch

High-throughput prediction of enzyme promiscuity based on substrate-product pairs

Fri, 2024-03-15 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae089. doi: 10.1093/bib/bbae089.

ABSTRACT

The screening of enzymes for catalyzing specific substrate-product pairs is often constrained in the realms of metabolic engineering and synthetic biology. Existing tools based on substrate and reaction similarity predominantly rely on prior knowledge, demonstrating limited extrapolative capabilities and an inability to incorporate custom candidate-enzyme libraries. Addressing these limitations, we have developed the Substrate-product Pair-based Enzyme Promiscuity Prediction (SPEPP) model. This innovative approach utilizes transfer learning and transformer architecture to predict enzyme promiscuity, thereby elucidating the intricate interplay between enzymes and substrate-product pairs. SPEPP exhibited robust predictive ability, eliminating the need for prior knowledge of reactions and allowing users to define their own candidate-enzyme libraries. It can be seamlessly integrated into various applications, including metabolic engineering, de novo pathway design, and hazardous material degradation. To better assist metabolic engineers in designing and refining biochemical pathways, particularly those without programming skills, we also designed EnzyPick, an easy-to-use web server for enzyme screening based on SPEPP. EnzyPick is accessible at http://www.biosynther.com/enzypick/.

PMID:38487850 | DOI:10.1093/bib/bbae089

Categories: Literature Watch

PANCDR: precise medicine prediction using an adversarial network for cancer drug response

Fri, 2024-03-15 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae088. doi: 10.1093/bib/bbae088.

ABSTRACT

Pharmacogenomics aims to provide personalized therapy to patients based on their genetic variability. However, accurate prediction of cancer drug response (CDR) is challenging due to genetic heterogeneity. Since clinical data are limited, most studies predicting drug response use preclinical data to train models. However, such models might not be generalizable to external clinical data due to differences between the preclinical and clinical datasets. In this study, a Precision Medicine Prediction using an Adversarial Network for Cancer Drug Response (PANCDR) model is proposed. PANCDR consists of two sub-models, an adversarial model and a CDR prediction model. The adversarial model reduces the gap between the preclinical and clinical datasets, while the CDR prediction model extracts features and predicts responses. PANCDR was trained using both preclinical data and unlabeled clinical data. Subsequently, it was tested on external clinical data, including The Cancer Genome Atlas and brain tumor patients. PANCDR outperformed other machine learning models in predicting external test data. Our results demonstrate the robustness of PANCDR and its potential in precision medicine by recommending patient-specific drug candidates. The PANCDR codes and data are available at https://github.com/DMCB-GIST/PANCDR.
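The abstract describes an adversarial sub-model that narrows the gap between preclinical and clinical feature distributions; a generic sketch of that idea is given below. The loss structure, output shapes, and split into discriminator and encoder steps are assumptions rather than PANCDR's exact architecture.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def adversarial_losses(encoder, discriminator, x_preclinical, x_clinical):
    """Compute discriminator and encoder (alignment) losses for one batch."""
    z_pre, z_cli = encoder(x_preclinical), encoder(x_clinical)
    # Discriminator step: tell preclinical (label 0) from clinical (label 1) embeddings.
    logits = torch.cat([discriminator(z_pre.detach()), discriminator(z_cli.detach())])
    labels = torch.cat([torch.zeros(len(z_pre), 1), torch.ones(len(z_cli), 1)])
    d_loss = bce(logits, labels)
    # Encoder step: make clinical embeddings indistinguishable from preclinical ones.
    g_loss = bce(discriminator(z_cli), torch.zeros(len(z_cli), 1))
    return d_loss, g_loss
```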

PMID:38487849 | DOI:10.1093/bib/bbae088

Categories: Literature Watch

Propagating variational model uncertainty for bioacoustic call label smoothing

Fri, 2024-03-15 06:00

Patterns (N Y). 2024 Feb 12;5(3):100932. doi: 10.1016/j.patter.2024.100932. eCollection 2024 Mar 8.

ABSTRACT

Along with propagating the input toward making a prediction, Bayesian neural networks also propagate uncertainty. This has the potential to guide the training process by rejecting predictions of low confidence, and recent variational Bayesian methods can do so without Monte Carlo sampling of weights. Here, we apply sample-free methods for wildlife call detection on recordings made via passive acoustic monitoring equipment in the animals' natural habitats. We further propose uncertainty-aware label smoothing, where the smoothing probability is dependent on sample-free predictive uncertainty, in order to down-weight data samples that should contribute less to the loss value. We introduce a bioacoustic dataset recorded in Malaysian Borneo, containing overlapping calls from 30 species. On that dataset, our proposed method achieves an absolute percentage improvement of around 1.5 points on area under the receiver operating characteristic (AU-ROC), 13 points in F1, and 19.5 points in expected calibration error (ECE) compared to the point-estimate network baseline averaged across all target classes.
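The mapping from sample-free predictive uncertainty to a smoothing probability is not given in the abstract; the sketch below uses a simple linear mapping from normalized uncertainty to a per-sample smoothing amount, which is an assumption rather than the proposed method.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_smoothing(targets, uncertainty, num_classes, eps_max=0.2):
    """Smooth one-hot targets by an amount that grows with per-sample predictive
    uncertainty, so uncertain samples contribute less sharply to the loss.
    targets: (N,) int class labels; uncertainty: (N,) nonnegative scores."""
    u = uncertainty / (uncertainty.max() + 1e-12)          # normalize to [0, 1]
    eps = (eps_max * u).unsqueeze(1)                       # per-sample smoothing probability
    one_hot = F.one_hot(targets, num_classes).float()
    return (1 - eps) * one_hot + eps / num_classes         # soft targets for a cross-entropy loss
```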

PMID:38487806 | PMC:PMC10935495 | DOI:10.1016/j.patter.2024.100932

Categories: Literature Watch

FRAMM: Fair ranking with missing modalities for clinical trial site selection

Fri, 2024-03-15 06:00

Patterns (N Y). 2024 Mar 1;5(3):100944. doi: 10.1016/j.patter.2024.100944. eCollection 2024 Mar 8.

ABSTRACT

The underrepresentation of gender, racial, and ethnic minorities in clinical trials is a problem undermining the efficacy of treatments on minorities and preventing precise estimates of the effects within these subgroups. We propose FRAMM, a deep reinforcement learning framework for fair trial site selection to help address this problem. We focus on two real-world challenges: the data modalities used to guide selection are often incomplete for many potential trial sites, and the site selection needs to simultaneously optimize for both enrollment and diversity. To address the missing data challenge, FRAMM has a modality encoder with a masked cross-attention mechanism for bypassing missing data. To make efficient trade-offs, FRAMM uses deep reinforcement learning with a reward function designed to simultaneously optimize for both enrollment and fairness. We evaluate FRAMM using real-world historical clinical trials and show that it outperforms the leading baseline in enrollment-only settings while also greatly improving diversity.
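FRAMM's reward function is not spelled out in the abstract; below is a generic scalarized reward that trades off total enrollment against a diversity penalty, purely to illustrate the kind of trade-off described. The penalty form and the weight lam are assumptions.

```python
import numpy as np

def reward(enrollments, group_fractions, population_fractions, lam=0.5):
    """Illustrative scalar reward: total enrollment minus a penalty on the gap
    between the enrolled demographic mix and a reference population mix."""
    total = float(np.sum(enrollments))
    disparity = float(np.abs(np.asarray(group_fractions) - np.asarray(population_fractions)).sum())
    return total - lam * disparity
```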

PMID:38487797 | PMC:PMC10935501 | DOI:10.1016/j.patter.2024.100944

Categories: Literature Watch

Closing the Wearable Gap: Foot-ankle kinematic modeling via deep learning models based on a smart sock wearable

Fri, 2024-03-15 06:00

Wearable Technol. 2023 Feb 20;4:e4. doi: 10.1017/wtc.2023.3. eCollection 2023.

ABSTRACT

The development of wearable technology, which enables motion tracking analysis for human movement outside the laboratory, can improve awareness of personal health and performance. This study used a wearable smart sock prototype to track foot-ankle kinematics during gait movement. Multivariable linear regression and two deep learning models, including long short-term memory (LSTM) and convolutional neural networks, were trained to estimate the joint angles in sagittal and frontal planes measured by an optical motion capture system. Participant-specific models were established for ten healthy subjects walking on a treadmill. The prototype was tested at various walking speeds to assess its ability to track movements for multiple speeds and generalize models for estimating joint angles in sagittal and frontal planes. LSTM outperformed other models with lower mean absolute error (MAE), lower root mean squared error, and higher R-squared values. The average MAE score was less than 1.138° and 0.939° in sagittal and frontal planes, respectively, when training models for each speed and 2.15° and 1.14° when trained and evaluated for all speeds. These results indicate that the wearable smart sock can generalize foot-ankle kinematics across various walking speeds with relatively low error and could consequently be used to measure gait parameters without the need for a lab-constrained motion capture system.
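For illustration, a minimal PyTorch LSTM regressor mapping smart-sock sensor sequences to sagittal and frontal joint angles might look like the sketch below; the channel counts and hidden size are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class SockAngleLSTM(nn.Module):
    """Sequence-to-sequence regressor: sock sensor channels in, sagittal and
    frontal joint angles out (channel counts and hidden size are illustrative)."""
    def __init__(self, n_sensors=8, hidden=64, n_angles=2):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)

    def forward(self, x):          # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out)      # (batch, time, n_angles)

# Training would minimize an L1 (MAE) loss against motion-capture joint angles.
```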

PMID:38487777 | PMC:PMC10936318 | DOI:10.1017/wtc.2023.3

Categories: Literature Watch

Re-tear after arthroscopic rotator cuff repair can be predicted using deep learning algorithm

Fri, 2024-03-15 06:00

Front Artif Intell. 2024 Feb 29;7:1331853. doi: 10.3389/frai.2024.1331853. eCollection 2024.

ABSTRACT

The application of artificial intelligence technology in the medical field has become increasingly prevalent, yet there remains significant room for exploration in its deep implementation. Within the field of orthopedics, which integrates closely with AI due to its extensive data requirements, rotator cuff injuries are a commonly encountered condition affecting joint motion. One of the most severe complications following rotator cuff repair surgery is the recurrence of tears, which has a significant impact on both patients and healthcare professionals. To address this issue, we utilized the innovative EV-GCN algorithm to train a predictive model. We collected medical records of 1,631 patients who underwent rotator cuff repair surgery at a single center over a span of 5 years. In the end, our model successfully predicted postoperative re-tear before the surgery using 62 preoperative variables with an accuracy of 96.93%, and achieved an accuracy of 79.55% on an independent external dataset of 518 cases from other centers. This model outperforms human doctors in predicting outcomes with high accuracy. Through this methodology and research, our aim is to utilize preoperative prediction models to assist in making informed medical decisions during and after surgery, leading to improved treatment effectiveness. This research method and strategy can be applied to other medical fields, and the research findings can assist in making healthcare decisions.

PMID:38487743 | PMC:PMC10938848 | DOI:10.3389/frai.2024.1331853

Categories: Literature Watch

Knowledge-based quality assurance of a comprehensive set of organ at risk contours for head and neck radiotherapy

Fri, 2024-03-15 06:00

Front Oncol. 2024 Feb 29;14:1295251. doi: 10.3389/fonc.2024.1295251. eCollection 2024.

ABSTRACT

INTRODUCTION: Manual review of organ at risk (OAR) contours is crucial for creating safe radiotherapy plans but can be time-consuming and error prone. Statistical and deep learning models show the potential to automatically detect improper contours by identifying outliers using large sets of acceptable data (knowledge-based outlier detection) and may be able to assist human reviewers during review of OAR contours.

METHODS: This study developed an automated knowledge-based outlier detection method and assessed its ability to detect erroneous contours for all common head and neck (HN) OAR types used clinically at our institution. We utilized 490 accurate CT-based HN structure sets from unique patients, each with forty-two HN OAR contours when anatomically present. The structure sets were distributed as 80% for training, 10% for validation, and 10% for testing. In addition, 190 and 37 simulated contours containing errors were added to the validation and test sets, respectively. Single-contour features, including location, shape, orientation, volume, and CT number, were used to train three single-contour feature models (z-score, Mahalanobis distance [MD], and autoencoder [AE]). Additionally, a novel contour-to-contour relationship (CCR) model was trained using the minimum distance and volumetric overlap between pairs of OAR contours to quantify overlap and separation. Inferences from single-contour feature models were combined with the CCR model inferences and inferences evaluating the number of disconnected parts in a single contour and then compared.
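As a sketch of the single-contour-feature outlier scores named above, the z-score and Mahalanobis distance models can be written as below; the feature vectors, per-OAR training sets, and thresholding are assumptions rather than the study's exact configuration.

```python
import numpy as np

def zscore_outlier_score(x, train_features):
    """Largest absolute z-score of a contour's feature vector relative to the
    training distribution for that OAR type."""
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-12
    return float(np.max(np.abs((x - mu) / sigma)))

def mahalanobis_outlier_score(x, train_features):
    """Mahalanobis distance of the feature vector from the training distribution."""
    mu = train_features.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(train_features, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Contours whose score exceeds a per-OAR threshold (set on validation data) would be flagged.
```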

RESULTS: In the test dataset, before combination with the CCR model, the area under the curve values were 0.922/0.939/0.939 for the z-score, MD, and AE models respectively for all contours. After combination with CCR model inferences, the z-score, MD, and AE had sensitivities of 0.838/0.892/0.865, specificities of 0.922/0.907/0.887, and balanced accuracies (BA) of 0.880/0.900/0.876 respectively. In the validation dataset, with similar overall performance and no signs of overfitting, model performance for individual OAR types was assessed. The combined AE model demonstrated minimum, median, and maximum BAs of 0.729, 0.908, and 0.980 across OAR types.

DISCUSSION: Our novel knowledge-based method combines models utilizing single-contour and CCR features to effectively detect erroneous OAR contours across a comprehensive set of 42 clinically used OAR types for HN radiotherapy.

PMID:38487718 | PMC:PMC10937434 | DOI:10.3389/fonc.2024.1295251

Categories: Literature Watch

Predicting Risk of Mortality in Pediatric ICU Based on Ensemble Step-Wise Feature Selection

Fri, 2024-03-15 06:00

Health Data Sci. 2021 May 31;2021:9365125. doi: 10.34133/2021/9365125. eCollection 2021.

ABSTRACT

Background. Prediction of mortality risk in intensive care units (ICU) is an important task. Data-driven methods such as scoring systems, machine learning methods, and deep learning methods have been investigated for a long time. However, few data-driven methods are specially developed for the pediatric ICU. In this paper, we aim to fill this gap by building a simple yet effective linear machine learning model from a number of hand-crafted features for mortality prediction in the pediatric ICU. Methods. We use a recently released, publicly available pediatric ICU dataset named pediatric intensive care (PIC) from the Children's Hospital of Zhejiang University School of Medicine in China. Unlike previous sophisticated machine learning methods, we want our method to remain simple enough to be easily understood by clinical staff. Thus, an ensemble step-wise feature ranking and selection method is proposed to select a small subset of effective features from the entire feature set. A logistic regression classifier is built upon the selected features for mortality prediction. Results. The final predictive linear model with 11 features achieves a 0.7531 ROC-AUC score on the hold-out test set, which is comparable with a logistic regression classifier using all 397 features (0.7610 ROC-AUC score) and is higher than the existing well-known pediatric mortality risk scorer PRISM III (0.6895 ROC-AUC score). Conclusions. Our method improves feature ranking and selection by utilizing an ensemble method while keeping a simple linear form of the predictive model, and therefore achieves better generalizability and performance on mortality prediction in the pediatric ICU.
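For illustration, a greedy forward step-wise selection wrapped around a logistic regression and scored by cross-validated ROC-AUC might look like the sketch below; this is a simplification of the paper's ensemble step-wise ranking, and the cross-validation settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, max_features=11):
    """Greedy forward selection: at each step add the feature that most improves
    cross-validated ROC-AUC of a logistic regression."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = []
        for j in remaining:
            cols = selected + [j]
            auc = cross_val_score(LogisticRegression(max_iter=1000),
                                  X[:, cols], y, cv=5, scoring="roc_auc").mean()
            scores.append((auc, j))
        best_auc, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```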

PMID:38487508 | PMC:PMC10880178 | DOI:10.34133/2021/9365125

Categories: Literature Watch

Advances in Deep Learning-Based Medical Image Analysis

Fri, 2024-03-15 06:00

Health Data Sci. 2021 May 19;2021:8786793. doi: 10.34133/2021/8786793. eCollection 2021.

ABSTRACT

Importance. With the booming growth of artificial intelligence (AI), especially the recent advancements of deep learning, utilizing advanced deep learning-based methods for medical image analysis has become an active research area in both the medical industry and academia. This paper reviewed the recent progress of deep learning research in medical image analysis and clinical applications. It also discussed the existing problems in the field and provided possible solutions and future directions. Highlights. This paper reviewed the advancement of convolutional neural network-based techniques in clinical applications. More specifically, state-of-the-art clinical applications include four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models perform well in medical image analysis, but algorithms derived from small-scale medical datasets still impede clinical applicability. Future directions could include federated learning, benchmark dataset collection, and utilizing domain subject knowledge as priors. Conclusion. Recent advanced deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advancements that can alleviate the high demands on high-quality large-scale datasets could be one of the future developments in this area.

PMID:38487506 | PMC:PMC10880179 | DOI:10.34133/2021/8786793

Categories: Literature Watch

2.5D UNet with context-aware feature sequence fusion for accurate esophageal tumor semantic segmentation

Thu, 2024-03-14 06:00

Phys Med Biol. 2024 Mar 14. doi: 10.1088/1361-6560/ad3419. Online ahead of print.

ABSTRACT

Segmenting esophageal tumors from computed tomography (CT) sequence images can assist doctors in diagnosing and treating patients with this malignancy. However, accurately extracting esophageal tumor features from CT images often presents challenges due to the tumors' small area, variable position and shape, and low contrast with surrounding tissues. As a result, current methods do not achieve the level of accuracy required for practical applications. To address this problem, we propose a 2.5D Context-Aware Feature Sequence Fusion UNet (2.5D CFSF-UNet) model for esophageal tumor segmentation in CT sequence images. Specifically, we embed Intra-slice Multiscale Attention Feature Fusion (Intra-slice MAFF) in each skip connection of UNet to improve feature learning capabilities, better expressing the differences between anatomical structures within CT sequence images. Additionally, the Inter-slice Context Fusion Block (Inter-slice CFB) is utilized in the final layer of UNet to enhance the depiction of context features between CT slices, thereby preventing the loss of structural information between slices. Experiments are conducted on a dataset of 430 esophageal tumor patients. The results show an 87.13% dice similarity coefficient, a 79.71% intersection over union (IOU) and a 2.4758 mm Hausdorff distance, which demonstrates that our approach can improve contouring consistency and can be applied to clinical applications.

PMID:38484399 | DOI:10.1088/1361-6560/ad3419

Categories: Literature Watch

Automatic Detection and Tracking of Anatomical Landmarks in Transesophageal Echocardiography for Quantification of Left Ventricular Function

Thu, 2024-03-14 06:00

Ultrasound Med Biol. 2024 Mar 13:S0301-5629(24)00031-0. doi: 10.1016/j.ultrasmedbio.2024.01.017. Online ahead of print.

ABSTRACT

OBJECTIVE: Evaluation of left ventricular (LV) function in critical care patients is useful for guidance of therapy and early detection of LV dysfunction, but the tools currently available are too time-consuming. To resolve this issue, we previously proposed a method for the continuous and automatic quantification of global LV function in critical care patients based on the detection and tracking of anatomical landmarks on transesophageal heart ultrasound. In the present study, our aim was to improve the performance of mitral annulus detection in transesophageal echocardiography (TEE).

METHODS: We investigated several state-of-the-art networks for both the detection and tracking of the mitral annulus in TEE. We integrated the networks into a pipeline for automatic assessment of LV function through estimation of the mitral annular plane systolic excursion (MAPSE), called autoMAPSE. TEE recordings from a total of 245 patients were collected from St. Olav's University Hospital and used to train and test the respective networks. We evaluated the agreement between autoMAPSE estimates and manual references annotated by expert echocardiographers in 30 Echolab patients and 50 critical care patients. Furthermore, we proposed a prototype of autoMAPSE for clinical integration and tested it in critical care patients in the intensive care unit.
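MAPSE itself reduces to the long-axis excursion of the tracked mitral annulus landmarks; a minimal sketch of that final computation is shown below, assuming landmark tracks in millimetres and a simple annulus-to-apex projection, which is not necessarily how autoMAPSE computes it.

```python
import numpy as np

def mapse_from_track(annulus_positions, apex_position):
    """Mitral annular plane systolic excursion as the peak-to-trough displacement
    of a tracked annulus landmark projected onto the annulus-to-apex direction.
    annulus_positions: (T, 2) landmark track in mm; apex_position: (2,) in mm."""
    axis = apex_position - annulus_positions[0]
    axis = axis / (np.linalg.norm(axis) + 1e-12)             # unit long-axis direction
    displacement = (annulus_positions - annulus_positions[0]) @ axis
    return float(displacement.max() - displacement.min())    # excursion over the heartbeat
```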

RESULTS: Compared with manual references, we achieved a mean difference of 0.8 (95% limits of agreement: -2.9 to 4.7) mm in Echolab patients, with a feasibility of 85.7%. In critical care patients, we reached a mean difference of 0.6 (95% limits of agreement: -2.3 to 3.5) mm and a feasibility of 88.1%. The clinical prototype of autoMAPSE achieved real-time performance.

CONCLUSION: Automatic quantification of LV function had high feasibility in clinical settings. The agreement with manual references was comparable to inter-observer variability of clinical experts.

PMID:38485534 | DOI:10.1016/j.ultrasmedbio.2024.01.017

Categories: Literature Watch

Deep learning model to predict lupus nephritis renal flare based on dynamic multivariable time-series data

Thu, 2024-03-14 06:00

BMJ Open. 2024 Mar 14;14(3):e071821. doi: 10.1136/bmjopen-2023-071821.

ABSTRACT

OBJECTIVES: To develop an interpretable deep learning model of lupus nephritis (LN) relapse prediction based on dynamic multivariable time-series data.

DESIGN: A single-centre, retrospective cohort study in China.

SETTING: A Chinese central tertiary hospital.

PARTICIPANTS: The cohort study consisted of 1694 LN patients who had been registered in the Nanjing Glomerulonephritis Registry at the National Clinical Research Center of Kidney Diseases, Jinling Hospital from January 1985 to December 2010.

METHODS: We developed a deep learning algorithm to predict LN relapse based on 59 features, including demographic, clinical, immunological, pathological and therapeutic characteristics collected for baseline analysis. A total of 32,227 data points were collected by the sliding window method and randomly divided into training (80%), validation (10%) and testing sets (10%). We developed an interpretable multivariable long short-term memory model for LN relapse risk prediction that accounts for censored time-series data, based on the cohort of 1694 LN patients. A mixture attention mechanism was deployed to capture variable interactions at different time points for estimating the temporal importance of the variables. Model performance was assessed according to the C-index (concordance index).
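For reference, the C-index used for evaluation can be computed over comparable patient pairs as in the simplified sketch below (ties and some censoring subtleties are ignored); this is a generic Harrell's C-index, not the authors' exact evaluation code.

```python
def concordance_index(time, event, risk):
    """Harrell's C-index over comparable pairs: if patient i relapsed before
    patient j's follow-up ended, the model should assign i a higher risk."""
    num, den = 0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                       # only observed events anchor comparable pairs
        for j in range(len(time)):
            if time[i] < time[j]:          # j was still relapse-free at i's relapse time
                den += 1
                num += int(risk[i] > risk[j])
    return num / den if den else float("nan")
```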

RESULTS: The median follow-up time since remission was 4.1 (IQR, 1.7-6.7) years. The interpretable deep learning model based on dynamic multivariable time-series data achieved the best performance, with a C-index of 0.897, among models using only variables at the point of remission or time-variant variables. The importance of urinary protein, serum albumin and serum C3 showed time dependency in the model, that is, their contributions to the risk prediction increased over time.

CONCLUSIONS: Deep learning algorithms can effectively learn through time-series data to develop a predictive model for LN relapse. The model provides accurate predictions of LN relapse for different renal disease stages, which could be used in clinical practice to guide physicians on the management of LN patients.

PMID:38485471 | DOI:10.1136/bmjopen-2023-071821

Categories: Literature Watch

Validation of a deep learning model for automatic detection and quantification of five OCT critical retinal features associated with neovascular age-related macular degeneration

Thu, 2024-03-14 06:00

Br J Ophthalmol. 2024 Mar 14:bjo-2023-324647. doi: 10.1136/bjo-2023-324647. Online ahead of print.

ABSTRACT

PURPOSE: To develop and validate a deep learning model for the segmentation of five retinal biomarkers associated with neovascular age-related macular degeneration (nAMD).

METHODS: 300 optical coherence tomography volumes from subject eyes with nAMD were collected. Images were manually segmented for the presence of five crucial nAMD features: intraretinal fluid, subretinal fluid, subretinal hyperreflective material, drusen/drusenoid pigment epithelium detachment (PED) and neovascular PED. A deep learning architecture based on a U-Net was trained to perform automatic segmentation of these retinal biomarkers and evaluated on the sequestered data. The main outcome measures were receiver operating characteristic curves for detection, summarised using the area under the curves (AUCs) both on a per slice and per volume basis, correlation score, enface topography overlap (reported as two-dimensional (2D) correlation score) and Dice coefficients.
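Among the outcome measures listed above, the Dice coefficient for a single biomarker mask is straightforward to state in code; a minimal sketch is given below, with the smoothing constant as an assumption.

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice coefficient between a predicted and a manually segmented biomarker mask."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```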

RESULTS: The model obtained a mean (±SD) AUC of 0.93 (±0.04) per slice and 0.88 (±0.07) per volume for fluid detection. The correlation score (R2) between automatic and manual segmentation obtained by the model resulted in a mean (±SD) of 0.89 (±0.05). The mean (±SD) 2D correlation score was 0.69 (±0.04). The mean (±SD) Dice score resulted in 0.61 (±0.10).

CONCLUSIONS: We present a fully automated segmentation model for five features related to nAMD that performs at the level of experienced graders. The application of this model will open opportunities for the study of morphological changes and treatment efficacy in real-world settings. Furthermore, it can facilitate structured reporting in the clinic and reduce subjectivity in clinicians' assessments.

PMID:38485214 | DOI:10.1136/bjo-2023-324647

Categories: Literature Watch

Results of an AI-Based Image Review System to Detect Patient Misalignment Errors in a Multi-Institutional Database of CBCT-Guided Radiotherapy Treatments

Thu, 2024-03-14 06:00

Int J Radiat Oncol Biol Phys. 2024 Mar 12:S0360-3016(24)00392-4. doi: 10.1016/j.ijrobp.2024.02.065. Online ahead of print.

ABSTRACT

PURPOSE: Present knowledge of patient setup and alignment errors in image-guided radiotherapy (IGRT) relies on voluntary reporting, which is thought to underestimate error frequencies. A manual retrospective patient-setup misalignment error search is infeasible due to the bulk of cases to be reviewed. We applied a deep learning-based misalignment error detection algorithm (EDA) to perform a fully-automated retrospective error search of clinical IGRT databases and determine an absolute gross patient misalignment error rate.

METHODS: The EDA was developed to analyze the registration between planning scans and pre-treatment CBCT scans, outputting a misalignment score ranging from 0 (most unlikely) to 1 (most likely). The algorithm was trained using simulated translational errors on a dataset obtained from 680 patients treated at two radiotherapy clinics between 2017 and 2022. A receiver operating characteristic analysis was performed to obtain target thresholds. A DICOM Query and Retrieval software was integrated with the EDA to interact with the clinical database and fully automate data retrieval and analysis during a retrospective error search from 2016-2017 and 2021-2022 for the two institutions, respectively. Registrations were flagged for human review using both a hard-thresholding method and a prediction trending analysis over each individual patient's treatment course. Flagged registrations were manually reviewed and categorized as errors (>1cm misalignment at the target) or non-errors.
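The hard-thresholding step can be sketched as a ROC analysis that picks the highest score threshold meeting a target sensitivity; the sklearn-based sketch below is an illustration, with the target value and function name as assumptions rather than the study's actual criterion.

```python
from sklearn.metrics import roc_curve

def pick_threshold(y_true, scores, min_sensitivity=0.95):
    """Choose the misalignment-score threshold from a ROC analysis: the largest
    threshold whose sensitivity still meets the target (the target is illustrative)."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    ok = tpr >= min_sensitivity
    return thresholds[ok][0] if ok.any() else thresholds[-1]

# Registrations whose score exceeds the chosen threshold would be flagged for human review.
```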

RESULTS: A total of 17,612 registrations were analyzed by the EDA, resulting in 7.7% flagged events. Three previously reported errors were successfully flagged by the EDA and four previously-unreported vertebral body misalignment errors were discovered during case reviews. False positive cases often displayed substantial image artifacts, patient rotation, and soft-tissue anatomy changes.

CONCLUSION: Our results validated the clinical utility of the EDA for bulk image reviews, and highlighted the reliability and safety of IGRT, with an absolute gross patient misalignment error rate of 0.04% ± 0.02% per delivered fraction.

PMID:38485098 | DOI:10.1016/j.ijrobp.2024.02.065

Categories: Literature Watch

Deepm6A-MT: A deep learning-based method for identifying RNA N6-methyladenosine sites in multiple tissues

Thu, 2024-03-14 06:00

Methods. 2024 Mar 12:S1046-2023(24)00067-7. doi: 10.1016/j.ymeth.2024.03.004. Online ahead of print.

ABSTRACT

N6-methyladenosine (m6A) is the most prevalent, abundant, and conserved internal modification in eukaryotic messenger RNA (mRNA) and plays a crucial role in cellular processes. Although more than ten methods have been developed for m6A detection over the past decades, there is still room to improve predictive accuracy and efficiency. In this paper, we proposed Deepm6A-MT, an improved method for predicting m6A modification sites based on a bi-directional gated recurrent unit (Bi-GRU) and convolutional neural networks (CNN). Deepm6A-MT has two input channels: one uses an embedding layer followed by the Bi-GRU and then the CNN, and the other uses one-hot encoding, dinucleotide one-hot encoding, and nucleotide chemical property codes. We trained and evaluated Deepm6A-MT using both 5-fold cross-validation and an independent test. The empirical tests showed that Deepm6A-MT achieved state-of-the-art performance. In addition, we also conducted cross-species and cross-tissue tests to further verify the effectiveness and efficiency of Deepm6A-MT. Finally, for the convenience of academic research, we deployed Deepm6A-MT as a web server, accessible at http://www.biolscience.cn/Deepm6A-MT/.
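The second input channel's encodings can be sketched as below; the chemical property table follows the commonly used nucleotide chemical property (NCP) convention (ring structure, functional group, hydrogen bonding), which is an assumption rather than the paper's exact table.

```python
import numpy as np

ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0], "G": [0, 0, 1, 0], "U": [0, 0, 0, 1]}
# Common nucleotide chemical property (NCP) code: ring structure, functional group,
# hydrogen bonding (an assumed convention, not necessarily the paper's exact table).
NCP = {"A": [1, 1, 1], "C": [0, 1, 0], "G": [1, 0, 0], "U": [0, 0, 1]}

def encode(seq):
    """Per-nucleotide concatenation of one-hot and chemical-property codes: (L, 7) array."""
    return np.array([ONE_HOT[b] + NCP[b] for b in seq.upper()], dtype=float)

# Usage: features = encode("GGACU")   # shape (5, 7)
```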

PMID:38485031 | DOI:10.1016/j.ymeth.2024.03.004

Categories: Literature Watch

Improved nested U-structure for accurate nailfold capillary segmentation

Thu, 2024-03-14 06:00

Microvasc Res. 2024 Mar 12:104680. doi: 10.1016/j.mvr.2024.104680. Online ahead of print.

ABSTRACT

Changes in the structure and function of nailfold capillaries may be indicators of numerous diseases. Noninvasive diagnostic tools are commonly used for the extraction of morphological information from segmented nailfold capillaries to study physiological and pathological changes therein. However, current segmentation methods for nailfold capillaries cannot accurately separate capillaries from the background, resulting in issues such as unclear segmentation boundaries. Therefore, improving the accuracy of nailfold capillary segmentation is necessary to facilitate more efficient clinical diagnosis and research. Herein, we propose a nailfold capillary image segmentation method based on a U2-Net backbone network combined with a Transformer structure. This method integrates the U2-Net and Transformer networks to establish a decoder-encoder network, which inserts Transformer layers into the nested two-layer U-shaped architecture of the U2-Net. This structure effectively extracts multiscale features within stages and aggregates multilevel features across stages to generate high-resolution feature maps. The experimental results demonstrate an overall accuracy of 98.23%, a Dice coefficient of 88.56%, and an IoU of 80.41% compared to the ground truth. Furthermore, our proposed method improves the overall accuracy by approximately 2%, 3%, and 5% compared to the original U2-Net, Res-Unet, and U-Net, respectively. These results indicate that the Transformer-U2Net network performs well in nailfold capillary image segmentation and provides more detailed and accurate information on the segmented nailfold capillary structure, which may aid clinicians in the more precise diagnosis and treatment of nailfold capillary-related diseases.

PMID:38484792 | DOI:10.1016/j.mvr.2024.104680

Categories: Literature Watch
