Deep learning
Boundary guidance network for medical image segmentation
Sci Rep. 2024 Jul 28;14(1):17345. doi: 10.1038/s41598-024-67554-0.
ABSTRACT
Accurate segmentation of the tumor area is crucial for the treatment and prognosis of patients with bladder cancer. Cystoscopy is the gold standard for diagnosing bladder tumors. However, the vast majority of current work uses deep learning to identify and segment tumors from CT and MRI findings, and rarely involves cystoscopy findings. Accurately segmenting bladder tumors remains a great challenge due to their diverse morphology and fuzzy boundaries. To solve these problems, this paper proposes a medical image segmentation network with boundary guidance, called the boundary guidance network. The network combines local features extracted by CNNs with long-range dependencies between different levels captured by a parallel ViT, which allows tumor features to be captured more effectively. A boundary extraction module is designed to extract boundary features and use them to guide the decoding process, and foreground-background dual-channel decoding is performed by a boundary integration module. Experimental results on our proposed new cystoscopic bladder tumor dataset (BTD) show that our method efficiently produces accurate tumor segmentations and retains more boundary information, achieving an IoU score of 91.3%, a Hausdorff distance of 10.43, an mAP score of 85.3%, and an F1 score of 94.8%. On BTD and three other public datasets, our model achieves the best scores compared with state-of-the-art methods, demonstrating its effectiveness for common medical image segmentation.
PMID:39069513 | DOI:10.1038/s41598-024-67554-0
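As a rough, self-contained sketch (not the authors' implementation), the IoU and F1 scores reported in the abstract above can be computed from binary segmentation masks as follows; the toy 4x4 masks are invented for illustration:

```python
import numpy as np

def iou_and_f1(pred: np.ndarray, target: np.ndarray):
    """Compute IoU (Jaccard index) and F1 (Dice) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + target.sum()
    f1 = 2 * inter / denom if denom else 1.0
    return float(iou), float(f1)

# Invented toy masks: predicted tumor region vs. ground truth
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
iou, f1 = iou_and_f1(pred, gt)  # intersection 3, union 4 -> IoU 0.75, F1 6/7
```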
Comparison of the Efficacy of Artificial Intelligence-Powered Software in Crown Design: An In Vitro Study
Int Dent J. 2024 Jul 28:S0020-6539(24)00196-5. doi: 10.1016/j.identj.2024.06.023. Online ahead of print.
ABSTRACT
INTRODUCTION AND AIMS: Artificial intelligence (AI) has been adopted in the field of dental restoration. This study aimed to evaluate the time efficiency and morphological accuracy of crowns designed by two AI-powered software programs in comparison with conventional computer-aided design software.
METHODS: A total of 33 clinically adapted posterior crowns constituted the standard group. The AI Automate (AA) and AI Dentbird Crown (AD) groups used two AI-powered design software programs, while the computer-aided experienced and computer-aided novice groups employed the Exocad DentalCAD software. Time efficiency of the AI-powered groups versus the computer-aided groups was evaluated by measuring elapsed design time. Morphological accuracy was assessed by three-dimensional geometric calculations, with the root-mean-square error computed against the standard group. Statistical analysis was conducted via the Kruskal-Wallis test (α = 0.05).
RESULTS: The time efficiency of the AI-powered groups was significantly higher than that of the computer-aided groups (P < .01). Moreover, the working time for both the AA and AD groups was only one-quarter of that for the computer-aided novice group. The four groups differed significantly in morphological accuracy for occlusal and distal surfaces (P < .05). The AD group showed lower accuracy than the other three groups on occlusal surfaces (P < .001), and the computer-aided experienced group was superior to the AA group on distal surfaces (P = .029). However, morphological accuracy showed no significant difference among the four groups for mesial surfaces and margin lines (P > .05).
CONCLUSION: AI-powered software enhanced the efficiency of crown design but failed to excel at morphological accuracy compared with experienced technicians using computer-aided software. AI-powered software requires further research and extensive deep learning to improve the morphological accuracy and stability of the crown design.
PMID:39069456 | DOI:10.1016/j.identj.2024.06.023
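The root-mean-square error used above to compare designed crowns against the standard group reduces to a simple formula over per-point surface deviations. A minimal sketch with hypothetical deviation values (not the study's data):

```python
import numpy as np

def rmse(deviations: np.ndarray) -> float:
    """Root-mean-square error over signed per-point surface deviations."""
    return float(np.sqrt(np.mean(np.square(deviations))))

# Hypothetical signed deviations (mm) between a designed and reference surface
dev = np.array([0.1, -0.2, 0.2, -0.1])
err = rmse(dev)  # mean of squares = 0.025, so RMSE = sqrt(0.025)
```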
Deep learning-based surgical phase recognition in laparoscopic cholecystectomy
Ann Hepatobiliary Pancreat Surg. 2024 Jul 29. doi: 10.14701/ahbps.24-091. Online ahead of print.
ABSTRACT
BACKGROUNDS/AIMS: In the minimally invasive surgery era, artificial intelligence (AI) technology has been used to assess surgical quality, educate surgeons, and evaluate surgical performance from video recordings. Much attention has been paid to automating surgical workflow analysis from surgical videos to enable effective assessment and evaluation. This study aimed to design a deep learning model that automatically identifies surgical phases in laparoscopic cholecystectomy videos and to assess its phase-recognition accuracy.
METHODS: One hundred and twenty cholecystectomy videos from a public dataset (Cholec80) and 40 laparoscopic cholecystectomy videos recorded between July 2022 and December 2022 at a single institution were collected. These datasets were split into training and testing datasets for the AI model at a 2:1 ratio. Test scenarios were constructed according to structural characteristics of the trained model. No pre- or post-processing of input data or inference output was performed to accurately analyze the effect of the label on model training.
RESULTS: A total of 98,234 frames were extracted from 40 cases as test data. The overall accuracy of the model was 91.2%. The most accurate phase was Calot's triangle dissection (F1 score: 0.9421), whereas the least accurate phase was clipping and cutting (F1 score: 0.7761).
CONCLUSIONS: Our AI model identified the phases of laparoscopic cholecystectomy with high accuracy.
PMID:39069309 | DOI:10.14701/ahbps.24-091
Deep learning-assisted multispectral imaging for early screening of skin diseases
Photodiagnosis Photodyn Ther. 2024 Jul 26:104292. doi: 10.1016/j.pdpdt.2024.104292. Online ahead of print.
ABSTRACT
INTRODUCTION: Melanocytic nevi (MN), warts, seborrheic keratoses (SK), and psoriasis are four common types of skin surface lesions that typically require dermatoscopic examination for definitive diagnosis in clinical dermatology settings. This process is labor-intensive and resource-consuming. Traditional methods for diagnosing skin lesions rely heavily on the subjective judgment of dermatologists, leading to issues in diagnostic accuracy and prolonged detection times.
OBJECTIVES: This study aims to introduce a multispectral imaging (MSI)-based method for the early screening and detection of skin surface lesions. By capturing image data at multiple wavelengths, MSI can detect subtle spectral variations in tissues, significantly enhancing the differentiation of various skin conditions.
METHODS: The proposed method utilizes a pixel-level mosaic imaging spectrometer to capture multispectral images of lesions, followed by reflectance calibration and standardization. Regions of interest were manually extracted, and the spectral data were subsequently exported for analysis. An improved one-dimensional convolutional neural network is then employed to train and classify the data.
RESULTS: The new method achieves an accuracy of 96.82% on the test set, demonstrating its efficacy.
CONCLUSION: This multispectral imaging approach provides a non-contact and non-invasive method for early screening, effectively addressing the subjective identification of lesions by dermatologists and the prolonged detection times associated with conventional methods. It offers enhanced diagnostic accuracy for a variety of skin lesions, suggesting new avenues for dermatological diagnostics.
PMID:39069204 | DOI:10.1016/j.pdpdt.2024.104292
Clinical-Grade Validation of an Autofluorescence Virtual Staining System with Human Experts and a Deep Learning System for Prostate Cancer
Mod Pathol. 2024 Jul 26:100573. doi: 10.1016/j.modpat.2024.100573. Online ahead of print.
ABSTRACT
The tissue diagnosis of adenocarcinoma and intraductal carcinoma of the prostate (IDC-P) includes Gleason grading of tumor morphology on the hematoxylin and eosin (H&E) stain, and immunohistochemistry (IHC) markers on the PIN-4 stain (CK5/6, P63, AMACR). In this work, we create an automated system for producing both virtual H&E and PIN-4 IHC stains from unstained prostate tissue using a high-throughput hyperspectral fluorescence microscope together with artificial intelligence and machine learning. We demonstrate that the virtual stainer models can produce high-quality images suitable for diagnosis by genitourinary pathologists. Specifically, we validate our system through extensive human review and computational analysis, using a previously validated Gleason scoring model and an expert panel, on a large dataset of test slides. This study extends our previous work on virtual staining from autofluorescence, demonstrates the clinical utility of this technology for prostate cancer, and exemplifies a rigorous standard of qualitative and quantitative evaluation for digital pathology.
PMID:39069201 | DOI:10.1016/j.modpat.2024.100573
Spectro-ViT: A vision transformer model for GABA-edited MEGA-PRESS reconstruction using spectrograms
Magn Reson Imaging. 2024 Jul 26:110219. doi: 10.1016/j.mri.2024.110219. Online ahead of print.
ABSTRACT
This study investigated the use of a Vision Transformer (ViT) for reconstructing GABA-edited Magnetic Resonance Spectroscopy (MRS) data from a reduced number of transients. Transients refer to the samples collected during an MRS acquisition by repeating the experiment to generate a signal of sufficient quality. Specifically, 80 transients were used instead of the typical 320 transients, aiming to reduce scan time. The 80 transients were pre-processed and converted into a spectrogram image representation using the Short-Time Fourier Transform (STFT). A pre-trained ViT, named Spectro-ViT, was fine-tuned and then tested using in-vivo GABA-edited MEGA-PRESS data. Its performance was compared against other pipelines in the literature using quantitative quality metrics and estimated metabolite concentration values, with the typical 320-transient scans serving as the reference for comparison. The Spectro-ViT model exhibited the best overall quality metrics among all other pipelines against which it was compared. The metabolite concentrations from Spectro-ViT's reconstructions for GABA+ achieved the best average R2 value of 0.67 and the best average Mean Absolute Percentage Error (MAPE) value of 9.68%, with no significant statistical differences found compared to the 320-transient reference. The code to reproduce this research is available at https://github.com/MICLab-Unicamp/Spectro-ViT.
PMID:39069027 | DOI:10.1016/j.mri.2024.110219
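The STFT spectrogram representation described above can be sketched in a few lines. This is a generic Hann-windowed STFT applied to an invented real-valued decaying signal, not the Spectro-ViT preprocessing pipeline itself (which operates on complex MRS transients):

```python
import numpy as np

def stft_magnitude(signal, win_len=64, hop=32):
    """Magnitude Short-Time Fourier Transform with a Hann window.
    Returns an array of shape (freq_bins, time_frames)."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))  # one spectrum per frame
    return np.stack(frames, axis=1)

# Invented stand-in for an averaged transient: a decaying oscillation
t = np.arange(1024) / 1000.0
fid = np.exp(-t / 0.1) * np.cos(2 * np.pi * 50 * t)
spec = stft_magnitude(fid)
# 64-sample rfft gives 33 frequency bins; (1024 - 64)//32 + 1 = 31 frames
```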
An unrolled neural network for accelerated dynamic MRI based on second-order half-quadratic splitting model
Magn Reson Imaging. 2024 Jul 26:110218. doi: 10.1016/j.mri.2024.110218. Online ahead of print.
ABSTRACT
The reconstruction of dynamic magnetic resonance images from incomplete k-space data has sparked significant research interest due to its potential to reduce scan time. However, traditional iterative optimization algorithms fail to faithfully reconstruct images at higher acceleration factors and incur long reconstruction times. Furthermore, end-to-end deep learning-based reconstruction algorithms suffer from large model parameter counts and lack robustness in their reconstructions. Recently, unrolled deep learning models have shown immense potential in algorithmic stability and flexibility of application. In this paper, we propose an unrolled deep learning network based on a second-order half-quadratic splitting (HQS) algorithm, whose forward propagation strictly follows the computational flow of the HQS algorithm. In particular, we propose a degradation-aware module that associates random sampling patterns with intermediate variables to guide the iterative process. We introduce an Information Fusion Transformer (IFT) to extract both local and non-local prior information from image sequences, thereby removing the aliasing artifacts caused by random undersampling. Finally, we impose low-rank constraints within the HQS algorithm to further enhance the reconstruction results. The experiments demonstrate that each component module contributes to the improvement of the reconstruction task. Our proposed method performs comparably to state-of-the-art methods and exhibits excellent generalization across different sampling masks. At a low acceleration factor, PSNR improves by 0.7%; at acceleration factors of 8 and 12, PSNR improves by 3.4% and 5.8%, respectively.
PMID:39069026 | DOI:10.1016/j.mri.2024.110218
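Half-quadratic splitting, the scheme this network unrolls, can be illustrated on a toy 1D denoising problem, min_x 0.5||x - y||^2 + lam*||x||_1: an auxiliary variable z is split off from the regularizer and the method alternates a closed-form quadratic x-step with a proximal z-step. The l1 regularizer, penalty weight, and iteration count below are illustrative choices, not the paper's model:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hqs_denoise(y, lam=0.5, mu=10.0, iters=100):
    """Half-quadratic splitting for min_x 0.5||x - y||^2 + lam*||x||_1.
    The coupling penalty mu ties x to the auxiliary variable z."""
    x = y.copy()
    for _ in range(iters):
        z = soft(x, lam / mu)          # z-step: prox of the l1 term
        x = (y + mu * z) / (1.0 + mu)  # x-step: closed-form quadratic solve
    return x

y = np.array([2.0, 0.3, -1.0])
x = hqs_denoise(y)  # approaches the exact solution soft(y, 0.5) = [1.5, 0, -0.5]
```

An unrolled network replaces the hand-crafted prox (here, soft-thresholding) with a learned module at each of a fixed number of iterations.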
Endoscopic Artificial Intelligence for Image Analysis in Gastrointestinal Neoplasms
Digestion. 2024 Jul 26. doi: 10.1159/000540251. Online ahead of print.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) using deep learning systems has recently been utilized in various medical fields. In the field of gastroenterology, AI is primarily implemented in image recognition and utilized in the realm of gastrointestinal (GI) endoscopy. In GI endoscopy, computer-aided detection/diagnosis (CAD) systems assist endoscopists in GI neoplasm detection or differentiation of cancerous or non-cancerous lesions. Several AI systems for colorectal polyps have already been applied in colonoscopy clinical practices. In esophagogastroduodenoscopy, a few CAD systems for upper GI neoplasms have been launched in Asian countries. The usefulness of these CAD systems in GI endoscopy has been gradually elucidated.
SUMMARY: In this review, we outline recent articles on several studies of endoscopic AI systems for GI neoplasms, focusing on esophageal squamous cell carcinoma (ESCC), esophageal adenocarcinoma (EAC), gastric cancer (GC), and colorectal polyps. In ESCC and EAC, computer-aided detection (CADe) systems were mainly developed, and a recent meta-analysis study showed sensitivities of 91.2% and 93.1% and specificities of 80% and 86.9%, respectively. In GC, a recent meta-analysis study on CADe systems demonstrated that their sensitivity and specificity were as high as 90%. A randomized controlled trial (RCT) also showed that the use of the CADe system reduced the miss rate. Regarding computer-aided diagnosis (CADx) systems for GC, although RCTs have not yet been conducted, most studies have demonstrated expert-level performance. In colorectal polyps, multiple RCTs have shown the usefulness of the CADe system for improving the polyp detection rate, and several CADx systems have been shown to have high accuracy in colorectal polyp differentiation.
KEY MESSAGES: Most analyses of endoscopic AI systems suggest that their performance is better than that of non-expert endoscopists and equivalent to that of expert endoscopists. Thus, endoscopic AI systems may be useful for reducing the risk of overlooking lesions and improving the diagnostic ability of endoscopists.
PMID:39068926 | DOI:10.1159/000540251
Using deep learning for predicting the dynamic evolution of breast cancer migration
Comput Biol Med. 2024 Jul 27;180:108890. doi: 10.1016/j.compbiomed.2024.108890. Online ahead of print.
ABSTRACT
BACKGROUND: Breast cancer (BC) remains a prevalent health concern, with metastasis as the main driver of mortality. A detailed understanding of metastatic processes, particularly cell migration, is fundamental to improve therapeutic strategies. The wound healing assay, a traditional two-dimensional (2D) model, offers insights into cell migration but presents scalability issues due to data scarcity, arising from its manual and labor-intensive nature.
METHOD: To overcome these limitations, this study introduces the Prediction Wound Progression Framework (PWPF), an innovative approach utilizing Deep Learning (DL) and artificial data generation. The PWPF comprises a DL model initially trained on artificial data that simulates wound healing in MCF-7 BC cell monolayers and spheres, which is subsequently fine-tuned on real-world data.
RESULTS: Our results underscore the model's effectiveness in analyzing and predicting cell migration dynamics within the wound healing context, thus enhancing the usability of 2D models. The PWPF significantly contributes to a better understanding of cell migration processes in BC and expands the possibilities for research into wound healing mechanisms.
CONCLUSIONS: These advancements in automated cell migration analysis hold the potential for more comprehensive and scalable studies in the future. Our dataset, models, and code are publicly available at https://github.com/frangam/wound-healing.
PMID:39068903 | DOI:10.1016/j.compbiomed.2024.108890
Integrating multi-task and cost-sensitive learning for predicting mortality risk of chronic diseases in the elderly using real-world data
Int J Med Inform. 2024 Jul 25;191:105567. doi: 10.1016/j.ijmedinf.2024.105567. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Real-world data encompass population diversity, enabling insights into chronic disease mortality risk among the elderly. Deep learning excels on large datasets, offering promise for real-world data. However, current models focus on single diseases, neglecting comorbidities prevalent in patients. Moreover, mortality is infrequent compared to illness, causing extreme class imbalance that impedes reliable prediction. We aim to develop a deep learning framework that accurately forecasts mortality risk from real-world data by addressing comorbidities and class imbalance.
METHODS: We integrated multi-task and cost-sensitive learning, developing an enhanced deep neural network architecture that extends multi-task learning to predict mortality risk across multiple chronic diseases. Each patient cohort with a chronic disease was assigned to a separate task, with shared lower-level parameters capturing inter-disease complexities through distinct top-level networks. Cost-sensitive functions were incorporated to ensure learning of positive class characteristics for each task and achieve accurate prediction of the risk of death from multiple chronic diseases.
RESULTS: Our study covers 15 prevalent chronic diseases and uses real-world data from 482,145 patients (including 9,516 deaths) in Shenzhen, China. The proposed model is compared with six baselines: three machine learning models (logistic regression, XGBoost, and CatBoost) and three state-of-the-art deep learning models (1D-CNN, TabNet, and SAINT). The experimental results show that, compared with these algorithms, MTL-CSDNN achieves better prediction results on the test set (ACC = 0.99, REC = 0.99, PRAUC = 0.97, MCC = 0.98, G-means = 0.98).
CONCLUSIONS: Our method provides valuable insights into leveraging real-world data for precise multi-disease mortality risk prediction, offering potential applications in optimizing chronic disease management, enhancing well-being, and reducing healthcare costs for the elderly population.
PMID:39068894 | DOI:10.1016/j.ijmedinf.2024.105567
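The cost-sensitive idea above (up-weighting the rare positive, i.e. death, class in the loss so the model learns its characteristics despite extreme imbalance) can be sketched with a weighted binary cross-entropy; the weight and predicted probabilities below are invented for illustration:

```python
import numpy as np

def weighted_bce(y_true, p_pred, pos_weight):
    """Binary cross-entropy with an up-weighted positive class,
    a common cost-sensitive remedy for class imbalance."""
    eps = 1e-12
    p = np.clip(p_pred, eps, 1 - eps)
    loss = -(pos_weight * y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return float(loss.mean())

# Toy batch: one death among three survivors
y = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.6, 0.2, 0.1, 0.3])
# Weighting positives by (negatives/positives) = 3 makes the single death
# contribute as much to the loss as the three survivors combined.
plain    = weighted_bce(y, p, pos_weight=1.0)
weighted = weighted_bce(y, p, pos_weight=3.0)
```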
Fine-grained subphenotypes in acute kidney injury populations based on deep clustering: Derivation and interpretation
Int J Med Inform. 2024 Jul 20;191:105553. doi: 10.1016/j.ijmedinf.2024.105553. Online ahead of print.
ABSTRACT
BACKGROUND: Acute kidney injury (AKI) is associated with increased mortality in critically ill patients. Because of differences in etiology and pathophysiological mechanism, the current AKI criteria make it difficult to evaluate clinical therapy and prognosis.
OBJECTIVE: We aimed to identify subphenotypes based on routinely collected clinical data to expose the unique pathophysiologic patterns.
METHODS: A retrospective study was conducted based on the Medical Information Mart for Intensive Care IV (MIMIC-IV) and the eICU Collaborative Research Database (eICU-CRD), and a deep clustering approach was conducted to derive subphenotypes. We conducted further analysis to uncover the underlying clinical patterns and interpret the subphenotype derivation.
RESULTS: We studied 14,189 and 19,382 patients with AKI within 48 h of ICU admission in the two datasets, respectively. Through our approach, we identified seven distinct AKI subphenotypes with mortality heterogeneity in each cohort. These subphenotypes displayed significant variations in demographics, comorbidities, laboratory measurements, and survival patterns. Notably, the subphenotypes could not be effectively characterized using the Kidney Disease: Improving Global Outcomes (KDIGO) criteria alone; we therefore uncovered the unique underlying characteristics of each subphenotype through model-based interpretation. To assess the usability of the subphenotypes, we conducted an evaluation, which yielded a micro-area under the receiver operating characteristic curve (AUROC) of 0.81 in the single-center cohort and 0.83 in the multi-center cohort within 48 hours of admission.
CONCLUSION: We derived highly characteristic, interpretable, and usable AKI subphenotypes that exhibited superior prognostic values.
PMID:39068892 | DOI:10.1016/j.ijmedinf.2024.105553
Generating synthetic computed tomography for radiotherapy: SynthRAD2023 challenge report
Med Image Anal. 2024 Jul 17;97:103276. doi: 10.1016/j.media.2024.103276. Online ahead of print.
ABSTRACT
Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast. Still, it lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
PMID:39068830 | DOI:10.1016/j.media.2024.103276
Open-world electrocardiogram classification via domain knowledge-driven contrastive learning
Neural Netw. 2024 Jul 17;179:106551. doi: 10.1016/j.neunet.2024.106551. Online ahead of print.
ABSTRACT
Automatic electrocardiogram (ECG) classification provides valuable auxiliary information for assisting disease diagnosis and has received much attention in research. The success of existing classification models relies on fitting labeled samples for every ECG type. In practice, however, well-annotated ECG datasets usually cover only a limited set of ECG types. This raises an issue: conventional classification models trained on limited ECG types can only identify those types already observed in the training set, and fail to recognize unseen (or unknown) ECG types that exist in the wild but are not included in the training data. In this work, we investigate an important problem called open-world ECG classification, which predicts fine-grained observed ECG classes while also identifying unseen classes. Accordingly, we propose a customized method that first incorporates clinical knowledge into contrastive learning by generating "hard negative" samples to guide the learning of diagnostic ECG features (i.e., distinguishable representations), and then performs multi-hypersphere learning to learn compact ECG representations for classification. Experimental results on 12-lead ECG datasets (CPSC2018, PTB-XL, and Georgia) demonstrate that the proposed method outperforms state-of-the-art methods. Specifically, our method achieves higher accuracy than the comparison methods on the unseen ECG class and on certain seen classes. Overall, the investigated problem (open-world ECG classification) helps draw attention to the reliability of automatic ECG diagnosis, and the proposed method proves effective in tackling its challenges. The code and datasets are released at https://github.com/betterzhou/Open_World_ECG_Classification.
PMID:39068675 | DOI:10.1016/j.neunet.2024.106551
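The "hard negative" contrastive idea above can be illustrated with a generic InfoNCE loss: a negative that is similar to the anchor yields a larger loss, and therefore a stronger training signal, than an unrelated one. The vectors and temperature below are invented toy values, not the paper's ECG features:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push
    negatives (including generated 'hard negatives') away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))              # positive sits at index 0

anchor   = np.ones(8)
positive = np.ones(8)                               # same-class view, cos = 1
hard_neg = np.array([1., 1, 1, 1, 1, 1, 1, -1])     # near-miss class, cos = 0.75
easy_neg = np.array([1., -1, 1, -1, 1, -1, 1, -1])  # unrelated class, cos = 0
loss_hard = info_nce(anchor, positive, [hard_neg])
loss_easy = info_nce(anchor, positive, [easy_neg])
# the hard negative is more anchor-like, so loss_hard > loss_easy
```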
Strain-Temperature Dual Sensor Based on Deep Learning Strategy for Human-Computer Interaction Systems
ACS Sens. 2024 Jul 28. doi: 10.1021/acssensors.4c01202. Online ahead of print.
ABSTRACT
Thermoelectric (TE) hydrogels that mimic human skin and possess both temperature- and strain-sensing capabilities are well suited for human-machine interfaces and wearable devices. In this study, a TE hydrogel with high toughness and temperature responsiveness was created by exploiting the Hofmeister effect and the TE current effect, achieved through cross-linking of a PVA/PAA/carboxymethyl cellulose triple network. The Hofmeister effect, facilitated by coordination of Na+ and SO42- ions, notably increased the hydrogel's tensile strength (800 kPa). Introduction of Fe2+/Fe3+ redox pairs conferred a high Seebeck coefficient (2.3 mV K-1), thereby enhancing temperature responsiveness. Using this dual-responsive sensor, a feedback mechanism combining deep learning with a robotic hand was successfully demonstrated (with a recognition accuracy of 95.30%), alongside multi-level temperature warnings. Such sensors are expected to replace manual work through manipulator control in high-temperature, high-risk scenarios, improving safety and underscoring the vast potential of TE hydrogel sensors in motion monitoring and human-machine interaction applications.
PMID:39068608 | DOI:10.1021/acssensors.4c01202
Evaluation of OCT biomarker changes in treatment-naive neovascular AMD using a deep semantic segmentation algorithm
Eye (Lond). 2024 Jul 27. doi: 10.1038/s41433-024-03264-1. Online ahead of print.
ABSTRACT
OBJECTIVES: To determine quantitative changes in OCT biomarkers in a large set of treatment-naive patients undergoing anti-VEGF therapy in a real-life setting. For this purpose, we devised a novel deep learning-based semantic segmentation algorithm, providing the first benchmark results for automatic segmentation of 11 OCT features, including biomarkers for neovascular age-related macular degeneration (nAMD).
METHODS: We trained a deep U-Net-based semantic segmentation ensemble algorithm achieving state-of-the-art segmentation performance, and used it to analyze OCT features prior to therapy and after 3 and 12 months of anti-VEGF therapy.
RESULTS: The algorithm achieved F1 scores of almost 1.0 for neurosensory retina and subretinal fluid on a separate hold-out test set of unseen patients. It performed worse for subretinal hyperreflective material and fibrovascular PED, on par for drusenoid PED, and better in segmenting fibrosis. In the evaluation of treatment-naive OCT scans, significant changes occurred for intraretinal fluid (mean: 0.03 µm3 to 0.01 µm3, p < 0.001), subretinal fluid (0.08 µm3 to 0.01 µm3, p < 0.001), subretinal hyperreflective material (0.02 µm3 to 0.01 µm3, p < 0.001), fibrovascular PED (0.12 µm3 to 0.09 µm3, p = 0.02), and central retinal thickness C0 (225.78 µm3 to 169.40 µm3). The amounts of intraretinal fluid, fibrovascular PED, and ERM were predictive of poor outcome.
CONCLUSIONS: The segmentation algorithm allows efficient volumetric analysis of OCT scans. Anti-VEGF therapy provokes its most potent changes in the first 3 months, while a gradual loss of RPE hints at a progressive decline of visual acuity. Additional research is required to understand how these accurate OCT predictions can be leveraged for a personalized therapy regimen.
PMID:39068248 | DOI:10.1038/s41433-024-03264-1
Development of a deep learning model for cancer diagnosis by inspecting cell-free DNA end-motifs
NPJ Precis Oncol. 2024 Jul 27;8(1):160. doi: 10.1038/s41698-024-00635-5.
ABSTRACT
Accurate discrimination between patients with and without cancer from cfDNA is crucial for early cancer diagnosis. Herein, we develop and validate a deep-learning-based model entitled end-motif inspection via transformer (EMIT) for discriminating individuals with and without cancer by learning feature representations from cfDNA end-motifs. EMIT is a self-supervised learning approach that models rankings of cfDNA end-motifs. We include 4606 samples subjected to different types of cfDNA sequencing to develop EMIT, and subsequently evaluate the classification performance of linear projections of EMIT on six datasets and an additional in-house testing set encompassing whole-genome, whole-genome bisulfite, and 5-hydroxymethylcytosine sequencing. The linear projection of representations from EMIT achieved area under the receiver operating characteristic curve (AUROC) values ranging from 0.895 (0.835-0.955) to 0.996 (0.994-0.997) across these six datasets, outperforming its baseline by significant margins. Additionally, we show that a linear projection of EMIT representations can achieve an AUROC of 0.962 (0.914-1.0) in identifying lung cancer on an independent testing set subjected to whole-exome sequencing. These findings indicate that a transformer-based deep learning model can learn cancer-discriminative representations from cfDNA end-motifs, which can be exploited to discriminate patients with and without cancer.
PMID:39068267 | DOI:10.1038/s41698-024-00635-5
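End-motif rankings of the kind EMIT models can be illustrated by counting the k-mer at each fragment's 5' end and sorting motifs by frequency; the reads and k = 4 below are invented toy data, not the study's sequencing output:

```python
from collections import Counter

def end_motif_ranks(fragments, k=4):
    """Count the k-mer at each fragment's 5' end and return motifs
    ranked by frequency (most frequent first)."""
    counts = Counter(frag[:k] for frag in fragments if len(frag) >= k)
    return [motif for motif, _ in counts.most_common()]

# Invented cfDNA fragment sequences
reads = ["CCCAGTAC", "CCCATTGA", "AAGGCTTA", "CCCATG", "AAGGTC"]
ranking = end_motif_ranks(reads)  # 'CCCA' occurs 3x, 'AAGG' occurs 2x
```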
Predicting bone metastasis-free survival in non-small cell lung cancer from preoperative CT via deep learning
NPJ Precis Oncol. 2024 Jul 28;8(1):161. doi: 10.1038/s41698-024-00649-z.
ABSTRACT
Accurate prediction of bone metastasis-free survival (BMFS) after complete surgical resection in patients with non-small cell lung cancer (NSCLC) may facilitate appropriate follow-up planning. The aim of this study was to establish and validate a preoperative CT-based deep learning (DL) signature to predict BMFS in NSCLC patients. We performed a retrospective analysis of 1547 NSCLC patients who underwent complete surgical resection, followed by at least 36 months of monitoring at two hospitals. We constructed a DL signature from multiparametric CT images using 3D convolutional neural networks, and we integrated this signature with clinical-imaging factors to establish a deep learning clinical-imaging signature (DLCS). We evaluated performance using Harrell's concordance index (C-index) and the time-dependent receiver operating characteristic. We also assessed the risk of bone metastasis (BM) in NSCLC patients at different clinical stages using DLCS. The DL signature successfully predicted BM, with C-indexes of 0.799 and 0.818 for the validation cohorts. DLCS outperformed the DL signature with corresponding C-indexes of 0.806 and 0.834. Ranges for area under the curve at 1, 2, and 3 years were 0.820-0.865 for internal and 0.860-0.884 for external validation cohorts. Furthermore, DLCS successfully stratified patients with different clinical stages of NSCLC as high- and low-risk groups for BM (p < 0.05). CT-based DL can predict BMFS in NSCLC patients undergoing complete surgical resection, and may assist in the assessment of BM risk for patients at different clinical stages.
PMID:39068240 | DOI:10.1038/s41698-024-00649-z
Elucidating Microglial Heterogeneity and Functions in Alzheimer's Disease Using Single-cell Analysis and Convolutional Neural Network Disease Model Construction
Sci Rep. 2024 Jul 27;14(1):17271. doi: 10.1038/s41598-024-67537-1.
ABSTRACT
In this study, we conducted an in-depth exploration of Alzheimer's Disease (AD) by integrating state-of-the-art methodologies, including single-cell RNA sequencing (scRNA-seq), weighted gene co-expression network analysis (WGCNA), and a convolutional neural network (CNN) model. Focusing on the pivotal role of microglia in AD pathology, our analysis revealed 11 distinct microglial subclusters, with 4 exhibiting obvious alterations between the AD and healthy control (HC) groups. The investigation of cell-cell communication networks unveiled intricate interactions between AD-related microglia and various cell types within the central nervous system (CNS). Integration of WGCNA and scRNA-seq facilitated the identification of critical genes associated with AD-related microglia, providing insights into their involvement in processes such as peptide chain elongation, synapse-related functions, and cell adhesion. The identification of 9 hub genes, including USP3, through least absolute shrinkage and selection operator (LASSO) and Cox regression analyses presents potential therapeutic targets. Furthermore, the development of a CNN-based model showcases the application of deep learning in enhancing diagnostic accuracy for AD. Overall, our findings significantly contribute to unraveling the molecular intricacies of microglial responses in AD, offering promising avenues for targeted therapeutic interventions and improved diagnostic precision.
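The LASSO step described above selects hub genes because the L1 penalty shrinks the coefficients of uninformative features exactly to zero, leaving a sparse candidate panel. A hypothetical illustration on synthetic data (the gene counts and penalty strength here are assumptions, not the study's settings):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic expression matrix: 200 samples x 50 genes, where only
# genes 0 and 1 truly drive the (continuous) outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(model.coef_)  # indices of genes LASSO retains
```

In the study's setting the retained genes would then be carried into a Cox regression to assess their association with disease.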
PMID:39068182 | DOI:10.1038/s41598-024-67537-1
Estimating the Severity of Oral Lesions Via Analysis of Cone Beam Computed Tomography Reports: A Proposed Deep Learning Model
Int Dent J. 2024 Jul 26:S0020-6539(24)00168-0. doi: 10.1016/j.identj.2024.06.015. Online ahead of print.
ABSTRACT
OBJECTIVES: Several factors, such as unavailability of specialists, dental phobia, and financial difficulties, may lead to a delay between receiving an oral radiology report and consulting a dentist. The primary aim of this study was to distinguish between high-risk and low-risk oral lesions according to radiologists' reports of cone beam computed tomography (CBCT) images. Such a facility may be employed by a dentist or their assistant to inform the patient of the severity and grade of the oral lesion and to support referral for immediate treatment or other follow-up care.
METHODS: A total of 1134 CBCT radiology reports held by Shiraz University of Medical Sciences were collected. The severity level of each sample was specified by three experts, and the reports were annotated accordingly. After preprocessing the data, a deep learning model, referred to as CNN-LSTM, was developed to estimate the severity of the lesion from the radiologist's report. Unlike traditional models, which usually use a simple bag-of-words representation, the proposed deep model uses words embedded in dense vector representations, which empowers it to effectively capture semantic similarities.
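The described pipeline (dense word embeddings, a convolutional layer for local n-gram features, and an LSTM for sequential context feeding a classifier) can be sketched as follows. This is an illustrative PyTorch skeleton under assumed layer sizes, not the authors' architecture:

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Sketch of a CNN-LSTM text classifier for report severity:
    embeddings -> 1D convolution -> LSTM -> severity logits.
    Vocabulary size, dimensions, and class count are illustrative."""
    def __init__(self, vocab_size=5000, embed_dim=100, conv_channels=64,
                 lstm_hidden=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, token_ids):               # (batch, seq_len)
        x = self.embed(token_ids)               # (batch, seq_len, embed_dim)
        x = self.conv(x.transpose(1, 2))        # conv over the sequence axis
        x = torch.relu(x).transpose(1, 2)       # (batch, seq_len, channels)
        _, (h, _) = self.lstm(x)                # final hidden state
        return self.fc(h[-1])                   # (batch, num_classes)
```

Training such a model would minimize cross-entropy between the logits and the expert-assigned severity labels.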
RESULTS: The results indicated that the proposed model outperformed its counterparts in terms of precision, recall, and F1 criteria. This suggests its potential as a reliable tool for early estimation of the severity of oral lesions.
CONCLUSIONS: This study shows the effectiveness of deep learning in the analysis of textual reports and in accurately distinguishing between high-risk and low-risk lesions. Employing the proposed model, which can provide timely warnings about the need for follow-up and prompt treatment, can shield patients from the risks associated with delays.
CLINICAL SIGNIFICANCE: Our collaboratively collected and expert-annotated dataset serves as a valuable resource for exploratory research. The results demonstrate the pivotal role our deep learning model could play in assessing the severity of oral lesions in dental reports.
PMID:39068121 | DOI:10.1016/j.identj.2024.06.015
A protein pre-trained model-based approach for the identification of the liquid-liquid phase separation (LLPS) proteins
Int J Biol Macromol. 2024 Jul 25:134146. doi: 10.1016/j.ijbiomac.2024.134146. Online ahead of print.
ABSTRACT
Liquid-liquid phase separation (LLPS) regulates many biological processes, including RNA metabolism, chromatin rearrangement, and signal transduction. Aberrant LLPS potentially leads to serious diseases; therefore, the identification of LLPS proteins is crucial. Traditionally, biochemistry-based methods for identifying LLPS proteins are costly, time-consuming, and laborious. In contrast, artificial intelligence-based approaches are fast and cost-effective and can be a better alternative. Previous research employed word2vec in conjunction with machine learning or deep learning algorithms. Although word2vec captures word semantics and relationships, it might not effectively capture features relevant to protein classification, such as physicochemical properties, evolutionary relationships, or structural features. Additionally, other studies often focused on a limited set of features for model training, including planar π contact frequency and π-π and β-pairing propensities. To overcome such shortcomings, this study first constructed a reliable dataset containing 1206 protein sequences, comprising 603 LLPS and 603 non-LLPS protein sequences. Then a computational model was proposed to efficiently identify LLPS proteins by perceiving semantic information of protein sequences directly, using an ESM2-36 pre-trained model based on the transformer architecture in conjunction with a convolutional neural network. The model achieved accuracies of 85.86% and 89.26% on training and test data, respectively, surpassing the accuracy of previous studies. This performance demonstrates the potential of our computational method as an efficient alternative for identifying LLPS proteins.
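The described design pairs per-residue embeddings from a pre-trained protein language model (ESM2) with a convolutional classifier. The sketch below shows only the CNN head over pre-computed residue embeddings; the embedding dimension, channel count, and pooling choice are assumptions for illustration, not the paper's reported architecture:

```python
import torch
import torch.nn as nn

class LLPSClassifierHead(nn.Module):
    """Hypothetical CNN head over per-residue embeddings (e.g. vectors
    produced by an ESM2 encoder) for binary LLPS classification."""
    def __init__(self, embed_dim=2560, channels=128):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, channels, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)  # handles variable sequence lengths
        self.fc = nn.Linear(channels, 2)     # LLPS vs. non-LLPS logits

    def forward(self, residue_embeddings):   # (batch, seq_len, embed_dim)
        x = residue_embeddings.transpose(1, 2)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)         # (batch, channels)
        return self.fc(x)
```

Because the convolution slides along the residue axis and the pooling is adaptive, the same head accepts proteins of any length.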
PMID:39067723 | DOI:10.1016/j.ijbiomac.2024.134146