Deep learning
Res-TransNet: A Hybrid Deep Learning Network for Predicting Pathological Subtypes of Lung Adenocarcinoma in CT Images
J Imaging Inform Med. 2024 Jun 11. doi: 10.1007/s10278-024-01149-z. Online ahead of print.
ABSTRACT
This study aims to develop a CT-based hybrid deep learning network to predict pathological subtypes of early-stage lung adenocarcinoma by integrating a residual network (ResNet) with a Vision Transformer (ViT). A total of 1411 pathologically confirmed ground-glass nodules (GGNs) retrospectively collected from two centers were used as internal and external validation sets for model development. A 3D ResNet and a ViT were investigated as two deep learning frameworks for classifying three subtypes of lung adenocarcinoma: invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma, and adenocarcinoma in situ. To further improve model performance, four Res-TransNet models were proposed by integrating the ResNet and ViT with different ensemble learning strategies. Two classification tasks were designed and conducted in this study: discriminating IAC from non-IAC (Task 1) and classifying all three subtypes (Task 2). For Task 1, the optimal Res-TransNet model yielded area under the receiver operating characteristic curve (AUC) values of 0.986 and 0.933 on the internal and external validation sets, significantly higher than those of the ResNet and ViT models (p < 0.05). For Task 2, the optimal fusion model achieved an accuracy of 68.3% and a weighted F1 score of 66.1% on the external validation set. The experimental results demonstrate that Res-TransNet significantly improves classification performance over the two base models and has the potential to assist radiologists in precision diagnosis.
PMID:38861071 | DOI:10.1007/s10278-024-01149-z
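The abstract above does not detail the four ensemble strategies, so the following is only a minimal sketch of one plausible Res-TransNet-style fusion: soft voting over the class probabilities of a 3D ResNet branch and a ViT branch. The r3d_18 backbone and the averaging rule are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18  # stand-in 3D ResNet backbone

class SoftVoteEnsemble(nn.Module):
    """Hypothetical soft-voting fusion of a 3D ResNet and a ViT branch."""
    def __init__(self, vit: nn.Module, num_classes: int = 3):
        super().__init__()
        self.resnet3d = r3d_18(num_classes=num_classes)
        self.vit = vit  # any module mapping the same volume to num_classes logits

    def forward(self, x):  # x: (B, 3, D, H, W) nodule volume
        p_res = self.resnet3d(x).softmax(dim=1)
        p_vit = self.vit(x).softmax(dim=1)
        return (p_res + p_vit) / 2  # averaged class probabilities
```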
Simulated deep CT characterization of liver metastases with high-resolution filtered back projection reconstruction
Vis Comput Ind Biomed Art. 2024 Jun 11;7(1):13. doi: 10.1186/s42492-024-00161-y.
ABSTRACT
Early diagnosis and accurate prognosis of colorectal cancer are critical for determining optimal treatment plans and maximizing patient outcomes, especially as the disease progresses into liver metastases. Computed tomography (CT) is a frontline tool for this task; however, the preservation of predictive radiomic features is highly dependent on the scanning protocol and reconstruction algorithm. We hypothesized that image reconstruction with a high-frequency kernel could yield a better characterization of liver metastasis features via deep neural networks. This kernel produces images that appear noisier but preserve more sinogram information. A simulation pipeline was developed to study the effects of imaging parameters on the ability to characterize the features of liver metastases. This pipeline uses a fractal approach to generate a diverse population of shapes representing virtual metastases, and then superimposes them on a realistic CT liver region to perform a virtual CT scan using CatSim. Datasets of 10,000 liver metastases were generated, scanned, and reconstructed using either standard or high-frequency kernels. These data were used to train and validate deep neural networks to recover crafted metastasis characteristics, such as internal heterogeneity, edge sharpness, and edge fractal dimension. In the absence of noise, models scored, on average, 12.2% (α = 0.012) and 7.5% (α = 0.049) lower squared error for characterizing edge sharpness and fractal dimension, respectively, when using high-frequency reconstructions rather than standard ones. However, the performance differences were not statistically significant when a typical level of CT noise was simulated in the clinical scan. Our results suggest that high-frequency reconstruction kernels can better preserve information for downstream artificial intelligence-based radiomic characterization, provided that noise is limited. Future work should investigate these information-preserving kernels on datasets with clinical labels.
PMID:38861067 | DOI:10.1186/s42492-024-00161-y
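The characterization networks in this study regress crafted lesion attributes from reconstructed patches. As a rough illustration only (not the authors' architecture), a compact CNN with three regression outputs might look like the sketch below; the patch size and layer widths are assumptions.

```python
import torch.nn as nn

class LesionCharacterizer(nn.Module):
    """Illustrative CNN regressor for three crafted lesion characteristics
    (internal heterogeneity, edge sharpness, edge fractal dimension)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 3)  # three regression targets

    def forward(self, x):  # x: (B, 1, H, W) reconstructed lesion patch
        return self.head(self.features(x))
```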
Automated selection of abdominal MRI series using a DICOM metadata classifier and selective use of a pixel-based classifier
Abdom Radiol (NY). 2024 Jun 11. doi: 10.1007/s00261-024-04379-5. Online ahead of print.
ABSTRACT
Accurate, automated MRI series identification is important for many applications, including display ("hanging") protocols, machine learning, and radiomics. Using either the series description or a pixel-based classifier alone has limitations. We demonstrate a combined approach that uses a DICOM metadata-based classifier with selective use of a pixel-based classifier to identify abdominal MRI series. The metadata classifier was assessed alone (Group metadata) and combined with selective use of the pixel-based classifier for predictions with less than 70% certainty (Group combined). The overall accuracies (mean and 95% confidence interval [CI]) on the test dataset were 0.870 (CI: 0.824, 0.912) for Group metadata and 0.930 (CI: 0.893, 0.963) for Group combined. With this combined metadata- and pixel-based approach, we demonstrate classification accuracy of 95% or greater for all pre-contrast MRI series and improved performance for some post-contrast series.
PMID:38860997 | DOI:10.1007/s00261-024-04379-5
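The two-stage decision rule described above (trust the metadata classifier unless its certainty drops below 70%, then defer to the pixel-based classifier) can be sketched as follows. The classifier objects are hypothetical and assumed to follow the scikit-learn predict_proba convention.

```python
import numpy as np

def classify_series(meta_clf, pixel_clf, meta_feats, pixels, threshold=0.70):
    """Two-stage series labeling: use the DICOM-metadata classifier when it
    is at least 70% certain, otherwise fall back to the pixel-based
    classifier. Both classifiers are assumed to expose predict_proba."""
    proba = meta_clf.predict_proba([meta_feats])[0]
    if proba.max() >= threshold:
        return meta_clf.classes_[np.argmax(proba)]
    pixel_proba = pixel_clf.predict_proba([pixels])[0]
    return pixel_clf.classes_[np.argmax(pixel_proba)]
```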
AmyloidPETNet: Classification of Amyloid Positivity in Brain PET Imaging Using End-to-End Deep Learning
Radiology. 2024 Jun;311(3):e231442. doi: 10.1148/radiol.231442.
ABSTRACT
Background Visual assessment of amyloid PET scans relies on the availability of radiologist expertise, whereas quantification of amyloid burden typically involves MRI for processing and analysis, which can be computationally expensive. Purpose To develop a deep learning model to classify minimally processed brain PET scans as amyloid positive or negative, evaluate its performance on independent data sets and different tracers, and compare it with human visual reads. Materials and Methods This retrospective study used 8476 PET scans (6722 patients) obtained from late 2004 to early 2023 that were analyzed across five different data sets. A deep learning model, AmyloidPETNet, was trained on 1538 scans from 766 patients, validated on 205 scans from 95 patients, and internally tested on 184 scans from 95 patients in the Alzheimer's Disease Neuroimaging Initiative (ADNI) fluorine 18 (18F) florbetapir (FBP) data set. It was tested on ADNI scans using different tracers and on scans from independent data sets. Scan amyloid positivity was based on mean cortical standardized uptake value ratio cutoffs. To compare with model performance, each scan from the Centiloid Project and from a subset of the Anti-Amyloid Treatment in Asymptomatic Alzheimer's Disease (A4) study was visually interpreted with a confidence level (low, intermediate, or high) of amyloid positivity/negativity. The area under the receiver operating characteristic curve (AUC) and other performance metrics were calculated, and Cohen κ was used to measure physician-model agreement. Results The model achieved an AUC of 0.97 (95% CI: 0.95, 0.99) on test ADNI 18F-FBP scans, which generalized well to 18F-FBP scans from the Open Access Series of Imaging Studies (AUC, 0.95; 95% CI: 0.93, 0.97) and the A4 study (AUC, 0.98; 95% CI: 0.98, 0.98). Model performance remained high when applied to data sets with different tracers (AUC ≥ 0.97). Other performance metrics provided converging evidence. Physician-model agreement ranged from fair (Cohen κ = 0.39; 95% CI: 0.16, 0.60) on a sample of mostly equivocal cases from the A4 study to almost perfect (Cohen κ = 0.93; 95% CI: 0.86, 1.0) on the Centiloid Project. Conclusion The developed model was capable of automatically and accurately classifying brain PET scans as amyloid positive or negative without relying on experienced readers or requiring structural MRI. Clinical trial registration no. NCT00106899. © RSNA, 2024. Supplemental material is available for this article. See also the editorial by Bryan and Forghani in this issue.
PMID:38860897 | DOI:10.1148/radiol.231442
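The reference standard above is a mean cortical SUVR cutoff. The labeling rule reduces to a threshold comparison, sketched below; the 1.11 default is a commonly cited ADNI florbetapir threshold used here only as an illustrative assumption, and the study's exact tracer-specific cutoffs may differ.

```python
def amyloid_label(mean_cortical_suvr: float, cutoff: float = 1.11) -> int:
    """Reference-standard labeling sketch: a scan is amyloid positive (1)
    when mean cortical SUVR meets the tracer-specific cutoff, else
    negative (0). The default cutoff is an illustrative assumption."""
    return int(mean_cortical_suvr >= cutoff)
```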
Detection of oral cancer and oral potentially malignant disorders using artificial intelligence-based image analysis
Head Neck. 2024 Jun 11. doi: 10.1002/hed.27843. Online ahead of print.
ABSTRACT
BACKGROUND: We aimed to construct an artificial intelligence-based model for detecting oral cancer and dysplastic leukoplakia using oral cavity images captured with a single-lens reflex camera.
SUBJECTS AND METHODS: We used 1043 images of lesions from 424 patients with oral squamous cell carcinoma (OSCC), leukoplakia, and other oral mucosal diseases. An object detection model was constructed using a Single Shot Multibox Detector to detect oral diseases and their locations using images. The model was trained using 523 images of oral cancer, and its performance was evaluated using images of oral cancer (n = 66), leukoplakia (n = 49), and other oral diseases (n = 405).
RESULTS: For detecting OSCC alone versus OSCC plus leukoplakia, the model demonstrated sensitivities of 93.9% versus 83.7%, negative predictive values of 98.8% versus 94.5%, and specificities of 81.2% versus 81.2%.
CONCLUSIONS: Our proposed model is a potential diagnostic tool for oral diseases.
PMID:38860703 | DOI:10.1002/hed.27843
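The detector above is a Single Shot Multibox Detector; a minimal training-step sketch with torchvision's SSD300-VGG16 follows. The class count (background plus OSCC and leukoplakia) and the input scale are assumptions, as the abstract does not specify the implementation.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Hypothetical setup: background + 2 lesion classes (OSCC, leukoplakia).
model = ssd300_vgg16(weights=None, num_classes=3)
model.train()

images = [torch.rand(3, 300, 300)]  # one RGB photo at SSD300 input scale
targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 160.0]]),  # [x1,y1,x2,y2]
            "labels": torch.tensor([1])}]                          # class id
loss_dict = model(images, targets)  # {'bbox_regression': ..., 'classification': ...}
loss = sum(loss_dict.values())      # combined detection loss for backprop
```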
Machine Learning in Spine Oncology: A Narrative Review
Global Spine J. 2024 Jun 11:21925682241261342. doi: 10.1177/21925682241261342. Online ahead of print.
ABSTRACT
STUDY DESIGN: Narrative Review.
OBJECTIVE: Machine learning (ML) is one of the latest advancements in artificial intelligence used in medicine and surgery, with the potential to significantly impact the way physicians diagnose, prognosticate, and treat spine tumors. In the realm of spine oncology, ML is used to analyze and interpret medical imaging and to classify tumors with high accuracy. The authors present a narrative review that specifically addresses the use of machine learning in spine oncology.
METHODS: This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. A systematic review of the literature in the PubMed, EMBASE, Web of Science, Scopus, and Cochrane Library databases since inception was performed to identify all clinical studies matching the search terms '[[Machine Learning] OR [Artificial Intelligence]] AND [[Spine Oncology] OR [Spine Cancer]]'. Data extracted from the included studies comprised the algorithms used, training and test set sizes, and reported outcomes. Studies were grouped by the tumor type investigated with the machine learning algorithms: primary, metastatic, both, or intradural. A minimum of two independent reviewers conducted the study appraisal, data abstraction, and quality assessments.
RESULTS: Forty-five studies met the inclusion criteria out of 480 references screened from the initial search results. Studies were grouped by metastatic, primary, and intradural tumors. The majority of ML studies relevant to spine oncology focused on using a mixture of clinical and imaging features to risk-stratify mortality and frailty. Overall, these studies showed that ML is a helpful tool for tumor detection, differentiation, and segmentation, as well as for predicting survival and readmission rates in patients with primary, metastatic, or intradural spine tumors.
CONCLUSION: Specialized neural networks and deep learning algorithms have been shown to be highly effective at predicting the probability of malignancy and aiding in diagnosis. ML algorithms can predict the risk of tumor recurrence or progression based on imaging and clinical features. Additionally, ML can optimize treatment planning, such as by predicting radiotherapy dose distribution to the tumor and surrounding normal tissue or by informing surgical resection planning. It has the potential to significantly enhance the accuracy and efficiency of health care delivery, leading to improved patient outcomes.
PMID:38860699 | DOI:10.1177/21925682241261342
Accurate and rapid molecular subgrouping of high-grade glioma via deep learning-assisted label-free fiber-optic Raman spectroscopy
PNAS Nexus. 2024 May 27;3(6):pgae208. doi: 10.1093/pnasnexus/pgae208. eCollection 2024 Jun.
ABSTRACT
Molecular genetics is closely related to the prognosis of high-grade glioma. Accordingly, the latest WHO guideline recommends that molecular subgroups defined by genes including IDH, 1p/19q, MGMT, TERT, EGFR, chromosome 7/10, and CDKN2A/B be detected to better classify glioma and to guide surgery and treatment. Unfortunately, no preoperative or intraoperative technology is available for accurate and comprehensive molecular subgrouping of glioma. Here, we develop a deep learning-assisted fiber-optic Raman diagnostic platform for accurate and rapid molecular subgrouping of high-grade glioma. Specifically, a total of 2,354 fingerprint Raman spectra were obtained from 743 tissue sites (astrocytoma: 151; oligodendroglioma: 150; glioblastoma (GBM): 442) of 44 high-grade glioma patients. A convolutional neural network (ResNet) model was then established and optimized for molecular subgrouping. The mean area under the receiver operating characteristic curve (AUC) for identifying the molecular subgroups of high-grade glioma reached 0.904, with a mean sensitivity of 83.3%, mean specificity of 85.0%, mean accuracy of 83.3%, and mean time expense of 10.6 s. The diagnostic performance of the ResNet model was superior to that of PCA-SVM and UMAP models, suggesting that the high-dimensional information in the Raman spectra is helpful. In addition, for the molecular subgroups of GBM, the mean AUC reached 0.932, with a mean sensitivity of 87.8%, mean specificity of 83.6%, and mean accuracy of 84.1%. Furthermore, according to saliency maps, specific Raman features corresponding to tumor-associated biomolecules (e.g. nucleic acid, tyrosine, tryptophan, cholesteryl ester, fatty acid, and collagen) were found to contribute to the accurate molecular subgrouping. Collectively, this study opens up new opportunities for accurate and rapid molecular subgrouping of high-grade glioma, which would assist optimal surgical resection and instant postoperative decision-making.
PMID:38860145 | PMC:PMC11164103 | DOI:10.1093/pnasnexus/pgae208
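The classifier above is a ResNet applied to 1-D Raman spectra, which typically means stacking residual blocks over the wavenumber axis. Below is a generic 1-D residual block sketch; channel widths and kernel sizes are illustrative choices, not the paper's.

```python
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Generic 1-D residual block of the kind a spectral ResNet stacks;
    widths and kernel sizes here are assumptions for illustration."""
    def __init__(self, channels: int = 64, kernel: int = 7):
        super().__init__()
        pad = kernel // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):  # x: (B, channels, n_wavenumbers)
        return self.act(x + self.body(x))  # identity shortcut
```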
Retraction: Analysis of psychological characteristics and emotional expression based on deep learning in higher vocational music education
Front Psychol. 2024 May 27;15:1433717. doi: 10.3389/fpsyg.2024.1433717. eCollection 2024.
ABSTRACT
[This retracts the article DOI: 10.3389/fpsyg.2022.981738.].
PMID:38860053 | PMC:PMC11163895 | DOI:10.3389/fpsyg.2024.1433717
Adversarial Consistency for Single Domain Generalization in Medical Image Segmentation
Med Image Comput Comput Assist Interv. 2022 Sep;13437:671-681. doi: 10.1007/978-3-031-16449-1_64. Epub 2022 Sep 17.
ABSTRACT
An organ segmentation method that can generalize to unseen contrasts and scanner settings can significantly reduce the need for retraining deep learning models. Domain Generalization (DG) aims to achieve this goal. However, most DG methods for segmentation require training data from multiple domains. We propose a novel adversarial domain generalization method for organ segmentation trained on data from a single domain. We synthesize new domains by learning an adversarial domain synthesizer (ADS), and we presume that the synthetic domains cover a large enough area of plausible distributions that unseen domains can be interpolated from them. We propose a mutual information regularizer to enforce semantic consistency between images from the synthetic domains, which can be estimated by patch-level contrastive learning. We evaluate our method on various organ segmentation tasks involving unseen modalities, scanning protocols, and scanner sites.
PMID:38859913 | PMC:PMC11164048 | DOI:10.1007/978-3-031-16449-1_64
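The ADS idea, synthesizing harder domains by maximizing the segmentation loss, can be reduced to a conceptual gradient-ascent step, sketched below. This omits the paper's magnitude constraint on the appearance shift and the mutual-information consistency term, so it is only a caricature of the method.

```python
import torch

def adversarial_domain_step(synth, seg_model, seg_loss, x, y, lr=1e-3):
    """One conceptual ADS update: push the synthesizer to produce a harder
    appearance shift by ascending the segmentation loss. The real method
    additionally bounds the shift and adds a patch-contrastive
    mutual-information regularizer (omitted here)."""
    x_syn = synth(x)                      # synthesized-domain image
    loss = seg_loss(seg_model(x_syn), y)  # segmentation loss on synthetic domain
    grads = torch.autograd.grad(loss, list(synth.parameters()))
    with torch.no_grad():
        for p, g in zip(synth.parameters(), grads):
            p.add_(lr * g)                # gradient *ascent*: maximize loss
    return x_syn.detach()
```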
Parallel CNN-Deep Learning Clinical-Imaging Signature for Assessing Pathologic Grade and Prognosis of Soft Tissue Sarcoma Patients
J Magn Reson Imaging. 2024 Jun 10. doi: 10.1002/jmri.29474. Online ahead of print.
ABSTRACT
BACKGROUND: Traditional biopsies pose risks and may not accurately reflect soft tissue sarcoma (STS) heterogeneity. MRI provides a noninvasive, comprehensive alternative.
PURPOSE: To assess the diagnostic accuracy of histological grading and prognosis in STS patients when integrating clinical-imaging parameters with deep learning (DL) features from preoperative MR images.
STUDY TYPE: Retrospective/prospective.
POPULATION: 354 pathologically confirmed STS patients (226 low-grade, 128 high-grade) from three hospitals and the Cancer Imaging Archive (TCIA), divided into training (n = 185), external test (n = 125), and TCIA (n = 44) cohorts. Twelve patients (6 low-grade, 6 high-grade) were enrolled in a prospective validation cohort.
FIELD STRENGTH/SEQUENCE: 1.5 T and 3.0 T/Unenhanced T1-weighted and fat-suppressed-T2-weighted.
ASSESSMENT: DL features were extracted from MR images using a parallel ResNet-18 model to construct DL signature. Clinical-imaging characteristics included age, gender, tumor-node-metastasis stage and MRI semantic features (depth, number, heterogeneity at T1WI/FS-T2WI, necrosis, and peritumoral edema). Logistic regression analysis identified significant risk factors for the clinical model. A DL clinical-imaging signature (DLCS) was constructed by incorporating DL signature with risk factors, evaluated for risk stratification, and assessed for progression-free survival (PFS) in retrospective cohorts, with an average follow-up of 23 ± 22 months.
STATISTICAL TESTS: Logistic regression, Cox regression, Kaplan-Meier curves, log-rank test, area under the receiver operating characteristic curve (AUC), and decision curve analysis. A P-value <0.05 was considered significant.
RESULTS: The AUC values for the DLCS in the external test, TCIA, and prospective test cohorts (0.834, 0.838, 0.819) were superior to those of the clinical model (0.662, 0.685, 0.694). Decision curve analysis showed that the DLCS model provided greater clinical net benefit than the DL and clinical models. The DLCS model was also able to risk-stratify patients and assess PFS.
DATA CONCLUSION: The DLCS exhibited strong capabilities in histological grading and prognosis assessment for STS patients, and may have potential to aid in the formulation of personalized treatment plans.
TECHNICAL EFFICACY: Stage 2.
PMID:38859600 | DOI:10.1002/jmri.29474
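A "parallel ResNet-18" plausibly means one branch per MR sequence with fused features; the sketch below assumes concatenation fusion over T1WI and FS-T2WI branches, which the abstract does not confirm.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ParallelResNet18(nn.Module):
    """Two-branch sketch: one ResNet-18 per sequence (T1WI, FS-T2WI), with
    concatenated features feeding a grading head. Concatenation fusion and
    the 512-dim branch outputs are assumptions, not the paper's design."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.t1 = resnet18(num_classes=512)   # branch for T1WI
        self.t2 = resnet18(num_classes=512)   # branch for FS-T2WI
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, x_t1, x_t2):  # each: (B, 3, H, W)
        feats = torch.cat([self.t1(x_t1), self.t2(x_t2)], dim=1)
        return self.classifier(feats)
```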
DeepCBA: a deep learning framework for gene expression prediction in maize based on DNA sequence and chromatin interaction
Plant Commun. 2024 Jun 10:100985. doi: 10.1016/j.xplc.2024.100985. Online ahead of print.
ABSTRACT
Chromatin interactions create spatial proximity between distal regulatory elements and target genes in the genome, which has an important impact on gene expression, transcriptional regulation, and phenotypic traits. To date, several methods have been developed for predicting gene expression. However, existing methods do not take into consideration the impact of chromatin interactions on target gene expression, which potentially reduces the accuracy of gene expression prediction and the mining of important regulatory elements. In this study, a highly accurate deep learning-based gene expression prediction model (DeepCBA) based on maize chromatin interaction data was developed. Compared with existing models, DeepCBA exhibits higher accuracy in expression classification and expression value prediction. The average Pearson correlation coefficients (PCCs) for predicting gene expression using gene promoter proximal interactions, proximal-distal interactions, and both proximal and distal interactions were 0.818, 0.625, and 0.929, respectively, representing increases of 0.357, 0.16, and 0.469 over the PCCs of traditional methods that use only gene proximal sequences. Some important motifs identified through DeepCBA were found to be enriched in open chromatin regions and expression quantitative trait loci (eQTLs) and to display tissue specificity. Importantly, experimental results for the maize flowering-related gene ZmRap2.7 and the tillering-related gene ZmTb1 demonstrate the feasibility of using DeepCBA to explore regulatory elements that affect gene expression. Moreover, promoter editing and verification of two reported genes (ZmCLE7, ZmVTE4) illustrate DeepCBA's promise for the precise design of gene expression and even future intelligent breeding. DeepCBA is available at http://www.deepcba.com/ or http://124.220.197.196/.
PMID:38859587 | DOI:10.1016/j.xplc.2024.100985
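Sequence-based expression models such as this one typically begin by one-hot encoding DNA before convolution. The sketch below uses the common (4, L) layout; treating it as DeepCBA's preprocessing is an assumption, since the abstract does not describe the encoding.

```python
import numpy as np

def one_hot_dna(seq: str) -> np.ndarray:
    """One-hot encode a promoter sequence into a (4, L) matrix, the usual
    input format for sequence-to-expression CNNs; unknown bases (N) become
    all-zero columns. DeepCBA's exact preprocessing is assumed here."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((4, len(seq)), dtype=np.float32)
    for j, base in enumerate(seq.upper()):
        if base in idx:
            out[idx[base], j] = 1.0
    return out

x = one_hot_dna("ACGTN")  # x.shape == (4, 5); the N column stays zero
```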
Target recognition and segmentation in turbid water using data from non-turbid conditions: a unified approach and experimental validation
Opt Express. 2024 Jun 3;32(12):20654-20668. doi: 10.1364/OE.524714.
ABSTRACT
Semantic segmentation of targets in underwater images within turbid water environments presents significant challenges, hindered by factors such as environmental variability, difficulties in acquiring datasets, imprecise data annotation, and the poor robustness of conventional methods. This paper addresses this issue by proposing a novel joint deep learning method to effectively perform semantic segmentation in turbid environments, with the practical use case of efficiently collecting polymetallic nodules in the deep sea while minimizing damage to the seabed environment. Our approach includes a novel data expansion technique and a modified U-net based model. Drawing on the underwater image formation model, we introduce noise into clear-water images to simulate images captured under varying degrees of turbidity, thus providing an alternative to collecting real turbid-water data. Furthermore, because traditional modified U-net models have shown limited performance gains on such tasks, we propose a new model that incorporates an improved dual-channel encoder designed around the primary factors underlying image degradation. Our method significantly advances the fine segmentation of underwater images in turbid media, and experimental validation demonstrates its effectiveness and superiority under different turbidity conditions. The study provides new technical means for deep-sea resource development, holding broad application prospects and scientific value.
PMID:38859442 | DOI:10.1364/OE.524714
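One common way to formalize the underwater image formation model used for data expansion is the scattering relation I = J·t + A·(1 − t) with transmission t = exp(−β·d). The sketch below assumes uniform depth and a scalar ambient light, which the paper's simulator may refine per pixel and per channel.

```python
import numpy as np

def simulate_turbidity(clear: np.ndarray, beta: float, depth: float,
                       ambient: float = 0.7) -> np.ndarray:
    """Degrade a clear-water image with a common underwater formation
    model: I = J*t + A*(1 - t), where t = exp(-beta * d). Uniform depth
    and scalar ambient light are simplifying assumptions."""
    t = np.exp(-beta * depth)           # medium transmission in [0, 1]
    return clear * t + ambient * (1.0 - t)

# Heavier scattering (larger beta) pushes the image toward the ambient veil.
turbid = simulate_turbidity(np.random.rand(256, 256, 3), beta=1.2, depth=1.5)
```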
Vision transformer empowered physics-driven deep learning for omnidirectional three-dimensional holography
Opt Express. 2024 Apr 8;32(8):14394-14404. doi: 10.1364/OE.519400.
ABSTRACT
Inter-plane crosstalk and limited axial resolution are two key factors that hinder the performance of three-dimensional (3D) holograms. State-of-the-art methods rely on increasing the orthogonality of the cross-sections of a 3D object at different depths to lower the impact of inter-plane crosstalk. Such strategies either produce unidirectional 3D holograms or induce speckle noise. Recently, learning-based methods have provided a new way to solve this problem. However, most related works rely on convolutional neural networks, and the reconstructed 3D holograms have limited axial resolution and display quality. In this work, we propose a vision transformer (ViT) empowered physics-driven deep neural network that can generate omnidirectional 3D holograms. Owing to the global attention mechanism of the ViT, our 3D computer-generated hologram (CGH) exhibits small inter-plane crosstalk and high axial resolution. We believe our work not only promotes high-quality 3D holographic display but also opens a new avenue for complex inverse design in photonics.
PMID:38859385 | DOI:10.1364/OE.519400
ICF-PR-Net: a deep phase retrieval neural network for X-ray phase contrast imaging of inertial confinement fusion capsules
Opt Express. 2024 Apr 8;32(8):14356-14376. doi: 10.1364/OE.518249.
ABSTRACT
X-ray phase contrast imaging (XPCI) has demonstrated capability to characterize inertial confinement fusion (ICF) capsules, and phase retrieval can reconstruct phase information from intensity images. This study introduces ICF-PR-Net, a novel deep learning-based phase retrieval method for ICF-XPCI. We numerically constructed datasets based on ICF capsule shape features, and proposed an object-image loss function to add image formation physics to network training. ICF-PR-Net outperformed traditional methods as it exhibited satisfactory robustness against strong noise and nonuniform background and was well-suited for ICF-XPCI's constrained experimental conditions and single exposure limit. Numerical and experimental results showed that ICF-PR-Net accurately retrieved the phase and absorption while maintaining retrieval quality in different situations. Overall, the ICF-PR-Net enables the diagnosis of the inner interface and electron density of capsules to address ignition-preventing problems, such as hydrodynamic instability growth.
PMID:38859383 | DOI:10.1364/OE.518249
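The object-image loss described above injects image-formation physics into training by supervising in both domains. A conceptual sketch follows, assuming a differentiable XPCI forward operator and an illustrative weighting; the paper's exact formulation is not given in the abstract.

```python
import torch
import torch.nn.functional as F

def object_image_loss(pred_phase, pred_absorption, gt_phase, gt_absorption,
                      forward_model, measured_intensity, w_img=1.0):
    """Conceptual object-image loss: supervise the network in the object
    domain (phase/absorption) and, through a differentiable forward model,
    in the image domain as well. forward_model and w_img are assumptions."""
    l_object = F.mse_loss(pred_phase, gt_phase) + \
               F.mse_loss(pred_absorption, gt_absorption)
    l_image = F.mse_loss(forward_model(pred_phase, pred_absorption),
                         measured_intensity)
    return l_object + w_img * l_image
```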
Flexible design of chiroptical response of planar chiral metamaterials using deep learning
Opt Express. 2024 Apr 8;32(8):13978-13985. doi: 10.1364/OE.510656.
ABSTRACT
Optical chirality is in high demand for biochemical sensing, spectral detection, and advanced imaging; however, conventional design schemes for chiral metamaterials incur high computational cost owing to their trial-and-error strategy, so it is crucial to accelerate the design process, particularly for comparatively simple planar chiral metamaterials. Herein, we construct a bidirectional deep learning (BDL) network consisting of a spectra-predicting network (SPN) and a design-predicting network (DPN) to accelerate the prediction of spectra and the inverse design of the chiroptical response of planar chiral metamaterials. The proposed BDL network is shown to accelerate the design process and exhibit high prediction accuracy. An average prediction takes only ∼15 ms, roughly 1/40,000 of the time required by finite-difference time-domain (FDTD) simulation. The mean-square error (MSE) loss of forward and inverse prediction reaches 0.0085 after 100 epochs. Over 95.2% of training samples reach MSE ≤ 0.0042 for the SPN and MSE ≤ 0.0044 for the DPN, indicating that the BDL network is robust in inverse design, with neither underfitting nor overfitting. Our findings show great potential for accelerating the on-demand design of planar chiral metamaterials.
PMID:38859355 | DOI:10.1364/OE.510656
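A standard recipe for coupled forward/inverse networks like the SPN/DPN pair is tandem training: freeze the trained forward model and optimize the inverse model through it. The sketch below assumes MLP backbones and dummy design/spectrum dimensions; the paper's exact training scheme may differ.

```python
import torch
import torch.nn as nn

def mlp(i, o):  # small MLP; layer widths are illustrative assumptions
    return nn.Sequential(nn.Linear(i, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, o))

n_params, n_points = 5, 200           # assumed design/spectrum dimensions
spn = mlp(n_params, n_points)         # forward: design -> chiroptical spectrum
dpn = mlp(n_points, n_params)         # inverse: spectrum -> design

# Tandem-style inverse training: freeze the (pre-trained) SPN and ask the
# DPN to propose designs whose predicted spectra match the targets.
for p in spn.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(dpn.parameters(), lr=1e-3)
target = torch.rand(32, n_points)     # batch of desired spectra (dummy data)
loss = nn.functional.mse_loss(spn(dpn(target)), target)
opt.zero_grad(); loss.backward(); opt.step()
```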
Deep learning-enhanced snapshot hyperspectral confocal microscopy imaging system
Opt Express. 2024 Apr 8;32(8):13918-13931. doi: 10.1364/OE.519045.
ABSTRACT
Laser-scanning confocal hyperspectral microscopy is a powerful technique for identifying different sample constituents and their spatial distribution in three dimensions (3D). However, it suffers from low imaging speed because of its mechanical scanning. To overcome this challenge, we propose a snapshot hyperspectral confocal microscopy imaging system (SHCMS). It combines coded illumination microscopy based on a digital micromirror device (DMD) with a snapshot hyperspectral confocal neural network (SHCNet) to realize single-shot confocal hyperspectral imaging. With SHCMS, high-contrast, 160-band confocal hyperspectral images of potato tuber autofluorescence can be collected in a single shot, almost a fivefold improvement in the number of spectral channels over previously reported methods. Moreover, our approach can efficiently record hyperspectral volumetric images owing to its optical sectioning capability. This fast, high-resolution hyperspectral imaging method may pave the way for real-time, highly multiplexed biological imaging.
PMID:38859350 | DOI:10.1364/OE.519045
Deep transfer learning radiomics model based on temporal bone CT for assisting in the diagnosis of inner ear malformations
Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi. 2024 Jun;38(6):547-552. doi: 10.13201/j.issn.2096-7993.2024.06.017.
ABSTRACT
Objective: To evaluate the diagnostic efficacy of traditional radiomics, deep learning, and deep learning radiomics in differentiating normal inner ears from inner ear malformations on temporal bone computed tomography (CT). Methods: A total of 572 temporal bone CT datasets were retrospectively collected, including 201 cases of inner ear malformation and 371 cases of normal inner ear, and randomly divided into a training cohort (n = 458) and a test cohort (n = 114) at a ratio of 4:1. Deep transfer learning features and radiomics features were extracted from the CT images and fused to build a least absolute shrinkage and selection operator (LASSO) model. The CT results interpreted by two chief otologists from the National Clinical Research Center for Otorhinolaryngological Diseases served as the diagnostic gold standard. Model performance was evaluated using receiver operating characteristic (ROC) analysis, and the accuracy, sensitivity, specificity, and other indicators of the models were calculated. The predictive power of the models was compared using the DeLong test. Results: 1179 radiomics features were obtained from traditional radiomics, 2048 deep learning features were obtained from deep learning, and 137 fused features remained after feature screening and fusion of the two. The area under the curve (AUC) of the deep learning radiomics model on the test cohort was 0.9640 (95% CI: 0.9314-0.9968), with an accuracy of 0.922, sensitivity of 0.881, and specificity of 0.945. The AUC of the radiomics features alone on the test cohort was 0.9290 (95% CI: 0.8822-0.9749), with an accuracy of 0.878, sensitivity of 0.881, and specificity of 0.877. The AUC of the deep learning features alone on the test cohort was 0.9470 (95% CI: 0.8982-0.9948), with an accuracy of 0.913, sensitivity of 0.810, and specificity of 0.973. These results indicate that the prediction accuracy and AUC of the deep learning radiomics model were the highest, although the DeLong test showed that the differences between any two models did not reach statistical significance. Conclusion: The feature fusion model can be used for the differential diagnosis of normal and malformed inner ears, and its diagnostic performance is superior to that of the radiomics or deep learning models alone.
PMID:38858123 | DOI:10.13201/j.issn.2096-7993.2024.06.017
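The fusion-plus-LASSO screening step above can be sketched with an L1-penalized logistic regression over the concatenated 1179 radiomics and 2048 deep features. The data below are dummy stand-ins and the penalty strength is illustrative; in practice C would be tuned by cross-validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Dummy stand-ins for the fused feature matrix: 1179 radiomics + 2048 deep
# features per temporal bone CT; labels are malformation vs normal.
rng = np.random.default_rng(0)
X = rng.normal(size=(458, 1179 + 2048))
y = rng.integers(0, 2, size=458)

# L1-penalized logistic regression as a LASSO-style selector: smaller C
# means stronger sparsity, i.e., fewer fused features survive.
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
selector.fit(StandardScaler().fit_transform(X), y)
kept = np.flatnonzero(selector.coef_[0])  # indices of surviving features
print(f"{kept.size} features retained")
```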
Preface to the Special Issue of Food and Chemical Toxicology on "New approach methodologies and machine learning in food safety and chemical risk assessment: Development of reproducible, open-source, and user-friendly tools for exposure, toxicokinetic,...
Food Chem Toxicol. 2024 Jun 8:114809. doi: 10.1016/j.fct.2024.114809. Online ahead of print.
ABSTRACT
This Special Issue contains articles on applications of various new approach methodologies (NAMs) in the field of toxicology and risk assessment. These NAMs include in vitro high-throughput screening, quantitative structure-activity relationship (QSAR) modeling, physiologically based pharmacokinetic (PBPK) modeling, network toxicology analysis, molecular docking simulation, omics, machine learning, deep learning, and "template-and-anchor" multiscale computational modeling. These in vitro and in silico approaches complement each other and can be integrated together to support different applications of toxicology, including food safety assessment, dietary exposure assessment, chemical toxicity potency screening and ranking, chemical toxicity prediction, chemical toxicokinetic simulation, and to investigate the potential mechanisms of toxicities, as introduced further in selected articles in this Special Issue.
PMID:38857761 | DOI:10.1016/j.fct.2024.114809
Artificial intelligence-assisted quantitative CT analysis of airway changes following SABR for central lung tumors
Radiother Oncol. 2024 Jun 8:110376. doi: 10.1016/j.radonc.2024.110376. Online ahead of print.
ABSTRACT
INTRODUCTION: Use of stereotactic ablative radiotherapy (SABR) for central lung tumors can result in up to a 35% incidence of late pulmonary toxicity. We evaluated an automated scoring method to quantify post-SABR bronchial changes by using artificial intelligence (AI)-based airway segmentation.
MATERIALS AND METHODS: Central lung SABR patients treated at Amsterdam UMC (AUMC, internal reference dataset) and Peter MacCallum Cancer Centre (PMCC, external validation dataset) were identified. Patients were eligible if they had pre- and post-SABR CT scans with ≤ 1 mm slice thickness. The first step of the automated scoring method involved AI-based airway auto-segmentation using MEDPSeg, an end-to-end deep learning-based model. The Vascular Modeling Toolkit in 3D Slicer was then used to extract a centerline curve through the auto-segmented airway lumen, and cross-sectional measurements were computed along each bronchus for all CT scans. For AUMC patients, airway stenosis/occlusion was evaluated by both visual assessment and automated scoring. Only the automated method was applied to the PMCC dataset.
RESULTS: Study patients comprised 26 from AUMC and 33 from PMCC. Visual scoring identified stenosis/occlusion in 8 AUMC patients (31%), most frequently in the segmental bronchi. After airway auto-segmentation, minor manual edits were needed in 9% of patients. Segmentation of a single scan averaged 83 s (range, 73-136 s). Automated scoring nearly doubled the detected airway stenosis/occlusion (n = 15, 58%) and allowed earlier detection in 5 of the 8 patients with visually scored changes. Estimated rates were 48% and 66% at 1 and 2 years, respectively, for the internal dataset. The automated detection rate was 52% in the external dataset, with 1- and 2-year risks of 56% and 61%, respectively.
CONCLUSION: An AI-based automated scoring method allows for detection of more bronchial stenosis/occlusion after lung SABR, and at an earlier time-point. This tool can facilitate studies to determine early airway changes and establish more reliable airway tolerance doses.
PMID:38857700 | DOI:10.1016/j.radonc.2024.110376
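Turning the automated cross-sectional measurements into a stenosis/occlusion call could be done with a relative-narrowing rule such as the one sketched below; the 50% threshold is a hypothetical illustration, as the abstract does not state the study's scoring criterion.

```python
import numpy as np

def flag_stenosis(area_pre: np.ndarray, area_post: np.ndarray,
                  drop: float = 0.5) -> bool:
    """Flag a bronchus as stenosed/occluded when its post-SABR lumen area
    anywhere along the matched centerline falls below a fraction of the
    pre-SABR area, or when the lumen closes entirely. The 50% threshold
    is an illustrative assumption, not the study's criterion."""
    ratio = area_post / np.maximum(area_pre, 1e-6)
    return bool(np.any(ratio < drop)) or bool(np.any(area_post <= 0))
```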
TrueTH: A user-friendly deep learning approach for robust dopaminergic neuron detection
Neurosci Lett. 2024 Jun 8:137871. doi: 10.1016/j.neulet.2024.137871. Online ahead of print.
ABSTRACT
Parkinson's disease (PD) entails the progressive loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNc), leading to movement-related impairments. Accurate assessment of DA neuron health is vital for research applications. Manual analysis, however, is laborious and subjective. To address this, we introduce TrueTH, a user-friendly and robust pipeline for unbiased quantification of DA neurons. Existing deep learning tools for counting tyrosine hydroxylase-positive (TH+) neurons often lack accessibility or require advanced programming skills. TrueTH bridges this gap by offering an open-source, user-friendly solution for PD research. We demonstrate TrueTH's performance across various PD rodent models, showcasing its accuracy and ease of use. TrueTH exhibits remarkable resilience to staining variations and extreme conditions, accurately identifying TH+ neurons even in lightly stained images and distinguishing brain section fragments from neurons. Furthermore, evaluation of our pipeline's performance in segmenting fluorescence images shows strong correlation with ground truth, outperforming existing models in accuracy. In summary, TrueTH offers a user-friendly interface, is pretrained on a diverse range of images, and provides a practical solution for DA neuron quantification in Parkinson's disease research.
PMID:38857698 | DOI:10.1016/j.neulet.2024.137871
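A generic way to turn a TH+ segmentation mask into a neuron count, not necessarily TrueTH's own post-processing, is connected-component labeling with a size filter, as sketched below; the size cutoff is an illustrative assumption.

```python
import numpy as np
from skimage.measure import label, regionprops

def count_neurons(mask: np.ndarray, min_area: int = 30) -> int:
    """Count TH+ neurons in a binary segmentation mask via connected
    components, discarding fragments below a size cutoff. This is a
    generic post-processing sketch; min_area is an assumption."""
    labeled = label(mask > 0)
    return sum(1 for r in regionprops(labeled) if r.area >= min_area)
```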