Deep learning
Editorial: AI approach to the psychiatric diagnosis and prediction
Front Psychiatry. 2024 May 23;15:1387370. doi: 10.3389/fpsyt.2024.1387370. eCollection 2024.
NO ABSTRACT
PMID:38846910 | PMC:PMC11153783 | DOI:10.3389/fpsyt.2024.1387370
Neural networks for rapid phase quantification of cultural heritage X-ray powder diffraction data
J Appl Crystallogr. 2024 May 31;57(Pt 3):831-841. doi: 10.1107/S1600576724003704. eCollection 2024 Jun 1.
ABSTRACT
Recent developments in synchrotron radiation facilities have increased the amount of data generated during acquisitions considerably, requiring fast and efficient data processing techniques. Here, the application of dense neural networks (DNNs) to data treatment of X-ray diffraction computed tomography (XRD-CT) experiments is presented. Processing involves mapping the phases in a tomographic slice by predicting the phase fraction in each individual pixel. DNNs were trained on sets of calculated XRD patterns generated using a Python algorithm developed in-house. An initial Rietveld refinement of the tomographic slice sum pattern provides additional information (peak widths and integrated intensities for each phase) to improve the generation of simulated patterns and make them closer to real data. A grid search was used to optimize the network architecture and demonstrated that a single fully connected dense layer was sufficient to accurately determine phase proportions. This DNN was used on the XRD-CT acquisition of a mock-up and a historical sample of highly heterogeneous multi-layered decoration of a late medieval statue, called 'applied brocade'. The phase maps predicted by the DNN were in good agreement with other methods, such as non-negative matrix factorization and serial Rietveld refinements performed with TOPAS, and outperformed them in terms of speed and efficiency. The method was evaluated by regenerating experimental patterns from predictions and using the R-weighted profile as the agreement factor. This assessment allowed us to confirm the accuracy of the results.
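For illustration, a minimal sketch of the kind of network the abstract describes: a single fully connected layer mapping a 1-D diffraction pattern to per-pixel phase fractions. The pattern length, number of phases, and training loop below are assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a single fully connected layer
# mapping a 1-D XRD pattern to per-phase fractions, which the grid search in
# the abstract found sufficient. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

N_POINTS = 2048   # number of 2-theta bins in one diffraction pattern (assumed)
N_PHASES = 5      # number of crystalline phases to quantify (assumed)

model = nn.Sequential(
    nn.Linear(N_POINTS, N_PHASES),
    nn.Softmax(dim=-1),          # phase fractions sum to 1 in each pixel
)

# Training on simulated patterns with known fractions (random stand-ins here).
patterns = torch.rand(512, N_POINTS)
fractions = torch.softmax(torch.rand(512, N_PHASES), dim=-1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(patterns), fractions)
    loss.backward()
    optimizer.step()
```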
PMID:38846765 | PMC:PMC11151672 | DOI:10.1107/S1600576724003704
Breaking new ground: can artificial intelligence and machine learning transform papillary glioneuronal tumor diagnosis?
Neurosurg Rev. 2024 Jun 7;47(1):261. doi: 10.1007/s10143-024-02504-y.
ABSTRACT
Papillary glioneuronal tumors (PGNTs), classified as Grade I by the WHO in 2016, present diagnostic challenges due to their rarity and potential for malignancy. Xiaodan Du et al.'s recent study of 36 confirmed PGNT cases provides critical insights into their imaging characteristics, revealing frequent presentation with headaches, seizures, and mass effect symptoms, predominantly located in the supratentorial region near the lateral ventricles. Lesions often appeared as mixed cystic and solid masses with septations or as cystic masses with mural nodules. Given these complexities, artificial intelligence (AI) and machine learning (ML) offer promising advancements for PGNT diagnosis. Previous studies have demonstrated AI's efficacy in diagnosing various brain tumors, utilizing deep learning and advanced imaging techniques for rapid and accurate identification. Implementing AI in PGNT diagnosis involves assembling comprehensive datasets, preprocessing data, extracting relevant features, and iteratively training models for optimal performance. Despite AI's potential, medical professionals must validate AI predictions, ensuring they complement rather than replace clinical expertise. This integration of AI and ML into PGNT diagnostics could significantly enhance preoperative accuracy, ultimately improving patient outcomes through more precise and timely interventions.
PMID:38844709 | DOI:10.1007/s10143-024-02504-y
Deep learning-based prediction of compressive strength of eco-friendly geopolymer concrete
Environ Sci Pollut Res Int. 2024 Jun 7. doi: 10.1007/s11356-024-33853-2. Online ahead of print.
ABSTRACT
Greenhouse gases drive global warming, and the cement industry is one of the largest sources of these emissions. Geopolymer is synthesized by reacting an alkaline solution with waste materials such as slag and fly ash, so using eco-friendly geopolymer concrete reduces both energy consumption and greenhouse gas emissions. In this study, the compressive strength (fc) of eco-friendly geopolymer concrete was predicted with a deep long short-term memory (LSTM) network model. Support vector regression (SVR), least squares boosting ensemble (LSBoost), and multiple linear regression (MLR) models were also devised to benchmark the forecasts of the deep LSTM algorithm. The input variables of the models were the mole ratio, the alkaline solution concentration, the curing temperature, the curing days, and the liquid-to-fly ash mass ratio; the output variable was the compressive strength (fc). Furthermore, the effects of the input variables on the fc of eco-friendly geopolymer concrete were determined by sensitivity analysis. The deep LSTM, LSBoost, SVR, and MLR models predicted the fc of eco-friendly geopolymer concrete with 99.23%, 98.08%, 78.57%, and 88.03% accuracy, respectively, so the deep LSTM model forecasted fc with higher accuracy than the SVR, LSBoost, and MLR models. The sensitivity analysis showed that the curing temperature was the most influential experimental variable affecting the fc of geopolymer concrete.
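As a hedged illustration of the modeling setup (not the authors' implementation), the sketch below feeds the five mix and curing variables to a small LSTM, one variable per time step, and regresses the compressive strength fc; all layer sizes are assumed.

```python
# Illustrative sketch only: an LSTM regressor over the five input variables,
# treated as a short sequence. Architecture details are assumptions.
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted fc

    def forward(self, x):                  # x: (batch, 5, 1), one variable per step
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # regress from the last hidden state

model = LSTMRegressor()
x = torch.rand(8, 5, 1)   # mole ratio, solution concentration, curing T, curing days, L/FA ratio
print(model(x).shape)     # torch.Size([8, 1])
```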
PMID:38844634 | DOI:10.1007/s11356-024-33853-2
A fully automated classification of third molar development stages using deep learning
Sci Rep. 2024 Jun 7;14(1):13082. doi: 10.1038/s41598-024-63744-y.
ABSTRACT
Accurate classification of tooth development stages from orthopantomograms (OPGs) is crucial for dental diagnosis, treatment planning, age assessment, and forensic applications. This study aims to develop an automated method for classifying third molar development stages using OPGs. Our data initially consisted of 3422 OPG images, each classified and curated by expert evaluators. The dataset includes images from both the Q3 (lower jaw, left side) and Q4 (lower jaw, right side) regions extracted from the panoramic images, resulting in a total of 6624 images for analysis. Following data collection, the methodology employs region-of-interest extraction, pre-filtering, and extensive data augmentation to enhance classification accuracy. Several deep neural network architectures, including EfficientNet, EfficientNetV2, MobileNet Large, MobileNet Small, ResNet18, and ShuffleNet, were optimized for this task. Our findings indicate that EfficientNet achieved the highest classification accuracy, at 83.7%; the other architectures achieved accuracies ranging from 71.57% to 82.03%. This variation in performance highlights the influence of model complexity and task-specific features on classification accuracy. This research introduces a novel machine learning model designed to accurately estimate the development stages of lower wisdom teeth in OPG images, contributing to dental diagnostics and treatment planning.
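A minimal transfer-learning sketch in the spirit of the study, fine-tuning a torchvision EfficientNet on cropped third-molar regions; the number of development stages (8) and all hyperparameters are placeholders, not values reported in the paper.

```python
# Hedged sketch: fine-tune a pretrained EfficientNet-B0 for stage classification.
# The stage count and training details are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_STAGES = 8  # placeholder for the number of development stages

net = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
net.classifier[1] = nn.Linear(net.classifier[1].in_features, NUM_STAGES)

optimizer = torch.optim.AdamW(net.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a random batch of ROI crops
images = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, NUM_STAGES, (4,))
loss = criterion(net(images), labels)
loss.backward()
optimizer.step()
```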
PMID:38844566 | DOI:10.1038/s41598-024-63744-y
Deep Learning De-Noising Improves CT Perfusion Image Quality in the Setting of Lower Contrast Dosing: A Feasibility Study
AJNR Am J Neuroradiol. 2024 Jun 6:ajnr.A8367. doi: 10.3174/ajnr.A8367. Online ahead of print.
ABSTRACT
BACKGROUND AND PURPOSE: Considering recent iodinated contrast media (ICM) shortages, this study compared reduced-ICM and standard-dose CT perfusion (CTP) acquisitions and assessed the impact of deep learning (DL) denoising on CTP image quality in preclinical and clinical studies.
MATERIALS AND METHODS: Twelve swine each underwent 9 CTP exams, performed at combinations of 3 X-ray doses (37, 67, and 127 mAs) and 3 ICM doses (10, 15, and 20 mL). Clinical CTP acquisitions performed before and during the ICM shortage and the associated protocol change (from 40 mL to 30 mL) were retrospectively included: 11 patients with reduced ICM dose and 11 propensity-score-matched controls with standard ICM dose. A Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) was trained for CTP denoising using K-space-Weighted Image Average (KWIA)-filtered CTP images as the target. The standard, RED-CNN-denoised, and KWIA-filtered images from the animal and human studies were compared using quantitative SNR and qualitative image evaluation.
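For orientation, a much simplified RED-CNN-style denoiser (residual encoder-decoder with symmetric skip connections) is sketched below; the depth, channel counts, and the KWIA-filtered training targets used in the study are not reproduced.

```python
# Simplified RED-CNN-style residual encoder-decoder denoiser (illustrative only).
import torch
import torch.nn as nn

class TinyREDCNN(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 5, padding=2), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch, 5, padding=2), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch, ch, 5, padding=2), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(ch, 1, 5, padding=2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2) + e1       # residual skip from encoder to decoder
        return self.dec2(d1) + x      # global residual connection

noisy = torch.rand(1, 1, 256, 256)    # one noisy CTP frame (placeholder data)
print(TinyREDCNN()(noisy).shape)      # torch.Size([1, 1, 256, 256])
```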
RESULTS: The SNR of animal CTP images decreased with reductions in ICM and mAs doses. Contrast dose reduction had a greater effect on SNR than mAs reduction. Noise-filtering by KWIA and RED-CNN denoising progressively improved SNR of CTP maps, with RED-CNN resulting in the highest SNR. The SNR of clinical CTP images was generally lower with reduced ICM dose, which was improved by KWIA and RED-CNN denoising (p<0.05). Qualitative readings consistently rated RED-CNN denoised CTP as best quality, followed by KWIA and then standard CTP images.
CONCLUSIONS: DL-denoising can improve image quality for low ICM CTP protocols, and could approximate standard ICM dose CTP, in addition to potentially improving image quality for low mAs acquisitions.
ABBREVIATIONS: ICM=iodinated contrast media; DL=deep learning; KWIA=k-space weighted image average; LCD=low-contrast dose; SCD=standard contrast dose; RED-CNN=Residual Encoder-Decoder Convolutional Neural Network; PSNR=Peak Signal to Noise Ratio; RMSE=Root Mean Squared Error; SSIM=Structural Similarity Index.
PMID:38844370 | DOI:10.3174/ajnr.A8367
Is Automatic Tumor Segmentation on Whole-Body 18F-FDG PET Images a Clinical Reality?
J Nucl Med. 2024 Jun 6:jnumed.123.267183. doi: 10.2967/jnumed.123.267183. Online ahead of print.
ABSTRACT
The integration of automated whole-body tumor segmentation using 18F-FDG PET/CT images represents a pivotal shift in oncologic diagnostics, enhancing the precision and efficiency of tumor burden assessment. This editorial examines the transition toward automation, propelled by advancements in artificial intelligence, notably through deep learning techniques. We highlight the current availability of commercial tools and the academic efforts that have set the stage for these developments. Further, we comment on the challenges of data diversity, validation needs, and regulatory barriers. The role of metabolic tumor volume and total lesion glycolysis as vital metrics in cancer management underscores the significance of this evaluation. Despite promising progress, we call for increased collaboration across academia, clinical users, and industry to better realize the clinical benefits of automated segmentation, thus helping to streamline workflows and improve patient outcomes in oncology.
PMID:38844359 | DOI:10.2967/jnumed.123.267183
Multi-task aquatic toxicity prediction model based on multi-level features fusion
J Adv Res. 2024 Jun 4:S2090-1232(24)00226-1. doi: 10.1016/j.jare.2024.06.002. Online ahead of print.
ABSTRACT
INTRODUCTION: With the escalating menace of organic compounds in environmental pollution imperiling the survival of aquatic organisms, the investigation of organic compound toxicity across diverse aquatic species assumes paramount significance for environmental protection. Understanding how different species respond to these compounds helps assess the potential ecological impact of pollution on aquatic ecosystems as a whole. Compared with traditional experimental methods, deep learning methods have higher accuracy in predicting aquatic toxicity, faster data processing speed and better generalization ability.
OBJECTIVES: This article presents ATFPGT-multi, an advanced multi-task deep neural network prediction model for organic toxicity.
METHODS: The model integrates molecular fingerprints and molecule graphs to characterize molecules, enabling the simultaneous prediction of acute toxicity for the same organic compound across four distinct fish species. Furthermore, to validate the advantages of multi-task learning, we independently construct prediction models, named ATFPGT-single, for each fish species. We employ cross-validation in our experiments to assess the performance and generalization ability of ATFPGT-multi.
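For a sense of the multi-task structure only, the sketch below shares one encoder over molecular fingerprints and attaches one toxicity head per fish species; the paper's molecule-graph branch and attention mechanism are omitted, and all dimensions are assumptions.

```python
# Hedged sketch of the multi-task idea: a shared encoder with one head per species.
import torch
import torch.nn as nn

N_BITS, N_SPECIES = 2048, 4   # fingerprint length and number of fish species (assumed)

class MultiTaskTox(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(N_BITS, 256), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(256, 1) for _ in range(N_SPECIES))

    def forward(self, fp):
        h = self.shared(fp)
        # one acute-toxicity logit per species for the same compound
        return torch.cat([head(h) for head in self.heads], dim=1)

fp = torch.randint(0, 2, (16, N_BITS)).float()   # batch of binary fingerprints (placeholder)
print(MultiTaskTox()(fp).shape)                  # torch.Size([16, 4])
```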
RESULTS: The experimental results indicate, first, that ATFPGT-multi outperforms ATFPGT-single on the four fish datasets, with AUC improvements of 9.8%, 4%, 4.8%, and 8.2%, respectively, demonstrating the superiority of multi-task over single-task learning. Second, ATFPGT-multi outperforms previously published methods, indicating that our approach predicts aquatic toxicity with higher accuracy and reliability. Moreover, ATFPGT-multi uses attention scores to identify the molecular fragments associated with fish toxicity in organic molecules, as illustrated by two examples in the main text, demonstrating the interpretability of the model.
CONCLUSION: In summary, ATFPGT-multi provides important support and reference for the further development of aquatic toxicity assessment. All code and datasets are freely available online at https://github.com/zhaoqi106/ATFPGT-multi.
PMID:38844122 | DOI:10.1016/j.jare.2024.06.002
Status and future trends in wastewater management strategies using artificial intelligence and machine learning techniques
Chemosphere. 2024 Jun 4:142477. doi: 10.1016/j.chemosphere.2024.142477. Online ahead of print.
ABSTRACT
Water harvesting and recycling are the two main ways to meet the world's impending need for water in the face of the widespread water crisis. Accordingly, the present study focuses on water management strategies used in a variety of contexts. To distribute water effectively, conserve it, and satisfy water quality requirements for a variety of uses, intelligent water management mechanisms must be applied while keeping the population density index in mind. This review presents the latest trends in water and wastewater recycling that use artificial intelligence (AI) and machine learning (ML) techniques for distribution, rainfall collection, and the control of irrigation models. The data collected for these purposes are varied and come in different forms. An efficient water management system could be developed by combining AI, deep learning (DL), and an Internet of Things (IoT) infrastructure. This study examines several water management methodologies using AI, DL, and IoT, with case studies and sample statistical assessment, to provide an efficient framework for water management.
PMID:38844107 | DOI:10.1016/j.chemosphere.2024.142477
Artificial intelligence in veterinary diagnostic imaging: Perspectives and limitations
Res Vet Sci. 2024 May 31;175:105317. doi: 10.1016/j.rvsc.2024.105317. Online ahead of print.
ABSTRACT
The field of veterinary diagnostic imaging is undergoing significant transformation with the integration of artificial intelligence (AI) tools. This manuscript provides an overview of the current state and future prospects of AI in veterinary diagnostic imaging. The manuscript delves into various applications of AI across different imaging modalities, such as radiology, ultrasound, computed tomography, and magnetic resonance imaging. Examples of AI applications in each modality are provided, ranging from orthopaedics to internal medicine, cardiology, and more. Notable studies are discussed, demonstrating AI's potential for improved accuracy in detecting and classifying various abnormalities. The ethical considerations of using AI in veterinary diagnostics are also explored, highlighting the need for transparent AI development, accurate training data, awareness of the limitations of AI models, and the importance of maintaining human expertise in the decision-making process. The manuscript underscores the significance of AI as a decision support tool rather than a replacement for human judgement. In conclusion, this comprehensive manuscript offers an assessment of the current landscape and future potential of AI in veterinary diagnostic imaging. It provides insights into the benefits and challenges of integrating AI into clinical practice while emphasizing the critical role of ethics and human expertise in ensuring the wellbeing of veterinary patients.
PMID:38843690 | DOI:10.1016/j.rvsc.2024.105317
Construction of an antidepressant priority list based on functional, environmental, and health risks using an interpretable mixup-transformer deep learning model
J Hazard Mater. 2024 May 22;474:134651. doi: 10.1016/j.jhazmat.2024.134651. Online ahead of print.
ABSTRACT
As emerging pollutants, antidepressants (ADs) must be urgently investigated for risk identification and assessment. This study constructed a comprehensive-effect risk-priority screening system (ADRank) for ADs by characterizing their functionality, occurrence, persistence, bioaccumulation, and toxicity with an integrated assignment method. A classification model for ADs was constructed using an improved mixup-transformer deep learning method, and its classification accuracy was compared with those of other models. The accuracy of the proposed model was up to 23.25% higher than that of the random forest model, and its reliability was 80% higher than that of the TOPSIS method. A priority screening candidate list was proposed, from which 33 high-priority ADs were screened. Finally, SHapley Additive exPlanations (SHAP) visualization, molecular dynamics, and amino acid analysis were performed to relate AD structure to toxic-receptor binding characteristics and to reveal the differences in AD risk priority. ADs with more intramolecular hydrogen bonds, higher hydrophobicity, and greater electronegativity posed a more significant risk. Van der Waals and electrostatic interactions were the primary influencing factors, and significant differences were observed in the types and proportions of the main amino acids involved in the interactions between ADs and receptors. The results of the study provide constructive schemes and insights for AD priority screening and risk management.
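As a point of reference for the "mixup" component named in the classifier, a generic mixup augmentation step (Zhang et al.) is sketched below on placeholder descriptor vectors; it is not the authors' implementation.

```python
# Generic mixup augmentation sketch: blend random sample pairs and their labels.
import numpy as np

rng = np.random.default_rng(0)

def mixup(x, y, alpha=0.2):
    """Blend random pairs of samples and one-hot labels with a Beta-distributed weight."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

x = rng.normal(size=(8, 10))               # placeholder molecular descriptors for 8 ADs
y = np.eye(3)[rng.integers(0, 3, 8)]       # placeholder one-hot risk classes
x_mix, y_mix = mixup(x, y)
print(x_mix.shape, y_mix.shape)            # (8, 10) (8, 3)
```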
PMID:38843640 | DOI:10.1016/j.jhazmat.2024.134651
Light&fast generative adversarial network for high-fidelity CT image synthesis of liver tumor
Comput Methods Programs Biomed. 2024 May 28;254:108252. doi: 10.1016/j.cmpb.2024.108252. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Hepatocellular carcinoma (HCC) is a common disease with high mortality. Analyzing HCC CT images with deep learning methods makes it possible to build screening, classification, and prognosis models for HCC, which in turn advances computer-aided diagnosis and treatment. However, establishing an HCC auxiliary diagnosis model in practice is significantly challenged by data imbalance, especially for rare subtypes of HCC and underrepresented demographic groups. This study proposes a GAN model aimed at overcoming these obstacles and improving the accuracy of HCC auxiliary diagnosis.
METHODS: To generate liver and tumor images close to the real distribution, we first construct a new gradient transfer sampling module that addresses the deep model's lack of texture detail and excessive gradient transfer parameters; second, we construct an attention module with spatial and cross-channel feature extraction to improve the discriminator's ability to distinguish images; finally, we design a new loss function based on liver tumor imaging features that constrains the model to approach real tumor features over the iterations.
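To make the adversarial setup concrete, a bare-bones GAN training step is sketched below; the gradient transfer sampling module, attention discriminator, and tumor-feature loss described above are deliberately left out, and all shapes are placeholders.

```python
# Bare-bones GAN training step (illustrative only, not the paper's architecture).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(128, 64 * 64), nn.Tanh())   # latent vector -> flattened CT patch
D = nn.Sequential(nn.Linear(64 * 64, 1))                 # flattened patch -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 64 * 64)     # stand-in for real liver/tumor patches
z = torch.randn(16, 128)

# discriminator step: real patches vs. detached generator samples
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(G(z).detach()), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# generator step: fool the discriminator (a tumor-feature loss term would be added here)
g_loss = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```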
RESULTS: In the qualitative analysis, the images synthesized by our method closely resemble the real images in the liver parenchyma, blood vessels, tumors, and other regions. In the quantitative analysis, the best FID, PSNR, and SSIM values are 75.73, 22.77, and 0.74, respectively. Furthermore, our experiments establish classification models for the imbalanced data and the enhanced data, yielding an increase in accuracy of 21%-34%, an increase in AUC of 0.29-0.33, and an increase in specificity to 0.89.
CONCLUSION: Our solution provides a variety of training data sources with low cost and high efficiency for the establishment of classification or prognostic models for imbalanced data.
PMID:38843572 | DOI:10.1016/j.cmpb.2024.108252
New vision of HookEfficientNet deep neural network: Intelligent histopathological recognition system of non-small cell lung cancer
Comput Biol Med. 2024 Jun 4;178:108710. doi: 10.1016/j.compbiomed.2024.108710. Online ahead of print.
ABSTRACT
BACKGROUND: Efficient and precise diagnosis of non-small cell lung cancer (NSCLC) is critical for subsequent targeted therapy and immunotherapy. Since the advent of whole slide images (WSIs), the transition from traditional histopathology to digital pathology has spurred the application of convolutional neural networks (CNNs) in histopathological recognition and diagnosis. HookNet can make full use of macroscopic and microscopic information for pathological diagnosis, but it cannot integrate other high-performing CNN structures. The new HookEfficientNet combines the HookNet structure with EfficientNet, which performs well in general object recognition. Here, a high-precision artificial intelligence-guided histopathological recognition system was established with HookEfficientNet to provide a basis for the intelligent differential diagnosis of NSCLC.
METHODS: A total of 216 WSIs of lung adenocarcinoma (LUAD) and 192 WSIs of lung squamous cell carcinoma (LUSC) were collected from the First Affiliated Hospital of Zhengzhou University. Deep learning models based on HookEfficientNet, HookNet, and EfficientNet B4-B6 were developed and compared using the area under the curve (AUC) and the Youden index. Temperature scaling was used to calibrate the heatmaps and highlight the cancer regions of interest. Four pathologists of different experience levels blindly reviewed 108 WSIs of LUAD and LUSC, and their diagnostic results were compared with those of the various deep learning models.
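The temperature-scaling step mentioned above is a standard calibration technique (Guo et al.); a minimal sketch on random placeholder logits is given below and is not the study's code.

```python
# Minimal temperature-scaling sketch: fit a single temperature T by minimising
# cross-entropy on held-out logits. Data here are random placeholders.
import torch
import torch.nn as nn

logits = torch.randn(100, 3)              # LUAD / LUSC / normal logits (placeholder)
labels = torch.randint(0, 3, (100,))

log_t = nn.Parameter(torch.zeros(1))      # optimise log(T) so T stays positive
opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

def closure():
    opt.zero_grad()
    loss = nn.functional.cross_entropy(logits / log_t.exp(), labels)
    loss.backward()
    return loss

opt.step(closure)
calibrated_probs = torch.softmax(logits / log_t.exp(), dim=1)
print(float(log_t.exp()))                 # fitted temperature T
```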
RESULTS: The HookEfficientNet model outperformed HookNet and EfficientNet B4-B6. After temperature scaling, the HookEfficientNet model achieved AUCs of 0.973, 0.980, and 0.989 and Youden index values of 0.863, 0.899, and 0.922 for LUAD, LUSC and normal lung tissue, respectively, in the testing set. The accuracy of the model was better than the average accuracy from experienced pathologists, and the model was superior to pathologists in the diagnosis of LUSC.
CONCLUSIONS: HookEfficientNet can effectively recognize LUAD and LUSC with performance superior to that of senior pathologists, especially for LUSC. The model has great potential to facilitate the application of deep learning-assisted histopathological diagnosis for LUAD and LUSC in the future.
PMID:38843570 | DOI:10.1016/j.compbiomed.2024.108710
Exploring potential circRNA biomarkers for cancers based on double-line heterogeneous graph representation learning
BMC Med Inform Decis Mak. 2024 Jun 6;24(1):159. doi: 10.1186/s12911-024-02564-6.
ABSTRACT
BACKGROUND: Compared with time-consuming and labor-intensive biological validation in vitro or in vivo, computational models can provide high-quality, purposeful candidates almost instantly. However, existing computational models face limitations in effectively exploiting sparse local structural information for accurate prediction of circRNA-disease associations. This study addresses this challenge with a proposed method, CDA-DGRL (Prediction of CircRNA-Disease Association based on Double-line Graph Representation Learning), a deep learning framework that leverages graph networks and a double-line representation model integrating graph node features.
METHOD: CDA-DGRL comprises several key steps: initially, the integration of diverse biological information to compute integrated similarities among circRNAs and diseases, leading to the construction of a heterogeneous network specific to circRNA-disease associations. Subsequently, circRNA and disease node features are derived using sparse autoencoders. Thirdly, a graph convolutional neural network is employed to capture the local graph network structure by inputting the circRNA-disease heterogeneous network alongside node features. Fourthly, the utilization of node2vec facilitates depth-first sampling of the circRNA-disease heterogeneous network to grasp the global graph network structure, addressing issues associated with sparse raw data. Finally, the fusion of local and global graph network structures is inputted into an extra trees classifier to identify potential circRNA-disease associations.
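A hedged sketch of the final fusion-and-classification step is shown below, training an extra-trees classifier on concatenated local (GCN) and global (node2vec) embeddings; the embeddings are random placeholders standing in for the model's real features.

```python
# Illustrative final step: extra-trees classification of fused local and global features.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
local_emb = rng.normal(size=(600, 64))    # per-pair features from the GCN branch (placeholder)
global_emb = rng.normal(size=(600, 64))   # per-pair features from node2vec (placeholder)
X = np.hstack([local_emb, global_emb])
y = rng.integers(0, 2, 600)               # known vs. unknown circRNA-disease pairs

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))  # five-fold AUC, as in the paper
```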
RESULTS: The results, obtained through rigorous five-fold cross-validation on the circR2Disease dataset, demonstrate the superiority of CDA-DGRL, with an AUC of 0.9866 and an AUPR of 0.9897, over existing state-of-the-art models. Notably, the extra-trees classifier employed in this model outperforms other machine learning classifiers.
CONCLUSION: Thus, CDA-DGRL stands as a promising methodology for reliably identifying circRNA-disease associations, offering potential avenues to alleviate the necessity for extensive traditional biological experiments. The source code and data for this study are available at https://github.com/zywait/CDA-DGRL.
PMID:38844961 | DOI:10.1186/s12911-024-02564-6
MedYOLO: A Medical Image Object Detection Framework
J Imaging Inform Med. 2024 Jun 6. doi: 10.1007/s10278-024-01138-2. Online ahead of print.
ABSTRACT
Artificial intelligence-enhanced identification of organs, lesions, and other structures in medical imaging is typically done using convolutional neural networks (CNNs) designed to make voxel-accurate segmentations of the region of interest. However, the labels required to train these CNNs are time-consuming to generate and require attention from subject matter experts to ensure quality. For tasks where voxel-level precision is not required, object detection models offer a viable alternative that can reduce annotation effort. Despite this potential application, few general-purpose object detection frameworks are available for 3-D medical imaging. We report on MedYOLO, a 3-D object detection framework that uses the one-shot detection method of the YOLO family of models and is designed for use with medical imaging. We tested this model on four different datasets: BRaTS, LIDC, an abdominal organ computed tomography (CT) dataset, and an ECG-gated heart CT dataset. We found our models achieve high performance on a diverse range of structures even without hyperparameter tuning, reaching a mean average precision (mAP) at an intersection over union (IoU) of 0.5 of 0.861 on BRaTS, 0.715 on the abdominal CT dataset, and 0.995 on the heart CT dataset. However, the models struggle with some structures, failing to converge on LIDC and resulting in a mAP@0.5 of 0.0.
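As a small aside on the evaluation metric, the sketch below computes the intersection-over-union of two axis-aligned 3-D boxes, the quantity thresholded at 0.5 in the reported mAP figures; it is not code from the MedYOLO framework.

```python
# IoU of two axis-aligned 3-D boxes given as (z1, y1, x1, z2, y2, x2).
import numpy as np

def iou_3d(a, b):
    """Intersection-over-union of two 3-D boxes in corner format."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol = lambda box: np.prod(box[3:] - box[:3])
    return inter / (vol(a) + vol(b) - inter)

pred = np.array([10, 20, 20, 40, 60, 60], float)
gt   = np.array([12, 22, 18, 42, 58, 62], float)
print(iou_3d(pred, gt) >= 0.5)   # counts as a hit at the mAP@0.5 threshold
```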
PMID:38844717 | DOI:10.1007/s10278-024-01138-2
Deep learning-based low-dose CT simulator for non-linear reconstruction methods
Med Phys. 2024 Jun 6. doi: 10.1002/mp.17232. Online ahead of print.
ABSTRACT
BACKGROUND: Computer algorithms that simulate lower-dose computed tomography (CT) images from clinical-dose images are widely available. However, most operate in the projection domain and assume access to the reconstruction method. Access to commercial reconstruction methods is often unavailable in medical research, which makes image-domain noise simulation methods useful. However, the introduction of non-linear reconstruction methods, such as iterative and deep learning-based reconstruction, makes noise insertion in the image domain intractable, as the noise textures cannot be determined analytically.
PURPOSE: To develop a deep learning-based image-domain method to generate low-dose CT images from clinical-dose CT (CDCT) images for non-linear reconstruction methods.
METHODS: We propose a fully image domain-based method, utilizing a series of three convolutional neural networks (CNNs), which, respectively, denoise CDCT images, predict the standard deviation map of the low-dose image, and generate the noise power spectra (NPS) of local patches throughout the low-dose image. All three models have U-net-based architectures and are partly or fully three-dimensional. As a use case for this study, and with no loss of generality, we use paired low-dose and clinical-dose brain CT scans. A dataset of 326 paired scans was retrospectively obtained. All images were acquired with a wide-area detector clinical system and reconstructed using its standard clinical iterative algorithm. Each pair was registered using rigid registration to correct for motion between acquisitions. The data were randomly partitioned into training (251 samples), validation (25 samples), and test (50 samples) sets. The performance of each of the three CNNs was validated separately. For the denoising CNN, the local standard deviation decrease and bias were determined. For the standard deviation map CNN, the real and estimated standard deviations were compared locally. Finally, for the NPS CNN, the NPS of the synthetic and real low-dose noise were compared inside and outside the skull. Two proof-of-concept denoising studies were performed to determine whether the performance of a CNN- or a gradient-based denoising filter differed between the synthetic and real low-dose data.
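A hedged sketch of the underlying image-domain noise-insertion idea is given below: white noise is coloured with a target noise power spectrum, scaled by a local standard-deviation map, and added to a denoised image. The three CNNs described above are replaced here by placeholder arrays.

```python
# Image-domain noise insertion sketch: colour white noise with a target NPS,
# scale it locally, and add it to a denoised image. All inputs are placeholders.
import numpy as np

rng = np.random.default_rng(0)
denoised = rng.normal(40, 5, (256, 256))        # stand-in for the CNN-denoised CDCT slice
std_map = np.full((256, 256), 12.0)             # stand-in for the predicted std-dev map (HU)
nps = np.abs(np.fft.fftfreq(256)[:, None]) + np.abs(np.fft.fftfreq(256)[None, :])  # toy ramp-like NPS

white = rng.standard_normal((256, 256))
coloured = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)))
coloured /= coloured.std()                      # unit variance before local scaling

synthetic_low_dose = denoised + std_map * coloured
print(synthetic_low_dose.shape)                 # (256, 256)
```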
RESULTS: The denoising network decreased the noise in the cerebrospinal fluid by a median factor of 1.71 and introduced a median bias of +0.7 HU. The network for standard deviation map estimation had a median error of +0.1 HU. The noise power spectrum estimation network was able to capture the anisotropic and shift-variant nature of the noise structure, showing good agreement between the synthetic and real low-dose noise and their corresponding power spectra. The two proof-of-concept denoising studies showed only a minimal difference in the standard deviation improvement ratio between the synthetic and real low-dose CT images, with median differences of 0.0 and +0.05 for the CNN- and gradient-based filters, respectively.
CONCLUSION: The proposed method demonstrated good performance in generating synthetic low-dose brain CT scans without access to the projection data or to the reconstruction method. This method can generate multiple low-dose image realizations from one clinical-dose image, so it is useful for validation, optimization, and repeatability studies of image-processing algorithms.
PMID:38843540 | DOI:10.1002/mp.17232
Automated Lugano Metabolic Response Assessment in (18)F-Fluorodeoxyglucose-Avid Non-Hodgkin Lymphoma With Deep Learning on (18)F-Fluorodeoxyglucose-Positron Emission Tomography
J Clin Oncol. 2024 Jun 6:JCO2301978. doi: 10.1200/JCO.23.01978. Online ahead of print.
ABSTRACT
PURPOSE: Artificial intelligence can reduce the time used by physicians on radiological assessments. For 18F-fluorodeoxyglucose-avid lymphomas, obtaining complete metabolic response (CMR) by end of treatment is prognostic.
METHODS: Here, we present a deep learning-based algorithm for fully automated treatment response assessments according to the Lugano 2014 classification. The proposed four-stage method, trained on a multicountry clinical trial (ClinicalTrials.gov identifier: NCT01287741) and tested in three independent multicenter and multicountry test sets on different non-Hodgkin lymphoma subtypes and different lines of treatment (ClinicalTrials.gov identifiers: NCT02257567, NCT02500407, 20% holdout in ClinicalTrials.gov identifier: NCT01287741), outputs the detected lesions at baseline and follow-up to enable focused radiologist review.
RESULTS: The method's response assessment achieved high agreement with the adjudicated radiologic responses (eg, agreement for overall response assessment of 93%, 87%, and 85% in ClinicalTrials.gov identifiers: NCT01287741, NCT02500407, and NCT02257567, respectively) similar to inter-radiologist agreement and was strongly prognostic of outcomes with a trend toward higher accuracy for death risk than adjudicated radiologic responses (hazard ratio for end of treatment by-model CMR of 0.123, 0.054, and 0.205 in ClinicalTrials.gov identifiers: NCT01287741, NCT02500407, and NCT02257567, compared with, respectively, 0.226, 0.292, and 0.272 for CMR by the adjudicated responses). Furthermore, a radiologist review of the algorithm's assessments was conducted. The radiologist median review time was 1.38 minutes/assessment, and no statistically significant differences were observed in the level of agreement of the radiologist with the model's response compared with the level of agreement of the radiologist with the adjudicated responses.
CONCLUSION: These results suggest that the proposed method can be incorporated into radiologic response assessment workflows in cancer imaging for significant time savings and with performance similar to trained medical experts.
PMID:38843483 | DOI:10.1200/JCO.23.01978
Rapid Detection of SARS-CoV-2 Variants Using an Angiotensin-Converting Enzyme 2-Based Surface-Enhanced Raman Spectroscopy Sensor Enhanced by CoVari Deep Learning Algorithms
ACS Sens. 2024 Jun 6. doi: 10.1021/acssensors.4c00488. Online ahead of print.
ABSTRACT
An integrated approach combining surface-enhanced Raman spectroscopy (SERS) with a specialized deep learning algorithm to rapidly and accurately detect and quantify SARS-CoV-2 variants is developed based on an angiotensin-converting enzyme 2 (ACE2)-functionalized AgNR@SiO2 array SERS sensor. SERS spectra of the different variants at a range of concentrations were collected using a portable Raman system. After appropriate spectral preprocessing, a deep learning algorithm, CoVari, was developed to predict both the viral variant species and the concentrations. Using a 10-fold cross-validation strategy, the model achieves an average accuracy of 99.9% in discriminating between virus variants and R2 values larger than 0.98 for quantifying the viral concentrations of the three viruses, demonstrating the high quality of the detection. The limit of detection of the ACE2 SERS sensor is determined to be 10.472, 11.882, and 21.591 PFU/mL for SARS-CoV-2, SARS-CoV-2 B1, and CoV-NL63, respectively. The feature importances for virus classification and concentration regression in the CoVari algorithm were calculated with a permutation algorithm and showed a clear correlation with the biochemical origins of the spectra or spectral changes. In an unknown-specimen test, classification accuracy reaches >90% for concentrations larger than 781 PFU/mL, and the predicted concentrations consistently align with the actual values, highlighting the robustness of the proposed algorithm. Based on the CoVari architecture and its output vector, the algorithm can be generalized to predict both viral variant species and concentrations simultaneously for a broader range of viruses. These results demonstrate that the SERS + CoVari strategy has the potential for rapid, quantitative detection of virus variants and for point-of-care diagnostic platforms.
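To illustrate the evaluation protocol only, the sketch below runs a 10-fold cross-validation with a joint variant classifier and concentration regressor on random placeholder spectra; the CoVari network itself is replaced by off-the-shelf scikit-learn models.

```python
# 10-fold cross-validation over spectra with joint classification and regression
# (protocol illustration only; data and models are stand-ins).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(300, 1024))          # preprocessed Raman spectra (placeholder)
variant = rng.integers(0, 3, 300)               # SARS-CoV-2, SARS-CoV-2 B1, CoV-NL63
log_conc = rng.uniform(1, 6, 300)               # log10 PFU/mL (placeholder)

accs, r2s = [], []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(spectra):
    accs.append(accuracy_score(variant[te],
                RandomForestClassifier().fit(spectra[tr], variant[tr]).predict(spectra[te])))
    r2s.append(r2_score(log_conc[te],
               RandomForestRegressor().fit(spectra[tr], log_conc[tr]).predict(spectra[te])))
print(np.mean(accs), np.mean(r2s))              # fold-averaged accuracy and R2
```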
PMID:38843447 | DOI:10.1021/acssensors.4c00488
Threatening language detection from Urdu data with deep sequential model
PLoS One. 2024 Jun 6;19(6):e0290915. doi: 10.1371/journal.pone.0290915. eCollection 2024.
ABSTRACT
The Urdu language is spoken and written on social media platforms such as Twitter, WhatsApp, Facebook, and YouTube. However, because of the lack of Urdu Language Processing (ULP) libraries, it is challenging to identify threats in the textual and sequential Urdu data posted on social media. It is therefore necessary to be able to preprocess Urdu data as efficiently as English by creating stemming and data-cleaning libraries for Urdu. Lexical and machine learning-based techniques have been introduced in the literature, but all of them are limited by the unavailability of an online Urdu vocabulary. This research introduces such an Urdu vocabulary, including a stop-word list and a stemming dictionary, to preprocess Urdu data as efficiently as English. This reduces the input size of Urdu sentences and removes redundant and noisy information. Finally, a deep sequential model based on Long Short-Term Memory (LSTM) units is trained on the preprocessed data, evaluated, and tested. Our proposed methodology achieved good prediction performance, with an accuracy of 82%, which is higher than that of existing methods.
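A hedged sketch of the pipeline's shape, stop-word filtering followed by an LSTM classifier, is given below; the tiny stop-word set, vocabulary handling, and layer sizes are illustrative stand-ins, not the resources built in this work.

```python
# Illustrative pipeline: Urdu stop-word filtering, integer encoding, LSTM classifier.
import torch
import torch.nn as nn

URDU_STOP_WORDS = {"کا", "کی", "کے", "اور", "ہے"}        # tiny illustrative subset

def preprocess(sentence, vocab):
    tokens = [t for t in sentence.split() if t not in URDU_STOP_WORDS]
    return torch.tensor([vocab.setdefault(t, len(vocab)) for t in tokens])

class ThreatLSTM(nn.Module):
    def __init__(self, vocab_size=5000, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                   # threatening vs. not

    def forward(self, ids):                               # ids: (batch, seq_len)
        _, (h, _) = self.lstm(self.emb(ids))
        return self.out(h[-1])

vocab = {}
ids = preprocess("یہ ایک مثال ہے", vocab).unsqueeze(0)
print(ThreatLSTM()(ids).shape)                            # torch.Size([1, 2])
```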
PMID:38843283 | DOI:10.1371/journal.pone.0290915
Anomaly detection in multivariate time series data using deep ensemble models
PLoS One. 2024 Jun 6;19(6):e0303890. doi: 10.1371/journal.pone.0303890. eCollection 2024.
ABSTRACT
Anomaly detection in time series data is essential for fraud detection and intrusion monitoring applications. However, it poses challenges due to data complexity and high dimensionality. Industrial applications struggle to process high-dimensional, complex data streams in real time despite existing solutions. This study introduces deep ensemble models to improve traditional time series analysis and anomaly detection methods. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks effectively handle variable-length sequences and capture long-term relationships. Convolutional Neural Networks (CNNs) are also investigated, especially for univariate or multivariate time series forecasting. The Transformer, an architecture based on Artificial Neural Networks (ANN), has demonstrated promising results in various applications, including time series prediction and anomaly detection. Graph Neural Networks (GNNs) identify time series anomalies by capturing temporal connections and interdependencies between periods, leveraging the underlying graph structure of time series data. A novel feature selection approach is proposed to address challenges posed by high-dimensional data, improving anomaly detection by selecting different or more critical features from the data. This approach outperforms previous techniques in several aspects. Overall, this research introduces state-of-the-art algorithms for anomaly detection in time series data, offering advancements in real-time processing and decision-making across various industrial sectors.
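As a minimal illustration of score-level ensembling for time-series anomaly detection (with classical detectors standing in for the deep ensemble members described above), consider the sketch below; the data and window length are placeholders.

```python
# Score-level ensemble over sliding windows; classical detectors stand in for
# the deep ensemble members, and the data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
series = rng.normal(size=(1000, 8))            # multivariate time series (placeholder)
series[500] += 8                               # inject one obvious anomaly

win = 10
windows = np.stack([series[i:i + win].ravel() for i in range(len(series) - win)])

lof = LocalOutlierFactor(n_neighbors=20).fit(windows)
scores = np.mean([
    -IsolationForest(random_state=0).fit(windows).score_samples(windows),
    -lof.negative_outlier_factor_,
], axis=0)                                     # higher = more anomalous

print(int(np.argmax(scores)))                  # window index covering the injected spike
```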
PMID:38843255 | DOI:10.1371/journal.pone.0303890