Deep learning

Brain Disorder Detection and Diagnosis using Machine Learning and Deep Learning - A Bibliometric Analysis

Fri, 2024-06-07 06:00

Curr Neuropharmacol. 2024;22(13):e310524230577. doi: 10.2174/1570159X22999240531160344.

ABSTRACT

BACKGROUND AND OBJECTIVE: Brain disorders are among the major causes of mortality worldwide, and their early detection is crucial for effective treatment. Machine learning, and deep learning in particular, is increasingly used to detect and diagnose brain disorders. Our objective is to provide a quantitative bibliometric analysis of the field that informs researchers about trends and guides their future research directions.

METHODS: We carried out a bibliometric analysis to create an overview of brain disorder detection and diagnosis using machine learning and deep learning. The analysis covers 1550 articles on automated brain disorder detection and diagnosis using machine learning and deep learning, gathered from the Scopus database and published from 2015 to May 2023. A thorough bibliometric analysis was carried out with the help of Biblioshiny and the VOSviewer platform. Citation patterns and various measures of collaboration were also examined in the study.

RESULTS: The analysis shows that research output peaked in 2022, with a consistent rise from preceding years. Most of the cited authors have concentrated on multiclass classification and on novel convolutional neural network models that are effective in this field. A keyword analysis revealed that, among the several brain disorder types, Alzheimer's disease, autism, and Parkinson's disease have received the greatest attention. In terms of both authors and institutes, the USA, China, and India are among the most collaborative countries. We built a future research agenda based on our findings to help progress research on machine learning and deep learning for brain disorder detection and diagnosis.

CONCLUSION: In summary, our quantitative bibliometric analysis provides useful insights about trends in the field and points researchers to potential directions in applying machine learning and deep learning for brain disorder detection and diagnosis.

PMID:38847379 | DOI:10.2174/1570159X22999240531160344

Categories: Literature Watch

Automatic evaluation of Nail Psoriasis Severity Index using deep learning algorithm

Fri, 2024-06-07 06:00

J Dermatol. 2024 Jun 7. doi: 10.1111/1346-8138.17313. Online ahead of print.

ABSTRACT

Nail psoriasis is a chronic condition characterized by nail dystrophy affecting the nail matrix and bed. The severity of nail psoriasis is commonly assessed using the Nail Psoriasis Severity Index (NAPSI), which evaluates the characteristics and extent of nail involvement. Although the NAPSI is numeric, reproducible, and simple, the assessment process is time-consuming and often challenging to use in real-world clinical settings. To overcome the time-consuming nature of NAPSI assessment, we aimed to develop a deep learning algorithm that can rapidly and reliably evaluate NAPSI, thereby providing numerous clinical and research advantages. We developed a dataset consisting of 7054 single fingernail images cropped from images of the dorsum of the hands of 634 patients with psoriasis. We annotated the eight features of the NAPSI in a single nail using bounding boxes and trained the YOLOv7-based deep learning algorithm using this annotation. The performance of the deep learning algorithm (DLA) was evaluated by comparing the NAPSI estimated using the DLA with the ground truth of the test dataset. The NAPSI evaluated using the DLA differed by 2 points from the ground truth in 98.6% of the images. The accuracy and mean absolute error of the model were 67.6% and 0.449, respectively. The intraclass correlation coefficient was 0.876, indicating good agreement. Our results showed that the DLA can rapidly and accurately evaluate the NAPSI. The rapid and accurate NAPSI assessment by the DLA is not only applicable in clinical settings, but also provides research advantages by enabling rapid NAPSI evaluations of previously collected nail images.
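
The agreement figures reported above (exact accuracy, mean absolute error, and the share of nails whose predicted score falls within 2 points of the ground truth) can be reproduced from paired per-nail scores with standard tooling. A minimal sketch, using placeholder arrays rather than the study's data:

```python
# Hedged sketch: agreement metrics between predicted and ground-truth per-nail
# NAPSI scores. The arrays below are illustrative placeholders, not study data.
import numpy as np
from sklearn.metrics import accuracy_score, mean_absolute_error

y_true = np.array([3, 1, 0, 5, 2, 4])   # ground-truth NAPSI per nail (0-8)
y_pred = np.array([3, 2, 0, 4, 2, 4])   # scores estimated by the detector

accuracy = accuracy_score(y_true, y_pred)             # exact-match rate
mae = mean_absolute_error(y_true, y_pred)             # mean absolute error
within_2 = np.mean(np.abs(y_true - y_pred) <= 2)      # share within 2 points

print(f"accuracy={accuracy:.3f}  MAE={mae:.3f}  within-2-points={within_2:.1%}")
```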

PMID:38847292 | DOI:10.1111/1346-8138.17313

Categories: Literature Watch

A Comprehensive Review on Deep Learning Techniques in Alzheimer's Disease Diagnosis

Fri, 2024-06-07 06:00

Curr Top Med Chem. 2024 Jun 6. doi: 10.2174/0115680266310776240524061252. Online ahead of print.

ABSTRACT

Alzheimer's Disease (AD) is a serious neurological illness that causes gradual memory loss by destroying brain cells. This deadly brain illness primarily strikes the elderly, impairing their cognitive and bodily abilities until brain shrinkage occurs. Modern techniques are required for an accurate diagnosis of AD. Machine learning has gained traction in the medical field as a means of determining a person's risk of developing AD in its early stages. Deep Learning (DL), one of the most advanced neural network-based soft computing methodologies, has garnered significant interest among researchers for automating early-stage AD diagnosis. Hence, a comprehensive review is necessary to gain insights into DL techniques for the advancement of more effective methods for diagnosing AD. This review explores multiple biomarkers associated with AD and various DL methodologies, including Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), k-nearest neighbor (k-NN), Deep Boltzmann Machines (DBM), and Deep Belief Networks (DBN), which have been employed for automating the early diagnosis of AD. The unique contributions of this review include the classification of ATN biomarkers for AD, a systematic description of diverse DL algorithms for early AD assessment, and a discussion of widely utilized online datasets such as ADNI and OASIS. Additionally, this review provides perspectives on future trends derived from a critical evaluation of each variant of DL techniques across different modalities, dataset sources, AUC values, and accuracies.

PMID:38847164 | DOI:10.2174/0115680266310776240524061252

Categories: Literature Watch

Artificial Intelligence in Cancer Diagnosis: A Game-Changer in Healthcare

Fri, 2024-06-07 06:00

Curr Pharm Biotechnol. 2024 Jun 6. doi: 10.2174/0113892010298852240528123911. Online ahead of print.

ABSTRACT

Early cancer identification is essential for increasing survival rates and lowering the disease's burden in today's society. Artificial intelligence (AI)-based algorithms may help in the early detection of cancer and resolve problems with current diagnostic methods. This article gives an overview of the prospective uses of AI in early cancer detection. The authors go over the possible applications of AI algorithms for screening the risk of malignancy in asymptomatic patients, investigating as well as prioritising symptomatic individuals, and more accurately diagnosing cancer recurrence. In screening programmes, the importance of patient selection and risk stratification is emphasised, and AI may be able to assist in identifying people who are most at risk of developing cancer. Aside from pathology slide and peripheral blood analysis, AI can also increase the diagnostic precision of imaging methods like computed tomography (CT) and mammography. A summary of various AI techniques is given in the review, covering more sophisticated deep learning and neural networks as well as more traditional models like logistic regression. The advantages of deep learning algorithms in spotting intricate patterns in huge datasets and their potential to increase the precision of cancer diagnosis are emphasised by the authors. The ethical concerns surrounding the application of AI in healthcare are also discussed, including topics like bias, data security, and privacy. A review of the models now employed in clinical practice is included, along with a discussion of the prospective clinical implications of AI algorithms. AI's drawbacks and hazards, such as resource requirements, data quality, and the necessity for consistent reporting, are also examined. In conclusion, this study emphasises the utility of AI algorithms in the early detection of cancer and gives a general overview of the many strategies and difficulties involved in putting them into use in clinical settings.

PMID:38847160 | DOI:10.2174/0113892010298852240528123911

Categories: Literature Watch

Adults Ischium Age Estimation Based on Deep Learning and 3D CT Reconstruction

Fri, 2024-06-07 06:00

Fa Yi Xue Za Zhi. 2024 Apr 25;40(2):154-163. doi: 10.12116/j.issn.1004-5619.2023.231003.

ABSTRACT

OBJECTIVES: To develop a deep learning model for automated age estimation based on 3D CT reconstructed images of the Han population in western China, and to evaluate its feasibility and reliability.

METHODS: The retrospective pelvic CT imaging data of 1,200 samples (600 males and 600 females) aged 20.0 to 80.0 years in western China were collected and reconstructed into 3D virtual bone models. The images of the ischial tuberosity feature region were extracted to create sex-specific and left/right site-specific sample libraries. Using the ResNet34 model, 500 samples of each sex were randomly selected as the training and validation sets, and the remaining samples were used as the test set. Models were trained, both from random initialization and with transfer learning, on images distinguished by sex and left/right site. Mean absolute error (MAE) and root mean square error (RMSE) were used as the primary indicators to evaluate the models.
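
As a rough illustration of the transfer-learning setup described above, the following sketch fine-tunes a torchvision ResNet34 initialized with ImageNet weights for age regression. The loss, learning rate, and single-output head are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch: ResNet34 transfer learning for age regression on reconstructed
# CT images (assumed to be supplied as 3-channel tensors).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)   # single continuous output: age

criterion = nn.L1Loss()                         # optimizes MAE directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, ages):
    """One optimization step on a batch of (image, age) pairs."""
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = criterion(pred, ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```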

RESULTS: Prediction results varied between sexes; bilateral models outperformed left/right unilateral ones, and transfer-learning models showed superior performance over models trained from initialization. In the prediction results of the bilateral transfer-learning models, the male MAE was 7.74 years and RMSE was 9.73 years, the female MAE was 6.27 years and RMSE was 7.82 years, and the mixed-sex MAE was 6.64 years and RMSE was 8.43 years.

CONCLUSIONS: The skeletal age estimation model, utilizing ischial tuberosity images of the Han population in western China and employing ResNet34 combined with transfer learning, can effectively estimate adult ischium age.

PMID:38847030 | DOI:10.12116/j.issn.1004-5619.2023.231003

Categories: Literature Watch

Adolescents and Children Age Estimation Using Machine Learning Based on Pulp and Tooth Volumes on CBCT Images

Fri, 2024-06-07 06:00

Fa Yi Xue Za Zhi. 2024 Apr 25;40(2):143-148. doi: 10.12116/j.issn.1004-5619.2023.231210.

ABSTRACT

OBJECTIVES: To estimate the age of adolescents and children using stepwise regression and machine learning methods based on the pulp and tooth volumes of the left maxillary central incisor and cuspid on cone beam computed tomography (CBCT) images, and to compare and analyze the estimation results.

METHODS: A total of 498 CBCT images of the oral and maxillofacial regions of Shanghai Han adolescents and children were collected. The pulp and tooth volumes of the left maxillary central incisor and cuspid were measured and calculated. Three machine learning algorithms (K-nearest neighbor, ridge regression, and decision tree) and stepwise regression were used to establish four age estimation models. The coefficient of determination, mean error, root mean square error, mean square error, and mean absolute error were computed and compared. A correlation heatmap was drawn to visually analyze the monotonic relationships between parameters.
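
For readers who want to reproduce this kind of model comparison, a minimal scikit-learn sketch is shown below. The feature matrix is a placeholder, stepwise regression is approximated by ordinary linear regression, and all hyperparameters are assumptions.

```python
# Hedged sketch: comparing the regressors named above on pulp/tooth volume
# features. Data are random placeholders, not the study's measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((498, 6))            # placeholder: volumes and volume ratios
y = rng.uniform(6, 18, 498)         # placeholder: chronological age

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Ridge": Ridge(alpha=1.0),
    "DecisionTree": DecisionTreeRegressor(max_depth=5),
    "Linear (stepwise proxy)": LinearRegression(),
}
for name, est in models.items():
    est.fit(X_tr, y_tr)
    pred = est.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_te, pred):.3f} "
          f"MAE={mean_absolute_error(y_te, pred):.3f} RMSE={rmse:.3f}")
```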

RESULTS: The K-nearest neighbor model (R2=0.779) and the ridge regression model (R2=0.729) outperformed stepwise regression (R2=0.617), while the decision tree model (R2=0.494) showed poor fitting. The correlation heatmap demonstrated a monotonically negative correlation between age and the parameters including pulp volume, the ratio of pulp volume to hard tissue volume, and the ratio of pulp volume to tooth volume.

CONCLUSIONS: Pulp volume and pulp volume proportion are closely related to age. The application of CBCT-based machine learning methods can provide more accurate age estimation results, which lays a foundation for further CBCT-based deep learning dental age estimation research.

PMID:38847028 | DOI:10.12116/j.issn.1004-5619.2023.231210

Categories: Literature Watch

Application of Medical Statistical and Machine Learning Methods in the Age Estimation of Living Individuals

Fri, 2024-06-07 06:00

Fa Yi Xue Za Zhi. 2024 Apr 25;40(2):118-127. doi: 10.12116/j.issn.1004-5619.2023.231103.

ABSTRACT

In the study of age estimation in living individuals, large amounts of data need to be analyzed by mathematical statistics, and appropriate medical statistical methods play an important role in data design and analysis. The selection of accurate and appropriate statistical methods is one of the key factors affecting the quality of research results. This paper reviews the principles and applicability of commonly used medical statistical methods, such as descriptive statistics, difference analysis, consistency testing, and multivariate statistical analysis, as well as machine learning methods, including shallow learning and deep learning, in age estimation research on living individuals. It also summarizes the relationship between medical statistical methods and machine learning methods and their application prospects. This paper aims to provide technical guidance for age estimation research on living individuals so that more scientific and accurate results can be obtained.

PMID:38847025 | DOI:10.12116/j.issn.1004-5619.2023.231103

Categories: Literature Watch

Deep learning driven diagnosis of malignant soft tissue tumors based on dual-modal ultrasound images and clinical indexes

Fri, 2024-06-07 06:00

Front Oncol. 2024 May 23;14:1361694. doi: 10.3389/fonc.2024.1361694. eCollection 2024.

ABSTRACT

BACKGROUND: Soft tissue tumors (STTs) are benign or malignant superficial neoplasms arising from soft tissues throughout the body, with diverse pathological types. Although ultrasonography (US) is one of the most common imaging tools for diagnosing malignant STTs, it still has several drawbacks in STT diagnosis that need to be addressed.

OBJECTIVES: The study aims to establish a deep learning (DL)-driven artificial intelligence (AI) system for predicting malignant STTs based on US images and clinical indexes of the patients.

METHODS: We retrospectively enrolled 271 malignant and 462 benign masses to build the AI system using 5-fold cross-validation. A prospective dataset of 44 malignant masses and 101 benign masses was used to validate the accuracy of the system. A multi-data fusion convolutional neural network, named ultrasound clinical soft tissue tumor net (UC-STTNet), was developed to combine gray-scale and color Doppler US images and clinical features for malignant STT diagnosis. Six radiologists (R1-R6) with three experience levels were invited for the reader study.
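
As a rough sketch of what a multi-data fusion network of this kind can look like (not the published UC-STTNet architecture), the following PyTorch module combines two image branches with a small clinical-feature MLP before a binary malignancy head. Every layer size and the choice of ResNet18 backbones are illustrative assumptions.

```python
# Hedged sketch: dual-branch image + clinical-feature fusion classifier.
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, n_clinical: int = 8):
        super().__init__()
        def image_branch():
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()            # 512-d image embedding
            return backbone
        self.gray_branch = image_branch()          # gray-scale US images
        self.doppler_branch = image_branch()       # color Doppler US images
        self.clinical_mlp = nn.Sequential(
            nn.Linear(n_clinical, 32), nn.ReLU(), nn.Linear(32, 32)
        )
        self.head = nn.Linear(512 + 512 + 32, 2)   # benign vs. malignant logits

    def forward(self, gray_img, doppler_img, clinical):
        feats = torch.cat([
            self.gray_branch(gray_img),
            self.doppler_branch(doppler_img),
            self.clinical_mlp(clinical),
        ], dim=1)
        return self.head(feats)
```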

RESULTS: The AI system achieved an area under the receiver operating characteristic curve (AUC) of 0.89 in the retrospective dataset. The diagnostic performance of the AI system was higher than that of one of the senior radiologists (AUC of AI vs R2: 0.89 vs. 0.84, p=0.022) and all of the intermediate and junior radiologists (AUC of AI vs R3, R4, R5, R6: 0.89 vs 0.75, 0.81, 0.80, 0.63; p<0.01). The AI system also achieved an AUC of 0.85 in the prospective dataset. With the assistance of the system, the diagnostic performances and inter-observer agreement of the radiologists were improved (AUC of R3, R5, R6: 0.75 to 0.83, 0.80 to 0.85, 0.63 to 0.69; p<0.01).

CONCLUSION: The AI system could be a useful tool in diagnosing malignant STTs, and could also help radiologists improve diagnostic performance.

PMID:38846984 | PMC:PMC11153704 | DOI:10.3389/fonc.2024.1361694

Categories: Literature Watch

Corrigendum: Deep learning or radiomics based on CT for predicting the response of gastric cancer to neoadjuvant chemotherapy: a meta-analysis and systematic review

Fri, 2024-06-07 06:00

Front Oncol. 2024 May 23;14:1433346. doi: 10.3389/fonc.2024.1433346. eCollection 2024.

ABSTRACT

[This corrects the article DOI: 10.3389/fonc.2024.1363812.].

PMID:38846979 | PMC:PMC11153870 | DOI:10.3389/fonc.2024.1433346

Categories: Literature Watch

Optical diagnosis in still images of colorectal polyps: comparison between expert endoscopists and PolyDeep, a Computer-Aided Diagnosis system

Fri, 2024-06-07 06:00

Front Oncol. 2024 May 23;14:1393815. doi: 10.3389/fonc.2024.1393815. eCollection 2024.

ABSTRACT

BACKGROUND: PolyDeep is a computer-aided detection and classification (CADe/x) system trained to detect and classify polyps. During colonoscopy, CADe/x systems help endoscopists to predict the histology of colonic lesions.

OBJECTIVE: To compare the diagnostic performance of PolyDeep and expert endoscopists for the optical diagnosis of colorectal polyps on still images.

METHODS: PolyDeep Image Classification (PIC) is an in vitro diagnostic test study. The PIC database contains NBI images of 491 colorectal polyps with histological diagnosis. We evaluated the diagnostic performance of PolyDeep and four expert endoscopists for neoplasia (adenoma, sessile serrated lesion, traditional serrated adenoma) and adenoma characterization and compared them with the McNemar test. Receiver operating characteristic curves were constructed to assess the overall discriminatory ability, comparing the areas under the curve of the endoscopists and PolyDeep with the chi-square test for homogeneity of areas.
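
A minimal sketch of this kind of paired comparison is shown below: McNemar's test on matched correct/incorrect calls from two raters and an ROC AUC from continuous scores. The arrays are placeholders, not the PIC data, and the statsmodels/scikit-learn calls are one possible implementation.

```python
# Hedged sketch: paired accuracy comparison (McNemar) and AUC on placeholder data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])          # neoplastic vs. not
model_correct = np.array([1, 0, 1, 1, 1, 0, 1, 1])    # 1 = correct call by model
reader_correct = np.array([1, 1, 1, 0, 1, 1, 1, 0])   # 1 = correct call by reader

# 2x2 agreement table between the two raters on the same lesions
table = np.array([
    [np.sum((model_correct == 1) & (reader_correct == 1)),
     np.sum((model_correct == 1) & (reader_correct == 0))],
    [np.sum((model_correct == 0) & (reader_correct == 1)),
     np.sum((model_correct == 0) & (reader_correct == 0))],
])
print(mcnemar(table, exact=True))                     # paired-accuracy p-value

model_scores = rng.random(len(y_true))                # placeholder probabilities
print("AUC:", roc_auc_score(y_true, model_scores))
```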

RESULTS: The diagnostic performance of the endoscopists and PolyDeep in the characterization of neoplasia is similar in terms of sensitivity (PolyDeep: 89.05%; E1: 91.23%, p=0.5; E2: 96.11%, p<0.001; E3: 86.65%, p=0.3; E4: 91.26%, p=0.3) and specificity (PolyDeep: 35.53%; E1: 33.80%, p=0.8; E2: 34.72%, p=1; E3: 39.24%, p=0.8; E4: 46.84%, p=0.2). The overall discriminative ability also showed no statistically significant differences (PolyDeep: 0.623; E1: 0.625, p=0.8; E2: 0.654, p=0.2; E3: 0.629, p=0.9; E4: 0.690, p=0.09). In the optical diagnosis of adenomatous polyps, we found that PolyDeep had a significantly higher sensitivity and a significantly lower specificity. The overall discriminative ability of the expert endoscopists for adenomatous lesions is significantly higher than that of PolyDeep (PolyDeep: 0.582; E1: 0.685, p < 0.001; E2: 0.677, p < 0.0001; E3: 0.658, p < 0.01; E4: 0.694, p < 0.0001).

CONCLUSION: PolyDeep and endoscopists have similar diagnostic performance in the optical diagnosis of neoplastic lesions. However, endoscopists have a better global discriminatory ability than PolyDeep in the optical diagnosis of adenomatous polyps.

PMID:38846970 | PMC:PMC11153726 | DOI:10.3389/fonc.2024.1393815

Categories: Literature Watch

Editorial: AI approach to the psychiatric diagnosis and prediction

Fri, 2024-06-07 06:00

Front Psychiatry. 2024 May 23;15:1387370. doi: 10.3389/fpsyt.2024.1387370. eCollection 2024.

NO ABSTRACT

PMID:38846910 | PMC:PMC11153783 | DOI:10.3389/fpsyt.2024.1387370

Categories: Literature Watch

Neural networks for rapid phase quantification of cultural heritage X-ray powder diffraction data

Fri, 2024-06-07 06:00

J Appl Crystallogr. 2024 May 31;57(Pt 3):831-841. doi: 10.1107/S1600576724003704. eCollection 2024 Jun 1.

ABSTRACT

Recent developments in synchrotron radiation facilities have increased the amount of data generated during acquisitions considerably, requiring fast and efficient data processing techniques. Here, the application of dense neural networks (DNNs) to data treatment of X-ray diffraction computed tomography (XRD-CT) experiments is presented. Processing involves mapping the phases in a tomographic slice by predicting the phase fraction in each individual pixel. DNNs were trained on sets of calculated XRD patterns generated using a Python algorithm developed in-house. An initial Rietveld refinement of the tomographic slice sum pattern provides additional information (peak widths and integrated intensities for each phase) to improve the generation of simulated patterns and make them closer to real data. A grid search was used to optimize the network architecture and demonstrated that a single fully connected dense layer was sufficient to accurately determine phase proportions. This DNN was used on the XRD-CT acquisition of a mock-up and a historical sample of highly heterogeneous multi-layered decoration of a late medieval statue, called 'applied brocade'. The phase maps predicted by the DNN were in good agreement with other methods, such as non-negative matrix factorization and serial Rietveld refinements performed with TOPAS, and outperformed them in terms of speed and efficiency. The method was evaluated by regenerating experimental patterns from predictions and using the weighted profile R-factor (Rwp) as the agreement factor. This assessment allowed us to confirm the accuracy of the results.
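
A hedged sketch of a single-dense-layer phase-fraction regressor of this kind is shown below. The pattern length, phase count, softmax output, and training loop are assumptions for illustration, not the published configuration or the in-house pattern generator.

```python
# Hedged sketch: map a 1D diffraction pattern to per-phase fractions, trained
# on simulated patterns. Dimensions and loss are illustrative assumptions.
import torch
import torch.nn as nn

n_points, n_phases = 2048, 5             # assumed pattern length / phase count

model = nn.Sequential(
    nn.Linear(n_points, n_phases),
    nn.Softmax(dim=1),                    # constrain fractions to sum to 1
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_on_simulated(patterns, fractions, epochs=10):
    """patterns: (N, n_points) simulated intensities; fractions: (N, n_phases)."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(patterns), fractions)
        loss.backward()
        optimizer.step()
    return loss.item()
```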

PMID:38846765 | PMC:PMC11151672 | DOI:10.1107/S1600576724003704

Categories: Literature Watch

Breaking new ground: can artificial intelligence and machine learning transform papillary glioneuronal tumor diagnosis?

Thu, 2024-06-06 06:00

Neurosurg Rev. 2024 Jun 7;47(1):261. doi: 10.1007/s10143-024-02504-y.

ABSTRACT

Papillary glioneuronal tumors (PGNTs), classified as Grade I by the WHO in 2016, present diagnostic challenges due to their rarity and potential for malignancy. Xiaodan Du et al.'s recent study of 36 confirmed PGNT cases provides critical insights into their imaging characteristics, revealing frequent presentation with headaches, seizures, and mass effect symptoms, predominantly located in the supratentorial region near the lateral ventricles. Lesions often appeared as mixed cystic and solid masses with septations or as cystic masses with mural nodules. Given these complexities, artificial intelligence (AI) and machine learning (ML) offer promising advancements for PGNT diagnosis. Previous studies have demonstrated AI's efficacy in diagnosing various brain tumors, utilizing deep learning and advanced imaging techniques for rapid and accurate identification. Implementing AI in PGNT diagnosis involves assembling comprehensive datasets, preprocessing data, extracting relevant features, and iteratively training models for optimal performance. Despite AI's potential, medical professionals must validate AI predictions, ensuring they complement rather than replace clinical expertise. This integration of AI and ML into PGNT diagnostics could significantly enhance preoperative accuracy, ultimately improving patient outcomes through more precise and timely interventions.

PMID:38844709 | DOI:10.1007/s10143-024-02504-y

Categories: Literature Watch

Deep learning-based prediction of compressive strength of eco-friendly geopolymer concrete

Thu, 2024-06-06 06:00

Environ Sci Pollut Res Int. 2024 Jun 7. doi: 10.1007/s11356-024-33853-2. Online ahead of print.

ABSTRACT

Greenhouse gases cause global warming on Earth, and the cement production industry is one of the largest sectors producing them. Geopolymer is synthesized by the reaction of an alkaline solution with waste materials such as slag and fly ash. The use of eco-friendly geopolymer concrete decreases energy consumption and greenhouse gas emissions. In this study, the fc (compressive strength) of eco-friendly geopolymer concrete was predicted by a deep long short-term memory (LSTM) network model. Moreover, support vector regression (SVR), least squares boosting ensemble (LSBoost), and multiple linear regression (MLR) models were devised for comparison with the forecasts of the deep LSTM algorithm. The input variables of the models were the mole ratio, the alkaline solution concentration, the curing temperature, the curing days, and the liquid-to-fly ash mass ratio. The output variable of the proposed models was the compressive strength (fc). Furthermore, the effects of the input variables on the fc of eco-friendly geopolymer concrete were determined by sensitivity analysis. The fc of eco-friendly geopolymer concrete was predicted by the deep LSTM, LSBoost, SVR, and MLR models with 99.23%, 98.08%, 78.57%, and 88.03% accuracy, respectively. The deep LSTM model forecasted the fc of eco-friendly geopolymer concrete with higher accuracy than the SVR, LSBoost, and MLR models. The sensitivity analysis showed that the curing temperature was the most important experimental variable affecting the fc of geopolymer concrete.
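
As an illustration of applying an LSTM to this kind of tabular regression problem, the sketch below treats each mix design's five input variables as a length-one sequence. The hidden size and overall design are assumptions, not the authors' model.

```python
# Hedged sketch: LSTM regressor over the five mix/curing variables listed above.
import torch
import torch.nn as nn

class StrengthLSTM(nn.Module):
    def __init__(self, n_features: int = 5, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.out = nn.Linear(hidden, 1)      # compressive strength f_c

    def forward(self, x):                    # x: (batch, seq_len=1, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.out(h_n[-1]).squeeze(1)

model = StrengthLSTM()
x = torch.rand(8, 1, 5)                      # placeholder batch of mix designs
print(model(x).shape)                        # torch.Size([8])
```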

PMID:38844634 | DOI:10.1007/s11356-024-33853-2

Categories: Literature Watch

A fully automated classification of third molar development stages using deep learning

Thu, 2024-06-06 06:00

Sci Rep. 2024 Jun 7;14(1):13082. doi: 10.1038/s41598-024-63744-y.

ABSTRACT

Accurate classification of tooth development stages from orthopantomograms (OPG) is crucial for dental diagnosis, treatment planning, age assessment, and forensic applications. This study aims to develop an automated method for classifying third molar development stages using OPGs. Initially, our data consisted of 3422 OPG images, each classified and curated by expert evaluators. The dataset includes images from both Q3 (lower jaw left side) and Q4 (lower jaw right side) regions extracted from panoramic images, resulting in a total of 6624 images for analysis. Following data collection, the methodology employs region of interest extraction, pre-filtering, and extensive data augmentation techniques to enhance classification accuracy. Deep neural network models, including architectures such as EfficientNet, EfficientNetV2, MobileNet Large, MobileNet Small, ResNet18, and ShuffleNet, are optimized for this task. Our findings indicate that EfficientNet achieved the highest classification accuracy at 83.7%. The other architectures achieved accuracies ranging from 71.57% to 82.03%. The variation in performance across architectures highlights the influence of model complexity and task-specific features on classification accuracy. This research introduces a novel machine learning model designed to accurately estimate the development stages of lower wisdom teeth in OPG images, contributing to the fields of dental diagnostics and treatment planning.
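
A brief sketch of the kind of augmentation and fine-tuning pipeline such a study might use is shown below, with torchvision transforms on cropped tooth regions and an EfficientNet classifier head resized to the number of development stages. The stage count and transform choices are assumptions, not the published setup.

```python
# Hedged sketch: augmentation pipeline and EfficientNet head replacement for
# development-stage classification. Stage count is an illustrative assumption.
import torch.nn as nn
from torchvision import models, transforms

n_stages = 10                                        # assumed number of stages

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

model = models.efficientnet_b0(
    weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, n_stages)
```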

PMID:38844566 | DOI:10.1038/s41598-024-63744-y

Categories: Literature Watch

Deep Learning De-Noising Improves CT Perfusion Image Quality in the Setting of Lower Contrast Dosing: A Feasibility Study

Thu, 2024-06-06 06:00

AJNR Am J Neuroradiol. 2024 Jun 6:ajnr.A8367. doi: 10.3174/ajnr.A8367. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Considering recent iodinated contrast media (ICM) shortages, this study compared reduced-ICM-dose and standard-dose CTP acquisitions and evaluated the impact of deep learning (DL) denoising on CTP image quality in preclinical and clinical studies.

MATERIALS AND METHODS: Twelve swine underwent 9 CTP exams each, performed at combinations of 3 different X-ray doses (37, 67, and 127 mAs) and 3 ICM doses (10, 15, and 20 mL). Clinical CTP acquisitions performed before and during the ICM shortage and protocol change (from 40 mL to 30 mL) were retrospectively included. Eleven patients with reduced ICM dose and 11 propensity-score-matched controls with standard ICM dose were included. A Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) was trained for CTP denoising using K-space-Weighted Image Average (KWIA) filtered CTP images as the target. The standard, RED-CNN-denoised, and KWIA noise-filtered images for the animal and human studies were compared for quantitative SNR and qualitative image evaluation.
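
Quantitative comparison of denoised and reference maps typically relies on metrics such as PSNR, SSIM, and RMSE (abbreviated below). A minimal sketch with placeholder arrays, not the study's images or evaluation code:

```python
# Hedged sketch: image-quality metrics between a denoised map and a reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                     # e.g., standard-dose map
denoised = reference + 0.05 * rng.standard_normal((256, 256))

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
rmse = float(np.sqrt(np.mean((reference - denoised) ** 2)))
print(f"PSNR={psnr:.2f} dB  SSIM={ssim:.3f}  RMSE={rmse:.4f}")
```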

RESULTS: The SNR of animal CTP images decreased with reductions in ICM and mAs doses. Contrast dose reduction had a greater effect on SNR than mAs reduction. Noise-filtering by KWIA and RED-CNN denoising progressively improved SNR of CTP maps, with RED-CNN resulting in the highest SNR. The SNR of clinical CTP images was generally lower with reduced ICM dose, which was improved by KWIA and RED-CNN denoising (p<0.05). Qualitative readings consistently rated RED-CNN denoised CTP as best quality, followed by KWIA and then standard CTP images.

CONCLUSIONS: DL-denoising can improve image quality for low ICM CTP protocols, and could approximate standard ICM dose CTP, in addition to potentially improving image quality for low mAs acquisitions.

ABBREVIATIONS: ICM=iodinated contrast media; DL=deep learning; KWIA=k-space weighted image average; LCD=low-contrast dose; SCD=standard contrast dose; RED-CNN=Residual Encoder-Decoder Convolutional Neural Network; PSNR=Peak Signal to Noise Ratio; RMSE=Root Mean Squared Error; SSIM=Structural Similarity Index.

PMID:38844370 | DOI:10.3174/ajnr.A8367

Categories: Literature Watch

Is Automatic Tumor Segmentation on Whole-Body 18F-FDG PET Images a Clinical Reality?

Thu, 2024-06-06 06:00

J Nucl Med. 2024 Jun 6:jnumed.123.267183. doi: 10.2967/jnumed.123.267183. Online ahead of print.

ABSTRACT

The integration of automated whole-body tumor segmentation using 18F-FDG PET/CT images represents a pivotal shift in oncologic diagnostics, enhancing the precision and efficiency of tumor burden assessment. This editorial examines the transition toward automation, propelled by advancements in artificial intelligence, notably through deep learning techniques. We highlight the current availability of commercial tools and the academic efforts that have set the stage for these developments. Further, we comment on the challenges of data diversity, validation needs, and regulatory barriers. The role of metabolic tumor volume and total lesion glycolysis as vital metrics in cancer management underscores the significance of this evaluation. Despite promising progress, we call for increased collaboration across academia, clinical users, and industry to better realize the clinical benefits of automated segmentation, thus helping to streamline workflows and improve patient outcomes in oncology.

PMID:38844359 | DOI:10.2967/jnumed.123.267183

Categories: Literature Watch

Multi-task aquatic toxicity prediction model based on multi-level features fusion

Thu, 2024-06-06 06:00

J Adv Res. 2024 Jun 4:S2090-1232(24)00226-1. doi: 10.1016/j.jare.2024.06.002. Online ahead of print.

ABSTRACT

INTRODUCTION: With the escalating menace of organic compounds in environmental pollution imperiling the survival of aquatic organisms, the investigation of organic compound toxicity across diverse aquatic species assumes paramount significance for environmental protection. Understanding how different species respond to these compounds helps assess the potential ecological impact of pollution on aquatic ecosystems as a whole. Compared with traditional experimental methods, deep learning methods have higher accuracy in predicting aquatic toxicity, faster data processing speed and better generalization ability.

OBJECTIVES: This article presents ATFPGT-multi, an advanced multi-task deep neural network prediction model for organic toxicity.

METHODS: The model integrates molecular fingerprints and molecule graphs to characterize molecules, enabling the simultaneous prediction of acute toxicity for the same organic compound across four distinct fish species. Furthermore, to validate the advantages of multi-task learning, we independently construct prediction models, named ATFPGT-single, for each fish species. We employ cross-validation in our experiments to assess the performance and generalization ability of ATFPGT-multi.
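
A minimal sketch of a multi-task head in this spirit is shown below: a shared encoder over a molecular fingerprint with one toxicity output per fish species. The fingerprint length and layer sizes are assumptions, and the molecule-graph branch of ATFPGT-multi is omitted for brevity.

```python
# Hedged sketch: shared fingerprint encoder with four species-specific heads.
import torch
import torch.nn as nn

class MultiTaskToxNet(nn.Module):
    def __init__(self, fp_dim: int = 2048, n_species: int = 4):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(fp_dim, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
        )
        # one binary (toxic / non-toxic) head per fish species
        self.heads = nn.ModuleList([nn.Linear(128, 1) for _ in range(n_species)])

    def forward(self, fingerprints):
        z = self.shared(fingerprints)
        return torch.cat([head(z) for head in self.heads], dim=1)  # (batch, 4) logits

model = MultiTaskToxNet()
print(model(torch.rand(16, 2048)).shape)    # torch.Size([16, 4])
```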

RESULTS: The experimental results indicate, first, that ATFPGT-multi outperforms ATFPGT-single on the four fish datasets with AUC improvements of 9.8%, 4%, 4.8%, and 8.2%, respectively, demonstrating the superiority of multi-task learning over single-task learning. Furthermore, in comparison with previous algorithms, ATFPGT-multi outperforms the comparative methods, emphasizing that our approach exhibits higher accuracy and reliability in predicting aquatic toxicity. Moreover, ATFPGT-multi utilizes attention scores to identify molecular fragments associated with fish toxicity in organic molecules, as illustrated by two organic molecule examples in the main text, demonstrating the interpretability of the model.

CONCLUSION: In summary, ATFPGT-multi provides important support and reference for the further development of aquatic toxicity assessment. All of codes and datasets are freely available online at https://github.com/zhaoqi106/ATFPGT-multi.

PMID:38844122 | DOI:10.1016/j.jare.2024.06.002

Categories: Literature Watch

Status and future trends in wastewater management strategies using artificial intelligence and machine learning techniques

Thu, 2024-06-06 06:00

Chemosphere. 2024 Jun 4:142477. doi: 10.1016/j.chemosphere.2024.142477. Online ahead of print.

ABSTRACT

The two main requirements for meeting the world's impending need for water in the face of the widespread water crisis are water collection and recycling. To this end, the present study places a greater focus on water management strategies used in a variety of contexts. To distribute water effectively, save it, and satisfy water quality requirements for a variety of uses, it is imperative to apply intelligent water management mechanisms while keeping in mind the population density index. The present review unveils the latest trends in water and wastewater recycling, utilizing several Artificial Intelligence (AI) and machine learning (ML) techniques for distribution, rainfall collection, and control of irrigation models. The data collected for these purposes are unique and come in different forms. An efficient water management system could be developed with the use of AI, Deep Learning (DL), and an Internet of Things (IoT) infrastructure. This study investigates several water management methodologies using AI, DL, and IoT, with case studies and sample statistical assessment, to provide an efficient framework for water management.

PMID:38844107 | DOI:10.1016/j.chemosphere.2024.142477

Categories: Literature Watch

Artificial intelligence in veterinary diagnostic imaging: Perspectives and limitations

Thu, 2024-06-06 06:00

Res Vet Sci. 2024 May 31;175:105317. doi: 10.1016/j.rvsc.2024.105317. Online ahead of print.

ABSTRACT

The field of veterinary diagnostic imaging is undergoing significant transformation with the integration of artificial intelligence (AI) tools. This manuscript provides an overview of the current state and future prospects of AI in veterinary diagnostic imaging. The manuscript delves into various applications of AI across different imaging modalities, such as radiology, ultrasound, computed tomography, and magnetic resonance imaging. Examples of AI applications in each modality are provided, ranging from orthopaedics to internal medicine, cardiology, and more. Notable studies are discussed, demonstrating AI's potential for improved accuracy in detecting and classifying various abnormalities. The ethical considerations of using AI in veterinary diagnostics are also explored, highlighting the need for transparent AI development, accurate training data, awareness of the limitations of AI models, and the importance of maintaining human expertise in the decision-making process. The manuscript underscores the significance of AI as a decision support tool rather than a replacement for human judgement. In conclusion, this comprehensive manuscript offers an assessment of the current landscape and future potential of AI in veterinary diagnostic imaging. It provides insights into the benefits and challenges of integrating AI into clinical practice while emphasizing the critical role of ethics and human expertise in ensuring the wellbeing of veterinary patients.

PMID:38843690 | DOI:10.1016/j.rvsc.2024.105317

Categories: Literature Watch
