Deep learning

MRET: Modified Recursive Elimination Technique for ranking author assessment parameters

Thu, 2024-06-13 06:00

PLoS One. 2024 Jun 13;19(6):e0303105. doi: 10.1371/journal.pone.0303105. eCollection 2024.

ABSTRACT

In scientific research, assessing the impact and influence of authors is crucial for evaluating their scholarly contributions. The literature offers numerous parameters for quantifying the productivity and significance of researchers, including publication count, citation count, the well-known h index, and its extensions and variations. With such a plethora of available assessment metrics, however, it is vital to identify and prioritize the most effective ones. To address the complexity of this task, we employ a powerful deep learning technique, the Multi-Layer Perceptron (MLP) classifier, for classification and ranking. Leveraging the MLP's capacity to discern patterns within datasets, we assign an importance score to each parameter using the proposed modified recursive elimination technique and rank the parameters accordingly. Furthermore, we present a comprehensive statistical analysis of the top-ranked author assessment parameters, encompassing 64 distinct metrics. This analysis yields valuable insights into the relationships between these parameters, shedding light on potential correlations and dependencies that may affect assessment outcomes. In the statistical analysis, we combined these parameters using seven well-known statistical methods, such as the arithmetic, harmonic, and geometric means. After combining the parameters, we sorted the list for each pair of parameters and analyzed the top 10, 50, and 100 records, counting the occurrences of award winners. For experimental purposes, data were collected from the field of Mathematics. The dataset consists of 525 individuals who are yet to receive an award, along with 525 individuals who have been recognized as potential award winners by certain well-known and prestigious scientific societies in the field of mathematics during the last three decades. The results revealed that, in the ranking of author assessment parameters, the normalized h index achieved the highest importance score compared to the remaining sixty-three parameters. Furthermore, the statistical analysis showed that the Trigonometric Mean (TM) outperformed the other six statistical models. Moreover, the analysis of the M Quotient and FG index indicates that combining either of these parameters with any other parameter, using the various statistical models, consistently produces excellent results in terms of the percentage of returned awardees.
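
As a rough illustration of the ranking idea described above (not the authors' code), the sketch below repeatedly drops the least important metric of an MLP classifier until a full elimination order is obtained; the use of scikit-learn's permutation importance as the importance score is an assumption.

```python
# Hedged sketch: rank author-assessment metrics by recursively eliminating the
# feature to which a fitted MLP classifier is least sensitive.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

def rank_metrics(X, y, feature_names, random_state=0):
    """Return feature names ordered from most to least important."""
    remaining = list(range(X.shape[1]))
    elimination_order = []
    while len(remaining) > 1:
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                            random_state=random_state).fit(X[:, remaining], y)
        imp = permutation_importance(clf, X[:, remaining], y,
                                     n_repeats=5, random_state=random_state)
        weakest = remaining[int(np.argmin(imp.importances_mean))]
        elimination_order.append(feature_names[weakest])   # dropped first = least important
        remaining.remove(weakest)
    elimination_order.append(feature_names[remaining[0]])   # survivor = top-ranked
    return elimination_order[::-1]
```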

PMID:38870157 | DOI:10.1371/journal.pone.0303105

Categories: Literature Watch

Predicting the daily number of patients for allergic diseases using PM10 concentration based on spatiotemporal graph convolutional networks

Thu, 2024-06-13 06:00

PLoS One. 2024 Jun 13;19(6):e0304106. doi: 10.1371/journal.pone.0304106. eCollection 2024.

ABSTRACT

Air pollution causes and exacerbates allergic diseases, including asthma, allergic rhinitis, and atopic dermatitis. Precise prediction of the number of patients afflicted with these diseases and analysis of the environmental conditions that contribute to disease outbreaks play crucial roles in the effective management of hospital services. Therefore, this study aims to predict the daily number of patients with these allergic diseases and determine the impact of particulate matter (PM10) on each disease. To analyze the spatiotemporal correlations between allergic diseases (asthma, atopic dermatitis, and allergic rhinitis) and PM10 concentrations, we propose a multi-variable spatiotemporal graph convolutional network (MST-GCN)-based disease prediction model. Data on the number of patients were collected from the National Health Insurance Service from January 2013 to December 2017, and PM10 data were collected from Airkorea for the same period. The proposed disease prediction model showed higher performance (R2 = 0.87) than the other deep-learning baseline methods. The synergistic effect of the spatial and temporal analyses improved the prediction of patient numbers. The prediction accuracies for allergic rhinitis, asthma, and atopic dermatitis achieved R2 scores of 0.96, 0.92, and 0.86, respectively. In an ablation study of environmental factors, PM10 improved the prediction accuracy by 10.13% in terms of the R2 score.
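
For readers unfamiliar with spatiotemporal graph convolution, the minimal PyTorch block below illustrates the general pattern the abstract refers to: a graph convolution over monitoring regions followed by a temporal convolution over days. The layer shapes and composition are illustrative assumptions, not the MST-GCN architecture itself.

```python
import torch
import torch.nn as nn

class STGCBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution (sketch)."""
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.theta = nn.Linear(in_feats, out_feats)    # graph-convolution weights
        self.tconv = nn.Conv1d(out_feats, out_feats, kernel_size=3, padding=1)

    def forward(self, x, a_hat):
        # x: (batch, time, nodes, in_feats); a_hat: normalized adjacency (nodes, nodes)
        h = torch.einsum("ij,btjf->btif", a_hat, x)    # aggregate neighboring regions
        h = torch.relu(self.theta(h))                  # (batch, time, nodes, out_feats)
        b, t, n, f = h.shape
        h = h.permute(0, 2, 3, 1).reshape(b * n, f, t) # fold nodes into the batch dim
        h = torch.relu(self.tconv(h))                  # convolve along the time axis
        return h.reshape(b, n, f, t).permute(0, 3, 1, 2)
```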

PMID:38870112 | DOI:10.1371/journal.pone.0304106

Categories: Literature Watch

Comprehensive deep learning-based assessment of living liver donor CT angiography: from vascular segmentation to volumetric analysis

Thu, 2024-06-13 06:00

Int J Surg. 2024 Jun 13. doi: 10.1097/JS9.0000000000001829. Online ahead of print.

ABSTRACT

BACKGROUND: Precise preoperative assessment of liver vasculature and volume in living donor liver transplantation is essential for donor safety and recipient surgery. Traditional manual segmentation methods are being supplemented by deep learning (DL) models, which may offer more consistent and efficient volumetric evaluations.

METHODS: This study analyzed living liver donors from Samsung Medical Center using preoperative CT angiography data between April 2022 and February 2023. A DL-based 3D residual U-Net model was developed and trained on segmented CT images to calculate the liver volume and segment vasculature, with its performance compared to traditional manual segmentation by surgeons and actual graft weight.

RESULTS: The DL model achieved high concordance with manual methods, exhibiting Dice Similarity Coefficients of 0.94±0.01 for the right lobe and 0.91±0.02 for the left lobe. The liver volume estimates from the DL model closely matched those of the surgeons, with a mean discrepancy of 9.18 mL, and correlated more strongly with actual graft weights (R-squared value of 0.76, compared to 0.68 for the surgeons).
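
A quick reminder of how a Dice Similarity Coefficient such as the 0.94/0.91 values above is conventionally computed from binary voxel masks (illustrative, not the authors' pipeline):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """pred, truth: boolean 3D arrays (voxel masks) of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```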

CONCLUSION: The DL model demonstrates potential as a reliable tool for enhancing preoperative planning in liver transplantation, offering consistency and efficiency in volumetric assessment. Further validation is required to establish its generalizability across various clinical settings and imaging protocols.

PMID:38869975 | DOI:10.1097/JS9.0000000000001829

Categories: Literature Watch

Automated hepatic steatosis assessment on dual-energy CT-derived virtual non-contrast images through fully-automated 3D organ segmentation

Thu, 2024-06-13 06:00

Radiol Med. 2024 Jun 13. doi: 10.1007/s11547-024-01833-8. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the efficacy of volumetric CT attenuation-based parameters obtained through automated 3D organ segmentation on virtual non-contrast (VNC) images from dual-energy CT (DECT) for assessing hepatic steatosis.

MATERIALS AND METHODS: This retrospective study included living liver donor candidates who underwent liver DECT and MRI-determined proton density fat fraction (PDFF) assessment. Employing a 3D deep learning algorithm, the liver and spleen were automatically segmented from VNC images (derived from contrast-enhanced DECT scans) and true non-contrast (TNC) images, respectively. The mean volumetric CT attenuation value of each segmented liver (L) and spleen (S) was measured, allowing calculation of the liver attenuation index (LAI), defined as L minus S. Agreement between the VNC and TNC parameters for hepatic steatosis, i.e., L and LAI, was assessed using intraclass correlation coefficients (ICC). Correlations between the VNC parameters and MRI-PDFF values were assessed using Pearson's correlation coefficient. Their performance in identifying MRI-PDFF ≥ 5% and ≥ 10% was evaluated using receiver operating characteristic (ROC) curve analysis.

RESULTS: Of 252 participants, 56 (22.2%) and 16 (6.3%) had hepatic steatosis with MRI-PDFF ≥ 5% and ≥ 10%, respectively. L_VNC and LAI_VNC showed excellent agreement with L_TNC and LAI_TNC (ICC = 0.957 and 0.968) and significant correlations with MRI-PDFF values (r = -0.585 and -0.588, both P < 0.001). L_VNC and LAI_VNC exhibited areas under the ROC curve of 0.795 and 0.806 for MRI-PDFF ≥ 5%, and 0.916 and 0.932 for MRI-PDFF ≥ 10%, respectively.
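
A minimal sketch of the parameters and evaluation described above, assuming voxel-wise HU volumes and boolean organ masks; variable names are illustrative and, because liver attenuation falls as fat fraction rises, the LAI is negated before computing the AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def liver_attenuation_index(ct_hu, liver_mask, spleen_mask):
    """ct_hu: CT attenuation volume (HU); masks: boolean arrays of the same shape."""
    l_mean = ct_hu[liver_mask].mean()
    s_mean = ct_hu[spleen_mask].mean()
    return l_mean, l_mean - s_mean            # (L, LAI = L - S)

def steatosis_auc(lai_values, pdff_labels):
    """AUC of LAI for detecting MRI-PDFF >= 5% (lower LAI suggests more steatosis)."""
    return roc_auc_score(pdff_labels, -np.asarray(lai_values))
```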

CONCLUSION: Volumetric CT attenuation-based parameters from VNC images generated by DECT, via automated 3D segmentation of the liver and spleen, have potential for opportunistic hepatic steatosis screening, as an alternative to TNC images.

PMID:38869829 | DOI:10.1007/s11547-024-01833-8

Categories: Literature Watch

Patient-specific reference model estimation for orthognathic surgical planning

Thu, 2024-06-13 06:00

Int J Comput Assist Radiol Surg. 2024 Jun 13. doi: 10.1007/s11548-024-03123-0. Online ahead of print.

ABSTRACT

PURPOSE: Accurate estimation of reference bony shape models is fundamental to orthognathic surgical planning. Existing methods for deriving this model fall into two types: the first estimates a deformation field to correct the patient's deformed jaw, often introducing distortions into the predicted reference model; the second derives the reference model as a linear combination of the subjects' landmarks/vertices, but overlooks the intricate nonlinear relationships between subjects, compromising the model's precision and quality.

METHODS: We have created a self-supervised learning framework to estimate the reference model. The core of this framework is a deep query network, which estimates the similarity scores between the patient's midface and those of the normal subjects in a high-dimensional space. Subsequently, it aggregates high-dimensional features of these subjects and projects these features back to 3D structures, ultimately achieving a patient-specific reference model.

RESULTS: Our approach was trained using a dataset of 51 normal subjects and tested on 30 patient subjects to estimate their reference models. Performance assessment against the actual post-operative bone revealed a mean Chamfer distance error of 2.25 mm and an average surface distance error of 2.30 mm across the patient subjects.
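
A hedged sketch of the symmetric Chamfer distance reported above as the 2.25 mm error; the exact formulation used by the authors (e.g., squared versus unsquared distances) is not specified, so this shows only one common convention.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    """points_a, points_b: (N, 3) and (M, 3) vertex arrays in mm."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # nearest-neighbour distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)   # nearest-neighbour distances B -> A
    return 0.5 * (d_ab.mean() + d_ba.mean())
```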

CONCLUSION: Our proposed method emphasizes the correlation between the patients and the normal subjects in a high-dimensional space, facilitating the generation of the patient-specific reference model. Both qualitative and quantitative results demonstrate its superiority over current state-of-the-art methods in reference model estimation.

PMID:38869779 | DOI:10.1007/s11548-024-03123-0

Categories: Literature Watch

s(2)MRI-ADNet: an interpretable deep learning framework integrating Euclidean-graph representations of Alzheimer's disease solely from structural MRI

Thu, 2024-06-13 06:00

MAGMA. 2024 Jun 13. doi: 10.1007/s10334-024-01178-3. Online ahead of print.

ABSTRACT

OBJECTIVE: To establish a multi-dimensional representation based solely on structural MRI (sMRI) for the early diagnosis of Alzheimer's disease (AD).

METHODS: A total of 3377 participants' sMRI scans from four independent databases were retrospectively identified to construct an interpretable deep learning model that integrates multi-dimensional representations of AD solely from sMRI (called s2MRI-ADNet), using a dual-channel learning strategy of gray matter volume (GMV) from Euclidean space and the regional radiomics similarity network (R2SN) from graph space. Specifically, the GMV feature map learning channel (GMV-Channel) takes into account both long-range spatial relations and detailed localization information, while the node feature and connectivity strength learning channel (NFCS-Channel) characterizes the graph-structured R2SN through a separable learning strategy.

RESULTS: The s2MRI-ADNet achieved superior classification accuracies of 92.1% and 91.4% under intra-database and inter-database cross-validation, respectively. The GMV-Channel and NFCS-Channel captured complementary group-discriminative brain regions, providing a complementary interpretation of the multi-dimensional representation of brain structure in Euclidean and graph spaces, respectively. Moreover, this interpretation was generalizable and reproducible, with the captured group-discriminative brain regions showing a significant correlation across the four independent databases (p < 0.05). Significant associations (p < 0.05) between attention scores and brain abnormality, and between classification scores and clinical measures of cognitive ability, CSF biomarkers, metabolism, and genetic risk score, also provided a solid neurobiological interpretation.

CONCLUSION: The s2MRI-ADNet, relying solely on sMRI, leverages the complementary multi-dimensional representations of AD in Euclidean and graph spaces and achieves superior performance in the early diagnosis of AD, supporting its potential for clinical translation and wider adoption.

PMID:38869733 | DOI:10.1007/s10334-024-01178-3

Categories: Literature Watch

Histological tissue classification with a novel statistical filter-based convolutional neural network

Thu, 2024-06-13 06:00

Anat Histol Embryol. 2024 Jul;53(4):e13073. doi: 10.1111/ahe.13073.

ABSTRACT

Deep networks have attracted considerable interest in the literature and have enabled the solution of recent real-world applications. Owing to its filters, which perform feature extraction, the Convolutional Neural Network (CNN) is recognized as an accurate, efficient, and trustworthy deep learning technique for image-based problems. However, high-performing CNNs are computationally demanding even when they produce good results in a variety of applications, because their large number of parameters limits their reuse on low-performance central processing units. To address these limitations, we propose a novel statistical filter-based CNN (HistStatCNN) for image classification. The convolution kernels of the designed CNN model were initialized using continuous statistical methods. The performance of the proposed filter initialization approach was evaluated on a novel histological dataset and on several histopathological benchmark datasets. To demonstrate the efficiency of the statistical filters, three unique parameter sets and a mixed parameter set of statistical filters were applied to the designed CNN model for the classification task. According to the results, the accuracies of the GoogleNet, ResNet18, ResNet50, and ResNet101 models were 85.56%, 85.24%, 83.59%, and 83.79%, respectively, whereas HistStatCNN improved the accuracy to 87.13% on the histological data classification task. Moreover, the performance of the proposed filter generation approach was confirmed by testing on various histopathological benchmark datasets, where it increased average accuracy rates. The experimental results validate that the proposed statistical filters enhance network performance with simpler CNN models.
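
The central idea, initializing convolution kernels from continuous statistical distributions rather than standard schemes, might look like the PyTorch sketch below; the particular distributions and their parameters are illustrative assumptions, not the paper's filter sets.

```python
import torch
import torch.nn as nn

def init_statistical_filters(conv: nn.Conv2d, dist: str = "normal") -> None:
    """Overwrite a convolution's kernels with samples from a chosen distribution."""
    with torch.no_grad():
        if dist == "normal":
            conv.weight.normal_(mean=0.0, std=0.1)
        elif dist == "uniform":
            conv.weight.uniform_(-0.1, 0.1)
        elif dist == "exponential":
            # Positive-only samples; purely illustrative rate parameter.
            conv.weight.copy_(torch.distributions.Exponential(5.0)
                              .sample(conv.weight.shape))
        if conv.bias is not None:
            conv.bias.zero_()

conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
init_statistical_filters(conv, dist="exponential")
```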

PMID:38868912 | DOI:10.1111/ahe.13073

Categories: Literature Watch

From mechanism to application: Decrypting light-regulated denitrifying microbiome through geometric deep learning

Thu, 2024-06-13 06:00

Imeta. 2024 Jan 6;3(1):e162. doi: 10.1002/imt2.162. eCollection 2024 Feb.

ABSTRACT

Regulation of denitrifying microbiomes is crucial for sustainable industrial biotechnology and ecological nitrogen cycling. Meta-omics can provide holistic genetic profiles of microbiomes. However, precise decryption of highly complex microbiomes and the corresponding meta-omics datasets, and their further application, remain great challenges. Here, we combined optogenetics and geometric deep learning to form a discover-model-learn-advance (DMLA) cycle for denitrification microbiome decryption and regulation. Graph neural networks (GNNs) exhibited superior performance in integrating biological knowledge and identifying coexpression gene panels, which could be utilized to predict unknown phenotypes, elucidate molecular mechanisms, and advance biotechnologies. Through the DMLA cycle, we discovered the wavelength-divergent secretion system and nitrate-superoxide coregulation, increasing extracellular protein production by 83.8% and enhancing nitrate removal by 99.9%. Our study showcases the potential of GNN-empowered optogenetic approaches for regulating denitrification and accelerating mechanistic discovery in microbiomes for in-depth research and versatile applications.

PMID:38868512 | PMC:PMC10989148 | DOI:10.1002/imt2.162

Categories: Literature Watch

Deformable multi-modal image registration for the correlation between optical measurements and histology images

Thu, 2024-06-13 06:00

J Biomed Opt. 2024 Jun;29(6):066007. doi: 10.1117/1.JBO.29.6.066007. Epub 2024 Jun 12.

ABSTRACT

SIGNIFICANCE: The accurate correlation between optical measurements and pathology relies on precise image registration, often hindered by deformations in histology images. We investigate an automated multi-modal image registration method using deep learning to align breast specimen images with corresponding histology images.

AIM: We aim to explore the effectiveness of an automated image registration technique based on deep learning principles for aligning breast specimen images with histology images acquired through different modalities, addressing challenges posed by intensity variations and structural differences.

APPROACH: Unsupervised and supervised learning approaches, employing the VoxelMorph model, were examined using a dataset featuring manually registered images as ground truth.

RESULTS: Evaluation metrics, including Dice scores and mutual information, demonstrate that the unsupervised model significantly outperforms the supervised (and manual) approaches, achieving superior image alignment. The findings highlight the efficacy of automated registration in enhancing the validation of optical technologies by reducing the human errors associated with manual registration processes.
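
A minimal sketch of the mutual-information metric mentioned above, computed from a joint histogram of two aligned grayscale images; the bin count is an arbitrary assumption.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two registered grayscale images (nats)."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```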

CONCLUSIONS: This automated registration technique offers promising potential to enhance the validation of optical technologies by minimizing human-induced errors and inconsistencies associated with manual image registration processes, thereby improving the accuracy of correlating optical measurements with pathology labels.

PMID:38868496 | PMC:PMC11167953 | DOI:10.1117/1.JBO.29.6.066007

Categories: Literature Watch

Exploring the Impact of Batch Size on Deep Learning Artificial Intelligence Models for Malaria Detection

Thu, 2024-06-13 06:00

Cureus. 2024 May 13;16(5):e60224. doi: 10.7759/cureus.60224. eCollection 2024 May.

ABSTRACT

INTRODUCTION: Malaria is a major public health concern, especially in developing countries. Malaria often presents with recurrent fever, malaise, and other nonspecific symptoms mistaken for influenza. Light microscopy of peripheral blood smears is considered the gold standard diagnostic test for malaria. Delays in malaria diagnosis can increase morbidity and mortality, yet microscopy can be time-consuming and is limited by skilled labor, infrastructure, and interobserver variability. Artificial intelligence (AI)-based tools for diagnostic screening can automate blood smear analysis without relying on a trained technician. Convolutional neural networks (CNNs), deep learning neural networks that can identify visual patterns, are being explored for abnormality detection in medical images. One parameter that can be optimized in CNN models is the batch size, the number of images processed at once in one forward and backward pass during model training. The choice of batch size in developing CNN-based malaria screening tools can affect model accuracy, training speed, and, ultimately, clinical usability. This study explores the impact of batch size on CNN model accuracy for malaria detection from thin blood smear images.

METHODS: We used the publicly available "NIH-NLM-ThinBloodSmearsPf" dataset from the United States National Library of Medicine, consisting of blood smear images for Plasmodium falciparum. The collection consists of 13,779 "parasitized" and 13,779 "uninfected" single-cell images. We created four datasets containing all images, each with a unique randomized subset of images for model testing. Using Python, four identical 10-layer CNN models were developed and trained with varying batch sizes for 10 epochs against all datasets, resulting in 16 sets of outputs. Model prediction accuracy, training time, and F1-score, an accuracy metric used to quantify model performance, were collected.

RESULTS: All models produced F1-scores of 94%-96%, with 10 of 16 instances producing F1-scores of 95%. After averaging the outputs of all four datasets by batch size, we observed that, as batch size increased from 16 to 128, the average combined false positives plus false negatives increased by 15.4% (130 to 150), the average model F1-score decreased by 1% (95.3% to 94.3%), and the average training time decreased by 28.11% (1,556 to 1,119 seconds).

CONCLUSION: In each dataset, we observed an approximately 1% decrease in F1-score as the batch size increased. Clinically, a 1% deviation at the population level can have a relatively significant impact on outcomes. The results suggest that smaller batch sizes could improve accuracy in models with similar layer complexity and datasets, potentially resulting in better clinical outcomes. The reduced memory requirement for training also means that model training can be achieved with more economical hardware. Our findings suggest that smaller batch sizes could be evaluated for improvements in accuracy to help develop an AI model that could screen thin blood smears for malaria.
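
The experimental loop described above can be sketched as follows, assuming a Keras CNN and an already-loaded train/test split; the model layout and hyper-parameters are illustrative, not the study's 10-layer network.

```python
import time
from sklearn.metrics import f1_score
from tensorflow import keras

def build_cnn(input_shape=(64, 64, 3)):
    """Small binary classifier; stands in for the study's 10-layer CNN."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

def compare_batch_sizes(x_train, y_train, x_test, y_test, sizes=(16, 32, 64, 128)):
    results = {}
    for bs in sizes:
        model = build_cnn(x_train.shape[1:])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        start = time.time()
        model.fit(x_train, y_train, batch_size=bs, epochs=10, verbose=0)
        elapsed = time.time() - start
        preds = (model.predict(x_test, verbose=0) > 0.5).astype(int).ravel()
        results[bs] = {"f1": f1_score(y_test, preds), "seconds": elapsed}
    return results
```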

PMID:38868293 | PMC:PMC11167577 | DOI:10.7759/cureus.60224

Categories: Literature Watch

A deep learning model for DNA enhancer prediction based on nucleotide position aware feature encoding

Thu, 2024-06-13 06:00

iScience. 2024 May 19;27(6):110030. doi: 10.1016/j.isci.2024.110030. eCollection 2024 Jun 21.

ABSTRACT

Enhancers are genomic DNA elements that regulate the expression of neighboring genes and are crucial for biological processes such as cell differentiation and stress response. However, current machine learning methods for predicting DNA enhancers often underutilize hidden features in gene sequences, limiting model accuracy. This article therefore proposes PDCNN, a deep learning-based enhancer prediction method. PDCNN extracts statistical nucleotide representations from gene sequences, discerning the positional distribution of nucleotides in modifier-like DNA sequences. PDCNN uses a convolutional neural network structure with dual convolutional and fully connected layers, and the cross-entropy loss is minimized iteratively with a gradient descent algorithm, enhancing prediction accuracy. Model parameters are fine-tuned to select the optimal combination for training, achieving over 95% accuracy. Comparative analysis with traditional methods and existing models demonstrates PDCNN's robust feature extraction capability. It outperforms advanced machine learning methods in identifying DNA enhancers, presenting an effective method with broad implications for genomics, biology, and medical research.
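
One plausible reading of the "statistical nucleotide representations" described above is a per-position base-frequency encoding, sketched below; this is an illustrative assumption, not the authors' exact feature construction.

```python
import numpy as np

BASES = "ACGT"

def positional_frequency_matrix(train_seqs):
    """train_seqs: list of equal-length DNA strings -> (length, 4) frequency matrix."""
    length = len(train_seqs[0])
    counts = np.zeros((length, 4))
    for seq in train_seqs:
        for pos, base in enumerate(seq.upper()):
            if base in BASES:
                counts[pos, BASES.index(base)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def encode(seq, freq_matrix):
    """Encode a sequence as the per-position frequency of its observed bases."""
    return np.array([freq_matrix[pos, BASES.index(b)] if b in BASES else 0.0
                     for pos, b in enumerate(seq.upper())])
```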

PMID:38868182 | PMC:PMC11167433 | DOI:10.1016/j.isci.2024.110030

Categories: Literature Watch

A transfer learning enabled approach for ocular disease detection and classification

Thu, 2024-06-13 06:00

Health Inf Sci Syst. 2024 Jun 11;12(1):36. doi: 10.1007/s13755-024-00293-8. eCollection 2024 Dec.

ABSTRACT

Ocular diseases pose significant challenges for timely diagnosis and effective treatment. Deep learning has emerged as a promising technique in medical image analysis, offering potential solutions for accurately detecting and classifying ocular diseases. In this research, we propose Ocular Net, a novel deep learning model for detecting and classifying ocular diseases, including Cataracts, Diabetic, Uveitis, and Glaucoma, using a large dataset of ocular images. The study utilized an image dataset comprising 6200 images of both eyes of patients: 70% of these images (4000 images) were allocated for model training, while the remaining 30% (2200 images) were designated for testing. The dataset contains images in five categories, the four diseases and one normal category. The proposed model uses transfer learning, average pooling layers, clipped ReLU, leaky ReLU, and various other layers to accurately detect ocular diseases from images. Our approach involves training the novel Ocular Net model on diverse ocular images and evaluating its accuracy and performance metrics for disease detection. We also employ data augmentation techniques to improve model performance and mitigate overfitting. The proposed model is tested on different training and testing ratios with varied parameters. Additionally, we compare the performance of Ocular Net with previous methods on various evaluation parameters, assessing its potential for enhancing the accuracy and efficiency of ocular disease diagnosis. The results demonstrate that Ocular Net achieves 98.89% accuracy and a loss value of 0.12% in detecting and classifying ocular diseases, outperforming existing methods.
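
A minimal transfer-learning setup of the kind described above might look like the following, with a frozen pretrained backbone, global average pooling (already built into ResNet), and a small Leaky ReLU head; the backbone choice and layer sizes are assumptions, not the Ocular Net architecture.

```python
import torch.nn as nn
from torchvision import models

def build_ocular_classifier(num_classes=5):
    """Frozen ImageNet backbone with a trainable classification head."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False                       # freeze pretrained features
    backbone.fc = nn.Sequential(                      # replace the final layer
        nn.Linear(backbone.fc.in_features, 256),
        nn.LeakyReLU(0.1),
        nn.Dropout(0.3),
        nn.Linear(256, num_classes),                  # 4 diseases + normal
    )
    return backbone
```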

PMID:38868156 | PMC:PMC11164840 | DOI:10.1007/s13755-024-00293-8

Categories: Literature Watch

GSOOA-1DDRSN: Network traffic anomaly detection based on deep residual shrinkage networks

Thu, 2024-06-13 06:00

Heliyon. 2024 May 29;10(11):e32087. doi: 10.1016/j.heliyon.2024.e32087. eCollection 2024 Jun 15.

ABSTRACT

One of the critical technologies for ensuring cyberspace security is network traffic anomaly detection, which detects malicious attacks by analyzing and identifying network traffic behavior. The rapid development of networks has led to explosive growth in network traffic, which seriously impacts users' information security. Researchers have explored intrusion detection as an active defense technology to address this challenge. However, traditional machine learning methods struggle to capture complex threats and attack patterns when dealing with large-scale network data. In contrast, deep learning methods can automatically extract features from network traffic data and have strong generalization capabilities. To enhance network traffic anomaly detection, this paper proposes a method based on the Deep Residual Shrinkage Network (DRSN), named "GSOOA-1DDRSN". This method uses an improved Osprey optimization algorithm to select the most relevant and essential features in network traffic, reducing the feature dimensionality. For better detection of network traffic anomalies, a one-dimensional deep residual shrinkage network (1DDRSN) is designed as the classifier. Validation is performed on the NSL-KDD and UNSW-NB15 datasets and compared with other methods. The experimental results show that GSOOA-1DDRSN improves multi-classification accuracy, precision, recall, and F1 score by approximately 2% and 3%, respectively, compared to the 1DDRSN model on the two datasets, while reducing computation time by 20% and 30% on these datasets. Furthermore, compared to other models, GSOOA-1DDRSN offers superior classification accuracy and effectively reduces the number of features.
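
The defining component of a deep residual shrinkage network is a residual block with a learned, channel-wise soft threshold; a 1D PyTorch sketch is below. Channel sizes and the threshold sub-network are illustrative assumptions, and the Osprey-based feature selection is not shown.

```python
import torch
import torch.nn as nn

class ShrinkageBlock1D(nn.Module):
    """Residual block with adaptive soft thresholding (deep residual shrinkage, sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1), nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1), nn.BatchNorm1d(channels),
        )
        self.threshold_net = nn.Sequential(            # learns a per-channel threshold
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):                               # x: (batch, channels, length)
        h = self.conv(x)
        scale = h.abs().mean(dim=2)                      # (batch, channels)
        tau = (scale * self.threshold_net(scale)).unsqueeze(2)   # adaptive threshold
        h = torch.sign(h) * torch.clamp(h.abs() - tau, min=0.0)  # soft thresholding
        return torch.relu(x + h)                         # residual connection
```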

PMID:38868050 | PMC:PMC11168389 | DOI:10.1016/j.heliyon.2024.e32087

Categories: Literature Watch

Deep learning model based on contrast-enhanced MRI for predicting post-surgical survival in patients with hepatocellular carcinoma

Thu, 2024-06-13 06:00

Heliyon. 2024 May 16;10(11):e31451. doi: 10.1016/j.heliyon.2024.e31451. eCollection 2024 Jun 15.

ABSTRACT

OBJECTIVE: To develop a deep learning model based on contrast-enhanced magnetic resonance imaging (MRI) data to predict post-surgical overall survival (OS) in patients with hepatocellular carcinoma (HCC).

METHODS: This bi-center retrospective study included 564 surgically resected patients with HCC, divided into training (n = 326), testing (n = 143), and external validation (n = 95) cohorts. A three-dimensional convolutional neural network (3D-CNN) ResNet was used to learn features from the pretreatment MR images (pre-contrast T1WI, late arterial phase, and portal venous phase) and to obtain a deep learning score (DL score). Three Cox regression models were established separately using the DL score (3D-CNN model), clinical features (clinical model), and a combination of the two (combined model). The concordance index (C-index) was used to evaluate model performance.

RESULTS: We trained the 3D-CNN model to derive the DL score for each patient. The C-indexes of the 3D-CNN model in predicting 5-year OS for the training, testing, and external validation cohorts were 0.746, 0.714, and 0.698, respectively, higher than those of the clinical model (0.675, 0.674, and 0.631, respectively; P = 0.009, P = 0.204, and P = 0.092). The C-indexes of the combined model for the testing and external validation cohorts were 0.750 and 0.723, respectively, significantly higher than those of the clinical model (P = 0.017, P = 0.016) and the 3D-CNN model (P = 0.029, P = 0.036).
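
For reference, a concordance index like the values above can be computed for right-censored survival data as sketched below; the use of the lifelines package is an assumption about tooling, not the authors' code.

```python
import numpy as np
from lifelines.utils import concordance_index

def c_index(event_times, risk_scores, event_observed):
    """event_times: observed/censoring times; risk_scores: model output (higher = riskier);
    event_observed: 1 if the event occurred, 0 if censored."""
    # lifelines expects larger predicted scores to mean longer survival,
    # so the risk score is negated.
    return concordance_index(event_times, -np.asarray(risk_scores), event_observed)
```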

CONCLUSIONS: The combined model integrating the DL score and clinical factors showed a higher predictive value than the clinical and 3D-CNN models and may be more useful in guiding clinical treatment decisions to improve the prognosis of patients with HCC.

PMID:38868019 | PMC:PMC11167253 | DOI:10.1016/j.heliyon.2024.e31451

Categories: Literature Watch

A deep neural network algorithm-based approach for predicting recovery period of accidents according to construction scale

Thu, 2024-06-13 06:00

Heliyon. 2024 May 31;10(11):e32215. doi: 10.1016/j.heliyon.2024.e32215. eCollection 2024 Jun 15.

ABSTRACT

Despite long-standing policies, research, and ongoing safety efforts aimed at reducing the risk of accidents, construction sites experience a concerningly high accident rate, considerably higher than in other industries. This trend is likely to be further exacerbated by the rapid growth of large-scale construction projects driven by urban population expansion. Consequently, accurately predicting the recovery periods of accidents at construction sites in advance, and proactively investing in measures to mitigate them, is critical for efficiently managing construction projects. The purpose of this study is therefore to propose a framework for developing accident prediction models based on the Deep Neural Network (DNN) algorithm according to the scale of the construction site. The study develops DNN models for each construction site scale to predict accident recovery periods. Model performance and accuracy were evaluated using the mean absolute error (MAE) and root-mean-square error (RMSE) and compared with widely used multiple regression analysis models. In this comparison, the DNN models showed lower prediction error rates than the regression models for both small-to-medium and large construction sites. The findings and framework of this study can serve as an opening stage for accident risk assessment using deep learning techniques and provide a guideline for introducing deep learning into safety management according to the scale of the construction site.
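
The two error metrics named above are computed as follows for a regression model that predicts recovery periods (illustrative only):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
```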

PMID:38868011 | PMC:PMC11168429 | DOI:10.1016/j.heliyon.2024.e32215

Categories: Literature Watch

Application of the artificial intelligence system based on graphics and vision in ethnic tourism of subtropical grasslands

Thu, 2024-06-13 06:00

Heliyon. 2024 May 17;10(11):e31442. doi: 10.1016/j.heliyon.2024.e31442. eCollection 2024 Jun 15.

ABSTRACT

This study aims to optimize the evaluation of, and decision-making on, ethnic tourism resources through deep learning algorithms and Internet of Things (IoT) technology. Emphasis is placed on the recognition and feature extraction of Mongolian decorative patterns, providing new insights for the deep application of cultural heritage and visual design. The study improves an existing deep learning algorithm by integrating a ResNet + Canny + Local Binary Pattern (LBP) feature extraction pipeline and uses an intelligent decision method to analyze the intelligent development of ethnic tourism resources. At the same time, the deep learning algorithm and IoT technology are combined with visual design and convolutional neural networks to perform feature extraction and pattern recognition. Visual design offers an intuitive representation of tourism resources, while fuzzy decision-making provides a more accurate evaluation in the face of uncertainty. By implementing an intelligent decision-making system, this study achieves a multiplier effect: the integration of intelligent methods not only enhances the accuracy of tourism resource evaluation and decision-making but also elevates the quality and efficiency of the tourism experience. This effect is evident in the system's capacity to manage substantial datasets and deliver prompt, precise decision support, playing a pivotal role in tourism resource management and planning. The findings demonstrate that optimizing intelligent development technology for rural tourism through the IoT can enhance the efficacy of intelligent solutions. In terms of pattern recognition accuracy, AlexNet, VGGNet, and ResNet achieve 90.8%, 94.5%, and 96.9%, respectively, while the proposed fusion algorithm attains 98.8%. These results offer practical insights for rural tourism brand strategy and underscore the utility of fuzzy decision systems in urban tourism and visual design. Moreover, the research outcomes hold significant practical implications for the advancement of Mongolian cultural tourism and provide valuable lessons for exploring novel paradigms in image analysis and pattern recognition, contributing useful insights for future research in related domains.
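
The ResNet + Canny + LBP fusion named above can be sketched as concatenating a deep embedding with edge and texture descriptors; the library choices (torchvision, OpenCV, scikit-image) and feature sizes are illustrative assumptions.

```python
import cv2
import numpy as np
import torch
from skimage.feature import local_binary_pattern
from torchvision import models, transforms

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()            # expose the 2048-d embedding
resnet.eval()

to_tensor = transforms.Compose([transforms.ToTensor(), transforms.Resize((224, 224))])

def extract_features(bgr_image):
    """Fuse a deep embedding with simple edge-density and LBP texture descriptors."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edge_density = cv2.Canny(gray, 100, 200).mean()
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    with torch.no_grad():                  # ImageNet normalization omitted for brevity
        deep = resnet(to_tensor(rgb).unsqueeze(0)).squeeze(0).numpy()
    return np.concatenate([deep, [edge_density], lbp_hist])   # fused descriptor
```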

PMID:38867958 | PMC:PMC11167259 | DOI:10.1016/j.heliyon.2024.e31442

Categories: Literature Watch

Distinct brain morphometry patterns revealed by deep learning improve prediction of post-stroke aphasia severity

Wed, 2024-06-12 06:00

Commun Med (Lond). 2024 Jun 12;4(1):115. doi: 10.1038/s43856-024-00541-8.

ABSTRACT

BACKGROUND: Emerging evidence suggests that post-stroke aphasia severity depends on the integrity of the brain beyond the lesion. While measures of lesion anatomy and brain integrity combine synergistically to explain aphasic symptoms, substantial interindividual variability remains unaccounted for. One explanatory factor may be the spatial distribution of morphometry beyond the lesion (e.g., atrophy), including not just specific brain areas but distinct three-dimensional patterns.

METHODS: Here, we test whether deep learning with Convolutional Neural Networks (CNNs) on whole-brain morphometry (i.e., segmented tissue volumes) and lesion anatomy better identifies chronic stroke individuals with severe aphasia (N = 231) than classical machine learning (Support Vector Machines; SVMs), evaluating whether encoding spatial dependencies reveals uniquely predictive patterns.

RESULTS: CNNs achieve higher balanced accuracy and F1 scores, even when SVMs are nonlinear or integrate linear or nonlinear dimensionality reduction. Parity only occurs when SVMs access features learned by CNNs. Saliency maps demonstrate that CNNs leverage distributed morphometry patterns, whereas SVMs focus on the area around the lesion. Ensemble clustering of CNN saliencies reveals distinct morphometry patterns unrelated to lesion size, consistent across individuals, and which implicate unique networks associated with different cognitive processes as measured by the wider neuroimaging literature. Individualized predictions depend on both ipsilateral and contralateral features outside the lesion.

CONCLUSIONS: Three-dimensional network distributions of morphometry are directly associated with aphasia severity, underscoring the potential for CNNs to improve outcome prognostication from neuroimaging data, and highlighting the prospective benefits of interrogating spatial dependence at different scales in multivariate feature space.

PMID:38866977 | DOI:10.1038/s43856-024-00541-8

Categories: Literature Watch

A deep learning-based automated diagnosis system for SPECT myocardial perfusion imaging

Wed, 2024-06-12 06:00

Sci Rep. 2024 Jun 12;14(1):13583. doi: 10.1038/s41598-024-64445-2.

ABSTRACT

Images obtained from single-photon emission computed tomography for myocardial perfusion imaging (MPI SPECT) contain noise and artifacts, making cardiovascular disease diagnosis difficult. We developed a deep learning-based diagnosis support system using MPI SPECT images. Single-center datasets of MPI SPECT images (n = 5443) were obtained and labeled as healthy or coronary artery disease based on diagnosis reports. Images along the three axes of the four-dimensional datasets (three-dimensional reconstructions under resting and stress conditions) were generated, and an AI model was trained to classify them. The trained convolutional neural network showed high performance [area under the receiver operating characteristic (ROC) curve: approximately 0.91; area under the precision-recall curve: 0.87]. Additionally, using unsupervised learning and the Grad-CAM method, diseased lesions were successfully visualized. The AI-based automated diagnosis system had the highest performance (88%), followed by cardiologists with AI-guided diagnosis (80%) and cardiologists alone (65%). Furthermore, diagnosis time was shorter for AI-guided diagnosis (12 min) than for cardiologists alone (31 min). Our high-quality deep learning-based diagnosis support system may benefit cardiologists by improving diagnostic accuracy and reducing working hours.
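
A generic Grad-CAM routine of the kind mentioned above is sketched below, using forward and backward hooks on a chosen convolutional layer; the model and target layer are assumptions, and the authors' exact visualization pipeline may differ.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Return a normalized class-activation map for one image (C, H, W)."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        model.eval()
        logits = model(image.unsqueeze(0))
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)    # pooled gradients
        cam = F.relu((weights * feats["a"]).sum(dim=1))        # weighted feature maps
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False).squeeze()
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)
    finally:
        h1.remove()
        h2.remove()
```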

PMID:38866884 | DOI:10.1038/s41598-024-64445-2

Categories: Literature Watch

Enhancing the diagnosis of functionally relevant coronary artery disease with machine learning

Wed, 2024-06-12 06:00

Nat Commun. 2024 Jun 12;15(1):5034. doi: 10.1038/s41467-024-49390-y.

ABSTRACT

Functionally relevant coronary artery disease (fCAD) can result in premature death or nonfatal acute myocardial infarction. Its early detection is a fundamentally important task in medicine. Classical detection approaches suffer from limited diagnostic accuracy or expose patients to possibly harmful radiation. Here we show how machine learning (ML) can outperform cardiologists in predicting the presence of stress-induced fCAD in terms of area under the receiver operating characteristic curve (AUROC: 0.71 vs. 0.64, p = 4.0E-13). We present two ML approaches: the first uses eight static clinical variables, whereas the second leverages electrocardiogram signals from exercise stress testing. At a target post-test probability for fCAD of <15%, ML facilitates a potential reduction of imaging procedures by 15-17% compared to the cardiologist's judgement. Predictive performance is validated on an internal temporal data split as well as externally. We also show that combining clinical judgement with conventional ML and deep learning using logistic regression results in a mean AUROC of 0.74.

PMID:38866791 | DOI:10.1038/s41467-024-49390-y

Categories: Literature Watch

A Practical Roadmap to Implementing Deep Learning Segmentation in the Clinical Neuroimaging Research Workflow

Wed, 2024-06-12 06:00

World Neurosurg. 2024 Jun 10:S1878-8750(24)00974-4. doi: 10.1016/j.wneu.2024.06.026. Online ahead of print.

ABSTRACT

Thanks to the proliferation of open-source tools, machine learning applications are growing exponentially, and their integration has become more accessible, particularly for segmentation tools in neuroimaging. This article explores a generalised methodology that harnesses these tools and aims to expedite clinical research and enhance its reproducibility. Critical considerations include hardware, software, neural network training strategies, and data labelling guidelines. More specifically, we advocate an iterative approach to model training and transfer learning, focusing on internal validation and outlier handling early in the labelling process and on fine-tuning later on. This iterative refinement process allows experts to intervene and improve model reliability whilst cutting down on the time they spend on manual work. A seamless integration of the final model's predictions into clinical research is proposed to ensure standardized and reproducible results. In short, this article provides a comprehensive framework for accelerating research using machine learning techniques for image segmentation.

PMID:38866234 | DOI:10.1016/j.wneu.2024.06.026

Categories: Literature Watch
