Deep learning

An efficient colorectal cancer detection network using atrous convolution with coordinate attention transformer and histopathological images

Sat, 2024-08-17 06:00

Sci Rep. 2024 Aug 17;14(1):19109. doi: 10.1038/s41598-024-70117-y.

ABSTRACT

Colorectal cancer is the second most common malignant tumor worldwide. Histopathology image analysis offers crucial data for its clinical diagnosis, and deep learning techniques are currently applied to enhance cancer classification and tumor localization in histopathological image analysis. However, traditional deep learning techniques may lose integrated image information when evaluating the thousands of patches recovered from whole slide images (WSIs). This research proposes a novel colorectal cancer detection network (CCDNet) that combines a coordinate attention transformer with atrous convolution. CCDNet first denoises the input histopathological image using a Wiener-based midpoint-weighted non-local means filter (WMW-NLM) to guarantee precise diagnoses and maintain image features. A novel atrous convolution with coordinate attention transformer (AConvCAT) is then introduced, which combines the advantages of the two networks to classify colorectal tissue at various scales by capturing both local and global information. Further, a coordinate attention module is integrated with a cross-shaped window (CrSWin) transformer to capture tiny changes in colorectal tissue from multiple angles. The proposed CCDNet achieved accuracy rates of 98.61% and 98.96% on the colorectal histological image and NCT-CRC-HE-100K datasets, respectively. Comparative analysis demonstrates that the proposed framework outperforms existing state-of-the-art methods. In hospitals, clinicians can use the proposed CCDNet to verify diagnoses.
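As background, the two building blocks named above can be sketched compactly. The following is a minimal, illustrative PyTorch sketch of an atrous (dilated) convolution followed by coordinate attention in the style of Hou et al. (2021); the layer sizes, pooling choices, and activations are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Gates features with direction-aware attention (simplified sketch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Pool along each spatial axis separately to keep positional information.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (n, c, 1, w)
        return x * a_h * a_w

block = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=2, dilation=2),  # atrous conv: enlarged receptive field
    nn.ReLU(inplace=True),
    CoordinateAttention(64),
)
print(block(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 224, 224])
```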

PMID:39154091 | DOI:10.1038/s41598-024-70117-y

Categories: Literature Watch

Dual-loop control and state prediction analysis of QUAV trajectory tracking based on biological swarm intelligent optimization algorithm

Sat, 2024-08-17 06:00

Sci Rep. 2024 Aug 17;14(1):19091. doi: 10.1038/s41598-024-69911-5.

ABSTRACT

Quadrotor unmanned aerial vehicles (QUAVs) have attracted significant research focus due to their outstanding Vertical Take-Off and Landing (VTOL) capabilities. This research addresses the challenge of maintaining precise trajectory tracking in QUAV systems when faced with external disturbances by introducing a robust, two-tier control system based on sliding mode technology. For position control, this approach utilizes a virtual sliding mode control signal to enhance tracking precision and includes adaptive mechanisms to adjust for changes in mass and external disruptions. In controlling the attitude subsystem, the method employs a sliding mode control framework that secures system stability and compliance with intermediate commands, eliminating the reliance on precise models of the inertia matrix. Furthermore, this study incorporates a deep learning approach that combines Particle Swarm Optimization (PSO) with the Long Short-Term Memory (LSTM) network to foresee and mitigate trajectory tracking errors, thereby significantly enhancing the reliability and safety of mission operations. The robustness and effectiveness of this innovative control strategy are validated through comprehensive numerical simulations.
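For context, the PSO component can be illustrated generically. Below is a minimal NumPy implementation of the standard PSO update rules; the fitness function is a stub standing in for the validation error of an LSTM trajectory-error predictor, and the swarm coefficients are illustrative assumptions.

```python
import numpy as np

def fitness(params):
    # Placeholder: in the paper's setting this would train/evaluate an LSTM
    # with candidate hyperparameters `params` and return its tracking-error loss.
    return np.sum((params - 0.3) ** 2)

rng = np.random.default_rng(0)
n_particles, dim, iters = 20, 4, 50
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best candidate:", gbest)
```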

PMID:39154026 | DOI:10.1038/s41598-024-69911-5

Categories: Literature Watch

MSTCRB: Predicting circRNA-RBP interaction by extracting multi-scale features based on transformer and attention mechanism

Sat, 2024-08-17 06:00

Int J Biol Macromol. 2024 Aug 15:134805. doi: 10.1016/j.ijbiomac.2024.134805. Online ahead of print.

ABSTRACT

CircRNAs play vital roles in biological systems, mainly through binding RNA-binding proteins (RBPs), which is essential for regulating physiological processes in vivo and for identifying causal disease variants. Therefore, predicting interactions between circRNAs and RBPs is a critical step in the discovery of new therapeutic agents. The application of various deep-learning models in bioinformatics has significantly improved prediction and classification performance. However, most existing prediction models are only applicable to a specific type of RNA or to RNAs with simple characteristics. In this study, we propose MSTCRB, a deep learning model based on a transformer and an attention mechanism that extracts multi-scale features to predict circRNA-RBP interactions. K-mer and KNF encodings are employed to capture the global sequence features of circRNAs, NCP and DPCP encodings are utilized to extract local sequence features, and the CDPfold method is applied to extract structural features. To improve prediction performance, an optimized transformer framework and attention mechanism integrate these multi-scale features. We compared our model's performance with five other state-of-the-art methods on 37 circRNA datasets and 31 linear RNA datasets. The results show that the average AUC value of MSTCRB reaches 98.45%, outperforming the comparative methods. All of the above datasets are deposited at https://github.com/chy001228/MSTCRB_database.git, and the source code is available from https://github.com/chy001228/MSTCRB.git.
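For illustration, the k-mer encoding mentioned above can be sketched in a few lines. The following Python function is a minimal, assumed implementation of k-mer frequency encoding for RNA sequences; the choice of k and the handling of ambiguous bases are illustrative, not necessarily MSTCRB's.

```python
from itertools import product

def kmer_frequencies(seq, k=3, alphabet="ACGU"):
    """Return a fixed-length vector of normalized k-mer counts."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0.0] * len(kmers)
    total = max(len(seq) - k + 1, 1)
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:                  # skip windows with ambiguous bases such as 'N'
            vec[index[km]] += 1.0 / total
    return vec

print(len(kmer_frequencies("ACGUACGUAGC")))  # 64 features for k=3
```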

PMID:39153682 | DOI:10.1016/j.ijbiomac.2024.134805

Categories: Literature Watch

A comprehensive overview of diffuse correlation spectroscopy: theoretical framework, recent advances in hardware, analysis, and applications

Sat, 2024-08-17 06:00

Neuroimage. 2024 Aug 15:120793. doi: 10.1016/j.neuroimage.2024.120793. Online ahead of print.

ABSTRACT

Diffuse correlation spectroscopy (DCS) is a powerful tool for assessing microvascular hemodynamics in deep tissues. Recent advances in sensors, lasers, and deep learning have further boosted the development of new DCS methods. However, newcomers might feel overwhelmed, not only by the already-complex DCS theoretical framework but also by the broad range of component options and system architectures. To facilitate entry into this exciting field, we present a comprehensive review of DCS hardware architectures (continuous-wave, frequency-domain, and time-domain) and summarize the corresponding theoretical models. Further, we discuss new applications of highly integrated silicon single-photon avalanche diode (SPAD) sensors in DCS, compare SPADs with existing sensors, and review other components (lasers, sensors, and correlators) as well as data analysis tools, including deep learning. Potential applications in medical diagnosis are discussed, and an outlook on future directions is provided to offer effective guidance for embarking on DCS research.
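As a pointer into the theoretical framework the review covers, the central measurement model of continuous-wave DCS is the Siegert relation, written here in its standard form (β is a setup-dependent coherence factor):

```latex
% Siegert relation: measured intensity autocorrelation g_2 expressed via the
% normalized electric-field autocorrelation g_1
g_2(\tau) = 1 + \beta \, \lvert g_1(\tau) \rvert^{2}
```

g1(τ) is then modeled with the correlation diffusion equation for the tissue geometry at hand; a faster decay of g1(τ) indicates greater microvascular blood flow.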

PMID:39153520 | DOI:10.1016/j.neuroimage.2024.120793

Categories: Literature Watch

Deep learning is necessary for safety regulation in predicting malnutrition in gastric cancer patients

Sat, 2024-08-17 06:00

Clin Nutr. 2024 Aug 2;43(9):2195. doi: 10.1016/j.clnu.2024.07.043. Online ahead of print.

NO ABSTRACT

PMID:39153431 | DOI:10.1016/j.clnu.2024.07.043

Categories: Literature Watch

Semantic segmentation in skin surface microscopic images with artifacts removal

Sat, 2024-08-17 06:00

Comput Biol Med. 2024 Aug 16;180:108975. doi: 10.1016/j.compbiomed.2024.108975. Online ahead of print.

ABSTRACT

Skin surface imaging has been used to examine skin lesions with a microscope for over a century and is commonly known as epiluminescence microscopy, dermatoscopy, or dermoscopy. Skin surface microscopy has been recommended to reduce the necessity of biopsy, and this imaging technique could improve the clinical diagnostic performance for pigmented skin lesions. Different imaging techniques are employed in dermatology to find diseases. Segmentation and classification are the two main steps in the examination, and the classification performance is influenced by the algorithm employed in the segmentation procedure. The most difficult aspect of segmentation is removing unwanted artifacts. Many deep-learning models are being created to segment skin lesions. In this paper, an analysis of common artifacts is presented to investigate the segmentation performance of deep learning models on skin surface microscopic images. The most prevalent artifacts in skin images are hair and dark corners, which can be observed in the majority of dermoscopy images captured through various imaging techniques. While hair detection and removal methods are common, the introduction of dark corner detection and removal represents a novel approach to skin lesion segmentation. A comprehensive analysis of segmentation performance is conducted using the surface density of artifacts. Assessment on the PH2, ISIC 2017, and ISIC 2018 datasets demonstrates significant enhancements, with Dice coefficients rising to 93.49 (from 86.81), 85.86 (from 79.91), and 75.38 (from 51.28), respectively, upon artifact removal. These results underscore the pivotal role of artifact removal in amplifying the efficacy of deep-learning models for skin lesion segmentation.
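For reference, the Dice coefficient used above can be computed as follows. This is a minimal NumPy sketch for binary masks, with illustrative inputs.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 0]])
print(round(float(dice_coefficient(pred, target)), 3))  # 0.8
```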

PMID:39153395 | DOI:10.1016/j.compbiomed.2024.108975

Categories: Literature Watch

Data-independent acquisition in Metaproteomics

Sat, 2024-08-17 06:00

Expert Rev Proteomics. 2024 Aug 17. doi: 10.1080/14789450.2024.2394190. Online ahead of print.

ABSTRACT

INTRODUCTION: Metaproteomics offers insights into the function of complex microbial communities and is also capable of revealing microbe-microbe and host-microbe interactions. Data-independent acquisition (DIA) mass spectrometry is an emerging technology that holds great potential to achieve deep and accurate metaproteomics with higher reproducibility, yet it still faces a series of challenges due to the inherent complexity of metaproteomics and DIA data.

AREAS COVERED: This review offers an overview of DIA metaproteomics approaches, covering aspects such as database construction, search strategy, and data analysis tools. Several current DIA metaproteomics studies are presented to illustrate the procedures. Important ongoing challenges are also highlighted, and future perspectives of DIA methods for metaproteomics analysis are discussed. Cited references were collected from Google Scholar and PubMed.

EXPERT OPINION: Considering the inherent complexity of DIA metaproteomics data, data analysis strategies specifically designed for their interpretation are imperative. From this point of view, we anticipate that deep learning and de novo sequencing methods will become more prevalent in the future, potentially improving protein coverage in metaproteomics. Moreover, the advancement of metaproteomics also depends on the development of sample preparation methods, data analysis strategies, and related factors, which are key to unlocking the full potential of metaproteomics.

PMID:39152734 | DOI:10.1080/14789450.2024.2394190

Categories: Literature Watch

MycoAI: Fast and accurate taxonomic classification for fungal ITS sequences

Sat, 2024-08-17 06:00

Mol Ecol Resour. 2024 Aug 16:e14006. doi: 10.1111/1755-0998.14006. Online ahead of print.

ABSTRACT

Efficient and accurate classification of DNA barcode data is crucial for large-scale fungal biodiversity studies. However, existing methods are either computationally expensive or lack accuracy. Previous research has demonstrated the potential of deep learning in this domain, successfully training neural networks for biological sequence classification. We introduce the MycoAI Python package, featuring various deep learning models such as BERT and CNN tailored to fungal Internal Transcribed Spacer (ITS) sequences. We explore different neural architecture designs and encoding methods to identify optimal models. By employing a multi-head output architecture and multi-level hierarchical label smoothing, MycoAI effectively generalizes across the taxonomic hierarchy. Using over 5 million labelled sequences from the UNITE database, we develop two models: MycoAI-BERT and MycoAI-CNN. While we emphasize the necessity of verifying the classification results of AI models due to insufficient reference data, MycoAI still exhibits substantial potential. When benchmarked against existing classifiers such as DNABarcoder and RDP on two independent test sets with labels present in the training dataset, MycoAI models demonstrate high accuracy at the genus and higher taxonomic levels, with MycoAI-CNN being the fastest and most accurate. In terms of efficiency, MycoAI models can classify over 300,000 sequences within 5 min. We publicly release the MycoAI models, enabling mycologists to classify their ITS barcode data efficiently. Additionally, MycoAI serves as a platform for developing further deep learning-based classification methods. The source code for MycoAI is available under the MIT Licence at https://github.com/MycoAI/MycoAI.
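The multi-head output architecture described above can be pictured as one shared encoder with a classification head per taxonomic rank. The PyTorch sketch below is an assumed, simplified illustration; the encoder stub and class counts are placeholders, not UNITE's actual label space or MycoAI's implementation.

```python
import torch
import torch.nn as nn

RANKS = {"phylum": 20, "class": 60, "order": 250, "family": 800, "genus": 3000}

class MultiHeadTaxonClassifier(nn.Module):
    def __init__(self, encoder_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in for the BERT/CNN encoder
            nn.LazyLinear(encoder_dim), nn.ReLU())
        self.heads = nn.ModuleDict(
            {rank: nn.Linear(encoder_dim, n) for rank, n in RANKS.items()})

    def forward(self, x):
        h = self.encoder(x)
        # One logit vector per taxonomic rank from the shared representation.
        return {rank: head(h) for rank, head in self.heads.items()}

model = MultiHeadTaxonClassifier()
logits = model(torch.randn(2, 512))              # 2 encoded ITS sequences
print({r: t.shape for r, t in logits.items()})
```

Training would sum one cross-entropy loss per rank, which is where the multi-level hierarchical label smoothing described above would be applied.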

PMID:39152642 | DOI:10.1111/1755-0998.14006

Categories: Literature Watch

Melon ripeness detection by an improved object detection algorithm for resource-constrained environments

Fri, 2024-08-16 06:00

Plant Methods. 2024 Aug 16;20(1):127. doi: 10.1186/s13007-024-01259-3.

ABSTRACT

BACKGROUND: Ripeness is a phenotype that significantly impacts the quality of fruits, constituting a crucial factor in the cultivation and harvesting processes. Manual detection methods and experimental analysis, however, are inefficient and costly.

RESULTS: In this study, we propose MRD-YOLO, a lightweight and efficient melon ripeness detection method based on an improved object detection algorithm. The method combines a lightweight backbone network, MobileNetV3, the Slim-neck design paradigm, and a Coordinate Attention mechanism. Additionally, we have created a large-scale, ripeness-annotated melon dataset sourced from a greenhouse. This dataset contains complexities commonly encountered in the field environment, such as occlusion, overlapping, and varying light intensities. MRD-YOLO achieves a mean Average Precision of 97.4% on this dataset, providing accurate and reliable melon ripeness detection. Moreover, the method demands only 4.8 G FLOPs and 2.06 M parameters, representing 58.5% and 68.4% of the baseline YOLOv8n model, respectively. It comprehensively outperforms existing methods in balancing accuracy and computational efficiency. Furthermore, it maintains real-time inference capability in GPU environments and demonstrates exceptional inference speed in CPU environments. The lightweight design of MRD-YOLO is anticipated to be deployed on various resource-constrained mobile and edge devices, such as picking robots. Particularly noteworthy is its performance when tested on two melon datasets obtained from the Roboflow platform, achieving a mean Average Precision of 85.9%. This underscores its excellent generalization ability on untrained data.

CONCLUSIONS: This study presents an efficient method for melon ripeness detection, and the dataset utilized in this study, alongside the detection method, will provide a valuable reference for ripeness detection across various types of fruits.

PMID:39152496 | DOI:10.1186/s13007-024-01259-3

Categories: Literature Watch

Deep learning-based prediction of indication for cracked tooth extraction using panoramic radiography

Fri, 2024-08-16 06:00

BMC Oral Health. 2024 Aug 16;24(1):952. doi: 10.1186/s12903-024-04721-9.

ABSTRACT

BACKGROUND: We aimed to determine the feasibility of utilizing deep learning-based predictions of the indications for cracked tooth extraction using panoramic radiography.

METHODS: Panoramic radiographs of 418 teeth (group 1: 209 normal teeth; group 2: 209 cracked teeth) were evaluated for the training and testing of a deep learning model. We evaluated the performance of the cracked tooth diagnosis model for individual teeth using InceptionV3, ResNet50, and EfficientNetB0. The model underwent fivefold cross-validation, with the 418 data instances divided into training, validation, and test sets at a ratio of 3:1:1.
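The 3:1:1 fivefold protocol can be read as: in each round, one fold serves as the test set, one as the validation set, and the remaining three folds train the model. A minimal scikit-learn sketch of this split, under that assumption (fold assignment and seed are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(418)                              # stand-ins for the 418 radiographs
folds = list(KFold(n_splits=5, shuffle=True, random_state=42).split(X))
fold_indices = [test for _, test in folds]      # five disjoint folds

for i in range(5):
    test_idx = fold_indices[i]
    val_idx = fold_indices[(i + 1) % 5]          # next fold as validation
    train_idx = np.concatenate(
        [fold_indices[j] for j in range(5) if j not in (i, (i + 1) % 5)])
    print(f"round {i}: train={len(train_idx)} val={len(val_idx)} test={len(test_idx)}")
```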

RESULTS: To evaluate the feasibility, the sensitivity, specificity, accuracy, and F1 score of the deep learning models were calculated, with values of 90.43-94.26%, 52.63-60.77%, 72.01-75.84%, and 76.36-79.00%, respectively.

CONCLUSION: We found that the indications for cracked tooth extraction can be predicted to a certain extent through a deep learning model using panoramic radiography.

PMID:39152384 | DOI:10.1186/s12903-024-04721-9

Categories: Literature Watch

Validity of machine learning algorithms for automatically extracting growing rod length on radiographs in children with early-onset scoliosis

Fri, 2024-08-16 06:00

Med Biol Eng Comput. 2024 Aug 16. doi: 10.1007/s11517-024-03181-1. Online ahead of print.

ABSTRACT

The magnetically controlled growing rod technique is an effective surgical treatment for children with early-onset scoliosis. The length of the instrumented growing rods is adjusted regularly to compensate for the normal growth of these patients. Manual measurement of rod length on posteroanterior spine radiographs is subjective and time-consuming. A machine learning (ML) system using a deep learning approach was developed to automatically measure the adjusted rod length. Three ML models (rod model, 58 mm model, and head-piece model) were developed to extract the rod length from radiographs. Three hundred eighty-seven radiographs were used for model development, and 60 radiographs with 118 rods were set aside for final testing. The average precision (AP), the mean absolute difference (MAD) ± standard deviation (SD), and the inter-method correlation coefficient (ICC[2,1]) between the manual and artificial intelligence (AI) adjustment measurements were used to evaluate the developed method. The APs of the three models were 67.6%, 94.8%, and 86.3%, respectively. The MAD ± SD of the rod length change was 0.98 ± 0.88 mm, and the ICC[2,1] was 0.90. The average time to output a single rod measurement was 6.1 s. The developed AI provides an accurate and reliable method for detecting rod length automatically.
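The reported agreement metrics can be sketched as follows: MAD ± SD with NumPy, and ICC(2,1) via the pingouin package (assumed available); all measurement values below are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

manual = np.array([10.2, 12.5, 9.8, 15.1, 11.0])   # rod length changes, mm
ai = np.array([10.9, 12.1, 10.5, 14.6, 11.8])

diff = np.abs(manual - ai)
print(f"MAD = {diff.mean():.2f} +/- {diff.std(ddof=1):.2f} mm")

# Long-format table: one row per (rod, rater) measurement.
long = pd.DataFrame({
    "rod": np.tile(np.arange(len(manual)), 2),
    "rater": ["manual"] * len(manual) + ["ai"] * len(ai),
    "length": np.concatenate([manual, ai]),
})
icc = pg.intraclass_corr(data=long, targets="rod", raters="rater", ratings="length")
print(icc[icc["Type"] == "ICC2"])   # ICC(2,1): two-way random effects, single rater
```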

PMID:39152359 | DOI:10.1007/s11517-024-03181-1

Categories: Literature Watch

Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling

Fri, 2024-08-16 06:00

NPJ Digit Med. 2024 Aug 16;7(1):216. doi: 10.1038/s41746-024-01207-4.

ABSTRACT

Deep learning has enabled breakthroughs in automated diagnosis from medical imaging, with many successful applications in ophthalmology. However, standard medical image classification approaches only assess disease presence at the time of acquisition, neglecting the common clinical setting of longitudinal imaging. For slow, progressive eye diseases like age-related macular degeneration (AMD) and primary open-angle glaucoma (POAG), patients undergo repeated imaging over time to track disease progression and forecasting the future risk of developing a disease is critical to properly plan treatment. Our proposed Longitudinal Transformer for Survival Analysis (LTSA) enables dynamic disease prognosis from longitudinal medical imaging, modeling the time to disease from sequences of fundus photography images captured over long, irregular time periods. Using longitudinal imaging data from the Age-Related Eye Disease Study (AREDS) and Ocular Hypertension Treatment Study (OHTS), LTSA significantly outperformed a single-image baseline in 19/20 head-to-head comparisons on late AMD prognosis and 18/20 comparisons on POAG prognosis. A temporal attention analysis also suggested that, while the most recent image is typically the most influential, prior imaging still provides additional prognostic value.

PMID:39152209 | DOI:10.1038/s41746-024-01207-4

Categories: Literature Watch

A new strong convective precipitation forecasting method based on attention mechanism and spatio-temporal reasoning

Fri, 2024-08-16 06:00

Sci Rep. 2024 Aug 16;14(1):19024. doi: 10.1038/s41598-024-68951-1.

ABSTRACT

Radar observation variables reflect the precipitation amount of strong convective precipitation processes, whose accurate forecasting is a major challenge in weather forecasting. Current forecasting methods are mostly based on radar echo extrapolation, which suffers from insufficient input information and ineffective model architectures. This paper presents a bidirectional long short-term memory forecasting method for strong convective precipitation based on an attention mechanism and a residual neural network (ResNet-Attention-BiLSTM). First, ResNet is used to effectively extract the key information of extreme weather, and the problem of the prediction model regressing to the mean is solved by learning the residuals of the radar observation data. Second, the attention mechanism adaptively weights the fusion of features, enhancing the extraction of important features from the precipitation image data. On this basis, a novel spatio-temporal reasoning method for radar observations is presented and a precipitation forecasting model established, which captures the past and future time-order relationships of the sequence data. Finally, experiments were conducted on real data collected from a strong convective precipitation process, and the model's performance was compared with existing models: the mean absolute percentage error was reduced by 15.94% (1 km), 18.72% (3 km), and 14.91% (7 km), and the coefficient of determination (R²) increased by 10.89% (1 km), 9.61% (3 km), and 9.29% (7 km), demonstrating the effectiveness and state-of-the-art performance of the forecasting model.
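The core sequence model can be sketched compactly. Below is a minimal, assumed PyTorch illustration of an attention-weighted bidirectional LSTM; the ResNet feature extractor is stubbed as a linear layer, and all sizes are illustrative rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.extract = nn.Linear(256, feat_dim)     # stand-in for ResNet features
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scores each time step
        self.head = nn.Linear(2 * hidden, 1)        # precipitation estimate

    def forward(self, x):                            # x: (batch, time, 256)
        h, _ = self.bilstm(torch.relu(self.extract(x)))   # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)            # adaptive feature weights
        context = (w * h).sum(dim=1)                      # attention-pooled summary
        return self.head(context).squeeze(-1)

model = AttentionBiLSTM()
print(model(torch.randn(4, 12, 256)).shape)  # torch.Size([4])
```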

PMID:39152199 | DOI:10.1038/s41598-024-68951-1

Categories: Literature Watch

A comparative analysis of classical and machine learning methods for forecasting TB/HIV co-infection

Fri, 2024-08-16 06:00

Sci Rep. 2024 Aug 16;14(1):18991. doi: 10.1038/s41598-024-69580-4.

ABSTRACT

TB/HIV coinfection poses a complex public health challenge. Accurate forecasting of future trends is essential for efficient resource allocation and intervention strategy development. This study compares classical statistical and machine learning models for predicting TB/HIV coinfection cases, stratified by gender and for the general population. We analyzed time series data using exponential smoothing and ARIMA to establish the baseline trend and seasonality. Subsequently, machine learning models (SVR, XGBoost, LSTM, CNN, GRU, CNN-GRU, and CNN-LSTM) were employed to capture the complex dynamics and inherent non-linearities of TB/HIV coinfection data. Performance metrics (MSE, MAE, sMAPE) and the Diebold-Mariano test were used to evaluate model performance. The results revealed that deep learning models, particularly bidirectional LSTM and CNN-LSTM, significantly outperformed classical methods, demonstrating the effectiveness of deep learning for modeling TB/HIV coinfection time series and generating more accurate forecasts.
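Two of the evaluation tools named above can be sketched briefly. The following Python functions give a minimal sMAPE and a basic Diebold-Mariano test under squared-error loss (asymptotic normal approximation, no autocorrelation/HAC correction); they are illustrative, not the study's exact implementation.

```python
import numpy as np
from scipy import stats

def smape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return float(np.mean(np.abs(actual - forecast) / denom)) * 100

def diebold_mariano(actual, f1, f2):
    # Loss differential between the two forecasts (squared-error loss).
    d = (np.asarray(actual) - np.asarray(f1)) ** 2 \
        - (np.asarray(actual) - np.asarray(f2)) ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p = 2 * (1 - stats.norm.cdf(abs(dm)))       # two-sided p-value
    return dm, p

y = np.array([12, 15, 14, 18, 20], float)
print(smape(y, y * 1.1))                        # ~9.5
print(diebold_mariano(y, y * 1.1, y * 1.3))
```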

PMID:39152187 | DOI:10.1038/s41598-024-69580-4

Categories: Literature Watch

Biologically interpretable multi-task deep learning pipeline predicts molecular alterations, grade, and prognosis in glioma patients

Fri, 2024-08-16 06:00

NPJ Precis Oncol. 2024 Aug 16;8(1):181. doi: 10.1038/s41698-024-00670-2.

ABSTRACT

Deep learning models have been developed for various predictions in glioma, yet they have been constrained by manual segmentation, task-specific designs, or a lack of biological interpretation. Herein, we aimed to develop an end-to-end multi-task deep learning (MDL) pipeline that can simultaneously predict molecular alterations and histological grade (auxiliary tasks) as well as prognosis (primary task) in gliomas, and to elucidate the biological mechanisms underlying the model's predictions. We collected multiscale data, including baseline MRI images, from 2776 glioma patients across two private (FAHZU and HPPH, n = 1931) and three public datasets (TCGA, n = 213; UCSF, n = 410; and EGD, n = 222). We trained and internally validated the MDL model using our private datasets and externally validated it using the three public datasets. We used the model-predicted deep prognosis score (DPS) to stratify patients into low-DPS and high-DPS subtypes. Additionally, a radio-multiomics analysis was conducted to elucidate the biological basis of the DPS. In the external validation cohorts, the MDL model achieved average areas under the curve of 0.892-0.903, 0.710-0.894, and 0.850-0.879 for predicting IDH mutation status, 1p/19q co-deletion status, and tumor grade, respectively. Moreover, the MDL model yielded a C-index of 0.723 in the TCGA and 0.671 in the UCSF cohorts for the prediction of overall survival. The DPS exhibits significant correlations with activated oncogenic pathways, immune infiltration patterns, specific protein expression, DNA methylation, tumor mutation burden, and tumor-stroma ratio. Accordingly, our work presents an accurate and biologically meaningful tool for predicting molecular subtypes, tumor grade, and survival outcomes in gliomas, supporting personalized clinical decision-making in a global and non-invasive manner.
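For reference, the C-index reported above measures how well a predicted risk score orders observed survival times. A minimal sketch with the lifelines package (assumed available), using illustrative values:

```python
from lifelines.utils import concordance_index

survival_months = [10, 24, 7, 36, 15]      # observed times
risk_score = [0.9, 0.3, 0.8, 0.1, 0.5]     # model-predicted deep prognosis score
event_observed = [1, 1, 1, 0, 1]           # 0 = censored

# concordance_index expects higher scores for longer survival, so a risk
# score (higher = worse prognosis) is negated before scoring.
print(concordance_index(survival_months, [-r for r in risk_score], event_observed))
```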

PMID:39152182 | DOI:10.1038/s41698-024-00670-2

Categories: Literature Watch

Metaheuristics based dimensionality reduction with deep learning driven false data injection attack detection for enhanced network security

Fri, 2024-08-16 06:00

Sci Rep. 2024 Aug 16;14(1):18967. doi: 10.1038/s41598-024-69806-5.

ABSTRACT

Recent advancements in sensor, communication, and computing technologies facilitate smart grid use. However, the heavy reliance on data and communication technology increases the exposure of smart grids to cyberattacks. Existing mitigations in the electricity grid focus on protecting primary or redundant measurements, and these approaches make assumptions regarding false data injection (FDI) attacks that are inadequate and too restrictive to cope with cyberattacks. The reliance on communication technology has emphasized the exposure of power systems to FDI attacks that can bypass the current bad data detection (BDD) mechanism. Current studies on unobservable FDI attacks (FDIAs) reveal the severe threat they pose to secure system operation, because these attacks can evade the BDD method. Thus, a data-driven, learning-based approach helps detect unobservable FDIAs in distribution systems and mitigate these risks. This study presents a new Hybrid Metaheuristics-based Dimensionality Reduction with Deep Learning for FDIA (HMDR-DLFDIA) detection technique for enhanced network security. The primary objective of the HMDR-DLFDIA technique is to recognize and classify FDIA attacks in distribution systems. In the HMDR-DLFDIA technique, the min-max scaler is first used for data normalization. A hybrid Harris Hawks optimizer with the sine cosine algorithm (hybrid HHO-SCA) is then applied for feature selection. For FDIA detection, the HMDR-DLFDIA technique utilizes the stacked autoencoder (SAE) method, and to improve the detection outcomes of the SAE model, the gazelle optimization algorithm (GOA) is exploited. A complete set of experiments was organized to demonstrate the superiority of the HMDR-DLFDIA method, and the comprehensive results analysis showed that the HMDR-DLFDIA technique performs better than existing DL models.
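Two stages of the described pipeline, min-max normalization and a stacked autoencoder, can be sketched as follows. This PyTorch sketch is an assumed illustration; the metaheuristic feature selection (hybrid HHO-SCA) and GOA-based tuning are omitted, and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

def min_max_scale(x):
    """Column-wise min-max normalization to [0, 1]."""
    lo, hi = x.min(dim=0).values, x.max(dim=0).values
    return (x - lo) / (hi - lo + 1e-9)

class StackedAutoencoder(nn.Module):
    def __init__(self, n_features, dims=(64, 32, 16)):
        super().__init__()
        enc, d_in = [], n_features
        for d in dims:                               # stack encoder layers
            enc += [nn.Linear(d_in, d), nn.ReLU()]
            d_in = d
        dec = []
        for d in reversed((n_features,) + dims[:-1]):  # mirror for decoder
            dec += [nn.Linear(d_in, d), nn.ReLU()]
            d_in = d
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec[:-1])

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = min_max_scale(torch.randn(128, 100))            # 128 measurement vectors
model = StackedAutoencoder(100)
loss = nn.functional.mse_loss(model(x), x)          # reconstruction error flags anomalies
print(float(loss))
```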

PMID:39152172 | DOI:10.1038/s41598-024-69806-5

Categories: Literature Watch

Implementing Triage-Bot: Supporting the Current Practice for Triage Nurses

Fri, 2024-08-16 06:00

Surg Technol Int. 2024 Aug 16;44:sti44/1804. Online ahead of print.

ABSTRACT

In Canada, emergency departments (EDs) have 15.1 million unscheduled visits every year; this has been suggested to indicate that patients rely on EDs to address the gaps experienced by the 6.5 million Canadians who lack a primary care provider. When this large number of visits is coupled with a predicted shortage of 100,000 nurses in Canada by 2030, EDs can be expected to face resource limitations, which highlights the importance of triage systems as a source of immediate support. Technology that incorporates innovative analytical methods, automation of routine tasks, and efficient processing can be leveraged to enhance patient outcomes, streamline clinical processes, and improve the overall quality and efficiency of healthcare delivery. This paper aims to highlight how Triage-Bot, a proposed AI system, can assist ED nurses when triaging patients. The Triage-Bot system is based on the Canadian Triage and Acuity Scale (CTAS), which currently serves as a standardized and highly effective tool for prioritizing patient care in emergency departments across the country. Pre-set and open-ended questions are asked using voice and video, allowing patients to describe their health concerns and conditions. Triage-Bot automatically measures the following vital signs: heart rate (HR), heart rate variability (HRV), oxygen saturation (SpO2), respiratory rate (RR), blood pressure (BP), blood glucose (BG), and stress. The system uses artificial intelligence models, particularly those with a deep learning approach, that simultaneously analyze the user's facial expression and voice tone. Implementation: A systematic review addressed the implications of AI in nursing and concluded that it could contribute to patient care by providing personalized instructions and/or remotely monitoring patients. The Triage-Bot system can be implemented in healthcare facilities, such as emergency department waiting rooms. The information it collects can then be added to a patient's health records to support nurses in assessing the severity of each patient's condition. Limitations: If the system is accessed without a nurse's guidance, it is imperative that the user receive information regarding when to visit a healthcare provider or ED. Continuous improvements in Triage-Bot's accessibility for patients with varying abilities are required to ensure that the system remains user-friendly during times of illness. Voice and text interaction can also be influenced by a user's understanding of language, culture, and age-related factors.

PMID:39151148

Categories: Literature Watch

Deep-Transfer-Learning-Based Natural Language Processing of Serial Free-Text Computed Tomography Reports for Predicting Survival of Patients With Pancreatic Cancer

Fri, 2024-08-16 06:00

JCO Clin Cancer Inform. 2024 Aug;8:e2400021. doi: 10.1200/CCI.24.00021.

ABSTRACT

PURPOSE: To explore the predictive potential of serial computed tomography (CT) radiology reports for pancreatic cancer survival using natural language processing (NLP).

METHODS: Deep-transfer-learning-based NLP models were retrospectively trained and tested with serial, free-text CT reports and survival information extracted for consecutive patients diagnosed with pancreatic cancer in a Korean tertiary hospital. Randomly selected patients with pancreatic cancer and their serial CT reports from an independent tertiary hospital in the United States were included in the external testing data set. The concordance index (c-index) between predicted and actual survival and the area under the receiver operating characteristic curve (AUROC) for predicting 1-year survival were calculated.
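As an illustration of the general approach, fine-tuning a clinical BERT encoder on report text looks roughly as follows with Hugging Face transformers. The checkpoint name (a public clinical BERT), the two-label setup, and the report snippet are assumptions for the sketch, not the authors' exact pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"      # a public clinical BERT
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

report = "Ill-defined hypoattenuating mass in the pancreatic head ..."
inputs = tokenizer(report, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))                # e.g., P(1-year survival)
```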

RESULTS: Between January 2004 and June 2021, 2,677 patients with 12,255 CT reports and 670 patients with 3,058 CT reports were allocated to the training and internal testing data sets, respectively. A ClinicalBERT (Bidirectional Encoder Representations from Transformers) model trained on the single, first CT report showed a c-index of 0.653 and an AUROC of 0.722 in predicting the overall survival of patients with pancreatic cancer. ClinicalBERT trained on up to 15 consecutive reports from the initial report showed an improved c-index of 0.811 and an AUROC of 0.911. On the external testing set with 273 patients and 1,947 CT reports, the AUROC was 0.888, indicating the generalizability of our model. Further analyses showed that our model's interpretation is contextual, extending beyond specific phrases.

CONCLUSION: Deep-transfer-learning-based NLP model of serial CT reports can predict the survival of patients with pancreatic cancer. Clinical decisions can be supported by the developed model, with survival information extracted solely from serial radiology reports.

PMID:39151114 | DOI:10.1200/CCI.24.00021

Categories: Literature Watch

A scalable blockchain-enabled federated learning architecture for edge computing

Fri, 2024-08-16 06:00

PLoS One. 2024 Aug 16;19(8):e0308991. doi: 10.1371/journal.pone.0308991. eCollection 2024.

ABSTRACT

Various techniques, including deep learning and blockchain-based approaches, have been explored to unlock the potential of edge data processing and the resultant intelligence. However, existing studies often overlook the resource requirements of blockchain consensus processing in typical Internet of Things (IoT) edge network settings. This paper presents our FLCoin approach. Specifically, we propose a novel committee-based method for consensus processing in which committee members are elected via the federated learning (FL) process. Additionally, we employ a two-layer blockchain architecture for FL processing to facilitate the seamless integration of blockchain and FL techniques. Our analysis reveals that the communication overhead remains stable as the network size increases, ensuring the scalability of our blockchain-based FL system. To assess the performance of the proposed method, experiments were conducted using the MNIST dataset to train a standard five-layer CNN model. Our evaluation demonstrated the efficiency of FLCoin: with an increasing number of nodes participating in model training, the consensus latency remained below 3 s, resulting in a low total training time. Notably, compared with a blockchain-based FL system utilizing PBFT as the consensus protocol, our approach achieved a 90% improvement in communication overhead and a 35% reduction in training time cost. Our approach ensures an efficient and scalable solution, enabling the integration of blockchain and FL into IoT edge networks, and the proposed architecture provides a solid foundation for building intelligent IoT services.
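The FL aggregation step that such a system records on-chain can be illustrated with plain federated averaging (FedAvg). The NumPy sketch below is an assumed simplification; FLCoin's committee election and two-layer blockchain are omitted, and model sizes are illustrative.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three clients, each holding one weight matrix and one bias vector.
clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
sizes = [100, 50, 150]
global_model = fed_avg(clients, sizes)
print(global_model[0].shape, global_model[1].shape)  # (4, 4) (4,)
```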

PMID:39150937 | DOI:10.1371/journal.pone.0308991

Categories: Literature Watch

An automatic glaucoma grading method based on attention mechanism and EfficientNet-B3 network

Fri, 2024-08-16 06:00

PLoS One. 2024 Aug 16;19(8):e0296229. doi: 10.1371/journal.pone.0296229. eCollection 2024.

ABSTRACT

Glaucoma is spreading rapidly worldwide, and the number of glaucoma patients is expected to exceed 110 million by 2040. Early identification and detection of glaucoma is particularly important, as the disease can easily lead to irreversible vision damage or even blindness if not treated in its early stages. Deep learning has attracted much attention in computer vision and has been widely studied, especially for the recognition and diagnosis of ophthalmic diseases. Efficiently extracting effective features for accurate glaucoma grading from a limited dataset is challenging. Current glaucoma recognition algorithms mainly use 2D fundus images to automatically identify whether the disease is present, but they do not distinguish between early and late stages; in clinical practice, however, the treatment of early and late glaucoma differs, so accurate grading of glaucoma is more important. This study uses a private dataset containing multimodal data (2D fundus images and 3D-OCT scanner images) and extracts effective features to achieve accurate three-class classification (normal, early, and moderately advanced) with optimal performance on various measures. In view of this, this paper proposes an automatic glaucoma classification method based on an attention mechanism and the EfficientNetB3 network. The EfficientNetB3 and ResNet34 networks are built to extract and fuse features from 2D fundus images and 3D-OCT scanner images, respectively, to achieve accurate classification. The proposed auto-classification method minimizes feature redundancy while improving classification accuracy and incorporates an attention mechanism in the two-branch model, which enables the convolutional neural network to focus on the main features of the eye and discard the meaningless black background region of the image, improving model performance. Combined with the cross-entropy loss, the auto-classification method achieves an accuracy of up to 97.83%. Since the proposed automatic grading method is effective and ensures reliable decision-making for glaucoma screening, it can be used as a second-opinion tool by doctors, greatly reducing missed diagnoses and misdiagnoses and buying more time for patients' treatment.
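The two-branch fusion design described above can be sketched as one branch per modality with a simple attention gate on the fused features. The PyTorch sketch below is an assumed illustration; the real model uses EfficientNetB3 and ResNet34 backbones, which are stubbed here, and all sizes are placeholders.

```python
import torch
import torch.nn as nn

class TwoBranchGrader(nn.Module):
    def __init__(self, n_classes=3):                 # normal / early / moderately advanced
        super().__init__()
        self.fundus_branch = nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU())
        self.oct_branch = nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU())
        self.attention = nn.Sequential(nn.Linear(256, 256), nn.Sigmoid())
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, fundus, oct_scan):
        fused = torch.cat(
            [self.fundus_branch(fundus), self.oct_branch(oct_scan)], dim=1)
        return self.classifier(fused * self.attention(fused))  # attention-gated fusion

model = TwoBranchGrader()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 32, 32))
print(logits.shape)  # torch.Size([2, 3])
```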

PMID:39150930 | DOI:10.1371/journal.pone.0296229

Categories: Literature Watch
