Deep learning

Microbial community dynamics in different floc size aggregates during nitrogen removal process upgrading in a full-scale landfill leachate treatment plant

Sat, 2024-09-14 06:00

Bioresour Technol. 2024 Sep 12:131484. doi: 10.1016/j.biortech.2024.131484. Online ahead of print.

ABSTRACT

Upgrading processes to reduce the addition of biodegradable organic substances is crucial for treating landfill leachate with high pollutant concentrations and aids carbon emission reduction. Aggregate size in activated sludge processes affects pollutant removal and sludge/water separation. This study investigated microbial community succession and its driving mechanisms in aggregates of different floc sizes during a nitrogen removal upgrade from conventional to partial nitrification-denitrification in a full-scale landfill leachate treatment plant (LLTP), using 16S rRNA gene sequencing. The upgrade and floc sizes significantly influenced microbial diversity and composition. After upgrading, ammonia-oxidizing bacteria were enriched while nitrite-oxidizing bacteria were suppressed in small flocs, which are homogeneous and have high mass-transfer efficiency. Larger flocs enriched Defluviicoccus, Thauera, and Truepera, while smaller flocs enriched Nitrosomonas, suggesting their potential as biomarkers. Multi-network analyses revealed microbial interactions. A deep learning model based on convolutional neural networks predicted nitrogen removal efficiency. These findings guide the optimization of LLTP processes and the understanding of microbial community dynamics based on floc size.
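
The abstract does not specify the network architecture, so the following is only a minimal sketch of how a convolutional model could map 16S-derived community profiles to nitrogen removal efficiency; the input size, layer widths, and the use of per-sample relative-abundance vectors are all assumptions.

# Minimal sketch (assumption): a 1-D CNN mapping per-sample taxa
# relative-abundance profiles to nitrogen removal efficiency.
import torch
import torch.nn as nn

class NREPredictor(nn.Module):
    def __init__(self, n_taxa: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):           # x: (batch, n_taxa) relative abundances
        return self.head(self.features(x.unsqueeze(1))).squeeze(1)

model = NREPredictor(n_taxa=500)    # hypothetical number of ASVs/OTUs
x = torch.rand(8, 500)              # toy batch of abundance profiles
efficiency = model(x)               # predicted removal efficiency in [0, 1]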

PMID:39277056 | DOI:10.1016/j.biortech.2024.131484

Categories: Literature Watch

Gluconeogenesis unraveled: A proteomic Odyssey with machine learning

Sat, 2024-09-14 06:00

Methods. 2024 Sep 12:S1046-2023(24)00192-0. doi: 10.1016/j.ymeth.2024.09.002. Online ahead of print.

ABSTRACT

The metabolic pathway known as gluconeogenesis, which produces glucose from non-carbohydrate substrates, is essential for maintaining balanced blood sugar levels while fasting. Accurately anticipating gluconeogenesis rates is extremely important for recognizing metabolic disorders and creating efficient treatment strategies. The use of deep learning and machine learning methods to forecast complex biological processes has gained popularity in recent years. Understanding both the regulation of the pathway and the possible therapeutic applications of its proteins depends on accurately identifying their gluconeogenesis patterns. This article analyzes the use of machine learning and deep learning models to predict gluconeogenesis efficiency. The study also discusses the challenges that come with restricted data availability and model interpretability, as well as possible applications in personalized healthcare, metabolic disease treatment, and drug discovery. The predictor uses statistical moments computed on the structures of gluconeogenesis proteins and their enzymes, with Random Forest as the classifier to ensure that the model identifies the best outcomes. The method was validated using the independent test, self-consistency, 10-fold cross-validation, and jackknife tests, which achieved accuracies of 92.33 %, 91.87 %, 87.88 %, and 87.02 %, respectively. An accurate prediction of gluconeogenesis has significant implications for understanding metabolic disorders and developing targeted therapies. This study contributes to the growing field of predictive biology by combining deep learning and machine learning algorithms with metabolic pathway analysis.
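
As a rough illustration of the validation schemes named above (independent test, self-consistency, 10-fold cross-validation, jackknife) around a Random Forest classifier, the sketch below uses toy stand-ins for the statistical-moment features; the feature matrix, labels, and hyperparameters are assumptions, not the authors' setup.

# Sketch (assumptions: X holds per-protein statistical-moment features,
# y marks gluconeogenesis-associated vs. other proteins).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (LeaveOneOut, StratifiedKFold,
                                     cross_val_score, train_test_split)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))        # toy stand-in for moment features
y = rng.integers(0, 2, size=120)      # toy labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)

kfold_acc = cross_val_score(clf, X, y, cv=StratifiedKFold(10, shuffle=True, random_state=0)).mean()
jackknife_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()   # jackknife test
self_consistency = clf.fit(X, y).score(X, y)                          # fit and score on the same data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
independent_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)               # held-out independent test

print(kfold_acc, jackknife_acc, self_consistency, independent_acc)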

PMID:39276958 | DOI:10.1016/j.ymeth.2024.09.002

Categories: Literature Watch

ViT-MAENB7: An innovative breast cancer diagnosis model from 3D mammograms using advanced segmentation and classification process

Sat, 2024-09-14 06:00

Comput Methods Programs Biomed. 2024 Aug 23;257:108373. doi: 10.1016/j.cmpb.2024.108373. Online ahead of print.

ABSTRACT

Tumors are an important health concern in modern times, and breast cancer is rapidly becoming the leading cause of mortality among women globally. Early detection of breast cancer allows patients to obtain appropriate therapy, increasing their probability of survival. The adoption of 3-Dimensional (3D) mammography for identifying breast abnormalities has reduced the number of deaths dramatically. However, accurate classification and detection of lumps in 3D mammography remain difficult due to factors such as inadequate contrast and normal fluctuations in tissue density. Several Computer-Aided Diagnosis (CAD) solutions are under development to help radiologists classify breast abnormalities accurately. In this paper, a breast cancer diagnosis model is implemented to detect breast cancer and reduce death rates. The 3D mammogram images are gathered from the internet and passed to a preprocessing phase based on a median filter and an image scaling method, which enhance image quality and remove noise or artifacts that could interfere with the detection of abnormalities: the median filter smooths out irregularities, while image scaling adjusts the size and resolution of the images for analysis. The preprocessed image is then segmented, dividing it into meaningful regions based on intensity, color, texture, or other features so that structures such as organs or tumors can be identified and separated. Segmentation is performed with the Adaptive Thresholding with Region Growing Fusion Model (AT-RGFM), which combines the advantages of thresholding and region-growing techniques to accurately identify and delineate specific structures within the image. The Modified Garter Snake Optimization Algorithm (MGSOA) is used to optimize the segmentation parameters, sharpening the differentiation between parts of the image and yielding more precise delineation for analysis and diagnosis. The segmented image is then fed into the detection phase, where tumor detection is performed by the Vision Transformer-based Multiscale Adaptive EfficientNetB7 (ViT-MAENB7) model. By incorporating a multiscale adaptive approach, ViT-MAENB7 analyzes the image at various levels of detail, improving the overall accuracy of tumor detection and allowing healthcare professionals to make more informed decisions regarding patient treatment and care. The MGSOA algorithm is also used to optimize the parameters of this model. The proposed approach was compared with conventional cancer diagnosis models and showed high accuracy: the developed MGSOA-ViT-MAENB7 achieved 96.6 %, whereas RNN, LSTM, EffNet, and ViT-MAENet achieved 90.31 %, 92.79 %, 94.46 %, and 94.75 %, respectively. The model's ability to analyze images at multiple scales, combined with MGSOA-based optimization, results in a highly accurate and efficient system for detecting tumors in medical images, supporting more precise diagnosis and patient-specific treatment planning, and setting a new standard for precision and effectiveness in medical imaging.
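
A small sketch of the described preprocessing stage (median filtering plus image scaling) is given below; the kernel size, target resolution, and use of SciPy are assumptions, since the paper does not state these details.

# Preprocessing sketch (assumed kernel size and target shape): median
# filtering to suppress noise, then rescaling each mammogram slice.
import numpy as np
from scipy.ndimage import median_filter, zoom

def preprocess_slice(img: np.ndarray, target=(512, 512), kernel=3):
    """Denoise one 2-D mammogram slice and rescale it to a fixed size."""
    smoothed = median_filter(img, size=kernel)                  # remove impulsive noise
    factors = (target[0] / img.shape[0], target[1] / img.shape[1])
    rescaled = zoom(smoothed, factors, order=1)                 # bilinear resampling
    rescaled = (rescaled - rescaled.min()) / (np.ptp(rescaled) + 1e-8)
    return rescaled.astype(np.float32)

slice_ = np.random.rand(700, 600)        # toy stand-in for one 3D-mammogram slice
print(preprocess_slice(slice_).shape)    # (512, 512), values scaled to [0, 1]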

PMID:39276667 | DOI:10.1016/j.cmpb.2024.108373

Categories: Literature Watch

CRNN-Refined Spatiotemporal Transformer for Dynamic MRI reconstruction

Sat, 2024-09-14 06:00

Comput Biol Med. 2024 Sep 13;182:109133. doi: 10.1016/j.compbiomed.2024.109133. Online ahead of print.

ABSTRACT

Magnetic Resonance Imaging (MRI) plays a pivotal role in modern clinical practice, providing detailed anatomical visualization with exceptional spatial resolution and soft tissue contrast. Dynamic MRI, which aims to capture both spatial and temporal characteristics, faces challenges related to prolonged acquisition times and susceptibility to motion artifacts. Balancing spatial and temporal resolutions becomes crucial in real-world clinical scenarios. In dynamic MRI reconstruction, Convolutional Recurrent Neural Networks (CRNNs) struggle with long-range dependencies and require extensive iterations, impacting efficiency. Transformers, known for their effectiveness in high-dimensional imaging, are underexplored in dynamic MRI reconstruction. Additionally, prevailing algorithms fall short of achieving superior results in demanding generative reconstructions at high acceleration rates. This research proposes a novel approach for dynamic MRI reconstruction, named the CRNN-Refined Spatiotemporal Transformer Network (CST-Net). The spatiotemporal Transformer initiates reconstruction, modeling temporal and spatial correlations, followed by refinement using the CRNN. This integration mitigates inaccuracies caused by damaged frames and reduces CRNN iterations, enhancing computational efficiency without compromising reconstruction quality. Our study compares the performance of the proposed CST-Net at 6× and 12× undersampling rates, showcasing its superiority over existing algorithms. Particularly, in challenging 25× generative reconstructions, CST-Net outperforms current methods. The comparison includes experiments under both radial and Cartesian undersampling patterns. In conclusion, CST-Net successfully addresses the limitations inherent in existing generative reconstruction algorithms, paving the way for further exploration and optimization of Transformer-based approaches in dynamic MRI reconstruction. Code and datasets are available at: https://github.com/XWangBin/CST-Net.
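
The following is a highly simplified sketch of the two-stage idea (a spatiotemporal Transformer producing an initial estimate, refined by a convolutional-recurrent pass); it is not the authors' CST-Net, and all layer sizes and the toy input are assumptions.

# Simplified two-stage sketch: temporal Transformer initialization followed
# by a CRNN-style refinement carried frame by frame.
import torch
import torch.nn as nn

class TemporalTransformerInit(nn.Module):
    """Models correlations along the time axis for every pixel."""
    def __init__(self, d_model: int = 32):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, 1)

    def forward(self, x):                      # x: (B, T, H, W)
        b, t, h, w = x.shape
        tokens = x.permute(0, 2, 3, 1).reshape(b * h * w, t, 1)
        y = self.out(self.encoder(self.embed(tokens)))
        return y.reshape(b, h, w, t).permute(0, 3, 1, 2)

class ConvRecurrentRefiner(nn.Module):
    """CRNN-style refinement: a hidden state carried across frames."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.update = nn.Conv2d(1 + ch, ch, 3, padding=1)
        self.to_img = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):                      # x: (B, T, H, W)
        b, t, h, w = x.shape
        hidden = x.new_zeros(b, self.update.out_channels, h, w)
        frames = []
        for i in range(t):
            hidden = torch.tanh(self.update(torch.cat([x[:, i:i+1], hidden], dim=1)))
            frames.append(x[:, i:i+1] + self.to_img(hidden))   # residual correction
        return torch.cat(frames, dim=1)

x = torch.rand(1, 8, 32, 32)                   # toy undersampled dynamic series
recon = ConvRecurrentRefiner()(TemporalTransformerInit()(x))
print(recon.shape)                             # torch.Size([1, 8, 32, 32])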

PMID:39276614 | DOI:10.1016/j.compbiomed.2024.109133

Categories: Literature Watch

Dissecting AI-based mutation prediction in lung adenocarcinoma: A comprehensive real-world study

Sat, 2024-09-14 06:00

Eur J Cancer. 2024 Aug 23;211:114292. doi: 10.1016/j.ejca.2024.114292. Online ahead of print.

ABSTRACT

INTRODUCTION: Molecular profiling of lung cancer is essential to identify genetic alterations that predict response to targeted therapy. While deep learning shows promise for predicting oncogenic mutations from whole tissue images, existing studies often face challenges such as limited sample sizes, a focus on earlier stage patients, and insufficient analysis of robustness and generalizability.

METHODS: This retrospective study evaluates factors influencing mutation prediction accuracy using the large Heidelberg Lung Adenocarcinoma Cohort (HLCC), a cohort of 2356 late-stage FFPE samples. Validation is performed in the publicly available TCGA-LUAD cohort.

RESULTS: Models trained on the larger HLCC cohort generalized well to the TCGA dataset for mutations in EGFR (AUC 0.76), STK11 (AUC 0.71) and TP53 (AUC 0.75), in line with the hypothesis that larger cohort sizes improve model robustness. Variation in performance due to pre-processing and modeling choices, such as mutation variant calling, affected EGFR prediction accuracy by up to 7 %.

DISCUSSION: Model explanations suggest that acinar and papillary growth patterns are critical for the detection of EGFR mutations, whereas solid growth patterns and large nuclei are indicative of TP53 mutations. These findings highlight the importance of specific morphological features in mutation detection and the potential of deep learning models to improve mutation prediction accuracy.

CONCLUSION: Although deep learning models trained on larger cohorts show improved robustness and generalizability in predicting oncogenic mutations, they cannot replace comprehensive molecular profiling. However, they may support patient pre-selection for clinical trials and deepen the insight in genotype-phenotype relationships.

PMID:39276594 | DOI:10.1016/j.ejca.2024.114292

Categories: Literature Watch

A spectral bias-error stepwise correction method of plasma image-spectrum fusion based on deep learning for improving the performance of LIBS

Sat, 2024-09-14 06:00

Talanta. 2024 Sep 13;281:126872. doi: 10.1016/j.talanta.2024.126872. Online ahead of print.

ABSTRACT

Poor spectral stability seriously hinders the wide application of laser-induced breakdown spectroscopy (LIBS), so improving its stability is a central focus and a key difficulty of current research. In this study, to achieve high-precision quantitative analysis under complex detection conditions, a spectral bias-error stepwise correction method of plasma image-spectrum fusion based on deep learning (SBESC-PISF) was proposed, which fuses multi-dimensional plasma information and integrates physical and algorithmic models. In this method, based on the statistical properties of LIBS spectra, the actually obtained spectra were decomposed into three parts: the ideal spectral intensity, related only to the element concentration, and the spectral bias and spectral error caused by fluctuations of the complex high-dimensional plasma parameters. Deep learning methods were then used to fully excavate all the effective features in the plasma images and spectra to invert the complex high-dimensional plasma parameters according to the physical models. Finally, estimation models of the spectral bias and spectral error were established from these features to realize high-precision correction of the spectral intensity. To verify the feasibility of SBESC-PISF, spectra of aluminum alloy samples obtained under three complex detection conditions were analyzed. Under laser-energy fluctuation, after correction by SBESC-PISF, the R² values of the three calibration curves all increased to 0.999, and the RMSE and STD of the validation set (RMSEV, STDV) were reduced by 55.246 % and 50.167 %, respectively. Under defocusing-amount fluctuation, the R² values again all increased to 0.999, and RMSEV and STDV decreased by 58.201 % and 51.006 %, respectively. When the laser energy and defocusing amount fluctuated simultaneously, R² increased to 0.999, 0.996, and 0.988, and RMSEV and STDV were reduced by 58.776 % and 54.397 %, respectively. These results demonstrate that the spectral fluctuation correction of SBESC-PISF under complex detection conditions is effective and widely applicable.
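
Conceptually, the method treats each observed spectrum as an ideal, concentration-dependent intensity plus a bias term and an error term driven by the plasma state. The sketch below illustrates that stepwise correction with generic regressors on toy data; the feature definitions, labels, and models are assumptions, not the published implementation.

# Conceptual sketch of the stepwise correction on toy data:
# observed spectrum ~ ideal(concentration) + bias(plasma state) + error(plasma state).
# Two regressors trained on fused image/spectrum features predict the bias and
# the residual error, which are then subtracted from the observed intensity.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_shots, n_feat, n_lines = 300, 40, 5
plasma_features = rng.normal(size=(n_shots, n_feat))   # fused image + spectrum descriptors
observed = rng.normal(size=(n_shots, n_lines))         # observed line intensities
bias_target = rng.normal(size=(n_shots, n_lines))      # bias labels from the physical model
error_target = rng.normal(size=(n_shots, n_lines))     # residual-error labels

bias_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(plasma_features, bias_target)
error_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(plasma_features, error_target)

corrected = observed - bias_model.predict(plasma_features) - error_model.predict(plasma_features)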

PMID:39276577 | DOI:10.1016/j.talanta.2024.126872

Categories: Literature Watch

Transferable and data efficient metamodeling of storm water system nodal depths using auto-regressive graph neural networks

Sat, 2024-09-14 06:00

Water Res. 2024 Sep 11;266:122396. doi: 10.1016/j.watres.2024.122396. Online ahead of print.

ABSTRACT

Storm water systems (SWSs) are essential infrastructure providing multiple services, including environmental protection and flood prevention. Typically, utility companies rely on computer simulators to properly design, operate, and manage SWSs. However, multiple applications in SWSs are highly time-consuming. Researchers have resorted to cheaper-to-run models, i.e. metamodels, as alternatives to computationally expensive models. With the recent surge in artificial intelligence applications, machine learning has become a key approach for metamodelling urban water networks. Specifically, deep learning methods, such as feed-forward neural networks, have gained importance in this context. However, these methods require generating a sufficiently large database of examples and training their internal parameters; both processes defeat the purpose of using a metamodel, i.e., saving time. To overcome this issue, this research focuses on the application of inductive biases and transfer learning for creating SWS metamodels that require less data and retain high performance when used elsewhere. In particular, this study proposes an auto-regressive graph neural network metamodel of the Storm Water Management Model (SWMM) from the Environmental Protection Agency (EPA) for estimating hydraulic heads. The results indicate that the proposed metamodel requires fewer examples to reach high accuracy and speed-up than fully connected neural networks. Furthermore, the metamodel shows transferability, as it can predict hydraulic heads with high accuracy on unseen parts of the network. This work presents a novel approach that benefits both urban drainage practitioners and water network modeling researchers. The proposed metamodel can help practitioners in the planning, operation, and maintenance of their systems by offering an efficient metamodel of SWMM for computationally intensive tasks like optimization and Monte Carlo analyses. Researchers can leverage the current metamodel's structure to develop new surrogate model architectures tailored to their specific needs or start paving the way for more general foundation metamodels of urban drainage systems.
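
The sketch below illustrates the auto-regressive graph-neural-network idea on a toy pipe network: node states (hydraulic heads) are updated step by step from neighbour information and a forcing input. The adjacency normalization, layer sizes, and forcing are assumptions, not the published metamodel.

# Toy auto-regressive graph step (assumptions: node features = current head
# plus runoff forcing; a normalized adjacency matrix encodes the network).
import torch
import torch.nn as nn

class GraphStep(nn.Module):
    """One auto-regressive step: next heads from current heads and forcing."""
    def __init__(self, in_dim=2, hidden=32):
        super().__init__()
        self.mix = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, adj_norm, head, forcing):
        x = torch.cat([head, forcing], dim=-1)    # (n_nodes, 2)
        neigh = adj_norm @ x                      # aggregate neighbour information
        return head + self.mix(neigh)             # residual update of the heads

n = 4
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float32)
adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)

step = GraphStep()
head = torch.zeros(n, 1)
for t in range(10):                               # roll out 10 time steps
    forcing = torch.rand(n, 1)                    # toy rainfall/runoff input
    head = step(adj_norm, head, forcing)
print(head.squeeze())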

PMID:39276474 | DOI:10.1016/j.watres.2024.122396

Categories: Literature Watch

scHyper: reconstructing cell-cell communication through hypergraph neural networks

Sat, 2024-09-14 06:00

Brief Bioinform. 2024 Jul 25;25(5):bbae436. doi: 10.1093/bib/bbae436.

ABSTRACT

Cell-cell communication is crucial for the regulation of cellular life and the establishment of cellular relationships. Most approaches for inferring intercellular communications from single-cell RNA sequencing (scRNA-seq) data lack a comprehensive global network view of multilayered communications. In this context, we propose scHyper, a new method that can infer intercellular communications from a global network perspective and identify the potential impact of all cells, ligand expression, and receptor expression on the communication score. scHyper introduces a new way to represent tripartite relationships by extracting a heterogeneous hypergraph that includes the source (ligand expression), the target (receptor expression), and the relevant ligand-receptor (L-R) pairs. scHyper is based on hypergraph representation learning, which measures the degree of match between the intrinsic attributes (static embeddings) of nodes and their observed behaviors (dynamic embeddings) in context (hyperedges), quantifies the probability of forming hyperedges, and thus reconstructs the cell-cell communication score. Additionally, to effectively mine the key mechanisms of signal transmission, we collect a rich dataset of multisubunit complex L-R pairs and propose a nonparametric test to determine significant intercellular communications. Comparison with other tools indicates that scHyper exhibits superior performance and functionality. Experimental results on the human tumor microenvironment and immune cells demonstrate that scHyper offers reliable and unique capabilities for analyzing intercellular communication networks. We therefore introduce an effective strategy for building high-order interaction patterns, surpassing the limitations of most methods that can only handle low-order interactions and thus more accurately interpreting the complexity of intercellular communications.

PMID:39276328 | DOI:10.1093/bib/bbae436

Categories: Literature Watch

Deep learning approaches for non-coding genetic variant effect prediction: current progress and future prospects

Sat, 2024-09-14 06:00

Brief Bioinform. 2024 Jul 25;25(5):bbae446. doi: 10.1093/bib/bbae446.

ABSTRACT

Recent advancements in high-throughput sequencing technologies have significantly enhanced our ability to unravel the intricacies of gene regulatory processes. A critical challenge in this endeavor is the identification of variant effects, a key factor in comprehending the mechanisms underlying gene regulation. Non-coding variants, constituting over 90% of all variants, have garnered increasing attention in recent years. The exploration of gene variant impacts and regulatory mechanisms has spurred the development of various deep learning approaches, providing new insights into the global regulatory landscape through the analysis of extensive genetic data. Here, we provide a comprehensive overview of the development of non-coding variant models based on bulk and single-cell sequencing data, their model-based interpretation, and downstream tasks. This review delineates the popular sequencing technologies for epigenetic profiling and deep learning approaches for discerning the effects of non-coding variants. Additionally, we summarize the limitations of current approaches in variant effect prediction research and outline opportunities for improvement. We anticipate that our study will offer a practical and useful guide for the bioinformatic community to further advance the unraveling of genetic variant effects.

PMID:39276327 | DOI:10.1093/bib/bbae446

Categories: Literature Watch

Single-cell profiling uncovers proliferative cells as key determinants of survival outcomes in lower-grade glioma patients

Sat, 2024-09-14 06:00

Discov Oncol. 2024 Sep 14;15(1):445. doi: 10.1007/s12672-024-01302-8.

ABSTRACT

Lower-grade gliomas (LGGs), despite their generally indolent clinical course, are characterized by invasive growth patterns and genetic heterogeneity, which can lead to malignant transformation, underscoring the need for improved prognostic markers and therapeutic strategies. This study utilized single-cell RNA sequencing (scRNA-seq) and bulk RNA-seq to identify a novel cell type, referred to as "Prol," characterized by increased proliferation and linked to a poor prognosis in patients with LGG, particularly in the context of immunotherapy interventions. A signature, termed the Prol signature, was constructed based on marker genes specific to the Prol cell type, utilizing an artificial intelligence (AI) network that integrates traditional regression, machine learning, and deep learning algorithms. This signature demonstrated enhanced predictive accuracy for LGG prognosis compared to existing models and showed pan-cancer prognostic potential. The mRNA expression of the key gene PTTG1 from the Prol signature was further validated through quantitative reverse transcription polymerase chain reaction (qRT-PCR). Our findings not only provide novel insights into the molecular and cellular mechanisms of LGG but also offer a promising avenue for the development of targeted biomarkers and therapeutic interventions.

PMID:39276278 | DOI:10.1007/s12672-024-01302-8

Categories: Literature Watch

NeoaPred: A deep-learning framework for predicting immunogenic neoantigen based on surface and structural features of peptide-HLA complexes

Sat, 2024-09-14 06:00

Bioinformatics. 2024 Sep 14:btae547. doi: 10.1093/bioinformatics/btae547. Online ahead of print.

ABSTRACT

MOTIVATION: Neoantigens, derived from somatic mutations in cancer cells, can elicit anti-tumor immune responses when presented to autologous T cells by human leukocyte antigen (HLA). Identifying immunogenic neoantigens is crucial for cancer immunotherapy development. However, the accuracy of current bioinformatic methods remains unsatisfactory. Surface and structural features of peptide-HLA class I (pHLA-I) complexes offer valuable insight into the immunogenicity of neoantigens.

RESULTS: We present NeoaPred, a deep-learning framework for neoantigen prediction. NeoaPred accurately constructs pHLA-I complex structures, with 82.37% of the predicted structures showing an RMSD of < 1 Å. Using these structures, NeoaPred integrates differences in surface, structural, and atom group features between the mutant peptide and its wild-type counterpart to predict a foreignness score. This foreignness score is an effective factor for neoantigen prediction, achieving an AUROC of 0.81 and an AUPRC of 0.54 in the test set, outperforming existing methods.

AVAILABILITY: The source code is released under an Apache v2.0 license and is available at the GitHub repository https://github.com/Dulab2020/NeoaPred.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
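
For reference, the test-set metrics reported above (AUROC and AUPRC of the foreignness score) can be computed as in the sketch below; the score and label arrays are toy stand-ins, not the published data.

# Sketch: computing the reported metrics from predicted foreignness scores.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=500)             # 1 = immunogenic neoantigen (toy)
scores = labels * 0.4 + rng.random(500) * 0.8     # toy foreignness scores

print("AUROC:", roc_auc_score(labels, scores))
print("AUPRC:", average_precision_score(labels, scores))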

PMID:39276157 | DOI:10.1093/bioinformatics/btae547

Categories: Literature Watch

Deep Learning-Based Blood Abnormalities Detection as a Tool for VEXAS Syndrome Screening

Sat, 2024-09-14 06:00

Int J Lab Hematol. 2024 Sep 14. doi: 10.1111/ijlh.14368. Online ahead of print.

ABSTRACT

INTRODUCTION: VEXAS is a syndrome described in 2020, caused by mutations of the UBA1 gene and displaying a large, pleomorphic array of clinical and hematological features. Nevertheless, these features are not discriminative enough to separate VEXAS from other inflammatory conditions at the screening step. This work therefore first focused on singling out dysplastic features indicative of the syndrome among peripheral blood (PB) polymorphonuclears (PMNs). A deep learning algorithm is then proposed for the automatic detection of these features.

METHODS: A multicentric dataset, comprising 9514 annotated PMN images was gathered, including UBA1 mutated VEXAS (n = 25), UBA1 wildtype myelodysplastic (n = 14), and UBA1 wildtype cytopenic patients (n = 25). Statistical analysis on a subset of patients was performed to screen for significant abnormalities. Detection of these features on PB was then automated with a convolutional neural network (CNN) for multilabel classification.

RESULTS: Significant differences were observed in the proportions of PMNs with pseudo-Pelger, nuclear spikes, vacuoles, and hypogranularity between patients with VEXAS and both cytopenic and myelodysplastic controls. Automatic detection of these abnormalities yielded AUCs in the range 0.85-0.97 and an F1-score of 0.70 on the test set. A VEXAS screening score was proposed, leveraging the model outputs and predicting the UBA1 mutational status with 0.82 sensitivity and 0.71 specificity on the test patients.

CONCLUSION: This study suggests that computer-assisted analysis of PB smears, focusing on suspected VEXAS cases, can provide valuable insights for determining which patients should undergo molecular testing. The presented deep learning approach can help hematologists direct their suspicions before initiating further analyses.
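
The detection step described in the Methods is a convolutional network trained for multilabel classification of PMN images. A minimal sketch of such a setup is shown below; the backbone, image size, and training details are assumptions, not the study's actual model.

# Sketch (assumed backbone and sizes): a CNN with a sigmoid multilabel head
# scoring each PMN image for the four abnormalities described above.
import torch
import torch.nn as nn
from torchvision import models

ABNORMALITIES = ["pseudo_pelger", "nuclear_spikes", "vacuoles", "hypogranularity"]

backbone = models.resnet18(weights=None)            # any small CNN would do here
backbone.fc = nn.Linear(backbone.fc.in_features, len(ABNORMALITIES))

criterion = nn.BCEWithLogitsLoss()                   # independent per-label losses
images = torch.rand(4, 3, 224, 224)                  # toy batch of PMN crops
targets = torch.randint(0, 2, (4, len(ABNORMALITIES))).float()

logits = backbone(images)
loss = criterion(logits, targets)
probs = torch.sigmoid(logits)                        # per-abnormality probabilities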

PMID:39275905 | DOI:10.1111/ijlh.14368

Categories: Literature Watch

Progress on deep learning in genomics

Sat, 2024-09-14 06:00

Yi Chuan. 2024 Sep;46(9):701-715. doi: 10.16288/j.yczz.24-151.

ABSTRACT

With the rapid growth of data driven by high-throughput sequencing technologies, genomics has entered an era characterized by big data, which presents significant challenges for traditional bioinformatics methods in handling complex data patterns. At this critical juncture of technological progress, deep learning-an advanced artificial intelligence technology-offers powerful capabilities for data analysis and pattern recognition, revitalizing genomic research. In this review, we focus on four major deep learning models: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Generative Adversarial Network (GAN). We outline their core principles and provide a comprehensive review of their applications in DNA, RNA, and protein research over the past five years. Additionally, we explore the use of deep learning in livestock genomics, highlighting its potential benefits and challenges in genetic trait analysis, disease prevention, and genetic enhancement. By delivering a thorough analysis, we aim to enhance precision and efficiency in genomic research through deep learning and offer a framework for developing and applying livestock genomic strategies, thereby advancing precision livestock farming and genetic breeding technologies.

PMID:39275870 | DOI:10.16288/j.yczz.24-151

Categories: Literature Watch

FL-DSFA: Securing RPL-Based IoT Networks against Selective Forwarding Attacks Using Federated Learning

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 8;24(17):5834. doi: 10.3390/s24175834.

ABSTRACT

The Internet of Things (IoT) is a significant technological advancement that allows for seamless device integration and data flow. The development of the IoT has led to the emergence of several solutions in various sectors. However, rapid popularization also has its challenges, and one of the most serious challenges is the security of the IoT. Security is a major concern, particularly routing attacks in the core network, which may cause severe damage due to information loss. The Routing Protocol for Low-Power and Lossy Networks (RPL), used by IoT devices, is vulnerable to selective forwarding attacks. In this paper, we present a federated learning-based detection technique for detecting selective forwarding attacks, termed FL-DSFA. A lightweight model involving the IoT Routing Attack Dataset (IRAD), which comprises Hello Flood (HF), Decreased Rank (DR), and Version Number (VN), is used in this technique to increase the detection efficiency. These attacks threaten the security of the IoT system because they mainly target essential elements of RPL, including control messages, routing topologies, repair procedures, and resources within sensor networks. Binary classification approaches have been used to assess the training efficiency of the proposed model. The training step includes the implementation of machine learning algorithms, including logistic regression (LR), K-nearest neighbors (KNN), support vector machine (SVM), and naive Bayes (NB). The comparative analysis illustrates that this study, with SVM and KNN classifiers, exhibits the highest accuracy during training and achieves the most efficient runtime performance. The proposed system demonstrates exceptional performance, achieving a prediction precision of 97.50%, an accuracy of 95%, a recall rate of 98.33%, and an F1 score of 97.01%. It outperforms the current leading research in this field, with its classification results, scalability, and enhanced privacy.
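
As a hedged illustration of the federated-learning aspect, the sketch below runs a FedAvg-style loop in which each client fits a local logistic model and the server averages the coefficients; the data, number of clients, and choice of SGD-based logistic regression are assumptions, since the study combines federated learning with several classical classifiers on the IRAD dataset.

# FedAvg-style sketch on toy data: clients fit local models from the current
# global weights, and the server averages coefficients between rounds.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
clients = [(rng.normal(size=(200, 10)), rng.integers(0, 2, 200)) for _ in range(5)]

global_coef, global_intercept = np.zeros((1, 10)), np.zeros(1)
for round_ in range(10):
    coefs, intercepts = [], []
    for X, y in clients:                                  # local training step
        local = SGDClassifier(loss="log_loss", max_iter=5, tol=None, random_state=0)
        local.fit(X, y, coef_init=global_coef, intercept_init=global_intercept)
        coefs.append(local.coef_)
        intercepts.append(local.intercept_)
    global_coef = np.mean(coefs, axis=0)                  # federated averaging
    global_intercept = np.mean(intercepts, axis=0)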

PMID:39275748 | DOI:10.3390/s24175834

Categories: Literature Watch

A Novel End-to-End Deep Learning Framework for Chip Packaging Defect Detection

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 8;24(17):5837. doi: 10.3390/s24175837.

ABSTRACT

As semiconductor chip manufacturing technology advances, chip structures are becoming more complex, leading to an increased likelihood of void defects in the solder layer during packaging. However, identifying void defects in packaged chips remains a significant challenge due to the complex chip background, varying defect sizes and shapes, and blurred boundaries between voids and their surroundings. To address these challenges, we present a deep-learning-based framework for void defect segmentation in chip packaging. The framework consists of two main components: a solder region extraction method and a void defect segmentation network. The solder region extraction method includes a lightweight segmentation network and a rotation correction algorithm that eliminates background noise and accurately captures the solder region of the chip. The void defect segmentation network is designed for efficient and accurate defect segmentation. To cope with the variability of void defect shapes and sizes, we propose a Mamba model-based encoder that uses a visual state space module for multi-scale information extraction. In addition, we propose an interactive dual-stream decoder that uses a feature correlation cross gate module to fuse the streams' features to improve their correlation and produce more accurate void defect segmentation maps. The effectiveness of the framework is evaluated through quantitative and qualitative experiments on our custom X-ray chip dataset. Furthermore, the proposed void defect segmentation framework for chip packaging has been applied to a real factory inspection line, achieving an accuracy of 93.3% in chip qualification.

PMID:39275746 | DOI:10.3390/s24175837

Categories: Literature Watch

Phasor-Based Myoelectric Synergy Features: A Fast Hand-Crafted Feature Extraction Scheme for Boosting Performance in Gait Phase Recognition

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 8;24(17):5828. doi: 10.3390/s24175828.

ABSTRACT

Gait phase recognition systems based on surface electromyographic signals (EMGs) are crucial for developing advanced myoelectric control schemes that enhance the interaction between humans and lower limb assistive devices. However, machine learning models used in this context, such as Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), typically experience performance degradation when modeling the gait cycle with more than just stance and swing phases. This study introduces a generalized phasor-based feature extraction approach (PHASOR) that captures spatial myoelectric features to improve the performance of LDA and SVM in gait phase recognition. A publicly available dataset of 40 subjects was used to evaluate PHASOR against state-of-the-art feature sets in a five-phase gait recognition problem. Additionally, fully data-driven deep learning architectures, such as Rocket and Mini-Rocket, were included for comparison. The separability index (SI) and mean semi-principal axis (MSA) analyses showed mean SI and MSA metrics of 7.7 and 0.5, respectively, indicating the proposed approach's ability to effectively decode gait phases through EMG activity. The SVM classifier demonstrated the highest accuracy of 82% using a five-fold leave-one-trial-out testing approach, outperforming Rocket and Mini-Rocket. This study confirms that in gait phase recognition based on EMG signals, novel and efficient muscle synergy information feature extraction schemes, such as PHASOR, can compete with deep learning approaches that require greater processing time for feature extraction and classification.
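
The PHASOR feature computation itself is not reproduced here, but the trial-wise evaluation protocol mentioned above can be sketched as below, with toy features standing in for the myoelectric descriptors and trial identifiers used as cross-validation groups.

# Evaluation-protocol sketch (toy features): gait-phase classification scored
# with leave-one-trial-out cross-validation, so frames from one trial never
# appear in both the training and the test split.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_frames, n_features = 600, 12
X = rng.normal(size=(n_frames, n_features))        # stand-in for myoelectric features
y = rng.integers(0, 5, size=n_frames)              # five gait phases
trial_id = np.repeat(np.arange(5), n_frames // 5)  # five recorded trials

acc = cross_val_score(SVC(kernel="rbf"), X, y, groups=trial_id, cv=LeaveOneGroupOut())
print("per-trial accuracies:", acc, "mean:", acc.mean())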

PMID:39275739 | DOI:10.3390/s24175828

Categories: Literature Watch

Use of the SNOWED Dataset for Sentinel-2 Remote Sensing of Water Bodies: The Case of the Po River

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 8;24(17):5827. doi: 10.3390/s24175827.

ABSTRACT

The paper demonstrates the effectiveness of the SNOWED dataset, specifically designed for identifying water bodies in Sentinel-2 images, in developing a remote sensing system based on deep neural networks. For this purpose, a system is implemented for monitoring the Po River, Italy's most important watercourse. By leveraging the SNOWED dataset, a simple U-Net neural model is trained to segment satellite images and distinguish, in general, water and land regions. After verifying its performance in segmenting the SNOWED validation set, the trained neural network is employed to measure the area of water regions along the Po River, a task that involves segmenting a large number of images that are quite different from those in SNOWED. It is clearly shown that SNOWED-based water area measurements describe the river status, in terms of flood or drought periods, in surprisingly good agreement with water level measurements provided by 23 in situ gauge stations (official measurements managed by the Interregional Agency for the Po). Consequently, the sensing system is used to take measurements at 100 "virtual" gauge stations along the Po River, over the 10-year period (2015-2024) covered by the Sentinel-2 satellites of the Copernicus Programme. In this way, an overall space-time monitoring of the Po River is obtained, with a spatial resolution unattainable, in a cost-effective way, by local physical sensors. Altogether, the obtained results demonstrate not only the usefulness of the SNOWED dataset for deep learning-based satellite sensing, but also the ability of such sensing systems to effectively complement traditional in situ sensing stations, providing precious tools for environmental monitoring, especially of locations difficult to reach, and permitting the reconstruction of historical data related to floods and droughts. Although physical monitoring stations are designed for rapid monitoring and prevention of floods or other disasters, the developed tool for remote sensing of water bodies could help decision makers define long-term policies to reduce specific risks in areas not covered by physical monitoring, or define medium- to long-term strategies such as dam construction or infrastructure design.
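
Once the U-Net produces a binary water mask, the water-area measurement reduces to counting water pixels and multiplying by the pixel footprint. The sketch below assumes a 10 m Sentinel-2 ground sampling distance; the mask is a toy stand-in for an actual segmentation output.

# Sketch (assumed 10 m pixel size): water area from a binary segmentation mask.
import numpy as np

def water_area_km2(mask: np.ndarray, pixel_size_m: float = 10.0) -> float:
    """mask: 2-D boolean array, True where the model predicts water."""
    return mask.sum() * (pixel_size_m ** 2) / 1e6

mask = np.zeros((1024, 1024), dtype=bool)
mask[400:600, :] = True                      # toy river stripe
print(water_area_km2(mask), "km^2")          # 204800 pixels * 100 m^2 = 20.48 km^2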

PMID:39275738 | DOI:10.3390/s24175827

Categories: Literature Watch

A Combined CNN Architecture for Speech Emotion Recognition

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 6;24(17):5797. doi: 10.3390/s24175797.

ABSTRACT

Emotion recognition through speech is a technique employed in various scenarios of Human-Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, with the quantity and diversity of data being more notable when deep learning techniques are used. The lack of a standard in feature selection leads to continuous development and experimentation. Choosing and designing the appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach, and developing preprocessing and feature selection stages while constructing a dataset called EmoDSc as a result of combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images, the weighted accuracy reached 90%. These results, although surpassing previous research, highlight the strengths and limitations when operating in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.
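
A compact sketch of the described fusion architecture (a 1-D convolutional branch for spectral feature vectors, a 2-D branch for spectrogram images, and an MLP classifier over the concatenated embeddings) is given below; all layer sizes and the seven-class output are assumptions.

# Fusion-architecture sketch (assumed sizes): CNN1D + CNN2D branches + MLP head.
import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, n_spectral: int, n_classes: int):
        super().__init__()
        self.branch1d = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.branch2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten())
        self.mlp = nn.Sequential(nn.Linear(16 * 8 + 16 * 8 * 8, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))

    def forward(self, spectral, spectrogram):
        a = self.branch1d(spectral.unsqueeze(1))       # spectral feature embedding
        b = self.branch2d(spectrogram.unsqueeze(1))    # spectrogram image embedding
        return self.mlp(torch.cat([a, b], dim=1))      # fused classification

model = FusionSER(n_spectral=40, n_classes=7)
logits = model(torch.rand(2, 40), torch.rand(2, 128, 128))
print(logits.shape)                                    # torch.Size([2, 7])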

PMID:39275707 | DOI:10.3390/s24175797

Categories: Literature Watch

Spectrum Sensing Method Based on STFT-RADN in Cognitive Radio Networks

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 6;24(17):5792. doi: 10.3390/s24175792.

ABSTRACT

To address the common issues in traditional convolutional neural network (CNN)-based spectrum sensing algorithms in cognitive radio networks (CRNs), including inadequate signal feature representation, inefficient utilization of feature map information, and limited feature extraction capabilities due to shallow network structures, this paper proposes a spectrum sensing algorithm based on a short-time Fourier transform (STFT) and residual attention dense network (RADN). Specifically, the RADN model improves the basic residual block and introduces the convolutional block attention module (CBAM), combining residual connections and dense connections to form a powerful deep feature extraction structure known as residual in dense (RID). This significantly enhances the network's feature extraction capabilities. By performing STFT on the received signals and normalizing them, the signals are converted into time-frequency spectrograms as network inputs, better capturing signal features. The RADN is trained to extract abstract features from the time-frequency images, and the trained RADN serves as the final classifier for spectrum sensing. Experimental results demonstrate that the STFT-RADN spectrum sensing method significantly improves performance under low signal-to-noise ratio (SNR) conditions compared to traditional deep-learning-based methods. This method not only adapts to various modulation schemes but also exhibits high detection probability and strong robustness.
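
The input-preparation step (STFT of the received signal, followed by normalization into a time-frequency image) can be sketched as below; the window length, overlap, sampling rate, and toy signal are assumptions.

# Input-preparation sketch: normalized STFT magnitude spectrogram for the classifier.
import numpy as np
from scipy.signal import stft

fs = 1_000_000                                  # toy sampling rate, 1 MHz
t = np.arange(4096) / fs
signal = np.cos(2 * np.pi * 100_000 * t) + 0.5 * np.random.randn(t.size)  # toy signal + noise

f, tau, Z = stft(signal, fs=fs, nperseg=128, noverlap=64)
spec = np.abs(Z)
spec = (spec - spec.mean()) / (spec.std() + 1e-8)   # normalized time-frequency image
print(spec.shape)                                   # (freq bins, time frames)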

PMID:39275703 | DOI:10.3390/s24175792

Categories: Literature Watch

Evaluation of Fracturing Effect of Tight Reservoirs Based on Deep Learning

Sat, 2024-09-14 06:00

Sensors (Basel). 2024 Sep 5;24(17):5775. doi: 10.3390/s24175775.

ABSTRACT

The utilization of hydraulic fracturing technology is indispensable for unlocking the potential of tight oil and gas reservoirs. Understanding and accurately evaluating the impact of fracturing is pivotal in maximizing oil and gas production and optimizing wellbore performance. Currently, evaluation methods based on acoustic logging, such as orthogonal dipole anisotropy and radial tomography imaging, are widely used. However, when the fractures generated by hydraulic fracturing form a network-like pattern, orthogonal dipole anisotropy fails to accurately assess the fracturing effects. Radial tomography imaging can address this issue, but it is challenged by high manpower and time costs. This study aims to develop a more efficient and accurate method for evaluating fracturing effects in tight reservoirs using deep learning techniques. Specifically, the method utilizes dipole array acoustic logging curves recorded before and after fracturing. Manual labeling was conducted by integrating logging data interpretation results. An improved WGAN-GP was employed to generate adversarial samples for data augmentation, and fracturing effect evaluation was implemented using SE-ResNet, ResNet, and DenseNet. The experimental results demonstrated that ResNet with residual connections is more suitable for the dataset in this study, achieving higher accuracy in fracturing effect evaluation. The inclusion of the SE module further enhanced model accuracy by adaptively adjusting the weights of feature map channels, with the highest accuracy reaching 99.75%.
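
The SE (squeeze-and-excitation) module mentioned above re-weights feature-map channels adaptively. A minimal sketch over 1-D acoustic-logging feature maps is given below; the channel count and reduction ratio are assumptions.

# SE-block sketch: global pooling "squeezes" each channel to a scalar, and a
# small bottleneck MLP produces per-channel weights that re-scale the features.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                              # x: (B, C, L) acoustic-log features
        w = self.excite(self.pool(x).squeeze(-1))      # per-channel weights in (0, 1)
        return x * w.unsqueeze(-1)                     # adaptively re-scaled channels

x = torch.rand(2, 64, 100)
print(SEBlock(64)(x).shape)                            # torch.Size([2, 64, 100])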

PMID:39275685 | DOI:10.3390/s24175775

Categories: Literature Watch
