Deep learning

Artificial intelligence-assisted delineation for postoperative radiotherapy in patients with lung cancer: a prospective, multi-center, cohort study

Fri, 2024-11-22 06:00

Front Oncol. 2024 Oct 22;14:1388297. doi: 10.3389/fonc.2024.1388297. eCollection 2024.

ABSTRACT

BACKGROUND: Postoperative radiotherapy (PORT) is an important treatment for lung cancer patients with poor prognostic features, but accurate delineation of the clinical target volume (CTV) and organs at risk (OARs) is challenging and time-consuming. Recently, deep learning-based artificial intelligence (AI) algorithms have shown promise in automating this process.

OBJECTIVE: To evaluate the clinical utility of a deep learning-based auto-segmentation model for AI-assisted delineation of the CTV and OARs in patients undergoing PORT, and to compare its accuracy and efficiency with manual delineation by radiation oncology residents from medical institutions of different levels.

METHODS: We previously developed an AI auto-segmentation model in 664 patients and validated its contouring performance in 149 patients. In this multi-center validation trial, we prospectively enrolled 55 patients and compared the accuracy and efficiency of 3 contouring methods: (i) unmodified AI auto-segmentation, (ii) fully manual delineation by junior radiation oncology residents from different medical centers, and (iii) manual modification of the AI segmentation output (AI-assisted delineation). The ground truth of the CTV and OARs was delineated by 3 senior radiation oncologists. Contouring accuracy was evaluated by the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean distance of agreement (MDA). Inter-observer consistency was assessed by volume and coefficient of variation (CV).

RESULTS: AI-assisted delineation achieved significantly higher accuracy than both unmodified AI auto-contouring and fully manual delineation by radiation oncologists, with median HD, MDA, and DSC values for the CTV of 20.03 vs. 21.55 mm, 2.57 vs. 3.06 mm, and 0.745 vs. 0.703, respectively (all P<0.05). Results for the OAR contours were similar, and the CV for OARs was reduced by approximately 50%. In addition to better contouring accuracy, AI-assisted delineation significantly reduced contouring time and improved efficiency.
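The Dice similarity coefficient used as the primary overlap metric above has a simple definition, 2|A∩B|/(|A|+|B|); a minimal sketch on toy binary masks (the masks and values are illustrative, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy 4x4 "contours": 3-voxel and 2-voxel masks overlapping in 2 voxels
m1 = np.zeros((4, 4), bool); m1[1, 1:4] = True
m2 = np.zeros((4, 4), bool); m2[1, 2:4] = True
print(round(dice(m1, m2), 3))  # 2*2/(3+2) = 0.8
```

A DSC of 1.0 means perfect overlap with the ground-truth contour; the 0.745 vs. 0.703 figures above are medians of this quantity over patients.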

CONCLUSION: AI-assisted CTV and OAR delineation for PORT significantly improves accuracy and efficiency in a real-world setting, compared with pure AI auto-segmentation or fully manual delineation by junior oncologists. The AI-assisted approach has promising clinical potential to enhance the quality of radiotherapy planning and further improve treatment outcomes in patients with lung cancer.

PMID:39575415 | PMC:PMC11579590 | DOI:10.3389/fonc.2024.1388297

Categories: Literature Watch

Implementation and evaluation of the three-action teaching model with learning plan guidance in a preventive medicine course

Fri, 2024-11-22 06:00

Front Psychol. 2024 Nov 7;15:1508432. doi: 10.3389/fpsyg.2024.1508432. eCollection 2024.

ABSTRACT

BACKGROUND: Toward the close of the 20th century, Chinese scholars introduced a novel pedagogical approach to education in China, distinguished by its divergence from conventional teaching methods. This instructional strategy plays a pivotal role in imparting essential medical knowledge to students within a meticulously structured and comprehensive framework.

OBJECTIVE: This study assesses the effectiveness of a novel teaching approach that integrates the three-action teaching model with learning plan guidance within a preventive medicine course. The investigation provides empirical evidence on the impact of this instructional method, shedding light on its potential to enhance students' autonomous learning in the field of preventive medicine.

METHODS: The control group consisted of 48 students from Class 2 of clinical medicine in grade 2021, who were taught using the traditional classroom teaching mode. Meanwhile, Class 1 served as the experimental group comprising 47 individuals, who received instruction through the three-action teaching model with learning plan guidance. Evaluation was conducted using course tests and questionnaires, and data analysis was performed utilizing t-tests, analysis of variance, and rank sum tests in SPSS software.

RESULTS: The average total score of the experimental group (79.44 ± 10.13) was significantly higher than that of the control group (70.00 ± 13.57) (t = 3.943, p < 0.001). Moreover, more students in the experimental group had total scores in the 80-89 and 90-100 ranges compared to the control group (Z = 5.324, p = 0.002). The Subjective Evaluation System (SES) indicated that the experimental group (69.11 ± 8.39) outperformed the control group (61.23 ± 6.59) in terms of total scores (t = 5.095, p < 0.001), demonstrating superior performance in learning methods, emotions, engagement, and performance metrics (p < 0.05). Specifically, analysis using the Biggs study process questionnaire revealed that the experimental group exhibited higher levels of deep learning (t = 6.100, p < 0.001) and lower levels of superficial learning (t = -3.783, p < 0.001) when compared to the control group.
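The two-sample t statistics reported above compare group means against pooled within-group variability; a minimal pooled-variance sketch with made-up scores (not the study's data):

```python
import math

# made-up course scores for an experimental and a control group
exp = [80.0, 85.0, 78.0, 90.0, 82.0]
ctl = [70.0, 75.0, 68.0, 72.0, 74.0]

def pooled_t(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t(exp, ctl)
print(round(t, 2))  # 4.56 for these illustrative scores
```

In practice a statistics package (the study used SPSS) converts this statistic and the degrees of freedom na + nb - 2 into the reported p-value.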

CONCLUSION: The implementation of a novel teaching approach that integrates the three-action teaching model with learning plan guidance significantly enhances students' academic achievements and fosters their intrinsic motivation for learning. The success of this pedagogical method can be attributed to the enhanced classroom efficiency exhibited by teachers as well as the heightened enthusiasm for learning displayed by students.

PMID:39575329 | PMC:PMC11578742 | DOI:10.3389/fpsyg.2024.1508432

Categories: Literature Watch

Contributing to the prediction of prognosis for treated hepatocellular carcinoma: Imaging aspects that sculpt the future

Fri, 2024-11-22 06:00

World J Gastrointest Surg. 2024 Oct 27;16(10):3377-3380. doi: 10.4240/wjgs.v16.i10.3377.

ABSTRACT

A novel nomogram model to predict the prognosis of hepatocellular carcinoma (HCC) treated with radiofrequency ablation and transarterial chemoembolization was recently published in the World Journal of Gastrointestinal Surgery. This model includes clinical and laboratory factors, but emerging imaging aspects, particularly from magnetic resonance imaging (MRI) and radiomics, could enhance its predictive accuracy. Multiparametric MRI and deep learning radiomics models significantly improve prognostic predictions for the treatment of HCC. Incorporating advanced imaging features, such as peritumoral hypointensity and radiomics scores, alongside clinical factors, can refine prognostic models, aiding in personalized treatment and better predicting outcomes. This letter underscores the importance of integrating novel imaging techniques into prognostic tools to better manage and treat HCC.

PMID:39575286 | PMC:PMC11577411 | DOI:10.4240/wjgs.v16.i10.3377

Categories: Literature Watch

VINNA for neonates: Orientation independence through latent augmentations

Fri, 2024-11-22 06:00

Imaging Neurosci (Camb). 2024 May 30;2:1-26. doi: 10.1162/imag_a_00180. eCollection 2024 May 1.

ABSTRACT

A robust, fast, and accurate segmentation of neonatal brain images is highly desired to better understand and detect changes during development and disease, specifically considering the rise in imaging studies for this cohort. Yet, the limited availability of ground truth datasets, lack of standardized acquisition protocols, and wide variations of head positioning in the scanner pose challenges for method development. A few automated image analysis pipelines exist for newborn brain Magnetic Resonance Image (MRI) segmentation, but they often rely on time-consuming non-linear spatial registration procedures and require resampling to a common resolution, subject to loss of information due to interpolation and down-sampling. Without registration and image resampling, variations with respect to head positions and voxel resolutions have to be addressed differently. In deep learning, external augmentations such as rotation, translation, and scaling are traditionally used to artificially expand the representation of spatial variability, which subsequently increases both the training dataset size and robustness. However, these transformations in the image space still require resampling, reducing accuracy specifically in the context of label interpolation. We recently introduced the concept of resolution-independence with the Voxel-size Independent Neural Network framework, VINN. Here, we extend this concept by additionally shifting all rigid transforms into the network architecture with a four-degree-of-freedom (4-DOF) transform module, enabling resolution-aware internal augmentations (VINNA) for deep learning. In this work, we show that VINNA (i) significantly outperforms state-of-the-art external augmentation approaches, (ii) effectively addresses the head variations present specifically in newborn datasets, and (iii) retains high segmentation accuracy across a range of resolutions (0.5-1.0 mm). Furthermore, the 4-DOF transform module together with internal augmentations is a powerful, general approach to implement spatial augmentation without requiring image or label interpolation. The specific network application to newborns will be made publicly available as VINNA4neonates.

PMID:39575178 | PMC:PMC11576933 | DOI:10.1162/imag_a_00180

Categories: Literature Watch

Geometric deep learning for diffusion MRI signal reconstruction with continuous samplings (DISCUS)

Fri, 2024-11-22 06:00

Imaging Neurosci (Camb). 2024 Apr 2;2:1-18. doi: 10.1162/imag_a_00121. eCollection 2024 Apr 1.

ABSTRACT

Diffusion-weighted magnetic resonance imaging (dMRI) permits a detailed in-vivo analysis of neuroanatomical microstructure, invaluable for clinical and population studies. However, many measurements with different diffusion-encoding directions and possibly b-values are necessary to infer the underlying tissue microstructure within different imaging voxels accurately. Two challenges particularly limit the utility of dMRI: long acquisition times limit feasible scans to only a few directional measurements, and the heterogeneity of acquisition schemes across studies makes it difficult to combine datasets. Previous learning-based methods only accept dMRI data adhering to the specific acquisition scheme used for training, leaving an unmet need for methods that accept and predict signals for arbitrary diffusion encodings. Addressing these challenges, we describe the first geometric deep learning method for continuous dMRI signal reconstruction for arbitrary diffusion sampling schemes for both the input and output. Our method combines the reconstruction accuracy and robustness of previous learning-based methods with the flexibility of model-based methods, for example, spherical harmonics or SHORE. We demonstrate that our method outperforms model-based methods and performs on par with discrete learning-based methods on single-, multi-shell, and grid-based diffusion MRI datasets. Relevant for dMRI-derived analyses, we show that our reconstruction translates to higher-quality estimates of frequently used microstructure models compared to other reconstruction methods, enabling high-quality analyses even from very short dMRI acquisitions.

PMID:39575177 | PMC:PMC11576935 | DOI:10.1162/imag_a_00121

Categories: Literature Watch

Generative forecasting of brain activity enhances Alzheimer's classification and interpretation

Fri, 2024-11-22 06:00

ArXiv [Preprint]. 2024 Oct 30:arXiv:2410.23515v1.

ABSTRACT

Understanding the relationship between cognition and intrinsic brain activity through purely data-driven approaches remains a significant challenge in neuroscience. Resting-state functional magnetic resonance imaging (rs-fMRI) offers a non-invasive method to monitor regional neural activity, providing a rich and complex spatiotemporal data structure. Deep learning has shown promise in capturing these intricate representations. However, the limited availability of large datasets, especially for disease-specific groups such as Alzheimer's Disease (AD), constrains the generalizability of deep learning models. In this study, we focus on multivariate time series forecasting of independent component networks derived from rs-fMRI as a form of data augmentation, using both a conventional LSTM-based model and the novel Transformer-based BrainLM model. We assess their utility in AD classification, demonstrating how generative forecasting enhances classification performance. Post-hoc interpretation of BrainLM reveals class-specific brain network sensitivities associated with AD.

PMID:39575120 | PMC:PMC11581107

Categories: Literature Watch

Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis

Fri, 2024-11-22 06:00

ArXiv [Preprint]. 2024 Oct 31:arXiv:2410.23595v1.

ABSTRACT

The success of machine learning models relies heavily on effectively representing high-dimensional data. However, ensuring that data representations capture human-understandable concepts remains difficult, often requiring the incorporation of prior knowledge and decomposition of data into multiple subspaces. Traditional linear methods fall short in modeling more than one space, while more expressive deep learning approaches lack interpretability. Here, we introduce Supervised Independent Subspace Principal Component Analysis (sisPCA), a PCA extension designed for multi-subspace learning. Leveraging the Hilbert-Schmidt Independence Criterion (HSIC), sisPCA incorporates supervision and simultaneously ensures subspace disentanglement. We demonstrate sisPCA's connections with autoencoders and regularized linear regression and showcase its ability to identify and separate hidden data structures through extensive applications, including breast cancer diagnosis from image features, learning aging-associated DNA methylation changes, and single-cell analysis of malaria infection. Our results reveal distinct functional pathways associated with malaria colonization, underscoring the importance of explainable representations in high-dimensional data analysis.
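The Hilbert-Schmidt Independence Criterion that sisPCA leverages has a compact biased empirical estimator, trace(KHLH)/(n-1)^2, where K and L are kernel Gram matrices and H is the centering matrix; a minimal sketch with linear kernels (data and kernel choice are illustrative, not the paper's implementation):

```python
import numpy as np

def hsic(x, y):
    """Biased empirical HSIC with linear kernels (Gram matrices X X^T, Y Y^T)."""
    n = x.shape[0]
    K, L = x @ x.T, y @ y.T
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

x = np.arange(6.0).reshape(-1, 1)
print(hsic(x, np.ones((6, 1))))   # ~0: a constant y carries no dependence
print(hsic(x, x) > 0)             # True: x fully depends on itself
```

HSIC is zero (in expectation, with characteristic kernels) exactly when the two variables are independent, which is what makes it usable both as a supervision signal and as a subspace-disentanglement penalty.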

PMID:39575118 | PMC:PMC11581103

Categories: Literature Watch

Ion channel classification through machine learning and protein language model embeddings

Thu, 2024-11-21 06:00

J Integr Bioinform. 2024 Nov 25. doi: 10.1515/jib-2023-0047. Online ahead of print.

ABSTRACT

Ion channels are critical membrane proteins that regulate ion flux across cellular membranes, influencing numerous biological functions. The resource-intensive nature of traditional wet lab experiments for ion channel identification has led to an increasing emphasis on computational techniques. This study extends our previous work on protein language models for ion channel prediction, significantly advancing the methodology and performance. We employ a comprehensive array of machine learning algorithms, including k-Nearest Neighbors, Random Forest, Support Vector Machines, and Feed-Forward Neural Networks, alongside a novel Convolutional Neural Network (CNN) approach. These methods leverage fine-tuned embeddings from ProtBERT, ProtBERT-BFD, and MembraneBERT to differentiate ion channels from non-ion channels. Our empirical findings demonstrate that TooT-BERT-CNN-C, which combines features from ProtBERT-BFD and a CNN, substantially surpasses existing benchmarks. On our original dataset, it achieves a Matthews Correlation Coefficient (MCC) of 0.8584 and an accuracy of 98.35%. More impressively, on a newly curated, larger dataset (DS-Cv2), it attains an MCC of 0.9492 and an ROC AUC of 0.9968 on the independent test set. These results not only highlight the power of integrating protein language models with deep learning for ion channel classification but also underscore the importance of using up-to-date, comprehensive datasets in bioinformatics tasks. Our approach represents a significant advancement in computational methods for ion channel identification, with potential implications for accelerating research in ion channel biology and aiding drug discovery efforts.
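The Matthews Correlation Coefficient cited above is computed from the four cells of the binary confusion matrix and stays informative under class imbalance (channels vs. non-channels); a minimal sketch with toy labels (not the study's predictions):

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels in {0, 1}."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = int(np.sum(y_true & y_pred))
    tn = int(np.sum(~y_true & ~y_pred))
    fp = int(np.sum(~y_true & y_pred))
    fn = int(np.sum(y_true & ~y_pred))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0, perfect agreement
print(mcc([1, 1, 0, 0], [0, 0, 1, 1]))  # -1.0, perfect disagreement
```

MCC ranges from -1 to 1, with 0 for chance-level prediction, so the reported 0.8584 and 0.9492 indicate strong agreement with the ground-truth channel labels.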

PMID:39572876 | DOI:10.1515/jib-2023-0047

Categories: Literature Watch

Comparison Between Conventional and Artificial Intelligence-Assisted Setup for Digital Implant Planning: Accuracy, Time-Efficiency, and User Experience

Thu, 2024-11-21 06:00

Clin Oral Implants Res. 2024 Nov 21. doi: 10.1111/clr.14382. Online ahead of print.

ABSTRACT

OBJECTIVES: To investigate the reliability and time efficiency of conventional versus automatic artificial intelligence (AI) segmentation of the mandibular canal and registration of the CBCT with the model scan data, in relation to the clinician's experience.

MATERIALS AND METHODS: Twenty clinicians, 10 with a moderate and 10 with a high experience in computer-assisted implant planning, were asked to perform a bilateral localization of the mandibular canal, followed by a registration of the intraoral model scan with the CBCT. Subsequently, for each data set and each participant, the same operations were performed utilizing the AI tool. Statistical significance was assessed via a mixed model (using the PROC MIXED statement and the compound symmetry covariance structure).

RESULTS: The mean time for the segmentation of the mandibular canals and the registration of the models was 4.75 (2.03) min for the manual and 2.03 (0.36) min for the AI-automated operations (p < 0.001). The mean discrepancy in the mandibular canals was 0.71 (1.80) mm RMS error for the manual segmentation and 0.68 (0.36) mm RMS error for the AI-assisted segmentation (p > 0.05). For the registration between the CBCT and the intraoral scans, the mean discrepancy was 0.45 (0.16) mm for the manual and 0.37 (0.07) mm for the AI-assisted superimposition (p > 0.05).

CONCLUSIONS: AI-automated implant planning tools are feasible options that can achieve similar or better accuracy than the conventional manual workflow, while providing improved time efficiency for both experienced and less experienced users. Further research including a variety of software and data sets is required to generalize the outcomes of the present study.

PMID:39572789 | DOI:10.1111/clr.14382

Categories: Literature Watch

The diagnostic value of MRI segmentation technique for shoulder joint injuries based on deep learning

Thu, 2024-11-21 06:00

Sci Rep. 2024 Nov 21;14(1):28885. doi: 10.1038/s41598-024-80441-y.

ABSTRACT

This work investigates the diagnostic value of a deep learning-based magnetic resonance imaging (MRI) image segmentation (IS) technique for shoulder joint injuries (SJIs) in swimmers. A novel multi-scale feature fusion network (MSFFN) is developed by optimizing and integrating the AlexNet and U-Net algorithms for the segmentation of MRI images of the shoulder joint. The model is evaluated using metrics such as the Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity (SE). A cohort of 52 swimmers with SJIs from Guangzhou Hospital served as the subjects for this study, wherein the accuracy of the developed shoulder joint MRI IS model in diagnosing swimmers' SJIs is analyzed. The results reveal that the DSC for segmenting joint bones in MRI images based on the MSFFN algorithm is 92.65%, with a PPV of 95.83% and an SE of 96.30%. Similarly, the DSC for segmenting humerus bones in MRI images is 92.93%, with a PPV of 95.56% and an SE of 92.78%. The MRI IS algorithm exhibits an accuracy of 86.54% in diagnosing types of SJIs in swimmers, surpassing the conventional diagnostic accuracy of 71.15%. The consistency between the diagnostic results of complete tear, superior surface tear, inferior surface tear, and intratendinous tear of SJIs in swimmers and arthroscopic diagnostic results yields a Kappa value of 0.785 and an accuracy of 87.89%. These findings underscore the significant diagnostic value and potential of the MRI IS technique based on the MSFFN algorithm in diagnosing SJIs in swimmers.

PMID:39572780 | DOI:10.1038/s41598-024-80441-y

Categories: Literature Watch

Large language modeling and deep learning shed light on RNA structure prediction

Thu, 2024-11-21 06:00

Nat Methods. 2024 Nov 21. doi: 10.1038/s41592-024-02488-z. Online ahead of print.

NO ABSTRACT

PMID:39572717 | DOI:10.1038/s41592-024-02488-z

Categories: Literature Watch

Accurate RNA 3D structure prediction using a language model-based deep learning approach

Thu, 2024-11-21 06:00

Nat Methods. 2024 Nov 21. doi: 10.1038/s41592-024-02487-0. Online ahead of print.

ABSTRACT

Accurate prediction of RNA three-dimensional (3D) structures remains an unsolved challenge. Determining RNA 3D structures is crucial for understanding their functions and informing RNA-targeting drug development and synthetic biology design. The structural flexibility of RNA, which leads to the scarcity of experimentally determined data, complicates computational prediction efforts. Here we present RhoFold+, an RNA language model-based deep learning method that accurately predicts 3D structures of single-chain RNAs from sequences. By integrating an RNA language model pretrained on ~23.7 million RNA sequences and leveraging techniques to address data scarcity, RhoFold+ offers a fully automated end-to-end pipeline for RNA 3D structure prediction. Retrospective evaluations on RNA-Puzzles and CASP15 natural RNA targets demonstrate the superiority of RhoFold+ over existing methods, including human expert groups. Its efficacy and generalizability are further validated through cross-family and cross-type assessments, as well as time-censored benchmarks. Additionally, RhoFold+ predicts RNA secondary structures and interhelical angles, providing empirically verifiable features that broaden its applicability to RNA structure and function studies.

PMID:39572716 | DOI:10.1038/s41592-024-02487-0

Categories: Literature Watch

Leveraging a deep learning generative model to enhance recognition of minor asphalt defects

Thu, 2024-11-21 06:00

Sci Rep. 2024 Nov 21;14(1):28904. doi: 10.1038/s41598-024-80199-3.

ABSTRACT

Deep learning-based computer vision systems have become powerful tools for automated and cost-effective pavement distress detection, essential for efficient road maintenance. Current methods focus primarily on developing supervised learning architectures, which are limited by the scarcity of annotated image datasets. The use of data augmentation with synthetic images created by generative models to improve these supervised systems is not widely explored. The few studies that do focus on generative architectures are mostly non-conditional, requiring extra labeling, and typically address only road crack defects while aiming to improve classification models rather than object detection. This study introduces AsphaltGAN, a novel class-conditional Generative Adversarial Network with attention mechanisms, designed to augment datasets with various rare road defects to enhance object detection. An in-depth analysis evaluates the impact of different loss functions and hyperparameter tuning. The optimized AsphaltGAN outperforms state-of-the-art generative architectures on public datasets. Additionally, a new workflow is proposed to improve object detection models using synthetic road images. The augmented datasets significantly improve the object detection metrics of You Only Look Once version 8 by 33.0%, 3.8%, 46.3%, and 51.8% on the Road Damage Detection 2022 dataset, Crack Dataset, Asphalt Pavement Detection Dataset, and Crack Surface Dataset, respectively.

PMID:39572659 | DOI:10.1038/s41598-024-80199-3

Categories: Literature Watch

Enhanced MobileNet for skin cancer image classification with fused spatial channel attention mechanism

Thu, 2024-11-21 06:00

Sci Rep. 2024 Nov 21;14(1):28850. doi: 10.1038/s41598-024-80087-w.

ABSTRACT

Skin cancer, which causes a large number of deaths annually, is widely considered the most lethal tumor worldwide. Accurate detection of skin cancer in its early stage can significantly raise the survival rate of patients and reduce the burden on public health. Currently, the diagnosis of skin cancer relies heavily on the human visual system for screening and dermoscopy. However, manual inspection is laborious, time-consuming, and error-prone. Consequently, the development of an automatic machine vision algorithm for skin cancer classification becomes imperative. Various machine learning techniques have been presented over the last few years. Although these methods have yielded promising outcomes in skin cancer detection and recognition, there is still a gap between machine learning algorithms and clinical applications. To enhance classification performance, this study proposes a novel deep learning model for discriminating clinical skin cancer images. The proposed model incorporates a convolutional neural network for extracting local receptive field information and a novel attention mechanism for revealing the global associations within an image. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art algorithms on the publicly available International Skin Imaging Collaboration 2019 (ISIC-2019) dataset in terms of Precision, Recall, and F1-score. These results indicate that the proposed approach is a potentially valuable instrument for skin cancer classification.

PMID:39572649 | DOI:10.1038/s41598-024-80087-w

Categories: Literature Watch

Performance prediction of sintered NdFeB magnet using multi-head attention regression models

Thu, 2024-11-21 06:00

Sci Rep. 2024 Nov 21;14(1):28822. doi: 10.1038/s41598-024-79435-7.

ABSTRACT

The preparation of sintered NdFeB magnets is complex, time-consuming, and costly. Data-driven machine learning methods can enhance the efficiency of material synthesis and performance optimization. Traditional machine learning models based on mathematical and statistical principles are effective for structured data and offer high interpretability. However, as the scale and dimensionality of the data increase, the computational complexity of models rises dramatically, making hyperparameter tuning more challenging. By contrast, neural network models possess strong nonlinear modeling capabilities for handling large-scale data, but their decision-making and inferential processes remain opaque. To enhance the interpretability of neural networks, we collected 1,200 high-quality experimental data points and developed a multi-head attention regression model by integrating an attention mechanism into the neural network. The model enables parallel data processing, accelerates both training and inference speed, and reduces reliance on feature engineering and hyperparameter tuning. The coefficients of determination for remanence and coercivity are 0.97 and 0.84, respectively. This study offers new insights into machine learning-based modeling of structure-property relationships in materials and has the potential to advance research on multimodal NdFeB magnet models.
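Multi-head attention of the kind integrated into the regression model splits the feature dimension across heads and applies scaled dot-product attention per head; a minimal NumPy sketch (the dimensions, random weights, and self-attention framing are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Multi-head scaled dot-product self-attention over tokens X of shape (n, d)."""
    n, d = X.shape
    dh = d // n_heads                              # per-head dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)  # scaled dot products
        heads.append(softmax(scores) @ V[:, s])     # attention-weighted values
    return np.concatenate(heads, axis=1) @ Wo       # merge heads

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads=2)
print(out.shape)  # (5, 8)
```

Because each head attends independently over a slice of the features, the heads can be computed in parallel, which is the parallelism the abstract credits for faster training and inference.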

PMID:39572633 | DOI:10.1038/s41598-024-79435-7

Categories: Literature Watch

Deep learning based emulator for predicting voltage behaviour in lithium ion batteries

Thu, 2024-11-21 06:00

Sci Rep. 2024 Nov 21;14(1):28905. doi: 10.1038/s41598-024-80371-9.

ABSTRACT

This study presents a data-driven battery emulator using long short-term memory deep learning models to predict the charge-discharge behaviour of lithium-ion batteries (LIBs). This study aimed to reduce the economic costs and time associated with the fabrication of large-scale automotive prototype batteries by emulating their performance using smaller laboratory-produced batteries. Two types of datasets were targeted: simulation data from the Dualfoil model and experimental data from liquid-based LIBs. These datasets were used to accurately predict the voltage profiles from the arbitrary inputs of various galvanostatic charge-discharge schedules. The results demonstrated high prediction accuracy, with the coefficient of determination scores reaching 0.98 and 0.97 for test datasets obtained from the simulation and experiments, respectively. The study also confirmed the significance of state-of-charge descriptors and inferred that a robust model performance could be achieved with as few as five charge-discharge training datasets. This study concludes that data-driven emulation using machine learning can significantly accelerate the battery development process, providing a powerful tool for reducing the time and economic costs associated with the production of large-scale prototype batteries.

PMID:39572616 | DOI:10.1038/s41598-024-80371-9

Categories: Literature Watch

Chisco: An EEG-based BCI dataset for decoding of imagined speech

Thu, 2024-11-21 06:00

Sci Data. 2024 Nov 21;11(1):1265. doi: 10.1038/s41597-024-04114-1.

ABSTRACT

The rapid advancement of deep learning has enabled Brain-Computer Interface (BCI) technologies, particularly neural decoding techniques, to achieve higher accuracy and deeper levels of interpretation. Interest in decoding imagined speech has increased significantly because the concept is akin to "mind reading". However, previous studies on decoding neural language have predominantly focused on brain activity patterns during human reading. The absence of imagined speech electroencephalography (EEG) datasets has constrained further research in this field. We present the Chinese Imagined Speech Corpus (Chisco), including over 20,000 sentences of high-density EEG recordings of imagined speech from healthy adults. Each subject's EEG data exceeds 900 minutes, the largest amount per individual currently available for decoding neural language. Furthermore, the experimental stimuli include over 6,000 everyday phrases across 39 semantic categories, covering nearly all aspects of daily language. We believe that Chisco represents a valuable resource for the BCI field, facilitating the development of more user-friendly BCIs.

PMID:39572577 | DOI:10.1038/s41597-024-04114-1

Categories: Literature Watch

Conformalized Graph Learning for Molecular ADMET Property Prediction and Reliable Uncertainty Quantification

Thu, 2024-11-21 06:00

J Chem Inf Model. 2024 Nov 21. doi: 10.1021/acs.jcim.4c01139. Online ahead of print.

ABSTRACT

Drug discovery and development is a complex and costly process, with a substantial portion of the expense dedicated to characterizing the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of new drug candidates. While the advent of deep learning and molecular graph neural networks (GNNs) has significantly enhanced in silico ADMET prediction capabilities, reliably quantifying prediction uncertainty remains a critical challenge. The performance of GNNs is influenced by both the volume and the quality of the data. Hence, determining the reliability and extent of a prediction is as crucial as achieving accurate predictions, especially for out-of-domain (OoD) compounds. This paper introduces a novel GNN model called conformalized fusion regression (CFR). CFR combines a GNN model with a joint mean-quantile regression loss and an ensemble-based conformal prediction (CP) method. Through rigorous evaluation across various ADMET tasks, we demonstrate that our framework provides accurate predictions, reliable probability calibration, and high-quality prediction intervals, outperforming existing uncertainty quantification methods.
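Conformal prediction in its simplest split (calibration-set) form turns a quantile of held-out nonconformity scores into prediction intervals with marginal coverage guarantees; a minimal regression sketch (the arrays are synthetic, and the paper's ensemble-based CP is more elaborate):

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal regression: the (1 - alpha)-adjusted quantile of
    calibration residuals yields symmetric prediction intervals."""
    scores = np.abs(cal_true - cal_pred)                       # nonconformity
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)       # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q

# synthetic calibration set with residuals 0.0, 0.1, ..., 0.9
cal_true = np.linspace(0, 9, 10)
cal_pred = cal_true + np.linspace(0, 0.9, 10)
lo, hi = split_conformal_interval(cal_pred, cal_true, np.array([5.0]), alpha=0.2)
print(lo[0], hi[0])  # 4.1 5.9 (q = 0.9 for these residuals)
```

Under exchangeability the interval covers the true value with probability at least 1 - alpha, which is the kind of distribution-free guarantee that makes CP attractive for flagging unreliable OoD predictions.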

PMID:39571080 | DOI:10.1021/acs.jcim.4c01139

Categories: Literature Watch

Towards efficient IoT communication for smart agriculture: A deep learning framework

Thu, 2024-11-21 06:00

PLoS One. 2024 Nov 21;19(11):e0311601. doi: 10.1371/journal.pone.0311601. eCollection 2024.

ABSTRACT

The integration of IoT (Internet of Things) devices has emerged as a technical cornerstone of modern agriculture, revolutionising how farming practices are viewed and managed. Smart farming, enabled by interconnected sensors and technologies, has surpassed traditional methods, giving farmers real-time, granular insight into their farms. These IoT devices collect greenhouse data (temperature, humidity, and soil moisture) and transmit them to the required destination, providing comprehensive awareness of the environmental factors critical to crop growth. Ensuring that the received data are accurate is therefore a challenge, so this paper investigates the optimization of agricultural IoT communication, proposing a complete strategy for improving data-transmission efficiency within smart-farming ecosystems. The proposed model aims to maximize energy efficiency and data throughput with respect to essential agricultural factors by combining Lagrange optimization with a Deep Convolutional Neural Network (DCNN). The paper focuses on the ideal communication distance between the IoT sensors that measure humidity, temperature, and water levels and the central control systems, emphasizing the critical role of these data points in guaranteeing crop health and vitality. By integrating mathematical optimization with deep learning, the proposed technique strives to improve the performance of agricultural IoT communication networks. This paradigm shift highlights the inherent link between the achievable data rate and energy efficiency, yielding resilient agricultural ecosystems capable of adapting to dynamic environmental conditions for optimal crop output and health.
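The rate/energy tradeoff at the heart of such an optimization can be illustrated with a toy link model: a Shannon rate under power-law path loss, and energy efficiency defined as bits delivered per joule. This is a hedged sketch only; all constants (bandwidth, noise floor, path-loss exponent, circuit power) and function names are illustrative assumptions, not values or methods from the paper, and a simple grid search stands in for the paper's Lagrange/DCNN machinery.

```python
import numpy as np

def achievable_rate(d, p_tx, bandwidth=125e3, noise=1e-12, path_loss_exp=3.0):
    """Shannon rate (bit/s) over distance d (m) with a d^-k path-loss model."""
    snr = p_tx * d ** (-path_loss_exp) / noise
    return bandwidth * np.log2(1.0 + snr)

def energy_efficiency(d, p_tx, p_circuit=0.05):
    """Bits delivered per joule: rate divided by total power draw (W)."""
    return achievable_rate(d, p_tx) / (p_tx + p_circuit)

def best_power(d, powers, p_circuit=0.05):
    """Grid search over transmit power. Because the rate grows only
    logarithmically in power while energy cost grows linearly, the
    efficiency curve has a single interior maximum."""
    return max(powers, key=lambda p: energy_efficiency(d, p, p_circuit))
```

For a sensor 100 m from the gateway with these toy constants, the optimum sits strictly inside the power grid: pushing power beyond it buys little extra rate per extra joule, which is exactly the coupling between data rate and energy efficiency the abstract describes.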

PMID:39570960 | DOI:10.1371/journal.pone.0311601

Categories: Literature Watch

FDCN-C: A deep learning model based on frequency enhancement, deformable convolution network, and crop module for electroencephalography motor imagery classification

Thu, 2024-11-21 06:00

PLoS One. 2024 Nov 21;19(11):e0309706. doi: 10.1371/journal.pone.0309706. eCollection 2024.

ABSTRACT

Motor imagery (MI) electroencephalography (EEG) decoding plays an important role in brain-computer interfaces (BCIs), enabling motor-disabled patients to communicate with the external world by manipulating smart equipment. Deep learning (DL)-based methods are currently popular for EEG decoding. However, EEG features in the frequency and temporal domains are not exploited efficiently, which results in poor MI classification performance. To address this issue, an EEG-based MI classification model built on a frequency enhancement module, a deformable convolutional network, and a crop module (FDCN-C) is proposed. First, the frequency enhancement module is designed to extract frequency information: it applies convolution kernels at continuous time scales to extract features across different frequency bands, screens these features by computing attention, and integrates them into the original EEG data. Second, for temporal feature extraction, a deformable convolutional network enhances feature-extraction capability, using offset parameters to modulate the convolution kernel size. In the spatial domain, a one-dimensional convolution layer integrates information from all channels. Finally, a dilated convolution forms the crop classification module, in which diverse receptive fields over the EEG data are computed multiple times. Two public datasets are used to verify the proposed FDCN-C model; its classification accuracy exceeds that of state-of-the-art methods, improving on the baseline model by 14.01%, and an ablation study confirms the effectiveness of each module.
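The frequency enhancement idea (multi-scale temporal kernels, attention-weighted fusion, residual add onto the raw EEG) can be sketched in a few lines of numpy. This is a stand-in for the general pattern only, not the authors' FDCN-C code: the moving-average kernels, the variance-based attention scores, and the function name are all illustrative assumptions.

```python
import numpy as np

def frequency_enhance(eeg, kernel_sizes=(5, 15, 45)):
    """Sketch of a frequency-enhancement step on an EEG array of shape
    (channels, time): convolve each channel with moving-average kernels
    of several lengths (short kernels keep fast components, long ones
    isolate slow ones), weight the branches with softmax attention over
    their energies, and add the fused result back onto the raw signal."""
    branches = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                        # one temporal scale
        smoothed = np.stack([np.convolve(ch, kernel, mode="same")
                             for ch in eeg])
        branches.append(smoothed)
    branches = np.stack(branches)                      # (scales, C, T)
    energy = branches.reshape(len(kernel_sizes), -1).var(axis=1)
    attn = np.exp(energy) / np.exp(energy).sum()       # softmax weights
    fused = np.tensordot(attn, branches, axes=1)       # attention-weighted sum
    return eeg + fused                                 # residual connection
```

In the paper the per-band features are produced by learned convolutions and the attention is trained end to end; the fixed kernels here only convey how multiple temporal scales map onto frequency bands.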

PMID:39570849 | DOI:10.1371/journal.pone.0309706

Categories: Literature Watch
