Deep learning

Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality

Fri, 2024-08-09 06:00

Proc ACM Comput Graph Interact Tech. 2024 May;7(2):27. doi: 10.1145/3654705. Epub 2024 May 17.

ABSTRACT

Algorithms for the estimation of gaze direction from mobile and video-based eye trackers typically involve tracking a feature of the eye that moves through the eye camera image in a way that covaries with the shifting gaze direction, such as the center or boundaries of the pupil. Tracking these features using traditional computer vision techniques can be difficult due to partial occlusion and environmental reflections. Although recent efforts to use machine learning (ML) for pupil tracking have demonstrated superior results when evaluated using standard measures of segmentation performance, little is known of how these networks may affect the quality of the final gaze estimate. This work provides an objective assessment of the impact of several contemporary ML-based methods for eye feature tracking when the subsequent gaze estimate is produced using either feature-based or model-based methods. Metrics include the accuracy and precision of the gaze estimate, as well as drop-out rate.

PMID:39119010 | PMC:PMC11308822 | DOI:10.1145/3654705

Categories: Literature Watch

A graph-learning based model for automatic diagnosis of Sjögren's syndrome on digital pathological images: a multicentre cohort study

Thu, 2024-08-08 06:00

J Transl Med. 2024 Aug 8;22(1):748. doi: 10.1186/s12967-024-05550-8.

ABSTRACT

BACKGROUND: Sjögren's Syndrome (SS) is a rare chronic autoimmune disorder primarily affecting adult females, characterized by chronic inflammation and salivary and lacrimal gland dysfunction. It is often associated with systemic lupus erythematosus, rheumatoid arthritis and kidney disease, which can lead to increased mortality. Early diagnosis is critical, but traditional methods for diagnosing SS, mainly through histopathological evaluation of salivary gland tissue, have limitations.

METHODS: The study used 100 labial gland biopsies, creating whole-slide images (WSIs) for analysis. The proposed model, named the cell-tissue-graph-based pathological image analysis model (CTG-PAM) and based on graph theory, characterizes single-cell, cell-cell, and cell-tissue features. Building upon these features, CTG-PAM achieves cellular-level classification, enabling lymphocyte recognition. Furthermore, it leverages connected component analysis techniques in the cell graph structure to perform SS diagnosis based on lymphocyte counts.

FINDINGS: CTG-PAM outperforms traditional deep learning methods in diagnosing SS. Its area under the receiver operating characteristic curve (AUC) is 1.0 for the internal validation dataset and 0.8035 for the external test dataset. This indicates high accuracy. The sensitivity of CTG-PAM for the external dataset is 98.21%, while the accuracy is 93.75%. In comparison, the sensitivity and accuracy for traditional deep learning methods (ResNet-50) are lower. The study also shows that CTG-PAM's diagnostic accuracy is closer to that of skilled pathologists than to that of beginners.

INTERPRETATION: Our findings indicate that CTG-PAM is a reliable method for diagnosing SS. Additionally, CTG-PAM shows promise in enhancing the prognosis of SS patients and holds significant potential for the differential diagnosis of both non-neoplastic and neoplastic diseases. The AI model potentially extends its application to diagnosing immune cells in tumor microenvironments.

PMID:39118142 | DOI:10.1186/s12967-024-05550-8

Categories: Literature Watch

Deep learning-based multimodal fusion of the surface ECG and clinical features in prediction of atrial fibrillation recurrence following catheter ablation

Thu, 2024-08-08 06:00

BMC Med Inform Decis Mak. 2024 Aug 8;24(1):225. doi: 10.1186/s12911-024-02616-x.

ABSTRACT

BACKGROUND: Despite improvement in treatment strategies for atrial fibrillation (AF), a significant proportion of patients still experience recurrence after ablation. This study aims to propose a novel Transformer-based algorithm that uses surface electrocardiogram (ECG) signals and clinical features to predict AF recurrence.

METHODS: Between October 2018 and December 2021, patients who underwent index radiofrequency ablation for AF with at least one standard 10-second surface ECG during sinus rhythm were enrolled. An end-to-end deep learning framework based on Transformer and a fusion module was used to predict AF recurrence using ECG and clinical features. Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, accuracy, and F1-score.

RESULTS: A total of 920 patients (median age 61 [IQR 14] years, 66.3% male) were included. After a median follow-up of 24 months, 253 patients (27.5%) experienced AF recurrence. A deep learning model using ECG signals alone identified AF recurrence with an AUROC of 0.769, sensitivity of 75.5%, specificity of 61.1%, F1 score of 55.6% and overall accuracy of 65.2%. Combining ECG signals and clinical features increased the AUROC to 0.899, sensitivity to 81.1%, specificity to 81.7%, F1 score to 71.7%, and overall accuracy to 81.5%.

CONCLUSIONS: The Transformer algorithm demonstrated excellent performance in predicting AF recurrence. Integrating ECG and clinical features enhanced the model's performance and may help identify patients at low risk for AF recurrence after index ablation.

PMID:39118118 | DOI:10.1186/s12911-024-02616-x

Categories: Literature Watch

Evaluation of reinforcement learning in transformer-based molecular design

Thu, 2024-08-08 06:00

J Cheminform. 2024 Aug 8;16(1):95. doi: 10.1186/s13321-024-00887-0.

ABSTRACT

Designing compounds with a range of desirable properties is a fundamental challenge in drug discovery. In pre-clinical early drug discovery, novel compounds are often designed based on an already existing promising starting compound through structural modifications for further property optimization. Recently, transformer-based deep learning models have been explored for the task of molecular optimization by training on pairs of similar molecules. This provides a starting point for generating similar molecules to a given input molecule, but has limited flexibility regarding user-defined property profiles. Here, we evaluate the effect of reinforcement learning on transformer-based molecular generative models. The generative model can be considered as a pre-trained model with knowledge of the chemical space close to an input compound, while reinforcement learning can be viewed as a tuning phase, steering the model towards chemical space with user-specific desirable properties. The evaluation of two distinct tasks, molecular optimization and scaffold discovery, suggests that reinforcement learning can guide the transformer-based generative model towards the generation of more compounds of interest. Additionally, the impact of pre-trained models, learning steps and learning rates is investigated. Scientific contribution: Our study investigates the effect of reinforcement learning on a transformer-based generative model initially trained for generating molecules similar to starting molecules. The reinforcement learning framework is applied to facilitate multiparameter optimisation of starting molecules. This approach allows for more flexibility when optimizing user-specific property profiles and helps find more ideas of interest.

PMID:39118113 | DOI:10.1186/s13321-024-00887-0

Categories: Literature Watch

Occlusion enhanced pan-cancer classification via deep learning

Thu, 2024-08-08 06:00

BMC Bioinformatics. 2024 Aug 8;25(1):260. doi: 10.1186/s12859-024-05870-y.

ABSTRACT

Quantitative measurement of RNA expression levels through RNA-Seq is an ideal replacement for conventional cancer diagnosis via microscope examination. Currently, cancer-related RNA-Seq studies focus on two aspects: classifying the status and tissue of origin of a sample and discovering marker genes. Existing studies typically identify marker genes by statistically comparing healthy and cancer samples. However, this approach overlooks marker genes with low expression level differences and may be influenced by experimental results. This paper introduces "GENESO," a novel framework for pan-cancer classification and marker gene discovery using the occlusion method in conjunction with deep learning. We first trained a baseline deep LSTM neural network capable of distinguishing the origins and statuses of samples utilizing RNA-Seq data. We then propose a novel marker gene discovery method called "Symmetrical Occlusion (SO)". It collaborates with the baseline LSTM network, mimicking the "gain of function" and "loss of function" of genes to evaluate their importance in pan-cancer classification quantitatively. By identifying the genes of utmost importance, we then isolate them to train new neural networks, resulting in higher-performance LSTM models that utilize only a reduced set of highly relevant genes. The baseline neural network achieves an impressive validation accuracy of 96.59% in pan-cancer classification. With the help of SO, the accuracy of the second network reaches 98.30%, while using 67% fewer genes. Notably, our method excels in identifying marker genes that are not differentially expressed. Moreover, we assessed the feasibility of our method using single-cell RNA-Seq data, employing known marker genes as a validation test.
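The occlusion idea described in this abstract can be illustrated with a toy sketch (not the paper's GENESO implementation; the linear "model", gene indices, and boost factor are illustrative assumptions): perturb one gene's expression downward ("loss of function") and upward ("gain of function") and score the change in the classifier's output.

```python
import numpy as np

def symmetrical_occlusion_importance(predict, x, gene_idx, boost=2.0):
    """Toy sketch of occlusion-style gene importance.

    `predict` maps an expression vector to the probability of the true
    class; `gene_idx` is the gene to perturb.  "Loss of function"
    zeroes the gene, "gain of function" scales it up; importance is
    the total change in the model's confidence.
    """
    base = predict(x)
    lo = x.copy(); lo[gene_idx] = 0.0       # loss of function
    hi = x.copy(); hi[gene_idx] *= boost    # gain of function
    return abs(base - predict(lo)) + abs(base - predict(hi))

# Example with a linear toy "model" in which gene 0 dominates the score.
w = np.array([0.9, 0.05, 0.05])
predict = lambda v: float(w @ v)
x = np.array([1.0, 1.0, 1.0])
scores = [symmetrical_occlusion_importance(predict, x, i) for i in range(3)]
```

In a real pipeline the perturbed scores would come from the trained LSTM, and genes would be ranked by this importance before retraining on the top-ranked subset.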

PMID:39118043 | DOI:10.1186/s12859-024-05870-y

Categories: Literature Watch

Automated 3D Cobb Angle Measurement Using U-Net in CT Images of Preoperative Scoliosis Patients

Thu, 2024-08-08 06:00

J Imaging Inform Med. 2024 Aug 8. doi: 10.1007/s10278-024-01211-w. Online ahead of print.

ABSTRACT

To propose a deep learning framework, "SpineCurve-net," for automated measurement of 3D Cobb angles from computed tomography (CT) images of presurgical scoliosis patients. A total of 116 scoliosis patients were analyzed, divided into a training set of 89 patients (average age 32.4 ± 24.5 years) and a validation set of 27 patients (average age 17.3 ± 5.8 years). Vertebral identification and curve fitting were achieved through U-net and NURBS-net and resulted in a Non-Uniform Rational B-Spline (NURBS) curve of the spine. The 3D Cobb angles were measured in two ways: the predicted 3D Cobb angle (PRED-3D-CA), which is the maximum value in the smoothed angle map derived from the NURBS curve, and the 2D mapping Cobb angle (MAP-2D-CA), which is the maximal angle formed by the tangent vectors along the projected 2D spinal curve. The model segmented spinal masks effectively, capturing easily missed vertebral bodies. Spoke kernel filtering distinguished vertebral regions, centralizing spinal curves. The SpineCurve Network method's Cobb angle (PRED-3D-CA and MAP-2D-CA) measurements correlated strongly with the surgeons' annotated Cobb angle (ground truth, GT) based on 2D radiographs, revealing high Pearson correlation coefficients of 0.983 and 0.934, respectively. This paper proposed an automated technique for calculating the 3D Cobb angle in preoperative scoliosis patients, yielding results that are highly correlated with traditional 2D Cobb angle measurements. Given its capacity to accurately represent the three-dimensional nature of spinal deformities, this method shows potential in aiding physicians to develop more precise surgical strategies in upcoming cases.
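The MAP-2D-CA definition above (the maximal angle formed by tangent vectors along the projected 2D spinal curve) can be sketched as follows; the uniform sampling and finite-difference tangents are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def cobb_angle_2d(points):
    """Maximal angle (degrees) between tangent vectors along a 2D curve.

    `points` is an (N, 2) array of samples along the projected spinal
    curve; tangents are finite differences of consecutive points.
    """
    t = np.diff(points, axis=0)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)   # unit tangents
    # Angle between every pair of tangents; the Cobb angle is the max.
    cosines = np.clip(t @ t.T, -1.0, 1.0)
    return float(np.degrees(np.arccos(cosines).max()))

# Sanity check: along a circular arc spanning 90 degrees, the tangent
# direction turns by (almost exactly) 90 degrees.
theta = np.linspace(0.0, np.pi / 2, 200)
arc = np.stack([np.cos(theta), np.sin(theta)], axis=1)
angle = cobb_angle_2d(arc)
```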

PMID:39117939 | DOI:10.1007/s10278-024-01211-w

Categories: Literature Watch

Deep learning-based automated angle measurement for flatfoot diagnosis in weight-bearing lateral radiographs

Thu, 2024-08-08 06:00

Sci Rep. 2024 Aug 8;14(1):18411. doi: 10.1038/s41598-024-69549-3.

ABSTRACT

This study aimed to develop and evaluate a deep learning-based system for the automatic measurement of angles (specifically, Meary's angle and calcaneal pitch) in weight-bearing lateral radiographs of the foot for flatfoot diagnosis. We utilized 3960 lateral radiographs, either from the left or right foot, sourced from a pool of 4000 patients to construct and evaluate a deep learning-based model. These radiographs were captured between June and November 2021, and patients who had undergone total ankle replacement surgery or ankle arthrodesis surgery were excluded. Various methods, including correlation analysis, Bland-Altman plots, and paired T-tests, were employed to assess the concordance between the angles automatically measured using the system and those assessed by clinical experts. The evaluation dataset comprised 150 weight-bearing radiographs from 150 patients. In all test cases, the angles automatically computed using the deep learning-based system were in good agreement with the reference standards (Meary's angle: Pearson correlation coefficient (PCC) = 0.964, intraclass correlation coefficient (ICC) = 0.963, concordance correlation coefficient (CCC) = 0.963, p-value = 0.632, mean absolute error (MAE) = 1.59°; calcaneal pitch: PCC = 0.988, ICC = 0.987, CCC = 0.987, p-value = 0.055, MAE = 0.63°). The average time required for angle measurement using only the CPU to execute the deep learning-based system was 11 ± 1 s. The deep learning-based automatic angle measurement system, a tool for diagnosing flatfoot, demonstrated comparable accuracy and reliability with the results obtained by medical professionals for patients without internal fixation devices.
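For reference, the concordance correlation coefficient (CCC) reported above can be computed from two raters' paired measurements with Lin's formula; the sample angle values below are made up purely for illustration.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two raters:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]   # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical automatic vs. expert angle measurements (degrees).
auto = [12.1, 8.4, 15.0, 10.2, 6.7]
expert = [12.0, 8.6, 14.7, 10.5, 6.9]
agreement = ccc(auto, expert)
```

Unlike the Pearson coefficient, the CCC penalizes systematic offsets between the two raters, which is why it is the standard choice for agreement studies like this one.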

PMID:39117787 | DOI:10.1038/s41598-024-69549-3

Categories: Literature Watch

Kinetics and coexistence of autocatalytic reaction cycles

Thu, 2024-08-08 06:00

Sci Rep. 2024 Aug 8;14(1):18441. doi: 10.1038/s41598-024-69267-w.

ABSTRACT

Biological reproduction rests ultimately on chemical autocatalysis. Autocatalytic chemical cycles are thought to have played an important role in the chemical complexification en route to life. There are two related issues: what chemical transformations allow such cycles to form, and at what speed they operate. Here we investigate the latter question for solitary as well as competitive autocatalytic cycles in resource-unlimited batch and resource-limited chemostat systems. The speed of growth tends to decrease with the length of a cycle. Reversibility of the reproductive step results in parabolic growth that is conducive to competitive coexistence. Reversibility of resource uptake also slows down growth. Unilateral help by a cycle of its competitor tends to favour the competitor (in effect a parasite on the helper), rendering coexistence unlikely. We also show that deep learning is able to predict the outcome of competition just from the topology and the kinetic rate constants, provided the training set is large enough. These investigations pave the way for studying autocatalytic cycles with more complicated coupling, such as mutual catalysis.
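The batch kinetics discussed above (growth stalling as the resource is consumed) can be sketched with a minimal one-step autocatalytic reaction X + R → 2X; the rate constant and initial concentrations here are arbitrary illustrative values, not the paper's.

```python
def simulate_autocatalysis(k=1.0, x0=0.01, r0=1.0, dt=1e-3, steps=20000):
    """Euler integration of the minimal autocatalytic step X + R -> 2X
    in a closed batch system: growth is roughly exponential while the
    resource R is abundant and stalls as R is depleted."""
    x, r = x0, r0
    for _ in range(steps):
        rate = k * x * r   # mass-action rate of X + R -> 2X
        x += dt * rate
        r -= dt * rate
    return x, r

xf, rf = simulate_autocatalysis()
```

Total mass x + r is conserved by construction, and in the long run nearly all of the resource is converted into the autocatalyst, reproducing the familiar logistic saturation curve.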

PMID:39117739 | DOI:10.1038/s41598-024-69267-w

Categories: Literature Watch

ResNeXt-CC: a novel network based on cross-layer deep-feature fusion for white blood cell classification

Thu, 2024-08-08 06:00

Sci Rep. 2024 Aug 8;14(1):18439. doi: 10.1038/s41598-024-69076-1.

ABSTRACT

Accurate diagnosis of white blood cells from cytopathological images is a crucial step in evaluating leukaemia. In recent years, image classification methods based on fully convolutional networks have drawn extensive attention and achieved competitive performance in medical image classification. In this paper, we propose a white blood cell classification network called ResNeXt-CC for cytopathological images. First, we transform cytopathological images from the RGB color space to the HSV color space so as to precisely extract the texture features, color changes and other details of white blood cells. Second, since cell classification primarily relies on distinguishing local characteristics, we design a cross-layer deep-feature fusion module to enhance our ability to extract discriminative information. Third, the efficient attention mechanism based on the ECANet module is used to promote the feature extraction capability of cell details. Finally, we combine the modified softmax loss function and the central loss function to train the network, thereby effectively addressing the problem of class imbalance and improving the network performance. The experimental results on the C-NMC 2019 dataset show that our proposed method has clear advantages over existing classification methods, including ResNet-50, Inception-V3, Densenet121, VGG16, Cross ViT, Token-to-Token ViT, Deep ViT, and simple ViT, outperforming them by about 5.5-20.43% in accuracy, 3.6-23.56% in F1-score, 3.5-25.71% in AUROC and 8.1-36.98% in specificity, respectively.
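The central (center) loss term mentioned above, combined with a softmax loss, can be sketched in its generic form; the feature dimensions and class centres below are illustrative, and this is the standard formulation rather than the paper's exact modified loss.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance of each deep feature
    to its class centre, pulling same-class features together while the
    softmax term keeps different classes apart."""
    diffs = features - centers[labels]
    return 0.5 * float(np.mean(np.sum(diffs ** 2, axis=1)))

# Two 2-D features, one per class; the second sits 1 unit from its centre.
feats = np.array([[1.0, 0.0], [0.0, 0.0]])
labels = np.array([0, 1])
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = center_loss(feats, labels, centers)
```

During training the total objective would be `softmax_loss + lam * center_loss`, with the centres updated alongside the network weights.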

PMID:39117714 | DOI:10.1038/s41598-024-69076-1

Categories: Literature Watch

A deep learning model for anti-inflammatory peptides identification based on deep variational autoencoder and contrastive learning

Thu, 2024-08-08 06:00

Sci Rep. 2024 Aug 8;14(1):18451. doi: 10.1038/s41598-024-69419-y.

ABSTRACT

As a class of biologically active molecules with significant immunomodulatory and anti-inflammatory effects, anti-inflammatory peptides have important application value in the medical and biotechnology fields due to their unique biological functions. Research on the identification of anti-inflammatory peptides provides important theoretical foundations and practical value for a deeper understanding of the biological mechanisms of inflammation and immune regulation, as well as for the development of new drugs and biotechnological applications. Therefore, it is necessary to develop more advanced computational models for identifying anti-inflammatory peptides. In this study, we propose a deep learning model named DAC-AIPs based on variational autoencoder and contrastive learning for accurate identification of anti-inflammatory peptides. In the sequence encoding part, the incorporation of multi-hot encoding helps capture richer sequence information. The autoencoder, composed of convolutional layers and linear layers, can learn latent features and reconstruct features, with variational inference enhancing the representation capability of latent features. Additionally, the introduction of contrastive learning aims to improve the model's classification ability. Through cross-validation and independent dataset testing experiments, DAC-AIPs achieves superior performance compared to existing state-of-the-art models. In cross-validation, the classification accuracy of DAC-AIPs reached around 88%, about 7% higher than that of previous models. Furthermore, various ablation experiments and interpretability experiments validate the effectiveness of DAC-AIPs. Finally, a user-friendly online predictor is designed to enhance the practicality of the model, and the server is freely accessible at http://dac-aips.online.
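One common reading of "multi-hot encoding" for peptide sequences is a presence vector over the 20 standard amino acids; the paper's exact scheme may differ, so the sketch below is only an assumed illustration of the idea.

```python
# The 20 standard amino acids in one-letter code.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def multi_hot(seq):
    """Multi-hot vector marking which of the 20 standard amino acids
    occur anywhere in `seq` (unlike one-hot, several positions of the
    vector may be set at once)."""
    return [1 if aa in seq else 0 for aa in AMINO_ACIDS]

# "KLAK" contains three distinct residues: K, L and A.
vec = multi_hot("KLAK")
```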

PMID:39117712 | DOI:10.1038/s41598-024-69419-y

Categories: Literature Watch

Two-stage deep neural network for diagnosing fungal keratitis via in vivo confocal microscopy images

Thu, 2024-08-08 06:00

Sci Rep. 2024 Aug 8;14(1):18432. doi: 10.1038/s41598-024-68768-y.

ABSTRACT

Timely and effective diagnosis of fungal keratitis (FK) is necessary for suitable treatment and avoiding irreversible vision loss for patients. In vivo confocal microscopy (IVCM) has been widely adopted to guide the FK diagnosis. We present a deep learning framework for diagnosing fungal keratitis using IVCM images to assist ophthalmologists. Inspired by the real diagnostic process, our method employs a two-stage deep architecture for diagnostic predictions based on both image-level and sequence-level information. To the best of our knowledge, we collected the largest dataset with 96,632 IVCM images in total with expert labeling to train and evaluate our method. The specificity and sensitivity of our method in diagnosing FK on the unseen test set achieved 96.65% and 97.57%, comparable to or better than experienced ophthalmologists. The network can provide image-level, sequence-level and patient-level diagnostic suggestions to physicians. The results show great promise for assisting ophthalmologists in FK diagnosis.

PMID:39117709 | DOI:10.1038/s41598-024-68768-y

Categories: Literature Watch

A 3D and Explainable Artificial Intelligence Model for Evaluation of Chronic Otitis Media Based on Temporal Bone Computed Tomography: Model Development, Validation, and Clinical Application

Thu, 2024-08-08 06:00

J Med Internet Res. 2024 Aug 8;26:e51706. doi: 10.2196/51706.

ABSTRACT

BACKGROUND: Temporal bone computed tomography (CT) helps diagnose chronic otitis media (COM). However, its interpretation requires training and expertise. Artificial intelligence (AI) can help clinicians evaluate COM through CT scans, but existing models lack transparency and may not fully leverage multidimensional diagnostic information.

OBJECTIVE: We aimed to develop an explainable AI system based on 3D convolutional neural networks (CNNs) for automatic CT-based evaluation of COM.

METHODS: Temporal bone CT scans were retrospectively obtained from patients operated for COM between December 2015 and July 2021 at 2 independent institutes. A region of interest encompassing the middle ear was automatically segmented, and 3D CNNs were subsequently trained to identify pathological ears and cholesteatoma. An ablation study was performed to refine model architecture. Benchmark tests were conducted against a baseline 2D model and 7 clinical experts. Model performance was measured through cross-validation and external validation. Heat maps, generated using Gradient-Weighted Class Activation Mapping, were used to highlight critical decision-making regions. Finally, the AI system was assessed with a prospective cohort to aid clinicians in preoperative COM assessment.

RESULTS: Internal and external data sets contained 1661 and 108 patients (3153 and 211 eligible ears), respectively. The 3D model exhibited decent performance with mean areas under the receiver operating characteristic curves of 0.96 (SD 0.01) and 0.93 (SD 0.01), and mean accuracies of 0.878 (SD 0.017) and 0.843 (SD 0.015), respectively, for detecting pathological ears on the 2 data sets. Similar outcomes were observed for cholesteatoma identification (mean area under the receiver operating characteristic curve 0.85, SD 0.03 and 0.83, SD 0.05; mean accuracies 0.783, SD 0.04 and 0.813, SD 0.033, respectively). The proposed 3D model achieved a commendable balance between performance and network size relative to alternative models. It significantly outperformed the 2D approach in detecting COM (P≤.05) and exhibited a substantial gain in identifying cholesteatoma (P<.001). The model also demonstrated superior diagnostic capabilities over resident fellows and the attending otologist (P<.05), rivaling all senior clinicians in both tasks. The generated heat maps properly highlighted the middle ear and mastoid regions, aligning with human knowledge in interpreting temporal bone CT. The resulting AI system achieved an accuracy of 81.8% in generating preoperative diagnoses for 121 patients and contributed to clinical decision-making in 90.1% of cases.

CONCLUSIONS: We present a 3D CNN model trained to detect pathological changes and identify cholesteatoma via temporal bone CT scans. In both tasks, this model significantly outperforms the baseline 2D approach, achieving levels comparable with or surpassing those of human experts. The model also exhibits decent generalizability and enhanced comprehensibility. This AI system facilitates automatic COM assessment and shows promising viability in real-world clinical settings. These findings underscore AI's potential as a valuable aid for clinicians in COM evaluation.

TRIAL REGISTRATION: Chinese Clinical Trial Registry ChiCTR2000036300; https://www.chictr.org.cn/showprojEN.html?proj=58685.

PMID:39116439 | DOI:10.2196/51706

Categories: Literature Watch

Industry 4.0 Technologies in Maternal Health Care: Bibliometric Analysis and Research Agenda

Thu, 2024-08-08 06:00

JMIR Pediatr Parent. 2024 Aug 8;7:e47848. doi: 10.2196/47848.

ABSTRACT

BACKGROUND: Industry 4.0 (I4.0) technologies have improved operations in health care facilities by optimizing processes, leading to efficient systems and tools to assist health care personnel and patients.

OBJECTIVE: This study investigates the current implementation and impact of I4.0 technologies within maternal health care, explicitly focusing on transforming care processes, treatment methods, and automated pregnancy monitoring. Additionally, it conducts a thematic landscape mapping, offering a nuanced understanding of this emerging field. Building on this analysis, a future research agenda is proposed, highlighting critical areas for future investigations.

METHODS: A bibliometric analysis of publications retrieved from the Scopus database was conducted to examine how the research into I4.0 technologies in maternal health care evolved from 1985 to 2022. A search strategy was used to screen the eligible publications using the abstract and full-text reading. The most productive and influential journals; authors', institutions', and countries' influence on maternal health care; and current trends and thematic evolution were computed using the Bibliometrix R package (R Core Team).

RESULTS: A total of 1003 unique papers in English were retrieved using the search string, and 136 papers were retained after the inclusion and exclusion criteria were implemented, covering 37 years from 1985 to 2022. The annual growth rate of publications was 9.53%, with 88.9% (n=121) of the publications observed in 2016-2022. In the thematic analysis, 4 clusters were identified-artificial neural networks, data mining, machine learning, and the Internet of Things. Artificial intelligence, deep learning, risk prediction, digital health, telemedicine, wearable devices, mobile health care, and cloud computing remained the dominant research themes in 2016-2022.

CONCLUSIONS: This bibliometric analysis reviews the state of the art in the evolution and structure of I4.0 technologies in maternal health care and how they may be used to optimize the operational processes. A conceptual framework with 4 performance factors-risk prediction, hospital care, health record management, and self-care-is suggested for process improvement. A research agenda is also proposed for governance, adoption, infrastructure, privacy, and security.

PMID:39116433 | DOI:10.2196/47848

Categories: Literature Watch

DDSBC: A Stacking Ensemble Classifier-Based Approach for Breast Cancer Drug-Pair Cell Synergy Prediction

Thu, 2024-08-08 06:00

J Chem Inf Model. 2024 Aug 8. doi: 10.1021/acs.jcim.4c01101. Online ahead of print.

ABSTRACT

Breast cancer (BC) ranks as a leading cause of mortality among women worldwide, with incidence rates continuing to rise. The quest for effective treatments has led to the adoption of drug combination therapy, aiming to enhance drug efficacy. However, identifying synergistic drug combinations remains a daunting challenge due to the myriad of potential drug pairs. Current research leverages machine learning (ML) and deep learning (DL) models for drug-pair synergy prediction and classification. Nevertheless, these models often underperform on specific cancer types, including BC, as they are trained on data spanning various cancers without any specialization. Here, we introduce a stacking ensemble classifier, the drug-drug synergy for breast cancer (DDSBC), tailored explicitly for BC drug-pair cell synergy classification. Unlike existing models that generalize across cancer types, DDSBC is exclusively developed for BC, offering a more focused approach. Our comparative analysis against classical ML methods as well as DL models developed for drug synergy prediction highlights DDSBC's superior performance across test and independent datasets on BC data. Despite certain metrics where other methods narrowly surpass DDSBC by 1-2%, DDSBC consistently emerges as the top-ranked model, showcasing significant differences in scoring metrics and robust performance in ablation studies. DDSBC's performance and practicality position it as a preferred choice or an adjunctive validation tool for identifying synergistic or antagonistic drug pairs in BC, providing valuable insights for treatment strategies.
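The general stacking-ensemble pattern behind DDSBC (base learners whose outputs feed a meta-learner) can be sketched with scikit-learn; the base models, meta-learner, and synthetic data below are assumptions for illustration, not the actual DDSBC configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for drug-pair feature vectors (1 = synergistic).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner on base outputs
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-learner sees the cross-validated predictions of the base models rather than the raw features, which is what lets a stack outperform any single base learner.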

PMID:39116326 | DOI:10.1021/acs.jcim.4c01101

Categories: Literature Watch

Chicken swarm optimization modelling for cognitive radio networks using deep belief network-enabled spectrum sensing technique

Thu, 2024-08-08 06:00

PLoS One. 2024 Aug 8;19(8):e0305987. doi: 10.1371/journal.pone.0305987. eCollection 2024.

ABSTRACT

Cognitive radio networks (CRN) enable wireless devices to sense the radio spectrum, determine the state of frequency channels, and reconfigure communication parameters to satisfy Quality of Service (QoS) needs while reducing energy utilization. In CRN, spectrum sensing is an essential but highly challenging process that can be addressed by several traditional techniques, such as energy detection and matched filtering. At present, the performance of existing models is limited by the comparatively low Signal to Noise Ratio (SNR) of the received signals and the small number of conventional signal samples. This research proposes a new spectrum sensing technique for cognitive radio networks (SST-CRN) that addresses the drawbacks of conventional energy detection models. Using a deep belief network (DBN), the proposed model achieves a nonlinear threshold based on the chicken swarm algorithm (CSA). The proposed DBN-enabled SST-CRN technique proceeds through two phases: offline and online. During the offline phase, the DBN model is systematically trained on pre-gathered data, learning to identify patterns in the spectral features of the radio environment. This stage involves extensive feature extraction, validation, and model development to ensure that the DBN can accurately represent complex spectral dynamics. Online spectrum sensing is then conducted during the actual communication phase to enable real-time adaptation to dynamic changes in the spectrum environment, whereas offline spectrum sensing is typically performed during a dedicated sensing period before communication begins. Combining the deep learning capabilities of the DBN with the nature-inspired CSA creates a synergistic framework that enables CRNs to sense and allocate frequencies autonomously with high accuracy. The proposed solution considerably improves the spectrum efficiency and resilience of CRNs by harnessing the power of the DBN, leading to more effective resource utilization and less interference. Simulation results show that the proposed strategy produces more accurate spectrum occupancy assessments. At an SNR of -24 dB, the SST-CRN model achieved a probability of detection (Pd) of 0.810, whereas the existing methods RMLSSCRN-100 and RMLSSCRN-300 achieved lower Pd values of 0.577 and 0.736, respectively. Our deep learning methodology uses convolutional neural networks to automatically learn and adapt to dynamic and complicated radio environments, improving accuracy and flexibility over classical spectrum sensing approaches. Future research might focus on improving the CSA to better optimize the spectrum sensing process and on enhancing the reliability of DBN-enabled sensing techniques.
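For context, the classical energy-detection baseline that SST-CRN improves upon can be sketched in a few lines; the fixed threshold and synthetic signal here are illustrative, whereas SST-CRN learns a nonlinear threshold via the DBN and the chicken swarm algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detect(samples, threshold):
    """Classical energy detector: declare the channel occupied when the
    average sample energy exceeds a fixed threshold."""
    return float(np.mean(np.abs(samples) ** 2)) > threshold

# Idle channel: unit-variance noise (mean energy ~1).  Occupied channel:
# a sinusoid of amplitude 2 plus the same noise (mean energy ~3).
n = 1000
noise = rng.normal(0.0, 1.0, n)
signal = noise + 2.0 * np.sin(np.arange(n))

busy = energy_detect(signal, threshold=2.0)
idle = energy_detect(noise, threshold=2.0)
```

The weakness this abstract targets is visible here: a fixed threshold works only when the noise level is known, and at very low SNR the two energy distributions overlap, which is what motivates a learned, adaptive threshold.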

PMID:39116190 | DOI:10.1371/journal.pone.0305987

Categories: Literature Watch

Fruit-In-Sight: A deep learning-based framework for secondary metabolite class prediction using fruit and leaf images

Thu, 2024-08-08 06:00

PLoS One. 2024 Aug 8;19(8):e0308708. doi: 10.1371/journal.pone.0308708. eCollection 2024.

ABSTRACT

Fruits produce a wide variety of secondary metabolites of great economic value. Analytical measurement of the metabolites is tedious, time-consuming, and expensive. Additionally, metabolite concentrations vary greatly from tree to tree, making it difficult to choose trees for fruit collection. The current study tested whether deep learning-based models can be developed using fruit and leaf images alone to predict a metabolite's concentration class (high or low). We collected fruits and leaves (n = 1045) from neem trees grown in the wild across 0.6 million sq km, imaged them, and measured concentration of five metabolites (azadirachtin, deacetyl-salannin, salannin, nimbin and nimbolide) using high-performance liquid chromatography. We used the data to train deep learning models for metabolite class prediction. The best model out of the seven tested (YOLOv5, GoogLeNet, InceptionNet, EfficientNet_B0, Resnext_50, Resnet18, and SqueezeNet) provided a validation F1 score of 0.93 and a test F1 score of 0.88. The sensitivity and specificity of the fruit model alone in the test set were 83.52 ± 6.19 and 82.35 ± 5.96, and 79.40 ± 8.50 and 85.64 ± 6.21, for the low and the high classes, respectively. The sensitivity was further boosted to 92.67 ± 5.25 for the low class and 88.11 ± 9.17 for the high class, and the specificity to 100% for both classes, using a multi-analyte framework. We incorporated the multi-analyte model in an Android mobile app, Fruit-In-Sight, that uses fruit and leaf images to decide whether to 'pick' or 'not pick' the fruits from a specific tree based on the metabolite concentration class. Our study provides evidence that images of fruits and leaves alone can predict the concentration class of a secondary metabolite without using expensive laboratory equipment and cumbersome analytical procedures, thus simplifying the process of choosing the right tree for fruit collection.

PMID:39116159 | DOI:10.1371/journal.pone.0308708

Categories: Literature Watch

A new protocol for multispecies bacterial infections in zebrafish and their monitoring through automated image analysis

Thu, 2024-08-08 06:00

PLoS One. 2024 Aug 8;19(8):e0304827. doi: 10.1371/journal.pone.0304827. eCollection 2024.

ABSTRACT

The zebrafish Danio rerio has become a popular model host to explore disease pathology caused by infectious agents. A main advantage is its transparency at an early age, which enables live imaging of infection dynamics. While multispecies infections are common in patients, the zebrafish model is rarely used to study them, although the model would be ideal for investigating pathogen-pathogen and pathogen-host interactions. This may be due to the absence of an established multispecies infection protocol for a defined organ and the lack of suitable image analysis pipelines for automated image processing. To address these issues, we developed a protocol for establishing and tracking single and multispecies bacterial infections in the inner ear structure (otic vesicle) of the zebrafish by imaging. Subsequently, we generated an image analysis pipeline that involved deep learning for the automated segmentation of the otic vesicle, and scripts for quantifying pathogen frequencies through fluorescence intensity measures. We used Pseudomonas aeruginosa, Acinetobacter baumannii, and Klebsiella pneumoniae, three of the difficult-to-treat ESKAPE pathogens, to show that our infection protocol and image analysis pipeline work both for single pathogens and pairwise pathogen combinations. Thus, our protocols provide a comprehensive toolbox for studying single and multispecies infections in real-time in zebrafish.
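Quantifying pathogen frequencies through fluorescence intensity, as the pipeline above does, essentially reduces to summing each species' channel signal inside the segmented otic vesicle and normalizing. A minimal stdlib-Python sketch of that step; the mask/channel data layout and function name are assumptions for illustration, not the authors' code:

```python
def pathogen_frequencies(mask, channels):
    """Relative abundance of each fluorescently labelled species inside a mask.

    mask     : 2D list of 0/1 values from the otic-vesicle segmentation
    channels : dict mapping species name -> 2D list of fluorescence intensities
    """
    totals = {}
    for species, image in channels.items():
        totals[species] = sum(
            px * m for row, mrow in zip(image, mask) for px, m in zip(row, mrow)
        )
    grand = sum(totals.values()) or 1.0
    return {species: t / grand for species, t in totals.items()}

# Toy 2x2 example with two labelled species
mask = [[0, 1], [1, 1]]
channels = {
    "P. aeruginosa": [[5.0, 3.0], [1.0, 2.0]],   # e.g. a GFP channel
    "K. pneumoniae": [[0.0, 1.0], [2.0, 3.0]],   # e.g. an mCherry channel
}
print(pathogen_frequencies(mask, channels))
```

In practice the mask would come from the deep-learning segmentation of the otic vesicle and the channels from the confocal image stack, but the normalization logic is the same.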

PMID:39116043 | DOI:10.1371/journal.pone.0304827

Categories: Literature Watch

Artificial intelligence methods available for cancer research

Thu, 2024-08-08 06:00

Front Med. 2024 Aug 8. doi: 10.1007/s11684-024-1085-3. Online ahead of print.

ABSTRACT

Cancer is a heterogeneous and multifaceted disease with a significant global footprint. Despite substantial technological advancements for battling cancer, early diagnosis and selection of effective treatment remains a challenge. With the availability of large-scale datasets spanning multiple levels of data, new bioinformatic tools are needed to transform this wealth of information into clinically useful decision-support tools. In this field, artificial intelligence (AI) technologies with their highly diverse applications are rapidly gaining ground. Machine learning methods, such as Bayesian networks, support vector machines, decision trees, random forests, gradient boosting, and K-nearest neighbors, including neural network models like deep learning, have proven valuable in predictive, prognostic, and diagnostic studies. Researchers have recently employed large language models to tackle new dimensions of problems. However, leveraging the opportunity to utilize AI in clinical settings will require surpassing significant obstacles; a major issue is the limited use of available reporting guidelines, which obstructs the reproducibility of published studies. In this review, we discuss the applications of AI methods and explore their benefits and limitations. We summarize the available guidelines for AI in healthcare and highlight the potential role and impact of AI models on future directions in cancer research.
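Of the machine learning methods listed above, K-nearest neighbors is the simplest to illustrate: a sample is assigned the majority label of its k closest training points. A minimal stdlib-Python sketch on toy data (the points and labels are invented for illustration, not from any cancer dataset):

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote of its k nearest training samples."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2D feature space with two classes
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(train, labels, (0.95, 1.0)))
```

The other listed methods (SVMs, random forests, gradient boosting, deep networks) trade this transparency for greater capacity, which is one reason reporting guidelines matter for reproducibility.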

PMID:39115792 | DOI:10.1007/s11684-024-1085-3

Categories: Literature Watch

PolypNextLSTM: a lightweight and fast polyp video segmentation network using ConvNext and ConvLSTM

Thu, 2024-08-08 06:00

Int J Comput Assist Radiol Surg. 2024 Aug 8. doi: 10.1007/s11548-024-03244-6. Online ahead of print.

ABSTRACT

PURPOSE: Commonly employed in polyp segmentation, single-image UNet architectures lack the temporal insight clinicians gain from video data in diagnosing polyps. To mirror clinical practices more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with least parameter overhead, making it possibly suitable for edge devices.

METHODS: PolypNextLSTM employs a UNet-like structure with ConvNext-Tiny as its backbone, strategically omitting the last two layers to reduce parameter overhead. Our temporal fusion module, a Convolutional Long Short Term Memory (ConvLSTM), effectively exploits temporal features. Our primary novelty lies in PolypNextLSTM, which stands out as the leanest in parameters and the fastest model, surpassing the performance of five state-of-the-art image-based and video-based deep learning models. The evaluation on the SUN-SEG dataset spans easy-to-detect and hard-to-detect polyp scenarios, along with videos containing challenging artefacts like fast motion and occlusion.

RESULTS: Comparison against 5 image-based and 5 video-based models demonstrates PolypNextLSTM's superiority, achieving a Dice score of 0.7898 on the hard-to-detect polyp test set, surpassing image-based PraNet (0.7519) and video-based PNS+ (0.7486). Notably, our model excels in videos featuring complex artefacts such as ghosting and occlusion.
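The Dice scores compared above are the standard overlap metric between a predicted and a ground-truth segmentation mask, Dice = 2|A∩B| / (|A| + |B|). A minimal stdlib-Python sketch for binary masks given as flat 0/1 lists (illustrative, not the authors' evaluation code):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 6-pixel masks: 2 overlapping foreground pixels out of 3 each
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice_score(pred, truth))
```

A Dice of 0.7898 versus 0.7519 thus means PolypNextLSTM's predicted masks share proportionally more foreground pixels with the ground truth than PraNet's on the hard-to-detect test set.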

CONCLUSION: PolypNextLSTM, integrating pruned ConvNext-Tiny with ConvLSTM for temporal fusion, not only exhibits superior segmentation performance but also maintains the highest frames per second among evaluated models. Code can be found here: https://github.com/mtec-tuhh/PolypNextLSTM.

PMID:39115609 | DOI:10.1007/s11548-024-03244-6

Categories: Literature Watch

Attention-based approach to predict drug-target interactions across seven target superfamilies

Thu, 2024-08-08 06:00

Bioinformatics. 2024 Aug 8:btae496. doi: 10.1093/bioinformatics/btae496. Online ahead of print.

ABSTRACT

MOTIVATION: Drug-target interactions (DTIs) hold a pivotal role in drug repurposing and elucidation of drug mechanisms of action. While single-targeted drugs have demonstrated clinical success, they often exhibit limited efficacy against complex diseases, such as cancers, whose development and treatment are dependent on several biological processes. Therefore, a comprehensive understanding of primary, secondary and even inactive targets becomes essential in the quest for effective and safe treatments for cancer and other indications. The human proteome offers over a thousand druggable targets, yet most FDA-approved drugs bind to only a small fraction of these targets.

RESULTS: This study introduces an attention-based method (called MMAtt-DTA) to predict drug-target bioactivities across human proteins within seven superfamilies. We meticulously examined nine different descriptor sets to identify optimal signature descriptors for predicting novel DTIs. Our testing results demonstrated Spearman correlations exceeding 0.72 (P < 0.001) for six out of seven superfamilies. The proposed method outperformed fourteen state-of-the-art machine learning, deep learning and graph-based methods and maintained relatively high performance for most target superfamilies when tested with independent bioactivity data sources. We computationally validated 185,676 drug-target pairs from ChEMBL-V33 that were not available during model training, achieving a reasonable performance with Spearman correlation greater than 0.57 (P < 0.001) for most superfamilies. This underscores the robustness of the proposed method for predicting novel DTIs. Finally, we applied our method to predict missing bioactivities among 3,492 approved molecules in ChEMBL-V33, offering a valuable tool for advancing drug mechanism discovery and repurposing existing drugs for new indications.

AVAILABILITY: https://github.com/AronSchulman/MMAtt-DTA.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:39115379 | DOI:10.1093/bioinformatics/btae496

Categories: Literature Watch
