Deep learning

PlantNh-Kcr: a deep learning model for predicting non-histone crotonylation sites in plants

Thu, 2024-02-15 06:00

Plant Methods. 2024 Feb 15;20(1):28. doi: 10.1186/s13007-024-01157-8.

ABSTRACT

BACKGROUND: Lysine crotonylation (Kcr) is a crucial protein post-translational modification found in histone and non-histone proteins. It plays a pivotal role in regulating diverse biological processes in both animals and plants, including gene transcription and replication, cell metabolism and differentiation, and photosynthesis. Despite the significance of Kcr, detection of Kcr sites through biological experiments is often time-consuming and expensive, and only a fraction of crotonylated peptides can be identified. This reality highlights the need for efficient and rapid prediction of Kcr sites through computational methods. Currently, several machine learning models exist for predicting Kcr sites in humans, yet models tailored for plants are rare. Furthermore, no downloadable Kcr site predictors or datasets have been developed specifically for plants. To address this gap, it is imperative to integrate the Kcr sites already detected in plant experiments and establish a dedicated computational model for plants.

RESULTS: Most plant Kcr sites are located on non-histones. In this study, we collected non-histone Kcr sites from five plants: wheat, tobacco, rice, peanut, and papaya. We then conducted a comprehensive analysis of the amino acid distribution surrounding these sites. To develop a predictive model for plant non-histone Kcr sites, we combined a convolutional neural network (CNN), a bidirectional long short-term memory network (BiLSTM), and an attention mechanism to build a deep learning model called PlantNh-Kcr. In both five-fold cross-validation and independent tests, PlantNh-Kcr outperformed multiple conventional machine learning models and other deep learning models. Furthermore, we analyzed species-specific effects on the PlantNh-Kcr model and found that a general model trained on data from multiple species outperforms species-specific models.
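Sequence-based Kcr-site predictors such as the one described here typically take a fixed-length peptide window centered on a candidate lysine and one-hot encode it before the CNN. The sketch below is a hypothetical illustration of that input encoding; the window length (21) and the padding character '-' are assumptions, not details from the abstract.

```python
# Hypothetical one-hot encoding of a peptide window centered on a lysine.
# 20 standard amino acids plus '-' for padding at sequence termini.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY-"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_window(peptide: str) -> list[list[int]]:
    """Encode a peptide window as a (len x 21) one-hot matrix."""
    matrix = []
    for residue in peptide.upper():
        row = [0] * len(AMINO_ACIDS)
        # Unknown residues fall back to the padding column.
        row[AA_INDEX.get(residue, AA_INDEX["-"])] = 1
        matrix.append(row)
    return matrix

# 21-residue window with the candidate lysine (K) in the center.
window = "MSTNPKPQRK" + "K" + "TKRSFSANEM"
encoded = one_hot_window(window)
print(len(encoded), len(encoded[0]))  # 21 21
```

A matrix of this shape is what a 1D CNN layer would consume, with the BiLSTM and attention layers operating on the CNN's feature maps.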

CONCLUSION: PlantNh-Kcr represents a valuable tool for predicting plant non-histone Kcr sites. We expect that this model will aid in addressing key challenges and tasks in the study of plant crotonylation sites.

PMID:38360730 | DOI:10.1186/s13007-024-01157-8

Categories: Literature Watch

StairNet: visual recognition of stairs for human-robot locomotion

Thu, 2024-02-15 06:00

Biomed Eng Online. 2024 Feb 15;23(1):20. doi: 10.1186/s12938-024-01216-0.

ABSTRACT

Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to develop the StairNet initiative to support the development of new deep learning models for visual perception of real-world stair environments. In this study, we present a comprehensive overview of the StairNet initiative and key research to date. First, we summarize the development of our large-scale data set with over 515,000 manually labeled images. We then provide a summary and detailed comparison of the performances achieved with different algorithms (i.e., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks), training methods (i.e., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images), and deployment methods (i.e., mobile and embedded computing), using the StairNet data set. Finally, we discuss the challenges and future directions. To date, our StairNet models have consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference times as fast as 2.8 ms. In comparison, when deployed on our custom-designed CPU-powered smart glasses, our models yielded a slower inference time of 1.5 s, presenting a trade-off between human-centered design and performance. Overall, the results of numerous experiments presented herein provide consistent evidence that StairNet can be an effective platform to develop and study new deep learning models for visual perception of human-robot walking environments, with an emphasis on stair recognition. This research aims to support the development of next-generation vision-based control systems for robotic prosthetic legs, exoskeletons, and other mobility assistive technologies.

PMID:38360664 | DOI:10.1186/s12938-024-01216-0

Categories: Literature Watch

A foundation for evaluating the surgical artificial intelligence literature

Thu, 2024-02-15 06:00

Eur J Surg Oncol. 2024 Feb 10:108014. doi: 10.1016/j.ejso.2024.108014. Online ahead of print.

ABSTRACT

With increasing growth in applications of artificial intelligence (AI) in surgery, it has become essential for surgeons to gain a foundation of knowledge to critically appraise the scientific literature, commercial claims regarding products, and regulatory and legal frameworks that govern the development and use of AI. This guide offers surgeons a framework with which to evaluate manuscripts that incorporate the use of AI. It provides a glossary of common terms, an overview of prerequisite knowledge to maximize understanding of methodology, and recommendations on how to carefully consider each element of a manuscript to assess the quality of the data on which an algorithm was trained, the appropriateness of the methodological approach, the potential for reproducibility of the experiment, and the applicability to surgical practice, including considerations on generalizability and scalability.

PMID:38360498 | DOI:10.1016/j.ejso.2024.108014

Categories: Literature Watch

Estimating four-decadal variations of seagrass distribution using satellite data and deep learning methods in a marine lagoon

Thu, 2024-02-15 06:00

Sci Total Environ. 2024 Feb 13:170936. doi: 10.1016/j.scitotenv.2024.170936. Online ahead of print.

ABSTRACT

Seagrasses are marine flowering plants that inhabit shallow coastal and estuarine waters and serve vital ecological functions in marine ecosystems. However, seagrass ecosystems face the looming threat of degradation, necessitating effective monitoring. Remote-sensing technology offers significant advantages in terms of spatial coverage and temporal accessibility. Although some remote-sensing approaches, such as water-column correction, spectral-index-based, and machine-learning-based methods, have been proposed for seagrass detection, their performance is not always satisfactory. Deep learning models, known for their powerful learning and vast data-processing capabilities, have been widely employed in automatic target detection. In this study, a typical seagrass habitat (Swan Lake) in northern China was used to develop a deep-learning-based model for seagrass detection from Landsat satellite data. The performances of UNet and SegNet at different patch scales for seagrass detection were compared. The results showed that the SegNet model at a patch scale of 16 × 16 pixels worked well, with a validation accuracy of 96.3 % and a loss of 0.15 during training. Evaluations based on the test dataset also indicated good performance of this model, with an overall accuracy >95 %. Subsequently, the deep learning model was applied for seagrass detection in Swan Lake between 1984 and 2022. We observed a noticeable seasonal variation in germination, growth, maturation, and shrinkage from spring to winter. The seagrass area decreased over the past four decades, punctuated by intermittent fluctuations likely attributable to anthropogenic activities such as aquaculture and construction development. Additionally, changes in landscape-ecology indicators demonstrated that the seagrass has experienced severe patchiness, although this problem has lessened recently. Overall, by combining remote-sensing big data with deep learning technology, our study provides a valuable approach for the highly precise monitoring of seagrass. These findings on seagrass area variation in Swan Lake offer significant information for seagrass restoration and management.
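The patch-based workflow described above tiles each Landsat scene into fixed-size patches before segmentation. A minimal sketch of that tiling step, using the 16 × 16 patch size the study found best (the band count and scene size below are assumptions for illustration):

```python
import numpy as np

def tile_image(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into (N, patch, patch, C) tiles.
    H and W are cropped to the nearest multiple of `patch`."""
    h, w, c = image.shape
    h, w = h - h % patch, w - w % patch
    tiles = (image[:h, :w]
             .reshape(h // patch, patch, w // patch, patch, c)
             .swapaxes(1, 2)           # group the two patch-grid axes together
             .reshape(-1, patch, patch, c))
    return tiles

scene = np.zeros((100, 70, 6))         # hypothetical 6-band Landsat chip
patches = tile_image(scene)
print(patches.shape)                   # (24, 16, 16, 6)
```

Each tile would then be fed to the segmentation network (SegNet in the study), and the per-tile predictions stitched back into a full scene mask.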

PMID:38360328 | DOI:10.1016/j.scitotenv.2024.170936

Categories: Literature Watch

Use of artificial intelligence and I-score for prediction of recurrence before catheter ablation of atrial fibrillation

Thu, 2024-02-15 06:00

Int J Cardiol. 2024 Feb 13:131851. doi: 10.1016/j.ijcard.2024.131851. Online ahead of print.

ABSTRACT

BACKGROUND: Based solely on pre-ablation characteristics, previous risk scores have demonstrated variable predictive performance. This study aimed to predict the recurrence of AF after catheter ablation by using artificial intelligence (AI)-enabled pre-ablation computed tomography (PVCT) images and pre-ablation clinical data.

METHODS: A total of 638 drug-refractory paroxysmal atrial fibrillation (AF) patients who underwent ablation were recruited. For model training, we used left atrium (LA) images acquired from pre-ablation PVCT slices (126,288 images). A total of 29 clinical variables were collected before ablation, including baseline characteristics, medical histories, laboratory results, transthoracic echocardiographic parameters, and 3D-reconstructed LA volumes. The I-Score was applied to select variables for model training. For the prediction of one-year AF recurrence, a PVCT deep-learning model and a clinical-variable machine-learning model were developed. We then applied machine learning to ensemble the PVCT and clinical-variable models.

RESULTS: The PVCT model achieved an AUC of 0.63 in the test set. Various combinations of clinical variables selected by I-Score can yield an AUC of 0.72, which is significantly better than all variables or features selected by nonparametric statistics (AUCs of 0.66 to 0.69). The ensemble model (PVCT images and clinical variables) significantly improved predictive performance up to an AUC of 0.76 (sensitivity of 86.7%, and specificity of 51.0%).
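The ensemble step combines the probability outputs of the image model and the clinical-variable model. The sketch below shows a generic score-level ensemble (a simple weighted average) together with a rank-based AUC computation; this is a common technique for illustration, not the paper's exact ensembling method, and the scores and weight are made up.

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (ties get half credit)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(p_image, p_clinical, w=0.5):
    """Weighted average of two models' predicted probabilities."""
    return [w * a + (1 - w) * b for a, b in zip(p_image, p_clinical)]

# Hypothetical predicted recurrence probabilities from the two models.
labels     = [1, 1, 0, 0, 1, 0]
p_image    = [0.6, 0.4, 0.5, 0.3, 0.7, 0.6]
p_clinical = [0.8, 0.5, 0.4, 0.2, 0.6, 0.5]
print(auc(ensemble(p_image, p_clinical), labels))
```

In practice the combination itself can also be learned (e.g. by a stacking classifier), which is closer to the "applied machine learning to ensemble" phrasing in the methods.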

CONCLUSIONS: Before ablation, AI-enabled PVCT combined with I-Score-selected features was applicable for predicting recurrence in paroxysmal AF patients. Among all possible predictors, the I-Score is capable of identifying the most influential combination.

PMID:38360099 | DOI:10.1016/j.ijcard.2024.131851

Categories: Literature Watch

Image2InChI: Automated Molecular Optical Image Recognition

Thu, 2024-02-15 06:00

J Chem Inf Model. 2024 Feb 15. doi: 10.1021/acs.jcim.3c02082. Online ahead of print.

ABSTRACT

The accurate identification and analysis of chemical structures in molecular images are prerequisites of artificial intelligence for drug discovery. It is important to convert molecular images into machine-readable representations efficiently and automatically. Therefore, in this paper, we propose an automated molecular optical image recognition model based on deep learning, called Image2InChI. Image2InChI introduces a novel attention-based feature fusion network to integrate image-patch features with InChI prediction. An improved Swin Transformer serves as the encoder, and a Transformer decoder with patch embedding serves as the decoder, mapping image features to the corresponding InChI. The experimental results showed that the Image2InChI model achieves an InChI accuracy (InChI acc) of 99.8%, a Morgan fingerprint similarity (Morgan FP) of 94.1%, a maximum-common-structure accuracy (MCS acc) of 94.8%, and a longest-common-subsequence accuracy (LCS acc) of 96.2%. The experiments demonstrate that the proposed Image2InChI model improves the accuracy and efficiency of molecular image recognition and provides a valuable reference for optical chemical structure recognition targeting InChI.
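One of the reported metrics, LCS accuracy, scores a predicted InChI string against the reference by their longest common subsequence. A minimal sketch of that computation follows; the exact normalization Image2InChI uses is not stated in the abstract, so dividing by the reference length is an assumption, and the example strings are hypothetical.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

ref  = "InChI=1S/CH4/h1H4"
pred = "InChI=1S/CH4O/h1H4"   # hypothetical near-miss prediction
# Reference is a subsequence of the prediction, so the score is 1.0.
print(lcs_length(ref, pred) / len(ref))
```

Character-level scores like this reward predictions that are nearly right, whereas InChI accuracy counts only exact matches.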

PMID:38359459 | DOI:10.1021/acs.jcim.3c02082

Categories: Literature Watch

Coupling speckle noise suppression with image classification for deep-learning-aided ultrasound diagnosis

Thu, 2024-02-15 06:00

Phys Med Biol. 2024 Feb 15. doi: 10.1088/1361-6560/ad29bb. Online ahead of print.

ABSTRACT

During deep-learning-aided (DL-aided) ultrasound (US) diagnosis, US image classification is a foundational task. Due to the existence of serious speckle noise in US images, the performance of DL models may be degraded. Pre-denoising US images before their use in DL models is usually a logical choice. However, our investigation suggests that pre-speckle-denoising is not consistently advantageous. Furthermore, due to the decoupling of speckle denoising from the subsequent DL classification, investing intensive time in parameter tuning is inevitable to attain the optimal denoising parameters for various datasets and DL models. Pre-denoising will also add extra complexity to the classification task and make it no longer end-to-end. 
Approach. In this work, we propose a multi-scale high-frequency-based feature augmentation (MSHFFA) module that couples feature augmentation and speckle noise suppression within specific DL models, preserving an end-to-end fashion. In MSHFFA, the input US image is first decomposed into multi-scale low-frequency and high-frequency components (LFC and HFC) with the discrete wavelet transform. Then, multi-scale augmentation maps are obtained by computing the correlation between the LFC and HFC. Finally, the original DL model features are augmented with the multi-scale augmentation maps.
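The first step of the module, splitting the image into low- and high-frequency components with the discrete wavelet transform, can be sketched with a single-level 2D Haar decomposition. This shows only the wavelet split under a common normalization convention; the actual module is multi-scale and trained end-to-end with the classifier.

```python
import numpy as np

def haar2d(img: np.ndarray):
    """Single-level 2D Haar decomposition of an image with even H and W.
    Returns the approximation (LFC) and the three detail subbands (HFC)."""
    tl, bl = img[0::2, 0::2], img[1::2, 0::2]   # 2x2 block corners
    tr, br = img[0::2, 1::2], img[1::2, 1::2]
    a = (tl + bl + tr + br) / 4                  # approximation (LFC)
    h = (tl + tr - bl - br) / 4                  # horizontal detail
    v = (tl - tr + bl - br) / 4                  # vertical detail
    d = (tl - tr - bl + br) / 4                  # diagonal detail
    return a, (h, v, d)

img = np.arange(16.0).reshape(4, 4)              # toy 4x4 "US image"
lfc, hfc = haar2d(img)
print(lfc.shape)                                 # (2, 2)
```

Repeating the decomposition on each approximation yields the multi-scale LFC/HFC pyramid from which the augmentation maps are computed.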
Main results. On two public US datasets, all six renowned DL models exhibited enhanced F1-scores compared with their original versions (by 1.31% to 8.17% on the POCUS dataset and 0.46% to 3.89% on the BLU dataset) after using the MSHFFA module, with only approximately 1% increase in model parameter count. 
Significance. The proposed MSHFFA has broad applicability and commendable efficiency and thus can be used to enhance the performance of DL-aided US diagnosis. The codes are available at https://github.com/ResonWang/MSHFFA.

PMID:38359452 | DOI:10.1088/1361-6560/ad29bb

Categories: Literature Watch

An Approach to Detect and Predict Epileptic Seizures with High Accuracy Using Convolutional Neural Networks and Single-Lead-ECG Signal

Thu, 2024-02-15 06:00

Biomed Phys Eng Express. 2024 Feb 15. doi: 10.1088/2057-1976/ad29a3. Online ahead of print.

ABSTRACT

One of the challenges faced by epileptic patients is detecting the onset of seizures and, where possible, predicting them in advance. This research aims to provide a deep-learning-based algorithm to detect seizures and to predict them one to two minutes before they occur. The proposed Convolutional Neural Network (CNN) can detect and predict focal epileptic seizures by processing a single-lead ECG signal instead of EEG signals. The same CNN structure is used for both seizure detection and prediction. Considering the requirements of a wearable system, after a few light pre-processing steps, the ECG signal can be used as input to the neural network without any manual feature-extraction step. The network learns purposeful features from the labelled ECG signals and then classifies them. The 39-layer CNN was trained separately for seizure detection and for prediction. The proposed method can detect seizures with an accuracy of 98.84% and predict them with an accuracy of 94.29%. With this approach, the ECG signal is a promising indicator for building portable systems that monitor the status of epileptic patients.
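The "light pre-processing" before the CNN typically amounts to slicing the ECG into fixed-length windows and normalizing each one. The sketch below shows z-score normalization of overlapping windows; the window length and stride are assumptions for illustration, and no manual feature extraction is performed, matching the end-to-end design described above.

```python
def zscore_windows(signal, win, stride):
    """Slice `signal` into fixed-length windows and z-score each window."""
    windows = []
    for start in range(0, len(signal) - win + 1, stride):
        seg = signal[start:start + win]
        mean = sum(seg) / win
        var = sum((x - mean) ** 2 for x in seg) / win
        std = var ** 0.5 or 1.0        # guard against flat segments
        windows.append([(x - mean) / std for x in seg])
    return windows

ecg = [0.0, 0.1, 0.9, 0.2, 0.1, 0.0, 0.8, 0.1]   # toy single-lead trace
print(len(zscore_windows(ecg, win=4, stride=2)))  # 3
```

Each normalized window would then be labelled (pre-ictal, ictal, or inter-ictal) and fed directly to the CNN.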

PMID:38359446 | DOI:10.1088/2057-1976/ad29a3

Categories: Literature Watch

Enhanced multimodal biometric recognition systems based on deep learning and traditional methods in smart environments

Thu, 2024-02-15 06:00

PLoS One. 2024 Feb 15;19(2):e0291084. doi: 10.1371/journal.pone.0291084. eCollection 2024.

ABSTRACT

In the field of data security, biometric security is a significant emerging concern. Multimodal biometric systems with enhanced accuracy and detection rates for smart environments remain a significant challenge. The fusion of an electrocardiogram (ECG) signal with a fingerprint is an effective multimodal recognition system. In this work, unimodal and multimodal biometric systems using a Convolutional Neural Network (CNN) are developed and compared with traditional methods using different levels of fusion of fingerprint and ECG signals. This study is concerned with evaluating the effectiveness of the proposed parallel and sequential multimodal biometric systems with various feature extraction and classification methods. Additionally, the performance of unimodal ECG and fingerprint biometrics utilizing deep learning and traditional classification techniques is examined. The suggested biometric systems were evaluated using the ECG (MIT-BIH) and fingerprint (FVC2004) databases. Additional tests were conducted to examine the suggested models with: 1) a virtual dataset without augmentation (ODB) and 2) a virtual dataset with augmentation (VDB). The findings show that the optimal parallel multimodal system achieved an Area Under the ROC Curve (AUC) of 0.96 and the sequential multimodal system achieved an AUC of 0.99, in comparison to the unimodal biometrics, which achieved AUCs of 0.87 and 0.99 for the fingerprint and ECG biometrics, respectively. The overall performance of the proposed multimodal biometrics outperformed the unimodal biometrics using CNN. Moreover, the suggested CNN model for the ECG signal and the sequential multimodal system based on a neural network outperformed the other systems. Lastly, the performance of the proposed systems is compared with previously existing works.

PMID:38358992 | DOI:10.1371/journal.pone.0291084

Categories: Literature Watch

De novo prediction of RNA 3D structures with deep generative models

Thu, 2024-02-15 06:00

PLoS One. 2024 Feb 15;19(2):e0297105. doi: 10.1371/journal.pone.0297105. eCollection 2024.

ABSTRACT

We present a Deep Learning approach to predict 3D folding structures of RNAs from their nucleic acid sequence. Our approach combines an autoregressive Deep Generative Model, Monte Carlo Tree Search, and a score model to find and rank the most likely folding structures for a given RNA sequence. We show that RNA de novo structure prediction by deep learning is possible at atomic resolution, despite the low number of experimentally measured structures available for training. We confirm the predictive power of our approach by achieving competitive results in a retrospective evaluation of the RNA-Puzzles prediction challenges, without using structural contact information from multiple sequence alignments or additional data from chemical probing experiments. Blind predictions for recent RNA-Puzzles challenges under the name "Dfold" further support the competitive performance of our approach.

PMID:38358972 | DOI:10.1371/journal.pone.0297105

Categories: Literature Watch

NormAUG: Normalization-guided Augmentation for Domain Generalization

Thu, 2024-02-15 06:00

IEEE Trans Image Process. 2024 Feb 15;PP. doi: 10.1109/TIP.2024.3364516. Online ahead of print.

ABSTRACT

Deep learning has made significant advancements in supervised learning. However, models trained in this setting often face challenges due to domain shift between training and test sets, resulting in a significant drop in performance during testing. To address this issue, several domain generalization methods have been developed to learn robust and domain-invariant features from multiple training domains that can generalize well to unseen test domains. Data augmentation plays a crucial role in achieving this goal by enhancing the diversity of the training data. In this paper, inspired by the observation that normalizing an image with statistics generated from batches spanning various domains can perturb its features, we propose a simple yet effective method called NormAUG (Normalization-guided Augmentation). Our method includes two paths: the main path and the auxiliary (augmented) path. During training, the auxiliary path includes multiple sub-paths, each corresponding to batch normalization for a single domain or a random combination of multiple domains. This introduces diverse information at the feature level and improves the generalization of the main path. Moreover, from a theoretical perspective, our NormAUG method effectively reduces the existing upper bound on generalization. In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance. Extensive experiments are conducted on multiple benchmark datasets to validate the effectiveness of our proposed method.
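The core perturbation NormAUG builds on can be sketched in isolation: normalizing one domain's features with batch statistics drawn from another domain (or a mixture of domains) shifts the features and thereby acts as augmentation. The snippet below shows only this statistic swap, not the full main/auxiliary-path architecture; the feature shapes and distributions are made up.

```python
import numpy as np

def normalize_with(features: np.ndarray, stats_batch: np.ndarray, eps=1e-5):
    """Normalize `features` using the per-channel mean/var of `stats_batch`."""
    mu = stats_batch.mean(axis=0)
    var = stats_batch.var(axis=0)
    return (features - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
dom_a = rng.normal(0.0, 1.0, size=(32, 8))   # features from domain A
dom_b = rng.normal(2.0, 0.5, size=(32, 8))   # features from domain B

usual = normalize_with(dom_a, dom_a)         # standard batch-norm statistics
augmented = normalize_with(dom_a, dom_b)     # cross-domain statistics
print(usual.shape, augmented.shape)
```

In the full method, each auxiliary sub-path keeps its own batch-norm statistics for a domain (or random domain combination), so the same image is seen under several such normalizations during training.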

PMID:38358878 | DOI:10.1109/TIP.2024.3364516

Categories: Literature Watch

A Multi-Level Interpretable Sleep Stage Scoring System by Infusing Experts' Knowledge Into a Deep Network Architecture

Thu, 2024-02-15 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Feb 15;PP. doi: 10.1109/TPAMI.2024.3366170. Online ahead of print.

ABSTRACT

In recent years, deep learning has shown potential and efficiency in a wide range of areas, including computer vision and image and signal processing. Yet, translational challenges remain for user applications due to a lack of interpretability of algorithmic decisions and results. This black-box problem is particularly problematic for high-risk applications such as medical decision-making. The goal of the current study was to design an interpretable deep learning system for time-series classification of the electroencephalogram (EEG) for sleep stage scoring, as a step toward designing a transparent system. We have developed an interpretable deep neural network that includes a kernel-based layer guided by a set of principles used by human experts for sleep scoring in the visual analysis of polysomnographic records. A kernel-based convolutional layer was defined, used as the first layer of the system, and made available for user interpretation. The trained system and its results were interpreted at four levels, from the microstructure of EEG signals, such as the trained kernels and the effect of each kernel on the detected stages, to macrostructures, such as transitions between stages. The proposed system demonstrated greater performance than prior studies, and the system learned information consistent with expert knowledge.

PMID:38358869 | DOI:10.1109/TPAMI.2024.3366170

Categories: Literature Watch

GenCoder: A Novel Convolutional Neural Network based Autoencoder for Genomic Sequence Data Compression

Thu, 2024-02-15 06:00

IEEE/ACM Trans Comput Biol Bioinform. 2024 Feb 15;PP. doi: 10.1109/TCBB.2024.3366240. Online ahead of print.

ABSTRACT

Revolutionary advances in DNA sequencing technologies are fundamentally changing the nature of genomics. Today's sequencing technologies have led to an explosion in genomic data volume. These data can be used in various applications where long-term storage and analysis of genomic sequence data are required. Data-specific compression algorithms can effectively manage a large volume of data. Genomic sequence data compression has been investigated as a fundamental research topic for many decades. In recent times, deep learning has achieved great success in many compression tools and is gradually being used in genomic sequence compression. Notably, autoencoders have been applied to dimensionality reduction, compact representations of data, and generative model learning. An autoencoder can use convolutional layers to learn essential features from input data, which works well for image and series data. However, an autoencoder reconstructs the input data with some loss of information. Since accuracy is critical for genomic data, compressed genomic data must be decompressed without any information loss. We introduce a new scheme to address the loss incurred in the decompressed data of the autoencoder. This paper proposes a novel algorithm called GenCoder for reference-free compression of genomic sequences that uses a convolutional autoencoder, regenerates the genomic sequences from the latent code produced by the autoencoder, and retrieves the original data losslessly. Performance evaluation is conducted on various genomes and benchmark datasets. The experimental results on the tested data demonstrate that the deep learning model used in the proposed compression algorithm generalizes well for genomic sequence data and improves performance over state-of-the-art approaches.
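A standard way to make a lossy autoencoder lossless, which appears to be the idea behind the scheme described above, is to store a residual alongside the latent code: the positions where the reconstruction disagrees with the input are recorded and patched back at decompression time. The sketch below stubs out the autoencoder entirely (with a deliberately bad model) to show only the residual mechanism; GenCoder's actual encoder, decoder, and residual format are not specified in the abstract.

```python
def compress(seq, encode, decode):
    """Return (latent code, residual) so that decompression is lossless."""
    code = encode(seq)
    recon = decode(code)
    residual = [(i, b) for i, (a, b) in enumerate(zip(recon, seq)) if a != b]
    return code, residual

def decompress(code, residual, decode):
    recon = list(decode(code))
    for i, base in residual:           # patch every mismatch back in
        recon[i] = base
    return "".join(recon)

# Stub "autoencoder": maps every base to 'A'. Even with this bad model,
# the residual guarantees exact recovery; a good model shrinks the residual.
encode = lambda s: len(s)
decode = lambda n: "A" * n

seq = "ACGTACGGT"
code, residual = compress(seq, encode, decode)
print(decompress(code, residual, decode) == seq)  # True
```

The better the autoencoder's reconstruction, the smaller the residual, so the compression ratio improves with model quality while correctness never depends on it.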

PMID:38358865 | DOI:10.1109/TCBB.2024.3366240

Categories: Literature Watch

Automated angular measurement for puncture angle using a computer-aided method in ultrasound-guided peripheral insertion

Thu, 2024-02-15 06:00

Phys Eng Sci Med. 2024 Feb 15. doi: 10.1007/s13246-024-01397-x. Online ahead of print.

ABSTRACT

Ultrasound guidance has become the gold standard for obtaining vascular access. Angle information, which indicates the entry angle of the needle into the vein, is required to ensure puncture success. Although various image-processing-based methods, such as deep learning, have recently been applied to improve needle visibility, these methods have the limitation that the puncture angle to the target organ is not measured. We aim to detect the target vessel and puncture needle and to derive the puncture angle by combining deep learning with conventional image processing methods such as the Hough transform. Median cubital vein US images were obtained from 20 healthy volunteers, and images of simulated blood vessels and needles were obtained during the puncture of a simulated blood vessel in four phantoms. The U-Net architecture was used to segment images of blood vessels and needles, and various image processing methods were employed to automatically measure angles. The experimental results indicated that the mean Dice coefficients of median cubital veins, simulated blood vessels, and needles were 0.826, 0.931, and 0.773, respectively. The quantitative results of the angular measurement showed good agreement between the expert and automatic measurements of the puncture angle, with a correlation of 0.847. Our findings indicate that the proposed method achieves very high segmentation accuracy and automated angular measurement. The proposed method reduces the variability and time required by manual angle measurement and opens up the possibility for the operator to concentrate on delicate techniques related to the direction of the needle.
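Once the needle and vessel have been segmented and fitted as lines (for example via the Hough transform mentioned above), the final angle computation reduces to the acute angle between two direction vectors. A minimal sketch of that last step; the line-fitting itself, and the exact angle convention the paper uses, are out of scope here.

```python
import math

def puncture_angle(needle_dir, vessel_dir):
    """Acute angle in degrees between two 2D direction vectors."""
    (ax, ay), (bx, by) = needle_dir, vessel_dir
    dot = ax * bx + ay * by
    # abs() folds the obtuse case onto the acute angle; clamp for safety.
    cos = abs(dot) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(min(1.0, cos)))

print(round(puncture_angle((1, 1), (1, 0)), 1))  # 45.0
```

Using the absolute dot product makes the result independent of which end of each fitted line is taken as the direction.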

PMID:38358620 | DOI:10.1007/s13246-024-01397-x

Categories: Literature Watch

Prediction of treatment response in major depressive disorder using a hybrid of convolutional recurrent deep neural networks and effective connectivity based on EEG signal

Thu, 2024-02-15 06:00

Phys Eng Sci Med. 2024 Feb 15. doi: 10.1007/s13246-024-01392-2. Online ahead of print.

ABSTRACT

In this study, we developed a novel method based on deep learning and brain effective connectivity to classify responders and non-responders to selective serotonin reuptake inhibitor (SSRI) antidepressants in major depressive disorder (MDD) patients prior to treatment, using the EEG signal. The effective connectivity of 30 MDD patients was determined by analyzing their pretreatment EEG signals, which were then concatenated into delta, theta, alpha, and beta bands and transformed into images. Using these images, we then fine-tuned a hybrid Convolutional Neural Network enhanced with bidirectional Long Short-Term Memory (BiLSTM) cells based on transfer learning. The Inception-v3, ResNet18, DenseNet121, and EfficientNet-B0 models were implemented as base models. Finally, the models were followed by BiLSTM and dense layers in order to classify responders and non-responders to SSRI treatment. Results showed that EfficientNet-B0 has the highest accuracy of 98.33%, followed by DenseNet121, ResNet18, and Inception-v3. Therefore, a new method was proposed in this study that uses deep learning models to extract both spatial and temporal features automatically, which improves classification results. The proposed method provides accurate identification of MDD patients who are likely to respond, thereby reducing the cost of medical facilities and patient care.

PMID:38358619 | DOI:10.1007/s13246-024-01392-2

Categories: Literature Watch

Retinal imaging and Alzheimer's disease: a future powered by Artificial Intelligence

Thu, 2024-02-15 06:00

Graefes Arch Clin Exp Ophthalmol. 2024 Feb 15. doi: 10.1007/s00417-024-06394-0. Online ahead of print.

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative condition that primarily affects brain tissue. Because the retina and brain share the same embryonic origin, visual deficits have been reported in AD patients. Artificial Intelligence (AI) has recently received a lot of attention due to its immense power to process and detect image hallmarks and make clinical decisions (such as diagnosis) based on images. Since retinal changes have been reported in AD patients, AI is being proposed to process retinal images to predict, diagnose, and assess the prognosis of AD. As a result, the purpose of this review was to discuss the use of AI trained on retinal images of AD patients. According to previous research, AD patients experience changes in retinal thickness and retinal vessel density, which can occasionally occur before the onset of the disease's clinical symptoms. AI and machine vision can detect and use these changes in the domains of disease prediction, diagnosis, and prognosis. As a result, not only have unique algorithms been developed for this condition, but databases such as the Retinal OCTA Segmentation dataset (ROSE) have also been constructed for this purpose. The achievement of high accuracy, sensitivity, and specificity in the classification of retinal images between AD and healthy groups is one of the major breakthroughs in using retinal-image-based AI for AD. It is fascinating that researchers could pinpoint individuals with a positive family history of AD based on the properties of their eyes. In conclusion, the growing application of AI in medicine promises its future role in processing different aspects of patients with AD, but we need cohort studies to determine whether it can help to follow up with healthy persons at risk of AD for a quicker diagnosis or to assess the prognosis of patients with AD.

PMID:38358524 | DOI:10.1007/s00417-024-06394-0

Categories: Literature Watch

Machine Learning and Deep Learning Applications in Magnetic Particle Imaging

Thu, 2024-02-15 06:00

J Magn Reson Imaging. 2024 Feb 15. doi: 10.1002/jmri.29294. Online ahead of print.

ABSTRACT

In recent years, magnetic particle imaging (MPI) has emerged as a promising imaging technique offering high sensitivity and spatial resolution. It originated in the early 2000s as a new approach to overcome the low spatial resolution achieved when relaxometry is used to measure magnetic fields. MPI provides 2D and 3D images with high temporal resolution, non-ionizing radiation, and optimal visual contrast due to its lack of background tissue signal. Traditionally, images were reconstructed from the induced-voltage signal using system-matrix and X-space-based methods. Because image reconstruction and analysis play an integral role in obtaining precise information from MPI signals, newer artificial-intelligence-based methods are continuously being researched and developed. In this work, we summarize and review the significance and employment of machine learning and deep learning models for applications with MPI and the potential they hold for the future. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 1.

PMID:38358090 | DOI:10.1002/jmri.29294

Categories: Literature Watch

Implementation of a portable diffraction phase microscope for digital histopathology

Thu, 2024-02-15 06:00

J Biophotonics. 2024 Feb 15:e202300496. doi: 10.1002/jbio.202300496. Online ahead of print.

ABSTRACT

Quantitative phase imaging (QPI) offers a significant advantage in histopathology because it helps differentiate biological tissue structures and cells without the need for staining. To make this capability more accessible, it is crucial to develop compact and portable systems. In this study, we introduce a portable diffraction phase microscopy (DPM) system that allows the acquisition of phase map images from various organs in mice using a low-NA objective lens. Our findings indicate that the cell and tissue structures observed in portable DPM images are similar to those seen in conventional histology microscope images, and we confirmed that the developed system's performance is comparable to that of a benchtop DPM system. Additionally, we investigated the potential utility of the system for digital histopathology by applying deep learning to create virtually stained DPM images.
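The phase maps a DPM system acquires are initially wrapped into (-pi, pi], so optical path differences larger than one wavelength must be unwrapped before they can be read as thickness. A minimal 1-D numpy sketch of that step on a synthetic complex field (values illustrative, not from the paper's system):

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 200)   # true phase ramp spanning ~2 wavelengths
field = np.exp(1j * x)               # synthetic unit-amplitude complex field
wrapped = np.angle(field)            # wrapped phase map, values in (-pi, pi]
unwrapped = np.unwrap(wrapped)       # remove the 2*pi jumps along the line
```

Real 2-D phase maps need more robust unwrapping (noise and residues break the simple 1-D rule), but the principle, detecting and removing 2*pi discontinuities, is the same.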

PMID:38358045 | DOI:10.1002/jbio.202300496

Categories: Literature Watch

MRM-BERT: a novel deep neural network predictor of multiple RNA modifications by fusing BERT representation and sequence features

Thu, 2024-02-15 06:00

RNA Biol. 2024 Jan;21(1):1-10. doi: 10.1080/15476286.2024.2315384. Epub 2024 Feb 15.

ABSTRACT

RNA modifications play crucial roles in various biological processes and diseases, and accurate prediction of RNA modification sites is essential for understanding their functions. In this study, we propose a hybrid approach that fuses a pre-trained sequence representation with various sequence features to predict multiple types of RNA modifications within one combined prediction framework. We developed MRM-BERT, a deep learning method that combines the pre-trained DNABERT deep sequence representation module with a convolutional neural network (CNN) exploiting four traditional sequence feature encodings to improve prediction performance. MRM-BERT was evaluated on multiple datasets covering 12 commonly occurring RNA modifications, including m6A, m5C, and m1A. The results demonstrate that our hybrid model outperforms other models in terms of the area under the receiver operating characteristic curve (AUC) for all 12 types of RNA modifications. MRM-BERT is available as an online tool (http://117.122.208.21:8501) and as source code (https://github.com/abhhba999/MRM-BERT), allowing users to predict RNA modification sites and visualize the results. Overall, our study provides an effective and efficient approach to predicting multiple RNA modifications, contributing to the understanding of RNA biology and the development of therapeutic strategies.
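The fusion idea, concatenating a pretrained embedding with traditional sequence encodings before classification, can be sketched minimally. This is an illustration of the general technique only: the function names, sizes, and the zero placeholder embedding are hypothetical, and MRM-BERT's actual encodings and architecture differ.

```python
import numpy as np

BASES = "ACGU"

def one_hot(seq):
    """Flattened one-hot encoding: one of four channels per nucleotide."""
    enc = np.zeros((len(seq), len(BASES)))
    for i, base in enumerate(seq):
        enc[i, BASES.index(base)] = 1.0
    return enc.ravel()

def fuse(embedding, seq):
    """Concatenate a learned embedding with hand-crafted sequence features."""
    return np.concatenate([embedding, one_hot(seq)])

emb = np.zeros(8)                 # placeholder for a BERT-style sequence embedding
features = fuse(emb, "AUGC")      # fused vector: 8 + 4*4 = 24 values
```

The classifier downstream then sees both the contextual representation and the explicit encodings, which is what lets hybrid models outperform either feature source alone.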

PMID:38357904 | DOI:10.1080/15476286.2024.2315384

Categories: Literature Watch

Carving out a Glycoside Hydrolase Active Site for Incorporation into a New Protein Scaffold Using Deep Network Hallucination

Thu, 2024-02-15 06:00

ACS Synth Biol. 2024 Feb 15. doi: 10.1021/acssynbio.3c00674. Online ahead of print.

ABSTRACT

Enzymes are indispensable biocatalysts for numerous industrial applications, yet stability, selectivity, and restricted substrate recognition limit their use. Despite the importance of enzyme engineering in overcoming these limitations, success is often hampered by the intricate architecture of enzymes derived from natural sources. Recent advances in computational methods have enabled the de novo design of simplified scaffolds with specific functional sites, and such scaffolds may be advantageous as platforms for enzyme engineering. Here, we present a strategy for the de novo design of a simplified scaffold around an endo-α-N-acetylgalactosaminidase active site, a glycoside hydrolase from the GH101 enzyme family. Using a combination of trRosetta hallucination, iterative cycles of deep-learning-based structure prediction, and ProteinMPNN sequence design, we designed 290-amino-acid proteins incorporating the active site while reducing the molecular weight by over 100 kDa compared with the initial endo-α-N-acetylgalactosaminidase. Of the 11 tested designs, six were expressed as soluble monomers displaying thermostabilities similar to or higher than that of the natural enzyme. Although the designs lacked detectable enzymatic activity, the experimentally determined crystal structure of a representative design closely matched the design model, with a root-mean-square deviation of 1.0 Å and most catalytically important side chains within 2.0 Å. These results highlight the potential of scaffold hallucination for designing proteins that may serve as a foundation for subsequent enzyme engineering.
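The 1.0 Å figure above is a root-mean-square deviation computed after optimally superposing the crystal structure onto the design model, conventionally done with the Kabsch algorithm. A self-contained numpy sketch of that metric, on synthetic stand-in coordinates rather than the paper's structures:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two N x 3 coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)                 # center both structures at the origin
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)      # SVD of the 3x3 covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))     # guard against an improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt    # optimal rotation (row-vector form)
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

# Identical coordinate sets superpose exactly, so the RMSD is zero
model = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
rmsd_self = kabsch_rmsd(model, model)
```

Because the rotation and translation are optimized away, the RMSD reflects only internal structural differences, which is why it is the standard design-vs-crystal agreement metric.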

PMID:38357862 | DOI:10.1021/acssynbio.3c00674

Categories: Literature Watch
