Deep learning

Prospective evaluation of deep learning image reconstruction for Lung-RADS and automatic nodule volumetry on ultralow-dose chest CT

Thu, 2024-02-22 06:00

PLoS One. 2024 Feb 22;19(2):e0297390. doi: 10.1371/journal.pone.0297390. eCollection 2024.

ABSTRACT

PURPOSE: To prospectively evaluate whether Lung-RADS classification and volumetric nodule assessment were feasible with ultralow-dose (ULD) chest CT scans with deep learning image reconstruction (DLIR).

METHODS: The institutional review board approved this prospective study. This study included 40 patients (mean age, 66±12 years; 21 women). Participants sequentially underwent LDCT and ULDCT (CTDIvol, 0.96±0.15 mGy and 0.12±0.01 mGy) scans reconstructed with adaptive statistical iterative reconstruction-V 50% (ASIR-V50) and DLIR. CT image quality was compared subjectively and objectively. Pulmonary nodules were assessed visually by two readers using Lung-RADS 1.1 and automatically using a computer-assisted detection tool.

RESULTS: DLIR provided a significantly higher signal-to-noise ratio for LDCT and ULDCT images than ASIR-V50 (all P < .001). In general, DLIR showed superior subjective image quality for ULDCT images (P < .001) and comparable quality for LDCT images compared to ASIR-V50 (P = .01-1). The per-nodule sensitivities of observers for Lung-RADS category 3-4 nodules were 70.6-88.2% and 64.7-82.4% for DLIR-LDCT and DLIR-ULDCT images (P = 1), and categories were mostly concordant within observers. The per-nodule sensitivities of the computer-assisted detection for nodules ≥4 mm were 72.1% and 67.4% on DLIR-LDCT and ULDCT images (P = .50). The 95% limits of agreement for nodule volume differences between DLIR-LDCT and ULDCT images (-85.6 to 78.7 mm3) were similar to the within-scan nodule volume differences between DLIR- and ASIR-V50-LDCT images (-63.9 to 78.5 mm3), with volume differences smaller than 25% in 88.5% and 92.3% of nodules, respectively (P = .65).
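The 95% limits of agreement quoted above follow the standard Bland-Altman construction: mean of the paired differences ± 1.96 × their standard deviation. A minimal sketch in Python (the paired volumes below are illustrative values, not the study's data):

```python
import math

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Hypothetical paired nodule volumes (mm^3) from two reconstructions
ldct = [120.0, 85.0, 240.0, 60.0, 150.0]
uldct = [118.0, 90.0, 232.0, 64.0, 155.0]
lo, hi = limits_of_agreement(ldct, uldct)
```

Narrow limits relative to typical nodule volumes indicate the two reconstructions can be used interchangeably for volumetry.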

CONCLUSION: DLIR enabled comparable Lung-RADS and volumetric nodule assessments on ULDCT images to LDCT images.

PMID:38386632 | DOI:10.1371/journal.pone.0297390

Categories: Literature Watch

Automated identification of abnormal infant movements from smart phone videos

Thu, 2024-02-22 06:00

PLOS Digit Health. 2024 Feb 22;3(2):e0000432. doi: 10.1371/journal.pdig.0000432. eCollection 2024 Feb.

ABSTRACT

Cerebral palsy (CP) is the most common cause of physical disability during childhood, occurring at a rate of 2.1 per 1000 live births. Early diagnosis is key to improving functional outcomes for children with CP. The General Movements (GMs) Assessment has high predictive validity for the detection of CP and is routinely used in high-risk infants, but only 50% of infants with CP have overt risk factors at birth. The implementation of CP screening programs represents an important endeavour, but feasibility is limited by access to trained GMs assessors. To facilitate progress towards this goal, we report a deep-learning framework for automating the GMs Assessment. We acquired 503 videos of infants aged between 12 and 18 weeks term-corrected age, captured at home by parents and caregivers using a dedicated smartphone app. Using a deep learning algorithm, we automatically labelled and tracked 18 key body points in each video. We designed a custom pipeline to adjust for camera movement and infant size and trained a second machine learning algorithm to predict GMs classification from body point movement. Our automated body point labelling approach achieved human-level accuracy (mean ± SD error of 3.7 ± 5.2% of infant length) compared to gold-standard human annotation. Using body point tracking data, our prediction model achieved a cross-validated area under the curve (mean ± SD) of 0.80 ± 0.08 on unseen test data for predicting expert GMs classification, with a sensitivity of 76% ± 15% for abnormal GMs and a negative predictive value of 94% ± 3%. This work highlights the potential for automated GMs screening programs to detect abnormal movements in infants as early as three months term-corrected age using digital technologies.
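The "percent of infant length" error metric above amounts to scaling the Euclidean keypoint error by a reference body length. A hedged sketch of one way to compute such a metric (the function and inputs are illustrative, not the authors' code):

```python
import math

def keypoint_error_pct(pred, truth, infant_length):
    """Mean Euclidean error between predicted and annotated (x, y) keypoints,
    expressed as a percentage of the infant's body length (same units)."""
    errs = [math.dist(p, t) for p, t in zip(pred, truth)]
    return 100.0 * (sum(errs) / len(errs)) / infant_length
```

Normalizing by body length makes errors comparable across videos recorded at different camera distances and zoom levels.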

PMID:38386627 | DOI:10.1371/journal.pdig.0000432

Categories: Literature Watch

Unsupervised Spectral Demosaicing with Lightweight Spectral Attention Networks

Thu, 2024-02-22 06:00

IEEE Trans Image Process. 2024 Feb 22;PP. doi: 10.1109/TIP.2024.3364064. Online ahead of print.

ABSTRACT

This paper presents a deep learning-based spectral demosaicing technique trained in an unsupervised manner. Many existing deep learning-based techniques, which rely on supervised learning with synthetic images, often underperform on real-world images, especially as the number of spectral bands increases. This paper presents a comprehensive unsupervised spectral demosaicing (USD) framework based on the characteristics of spectral mosaic images. This framework encompasses a training method, a model structure, a transformation strategy, and a well-fitted model selection strategy. To enable the network to dynamically model spectral correlation while maintaining a compact parameter space, we reduce the complexity and parameter count of the spectral attention module by dividing the spectral attention tensor into spectral attention matrices in the spatial dimension and spectral attention vectors in the channel dimension. This paper also presents Mosaic25, a real 25-band hyperspectral mosaic image dataset featuring various objects, illuminations, and materials for benchmarking purposes. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed method outperforms conventional unsupervised methods in terms of spatial distortion suppression, spectral fidelity, robustness, and computational cost. Our code and dataset are publicly available at https://github.com/polwork/Unsupervised-Spectral-Demosaicing.

PMID:38386587 | DOI:10.1109/TIP.2024.3364064

Categories: Literature Watch

Non-invasive quantification of the brain [<sup>18</sup>F]FDG-PET using inferred blood input function learned from total-body data with physical constraint

Thu, 2024-02-22 06:00

IEEE Trans Med Imaging. 2024 Feb 22;PP. doi: 10.1109/TMI.2024.3368431. Online ahead of print.

ABSTRACT

Full quantification of brain PET requires the blood input function (IF), which is traditionally obtained through an invasive and time-consuming arterial catheterization procedure, making it unfeasible in clinical routine. This study presents a deep learning-based method to estimate the input function (DLIF) for a dynamic brain FDG scan. A long short-term memory network combined with a fully connected network was used. The training dataset was generated from 85 total-body dynamic scans obtained on a uEXPLORER scanner. Time-activity curves from 8 brain regions and the carotid artery served as the input of the model, and the labelled IF was generated from the ascending aorta defined on the CT image. We emphasize the goodness of fit of kinetic modeling as an additional physical loss to reduce bias and the need for large training samples. DLIF was evaluated together with existing methods in terms of RMSE, area under the curve, and regional and parametric image quantification. The results revealed that the proposed model can generate IFs that are closer to the reference ones in shape and amplitude than the IFs generated using existing methods. All regional kinetic parameters calculated using DLIF agreed with reference values, with a correlation coefficient of 0.961 (0.913) and a relative bias of 1.68±8.74% (0.37±4.93%) for Ki (K1). In terms of visual appearance and quantification, the parametric images were also highly identical to the reference images. In conclusion, our experiments indicate that a trained model can infer an image-derived IF from dynamic brain PET data, enabling subsequent reliable kinetic modeling.
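The Ki values reported above are conventionally obtained with Patlak graphical analysis, which regresses the normalized tissue curve C_T(t)/C_p(t) against the normalized integrated input ∫C_p dτ / C_p(t) and reads Ki off the slope. A minimal sketch of that standard analysis, not the authors' pipeline (synthetic inputs, no t* selection logic):

```python
def patlak_ki(t, cp, ct, t_star_idx=0):
    """Estimate Ki as the slope of the Patlak plot:
    y = Ct(t)/Cp(t) versus x = cumulative integral of Cp up to t, divided by Cp(t)."""
    # cumulative trapezoidal integral of the input function
    cum = [0.0]
    for i in range(1, len(t)):
        cum.append(cum[-1] + 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1]))
    xs = [cum[i] / cp[i] for i in range(t_star_idx, len(t))]
    ys = [ct[i] / cp[i] for i in range(t_star_idx, len(t))]
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

With a well-estimated IF in hand (whether arterial, image-derived, or DLIF-inferred), this slope is what propagates into the regional Ki values the abstract compares.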

PMID:38386580 | DOI:10.1109/TMI.2024.3368431

Categories: Literature Watch

AEGNN-M: A 3D Graph-Spatial Co-Representation Model for Molecular Property Prediction

Thu, 2024-02-22 06:00

IEEE J Biomed Health Inform. 2024 Feb 22;PP. doi: 10.1109/JBHI.2024.3368608. Online ahead of print.

ABSTRACT

Improving the drug development process can expedite the introduction of more novel drugs that cater to the demands of precision medicine. Accurately predicting molecular properties remains a fundamental challenge in drug discovery and development. Currently, a plethora of computer-aided drug discovery (CADD) methods have been widely employed in the field of molecular prediction. However, most of these methods primarily analyze molecules using low-dimensional representations such as SMILES notations, molecular fingerprints, and molecular graph-based descriptors. Only a few approaches have focused on incorporating and utilizing high-dimensional spatial structural representations of molecules. In light of the advancements in artificial intelligence, we introduce a 3D graph-spatial co-representation model called AEGNN-M, which combines two graph neural networks, GAT and EGNN. AEGNN-M enables learning of information from both molecular graph representations and 3D spatial structural representations to predict molecular properties accurately. We conducted experiments on seven public datasets, three regression datasets, and 14 breast cancer cell line phenotype screening datasets, comparing the performance of AEGNN-M with state-of-the-art deep learning methods. Extensive experimental results demonstrate the satisfactory performance of the AEGNN-M model. Furthermore, we analyzed the performance impact of different modules within AEGNN-M and the influence of spatial structural representations on the model's performance. The interpretability analysis also revealed the significance of specific atoms in determining particular molecular properties.

PMID:38386576 | DOI:10.1109/JBHI.2024.3368608

Categories: Literature Watch

Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses

Thu, 2024-02-22 06:00

Elife. 2024 Feb 22;13:e85812. doi: 10.7554/eLife.85812.

ABSTRACT

Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input into electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that runs in real time and uses differentiable operations to allow gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence from humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex, and it incorporates the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics. Our results demonstrate the simulator's suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
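Retinotopic organization and cortical magnification of the kind described above are often modeled with the classic monopole mapping w = k·log(z + a) (after Schwartz), which compresses peripheral visual-field positions relative to the fovea. A sketch under that assumption; the parameter values below are commonly cited ballpark figures for human V1, not values from this paper:

```python
import cmath

def monopole_map(x_deg, y_deg, k=15.0, a=0.7):
    """Map a visual-field position (degrees) to cortical coordinates (mm)
    via the monopole model w = k * log(z + a), with z = x + iy."""
    w = k * cmath.log(complex(x_deg, y_deg) + a)
    return w.real, w.imag

def magnification(ecc_deg, k=15.0, a=0.7):
    """Linear cortical magnification factor M(E) = k / (E + a), in mm per degree."""
    return k / (ecc_deg + a)
```

In a phosphene simulator, the inverse of this magnification determines how large a cortical electrode's phosphene appears at a given visual-field eccentricity.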

PMID:38386406 | DOI:10.7554/eLife.85812

Categories: Literature Watch

Stop moving: MR motion correction as an opportunity for artificial intelligence

Thu, 2024-02-22 06:00

MAGMA. 2024 Feb 22. doi: 10.1007/s10334-023-01144-5. Online ahead of print.

ABSTRACT

Subject motion is a long-standing problem in magnetic resonance imaging (MRI) that can seriously deteriorate image quality. Various prospective and retrospective methods have been proposed for MRI motion correction, among which deep learning approaches have achieved state-of-the-art performance. This survey aims to provide a comprehensive review of deep learning-based MRI motion correction methods. Neural networks used for motion artifact reduction and motion estimation in the image domain or frequency domain are detailed. Beyond motion-corrected MRI reconstruction, how estimated motion is applied in other downstream tasks is briefly introduced, with the aim of strengthening the interaction between different research areas. Finally, we identify current limitations and point out future directions for deep learning-based MRI motion correction.

PMID:38386151 | DOI:10.1007/s10334-023-01144-5

Categories: Literature Watch

Flipover outperforms dropout in deep learning

Thu, 2024-02-22 06:00

Vis Comput Ind Biomed Art. 2024 Feb 22;7(1):4. doi: 10.1186/s42492-024-00153-y.

ABSTRACT

Flipover, an enhanced dropout technique, is introduced to improve the robustness of artificial neural networks. In contrast to dropout, which involves randomly removing certain neurons and their connections, flipover randomly selects neurons and reverts their outputs using a negative multiplier during training. This approach offers stronger regularization than conventional dropout, refining model performance by (1) mitigating overfitting, matching or even exceeding the efficacy of dropout; (2) amplifying robustness to noise; and (3) enhancing resilience against adversarial attacks. Extensive experiments across various neural networks affirm the effectiveness of flipover in deep learning.
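As described, flipover keeps the dropout-style random neuron selection but multiplies the selected outputs by a negative factor instead of zeroing them. A framework-free sketch of the forward pass under that description (the exact multiplier value and per-layer details are assumptions, not taken from the paper):

```python
import random

def flipover(activations, p=0.5, multiplier=-1.0, training=True, rng=random):
    """Flipover regularization: during training, each neuron's output is
    scaled by a negative multiplier with probability p; at inference,
    activations pass through unchanged. Setting multiplier=0.0 recovers
    dropout-style zeroing (without the usual 1/(1-p) rescaling)."""
    if not training:
        return list(activations)
    return [a * multiplier if rng.random() < p else a for a in activations]
```

Sign-flipping injects a stronger perturbation than zeroing, which is consistent with the abstract's claim of stronger regularization than conventional dropout.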

PMID:38386109 | DOI:10.1186/s42492-024-00153-y

Categories: Literature Watch

Improved 3D DESS MR neurography of the lumbosacral plexus with deep learning and geometric image combination reconstruction

Thu, 2024-02-22 06:00

Skeletal Radiol. 2024 Feb 22. doi: 10.1007/s00256-024-04613-7. Online ahead of print.

ABSTRACT

OBJECTIVE: To evaluate the impact of deep learning (DL) reconstruction in enhancing image quality and nerve conspicuity in MR neurography (MRN) of the lumbosacral plexus (LSP) using double-echo steady-state (DESS) sequences. Additionally, a geometric image combination (GIC) method to improve the combination of the DESS signals was proposed.

MATERIALS AND METHODS: Adult patients undergoing 3.0 Tesla LSP MRN with DESS were prospectively enrolled. The 3D DESS echoes were separately reconstructed with and without DL and DL-GIC combined reconstructions. In a subset of patients, 3D T2-weighted short tau inversion recovery (STIR-T2w) sequences were also acquired. Three radiologists rated 4 image stacks ('DESS S2', 'DESS S2 DL', 'DESS GIC DL' and 'STIR-T2w DL') for bulk motion, vascular suppression, nerve fascicular architecture, and overall nerve conspicuity. Relative SNR, nerve-to-muscle, -fat, and -vessel contrast ratios were measured. Statistical analysis included ANOVA and Wilcoxon signed-rank tests. p < 0.05 was considered statistically significant.

RESULTS: Forty patients (22 females; mean age = 48.6 ± 18.5 years) were enrolled. Quantitatively, 'DESS GIC DL' demonstrated superior relative SNR (p < 0.001), while 'DESS S2 DL' exhibited superior nerve-to-background contrast ratio (p value range: 0.002 to < 0.001). Qualitatively, DESS provided superior vascular suppression and depiction of sciatic nerve fascicular architecture but more bulk motion as compared to 'STIR-T2w DL'. 'DESS GIC DL' demonstrated better nerve visualization for several smaller, distal nerve segments than 'DESS S2 DL' and 'STIR-T2w DL'.

CONCLUSION: Application of a DL reconstruction with geometric image combination in DESS MRN improves nerve conspicuity of the LSP, especially for its smaller branch nerves.
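The abstract does not spell out the GIC formula; a "geometric" combination of two echo images is commonly taken to mean a voxel-wise geometric mean, √(S1·S2). A sketch under that assumption, purely illustrative and not necessarily the authors' method:

```python
import math

def geometric_combination(echo1, echo2):
    """Voxel-wise geometric mean of two echo magnitude images
    (given here as flat lists of non-negative voxel intensities)."""
    return [math.sqrt(s1 * s2) for s1, s2 in zip(echo1, echo2)]
```

A geometric mean down-weights voxels that are bright in only one echo, which is one plausible route to the improved vascular suppression and nerve conspicuity reported above.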

PMID:38386108 | DOI:10.1007/s00256-024-04613-7

Categories: Literature Watch

Artificial intelligence applied to magnetic resonance imaging reliably detects the presence, but not the location, of meniscus tears: a systematic review and meta-analysis

Thu, 2024-02-22 06:00

Eur Radiol. 2024 Feb 22. doi: 10.1007/s00330-024-10625-7. Online ahead of print.

ABSTRACT

OBJECTIVES: To review and compare the accuracy of convolutional neural networks (CNN) for the diagnosis of meniscal tears in the current literature and analyze the decision-making processes utilized by these CNN algorithms.

MATERIALS AND METHODS: PubMed, MEDLINE, EMBASE, and Cochrane databases were searched up to December 2022 in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Risk of bias was assessed for all identified articles. Predictive performance values, including sensitivity and specificity, were extracted for quantitative analysis. The meta-analysis was divided between AI prediction models identifying the presence of meniscus tears and those identifying the location of meniscus tears.

RESULTS: Eleven articles were included in the final review, with a total of 13,467 patients and 57,551 images. Heterogeneity was large and statistically significant for the sensitivity of the tear-identification analysis (I2 = 79%). A higher level of accuracy was observed in identifying the presence of a meniscal tear than in locating tears in specific regions of the meniscus (AUC, 0.939 vs 0.905). Pooled sensitivity and specificity were 0.87 (95% confidence interval (CI) 0.80-0.91) and 0.89 (95% CI 0.83-0.93) for meniscus tear identification, and 0.88 (95% CI 0.82-0.91) and 0.84 (95% CI 0.81-0.85) for locating the tears.
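The I2 statistic quoted above is derived from Cochran's Q as I2 = max(0, (Q − df)/Q) × 100. A minimal sketch with inverse-variance weights (the inputs below are illustrative, not the review's data):

```python
def i_squared(estimates, variances):
    """Cochran's Q and the I^2 heterogeneity statistic (in percent)
    for per-study effect estimates with known variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

An I2 of 79%, as reported for the sensitivity analysis, is conventionally read as substantial heterogeneity between studies.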

CONCLUSIONS: AI prediction models achieved favorable performance in the diagnosis, but not location, of meniscus tears. Further studies on the clinical utilities of deep learning should include standardized reporting, external validation, and full reports of the predictive performances of these models, with a view to localizing tears more accurately.

CLINICAL RELEVANCE STATEMENT: Meniscus tears are hard to diagnose on knee magnetic resonance images. AI prediction models may play an important role in improving the diagnostic accuracy of clinicians and radiologists.

KEY POINTS: • Artificial intelligence (AI) provides great potential in improving the diagnosis of meniscus tears. • The pooled diagnostic performance for artificial intelligence (AI) in identifying meniscus tears was better (sensitivity 87%, specificity 89%) than locating the tears (sensitivity 88%, specificity 84%). • AI is good at confirming the diagnosis of meniscus tears, but future work is required to guide the management of the disease.

PMID:38386028 | DOI:10.1007/s00330-024-10625-7

Categories: Literature Watch

ADH-Enhancer: an attention-based deep hybrid framework for enhancer identification and strength prediction

Thu, 2024-02-22 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae030. doi: 10.1093/bib/bbae030.

ABSTRACT

Enhancers play an important role in the regulation of gene expression. The abundance or absence of enhancers in a DNA sequence, and irregularities in enhancer strength, affect the gene expression process in ways that lead to the initiation and propagation of diverse genetic diseases such as hemophilia, bladder cancer, diabetes and congenital disorders. Enhancer identification and strength prediction through experimental approaches is expensive, time-consuming and error-prone. To accelerate and expedite research on enhancer identification and strength prediction, around 19 computational frameworks have been proposed. These frameworks use machine and deep learning methods that take raw DNA sequences and predict an enhancer's presence and strength. However, these frameworks still fall short in performance and are not useful for real-time analysis. This paper presents a novel deep learning framework that uses language-modeling strategies to transform DNA sequences into a statistical feature space. It applies transfer learning by training a language model in an unsupervised fashion to predict groups of nucleotides, known as k-mers, from the context of existing k-mers in a sequence. At the classification stage, it presents a novel classifier that reaps the benefits of two different architectures: a convolutional neural network and an attention mechanism. The proposed framework is evaluated on the enhancer identification benchmark dataset, where it outperforms the existing best-performing framework by 5% and 9% in terms of accuracy and MCC, respectively. Similarly, when evaluated on the enhancer strength prediction benchmark dataset, it outperforms the existing best-performing framework by 4% and 7% in terms of accuracy and MCC, respectively.
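The k-mer language-modeling step described above starts by sliding a window of k nucleotides over the raw sequence. A minimal tokenizer sketch (the choices of k and stride are illustrative, not taken from the paper):

```python
def kmerize(seq, k=3, stride=1):
    """Split a DNA sequence into overlapping k-mer tokens."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]
```

These tokens are what a masked or next-token language model is trained to predict from their surrounding context, yielding the statistical feature space the classifier then consumes.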

PMID:38385876 | DOI:10.1093/bib/bbae030

Categories: Literature Watch

Lactylation prediction models based on protein sequence and structural feature fusion

Thu, 2024-02-22 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbad539. doi: 10.1093/bib/bbad539.

ABSTRACT

Lysine lactylation (Kla) is a newly discovered posttranslational modification that is involved in important life activities, such as glycolysis-related cell function, macrophage polarization and nervous system regulation, and has received widespread attention due to the Warburg effect in tumor cells. In this work, we first design a natural language processing method to automatically extract the 3D structural features of Kla sites, avoiding potential biases caused by manually designed structural features. Then, we establish two Kla prediction frameworks, the attention-based feature fusion Kla model (ABFF-Kla) and the embedding-based feature fusion Kla model (EBFF-Kla), which integrate the sequence features and the structure features based on an attention layer and an embedding layer, respectively. The results indicate that ABFF-Kla and EBFF-Kla, which fuse features from protein sequences and spatial structures, have better predictive performance than models that use only sequence features. Our work provides an approach for the automatic extraction of protein structural features, as well as a flexible framework for Kla prediction. The source code and the training data of ABFF-Kla and EBFF-Kla are publicly deposited at: https://github.com/ispotato/Lactylation_model.

PMID:38385873 | DOI:10.1093/bib/bbad539

Categories: Literature Watch

ChemMORT: an automatic ADMET optimization platform using deep learning and multi-objective particle swarm optimization

Thu, 2024-02-22 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae008. doi: 10.1093/bib/bbae008.

ABSTRACT

Drug discovery and development constitute a laborious and costly undertaking. The success of a drug hinges not only on good efficacy but also on acceptable absorption, distribution, metabolism, elimination, and toxicity (ADMET) properties. Overall, up to 50% of drug development failures have been attributed to undesirable ADMET profiles. As a multi-parameter objective, the optimization of ADMET properties is extremely challenging owing to the vast chemical space and limited human expert knowledge. In this study, a freely available platform called Chemical Molecular Optimization, Representation and Translation (ChemMORT) is developed for the optimization of multiple ADMET endpoints without the loss of potency (https://cadd.nscc-tj.cn/deploy/chemmort/). ChemMORT contains three modules: the Simplified Molecular Input Line Entry System (SMILES) Encoder, the Descriptor Decoder and the Molecular Optimizer. The SMILES Encoder generates a molecular representation as a 512-dimensional vector, and the Descriptor Decoder translates this representation back to the corresponding molecular structure with high accuracy. Based on this reversible molecular representation and a particle swarm optimization strategy, the Molecular Optimizer can effectively optimize undesirable ADMET properties without the loss of bioactivity, which essentially accomplishes the design of inverse QSAR. The constrained multi-objective optimization of a poly(ADP-ribose) polymerase-1 inhibitor is provided as a case study to explore the utility of ChemMORT.
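Particle swarm optimization, the search strategy named above, iteratively pulls each candidate (here, a point in the 512-dimensional latent space) toward its personal best and the swarm's global best. A generic, framework-free sketch of one update step; this is not ChemMORT's implementation, and it operates on plain real vectors rather than molecular representations:

```python
import random

def pso_step(positions, velocities, pbest, gbest, rng,
             w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for all particles (vectors as lists).
    w: inertia weight; c1/c2: cognitive/social acceleration coefficients."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
    return positions, velocities
```

In an encoder/decoder setting like ChemMORT's, each updated position would be decoded back to a structure and scored on the ADMET objectives before pbest/gbest are refreshed.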

PMID:38385872 | DOI:10.1093/bib/bbae008

Categories: Literature Watch

Machine Learning for Sequence and Structure-Based Protein-Ligand Interaction Prediction

Thu, 2024-02-22 06:00

J Chem Inf Model. 2024 Feb 22. doi: 10.1021/acs.jcim.3c01841. Online ahead of print.

ABSTRACT

Developing new drugs is too expensive and time -consuming. Accurately predicting the interaction between drugs and targets will likely change how the drug is discovered. Machine learning-based protein-ligand interaction prediction has demonstrated significant potential. In this paper, computational methods, focusing on sequence and structure to study protein-ligand interactions, are examined. Therefore, this paper starts by presenting an overview of the data sets applied in this area, as well as the various approaches applied for representing proteins and ligands. Then, sequence-based and structure-based classification criteria are subsequently utilized to categorize and summarize both the classical machine learning models and deep learning models employed in protein-ligand interaction studies. Moreover, the evaluation methods and interpretability of these models are proposed. Furthermore, delving into the diverse applications of protein-ligand interaction models in drug research is presented. Lastly, the current challenges and future directions in this field are addressed.

PMID:38385768 | DOI:10.1021/acs.jcim.3c01841

Categories: Literature Watch

Explainable deep learning enhances robust and reliable real-time monitoring of a chromatographic protein A capture step

Thu, 2024-02-22 06:00

Biotechnol J. 2024 Jan;19(2):e2300554. doi: 10.1002/biot.202300554.

ABSTRACT

The application of model-based real-time monitoring in biopharmaceutical production is a major step toward quality-by-design and the foundation for model predictive control. Data-driven models have proven to be a viable option to model bioprocesses. In the high-stakes setting of biopharmaceutical manufacturing it is essential to ensure high model accuracy, robustness, and reliability. That is only possible when (i) the data used for modeling is of high quality and sufficient size, (ii) state-of-the-art modeling algorithms are employed, and (iii) the input-output mapping of the model has been characterized. In this study, we evaluate the accuracy of multiple data-driven models in predicting the monoclonal antibody (mAb) concentration, double-stranded DNA concentration, host cell protein concentration, and high molecular weight impurity content during elution from a protein A chromatography capture step. The models achieved high-quality predictions with a normalized root mean squared error of <4% for the mAb concentration and of ≈10% for the other process variables. Furthermore, we demonstrate how permutation/occlusion-based methods can be used to gain an understanding of dependencies learned by one of the most complex data-driven models, convolutional neural network ensembles. We observed that the models generally exhibited dependencies on correlations that agreed with first principles knowledge, thereby bolstering confidence in model reliability. Finally, we present a workflow to assess the model behavior in case of systematic measurement errors that may result from sensor fouling or failure. This study represents a major step toward improved viability of data-driven models in biopharmaceutical manufacturing.
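Normalized RMSE figures like those above depend on the normalization convention (observed range, mean, or interquartile range all appear in the literature). The sketch below normalizes by the observed range, one common choice; the abstract does not state which convention the authors used:

```python
import math

def nrmse(pred, obs):
    """Root-mean-squared error normalized by the range of the observations."""
    mse = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    return math.sqrt(mse) / (max(obs) - min(obs))
```

Reporting the error as a fraction of the observed range makes the <4% and ≈10% figures comparable across process variables with very different concentration scales.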

PMID:38385524 | DOI:10.1002/biot.202300554

Categories: Literature Watch

DIProT: A deep learning based interactive toolkit for efficient and effective Protein design

Thu, 2024-02-22 06:00

Synth Syst Biotechnol. 2024 Feb 8;9(2):217-222. doi: 10.1016/j.synbio.2024.01.011. eCollection 2024 Jun.

ABSTRACT

The protein inverse folding problem, designing amino acid sequences that fold into desired protein structures, is a critical challenge in biological sciences. Despite numerous data-driven and knowledge-driven methods, there remains a need for a user-friendly toolkit that effectively integrates these approaches for in-silico protein design. In this paper, we present DIProT, an interactive protein design toolkit. DIProT leverages a non-autoregressive deep generative model to solve the inverse folding problem, combined with a protein structure prediction model. This integration allows users to incorporate prior knowledge into the design process, evaluate designs in silico, and form a virtual design loop with human feedback. Our inverse folding model demonstrates competitive performance in terms of effectiveness and efficiency on TS50 and CATH4.2 datasets, with promising sequence recovery and inference time. Case studies further illustrate how DIProT can facilitate user-guided protein design.

PMID:38385151 | PMC:PMC10876589 | DOI:10.1016/j.synbio.2024.01.011

Categories: Literature Watch

Computational pathology-based weakly supervised prediction model for MGMT promoter methylation status in glioblastoma

Thu, 2024-02-22 06:00

Front Neurol. 2024 Feb 7;15:1345687. doi: 10.3389/fneur.2024.1345687. eCollection 2024.

ABSTRACT

INTRODUCTION: The methylation status of O6-methylguanine-DNA methyltransferase (MGMT) is closely related to the treatment and prognosis of glioblastoma. However, there are currently challenges in detecting the methylation status of MGMT promoters. Hematoxylin and eosin (H&E)-stained histopathological slides have always been the gold standard for tumor diagnosis.

METHODS: In this study, based on the TCGA database and H&E-stained whole-slide images (WSIs) from Beijing Tiantan Hospital, we constructed a weakly supervised prediction model of MGMT promoter methylation status in glioblastoma using two Transformer-based models.

RESULTS: The accuracy scores of this model in the TCGA dataset and our independent dataset were 0.79 (AUC = 0.86) and 0.76 (AUC = 0.83), respectively.

CONCLUSION: The model demonstrates effective prediction of MGMT promoter methylation status in glioblastoma and exhibits some degree of generalization capability. Our study also shows that adding an automatic patch-screening module to the computational pathology framework for glioma can significantly improve model performance.

PMID:38385046 | PMC:PMC10880091 | DOI:10.3389/fneur.2024.1345687

Categories: Literature Watch

Deep learning classification for macrophage subtypes through cell migratory pattern analysis

Thu, 2024-02-22 06:00

Front Cell Dev Biol. 2024 Feb 7;12:1259037. doi: 10.3389/fcell.2024.1259037. eCollection 2024.

ABSTRACT

Macrophages can exhibit pro-inflammatory or pro-reparatory functions, contingent upon their specific activation state. This dynamic behavior empowers macrophages to engage in immune reactions and contribute to tissue homeostasis. Understanding the intricate interplay between macrophage motility and activation status provides valuable insights into the complex mechanisms that govern their diverse functions. In a recent study, we developed a classification method based on morphology, which demonstrated that movement characteristics, including speed and displacement, can serve as distinguishing factors for macrophage subtypes. In this study, we develop a deep learning model to explore the potential of classifying macrophage subtypes based solely on raw trajectory patterns. The classification model relies on the time series of x-y coordinates, as well as the distance traveled and net displacement. We begin by investigating the migratory patterns of macrophages to gain a deeper understanding of their behavior. Although this analysis does not directly inform the deep learning model, it serves to highlight the intricate and distinct dynamics exhibited by different macrophage subtypes, which cannot be easily captured by a finite set of motility metrics. Our study uses cell trajectories to classify three macrophage subtypes: M0, M1, and M2. This advancement holds promising implications for the future, as it suggests the possibility of identifying macrophage subtypes without relying on shape analysis. Consequently, it could potentially eliminate the necessity for high-quality imaging techniques and provide more robust methods for analyzing inherently blurry images.
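The two trajectory features named above, total distance traveled and net displacement, follow directly from the x-y coordinate series; a sketch, including the straightness ratio that is commonly derived from them (the ratio is an illustrative addition, not a feature the abstract lists):

```python
import math

def trajectory_metrics(points):
    """Total path length, net displacement, and straightness (net/path)
    for a cell track given as a sequence of (x, y) positions."""
    path = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    net = math.dist(points[0], points[-1])
    straightness = net / path if path > 0 else 0.0
    return path, net, straightness
```

A meandering track has a long path but small net displacement (low straightness), while directed migration pushes the ratio toward 1; differences of this kind are what a trajectory-based classifier can exploit.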

PMID:38385029 | PMC:PMC10879298 | DOI:10.3389/fcell.2024.1259037

Categories: Literature Watch

Microscale 3-D Capacitance Tomography with a CMOS Sensor Array

Thu, 2024-02-22 06:00

IEEE Biomed Circuits Syst Conf. 2023 Oct;2023. doi: 10.1109/biocas58349.2023.10388576. Epub 2024 Jan 18.

ABSTRACT

Electrical capacitance tomography (ECT) is a non-optical imaging technique in which a map of the interior permittivity of a volume is estimated by making capacitance measurements at its boundary and solving an inverse problem. While previous ECT demonstrations have often been at centimeter scales, ECT is not limited to macroscopic systems. In this paper, we demonstrate ECT imaging of polymer microspheres and bacterial biofilms using a CMOS microelectrode array, achieving spatial resolution of 10 microns. Additionally, we propose a deep learning architecture and an improved multi-objective training scheme for reconstructing out-of-plane permittivity maps from the sensor measurements. Experimental results show that the proposed approach is able to resolve microscopic 3-D structures, achieving 91.5% prediction accuracy on the microsphere dataset and 82.7% on the biofilm dataset, including an average of 4.6% improvement over baseline computational methods.
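The paper's reconstruction uses a deep learning architecture, but the inverse problem it improves on is classically posed as regularized least squares over a linearized sensitivity matrix. A minimal sketch of that baseline (Tikhonov regularization; the matrix shapes and function name are assumptions for illustration, not the authors' method):

```python
import numpy as np

def tikhonov_reconstruct(J, c, lam=1e-3):
    """Linearized ECT reconstruction: solve min ||J x - c||^2 + lam ||x||^2.

    J: sensitivity matrix mapping permittivity pixels to boundary capacitances.
    c: vector of measured capacitances.
    lam: regularization weight trading data fit against solution norm.
    """
    n = J.shape[1]
    # Normal equations of the regularized least-squares problem.
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ c)
```

Deep learning approaches like the one proposed replace this hand-tuned regularizer with a learned prior, which is what enables resolving out-of-plane structure that linear baselines smear out.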

PMID:38384749 | PMC:PMC10880799 | DOI:10.1109/biocas58349.2023.10388576

Categories: Literature Watch

Application of flash GC e-nose and FT-NIR combined with deep learning algorithm in preventing age fraud and quality evaluation of pericarpium citri reticulatae

Thu, 2024-02-22 06:00

Food Chem X. 2024 Feb 12;21:101220. doi: 10.1016/j.fochx.2024.101220. eCollection 2024 Mar 30.

ABSTRACT

Pericarpium citri reticulatae (PCR) is the dried mature fruit peel of Citrus reticulata Blanco and its cultivated varieties in the Rutaceae family. It is used as both food and medicine, relieving cough and phlegm and promoting digestion. The aroma and medicinal properties of PCR develop with aging; only varieties with aging value may be called "Chenpi". In other words, the storage year of PCR strongly influences its quality. Because the color and smell of PCRs of different storage years are similar, unscrupulous merchants often pass off PCRs stored for only a few years as older, more valuable ones for profit. This study therefore aimed to establish a rapid, nondestructive method for detecting storage-year fraud in PCR, so as to protect the legitimate rights and interests of consumers. A classification model of PCR was established by e-eye, flash GC e-nose, and Fourier transform near-infrared (FT-NIR) spectroscopy combined with machine learning algorithms, which can quickly and accurately distinguish PCRs of different storage years. DFA and PLS-DA models built from the flash GC e-nose data distinguished PCRs of different ages, and 8 odor components were identified, among which (+)-limonene and γ-terpinene were the key components separating the age groups. In addition, classification and calibration models of PCR were established by combining FT-NIR with machine learning algorithms: SVM, KNN, LSTM, and CNN-LSTM for classification, and PLSR, LSTM, and CNN-LSTM for calibration. Among them, the CNN-LSTM model built with an internal capsule had significantly better classification and calibration performance than the other models. The accuracy of the classification model was 98.21%. The R2P values for age, (+)-limonene, and γ-terpinene were 0.9912, 0.9875, and 0.9891, respectively. These results showed that flash GC e-nose and FT-NIR combined with a deep learning algorithm can quickly and accurately distinguish PCRs of different ages, providing an effective and reliable method for monitoring the quality of PCR on the market.
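The calibration models above are reported via R2P, the coefficient of determination on an external prediction set. A minimal sketch of that metric (the function name is an illustrative assumption; the formula is the standard one):

```python
import numpy as np

def r2p(y_true, y_pred):
    """Coefficient of determination R^2 evaluated on a held-out prediction set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

# Perfect prediction yields 1.0; predicting the mean yields 0.0.
```

Values near 1, such as the 0.9912 reported for age, indicate that the CNN-LSTM calibration explains nearly all of the variance in the reference values on unseen samples.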

PMID:38384686 | PMC:PMC10879671 | DOI:10.1016/j.fochx.2024.101220

Categories: Literature Watch
