Deep learning
Diagnosis of retinal damage using Resnet rescaling and support vector machine (Resnet-RS-SVM): a case study from an Indian hospital
Int Ophthalmol. 2024 Apr 13;44(1):174. doi: 10.1007/s10792-024-03058-0.
ABSTRACT
PURPOSE: This study aims to address the challenge of identifying retinal damage in medical applications through a computer-aided diagnosis (CAD) approach. Data was collected from four prominent eye hospitals in India for analysis and model development.
METHODS: Data was collected from Silchar Medical College and Hospital (SMCH), Aravind Eye Hospital (Tamil Nadu), LV Prasad Eye Hospital (Hyderabad), and Medanta (Gurugram). A modified version of the ResNet-101 architecture, named ResNet-RS, was utilized for retinal damage identification. In this modified architecture, the last layer's softmax function was replaced with a support vector machine (SVM). The resulting model, termed ResNet-RS-SVM, was trained and evaluated on each hospital's dataset individually and collectively.
RESULTS: The proposed ResNet-RS-SVM model achieved high accuracies across the datasets from the different hospitals: 99.17% for Aravind, 98.53% for LV Prasad, 98.33% for Medanta, and 100% for SMCH. When considering all hospitals collectively, the model attained an accuracy of 97.19%.
CONCLUSION: The findings demonstrate the effectiveness of the ResNet-RS-SVM model in accurately identifying retinal damage in diverse datasets collected from multiple eye hospitals in India. This approach presents a promising advancement in computer-aided diagnosis for improving the detection and management of retinal diseases.
PMID:38613630 | DOI:10.1007/s10792-024-03058-0
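The head replacement described in the methods above (a linear SVM in place of the final softmax layer) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ResNet-RS backbone is stood in for by a fixed random projection, and the retinal images by synthetic two-class data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen ResNet-RS backbone: a fixed random projection
# (the real model would output the activations of the penultimate layer).
W_backbone = rng.normal(size=(16, 8))

def extract_features(x):
    return np.tanh(x @ W_backbone)

# Synthetic two-class data standing in for "damaged" vs. "healthy" images.
x_pos = rng.normal(loc=+2.0, size=(100, 16))
x_neg = rng.normal(loc=-2.0, size=(100, 16))
X = extract_features(np.vstack([x_pos, x_neg]))
y = np.array([+1] * 100 + [-1] * 100)

# Linear SVM trained on the extracted features by subgradient descent on
# the regularized hinge loss -- this is what replaces the softmax layer.
w, b, lam, lr = np.zeros(X.shape[1]), 0.0, 1e-3, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    viol = margins < 1                  # samples violating the margin
    if viol.any():
        w -= lr * (lam * w - (y[viol][:, None] * X[viol]).mean(axis=0))
        b -= lr * (-y[viol].mean())
    else:
        w -= lr * lam * w

accuracy = (np.sign(X @ w + b) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice the SVM head is trained on features from the frozen backbone, so only the hinge-loss classifier needs optimizing.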
The Road to Robust and Automated Strain Measurements in Echocardiography by Deep Learning
JACC Cardiovasc Imaging. 2024 Mar 26:S1936-878X(24)00110-4. doi: 10.1016/j.jcmg.2024.02.015. Online ahead of print.
NO ABSTRACT
PMID:38613555 | DOI:10.1016/j.jcmg.2024.02.015
Diffusion-Based Generative Network for de Novo Synthetic Promoter Design
ACS Synth Biol. 2024 Apr 13. doi: 10.1021/acssynbio.4c00041. Online ahead of print.
ABSTRACT
Computer-aided promoter design is a major trend in synthetic promoter engineering. Various deep learning models have been used to evaluate or screen synthetic promoters, but there has been little work on de novo promoter design. To explore the potential of generative models in promoter design, we established a diffusion-based generative model for promoter design in Escherichia coli. The model was driven entirely by sequence data and learned the essential characteristics of natural promoters, generating synthetic promoters similar to natural promoters in structure and composition. We also improved the calculation of the FID (Fréchet inception distance) metric, using a convolutional layer to extract the feature matrix of the promoter sequence instead. The resulting FID of 1.37 indicates that the synthetic promoters have a distribution similar to that of natural ones. Our work provides a fresh approach to de novo promoter design, indicating that a completely data-driven generative model is feasible for promoter design.
PMID:38613497 | DOI:10.1021/acssynbio.4c00041
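The Fréchet-distance computation behind the FID score reported above can be sketched as follows, assuming the standard FID definition; the convolutional feature extractor is stood in for by precomputed feature matrices, and the promoter data are synthetic.

```python
import numpy as np

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets.

    FID = ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2 (Ca Cb)^{1/2}).
    Tr((Ca Cb)^{1/2}) is obtained from the eigenvalues of Ca @ Cb, which
    are real and non-negative for covariance (PSD) matrices.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    ca = np.cov(feats_a, rowvar=False)
    cb = np.cov(feats_b, rowvar=False)
    eigvals = np.linalg.eigvals(ca @ cb)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(ca) + np.trace(cb) - 2.0 * tr_sqrt)

rng = np.random.default_rng(1)
natural = rng.normal(size=(500, 8))             # features of natural promoters
similar = rng.normal(size=(500, 8))             # generator close to the data
shifted = rng.normal(loc=3.0, size=(500, 8))    # generator far from the data

print(fid(natural, similar))   # small: distributions nearly match
print(fid(natural, shifted))   # large: dominated by the mean shift
```

A low FID, as the authors report (1.37), indicates the generated and natural feature distributions nearly coincide.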
Applications of Artificial Intelligence, Machine Learning, and Deep Learning in Nutrition: A Systematic Review
Nutrients. 2024 Apr 6;16(7):1073. doi: 10.3390/nu16071073.
ABSTRACT
In Industry 4.0, where the automation and digitalization of entities and processes are fundamental, artificial intelligence (AI) is increasingly becoming a pivotal tool offering innovative solutions in various domains. In this context, nutrition, a critical aspect of public health, is no exception to the fields influenced by the integration of AI technology. This study aims to comprehensively investigate the current landscape of AI in nutrition, providing a deep understanding of the potential of AI, machine learning (ML), and deep learning (DL) in nutrition sciences and highlighting current challenges and future directions. A hybrid approach combining systematic literature review (SLR) guidelines and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was adopted to systematically analyze the scientific literature from a search of major databases on artificial intelligence in nutrition sciences. A rigorous study selection was conducted using appropriate eligibility criteria, followed by a methodological quality assessment ensuring the robustness of the included studies. This review identifies several AI applications in nutrition, spanning smart and personalized nutrition, dietary assessment, food recognition and tracking, predictive modeling for disease prevention, and disease diagnosis and monitoring. The selected studies demonstrate the versatility of machine learning and deep learning techniques in handling complex relationships within nutritional datasets. This study provides a comprehensive overview of the current state of AI applications in nutrition sciences and identifies challenges and opportunities. With the rapid advancement of AI, its integration into nutrition holds significant promise for enhancing individual nutritional outcomes and optimizing dietary recommendations.
Researchers, policymakers, and healthcare professionals can utilize this research to design future projects and support evidence-based decision-making in AI for nutrition and dietary guidance.
PMID:38613106 | DOI:10.3390/nu16071073
Boosting Clear Cell Renal Carcinoma-Specific Drug Discovery Using a Deep Learning Algorithm and Single-Cell Analysis
Int J Mol Sci. 2024 Apr 8;25(7):4134. doi: 10.3390/ijms25074134.
ABSTRACT
Clear cell renal carcinoma (ccRCC), the most common subtype of renal cell carcinoma, is highly heterogeneous and has a highly complex tumor microenvironment. Existing clinical intervention strategies, such as targeted therapy and immunotherapy, have failed to achieve good therapeutic effects. In this article, single-cell transcriptome sequencing (scRNA-seq) data from six patients, downloaded from the GEO database, were adopted to describe the tumor microenvironment (TME) of ccRCC, including its T cells, tumor-associated macrophages (TAMs), endothelial cells (ECs), and cancer-associated fibroblasts (CAFs). Based on the differential typing of the TME, we identified tumor cell-specific regulatory programs mediated by three key transcription factors (TFs); among these, the TF EPAS1/HIF-2α was selected for virtual drug screening through analysis of its protein structure. Then, a deep graph neural network combined with machine learning algorithms was used to select anti-ccRCC compounds from bioactive compound libraries, including the FDA-approved drug library, the natural product library, and the human endogenous metabolite compound library. Finally, five compounds were obtained, including two FDA-approved drugs (flufenamic acid and fludarabine), one endogenous metabolite, one immunology/inflammation-related compound, and one inhibitor of DNA methyltransferase (N4-methylcytidine, a cytosine nucleoside analogue that, like zebularine, inhibits DNA methyltransferase). Based on the tumor microenvironment characteristics of ccRCC, five ccRCC-specific compounds were identified that may guide the clinical treatment of ccRCC patients.
PMID:38612943 | DOI:10.3390/ijms25074134
Integrated Computational Approaches for Drug Design Targeting Cruzipain
Int J Mol Sci. 2024 Mar 27;25(7):3747. doi: 10.3390/ijms25073747.
ABSTRACT
New cruzipain inhibitors are needed beyond current medications for Chagas disease because of the need for safer, more effective treatments. Cruzipain, a crucial cysteine protease of Trypanosoma cruzi, has driven interest in using computational methods to create more effective inhibitors. We employed a 3D-QSAR model built on a dataset of 36 known inhibitors, together with a pharmacophore model, to identify potential cruzipain inhibitors. We also built a deep learning model using the DeepPurpose library, trained on 204 active compounds, and validated it with a specific test set. In a comprehensive screening of the DrugBank database of 8533 molecules, the pharmacophore and deep learning models identified 1012 and 340 drug-like molecules, respectively. These molecules were further evaluated through molecular docking, followed by induced-fit docking. Finally, molecular dynamics simulation was performed for the most potent inhibitors, those exhibiting strong binding interactions. These results present four novel cruzipain inhibitors that can inhibit the cruzipain protein of T. cruzi.
PMID:38612558 | DOI:10.3390/ijms25073747
Predicting the Structure of Enzymes with Metal Cofactors: The Example of [FeFe] Hydrogenases
Int J Mol Sci. 2024 Mar 25;25(7):3663. doi: 10.3390/ijms25073663.
ABSTRACT
The advent of deep learning algorithms for protein folding opened a new era in the ability to predict and optimize the function of proteins once the sequence is known. The task is more intricate when cofactors like metal ions or small ligands are essential to functioning. In this case, the combined use of traditional simulation methods based on interatomic force fields and deep learning predictions is mandatory. We use the example of [FeFe] hydrogenases, enzymes of unicellular algae that are promising for biotechnology applications, to illustrate this situation. [FeFe] hydrogenase is an iron-sulfur protein that catalyzes the chemical reduction of protons dissolved in liquid water into molecular hydrogen as a gas. Hydrogen production efficiency and cell sensitivity to dioxygen are important parameters to optimize for the industrial application of biological hydrogen production. Both parameters are related to the organization of iron-sulfur clusters within protein domains. In this work, we propose possible three-dimensional structures of Chlorella vulgaris 211/11P [FeFe] hydrogenase, the sequence of which was extracted from the recently published genome of the given strain. Initial structural models were built using: (i) the deep learning algorithm AlphaFold; (ii) the homology modeling server SwissModel; (iii) a manual construction based on the best-known bacterial crystal structure. Missing iron-sulfur clusters were included, and microsecond-long molecular dynamics simulations of the initial structures embedded in an aqueous environment were performed. Multiple-walker metadynamics was also used to enhance the sampling of structures encompassing both functional and non-functional organizations of iron-sulfur clusters. The resulting structural model provided by deep learning is consistent with a functional [FeFe] hydrogenase characterized by peculiar interactions between cofactors and the protein matrix.
PMID:38612474 | DOI:10.3390/ijms25073663
A Serial Multi-Scale Feature Fusion and Enhancement Network for Amur Tiger Re-Identification
Animals (Basel). 2024 Apr 4;14(7):1106. doi: 10.3390/ani14071106.
ABSTRACT
The Amur tiger is an endangered species, and its re-identification (re-ID) plays an important role in regional biodiversity assessment and wildlife resource statistics. This paper focuses on Amur tiger re-ID from visible-light images taken from surveillance video screenshots or camera traps, aiming to solve the low accuracy caused by camera perspective, background noise, changes in motion posture, and deformation of Amur tiger body patterns during the re-ID process. To overcome these challenges, we propose a serial multi-scale feature fusion and enhancement re-ID network for the Amur tiger, in which global and local branches are constructed. Specifically, we design a global inverted-pyramid multi-scale feature fusion method in the global branch to effectively fuse multi-scale global features and preserve high-level, fine-grained, and deep semantic features. We also design a local dual-domain attention feature enhancement method in the local branch, further enhancing local feature extraction and fusion by dividing local feature blocks. Based on the above model structure, we evaluated the effectiveness and feasibility of the model on the public Amur Tiger Re-identification in the Wild (ATRW) dataset and achieved good results on mAP, Rank-1, and Rank-5, demonstrating competitive performance. In addition, since our proposed model requires no additional expensive annotation information and incorporates no extra pre-training modules, it has important advantages such as strong transferability and simple training.
PMID:38612345 | DOI:10.3390/ani14071106
Automatic Identification of Pangolin Behavior Using Deep Learning Based on Temporal Relative Attention Mechanism
Animals (Basel). 2024 Mar 28;14(7):1032. doi: 10.3390/ani14071032.
ABSTRACT
With declining populations in the wild, captive rescue and breeding have become one of the most important ways to protect pangolins from extinction. At present, the success rate of artificial breeding is low, owing to insufficient understanding of pangolin breeding behavior. Automatic recognition based on machine vision not only enables 24-hour monitoring but also reduces the stress response of pangolins. This paper aimed to establish a temporal relation and attention mechanism network (Pangolin breeding attention and transfer network, PBATn) to monitor and recognize pangolin behaviors, including breeding and daily behavior. A total of 11,476 videos covering breeding and daily behavior were divided into training, validation, and test sets. On the training and validation sets, the PBATn network model had accuracies of 98.95% and 96.11%, and loss function values of 0.1531 and 0.1852, respectively. The model is suitable for a 2.40 m × 2.20 m (length × width) pangolin cage area, with a nest box measuring 40 cm × 30 cm × 30 cm (length × width × height) positioned on either the left or right side inside the cage. A spherical night-vision monitoring camera was installed on the cage wall at a height of 2.50 m above the ground. On the test set, the mean Average Precision (mAP), average accuracy, average recall, average specificity, and average F1 score were higher than those of SlowFast, X3D, TANet, TSN, etc., with values of 97.50%, 99.17%, 97.55%, 99.53%, and 97.48%, respectively. The recognition accuracies of PBATn were 94.00% and 98.50% for the chasing and mounting breeding behaviors, respectively. The results showed that PBATn outperformed the baseline methods in all aspects. This study shows that the deep learning system can accurately observe pangolin breeding behavior and will be useful for analyzing the behavior of these animals.
PMID:38612271 | DOI:10.3390/ani14071032
Bridging Nanomanufacturing and Artificial Intelligence-A Comprehensive Review
Materials (Basel). 2024 Apr 2;17(7):1621. doi: 10.3390/ma17071621.
ABSTRACT
Nanomanufacturing and digital manufacturing (DM) are defining the forefront of the fourth industrial revolution (Industry 4.0) as enabling technologies for the processing of materials spanning several length scales. This review delineates the evolution of nanomaterials and nanomanufacturing in the digital age for applications in medicine, robotics, sensory technology, semiconductors, and consumer electronics. The incorporation of artificial intelligence (AI) tools to explore nanomaterial synthesis, optimize nanomanufacturing processes, and aid high-fidelity nanoscale characterization is discussed. This paper elaborates on different machine-learning and deep-learning algorithms for analyzing nanoscale images, designing nanomaterials, and nanoscale quality assurance. The challenges associated with applying machine- and deep-learning models to achieve robust and accurate predictions are outlined. The prospects of incorporating sophisticated AI algorithms such as reinforcement learning, explainable artificial intelligence (XAI), and big data analytics for material synthesis, manufacturing process innovation, and nanosystem integration are discussed.
PMID:38612135 | DOI:10.3390/ma17071621
Developing an Improved Cycle Architecture for AI-Based Generation of New Structures Aimed at Drug Discovery
Molecules. 2024 Mar 27;29(7):1499. doi: 10.3390/molecules29071499.
ABSTRACT
Drug discovery involves the crucial step of optimizing molecules with desired structural groups. In the domain of computer-aided drug discovery, deep learning has emerged as a prominent technique in molecular modeling, and deep generative models play a crucial role in generating novel molecules during molecular optimization. However, many existing molecular generative models are limited in that they process input information only in the forward direction. To overcome this limitation, we propose an improved generative model called BD-CycleGAN, which incorporates BiLSTM (bidirectional long short-term memory) and Mol-CycleGAN (molecular cycle generative adversarial network) to preserve the information of the molecular input. To evaluate the proposed model, we assess its performance by analyzing the structural distribution and evaluation metrics of generated molecules in the process of structural transformation. The results demonstrate that the BD-CycleGAN model achieves a higher success rate and exhibits increased diversity in molecular generation. Furthermore, we demonstrate its application in molecular docking, where it successfully increases the docking scores of the generated molecules. The proposed BD-CycleGAN architecture harnesses the power of deep learning to facilitate the generation of molecules with desired structural features, offering promising advancements for the drug discovery process.
PMID:38611779 | DOI:10.3390/molecules29071499
Opportunistic Screening for Acute Vertebral Fractures on a Routine Abdominal or Chest Computed Tomography Scans Using an Automated Deep Learning Model
Diagnostics (Basel). 2024 Apr 8;14(7):781. doi: 10.3390/diagnostics14070781.
ABSTRACT
OBJECTIVES: To develop an opportunistic screening model based on a deep learning algorithm to detect recent vertebral fractures in abdominal or chest CTs.
MATERIALS AND METHODS: A total of 1309 coronal reformatted images (504 with a recent fracture from 119 patients, and 805 without fracture from 115 patients) from torso CTs, performed from September 2018 to April 2022 on patients who also had a spine MRI within two months, were included. Two readers participated in image selection and manually labeled the fractured segment on each selected image using Neuro-T (version 2.3.3; Neurocle Inc.) software. We split the images randomly into the training and internal test set (labeled:unlabeled = 480:700) and the secondary internal validation set (24:105). For the observer study, three radiologists reviewed the CT images in the external test set with and without deep learning assistance and independently scored the likelihood of an acute fracture in each image.
RESULTS: For the training and internal test sets, the AI achieved a 99.86% test accuracy, 91.22% precision, and 89.18% F1 score for detection of recent fracture. Then, in the secondary internal validation set, it achieved 99.90%, 74.93%, and 78.30%, respectively. In the observer study, with the assistance of the deep learning algorithm, a significant improvement was observed in the radiology resident's accuracy, from 92.79% to 98.2% (p = 0.04).
CONCLUSION: The model showed a high level of accuracy in the test set and also the internal validation set. If this algorithm is applied opportunistically to daily torso CT evaluation, it will be helpful for the early detection of fractures that require treatment.
PMID:38611694 | DOI:10.3390/diagnostics14070781
Deep Learning Detection and Segmentation of Facet Joints in Ultrasound Images Based on Convolutional Neural Networks and Enhanced Data Annotation
Diagnostics (Basel). 2024 Apr 2;14(7):755. doi: 10.3390/diagnostics14070755.
ABSTRACT
Facet joint injection is the most common procedure used to relieve lower back pain. In this paper, we proposed a deep learning method for detecting and segmenting facet joints in ultrasound images based on convolutional neural networks (CNNs) and enhanced data annotation. In the enhanced data annotation, the facet joint was treated as the first target and the ventral complex as the second target to improve the capability of CNNs in recognizing the facet joint. A total of 300 cases of patients undergoing pain treatment were included. The ultrasound images were captured and labeled by two professional anesthesiologists and then augmented to train a deep learning model based on the Mask Region-based CNN (Mask R-CNN). The performance of the deep learning model was evaluated using the average precision (AP) on the testing sets. The data augmentation and data annotation methods were found to improve the AP. The AP50 for facet joint detection and segmentation was 90.4% and 85.0%, respectively, demonstrating the satisfactory performance of the deep learning model. We presented a deep learning method for facet joint detection and segmentation in ultrasound images based on enhanced data annotation and the Mask R-CNN, demonstrating the feasibility and potential of deep learning techniques in facet joint ultrasound image analysis.
PMID:38611668 | DOI:10.3390/diagnostics14070755
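The AP50 figures above come from IoU-matched detections. A minimal sketch of detection AP at a single IoU threshold (VOC-style interpolated precision-recall integration), using made-up boxes and scores rather than the paper's data:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, truths, thr=0.5):
    """AP at one IoU threshold; detections are (score, box) pairs."""
    detections = sorted(detections, key=lambda d: -d[0])
    matched, tp = set(), np.zeros(len(detections))
    for i, (_score, box) in enumerate(detections):
        # Greedily match each detection to the best unmatched ground truth.
        best_j, best_iou = -1, thr
        for j, gt in enumerate(truths):
            ov = iou(box, gt)
            if j not in matched and ov >= best_iou:
                best_j, best_iou = j, ov
        if best_j >= 0:
            matched.add(best_j)
            tp[i] = 1
    recalls = np.concatenate([[0.0], np.cumsum(tp) / len(truths)])
    precisions = np.concatenate([[1.0], np.cumsum(tp) / np.arange(1, len(tp) + 1)])
    # Monotone (right-to-left) precision envelope, integrated over recall.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    return float(np.sum(np.diff(recalls) * precisions[1:]))

truths = [[10, 10, 50, 50], [60, 60, 90, 90]]
detections = [(0.9, [12, 11, 49, 52]),      # close to the first target
              (0.8, [100, 100, 120, 120]),  # false positive
              (0.7, [58, 61, 92, 88])]      # close to the second target

print(average_precision(detections, truths))   # 0.833...
```

AP50 is this quantity with `thr=0.5`, averaged over the test images.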
Effect of tokenization on transformers for biological sequences
Bioinformatics. 2024 Apr 12:btae196. doi: 10.1093/bioinformatics/btae196. Online ahead of print.
ABSTRACT
MOTIVATION: Deep-learning models are transforming biological research, including many bioinformatics and comparative genomics algorithms, such as sequence alignment, phylogenetic tree inference, and automatic classification of protein functions. Among these deep-learning algorithms, models for processing natural languages, developed in the natural language processing (NLP) community, were recently applied to biological sequences. However, biological sequences differ from natural languages such as English and French, in which segmenting the text into separate words is relatively straightforward. Moreover, biological sequences are characterized by extremely long sentences, which hamper their processing by current machine-learning models, notably the transformer architecture. In NLP, one of the first processing steps is to transform the raw text into a list of tokens, whereas deep-learning applications to biological sequence data mostly segment proteins and DNA into single characters. In this work, we study the effect of alternative tokenization algorithms on eight different tasks in biology, from predicting the function of proteins and their stability, through nucleotide sequence alignment, to classifying proteins into specific families.
RESULTS: We demonstrate that applying alternative tokenization algorithms can increase accuracy and at the same time, substantially reduce the input length compared to the trivial tokenizer in which each character is a token. Furthermore, applying these tokenization algorithms allows interpreting trained models, taking into account dependencies among positions. Finally, we trained these tokenizers on a large dataset of protein sequences containing more than 400 billion amino acids, which resulted in over a three-fold decrease in the number of tokens. We then tested these tokenizers trained on large-scale data on the above specific tasks and showed that for some tasks it is highly beneficial to train database-specific tokenizers. Our study suggests that tokenizers are likely to be a critical component in future deep-network analysis of biological sequence data.
AVAILABILITY: Code, data and trained tokenizers are available on https://github.com/technion-cs-nlp/BiologicalTokenizers.
PMID:38608190 | DOI:10.1093/bioinformatics/btae196
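A minimal sketch of the kind of subword tokenizer studied above, contrasted with the trivial character tokenizer: a few byte-pair-encoding (BPE) merges learned over toy amino-acid strings. This illustrates the general BPE technique, not the paper's trained tokenizers:

```python
from collections import Counter

def merge_pair(toks, a, b):
    """Replace every adjacent (a, b) pair in a token list with the merged token."""
    out, i = [], 0
    while i < len(toks):
        if i + 1 < len(toks) and toks[i] == a and toks[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(toks[i])
            i += 1
    return out

def learn_bpe(sequences, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent pair."""
    corpus = [list(seq) for seq in sequences]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in corpus:
            pairs.update(zip(toks, toks[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        corpus = [merge_pair(toks, a, b) for toks in corpus]
    return merges

def tokenize(seq, merges):
    toks = list(seq)            # start from the trivial character tokenization
    for a, b in merges:
        toks = merge_pair(toks, a, b)
    return toks

# Toy amino-acid sequences (hypothetical, not from the paper's corpus).
proteins = ["MKVLAAGLLA", "MKVLSSGLLA", "MKVLAAGAAG"]
merges = learn_bpe(proteins, num_merges=4)
tokens = tokenize(proteins[0], merges)
print(merges)
print(tokens, f"{len(proteins[0])} chars -> {len(tokens)} tokens")
```

Even four merges shorten the token sequence, which is the input-length reduction the paper reports at scale.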
Learning to learn by yourself: Unsupervised meta-learning with self-knowledge distillation for COVID-19 diagnosis from pneumonia cases
Int J Intell Syst. 2021 Aug;36(8):4033-4064. doi: 10.1002/int.22449. Epub 2021 May 13.
ABSTRACT
The goal of diagnosing coronavirus disease 2019 (COVID-19) from suspected pneumonia cases, that is, recognizing COVID-19 from chest X-ray or computed tomography (CT) images, is to improve diagnostic accuracy, leading to faster intervention. The most important and challenging problem here is to design an effective and robust diagnosis model. To this end, three challenges must be overcome: (1) The lack of training samples limits the success of existing deep-learning-based methods. (2) Many public COVID-19 data sets contain only a few images without fine-grained labels. (3) Due to the explosive growth of suspected cases, it is urgent and important to diagnose not only COVID-19 cases but also cases of other types of pneumonia with symptoms similar to those of COVID-19. To address these issues, we propose a novel framework, Unsupervised Meta-Learning with Self-Knowledge Distillation, for differentiating COVID-19 from other pneumonia cases. During training, our model does not use any true labels and aims to gain the ability of learning to learn by itself. In particular, we first present a deep diagnosis model based on a relation network to capture and memorize the relations among different images. Second, to enhance the performance of our model, we design a self-knowledge distillation mechanism that distills knowledge within the model itself: the network is divided into several parts, and the knowledge in the deeper parts is squeezed into the shallower ones. The final results are derived by learning to compare the features of images. Experimental results demonstrate that our approach achieves significantly higher performance than other state-of-the-art methods. Moreover, we construct a new COVID-19 pneumonia data set based on text mining, consisting of 2696 COVID-19 images (347 X-ray + 2349 CT), 10,155 images (9661 X-ray + 494 CT) of other types of pneumonia, and fine-grained labels for all.
Our data set covers not only bacterial and viral infections that cause pneumonia but also distinguishes viral infections derived from the influenza virus from those derived from a coronavirus.
PMID:38607826 | PMC:PMC8242586 | DOI:10.1002/int.22449
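The self-knowledge distillation step described above (knowledge in the deeper parts of the network "squeezed" into the shallower ones) is commonly implemented as a KL divergence between temperature-softened outputs. A sketch of that loss on synthetic logits, as an assumption about the general technique rather than the authors' exact formulation:

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(deep_logits, shallow_logits, temperature=4.0):
    """KL(deep || shallow) on temperature-softened class distributions.

    The deeper part of the network provides soft targets for the
    shallower part -- distillation within a single model.
    """
    teacher = softmax(deep_logits, temperature)
    student = softmax(shallow_logits, temperature)
    return float(np.sum(teacher * (np.log(teacher) - np.log(student)),
                        axis=-1).mean())

rng = np.random.default_rng(0)
deep = rng.normal(size=(32, 3))                    # logits from the deepest branch
aligned = deep + 0.05 * rng.normal(size=(32, 3))   # shallow branch near the teacher
random_branch = rng.normal(size=(32, 3))           # untrained shallow branch

print(self_distillation_loss(deep, aligned))        # near zero
print(self_distillation_loss(deep, random_branch))  # larger
```

During training this loss is added per shallow branch, pulling each toward the deepest branch's predictions without any ground-truth labels.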
Machine learning for medical imaging-based COVID-19 detection and diagnosis
Int J Intell Syst. 2021 Sep;36(9):5085-5115. doi: 10.1002/int.22504. Epub 2021 May 31.
ABSTRACT
The novel coronavirus disease 2019 (COVID-19) is considered to be a significant health challenge worldwide because of its rapid human-to-human transmission, leading to a rise in the number of infected people and deaths. The detection of COVID-19 at the earliest stage is therefore of paramount importance for controlling the pandemic spread and reducing the mortality rate. The real-time reverse transcription-polymerase chain reaction, the primary method of diagnosis for coronavirus infection, has a relatively high false negative rate while detecting early stage disease. Meanwhile, the manifestations of COVID-19, as seen through medical imaging methods such as computed tomography (CT), radiograph (X-ray), and ultrasound imaging, show individual characteristics that differ from those of healthy cases or other types of pneumonia. Machine learning (ML) applications for COVID-19 diagnosis, detection, and the assessment of disease severity based on medical imaging have gained considerable attention. Herein, we review the recent progress of ML in COVID-19 detection with a particular focus on ML models using CT and X-ray images published in high-ranking journals, including a discussion of the predominant features of medical imaging in patients with COVID-19. Deep Learning algorithms, particularly convolutional neural networks, have been utilized widely for image segmentation and classification to identify patients with COVID-19 and many ML modules have achieved remarkable predictive results using datasets with limited sample sizes.
PMID:38607786 | PMC:PMC8242401 | DOI:10.1002/int.22504
A Deep Quantum Convolutional Neural Network Based Facial Expression Recognition For Mental Health Analysis
IEEE Trans Neural Syst Rehabil Eng. 2024;32:1556-1565. doi: 10.1109/TNSRE.2024.3385336.
ABSTRACT
The purpose of this work is to analyze how new technologies can enhance clinical practice while also examining the physical traits of emotional expressiveness in facial expressions across a number of psychiatric illnesses. Hence, in this work, an automatic facial expression recognition system is proposed that analyzes static, sequential, or video facial images from medical healthcare data to detect emotions in people's facial regions. The proposed method is implemented in five steps. The first step is image preprocessing, where a facial region of interest is segmented from the input image. The second component includes a classical deep feature representation and a quantum part that involves successive sets of quantum convolutional layers followed by random quantum variational circuits for feature learning. Here, the proposed system attains a faster training approach using the proposed quantum convolutional neural network, which takes [Formula: see text] time, whereas classical convolutional neural network models take [Formula: see text] time. Additionally, performance improvement techniques such as image augmentation, fine-tuning, matrix normalization, and transfer learning are applied to the recognition system. Finally, the scores of the classical and quantum deep learning models are fused to improve the performance of the proposed method. Extensive experiments on the Karolinska Directed Emotional Faces (KDEF), Static Facial Expressions in the Wild (SFEW 2.0), and Facial Expression Recognition 2013 (FER-2013) benchmark databases, together with comparisons against other state-of-the-art methods, show the improvement achieved by the proposed system.
PMID:38607744 | DOI:10.1109/TNSRE.2024.3385336
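The final score-fusion step above (combining classical and quantum model outputs) can be sketched as simple late fusion, a weighted average of per-class probabilities. The weighting and the probabilities below are hypothetical, not the authors' values:

```python
import numpy as np

def fuse_scores(classical_probs, quantum_probs, weight=0.5):
    """Late fusion: weighted average of two models' class probabilities."""
    fused = weight * classical_probs + (1.0 - weight) * quantum_probs
    return fused / fused.sum(axis=-1, keepdims=True)   # renormalize

# Hypothetical per-class probabilities for one face image over seven
# expression classes (e.g. the KDEF label set); not real model outputs.
classical = np.array([[0.10, 0.05, 0.55, 0.10, 0.05, 0.05, 0.10]])
quantum   = np.array([[0.05, 0.10, 0.45, 0.20, 0.05, 0.05, 0.10]])

fused = fuse_scores(classical, quantum)
print(fused.argmax(axis=-1))   # class 2 remains the top prediction
```

The fusion weight is a tunable hyperparameter; when the two models make uncorrelated errors, the averaged scores are typically more accurate than either model alone.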
DDA-SSNets: Dual decoder attention-based semantic segmentation networks for COVID-19 infection segmentation and classification using chest X-Ray images
J Xray Sci Technol. 2024 Apr 6. doi: 10.3233/XST-230421. Online ahead of print.
ABSTRACT
BACKGROUND: COVID-19 needs to be diagnosed and staged to be treated accurately. However, prior studies' diagnostic and staging abilities for COVID-19 infection needed to be improved. Therefore, new deep learning-based approaches are required to aid radiologists in detecting and quantifying COVID-19-related lung infections.
OBJECTIVE: To develop deep learning-based models to classify and quantify COVID-19-related lung infections.
METHODS: Initially, Dual Decoder Attention-based Semantic Segmentation Networks (DDA-SSNets) such as Dual Decoder Attention-UNet (DDA-UNet) and Dual Decoder Attention-SegNet (DDA-SegNet) are proposed to facilitate the dual segmentation tasks such as lung lobes and infection segmentation in chest X-ray (CXR) images. The lung lobe and infection segmentations are mapped to grade the severity of COVID-19 infection in both the lungs of CXRs. Later, a Genetic algorithm-based Deep Convolutional Neural Network classifier with the optimum number of layers, namely GADCNet, is proposed to classify the extracted regions of interest (ROI) from the CXR lung lobes into COVID-19 and non-COVID-19.
RESULTS: The DDA-SegNet shows better segmentation with an average BCSSDC of 99.53% and 99.97% for lung lobes and infection segmentations, respectively, compared with DDA-UNet with an average BCSSDC of 99.14% and 99.92%. The proposed DDA-SegNet with GADCNet classifier offered excellent classification results with an average BCCAC of 99.98%, followed by the GADCNet with DDA-UNet with an average BCCAC of 99.92% after extensive testing and analysis.
CONCLUSIONS: The results show that the proposed DDA-SegNet has superior performance in the segmentation of lung lobes and COVID-19-infected regions in CXRs, along with improved severity grading compared to the DDA-UNet, and that the GADCNet classifier has improved accuracy in classifying the CXRs into COVID-19 and non-COVID-19.
PMID:38607728 | DOI:10.3233/XST-230421
A user-friendly deep learning application for accurate lung cancer diagnosis
J Xray Sci Technol. 2024 Apr 9. doi: 10.3233/XST-230255. Online ahead of print.
ABSTRACT
BACKGROUND: Accurate diagnosis and well-delineated treatment planning depend on clinicians' experience with comparable cases. Applying deep learning to image processing is useful for creating tools that promise faster, high-quality diagnoses, but the accuracy and precision of reconstructing 3-D information from 2-D data may be limited by factors such as superposition of organs, distortion and magnification, and the appearance of new pathologies. The purpose of this research is to use radiomics and deep learning to develop a tool for lung cancer diagnosis.
METHODS: This study applies radiomics and deep learning to lung cancer diagnosis to help clinicians analyze images accurately and plan appropriate treatment. Eighty-six patients were recruited from Bach Mai Hospital, and 1012 patients were collected from an open-source database. First, deep learning was applied for segmentation with U-Net and for cancer classification with a DenseNet model. Second, radiomics was used to measure and calculate diameter, surface area, and volume. Finally, the hardware was designed by connecting an Arduino Nano to an MFRC522 module to read patient data from tags, and the user interface was built as a web application in Python using Streamlit.
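The abstract lists diameter, surface area, and volume as the radiomic measurements but gives no formulas; assuming standard mask-based definitions (voxel-count volume, equivalent-sphere diameter, and exposed-voxel-face surface area), a minimal NumPy sketch might look like:

```python
import numpy as np

def nodule_metrics(mask, spacing=(1.0, 1.0, 1.0)):
    """Simple radiomic measurements from a 3-D binary mask (1 = lesion).

    spacing: voxel size along each axis in mm.
    """
    mask = np.asarray(mask, dtype=np.int8)
    voxel_vol = float(np.prod(spacing))
    volume = float(mask.sum()) * voxel_vol                      # mm^3
    # diameter of a sphere with the same volume
    diameter = 2.0 * (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    # surface area approximated by summing exposed voxel-face areas
    padded = np.pad(mask, 1)
    face_area = (spacing[1] * spacing[2],
                 spacing[0] * spacing[2],
                 spacing[0] * spacing[1])
    surface = sum(float(np.abs(np.diff(padded, axis=ax)).sum()) * face_area[ax]
                  for ax in range(3))
    return volume, diameter, surface
```

For example, a single 1 mm isotropic voxel gives a volume of 1 mm^3, a surface of 6 mm^2, and an equivalent-sphere diameter of about 1.24 mm.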
RESULTS: The segmentation model yielded a validation loss of 0.498 and a training loss of 0.27; the cancer classification model yielded a validation loss of 0.78 and a training accuracy of 0.98. The resulting diagnostic capabilities (recognition and classification of lung cancer from chest CT scans) were promising.
CONCLUSIONS: The model provides a means of storing and updating patients' data directly in the interface, making results readily available to healthcare providers. The developed system will improve clinical communication and information exchange. Moreover, it can support care management by generating correlated, coherent summaries of cancer diagnoses.
PMID:38607727 | DOI:10.3233/XST-230255
TransC-ac4C: Identification of N4-acetylcytidine (ac4C) sites in mRNA using deep learning
IEEE/ACM Trans Comput Biol Bioinform. 2024 Apr 12;PP. doi: 10.1109/TCBB.2024.3386972. Online ahead of print.
ABSTRACT
N4-acetylcytidine (ac4C) is a post-transcriptional modification of mRNA that is critical to mRNA stability and translational regulation. In the past few years, numerous approaches employing convolutional neural networks (CNNs) and Transformers have been proposed for the identification of ac4C sites, with each family of approaches capturing distinct characteristics. CNN-based methods excel at extracting local features and positional information, whereas Transformer-based ones stand out at establishing long-range dependencies and generating global representations. Given the importance of both local and global features in identifying mRNA ac4C sites, we propose a novel method, TransC-ac4C, which combines a CNN and a Transformer to enhance feature extraction and improve identification accuracy. Five feature encoding strategies (one-hot, NCP, ND, EIIP, and k-mer) are employed to generate mRNA sequence representations, embedding both the sequence attributes and the physical and chemical properties of the sequences. To strengthen the relevance of the features, we construct a novel feature fusion method: the CNN first processes the five individual encodings, whose outputs are stitched together and fed to the Transformer layer. In this way, the CNN extracts local features and the Transformer subsequently establishes global long-range dependencies among them. We evaluate the model with 5-fold cross-validation, and the evaluation indicators improve significantly, with prediction accuracy on the two datasets reaching up to 81.42%.
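The abstract names the CNN-then-Transformer ordering and the one-hot encoding; the sketch below illustrates only that pipeline in PyTorch, using a single encoding instead of all five and illustrative hyperparameters (kernel size, model width, head count are assumptions, not the paper's values).

```python
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq):
    """One-hot encode an RNA sequence -> (len, 4) float tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).float()

class CNNTransformer(nn.Module):
    """CNN for local motifs, then a Transformer for long-range dependencies."""
    def __init__(self, d_model=32):
        super().__init__()
        # 1-D convolution over the sequence captures local patterns
        self.conv = nn.Conv1d(4, d_model, kernel_size=5, padding=2)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.fc = nn.Linear(d_model, 1)

    def forward(self, x):                      # x: (batch, len, 4)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, len, d_model)
        h = self.encoder(h)                    # global self-attention
        return torch.sigmoid(self.fc(h.mean(dim=1)))      # P(ac4C site)
```

In the paper's full design, each of the five encodings would pass through its own CNN before concatenation; here a single one-hot branch stands in for that fusion step.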
PMID:38607721 | DOI:10.1109/TCBB.2024.3386972