Deep learning
The Evolution of Artificial Intelligence in Biomedicine: Bibliometric Analysis
JMIR AI. 2023 Dec 19;2:e45770. doi: 10.2196/45770.
ABSTRACT
BACKGROUND: The utilization of artificial intelligence (AI) technologies in the biomedical field has attracted increasing attention in recent decades. Studying how past AI technologies have found their way into medicine over time can help to predict which current (and future) AI technologies have the potential to be utilized in medicine in the coming years, thereby providing a helpful reference for future research directions.
OBJECTIVE: The aim of this study was to predict the future trend of AI technologies used in different biomedical domains based on past trends of related technologies and biomedical domains.
METHODS: We collected a large corpus of articles from the PubMed database pertaining to the intersection of AI and biomedicine. Initially, we attempted to use regression on the extracted keywords alone; however, we found that this approach did not provide sufficient information. Therefore, we propose a method called "background-enhanced prediction" to expand the knowledge utilized by the regression algorithm by incorporating both the keywords and their surrounding context. This method of data construction resulted in improved performance across the six regression models evaluated. Our findings were confirmed through experiments on recurrent prediction and forecasting.
RESULTS: In our analysis using background information for prediction, we found that a window size of 3 yielded the best results, outperforming the use of keywords alone. Furthermore, utilizing data only prior to 2017, our regression projections for the period of 2017-2021 exhibited a high coefficient of determination (R2), reaching up to 0.78 and demonstrating the effectiveness of our method in predicting long-term trends. Based on the prediction, studies related to proteins and tumors will be pushed out of the top 20 and replaced by early diagnosis, tomography, and other detection technologies, areas that are well suited to incorporating AI technology. Deep learning, machine learning, and neural networks continue to be the dominant AI technologies in biomedical applications. Generative adversarial networks represent an emerging technology with a strong growth trend.
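The window-based feature construction described above can be sketched as follows. This is an illustrative toy only: the names and the yearly counts are hypothetical, and the paper's actual method additionally incorporates the textual context surrounding each keyword, which this sketch omits.

```python
# Build regression features from a sliding "background" window of the
# preceding years, versus a single prior value (keyword-only baseline).

def windowed_features(series, window):
    """Return (X, y) where each row of X holds the `window` preceding
    values and y is the value to be predicted."""
    X, y = [], []
    for i in range(window, len(series)):
        X.append(series[i - window:i])
        y.append(series[i])
    return X, y

# Hypothetical yearly publication counts for one keyword.
counts = [5, 8, 13, 21, 30, 44, 60]

X3, y3 = windowed_features(counts, window=3)  # background window of 3
X1, y1 = windowed_features(counts, window=1)  # keyword-only baseline
```

Each row of `X3` gives the regressor three years of history instead of one, which is the sense in which the window "expands the knowledge utilized by the regression algorithm".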
CONCLUSIONS: In this study, we explored AI trends in the biomedical field and developed a predictive model to forecast future trends. Our findings were confirmed through experiments on current trends.
PMID:38875563 | DOI:10.2196/45770
RDLR: A Robust Deep Learning-Based Image Registration Method for Pediatric Retinal Images
J Imaging Inform Med. 2024 Jun 14. doi: 10.1007/s10278-024-01154-2. Online ahead of print.
ABSTRACT
Retinal diseases stand as a primary cause of childhood blindness. Analyzing the progression of these diseases requires close attention to lesion morphology and spatial information. Standard image registration methods fail to accurately reconstruct pediatric fundus images containing significant distortion and blurring. To address this challenge, we proposed a robust deep learning-based image registration method (RDLR). The method consisted of two modules: a registration module (RM) and a panoramic view module (PVM). RM effectively integrated global and local feature information and learned prior information related to the orientation of images. PVM was capable of reconstructing spatial information in panoramic images. Furthermore, as the registration model was trained on over 280,000 pediatric fundus images, we introduced an automatic registration-annotation generation process coupled with a quality control module to ensure the reliability of the training data. We compared the performance of RDLR to that of other methods, including a conventional registration pipeline (CRP), VoxelMorph (VM), a generalizable image matcher (GIM), and self-supervised techniques (SS). RDLR achieved significantly higher registration accuracy (average Dice score of 0.948) than the other methods (ranging from 0.491 to 0.802). The resulting panoramic retinal maps reconstructed by RDLR also demonstrated substantially higher fidelity (average Dice score of 0.960) compared to the other methods (ranging from 0.720 to 0.783). Overall, the proposed method addressed key challenges in pediatric retinal imaging, providing an effective solution to enhance disease diagnosis. Our source code is available at https://github.com/wuwusky/RobustDeepLeraningRegistration.
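The Dice scores quoted above measure overlap between predicted and reference regions. A minimal sketch of the metric on binary masks (not code from the paper) is:

```python
def dice_score(mask_a, mask_b):
    """Dice coefficient of two binary masks given as flat 0/1 lists:
    2*|A intersect B| / (|A| + |B|)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

a = [1, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1]
overlap = dice_score(a, b)  # 2*2 / (3+3)
```

A score of 1.0 means perfect overlap, so RDLR's 0.948 indicates near-complete agreement with the reference alignment.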
PMID:38874699 | DOI:10.1007/s10278-024-01154-2
Advancing Peptide-Based Cancer Therapy with AI: In-Depth Analysis of State-of-the-Art AI Models
J Chem Inf Model. 2024 Jun 14. doi: 10.1021/acs.jcim.4c00295. Online ahead of print.
ABSTRACT
Anticancer peptides (ACPs) play a vital role in selectively targeting and eliminating cancer cells. Evaluating and comparing predictions from various machine learning (ML) and deep learning (DL) techniques is challenging but crucial for anticancer drug research. We conducted a comprehensive analysis of 15 ML and 10 DL models, including the models released after 2022, and found that support vector machines (SVMs) with feature combination and selection significantly enhance overall performance. DL models, especially convolutional neural networks (CNNs) with light gradient boosting machine (LGBM) based feature selection approaches, demonstrate improved characterization. Assessment using a new test data set (ACP10) identifies ACPred, MLACP 2.0, AI4ACP, mACPred, and AntiCP2.0_AAC as successive optimal predictors, showcasing robust performance. Our review underscores current prediction tool limitations and advocates for an omnidirectional ACP prediction framework to propel ongoing research.
PMID:38874445 | DOI:10.1021/acs.jcim.4c00295
The Putative Prenyltransferase Nus1 is Required for Filamentation in the Human Fungal Pathogen Candida albicans
G3 (Bethesda). 2024 Jun 14:jkae124. doi: 10.1093/g3journal/jkae124. Online ahead of print.
ABSTRACT
Candida albicans is a major fungal pathogen of humans that can cause serious systemic infections in vulnerable immunocompromised populations. One of its virulence attributes is its capacity to transition between yeast and filamentous morphologies, but our understanding of this process remains incomplete. Here, we analyzed data from a functional genomic screen performed with the C. albicans Gene Replacement And Conditional Expression (GRACE) collection to identify genes crucial for morphogenesis in host-relevant conditions. Through manual scoring of microscopy images coupled with analysis of each image using a deep learning-based method termed Candescence, we identified 307 genes important for filamentation in tissue culture medium at 37 °C with 5% CO2. One such factor was orf19.5963, which is predicted to encode the prenyltransferase Nus1 based on sequence homology to Saccharomyces cerevisiae. We further showed that Nus1 and its predicted interacting partner Rer2 are important for filamentation in multiple liquid filament-inducing conditions as well as for wrinkly colony formation on solid agar. Finally, we highlight that Nus1 and Rer2 likely govern C. albicans morphogenesis due to their importance in intracellular trafficking, as well as maintaining lipid homeostasis. Overall, this work identifies Nus1 and Rer2 as important regulators of C. albicans filamentation and highlights the power of functional genomic screens in advancing our understanding of gene function in human fungal pathogens.
PMID:38874344 | DOI:10.1093/g3journal/jkae124
Synthesizing PET images from high-field and ultra-high-field MR images using joint diffusion attention model
Med Phys. 2024 Jun 14. doi: 10.1002/mp.17254. Online ahead of print.
ABSTRACT
BACKGROUND: Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) stand as pivotal diagnostic tools for brain disorders, offering the potential for mutually enriching disease diagnostic perspectives. However, the costs associated with PET scans and the inherent radioactivity have limited the widespread application of PET. Furthermore, it is noteworthy to highlight the promising potential of high-field and ultra-high-field neuroimaging in cognitive neuroscience research and clinical practice. With the enhancement of MRI resolution, a related question arises: can high-resolution MRI improve the quality of PET images?
PURPOSE: This study aims to enhance the quality of synthesized PET images by leveraging the superior resolution capabilities provided by high-field and ultra-high-field MRI.
METHODS: From a statistical perspective, the joint probability distribution is the most direct and fundamental approach for representing the correlation between PET and MRI. In this study, we proposed a novel model, the joint diffusion attention model (JDAM), which primarily focuses on learning information about the joint probability distribution. JDAM consists of two primary processes: the diffusion process and the sampling process. During the diffusion process, PET gradually transforms into a Gaussian noise distribution through the addition of Gaussian noise, while MRI remains fixed. The central objective of the diffusion process is to learn the gradient of the logarithm of the joint probability distribution between MRI and the noisy PET. The sampling process operates as a predictor-corrector: the predictor initiates a reverse diffusion process, and the corrector applies Langevin dynamics.
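The Langevin-dynamics "corrector" step mentioned above can be illustrated on a toy problem: given the score (gradient of the log density) of a target distribution, repeated noisy gradient steps draw samples from it. This 1-D Gaussian example only sketches the corrector, not the full predictor-corrector JDAM sampler, and all constants are illustrative.

```python
import math
import random

def score(x, mu=2.0, sigma=1.0):
    # For N(mu, sigma^2): d/dx log p(x) = -(x - mu) / sigma^2
    return -(x - mu) / sigma ** 2

def langevin_samples(n_chains=200, n_steps=500, eps=0.05, seed=0):
    """Run Langevin dynamics x <- x + eps*score(x) + sqrt(2*eps)*z
    from random starting points; the chains converge to samples from
    the target distribution."""
    rng = random.Random(seed)
    xs = [rng.uniform(-5.0, 5.0) for _ in range(n_chains)]
    for _ in range(n_steps):
        xs = [x + eps * score(x) + math.sqrt(2 * eps) * rng.gauss(0, 1)
              for x in xs]
    return xs

samples = langevin_samples()
mean = sum(samples) / len(samples)  # should approach mu = 2.0
```

In JDAM the score is learned by the network over the joint MRI-PET distribution rather than given in closed form, but the sampling mechanics are the same.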
RESULTS: Experimental results from the publicly available Alzheimer's Disease Neuroimaging Initiative dataset highlight the effectiveness of the proposed model compared to state-of-the-art (SOTA) models such as Pix2pix and CycleGAN. Significantly, synthetic PET images guided by ultra-high-field MRI exhibit marked improvements in signal-to-noise characteristics when contrasted with those generated from high-field MRI data. These results have been endorsed by medical experts, who consider the PET images synthesized through JDAM to possess scientific merit. This endorsement is based on their symmetrical features and precise representation of regions displaying hypometabolism, a hallmark of Alzheimer's disease.
CONCLUSIONS: This study establishes the feasibility of generating PET images from MRI. Synthesis of PET by JDAM significantly enhances image quality compared to SOTA models.
PMID:38874206 | DOI:10.1002/mp.17254
Leveraging Artificial Intelligence for Synergies in Drug Discovery: From Computers to Clinics
Curr Pharm Des. 2024 Jun 13. doi: 10.2174/0113816128308066240529121148. Online ahead of print.
ABSTRACT
Over the preceding decade, artificial intelligence (AI) has demonstrated outstanding performance across all dimensions of science, including the pharmaceutical sciences. AI uses machine learning (ML), deep learning (DL), and neural network (NN) approaches for novel algorithm and hypothesis development by training machines in multiple ways. AI-based drug development, from molecule identification to clinical approval, tremendously reduces cost and time compared with conventional methods. The development and regulatory approval of COVID-19 vaccines within 1-2 years is the finest example. Hence, AI is fast becoming a boon for scientific researchers seeking to streamline their discoveries. AI-based FDA-approved nanomedicines perform well as target-selective, synergistic therapies, revitalize the theragnostic pharmaceutical stream, and significantly improve drug research outcomes. This comprehensive review delves into the fundamental aspects of AI along with its applications in the realm of the pharmaceutical life sciences. It explores AI's role in crucial areas such as drug design, drug discovery and development, traditional Chinese medicine, and integration of multi-omics data, as well as investigations into drug repurposing and polypharmacology.
PMID:38874046 | DOI:10.2174/0113816128308066240529121148
Evaluation of Neoadjuvant Chemoradiotherapy Response in Rectal Cancer Using MR Images and Deep Learning Neural Networks
Curr Med Imaging. 2024;20(1):e15734056309748. doi: 10.2174/0115734056309748240509072222.
ABSTRACT
INTRODUCTION: The aim of this study was to develop deep learning neural networks to guide treatment decisions and to accurately evaluate tumor response to neoadjuvant chemoradiotherapy (nCRT) in rectal cancer using magnetic resonance (MR) images.
METHODS: Fifty-nine tumors with stage 2 or 3 rectal cancer that received nCRT were retrospectively evaluated. Pathological tumor regression grading was carried out using the Dworak (Dw-TRG) guidelines and served as the ground truth for response predictions. Imaging-based tumor regression grading was performed according to the MERCURY group guidelines from pre-treatment and post-treatment para-axial T2-weighted MR images (MR-TRG). Tumor signal intensity signatures were extracted by segmenting the tumors volumetrically on the images. Normalized histograms of the signatures were used as input to a deep neural network (DNN) housing long short-term memory (LSTM) units. The output of the network was the tumor regression grading prediction, DNN-TRG.
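The histogram step described above can be sketched as follows: a tumor's voxel intensities are reduced to a fixed-length normalized histogram suitable as network input. The bin count and intensity range here are assumptions for illustration, not values from the paper.

```python
def normalized_histogram(intensities, n_bins=8, lo=0.0, hi=256.0):
    """Bin intensities into n_bins equal-width bins and normalize the
    counts so they sum to 1, giving a fixed-length signature."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in intensities:
        idx = min(int((v - lo) / width), n_bins - 1)  # clamp top edge
        counts[idx] += 1
    total = len(intensities)
    return [c / total for c in counts]

# Toy voxel intensities from one segmented tumor volume.
hist = normalized_histogram([0, 10, 40, 100, 200, 255])
```

Because the histogram length is fixed regardless of tumor size, pre- and post-treatment signatures can be fed to the same network in sequence, which is what makes an LSTM a natural fit downstream.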
RESULTS: In predicting complete or good response, DNN-TRG demonstrated substantial agreement with Dw-TRG (Cohen's kappa = 0.79) and achieved 84.6% sensitivity, 93.9% specificity, and 89.8% accuracy, whereas MR-TRG yielded 46.2% sensitivity, 100% specificity, and 76.3% accuracy. In predicting a complete response, DNN-TRG showed substantial agreement with Dw-TRG (Cohen's kappa = 0.75), with 71.4% sensitivity, 97.8% specificity, and 91.5% accuracy, whereas MR-TRG provided 42.9% sensitivity, 100% specificity, and 86.4% accuracy. DNN-TRG benefited from higher sensitivity at the cost of lower specificity, leading to higher accuracy than MR-TRG in predicting tumor response.
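The reported sensitivity, specificity, accuracy, and kappa for complete-or-good response are mutually consistent with a 2x2 confusion matrix of 22 true positives, 2 false positives, 4 false negatives, and 31 true negatives. Note that these counts are a reconstruction under the assumption of 26 responders among the 59 tumors, not figures taken from the paper:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, and Cohen's kappa from a
    2x2 confusion matrix."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # Chance agreement computed from the marginal frequencies
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - p_e) / (1 - p_e)
    return sens, spec, acc, kappa

# Reconstructed (assumed) counts for DNN-TRG vs Dw-TRG, n = 59.
sens, spec, acc, kappa = binary_metrics(tp=22, fp=2, fn=4, tn=31)
```

Running this recovers 84.6% sensitivity, 93.9% specificity, 89.8% accuracy, and kappa of about 0.79, matching the quoted values.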
CONCLUSION: The use of deep LSTM neural networks is a promising approach for evaluating the tumor response to nCRT in rectal cancer.
PMID:38874041 | DOI:10.2174/0115734056309748240509072222
Enhancing Alzheimer's Disease Classification with Transfer Learning: Finetuning a Pre-trained Algorithm
Curr Med Imaging. 2024 Jun 13. doi: 10.2174/0115734056305633240603061644. Online ahead of print.
ABSTRACT
OBJECTIVE: The increasing longevity of the population has made Alzheimer's disease (AD) a significant public health concern. However, the challenge of accurately distinguishing different disease stages, given the limited variability within each stage and the potential for errors in manual classification, highlights the need for more precise approaches to classifying AD stages. In the field of deep learning, the ResNet50V2 model has demonstrated exceptional capabilities in image classification tasks.
MATERIALS: The dataset employed in this study was sourced from Kaggle and consisted of 6400 MRI images that were meticulously collected and rigorously verified to assure their precision. The selection of images was conducted with great attention to detail, drawing from a diverse array of sources.
METHODS: This study focuses on harnessing the potential of this model for AD classification, a task that relies on extracting disease-specific features. Furthermore, to achieve this, a multi-class classification methodology is employed, using transfer learning and fine-tuning of layers to adapt the pre-trained ResNet50V2 model for AD classification. Notably, the impact of various input layer sizes on model performance is investigated, meticulously striking a balance between capacity and computational efficiency. The optimal fine-tuning strategy is determined by counting layers within convolution blocks and selectively unfreezing and training individual layers after a designated layer index, ensuring consistency and reproducibility. Custom classification layers, dynamic learning rate reduction, and extensive visualization techniques are incorporated.
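The selective-unfreezing strategy described above can be sketched as follows: freeze everything before a designated layer index and train the rest. The layer names and the cutoff index here are illustrative, not the study's actual configuration.

```python
class Layer:
    """Stand-in for a framework layer with a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def unfreeze_from(layers, cutoff_index):
    """Make only layers at or after cutoff_index trainable; freeze
    the earlier layers so their pre-trained weights are preserved."""
    for i, layer in enumerate(layers):
        layer.trainable = i >= cutoff_index
    return layers

backbone = [Layer(f"conv_block_{i}") for i in range(10)]
unfreeze_from(backbone, cutoff_index=7)
trainable = [layer.name for layer in backbone if layer.trainable]
```

Fixing the cutoff by layer index, as the abstract notes, makes the fine-tuning strategy reproducible: the same index always yields the same frozen/trainable split.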
RESULTS: The model's performance is evaluated using accuracy, AUC, precision, recall, F1-score, and ROC curves. The comprehensive analysis reveals the model's ability to discriminate between AD stages. Visualization through confusion matrices aided in understanding model behavior. The rounded predicted labels enhanced practical utility.
CONCLUSION: This approach combined empirical research and iterative refinement, resulting in enhanced accuracy and reliability in AD classification. Our model holds promise for real-world applications, achieving an accuracy of 96.18%, showcasing the potential of deep learning in addressing complex medical challenges.
PMID:38874032 | DOI:10.2174/0115734056305633240603061644
Prostate Segmentation in MRI Images using Transfer Learning based Mask RCNN
Curr Med Imaging. 2024 Jun 13. doi: 10.2174/0115734056305021240603114137. Online ahead of print.
ABSTRACT
INTRODUCTION: Prostate cancer (PCa) is the second leading cause of cancer death among men in America. Globally, it is one of the most common cancers in men, and its annual incidence is striking. As in other diagnostic and prognostic medical systems, deep learning-based automated recognition and detection systems (i.e., computer-aided detection (CAD) systems) have gained enormous attention in PCa.
METHODS: These paradigms have attained promising results, with high segmentation, detection, and classification accuracy. Numerous researchers have reported that deep learning-based approaches are more efficient than conventional systems that utilize pathological samples.
RESULTS: This research is intended to perform prostate segmentation using transfer learning-based Mask R-CNN, which is consequently helpful in prostate cancer detection.
CONCLUSION: Lastly, limitations in current work, research findings, and prospects have been discussed.
PMID:38874030 | DOI:10.2174/0115734056305021240603114137
Identification of biological indicators for human exposure toxicology in smart cities based on public health data and deep learning
Front Public Health. 2024 May 30;12:1361901. doi: 10.3389/fpubh.2024.1361901. eCollection 2024.
ABSTRACT
With the acceleration of urbanization, the risk of urban populations being exposed to environmental pollutants is increasing, and protecting public health is a top priority in the construction of smart cities. The purpose of this study is to propose a method for identifying toxicological biological indicators of human exposure in smart cities based on public health data and deep learning, to achieve accurate assessment and management of exposure risks. Initially, the study used a network of sensors within the smart city infrastructure to collect environmental monitoring data, including indicators such as air quality, water quality, and soil pollution. Using public health data, a database containing information on the types and concentrations of environmental pollutants was established. A convolutional neural network was used to recognize patterns in the environmental monitoring data, identify relationships between different indicators, and build a correlation model between health indicators and environmental indicators; biological indicators associated with environmental pollution exposure were then identified through training and optimization. Experimental analysis showed that the prediction accuracy of the model reached 93.45%, which could provide decision support for the government and the health sector. In recognizing association patterns between respiratory diseases, cardiovascular diseases, and environmental exposure factors such as PM2.5 and SO2, the fit between the model and the simulated values reached more than 0.90. The proposed model can play a positive role in public health and provide new decision-making ideas for protecting it.
PMID:38873314 | PMC:PMC11171719 | DOI:10.3389/fpubh.2024.1361901
Stable tensor neural networks for efficient deep learning
Front Big Data. 2024 May 30;7:1363978. doi: 10.3389/fdata.2024.1363978. eCollection 2024.
ABSTRACT
Learning from complex, multidimensional data has become central to computational mathematics, and among the most successful high-dimensional function approximators are deep neural networks (DNNs). Training DNNs is posed as an optimization problem to learn network weights or parameters that well-approximate a mapping from input to target data. Multiway data or tensors arise naturally in myriad ways in deep learning, in particular as input data and as high-dimensional weights and features extracted by the network, with the latter often being a bottleneck in terms of speed and memory. In this work, we leverage tensor representations and processing to efficiently parameterize DNNs when learning from high-dimensional data. We propose tensor neural networks (t-NNs), a natural extension of traditional fully-connected networks, that can be trained efficiently in a reduced, yet more powerful parameter space. Our t-NNs are built upon matrix-mimetic tensor-tensor products, which retain algebraic properties of matrix multiplication while capturing high-dimensional correlations. Mimeticity enables t-NNs to inherit desirable properties of modern DNN architectures. We exemplify this by extending recent work on stable neural networks, which interpret DNNs as discretizations of differential equations, to our multidimensional framework. We provide empirical evidence of the parametric advantages of t-NNs on dimensionality reduction using autoencoders and classification using fully-connected and stable variants on benchmark imaging datasets MNIST and CIFAR-10.
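The matrix-mimetic tensor-tensor product ("t-product") underlying t-NNs can be written out explicitly for third-order tensors stored as lists of 2-D frontal slices: C[k] = sum over j of A[j] @ B[(k - j) mod n3], a circular convolution of matrix slices. This is the standard Kilmer-Martin construction, sketched naively for clarity rather than speed (in practice it is computed via FFT along the third mode).

```python
def mat_mul(A, B):
    """Plain matrix product of 2-D lists."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def t_product(A, B):
    """t-product of third-order tensors given as lists of frontal
    slices: a circular convolution of matrix slices along mode 3."""
    n3 = len(A)
    rows, cols = len(A[0]), len(B[0][0])
    C = [[[0.0] * cols for _ in range(rows)] for _ in range(n3)]
    for k in range(n3):
        for j in range(n3):
            C[k] = mat_add(C[k], mat_mul(A[j], B[(k - j) % n3]))
    return C

A1 = [[[1, 2], [3, 4]]]
B1 = [[[5, 6], [7, 8]]]
single = t_product(A1, B1)  # one slice: ordinary matrix product
```

With a single frontal slice (n3 = 1) the t-product reduces to ordinary matrix multiplication, which is the "matrix-mimetic" property the abstract refers to: t-NN layers behave algebraically like fully-connected layers while acting on multiway data.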
PMID:38873283 | PMC:PMC11170703 | DOI:10.3389/fdata.2024.1363978
Enhancing bladder cancer diagnosis through transitional cell carcinoma polyp detection and segmentation: an artificial intelligence powered deep learning solution
Front Artif Intell. 2024 May 30;7:1406806. doi: 10.3389/frai.2024.1406806. eCollection 2024.
ABSTRACT
BACKGROUND: Bladder cancer, specifically transitional cell carcinoma (TCC) polyps, presents a significant healthcare challenge worldwide. Accurate segmentation of TCC polyps in cystoscopy images is crucial for early diagnosis and urgent treatment. Deep learning models have shown promise in addressing this challenge.
METHODS: We evaluated deep learning architectures, including Unetplusplus_vgg19, Unet_vgg11, and FPN_resnet34, trained on a dataset of annotated cystoscopy images of low quality.
RESULTS: The models showed promise, with Unetplusplus_vgg19 and FPN_resnet34 exhibiting precisions of 55.40% and 57.41%, respectively, suitable for clinical application without modifying existing treatment workflows.
CONCLUSION: Deep learning models demonstrate potential in TCC polyp segmentation, even when trained on lower-quality images, suggesting their viability in improving timely bladder cancer diagnosis without impacting the current clinical processes.
PMID:38873177 | PMC:PMC11169928 | DOI:10.3389/frai.2024.1406806
Hate speech detection with ADHAR: a multi-dialectal hate speech corpus in Arabic
Front Artif Intell. 2024 May 30;7:1391472. doi: 10.3389/frai.2024.1391472. eCollection 2024.
ABSTRACT
Hate speech detection in Arabic poses a complex challenge due to the dialectal diversity across the Arab world. Most existing hate speech datasets for Arabic cover only one dialect or one hate speech category, and they also lack balance across dialects, topics, and hate/non-hate classes. In this paper, we address this gap by presenting ADHAR, a comprehensive multi-dialect, multi-category hate speech corpus for Arabic. ADHAR contains 70,369 words and spans five language variants: Modern Standard Arabic (MSA), Egyptian, Levantine, Gulf, and Maghrebi. It covers four key hate speech categories: nationality, religion, ethnicity, and race. A major contribution is that ADHAR is carefully curated to maintain balance across dialects, categories, and hate/non-hate classes to enable unbiased dataset evaluation. We describe the systematic data collection methodology, followed by a rigorous annotation process involving multiple annotators per dialect. Extensive qualitative and quantitative analyses demonstrate the quality and usefulness of ADHAR. Our experiments with various classical and deep learning models demonstrate that our dataset enables the development of robust hate speech classifiers for Arabic, achieving accuracy and F1-scores of up to 90% for hate speech detection and up to 92% for category detection. When trained with AraBERT, we achieved an accuracy and F1-score of 94% for hate speech detection, as well as 95% for category detection.
PMID:38873176 | PMC:PMC11170444 | DOI:10.3389/frai.2024.1391472
Prediction of miRNAs and diseases association based on sparse autoencoder and MLP
Front Genet. 2024 May 30;15:1369811. doi: 10.3389/fgene.2024.1369811. eCollection 2024.
ABSTRACT
Introduction: MicroRNAs (miRNAs) are small, non-coding RNA molecules with multiple important regulatory roles within cells. As research on miRNAs deepens, more and more studies show that the abnormal expression of miRNAs is closely related to various diseases. The relationship between miRNAs and diseases is crucial for discovering the pathogenesis of diseases and exploring new treatment methods. Methods: We therefore propose a new sparse autoencoder and MLP method (SPALP) to predict associations between miRNAs and diseases. In this study, we adopt advanced deep learning techniques, including a sparse autoencoder and a multi-layer perceptron (MLP), to improve the accuracy of predicting miRNA-disease associations. Firstly, the SPALP model uses a sparse autoencoder to perform feature learning and extract the initial features of miRNAs and diseases separately, obtaining their latent features. Then, the latent features are combined with miRNA functional similarity data and disease semantic similarity data to construct comprehensive miRNA-disease datasets. Subsequently, the MLP model predicts the unknown associations between miRNAs and diseases. Results: To verify the performance of our model, we set up several comparative experiments. The experimental results show that, compared with traditional methods and other deep learning prediction methods, our method significantly improves the accuracy of predicting miRNA-disease associations, with 94.61% accuracy and an AUC value of 0.9859. Finally, we conducted a case study of the SPALP model: we predicted the top 30 miRNAs that might be related to each of five diseases of the elderly (Lupus Erythematosus, Acute Myeloid Leukemia, Cardiovascular disease, Stroke, and Diabetes Mellitus) and validated that 27, 29, 29, 30, and 30 of the top 30, respectively, are indeed associated.
Discussion: The SPALP approach introduced in this study is adept at forecasting links between miRNAs and diseases, addressing the complexities of analyzing extensive bioinformatics datasets and enriching our understanding of the contribution of miRNAs to disease progression.
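One common formulation of the sparsity constraint in a sparse autoencoder, as used in the feature-learning step above, is a KL-divergence penalty that pushes each hidden unit's mean activation toward a small target value. This is a standard sketch assumed here for illustration; the abstract does not specify which sparsity penalty SPALP uses.

```python
import math

def sparsity_penalty(mean_activations, rho=0.05):
    """KL-divergence sparsity penalty: sum over hidden units of
    KL(rho || rho_hat), where rho_hat is the unit's mean activation
    over the batch and rho is the small sparsity target."""
    penalty = 0.0
    for rho_hat in mean_activations:
        penalty += (rho * math.log(rho / rho_hat)
                    + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))
    return penalty
```

The penalty is zero when every unit fires at exactly the target rate and grows as units become more active, encouraging the compact latent codes that are then fed to the MLP.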
PMID:38873111 | PMC:PMC11169787 | DOI:10.3389/fgene.2024.1369811
STM-ac4C: a hybrid model for identification of N4-acetylcytidine (ac4C) in human mRNA based on selective kernel convolution, temporal convolutional network, and multi-head self-attention
Front Genet. 2024 May 30;15:1408688. doi: 10.3389/fgene.2024.1408688. eCollection 2024.
ABSTRACT
N4-acetylcytidine (ac4C) is a chemical modification in mRNAs that alters the structure and function of mRNA by adding an acetyl group to the N4 position of cytosine. Researchers have shown that ac4C is closely associated with the occurrence and development of various cancers. Therefore, accurate prediction of ac4C modification sites on human mRNA is crucial for revealing its role in diseases and developing new diagnostic and therapeutic strategies. However, existing deep learning models still have limitations in prediction accuracy and generalization ability, which restrict their effectiveness in handling complex biological sequence data. This paper introduces a deep learning-based model, STM-ac4C, for predicting ac4C modification sites on human mRNA. The model combines the advantages of selective kernel convolution, temporal convolutional networks, and multi-head self-attention mechanisms to effectively extract and integrate multi-level features of RNA sequences, thereby achieving high-precision prediction of ac4C sites. On the independent test dataset, STM-ac4C showed improvements of 1.81%, 3.5%, and 0.37% in accuracy, Matthews correlation coefficient, and area under the curve, respectively, compared to the existing state-of-the-art technologies. Moreover, its performance on additional balanced and imbalanced datasets also confirmed the model's robustness and generalization ability. Various experimental results indicate that STM-ac4C outperforms existing methods in predictive performance. In summary, STM-ac4C excels in predicting ac4C modification sites on human mRNA, providing a powerful new tool for a deeper understanding of the biological significance of mRNA modifications and cancer treatment. Additionally, the model reveals key sequence features that influence the prediction of ac4C sites through sequence region impact analysis, offering new perspectives for future research.
The source code and experimental data are available at https://github.com/ymy12341/STM-ac4C.
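A typical first step for sequence models of this kind is to one-hot encode the RNA sequence so that convolutional layers can operate on it. This encoding is a standard assumption for illustration, not code from STM-ac4C.

```python
def one_hot_rna(seq):
    """Encode an RNA string over {A, C, G, U} as a list of 4-dim
    one-hot vectors, the usual input format for sequence CNNs."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "U": [0, 0, 0, 1]}
    return [table[base] for base in seq]

encoded = one_hot_rna("ACGU")
```

Each position becomes a 4-dimensional vector, so a window of length L yields an L x 4 matrix on which selective kernel convolutions and attention layers can operate.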
PMID:38873109 | PMC:PMC11169723 | DOI:10.3389/fgene.2024.1408688
Automatic cortical surface parcellation in the fetal brain using attention-gated spherical U-net
Front Neurosci. 2024 May 30;18:1410936. doi: 10.3389/fnins.2024.1410936. eCollection 2024.
ABSTRACT
Cortical surface parcellation for fetal brains is essential for the understanding of neurodevelopmental trajectories during gestations with regional analyses of brain structures and functions. This study proposes the attention-gated spherical U-net, a novel deep-learning model designed for automatic cortical surface parcellation of the fetal brain. We trained and validated the model using MRIs from 55 typically developing fetuses [gestational weeks: 32.9 ± 3.3 (mean ± SD), 27.4-38.7]. The proposed model was compared with the surface registration-based method, SPHARM-net, and the original spherical U-net. Our model demonstrated significantly higher accuracy in parcellation performance compared to previous methods, achieving an overall Dice coefficient of 0.899 ± 0.020. It also showed the lowest error in terms of the median boundary distance, 2.47 ± 1.322 (mm), and mean absolute percent error in surface area measurement, 10.40 ± 2.64 (%). In this study, we showed the efficacy of the attention gates in capturing the subtle but important information in fetal cortical surface parcellation. Our precise automatic parcellation model could increase sensitivity in detecting regional cortical anomalies and lead to the potential for early detection of neurodevelopmental disorders in fetuses.
PMID:38872945 | PMC:PMC11169851 | DOI:10.3389/fnins.2024.1410936
Dataset of chilli and onion plant leaf images for classification and detection
Data Brief. 2024 May 15;54:110524. doi: 10.1016/j.dib.2024.110524. eCollection 2024 Jun.
ABSTRACT
This article presents the chilli and onion leaf (COLD) dataset, which focuses on the leaves of chilli and onion plants (Capsicum and Allium cepa, respectively). The presence of various diseases such as purple blotch, Stemphylium leaf blight, Colletotrichum leaf blight, and iris yellow spot virus in onion, as well as Cercospora leaf spot, powdery mildew, murda complex syndrome, and nutrient deficiency in chilli, has had a significant negative effect on onion and chilli production. As a consequence, farmers have incurred financial losses. Computer vision and image-processing algorithms have been widely used in recent years for a range of applications, such as diagnosing and categorizing plant leaf diseases. In this paper we introduce a detailed chilli and onion leaf dataset gathered from Chilwadigi village in Karnataka, under varying climatic conditions. The dataset contains a variety of chilli and onion leaf categories carefully selected to tackle the complex challenges of categorizing leaf images taken in natural environments, including subtle inter-class similarities, changes in lighting, and differences in background conditions such as foliage arrangement. We carefully documented chilli and onion leaves from various angles using a high-resolution camera to create a diverse and reliable dataset. The dataset is set to be a valuable resource for enhancing computer vision algorithms, from traditional deep learning models to cutting-edge vision transformer architectures, and will help in creating advanced image recognition systems specifically designed for identifying chilli and onion plants. By making this dataset publicly accessible, our goal is to empower researchers to develop new computer vision techniques to tackle the unique challenges of chilli and onion leaf recognition.
The dataset is freely available at the following DOIs: http://doi.org/10.17632/7nxxn4gj5s.3 and http://doi.org/10.17632/tf9dtfz9m6.3.
PMID:38872936 | PMC:PMC11170091 | DOI:10.1016/j.dib.2024.110524
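Datasets like COLD are typically organized as one folder per leaf-disease class and then partitioned before training a classifier. As a minimal sketch of that step (the class names and file names below are hypothetical, not taken from the dataset itself), a stratified train/validation split keeps the per-class proportions balanced:

```python
import random
from collections import defaultdict

def stratified_split(items, val_fraction=0.2, seed=0):
    """Split (path, label) pairs into train/validation sets while
    keeping each class's proportion roughly equal (stratified sampling)."""
    by_class = defaultdict(list)
    for path, label in items:
        by_class[label].append((path, label))
    rng = random.Random(seed)
    train, val = [], []
    for label, group in sorted(by_class.items()):
        rng.shuffle(group)
        # reserve at least one validation sample per class
        n_val = max(1, round(len(group) * val_fraction))
        val.extend(group[:n_val])
        train.extend(group[n_val:])
    return train, val
```

Stratification matters for datasets gathered in the field, where some disease classes are far rarer than others; a purely random split could leave a rare class entirely out of the validation set.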
A deep learning method for classification of HNSCC and HPV patients using single-cell transcriptomics
Front Mol Biosci. 2024 May 30;11:1395721. doi: 10.3389/fmolb.2024.1395721. eCollection 2024.
ABSTRACT
BACKGROUND: Head and neck squamous cell carcinoma (HNSCC) is the seventh most prevalent cancer type worldwide. Early detection of HNSCC is one of the major challenges in managing treatment for these patients. Existing techniques for detecting HNSCC are costly and invasive.
METHODS: In this study, we aimed to address this issue by developing classification models using machine learning and deep learning techniques, focusing on single-cell transcriptomics to distinguish between HNSCC and normal samples. We also built models to classify HNSCC samples into HPV-positive (HPV+) and HPV-negative (HPV-) categories. From the GSE181919 dataset, we extracted 20 primary cancer (HNSCC) samples and 9 normal tissue samples; the primary cancer samples comprised 13 HPV- and 7 HPV+ samples. The models developed in this study were trained on 80% of the dataset and validated on the remaining 20%. To develop an efficient model, we performed feature selection using the minimum redundancy maximum relevance (mRMR) method to shortlist a small number of genes from the full gene set. We also performed Gene Ontology (GO) enrichment analysis on the 100 shortlisted genes.
RESULTS: An artificial neural network-based model trained on 100 genes outperformed the other classifiers, achieving an AUROC of 0.91 for HNSCC classification on the validation set. The same algorithm achieved an AUROC of 0.83 for classifying HPV+ and HPV- patients on the validation set. GO enrichment analysis found that most of the shortlisted genes were involved in binding and catalytic activities.
CONCLUSION: A software package has been developed in Python which allows users to identify HNSCC in patients along with their HPV status. It is available at https://webs.iiitd.edu.in/raghava/hnscpred/.
PMID:38872916 | PMC:PMC11169846 | DOI:10.3389/fmolb.2024.1395721
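The performance figures above are reported as AUROC, which for a binary classifier equals the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. A minimal rank-based implementation (equivalent to the normalized Mann-Whitney U statistic, shown here only to illustrate the metric, not the authors' code) is:

```python
def auroc(scores, labels):
    """AUROC by pairwise comparison (normalized Mann-Whitney U):
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half a correct pair."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative sample")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This quadratic pairwise form is fine for small validation sets like the one used here; production libraries compute the same quantity from sorted ranks in O(n log n).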
Rapid segmentation of computed tomography angiography images of the aortic valve: the efficacy and clinical value of a deep learning algorithm
Front Bioeng Biotechnol. 2024 May 30;12:1285166. doi: 10.3389/fbioe.2024.1285166. eCollection 2024.
ABSTRACT
OBJECTIVES: The goal of this study was to explore the reliability and clinical value of a deep learning tool for fast, accurate automatic segmentation of the aortic root from computed tomography angiography images.
METHODS: A deep learning tool for automatic 3-dimensional aortic root reconstruction, the CVPILOT system (TAVIMercy Data Technology Ltd., Nanjing, China), was trained and tested using computed tomography angiography scans collected from 183 patients undergoing transcatheter aortic valve replacement from January 2021 to December 2022. The quality of the reconstructed models was assessed using validation data sets and evaluated clinically by experts.
RESULTS: The segmentation of the ascending aorta and the left ventricle attained Dice similarity coefficients (DSC) of 0.9806/0.9711 and 0.9603/0.9643 for the training and validation sets, respectively. The leaflets had a DSC of 0.8049/0.7931, and the calcification had a DSC of 0.8814/0.8630. After 6 months of application, the system modeling time was reduced to 19.83 s.
CONCLUSION: For patients undergoing transcatheter aortic valve replacement, the CVPILOT system facilitates clinical workflow. The reliable evaluation quality of the platform indicates broad clinical application prospects in the future.
PMID:38872900 | PMC:PMC11169779 | DOI:10.3389/fbioe.2024.1285166
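The segmentation quality figures above are Dice similarity coefficients, defined for two binary masks A and B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal illustration of the metric (not the CVPILOT system's implementation), with masks given as sets of voxel coordinates:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, each given
    as an iterable of voxel indices (any hashable coordinates).
    Returns 1.0 for two empty masks by convention."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # both masks empty: perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the ascending-aorta scores around 0.97-0.98 reported here indicate near-complete agreement with the reference segmentation.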
Imaging at the nexus: how state of the art imaging techniques can enhance our understanding of cancer and fibrosis
J Transl Med. 2024 Jun 13;22(1):567. doi: 10.1186/s12967-024-05379-1.
ABSTRACT
Both cancer and fibrosis are diseases involving dysregulation of cell signaling pathways, resulting in an altered cellular microenvironment that ultimately leads to progression of the condition. The two disease entities share common molecular pathophysiology, and recent research has illuminated how each promotes the other. Multiple imaging techniques have been developed to aid in the early and accurate diagnosis of each disease, and given the commonalities between the pathophysiology of the conditions, advances in imaging one disease have opened new avenues to study the other. Here, we detail the most up-to-date advances in imaging techniques for each disease and how they have crossed over to improve detection and monitoring of the other. We explore techniques in positron emission tomography (PET), magnetic resonance imaging (MRI), second generation harmonic imaging (SGHI), ultrasound (US), radiomics, and artificial intelligence (AI). A new diagnostic imaging tool in PET/computed tomography (CT) is the use of radiolabeled fibroblast activation protein inhibitor (FAPI). SGHI uses high-frequency sound waves to penetrate deeper into the tissue, providing a more detailed view of the tumor microenvironment. Artificial intelligence, with the aid of advanced deep learning (DL) algorithms, has been highly effective in training computer systems to diagnose and classify neoplastic lesions in multiple organs. Ultimately, advancing imaging techniques in cancer and fibrosis can lead to significantly more timely and accurate diagnoses of both diseases, resulting in better patient outcomes.
PMID:38872212 | DOI:10.1186/s12967-024-05379-1