Deep learning

Deep Learning Based Intrusion Detection With Adversaries

Mon, 2024-06-17 06:00

IEEE Access. 2018;6. doi: 10.1109/access.2018.2854599.

ABSTRACT

Deep neural networks have demonstrated their effectiveness for most machine learning tasks, including intrusion detection. Unfortunately, recent research found that deep neural networks are vulnerable to adversarial examples in the image classification domain, i.e., an attacker can fool a network into misclassification by introducing imperceptible changes to the original pixels in an image. This vulnerability raises concerns about applying deep neural networks in security-critical areas such as intrusion detection. In this paper, we investigate the performance of state-of-the-art attack algorithms against deep learning based intrusion detection on the NSL-KDD dataset. Based on an implementation of deep neural networks using TensorFlow, we examine the vulnerabilities of neural networks under attacks on the IDS. To gain insights into the nature of intrusion detection and its attacks, we also explore the roles of individual features in generating adversarial examples.
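The attack family referenced above can be illustrated with the fast gradient sign method (FGSM), one of the standard attacks typically evaluated in such studies. This is a minimal NumPy sketch against a hypothetical logistic-regression "detector", not the paper's TensorFlow setup; the weights and feature values are invented for illustration:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic-regression 'detector':
    move each feature by eps in the direction that increases the loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted attack probability
    grad_x = (p - y) * w           # d(cross-entropy)/dx for the logistic model
    return x + eps * np.sign(grad_x)

# Toy "IDS": weights favouring feature 0 as an attack indicator.
w = np.array([2.0, -1.0, 0.5])
b = -0.5
x = np.array([1.0, 0.2, 0.3])      # a flow correctly flagged as attack (y=1)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.1)

score = lambda v: 1.0 / (1.0 + np.exp(-(v @ w + b)))
print(score(x), score(x_adv))      # the adversarial score is lower
```

With a larger eps the perturbed flow crosses the decision boundary entirely; in the intrusion-detection setting, unlike images, not every feature can be perturbed freely, which is why the paper examines the roles of individual features.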

PMID:38882674 | PMC:PMC11177870 | DOI:10.1109/access.2018.2854599

Categories: Literature Watch

Editorial: The combination of data-driven machine learning approaches and prior knowledge for robust medical image processing and analysis

Mon, 2024-06-17 06:00

Front Med (Lausanne). 2024 May 31;11:1434686. doi: 10.3389/fmed.2024.1434686. eCollection 2024.

NO ABSTRACT

PMID:38882668 | PMC:PMC11176619 | DOI:10.3389/fmed.2024.1434686

Categories: Literature Watch

Emerging trends and hotspots in cervical intraepithelial neoplasia research from 2013 to 2023: A bibliometric analysis

Mon, 2024-06-17 06:00

Heliyon. 2024 May 29;10(11):e32114. doi: 10.1016/j.heliyon.2024.e32114. eCollection 2024 Jun 15.

ABSTRACT

BACKGROUND: Cervical intraepithelial neoplasia (CIN) encompasses a range of cervical lesions that are closely linked to cervical invasive carcinoma. Early detection and timely treatment of CIN are crucial for preventing the progression of the disease. However, no bibliometric analysis has been conducted in this area. This research aimed to employ bibliometric analysis to summarize the current research hotspots and estimate future research trends in the CIN field.

METHODS: Publications related to CIN (2013-2023) were retrieved from the Science Citation Index Expanded of the Web of Science Core Collection. CiteSpace, VOSviewer, and the Bibliometric Online Analysis Platform of Literature Metrology were employed to analyze the yearly research output, collaborating institutions or countries, leading researchers, principal journals, co-referenced sources, and emerging keywords.

RESULTS: In total, 4677 articles on CIN published from 2013 to 2023 met our criteria and were extracted. The USA was the predominant source of publications until 2017, when China emerged as the leading contributor to CIN research. The USA was also the leading nation in international collaborations. The National Cancer Institute (NCI) was the institution with the most publications. Schiffman Mark produced the highest number of articles, with a total of 92. Ten major clusters were identified through co-cited keyword clustering, including prevalence, human papillomavirus, DNA methylation, p16, methylation, conization, HPV genotyping tests (VALGENT), deep learning, vaginal microbiome, and immunohistochemistry. Keyword burst analysis showed that photodynamic therapy and deep learning have emerged as prominent research focal points with significant impact in the most recent three years.

CONCLUSION: Global publications on CIN research showed a relatively stable trend over the past eleven years. Current research hotspots are deep learning and photodynamic therapy. This research offered organized data and insightful guidance for future studies, which may help better prevent, screen, and treat CIN.

PMID:38882369 | PMC:PMC11177135 | DOI:10.1016/j.heliyon.2024.e32114

Categories: Literature Watch

Deep learning guided prediction modeling of dengue virus evolving serotype

Mon, 2024-06-17 06:00

Heliyon. 2024 May 29;10(11):e32061. doi: 10.1016/j.heliyon.2024.e32061. eCollection 2024 Jun 15.

ABSTRACT

Evolution remains an incessant process in viruses, allowing them to elude the host immune response and induce severe diseases, impacting diagnostic and vaccine effectiveness. Emerging and re-emerging diseases are among the most significant public health concerns globally. The revival of dengue is mainly due to the potential for naturally arising mutations to induce genotypic alterations in serotypes. These transformations could lead to future outbreaks, underscoring the importance of studying DENV evolution in endemic regions. Predicting the emerging Dengue Virus (DENV) genome is crucial as the virus disrupts host cells, leading to fatal outcomes. Although deep learning has been applied to predict dengue fever cases, there has been relatively little emphasis on its value in forecasting emerging DENV serotypes. While Recurrent Neural Networks (RNNs) were initially designed for modeling temporal sequences, our proposed DL-DVE generative and classification model, trained on complete genome data of DENV, transcends traditional approaches by learning semantic relationships between nucleotides in a continuous vector space rather than representing the contextual meaning of nucleotide characters. Leveraging 2000 publicly available DENV complete genome sequences, our Long Short-Term Memory (LSTM) based generative and Feedforward Neural Network (FNN) based classification DL-DVE model showcases proficiency in learning intricate patterns and generating sequences for an emerging serotype of DENV. The generated sequences were analyzed along with available DENV serotype sequences to find conserved motifs in the genome through the MEME Suite (version 5.5.5). The generative model showed an accuracy of 93%, and the classification model provided insight into the specific serotype label, corroborated by BLAST search verification. Evaluation metrics (a ROC-AUC of 0.818, with accuracy, precision, recall, and F1 score all around 99.00%) demonstrate the classification model's reliability. Our model classified the generated sequences as DENV-4, exhibiting 65.99% similarity to DENV-4 and around 63-65% similarity with other serotypes, indicating a notable distinction from the other serotypes. Moreover, the intra-serotype divergence of sequences with a minimum of 90% similarity underscored their uniqueness.
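As a small illustration of how genome sequences are commonly prepared as input for an LSTM of the kind described above, here is a one-hot encoding sketch; the encoding scheme is a widespread convention, not necessarily the authors' exact preprocessing:

```python
import numpy as np

NUC = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a nucleotide string as a (len, 4) one-hot matrix,
    the usual input shape for an LSTM over genome sequences."""
    m = np.zeros((len(seq), 4))
    for i, ch in enumerate(seq):
        m[i, NUC[ch]] = 1.0
    return m

x = one_hot("ACGTT")
print(x.shape)   # (5, 4)
```

An embedding layer learned from such inputs is what lets the model capture semantic relationships between nucleotides in a continuous vector space rather than treating them as discrete characters.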

PMID:38882365 | PMC:PMC11177124 | DOI:10.1016/j.heliyon.2024.e32061

Categories: Literature Watch

A dataset of ground-dwelling nocturnal fauna for object detection and classification

Mon, 2024-06-17 06:00

Data Brief. 2024 May 17;54:110537. doi: 10.1016/j.dib.2024.110537. eCollection 2024 Jun.

ABSTRACT

The exploration of ground-dwelling nocturnal fauna represents a significant challenge due to its broad implications across various sectors, including pesticide management, crop yield forecasting, and plant disease identification. This paper unveils an annotated dataset, BioAuxdataset, aimed at facilitating the recognition of such fauna through field images gathered across multiple years. Culled from a collection exceeding 100,000 raw field images over a span of four years, this meticulously curated dataset features seven prevalent species of nocturnal ground-dwelling fauna: carabid, mouse, opilion, slug, shrew, small-slug, and worm. In instances of underrepresented species within the dataset, we have implemented straightforward yet potent image augmentation techniques to enhance data quality. BioAuxdataset stands as a valuable resource for the detection and identification of these organisms, leveraging the power of deep learning algorithms to unlock new potentials in ecological research and beyond. This dataset not only enriches the academic discourse but also opens up avenues for practical applications in agriculture, environmental science, and biodiversity conservation.
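The "straightforward yet potent" augmentation mentioned above typically means simple geometric transforms applied to images of underrepresented species. A minimal sketch, assuming flips and right-angle rotations were among the techniques used (the exact operations are not specified in the abstract):

```python
import numpy as np

def augment(img, rng):
    """Generate simple augmented variants of one field image:
    identity, horizontal/vertical flips, and a random 90-degree rotation."""
    ops = [
        lambda a: a,
        lambda a: np.fliplr(a),
        lambda a: np.flipud(a),
        lambda a: np.rot90(a, k=rng.integers(1, 4)),
    ]
    return [op(img) for op in ops]

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))    # stand-in for a nocturnal field image
variants = augment(img, rng)
print(len(variants), variants[1].shape)
```

Each variant preserves the species label, so a class with few raw images can be expanded several-fold before training the detector.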

PMID:38882193 | PMC:PMC11177085 | DOI:10.1016/j.dib.2024.110537

Categories: Literature Watch

Segmented X-ray image data for diagnosing dental periapical diseases using deep learning

Mon, 2024-06-17 06:00

Data Brief. 2024 May 17;54:110539. doi: 10.1016/j.dib.2024.110539. eCollection 2024 Jun.

ABSTRACT

The study presents a segmented dataset comprising dental periapical X-ray images from both healthy and diseased patients. The ability to differentiate between normal and abnormal dental periapical X-rays is pivotal for accurate diagnosis of dental pathology. These X-rays contain crucial information, offering insights into the physiological and pathological conditions of teeth and surrounding structures. The dataset outlined in this article encompasses dental periapical X-ray images obtained during routine examinations and treatment procedures of patients at the oral and dental health department of a local government hospital in North Jordan. Comprising a total of 929 high-quality X-ray images, the dataset includes subjects of varying ages with a spectrum of dental and pulpal diseases, bone loss, periapical diseases, and other abnormalities. Employing an advanced image segmentation approach, the collected dataset is categorized into healthy and diseased dental patients. This labelled dataset serves as a foundation for the development of an automated system capable of detecting dental pathologies, including caries and pulpal diseases, and distinguishing between normal and abnormal cases. Notably, recent advancements in deep learning artificial intelligence have significantly contributed to the creation of advanced dental models for diverse applications. This technology has demonstrated remarkable accuracy in the development of diagnostic and detection tools for various dental problems.

PMID:38882192 | PMC:PMC11177072 | DOI:10.1016/j.dib.2024.110539

Categories: Literature Watch

Multimodal Brain Tumor Classification Using Convolutional Tumnet Architecture

Mon, 2024-06-17 06:00

Behav Neurol. 2024 May 30;2024:4678554. doi: 10.1155/2024/4678554. eCollection 2024.

ABSTRACT

Brain malignancy is the most common and aggressive tumor, with a short life expectancy at the fourth grade of the disease. As a result, the medical plan may be a crucial step toward improving the well-being of a patient. Both diagnosis and therapy are part of the medical plan. Brain tumors are commonly imaged with magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). In this paper, multimodal fused imaging with classification and segmentation for brain tumors is proposed using a deep learning method. The MRI and CT brain tumor images of the same slices (308 slices of meningioma and sarcoma) are combined using three different types of pixel-level fusion methods. The presence/absence of a tumor is classified using the proposed Tumnet technique, and the tumor area is located accordingly. Tumnet is also applied to single-modal MRI/CT (561 image slices) for classification. The proposed Tumnet was modeled with 5 convolutional layers, 3 pooling layers with the ReLU activation function, and 3 fully connected layers. The first-order statistical fusion metrics for the average method on MRI-CT images are SSIM tissue at 83%, SSIM bone at 84%, accuracy at 90%, sensitivity at 96%, and specificity at 95%; the second-order statistical fusion metrics are a standard deviation of the fused images at 79% and an entropy of 0.99. The entropy value confirms the presence of additional features in the fused image. The proposed Tumnet yields a sensitivity of 96%, an accuracy of 98%, a specificity of 99%, and normalized values of the mean of 0.75, a standard deviation of 0.4, a variance of 0.16, and an entropy of 0.90.
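The pixel-level average fusion and the histogram entropy metric discussed above can be sketched as follows; this is an illustrative NumPy version on random data in [0, 1), not the authors' implementation, and the bin count is an assumption:

```python
import numpy as np

def average_fusion(mri, ct):
    """Pixel-level average fusion of co-registered MRI and CT slices."""
    return (mri.astype(float) + ct.astype(float)) / 2.0

def entropy(img, bins=256):
    """Shannon entropy of an image's intensity histogram (in bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
mri = rng.random((8, 8))   # stand-in for a normalized MRI slice
ct = rng.random((8, 8))    # stand-in for the matching CT slice
fused = average_fusion(mri, ct)
print(fused.shape, round(entropy(fused), 3))
```

A higher entropy in the fused image than in either source is the usual signal that the fusion has retained complementary information from both modalities, which is the interpretation the abstract gives its entropy value.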

PMID:38882177 | PMC:PMC11178426 | DOI:10.1155/2024/4678554

Categories: Literature Watch

Quantitative Evaluation of the Pore and Window Sizes of Tissue Engineering Scaffolds on Scanning Electron Microscope Images Using Deep Learning

Mon, 2024-06-17 06:00

ACS Omega. 2024 May 10;9(23):24695-24706. doi: 10.1021/acsomega.4c01234. eCollection 2024 Jun 11.

ABSTRACT

The morphological characteristics of tissue engineering scaffolds, such as pore and window diameters, are crucial, as they directly impact cell-material interactions, attachment, spreading, and infiltration of the cells, the degradation rate, and the mechanical properties of the scaffolds. Scanning electron microscopy (SEM) is one of the most commonly used techniques for characterizing the microarchitecture of tissue engineering scaffolds due to its advantages, such as being easily accessible and having a short examination time. However, SEM images provide qualitative data that need to be measured manually using software such as ImageJ to quantify the morphological features of the scaffolds. As it is not practical to measure every pore/window in the SEM images, which requires extensive time and effort, only a subset of pores/windows is measured and assumed to represent the whole sample, which may introduce user bias. Additionally, depending on the number of samples and groups, a study may require measuring thousands of samples, and the human error rate may increase. To overcome such problems, in this study, a deep learning model (Pore D2) was developed to quantify the morphological features (such as the pore size and window size) of open-porous scaffolds automatically for the first time. The developed algorithm was tested on emulsion-templated scaffolds fabricated under different conditions, such as changing mixing speed, temperature, and surfactant concentration, which resulted in scaffolds with various morphologies. Blind manual measurements were taken alongside the developed model, and the results showed that the developed tool is capable of quantifying pore and window sizes with high accuracy. Quantifying the morphological features of scaffolds fabricated under different circumstances and controlling these features enable us to engineer tissue engineering scaffolds precisely for specific applications. Pore D2, an open-source software package, is available for everyone at the following link: https://github.com/ilaydakaraca/PoreD2.
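Equivalent-diameter measurement of the kind Pore D2 automates can be sketched with plain connected-component labeling. This illustration assumes a binary pore mask and 4-connectivity, which may differ from what the actual tool does downstream of its deep learning segmentation:

```python
import numpy as np
from collections import deque

def pore_diameters(mask, pixel_size=1.0):
    """Label connected pore regions in a binary mask (4-connectivity)
    and return each pore's equivalent circular diameter."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    diameters = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                area, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:                      # flood-fill one pore region
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                # circle area = pi * r^2  ->  d = 2 * sqrt(area / pi)
                diameters.append(2.0 * np.sqrt(area / np.pi) * pixel_size)
    return diameters

mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True      # a 9-pixel pore
mask[6:8, 6:9] = True      # a 6-pixel pore
print(sorted(round(d, 2) for d in pore_diameters(mask)))
```

Running this over every region in a segmented SEM image yields the full pore-size distribution, rather than the handful of manual ImageJ measurements the abstract describes as the status quo.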

PMID:38882138 | PMC:PMC11170757 | DOI:10.1021/acsomega.4c01234

Categories: Literature Watch

An artificial intelligence algorithm to select most viable embryos considering current process in IVF labs

Mon, 2024-06-17 06:00

Front Artif Intell. 2024 May 30;7:1375474. doi: 10.3389/frai.2024.1375474. eCollection 2024.

ABSTRACT

BACKGROUND: The most common Assisted Reproductive Technology is In-Vitro Fertilization (IVF). During IVF, embryologists commonly perform a morphological assessment to evaluate embryo quality and choose the best embryo for transfer to the uterus. However, embryo selection through morphological assessment is subjective, so different embryologists reach different conclusions. Furthermore, humans can consider only a limited number of visual parameters, resulting in a poor IVF success rate. Artificial intelligence (AI) for embryo selection is objective and can include many parameters, leading to better IVF outcomes.

OBJECTIVES: This study sought to use AI to (1) predict pregnancy results based on embryo images, (2) assess using more than one image of the embryo in the prediction of pregnancy but based on the current process in IVF labs, and (3) compare results of AI-Based methods and embryologist experts in predicting pregnancy.

METHODS: A dataset of 252 time-lapse videos of embryos from IVF procedures performed between 2017 and 2020 was collected. Frames at 19 ± 1, 43 ± 1, and 67 ± 1 h post-insemination were extracted. Well-known CNN architectures with transfer learning were applied to these images. The results were compared with an algorithm that uses only the final image of each embryo, and with five experienced embryologists.

RESULTS: To predict the pregnancy outcome, we applied five well-known CNN architectures (AlexNet, ResNet18, ResNet34, Inception V3, and DenseNet121). DeepEmbryo, using three images, predicts pregnancy better than the algorithm that uses only one final image. It also predicts pregnancy better than all of the embryologists. Different well-known architectures can successfully predict pregnancy chances with up to 75.0% accuracy using transfer learning.

CONCLUSION: We have developed DeepEmbryo, an AI-based tool that uses three static images to predict pregnancy. Additionally, DeepEmbryo uses images that can be obtained in the current IVF process in almost all IVF labs. AI-based tools have great potential for predicting pregnancy and can be used as a proper tool in the future.
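One simple way to combine the three per-frame predictions described above is to average the frame-level probabilities. The sketch below stubs out the CNN outputs with hypothetical numbers; DeepEmbryo's actual aggregation strategy is not detailed in the abstract, so this is an illustrative assumption:

```python
import numpy as np

def combine_frames(frame_probs):
    """Average per-frame pregnancy probabilities from the three
    time-points (19 h, 43 h, 67 h post-insemination) into one score."""
    return float(np.mean(frame_probs))

# Hypothetical per-frame CNN outputs for one embryo:
probs = [0.62, 0.71, 0.66]
p = combine_frames(probs)
print(p, "pregnant" if p >= 0.5 else "not pregnant")
```

Averaging is the least-committal fusion rule; learned fusion (e.g., a small classifier over concatenated per-frame features) is a common alternative when frames carry unequal information.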

PMID:38881952 | PMC:PMC11177761 | DOI:10.3389/frai.2024.1375474

Categories: Literature Watch

Digital pathology, deep learning, and cancer: a narrative review

Mon, 2024-06-17 06:00

Transl Cancer Res. 2024 May 31;13(5):2544-2560. doi: 10.21037/tcr-23-964. Epub 2024 May 22.

ABSTRACT

BACKGROUND AND OBJECTIVE: Cancer is a leading cause of morbidity and mortality worldwide. The emergence of digital pathology and deep learning technologies signifies a transformative era in healthcare. These technologies can enhance cancer detection, streamline operations, and bolster patient care. A substantial gap exists between the development phase of deep learning models in controlled laboratory environments and their translations into clinical practice. This narrative review evaluates the current landscape of deep learning and digital pathology, analyzing the factors influencing model development and implementation into clinical practice.

METHODS: We searched multiple databases, including Web of Science, arXiv, medRxiv, bioRxiv, Embase, PubMed, DBLP, Google Scholar, IEEE Xplore, Semantic Scholar, and Cochrane, targeting articles on whole slide imaging and deep learning published from 2014 to 2023. Out of 776 articles identified based on inclusion criteria, we selected 36 papers for analysis.

KEY CONTENT AND FINDINGS: Most articles in this review focus on the in-laboratory phase of deep learning model development, a critical stage in the deep learning lifecycle. Challenges arise during model development and their integration into clinical practice. Notably, lab performance metrics may not always match real-world clinical outcomes. As technology advances and regulations evolve, we expect more clinical trials to bridge this performance gap and validate deep learning models' effectiveness in clinical care. High clinical accuracy is vital for informed decision-making throughout a patient's cancer care.

CONCLUSIONS: Deep learning technology can enhance cancer detection, clinical workflows, and patient care. Challenges may arise during model development. The deep learning lifecycle involves data preprocessing, model development, and clinical implementation. Achieving health equity requires including diverse patient groups and eliminating bias during implementation. While model development is integral, most articles focus on the pre-deployment phase. Future longitudinal studies are crucial for validating models in real-world settings post-deployment. A collaborative approach among computational pathologists, technologists, industry, and healthcare providers is essential for driving adoption in clinical settings.

PMID:38881914 | PMC:PMC11170525 | DOI:10.21037/tcr-23-964

Categories: Literature Watch

Novel artificial intelligence algorithms for diabetic retinopathy and diabetic macular edema

Sun, 2024-06-16 06:00

Eye Vis (Lond). 2024 Jun 17;11(1):23. doi: 10.1186/s40662-024-00389-y.

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) and diabetic macular edema (DME) are major causes of visual impairment that challenge global vision health. New strategies are needed to tackle these growing global health problems, and the integration of artificial intelligence (AI) into ophthalmology has the potential to revolutionize DR and DME management to meet these challenges.

MAIN TEXT: This review discusses the latest AI-driven methodologies in the context of DR and DME in terms of disease identification, patient-specific disease profiling, and short-term and long-term management. This includes current screening and diagnostic systems and their real-world implementation, lesion detection and analysis, disease progression prediction, and treatment response models. It also highlights the technical advancements that have been made in these areas. Despite these advancements, there are obstacles to the widespread adoption of these technologies in clinical settings, including regulatory and privacy concerns, the need for extensive validation, and integration with existing healthcare systems. We also explore the disparity between the potential of AI models and their actual effectiveness in real-world applications.

CONCLUSION: AI has the potential to revolutionize the management of DR and DME, offering more efficient and precise tools for healthcare professionals. However, overcoming challenges in deployment, regulatory compliance, and patient privacy is essential for these technologies to realize their full potential. Future research should aim to bridge the gap between technological innovation and clinical application, ensuring AI tools integrate seamlessly into healthcare workflows to enhance patient outcomes.

PMID:38880890 | DOI:10.1186/s40662-024-00389-y

Categories: Literature Watch

Deep learning survival model predicts outcome after intracerebral hemorrhage from initial CT scan

Sun, 2024-06-16 06:00

Eur Stroke J. 2024 Jun 16:23969873241260154. doi: 10.1177/23969873241260154. Online ahead of print.

ABSTRACT

BACKGROUND: Predicting functional impairment after intracerebral hemorrhage (ICH) provides valuable information for planning of patient care and rehabilitation strategies. Current prognostic tools are limited in making long-term predictions and require multiple expert-defined inputs and interpretation that make their clinical implementation challenging. This study aimed to predict long-term functional impairment of ICH patients from admission non-contrast CT scans, leveraging deep learning models in a survival analysis framework.

METHODS: We used the admission non-contrast CT scans from 882 patients from the Massachusetts General Hospital ICH Study for training, hyperparameter optimization, and model selection, and 146 patients from the Yale New Haven ICH Study for external validation of a deep learning model predicting functional outcome. Disability (modified Rankin scale [mRS] > 2), severe disability (mRS > 4), and dependent living status were assessed via telephone interviews after 6, 12, and 24 months. The prediction methods were evaluated by the c-index and compared with ICH score and FUNC score.

RESULTS: Using non-contrast CT, our deep learning model achieved higher prediction accuracy for post-ICH dependent living, disability, and severe disability at 6, 12, and 24 months (c-index 0.742 [95% CI 0.700 to 0.778], 0.712 [95% CI 0.674 to 0.752], and 0.779 [95% CI 0.733 to 0.832], respectively) compared with the ICH score (c-index 0.673 [95% CI 0.662 to 0.688], 0.647 [95% CI 0.637 to 0.661], and 0.697 [95% CI 0.675 to 0.717]) and the FUNC score (c-index 0.701 [95% CI 0.698 to 0.723], 0.668 [95% CI 0.657 to 0.680], and 0.727 [95% CI 0.708 to 0.753]). In the external independent Yale-ICH cohort, similar performance metrics were obtained for disability and severe disability (c-index 0.725 [95% CI 0.673 to 0.781] and 0.747 [95% CI 0.676 to 0.807], respectively). The model also achieved AUCs similar to the ICH score and FUNC score for predicting each outcome at 6 months, 1 year, and 2 years after ICH.

CONCLUSION: We developed a generalizable deep learning model to predict onset of dependent living and disability after ICH, which could help to guide treatment decisions, advise relatives in the acute setting, optimize rehabilitation strategies, and anticipate long-term care needs.
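The c-index used to evaluate the models above can be computed as follows. This is Harrell's concordance index for right-censored survival data; the toy times, events, and risk scores are invented for illustration:

```python
def c_index(times, events, risks):
    """Harrell's concordance index: among comparable pairs, the fraction
    where the higher-risk patient has the earlier observed event
    (ties in risk count as half-concordant)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable if i has an observed event before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [5, 8, 12, 20]     # months to event or censoring
events = [1, 1, 0, 1]       # 1 = event observed, 0 = censored
risks  = [0.9, 0.6, 0.4, 0.2]
print(c_index(times, events, risks))   # 1.0 for perfectly ordered risks
```

A c-index of 0.5 corresponds to random ranking, so values around 0.70-0.78, as reported above, indicate a meaningfully better-than-chance ordering of patients by predicted risk.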

PMID:38880882 | DOI:10.1177/23969873241260154

Categories: Literature Watch

Multimodal deep learning for dementia classification using text and audio

Sun, 2024-06-16 06:00

Sci Rep. 2024 Jun 16;14(1):13887. doi: 10.1038/s41598-024-64438-1.

ABSTRACT

Dementia is a progressive neurological disorder that affects the daily lives of older adults, impacting their verbal communication and cognitive function. Early diagnosis is important to enhance the lifespan and quality of life for affected individuals. Despite its importance, diagnosing dementia is a complex process. Automated machine learning solutions involving multiple types of data have the potential to improve the process of automated dementia screening. In this study, we build deep learning models to classify dementia cases from controls using the Pitt Cookie Theft dataset from DementiaBank, a database of short participant responses to the structured task of describing a picture of a cookie theft. We fine-tune Wav2vec and Word2vec baseline models to make binary predictions of dementia from audio recordings and text transcripts, respectively. We conduct experiments with four versions of the dataset: (1) the original data, (2) the data with short sentences removed, (3) text-based augmentation of the original data, and (4) text-based augmentation of the data with short sentences removed. Our results indicate that synonym-based text data augmentation generally enhances the performance of models that incorporate the text modality. Without data augmentation, models using the text modality achieve around 60% accuracy and 70% AUROC scores, and with data augmentation, the models achieve around 80% accuracy and 90% AUROC scores. We do not observe significant improvements in performance with the addition of audio or timestamp information into the model. We include a qualitative error analysis of the sentences that are misclassified under each study condition. This study provides preliminary insights into the effects of both text-based data augmentation and multimodal deep learning for automated dementia classification.
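Synonym-based text augmentation of the kind described above can be sketched as follows; the tiny synonym lexicon and the replacement probability are illustrative assumptions, not the study's actual resources:

```python
import random

SYNONYMS = {  # tiny illustrative lexicon; real systems use WordNet-scale resources
    "boy": ["lad"],
    "taking": ["grabbing"],
    "cookie": ["biscuit"],
}

def synonym_augment(sentence, rng, p=1.0):
    """Replace each word that has a known synonym with probability p,
    yielding a paraphrased training example with the same label."""
    out = []
    for w in sentence.split():
        subs = SYNONYMS.get(w)
        out.append(rng.choice(subs) if subs and rng.random() < p else w)
    return " ".join(out)

rng = random.Random(0)
print(synonym_augment("the boy is taking a cookie", rng))
# -> "the lad is grabbing a biscuit" when p=1.0
```

Because the paraphrase keeps the dementia/control label, each transcript can be multiplied into several training examples, which is consistent with the accuracy gains the study attributes to text augmentation.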

PMID:38880810 | DOI:10.1038/s41598-024-64438-1

Categories: Literature Watch

Deep learning-based approach for 3D bone segmentation and prediction of missing tooth region for dental implant planning

Sun, 2024-06-16 06:00

Sci Rep. 2024 Jun 16;14(1):13888. doi: 10.1038/s41598-024-64609-0.

ABSTRACT

Recent studies have shown that dental implants have high long-term survival rates, indicating their effectiveness compared to other treatments. However, there is still a concern regarding treatment failure. Deep learning methods, specifically U-Net models, have been effectively applied to analyze medical and dental images. This study aims to utilize U-Net models to segment bone in regions where teeth are missing in cone-beam computerized tomography (CBCT) scans and predict the positions of implants. The proposed models were applied to a CBCT dataset of Taibah University Dental Hospital (TUDH) patients between 2018 and 2023. They were evaluated using different performance metrics and validated by a domain expert. The experimental results demonstrated outstanding performance in terms of dice, precision, and recall for bone segmentation (0.93, 0.94, and 0.93, respectively) with a low volume error (0.01). The proposed models offer promising automated dental implant planning for dental implantologists.
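The dice, precision, and recall values reported above are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch on toy masks (the example masks are invented; CBCT volumes would be 3D, but the formulas are identical):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Dice, precision, and recall for binary segmentation masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True   # 16-px ground truth
pred  = np.zeros((8, 8), dtype=bool); pred[2:6, 2:7] = True    # 20-px prediction
print(tuple(round(float(m), 3) for m in seg_metrics(pred, truth)))
```

The volume error mentioned in the abstract is the complementary check: two masks can overlap imperfectly yet still agree on total bone volume, so both kinds of metrics are reported.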

PMID:38880802 | DOI:10.1038/s41598-024-64609-0

Categories: Literature Watch

Beyond Macrostructure: Is There a Role for Radiomics Analysis in Neuroimaging?

Sun, 2024-06-16 06:00

Magn Reson Med Sci. 2024 Jun 14. doi: 10.2463/mrms.rev.2024-0053. Online ahead of print.

ABSTRACT

The most commonly used neuroimaging biomarkers of brain structure, particularly in neurodegenerative diseases, have traditionally been summary measurements from ROIs derived from structural MRI, such as volume and thickness. Advances in MR acquisition techniques, including high-field imaging, and the emergence of learning-based methods have opened up opportunities to interrogate brain structure in finer detail, allowing investigators to move beyond macrostructural measurements. On the one hand, superior signal contrast has the potential to make appearance-based metrics that directly analyze intensity patterns, such as texture analysis and radiomics features, more reliable. Quantitative MRI, particularly at high field, can also provide a richer set of measures with greater interpretability. On the other hand, the use of neural network-based techniques has the potential to exploit subtle patterns in images that can now be mined with advanced imaging. Finally, there are opportunities for integration of multimodal data at different spatial scales, enabled by developments in many of the above techniques, for example by combining digital histopathology with high-resolution ex-vivo and in-vivo MRI. Some of these approaches are at early stages of development and present their own set of challenges. Nonetheless, they hold promise to drive the next generation of validation and biomarker studies. This article surveys recent developments in this area, with a particular focus on Alzheimer's disease and related disorders. However, most of the discussion is equally relevant to imaging of other neurological disorders, and even to other organ systems of interest. It is not meant to be an exhaustive review of the available literature, but rather a summary of recent trends through the discussion of a collection of representative studies, with an eye toward what the future may hold.

PMID:38880615 | DOI:10.2463/mrms.rev.2024-0053

Categories: Literature Watch

Future directions of generative artificial intelligence in ophthalmology and vision science

Sun, 2024-06-16 06:00

Surv Ophthalmol. 2024 Jun 14:S0039-6257(24)00072-9. doi: 10.1016/j.survophthal.2024.06.003. Online ahead of print.

NO ABSTRACT

PMID:38880399 | DOI:10.1016/j.survophthal.2024.06.003

Categories: Literature Watch

Brain-Computer Interfaces Inspired Spiking Neural Network Model for Depression Stage Identification

Sun, 2024-06-16 06:00

J Neurosci Methods. 2024 Jun 14:110203. doi: 10.1016/j.jneumeth.2024.110203. Online ahead of print.

ABSTRACT

BACKGROUND: Depression is a global mental disorder, and traditional diagnostic methods mainly rely on scales and subjective evaluations by doctors, which cannot effectively identify symptoms and even carry the risk of misdiagnosis. Brain-Computer Interface-inspired, deep learning-assisted diagnosis based on physiological signals holds promise for improving traditional methods that lack a physiological basis, and leads the next generation of neurotechnologies. However, traditional deep learning methods rely on immense computational power and mostly involve end-to-end network learning. These learning methods also lack physiological interpretability, limiting their clinical application in assisted diagnosis.

METHODOLOGY: A brain-like learning model for diagnosing depression using electroencephalogram (EEG) is proposed. The study collects EEG data using 128-channel electrodes, producing a 128×128 brain adjacency matrix. Given the assumption of undirected connectivity, the upper half of the 128×128 matrix is chosen in order to minimise the input parameter size, producing 8,128-dimensional data. After eliminating 28 components derived from irrelevant or reference electrodes, a 90×90 matrix is produced, which can be used as an input for a single-channel brain-computer interface image.

RESULT: At the functional level, a spiking neural network is constructed to classify individuals with depression and healthy individuals, achieving an accuracy exceeding 97.5%.

COMPARISON WITH EXISTING METHODS: Compared to deep convolutional methods, the spiking method reduces energy consumption.

CONCLUSION: At the structural level, complex networks are utilized to establish spatial topology of brain connections and analyse their graph features, identifying potential abnormal brain functional connections in individuals with depression.

PMID:38880343 | DOI:10.1016/j.jneumeth.2024.110203

Categories: Literature Watch

Identification and experimental validation of immune-related gene PPARG is involved in ulcerative colitis

Sun, 2024-06-16 06:00

Biochim Biophys Acta Mol Basis Dis. 2024 Jun 14:167300. doi: 10.1016/j.bbadis.2024.167300. Online ahead of print.

ABSTRACT

BACKGROUND: The pathophysiology of ulcerative colitis (UC) is believed to be heavily influenced by immunology, which presents challenges for both diagnosis and treatment. The main aims of this study are to deepen our understanding of the immunological characteristics associated with the disease and to identify valuable biomarkers for diagnosis and treatment.

METHODS: The UC datasets were sourced from the GEO database and analyzed using unsupervised clustering to identify UC subtypes. Twelve machine learning algorithms and a deep neural network (DNN) were developed to identify potential UC biomarkers, with the LIME and SHAP methods used to explain the models' findings. A protein-protein interaction (PPI) network was used to verify the identified key biomarkers, and a network connecting super-enhancers, transcription factors, and genes was then constructed. Single-cell sequencing was utilized to investigate the role of Peroxisome Proliferator Activated Receptor Gamma (PPARG) in UC and its correlation with macrophage infiltration. Furthermore, alterations in PPARG expression were validated by Western blot (WB) and immunohistochemistry (IHC) in both in vitro and in vivo experiments.
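As a hedged aside, the core idea behind model-explanation methods such as LIME and SHAP, attributing a prediction to individual features, can be approximated with simple permutation importance. The sketch below uses a toy rule-based "model" and synthetic data, not the study's classifiers:

```python
import numpy as np

# Permutation importance: shuffle one feature at a time and measure how much
# model accuracy drops. A toy illustration, simpler than SHAP/LIME themselves.
rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, 3))              # columns: 3 hypothetical genes
y = (X[:, 0] > 0).astype(int)                # label depends only on gene 0

def accuracy(X, y):
    # A fixed "model": predict 1 when the first feature is positive.
    return np.mean((X[:, 0] > 0).astype(int) == y)

base = accuracy(X, y)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])     # break feature j's link to y
    importance.append(base - accuracy(Xp, y))  # accuracy drop = importance

print(np.argmax(importance))  # 0 -> the truly informative feature
```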

RESULT: By utilizing bioinformatics techniques, we were able to pinpoint PPARG as a key biomarker for UC. The expression of PPARG was significantly reduced in cell models, UC animal models, and colitis models induced by dextran sodium sulfate (DSS). Interestingly, overexpression of PPARG was able to restore intestinal barrier function in H2O2-induced IEC-6 cells. Additionally, immune-related differentially expressed genes (DEGs) allowed for efficient classification of UC samples into neutrophil and mitochondrial metabolic subtypes. A diagnostic model incorporating the three disease-specific genes PPARG, PLA2G2A, and IDO1 demonstrated high accuracy in distinguishing between the UC group and the control group. Furthermore, single-cell analysis revealed that decreased PPARG expression in colon tissue may contribute to the polarization of M1 macrophages through activation of inflammatory pathways.

CONCLUSION: In conclusion, PPARG, an immune-related gene, has been established as a reliable potential biomarker for the diagnosis and treatment of UC. The immune response it controls plays a key role in the development and progression of UC by enabling interaction between characteristic biomarkers and immune-infiltrating cells.

PMID:38880160 | DOI:10.1016/j.bbadis.2024.167300

Categories: Literature Watch

Physics-informed neural networks for parameter estimation in blood flow models

Sun, 2024-06-16 06:00

Comput Biol Med. 2024 Jun 5;178:108706. doi: 10.1016/j.compbiomed.2024.108706. Online ahead of print.

ABSTRACT

BACKGROUND: Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving inverse problems, especially when complete information about the system is unavailable and only scattered measurements exist. This is especially useful in hemodynamics, since boundary information is often difficult to model and high-quality blood flow measurements are generally hard to obtain.

METHODS: In this work, we use the PINNs methodology to estimate reduced-order model parameters and the full velocity field from scattered, noisy 2D measurements in the aorta. Two different flow regimes, stationary and transient, were studied.

RESULTS: We show robust and relatively accurate parameter estimations when using the method with simulated data, while the velocity reconstruction accuracy shows dependence on the measurement quality and the flow pattern complexity. Comparison with a Kalman filter approach shows similar results when the number of parameters to be estimated is low to medium. For a higher number of parameters, only PINNs were capable of achieving good results.
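To make the parameter-estimation task concrete, the sketch below fits a single resistance in a two-element Windkessel model, a common reduced-order blood flow model, using a plain grid search over simulated data. It is a hypothetical illustration of the problem setup, not the PINN or Kalman filter approach from the paper; a PINN would instead train a network P_theta(t) with a loss combining data mismatch and the ODE residual.

```python
import numpy as np

# Two-element Windkessel model:  C dP/dt = Q(t) - P/R
# Forward-Euler simulation of pressure P for a given resistance R.
def simulate(R, C=1.0, T=200, dt=0.01):
    t = np.arange(T) * dt
    Q = 1.0 + 0.5 * np.sin(2 * np.pi * t)    # assumed inflow waveform
    P = np.zeros(T)
    for k in range(T - 1):
        P[k + 1] = P[k] + dt * (Q[k] - P[k] / R) / C
    return P

rng = np.random.default_rng(0)
R_true = 1.3
data = simulate(R_true) + 0.01 * rng.standard_normal(200)  # noisy "measurements"

# Grid search over candidate resistances, minimizing squared data mismatch.
candidates = np.linspace(0.5, 2.5, 201)
losses = [np.mean((simulate(R) - data) ** 2) for R in candidates]
R_hat = candidates[int(np.argmin(losses))]
```

The estimate `R_hat` lands close to `R_true`; the PINN formulation replaces the explicit solver with automatic differentiation of the network output, which is what allows it to scale to higher parameter counts.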

CONCLUSION: The method opens the door to deep-learning-driven methods in the simulation of complex coupled physical systems.

PMID:38879935 | DOI:10.1016/j.compbiomed.2024.108706

Categories: Literature Watch

Spectrum-based deep learning framework for dermatological pigment analysis and simulation

Sun, 2024-06-16 06:00

Comput Biol Med. 2024 Jun 15;178:108741. doi: 10.1016/j.compbiomed.2024.108741. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning in dermatology presents promising tools for automated diagnosis but faces challenges, including labor-intensive ground truth preparation and a primary focus on visually identifiable features. Spectrum-based approaches offer professional-level information like pigment distribution maps, but encounter practical limitations such as complex system requirements.

METHODS: This study introduces a spectrum-based framework for training a deep learning model to generate melanin and hemoglobin distribution maps from skin images. This approach eliminates the need for manually prepared ground truth by synthesizing output maps into skin images for regression analysis. The framework is applied to acquire spectral data, create pigment distribution maps, and simulate pigment variations.
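The "synthesize pigment maps back into a skin image" step can be sketched with a simplified Beer-Lambert forward model; the absorption coefficients and base reflectance below are illustrative placeholders, not the calibrated spectral model of the paper:

```python
import numpy as np

# Simplified Beer-Lambert forward model: pigment concentration maps attenuate
# a base skin reflectance per RGB channel. Coefficients are placeholders.
MU = {"melanin": np.array([0.8, 0.5, 0.2]),      # per-channel absorption (R, G, B)
      "hemoglobin": np.array([0.1, 0.6, 0.7])}

def render_skin(melanin_map, hemoglobin_map, base=np.array([0.9, 0.75, 0.65])):
    """Attenuate a base skin reflectance by pigment absorption, per channel."""
    optical_depth = (melanin_map[..., None] * MU["melanin"]
                     + hemoglobin_map[..., None] * MU["hemoglobin"])
    return base * np.exp(-optical_depth)         # Beer-Lambert attenuation

rng = np.random.default_rng(0)
mel = rng.random((64, 64))                        # hypothetical melanin map
hb = rng.random((64, 64))                         # hypothetical hemoglobin map
img = render_skin(mel, hb)
print(img.shape)        # (64, 64, 3)
```

Because such a forward model is differentiable, the rendered image can serve as the regression target, which is how the framework avoids manually prepared ground-truth pigment maps.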

RESULTS: Our model generated reflectance spectra and spectral images that accurately reflect pigment absorption properties, outperforming spectral upsampling methods. It produced pigment distribution maps with correlation coefficients of 0.913 for melanin and 0.941 for hemoglobin compared to the VISIA system. Additionally, the model's simulated images of pigment variations exhibited a proportional correlation with adjustments made to pigment levels. These evaluations are based on pigment absorption properties, the Individual Typology Angle (ITA), and pigment indices.

CONCLUSION: The model produces pigment distribution maps comparable to those from specialized clinical equipment and simulated images with numerically adjusted pigment variations. This approach demonstrates significant promise for developing professional-level diagnostic tools for future clinical applications.

PMID:38879933 | DOI:10.1016/j.compbiomed.2024.108741

Categories: Literature Watch
