Deep learning

Explainable deep-learning prediction for brain-computer interfaces supported lower extremity motor gains based on multi-state fusion

Fri, 2024-04-05 06:00

IEEE Trans Neural Syst Rehabil Eng. 2024 Apr 5;PP. doi: 10.1109/TNSRE.2024.3384498. Online ahead of print.

ABSTRACT

Predicting the potential for recovery of motor function in stroke patients who undergo specific rehabilitation treatments is an important and major challenge. Recently, electroencephalography (EEG) has shown potential in helping to determine the relationship between cortical neural activity and motor recovery. EEG recorded in different states could more accurately predict motor recovery than single-state recordings. Here, we design a multi-state (combining eyes closed, EC, and eyes open, EO) fusion neural network for predicting the motor recovery of patients with stroke after EEG-brain-computer-interface (BCI) rehabilitation training and use an explainable deep learning method to identify the most important features of EEG power spectral density and functional connectivity contributing to prediction. The prediction accuracy of the multi-state fusion network was 82%, significantly improved compared with a single-state model. The neural network explanation revealed the important regions and frequency oscillation bands. Specifically, across the two states, the power spectral density and functional connectivity features related to motor recovery were located in the frontal, central, and occipital regions. Regarding frequency bands, power spectral density was related to motor recovery in the delta and alpha bands, while functional connectivity was related to motor recovery in the delta, theta, and alpha bands in the EC state and in the delta, theta, and mid-beta bands in the EO state. Multi-state fusion neural networks, which combine multiple states of EEG signals into a single network, can increase the accuracy of predicting motor recovery after BCI training and reveal the underlying mechanisms of motor recovery in brain activity.

PMID:38578854 | DOI:10.1109/TNSRE.2024.3384498

Categories: Literature Watch

Semantic segmentation of urban environments: Leveraging U-Net deep learning model for cityscape image analysis

Fri, 2024-04-05 06:00

PLoS One. 2024 Apr 5;19(4):e0300767. doi: 10.1371/journal.pone.0300767. eCollection 2024.

ABSTRACT

Semantic segmentation of cityscapes via deep learning is an essential and game-changing research topic that offers a more nuanced comprehension of urban landscapes. Deep learning techniques tackle urban complexity and diversity, which unlocks a broad range of applications. These include urban planning, transportation management, autonomous driving, and smart city efforts. Through rich context and insights, semantic segmentation helps decision-makers and stakeholders make educated decisions for sustainable and effective urban development. This study presents an in-depth exploration of cityscape image segmentation using the U-Net deep learning model. The proposed U-Net architecture comprises an encoder and decoder structure. The encoder uses convolutional layers and downsampling to extract hierarchical information from input images. Each downsampling step reduces spatial dimensions and increases feature depth, aiding context acquisition. Batch normalization and dropout layers stabilize the model and prevent overfitting during encoding. The decoder reconstructs higher-resolution feature maps using "UpSampling2D" layers. Through extensive experimentation and evaluation on the Cityscapes dataset, this study demonstrates the effectiveness of the U-Net model in achieving state-of-the-art results in image segmentation. The results clearly show that the proposed model achieves higher accuracy, mean IoU, and mean Dice scores than existing models.
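The downsample-halves/depth-doubles bookkeeping described above can be sketched in a few lines. The input size, base channel count, and depth below are illustrative assumptions, not the paper's configuration:

```python
# Sketch of U-Net encoder shape progression: each downsampling step halves
# the spatial dimensions and doubles the feature depth (cf. the abstract's
# "UpSampling2D" decoder, which reverses this). Dimensions are hypothetical.

def unet_shapes(h, w, base_ch=64, depth=4):
    """Return the (height, width, channels) at each encoder level plus bottleneck."""
    shapes = []
    ch = base_ch
    for _ in range(depth):
        shapes.append((h, w, ch))
        h, w, ch = h // 2, w // 2, ch * 2  # downsample: /2 spatial, x2 depth
    shapes.append((h, w, ch))  # bottleneck
    return shapes

print(unet_shapes(256, 512))
```

The symmetric decoder reverses this progression, with skip connections concatenating each encoder level's features onto the corresponding upsampled maps.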

PMID:38578733 | DOI:10.1371/journal.pone.0300767

Categories: Literature Watch

Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection

Fri, 2024-04-05 06:00

Brain Inform. 2024 Apr 5;11(1):10. doi: 10.1186/s40708-024-00222-1.

ABSTRACT

Explainable artificial intelligence (XAI) has gained much interest in recent years for its ability to explain the complex decision-making process of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) frameworks have grown into popular interpretive tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting the detection of Alzheimer's disease (AD). Adhering to PRISMA and Kitchenham's guidelines, we identified 23 relevant articles and investigated these frameworks' prospective capabilities, benefits, and challenges in depth. The results emphasise XAI's crucial role in strengthening the trustworthiness of AI-based AD predictions. This review aims to outline the fundamental capabilities of the LIME and SHAP XAI frameworks for enhancing fidelity within clinical decision support systems for AD prognosis.
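Both frameworks rest on the same perturbation intuition: vary the inputs and watch the prediction move. The toy sketch below illustrates that idea with a simple occlusion-style attribution; it is not the lime or shap libraries, and the model and feature values are made up for illustration:

```python
# Toy perturbation-based attribution in the spirit of LIME/SHAP (NOT the
# actual libraries): attribute a prediction to each feature by measuring
# how the model output changes when that feature is replaced by a baseline.

def occlusion_attributions(model, x, baseline):
    """Per-feature attribution: f(x) minus f(x with feature i occluded)."""
    fx = model(x)
    attrs = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] = baseline[i]  # occlude feature i
        attrs.append(fx - model(x_pert))
    return attrs

# Hypothetical linear "risk score" model over three features.
weights = [0.5, -0.2, 0.8]
model = lambda v: sum(w * f for w, f in zip(weights, v))

attrs = occlusion_attributions(model, x=[2.0, 1.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(attrs)  # for a linear model, attribution i equals weight_i * x_i
```

Real LIME fits a local surrogate model to many such perturbations, and SHAP averages contributions over feature coalitions; this sketch shows only the shared core idea.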

PMID:38578524 | DOI:10.1186/s40708-024-00222-1

Categories: Literature Watch

Deep learning denoising reconstruction enables faster T2-weighted FLAIR sequence acquisition with satisfactory image quality

Fri, 2024-04-05 06:00

J Med Imaging Radiat Oncol. 2024 Apr 5. doi: 10.1111/1754-9485.13649. Online ahead of print.

ABSTRACT

INTRODUCTION: Deep learning reconstruction (DLR) technologies are the latest methods attempting to solve the enduring problem of reducing MRI acquisition times without compromising image quality. The clinical utility of this reconstruction technique is yet to be fully established. This study aims to assess whether a commercially available DLR technique applied to 2D T2-weighted FLAIR brain images allows a reduction in scan time, without compromising image quality and thus diagnostic accuracy.

METHODS: 47 participants (24 male, mean age 55.9 ± 18.7 SD years, range 20-89 years) underwent routine, clinically indicated brain MRI studies in March 2022, that included a standard-of-care (SOC) T2-weighted FLAIR sequence, and an accelerated acquisition that was reconstructed using the DLR denoising product. Overall image quality, lesion conspicuity, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and artefacts for each sequence, and preferred sequence on direct comparison, were subjectively assessed by two readers.

RESULTS: There was a strong preference for the SOC FLAIR sequence for overall image quality (P = 0.01) and on head-to-head comparison (P < 0.001). No difference was observed for lesion conspicuity (P = 0.49), perceived SNR (P = 1.0), or perceived CNR (P = 0.84). There was no difference in motion (P = 0.57) or Gibbs ringing (P = 0.86) artefacts. Phase ghosting (P = 0.038) and pseudolesions (P < 0.001) were significantly more frequent on DLR images.

CONCLUSION: The DLR algorithm allowed faster FLAIR acquisition times with comparable image quality and lesion conspicuity. However, the increased incidence and severity of phase ghosting artefact and the presence of pseudolesions with this technique may reduce reading speed, efficiency, and diagnostic confidence.

PMID:38577926 | DOI:10.1111/1754-9485.13649

Categories: Literature Watch

Advancing Ligand Docking through Deep Learning: Challenges and Prospects in Virtual Screening

Fri, 2024-04-05 06:00

Acc Chem Res. 2024 Apr 5. doi: 10.1021/acs.accounts.4c00093. Online ahead of print.

ABSTRACT

Conspectus

Molecular docking, also termed ligand docking (LD), is a pivotal element of structure-based virtual screening (SBVS) used to predict the binding conformations and affinities of protein-ligand complexes. Traditional LD methodologies rely on a search and scoring framework, utilizing heuristic algorithms to explore binding conformations and scoring functions to evaluate binding strengths. However, to meet the efficiency demands of SBVS, these algorithms and functions are often simplified, prioritizing speed over accuracy.

The emergence of deep learning (DL) has exerted a profound impact on diverse fields, ranging from natural language processing to computer vision and drug discovery. DeepMind's AlphaFold2 has impressively exhibited its ability to accurately predict protein structures solely from amino acid sequences, highlighting the remarkable potential of DL in conformation prediction. This groundbreaking advancement circumvents the traditional search-scoring frameworks in LD, enhancing both accuracy and processing speed and thereby catalyzing a broader adoption of DL algorithms in binding pose prediction. Nevertheless, a consensus on certain aspects remains elusive.

In this Account, we delineate the current status of employing DL to augment LD within the VS paradigm, highlighting our contributions to this domain. Furthermore, we discuss the challenges and future prospects, drawing insights from our scholarly investigations. Initially, we present an overview of VS and LD, followed by an introduction to DL paradigms, which deviate significantly from traditional search-scoring frameworks. Subsequently, we delve into the challenges associated with the development of DL-based LD (DLLD), encompassing evaluation metrics, application scenarios, and physical plausibility of the predicted conformations. In the evaluation of LD algorithms, it is essential to recognize the multifaceted nature of the metrics.
While the accuracy of binding pose prediction, often measured by the success rate, is a pivotal aspect, the scoring/screening power and computational speed of these algorithms are equally important given the pivotal role of LD tools in VS. Regarding application scenarios, early methods focused on blind docking, where the binding site is unknown. However, recent studies suggest a shift toward identifying binding sites rather than solely predicting binding poses within these models. In contrast, LD with a known pocket in VS has been shown to be more practical. Physical plausibility poses another significant challenge. Although DLLD models often achieve higher success rates compared to traditional methods, they may generate poses with implausible local structures, such as incorrect bond angles or lengths, which are disadvantageous for postprocessing tasks like visualization. Finally, we discuss the future perspectives for DLLD, emphasizing the need to improve generalization ability, strike a balance between speed and accuracy, account for protein conformation flexibility, and enhance physical plausibility. Additionally, we delve into the comparison between generative and regression algorithms in this context, exploring their respective strengths and potential.

PMID:38577892 | DOI:10.1021/acs.accounts.4c00093

Categories: Literature Watch

Identification of B cell subsets based on antigen receptor sequences using deep learning

Fri, 2024-04-05 06:00

Front Immunol. 2024 Mar 21;15:1342285. doi: 10.3389/fimmu.2024.1342285. eCollection 2024.

ABSTRACT

B cell receptors (BCRs) denote antigen specificity, while corresponding cell subsets indicate B cell functionality. Since each B cell uniquely encodes this combination, physical isolation and subsequent processing of individual B cells become indispensable to identify both attributes. However, this approach incurs high costs and inevitable information loss, hindering high-throughput investigation of B cell populations. Here, we present BCR-SORT, a deep learning model that predicts cell subsets from their corresponding BCR sequences by leveraging B cell activation and maturation signatures encoded within BCR sequences. BCR-SORT is demonstrated to improve the reconstruction of BCR phylogenetic trees and to reproduce results consistent with those verified using physical isolation-based methods or prior knowledge. Notably, when applied to BCR sequences from COVID-19 vaccine recipients, it revealed inter-individual heterogeneity in the evolutionary trajectories towards Omicron-binding memory B cells. Overall, BCR-SORT offers great potential to improve our understanding of B cell responses.

PMID:38576618 | PMC:PMC10991714 | DOI:10.3389/fimmu.2024.1342285

Categories: Literature Watch

Development of a Deep Learning System for Intra-Operative Identification of Cancer Metastases

Fri, 2024-04-05 06:00

Ann Surg. 2024 Apr 5. doi: 10.1097/SLA.0000000000006294. Online ahead of print.

ABSTRACT

OBJECTIVE: The aim of this study was to develop and test a prototype of a deep learning surgical guidance system (CASL) that can intra-operatively identify peritoneal surface metastases on routine laparoscopy images.

BACKGROUND: For a number of cancer patients, operative resection with curative intent can still end in early recurrence of the cancer. Surgeons misidentifying visible peritoneal surface metastases is likely a common reason.

METHODS: CASL was developed and tested using staging laparoscopy images recorded from 132 patients with histologically-confirmed adenocarcinoma involving the gastrointestinal tract. The data included images depicting 4287 visible peritoneal surface lesions and 3650 image patches of 365 biopsied peritoneal surface lesions. The prototype's diagnostic performance was compared to results from a national survey evaluating 111 oncologic surgeons in a simulated clinical environment.

RESULTS: In a simulated environment, surgeons' accuracy in correctly recommending a biopsy for metastases while omitting a biopsy for benign lesions was only 52%. In this environment, the prototype of a deep learning surgical guidance system demonstrated improved performance in identifying peritoneal surface metastases compared to oncologic surgeons, with an area under the receiver operating characteristic curve of 0.69 (oncologic surgeons) versus 0.78 (CASL) versus 0.79 (human-computer combined). A proposed model would have improved the identification of metastases by 5% while reducing the number of unnecessary biopsies by 28% compared to current standard practice.

CONCLUSIONS: Our findings demonstrate a pathway toward an artificial intelligence system for intra-operative identification of peritoneal surface metastases, although the system still requires additional development and future validation in a multi-institutional clinical setting.

PMID:38577794 | DOI:10.1097/SLA.0000000000006294

Categories: Literature Watch

A Video Transformer Network for Thyroid Cancer Detection on Hyperspectral Histologic Images

Fri, 2024-04-05 06:00

Proc SPIE Int Soc Opt Eng. 2023 Feb;12471:1247107. doi: 10.1117/12.2654851. Epub 2023 Apr 6.

ABSTRACT

Hyperspectral imaging is a label-free and non-invasive imaging modality that captures images at many different wavelengths. In this study, we used a vision transformer pre-trained on video data to detect thyroid cancer on hyperspectral images. We built a dataset of 49 whole-slide hyperspectral images (WS-HSI) of thyroid cancer. To improve training, we introduced 5 new data augmentation methods that transform spectra. We achieved an F1 score of 88.1% and an accuracy of 89.64% on our test dataset. The transformer network and the whole-slide hyperspectral imaging technique can have many applications in digital pathology.

PMID:38577581 | PMC:PMC10993530 | DOI:10.1117/12.2654851

Categories: Literature Watch

Develop prediction model to help forecast advanced prostate cancer patients' prognosis after surgery using neural network

Fri, 2024-04-05 06:00

Front Endocrinol (Lausanne). 2024 Mar 21;15:1293953. doi: 10.3389/fendo.2024.1293953. eCollection 2024.

ABSTRACT

BACKGROUND: The effect of surgery on advanced prostate cancer (PC) is unclear, and a predictive model for postoperative survival is still lacking.

METHODS: We investigated the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) database to collect clinical features of advanced PC patients. According to clinical experience, age, race, grade, pathology, T, N, M, stage, size, regional nodes positive, regional nodes examined, surgery, radiotherapy, chemotherapy, history of malignancy, clinical Gleason score (from needle core biopsy or transurethral resection of the prostate specimens), pathological Gleason score (from prostatectomy specimens), and prostate-specific antigen (PSA) were the potential predictive variables. All samples were divided into a train cohort (70% of the total, for model training) and a test cohort (30% of the total, for model validation) by random sampling. We then developed a neural network to predict advanced PC patients' overall survival. The area under the receiver operating characteristic curve (AUC) was used to evaluate the model's performance.
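The AUC used for validation here has a simple rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative case, with ties counting half. A minimal sketch, with made-up labels and scores:

```python
# Minimal AUC computation via its rank interpretation (illustrative only;
# real pipelines would use a library routine such as sklearn's roc_auc_score).

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # count positive-negative pairs ranked correctly; ties count 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predictions for six patients (1 = event, 0 = no event).
y = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(y, scores))  # ≈ 0.889: 8 of 9 pairs ranked correctly
```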

RESULTS: 6380 patients diagnosed with advanced (stage III-IV) prostate cancer who received surgery were included. The model using all collected clinical features as predictors and based on the neural network algorithm performed best, achieving an AUC of 0.7058 (95% CI, 0.7021-0.7068) in the train cohort and 0.6925 (95% CI, 0.6906-0.6956) in the test cohort. We then packaged it into Windows 64-bit software.

CONCLUSION: Patients with advanced prostate cancer may benefit from surgery. To forecast their overall survival, we built a prognostic model based on clinical features. This model is accurate and may offer some reference for clinical decision making.

PMID:38577575 | PMC:PMC10991752 | DOI:10.3389/fendo.2024.1293953

Categories: Literature Watch

Image-based second opinion for blood typing

Fri, 2024-04-05 06:00

Health Inf Sci Syst. 2024 Apr 2;12(1):28. doi: 10.1007/s13755-024-00289-4. eCollection 2024 Dec.

ABSTRACT

This paper considers a new method for providing a recommendation (second opinion) to a laboratory assistant performing manual blood typing with serological plates. The manual method consists of two steps: preparation and analysis. During the preparation step, the laboratory assistant fills each well of a plate with a blood sample and a reagent mixture according to methodological guidelines. In the second step, the result of the reactions, called agglutination, must be determined visually. Despite the popularity of this method, it is slow and strongly influenced by the human factor, which causes blood-typing errors. To increase the quality and performance of the analysis step, we propose a novel neural-based classification method. Our solution provides a fast way to enter the results into a laboratory system. We collected a new large dataset of 3139 well images, each with a ground-truth label from the donor's medical history and assessments from six experts. We showed that the proposed solution, based on state-of-the-art architectures, is comparable with the best expert and makes 2.75 times fewer errors than the average one, with an overall accuracy of 98.4%. Taking into account the low-semantic nature of the task, we also considered shallow neural networks, which showed accuracy comparable with state-of-the-art models.

PMID:38577517 | PMC:PMC10987457 | DOI:10.1007/s13755-024-00289-4

Categories: Literature Watch

An automatic diagnosis model of otitis media with high accuracy rate using transfer learning

Fri, 2024-04-05 06:00

Front Mol Biosci. 2024 Mar 21;10:1250596. doi: 10.3389/fmolb.2023.1250596. eCollection 2023.

ABSTRACT

Introduction: Chronic Suppurative Otitis Media (CSOM) and middle ear cholesteatoma are two common chronic otitis media diseases that often cause confusion among physicians due to their similar location and shape in clinical CT images of the internal auditory canal. In this study, we utilized the transfer learning method combined with CT scans of the internal auditory canal to achieve accurate lesion segmentation and automatic diagnosis for patients with CSOM and middle ear cholesteatoma. Methods: We collected 1019 CT scan images and utilized the nnUnet skeleton model along with coarse-grained focal segmentation labeling to pre-train on the above CT images for focal segmentation. We then fine-tuned the pre-trained model for the downstream three-class diagnosis task. Results: Our proposed algorithm model achieved a classification accuracy of 92.33% for CSOM and middle ear cholesteatoma, approximately 5% higher than the benchmark model. Moreover, our upstream segmentation task training resulted in a mean Intersection over Union (mIoU) of 0.569. Discussion: Our results demonstrate that using coarse-grained contour boundary labeling can significantly enhance the accuracy of downstream classification tasks. The combination of deep learning and automatic diagnosis on internal auditory canal CT images of CSOM and middle ear cholesteatoma exhibits high sensitivity and specificity.

PMID:38577506 | PMC:PMC10991843 | DOI:10.3389/fmolb.2023.1250596

Categories: Literature Watch

A focus on molecular representation learning for the prediction of chemical properties

Fri, 2024-04-05 06:00

Chem Sci. 2024 Mar 25;15(14):5052-5055. doi: 10.1039/d4sc90043j. eCollection 2024 Apr 3.

ABSTRACT

Molecular representation learning (MRL) is a specialized field in which deep-learning models condense essential molecular information into a vectorized form. Whereas recent research has predominantly emphasized drug discovery and bioactivity applications, MRL holds significant potential for diverse chemical properties beyond these contexts. The recently published study by King-Smith introduces a novel application of molecular representation training and compellingly demonstrates its value in predicting molecular properties (E. King-Smith, Chem. Sci., 2024, https://doi.org/10.1039/D3SC04928K). In this focus article, we will briefly delve into MRL in chemistry and the significance of King-Smith's work within the dynamic landscape of this evolving field.

PMID:38577350 | PMC:PMC10988574 | DOI:10.1039/d4sc90043j

Categories: Literature Watch

Recent Outcomes and Challenges of Artificial Intelligence, Machine Learning, and Deep Learning in Neurosurgery

Fri, 2024-04-05 06:00

World Neurosurg X. 2024 Mar 8;23:100301. doi: 10.1016/j.wnsx.2024.100301. eCollection 2024 Jul.

ABSTRACT

Neurosurgeons receive extensive technical training, which equips them with the knowledge and skills to specialise in various fields and manage the massive amounts of information and decision-making required throughout the various stages of neurosurgery, including preoperative, intraoperative, and postoperative care and recovery. Over the past few years, artificial intelligence (AI) has become more useful in neurosurgery. AI has the potential to improve patient outcomes by augmenting the capabilities of neurosurgeons and ultimately improving diagnostic and prognostic outcomes as well as decision-making during surgical procedures. By incorporating AI into both interventional and non-interventional therapies, neurosurgeons may provide the best care for their patients. AI, machine learning (ML), and deep learning (DL) have made significant progress in the field of neurosurgery. These cutting-edge methods have enhanced patient outcomes, reduced complications, and improved surgical planning.

PMID:38577317 | PMC:PMC10992893 | DOI:10.1016/j.wnsx.2024.100301

Categories: Literature Watch

UMS-Rep: Unified modality-specific representation for efficient medical image analysis

Fri, 2024-04-05 06:00

Inform Med Unlocked. 2021;24:100571. doi: 10.1016/j.imu.2021.100571. Epub 2021 Apr 20.

ABSTRACT

Medical image analysis typically includes several tasks such as enhancement, segmentation, and classification. Traditionally, these tasks are implemented with a separate deep learning model per task, which is inefficient: it involves unnecessary training repetitions, demands greater computational resources, and requires a relatively large amount of labeled data. In this paper, we propose a multi-task training approach for medical image analysis, where individual tasks are fine-tuned simultaneously through relevant knowledge transfer using a unified modality-specific feature representation (UMS-Rep). We explore different fine-tuning strategies to demonstrate the impact of the strategy on the performance of target medical image tasks. We experiment with different visual tasks (e.g., image denoising, segmentation, and classification) to highlight the advantages offered by our approach for two imaging modalities, chest X-ray and Doppler echocardiography. Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance. Specifically, the proposed approach improves accuracy (up to ∼ 9% ↑) and decreases computational time (up to ∼ 86% ↓) as compared to the baseline approach. Further, our results show that the performance of target tasks in medical images is highly influenced by the utilized fine-tuning strategy.
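The core idea, one shared modality-specific encoder feeding several task heads so the expensive feature extraction is trained once and reused, can be caricatured as follows. The functions and values are illustrative stand-ins, not the UMS-Rep implementation:

```python
# Caricature of a shared-representation multi-task setup: a single encoder
# produces features consumed by independent task heads. All names, thresholds,
# and values here are hypothetical placeholders for real network components.

def shared_encoder(image):
    # stand-in for a deep modality-specific feature extractor
    mean = sum(image) / len(image)
    return [mean, max(image) - min(image)]  # toy 2-d feature vector

def classification_head(features):
    return 1 if features[0] > 0.5 else 0  # e.g. finding present/absent

def denoising_head(features):
    return features[1] * 0.1  # e.g. a noise-strength parameter

feats = shared_encoder([0.2, 0.9, 0.7, 0.6])  # hypothetical pixel intensities
print(classification_head(feats), denoising_head(feats))
```

In the multi-task setting described above, the heads are fine-tuned jointly, so gradients from every task shape the one shared encoder rather than training a full model per task.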

PMID:38577267 | PMC:PMC10994192 | DOI:10.1016/j.imu.2021.100571

Categories: Literature Watch

Artificial Intelligence in Endodontics: A Scoping Review

Fri, 2024-04-05 06:00

Iran Endod J. 2024;19(2):85-98. doi: 10.22037/iej.v19i2.44842.

ABSTRACT

Artificial intelligence (AI) is transforming the diagnostic methods and treatment approaches in the constantly evolving field of endodontics. The current review discusses the recent advancements in AI, with a specific focus on convolutional and artificial neural networks. AI models have proved to be highly beneficial in the analysis of root canal anatomy, detecting periapical lesions at early stages, and providing accurate working-length determination. Moreover, they appear effective in predicting treatment success and identifying various conditions, e.g., dental caries, pulpal inflammation, and vertical root fractures, as well as offering second opinions for non-surgical root canal treatments. Furthermore, AI has demonstrated an exceptional ability to recognize landmarks and lesions in cone-beam computed tomography scans with consistently high precision rates. While AI has significantly promoted the accuracy and efficiency of endodontic procedures, it is of high importance to continue validating the reliability and practicality of AI before possible widespread integration into daily clinical practice. Additionally, ethical considerations related to patient privacy, data security, and potential bias should be carefully examined to ensure the ethical and responsible implementation of AI in endodontics.

PMID:38577001 | PMC:PMC10988643 | DOI:10.22037/iej.v19i2.44842

Categories: Literature Watch

Revisiting methotrexate and phototrexate Zinc15 library-based derivatives using deep learning in-silico drug design approach

Fri, 2024-04-05 06:00

Front Chem. 2024 Mar 21;12:1380266. doi: 10.3389/fchem.2024.1380266. eCollection 2024.

ABSTRACT

Introduction: Cancer is the second most prevalent cause of mortality in the world, despite the availability of several medications for cancer treatment. Therefore, the cancer research community has emphasized computational techniques to speed up the discovery of novel anticancer drugs. Methods: In the current study, QSAR-based virtual screening was performed on the Zinc15 compound library (271 derivatives of methotrexate (MTX) and phototrexate (PTX)) to predict their inhibitory activity against dihydrofolate reductase (DHFR), a potential anticancer drug target. The deep learning-based ADMET parameters were employed to generate a 2D QSAR model using multiple linear regression (MLR) methods, with leave-one-out cross-validated (LOO-CV) Q2 and correlation coefficient R2 values as high as 0.77 and 0.81, respectively. Results: From the QSAR model and virtual screening analysis, the top hits (09, 27, 41, 68, 74, 85, 99, 180) exhibited pIC50 values ranging from 5.85 to 7.20 with binding scores of -11.6 to -11.0 kcal/mol and were subjected to further investigation. The ADMET attributes computed with the message-passing neural network (MPNN) model demonstrated the potential of the selected hits as oral medications based on their lipophilic profile (Log P 0.19-2.69) and bioavailability (76.30% to 78.46%). The clinical toxicity score was 31.24% to 35.30%, with the lowest toxicity score (8.30%) observed for compound 180. DFT calculations were carried out to determine the stability, physicochemical parameters, and chemical reactivity of the selected compounds. The docking results were further validated by 100 ns molecular dynamics simulation analysis. Conclusion: The promising lead compounds identified compare favorably with the standard reference drugs MTX and PTX for anticancer activity and could lead to novel therapies after experimental validation. Furthermore, it is suggested to unveil the inhibitory potential of the identified hits via in-vitro and in-vivo approaches.
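The LOO-CV Q2 statistic reported for the QSAR model can be sketched for a one-descriptor linear model as follows. The descriptor/pIC50 values below are hypothetical, not the study's data:

```python
# Leave-one-out cross-validated Q^2 for a toy one-descriptor linear QSAR
# model: Q^2 = 1 - PRESS / TSS, where PRESS sums squared errors of each
# point predicted by a model fitted without it. Data are hypothetical.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx  # slope, intercept

def loo_q2(xs, ys):
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        b, a = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])  # refit without point i
        press += (ys[i] - (b * xs[i] + a)) ** 2                    # predict held-out point
    return 1 - press / sum((y - my) ** 2 for y in ys)

# hypothetical descriptor (e.g. logP) vs pIC50 values
x = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y = [5.1, 5.6, 5.9, 6.4, 6.8, 7.1]
print(round(loo_q2(x, y), 3))
```

A Q2 close to the fitted R2, as in the abstract (0.77 vs 0.81), suggests the model is not merely overfitting the training set.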

PMID:38576849 | PMC:PMC10991842 | DOI:10.3389/fchem.2024.1380266

Categories: Literature Watch

Context-aware deep learning enables high-efficacy localization of high concentration microbubbles for super-resolution ultrasound localization microscopy

Thu, 2024-04-04 06:00

Nat Commun. 2024 Apr 4;15(1):2932. doi: 10.1038/s41467-024-47154-2.

ABSTRACT

Ultrasound localization microscopy (ULM) enables deep tissue microvascular imaging by localizing and tracking intravenously injected microbubbles circulating in the bloodstream. However, conventional localization techniques require spatially isolated microbubbles, resulting in prolonged imaging time to obtain detailed microvascular maps. Here, we introduce LOcalization with Context Awareness (LOCA)-ULM, a deep learning-based microbubble simulation and localization pipeline designed to enhance localization performance at high microbubble concentrations. In silico, LOCA-ULM enhanced microbubble detection accuracy to 97.8% and reduced the missing rate to 23.8%, outperforming conventional and deep learning-based localization methods by up to 17.4% in accuracy and 37.6% in missing-rate reduction. In in vivo rat brain imaging, LOCA-ULM revealed dense cerebrovascular networks and spatially adjacent microvessels undetected by conventional ULM. We further demonstrate the superior localization performance of LOCA-ULM in functional ULM (fULM), where LOCA-ULM significantly increased the functional imaging sensitivity of fULM to hemodynamic responses evoked by whisker stimulations in the rat brain.

PMID:38575577 | DOI:10.1038/s41467-024-47154-2

Categories: Literature Watch

Splice site recognition - deciphering Exon-Intron transitions for genetic insights using Enhanced integrated Block-Level gated LSTM model

Thu, 2024-04-04 06:00

Gene. 2024 Apr 2:148429. doi: 10.1016/j.gene.2024.148429. Online ahead of print.

ABSTRACT

Bioinformatics is a contemporary interdisciplinary area focused on analyzing the growing number of genome sequences. Gene variants are differences in DNA sequences among individuals within a population. Splice site recognition is a crucial step in the process of gene expression, where the coding sequences of genes are joined together to form mature messenger RNA (mRNA). Gene-disrupting variants are believed to be a primary cause of neurodevelopmental disorders such as Autism Spectrum Disorder (ASD), which manifests as developmental delay and has been associated with variants in more than a hundred genes. Missense variants, premature stop codons, or deletions alter both the quality and quantity of encoded proteins. Predicting genes within exons and introns presents major challenges, such as dealing with sequencing errors, short reads, incomplete genes, overlapping genes, and more. Although many traditional techniques have been utilized in creating exon prediction systems, the primary challenge lies in accurately identifying the length and spliced-strand location of exons in conjunction with introns. The suggested approach therefore utilizes a deep learning algorithm to analyze intricate and extensive genomic datasets. An M-LSTM is used to classify spliced DNA strands into three categories (EI as 1, IE as 2, and none as 3). The M-LSTM can process extensive sequence datasets, storing long-range information without degrading the current input or output; this enables it to capture long-term dependencies and to mitigate exploding gradients. The proposed model is compared internally with Naïve Bayes and Random Forest to assess its efficacy. Additionally, the proposed model's performance is evaluated using metrics such as recall, F1-score, precision, and accuracy.
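The class encoding the abstract describes (EI as 1, IE as 2, none as 3) implies a standard preprocessing step before any LSTM model: one-hot encoding the DNA strand and mapping class names to integer labels. A minimal numpy sketch is below; the function and label names are illustrative, not from the paper.

```python
import numpy as np

# Labels as used in the abstract: exon-intron (EI) -> 1,
# intron-exon (IE) -> 2, neither junction -> 3.
LABELS = {"EI": 1, "IE": 2, "N": 3}

BASES = "ACGT"

def one_hot_encode(seq: str) -> np.ndarray:
    """One-hot encode a DNA strand into a (len(seq), 4) matrix.

    Ambiguous bases (e.g. 'N') become all-zero rows.
    """
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        j = BASES.find(base)
        if j >= 0:
            mat[i, j] = 1.0
    return mat

def encode_dataset(records):
    """records: iterable of (sequence, class_name) pairs of equal length."""
    X = np.stack([one_hot_encode(seq) for seq, _ in records])
    y = np.array([LABELS[name] for _, name in records])
    return X, y
```

The resulting `(samples, positions, 4)` tensor is the usual input shape for a recurrent sequence classifier such as the M-LSTM the authors describe.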

PMID:38575098 | DOI:10.1016/j.gene.2024.148429

Categories: Literature Watch

Enhanced capillary delivery with nanobubble-mediated blood-brain barrier opening and advanced high resolution vascular segmentation

Thu, 2024-04-04 06:00

J Control Release. 2024 Apr 2:S0168-3659(24)00222-0. doi: 10.1016/j.jconrel.2024.04.001. Online ahead of print.

ABSTRACT

Overcoming the blood-brain barrier (BBB) is essential to enhancing brain therapy. Here, we utilized nanobubbles with focused ultrasound for targeted and improved BBB opening in mice. A microscopy method assessed BBB opening at single-blood-vessel resolution using a dual-dye labeling technique: green fluorescent molecules labeled the blood vessels, and brain-impermeable Evans blue dye quantified BBB extravasation. A deep learning architecture enabled blood vessel segmentation, delivering accuracy comparable to manual segmentation with a significant reduction in time. Segmentation outcomes were applied to the Evans blue channel to quantify the extravasation of each blood vessel. Results were compared to microbubble-mediated BBB opening, where reduced extravasation was observed in capillaries with a diameter of 2-6 μm. In comparison, nanobubbles yielded improved opening in these capillaries and efficacy equivalent to that of microbubbles in larger vessels. These results indicate the potential of nanobubbles to serve as enhanced agents for BBB opening, amplifying bioeffects in capillaries while preserving comparable opening in larger vessels.
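Applying the segmentation output to the Evans blue channel to score extravasation per vessel, as the abstract describes, reduces to masking the dye image with each vessel's label. A minimal numpy sketch, assuming an integer-labeled vessel mask (e.g. from connected-component analysis of the network's segmentation) co-registered with the dye channel; the function name is illustrative:

```python
import numpy as np

def extravasation_per_vessel(vessel_labels: np.ndarray,
                             evans_blue: np.ndarray) -> dict:
    """Mean Evans blue intensity per segmented vessel.

    vessel_labels: integer mask where 0 is background and each
    vessel carries a distinct positive label.
    evans_blue: co-registered dye-channel image, same shape.
    """
    assert vessel_labels.shape == evans_blue.shape
    scores = {}
    for vid in np.unique(vessel_labels):
        if vid == 0:
            continue  # skip background
        mask = vessel_labels == vid
        scores[int(vid)] = float(evans_blue[mask].mean())
    return scores
```

Per-vessel scores like these, grouped by vessel diameter, would support the capillary (2-6 μm) versus larger-vessel comparison reported in the abstract.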

PMID:38575074 | DOI:10.1016/j.jconrel.2024.04.001

Categories: Literature Watch

Quality control in the cornea bank with AI: comparison of the new deep-learning-based approach with conventional endothelial cell density determination using the Rhine-Tec system

Thu, 2024-04-04 06:00

Klin Monbl Augenheilkd. 2024 Apr 4. doi: 10.1055/a-2299-8117. Online ahead of print.

ABSTRACT

Endothelial cell density (ECD) is a crucial parameter for the release of corneal grafts for transplantation. The Lions Eye Bank of Baden-Württemberg uses the "Rhine-Tec Endothelial Analysis System" for ECD quantification, which is based on a fixed counting frame method considering only a small sample of 15 to 40 endothelial cells. The measurement result therefore depends on the frame placement and manual correction of the cells counted within the frame. To increase the sample size and create higher objectivity, we developed a new method based on "deep learning" that automatically detects all visible endothelial cells in the image. This study aims to compare this new method with the conventional Rhine-Tec system. 9,375 archived phase-contrast microscopic images of consecutive grafts from the Lions Eye Bank were evaluated with the deep learning method and compared with the corresponding archived analyses of the Rhine-Tec system. Specifically, comparisons of means, Bland-Altman and correlation analyses were performed. Comparable results were obtained for both methods. The mean difference between the Rhine-Tec system and the deep learning method was only -23 cells/mm2 (95% confidence interval -29 to -17). There was a statistically significant positive correlation between the two methods with a correlation coefficient of 0.748. Noticeable in the Bland-Altman analysis were clustered deviations in the cell density range between 2000 and 2500 cells/mm2 with higher values in the Rhine-Tec system. The comparable results regarding cell density measurement values underline the validity of the "deep learning" based method. The deviations around the formal threshold for graft release of 2000 cells/mm2 are most likely explained by the higher objectivity of the deep learning method and the fact that measurement frames and manual corrections were specifically selected to reach the formal threshold of 2000 cells/mm2 when the full area endothelial quality was good. This full area assessment of the graft endothelium cannot currently be replaced by deep learning methods and remains the most important basis for graft release for keratoplasty.
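The agreement statistics reported above (a bias of -23 cells/mm2 with a 95% confidence interval of -29 to -17) are the standard Bland-Altman quantities computed from paired per-graft measurements. A minimal numpy sketch, with illustrative names and synthetic inputs, not the study's data:

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the mean difference a - b (bias), the 95% confidence
    interval of the bias, and the 95% limits of agreement.
    """
    diff = a - b
    n = diff.size
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample standard deviation
    se = sd / np.sqrt(n)            # standard error of the bias
    ci = (bias - 1.96 * se, bias + 1.96 * se)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, ci, loa
```

With the Rhine-Tec values as `a` and the deep-learning values as `b`, a negative bias as in the study means the counting-frame method reads lower on average than the full-image method.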

PMID:38574759 | DOI:10.1055/a-2299-8117

Categories: Literature Watch
