Deep learning

Improved nested U-structure for accurate nailfold capillary segmentation

Thu, 2024-03-14 06:00

Microvasc Res. 2024 Mar 12:104680. doi: 10.1016/j.mvr.2024.104680. Online ahead of print.

ABSTRACT

Changes in the structure and function of nailfold capillaries may be indicators of numerous diseases. Noninvasive diagnostic tools are commonly used to extract morphological information from segmented nailfold capillaries in order to study physiological and pathological changes therein. However, current segmentation methods for nailfold capillaries cannot accurately separate capillaries from the background, resulting in issues such as unclear segmentation boundaries. Therefore, improving the accuracy of nailfold capillary segmentation is necessary to facilitate more efficient clinical diagnosis and research. Herein, we propose a nailfold capillary image segmentation method based on a U2-Net backbone network combined with a Transformer structure. This method integrates the U2-Net and Transformer networks to establish an encoder-decoder network, which inserts Transformer layers into the nested two-layer U-shaped architecture of the U2-Net. This structure effectively extracts multiscale features within stages and aggregates multilevel features across stages to generate high-resolution feature maps. The experimental results demonstrate an overall accuracy of 98.23%, a Dice coefficient of 88.56%, and an IoU of 80.41% compared to the ground truth. Furthermore, our proposed method improves the overall accuracy by approximately 2%, 3%, and 5% compared to the original U2-Net, Res-Unet, and U-Net, respectively. These results indicate that the Transformer-U2Net network performs well in nailfold capillary image segmentation and provides more detailed and accurate information on the segmented nailfold capillary structure, which may aid clinicians in the more precise diagnosis and treatment of nailfold capillary-related diseases.
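
For reference, the Dice coefficient and IoU reported above are standard overlap measures between a predicted mask and the ground truth; a minimal sketch of how such metrics are computed from binary masks (not the authors' code):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou
```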

PMID:38484792 | DOI:10.1016/j.mvr.2024.104680

Categories: Literature Watch

Efficiently improving the Wi-Fi-based human activity recognition, using auditory features, autoencoders, and fine-tuning

Thu, 2024-03-14 06:00

Comput Biol Med. 2024 Feb 27;172:108232. doi: 10.1016/j.compbiomed.2024.108232. Online ahead of print.

ABSTRACT

Human activity recognition (HAR) based on Wi-Fi signals has attracted significant attention due to its convenience and the availability of infrastructures and sensors. Channel State Information (CSI) measures how Wi-Fi signals propagate through the environment. However, many scenarios and applications have insufficient training data due to constraints such as cost, time, or resources. This poses a challenge for achieving high accuracy levels with machine learning techniques. In this study, multiple deep learning models for HAR were employed to achieve acceptable accuracy levels with much less training data than other methods. A pretrained encoder, trained as part of a Multi-Input Multi-Output Autoencoder (MIMO AE) on Mel Frequency Cepstral Coefficients (MFCC) from a small subset of data samples, was used for feature extraction. Fine-tuning was then applied by adding the encoder as a fixed layer in the classifier, which was trained on a small fraction of the remaining data. The evaluation results (K-fold cross-validation, K = 5) showed that using only 30% of the training and validation data (equivalent to 24% of the total data), the accuracy was improved by 17.7% compared to the case where the encoder was not used (79.3% accuracy for the designed classifier versus 90.3% for the classifier with the fixed encoder). Although higher accuracy is achievable at greater computational cost by making the pretrained encoder a trainable layer (up to a 2.4% improvement), this small gap demonstrates the effectiveness and efficiency of the proposed method for HAR using Wi-Fi signals.
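
A minimal PyTorch sketch of the fine-tuning setup described above, with a pretrained encoder frozen as a fixed layer inside a small classifier; the module names and dimensions are placeholders, not the paper's implementation:

```python
import torch.nn as nn

class EncoderClassifier(nn.Module):
    """Classifier that reuses a pretrained MFCC encoder as a fixed layer."""
    def __init__(self, encoder: nn.Module, latent_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # fixed encoder: only the head trains
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        z = self.encoder(x)                  # features from the MIMO-AE encoder
        return self.head(z)
```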

PMID:38484697 | DOI:10.1016/j.compbiomed.2024.108232

Categories: Literature Watch

Deep learning-assisted flavonoid-based fluorescent sensor array for the nondestructive detection of meat freshness

Thu, 2024-03-14 06:00

Food Chem. 2024 Mar 4;447:138931. doi: 10.1016/j.foodchem.2024.138931. Online ahead of print.

ABSTRACT

Gas sensors containing indicators have been widely used in meat freshness testing. However, concerns about the toxicity of indicators have prevented their commercialization. Here, we prepared three fluorescent sensors by complexing each flavonoid (fisetin, puerarin, daidzein) with a flexible film, forming a fluorescent sensor array. The fluorescent sensor array was used as a freshness indication label for packaged meat. Then, the images of the indication labels on the packaged meat under different freshness levels were collected by smartphones. A deep convolutional neural network (DCNN) model was built using the collected indicator label images and freshness labels as the dataset. Finally, the model was used to detect the freshness of meat samples, and the overall accuracy of the prediction model was as high as 97.1%. Unlike the TVB-N measurement, this method provides a nondestructive, real-time measurement of meat freshness.
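
A hedged sketch of the kind of DCNN classifier described above, fine-tuning a standard ImageNet backbone on photos of the indicator labels; the backbone choice and number of freshness classes are assumptions, not details from the paper:

```python
import torch.nn as nn
from torchvision import models

# Hypothetical setup: fine-tune a standard CNN backbone to map smartphone
# photos of the fluorescent indicator labels to freshness classes
# (e.g., fresh / sub-fresh / spoiled -- assumed levels).
n_classes = 3
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, n_classes)  # replace the head
criterion = nn.CrossEntropyLoss()
```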

PMID:38484548 | DOI:10.1016/j.foodchem.2024.138931

Categories: Literature Watch

Circadian assessment of heart failure using explainable deep learning and novel multi-parameter polar images

Thu, 2024-03-14 06:00

Comput Methods Programs Biomed. 2024 Mar 6;248:108107. doi: 10.1016/j.cmpb.2024.108107. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Heart failure (HF) is a multi-faceted and life-threatening syndrome that affects more than 64.3 million people worldwide. The current gold-standard screening technique, echocardiography, neglects cardiovascular information regulated by the circadian rhythm and does not incorporate knowledge from patient profiles. In this study, we propose a novel multi-parameter approach to assess heart failure using heart rate variability (HRV) and patient clinical information.

METHODS: In this approach, features from 24-hour HRV and clinical information were combined into a single polar image and fed to a 2D deep learning model to infer the HF condition. The edges of the polar image correspond to the temporal variation of different features, each of which carries information on the function of the heart, while the interior illustrates color-coded patient clinical information.
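
A rough illustration (not the authors' exact encoding) of how hourly features and clinical values can be rendered into a single polar image for a 2D network; the feature and clinical-variable choices here are placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt

hours = np.linspace(0, 2 * np.pi, 24, endpoint=False)
hrv_features = np.random.rand(3, 24)           # e.g., hourly SDNN, RMSSD, LF/HF
clinical = np.random.rand(4)                   # e.g., normalized age, BMI, ...

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for i, feat in enumerate(hrv_features):        # outer rings: hourly variation
    ax.bar(hours, feat, width=2 * np.pi / 24, bottom=2 + i, alpha=0.8)
ax.bar(np.linspace(0, 2 * np.pi, 4, endpoint=False), np.ones(4),
       width=np.pi / 2, bottom=0,
       color=plt.cm.viridis(clinical))         # interior: color-coded clinical info
ax.set_axis_off()
fig.savefig("polar_input.png", dpi=96)         # image fed to a 2D CNN
```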

RESULTS: Under a leave-one-subject-out cross-validation scheme and using 7,575 polar images from a multi-center cohort (American and Greek) of 303 coronary artery disease patients (median age: 58 years [50-65], median body mass index (BMI): 27.28 kg/m2 [24.91-29.41]), the model yielded mean values for the area under the receiver operating characteristics curve (AUC), sensitivity, specificity, normalized Matthews correlation coefficient (NMCC), and accuracy of 0.883, 90.68%, 95.19%, 0.93, and 92.62%, respectively. Moreover, interpretation of the model showed proper attention to key hourly intervals and clinical information for each HF stage.

CONCLUSIONS: The proposed approach could be a powerful early HF screening tool and a supplemental circadian enhancement to echocardiography which sets the basis for next-generation personalized healthcare.

PMID:38484409 | DOI:10.1016/j.cmpb.2024.108107

Categories: Literature Watch

Cross noise level PET denoising with continuous adversarial domain generalization

Thu, 2024-03-14 06:00

Phys Med Biol. 2024 Mar 14. doi: 10.1088/1361-6560/ad341a. Online ahead of print.

ABSTRACT

OBJECTIVE: Performing PET denoising within the image space proves effective in reducing the variance in PET images. In recent years, deep learning has demonstrated superior denoising performance, but models trained on a specific noise level typically fail to generalize well on different noise levels, due to inherent distribution shifts between inputs. The distribution shift usually results in bias in the denoised images. Our goal is to tackle such a problem using a domain generalization technique.
APPROACH: We propose to utilize the domain generalization technique with a novel feature space continuous discriminator (CD) for adversarial training, using the fraction of events as a continuous domain label. The core idea is to enforce the extraction of noise-level-invariant features, thereby minimizing the distribution divergence of the latent feature representations across continuous noise levels and making the model generalize to arbitrary noise levels. We created three sets of 10%, 13-22% (uniformly randomly selected), or 25% fractions of events from 97 $^{18}$F-MK6240 tau PET studies of 60 subjects. For each set, we generated 20 noise realizations. Training, validation, and testing were implemented using 1400, 120, and 420 pairs of 3D image volumes from the same or different sets.
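
One common way to realize adversarial training with a continuous domain label is a gradient reversal layer feeding a discriminator that regresses the fraction of events from the latent features; the sketch below illustrates this idea and is not necessarily the paper's exact CD architecture:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

# Continuous discriminator: regresses the continuous fraction-of-events label
# from the denoiser's latent features; the reversed gradient pushes the encoder
# toward noise-level-invariant representations.
disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def adversarial_loss(latent, noise_fraction, lam=0.1):
    # latent: (batch, 256); noise_fraction: (batch,) float in [0, 1]
    pred = disc(GradReverse.apply(latent, lam)).squeeze(-1)
    return nn.functional.mse_loss(pred, noise_fraction)
```
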
MAIN RESULTS: The proposed CD improves the denoising performance of our model trained on the 13-22% fraction set when testing on both the 10% and 25% fraction sets, measured by bias and standard deviation using full-count images as references. In addition, our CD method can improve the SSIM and PSNR consistently for Alzheimer-related regions and the whole brain.
SIGNIFICANCE: To our knowledge, this is the first attempt to alleviate the performance degradation in cross-noise-level denoising from the perspective of domain generalization. Our study is also a pioneering work on continuous domain generalization.

PMID:38484401 | DOI:10.1088/1361-6560/ad341a

Categories: Literature Watch

CRISPR-M: Predicting sgRNA off-target effect using a multi-view deep learning network

Thu, 2024-03-14 06:00

PLoS Comput Biol. 2024 Mar 14;20(3):e1011972. doi: 10.1371/journal.pcbi.1011972. Online ahead of print.

ABSTRACT

Using the CRISPR-Cas9 system to perform base substitutions at the target site is a typical technique for genome editing, with potential applications in gene therapy and agricultural productivity. When the CRISPR-Cas9 system uses guide RNA to direct the Cas9 endonuclease to the target site, the guide may misdirect it to a potential off-target site, resulting in unintended genome editing. Although several computational methods have been proposed to predict off-target effects, there is still room for improvement in off-target effect prediction capability. In this paper, we present an effective approach called CRISPR-M with a new encoding scheme and a novel multi-view deep learning model to predict the sgRNA off-target effects for target sites containing indels and mismatches. CRISPR-M takes advantage of convolutional neural networks and bidirectional long short-term memory recurrent neural networks to construct a three-branch network towards multi-views. Compared with existing methods, CRISPR-M demonstrates significant performance advantages on real-world datasets. Furthermore, experimental analysis of CRISPR-M under multiple metrics reveals its capability to extract features and validates its superiority on sgRNA off-target effect predictions.
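
An illustrative PyTorch sketch of a three-branch CNN + BiLSTM design of the kind described above; the specific views, one-hot channel count, and fusion head are assumptions, not CRISPR-M's published configuration:

```python
import torch
import torch.nn as nn

class MultiViewBranch(nn.Module):
    """One view: a 1D CNN over one-hot sequence encodings, then a BiLSTM."""
    def __init__(self, in_ch=4, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (batch, in_ch, seq_len)
        h = self.conv(x).transpose(1, 2)       # -> (batch, seq_len, 32)
        out, _ = self.lstm(h)
        return out[:, -1]                      # (batch, 2 * hidden)

# Three branches (e.g., target view, off-target view, pairwise view -- assumed)
# fused by a dense head into an off-target probability.
branches = nn.ModuleList(MultiViewBranch() for _ in range(3))
head = nn.Linear(3 * 128, 1)

def predict(views):                            # views: list of 3 input tensors
    fused = torch.cat([b(v) for b, v in zip(branches, views)], dim=-1)
    return torch.sigmoid(head(fused))
```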

PMID:38483980 | DOI:10.1371/journal.pcbi.1011972

Categories: Literature Watch

Deep learning in public health: Comparative predictive models for COVID-19 case forecasting

Thu, 2024-03-14 06:00

PLoS One. 2024 Mar 14;19(3):e0294289. doi: 10.1371/journal.pone.0294289. eCollection 2024.

ABSTRACT

The COVID-19 pandemic has had a significant impact on both the United Arab Emirates (UAE) and Malaysia, emphasizing the importance of developing accurate and reliable forecasting mechanisms to guide public health responses and policies. In this study, we compared several cutting-edge deep learning models, including Long Short-Term Memory (LSTM), bidirectional LSTM, Convolutional Neural Networks (CNN), hybrid CNN-LSTM, Multilayer Perceptrons, and Recurrent Neural Networks (RNN), to project COVID-19 cases in the aforementioned regions. These models were calibrated and evaluated using a comprehensive dataset that includes confirmed case counts, demographic data, and relevant socioeconomic factors. To enhance the performance of these models, Bayesian optimization techniques were employed. Subsequently, the models were re-evaluated to compare their effectiveness. Analytic approaches, both predictive and retrospective in nature, were used to interpret the data. Our primary objective was to determine the most effective model for predicting COVID-19 cases in the UAE and Malaysia. The findings indicate that the selected deep learning algorithms were proficient in forecasting COVID-19 cases, although their efficacy varied across different models. After a thorough evaluation, the model architectures most suitable for the specific conditions in the UAE and Malaysia were identified. Our study contributes significantly to the ongoing efforts to combat the COVID-19 pandemic, providing crucial insights into the application of sophisticated deep learning algorithms for the precise and timely forecasting of COVID-19 cases. These insights hold substantial value for shaping public health strategies, enabling authorities to develop targeted and evidence-based interventions to manage the spread of the virus and its impact on the populations of the UAE and Malaysia. The study confirms the usefulness of deep learning methodologies in efficiently processing complex datasets and generating reliable projections, a capability of great importance in healthcare and professional settings.
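
As a concrete reference point, a minimal univariate LSTM case forecaster looks like the sketch below; the study's models additionally used demographic and socioeconomic covariates plus Bayesian hyperparameter optimization, which are omitted here:

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, lookback=14):
    """Sliding windows: predict the next day's cases from the past `lookback` days."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class CaseLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, lookback, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1]).squeeze(-1)  # next-day case count
```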

PMID:38483948 | DOI:10.1371/journal.pone.0294289

Categories: Literature Watch

Deep learning-based fully automated grading system for dry eye disease severity

Thu, 2024-03-14 06:00

PLoS One. 2024 Mar 14;19(3):e0299776. doi: 10.1371/journal.pone.0299776. eCollection 2024.

ABSTRACT

There is an increasing need for an objective grading system to evaluate the severity of dry eye disease (DED). In this study, a fully automated deep learning-based system for the assessment of DED severity was developed. Corneal fluorescein staining (CFS) images of DED patients from one hospital for system development (n = 1400) and from another hospital for external validation (n = 94) were collected. Three experts graded the CFS images using NEI scale, and the median value was used as ground truth. The system was developed in three steps: (1) corneal segmentation, (2) CFS candidate region classification, and (3) estimation of NEI grades by CFS density map generation. Also, two images taken on different days in 50 eyes (100 images) were compared to evaluate the probability of improvement or deterioration. The Dice coefficient of the segmentation model was 0.962. The correlation between the system and the ground truth data was 0.868 (p<0.001) and 0.863 (p<0.001) for the internal and external validation datasets, respectively. The agreement rate for improvement or deterioration was 88% (44/50). The fully automated deep learning-based grading system for DED severity can evaluate the CFS score with high accuracy and thus may have potential for clinical application.
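
The three-step pipeline can be read as the following glue code; every callable here is a hypothetical placeholder for a stage described in the abstract, not an API from the study:

```python
def grade_dry_eye(cfs_image, segment_cornea, classify_candidates,
                  estimate_density, nei_grade_from_density):
    """Hypothetical glue for the three-step system; the four callables are
    placeholders for the stages described above, not the paper's functions."""
    cornea_mask = segment_cornea(cfs_image)                    # step 1: cornea
    candidates = classify_candidates(cfs_image, cornea_mask)   # step 2: CFS regions
    density_map = estimate_density(cfs_image, candidates)      # step 3: density map
    return nei_grade_from_density(density_map)                 # NEI-scale grade
```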

PMID:38483911 | DOI:10.1371/journal.pone.0299776

Categories: Literature Watch

PTransIPs: Identification of Phosphorylation Sites Enhanced by Protein PLM Embeddings

Thu, 2024-03-14 06:00

IEEE J Biomed Health Inform. 2024 Mar 14;PP. doi: 10.1109/JBHI.2024.3377362. Online ahead of print.

ABSTRACT

Phosphorylation is pivotal in numerous fundamental cellular processes and plays a significant role in the onset and progression of various diseases. The accurate identification of these phosphorylation sites is crucial for unraveling the molecular mechanisms within cells and during viral infections, potentially leading to the discovery of novel therapeutic targets. In this study, we develop PTransIPs, a new deep learning framework for the identification of phosphorylation sites. Independent testing results demonstrate that PTransIPs outperforms existing state-of-the-art (SOTA) methods, achieving AUCs of 0.9232 and 0.9660 for the identification of phosphorylated S/T and Y sites, respectively. PTransIPs contributes in three aspects. 1) PTransIPs is the first to apply protein pre-trained language model (PLM) embeddings to this task. It utilizes ProtTrans and EMBER2 to extract sequence and structure embeddings, respectively, as additional inputs into the model, effectively addressing issues of dataset size and overfitting, thus enhancing model performance; 2) PTransIPs is based on the Transformer architecture, optimized through the integration of convolutional neural networks and the TIM loss function, providing practical insights for model design and training; 3) The encoding of amino acids in PTransIPs enables it to serve as a universal framework for other peptide bioactivity tasks, as shown by its excellent performance in the extended experiments of this paper. Our code, data and models are publicly available at https://github.com/StatXzy7/PTransIPs.
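
For orientation, per-residue ProtTrans embeddings can be extracted with the HuggingFace ProtT5 encoder roughly as follows; the exact checkpoint and pooling used by PTransIPs may differ:

```python
import re
import torch
from transformers import T5EncoderModel, T5Tokenizer

# ProtT5 encoder checkpoint (a common ProtTrans choice; assumed, not confirmed
# to be the one used by PTransIPs).
name = "Rostlab/prot_t5_xl_half_uniref50-enc"
tokenizer = T5Tokenizer.from_pretrained(name, do_lower_case=False)
model = T5EncoderModel.from_pretrained(name).eval()

seq = "MKTAYIAKQR"                               # peptide around a candidate site
spaced = " ".join(re.sub(r"[UZOB]", "X", seq))   # ProtT5 expects spaced residues
ids = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    emb = model(**ids).last_hidden_state         # (1, len+1, 1024) embeddings
```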

PMID:38483806 | DOI:10.1109/JBHI.2024.3377362

Categories: Literature Watch

Cross-Attention Enhanced Pyramid Multi-Scale Networks for Sensor-based Human Activity Recognition

Thu, 2024-03-14 06:00

IEEE J Biomed Health Inform. 2024 Mar 14;PP. doi: 10.1109/JBHI.2024.3377353. Online ahead of print.

ABSTRACT

Human Activity Recognition (HAR) has recently attracted widespread attention, with the effective application of this technology helping people in areas such as healthcare, smart homes, and gait analysis. Deep learning methods have shown remarkable performance in HAR. A pivotal challenge is the trade-off between recognition accuracy and computational efficiency, especially in resource-constrained mobile devices. This challenge necessitates the development of models that enhance feature representation capabilities without imposing additional computational burdens. Addressing this, we introduce a novel HAR model leveraging deep learning, ingeniously designed to navigate the accuracy-efficiency trade-off. The model comprises two innovative modules: 1) Pyramid Multi-scale Convolutional Network (PMCN), which is designed with a symmetric structure and is capable of obtaining a rich receptive field at a finer level through its multiscale representation capability; 2) Cross-Attention Mechanism, which establishes interrelationships among sensor dimensions, temporal dimensions, and channel dimensions, and effectively enhances useful information while suppressing irrelevant data. The proposed model is rigorously evaluated across four diverse datasets: UCI, WISDM, PAMAP2, and OPPORTUNITY. Additional ablation and comparative studies are conducted to comprehensively assess the performance of the model. Experimental results demonstrate that the proposed model achieves superior activity recognition accuracy while maintaining low computational overhead.
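
A minimal sketch of a pyramid multi-scale convolution over the sensor time axis, in the spirit of PMCN; the kernel sizes and channel split are illustrative assumptions (out_ch is assumed divisible by the number of branches):

```python
import torch
import torch.nn as nn

class PyramidMultiScaleConv(nn.Module):
    """Parallel convolutions with growing, symmetric kernels approximate a
    multi-scale receptive-field pyramid over the time axis (illustrative)."""
    def __init__(self, in_ch, out_ch, kernels=(3, 5, 7, 9)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch // len(kernels), k, padding=k // 2)
            for k in kernels
        )

    def forward(self, x):                    # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)
```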

PMID:38483804 | DOI:10.1109/JBHI.2024.3377353

Categories: Literature Watch

Long-term Regional Influenza-like-illness Forecasting Using Exogenous Data

Thu, 2024-03-14 06:00

IEEE J Biomed Health Inform. 2024 Mar 14;PP. doi: 10.1109/JBHI.2024.3377529. Online ahead of print.

ABSTRACT

Disease forecasting is a longstanding problem for the research community, which aims at informing and improving decisions with the best available evidence. Specifically, interest in respiratory disease forecasting has dramatically increased since the beginning of the coronavirus pandemic, rendering the accurate prediction of influenza-like-illness (ILI) a critical task. Although methods for short-term ILI forecasting and nowcasting have achieved good accuracy, their performance worsens for long-term ILI forecasts. Machine learning models have outperformed conventional forecasting approaches by enabling the use of diverse exogenous data sources, such as social media, internet users' search query logs, and climate data. However, the most recent deep learning ILI forecasting models use only historical occurrence data while achieving state-of-the-art results. Inspired by recent deep neural network architectures in time series forecasting, this work proposes the Regional Influenza-Like-Illness Forecasting (ReILIF) method for regional long-term ILI prediction. The proposed architecture takes advantage of diverse exogenous data, namely meteorological and population data, introducing an efficient intermediate fusion mechanism to combine the different types of information with the aim of capturing the variations of ILI from various views. The efficacy of the proposed approach compared to state-of-the-art ILI forecasting methods is confirmed by an extensive experimental study following standard evaluation measures.
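
The intermediate-fusion idea, one encoder per data source with fusion at the latent level, can be sketched as follows; this is a generic illustration, not the exact ReILIF architecture:

```python
import torch
import torch.nn as nn

class IntermediateFusionForecaster(nn.Module):
    """Separate encoders per source, fused at the latent level (illustrative)."""
    def __init__(self, hidden=32, horizon=4, n_exo=8):
        super().__init__()
        self.ili_enc = nn.GRU(1, hidden, batch_first=True)      # ILI history
        self.exo_enc = nn.GRU(n_exo, hidden, batch_first=True)  # weather, population
        self.head = nn.Linear(2 * hidden, horizon)              # multi-week forecast

    def forward(self, ili, exo):             # (batch, T, 1) and (batch, T, n_exo)
        _, h_ili = self.ili_enc(ili)
        _, h_exo = self.exo_enc(exo)
        fused = torch.cat([h_ili[-1], h_exo[-1]], dim=-1)       # latent fusion
        return self.head(fused)
```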

PMID:38483802 | DOI:10.1109/JBHI.2024.3377529

Categories: Literature Watch

Advancing brain tumor classification through MTAP model: an innovative approach in medical diagnostics

Thu, 2024-03-14 06:00

Med Biol Eng Comput. 2024 Mar 14. doi: 10.1007/s11517-024-03064-5. Online ahead of print.

ABSTRACT

The early diagnosis of brain tumors is critical in healthcare, owing to the potentially life-threatening repercussions that unstable growths within the brain can pose to individuals. The accurate and early diagnosis of brain tumors enables prompt medical intervention. In this context, we have established a new model called MTAP to enable a highly accurate diagnosis of brain tumors. The MTAP model addresses dataset class imbalance by utilizing the ADASYN method, employs a network pruning technique to remove unnecessary weights and nodes from the neural network, and incorporates the Avg-TopK pooling method for enhanced feature extraction. The primary goal of our research is to enhance the accuracy of brain tumor type detection, a critical aspect of medical imaging and diagnostics. The MTAP model introduces a novel classification strategy for brain tumors, leveraging the strength of deep learning methods and novel model refinement techniques. Following comprehensive experimental studies and meticulous design, the MTAP model has achieved a state-of-the-art accuracy of 99.69%. Our findings indicate that the use of deep learning and innovative model refinement techniques shows promise in facilitating the early detection of brain tumors. Analysis of the model's heat map revealed a notable focus on regions encompassing the parietal and temporal lobes.
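
A plausible reading of "Avg-TopK pooling" is averaging the K largest activations in each pooling window, sketched below; the paper's exact definition should be checked against the original. Class imbalance handling with ADASYN is available in imbalanced-learn:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Imbalance can be addressed before training, e.g. with imbalanced-learn:
# from imblearn.over_sampling import ADASYN
# X_res, y_res = ADASYN().fit_resample(X, y)

class AvgTopKPool2d(nn.Module):
    """Pooling that averages the K largest activations per window (assumed
    interpretation of Avg-TopK; H and W must be divisible by kernel_size)."""
    def __init__(self, kernel_size=2, k=2):
        super().__init__()
        self.ks, self.k = kernel_size, k

    def forward(self, x):                               # x: (B, C, H, W)
        patches = F.unfold(x, self.ks, stride=self.ks)  # (B, C*ks*ks, L)
        B, _, L = patches.shape
        patches = patches.view(B, x.size(1), self.ks * self.ks, L)
        topk = patches.topk(self.k, dim=2).values.mean(dim=2)   # (B, C, L)
        h, w = x.size(2) // self.ks, x.size(3) // self.ks
        return topk.view(B, x.size(1), h, w)
```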

PMID:38483711 | DOI:10.1007/s11517-024-03064-5

Categories: Literature Watch

Test-time augmentation with synthetic data addresses distribution shifts in spectral imaging

Thu, 2024-03-14 06:00

Int J Comput Assist Radiol Surg. 2024 Mar 14. doi: 10.1007/s11548-024-03085-3. Online ahead of print.

ABSTRACT

PURPOSE: Surgical scene segmentation is crucial for providing context-aware surgical assistance. Recent studies highlight the significant advantages of hyperspectral imaging (HSI) over traditional RGB data in enhancing segmentation performance. Nevertheless, current HSI datasets remain limited and do not capture the full range of tissue variations encountered clinically.

METHODS: Based on 615 hyperspectral images from 16 pigs, featuring porcine organs in different perfusion states, we carry out an exploration of distribution shifts in spectral imaging caused by perfusion alterations. We further introduce a novel strategy to mitigate such distribution shifts, utilizing synthetic data for test-time augmentation.
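
The generic test-time augmentation mechanism, averaging predictions over several transformed copies of an input, is sketched below; the paper's contribution is to draw those augmented views from synthetic data, which this sketch does not reproduce:

```python
import torch

def tta_predict(model, image, augmentations):
    """Average class probabilities over augmented copies of the input.
    `augmentations` is a list of callables (e.g., torchvision transforms);
    `model` is assumed to return per-class logits."""
    model.eval()
    with torch.no_grad():
        preds = [model(aug(image)) for aug in augmentations]
    return torch.stack(preds).softmax(dim=-1).mean(dim=0)
```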

RESULTS: The effect of perfusion changes on state-of-the-art (SOA) segmentation networks depended on the organ and the specific perfusion alteration induced. In the case of the kidney, we observed a performance decline of up to 93% when applying an SOA network under ischemic conditions. Our method improved on the SOA by up to 4.6 times.

CONCLUSION: Given its potential wide-ranging relevance to diverse pathologies, our approach may serve as a pivotal tool to enhance neural network generalization within the realm of spectral imaging.

PMID:38483702 | DOI:10.1007/s11548-024-03085-3

Categories: Literature Watch

Checklist for Reproducibility of Deep Learning in Medical Imaging

Thu, 2024-03-14 06:00

J Imaging Inform Med. 2024 Mar 14. doi: 10.1007/s10278-024-01065-2. Online ahead of print.

ABSTRACT

The application of deep learning (DL) in medicine introduces transformative tools with the potential to enhance prognosis, diagnosis, and treatment planning. However, ensuring transparent documentation is essential for researchers to enhance reproducibility and refine techniques. Our study addresses the unique challenges presented by DL in medical imaging by developing a comprehensive checklist using the Delphi method to enhance reproducibility and reliability in this dynamic field. We compiled a preliminary checklist based on a comprehensive review of existing checklists and relevant literature. A panel of 11 experts in medical imaging and DL assessed these items using Likert scales, with two survey rounds to refine responses and gauge consensus. We also employed the content validity ratio, with a cutoff of 0.59, to determine item face and content validity. Round 1 used a 27-item questionnaire; 12 items demonstrated high consensus for face and content validity and were therefore excluded from round 2. Round 2 involved refining the checklist, resulting in an additional 17 items. In the final round, 3 items were deemed non-essential or infeasible, while 2 newly suggested items received unanimous agreement for inclusion, resulting in a final 26-item DL model reporting checklist derived from the Delphi process. The 26-item checklist facilitates the reproducible reporting of DL tools and enables scientists to replicate the study's results.
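
The content validity ratio referenced above follows Lawshe's formula, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item essential and N is the panel size; a quick check against the study's 0.59 cutoff:

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# With 11 panelists, an item rated essential by 10 experts gives
# CVR = (10 - 5.5) / 5.5 ~= 0.82, above the 0.59 cutoff used in the study.
print(content_validity_ratio(10, 11))
```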

PMID:38483694 | DOI:10.1007/s10278-024-01065-2

Categories: Literature Watch

AI as a Medical Device for Ophthalmic Imaging in Europe, Australia, and the United States: Protocol for a Systematic Scoping Review of Regulated Devices

Thu, 2024-03-14 06:00

JMIR Res Protoc. 2024 Mar 14;13:e52602. doi: 10.2196/52602.

ABSTRACT

BACKGROUND: Artificial intelligence as a medical device (AIaMD) has the potential to transform many aspects of ophthalmic care, such as improving accuracy and speed of diagnosis, addressing capacity issues in high-volume areas such as screening, and detecting novel biomarkers of systemic disease in the eye (oculomics). In order to ensure that such tools are safe for the target population and achieve their intended purpose, it is important that these AIaMD have adequate clinical evaluation to support any regulatory decision. Currently, the evidential requirements for regulatory approval are less clear for AIaMD compared to more established interventions such as drugs or medical devices. There is therefore value in understanding the level of evidence that underpins AIaMD currently on the market, as a step toward identifying what the best practices might be in this area. In this systematic scoping review, we will focus on AIaMD that contributes to clinical decision-making (relating to screening, diagnosis, prognosis, and treatment) in the context of ophthalmic imaging.

OBJECTIVE: This study aims to identify regulator-approved AIaMD for ophthalmic imaging in Europe, Australia, and the United States; report the characteristics of these devices and their regulatory approvals; and report the available evidence underpinning these AIaMD.

METHODS: The Food and Drug Administration (United States), the Australian Register of Therapeutic Goods (Australia), the Medicines and Healthcare products Regulatory Agency (United Kingdom), and the European Database on Medical Devices (European Union) regulatory databases will be searched for ophthalmic imaging AIaMD through a snowballing approach. PubMed and clinical trial registries will be systematically searched, and manufacturers will be directly contacted for studies investigating the effectiveness of eligible AIaMD. Preliminary regulatory database searches, evidence searches, screening, data extraction, and methodological quality assessment will be undertaken by 2 independent review authors and arbitrated by a third at each stage of the process.

RESULTS: Preliminary searches were conducted in February 2023. Data extraction, data synthesis, and assessment of methodological quality commenced in October 2023. The review is on track to be completed and submitted for peer review by April 2024.

CONCLUSIONS: This systematic review will provide greater clarity on ophthalmic imaging AIaMD that have achieved regulatory approval as well as the evidence that underpins them. This should help adopters understand the range of tools available and whether they can be safely incorporated into their clinical workflow, and it should also support developers in navigating regulatory approval more efficiently.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/52602.

PMID:38483456 | DOI:10.2196/52602

Categories: Literature Watch

Improving Resolution of Panoramic Radiographs: Super Resolution Concept

Thu, 2024-03-14 06:00

Dentomaxillofac Radiol. 2024 Mar 14:twae009. doi: 10.1093/dmfr/twae009. Online ahead of print.

NO ABSTRACT

PMID:38483289 | DOI:10.1093/dmfr/twae009

Categories: Literature Watch

MINDG: A Drug-Target Interaction Prediction Method Based on an Integrated Learning Algorithm

Thu, 2024-03-14 06:00

Bioinformatics. 2024 Mar 14:btae147. doi: 10.1093/bioinformatics/btae147. Online ahead of print.

ABSTRACT

MOTIVATION: Drug-target interaction (DTI) prediction refers to predicting whether a given drug molecule will bind to a specific target and thus exert a targeted therapeutic effect. Although intelligent computational approaches for drug target prediction have received much attention and made many advances, the task remains challenging and requires further research. The main challenges are as follows: (1) Most graph neural network-based methods consider only the information of the first-order neighboring nodes (drug and target) in the graph, without learning deeper and richer structural features from higher-order neighboring nodes. (2) Existing methods do not consider both the sequence and structural features of drugs and targets; such methods are developed independently of each other and cannot combine the advantages of sequence and structural features to improve the interactive learning effect.

RESULTS: To address the above challenges, a Multi-view Integrated learning Network that integrates Deep learning and Graph Learning (MINDG) is proposed in this study, which consists of the following parts: (1) a mixed deep network is used to extract sequence features of drugs and targets; (2) a higher-order graph attention convolutional network is proposed to better extract and capture structural features; and (3) a multi-view adaptive integrated decision module is used to improve and complement the initial prediction results of the above two networks to enhance the prediction performance. We evaluate MINDG on two datasets and show that it improves DTI prediction performance compared to state-of-the-art baselines.
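
One simple way to expose higher-order neighborhoods to an attention layer is to build a reachability mask from powers of the adjacency matrix, as sketched below; this illustrates the idea only and is not MINDG's exact formulation:

```python
import torch

def higher_order_neighbors(adj: torch.Tensor, order: int = 2) -> torch.Tensor:
    """Binary mask of nodes reachable within `order` hops, from a dense float
    adjacency matrix; an attention layer can attend over this wider set."""
    reach = torch.eye(adj.size(0))
    power = torch.eye(adj.size(0))
    for _ in range(order):
        power = power @ adj          # k-hop connectivity
        reach = reach + power
    return (reach > 0).float()
```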

AVAILABILITY: https://github.com/jnuaipr/MINDG.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38483285 | DOI:10.1093/bioinformatics/btae147

Categories: Literature Watch

SuperCUT, an unsupervised multimodal image registration with deep learning for biomedical microscopy

Thu, 2024-03-14 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae029. doi: 10.1093/bib/bbae029.

ABSTRACT

Numerous imaging techniques are available for observing and interrogating biological samples, and several of them can be used consecutively to enable correlative analysis of different image modalities with varying resolutions and the inclusion of structural or molecular information. Achieving accurate registration of multimodal images is essential for the correlative analysis process, but it remains a challenging computer vision task with no widely accepted solution. Moreover, supervised registration methods require annotated data produced by experts, which are limited. To address this challenge, we propose a general unsupervised pipeline for multimodal image registration using deep learning. We provide a comprehensive evaluation of the proposed pipeline versus the current state-of-the-art image registration and style transfer methods on four types of biological problems utilizing different microscopy modalities. We found that style transfer of modality domains paired with fully unsupervised training leads to image registration accuracy comparable to that of supervised methods and, most importantly, does not require human intervention.
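
Conceptually, the style-transfer-then-register idea reduces a multimodal problem to a mono-modal one; in the sketch below, style_transfer_model and monomodal_register are hypothetical placeholders, and the registration step is assumed to return a callable spatial transform:

```python
def register_multimodal(fixed_img, moving_img,
                        style_transfer_model, monomodal_register):
    """Hypothetical two-stage pipeline: translate the moving image into the
    fixed image's modality, register in that common domain, then apply the
    recovered transform to the original moving image."""
    moving_as_fixed = style_transfer_model(moving_img)      # modality translation
    transform = monomodal_register(fixed_img, moving_as_fixed)
    return transform(moving_img)                            # warp original image
```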

PMID:38483256 | DOI:10.1093/bib/bbae029

Categories: Literature Watch

Deep learning in spatially resolved transcriptomics: a comprehensive technical view

Thu, 2024-03-14 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae082. doi: 10.1093/bib/bbae082.

ABSTRACT

Spatially resolved transcriptomics (SRT) is a pioneering method for simultaneously studying morphological contexts and gene expression at single-cell precision. Data emerging from SRT are multifaceted, presenting researchers with intricate gene expression matrices, precise spatial details and comprehensive histology visuals. Such rich and intricate datasets, unfortunately, render many conventional methods like traditional machine learning and statistical models ineffective. The unique challenges posed by the specialized nature of SRT data have led the scientific community to explore more sophisticated analytical avenues. Recent trends indicate an increasing reliance on deep learning algorithms, especially in areas such as spatial clustering, identification of spatially variable genes and data alignment tasks. In this manuscript, we provide a rigorous critique of these advanced deep learning methodologies, probing into their merits, limitations and avenues for further refinement. Our in-depth analysis underscores that while the recent innovations in deep learning tailored for SRT have been promising, there remains a substantial potential for enhancement. A crucial area that demands attention is the development of models that can incorporate intricate biological nuances, such as phylogeny-aware processing or in-depth analysis of minuscule histology image segments. Furthermore, addressing challenges like the elimination of batch effects, perfecting data normalization techniques and countering the overdispersion and zero inflation patterns seen in gene expression is pivotal. To support the broader scientific community in their SRT endeavors, we have meticulously assembled a comprehensive directory of readily accessible SRT databases, hoping to serve as a foundation for future research initiatives.

PMID:38483255 | DOI:10.1093/bib/bbae082

Categories: Literature Watch

Measurement Variability of Same-Day CT Quantification of Interstitial Lung Disease: A Multicenter Prospective Study

Thu, 2024-03-14 06:00

Radiol Cardiothorac Imaging. 2024 Apr;6(2):e230287. doi: 10.1148/ryct.230287.

ABSTRACT

PURPOSE: To investigate quantitative CT (QCT) measurement variability in interstitial lung disease (ILD) on the basis of two same-day CT scans.

MATERIALS AND METHODS: Participants with ILD were enrolled in this multicenter prospective study between March and October 2022. Participants underwent two same-day CT scans at an interval of a few minutes. Deep learning-based texture analysis software was used to segment ILD features. Fibrosis extent was defined as the sum of reticular opacity and honeycombing cysts. Measurement variability between scans was assessed with Bland-Altman analyses for absolute and relative differences with 95% limits of agreement (LOA). The contribution of fibrosis extent to variability was analyzed using a multivariable linear mixed-effects model while adjusting for lung volume. Eight readers assessed ILD fibrosis stability with and without QCT information for 30 randomly selected samples.

RESULTS: Sixty-five participants were enrolled in this study (mean age, 68.7 years ± 10 [SD]; 47 [72%] men, 18 [28%] women). Between the two same-day CT scans, the 95% LOA for the mean absolute and relative differences of quantitative fibrosis extent were -0.9% to 1.0% and -14.8% to 16.1%, respectively. However, these variabilities increased to 95% LOA of -11.3% to 3.9% and -123.1% to 18.4% between CT scans with different reconstruction parameters. Multivariable analysis showed that absolute differences were not associated with the baseline extent of fibrosis (P = .09), but the relative differences were negatively associated (β = -0.252, P < .001). The QCT results increased readers' specificity in interpreting ILD fibrosis stability (91.7% vs 94.6%, P = .02).

CONCLUSION: The absolute QCT measurement variability of fibrosis extent in ILD was 1% in same-day CT scans.

Keywords: CT, CT-Quantitative, Thorax, Lung, Lung Diseases, Interstitial, Pulmonary Fibrosis, Diagnosis, Computer Assisted, Diagnostic Imaging. Supplemental material is available for this article. © RSNA, 2024.
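
The 95% limits of agreement quoted above follow the standard Bland-Altman definition (mean difference ± 1.96 SD of the paired differences); a minimal sketch:

```python
import numpy as np

def bland_altman_loa(scan1: np.ndarray, scan2: np.ndarray):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD) between
    paired measurements, as used for the same-day scans above."""
    diff = scan1 - scan2
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)
```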

PMID:38483245 | DOI:10.1148/ryct.230287

Categories: Literature Watch
