Deep learning

Integrating pharmacophore model and deep learning for activity prediction of molecules with BRCA1 gene

Wed, 2024-04-03 06:00

J Bioinform Comput Biol. 2024 Feb;22(1):2450003. doi: 10.1142/S0219720024500033.

ABSTRACT

In this paper, we propose a novel approach for predicting the activity/inactivity of molecules with the BRCA1 gene by combining pharmacophore modeling and deep learning techniques. Initially, we generated 3D pharmacophore fingerprints using a pharmacophore model, which captures the essential features and spatial arrangements critical for biological activity. These fingerprints served as informative representations of the molecular structures. Next, we employed deep learning algorithms to train a predictive model using the generated pharmacophore fingerprints. The deep learning model was designed to learn complex patterns and relationships between the pharmacophore features and the corresponding activity/inactivity labels of the molecules. By utilizing this integrated approach, we aimed to enhance the accuracy and efficiency of activity prediction. To validate the effectiveness of our approach, we conducted experiments using a dataset of known molecules with BRCA1 gene activity/inactivity from diverse sources. Our results demonstrated promising predictive performance, indicating the successful integration of pharmacophore modeling and deep learning. Furthermore, we utilized the trained model to predict the activity/inactivity of unknown molecules extracted from the ChEMBL database. The predictions obtained from the ChEMBL database were assessed and compared against experimentally determined values to evaluate the reliability and generalizability of our model. Overall, our proposed approach showcased significant potential in accurately predicting the activity/inactivity of molecules with the BRCA1 gene, thus enabling the identification of potential candidates for further investigation in drug discovery and development processes.
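
Reduced to its bare bones, the pipeline above is a classifier trained on fixed-length fingerprint vectors. As an illustrative sketch only (the paper uses 3D pharmacophore fingerprints and a deep network; the toy bit-vectors, linear model, and hyperparameters below are hypothetical stand-ins), a minimal fingerprint-to-activity classifier might look like:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Train a logistic-regression activity classifier on bit-vector
    fingerprints with plain stochastic gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. the logit
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x, threshold=0.5):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return int(1.0 / (1.0 + math.exp(-z)) >= threshold)

# Toy data: bit 0 marks the "active" pharmacophore feature.
X = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
preds = [predict(w, b, x) for x in X]
```

In practice each bit would encode a pharmacophore feature pattern, and the linear model would be replaced by the deep architecture the abstract describes.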

PMID:38567386 | DOI:10.1142/S0219720024500033

Categories: Literature Watch

Identification of plant microRNAs using convolutional neural network

Wed, 2024-04-03 06:00

Front Plant Sci. 2024 Mar 19;15:1330854. doi: 10.3389/fpls.2024.1330854. eCollection 2024.

ABSTRACT

MicroRNAs (miRNAs) are of significance in tuning and buffering gene expression. Despite abundant analysis tools that have been developed in the last two decades, plant miRNA identification from next-generation sequencing (NGS) data remains challenging. Here, we show that we can train a convolutional neural network to accurately identify plant miRNAs from NGS data. Based on our methods, we also present a user-friendly pure Java-based software package called Small RNA-related Intelligent and Convenient Analysis Tools (SRICATs). SRICATs encompasses all the necessary steps for plant miRNA analysis. Our results indicate that SRICATs outperforms currently popular software tools on the test data from five plant species. For non-commercial users, SRICATs is freely available at https://sourceforge.net/projects/sricats.
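
Before any CNN can score candidate sequences, reads must be converted into numeric tensors. A common first step, shown here as a hedged sketch (the encoding scheme SRICATs actually uses is not described in the abstract), is one-hot encoding over the RNA alphabet:

```python
def one_hot_rna(seq, alphabet="ACGU"):
    """Encode an RNA sequence as a list of one-hot vectors, the usual
    input format for a 1D convolutional network."""
    index = {base: i for i, base in enumerate(alphabet)}
    encoded = []
    for base in seq.upper():
        vec = [0] * len(alphabet)
        if base in index:  # unknown bases (e.g. N) stay all-zero
            vec[index[base]] = 1
        encoded.append(vec)
    return encoded

mat = one_hot_rna("ACGU")
```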

PMID:38567128 | PMC:PMC10985208 | DOI:10.3389/fpls.2024.1330854

Categories: Literature Watch

Deep learning-based identification of esophageal cancer subtypes through analysis of high-resolution histopathology images

Wed, 2024-04-03 06:00

Front Mol Biosci. 2024 Mar 19;11:1346242. doi: 10.3389/fmolb.2024.1346242. eCollection 2024.

ABSTRACT

Esophageal cancer (EC) remains a significant health challenge globally, with increasing incidence and high mortality rates. Despite advances in treatment, there remains a need for improved diagnostic methods and understanding of disease progression. This study addresses the significant challenges in the automatic classification of EC, particularly in distinguishing its primary subtypes, adenocarcinoma and squamous cell carcinoma, using histopathology images. Traditional histopathological diagnosis, while the gold standard, is subject to subjectivity and human error and imposes a substantial burden on pathologists. In response to these challenges, this study proposes a binary classification system for detecting EC subtypes. The system leverages deep learning techniques and tissue-level labels for enhanced accuracy. We utilized 59 high-resolution histopathological images from The Cancer Genome Atlas (TCGA) Esophageal Carcinoma dataset (TCGA-ESCA). These images were preprocessed, segmented into patches, and analyzed using a pre-trained ResNet101 model for feature extraction. For classification, we employed five machine learning classifiers, namely Support Vector Classifier (SVC), Logistic Regression (LR), Decision Tree (DT), AdaBoost (AD), and Random Forest (RF), as well as a Feed-Forward Neural Network (FFNN). The classifiers were evaluated based on their prediction accuracy on the test dataset, yielding results of 0.88 (SVC and LR), 0.64 (DT and AD), 0.82 (RF), and 0.94 (FFNN). Notably, the FFNN classifier achieved the highest Area Under the Curve (AUC) score of 0.92, indicating its superior performance, followed closely by SVC and LR with a score of 0.87. The suggested approach holds promise as a decision-support tool for pathologists, particularly in regions with limited resources and expertise. The timely and precise detection of EC subtypes through this system can substantially enhance the likelihood of successful treatment, ultimately leading to reduced mortality rates in patients with this aggressive cancer.
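
The patching step described above can be sketched in a few lines. This is a generic illustration, not the authors' preprocessing code; the 4×4 toy image and patch size are hypothetical:

```python
def extract_patches(image, patch):
    """Split an HxW image (a list of rows) into non-overlapping
    patch x patch tiles, discarding incomplete border tiles."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "slide"
tiles = extract_patches(img, 2)
```

Each tile would then be passed through the pre-trained feature extractor, with the resulting feature vectors fed to the downstream classifiers.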

PMID:38567100 | PMC:PMC10985197 | DOI:10.3389/fmolb.2024.1346242

Categories: Literature Watch

A cost-sensitive deep neural network-based prediction model for the mortality in acute myocardial infarction patients with hypertension on imbalanced data

Wed, 2024-04-03 06:00

Front Cardiovasc Med. 2024 Mar 19;11:1276608. doi: 10.3389/fcvm.2024.1276608. eCollection 2024.

ABSTRACT

BACKGROUND AND OBJECTIVES: Hypertension is one of the most serious risk factors and the leading cause of mortality in patients with cardiovascular diseases (CVDs). It is necessary to accurately predict the mortality of patients suffering from CVDs with hypertension. Therefore, this paper proposes a novel cost-sensitive deep neural network (CSDNN)-based mortality prediction model for out-of-hospital acute myocardial infarction (AMI) patients with hypertension on imbalanced data.

METHODS: The synopsis of our research is as follows. First, the experimental data is extracted from the Korea Acute Myocardial Infarction Registry-National Institutes of Health (KAMIR-NIH) and preprocessed with several approaches. Then the imbalanced experimental dataset is divided into training data (80%) and test data (20%). After that, we design the proposed CSDNN-based mortality prediction model, which can solve the skewed class distribution between the majority and minority classes in the training data. The threshold moving technique is also employed to enhance the performance of the proposed model. Finally, we evaluate the performance of the proposed model using the test data and compare it with other commonly used machine learning (ML) and data sampling-based ensemble models. Moreover, the hyperparameters of all models are optimized through random search strategies with a 5-fold cross-validation approach.
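
The two key ideas here, cost-sensitive weighting and threshold moving, can be illustrated independently of any network. The class weights and decision cut-off below are hypothetical, chosen only to show the mechanics:

```python
import math

def weighted_log_loss(p, y, w_pos=5.0, w_neg=1.0):
    """Cost-sensitive cross-entropy: errors on the rare positive
    (mortality) class are penalised w_pos/w_neg times more heavily."""
    eps = 1e-12
    w = w_pos if y == 1 else w_neg
    return -w * (y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def classify(p, threshold=0.2):
    """Threshold moving: lower the decision cut-off below 0.5 so that
    minority-class cases are flagged more readily."""
    return int(p >= threshold)

loss_fn = weighted_log_loss(0.1, 1)  # confident miss on a positive case
loss_fp = weighted_log_loss(0.9, 0)  # equally confident miss on a negative
flag = classify(0.3)                 # would be 0 under the default 0.5 cut-off
```

With the weights above, the missed positive costs five times more than the symmetric missed negative, which is what pushes the model to attend to the minority class.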

RESULTS AND DISCUSSION: The proposed CSDNN model with the threshold moving technique yielded the best results on the imbalanced data. It also outperformed the best ML model and the classic data sampling-based ensemble model, with AUC improvements of 2.58% and 2.55%, respectively. The model aids decision-making and offers precise mortality prediction for AMI patients with hypertension.

PMID:38566962 | PMC:PMC10986180 | DOI:10.3389/fcvm.2024.1276608

Categories: Literature Watch

Prediction of tissue outcome in acute ischemic stroke based on single-phase CT angiography at admission

Wed, 2024-04-03 06:00

Front Neurol. 2024 Mar 19;15:1330497. doi: 10.3389/fneur.2024.1330497. eCollection 2024.

ABSTRACT

INTRODUCTION: In acute ischemic stroke, prediction of the tissue outcome after reperfusion can be used to identify patients who might benefit from mechanical thrombectomy (MT). The aim of this work was to develop a deep learning model that can predict the follow-up infarct location and extent exclusively based on acute single-phase computed tomography angiography (CTA) datasets. In comparison to CT perfusion (CTP), CTA imaging is more widely available, less prone to artifacts, and the established standard of care in acute stroke imaging protocols. Furthermore, recent RCTs have shown that patients with large established infarctions also benefit from MT, even though such patients might not have been selected for MT based on CTP core/penumbra mismatch analysis.

METHODS: All patients with acute large vessel occlusion of the anterior circulation treated at our institution between 12/2015 and 12/2020 were screened (N = 404) and 238 patients undergoing MT with successful reperfusion were included for final analysis. Ground truth infarct lesions were segmented on 24 h follow-up CT scans. Pre-processed CTA images were used as input for a U-Net-based convolutional neural network trained for lesion prediction, enhanced with a spatial and channel-wise squeeze-and-excitation block. Post-processing was applied to remove small predicted lesion components. The model was evaluated using a 5-fold cross-validation and a separate test set with Dice similarity coefficient (DSC) as the primary metric and average volume error as the secondary metric.

RESULTS: The mean ± standard deviation test set DSC over all folds after post-processing was 0.35 ± 0.2, and the mean test set average volume error was 11.5 mL. Performance was relatively uniform across models: the best model by DSC achieved a score of 0.37 ± 0.2 after post-processing, and the best model by average volume error yielded 3.9 mL.
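
The primary metric here, the Dice similarity coefficient, has a compact definition worth stating explicitly: DSC = 2|A∩B| / (|A| + |B|). The toy masks below are illustrative:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks:
    2 * |A intersect B| / (|A| + |B|).  Empty masks score 1.0."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred = [1, 1, 0, 0, 1]   # predicted infarct voxels (flattened)
truth = [1, 0, 0, 0, 1]  # ground-truth segmentation
score = dice(pred, truth)
```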

CONCLUSION: 24 h follow-up infarct prediction using acute CTA imaging exclusively is feasible with DSC measures comparable to results of CTP-based algorithms reported in other studies. The proposed method might pave the way to a wider acceptance, feasibility, and applicability of follow-up infarct prediction based on artificial intelligence.

PMID:38566856 | PMC:PMC10985353 | DOI:10.3389/fneur.2024.1330497

Categories: Literature Watch

ThyroidNet: A Deep Learning Network for Localization and Classification of Thyroid Nodules

Wed, 2024-04-03 06:00

Comput Model Eng Sci. 2023 Dec 30;139(1):361-382. doi: 10.32604/cmes.2023.031229.

ABSTRACT

AIM: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques.

METHODS: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks; it balances the learning of the two tasks to improve the model's generalization ability. Third, we introduce data augmentation strategies. Finally, we present ThyroidNet, a novel deep learning model that accurately detects thyroid nodules.
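
The abstract does not give the exact form of DualLoss, but a task-balancing loss of this kind is typically a weighted combination of the per-task losses. A generic sketch (the weighting scheme and the alpha value are assumptions, not the paper's formulation):

```python
def dual_loss(loc_loss, cls_loss, alpha=0.5):
    """Generic two-task training objective: a convex combination of a
    localization loss and a classification loss.  alpha controls how
    much the optimizer favours localization over classification."""
    return alpha * loc_loss + (1.0 - alpha) * cls_loss

# Example: weight classification 3x more than localization.
combined = dual_loss(0.8, 0.4, alpha=0.25)
```

During training, alpha (or a learned equivalent) is what keeps one task from dominating the shared encoder's gradients.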

RESULTS: ThyroidNet was evaluated on private datasets and compared with existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, with accuracy improvements of 3.9% and 1.5%, respectively.

CONCLUSION: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.

PMID:38566835 | PMC:PMC7615790 | DOI:10.32604/cmes.2023.031229

Categories: Literature Watch

Nondestructive, quantitative viability analysis of 3D tissue cultures using machine learning image segmentation

Wed, 2024-04-03 06:00

APL Bioeng. 2024 Mar 28;8(1):016121. doi: 10.1063/5.0189222. eCollection 2024 Mar.

ABSTRACT

Ascertaining the collective viability of cells in different cell culture conditions has typically relied on averaging colorimetric indicators and is often reported out in simple binary readouts. Recent research has combined viability assessment techniques with image-based deep-learning models to automate the characterization of cellular properties. However, further development of viability measurements to assess the continuity of possible cellular states and responses to perturbation across cell culture conditions is needed. In this work, we demonstrate an image processing algorithm for quantifying features associated with cellular viability in 3D cultures without the need for assay-based indicators. We show that our algorithm performs similarly to a pair of human experts in whole-well images over a range of days and culture matrix compositions. To demonstrate potential utility, we perform a longitudinal study investigating the impact of a known therapeutic on pancreatic cancer spheroids. Using images taken with a high content imaging system, the algorithm successfully tracks viability at the individual spheroid and whole-well level. The method we propose reduces analysis time by 97% in comparison with the experts. Because the method is independent of the microscope or imaging system used, this approach lays the foundation for accelerating progress in and for improving the robustness and reproducibility of 3D culture analysis across biological and clinical research.
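
At its core, an assay-free viability readout reduces to counting segmented pixels that pass an intensity criterion. A deliberately simplified sketch (the real algorithm uses learned segmentation rather than a fixed threshold; all values below are toy data):

```python
def viability_fraction(intensities, threshold=0.5):
    """Fraction of pixels inside a spheroid mask whose intensity
    exceeds a viability threshold -- an assay-free proxy readout."""
    if not intensities:
        return 0.0
    live = sum(1 for v in intensities if v > threshold)
    return live / len(intensities)

pixels = [0.9, 0.8, 0.2, 0.1]  # toy normalized pixel values in [0, 1]
frac = viability_fraction(pixels)
```

Tracking this fraction per spheroid and per well over time is what enables the longitudinal, whole-well analysis the abstract describes.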

PMID:38566822 | PMC:PMC10985731 | DOI:10.1063/5.0189222

Categories: Literature Watch

Graphical user interface-based convolutional neural network models for detecting nasopalatine duct cysts using panoramic radiography

Tue, 2024-04-02 06:00

Sci Rep. 2024 Apr 2;14(1):7699. doi: 10.1038/s41598-024-57632-8.

ABSTRACT

Nasopalatine duct cysts are difficult to detect on panoramic radiographs due to obstructive shadows and are often overlooked. Therefore, sensitive detection using panoramic radiography is clinically important. This study aimed to create a trained model to detect nasopalatine duct cysts from panoramic radiographs in a graphical user interface-based environment. This study was conducted on panoramic radiographs and CT images of 115 patients with nasopalatine duct cysts. As controls, 230 age- and sex-matched patients without cysts were selected from the same database. The 345 pre-processed panoramic radiographs were divided into 216 training data sets, 54 validation data sets, and 75 test data sets. Deep learning was performed for 400 epochs using pretrained-LeNet and pretrained-VGG16 as the convolutional neural networks to classify the cysts. The deep learning system's accuracy, sensitivity, and specificity using LeNet and VGG16 were calculated. LeNet and VGG16 showed accuracy rates of 85.3% and 88.0%, respectively. A simple deep learning method using a graphical user interface-based Windows machine was able to create a trained model to detect nasopalatine duct cysts from panoramic radiographs, and may be used to prevent such cysts from being overlooked during imaging.

PMID:38565866 | DOI:10.1038/s41598-024-57632-8

Categories: Literature Watch

A Comparative Analysis of Deep Learning-Based Approaches for Classifying Dental Implants Decision Support System

Tue, 2024-04-02 06:00

J Imaging Inform Med. 2024 Apr 2. doi: 10.1007/s10278-024-01086-x. Online ahead of print.

ABSTRACT

This study aims to provide an effective solution for the autonomous identification of dental implant brands through a deep learning-based computer diagnostic system. It also seeks to ascertain the system's potential in clinical practices and to offer a strategic framework for improving diagnosis and treatment processes in implantology. This study employed a total of 28 different deep learning models, including 18 convolutional neural network (CNN) models (VGG, ResNet, DenseNet, EfficientNet, RegNet, ConvNeXt) and 10 vision transformer models (Swin and Vision Transformer). The dataset comprises 1258 panoramic radiographs from patients who received implant treatments at Erciyes University Faculty of Dentistry between 2012 and 2023. It is utilized for the training and evaluation process of deep learning models and consists of prototypes from six different implant systems provided by six manufacturers. The deep learning-based dental implant system provided high classification accuracy for different dental implant brands using deep learning models. Furthermore, among all the architectures evaluated, the small model of the ConvNeXt architecture achieved an impressive accuracy rate of 94.2%, demonstrating a high level of classification success. This study emphasizes the effectiveness of deep learning-based systems in achieving high classification accuracy in dental implant types. These findings pave the way for integrating advanced deep learning tools into clinical practice, promising significant improvements in patient care and treatment outcomes.

PMID:38565730 | DOI:10.1007/s10278-024-01086-x

Categories: Literature Watch

End-to-End Multi-task Learning Architecture for Brain Tumor Analysis with Uncertainty Estimation in MRI Images

Tue, 2024-04-02 06:00

J Imaging Inform Med. 2024 Apr 2. doi: 10.1007/s10278-024-01009-w. Online ahead of print.

ABSTRACT

Brain tumors are a threat to life for every other human being, be it adults or children. Gliomas are one of the deadliest brain tumors with an extremely difficult diagnosis. The reason is their complex and heterogenous structure which gives rise to subjective as well as objective errors. Their manual segmentation is a laborious task due to their complex structure and irregular appearance. To cater to all these issues, a lot of research has been done and is going on to develop AI-based solutions that can help doctors and radiologists in the effective diagnosis of gliomas with the least subjective and objective errors, but an end-to-end system is still missing. An all-in-one framework has been proposed in this research. The developed end-to-end multi-task learning (MTL) architecture with a feature attention module can classify, segment, and predict the overall survival of gliomas by leveraging task relationships between similar tasks. Uncertainty estimation has also been incorporated into the framework to enhance the confidence level of healthcare practitioners. Extensive experimentation was performed by using combinations of MRI sequences. Brain tumor segmentation (BraTS) challenge datasets of 2019 and 2020 were used for experimental purposes. Results of the best model with four sequences show 95.1% accuracy for classification, 86.3% dice score for segmentation, and a mean absolute error (MAE) of 456.59 for survival prediction on the test data. It is evident from the results that deep learning-based MTL models have the potential to automate the whole brain tumor analysis process and give efficient results with least inference time without human intervention. Uncertainty quantification confirms the idea that more data can improve the generalization ability and in turn can produce more accurate results with less uncertainty. The proposed model has the potential to be utilized in a clinical setup for the initial screening of glioma patients.

PMID:38565728 | DOI:10.1007/s10278-024-01009-w

Categories: Literature Watch

Neuron-level explainable AI for Alzheimer's Disease assessment from fundus images

Tue, 2024-04-02 06:00

Sci Rep. 2024 Apr 2;14(1):7710. doi: 10.1038/s41598-024-58121-8.

ABSTRACT

Alzheimer's Disease (AD) is a progressive neurodegenerative disease and the leading cause of dementia. Early diagnosis is critical for patients to benefit from potential intervention and treatment. The retina has emerged as a plausible diagnostic site for AD detection owing to its anatomical connection with the brain. However, existing AI models for this purpose have yet to provide a rational explanation behind their decisions and have not been able to infer the stage of the disease's progression. Along this direction, we propose a novel model-agnostic explainable-AI framework, called Granular Neuron-level Explainer (LAVA), an interpretation prototype that probes into intermediate layers of Convolutional Neural Network (CNN) models to directly assess the continuum of AD from retinal imaging without the need for longitudinal or clinical evaluations. This innovative approach aims to validate retinal vasculature as a biomarker and diagnostic modality for evaluating Alzheimer's Disease. Experiments leveraging UK Biobank cognitive tests and vascular morphological features demonstrate the significant promise and effectiveness of LAVA in identifying AD stages across the progression continuum.

PMID:38565579 | DOI:10.1038/s41598-024-58121-8

Categories: Literature Watch

Transferable non-invasive modal fusion-transformer (NIMFT) for end-to-end hand gesture recognition

Tue, 2024-04-02 06:00

J Neural Eng. 2024 Apr 2. doi: 10.1088/1741-2552/ad39a5. Online ahead of print.

ABSTRACT

OBJECTIVE: Recent studies have shown that integrating IMU signals with surface electromyographic (sEMG) can greatly improve hand gesture recognition (HGR) performance in applications such as prosthetic control and rehabilitation training. However, current deep learning models for multimodal HGR encounter difficulties in invasive modal fusion, complex feature extraction from heterogeneous signals, and limited inter-subject model generalization. To address these challenges, this study aims to develop an end-to-end and inter-subject transferable model that utilizes non-invasively fused sEMG and acceleration (ACC) data.

APPROACH: The proposed NIMFT model utilizes 1D-CNN-based patch embedding for local information extraction and employs a multi-head cross-attention (MCA) mechanism to non-invasively integrate sEMG and ACC signals, stabilizing the variability induced by sEMG. The proposed architecture undergoes detailed ablation studies after hyperparameter tuning. Transfer learning is employed by fine-tuning a pre-trained model on new subjects, and a comparative analysis is performed between the fine-tuned and subject-specific models. Additionally, the performance of NIMFT is compared to state-of-the-art fusion models.

MAIN RESULTS: The NIMFT model achieved recognition accuracies of 93.91%, 91.02%, and 95.56% on the three action sets in the Ninapro DB2 dataset. The proposed embedding method and MCA outperformed the traditional invasive modal fusion transformer by 2.01% (embedding) and 1.23% (fusion), respectively. In comparison to subject-specific models, the fine-tuning model exhibited the highest average accuracy improvement of 2.26%, achieving a final accuracy of 96.13%. Moreover, the NIMFT model demonstrated superiority in terms of accuracy, recall, precision, and F1-score compared to the latest modal fusion models with similar model scale.

SIGNIFICANCE: NIMFT is a novel end-to-end HGR model that utilizes a non-invasive MCA mechanism to integrate long-range intermodal information effectively. Compared to recent modal fusion models, it demonstrates superior performance in inter-subject experiments and, through transfer learning, offers higher training efficiency and accuracy than subject-specific approaches.

PMID:38565124 | DOI:10.1088/1741-2552/ad39a5

Categories: Literature Watch

Integrating portable NIR spectrometry with deep learning for accurate estimation of crude protein in corn feed

Tue, 2024-04-02 06:00

Spectrochim Acta A Mol Biomol Spectrosc. 2024 Mar 28;314:124203. doi: 10.1016/j.saa.2024.124203. Online ahead of print.

ABSTRACT

This study investigates the challenges encountered in utilizing portable near-infrared (NIR) spectrometers in agriculture, specifically in developing predictive models with high accuracy and robust generalization abilities despite limited spectral resolution and small sample sizes. The research concentrates on the near-infrared spectra of corn feed, utilizing spectral processing techniques and CNNs to precisely estimate crude protein content. Five preprocessing methods were implemented alongside two-dimensional (2D) correlation spectroscopy, resulting in the development of both one-dimensional (1D) and 2D regression models. A comparative analysis of these models in predicting crude protein content demonstrated that 1D-CNNs exhibited superior predictive performance within the 1D category. For the 2D models, CropNet and CropResNet were utilized, with CropResNet demonstrating more accurate and superior predictive capabilities. Overall, the integration of 2D correlation spectroscopy with suitable preprocessing techniques in deep learning models, particularly the 2D CropResNet, proved to be more precise in predicting the crude protein content in corn feed. This finding emphasizes the potential of this approach in the portable spectrometer market.
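
The synchronous 2D correlation spectrum underlying the 2D models has a standard closed form: mean-centre each spectral channel across samples, then take the sample covariance between every pair of channels. A minimal sketch with toy two-channel spectra (the data are hypothetical):

```python
def sync_2d_correlation(spectra):
    """Synchronous 2D correlation spectrum: mean-centre each wavelength
    channel across the m samples, then Phi = X~^T X~ / (m - 1)."""
    m, n = len(spectra), len(spectra[0])
    means = [sum(s[j] for s in spectra) / m for j in range(n)]
    centred = [[s[j] - means[j] for j in range(n)] for s in spectra]
    return [[sum(centred[k][i] * centred[k][j] for k in range(m)) / (m - 1)
             for j in range(n)] for i in range(n)]

# Three toy samples measured at two wavelengths.
phi = sync_2d_correlation([[1.0, 2.0], [3.0, 6.0], [2.0, 4.0]])
```

The resulting symmetric map is what gets fed to the 2D CNNs as an image-like input.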

PMID:38565047 | DOI:10.1016/j.saa.2024.124203

Categories: Literature Watch

Performance evaluation of deep learning based stream nitrate concentration prediction model to fill stream nitrate data gaps at low-frequency nitrate monitoring basins

Tue, 2024-04-02 06:00

J Environ Manage. 2024 Apr 1;357:120721. doi: 10.1016/j.jenvman.2024.120721. Online ahead of print.

ABSTRACT

Accurate and frequent nitrate estimates can provide valuable information on the nitrate transport dynamics. The study aimed to develop a data-driven modeling framework to estimate daily nitrate concentrations at low-frequency nitrate monitoring sites using the daily nitrate concentration and stream discharge information of a neighboring high-frequency nitrate monitoring site. A Long Short-Term Memory (LSTM) based deep learning (DL) modeling framework was developed to predict daily nitrate concentrations. The DL modeling framework performance was compared with two well-established statistical models, including LOADEST and WRTDS-Kalman, in three selected basins in Iowa, USA: Des Moines, Iowa, and Cedar River. The developed DL model performed well with NSE >0.70 and KGE >0.70 for 67% and 79% nitrate monitoring sites, respectively. DL and WRTDS-Kalman models performed better than the LOADEST in nitrate concentration and load estimation for all low-frequency sites. The average NSE performance of the DL model in daily nitrate estimation is 20% higher than that of the WRTDS-Kalman model at 18 out of 24 sites (75%). The WRTDS-Kalman model showed unrealistic fluctuations in the estimated daily nitrate time series when the model received limited observed nitrate data (less than 50) for simulation. The DL model indicated superior performance in winter months' nitrate prediction (60% of cases) compared to WRTDS-Kalman models (33% of cases). The DL model also better represented the exceedance days from the USEPA maximum contamination level (MCL). Both the DL and WRTDS-Kalman models demonstrated similar performance in annual stream nitrate load estimation, and estimated values are close to actual nitrate loads.

PMID:38565027 | DOI:10.1016/j.jenvman.2024.120721

Categories: Literature Watch

Using generative AI to investigate medical imagery models and datasets

Tue, 2024-04-02 06:00

EBioMedicine. 2024 Apr 1;102:105075. doi: 10.1016/j.ebiom.2024.105075. Online ahead of print.

ABSTRACT

BACKGROUND: AI models have shown promise in performing many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed in order to increase the trust of doctors in AI-based models, especially in domains where AI prediction capabilities surpass those of humans. Moreover, such explanations could enable novel scientific discovery by uncovering signals in the data that aren't yet known to experts.

METHODS: In this paper, we present a workflow for generating hypotheses to understand which visual signals in images are correlated with a classification model's predictions for a given task. This approach leverages an automatic visual explanation algorithm followed by interdisciplinary expert review. We propose the following 4 steps: (i) Train a classifier to perform a given task to assess whether the imagery indeed contains signals relevant to the task; (ii) Train a StyleGAN-based image generator with an architecture that enables guidance by the classifier ("StylEx"); (iii) Automatically detect, extract, and visualize the top visual attributes that the classifier is sensitive towards. For visualization, we independently modify each of these attributes to generate counterfactual visualizations for a set of images (i.e., what the image would look like with the attribute increased or decreased); (iv) Formulate hypotheses for the underlying mechanisms, to stimulate future research. Specifically, present the discovered attributes and corresponding counterfactual visualizations to an interdisciplinary panel of experts so that hypotheses can account for social and structural determinants of health (e.g., whether the attributes correspond to known patho-physiological or socio-cultural phenomena, or could be novel discoveries).

FINDINGS: To demonstrate the broad applicability of our approach, we present results on eight prediction tasks across three medical imaging modalities-retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples where many of the automatically-learned attributes clearly capture clinically known features (e.g., types of cataract, enlarged heart), and demonstrate automatically-learned confounders that arise from factors beyond physiological mechanisms (e.g., chest X-ray underexposure is correlated with the classifier predicting abnormality, and eye makeup is correlated with the classifier predicting low hemoglobin levels). We further show that our method reveals a number of physiologically plausible, previously-unknown attributes based on the literature (e.g., differences in the fundus associated with self-reported sex, which were previously unknown).

INTERPRETATION: Our approach enables hypotheses generation via attribute visualizations and has the potential to enable researchers to better understand, improve their assessment, and extract new knowledge from AI-based models, as well as debug and design better datasets. Though not designed to infer causality, importantly, we highlight that attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real world nature of healthcare delivery and socio-cultural factors, and hence interdisciplinary perspectives are critical in these investigations. Finally, we will release code to help researchers train their own StylEx models and analyze their predictive tasks of interest, and use the methodology presented in this paper for responsible interpretation of the revealed attributes.

FUNDING: Google.

PMID:38565004 | DOI:10.1016/j.ebiom.2024.105075

Categories: Literature Watch

Explainable prediction of problematic smartphone use among South Korea's children and adolescents using a Machine learning approach

Tue, 2024-04-02 06:00

Int J Med Inform. 2024 Mar 30;186:105441. doi: 10.1016/j.ijmedinf.2024.105441. Online ahead of print.

ABSTRACT

BACKGROUND: Korea, known for its technological prowess, has the highest smartphone ownership rate in the world at 95% and the smallest generational gap in smartphone ownership. Since the onset of the COVID-19 pandemic, problematic smartphone use has become more prevalent among Korean children and adolescents owing to limited school attendance and outdoor activities, resulting in increased reliance on smartphones. Currently, 40.1% of adolescents are classified as high-risk, and the adolescent group is the only one showing a persistent year-over-year rise.

OBJECTIVE: The purpose of this study is to present data-driven analyses for predicting and preventing smartphone addiction in Korea, where problematic smartphone use is severe.

PARTICIPANTS AND METHODS: To predict the risk of problematic smartphone use in Korean children and adolescents at an early stage, we used data collected from the Smartphone Overdependence Survey conducted by the National Information Society Agency between 2017 and 2021. Eight representative machine learning and deep learning algorithms were used to predict groups at high risk of smartphone addiction: Logistic Regression, Random Forest, Gradient Boosting Machine (GBM), eXtreme Gradient Boosting (XGBoost), LightGBM, Categorical Boosting, Multilayer Perceptron, and Convolutional Neural Network.

RESULTS: The XGBoost ensemble algorithm predicted participants at risk of future problematic smartphone use with a precision of 87.60%. Our results also showed that prolonged use of games, webtoons/web novels, and e-books, factors not identified in previous studies, further increased the risk of problematic smartphone use.
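The 87.60% figure above is reported in terms of precision. As a hedged illustration (the survey features and model configuration are not given in the abstract), the metric itself can be computed from binary at-risk labels and predictions like this:

```python
def precision(y_true, y_pred):
    """Precision = TP / (TP + FP): of the participants the model flags
    as at-risk, the fraction who are truly at risk."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if tp + fp else 0.0

# Toy labels: 1 = at-risk, 0 = not at-risk (illustrative data only).
score = precision([1, 1, 0, 1, 0], [1, 1, 1, 0, 0])  # 2 TP, 1 FP
```

Precision is a natural choice here because flagging a child as at-risk triggers an intervention, so false positives carry a real cost.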

CONCLUSIONS: Artificial intelligence algorithms have potential predictive and explanatory capabilities for identifying early signs of problematic smartphone use in adolescents and young children. We recommend that a variety of healthy, beneficial, and face-to-face activities be offered as alternatives to smartphones for leisure and play culture.

PMID:38564961 | DOI:10.1016/j.ijmedinf.2024.105441

Categories: Literature Watch

Cellular data extraction from multiplexed brain imaging data using self-supervised Dual-loss Adaptive Masked Autoencoder

Tue, 2024-04-02 06:00

Artif Intell Med. 2024 Mar 15;151:102828. doi: 10.1016/j.artmed.2024.102828. Online ahead of print.

ABSTRACT

Reliable large-scale cell detection and segmentation is the fundamental first step to understanding biological processes in the brain. The ability to phenotype cells at scale can accelerate preclinical drug evaluation and system-level brain histology studies. The impressive advances in deep learning offer a practical solution to cell image detection and segmentation. Unfortunately, categorizing cells and delineating their boundaries for training deep networks is an expensive process that requires skilled biologists. This paper presents a novel self-supervised Dual-Loss Adaptive Masked Autoencoder (DAMA) for learning rich features from multiplexed immunofluorescence brain images. DAMA's objective function minimizes the conditional entropy in pixel-level reconstruction and feature-level regression. Unlike existing self-supervised learning methods based on a random image masking strategy, DAMA employs a novel adaptive mask sampling strategy to maximize mutual information and effectively learn brain cell data. To the best of our knowledge, this is the first effort to develop a self-supervised learning method for multiplexed immunofluorescence brain images. Our extensive experiments demonstrate that DAMA features enable superior cell detection, segmentation, and classification performance without requiring many annotations. In addition, to examine the generalizability of DAMA, we also experimented on TissueNet, a multiplexed imaging dataset comprising two-channel fluorescence images from six distinct tissue types, captured using six different imaging platforms. Our code is publicly available at https://github.com/hula-ai/DAMA.
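The key contrast the abstract draws is between uniform random masking and an adaptive strategy that prefers informative regions. A minimal sketch of that idea, using per-patch pixel variance as a crude stand-in for informativeness (DAMA's actual mutual-information-based sampling is more involved and is not specified here):

```python
from statistics import pvariance

def adaptive_mask(patches, mask_ratio=0.5):
    """Rank flattened patches by pixel variance (a rough proxy for
    information content) and mask the most informative fraction,
    instead of sampling mask positions uniformly at random."""
    ranked = sorted(range(len(patches)),
                    key=lambda i: pvariance(patches[i]),
                    reverse=True)
    n_mask = int(len(patches) * mask_ratio)
    return set(ranked[:n_mask])

# Four toy 4-pixel patches; the high-contrast ones get masked first.
masked = adaptive_mask([[0, 0, 0, 0], [0, 10, 0, 10],
                        [5, 5, 5, 5], [0, 1, 0, 1]])
```

Masking informative patches forces the autoencoder to reconstruct exactly the content that is hardest to infer from context, which is the intuition behind departing from random masking.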

PMID:38564879 | DOI:10.1016/j.artmed.2024.102828

Categories: Literature Watch

Research Note: Prospects for early detection of breast muscle myopathies by automated image analysis

Tue, 2024-04-02 06:00

Poult Sci. 2024 Mar 20;103(6):103680. doi: 10.1016/j.psj.2024.103680. Online ahead of print.

ABSTRACT

White Striping (WS), Wooden Breast (WB), and Spaghetti Meat (SM) are documented breast muscle myopathies (BMM) affecting the product quality, profitability, and welfare of broiler chickens. This study evaluated the efficacy of our newly developed deep learning-based automated image analysis tool for early detection of morphometric parameters related to BMM in broiler chickens. Male chicks were utilized, and muscle samples were collected on d 14 of rearing. Histological procedures, including microscopic scoring, blood vessel counting, and collagen quantification, were conducted. A previous study demonstrated that our automated image analysis is a reliable tool for evaluating myofiber size, agreeing with manual histological measurements. A threshold for BMM detection was established by normalizing and consolidating myofiber diameter and area into a unified metric based on automated measurements, termed the "relative myofiber size value." Results show that broilers with severe myopathy consistently exhibited lower relative myofiber size values, effectively indicating myopathy severity. Our study, intended as a proof of concept, underscores the potential of our automated image analysis tool as an early detection method for BMM.
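The "relative myofiber size value" consolidates two measurements on different scales into one score. One plausible way to do that, assuming z-score normalization followed by averaging (the abstract does not state the exact normalization used):

```python
from statistics import mean, pstdev

def relative_myofiber_size(diameters, areas):
    """Z-normalize each morphometric measure across the cohort, then
    average the two z-scores per sample into a single unified metric
    (a hypothetical consolidation; the paper's formula is not given).
    Lower values would flag samples for myopathy follow-up."""
    def z(values):
        m, s = mean(values), pstdev(values)
        return [(v - m) / s for v in values]
    return [(d + a) / 2 for d, a in zip(z(diameters), z(areas))]

# Three toy samples with increasing myofiber size (illustrative units).
scores = relative_myofiber_size([10, 20, 30], [100, 400, 900])
```

Because both inputs are standardized, the combined score is dimensionless and directly comparable across birds, which is what makes a single detection threshold workable.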

PMID:38564836 | DOI:10.1016/j.psj.2024.103680

Categories: Literature Watch

A deep learning framework for identifying and segmenting three vessels in fetal heart ultrasound images

Tue, 2024-04-02 06:00

Biomed Eng Online. 2024 Apr 2;23(1):39. doi: 10.1186/s12938-024-01230-2.

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is one of the most common birth defects in the world. It is the leading cause of infant mortality, necessitating early diagnosis for timely intervention. Prenatal screening using ultrasound is the primary method for CHD detection. However, its effectiveness is heavily reliant on the expertise of physicians, leading to subjective interpretations and potential underdiagnosis. Therefore, a method for automatic analysis of fetal cardiac ultrasound images is highly desired to support objective and effective CHD diagnosis.

METHOD: In this study, we propose a deep learning-based framework for the identification and segmentation of three vessels (the pulmonary artery, aorta, and superior vena cava) in the ultrasound three-vessel view (3VV) of the fetal heart. In the first stage of the framework, the object detection model Yolov5 is employed to identify the three vessels and localize the region of interest (ROI) within the original full-sized ultrasound images. Subsequently, a modified Deeplabv3 equipped with our novel Attentional Multi-scale Feature Fusion (AMFF) module is applied in the second stage to segment the three vessels within the cropped ROI images.
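The two-stage hand-off described above (detector produces a box, segmenter runs on the crop) can be sketched schematically. Everything here is illustrative: the box format, the threshold-based `segment` stub standing in for the Deeplabv3+AMFF model, and the nested-list images:

```python
def crop_roi(image, box):
    """Stage 1 output -> stage 2 input: crop the region of interest
    given an (x, y, w, h) detection box from the detector."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def segment(roi, thresh=128):
    """Placeholder for the stage-2 segmentation network: here a bare
    intensity threshold labels each cropped pixel as vessel (1) or not (0)."""
    return [[1 if px >= thresh else 0 for px in row] for row in roi]

# Toy 4x4 "ultrasound" image with two bright pixels inside the ROI.
image = [[0, 0, 0, 0],
         [0, 200, 50, 0],
         [0, 50, 200, 0],
         [0, 0, 0, 0]]
mask = segment(crop_roi(image, (1, 1, 2, 2)))
```

Cropping first means the segmenter sees the vessels at a much larger relative scale, which is the usual motivation for detect-then-segment pipelines on small structures.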

RESULTS: We evaluated our method on a dataset of 511 fetal heart 3VV images. Compared with existing models, our framework exhibits superior performance in segmenting all three vessels, achieving Dice coefficients of 85.55%, 89.12%, and 77.54% for the pulmonary artery (PA), aorta (Ao), and superior vena cava (SVC), respectively.

CONCLUSIONS: Our experimental results show that our proposed framework can automatically and accurately detect and segment the three vessels in fetal heart 3VV images. This method has the potential to assist sonographers in enhancing the precision of vessel assessment during fetal heart examinations.

PMID:38566181 | DOI:10.1186/s12938-024-01230-2

Categories: Literature Watch

CAT-DTI: cross-attention and Transformer network with domain adaptation for drug-target interaction prediction

Tue, 2024-04-02 06:00

BMC Bioinformatics. 2024 Apr 2;25(1):141. doi: 10.1186/s12859-024-05753-2.

ABSTRACT

Accurate and efficient prediction of drug-target interaction (DTI) is critical to advance drug development and reduce the cost of drug discovery. Recently, the employment of deep learning methods has enhanced DTI prediction precision and efficacy, but it still encounters several challenges. The first challenge lies in the efficient learning of drug and protein feature representations alongside their interaction features to enhance DTI prediction. Another important challenge is to improve the generalization capability of the DTI model in real-world scenarios. To address these challenges, we propose CAT-DTI, a model based on cross-attention and Transformer, possessing domain adaptation capability. CAT-DTI effectively captures drug-target interactions while adapting to out-of-distribution data. Specifically, we use a convolutional neural network combined with a Transformer to encode the distance relationships between amino acids within protein sequences and employ a cross-attention module to capture the drug-target interaction features. Generalization to new DTI prediction scenarios is achieved by leveraging a conditional domain adversarial network, aligning DTI representations under diverse distributions. Experimental results in in-domain and cross-domain scenarios demonstrate that the CAT-DTI model overall improves DTI prediction performance compared with previous methods.
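The cross-attention module at the heart of this design lets each drug token attend over protein tokens (scaled dot-product attention with queries from one modality and keys/values from the other). A minimal dependency-free sketch of that mechanism, with toy 2-dimensional features in place of CAT-DTI's learned encodings:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(drug_tokens, protein_tokens):
    """Each drug token is a query; protein tokens serve as keys and
    values. Scores are scaled dot products; each output row is the
    attention-weighted sum of protein vectors."""
    d = len(protein_tokens[0])
    out = []
    for q in drug_tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in protein_tokens]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, protein_tokens))
                    for j in range(d)])
    return out

# One drug token aligned with the first of two protein tokens.
fused = cross_attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]])
```

In the full model the resulting fused representations would feed the interaction classifier; here the point is only that attention weights, not fixed pooling, decide which protein positions each drug feature draws from.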

PMID:38566002 | DOI:10.1186/s12859-024-05753-2

Categories: Literature Watch
