Deep learning
Renal tumor segmentation, visualization, and segmentation confidence using ensembles of neural networks in patients undergoing surgical resection
Eur Radiol. 2024 Aug 23. doi: 10.1007/s00330-024-11026-6. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop an automatic segmentation model for solid renal tumors on contrast-enhanced CTs and to visualize segmentation with associated confidence to promote clinical applicability.
MATERIALS AND METHODS: The training dataset included solid renal tumor patients from two tertiary centers undergoing surgical resection and receiving CT in the corticomedullary or nephrogenic contrast media (CM) phase. Manual tumor segmentation was performed on all axial CT slices, serving as the reference standard for automatic segmentations. Independent testing was performed on the publicly available KiTS 2019 dataset. Ensembles of neural networks (ENN, DeepLabV3) were used for automatic renal tumor segmentation, and their performance was quantified with the DICE score. ENN average foreground entropy measured segmentation confidence (binary: successful segmentation with DICE score > 0.8 versus inadequate segmentation ≤ 0.8).
RESULTS: n = 639 and n = 210 patients were included in the training and independent test datasets, respectively. Datasets were comparable regarding age and sex (p > 0.05), while renal tumors in the training dataset were larger and more frequently benign (p < 0.01). In the internal test dataset, the ENN model yielded a median DICE score of 0.84 (IQR: 0.62-0.97, corticomedullary) and 0.86 (IQR: 0.77-0.96, nephrogenic CM phase), and segmentation confidence reached an AUC of 0.89 (sensitivity = 0.86; specificity = 0.77). In the independent test dataset, the ENN model achieved a median DICE score of 0.84 (IQR: 0.71-0.97, corticomedullary CM phase), and segmentation confidence an accuracy of 0.84 (sensitivity = 0.86; specificity = 0.81). ENN segmentations were visualized with color-coded voxelwise tumor probabilities and thresholds superimposed on clinical CT images.
CONCLUSIONS: ENN-based renal tumor segmentation robustly performs in external test data and might aid in renal tumor classification and treatment planning.
CLINICAL RELEVANCE STATEMENT: Ensembles of neural networks (ENN) models could automatically segment renal tumors on routine CTs, enabling and standardizing downstream image analyses and treatment planning. Providing confidence measures and segmentation overlays on images can lower the threshold for clinical ENN implementation.
KEY POINTS: Ensembles of neural networks (ENN) segmentation is visualized by color-coded voxelwise tumor probabilities and thresholds. ENN provided a high segmentation accuracy in internal testing and in an independent external test dataset. ENN models provide measures of segmentation confidence which can robustly discriminate between successful and inadequate segmentations.
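The DICE score and the entropy-based confidence measure reported above can be made concrete with a minimal sketch (illustrative only, not the study's implementation; function names are ours):

```python
import numpy as np

def dice_score(pred, ref):
    """DICE overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def ensemble_entropy(probs, eps=1e-8):
    """Average foreground entropy of an ensemble's mean tumor probability map.

    probs: (n_models, n_voxels) per-model foreground probabilities.
    Low values = confident, agreeing ensemble; high values = uncertain.
    """
    p = probs.mean(axis=0)                                  # ensemble-mean probability
    h = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))  # binary entropy
    fg = p > 0.5                                            # predicted tumor voxels
    return float(h[fg].mean()) if fg.any() else 0.0
```

In the study, a threshold on such a confidence measure separated successful (DICE > 0.8) from inadequate segmentations; the sketch only illustrates the two quantities involved.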
PMID:39177855 | DOI:10.1007/s00330-024-11026-6
Diagnostic accuracy of artificial intelligence models in detecting osteoporosis using dental images: a systematic review and meta-analysis
Osteoporos Int. 2024 Aug 23. doi: 10.1007/s00198-024-07229-8. Online ahead of print.
ABSTRACT
The current study aimed to systematically review the literature on the accuracy of artificial intelligence (AI) models for osteoporosis (OP) diagnosis using dental images. A thorough literature search was executed in October 2022 and updated in November 2023 across multiple databases, including PubMed, Scopus, Web of Science, and Google Scholar. The research targeted studies using AI models for OP diagnosis from dental radiographs. The main outcomes were the sensitivity and specificity of AI models regarding OP diagnosis. The "meta" package in R was used for the statistical analysis. A random-effects model, along with 95% confidence intervals, was utilized to estimate pooled values. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was employed for risk of bias and applicability assessment. Among 640 records, 22 studies were included in the qualitative analysis and 12 in the meta-analysis. The overall sensitivity for AI-assisted OP diagnosis was 0.85 (95% CI, 0.70-0.93), while the pooled specificity equaled 0.95 (95% CI, 0.91-0.97). Conventional algorithms led to a pooled sensitivity of 0.82 (95% CI, 0.57-0.94) and a pooled specificity of 0.96 (95% CI, 0.93-0.97). Deep convolutional neural networks exhibited a pooled sensitivity of 0.87 (95% CI, 0.68-0.95) and a pooled specificity of 0.92 (95% CI, 0.83-0.96). This systematic review corroborates the accuracy of AI in OP diagnosis using dental images. Future research should expand sample sizes in test and training datasets and standardize imaging techniques to establish the reliability of AI-assisted methods in OP diagnosis through dental images.
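The random-effects pooling described here can be sketched as follows: a simplified DerSimonian-Laird pooling of logit-transformed sensitivities, for illustration only (the study itself used the R "meta" package; the per-study counts below are hypothetical):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) of per-study effects."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

def pool_sensitivity(tp_fn_pairs):
    """Pool study sensitivities on the logit scale; return the back-transformed estimate."""
    effects, variances = [], []
    for tp, fn in tp_fn_pairs:
        # 0.5 continuity correction keeps the logit finite for extreme studies
        p = (tp + 0.5) / (tp + fn + 1.0)
        effects.append(math.log(p / (1 - p)))
        variances.append(1.0 / (tp + 0.5) + 1.0 / (fn + 0.5))
    pooled, _ = dersimonian_laird(effects, variances)
    return 1.0 / (1.0 + math.exp(-pooled))
```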
PMID:39177815 | DOI:10.1007/s00198-024-07229-8
Research on low-power driving fatigue monitoring method based on spiking neural network
Exp Brain Res. 2024 Aug 23. doi: 10.1007/s00221-024-06911-x. Online ahead of print.
ABSTRACT
Fatigue driving is one of the leading causes of traffic accidents, and the rapid and accurate detection of driver fatigue is of paramount importance for enhancing road safety. However, the application of deep learning models in fatigue driving detection has long been constrained by high computational costs and power consumption. To address this issue, this study proposes an approach that combines Self-Organizing Map (SOM) and Spiking Neural Networks (SNN) to develop a low-power model capable of accurately recognizing the driver's mental state. Initially, spatial features are extracted from electroencephalogram (EEG) signals using the SOM network. Subsequently, the extracted weight vectors are encoded and fed into the SNN for fatigue driving classification. The research results demonstrate that the proposed method effectively considers the spatiotemporal characteristics of EEG signals, achieving efficient fatigue detection. Simultaneously, this approach successfully reduces the model's power consumption. When compared to traditional artificial neural networks, our method reduces energy consumption by approximately 12.21-42.59%.
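The SOM feature-extraction stage can be sketched in a few lines. This is a generic Self-Organizing Map, not the authors' EEG pipeline; the grid size and decay schedules are arbitrary choices:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Minimal Self-Organizing Map; returns the learned weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs           # linear decay of lr and neighborhood
        for x in data:
            # best-matching unit = grid node closest to the sample
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby nodes toward the sample
            g = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * g[..., None] * (x - weights)
    return weights
```

In the paper's setting, the learned weight vectors would then be spike-encoded and fed to the SNN classifier.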
PMID:39177685 | DOI:10.1007/s00221-024-06911-x
Football teaching and training based on video surveillance using deep learning
Technol Health Care. 2024 Jul 13. doi: 10.3233/THC-231860. Online ahead of print.
ABSTRACT
BACKGROUND: The objective performance evaluation of an athlete is essential to allow detailed research into elite sports. The automatic identification and classification of football teaching and training exercises overcome the shortcomings of manual analytical approaches. Video monitoring is vital for detecting human conduct and preventing or reducing inappropriate actions in time. The video's digital material is classified by relevance based on these individual actions.
OBJECTIVE: The research goal is to systematically use data from an inertial measurement unit (IMU) and from computer vision analysis for deep learning of football teaching motion recognition (DL-FTMR). Many libraries were searched. The included studies examined and analyzed training through deep model-construction learning methods. Investigations show the ability to distinguish the efficiency of qualified and less qualified officers for sport-specific video-based decision-making assessments.
METHODS: Video-based research is an effective way of assessing decision-making due to its potential to present changing in-game decision-making scenarios in a more ecologically valid way than static picture presentation. The data showed that response-filtering accuracy improves without sacrificing response time. This observation indicates that practicing with a video monitoring system offers a play view close to that seen in a game scenario and can be an essential way to improve the perception of selection precision. This study discusses publicly accessible training datasets for Human Activity Recognition (HAR) and presents a dataset that combines various components. The study also used the UT-Interaction dataset to identify complex events.
RESULTS: The experimental results of DL-FTMR give a performance ratio of 94.5%, a behavior-processing ratio of 92.4%, an athletes' energy-level ratio of 92.5%, an interaction ratio of 91.8%, a prediction ratio of 92.5%, a sensitivity ratio of 93.7%, and a precision ratio of 94.86%, compared to the optimized convolutional neural network (OCNN), Gaussian Mixture Model (GMM), you only look once (YOLO), and Human Activity Recognition state-of-the-art methodologies (HAR-SAM).
CONCLUSION: This finding shows that using a video monitoring system that provides a play view similar to that seen in a game scenario can be a valuable technique to increase the perception of selection accuracy.
PMID:39177616 | DOI:10.3233/THC-231860
SmartCADD: AI-QM Empowered Drug Discovery Platform with Explainability
J Chem Inf Model. 2024 Aug 23. doi: 10.1021/acs.jcim.4c00720. Online ahead of print.
ABSTRACT
Artificial intelligence (AI) has emerged as a pivotal force in enhancing productivity across various sectors, with its impact being profoundly felt within the pharmaceutical and biotechnology domains. Despite AI's rapid adoption, its integration into scientific research faces resistance due to myriad challenges: the opaqueness of AI models, the intricate nature of their implementation, and the issue of data scarcity. In response to these impediments, we introduce SmartCADD, an innovative, open-source virtual screening platform that combines deep learning, computer-aided drug design (CADD), and quantum mechanics methodologies within a user-friendly Python framework. SmartCADD is engineered to streamline the construction of comprehensive virtual screening workflows that incorporate a variety of formerly independent techniques, from ADMET property predictions, de novo 2D and 3D pharmacophore modeling, and molecular docking to the integration of explainable AI mechanisms. This manuscript highlights the foundational principles, key functionalities, and the unique integrative approach of SmartCADD. Furthermore, we demonstrate its efficacy through a case study focused on the identification of promising lead compounds for HIV inhibition. By democratizing access to advanced AI and quantum mechanics tools, SmartCADD stands as a catalyst for progress in pharmaceutical research and development, heralding a new era of innovation and efficiency.
PMID:39177478 | DOI:10.1021/acs.jcim.4c00720
BertTCR: a Bert-based deep learning framework for predicting cancer-related immune status based on T cell receptor repertoire
Brief Bioinform. 2024 Jul 25;25(5):bbae420. doi: 10.1093/bib/bbae420.
ABSTRACT
The T cell receptor (TCR) repertoire is pivotal to the human immune system, and understanding its nuances can significantly enhance our ability to forecast cancer-related immune responses. However, existing methods often overlook the intra- and inter-sequence interactions of T cell receptors (TCRs), limiting the development of sequence-based cancer-related immune status predictions. To address this challenge, we propose BertTCR, an innovative deep learning framework designed to predict cancer-related immune status using TCRs. BertTCR combines a pre-trained protein large language model with deep learning architectures, enabling it to extract deeper contextual information from TCRs. Compared to three state-of-the-art sequence-based methods, BertTCR improves the AUC on an external validation set for thyroid cancer detection by 21 percentage points. Additionally, this model was trained on over 2000 publicly available TCR libraries covering 17 types of cancer and healthy samples, and it has been validated on multiple public external datasets for its ability to distinguish cancer patients from healthy individuals. Furthermore, BertTCR can accurately classify various cancer types and healthy individuals. Overall, BertTCR advances cancer-related immune status forecasting based on TCRs, offering promising potential for a wide range of immune status prediction tasks.
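The AUC figures that anchor the comparison can be computed directly from classifier scores via the Mann-Whitney formulation; a minimal sketch, not tied to BertTCR's code:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic (ties count half).

    labels: 1 = positive (e.g. cancer), 0 = negative (healthy).
    scores: the model's predicted probability or score per sample.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```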
PMID:39177262 | DOI:10.1093/bib/bbae420
Explaining Deep Learning Models Applied in Histopathology: Current Developments and the Path to Sustainability
Stud Health Technol Inform. 2024 Aug 22;316:1003-1007. doi: 10.3233/SHTI240579.
ABSTRACT
The digital pathology landscape is in continuous expansion. The digitization of slides as Whole Slide Images (WSIs) has fueled the capacity for automated diagnostic support. This paper presents an overview of the current state-of-the-art methods used in histopathological practice for explaining CNN classifications to histopathological experts. Following the study, we observed that histopathological deep learning models are still underused and that pathologists do not trust them. For deep learning to be used sustainably, experts must come to trust the models; to do so, they need to understand how the results are generated and how this information correlates with their prior knowledge, for which they can use the methods highlighted in this study.
PMID:39176960 | DOI:10.3233/SHTI240579
Fine-Tuning SSL-Model to Enhance Detection of Cilioretinal Arteries on Colored Fundus Images
Stud Health Technol Inform. 2024 Aug 22;316:919-923. doi: 10.3233/SHTI240561.
ABSTRACT
Cilioretinal arteries (CRAs) are a common congenital anomaly of retinal blood supply. This paper presents a deep learning-based approach for the automated detection of a CRA from color fundus images. Leveraging the Vision Transformer architecture, a pre-trained model from RETFound was fine-tuned to transfer knowledge from a broader dataset to our specific task. An initial dataset of 85 images was expanded to 170 through data augmentation using self-supervised learning-driven techniques. To address the imbalance in the dataset and prevent overfitting, Focal Loss and Early Stopping were implemented. The model's performance was evaluated using a 70-30 split of the dataset for training and validation. The results showcase the potential of ophthalmic foundation models in enhancing detection of CRAs and reducing the effort required for labeling by retinal experts, as promising results could be achieved with only a small amount of training data through fine-tuning.
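The two regularization choices mentioned, Focal Loss against class imbalance and Early Stopping against overfitting, can be restated in a short sketch (illustrative only, not the study's code):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction p in (0,1) with label y in {0,1}.

    The (1 - pt)**gamma factor down-weights easy, well-classified examples.
    """
    pt = p if y == 1 else 1.0 - p            # probability assigned to the true class
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(pt)

class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""
    def __init__(self, patience=3):
        self.patience, self.best, self.bad = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad = val_loss, 0
        else:
            self.bad += 1
        return self.bad >= self.patience     # True -> stop training
```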
PMID:39176942 | DOI:10.3233/SHTI240561
Integrated Cognitive Ergonomics of the Remote Evaluation of the Grafts with Robotics and Machine Organ Perfusion Technology in Solid Organ Transplantation
Stud Health Technol Inform. 2024 Aug 22;316:884-888. doi: 10.3233/SHTI240554.
ABSTRACT
AIM: To extend reliability of the integrated Tele-Radiological (TRE) and Tele-Pathological (TPE) evaluation of the Renal Graft (RG) of Prometheus Digital Medical Device (pn 2003016) via integration with Machine Organ Perfusion and Tele-Robotics (Stamoulis Rb) in Organ Transplantation.
MATERIAL AND METHODS: A sensitivity-specificity analysis by a simulation of the TRE of RG on 15 MR abdominal images by a radiologist and of the TPE of RG by 26 specialists based on 130 human RG images assessing damages and lesions.
RESULTS: The integrated analysis of the TRE and TPE of the RG showed sensitivity = 96.7%, specificity = 100%, and accuracy = 97.6%. Integrating pattern recognition of machine-organ-perfusion results with AI programming enables deep learning and improves the prognosis of morbidity-mortality and organ viability.
CONCLUSION: The TRE integrated with the TPE of the RG and AI-programmed pattern recognition of machine-organ-perfusion results, with deep-learning-supported virtual benching, is feasible and appears more reliable for instant morbidity-mortality and organ-viability prognosis in renal transplant decision support and operational planning.
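The three reported figures follow directly from confusion-matrix counts; a minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),               # true positives found
        "specificity": tn / (tn + fp),               # true negatives found
        "accuracy": (tp + tn) / (tp + fp + tn + fn), # overall agreement
    }
```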
PMID:39176935 | DOI:10.3233/SHTI240554
Accelerating Clinical Text Annotation in Underrepresented Languages: A Case Study on Text De-Identification
Stud Health Technol Inform. 2024 Aug 22;316:853-857. doi: 10.3233/SHTI240546.
ABSTRACT
Clinical notes contain valuable information for research and monitoring quality of care. Named Entity Recognition (NER) is the process of identifying relevant pieces of information such as diagnoses, treatments, side effects, etc., and bringing them into a more structured form. Although recent advancements in deep learning have facilitated automated recognition, particularly in English, NER can still be challenging due to limited specialized training data. This is exacerbated in hospital settings, where annotations are costly to obtain without appropriate incentives and often dependent on local specificities. In this work, we study whether this annotation process can be effectively accelerated by combining two practical strategies. First, we convert usually passive annotation tasks into a proactive contest to motivate human annotators in performing a task often considered tedious and time-consuming. Second, we provide pre-annotations for the participants to evaluate how recall and precision of the pre-annotations can boost or deteriorate annotation performance. We applied both strategies to a text de-identification task on French clinical notes and discharge summaries at a large Swiss university hospital. Our results show that a proactive contest and average-quality pre-annotations can significantly speed up annotation time and increase annotation quality, enabling us to develop a text de-identification model for French clinical notes with high performance (F1 score 0.94).
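The F1 score used to evaluate the resulting de-identification model is computed over entity spans; an illustrative exact-match sketch (the span format is our assumption, not the study's):

```python
def span_f1(gold, pred):
    """Micro precision/recall/F1 over exact-match entity spans.

    Spans are (start, end, label) tuples; only identical tuples count as hits.
    """
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```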
PMID:39176927 | DOI:10.3233/SHTI240546
Exploring Explainable AI Techniques for Text Classification in Healthcare: A Scoping Review
Stud Health Technol Inform. 2024 Aug 22;316:846-850. doi: 10.3233/SHTI240544.
ABSTRACT
Text classification plays an essential role in the medical domain by organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the decision-making process of AI models. In this paper, we present a scoping review exploring the application of different XAI techniques in medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the necessity for further research in XAI to enhance trust and transparency in AI-driven decision-making processes in healthcare.
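A model-agnostic XAI technique of the kind surveyed here can be illustrated with a simple word-occlusion sketch: drop each word and measure the change in the classifier's score. The scoring function below is a toy stand-in for a real model:

```python
def word_importance(text, score_fn):
    """Model-agnostic explanation: drop each word, measure the score change.

    score_fn: any callable mapping a text to a classification score.
    Returns {word: importance}; higher = removing the word hurts the score more.
    """
    words = text.split()
    base = score_fn(text)
    return {
        w: base - score_fn(" ".join(words[:i] + words[i + 1:]))
        for i, w in enumerate(words)
    }
```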
PMID:39176925 | DOI:10.3233/SHTI240544
Data for AI in Congenital Heart Defects: Systematic Review
Stud Health Technol Inform. 2024 Aug 22;316:820-821. doi: 10.3233/SHTI240537.
ABSTRACT
Congenital heart disease (CHD) represents a significant challenge in prenatal care due to low prenatal detection rates. Artificial Intelligence (AI) offers promising avenues for precise CHD prediction. In this study, we conducted a systematic review according to the PRISMA guidelines, investigating the landscape of AI applications in prenatal CHD detection. Through searches on PubMed, Embase, and Web of Science, 621 articles were screened, yielding 28 relevant studies for analysis. Deep Learning (DL) emerged as the predominant AI approach. Data types were mainly limited to ultrasound and MRI sequences. This comprehensive analysis provides valuable insights for future research and clinical practice in CHD detection using AI applications.
PMID:39176918 | DOI:10.3233/SHTI240537
Causal Deep Learning for the Detection of Adverse Drug Reactions: Drug-Induced Acute Kidney Injury as a Case Study
Stud Health Technol Inform. 2024 Aug 22;316:803-807. doi: 10.3233/SHTI240533.
ABSTRACT
Causal Deep/Machine Learning (CDL/CML) is an emerging Artificial Intelligence (AI) paradigm. The combination of causal inference and AI could mine explainable causal relationships between data features, providing useful insights for various applications, e.g. Pharmacovigilance (PV) signal detection upon Real-World Data. The objective of this study is to demonstrate the use of CDL for potential PV signal validation using Electronic Health Records as input data source.
PMID:39176914 | DOI:10.3233/SHTI240533
Comparing a Large Language Model with Previous Deep Learning Models on Named Entity Recognition of Adverse Drug Events
Stud Health Technol Inform. 2024 Aug 22;316:781-785. doi: 10.3233/SHTI240528.
ABSTRACT
The ability to fine-tune pre-trained deep learning models on a large training set for a downstream task allows significant improvements in named entity recognition performance. Large language models are recent Transformer-based models that may be conditioned on a new task with in-context learning, by providing a series of instructions or a prompt. These models require only a few examples; such an approach is called few-shot learning. Our objective was to compare named entity recognition performance on adverse drug events between state-of-the-art deep learning models fine-tuned on PubMed abstracts and a large language model using few-shot learning. Hussain et al.'s state-of-the-art model (PMID: 34422092) significantly outperformed the ChatGPT-3.5 model (F1-score: 97.6% vs 86.0%). Few-shot learning is a convenient way to perform named entity recognition when training examples are rare, but performance is still inferior to that of a deep learning model fine-tuned on many training examples. Perspectives are to evaluate few-shot prompting with GPT-4 and to perform fine-tuning on GPT-3.5.
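The few-shot (in-context) conditioning described above amounts to assembling labeled examples into a prompt; a sketch of such a prompt builder, with a format that is our assumption rather than the study's exact prompt:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot NER prompt for adverse-drug-event extraction.

    examples: list of (sentence, [entity, ...]) pairs shown to the model.
    query: the new sentence the model should annotate.
    """
    lines = ["Extract adverse drug events (ADE) from each sentence."]
    for sent, ents in examples:
        lines.append(f"Sentence: {sent}\nADE: {', '.join(ents) or 'none'}")
    lines.append(f"Sentence: {query}\nADE:")
    return "\n\n".join(lines)
```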
PMID:39176909 | DOI:10.3233/SHTI240528
Preliminary Evaluation of Fine-Tuning the OpenDeLD Deidentification Pipeline Across Multi-Center Corpora
Stud Health Technol Inform. 2024 Aug 22;316:719-723. doi: 10.3233/SHTI240515.
ABSTRACT
Automatic deidentification of Electronic Health Records (EHR) is a crucial step in secondary usage for biomedical research. This study evaluates a hybrid deidentification strategy to enhance patient privacy in secondary usage of EHR. Specifically, it assesses automatic deidentification using the OpenDeID pipeline across diverse corpora for safeguarding sensitive information within EHR datasets. Three distinct corpora were utilized: the OpenDeID v2 corpus containing pathology reports from Australian hospitals, the 2014 i2b2/UTHealth deidentification corpus with clinical narratives from the USA, and the 2016 CEGS N-GRID deidentification corpus comprising psychiatric notes. The OpenDeID pipeline employs a hybrid approach based on deep learning and contextual rules. Pre-processing steps involved harmonizing the corpora and addressing encoding and format issues. Precision, recall, and F-measure metrics were used to assess performance. The evaluation metrics demonstrated the superior performance of the Discharge Summary BioBERT model. Trained on three corpora with a total of 4,038 reports, the best performing model exhibited robust deidentification capabilities when applied to EHR. It achieved micro-averaged F1-scores of 0.9248 and 0.9692 for strict and relaxed settings, respectively. These results offer valuable insights into the model's efficacy and its potential role in safeguarding patient privacy in secondary usage of EHR.
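The strict versus relaxed F1 settings differ only in how predicted spans are matched to gold spans: exact offsets versus any character overlap. An illustrative sketch of that distinction (span format is our assumption):

```python
def count_matches(gold, pred, relaxed=False):
    """Count gold spans matched by a prediction.

    gold/pred: lists of (start, end) character offsets of sensitive spans.
    strict: identical offsets; relaxed: any character overlap.
    """
    hit = 0
    for gs, ge in gold:
        for ps, pe in pred:
            ok = (ps < ge and gs < pe) if relaxed else (gs, ge) == (ps, pe)
            if ok:
                hit += 1
                break                 # each gold span is matched at most once
    return hit
```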
PMID:39176896 | DOI:10.3233/SHTI240515
Deep Learning Models for Health-Driven Forecasting of Indoor Temperatures in Heat Waves in Canada: An Exploratory Study Using Smart Thermostats
Stud Health Technol Inform. 2024 Aug 22;316:1999-2003. doi: 10.3233/SHTI240826.
ABSTRACT
In Canada, extreme heat occurrences present significant risks to public health, particularly for vulnerable groups like older individuals and those with pre-existing health conditions. Accurately predicting indoor temperatures during these events is crucial for informing public health strategies and mitigating the adverse impacts of extreme heat. While current systems rely on outdoor temperature data, incorporating real-time indoor temperature estimations can significantly enhance decision-making and strengthen overall health system responses. Sensor-based technologies, such as ecobee smart thermostats installed in homes, enable effortless collection of indoor temperature and humidity data. This study evaluates the efficacy of deep learning models in predicting indoor temperatures during heat waves using smart thermostat data, to enhance public health responses. Utilizing ecobee smart thermostats, we analyzed indoor temperature trends and developed forecasting models. Our findings indicate the potential of integrating IoT and deep learning into health warning systems, enabling proactive interventions and improving sustainable health care practices in extreme heat scenarios. This approach highlights the role of digital health innovations in creating resilient and sustainable healthcare systems against climate-related health adversities.
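Forecasting indoor temperature from thermostat logs typically starts by slicing the time series into supervised (window, target) pairs; a minimal sketch of that preprocessing step (the deep learning model itself is omitted, and the parameter names are ours):

```python
def make_windows(series, lookback, horizon):
    """Build (input, target) pairs for sequence forecasting.

    series: chronological readings, e.g. indoor temperatures.
    lookback: number of past readings used as input.
    horizon: how many steps ahead the target lies.
    """
    xs, ys = [], []
    for i in range(len(series) - lookback - horizon + 1):
        xs.append(series[i:i + lookback])            # model input window
        ys.append(series[i + lookback + horizon - 1])  # value to predict
    return xs, ys
```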
PMID:39176885 | DOI:10.3233/SHTI240826
Super-resolution reconstruction for early cervical cancer magnetic resonance imaging based on deep learning
Biomed Eng Online. 2024 Aug 22;23(1):84. doi: 10.1186/s12938-024-01281-5.
ABSTRACT
This study aims to develop a super-resolution (SR) algorithm tailored specifically for enhancing the image quality and resolution of early cervical cancer (CC) magnetic resonance imaging (MRI) images. The proposed method is subjected to both qualitative and quantitative analyses, thoroughly investigating its performance across various upscaling factors and assessing its impact on medical image segmentation tasks. The SR algorithm employed for reconstructing early CC MRI images integrates complex architectures and deep convolutional kernels. Training is conducted on matched pairs of input images through a multi-input model. The research findings highlight the significant advantages of the proposed SR method on two distinct datasets at different upscaling factors. Specifically, at a 2× upscaling factor, the sagittal test set outperforms the state-of-the-art methods in the PSNR index evaluation except the hybrid attention transformer, to which it is second, while the axial test set outperforms the state-of-the-art methods in both PSNR and SSIM index evaluation. At a 4× upscaling factor, both the sagittal test set and the axial test set achieve the best results in the evaluation of PSNR and SSIM indicators. This method not only effectively enhances image quality, but also exhibits superior performance in medical segmentation tasks, thereby providing a more reliable foundation for clinical diagnosis and image analysis.
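PSNR, one of the two indices used above, can be computed directly from pixel values; a minimal sketch over flattened 8-bit images (SSIM is more involved and omitted):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images (flat pixel lists).

    Higher is better; identical images give infinity.
    """
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```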
PMID:39175006 | DOI:10.1186/s12938-024-01281-5
Artificial intelligence in COPD CT images: identification, staging, and quantitation
Respir Res. 2024 Aug 22;25(1):319. doi: 10.1186/s12931-024-02913-z.
ABSTRACT
Chronic obstructive pulmonary disease (COPD) stands as a significant global health challenge, with its intricate pathophysiological manifestations often demanding advanced diagnostic strategies. The recent applications of artificial intelligence (AI) within the realm of medical imaging, especially in computed tomography, present a promising avenue for transformative changes in COPD diagnosis and management. This review delves deep into the capabilities and advancements of AI, particularly focusing on machine learning and deep learning, and their applications in COPD identification, staging, and imaging phenotypes. Emphasis is laid on the AI-powered insights into emphysema, airway dynamics, and vascular structures. The challenges linked with data intricacies and the integration of AI in the clinical landscape are discussed. Lastly, the review casts a forward-looking perspective, highlighting emerging innovations in AI for COPD imaging and the potential of interdisciplinary collaborations, hinting at a future where AI doesn't just support but pioneers breakthroughs in COPD care. Through this review, we aim to provide a comprehensive understanding of the current state and future potential of AI in shaping the landscape of COPD diagnosis and management.
PMID:39174978 | DOI:10.1186/s12931-024-02913-z
TIANA: transcription factors cooperativity inference analysis with neural attention
BMC Bioinformatics. 2024 Aug 22;25(1):274. doi: 10.1186/s12859-024-05852-0.
ABSTRACT
BACKGROUND: Growing evidence suggests that distal regulatory elements are essential for cellular function and states. The sequences within these distal elements, especially motifs for transcription factor binding, provide critical information about the underlying regulatory programs. However, cooperativities between transcription factors that recognize these motifs are nonlinear and multiplexed, rendering traditional modeling methods insufficient to capture the underlying mechanisms. Recent developments in attention mechanisms, which exhibit superior performance in capturing dependencies across input sequences, make them well suited to uncover and decipher intricate dependencies between regulatory elements.
RESULT: We present Transcription factors cooperativity Inference Analysis with Neural Attention (TIANA), a deep learning framework that focuses on interpretability. In this study, we demonstrated that TIANA could discover biologically relevant insights into co-occurring pairs of transcription factor motifs. Compared with existing tools, TIANA showed superior interpretability and robust performance in identifying putative transcription factor cooperativities from co-occurring motifs.
CONCLUSION: Our results suggest that TIANA can be an effective tool to decipher transcription factor cooperativities from distal sequence data. TIANA can be accessed through https://github.com/rzzli/TIANA.
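The attention mechanism underlying TIANA can be illustrated with single-head scaled dot-product attention in NumPy; this is a generic sketch, not TIANA's architecture:

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Single-head scaled dot-product attention; returns output and weights.

    q, k, v: (n_tokens, d) arrays. The weight matrix shows which positions
    attend to which, which is what interpretability methods inspect.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                          # similarity of queries to keys
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w
```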
PMID:39174927 | DOI:10.1186/s12859-024-05852-0
Infection Inspection: using the power of citizen science for image-based prediction of antibiotic resistance in Escherichia coli treated with ciprofloxacin
Sci Rep. 2024 Aug 22;14(1):19543. doi: 10.1038/s41598-024-69341-3.
ABSTRACT
Antibiotic resistance is an urgent global health challenge, necessitating rapid diagnostic tools to combat its threat. This study uses citizen science and image feature analysis to profile the cellular features associated with antibiotic resistance in Escherichia coli. Between February and April 2023, we conducted the Infection Inspection project, in which 5273 volunteers made 1,045,199 classifications of single-cell images from five E. coli strains, labelling them as antibiotic-sensitive or antibiotic-resistant based on their response to the antibiotic ciprofloxacin. User accuracy in image classification reached 66.8 ± 0.1%, lower than our deep learning model's performance at 75.3 ± 0.4%, but both users and the model were more accurate when classifying cells treated at a concentration greater than the strain's own minimum inhibitory concentration. We used the users' classifications to elucidate which visual features influence classification decisions, most importantly the degree of DNA compaction and heterogeneity. We paired our classification data with an image feature analysis which showed that most of the incorrect classifications happened when cellular features varied from the expected response. This understanding informs ongoing efforts to enhance the robustness of our diagnostic methodology. Infection Inspection is another demonstration of the potential for public participation in research, specifically increasing public awareness of antibiotic resistance.
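Aggregating many volunteers' classifications of one image is typically done by majority vote; an illustrative sketch (the project's actual aggregation may differ):

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate volunteer classifications for one image by majority vote.

    labels: e.g. ["resistant", "sensitive", ...]; returns the winning label
    and its vote share, a simple per-image confidence.
    """
    counts = Counter(labels)
    winner, n = counts.most_common(1)[0]
    return winner, n / len(labels)
```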
PMID:39174600 | DOI:10.1038/s41598-024-69341-3