Deep learning

High-throughput platform for label-free sorting of 3D spheroids using deep learning

Tue, 2024-12-24 06:00

Front Bioeng Biotechnol. 2024 Dec 9;12:1432737. doi: 10.3389/fbioe.2024.1432737. eCollection 2024.

ABSTRACT

End-stage liver diseases have an increasing impact worldwide, exacerbated by the shortage of transplantable organs. Recognized as one of the promising solutions, tissue engineering aims at recreating functional tissues and organs in vitro. The integration of bioprinting technologies with biological 3D models, such as multi-cellular spheroids, has enabled the fabrication of tissue constructs that better mimic complex structures and in vivo functionality of organs. However, the lack of methods for large-scale production of homogeneous spheroids has hindered the upscaling of tissue fabrication. In this work, we introduce a fully automated platform, designed for high-throughput sorting of 3D spheroids based on label-free analysis of brightfield images. The compact platform is compatible with standard biosafety cabinets and includes a custom-made microscope and two fluidic systems that optimize single spheroid handling to enhance sorting speed. We use machine learning to classify spheroids based on their bioprinting compatibility. This approach enables complex morphological analysis, including assessing spheroid viability, without relying on invasive fluorescent labels. Furthermore, we demonstrate the efficacy of transfer learning for biological applications, for which acquiring large datasets remains challenging. Utilizing this platform, we efficiently sort mono-cellular and multi-cellular liver spheroids, the latter being used in bioprinting applications, and confirm that the sorting process preserves viability and functionality of the spheroids. By ensuring spheroid homogeneity, our sorting platform paves the way for standardized and scalable tissue fabrication, advancing regenerative medicine applications.
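
At its core, the described label-free classification is transfer learning on brightfield images. A minimal PyTorch sketch of that pattern follows; the ResNet-18 backbone, the two-class "bioprinting-compatible vs. not" head, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze it; only the new
# classification head is trained on the (small) spheroid image dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Hypothetical two-class head: bioprinting-compatible vs. not.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of brightfield images (B, 3, H, W)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone is what makes transfer learning practical here: only the small head must be fit, which suits domains where large labeled datasets are hard to acquire.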

PMID:39717531 | PMC:PMC11663632 | DOI:10.3389/fbioe.2024.1432737

Categories: Literature Watch

Multidisciplinary quantitative and qualitative assessment of IDH-mutant gliomas with full diagnostic deep learning image reconstruction

Tue, 2024-12-24 06:00

Eur J Radiol Open. 2024 Dec 4;13:100617. doi: 10.1016/j.ejro.2024.100617. eCollection 2024 Dec.

ABSTRACT

RATIONALE AND OBJECTIVES: Diagnostic accuracy and therapeutic decision-making for IDH-mutant gliomas in tumor board reviews are based on MRI and multidisciplinary interactions.

MATERIALS AND METHODS: This study explores the feasibility of deep learning-based reconstruction (DLR) in MRI for IDH-mutant gliomas. The research utilizes a multidisciplinary approach, engaging neuroradiologists, neurosurgeons, neuro-oncologists, and radiotherapists to evaluate qualitative aspects of DLR and conventional reconstructed (CR) sequences. Furthermore, quantitative image quality and tumor volumes according to Response Assessment in Neuro-Oncology (RANO) 2.0 standards were assessed.

RESULTS: All DLR sequences consistently outperformed CR sequences (median of 4 for all) in qualitative image quality across all raters (p < 0.001 for all) and revealed higher SNR and CNR values (p < 0.001 for all). Preference for DLR over CR was overwhelming, with ratings of 84% from the neuroradiologist, 100% from the neurosurgeon, 92% from the neuro-oncologist, and 84% from the radiation oncologist. The RANO 2.0-compliant measurements showed no significant difference between the CR and DLR sequences (p = 0.142).
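
For context, SNR and CNR in such reader studies are typically computed from region-of-interest (ROI) statistics. The sketch below uses one common convention (mean signal over the standard deviation of background noise); the ROI placement and exact formula variant of this particular study are not specified here.

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean ROI signal over background-noise std."""
    return signal_roi.mean() / noise_roi.std()

def cnr(tissue_a: np.ndarray, tissue_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(tissue_a.mean() - tissue_b.mean()) / noise_roi.std()
```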

CONCLUSION: This study demonstrates the clinical feasibility of DLR in MR imaging of IDH-mutant gliomas, with significant time savings of 29.6% on average and non-inferior image quality compared with CR. DLR sequences received strong multidisciplinary preference, underscoring their potential for enhancing neuro-oncological decision-making and their suitability for clinical implementation.

PMID:39717474 | PMC:PMC11664152 | DOI:10.1016/j.ejro.2024.100617

Categories: Literature Watch

Automated pediatric brain tumor imaging assessment tool from CBTN: Enhancing suprasellar region inclusion and managing limited data with deep learning

Tue, 2024-12-24 06:00

Neurooncol Adv. 2024 Dec 12;6(1):vdae190. doi: 10.1093/noajnl/vdae190. eCollection 2024 Jan-Dec.

ABSTRACT

BACKGROUND: Fully automatic skull-stripping and tumor segmentation are crucial for monitoring pediatric brain tumors (PBT). Current methods, however, often lack generalizability, particularly for rare tumors in the sellar/suprasellar regions and when applied to real-world clinical data in limited-data scenarios. To address these challenges, we propose AI-driven techniques for skull-stripping and tumor segmentation.

METHODS: Multi-institutional, multi-parametric MRI scans from 527 pediatric patients (n = 336 for skull-stripping, n = 489 for tumor segmentation) with various PBT histologies were processed to train separate nnU-Net-based deep learning models for skull-stripping, whole tumor (WT), and enhancing tumor (ET) segmentation. These models utilized single (T2/FLAIR) or multiple (T1-Gd and T2/FLAIR) input imaging sequences. Performance was evaluated using Dice scores, sensitivity, and 95% Hausdorff distances. Statistical comparisons included paired or unpaired 2-sample t-tests and Pearson's correlation coefficient based on Dice scores from different models and PBT histologies.
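
The Dice score reported throughout these evaluations has a compact definition: 2|A∩B| / (|A| + |B|) for predicted and ground-truth masks A and B. A minimal sketch for binary masks (the convention of scoring two empty masks as 1.0 is an assumption):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```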

RESULTS: Dice scores for the skull-stripping models for whole brain and sellar/suprasellar region segmentation were 0.98 ± 0.01 (median 0.98) for both multi- and single-parametric models, with significant Pearson's correlation coefficient between single- and multi-parametric Dice scores (r > 0.80; P < .05 for all). Whole tumor Dice scores for single-input tumor segmentation models were 0.84 ± 0.17 (median = 0.90) for T2 and 0.82 ± 0.19 (median = 0.89) for FLAIR inputs. Enhancing tumor Dice scores were 0.65 ± 0.35 (median = 0.79) for T1-Gd+FLAIR and 0.64 ± 0.36 (median = 0.79) for T1-Gd+T2 inputs.

CONCLUSION: Our skull-stripping models demonstrate excellent performance and include sellar/suprasellar regions, using single- or multi-parametric inputs. Additionally, our automated tumor segmentation models can reliably delineate whole lesions and ET regions, adapting to MRI sessions with missing sequences in limited-data contexts.

PMID:39717438 | PMC:PMC11664259 | DOI:10.1093/noajnl/vdae190

Categories: Literature Watch

Deep clustering representation of spatially resolved transcriptomics data using multi-view variational graph auto-encoders with consensus clustering

Tue, 2024-12-24 06:00

Comput Struct Biotechnol J. 2024 Dec 2;23:4369-4383. doi: 10.1016/j.csbj.2024.11.041. eCollection 2024 Dec.

ABSTRACT

The rapid development of spatial transcriptomics (ST) technology has provided unprecedented opportunities to understand tissue relationships and functions within specific spatial contexts. Accurate identification of spatial domains is crucial for downstream spatial transcriptomics analysis. However, effectively combining gene expression data, histological images and spatial coordinate data to identify spatial domains remains a challenge. To this end, we propose STMVGAE, a novel spatial transcriptomics analysis tool that combines a multi-view variational graph autoencoder with a consensus clustering framework. STMVGAE begins by extracting histological image features using a pre-trained convolutional neural network (CNN) and integrates these features with gene expression data to generate augmented gene expression profiles. Subsequently, multiple graphs (views) are constructed using various similarity measures, capturing different aspects of the spatial and transcriptional relationships. These views, combined with the augmented gene expression data, are then processed through variational graph auto-encoders (VGAEs) to learn multiple low-dimensional latent embeddings. Finally, the model employs a consensus clustering method to integrate the clustering results derived from these embeddings, significantly improving clustering accuracy and stability. We applied STMVGAE to five real datasets and compared it with five state-of-the-art methods, showing that STMVGAE consistently achieves competitive results. We assessed its capabilities in spatial domain identification and evaluated its performance across various downstream tasks, including UMAP visualization, PAGA trajectory inference, spatially variable gene (SVG) identification, denoising, batch integration, and other analyses. All code and public datasets used in this paper are available at https://github.com/wenwenmin/STMVGAE and https://zenodo.org/records/13119867.
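
As an illustration of the consensus step, one generic co-association scheme is sketched below: each view's latent embedding is clustered separately, pairwise agreements are accumulated into a co-association matrix, and a final partition is derived from it. This is a plausible stand-in under stated assumptions, not necessarily STMVGAE's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def consensus_cluster(embeddings: list[np.ndarray], n_domains: int) -> np.ndarray:
    """Fuse per-view clusterings of latent embeddings via co-association."""
    n = embeddings[0].shape[0]
    coassoc = np.zeros((n, n))
    for z in embeddings:  # one low-dimensional embedding per graph view
        labels = KMeans(n_clusters=n_domains, n_init=10).fit_predict(z)
        coassoc += labels[:, None] == labels[None, :]
    coassoc /= len(embeddings)
    # Agglomerate on the consensus distance (1 - co-association frequency).
    return AgglomerativeClustering(
        n_clusters=n_domains, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - coassoc)
```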

PMID:39717398 | PMC:PMC11664090 | DOI:10.1016/j.csbj.2024.11.041

Categories: Literature Watch

Deformable multi-level feature network applied to nucleus segmentation

Tue, 2024-12-24 06:00

Front Microbiol. 2024 Dec 9;15:1519871. doi: 10.3389/fmicb.2024.1519871. eCollection 2024.

ABSTRACT

INTRODUCTION: The nucleus plays a crucial role in medical diagnosis, and accurate nucleus segmentation is essential for disease assessment. However, existing methods have limitations in handling the diversity of nuclei and differences in staining conditions, restricting their practical application.

METHODS: A novel deformable multi-level feature network (DMFNet) is proposed for nucleus segmentation. This network is based on convolutional neural network and divides feature processing and mask generation into two levels. At the feature level, deformable convolution is used to enhance feature extraction ability, and multi-scale features are integrated through a balanced feature pyramid. At the mask level, a one-stage framework is adopted to directly perform instance segmentation based on location.
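
The deformable-convolution idea at the feature level can be illustrated with torchvision's DeformConv2d, where a regular convolution predicts per-position sampling offsets. This is a generic sketch, not DMFNet's exact block.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """3x3 deformable convolution with offsets predicted from the input."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position: 2 * 3 * 3 = 18 channels.
        self.offset_conv = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deform_conv(x, self.offset_conv(x))

feats = DeformableBlock(64, 128)(torch.randn(1, 64, 32, 32))  # -> (1, 128, 32, 32)
```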

RESULTS: Experimental results on the MoNuSeg 2018 dataset show that the mean average precision (mAP) and mean average recall (mAR) of DMFNet reach 37.8% and 47.4%, respectively, outperforming many current advanced methods. Ablation experiments verified the effectiveness of each module of the network.

DISCUSSION: DMFNet provides an effective solution for nucleus segmentation and has important application value in medical image analysis.

PMID:39717268 | PMC:PMC11665065 | DOI:10.3389/fmicb.2024.1519871

Categories: Literature Watch

Joint extraction of entity and relation based on fine-tuning BERT for long biomedical literatures

Tue, 2024-12-24 06:00

Bioinform Adv. 2024 Dec 5;4(1):vbae194. doi: 10.1093/bioadv/vbae194. eCollection 2024.

ABSTRACT

MOTIVATION: Joint extraction of entities and relations is an important research direction in information extraction. The volume of scientific and technological biomedical literature is increasing rapidly, so automatically extracting entities and their relations from these publications is a key task for advancing biomedical research.

RESULTS: The joint entity and relation extraction model achieves both intra-sentence and cross-sentence extraction, alleviating the problem of long-distance information dependence in long documents. The model incorporates several advanced deep learning techniques: (i) a fine-tuned BERT text classification pre-trained model, (ii) graph convolutional network learning, (iii) robust learning against textual label noise with self-mixup training, and (iv) locally regularized conditional random fields. The model implements the following functions: identifying entities from complex biomedical literature effectively, extracting triples within and across sentences, reducing the effect of noisy data during training, and improving the robustness and accuracy of the model. The experimental results show that the model performs well on the self-built BM_GBD dataset and on public datasets, enabling precise, large language model-enhanced knowledge graph construction for biomedical tasks.
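
To make the BERT fine-tuning step concrete, here is a generic Hugging Face token-classification sketch. The checkpoint and BIO label set are hypothetical placeholders, and the paper's GCN, self-mixup, and CRF components are not reproduced.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical BIO tag inventory for biomedical entity spans.
labels = ["O", "B-GENE", "I-GENE", "B-DISEASE", "I-DISEASE"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)

enc = tokenizer("BRCA1 mutations increase breast cancer risk.", return_tensors="pt")
tags = torch.zeros_like(enc["input_ids"])  # toy gold tags (all "O") to show the shape
loss = model(**enc, labels=tags).loss      # fine-tuning minimizes this token-level loss
loss.backward()
```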

AVAILABILITY AND IMPLEMENTATION: The model and partial code are available on GitHub at https://github.com/zhaix922/Joint-extraction-of-entity-and-relation.

PMID:39717202 | PMC:PMC11665630 | DOI:10.1093/bioadv/vbae194

Categories: Literature Watch

The influence of cardiac substructure dose on survival in a large lung cancer stereotactic radiotherapy cohort using a robust personalized contour analysis

Tue, 2024-12-24 06:00

Phys Imaging Radiat Oncol. 2024 Dec 1;32:100686. doi: 10.1016/j.phro.2024.100686. eCollection 2024 Oct.

ABSTRACT

BACKGROUND/PURPOSE: Radiation-induced cardiac toxicity in lung cancer patients has received increased attention since RTOG 0617. However, large cohort studies with accurate cardiac substructure (CS) contours are lacking, limiting our understanding of the potential influence of individual CSs. Here, we analyse the correlation between CS dose and overall survival (OS) while accounting for deep learning (DL) contouring uncertainty, α/β uncertainty, and different modelling approaches.

MATERIALS/METHODS: This single-institution, retrospective cohort study includes 730 patients with early-stage (I or II) tumours, all treated between 2009 and 2019 with stereotactic body radiotherapy (≥5 Gy per fraction). A DL model was trained on 70 manually contoured patients to contour 12 cardio-vascular structures. Structures with a median Dice score above 0.8 and a mean surface distance (MSD) <2 mm during testing were further analysed. Patient-specific CS dose was used to find the correlation between CS dose and OS with elastic net and random survival forest models (with and without confounding clinical factors). The influence of delineation-induced dose uncertainty on OS was investigated by expanding/contracting the DL-created contours by the MSD ± 2 standard deviations.
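
The random survival forest part of this modelling can be sketched with scikit-survival, one common implementation; the feature matrix below (substructure mean doses) and all numbers are synthetic stand-ins, not study data.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.uniform(0, 20, size=(200, 8))              # hypothetical CS mean doses (Gy)
time = rng.uniform(1, 120, size=200)               # follow-up (months)
event = rng.integers(0, 2, size=200).astype(bool)  # death observed?

y = Surv.from_arrays(event=event, time=time)  # structured survival target
rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)
print(rsf.score(X, y))  # concordance index (here on training data only)
```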

RESULTS: Eight CS contours met the required performance level. The left atrium (LA) mean dose was significantly associated with OS, and an LA mean dose of 3.3 Gy (in EQD2) was identified as a significant dose stratum.

CONCLUSION: Explicitly accounting for input parameter uncertainty in lung cancer survival modelling was crucial in robustly identifying critical CS dose parameters. Using this robust methodology, LA mean dose was revealed as the most influential CS dose parameter.

PMID:39717185 | PMC:PMC11663986 | DOI:10.1016/j.phro.2024.100686

Categories: Literature Watch

Comparison of conventional diffusion-weighted imaging and multiplexed sensitivity-encoding combined with deep learning-based reconstruction in breast magnetic resonance imaging

Tue, 2024-12-24 06:00

Magn Reson Imaging. 2024 Dec 21:110316. doi: 10.1016/j.mri.2024.110316. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the feasibility of multiplexed sensitivity-encoding (MUSE) with deep learning-based reconstruction (DLR) for breast imaging in comparison with conventional diffusion-weighted imaging (DWI) and MUSE alone.

METHODS: This study was conducted using conventional single-shot DWI and MUSE data of female participants who underwent breast magnetic resonance imaging (MRI) from June to December 2023. The k-space data in MUSE were reconstructed using both conventional reconstruction and DLR. Two experienced radiologists conducted quantitative analyses of DWI, MUSE, and MUSE-DLR images by obtaining the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) of lesions and normal tissue and qualitative analyses by using a 5-point Likert scale to assess the image quality. Inter-reader agreement was assessed using the intraclass correlation coefficient (ICC). Image scores, SNR, CNR, and apparent diffusion coefficient (ADC) measurements among the three sequences were compared using the Friedman test, with significance defined at P < 0.05.
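
The Friedman test used to compare the three paired sequences is available in SciPy; the Likert scores below are synthetic placeholders for the study's actual ratings.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
dwi      = rng.integers(2, 5, size=51)  # hypothetical 5-point scores, 51 exams
muse     = rng.integers(3, 6, size=51)
muse_dlr = rng.integers(3, 6, size=51)

stat, p = friedmanchisquare(dwi, muse, muse_dlr)
print(f"Friedman chi-square = {stat:.2f}, P = {p:.4f}")  # significant if P < 0.05
```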

RESULTS: In evaluations of the images of 51 female participants using the three sequences, the two radiologists exhibited good agreement (ICC = 0.540-1.000, P < 0.05). MUSE-DLR showed significantly better SNR than MUSE (P < 0.001), while the ADC values within lesions and tissues did not differ significantly among the three sequences (P = 0.924 and P = 0.636, respectively). In the subjective assessments, MUSE and MUSE-DLR scored significantly higher than conventional DWI in overall image quality, geometric distortion, and axillary lymph node depiction (P < 0.001).

CONCLUSION: In comparison with conventional DWI, MUSE-DLR yielded improved image quality with only a slightly longer acquisition time.

PMID:39716684 | DOI:10.1016/j.mri.2024.110316

Categories: Literature Watch

A tool for CRISPR-Cas9 sgRNA evaluation based on computational models of gene expression

Mon, 2024-12-23 06:00

Genome Med. 2024 Dec 23;16(1):152. doi: 10.1186/s13073-024-01420-6.

ABSTRACT

BACKGROUND: CRISPR is widely used to silence genes by inducing mutations expected to nullify their expression. While numerous computational tools have been developed to design single-guide RNAs (sgRNAs) with high cutting efficiency and minimal off-target effects, only a few tools focus specifically on predicting gene knockouts following CRISPR. These tools consider factors like conservation, amino acid composition, and frameshift likelihood. However, they neglect the impact of CRISPR on gene expression, which can dramatically affect the success of CRISPR-induced gene silencing attempts. Furthermore, information regarding gene expression can be useful even when the objective is not to silence a gene. Therefore, a tool that considers gene expression when predicting CRISPR outcomes is lacking.

RESULTS: We developed EXPosition, the first computational tool that combines models predicting gene knockouts after CRISPR with models that forecast gene expression, offering more accurate predictions of gene knockout outcomes. EXPosition leverages deep-learning models to predict key steps in gene expression: transcription, splicing, and translation initiation. We showed our tool performs better at predicting gene knockout than existing tools across 6 datasets, 4 cell types and ~207k sgRNAs. We also validated our gene expression models using the ClinVar dataset by showing enrichment of pathogenic mutations in high-scoring mutations according to our models.

CONCLUSIONS: We believe EXPosition will enhance both the efficiency and accuracy of genome editing projects, by directly predicting CRISPR's effect on various aspects of gene expression. EXPosition is available at http://www.cs.tau.ac.il/~tamirtul/EXPosition . The source code is available at https://github.com/shaicoh3n/EXPosition .

PMID:39716183 | DOI:10.1186/s13073-024-01420-6

Categories: Literature Watch

Multi-branch CNNFormer: a novel framework for predicting prostate cancer response to hormonal therapy

Mon, 2024-12-23 06:00

Biomed Eng Online. 2024 Dec 23;23(1):131. doi: 10.1186/s12938-024-01325-w.

ABSTRACT

PURPOSE: This study aims to accurately predict the effects of hormonal therapy on prostate cancer (PC) lesions by integrating multi-modality magnetic resonance imaging (MRI) and the clinical marker prostate-specific antigen (PSA). It addresses the limitations of Convolutional Neural Networks (CNNs) in capturing long-range spatial relations and the Vision Transformer (ViT)'s deficiency in localization information due to consecutive downsampling. The research question focuses on improving PC response prediction accuracy by combining both approaches.

METHODS: We propose a 3D multi-branch CNN Transformer (CNNFormer) model, integrating 3D CNN and 3D ViT. Each branch of the model utilizes a 3D CNN to encode volumetric images into high-level feature representations, preserving detailed localization, while the 3D ViT extracts global salient features. The framework was evaluated on a cohort of 39 patients, stratified by PSA biomarker status.
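
A heavily simplified stand-in for one CNNFormer branch is sketched below: a small 3D CNN encoder whose feature grid is flattened into tokens for a transformer encoder. Dimensions and depths are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class CNNFormerBranch(nn.Module):
    """3D CNN encoder followed by a transformer over the token grid."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(dim, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, vol: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(vol)                        # (B, dim, D, H, W), localization kept
        tokens = feats.flatten(2).transpose(1, 2)    # (B, D*H*W, dim)
        return self.transformer(tokens).mean(dim=1)  # pooled global branch feature

out = CNNFormerBranch()(torch.randn(2, 1, 32, 32, 32))  # -> (2, 64)
```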

RESULTS: Our framework achieved remarkable performance in differentiating responders and non-responders to hormonal therapy, with an accuracy of 97.50%, sensitivity of 100%, and specificity of 95.83%. These results demonstrate the effectiveness of the CNNFormer model, despite the cohort's small size.

CONCLUSION: The findings emphasize the framework's potential in enhancing personalized PC treatment planning and monitoring. By combining the strengths of CNN and ViT, the proposed approach offers robust, accurate prediction of PC response to hormonal therapy, with implications for improving clinical decision-making.

PMID:39716178 | DOI:10.1186/s12938-024-01325-w

Categories: Literature Watch

Comparison and analysis of deep learning models for discriminating longitudinal and oblique vaginal septa based on ultrasound imaging

Mon, 2024-12-23 06:00

BMC Med Imaging. 2024 Dec 23;24(1):347. doi: 10.1186/s12880-024-01507-x.

ABSTRACT

BACKGROUND: The longitudinal vaginal septum and the oblique vaginal septum are female Müllerian duct anomalies that are relatively rarely diagnosed in clinical practice but severely threaten fertility. Ultrasound imaging is commonly used to examine the two vaginal malformations, but an accurate differential diagnosis is in practice difficult to make. This study assesses the performance of multiple deep learning models based on ultrasonographic images for distinguishing the longitudinal vaginal septum from the oblique vaginal septum.

METHODS: Cases and ultrasound images of the longitudinal vaginal septum and the oblique vaginal septum were collected. Two convolutional neural network (CNN)-based models (ResNet50 and ConvNeXt-B) and one base-resolution vision transformer (ViT)-based model (ViT-B/16) were selected to construct ultrasonographic classification models. Receiver operating characteristic curve analysis and four indicators, namely accuracy, sensitivity, specificity, and area under the curve (AUC), were used to compare the diagnostic performance of the deep learning models.

RESULTS: A total of 70 cases with 426 ultrasound images were included for deep learning model construction using 5-fold cross-validation. The convolutional neural network-based models (ResNet50 and ConvNeXt-B) presented significantly better case-level discriminative efficacy, with accuracies of 0.842 (variance 0.004; 95% CI 0.639-0.997) and 0.897 (variance 0.004; 95% CI 0.734-1.000), specificities of 0.709 (variance 0.041; 95% CI 0.505-0.905) and 0.811 (variance 0.017; 95% CI 0.622-0.979), and AUCs of 0.842 (variance 0.004; 95% CI 0.639-0.997) and 0.897 (variance 0.004; 95% CI 0.734-1.000), than the transformer-based model (ViT-B/16), with its accuracy of 0.668 (variance 0.014; 95% CI 0.407-0.920), specificity of 0.572 (variance 0.024; 95% CI 0.304-0.831), and AUC of 0.681 (variance 0.030; 95% CI 0.434-0.908). There was no significant difference in AUC between ConvNeXt-B and ResNet50 (P = 0.841).

CONCLUSIONS: The convolutional neural network-based model (ConvNeXt-B) shows promising capability for discriminating longitudinal and oblique vaginal septa on ultrasound images and is a promising candidate for integration into clinical ultrasonographic diagnostic systems.

PMID:39716160 | DOI:10.1186/s12880-024-01507-x

Categories: Literature Watch

Identifying the presence of atrial fibrillation during sinus rhythm using a dual-input mixed neural network with ECG coloring technology

Mon, 2024-12-23 06:00

BMC Med Res Methodol. 2024 Dec 23;24(1):318. doi: 10.1186/s12874-024-02421-0.

ABSTRACT

BACKGROUND: Undetected atrial fibrillation (AF) poses a significant risk of stroke and cardiovascular mortality. However, diagnosing AF in real time can be challenging, as the arrhythmia is often not captured instantly. To address this issue, a deep-learning model was developed to diagnose AF even during arrhythmia-free windows.

METHODS: The proposed method introduces a novel approach that integrates clinical data and electrocardiograms (ECGs) using a colorization technique. This technique recolors ECG images based on patients' demographic information while preserving their original characteristics and incorporating color correlations from statistical data features. Our primary objective is to enhance atrial fibrillation (AF) detection by fusing ECG images with demographic data for colorization. To ensure the reliability of our dataset for training, validation, and testing, we rigorously maintained separation to prevent cross-contamination among these sets. We designed a Dual-input Mixed Neural Network (DMNN) that effectively handles different types of inputs, including demographic and image data, leveraging their mixed characteristics to optimize prediction performance. Unlike previous approaches, this method introduces demographic data through color transformation within ECG images, enriching the diversity of features for improved learning outcomes.
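
The mixed-input idea, an image branch fused with a tabular demographics branch before a shared AF/non-AF head, can be sketched as follows. This toy network is an assumption-laden illustration of the DMNN concept, not the published architecture.

```python
import torch
import torch.nn as nn

class DualInputNet(nn.Module):
    """CNN over the (re)colored ECG image + MLP over demographics, fused."""
    def __init__(self, n_demo: int = 4):
        super().__init__()
        self.img = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.demo = nn.Sequential(nn.Linear(n_demo, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, 2)  # AF vs. non-AF logits

    def forward(self, image: torch.Tensor, demographics: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.img(image), self.demo(demographics)], dim=1))

logits = DualInputNet()(torch.randn(8, 3, 224, 224), torch.randn(8, 4))
```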

RESULTS: The proposed approach yielded promising results on the independent test set, achieving an impressive AUC of 83.4%. This outperformed the AUC of 75.8% obtained when using only the original signal values as input for the CNN. The evaluation of performance improvement revealed significant enhancements, including a 7.6% increase in AUC, an 11.3% boost in accuracy, a 9.4% improvement in sensitivity, an 11.6% enhancement in specificity, and a substantial 25.1% increase in the F1 score. Notably, AI diagnosis of AF was associated with future cardiovascular mortality. For clinical application, over a median follow-up of 71.6 ± 29.1 months, high-risk AI-predicted AF patients exhibited significantly higher cardiovascular mortality (AF vs. non-AF; 47 [18.7%] vs. 34 [4.8%]) and all-cause mortality (176 [52.9%] vs. 216 [26.3%]) compared to non-AF patients. In the low-risk group, AI-predicted AF patients showed slightly elevated cardiovascular (7 [0.7%] vs. 1 [0.3%]) and all-cause mortality (103 [9.0%] vs. 26 [6.4%]) compared with AI-predicted non-AF patients during the six-year follow-up. These findings underscore the potential clinical utility of the AI model in predicting AF-related outcomes.

CONCLUSIONS: This study introduces an ECG colorization approach to enhance atrial fibrillation (AF) detection using deep learning and demographic data, improving performance compared to ECG-only methods. This method is effective in identifying high-risk and low-risk populations, providing valuable features for future AF research and clinical applications, as well as benefiting ECG-based classification studies.

PMID:39716064 | DOI:10.1186/s12874-024-02421-0

Categories: Literature Watch

Hybrid of Deep Feature Extraction and Machine Learning Ensembles for Imbalanced Skin Cancer Datasets

Mon, 2024-12-23 06:00

Exp Dermatol. 2024 Dec;33(12):e70020. doi: 10.1111/exd.70020.

ABSTRACT

Skin cancer remains one of the most common and deadly forms of cancer, necessitating accurate and early diagnosis to improve patient outcomes. To improve classification performance on imbalanced datasets, this study proposes a distinctive approach for classifying skin cancer that utilises both machine learning (ML) and deep learning (DL) methods. We extract features from three different DL models (DenseNet201, Xception, MobileNet) and concatenate them to create an extensive feature set. These features are then fed to several ML algorithms for classification. We utilise ensemble techniques to aggregate the predictions from several classifiers, significantly improving the classification's resilience and accuracy. To address the problem of data imbalance, we employ class weight updates and data augmentation strategies to ensure that the model is thoroughly trained across all classes. Our method shows significant improvements over recent existing approaches in terms of classification accuracy and generalisation. The proposed model achieved accuracies of 98.7% and 94.4%, precisions of 99% and 95%, recalls of 99% and 96%, and F1-scores of 99% and 96% on the HAM10000 and ISIC datasets, respectively. This study offers dermatologists and other medical practitioners valuable insights into the classification of skin cancer.
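
The extract-concatenate-ensemble pipeline can be sketched with Keras backbones as frozen feature extractors feeding scikit-learn classifiers. The specific ensemble members below (random forest plus logistic regression, soft voting) are assumptions; the paper names only the backbones.

```python
import numpy as np
from tensorflow.keras.applications import DenseNet201, Xception, MobileNet
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

# Frozen ImageNet backbones used purely as feature extractors; each model's
# own preprocess_input step is omitted here for brevity.
extractors = [
    DenseNet201(include_top=False, pooling="avg", weights="imagenet"),
    Xception(include_top=False, pooling="avg", weights="imagenet"),
    MobileNet(include_top=False, pooling="avg", weights="imagenet"),
]

def deep_features(images: np.ndarray) -> np.ndarray:
    """Concatenate pooled features from all three backbones (RGB batch input)."""
    return np.concatenate([m.predict(images, verbose=0) for m in extractors], axis=1)

# Soft-voting ensemble; class_weight="balanced" is one way to counter skew.
ensemble = VotingClassifier(
    [("rf", RandomForestClassifier(class_weight="balanced")),
     ("lr", LogisticRegression(max_iter=1000, class_weight="balanced"))],
    voting="soft",
)
# ensemble.fit(deep_features(train_images), train_labels)
```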

PMID:39716023 | DOI:10.1111/exd.70020

Categories: Literature Watch

Identification of STAT3 phosphorylation inhibitors using generative deep learning, virtual screening, molecular dynamics simulations, and biological evaluation for non-small cell lung cancer therapy

Mon, 2024-12-23 06:00

Mol Divers. 2024 Dec 23. doi: 10.1007/s11030-024-11067-5. Online ahead of print.

ABSTRACT

The development of phosphorylation-suppressing inhibitors targeting Signal Transducer and Activator of Transcription 3 (STAT3) represents a promising therapeutic strategy for non-small cell lung cancer (NSCLC). In this study, a generative model was developed using transfer learning and virtual screening, leveraging a comprehensive dataset of STAT3 inhibitors to explore the chemical space for novel candidates. This approach yielded a chemically diverse library of compounds, which were prioritized through molecular docking and molecular dynamics (MD) simulations. Among the identified candidates, the HG110 molecule demonstrated potent suppression of STAT3 phosphorylation at Tyr705 and inhibited its nuclear translocation in IL6-stimulated H441 cells. Rigorous MD simulations further confirmed the stability and interaction profiles of top candidates within the STAT3 binding site. Notably, HG106 and HG110 exhibited superior binding affinities and stable conformations, with favorable interactions involving key residues in the STAT3 binding pocket, outperforming known inhibitors. These findings underscore the potential of generative deep learning to expedite the discovery of selective STAT3 inhibitors, providing a compelling pathway for advancing NSCLC therapies.

PMID:39715975 | DOI:10.1007/s11030-024-11067-5

Categories: Literature Watch

Semi-supervised contour-driven broad learning system for autonomous segmentation of concealed prohibited baggage items

Mon, 2024-12-23 06:00

Vis Comput Ind Biomed Art. 2024 Dec 24;7(1):30. doi: 10.1186/s42492-024-00182-7.

ABSTRACT

With the exponential rise in global air traffic, ensuring swift passenger processing while countering potential security threats has become a paramount concern for aviation security. Although X-ray baggage monitoring is now standard, manual screening has several limitations, including the propensity for errors, and raises concerns about passenger privacy. To address these drawbacks, researchers have leveraged recent advances in deep learning to design threat-segmentation frameworks. However, these models require extensive training data and labour-intensive dense pixel-wise annotations and are fine-tuned separately for each dataset to account for inter-dataset discrepancies. Hence, this study proposes a semi-supervised contour-driven broad learning system (BLS) for X-ray baggage security threat instance segmentation, referred to as C-BLX. The research methodology involved enhancing representation learning and achieving faster training to tackle severe occlusion and class imbalance using a single training routine with limited baggage scans. The proposed framework was trained with minimal supervision using resource-efficient image-level labels to localize illegal items in multi-vendor baggage scans. More specifically, the framework generated candidate region segments from the input X-ray scans based on local intensity transition cues, effectively identifying concealed prohibited items without requiring pixel-wise labels for entire baggage scans. The multi-convolutional BLS exploits the rich complementary features extracted from these region segments to predict object categories, including threat and benign classes. The contours corresponding to the region segments predicted as threats were then utilized to yield the segmentation results. The proposed C-BLX system was thoroughly evaluated on three highly imbalanced public datasets and surpassed other competitive approaches in baggage-threat segmentation, yielding 90.04%, 78.92%, and 59.44% in terms of mIoU on GDXray, SIXray, and Compass-XP, respectively. Furthermore, the limitations of the proposed system in extracting precise region segments in intricate noisy settings were explored, along with potential strategies for overcoming them through post-processing techniques (source code will be available at https://github.com/Divs1159/CNN_BLS).

PMID:39715960 | DOI:10.1186/s42492-024-00182-7

Categories: Literature Watch

Improved enzyme functional annotation prediction using contrastive learning with structural inference

Mon, 2024-12-23 06:00

Commun Biol. 2024 Dec 23;7(1):1690. doi: 10.1038/s42003-024-07359-z.

ABSTRACT

Recent years have witnessed the remarkable progress of deep learning within the realm of scientific disciplines, yielding a wealth of promising outcomes. A prominent challenge within this domain has been the task of predicting enzyme function, a complex problem that has seen the development of numerous computational methods, particularly those rooted in deep learning techniques. However, the majority of these methods have primarily focused on either amino acid sequence data or protein structure data, neglecting the potential synergy of combining both modalities. To address this gap, we propose a Contrastive Learning framework for Enzyme functional ANnotation prediction combined with protein amino acid sequences and Contact maps (CLEAN-Contact). We rigorously evaluate the performance of our CLEAN-Contact framework against the state-of-the-art enzyme function prediction models using multiple benchmark datasets. Using CLEAN-Contact, we predict previously unknown enzyme functions within the proteome of Prochlorococcus marinus MED4. Our findings convincingly demonstrate the substantial superiority of our CLEAN-Contact framework, marking a significant step forward in enzyme function prediction accuracy.
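
A standard contrastive objective for such sequence/structure pairing is the symmetric InfoNCE loss, sketched below; CLEAN-Contact's exact loss and encoders are not reproduced here, and the temperature is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def info_nce(seq_emb: torch.Tensor, struct_emb: torch.Tensor, tau: float = 0.07):
    """Pull matched sequence/contact-map embeddings together, push mismatches apart."""
    seq = F.normalize(seq_emb, dim=1)
    struct = F.normalize(struct_emb, dim=1)
    logits = seq @ struct.t() / tau        # (B, B) scaled cosine similarities
    targets = torch.arange(seq.size(0))    # positives sit on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```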

PMID:39715863 | DOI:10.1038/s42003-024-07359-z

Categories: Literature Watch

Leveraging pharmacovigilance data to predict population-scale toxicity profiles of checkpoint inhibitor immunotherapy

Mon, 2024-12-23 06:00

Nat Comput Sci. 2024 Dec 23. doi: 10.1038/s43588-024-00748-8. Online ahead of print.

ABSTRACT

Immune checkpoint inhibitor (ICI) therapies have made considerable advances in cancer immunotherapy, but the complex and diverse spectrum of ICI-induced toxicities poses substantial challenges to treatment outcomes and computational analysis. Here we introduce DySPred, a dynamic graph convolutional network-based deep learning framework, to map and predict the toxicity profiles of ICIs at the population level by leveraging large-scale real-world pharmacovigilance data. DySPred accurately predicts toxicity risks across diverse demographic cohorts and cancer types, demonstrating resilience in small-sample scenarios and revealing toxicity trends over time. Furthermore, DySPred consistently aligns the toxicity-safety profiles of small-molecule antineoplastic agents with their drug-induced transcriptional alterations. Our study provides a versatile methodology for population-level profiling of ICI-induced toxicities, enabling proactive toxicity monitoring and timely tailoring of treatment and intervention strategies in the advancement of cancer immunotherapy.

PMID:39715829 | DOI:10.1038/s43588-024-00748-8

Categories: Literature Watch

An enhanced classification system of various rice plant diseases based on multi-level handcrafted feature extraction technique

Mon, 2024-12-23 06:00

Sci Rep. 2024 Dec 23;14(1):30601. doi: 10.1038/s41598-024-81143-1.

ABSTRACT

The rice plant is one of the most significant crops in the world, and it suffers from various diseases. Traditional methods for rice disease detection are complex and time-consuming, depending mainly on the expert's experience. The explosive growth in image processing, computer vision, and deep learning techniques provides effective and innovative agriculture solutions for automatically detecting and classifying these diseases. Moreover, more information can be extracted from the input images due to different feature extraction techniques. This paper proposes a new system for detecting and classifying rice plant leaf diseases by fusing different features, including color texture with Local Binary Pattern (LBP) and color features with Color Correlogram (CC). The proposed system consists of five stages. First, the acquisition stage captures RGB images of rice plants. Second, the preprocessing stage applies data augmentation to address class imbalance and logarithmic transformation enhancement to handle illumination problems. Third, the feature extraction stage extracts color features using CC and color texture features using a multi-level multi-channel local binary pattern (MCLBP). Fourth, the feature fusion stage provides complementary and discriminative information by concatenating the two types of features. Finally, the classification stage labels rice images using a one-against-all support vector machine (SVM). The proposed system has been evaluated on three benchmark datasets with six classes: Blast (BL), Bacterial Leaf Blight (BLB), Brown Spot (BS), Tungro (TU), Sheath Blight (SB), and Leaf Smut (LS). The first, second, and third rice leaf disease datasets achieved maximum accuracies of 99.53%, 99.4%, and 99.14%, respectively, with processing times from [Formula: see text]. Hence, the proposed system has achieved promising results compared to other state-of-the-art approaches.
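
The texture half of the descriptor can be sketched with scikit-image's LBP plus a one-against-all SVM; this simplified version uses a single LBP level per color channel and omits the color correlogram and the paper's multi-level MCLBP details.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

def lbp_histogram(channel: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform-LBP histogram for one color channel."""
    lbp = local_binary_pattern(channel, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def color_texture_features(rgb: np.ndarray) -> np.ndarray:
    """Concatenate per-channel LBP histograms of an RGB leaf image."""
    return np.concatenate([lbp_histogram(rgb[..., c]) for c in range(3)])

clf = OneVsRestClassifier(SVC(kernel="rbf"))  # one-against-all SVM over the 6 classes
# clf.fit(np.stack([color_texture_features(img) for img in images]), labels)
```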

PMID:39715807 | DOI:10.1038/s41598-024-81143-1

Categories: Literature Watch

Modernizing histopathological analysis: a fully automated workflow for the digital image analysis of the intestinal microcolony survival assay

Mon, 2024-12-23 06:00

bioRxiv [Preprint]. 2024 Dec 12:2024.12.09.627578. doi: 10.1101/2024.12.09.627578.

ABSTRACT

BACKGROUND: Manual analysis of histopathological images is often time-consuming and painstaking, and it is prone to errors arising from subjective evaluation criteria and human mistakes. To address these issues, we created a fully automated workflow to enumerate jejunal crypts in a microcolony survival assay to quantify gastrointestinal damage from radiation.

METHODS AND MATERIALS: After abdominal irradiation of mice, jejuna were obtained and prepared on histopathologic slides, and crypts were counted manually by trained individuals. The automated workflow (AW) involved obtaining images of jejunal slices from the irradiated mice, followed by cropping and normalizing the individual slice images for resolution and color; using deep learning-based semantic image segmentation to detect crypts on each slice; using a tailored algorithm to enumerate the crypts; and tabulating and saving the results. A graphical user interface (GUI) was developed to allow users to review and correct the automated results.

RESULTS: Manually counted crypts exhibited a mean absolute percent deviation of (34 ± 26)% between individual counters and the group mean, which was reduced to (11 ± 6)% across the 3 most experienced counters. The AW processed a sample image dataset from 60 mice in a few hours and required only a few minutes of active user effort. AW counts deviated from the experts' mean counts by (10 ± 8)%. The AW thereby allowed rapid, automated evaluation of the microcolony survival assay, with accuracy comparable to that of trained experts and without subjective inter-observer variation.
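
The agreement metric reported here, mean absolute percent deviation from a reference count, reduces to a one-liner; the function below is a sketch of that definition, not the authors' code.

```python
import numpy as np

def mean_abs_pct_dev(counts: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute percent deviation of one counter's crypt counts
    from a reference (e.g., the group mean across counters)."""
    return 100.0 * np.mean(np.abs(counts - reference) / reference)
```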

HIGHLIGHTS: We fully automated the digital image analysis of a microcolony survival assay. Analyzing 540 images takes a few hours with only minutes of active user effort. The automated workflow (AW) is just as accurate as trained experts. The AW eliminates subjective inter-observer variation and human error. Human review is possible with the built-in graphical user interface.

PMID:39713436 | PMC:PMC11661163 | DOI:10.1101/2024.12.09.627578

Categories: Literature Watch

Mapping of high-resolution daily particulate matter (PM2.5) concentration at the city level through a machine learning-based downscaling approach

Mon, 2024-12-23 06:00

Environ Monit Assess. 2024 Dec 23;197(1):94. doi: 10.1007/s10661-024-13562-6.

ABSTRACT

PM2.5 pollution is a major global concern, especially in Vietnam, due to its harmful effects on health and the environment. Monitoring local PM2.5 levels is crucial for assessing air quality. However, Vietnam's state-of-the-art (SOTA) dataset, with its 3 km resolution, cannot accurately depict spatial variation in smaller regions. In this research, we investigated machine learning-based downscaling methods to improve the spatial resolution and quality of Vietnam's existing 3 km PM2.5 products using two approaches: traditional machine learning models (random forest, XGBoost, CatBoost, support vector regression (SVR), and a mixed effect model (MEM)) and deep learning models (long short-term memory (LSTM), convolutional neural network (CNN), and convolutional LSTM (ConvLSTM)). Overall, the CatBoost 2-day lag model exhibited superior performance. In terms of modeling, integrating temporal factors into tree-based models can enhance predictive accuracy. Furthermore, when faced with small datasets, traditional machine learning models demonstrate superior performance over complex deep learning approaches. Validating machine and deep learning models against the PM2.5 maps they generate is essential, because models can achieve very high evaluation metrics yet produce maps that are unrealistic in application. In this study, compared to the SOTA PM2.5 maps in Vietnam and the SOTA global maps, the proposed CatBoost 2-day lag model's maps showed a 57% increase in the correlation coefficient (Pearson R), as well as 42-73%, 28-75%, and 39-75% reductions in root mean squared error (RMSE), mean relative error (MRE), and mean absolute error (MAE), respectively. Additionally, the daily, monthly, and year-average maps generated by the CatBoost 2-day lag model effectively capture the spatial distribution and seasonal variations of PM2.5 in Ho Chi Minh City. These findings indicate a substantial enhancement in the accuracy and reliability of downscaled PM2.5 maps.
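
The "2-day lag" construction can be sketched as lagged predictors feeding a CatBoost regressor; the frame below is synthetic and single-cell, whereas the study works with gridded spatio-temporal data.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pm25_coarse": rng.uniform(10, 60, 365),   # 3 km product value at the cell
    "temperature": rng.uniform(20, 35, 365),   # hypothetical covariate
    "pm25_station": rng.uniform(10, 60, 365),  # fine-scale target
})
for lag in (1, 2):  # lagged coarse PM2.5 supplies the temporal context
    df[f"pm25_coarse_lag{lag}"] = df["pm25_coarse"].shift(lag)
df = df.dropna()

X = df.drop(columns="pm25_station")
model = CatBoostRegressor(iterations=300, verbose=False).fit(X, df["pm25_station"])
```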

PMID:39714636 | DOI:10.1007/s10661-024-13562-6

Categories: Literature Watch
