Deep learning
OrganoLabeler: A Quick and Accurate Annotation Tool for Organoid Images
ACS Omega. 2024 Nov 6;9(46):46117-46128. doi: 10.1021/acsomega.4c06450. eCollection 2024 Nov 19.
ABSTRACT
Organoids are self-assembled 3D cellular structures that resemble organs structurally and functionally, providing in vitro platforms for molecular and therapeutic studies. Generation of organoids from human cells often requires long and costly procedures with arguably low efficiency. Prediction and selection of cellular aggregates that result in healthy and functional organoids can be achieved by using artificial intelligence-based tools. Transforming images of 3D cellular constructs into digitally processable data sets for training deep learning models requires labeling of morphological boundaries, which is often performed manually. Here, we report an application named OrganoLabeler, which can create large image-based data sets in a consistent, reliable, fast, and user-friendly manner. OrganoLabeler can create segmented versions of images with combinations of contrast adjustment, K-means clustering, CLAHE, binary, and Otsu thresholding methods. We created embryoid body and brain organoid data sets whose segmented images were manually created by human researchers and compared with OrganoLabeler's output. Validation was performed by training U-Net models, deep learning models specialized in image segmentation. U-Net models trained with images segmented by OrganoLabeler achieved similar or better segmentation accuracies than the ones trained with manually labeled reference images. OrganoLabeler can replace manual labeling, providing faster and more accurate results for organoid research free of charge.
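OrganoLabeler itself is a packaged application; purely as an illustration of one segmentation primitive the abstract names, the sketch below implements Otsu's thresholding in plain Python (function and variable names are ours, not the tool's):

```python
def otsu_threshold(pixels):
    """Return the intensity (0-255) that maximizes between-class
    variance: Otsu's method, one of the segmentation primitives
    combined by tools like OrganoLabeler. Illustrative sketch only."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]            # pixels <= t form the background class
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:  # first maximum wins
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold would be treated as foreground when building a binary mask.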
PMID:39583683 | PMC:PMC11579745 | DOI:10.1021/acsomega.4c06450
Erratum: Bioinformatic analysis reveals the association between bacterial morphology and antibiotic resistance using light microscopy with deep learning
Front Microbiol. 2024 Nov 8;15:1516320. doi: 10.3389/fmicb.2024.1516320. eCollection 2024.
ABSTRACT
[This corrects the article DOI: 10.3389/fmicb.2024.1450804.].
PMID:39583549 | PMC:PMC11582059 | DOI:10.3389/fmicb.2024.1516320
DDKG: A Dual Domain Knowledge Guidance strategy for localization and diagnosis of non-displaced femoral neck fractures
Med Image Anal. 2024 Nov 19;100:103393. doi: 10.1016/j.media.2024.103393. Online ahead of print.
ABSTRACT
X-ray is the primary tool for diagnosing fractures, crucial for determining their type, location, and severity. However, non-displaced femoral neck fractures (ND-FNF) can pose challenges in identification due to subtle cracks and complex anatomical structures. Most deep learning-based methods for diagnosing ND-FNF rely on cropped images, necessitating manual annotation of the hip location, which increases annotation costs. To address this challenge, we propose Dual Domain Knowledge Guidance (DDKG), which harnesses spatial and semantic domain knowledge to guide the model in acquiring robust representations of ND-FNF across the whole X-ray image. Specifically, DDKG comprises two key modules: the Spatial Aware Module (SAM) and the Semantic Coordination Module (SCM). SAM employs limited positional supervision to guide the model in focusing on the hip joint region and reducing background interference. SCM integrates information from radiological reports, utilizes prior knowledge from large language models to extract critical information related to ND-FNF, and guides the model to learn relevant visual representations. During inference, the model only requires the whole X-ray image for accurate diagnosis without additional information. The model was validated on datasets from four different centers, showing consistent accuracy and robustness. Codes and models are available at https://github.com/Yjing07/DDKG.
PMID:39581120 | DOI:10.1016/j.media.2024.103393
Small-data-trained model for predicting nitrate accumulation in one-stage partial nitritation-anammox processes controlled by oxygen supply rate
Water Res. 2024 Nov 15;269:122798. doi: 10.1016/j.watres.2024.122798. Online ahead of print.
ABSTRACT
Nitrate (NO3--N) accumulation is the biggest obstacle for wastewater treatment via the partial nitritation-anammox process. Dissolved oxygen (DO) control is the most widely used strategy to prevent NO3--N accumulation, but its performance is usually unstable. This study proposes a novel strategy for controlling NO3--N accumulation based on the oxygen supply rate (OSR). Mathematical simulation showed that limiting the OSR is more effective than limiting DO in controlling NO3--N accumulation. A laboratory-scale one-stage partial nitritation-anammox system was continuously operated for 135 days, divided into five stages with different OSRs. A novel deep learning model integrating a Gated Recurrent Unit and a Multilayer Perceptron was developed to predict the NO3--N accumulation load. To tackle the general obstacle of limited environmental samples, a generic evaluation was proposed to optimise the model structure by balancing predictive performance against overfitting risk. The developed model successfully predicted the NO3--N accumulation in the system ten days in advance, showcasing its potential contribution to system design and performance enhancement.
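The abstract does not detail the GRU-MLP architecture; for orientation, the standard GRU update such a model builds on looks like this in scalar form (a minimal sketch; parameter names are illustrative):

```python
import math


def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))


def gru_step(x, h, p):
    """One scalar GRU update: the update gate z decides how much of
    the old hidden state h to keep versus the new candidate state."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])  # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])  # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand
```

In a real model x and h are vectors and the products become matrix multiplications; a GRU layer applies this step across the time series of influent measurements.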
PMID:39581117 | DOI:10.1016/j.watres.2024.122798
Rapid detection of corn moisture content based on improved ICEEMDAN algorithm combined with TCN-BiGRU model
Food Chem. 2024 Nov 19;465(Pt 2):142133. doi: 10.1016/j.foodchem.2024.142133. Online ahead of print.
ABSTRACT
Rapid detection of corn moisture content (MC) during maturity is of great significance for field cultivation, mechanical harvesting, storage, and transportation management. However, the traditional drying method and the dielectric-parameter method are cumbersome, time-consuming, and labor-intensive. To overcome these problems, a rapid detection method for corn MC was developed based on improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) combined with a temporal convolutional network-bidirectional gated recurrent unit (TCN-BiGRU) model. First, based on 405 groups of NIR spectral data of corn seeds, the Crested Porcupine Optimizer (CPO) algorithm was used to optimize ICEEMDAN to reduce the noise of the original spectral data. Then the Chaotic Cuckoo Search (CCS) algorithm was applied to extract 203 characteristic wavenumbers from the original spectrum, which were input into the constructed TCN-BiGRU network model to realize corn MC detection. Finally, the CPO-ICEEMDAN-CCS-TCN-BiGRU corn MC classification model was constructed. The results showed that the model accuracy was 97.54%, which was 9.22%, 5.58%, 2.34%, 4.74%, and 5.94% higher than those of convolutional neural network (CNN), long short-term memory network (LSTM), temporal convolutional network (TCN), partial least squares (PLS), and support vector machine (SVM) models, respectively. These results can provide a reliable basis for improving corn yield, quality, and economic benefits.
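The TCN half of the TCN-BiGRU model is built from dilated causal convolutions; a minimal illustrative sketch (not the authors' implementation):

```python
def dilated_causal_conv(seq, kernel, dilation):
    """1D dilated causal convolution: the output at time t depends
    only on inputs at t, t-d, t-2d, ..., which is the building block
    of a temporal convolutional network (TCN)."""
    out = []
    for t in range(len(seq)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation
            if idx >= 0:           # causal: never look into the future
                acc += w * seq[idx]
        out.append(acc)
    return out
```

Stacking such layers with growing dilation (1, 2, 4, ...) lets the receptive field cover long spectral or temporal sequences with few layers.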
PMID:39581096 | DOI:10.1016/j.foodchem.2024.142133
Deep learning techniques for automated Alzheimer's and mild cognitive impairment disease using EEG signals: A comprehensive review of the last decade (2013 - 2024)
Comput Methods Programs Biomed. 2024 Nov 12;259:108506. doi: 10.1016/j.cmpb.2024.108506. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVES: Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD) are progressive neurological disorders that significantly impair cognitive function, memory, and daily activities. They affect millions of individuals worldwide, posing a significant challenge for their diagnosis and management, with detrimental impacts on patients' quality of life and an increased burden on caregivers. Hence, early detection of MCI and AD is crucial for timely intervention and effective disease management.
METHODS: This study presents a comprehensive systematic review focusing on the applications of deep learning in detecting MCI and AD using electroencephalogram (EEG) signals. Through a rigorous literature screening process based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the research has investigated 74 different papers in detail to analyze the different approaches used to detect MCI and AD neurological disorders.
RESULTS: This study is the first to address the classification of combined MCI and AD (MCI+AD) using EEG signals. This unique approach has enabled us to highlight the state-of-the-art high-performing models, specifically focusing on deep learning, while examining their strengths and limitations in detecting MCI, AD, and the comorbid MCI+AD condition.
CONCLUSION: The present study has not only identified the current limitations of deep learning for MCI and AD detection but also proposes specific future directions for addressing these neurological disorders by implementing best-practice deep learning approaches. Our main goal is to offer insights as references for future research, encouraging the development of deep learning techniques for the early detection and diagnosis of MCI and AD. By recommending the most effective deep learning tools, we have also provided a benchmark for future research, with clear implications for the practical use of these techniques in healthcare.
PMID:39581069 | DOI:10.1016/j.cmpb.2024.108506
Investigating streetscape environmental characteristics associated with road traffic crashes using street view imagery and computer vision
Accid Anal Prev. 2024 Nov 23;210:107851. doi: 10.1016/j.aap.2024.107851. Online ahead of print.
ABSTRACT
Examining the relationship between streetscape features and road traffic crashes is vital for enhancing roadway safety. Traditional field surveys are often inefficient and lack comprehensive spatial coverage. Leveraging street view images (SVIs) and deep learning techniques provides a cost-effective alternative for extracting streetscape features. However, prior studies often rely solely on semantic segmentation, overlooking distinctions in feature shapes and contours. This study addresses these limitations by combining semantic segmentation and object detection networks to comprehensively measure streetscape features from Baidu SVIs. Semantic segmentation identifies pixel-level proportions of features such as roads, sidewalks, buildings, fences, trees, and grass, while object detection captures discrete elements like vehicles, pedestrians, and traffic lights. Zero-inflated negative binomial regression models are employed to analyze the impact of these features on three crash types: vehicle-vehicle (VCV), vehicle-pedestrian (VCP), and single-vehicle crashes (SVC). Results show that incorporating streetscape features from combined deep learning methods significantly improves crash prediction. Vehicles have a significant impact on VCV and SVC crashes, whereas pedestrians predominantly affect VCP crashes. Road surfaces, sidewalks, and plants are associated with increased crash risks, while buildings and trees correlate with reduced vehicle crash frequencies. This study highlights the advantages of integrating semantic segmentation and object detection for streetscape analysis and underscores the critical role of environmental characteristics in road traffic crashes. The findings provide actionable insights for urban planning and traffic safety strategies.
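The zero-inflated negative binomial model used here mixes a point mass at zero (road segments that structurally produce no crashes) with a negative binomial count distribution. A minimal pmf sketch under one common parameterization (NB as the number of failures before the r-th success; names and parameterization are ours, not the authors'):

```python
import math


def nb_pmf(k, r, q):
    """Negative binomial pmf: probability of k failures before the
    r-th success, with per-trial success probability q (r integer)."""
    return math.comb(k + r - 1, k) * (q ** r) * ((1.0 - q) ** k)


def zinb_pmf(k, pi, r, q):
    """Zero-inflated NB: with probability pi the count is a structural
    zero; otherwise it is drawn from the NB distribution."""
    base = nb_pmf(k, r, q)
    return pi + (1.0 - pi) * base if k == 0 else (1.0 - pi) * base
```

Fitting the model means estimating pi (often via a logit link on covariates) jointly with the NB parameters by maximum likelihood.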
PMID:39581057 | DOI:10.1016/j.aap.2024.107851
Inequality in breast cancer: Global statistics from 2022 to 2050
Breast. 2024 Nov 22;79:103851. doi: 10.1016/j.breast.2024.103851. Online ahead of print.
ABSTRACT
This study evaluates the global inequalities of breast cancer incidence and mortality from 2022 to 2050 using the latest GLOBOCAN estimates. It focuses on disparities across continents, age groups, and Human Development Index (HDI) levels. In 2022, Africa showed the highest positive slope values of age-standardized rates (world) of mortality vs. incidence, both for those under 40 (0.346) and those 40 and older (0.335). These values contrast with those for Asia (0.085, 0.208), Europe (0.002, -0.014), Latin America and the Caribbean (0.17, 0.303), Northern America (-0.078, -0.188), and Oceania (0.166, -0.001). In both age groups, lower HDI levels are correlated with higher slope values and vice versa. Projections to 2050 indicate significant increases in the burden of breast cancer, with persistent yet varied disparities. This highlights the need for differentiated strategies in breast cancer prevention, early-stage diagnosis, and treatment.
PMID:39580931 | DOI:10.1016/j.breast.2024.103851
Multimodal separation and cross fusion network based on Raman spectroscopy and FTIR spectroscopy for diagnosis of thyroid malignant tumor metastasis
Sci Rep. 2024 Nov 25;14(1):29125. doi: 10.1038/s41598-024-80590-0.
ABSTRACT
The diagnosis of cervical lymph node metastasis from thyroid cancer is an essential stage in the progression of thyroid cancer. The metastasis of cervical lymph nodes directly affects the prognosis and survival rate of patients. Therefore, timely and early diagnosis is crucial for effective treatment and can significantly improve patients' survival rate and quality of life. Traditional diagnostic methods, such as ultrasonography and radionuclide scanning, have limitations, including complex operation and high missed-diagnosis rates. Raman spectroscopy and FTIR spectroscopy reflect the molecular information of samples well, are sensitive and specific, and are simple to operate; they have been widely used in clinical research in recent years. With the development of intelligent medical diagnosis technology, medical data show a multi-modal trend. Compared with single-modal data, multi-modal data fusion can achieve complementary information, provide more comprehensive and valuable diagnostic information, significantly enhance the richness of data features, and improve modeling performance, helping to achieve more accurate disease diagnosis. Existing research mostly uses cascade processing, ignoring important correlations between modalities while not making full use of the intra-modal relationships that are also beneficial to prediction. We developed a new multi-modal separation cross-fusion network (MSCNet) based on deep learning technology. This network fully captures the complementary information between and within modalities through a feature separation module and a feature cross-fusion module, and effectively integrates Raman and FTIR spectral data to accurately diagnose cervical lymph node metastasis of thyroid cancer.
Test results on the serum vibrational spectrum data set of 99 cases of cervical lymph node metastasis showed that the accuracy and AUC of a single Raman spectrum reached 63.63% and 63.78%, respectively, and the accuracy and AUC of a single FTIR spectrum reached 95.84% and 96%, respectively. The accuracy and AUC of Raman spectroscopy combined with FTIR spectroscopy reached 97.95% and 98%, respectively, outperforming existing diagnostic techniques. The omics correlation verification identified correlation pairs between 5 Raman shifts and 84 infrared spectral bands. This study provides new ideas and methods for the early diagnosis of cervical lymph node metastasis of thyroid cancer.
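For context, the AUC values reported above have a simple rank-based interpretation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A generic sketch (the Mann-Whitney formulation; not the authors' evaluation code):

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), counting ties as 1/2.
    O(n*m) pairwise version, fine for small evaluation sets."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.98, as reported for the fused model, means the classifier ranks a metastatic sample above a non-metastatic one 98% of the time.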
PMID:39582068 | DOI:10.1038/s41598-024-80590-0
Fusing multiplex heterogeneous networks using graph attention-aware fusion networks
Sci Rep. 2024 Nov 24;14(1):29119. doi: 10.1038/s41598-024-78555-4.
ABSTRACT
Graph Neural Networks (GNN) emerged as a deep learning framework to generate node and graph embeddings for downstream machine learning tasks. Popular GNN-based architectures operate on networks of single node and edge type. However, a large number of real-world networks include multiple types of nodes and edges. Enabling these architectures to work on networks with multiple node and edge types brings additional challenges due to the heterogeneity of the networks and the multiplicity of the existing associations. In this study, we present a framework, named GRAF (Graph Attention-aware Fusion Networks), to convert multiplex heterogeneous networks to homogeneous networks to make them more suitable for graph representation learning. Using attention-based neighborhood aggregation, GRAF learns the importance of each neighbor per node (called node-level attention) followed by the importance of each network layer (called network layer-level attention). Then, GRAF processes a network fusion step weighing each edge according to the learned attentions. After an edge elimination step based on edge weights, GRAF utilizes Graph Convolutional Networks (GCN) on the fused network and incorporates node features on graph-structured data for a node classification or a similar downstream task. To demonstrate GRAF's generalizability, we applied it to four datasets from different domains and observed that GRAF outperformed or was on par with the baselines and state-of-the-art (SOTA) methods. We were able to interpret GRAF's findings utilizing the attention weights. Source code for GRAF is publicly available at https://github.com/bozdaglab/GRAF .
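GRAF's full implementation is in the linked repository; the node-level attention step described above (a softmax over per-neighbor scores, then a weighted sum) can be sketched with toy scalar features as follows (illustrative only):

```python
import math


def attention_aggregate(h_self, neighbors, score):
    """GAT-style neighborhood aggregation: softmax the attention
    scores of each neighbor, then take the weighted sum of their
    features. Scalar features for clarity; real models use vectors."""
    raw = [score(h_self, h_n) for h_n in neighbors]
    m = max(raw)                              # stabilize the softmax
    exp = [math.exp(s - m) for s in raw]
    z = sum(exp)
    weights = [e / z for e in exp]
    agg = sum(w * h_n for w, h_n in zip(weights, neighbors))
    return agg, weights
```

GRAF applies this idea twice: once per node over its neighbors, and once over network layers, before fusing edges weighted by the learned attentions.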
PMID:39582056 | DOI:10.1038/s41598-024-78555-4
Whole-cell multi-target single-molecule super-resolution imaging in 3D with microfluidics and a single-objective tilted light sheet
Nat Commun. 2024 Nov 24;15(1):10187. doi: 10.1038/s41467-024-54609-z.
ABSTRACT
Multi-target single-molecule super-resolution fluorescence microscopy offers a powerful means of understanding the distributions and interplay between multiple subcellular structures at the nanoscale. However, single-molecule super-resolution imaging of whole mammalian cells is often hampered by high fluorescence background and slow acquisition speeds, especially when imaging multiple targets in 3D. In this work, we have mitigated these issues by developing a steerable, dithered, single-objective tilted light sheet for optical sectioning to reduce fluorescence background and a pipeline for 3D nanoprinting microfluidic systems for reflection of the light sheet into the sample. This easily adaptable microfluidic fabrication pipeline allows for the incorporation of reflective optics into microfluidic channels without disrupting efficient and automated solution exchange. We combine these innovations with point spread function engineering for nanoscale localization of individual molecules in 3D, deep learning for analysis of overlapping emitters, active 3D stabilization for drift correction and long-term imaging, and Exchange-PAINT for sequential multi-target imaging without chromatic offsets. We then demonstrate that this platform, termed soTILT3D, enables whole-cell multi-target 3D single-molecule super-resolution imaging with improved precision and imaging speed.
PMID:39582043 | DOI:10.1038/s41467-024-54609-z
VGAE-CCI: variational graph autoencoder-based construction of 3D spatial cell-cell communication network
Brief Bioinform. 2024 Nov 22;26(1):bbae619. doi: 10.1093/bib/bbae619.
ABSTRACT
Cell-cell communication plays a critical role in maintaining normal biological functions, regulating development and differentiation, and controlling immune responses. The rapid development of single-cell RNA sequencing and spatial transcriptomics sequencing (ST-seq) technologies provides essential data support for in-depth and comprehensive analysis of cell-cell communication. However, ST-seq data are often incomplete and systematically biased, which may reduce the accuracy and reliability of predicting cell-cell communication. Furthermore, other methods for analyzing cell-cell communication mainly focus on individual tissue sections, neglecting cell-cell communication across multiple tissue layers, and fail to comprehensively elucidate cell-cell communication networks within three-dimensional tissues. To address these issues, we propose VGAE-CCI, a deep learning framework based on the Variational Graph Autoencoder, capable of identifying cell-cell communication across multiple tissue layers. Additionally, this model can be applied to spatial transcriptomics data with missing or partially incomplete entries and can cluster cells at single-cell resolution based on spatial encoding information within complex tissues, thereby enabling more accurate inference of cell-cell communication. Finally, we tested our method on six datasets and compared it with other state-of-the-art methods for predicting cell-cell communication. Our method outperformed the others across multiple metrics, demonstrating its efficiency and reliability in predicting cell-cell communication.
PMID:39581873 | DOI:10.1093/bib/bbae619
RNADiffFold: generative RNA secondary structure prediction using discrete diffusion models
Brief Bioinform. 2024 Nov 22;26(1):bbae618. doi: 10.1093/bib/bbae618.
ABSTRACT
Ribonucleic acid (RNA) molecules are essential macromolecules that perform diverse biological functions in living beings. Precise prediction of RNA secondary structures is instrumental in deciphering their complex three-dimensional architecture and functionality. Traditional methodologies for RNA structure prediction, including energy-based and learning-based approaches, often depict RNA secondary structures from a static perspective and rely on stringent a priori constraints. Inspired by the success of diffusion models, in this work we introduce RNADiffFold, an innovative generative approach to RNA secondary structure prediction based on multinomial diffusion. We reconceptualize the prediction of contact maps as akin to pixel-wise segmentation and accordingly train a denoising model to progressively refine the contact maps starting from a noise-infused state. We also devise a potent conditioning mechanism that harnesses features extracted from RNA sequences to steer the model toward generating an accurate secondary structure. These features encompass one-hot encoded sequences, probabilistic maps generated from a pre-trained scoring network, and embeddings and attention maps derived from an RNA foundation model. Experimental results on both within- and cross-family datasets demonstrate RNADiffFold's competitive performance compared with current state-of-the-art methods. Additionally, RNADiffFold has shown a notable proficiency in capturing the dynamic aspects of RNA structures, a claim corroborated by its performance on datasets comprising multiple conformations.
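For intuition, in a uniform multinomial diffusion of the kind RNADiffFold builds on, the forward (noising) marginal interpolates between the one-hot data distribution and the uniform distribution over K categories. A sketch under that standard assumption (notation is ours, not the paper's):

```python
def forward_marginal(one_hot, beta_bar):
    """q(x_t | x_0) for uniform multinomial diffusion: with cumulative
    noise level beta_bar in [0, 1], the category distribution keeps
    x_0 with probability (1 - beta_bar) and otherwise is uniform over
    the K categories (here K = len(one_hot))."""
    k = len(one_hot)
    return [(1.0 - beta_bar) * p + beta_bar / k for p in one_hot]
```

At beta_bar = 0 the distribution is the clean one-hot label (a contact-map pixel); at beta_bar = 1 it is fully uniform, and the denoiser is trained to reverse this trajectory.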
PMID:39581872 | DOI:10.1093/bib/bbae618
Automatic Segmentation of Quadriceps Femoris Cross-Sectional Area in Ultrasound Images: Development and Validation of Convolutional Neural Networks in People With Anterior Cruciate Ligament Injury and Surgery
Ultrasound Med Biol. 2024 Nov 23:S0301-5629(24)00431-9. doi: 10.1016/j.ultrasmedbio.2024.11.004. Online ahead of print.
ABSTRACT
OBJECTIVE: Deep learning approaches such as DeepACSA enable automated segmentation of muscle ultrasound cross-sectional area (CSA). Although they provide fast and accurate results, most are developed using data from healthy populations. The changes in muscle size and quality following anterior cruciate ligament (ACL) injury challenges the validity of these automated approaches in the ACL population. Quadriceps muscle CSA is an important outcome following ACL injury; therefore, our aim was to validate DeepACSA, a convolutional neural network (CNN) approach for ACL injury.
METHODS: Quadriceps panoramic CSA ultrasound images (vastus lateralis [VL] n = 430, rectus femoris [RF] n = 349, and vastus medialis [VM] n = 723) from 124 participants with an ACL injury (age 22.8 ± 7.9 y, 61 females) were used to train CNN models. For VL and RF, combined models included extra images from the healthy participants (n = 153, age 38.2, range 13-78) from whom DeepACSA was originally developed. All models were tested on unseen external validation images (n = 100) from ACL-injured participants. Model-predicted CSA results were compared to manual segmentation results.
RESULTS: All models showed good comparability (ICC > 0.81, standard error of measurement < 14.1%, mean differences < 1.56 cm²) to manual segmentation. Removal of the erroneous predictions resulted in excellent comparability (ICC > 0.94, standard error of measurement < 7.40%, mean differences < 0.57 cm²). Erroneous predictions were 17% for combined VL, 11% for combined RF, and 20% for ACL-only VM models.
CONCLUSION: The new CNN models provided can be used in ACL-injured populations to measure CSA of VL, RF, and VM muscles automatically. The models yield high comparability to manual segmentation results and reduce the burden of manual segmentation.
PMID:39581823 | DOI:10.1016/j.ultrasmedbio.2024.11.004
Pulmonary <sup>129</sup>Xe MRI: CNN Registration and Segmentation to Generate Ventilation Defect Percent with Multi-center Validation
Acad Radiol. 2024 Nov 23:S1076-6332(24)00789-X. doi: 10.1016/j.acra.2024.10.029. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: Hyperpolarized 129Xe MRI quantifies ventilation-defect-percent (VDP), the ratio of 129Xe signal-void to the anatomic 1H MRI thoracic-cavity-volume. VDP is associated with airway inflammation and disease control and serves as a treatable trait in therapy studies. Semi-automated VDP pipelines require time-intensive observer interactions. Current convolutional neural network (CNN) approaches for quantifying VDP lack external validation, which limits multicenter utilization. Our objective was to develop an automated and externally validated deep-learning pipeline to quantify pulmonary 129Xe MRI VDP.
MATERIALS AND METHODS: 1H and 129Xe MRI data from the primary site (Site1) were used to train and test a CNN segmentation and registration pipeline, while two independent sites (Site2 and Site3) provided external validation. Semi-automated and CNN-based registration error was measured using mean-absolute-error (MAE) while segmentation error was measured using generalized-Dice-similarity coefficient (gDSC). CNN and semi-automated VDP were compared using linear regression and Bland-Altman analysis.
RESULTS: Training/testing used data from 205 participants (healthy volunteers, asthma, COPD, long-COVID; mean age = 54 ± 16 y; 119 females) from Site1. External validation used data from 71 participants. CNN and semi-automated 1H and 129Xe registrations agreed (MAE = 0.3°, R² = 0.95 for rotation; 1.1%, R² = 0.79 for scaling; 0.2/0.5 px, R² = 0.96/0.95 for x/y-translation; all p < .001). Thoracic-cavity and ventilation segmentations were also spatially corresponding (gDSC = 0.92 and 0.88, respectively). CNN VDP correlated with semi-automated VDP (Site1 R²/ρ = 0.97/0.95, bias = -0.5%; Site2 R²/ρ = 0.85/0.93, bias = -0.9%; Site3 R²/ρ = 0.95/0.89, bias = -0.8%; all p < .001).
CONCLUSION: An externally validated CNN registration/segmentation model demonstrated strong agreement and low error compared to the semi-automated method. CNN and semi-automated registrations, thoracic-cavity-volume segmentations, and ventilation-volume segmentations were highly correlated, with high gDSC across the datasets.
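For a single binary mask, the generalized Dice similarity coefficient (gDSC) quoted above reduces to the familiar overlap ratio 2|A∩B|/(|A|+|B|); a minimal sketch:

```python
def dice(mask_a, mask_b):
    """Dice similarity of two binary masks given as collections of
    voxel indices: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

The generalized form additionally weights multiple label classes; the binary case above conveys what gDSC = 0.92 for the thoracic-cavity segmentations means in terms of voxel overlap.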
PMID:39581785 | DOI:10.1016/j.acra.2024.10.029
A Multicenter Evaluation of the Impact of Therapies on Deep Learning-based Electrocardiographic Hypertrophic Cardiomyopathy Markers
Am J Cardiol. 2024 Nov 22:S0002-9149(24)00828-2. doi: 10.1016/j.amjcard.2024.11.028. Online ahead of print.
ABSTRACT
Artificial intelligence-enhanced electrocardiography (AI-ECG) can identify hypertrophic cardiomyopathy (HCM) on 12-lead ECGs and offers a novel way to monitor treatment response. While the surgical or percutaneous reduction of the interventricular septum (SRT) represented initial HCM therapies, mavacamten offers an oral alternative. We aimed to assess the use of AI-ECG as a strategy to evaluate biological response to SRT and mavacamten. We applied an AI-ECG model for HCM detection to ECG images from patients who underwent SRT across 3 sites: Yale New Haven Health System (YNHHS), Cleveland Clinic Foundation (CCF), and Atlantic Health System (AHS); and to ECG images from patients receiving mavacamten at YNHHS. A total of 70 patients underwent SRT at YNHHS, 100 at CCF, and 145 at AHS. At YNHHS, there was no significant change in the AI-ECG HCM score before versus after SRT (pre-SRT: median 0.55 [IQR 0.24-0.77] vs post-SRT: 0.59 [0.40-0.75]). The AI-ECG HCM scores also did not improve post SRT at CCF (0.61 [0.32-0.79] vs 0.69 [0.52-0.79]) and AHS (0.52 [0.35-0.69] vs 0.61 [0.49-0.70]). Among 36 YNHHS patients on mavacamten therapy, the median AI-ECG score before starting mavacamten was 0.41 (0.22-0.77), which decreased significantly to 0.28 (0.11-0.50, p <0.001 by Wilcoxon signed-rank test) at the end of a median follow-up period of 237 days. In conclusion, we observed a lack of improvement in AI-based HCM score with SRT, in contrast to a significant decrease with mavacamten. Our approach suggests the potential role of AI-ECG for serial point-of-care monitoring of pathophysiological improvement following medical therapy in HCM using ECG images.
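The Wilcoxon signed-rank test used above to compare pre- and post-mavacamten AI-ECG scores ranks the paired differences by magnitude; a sketch of the test statistic (W only, without the p-value lookup; not the authors' analysis code):

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic: drop zero differences, rank
    the remaining absolute differences (average ranks for ties), and
    return W = min(sum of positive ranks, sum of negative ranks)."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)
```

A small W (differences consistently in one direction) yields a small p-value against the null of no paired change, which is the pattern reported for the mavacamten cohort.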
PMID:39581517 | DOI:10.1016/j.amjcard.2024.11.028
Deep learning-based multiple-CT optimization: An adaptive treatment planning approach to account for anatomical changes in intensity-modulated proton therapy for head and neck cancers
Radiother Oncol. 2024 Nov 22:110650. doi: 10.1016/j.radonc.2024.110650. Online ahead of print.
ABSTRACT
BACKGROUND: Intensity-modulated proton therapy (IMPT) is particularly susceptible to range and setup uncertainties, as well as anatomical changes.
PURPOSE: We present a framework for IMPT planning that employs a deep learning method for dose prediction based on multiple-CT (MCT). The extra CTs are created from cone-beam CT (CBCT) using deformable registration with the primary planning CT (PCT). Our method also includes a dose mimicking algorithm.
METHODS: The MCT IMPT planning pipeline involves prediction of robust dose from input images using a deep learning model with a U-net architecture. Deliverable plans may then be created by solving a dose mimicking problem with the predictions as reference dose. Model training, dose prediction, and plan generation were performed using a dataset of 55 patients with head and neck cancer in this retrospective study. Among them, 38 patients were used as the training set, 7 as the validation set, and 10 were reserved as the test set for final evaluation.
RESULTS: We demonstrated that the deliverable plans generated through subsequent MCT dose mimicking exhibited greater robustness than the robust plans produced by the PCT, as well as enhanced dose sparing for organs at risk. MCT plans had lower D2% (76.1 Gy vs. 82.4 Gy), better homogeneity index (7.7 % vs. 16.4 %) of CTV1 and better conformity index (70.5 % vs. 61.5 %) of CTV2 than the robust plans produced by the primary planning CT for all test patients.
CONCLUSIONS: We demonstrated the feasibility and advantages of incorporating daily CBCT images into MCT optimization. This approach improves plan robustness against anatomical changes and may reduce the need for plan adaptations in head and neck cancer treatments.
PMID:39581351 | DOI:10.1016/j.radonc.2024.110650
In silico identification of Histone Deacetylase inhibitors using Streamlined Masked Transformer-based Pretrained features
Methods. 2024 Nov 22:S1046-2023(24)00246-9. doi: 10.1016/j.ymeth.2024.11.009. Online ahead of print.
ABSTRACT
Histone Deacetylases (HDACs) are enzymes that regulate gene expression by removing acetyl groups from histones. They are involved in various diseases, including neurodegenerative, cardiovascular, inflammatory, and metabolic disorders, as well as fibrosis in the liver, lungs, and kidneys. Successfully identifying potent HDAC inhibitors may offer a promising approach to treating these diseases. In addition to experimental techniques, researchers have introduced several in silico methods for identifying HDAC inhibitors. However, these existing computer-aided methods have shortcomings in their modeling stages, which limit their applications. In our study, we present a Streamlined Masked Transformer-based Pretrained (SMTP) encoder, which can be used to generate features for downstream tasks. The training process of the SMTP encoder was directed by masked attention-based learning, enhancing the model's generalizability in encoding molecules. The SMTP features were used to develop 11 classification models identifying 11 HDAC isoforms. We trained SMTP, a lightweight encoder, with only 1.9 million molecules, a smaller number than other known molecular encoders, yet its discriminative ability remains competitive. The results revealed that machine learning models developed using the SMTP feature set outperformed those developed using other feature sets in 8 out of 11 classification tasks. Additionally, chemical diversity analysis confirmed the encoder's effectiveness in distinguishing between two classes of molecules.
PMID:39581247 | DOI:10.1016/j.ymeth.2024.11.009
MoAGL-SA: a multi-omics adaptive integration method with graph learning and self attention for cancer subtype classification
BMC Bioinformatics. 2024 Nov 23;25(1):364. doi: 10.1186/s12859-024-05989-y.
ABSTRACT
BACKGROUND: The integration of multi-omics data through deep learning has greatly improved cancer subtype classification, particularly in feature learning and multi-omics data integration. However, key challenges remain in embedding sample structure information into the feature space and designing flexible integration strategies.
RESULTS: We propose MoAGL-SA, an adaptive multi-omics integration method based on graph learning and self-attention, to address these challenges. First, patient relationship graphs are generated from each omics dataset using graph learning. Next, three-layer graph convolutional networks are employed to extract omic-specific graph embeddings. Self-attention is then used to focus on the most relevant omics, adaptively assigning weights to different graph embeddings for multi-omics integration. Finally, cancer subtypes are classified using a softmax classifier.
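The pipeline described above (per-omics patient graphs, three-layer graph convolutions, self-attention over omics embeddings, softmax classification) can be sketched in NumPy. All dimensions, the k-nearest-neighbour graph construction, and the random weights are assumptions for illustration, not MoAGL-SA's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_graph(x, k=5):
    """Symmetric k-NN patient graph from one omics matrix, with self-loops
    and symmetric degree normalisation (as in standard GCNs)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    n = x.shape[0]
    a = np.zeros((n, n))
    for i in range(n):
        a[i, np.argsort(d[i])[:k]] = 1.0
    a = np.maximum(a, a.T) + np.eye(n)
    deg = a.sum(1)
    return a / np.sqrt(np.outer(deg, deg))

def gcn_embed(a_norm, x, weights):
    """Three-layer graph convolution: H <- ReLU(A_norm @ H @ W)."""
    h = x
    for w in weights:
        h = np.maximum(a_norm @ h @ w, 0.0)
    return h

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

n_patients, n_classes, dim = 30, 4, 16
omics = [rng.normal(size=(n_patients, 50)) for _ in range(3)]  # e.g. mRNA, methylation, miRNA

# One graph embedding per omics layer, each from its own patient graph.
embeds = []
for x in omics:
    ws = [rng.normal(scale=0.1, size=(x.shape[1] if i == 0 else dim, dim))
          for i in range(3)]
    embeds.append(gcn_embed(knn_graph(x), x, ws))
h = np.stack(embeds, axis=1)                    # (patients, omics, dim)

# Self-attention over the omics axis: score each embedding, weight, and sum.
w_att = rng.normal(scale=0.1, size=(dim, 1))
att = softmax((h @ w_att).squeeze(-1), axis=1)  # per-patient omics weights, rows sum to 1
fused = (att[..., None] * h).sum(axis=1)        # adaptive multi-omics integration

w_cls = rng.normal(scale=0.1, size=(dim, n_classes))
probs = softmax(fused @ w_cls)                  # cancer-subtype probabilities
```

In practice the GCN and attention weights would be trained end-to-end against subtype labels; the sketch only shows how the pieces compose.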
CONCLUSIONS: Experimental results show that MoAGL-SA outperforms several popular algorithms on datasets for breast invasive carcinoma, kidney renal papillary cell carcinoma, and kidney renal clear cell carcinoma. Additionally, MoAGL-SA successfully identifies key biomarkers for breast invasive carcinoma.
PMID:39580382 | DOI:10.1186/s12859-024-05989-y
Accelerated spine MRI with deep learning based image reconstruction: a prospective comparison with standard MRI
Acad Radiol. 2024 Nov 22:S1076-6332(24)00850-X. doi: 10.1016/j.acra.2024.11.004. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: To evaluate the performance of deep learning (DL) reconstructed MRI in terms of image acquisition time, overall image quality and diagnostic interchangeability compared to standard-of-care (SOC) MRI.
MATERIALS AND METHODS: This prospective study recruited participants with spinal discomfort between July 2023 and August 2023. All participants underwent two separate MRI examinations (standard and accelerated scanning). Signal-to-noise ratios (SNR), contrast-to-noise ratios (CNR), and similarity metrics were calculated for quantitative evaluation. Four radiologists performed subjective quality and lesion characteristic assessment. The Wilcoxon test was used to assess differences in SNR, CNR, and subjective image quality between DL and SOC. Various spinal lesions were also tested for interchangeability using the individual equivalence index. Interreader and intrareader agreement and concordance (κ, Kendall τ, and Kendall W statistics) were computed, and McNemar tests were performed for comprehensive evaluation.
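The quantitative metrics named above follow standard definitions: SNR is mean signal over the standard deviation of background noise, and CNR contrasts two tissue means against the same noise estimate. A minimal sketch with simulated ROI values (the paper's actual ROI placement and per-patient protocol are not specified in the abstract):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over background-noise std."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, relative to noise."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(1)
# Toy ROIs: the DL reconstruction is simulated with lower background noise.
cord_soc, cord_dl = rng.normal(400, 20, 500), rng.normal(400, 12, 500)
csf = rng.normal(150, 20, 500)
noise_soc, noise_dl = rng.normal(0, 10, 500), rng.normal(0, 6, 500)

print(f"SNR  SOC={snr(cord_soc, noise_soc):.1f}  DL={snr(cord_dl, noise_dl):.1f}")
print(f"CNR  SOC={cnr(cord_soc, csf, noise_soc):.1f}  DL={cnr(cord_dl, csf, noise_dl):.1f}")
```

Per-patient SNR/CNR values computed this way would then be compared between DL and SOC with a paired Wilcoxon signed-rank test, as in the study.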
RESULTS: 200 participants (107 male, mean age 46.56 ± 17.07 years) were included. Compared with SOC, DL reduced scan time by approximately 40%. The SNR and CNR of DL were significantly higher than those of SOC (P < 0.001). DL showed varying degrees of improvement (0-0.35) across the similarity metrics. All absolute individual equivalence indexes were less than 4%, indicating interchangeability between SOC and DL. Kappa and Kendall statistics showed good to near-perfect agreement, ranging from 0.72 to 0.98. There was no significant difference between SOC and DL in subjective scoring or frequency of lesion detection.
CONCLUSION: Compared to SOC, DL provided high-quality images for diagnosis and reduced examination time for patients. DL was found to be interchangeable with SOC in detecting various spinal abnormalities.
PMID:39580249 | DOI:10.1016/j.acra.2024.11.004