Deep learning
Comparative analysis of traditional machine learning and automated machine learning: advancing inverted papilloma versus associated squamous cell carcinoma diagnosis
Int Forum Allergy Rhinol. 2024 Aug 26. doi: 10.1002/alr.23438. Online ahead of print.
ABSTRACT
Inverted papilloma (IP) conversion to squamous cell carcinoma (IP-SCC) is not always easy to predict. Automated machine learning (AutoML) requires much less technical knowledge and skill to use than traditional machine learning (ML). AutoML surpassed the traditional ML algorithm in differentiating IP from IP-SCC.
PMID:39186252 | DOI:10.1002/alr.23438
CT-based multimodal deep learning for non-invasive overall survival prediction in advanced hepatocellular carcinoma patients treated with immunotherapy
Insights Imaging. 2024 Aug 26;15(1):214. doi: 10.1186/s13244-024-01784-8.
ABSTRACT
OBJECTIVES: To develop a deep learning model combining CT scans and clinical information to predict overall survival in advanced hepatocellular carcinoma (HCC).
METHODS: This retrospective study included immunotherapy-treated advanced HCC patients from 52 multi-national in-house centers between 2018 and 2022. A multi-modal prognostic model using baseline and first follow-up CT images and 7 clinical variables was proposed. A convolutional-recurrent neural network (CRNN) was developed to extract spatial-temporal information from automatically selected representative 2D CT slices and provide a radiological score, which was then fused with a Cox-based clinical score to yield the survival risk. The model's effectiveness was assessed using the time-dependent area under the receiver operating characteristic curve (AUC), and risk group stratification was assessed using the log-rank test. Prognostic performance of the multi-modal inputs was compared to models with missing modalities and to the size-based RECIST criteria.
RESULTS: Two hundred seven patients (mean age, 61 years ± 12 [SD]; 180 men) were included. The multi-modal CRNN model reached AUCs of 0.777 and 0.704 for 1-year overall survival prediction in the validation and test sets, respectively. The model achieved significant risk stratification in the validation (hazard ratio [HR] = 3.330, p = 0.008) and test sets (HR = 2.024, p = 0.047) based on the median risk score of the training set. Models with missing modalities (the single-modal imaging-based model and the model incorporating only baseline scans) still achieved favorable risk stratification performance (all p < 0.05, except for one, p = 0.053). Moreover, the results demonstrated the superiority of the deep learning-based model over the RECIST criteria.
CONCLUSION: Deep learning analysis of CT scans and clinical data can offer significant prognostic insights for patients with advanced HCC.
CRITICAL RELEVANCE STATEMENT: The established model can help monitor patients' disease statuses and identify those with poor prognosis at the time of first follow-up, helping clinicians make informed treatment decisions, as well as early and timely interventions.
KEY POINTS: An AI-based prognostic model was developed for advanced HCC using multi-national patients. The model extracts spatial-temporal information from CT scans and integrates it with clinical variables to prognosticate. The model demonstrated superior prognostic ability compared to the conventional size-based RECIST method.
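The time-dependent AUC metric reported in this abstract can be illustrated with a simplified sketch. This is not the study's code: `auc_at_time` is a hypothetical helper implementing the naive cumulative/dynamic variant, where patients who died before the horizon are positives, patients still at risk after it are negatives, and patients censored before the horizon are excluded.

```python
# Illustrative sketch (not the authors' implementation) of a naive
# time-dependent AUC for evaluating survival risk scores at a horizon t.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_at_time(risk, time, event, t):
    """Naive cumulative/dynamic AUC at horizon t.

    risk  : higher = predicted worse prognosis
    time  : observed follow-up time
    event : 1 if death observed, 0 if censored
    """
    pos = (time <= t) & (event == 1)   # died before t
    neg = time > t                     # still at risk after t
    keep = pos | neg                   # drop patients censored before t
    return roc_auc_score(pos[keep].astype(int), risk[keep])

# Synthetic data: survival times shorten as the risk score rises.
rng = np.random.default_rng(0)
n = 200
risk = rng.normal(size=n)
time = rng.exponential(scale=np.exp(-risk), size=n)
event = rng.integers(0, 2, size=n)
auc = auc_at_time(risk, time, event, t=0.5)
print(round(auc, 3))
```

Because the synthetic risk score genuinely drives the survival times, the resulting AUC lands well above 0.5.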
PMID:39186192 | DOI:10.1186/s13244-024-01784-8
Automated peripheral nerve segmentation for MR-neurography
Eur Radiol Exp. 2024 Aug 26;8(1):97. doi: 10.1186/s41747-024-00503-8.
ABSTRACT
BACKGROUND: Magnetic resonance neurography (MRN) is increasingly used as a diagnostic tool for peripheral neuropathies. Quantitative measures enhance MRN interpretation but require nerve segmentation which is time-consuming and error-prone and has not become clinical routine. In this study, we applied neural networks for the automated segmentation of peripheral nerves.
METHODS: A neural segmentation network was trained to segment the sciatic nerve and its proximal branches on the MRN scans of the right and left upper leg of 35 healthy individuals, resulting in 70 training examples, via 5-fold cross-validation (CV). The model performance was evaluated on an independent test set of one-sided MRN scans of 60 healthy individuals.
RESULTS: Mean Dice similarity coefficient (DSC) in CV was 0.892 (95% confidence interval [CI]: 0.888-0.897) with a mean Jaccard index (JI) of 0.806 (95% CI: 0.799-0.814) and mean Hausdorff distance (HD) of 2.146 (95% CI: 2.184-2.208). For the independent test set, DSC and JI were lower while HD was higher, with a mean DSC of 0.789 (95% CI: 0.760-0.815), mean JI of 0.672 (95% CI: 0.642-0.699), and mean HD of 2.118 (95% CI: 2.047-2.190).
CONCLUSION: The deep learning-based segmentation model showed a good performance for the task of nerve segmentation. Future work will focus on extending training data and including individuals with peripheral neuropathies in training to enable advanced peripheral nerve disease characterization.
RELEVANCE STATEMENT: The results will serve as a baseline to build upon while developing an automated quantitative MRN feature analysis framework for application in routine reading of MRN examinations.
KEY POINTS: Quantitative measures enhance MRN interpretation, requiring complex and challenging nerve segmentation. We present a deep learning-based segmentation model with good performance. Our results may serve as a baseline for clinical automated quantitative MRN segmentation.
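The three segmentation metrics reported above (Dice similarity coefficient, Jaccard index, Hausdorff distance) can be sketched on toy binary masks. This is a generic illustration with NumPy and SciPy, not the study's evaluation code; the Hausdorff distance here is computed over all foreground pixel coordinates, in pixel units.

```python
# Hedged sketch of the DSC, JI, and HD metrics on toy 2D binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def hausdorff(a, b):
    """Symmetric Hausdorff distance between foreground pixel sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

a = np.zeros((10, 10), bool); a[2:7, 2:7] = True   # 5x5 square
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True   # same square, shifted by (1, 1)
print(dice(a, b), jaccard(a, b), hausdorff(a, b))  # 0.64, ~0.47, sqrt(2)
```

For these two overlapping squares the intersection is 16 pixels out of 25 each, giving Dice 2·16/50 = 0.64 and Jaccard 16/34 ≈ 0.47, while the worst-case boundary mismatch is one diagonal step, √2.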
PMID:39186183 | DOI:10.1186/s41747-024-00503-8
Deep learning links localized digital pathology phenotypes with transcriptional subtype and patient outcome in glioblastoma
Gigascience. 2024 Jan 2;13:giae057. doi: 10.1093/gigascience/giae057.
ABSTRACT
BACKGROUND: Deep learning has revolutionized medical image analysis in cancer pathology, where it had a substantial clinical impact by supporting the diagnosis and prognostic rating of cancer. Among the first available digital resources in the field of brain cancer is glioblastoma, the most common and fatal brain cancer. At the histologic level, glioblastoma is characterized by abundant phenotypic variability that is poorly linked with patient prognosis. At the transcriptional level, 3 molecular subtypes are distinguished with mesenchymal-subtype tumors being associated with increased immune cell infiltration and worse outcome.
RESULTS: We address genotype-phenotype correlations by applying an Xception convolutional neural network to a discovery set of 276 digital hematoxylin and eosin (H&E) slides with molecular subtype annotation and an independent The Cancer Genome Atlas-based validation cohort of 178 cases. Using this approach, we achieve high accuracy in H&E-based mapping of molecular subtypes (area under the curve for classical, mesenchymal, and proneural = 0.84, 0.81, and 0.71, respectively; P < 0.001) and regions associated with worse outcome (univariable survival model P < 0.001, multivariable P = 0.01). The latter were characterized by higher tumor cell density (P < 0.001), phenotypic variability of tumor cells (P < 0.001), and decreased T-cell infiltration (P = 0.017).
CONCLUSIONS: We modify a well-known convolutional neural network architecture for glioblastoma digital slides to accurately map the spatial distribution of transcriptional subtypes and regions predictive of worse outcome, thereby showcasing the relevance of artificial intelligence-enabled image mining in brain cancer.
PMID:39185700 | DOI:10.1093/gigascience/giae057
Fully automated hybrid approach on conventional MRI for triaging clinically significant liver fibrosis: A multi-center cohort study
J Med Virol. 2024 Aug;96(8):e29882. doi: 10.1002/jmv.29882.
ABSTRACT
Establishing reliable noninvasive tools to precisely diagnose clinically significant liver fibrosis (SF, ≥F2) remains an unmet need. We aimed to build a combined radiomics-clinic (CoRC) model for triaging SF and explore the additive value of the CoRC model to transient elastography-based liver stiffness measurement (FibroScan, TE-LSM). This retrospective study recruited 595 patients with biopsy-proven liver fibrosis at two centers between January 2015 and December 2021. At Center 1, patients enrolled before December 2018 were randomly split into training (276) and internal test (118) sets; the remainder formed a time-independent temporal test set (96). Another data set (105) from Center 2 was collected for external testing. Radiomics scores were built from features selected from deep learning-based (ResUNet) automated whole-liver segmentations on MRI (T2FS and delayed enhanced-T1WI). The CoRC model incorporated radiomics scores and relevant clinical variables with logistic regression, and was compared against routine approaches. Diagnostic performance was evaluated by the area under the receiver operating characteristic curve (AUC). The additive value of the CoRC model to TE-LSM was investigated, considering necroinflammation. The CoRC model achieved AUCs of 0.79 (0.70-0.86), 0.82 (0.73-0.89), and 0.81 (0.72-0.91) in the internal, temporal, and external test sets, outperforming FIB-4 and APRI (all p < 0.05), and maintained its discriminatory power in G0-1 subgroups (AUC range, 0.85-0.86; all p < 0.05). The AUCs of the joint CoRC-LSM model were 0.86 (0.79-0.94) and 0.81 (0.72-0.90) in the internal and temporal sets (p = 0.01). The CoRC model was useful for triaging SF and may add value to TE-LSM.
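The fusion step described above, a logistic regression combining a radiomics score with clinical variables, can be sketched on synthetic data. This is not the study's model: the feature names and effect sizes are made up for illustration, and scikit-learn stands in for whatever tooling the authors used.

```python
# Illustrative sketch of a combined radiomics-clinic logistic model
# scored by AUC on held-out data (synthetic toy data, not the study's).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
radiomics_score = rng.normal(size=n)
clinical = rng.normal(size=(n, 3))  # three hypothetical clinical variables

# Simulate a binary fibrosis label driven by both modalities.
logit = 1.5 * radiomics_score + clinical @ np.array([0.8, -0.5, 0.3])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([radiomics_score, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 3))
```

A linear model is a deliberate design choice for this fusion step: with only a handful of inputs it remains interpretable, and its coefficients directly show how much the radiomics score adds over the clinical variables.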
PMID:39185672 | DOI:10.1002/jmv.29882
Corrigendum to: Deep Learning-based Automated Knee Joint Localization in Radiographic Images Using Faster R-CNN
Curr Med Imaging. 2024;20:e060624230768. doi: 10.2174/157340562001240606112211.
ABSTRACT
In the online version of the article, a change was made to the authors' affiliation. The affiliation of Drs. Sivakumari and Vani in the online version of the article entitled "Deep Learning-based Automated Knee Joint Localization in Radiographic Images Using Faster R-CNN" has been updated in Current Medical Imaging, 2024; 20: e15734056262464 [1]. The original article can be found online at: https://www.eurekaselect.com/article/135374 Original: T. Sivakumari1 and R. Vani1* 1SRM Institute of Science and Technology, Faculty of Engineering and Technology, Ramapuram, Chennai, Tamil Nadu, India Corrected: T. Sivakumari1 and R. Vani1* 1Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Faculty of Engineering and Technology, Ramapuram, Chennai, Tamil Nadu, India.
PMID:39185659 | DOI:10.2174/157340562001240606112211
Corrigendum to: Super-resolution based Nodule Localization in Thyroid Ultrasound Images through Deep Learning
Curr Med Imaging. 2024;20:e250724232275. doi: 10.2174/157340562001240725140602.
ABSTRACT
The funding details have been incorporated upon the authors' request in the funding section of the article entitled "Super-resolution based Nodule Localization in Thyroid Ultrasound Images through Deep Learning," 2024, 20, e15734056269264 [1]. The original article can be found online at: https://www.eurekaselect.com/article/140408 Details of the error and a correction are provided here. Original: FUNDING None. Corrected: FUNDING This research is funded by the research project of Qingpu Branch of Zhongshan Hospital affiliated with the Fudan University (Project Number: QYM2022-09) and the research project of Qingpu District Health Commission (Project Number: QWJ2023-19).
PMID:39185658 | DOI:10.2174/157340562001240725140602
Deep multimodal saliency parcellation of cerebellar pathways: Linking microstructure and individual function through explainable multitask learning
Hum Brain Mapp. 2024 Aug 15;45(12):e70008. doi: 10.1002/hbm.70008.
ABSTRACT
Parcellation of human cerebellar pathways is essential for advancing our understanding of the human brain. Existing diffusion magnetic resonance imaging tractography parcellation methods have been successful in defining major cerebellar fibre tracts, while relying solely on fibre tract structure. However, each fibre tract may relay information related to multiple cognitive and motor functions of the cerebellum. Hence, it may be beneficial for parcellation to consider the potential importance of the fibre tracts for individual motor and cognitive functional performance measures. In this work, we propose a multimodal data-driven method for cerebellar pathway parcellation, which incorporates both measures of microstructure and connectivity, and measures of individual functional performance. Our method involves first training a multitask deep network to predict various cognitive and motor measures from a set of fibre tract structural features. The importance of each structural feature for predicting each functional measure is then computed, resulting in a set of structure-function saliency values that are clustered to parcellate cerebellar pathways. We refer to our method as Deep Multimodal Saliency Parcellation (DeepMSP), as it computes the saliency of structural measures for predicting cognitive and motor functional performance, with these saliencies being applied to the task of parcellation. Applying DeepMSP to a large-scale dataset from the Human Connectome Project Young Adult study (n = 1065), we found that it was feasible to identify multiple cerebellar pathway parcels with unique structure-function saliency patterns that were stable across training folds. We thoroughly experimented with all stages of the DeepMSP pipeline, including network selection, structure-function saliency representation, clustering algorithm, and cluster count. 
We found that a 1D convolutional neural network architecture and a transformer network architecture both performed comparably for the multitask prediction of endurance, strength, reading decoding, and vocabulary comprehension, with both architectures outperforming a fully connected network architecture. Quantitative experiments demonstrated that a proposed low-dimensional saliency representation with an explicit measure of motor versus cognitive category bias achieved the best parcellation results, while a parcel count of four was most successful according to standard cluster quality metrics. Our results suggested that motor and cognitive saliencies are distributed across the cerebellar white matter pathways. Inspection of the final k = 4 parcellation revealed that the highest-saliency parcel was most salient for the prediction of both motor and cognitive performance scores and included parts of the middle and superior cerebellar peduncles. Our proposed saliency-based parcellation framework, DeepMSP, enables multimodal, data-driven tractography parcellation. Through utilising both structural features and functional performance measures, this parcellation strategy may have the potential to enhance the study of structure-function relationships of the cerebellar pathways.
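The final clustering stage of a DeepMSP-style pipeline can be sketched in miniature. This is not the authors' code: the saliency matrix below is synthetic (in the real pipeline it comes from the trained multitask network), and only the step "cluster per-unit structure-function saliency vectors into parcels" is shown.

```python
# Minimal sketch of saliency-based parcellation: k-means over synthetic
# structure-function saliency vectors (the saliency computation itself,
# done via the multitask deep network in the paper, is omitted).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 300 fibre-tract units, each with a saliency value for 4 functional
# measures (endurance, strength, reading decoding, vocabulary).
centers = rng.normal(scale=3.0, size=(4, 4))   # 4 latent parcels
labels_true = rng.integers(0, 4, size=300)
saliency = centers[labels_true] + rng.normal(scale=0.3, size=(300, 4))

# Cluster the saliency vectors into k = 4 parcels, as in the paper.
parcels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(saliency)
print(np.bincount(parcels))
```

Each resulting parcel groups white-matter units whose structural features matter for the same mix of motor and cognitive predictions, which is exactly the structure-function pattern the paper clusters on.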
PMID:39185598 | DOI:10.1002/hbm.70008
Deep learning-derived splenic radiomics, genomics, and coronary artery disease
medRxiv [Preprint]. 2024 Aug 20:2024.08.16.24312129. doi: 10.1101/2024.08.16.24312129.
ABSTRACT
BACKGROUND: Despite advances in managing traditional risk factors, coronary artery disease (CAD) remains the leading cause of mortality. Circulating hematopoietic cells influence risk for CAD, but the role of a key regulating organ, the spleen, is unknown. The understudied spleen is a 3-dimensional structure of the hematopoietic system optimally suited for unbiased radiologic investigations toward novel mechanistic insights.
METHODS: Deep learning-based image segmentation and radiomics techniques were utilized to extract splenic radiomic features from abdominal MRIs of 42,059 UK Biobank participants. Regression analysis was used to identify splenic radiomics features associated with CAD. Genome-wide association analyses were applied to identify loci associated with these radiomics features. Overlap between loci associated with CAD and the splenic radiomics features was explored to understand the underlying genetic mechanisms of the role of the spleen in CAD.
RESULTS: We extracted 107 splenic radiomics features from abdominal MRIs, and of these, 10 features were associated with CAD. Genome-wide association analysis of CAD-associated features identified 219 loci, including 35 previously reported CAD loci, 7 of which were not associated with conventional CAD risk factors. Notably, variants at 9p21 were associated with splenic features such as run length non-uniformity.
CONCLUSIONS: Our study, combining deep learning with genomics, presents a new framework to uncover the splenic axis of CAD. Notably, our study provides evidence for the underlying genetic connection between the spleen as a candidate causal tissue-type and CAD with insight into the mechanisms of 9p21, whose mechanism is still elusive despite its initial discovery in 2007. More broadly, our study provides a unique application of deep learning radiomics to non-invasively find associations between imaging, genetics, and clinical outcomes.
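Run length non-uniformity (RLN), the radiomic texture feature highlighted above, can be computed by hand on a toy image. This simplified sketch uses horizontal runs only and ignores gray-level binning; real radiomics toolkits build a full gray-level run-length matrix over multiple directions.

```python
# Simplified run-length non-uniformity (RLN) for a 2D image, using
# horizontal runs only: RLN = sum over run lengths l of
# (number of runs of length l)^2, divided by the total number of runs.
import numpy as np

def run_lengths(row):
    """Lengths of maximal runs of equal consecutive values in a 1D array."""
    change = np.flatnonzero(np.diff(row)) + 1
    bounds = np.concatenate(([0], change, [len(row)]))
    return np.diff(bounds)

def rln(image):
    lengths = np.concatenate([run_lengths(r) for r in image])
    n_runs = len(lengths)
    counts = np.bincount(lengths)[1:]   # counts[l-1] = runs of length l
    return (counts.astype(float) ** 2).sum() / n_runs

img = np.array([[1, 1, 2, 2],
                [3, 3, 3, 3],
                [1, 2, 1, 2]])
print(rln(img))  # (4^2 + 2^2 + 1^2) / 7 = 3.0
```

A low RLN means run lengths are spread evenly (fine, heterogeneous texture); a high RLN means a few run lengths dominate, which is the kind of splenic texture signal the study associates with CAD loci.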
PMID:39185532 | PMC:PMC11343250 | DOI:10.1101/2024.08.16.24312129
Analyzing and identifying predictable time range for stress prediction based on chaos theory and deep learning
Health Inf Sci Syst. 2024 Mar 6;12(1):16. doi: 10.1007/s13755-024-00280-z. eCollection 2024 Dec.
ABSTRACT
PURPOSE: Stress is a common problem globally. Predicting stress in advance could help people take effective measures to manage it before bad consequences occur. Considering the chaotic features of human psychological states, in this study we integrate deep learning and chaos theory to address the stress prediction problem.
METHODS: Based on chaos theory, we embed one's seemingly disordered stress sequence into a high-dimensional phase space to reveal the underlying dynamics and patterns of the stress system, and at the same time identify the predictable time range for stress. We then conduct deep learning with a two-layer (dimension and temporal) attention mechanism to simulate the nonlinear state of the embedded stress sequence for stress prediction.
RESULTS: We validate the effectiveness of the proposed method on the publicly available Tesserae dataset. The experimental results show that the proposed method outperforms both the pure deep learning method and the Chaos method in 2-label and 3-label stress prediction.
CONCLUSION: Integrating deep learning and chaos theory for stress prediction is effective, improving prediction accuracy by more than 2% and 8% over the deep learning and Chaos methods, respectively. Implications and possible further improvements are discussed at the end of the paper.
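The phase-space embedding step described in the methods follows the standard delay-embedding construction from chaos theory (Takens' method), which can be sketched in a few lines. This is a generic illustration, not the paper's code; the embedding dimension `m` and delay `tau` are fixed here, whereas in practice they are chosen by criteria such as false nearest neighbours and mutual information.

```python
# Minimal sketch of delay (phase-space) embedding: a 1D sequence is
# lifted into m-dimensional space using lagged copies of itself.
import numpy as np

def delay_embed(x, m, tau):
    """Return the (len(x) - (m-1)*tau, m) matrix of delay vectors
    [x[i], x[i+tau], ..., x[i+(m-1)*tau]]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

x = np.sin(np.linspace(0, 8 * np.pi, 100))  # stand-in for a stress series
emb = delay_embed(x, m=3, tau=2)
print(emb.shape)  # (96, 3)
```

Each row of `emb` is one point in the reconstructed phase space; a predictor (here, the paper's attention network) then learns the dynamics over these vectors rather than over the raw scalar sequence.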
PMID:39185396 | PMC:PMC11343935 | DOI:10.1007/s13755-024-00280-z
Deeply-Learned Generalized Linear Models with Missing Data
J Comput Graph Stat. 2024;33(2):638-650. doi: 10.1080/10618600.2023.2276122. Epub 2023 Dec 15.
ABSTRACT
Deep Learning (DL) methods have dramatically increased in popularity in recent years, with significant growth in their application to various supervised learning problems. However, the greater prevalence and complexity of missing data in such datasets present significant challenges for DL methods. Here, we provide a formal treatment of missing data in the context of deeply learned generalized linear models, a supervised DL architecture for regression and classification problems. We propose a new architecture, dlglm, that is one of the first to be able to flexibly account for both ignorable and non-ignorable patterns of missingness in input features and response at training time. We demonstrate through statistical simulation that our method outperforms existing approaches for supervised learning tasks in the presence of missing not at random (MNAR) missingness. We conclude with a case study of the Bank Marketing dataset from the UCI Machine Learning Repository, in which we predict whether clients subscribed to a product based on phone survey data. Supplementary materials for this article are available online.
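The ignorable/non-ignorable distinction at the heart of this abstract can be made concrete with a small simulation. This is not the dlglm code, just a generic NumPy illustration: under MCAR missingness the observed mean stays unbiased, while under MNAR (missingness depending on the value itself) it does not.

```python
# Small illustration of MCAR vs MNAR missingness and the bias MNAR
# induces in the observed mean (generic demo, not the dlglm method).
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

mcar_mask = rng.random(x.size) < 0.3                 # missing completely at random
mnar_mask = (x > 0.5) & (rng.random(x.size) < 0.8)   # large values tend to go missing

mean_mcar = x[~mcar_mask].mean()   # close to the true mean, 0
mean_mnar = x[~mnar_mask].mean()   # biased downward
print(round(mean_mcar, 3), round(mean_mnar, 3))
```

This is precisely why MNAR data require an explicit model of the missingness mechanism, as dlglm provides, rather than naive complete-case analysis or mean imputation.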
PMID:39184956 | PMC:PMC11339858 | DOI:10.1080/10618600.2023.2276122
Enhanced accuracy with Segmentation of Colorectal Polyp using NanoNetB, and Conditional Random Field Test-Time Augmentation
Front Robot AI. 2024 Aug 9;11:1387491. doi: 10.3389/frobt.2024.1387491. eCollection 2024.
ABSTRACT
Colonoscopy is a reliable diagnostic method to detect colorectal polyps early and prevent colorectal cancer. Current examination techniques face a significant challenge of high miss rates, resulting in numerous undetected polyps and irregularities. Automated, real-time segmentation methods can help endoscopists delineate the shape and location of polyps in colonoscopy images, facilitating timely diagnosis and intervention. Factors such as varied shapes, the small size of polyps, and their close resemblance to surrounding tissue make this task challenging. Furthermore, high-definition image quality and reliance on the operator make real-time, accurate endoscopic image segmentation more challenging still. Deep learning models used for polyp segmentation, designed to capture diverse patterns, are becoming progressively complex, which poses challenges for real-time medical operations. In clinical settings, deploying automated methods requires accurate, lightweight models with minimal latency that integrate seamlessly with endoscopic hardware. To address these challenges, this study proposes a novel lightweight and more generalized Enhanced NanoNet model, an improved version of NanoNet built on NanoNetB, for real-time and precise colonoscopy image segmentation. The proposed model enhances the overall NanoNetB prediction scheme by applying data augmentation, Conditional Random Field (CRF), and Test-Time Augmentation (TTA). Six publicly available datasets are used for thorough evaluation, generalizability assessment, and validation of the improvements: Kvasir-SEG, Endotect Challenge 2020, Kvasir-instrument, CVC-ClinicDB, CVC-ColonDB, and CVC-300.
Through extensive experimentation on the Kvasir-SEG dataset, our model achieves a mIoU score of 0.8188 and a Dice coefficient of 0.8060 with only 132,049 parameters and minimal computational resources. A thorough cross-dataset evaluation was performed to assess the generalization capability of the proposed Enhanced NanoNet model across various publicly available polyp datasets for potential real-world applications. The results of this study show that using CRF and TTA enhances performance both within the same dataset and across diverse datasets, with a model size of just 132,049 parameters. The proposed method also shows improved results in detecting smaller and sessile (flat) polyps, which are significant contributors to the high miss rates.
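The test-time augmentation (TTA) component mentioned above can be sketched generically: the image is flipped, predicted, un-flipped, and the predictions are averaged. The `model` below is a hypothetical stand-in; in the paper's pipeline it would be the Enhanced NanoNet segmentation network.

```python
# Illustrative TTA sketch: average predictions over identity, horizontal
# and vertical flips (generic demo, not the paper's implementation).
import numpy as np

def tta_predict(model, image):
    """Average segmentation predictions over three flip augmentations."""
    preds = [
        model(image),
        np.fliplr(model(np.fliplr(image))),   # flip, predict, flip back
        np.flipud(model(np.flipud(image))),
    ]
    return np.mean(preds, axis=0)

# Sanity check: with a flip-equivariant stand-in model (elementwise
# thresholding), TTA must reproduce a single forward pass exactly.
model = lambda img: (img > 0.5).astype(float)
rng = np.random.default_rng(3)
image = rng.random((8, 8))
out = tta_predict(model, image)
print(np.allclose(out, model(image)))  # True
```

With a real network, whose predictions are not perfectly flip-equivariant, the averaged output smooths out orientation-dependent errors at the cost of extra forward passes, which is why TTA pairs well with a lightweight model.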
PMID:39184863 | PMC:PMC11341306 | DOI:10.3389/frobt.2024.1387491
Enhancing Clinical Diagnosis With Convolutional Neural Networks: Developing High-Accuracy Deep Learning Models for Differentiating Thoracic Pathologies
Cureus. 2024 Jul 26;16(7):e65444. doi: 10.7759/cureus.65444. eCollection 2024 Jul.
ABSTRACT
Background The use of computational technology in medicine has allowed for an increase in the accuracy of clinical diagnosis, reducing errors through additional layers of oversight. Artificial intelligence technologies present the potential to further augment and expedite the accuracy, quality, and efficiency of diagnosis when used as an adjunctive tool. Such techniques, if found to be accurate and reliable in their diagnostic acuity, can be implemented to foster better clinical decision-making, improving patient quality of care while reducing healthcare costs. Methodology This study implemented convolutional neural networks to develop a deep learning model capable of differentiating normal chest X-rays from those indicating pneumonia, tuberculosis, cardiomegaly, and COVID-19. There were 3,063 normal chest X-rays, 3,098 pneumonia chest X-rays, 2,920 COVID-19 chest X-rays, 2,214 cardiomegaly chest X-rays, and 554 tuberculosis chest X-rays from Kaggle used for training and validation. The model was trained to recognize patterns within the chest X-rays so that these diseases can be detected efficiently and patients treated on time. Results The results indicated a success rate of 98.34% in correct detections, exemplifying a high degree of accuracy. There are limitations to this study. Training models require hundreds to thousands of samples, and due to potential variability in the image scanning equipment and techniques from which the images are sourced, the model could have learned to interpret external noise and unintended details, which can adversely impact accuracy. Conclusions Further studies that use images sourced from more universal databases with similar scanning techniques, assess diverse but related medical conditions, and employ repeat trials can help assess the reliability of the model. These results highlight the potential of machine learning algorithms for disease detection with chest X-rays.
PMID:39184667 | PMC:PMC11345040 | DOI:10.7759/cureus.65444
Perspectives: Comparison of Deep Learning Segmentation Models on Biophysical and Biomedical Data
ArXiv [Preprint]. 2024 Aug 14:arXiv:2408.07786v1.
ABSTRACT
Deep learning-based approaches are now widely used across biophysics to help automate a variety of tasks, including image segmentation, feature selection, and deconvolution. However, the presence of multiple competing deep learning architectures, each with its own unique advantages and disadvantages, makes it challenging to select the architecture best suited for a specific application. As such, we present a comprehensive comparison of common models. Here, we focus on the task of segmentation assuming the typically small training dataset sizes available from biophysics experiments and compare the following four commonly used architectures: convolutional neural networks, U-Nets, vision transformers, and vision state space models. In doing so, we establish criteria for determining the optimal conditions under which each model excels, thereby offering practical guidelines for researchers and practitioners in the field.
PMID:39184539 | PMC:PMC11343239
A Multibranch Neural Network for Drug-Target Affinity Prediction Using Similarity Information
ACS Omega. 2024 Aug 12;9(33):35978-35989. doi: 10.1021/acsomega.4c05607. eCollection 2024 Aug 20.
ABSTRACT
Predicting drug-target affinity (DTA) is beneficial for accelerating drug discovery. In recent years, graph structure-based deep learning models have garnered significant attention in this field. However, these models typically handle the drug or target protein in isolation and extract only the molecular structure information of the drug or protein itself. To address this limitation, existing network-based models represent drug-target interactions or affinities as a knowledge graph to capture interaction information. In this study, we propose a novel solution. Specifically, we introduce drug similarity information and protein similarity information into the field of DTA prediction. Moreover, we propose a network framework that autonomously extracts similarity information, avoiding reliance on knowledge graphs. Based on this framework, we design a multibranch neural network called GASI-DTA. This network integrates similarity information, sequence information, and molecular structure information. Comprehensive experiments on two benchmark data sets and three cold-start scenarios demonstrate that our model outperforms state-of-the-art graph structure-based methods in nearly all metrics, and that it exhibits significant advantages over existing network-based models, outperforming the best of them in the majority of metrics. Our study's code and data are openly accessible at http://github.com/XiaoLin-Yang-S/GASI-DTA.
PMID:39184467 | PMC:PMC11339836 | DOI:10.1021/acsomega.4c05607
TF-EPI: an interpretable enhancer-promoter interaction detection method based on Transformer
Front Genet. 2024 Aug 9;15:1444459. doi: 10.3389/fgene.2024.1444459. eCollection 2024.
ABSTRACT
The detection of enhancer-promoter interactions (EPIs) is crucial for understanding gene expression regulation, disease mechanisms, and more. In this study, we developed TF-EPI, a deep learning model based on Transformer designed to detect these interactions solely from DNA sequences. The performance of TF-EPI surpassed that of other state-of-the-art methods on multiple benchmark datasets. Importantly, by utilizing the attention mechanism of the Transformer, we identified distinct cell type-specific motifs and sequences in enhancers and promoters, which were validated against databases such as JASPAR and UniBind, highlighting the potential of our method in discovering new biological insights. Moreover, our analysis of the transcription factors (TFs) corresponding to these motifs and short sequence pairs revealed the heterogeneity and commonality of gene regulatory mechanisms and demonstrated the ability to identify TFs relevant to the source information of the cell line. Finally, the introduction of transfer learning can mitigate the challenges posed by cell type-specific gene regulation, yielding enhanced accuracy in cross-cell line EPI detection. Overall, our work unveils important sequence information for the investigation of enhancer-promoter pairs based on the attention mechanism of the Transformer, providing an important milestone in the investigation of cis-regulatory grammar.
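The motif-mining idea described above, reading sequence patterns out of the Transformer's attention weights, can be caricatured in a few lines. This toy sketch is not the TF-EPI code: `top_kmers` is a hypothetical helper that simply takes the highest-attention positions and extracts the k-mers starting there, whereas the paper aggregates attention across heads and layers.

```python
# Toy sketch of attention-guided k-mer extraction from a DNA sequence.
import numpy as np

def top_kmers(seq, attention, k=4, top=2):
    """Return the k-mers starting at the `top` highest-attention positions."""
    starts = np.argsort(attention[: len(seq) - k + 1])[::-1][:top]
    return [seq[s : s + k] for s in sorted(starts)]

seq = "ACGTTGCACGTAGCTA"
attention = np.zeros(len(seq))
attention[3] = 0.9   # pretend the model attends strongly here...
attention[8] = 0.7   # ...and here
print(top_kmers(seq, attention))  # ['TTGC', 'CGTA']
```

In the real pipeline, recurrently high-attention k-mers collected over many enhancer-promoter pairs are then matched against motif databases such as JASPAR and UniBind.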
PMID:39184348 | PMC:PMC11341371 | DOI:10.3389/fgene.2024.1444459
Editorial: Machine learning approaches to antimicrobials: discovery and resistance
Front Bioinform. 2024 Aug 9;4:1458237. doi: 10.3389/fbinf.2024.1458237. eCollection 2024.
NO ABSTRACT
PMID:39184338 | PMC:PMC11341447 | DOI:10.3389/fbinf.2024.1458237
Correction: PSMA-positive prostatic volume prediction with deep learning based on T2-weighted MRI
Radiol Med. 2024 Aug 24. doi: 10.1007/s11547-024-01829-4. Online ahead of print.
NO ABSTRACT
PMID:39180615 | DOI:10.1007/s11547-024-01829-4
Evaluation of automated photograph-cephalogram image integration using artificial intelligence models
Angle Orthod. 2024 Aug 21. doi: 10.2319/010124-1.1. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop and evaluate an automated method for combining a digital photograph with a lateral cephalogram.
MATERIALS AND METHODS: A total of 985 digital photographs were collected, and soft tissue landmarks were manually detected. Then 2,500 lateral cephalograms were collected, and corresponding soft tissue landmarks were manually detected. Using the images and landmark identification information, two artificial intelligence (AI) models (one detecting soft tissue landmarks on photographs, the other identifying them on cephalograms) were developed using different deep-learning algorithms. The digital photographs were rotated, scaled, and shifted to minimize the squared sum of distances between the soft tissue landmarks identified by the two AI models. As a validation process, eight soft tissue landmarks were selected on digital photographs and lateral cephalometric radiographs from 100 additionally collected validation subjects. Paired t-tests were used to compare the accuracy of measures obtained by the automated and manual image integration methods.
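The rotate-scale-shift step described in the methods, minimizing the squared distances between two landmark sets, has a closed-form Procrustes/Umeyama-style solution that can be sketched as follows. This is an assumption about the form of the optimization, not the study's code; the paper only states the least-squares objective.

```python
# Hedged sketch: closed-form similarity transform (rotation R, uniform
# scale s, translation t) minimizing sum ||s*R@src_i + t - dst_i||^2.
import numpy as np

def fit_similarity(src, dst):
    """Umeyama-style least-squares fit of a 2D similarity transform."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform from 8 synthetic landmark pairs.
rng = np.random.default_rng(5)
src = rng.random((8, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 1.7 * src @ R_true.T + np.array([2.0, -1.0])
s, R, t = fit_similarity(src, dst)
print(round(s, 3))  # 1.7
```

With noise-free landmarks the fitted scale, rotation, and translation reproduce the ground truth exactly; with real, noisy AI-detected landmarks the same formula gives the least-squares best alignment.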
RESULTS: The validation results showed statistically significant differences between the automated and manual methods at the upper lip and soft tissue B point; no statistically significant differences were found at the other landmarks.
CONCLUSIONS: Automated photograph-cephalogram image integration using AI models seemed to be as reliable as manual superimposition procedures.
PMID:39180503 | DOI:10.2319/010124-1.1
Deep learning of multimodal networks with topological regularization for drug repositioning
J Cheminform. 2024 Aug 23;16(1):103. doi: 10.1186/s13321-024-00897-y.
ABSTRACT
MOTIVATION: Computational techniques for drug-disease prediction are essential in enhancing drug discovery and repositioning. While many methods utilize multimodal networks from various biological databases, few integrate comprehensive multi-omics data, including transcriptomes, proteomes, and metabolomes. We introduce STRGNN, a novel graph deep learning approach that predicts drug-disease relationships using extensive multimodal networks comprising proteins, RNAs, metabolites, and compounds. We have constructed a detailed dataset incorporating multi-omics data and developed a learning algorithm with topological regularization. This algorithm selectively leverages informative modalities while filtering out redundancies.
RESULTS: STRGNN demonstrates superior accuracy compared to existing methods and has identified several novel drug effects, corroborating existing literature. STRGNN emerges as a powerful tool for drug prediction and discovery. The source code for STRGNN, along with the dataset for performance evaluation, is available at https://github.com/yuto-ohnuki/STRGNN.git.
PMID:39180095 | DOI:10.1186/s13321-024-00897-y