Deep learning
Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings
J Magn Reson Imaging. 2024 Dec 17. doi: 10.1002/jmri.29672. Online ahead of print.
ABSTRACT
BACKGROUND: Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images.
PURPOSE: To develop an image quality metric specific to MRI using radiologists' image rankings and DL models.
STUDY TYPE: Retrospective.
POPULATION: A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation.
FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T; T1, T1 postcontrast, T2, and fluid-attenuated inversion recovery (FLAIR).
ASSESSMENT: Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images on a Likert scale (N = 2). DL models were trained to match the rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and compared to rankings based on mean squared error (MSE) and structural similarity (SSIM). The image-quality-assessing DL models were then evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction.
STATISTICAL TESTS: Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated-measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant.
RESULTS: Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%). IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%). However, EfficientNet resulted in images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks.
DATA CONCLUSION: Image quality networks can be trained from image ranking and used to optimize DL tasks.
LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.
PMID:39690114 | DOI:10.1002/jmri.29672
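The ranking-based training objective above can be sketched with a Bradley-Terry-style pairwise loss. The snippet below is a hedged toy illustration, not the paper's method: each image is reduced to a single hypothetical "corruption level", and a one-parameter quality scorer is fitted by gradient descent so that the preferred (less corrupted) image of each pair receives the higher score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for image content: one corruption level per image.
# Simulated ground truth mimics the radiologists: the less corrupted image wins.
corruption = rng.uniform(0.0, 1.0, size=(200, 2))
preferred_a = (corruption[:, 0] < corruption[:, 1]).astype(float)

def quality_score(w, x):
    # One-parameter "quality network": higher score for lower corruption.
    return -w * x

w, lr = 0.0, 0.5
for _ in range(300):
    margin = quality_score(w, corruption[:, 0]) - quality_score(w, corruption[:, 1])
    p_a = 1.0 / (1.0 + np.exp(-margin))  # Bradley-Terry probability that A is preferred
    # Gradient of the pairwise cross-entropy loss with respect to w.
    grad = np.mean((p_a - preferred_a) * -(corruption[:, 0] - corruption[:, 1]))
    w -= lr * grad

pred_a = quality_score(w, corruption[:, 0]) > quality_score(w, corruption[:, 1])
ranking_accuracy = float(np.mean(pred_a == (preferred_a == 1.0)))
```

In the actual study the scorer is a deep network over images (with or without a reference), but the loss structure is the same: only relative orderings, never absolute scores, supervise the training.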
"Sadness smile" curve: Processing emotional information from social network for evaluating thermal comfort perception
J Therm Biol. 2024 Dec 12;127:104025. doi: 10.1016/j.jtherbio.2024.104025. Online ahead of print.
ABSTRACT
Thermal comfort is a subjective perception, so conventional evaluation based on meteorological factors faces a technical challenge in precise assessment. Humans naturally vary their facial expressions as they perceive different thermal environments, so facial expression scores can serve as a predictor of perceived thermal comfort that deep learning can assess precisely against physical factors. In this study, a total of 8314 facial photos were obtained from volunteers in 82 parks of 49 cities via social networks. Facial expressions were analyzed into happy, sad, and neutral emotion scores using a professional instrument. Temperature-responsive changes in the sadness score (SS) were fitted by a U-shaped curve, termed the 'sadness smile'. The stationary point identified from the second-order derivative predicted the most-comfortable temperature (22.84 °C), and a tangent line across it framed the range of comfortable temperatures based on two intersections with the first-order derivative (14.62-31.06 °C). Critical temperature points were identified along a positively correlated line of a modified temperature-humidity index against increasing temperatures, which were negatively correlated with SS in autumn and winter. The ResNet model predicted emotion-based thermal comfort perceptions well in the validation set (R2 > 0.5). A nationwide mapping suggested that many cities in Northwest and North China have local environments that can be perceived as comfortable, as assessed by SS against thermal and cooling temperatures in summer and winter, respectively.
PMID:39689668 | DOI:10.1016/j.jtherbio.2024.104025
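The derivative-based reading of the "sadness smile" can be illustrated with a simple quadratic fit. This is a hedged sketch on synthetic data, not the study's curve: the true minimum is planted at 23 °C, the vertex of the fitted parabola estimates the most-comfortable temperature, and a tolerance of +1 score unit around the fitted minimum stands in for the comfort band (the paper's tangent-line construction is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the study's data: sadness rises away from ~23 degrees C.
temps = rng.uniform(0.0, 40.0, size=300)
sadness = 0.02 * (temps - 23.0) ** 2 + rng.normal(0.0, 0.5, size=300)

# Fit a quadratic "sadness smile"; its vertex is the most-comfortable temperature.
a, b, c = np.polyfit(temps, sadness, deg=2)
t_comfort = float(-b / (2.0 * a))

# Crude comfort band: temperatures where fitted sadness stays within +1 of its minimum.
grid = np.linspace(0.0, 40.0, 4001)
fitted = np.polyval([a, b, c], grid)
comfortable = grid[fitted <= fitted.min() + 1.0]
t_low, t_high = float(comfortable.min()), float(comfortable.max())
```

The same vertex-and-band logic extends to higher-order fits, where the second derivative is no longer constant and its stationary points become informative, as in the abstract.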
Active learning for extracting rare adverse events from electronic health records: A study in pediatric cardiology
Int J Med Inform. 2024 Dec 12;195:105761. doi: 10.1016/j.ijmedinf.2024.105761. Online ahead of print.
ABSTRACT
OBJECTIVE: To automate the extraction of adverse events from the text of electronic medical records of patients hospitalized for cardiac catheterization.
METHODS: We focused on events related to cardiac catheterization as defined by the NCDR-IMPACT registry. These events were extracted from the Necker Children's Hospital data warehouse. Electronic health records were pre-screened using regular expressions. The resulting datasets contained numerous false-positive sentences, which were annotated by a cardiologist using an active learning process. A deep learning text classifier was then trained on this active-learning-annotated dataset to accurately identify patients who had suffered a serious adverse event.
RESULTS: The dataset included 2,980 patients. Regular-expression-based extraction of adverse events related to cardiac catheterization achieved perfect recall. Because adverse events are rare, the dataset obtained from this initial pre-screening step was imbalanced and contained a large number of false positives. The active learning annotation enabled the acquisition of a representative dataset suitable for training a deep learning model. The deep learning text classifier identified patients who underwent adverse events after cardiac catheterization with a recall of 0.78 and a specificity of 0.94.
CONCLUSION: Our model effectively identified patients who experienced adverse events related to cardiac catheterization using real clinical data. Enabled by an active learning annotation process, it shows promise for large language model applications in clinical research, especially for rare diseases with limited annotated databases. Our model's strength lies in its development by physicians for physicians, ensuring its relevance and applicability in clinical practice.
PMID:39689449 | DOI:10.1016/j.ijmedinf.2024.105761
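The uncertainty-sampling loop behind the annotation process can be sketched in a few lines. Everything below is a toy stand-in: two numeric features replace sentence embeddings, a small numpy logistic regression replaces the deep text classifier, and a label is queried for whichever pooled "sentence" the current model is least sure about.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logreg(X, y, lr=0.2, steps=400):
    # Plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Synthetic stand-in for regex-prescreened sentences: 2-d features plus a bias
# column, with rare true positives, mimicking the abstract's class imbalance.
n = 600
feats = rng.normal(size=(n, 2))
X = np.hstack([feats, np.ones((n, 1))])
y = (feats[:, 0] + 0.5 * feats[:, 1] > 1.2).astype(float)

# Small annotated seed set containing a few examples of each class.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
labeled = list(pos[:5]) + list(neg[:5])

for _ in range(20):  # active-learning rounds: annotate the most uncertain item
    w = fit_logreg(X[labeled], y[labeled])
    uncertainty = -np.abs(sigmoid(X @ w) - 0.5)
    uncertainty[labeled] = -np.inf  # skip already-annotated sentences
    labeled.append(int(np.argmax(uncertainty)))

w = fit_logreg(X[labeled], y[labeled])
pred = sigmoid(X @ w) > 0.5
accuracy = float(np.mean(pred == (y == 1.0)))
recall = float(np.mean(pred[y == 1.0]))
```

The practical point matches the abstract: with rare positives, spending annotations on the model's most uncertain cases builds a representative training set far faster than labeling the imbalanced pool at random.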
Deep profiling of gene expression across 18 human cancers
Nat Biomed Eng. 2024 Dec 17. doi: 10.1038/s41551-024-01290-8. Online ahead of print.
ABSTRACT
Clinical and biological information in large datasets of gene expression across cancers could be tapped with unsupervised deep learning. However, difficulties associated with biological interpretability and methodological robustness have made this impractical. Here we describe an unsupervised deep-learning framework for the generation of low-dimensional latent spaces for gene-expression data from 50,211 transcriptomes across 18 human cancers. The framework, which we named DeepProfile, outperformed dimensionality-reduction methods with respect to biological interpretability and revealed that genes that are universally important in defining latent spaces across cancer types control immune cell activation, whereas cancer-type-specific genes and pathways define molecular disease subtypes. By linking latent variables in DeepProfile to secondary characteristics of tumours, we discovered that mutation burden is closely associated with the expression of cell-cycle-related genes, and that the activities of biological pathways for DNA-mismatch repair and MHC class II antigen presentation are consistently associated with patient survival. We also found that tumour-associated macrophages are a source of survival-correlated MHC class II transcripts. Unsupervised learning can facilitate the discovery of biological insight from gene-expression data.
PMID:39690287 | DOI:10.1038/s41551-024-01290-8
A multimodal deep-learning model based on multichannel CT radiomics for predicting pathological grade of bladder cancer
Abdom Radiol (NY). 2024 Dec 18. doi: 10.1007/s00261-024-04748-0. Online ahead of print.
ABSTRACT
OBJECTIVE: To construct a predictive model using deep-learning radiomics and clinical risk factors for assessing the preoperative histopathological grade of bladder cancer based on computed tomography (CT) images.
METHODS: A retrospective analysis was conducted of 201 bladder cancer patients with definite pathological grading results after surgical excision at a single institution between January 2019 and June 2023. The cohort was split into a training set of 120 cases and a test set of 81 cases. Hand-crafted radiomics (HCR) features and deep-learning (DL) features were extracted from the CT images. Prediction models were built using 12 machine-learning classifiers that integrate HCR features, DL features, and clinical data. Model performance was evaluated using the area under the curve (AUC), calibration curves, and decision-curve analysis (DCA).
RESULTS: Among the classifiers tested, the logistic regression model combining DL and HCR features performed best, with AUC values of 0.912 (training set) and 0.777 (test set). The clinical model achieved AUC values of 0.850 (training set) and 0.804 (test set). The combined model reached AUC values of 0.933 (training set) and 0.824 (test set), outperforming both the clinical and HCR-only models.
CONCLUSION: The CT-based combined model demonstrated considerable diagnostic capability in differentiating high-grade from low-grade bladder cancer, serving as a valuable noninvasive instrument for preoperative pathological evaluation.
PMID:39690281 | DOI:10.1007/s00261-024-04748-0
Author Correction: Predictive analytics of complex healthcare systems using deep learning based disease diagnosis model
Sci Rep. 2024 Dec 17;14(1):30526. doi: 10.1038/s41598-024-82835-4.
NO ABSTRACT
PMID:39690238 | DOI:10.1038/s41598-024-82835-4
Interpretable Deep-learning Model Based on Superb Microvascular Imaging for Noninvasive Diagnosis of Interstitial Fibrosis in Chronic Kidney Disease
Acad Radiol. 2024 Dec 16:S1076-6332(24)00942-5. doi: 10.1016/j.acra.2024.11.067. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: To develop an interpretable deep learning (XDL) model based on superb microvascular imaging (SMI) for the noninvasive diagnosis of the degree of interstitial fibrosis (IF) in chronic kidney disease (CKD).
METHODS: We included CKD patients who underwent renal biopsy, two-dimensional ultrasound, and SMI examinations between May 2022 and October 2023. Based on the pathological IF score, patients were divided into two groups: minimal-mild IF (≤25%) and moderate-severe IF (>25%). An XDL model was developed based on SMI, with an ultrasound radiomics model and a color Doppler ultrasonography (CDUS) model established as controls. The utility of the proposed model was evaluated using the receiver operating characteristic (ROC) curve and decision curve analysis.
RESULTS: In total, 365 CKD patients were included. In the validation group, the AUCs of the ROC curves for the DL, ultrasound radiomics, and CDUS models were 0.854, 0.784, and 0.745, respectively. In the test group, the corresponding AUCs were 0.824, 0.792, and 0.752. The pie chart and heat map based on Shapley additive explanations (SHAP) provided substantial interpretability for the model.
CONCLUSION: Compared with the ultrasound radiomics and CDUS models, the SMI-based DL model judged the degree of IF in CKD patients noninvasively with higher accuracy. Pie and heat maps based on SHAP values can explain which image regions are helpful in diagnosing the degree of IF.
PMID:39690075 | DOI:10.1016/j.acra.2024.11.067
SMART: Development and Application of a Multimodal Multi-organ Trauma Screening Model for Abdominal Injuries in Emergency Settings
Acad Radiol. 2024 Dec 16:S1076-6332(24)00929-2. doi: 10.1016/j.acra.2024.11.056. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: Effective trauma care in emergency departments necessitates rapid diagnosis by interdisciplinary teams using various medical data. This study constructed a multimodal diagnostic model for abdominal trauma using deep learning on non-contrast computed tomography (CT) and unstructured text data, enhancing the speed and accuracy of solid organ assessments.
MATERIALS AND METHODS: Data were collected from patients undergoing abdominal CT scans. The SMART model (Screening for Multi-organ Assessment in Rapid Trauma) classifies trauma using text data (SMART_GPT), non-contrast CT scans (SMART_Image), or both. SMART_GPT uses the GPT-4 embedding API for text feature extraction, whereas SMART_Image incorporates nnU-Net and DenseNet121 for segmentation and classification. A composite model was developed by integrating multimodal data via logistic regression of SMART_GPT, SMART_Image, and patient demographics (age and gender).
RESULTS: This study included 2638 patients (459 positive, 2179 negative abdominal trauma cases). A trauma-based dataset included 1006 patients with 1632 real continuous data points for testing. SMART_GPT achieved a sensitivity of 81.3% and an area under the receiver operating characteristic curve (AUC) of 0.88 based on unstructured text data. SMART_Image exhibited a sensitivity of 87.5% and an AUC of 0.81 on non-contrast CT data, with the average sensitivity exceeding 90% at the organ level. The integrated SMART model achieved a sensitivity of 93.8% and an AUC of 0.88. In emergency department simulations, SMART reduced waiting times by over 64.24%.
CONCLUSION: SMART provides rapid, objective trauma diagnostics, improving emergency care efficiency, reducing patient wait times, and enabling multimodal screening in diverse emergency contexts.
PMID:39690074 | DOI:10.1016/j.acra.2024.11.056
The modified elevated gap interaction test: a novel paradigm to assess social preference
Open Biol. 2024 Dec;14(12):240250. doi: 10.1098/rsob.240250. Epub 2024 Dec 18.
ABSTRACT
Social deficits play a role in numerous psychiatric, neurological and neurodevelopmental disorders. Relating complex behaviour, such as social interaction, to brain activity remains one of the biggest goals and challenges in neuroscience. The availability of standardized tests that assess social preference is, however, limited. Here, we present a novel behavioural paradigm that we developed to measure social behaviour, the modified elevated gap interaction test (MEGIT). In this test, animals are placed on one of two elevated platforms separated by a gap, across which they can engage in whisker interaction with either a conspecific or an object. This allows quantification of social preference in real interaction rather than mere proximity and forms an ideal setup for social behaviour-related neuronal recordings. We provide a detailed description of the paradigm and its highly reliable, deep-learning-based analysis, and show results obtained from wild-type animals as well as mouse models for disorders characterized by either hyposocial (autism spectrum disorder; ASD) or hypersocial (Williams-Beuren syndrome; WBS) behaviour. Wild-type animals show a clear social preference. This preference is significantly smaller in an ASD mouse model, whereas it is larger in WBS mice. The results indicate that MEGIT is a sensitive and reliable test for detecting social phenotypes.
PMID:39689857 | DOI:10.1098/rsob.240250
AI-Driven Drug Discovery for Rare Diseases
J Chem Inf Model. 2024 Dec 17. doi: 10.1021/acs.jcim.4c01966. Online ahead of print.
ABSTRACT
Rare diseases (RDs), affecting 300 million people globally, present a daunting public health challenge characterized by complexity, limited treatment options, and diagnostic hurdles. Despite legislative efforts, such as the 1983 US Orphan Drug Act, more than 90% of RDs lack effective therapies. Traditional drug discovery models, marked by lengthy development cycles and high failure rates, struggle to meet the unique demands of RDs, often yielding poor returns on investment. However, the advent of artificial intelligence (AI), encompassing machine learning (ML) and deep learning (DL), offers groundbreaking solutions. This review explores AI's potential to revolutionize drug discovery for RDs by overcoming these challenges. It discusses AI-driven advancements, such as drug repurposing, biomarker discovery, personalized medicine, genetics, clinical trial optimization, corporate innovations, and novel drug target identification. By synthesizing current knowledge and recent breakthroughs, this review provides crucial insights into how AI can accelerate therapeutic development for RDs, ultimately improving patient outcomes. This comprehensive analysis fills a critical gap in the literature, enhancing understanding of AI's pivotal role in transforming RD research and guiding future research and development efforts in this vital area of medicine.
PMID:39689164 | DOI:10.1021/acs.jcim.4c01966
Machine Learning-Based Prediction Model for ICU Mortality After Continuous Renal Replacement Therapy Initiation in Children
Crit Care Explor. 2024 Dec 17;6(12):e1188. doi: 10.1097/CCE.0000000000001188. eCollection 2024 Dec 1.
ABSTRACT
BACKGROUND: Continuous renal replacement therapy (CRRT) is the favored renal replacement therapy in critically ill patients. Predicting clinical outcomes for CRRT patients is difficult due to population heterogeneity, varying clinical practices, and limited sample sizes.
OBJECTIVE: We aimed to predict survival to ICU and hospital discharge in children and young adults receiving CRRT using machine learning (ML) techniques.
DERIVATION COHORT: Patients less than 25 years of age receiving CRRT for acute kidney injury and/or volume overload from 2015 to 2021 (80%).
VALIDATION COHORT: Internal validation occurred in a testing group of patients from the dataset (20%).
PREDICTION MODEL: Retrospective international multicenter study utilizing an 80/20 training and testing cohort split, and logistic regression with L2 regularization (LR), decision tree, random forest (RF), gradient boosting machine, and support vector machine with linear kernel to predict ICU and hospital survival. Model performance was determined by the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) due to the imbalance in the dataset.
RESULTS: Of the 933 patients included in this study, 538 (54%) were male, with a median age of 8.97 years (interquartile range, 1.81-15.0 yr). ICU mortality was 35% and hospital mortality was 37%. The RF model performed best for predicting ICU mortality (AUROC, 0.791; AUPRC, 0.878) and LR for hospital mortality (AUROC, 0.777; AUPRC, 0.859). The top two predictors of ICU survival were the Pediatric Logistic Organ Dysfunction-2 score at CRRT initiation and an admission diagnosis of respiratory failure.
CONCLUSIONS: These are the first ML models to predict survival at ICU and hospital discharge in children and young adults receiving CRRT. RF outperformed other models for predicting ICU mortality. Future studies should expand the input variables, conduct a more sophisticated feature selection, and use deep learning algorithms to generate more precise models.
PMID:39688905 | DOI:10.1097/CCE.0000000000001188
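AUROC and AUPRC respond differently to class imbalance: a random classifier scores 0.5 on AUROC but only the positive prevalence on AUPRC, which is why the authors report both. A minimal numpy sketch of the two metrics on hypothetical risk scores (synthetic data with ~35% mortality, roughly matching the cohort):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic outcome and an informative but noisy risk score.
n = 1000
y = (rng.random(n) < 0.35).astype(int)
score = np.where(y == 1, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))

def auroc(y, s):
    # Mann-Whitney formulation: probability a positive outranks a negative.
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(y, s):
    # Mean precision at the rank of each positive, scanning scores high to low.
    y_sorted = y[np.argsort(-s)]
    precision = np.cumsum(y_sorted) / np.arange(1, len(y) + 1)
    return precision[y_sorted == 1].mean()

auroc_val = float(auroc(y, score))
auprc_val = float(average_precision(y, score))
baseline_auprc = float(y.mean())  # a random score would hover around this
```

For rarer outcomes than this toy 35% prevalence, the gap between the AUROC baseline (0.5) and the AUPRC baseline (prevalence) widens, which is why AUPRC is the more demanding metric on imbalanced datasets.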
Geospatial Modeling of Deep Neural Visual Features for Predicting Obesity Prevalence in Missouri: Quantitative Study
JMIR AI. 2024 Dec 17;3:e64362. doi: 10.2196/64362.
ABSTRACT
BACKGROUND: The global obesity epidemic demands innovative approaches to understand its complex environmental and social determinants. Spatial technologies, such as geographic information systems, remote sensing, and spatial machine learning, offer new insights into this health issue. This study uses deep learning and spatial modeling to predict obesity rates for census tracts in Missouri.
OBJECTIVE: This study aims to develop a scalable method for predicting obesity prevalence using deep convolutional neural networks applied to satellite imagery and geospatial analysis, focusing on 1052 census tracts in Missouri.
METHODS: Our analysis followed 3 steps. First, Sentinel-2 satellite images were processed using the Residual Network-50 model to extract environmental features from 63,592 image chips (224×224 pixels). Second, these features were merged with obesity rate data from the Centers for Disease Control and Prevention for Missouri census tracts. Third, a spatial lag model was used to predict obesity rates and analyze the association between deep neural visual features and obesity prevalence. Spatial autocorrelation was used to identify clusters of obesity rates.
RESULTS: Substantial spatial clustering of obesity rates was found across Missouri, with a Moran's I value of 0.68, indicating similar obesity rates among neighboring census tracts. The spatial lag model demonstrated strong predictive performance, with an R2 of 0.93 and a spatial pseudo-R2 of 0.92, explaining 93% of the variation in obesity rates. A local indicators of spatial association (LISA) analysis revealed regions with distinct high and low obesity clusters, which were visualized through choropleth maps.
CONCLUSIONS: This study highlights the effectiveness of integrating deep convolutional neural networks and spatial modeling to predict obesity prevalence based on environmental features from satellite imagery. The model's high accuracy and ability to capture spatial patterns offer valuable insights for public health interventions. Future work should expand the geographical scope and include socioeconomic data to further refine the model for broader applications in obesity research.
PMID:39688897 | DOI:10.2196/64362
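Moran's I, the clustering statistic behind the reported value of 0.68, is compact enough to compute directly. A hedged sketch on a toy one-dimensional chain of "tracts" (not the study's census geography): a smooth spatial trend yields a strongly positive I, while a checkerboard pattern yields a negative one.

```python
import numpy as np

def morans_i(values, W):
    # Global Moran's I for values on n spatial units with weight matrix W
    # (zeros on the diagonal, W[i, j] > 0 for neighbours).
    z = values - values.mean()
    return (len(values) / W.sum()) * (z @ W @ z) / (z @ z)

# Toy 1-d chain of 20 "tracts": neighbours share an edge.
n = 20
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.linspace(0.0, 1.0, n)              # spatially clustered trend
alternating = np.array([0.0, 1.0] * (n // 2))  # checkerboard pattern

i_smooth = float(morans_i(smooth, W))
i_alt = float(morans_i(alternating, W))
```

The study's LISA analysis decomposes the same quantity tract by tract, which is what makes the high/low cluster maps possible.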
Advancing miRNA cancer research through artificial intelligence: from biomarker discovery to therapeutic targeting
Med Oncol. 2024 Dec 17;42(1):30. doi: 10.1007/s12032-024-02579-z.
ABSTRACT
MicroRNAs (miRNAs), a class of small non-coding RNAs, play a vital role in regulating gene expression at the post-transcriptional level. Their discovery has profoundly impacted therapeutic strategies, particularly in cancer treatment, where RNA therapeutics, including miRNA-based targeted therapies, have gained prominence. Advances in RNA sequencing technologies have facilitated a comprehensive exploration of miRNAs-from fundamental research to their diagnostic and prognostic potential in various diseases, notably cancers. However, the manual handling and interpretation of vast RNA datasets pose significant challenges. The advent of artificial intelligence (AI) has revolutionized biological research by efficiently extracting insights from complex data. Machine learning algorithms, particularly deep learning techniques, are effective for identifying critical miRNAs across different cancers and developing prognostic models. Moreover, the integration of AI has led to the creation of comprehensive miRNA databases for identifying mRNA and gene targets, thus facilitating deeper understanding and application in cancer research. This review comprehensively examines current developments in the application of machine learning techniques in miRNA research across diverse cancers. We discuss their roles in identifying biomarkers, elucidating miRNA targets, establishing disease associations, predicting prognostic outcomes, and exploring broader AI applications in cancer research. This review aims to guide researchers in leveraging AI techniques effectively within the miRNA field, thereby accelerating advancements in cancer diagnostics and therapeutics.
PMID:39688780 | DOI:10.1007/s12032-024-02579-z
A deep learning method for total-body dynamic PET imaging with dual-time-window protocols
Eur J Nucl Med Mol Imaging. 2024 Dec 17. doi: 10.1007/s00259-024-07012-1. Online ahead of print.
ABSTRACT
PURPOSE: Prolonged scanning durations are one of the primary barriers to the widespread clinical adoption of dynamic Positron Emission Tomography (PET). In this paper, we developed a deep learning algorithm capable of predicting dynamic images from dual-time-window protocols, thereby shortening the scanning time.
METHODS: This study included 70 patients (mean age ± standard deviation, 53.61 ± 13.53 years; 32 males) diagnosed with pulmonary or breast nodules between 2022 and 2024. Each patient underwent a 65-min dynamic total-body [18F]FDG PET/CT scan. Acquisitions using early-stop protocols and dual-time-window protocols were simulated to reduce the scanning time. To predict the missing frames, we developed a bidirectional sequence-to-sequence model with an attention mechanism (Bi-AT-Seq2Seq) and compared it with unidirectional or non-attentional models in terms of mean absolute error (MAE), bias, peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) of the predicted frames. We also compared the concordance correlation coefficients (CCCs) of kinetic parameters between the proposed method and traditional methods.
RESULTS: Bi-AT-Seq2Seq significantly outperformed unidirectional and non-attentional models in terms of MAE, bias, PSNR, and SSIM. Using a dual-time-window protocol comprising a 10-min early scan followed by a 5-min late scan improved the four metrics of the predicted dynamic images by 37.31%, 36.24%, 7.10%, and 0.014%, respectively, compared to an early-stop protocol with a 15-min acquisition. The CCCs of tumor kinetic parameters estimated from recovered full time-activity curves (TACs) were higher than those from abbreviated TACs.
CONCLUSION: The proposed algorithm can accurately generate a complete dynamic acquisition (65 min) from dual-time-window protocols (10 + 5 min).
PMID:39688700 | DOI:10.1007/s00259-024-07012-1
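The frame-prediction metrics are standard and easy to state precisely. A hedged numpy sketch on random toy "frames" (not PET data), using the common convention PSNR = 10·log10(max²/MSE):

```python
import numpy as np

def mae(pred, ref):
    # Mean absolute error between predicted and reference frames.
    return float(np.mean(np.abs(pred - ref)))

def bias(pred, ref):
    # Signed mean error: systematic over- or under-estimation.
    return float(np.mean(pred - ref))

def psnr(pred, ref):
    # Peak signal-to-noise ratio in dB, with the reference max as the peak.
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(ref.max() ** 2 / mse))

rng = np.random.default_rng(4)
ref = rng.random((32, 32))                          # stand-in for a true late frame
pred = ref + rng.normal(0.0, 0.01, size=ref.shape)  # "predicted" frame, small error
```

SSIM is omitted here because it needs windowed local statistics; the three metrics above are pointwise and fit in a few lines.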
Deep learning algorithm enables automated Cobb angle measurements with high accuracy
Skeletal Radiol. 2024 Dec 17. doi: 10.1007/s00256-024-04853-7. Online ahead of print.
ABSTRACT
OBJECTIVE: To determine the accuracy of automatic Cobb angle measurements by deep learning (DL) on full spine radiographs.
MATERIALS AND METHODS: Full spine radiographs of patients aged > 2 years were screened using the radiology reports to identify radiographs for performing Cobb angle measurements. Two senior musculoskeletal radiologists and one senior orthopedic surgeon independently annotated Cobb angles exceeding 7°, labeling each angle location as proximal thoracic (apices between T3 and T5), main thoracic (apices between T6 and T11), or thoraco-lumbar (apices between T12 and L4). If at least two readers agreed on the number of angles and their locations, and the difference between comparable angles was < 8°, the ground truth was defined as the mean of their measurements. Otherwise, the radiographs were reviewed by the three annotators in consensus. The DL software (BoneMetrics, Gleamer) was evaluated against the manual annotations in terms of mean absolute error (MAE).
RESULTS: A total of 345 patients were included in the study (age 33 ± 24 years, 221 women): 179 pediatric patients (< 22 years old) and 166 adult patients (22 to 85 years old). Fifty-three cases were reviewed in consensus. The MAE of the DL algorithm for the main curvature was 2.6° (95% CI [2.0; 3.3]). For the subgroup of pediatric patients, the MAE was 1.9° (95% CI [1.6; 2.2]) versus 3.3° (95% CI [2.2; 4.8]) for adults.
CONCLUSION: The DL algorithm predicted the Cobb angle of scoliotic patients with high accuracy.
PMID:39688663 | DOI:10.1007/s00256-024-04853-7
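For reference, the Cobb angle itself is just the angle between the two end-vertebra endplate lines, which is what the DL software and the annotators are both estimating. A small sketch with hypothetical endplate landmark coordinates (two points per endplate, not real radiograph data):

```python
import numpy as np

def cobb_angle(endplate_top, endplate_bottom):
    # Angle in degrees between two endplate lines, each given as a pair of
    # 2-d landmark points; abs() makes the result orientation-independent.
    v1 = np.asarray(endplate_top[1]) - np.asarray(endplate_top[0])
    v2 = np.asarray(endplate_bottom[1]) - np.asarray(endplate_bottom[0])
    cos = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Two endplates tilted +10 deg and -10 deg from horizontal -> Cobb angle of 20 deg.
t = np.radians(10.0)
top = [(0.0, 0.0), (np.cos(t), np.sin(t))]
bottom = [(0.0, 0.0), (np.cos(-t), np.sin(-t))]
angle = cobb_angle(top, bottom)
```

An automated pipeline such as the one evaluated here reduces to localizing those endplate landmarks; the trigonometry afterwards is exact.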
Diffusion model assisted designing self-assembling collagen mimetic peptides as biocompatible materials
Brief Bioinform. 2024 Nov 22;26(1):bbae622. doi: 10.1093/bib/bbae622.
ABSTRACT
Collagen self-assembly supports its mechanical function, but controlling collagen mimetic peptides (CMPs) to self-assemble into higher-order oligomers with numerous functions remains challenging due to the vast potential amino acid sequence space. Herein, we developed a diffusion model to learn features from different types of human collagens and generate CMPs; obtaining 66% of synthetic CMPs could self-assemble into triple helices. Triple-helical and untwisting states were probed by melting temperature (Tm); hence, we developed a model to predict collagen Tm, achieving a state-of-art Pearson's correlation (PC) of 0.95 by cross-validation and a PC of 0.8 for predicting Tm values of synthetic CMPs. Our chemically synthesized short CMPs and recombinantly expressed long CMPs could self-assemble, with the lowest requirement for hydrogel formation at a concentration of 0.08% (w/v). Five CMPs could promote osteoblast differentiation. Our results demonstrated the potential for using computer-aided methods to design functional self-assembling CMPs.
PMID:39688478 | DOI:10.1093/bib/bbae622
NABP-BERT: NANOBODY®-antigen binding prediction based on bidirectional encoder representations from transformers (BERT) architecture
Brief Bioinform. 2024 Nov 22;26(1):bbae518. doi: 10.1093/bib/bbae518.
ABSTRACT
Antibody-mediated immunity is crucial in the vertebrate immune system. Nanobodies, also known as VHH or single-domain antibodies (sdAbs), are emerging as promising alternatives to full-length antibodies due to their compact size, precise target selectivity, and stability. However, the limited availability of nanobodies (Nbs) for numerous antigens (Ags) presents a significant obstacle to their widespread application. Understanding the interactions between Nbs and Ags is essential for enhancing their binding affinities and specificities. Experimental identification of these interactions is often costly and time-intensive. To address this issue, we introduce NABP-BERT, a deep-learning model based on the BERT architecture, designed to predict NANOBODY®-Ag binding solely from sequence information. Furthermore, we have developed a general pretrained model with transfer capabilities suitable for protein-related tasks, including protein-protein interaction tasks. NABP-BERT focuses on the surrounding amino acid contexts and outperforms existing methods, achieving an AUROC of 0.986 and an AUPR of 0.985.
PMID:39688476 | DOI:10.1093/bib/bbae518
Metastasis Detection Using True and Artificial T1-Weighted Postcontrast Images in Brain MRI
Invest Radiol. 2024 Nov 19. doi: 10.1097/RLI.0000000000001137. Online ahead of print.
ABSTRACT
OBJECTIVES: Small lesions are the limiting factor for reducing gadolinium-based contrast agents in brain magnetic resonance imaging (MRI). The purpose of this study was to compare the sensitivity and precision in metastasis detection on true contrast-enhanced T1-weighted (T1w) images and artificial images synthesized by a deep learning method using low-dose images.
MATERIALS AND METHODS: In this prospective, multicenter study (5 centers, 12 scanners), 917 participants underwent brain MRI between October 2021 and March 2023 including T1w low-dose (0.033 mmol/kg) and full-dose (0.1 mmol/kg) images. Forty participants with metastases or unremarkable brain findings were evaluated in a reading (mean age ± SD, 54.3 ± 15.1 years; 24 men). True and artificial T1w images were assessed for metastases in random order with 4 weeks between readings by 2 neuroradiologists. A reference reader reviewed all data to confirm metastases. Performances were compared using mid-P McNemar tests for sensitivity and Wilcoxon signed rank tests for false-positive findings.
RESULTS: The reference reader identified 97 metastases. The sensitivity of reader 1 did not differ significantly between sequences (sensitivity [precision]: true, 66.0% [98.5%]; artificial, 61.9% [98.4%]; P = 0.38). With a lower precision than reader 1, reader 2 found significantly more metastases using true images (sensitivity [precision]: true, 78.4% [87.4%]; artificial, 60.8% [80.8%]; P < 0.001). There was no significant difference in sensitivity for metastases ≥5 mm. The number of false-positive findings did not differ significantly between sequences.
CONCLUSIONS: One reader showed a significantly higher overall sensitivity using true images. The similar detection performance for metastases ≥5 mm is promising for applying low-dose imaging in less challenging diagnostic tasks than metastasis detection.
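The paired sensitivity comparison above uses a mid-P McNemar test. A minimal sketch of the standard two-sided mid-P formulation on the discordant-pair counts (a textbook construction, not the authors' code):

```python
from math import comb

def midp_mcnemar(b, c):
    """Two-sided mid-P McNemar test on discordant counts b and c.

    Conditional on n = b + c discordant pairs, the smaller count follows
    Binomial(n, 0.5) under the null; the mid-P variant halves the weight of
    the observed outcome: p = 2*P(X <= k) - P(X = k), with k = min(b, c).
    """
    n, k = b + c, min(b, c)
    cdf = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    pmf = comb(n, k) / 2 ** n
    return min(1.0, 2 * cdf - pmf)

# Perfectly balanced discordance gives no evidence of a difference
print(midp_mcnemar(5, 5))  # -> 1.0
```

Counts here are hypothetical; the study's actual discordant-pair counts are not reported in the abstract.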
PMID:39688447 | DOI:10.1097/RLI.0000000000001137
Encoding matching criteria for cross-domain deformable image registration
Med Phys. 2024 Dec 17. doi: 10.1002/mp.17565. Online ahead of print.
ABSTRACT
BACKGROUND: Most existing deep learning-based registration methods are trained on single-type images to address same-domain tasks, resulting in performance degradation when applied to new scenarios. Retraining a model for new scenarios requires extra time and data. Therefore, efficient and accurate solutions for cross-domain deformable registration are in demand.
PURPOSE: We argue that the tailor-made matching criteria of traditional registration methods are one of the main reasons they remain applicable across different domains. Motivated by this, we devise a registration-oriented encoder to model the matching criteria of image features and structural features, which helps boost registration accuracy and adaptability.
METHODS: Specifically, a general feature encoder (Encoder-G) is proposed to capture comprehensive medical image features, while a structural feature encoder (Encoder-S) is designed to encode the structural self-similarity into the global representation. Moreover, by updating Encoder-S using one-shot learning, our method can effectively adapt to different domains. The efficacy of our method is evaluated using MR images from three different domains, including brain images (training/testing: 870/90 pairs), abdomen images (training/testing: 1406/90 pairs), and cardiac images (training/testing: 64770/870 pairs). The comparison methods include a traditional method (SyN) and cutting-edge deep networks. The evaluation metrics are the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD).
RESULTS: In the single-domain task, our method attains an average DSC of 68.9%/65.2%/72.8%, and ASSD of 9.75/3.82/1.30 mm on abdomen/cardiac/brain images, outperforming the second-best comparison methods by large margins. In the cross-domain task, without one-shot optimization, our method outperforms other deep networks in five out of six cross-domain scenarios and even surpasses symmetric image normalization method (SyN) in two scenarios. By conducting the one-shot optimization, our method successfully surpasses SyN in all six cross-domain scenarios.
CONCLUSIONS: Our method yields favorable results in the single-domain task while ensuring improved generalization and adaptation performance in the cross-domain task, showing its feasibility for the challenging cross-domain registration applications. The code is publicly available at https://github.com/JuliusWang-7/EncoderReg.
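The DSC values reported above measure volumetric overlap between a predicted and a reference segmentation. A minimal sketch of the standard definition on flat binary masks (illustrative only, not the study's evaluation code):

```python
def dice(a, b):
    """Dice similarity coefficient between two flat binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

# Toy masks: 2 overlapping voxels, 3 voxels in each mask -> 2*2/6
m1 = [1, 1, 1, 0, 0]
m2 = [0, 1, 1, 1, 0]
print(dice(m1, m2))  # -> 0.666...
```

ASSD additionally requires surface extraction and distance transforms, so it is omitted from this sketch.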
PMID:39688347 | DOI:10.1002/mp.17565
Development of Deep Learning-Based Virtual Lugol Chromoendoscopy for Superficial Esophageal Squamous Cell Carcinoma
J Gastroenterol Hepatol. 2024 Dec 17. doi: 10.1111/jgh.16843. Online ahead of print.
ABSTRACT
BACKGROUND: Lugol chromoendoscopy has been shown to increase the sensitivity of detection of esophageal squamous cell carcinoma (ESCC). We aimed to develop a deep learning-based virtual lugol chromoendoscopy (V-LCE) method.
METHODS: We developed still V-LCE images for superficial ESCC using a cycle-consistent generative adversarial network (CycleGAN). Six endoscopists graded the detection and margins of ESCCs using white-light endoscopy (WLE), real lugol chromoendoscopy (R-LCE), and V-LCE on a five-point scale ranging from 1 (poor) to 5 (excellent). We also calculated and compared the color differences between cancerous and non-cancerous areas using WLE, R-LCE, and V-LCE.
RESULTS: Scores for the detection and margins were significantly higher with R-LCE than V-LCE (detection, 4.7 vs. 3.8, respectively; p < 0.001; margins, 4.3 vs. 3.0, respectively; p < 0.001). There were nonsignificant trends towards higher scores with V-LCE than WLE (detection, 3.8 vs. 3.3, respectively; p = 0.089; margins, 3.0 vs. 2.7, respectively; p = 0.130). Color differences were significantly greater with V-LCE than WLE (p < 0.001) and with R-LCE than V-LCE (p < 0.001) (39.6 with R-LCE, 29.6 with V-LCE, and 18.3 with WLE).
CONCLUSIONS: Our V-LCE performed intermediately between R-LCE and WLE in terms of lesion detection, margin delineation, and color difference, suggesting that V-LCE could potentially improve the endoscopic diagnosis of superficial ESCC.
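The abstract does not state which color-difference formula was used; a common choice in endoscopic color studies is the CIE76 ΔE*ab, the Euclidean distance between two colors in CIELAB space. A minimal sketch under that assumption (the Lab triples below are hypothetical):

```python
from math import sqrt

def delta_e76(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical cancerous vs. non-cancerous mean Lab values
print(delta_e76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0)))  # -> 5.0
```

Larger ΔE values, such as the 39.6 reported for R-LCE versus 18.3 for WLE, indicate more visually distinct lesion and background colors.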
PMID:39687978 | DOI:10.1111/jgh.16843