Deep learning
Rapid in vivo EPID image prediction using a combination of analytically calculated attenuation and AI predicted scatter
Med Phys. 2024 Nov 28. doi: 10.1002/mp.17549. Online ahead of print.
ABSTRACT
BACKGROUND: The electronic portal imaging device (EPID) can be used in vivo to detect on-treatment errors by evaluating the radiation exiting a patient. To detect deviations from the planning intent, image predictions need to be modeled based on the patient's anatomy and plan information. To date, in vivo transit images have been predicted using Monte Carlo (MC) algorithms. A deep learning approach can make predictions faster than MC and requires only patient information for training.
PURPOSE: To test the feasibility and reliability of creating a deep-learning model with patient data for predicting in vivo EPID images for IMRT treatments.
METHODS: In our approach, the in vivo EPID image was separated into contributions from primary and scattered photons. A primary photon attenuation function was determined by measuring attenuation factors for various thicknesses of solid water. The scatter component of in vivo EPID images was estimated using a convolutional neural network (CNN). The CNN input was a three-channel image composed of the non-transit EPID image and ray-tracing projections through a pretreatment CBCT. The predicted scatter component was added to the primary attenuation component to give the full predicted in vivo EPID image. We acquired 193 IMRT fields/images from 93 patients treated on the Varian Halcyon. The training:validation:test split was 133:20:40 images. Additional patient plans were delivered to anthropomorphic phantoms, yielding 75 images for further validation. We assessed model accuracy by comparing model-calculated and measured in vivo images with a gamma comparison.
RESULTS: Comparing the model-calculated and measured in vivo images gives mean gamma pass rates for the training:validation:test datasets of 95.4%:94.1%:92.9% at 3%/3 mm and 98.4%:98.4%:96.8% at 5%/3 mm. For the images delivered to phantoms, the average gamma pass rate was 96.4% (3%/3 mm criteria). In all datasets, the lower pass rates of some images were due to CBCT artifacts and patient motion occurring between the time of CBCT and treatment.
CONCLUSIONS: The developed deep-learning-based model can generate in vivo EPID images with a mean gamma pass rate greater than 92% (3%/3 mm criteria). This approach provides an alternative to MC prediction algorithms. Image predictions can be made in 30 ms on a standard GPU. In future work, image predictions from this model can be used to detect in vivo treatment errors and on-treatment changes in patient anatomy, providing an additional layer of patient-specific quality assurance.
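As a rough illustration of the gamma comparison used for evaluation, the sketch below computes a global 3%/3 mm gamma pass rate between two images on the same pixel grid by brute-force neighborhood search. It is a simplified stand-in (NumPy only, no sub-pixel interpolation, and np.roll wraps at image edges), not the evaluation code used in the study.

```python
import numpy as np

def gamma_pass_rate(ref, ev, pixel_mm=1.0, dose_pct=3.0, dta_mm=3.0, search_px=5):
    """Fraction of reference pixels with gamma <= 1 (global normalization)."""
    dose_tol = dose_pct / 100.0 * ref.max()          # global dose criterion
    gamma = np.full_like(ref, np.inf, dtype=float)
    for dy in range(-search_px, search_px + 1):      # distance-to-agreement search
        for dx in range(-search_px, search_px + 1):
            shifted = np.roll(np.roll(ev, dy, axis=0), dx, axis=1)
            dist2 = ((dy * pixel_mm) ** 2 + (dx * pixel_mm) ** 2) / dta_mm ** 2
            dose2 = (shifted - ref) ** 2 / dose_tol ** 2
            gamma = np.minimum(gamma, np.sqrt(dist2 + dose2))
    return float((gamma <= 1.0).mean())

# Hypothetical usage with a measured and a model-predicted EPID image:
# pass_rate = gamma_pass_rate(measured_epid, predicted_epid, pixel_mm=0.336)
```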
PMID:39607282 | DOI:10.1002/mp.17549
Machine Learning Models for Predicting Monoclonal Antibody Biophysical Properties from Molecular Dynamics Simulations and Deep Learning-Based Surface Descriptors
Mol Pharm. 2024 Nov 28. doi: 10.1021/acs.molpharmaceut.4c00804. Online ahead of print.
ABSTRACT
Monoclonal antibodies (mAbs) have found extensive applications and development in treating various diseases. From the pharmaceutical industry's perspective, the journey from the design and development of mAbs to clinical testing and large-scale production is a highly time-consuming and resource-intensive process. During the research and development phase, assessing and optimizing the developability of mAbs is of paramount importance to ensure their success as therapeutic drug candidates. The critical factors influencing mAb development are their biophysical properties, such as aggregation propensity, solubility, and viscosity. This study utilized a dataset comprising 12 biophysical properties of 137 antibodies from a previous study (Proc Natl Acad Sci USA. 114(5):944-949, 2017). We employed full-length antibody molecular dynamics simulations and machine learning techniques to predict experimental values for these 12 biophysical properties. Additionally, we utilized a newly developed deep learning model called DeepSP, which predicts dynamical and structural descriptors (spatial aggregation propensity and spatial charge map) for different antibody regions directly from sequence. Our findings indicate that the machine learning models we developed outperform previous methods in predicting most biophysical properties. Furthermore, the DeepSP model yields predictions similar to those based on molecular dynamics simulations while significantly reducing computational time. The code and parameters are freely available at https://github.com/Lailabcode/AbDev, and a web app, AbDev, for predicting the 12 biophysical properties is available at https://devpred.onrender.com/AbDev.
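To make the workflow concrete, here is a minimal, hypothetical sketch of the regression setup: one measured biophysical property is regressed on per-antibody descriptor features (such as MD- or DeepSP-derived descriptors) with cross-validation. The feature matrix and targets are random placeholders, not the study's data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

X = np.random.rand(137, 30)   # 137 antibodies x 30 descriptors (placeholder)
y = np.random.rand(137)       # one biophysical property, e.g., viscosity (placeholder)

model = RandomForestRegressor(n_estimators=500, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"5-fold CV R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```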
PMID:39606945 | DOI:10.1021/acs.molpharmaceut.4c00804
WelQrate: Defining the Gold Standard in Small Molecule Drug Discovery Benchmarking
ArXiv [Preprint]. 2024 Nov 14:arXiv:2411.09820v1.
ABSTRACT
While deep learning has revolutionized computer-aided drug discovery, the AI community has predominantly focused on model innovation and placed less emphasis on establishing best benchmarking practices. We posit that without a sound model evaluation framework, the AI community's efforts cannot reach their full potential, thereby slowing the progress and transfer of innovation into real-world drug discovery. Thus, in this paper, we seek to establish a new gold standard for small molecule drug discovery benchmarking, WelQrate. Specifically, our contributions are threefold: WelQrate Dataset Collection - we introduce a meticulously curated collection of 9 datasets spanning 5 therapeutic target classes; our hierarchical curation pipelines, designed by drug discovery experts, go beyond the primary high-throughput screen by leveraging additional confirmatory and counter screens along with rigorous domain-driven preprocessing, such as Pan-Assay Interference Compounds (PAINS) filtering, to ensure high-quality data; WelQrate Evaluation Framework - we propose a standardized model evaluation framework covering high-quality datasets, featurization, 3D conformation generation, evaluation metrics, and data splits, which provides reliable benchmarking for drug discovery experts conducting real-world virtual screening; Benchmarking - we evaluate model performance through various research questions using the WelQrate dataset collection, exploring the effects of different models, dataset quality, featurization methods, and data splitting strategies on the results. In summary, we recommend adopting WelQrate as the gold standard in small molecule drug discovery benchmarking. The WelQrate dataset collection, curation code, and experimental scripts are publicly available at WelQrate.org.
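As one concrete example of the kind of domain-driven preprocessing described above, the sketch below applies RDKit's built-in PAINS filter catalog to a list of SMILES strings. The molecules are placeholders, and this illustrates PAINS filtering in general rather than WelQrate's own curation code.

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

# Build a catalog containing the PAINS substructure filters
params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

smiles = ["CCO", "O=C1C=CC(=O)C=C1"]  # placeholder molecules
clean = []
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    if mol is not None and not catalog.HasMatch(mol):  # keep non-PAINS molecules
        clean.append(smi)
print(clean)
```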
PMID:39606732 | PMC:PMC11601797
ACE-Net: AutofoCus-Enhanced Convolutional Network for Field Imperfection Estimation with application to high b-value spiral Diffusion MRI
ArXiv [Preprint]. 2024 Nov 21:arXiv:2411.14630v1.
ABSTRACT
Spatiotemporal magnetic field variations from B0 inhomogeneity and diffusion-encoding-induced eddy currents can be detrimental to rapid image-encoding schemes such as spiral, EPI, and 3D-cones, resulting in undesirable image artifacts. In this work, a data-driven approach for automatic estimation of these field imperfections is developed by combining autofocus metrics with deep learning and by leveraging a compact basis representation of the expected field imperfections. The method was applied to single-shot spiral diffusion MRI at high b-values, where accurate estimates of B0 and eddy-current fields were obtained, resulting in high-quality image reconstruction without the need for additional external calibration.
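To illustrate the compact-basis idea in isolation: a field-imperfection map can be approximated as a least-squares combination of a few smooth basis functions, so an estimator only has to produce a handful of coefficients. The low-order polynomial basis and synthetic field below are assumptions for the demo, not the paper's actual basis.

```python
import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
# Six smooth basis maps: constant, linear, and quadratic terms
basis = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)

field = 10 * x - 4 * y**2 + 0.5          # placeholder B0-like field map (Hz)
A = basis.reshape(-1, basis.shape[-1])   # (ny*nx, 6) design matrix
coeffs, *_ = np.linalg.lstsq(A, field.ravel(), rcond=None)
recon = (A @ coeffs).reshape(ny, nx)
print("max abs error:", np.abs(recon - field).max())
```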
PMID:39606720 | PMC:PMC11601784
Rapid response to fast viral evolution using AlphaFold 3-assisted topological deep learning
ArXiv [Preprint]. 2024 Nov 19:arXiv:2411.12370v1.
ABSTRACT
The fast evolution of SARS-CoV-2 and other infectious viruses poses a grand challenge to the rapid response in terms of viral tracking, diagnostics, and design and manufacture of monoclonal antibodies (mAbs) and vaccines, which are both time-consuming and costly. This underscores the need for efficient computational approaches. Recent advancements, like topological deep learning (TDL), have introduced powerful tools for forecasting emerging dominant variants, yet they require deep mutational scanning (DMS) of viral surface proteins and associated three-dimensional (3D) protein-protein interaction (PPI) complex structures. We propose an AlphaFold 3 (AF3)-assisted multi-task topological Laplacian (MT-TopLap) strategy to address this need. MT-TopLap combines deep learning with topological data analysis (TDA) models, such as persistent Laplacians (PL), to extract detailed topological and geometric characteristics of PPIs, thereby enhancing the prediction of DMS and binding free energy (BFE) changes upon virus mutation. Validation with four experimental DMS datasets of SARS-CoV-2 spike receptor-binding domain (RBD) and human angiotensin-converting enzyme-2 (ACE2) complexes indicates that our AF3-assisted MT-TopLap strategy maintains robust performance, with only an average 1.1% decrease in Pearson correlation coefficients (PCC) and an average 9.3% increase in root mean square errors (RMSE) compared with the use of experimental structures. Additionally, AF3-assisted MT-TopLap achieved a PCC of 0.81 when tested with a SARS-CoV-2 HK.3 variant DMS dataset, confirming its capability to accurately predict BFE changes and adapt to new experimental data, thereby showcasing its potential for rapid and effective response to fast viral evolution.
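As a heavily simplified taste of Laplacian-based topological features (not the paper's persistent-Laplacian machinery), the sketch below builds distance graphs on a placeholder point cloud at several filtration radii and records two spectral quantities per radius, the kind of multiscale feature vector a downstream model could consume.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((30, 3))                            # placeholder atom coordinates
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)

features = []
for r in (0.3, 0.5, 0.7):                            # filtration radii
    adj = ((d <= r) & (d > 0)).astype(float)         # edges within radius r
    lap = np.diag(adj.sum(1)) - adj                  # combinatorial graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(lap))
    n_zero = float((eig < 1e-8).sum())               # number of connected components
    gap = float(eig[eig > 1e-8].min(initial=0.0))    # smallest nonzero eigenvalue
    features += [n_zero, gap]
print(features)  # multiscale spectral features -> input to an ML model
```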
PMID:39606716 | PMC:PMC11601794
Mixed Effects Deep Learning for the interpretable analysis of single cell RNA sequencing data by quantifying and visualizing batch effects
ArXiv [Preprint]. 2024 Nov 13:arXiv:2411.06635v2.
ABSTRACT
Single-cell RNA sequencing (scRNA-seq) data are often confounded by technical or biological batch effects. Existing deep learning models mitigate these effects but often discard batch-specific information, potentially losing valuable biological insights. We propose a Mixed Effects Deep Learning (MEDL) autoencoder framework that separately models batch-invariant (fixed effects) and batch-specific (random effects) components. By decoupling batch-invariant biological states from batch variations, our framework integrates both into predictive models. Our approach also generates 2D visualizations of how the same cell appears across batches, enhancing interpretability. Retaining both fixed and random effect latent spaces improves classification accuracy. We applied our framework to three datasets spanning the cardiovascular system (Healthy Heart), Autism Spectrum Disorder (ASD), and Acute Myeloid Leukemia (AML). With 147 batches in the Healthy Heart dataset, far exceeding typical numbers, we tested our framework's ability to handle many batches. In the ASD dataset, our approach captured donor heterogeneity between autistic and healthy individuals. In the AML dataset, it distinguished donor heterogeneity despite missing cell types and diseased donors exhibiting both healthy and malignant cells. These results highlight our framework's ability to characterize fixed and random effects, enhance batch effect visualization, and improve prediction accuracy across diverse datasets.
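A minimal PyTorch sketch of the architectural idea, assuming a simple fully connected autoencoder: the encoder output is split into a batch-invariant (fixed effects) latent and a batch-specific (random effects) latent, with an auxiliary head on the random latent for batch prediction. The dimensions are placeholders, and the training losses that enforce the separation in the actual MEDL framework are omitted.

```python
import torch
import torch.nn as nn

class MEDLSketch(nn.Module):
    def __init__(self, n_genes=2000, d_fixed=32, d_random=8, n_batches=147):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU())
        self.to_fixed = nn.Linear(256, d_fixed)        # batch-invariant latent
        self.to_random = nn.Linear(256, d_random)      # batch-specific latent
        self.dec = nn.Sequential(nn.Linear(d_fixed + d_random, 256),
                                 nn.ReLU(), nn.Linear(256, n_genes))
        self.batch_head = nn.Linear(d_random, n_batches)  # random latent predicts batch

    def forward(self, x):
        h = self.enc(x)
        z_f, z_r = self.to_fixed(h), self.to_random(h)
        recon = self.dec(torch.cat([z_f, z_r], dim=1))
        return recon, z_f, self.batch_head(z_r)

x = torch.randn(4, 2000)                  # placeholder cells x genes
recon, z_fixed, batch_logits = MEDLSketch()(x)
```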
PMID:39606715 | PMC:PMC11601787
Employing Xception convolutional neural network through high-precision MRI analysis for brain tumor diagnosis
Front Med (Lausanne). 2024 Nov 8;11:1487713. doi: 10.3389/fmed.2024.1487713. eCollection 2024.
ABSTRACT
The classification of brain tumors from medical imaging is pivotal for accurate medical diagnosis but remains challenging due to the intricate morphologies of tumors and the precision required. Existing methodologies, including manual MRI evaluations and computer-assisted systems, primarily utilize conventional machine learning and pre-trained deep learning models. These systems often suffer from overfitting due to modest medical imaging datasets and exhibit limited generalizability on unseen data, alongside substantial computational demands that hinder real-time application. To enhance diagnostic accuracy and reliability, this research introduces an advanced model utilizing the Xception architecture, enriched with additional batch normalization and dropout layers to mitigate overfitting. The model is further refined by leveraging large-scale data through transfer learning and employing a customized dense-layer setup tailored to distinguish between meningioma, glioma, and pituitary tumor categories. This hybrid method not only capitalizes on the strengths of pre-trained network features but also adapts training to the targeted dataset, thereby improving the model's generalization across different imaging conditions. Demonstrating a marked improvement in diagnostic performance, the proposed model achieves a classification accuracy of 98.039% on the test dataset, with precision and recall above 96% for all categories. These results underscore the model's potential as a reliable diagnostic tool in clinical settings, significantly surpassing existing diagnostic protocols for brain tumors.
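A hedged Keras sketch of the described setup: an ImageNet-pretrained Xception backbone with added batch-normalization and dropout layers and a custom dense head for the three tumor classes. The head sizes and dropout rates are illustrative choices, not the paper's exact configuration.

```python
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False  # transfer learning: freeze pretrained features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.BatchNormalization(),   # added to stabilize the new head
    tf.keras.layers.Dropout(0.5),           # added to mitigate overfitting
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # meningioma/glioma/pituitary
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```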
PMID:39606635 | PMC:PMC11601128 | DOI:10.3389/fmed.2024.1487713
Enhanced skin cancer diagnosis: a deep feature extraction-based framework for the multi-classification of skin cancer utilizing dermoscopy images
Front Med (Lausanne). 2024 Nov 13;11:1495576. doi: 10.3389/fmed.2024.1495576. eCollection 2024.
ABSTRACT
Skin cancer is one of the most common, deadly, and widespread cancers worldwide. Early detection of skin cancer can reduce death rates. A dermatologist or primary care physician can use a dermatoscope to visually inspect a patient and diagnose skin disorders. Early detection of skin cancer is essential, and to confirm the diagnosis and determine the most appropriate course of therapy, patients should undergo a biopsy and histological evaluation. Significant advancements have been made recently, with the accuracy of automated deep learning systems for skin cancer categorization now matching that of dermatologists. Though progress has been made, a widely accepted, clinically reliable method for diagnosing skin cancer is still lacking. This article presents four variants of the Convolutional Neural Network (CNN) model (original CNN, CNN without batch normalization, CNN with few filters, and strided CNN) for the classification and prediction of skin cancer in lesion images, with the aim of helping physicians in their diagnosis. It further presents the hybrid models CNN-Support Vector Machine (CNNSVM), CNN-Random Forest (CNNRF), and CNN-Logistic Regression (CNNLR), using grid search to select the best parameters. Exploratory Data Analysis (EDA) and random oversampling were performed to normalize and balance the data. The CNN models (original CNN, strided CNN, and CNNSVM) obtained an accuracy rate of 98%, while CNNRF and CNNLR obtained an accuracy rate of 99% for skin cancer prediction on the HAM10000 dataset of 10,015 dermoscopic images. The encouraging outcomes demonstrate the effectiveness of the proposed method and suggest that further improving skin cancer diagnosis requires including the patient's metadata alongside the lesion image.
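To make the hybrid CNN-classifier idea concrete, here is a minimal sketch assuming a trained CNN is used as a frozen feature extractor whose outputs feed a grid-searched SVM. The feature array, labels, and parameter grid are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# In practice the features would come from the CNN's penultimate layer, e.g.:
# feats = tf.keras.Model(cnn.input, cnn.layers[-2].output).predict(images)
feats = np.random.rand(200, 128)          # placeholder CNN features
labels = np.random.randint(0, 7, 200)     # HAM10000 has 7 lesion classes

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]},
                    cv=5, scoring="accuracy")
grid.fit(feats, labels)
print(grid.best_params_, grid.best_score_)
```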
PMID:39606634 | PMC:PMC11601079 | DOI:10.3389/fmed.2024.1495576
Automated lung segmentation on chest MRI in children with cystic fibrosis
Front Med (Lausanne). 2024 Nov 12;11:1401473. doi: 10.3389/fmed.2024.1401473. eCollection 2024.
ABSTRACT
INTRODUCTION: Segmentation of lung structures in medical imaging is crucial for the application of automated post-processing steps on lung diseases like cystic fibrosis (CF). Recently, machine learning methods, particularly neural networks, have demonstrated remarkable improvements, often outperforming conventional segmentation methods. Nonetheless, challenges still remain when attempting to segment various imaging modalities and diseases, especially when the visual characteristics of pathologic findings significantly deviate from healthy tissue.
METHODS: Our study focuses on imaging of pediatric CF patients (mean age ± standard deviation, 7.50 ± 4.6 years), utilizing deep learning-based methods for automated lung segmentation from chest magnetic resonance imaging (MRI). A total of 165 standardized annual surveillance MRI scans from 84 patients with CF were segmented using the nnU-Net framework. Patient cases represented a range of disease severities and ages. The nnU-Net was trained and evaluated on three MRI sequences (BLADE, VIBE, and HASTE) that are highly relevant for the evaluation of CF-induced lung changes. We used 40 cases per sequence for training and 15 cases per sequence for testing, evaluating with the Sørensen-Dice score, Pearson's correlation coefficient (r), a segmentation questionnaire, and slice-based analysis.
RESULTS: The results demonstrated a high level of segmentation performance across all sequences, with only minor differences in the mean Dice coefficient: BLADE (0.96 ± 0.05), VIBE (0.96 ± 0.04), and HASTE (0.95 ± 0.05). Segmentation quality was also consistent across disease severities, patient ages, and sizes. Manual evaluation identified specific challenges, such as incomplete segmentations near the diaphragm and in dorsal regions. Validation on a separate, external dataset of nine toddlers (2-24 months) demonstrated the generalizability of the trained model, which achieved a Dice coefficient of 0.85 ± 0.03.
DISCUSSION AND CONCLUSION: Overall, our study demonstrates the feasibility and effectiveness of using nnU-Net for automated segmentation of lung halves in pediatric CF patients, showing promising directions for advanced image analysis techniques to assist in clinical decision-making and monitoring of CF lung disease progression. Despite these achievements, further improvements are needed to address specific segmentation challenges and enhance generalizability.
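For reference, the Sørensen-Dice score used above is straightforward to compute for binary masks; a standard implementation:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Sørensen-Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / (pred.sum() + truth.sum() + eps))

# Example with two overlapping toy masks
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[4:9, 4:9] = True
print(round(dice(a, b), 3))
```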
PMID:39606627 | PMC:PMC11600534 | DOI:10.3389/fmed.2024.1401473
Effective automatic classification methods via deep learning for myopic maculopathy
Front Med (Lausanne). 2024 Nov 13;11:1492808. doi: 10.3389/fmed.2024.1492808. eCollection 2024.
ABSTRACT
BACKGROUND: Pathologic myopia (PM) associated with myopic maculopathy (MM) is a significant cause of visual impairment, especially in East Asia, where its prevalence has surged. Early detection and accurate classification of myopia-related fundus lesions are critical for managing PM. Traditional clinical analysis of fundus images is time-consuming and dependent on specialist expertise, driving the need for automated, accurate diagnostic tools.
METHODS: This study developed a deep learning-based system for classifying five types of MM using color fundus photographs. Five architectures were utilized: ResNet50, EfficientNet-B0, Vision Transformer (ViT), Contrastive Language-Image Pre-Training (CLIP), and RETFound. An ensemble learning approach with weighted voting was employed to enhance model performance. The models were trained on a dataset of 2,159 annotated images from Shenzhen Eye Hospital, with performance evaluated using accuracy, sensitivity, specificity, F1-Score, Cohen's Kappa, and area under the receiver operating characteristic curve (AUC).
RESULTS: The ensemble model achieved superior performance across all metrics, with an accuracy of 95.4% (95% CI: 93.0-97.0%), sensitivity of 95.4% (95% CI: 86.8-97.5%), specificity of 98.9% (95% CI: 97.1-99.5%), F1-Score of 95.3% (95% CI: 93.2-97.2%), Kappa value of 0.976 (95% CI: 0.957-0.989), and AUC of 0.995 (95% CI: 0.992-0.998). The voting ensemble method demonstrated robustness and high generalization ability in classifying complex lesions, outperforming individual models.
CONCLUSION: The ensemble deep learning system significantly enhances the accuracy and reliability of MM classification. This system holds potential for assisting ophthalmologists in early detection and precise diagnosis, thereby improving patient outcomes. Future work could focus on expanding the dataset, incorporating image quality assessment, and optimizing the ensemble algorithm for better efficiency and broader applicability.
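A minimal sketch of weighted soft voting, the ensemble scheme described above: per-model class probabilities are combined with normalized weights and the argmax is taken. The weights and probability arrays below are illustrative placeholders, not the tuned values from the paper.

```python
import numpy as np

def weighted_vote(prob_list, weights):
    """prob_list: list of (n_samples, n_classes) probability arrays."""
    w = np.asarray(weights, float) / np.sum(weights)
    avg = sum(wi * p for wi, p in zip(w, prob_list))  # weighted average of probs
    return avg.argmax(axis=1)

# 5 models, 8 samples, 5 MM classes (random placeholder probabilities)
probs = [np.random.dirichlet(np.ones(5), size=8) for _ in range(5)]
pred = weighted_vote(probs, weights=[1.0, 1.2, 0.8, 1.0, 1.5])
print(pred)
```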
PMID:39606624 | PMC:PMC11598530 | DOI:10.3389/fmed.2024.1492808
Sensitive Quantification of Cerebellar Speech Abnormalities Using Deep Learning Models
IEEE Access. 2024;12:62328-62340. doi: 10.1109/access.2024.3393243. Epub 2024 Apr 24.
ABSTRACT
Objective, sensitive, and meaningful disease assessments are critical to support clinical trials and clinical care. Speech changes are among the earliest and most evident manifestations of cerebellar ataxias. This work aims to develop models that can accurately identify and quantify clinical signs of ataxic speech. We use convolutional neural networks to capture the motor speech phenotype of cerebellar ataxia based on time and frequency partial derivatives of log-mel spectrogram representations of speech. We train classification models to distinguish patients with ataxia from healthy controls, as well as regression models to estimate disease severity. Classification models accurately distinguished healthy controls from individuals with ataxia, including ataxia participants whom clinicians rated as having no detectable clinical deficits in speech. Regression models produced accurate estimates of disease severity, measured subclinical signs of ataxia, and captured disease progression over time. Convolutional networks trained on time and frequency partial derivatives of the speech signal can detect subclinical speech changes in ataxias and sensitively measure disease change over time. Learned speech analysis models have the potential to aid early detection of disease signs in ataxias and to provide sensitive, low-burden assessment tools in support of clinical trials and neurological care.
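The input representation described above can be sketched in a few lines, assuming librosa for the log-mel spectrogram and simple finite differences for the partial derivatives; the file path and parameters are placeholders.

```python
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=16000)   # placeholder recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel)

d_time = np.gradient(log_mel, axis=1)   # partial derivative along time frames
d_freq = np.gradient(log_mel, axis=0)   # partial derivative along mel bins
x = np.stack([d_time, d_freq], axis=0)  # 2-channel CNN input (channels, mels, frames)
```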
PMID:39606584 | PMC:PMC11601984 | DOI:10.1109/access.2024.3393243
Sex differences in brain MRI using deep learning toward fairer healthcare outcomes
Front Comput Neurosci. 2024 Nov 13;18:1452457. doi: 10.3389/fncom.2024.1452457. eCollection 2024.
ABSTRACT
This study leverages deep learning to analyze sex differences in brain MRI data, aiming to further advance fairness in medical imaging. We employed 3D T1-weighted Magnetic Resonance images from four diverse datasets: Calgary-Campinas-359, OASIS-3, Alzheimer's Disease Neuroimaging Initiative, and Cambridge Center for Aging and Neuroscience, ensuring a balanced representation of sexes and a broad demographic scope. Our methodology focused on minimal preprocessing to preserve the integrity of brain structures, utilizing a Convolutional Neural Network model for sex classification. The model achieved an accuracy of 87% on the test set without employing total intracranial volume (TIV) adjustment techniques. We observed that while the model exhibited biases at extreme brain sizes, it performed with less bias when the TIV distributions overlapped more. Saliency maps were used to identify brain regions significant in sex differentiation, revealing that certain supratentorial and infratentorial regions were important for predictions. Furthermore, our interdisciplinary team, comprising machine learning specialists and a radiologist, ensured diverse perspectives in validating the results. The detailed investigation of sex differences in brain MRI in this study, highlighted by the sex differences map, offers valuable insights into sex-specific aspects of medical imaging and could aid in developing sex-based bias mitigation strategies, contributing to the future development of fair AI algorithms. Awareness of the brain's differences between sexes enables more equitable AI predictions, promoting fairness in healthcare outcomes. Our code and saliency maps are available at https://github.com/mahsadibaji/sex-differences-brain-dl.
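Saliency maps of the kind used here are commonly computed as input gradients; below is a vanilla-gradient sketch in PyTorch, where `model` and `volume` are placeholders for the trained 3D CNN and a preprocessed T1-weighted image, not the study's code.

```python
import torch

def saliency_map(model, volume, target_class):
    """Voxel-wise importance via |d(class score)/d(input)|."""
    model.eval()
    x = volume.clone().requires_grad_(True)   # shape (1, 1, D, H, W)
    score = model(x)[0, target_class]
    score.backward()                          # gradient of score w.r.t. input voxels
    return x.grad.abs().squeeze()

# Hypothetical usage:
# sal = saliency_map(model, volume, target_class=1)
```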
PMID:39606583 | PMC:PMC11598355 | DOI:10.3389/fncom.2024.1452457
A Study on Automatic O-RADS Classification of Sonograms of Ovarian Adnexal Lesions Based on Deep Convolutional Neural Networks
Ultrasound Med Biol. 2024 Nov 26:S0301-5629(24)00430-7. doi: 10.1016/j.ultrasmedbio.2024.11.009. Online ahead of print.
ABSTRACT
OBJECTIVE: This study explored a new method for automatic O-RADS classification of sonograms based on a deep convolutional neural network (DCNN).
METHODS: A development dataset (DD) of 2,455 2D grayscale sonograms of 870 ovarian adnexal lesions and an intertemporal validation dataset (IVD) of 426 sonograms of 280 lesions were collected and classified according to O-RADS v2022 (categories 2-5) by three senior sonographers. Classification results were used for training when a two-tailed z-test verified them to be consistent with the O-RADS v2022 malignancy rates, indicating diagnostic performance comparable to that of a previous study; otherwise, the classification was repeated by two different sonographers. The DD was used to develop three DCNN models (ResNet34, DenseNet121, and ConvNeXt-Tiny) employing transfer learning techniques. Model performance was assessed for accuracy, precision, and F1 score, among other metrics. The optimal model was selected, validated over time using the IVD, and used to analyze whether model assistance improved the efficiency of O-RADS classification for three sonographers with different years of experience.
RESULTS: The proportion of malignant tumors in the DD and IVD in each O-RADS-defined risk category was verified using a two-tailed z-test. Malignant lesions (O-RADS categories 4 and 5) were diagnosed in the DD and IVD with sensitivities of 0.949 and 0.962 and specificities of 0.892 and 0.842, respectively. ResNet34, DenseNet121, and ConvNeXt-Tiny had overall accuracies of 0.737, 0.752, and 0.878, respectively, for sonogram prediction in the DD. The ConvNeXt-Tiny model's accuracy for sonogram prediction in the IVD was 0.859, with no significant difference between test sets. Model assistance significantly reduced the three sonographers' O-RADS classification time (Cohen's d = 5.75).
CONCLUSION: ConvNeXt-Tiny showed robust and stable performance in classifying O-RADS categories 2-5, improving sonographers' classification efficiency.
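A hedged torchvision sketch of the transfer-learning setup for the best-performing model: an ImageNet-pretrained ConvNeXt-Tiny with its classification head replaced for the four O-RADS categories. Fine-tuning details (augmentation, optimizer, schedule) are the paper's own and are omitted here.

```python
import torch.nn as nn
from torchvision import models

model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
in_features = model.classifier[2].in_features   # 768 for ConvNeXt-Tiny
model.classifier[2] = nn.Linear(in_features, 4) # O-RADS categories 2, 3, 4, 5
```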
PMID:39603844 | DOI:10.1016/j.ultrasmedbio.2024.11.009
Automated Assessment of Left Ventricular Filling Pressures From Coronary Angiograms With Video-Based Deep Learning Algorithms
JACC Cardiovasc Interv. 2024 Nov 25;17(22):2709-2711. doi: 10.1016/j.jcin.2024.07.047.
NO ABSTRACT
PMID:39603788 | DOI:10.1016/j.jcin.2024.07.047
Assessing greenspace and cardiovascular health through deep-learning analysis of street-view imagery in a cohort of US children
Environ Res. 2024 Nov 25:120459. doi: 10.1016/j.envres.2024.120459. Online ahead of print.
ABSTRACT
BACKGROUND: Accurately capturing individuals' experiences with greenspace at ground level can provide valuable insights into its impact on children's health. However, most previous research has relied on coarse satellite-based measurements.
METHODS: We utilized cardiovascular health (CVH) and residential address data from Project Viva, a US-based pre-birth cohort, tracking participants from mid-childhood to late adolescence (2007-21). A deep learning segmentation algorithm was applied to street-view images across the US to estimate the percentages of street-view trees, grass, and other greenspace (flowers/fields/plants). Exposure estimates were derived by linking street-view greenspace metrics within 500 m of participants' residences during mid-childhood, early adolescence, and late adolescence. CVH scores (range 0-100; higher scores indicate better CVH) were calculated using the American Heart Association's Life's Essential 8 algorithm at these three time points, incorporating four biomedical components (body weight, blood lipids, blood glucose, blood pressure) and four behavioral components (diet, physical activity, nicotine exposure, sleep). Linear regression models were used to examine cross-sectional and cumulative associations between street-view greenspace metrics and CVH scores. Generalized estimating equation models were used to examine associations between street-view greenspace metrics and changes in CVH scores across the three time points. All models were adjusted for individual and neighborhood-level confounders.
RESULTS: Adjusting for confounders, a one-SD increase in street-view trees within 500 m of residence was cross-sectionally associated with a 1.92-point (95% CI: 0.50, 3.35) higher CVH score in late adolescence, but not in mid-childhood or early adolescence. Longitudinally, street-view greenspace metrics at baseline (either mid-childhood or early adolescence) were not associated with changes in CVH scores at the same or subsequent time points. Cumulative street-view greenspace metrics across the three time points were also not associated with CVH scores in late adolescence.
CONCLUSION AND RELEVANCE: In this US cohort of children, we observed little evidence of associations between street-level greenspace and children's CVH, though the impact may vary with children's growth stage.
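A sketch of the cross-sectional model described above, assuming statsmodels OLS with a standardized street-view tree metric and confounders; the variable names and synthetic data are placeholders for the cohort's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200                                              # placeholder cohort
df = pd.DataFrame({
    "trees_z": rng.normal(size=n),                   # standardized % street-view trees
    "age": rng.integers(15, 19, size=n),
    "nbhd_ses": rng.normal(size=n),                  # neighborhood-level confounder
})
df["cvh"] = 70 + 1.9 * df["trees_z"] + rng.normal(scale=8, size=n)  # synthetic outcome

fit = smf.ols("cvh ~ trees_z + age + nbhd_ses", data=df).fit()
print(fit.params["trees_z"], fit.conf_int().loc["trees_z"].tolist())
```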
PMID:39603586 | DOI:10.1016/j.envres.2024.120459
Structural analysis and intelligent classification of clinical trial eligibility criteria based on deep learning and medical text mining
J Biomed Inform. 2024 Nov 25:104753. doi: 10.1016/j.jbi.2024.104753. Online ahead of print.
ABSTRACT
OBJECTIVE: To enhance the efficiency, quality, and innovation capability of clinical trials, this paper introduces a novel model called CTEC-AC (Clinical Trial Eligibility Criteria Automatic Classification), aimed at structuring clinical trial eligibility criteria into computationally explainable classifications.
METHODS: We obtained detailed information on the latest 2,500 clinical trials from ClinicalTrials.gov, generating over 20,000 eligibility criteria data entries. To enhance the expressiveness of these criteria, we integrated two powerful methods: ClinicalBERT and MetaMap. The resulting enhanced features were used as input for a hierarchical clustering algorithm. Post-processing included expert validation of the algorithm's output to ensure the accuracy of the constructed annotated eligibility text corpus. Ultimately, our model was employed to automate the classification of eligibility criteria.
RESULTS: We identified 31 distinct categories to summarize the eligibility criteria written by clinical researchers and uncovered common themes in how these criteria are expressed. Using our automated classification model on a labeled dataset, we achieved a macro-average F1 score of 0.94.
CONCLUSION: This work automatically extracts structured representations from unstructured eligibility criteria text, significantly advancing the informatization of clinical trials. This, in turn, can enhance the intelligence of automated participant recruitment for clinical researchers.
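A sketch of the embedding-and-clustering step, assuming the public Bio_ClinicalBERT checkpoint for sentence embeddings and SciPy's Ward-linkage hierarchical clustering; the criteria strings are placeholders, and the paper's fusion with MetaMap-derived concepts is not reproduced here.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from scipy.cluster.hierarchy import fcluster, linkage

tok = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
bert = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

criteria = ["Age >= 18 years", "No prior chemotherapy", "ECOG status 0-1"]
with torch.no_grad():
    enc = tok(criteria, padding=True, truncation=True, return_tensors="pt")
    emb = bert(**enc).last_hidden_state[:, 0]        # [CLS] embeddings, (n, 768)

labels = fcluster(linkage(emb.numpy(), method="ward"), t=2, criterion="maxclust")
print(labels)                                        # cluster label per criterion
```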
PMID:39603550 | DOI:10.1016/j.jbi.2024.104753
Targeted isolation and AI-based analysis of edible fungal polysaccharides: Emphasizing tumor immunological mechanisms and future prospects as mycomedicines
Int J Biol Macromol. 2024 Nov 25:138089. doi: 10.1016/j.ijbiomac.2024.138089. Online ahead of print.
ABSTRACT
Edible fungal polysaccharides have emerged as significant bioactive compounds with diverse therapeutic potentials, including notable anti-tumor effects. Derived from various fungal sources, these polysaccharides exhibit complex biological activities such as antioxidant, immune-modulatory, anti-inflammatory, and anti-obesity properties. In cancer therapy, members of this family show promise in inhibiting tumor growth and metastasis through mechanisms like apoptosis induction and modulation of the immune system. This review provides a detailed examination of contemporary techniques for the targeted isolation and structural elucidation of edible fungal polysaccharides. Additionally, the review highlights the application of advanced artificial intelligence (AI) methodologies to facilitate efficient and accurate structural analysis of these polysaccharides. It also explores their interactions with immune cells within the tumor microenvironment and their role in modulating gut microbiota, which can enhance overall immune function and potentially reduce cancer risks. Clinical studies further demonstrate their efficacy in various cancer treatments. Overall, edible fungal polysaccharides represent a promising frontier in cancer therapy, leveraging their natural origins and minimal toxicity to offer novel strategies for comprehensive cancer management.
PMID:39603293 | DOI:10.1016/j.ijbiomac.2024.138089
NExpR: Neural Explicit Representation for fast arbitrary-scale medical image super-resolution
Comput Biol Med. 2024 Nov 26;184:109354. doi: 10.1016/j.compbiomed.2024.109354. Online ahead of print.
ABSTRACT
Medical images often require rescaling to various spatial resolutions to support interpretation at different levels. Conventional deep learning-based image super-resolution (SR) enhances resolution at a fixed scale. Implicit neural representation (INR) is a promising way of achieving arbitrary-scale image SR. However, existing INR-based methods require repeated execution of the neural network (NN), which is slow and inefficient. In this paper, we present Neural Explicit Representation (NExpR) for fast arbitrary-scale medical image SR. Our algorithm represents an image with an explicit analytical function, whose input is the low-resolution image and whose output is the parameterization of the analytical function. After obtaining the analytical representation through a single NN inference, SR images of arbitrary scale can be derived by evaluating the explicit function at the desired coordinates. Because of the explicit analytical representation, NExpR is significantly faster than INR-based methods. In addition to speed, our method achieves image quality on par with or better than strong competitors. Extensive experiments on Magnetic Resonance Imaging (MRI) datasets, including ProstateX, fastMRI, and our in-house clinical prostate dataset, as well as a Computed Tomography (CT) dataset, the Medical Segmentation Decathlon (MSD) liver dataset, demonstrate the superiority of our method. Our method reduces the rescaling time from the order of 1 ms to the order of 0.01 ms, achieving an over 100× speedup without losing image quality. Code is available at https://github.com/Calvin-Pang/NExpR.
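A toy illustration of the explicit-representation idea (not NExpR's actual function family): one network inference yields coefficients of an analytic function, which can then be sampled at any continuous coordinates, and thus any scale, without further network calls.

```python
import numpy as np

coeffs = np.random.rand(6)          # stand-in for a single NN inference's output

def f(x, y, c):
    """Toy analytic patch: f(x,y) = a + bx + cy + dxy + ex^2 + gy^2."""
    a, b, cc, d, e, g = c
    return a + b * x + cc * y + d * x * y + e * x**2 + g * y**2

# Arbitrary-scale sampling: evaluate the explicit function on any grid,
# with no further network execution required.
xs, ys = np.meshgrid(np.linspace(0, 1, 512), np.linspace(0, 1, 512))
sr_patch = f(xs, ys, coeffs)
```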
PMID:39602975 | DOI:10.1016/j.compbiomed.2024.109354
Recognition analysis of spiral and straight-line drawings in tremor assessment
Biomed Tech (Berl). 2024 Nov 28. doi: 10.1515/bmt-2023-0080. Online ahead of print.
ABSTRACT
OBJECTIVES: No standard, objective diagnostic procedure exists for most neurological diseases that cause tremor. Therefore, drawing tests have been widely analyzed to support diagnostic procedures. In this study, we compare Archimedean spiral and straight-line drawings, examine the possibilities of their joint application, and assess the relevance of rendering pressure in the drawings for recognizing Parkinsonism and cerebellar dysfunction. We further attempted to use an automatic processing and evaluation system.
METHODS: Digital images were generated from the raw data, either including or omitting the pressure data. Pre-trained models (MobileNet, Xception, ResNet50) and a Baseline (from scratch) model were applied for binary classification with a k-fold cross-validation procedure. Predictions were analyzed separately by drawing task and in combination.
RESULTS: The neurological diseases presented here can be recognized with a significantly higher macro F1 score from the spiral drawing task (up to 95.7%) than from the line task (up to 84.3%). A significant improvement can be achieved when the spiral is supplemented with the line drawing. Including pressure in the images did not yield a significant information gain.
CONCLUSIONS: The spiral drawing has robust recognition power and can be supplemented with a line drawing task to increase correct recognition. Moreover, with this methodology, the X and Y coordinates appeared sufficient without pressure data.
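A sketch of one way to render such drawings as model-ready images, assuming matplotlib, with pen pressure optionally encoded as line width; the spiral trace and pressure signal below are synthetic placeholders, and the paper's exact rendering may differ.

```python
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 6 * np.pi, 800)               # placeholder spiral trace
x, y = t * np.cos(t), t * np.sin(t)
pressure = 0.5 + 0.5 * np.abs(np.sin(3 * t))     # placeholder pen pressure

fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # 224x224 px image
for i in range(len(t) - 1):                      # line width encodes pressure
    ax.plot(x[i:i + 2], y[i:i + 2], color="black", lw=0.5 + 2.0 * pressure[i])
ax.axis("off")
fig.savefig("spiral_with_pressure.png")          # drop the lw term for a pressure-free image
```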
PMID:39602901 | DOI:10.1515/bmt-2023-0080
Residual Pix2Pix networks: streamlining PET/CT imaging process by eliminating CT energy conversion
Biomed Phys Eng Express. 2024 Nov 27. doi: 10.1088/2057-1976/ad97c2. Online ahead of print.
ABSTRACT
OBJECTIVE: Attenuation correction of PET data is commonly conducted by using a secondary imaging technique to produce attenuation maps. The customary approach, which employs CT images, necessitates energy conversion. The present study introduces a novel deep learning-based method that obviates the requirement for CT images and energy conversion.
METHODS: This study employs a residual Pix2Pix network to generate attenuation-corrected PET images, using 4,033 2D PET images of 37 healthy adult brains for training and testing. The model, implemented in TensorFlow and Keras, was evaluated against CT-based attenuation-corrected (CT-AC) images by comparing image similarity (PSNR and SSIM), intensity correlation (a 2D histogram of pixel intensities), and intensity distribution. Differences in standardized uptake values (SUV) were used to assess the model's performance relative to the CT-AC method.
RESULTS: The residual Pix2Pix network demonstrated strong agreement with CT-based attenuation correction, yielding MAE, MSE, PSNR, and MS-SSIM values of 3×10⁻³, 2×10⁻⁴, 38.859, and 0.99, respectively. The model showed a negligible mean SUV difference of 8×10⁻⁴ (P = 0.10), indicating its accuracy in PET image correction, and a strong correlation with the CT-based method (R² = 0.99). The findings indicate that this approach surpasses the conventional method in precision and efficacy.
CONCLUSIONS: The proposed residual Pix2Pix framework enables accurate and feasible attenuation correction of brain ¹⁸F-FDG PET without CT, although clinical trials are required to evaluate its clinical performance. The PET images reconstructed by the framework show low errors relative to the accepted test reliability of PET/CT, indicating high quantitative similarity.
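A minimal PyTorch sketch of the "residual Pix2Pix" idea: a Pix2Pix-style generator that predicts a correction added back to the non-attenuation-corrected input (output = input + residual). The toy encoder-decoder below stands in for a full U-Net generator, and the PatchGAN discriminator and adversarial + L1 training are omitted.

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(            # toy encoder-decoder stand-in for a U-Net
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1),
        )

    def forward(self, pet_nac):
        return pet_nac + self.net(pet_nac)   # residual connection to the input

# Non-attenuation-corrected PET slice -> attenuation-corrected estimate
out = ResidualGenerator()(torch.randn(1, 1, 128, 128))
```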
PMID:39602833 | DOI:10.1088/2057-1976/ad97c2