Deep learning
Enhancing Privacy-Preserving Cancer Classification with Convolutional Neural Networks
Pac Symp Biocomput. 2025;30:565-579.
ABSTRACT
Precision medicine significantly improves patient prognosis by offering personalized treatments. For metastatic cancer in particular, incorporating the primary tumor location into the diagnostic process greatly improves survival rates. However, traditional methods rely on human expertise, requiring substantial time and financial resources. To address this challenge, Machine Learning (ML) and Deep Learning (DL) have proven particularly effective. Yet their application to medical data, especially genomic data, must account for privacy, given the highly sensitive nature of the data. In this paper, we propose OGHE, a convolutional neural network-based approach for privacy-preserving cancer classification designed to exploit spatial patterns in genomic data while maintaining confidentiality by means of Homomorphic Encryption (HE). This encryption scheme allows processing directly on encrypted data, guaranteeing confidentiality during the entire computation. OGHE is designed specifically for privacy-preserving applications, taking HE limitations into account from the outset and introducing an efficient packing mechanism to minimize the computational overhead HE introduces. Additionally, OGHE relies on a novel feature selection method, VarScout, designed to extract the most significant features through clustering and occurrence analysis while preserving inherent spatial patterns. Coupled with VarScout, OGHE has been compared with existing privacy-preserving solutions for encrypted cancer classification on the iDash 2020 dataset, demonstrating its effectiveness in providing accurate privacy-preserving cancer classification and reducing latency thanks to our packing mechanism. The code is released to the scientific community.
PMID:39670396
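The packing mechanism the abstract credits for reduced latency can be illustrated without any cryptography: in CKKS-style HE, many values share one ciphertext's slots, so a single homomorphic operation acts on all of them at once. A minimal pure-numpy sketch of the idea (no encryption; `pack`, `batched_dot`, and the slot count are illustrative, not the paper's API):

```python
import numpy as np

def pack(batch, n_slots):
    """Pack a batch of feature vectors into one flat 'slot' vector, emulating
    CKKS-style SIMD batching: one multiply-add then acts on every sample."""
    flat = np.concatenate(batch)
    assert flat.size <= n_slots, "batch does not fit in one ciphertext"
    return np.pad(flat, (0, n_slots - flat.size))

def batched_dot(packed, weights, dim, n_samples):
    # Elementwise multiply (cheap under HE), then per-sample slot sums
    # (rotations + additions under HE).
    prod = packed[: n_samples * dim] * np.tile(weights, n_samples)
    return prod.reshape(n_samples, dim).sum(axis=1)
```

Under HE, the elementwise multiply is one ciphertext operation regardless of how many samples were packed, which is where the latency savings come from.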
Investigating the Differential Impact of Psychosocial Factors by Patient Characteristics and Demographics on Veteran Suicide Risk Through Machine Learning Extraction of Cross-Modal Interactions
Pac Symp Biocomput. 2025;30:167-184.
ABSTRACT
Accurate prediction of suicide risk is crucial for identifying patients with elevated risk burden, helping ensure these patients receive targeted care. The US Department of Veterans Affairs' suicide prediction model primarily leverages structured electronic health record (EHR) data. This approach largely overlooks unstructured EHR, a data format that could be utilized to enhance predictive accuracy. This study aims to enhance suicide risk models' predictive accuracy by developing a model that incorporates both structured EHR predictors and semantic NLP-derived variables from unstructured EHR. XGBoost models were fit to predict suicide risk; the interactions identified by the models were extracted using SHAP, validated using logistic regression models, and added to a ridge regression model, which was subsequently compared to a ridge regression approach without interactions. By introducing a selection parameter, α, to balance the influence of structured (α=1) and unstructured (α=0) data, we found that intermediate α values achieved optimal performance across various risk strata, improved the performance of the ridge regression approach, and uncovered significant cross-modal interactions between psychosocial constructs and patient characteristics. These interactions highlight how psychosocial risk factors are influenced by individual patient contexts, potentially informing improved risk prediction methods and personalized interventions. Our findings underscore the importance of incorporating nuanced narrative data into predictive models and set the stage for future research that will expand the use of advanced machine learning techniques, including deep learning, to further refine suicide risk prediction methods.
PMID:39670369
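The α blend of structured and unstructured predictors can be read several ways; one plausible sketch is to scale the two feature blocks before a closed-form ridge fit. Pure numpy; `alpha_ridge` and the λ penalty are hypothetical stand-ins, not the study's code:

```python
import numpy as np

def alpha_ridge(X_struct, X_unstruct, y, alpha_mix, lam=1.0):
    # Scale the two modality blocks: alpha_mix=1 -> structured only,
    # alpha_mix=0 -> unstructured only (one plausible reading of alpha).
    X = np.hstack([alpha_mix * X_struct, (1.0 - alpha_mix) * X_unstruct])
    # Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

At the extremes one modality's columns are zeroed, so the penalty drives their coefficients exactly to zero; intermediate α lets both blocks contribute.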
Session Introduction: AI and Machine Learning in Clinical Medicine: Generative and Interactive Systems at the Human-Machine Interface
Pac Symp Biocomput. 2025;30:33-39.
ABSTRACT
Artificial Intelligence (AI) technologies are increasingly capable of processing complex and multilayered datasets. Innovations in generative AI and deep learning have notably enhanced the extraction of insights from unstructured text and images as well as structured data. These breakthroughs in AI technology have spurred a wave of research in the medical field, leading to the creation of a variety of tools aimed at improving clinical decision-making, patient monitoring, image analysis, and emergency response systems. However, thorough research is essential to fully understand the broader impact and potential consequences of deploying AI within the healthcare sector.
PMID:39670359
A dataset of shallow soil moisture for alfalfa in the Ningxia irrigation area of the Yellow River
Front Plant Sci. 2024 Nov 28;15:1472930. doi: 10.3389/fpls.2024.1472930. eCollection 2024.
NO ABSTRACT
PMID:39670272 | PMC:PMC11634607 | DOI:10.3389/fpls.2024.1472930
Explainable light-weight deep learning pipeline for improved drought stress identification
Front Plant Sci. 2024 Nov 28;15:1476130. doi: 10.3389/fpls.2024.1476130. eCollection 2024.
ABSTRACT
INTRODUCTION: Early identification of drought stress in crops is vital for implementing effective mitigation measures and reducing yield loss. Non-invasive imaging techniques hold immense potential by capturing subtle physiological changes in plants under water deficit. Sensor-based imaging data serves as a rich source of information for machine learning and deep learning algorithms, facilitating further analysis that aims to identify drought stress. While these approaches yield favorable results, real-time field applications require algorithms specifically designed for the complexities of natural agricultural conditions.
METHODS: Our work proposes a novel deep learning framework for classifying drought stress in potato crops captured by unmanned aerial vehicles (UAV) in natural settings. The novelty lies in the synergistic combination of a pre-trained network with carefully designed custom layers. This architecture leverages the pre-trained network's feature extraction capabilities while the custom layers enable targeted dimensionality reduction and enhanced regularization, ultimately leading to improved performance. A key innovation of our work is the integration of gradient-based visualization inspired by Gradient-weighted Class Activation Mapping (Grad-CAM), an explainability technique. This visualization approach sheds light on the internal workings of the deep learning model, often regarded as a "black box". By revealing the model's focus areas within the images, it enhances interpretability and fosters trust in the model's decision-making process.
RESULTS AND DISCUSSION: Our proposed framework achieves superior performance, particularly with the DenseNet121 pre-trained network, reaching a precision of 97% to identify the stressed class with an overall accuracy of 91%. Comparative analysis of existing state-of-the-art object detection algorithms reveals the superiority of our approach in achieving higher precision and accuracy. Thus, our explainable deep learning framework offers a powerful approach to drought stress identification with high accuracy and actionable insights.
PMID:39670267 | PMC:PMC11635298 | DOI:10.3389/fpls.2024.1476130
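Grad-CAM itself is compact enough to sketch: channel weights are the global-average-pooled gradients, and the heat map is the ReLU of the weighted feature-map sum. A numpy sketch on stand-in arrays, not the authors' pipeline:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from a conv layer's activations and the gradients
    of the class score with respect to them; both arrays are (C, H, W)."""
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    # Weighted sum of feature maps, ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlay on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting (H, W) map is upsampled to the input resolution and overlaid on the UAV image to show where the network looked.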
A method of identification and localization of tea buds based on lightweight improved YOLOV5
Front Plant Sci. 2024 Nov 28;15:1488185. doi: 10.3389/fpls.2024.1488185. eCollection 2024.
ABSTRACT
The low degree of intelligence and standardization in tea bud picking, together with laborious and time-consuming manual harvesting, poses significant challenges to the sustainable development of the high-quality tea industry, making research into the critical technologies of intelligent tea-picking robots urgent. Model complexity demands substantial hardware computing resources, which limits the deployment of tea bud detection models on tea-picking robots. Therefore, in this study, we propose the YOLOV5M-SBSD lightweight tea bud detection model to address these issues. A Fuding white tea bud image dataset was established by collecting Fuding white tea images. The lightweight network ShuffleNetV2 was used to replace the YOLOV5 backbone; the up-sampling algorithm of YOLOV5 was optimized with the CARAFE modular structure, which enlarges the network's receptive field while keeping the model lightweight; BiFPN was used to achieve more efficient multi-scale feature fusion; and the parameter-free attention module SimAM was introduced to enhance the model's feature extraction ability without adding extra computation. The improved model, denoted YOLOV5M-SBSD, was compared with other mainstream target detection models and evaluated on the tea bud dataset. The experimental results show a recognition accuracy of 88.7%, a recall of 86.9%, and an average accuracy of 93.1%; compared with the original YOLOV5M algorithm, accuracy is 0.5% higher and average accuracy 0.2% higher, while model size is reduced by 82.89% and parameters and GFLOPs by 83.7% and 85.6%, respectively. The improved algorithm thus achieves higher detection accuracy while reducing computation and parameter counts.
It also reduces dependence on hardware, provides a reference for deploying the tea bud detection model in the natural environment of the tea garden, and has theoretical and practical significance for the identification and localization capabilities of intelligent tea bud picking robots.
PMID:39670263 | PMC:PMC11634601 | DOI:10.3389/fpls.2024.1488185
Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2024 Jun:11269-11281. doi: 10.1109/cvpr52733.2024.01071. Epub 2024 Sep 16.
ABSTRACT
Humans effortlessly interpret images by parsing them into part-whole hierarchies; deep learning models excel at learning multi-level feature spaces but often lack explicit coding of part-whole relations, a prominent property of medical imaging. To overcome this limitation, we introduce Adam-v2, a new self-supervised learning framework extending Adam [79] by explicitly incorporating part-whole hierarchies into its learning objectives through three key branches: (1) Localizability, acquiring discriminative representations to distinguish different anatomical patterns; (2) Composability, learning each anatomical structure in a parts-to-whole manner; and (3) Decomposability, comprehending each anatomical structure in a whole-to-parts manner. Experimental results across 10 tasks, compared to 11 baselines in zero-shot, few-shot transfer, and full fine-tuning settings, showcase Adam-v2's superior performance over large-scale medical models and existing SSL methods across diverse downstream tasks. The higher generality and robustness of Adam-v2's representations originate from its explicit construction of hierarchies for distinct anatomical structures from unlabeled medical images. Adam-v2 preserves a semantic balance of anatomical diversity and harmony in its embedding, yielding representations that are both generic and semantically meaningful, a combination overlooked in existing SSL methods. All code and pretrained models are available at GitHub.com/JLiangLab/Eden.
PMID:39670210 | PMC:PMC11636527 | DOI:10.1109/cvpr52733.2024.01071
Deepfake detection using deep feature stacking and meta-learning
Heliyon. 2024 Feb 15;10(4):e25933. doi: 10.1016/j.heliyon.2024.e25933. eCollection 2024 Feb 29.
ABSTRACT
Deepfake is a face manipulation technique using deep learning that allows the replacement of faces in videos in a very realistic way. While this technology has many practical uses, if used maliciously it can have significant negative impacts on society, such as spreading fake news or enabling cyberbullying. Therefore, the ability to detect deepfakes has become a pressing need. This paper addresses the problem of deepfake detection by identifying deepfake forgeries in video sequences. The presented solution first uses a stacking-based ensemble approach, in which features obtained from two popular deep learning models, Xception and EfficientNet-B7, are combined. A near-optimal subset of features is then selected using a ranking-based approach, and the final classification of real versus fake videos is performed by a meta-learner, a multi-layer perceptron. In our experiments, the meta-learning model achieved an accuracy of 96.33% on the Celeb-DF (V2) dataset and 98.00% on the FaceForensics++ dataset, both higher than the individual base models. Various experiments were conducted to validate the robustness of the method.
PMID:39670070 | PMC:PMC11636820 | DOI:10.1016/j.heliyon.2024.e25933
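The stacking-plus-ranking step can be sketched with a simple Fisher-style separability score standing in for the paper's (unspecified) ranking criterion; `rank_and_select` and the score are illustrative, not the authors' code:

```python
import numpy as np

def rank_and_select(feat_a, feat_b, labels, k):
    """Concatenate two backbones' feature vectors and keep the top-k
    features by a class-separability score (between-class distance over
    within-class variance), mimicking a ranking-based selection step."""
    X = np.hstack([feat_a, feat_b])
    pos, neg = X[labels == 1], X[labels == 0]
    score = (pos.mean(0) - neg.mean(0)) ** 2 / (pos.var(0) + neg.var(0) + 1e-9)
    top = np.argsort(score)[::-1][:k]       # indices of the k best features
    return X[:, top], top
```

The selected subset would then be fed to the meta-learner (an MLP in the paper) for the final real/fake decision.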
Construction and validation of deep learning model for cachexia in extensive-stage small cell lung cancer patients treated with immune checkpoint inhibitors: a multicenter study
Transl Lung Cancer Res. 2024 Nov 30;13(11):2958-2971. doi: 10.21037/tlcr-24-543. Epub 2024 Nov 28.
ABSTRACT
BACKGROUND: Cachexia is observed in around 60% of patients with extensive-stage small cell lung cancer (ES-SCLC) and may play an important role in the development of resistance to immunotherapy. This study aims to evaluate the influence of cachexia on the effectiveness of immunotherapy, develop and assess a deep learning (DL)-based prediction model for cachexia, as well as its prognostic value.
METHODS: The analysis encompassed ES-SCLC patients who received the combination of first-line immunotherapy and chemotherapy from Shandong Cancer Hospital and Institute, Qilu Hospital, and Jining First People's Hospital. Survival analysis was conducted to examine the correlation between cachexia and the efficacy of immunotherapy. Medical records and computed tomography (CT) images of the third lumbar vertebra (L3) level were collected to construct the clinical model, radiomics, and DL models. The receiver operating characteristic (ROC) curve analysis was conducted to assess and analyze the efficacy of various models in detecting and evaluating the risk of cachexia.
RESULTS: A total of 231 ES-SCLC patients were enrolled in the study. Cachexia was related to inferior progression-free survival (PFS) and overall survival (OS). In the internal and external validation cohorts, the areas under the curve (AUC) of the DL model were 0.73 and 0.71, respectively. By contrast, the radiomics model recorded an AUC of 0.67 in the external validation cohort, highlighting the superior performance of the DL model and its capability for effective generalization in external validation. Using the DL model, all patients were categorized into high-risk and low-risk groups; patients with low-risk cachexia had significantly prolonged PFS and OS.
CONCLUSIONS: The DL model not only performed better in predicting cachexia but also correlated with survival outcomes of ES-SCLC patients who received initial immunotherapy.
PMID:39670020 | PMC:PMC11632437 | DOI:10.21037/tlcr-24-543
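The AUC figures reported above come from ROC analysis; the statistic itself reduces to a rank comparison (the Mann-Whitney U form). A small generic sketch, not the study's code:

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive case
    is scored above a randomly chosen negative case; ties count half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An AUC of 0.73, as reported for the DL model's internal validation, thus means a 73% chance that a cachexic patient is scored above a non-cachexic one.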
Breath-hold diffusion-weighted MR imaging (DWI) using deep learning reconstruction: Comparison with navigator triggered DWI in patients with malignant liver tumors
Radiography (Lond). 2024 Dec 11;31(1):275-280. doi: 10.1016/j.radi.2024.11.027. Online ahead of print.
ABSTRACT
INTRODUCTION: This study investigated the feasibility of single breath-hold (BH) diffusion-weighted MR imaging (DWI) using deep learning reconstruction (DLR) compared to navigator triggered (NT) DWI in patients with malignant liver tumors.
METHODS: This study included 91 patients who underwent both BH-DWI and NT-DWI with 3T MR system. Abdominal MR images were subjectively analyzed to compare visualization of liver edges, presence of ghosting artifacts, conspicuity of malignant liver tumors, and overall image quality. Then, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and apparent diffusion coefficient (ADC) values of malignant liver tumors were objectively measured using regions of interest.
RESULTS: All image quality measures except conspicuity of malignant liver tumors were significantly better in BH-DWI than in NT-DWI (p < 0.01). Regarding the conspicuity of malignant liver tumors, there was no statistically significant difference between BH-DWI and NT-DWI (p = 0.67). A conspicuity score of 1 or 2 was assigned in 19 (21%) patients in the NT-DWI group; BH-DWI showed a score of 3 or 4 in 11 (58%) of these 19 patients. The SNR was significantly higher in BH-DWI than in NT-DWI (29.5 ± 14.0 vs. 27.3 ± 14.7, p = 0.047). No significant differences were observed in the CNR or ADC values of malignant liver tumors between BH-DWI and NT-DWI (5.67 ± 3.57 vs. 5.78 ± 3.08, p = 0.243; 997.2 ± 207.0 vs. 1021.0 ± 253.1, p = 0.547).
CONCLUSION: The BH-DWI using DLR is feasible for liver MRI by improving the SNR and overall image quality, and may play a complementary role to NT-DWI by improving the conspicuity of malignant liver tumor in patients with image distortion in NT-DWI.
IMPLICATIONS FOR PRACTICE: BH-DWI with DLR would be a preferred approach to achieving sufficient image quality in patients with an irregular triggering pattern, as an alternative to NT-DWI. A further reduction in BH duration (<15 s) should be achieved, taking into account patient tolerance.
PMID:39667265 | DOI:10.1016/j.radi.2024.11.027
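The objective metrics in this abstract have standard ROI-based forms, though exact definitions vary by site. A sketch assuming mean/SD ROI statistics and a two-b-value mono-exponential ADC fit (the b-values and ROI conventions here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    # One common definition: mean signal over the SD of background noise.
    return signal_roi.mean() / noise_roi.std()

def cnr(lesion_roi, liver_roi, noise_roi):
    # Lesion-to-liver contrast expressed in noise-SD units.
    return abs(lesion_roi.mean() - liver_roi.mean()) / noise_roi.std()

def adc(s_low, s_high, b_low=0.0, b_high=800.0):
    # Mono-exponential ADC from two b-values (units follow b, e.g. s/mm^2):
    # S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S_low/S_high) / (b_high - b_low)
    return np.log(s_low / s_high) / (b_high - b_low)
```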
Artificial intelligence-driven quantification of antibiotic-resistant bacteria in food by color-encoded multiplex hydrogel digital LAMP
Food Chem. 2024 Dec 4;468:142304. doi: 10.1016/j.foodchem.2024.142304. Online ahead of print.
ABSTRACT
Antibiotic-resistant bacteria pose considerable risks to global health, particularly through transmission in the food chain. Herein, we developed artificial intelligence-driven quantification of antibiotic-resistant bacteria in food using a color-encoded multiplex hydrogel digital loop-mediated isothermal amplification (LAMP) system. The quenching of unincorporated amplification signal reporters (QUASR) was introduced in multiplex digital LAMP for the first time. During amplification, primers labeled with different fluorophores were incorporated into amplicons, generating color-specific fluorescent spots, while excess primers were quenched by complementary quenching probes. After amplification, fluorescent spots in red, green, and blue emerged in the hydrogels and were automatically identified and quantified using a deep learning model. Methicillin-resistant Staphylococcus aureus and carbapenem-resistant Escherichia coli were also successfully detected in real fruit and vegetable samples. This artificial intelligence-driven color-encoded multiplex hydrogel LAMP offers promising potential for the digital quantification of antibiotic-resistant bacteria in the food industry.
PMID:39667227 | DOI:10.1016/j.foodchem.2024.142304
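In the simplest case, color-specific spot calling reduces to dominant-channel assignment per segmented spot. The paper uses a deep learning model for this, so the dominant-channel rule and the `min_intensity` threshold below are a simplified, hypothetical stand-in:

```python
import numpy as np

def classify_spots(spot_rgbs, min_intensity=50):
    """Assign each detected fluorescent spot to a color channel by its
    dominant mean RGB value; spot_rgbs is an (N, 3) array-like of mean
    RGB values for the segmented spots."""
    channels = ["red", "green", "blue"]
    calls = []
    for rgb in np.asarray(spot_rgbs, dtype=float):
        if rgb.max() < min_intensity:   # too dim: likely quenched reporter
            calls.append("background")
        else:
            calls.append(channels[int(np.argmax(rgb))])
    return calls
```

Counting the calls per color then gives the per-target digital quantification across hydrogel partitions.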
Lifestyle factors and other predictors of common mental disorders in diagnostic machine learning studies: A systematic review
Comput Biol Med. 2024 Dec 11;185:109521. doi: 10.1016/j.compbiomed.2024.109521. Online ahead of print.
ABSTRACT
BACKGROUND: Machine Learning (ML) models have been used to predict common mental disorders (CMDs) and may provide insights into the key modifiable factors that can identify and predict CMD risk and be targeted through interventions. This systematic review aimed to synthesise evidence from ML studies predicting CMDs, evaluate their performance, and establish the potential benefit of incorporating lifestyle data in ML models alongside biological and/or demographic-environmental factors.
METHODS: This systematic review adheres to the PRISMA statement (Prospero CRD42023401194). Databases searched included MEDLINE, EMBASE, PsycInfo, IEEE Xplore, Engineering Village, Web of Science, and Scopus from database inception to 28/08/24. Included studies used ML methods with feature importance to predict CMDs in adults. Risk of bias (ROB) was assessed using PROBAST. Model performance metrics were compared. The ten most important variables reported by each study were assigned to broader categories to evaluate their frequency across studies.
RESULTS: 117 studies were included (111 model development-only, 16 development and validation). Deep learning methods showed best accuracy for predicting CMD cases. Studies commonly incorporated features from multiple categories (n = 56), and frequently identified demographic-environmental predictors in their top ten most important variables (63/69 models). These tended to be in combination with psycho-social and biological variables (n = 15). Lifestyle data were infrequently examined as sole predictors of CMDs across included studies (4.27 %). Studies commonly had high heterogeneity and ROB ratings.
CONCLUSION: This review is the first to evaluate the utility of diagnostic ML for CMDs, assess ROB, and evaluate predictor types. CMDs could be predicted; however, studies had high ROB and lifestyle data were underutilised, precluding full identification of a robust predictor set.
PMID:39667056 | DOI:10.1016/j.compbiomed.2024.109521
Few-shot classification of Cryo-ET subvolumes with deep Brownian distance covariance
Brief Bioinform. 2024 Nov 22;26(1):bbae643. doi: 10.1093/bib/bbae643.
ABSTRACT
Few-shot learning is a crucial approach for macromolecule classification of cryo-electron tomography (Cryo-ET) subvolumes, enabling rapid adaptation to novel tasks with a small support set of labeled data. However, existing few-shot classification methods for macromolecules in Cryo-ET consider only marginal distributions and overlook joint distributions, failing to fully capture feature dependencies. To address this issue, we propose a method for macromolecular few-shot classification using deep Brownian Distance Covariance (BDC). Our method models the joint distribution within a transfer learning framework, enhancing its modeling capability. We insert the BDC module after the feature extractor and train only the feature extractor during the training phase. We then enhance the model's generalization capability with self-distillation techniques. In the adaptation phase, we fine-tune the classifier with minimal labeled data. We conduct experiments on publicly available SHREC datasets and a small-scale synthetic dataset to evaluate our method. Results show that our method improves classification performance by introducing the joint distribution.
PMID:39668336 | DOI:10.1093/bib/bbae643
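Brownian distance covariance has a short closed form on pairwise distance matrices. A biased (V-statistic) numpy sketch of the underlying statistic, not the authors' BDC module:

```python
import numpy as np

def _centered_dist(x):
    # Pairwise Euclidean distance matrix, double-centered
    # (biased V-statistic form; unbiased U-centering omitted for brevity).
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()

def distance_covariance(x, y):
    """Squared Brownian (distance) covariance between paired samples
    x of shape (n, p) and y of shape (n, q)."""
    A, B = _centered_dist(x), _centered_dist(y)
    return (A * B).mean()
```

Because it is built from all pairwise distances, the statistic is sensitive to any dependence between the two feature sets, which is the joint-distribution information the marginal-only methods miss.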
CACs Recognition of FISH Images Based on Adaptive Mean Teacher Semi-supervised Learning with Domain-Knowledge Pseudo Label
J Imaging Inform Med. 2024 Dec 12. doi: 10.1007/s10278-024-01348-8. Online ahead of print.
ABSTRACT
Circulating genetically abnormal cells (CACs) serve as crucial biomarkers for lung cancer diagnosis, and detecting them holds great value for early diagnosis and screening. To aid the identification of CACs, we have incorporated deep learning algorithms into our CACs detection system, specifically developing algorithms for cell segmentation and signal point detection. However, deep learning algorithms require extensive data labeling. Consequently, this study introduces a semi-supervised learning algorithm for CACs detection. For the cell segmentation task, a combination of self-training and the Mean Teacher method was adopted for semi-supervised training, and an Adaptive Mean Teacher approach was developed on top of the Mean Teacher to enhance the effectiveness of semi-supervised cell segmentation. For the signal point detection task, an end-to-end semi-supervised signal point detection algorithm was developed using the Adaptive Mean Teacher as the paradigm, with a Domain-Knowledge Pseudo Label introduced to improve the quality of pseudo-labeling and further enhance detection. By incorporating semi-supervised training in both sub-tasks, the reliance on labeled data is reduced, thereby improving the performance of CACs detection. Our proposed semi-supervised method achieved good results in the cell segmentation task, the signal point detection task, and the final CACs detection task, where, with 2%, 5%, and 10% of labeled data, it achieved 27.225%, 23.818%, and 4.513%, respectively. Experimental results demonstrated that the proposed method is effective.
PMID:39668308 | DOI:10.1007/s10278-024-01348-8
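The Mean Teacher at the core of both sub-tasks keeps the teacher as an exponential moving average (EMA) of the student's weights, giving stable pseudo-label targets. A minimal sketch of the update; the adaptive variant's momentum scheduling is not reproduced here:

```python
def ema_update(teacher_params, student_params, momentum=0.99):
    """Mean Teacher weight update: teacher <- m*teacher + (1-m)*student,
    applied per parameter after each student optimization step."""
    return [momentum * t + (1 - momentum) * s
            for t, s in zip(teacher_params, student_params)]
```

Higher momentum makes the teacher smoother (and its pseudo-labels more stable) at the cost of tracking the student more slowly.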
Large multimodality model fine-tuned for detecting breast and esophageal carcinomas on CT: a preliminary study
Jpn J Radiol. 2024 Dec 13. doi: 10.1007/s11604-024-01718-w. Online ahead of print.
ABSTRACT
PURPOSE: This study aimed to develop a large multimodality model (LMM) that can detect breast and esophageal carcinomas on chest contrast-enhanced CT.
MATERIALS AND METHODS: In this retrospective study, CT images of 401 (age, 62.9 ± 12.9 years; 169 males), 51 (age, 65.5 ± 11.6 years; 23 males), and 120 (age, 64.6 ± 14.2 years; 60 males) patients were used in the training, validation, and test phases. The numbers of CT images with breast carcinoma, esophageal carcinoma, and no lesion were 927, 2180, and 2087; 80, 233, and 270; and 184, 246, and 6919 for the training, validation, and test datasets, respectively. The LMM was fine-tuned using CT images as input and text data ("suspicious of breast carcinoma" / "suspicious of esophageal carcinoma" / "no lesion") as reference data on a desktop computer equipped with a single graphics processing unit. Because of the random nature of the training process, supervised learning was performed 10 times. The performance of the model that performed best on the validation dataset was further tested using the time-independent test dataset. The detection performance was evaluated by calculating the area under the receiver operating characteristic curve (AUC).
RESULTS: The sensitivities of the fine-tuned LMM for detecting breast and esophageal carcinomas in the test dataset were 0.929 and 0.951, respectively. The diagnostic performance of the fine-tuned LMM for detecting breast and esophageal carcinomas was high, with AUCs of 0.890 (95%CI 0.871-0.909) and 0.880 (95%CI 0.865-0.894), respectively.
CONCLUSIONS: The fine-tuned LMM detected both breast and esophageal carcinomas on chest contrast-enhanced CT with high diagnostic performance (AUCs of 0.890 and 0.880, respectively), a capability of large multimodality models in chest cancer imaging that had not previously been assessed.
PMID:39668277 | DOI:10.1007/s11604-024-01718-w
Non-invasive eye tracking and retinal view reconstruction in free swimming schooling fish
Commun Biol. 2024 Dec 12;7(1):1636. doi: 10.1038/s42003-024-07322-y.
ABSTRACT
Eye tracking has emerged as a key method for understanding how animals process visual information, identifying crucial elements of perception and attention. Traditional fish eye tracking often alters animal behavior due to invasive techniques, while non-invasive methods are limited to either 2D tracking or restricting animals after training. Our study introduces a non-invasive technique for tracking and reconstructing the retinal view of free-swimming fish in a large 3D arena without behavioral training. Using 3D fish body meshes reconstructed by DeepShapeKit, our method integrates multiple camera angles, deep learning for 3D fish posture reconstruction, perspective transformation, and eye tracking. We evaluated our approach using data from two fish swimming in a flow tank, captured from two perpendicular viewpoints, and validated its accuracy using human-labeled and synthesized ground truth data. Our analysis of eye movements and retinal view reconstruction within leader-follower schooling behavior reveals that fish exhibit negatively synchronised eye movements and focus on neighbors centered in the retinal view. These findings are consistent with previous studies on schooling fish, providing further, indirect validation of our method. Our approach offers new insights into animal attention in naturalistic settings and potentially has broader implications for studying collective behavior and advancing swarm robotics.
PMID:39668195 | DOI:10.1038/s42003-024-07322-y
Establishment of cancer cell radiosensitivity database linked to multi-layer omics data
Cancer Sci. 2024 Dec 12. doi: 10.1111/cas.16334. Online ahead of print.
ABSTRACT
Personalized radiotherapy based on the intrinsic sensitivity of individual tumors is anticipated; however, it has yet to be realized. To explore cancer radiosensitivity, analysis in combination with omics data is important. The Cancer Cell Line Encyclopedia (CCLE) provides multi-layer omics data for hundreds of cancer cell lines, but the radiosensitivity counterpart is lacking. To address this issue, we aimed to establish a database of radiosensitivity, as assessed by gold-standard clonogenic assays, for the CCLE cell lines by collecting data from the literature. A deep learning-based screen of 33,284 papers identified 926 relevant studies, from which SF2 (survival fraction after 2 Gy irradiation) data were extracted. The median SF2 (mSF2) was calculated for each cell line, generating an mSF2 database comprising 285 cell lines from 28 cancer types. The mSF2 showed a normal distribution among higher and lower cancer-type hierarchies, demonstrating a large variation across and within cancer types. In selected cell lines, mSF2 correlated significantly with the single-institution SF2 obtained using standardized experimental protocols and with integral survival, a radiosensitivity index that correlates with clonogenic survival. Notably, the mSF2 for blood cancer cell lines was significantly lower than that for solid cancer cell lines, in line with the empirical knowledge that blood cancers are radiosensitive. Furthermore, the CCLE-derived protein levels of NFE2L2 and SQSTM1, which are involved in antioxidant damage responses that confer radioresistance, correlated significantly with mSF2. These results suggest the robustness and potential utility of the mSF2 database, linked to multi-layer omics data, for exploring cancer radiosensitivity.
PMID:39668120 | DOI:10.1111/cas.16334
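SF2 values in such a database ultimately derive from clonogenic survival curves, conventionally fit with the linear-quadratic (LQ) model, and the abstract's mSF2 is a per-cell-line median over literature reports. A sketch of both (the α/β parameter values in the test are illustrative only):

```python
import numpy as np

def lq_survival(dose_gy, alpha, beta):
    # Linear-quadratic clonogenic survival model: SF(D) = exp(-(aD + bD^2)).
    return np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def median_sf2(sf2_reports):
    # Aggregate literature-reported SF2 values for one cell line into mSF2.
    return float(np.median(sf2_reports))
```

SF2 is simply `lq_survival(2.0, alpha, beta)` for a line's fitted α and β, which is why it serves as a compact single-number radiosensitivity summary.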
Towards U-Net-based intraoperative 2D dose prediction in high dose rate prostate brachytherapy
Brachytherapy. 2024 Dec 11:S1538-4721(24)00457-4. doi: 10.1016/j.brachy.2024.11.007. Online ahead of print.
ABSTRACT
BACKGROUND: Poor needle placement in prostate high-dose-rate brachytherapy (HDR-BT) results in sub-optimal dosimetry, and mentally predicting these effects during HDR-BT is difficult, creating a barrier to the widespread availability of high-quality prostate HDR-BT.
PURPOSE: To provide earlier feedback on needle implantation quality, we trained machine learning models to predict 2D dosimetry for prostate HDR-BT on axial TRUS images.
METHODS AND MATERIALS: Clinical treatment plans from 248 prostate HDR-BT patients were retrospectively collected and randomly split 80/20 for training/testing. Fifteen U-Net models were implemented to predict the 90%, 100%, 120%, 150%, and 200% isodose levels in the prostate base, midgland, and apex. Predicted isodose lines were compared to delivered dose using Dice similarity coefficient (DSC), precision, recall, average symmetric surface distance, area percent difference, and 95th percentile Hausdorff distance. To benchmark performance, 10 cases were retrospectively replanned and compared against the clinical plans using the same metrics.
RESULTS: Models predicting 90% and 100% isodose lines at midgland performed best, with median DSC of 0.97 and 0.96, respectively. Performance declined as isodose level increased, with median DSC of 0.90, 0.79, and 0.65 in the 120%, 150%, and 200% models. In the base, median DSC was 0.94 for 90% and decreased to 0.64 for 200%. In the apex, median DSC was 0.93 for 90% and decreased to 0.63 for 200%. Median prediction time was 25 ms.
CONCLUSION: U-Net models accurately predicted HDR-BT isodose lines on 2D TRUS images sufficiently quickly for real-time use. Incorporating auto-segmentation algorithms will allow intra-operative feedback on needle implantation quality.
PMID:39668102 | DOI:10.1016/j.brachy.2024.11.007
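The overlap metrics used above to compare predicted and delivered isodose regions (Dice similarity coefficient, precision, recall) are standard binary-mask measures. A minimal sketch on a toy 2D grid, not the paper's evaluation pipeline:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def precision_recall(pred, truth):
    """Pixel-wise precision and recall of a predicted mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return precision, recall

# Toy example: a "delivered" isodose area vs. a prediction shifted one column.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1            # 16 pixels
pred = np.zeros((8, 8), dtype=int)
pred[2:6, 3:7] = 1             # 16 pixels, 12 overlapping

dsc = dice_coefficient(pred, truth)    # 2*12 / (16+16) = 0.75
p, r = precision_recall(pred, truth)   # both 12/16 = 0.75
print(dsc, p, r)  # → 0.75 0.75 0.75
```

A DSC near 0.97, as reported for the 90% isodose models at midgland, corresponds to near-complete overlap between predicted and delivered regions; the decline toward 0.65 at the 200% level reflects how small, high-dose regions are penalized heavily by even modest misalignment.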
Mineralized tissue visualization with MRI: Practical insights and recommendations for optimized clinical applications
Diagn Interv Imaging. 2024 Dec 11:S2211-5684(24)00256-0. doi: 10.1016/j.diii.2024.11.001. Online ahead of print.
ABSTRACT
Magnetic resonance imaging (MRI) techniques that enhance the visualization of mineralized tissues (hereafter referred to as MT-MRI) are increasingly being incorporated into clinical practice, particularly in musculoskeletal imaging. These techniques aim to mimic the contrast provided by computed tomography (CT) while taking advantage of MRI's superior soft tissue contrast and lack of ionizing radiation. However, the various MT-MRI techniques, including three-dimensional gradient-echo, ultrashort and zero echo time, susceptibility-weighted imaging, and artificial intelligence-generated synthetic CT, each offer different technical characteristics, advantages, and limitations. Understanding these differences is critical to optimizing clinical application. This review provides a comprehensive overview of the most commonly used MT-MRI techniques, categorizing them based on their technical principles and clinical utility. The advantages and disadvantages of each approach, including their performance in bone morphology assessment, fracture detection, arthropathy-related findings, and soft tissue calcification evaluation, are discussed. Additionally, technical limitations and artifacts that may affect image quality and diagnostic accuracy, such as susceptibility effects, signal-to-noise ratio issues, and motion artifacts, are addressed. Despite promising developments, MT-MRI remains inferior to conventional CT for evaluating subtle bone abnormalities and soft tissue calcification due to spatial resolution limitations. However, advances in deep learning and hardware innovations, such as artificial intelligence-generated synthetic CT and ultrahigh-field MRI, may bridge this gap in the future.
PMID:39667997 | DOI:10.1016/j.diii.2024.11.001
Enhancing thin slice 3D T2-weighted prostate MRI with super-resolution deep learning reconstruction: Impact on image quality and PI-RADS assessment
Magn Reson Imaging. 2024 Dec 10:110308. doi: 10.1016/j.mri.2024.110308. Online ahead of print.
ABSTRACT
PURPOSE: This study aimed to assess the effectiveness of Super-Resolution Deep Learning Reconstruction (SR-DLR), a deep learning-based technique that enhances image resolution and quality during MRI reconstruction, in improving the image quality of thin-slice 3D T2-weighted imaging (T2WI) and Prostate Imaging-Reporting and Data System (PI-RADS) assessment in prostate Magnetic Resonance Imaging (MRI).
METHODS: This retrospective study included 33 patients who underwent prostate MRI with SR-DLR between November 2022 and April 2023. Thin-slice 3D-T2WI of the prostate was obtained and reconstructed with and without SR-DLR (matrix: 720 × 720 and 240 × 240, respectively). We calculated the contrast and contrast-to-noise ratio (CNR) between the internal and external glands of the prostate, as well as the slope of pelvic bone and adipose tissue. Two radiologists evaluated qualitative image quality and assessed PI-RADS scores of each reconstruction.
RESULTS: The final analysis included 28 male patients (age range: 47-88 years; mean age: 70.8 years). The CNR with SR-DLR was significantly higher than without SR-DLR (1.93 [IQR: 0.79, 3.83] vs. 1.88 [IQR: 0.63, 3.82], p = 0.002). No significant difference in contrast was observed between images with and without SR-DLR (p = 0.864). The slope with SR-DLR was significantly higher than without SR-DLR (0.21 [IQR: 0.15, 0.25] vs. 0.15 [IQR: 0.12, 0.19], p < 0.01). Qualitative scores for contrast, sharpness, artifacts, and overall image quality were significantly higher with SR-DLR than without SR-DLR (p < 0.05 for all). The kappa values for 2D-T2WI and 3D-T2WI increased from 0.694 and 0.640 to 0.870 and 0.827 with SR-DLR for both readers.
CONCLUSIONS: SR-DLR has the potential to improve image quality and the ability to assess PI-RADS scores in thin-slice 3D-T2WI of the prostate without extending MRI acquisition time.
SUMMARY: Super-Resolution Deep Learning Reconstruction (SR-DLR) significantly improved image quality of thin-slice 3D T2-weighted imaging (T2WI) without extending the acquisition time. Additionally, the PI-RADS scores from 3D-T2WI with SR-DLR demonstrated higher agreement with those from 2D-T2WI.
PMID:39667642 | DOI:10.1016/j.mri.2024.110308
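The contrast and contrast-to-noise ratio (CNR) comparisons above can be illustrated with a small sketch. The paper does not give its exact formulas, so the definitions below (mean-signal difference over pooled ROI noise for CNR; Michelson-style contrast for contrast) and the ROI pixel values are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ROI pixel intensities: internal vs. external prostate gland.
internal = rng.normal(400.0, 20.0, size=500)
external = rng.normal(300.0, 20.0, size=500)

def cnr(roi_a, roi_b):
    """CNR: absolute mean-signal difference divided by pooled ROI noise."""
    noise = np.sqrt((roi_a.std(ddof=1) ** 2 + roi_b.std(ddof=1) ** 2) / 2.0)
    return abs(roi_a.mean() - roi_b.mean()) / noise

def contrast(roi_a, roi_b):
    """Michelson-style contrast between mean ROI signals."""
    return abs(roi_a.mean() - roi_b.mean()) / (roi_a.mean() + roi_b.mean())

print(cnr(internal, external), contrast(internal, external))
```

Under these definitions, contrast depends only on the mean signals while CNR also rewards noise reduction; this is consistent with the finding that SR-DLR improved CNR without significantly changing contrast.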