Deep learning
DeepFusion: A deep bimodal information fusion network for unraveling protein-RNA interactions using in vivo RNA structures
Comput Struct Biotechnol J. 2023 Dec 30;23:617-625. doi: 10.1016/j.csbj.2023.12.040. eCollection 2024 Dec.
ABSTRACT
RNA-binding proteins (RBPs) are key post-transcriptional regulators, and malfunctions of RBP-RNA binding lead to diverse human diseases. However, prediction of RBP binding sites is largely based on RNA sequence features, whereas in vivo RNA structural features based on high-throughput sequencing are rarely incorporated. Here, we designed a deep bimodal information fusion network called DeepFusion for unraveling protein-RNA interactions by incorporating structural features derived from DMS-seq data. DeepFusion integrates two sub-models to extract local motif-like information and long-term context information. We show that, with sequence input alone, DeepFusion outperforms other cutting-edge methods on two datasets. DeepFusion's performance is further improved with bimodal input after adding in vivo DMS-seq structural features. Furthermore, DeepFusion can be used for analyzing RNA degradation, demonstrating significantly different RBP-binding scores in genes with slow degradation rates versus those with rapid degradation rates. DeepFusion thus provides enhanced abilities for further analysis of functional RNAs. DeepFusion's code and data are available at http://bioinfo.org/deepfusion/.
PMID:38274994 | PMC:PMC10808905 | DOI:10.1016/j.csbj.2023.12.040
The expert's knowledge combined with AI outperforms AI alone in seizure onset zone localization using resting state fMRI
Front Neurol. 2024 Jan 11;14:1324461. doi: 10.3389/fneur.2023.1324461. eCollection 2023.
ABSTRACT
We evaluated whether integration of expert guidance on seizure onset zone (SOZ) identification from resting state functional MRI (rs-fMRI) connectomics with deep learning (DL) techniques enhances SOZ delineation in patients with refractory epilepsy (RE), compared to utilizing DL alone. Rs-fMRI was collected from 52 children with RE who subsequently underwent intracranial EEG and then, if indicated, surgery for seizure control (n = 25). The resting state functional connectomics data were previously independently classified by two expert epileptologists as indicative of measurement noise, typical resting state network connectivity, or SOZ. An expert-knowledge-integrated deep network was trained on the functional connectomics data to identify the SOZ. Expert knowledge integrated with DL showed a SOZ localization accuracy of 84.8 ± 4.5% and an F1 score (the harmonic mean of positive predictive value and sensitivity) of 91.7 ± 2.6%. Conversely, a DL-only model yielded an accuracy of <50% (F1 score 63%). Activations that initiate in gray matter, extend through white matter, and end in vascular regions are seen as the most discriminative expert-identified SOZ characteristics. Integration of expert knowledge of functional connectomics can not only enhance the performance of DL in localizing the SOZ in RE but also lead toward potentially useful explanations of prevalent co-activation patterns in the SOZ. RE cases with surgical outcomes and preoperative rs-fMRI studies can yield the expert knowledge most salient for SOZ identification.
PMID:38274868 | PMC:PMC10808636 | DOI:10.3389/fneur.2023.1324461
DARTS: an open-source Python pipeline for Ca2+ microdomain analysis in live cell imaging data
Front Immunol. 2024 Jan 11;14:1299435. doi: 10.3389/fimmu.2023.1299435. eCollection 2023.
ABSTRACT
Ca2+ microdomains play a key role in intracellular signaling processes. For instance, they mediate the activation of T cells and, thus, the initiation of the adaptive immune response. They are, however, also of utmost importance for the activation of other cells, and a detailed understanding of the dynamics of these spatially localized Ca2+ signals is crucial for a better understanding of the underlying signaling processes. A typical approach to analyzing Ca2+ microdomain dynamics is live cell fluorescence microscopy imaging. Experiments usually involve imaging a large number of cells from different groups (for instance, wild-type and knockout cells), followed by time-consuming image and data analysis. With DARTS, we present a modular Python pipeline for efficient Ca2+ microdomain analysis in live cell imaging data. DARTS (Deconvolution, Analysis, Registration, Tracking, and Shape normalization) provides state-of-the-art image postprocessing options like deep learning-based cell detection and tracking, spatio-temporal image deconvolution, and bleaching correction. An integrated automated Ca2+ microdomain detection offers direct access to global statistics like the number of microdomains per cell group, corresponding signal intensity levels, and the temporal evolution of these measures. With a focus on bead stimulation experiments, DARTS provides a so-called dartboard projection analysis and visualization approach. A dartboard projection covers spatio-temporal normalization of the bead contact areas and cell shape normalization onto a circular template, enabling aggregation of the spatio-temporal information from the microdomain detection results for the individual cells of the cell groups of interest. The dartboard visualization allows intuitive interpretation of the spatio-temporal microdomain dynamics at the group level. The application of DARTS is illustrated by three use cases in the context of the formation of initial Ca2+ microdomains after cell stimulation.
DARTS is provided as an open-source solution and will be continuously extended based on feedback from the community. Code available at: 10.5281/zenodo.10459243.
PMID:38274810 | PMC:PMC10809147 | DOI:10.3389/fimmu.2023.1299435
Application of Machine Learning and Deep EfficientNets in Distinguishing Neonatal Adrenal Hematomas From Neuroblastoma in Enhanced Computed Tomography Images
World J Oncol. 2024 Feb;15(1):81-89. doi: 10.14740/wjon1744. Epub 2024 Jan 20.
ABSTRACT
BACKGROUND: The aim of the study was to employ a combination of radiomic indicators based on computed tomography (CT) imaging and machine learning (ML), along with deep learning (DL), to differentiate between adrenal hematoma and adrenal neuroblastoma in neonates.
METHODS: A total of 76 neonates who underwent CT were included in this retrospective study (40 with neuroblastomas and 36 with adrenal hematomas) and divided into a training group (n = 38) and a testing group (n = 38). The regions of interest (ROIs) were segmented by two radiologists, and radiomics features were extracted using the Pyradiomics package. ML classification was performed using support vector machine (SVM), AdaBoost, Extra Trees, gradient boosting, multi-layer perceptron (MLP), and random forest (RF) classifiers. An EfficientNet model was employed for the DL-based classification. The area under the receiver operating characteristic (ROC) curve (AUC) was calculated to assess the performance of each model.
RESULTS: Among all features, the least absolute shrinkage and selection operator (LASSO) logistic regression selected nine features. These radiomics features were used to construct the radiomics model. In the training cohort, the AUCs of the SVM, MLP and Extra Trees models were 0.967, 0.969 and 1.000, respectively. The corresponding AUCs in the test cohort were 0.985, 0.971 and 0.958, respectively. In the classification task, the AUC of the DL framework was 0.987.
CONCLUSION: ML decision classifiers and DL framework constructed from CT-based radiomics features offered a non-invasive method to differentiate neonatal adrenal hematoma from neuroblastoma and performed better than the clinical experts.
PMID:38274719 | PMC:PMC10807921 | DOI:10.14740/wjon1744
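For readers unfamiliar with the pipeline pattern described above, here is a minimal, hedged sketch of LASSO-style feature selection over radiomics features followed by an SVM classifier scored by ROC AUC. It is not the authors' code: the data are synthetic stand-ins, and L1-penalized logistic regression is used as the selection step for a classification target.

```python
# Illustrative sketch only: synthetic features replace real radiomics data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 76, 100                      # 76 subjects, 100 candidate radiomics features
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Mirror the study's 38/38 train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# L1-penalized logistic regression as a LASSO-like feature selector
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_.ravel())

# SVM trained on the selected features, evaluated by ROC AUC
svm = SVC(probability=True, random_state=0).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te[:, selected])[:, 1])
```

On real data one would select the LASSO penalty by cross-validation rather than fixing C.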
Scaling behaviours of deep learning and linear algorithms for the prediction of stroke severity
Brain Commun. 2024 Jan 10;6(1):fcae007. doi: 10.1093/braincomms/fcae007. eCollection 2024.
ABSTRACT
Deep learning has allowed for remarkable progress in many medical scenarios. Deep learning prediction models often require 10^5-10^7 training examples. It is currently unknown whether deep learning can also enhance predictions of symptoms post-stroke in real-world samples of stroke patients that are often several orders of magnitude smaller. Such stroke outcome predictions, however, could be particularly instrumental in guiding acute clinical and rehabilitation care decisions. We here compared the capacities of classically used linear and novel deep learning algorithms in their prediction of stroke severity. Our analyses relied on a total of 1430 patients assembled from the MRI-Genetics Interface Exploration collaboration and a Massachusetts General Hospital-based study. The outcome of interest was National Institutes of Health Stroke Scale-based stroke severity in the acute phase after ischaemic stroke onset, which we predicted by means of MRI-derived lesion location. We automatically derived lesion segmentations from diffusion-weighted clinical MRI scans, performed spatial normalization and included a principal component analysis step, retaining 95% of the variance of the original data. We then repeatedly separated train, validation and test sets to investigate the effects of sample size; we subsampled the train set to 100, 300 and 900 patients and trained the algorithms to predict the stroke severity score for each sample size with regularized linear regression and an eight-layered neural network. We selected hyperparameters on the validation set. We evaluated model performance based on the explained variance (R2) in the test set. While linear regression performed significantly better for a sample size of 100 patients, deep learning started to significantly outperform linear regression when trained on 900 patients.
Average prediction performance improved by ∼20% when increasing the sample size 9× [maximum for 100 patients: 0.279 ± 0.005 (R2, 95% confidence interval), 900 patients: 0.337 ± 0.006]. In summary, for sample sizes of 900 patients, deep learning showed a higher prediction performance than typically employed linear methods. These findings suggest the existence of non-linear relationships between lesion location and stroke severity that can be utilized for an improved prediction performance for larger sample sizes.
PMID:38274570 | PMC:PMC10808016 | DOI:10.1093/braincomms/fcae007
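The comparison described above (PCA retaining 95% of the variance, then regularized linear regression versus a neural network, scored by explained variance R2) can be sketched in a few lines. This is not the authors' pipeline: the data below are synthetic, with a deliberately non-linear target so the two model families can differ.

```python
# Hedged sketch with synthetic stand-in data; not the published analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1430, 50))                                  # stand-in lesion features
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=1430)   # non-linear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
pca = PCA(n_components=0.95).fit(X_tr)      # keep 95% of the variance
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Regularized linear regression vs. a small multi-layer network
r2_linear = r2_score(y_te, Ridge(alpha=1.0).fit(Z_tr, y_tr).predict(Z_te))
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
r2_deep = r2_score(y_te, mlp.fit(Z_tr, y_tr).predict(Z_te))
```

Re-running this with the training set subsampled to different sizes reproduces the kind of scaling comparison the study reports.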
Radiomics Boosts Deep Learning Model for IPMN Classification
Mach Learn Med Imaging. 2023 Oct;14349:134-143. doi: 10.1007/978-3-031-45676-3_14. Epub 2023 Oct 15.
ABSTRACT
Intraductal Papillary Mucinous Neoplasm (IPMN) cysts are pre-malignant pancreas lesions that can progress into pancreatic cancer. Detecting them and stratifying their risk level is therefore of ultimate importance for effective treatment planning and disease control. However, this is a highly challenging task because of the diverse and irregular shape, texture, and size of the IPMN cysts as well as the pancreas. In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from multi-contrast MRI scans. Our proposed analysis framework includes an efficient volumetric self-adapting segmentation strategy for pancreas delineation, followed by a newly designed deep learning-based classification scheme with a radiomics-based predictive approach. We test our proposed decision-fusion model in a series of rigorous experiments on multi-center data sets (246 multi-contrast MRI scans from five centers) and obtain superior performance (81.9% accuracy) to the state of the art (SOTA) in this field. Our ablation studies demonstrate the significance of both the radiomics and deep learning modules for achieving the new SOTA performance compared with international guidelines and published studies (81.9% vs 61.3% in accuracy). Our findings have important implications for clinical decision-making. The code is available upon publication.
PMID:38274402 | PMC:PMC10810260 | DOI:10.1007/978-3-031-45676-3_14
Reconstructing growth and dynamic trajectories from single-cell transcriptomics data
Nat Mach Intell. 2024;6(1):25-39. doi: 10.1038/s42256-023-00763-w. Epub 2023 Nov 30.
ABSTRACT
Time-series single-cell RNA sequencing (scRNA-seq) datasets provide unprecedented opportunities to learn dynamic processes of cellular systems. Due to the destructive nature of sequencing, it remains challenging to link the scRNA-seq snapshots sampled at different time points. Here we present TIGON, a dynamic, unbalanced optimal transport algorithm that reconstructs dynamic trajectories and population growth simultaneously as well as the underlying gene regulatory network from multiple snapshots. To tackle the high-dimensional optimal transport problem, we introduce a deep learning method using a dimensionless formulation based on the Wasserstein-Fisher-Rao (WFR) distance. TIGON is evaluated on simulated data and compared with existing methods for its robustness and accuracy in predicting cell state transition and cell population growth. Using three scRNA-seq datasets, we show the importance of growth in the temporal inference, TIGON's capability in reconstructing gene expression at unmeasured time points and its applications to temporal gene regulatory networks and cell-cell communication inference.
PMID:38274364 | PMC:PMC10805654 | DOI:10.1038/s42256-023-00763-w
Deep learning for complex chemical systems
Natl Sci Rev. 2023 Dec 29;10(12):nwad335. doi: 10.1093/nsr/nwad335. eCollection 2023 Dec.
ABSTRACT
Deep learning forms a bridge between the local features of molecular fragments/localized orbitals and the global properties of complex systems, enabling multi-scale simulations of complex chemical systems and reaction processes.
PMID:38274240 | PMC:PMC10808951 | DOI:10.1093/nsr/nwad335
Generation of tissues outside the field of view (FOV) of radiation therapy simulation imaging based on machine learning and patient body outline (PBO)
Radiat Oncol. 2024 Jan 25;19(1):15. doi: 10.1186/s13014-023-02384-4.
ABSTRACT
BACKGROUND: It is not unusual for some parts of tissue to be excluded from the field of view of CT simulation images. A typical mitigation is to avoid beams entering through the missing body parts, at the cost of sub-optimal planning.
METHODS: This study aims to solve the problem by developing three methods: (1) a deep learning (DL) mechanism for missing tissue generation, (2) use of the patient body outline (PBO) based on surface imaging, and (3) a hybrid method combining DL and PBO. The DL model was built upon Globally and Locally Consistent Image Completion, learning features by convolutional neural network-based inpainting within a generative adversarial network. The database comprised 10,005 CT training slices of 322 lung cancer patients and 166 CT evaluation test slices of 15 patients. CT images were from the publicly available database of the Cancer Imaging Archive. Since existing data were used, PBOs were acquired from the CT images. For evaluation, the structural similarity index metric (SSIM), root mean square error (RMSE) and peak signal-to-noise ratio (PSNR) were computed. For dosimetric validation, dynamic conformal arc plans were made with the ground truth images and with images generated by the proposed method. Gamma analysis was conducted at relatively strict criteria of 1%/1 mm (dose difference/distance to agreement) and 2%/2 mm under three dose thresholds of 1%, 10% and 50% of the maximum dose in the plans made on the ground truth image sets.
RESULTS: The average SSIM in generation part only was 0.06 at epoch 100 but reached 0.86 at epoch 1500. Accordingly, the average SSIM in the whole image also improved from 0.86 to 0.97. At epoch 1500, the average values of RMSE and PSNR in the whole image were 7.4 and 30.9, respectively. Gamma analysis showed excellent agreement with the hybrid method (equal to or higher than 96.6% of the mean of pass rates for all scenarios).
CONCLUSIONS: This work demonstrated for the first time that missing tissues in simulation imaging can be generated with high similarity, overcoming the dosimetric limitation. The benefit of this approach would be even greater when MR-only simulation is considered.
PMID:38273278 | DOI:10.1186/s13014-023-02384-4
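Two of the image-similarity metrics reported above, RMSE and PSNR, are easy to state explicitly. The sketch below computes both in plain NumPy on a synthetic "ground truth" versus "generated" slice; the image data and noise level are illustrative only (the noise scale is set near the study's reported RMSE of 7.4).

```python
# Minimal metric definitions on synthetic data; not the study's evaluation code.
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio: 20*log10(range) - 10*log10(MSE)."""
    mse = np.mean((a - b) ** 2)
    return float(20 * np.log10(data_range) - 10 * np.log10(mse))

rng = np.random.default_rng(2)
truth = rng.uniform(0, 255, size=(64, 64))
generated = truth + rng.normal(0, 7.4, size=(64, 64))  # noise at the reported RMSE scale

err = rmse(truth, generated)
quality = psnr(truth, generated, data_range=255.0)
```

SSIM is structurally more involved (local means, variances and covariances); a library implementation such as scikit-image's is the practical choice there.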
A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images
Sci Rep. 2024 Jan 25;14(1):2144. doi: 10.1038/s41598-024-52719-8.
ABSTRACT
Bone cancer is a rare disease in which cells in the bone grow out of control, destroying normal bone tissue. A benign type of bone tumor is harmless and does not spread to other body parts, whereas a malignant type can spread and might be harmful. According to Cancer Research UK (2021), the survival rate for patients with bone cancer is 40%, and early detection can increase the chances of survival by enabling treatment at the initial stages. Early detection of these lumps or masses can reduce the risk of death. The goal of the current study is to utilize image processing techniques and deep learning-based convolutional neural networks (CNNs) to classify normal and cancerous bone images. Medical image processing techniques, like pre-processing (e.g., median filtering), K-means clustering segmentation, and Canny edge detection, were used to detect the cancer region in Computed Tomography (CT) images for parosteal osteosarcoma, enchondroma and osteochondroma types of bone cancer. After segmentation, the normal and cancer-affected images were classified using various existing CNN-based models. The results revealed that the AlexNet model showed the best performance, with a training accuracy of 98%, validation accuracy of 98%, and testing accuracy of 100%.
PMID:38273131 | DOI:10.1038/s41598-024-52719-8
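The K-means intensity clustering step mentioned above can be illustrated compactly. The sketch below clusters pixel intensities of a synthetic two-region "CT slice" (a bright lesion on a darker background) into two groups with a plain-NumPy 1-D k-means; it is a toy stand-in, not the authors' segmentation pipeline.

```python
# Illustrative intensity-based K-means segmentation on synthetic data.
import numpy as np

def kmeans_1d(values, k, iters=20):
    """1-D k-means with quantile initialization; returns labels and centers."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

image = np.full((32, 32), 40.0)        # darker background intensity
image[8:20, 8:20] = 200.0              # bright "lesion" region
image += np.random.default_rng(3).normal(0, 5, image.shape)

labels, centers = kmeans_1d(image.ravel(), k=2)
lesion_cluster = int(np.argmax(centers))             # brighter cluster
mask = (labels == lesion_cluster).reshape(image.shape)
```

In practice one would apply a median filter first (as the study does) and use an optimized implementation such as scikit-learn's KMeans.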
Added value of dynamic contrast-enhanced MR imaging in deep learning-based prediction of local recurrence in grade 4 adult-type diffuse gliomas patients
Sci Rep. 2024 Jan 25;14(1):2171. doi: 10.1038/s41598-024-52841-7.
ABSTRACT
Local recurrences in patients with grade 4 adult-type diffuse gliomas mostly occur within residual non-enhancing T2 hyperintensity areas after surgical resection. Unfortunately, it is challenging to distinguish non-enhancing tumors from edema in the non-enhancing T2 hyperintensity areas using conventional MRI alone. Quantitative DCE MRI parameters such as Ktrans and Ve convey permeability information of glioblastomas that cannot be provided by conventional MRI. We used the publicly available nnU-Net to train a deep learning model that incorporated both conventional and DCE MRI to detect the subtle difference in vessel leakiness due to neoangiogenesis between the non-recurrence area and the local recurrence area, which contains a higher proportion of high-grade glioma cells. We found that the addition of Ve doubled the sensitivity while nonsignificantly decreasing the specificity for prediction of local recurrence in glioblastomas, which implies that the combined model may result in fewer missed cases of local recurrence. The deep learning model predictive of local recurrence may enable risk-adapted radiotherapy planning in patients with grade 4 adult-type diffuse gliomas.
PMID:38273075 | DOI:10.1038/s41598-024-52841-7
Publisher Correction: Risk of data leakage in estimating the diagnostic performance of a deep-learning-based computer-aided system for psychiatric disorders
Sci Rep. 2024 Jan 25;14(1):2172. doi: 10.1038/s41598-024-52295-x.
NO ABSTRACT
PMID:38272974 | DOI:10.1038/s41598-024-52295-x
Unlocking the potential: analyzing 3D microstructure of small-scale cement samples from space using deep learning
NPJ Microgravity. 2024 Jan 25;10(1):11. doi: 10.1038/s41526-024-00349-9.
ABSTRACT
Due to the prohibitive cost of transporting raw materials into space, in-situ materials along with cement-like binders are poised to be employed for extraterrestrial construction. A unique methodology for obtaining the microstructural topology of cement samples hydrated in the microgravity environment at the International Space Station (ISS) is presented here. Distinctive Scanning Electron Microscopy (SEM) micrographs of hardened tri-calcium silicate (C3S) samples were used as exemplars in a deep learning-based microstructure reconstruction framework. The proposed method aids in the generation of an ensemble of microstructures that is inherently statistical in nature, by utilizing sparse experimental data such as the C3S samples hydrated in microgravity. The hydrated space-returned samples exhibited higher porosity content (~70%), with the portlandite phase assuming an elongated plate-like morphology. Qualitative assessment of volumetric slices from the reconstructed volumes showed visual characteristics similar to those of the target 2D exemplar. Detailed assessment of the reconstructed volumes was carried out using statistical descriptors and further compared against micro-CT virtual data. The reconstructed volumes captured the unique microstructural morphology of the hardened C3S samples of both space-returned and ground-based samples, and can be directly employed as Representative Volume Elements (RVEs) to characterize mechanical/transport properties.
PMID:38272924 | DOI:10.1038/s41526-024-00349-9
Radiomics based on T2-weighted and diffusion-weighted MR imaging for preoperative prediction of tumor deposits in rectal cancer
Am J Surg. 2024 Jan 10:S0002-9610(24)00004-7. doi: 10.1016/j.amjsurg.2024.01.002. Online ahead of print.
ABSTRACT
AIM: Preoperative diagnosis of tumor deposits (TDs) in patients with rectal cancer remains a challenge. This study aims to develop and validate a radiomics nomogram based on the combination of T2-weighted (T2WI) and diffusion-weighted MR imaging (DWI) for the preoperative identification of TDs in rectal cancer.
MATERIALS AND METHODS: A total of 199 patients with rectal cancer who underwent T2WI and DWI were retrospectively enrolled and divided into a training set (n = 159) and a validation set (n = 40). The total incidence of TDs was 37.2% (74/199). Radiomics features were extracted from T2WI and apparent diffusion coefficient (ADC) images. A radiomics nomogram combining the Rad-score (T2WI + ADC) and clinical factors was subsequently constructed. The area under the receiver operating characteristic curve (AUC) was then calculated to evaluate the models. The nomogram was also compared with three machine learning models constructed without the Rad-score.
RESULTS: The Rad-score (T2WI + ADC) achieved an AUC of 0.831 in the training set and 0.859 in the validation set. The radiomics nomogram (the combined model), incorporating the Rad-score (T2WI + ADC), MRI-reported lymph node status (mLN-status), and CA19-9, showed good discrimination of TDs, with an AUC of 0.854 for the training set and 0.923 for the validation set, outperforming the Random Forest, Support Vector Machine, and Deep Learning models. In the validation set, the combined model showed an accuracy of 82.5%, with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 66.7%, 92.0%, 83.3%, and 82.1%, respectively.
CONCLUSION: The radiomics nomogram based on Rad-score (T2WI + ADC) and clinical factors provides a promising and effective method for the preoperative prediction of TDs in patients with rectal cancer.
PMID:38272767 | DOI:10.1016/j.amjsurg.2024.01.002
Practical Applications of Artificial Intelligence in Spine Imaging: A Review
Radiol Clin North Am. 2024 Mar;62(2):355-370. doi: 10.1016/j.rcl.2023.10.005. Epub 2023 Nov 18.
ABSTRACT
Artificial intelligence (AI), a transformative technology with unprecedented potential in medical imaging, can be applied to various spinal pathologies. AI-based approaches may improve imaging efficiency, diagnostic accuracy, and interpretation, which is essential for positive patient outcomes. This review explores AI algorithms, techniques, and applications in spine imaging, highlighting diagnostic impact and challenges with future directions for integrating AI into spine imaging workflow.
PMID:38272627 | DOI:10.1016/j.rcl.2023.10.005
Novel AI-Based Algorithm for the Automated Measurement of Cervical Sagittal Balance Parameters. A Validation Study on Pre- and Postoperative Radiographs of 129 Patients
Global Spine J. 2024 Jan 25:21925682241227428. doi: 10.1177/21925682241227428. Online ahead of print.
ABSTRACT
STUDY DESIGN: Retrospective, mono-centric cohort research study.
OBJECTIVES: The analysis of cervical sagittal balance parameters is essential for preoperative planning and dependent on the physician's experience. A fully automated artificial intelligence-based algorithm could contribute to an objective analysis and save time. Therefore, this algorithm should be validated in this study.
METHODS: Two surgeons measured C2-C7 lordosis, C1-C7 Sagittal Vertical Axis (SVA), C2-C7-SVA, C7-slope and T1-slope in pre- and postoperative lateral cervical X-rays of 129 patients undergoing anterior cervical surgery. All parameters were measured twice by surgeons and compared to the measurements by the AI algorithm consisting of 4 deep convolutional neural networks. Agreement between raters was quantified, among other metrics, by mean errors and single measure intraclass correlation coefficients for absolute agreement.
RESULTS: ICC values for intra- (range: .92-1.0) and inter-rater (.91-1.0) reliability reflect excellent agreement between human raters. The AI algorithm determined all parameters with excellent ICC values (preop: .80-1.0; postop: .86-.99). For the comparison between the AI algorithm and 1 surgeon, mean errors were smallest for C1-C7 SVA (preop: -.3 mm (95% CI: -.6 to -.1 mm), postop: .3 mm (.0-.7 mm)) and largest for C2-C7 lordosis (preop: -2.2° (-2.9 to -1.6°), postop: -2.3° (-3.0 to -1.7°)). Automatic measurement was possible in 99% and 98% of pre- and postoperative images for all parameters except T1 slope, which had a detection rate of 48% and 51% in pre- and postoperative images, respectively.
CONCLUSION: This study validates that an AI-algorithm can reliably measure cervical sagittal balance parameters automatically in patients suffering from degenerative spinal diseases. It may simplify manual measurements and autonomously analyze large-scale datasets. Further studies are required to validate the algorithm on a larger and more diverse patient cohort.
PMID:38272462 | DOI:10.1177/21925682241227428
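The single-measure intraclass correlation coefficient for absolute agreement used above, often written ICC(2,1), comes from the classical two-way ANOVA mean squares. The sketch below implements that formula in NumPy on illustrative (invented) lordosis readings; it is a textbook-style demonstration, not the study's statistics code.

```python
# ICC(2,1) from ANOVA mean squares, demonstrated on invented rater data.
import numpy as np

def icc_a1(ratings):
    """Single-measure ICC, absolute agreement, two-way random effects.
    ratings: (n_subjects, k_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    msr = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((ratings - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative C2-C7 lordosis readings (degrees) by one rater
rater1 = np.array([10.0, 25.0, 40.0, 15.0, 30.0])
icc_perfect = icc_a1(np.column_stack([rater1, rater1]))        # identical second rater
icc_offset = icc_a1(np.column_stack([rater1, rater1 + 5.0]))   # systematic 5-degree bias
```

Note how a purely systematic offset lowers the absolute-agreement ICC even though the raters rank subjects identically, which is exactly why this variant suits validation of automated measurements.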
Evaluating AI-generated CBCT-based synthetic CT images for target delineation in palliative treatments of pelvic bone metastasis at conventional C-arm linacs
Radiother Oncol. 2024 Jan 23:110110. doi: 10.1016/j.radonc.2024.110110. Online ahead of print.
ABSTRACT
PURPOSE: One-table treatments, with treatment imaging, preparation and delivery all occurring at one treatment couch, could increase patient comfort and throughput for palliative treatments. On regular C-arm linacs, however, cone-beam CT (CBCT) imaging quality is currently insufficient. Therefore, our goal was to assess the suitability of AI-generated, CBCT-based synthetic CT (sCT) images for target delineation and treatment planning in palliative radiotherapy.
MATERIALS AND METHODS: CBCTs and planning CT-scans of 22 female patients with pelvic bone metastasis were included. For each CBCT, a corresponding sCT image was generated by a deep learning model in ADMIRE 3.38.0. Radiation oncologists delineated 23 target volumes (TV) on the sCTs (TVsCT) and scored their delineation confidence. The delineations were transferred to planning CTs and manually adjusted if needed to yield gold standard target volumes (TVclin). TVsCT were geometrically compared to TVclin using Dice coefficient (DC) and Hausdorff Distance (HD). The dosimetric impact of TVsCT inaccuracies was evaluated for VMAT plans with different PTV margins.
RESULTS: Radiation oncologists scored the sCT quality as sufficient for 13/23 TVsCT (median: DC=0.9, HD=11 mm) and insufficient for 10/23 TVsCT (median: DC=0.7, HD=34 mm). For the sufficient category, the remaining inaccuracies could be compensated by an additional margin of +1 to +4 mm to achieve coverage of V95%>95% and V95%>98%, respectively, in 12/13 TVsCT.
CONCLUSION: The evaluated sCT quality allowed for accurate delineation for most targets. sCTs with insufficient quality could be identified accurately upfront. A moderate PTV margin expansion could address remaining delineation inaccuracies. Therefore, these findings support further exploration of CBCT based one-table treatments on C-arm linacs.
PMID:38272314 | DOI:10.1016/j.radonc.2024.110110
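The two geometric agreement measures used above, the Dice coefficient (DC) and the Hausdorff distance (HD), have simple definitions on binary masks. The sketch below implements both in plain NumPy (brute-force Hausdorff) on a synthetic mask pair; it is illustrative only and not the study's evaluation code.

```python
# Dice and symmetric Hausdorff distance on synthetic binary masks.
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between two binary masks (brute force)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1)) * spacing
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

truth = np.zeros((40, 40), dtype=bool)
truth[10:30, 10:30] = True
pred = np.zeros((40, 40), dtype=bool)
pred[12:30, 10:30] = True           # delineation missing a 2-pixel strip

dc = dice(pred, truth)
hd = hausdorff(pred, truth)
```

For clinical-sized volumes the brute-force distance matrix is too large; a distance-transform-based implementation (e.g. via SciPy) is the usual choice.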
Determination of Trace Organic Contaminant Concentration via Machine Classification of Surface-Enhanced Raman Spectra
Environ Sci Technol. 2024 Jan 25. doi: 10.1021/acs.est.3c06447. Online ahead of print.
ABSTRACT
Surface-enhanced Raman spectroscopy (SERS) has been well explored as a highly effective characterization technique that is capable of chemical pollutant detection and identification at very low concentrations. Machine learning has been previously used to identify compounds based on SERS spectral data. However, utilization of SERS to quantify concentrations, with or without machine learning, has been difficult due to the spectral intensity being sensitive to confounding factors such as the substrate parameters, orientation of the analyte, and sample preparation technique. Here, we demonstrate an approach for predicting the concentration of sample pollutants from SERS spectra using machine learning. Frequency domain transform methods, including the Fourier and Walsh-Hadamard transforms, are applied to spectral data sets of three analytes (rhodamine 6G, chlorpyrifos, and triclosan), which are then used to train machine learning algorithms. Using standard machine learning models, the concentration of the sample pollutants is predicted with >80% cross-validation accuracy from raw SERS data. A cross-validation accuracy of 85% was achieved using deep learning for a moderately sized data set (∼100 spectra), and 70-80% was achieved for small data sets (∼50 spectra). Performance can be maintained within this range even when combining various sample preparation techniques and environmental media interference. Additionally, as a spectral pretreatment, the Fourier and Hadamard transforms are shown to consistently improve prediction accuracy across multiple data sets. Finally, standard models were shown to accurately identify characteristic peaks of compounds via analysis of their importance scores, further verifying their predictive value.
PMID:38272008 | DOI:10.1021/acs.est.3c06447
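The Walsh-Hadamard transform used above as a spectral pretreatment can be written as a short in-place butterfly. The sketch below applies an orthonormal fast Walsh-Hadamard transform to a synthetic "SERS spectrum" and keeps the largest-magnitude coefficients as a compact feature vector; the spectrum and feature count are invented for illustration, not taken from the paper.

```python
# Orthonormal fast Walsh-Hadamard transform as a spectral pretreatment sketch.
import numpy as np

def walsh_hadamard(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(len(x))   # orthonormal scaling makes the transform involutory

# Synthetic spectrum: two Gaussian peaks plus noise
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 256)
spectrum = np.exp(-((t - 0.3) / 0.02) ** 2) + 0.5 * np.exp(-((t - 0.7) / 0.05) ** 2)
spectrum += 0.01 * rng.normal(size=t.size)

coeffs = walsh_hadamard(spectrum)
features = np.sort(np.abs(coeffs))[::-1][:32]   # top-32 coefficients for a classifier
```

With the orthonormal scaling, applying the transform twice recovers the original spectrum, so no information is lost by the pretreatment itself; the benefit for the classifier comes from energy compaction into few coefficients.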
Clinical validation of a deep-learning-based bone age software on healthy Korean children
Ann Pediatr Endocrinol Metab. 2024 Jan 24. doi: 10.6065/apem.2346050.025. Online ahead of print.
ABSTRACT
PURPOSE: Bone age is needed to assess developmental status and growth disorders. We aimed to evaluate the clinical performance of a deep learning-based bone age software on the chronological age of healthy Korean children.
METHODS: This retrospective study included 371 healthy children (217 boys, 154 girls), aged between 4 and 17 years, who visited the Department of Pediatrics for health check-ups between January 2017 and December 2018. A total of 553 left-hand radiographs of the 371 healthy Korean children were evaluated using a commercial deep learning-based bone age software (BoneAge, Vuno, Seoul, Korea). The clinical performance of the deep learning software was determined using the concordance rate and Bland-Altman analysis via comparison with the chronological age.
RESULTS: A two-sample t-test (P < 0.001) and Fisher's exact test (P = 0.011) showed a significant difference between the normal chronological age and the bone age estimated by the deep learning software. There was a good correlation between the two variables (r = 0.96, P < 0.001); however, the root mean square error was 15.4 months. With a 12-month cut-off, the concordance rate was 58.8%. The Bland-Altman plot showed that the deep learning software tended to underestimate the bone age compared with the chronological age, especially in children under the age of 8.3 years.
CONCLUSION: The deep learning-based bone age software showed a low concordance rate and a tendency to underestimate the bone age in healthy Korean children.
PMID:38271993 | DOI:10.6065/apem.2346050.025
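The two evaluation quantities used above, the Bland-Altman statistics (bias and limits of agreement) and the concordance rate under a 12-month cut-off, are straightforward to compute. The sketch below does so on synthetic paired ages with a built-in systematic underestimation; the numbers are invented stand-ins, not the study's data.

```python
# Bland-Altman bias/limits of agreement and 12-month concordance rate,
# demonstrated on synthetic paired age estimates (months).
import numpy as np

rng = np.random.default_rng(5)
chron_age = rng.uniform(48, 204, size=300)                 # chronological age, 4-17 y
bone_age = chron_age - 6 + rng.normal(0, 14, size=300)     # software underestimates

diff = bone_age - chron_age
bias = float(diff.mean())                                  # mean difference
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd     # 95% limits of agreement

concordance = float(np.mean(np.abs(diff) <= 12))           # within 12-month cut-off
```

A Bland-Altman plot is simply `diff` against the pairwise mean with horizontal lines at `bias`, `loa_low` and `loa_high`; a negative bias corresponds to the underestimation reported in the abstract.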
Resolving the non-uniformity in the feature space of age estimation: A deep learning model based on feature clusters of panoramic images
Comput Med Imaging Graph. 2024 Jan 15;112:102329. doi: 10.1016/j.compmedimag.2024.102329. Online ahead of print.
ABSTRACT
Age estimation is important in forensics, and numerous techniques have been investigated to estimate age based on various parts of the body. Among them, dental tissue is considered reliable for estimating age as it is less influenced by external factors. The advancement of deep learning has led to the development of automatic age estimation using dental panoramic images. Typically, most of the medical datasets used for model learning are non-uniform in the feature space. This causes the model to be highly influenced by dense feature areas, resulting in adequate estimations there but relatively poor estimations in other areas. An effective solution to this issue is to pre-divide the data by age feature and train a separate regressor to estimate age within each division. In this study, we divide the data based on feature clusters obtained from unsupervised learning. The developed model comprises a classification head and a multi-regression head, wherein the former predicts the cluster to which the data belong and the latter estimates the age within the predicted cluster. The visualization results show that the model can focus on a clinically meaningful area in each cluster for estimating age. The proposed model outperforms models without feature clusters by focusing on the differences within each area. The performance improvement is particularly noticeable in the growth and aging periods. Furthermore, the model can adequately estimate the age even for samples with a high probability of classification error, as they are located at the border of two feature clusters.
PMID:38271869 | DOI:10.1016/j.compmedimag.2024.102329
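The two-stage idea above (unsupervised feature clusters, a classification head that predicts the cluster, and per-cluster regression heads that estimate age) can be sketched with classical models in place of the deep networks. Everything below is an invented toy: the features are synthetic, with a piecewise relation between feature and age so that per-cluster regressors have an advantage over a single global one.

```python
# Hedged sketch of cluster-then-regress age estimation with classical models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)
n = 600
age = rng.uniform(5.0, 80.0, size=n)
# Feature 1: different feature-age relation in "growth" vs. "aging" periods;
# Feature 2: separates the two periods in feature space
f1 = np.where(age < 20, 3.0 * age, 60.0 - 0.5 * (age - 20.0))
f2 = np.where(age < 20, 0.0, 50.0) + rng.normal(size=n)
X = np.column_stack([f1, f2])

clusters = KMeans(n_clusters=2, n_init=10, random_state=6).fit_predict(X)
clf = LogisticRegression(max_iter=1000).fit(X, clusters)          # classification head
heads = {c: Ridge().fit(X[clusters == c], age[clusters == c])     # regression heads
         for c in np.unique(clusters)}

pred = np.array([heads[c].predict(x[None, :])[0]
                 for c, x in zip(clf.predict(X), X)])
mae_clustered = mean_absolute_error(age, pred)
mae_single = mean_absolute_error(age, Ridge().fit(X, age).predict(X))
```

On this toy, routing each sample through its cluster's regressor resolves the non-uniformity that a single regressor cannot, mirroring the paper's motivation.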