Deep learning
Alzheimer's Disease Detection in EEG Sleep Signals
IEEE J Biomed Health Inform. 2024 Oct 11;PP. doi: 10.1109/JBHI.2024.3478380. Online ahead of print.
ABSTRACT
Alzheimer's disease (AD) and sleep disorders exhibit a close association, where disruptions in sleep patterns often precede the onset of Mild Cognitive Impairment (MCI) and early-stage AD. This study delves into the potential of utilizing sleep-related electroencephalography (EEG) signals acquired through polysomnography (PSG) for the early detection of AD. Our primary focus is on exploring semi-supervised Deep Learning techniques for the classification of EEG signals, given the limited data availability that characterizes this clinical scenario. The methodology entails testing and comparing the performance of semi-supervised models, benchmarked against an unsupervised and a supervised model. The study highlights the significance of spatial and temporal analysis capabilities, conducting independent analyses of each sleep stage. Results demonstrate the effectiveness of one semi-supervised model in leveraging limited labeled data, achieving stable metrics across all sleep stages and reaching 90% accuracy in its supervised form. Comparative analyses reveal its superior performance over the unsupervised model, while the fully supervised model ranges between 92% and 94%. These findings underscore the potential of semi-supervised models in early AD detection, particularly in overcoming the challenges associated with the scarcity of labeled data. Ablation tests affirm the critical role of spatio-temporal feature extraction in semi-supervised predictive performance, and t-SNE visualizations validate the model's proficiency in distinguishing AD patterns. Overall, this research contributes to the advancement of AD detection through innovative Deep Learning approaches, highlighting the crucial role of semi-supervised learning in addressing data limitations.
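The abstract does not specify which semi-supervised scheme the models use; as a rough, hypothetical illustration of how a few labeled EEG epochs can be combined with many unlabeled ones, here is a minimal pseudo-labeling training step in PyTorch (all names, thresholds, and weights are illustrative assumptions, not the paper's method):

```python
# Minimal pseudo-labeling sketch - one common semi-supervised scheme.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, xl, yl, xu, threshold=0.95, lam=1.0):
    """One training step mixing a labeled batch (xl, yl) and an unlabeled batch xu."""
    model.train()
    optimizer.zero_grad()
    sup_loss = F.cross_entropy(model(xl), yl)           # supervised term

    with torch.no_grad():                               # pseudo-labels from the current model
        probs = F.softmax(model(xu), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold                        # keep only confident predictions

    unsup_loss = torch.tensor(0.0, device=xl.device)
    if mask.any():
        unsup_loss = F.cross_entropy(model(xu[mask]), pseudo[mask])

    loss = sup_loss + lam * unsup_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```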
PMID:39392730 | DOI:10.1109/JBHI.2024.3478380
Knot data analysis using multiscale Gauss link integral
Proc Natl Acad Sci U S A. 2024 Oct 15;121(42):e2408431121. doi: 10.1073/pnas.2408431121. Epub 2024 Oct 11.
ABSTRACT
In the past decade, topological data analysis has emerged as a powerful algebraic topology approach in data science. Although knot theory and related subjects are a focus of study in mathematics, their success in practical applications is quite limited due to the lack of localization and quantization. We address these challenges by introducing knot data analysis (KDA), a paradigm that incorporates curve segmentation and multiscale analysis into the Gauss link integral. The resulting multiscale Gauss link integral (mGLI) recovers the global topological properties of knots and links at an appropriate scale and offers a multiscale geometric topology approach to capture the local structures and connectivities in data. By integration with machine learning or deep learning, the proposed mGLI significantly outperforms other state-of-the-art methods across various benchmark problems in 13 intricately complex biological datasets, including protein flexibility analysis, protein-ligand interactions, human Ether-à-go-go-Related Gene potassium channel blockade screening, and quantitative toxicity assessment. Our KDA opens a new research area, knot deep learning, in data science.
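For orientation, the classical global Gauss linking integral that mGLI localizes and rescales can be discretized over two closed polylines as below; this is a midpoint-rule sketch of the standard integral only, and the paper's curve segmentation and multiscale machinery are not reproduced here:

```python
import numpy as np

def gauss_link_integral(curve1, curve2):
    """Midpoint-rule discretization of the Gauss linking integral
    Lk = (1/4*pi) * double integral of (dr1 x dr2) . (r1 - r2) / |r1 - r2|^3
    for two closed polylines given as (N, 3) arrays of vertices."""
    a = np.roll(curve1, -1, axis=0) - curve1      # segment vectors of curve 1
    b = np.roll(curve2, -1, axis=0) - curve2      # segment vectors of curve 2
    m = curve1 + 0.5 * a                          # segment midpoints
    n = curve2 + 0.5 * b
    total = 0.0
    for ai, mi in zip(a, m):
        d = mi - n                                # (N2, 3) midpoint differences
        cross = np.cross(ai, b)                   # (N2, 3) segment cross products
        total += np.sum(np.einsum('ij,ij->i', cross, d)
                        / np.linalg.norm(d, axis=1) ** 3)
    return total / (4.0 * np.pi)
```

For two sufficiently finely sampled, singly linked circles (a Hopf link), this sum converges to plus or minus 1, the classical linking number.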
PMID:39392667 | DOI:10.1073/pnas.2408431121
Detection of tea leaf blight in UAV remote sensing images by integrating super-resolution and detection networks
Environ Monit Assess. 2024 Oct 11;196(11):1044. doi: 10.1007/s10661-024-13221-w.
ABSTRACT
Tea leaf blight (TLB) is a common disease of tea plants and is widely distributed in tea gardens. Although the use of unmanned aerial vehicle (UAV) remote sensing can help to achieve a wider scale for TLB detection, the blurring of UAV images, the overlapping of tea leaves, and the small size of TLB spots pose significant challenges to the detection task. This study proposes a method of detecting TLB in UAV remote sensing images by integrating super-resolution (SR) and detection networks. We use an SR network called SERB-Swin2sr to reconstruct the detailed features of UAV images and address the loss of detail caused by blurring in UAV images. In SERB-Swin2sr, a squeeze-and-excitation ResNet block (SERB) is introduced to enhance the model's ability to extract target details in the images, and a convolution stem replaces the convolution block to increase the convergence rate and stability of the network. A detection network called SDDA-YOLO is applied to achieve precise detection of TLB in UAV remote sensing images. In SDDA-YOLO, a shuffle dual-dimensional attention (SDDA) module is introduced to enhance the feature fusion capability of the network, and an Xsmall-scale detection layer is used to enhance the detection of small lesions. Experimental results show that the proposed method is superior to current detection methods: compared with a baseline YOLOv8 model, the precision, mAP@0.5, and mAP@0.5:0.95 of the proposed method are improved by 4.2%, 1.6%, and 1.8%, respectively, while the size of our model is only 4.6 MB.
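The SERB builds on the standard squeeze-and-excitation mechanism (Hu et al.); a minimal PyTorch sketch of that underlying block follows. The exact SERB design, including how it is combined with the ResNet block, is not detailed in the abstract, so this shows only the generic channel-attention component:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block: global pooling summarizes each
    channel, a small bottleneck MLP produces per-channel gates in (0, 1)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel gates
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight channels
```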
PMID:39392511 | DOI:10.1007/s10661-024-13221-w
Deformable Image Registration Using Vision Transformers for Cardiac Motion Estimation from Cine Cardiac MRI Images
Funct Imaging Model Heart. 2023 Jun;13958:375-383. doi: 10.1007/978-3-031-35302-4_39. Epub 2023 Jun 16.
ABSTRACT
Accurate cardiac motion estimation is a crucial step in assessing the kinematic and contractile properties of the cardiac chambers, thereby directly quantifying regional cardiac function, which plays an important role in understanding myocardial diseases and planning their treatment. Since cine cardiac magnetic resonance imaging (MRI) provides dynamic, high-resolution 3D images of the heart that depict cardiac motion throughout the cardiac cycle, cardiac motion can be estimated by finding the optical flow representation between consecutive 3D volumes from a 4D cine cardiac MRI dataset, thereby formulating it as an image registration problem. Therefore, we propose a hybrid convolutional neural network (CNN) and Vision Transformer (ViT) architecture for deformable image registration of 3D cine cardiac MRI images for consistent cardiac motion estimation. We compare the image registration results of our proposed method with those of the VoxelMorph CNN model and the conventional B-spline free form deformation (FFD) non-rigid image registration algorithm. We conduct all our experiments on the open-source Automated Cardiac Diagnosis Challenge (ACDC) dataset. Our experiments show that the deformable image registration results obtained using the proposed method outperform those of the CNN model and the traditional FFD image registration method.
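The step shared by VoxelMorph-style CNNs and hybrid CNN+ViT registration networks is warping the moving volume with a predicted dense displacement field. A minimal PyTorch sketch of that spatial-transformer step is shown below; the network that predicts the field is omitted, and the voxel-unit convention is an assumption:

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a moving volume (B, 1, D, H, W) by a dense displacement field
    flow (B, 3, D, H, W) given in voxels, via trilinear resampling."""
    B, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((zz, yy, xx)).float().unsqueeze(0).to(moving.device)
    coords = grid + flow                          # displaced sampling locations
    # normalize each axis to [-1, 1] as grid_sample expects
    for i, size in enumerate((D, H, W)):
        coords[:, i] = 2.0 * coords[:, i] / (size - 1) - 1.0
    # reorder channels from (z, y, x) to (x, y, z) for grid_sample
    coords = coords.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(moving, coords, align_corners=True)
```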
PMID:39391840 | PMC:PMC11466156 | DOI:10.1007/978-3-031-35302-4_39
OPTIMIZATION-DRIVEN STATISTICAL MODELS OF ANATOMIES USING RADIAL BASIS FUNCTION SHAPE REPRESENTATION
Proc IEEE Int Symp Biomed Imaging. 2024 May;2024. doi: 10.1109/ISBI56570.2024.10635852. Epub 2024 Aug 22.
ABSTRACT
Particle-based shape modeling (PSM) is a popular approach to automatically quantify shape variability in populations of anatomies. The PSM family of methods employs optimization to automatically populate a dense set of corresponding particles (as pseudo landmarks) on 3D surfaces to allow subsequent shape analysis. A recent deep learning approach leverages implicit radial basis function representations of shapes to better adapt to the underlying complex geometry of anatomies. Here, we propose an adaptation of this method using a traditional optimization approach that allows more precise control over the desired characteristics of the models by leveraging both an eigenshape and a correspondence loss. Furthermore, the proposed approach avoids using a black-box model and allows more freedom for particles to navigate the underlying surfaces, yielding more informative statistical models. We demonstrate the efficacy of the proposed approach against state-of-the-art methods on two real datasets and justify our choice of losses empirically.
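As background, an implicit RBF shape representation interpolates a function that vanishes on the surface, so particles can be constrained to its zero level set. A minimal numpy sketch of one standard construction with triharmonic kernels and normal-offset constraints follows; the paper's exact formulation, kernel, and losses are not given in the abstract, so everything here is a generic assumption:

```python
import numpy as np

def fit_rbf_implicit(points, normals, eps=0.01):
    """Fit an implicit surface f(x) = 0 through surface points using
    triharmonic RBFs phi(r) = r^3, with off-surface constraints placed
    at +/- eps along the normals (a common construction; a low-degree
    polynomial term, often added for well-posedness, is omitted here)."""
    centers = np.vstack([points, points + eps * normals, points - eps * normals])
    values = np.concatenate([np.zeros(len(points)),
                             eps * np.ones(len(points)),
                             -eps * np.ones(len(points))])
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    weights = np.linalg.solve(r ** 3, values)     # dense interpolation system

    def f(x):
        """Evaluate the implicit function at a single 3D point x."""
        d = np.linalg.norm(x[None, :] - centers, axis=-1)
        return weights @ d ** 3

    return f
```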
PMID:39391839 | PMC:PMC11463973 | DOI:10.1109/ISBI56570.2024.10635852
The Danger of Minimum Exposures: Understanding Cross-App Information Leaks on iOS through Multi-Side-Channel Learning
Conf Comput Commun Secur. 2023 Nov;2023:281-295. doi: 10.1145/3576915.3616655. Epub 2023 Nov 21.
ABSTRACT
Research on side-channel leaks has long focused on the information exposure from a single channel (memory, network traffic, power, etc.). Less studied is the risk of learning from multiple side channels related to a target activity (e.g., website visits), even when individual channels are not informative enough for an effective attack. Although prior research took the first step in this direction, inferring the operations of foreground apps on iOS from a set of global statistics, it remains unclear how to determine the maximum information leak from all target-related side channels on a system, what can be learned about the target from such leaks, and most importantly, how to control information leaks from the whole system, not just from an individual channel. To answer these fundamental questions, we performed the first systematic study on multi-channel inference, focusing on iOS as the first step. Our research is based upon a novel attack technique, called Mischief, which, given a set of potential side channels related to a target activity (e.g., foreground apps), utilizes probabilistic search to approximate an optimal subset of the channels exposing the most information, as measured by the Merit Score, a metric for correlation-based feature selection. On such an optimal subset, an inference attack is modeled as a multivariate time series classification problem, so a state-of-the-art deep learning based solution, InceptionTime in particular, can be applied to achieve the best possible outcome. Mischief is found to work effectively on today's iOS (16.2), identifying foreground apps, website visits, and sensitive IoT operations (e.g., opening the door) with high confidence, even in an open-world scenario, which demonstrates that the protection Apple puts in place against the known attack is inadequate. Equally importantly, this new understanding enables us to develop more comprehensive protection, which could elevate today's side-channel research from suppressing leaks from individual channels to controlling information exposure across the whole system.
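The Merit Score mentioned above comes from correlation-based feature selection (CFS). A small numpy sketch of the quantity that Mischief's probabilistic search would maximize over channel subsets is given below; this is the standard CFS formula, reconstructed by us, not the paper's code:

```python
import numpy as np

def merit_score(X, y):
    """CFS merit of a feature (channel) subset X with target y:
    merit = k * r_cf / sqrt(k + k*(k-1) * r_ff),
    where r_cf is the mean |feature-target| correlation and r_ff the mean
    |feature-feature| correlation; high r_cf and low redundancy score best."""
    k = X.shape[1]
    r_cf = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(k)])
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for i in range(k) for j in range(i + 1, k)]) if k > 1 else 0.0
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)
```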
PMID:39391799 | PMC:PMC11466504 | DOI:10.1145/3576915.3616655
Estimation of sorghum seedling number from drone image based on support vector machine and YOLO algorithms
Front Plant Sci. 2024 Sep 26;15:1399872. doi: 10.3389/fpls.2024.1399872. eCollection 2024.
ABSTRACT
Accurately counting the number of sorghum seedlings from images captured by unmanned aerial vehicles (UAVs) is useful for identifying sorghum varieties with high seedling emergence rates in breeding programs. The traditional method is manual counting, which is time-consuming and laborious. Recently, UAVs have been widely used for crop growth monitoring because of their low cost and their ability to collect high-resolution images and other data non-destructively. However, estimating the number of sorghum seedlings is challenging because of the complexity of field environments. The aim of this study was to test three models for counting sorghum seedlings rapidly and automatically from red-green-blue (RGB) images captured at different flight altitudes by a UAV. The three models were a machine learning approach (Support Vector Machines, SVM) and two deep learning approaches (YOLOv5 and YOLOv8). The robustness of the models was verified using RGB images collected at different heights. The R2 values of the model outputs for images captured at heights of 15 m, 30 m, and 45 m were, respectively, (SVM: 0.67, 0.57, 0.51), (YOLOv5: 0.76, 0.57, 0.56), and (YOLOv8: 0.93, 0.90, 0.71). Therefore, the YOLOv8 model was the most accurate in estimating the number of sorghum seedlings. The results indicate that UAV images combined with an appropriate model can be effective for large-scale counting of sorghum seedlings. This method will be a useful tool for sorghum phenotyping.
PMID:39391781 | PMC:PMC11464359 | DOI:10.3389/fpls.2024.1399872
Compressing recognition network of cotton disease with spot-adaptive knowledge distillation
Front Plant Sci. 2024 Sep 26;15:1433543. doi: 10.3389/fpls.2024.1433543. eCollection 2024.
ABSTRACT
Deep networks play a crucial role in the recognition of agricultural diseases. However, these networks often come with numerous parameters and large sizes, posing a challenge for direct deployment on the resource-limited edge computing devices of plant protection robots. To tackle this challenge for recognizing cotton diseases on edge devices, we adopt knowledge distillation to compress the big networks, aiming to reduce the number of parameters and the computational complexity of the networks. To obtain excellent performance, we conduct combined comparison experiments from three aspects: teacher network, student network, and distillation algorithm. The teacher networks comprise three classical convolutional neural networks, while the student networks include six lightweight networks in two categories of homogeneous and heterogeneous structures. In addition, we investigate nine distillation algorithms using the spot-adaptive strategy. The results demonstrate that the combination of DenseNet40 as the teacher and ShuffleNetV2 as the student shows the best performance when using the NST algorithm, yielding a recognition accuracy of 90.59% and reducing FLOPs from 0.29 G to 0.045 G. The proposed method facilitates lightweight models for recognizing cotton diseases while maintaining high recognition accuracy, and offers a practical solution for deploying deep models on edge computing devices.
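The abstract does not reproduce the NST objective, which matches intermediate feature statistics between teacher and student. For orientation, here is the classic logit-distillation loss (Hinton et al.) that such teacher-student setups build on, as a hedged PyTorch sketch with illustrative temperature and weighting:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Classic logit distillation: a temperature-softened KL term against the
    teacher plus a cross-entropy term against the ground-truth labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T     # soft-target term
    hard = F.cross_entropy(student_logits, labels)     # ground-truth term
    return alpha * soft + (1 - alpha) * hard
```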
PMID:39391779 | PMC:PMC11464345 | DOI:10.3389/fpls.2024.1433543
Development and validation of a multimodal deep learning framework for vascular cognitive impairment diagnosis
iScience. 2024 Sep 13;27(10):110945. doi: 10.1016/j.isci.2024.110945. eCollection 2024 Oct 18.
ABSTRACT
Cerebrovascular disease (CVD) is the second leading cause of dementia worldwide. The accurate detection of vascular cognitive impairment (VCI) in CVD patients remains an unresolved challenge. We collected clinical non-imaging data and neuroimaging data from 307 subjects with CVD. Using these data, we developed a multimodal deep learning framework that combined the vision transformer and extreme gradient boosting algorithms. The final hybrid model within the framework included only two neuroimaging features and six clinical features, demonstrating robust performance across both internal and external datasets. Furthermore, the diagnostic performance of our model on a specific dataset was demonstrated to be comparable to that of expert clinicians. Notably, our model can identify the brain regions and clinical features that significantly contribute to the VCI diagnosis, thereby enhancing transparency and interpretability. We developed an accurate and explainable clinical decision support tool to identify the presence of VCI in patients with CVD.
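A plausible shape of the fusion step, combining a small set of imaging-derived features with clinical features in an XGBoost classifier, is sketched below. File names, feature counts, and hyperparameters are assumptions for illustration; the paper's exact pipeline is not given in the abstract:

```python
import numpy as np
from xgboost import XGBClassifier

# Hypothetical inputs: two selected neuroimaging features (e.g., derived from
# ViT embeddings) and six clinical features per patient, as in the final model.
img_feats = np.load("vit_features.npy")        # shape (n_patients, 2) - assumption
clin_feats = np.load("clinical_features.npy")  # shape (n_patients, 6) - assumption
labels = np.load("vci_labels.npy")             # 1 = VCI, 0 = no VCI - assumption

X = np.hstack([img_feats, clin_feats])         # simple feature-level fusion
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
clf.fit(X, labels)
print(clf.feature_importances_)   # per-feature contributions aid interpretability
```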
PMID:39391736 | PMC:PMC11465129 | DOI:10.1016/j.isci.2024.110945
DynProfiler: a Python package for comprehensive analysis and interpretation of signaling dynamics leveraged by deep learning techniques
Bioinform Adv. 2024 Oct 7;4(1):vbae145. doi: 10.1093/bioadv/vbae145. eCollection 2024.
ABSTRACT
SUMMARY: Signaling dynamics encode important features and regulatory mechanisms of biological systems, and recent studies have reported the use of simulated signaling dynamics from mechanistic modeling as biomarkers for human diseases. Given the success of deep learning techniques, it is expected that they can extract informative patterns from simulation results more effectively than traditional approaches involving manual feature selection, which can then be used for subsequent analyses such as patient stratification and survival prediction. Here, we propose DynProfiler, which utilizes the entire signaling dynamics, including intermediate variables, as input and leverages deep learning techniques to extract informative features without requiring any labels. Furthermore, DynProfiler incorporates a modern explainable AI solution to provide quantitative, time-dependent importance scores for each dynamics profile. Using simulated dynamics of patients with breast cancer as an example, we demonstrate DynProfiler's ability to extract high-quality features that can predict mortality risk and identify important dynamics, highlighting upregulated phosphorylated GSK3β as a biomarker for poor prognosis. Overall, this tool can be useful for clinical applications, as well as for elucidating the dynamics of biological systems.
AVAILABILITY AND IMPLEMENTATION: The DynProfiler Python library is available on GitHub at https://github.com/okadalabipr/DynProfiler.
PMID:39391633 | PMC:PMC11464416 | DOI:10.1093/bioadv/vbae145
Deep bone oncology Diagnostics: Computed tomography based Machine learning for detection of bone tumors from breast cancer metastasis
J Bone Oncol. 2024 Sep 25;48:100638. doi: 10.1016/j.jbo.2024.100638. eCollection 2024 Oct.
ABSTRACT
PURPOSE: The objective of this study is to develop a novel diagnostic tool using deep learning and radiomics to determine whether bone tumors on CT images are metastases from breast cancer. By providing a more accurate and reliable method for identifying metastatic bone tumors, this approach aims to significantly improve clinical decision-making and patient management in the context of breast cancer.
METHODS: This study utilized CT images of bone tumors from 178 patients, including 78 cases of breast cancer bone metastases and 100 cases of non-breast cancer bone metastases. The dataset was processed using the Medical Image Segmentation via Self-distilling TransUNet (MISSU) model for automated segmentation. Radiomics features were extracted from the segmented tumor regions using the Pyradiomics library, capturing various aspects of tumor phenotype. Feature selection was conducted using LASSO regression to identify the most predictive features. The model's performance was evaluated using ten-fold cross-validation, with metrics including accuracy, sensitivity, specificity, and the Dice similarity coefficient.
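A hedged scikit-learn sketch of the selection-plus-classification stage described above (LASSO feature selection feeding an SVM under ten-fold cross-validation) is shown below. File names and hyperparameters are illustrative, and the upstream MISSU segmentation and Pyradiomics extraction are assumed to have already produced the feature matrix:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: radiomics features from segmented tumor ROIs; y: 1 = breast cancer
# metastasis, 0 = other origin. Both file names are placeholders.
X, y = np.load("radiomics.npy"), np.load("labels.npy")

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),        # LASSO keeps features with nonzero weights
    SVC(kernel="rbf", probability=True),   # final classifier
)
auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")  # ten-fold CV as in the study
print(f"AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```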
RESULTS: The developed radiomics model using the SVM algorithm achieved high discriminatory power, with an AUC of 0.936 on the training set and 0.953 on the test set. The model's performance metrics demonstrated strong accuracy, sensitivity, and specificity. Specifically, the accuracy was 0.864 for the training set and 0.853 for the test set. Sensitivity values were 0.838 and 0.789 for the training and test sets, respectively, while specificity values were 0.896 and 0.933 for the training and test sets, respectively. These results indicate that the SVM model effectively distinguishes between bone metastases originating from breast cancer and other origins. Additionally, the average Dice similarity coefficient for the automated segmentation was 0.915, demonstrating a high level of agreement with manual segmentations.
CONCLUSION: This study demonstrates the potential of combining CT-based radiomics and deep learning for the accurate detection of bone metastases from breast cancer. The high-performance metrics indicate that this approach can significantly enhance diagnostic accuracy, aiding in early detection and improving patient outcomes. Future research should focus on validating these findings on larger datasets, integrating the model into clinical workflows, and exploring its use in personalized treatment planning.
PMID:39391583 | PMC:PMC11466622 | DOI:10.1016/j.jbo.2024.100638
Video-based AI module with raw-scale and ROI-scale information for thyroid nodule diagnosis
Heliyon. 2024 Sep 19;10(19):e37924. doi: 10.1016/j.heliyon.2024.e37924. eCollection 2024 Oct 15.
ABSTRACT
OBJECTIVES: Ultrasound examination is a primary method for detecting thyroid lesions in clinical practice. Incorrect ultrasound diagnosis may lead to delayed treatment or unnecessary biopsy punctures. Therefore, our objective is to propose an artificial intelligence model to increase the precision of thyroid ultrasound diagnosis and reduce puncture rates.
METHODS: We consecutively collected ultrasound recordings from 672 patients with 845 nodules across two Chinese hospitals. This dataset was divided into training, validation, and internal test sets in a ratio of 7:1:2. We constructed and tested six different model variants based on different video feature distillation strategies and whether additional information from ROI (Region of Interest) scales was used. The models' performances were evaluated using the internal test set and an additional external test set containing 126 nodules from a third hospital.
RESULTS: The dual-stream model, which contains both raw-scale and ROI-scale streams with the time dimensional convolution layer, achieved the best performance on both internal and external test sets. On the internal test set, it achieved an AUROC (Area Under the Receiver Operating Characteristic Curve) of 0.969 (95% confidence interval, CI: 0.944-0.993) and an accuracy of 92.6%, outperforming other variants (AUROC: 0.936-0.955, accuracy: 80.2%-88.3%) and experienced radiologists (accuracy: 91.9%). The AUROC of the best model on the external test set was 0.931 (95% CI: 0.890-0.972).
CONCLUSION: Integrating a dual-stream model with additional ROI scale information and the time dimensional convolution layer can improve performance in diagnosing thyroid ultrasound videos.
PMID:39391469 | PMC:PMC11466579 | DOI:10.1016/j.heliyon.2024.e37924
Unified Noise-aware Network for Low-count PET Denoising with Varying Count Levels
IEEE Trans Radiat Plasma Med Sci. 2024 Apr;8(4):366-378. doi: 10.1109/trpms.2023.3334105. Epub 2023 Nov 20.
ABSTRACT
As PET imaging is accompanied by substantial radiation exposure and cancer risk, reducing the radiation dose in PET scans is an important topic. However, low-count PET scans often suffer from high image noise, which can negatively impact image quality and diagnostic performance. Recent advances in deep learning have shown great potential for recovering the underlying signal from noisy counterparts. However, neural networks trained on a specific noise level cannot be easily generalized to other noise levels due to different noise amplitudes and variances. To obtain optimal denoised results, we may need to train multiple networks using data with different noise levels, but this approach may be infeasible in reality due to limited data availability. Denoising dynamic PET images presents an additional challenge due to tracer decay and continuously changing noise levels across dynamic frames. To address these issues, we propose a Unified Noise-aware Network (UNN) that combines multiple sub-networks with varying denoising power to generate optimal denoised results regardless of the input noise level. Evaluated using large-scale data from two medical centers with different vendors, the results showed that UNN consistently produces promising denoised results regardless of input noise level and demonstrates superior performance over networks trained on single-noise-level data, especially for extremely low-count data.
PMID:39391291 | PMC:PMC11463975 | DOI:10.1109/trpms.2023.3334105
A deep learning model to enhance the classification of primary bone tumors based on incomplete multimodal images in X-ray, CT, and MRI
Cancer Imaging. 2024 Oct 10;24(1):135. doi: 10.1186/s40644-024-00784-7.
ABSTRACT
BACKGROUND: Accurately classifying primary bone tumors is crucial for guiding therapeutic decisions. The National Comprehensive Cancer Network guidelines recommend multimodal images to provide different perspectives for the comprehensive evaluation of primary bone tumors. However, in clinical practice, most patients' medical multimodal images are often incomplete. This study aimed to build a deep learning model using patients' incomplete multimodal images from X-ray, CT, and MRI alongside clinical characteristics to classify primary bone tumors as benign, intermediate, or malignant.
METHODS: In this retrospective study, a total of 1305 patients with histopathologically confirmed primary bone tumors (internal dataset, n = 1043; external dataset, n = 262) were included from two centers between January 2010 and December 2022. We proposed a Primary Bone Tumor Classification Transformer Network (PBTC-TransNet) fusion model to classify primary bone tumors. Areas under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate the model's classification performance.
RESULTS: The PBTC-TransNet fusion model achieved satisfactory micro-average AUCs of 0.847 (95% CI: 0.832, 0.862) and 0.782 (95% CI: 0.749, 0.817) on the internal and external test sets. For the classification of benign, intermediate, and malignant primary bone tumors, the model achieved AUCs of 0.827/0.727, 0.740/0.662, and 0.815/0.745 on the internal/external test sets, respectively. Furthermore, across all patient subgroups stratified by the distribution of imaging modalities, the PBTC-TransNet fusion model achieved micro-average AUCs ranging from 0.700 to 0.909 and 0.640 to 0.847 on the internal and external test sets, respectively. The model showed the highest micro-average AUC of 0.909, accuracy of 84.3%, micro-average sensitivity of 84.3%, and micro-average specificity of 92.1% for patients with only X-rays in the internal test set. On the external test set, the PBTC-TransNet fusion model achieved its highest micro-average AUC of 0.847 for patients with X-ray + CT.
CONCLUSIONS: We successfully developed and externally validated the transformer-based PBTC-TransNet fusion model for the effective classification of primary bone tumors. Rooted in incomplete multimodal images and clinical characteristics, the model effectively mirrors real-life clinical scenarios, enhancing its clinical practicability.
PMID:39390604 | DOI:10.1186/s40644-024-00784-7
A deep learning-based dose calculation method for volumetric modulated arc therapy
Radiat Oncol. 2024 Oct 10;19(1):141. doi: 10.1186/s13014-024-02534-2.
ABSTRACT
BACKGROUND: Volumetric modulated arc therapy (VMAT) planning optimization involves iterative adjustment of numerous parameters and hence requires repeated dose recalculation. In this study, we used a deep learning method to develop a fast and accurate dose calculation method for VMAT.
METHODS: The classical 3D UNet was adopted and trained to learn the physics principles of dose calculation. The inputs included the projected fluence map (FM), computed tomography (CT) images, the radiological depth, and the source-to-voxel distance (SVD). The projected FM was generated by projecting the accumulated FM between two consecutive control points (CPs) onto the patient's anatomy; the accumulated FM was calculated by simulating the movement of the multi-leaf collimator (MLC) from one CP to the next. The dose calculated by the treatment planning system (TPS) was used as the ground truth. 51 head and neck VMAT plans were used, with 43, 1, and 7 cases as the training, validation, and testing datasets, respectively; correspondingly, 7182, 180, and 1260 CP samples were included in the training, validation, and testing datasets.
RESULTS: The presented method was evaluated by comparing the derived dose distribution to the TPS-calculated dose distribution. The dose profiles coincided for both a single CP and the entire plan (summation of all CPs), although the network-derived dose was smoother than the TPS-calculated dose. Gamma analysis was performed between the network-derived dose and the TPS-calculated dose. The average gamma pass rates were 96.56%, 98.75%, 98.03%, and 99.30% under the criteria of 2% (dose tolerance)/2 mm (distance to agreement, DTA), 2%/3 mm, 3%/2 mm, and 3%/3 mm, respectively. No significant difference was observed in the critical indices, including the maximum dose, mean dose, and the relative volumes covered by 2000 cGy, 4000 cGy, and the prescription dose. For one CP, the average computational times of the network and the TPS were 0.09 s and 0.53 s, respectively; for one patient, the average times were 16.51 s and 95.60 s.
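For reference, a naive 1D version of the gamma analysis used above can be written directly in numpy. Clinical tools use optimized 3D implementations with interpolation; the global normalization and the low-dose cutoff here are common conventions but are assumptions on our part:

```python
import numpy as np

def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval,
                    dose_tol=0.02, dta_mm=2.0, cutoff=0.10):
    """Global gamma analysis for 1D dose profiles: for each reference point,
    gamma is the minimum combined dose-difference/distance metric over all
    evaluation points; a point passes if gamma <= 1."""
    d_max = d_ref.max()
    pass_count, total = 0, 0
    for xr, dr in zip(x_ref, d_ref):
        if dr < cutoff * d_max:                  # skip the low-dose region
            continue
        total += 1
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / (dose_tol * d_max)) ** 2
        if np.sqrt(dist2 + dose2).min() <= 1.0:
            pass_count += 1
    return 100.0 * pass_count / total
```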
CONCLUSION: The dose distribution derived by the network showed good agreement with the TPS-calculated dose distribution, and the computational time was reduced to approximately one-sixth of its original duration. Therefore, the presented deep learning-based dose calculation method has the potential to be used for planning optimization.
PMID:39390598 | DOI:10.1186/s13014-024-02534-2
Enhanced ovarian cancer survival prediction using temporal analysis and graph neural networks
BMC Med Inform Decis Mak. 2024 Oct 10;24(1):299. doi: 10.1186/s12911-024-02665-2.
ABSTRACT
Ovarian cancer is a formidable health challenge that demands accurate and timely survival predictions to guide clinical interventions. Existing methods, while commendable, are limited in their ability to harness the temporal evolution of patient data and to capture intricate interdependencies among different data elements; their shortcomings stem from an inability to properly model the complex interactions among diverse clinical data units and the dynamic changes in a patient's state over time. In this paper, we present a novel methodology that combines Temporal Analysis and Graph Neural Networks (GNNs) to significantly enhance ovarian cancer survival predictions. Compared to previous methods, the proposed approach yields a noteworthy 8.3% gain in precision, 4.9% higher accuracy, 5.5% better recall, and a considerable 2.9% reduction in prediction latency. The Temporal Analysis component of our method uses longitudinal patient data to identify significant patterns and trends that offer valuable insights into the course of ovarian cancer. Through the integration of GNNs, we provide a robust framework capable of capturing complex interactions among different features of the clinical data, permitting the model to recognize subtle dependencies that can affect survival outcomes. Our work has substantial implications for clinical practice: prompt and accurate estimation of ovarian cancer survival enables clinicians to customize treatment regimens, manage resources efficiently, and provide individualized care to patients. Additionally, the interpretability of our model's predictions promotes a collaborative approach to patient care by strengthening trust between clinical staff and the AI-driven decision support system. The proposed approach not only outperforms existing methods but also has the potential to advance ovarian cancer treatment by providing clinicians with a reliable tool for informed decision-making. Through the fusion of Temporal Analysis and Graph Neural Networks, we bridge the gap between data-driven insights and clinical practice, offering a promising avenue for improving patient outcomes in ovarian cancer management.
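The abstract leaves the GNN architecture unspecified. As a generic illustration of the message-passing building block such models rely on, here is a single graph-convolution layer (Kipf and Welling) in PyTorch, operating on a dense adjacency matrix; the paper's full architecture and its temporal component are not reproduced:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W),
    i.e., each node aggregates degree-normalized neighbor features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d = a.sum(dim=1).rsqrt().diag()                      # D^-1/2 as a matrix
        return torch.relu(d @ a @ d @ self.lin(h))           # normalized aggregation
```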
PMID:39390514 | DOI:10.1186/s12911-024-02665-2
Automatic maxillary sinus segmentation and pathology classification on cone-beam computed tomographic images using deep learning
BMC Oral Health. 2024 Oct 10;24(1):1208. doi: 10.1186/s12903-024-04924-0.
ABSTRACT
BACKGROUND: Automated segmentation of the maxillofacial complex could replace traditional segmentation methods and increase the efficiency of virtual workflows. The use of deep learning (DL) systems to detect the maxillary sinuses and their pathologies will both facilitate the work of physicians and serve as a support mechanism before planned surgeries.
OBJECTIVE: The aim was to use a modified You Only Look Once v5x (YOLOv5x) architecture with transfer learning capabilities to segment both the maxillary sinuses and maxillary sinus diseases on cone-beam computed tomography (CBCT) images.
METHODS: The dataset consists of 307 anonymized CBCT images of patients (173 female and 134 male) obtained from the radiology archive of the Department of Oral and Maxillofacial Radiology. Bilateral maxillary sinus CBCT scans were used to identify mucous retention cysts (MRC), mucosal thickenings (MT), total and partial opacifications, and healthy maxillary sinuses without any radiological findings.
RESULTS: Recall, precision and F1 score values for total maxillary sinus segmentation were 1, 0.985 and 0.992, respectively; 1, 0.931 and 0.964 for healthy maxillary sinus segmentation; 0.858, 0.923 and 0.889 for MT segmentation; 0.977, 0.877 and 0.924 for MRC segmentation; 1, 0.942 and 0.970 for sinusitis segmentation.
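These F1 scores are the harmonic means of the reported precision and recall values, which is quick to verify:

```python
# F1 is the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.985, 1.0), 3))    # 0.992 - total maxillary sinus segmentation
print(round(f1(0.923, 0.858), 3))  # 0.889 - mucosal thickening (MT) segmentation
```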
CONCLUSION: This study demonstrates that maxillary sinuses can be segmented, and maxillary sinus diseases can be accurately detected using the AI model.
PMID:39390490 | DOI:10.1186/s12903-024-04924-0
A multi-task graph deep learning model to predict drugs combination of synergy and sensitivity scores
BMC Bioinformatics. 2024 Oct 10;25(1):327. doi: 10.1186/s12859-024-05925-0.
ABSTRACT
BACKGROUND: Drug combination treatments have proven to be a realistic technique for treating challenging diseases such as cancer by enhancing efficacy and mitigating side effects. To achieve the therapeutic goals of these combinations, it is essential to employ multi-targeted drug combinations, which maximize effectiveness and synergistic effects.
RESULTS: This paper proposes 'MultiComb', a multi-task deep learning (MTDL) model designed to simultaneously predict the synergy and sensitivity of drug combinations. The model utilizes a graph convolution network to represent the Simplified Molecular-Input Line-Entry System (SMILES) strings of two drugs, generating their respective features. Three fully connected subnetworks extract features of the cancer cell line. The drug and cell line features are then concatenated and processed through an attention mechanism, which outputs two optimized feature representations for the target tasks. A cross-stitch model learns the relationship between these tasks, and each learned task feature is finally fed into fully connected subnetworks to predict the synergy and sensitivity scores. The proposed model is validated using the O'Neil benchmark dataset, which includes 38 unique drugs combined into 17,901 drug combination pairs tested across 37 unique cancer cell lines. Performance is measured by mean squared error (MSE), mean absolute error (MAE), coefficient of determination (R2), Spearman, and Pearson scores. For the synergy task, the model achieves 232.37, 9.59, 0.57, 0.76, and 0.73 on these metrics, respectively; for the sensitivity task, the corresponding values are 15.59, 2.74, 0.90, 0.95, and 0.95.
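The five reported metrics can be computed with scipy and scikit-learn; a small helper of ours (not the paper's code) for evaluating either task's predictions:

```python
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_report(y_true, y_pred):
    """The five metrics reported for each task (synergy and sensitivity)."""
    return {
        "MSE": mean_squared_error(y_true, y_pred),
        "MAE": mean_absolute_error(y_true, y_pred),
        "R2": r2_score(y_true, y_pred),
        "Spearman": spearmanr(y_true, y_pred)[0],  # [0] = correlation coefficient
        "Pearson": pearsonr(y_true, y_pred)[0],
    }
```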
CONCLUSION: This paper proposes an MTDL model to predict synergy and sensitivity scores for drug combinations targeting specific cancer cell lines. The MTDL model demonstrates superior performance compared to existing approaches.
PMID:39390357 | DOI:10.1186/s12859-024-05925-0
RNA-Seq analysis for breast cancer detection: a study on paired tissue samples using hybrid optimization and deep learning techniques
J Cancer Res Clin Oncol. 2024 Oct 10;150(10):455. doi: 10.1007/s00432-024-05968-z.
ABSTRACT
PROBLEM: Breast cancer is a leading global health issue, contributing to high mortality rates among women. The challenge of early detection is exacerbated by the high dimensionality and complexity of gene expression data, which complicates the classification process.
AIM: This study aims to develop an advanced deep learning model that can accurately detect breast cancer using RNA-Seq gene expression data, while effectively addressing the challenges posed by the data's high dimensionality and complexity.
METHODS: We introduce a novel hybrid gene selection approach that combines the Harris Hawks Optimization (HHO) and Whale Optimization (WO) algorithms with deep learning to improve feature selection and classification accuracy. The model's performance was compared to that of conventional optimization algorithms integrated with deep learning: Genetic Algorithm (GA), Artificial Bee Colony (ABC), Cuckoo Search (CS), and Particle Swarm Optimization (PSO). RNA-Seq data were collected from 66 paired samples of normal and cancerous tissues from breast cancer patients at the Jawaharlal Nehru Cancer Hospital & Research Centre, Bhopal, India. Sequencing was performed by Biokart Genomics Lab, Bengaluru, India.
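The abstract does not detail the HHO/WO objective. Wrapper-style gene selection typically scores a candidate binary gene mask by classifier accuracy with a penalty on subset size, as in this hedged sketch; the metaheuristic update rules that propose new masks are omitted, and the KNN stand-in and weighting are our assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    """Score one candidate gene subset (0/1 mask over columns of X):
    reward cross-validated accuracy, lightly penalize subset size."""
    if mask.sum() == 0:
        return 0.0                      # empty subsets are invalid
    acc = cross_val_score(KNeighborsClassifier(),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.mean())
```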
RESULTS: The proposed model achieved a mean classification accuracy of 99.0%, consistently outperforming the GA, ABC, CS, and PSO methods. The dataset comprised 55 female breast cancer patients, including both early and advanced stages, along with age-matched healthy controls.
CONCLUSION: Our findings demonstrate that the hybrid gene selection approach using HHO and WO, combined with deep learning, is a powerful and accurate tool for breast cancer detection. This approach shows promise for early detection and could facilitate personalized treatment strategies, ultimately improving patient outcomes.
PMID:39390265 | DOI:10.1007/s00432-024-05968-z
Application of artificial intelligence model in pathological staging and prognosis of clear cell renal cell carcinoma
Discov Oncol. 2024 Oct 10;15(1):545. doi: 10.1007/s12672-024-01437-8.
ABSTRACT
This study aims to develop a deep learning (DL) model based on whole-slide images (WSIs) to predict the pathological stage of clear cell renal cell carcinoma (ccRCC). The histopathological images of 513 ccRCC patients were downloaded from The Cancer Genome Atlas (TCGA) database and randomly divided into training and validation sets at a ratio of 8:2. The CLAM algorithm was used to establish the DL model, and the stability of the model was evaluated in an external validation set. DL features were extracted from the model to construct a prognostic risk model, which was also validated in an external dataset. The DL model showed excellent prediction ability, with an area under the curve (AUC) of 0.875 and an average accuracy score of 0.809, indicating that it could reliably distinguish ccRCC patients at different stages from histopathological images. In addition, the prognostic risk model constructed from DL features showed that the overall survival rate of patients in the high-risk group was significantly lower than that in the low-risk group (P = 0.003), and the AUC values for predicting 1-, 3-, and 5-year overall survival were 0.68, 0.69, and 0.69, respectively, indicating that the prediction model had high sensitivity and specificity. The results on the validation set were consistent with the above. Therefore, the DL model can accurately predict the pathological stage and prognosis of ccRCC patients and provide a useful reference for clinical diagnosis.
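The high-risk versus low-risk survival comparison described above is a standard Kaplan-Meier/log-rank analysis; a hedged lifelines sketch follows. Variable names, file names, and the median split are assumptions for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

risk = np.load("risk_scores.npy")       # DL-derived risk per patient - assumption
time = np.load("followup_months.npy")   # follow-up durations - assumption
event = np.load("death_observed.npy")   # 1 = death observed, 0 = censored

high = risk >= np.median(risk)          # median split into risk groups - assumption
res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p = {res.p_value:.3f}")  # the study reports P = 0.003

km = KaplanMeierFitter()
for grp, name in [(high, "high risk"), (~high, "low risk")]:
    km.fit(time[grp], event[grp], label=name)
    km.plot_survival_function()
plt.show()
```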
PMID:39390246 | DOI:10.1007/s12672-024-01437-8