Deep learning

Brain tumor classification from MRI scans: a framework of hybrid deep learning model with Bayesian optimization and quantum theory-based marine predator algorithm

Fri, 2024-02-23 06:00

Front Oncol. 2024 Feb 8;14:1335740. doi: 10.3389/fonc.2024.1335740. eCollection 2024.

ABSTRACT

Brain tumor classification is one of the most difficult tasks for clinical diagnosis and treatment in medical image analysis. Errors made during the brain tumor diagnosis process may shorten a patient's life. Nevertheless, most currently used techniques focus on extracting and selecting deep features while ignoring certain features that have particular significance and relevance to the classification problem. One important area of research is the deep learning-based categorization of brain tumors using brain magnetic resonance imaging (MRI). This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks, because class imbalance biases the classifier toward the majority class. We designed a sparse autoencoder network to generate new images that resolve this imbalance. Two pretrained neural networks were then modified, their hyperparameters were initialized using Bayesian optimization, and the networks were trained. Deep features were subsequently extracted from the global average pooling layer. Because the extracted features still contain some irrelevant information, we proposed an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA), which selects the best features from both networks; the selected features are then fused using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. The proposed framework was tested on an augmented Figshare dataset and achieved an accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83%. A comparison and an ablation study confirm the accuracy improvement of this work.
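The serial fusion step described above amounts to concatenating the selected feature subsets from the two networks before classification. The sketch below illustrates that idea with a random binary mask standing in for the QTbMPA selector; the feature dimensions, class count, and classifier settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Deep features from the global average pooling layers of two pretrained CNNs
# (dimensions are illustrative; real values depend on the chosen backbones).
feats_net1 = rng.normal(size=(500, 1280))   # e.g. 500 MRI slices x 1280 features
feats_net2 = rng.normal(size=(500, 2048))
labels = rng.integers(0, 3, size=500)       # 3 tumor classes

# Stand-in for the QTbMPA selector: a binary mask marking the retained features.
mask1 = rng.random(feats_net1.shape[1]) > 0.5
mask2 = rng.random(feats_net2.shape[1]) > 0.5

# Serial (concatenation-based) fusion of the selected feature subsets.
fused = np.hstack([feats_net1[:, mask1], feats_net2[:, mask2]])

# Shallow neural-network classifier on the fused feature set.
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300, random_state=0)
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```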

PMID:38390266 | PMC:PMC10882068 | DOI:10.3389/fonc.2024.1335740

Categories: Literature Watch

Pneumonia classification: A limited data approach for global understanding

Fri, 2024-02-23 06:00

Heliyon. 2024 Feb 14;10(4):e26177. doi: 10.1016/j.heliyon.2024.e26177. eCollection 2024 Feb 29.

ABSTRACT

As the human race has advanced, so too have the ailments that afflict it. Diseases such as pneumonia, once considered basic flu or allergies, have evolved into more severe forms, including SARS and COVID-19, presenting significant risks to people worldwide. In our study, we focused on categorizing pneumonia-related inflammation in chest X-rays (CXRs) using a relatively small dataset. Our approach encompasses a comprehensive view, addressing every potential area of inflammation in the CXR. We employed enhanced class activation maps (mCAM) to meet the clinical criteria for classification rationale. Our model incorporates capsule network clusters (CNsC), which aid in learning different aspects such as the geometry, orientation, and position of the inflammation seen in the CXR. The CNsC rapidly interpret various perspectives in a single CXR without needing image augmentation, a common necessity in existing detection models, which significantly cuts down training and evaluation durations. We conducted thorough testing using the RSNA pneumonia dataset of CXR images, achieving accuracy and recall rates as high as 98.3% and 99.5% in our conclusive tests. Additionally, we observed encouraging outcomes when applying our trained model to standard X-ray images obtained from medical clinics.
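The paper's enhanced class activation maps (mCAM) are not detailed in the abstract; for reference, a plain class activation map for a classifier with a global-average-pooling head is just the channel-weighted sum of the last convolutional feature maps. A minimal numpy sketch, with array shapes assumed for illustration:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the final conv feature maps by the dense-layer
    weights of the target class and sum over channels.

    feature_maps : (C, H, W) activations before global average pooling
    fc_weights   : (num_classes, C) weights of the final dense layer
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)              # keep positive evidence only
    return cam / (cam.max() + 1e-8)       # normalize to [0, 1]

# Illustrative shapes for a pneumonia-vs-normal CXR classifier.
fmaps = np.random.rand(512, 14, 14)
weights = np.random.rand(2, 512)
heatmap = class_activation_map(fmaps, weights, class_idx=1)
print(heatmap.shape)  # (14, 14); upsample to the CXR size for overlay
```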

PMID:38390159 | PMC:PMC10881372 | DOI:10.1016/j.heliyon.2024.e26177

Categories: Literature Watch

Evaluation of deep learning computer vision for water level measurements in rivers

Fri, 2024-02-23 06:00

Heliyon. 2024 Feb 11;10(4):e25989. doi: 10.1016/j.heliyon.2024.e25989. eCollection 2024 Feb 29.

ABSTRACT

Image-based gauging stations offer the potential for substantial enhancement in the monitoring networks of river water levels. Nonetheless, the majority of camera gauges fall short in delivering reliable and precise measurements because of the fluctuating appearance of water in the rivers over the course of the year. In this study, we introduce a method for measuring water levels in rivers using both the traditional continuous image subtraction (CIS) approach and a SegNet neural network based on deep learning computer vision. The historical images collected from on-site investigations were employed to train three neural networks (SegNet, U-Net, and FCN) in order to evaluate their effectiveness, overall performance, and reliability. The research findings demonstrated that the SegNet neural network outperformed the CIS method in accurately measuring water levels. The root mean square error (RMSE) between the water level measurements obtained by the SegNet neural network and the gauge station's readings ranged from 0.013 m to 0.066 m, with a high correlation coefficient of 0.998. Furthermore, the study revealed that the performance of the SegNet neural network in analyzing water levels in rivers improved with the inclusion of a larger number of images, diverse image categories, and higher image resolutions in the training dataset. These promising results emphasize the potential of deep learning computer vision technology, particularly the SegNet neural network, to enhance water level measurement in rivers. Notably, the quality and diversity of the training dataset play a crucial role in optimizing the network's performance. Overall, the application of this advanced technology holds great promise for advancing water level monitoring and management in river systems.
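The agreement statistics reported above (RMSE of 0.013-0.066 m and a correlation coefficient of 0.998) follow the standard definitions and can be computed for any pair of measurement series; the short sketch below uses invented readings purely for illustration.

```python
import numpy as np

# Hypothetical water levels (m): gauge-station reference vs. SegNet estimates.
gauge  = np.array([1.02, 1.10, 1.25, 1.40, 1.38, 1.20])
segnet = np.array([1.03, 1.08, 1.27, 1.38, 1.40, 1.19])

rmse = np.sqrt(np.mean((segnet - gauge) ** 2))       # root mean square error
corr = np.corrcoef(segnet, gauge)[0, 1]              # Pearson correlation
print(f"RMSE = {rmse:.3f} m, correlation = {corr:.3f}")
```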

PMID:38390142 | PMC:PMC10881344 | DOI:10.1016/j.heliyon.2024.e25989

Categories: Literature Watch

The optimization of college tennis training and teaching under deep learning

Fri, 2024-02-23 06:00

Heliyon. 2024 Feb 11;10(4):e25954. doi: 10.1016/j.heliyon.2024.e25954. eCollection 2024 Feb 29.

ABSTRACT

To enhance the integration of deep learning into tennis education and instigate reforms in sports programs, this paper employs deep learning techniques to analyze tennis tactics. The experiments initially introduce the concepts of sports science and backpropagation neural networks. Subsequently, these theories are applied to formulate a comprehensive system of tennis tactical diagnostic indicators, encompassing construction principles, basic requirements, diagnostic indicator content, and evaluation indicator design. Simultaneously, a Back Propagation Neural Network (BPNN) is utilized to construct a tennis tactical diagnostic model. The paper concludes with a series of experiments conducted to validate the effectiveness of the constructed indicator system and diagnostic model. The results indicate the excellent performance of the neural network model when trained on tennis match data, with a mean squared error of 0.00037146 on the validation set and 0.0104 on the training set. This demonstrates the outstanding predictive capability of the model. Additionally, the system proves capable of providing detailed tactical application analysis when employing the tennis tactical diagnostic indicator system for real-time athlete diagnosis. This functionality offers robust support for effective training and coaching during matches. In summary, this paper aims to evaluate athletes' performance by constructing a diagnostic system, providing a solid reference for optimizing tennis training and education. The insights offered by this paper have the potential to drive reforms in sports programs, particularly in the realm of tennis education.
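In implementation terms, a back-propagation neural network of the kind described is a small multilayer perceptron trained on tactical indicator vectors and evaluated by mean squared error. The sketch below is a generic stand-in under assumed data; the indicator features and targets are invented and are not the paper's diagnostic indicator system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical tactical indicators (serve %, net points, rally length, ...)
X = rng.random((300, 8))
y = X @ rng.random(8) + 0.05 * rng.normal(size=300)   # synthetic performance score

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

# Back-propagation neural network (multilayer perceptron) for regression.
bpnn = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1)
bpnn.fit(X_tr, y_tr)

print("train MSE:", mean_squared_error(y_tr, bpnn.predict(X_tr)))
print("val   MSE:", mean_squared_error(y_val, bpnn.predict(X_val)))
```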

PMID:38390121 | PMC:PMC10881878 | DOI:10.1016/j.heliyon.2024.e25954

Categories: Literature Watch

Improved automated tumor segmentation in whole-body 3D scans using multi-directional 2D projection-based priors

Fri, 2024-02-23 06:00

Heliyon. 2024 Feb 15;10(4):e26414. doi: 10.1016/j.heliyon.2024.e26414. eCollection 2024 Feb 29.

ABSTRACT

Early cancer detection, guided by whole-body imaging, is important for the overall survival and well-being of the patients. While various computer-assisted systems have been developed to expedite and enhance cancer diagnostics and longitudinal monitoring, the detection and segmentation of tumors, especially from whole-body scans, remain challenging. To address this, we propose a novel end-to-end automated framework that first generates a tumor probability distribution map (TPDM), incorporating prior information about the tumor characteristics (e.g. size, shape, location). Subsequently, the TPDM is integrated with a state-of-the-art 3D segmentation network along with the original PET/CT or PET/MR images. This aims to produce more meaningful tumor segmentation masks compared to using the baseline 3D segmentation network alone. The proposed method was evaluated on three independent cohorts (autoPET, CAR-T, cHL) of images containing different cancer forms, obtained with different imaging modalities, and acquisition parameters and lesions annotated by different experts. The evaluation demonstrated the superiority of our proposed method over the baseline model by significant margins in terms of Dice coefficient, and lesion-wise sensitivity and precision. Many of the extremely small tumor lesions (i.e. the most difficult to segment) were missed by the baseline model but detected by the proposed model without additional false positives, resulting in clinically more relevant assessments. On average, an improvement of 0.0251 (autoPET), 0.144 (CAR-T), and 0.0528 (cHL) in overall Dice was observed. In conclusion, the proposed TPDM-based approach can be integrated with any state-of-the-art 3D UNET with potentially more accurate and robust segmentation results.
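Feeding the TPDM to the 3D segmentation network "along with the original PET/CT or PET/MR images" is most simply realized as an extra input channel. A minimal numpy sketch with assumed volume shapes:

```python
import numpy as np

# Illustrative whole-body volumes (D, H, W); real shapes depend on the scanner.
pet  = np.random.rand(128, 96, 96).astype(np.float32)
ct   = np.random.rand(128, 96, 96).astype(np.float32)
tpdm = np.random.rand(128, 96, 96).astype(np.float32)  # tumor probability prior in [0, 1]

# Stack the modalities and the TPDM prior as channels: a (C, D, H, W) tensor
# ready for a 3D segmentation network such as a 3D U-Net.
network_input = np.stack([pet, ct, tpdm], axis=0)
print(network_input.shape)  # (3, 128, 96, 96)
```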

PMID:38390107 | PMC:PMC10882139 | DOI:10.1016/j.heliyon.2024.e26414

Categories: Literature Watch

An Automated Heart Shunt Recognition Pipeline Using Deep Neural Networks

Thu, 2024-02-22 06:00

J Imaging Inform Med. 2024 Feb 22. doi: 10.1007/s10278-024-01047-4. Online ahead of print.

ABSTRACT

Automated recognition of heart shunts using saline contrast transthoracic echocardiography (SC-TTE) has the potential to transform clinical practice, enabling non-experts to assess heart shunt lesions. This study aims to develop a fully automated and scalable analysis pipeline for distinguishing heart shunts, utilizing a deep neural network-based framework. The pipeline consists of three steps: (1) chamber segmentation, (2) ultrasound microbubble localization, and (3) disease classification model establishment. The study included 91 patients with intracardiac shunts, 61 patients with extracardiac shunts, and 84 asymptomatic individuals as the normal control group. Participants' SC-TTE images were segmented using the U-Net model to obtain cardiac chambers. The segmentation results were combined with ultrasound microbubble localization to generate multivariate time series data on microbubble counts in each chamber. A classification model was then trained using these data to distinguish between intracardiac and extracardiac shunts. The proposed framework accurately segmented heart chambers (Dice coefficient = 0.92 ± 0.1) and localized microbubbles. The disease classification model achieved high accuracy, sensitivity, specificity, F1 score, kappa value, and AUC value for both intracardiac and extracardiac shunts. For intracardiac shunts, accuracy was 0.875 ± 0.008, sensitivity was 0.891 ± 0.002, specificity was 0.865 ± 0.012, F1 score was 0.836 ± 0.011, kappa value was 0.735 ± 0.017, and AUC value was 0.942 ± 0.014. For extracardiac shunts, accuracy was 0.902 ± 0.007, sensitivity was 0.763 ± 0.014, specificity was 0.966 ± 0.008, F1 score was 0.830 ± 0.012, kappa value was 0.762 ± 0.017, and AUC value was 0.916 ± 0.006. The proposed framework utilizing deep neural networks offers a fast, convenient, and accurate method for identifying intracardiac and extracardiac shunts. It aids in shunt recognition and generates valuable quantitative indices, assisting clinicians in diagnosing these conditions.
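Combining chamber masks with bubble localizations into a multivariate count time series, as described above, can be illustrated with a small numpy routine; the mask label convention, shapes, and toy data below are assumptions for illustration only.

```python
import numpy as np

def bubble_counts_per_chamber(chamber_masks, bubble_coords, n_chambers=4):
    """Build a multivariate time series of microbubble counts per chamber.

    chamber_masks : (T, H, W) integer masks, 0 = background, 1..n_chambers = chambers
    bubble_coords : list of length T; item t is an (N_t, 2) array of (row, col)
                    bubble centroids detected in frame t
    returns       : (T, n_chambers) count matrix
    """
    T = chamber_masks.shape[0]
    counts = np.zeros((T, n_chambers), dtype=int)
    for t in range(T):
        for r, c in bubble_coords[t]:
            label = chamber_masks[t, int(r), int(c)]
            if label > 0:
                counts[t, label - 1] += 1
    return counts

# Toy example: 3 frames, 64x64 masks, a few detected bubbles per frame.
masks = np.zeros((3, 64, 64), dtype=int)
masks[:, :32, :32] = 1          # chamber 1 occupies the top-left quadrant
coords = [np.array([[10, 10], [40, 40]]), np.array([[5, 5]]), np.empty((0, 2))]
print(bubble_counts_per_chamber(masks, coords))
```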

PMID:38388868 | DOI:10.1007/s10278-024-01047-4

Categories: Literature Watch

Fully-automated multi-organ segmentation tool applicable to both non-contrast and post-contrast abdominal CT: deep learning algorithm developed using dual-energy CT images

Thu, 2024-02-22 06:00

Sci Rep. 2024 Feb 22;14(1):4378. doi: 10.1038/s41598-024-55137-y.

ABSTRACT

A novel 3D nnU-Net-based algorithm was developed for fully automated multi-organ segmentation in abdominal CT, applicable to both non-contrast and post-contrast images. The algorithm was trained using dual-energy CT (DECT)-obtained portal venous phase (PVP) and spatiotemporally matched virtual non-contrast images, and tested using a single-energy (SE) CT dataset comprising PVP and true non-contrast (TNC) images. The algorithm showed robust accuracy in segmenting the liver, spleen, right kidney (RK), and left kidney (LK), with mean Dice similarity coefficients (DSCs) exceeding 0.94 for each organ, regardless of contrast enhancement. However, pancreas segmentation demonstrated slightly lower performance, with mean DSCs of around 0.8. In organ volume estimation, the algorithm demonstrated excellent agreement with ground-truth measurements for the liver, spleen, RK, and LK (intraclass correlation coefficients [ICCs] > 0.95), while the pancreas showed good agreement (ICC = 0.792 in SE-PVP, 0.840 in TNC). Accurate volume estimation within a 10% deviation from ground truth was achieved in over 90% of cases involving the liver, spleen, RK, and LK. These findings indicate the efficacy of our 3D nnU-Net-based algorithm, developed using DECT images, which provides precise segmentation of the liver, spleen, RK, and LK in both non-contrast and post-contrast CT images, enabling reliable organ volumetry, albeit with relatively reduced performance for the pancreas.
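Organ volumetry from a segmentation mask reduces to counting labeled voxels and multiplying by the voxel volume, and the Dice similarity coefficient compares predicted and reference masks. A minimal sketch, with voxel spacing and mask shapes assumed:

```python
import numpy as np

def organ_volume_ml(mask, spacing_mm=(1.0, 0.7, 0.7)):
    """Volume of a binary organ mask in millilitres, given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)

pred = np.zeros((60, 128, 128), dtype=bool); pred[20:40, 30:90, 30:90] = True
ref  = np.zeros_like(pred);                  ref[22:40, 32:92, 30:90]  = True
print(f"volume = {organ_volume_ml(pred):.1f} mL, Dice = {dice(pred, ref):.3f}")
```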

PMID:38388824 | DOI:10.1038/s41598-024-55137-y

Categories: Literature Watch

Deep learning for automatic bowel-obstruction identification on abdominal CT

Thu, 2024-02-22 06:00

Eur Radiol. 2024 Feb 22. doi: 10.1007/s00330-024-10657-z. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: Automated evaluation of abdominal computed tomography (CT) scans should help radiologists manage their massive workloads, thereby leading to earlier diagnoses and better patient outcomes. Our objective was to develop a machine-learning model capable of reliably identifying suspected bowel obstruction (BO) on abdominal CT.

MATERIALS AND METHODS: The internal dataset comprised 1345 abdominal CTs obtained in 2015-2022 from 1273 patients with suspected BO; among them, 670 were annotated as BO yes/no by an experienced abdominal radiologist. The external dataset consisted of 88 radiologist-annotated CTs. We developed a full preprocessing pipeline for abdominal CT comprising a model to locate the abdominal-pelvic region and another model to crop the 3D scan around the body. We built, trained, and tested several neural-network architectures for the binary classification (BO, yes/no) of each CT. F1 and balanced accuracy scores were computed to assess model performance.

RESULTS: The mixed convolutional network pretrained on a Kinetics 400 dataset achieved the best results: with the internal dataset, the F1 score was 0.92, balanced accuracy 0.86, and sensitivity 0.93; with the external dataset, the corresponding values were 0.89, 0.89, and 0.89. When calibrated on sensitivity, this model produced 1.00 sensitivity, 0.84 specificity, and an F1 score of 0.88 with the internal dataset; corresponding values were 0.98, 0.76, and 0.87 with the external dataset.
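"Calibrated on sensitivity" here means choosing the decision threshold so the classifier reaches a target sensitivity and then reporting the resulting specificity and F1 score. A generic sketch with synthetic scores; the target value and data are assumptions, not the authors' calibration procedure.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(2)

# Synthetic validation scores: higher score = more suspicious of bowel obstruction.
y_true  = rng.integers(0, 2, size=400)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=400), 0, 1)

def calibrate_on_sensitivity(y_true, y_score, target_sens=0.99):
    """Pick the highest threshold whose sensitivity still meets the target."""
    for thr in np.unique(y_score)[::-1]:          # scan thresholds high -> low
        pred = y_score >= thr
        sens = pred[y_true == 1].mean()
        if sens >= target_sens:
            spec = (~pred[y_true == 0]).mean()
            return thr, sens, spec, f1_score(y_true, pred)

thr, sens, spec, f1 = calibrate_on_sensitivity(y_true, y_score)
print(f"threshold={thr:.2f} sensitivity={sens:.2f} specificity={spec:.2f} F1={f1:.2f}")
```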

CONCLUSION: The 3D mixed convolutional neural network developed here shows great potential for the automated binary classification (BO yes/no) of abdominal CT scans from patients with suspected BO.

CLINICAL RELEVANCE STATEMENT: The 3D mixed CNN automates bowel obstruction classification, potentially automating patient selection and CT prioritization, leading to an enhanced radiologist workflow.

KEY POINTS: • Bowel obstruction's rising incidence strains radiologists. AI can aid urgent CT readings. • Employed 1345 CT scans, neural networks for bowel obstruction detection, achieving high accuracy and sensitivity on external testing. • 3D mixed CNN automates CT reading prioritization effectively and speeds up bowel obstruction diagnosis.

PMID:38388719 | DOI:10.1007/s00330-024-10657-z

Categories: Literature Watch

Ultra-low dose chest CT with silver filter and deep learning reconstruction significantly reduces radiation dose and retains quantitative information in the investigation and monitoring of lymphangioleiomyomatosis (LAM)

Thu, 2024-02-22 06:00

Eur Radiol. 2024 Feb 22. doi: 10.1007/s00330-024-10649-z. Online ahead of print.

ABSTRACT

PURPOSE: Frequent CT scans to quantify lung involvement in cystic lung disease increase radiation exposure. Beam-shaping energy filters can optimize imaging properties at lower radiation dosages. The aim of this study is to investigate whether the use of a SilverBeam filter and a deep learning reconstruction algorithm allows for reduced radiation dose chest CT scanning in patients with lymphangioleiomyomatosis (LAM).

MATERIAL AND METHODS: In a single-center prospective study, 60 consecutive patients with LAM underwent chest CT at standard and ultra-low radiation doses. Standard dose scan was performed with standard copper filter and ultra-low dose scan was performed with SilverBeam filter. Scans were reconstructed using a soft tissue kernel with deep learning reconstruction (AiCE) technique and using a soft tissue kernel with hybrid iterative reconstruction (AIDR3D). Cyst scores were quantified by semi-automated software. Signal-to-noise ratio (SNR) was calculated for each reconstruction. Data were analyzed by linear correlation, paired t-test, and Bland-Altman plots.

RESULTS: Patients averaged 49.4 years of age; all were female, with a mean BMI of 26.6 ± 6.1 kg/m2. The cyst score measured by AiCE reconstruction with the SilverBeam filter correlated well with that of AIDR3D reconstruction with the standard filter, with a 1.5% difference, and allowed for an 85.5% median radiation dosage reduction (0.33 mSv vs. 2.27 mSv, respectively, p < 0.001). SNR was slightly lower for SilverBeam AiCE images than for standard-filter AIDR3D images (3.1 vs. 3.2, p = 0.005).

CONCLUSION: SilverBeam filter with deep learning reconstruction reduces radiation dosage of chest CT, while maintaining accuracy of cyst quantification as well as image quality in cystic lung disease.

CLINICAL RELEVANCE STATEMENT: Radiation dosage from chest CT can be significantly reduced without sacrificing image quality by using silver filter in combination with a deep learning reconstructive algorithm.

KEY POINTS: • Deep learning reconstruction in chest CT had no significant effect on cyst quantification when compared to conventional hybrid iterative reconstruction. • SilverBeam filter reduced radiation dosage by 85.5% compared to standard dose chest CT. • SilverBeam filter in coordination with deep learning reconstruction maintained image quality and diagnostic accuracy for cyst quantification when compared to standard dose CT with hybrid iterative reconstruction.

PMID:38388717 | DOI:10.1007/s00330-024-10649-z

Categories: Literature Watch

A comprehensive computational benchmark for evaluating deep learning-based protein function prediction approaches

Thu, 2024-02-22 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae050. doi: 10.1093/bib/bbae050.

ABSTRACT

Proteins play an important role in life activities and are the basic units for performing functions. Accurately annotating functions to proteins is crucial for understanding the intricate mechanisms of life and developing effective treatments for complex diseases. Traditional biological experiments struggle to keep pace with the growing number of known proteins. With the development of high-throughput sequencing technology, a wide variety of biological data provides the possibility to accurately predict protein functions by computational methods. Consequently, many computational methods have been proposed. Due to the diversity of application scenarios, it is necessary to conduct a comprehensive evaluation of these computational methods to determine the suitability of each algorithm for specific cases. In this study, we present a comprehensive benchmark, BeProf, to process data and evaluate representative computational methods. We first collect the latest datasets and analyze the data characteristics. Then, we investigate and summarize 17 state-of-the-art computational methods. Finally, we propose a novel comprehensive evaluation metric, design eight application scenarios and evaluate the performance of existing methods on these scenarios. Based on the evaluation, we provide practical recommendations for different scenarios, enabling users to select the most suitable method for their specific needs. All of these servers can be obtained from https://csuligroup.com/BEPROF and https://github.com/CSUBioGroup/BEPROF.

PMID:38388682 | DOI:10.1093/bib/bbae050

Categories: Literature Watch

CRISPR-DIPOFF: an interpretable deep learning approach for CRISPR Cas-9 off-target prediction

Thu, 2024-02-22 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbad530. doi: 10.1093/bib/bbad530.

ABSTRACT

CRISPR Cas-9 is a groundbreaking genome-editing tool that harnesses bacterial defense systems to alter DNA sequences accurately. This innovative technology holds vast promise in multiple domains like biotechnology, agriculture, and medicine. However, such power does not come without its own peril, and one such issue is the potential for unintended modifications (off-target effects), which highlights the need for accurate prediction and mitigation strategies. Though previous studies have demonstrated improvement in off-target prediction capability with the application of deep learning, they often struggle with the precision-recall trade-off, limiting their effectiveness, and they do not provide proper interpretation of their models' complex decision-making processes. To address these limitations, we have thoroughly explored deep learning networks, particularly recurrent neural network-based models, leveraging their established success in handling sequence data. Furthermore, we have employed a genetic algorithm for hyperparameter tuning to optimize these models' performance. The results from our experiments demonstrate significant performance improvement compared with the current state of the art in off-target prediction, highlighting the efficacy of our approach. Furthermore, leveraging the integrated gradients method, we interpret our models, resulting in a detailed analysis and understanding of the underlying factors that contribute to off-target predictions, in particular the presence of two sub-regions in the seed region of the single guide RNA, which extends the established biological hypothesis of off-target effects. To the best of our knowledge, our model can be considered the first to combine high efficacy, interpretability, and a desirable balance between precision and recall.
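Integrated gradients attributes a model's score to input positions by accumulating gradients along a straight path from a baseline to the input. The sketch below is a generic implementation around a toy recurrent scorer; the model, sequence encoding, and baseline choice are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class OffTargetScorer(nn.Module):
    """Toy recurrent scorer for an encoded sgRNA-DNA pair (placeholder model)."""
    def __init__(self, in_dim=4, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (B, L, in_dim)
        _, h = self.rnn(x)             # h: (1, B, hidden)
        return self.head(h[-1])        # (B, 1) off-target score

def integrated_gradients(model, x, steps=50):
    """Attribute the score to input positions by integrating gradients along the
    straight path from an all-zeros baseline to the input."""
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point.unsqueeze(0)).squeeze()
        grad, = torch.autograd.grad(score, point)
        total += grad
    return (x - baseline) * total / steps

model = OffTargetScorer()
x = torch.rand(23, 4)                  # 23-nt one-hot-like encoding (illustrative)
attr = integrated_gradients(model, x)
print(attr.sum(dim=1))                 # per-position contribution to the score
```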

PMID:38388680 | DOI:10.1093/bib/bbad530

Categories: Literature Watch

Deep learning segmentation of fibrous cap in intravascular optical coherence tomography images

Thu, 2024-02-22 06:00

Sci Rep. 2024 Feb 22;14(1):4393. doi: 10.1038/s41598-024-55120-7.

ABSTRACT

Thin-cap fibroatheroma (TCFA) is a prominent risk factor for plaque rupture. Intravascular optical coherence tomography (IVOCT) enables identification of the fibrous cap (FC), measurement of FC thickness, and assessment of plaque vulnerability. We developed a fully automated deep learning method for FC segmentation. This study included 32,531 images across 227 pullbacks from two registries (TRANSFORM-OCT and UHCMC). Images were semi-automatically labeled using our OCTOPUS tool, with expert editing following established guidelines. We applied preprocessing, including guidewire shadow detection, lumen segmentation, pixel shifting, and Gaussian filtering, to the raw IVOCT (r, θ) images. Data were augmented in a natural way by changing θ in the spiral acquisitions and by changing intensity and noise values. We used a modified SegResNet and comparison networks to segment FCs. We employed transfer learning from our existing, much larger, fully labeled calcification IVOCT dataset to reduce deep-learning training requirements. Postprocessing with a morphological operation enhanced segmentation performance. Overall, our method consistently delivered better FC segmentation results (Dice: 0.837 ± 0.012) than other deep-learning methods. Transfer learning reduced training time by 84% and reduced the need for additional training samples. Our method showed a high level of generalizability, evidenced by highly consistent segmentations across five-fold cross-validation (sensitivity: 85.0 ± 0.3%, Dice: 0.846 ± 0.011) and the held-out test set (sensitivity: 84.9%, Dice: 0.816). In addition, we found excellent agreement of FC thickness with ground truth (2.95 ± 20.73 µm), giving clinically insignificant bias. There was excellent reproducibility in pre- and post-stenting pullbacks (average FC angle: 200.9 ± 128.0° / 202.0 ± 121.1°). Our fully automated, deep-learning FC segmentation method demonstrated excellent performance, generalizability, and reproducibility on multi-center datasets. It will be useful for multiple research purposes and potentially for planning stent deployments that avoid placing a stent edge over an FC.
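Augmenting raw (r, θ) IVOCT frames "by changing θ" corresponds to a circular shift along the angular axis, optionally combined with intensity and noise jitter. A minimal numpy sketch; the axis convention and jitter ranges are assumptions, not the authors' settings.

```python
import numpy as np

def augment_polar_frame(frame, rng):
    """Augment a raw (r, theta) IVOCT frame: circular shift along the angular
    axis plus mild intensity scaling and additive Gaussian noise."""
    n_theta = frame.shape[1]                         # columns = angular samples (assumed)
    shift = rng.integers(0, n_theta)
    out = np.roll(frame, shift, axis=1)              # rotation in the polar domain
    out = out * rng.uniform(0.9, 1.1)                # intensity jitter
    out = out + rng.normal(0, 0.01, size=out.shape)  # noise jitter
    return out

rng = np.random.default_rng(3)
frame = np.random.rand(512, 360)                     # 512 radial x 360 angular samples
augmented = augment_polar_frame(frame, rng)
print(augmented.shape)
```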

PMID:38388637 | DOI:10.1038/s41598-024-55120-7

Categories: Literature Watch

An adaptive adjacency matrix-based graph convolutional recurrent network for air quality prediction

Thu, 2024-02-22 06:00

Sci Rep. 2024 Feb 22;14(1):4408. doi: 10.1038/s41598-024-55060-2.

ABSTRACT

In recent years, air pollution has become increasingly serious and poses a great threat to human health. Timely and accurate air quality prediction is crucial for air pollution early warning and control. Although data-driven air quality prediction methods are promising, capturing the spatial-temporal correlations of air pollutants when designing effective predictors remains challenging. To address this issue, a novel model called the adaptive adjacency matrix-based graph convolutional recurrent network (AAMGCRN) is proposed in this study. The model feeds Point of Interest (POI) data and meteorological data into a fully connected neural network to learn the weights of the adjacency matrix, thereby constructing a self-ringing adjacency matrix, and then passes the pollutant data together with this matrix to a Graph Convolutional Network (GCN) unit. The GCN unit is embedded into LSTM units to learn spatio-temporal dependencies, and temporal features are further extracted using the Long Short-Term Memory (LSTM) network. Finally, the outputs of these two components are merged and air quality predictions are generated through a hidden layer. To evaluate the performance of the model, we conducted multi-step predictions of the hourly concentrations of PM2.5, PM10, and O3 at the Fangshan, Tiantan, and Dongsi monitoring stations in Beijing. The experimental results show that our method achieves better prediction performance than other deep learning baseline models. In general, we designed a novel air quality prediction method and effectively addressed the shortcomings of existing studies in learning the spatio-temporal correlations of air pollutants. This method can provide more accurate air quality predictions and is expected to support public health protection and governmental environmental decision-making.
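The core idea of learning adjacency weights from POI and meteorological features and then using them in a graph-convolution step over pollutant readings can be sketched in a few lines of numpy. The layer sizes, sigmoid weighting, and self-loop handling below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_stations, d_site, d_poll = 3, 6, 4

# Per-station POI + meteorological features, and current pollutant readings.
site_feats = rng.random((n_stations, d_site))
pollutants = rng.random((n_stations, d_poll))

# Fully connected scoring of every station pair -> learned adjacency weights.
W_pair = rng.normal(size=(2 * d_site, 1))
pairs = np.concatenate(
    [np.repeat(site_feats, n_stations, axis=0),
     np.tile(site_feats, (n_stations, 1))], axis=1)
A = 1.0 / (1.0 + np.exp(-(pairs @ W_pair))).reshape(n_stations, n_stations)
A = A + np.eye(n_stations)                      # add self-loops ("self-ringing")

# One graph-convolution step: normalized propagation of pollutant features.
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt
W_gcn = rng.normal(size=(d_poll, 8))
H = np.maximum(A_norm @ pollutants @ W_gcn, 0)  # ReLU(GCN output), fed to the LSTM cell
print(H.shape)  # (n_stations, 8)
```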

PMID:38388632 | DOI:10.1038/s41598-024-55060-2

Categories: Literature Watch

Dynamic educational recommender system based on Improved LSTM neural network

Thu, 2024-02-22 06:00

Sci Rep. 2024 Feb 22;14(1):4381. doi: 10.1038/s41598-024-54729-y.

ABSTRACT

Nowadays, virtual learning environments have become widespread, avoiding time and space constraints and sharing high-quality learning resources. As a result of human-computer interaction, student behaviors are recorded instantly. This work aims to design an educational recommender system that reflects each individual's interests in educational resources. The system is evaluated on users' clicks and downloads of resources so that appropriate resources can be suggested to them. In online tutorials, besides the problem of choosing the right resource, we face the challenge of accounting for the diversity of users' preferences and tastes, especially their short-term interests in the near future, at the beginning of a session. We assume that the user's interests consist of two parts: (1) the user's long-term interests, which include the user's constant interests based on the history of the user's dynamic activities, and (2) the user's short-term interests, which indicate the user's current interests. Owing to the use of BiLSTM networks and their gradual learning capability, the proposed model adapts to learners' behavioral changes. With an average accuracy of 0.9978 and a loss of 0.0051, it offers more appropriate recommendations than similar works.
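One way to realize the long-term/short-term split over a BiLSTM-encoded interaction history is to summarize the whole sequence for long-term interests and only the most recent steps for short-term interests before scoring candidate resources. The sketch below is a toy stand-in under assumed dimensions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class InterestModel(nn.Module):
    """Toy BiLSTM recommender: a long-term vector summarizes the whole
    click/download history, a short-term vector uses only the last steps,
    and their concatenation scores the candidate learning resources."""
    def __init__(self, n_items=1000, emb=32, hidden=64, short_window=5):
        super().__init__()
        self.short_window = short_window
        self.embed = nn.Embedding(n_items, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(4 * hidden, n_items)

    def forward(self, history):                      # history: (B, T) item ids
        h, _ = self.bilstm(self.embed(history))      # (B, T, 2*hidden)
        long_term = h.mean(dim=1)                    # whole-history summary
        short_term = h[:, -self.short_window:].mean(dim=1)  # recent behavior
        return self.score(torch.cat([long_term, short_term], dim=1))

model = InterestModel()
history = torch.randint(0, 1000, (2, 20))            # two users, 20 interactions each
print(model(history).shape)                          # (2, 1000) scores over resources
```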

PMID:38388560 | DOI:10.1038/s41598-024-54729-y

Categories: Literature Watch

Clinical Evaluation of Deep Learning for Tumor Delineation on (18)F-FDG PET/CT of Head and Neck Cancer

Thu, 2024-02-22 06:00

J Nucl Med. 2024 Feb 22:jnumed.123.266574. doi: 10.2967/jnumed.123.266574. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) may decrease 18F-FDG PET/CT-based gross tumor volume (GTV) delineation variability and automate tumor-volume-derived image biomarker extraction. Hence, we aimed to identify and evaluate promising state-of-the-art deep learning methods for head and neck cancer (HNC) PET GTV delineation. Methods: We trained and evaluated deep learning methods using retrospectively included scans of HNC patients referred for radiotherapy between January 2014 and December 2019 (ISRCTN16907234). We used 3 test datasets: an internal set to compare methods, another internal set to compare AI-to-expert variability and expert interobserver variability (IOV), and an external set to compare internal and external AI-to-expert variability. Expert PET GTVs were used as the reference standard. Our benchmark IOV was measured using the PET GTV of 6 experts. The primary outcome was the Dice similarity coefficient (DSC). ANOVA was used to compare methods, a paired t test was used to compare AI-to-expert variability and expert IOV, an unpaired t test was used to compare internal and external AI-to-expert variability, and post hoc Bland-Altman analysis was used to evaluate biomarker agreement. Results: In total, 1,220 18F-FDG PET/CT scans of 1,190 patients (mean age ± SD, 63 ± 10 y; 858 men) were included, and 5 deep learning methods were trained using 5-fold cross-validation (n = 805). The nnU-Net method achieved the highest similarity (DSC, 0.80 [95% CI, 0.77-0.86]; n = 196). We found no evidence of a difference between expert IOV and AI-to-expert variability (DSC, 0.78 for AI vs. 0.82 for experts; mean difference of 0.04 [95% CI, -0.01 to 0.09]; P = 0.12; n = 64). We found no evidence of a difference between the internal and external AI-to-expert variability (DSC, 0.80 internally vs. 0.81 externally; mean difference of 0.004 [95% CI, -0.05 to 0.04]; P = 0.87; n = 125). PET GTV-derived biomarkers of AI were in good agreement with experts. Conclusion: Deep learning can be used to automate 18F-FDG PET/CT tumor-volume-derived imaging biomarkers, and the deep-learning-based volumes have the potential to assist clinical tumor volume delineation in radiation oncology.
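The Bland-Altman agreement analysis used for the PET GTV-derived biomarkers follows the standard recipe: the bias is the mean difference and the limits of agreement are bias ± 1.96 SD of the differences. A short numpy sketch with invented AI and expert volumes:

```python
import numpy as np

# Hypothetical tumor volumes (mL): expert reference vs. deep-learning delineation.
expert = np.array([12.4, 30.1, 7.8, 55.2, 21.0, 14.9])
ai     = np.array([11.9, 31.0, 8.2, 53.8, 21.7, 14.1])

diff = ai - expert
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)           # half-width of the limits of agreement
print(f"bias = {bias:.2f} mL, 95% limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}] mL")
```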

PMID:38388516 | DOI:10.2967/jnumed.123.266574

Categories: Literature Watch

SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy

Thu, 2024-02-22 06:00

Comput Med Imaging Graph. 2024 Feb 10;113:102353. doi: 10.1016/j.compmedimag.2024.102353. Online ahead of print.

ABSTRACT

Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation error for MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic details from the truncated MR images. To enable anatomy compensation, we expand input channels of the CT generator by including a body mask and introduce a truncation loss between sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and transformed to daily MR images by rigid registration as another input for our SC-GAN in addition to the MR images. The truncation loss was constructed by implementing either an auto-segmentor or an edge detector to penalize the difference in body outlines between sCT and real CT. The experimental results show that our SC-GAN achieved much improved accuracy of sCT generation in both truncated and untruncated regions compared to the original cycleGAN and conditional GAN methods.
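The truncation loss penalizes differences in the body outline between the sCT and the real CT. One simple realization, assumed here purely for illustration, is an L1 penalty between the edge maps of the two body masks.

```python
import numpy as np
from scipy import ndimage

def body_outline(mask):
    """Binary body outline: the mask minus its morphological erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def truncation_loss(sct_body_mask, ct_body_mask):
    """Mean absolute difference between the body outlines of sCT and real CT."""
    return np.abs(body_outline(sct_body_mask).astype(float)
                  - body_outline(ct_body_mask).astype(float)).mean()

# Toy 2D slices: the sCT body is slightly narrower than the real CT body,
# mimicking a truncated field of view.
ct_mask  = np.zeros((64, 64), dtype=bool); ct_mask[8:56, 4:60]   = True
sct_mask = np.zeros((64, 64), dtype=bool); sct_mask[8:56, 10:54] = True
print(f"truncation loss = {truncation_loss(sct_mask, ct_mask):.4f}")
```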

PMID:38387114 | DOI:10.1016/j.compmedimag.2024.102353

Categories: Literature Watch

Prediction of electrical properties of GAAFET based on integrated learning model

Thu, 2024-02-22 06:00

Nanotechnology. 2024 Feb 22. doi: 10.1088/1361-6528/ad2c52. Online ahead of print.

ABSTRACT

As device feature sizes continue to decrease and fin field-effect transistors (FinFETs) reach their physical limits, gate-all-around field-effect transistors (GAAFETs) have emerged; their surrounding-gate structure provides a larger gate control area and stackable channels, better suppressing second-order effects such as short-channel effects. Traditional methods for studying the electrical characteristics of devices are mostly based on technology computer-aided design (TCAD), which is time-consuming and inefficient and therefore not conducive to developing new devices. Deep learning (DL) and machine learning (ML) have been widely used in many fields in recent years. In this paper, we propose an integrated learning model that combines the advantages of DL and ML to overcome many of the problems of traditional methods. This integrated learning model predicts the direct-current characteristics, capacitance characteristics, and electrical parameters of GAAFETs better than DL or ML methods alone, with a linear regression coefficient (R2) greater than 0.99 and a very small root mean square error (RMSE). The proposed integrated learning model achieves fast and accurate prediction of GAAFET electrical characteristics, providing a new approach to device and circuit simulation and characteristic prediction in microelectronics.
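An "integrated learning" model that combines ML and neural-network regressors can be sketched as a stacking ensemble evaluated by R2 and RMSE; the base learners, synthetic features, and target below are placeholders, not the paper's GAAFET model or TCAD data.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical device parameters (gate length, nanosheet width, doping, ...) and
# a synthetic electrical target (e.g. on-current) standing in for TCAD results.
X = rng.random((400, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] + 0.05 * rng.normal(size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=5)

# Stacking ensemble: gradient boosting (ML) + small MLP (DL-like) base learners,
# combined by a ridge meta-regressor.
model = StackingRegressor(
    estimators=[("gbr", GradientBoostingRegressor(random_state=5)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32),
                                     max_iter=3000, random_state=5))],
    final_estimator=RidgeCV())
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.3f}, "
      f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.4f}")
```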

PMID:38387100 | DOI:10.1088/1361-6528/ad2c52

Categories: Literature Watch

SwinUNet: a multiscale feature learning approach to cardiovascular magnetic resonance parametric mapping for myocardial tissue characterization

Thu, 2024-02-22 06:00

Physiol Meas. 2024 Feb 22. doi: 10.1088/1361-6579/ad2c15. Online ahead of print.

ABSTRACT

OBJECTIVE: Cardiovascular magnetic resonance (CMR) can measure T1 and T2 relaxation times for myocardial tissue characterization. However, the CMR procedure for T1/T2 parametric mapping is time-consuming, making it challenging to scan heart patients routinely in clinical practice. This study aims to accelerate CMR parametric mapping with deep learning.

APPROACH: A deep-learning model, SwinUNet, was developed to accelerate T1/T2 mapping. SwinUNet used a convolutional UNet and a Swin transformer to form a hierarchical 3D computation structure, allowing CMR images to be analyzed spatially and temporally with multiscale feature learning. A comparative study was conducted between SwinUNet and an existing deep-learning model, MyoMapNet, which only used temporal analysis for parametric mapping. The T1/T2 mapping performance was evaluated globally using mean absolute error (MAE) and structural similarity index measure (SSIM). The clinical T1/T2 indices for characterizing the left-ventricular myocardial walls were also calculated and evaluated using correlation and Bland-Altman analysis.

MAIN RESULTS: We performed accelerated T1 mapping with ≤4 heartbeats and T2 mapping with 2 heartbeats, in reference to the clinical standard, which requires 11 heartbeats for T1 mapping and 3 heartbeats for T2 mapping. SwinUNet performed well in all the experiments (MAE < 50 ms, SSIM > 0.8, correlation > 0.75, and Bland-Altman agreement limits < 100 ms for T1 mapping; MAE < 1 ms, SSIM > 0.9, correlation > 0.95, and Bland-Altman agreement limits < 1.5 ms for T2 mapping). When the maximal acceleration was used (2 heartbeats), SwinUNet outperformed MyoMapNet and gave measurement accuracy similar to the clinical standard.

SIGNIFICANCE: SwinUNet offers an optimal solution to CMR parametric mapping for assessing myocardial diseases quantitatively in clinical cardiology.
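The global MAE and SSIM evaluation of reconstructed parametric maps follows standard definitions; the sketch below computes both for a pair of synthetic T1 maps (the map values and noise level are invented for illustration).

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical T1 maps (ms): reference (11-heartbeat) vs. accelerated reconstruction.
t1_ref  = 1000 + 200 * np.random.rand(128, 128)
t1_fast = t1_ref + np.random.normal(0, 20, size=(128, 128))

mae  = np.abs(t1_fast - t1_ref).mean()
ssim = structural_similarity(t1_ref, t1_fast,
                             data_range=t1_ref.max() - t1_ref.min())
print(f"MAE = {mae:.1f} ms, SSIM = {ssim:.3f}")
```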

PMID:38387052 | DOI:10.1088/1361-6579/ad2c15

Categories: Literature Watch

Algorithmic detection of sleep-disordered breathing using respiratory signals: a systematic review

Thu, 2024-02-22 06:00

Physiol Meas. 2024 Feb 22. doi: 10.1088/1361-6579/ad2c13. Online ahead of print.

ABSTRACT

Sleep-disordered breathing (SDB) poses health risks linked to hypertension, cardiovascular disease, and diabetes. However, the time-consuming and costly standard diagnostic method, polysomnography (PSG), limits its wide adoption and leads to underdiagnosis. To tackle this, cost-effective algorithms using single-lead signals (such as respiratory, blood oxygen, and electrocardiogram signals) have emerged. Despite respiratory signals being preferred for SDB assessment, a comprehensive review addressing their algorithmic scope and performance has been lacking. This paper systematically reviews the 2012-2022 literature, covering signal sources, processing, feature extraction, classification, and application, aiming to bridge this gap and provide future research references.

METHODS: This systematic review followed the registered PROSPERO protocol (CRDXXXXXXX), initially screening 342 papers, with 32 studies meeting the data extraction criteria.

RESULTS: Respiratory signal sources include nasal airflow (NAF), oronasal airflow (OAF), and respiratory movement-related signals such as thoracic respiratory effort (TRE) and abdominal respiratory effort (ARE). Classification techniques include threshold rule-based methods (8 studies), machine learning (ML) models (13 studies), and deep learning (DL) models (11 studies). The NAF-based algorithm achieved the highest average accuracy at 94.11%, surpassing 78.19% for other signals. Hypopnea detection sensitivity with single-source respiratory signals remained modest, peaking at 73.34%. The TRE and ARE signals proved reliable in identifying different types of SDB because distinct respiratory disorders exhibit different patterns of chest and abdominal motion.

CONCLUSIONS: Multiple detection algorithms have been widely applied for SDB detection, and their accuracy is closely related to factors such as signal source, signal processing, feature selection, and model selection.
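A threshold rule-based detector of the kind counted among the reviewed methods reduces the nasal airflow signal to a breathing-amplitude envelope and flags stretches where the amplitude drops below a fraction of its baseline for long enough. The thresholds, window lengths, and simulated signal below are illustrative only and are not values taken from the review or from clinical scoring rules.

```python
import numpy as np

def detect_airflow_drops(airflow, fs, drop_frac=0.7, min_dur_s=10.0, win_s=2.0):
    """Flag candidate events: the amplitude envelope falls below
    (1 - drop_frac) of the recording's median amplitude for >= min_dur_s."""
    win = int(win_s * fs)
    env = np.array([np.ptp(airflow[i:i + win])                 # peak-to-peak amplitude
                    for i in range(0, len(airflow) - win, win)])
    low = env < (1.0 - drop_frac) * np.median(env)
    events, start = [], None
    for i, flag in enumerate(np.append(low, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * win_s >= min_dur_s:
                events.append((start * win_s, i * win_s))      # (onset_s, offset_s)
            start = None
    return events

fs = 32                                                        # Hz, assumed sampling rate
t = np.arange(0, 120, 1 / fs)
airflow = np.sin(2 * np.pi * 0.25 * t)                         # normal breathing, 15/min
airflow[40 * fs:55 * fs] *= 0.1                                # simulated 15-s event
print(detect_airflow_drops(airflow, fs))
```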

PMID:38387048 | DOI:10.1088/1361-6579/ad2c13

Categories: Literature Watch

Semi-supervised auto-segmentation method for pelvic organ-at-risk in magnetic resonance images based on deep-learning

Thu, 2024-02-22 06:00

J Appl Clin Med Phys. 2024 Feb 22:e14296. doi: 10.1002/acm2.14296. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: In radiotherapy, magnetic resonance (MR) imaging offers higher soft-tissue contrast than computed tomography (CT) and does not use ionizing radiation. However, the manual annotation required by deep learning-based automatic organ-at-risk (OAR) delineation algorithms is expensive, making the collection of large, high-quality annotated datasets a challenge. Therefore, we propose a low-cost semi-supervised OAR segmentation method using a small number of annotated pelvic MR images.

METHODS: We trained a deep learning-based segmentation model using 116 sets of MR images from 116 patients. The bladder, femoral heads, rectum, and small intestine were selected as OAR regions. To generate the training set, we utilized a semi-supervised method and ensemble learning techniques. Additionally, we employed a post-processing algorithm to correct the self-annotation data. Both 2D and 3D auto-segmentation networks were evaluated for their performance. Furthermore, we evaluated the performance of the semi-supervised method with 50 labeled cases and with only 10 labeled cases.
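The semi-supervised recipe, generating self-annotations for unlabeled scans with an ensemble and keeping only confident voxels after post-processing, can be sketched abstractly as below; the confidence rule, shapes, and toy data are assumptions, and model training itself is stubbed out.

```python
import numpy as np

def ensemble_pseudo_labels(prob_maps, agreement_thresh=0.8):
    """Turn ensemble soft predictions on an unlabeled volume into a self-annotation.

    prob_maps : (M, C, D, H, W) softmax outputs of M ensemble members for C classes
    returns   : (pseudo_label, keep_mask) where keep_mask marks voxels on which the
                averaged ensemble is confident enough to reuse for training.
    """
    mean_prob = prob_maps.mean(axis=0)                 # (C, D, H, W)
    pseudo_label = mean_prob.argmax(axis=0)            # (D, H, W) class indices
    keep_mask = mean_prob.max(axis=0) >= agreement_thresh
    return pseudo_label, keep_mask

# Toy example: 3 ensemble members, 5 classes (background + 4 OARs), tiny volume.
rng = np.random.default_rng(6)
probs = rng.dirichlet(np.ones(5), size=(3, 8, 16, 16))   # (M, D, H, W, C)
probs = np.moveaxis(probs, -1, 1)                        # -> (M, C, D, H, W)
label, keep = ensemble_pseudo_labels(probs)
print(label.shape, f"confident voxels: {keep.mean():.1%}")
```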

RESULTS: The Dice similarity coefficients (DSCs) of the bladder, femoral heads, rectum, and small intestine between the segmentation results and the reference masks were 0.954, 0.984, 0.908, and 0.852 using only the self-annotation and post-processing methods with the 2D segmentation model. The DSCs of the corresponding OARs were 0.871, 0.975, 0.975, 0.783, and 0.724 using the 3D segmentation network, and 0.896, 0.984, 0.890, and 0.828 using the 2D segmentation network with the conventional supervised method.

CONCLUSION: The outcomes of our study demonstrate that it is possible to train a multi-OAR segmentation model using a small number of annotated samples and additional unlabeled data. To effectively annotate the dataset, ensemble learning and post-processing methods were employed. Additionally, when dealing with anisotropy and limited sample sizes, the 2D model outperformed the 3D model.

PMID:38386963 | DOI:10.1002/acm2.14296

Categories: Literature Watch
