Deep learning

A new fusion neural network model and credit card fraud identification

Mon, 2024-10-28 06:00

PLoS One. 2024 Oct 28;19(10):e0311987. doi: 10.1371/journal.pone.0311987. eCollection 2024.

ABSTRACT

Credit card fraud identification is an important issue in risk prevention and control for banks and financial institutions. To establish an efficient credit card fraud identification model, this article studied the relevant factors that affect fraud identification. A credit card fraud identification model based on neural networks was constructed and examined in depth. First, the layers of the neural network were deepened to improve the prediction accuracy of the model; second, the hidden layer width of the neural network was increased for the same purpose. This article proposes a new fusion neural network model that combines deep neural networks and wide neural networks, and applies the model to credit card fraud identification. The characteristic of this model is that both its prediction accuracy and its F1 score are relatively high. Finally, the model was trained using stochastic gradient descent. On the test set, the proposed method achieved an accuracy of 96.44% and an F1 score of 96.17%, demonstrating good fraud recognition performance. In comparisons, the proposed method outperformed machine learning, ensemble learning, and deep learning models.
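The deep-plus-wide fusion idea described above can be sketched in a few lines. The toy example below is an illustration only: the layer sizes, synthetic data, learning rate, and numerical-gradient training are assumptions, not the paper's implementation. It concatenates a two-layer "deep" branch with a single broad "wide" branch and trains a fused sigmoid head by gradient descent on binary cross-entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in, deep=8, wide=32):
    # Small random weights for both branches and the fusion head.
    r = lambda *s: rng.normal(0.0, 0.1, s)
    return {"W1": r(n_in, deep), "W2": r(deep, deep),  # deep branch
            "Ww": r(n_in, wide),                       # wide branch
            "Wo": r(deep + wide, 1)}                   # fusion head

def forward(p, X):
    relu = lambda z: np.maximum(z, 0.0)
    h_deep = relu(relu(X @ p["W1"]) @ p["W2"])   # two stacked hidden layers
    h_wide = relu(X @ p["Ww"])                   # one broad hidden layer
    z = np.concatenate([h_deep, h_wide], axis=1) @ p["Wo"]
    return 1.0 / (1.0 + np.exp(-z))              # sigmoid fraud probability

def bce(p, X, y):
    yh = forward(p, X).clip(1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(yh) + (1 - y) * np.log(1 - yh)))

def numgrad(p, X, y, eps=1e-5):
    # Central-difference gradients: slow but dependency-free for a sketch.
    g = {}
    for k, W in p.items():
        G = np.zeros_like(W)
        it = np.nditer(W, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = W[i]
            W[i] = old + eps; fp = bce(p, X, y)
            W[i] = old - eps; fm = bce(p, X, y)
            W[i] = old
            G[i] = (fp - fm) / (2.0 * eps)
        g[k] = G
    return g

# Synthetic "transactions": the label depends on the first two features.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

params = init_params(4)
loss_history = [bce(params, X, y)]
for _ in range(30):
    grads = numgrad(params, X, y)
    for k in params:
        params[k] -= 0.2 * grads[k]   # plain gradient-descent update
    loss_history.append(bce(params, X, y))
```

A real implementation would compute gradients analytically (or with a framework's autograd) and use mini-batch stochastic gradient descent as in the paper; the numerical gradient here only keeps the sketch self-contained.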

PMID:39466806 | DOI:10.1371/journal.pone.0311987

Categories: Literature Watch

An inherently interpretable deep learning model for local explanations using visual concepts

Mon, 2024-10-28 06:00

PLoS One. 2024 Oct 28;19(10):e0311879. doi: 10.1371/journal.pone.0311879. eCollection 2024.

ABSTRACT

Over the past decade, deep learning has become the leading approach for various computer vision tasks and decision support systems. However, the opaque nature of deep learning models raises significant concerns about their fairness, reliability, and the underlying inferences they make. Many existing methods attempt to approximate the relationship between low-level input features and outcomes. However, humans tend to understand and reason based on high-level concepts rather than low-level input features. To bridge this gap, several concept-based interpretable methods have been developed. Most of these methods compute the importance of each discovered concept for a specific class. However, they often fail to provide local explanations. Additionally, these approaches typically rely on labeled concepts or learn directly from datasets, leading to the extraction of irrelevant concepts. They also tend to overlook the potential of these concepts to interpret model predictions effectively. This research proposes a two-stream model called the Cross-Attentional Fast/Slow Thinking Network (CA-SoftNet) to address these issues. The model is inspired by dual-process theory and integrates two key components: a shallow convolutional neural network (sCNN) as System-I for rapid, implicit pattern recognition and a cross-attentional concept memory network as System-II for transparent, controllable, and logical reasoning. Our evaluation across diverse datasets demonstrates the model's competitive accuracy, achieving 85.6%, 83.7%, 93.6%, and 90.3% on CUB 200-2011, Stanford Cars, ISIC 2016, and ISIC 2017, respectively. This performance outperforms existing interpretable models and is comparable to non-interpretable counterparts. Furthermore, our novel concept extraction method facilitates identifying and selecting salient concepts. These concepts are then used to generate concept-based local explanations that align with human thinking. Additionally, the model's ability to share similar concepts across distinct classes, such as in fine-grained classification, enhances its scalability for large datasets. This feature also induces human-like cognition and reasoning within the proposed framework.

PMID:39466770 | DOI:10.1371/journal.pone.0311879

Categories: Literature Watch

Discovery of Highly Bioactive Peptides through Hierarchical Structural Information and Molecular Dynamics Simulations

Mon, 2024-10-28 06:00

J Chem Inf Model. 2024 Oct 28. doi: 10.1021/acs.jcim.4c01006. Online ahead of print.

ABSTRACT

Peptide drugs play an essential role in modern therapeutics, but the computational design of these molecules is hindered by several challenges. Traditional methods like molecular docking and molecular dynamics (MD) simulation, as well as recent deep learning approaches, often face limitations related to computational resource demands, complex binding affinity assessments, extensive data requirements, and poor model interpretability. Here, we introduce PepHiRe, an innovative methodology that utilizes the hierarchical structural information in peptide sequences and employs a novel strategy called Ladderpath, rooted in algorithmic information theory, to rapidly generate novel peptides while enhancing the efficiency and clarity of the design process. We applied PepHiRe to develop BH3-like peptide inhibitors targeting myeloid cell leukemia-1, a protein associated with various cancers. By analyzing just eight known bioactive BH3 peptide sequences, PepHiRe effectively derived a hierarchy of subsequences used to create new BH3-like peptides. These peptides underwent screening through MD simulations, leading to the selection of five candidates for synthesis and subsequent in vitro testing. Experimental results demonstrated that these five peptides possess high inhibitory activity, with IC50 values ranging from 28.13 ± 7.93 to 167.42 ± 22.15 nM. Our study explores a white-box, model-driven technique and a structured screening pipeline for identifying and generating novel peptides with potential bioactivity.

PMID:39466714 | DOI:10.1021/acs.jcim.4c01006

Categories: Literature Watch

Fully Automated Region-Specific Human-Perceptive-Equivalent Image Quality Assessment: Application to 18F-FDG PET Scans

Mon, 2024-10-28 06:00

Clin Nucl Med. 2024 Oct 21. doi: 10.1097/RLU.0000000000005526. Online ahead of print.

ABSTRACT

INTRODUCTION: We propose a fully automated framework to conduct a region-wise image quality assessment (IQA) on whole-body 18F-FDG PET scans. This framework (1) can be valuable in daily clinical image acquisition procedures to instantly recognize low-quality scans for potential rescanning and/or image reconstruction, and (2) can make a significant impact in dataset collection for the development of artificial intelligence-driven 18F-FDG PET analysis models by rejecting low-quality images and those presenting with artifacts, toward building clean datasets.

PATIENTS AND METHODS: Two experienced nuclear medicine physicians separately evaluated the quality of 174 18F-FDG PET images from 87 patients, for each body region, based on a 5-point Likert scale. The body regions included the following: (1) the head and neck, including the brain, (2) the chest, (3) the chest-abdomen interval (diaphragmatic region), (4) the abdomen, and (5) the pelvis. Intrareader and interreader reproducibility of the quality scores were calculated using 39 randomly selected scans from the dataset. Using a binarized classification, images were dichotomized into low quality versus high quality for physician quality scores ≤3 versus >3, respectively. Taking the 18F-FDG PET/CT scans as input, our proposed fully automated framework applies 2 deep learning (DL) models to the CT images to perform region identification and whole-body contour extraction (excluding extremities), then classifies PET regions as low or high quality. For classification, 2 mainstream artificial intelligence-driven approaches were investigated: machine learning (ML) from radiomic features and DL. All models were trained and evaluated on the scores attributed by each physician, and the average of the scores was reported. Performance was evaluated on the same test dataset for both the radiomics-ML and DL models using the area under the curve, accuracy, sensitivity, and specificity, and compared using the DeLong test, with p values <0.05 regarded as statistically significant.
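The binarization rule and the classification metrics used above are standard; a minimal sketch (generic definitions, not the study's evaluation code, with the example scores invented for illustration):

```python
def binarize(scores, thresh=3):
    # Likert quality scores <= thresh are "low quality" (positive class, 1),
    # scores > thresh are "high quality" (negative class, 0).
    return [1 if s <= thresh else 0 for s in scores]

def confusion_metrics(y_true, y_pred):
    # Accuracy, sensitivity, and specificity from the 2x2 confusion table.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```

For example, physician scores [2, 5, 3, 4, 1, 4] binarize to [1, 0, 1, 0, 1, 0], and a model's predictions can then be scored against that reference.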

RESULTS: In the head and neck, chest, chest-abdomen interval, abdomen, and pelvis regions, the best models achieved area under the curve, accuracy, sensitivity, and specificity of [0.97, 0.95, 0.96, and 0.95], [0.85, 0.82, 0.87, and 0.76], [0.83, 0.76, 0.68, and 0.80], [0.73, 0.72, 0.64, and 0.77], and [0.72, 0.68, 0.70, and 0.67], respectively. In all regions, models achieved their highest performance when developed on the quality scores with higher intrareader reproducibility. Comparison of the DL and radiomics-ML models did not show any statistically significant differences, though the DL models showed overall improved trends.

CONCLUSIONS: We developed a fully automated and human-perceptive equivalent model to conduct region-wise IQA over 18F-FDG PET images. Our analysis emphasizes the necessity of developing separate models for body regions and performing data annotation based on multiple experts' consensus in IQA studies.

PMID:39466652 | DOI:10.1097/RLU.0000000000005526

Categories: Literature Watch

Large vessel vasculitis evaluation by CTA: impact of deep-learning reconstruction and "dark blood" technique

Mon, 2024-10-28 06:00

Insights Imaging. 2024 Oct 28;15(1):260. doi: 10.1186/s13244-024-01843-0.

ABSTRACT

OBJECTIVES: To assess the performance of the "dark blood" (DB) technique, deep-learning reconstruction (DLR), and their combination on aortic images for large-vessel vasculitis (LVV) patients.

MATERIALS AND METHODS: Fifty patients diagnosed with LVV scheduled for aortic computed tomography angiography (CTA) were prospectively recruited in a single center. Arterial and delayed-phase images of the aorta were reconstructed using the hybrid iterative reconstruction (HIR) and DLR algorithms. HIR or DLR DB image sets were generated using corresponding arterial and delayed-phase image sets based on a "contrast-enhancement-boost" technique. Quantitative parameters of aortic wall image quality were evaluated.

RESULTS: Compared with the arterial-phase image sets, the DB image sets showed decreased image noise and increased signal-to-noise ratio (SNR) and outer-wall contrast-to-noise ratio (CNR_outer) (all p < 0.05). Compared with the delayed-phase image sets, the DB image sets combined with the DLR algorithm showed equivalent noise (p > 0.99) and increased SNR (p < 0.001), CNR_outer (p = 0.006), and CNR_inner (p < 0.001). For overall image quality, the scores of the DB image sets were significantly higher than those of the delayed-phase image sets (all p < 0.001). Image sets reconstructed with the DLR algorithm received significantly better qualitative scores (all p < 0.05) in all three phases. The image quality improvement from the DLR algorithm was most prominent for the DB image sets.

CONCLUSION: DB CTA improves image quality and provides better visualization of the aortic vessel wall in LVV patients. The DB technique reconstructed with the DLR algorithm achieved the best overall performance compared with the other image sequences.

CRITICAL RELEVANCE STATEMENT: Deep-learning-based "dark blood" images improve vessel wall image quality and boundary visualization.

KEY POINTS: Dark blood CTA improves image quality and provides better aortic wall visualization. Deep-learning reconstruction yielded higher image quality and subjective scores than HIR. The combination of dark blood and deep-learning reconstruction achieved the best overall performance.

PMID:39466556 | DOI:10.1186/s13244-024-01843-0

Categories: Literature Watch

Comparison of image quality and lesion conspicuity between conventional and deep learning reconstruction in gadoxetic acid-enhanced liver MRI

Mon, 2024-10-28 06:00

Insights Imaging. 2024 Oct 28;15(1):257. doi: 10.1186/s13244-024-01825-2.

ABSTRACT

OBJECTIVE: To compare the image quality and lesion conspicuity of conventional vs deep learning (DL)-based reconstructed three-dimensional T1-weighted images in gadoxetic acid-enhanced liver magnetic resonance imaging (MRI).

METHODS: This prospective study (NCT05182099) enrolled participants scheduled for gadoxetic acid-enhanced liver MRI due to suspected focal liver lesions (FLLs) who provided signed informed consent. A liver MRI was conducted using a 3-T scanner. T1-weighted images were reconstructed using both conventional and DL-based (AIRTM Recon DL 3D) reconstruction algorithms. Three radiologists independently reviewed the image quality and lesion conspicuity on a 5-point scale.

RESULTS: Fifty participants (male = 36, mean age 62 ± 11 years) were included for image analysis. The DL-based reconstruction showed significantly higher image quality than conventional images in all phases (3.71-4.40 vs 3.37-3.99, p < 0.001 for all), as well as significantly less noise and ringing artifacts than conventional images (p < 0.05 for all), while also showing significantly altered image texture (p < 0.001 for all). Lesion conspicuity was significantly higher in DL-reconstructed images than in conventional images in the arterial phase (2.15 [95% confidence interval: 1.78, 2.52] vs 2.03 [1.65, 2.40], p = 0.036), but no significant difference was observed in the portal venous phase and hepatobiliary phase (p > 0.05 for all). There was no significant difference in the figure-of-merit (0.728 in DL vs 0.709 in conventional image, p = 0.474).

CONCLUSION: DL reconstruction provided higher-quality three-dimensional T1-weighted imaging than conventional reconstruction in gadoxetic acid-enhanced liver MRI.

CRITICAL RELEVANCE STATEMENT: DL reconstruction of 3D T1-weighted images improves image quality and arterial phase lesion conspicuity in gadoxetic acid-enhanced liver MRI compared to conventional reconstruction.

KEY POINTS: DL reconstruction is feasible for 3D T1-weighted images across different spatial resolutions and phases. DL reconstruction showed superior image quality with reduced noise and ringing artifacts. Hepatic anatomic structures were more conspicuous on DL-reconstructed images.

PMID:39466542 | DOI:10.1186/s13244-024-01825-2

Categories: Literature Watch

Implications of Big Data Analytics, AI, Machine Learning, and Deep Learning in the Health Care System of Bangladesh: Scoping Review

Mon, 2024-10-28 06:00

J Med Internet Res. 2024 Oct 28;26:e54710. doi: 10.2196/54710.

ABSTRACT

BACKGROUND: The rapid advancement of digital technologies, particularly in big data analytics (BDA), artificial intelligence (AI), machine learning (ML), and deep learning (DL), is reshaping the global health care system, including in Bangladesh. The increased adoption of these technologies in health care delivery within Bangladesh has sparked their integration into health care and public health research, resulting in a noticeable surge in related studies. However, a critical gap exists, as there is a lack of comprehensive evidence regarding the research landscape; regulatory challenges; use cases; and the application and adoption of BDA, AI, ML, and DL in the health care system of Bangladesh. This gap impedes the attainment of optimal results. As Bangladesh is a leading implementer of digital technologies, bridging this gap is urgent for the effective use of these advancing technologies.

OBJECTIVE: This scoping review aims to collate (1) the existing research in Bangladesh's health care system, using the aforementioned technologies and synthesizing their findings, and (2) the limitations faced by researchers in integrating the aforementioned technologies into health care research.

METHODS: MEDLINE (via PubMed), IEEE Xplore, Scopus, and Embase databases were searched to identify published research articles between January 1, 2000, and September 10, 2023, meeting the following inclusion criteria: (1) any study using any of the BDA, AI, ML, and DL technologies and health care and public health datasets for predicting health issues and forecasting any kind of outbreak; (2) studies primarily focusing on health care and public health issues in Bangladesh; and (3) original research articles published in peer-reviewed journals and conference proceedings written in English.

RESULTS: With the initial search, we identified 1653 studies. Following the inclusion and exclusion criteria and full-text review, 4.66% (77/1653) of the articles were finally included in this review. There was a substantial increase in studies over the last 5 years (2017-2023). Among the 77 studies, the majority (n=65, 84%) used ML models. A smaller proportion of studies incorporated AI (4/77, 5%), DL (7/77, 9%), and BDA (1/77, 1%) technologies. Among the reviewed articles, 52% (40/77) relied on primary data, while the remaining 48% (37/77) used secondary data. The primary research areas of focus were infectious diseases (15/77, 19%), noncommunicable diseases (23/77, 30%), child health (11/77, 14%), and mental health (9/77, 12%).

CONCLUSIONS: This scoping review highlights remarkable progress in leveraging BDA, AI, ML, and DL within Bangladesh's health care system. The observed surge in studies over the last 5 years underscores the increasing significance of AI and related technologies in health care research. Notably, most (65/77, 84%) studies focused on ML models, unveiling opportunities for advancements in predictive modeling. This review encapsulates the current state of technological integration and propels us into a promising era for the future of digital Bangladesh.

PMID:39466315 | DOI:10.2196/54710

Categories: Literature Watch

Breast cancer survival prediction using an automated mitosis detection pipeline

Mon, 2024-10-28 06:00

J Pathol Clin Res. 2024 Nov;10(6):e70008. doi: 10.1002/2056-4538.70008.

ABSTRACT

Mitotic count (MC) is the most common measure to assess tumor proliferation in breast cancer patients and is highly predictive of patient outcomes. It is, however, subject to inter- and intraobserver variation and reproducibility challenges that may hamper its clinical utility. In past studies, artificial intelligence (AI)-supported MC has been shown to correlate well with traditional MC on glass slides. Considering the potential of AI to improve reproducibility of MC between pathologists, we undertook the next validation step by evaluating the prognostic value of a fully automatic method to detect and count mitoses on whole slide images using a deep learning model. The model was developed in the context of the Mitosis Domain Generalization Challenge 2021 (MIDOG21) grand challenge and was expanded by a novel automatic area selector method to find the optimal mitotic hotspot and calculate the MC per 2 mm2. We employed this method on a breast cancer cohort with long-term follow-up from the University Medical Centre Utrecht (N = 912) and compared predictive values for overall survival of AI-based MC and light-microscopic MC, previously assessed during routine diagnostics. The MIDOG21 model was prognostically comparable to the original MC from the pathology report in uni- and multivariate survival analysis. In conclusion, a fully automated MC AI algorithm was validated in a large cohort of breast cancer with regard to retained prognostic value compared with traditional light-microscopic MC.
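The "optimal mitotic hotspot" step (finding the 2 mm² region with the highest mitosis count) can be illustrated with a simple sliding-window search. This is an illustrative stand-in, not the paper's MIDOG21 area selector; the window shape, step size, and coordinates are assumptions:

```python
import math
from itertools import product

def hotspot_mitotic_count(points, area_mm2=2.0, step=0.25):
    """Slide a square window of the given area (in mm^2) over detected
    mitosis coordinates (in mm) and return the maximum count in any window.
    Illustrative sketch only: a real hotspot selector would scan density
    over the whole-slide detection map."""
    if not points:
        return 0
    side = math.sqrt(area_mm2)           # side length of a 2 mm^2 square
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    nx = int((max(xs) - x0) / step) + 1
    ny = int((max(ys) - y0) / step) + 1
    best = 0
    for i, j in product(range(nx), range(ny)):
        wx, wy = x0 + i * step, y0 + j * step
        count = sum(wx <= x <= wx + side and wy <= y <= wy + side
                    for x, y in points)
        best = max(best, count)
    return best
```

With a tight cluster of five detections and two distant outliers, the hotspot count is 5, regardless of the outliers.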

PMID:39466133 | DOI:10.1002/2056-4538.70008

Categories: Literature Watch

Prediction of carotid artery plaque area based on parallel multi-gate attention capture model

Mon, 2024-10-28 06:00

Rev Sci Instrum. 2024 Oct 1;95(10):105125. doi: 10.1063/5.0214828.

ABSTRACT

Cardiovascular disease (CVD) is a group of conditions involving the heart or blood vessels and is a leading cause of death and disability worldwide. Carotid artery plaque, as a key risk factor, is crucial for the early prevention and management of CVD. The purpose of this study is to combine clinical application and deep learning techniques to design a predictive model for the carotid artery plaque area. This model aims to identify individuals at high risk and reduce the incidence of cardiovascular disease through the implementation of relevant preventive measures. This study proposes an innovative multi-gate attention capture (MGAC) model that utilizes data such as risk factors, laboratory tests, and physical examinations to predict the area of carotid artery plaque. Experimental findings reveal the superior performance of the MGAC model, surpassing other commonly used deep learning models with the following metrics: mean absolute error of 4.17, root mean square error of 10.89, mean squared logarithmic error of 0.21, and coefficient of determination of 0.98.
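The four reported regression metrics (MAE, RMSE, mean squared logarithmic error, and coefficient of determination) have standard definitions; a generic sketch, not the authors' evaluation code:

```python
import math

def regression_report(y_true, y_pred):
    """MAE, RMSE, MSLE, and R^2 for paired target/prediction lists.
    MSLE assumes non-negative values (log1p is used for stability)."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    msle = sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    return {"mae": mae, "rmse": rmse, "msle": msle, "r2": r2}
```

For example, predictions [2, 2, 3, 4] against targets [1, 2, 3, 4] give MAE 0.25, RMSE 0.5, and R² 0.8.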

PMID:39465991 | DOI:10.1063/5.0214828

Categories: Literature Watch

CNN-Based Neurodegenerative Disease Classification Using QR-Represented Gait Data

Mon, 2024-10-28 06:00

Brain Behav. 2024 Oct;14(10):e70100. doi: 10.1002/brb3.70100.

ABSTRACT

PURPOSE: The primary aim of this study is to develop an effective and reliable diagnostic system for neurodegenerative diseases by utilizing gait data transformed into QR codes and classified using convolutional neural networks (CNNs). The objective of this method is to enhance the precision of diagnosing neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD), and Huntington's disease (HD), through the introduction of a novel approach to analyze gait patterns.

METHODS: The research evaluates the CNN-based classification approach using QR-represented gait data to address the diagnostic challenges associated with neurodegenerative diseases. The gait data of subjects were converted into QR codes, which were then classified using a CNN deep learning model. The dataset includes recordings from patients with Parkinson's disease (n = 15), Huntington's disease (n = 20), and amyotrophic lateral sclerosis (n = 13), and from 16 healthy controls.

RESULTS: The accuracy rates obtained through 10-fold cross-validation were as follows: 94.86% for neurodegenerative disease (NDD) versus control, 95.81% for PD versus control, 93.56% for HD versus control, 97.65% for ALS versus control, and 84.65% for PD versus HD versus ALS versus control. These results demonstrate the potential of the proposed system in distinguishing between different neurodegenerative diseases and control groups.
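The 10-fold cross-validation protocol behind these accuracy figures can be sketched as a simple index-splitting routine (illustrative only; the study's actual fold assignment and stratification are not specified in the abstract):

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation:
    shuffle once, deal indices round-robin into k folds, then hold out
    one fold at a time as the test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Per-fold accuracies would then be averaged to produce a single cross-validated score.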

CONCLUSION: The results indicate that the designed system may serve as a complementary tool for the diagnosis of neurodegenerative diseases, particularly in individuals who already present with varying degrees of motor impairment. Further validation and research are needed to establish its wider applicability.

PMID:39465642 | DOI:10.1002/brb3.70100

Categories: Literature Watch

Evaluation of root canal filling length on periapical radiograph using artificial intelligence

Mon, 2024-10-28 06:00

Oral Radiol. 2024 Oct 27. doi: 10.1007/s11282-024-00781-3. Online ahead of print.

ABSTRACT

OBJECTIVES: This work proposes a novel method to evaluate root canal filling (RCF) success using artificial intelligence (AI) and image analysis techniques.

METHODS: 1121 teeth with root canal treatment in 597 periapical radiographs (PARs) were anonymized and manually labeled. First, RCFs were segmented using 5 different state-of-the-art deep learning models based on convolutional neural networks. Their performances were compared based on the intersection over union (IoU), Dice score, and accuracy. Additionally, fivefold cross-validation was applied for the best-performing model, and its outputs were later used for further analysis. Second, images were processed via a graphical user interface (GUI) that allows dental clinicians to mark the apex of the tooth, which was used to find the distance between the tooth apex and the nearest RCF prediction of the deep learning model. This distance indicates whether the RCF is normal, short, or long.

RESULTS: Model performance was evaluated using well-known segmentation metrics: IoU, Dice score, and accuracy. The CNN-based models achieved an accuracy of 88%, an IoU of 79%, and a Dice score of 88% in segmenting root canal fillings.
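The IoU and Dice scores reported here have standard definitions over binary masks; a minimal sketch (generic metric code, not the authors' evaluation pipeline):

```python
def iou_and_dice(a, b):
    """IoU and Dice for two flat binary masks (lists of 0/1, equal length).
    IoU = |A ∩ B| / |A ∪ B|; Dice = 2|A ∩ B| / (|A| + |B|).
    Two empty masks are treated as a perfect match."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    union = total - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the two reported percentages (88% vs 79%) differ.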

CONCLUSIONS: Our study demonstrates that AI-based solutions present accurate and reliable performance for root canal filling evaluation.

PMID:39465425 | DOI:10.1007/s11282-024-00781-3

Categories: Literature Watch

Dual-attention transformer-based hybrid network for multi-modal medical image segmentation

Mon, 2024-10-28 06:00

Sci Rep. 2024 Oct 28;14(1):25704. doi: 10.1038/s41598-024-76234-y.

ABSTRACT

Accurate medical image segmentation plays a vital role in clinical practice. Convolutional neural networks and Transformers are the mainstream architectures for this task. However, convolutional neural networks lack the ability to model global dependency, while Transformers cannot extract local details. In this paper, we propose DATTNet (Dual ATTention Network), an encoder-decoder deep learning model for medical image segmentation. DATTNet is built in a hierarchical fashion with two novel components: (1) a Dual Attention module designed to model global dependency in the spatial and channel dimensions, and (2) a Context Fusion Bridge that remixes feature maps at multiple scales and constructs their correlations. Experiments on the ACDC, Synapse and Kvasir-SEG datasets were conducted to evaluate the performance of DATTNet. Our proposed model shows superior performance, effectiveness and robustness compared to SOTA methods, with mean Dice Similarity Coefficient scores of 92.2%, 84.5% and 89.1% on cardiac, abdominal organ and gastrointestinal polyp segmentation tasks, respectively. The quantitative and qualitative results demonstrate that DATTNet performs favorably across different modalities (MRI, CT, and endoscopy) and can be generalized to various tasks. It therefore shows promise for practical clinical application. The code has been released at https://github.com/MhZhang123/DATTNet/tree/main.
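The generic idea of attending over both channel and spatial dimensions can be sketched in NumPy. This is a simplified illustration (softmax-weighted rescaling of a feature map), not the paper's Dual Attention module, whose exact formulation the abstract does not give:

```python
import numpy as np

def dual_attention(feat):
    """feat: (C, H, W) feature map. Reweight channels by a softmax over
    per-channel global means, then reweight spatial locations by a softmax
    over per-location channel means. Rescaling by C and H*W keeps the
    overall magnitude comparable to the input."""
    C, H, W = feat.shape
    # Channel attention: one weight per channel.
    chan = feat.mean(axis=(1, 2))
    chan_w = np.exp(chan - chan.max())
    chan_w /= chan_w.sum()
    out = feat * chan_w[:, None, None] * C
    # Spatial attention: one weight per (h, w) location.
    spat = out.mean(axis=0)
    spat_w = np.exp(spat - spat.max())
    spat_w /= spat_w.sum()
    return out * spat_w[None, :, :] * (H * W)
```

In a trained network these attention weights would come from learned projections rather than raw means; the sketch only shows the two-axis reweighting pattern.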

PMID:39465274 | DOI:10.1038/s41598-024-76234-y

Categories: Literature Watch

Bone scintigraphy based on deep learning model and modified growth optimizer

Mon, 2024-10-28 06:00

Sci Rep. 2024 Oct 27;14(1):25627. doi: 10.1038/s41598-024-73991-8.

ABSTRACT

Bone scintigraphy is recognized as an efficient diagnostic method for whole-body screening for bone metastases. At the moment, whole-body bone scan image analysis is primarily dependent on manual reading by nuclear medicine doctors. However, manual analysis requires substantial experience and is both stressful and time-consuming. To address these issues, this work proposes a two-phase machine-learning technique for bone scintigraphy analysis. The first phase is feature extraction, conducted by integrating the Mobile Vision Transformer (MobileViT) model into our framework to capture highly complex representations from raw medical imagery using two primary components: a ViT and a lightweight CNN featuring a limited number of parameters. The second phase is feature selection (FS), which relies on the Arithmetic Optimization Algorithm (AOA) to improve the Growth Optimizer (GO). We evaluate the performance of the proposed FS model, named GOAOA, using a set of 18 UCI datasets. Additionally, its applicability to real-world bone scintigraphy is evaluated using 2800 bone scan images (1400 normal and 1400 abnormal). The results and statistical analysis reveal that the proposed GOAOA algorithm as an FS technique outperforms the other FS algorithms employed in this study.

PMID:39465262 | DOI:10.1038/s41598-024-73991-8

Categories: Literature Watch

A robust deep learning approach for identification of RNA 5-methyluridine sites

Mon, 2024-10-28 06:00

Sci Rep. 2024 Oct 28;14(1):25688. doi: 10.1038/s41598-024-76148-9.

ABSTRACT

RNA 5-methyluridine (m5U) sites play a significant role in understanding RNA modifications, which influence numerous biological processes such as gene expression and cellular functioning. Consequently, identifying m5U sites can play a vital role in understanding the integrity, structure, and function of RNA molecules. This study therefore introduces GRUpred-m5U, a novel deep learning framework based on a gated recurrent unit, evaluated on mature RNA and full-transcript RNA datasets. We used three descriptor groups: nucleic acid composition, pseudo nucleic acid composition, and physicochemical properties, comprising five feature extraction methods: ENAC, Kmer, DPCP, DPCP type 2, and PseDNC. Initially, we aggregated all the feature extraction methods and created a new merged set. Three hybrid models were developed employing deep-learning methods and evaluated through 10-fold cross-validation with seven evaluation metrics. After a comprehensive evaluation, the GRUpred-m5U model outperformed the other applied models, obtaining 98.41% and 96.70% accuracy on the two datasets, respectively. To our knowledge, the proposed model outperforms all existing state-of-the-art methods. The supervised model was additionally examined with unsupervised techniques such as principal component analysis (PCA), which supported its valid performance in identifying m5U. Considering its multi-layered construction, the GRUpred-m5U model has tremendous potential for future applications in the biological industry. Despite its larger size, the model excelled at pattern recognition and produced accurate, reliable results, which are essential in detecting m5U.
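K-mer composition, one of the descriptor groups listed above, maps an RNA sequence to a fixed-length frequency vector; a minimal sketch of a generic Kmer encoding (not necessarily the exact GRUpred-m5U feature code):

```python
from itertools import product

def kmer_features(seq, k=2, alphabet="ACGU"):
    """Frequency of each k-mer over the RNA alphabet, in a fixed order,
    normalized by the number of k-length windows in the sequence."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in counts:          # skip windows with non-standard bases
            counts[sub] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]
```

For k = 2 this yields a 16-dimensional vector; the other descriptors (ENAC, DPCP, PseDNC) would be concatenated alongside it to form the merged feature set.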

PMID:39465261 | DOI:10.1038/s41598-024-76148-9

Categories: Literature Watch

Radar-Based Fall Detection: A Survey

Mon, 2024-10-28 06:00

IEEE Robot Autom Mag. 2024 Sep;31(3):170-185. doi: 10.1109/MRA.2024.3352851. Epub 2024 Feb 5.

ABSTRACT

Fall detection, particularly critical for high-risk demographics like the elderly, is a key public health concern where timely detection can greatly minimize harm. With the advancements in radio frequency technology, radar has emerged as a powerful tool for human detection and tracking. Traditional machine learning algorithms, such as Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), have shown promising outcomes. However, deep learning approaches, notably Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), have outperformed them in learning intricate features and managing large, unstructured datasets. This survey offers an in-depth analysis of radar-based fall detection, with emphasis on Micro-Doppler, Range-Doppler, and Range-Doppler-Angles techniques. We discuss the intricacies and challenges in fall detection and emphasize the necessity for a clear definition of falls and appropriate detection criteria, informed by diverse influencing factors. We present an overview of radar signal processing principles and the underlying technology of radar-based fall detection, providing an accessible insight into machine learning and deep learning algorithms. After examining 74 research articles on radar-based fall detection published since 2000, we aim to bridge current research gaps and underscore potential future research strategies, emphasizing real-world application possibilities and the unexplored potential of deep learning in improving radar-based fall detection.

PMID:39465183 | PMC:PMC11507471 | DOI:10.1109/MRA.2024.3352851

Categories: Literature Watch

Integrated deep learning approach for generating cross-polarized images and analyzing skin melanin and hemoglobin distributions

Mon, 2024-10-28 06:00

Biomed Eng Lett. 2024 Jul 26;14(6):1355-1364. doi: 10.1007/s13534-024-00409-9. eCollection 2024 Nov.

ABSTRACT

Cross-polarized images are beneficial for skin pigment analysis due to the enhanced visualization of melanin and hemoglobin regions. However, the required imaging equipment can be bulky and optically complex. Additionally, preparing ground truths for training pigment analysis models is labor-intensive. This study aims to introduce an integrated approach for generating cross-polarized images and creating skin melanin and hemoglobin maps without the need for ground truth preparation for pigment distributions. We propose a two-component approach: a cross-polarized image generation module and a skin analysis module. Three generative adversarial networks (CycleGAN, pix2pix, and pix2pixHD) are compared for creating cross-polarized images. The regression analysis network for skin analysis is trained with theoretically reconstructed ground truths based on the optical properties of pigments. The methodology is evaluated using the VISIA VAESTRO clinical system. The cross-polarized image generation module achieved a peak signal-to-noise ratio of 35.514 dB. The skin analysis module demonstrated correlation coefficients of 0.942 for hemoglobin and 0.922 for melanin. The integrated approach yielded correlation coefficients of 0.923 for hemoglobin and 0.897 for melanin, respectively. The proposed approach achieved a reasonable correlation with the professional system on actual captured images, offering a promising alternative to existing professional equipment without the need for additional optical instruments or extensive ground truth preparation.
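The reported peak signal-to-noise ratio has a standard definition; a minimal sketch (generic PSNR, with the data range as an assumed parameter rather than anything specified in the abstract):

```python
import numpy as np

def psnr(ref, approx, data_range=255.0):
    """PSNR in dB between a reference image and an approximation:
    10 * log10(data_range^2 / MSE). Identical images give infinity."""
    mse = np.mean((ref.astype(np.float64) - approx.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher values mean the generated cross-polarized image is closer to the real one; the 35.514 dB figure above would correspond to a small mean squared error relative to the image's dynamic range.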

PMID:39465115 | PMC:PMC11502720 | DOI:10.1007/s13534-024-00409-9

Categories: Literature Watch

A systematic review of deep learning-based denoising for low-dose computed tomography from a perceptual quality perspective

Mon, 2024-10-28 06:00

Biomed Eng Lett. 2024 Aug 30;14(6):1153-1173. doi: 10.1007/s13534-024-00419-7. eCollection 2024 Nov.

ABSTRACT

Low-dose computed tomography (LDCT) scans are essential in reducing radiation exposure but often suffer from significant image noise that can impair diagnostic accuracy. While deep learning approaches have enhanced LDCT denoising capabilities, the predominant reliance on objective metrics like PSNR and SSIM has resulted in over-smoothed images that lack critical detail. This paper explores advanced deep learning methods tailored specifically to improve perceptual quality in LDCT images, focusing on generating diagnostic-quality images preferred in clinical practice. We review and compare current methodologies, including perceptual loss functions and generative adversarial networks, addressing the significant limitations of current benchmarks and the subjective nature of perceptual quality evaluation. Through a systematic analysis, this study underscores the urgent need for developing methods that balance both perceptual and diagnostic quality, proposing new directions for future research in the field.
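The perceptual loss functions reviewed here compare images in the feature space of a network rather than in pixel space, which penalizes the over-smoothing that pixelwise PSNR/SSIM objectives reward. A minimal sketch of the mechanism is below; it uses fixed random convolution kernels as a stand-in feature extractor, whereas real methods typically use pretrained VGG features:

```python
import numpy as np

def conv_features(img, kernels):
    """Stand-in feature extractor: valid 2-D correlation with fixed kernels."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    out = np.empty((len(kernels), h - kh + 1, w - kw + 1))
    for c, k in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[c, i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def perceptual_loss(pred, target, kernels):
    # MSE between feature maps rather than between raw pixels.
    return np.mean((conv_features(pred, kernels) - conv_features(target, kernels)) ** 2)

rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, 3))       # fixed "pretrained" filters
clean = rng.uniform(size=(32, 32))
denoised = clean + 0.01 * rng.standard_normal((32, 32))
loss = perceptual_loss(denoised, clean, kernels)
```

Because the feature maps respond to local structure (edges, textures), minimizing this loss preserves detail that a plain pixelwise MSE would happily blur away.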

PMID:39465112 | PMC:PMC11502640 | DOI:10.1007/s13534-024-00419-7

Categories: Literature Watch

CT synthesis with deep learning for MR-only radiotherapy planning: a review

Mon, 2024-10-28 06:00

Biomed Eng Lett. 2024 Sep 26;14(6):1259-1278. doi: 10.1007/s13534-024-00430-y. eCollection 2024 Nov.

ABSTRACT

MR-only radiotherapy planning is beneficial from the perspective of both time and safety since it uses synthetic CT for radiotherapy dose calculation instead of real CT scans. To elevate the accuracy of treatment planning and apply the results in practice, various methods have been adopted, among which deep learning models for image-to-image translation have shown good performance by retaining domain-invariant structures while changing domain-specific details. In this paper, we present an overview of diverse deep learning approaches to MR-to-CT synthesis, divided into four classes: convolutional neural networks, generative adversarial networks, transformer models, and diffusion models. By comparing each model and analyzing the general approaches applied to this task, the potential of these models and ways to improve the current methods can be evaluated.
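The cycle-consistency idea used by CycleGAN-style models in this task can be stated in a few lines: translating MR to synthetic CT and back should recover the input, which pushes the generators to preserve domain-invariant anatomy. A toy sketch with affine maps standing in for the two generator networks (purely illustrative, not the review's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generators": affine maps standing in for MR->CT and CT->MR networks.
def g_mr_to_ct(x):
    return 2.0 * x + 1.0

def g_ct_to_mr(y):
    return 0.5 * (y - 1.0)

mr = rng.uniform(size=(8, 8))       # toy MR patch

# Cycle-consistency loss: MR -> CT -> MR should reproduce the input.
# Here the two maps are exact inverses, so the loss is (numerically) zero.
cycle_loss = np.mean(np.abs(g_ct_to_mr(g_mr_to_ct(mr)) - mr))
```

In a real unpaired MR-to-CT setting this L1 cycle term is added to the adversarial losses, constraining the generators when no pixel-aligned MR/CT pairs exist.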

PMID:39465111 | PMC:PMC11502731 | DOI:10.1007/s13534-024-00430-y

Categories: Literature Watch

A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies

Mon, 2024-10-28 06:00

Biomed Eng Lett. 2024 Sep 17;14(6):1221-1242. doi: 10.1007/s13534-024-00425-9. eCollection 2024 Nov.

ABSTRACT

Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time for MRI. Acceleration can be achieved by acquiring fewer data points in k-space, which results in various artifacts in the image domain. Conventional reconstruction methods have resolved the artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration. Advances in hardware and the development of specialized network architectures have produced such achievements. In addition, MRI signals contain various redundant information, including multi-coil redundancy, multi-contrast redundancy, and spatiotemporal redundancy. Utilization of this redundant information combined with deep learning approaches allows not only higher acceleration but also well-preserved details in the reconstructed images. Consequently, this review paper introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, the paper concludes by discussing the challenges, limitations, and potential directions of future developments.
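The core setup, acquiring fewer k-space points and paying for it with image-domain artifacts, can be reproduced in a few lines of NumPy. The sketch below (a synthetic single-coil phantom and 2x uniform undersampling, both illustrative assumptions) shows the aliasing that reconstruction networks are trained to remove:

```python
import numpy as np

# Simple synthetic single-coil "anatomy": a bright rectangle.
image = np.zeros((64, 64))
image[16:48, 24:40] = 1.0

kspace = np.fft.fft2(image)               # fully sampled k-space
full_recon = np.fft.ifft2(kspace)         # inverse FFT recovers the image

# 2x acceleration: keep every other phase-encode line (rows of k-space)
# and zero-fill the rest before the inverse FFT.
mask = np.zeros((64, 64))
mask[::2, :] = 1.0
undersampled = np.fft.ifft2(kspace * mask)

# Uniform undersampling folds a half-FOV-shifted copy of the object onto
# itself (aliasing); the residual below is exactly that ghost copy.
aliasing_error = np.abs(undersampled * 2 - image).max()
```

A reconstruction network (or a conventional parallel-imaging method) takes the zero-filled `undersampled` image, or the masked k-space itself, and learns to suppress this folding artifact.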

PMID:39465106 | PMC:PMC11502678 | DOI:10.1007/s13534-024-00425-9

Categories: Literature Watch

Self-supervised learning for CT image denoising and reconstruction: a review

Mon, 2024-10-28 06:00

Biomed Eng Lett. 2024 Sep 12;14(6):1207-1220. doi: 10.1007/s13534-024-00424-w. eCollection 2024 Nov.

ABSTRACT

This article reviews the self-supervised learning methods for CT image denoising and reconstruction. Currently, deep learning has become a dominant tool in medical imaging as well as computer vision. In particular, self-supervised learning approaches have attracted great attention as a technique for learning CT images without clean/noisy references. After briefly reviewing the fundamentals of CT image denoising and reconstruction, we examine the progress of deep learning in CT image denoising and reconstruction. Finally, we focus on the theoretical and methodological evolution of self-supervised learning for image denoising and reconstruction.
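The statistical trick behind learning "without clean/noisy references" is that, for zero-mean noise, a second independent noisy copy is as good a regression target as the clean image in expectation (the Noise2Noise observation that much of this literature builds on). A tiny sketch with a one-parameter "denoiser" fit purely on noisy/noisy pairs; the signal model and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(size=(1000,))                      # never used for training
sigma = 0.3
noisy_a = clean + sigma * rng.standard_normal(1000)    # network input
noisy_b = clean + sigma * rng.standard_normal(1000)    # training target

# "Model": a single shrinkage coefficient w, fit by least squares on
# noisy -> noisy pairs. The independent noise in noisy_b averages out,
# so w converges to the same value clean targets would give.
w = (noisy_a @ noisy_b) / (noisy_a @ noisy_a)
denoised = w * noisy_a

mse_noisy = np.mean((noisy_a - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Even this one-parameter model lowers the error against the held-out clean signal; self-supervised CT denoisers apply the same principle with deep networks and, for reconstruction, with splits of the measured projection data.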

PMID:39465103 | PMC:PMC11502646 | DOI:10.1007/s13534-024-00424-w

Categories: Literature Watch
