Deep learning

On-Device Deep Learning to Detect Carotid Stenosis With Smartphones: Development and Validation

Thu, 2024-09-05 06:00

Stroke. 2024 Sep 5. doi: 10.1161/STROKEAHA.124.048410. Online ahead of print.

NO ABSTRACT

PMID:39234680 | DOI:10.1161/STROKEAHA.124.048410

Categories: Literature Watch

Screening antimicrobial peptides and probiotics using multiple deep learning and directed evolution strategies

Thu, 2024-09-05 06:00

Acta Pharm Sin B. 2024 Aug;14(8):3476-3492. doi: 10.1016/j.apsb.2024.05.003. Epub 2024 May 10.

ABSTRACT

Owing to their limited accuracy and narrow applicability, current antimicrobial peptide (AMP) prediction models face obstacles in industrial application. To address these limitations, we developed and improved an AMP prediction model using Comparing and Optimizing Multiple DEep Learning (COMDEL) algorithms, coupled with a high-throughput AMP screening method, finally reaching an accuracy of 94.8% in testing and 88% in experimental verification, surpassing other state-of-the-art models. In conjunction with COMDEL, we employed the phage-assisted evolution method to screen Sortase in vivo and developed a cell-free AMP synthesis system in vitro, ultimately increasing AMP yields to a range of 0.5-2.1 g/L within hours. Moreover, by multi-omics analysis using COMDEL, we identified Lactobacillus plantarum as the most promising candidate for AMP generation among 35 edible probiotics. Following this, we developed a microdroplet sorting approach and successfully screened three L. plantarum mutants, each showing a twofold increase in antimicrobial ability, underscoring their substantial industrial application value.
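
As a rough illustration of the model-comparison step that COMDEL automates, the sketch below encodes peptides as simple amino-acid-composition vectors and cross-validates two candidate classifiers; the features, models, and toy sequences are illustrative stand-ins, not the paper's pipeline.

```python
# Hedged sketch: compare candidate AMP classifiers, loosely echoing COMDEL's
# compare-and-optimize idea. Sequences, labels, and models are toy stand-ins.
from collections import Counter

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> list[float]:
    """Fraction of each standard amino acid in the peptide."""
    counts = Counter(seq)
    return [counts.get(aa, 0) / len(seq) for aa in AMINO_ACIDS]

# Toy data: 1 = antimicrobial, 0 = not (labels are illustrative only).
peptides = [("GIGKFLHSAKKFGKAFVGEIMNS", 1),
            ("KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK", 1),
            ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 0),
            ("ASTTTNYTAS", 0)] * 10
X = [composition(seq) for seq, _ in peptides]
y = [label for _, label in peptides]

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    print(type(model).__name__, cross_val_score(model, X, y, cv=5).mean())
```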

PMID:39234615 | PMC:PMC11372459 | DOI:10.1016/j.apsb.2024.05.003

Categories: Literature Watch

Use of artificial intelligence to support prehospital traumatic injury care: A scoping review

Thu, 2024-09-05 06:00

J Am Coll Emerg Physicians Open. 2024 Sep 4;5(5):e13251. doi: 10.1002/emp2.13251. eCollection 2024 Oct.

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has transformative potential to support prehospital clinicians, emergency physicians, and trauma surgeons in acute traumatic injury care. This scoping review examines the literature evaluating AI models using prehospital features to support early traumatic injury care.

METHODS: We conducted a systematic search in August 2023 of PubMed, Embase, and Web of Science. Two independent reviewers screened titles/abstracts, with a third reviewer for adjudication, followed by a full-text analysis. We included original research and conference presentations evaluating AI models, including machine learning (ML), deep learning (DL), and natural language processing (NLP), that used prehospital features or features available immediately upon emergency department arrival. Review articles were excluded. The same investigators extracted data and systematically categorized outcomes to ensure consistency and transparency. We calculated kappa for interrater reliability and descriptive statistics.
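
Since the review reports interrater agreement as kappa, here is a minimal sketch of that calculation on hypothetical include/exclude screening calls:

```python
# Minimal sketch: Cohen's kappa for two reviewers' screening decisions.
# The decision vectors below are hypothetical, not the review's data.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # 1 = include, 0 = exclude
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]

print("kappa:", cohen_kappa_score(reviewer_a, reviewer_b))
```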

RESULTS: We identified 1050 unique publications, with 49 meeting inclusion criteria after title and abstract review (kappa 0.58) and full-text review. Publications increased annually from 2 in 2007 to 10 in 2022. Geographic analysis revealed that 61% of studies focused on data from the United States. Studies were predominantly retrospective (88%), used local (45%) or national-level (41%) data, focused on adults only (59%) or did not specify adults or pediatrics (27%), and 57% encompassed both blunt and penetrating injury mechanisms. The majority used machine learning (88%) alone or in conjunction with DL or NLP, and the top three algorithms used were support vector machine, logistic regression, and random forest. The most common study objectives were to predict the need for critical care and life-saving interventions (29%), assist in triage (22%), and predict survival (20%).

CONCLUSIONS: A small but growing body of literature described AI models based on prehospital features that may support decisions made by dispatchers, Emergency Medical Services clinicians, and trauma teams in early traumatic injury care.

PMID:39234533 | PMC:PMC11372236 | DOI:10.1002/emp2.13251

Categories: Literature Watch

Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI

Thu, 2024-09-05 06:00

J Med Imaging (Bellingham). 2024 Sep;11(5):054003. doi: 10.1117/1.JMI.11.5.054003. Epub 2024 Sep 3.

ABSTRACT

PURPOSE: Segmentation is essential for tissue quantification and characterization in studies of aging and age-related and metabolic diseases and the development of imaging biomarkers. We propose a multi-method and multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to each other, rendering their manual delineation a challenging and time-consuming task.

APPROACH: We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh, gracilis, hamstring, quadriceps femoris, and sartorius, using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep learning model-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and the accuracy of deep networks, hence enabling accurate assessment of the volume and fat content of muscle groups.

RESULTS: For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested them on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance, with an average DSC of 0.859 and an HD-95 of 8.34 over all muscles.
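
For readers unfamiliar with the two reported metrics, here is a minimal sketch of DSC and a voxel-based HD-95 on toy 2D masks (the paper works with 3D volumes; surface extraction and voxel spacing are omitted for brevity):

```python
# Hedged sketch of the evaluation metrics: Dice similarity coefficient and a
# simplified 95th-percentile Hausdorff distance on toy binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    # Distances from each mask's foreground pixels to the other's foreground.
    da = distance_transform_edt(~b)[a]
    db = distance_transform_edt(~a)[b]
    return float(np.percentile(np.concatenate([da, db]), 95))

gt = np.zeros((64, 64), bool); gt[16:48, 16:48] = True      # "ground truth"
pred = np.zeros((64, 64), bool); pred[18:50, 14:46] = True  # "prediction"
print(f"DSC={dice(gt, pred):.3f}  HD-95={hd95(gt, pred):.2f}")
```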

CONCLUSIONS: Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves the segmentation accuracy. Additional fusion with deep network decisions applied to the subject space offers complementary information. The proposed approach can produce accurate segmentation of individual muscle groups in 3D thigh MRI scans.

PMID:39234425 | PMC:PMC11369361 | DOI:10.1117/1.JMI.11.5.054003

Categories: Literature Watch

Comprehensive hepatotoxicity prediction: ensemble model integrating machine learning and deep learning

Thu, 2024-09-05 06:00

Front Pharmacol. 2024 Aug 21;15:1441587. doi: 10.3389/fphar.2024.1441587. eCollection 2024.

ABSTRACT

BACKGROUND: Chemicals may lead to acute liver injuries, posing a serious threat to human health. Achieving the precise safety profile of a compound is challenging due to the complex and expensive testing procedures. In silico approaches will aid in identifying the potential risk of drug candidates in the initial stage of drug development and thus mitigating the developmental cost.

METHODS: In the current study, QSAR models were developed for hepatotoxicity prediction using an ensemble strategy that integrates machine learning (ML) and deep learning (DL) algorithms based on various molecular features. A large dataset of 2588 chemicals and drugs was randomly divided into training (80%) and test (20%) sets, followed by training individual base models with diverse ML or DL algorithms on three different kinds of descriptors and fingerprints. Feature selection approaches were then employed to optimize the models based on their performance. Hybrid ensemble approaches were further utilized to determine the method with the best performance.

RESULTS: The voting ensemble classifier emerged as the optimal model, achieving an excellent prediction accuracy of 80.26%, an AUC of 82.84%, and a recall of over 93%, followed by the bagging and stacking ensemble classifiers. The model was further verified with an external test set, internal 10-fold cross-validation, and rigorous benchmark training, exhibiting much better reliability than previously published models.
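
The reported voting/bagging/stacking comparison follows a standard scikit-learn pattern; below is a minimal soft-voting sketch on stand-in fingerprint data (the paper's actual descriptors, base models, and tuning are not reproduced here):

```python
# Hedged sketch: soft-voting ensemble over heterogeneous base classifiers,
# trained on random stand-in fingerprints rather than the paper's descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(600, 256)).astype(float)  # toy binary fingerprints
y = rng.integers(0, 2, size=600)                       # 1 = hepatotoxic (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=300))],
    voting="soft",  # average predicted probabilities across base models
)
vote.fit(X_tr, y_tr)
proba = vote.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, proba > 0.5),
      "AUC:", roc_auc_score(y_te, proba))
```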

CONCLUSION: The proposed ensemble model offers a dependable, well-performing assessment of the risk that chemicals and drugs will induce liver damage.

PMID:39234116 | PMC:PMC11373136 | DOI:10.3389/fphar.2024.1441587

Categories: Literature Watch

Artificial Intelligence and Deep Learning in Revolutionizing Brain Tumor Diagnosis and Treatment: A Narrative Review

Thu, 2024-09-05 06:00

Cureus. 2024 Aug 5;16(8):e66157. doi: 10.7759/cureus.66157. eCollection 2024 Aug.

ABSTRACT

The emergence of artificial intelligence (AI) in the medical field holds promise for improving medical management, particularly in personalized strategies for the diagnosis and treatment of brain tumors. However, integrating AI into clinical practice has proven to be a challenge. Deep learning (DL) is well suited to extracting relevant information from the growing volume of medical histories and imaging records, shortening diagnosis times that would otherwise overwhelm manual methods. In addition, DL aids in automated tumor segmentation, classification, and diagnosis. DL models such as the Brain Tumor Classification Model and Inception-ResNet V2, as well as hybrid techniques that combine DL networks with support vector machines and k-nearest neighbors, identify tumor phenotypes and brain metastases, allowing real-time decision-making and enhancing preoperative planning. AI algorithms and DL development facilitate radiological diagnostics such as computed tomography, positron emission tomography scans, and magnetic resonance imaging (MRI) by integrating two-dimensional and three-dimensional MRI using DenseNet and 3D convolutional neural network architectures, which enable precise tumor delineation. DL offers benefits in neuro-interventional procedures, and the shift toward computer-assisted interventions acknowledges the need for more accurate and efficient image analysis methods. Further research is needed to realize the potential impact of DL in improving these outcomes.
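
One pattern the review mentions, combining a DL network with a classical classifier such as an SVM or k-NN, reduces to using the CNN as a frozen feature extractor. A hedged sketch (the ResNet-18 backbone, input tensors, and labels here are placeholders, not drawn from any cited study):

```python
# Hedged sketch of a hybrid DL + classical-ML pipeline: a CNN backbone extracts
# embeddings and an SVM classifies them. Pretrained weights would be loaded
# in practice; random weights and inputs keep this sketch self-contained.
import torch
from torchvision.models import resnet18
from sklearn.svm import SVC

backbone = resnet18(weights=None)        # use pretrained weights in practice
backbone.fc = torch.nn.Identity()        # drop the classification head
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    return backbone(images)              # (N, 512) feature vectors

images = torch.randn(16, 3, 224, 224)    # stand-in MRI slices as 3-channel input
labels = [0, 1] * 8                      # toy benign/malignant labels
svm = SVC(kernel="rbf").fit(embed(images).numpy(), labels)
print(svm.predict(embed(images[:2]).numpy()))
```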

PMID:39233936 | PMC:PMC11372433 | DOI:10.7759/cureus.66157

Categories: Literature Watch

Skin cancer classification leveraging multi-directional compact convolutional neural network ensembles and gabor wavelets

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20637. doi: 10.1038/s41598-024-69954-8.

ABSTRACT

Skin cancer (SC) is an important medical condition that necessitates prompt identification to ensure timely treatment. Although visual evaluation by dermatologists is considered the most reliable method, its efficacy is subjective and laborious. Deep learning-based computer-aided diagnostic (CAD) platforms have become valuable tools for supporting dermatologists. Nevertheless, current CAD tools frequently depend on Convolutional Neural Networks (CNNs) with large numbers of deep layers and hyperparameters, single-CNN methodologies, and large feature spaces, and they exclusively utilise spatial image information, which restricts their effectiveness. This study presents SCaLiNG, an innovative CAD tool specifically developed to address and surpass these constraints. SCaLiNG leverages a collection of three compact CNNs and Gabor Wavelets (GW) to acquire a comprehensive feature vector consisting of spatial-textural-frequency attributes. SCaLiNG gathers a wide range of image details by breaking each image down into multiple directional sub-bands using GW and then training several CNNs on those sub-bands and the original image. SCaLiNG then fuses the attributes extracted from the CNNs trained on the actual images and the GW-derived sub-bands; this fusion improves diagnostic accuracy through a more thorough representation of attributes. Furthermore, SCaLiNG applies a feature selection approach, which further enhances the model's performance by choosing the most distinguishing features. Experimental findings indicate that SCaLiNG achieves a classification accuracy of 0.9170 in categorising SC subcategories, surpassing conventional single-CNN models. The outstanding performance of SCaLiNG underlines its ability to aid dermatologists in swiftly and precisely recognising and classifying SC, thereby enhancing patient outcomes.
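
The directional decomposition step is a standard Gabor filter bank; here is a minimal sketch (kernel parameters and the random stand-in image are illustrative, not SCaLiNG's settings):

```python
# Hedged sketch of the Gabor-wavelet step: decompose an image into directional
# sub-bands that would each feed a compact CNN. Parameters are illustrative.
import cv2
import numpy as np

image = np.random.rand(224, 224).astype(np.float32)  # stand-in skin image

sub_bands = []
for theta in np.arange(0, np.pi, np.pi / 4):         # 4 orientations
    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    sub_bands.append(cv2.filter2D(image, cv2.CV_32F, kernel))

# Each sub-band, plus the original image, would train its own compact CNN,
# with the resulting features fused and then reduced by feature selection.
print(len(sub_bands), sub_bands[0].shape)
```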

PMID:39232043 | DOI:10.1038/s41598-024-69954-8

Categories: Literature Watch

Brain tumor image segmentation method using hybrid attention module and improved mask RCNN

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20615. doi: 10.1038/s41598-024-71250-4.

ABSTRACT

To meet the needs of automated medical analysis of brain tumor magnetic resonance imaging, this study introduces an enhanced instance segmentation method built upon the mask region-based convolutional neural network. By incorporating squeeze-and-excitation networks (a channel attention mechanism) and a concatenated attention neural network (a spatial attention mechanism), the model can more adeptly focus on the critical regions and finer details of brain tumors. Residual network-50, combined with the attention modules and a feature pyramid network, serves as the backbone network to effectively capture the multi-scale characteristics of brain tumors. At the same time, the region proposal network and region-of-interest alignment were used to ensure that the segmented area matched the actual tumor morphology. The originality of the research lies in the deep residual network that combines the attention mechanism with the feature pyramid network to replace the original mask region-based convolutional neural network backbone, improving the efficiency of brain tumor feature extraction. After a series of experiments, the precision of the model is 90.72%, which is 0.76% higher than that of the original model; recall is 91.68%, an increase of 0.95%; and mean Intersection over Union is 94.56%, an increase of 1.39%. This method achieves precise segmentation of brain tumor magnetic resonance imaging; doctors can easily and accurately locate the tumor area through the segmentation results and thereby quickly measure the diameter, area, and other properties of the tumor, giving them more comprehensive diagnostic information.
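
A minimal sketch of the squeeze-and-excitation block, the channel-attention component added to the backbone (channel count and reduction ratio are illustrative, not the paper's exact configuration):

```python
# Hedged sketch of a squeeze-and-excitation (SE) channel-attention block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pool
        return x * w[:, :, None, None]       # excite: per-channel rescaling

feature_map = torch.randn(2, 256, 56, 56)   # stand-in backbone feature map
print(SEBlock(256)(feature_map).shape)      # shape is preserved
```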

PMID:39232028 | DOI:10.1038/s41598-024-71250-4

Categories: Literature Watch

APIS: a paired CT-MRI dataset for ischemic stroke segmentation - methods and challenges

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20543. doi: 10.1038/s41598-024-71273-x.

ABSTRACT

Stroke, the second leading cause of mortality globally, predominantly results from ischemic conditions. Immediate attention and diagnosis, related to the characterization of brain lesions, play a crucial role in patient prognosis. Standard stroke protocols include an initial evaluation with a non-contrast CT to discriminate between hemorrhage and ischemia. However, non-contrast CT lacks sensitivity in detecting subtle ischemic changes in this phase. Alternatively, diffusion-weighted MRI studies provide enhanced capabilities, yet are constrained by limited availability and higher costs. Hence, we envision new approaches that integrate ADC stroke lesion findings into CT to enhance the analysis and accelerate stroke patient management. This study details a public challenge in which scientists applied top computational strategies to delineate stroke lesions on CT scans, utilizing paired ADC information. It also constitutes the first effort to build a paired dataset with NCCT and ADC studies of acute ischemic stroke patients. Submitted algorithms were validated against the references of two expert radiologists. The best Dice score achieved was 0.2 on a test set of 36 patient studies. Although all the teams employed specialized deep learning tools, the results reveal the limitations of computational approaches in supporting the segmentation of small lesions with heterogeneous density.

PMID:39232010 | DOI:10.1038/s41598-024-71273-x

Categories: Literature Watch

Non-invasive multimodal CT deep learning biomarker to predict pathological complete response of non-small cell lung cancer following neoadjuvant immunochemotherapy: a multicenter study

Wed, 2024-09-04 06:00

J Immunother Cancer. 2024 Sep 3;12(9):e009348. doi: 10.1136/jitc-2024-009348.

ABSTRACT

OBJECTIVES: Although neoadjuvant immunochemotherapy has been widely applied in non-small cell lung cancer (NSCLC), predicting treatment response remains a challenge. We used pretreatment multimodal CT to explore deep learning-based immunochemotherapy response image biomarkers.

METHODS: This study retrospectively obtained non-contrast enhanced and contrast enhanced CT scans of patients with NSCLC who underwent surgery after receiving neoadjuvant immunochemotherapy at multiple centers between August 2019 and February 2023. Deep learning features were extracted from both non-contrast enhanced and contrast enhanced CT scans to construct the predictive models (LUNAI-uCT model and LUNAI-eCT model), respectively. After the feature fusion of these two types of features, a fused model (LUNAI-fCT model) was constructed. The performance of the model was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. SHapley Additive exPlanations analysis was used to quantify the impact of CT imaging features on model prediction. To gain insights into how our model makes predictions, we employed Gradient-weighted Class Activation Mapping to generate saliency heatmaps.
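
A minimal sketch of the late feature fusion step, concatenating the two CT branches' deep features before a shared classifier head (all dimensions and the fusion rule are assumptions, not the LUNAI-fCT design):

```python
# Hedged sketch: fuse non-contrast (uCT) and contrast-enhanced (eCT) deep
# features by concatenation and predict pathological complete response (pCR).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, dim_u: int = 512, dim_e: int = 512):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(dim_u + dim_e, 128), nn.ReLU(),
            nn.Linear(128, 1),              # logit for pCR
        )

    def forward(self, f_u: torch.Tensor, f_e: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat([f_u, f_e], dim=1))

f_u = torch.randn(4, 512)   # stand-in features from the non-contrast branch
f_e = torch.randn(4, 512)   # stand-in features from the contrast-enhanced branch
print(torch.sigmoid(FusionHead()(f_u, f_e)).squeeze(1))  # pCR probabilities
```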

RESULTS: The training and validation datasets comprised 113 patients from Center A, split at an 8:2 ratio, and the test dataset included 112 patients (Center B n=73, Center C n=20, Center D n=19). In the test dataset, the LUNAI-uCT, LUNAI-eCT, and LUNAI-fCT models achieved AUCs of 0.762 (95% CI 0.654 to 0.791), 0.797 (95% CI 0.724 to 0.844), and 0.866 (95% CI 0.821 to 0.883), respectively.

CONCLUSIONS: By extracting deep learning features from contrast enhanced and non-contrast enhanced CT, we constructed the LUNAI-fCT model as an imaging biomarker, which can non-invasively predict pathological complete response in neoadjuvant immunochemotherapy for NSCLC.

PMID:39231545 | DOI:10.1136/jitc-2024-009348

Categories: Literature Watch

MADE-for-ASD: A multi-atlas deep ensemble network for diagnosing Autism Spectrum Disorder

Wed, 2024-09-04 06:00

Comput Biol Med. 2024 Sep 3;182:109083. doi: 10.1016/j.compbiomed.2024.109083. Online ahead of print.

ABSTRACT

In response to the global need for efficient early diagnosis of Autism Spectrum Disorder (ASD), this paper bridges the gap between traditional, time-consuming diagnostic methods and potential automated solutions. We propose a multi-atlas deep ensemble network, MADE-for-ASD, that integrates multiple atlases of the brain's functional magnetic resonance imaging (fMRI) data through a weighted deep ensemble network. Our approach integrates demographic information into the prediction workflow, which enhances ASD diagnosis performance and offers a more holistic perspective on patient profiling. We experiment with the well-known publicly available ABIDE (Autism Brain Imaging Data Exchange) I dataset, consisting of resting-state fMRI data from 17 different laboratories around the globe. Our proposed system achieves 75.20% accuracy on the entire dataset and 96.40% on a specific subset - both surpassing reported ASD diagnosis accuracy in ABIDE I fMRI studies. Specifically, our model improves by 4.4 percentage points over prior works on the same amount of data. The model exhibits a sensitivity of 82.90% and a specificity of 69.70% on the entire dataset, and 91.00% and 99.50%, respectively, on the specific subset. We leverage the F-score to pinpoint the top 10 ROIs in ASD diagnosis, such as the precuneus and anterior cingulate/ventromedial. The proposed system can potentially pave the way for more cost-effective, efficient, and scalable strategies in ASD diagnosis. Code and evaluations are publicly available at https://github.com/hasan-rakibul/MADE-for-ASD.
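
A minimal sketch of the two ingredients the abstract names, per-atlas branches blended by learned weights and demographic features appended to the inputs; the softmax blend and all dimensions are assumptions for clarity, not the published architecture:

```python
# Hedged sketch: weighted ensemble over per-atlas branches with demographics.
import torch
import torch.nn as nn

class WeightedAtlasEnsemble(nn.Module):
    def __init__(self, n_atlases: int = 3, feat_dim: int = 200, demo_dim: int = 3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim + demo_dim, 64), nn.ReLU(),
                          nn.Linear(64, 1))
            for _ in range(n_atlases)])
        self.atlas_weights = nn.Parameter(torch.zeros(n_atlases))  # learned blend

    def forward(self, atlas_feats: list, demo: torch.Tensor) -> torch.Tensor:
        logits = torch.cat([b(torch.cat([f, demo], dim=1))
                            for b, f in zip(self.branches, atlas_feats)], dim=1)
        return (logits * self.atlas_weights.softmax(0)).sum(dim=1)  # ASD logit

feats = [torch.randn(8, 200) for _ in range(3)]  # fMRI features per atlas (toy)
demo = torch.randn(8, 3)                         # e.g. age, sex, site (toy)
print(WeightedAtlasEnsemble()(feats, demo).shape)
```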

PMID:39232404 | DOI:10.1016/j.compbiomed.2024.109083

Categories: Literature Watch

On the application of hybrid deep 3D convolutional neural network algorithms for predicting the micromechanics of brain white matter

Wed, 2024-09-04 06:00

Comput Methods Programs Biomed. 2024 Aug 22;256:108381. doi: 10.1016/j.cmpb.2024.108381. Online ahead of print.

ABSTRACT

BACKGROUND: Material characterization of brain white matter (BWM) is difficult due to the anisotropy inherent to its three-dimensional microstructure and the various interactions between heterogeneous brain tissues (axon, myelin, and glia). Developing full-scale finite element models that accurately represent the relationship between the micro- and macroscale BWM is, however, extremely challenging and computationally expensive. The anisotropic properties of the BWM microstructure, computed by building unit cells under frequency-domain viscoelasticity, comprise 36 individual constants each for the loss and storage moduli. Furthermore, the architecture of each unit cell is arbitrary in an infinite dataset.

METHODS: In this study, we extend our previous work on developing representative volume elements (RVEs) of the BWM microstructure in the frequency domain to develop 3D deep learning algorithms that can predict the anisotropic composite properties. The deep 3D convolutional neural network (CNN) algorithms utilize a voxelization method to obtain geometry information from the 3D RVEs. The architecture information encoded in the voxelized locations is employed as input data and cross-referenced with the RVEs' material properties (output data). We further improve the efficiency of the deep learning algorithms by incorporating parallel pathways, residual neural networks, and inception modules.
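
A minimal sketch of the kind of 3D residual convolution block such a network builds on, mapping a voxelized RVE to the 72 viscoelastic constants (36 storage plus 36 loss); all sizes are toy values, not the M3DR architecture:

```python
# Hedged sketch: a 3D residual block over a voxelized RVE, regressing the
# 36 storage + 36 loss moduli constants. Channel counts are illustrative.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))   # identity shortcut

net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), ResBlock3D(8),
                    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 72))
voxels = torch.randn(1, 1, 32, 32, 32)        # voxelized RVE geometry (toy size)
print(net(voxels).shape)                      # (1, 72) predicted constants
```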

RESULTS: This paper presents different CNN algorithms in predicting the anisotropic composite properties of BWM. A quantitative analysis of the individual algorithms is presented with the view of identifying optimal strategies to interpret the combined measurements of brain MRE and DTI.

SIGNIFICANCE: The proposed Multiscale 3D ResNet (M3DR) algorithm demonstrates high learning ability and performance over baseline CNN algorithms in predicting BWM tissue properties. The hybrid M3DR framework also overcomes the significant limitations encountered in modeling brain tissue using finite elements alone, such as high computational cost and mesh or simulation failure. The proposed framework also provides an efficient and streamlined platform for implementing complex boundary conditions, modeling intrinsic material properties, and imparting interfacial architecture information.

PMID:39232375 | DOI:10.1016/j.cmpb.2024.108381

Categories: Literature Watch

Automated Association for Osteosynthesis Foundation and Orthopedic Trauma Association classification of pelvic fractures on pelvic radiographs using deep learning

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20548. doi: 10.1038/s41598-024-71654-2.

ABSTRACT

High-energy impacts, like vehicle crashes or falls, can lead to pelvic ring injuries. Rapid diagnosis and treatment are crucial due to the risks of severe bleeding and organ damage. Pelvic radiography promptly assesses fracture extent and location but struggles to diagnose bleeding. The AO/OTA classification system grades pelvic instability, but its complexity limits its use in emergency settings. This study develops and evaluates a deep learning algorithm to classify pelvic fractures on radiographs per the AO/OTA system. Pelvic radiographs of 773 patients with pelvic fractures and 167 patients without pelvic fractures were retrospectively analyzed at a single center. Pelvic fractures were classified into types A, B, and C using medical records categorized by an orthopedic surgeon according to the AO/OTA classification system. Accuracy, Dice Similarity Coefficient (DSC), and F1 score were measured to evaluate the diagnostic performance of the deep learning algorithms. The segmentation model showed high performance, with 0.98 accuracy and 0.96-0.97 DSC. The AO/OTA classification model demonstrated effective performance, with a 0.47-0.80 F1 score, 0.69-0.88 accuracy, and a macro-averaged score of 0.77-0.94. Performance evaluation of the models showed relatively favorable results, which can aid in the early classification of pelvic fractures.

PMID:39232189 | DOI:10.1038/s41598-024-71654-2

Categories: Literature Watch

Computer-aided diagnosis for lung cancer using waterwheel plant algorithm with deep learning

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20647. doi: 10.1038/s41598-024-71551-8.

ABSTRACT

Lung cancer (LC) is a life-threatening and dangerous disease all over the world. However, early diagnosis and treatment can save lives. Early diagnosis of malignant cells in the lungs, the organs responsible for oxygenating the human body and expelling carbon dioxide, is critical. Even though a computed tomography (CT) scan is the best imaging approach in the healthcare sector, it is challenging for physicians to identify and interpret the tumour from CT scans. LC diagnosis in CT scans using artificial intelligence (AI) can help radiologists make earlier diagnoses, enhance performance, and decrease false negatives. Deep learning (DL) for detecting lymph node involvement on histopathological slides has become popular due to its great significance in patient diagnosis and treatment. This study introduces a computer-aided diagnosis for LC utilizing the Waterwheel Plant Algorithm with DL (CADLC-WWPADL) approach. The primary aim of the CADLC-WWPADL approach is to classify and identify the existence of LC on CT scans. The CADLC-WWPADL method uses a lightweight MobileNet model for feature extraction. Besides, the CADLC-WWPADL method employs WWPA for the hyperparameter tuning process. Furthermore, the symmetrical autoencoder (SAE) model is utilized for classification. An experimental evaluation is performed to demonstrate the significant detection outputs of the CADLC-WWPADL technique. An extensive comparative study reported that the CADLC-WWPADL technique performs effectively compared with other models, with a maximum accuracy of 99.05% on the benchmark CT image dataset.
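
A minimal sketch of the feature-extraction stage, a MobileNet backbone with its classification head removed; the SAE classifier and WWPA tuning are not reproduced, and the toy input shapes are assumptions:

```python
# Hedged sketch: lightweight MobileNetV2 as a frozen CT feature extractor.
import torch
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2(weights=None)       # pretrained weights in practice
backbone.classifier = torch.nn.Identity()   # keep the pooled 1280-d features
backbone.eval()

@torch.no_grad()
def extract(ct_slices: torch.Tensor) -> torch.Tensor:
    return backbone(ct_slices)

# Stand-in CT slices replicated to 3 channels; in the paper's pipeline a
# symmetrical autoencoder (SAE) would classify these feature vectors.
print(extract(torch.randn(4, 3, 224, 224)).shape)  # (4, 1280)
```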

PMID:39232180 | DOI:10.1038/s41598-024-71551-8

Categories: Literature Watch

A pathology foundation model for cancer diagnosis and prognosis prediction

Wed, 2024-09-04 06:00

Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Online ahead of print.

ABSTRACT

Histopathology image evaluation is indispensable for cancer diagnoses and subtype classification. Standard artificial intelligence methods for histopathology image analyses have focused on optimizing specialized models for each diagnostic task [1,2]. Although such methods have achieved some success, they often have limited generalizability to images generated by different digitization protocols or samples collected from different populations [3]. Here, to address this challenge, we devised the Clinical Histopathology Imaging Evaluation Foundation (CHIEF) model, a general-purpose weakly supervised machine learning framework to extract pathology imaging features for systematic cancer evaluation. CHIEF leverages two complementary pretraining methods to extract diverse pathology representations: unsupervised pretraining for tile-level feature identification and weakly supervised pretraining for whole-slide pattern recognition. We developed CHIEF using 60,530 whole-slide images spanning 19 anatomical sites. Through pretraining on 44 terabytes of high-resolution pathology imaging datasets, CHIEF extracted microscopic representations useful for cancer cell detection, tumour origin identification, molecular profile characterization and prognostic prediction. We successfully validated CHIEF using 19,491 whole-slide images from 32 independent slide sets collected from 24 hospitals and cohorts internationally. Overall, CHIEF outperformed the state-of-the-art deep learning methods by up to 36.1%, showing its ability to address domain shifts observed in samples from diverse populations and processed by different slide preparation methods. CHIEF provides a generalizable foundation for efficient digital pathology evaluation for patients with cancer.
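
The weakly supervised slide-level stage is commonly realized with attention pooling over tile embeddings; below is a generic sketch of that pattern (this is not CHIEF's actual architecture, and all dimensions are illustrative):

```python
# Hedged sketch: attention-based pooling of tile embeddings into one
# slide-level representation, a generic weakly supervised WSI pattern.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        a = self.score(tiles).softmax(dim=0)   # attention weight per tile
        return (a * tiles).sum(dim=0)          # weighted slide embedding

tiles = torch.randn(1000, 768)       # embeddings of 1000 tiles from one slide (toy)
print(AttentionPool()(tiles).shape)  # torch.Size([768])
```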

PMID:39232164 | DOI:10.1038/s41586-024-07894-z

Categories: Literature Watch

A robust deep learning attack immune MRAM-based physical unclonable function

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20649. doi: 10.1038/s41598-024-71730-7.

ABSTRACT

The ubiquitous presence of electronic devices demands robust hardware security mechanisms to safeguard sensitive information from threats. This paper presents a physical unclonable function (PUF) circuit based on magnetoresistive random access memory (MRAM). The circuit utilizes inherent characteristics arising from fabrication variations, specifically magnetic tunnel junction (MTJ) cell resistance, to produce corresponding outputs for applied challenges. In contrast to the Arbiter PUF, the proposed circuit effectively satisfies the strict avalanche criterion (SAC). Additionally, the grid-like structure of the proposed circuit preserves its resistance against machine learning-based modeling attacks. Various machine learning (ML) attacks employing multilayer perceptron (MLP), linear regression (LR), and support vector machine (SVM) networks are simulated for two-array and four-array architectures. The MLP-attack prediction accuracy was 53.61% for a two-array circuit and 49.87% for a four-array circuit, showcasing robust performance even under the worst-case process variations. In addition, deep learning-based modeling attacks in considerably high dimensions, utilizing networks such as a convolutional neural network (CNN), a recurrent neural network (RNN), an MLP, and Larq, achieve accuracies of 50.31%, 50.25%, 50.31%, and 50.31%, respectively. The efficiency of the proposed circuit at the layout level is also investigated for a simplified two-array architecture. The simulation results indicate that the proposed circuit offers intra- and inter-Hamming distances (HD) with means of 0.98% and 49.96%, respectively, and a mean diffuseness of 49.09%.
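
The reported intra/inter-HD figures are fractional Hamming distances over repeated reads and cross-device responses; a minimal sketch on simulated bit vectors (the noise rate and device responses are toy values):

```python
# Hedged sketch: fractional intra- and inter-Hamming distance for PUF responses.
import numpy as np

rng = np.random.default_rng(0)

def frac_hd(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean(a != b))              # fraction of differing bits

device_a = rng.integers(0, 2, 128)             # one device's 128-bit response
reread_a = device_a.copy()
reread_a[rng.choice(128, size=1, replace=False)] ^= 1  # a noisy re-read (~1% flips)
device_b = rng.integers(0, 2, 128)             # an independent device

print("intra-HD:", frac_hd(device_a, reread_a))  # ideally near 0
print("inter-HD:", frac_hd(device_a, device_b))  # ideally near 0.5 (50%)
```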

PMID:39232128 | DOI:10.1038/s41598-024-71730-7

Categories: Literature Watch

Flying foxes optimization with reinforcement learning for vehicle detection in UAV imagery

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20616. doi: 10.1038/s41598-024-71582-1.

ABSTRACT

Intelligent transportation systems (ITS) are installed in smart cities globally, enabling the next generation of ITS through the potential integration of autonomous and connected vehicles. Both technologies are being tested widely in various cities across the world. While these two developing technologies are vital to enabling a fully automated transportation system, it is also necessary to automate other transportation and road components. Unmanned aerial vehicles (UAVs), or drones, are utilized for many surveillance applications in ITS. Detecting on-ground vehicles in drone images is significant for disaster rescue operations, traffic and parking management, and navigating uneven territories. This study presents a flying foxes optimization with deep learning-based vehicle detection and classification model on aerial images (FFODL-VDCAI) technique for ITS applications. The main objective of the FFODL-VDCAI technique is to automate and accurately classify vehicles that exist in aerial images. Three primary processes are involved in the presented FFODL-VDCAI technique. Initially, the FFODL-VDCAI approach utilizes YOLO-GD (Ghost-Net and Depthwise convolution) for vehicle detection, where YOLO-GD uses the lightweight GhostNet in place of the YOLO-v4 backbone and replaces conventional convolutions with depthwise separable and pointwise convolutions. Next, the FFO technique is used for hyperparameter tuning of the GhostNet model. Finally, a deep Q-network (DQN)-based reinforcement learning technique is used to classify detected vehicles effectively. A comprehensive simulation analysis of the FFODL-VDCAI methodology is conducted on a UAV image dataset. The performance validation of the FFODL-VDCAI methodology exhibited superior accuracy values of 96.15% and 92.03% on the PSU and Stanford datasets, respectively.
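
A minimal sketch of the depthwise separable convolution that YOLO-GD substitutes for standard convolutions, a per-channel depthwise filter followed by a 1x1 pointwise mix (channel sizes are illustrative):

```python
# Hedged sketch: depthwise separable convolution (depthwise + pointwise).
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)   # 1x1 channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 80, 80)
print(DepthwiseSeparable(64, 128)(x).shape)  # far fewer weights than Conv2d(64, 128, 3)
```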

PMID:39232093 | DOI:10.1038/s41598-024-71582-1

Categories: Literature Watch

Deep learning approach for detecting tomato flowers and buds in greenhouses on 3P2R gantry robot

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20552. doi: 10.1038/s41598-024-71013-1.

ABSTRACT

In recent years, significant advancements have been made in the field of smart greenhouses, particularly in the application of computer vision and robotics for pollinating flowers. Robotic pollination offers several benefits, including reduced labor requirements and preservation of costly pollen through artificial tomato pollination. However, previous studies have primarily focused on the labeling and detection of tomato flowers alone. Therefore, the objective of this study was to develop a comprehensive methodology for simultaneously labeling, training, and detecting tomato flowers specifically tailored for robotic pollination. To achieve this, transfer learning techniques were employed using well-known models, namely YOLOv5 and the recently introduced YOLOv8, for tomato flower detection. The performance of both models was evaluated using the same image dataset, and a comparison was made based on their Average Precision (AP) scores to determine the superior model. The results indicated that YOLOv8 achieved a higher mean AP (mAP) of 92.6% in tomato flower and bud detection, outperforming YOLOv5 with 91.2%. Notably, YOLOv8 also demonstrated an inference speed of 0.7 ms when considering an image size of 1920 × 1080 pixels resized to 640 × 640 pixels during detection. The image dataset was acquired during both morning and evening periods to minimize the impact of lighting conditions on the detection model. These findings highlight the potential of YOLOv8 for real-time detection of tomato flowers and buds, enabling further estimation of flower blooming peaks and facilitating robotic pollination. In the context of robotic pollination, the study also focuses on the deployment of the proposed detection model on the 3P2R gantry robot. The study introduces a kinematic model and a modified circuit for the gantry robot. The position-based visual servoing method is employed to approach the detected flower during the pollination process. The effectiveness of the proposed visual servoing approach is validated in both un-clustered and clustered plant environments in the laboratory setting. Additionally, this study provides valuable theoretical and practical insights for specialists in the field of greenhouse systems, particularly in the design of flower detection algorithms using computer vision and its deployment in robotic systems used in greenhouses.
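
The transfer-learning workflow the study describes maps directly onto the ultralytics API; a hedged sketch where the dataset YAML, image file, and epoch count are assumptions:

```python
# Hedged sketch of YOLOv8 transfer learning with the ultralytics package.
# "tomato_flowers.yaml" and "greenhouse_frame.jpg" are hypothetical paths.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # start from a pretrained checkpoint
model.train(data="tomato_flowers.yaml",      # hypothetical dataset config
            epochs=100, imgsz=640)

results = model("greenhouse_frame.jpg")      # inference on a single frame
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)       # class, confidence, coordinates
```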

PMID:39232065 | DOI:10.1038/s41598-024-71013-1

Categories: Literature Watch

ERABiLNet: enhanced residual attention with bidirectional long short-term memory

Wed, 2024-09-04 06:00

Sci Rep. 2024 Sep 4;14(1):20622. doi: 10.1038/s41598-024-71299-1.

ABSTRACT

Alzheimer's Disease (AD) causes the slow death of brain cells through brain-cell shrinkage and is more prevalent in older people. In most cases, the symptoms of AD are mistaken for age-related stresses. The most widely utilized method to detect AD is Magnetic Resonance Imaging (MRI). Together with Artificial Intelligence (AI) techniques, identifying diseases related to the brain has become easier. However, the identical phenotype makes it challenging to identify the disease from neuro-images. Hence, a deep learning method to detect AD at an early stage is suggested in this work. The newly implemented Enhanced Residual Attention with Bi-directional Long Short-Term Memory (Bi-LSTM) network (ERABi-LNet) is used in the detection phase to identify AD from MRI images. This model enhances Alzheimer's detection performance by 2-5%, minimizes error rates, and increases the balance of the model so that multi-class problems are supported. At first, MRI images are given to a Residual Attention Network (RAN), specially developed with three convolutional layers, namely atrous, dilated, and Depth-Wise Separable (DWS), to obtain the relevant attributes. The most appropriate attributes are determined by these layers and subjected to target-based fusion. The fused attributes are then fed into the Attention-based Bi-LSTM, from which the final outcome is obtained. A detection efficiency based on the median of 26.37% and an accuracy of 97.367% are obtained by tuning the parameters in the ERABi-LNet with the help of Modified Search and Rescue Operations (MCDMR-SRO). The obtained results are compared with ROA-ERABi-LNet, EOO-ERABi-LNet, GTBO-ERABi-LNet, and SRO-ERABi-LNet. The ERABi-LNet thus provides enhanced accuracy and other performance metrics compared with such deep learning models. The proposed method has better sensitivity, specificity, F1-score, and False Positive Rate than all the above-mentioned competing models, with values of 97.49%, 97.84%, 97.74%, and 2.616, respectively. This ensures that the model has better learning capabilities and provides fewer false positives with balanced prediction.
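
A minimal sketch of the core pairing, CNN-derived feature sequences fed to a bidirectional LSTM with an attention readout; layer sizes, the attention form, and the three-class head are assumptions, not the published configuration:

```python
# Hedged sketch: attention-based Bi-LSTM over per-slice MRI features.
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64, classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, classes)  # toy multi-class AD staging

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(seq)                   # (N, T, 2*hidden)
        a = self.attn(h).softmax(dim=1)         # attention over the T steps
        return self.head((a * h).sum(dim=1))    # class logits

slices = torch.randn(4, 16, 128)  # stand-in per-slice features from the RAN stage
print(BiLSTMAttention()(slices).shape)  # (4, 3)
```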

PMID:39232053 | DOI:10.1038/s41598-024-71299-1

Categories: Literature Watch

The impact of deep learning on diagnostic performance in the differentiation of benign and malignant thyroid nodules

Wed, 2024-09-04 06:00

Med Ultrason. 2024 Sep 4. doi: 10.11152/mu-4432. Online ahead of print.

ABSTRACT

AIMS: This study aims to use deep learning (DL) to classify thyroid nodules as benign or malignant on ultrasonography (US). In addition, it investigates the impact of DL on the diagnostic success of radiologists with different levels of experience.

MATERIAL AND METHODS: This study included 576 US images of thyroid nodules. The dataset was divided into 80% training and 20% test sets. Four radiologists with different levels of experience classified the images in the test set as benign or malignant. A DL model was then trained on the training set and predicted benign or malignant for the test set. The output of the DL model for each nodule in the test set was then presented to the 4 radiologists, who were asked to make the benign-malignant classification again in light of these DL results.

RESULTS: The accuracy of the DL model was 0.9391. The accuracies for junior resident (JR) 1, JR 2, the senior resident (SR), and the senior radiologist (Srad) before DL assistance were 0.7043, 0.7826, 0.8435, and 0.8522, respectively. The accuracies in DL-assisted classification were 0.9130, 0.8696, 0.9304, and 0.9043 for JR 1, JR 2, SR, and Srad, respectively. DL assistance changed the decisions of less experienced radiologists more than those of more experienced radiologists.

CONCLUSION: The DL model classifies thyroid nodules as benign or malignant on US images with higher accuracy than radiologists at any level of experience. Additionally, all radiologists, and most notably the less experienced radiology residents, increased their accuracy with DL-assisted predictions.

PMID:39231286 | DOI:10.11152/mu-4432

Categories: Literature Watch
