Deep learning

Deep learning models for interpretation of point of care ultrasound in military working dogs

Fri, 2024-06-21 06:00

Front Vet Sci. 2024 Jun 6;11:1374890. doi: 10.3389/fvets.2024.1374890. eCollection 2024.

ABSTRACT

INTRODUCTION: Military working dogs (MWDs) are essential for military operations in a wide range of missions. With this pivotal role, MWDs can become casualties requiring specialized veterinary care that may not always be available far forward on the battlefield. Some injuries such as pneumothorax, hemothorax, or abdominal hemorrhage can be diagnosed using point of care ultrasound (POCUS) such as the Global FAST® exam. This presents a unique opportunity for artificial intelligence (AI) to aid in the interpretation of ultrasound images. In this article, deep learning classification neural networks were developed for POCUS assessment in MWDs.

METHODS: Images were collected in five MWDs under general anesthesia or deep sedation for all scan points in the Global FAST® exam. For representative injuries, a cadaver model was used from which positive and negative injury images were captured. A total of 327 ultrasound clips were captured and split across scan points for training three different AI network architectures: MobileNetV2, DarkNet-19, and ShrapML. Gradient class activation mapping (GradCAM) overlays were generated for representative images to better explain AI predictions.
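To make the training-and-explanation recipe concrete, here is a minimal PyTorch sketch of one of the three pipelines: an ImageNet-pretrained MobileNetV2 with a two-class head (negative vs. positive injury) and a GradCAM heatmap taken from its last convolutional block. The layer choice, input size, and random input tensor are illustrative assumptions; the paper's training setup and the DarkNet-19/ShrapML architectures are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# MobileNetV2 with a 2-class head (negative vs. positive injury).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = torch.nn.Linear(model.last_channel, 2)
model.eval()

# GradCAM: hook the last convolutional block for activations and gradients.
acts, grads = {}, {}
layer = model.features[-1]
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed ultrasound frame
model(x)[0, 1].backward()         # backprop the "positive injury" logit

w = grads["v"].mean(dim=(2, 3), keepdim=True)             # channel weights
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))    # weighted sum + ReLU
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # [0, 1] heatmap to overlay
```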

RESULTS: Performance of the AI models reached over 82% accuracy for all scan points. The highest-performing model was trained with the MobileNetV2 network for the cystocolic scan point, achieving 99.8% accuracy. Across all trained networks, the diaphragmatic hepatorenal scan point had the best overall performance. However, GradCAM overlays showed that the models with the highest accuracy, like MobileNetV2, were not always identifying relevant features. Conversely, the GradCAM heatmaps for ShrapML showed general agreement with the regions most indicative of fluid accumulation.

DISCUSSION: Overall, the AI models developed can automate POCUS predictions in MWDs. Preliminarily, ShrapML had the strongest performance and prediction rate paired with accurately tracking fluid accumulation sites, making it the most suitable option for eventual real-time deployment with ultrasound systems. Further integration of this technology with imaging technologies will expand use of POCUS-based triage of MWDs.

PMID:38903685 | PMC:PMC11187302 | DOI:10.3389/fvets.2024.1374890

Categories: Literature Watch

Leveraging ChatGPT to optimize depression intervention through explainable deep learning

Fri, 2024-06-21 06:00

Front Psychiatry. 2024 Jun 6;15:1383648. doi: 10.3389/fpsyt.2024.1383648. eCollection 2024.

ABSTRACT

INTRODUCTION: Mental health issues bring a heavy burden to individuals and societies around the world. Recently, the large language model ChatGPT has demonstrated potential in depression intervention. The primary objective of this study was to ascertain the viability of ChatGPT as a tool for aiding counselors in their interactions with patients while concurrently evaluating its comparability to human-generated content (HGC).

METHODS: We propose a novel framework that integrates state-of-the-art AI technologies, including ChatGPT, BERT, and SHAP, to enhance the accuracy and effectiveness of mental health interventions. ChatGPT generates responses to user inquiries, which are then classified using BERT to ensure the reliability of the content. SHAP is subsequently employed to provide insights into the underlying semantic constructs of the AI-generated recommendations, enhancing the interpretability of the intervention.
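As a rough sketch of the classify-then-explain stages, the snippet below runs a Hugging Face text classifier and attributes its prediction to individual tokens with SHAP. The checkpoint is a public stand-in, not the paper's fine-tuned AIGC-vs-HGC BERT model, and it assumes a shap version with transformers-pipeline support.

```python
import shap
from transformers import pipeline

# Public checkpoint standing in for the paper's fine-tuned BERT classifier.
clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english", top_k=None)

reply = "It sounds like you have been carrying a lot lately. Would you like to talk about it?"
print(clf(reply))                 # label scores for the candidate response

# SHAP wraps the pipeline and attributes the prediction to individual tokens,
# mirroring the paper's interpretability step.
explainer = shap.Explainer(clf)
sv = explainer([reply])
print(sv[0].values.shape)         # (n_tokens, n_classes) token-level attributions
```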

RESULTS: Remarkably, our proposed methodology consistently achieved an impressive accuracy rate of 93.76%. We discerned that ChatGPT always employs a polite and considerate tone in its responses. It refrains from using intricate or unconventional vocabulary and maintains an impersonal demeanor. These findings underscore the potential significance of AI-generated content (AIGC) as an invaluable complementary component in enhancing conventional intervention strategies.

DISCUSSION: This study illuminates the considerable promise offered by the utilization of large language models in the realm of healthcare. It represents a pivotal step toward advancing the development of sophisticated healthcare systems capable of augmenting patient care and counseling practices.

PMID:38903640 | PMC:PMC11188778 | DOI:10.3389/fpsyt.2024.1383648

Categories: Literature Watch

Development of a machine vision-based weight prediction system of butterhead lettuce (Lactuca sativa L.) using deep learning models for industrial plant factory

Fri, 2024-06-21 06:00

Front Plant Sci. 2024 Jun 5;15:1365266. doi: 10.3389/fpls.2024.1365266. eCollection 2024.

ABSTRACT

INTRODUCTION: Indoor agriculture, especially plant factories, is becoming essential because crops can be cultivated year-round, helping to address global food shortages. Plant factories have been growing in scale as they are commercialized. To maximize yield and profits, an on-site system is needed that non-destructively estimates the fresh weight of crops for decision-making on harvest time. However, a multi-layer growing environment with on-site workers is too confined and crowded for developing a high-performance system. This research developed a machine vision-based fresh weight estimation system to monitor crops from the transplant stage to harvest with less physical labor in an on-site industrial plant factory.

METHODS: A linear motion guide with a camera rail moving in both the x-axis and y-axis directions was produced and mounted on a cultivation rack less than 35 cm tall to obtain consistent top-view images of the crops. A Raspberry Pi 4 controlled its operation to capture images automatically every hour. The fresh weight was manually measured eleven times over four months to serve as the ground-truth weight for the models. The acquired images were preprocessed and used to develop weight prediction models based on manual and automatic feature extraction.

RESULTS AND DISCUSSION: The performance of the models was compared, and the best among them was the automatic feature extraction-based model using a convolutional neural network (CNN; ResNet18). The CNN-based model on automatic feature extraction performed much better than any of the manual feature extraction-based models, with a coefficient of determination (R²) of 0.95 and a root mean square error (RMSE) of 8.06 g. However, a multilayer perceptron model (MLP_2) was more appropriate for on-site adoption, since its inference was around nine times faster than the CNN's with only a slightly lower R² (0.93). Through this study, field workers in a confined indoor farming environment can measure the fresh weight of crops non-destructively and easily. In addition, it would help decide when to harvest on the spot.
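For illustration, here is a minimal PyTorch sketch of the CNN-based regression approach: a ResNet18 backbone with a single-output head trained on mean squared error, plus the R² and RMSE metrics reported above. Batch size, learning rate, and the synthetic tensors are assumptions; the paper's preprocessing and training schedule are not shown.

```python
import torch
from torchvision import models

# ResNet18 backbone with a single-output regression head for fresh weight (grams).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 1)

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)   # stand-in batch of top-view crop images
weights_g = torch.rand(8, 1) * 150     # stand-in ground-truth fresh weights (g)

optimizer.zero_grad()
pred = model(images)
loss = criterion(pred, weights_g)
loss.backward()
optimizer.step()

# Evaluation metrics used in the paper: RMSE and R^2.
rmse = torch.sqrt(torch.mean((pred - weights_g) ** 2))
ss_res = torch.sum((weights_g - pred) ** 2)
ss_tot = torch.sum((weights_g - weights_g.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```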

PMID:38903437 | PMC:PMC11188371 | DOI:10.3389/fpls.2024.1365266

Categories: Literature Watch

Lightweight cotton diseases real-time detection model for resource-constrained devices in natural environments

Fri, 2024-06-21 06:00

Front Plant Sci. 2024 Jun 6;15:1383863. doi: 10.3389/fpls.2024.1383863. eCollection 2024.

ABSTRACT

Cotton, a vital textile raw material, is intricately linked to people's livelihoods. Throughout the cotton cultivation process, various diseases threaten cotton crops, significantly impacting both cotton quality and yield. Deep learning has emerged as a crucial tool for detecting these diseases. However, deep learning models with high accuracy often come with redundant parameters, making them challenging to deploy on resource-constrained devices. Existing detection models struggle to strike the right balance between accuracy and speed, limiting their utility in this context. This study introduces the CDDLite-YOLO model, an innovation based on the YOLOv8 model, designed for detecting cotton diseases in natural field conditions. The C2f-Faster module, built on partial convolution, replaces the Bottleneck structure in the C2f module within the backbone network. The neck network adopts a Slim-neck structure, replacing the C2f module with the GSConv and VoVGSCSP modules. In the head, we introduce the MPDIoU loss function, addressing limitations in existing loss functions, and we design the PCDetect detection head, integrating the PCD module and replacing some CBS modules with PCDetect. Our experimental results demonstrate the effectiveness of the CDDLite-YOLO model, achieving a mean average precision (mAP) of 90.6%. With a mere 1.8M parameters, 3.6G FLOPs, and a rapid detection speed of 222.22 FPS, it outperforms other models. It strikes a harmonious balance between detection speed, accuracy, and model size, positioning it as a promising candidate for deployment on an embedded GPU chip without sacrificing performance. Our model serves as a pivotal technical advancement, facilitating timely cotton disease detection and providing valuable insights for the design of detection models for agricultural inspection robots and other resource-constrained agricultural devices.
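The MPDIoU loss mentioned above penalizes the corner-point distances between predicted and ground-truth boxes in addition to the IoU term. Below is a hedged PyTorch sketch following the published 2023 MPDIoU formulation; how CDDLite-YOLO integrates it into the YOLOv8 head may differ.

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h):
    """1 - MPDIoU for [x1, y1, x2, y2] boxes: MPDIoU = IoU - d1^2/(w^2+h^2)
    - d2^2/(w^2+h^2), where d1/d2 are distances between predicted and
    ground-truth top-left / bottom-right corners and (w, h) is the image size."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)

    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return 1 - (iou - d1 / norm - d2 / norm)
```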

PMID:38903431 | PMC:PMC11187009 | DOI:10.3389/fpls.2024.1383863

Categories: Literature Watch

Fusion of fruit image processing and deep learning: a study on identification of citrus ripeness based on R-LBP algorithm and YOLO-CIT model

Fri, 2024-06-21 06:00

Front Plant Sci. 2024 Jun 5;15:1397816. doi: 10.3389/fpls.2024.1397816. eCollection 2024.

ABSTRACT

Citrus fruits are extensively cultivated and have high nutritional value. Identifying distinct ripeness stages of citrus fruits plays a crucial role in planning harvesting paths for citrus-picking robots and in orchard yield estimation. However, identifying citrus fruit ripeness is challenging: the color of green unripe fruits resembles that of tree leaves, leading to omissions in identification, while partially ripe fruits with interspersed orange and green coloring resemble fully ripe fruits, posing a risk of misidentification. This study proposed the YOLO-CIT (You Only Look Once-Citrus) model, integrated with a novel R-LBP (Roughness-Local Binary Pattern) method, to accurately identify citrus fruits at distinct ripeness stages. The R-LBP algorithm, an extension of the LBP algorithm, enhances the texture features of citrus fruits at distinct ripeness stages by calculating the coefficient of variation in grayscale values of pixels within a certain range in different directions around the target pixel. The C3 module embedded with the CBAM (Convolutional Block Attention Module) replaced the original backbone network of the YOLOv5s model to form the backbone of the YOLO-CIT model, and the neck network uses Ghostconv instead of traditional convolution. The fruit segments of the original citrus images processed by the R-LBP algorithm were combined with the grayscale-processed background segments to construct synthetic images, which were added to the training dataset. The experiments showed that the R-LBP algorithm amplifies the texture differences among citrus fruits at distinct ripeness stages. The YOLO-CIT model combined with the R-LBP algorithm achieved a Precision of 88.13%, a Recall of 93.16%, an F1 score of 90.89, a mAP@0.5 of 85.88%, and an average detection time of 6.1 ms for citrus fruit ripeness identification in complex environments. The model can accurately and swiftly identify citrus fruits at distinct ripeness stages in real-world environments, effectively guiding the selection of picking targets and path planning for harvesting robots.
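The R-LBP idea rests on a local coefficient of variation (standard deviation over mean) of grayscale values around each pixel. The sketch below computes a simple windowed CV map as an illustrative stand-in; the paper's actual R-LBP additionally works directionally and folds the result into an LBP-style code, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cv_texture_map(gray, size=5):
    """Local coefficient-of-variation (std/mean) map of a grayscale image.
    Illustrative stand-in only: the real R-LBP evaluates the CV directionally
    around each target pixel and encodes it as an LBP-style pattern."""
    g = gray.astype(np.float64)
    mean = uniform_filter(g, size)
    sq_mean = uniform_filter(g * g, size)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return std / (mean + 1e-7)

texture = cv_texture_map(np.random.rand(256, 256))  # stand-in citrus image
```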

PMID:38903428 | PMC:PMC11188418 | DOI:10.3389/fpls.2024.1397816

Categories: Literature Watch

Deep learning model for prenatal congenital heart disease (CHD) screening generalizes to the community setting and outperforms clinical detection

Fri, 2024-06-21 06:00

medRxiv [Preprint]. 2023 Mar 12:2023.03.10.23287134. doi: 10.1101/2023.03.10.23287134.

ABSTRACT

OBJECTIVE: Congenital heart defects (CHD) are still missed despite nearly universal prenatal ultrasound screening programs, and missed diagnoses may result in severe morbidity or even death. Deep learning (DL) can automate image recognition from ultrasound. The aim of this study was to apply a previously developed DL model, trained on images from a tertiary center, to fetal ultrasound images obtained during the second-trimester standard anomaly scan in a low-risk population.

METHODS: All pregnancies with isolated severe CHD in the Northwestern region of the Netherlands between 2015 and 2016 with available stored images were evaluated, as well as a sample of normal fetuses' examinations from the same region. We compared initial clinical diagnostic accuracy (made in real time), model accuracy, and performance of blinded human experts with access only to the stored images (like the model). We analyzed performance by study characteristics such as duration, quality (independently scored by study investigators), number of stored images, and availability of screening views.

RESULTS: A total of 42 normal fetuses and 66 cases of isolated CHD at birth were analyzed. Of the abnormal cases, 31 were missed and 35 were detected at the time of the clinical anatomy scan (sensitivity 53 percent). Model sensitivity and specificity were 91 and 93 percent, respectively. Blinded human experts (n=3) achieved sensitivity and specificity of 55±10 percent (range 47-67 percent) and 71±13 percent (range 57-83 percent), respectively. There was a statistically significant difference in model correctness by expert-grader quality score (p=0.04). Abnormal cases included 19 lesions the model had not encountered in its training; the model's performance (15/19 correct) was not statistically significantly different on previously encountered vs. never-before-seen lesions (p=0.07).

CONCLUSIONS: A previously trained DL algorithm outperformed human experts in detecting CHD in a cohort in which over 50 percent of CHD cases were initially missed clinically. Notably, the DL algorithm performed well on community-acquired images in a low-risk population, including on lesions it had not previously been exposed to. Furthermore, when both the model and the blinded human experts had access to the stored images alone, the model outperformed the human experts. Together, these findings support the proposition that DL models can improve prenatal detection of CHD.

PMID:38903074 | PMC:PMC11188113 | DOI:10.1101/2023.03.10.23287134

Categories: Literature Watch

A deep learning-powered diagnostic model for acute pancreatitis

Thu, 2024-06-20 06:00

BMC Med Imaging. 2024 Jun 20;24(1):154. doi: 10.1186/s12880-024-01339-9.

ABSTRACT

BACKGROUND: Acute pancreatitis is one of the most common diseases requiring emergency surgery. Rapid and accurate recognition of acute pancreatitis can help improve clinical outcomes. This study aimed to develop a deep learning-powered diagnostic model for acute pancreatitis.

MATERIALS AND METHODS: In this investigation, we enrolled a cohort of 190 patients with acute pancreatitis who were admitted to Sichuan Provincial People's Hospital between January 2020 and December 2021. Abdominal computed tomography (CT) scans were obtained from both patients with acute pancreatitis and healthy individuals. Our model was constructed using two modules: (1) the acute pancreatitis classifier module; (2) the pancreatitis lesion segmentation module. Each model's performance was assessed based on precision, recall rate, F1-score, Area Under the Curve (AUC), loss rate, frequency-weighted accuracy (fwavacc), and Mean Intersection over Union (MIOU).

RESULTS: Upon admission, significant variations were observed between patients with mild and severe acute pancreatitis in inflammatory indexes, liver, and kidney function indicators, as well as coagulation parameters. The acute pancreatitis classifier module exhibited commendable diagnostic efficacy, showing an impressive AUC of 0.993 (95%CI: 0.978-0.999) in the test set (comprising healthy examination patients vs. those with acute pancreatitis, P < 0.001) and an AUC of 0.850 (95%CI: 0.790-0.898) in the external validation set (healthy examination patients vs. patients with acute pancreatitis, P < 0.001). Furthermore, the acute pancreatitis lesion segmentation module demonstrated exceptional performance in the validation set. For pancreas segmentation, peripancreatic inflammatory exudation, peripancreatic effusion, and peripancreatic abscess necrosis, the MIOU values were 86.02 (84.52, 87.20), 61.81 (56.25, 64.83), 57.73 (49.90, 68.23), and 66.36 (55.08, 72.12), respectively. These findings underscore the robustness and reliability of the developed models in accurately characterizing and assessing acute pancreatitis.
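The MIOU figures above are mean Intersection over Union scores (×100) across segmentation classes. A minimal NumPy sketch of the metric on a toy two-class mask follows; the averaging conventions of the paper's evaluation code may differ.

```python
import numpy as np

def miou(pred, gt, n_classes):
    """Mean Intersection over Union across classes, the metric reported for the
    pancreas and peripancreatic lesion masks (values in the abstract are x100)."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Toy 2-class example (background vs. pancreas):
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(miou(pred, gt, 2))  # background IoU 1/2, pancreas IoU 2/3 -> MIoU ~0.58
```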

CONCLUSION: The diagnostic model for acute pancreatitis, driven by deep learning, exhibits excellent efficacy in accurately evaluating the severity of the condition.

TRIAL REGISTRATION: This is a retrospective study.

PMID:38902660 | DOI:10.1186/s12880-024-01339-9

Categories: Literature Watch

Estimating helmet wearing rates via a scalable, low-cost algorithm: a novel integration of deep learning and google street view

Thu, 2024-06-20 06:00

BMC Public Health. 2024 Jun 20;24(1):1645. doi: 10.1186/s12889-024-19118-0.

ABSTRACT

INTRODUCTION: Wearing a helmet substantially reduces the risk of head injuries in the event of a motorcycle crash. Countries around the world are committed to promoting helmet use, but progress has been slow and uneven. There is an urgent need for large-scale data collection for situation assessment and intervention evaluation.

METHODS: This study proposes a scalable, low-cost algorithm to estimate helmet-wearing rates. Applying the state-of-the-art deep learning technique for object detection to images acquired from Google Street View, the algorithm has the potential to provide accurate estimates at the global level.

RESULTS: Trained on a sample of 3995 images, the algorithm achieved high accuracy. The out-of-sample prediction results for all three object classes (helmets, drivers, and passengers) reveal a precision of 0.927, a recall value of 0.922, and a mean average precision at 50 (mAP50) of 0.956.
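A sketch of how such an estimate could be assembled is shown below: run a trained detector over Street View images, count detections per class, and take helmets per rider as the wearing rate. The ultralytics YOLO API serves as a stand-in detector framework; the weights file, image paths, and rate definition are hypothetical, and the paper's exact model may differ.

```python
from ultralytics import YOLO

# Stand-in detector: the paper trained on 3995 annotated Street View images with
# classes {helmet, driver, passenger}; "helmet_detector.pt" is hypothetical.
model = YOLO("helmet_detector.pt")

helmets = riders = 0
for path in ["scene_001.jpg", "scene_002.jpg"]:   # placeholder image paths
    r = model(path)[0]
    names = [r.names[int(c)] for c in r.boxes.cls]
    helmets += names.count("helmet")
    riders += names.count("driver") + names.count("passenger")

# Helmet-wearing rate: detected helmets per detected rider (driver or passenger).
print(helmets / max(riders, 1))
```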

DISCUSSION: The remarkable model performance suggests the algorithm's capacity to generate accurate estimates of helmet-wearing rates from an image source with global coverage. The significant enhancement in the availability of helmet usage data resulting from this approach could bolster progress tracking and facilitate evidence-based policymaking for helmet wearing globally.

PMID:38902622 | DOI:10.1186/s12889-024-19118-0

Categories: Literature Watch

Machine learning: an advancement in biochemical engineering

Thu, 2024-06-20 06:00

Biotechnol Lett. 2024 Jun 21. doi: 10.1007/s10529-024-03499-8. Online ahead of print.

ABSTRACT

Machine learning is one of the most remarkable techniques recently introduced into the field of bioprocess engineering. Bioprocess engineering has drawn much attention due to its vast applications in domains such as biopharmaceuticals, fossil fuel alternatives, environmental remediation, and the food and beverage industry. However, bioprocesses are often challenging to optimize because of their unpredictable mechanisms. Furthermore, biological systems are extremely complicated; hence, machine learning algorithms could potentially be utilized to improve and build new biotechnological processes. After providing insight into the fundamental mathematics of commonly used machine learning algorithms, including Support Vector Machines, Principal Component Analysis, Partial Least Squares, and Reinforcement Learning, the present study discusses various case studies on the application of machine learning in bioprocess engineering. Recent advancements, as well as challenges posed in this area along with their potential solutions, are also presented.

PMID:38902585 | DOI:10.1007/s10529-024-03499-8

Categories: Literature Watch

Automatic classification of normal and abnormal cell division using deep learning

Thu, 2024-06-20 06:00

Sci Rep. 2024 Jun 20;14(1):14241. doi: 10.1038/s41598-024-64834-7.

ABSTRACT

In recent years, there has been a surge in the development of methods for cell segmentation and tracking, with initiatives like the Cell Tracking Challenge driving progress in the field. Most studies focus on regular cell population videos in which cells are segmented and followed, and parental relationships annotated. However, DNA damage induced by genotoxic drugs or ionizing radiation produces additional abnormal events since it leads to behaviors like abnormal cell divisions (resulting in a number of daughters different from two) and cell death. With this in mind, we developed an automatic mitosis classifier to categorize small mitosis image sequences centered around one cell as "Normal" or "Abnormal." These mitosis sequences were extracted from videos of cell populations exposed to varying levels of radiation that affect the cell cycle's development. We explored several deep-learning architectures and found that a network with a ResNet50 backbone and including a Long Short-Term Memory (LSTM) layer produced the best results (mean F1-score: 0.93 ± 0.06). In the future, we plan to integrate this classifier with cell segmentation and tracking to build phylogenetic trees of the population after genomic stress.
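A minimal PyTorch sketch of the reported best architecture, a ResNet50 frame encoder feeding an LSTM over the mitosis image sequence, is given below. The hidden size, sequence length, and use of the final time step for classification are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class MitosisClassifier(nn.Module):
    """ResNet50 per-frame encoder followed by an LSTM over the sequence,
    matching the best-performing architecture reported in the abstract."""
    def __init__(self, hidden=256, n_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()           # 2048-d feature per frame
        self.encoder = backbone
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                 # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # classify from the last time step

logits = MitosisClassifier()(torch.randn(2, 8, 3, 224, 224))  # "Normal" vs "Abnormal"
```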

PMID:38902496 | DOI:10.1038/s41598-024-64834-7

Categories: Literature Watch

Development and validation of a smartphone-based deep-learning-enabled system to detect middle-ear conditions in otoscopic images

Thu, 2024-06-20 06:00

NPJ Digit Med. 2024 Jun 20;7(1):162. doi: 10.1038/s41746-024-01159-9.

ABSTRACT

Middle-ear conditions are common causes of primary care visits, hearing impairment, and inappropriate antibiotic use. Deep learning (DL) may assist clinicians in interpreting otoscopic images. This study included patients over 5 years old from an ambulatory ENT practice in Strasbourg, France, between 2013 and 2020. Digital otoscopic images were obtained using a smartphone-attached otoscope (Smart Scope, Karl Storz, Germany) and labeled by a senior ENT specialist across 11 diagnostic classes (reference standard). An Inception-v2 DL model was trained using 41,664 otoscopic images, and its diagnostic accuracy was evaluated by calculating class-specific estimates of sensitivity and specificity. The model was then incorporated into a smartphone app called i-Nside. The DL model was evaluated on a validation set of 3,962 images and a held-out test set comprising 326 images. On the validation set, all class-specific estimates of sensitivity and specificity exceeded 98%. On the test set, the DL model achieved a sensitivity of 99.0% (95% confidence interval: 94.5-100) and a specificity of 95.2% (91.5-97.6) for the binary classification of normal vs. abnormal images; wax plugs were detected with a sensitivity of 100% (94.6-100) and specificity of 97.7% (95.0-99.1); other class-specific estimates of sensitivity and specificity ranged from 33.3% to 92.3% and 96.0% to 100%, respectively. We present an end-to-end DL-enabled system able to achieve expert-level diagnostic accuracy for identifying normal tympanic aspects and wax plugs within digital otoscopic images. However, the system's performance varied for other middle-ear conditions. Further prospective validation is necessary before wider clinical deployment.
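The class-specific estimates above are one-vs-rest sensitivities and specificities derived from the multi-class predictions. The NumPy sketch below shows that computation on a toy confusion matrix; the counts are invented for illustration, not the study's data.

```python
import numpy as np

def class_sens_spec(cm):
    """Per-class (one-vs-rest) sensitivity and specificity from a multiclass
    confusion matrix (rows = reference standard, columns = predictions)."""
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Toy 3-class example (e.g., normal / wax plug / other):
cm = np.array([[50, 2, 1],
               [3, 40, 2],
               [2, 1, 30]])
sens, spec = class_sens_spec(cm)
```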

PMID:38902477 | DOI:10.1038/s41746-024-01159-9

Categories: Literature Watch

Deep learning reconstruction for lumbar spine MRI acceleration: a prospective study

Thu, 2024-06-20 06:00

Eur Radiol Exp. 2024 Jun 21;8(1):67. doi: 10.1186/s41747-024-00470-0.

ABSTRACT

BACKGROUND: We compared magnetic resonance imaging (MRI) turbo spin-echo images reconstructed using a deep learning technique (TSE-DL) with standard turbo spin-echo (TSE-SD) images of the lumbar spine regarding image quality and detection performance of common degenerative pathologies.

METHODS: This prospective, single-center study included 31 patients (15 males and 16 females; aged 51 ± 16 years (mean ± standard deviation)) who underwent lumbar spine exams with both TSE-SD and TSE-DL acquisitions for degenerative spine diseases. Images were analyzed by two radiologists and assessed for qualitative image quality using a 4-point Likert scale, quantitative signal-to-noise ratio (SNR) of anatomic landmarks, and detection of common pathologies. Paired-sample t, Wilcoxon, and McNemar tests, unweighted/linearly weighted Cohen κ statistics, and intraclass correlation coefficients were used.

RESULTS: Scan time for TSE-DL and TSE-SD protocols was 2:55 and 5:17 min:s, respectively. The overall image quality was either significantly higher for TSE-DL or not significantly different between TSE-SD and TSE-DL. TSE-DL demonstrated higher SNR and subject noise scores than TSE-SD. For pathology detection, the interreader agreement was substantial to almost perfect for TSE-DL, with κ values ranging from 0.61 to 1.00; the interprotocol agreement was almost perfect for both readers, with κ values ranging from 0.84 to 1.00. There was no significant difference in the diagnostic confidence or detection rate of common pathologies between the two sequences (p ≥ 0.081).
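The unweighted and linearly weighted Cohen κ statistics cited above can be computed directly with scikit-learn, as in the sketch below; the reader gradings shown are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Invented gradings (e.g., pathology severity 0-3) from two readers on ten exams.
reader1 = [0, 1, 2, 2, 3, 0, 1, 1, 2, 3]
reader2 = [0, 1, 2, 3, 3, 0, 1, 2, 2, 3]

print(cohen_kappa_score(reader1, reader2))                    # unweighted kappa
print(cohen_kappa_score(reader1, reader2, weights="linear"))  # linearly weighted kappa
```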

CONCLUSIONS: TSE-DL allowed for a 45% reduction in scan time over TSE-SD in lumbar spine MRI without compromising the overall image quality and showed comparable detection performance of common pathologies in the evaluation of degenerative lumbar spine changes.

RELEVANCE STATEMENT: Deep learning-reconstructed lumbar spine MRI protocol enabled a 45% reduction in scan time compared with conventional reconstruction, with comparable image quality and detection performance of common degenerative pathologies.

KEY POINTS: • Lumbar spine MRI with deep learning reconstruction has broad application prospects. • Deep learning reconstruction of lumbar spine MRI saved 45% scan time without compromising overall image quality. • When compared with standard sequences, deep learning reconstruction showed similar detection performance of common degenerative lumbar spine pathologies.

PMID:38902467 | DOI:10.1186/s41747-024-00470-0

Categories: Literature Watch

Artificial intelligence in musculoskeletal imaging: realistic clinical applications in the next decade

Thu, 2024-06-20 06:00

Skeletal Radiol. 2024 Jun 20. doi: 10.1007/s00256-024-04684-6. Online ahead of print.

ABSTRACT

This article will provide a perspective review of the most extensively investigated deep learning (DL) applications for musculoskeletal disease detection that have the best potential to translate into routine clinical practice over the next decade. Deep learning methods for detecting fractures, estimating pediatric bone age, calculating bone measurements such as lower extremity alignment and Cobb angle, and grading osteoarthritis on radiographs have been shown to have high diagnostic performance with many of these applications now commercially available for use in clinical practice. Many studies have also documented the feasibility of using DL methods for detecting joint pathology and characterizing bone tumors on magnetic resonance imaging (MRI). However, musculoskeletal disease detection on MRI is difficult as it requires multi-task, multi-class detection of complex abnormalities on multiple image slices with different tissue contrasts. The generalizability of DL methods for musculoskeletal disease detection on MRI is also challenging due to fluctuations in image quality caused by the wide variety of scanners and pulse sequences used in routine MRI protocols. The diagnostic performance of current DL methods for musculoskeletal disease detection must be further evaluated in well-designed prospective studies using large image datasets acquired at different institutions with different imaging parameters and imaging hardware before they can be fully implemented in clinical practice. Future studies must also investigate the true clinical benefits of current DL methods and determine whether they could enhance quality, reduce error rates, improve workflow, and decrease radiologist fatigue and burnout with all of this weighed against the costs.

PMID:38902420 | DOI:10.1007/s00256-024-04684-6

Categories: Literature Watch

Deep Ensemble learning and quantum machine learning approach for Alzheimer's disease detection

Thu, 2024-06-20 06:00

Sci Rep. 2024 Jun 20;14(1):14196. doi: 10.1038/s41598-024-61452-1.

ABSTRACT

Alzheimer's disease (AD) is among the chronic neurodegenerative diseases that most threaten global public health, and its growing prevalence worldwide poses a vital threat to human wellbeing. Early diagnosis of AD enables timely intervention and medication, which may improve the prognosis and quality of life of affected individuals. Quantum computing provides a more efficient model for different disease classification tasks than classical machine learning approaches, yet its full potential has not been applied to Alzheimer's disease classification tasks as expected. In this study, we proposed an ensemble deep learning model based on quantum machine learning classifiers to classify Alzheimer's disease. The Alzheimer's Disease Neuroimaging Initiative I and II datasets were merged for the AD classification task. We combined important features extracted by customized versions of the VGG16 and ResNet50 models from the merged images and then fed these features to a quantum machine learning classifier to classify them as non-demented, mildly demented, moderately demented, or very mildly demented. We evaluated the performance of our model using five metrics: accuracy, area under the curve (AUC), F1-score, precision, and recall. The results validate that the proposed model outperforms several state-of-the-art methods for detecting Alzheimer's disease, registering an accuracy of 99.89% and an F1-score of 98.37%.
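A sketch of the dual-backbone feature-fusion step is shown below: ImageNet-pretrained VGG16 and ResNet50 features are concatenated per image. The paper feeds the fused vector to a quantum machine learning classifier; a plain linear head stands in for it here, and the paper's customized backbone modifications are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

# Dual-backbone feature extraction; the classifier heads are stripped so each
# network emits its penultimate feature vector.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = nn.Identity()                      # 25088-d features
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = nn.Identity()                           # 2048-d features

# Linear head standing in for the paper's quantum ML classifier (4 classes:
# non-, very mild, mild, and moderate demented).
head = nn.Linear(25088 + 2048, 4)

x = torch.randn(2, 3, 224, 224)    # stand-in for preprocessed MRI slices
fused = torch.cat([vgg(x), resnet(x)], dim=1)
logits = head(fused)
```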

PMID:38902368 | DOI:10.1038/s41598-024-61452-1

Categories: Literature Watch

A multi-feature spatial-temporal fusion network for traffic flow prediction

Thu, 2024-06-20 06:00

Sci Rep. 2024 Jun 20;14(1):14264. doi: 10.1038/s41598-024-65040-1.

ABSTRACT

Traffic flow prediction is key to alleviating traffic congestion, yet very challenging due to complex influence factors. Currently, most deep learning models are designed to dig out intricate dependencies in continuous standardized sequences, which depend on high data continuity and regularized distributions. However, data discontinuity and irregular distributions are inevitable in real-world applications, so we need to find a way to exploit the power of multi-feature fusion rather than continuous relations in standardized sequences. To this end, we conduct prediction based on multiple traffic features reflecting the complex influence factors. First, we propose ATFEM, an adaptive traffic features extraction mechanism, which selects important influence factors to construct a joint temporal features matrix and a global spatial features matrix according to the traffic conditions; in this way, the features' representation ability is improved. Second, we propose MFSTN, a multi-feature spatial-temporal fusion network, which includes a temporal transformer encoder and a graph attention network to obtain the latent representation of spatial-temporal features. In particular, we design a scaled spatial-temporal fusion module that automatically learns optimal fusion weights and adapts to inconsistent spatial-temporal dimensions. Finally, a multi-layer perceptron learns the mapping between these comprehensive features and traffic flow. This approach helps improve the interpretability of the prediction. Experimental results show that the proposed model outperforms a variety of baselines and can accurately predict traffic flow even when the data missing rate is high.
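To illustrate the scaled fusion idea, here is a minimal PyTorch module that projects temporal and spatial features of inconsistent dimensions to a shared width and combines them with learnable softmax weights. The dimensions and internals are assumptions; MFSTN's transformer and graph attention encoders are not shown.

```python
import torch
import torch.nn as nn

class ScaledSTFusion(nn.Module):
    """Sketch of a scaled spatial-temporal fusion step: project both feature
    sets to a shared width, then blend them with learnable fusion weights."""
    def __init__(self, d_temporal, d_spatial, d_out):
        super().__init__()
        self.proj_t = nn.Linear(d_temporal, d_out)  # align temporal dimension
        self.proj_s = nn.Linear(d_spatial, d_out)   # align spatial dimension
        self.alpha = nn.Parameter(torch.zeros(2))   # learnable fusion weights

    def forward(self, h_t, h_s):
        w = torch.softmax(self.alpha, dim=0)        # weights learned in training
        return w[0] * self.proj_t(h_t) + w[1] * self.proj_s(h_s)

fused = ScaledSTFusion(64, 32, 128)(torch.randn(16, 64), torch.randn(16, 32))
```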

PMID:38902350 | DOI:10.1038/s41598-024-65040-1

Categories: Literature Watch

Bio-inspired Deep Learning-Personalized Ensemble Alzheimer's Diagnosis Model for Mental Well-being

Thu, 2024-06-20 06:00

SLAS Technol. 2024 Jun 18:100161. doi: 10.1016/j.slast.2024.100161. Online ahead of print.

ABSTRACT

Most classification models for Alzheimer's disease (AD) diagnosis lack strategies specific to individual input samples and thus easily overlook personalized differences between samples. This research introduces a personalized dynamic ensemble convolutional neural network (PDECNN), which builds a specific integration strategy based on the distinctiveness of each sample. The model dynamically adjusts the degenerated brain areas of interest for each sample, since it can adapt to variations in the degeneration of brain areas across samples. In clinical settings, the PDECNN model has additional diagnostic value because it can identify sample-specific degraded brain areas from the input. The model considers the variability of brain-region degeneration levels between input samples, evaluates the degree of degeneration of specific brain regions using an attention mechanism, and selects and integrates brain-region features based on the degree of degeneration. With this redesign, classification accuracy improves by 4%, 11%, and 8%, respectively. Moreover, the degraded brain regions identified by the model show high consistency with the clinical manifestations of AD.
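A sketch of per-sample, attention-weighted integration of brain-region features is given below; the scoring network, feature dimensions, and pooling are assumptions rather than PDECNN's actual design.

```python
import torch
import torch.nn as nn

class RegionAttentionFusion(nn.Module):
    """Sketch: score each brain-region feature per sample, then combine the
    regions with those per-sample attention weights."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)   # degeneration score per region feature

    def forward(self, region_feats):   # (batch, n_regions, d)
        attn = torch.softmax(self.score(region_feats), dim=1)  # per-sample weights
        fused = (attn * region_feats).sum(dim=1)
        return fused, attn.squeeze(-1)

feats, weights = RegionAttentionFusion(64)(torch.randn(4, 8, 64))
# "weights" ranks which regions drove each individual prediction.
```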

PMID:38901762 | DOI:10.1016/j.slast.2024.100161

Categories: Literature Watch

Advancing the visibility of outer retinal integrity in neovascular age-related macular degeneration with high-resolution OCT

Thu, 2024-06-20 06:00

Can J Ophthalmol. 2024 Jun 17:S0008-4182(24)00157-1. doi: 10.1016/j.jcjo.2024.05.014. Online ahead of print.

ABSTRACT

OBJECTIVE: To compare the visibility and accessibility of the outer retina in neovascular age-related macular degeneration (nAMD) between 2 OCT devices.

METHODS: In this prospective, cross-sectional exploratory study, differences in thickness and loss of individual outer retinal layers in eyes with nAMD and in age-matched healthy eyes between a next-level High-Res OCT device and the conventional SPECTRALIS OCT (both Heidelberg Engineering GmbH, Heidelberg, Germany) were analyzed. Eyes with nAMD and at least 250 nL of retinal fluid, quantified by an approved deep-learning algorithm (Fluid Monitor, RetInSight, Vienna, Austria), fulfilled the inclusion criteria. The outer retinal layers were segmented using automated layer segmentation and were corrected manually. Layer loss and thickness were compared between both devices using a linear mixed-effects model and a paired t test.

RESULTS: Nineteen eyes of 17 patients with active nAMD and 17 healthy eyes were included. For nAMD eyes, the thickness of the retinal pigment epithelium (RPE) differed significantly between the devices (25.42 μm [95% CI, 14.24-36.61] and 27.31 μm [95% CI, 16.12-38.50] for high-resolution OCT and conventional OCT, respectively; p = 0.033). Furthermore, a significant difference was found in the mean relative external limiting membrane loss (p = 0.021). However, the thickness of photoreceptors, RPE integrity loss, and photoreceptor integrity loss did not differ significantly between devices in the central 3 mm. In healthy eyes, a significant difference in both RPE and photoreceptor thickness between devices was shown (p < 0.001).

CONCLUSION: Central RPE thickness was significantly thinner on high-resolution OCT compared with conventional OCT images, explained by the superior optical separation of the RPE and Bruch's membrane.

PMID:38901467 | DOI:10.1016/j.jcjo.2024.05.014

Categories: Literature Watch

Explainable AI based automated segmentation and multi-stage classification of gastroesophageal reflux using machine learning techniques

Thu, 2024-06-20 06:00

Biomed Phys Eng Express. 2024 Jun 20. doi: 10.1088/2057-1976/ad5a14. Online ahead of print.

ABSTRACT

Presently, close to two million patients globally succumb to gastroesophageal reflux disease (GERD). Video endoscopy represents cutting-edge technology in medical imaging, facilitating the diagnosis of various gastrointestinal ailments including stomach ulcers, bleeding, and polyps. However, the abundance of images produced by medical video endoscopy requires significant time for doctors to analyze thoroughly, posing a challenge for manual diagnosis. This challenge has spurred research into computer-aided techniques aimed at diagnosing the plethora of generated images swiftly and accurately. The novelty of the proposed methodology lies in the development of a system tailored for the diagnosis of gastrointestinal diseases. The proposed work used the YOLOv5 object detection method to identify abnormal regions of interest and DeepLabV3+ to segment the abnormal regions in GERD. Features are then extracted from the segmented image and given as input to seven different machine learning classifiers and a custom deep neural network model for multi-stage classification of GERD. DeepLabV3+ attains an excellent segmentation accuracy of 95.2% and an F1 score of 93.3%. The custom dense neural network obtained a classification accuracy of 90.5%. Among the seven machine learning classifiers, the support vector machine (SVM) outperformed all others with a classification accuracy of 87%. Thus, the combination of object detection, deep learning-based segmentation, and machine learning classification enables the timely identification and surveillance of problems associated with GERD for healthcare providers.
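The final classification stage can be pictured as in the sketch below: hand-crafted features from each segmented abnormal region are fed to an SVM for multi-stage grading. The feature set, array shapes, and labels are illustrative placeholders, since the abstract does not specify the extracted features.

```python
import numpy as np
from sklearn.svm import SVC

def region_features(img, mask):
    """Toy feature vector from a segmented abnormal region (mean/std intensity
    and relative area); the paper's actual feature set is not specified."""
    px = img[mask > 0]
    return [px.mean(), px.std(), mask.mean()]

# X: features from DeepLabV3+ masks over YOLOv5-detected ROIs (placeholders here);
# y: placeholder multi-stage GERD labels.
X = np.array([region_features(np.random.rand(64, 64),
                              np.random.randint(0, 2, (64, 64)))
              for _ in range(40)])
y = np.random.randint(0, 3, 40)

clf = SVC(kernel="rbf").fit(X, y)   # SVM was the best of the seven classifiers
print(clf.predict(X[:2]))
```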

PMID:38901416 | DOI:10.1088/2057-1976/ad5a14

Categories: Literature Watch

Discovery and characterization of novel FGFR1 inhibitors in triple-negative breast cancer via hybrid virtual screening and molecular dynamics simulations

Thu, 2024-06-20 06:00

Bioorg Chem. 2024 Jun 10;150:107553. doi: 10.1016/j.bioorg.2024.107553. Online ahead of print.

ABSTRACT

The overexpression of FGFR1 is thought to contribute significantly to the progression of triple-negative breast cancer (TNBC), impacting tumorigenesis, growth, metastasis, and drug resistance. Consequently, the pursuit of effective FGFR1 inhibitors is a key area of research interest. In response to this need, our study developed a hybrid virtual screening method utilizing KarmaDock, an innovative algorithm that blends deep learning with molecular docking, alongside Schrödinger's Residue Scanning. This strategy led us to identify compound 6, which demonstrated promising FGFR1 inhibitory activity, evidenced by an IC50 value of approximately 0.24 nM in the HTRF bioassay. Further evaluation revealed that this compound also inhibits the FGFR1 V561M variant with an IC50 value of around 1.24 nM. Our subsequent investigations demonstrated that compound 6 robustly suppresses the migration and invasion capacities of TNBC cell lines through the downregulation of p-FGFR1 and modulation of EMT markers, highlighting its promise as a potent anti-metastatic therapeutic agent. Additionally, molecular dynamics simulations provided a deeper understanding of the compound's specific binding interactions with FGFR1.

PMID:38901279 | DOI:10.1016/j.bioorg.2024.107553

Categories: Literature Watch

Diagnostic test accuracy of externally validated convolutional neural network (CNN) artificial intelligence (AI) models for emergency head CT scans - A systematic review

Thu, 2024-06-20 06:00

Int J Med Inform. 2024 Jun 13;189:105523. doi: 10.1016/j.ijmedinf.2024.105523. Online ahead of print.

ABSTRACT

BACKGROUND: The surge in emergency head CT imaging and artificial intelligence (AI) advancements, especially deep learning (DL) and convolutional neural networks (CNN), have accelerated the development of computer-aided diagnosis (CADx) for emergency imaging. External validation assesses model generalizability, providing preliminary evidence of clinical potential.

OBJECTIVES: This study systematically reviews externally validated CNN-CADx models for emergency head CT scans, critically appraises diagnostic test accuracy (DTA), and assesses adherence to reporting guidelines.

METHODS: Studies comparing CNN-CADx model performance to a reference standard were eligible. The review was registered in PROSPERO (CRD42023411641) and conducted on Medline, Embase, EBM Reviews, and Web of Science following the PRISMA-DTA guideline. DTA and reporting were systematically extracted and appraised using standardised checklists (STARD, CHARMS, CLAIM, TRIPOD, PROBAST, QUADAS-2).

RESULTS: Six of 5636 identified studies were eligible. The common target condition was intracranial haemorrhage (ICH), and the intended workflow roles were auxiliary to experts. Due to methodological and clinical between-study variation, meta-analysis was inappropriate. Scan-level sensitivity exceeded 90% in 5/6 studies, while specificities ranged from 58.0% to 97.7%. The SROC 95% predictive region was markedly broader than the confidence region, ranging above 50% sensitivity and 20% specificity. All studies had an unclear or high risk of bias and concern for applicability (QUADAS-2, PROBAST), and reporting adherence was below 50% in 20 of 32 TRIPOD items.

CONCLUSION: Only six of the 5636 identified studies (roughly 0.1%) met the eligibility criteria. The evidence on the DTA of CNN-CADx models for emergency head CT scans remains limited in the scope of this review, as the reviewed studies were scarce, unsuited to meta-analysis, and undermined by inadequate methodological conduct and reporting. Properly conducted external validation remains a preliminary step in evaluating the clinical potential of AI-CADx models, but prospective and pragmatic clinical validation in comparative trials remains most crucial. In conclusion, future AI-CADx research should be methodologically standardized and reported in a clinically meaningful way to avoid research waste.

PMID:38901270 | DOI:10.1016/j.ijmedinf.2024.105523

Categories: Literature Watch
