Deep learning

AACFlow: An end-to-end model based on attention augmented convolutional neural network and flow-attention mechanism for identification of anticancer peptides

Thu, 2024-03-07 06:00

Bioinformatics. 2024 Mar 7:btae142. doi: 10.1093/bioinformatics/btae142. Online ahead of print.

ABSTRACT

MOTIVATION: Anticancer peptides (ACPs) have natural cationic properties and can act on the anionic cell membranes of cancer cells to kill them. ACPs have therefore become promising anticancer drug candidates with considerable research value and prospects.

RESULTS: In this paper, we propose AACFlow, an end-to-end deep learning model for the identification of ACPs. End-to-end models have more room to adjust automatically to the data, improving the overall fit and reducing error propagation. The combination of an attention augmented convolutional neural network (AAConv) and a multi-layer convolutional neural network (CNN) forms a deep representation learning module that captures global and local information on the sequence. Based on the concept of flow networks, a multi-head flow-attention mechanism is introduced to mine the deep features of the sequence and improve the efficiency of the model. On the independent test dataset, the ACC, Sn, Sp, and AUC values of AACFlow are 83.9%, 83.0%, 84.8%, and 0.892, respectively, which are 4.9%, 1.5%, 8.0%, and 0.016 higher than those of the baseline model. The MCC value is 67.85%. In addition, we visualize the features extracted by each module to enhance the interpretability of the model. Various experiments show that our model is more competitive in predicting ACPs.
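For orientation, a minimal PyTorch sketch of the described pipeline (sequence embedding, an attention augmented convolution branch plus a multi-layer CNN branch, an attention block over the combined features, and a binary classifier) is shown below. It is purely illustrative: standard multi-head attention stands in for the paper's flow-attention, the AAConv branch is simplified to conv-plus-self-attention features, layer sizes are assumptions, and the authors' actual code is at the GitHub link below.

import torch
import torch.nn as nn

class AACFlowSketch(nn.Module):
    """Structural sketch only: embedding -> (conv + self-attention) branch and
    multi-layer CNN branch -> attention over combined features -> ACP score."""
    def __init__(self, vocab_size=21, emb_dim=64, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Simplified stand-in for AAConv: conv features plus self-attention features
        self.conv_branch = nn.Conv1d(emb_dim, emb_dim, kernel_size=3, padding=1)
        self.attn_branch = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)
        # Multi-layer CNN for local sequence patterns
        self.local_cnn = nn.Sequential(
            nn.Conv1d(emb_dim, emb_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(emb_dim, emb_dim, 3, padding=1), nn.ReLU(),
        )
        # Standard multi-head attention as a stand-in for the flow-attention block
        self.deep_attn = nn.MultiheadAttention(2 * emb_dim, n_heads, batch_first=True)
        self.head = nn.Linear(2 * emb_dim, 1)

    def forward(self, tokens):                        # tokens: (B, L) integer-encoded residues
        x = self.embed(tokens)                        # (B, L, E)
        conv = self.conv_branch(x.transpose(1, 2)).transpose(1, 2)
        attn, _ = self.attn_branch(x, x, x)
        global_feat = conv + attn                     # "attention augmented" global features
        local_feat = self.local_cnn(x.transpose(1, 2)).transpose(1, 2)
        feats = torch.cat([global_feat, local_feat], dim=-1)
        deep, _ = self.deep_attn(feats, feats, feats)
        return torch.sigmoid(self.head(deep.mean(dim=1)))   # ACP probability per sequence

model = AACFlowSketch()
print(model(torch.randint(0, 21, (2, 50))).shape)     # torch.Size([2, 1])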

AVAILABILITY: The codes and datasets are accessible at https://github.com/z11code/AACFlow.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38452348 | DOI:10.1093/bioinformatics/btae142

Categories: Literature Watch

Deep Learning-based Segmentation of CT Scans Predicts Disease Progression and Mortality in IPF

Thu, 2024-03-07 06:00

Am J Respir Crit Care Med. 2024 Mar 7. doi: 10.1164/rccm.202311-2185OC. Online ahead of print.

ABSTRACT

RATIONALE: Despite evidence demonstrating a prognostic role for CT scans in IPF, image-based biomarkers are not routinely used in clinical practice or trials.

OBJECTIVES: To develop automated imaging biomarkers using deep learning-based segmentation of CT scans.

METHODS: We developed segmentation processes for four anatomical biomarkers, which were applied to a unique cohort of treatment-naive IPF patients enrolled in the PROFILE study and tested against a further UK cohort. The relationships between CT biomarkers, lung function, disease progression, and mortality were assessed.

MEASUREMENTS AND MAIN RESULTS: Data were analysed from 446 PROFILE patients. Median follow-up was 39.1 months (IQR 18.1-66.4), with a cumulative incidence of death of 277 (62.1%) over 5 years. Segmentation was successful on 97.8% of all scans, across multiple imaging vendors at slice thicknesses of 0.5-5 mm. Of the four segmentations, lung volume showed the strongest correlation with FVC (r=0.82, p<0.001). Lung, vascular, and fibrosis volumes were consistently associated across cohorts with differential five-year survival, which persisted after adjustment for baseline GAP score. Lower lung volume (HR 0.98, CI 0.96-0.99, p=0.001), increased vascular volume (HR 1.30, CI 1.12-1.51, p=0.001) and increased fibrosis volume (HR 1.17, CI 1.12-1.22, p<0.001) were associated with reduced two-year progression-free survival in the pooled PROFILE cohort. Longitudinally, decreasing lung volume (HR 3.41; 95% CI 1.36-8.54; p=0.009) and increasing fibrosis volume (HR 2.23; 95% CI 1.22-4.08; p=0.009) were associated with differential survival.

CONCLUSIONS: Automated models can rapidly segment IPF CT scans, providing near- and long-term prognostic information that could be used in routine clinical practice or as key trial endpoints. This article is open access and distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

PMID:38452227 | DOI:10.1164/rccm.202311-2185OC

Categories: Literature Watch

Development of a multi-wear-site, deep learning-based physical activity intensity classification algorithm using raw acceleration data

Thu, 2024-03-07 06:00

PLoS One. 2024 Mar 7;19(3):e0299295. doi: 10.1371/journal.pone.0299295. eCollection 2024.

ABSTRACT

BACKGROUND: Accelerometers are widely adopted in research and consumer devices as a tool to measure physical activity. However, existing algorithms used to estimate activity intensity are wear-site-specific. Non-compliance with wear instructions may lead to misspecification. In this study, we developed deep neural network models to classify device placement and activity intensity based on raw acceleration data. The performance of these models was evaluated by comparison with the ground truth and with results derived from existing count-based algorithms.

METHODS: 54 participants (26 adults, 26.9±8.7 years; 28 children, 12.1±2.3 years) completed a series of activity tasks in a laboratory with accelerometers attached to the hip, wrist, and chest. Their metabolic rates at rest and during activity periods were measured using the portable COSMED K5; the data were then converted to metabolic equivalents and used as the ground truth for activity intensity. Deep neural networks using the Long Short-Term Memory approach were trained and evaluated on raw acceleration data collected from the accelerometers. Separate models for classifying wear site and activity intensity were evaluated.

RESULTS: The trained models correctly classified wear-sites and activity intensities over 90% of the time, which outperformed count-based algorithms (wear-site correctly specified: 83% to 85%; wear-site misspecified: 64% to 75%). When additional parameters of age, height and weight of participants were specified, the accuracy of some prediction models surpassed 95%.

CONCLUSIONS: Results of the study suggest that accelerometer placement could be determined prospectively and that non-wear-site-specific algorithms had satisfactory accuracy. The intensity-classification performance of these models also exceeded that of typical count-based algorithms. Without being restricted to one specific wear site, research protocols for accelerometer wear could allow participants more autonomy, which may improve their acceptance of and compliance with wear protocols and, in turn, yield more accurate results.
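As a rough illustration of the Long Short-Term Memory approach described in the METHODS above, the sketch below shows an LSTM classifier over raw tri-axial acceleration windows. The window length, hidden size, and class counts (three wear sites, four intensity levels) are assumptions for illustration, not the study's settings.

import torch
import torch.nn as nn

class AccelLSTM(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time_steps, 3) raw tri-axial acceleration
        _, (h_n, _) = self.lstm(x)     # final hidden state summarises the window
        return self.fc(h_n[-1])        # class logits

wear_site_model = AccelLSTM(n_classes=3)    # hip / wrist / chest
intensity_model = AccelLSTM(n_classes=4)    # e.g. sedentary / light / moderate / vigorous
print(intensity_model(torch.randn(8, 300, 3)).shape)   # torch.Size([8, 4])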

PMID:38452147 | DOI:10.1371/journal.pone.0299295

Categories: Literature Watch

Generalized biomolecular modeling and design with RoseTTAFold All-Atom

Thu, 2024-03-07 06:00

Science. 2024 Mar 7:eadl2528. doi: 10.1126/science.adl2528. Online ahead of print.

ABSTRACT

Deep learning methods have revolutionized protein structure prediction and design but are currently limited to protein-only systems. We describe RoseTTAFold All-Atom (RFAA), which combines a residue-based representation of amino acids and DNA bases with an atomic representation of all other groups to model assemblies containing proteins, nucleic acids, small molecules, metals, and covalent modifications, given their sequences and chemical structures. By fine-tuning on denoising tasks, we obtain RFdiffusionAA, which builds protein structures around small molecules. Starting from random distributions of amino acid residues surrounding target small molecules, we design and experimentally validate, through crystallography and binding measurements, proteins that bind the cardiac disease therapeutic digoxigenin, the enzymatic cofactor heme, and the light-harvesting molecule bilin.

PMID:38452047 | DOI:10.1126/science.adl2528

Categories: Literature Watch

AI-luminating Artificial Intelligence in Inflammatory Bowel Diseases: A Narrative Review on the Role of AI in Endoscopy, Histology, and Imaging for IBD

Thu, 2024-03-07 06:00

Inflamm Bowel Dis. 2024 Mar 7:izae030. doi: 10.1093/ibd/izae030. Online ahead of print.

ABSTRACT

Endoscopy, histology, and cross-sectional imaging serve as fundamental pillars in the detection, monitoring, and prognostication of inflammatory bowel disease (IBD). However, interpretation of these studies often relies on subjective human judgment, which can lead to delays, intra- and interobserver variability, and potential diagnostic discrepancies. With the rising incidence of IBD globally coupled with the exponential digitization of these data, there is a growing demand for innovative approaches to streamline diagnosis and elevate clinical decision-making. In this context, artificial intelligence (AI) technologies emerge as a timely solution to address the evolving challenges in IBD. Early studies using deep learning and radiomics approaches for endoscopy, histology, and imaging in IBD have demonstrated promising results for using AI to detect, diagnose, characterize, phenotype, and prognosticate IBD. Nonetheless, the available literature has inherent limitations and knowledge gaps that need to be addressed before AI can transition into a mainstream clinical tool for IBD. To better understand the potential value of integrating AI in IBD, we review the available literature to summarize our current understanding and identify gaps in knowledge to inform future investigations.

PMID:38452040 | DOI:10.1093/ibd/izae030

Categories: Literature Watch

Tongue feature dataset construction and real-time detection

Thu, 2024-03-07 06:00

PLoS One. 2024 Mar 7;19(3):e0296070. doi: 10.1371/journal.pone.0296070. eCollection 2024.

ABSTRACT

BACKGROUND: Tongue diagnosis in traditional Chinese medicine (TCM) provides clinically important, objective evidence from direct observation of specific features that assist with diagnosis. However, the current interpretation of tongue features requires a significant amount of manpower and time. TCM physicians may have different interpretations of features displayed by the same tongue. An automated interpretation system that interprets tongue features would expedite the interpretation process and yield more consistent results.

MATERIALS AND METHODS: This study applied deep learning visualization to tongue diagnosis. After collecting tongue images and corresponding interpretation reports by TCM physicians in a single teaching hospital, various tongue features such as fissures, tooth marks, and different types of coatings were annotated manually with rectangles. These annotated data and images were used to train a deep learning object detection model. Upon completion of training, the position of each tongue feature was dynamically marked.

RESULTS: A large, high-quality, manually annotated tongue feature dataset was constructed and analyzed. A detection model was trained with average precision (AP) of 47.67%, 58.94%, 71.25% and 59.78% for fissures, tooth marks, thick coatings and yellow coatings, respectively. At over 40 frames per second on an NVIDIA GeForce GTX 1060, the model was capable of detecting tongue features from any viewpoint in real time.

CONCLUSIONS/SIGNIFICANCE: This study constructed a tongue feature dataset and trained a deep learning object detection model to locate tongue features in real time. The model provided interpretability and intuitiveness that are often lacking in general neural network models and implies good feasibility for clinical application.

PMID:38452007 | DOI:10.1371/journal.pone.0296070

Categories: Literature Watch

The segmentation and intelligent recognition of structural surfaces in borehole images based on the U2-Net network

Thu, 2024-03-07 06:00

PLoS One. 2024 Mar 7;19(3):e0299471. doi: 10.1371/journal.pone.0299471. eCollection 2024.

ABSTRACT

Structural planes decrease the strength and stability of rock masses, severely affecting their mechanical properties and their deformation and failure characteristics. Investigation and analysis of structural planes are therefore crucial tasks in mining rock mechanics. Borehole cameras capture image information on deep structural planes of rock masses through high-definition imaging, providing an important data source for their analysis. To address the high workload, low efficiency, high subjectivity, and poor accuracy of manual processing in current borehole image analysis, this paper conducts an intelligent segmentation study of structural planes in borehole images based on the U2-Net network. By collecting data from 20 different boreholes in different lithological regions, a dataset consisting of 1,013 borehole images covering different structural plane types, lithologies, and colors was established. Data augmentation methods such as image flipping, color jittering, blurring, and mixup were applied to expand the dataset to 12,421 images, meeting the requirements for deep network training. Based on the PyTorch deep learning framework, the initial U2-Net network weights were set, the learning rate was set to 0.001, the training batch size was 4, and the Adam optimizer adaptively adjusted the learning rate during training. A dedicated network model for segmenting structural planes was obtained. The model achieved a maximum F-measure of 0.749 at a confidence threshold of 0.7, with an accuracy of up to 0.85 for recall values greater than 0.5. Overall, the model segments structural planes with high accuracy and very low mean absolute error, indicating good segmentation accuracy and a degree of generalization. The method presented here can serve as a reference for the intelligent identification of structural planes in borehole images.
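A minimal PyTorch training-loop sketch reflecting the stated settings (Adam optimizer, learning rate 0.001, batch size 4) is given below. The U2-Net model and the borehole dataset are assumed to be provided elsewhere, and a single-mask output with binary cross-entropy loss is assumed for brevity (the full U2-Net also produces side outputs).

import torch
from torch.utils.data import DataLoader

def train_u2net(model, train_dataset, epochs=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    loader = DataLoader(train_dataset, batch_size=4, shuffle=True)      # batch size 4
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)          # Adam, lr = 0.001
    criterion = torch.nn.BCEWithLogitsLoss()     # binary structural-plane masks (assumption)
    model.to(device).train()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            pred = model(images)                 # single-mask output assumed for brevity
            loss = criterion(pred, masks)
            loss.backward()
            optimizer.step()
    return model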

PMID:38451909 | DOI:10.1371/journal.pone.0299471

Categories: Literature Watch

Clustering honey samples with unsupervised machine learning methods using FTIR data

Thu, 2024-03-07 06:00

An Acad Bras Cienc. 2024 Mar 1;96(1):e20230409. doi: 10.1590/0001-3765202420230409. eCollection 2024.

ABSTRACT

This study utilizes Fourier transform infrared (FTIR) data from honey samples to cluster and categorize them based on their spectral characteristics. The aim is to group similar samples together, revealing patterns and aiding in classification. The process begins by determining the number of clusters using the elbow method, resulting in five distinct clusters. Principal Component Analysis (PCA) is then applied to reduce the dataset's dimensionality by capturing its significant variances. Hierarchical Cluster Analysis (HCA) further refines the sample clusters. 20% of the data, representing identified clusters, is randomly selected for testing, while the remainder serves as training data for a deep learning algorithm employing a multilayer perceptron (MLP). Following training, the test data are evaluated, revealing an impressive 96.15% accuracy. Accuracy measures the machine learning model's ability to predict class labels for new data accurately. This approach offers reliable honey sample clustering without necessitating extensive preprocessing. Moreover, its swiftness and cost-effectiveness enhance its practicality. Ultimately, by leveraging FTIR spectral data, this method successfully identifies similarities among honey samples, enabling efficient categorization and demonstrating promise in the field of spectral analysis in food science.
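The analysis chain described above (elbow method, PCA, hierarchical clustering, an 80/20 split, and an MLP classifier) can be sketched with scikit-learn roughly as follows; the spectra here are random placeholders, and the number of PCA components and MLP layer sizes are assumptions rather than the study's settings.

import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X = np.random.rand(200, 1800)                        # placeholder FTIR absorbance spectra

# Elbow method: inspect inertia versus k (the study settles on k = 5)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(2, 10)]

X_pca = PCA(n_components=10).fit_transform(X)        # keep the dominant variance directions
labels = AgglomerativeClustering(n_clusters=5).fit_predict(X_pca)   # HCA cluster labels

# 20% of samples held out for testing, the rest used to train the MLP
X_tr, X_te, y_tr, y_te = train_test_split(X_pca, labels, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, mlp.predict(X_te)))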

PMID:38451625 | DOI:10.1590/0001-3765202420230409

Categories: Literature Watch

Multinational External Validation of Autonomous Retinopathy of Prematurity Screening

Thu, 2024-03-07 06:00

JAMA Ophthalmol. 2024 Mar 7. doi: 10.1001/jamaophthalmol.2024.0045. Online ahead of print.

ABSTRACT

IMPORTANCE: Retinopathy of prematurity (ROP) is a leading cause of blindness in children, with significant disparities in outcomes between high-income and low-income countries, due in part to insufficient access to ROP screening.

OBJECTIVE: To evaluate how well autonomous artificial intelligence (AI)-based ROP screening can detect more-than-mild ROP (mtmROP) and type 1 ROP.

DESIGN, SETTING, AND PARTICIPANTS: This diagnostic study evaluated the performance of an AI algorithm, trained and calibrated using 2530 examinations from 843 infants in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study, on 2 external datasets (6245 examinations from 1545 infants in the Stanford University Network for Diagnosis of ROP [SUNDROP] and 5635 examinations from 2699 infants in the Aravind Eye Care Systems [AECS] telemedicine programs). Data were taken from 11 and 48 neonatal care units in the US and India, respectively. Data were collected from January 2012 to July 2021, and data were analyzed from July to December 2023.

EXPOSURES: An imaging processing pipeline was created using deep learning to autonomously identify mtmROP and type 1 ROP in eye examinations performed via telemedicine.

MAIN OUTCOMES AND MEASURES: The area under the receiver operating characteristics curve (AUROC) as well as sensitivity and specificity for detection of mtmROP and type 1 ROP at the eye examination and patient levels.

RESULTS: The prevalence of mtmROP and type 1 ROP were 5.9% (91 of 1545) and 1.2% (18 of 1545), respectively, in the SUNDROP dataset and 6.2% (168 of 2699) and 2.5% (68 of 2699) in the AECS dataset. Examination-level AUROCs for mtmROP and type 1 ROP were 0.896 and 0.985, respectively, in the SUNDROP dataset and 0.920 and 0.982 in the AECS dataset. At the cross-sectional examination level, mtmROP detection had high sensitivity (SUNDROP: mtmROP, 83.5%; 95% CI, 76.6-87.7; type 1 ROP, 82.2%; 95% CI, 81.2-83.1; AECS: mtmROP, 80.8%; 95% CI, 76.2-84.9; type 1 ROP, 87.8%; 95% CI, 86.8-88.7). At the patient level, all infants who developed type 1 ROP screened positive (SUNDROP: 100%; 95% CI, 81.4-100; AECS: 100%; 95% CI, 94.7-100) prior to diagnosis.

CONCLUSIONS AND RELEVANCE: Where and when ROP telemedicine programs can be implemented, autonomous ROP screening may be an effective force multiplier for secondary prevention of ROP.

PMID:38451496 | DOI:10.1001/jamaophthalmol.2024.0045

Categories: Literature Watch

Development and validation of the effective CNR analysis method for evaluating the contrast resolution of CT images

Thu, 2024-03-07 06:00

Phys Eng Sci Med. 2024 Mar 7. doi: 10.1007/s13246-024-01400-5. Online ahead of print.

ABSTRACT

Contrast resolution is an important index for evaluating the signal detectability of computed tomographic (CT) images. Recently, various noise reduction algorithms, such as iterative reconstruction (IR) and deep learning reconstruction (DLR), have been proposed to reduce image noise in CT images. However, these algorithms change the image noise texture and blur image signals in CT images. Furthermore, the contrast-to-noise ratio (CNR) cannot be accurately evaluated in CT images reconstructed using noise reduction methods. Therefore, in this study, we devised a new method, "effective CNR analysis," for evaluating the contrast resolution of CT images. We verified whether the proposed algorithm could evaluate the effective contrast resolution based on the signal detectability of CT images. The findings showed that the effective CNR values obtained using the proposed method correlated well with the subjective visual impressions of CT images. To investigate whether signal detectability was appropriately evaluated using effective CNR analysis, the conventional CNR analysis method was compared with the proposed method. The CNRs of the IR and DLR images calculated using conventional CNR analysis were 13.2 and 10.7, respectively. By contrast, those calculated using the effective CNR analysis were estimated to be 0.7 and 1.1, respectively. Considering that the signal visibility of DLR images was superior to that of IR images, our proposed effective CNR analysis was shown to be appropriate for evaluating the contrast resolution of CT images.
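For reference, one common definition of the conventional CNR that the proposed method is compared against can be computed as below; the effective CNR analysis itself is specific to this paper and is not reproduced here.

import numpy as np

def conventional_cnr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI)."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()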

PMID:38451464 | DOI:10.1007/s13246-024-01400-5

Categories: Literature Watch

Transcriptomic Profiling of Plasma Extracellular Vesicles Enables Reliable Annotation of the Cancer-specific Transcriptome and Molecular Subtype

Thu, 2024-03-07 06:00

Cancer Res. 2024 Mar 7. doi: 10.1158/0008-5472.CAN-23-4070. Online ahead of print.

ABSTRACT

Longitudinal monitoring of patients with advanced cancers is crucial to evaluate both disease burden and treatment response. Current liquid biopsy approaches mostly rely on the detection of DNA-based biomarkers. However, plasma RNA analysis can unleash tremendous opportunities for tumor state interrogation and molecular subtyping. Through the application of deep learning algorithms to the deconvolved transcriptomes of RNA within plasma extracellular vesicles (evRNA), we successfully predict consensus molecular subtypes in metastatic colorectal cancer patients. We further demonstrate the ability to monitor changes in transcriptomic subtype under treatment selection pressure and identify molecular pathways in evRNA associated with recurrence. Our approach also identified expressed gene fusions and neoepitopes from evRNA. These results demonstrate the feasibility of transcriptomic-based liquid biopsy platforms for precision oncology approaches, spanning from the longitudinal monitoring of tumor subtype changes to identification of expressed fusions and neoantigens as cancer-specific therapeutic targets, sans the need for tissue-based sampling.

PMID:38451249 | DOI:10.1158/0008-5472.CAN-23-4070

Categories: Literature Watch

Deep-VEGF: deep stacked ensemble model for prediction of vascular endothelial growth factor by concatenating gated recurrent unit with two-dimensional convolutional neural network

Thu, 2024-03-07 06:00

J Biomol Struct Dyn. 2024 Mar 7:1-11. doi: 10.1080/07391102.2024.2323144. Online ahead of print.

ABSTRACT

Vascular endothelial growth factor (VEGF) is involved in the development and progression of various diseases, including cancer, diabetic retinopathy, macular degeneration and arthritis. Understanding the role of VEGF in various disorders has led to the development of effective treatments, including anti-VEGF drugs, which have significantly improved therapeutic methods. Accurate VEGF identification is critical, yet experimental identification is expensive and time-consuming. This study presents Deep-VEGF, a novel computational model for VEGF prediction based on deep stacked ensemble learning. We formulated two datasets using primary sequences. A novel feature descriptor named K-Space Tri Slicing-Bigram Position-Specific Scoring Matrix (KSTS-BPSSM) is constructed to extract numerical features from primary sequences. Model training is performed with deep learning techniques, including a gated recurrent unit (GRU), a generative adversarial network (GAN) and a convolutional neural network (CNN). The GRU and CNN are ensembled using a stacking approach. The KSTS-BPSSM-based ensemble model secured the most accurate predictive outcomes, surpassing other competitive predictors across both training and testing datasets. This demonstrates the potential of leveraging deep learning for accurate VEGF prediction as a powerful tool to accelerate research, streamline drug discovery and uncover novel therapeutic targets. This insightful approach holds promise for expanding our knowledge of VEGF's role in health and disease. Communicated by Ramaswamy H. Sarma.
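A conceptual sketch of the stacking step, with a GRU and a CNN as base learners and a logistic-regression meta-learner fitted on their predicted probabilities, is shown below. It is not the authors' implementation: the KSTS-BPSSM encoding is replaced by random placeholder features, a 1D convolution stands in for the paper's two-dimensional CNN, and base-model training is omitted.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class GRUBase(nn.Module):
    def __init__(self, feat_dim=20, hidden=32):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)
    def forward(self, x):                                   # x: (B, L, feat_dim)
        _, h = self.gru(x)
        return torch.sigmoid(self.fc(h[-1])).squeeze(-1)    # VEGF probability

class CNNBase(nn.Module):                                   # 1D conv stand-in for the 2D CNN
    def __init__(self, feat_dim=20):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(feat_dim, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x.transpose(1, 2))).squeeze(-1)

x = torch.randn(64, 50, 20)                  # placeholder for KSTS-BPSSM-encoded sequences
y = np.random.randint(0, 2, 64)              # placeholder VEGF / non-VEGF labels

with torch.no_grad():                        # base-model training omitted in this sketch
    meta_features = np.stack([GRUBase()(x).numpy(), CNNBase()(x).numpy()], axis=1)
meta = LogisticRegression().fit(meta_features, y)      # stacking meta-learner
print(meta.predict_proba(meta_features)[:3, 1])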

PMID:38450715 | DOI:10.1080/07391102.2024.2323144

Categories: Literature Watch

Rapid and Precise Differentiation and Authentication of Agricultural Products via Deep Learning-Assisted Multiplex SERS Fingerprinting

Thu, 2024-03-07 06:00

Anal Chem. 2024 Mar 7. doi: 10.1021/acs.analchem.4c00064. Online ahead of print.

ABSTRACT

Accurate and rapid differentiation and authentication of agricultural products based on their origin and quality are crucial to ensuring food safety and quality control. However, similar chemical compositions and complex matrices often hinder precise identification, particularly for adulterated samples. Herein, we propose a novel method combining multiplex surface-enhanced Raman scattering (SERS) fingerprinting with a one-dimensional convolutional neural network (1D-CNN), which enables the effective differentiation of the category, origin, and grade of agricultural products. This strategy leverages three different SERS-active nanoparticles as multiplex sensors, each tailored to selectively amplify the signals of preferentially adsorbed chemicals within the sample. By strategically combining SERS spectra from different NPs, a 'SERS super-fingerprint' is constructed, offering a more comprehensive representation of the characteristic information on agricultural products. Subsequently, utilizing a custom-designed 1D-CNN model for feature extraction from the 'super-fingerprint' significantly enhances the predictive accuracy for agricultural products. This strategy successfully identified various agricultural products and simulated adulterated samples with exceptional accuracy, reaching 97.7% and 94.8%, respectively. Notably, the entire identification process, encompassing sample preparation, SERS measurement, and deep learning analysis, takes only 35 min. This development of deep learning-assisted multiplex SERS fingerprinting establishes a rapid and reliable method for the identification and authentication of agricultural products.
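As an illustration of the 'super-fingerprint' idea, the sketch below concatenates spectra from three SERS substrates and classifies the result with a small 1D-CNN; spectrum lengths, layer sizes, and the number of classes are assumptions rather than the paper's settings.

import torch
import torch.nn as nn

def super_fingerprint(spec_np1, spec_np2, spec_np3):
    """Concatenate spectra measured on the three nanoparticle substrates."""
    return torch.cat([spec_np1, spec_np2, spec_np3], dim=-1)   # (B, 3 * n_points)

class SERS1DCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (B, n_points) concatenated spectrum
        return self.classifier(self.features(x.unsqueeze(1)))

x = super_fingerprint(torch.randn(4, 1000), torch.randn(4, 1000), torch.randn(4, 1000))
print(SERS1DCNN()(x).shape)                  # torch.Size([4, 10])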

PMID:38450485 | DOI:10.1021/acs.analchem.4c00064

Categories: Literature Watch

Evaluation of atopic dermatitis severity using artificial intelligence

Thu, 2024-03-07 06:00

Narra J. 2023 Dec;3(3):e511. doi: 10.52225/narra.v3i3.511. Epub 2023 Dec 19.

ABSTRACT

Atopic dermatitis is a prevalent and persistent chronic inflammatory skin disorder that poses significant challenges for accurate severity assessment. The aim of this study was to evaluate deep learning models for automated atopic dermatitis severity scoring using a dataset of Aceh ethnicity individuals in Indonesia. The dataset of clinical images was collected from 250 patients at Dr. Zainoel Abidin Hospital, Banda Aceh, Indonesia and labeled by dermatologists as mild, moderate, severe, or none. Five pretrained convolutional neural network (CNN) architectures were evaluated: ResNet50, VGGNet19, MobileNetV3, MnasNet, and EfficientNetB0. The evaluation metrics, including accuracy, precision, sensitivity, specificity, and F1-score, were employed to assess the models. Among the models, ResNet50 emerged as the most proficient, demonstrating an accuracy of 89.8%, precision of 90.00%, sensitivity of 89.80%, specificity of 96.60%, and an F1-score of 89.85%. These results highlight the potential of incorporating advanced, data-driven models into the field of dermatology. These models can serve as invaluable tools to assist dermatologists in making early and precise assessments of atopic dermatitis severity, thereby improving patient care and outcomes.
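A minimal transfer-learning sketch for the four severity classes (none, mild, moderate, severe) using a pretrained ResNet50 from torchvision is shown below; preprocessing, augmentation, and the training schedule are assumptions, not the study's exact protocol.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)  # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 4)        # replace the head: none/mild/moderate/severe

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)                 # placeholder clinical photographs
labels = torch.tensor([0, 2])                        # placeholder severity labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()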

PMID:38450339 | PMC:PMC10914065 | DOI:10.52225/narra.v3i3.511

Categories: Literature Watch

An India soyabean dataset for identification and classification of diseases using computer-vision algorithms

Thu, 2024-03-07 06:00

Data Brief. 2024 Feb 22;53:110216. doi: 10.1016/j.dib.2024.110216. eCollection 2024 Apr.

ABSTRACT

Intelligent agriculture heavily relies on the science of agricultural disease image recognition. India is also responsible for a large share of French bean production, accounting for 37.25% of the total. In India, this crop is cultivated three times a year in the southern region of Maharashtra state. The soyabean plant is sown in June-July, in September-October during the rabi season, and in February. Soyabean is a common crop in the Maharashtrian regions of Pune, Satara, Ahmednagar, Solapur, and Nashik, among others, and is grown in Maharashtra over an area of around 31,050 hectares. This research presents a dataset of leaves from soyabean plants that are either healthy or insect-damaged. Images were taken over the course of two to three seasons on several farms. The dataset comprises 3,363 photos organized into seven folders, covering six categories: I) Healthy plants, II) Vein Necrosis, III) Dry leaf, IV) Septoria brown spot, V) Root images, VI) Bacterial leaf blight. The goal of this study is to give academics and students access to our dataset so they can use it in their research and to build machine learning models.

PMID:38450198 | PMC:PMC10915497 | DOI:10.1016/j.dib.2024.110216

Categories: Literature Watch

Exploring the application and future outlook of Artificial intelligence in pancreatic cancer

Thu, 2024-03-07 06:00

Front Oncol. 2024 Feb 21;14:1345810. doi: 10.3389/fonc.2024.1345810. eCollection 2024.

ABSTRACT

Pancreatic cancer, an exceptionally malignant tumor of the digestive system, presents a challenge due to its lack of typical early symptoms and highly invasive nature. The majority of pancreatic cancer patients are diagnosed when curative surgical resection is no longer possible, resulting in a poor overall prognosis. In recent years, the rapid progress of Artificial intelligence (AI) in the medical field has led to the extensive utilization of machine learning and deep learning as the prevailing approaches. Various models based on AI technology have been employed in the early screening, diagnosis, treatment, and prognostic prediction of pancreatic cancer patients. Furthermore, the development and application of three-dimensional visualization and augmented reality navigation techniques have also found their way into pancreatic cancer surgery. This article provides a concise summary of the current state of AI technology in pancreatic cancer and offers a promising outlook for its future applications.

PMID:38450187 | PMC:PMC10915754 | DOI:10.3389/fonc.2024.1345810

Categories: Literature Watch

Feasibility of video-based real-time nystagmus tracking: a lightweight deep learning model approach using ocular object segmentation

Thu, 2024-03-07 06:00

Front Neurol. 2024 Feb 21;15:1342108. doi: 10.3389/fneur.2024.1342108. eCollection 2024.

ABSTRACT

BACKGROUND: Eye movement tests remain significantly underutilized in emergency departments and primary healthcare units, despite their superior diagnostic sensitivity compared to neuroimaging modalities for the differential diagnosis of acute vertigo. This underutilization may be attributed to a potential lack of awareness regarding these tests and the absence of appropriate tools for detecting nystagmus. This study aimed to develop a nystagmus measurement algorithm using a lightweight deep-learning model that recognizes the ocular regions.

METHOD: The deep learning model was used to segment the eye regions, detect blinking, and determine the pupil center. The model was trained using images extracted from video clips of a clinical battery of eye movement tests and synthesized images reproducing real eye movement scenarios using virtual reality. Each eye image was annotated with segmentation masks of the sclera, iris, and pupil, with gaze vectors of the pupil center for eye tracking. We conducted a comprehensive evaluation of model performance and its execution speeds in comparison to various alternative models using metrics that are suitable for the tasks.

RESULTS: The mean Intersection over Union values of the segmentation model ranged from 0.90 to 0.97 for different classes (sclera, iris, and pupil) across types of images (synthetic vs. real-world images). Additionally, the mean absolute error for eye tracking was 0.595 for real-world data and the F1 score for blink detection was ≥ 0.95, which indicates our model is performing at a very high level of accuracy. Execution speed was also the most rapid for ocular object segmentation under the same hardware condition as compared to alternative models. The prediction for horizontal and vertical nystagmus in real eye movement video revealed high accuracy with a strong correlation between the observed and predicted values (r = 0.9949 for horizontal and r = 0.9950 for vertical; both p < 0.05).

CONCLUSION: The potential of our model, which can automatically segment ocular regions and track nystagmus in real time from eye movement videos, holds significant promise for emergency settings or remote intervention within the field of neurotology.
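For reference, the per-class Intersection over Union used to evaluate the ocular segmentation (sclera, iris, pupil) can be computed as in the sketch below; the segmentation model itself is not shown, and the class indexing is an assumption.

import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int = 4) -> float:
    """pred/target: integer label maps (0 = background, 1 = sclera, 2 = iris, 3 = pupil)."""
    ious = []
    for c in range(1, n_classes):                     # skip the background class
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))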

PMID:38450068 | PMC:PMC10915048 | DOI:10.3389/fneur.2024.1342108

Categories: Literature Watch

Machine learning-based evaluation of spontaneous pain and analgesics from cellular calcium signals in the mouse primary somatosensory cortex using explainable features

Thu, 2024-03-07 06:00

Front Mol Neurosci. 2024 Feb 21;17:1356453. doi: 10.3389/fnmol.2024.1356453. eCollection 2024.

ABSTRACT

INTRODUCTION: Pain that arises spontaneously is considered more clinically relevant than pain evoked by external stimuli. However, measuring spontaneous pain in animal models in preclinical studies is challenging due to methodological limitations. To address this issue, we recently developed a deep learning (DL) model to assess spontaneous pain using cellular calcium signals of the primary somatosensory cortex (S1) in awake head-fixed mice. However, DL models operate like a "black box": their decision-making process is not transparent and is difficult to understand, which is especially evident when our DL model classifies different states of pain based on cellular calcium signals. In this study, we introduce a novel machine learning (ML) model that utilizes features manually extracted from S1 calcium signals, including the dynamic changes in calcium levels and the cell-to-cell activity correlations.

METHOD: We focused on observing neural activity patterns in the primary somatosensory cortex (S1) of mice using two-photon calcium imaging after injecting a calcium indicator (GCaMP6s) into S1 neurons. We extracted features related to the ratio of up- and down-regulated cells in calcium activity and the level of activity correlation between cells as input data for the ML model. The ML model was validated using a Leave-One-Subject-Out Cross-Validation approach to distinguish between non-pain, pain, and drug-induced analgesic states.

RESULTS AND DISCUSSION: The ML model was designed to classify data into three distinct categories: non-pain, pain, and drug-induced analgesic states. Its versatility was demonstrated by successfully classifying different states across various pain models, including inflammatory and neuropathic pain, and by confirming its utility in identifying the analgesic effects of drugs such as ketoprofen and morphine, as well as the efficacy of magnolin, a candidate analgesic compound. In conclusion, our ML model surpasses the limitations of previous DL approaches by leveraging manually extracted features. This not only clarifies the decision-making process of the ML model but also yields insights into neuronal activity patterns associated with pain, facilitating preclinical studies of analgesics with higher potential for clinical translation.
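The Leave-One-Subject-Out validation scheme can be sketched with scikit-learn as below; the features and the classifier here are placeholders rather than the study's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

X = np.random.rand(120, 10)                # placeholder hand-crafted calcium features
y = np.random.randint(0, 3, 120)           # 0 = non-pain, 1 = pain, 2 = analgesic state
subjects = np.repeat(np.arange(12), 10)    # each sample tagged with its animal ID

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())                       # mean held-out-subject accuracy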

PMID:38450042 | PMC:PMC10915002 | DOI:10.3389/fnmol.2024.1356453

Categories: Literature Watch

Robust human locomotion and localization activity recognition over multisensory

Thu, 2024-03-07 06:00

Front Physiol. 2024 Feb 21;15:1344887. doi: 10.3389/fphys.2024.1344887. eCollection 2024.

ABSTRACT

Human activity recognition (HAR) plays a pivotal role in various domains, including healthcare, sports, robotics, and security. With the growing popularity of wearable devices, particularly Inertial Measurement Units (IMUs) and Ambient sensors, researchers and engineers have sought to take advantage of these advances to accurately and efficiently detect and classify human activities. This research paper presents an advanced methodology for human activity and localization recognition, utilizing smartphone IMU, Ambient, GPS, and Audio sensor data from two public benchmark datasets: the Opportunity dataset and the Extrasensory dataset. The Opportunity dataset was collected from 12 subjects participating in a range of daily activities, and it captures data from various body-worn and object-associated sensors. The Extrasensory dataset features data from 60 participants, including thousands of data samples from smartphone and smartwatch sensors, labeled with a wide array of human activities. Our study incorporates novel feature extraction techniques for signal, GPS, and audio sensor data. Specifically, for localization, GPS, audio, and IMU sensors are utilized, while IMU and Ambient sensors are employed for locomotion activity recognition. To achieve accurate activity classification, state-of-the-art deep learning techniques, such as convolutional neural networks (CNN) and long short-term memory (LSTM), have been explored. For indoor/outdoor activities, CNNs are applied, while LSTMs are utilized for locomotion activity recognition. The proposed system has been evaluated using the k-fold cross-validation method, achieving accuracy rates of 97% and 89% for locomotion activity over the Opportunity and Extrasensory datasets, respectively, and 96% for indoor/outdoor activity over the Extrasensory dataset. These results highlight the efficiency of our methodology in accurately detecting various human activities, showing its potential for real-world applications. Moreover, the research paper introduces a hybrid system that combines machine learning and deep learning features, enhancing activity recognition performance by leveraging the strengths of both approaches.

PMID:38449788 | PMC:PMC10915014 | DOI:10.3389/fphys.2024.1344887

Categories: Literature Watch

Deep learning based assessment of hemodynamics in the coarctation of the aorta: comparison of bidirectional recurrent and convolutional neural networks

Thu, 2024-03-07 06:00

Front Physiol. 2024 Feb 21;15:1288339. doi: 10.3389/fphys.2024.1288339. eCollection 2024.

ABSTRACT

The utilization of numerical methods, such as computational fluid dynamics (CFD), has been widely established for modeling patient-specific hemodynamics based on medical imaging data. Hemodynamics assessment plays a crucial role in treatment decisions for the coarctation of the aorta (CoA), a congenital heart disease, with the pressure drop (PD) being a crucial biomarker for CoA treatment decisions. However, implementing CFD methods in the clinical environment remains challenging due to their computational cost and the requirement for expert knowledge. This study proposes a deep learning approach to mitigate the computational need and produce fast results. Building upon a previous proof-of-concept study, we compared the effects of two different artificial neural network (ANN) architectures trained on data with different dimensionalities, both capable of predicting hemodynamic parameters in CoA patients: a one-dimensional bidirectional recurrent neural network (1D BRNN) and a three-dimensional convolutional neural network (3D CNN). The performance was evaluated by median point-wise root mean square error (RMSE) for pressures along the centerline in 18 test cases, which were not included in a training cohort. We found that the 3D CNN (median RMSE of 3.23 mmHg) outperforms the 1D BRNN (median RMSE of 4.25 mmHg). In contrast, the 1D BRNN is more precise in PD prediction, with a lower standard deviation of the error (±7.03 mmHg) compared to the 3D CNN (±8.91 mmHg). The differences between both ANNs are not statistically significant, suggesting that compressing the 3D aorta hemodynamics into a 1D centerline representation does not result in the loss of valuable information when training ANN models. Additionally, we evaluated the utility of the synthetic geometries of the aortas with CoA generated by using a statistical shape model (SSM), as well as the impact of aortic arch geometry (gothic arch shape) on the model's training. The results show that incorporating a synthetic cohort obtained through the SSM of the clinical cohort does not significantly increase the model's accuracy, indicating that the synthetic cohort generation might be oversimplified. Furthermore, our study reveals that selecting training cases based on aortic arch shape (gothic versus non-gothic) does not improve ANN performance for test cases sharing the same shape.

PMID:38449784 | PMC:PMC10916009 | DOI:10.3389/fphys.2024.1288339

Categories: Literature Watch
