Deep learning

Correction to "Physics-Informed Deep Learning Approach for Reintroducing Atomic Detail in Coarse-Grained Configurations of Multiple Poly(lactic acid) Stereoisomers"

Mon, 2024-08-19 06:00

J Chem Inf Model. 2024 Aug 19. doi: 10.1021/acs.jcim.4c01407. Online ahead of print.

NO ABSTRACT

PMID:39158929 | DOI:10.1021/acs.jcim.4c01407

Categories: Literature Watch

Prediction of intraoperative hypotension using deep learning models based on non-invasive monitoring devices

Mon, 2024-08-19 06:00

J Clin Monit Comput. 2024 Aug 19. doi: 10.1007/s10877-024-01206-6. Online ahead of print.

ABSTRACT

PURPOSE: Intraoperative hypotension is associated with adverse outcomes. Predicting and proactively managing hypotension can reduce its incidence. Previously, hypotension prediction algorithms using artificial intelligence were developed for invasive arterial blood pressure monitors. This study tested whether routine non-invasive monitors could also predict intraoperative hypotension using deep learning algorithms.

METHODS: An open-source database of non-cardiac surgery patients ( https://vitadb.net/dataset ) was used to develop the deep learning algorithm. The algorithm was validated using external data obtained from a tertiary Korean hospital. Intraoperative hypotension was defined as a systolic blood pressure less than 90 mmHg. The input data included five monitors: non-invasive blood pressure, electrocardiography, photoplethysmography, capnography, and bispectral index. The primary outcome was the performance of the deep learning model as assessed by the area under the receiver operating characteristic curve (AUROC).

RESULTS: Data from 4754 and 421 patients were used for algorithm development and external validation, respectively. A fully connected model with a Multi-head Attention architecture and a Globally Attentive Locally Recurrent model trained with a Focal Loss function were able to predict intraoperative hypotension 5 min before its occurrence. The AUROC of the algorithm was 0.917 (95% confidence interval [CI], 0.915-0.918) for the original data and 0.833 (95% CI, 0.830-0.836) for the external validation data. The attention map, which quantified the contribution of each monitor, showed that the algorithm weighted each monitor between 8% and 22% when determining hypotension.

CONCLUSIONS: A deep learning model utilizing multi-channel non-invasive monitors could predict intraoperative hypotension with high accuracy. Future prospective studies are needed to determine whether this model can assist clinicians in preventing hypotension in patients undergoing surgery with non-invasive monitoring.
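As context for the Focal Loss named above: hypotensive episodes are far rarer than normal readings, and focal loss down-weights well-classified examples so training focuses on the hard, rare positives. The sketch below is a minimal, generic PyTorch illustration only; the class name and the alpha/gamma values are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryFocalLoss(nn.Module):
    """Generic binary focal loss (illustrative; not the paper's implementation)."""
    def __init__(self, alpha: float = 0.25, gamma: float = 2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Unreduced binary cross-entropy, so each sample can be reweighted.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = targets * p + (1 - targets) * (1 - p)              # prob. of the true class
        alpha_t = targets * self.alpha + (1 - targets) * (1 - self.alpha)
        return (alpha_t * (1 - p_t) ** self.gamma * bce).mean()

# Toy usage: logits from a waveform model, label 1 = hypotension within 5 min.
loss_fn = BinaryFocalLoss()
logits, labels = torch.randn(8), torch.randint(0, 2, (8,)).float()
print(loss_fn(logits, labels))
```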

PMID:39158783 | DOI:10.1007/s10877-024-01206-6

Categories: Literature Watch

A review on advancements in feature selection and feature extraction for high-dimensional NGS data analysis

Mon, 2024-08-19 06:00

Funct Integr Genomics. 2024 Aug 19;24(5):139. doi: 10.1007/s10142-024-01415-x.

ABSTRACT

Recent advances in biomedical technologies and the proliferation of high-dimensional Next Generation Sequencing (NGS) datasets have led to substantial growth in the volume and density of data. High-dimensional NGS data, characterized by a large number of genomics, transcriptomics, proteomics, and metagenomics features relative to the number of biological samples, pose major challenges for analysis, including increased computational burden, potential overfitting, and difficulty in interpreting results. Feature selection and feature extraction are two pivotal techniques employed to address these challenges by reducing the dimensionality of the data, thereby enhancing model performance, interpretability, and computational efficiency; both can be grouped into statistical and machine learning methods. The present study conducts a comprehensive, comparative review of statistical, machine learning, and deep learning-based feature selection and extraction techniques tailored to the interpretation of human NGS and microarray data. A thorough literature search was performed to gather information on these techniques, focusing on array-based and NGS data analysis. The techniques surveyed, including deep learning architectures, machine learning algorithms, and statistical methods, cover microarray, bulk RNA-Seq, and single-cell RNA-Seq (scRNA-Seq) datasets. The study provides an overview of these techniques, highlighting their applications, advantages, and limitations in the context of high-dimensional NGS data. This review offers readers better insight into applying feature selection and feature extraction techniques to enhance the performance of predictive models, uncover underlying biological patterns, and gain deeper understanding of massive and complex NGS and microarray data.
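To make the selection-versus-extraction distinction concrete, the toy scikit-learn sketch below filters features from a synthetic count matrix and then projects the survivors onto latent components. All dimensions, thresholds, and the choice of scoring function are illustrative assumptions, not recommendations from the review.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(60, 2000)).astype(float)   # 60 samples x 2000 genes (toy counts)
y = rng.integers(0, 2, size=60)                        # binary phenotype

# Feature selection: keep a subset of the original features.
X = VarianceThreshold(threshold=0.5).fit_transform(X)                # drop near-constant genes
X_sel = SelectKBest(mutual_info_classif, k=100).fit_transform(X, y)  # statistical filter

# Feature extraction: project the selected features onto a few latent components.
X_pca = PCA(n_components=10).fit_transform(X_sel)
print(X_sel.shape, X_pca.shape)   # (60, 100) (60, 10)
```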

PMID:39158621 | DOI:10.1007/s10142-024-01415-x

Categories: Literature Watch

X-ray absorption spectroscopy combined with deep learning for auto and rapid illicit drug detection

Mon, 2024-08-19 06:00

Am J Drug Alcohol Abuse. 2024 Aug 19:1-10. doi: 10.1080/00952990.2024.2377262. Online ahead of print.

ABSTRACT

BACKGROUND: X-ray absorption spectroscopy (XAS) is a widely used substance analysis technique. It relies on differences in absorption coefficients at different energy levels to identify materials. Combining spectral techniques with deep learning enables automatic detection and high accuracy in material identification.

OBJECTIVES: Current methods struggle to identify drugs quickly and nondestructively. We therefore explore a novel approach that uses XAS with a common X-ray tube source and a photon-counting (PC) detector to detect prohibited drugs.

METHODS: To achieve automatic, rapid, and accurate drug detection, a CdTe detector and a common X-ray source were used to collect data, which were divided into training and testing sets. An improved transformer encoder model was then used for classification, with LSTM and ResU-net models selected for comparison.

RESULTS: Fifty substances, either isomers of drugs or compounds with similar molecular formulas, were selected as experimental substances. The improved transformer model achieved a training time of 1.4 hours and an accuracy of 96.73%, outperforming the LSTM (2.6 hours and 65%) and ResU-net (1.5 hours and 92.7%).

CONCLUSION: The attention mechanism is more accurate for spectral material identification. XAS combined with deep learning can achieve efficient and accurate drug identification, offering promising applications in clinical drug testing and drug enforcement.
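As a rough illustration of how a transformer encoder can classify 1D absorption spectra, the sketch below splits a spectrum into patches, encodes them with self-attention, and pools them for a 50-class prediction. The model name, patch size, depth, and dimensions are assumptions; the paper's improved transformer encoder is not described here in enough detail to reproduce.

```python
import torch
import torch.nn as nn

class SpectrumTransformer(nn.Module):
    """Toy transformer-encoder classifier for 1D absorption spectra (illustrative)."""
    def __init__(self, spectrum_len=1024, patch=32, d_model=64, n_classes=50):
        super().__init__()
        assert spectrum_len % patch == 0
        self.patch = patch
        self.embed = nn.Linear(patch, d_model)                   # each patch of energy bins -> token
        self.pos = nn.Parameter(torch.zeros(1, spectrum_len // patch, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                              # x: (batch, spectrum_len)
        tokens = x.unfold(1, self.patch, self.patch)    # (batch, n_tokens, patch)
        h = self.encoder(self.embed(tokens) + self.pos)
        return self.head(h.mean(dim=1))                 # average-pool tokens, then classify

model = SpectrumTransformer()
print(model(torch.randn(2, 1024)).shape)               # torch.Size([2, 50])
```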

PMID:39158551 | DOI:10.1080/00952990.2024.2377262

Categories: Literature Watch

Artificial intelligence in cardiovascular medicine: clinical applications

Mon, 2024-08-19 06:00

Eur Heart J. 2024 Aug 19:ehae465. doi: 10.1093/eurheartj/ehae465. Online ahead of print.

ABSTRACT

Clinical medicine requires the integration of various forms of patient data, including demographics, symptom characteristics, electrocardiogram findings, laboratory values, biomarker levels, and imaging studies. Decision-making on optimal management should be based on a high probability that the envisaged treatment is appropriate, provides benefit, and bears little or no potential harm. To that end, personalized risk-benefit considerations should guide the management of individual patients to achieve optimal results. These basic clinical tasks have become increasingly challenging with the massively growing data now available; artificial intelligence and machine learning (AI/ML) can assist clinicians by obtaining and comprehensively preparing patient histories, analysing face, voice, and other clinical features, and integrating laboratory results, biomarkers, and imaging. Furthermore, AI/ML can provide a comprehensive risk assessment as a basis for optimal acute and chronic care. The clinical usefulness of AI/ML algorithms should be carefully assessed, validated with confirmation datasets before clinical use, and repeatedly re-evaluated as patient phenotypes change. This review provides an overview of the current data revolution that has changed, and will continue to change, the face of clinical medicine radically, if properly used, to the benefit of physicians and patients alike.

PMID:39158472 | DOI:10.1093/eurheartj/ehae465

Categories: Literature Watch

Enhancing novel isoform discovery: leveraging nanopore long-read sequencing and machine learning approaches

Mon, 2024-08-19 06:00

Brief Funct Genomics. 2024 Aug 19:elae031. doi: 10.1093/bfgp/elae031. Online ahead of print.

ABSTRACT

Long-read sequencing technologies can capture entire RNA transcripts in a single sequencing read, reducing the ambiguity in constructing and quantifying transcript models compared with more common and earlier methods, such as short-read sequencing. Recent improvements in the accuracy of long-read sequencing technologies have expanded the scope for novel splice isoform detection and have also enabled far more accurate reconstruction of complex splicing patterns and transcriptomes. Additionally, the incorporation and advancement of machine learning and deep learning algorithms in bioinformatic software have significantly improved the reliability of long-read sequencing transcriptomic studies. However, there is a lack of consensus on which bioinformatic tools and pipelines produce the most precise and consistent results. Thus, this review aims to discuss and compare the performance of available methods for novel isoform discovery with long-read sequencing technologies, presenting 25 tools. Furthermore, this review intends to demonstrate the need for developing standard analytical pipelines, tools, and transcript model conventions for novel isoform discovery and transcriptomic studies.

PMID:39158328 | DOI:10.1093/bfgp/elae031

Categories: Literature Watch

Prediction of aptamer affinity using an artificial intelligence approach

Mon, 2024-08-19 06:00

J Mater Chem B. 2024 Aug 19. doi: 10.1039/d4tb00909f. Online ahead of print.

ABSTRACT

Aptamers are oligonucleotide sequences that can bind to particular target molecules, similar to monoclonal antibodies. They can be selected by systematic evolution of ligands by exponential enrichment (SELEX), and they can be modified and synthesized. Although the SELEX approach has improved considerably, identifying aptamers experimentally remains challenging and time-consuming. Structure-based methods are the most widely used in the computer-aided design and development of aptamers. For this purpose, numerous web-based platforms have been proposed for predicting the secondary structures and 3D conformations of RNAs and DNAs. Molecular docking and molecular dynamics (MD), which are commonly used for structure-based compound selection against protein targets, are also suitable for aptamer selection. In addition, artificial intelligence (AI) may be able to quickly discover promising aptamer candidates from a large number of sequences. Sophisticated machine and deep-learning (DL) models have demonstrated efficacy in predicting the binding properties between ligands and targets in drug discovery; as such, they may provide a reliable and precise method for predicting the binding of aptamers to targets. This research reviews advancements in AI pipelines and strategies for predicting aptamer binding ability, including machine and deep learning, as well as structure-based approaches and molecular dynamics and molecular docking simulation methods.

PMID:39158322 | DOI:10.1039/d4tb00909f

Categories: Literature Watch

Artificial Intelligence Enabled Interpretation of ECG Images to Predict Hematopoietic Cell Transplantation Toxicity

Mon, 2024-08-19 06:00

Blood Adv. 2024 Aug 16:bloodadvances.2024013636. doi: 10.1182/bloodadvances.2024013636. Online ahead of print.

ABSTRACT

Artificial intelligence enabled interpretation of electrocardiogram waveform images (AI-ECG) can identify patterns predictive of future adverse cardiac events. We hypothesized that such an approach, which is well described in general medical and surgical patients, would provide prognostic information on the risk of cardiac complications and overall mortality in patients undergoing hematopoietic cell transplantation (HCT) for blood malignancy. We retrospectively subjected ECGs obtained pre-HCT to an externally trained deep learning model designed to predict risk of atrial fibrillation (AF). Included were 1,377 patients (849 autologous HCT and 528 allogeneic HCT recipients). Median follow-up was 2.9 years. The three-year cumulative incidence of AF was 9% (95% CI: 7-12%) in autologous HCT patients and 13% (10-16%) in allogeneic HCT patients. In the entire cohort, the pre-HCT AI-ECG estimate of AF risk correlated highly with development of clinical AF (Hazard Ratio (HR) 7.37, 3.53-15.4, p <0.001), inferior overall survival (HR: 2.4; 1.3-4.5, p = 0.004), and greater risk of non-relapse mortality (HR 3.36, 1.39-8.13, p = 0.007), without increased risk of relapse. Significant associations with mortality were noted only in allogeneic HCT recipients, in whom the risk of non-relapse mortality was greater. Compared with calcineurin inhibitor-based graft versus host disease prophylaxis, the use of post-transplantation cyclophosphamide resulted in a greater 90-day incidence of AF (13% versus 5%, p = 0.01), corresponding to temporal changes in AI-ECG AF prediction post HCT. In summary, AI-ECG can inform risk of post-transplant cardiac outcomes and survival in HCT patients and represents a novel strategy for personalized risk assessment after HCT.
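The hazard ratios above come from time-to-event modeling. As a generic, hedged illustration of how an AI-ECG risk score can be related to survival, the sketch below fits a Cox proportional hazards model with the lifelines package on simulated data; the column names and the simulated cohort are assumptions, not the study's data or analysis code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: baseline AI-ECG AF-risk score, follow-up time (years), death indicator.
rng = np.random.default_rng(1)
n = 300
score = rng.uniform(0, 1, n)                          # model-estimated AF risk at baseline
time = rng.exponential(3.0 / (0.5 + score))           # higher score -> shorter survival (toy)
died = (time < 5).astype(int)                         # administrative censoring at 5 years
df = pd.DataFrame({"ai_ecg_af_score": score,
                   "time_years": np.minimum(time, 5.0),
                   "died": died})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="died")
cph.print_summary()   # hazard ratio = exp(coef) for the AI-ECG score
```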

PMID:39158065 | DOI:10.1182/bloodadvances.2024013636

Categories: Literature Watch

Artificial intelligence in musculoskeletal applications: a primer for radiologists

Mon, 2024-08-19 06:00

Diagn Interv Radiol. 2024 Aug 19. doi: 10.4274/dir.2024.242830. Online ahead of print.

ABSTRACT

As an umbrella term, artificial intelligence (AI) covers machine learning and deep learning. This review aimed to elaborate on these terms to act as a primer for radiologists to learn more about the algorithms commonly used in musculoskeletal radiology. It also aimed to familiarize them with the common practices and issues in the use of AI in this domain.

PMID:39157958 | DOI:10.4274/dir.2024.242830

Categories: Literature Watch

Improving Deep Learning-Based Algorithm for Ploidy Status Prediction Through Combined U-NET Blastocyst Segmentation and Sequential Time-Lapse Blastocysts Images

Mon, 2024-08-19 06:00

J Reprod Infertil. 2024 Apr-Jun;25(2):110-119. doi: 10.18502/jri.v25i2.16006.

ABSTRACT

BACKGROUND: Several approaches have been proposed to optimize the construction of an artificial intelligence-based model for assessing ploidy status. These encompass investigating algorithms, refining image segmentation techniques, and discerning essential patterns throughout embryonic development. The purpose of the current study was to evaluate the effectiveness of using a U-NET architecture for embryo segmentation and of extracting time-lapse embryo image sequences three and ten hr before biopsy, to improve model accuracy in predicting embryonic ploidy status.

METHODS: A total of 1,020 time-lapse videos of blastocysts with known ploidy status were used to construct a convolutional neural network (CNN)-based model for ploidy detection. Sequential images of each blastocyst were extracted from the time-lapse videos over periods of three and ten hr prior to the biopsy, generating 31,642 and 99,324 blastocyst images, respectively. A U-NET architecture was applied for blastocyst image segmentation before its implementation in CNN-based model development.

RESULTS: Without U-NET-segmented sequential embryo images, the accuracy of the ploidy prediction model was 0.59 and 0.63 for the three- and ten-hr periods before biopsy, respectively. With U-NET embryo segmentation added to the current model, accuracy improved to 0.61 and 0.66, respectively. Extracting blastocyst images over a ten-hr period thus yields higher accuracy than a three-hr extraction period prior to biopsy.

CONCLUSION: Combined implementation of U-NET architecture for blastocyst image segmentation and the sequential compilation of ten hr of time-lapse blastocyst images could yield a CNN-based model with improved accuracy in predicting ploidy status.
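As a hedged sketch of the general idea, combining segmentation output with a sequence of frames before classification, the toy PyTorch model below masks each time-lapse frame with a stand-in U-NET segmentation map and classifies the masked sequence. The architecture, tensor shapes, and names are invented for illustration and are not the study's model.

```python
import torch
import torch.nn as nn

class PloidyClassifier(nn.Module):
    """Toy classifier over a masked sequence of blastocyst frames (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)                   # single logit: euploid vs. aneuploid

    def forward(self, frames, masks):                  # both (batch, frames, H, W)
        x = (frames * masks).unsqueeze(1)              # apply segmentation masks, add channel dim
        return self.head(self.features(x).flatten(1))

frames = torch.rand(2, 16, 64, 64)                     # 16 frames extracted before biopsy
masks = (torch.rand(2, 16, 64, 64) > 0.5).float()      # stand-in for U-NET segmentation output
print(PloidyClassifier()(frames, masks).shape)         # torch.Size([2, 1])
```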

PMID:39157795 | PMC:PMC11327420 | DOI:10.18502/jri.v25i2.16006

Categories: Literature Watch

SpanSeq: similarity-based sequence data splitting method for improved development and assessment of deep learning projects

Mon, 2024-08-19 06:00

NAR Genom Bioinform. 2024 Aug 16;6(3):lqae106. doi: 10.1093/nargab/lqae106. eCollection 2024 Sep.

ABSTRACT

The use of deep learning models in computational biology has increased massively in recent years, and it is expected to continue with current advances in fields such as Natural Language Processing. These models, although able to draw complex relations between input and target, are also inclined to learn noisy deviations from the pool of data used during their development. In order to assess their performance on unseen data (their capacity to generalize), it is common to split the available data randomly into development (train/validation) and test sets. This procedure, although standard, has been shown to produce dubious assessments of generalization due to the existing similarity between samples in the databases used. In this work, we present SpanSeq, a database partition method for machine learning that can scale to most biological sequences (genes, proteins and genomes) in order to avoid data leakage between sets. We also explore the effect of not restraining similarity between sets by reproducing the development of two state-of-the-art models in bioinformatics, not only confirming the consequences of randomly splitting databases for model assessment, but also extending those repercussions to model development. SpanSeq is available at https://github.com/genomicepidemiology/SpanSeq.
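SpanSeq itself is available at the repository linked above. Purely as an illustration of the underlying idea, keeping similar sequences on the same side of the split so near-duplicates cannot leak from training into test, the toy sketch below clusters sequences by k-mer Jaccard similarity and assigns whole clusters to either partition. The similarity measure, threshold, and function names are assumptions and do not reproduce SpanSeq's algorithm.

```python
def kmers(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster_then_split(seqs, threshold=0.5, test_fraction=0.2, k=4):
    """Greedily cluster sequences by k-mer similarity, then split whole clusters,
    so that no test sequence has a near-duplicate in the training set (toy sketch)."""
    profiles = [kmers(s, k) for s in seqs]
    clusters = []                                   # each cluster is a list of indices
    for i, p in enumerate(profiles):
        for c in clusters:
            if any(jaccard(p, profiles[j]) >= threshold for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    clusters.sort(key=len)                          # fill the test set with small clusters first
    train, test, n_test = [], [], int(test_fraction * len(seqs))
    for c in clusters:
        (test if len(test) < n_test else train).extend(c)
    return train, test

seqs = ["ACGTACGTAA", "ACGTACGTAT", "TTGGCCAATT", "GGGGCCCCAA", "TTGGCCAATA"]
print(cluster_then_split(seqs, threshold=0.4))      # e.g. ([0, 1, 2, 4], [3])
```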

PMID:39157582 | PMC:PMC11327874 | DOI:10.1093/nargab/lqae106

Categories: Literature Watch

A deep learning approach to dysphagia-aspiration detecting algorithm through pre- and post-swallowing voice changes

Mon, 2024-08-19 06:00

Front Bioeng Biotechnol. 2024 Aug 2;12:1433087. doi: 10.3389/fbioe.2024.1433087. eCollection 2024.

ABSTRACT

INTRODUCTION: This study aimed to identify differences in voice characteristics and changes between patients with dysphagia-aspiration and healthy individuals using a deep learning model, with a focus on under-researched areas of pre- and post-swallowing voice changes in patients with dysphagia. We hypothesized that these variations may be due to weakened muscles and blocked airways in patients with dysphagia.

METHODS: A prospective cohort study was conducted on 198 participants aged >40 years at the Seoul National University Bundang Hospital from October 2021 to February 2023. Pre- and post-swallowing voice data of the participants were converted to a 64-kbps mp3 format, and all voice data were trimmed to a length of 2 s. The data were divided for 10-fold cross-validation and stored in HDF5 format with anonymized IDs and labels for the normal and aspiration groups. During preprocessing, the data were converted to Mel spectrograms, and the EfficientAT model was modified using the final layer of MobileNetV3 to effectively detect voice changes and analyze pre- and post-swallowing voices. This enabled the model to probabilistically categorize new patient voices as normal or aspirated.

RESULTS: In a study of the machine-learning model for aspiration detection, area under the receiver operating characteristic curve (AUC) values were analyzed across sexes under different configurations. The average AUC values for males ranged from 0.8117 to 0.8319, with the best performance achieved at a learning rate of 3.00e-5 and a batch size of 16. The average AUC values for females improved from 0.6975 to 0.7331, with the best performance observed at a learning rate of 5.00e-5 and a batch size of 32. As there were fewer female participants, a combined model was developed to maintain the sex balance. In the combined model, the average AUC values ranged from 0.7746 to 0.7997, and optimal performance was achieved at a learning rate of 3.00e-5 and a batch size of 16.

CONCLUSION: This study evaluated a voice analysis-based program to detect pre- and post-swallowing changes in patients with dysphagia, potentially aiding in real-time monitoring. Such a system can provide healthcare professionals with daily insights into the conditions of patients, allowing for personalized interventions.

CLINICAL TRIAL REGISTRATION: ClinicalTrials.gov, identifier NCT05149976.
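The preprocessing described above converts 2-second voice clips into Mel spectrograms before classification. A minimal, hedged sketch of that step with torchaudio is shown below; the sampling rate, FFT size, hop length, and number of Mel bands are illustrative assumptions, not the study's exact configuration.

```python
import torch
import torchaudio

sample_rate = 16_000
waveform = torch.randn(1, 2 * sample_rate)           # stand-in for a trimmed 2 s voice clip

to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

log_mel = to_db(to_mel(waveform))                     # (1, 64, time_frames), input to the classifier
print(log_mel.shape)
```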

PMID:39157445 | PMC:PMC11327512 | DOI:10.3389/fbioe.2024.1433087

Categories: Literature Watch

Empowering vertical farming through IoT and AI-Driven technologies: A comprehensive review

Mon, 2024-08-19 06:00

Heliyon. 2024 Jul 23;10(15):e34998. doi: 10.1016/j.heliyon.2024.e34998. eCollection 2024 Aug 15.

ABSTRACT

The substantial increase in the human population dramatically strains food supplies. Farmers need healthy soil and natural minerals for traditional farming, and production takes longer. The soil-free farming method known as vertical farming (VF) requires little land and consumes far less water than conventional soil-dependent farming techniques. With modern technologies like hydroponics, aeroponics, and aquaponics, the notion of VF appears to have a promising future in urban areas where farming land is expensive and scarce. VF faces difficulties in the simultaneous monitoring of multiple indicators, nutrition advice, and plant diagnosis systems. However, these issues can be resolved by implementing current technical advancements such as artificial intelligence (AI)-based control techniques, including machine learning (ML), deep learning (DL), the internet of things (IoT), image processing, and computer vision. This article presents a thorough analysis of ML and IoT applications in VF systems. Attention is concentrated on disease detection, crop yield prediction, nutrition, and irrigation control management. To predict crop yield and crop diseases, computer vision techniques are investigated for classifying distinct collections of crop images. The article also illustrates ML- and IoT-based VF systems that can raise product quality and production over the long term. The assessment and evaluation of knowledge-based VF systems are also outlined, along with the potential outcomes, advantages, and limitations of ML and IoT in VF systems.

PMID:39157372 | PMC:PMC11328057 | DOI:10.1016/j.heliyon.2024.e34998

Categories: Literature Watch

Fetal-BET: Brain Extraction Tool for Fetal MRI

Mon, 2024-08-19 06:00

IEEE Open J Eng Med Biol. 2024 Jul 12;5:551-562. doi: 10.1109/OJEMB.2024.3426969. eCollection 2024.

ABSTRACT

GOAL: In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development in-utero. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation, and with various sequences and scanning conditions. Development of a machine learning method to effectively address this task requires a large and rich labeled dataset that has not been previously available. Currently, there is no method for accurate fetal brain extraction on various fetal MRI sequences.

METHODS: In this work, we first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences including T2-weighted, diffusion-weighted, and functional MRI acquired with different scanners. These data include images of normal and pathological brains. Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction.

RESULTS: Evaluations on independent test data, including data available from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages.

CONCLUSIONS: By leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.
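One ingredient named above is data augmentation to cope with non-standard fetal head positioning and movement. The torchvision sketch below shows what a strong geometric augmentation pipeline for 2D slices might look like; the specific transforms and parameter ranges are assumptions, not the paper's configuration.

```python
import torch
from torchvision import transforms

# Toy augmentation pipeline for one MRI slice tensor of shape (channels, H, W).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomAffine(degrees=180, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),   # mimic varying acquisition blur
])

slice_2d = torch.rand(1, 256, 256)
print(augment(slice_2d).shape)            # torch.Size([1, 256, 256])
```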

PMID:39157057 | PMC:PMC11329220 | DOI:10.1109/OJEMB.2024.3426969

Categories: Literature Watch

Comparative study of the experimentally observed and GAN-generated 3D microstructures in dual-phase steels

Mon, 2024-08-19 06:00

Sci Technol Adv Mater. 2024 Aug 5;25(1):2388501. doi: 10.1080/14686996.2024.2388501. eCollection 2024.

ABSTRACT

Generative adversarial networks, a deep-learning-based algorithm, can generate images similar to an input. Using this algorithm, an artificial three-dimensional (3D) microstructure can be reproduced from two-dimensional images. Although the generated 3D microstructure has a similar appearance, its reproducibility should be examined for practical applications. This study used an automated serial sectioning technique to compare the 3D microstructures of two dual-phase steels, generated from three orthogonal surface images, with their corresponding observed 3D microstructures. The mechanical behaviors were examined using finite element analysis of the representative volume element, in which finite element models of the microstructures were constructed directly from the 3D voxel data using a voxel coarsening approach. The macroscopic material responses of the generated microstructures captured the anisotropy caused by the microscopic morphology. However, these responses did not quantitatively align with those of the observed microstructures owing to inaccuracies in reproducing the volume fraction of the ferrite/martensite phases. Additionally, the generation algorithm struggled to replicate the microscopic morphology, particularly in cases with a low volume fraction of the martensite phase, where the martensite connectivity was not discernible from the input images. The results demonstrate the limitations of the generation algorithm and the necessity of 3D observations.
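The finite element models above are built from voxel data via a voxel coarsening approach. As a generic, hedged illustration of one way to coarsen a two-phase voxel array, the NumPy sketch below majority-votes each block of voxels; the block size and the majority rule are assumptions, and the final print compares the phase volume fraction before and after coarsening.

```python
import numpy as np

def coarsen_voxels(labels, factor=2):
    """Coarsen a 3D phase-label array by majority vote within each factor^3 block.
    A toy stand-in for a voxel-coarsening step, not the paper's implementation."""
    nx, ny, nz = (s // factor for s in labels.shape)
    blocks = labels[:nx * factor, :ny * factor, :nz * factor].reshape(
        nx, factor, ny, factor, nz, factor)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nx, ny, nz, -1)
    # For a two-phase array (0 = ferrite, 1 = martensite), majority voting is
    # equivalent to thresholding the block mean at 0.5.
    return (blocks.mean(axis=-1) >= 0.5).astype(labels.dtype)

micro = (np.random.default_rng(0).random((64, 64, 64)) > 0.7).astype(np.uint8)
coarse = coarsen_voxels(micro, factor=4)
print(micro.mean(), coarse.shape, coarse.mean())   # compare martensite volume fractions
```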

PMID:39156881 | PMC:PMC11328796 | DOI:10.1080/14686996.2024.2388501

Categories: Literature Watch

BFNet: a full-encoder skip connect way for medical image segmentation

Mon, 2024-08-19 06:00

Front Physiol. 2024 Aug 2;15:1412985. doi: 10.3389/fphys.2024.1412985. eCollection 2024.

ABSTRACT

In recent years, semantic segmentation with deep learning has been widely applied to medical image segmentation, leading to the development of numerous models. Convolutional neural networks (CNNs) have achieved milestone results in medical image analysis. In particular, deep neural networks based on U-shaped architectures and skip connections have been extensively employed in various medical image tasks. U-Net, characterized by its encoder-decoder architecture, pioneering skip connections, and multi-scale features, has served as a fundamental network architecture for many modifications. However, U-Net cannot fully utilize all the information from the encoder layers in the decoder layers. U-Net++ connects intermediate feature maps of different dimensions through nested and dense skip connections, but it only partially alleviates this shortcoming and greatly increases the number of model parameters. In this paper, a novel BFNet is proposed that utilizes all feature maps from the encoder at every layer of the decoder and reconnects them with the current encoder layer. This allows the decoder to better learn the positional information of segmentation targets and improves the learning of boundary information and abstract semantics in the current encoder layer. Our proposed method achieves a significant accuracy improvement of 1.4 percent. Besides enhancing accuracy, the proposed BFNet also reduces network parameters. All the claimed advantages are demonstrated on our dataset. We also discuss how different loss functions influence this model and some possible improvements.
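The core idea described above, letting every decoder layer see feature maps from all encoder levels, can be sketched generically: each encoder map is resampled to the current decoder resolution, concatenated, and projected back. The block below is an illustrative stand-in, not BFNet's actual design; channel counts, the resampling mode, and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullEncoderFusion(nn.Module):
    """Toy decoder stage that fuses feature maps from *every* encoder level,
    resampling each to the current decoder resolution before concatenation."""
    def __init__(self, encoder_channels=(16, 32, 64), out_channels=32):
        super().__init__()
        self.proj = nn.Conv2d(sum(encoder_channels), out_channels, kernel_size=1)

    def forward(self, decoder_feat, encoder_feats):
        size = decoder_feat.shape[-2:]
        resampled = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                     for f in encoder_feats]                # bring all levels to this scale
        fused = self.proj(torch.cat(resampled, dim=1))
        return decoder_feat + fused                         # inject fused encoder context

# Encoder features at full, 1/2 and 1/4 resolution; decoder currently at 1/2 resolution.
e1, e2, e3 = torch.rand(1, 16, 64, 64), torch.rand(1, 32, 32, 32), torch.rand(1, 64, 16, 16)
d = torch.rand(1, 32, 32, 32)
print(FullEncoderFusion()(d, [e1, e2, e3]).shape)           # torch.Size([1, 32, 32, 32])
```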

PMID:39156824 | PMC:PMC11327084 | DOI:10.3389/fphys.2024.1412985

Categories: Literature Watch

Spatiotemporal Disentanglement of Arteriovenous Malformations in Digital Subtraction Angiography

Mon, 2024-08-19 06:00

Proc SPIE Int Soc Opt Eng. 2024 Feb;12926:129263B. doi: 10.1117/12.3006740. Epub 2024 Apr 2.

ABSTRACT

Although Digital Subtraction Angiography (DSA) is the most important imaging modality for visualizing cerebrovascular anatomy, its interpretation by clinicians remains difficult. This is particularly true when treating arteriovenous malformations (AVMs), where entangled vasculature connecting arteries and veins needs to be carefully identified. The presented method aims to enhance DSA image series by highlighting critical information via automatic classification of vessels using a combination of two learning models: an unsupervised machine learning method based on Independent Component Analysis that decomposes the phases of flow, and a convolutional neural network that automatically delineates the vessels in image space. The proposed method was tested on clinical DSA image series and demonstrated efficient differentiation between arteries and veins, providing a viable solution to enhance visualizations for clinical use.
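As a hedged illustration of the unsupervised half of this pipeline, the scikit-learn sketch below applies Independent Component Analysis to a toy DSA-like image series, treating each frame as one observation so that components correspond to temporally distinct flow phases with associated spatial maps. The number of components and the random stand-in data are assumptions, not the clinical series used in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in for a DSA series: T frames of H x W images.
T, H, W = 30, 64, 64
frames = np.random.default_rng(0).random((T, H, W))

ica = FastICA(n_components=3, random_state=0, max_iter=500)
time_courses = ica.fit_transform(frames.reshape(T, H * W))   # (T, 3) component time courses
spatial_maps = ica.mixing_.T.reshape(3, H, W)                # one spatial map per flow component
print(time_courses.shape, spatial_maps.shape)
```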

PMID:39156762 | PMC:PMC11330340 | DOI:10.1117/12.3006740

Categories: Literature Watch

A novel groundnut leaf dataset for detection and classification of groundnut leaf diseases

Mon, 2024-08-19 06:00

Data Brief. 2024 Jul 20;55:110763. doi: 10.1016/j.dib.2024.110763. eCollection 2024 Aug.

ABSTRACT

Groundnut (Arachis hypogaea) is a widely cultivated legume crop that plays a vital role in global agriculture and food security. It is a major source of vegetable oil and protein for human consumption, as well as a cash crop for farmers in many regions. Despite the importance of this crop to household food security and income, diseases, particularly leaf spot (early and late), Alternaria leaf spot, rust, and rosette, have had a significant impact on its production. Deep learning (DL) techniques, especially convolutional neural networks (CNNs), have demonstrated significant ability for early diagnosis of plant leaf diseases. However, the availability of groundnut-specific datasets for training and evaluating DL models is limited, hindering the development and benchmarking of groundnut-related deep learning applications. Therefore, this study provides a dataset of groundnut leaf images, both diseased and healthy, captured in real cultivation fields at Ramchandrapur, Purba Medinipur, West Bengal, using a smartphone camera. The dataset contains a total of 1720 original images that can be used to train DL models to detect groundnut leaf diseases at an early stage. Additionally, we provide baseline results of applying state-of-the-art CNN architectures to the dataset for groundnut disease classification, demonstrating the potential of the dataset for advancing groundnut-related research using deep learning. The aim of creating this dataset is to facilitate the development of sophisticated methods that will help farmers accurately identify diseases and enhance groundnut yields.

PMID:39156669 | PMC:PMC11327543 | DOI:10.1016/j.dib.2024.110763

Categories: Literature Watch

An annotated image dataset of pests on different coloured sticky traps acquired with different imaging devices

Mon, 2024-08-19 06:00

Data Brief. 2024 Jul 14;55:110741. doi: 10.1016/j.dib.2024.110741. eCollection 2024 Aug.

ABSTRACT

The sticky trap is probably the most cost-effective tool for catching insect pests, but the identification and counting of insects on sticky traps is very labour-intensive. When investigating the automatic identification and counting of pests on sticky traps using computer vision and machine learning, two aspects can strongly influence the performance of the model - the colour of the sticky trap and the device used to capture the images of the pests on the sticky trap. As far as we know, there are no available image datasets to study these two aspects in computer vision and deep learning algorithms. Therefore, this paper presents a new dataset consisting of images of two pests commonly found in post-harvest crops - the red flour beetle (Tribolium castaneum) and the rice weevil (Sitophilus oryzae) - captured with three different devices (DSLR, webcam and smartphone) on blue, yellow, white and transparent sticky traps. The images were sorted by device, colour and species and divided into training, validation and test parts for the development of the deep learning model.

PMID:39156668 | PMC:PMC11327826 | DOI:10.1016/j.dib.2024.110741

Categories: Literature Watch

FPJA-Net: A Lightweight End-to-End Network for Sleep Stage Prediction Based on Feature Pyramid and Joint Attention

Sun, 2024-08-18 06:00

Interdiscip Sci. 2024 Aug 19. doi: 10.1007/s12539-024-00636-9. Online ahead of print.

ABSTRACT

Sleep staging is the most crucial step before diagnosing and treating sleep disorders. Traditional manual sleep staging is time-consuming and depends on the skill of experts. Nowadays, automatic sleep staging based on deep learning is attracting more and more researchers. The salient waves in sleep signals contain the most important information for automatic sleep staging. However, this key information is not fully utilized by existing deep learning methods, since most of them use only CNNs or RNNs, which cannot effectively capture multi-scale features in salient waves. To tackle this limitation, we propose a lightweight end-to-end network for sleep stage prediction based on a feature pyramid and joint attention. The feature pyramid module is designed to effectively extract multi-scale features in salient waves, and these features are then fed to the joint attention module to closely attend to the channel and location information of the salient waves. The proposed network has far fewer parameters and delivers a significant performance improvement, surpassing state-of-the-art results. The overall accuracy and macro F1 score on the public Sleep-EDF39, Sleep-EDF153, and SHHS datasets are 90.1% and 87.8%, 87.4% and 84.4%, and 86.9% and 83.9%, respectively. Ablation experiments confirm the effectiveness of each module.
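The evaluation above reports overall accuracy and macro F1. For reference, the short scikit-learn sketch below shows how those two metrics are typically computed for five-class sleep staging on a toy hypnogram; the simulated labels are purely illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=1000)        # expert-scored stages: W, N1, N2, N3, REM
y_pred = np.where(rng.random(1000) < 0.8, y_true, rng.integers(0, 5, size=1000))

print("overall accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```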

PMID:39155326 | DOI:10.1007/s12539-024-00636-9

Categories: Literature Watch
