Deep learning

Utilizing multimodal AI to improve genetic analyses of cardiovascular traits

Tue, 2024-04-02 06:00

medRxiv [Preprint]. 2024 Mar 20:2024.03.19.24304547. doi: 10.1101/2024.03.19.24304547.

ABSTRACT

Electronic health records, biobanks, and wearable biosensors contain multiple high-dimensional clinical data (HDCD) modalities (e.g., ECG, photoplethysmography (PPG), and MRI) for each individual. Access to multimodal HDCD provides a unique opportunity for genetic studies of complex traits because different modalities relevant to a single physiological system (e.g., the circulatory system) encode complementary and overlapping information. We propose a novel multimodal deep learning method, M-REGLE, for discovering genetic associations from a joint representation of multiple complementary HDCD modalities. We showcase the effectiveness of this model by applying it to several cardiovascular modalities. M-REGLE jointly learns a lower-dimensional representation (i.e., latent factors) of multimodal HDCD using a convolutional variational autoencoder, performs genome-wide association studies (GWAS) on each latent factor, then combines the results to study the genetics of the underlying system. To validate the advantages of M-REGLE and multimodal learning, we apply it to common cardiovascular modalities (PPG and ECG) and compare its results to unimodal learning methods in which representations are learned from each data modality separately, but the downstream genetic analyses are performed on the combined unimodal representations. M-REGLE identifies 19.3% more loci on the 12-lead ECG dataset, 13.0% more loci on the ECG lead I + PPG dataset, and its genetic risk score significantly outperforms the unimodal risk score at predicting cardiac phenotypes, such as atrial fibrillation (Afib), in multiple biobanks.
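
The combining step can be sketched concretely. Since the abstract does not spell out how per-latent-factor GWAS results are merged, the rule below (a Bonferroni-adjusted minimum p-value across latent factors) is an illustrative assumption, not necessarily M-REGLE's actual procedure; the function name and threshold are our own.

```python
import numpy as np

def combine_latent_gwas(pvals, alpha=5e-8):
    """Combine per-latent-factor GWAS p-values for each variant.

    pvals: array of shape (n_variants, n_factors), one GWAS p-value per
    variant per latent factor.  A variant is declared significant when its
    Bonferroni-adjusted minimum p-value across factors clears the
    genome-wide threshold `alpha`.

    NOTE: illustrative combining rule only, not necessarily the exact
    procedure used by M-REGLE.
    """
    pvals = np.asarray(pvals)
    n_factors = pvals.shape[1]
    # Bonferroni-adjust the per-variant minimum across latent factors.
    adjusted = np.minimum(pvals.min(axis=1) * n_factors, 1.0)
    return adjusted, adjusted < alpha

# Two variants, three latent factors: only the first clears 5e-8.
adj, hits = combine_latent_gwas([[1e-10, 0.3, 0.8],
                                 [1e-6,  0.2, 0.9]])
```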

PMID:38562791 | PMC:PMC10984061 | DOI:10.1101/2024.03.19.24304547

Categories: Literature Watch

Identifying associations of de novo noncoding variants with autism through integration of gene expression, sequence and sex information

Tue, 2024-04-02 06:00

bioRxiv [Preprint]. 2024 Mar 21:2024.03.20.585624. doi: 10.1101/2024.03.20.585624.

ABSTRACT

Whole-genome sequencing (WGS) data is facilitating genome-wide identification of rare noncoding variants, while elucidating their roles in disease remains challenging. Towards this end, we first revisit a reported significant brain-related association signal of autism spectrum disorder (ASD) detected from de novo noncoding variants attributed to deep learning and show that local GC content can capture similar association signals. We further show that the association signal appears driven by variants from male proband-female sibling pairs that are upstream of assigned genes. We then develop Expression Neighborhood Sequence Association Study (ENSAS), which utilizes gene expression correlations and sequence information, to more systematically identify phenotype-associated variant sets. Applying ENSAS to the same set of de novo variants, we identify gene expression-based neighborhoods showing significant ASD association signal, enriched for synapse-related gene ontology terms. For these top neighborhoods, we also identify chromatin state annotations of variants that are predictive of the proband-sibling local GC content differences. Our work provides new insights into associations of noncoding de novo mutations in ASD and presents an analytical framework applicable to other phenotypes.

PMID:38562739 | PMC:PMC10983996 | DOI:10.1101/2024.03.20.585624

Categories: Literature Watch

Enhanced Multiscale Human Brain Imaging by Semi-supervised Digital Staining and Serial Sectioning Optical Coherence Tomography

Tue, 2024-04-02 06:00

Res Sq [Preprint]. 2024 Mar 21:rs.3.rs-4014687. doi: 10.21203/rs.3.rs-4014687/v1.

ABSTRACT

A major challenge in neuroscience is to visualize the structure of the human brain at different scales. Traditional histology reveals micro- and meso-scale brain features, but suffers from staining variability, tissue damage, and distortion that impede accurate 3D reconstructions. Here, we present a new 3D imaging framework that combines serial sectioning optical coherence tomography (S-OCT) with a deep-learning digital staining (DS) model. We develop a novel semi-supervised learning technique to facilitate DS model training on weakly paired images. The DS model performs translation from S-OCT to Gallyas silver staining. We demonstrate DS on various human cerebral cortex samples with consistent staining quality. Additionally, we show that DS enhances contrast across cortical layer boundaries. Furthermore, we showcase geometry-preserving 3D DS on cubic-centimeter tissue blocks and visualization of meso-scale vessel networks in the white matter. We believe that our technique offers the potential for high-throughput, multiscale imaging of brain tissues and may facilitate studies of brain structures.

PMID:38562721 | PMC:PMC10984089 | DOI:10.21203/rs.3.rs-4014687/v1

Categories: Literature Watch

Comparison of Deep Learning Approaches for Conversion of International Classification of Diseases Codes to the Abbreviated Injury Scale

Tue, 2024-04-02 06:00

medRxiv [Preprint]. 2024 Mar 22:2024.03.06.24303847. doi: 10.1101/2024.03.06.24303847.

ABSTRACT

The injury severity classifications generated from the Abbreviated Injury Scale (AIS) provide information that allows for standardized comparisons in the field of trauma injury research. However, the majority of injuries are coded in the International Classification of Diseases (ICD) and lack this severity information. A system to predict injury severity classifications from ICD codes would be beneficial, as manually coding in AIS can be time-intensive or even impossible for some retrospective cases. It has been previously shown that the encoder-decoder-based neural machine translation (NMT) model is more accurate than a one-to-one mapping of ICD codes to AIS. The objective of this study is to compare the accuracy of two architectures, feedforward neural networks (FFNN) and NMT, in predicting Injury Severity Score (ISS) and ISS ≥16 classification. Both architectures were tested in direct conversion from ICD codes to ISS score and in indirect conversion through AIS, for a total of four models. Trauma cases from the U.S. National Trauma Data Bank were used to develop and test the four models, as the injuries were coded in both ICD and AIS. 2,031,793 trauma cases from 2017-2018 were used to train and validate the models, while 1,091,792 cases from 2019 were used to test and compare them. The results showed that indirect conversion through AIS using an NMT was the most accurate in predicting the exact ISS score, followed by direct conversion with FFNN, direct conversion with NMT, and lastly indirect conversion with FFNN, with statistically significant differences in performance on all pairwise comparisons. The rankings were similar when comparing the accuracy of predicting ISS ≥16 classification, although the differences were smaller. The NMT architecture continues to demonstrate notable accuracy in predicting exact ISS scores, but a simpler FFNN approach may be preferred in specific situations, such as when only ISS ≥16 classification is needed or large-scale computational resources are unavailable.
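
The AIS-to-ISS step that the indirect models exploit is itself deterministic: ISS is the sum of squares of the highest AIS severity in each of the three most severely injured of six body regions, fixed at 75 when any injury is rated AIS 6. A minimal sketch of that rule (function and region names are our own):

```python
# Injury Severity Score (ISS) from Abbreviated Injury Scale (AIS) severities.
# The body is split into six ISS regions; ISS is the sum of squares of the
# highest AIS severity in each of the three most severely injured regions,
# fixed at 75 when any injury is AIS 6 (maximal).

ISS_REGIONS = {"head_neck", "face", "chest", "abdomen", "extremities", "external"}

def injury_severity_score(injuries):
    """injuries: iterable of (region, ais_severity) pairs, severity 1-6."""
    worst = {}
    for region, severity in injuries:
        if region not in ISS_REGIONS:
            raise ValueError(f"unknown ISS body region: {region}")
        if not 1 <= severity <= 6:
            raise ValueError(f"AIS severity out of range: {severity}")
        worst[region] = max(worst.get(region, 0), severity)
    if any(s == 6 for s in worst.values()):
        return 75  # any AIS-6 injury sets ISS to the maximum
    top3 = sorted(worst.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

# AIS 4 chest + AIS 3 head + AIS 2 abdomen -> 16 + 9 + 4 = 29 (ISS >= 16: severe)
score = injury_severity_score([("chest", 4), ("head_neck", 3), ("abdomen", 2)])
```

Because only the worst injury per region and the top three regions contribute, many distinct AIS code sets map to the same ISS, which is one reason exact-score prediction from ICD codes is the harder task.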

PMID:38562696 | PMC:PMC10984072 | DOI:10.1101/2024.03.06.24303847

Categories: Literature Watch

Assessment of Automated Identification of Phases in Videos of Total Hip Arthroplasty Using Deep Learning Techniques

Tue, 2024-04-02 06:00

Clin Orthop Surg. 2024 Apr;16(2):210-216. doi: 10.4055/cios23280. Epub 2024 Mar 15.

ABSTRACT

BACKGROUND: As the population ages, the rates of hip diseases and fragility fractures are increasing, making total hip arthroplasty (THA) one of the best methods for treating elderly patients. With the increasing number of THA surgeries and diverse surgical methods, there is a need for standard evaluation protocols. This study aimed to use deep learning algorithms to classify THA videos and evaluate the accuracy of the labelling of these videos.

METHODS: In our study, we manually annotated 7 phases in THA, including skin incision, broaching, exposure of acetabulum, acetabular reaming, acetabular cup positioning, femoral stem insertion, and skin closure. Within each phase, a second trained annotator marked the beginning and end of instrument usage, such as the skin blade, forceps, Bovie, suction device, suture material, retractor, rasp, femoral stem, acetabular reamer, head trial, and real head.

RESULTS: In our study, we collected 540 operative images of THA procedures and used YOLOv3 to create a scene annotation model. The results of our study showed relatively high accuracy in the classification of surgical techniques such as skin incision and closure, broaching, acetabular reaming, and femoral stem insertion, with a mean average precision (mAP) of 0.75 or higher. Most of the equipment showed good accuracy (mAP of 0.7 or higher), except for the suction device, suture material, and retractor.

CONCLUSIONS: Scene annotation for the instrument and phases in THA using deep learning techniques may provide potentially useful tools for subsequent documentation, assessment of skills, and feedback.

PMID:38562629 | PMC:PMC10973629 | DOI:10.4055/cios23280

Categories: Literature Watch

AI-PUCMDL: artificial intelligence assisted plant counting through unmanned aerial vehicles in India's mountainous regions

Mon, 2024-04-01 06:00

Environ Monit Assess. 2024 Apr 2;196(4):406. doi: 10.1007/s10661-024-12550-0.

ABSTRACT

This work introduces a novel approach to remotely count and monitor potato plants in high-altitude regions of India using an unmanned aerial vehicle (UAV) and an artificial intelligence (AI)-based deep learning (DL) network. The proposed methodology involves the use of a self-created AI model called PlantSegNet, which is based on VGG-16 and U-Net architectures, to analyze aerial RGB images captured by a UAV. To evaluate the proposed approach, a self-created dataset of aerial images from different planting blocks is used to train and test the PlantSegNet model. The experimental results demonstrate the effectiveness and validity of the proposed method in challenging environmental conditions. The proposed approach achieves pixel accuracy of 98.65%, a loss of 0.004, an Intersection over Union (IoU) of 0.95, and an F1 score of 0.94. Comparing the proposed model with existing models, such as Mask-RCNN and U-Net, demonstrates that PlantSegNet outperforms both models in terms of performance parameters. The proposed methodology provides a reliable solution for remote crop counting in challenging terrain, which can be beneficial for farmers in the Himalayan regions of India. The methods and results presented in this paper offer a promising foundation for the development of advanced decision support systems for planning planting operations.
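
The reported metrics are standard for binary segmentation and follow directly from confusion counts. The sketch below shows the generic definitions; the paper's exact averaging over images or classes may differ.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel accuracy, IoU, and F1 for binary masks (generic versions of
    the metrics reported for PlantSegNet)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()    # predicted plant, is plant
    fp = np.logical_and(pred, ~truth).sum()   # predicted plant, is background
    fn = np.logical_and(~pred, truth).sum()   # missed plant pixels
    tn = np.logical_and(~pred, ~truth).sum()  # correct background
    accuracy = (tp + tn) / pred.size
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, iou, f1

# A 2x2 toy mask with one false-positive pixel.
acc, iou, f1 = segmentation_metrics([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```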

PMID:38561525 | DOI:10.1007/s10661-024-12550-0

Categories: Literature Watch

Convolutional neural network deep learning model accurately detects rectal cancer in endoanal ultrasounds

Mon, 2024-04-01 06:00

Tech Coloproctol. 2024 Apr 1;28(1):44. doi: 10.1007/s10151-024-02917-3.

ABSTRACT

BACKGROUND: Imaging is vital for assessing rectal cancer, with endoanal ultrasound (EAUS) being highly accurate in large tertiary medical centers. However, EAUS accuracy drops outside such settings, possibly due to varied examiner experience and fewer examinations. This underscores the need for an AI-based system to enhance accuracy in non-specialized centers. This study aimed to develop and validate deep learning (DL) models to differentiate rectal cancer in standard EAUS images.

METHODS: A transfer learning approach with fine-tuned DL architectures was employed, utilizing a dataset of 294 images. The performance of DL models was assessed through a tenfold cross-validation.

RESULTS: The DL diagnostic model exhibited a sensitivity and accuracy of 0.78 each. In the identification phase, the automatic diagnostic platform achieved an area under the curve (AUC) of 0.85 for diagnosing rectal cancer.
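
The quoted area under the curve has a useful rank interpretation: the probability that a randomly chosen cancer case receives a higher model score than a randomly chosen non-cancer case. A minimal generic computation of that quantity (not the study's code):

```python
def auc_score(labels, scores):
    """Area under the ROC curve computed as the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count half) -- the Mann-Whitney U formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three of the four positive/negative pairs are correctly ordered -> 0.75.
auc = auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```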

CONCLUSIONS: This research demonstrates the potential of DL models in enhancing rectal cancer detection during EAUS, especially in settings with lower examiner experience. The achieved sensitivity and accuracy suggest the viability of incorporating AI support for improved diagnostic outcomes in non-specialized medical centers.

PMID:38561492 | DOI:10.1007/s10151-024-02917-3

Categories: Literature Watch

Boxing behavior recognition based on artificial intelligence convolutional neural network with sports psychology assistant

Mon, 2024-04-01 06:00

Sci Rep. 2024 Apr 1;14(1):7640. doi: 10.1038/s41598-024-58518-5.

ABSTRACT

The purpose of this study is to deeply understand the psychological state of boxers before competition, and to explore an efficient boxing action classification and recognition model supported by artificial intelligence (AI) technology through these psychological characteristics. Firstly, this study systematically measures the key psychological dimensions of boxers, such as anxiety level, self-confidence, team identity, and attitude toward opponents, through a psychological scale survey to obtain detailed psychological data. Then, based on these data, this study constructs a boxing action classification and recognition model based on a fusion of BERT and 3D-ResNet, which comprehensively considers both psychological information and action characteristics to improve the classification accuracy of boxing actions. The performance evaluation shows that the proposed model is significantly superior to traditional models in terms of loss, accuracy, and F1 score, with accuracy reaching 96.86%. Therefore, through the combined application of psychology and deep learning, this study successfully constructs a boxing action classification and recognition model that accounts for the psychological state of boxers, providing strong support for their psychological training and action classification.

PMID:38561402 | DOI:10.1038/s41598-024-58518-5

Categories: Literature Watch

Intelligent wireless power transfer via a 2-bit compact reconfigurable transmissive-metasurface-based router

Mon, 2024-04-01 06:00

Nat Commun. 2024 Apr 1;15(1):2807. doi: 10.1038/s41467-024-46984-4.

ABSTRACT

With the rapid development of the Internet of Things, numerous devices have been deployed in complex environments for environmental monitoring and information transmission, which brings new power supply challenges. Wireless power transfer is a promising solution since it enables power delivery without cables, providing considerable flexibility for power supplies. Here we propose a compact wireless power transfer framework. The core components of the proposed framework include a plane-wave feeder and a transmissive 2-bit reconfigurable metasurface-based beam generator, which constitute a reconfigurable power router. The combined profile of the feeder and the beam generator is 0.8 wavelengths. In collaboration with a deep-learning-driven environment sensor, the router enables object detection and localization, and intelligent wireless power transfer to power-consuming targets, especially in dynamic multitarget environments. Experiments also show that the router is capable of simultaneous wireless power and information transfer. Due to the merits of low cost and compact size, the proposed framework may boost the commercialization of metasurface-based wireless power transfer routers.
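
A 2-bit reconfigurable element exposes only four phase states (0°, 90°, 180°, 270°), so any ideal beam-forming phase profile must be snapped to the nearest available state. The sketch below illustrates that quantization step under our own naming; the router's actual coding scheme is not described in the abstract.

```python
import numpy as np

def quantize_phase_2bit(phase):
    """Snap a continuous phase profile (radians) to the four states of a
    2-bit reconfigurable element: 0, 90, 180, 270 degrees.  Illustrative
    only -- not the paper's coding scheme."""
    levels = np.arange(4) * (np.pi / 2)          # the four 2-bit phase states
    phase = np.mod(phase, 2 * np.pi)             # wrap into [0, 2*pi)
    idx = np.rint(phase / (np.pi / 2)).astype(int) % 4
    return idx, levels[idx]

# Ideal steering phases for a small linear array: each element keeps the
# nearest of the four available states (quantization error <= pi/4).
ideal = np.array([0.1, 1.5, 3.0, 4.6])
bits, quantized = quantize_phase_2bit(ideal)
```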

PMID:38561373 | DOI:10.1038/s41467-024-46984-4

Categories: Literature Watch

GCMSFormer: A Fully Automatic Method for the Resolution of Overlapping Peaks in Gas Chromatography-Mass Spectrometry

Mon, 2024-04-01 06:00

Anal Chem. 2024 Apr 1. doi: 10.1021/acs.analchem.3c05772. Online ahead of print.

ABSTRACT

Gas chromatography-mass spectrometry (GC-MS) is one of the most important instruments for analyzing volatile organic compounds. However, the complexity of real samples and the limitations of chromatographic separation capabilities lead to coeluting compounds without ideal separation. In this study, a Transformer-based automatic resolution method (GCMSFormer) is proposed to resolve mass spectra from GC-MS peaks in an end-to-end manner, predicting the mass spectra of components directly from the raw overlapping peaks data. Furthermore, orthogonal projection resolution (OPR) was integrated into GCMSFormer to resolve minor components. The GCMSFormer model was trained, validated, and tested using 100,000 augmented data samples. It achieves a bilingual evaluation understudy (BLEU) score of 99.88% on the test set, significantly higher than the 97.68% of the baseline sequence-to-sequence long short-term memory (LSTM) model. GCMSFormer was also compared with two non-deep-learning resolution tools (MZmine and AMDIS) and two deep learning resolution tools (PARAFAC2 with DL and MSHub/GNPS) on a real plant essential oil GC-MS dataset. Their resolution results were compared on evaluation metrics including the number of compounds resolved, mass spectral match score, correlation coefficient, explained variance, and resolution speed. The results demonstrate that GCMSFormer offers better resolution performance, higher automation, and faster resolution speed. In summary, GCMSFormer is an end-to-end, fast, fully automatic, and accurate method for analyzing GC-MS data of complex samples.
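
The orthogonal projection idea behind OPR can be illustrated generically: once the major-component spectra are known, projecting each scan onto their orthogonal complement leaves a residual in which minor components stand out. A sketch under that assumption (not the paper's implementation; names are our own):

```python
import numpy as np

def orthogonal_projection_residual(X, major):
    """Remove the contribution of known major-component spectra from
    overlapping GC-MS data, leaving a residual dominated by minor
    components.  X: (n_scans, n_mz) raw data matrix; major: (n_mz, k)
    matrix whose columns are the resolved major spectra.  A generic
    orthogonal-projection step in the spirit of OPR, not the paper's code."""
    Q, _ = np.linalg.qr(major)     # orthonormal basis spanning the major spectra
    return X - X @ Q @ Q.T         # project each scan onto the complement

# Data built purely from one major spectrum leaves a ~zero residual.
spectrum = np.array([[1.0], [2.0], [3.0]])        # one major component, 3 m/z bins
X = np.outer([0.5, 1.0, 0.2], spectrum.ravel())   # 3 scans of that component
residual = orthogonal_projection_residual(X, spectrum)
```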

PMID:38560891 | DOI:10.1021/acs.analchem.3c05772

Categories: Literature Watch

DLBCNet: A Deep Learning Network for Classifying Blood Cells

Mon, 2024-04-01 06:00

Big Data Cogn Comput. 2023 Apr 14;7(2):75. doi: 10.3390/bdcc7020075.

ABSTRACT

BACKGROUND: Blood is responsible for delivering nutrients to various organs and carries important information about the health of the human body. Therefore, analysis of blood can indirectly help doctors judge a person's physical state. Recently, researchers have applied deep learning (DL) to the automatic analysis of blood cells. However, there are still some deficiencies in these models.

METHODS: To cope with these issues, we propose a novel network for the multi-classification of blood cells, which is called DLBCNet. A new specifical model for blood cells (BCGAN) is designed to generate synthetic images. The pre-trained ResNet50 is implemented as the backbone model, which serves as the feature extractor. The extracted features are fed to the proposed ETRN to improve the multi-classification performance of blood cells.

RESULTS: The average accuracy, average sensitivity, average precision, average specificity, and average F1 score of the proposed model are 95.05%, 93.25%, 97.75%, 93.72%, and 95.38%, respectively.

CONCLUSIONS: The performance of the proposed model surpasses other state-of-the-art methods in reported classification results.

PMID:38560757 | PMC:PMC7615784 | DOI:10.3390/bdcc7020075

Categories: Literature Watch

nanoBERT: a deep learning model for gene agnostic navigation of the nanobody mutational space

Mon, 2024-04-01 06:00

Bioinform Adv. 2024 Mar 6;4(1):vbae033. doi: 10.1093/bioadv/vbae033. eCollection 2024.

ABSTRACT

MOTIVATION: Nanobodies are a subclass of immunoglobulins whose binding site consists of only one peptide chain, bestowing favorable biophysical properties. Recently, the first nanobody therapy was approved, paving the way for further clinical applications of this antibody format. Further development of nanobody-based therapeutics could be streamlined by computational methods. One such method is infilling: positional prediction of biologically feasible mutations in nanobodies. Being able to identify possible positional substitutions based on sequence context facilitates the functional design of such molecules.

RESULTS: Here we present nanoBERT, a nanobody-specific transformer to predict amino acids in a given position in a query sequence. We demonstrate the need to develop such a machine-learning-based protocol, as opposed to gene-specific positional statistics, since an appropriate genetic reference is not available. We benchmark nanoBERT with respect to human-based language models and ESM-2, demonstrating the benefit of domain-specific language models. We also demonstrate the benefit of employing nanobody-specific predictions for fine-tuning on an experimentally measured thermostability dataset. We hope that nanoBERT will help engineers in a range of predictive tasks for designing therapeutic nanobodies.

AVAILABILITY AND IMPLEMENTATION: https://huggingface.co/NaturalAntibody/.

PMID:38560554 | PMC:PMC10978573 | DOI:10.1093/bioadv/vbae033

Categories: Literature Watch

Remote and low-cost intraocular pressure monitoring by deep learning of speckle patterns

Mon, 2024-04-01 06:00

J Biomed Opt. 2024 Mar;29(3):037003. doi: 10.1117/1.JBO.29.3.037003. Epub 2024 Mar 29.

ABSTRACT

SIGNIFICANCE: Glaucoma, a leading cause of global blindness, disproportionately affects low-income regions due to expensive diagnostic methods. Affordable intraocular pressure (IOP) measurement is crucial for early detection, especially in low- and middle-income countries.

AIM: We developed a remote photonic IOP biomonitoring method by deep learning of the speckle patterns reflected from an eye sclera stimulated by a sound source. We aimed to achieve precise IOP measurements.

APPROACH: IOP was artificially raised in 24 pig eyeballs, considered similar to human eyes, to apply our biomonitoring method. By deep learning of the speckle pattern videos, we analyzed the data for accurate IOP determination.

RESULTS: Our method demonstrated the possibility of high-precision IOP measurements. Deep learning effectively analyzed the speckle patterns, enabling accurate IOP determination, with the potential for global use.

CONCLUSIONS: The novel, affordable, and accurate remote photonic IOP biomonitoring method for glaucoma diagnosis, tested on pig eyes, shows promising results. Leveraging deep learning and speckle pattern analysis, together with the development of a prototype for human eyes testing, could enhance diagnosis and management, particularly in resource-constrained settings worldwide.

PMID:38560532 | PMC:PMC10979815 | DOI:10.1117/1.JBO.29.3.037003

Categories: Literature Watch

Deciphering the controlling factors for phase transitions in zeolitic imidazolate frameworks

Mon, 2024-04-01 06:00

Natl Sci Rev. 2024 Jan 13;11(4):nwae023. doi: 10.1093/nsr/nwae023. eCollection 2024 Apr.

ABSTRACT

Zeolitic imidazolate frameworks (ZIFs) feature complex phase transitions, including polymorphism, melting, vitrification, and polyamorphism. Experimentally probing their structural evolution during transitions involving amorphous phases is a significant challenge, especially at the medium-range length scale. To overcome this challenge, here we first train a deep learning-based force field to identify the structural characteristics of both crystalline and non-crystalline ZIF phases. This allows us to reproduce the structural evolution trend during the melting of crystals and formation of ZIF glasses at various length scales with an accuracy comparable to that of ab initio molecular dynamics, yet at a much lower computational cost. Based on this approach, we propose a new structural descriptor, namely, the ring orientation index, to capture the propensity for crystallization of ZIF-4 (Zn(Im)2, Im = C3H3N2-) glasses, as well as for the formation of ZIF-zni (Zn(Im)2) out of the high-density amorphous phase. This crystal formation process is a result of the reorientation of imidazole rings by sacrificing the order of the structure around the zinc-centered tetrahedra. The outcomes of this work are useful for studying phase transitions in other metal-organic frameworks (MOFs) and may thus guide the development of MOF glasses.

PMID:38560493 | PMC:PMC10980346 | DOI:10.1093/nsr/nwae023

Categories: Literature Watch

Super-resolution segmentation network for inner-ear tissue segmentation

Mon, 2024-04-01 06:00

Simul Synth Med Imaging. 2023 Oct;14288:11-20. doi: 10.1007/978-3-031-44689-4_2. Epub 2023 Oct 7.

ABSTRACT

Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensory-based hearing loss. Several groups have proposed computational models of the cochlea in order to study the neural activation patterns in response to CI stimulation. However, most current implementations either rely on high-resolution histological images that cannot be customized for CI users or on CT images that lack the spatial resolution to show cochlear structures. In this work, we propose a deep-learning-based method to obtain μCT-level tissue labels from patient CT images. Experiments showed that the proposed super-resolution segmentation architecture achieved very good performance on inner-ear tissue segmentation. Our best-performing model achieved a mean Dice score of 0.871, outperforming UNet (0.746), VNet (0.853), nnUNet (0.861), TransUNet (0.848), and SRGAN (0.780).

PMID:38560492 | PMC:PMC10979466 | DOI:10.1007/978-3-031-44689-4_2

Categories: Literature Watch

CovC-ReDRNet: A Deep Learning Model for COVID-19 Classification

Mon, 2024-04-01 06:00

Mach Learn Knowl Extr. 2023 Jun 27;5(3):684-712. doi: 10.3390/make5030037.

ABSTRACT

Since the COVID-19 pandemic outbreak, over 760 million confirmed cases and over 6.8 million deaths have been reported globally, according to the World Health Organization. While the SARS-CoV-2 virus carried by COVID-19 patients can be identified through the reverse transcription-polymerase chain reaction (RT-PCR) test with high accuracy, clinical misdiagnosis between COVID-19 and pneumonia patients remains a challenge. Therefore, we developed a novel CovC-ReDRNet model to distinguish COVID-19 patients from pneumonia patients as well as normal cases. ResNet-18 was introduced as the backbone model and tailored for the feature representation afterward. In our feature-based randomized neural network (RNN) framework, the feature representation automatically pairs with the deep random vector functional link network (dRVFL) as the optimal classifier, producing a CovC-ReDRNet model for the classification task. Results based on five-fold cross-validation reveal that our method achieved 94.94%, 97.01%, 97.56%, 96.81%, and 95.84% MA sensitivity, MA specificity, MA accuracy, MA precision, and MA F1 score, respectively. Ablation studies demonstrate the superiority of ResNet-18 over different backbone networks, RNNs over traditional classifiers, and deep RNNs over shallow RNNs. Moreover, our proposed model achieved a better MA accuracy than the state-of-the-art (SOTA) methods, the highest score of which was 95.57%. To conclude, our CovC-ReDRNet model could serve as an advanced computer-aided diagnostic model with high speed and high accuracy for classifying and predicting COVID-19 diseases.
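
A shallow random vector functional link (RVFL) network, the building block that dRVFL deepens, uses fixed random hidden weights plus direct input-to-output links, so only the output weights need solving, which can be done in closed form by ridge regression. A textbook-style sketch of that idea (not the authors' dRVFL implementation):

```python
import numpy as np

def train_rvfl(X, Y, n_hidden=64, ridge=1e-3, seed=0):
    """Train a shallow random vector functional link network: random fixed
    hidden weights, direct input-to-output links, and output weights solved
    in closed form by ridge regression.  A textbook RVFL sketch."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                   # random nonlinear features
    D = np.hstack([X, H])                    # direct links + random features
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return (W, b, beta)

def predict_rvfl(model, X):
    W, b, beta = model
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Learn XOR, which a purely linear readout cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0.], [1.], [1.], [0.]])
model = train_rvfl(X, Y)
pred = predict_rvfl(model, X)
```

Because the hidden weights are never trained, the only free parameters are the output weights, which is why such randomized networks are fast to fit compared with backpropagation-trained classifiers.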

PMID:38560420 | PMC:PMC7615781 | DOI:10.3390/make5030037

Categories: Literature Watch

PAT (Periderm Assessment Toolkit): A Quantitative and Large-Scale Screening Method for Periderm Measurements

Mon, 2024-04-01 06:00

Plant Phenomics. 2024 Mar 29;6:0156. doi: 10.34133/plantphenomics.0156. eCollection 2024.

ABSTRACT

The periderm is a vital protective tissue found in the roots, stems, and woody elements of diverse plant species. It plays an important function in these plants by assuming the role of the epidermis as the outermost layer. Despite its critical role in protecting plants from environmental stresses and pathogens, research on root periderm development has been limited due to its late formation during root development, its presence only in mature root regions, and its impermeability. One of the most straightforward measurements for comparing periderm formation between different genotypes and treatments is periderm (phellem) length. We have developed PAT (Periderm Assessment Toolkit), a high-throughput, user-friendly pipeline that integrates an efficient staining protocol, automated imaging, and a deep-learning-based image analysis approach to accurately detect and measure periderm length in the roots of Arabidopsis thaliana. The reliability and reproducibility of our method were evaluated using a diverse set of 20 Arabidopsis natural accessions. Our automated measurements exhibited a strong correlation with human-expert-generated measurements, achieving 94% efficiency in periderm length quantification. This robust PAT pipeline streamlines large-scale periderm measurements, facilitating comprehensive genetic studies and screens. Although PAT proves highly effective with automated digital microscopes in Arabidopsis roots, its application may pose challenges with nonautomated microscopy, and while the workflow and principles could be adapted for other plant species, additional optimization would be necessary. We show that periderm length can be used to distinguish a mutant impaired in periderm development from the wild type, but we also find that it is a plastic trait. Therefore, care must be taken to include sufficient repeats and controls, to minimize variation, and to ensure comparability of periderm length measurements between different genotypes and growth conditions.

PMID:38560381 | PMC:PMC10981931 | DOI:10.34133/plantphenomics.0156

Categories: Literature Watch

Channel Attention GAN-Based Synthetic Weed Generation for Precise Weed Identification

Mon, 2024-04-01 06:00

Plant Phenomics. 2024 Mar 28;6:0122. doi: 10.34133/plantphenomics.0122. eCollection 2024.

ABSTRACT

Weeds are a major biological factor causing declines in crop yield. However, widespread herbicide application and indiscriminate weeding with soil disturbance are of great concern because of their environmental impacts. Site-specific weed management (SSWM) refers to a weed management strategy for digital agriculture that results in low energy loss. Deep learning is crucial for developing SSWM, as it distinguishes crops from weeds and identifies weed species. However, this technique requires substantial annotated data, which necessitates expertise in weed science and agronomy. In this study, we present a channel attention mechanism-driven generative adversarial network (CA-GAN) that can generate realistic synthetic weed data. The performance of the model was evaluated using two datasets: the public segmented Plant Seedling Dataset (sPSD), featuring nine common broadleaf weeds from arable land, and the Institute for Sustainable Agro-ecosystem Services (ISAS) dataset, which includes five common summer weeds in Japan. The synthetic dataset generated by the proposed CA-GAN achieved an 82.63% recognition accuracy on the sPSD and 93.46% on the ISAS dataset. The Fréchet inception distance (FID) measures the similarity between a synthetic and a real dataset and has been shown to correlate well with human judgments of the quality of synthetic samples; the synthetic dataset achieved a low FID score (20.95 on the sPSD and 24.31 on the ISAS dataset). Overall, the experimental results demonstrated that the proposed method outperformed previous state-of-the-art GAN models in terms of image quality, diversity, and discriminability, making it a promising approach for synthetic agricultural data generation.
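
The FID quoted above has a closed form under Gaussian assumptions: for feature sets with means μ1, μ2 and covariances C1, C2, FID = ||μ1 − μ2||² + Tr(C1 + C2 − 2(C1C2)^(1/2)). A direct sketch of the formula on plain arrays; in practice the features come from an Inception network, which is omitted here.

```python
import numpy as np

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between two sets of feature vectors:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)).
    Feature extraction (an Inception network for FID) is assumed done."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of c1 @ c2 via eigendecomposition.
    vals, vecs = np.linalg.eig(c1 @ c2)
    covmean = (vecs * np.sqrt(vals.astype(complex))) @ np.linalg.inv(vecs)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2 * covmean).real)

rng = np.random.default_rng(0)
same = rng.standard_normal((500, 4))
fid_zero = frechet_distance(same, same)   # identical sets -> distance ~0
```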

PMID:38560380 | PMC:PMC10981930 | DOI:10.34133/plantphenomics.0122

Categories: Literature Watch

Real-time driver identification in IoV: A deep learning and cloud integration approach

Mon, 2024-04-01 06:00

Heliyon. 2024 Mar 16;10(7):e28109. doi: 10.1016/j.heliyon.2024.e28109. eCollection 2024 Apr 15.

ABSTRACT

The Internet of Vehicles (IoV) emerges as a pivotal extension of the Internet of Things (IoT), specifically geared towards transforming the automotive landscape. In this evolving ecosystem, the demand for a seamless end-to-end system becomes paramount for enhancing operational efficiency and safety. Hence, this study introduces an innovative method for real-time driver identification by integrating cloud computing with deep learning. Utilizing the integrated capabilities of Google Cloud, Thingsboard, and Apache Kafka, the developed solution tailored for IoV technology is adept at managing real-time data collection, processing, prediction, and visualization, with resilience against sensor data anomalies. This research also proposes a driver identification method that combines Convolutional Neural Networks (CNN) with multi-head self-attention. The proposed model is validated on two datasets: the public Security dataset and a self-collected dataset. Moreover, the results show that the proposed model surpassed previous works by achieving an accuracy and F1 score of 99.95%. Even when challenged with data anomalies, this model maintains a high accuracy of 96.2%. By achieving accurate driver identification results, the proposed end-to-end IoV system can aid in optimizing fleet management, vehicle security, personalized driving experiences, insurance, and risk assessment. This emphasizes its potential for road safety and managing transportation more effectively.
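
Multi-head self-attention, the component paired with the CNN here, is standard: per head, softmax(QKᵀ/√d)V over the sequence of features. A from-scratch sketch with explicit weight matrices (shapes and names are our own, not the authors' architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product self-attention with several heads over a
    sequence of feature vectors X of shape (T, d_model).  All weight
    matrices are (d_model, d_model).  A sketch of the standard mechanism."""
    T, d_model = X.shape
    d_head = d_model // n_heads
    def split(M):  # (T, d_model) -> (n_heads, T, d_head)
        return M.reshape(T, n_heads, d_head).transpose(1, 0, 2)
    Q, K, V = split(X @ Wq), split(X @ Wk), split(X @ Wv)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head))   # (h, T, T)
    out = (attn @ V).transpose(1, 0, 2).reshape(T, d_model)      # merge heads
    return out @ Wo, attn

rng = np.random.default_rng(0)
T, d_model, n_heads = 5, 8, 2
X = rng.standard_normal((T, d_model))                 # e.g. CNN feature sequence
Ws = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model) for _ in range(4)]
out, attn = multi_head_self_attention(X, *Ws, n_heads=n_heads)
```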

PMID:38560228 | PMC:PMC10981028 | DOI:10.1016/j.heliyon.2024.e28109

Categories: Literature Watch

Deceptive learning in histopathology

Mon, 2024-04-01 06:00

Histopathology. 2024 Mar 31. doi: 10.1111/his.15180. Online ahead of print.

ABSTRACT

AIMS: Deep learning holds immense potential for histopathology, automating tasks that are simple for expert pathologists and revealing novel biology for tasks that were previously considered difficult or impossible to solve by eye alone. However, the extent to which the visual strategies learned by deep learning models in histopathological analysis are trustworthy or not has yet to be systematically analysed. Here, we systematically evaluate deep neural networks (DNNs) trained for histopathological analysis in order to understand if their learned strategies are trustworthy or deceptive.

METHODS AND RESULTS: We trained a variety of DNNs on a novel data set of 221 whole-slide images (WSIs) from lung adenocarcinoma patients, and evaluated their effectiveness at (1) molecular profiling of KRAS versus EGFR mutations, (2) determining the primary tissue of a tumour and (3) tumour detection. While DNNs achieved above-chance performance on molecular profiling, they did so by exploiting correlations between histological subtypes and mutations, and failed to generalise to a challenging test set obtained through laser capture microdissection (LCM). In contrast, DNNs learned robust and trustworthy strategies for determining the primary tissue of a tumour as well as detecting and localising tumours in tissue.

CONCLUSIONS: Our work demonstrates that DNNs hold immense promise for aiding pathologists in analysing tissue. However, they are also capable of achieving seemingly strong performance by learning deceptive strategies that leverage spurious correlations, and are ultimately unsuitable for research or clinical work. The framework we propose for model evaluation and interpretation is an important step towards developing reliable automated systems for histopathological analysis.

PMID:38556922 | DOI:10.1111/his.15180

Categories: Literature Watch
