Deep learning

Predicting removal of arsenic from groundwater by iron based filters using deep neural network models

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26428. doi: 10.1038/s41598-024-76758-3.

ABSTRACT

Arsenic (As) contamination in drinking water has been highlighted for its environmental significance and potential health implications. Iron-based filters are cost-effective and sustainable solutions for As removal from contaminated water. The application of Machine Learning (ML) models to investigate and optimize As removal using iron-based filters remains limited. The present study developed Deep Learning Neural Network (DLNN) models for predicting the removal of As and other contaminants from groundwater by iron-based filters. A small Original Dataset (ODS) consisting of 20 data points and 13 groundwater parameters was obtained from the field performances of 20 individual iron-amended ceramic filters. Cubic-spline interpolation (CSI) expanded the ODS, generating 1600 interpolated data points (IDPs) without duplication. A Bayesian optimization algorithm tuned the model hyper-parameters, and the IDPs were used to train all the models in a stratified five-fold Cross-Validation (CV) setup. The models demonstrated reliable performance, with coefficients of determination (R2) of 0.990-0.999 for As, 0.774-0.976 for Iron (Fe), 0.934-0.954 for Phosphorus (P), and 0.878-0.998 for Manganese (Mn) in the effluent. Sobol sensitivity analysis revealed that As (total order index (ST) = 0.563), P (ST = 0.441), Eh (ST = 0.712), and Temp (ST = 0.371) are the most sensitive parameters for the removal of As, Fe, P, and Mn, respectively. The comprehensive approach, from data expansion through DLNN model development, provides a valuable tool for estimating optimal As removal conditions from groundwater.
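
As a rough illustration of the data-expansion step described above, the sketch below uses SciPy's CubicSpline to grow a 20-point dataset into a larger set of interpolated data points before model training; the column layout, sorting index, and target choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of cubic-spline interpolation (CSI) data expansion,
# loosely following the abstract; column names and sizes are assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

# Stand-in for the 20-point original dataset (ODS): 13 groundwater
# parameters (X) and effluent arsenic concentration (y).
X = rng.random((20, 13))
y = rng.random(20)

def expand_with_csi(X, y, n_new=1600):
    """Fit a cubic spline to each column over a sorting index and
    resample it on a denser grid to generate interpolated data points."""
    order = np.argsort(y)                      # sort by the target for a monotone index
    t = np.arange(len(y))                      # original sample index
    t_new = np.linspace(0, len(y) - 1, n_new)  # denser index grid
    X_new = np.column_stack(
        [CubicSpline(t, X[order, j])(t_new) for j in range(X.shape[1])]
    )
    y_new = CubicSpline(t, y[order])(t_new)
    return X_new, y_new

X_idp, y_idp = expand_with_csi(X, y)
print(X_idp.shape, y_idp.shape)  # (1600, 13) (1600,)
```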

PMID:39488582 | DOI:10.1038/s41598-024-76758-3

Categories: Literature Watch

A deep learning approach for ovarian cancer detection and classification based on fuzzy deep learning

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26463. doi: 10.1038/s41598-024-75830-2.

ABSTRACT

Different oncologists make their own decisions about the detection and classification of the type of ovarian cancer from histopathological whole slide images. However, an automated system that is more accurate and standardized is needed for decision-making, which is essential for early detection of ovarian cancer. To help doctors, an automated system for the detection and classification of ovarian cancer is proposed. The model starts by extracting the main features from the histopathology images using the ResNet-50 model to detect and classify the cancer. Then, recursive feature elimination based on a decision tree is introduced to remove unnecessary features extracted during the feature extraction process. The Adam optimizer was used to update the network's weights during training. Finally, deep learning and fuzzy logic are combined to classify the images of ovarian cancer. The dataset consists of 288 hematoxylin and eosin (H&E)-stained whole slides with clinical information from 78 patients. The H&E-stained Whole Slide Images (WSIs), including 162 effective and 126 invalid WSIs, were obtained from different tissue blocks of post-treatment specimens. Experimental results show that the model can diagnose ovarian cancer with an accuracy of 98.99%, sensitivity of 99%, specificity of 98.96%, and F1-score of 98.99%. These promising results indicate the potential of fuzzy deep-learning classifiers for predicting ovarian cancer.
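
The following sketch illustrates the feature-extraction and feature-selection stages only (ResNet-50 features followed by decision-tree-based recursive feature elimination); the fuzzy deep-learning classifier is not reproduced, and the dummy data, batch size, and feature counts are assumptions.

```python
# Hypothetical sketch: ResNet-50 feature extraction followed by recursive
# feature elimination (RFE) with a decision tree, as described in the abstract.
# The fuzzy deep-learning classifier itself is not reproduced here.
import torch
import torchvision.models as models
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier

# Pretrained ResNet-50 with its classification head removed -> 2048-d features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(batch):                 # batch: (N, 3, 224, 224) image tensor
    with torch.no_grad():
        return backbone(batch).numpy()

# Dummy stand-ins for WSI tiles and their slide-level labels.
images = torch.randn(16, 3, 224, 224)
labels = [0, 1] * 8

features = extract_features(images)          # (16, 2048)

# Decision-tree-based RFE keeps only the most informative features.
selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=128, step=256)
selected = selector.fit_transform(features, labels)
print(selected.shape)                        # (16, 128)
```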

PMID:39488573 | DOI:10.1038/s41598-024-75830-2

Categories: Literature Watch

A spatiotemporal correlation and attention-based model for pipeline deformation prediction in foundation pit engineering

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26387. doi: 10.1038/s41598-024-77601-5.

ABSTRACT

In foundation pit engineering, the deformation prediction of adjacent pipelines is crucial for construction safety. Existing approaches depend on constitutive models, grey correlation prediction, or traditional feedforward neural networks. Due to the complex hydrological and geological conditions, as well as the nonstationary and nonlinear characteristics of monitoring data, this problem remains a challenge. By formulating the deformation of monitoring points as multivariate time series, a deep learning-based prediction model is proposed, which utilizes a convolutional neural network to extract the spatial dependencies among various monitoring points and leverages a bi-directional long short-term memory (BiLSTM) network to extract temporal features. Notably, an attention mechanism is introduced to adjust the trainable weights of the spatial-temporal features extracted during prediction. The evaluation on a real-world subway project demonstrates that the proposed model has advantages over current models, particularly in long-term prediction. It improves the adjusted R2 index by 19.4% to 61.6% on average compared with existing models, and exhibits a decrease in mean absolute error of 51.5% to 70.3% compared to others. Experiments and analyses verify that the spatial-temporal dependencies in time series and the attention learning for spatial-temporal features can improve the prediction of such engineering problems.
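
A minimal PyTorch sketch of the architecture described above (a CNN for spatial dependencies, a BiLSTM for temporal features, and attention for re-weighting); all layer sizes, the number of monitoring points, and the sequence length are illustrative assumptions.

```python
# Hypothetical sketch of the CNN + BiLSTM + attention architecture outlined in
# the abstract, for multivariate pipeline-deformation time series.
import torch
import torch.nn as nn

class SpatioTemporalAttentionNet(nn.Module):
    def __init__(self, n_points=8, hidden=64):
        super().__init__()
        # 1D convolution across monitoring points captures spatial dependencies.
        self.spatial = nn.Conv1d(n_points, hidden, kernel_size=3, padding=1)
        # BiLSTM captures temporal dependencies along the sequence.
        self.temporal = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Attention re-weights the extracted spatial-temporal features.
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_points)   # next-step deformation per point

    def forward(self, x):                  # x: (batch, time, n_points)
        h = self.spatial(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        h, _ = self.temporal(h)                               # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)                # attention over time
        context = (w * h).sum(dim=1)                          # (batch, 2*hidden)
        return self.head(context)

model = SpatioTemporalAttentionNet()
x = torch.randn(4, 30, 8)                  # 4 samples, 30 time steps, 8 monitoring points
print(model(x).shape)                      # torch.Size([4, 8])
```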

PMID:39488572 | DOI:10.1038/s41598-024-77601-5

Categories: Literature Watch

Automated estimation of offshore polymetallic nodule abundance based on seafloor imagery using deep learning

Sat, 2024-11-02 06:00

Sci Total Environ. 2024 Oct 31:177225. doi: 10.1016/j.scitotenv.2024.177225. Online ahead of print.

ABSTRACT

The burgeoning demand for critical metals used in high-tech and green technology industries has turned attention toward the vast resources of polymetallic nodules on the ocean floor. Traditional methods for estimating the abundance of these nodules, such as direct sampling or acoustic imagery, are time- and labour-intensive and often insufficient for large-scale or accurate assessment. This paper advocates for the automation of polymetallic nodule detection and abundance estimation using deep learning algorithms applied to seabed photographs. We propose a UNET convolutional neural network framework specifically trained to process the unique features of seabed imagery, which can reliably detect and estimate the abundance of polymetallic nodules from thousands of seabed photographs in significantly reduced time (below 10 h for 30 thousand photographs). Our approach addresses the challenges of data preparation, variable image quality, the coverage-to-abundance transition model, and the presence of sediments. We show that this approach can substantially increase the efficiency and accuracy of resource estimation, dramatically reducing the time and cost currently required for manual assessment. Furthermore, we discuss the potential of this method to be integrated into large-scale systems for sustainable exploitation of these undersea resources.
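
As a hedged illustration of the post-segmentation step, the snippet below converts a predicted nodule mask into seafloor coverage and then into an abundance estimate via an assumed linear coverage-to-abundance model; the U-Net itself and any calibration coefficients are not taken from the paper.

```python
# Hypothetical post-processing sketch: from a predicted nodule mask to coverage
# and abundance. The linear calibration constant below is an assumption.
import numpy as np

def nodule_abundance(mask, kg_per_m2_at_full_cover=100.0):
    """mask: binary array (1 = nodule pixel) from the segmentation network.
    Returns coverage (fraction of seafloor covered) and an abundance estimate
    under an assumed linear coverage-to-abundance transition model."""
    coverage = float(mask.mean())
    return coverage, coverage * kg_per_m2_at_full_cover

rng = np.random.default_rng(0)
mask = (rng.random((512, 512)) < 0.12).astype(np.uint8)   # dummy prediction, ~12% cover
cov, ab = nodule_abundance(mask)
print(f"coverage = {cov:.3f}, abundance ~ {ab:.1f} kg/m^2")
```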

PMID:39488283 | DOI:10.1016/j.scitotenv.2024.177225

Categories: Literature Watch

GraphCVAE: Uncovering cell heterogeneity and therapeutic target discovery through residual and contrastive learning

Sat, 2024-11-02 06:00

Life Sci. 2024 Oct 31:123208. doi: 10.1016/j.lfs.2024.123208. Online ahead of print.

ABSTRACT

Advancements in Spatial Transcriptomics (ST) technologies in recent years have transformed the analysis of tissue structure and function within spatial contexts. However, accurately identifying spatial domains remains challenging due to data sparsity and noise. Traditional clustering methods often fail to capture spatial dependencies, while spatial clustering methods struggle with batch effects and data integration. We introduce GraphCVAE, a model designed to enhance spatial domain identification by integrating spatial and morphological information, correcting batch effects, and managing heterogeneous data. GraphCVAE employs a multi-layer Graph Convolutional Network (GCN) and a variational autoencoder to improve the representation and integration of spatial information. Through contrastive learning, the model captures subtle differences between cell types and states. Extensive testing on various ST datasets demonstrates GraphCVAE's robustness and biological contributions. In the dorsolateral prefrontal cortex (DLPFC) dataset, it accurately delineates cortical layer boundaries. In glioblastoma, GraphCVAE reveals critical therapeutic targets such as TF and NFIB. In colorectal cancer, it explores the role of the extracellular matrix. The model's performance metrics consistently surpass those of existing methods, validating its effectiveness. GraphCVAE's advanced visualization capabilities further highlight its precision in resolving spatial structures, making it a powerful tool for spatial transcriptomics analysis and offering new insights into disease studies.
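
The sketch below illustrates the general GCN-plus-variational-autoencoder idea with plain matrix operations (normalized-adjacency aggregation and a reparameterized latent space); the dimensions, graph construction, and the omission of the decoder and contrastive terms are simplifying assumptions, not the GraphCVAE implementation.

```python
# Hypothetical sketch of a graph-convolutional variational encoder over spatial spots.
import torch
import torch.nn as nn

class GraphVAEEncoder(nn.Module):
    def __init__(self, n_genes=2000, hidden=256, latent=32):
        super().__init__()
        self.gcn1 = nn.Linear(n_genes, hidden)
        self.gcn2_mu = nn.Linear(hidden, latent)
        self.gcn2_logvar = nn.Linear(hidden, latent)

    def forward(self, x, adj_norm):
        # A graph convolution is a normalized-adjacency aggregation followed by
        # a linear transform: H = relu(A_norm @ X @ W).
        h = torch.relu(adj_norm @ self.gcn1(x))
        mu = adj_norm @ self.gcn2_mu(h)
        logvar = adj_norm @ self.gcn2_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

# Dummy spatial transcriptomics data: 100 spots x 2000 genes and a sparse adjacency.
x = torch.randn(100, 2000)
adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.T + torch.eye(100)) > 0).float()          # symmetrize, add self-loops
deg = adj.sum(1)
adj_norm = adj / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)  # D^-1/2 A D^-1/2

z, mu, logvar = GraphVAEEncoder()(x, adj_norm)
print(z.shape)                                               # torch.Size([100, 32])
```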

PMID:39488267 | DOI:10.1016/j.lfs.2024.123208

Categories: Literature Watch

Automatic detection of temporomandibular joint osteoarthritis radiographic features using deep learning artificial intelligence: a diagnostic accuracy study

Sat, 2024-11-02 06:00

J Stomatol Oral Maxillofac Surg. 2024 Oct 31:102124. doi: 10.1016/j.jormas.2024.102124. Online ahead of print.

ABSTRACT

OBJECTIVE: The purpose of this study was to investigate the diagnostic performance of a neural network Artificial Intelligence model for the radiographic confirmation of Temporomandibular Joint Osteoarthritis in reference to an experienced radiologist.

MATERIALS AND METHODS: The diagnostic performance of an AI model in identifying radiographic features in patients with TMJ-OA was evaluated in a diagnostic accuracy cohort study. Adult patients selected for radiographic examination according to the Diagnostic Criteria for Temporomandibular Disorders decision tree were included. Cone-Beam Computed Tomography images were evaluated by a YOLO object-detection deep learning model. The diagnostic performance was verified against the examiner's radiographic evaluation.

RESULTS: The differences between the AI model and the examiner were statistically non-significant, except for the subcortical cyst (P = 0.049). The AI model showed substantial to near-perfect levels of agreement with the examiner's data. For each radiographic phenotype, the AI model reported favorable sensitivity, specificity, and accuracy, and highly statistically significant Receiver Operating Characteristic (ROC) analyses (p < 0.001). The Area Under the Curve ranged from 0.872, for surface erosion, to 0.911, for the subcortical cyst.

CONCLUSION: The AI object-detection model could open the horizon for a valid, automated, and convenient modality for TMJ-OA radiographic confirmation and radiomic feature identification with significant diagnostic power.
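
As an illustration of how such diagnostic-accuracy figures are typically computed, the snippet below derives sensitivity, specificity, accuracy, and AUC from dummy per-feature detections against reference labels; it does not reproduce the YOLO model or the study data.

```python
# Hypothetical sketch of the diagnostic-accuracy evaluation: metrics per
# radiographic feature, model confidence vs. examiner reference labels.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                        # examiner: feature present/absent
y_score = np.clip(0.35 * y_true + rng.random(200) * 0.6, 0, 1)  # dummy model confidence
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
auc = roc_auc_score(y_true, y_score)
print(f"Se={sensitivity:.3f}  Sp={specificity:.3f}  Acc={accuracy:.3f}  AUC={auc:.3f}")
```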

PMID:39488247 | DOI:10.1016/j.jormas.2024.102124

Categories: Literature Watch

Zero-shot counting with a dual-stream neural network model

Sat, 2024-11-02 06:00

Neuron. 2024 Oct 29:S0896-6273(24)00729-3. doi: 10.1016/j.neuron.2024.10.008. Online ahead of print.

ABSTRACT

To understand a visual scene, observers need to both recognize objects and encode relational structure. For example, a scene comprising three apples requires the observer to encode concepts of "apple" and "three." In the primate brain, these functions rely on dual (ventral and dorsal) processing streams. Object recognition in primates has been successfully modeled with deep neural networks, but how scene structure (including numerosity) is encoded remains poorly understood. Here, we built a deep learning model, based on the dual-stream architecture of the primate brain, which is able to count items "zero-shot," even if the objects themselves are unfamiliar. Our dual-stream network forms spatial response fields and lognormal number codes that resemble those observed in the macaque posterior parietal cortex. The dual-stream network also makes successful predictions about human counting behavior. Our results provide evidence for an enactive theory of the role of the posterior parietal cortex in visual scene understanding.

PMID:39488209 | DOI:10.1016/j.neuron.2024.10.008

Categories: Literature Watch

AI-empowered perturbation proteomics for complex biological systems

Sat, 2024-11-02 06:00

Cell Genom. 2024 Oct 24:100691. doi: 10.1016/j.xgen.2024.100691. Online ahead of print.

ABSTRACT

The insufficient availability of comprehensive protein-level perturbation data is impeding the widespread adoption of systems biology. In this perspective, we introduce the rationale, essentiality, and practicality of perturbation proteomics. Biological systems are perturbed with diverse biological, chemical, and/or physical factors, followed by proteomic measurements at various levels, including changes in protein expression and turnover, post-translational modifications, protein interactions, transport, and localization, along with phenotypic data. Computational models, employing traditional machine learning or deep learning, identify or predict perturbation responses, mechanisms of action, and protein functions, aiding in therapy selection, compound design, and efficient experiment design. We propose to outline a generic PMMP (perturbation, measurement, modeling to prediction) pipeline and build foundation models or other suitable mathematical models based on large-scale perturbation proteomic data. Finally, we contrast modeling between artificially and naturally perturbed systems and highlight the importance of perturbation proteomics for advancing our understanding and predictive modeling of biological systems.

PMID:39488205 | DOI:10.1016/j.xgen.2024.100691

Categories: Literature Watch

GeoNet enables the accurate prediction of protein-ligand binding sites through interpretable geometric deep learning

Sat, 2024-11-02 06:00

Structure. 2024 Oct 23:S0969-2126(24)00446-5. doi: 10.1016/j.str.2024.10.011. Online ahead of print.

ABSTRACT

The identification of protein binding residues is essential for understanding their functions in vivo. However, accurately identifying binding sites remains a computational challenge due to the lack of known residue binding patterns. Local residue spatial distribution and its interactive biophysical environment both determine binding patterns. Previous methods could not capture both types of information simultaneously, resulting in unsatisfactory performance. Here, we present GeoNet, an interpretable geometric deep learning model for predicting DNA, RNA, and protein binding sites by learning the latent residue binding patterns. GeoNet achieves this by introducing a coordinate-free geometric representation to characterize local residue distributions and generating an eigenspace to depict local interactive biophysical environments. Evaluations show that GeoNet is superior to other leading predictors and offers strong interpretability of the learned representations. We present three test cases in which interaction interfaces were successfully identified with GeoNet.

PMID:39488202 | DOI:10.1016/j.str.2024.10.011

Categories: Literature Watch

An objective comparison of methods for augmented reality in laparoscopic liver resection by preoperative-to-intraoperative image fusion from the MICCAI2022 challenge

Sat, 2024-11-02 06:00

Med Image Anal. 2024 Oct 22;99:103371. doi: 10.1016/j.media.2024.103371. Online ahead of print.

ABSTRACT

Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them on top of a laparoscopic image. Preoperative 3D models extracted from Computed Tomography (CT) or Magnetic Resonance (MR) imaging data are registered to the intraoperative laparoscopic images during this process. Regarding 3D-2D fusion, most algorithms use anatomical landmarks to guide registration, such as the liver's inferior ridge, the falciform ligament, and the occluding contours. These are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and prone to error. Therefore, there is a need to automate this process so that augmented reality can be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigates the possibilities of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: (1) a 2D and 3D landmark segmentation task and (2) a 3D-2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated in the challenge, and their results were assessed for each task independently. All the teams proposed deep learning-based methods for the 2D and 3D landmark segmentation tasks and differentiable rendering-based methods for the registration task. The proposed methods were evaluated on 16 test images and 2 preoperative 3D models from 2 patients. In Task 1, the teams were able to segment most of the 2D landmarks, while the 3D landmarks proved more challenging to segment. In Task 2, only one team obtained acceptable qualitative and quantitative registration results. Based on the experimental outcomes, we propose three key hypotheses that determine current limitations and future directions for research in this domain.

PMID:39488186 | DOI:10.1016/j.media.2024.103371

Categories: Literature Watch

High spatiotemporal resolution estimation and analysis of global surface CO concentrations using a deep learning model

Sat, 2024-11-02 06:00

J Environ Manage. 2024 Nov 1;371:123096. doi: 10.1016/j.jenvman.2024.123096. Online ahead of print.

ABSTRACT

Ambient carbon monoxide (CO) is a primary air pollutant that poses significant health risks and contributes to the formation of secondary atmospheric pollutants, such as ozone (O3). This study aims to elucidate global CO pollution in relation to health risks and the influence of natural events like wildfires. Utilizing artificial intelligence (AI) big data techniques, we developed a high-performance Convolutional Neural Network (CNN)-based Residual Network (ResNet) model to estimate daily global CO concentrations at a high spatial resolution of 0.07° from June 2018 to May 2021. Our model integrated the global TROPOMI Total Column of atmospheric CO (TCCO) product and reanalysis datasets, achieving desirable estimation accuracies with R-values (correlation coefficients) of 0.90 and 0.96 for daily and monthly predictions, respectively. The analysis reveals that CO concentrations were relatively high in northern and central China, as well as northern India, particularly during winter months. Given the significant role of wildfires in increasing surface CO levels, we examined their impact in the Indochina Peninsula, the Amazon Rain Forest, and Central Africa. Our results show increases of 60.0%, 28.7%, and 40.8% in CO concentrations for these regions during wildfire seasons, respectively. Additionally, we estimated short-term mortality cases related to CO exposure in 17 countries for 2019, with China having the highest number of cases at 23,400 (95% confidence interval: 0-99,500). Our findings highlight the critical need for ongoing monitoring of CO levels and their health implications. The daily surface CO concentration dataset is publicly available at https://doi.org/10.5281/zenodo.11806178 and can support future relevant sustainability studies.

PMID:39488180 | DOI:10.1016/j.jenvman.2024.123096

Categories: Literature Watch

Noise-resistant sharpness-aware minimization in deep learning

Sat, 2024-11-02 06:00

Neural Netw. 2024 Oct 24;181:106829. doi: 10.1016/j.neunet.2024.106829. Online ahead of print.

ABSTRACT

Sharpness-aware minimization (SAM) aims to enhance model generalization by minimizing the sharpness of the loss function landscape, leading to robust model performance. To protect sensitive information and enhance privacy, prevailing approaches add noise to models. However, additive noise inevitably degrades the generalization and robustness of the model. In this paper, we propose a noise-resistant SAM method based on a noise-resistant parameter update rule. We analyze the convergence and noise-resistance properties of the proposed method under noisy conditions. We present experimental results with several networks on various benchmark datasets to demonstrate the advantages of the proposed method with respect to model generalization and privacy protection.
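
For context, the snippet below sketches the baseline SAM two-step update (ascend along the normalized gradient to the sharpest nearby point, then descend from the perturbed weights); the paper's noise-resistant update rule is not reproduced here, and the toy model and hyper-parameters are assumptions.

```python
# Minimal sketch of a standard SAM optimization step (not the noise-resistant variant).
import torch

def sam_step(model, loss_fn, data, target, base_opt, rho=0.05):
    model.zero_grad()
    # 1) gradient at the current weights
    loss_fn(model(data), target).backward()
    grads = [p.grad.detach().clone() if p.grad is not None else None
             for p in model.parameters()]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads if g is not None]))

    # 2) perturb weights towards the sharpest direction: w <- w + rho * g / ||g||
    with torch.no_grad():
        eps = []
        for p, g in zip(model.parameters(), grads):
            e = rho * g / (grad_norm + 1e-12) if g is not None else None
            if e is not None:
                p.add_(e)
            eps.append(e)

    # 3) gradient at the perturbed weights, then restore and take the real step
    model.zero_grad()
    loss_fn(model(data), target).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    model.zero_grad()

# Usage on a toy regression model:
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 10), torch.randn(32, 1)
sam_step(model, torch.nn.functional.mse_loss, x, y, opt)
```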

PMID:39488109 | DOI:10.1016/j.neunet.2024.106829

Categories: Literature Watch

CNN-Informer: A hybrid deep learning model for seizure detection on long-term EEG

Sat, 2024-11-02 06:00

Neural Netw. 2024 Oct 28;181:106855. doi: 10.1016/j.neunet.2024.106855. Online ahead of print.

ABSTRACT

Timely detection of epileptic seizures can significantly reduce accidental injuries of epilepsy patients and offer a novel intervention approach to improve their quality of life. Investigations of seizure detection based on deep learning models have achieved great success. However, challenging issues remain, such as the high computational complexity of the models and overfitting caused by the scarce availability of ictal electroencephalogram (EEG) signals for training. Therefore, we propose a novel end-to-end automatic seizure detection model named CNN-Informer, which leverages the capability of the Convolutional Neural Network (CNN) to extract local features of multi-channel EEGs and the low computational complexity and memory usage of the Informer to capture long-range dependencies. In view of the various artifacts present in long-term EEGs, we filter the raw EEGs using the Discrete Wavelet Transform (DWT) before feeding them into the proposed CNN-Informer model for feature extraction and classification. Post-processing operations are further employed to obtain the final detection results. Our method is extensively evaluated on the CHB-MIT dataset and the SH-SDU dataset with both segment-based and event-based criteria. The experimental outcomes demonstrate the superiority of the proposed CNN-Informer model and its strong generalization ability across the two EEG datasets. In addition, the lightweight architecture of CNN-Informer makes it suitable for real-time implementation.
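
A minimal sketch of a DWT-based filtering step of the kind described above, using PyWavelets on a toy EEG channel; the wavelet family, decomposition level, and the choice of sub-bands to discard are assumptions rather than the authors' settings.

```python
# Hypothetical DWT preprocessing sketch: decompose a raw EEG channel and
# reconstruct it without the coefficients most affected by artifacts.
import numpy as np
import pywt

def dwt_filter(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])      # drop the approximation (slow drift)
    coeffs[-1] = np.zeros_like(coeffs[-1])    # drop the finest detail (high-freq noise)
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

fs = 256                                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(len(t))  # toy EEG channel
clean = dwt_filter(raw)
print(raw.shape, clean.shape)
```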

PMID:39488107 | DOI:10.1016/j.neunet.2024.106855

Categories: Literature Watch

Coinciding Diabetic Retinopathy and Diabetic Macular Edema Grading With Rat Swarm Optimization Algorithm for Enhanced Capsule Generation Adversarial Network

Sat, 2024-11-02 06:00

Microsc Res Tech. 2024 Nov 2. doi: 10.1002/jemt.24709. Online ahead of print.

ABSTRACT

In the worldwide working-age population, visual disability and blindness are common conditions caused by diabetic retinopathy (DR) and diabetic macular edema (DME). Nowadays, due to diabetes, many people are affected by eye-related issues. Among these, DR and DME are the two foremost eye diseases, the severity of which may lead to eye-related problems and blindness. Early detection of DR and DME is essential to preventing vision loss. Therefore, an enhanced capsule generation adversarial network (ECGAN) optimized with the rat swarm optimization (RSO) approach is proposed in this article for the combined grading of DR and DME (DR-DME-ECGAN-RSO-ISBI 2018 IDRiD). The input images are obtained from the ISBI 2018 unbalanced DR grading data set. The input fundus images are preprocessed using the Savitzky-Golay (SG) filtering technique, which reduces noise in the input image. The preprocessed image is fed to the discrete shearlet transform (DST) for feature extraction. The extracted DR-DME features are given to the ECGAN-RSO algorithm to categorize the grading of DR and DME disorders. The proposed approach is implemented in Python and achieves better accuracy by 7.94%, 36.66%, and 4.88% compared to existing models, such as combined DR-DME grading with a cross-disease attention network (DR-DME-CANet-ISBI 2018 IDRiD), a category attention block for unbalanced grading of DR (DR-DME-HDLCNN-MGMO-ISBI 2018 IDRiD), and combined DR-DME classification with a deep learning convolutional neural network-based modified gray-wolf optimizer with variable weights (DR-DME-ANN-ISBI 2018 IDRiD).
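
A small sketch of Savitzky-Golay preprocessing applied row-wise to a stand-in fundus image with SciPy; the window length, polynomial order, and filtering axis are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical SG-filter preprocessing sketch for fundus image denoising.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
fundus = rng.random((512, 512)).astype(np.float32)       # stand-in fundus image channel

# Smooth along image rows (axis=1); window must be odd and larger than polyorder.
denoised = savgol_filter(fundus, window_length=11, polyorder=3, axis=1)
print(denoised.shape)                                     # (512, 512)
```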

PMID:39487733 | DOI:10.1002/jemt.24709

Categories: Literature Watch

Efficient brain tumor grade classification using ensemble deep learning models

Sat, 2024-11-02 06:00

BMC Med Imaging. 2024 Nov 1;24(1):297. doi: 10.1186/s12880-024-01476-1.

ABSTRACT

Detecting brain tumors early is critical for effective treatment and life-saving efforts. The analysis of the brain with MRI scans is fundamental to diagnosis because it provides detailed structural views of the brain, which is vital in identifying abnormalities. The alternative, an invasive biopsy, is painful and uncomfortable, whereas MRI involves no surgically invasive procedures or equipment. This helps patients feel more at ease and hastens the diagnostic procedure, allowing physicians to formulate and act on treatment plans more quickly. Locating a brain tumor manually is very difficult because MRI scans produce large numbers of three-dimensional images. Computerized diagnostics based on machine learning techniques and algorithms therefore offer great potential for identifying regions of interest earlier. The aim of the present study was to develop a deep learning model that classifies brain tumor grade images (BTGC) and thereby enhances accuracy in diagnosing patients with different grades of brain tumors using MRI. A MobileNetV2 model was used to extract the features from the images, which further increases the efficiency and generalizability of the model. In this study, six standard Kaggle brain tumor MRI datasets were used to train, validate, and test the developed brain tumor detection and classification model. This work consists of two key components: (i) brain tumor detection and (ii) classification of the tumor. The tumor classification is conducted with both three classes (meningioma, pituitary, and glioma) and two classes (malignant, benign). The model has been reported to detect brain tumors with 99.85% accuracy, to distinguish benign from malignant tumors with 99.87% accuracy, and to type meningioma, pituitary, and glioma tumors with 99.38% accuracy. The results of this study indicate that the described technique is useful for the detection and classification of brain tumors.
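
A minimal transfer-learning sketch in the spirit of the described pipeline: a pretrained MobileNetV2 backbone with a new classification head for three tumor classes; the class count, input size, and training settings are assumptions, not the study's configuration.

```python
# Hypothetical MobileNetV2 transfer-learning sketch for brain tumor classification.
import torch
import torch.nn as nn
import torchvision.models as models

n_classes = 3                                   # e.g. meningioma, pituitary, glioma
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():           # freeze the feature extractor
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, n_classes)  # new classification head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy MRI slices resized to 224 x 224.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, n_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```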

PMID:39487431 | DOI:10.1186/s12880-024-01476-1

Categories: Literature Watch

Segmentation of periapical lesions with automatic deep learning on panoramic radiographs: an artificial intelligence study

Sat, 2024-11-02 06:00

BMC Oral Health. 2024 Nov 1;24(1):1332. doi: 10.1186/s12903-024-05126-4.

ABSTRACT

Periapical periodontitis may manifest radiographically as a periapical lesion. Periapical lesions are amongst the most common dental pathologies that present as periapical radiolucencies on panoramic radiographs. The objective of this research is to assess the diagnostic accuracy of an artificial intelligence (AI) model based on the U²-Net architecture in the detection of periapical lesions on dental panoramic radiographs and to determine whether it can be useful in aiding clinicians with the diagnosis of periapical lesions and improving their clinical workflow. 400 panoramic radiographs that included at least one periapical radiolucency were selected retrospectively. 780 periapical radiolucencies in these anonymized radiographs were manually labeled by two independent examiners. These radiographs were then used to train an AI model based on the U²-Net architecture with a deep supervision algorithm. The model achieved a Dice score of 0.8 on the validation set and precision, recall, and F1-score of 0.82, 0.77, and 0.8, respectively, on the test set. This study has shown that an AI model based on the U²-Net architecture can accurately diagnose periapical lesions on panoramic radiographs. The research provides evidence that AI-based models have promising applications as adjunct tools for dentists in diagnosing periapical radiolucencies and in procedure planning. Further studies with larger data sets would be required to improve the diagnostic accuracy of AI-based detection models.
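
For reference, the snippet below computes pixel-wise Dice, precision, recall, and F1 between a predicted lesion mask and a manual reference mask on dummy data, mirroring the metrics reported above; the masks and sizes are illustrative.

```python
# Hypothetical sketch of the segmentation metrics used in the evaluation.
import numpy as np

def segmentation_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    dice = 2 * tp / (2 * tp + fp + fn + 1e-9)     # equals the pixel-wise F1-score
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return dice, precision, recall, f1

rng = np.random.default_rng(0)
gt = rng.random((256, 256)) < 0.05                 # dummy reference lesion mask
pred = np.logical_or(gt, rng.random((256, 256)) < 0.01)  # slightly over-segmented prediction
print(segmentation_metrics(pred, gt))
```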

PMID:39487404 | DOI:10.1186/s12903-024-05126-4

Categories: Literature Watch

Improving crop production using an agro-deep learning framework in precision agriculture

Sat, 2024-11-02 06:00

BMC Bioinformatics. 2024 Nov 1;25(1):341. doi: 10.1186/s12859-024-05970-9.

ABSTRACT

BACKGROUND: The study focuses on enhancing the effectiveness of precision agriculture through the application of deep learning technologies. Precision agriculture, which aims to optimize farming practices by monitoring and adjusting various factors influencing crop growth, can greatly benefit from artificial intelligence (AI) methods like deep learning. The Agro Deep Learning Framework (ADLF) was developed to tackle critical issues in crop cultivation by processing vast datasets. These datasets include variables such as soil moisture, temperature, and humidity, all of which are essential to understanding and predicting crop behavior. By leveraging deep learning models, the framework seeks to improve decision-making processes, detect potential crop problems early, and boost agricultural productivity.

RESULTS: The study found that the Agro Deep Learning Framework (ADLF) achieved an accuracy of 85.41%, precision of 84.87%, recall of 84.24%, and an F1-Score of 88.91%, indicating strong predictive capabilities for improving crop management. The false negative rate was 91.17% and the false positive rate was 89.82%, highlighting the framework's ability to correctly detect issues while minimizing errors. These results suggest that ADLF can significantly enhance decision-making in precision agriculture, leading to improved crop yield and reduced agricultural losses.

CONCLUSIONS: The ADLF can significantly improve precision agriculture by leveraging deep learning to process complex datasets and provide valuable insights into crop management. The framework allows farmers to detect issues early, optimize resource use, and improve yields. The study demonstrates that AI-driven agriculture has the potential to revolutionize farming, making it more efficient and sustainable. Future research could focus on further refining the model and exploring its applicability across different types of crops and farming environments.

PMID:39487390 | DOI:10.1186/s12859-024-05970-9

Categories: Literature Watch

Multi-level physics informed deep learning for solving partial differential equations in computational structural mechanics

Sat, 2024-11-02 06:00

Commun Eng. 2024 Nov 1;3(1):151. doi: 10.1038/s44172-024-00303-3.

ABSTRACT

Physics-informed neural networks have emerged as a promising approach for solving partial differential equations. However, the computation of structural mechanics problems remains challenging because the governing equations are fourth-order nonlinear partial differential equations. Here we develop a multi-level physics-informed neural network framework in which an aggregation model is built by combining multiple neural networks, each involving only first-order or second-order partial differential equations that represent different physics information, such as the geometrical, constitutive, and equilibrium relations of the structure. The proposed framework demonstrates a remarkable advancement over classical neural networks in terms of accuracy and computation time. The proposed method holds the potential to become a promising paradigm for structural mechanics computation and to facilitate the intelligent computation of digital twin systems.
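
As a rough illustration of the multi-level idea (not the authors' implementation), the sketch below splits the fourth-order Euler-Bernoulli beam equation EI*w'''' = q into two second-order relations, M = EI*w'' and M'' = q, each enforced on its own small network; the load, boundary conditions, and network sizes are assumptions.

```python
# Hypothetical multi-level PINN sketch for a simply supported beam under uniform load.
import torch
import torch.nn as nn

EI, q, L = 1.0, 1.0, 1.0                         # stiffness, uniform load, beam length

def mlp():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                         nn.Linear(32, 1))

w_net, m_net = mlp(), mlp()                      # deflection w(x) and bending moment M(x)
opt = torch.optim.Adam(list(w_net.parameters()) + list(m_net.parameters()), lr=1e-3)

def d2(y, x):                                    # second derivative via autograd
    dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    return torch.autograd.grad(dy, x, torch.ones_like(dy), create_graph=True)[0]

x = torch.linspace(0, L, 64).reshape(-1, 1).requires_grad_(True)   # collocation points
xb = torch.tensor([[0.0], [L]])                  # simply supported ends: w = 0, M = 0

for step in range(2000):
    opt.zero_grad()
    w, m = w_net(x), m_net(x)
    loss = ((m - EI * d2(w, x)) ** 2).mean()      # constitutive relation  M = EI w''
    loss = loss + ((d2(m, x) - q) ** 2).mean()    # equilibrium relation   M'' = q
    loss = loss + (w_net(xb) ** 2).mean() + (m_net(xb) ** 2).mean()  # boundary terms
    loss.backward()
    opt.step()

print(float(loss))                                # residual after training
```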

PMID:39487342 | DOI:10.1038/s44172-024-00303-3

Categories: Literature Watch

Explainable machine learning by SEE-Net: closing the gap between interpretable models and DNNs

Sat, 2024-11-02 06:00

Sci Rep. 2024 Nov 1;14(1):26302. doi: 10.1038/s41598-024-77507-2.

ABSTRACT

Deep Neural Networks (DNNs) have achieved remarkable accuracy for numerous applications, yet their complexity often renders the explanation of predictions a challenging task. This complexity contrasts with easily interpretable statistical models, which, however, often suffer from lower accuracy. Our work suggests that this underperformance may stem more from inadequate training methods than from the inherent limitations of model structures. We hereby introduce the Synced Explanation-Enhanced Neural Network (SEE-Net), a novel architecture integrating a guiding DNN with a shallow neural network, functionally equivalent to a two-layer mixture of linear models. This shallow network is trained under the guidance of the DNN, effectively bridging the gap between the prediction power of deep learning and the need for explainable models. Experiments on image and tabular data demonstrate that SEE-Net can leverage the advantage of DNNs while providing an interpretable prediction framework. Critically, SEE-Net embodies a new paradigm in machine learning: it achieves high-level explainability with minimal compromise on prediction accuracy by training an almost "white-box" model under the co-supervision of a "black-box" model, which can be tailored for diverse applications.

PMID:39487274 | DOI:10.1038/s41598-024-77507-2

Categories: Literature Watch

In vivo assessment of cone loss and macular perfusion in children with myopia

Sat, 2024-11-02 06:00

Sci Rep. 2024 Nov 2;14(1):26373. doi: 10.1038/s41598-024-78280-y.

ABSTRACT

This study evaluated cone density (CD) in the macular region and assessed macular perfusion in children with varying degrees of myopia. This was a prospective, cross-sectional, observational study. Children underwent confocal scanning laser ophthalmoscopy (cSLO), optical coherence tomography (OCT), and OCT angiography (OCTA) imaging. Built-in software was used to measure mean CD (cells/mm2), retinal vessel density, choriocapillaris perfusion area, and choroidal thickness (CT). The study included 140 eyes from children categorized into four groups: emmetropia (31 eyes), low myopia (44 eyes), moderate myopia (31 eyes), and high myopia (34 eyes). The high myopia group exhibited significantly lower macular CD than the emmetropia group (P < 0.05). Additionally, the high myopia group showed thinner CT and a higher choriocapillaris perfusion area in the macular region than the emmetropia group (all P < 0.01). Macular CD was significantly correlated with age, spherical equivalent, axial length, and CT (all P < 0.05). Generalized linear models revealed CT as the independent factor associated with macular CD (Wald χ2 = 9.265, P = 0.002). Children with high myopia demonstrate reduced CD in the macular region, accompanied by reduced CT. These findings may have important implications for future myopia prevention and management strategies.

PMID:39487258 | DOI:10.1038/s41598-024-78280-y

Categories: Literature Watch
