Deep learning

Human Claustrum Connections: Robust In Vivo Detection by DWI-Based Tractography in Two Large Samples

Sun, 2024-10-13 06:00

Hum Brain Mapp. 2024 Oct;45(14):e70042. doi: 10.1002/hbm.70042.

ABSTRACT

Despite substantial neuroscience research in the last decade revealing the claustrum's prominent role in mammalian forebrain organization, as evidenced by its extraordinarily widespread connectivity pattern, claustrum studies in humans are rare. This is particularly true for studies focusing on claustrum connections. Two primary reasons may account for this situation: First, the intricate anatomy of the human claustrum located between the external and extreme capsule hinders straightforward and reliable structural delineation. In addition, the few studies that used diffusion-weighted-imaging (DWI)-based tractography could not clarify whether in vivo tractography consistently and reliably identifies claustrum connections in humans across different subjects, cohorts, imaging methods, and connectivity metrics. To address these issues, we combined a recently developed deep-learning-based claustrum segmentation tool with DWI-based tractography in two large adult cohorts: 81 healthy young adults from the Human Connectome Project and 81 further healthy young participants from the Bavarian Longitudinal Study. Tracts between the claustrum and 13 cortical and 9 subcortical regions were reconstructed in each subject using probabilistic tractography. Probabilistic group average maps and different connectivity metrics were generated to assess the claustrum's connectivity profile as well as the consistency and replicability of tractography. We found, across individuals, cohorts, DWI protocols, and measures, consistent and replicable cortical and subcortical ipsi- and contralateral claustrum connections. This result demonstrates robust in vivo tractography of claustrum connections in humans, providing a basis for further examinations of claustrum connectivity in health and disease.
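
Probabilistic group average maps of this kind reduce, per voxel, to the fraction of subjects whose reconstructed tract visits that voxel. A minimal sketch using nibabel, with hypothetical per-subject tract-map file names (the paper's actual tractography pipeline is not reproduced here):

```python
# Illustrative sketch: voxelwise group-average map from per-subject
# tract visitation maps. File names are hypothetical placeholders.
import numpy as np
import nibabel as nib

subject_files = [f"sub-{i:02d}_claustrum_tract.nii.gz" for i in range(1, 82)]

group_sum, affine = None, None
for path in subject_files:
    img = nib.load(path)
    # Binarize each subject's visitation map at a small threshold so the
    # group map reflects the proportion of subjects in which a voxel is hit.
    mask = (img.get_fdata() > 0.01).astype(np.float32)
    if group_sum is None:
        group_sum, affine = np.zeros_like(mask), img.affine
    group_sum += mask

# Voxel value = fraction of subjects whose reconstructed tract passes through.
group_prob = group_sum / len(subject_files)
nib.save(nib.Nifti1Image(group_prob, affine), "group_average_tract_map.nii.gz")
```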

PMID:39397271 | DOI:10.1002/hbm.70042

Categories: Literature Watch

Non-resonant background removal in broadband CARS microscopy using deep-learning algorithms

Sun, 2024-10-13 06:00

Sci Rep. 2024 Oct 13;14(1):23903. doi: 10.1038/s41598-024-74912-5.

ABSTRACT

Broadband coherent anti-Stokes Raman scattering (BCARS) microscopy is an imaging technique that can acquire full Raman spectra (400-3200 cm-1) of biological samples within a few milliseconds. However, the CARS signal suffers from an undesired non-resonant background (NRB), deriving from four-wave-mixing processes, which distorts the peak line shapes and reduces the chemical contrast. Traditionally, the NRB is removed using numerical algorithms that require expert users and knowledge of the NRB spectral profile. Recently, deep-learning models have proved to be powerful tools for unsupervised automation and acceleration of NRB removal. Here, we thoroughly review the existing NRB-removal deep-learning models (SpecNet, VECTOR, LSTM, Bi-LSTM) and present two novel architectures. The first combines convolutional layers with Gated Recurrent Units (CNN + GRU); the second is a Generative Adversarial Network (GAN) that trains an encoder-decoder network against an adversarial convolutional neural network. We also introduce an improved training dataset, generalized over different BCARS experimental configurations. We compare the performance of all these networks on test and experimental data, using them in the pipeline for spectral unmixing of BCARS images. Our analyses show that CNN + GRU and VECTOR are the networks giving the highest accuracy, GAN predicts the highest number of true-positive peaks in experimental data, whereas GAN and VECTOR are the most suitable for real-time processing of BCARS images.
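
As an illustration of the first proposed architecture, here is a minimal PyTorch sketch of a CNN + GRU spectral regressor; the layer sizes and the pointwise-regression head are assumptions for illustration, not the paper's exact configuration:

```python
# Convolutional layers extract local line-shape features, a GRU models
# long-range spectral context, and a linear head predicts the NRB-free
# Raman spectrum pointwise.
import torch
import torch.nn as nn

class CNNGRUDenoiser(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.gru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                    # x: (batch, spectral_points)
        h = self.conv(x.unsqueeze(1))        # (batch, 32, points)
        h, _ = self.gru(h.transpose(1, 2))   # (batch, points, 2*hidden)
        return self.head(h).squeeze(-1)      # (batch, points)

model = CNNGRUDenoiser()
raw = torch.randn(4, 1000)                   # four simulated BCARS spectra
clean = model(raw)                           # predicted NRB-free spectra
```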

PMID:39397092 | DOI:10.1038/s41598-024-74912-5

Categories: Literature Watch

Towards a general computed tomography image segmentation model for anatomical structures and lesions

Sun, 2024-10-13 06:00

Commun Eng. 2024 Oct 14;3(1):143. doi: 10.1038/s44172-024-00287-0.

ABSTRACT

Numerous deep-learning models have been developed using task-specific data, but they ignore the inherent connections among different tasks. By jointly learning a wide range of segmentation tasks, we show that a general medical image segmentation model can improve segmentation performance for computed tomography (CT) volumes. The proposed general CT image segmentation (gCIS) model utilizes a common transformer-based encoder for all tasks and incorporates automatic pathway modules for task-prompt-based decoding. It is trained on one of the largest datasets to date, comprising 36,419 CT scans and 83 tasks. gCIS can automatically perform various segmentation tasks via the automatic pathway modules of its decoding networks, driven by text prompt inputs, achieving an average Dice coefficient of 82.84%. Furthermore, the proposed automatic pathway routing mechanism allows network parameters to be pruned during deployment, and gCIS can be quickly adapted to unseen tasks with minimal training samples while maintaining strong performance.
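
A hedged sketch of what task-prompt-based decoding pathway selection can look like; the module names, the convolutional stand-in for the transformer encoder, and the per-task decoder dictionary are all invented for illustration:

```python
# Prompt-based routing: a shared encoder feeds one of several decoding
# pathways selected by a task prompt string.
import torch
import torch.nn as nn

class PromptRoutedSegmenter(nn.Module):
    def __init__(self, tasks):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
        # One lightweight decoding pathway per task prompt.
        self.decoders = nn.ModuleDict(
            {t: nn.Conv3d(16, 1, 3, padding=1) for t in tasks}
        )

    def forward(self, volume, prompt):
        feats = self.encoder(volume)          # shared features for all tasks
        return torch.sigmoid(self.decoders[prompt](feats))

net = PromptRoutedSegmenter(["liver", "lung_lesion"])
ct = torch.randn(1, 1, 32, 64, 64)            # toy CT sub-volume
mask = net(ct, "liver")                       # pathway chosen by the prompt
```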

PMID:39397081 | DOI:10.1038/s44172-024-00287-0

Categories: Literature Watch

Machine learning-aided hybrid technique for dynamics of rail transit stations classification: a case study

Sun, 2024-10-13 06:00

Sci Rep. 2024 Oct 13;14(1):23929. doi: 10.1038/s41598-024-75541-8.

ABSTRACT

Accurate classification of rail transit stations is crucial for successful Transit-Oriented Development (TOD) and sustainable urban growth. This paper introduces a novel classification model integrating traditional methodologies with advanced machine learning algorithms. By employing mathematical models, clustering methods, and neural network techniques, the model enhances the precision of station classification, allowing for a refined evaluation of station attributes. A comprehensive case study on the Chengdu rail transit network validates the model's efficacy, highlighting its value in optimizing TOD strategies and guiding decision-making processes for urban planners and policymakers. The study employs several regression models trained on existing data to generate accurate ridership forecasts, and data clustering using mathematical algorithms reveals distinct categories of stations. Evaluation metrics confirm the rationality and accuracy of the results. Additionally, a neural network achieving high accuracy on labeled data enhances the model's predictive capabilities for unlabeled instances. The research demonstrates high accuracy, with the Mean Squared Error (MSE) for regression models (Multiple Linear Regression (MLR), Deep-Learning Neural Network (DNN), and K-Nearest Neighbor (KNN)) remaining below 0.012, while the neural networks used for station classification achieve 100% accuracy across seven time intervals and 98.15% accuracy for the eighth, ensuring reliable ridership forecasts and classification outcomes. Accuracy in rail transit station classification is critical, as it not only strengthens the model's predictive capabilities but also ensures more reliable data-driven decisions for transit planning and development, allowing for more precise ridership forecasts and evidence-based strategies for optimizing TOD. This classification model provides stakeholders with valuable insights into the dynamics and features of rail transit stations, supporting sustainable urban development planning.
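
As a sketch of the regression comparison described above, the following uses scikit-learn stand-ins for the three model families (with MLPRegressor standing in for the DNN) on synthetic data; features, targets, and hyperparameters are placeholders for the Chengdu case-study data:

```python
# Compare MLR, DNN, and KNN regressors by test-set MSE on toy ridership data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                  # station attributes (toy)
y = X @ rng.random(6) + 0.05 * rng.standard_normal(500)   # ridership (toy)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "MLR": LinearRegression(),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, "MSE:", mean_squared_error(y_te, m.predict(X_te)))
```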

PMID:39397065 | DOI:10.1038/s41598-024-75541-8

Categories: Literature Watch

An effective method for anomaly detection in industrial Internet of Things using XGBoost and LSTM

Sun, 2024-10-13 06:00

Sci Rep. 2024 Oct 14;14(1):23969. doi: 10.1038/s41598-024-74822-6.

ABSTRACT

In recent years, with the application of the Internet of Things (IoT) and cloud technology in smart industrialization, the Industrial Internet of Things (IIoT) has become an emerging hot topic. The increasing amounts of data and numbers of devices in IIoT pose significant security challenges, making anomaly detection particularly important. Existing methods for anomaly detection in the IIoT often fall short when dealing with data imbalance, and the huge volume of IIoT data makes feature selection challenging and computationally intensive. In this paper, we propose an optimal deep learning model for anomaly detection in IIoT. First, by setting different thresholds of eXtreme Gradient Boosting (XGBoost) for feature selection, features with importance above the given threshold are retained, while those below it are discarded. Different thresholds yield different numbers of features. This approach not only retains effective features but also reduces the feature dimensionality, thereby decreasing the consumption of computational resources. Second, an optimized loss function is designed to study its impact on model performance in terms of handling imbalanced data, highly similar categories, and model training. We select the optimal threshold and loss function, which are part of our optimal model, by comparing metrics such as accuracy, precision, recall, False Alarm Rate (FAR), Area Under the Receiver Operating Characteristic Curve (AUC-ROC), and Area Under the Precision-Recall Curve (AUC-PR). Finally, combining the optimal threshold and loss function, we propose a model named MIX_LSTM for anomaly detection in IIoT. Experiments are conducted using the UNSW-NB15 and NSL-KDD datasets. The proposed MIX_LSTM model achieves a 0.084 FAR, 0.984 AUC-ROC, and 0.988 AUC-PR in the binary anomaly detection experiment on the UNSW-NB15 dataset, and a 0.028 FAR, 0.967 AUC-ROC, and 0.962 AUC-PR on the NSL-KDD dataset. Comparison of these evaluation metrics shows that the model performs well in detecting anomalous attacks in the IIoT relative to traditional deep learning models, machine learning models, and existing techniques.
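
The thresholded XGBoost feature-selection step might look like the following sketch, using the real xgboost feature_importances_ attribute but toy data and placeholder threshold values:

```python
# Fit XGBoost, keep only features whose importance meets a given threshold,
# and pass the reduced matrix to the downstream LSTM model.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 40))                   # 40 raw IIoT traffic features (toy)
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # toy binary anomaly label

xgb = XGBClassifier(n_estimators=100, eval_metric="logloss")
xgb.fit(X, y)

for threshold in (0.01, 0.02, 0.05):         # candidate thresholds to compare
    keep = xgb.feature_importances_ >= threshold
    X_reduced = X[:, keep]                   # input for the MIX_LSTM model
    print(f"threshold={threshold}: kept {keep.sum()} of {X.shape[1]} features")
```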

PMID:39397055 | DOI:10.1038/s41598-024-74822-6

Categories: Literature Watch

Enhancing societal security: a multimodal deep learning approach for a public person identification and tracking system

Sun, 2024-10-13 06:00

Sci Rep. 2024 Oct 14;14(1):23952. doi: 10.1038/s41598-024-74560-9.

ABSTRACT

In public spaces, threats to societal security are a major concern, and emerging technologies offer potential countermeasures. The proposed intelligent person identification system monitors and identifies individuals in public spaces using gait, face, and iris recognition. The system employs a multimodal approach for secure identification and utilises pretrained deep convolutional neural networks (DCNNs) to predict individuals. For increased accuracy, the proposed system is implemented on a cloud server and integrated with citizen identification systems such as Aadhaar/SSN. The performance of the system is measured by the accuracy achieved when identifying individuals in a public space. The proposed multimodal secure identification system achieves a 94% accuracy rate, higher than that of existing public-space person identification systems. Integration with citizen identification systems improves precision and enables immediate life-saving assistance for those in need. By utilising secure deep learning techniques for precise person identification, the proposed system offers a promising solution to security threats in public spaces. This research investigates the efficacy and potential applications of the proposed system, including accident identification, theft identification, and intruder identification in public spaces.

PMID:39397044 | DOI:10.1038/s41598-024-74560-9

Categories: Literature Watch

Deep learning of echocardiography distinguishes between presence and absence of late gadolinium enhancement on cardiac magnetic resonance in patients with hypertrophic cardiomyopathy

Sun, 2024-10-13 06:00

Echo Res Pract. 2024 Oct 14;11(1):23. doi: 10.1186/s44156-024-00059-8.

ABSTRACT

BACKGROUND: Hypertrophic cardiomyopathy (HCM) can cause myocardial fibrosis, which can be a substrate for fatal ventricular arrhythmias and subsequent sudden cardiac death. Although late gadolinium enhancement (LGE) on cardiac magnetic resonance (CMR) represents myocardial fibrosis and is associated with sudden cardiac death in patients with HCM, CMR is resource-intensive, can carry an economic burden, and is sometimes contraindicated. In this study for patients with HCM, we aimed to distinguish between patients with positive and negative LGE on CMR using deep learning of echocardiographic images.

METHODS: In this cross-sectional study of patients with HCM, we enrolled patients who underwent both echocardiography and CMR. The outcome was positive LGE on CMR. Among the 323 samples, we randomly selected 273 samples (training set) and trained a deep convolutional neural network (DCNN) on the echocardiographic 5-chamber view to discriminate positive LGE on CMR. We also developed a reference model using clinical parameters that differed significantly between patients with positive and negative LGE. In the remaining 50 samples (test set), we compared the area under the receiver-operating-characteristic curve (AUC) between a combined model, using the reference model plus the DCNN-derived probability, and the reference model alone.
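
A minimal sketch of the combined-model idea, assuming synthetic placeholder data: a logistic reference model on the seven clinical parameters, and a second model that appends the DCNN-derived probability as an extra feature before comparing AUCs:

```python
# Reference model on clinical features vs. combined model that adds the
# DCNN probability as one extra input. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
clinical = rng.random((273, 7))            # 7 clinical parameters (training set)
dcnn_prob = rng.random((273, 1))           # DCNN probability from echo images
lge = rng.integers(0, 2, 273)              # positive/negative LGE on CMR

reference = LogisticRegression().fit(clinical, lge)
combined = LogisticRegression().fit(np.hstack([clinical, dcnn_prob]), lge)

# On a held-out test set the two AUCs would be compared as in the paper.
test_clin, test_prob = rng.random((50, 7)), rng.random((50, 1))
test_lge = rng.integers(0, 2, 50)
print("reference AUC:", roc_auc_score(test_lge, reference.predict_proba(test_clin)[:, 1]))
print("combined AUC:", roc_auc_score(test_lge,
      combined.predict_proba(np.hstack([test_clin, test_prob]))[:, 1]))
```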

RESULTS: Among the 323 CMR studies, positive LGE was detected in 160 (50%). The reference model was constructed using the following 7 clinical parameters: family history of HCM, maximum left ventricular (LV) wall thickness, LV end-diastolic diameter, LV end-systolic volume, LV ejection fraction < 50%, left atrial diameter, and LV outflow tract pressure gradient at rest. The discriminant model combining the reference model with DCNN-derived probability significantly outperformed the reference model in the test set (AUC 0.86 [95% confidence interval 0.76-0.96] vs. 0.72 [0.57-0.86], P = 0.04). The sensitivity, specificity, positive predictive value, and negative predictive value of the combined model were 0.84, 0.76, 0.78, and 0.83, respectively.

CONCLUSION: Compared to the reference model solely based on clinical parameters, our new model integrating the reference model and deep learning-based analysis of echocardiographic images demonstrated superiority in distinguishing LGE on CMR in patients with HCM. The novel deep learning-based method can be used as an assistive technology to facilitate the decision-making process of performing CMR with gadolinium enhancement.

PMID:39396969 | DOI:10.1186/s44156-024-00059-8

Categories: Literature Watch

Exploring the Artificial Intelligence and Its Impact in Pharmaceutical Sciences: Insights Toward the Horizons Where Technology Meets Tradition

Sun, 2024-10-13 06:00

Chem Biol Drug Des. 2024 Oct;104(4):e14639. doi: 10.1111/cbdd.14639.

ABSTRACT

The technological revolutions in computing and the advancement of high-throughput screening technologies have driven the application of artificial intelligence (AI) for faster, more efficient discovery of drug molecules and more cost-effective identification of hit or lead molecules. The ability of software and network frameworks to interpret representations of molecular structures and establish relationships/correlations has enabled various research teams to develop numerous AI platforms for identifying new lead molecules or discovering new targets for already established drug molecules. The prediction of biological activity, ADME properties, and toxicity parameters at early stages has reduced the chances of failure and the associated costs in later clinical stages, failures that occur at a high rate in the tedious, expensive, and laborious drug discovery process. This review focuses on different AI and machine learning (ML) techniques and their applications, mainly in the pharmaceutical industry. The applications of AI frameworks in molecular target identification, hit identification/hit-to-lead optimization, analysis of drug-receptor interactions, drug repurposing, polypharmacology, synthetic accessibility, clinical trial design, and pharmaceutical development are discussed in detail. We have also compiled details of various startups applying AI in this field. This review provides a comprehensive analysis and outlines various state-of-the-art AI/ML techniques with their framework applications. It also highlights the challenges that need to be addressed for further success in pharmaceutical applications.

PMID:39396920 | DOI:10.1111/cbdd.14639

Categories: Literature Watch

AI-driven Interpretable Deep Learning-based Fetal Health Classification

Sun, 2024-10-13 06:00

SLAS Technol. 2024 Oct 11:100206. doi: 10.1016/j.slast.2024.100206. Online ahead of print.

ABSTRACT

In this study, a deep learning model is proposed for the classification of fetal health into three categories: normal, suspect, and pathological. The primary objective is to utilize the power of deep learning to improve the efficiency and effectiveness of diagnostic processes. A deep neural network (DNN) model is proposed for fetal health analysis using data obtained from cardiotocography (CTG). A dataset containing 21 attributes is used to carry out this work. The model incorporates multiple hidden layers, augmented with batch normalization and dropout layers for improved generalization. This study assesses the model's interpretability in fetal health classification using explainable deep learning, which enhances transparency in the classifier's decision-making by leveraging feature importance and feature saliency analysis, fostering trust and facilitating the clinical adoption of fetal health assessments. Our proposed model demonstrates remarkable performance, with 0.99 accuracy, 0.93 sensitivity, 0.93 specificity, 0.96 AUC, 0.93 precision, and a 0.93 F1 score in classifying fetal health. We also performed a comparative analysis with six other models (Logistic Regression, KNN, SVM, Naive Bayes, Random Forest, and Gradient Boosting) to assess the effectiveness of our model; these baselines achieved accuracies of 0.89, 0.88, 0.90, 0.81, 0.93, and 0.93, respectively. The results revealed that our proposed model outperformed all baseline models in terms of accuracy. This indicates the potential of deep learning to improve fetal health assessment and contribute to the field of obstetrics by providing a robust tool for early risk detection.
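
A minimal PyTorch sketch of the kind of architecture described (hidden layers with batch normalization and dropout, 21 CTG inputs, three classes); widths and dropout rate are illustrative assumptions:

```python
# MLP with batch normalization and dropout for 3-class fetal health output.
import torch
import torch.nn as nn

class FetalHealthNet(nn.Module):
    def __init__(self, n_features=21, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 32), nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(32, n_classes),   # logits for normal/suspect/pathological
        )

    def forward(self, x):
        return self.net(x)

model = FetalHealthNet()
logits = model(torch.randn(8, 21))      # batch of 8 CTG records
pred = logits.argmax(dim=1)             # predicted class per record
```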

PMID:39396731 | DOI:10.1016/j.slast.2024.100206

Categories: Literature Watch

Unipolar voltage electroanatomical mapping detects structural atrial remodeling identified by LGE-MRI

Sun, 2024-10-13 06:00

Heart Rhythm. 2024 Oct 11:S1547-5271(24)03430-1. doi: 10.1016/j.hrthm.2024.10.015. Online ahead of print.

ABSTRACT

BACKGROUND: In atrial fibrillation (AF) management, understanding left atrial (LA) substrate is crucial. While both electroanatomical mapping (EAM) and late gadolinium enhancement MRI (LGE-MRI) are accepted methods for assessing the atrial substrate and are associated with ablation outcome, recent findings have highlighted discrepancies between low voltage areas (LVAs) in EAM and LGE-areas.

OBJECTIVE: To explore the relationship between LGE regions and unipolar and bipolar LVAs using multipolar high-density (HD) mapping.

METHODS: Twenty patients scheduled for AF ablation underwent pre-ablation LGE-MRI. LA segmentation was conducted using a deep learning approach, which subsequently generated a 3D mesh integrating the LGE data. HD-EAM was performed in sinus rhythm for each patient. The EAM map and LGE-MRI mesh were co-registered. LVAs were defined using voltage cut-offs of 0.5 mV for bipolar and 2.5 mV for unipolar maps. Correspondence between LGE-areas and LVAs in the LA was analyzed using confusion matrices and performance metrics.
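
With LGE labels and voltages sampled at matched points, the reported metrics reduce to a confusion matrix between the LGE map (treated as ground truth) and a thresholded voltage map. A sketch with synthetic data:

```python
# Derive precision/sensitivity/F1/accuracy from LGE labels and a unipolar
# low-voltage mask at matched surface points. Data are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)
unipolar = 5.0 * rng.random(5000)              # toy unipolar voltages (mV)
lge = rng.integers(0, 2, 5000).astype(bool)    # toy LGE labels per point

lva = unipolar < 2.5                           # unipolar LVA cutoff (2.5 mV)
tp = np.sum(lva & lge)
fp = np.sum(lva & ~lge)
fn = np.sum(~lva & lge)
tn = np.sum(~lva & ~lge)

precision = tp / (tp + fp)
sensitivity = tp / (tp + fn)                   # share of LGE covered by the LVA
f1 = 2 * precision * sensitivity / (precision + sensitivity)
accuracy = (tp + tn) / lge.size
print(precision, sensitivity, f1, accuracy)
```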

RESULTS: A considerable 87.3% of LGE regions overlapped with unipolar-LVAs, compared to only 16.2% overlap observed with bipolar-LVAs. Across all performance metrics, unipolar-LVAs outperformed bipolar-LVAs in identifying LGE-areas [precision (78.6% vs. 61.1%); sensitivity (87.3% vs. 16.2%); F1 score (81.3% vs. 26.0%); accuracy (74.0% vs. 35.3%)].

CONCLUSION: Our findings demonstrate that unipolar-LVAs highly correlate with LGE regions. These findings support the integration of unipolar mapping alongside bipolar mapping into clinical practice. This would offer a nuanced approach to diagnose and manage atrial fibrillation by revealing critical insights into the complex architecture of the atrial substrate.

PMID:39396602 | DOI:10.1016/j.hrthm.2024.10.015

Categories: Literature Watch

Internet of Things and Cloud Computing-based Disease Diagnosis using Optimized Improved Generative Adversarial Network in Smart Healthcare System

Sun, 2024-10-13 06:00

Network. 2024 Oct 13:1-24. doi: 10.1080/0954898X.2024.2392770. Online ahead of print.

ABSTRACT

The integration of IoT and cloud services enhances communication and quality of life, while predictive analytics powered by AI and deep learning enables proactive healthcare. Deep learning, a subset of machine learning, efficiently analyzes vast datasets, offering rapid disease prediction. Leveraging recurrent neural networks on electronic health records improves accuracy for timely intervention and preventative care. In this manuscript, Internet of Things and Cloud Computing-based Disease Diagnosis using Optimized Improved Generative Adversarial Network in Smart Healthcare System (IOT-CC-DD-OICAN-SHS) is proposed. Initially, Internet of Things (IoT) devices collect diabetes, chronic kidney disease, and heart disease data from patients via wearables and intelligent sensors, and the large volume of patient data is stored in the cloud. These cloud data are pre-processed into a suitable format. The pre-processed dataset is fed into an Improved Generative Adversarial Network (IGAN), which classifies the data as disease-free or diseased. The IGAN is then optimized using the Flamingo Search Optimization Algorithm (FSOA). The proposed technique is implemented in Java using CloudSim and examined using several performance metrics. The proposed method attains greater accuracy and specificity with lower execution time than the existing methodologies IoT-C-SHMS-HDP-DL, PPEDL-MDTC, and CSO-CLSTM-DD-SHS.

PMID:39396229 | DOI:10.1080/0954898X.2024.2392770

Categories: Literature Watch

GraphPI: Efficient Protein Inference with Graph Neural Networks

Sun, 2024-10-13 06:00

J Proteome Res. 2024 Oct 13. doi: 10.1021/acs.jproteome.3c00845. Online ahead of print.

ABSTRACT

The integration of deep learning approaches in biomedical research has been transformative, enabling breakthroughs in various applications. Despite these strides, its application in protein inference is impeded by the scarcity of extensively labeled data sets, a challenge compounded by the high costs and complexities of accurate protein annotation. In this study, we introduce GraphPI, a novel framework that treats protein inference as a node classification problem. We treat proteins as interconnected nodes within a protein-peptide-PSM graph, utilizing a graph neural network-based architecture to elucidate their interrelations. To address label scarcity, we train the model on a set of unlabeled public protein data sets with pseudolabels derived from an existing protein inference algorithm, enhanced by self-training to iteratively refine labels based on confidence scores. Contrary to prevalent methodologies necessitating data set-specific training, our research illustrates that GraphPI, due to the well-normalized nature of Percolator features, exhibits universal applicability without data set-specific fine-tuning, a feature that not only mitigates the risk of overfitting but also enhances computational efficiency. Our empirical experiments reveal notable performance on various test data sets and deliver significantly reduced computation times compared to common protein inference algorithms.
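
The label-scarcity strategy, pseudolabels from an existing algorithm refined by confidence-based self-training, can be sketched with a toy graph convolution. Everything below (the dense normalized-adjacency network, the 0.9 confidence cutoff, the random initial labels) is an illustrative assumption, not GraphPI's implementation:

```python
# Pseudolabel self-training for node classification with a simple
# normalized-adjacency graph convolution on a toy random graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gcn_layer(A_hat, X, W):
    return A_hat @ X @ W                   # one propagation + projection step

n, d = 100, 16                             # toy graph: 100 nodes, 16 features
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T) > 0).float() + torch.eye(n) # symmetrize, add self-loops
deg = A.sum(1)
A_hat = A / deg.sqrt().outer(deg.sqrt())   # symmetric normalization

X = torch.randn(n, d)
W1 = nn.Parameter(torch.randn(d, 32) * 0.1)
W2 = nn.Parameter(torch.randn(32, 2) * 0.1)
opt = torch.optim.Adam([W1, W2], lr=0.01)

pseudo = torch.randint(0, 2, (n,))         # initial pseudolabels (stand-in for
                                           # an existing inference algorithm)
for round_ in range(3):                    # self-training rounds
    for _ in range(100):
        logits = gcn_layer(A_hat, F.relu(gcn_layer(A_hat, X, W1)), W2)
        loss = F.cross_entropy(logits, pseudo)
        opt.zero_grad(); loss.backward(); opt.step()
    probs = logits.softmax(1).detach()
    confident = probs.max(1).values > 0.9  # refine labels only where confident
    pseudo[confident] = probs.argmax(1)[confident]
```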

PMID:39396189 | DOI:10.1021/acs.jproteome.3c00845

Categories: Literature Watch

Detection of oral squamous cell carcinoma using pre-trained deep learning models

Sun, 2024-10-13 06:00

Exp Oncol. 2024 Oct 9;46(2):119-128. doi: 10.15407/exp-oncology.2024.02.119.

ABSTRACT

BACKGROUND: Oral squamous cell carcinoma (OSCC), the 13th most common type of cancer, claimed 364,339 lives in 2020. Researchers have established a strong correlation between early detection and better prognosis for this type of cancer. Tissue biopsy, the most common diagnostic method used by doctors, is both expensive and time-consuming. The recent growth in the use of transfer learning to aid medical diagnosis, along with the improved 5-year survival rate afforded by early diagnosis, serves as motivation for this study. The aim of the study was to evaluate an innovative approach using transfer learning of pre-trained classification models and convolutional neural networks (CNNs) for the binary classification of OSCC from histopathological images.

MATERIALS AND METHODS: The dataset used for the experiments consisted of 5192 histopathological images in total. The following pre-trained deep learning models were used for feature extraction: ResNet-50, VGG16, and InceptionV3 along with a tuned CNN for classification.
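
A hedged sketch of one such transfer-learning setup, using torchvision's pretrained ResNet-50 as a frozen feature extractor with a small trainable head; the head size and preprocessing are illustrative, not the paper's configuration:

```python
# ImageNet-pretrained ResNet-50 as a frozen feature extractor, plus a small
# trainable classification head for the binary OSCC task.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()             # expose the 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False             # freeze pretrained weights

head = nn.Sequential(nn.Linear(2048, 128), nn.ReLU(), nn.Dropout(0.5),
                     nn.Linear(128, 2))  # OSCC vs. normal logits

backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(4, 3, 224, 224))  # toy histopathology batch
logits = head(feats)
```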

RESULTS: The proposed methodologies were evaluated against the current state of the art, with emphasis on high sensitivity given its importance in the medical field. All three models were used in experiments with different hyperparameters and tested on a set of 126 histopathological images. The highest-performing model achieved an accuracy of 0.90, a sensitivity of 0.97, and an AUC of 0.94. The results were visualized using ROC curves and confusion matrices. The study further interprets the results obtained and concludes with suggestions for future research.

CONCLUSION: The study successfully demonstrated the potential of using transfer learning-based methodologies in the medical field. The interpretation of the results suggests their practical viability and offers directions for future research aimed at improving diagnostic precision and serving as a reliable tool to physicians in the early diagnosis of cancer.

PMID:39396172 | DOI:10.15407/exp-oncology.2024.02.119

Categories: Literature Watch

The Growing Impact of Natural Language Processing in Healthcare and Public Health

Sun, 2024-10-13 06:00

Inquiry. 2024 Jan-Dec;61:469580241290095. doi: 10.1177/00469580241290095.

ABSTRACT

Natural Language Processing (NLP) is a subset of Artificial Intelligence focused on understanding and generating human language. NLP technologies are becoming more prevalent in healthcare and hold potential solutions to current problems. Some examples of existing and future uses include: public sentiment analysis in relation to health policies, electronic health record (EHR) screening, use of speech-to-text technology for extracting EHR data at the point of care, patient communications, accelerated identification of eligible clinical trial candidates through automated searches, and access to health data to assist in informed treatment decisions. This narrative review aims to summarize the current uses of NLP in healthcare, highlight successful implementations of computational linguistics-based approaches, and identify gaps, limitations, and emerging trends within the subfield of NLP in public health. The online databases Google Scholar and PubMed were scanned for papers published between 2018 and 2023. The keywords "Natural Language Processing, Health Policy, Large Language Models" were used in the initial search, and papers were then limited to those written in English. Each of the 27 selected papers was carefully analyzed, and its relevance to NLP in healthcare informs this review. NLP and deep learning technologies scan large datasets, extracting valuable insights in various realms. This is especially significant in healthcare, where huge amounts of data exist in the form of unstructured text. Automating labor-intensive and tedious tasks with language processing algorithms, using text analytics systems and machine learning to analyze social media data, and extracting insights from unstructured data allow for better public sentiment analysis, enhancement of risk prediction models, improved patient communication, and informed treatment decisions. Recently, some studies have applied NLP tools to social media posts to evaluate public sentiment regarding COVID-19 vaccine use. Social media data can also be harnessed to develop pandemic prediction models based on reported symptoms. Furthermore, NLP has the potential to enhance healthcare delivery across the globe. Advanced language processing techniques such as Speech Recognition (SR) and Natural Language Understanding (NLU) tools can help overcome linguistic barriers and facilitate efficient communication between patients and healthcare providers.

PMID:39396164 | DOI:10.1177/00469580241290095

Categories: Literature Watch

Intelligent agricultural robotic detection system for greenhouse tomato leaf diseases using soft computing techniques and deep learning

Sat, 2024-10-12 06:00

Sci Rep. 2024 Oct 12;14(1):23887. doi: 10.1038/s41598-024-75285-5.

ABSTRACT

The development of soft computing methods has had a significant influence on autonomous intelligent agriculture. This paper offers a system for autonomous greenhouse navigation that employs a fuzzy control algorithm and a deep learning-based disease classification model for tomato plants, identifying illnesses from photos of tomato leaves. The primary novelty of this study is an upgraded Deep Convolutional Generative Adversarial Network (DCGAN) that creates augmented images of diseased tomato leaves from original genuine samples, considerably enlarging the training dataset. To find the optimal training model, four deep learning networks (VGG19, Inception-v3, DenseNet-201, and ResNet-152) were carefully compared on a dataset of nine tomato leaf disease classes. On the original PlantVillage dataset, these models achieve validation accuracies of 92.32%, 90.83%, 96.61%, and 97.07%, respectively. With the DCGAN-augmented dataset, the ResNet-152 architecture reaches an accuracy of 99.69%, compared with 97.07% on the original dataset. This improvement demonstrates the value of the proposed DCGAN in improving the performance of deep learning models for greenhouse plant monitoring and disease detection. Furthermore, the proposed approach may find broader use in various agricultural scenarios, potentially advancing the field of autonomous intelligent agriculture.
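
For orientation, a minimal DCGAN-style generator in PyTorch: transposed convolutions upsample a latent vector to a 64x64 RGB image. Channel counts and output size are assumptions for illustration; the paper's upgraded DCGAN is not reproduced here:

```python
# DCGAN-style generator: latent vector -> 64x64 RGB image in [-1, 1].
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # 4 -> 8 -> 16 -> 32 -> 64
        )

    def forward(self, z):                       # z: (batch, z_dim, 1, 1)
        return self.net(z)

g = Generator()
fake_leaves = g(torch.randn(16, 100, 1, 1))     # 16 synthetic leaf images
```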

PMID:39396063 | DOI:10.1038/s41598-024-75285-5

Categories: Literature Watch

Data-driven solutions and parameter estimations of a family of higher-order KdV equations based on physics informed neural networks

Sat, 2024-10-12 06:00

Sci Rep. 2024 Oct 12;14(1):23874. doi: 10.1038/s41598-024-74600-4.

ABSTRACT

Physics-informed neural networks (PINNs) demonstrate powerful capabilities in solving forward and inverse problems of nonlinear partial differential equations (NLPDEs) by combining data-driven learning with physical constraints. In this paper, two PINN methods that adopt tanh and sine as activation functions, respectively, are used to study data-driven solutions and parameter estimation for a family of higher-order KdV equations. Compared to the standard PINN with the tanh activation function, the PINN framework using the sine activation function can effectively learn the single-soliton solution, double-soliton solution, periodic traveling wave solution, and kink solution of the proposed equations with higher precision. The PINN framework using the sine activation function also shows better performance in parameter estimation. In addition, the experiments show that the complexity of the equation influences the accuracy and efficiency of the PINN method. The outcomes of this study are poised to enhance the application of deep learning techniques to solving and modeling higher-order NLPDEs.
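
To make the mechanics concrete, here is a hedged PyTorch sketch of a sine-activated PINN residual for the classical KdV equation u_t + 6uu_x + u_xxx = 0, the lowest-order member of the family; network sizes and collocation sampling are illustrative assumptions:

```python
# PINN physics residual for KdV via automatic differentiation, with sine
# activations (the variant the paper found more accurate).
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(x)

net = nn.Sequential(nn.Linear(2, 64), Sine(), nn.Linear(64, 64), Sine(),
                    nn.Linear(64, 1))

def grad(y, x):
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)[0]

x = torch.rand(256, 1, requires_grad=True)   # collocation points in space
t = torch.rand(256, 1, requires_grad=True)   # and time
u = net(torch.cat([x, t], dim=1))

u_t, u_x = grad(u, t), grad(u, x)
u_xxx = grad(grad(u_x, x), x)
residual = u_t + 6 * u * u_x + u_xxx         # PDE residual at collocation points
physics_loss = (residual ** 2).mean()        # added to the data-fitting loss
```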

PMID:39396058 | DOI:10.1038/s41598-024-74600-4

Categories: Literature Watch

Corrigendum to 'Deep learning dives: Predicting anxiety in Zebrafish through novel tank assay analysis' Physiology & Behavior (2024), 114696

Sat, 2024-10-12 06:00

Physiol Behav. 2024 Oct 11:114705. doi: 10.1016/j.physbeh.2024.114705. Online ahead of print.

PMID:39395874 | DOI:10.1016/j.physbeh.2024.114705

Categories: Literature Watch

Enhancing quantitative imaging to study DNA damage response: A guide to automated liquid handling and imaging

Sat, 2024-10-12 06:00

DNA Repair (Amst). 2024 Oct 6;144:103769. doi: 10.1016/j.dnarep.2024.103769. Online ahead of print.

ABSTRACT

Laboratory automation and quantitative high-content imaging are pivotal in advancing diverse scientific fields. These innovative techniques alleviate the burden of manual labour, facilitating large-scale experiments characterized by exceptional reproducibility. Nonetheless, the seamless integration of such systems continues to pose a challenge in many laboratories. Here, we present a meticulously designed workflow that automates the immunofluorescence staining process, coupled with quantitative high-content imaging, using the study of DNA damage signalling as an example. This is achieved by using an automatic liquid handling system for sample preparation. Additionally, we offer practical recommendations aimed at ensuring the reproducibility and scalability of experimental outcomes. We illustrate the high level of efficiency and reproducibility achieved through the implementation of the liquid handling system and also address the associated challenges. Furthermore, we extend the discussion to critical aspects such as microscope selection, optimal objective choices, and considerations for high-content image acquisition. Our study streamlines the image analysis process, offering valuable recommendations for efficient computing resources and the integration of cutting-edge deep learning techniques. Emphasizing the paramount importance of robust data management systems aligned with the FAIR data principles, we provide practical insights into suitable storage options and effective data visualization techniques. Together, our work serves as a comprehensive guide for life science laboratories seeking to elevate their high-content quantitative imaging capabilities through the seamless integration of advanced laboratory automation.

PMID:39395383 | DOI:10.1016/j.dnarep.2024.103769

Categories: Literature Watch

Segmentation of four-chamber view images in fetal ultrasound exams using a novel deep learning model ensemble method

Sat, 2024-10-12 06:00

Comput Biol Med. 2024 Oct 11;183:109188. doi: 10.1016/j.compbiomed.2024.109188. Online ahead of print.

ABSTRACT

Fetal echocardiography, a specialized ultrasound application commonly utilized for fetal heart assessment, can greatly benefit from automated segmentation of anatomical structures, aiding operators in their evaluations. We introduce a novel approach that combines various deep learning models for segmenting key anatomical structures in 2D ultrasound images of the fetal heart. Our ensemble method combines the raw predictions from the selected models, obtaining the optimal set of segmentation components that closely approximates the distribution of the fetal heart, resulting in improved segmentation outcomes. The selection of these components involves sequential and hierarchical geometry filtering, focusing on the analysis of shape and relative distances. Unlike other ensemble strategies that average predictions, our method works as a shape selector, ensuring that the final segmentation aligns more accurately with anatomical expectations. Using a large private dataset for model training and evaluation, we present both numerical and visual experiments highlighting the advantages of our method in comparison with the segmentations produced by the individual models and a conventional average ensemble. Furthermore, we show applications where our method proves instrumental in obtaining reliable estimations.
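
A toy sketch of geometry-based component selection: label the connected components of a fused binary prediction, then keep those passing simple size and centroid-distance criteria. The thresholds and reference point are invented; the paper's sequential, hierarchical filtering is richer than this:

```python
# Keep connected components of a fused prediction that satisfy simple
# shape/location criteria, discarding implausible fragments.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
fused = rng.random((256, 256)) > 0.995          # toy fused binary prediction
fused = ndimage.binary_dilation(fused, iterations=3)

labels, n = ndimage.label(fused)
centroids = ndimage.center_of_mass(fused, labels, range(1, n + 1))
sizes = ndimage.sum(fused, labels, range(1, n + 1))
reference = np.array([128.0, 128.0])            # e.g. expected heart center

keep = np.zeros_like(fused)
for idx, (c, s) in enumerate(zip(centroids, sizes), start=1):
    if s >= 20 and np.linalg.norm(np.array(c) - reference) < 100:
        keep |= labels == idx                   # retain plausible components
```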

PMID:39395344 | DOI:10.1016/j.compbiomed.2024.109188

Categories: Literature Watch

Lazy Resampling: Fast and information preserving preprocessing for deep learning

Sat, 2024-10-12 06:00

Comput Methods Programs Biomed. 2024 Sep 19;257:108422. doi: 10.1016/j.cmpb.2024.108422. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Preprocessing of data is a vital step for almost all deep learning workflows. In computer vision, manipulation of data intensity and spatial properties can improve network stability and can provide an important source of generalisation for deep neural networks. Models are frequently trained with preprocessing pipelines composed of many stages, but these pipelines come with a drawback; each stage that resamples the data costs time, degrades image quality, and adds bias to the output. Long pipelines can also be complex to design, especially in medical imaging, where cropping data early can cause significant artifacts.

METHODS: We present Lazy Resampling, a software framework that rephrases spatial preprocessing operations as a graphics pipeline. Rather than each transform individually modifying the data, the transforms generate transform descriptions that are composited into a single resample operation wherever possible. This reduces pipeline execution time and, most importantly, limits signal degradation. It enables simpler pipeline design, as crops and other operations become non-destructive. Lazy Resampling is designed in such a way that it provides the maximum benefit to users without requiring them to understand the underlying concepts or change the way they build pipelines.
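
The core idea can be sketched outside any particular framework: accumulate each spatial transform as a homogeneous matrix and resample once with the composite (a conceptual illustration, not MONAI's or Lazy Resampling's API):

```python
# Eager vs. lazy pipelines: two interpolation passes vs. one composite pass.
import numpy as np
from scipy.ndimage import affine_transform

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scaling(sx, sy):
    return np.diag([sx, sy, 1.0])

image = np.random.rand(128, 128)

# Eager pipeline: two interpolation passes, two rounds of signal degradation.
step1 = affine_transform(image, rotation(0.3))
eager = affine_transform(step1, scaling(1.2, 0.8))

# Lazy pipeline: compose the matrices first, interpolate exactly once.
# affine_transform maps output coords through `matrix` into the input, so
# the composite for "rotate then scale" is rotation @ scaling here.
composite = rotation(0.3) @ scaling(1.2, 0.8)
lazy = affine_transform(image, composite)
```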

RESULTS: We evaluate Lazy Resampling by comparing traditional pipelines with the corresponding lazy resampling pipelines for segmentation tasks on Medical Segmentation Decathlon datasets. We demonstrate lower information loss in lazy pipelines than in traditional pipelines. We show that Lazy Resampling avoids the catastrophic loss of semantic segmentation label accuracy that occurs in traditional pipelines when labels are passed through a pipeline and then back through the inverted pipeline. Finally, we demonstrate statistically significant improvements when training UNets for semantic segmentation.

CONCLUSION: Lazy Resampling reduces the loss of information that occurs when running processing pipelines that traditionally have multiple resampling steps and enables researchers to build simpler pipelines by making operations such as rotation and cropping effectively non-destructive. It makes it possible to invert labels back through a pipeline without catastrophic loss of accuracy. A reference implementation for Lazy Resampling can be found at https://github.com/KCL-BMEIS/LazyResampling. Lazy Resampling is being implemented as a core feature in MONAI, an open source python-based deep learning library for medical imaging, with a roadmap for a full integration.

PMID:39395305 | DOI:10.1016/j.cmpb.2024.108422

Categories: Literature Watch
