Deep learning

A CNN-LSTM model using elliptical constraints for temporally consistent sun position estimation

Fri, 2024-05-31 06:00

Heliyon. 2024 May 18;10(10):e31539. doi: 10.1016/j.heliyon.2024.e31539. eCollection 2024 May 30.

ABSTRACT

More accurate sun position estimation could transform the design and operation of solar power systems, weather forecasting services, and outdoor augmented reality systems. Although several image-based approaches to sun position estimation have been proposed, their performance is significantly affected by momentary disruptions in cloud cover because they use only a single image as input. This study proposes a deep learning-based sun position estimation system that leverages spatial, temporal, and geometric features to accurately regress sun positions even when the sun is partially or entirely occluded. In the proposed approach, spatial features are extracted from an input image sequence by applying a separate ResNet-based convolutional network to each frame. To ensure that the temporal changes in the brightness distribution across frames are considered, the spatial features are concatenated and passed on to a stack of LSTM layers prior to regressing the final sun position. The proposed network is also trained with elliptical (geometric) constraints to ensure that predicted sun positions are consistent with the natural elliptical path of the sun in the sky. The proposed approach's performance was evaluated on the SIRTA and Laval datasets along with a custom dataset, and an R² score of 0.98 was achieved, which is at least 0.1 higher than that of previous approaches. The proposed approach is capable of identifying the position of the sun even when occluded and was employed in a novel sky imaging system consisting of only a camera and fisheye lens in place of a complex array of sensors.
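
As a rough illustration of the architecture described above (per-frame ResNet features, an LSTM stack over the sequence, and a geometric constraint on the regressed position), here is a minimal PyTorch sketch. The layer sizes, loss weighting, and the simple soft ellipse penalty are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): per-frame ResNet features -> LSTM -> sun position,
# plus a soft "elliptical" penalty on the predicted (x, y) sky coordinates.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SunPositionNet(nn.Module):
    def __init__(self, hidden=256, lstm_layers=2):
        super().__init__()
        backbone = resnet18(weights=None)          # spatial feature extractor per frame
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, num_layers=lstm_layers, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # regress (x, y) sun position

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])               # position at the last time step

def ellipse_penalty(pred, a=1.0, b=0.6):
    """Soft constraint: penalise deviation of (x/a)^2 + (y/b)^2 from 1.
    The semi-axes a, b would in practice come from the fitted daily sun path (assumed here)."""
    x, y = pred[:, 0], pred[:, 1]
    return (((x / a) ** 2 + (y / b) ** 2 - 1.0) ** 2).mean()

# toy usage with random tensors standing in for a 5-frame sky-image sequence
model = SunPositionNet()
frames = torch.randn(2, 5, 3, 224, 224)
target = torch.randn(2, 2)
pred = model(frames)
loss = nn.functional.mse_loss(pred, target) + 0.1 * ellipse_penalty(pred)
loss.backward()
```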

PMID:38818140 | PMC:PMC11137535 | DOI:10.1016/j.heliyon.2024.e31539

Categories: Literature Watch

WUREN: Whole-modal union representation for epitope prediction

Fri, 2024-05-31 06:00

Comput Struct Biotechnol J. 2024 May 16;23:2122-2131. doi: 10.1016/j.csbj.2024.05.023. eCollection 2024 Dec.

ABSTRACT

B-cell epitope identification plays a vital role in the development of vaccines, therapies, and diagnostic tools. Currently, molecular docking tools in B-cell epitope prediction are heavily influenced by empirical parameters and require significant computational resources, making it difficult to meet large-scale prediction demands. When predicting epitopes from antigen-antibody complexes, current artificial intelligence algorithms cannot accurately implement the prediction due to insufficient protein feature representations, indicating that a novel algorithm is urgently needed for efficient protein information extraction. In this paper, we introduce a multimodal model called WUREN (Whole-modal Union Representation for Epitope predictioN), which effectively combines sequence, graph, and structural features. It achieved AUC-PR scores of 0.213 and 0.193 on the solved structures and AlphaFold-generated structures, respectively, for the independent test proteins selected from the DiscoTope3 benchmark. Our findings indicate that WUREN is an efficient feature extraction model for protein complexes, with generalizable application potential in the development of protein-based drugs. Moreover, the streamlined framework of WUREN could be readily extended to model similar biomolecules, such as nucleic acids, carbohydrates, and lipids.

PMID:38817963 | PMC:PMC11137340 | DOI:10.1016/j.csbj.2024.05.023

Categories: Literature Watch

Simulation of Automatically Annotated Visible and Multi-/Hyperspectral Images Using the Helios 3D Plant and Radiative Transfer Modeling Framework

Fri, 2024-05-31 06:00

Plant Phenomics. 2024 May 30;6:0189. doi: 10.34133/plantphenomics.0189. eCollection 2024.

ABSTRACT

Deep learning and multimodal remote and proximal sensing are widely used for analyzing plant and crop traits, but many of these deep learning models are supervised and necessitate reference datasets with image annotations. Acquiring these datasets often demands experiments that are both labor-intensive and time-consuming. Furthermore, extracting traits from remote sensing data beyond simple geometric features remains a challenge. To address these challenges, we proposed a radiative transfer modeling framework based on the Helios 3-dimensional (3D) plant modeling software designed for plant remote and proximal sensing image simulation. The framework has the capability to simulate RGB, multi-/hyperspectral, thermal, and depth cameras, and produce associated plant images with fully resolved reference labels such as plant physical traits, leaf chemical concentrations, and leaf physiological traits. Helios offers a simulated environment that enables generation of 3D geometric models of plants and soil with random variation, and specification or simulation of their properties and function. This approach differs from traditional computer graphics rendering by explicitly modeling radiation transfer physics, which provides a critical link to underlying plant biophysical processes. Results indicate that the framework is capable of generating high-quality, labeled synthetic plant images under given lighting scenarios, which can lessen or remove the need for manually collected and annotated data. Two example applications are presented that demonstrate the feasibility of using the model to enable unsupervised learning by training deep learning models exclusively with simulated images and performing prediction tasks using real images.

PMID:38817960 | PMC:PMC11136674 | DOI:10.34133/plantphenomics.0189

Categories: Literature Watch

Dermoscopy-based Radiomics Help Distinguish Basal Cell Carcinoma and Actinic Keratosis: A Large-scale Real-world Study Based on a 207-combination Machine Learning Computational Framework

Fri, 2024-05-31 06:00

J Cancer. 2024 Apr 23;15(11):3350-3361. doi: 10.7150/jca.94759. eCollection 2024.

ABSTRACT

This study has used machine learning algorithms to develop a predictive model for differentiating between dermoscopic images of basal cell carcinoma (BCC) and actinic keratosis (AK). We compiled a total of 904 dermoscopic images from two sources - the public dataset (HAM10000) and our proprietary dataset from the First Affiliated Hospital of Dalian Medical University (DAYISET 1) - and subsequently categorised these images into four distinct cohorts. The study developed a deep learning model for quantitative analysis of image features and integrated 15 machine learning algorithms, generating 207 algorithmic combinations through random combinations and cross-validation. The final predictive model, formed by integrating XGBoost with Lasso regression, exhibited effective performance in the differential diagnosis of BCC and AK. The model demonstrated high sensitivity in the training set and maintained stable performance in three validation sets. The area under the curve (AUC) value reached 1.000 in the training set and an average of 0.695 in the validation sets. The study concludes that the constructed discriminative diagnostic model based on machine learning algorithms has excellent predictive capabilities that could enhance clinical decision-making efficiency, reduce unnecessary biopsies, and provide valuable guidance for further treatment.
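
The abstract does not spell out how XGBoost is integrated with Lasso regression; one common pairing is Lasso-based feature selection feeding an XGBoost classifier. The sketch below (assuming scikit-learn and xgboost are installed, and using synthetic stand-in features and labels) illustrates that pattern only and is not the authors' pipeline.

```python
# Hedged sketch of one plausible "Lasso + XGBoost" pairing: Lasso-based feature selection
# followed by an XGBoost classifier, evaluated with stratified cross-validation.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(904, 128))          # stand-in for deep features of 904 dermoscopic images
y = rng.integers(0, 2, size=904)         # 0 = AK, 1 = BCC (labels here are synthetic)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(Lasso(alpha=0.001, max_iter=10000), threshold="median")),
    ("clf", XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss")),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean())
```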

PMID:38817855 | PMC:PMC11134443 | DOI:10.7150/jca.94759

Categories: Literature Watch

Revolutionising healthcare with artificial intelligence: A bibliometric analysis of 40 years of progress in health systems

Fri, 2024-05-31 06:00

Digit Health. 2024 May 28;10:20552076241258757. doi: 10.1177/20552076241258757. eCollection 2024 Jan-Dec.

ABSTRACT

The development of artificial intelligence (AI) has revolutionised the medical system, empowering healthcare professionals to analyse complex nonlinear big data and identify hidden patterns, facilitating well-informed decisions. Over the last decade, there has been a notable trend of research in AI, machine learning (ML), and their associated algorithms in health and medical systems. These approaches have transformed the healthcare system, enhancing efficiency, accuracy, personalised treatment, and decision-making. Recognising the importance and growing trend of research in the topic area, this paper presents a bibliometric analysis of AI in health and medical systems. The paper utilises the Web of Science (WoS) Core Collection database, considering documents published in the topic area for the last four decades. A total of 64,063 papers were identified from 1983 to 2022. The paper evaluates the bibliometric data from various perspectives, such as annual papers published, annual citations, highly cited papers, and the most productive institutions and countries. The paper visualises the relationship among various scientific actors by presenting bibliographic coupling and co-occurrences of authors' keywords. The analysis indicates that the field began to grow in the late 1970s and early 1980s, with particularly significant growth since 2019. The most influential institutions are in the USA and China. The study also reveals that the scientific community's top keywords include 'ML', 'Deep Learning', and 'Artificial Intelligence'.

PMID:38817839 | PMC:PMC11138196 | DOI:10.1177/20552076241258757

Categories: Literature Watch

Quantifying lung fissure integrity using a three-dimensional patch-based convolutional neural network on CT images for emphysema treatment planning

Fri, 2024-05-31 06:00

J Med Imaging (Bellingham). 2024 May;11(3):034502. doi: 10.1117/1.JMI.11.3.034502. Epub 2024 May 29.

ABSTRACT

PURPOSE: Evaluation of lung fissure integrity is required to determine whether emphysema patients have complete fissures and are candidates for endobronchial valve (EBV) therapy. We propose a deep learning (DL) approach to segment fissures using a three-dimensional patch-based convolutional neural network (CNN) and quantitatively assess fissure integrity on CT to evaluate it in subjects with severe emphysema.

APPROACH: From an anonymized image database of patients with severe emphysema, 129 CT scans were used. Lung lobe segmentations were performed to identify lobar regions, and the boundaries among these regions were used to construct approximate interlobar regions of interest (ROIs). The interlobar ROIs were annotated by expert image analysts to identify voxels where the fissure was present and create a reference ROI that excluded non-fissure voxels (where the fissure is incomplete). A CNN configured by nnU-Net was trained using 86 CT scans and their corresponding reference ROIs to segment the ROIs of left oblique fissure (LOF), right oblique fissure (ROF), and right horizontal fissure (RHF). For an independent test set of 43 cases, fissure integrity was quantified by mapping the segmented fissure ROI along the interlobar ROI. A fissure integrity score (FIS) was then calculated as the percentage of labeled fissure voxels divided by total voxels in the interlobar ROI. Predicted FIS (p-FIS) was quantified from the CNN output, and statistical analyses were performed comparing p-FIS and reference FIS (r-FIS).
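
The fissure integrity score defined above is straightforward to compute from two binary volumes; a minimal NumPy sketch (with hypothetical mask inputs) follows.

```python
# Minimal NumPy sketch of the fissure integrity score (FIS) as defined in the abstract:
# the percentage of interlobar-ROI voxels that carry a fissure label.
import numpy as np

def fissure_integrity_score(fissure_mask: np.ndarray, interlobar_roi: np.ndarray) -> float:
    """Both inputs are boolean volumes of the same shape; the fissure mask is intersected
    with the interlobar ROI before counting."""
    roi_voxels = interlobar_roi.sum()
    if roi_voxels == 0:
        return float("nan")
    fissure_voxels = np.logical_and(fissure_mask, interlobar_roi).sum()
    return 100.0 * fissure_voxels / roi_voxels

# toy example: an ROI where 70% of voxels carry a fissure label -> FIS = 70.0
roi = np.ones((10, 10, 10), dtype=bool)
fissure = np.zeros_like(roi)
fissure[:7] = True
print(fissure_integrity_score(fissure, roi))   # 70.0
```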

RESULTS: The absolute percent error mean (±SD) between r-FIS and p-FIS for the test set was 4.0% (±4.1%), 6.0% (±9.3%), and 12.2% (±12.5%) for the LOF, ROF, and RHF, respectively.

CONCLUSIONS: A DL approach was developed to segment lung fissures on CT images and accurately quantify FIS. It has potential to assist in the identification of emphysema patients who would benefit from EBV treatment.

PMID:38817711 | PMC:PMC11135203 | DOI:10.1117/1.JMI.11.3.034502

Categories: Literature Watch

Intelligent devices for assessing essential tremor: a comprehensive review

Thu, 2024-05-30 06:00

J Neurol. 2024 May 31. doi: 10.1007/s00415-024-12354-9. Online ahead of print.

ABSTRACT

Essential tremor (ET) stands as the most prevalent movement disorder, characterized by rhythmic and involuntary shaking of body parts. Achieving an accurate and comprehensive assessment of tremor severity is crucial for effectively diagnosing and managing ET. Traditional methods rely on clinical observation and rating scales, which may introduce subjective biases and hinder continuous evaluation of disease progression. Recent research has explored new approaches to quantifying ET. A promising method involves the use of intelligent devices to facilitate objective and quantitative measurements. These devices include inertial measurement units, electromyography, video equipment, electronic handwriting boards, and more. Their deployment enables real-time monitoring of human activity data, featuring portability and efficiency. This capability allows for more extensive research in this field and supports the shift from in-lab/clinic to in-home monitoring of ET symptoms. Therefore, this review provides an in-depth analysis of the application, current development, potential characteristics, and roles of intelligent devices in assessing ET.

PMID:38816480 | DOI:10.1007/s00415-024-12354-9

Categories: Literature Watch

Interdisciplinary approach to identify language markers for post-traumatic stress disorder using machine learning and deep learning

Thu, 2024-05-30 06:00

Sci Rep. 2024 May 30;14(1):12468. doi: 10.1038/s41598-024-61557-7.

ABSTRACT

Post-traumatic stress disorder (PTSD) lacks clear biomarkers in clinical practice. Language as a potential diagnostic biomarker for PTSD is investigated in this study. We analyze an original cohort of 148 individuals exposed to the November 13, 2015, terrorist attacks in Paris. The interviews, conducted 5-11 months after the event, include individuals from similar socioeconomic backgrounds exposed to the same incident, responding to identical questions and using uniform PTSD measures. Using this dataset to collect nuanced insights that might be clinically relevant, we propose a three-step interdisciplinary methodology that integrates expertise from psychiatry, linguistics, and the Natural Language Processing (NLP) community to examine the relationship between language and PTSD. The first step assesses a clinical psychiatrist's ability to diagnose PTSD using interview transcription alone. The second step uses statistical analysis and machine learning models to create language features based on psycholinguistic hypotheses and evaluate their predictive strength. The third step is the application of a hypothesis-free deep learning approach to the classification of PTSD in our cohort. Results show that the clinical psychiatrist achieved a diagnosis of PTSD with an area under the curve (AUC) of 0.72, which is comparable to a gold-standard questionnaire (AUC ≈ 0.80). The machine learning model achieved a diagnostic AUC of 0.69. The deep learning approach achieved an AUC of 0.64. An examination of model error informs our discussion. Importantly, the study controls for confounding factors, establishes associations between language and DSM-5 subsymptoms, and integrates automated methods with qualitative analysis. This study provides a direct and methodologically robust description of the relationship between PTSD and language. Our work lays the groundwork for advancing early and accurate diagnosis and using linguistic markers to assess the effectiveness of pharmacological treatments and psychotherapies.

PMID:38816468 | DOI:10.1038/s41598-024-61557-7

Categories: Literature Watch

Achieving large-scale clinician adoption of AI-enabled decision support

Thu, 2024-05-30 06:00

BMJ Health Care Inform. 2024 May 30;31(1):e100971. doi: 10.1136/bmjhci-2023-100971.

ABSTRACT

Computerised decision support (CDS) tools enabled by artificial intelligence (AI) seek to enhance accuracy and efficiency of clinician decision-making at the point of care. Statistical models developed using machine learning (ML) underpin most current tools. However, despite thousands of models and hundreds of regulator-approved tools internationally, large-scale uptake into routine clinical practice has proved elusive. While underdeveloped system readiness and investment in AI/ML within Australia and perhaps other countries are impediments, clinician ambivalence towards adopting these tools at scale could be a major inhibitor. We propose a set of principles and several strategic enablers for obtaining broad clinician acceptance of AI/ML-enabled CDS tools.

PMID:38816209 | DOI:10.1136/bmjhci-2023-100971

Categories: Literature Watch

Predicting isocitrate dehydrogenase status among adult patients with diffuse glioma using patient characteristics, radiomic features, and magnetic resonance imaging: Multi-modal analysis by variable vision transformer

Thu, 2024-05-30 06:00

Magn Reson Imaging. 2024 May 28:S0730-725X(24)00162-0. doi: 10.1016/j.mri.2024.05.012. Online ahead of print.

ABSTRACT

OBJECTIVES: To evaluate the performance of the multimodal model, termed variable Vision Transformer (vViT), in the task of predicting isocitrate dehydrogenase (IDH) status among adult patients with diffuse glioma.

MATERIALS AND METHODS: vViT was designed to predict IDH status using patient characteristics (sex and age), radiomic features, and contrast-enhanced T1-weighted images (CE-T1WI). Radiomic features were extracted from each enhancing tumor (ET), necrotic tumor core (NCR), and peritumoral edematous/infiltrated tissue (ED). CE-T1WI were split into four images and input to vViT. In the training, internal test, and external test, 271 patients with 1070 images (535 IDH wildtype, 535 IDH mutant), 35 patients with 194 images (97 IDH wildtype, 97 IDH mutant), and 291 patients with 872 images (436 IDH wildtype, 436 IDH mutant) were analyzed, respectively. Metrics including accuracy and AUC-ROC were calculated for the internal and external test datasets. Permutation importance analysis combined with the Mann-Whitney U test was performed to compare inputs.
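
As a generic illustration of the comparison described above (permutation-importance distributions tested with a Mann-Whitney U test), here is a hedged scikit-learn/SciPy sketch on synthetic data; the classifier, feature names, and repeat count are placeholders, not the vViT setup.

```python
# Hedged sketch: repeat permutation importance per feature, then compare the per-repeat
# importance distributions of two inputs with a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=50, random_state=0)

# compare the importance distributions of feature 0 and feature 5 (hypothetical inputs,
# e.g. a radiomic feature vs. a patient characteristic)
stat, p = mannwhitneyu(imp.importances[0], imp.importances[5])
print(f"U={stat:.1f}, p={p:.4f}")
```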

RESULTS: For the internal test dataset, vViT correctly predicted IDH status for all patients. For the external test dataset, an accuracy of 0.935 (95% confidence interval; 0.913-0.945) and AUC-ROC of 0.887 (0.798-0.956) were obtained. For both internal and external test datasets, CE-T1WI ET radiomic features and patient characteristics had higher importance than other inputs (p < 0.05).

CONCLUSIONS: The vViT has the potential to be a competent model in predicting IDH status among adult patients with diffuse glioma. Our results indicate that age, sex, and CE-T1WI ET radiomic features have key information in estimating IDH status.

PMID:38815636 | DOI:10.1016/j.mri.2024.05.012

Categories: Literature Watch

Prediction of cervix cancer stage and grade from diffusion weighted imaging using EfficientNet

Thu, 2024-05-30 06:00

Biomed Phys Eng Express. 2024 May 30. doi: 10.1088/2057-1976/ad5207. Online ahead of print.

ABSTRACT

This study aims to introduce an innovative noninvasive method that leverages a single image for both grading and staging prediction. The grade and the stage of cervix cancer (CC) are determined from diffusion-weighted imaging (DWI), in particular apparent diffusion coefficient (ADC) mapping, using deep convolutional neural networks (DCNN).

METHODS: A dataset of 85 patients with annotated tumor stage (I, II, III, and IV) was retrospectively collected; of these, 66 also had an annotated grade (II or III), and the remaining patients had no reported grade. The study was IRB approved. For each patient, sagittal and axial slices containing the gross tumor volume (GTV) were extracted from ADC maps. These maps were computed using the mono-exponential model from diffusion-weighted images (b-values = 0, 100, 1000) that were acquired prior to radiotherapy treatment. Balanced training sets were created using the Synthetic Minority Oversampling Technique (SMOTE) and fed to the DCNN. EfficientNetB0 and EfficientNetB3 were transferred from the ImageNet application to the binary and four-class classification tasks. Five-fold stratified cross-validation was performed for the assessment of the networks. Multiple evaluation metrics were computed, including the area under the receiver operating characteristic curve (AUC). Comparisons with ResNet50, Xception, and radiomic analysis were performed.

RESULTS: For grade prediction, EfficientNetB3 gave the best performance with AUC=0.924. For stage prediction, EfficientNetB0 was the best with AUC=0.931. The difference between the two models was, however, small and not statistically significant. EfficientNetB0 and B3 outperformed ResNet50 (AUC=0.71) and Xception (AUC=0.89) in stage prediction and demonstrated comparable results in grade classification, where AUCs of 0.89 and 0.90 were achieved by ResNet50 and Xception, respectively. The DCNNs outperformed radiomic analysis, which gave AUC=0.67 (grade) and AUC=0.66 (stage).

CONCLUSION: The prediction of CC grade and stage from ADC maps is feasible by adapting EfficientNet approaches to the medical context.
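
For readers unfamiliar with this kind of transfer, a minimal PyTorch sketch of attaching a new two-class head to an ImageNet-pretrained EfficientNet-B0 is shown below; the input size, freezing strategy, and optimizer are assumptions, and the SMOTE balancing and five-fold stratified cross-validation described above are not shown.

```python
# Minimal sketch of ImageNet transfer with EfficientNet-B0 for a binary ADC-slice task
# (e.g. grade II vs III). Not the authors' training setup.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                       # freeze the ImageNet backbone initially
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # new 2-class head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# toy forward/backward pass on a fake batch of ADC slices replicated to 3 channels
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```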

PMID:38815562 | DOI:10.1088/2057-1976/ad5207

Categories: Literature Watch

Robustness of Deep Learning models in electrocardiogram noise detection and classification

Thu, 2024-05-30 06:00

Comput Methods Programs Biomed. 2024 May 24;253:108249. doi: 10.1016/j.cmpb.2024.108249. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic electrocardiogram (ECG) signal analysis for heart disease detection has gained significant attention due to busy lifestyles. However, ECG signals are susceptible to noise, which adversely affects the performance of ECG signal analysers. Traditional blind filtering methods use predefined noise frequency and filter order, but they alter ECG biomarkers. Several Deep Learning-based ECG noise detection and classification methods exist, but no study compares recurrent neural network (RNN) and convolutional neural network (CNN) architectures and their complexity.

METHODS: This paper introduces a knowledge-based ECG filtering system using Deep Learning to classify ECG noise types and compare popular computer vision model architectures in a practical Internet of Medical Things (IoMT) framework. Experimental results demonstrate that the CNN-based ECG noise classifier outperforms the RNN-based model in terms of performance and training time.
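
For orientation, here is a hedged, illustrative 1D-CNN classifier for fixed-length ECG segments labelled by noise type. It is a generic sketch, not the paper's models, which adapt 2D computer-vision architectures (AlexNet, VGG, ResNet); the segment length and number of noise types are placeholders.

```python
# Illustrative 1D CNN for classifying ECG segments by noise type (generic sketch).
import torch
import torch.nn as nn

class ECGNoiseCNN(nn.Module):
    def __init__(self, n_noise_types=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),                 # global pooling -> length-independent
        )
        self.classifier = nn.Linear(32, n_noise_types)

    def forward(self, x):                            # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = ECGNoiseCNN()
segments = torch.randn(8, 1, 1000)                   # 8 single-lead segments of 1000 samples
print(model(segments).shape)                         # torch.Size([8, 4])
```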

RESULTS: The study shows that AlexNet, visual geometry group (VGG), and residual network (ResNet) achieved over 70% accuracy, specificity, sensitivity, and F1 score across six datasets. VGG and ResNet performed comparably, but VGG was more complex than ResNet, whose F1 score was only 4.57% lower.

CONCLUSIONS: This paper introduces a Deep Learning (DL) based ECG noise classifier for a knowledge-driven ECG filtering system, offering selective filtering to reduce signal distortion. Evaluation of various CNN- and RNN-based models shows that VGG and ResNet perform best; the VGG model is superior in terms of performance, while ResNet performs comparably with less model complexity.

PMID:38815528 | DOI:10.1016/j.cmpb.2024.108249

Categories: Literature Watch

Cross vision transformer with enhanced Growth Optimizer for breast cancer detection in IoMT environment

Thu, 2024-05-30 06:00

Comput Biol Chem. 2024 May 22;111:108110. doi: 10.1016/j.compbiolchem.2024.108110. Online ahead of print.

ABSTRACT

Recent advances in modern artificial intelligence approaches can play vital roles in the Internet of Medical Things (IoMT). Automatic diagnosis is one of the most important topics in the IoMT, including cancer diagnosis. Breast cancer is one of the top causes of death among women. Accurate diagnosis and early detection of breast cancer can improve the survival rate of patients. Deep learning models have demonstrated outstanding potential in accurately detecting and diagnosing breast cancer. This paper proposes a novel technology for breast cancer detection using CrossViT as the deep learning model and an enhanced version of the Growth Optimizer algorithm (MGO) as the feature selection method. CrossViT is a hybrid deep learning model that combines the strengths of both convolutional neural networks (CNNs) and transformers. The MGO is a meta-heuristic algorithm that selects the most relevant features from a large pool of features to enhance the performance of the model. The developed approach was evaluated on three publicly available breast cancer datasets and achieved competitive performance compared to other state-of-the-art methods. The results show that the combination of CrossViT and the MGO can effectively identify the most informative features for breast cancer detection, potentially assisting clinicians in making accurate diagnoses and improving patient outcomes. The MGO algorithm improves accuracy by approximately 1.59% on INbreast, 5.00% on MIAS, and 0.79% on MiniDDSM compared to other methods on each respective dataset. The developed approach can also be utilized to improve the Quality of Service (QoS) in the healthcare system as a deployable IoT-based intelligent solution or a decision-making assistance service, enhancing the efficiency and precision of the diagnosis.

PMID:38815500 | DOI:10.1016/j.compbiolchem.2024.108110

Categories: Literature Watch

Deep learning reveals lung shape differences on baseline chest CT between mild and severe COVID-19: A multi-site retrospective study

Thu, 2024-05-30 06:00

Comput Biol Med. 2024 May 23;177:108643. doi: 10.1016/j.compbiomed.2024.108643. Online ahead of print.

ABSTRACT

Severe COVID-19 can lead to extensive lung disease causing lung architectural distortion. In this study we employed machine learning and statistical atlas-based approaches to explore possible changes in lung shape among COVID-19 patients and evaluated whether the extent of these changes was associated with COVID-19 severity. On a large multi-institutional dataset (N = 3443), three different populations were defined: (a) healthy (no COVID-19), (b) mild COVID-19 (no ventilator required), and (c) severe COVID-19 (ventilator required), and the presence of lung shape differences among them was explored using baseline chest CT. Significant lung shape differences were observed along mediastinal surfaces of the lungs across all severities of COVID-19 disease. Additionally, differences were seen on basal surfaces of the lung when compared between healthy and severe COVID-19 patients. Finally, an AI model (a 3D residual convolutional network) characterizing these shape differences coupled with lung infiltrates (ground-glass opacities and consolidation regions) was found to be associated with COVID-19 severity.

PMID:38815485 | DOI:10.1016/j.compbiomed.2024.108643

Categories: Literature Watch

Early detection of nicosulfuron toxicity and physiological prediction in maize using multi-branch deep learning models and hyperspectral imaging

Thu, 2024-05-30 06:00

J Hazard Mater. 2024 May 28;474:134723. doi: 10.1016/j.jhazmat.2024.134723. Online ahead of print.

ABSTRACT

The misuse of herbicides in fields can cause severe toxicity in maize, resulting in significant reductions in both yield and quality. Therefore, it is crucial to develop early and efficient methods for assessing herbicide toxicity, protecting maize production, and maintaining the field environment. In this study, we utilized maize crops treated with the widely used nicosulfuron herbicide and their hyperspectral images to develop the HerbiNet model. After 4 d of nicosulfuron treatment, the model achieved an accuracy of 91.37 % in predicting toxicity levels, with coefficient of determination (R²) values of 0.82 and 0.73 for soil plant analysis development (SPAD) and water content, respectively. Additionally, the model exhibited higher generalizability across datasets from different years and seasons, significantly surpassing support vector machines, AlexNet, and partial least squares regression models. A lightweight model, HerbiNet-Lite, exhibited significantly low complexity using 18 spectral wavelengths. After 4 d of nicosulfuron treatment, the HerbiNet-Lite model achieved an accuracy of 87.93 % for toxicity prediction and R² values of 0.80 and 0.71 for SPAD and water content, respectively, while significantly reducing overfitting. Overall, this study provides an innovative approach for the early and accurate detection of nicosulfuron toxicity within maize fields.

PMID:38815392 | DOI:10.1016/j.jhazmat.2024.134723

Categories: Literature Watch

Towards a machine-learning assisted non-invasive classification of dengue severity using wearable PPG data: a prospective clinical study

Thu, 2024-05-30 06:00

EBioMedicine. 2024 May 29;104:105164. doi: 10.1016/j.ebiom.2024.105164. Online ahead of print.

ABSTRACT

BACKGROUND: Dengue epidemics impose considerable strain on healthcare resources. Real-time continuous and non-invasive monitoring of patients admitted to the hospital could lead to improved care and outcomes. We evaluated the performance of a commercially available wearable (SmartCare) utilising photoplethysmography (PPG) to stratify clinical risk for a cohort of hospitalised patients with dengue in Vietnam.

METHODS: We performed a prospective observational study for adult and paediatric patients with a clinical diagnosis of dengue at the Hospital for Tropical Disease, Ho Chi Minh City, Vietnam. Patients underwent PPG monitoring early during admission alongside standard clinical care. PPG waveforms were analysed using machine learning models. Adult patients were classified between 3 severity classes: i) uncomplicated (ward-based), ii) moderate-severe (emergency department-based), and iii) severe (ICU-based). Data from paediatric patients were split into 2 classes: i) severe (during ICU stay) and ii) follow-up (14-21 days after the illness onset). Model performances were evaluated using standard classification metrics and 5-fold stratified cross-validation.

FINDINGS: We included PPG and clinical data from 132 adults and 15 paediatric patients with a median age of 28 (IQR, 21-35) and 12 (IQR, 9-13) years, respectively. A total of 1781 h of PPG data were available for analysis. The best performing convolutional neural network models (CNN) achieved a precision of 0.785 and recall of 0.771 in classifying adult patients according to severity class and a precision of 0.891 and recall of 0.891 in classifying between disease and post-disease state in paediatric patients.

INTERPRETATION: We demonstrate that the use of a low-cost wearable provided clinically actionable data to differentiate between patients with dengue of varying severity. Continuous monitoring and connectivity to early warning systems could significantly benefit clinical care in dengue, particularly within an endemic setting. Work is currently underway to implement these models for dynamic risk predictions and assist in individualised patient care.

FUNDING: EPSRC Centre for Doctoral Training in High-Performance Embedded and Distributed Systems (HiPEDS) (Grant: EP/L016796/1) and the Wellcome Trust (Grants: 215010/Z/18/Z and 215688/Z/19/Z).

PMID:38815363 | DOI:10.1016/j.ebiom.2024.105164

Categories: Literature Watch

Equivariant score-based generative diffusion framework for 3D molecules

Thu, 2024-05-30 06:00

BMC Bioinformatics. 2024 May 30;25(1):203. doi: 10.1186/s12859-024-05810-w.

ABSTRACT

BACKGROUND: Molecular biology is crucial for drug discovery, protein design, and human health. Due to the vastness of the drug-like chemical space, depending on biomedical experts to manually design molecules is exceedingly expensive. Utilizing generative methods with deep learning technology offers an effective approach to streamline the search space for molecular design and save costs. This paper introduces a novel E(3)-equivariant score-based diffusion framework for 3D molecular generation via SDEs, aiming to address the constraints of unified Gaussian diffusion methods. Within the proposed framework EMDS, the complete diffusion is decomposed into separate diffusion processes for distinct components of the molecular feature space, while the modeling processes also capture the complex dependency among these components. Moreover, angle and torsion angle information is integrated into the networks to enhance the modeling of atom coordinates and utilize spatial information more effectively.

RESULTS: Experiments on the widely utilized QM9 dataset demonstrate that our proposed framework significantly outperforms the state-of-the-art methods in all evaluation metrics for 3D molecular generation. Additionally, ablation experiments are conducted to highlight the contribution of key components in our framework, demonstrating the effectiveness of the proposed framework and the performance improvements of incorporating angle and torsion angle information for molecular generation. Finally, the comparative results of distribution show that our method is highly effective in generating molecules that closely resemble the actual scenario.

CONCLUSION: Through the experiments and comparative results, our framework clearly outperforms previous 3D molecular generation methods, exhibiting significantly better capacity for modeling chemically realistic molecules. The excellent performance of EMDS in 3D molecular generation brings novel and encouraging opportunities for tackling challenging biomedical molecule and protein scenarios.

PMID:38816718 | DOI:10.1186/s12859-024-05810-w

Categories: Literature Watch

Exploring the Conformational Ensembles of Protein-Protein Complex with Transformer-Based Generative Model

Thu, 2024-05-30 06:00

J Chem Theory Comput. 2024 May 30. doi: 10.1021/acs.jctc.4c00255. Online ahead of print.

ABSTRACT

Protein-protein interactions are the basis of many protein functions, and understanding the contact and conformational changes of protein-protein interactions is crucial for linking the protein structure to biological function. Although such changes are difficult to detect experimentally, molecular dynamics (MD) simulations are widely used to study the conformational ensembles and dynamics of protein-protein complexes; however, they face significant limitations in sampling efficiency and computational cost. In this study, a generative neural network was trained on protein-protein complex conformations obtained from molecular simulations to directly generate novel conformations with physical realism. We demonstrated the use of a deep learning model based on the transformer architecture to explore the conformational ensembles of protein-protein complexes through MD simulations. The results showed that the learned latent space can be used to generate unsampled conformations of protein-protein complexes that complement pre-existing ones, serving as an exploratory tool for the analysis and enhancement of molecular simulations of protein-protein complexes.

PMID:38816696 | DOI:10.1021/acs.jctc.4c00255

Categories: Literature Watch

Evaluating the Performance of ChatGPT in Urology: A Comparative Study of Knowledge Interpretation and Patient Guidance

Thu, 2024-05-30 06:00

J Endourol. 2024 May 30. doi: 10.1089/end.2023.0413. Online ahead of print.

ABSTRACT

Background/Aim: To evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT), a large language model trained by OpenAI.

Materials and Methods: This study had three main steps to evaluate the effectiveness of ChatGPT in the urologic field. The first step involved 35 questions from our institution's experts, who have at least 10 years of experience in their fields. The responses of ChatGPT versions were qualitatively compared with the responses of urology residents to the same questions. The second step assessed the reliability of ChatGPT versions in answering current debate topics. The third step assessed the reliability of ChatGPT versions in providing medical recommendations and directives for questions commonly asked by patients in the outpatient and inpatient clinics.

Results: In the first step, version 4 provided correct answers to 25 questions out of 35, while version 3.5 provided only 19 (71.4% vs 54%). It was observed that residents in their last year of education in our clinic also provided a mean of 25 correct answers, and 4th year residents provided a mean of 19.3 correct responses. The second step involved evaluating the responses of both versions to debate situations in urology, and it was found that both versions provided variable and inappropriate results. In the last step, both versions had a similar success rate in providing recommendations and guidance to patients based on expert ratings.

Conclusion: The difference between the two versions on the 35 questions in the first step of the study was thought to be due to the improvement of ChatGPT's literature and data synthesis abilities. It may be a logical approach to use ChatGPT versions to provide quick and safe answers to questions from non-healthcare providers, but they should not be used as a diagnostic tool or to choose among different treatment modalities.

PMID:38815140 | DOI:10.1089/end.2023.0413

Categories: Literature Watch

Long short-term memory (LSTM)-based news classification model

Thu, 2024-05-30 06:00

PLoS One. 2024 May 30;19(5):e0301835. doi: 10.1371/journal.pone.0301835. eCollection 2024.

ABSTRACT

In this study, we used unidirectional and bidirectional long short-term memory (LSTM) deep learning networks for Chinese news classification and characterized the effects of contextual information on text classification, achieving a high level of accuracy. A Chinese glossary was created using jieba (a word segmentation tool), stop-word removal, and word frequency analysis. Next, word2vec was used to map the processed words into word vectors, creating a convenient lookup table for word vectors that could be used as feature inputs for the LSTM model. A bidirectional LSTM (BiLSTM) network was used for feature extraction from word vectors to facilitate the transfer of information in both the backward and forward directions to the hidden layer. Subsequently, an LSTM network was used to perform feature integration on all the outputs of the BiLSTM network, with the output from the last layer of the LSTM being treated as the mapping of the text into a feature vector. The output feature vectors were then connected to a fully connected layer to construct a feature classifier using the integrated features, finally classifying the news articles. The hyperparameters of the model were optimized based on the loss between the true and predicted values using the adaptive moment estimation (Adam) optimizer. Additionally, multiple dropout layers were added to the model to reduce overfitting. As text classification models for Chinese news articles, the BiLSTM and unidirectional LSTM models obtained F1-scores of 94.15% and 93.16%, respectively, with the former outperforming the latter in terms of feature extraction.
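
The described stack maps directly onto a few PyTorch modules; the sketch below mirrors it (embedding, BiLSTM, LSTM, dropout, fully connected classifier, Adam), with placeholder dimensions, dropout rate, and class count rather than the paper's settings.

```python
# Minimal sketch of the described pipeline: embedded tokens -> BiLSTM -> LSTM ->
# last time step -> fully connected classifier. Placeholder hyperparameters.
import torch
import torch.nn as nn

class BiLSTMNewsClassifier(nn.Module):
    def __init__(self, vocab_size=50000, embed_dim=300, hidden=128, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # would be initialised from word2vec
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        x, _ = self.bilstm(x)                      # (batch, seq_len, 2*hidden)
        x, _ = self.lstm(x)                        # (batch, seq_len, hidden)
        return self.fc(self.dropout(x[:, -1]))     # classify from the last time step

model = BiLSTMNewsClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 50000, (4, 64))          # a toy batch of 64-token articles
loss = nn.CrossEntropyLoss()(model(tokens), torch.tensor([0, 1, 2, 3]))
loss.backward()
optimizer.step()
```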

PMID:38814925 | DOI:10.1371/journal.pone.0301835

Categories: Literature Watch
