Deep learning
Knowledge, attitude, and practice of artificial intelligence among medical students in Sudan: a cross-sectional study
Ann Med Surg (Lond). 2024 Apr 24;86(7):3917-3923. doi: 10.1097/MS9.0000000000002070. eCollection 2024 Jul.
ABSTRACT
INTRODUCTION: In this cross-sectional study, the authors explored the knowledge, attitudes, and practices related to artificial intelligence (AI) among medical students in Sudan. With AI increasingly impacting healthcare, understanding its integration into medical education is crucial. This study aimed to assess the current state of AI awareness, perceptions, and practical experience among medical students in Sudan and to evaluate the extent of their familiarity with AI by examining their attitudes toward its application in medicine. Additionally, the study sought to identify the factors influencing knowledge levels and to explore the practical implementation of AI in the medical field.
METHOD: A web-based survey was distributed to medical students in Sudan via social media platforms and e-mail during October 2023. The survey included questions on demographic information, knowledge of AI, attitudes toward its applications, and practical experience. Descriptive statistics, χ2 tests, logistic regression, and correlation analyses were performed using SPSS version 26.0.
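As a rough illustration of the stated analysis plan (descriptive statistics, χ2 tests, and logistic regression, run in SPSS in the study), the following Python sketch performs equivalent tests; the column names (e.g. gender, adequate_knowledge) are hypothetical placeholders, not the study's variables.

```python
# Illustrative Python equivalent of the stated analysis (chi-square test and
# logistic regression). Column names such as "gender" and "adequate_knowledge"
# are hypothetical placeholders, not the study's actual variables.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the web survey

# Chi-square test of association between gender and AI knowledge level
table = pd.crosstab(df["gender"], df["adequate_knowledge"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")

# Logistic regression: which factors predict adequate AI knowledge? (all numeric)
X = sm.add_constant(df[["year_of_study", "prior_ai_training", "male"]])
model = sm.Logit(df["adequate_knowledge"], X).fit()
print(model.summary())
```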
RESULTS: Out of the 762 participants, the majority exhibited a basic understanding of AI, but detailed knowledge of its applications was limited. Positive attitudes toward the importance of AI in diagnosis, radiology, and pathology were prevalent. However, practical application of these methods was infrequent, with only a minority of the participants having hands-on experience. Factors influencing knowledge included the lack of a formal curriculum and gender disparities.
CONCLUSION: This study highlights the need for comprehensive AI education in medical training programs in Sudan. While participants displayed positive attitudes, there was a notable gap in practical experience. Addressing these gaps through targeted educational interventions is crucial for preparing future healthcare professionals to navigate the evolving landscape of AI in medicine.
RECOMMENDATIONS: Policy efforts should focus on integrating AI education into the medical curriculum to ensure readiness for the technological advancements shaping the future of healthcare.
PMID:38989161 | PMC:PMC11230734 | DOI:10.1097/MS9.0000000000002070
A clinical-radiomics nomogram based on automated segmentation of chest CT to discriminate PRISm and COPD patients
Eur J Radiol Open. 2024 Jun 14;13:100580. doi: 10.1016/j.ejro.2024.100580. eCollection 2024 Dec.
ABSTRACT
PURPOSE: It is vital to develop noninvasive approaches with high accuracy to discriminate the preserved ratio impaired spirometry (PRISm) group from the chronic obstructive pulmonary disease (COPD) group. Radiomics has emerged as an image analysis technique. This study aims to develop and validate a new radiomics-based noninvasive approach to discriminate these two groups.
METHODS: In total, 1066 subjects from 4 centers were included in this retrospective study and classified into training, internal validation, or external validation sets. Chest computed tomography (CT) images were segmented by a fully automated deep learning segmentation algorithm (Unet231) for radiomics feature extraction. We established the radiomics signature (Rad-score) using the least absolute shrinkage and selection operator algorithm and conducted ten-fold cross-validation on the training set. Finally, we constructed a clinical-radiomics nomogram by incorporating the Rad-score and independent risk factors in a multivariate logistic regression model. Model performance was evaluated by receiver operating characteristic (ROC) curve, calibration curve, and decision curve analyses (DCA).
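A minimal sketch of the modelling pipeline described above (LASSO-based Rad-score plus a clinical-radiomics logistic model), assuming a pre-extracted feature table; file and column names are hypothetical, and the Unet231 segmentation and feature-extraction steps are not reproduced.

```python
# Sketch of the Rad-score + nomogram modelling step on a pre-extracted feature
# table (hypothetical file/column names). The binary label is treated as a
# numeric target for the LASSO selection step, a common choice in Rad-score work.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

train = pd.read_csv("radiomics_train.csv")            # hypothetical feature table
y = train["label"].values                             # 0 = PRISm, 1 = COPD
X = StandardScaler().fit_transform(train.drop(columns=["label", "age", "smoking"]))

# LASSO with ten-fold cross-validation selects a sparse set of radiomic features
lasso = LassoCV(cv=10, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
rad_score = X[:, selected] @ lasso.coef_[selected]    # linear combination = Rad-score

# Clinical-radiomics nomogram: Rad-score plus clinical covariates in a logistic model
clinical = train[["age", "smoking"]].values           # hypothetical clinical covariates
design = np.column_stack([rad_score, clinical])
clf = LogisticRegression(max_iter=1000).fit(design, y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(design)[:, 1]))
```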
RESULTS: The Rad-score, comprising 15 radiomic features over the whole-lung region and thus suitable for diffuse lung diseases, was demonstrated to be effective for discriminating between PRISm and COPD. Its diagnostic accuracy was improved by integrating the Rad-score with a clinical model: the areas under the ROC curve (AUC) were 0.82 (95% CI 0.79-0.86), 0.77 (95% CI 0.72-0.83), and 0.841 (95% CI 0.78-0.91) for the training, internal validation, and external validation sets, respectively. Calibration and decision curve analyses showed that the radiomics nomogram had good fit and superior clinical utility.
CONCLUSIONS: This work constructed a new radiomics-based nomogram and verified its reliability for discriminating between PRISm and COPD.
PMID:38989052 | PMC:PMC11233899 | DOI:10.1016/j.ejro.2024.100580
Feasibility of remote monitoring for fatal coronary heart disease using Apple Watch ECGs
Cardiovasc Digit Health J. 2024 Apr 5;5(3):115-121. doi: 10.1016/j.cvdhj.2024.03.007. eCollection 2024 Jun.
ABSTRACT
BACKGROUND: Fatal coronary heart disease (FCHD), which affects more than 4 million people per year, is often described as sudden cardiac death in which coronary artery disease is the only identified condition. Electrocardiographic artificial intelligence (ECG-AI) models for FCHD risk prediction using ECG data from wearable devices could enable wider screening/monitoring efforts.
OBJECTIVES: To develop a single-lead ECG-based deep learning model for FCHD risk prediction and assess concordance between clinical and Apple Watch ECGs.
METHODS: An FCHD single-lead ("lead I" from 12-lead ECGs) ECG-AI model was developed using 167,662 ECGs (50,132 patients) from the University of Tennessee Health Sciences Center. Eighty percent of the data (5-fold cross-validation) was used for training and 20% as a holdout. Cox proportional hazards (CPH) models incorporating ECG-AI predictions with age, sex, and race were also developed. The models were tested on paired clinical single-lead and Apple Watch ECGs from 243 St. Jude Lifetime Cohort Study participants. The correlation and concordance of the predictions were assessed using Pearson correlation (R), Spearman correlation (ρ), and Cohen's kappa.
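The concordance analysis above reduces to three standard statistics; a short sketch, assuming hypothetical arrays of paired risk predictions and an illustrative risk threshold:

```python
# Sketch of the concordance analysis: Pearson and Spearman correlation of
# continuous risk predictions plus Cohen's kappa on the low/high-risk
# classification. The prediction arrays and threshold are hypothetical.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score

clinical_pred = np.load("lead_i_risk.npy")       # ECG-AI risk from clinical lead I
watch_pred = np.load("apple_watch_risk.npy")     # ECG-AI risk from Apple Watch ECG

r, _ = pearsonr(clinical_pred, watch_pred)
rho, _ = spearmanr(clinical_pred, watch_pred)

threshold = 0.5                                  # illustrative low/high-risk cut-off
kappa = cohen_kappa_score(clinical_pred > threshold, watch_pred > threshold)
print(f"R={r:.2f}, rho={rho:.2f}, kappa={kappa:.2f}")
```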
RESULTS: The ECG-AI and CPH models resulted in AUC = 0.76 and 0.79, respectively, on the 20% holdout and AUC = 0.85 and 0.87 on the Atrium Health Wake Forest Baptist external validation data. There was moderate-strong positive correlation between predictions (R = 0.74, ρ = 0.67, and κ = 0.58) when tested on the 243 paired ECGs. The clinical (lead I) and Apple Watch predictions led to the same low/high-risk FCHD classification for 99% of the participants. CPH prediction correlation resulted in an R = 0.81, ρ = 0.76, and κ = 0.78.
CONCLUSION: Risk of FCHD can be predicted from single-lead ECGs obtained from wearable devices, and the predictions are statistically concordant with those from lead I of a 12-lead ECG.
PMID:38989042 | PMC:PMC11232422 | DOI:10.1016/j.cvdhj.2024.03.007
Projected pooling loss for red nucleus segmentation with soft topology constraints
J Med Imaging (Bellingham). 2024 Jul;11(4):044002. doi: 10.1117/1.JMI.11.4.044002. Epub 2024 Jul 9.
ABSTRACT
PURPOSE: Deep learning is the standard for medical image segmentation. However, it may encounter difficulties when the training set is small, and it may generate anatomically aberrant segmentations. Anatomical knowledge can be potentially useful as a constraint in deep learning segmentation methods. We propose a loss function based on projected pooling to introduce soft topological constraints. Our main application is the segmentation of the red nucleus from quantitative susceptibility mapping (QSM), which is of interest in parkinsonian syndromes.
APPROACH: This new loss function introduces soft constraints on the topology by magnifying small parts of the structure to be segmented so that they are not discarded in the segmentation process. To that end, we project the structure onto the three orthogonal planes and then apply a series of MaxPooling operations with increasing kernel sizes. These operations are performed on both the ground truth and the prediction, and their difference is computed to obtain the loss function. As a result, the loss can reduce topological errors as well as defects in the structure boundary. The approach is easy to implement and computationally efficient.
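A hedged PyTorch sketch of a projected-pooling-style loss following this description (max projections onto the three planes, a MaxPooling pyramid, and the difference between prediction and ground truth); the kernel sizes and the L1 aggregation are assumptions rather than the authors' exact implementation:

```python
# Sketch of a projected-pooling-style loss in PyTorch; kernel sizes and the L1
# aggregation are assumptions, not the authors' exact implementation.
import torch
import torch.nn.functional as F

def projected_pooling_loss(pred, target, kernel_sizes=(3, 5, 9, 17)):
    """pred, target: (B, 1, D, H, W) probability maps in [0, 1]."""
    loss = 0.0
    for axis in (2, 3, 4):                        # project onto the three planes
        p2d = pred.amax(dim=axis)                 # max projection of the prediction
        t2d = target.amax(dim=axis)               # max projection of the ground truth
        for k in kernel_sizes:                    # pooling pyramid magnifies small parts
            p_pool = F.max_pool2d(p2d, kernel_size=k, stride=1, padding=k // 2)
            t_pool = F.max_pool2d(t2d, kernel_size=k, stride=1, padding=k // 2)
            loss = loss + F.l1_loss(p_pool, t_pool)
    return loss / (3 * len(kernel_sizes))

# Typically added as a regularizer to a standard segmentation loss (Dice, cross-entropy)
pred = torch.rand(2, 1, 32, 64, 64)
target = (torch.rand(2, 1, 32, 64, 64) > 0.5).float()
print(projected_pooling_loss(pred, target))
```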
RESULTS: When applied to the segmentation of the red nucleus from QSM data, the approach led to very high accuracy (Dice 89.9%) and no topological errors. Moreover, the proposed loss function improved the Dice accuracy over the baseline when the training set was small. We also studied three tasks from the Medical Segmentation Decathlon (MSD) challenge (heart, spleen, and hippocampus). For the MSD tasks, the Dice accuracies were similar for both approaches, but topological errors were reduced.
CONCLUSIONS: We propose an effective method to automatically segment the red nucleus which is based on a new loss for introducing topology constraints in deep learning segmentation.
PMID:38988992 | PMC:PMC11232703 | DOI:10.1117/1.JMI.11.4.044002
Pulmonary nodule detection in low dose computed tomography using a medical-to-medical transfer learning approach
J Med Imaging (Bellingham). 2024 Jul;11(4):044502. doi: 10.1117/1.JMI.11.4.044502. Epub 2024 Jul 9.
ABSTRACT
PURPOSE: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT would greatly improve the existing clinical workflow. Most of the existing methods for lung nodule detection are designed for high-dose CTs (HDCTs) and cannot be directly applied to LDCTs due to domain shift and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs.
APPROACH: We developed an algorithm based on the object detection model you only look once (YOLO) to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as the initial weights when retraining the model on LDCTs in a medical-to-medical transfer learning approach. The dataset for this study came from a screening trial and consisted of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over 3 consecutive years (T1, T2, and T3). HDCTs from about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated using a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the 3 years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and chi-squared test with an alpha value of 0.05 were used for statistical significance testing.
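A hedged sketch of the two-stage transfer-learning step using the ultralytics YOLO API; the YOLO variant, dataset YAML files, and hyperparameters are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch of medical-to-medical transfer learning with the ultralytics
# YOLO API. Model variant, data YAML files, and hyperparameters are illustrative.
from ultralytics import YOLO

# Stage 1: train on the public HDCT nodule dataset starting from generic weights
hdct_model = YOLO("yolov8n.pt")
hdct_model.train(data="hdct_nodules.yaml", epochs=100, imgsz=512)

# Stage 2: reuse the HDCT-trained weights as the initial weights and retrain on
# the LDCT screening data (the medical-to-medical transfer step)
ldct_model = YOLO("runs/detect/train/weights/best.pt")
ldct_model.train(data="ldct_nodules.yaml", epochs=100, imgsz=512)

# Evaluate on the held-out LDCT test split
metrics = ldct_model.val(data="ldct_nodules.yaml", split="test")
```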
RESULTS: The results compare the proposed model developed using HDCT pre-trained weights with the same model using COCO pre-trained weights. The former approach versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis in the comparative study yielded a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity.
CONCLUSIONS: In this study, a semi-automated method was developed to detect lung nodules in LDCTs by using HDCT pre-trained weights as the initial weights and retraining the model. The results were then compared against the same approach initialized with COCO pre-trained weights. The proposed method may identify early lung nodules during screening programs, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, enable earlier treatment in affected patients, and lower the mortality rate.
PMID:38988991 | PMC:PMC11232701 | DOI:10.1117/1.JMI.11.4.044502
Lung vessel connectivity map as anatomical prior knowledge for deep learning-based lung lobe segmentation
J Med Imaging (Bellingham). 2024 Jul;11(4):044001. doi: 10.1117/1.JMI.11.4.044001. Epub 2024 Jul 9.
ABSTRACT
PURPOSE: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans.
APPROACH: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net.
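One straightforward way to inject an anatomical prior such as the LVC map is to stack it with the CT volume as an extra input channel; the toy PyTorch sketch below illustrates that general idea only and is not the paper's nnU-Net configuration:

```python
# Toy 3D encoder-decoder taking CT + LVC map as a 2-channel volume. This only
# illustrates channel-wise conditioning on an anatomical prior; it is not the
# standalone/multitasking/cascade nnU-Net setup used in the paper.
import torch
import torch.nn as nn

class LobeSegNetWithLVC(nn.Module):
    def __init__(self, n_lobes=5):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.Conv3d(16, n_lobes + 1, 1))  # lobes + background

    def forward(self, ct, lvc_map):
        x = torch.cat([ct, lvc_map], dim=1)    # (B, 2, D, H, W)
        return self.dec(self.enc(x))

model = LobeSegNetWithLVC()
ct = torch.randn(1, 1, 64, 128, 128)
lvc = torch.randn(1, 1, 64, 128, 128)
logits = model(ct, lvc)                         # (1, 6, 64, 128, 128)
```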
RESULTS: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases.
CONCLUSIONS: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.
PMID:38988990 | PMC:PMC11231955 | DOI:10.1117/1.JMI.11.4.044001
Author Correction: A fully automated classification of third molar development stages using deep learning
Sci Rep. 2024 Jul 10;14(1):15932. doi: 10.1038/s41598-024-66731-5.
NO ABSTRACT
PMID:38987634 | DOI:10.1038/s41598-024-66731-5
Nano fuzzy alarming system for blood transfusion requirement detection in cancer using deep learning
Sci Rep. 2024 Jul 10;14(1):15958. doi: 10.1038/s41598-024-66607-8.
ABSTRACT
Periodic blood transfusion is needed in cancer patients, in whom the disease process as well as chemotherapy can disrupt the natural production of blood cells. However, there are concerns about blood transfusion side effects, the cost, and the availability of donated blood. Therefore, predicting the timely requirement for blood transfusion while accounting for patient variability is needed, and here we deal with this issue in blood cancer for the first time using in vivo data. First, a data set of 98 samples from blood cancer patients, including 61 features of demographic, clinical, and laboratory data, is collected. After performing multivariate analysis and obtaining the approval of an expert, effective parameters are derived. Then, using a deep recurrent neural network, a system is presented to predict the need for packed red blood cell transfusion. Here, we use a Long Short-Term Memory (LSTM) neural network for modeling and 5-fold cross-validation for validating the model, and we compare the result with networking and non-networking machine learning algorithms including bidirectional LSTM, AdaBoost, bagging with decision trees, bagging with KNeighbors, and a Multi-Layer Perceptron (MLP). Results show that the LSTM outperforms the other methods. Then, using a swarm of fuzzy bioinspired nanomachines and the most effective parameters of Hgb, PaO2, and pH, we propose a feasibility study on a nano fuzzy alarming system (NFABT) for blood transfusion requirements. Alarm decisions are delivered to the physician through an Internet of Things (IoT) gateway for medical action. NFABT is also considered a real-time, non-invasive, AI-based hemoglobin monitoring and alarming method. Results show the merits of the proposed method.
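A minimal PyTorch sketch of the kind of LSTM classifier described above, operating on per-patient sequences of clinical/laboratory features; the sequence length and training details are assumptions, while the 61-feature input matches the abstract:

```python
# Minimal LSTM classifier over per-visit feature sequences, in the spirit of the
# model described above. Sequence length and training details are assumptions.
import torch
import torch.nn as nn

class TransfusionLSTM(nn.Module):
    def __init__(self, n_features=61, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time_steps, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)    # logit: transfusion needed or not

model = TransfusionLSTM()
x = torch.randn(8, 10, 61)                     # 8 patients, 10 time points, 61 features
labels = torch.randint(0, 2, (8,)).float()     # hypothetical transfusion labels
loss = nn.BCEWithLogitsLoss()(model(x), labels)
prob = torch.sigmoid(model(x))                 # predicted transfusion probability
```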
PMID:38987580 | DOI:10.1038/s41598-024-66607-8
Classification of osteoarthritic and healthy cartilage using deep learning with Raman spectra
Sci Rep. 2024 Jul 10;14(1):15902. doi: 10.1038/s41598-024-66857-6.
ABSTRACT
Raman spectroscopy is a rapid method for analysing the molecular composition of biological material. However, noise contamination in the spectral data necessitates careful pre-processing prior to analysis. Here we propose an end-to-end Convolutional Neural Network to automatically learn an optimal combination of pre-processing strategies for the classification of Raman spectra of superficial and deep layers of cartilage harvested from 45 osteoarthritis and 19 osteoporosis (healthy control) patients. Using 6-fold cross-validation, the Multi-Convolutional Neural Network achieves comparable or improved classification accuracy against the best-performing Convolutional Neural Network applied to either the raw or pre-processed spectra. We utilised Integrated Gradients to identify the contributing features (Raman signatures) in the network's decision process, showing that they are biologically relevant. Using these features, we compared Artificial Neural Networks, Decision Trees, and Support Vector Machines for the feature selection task. Results show that training on fewer than 3 and 300 features, respectively, for the disease classification and layer assignment tasks provides performance comparable to the best-performing CNN-based network applied to the full dataset. Our approach, incorporating multi-channel input and Integrated Gradients, can potentially facilitate the clinical translation of Raman spectroscopy-based diagnosis without the need for laborious manual pre-processing and feature selection.
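A hedged sketch of attributing a 1-D CNN's prediction on a Raman spectrum with Integrated Gradients (Captum); the network, spectrum length, and labels are illustrative, and the paper's multi-channel pre-processing is not reproduced:

```python
# Hedged sketch: Integrated Gradients (Captum) over a toy 1-D CNN applied to a
# Raman spectrum. Architecture, spectrum length, and class labels are illustrative.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=11, padding=5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(32), nn.Flatten(),
    nn.Linear(16 * 32, 2),                      # e.g. osteoarthritis vs. healthy control
)

spectrum = torch.randn(1, 1, 1000, requires_grad=True)     # one Raman spectrum
ig = IntegratedGradients(cnn)
attributions = ig.attribute(spectrum, target=0)            # per-wavenumber contribution
top_bands = attributions.abs().squeeze().topk(10).indices  # candidate Raman signatures
print(top_bands)
```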
PMID:38987563 | DOI:10.1038/s41598-024-66857-6
Exploiting the Role of Features for Antigens-Antibodies Interaction Site Prediction
Methods Mol Biol. 2024;2780:303-325. doi: 10.1007/978-1-0716-3985-6_16.
ABSTRACT
Antibodies are a class of proteins that recognize and neutralize pathogens by binding to their antigens. They are the most significant category of biopharmaceuticals for both diagnostic and therapeutic applications. Understanding how antibodies interact with their antigens plays a fundamental role in drug and vaccine design and helps in understanding the complex antigen-binding mechanisms. Computational methods for predicting antibody-antigen interaction sites are of great value due to the overall cost of experimental methods, and machine learning and deep learning techniques have obtained promising results. In this work, we predict antibody interaction interface sites by applying HSS-PPI, a hybrid method defined to predict the interface sites of general proteins. The approach abstracts the proteins in terms of a hierarchical representation and uses a graph convolutional network to classify the amino acids as interface or non-interface. Moreover, we equip the amino acids with different sets of physicochemical features together with structural ones to describe the residues. Analyzing the results, we observe that the structural features play a fundamental role in the amino acid descriptions. We compare the obtained performances, evaluated using standard metrics, with those obtained with an SVM with 3D Zernike descriptors, Parapred, Paratome, and Antibody i-Patch.
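A hedged sketch of residue-level interface classification with a graph convolutional network (PyTorch Geometric); the graph, feature dimension, and architecture are illustrative stand-ins, not the HSS-PPI implementation:

```python
# Hedged sketch of interface/non-interface residue classification with a GCN
# (PyTorch Geometric). Node features stand in for the physicochemical/structural
# descriptors mentioned above; graph and sizes are illustrative, not HSS-PPI.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class ResidueGCN(torch.nn.Module):
    def __init__(self, in_dim=30, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 2)        # interface vs. non-interface

    def forward(self, data):
        x = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(x, data.edge_index)

# Toy residue graph: 100 residues, 300 contacts, 30 features per residue
data = Data(x=torch.randn(100, 30),
            edge_index=torch.randint(0, 100, (2, 300)),
            y=torch.randint(0, 2, (100,)))
model = ResidueGCN()
loss = F.cross_entropy(model(data), data.y)    # per-residue classification loss
```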
PMID:38987475 | DOI:10.1007/978-1-0716-3985-6_16
Refinement of Docked Protein-Protein Complexes Using Repulsive Scaling Replica Exchange Simulations
Methods Mol Biol. 2024;2780:289-302. doi: 10.1007/978-1-0716-3985-6_15.
ABSTRACT
Accurate prediction and evaluation of protein-protein complex structures is of major importance for understanding the cellular interactome. Complex structures predicted by deep learning approaches or traditional docking methods often require structural refinement and rescoring for realistic evaluation. Standard molecular dynamics (MD) simulations are time-consuming and often do not structurally improve docking solutions. Better refinement can be achieved with our recently developed replica-exchange-based scheme employing different levels of repulsive biasing between proteins in each replica simulation (RS-REMD). The bias acts specifically on the intermolecular interactions, based on an increase in effective pairwise van der Waals radii, without changing interactions within each protein or with the solvent. It allows for improvement of the predicted protein-protein complex structure and simultaneous realistic free-energy scoring of protein-protein complexes. The setup of RS-REMD simulations is described in detail, including application to two examples (all necessary scripts and input files can be obtained from https://gitlab.com/TillCyrill/mmib ).
PMID:38987474 | DOI:10.1007/978-1-0716-3985-6_15
Assessment of Protein-Protein Docking Models Using Deep Learning
Methods Mol Biol. 2024;2780:149-162. doi: 10.1007/978-1-0716-3985-6_10.
ABSTRACT
Protein-protein interactions are involved in almost all processes in a living cell and determine the biological functions of proteins. To obtain mechanistic understandings of protein-protein interactions, the tertiary structures of protein complexes have been determined by biophysical experimental methods, such as X-ray crystallography and cryogenic electron microscopy. However, as experimental methods are costly in resources, many computational methods have been developed that model protein complex structures. One of the difficulties in computational protein complex modeling (protein docking) is to select the most accurate models among many models that are usually generated by a docking method. This article reviews advances in protein docking model assessment methods, focusing on recent developments that apply deep learning to several network architectures.
PMID:38987469 | DOI:10.1007/978-1-0716-3985-6_10
Machine Learning Methods in Protein-Protein Docking
Methods Mol Biol. 2024;2780:107-126. doi: 10.1007/978-1-0716-3985-6_7.
ABSTRACT
An exponential increase in the number of publications addressing artificial intelligence (AI) usage in the life sciences has been noticed in recent years, while new modeling techniques are constantly being reported. The potential of these methods is vast: from understanding fundamental cellular processes to discovering new drugs and breakthrough therapies. Computational studies of protein-protein interactions, crucial for understanding the operation of biological systems, are no exception in this field. However, despite the rapid development of technology and the progress in developing new approaches, many aspects remain challenging to solve, such as predicting conformational changes in proteins, or more "trivial" issues such as obtaining high-quality data in huge quantities. Therefore, this chapter gives a short introduction to various AI approaches for studying protein-protein interactions, followed by a description of the most up-to-date algorithms and programs used for this purpose. Yet, given the considerable pace of development in this hot area of computational science, by the time you read this chapter the development of the algorithms described, or the emergence of new (and better) ones, should come as no surprise.
PMID:38987466 | DOI:10.1007/978-1-0716-3985-6_7
Selection of pre-trained weights for transfer learning in automated cytomegalovirus retinitis classification
Sci Rep. 2024 Jul 10;14(1):15899. doi: 10.1038/s41598-024-67121-7.
ABSTRACT
Cytomegalovirus retinitis (CMVR) is a significant cause of vision loss. Regular screening is crucial but challenging in resource-limited settings. A convolutional neural network is a state-of-the-art deep learning technique for generating automatic diagnoses from retinal images. However, there are limited numbers of CMVR images with which to train such a model properly. Transfer learning (TL) is a strategy for training a model with a scarce dataset. This study explores the efficacy of TL with different pre-trained weights for automated CMVR classification using retinal images. We utilised a dataset of 955 retinal images (524 CMVR and 431 normal) from Siriraj Hospital, Mahidol University, collected between 2005 and 2015. Images were processed using Kowa VX-10i or VX-20 fundus cameras and augmented for training. We employed DenseNet121 as a backbone model, comparing the performance of TL with weights pre-trained on the ImageNet, APTOS2019, and CheXNet datasets. The models were evaluated based on accuracy, loss, and other performance metrics, with the depth of fine-tuning varied across the different pre-trained weights. The study found that TL significantly enhances model performance in CMVR classification. The best results were achieved with weights sequentially transferred from ImageNet to the APTOS2019 dataset before application to our CMVR dataset. This approach yielded the highest mean accuracy (0.99) and lowest mean loss (0.04), outperforming the other methods. The class activation heatmaps provided insights into the model's decision-making process: the model with APTOS2019 pre-trained weights offered the best explanation, highlighting the pathologic lesions in a manner resembling human interpretation. Our findings demonstrate the potential of sequential TL in improving the accuracy and efficiency of CMVR diagnosis, particularly in settings with limited data availability, and highlight the importance of domain-specific pre-training in medical image classification. This approach streamlines the diagnostic process and paves the way for broader applications in automated medical image analysis, offering a scalable solution for early disease detection.
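A hedged sketch of the sequential transfer-learning idea (ImageNet, then APTOS2019, then CMVR) with a torchvision DenseNet121 backbone; file names, class counts, and the omitted fine-tuning loops are assumptions:

```python
# Hedged sketch of sequential transfer learning with a DenseNet121 backbone.
# File names, class counts, and the fine-tuning loops (elided) are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def make_densenet(num_classes, weights=None, state_dict=None):
    model = models.densenet121(weights=weights)
    if state_dict is not None:
        # reuse only the feature-extractor weights; drop the old classifier head
        backbone = {k: v for k, v in state_dict.items() if not k.startswith("classifier")}
        model.load_state_dict(backbone, strict=False)
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

# Stage 1: ImageNet -> APTOS2019 (diabetic retinopathy grading, 5 classes)
aptos_model = make_densenet(5, weights=models.DenseNet121_Weights.IMAGENET1K_V1)
# ... fine-tune aptos_model on APTOS2019 here ...
torch.save(aptos_model.state_dict(), "densenet121_aptos.pt")

# Stage 2: APTOS2019 -> CMVR (binary: CMVR vs. normal retina)
cmvr_model = make_densenet(2, state_dict=torch.load("densenet121_aptos.pt"))
# ... fine-tune cmvr_model on the CMVR dataset here ...
```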
PMID:38987446 | DOI:10.1038/s41598-024-67121-7
A deep learning based cognitive model to probe the relation between psychophysics and electrophysiology of flicker stimulus
Brain Inform. 2024 Jul 10;11(1):18. doi: 10.1186/s40708-024-00231-0.
ABSTRACT
The flicker stimulus is a visual stimulus of intermittent illumination. A flicker stimulus can appear flickering or steady to a human subject, depending on the physical parameters associated with the stimulus. When the flickering light appears steady, flicker fusion is said to have occurred. This work aims to bridge the gap between the psychophysics of flicker fusion and the electrophysiology associated with the flicker stimulus through a deep-learning-based computational model of flicker perception. Convolutional Recurrent Neural Networks (CRNNs) were trained with psychophysics data of the flicker stimulus obtained from a human subject. We claim that many of the reported electrophysiological features of the flicker stimulus, including the presence of fundamentals and harmonics of the stimulus, can be explained as the result of a temporal convolution operation on the flicker stimulus. We further show that the convolution layer output of a CRNN trained with psychophysics data is more responsive to specific frequencies, as in the human EEG response to flicker, and that the convolution layer of a trained CRNN can give a nearly sinusoidal output for a 10 Hz flicker stimulus, as reported for some human subjects.
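A hedged sketch of a small convolutional-recurrent model over a flicker time series, whose convolutional front-end can be inspected in the spirit of the claim above; layer sizes and the stimulus encoding are illustrative assumptions:

```python
# Hedged sketch of a convolutional-recurrent model over a flicker time series.
# Layer sizes and the square-wave stimulus encoding are illustrative assumptions;
# inspecting the convolution output mirrors the abstract's point that temporal
# convolution alone can carry fundamentals/harmonics of the stimulus.
import torch
import torch.nn as nn

class FlickerCRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=25, padding=12)   # temporal convolution
        self.rnn = nn.GRU(8, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                          # flickering vs. steady

    def forward(self, x):                    # x: (batch, 1, time)
        feats = torch.relu(self.conv(x))
        _, h = self.rnn(feats.transpose(1, 2))
        return self.head(h[-1]).squeeze(-1)

t = torch.linspace(0, 1, 1000)
stimulus = (torch.sin(2 * torch.pi * 10 * t) > 0).float()[None, None, :]  # 10 Hz flicker
model = FlickerCRNN()
logit = model(stimulus)
conv_response = model.conv(stimulus)         # front-end output to inspect for harmonics
```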
PMID:38987386 | DOI:10.1186/s40708-024-00231-0
Automatic wild bird repellent system that is based on deep-learning-based wild bird detection and integrated with a laser rotation mechanism
Sci Rep. 2024 Jul 10;14(1):15924. doi: 10.1038/s41598-024-66920-2.
ABSTRACT
Wild bird repulsion is critical in agriculture because it helps avoid agricultural food losses and mitigates the risk of avian influenza. Wild birds transmit avian influenza in poultry farms and thus cause large economic losses. In this study, we developed an automatic wild bird repellent system that is based on deep-learning-based wild bird detection and integrated with a laser rotation mechanism. When a wild bird appears at a farm, the proposed system detects the bird's position in an image captured by its detection unit and then uses a laser beam to repel the bird. The wild bird detection model of the proposed system was optimized for detecting small pixel targets and trained through a deep learning method using wild bird images captured at different farms. Various wild bird repulsion experiments were conducted using the proposed system at an outdoor duck farm in Yunlin, Taiwan. The statistical test results for our experimental data indicated that the proposed automatic wild bird repellent system effectively reduced the number of wild birds at the farm, and the experiments showed that the system repelled wild birds at a high rate of 40.3% each day.
PMID:38987345 | DOI:10.1038/s41598-024-66920-2
Computational tools to predict context-specific protein complexes
Curr Opin Struct Biol. 2024 Jul 9;88:102883. doi: 10.1016/j.sbi.2024.102883. Online ahead of print.
ABSTRACT
Interactions between thousands of proteins define cells' protein-protein interaction (PPI) network. Some of these interactions lead to the formation of protein complexes. It is challenging to identify a protein complex in a haystack of protein-protein interactions, and it is even more difficult to predict all protein complexes of the complexome. Simulations and machine learning approaches try to crack these problems by looking at the PPI network or predicted protein structures. Clustering of PPI networks led to the first protein complex predictions, while most recently, atomistic models of protein complexes and deep-learning-based structure prediction methods have also emerged. The simulation of PPI level interactions even enables the quantitative prediction of protein complexes. These methods, the required data sources, and their potential future developments are discussed in this review.
PMID:38986166 | DOI:10.1016/j.sbi.2024.102883
Data modeling analysis of GFRP tubular filled concrete column based on small sample deep meta learning method
PLoS One. 2024 Jul 10;19(7):e0305038. doi: 10.1371/journal.pone.0305038. eCollection 2024.
ABSTRACT
The meta-learning method proposed in this paper addresses the issue of small-sample regression in the application of engineering data analysis, which is a highly promising direction for research. By integrating traditional regression models with optimization-based data augmentation from meta-learning, the proposed deep neural network demonstrates excellent performance in optimizing glass fiber reinforced plastic (GFRP) for wrapping concrete short columns. When compared with traditional regression models, such as Support Vector Regression (SVR), Gaussian Process Regression (GPR), and Radial Basis Function Neural Networks (RBFNN), the meta-learning method proposed here performs better in modeling small data samples. The success of this approach illustrates the potential of deep learning in dealing with limited amounts of data, offering new opportunities in the field of material data analysis.
PMID:38985781 | DOI:10.1371/journal.pone.0305038
HAPI: An efficient Hybrid Feature Engineering-based Approach for Propaganda Identification in social media
PLoS One. 2024 Jul 10;19(7):e0302583. doi: 10.1371/journal.pone.0302583. eCollection 2024.
ABSTRACT
Social media platforms serve as communication tools where users freely share information regardless of its accuracy. Propaganda on these platforms refers to the dissemination of biased or deceptive information aimed at influencing public opinion, encompassing various forms such as political campaigns, fake news, and conspiracy theories. This study introduces a Hybrid Feature Engineering Approach for Propaganda Identification (HAPI), designed to detect propaganda in text-based content such as news articles and social media posts. HAPI combines conventional feature engineering methods with machine learning techniques to achieve high accuracy in propaganda detection. The study is conducted on data collected from Twitter via its API, and an annotation scheme is proposed to categorize tweets into binary classes (propaganda and non-propaganda). Hybrid feature engineering entails the amalgamation of various features, including Term Frequency-Inverse Document Frequency (TF-IDF), Bag of Words (BoW), sentiment features, and tweet length, among others. Multiple machine learning classifiers were trained and evaluated using the proposed methodology, leveraging a selection of 40 pertinent features identified through the hybrid feature selection technique. All the selected algorithms, including Multinomial Naive Bayes (MNB), Support Vector Machine (SVM), Decision Tree (DT), and Logistic Regression (LR), achieved promising results. The SVM-based HAPI (SVM-HAPI) exhibited superior performance among the traditional algorithms, achieving precision, recall, F-measure, and overall accuracy of 0.69, 0.69, 0.69, and 69.2%, respectively. Furthermore, the proposed approach was compared to well-known existing approaches and outperformed most of them on several evaluation metrics. This research contributes to the development of a comprehensive system tailored for propaganda identification in textual content. Nonetheless, the purview of propaganda detection transcends textual data alone: deep learning algorithms such as Artificial Neural Networks (ANN) offer the capability to handle multimodal data, incorporating text, images, audio, and video, thereby considering not only the content itself but also its presentation and contextual nuances during dissemination.
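A hedged sketch of the feature-engineering-plus-classifier core of this kind of approach, showing only TF-IDF and tweet length feeding a linear SVM; the sentiment/BoW features, the 40-feature selection step, and the Twitter data are not reproduced, and the file and column names are hypothetical:

```python
# Hedged sketch: hand-engineered text features (TF-IDF + tweet length) feeding a
# linear SVM for binary propaganda detection. File/column names are hypothetical;
# the study's sentiment/BoW features and 40-feature selection are not reproduced.
import pandas as pd
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

tweets = pd.read_csv("annotated_tweets.csv")        # columns: text, label (0/1)

features = FeatureUnion([
    ("tfidf", TfidfVectorizer(max_features=5000)),
    ("length", FunctionTransformer(lambda texts: [[len(t)] for t in texts])),
])
pipeline = Pipeline([("features", features), ("svm", LinearSVC())])

scores = cross_val_score(pipeline, tweets["text"], tweets["label"], cv=5, scoring="f1")
print("mean F1:", scores.mean())
```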
PMID:38985703 | DOI:10.1371/journal.pone.0302583
AI-based Fully-Automated Stress Left Ventricular Ejection Fraction as a Prognostic Marker in Patients Undergoing Stress-CMR
Eur Heart J Cardiovasc Imaging. 2024 Jul 10:jeae168. doi: 10.1093/ehjci/jeae168. Online ahead of print.
ABSTRACT
AIM: To determine whether, in patients undergoing stress CMR, fully automated stress artificial intelligence (AI)-based left ventricular ejection fraction (LVEFAI) can provide incremental prognostic value for predicting death beyond traditional prognosticators.
MATERIAL AND RESULTS: Between 2016 and 2018, we conducted a longitudinal study that included all consecutive patients referred for vasodilator stress CMR. LVEFAI was assessed using an AI algorithm that combines multiple deep learning networks for LV segmentation. The primary outcome was all-cause death, assessed using the French National Registry of Death. Cox regression was used to evaluate the association of stress LVEFAI with death after adjustment for traditional risk factors and CMR findings. In 9,712 patients (66±15 years, 67% men), there was an excellent correlation between stress LVEFAI and LVEF measured by an expert (LVEFexpert) (r=0.94, p<0.001). Stress LVEFAI was associated with death (median [IQR] follow-up 4.5 [3.7-5.2] years) before and after adjustment for risk factors (adjusted hazard ratio [HR] 0.84 [95% CI, 0.82-0.87] per 5% increment, p<0.001). Stress LVEFAI had a similarly significant association with death compared with LVEFexpert. After adjustment, stress LVEFAI showed the greatest improvement in model discrimination and reclassification over and above traditional risk factors and stress CMR findings (C-statistic improvement: 0.11; NRI=0.250; IDI=0.049, all p<0.001; LR-test p<0.001), with incremental prognostic value over LVEFAI determined at rest.
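A hedged sketch of this type of adjusted Cox analysis using lifelines; the column names are hypothetical placeholders for the study's covariates, and the toy code does not reproduce the reported hazard ratio:

```python
# Hedged sketch of an adjusted Cox proportional hazards analysis with lifelines.
# Column names are hypothetical placeholders (all covariates assumed numeric).
import pandas as pd
from lifelines import CoxPHFitter

# hypothetical columns: time_years, death, lvef_ai, age, male, diabetes, ischemia, lge
df = pd.read_csv("stress_cmr_cohort.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="death")
cph.print_summary()                              # adjusted HR for lvef_ai among covariates

# Hazard ratio per 5% LVEF increment (illustrative rescaling: HR_per_5 = exp(5*beta))
hr_per_5 = float(cph.hazard_ratios_["lvef_ai"]) ** 5
print("HR per 5% LVEF increment:", hr_per_5)
```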
CONCLUSION: AI-based fully automated LVEF measured at stress is independently associated with the occurrence of death in patients undergoing stress CMR, with additional prognostic value above traditional risk factors, inducible ischemia, and late gadolinium enhancement (LGE).
PMID:38985691 | DOI:10.1093/ehjci/jeae168