Deep learning

Oil painting teaching design based on the mobile platform in higher art education

Fri, 2024-07-05 06:00

Sci Rep. 2024 Jul 5;14(1):15531. doi: 10.1038/s41598-024-65103-3.

ABSTRACT

To improve the current oil painting teaching mode in Chinese universities, this study combines deep learning and artificial intelligence technologies to explore oil painting teaching. Firstly, the research status of individualized education and related work on image classification based on brush features are analyzed. Secondly, an oil painting classification model is constructed based on a convolutional neural network, mathematical morphology, and a support vector machine, with color and brush features as the extracted features. Moreover, a personalized intelligent oil painting teaching framework is built on the basis of artificial intelligence technology and individualized education theory. Finally, the performance of the intelligent oil painting classification model is evaluated, and the content of the personalized intelligent oil painting teaching framework is explained. The results show that the average classification accuracy is 90.25% when only brush features are extracted and over 89% when only color features are extracted. When both feature types are extracted, the average accuracy of the oil painting classification model reaches 94.03%. Iterative Dichotomiser 3 (ID3), the C4.5 decision tree, and the support vector machine achieve average classification accuracies of 82.24%, 83.57%, and 94.03%, respectively. Training with an epoch size of 50 is faster than with the original epoch size of 100, at a slight cost in accuracy. The personalized oil painting teaching system helps students adjust their learning plans according to their own circumstances, avoid repetitive content, and ultimately improve their learning efficiency. Compared with other studies, this study obtains a good oil painting classification model and a personalized oil painting education system that plays a positive role in oil painting teaching. This study lays a foundation for the development of higher art education.
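
The abstract pairs hand-crafted color and brush-stroke descriptors with a support vector machine. The sketch below is a minimal, hypothetical rendering of that idea (the CNN component is omitted): an HSV color histogram stands in for the color features, morphological-gradient statistics stand in for the brush features, and the concatenated vector is classified with an SVM. The feature functions and parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-feature pipeline (color + brush features fed to an
# SVM). The specific descriptors are illustrative assumptions.
import numpy as np
import cv2
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def color_features(img_bgr, bins=16):
    """HSV color histogram as a simple stand-in for the color descriptor."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

def brush_features(img_bgr, ksize=3):
    """Morphological-gradient statistics as a rough proxy for brush-stroke texture."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
    return np.array([grad.mean(), grad.std(),
                     np.percentile(grad, 90), (grad > 32).mean()])

def extract(img_bgr):
    return np.concatenate([color_features(img_bgr), brush_features(img_bgr)])

# X_train, X_test: lists of BGR images; y_train, y_test: style labels
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# clf.fit(np.stack([extract(im) for im in X_train]), y_train)
# acc = clf.score(np.stack([extract(im) for im in X_test]), y_test)
```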

PMID:38969717 | DOI:10.1038/s41598-024-65103-3

Categories: Literature Watch

An appearance quality classification method for Auricularia auricula based on deep learning

Fri, 2024-07-05 06:00

Sci Rep. 2024 Jul 5;14(1):15516. doi: 10.1038/s41598-023-50739-4.

ABSTRACT

The intelligent appearance quality classification of Auricularia auricula is of great significance for promoting this industry. This paper proposes an appearance quality classification method for Auricularia auricula based on an improved Faster Region-based Convolutional Neural Network (Faster RCNN) framework. The original Faster RCNN is improved by establishing a multiscale feature fusion detection model to increase the accuracy and real-time performance of the model. The multiscale feature fusion detection model makes full use of shallow feature information to complete target detection: it fuses shallow features, which are rich in fine detail, with deep features, which carry strong semantic information. Since the fusion algorithm directly reuses information already present in the feature extraction network, no additional computation is introduced, and the fused features retain more of the original detailed feature information. Therefore, the improved Faster RCNN can improve the final detection rate without sacrificing speed. Compared with the original Faster RCNN model, the mean average precision (mAP) of the improved Faster RCNN is increased by 2.13%. The average precision (AP) of the first-level Auricularia auricula remains almost unchanged at a high level, the AP of the second-level Auricularia auricula is increased by nearly 5%, and the AP of the third-level Auricularia auricula is increased by 1%. The improved Faster RCNN raises the frame rate from 6.81 frames per second for the original Faster RCNN to 13.5. The influence of complex environments and image resolution on Auricularia auricula detection is also explored.
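
As a rough illustration of the multiscale fusion idea described above, the following PyTorch sketch upsamples a deep, semantically rich feature map and merges it with a shallow, detail-rich one before the result is passed to the detection head. Channel widths, the projection layers, and the fusion operator are assumptions rather than the paper's exact configuration.

```python
# Sketch of shallow/deep feature fusion in PyTorch. Channel counts, the
# upsampling choice, and the 1x1 projections are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseShallowDeep(nn.Module):
    def __init__(self, shallow_ch=256, deep_ch=1024, out_ch=256):
        super().__init__()
        # project both maps to a common channel width before fusing
        self.proj_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        # upsample the deep (low-resolution, semantic) map to the shallow map's size
        deep_up = F.interpolate(self.proj_deep(deep), size=shallow.shape[-2:],
                                mode="nearest")
        fused = self.proj_shallow(shallow) + deep_up   # element-wise fusion
        return self.smooth(fused)                      # fed to the RPN / detection head

# fused = FuseShallowDeep()(torch.randn(1, 256, 100, 100), torch.randn(1, 1024, 25, 25))
```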

PMID:38969651 | DOI:10.1038/s41598-023-50739-4

Categories: Literature Watch

Predicting the Prognosis of HIFU Ablation of Uterine Fibroids Using a Deep Learning-Based 3D Super-Resolution DWI Radiomics Model: A Multicenter Study

Fri, 2024-07-05 06:00

Acad Radiol. 2024 Jul 4:S1076-6332(24)00384-2. doi: 10.1016/j.acra.2024.06.027. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: To assess the feasibility and efficacy of a deep learning-based three-dimensional (3D) super-resolution diffusion-weighted imaging (DWI) radiomics model in predicting the prognosis of high-intensity focused ultrasound (HIFU) ablation of uterine fibroids.

METHODS: This retrospective study included 360 patients with uterine fibroids who received HIFU treatment, including Center A (training set: N = 240; internal testing set: N = 60) and Center B (external testing set: N = 60) and were classified as having a favorable or unfavorable prognosis based on the postoperative non-perfusion volume ratio. A deep transfer learning approach was used to construct super-resolution DWI (SR-DWI) based on conventional high-resolution DWI (HR-DWI), and 1198 radiomics features were extracted from manually segmented regions of interest in both image types. Following data preprocessing and feature selection, radiomics models were constructed for HR-DWI and SR-DWI using Support Vector Machine (SVM), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM) algorithms, with performance evaluated using area under the curve (AUC) and decision curves.
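
A hedged sketch of the model-comparison step in these methods: fit SVM, RF, and LightGBM classifiers on a radiomics feature matrix and report test-set AUC. The preprocessing and feature-selection steps shown are simplified placeholders for the pipeline the authors describe.

```python
# Sketch of fitting the three classifiers on a radiomics feature matrix and
# comparing test-set AUC. Preprocessing and feature selection are simplified
# placeholders, not the study's exact pipeline.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier

def evaluate(X_train, y_train, X_test, y_test):
    models = {
        "SVM": SVC(kernel="rbf", probability=True),
        "RF": RandomForestClassifier(n_estimators=500, random_state=0),
        "LightGBM": LGBMClassifier(n_estimators=500, random_state=0),
    }
    aucs = {}
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=min(50, X_train.shape[1])),
                             clf)
        pipe.fit(X_train, y_train)
        proba = pipe.predict_proba(X_test)[:, 1]
        aucs[name] = roc_auc_score(y_test, proba)
    return aucs  # e.g. {"SVM": ..., "RF": ..., "LightGBM": ...}
```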

RESULTS: All DWI radiomics models demonstrated superior AUC in predicting the prognosis of HIFU-ablated uterine fibroids compared to expert radiologists (AUC: 0.706, 95% CI: 0.647-0.748). Across the machine learning algorithms, the HR-DWI model achieved AUC values of 0.805 (95% CI: 0.679-0.931) with SVM, 0.797 (95% CI: 0.672-0.921) with RF, and 0.770 (95% CI: 0.631-0.908) with LightGBM. The SR-DWI model outperformed the HR-DWI model (P < 0.05) across all algorithms, with AUC values of 0.868 (95% CI: 0.775-0.960) with SVM, 0.824 (95% CI: 0.715-0.934) with RF, and 0.821 (95% CI: 0.709-0.933) with LightGBM. Decision curve analysis further confirmed the clinical value of the models.

CONCLUSION: The deep learning-based 3D SR-DWI radiomics model demonstrated favorable feasibility and effectiveness in predicting the prognosis of HIFU-ablated uterine fibroids, outperforming both the HR-DWI model and assessment by expert radiologists.

PMID:38969576 | DOI:10.1016/j.acra.2024.06.027

Categories: Literature Watch

Perceptible landscape patterns reveal invisible socioeconomic profiles of cities

Fri, 2024-07-05 06:00

Sci Bull (Beijing). 2024 Jun 22:S2095-9273(24)00447-X. doi: 10.1016/j.scib.2024.06.022. Online ahead of print.

ABSTRACT

Urban landscape is directly perceived by residents and is a significant symbol of urbanization development. A comprehensive assessment of urban landscapes is crucial for guiding the development of inclusive, resilient, and sustainable cities and human settlements. Previous studies have primarily analyzed two-dimensional landscape indicators derived from satellite remote sensing, potentially overlooking the valuable insights provided by the three-dimensional configuration of landscapes. This limitation arises from the high cost of acquiring large-area three-dimensional data and the lack of effective assessment indicators. Here, we propose four urban landscape indicators in three dimensions (UL3D): greenness, grayness, openness, and crowding. We construct the UL3D using 4.03 million street view images from 303 major cities in China, employing a deep learning approach. We combine urban background and two-dimensional urban landscape indicators with UL3D to predict the socioeconomic profiles of cities. The results show that the UL3D indicators differ from two-dimensional landscape indicators, with a low average correlation coefficient of 0.31 between them. Urban landscapes exhibited a changing point in 2018-2019 due to new urbanization initiatives, with the growth of grayness and crowding slowing while openness increased. The incorporation of UL3D indicators significantly enhances the explanatory power of the regression model for predicting socioeconomic profiles. Specifically, GDP per capita, urban population rate, built-up area per capita, and hospital count correspond to improvements of 25.0%, 19.8%, 35.5%, and 19.2%, respectively. These findings indicate that UL3D indicators have the potential to reflect the socioeconomic profiles of cities.
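
The "added explanatory power" comparison reported here amounts to fitting a regression on the baseline indicators and refitting it with the four UL3D indicators appended, then comparing the fit. The sketch below illustrates that comparison with cross-validated R²; the column names and baseline indicators are hypothetical.

```python
# Sketch of the R^2 comparison with and without the UL3D indicators.
# Baseline column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def r2_gain(df: pd.DataFrame, target: str = "gdp_per_capita"):
    base_cols = ["pop_density", "green_cover_2d", "builtup_ratio_2d"]   # hypothetical
    ul3d_cols = ["greenness", "grayness", "openness", "crowding"]
    y = df[target]
    r2_base = cross_val_score(LinearRegression(), df[base_cols], y,
                              cv=5, scoring="r2").mean()
    r2_full = cross_val_score(LinearRegression(), df[base_cols + ul3d_cols], y,
                              cv=5, scoring="r2").mean()
    return r2_base, r2_full, r2_full - r2_base
```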

PMID:38969538 | DOI:10.1016/j.scib.2024.06.022

Categories: Literature Watch

Using multi-label ensemble CNN classifiers to mitigate labelling inconsistencies in patch-level Gleason grading

Fri, 2024-07-05 06:00

PLoS One. 2024 Jul 5;19(7):e0304847. doi: 10.1371/journal.pone.0304847. eCollection 2024.

ABSTRACT

This paper presents a novel approach to enhance the accuracy of patch-level Gleason grading in prostate histopathology images, a critical task in the diagnosis and prognosis of prostate cancer. This study shows that Gleason grading accuracy can be improved by addressing the prevalent issue of label inconsistencies in the SICAPv2 prostate dataset, which employs a majority voting scheme for patch-level labels. We propose a multi-label ensemble deep-learning classifier that effectively mitigates these inconsistencies and yields more accurate results than state-of-the-art works. Specifically, our approach leverages the strengths of three different one-vs-all deep learning models in an ensemble to learn diverse features from the histopathology images and individually indicate the presence of one or more Gleason grades (G3, G4, and G5) in each patch. These deep learning models have been trained using transfer learning to fine-tune a variant of the ResNet18 CNN classifier chosen after an extensive ablation study. Experimental results demonstrate that our multi-label ensemble classifier significantly outperforms traditional single-label classifiers reported in the literature by at least 14% in accuracy and 4% in F1-score. These results underscore the potential of our proposed machine learning approach to improve the accuracy and consistency of prostate cancer grading.
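
A minimal sketch of the one-vs-all ensemble described above: three binary ResNet18 classifiers, one per Gleason grade, whose sigmoid outputs are combined into a multi-label prediction for each patch. The fine-tuning recipe, decision thresholds, and head design are assumptions, not the authors' exact configuration.

```python
# Sketch of a one-vs-all multi-label ensemble of ResNet18 heads (G3, G4, G5).
# Thresholds and the transfer-learning recipe are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def make_binary_resnet18():
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, 1)   # single logit: grade present / absent
    return net

class OneVsAllEnsemble(nn.Module):
    """Multi-label prediction over grades G3, G4, G5 for one patch."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleDict({g: make_binary_resnet18() for g in ("G3", "G4", "G5")})

    def forward(self, patch):                       # patch: (N, 3, H, W)
        logits = {g: head(patch).squeeze(1) for g, head in self.heads.items()}
        return {g: torch.sigmoid(l) for g, l in logits.items()}

# probs = OneVsAllEnsemble()(torch.randn(4, 3, 224, 224))
# multilabel = {g: (p > 0.5) for g, p in probs.items()}
```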

PMID:38968206 | DOI:10.1371/journal.pone.0304847

Categories: Literature Watch

Machine learning and deep learning approaches for enhanced prediction of hERG blockade: a comprehensive QSAR modeling study

Fri, 2024-07-05 06:00

Expert Opin Drug Metab Toxicol. 2024 Jul 5. doi: 10.1080/17425255.2024.2377593. Online ahead of print.

ABSTRACT

BACKGROUND: Cardiotoxicity is a major cause of drug withdrawal. The hERG channel, which regulates ion flow, is pivotal for heart and nervous system function, and its blockade is a concern in drug development. Predicting hERG blockade is therefore essential for identifying cardiac safety issues. Various QSAR models exist, but their performance varies, and continued efforts are needed to enhance prediction accuracy using emerging deep learning algorithms.

STUDY DESIGN AND METHOD: Using a large training dataset, six individual QSAR models were developed. Additionally, three ensemble models were constructed. All models were evaluated using 10-fold cross-validations and two external datasets.
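
A sketch of this evaluation protocol, assuming a featurized training set: 10-fold cross-validation scored with the Matthews correlation coefficient, plus a soft-voting ensemble. The individual estimators shown are generic stand-ins for the six QSAR models, which are not specified in the abstract.

```python
# Sketch of 10-fold cross-validation scored with MCC, plus a soft-voting
# ensemble. The member models are placeholders, not the study's six models.
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression

def mcc_cv(X, y):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    members = [
        ("lr", LogisticRegression(max_iter=2000)),
        ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ]
    scores = {}
    for name, clf in members:
        scores[name] = cross_val_score(clf, X, y, cv=cv,
                                       scoring="matthews_corrcoef").mean()
    ensemble = VotingClassifier(members, voting="soft")
    scores["ensemble"] = cross_val_score(ensemble, X, y, cv=cv,
                                         scoring="matthews_corrcoef").mean()
    return scores
```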

RESULTS: The 10-fold cross-validations resulted in Matthews correlation coefficient (MCC) values from 0.682 to 0.730, surpassing the best-reported model on the same dataset (0.689). External validations yielded MCC values from 0.520 to 0.715 for the first dataset, exceeding those of previously reported models (0-0.599). For the second dataset, MCC values fell between 0.025 and 0.215, aligning with those of reported models (0.112-0.220).

CONCLUSIONS: The developed models can assist the pharmaceutical industry and regulatory agencies in predicting hERG blockade activity, thereby enhancing safety assessments and reducing the risk of adverse cardiac events associated with new drug candidates.

PMID:38968091 | DOI:10.1080/17425255.2024.2377593

Categories: Literature Watch

Deep-KEDI: Deep learning-based zigzag generative adversarial network for encryption and decryption of medical images

Fri, 2024-07-05 06:00

Technol Health Care. 2024 Jun 20. doi: 10.3233/THC-231927. Online ahead of print.

ABSTRACT

BACKGROUND: Medical imaging techniques have advanced to the point where security has become a basic requirement for all applications, both to protect data and to secure its transmission over the internet. Clinical images hold personal and sensitive patient data, and their disclosure violates patients' right to privacy and can carry legal ramifications for hospitals.

OBJECTIVE: In this research, a novel deep learning-based key generation network (Deep-KEDI) is designed to produce the secure key used for decrypting and encrypting medical images.

METHODS: Initially, medical images are pre-processed by adding the speckle noise using discrete ripplet transform before encryption and are removed after decryption for more security. In the Deep-KEDI model, the zigzag generative adversarial network (ZZ-GAN) is used as the learning network to generate the secret key.

RESULTS: The proposed ZZ-GAN is used for secure encryption by generating three different zigzag patterns (vertical, horizontal, diagonal) of encrypted images together with their keys. The zigzag cipher uses an XOR operation in both encryption and decryption with the proposed ZZ-GAN. Encrypting the original image requires a secret key generated during encryption; after identification, the encrypted image is decrypted using the generated key to reverse the encryption process. Finally, the speckle noise is removed from the decrypted image to reconstruct the original image.
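
The XOR step of a zigzag-style cipher is simple to illustrate. In the sketch below the key is random bytes and the traversal follows anti-diagonals; in Deep-KEDI the key would come from the ZZ-GAN and the three zigzag patterns differ, so both choices here are assumptions. XOR is its own inverse, which is why the same routine both encrypts and decrypts.

```python
# Minimal sketch of an XOR zigzag cipher on a grayscale image array.
# The random key and the anti-diagonal traversal are illustrative assumptions;
# the ZZ-GAN key generator is not reproduced here.
import numpy as np

def zigzag_indices(h, w):
    """Visit pixels along anti-diagonals (a simple zigzag ordering)."""
    order = []
    for s in range(h + w - 1):
        diag = [(i, s - i) for i in range(h) if 0 <= s - i < w]
        order.extend(diag if s % 2 == 0 else diag[::-1])
    return order

def xor_zigzag(img, key):
    """XOR the image with the key stream along the zigzag path (self-inverse)."""
    out = img.copy()
    for k, (i, j) in enumerate(zigzag_indices(*img.shape)):
        out[i, j] ^= key[k % len(key)]
    return out

# img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
# key = np.frombuffer(np.random.bytes(32), dtype=np.uint8)
# assert np.array_equal(xor_zigzag(xor_zigzag(img, key), key), img)  # decrypt == encrypt
```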

CONCLUSION: According to the experiments, the Deep-KEDI model generates secret keys with an information entropy of 7.45 that is particularly suitable for securing medical images.

PMID:38968065 | DOI:10.3233/THC-231927

Categories: Literature Watch

Forecasting deep learning-based risk assessment of vector-borne diseases using hybrid methodology

Fri, 2024-07-05 06:00

Technol Health Care. 2024 Jun 21. doi: 10.3233/THC-240046. Online ahead of print.

ABSTRACT

BACKGROUND: Dengue fever is rapidly becoming Malaysia's most pressing health concern, as reported cases have nearly doubled over the past decade. Without efficacious antiviral medications, vector control remains the primary strategy for battling dengue, while the recently introduced tetravalent immunization is still being evaluated. Vector-borne illnesses, transmitted by blood-feeding arthropods such as fleas, parasites, and mosquitoes, cause significant human sickness and represent a growing risk. Existing approaches, including machine learning (ML) models, weather-driven mechanisms, and numerical time series, often generate inaccurate and unstable predictions, and improving prediction accuracy requires a thorough grasp of the various contributing factors.

OBJECTIVE: In this research, we propose a novel method for forecasting vector-borne disease risk using Radial Basis Function Networks (RBFNs) and the Darts Game Optimizer (DGO) algorithm.

METHODS: The proposed approach entails training the RBFNs with historical disease data and enhancing their parameters with the DGO algorithm. To prepare the RBFNs, we used a massive dataset of vector-borne disease incidences, climate variables, and geographical data. The DGO algorithm proficiently searches the RBFN parameter space, fine-tuning the model's architecture to increase forecast accuracy.
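
A minimal sketch of a Gaussian radial basis function network for this forecasting step, with k-means centers and a ridge-regression output layer. The Darts Game Optimizer is not reproduced; a plain random search over the kernel width is shown as an explicitly labeled placeholder for the parameter tuning it performs.

```python
# Sketch of a Gaussian RBF network (k-means centers + ridge output layer).
# The DGO tuning step is replaced by a simple search over gamma as a placeholder.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class RBFN:
    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        # Gaussian activations of each sample w.r.t. each center
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers_ = KMeans(self.n_centers, n_init=10,
                               random_state=0).fit(X).cluster_centers_
        self.head_ = Ridge(alpha=1e-3).fit(self._phi(X), y)
        return self

    def predict(self, X):
        return self.head_.predict(self._phi(X))

# Placeholder for the DGO step: pick gamma by validation error.
# gammas = np.logspace(-3, 1, 20)
# best_gamma = min(gammas, key=lambda g: ((RBFN(gamma=g).fit(X_tr, y_tr)
#                                          .predict(X_val) - y_val) ** 2).mean())
```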

RESULTS: RBFN-DGO provides a promising method for predicting vector-borne disease risk. This study advances predictive modeling in public health by shedding light on how to effectively control vector-borne diseases to protect human populations. We conducted extensive testing to compare the performance of the proposed method against standard optimization methods and alternative forecasting methods.

CONCLUSION: According to the findings, the RBFN-DGO model beats others in terms of accuracy and robustness in predicting the likelihood of vector-borne illness occurrences.

PMID:38968030 | DOI:10.3233/THC-240046

Categories: Literature Watch

MS-DINO: Masked Self-Supervised Distributed Learning Using Vision Transformer

Fri, 2024-07-05 06:00

IEEE J Biomed Health Inform. 2024 Jul 5;PP. doi: 10.1109/JBHI.2024.3423797. Online ahead of print.

ABSTRACT

Despite promising advancements in deep learning in medical domains, challenges still remain owing to data scarcity, compounded by privacy concerns and data ownership disputes. Recent explorations of distributed-learning paradigms, particularly federated learning, have aimed to mitigate these challenges. However, these approaches are often encumbered by substantial communication and computational overhead, and potential vulnerabilities in privacy safeguards. Therefore, we propose a self-supervised masked sampling distillation technique called MS-DINO, tailored to the vision transformer architecture. This approach removes the need for incessant communication and strengthens privacy using a modified encryption mechanism inherent to the vision transformer while minimizing the computational burden on client-side devices. Rigorous evaluations across various tasks confirmed that our method outperforms existing self-supervised distributed learning strategies and fine-tuned baselines.

PMID:38968015 | DOI:10.1109/JBHI.2024.3423797

Categories: Literature Watch

Training and assessing convolutional neural network performance in automatic vascular segmentation using Ga-68 DOTATATE PET/CT

Fri, 2024-07-05 06:00

Int J Cardiovasc Imaging. 2024 Jul 5. doi: 10.1007/s10554-024-03171-2. Online ahead of print.

ABSTRACT

To evaluate the performance of a convolutional neural network (nnU-Net) in the assessment of vascular contours, calcification, and PET tracer activity using Ga-68 DOTATATE PET/CT. Patients who underwent Ga-68 DOTATATE PET/CT imaging over a 12-month period for neuroendocrine investigation were included. Manual cardiac and aortic segmentations were performed by an experienced observer. Scans were randomly allocated in a 64:16:20 ratio for training, validation, and testing of the nnU-Net model. PET tracer uptake and calcium scoring were compared between segmentation methods and different observers. 116 patients (53.5% female) with a median age of 64.5 years (range 23-79) were included. There were strong, positive correlations between all segmentations (mostly r > 0.98). There were no significant differences between manual and AI segmentation of SUVmean for the global cardiac (mean ± SD 0.71 ± 0.22 vs. 0.71 ± 0.22; mean diff 0.001 ± 0.008, p > 0.05), ascending aorta (mean ± SD 0.44 ± 0.14 vs. 0.44 ± 0.14; mean diff 0.002 ± 0.01, p > 0.05), aortic arch (mean ± SD 0.44 ± 0.10 vs. 0.43 ± 0.10; mean diff 0.008 ± 0.16, p > 0.05), and descending aorta (mean ± SD 0.58 ± 0.12 vs. 0.57 ± 0.12; mean diff 0.01 ± 0.03, p > 0.05) contours. There was excellent agreement between the majority of manual and AI segmentation measures (r ≥ 0.80) and in all vascular contour calcium scores. Compared with the manual segmentation approach, the CNN required a significantly lower workflow time. AI segmentation of vascular contours using nnU-Net resulted in very similar measures of PET tracer uptake and vascular calcification compared to an experienced observer and significantly reduced workflow time.
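
The agreement analysis described here reduces to computing SUVmean inside each pair of masks and summarizing the differences. The sketch below is a hypothetical rendering of that comparison (mean difference, its SD, and Pearson correlation); array layouts and units are assumptions.

```python
# Sketch of the manual-vs-AI agreement analysis: SUVmean inside each binary
# mask, then mean difference and Pearson r across patients. Shapes are assumed.
import numpy as np
from scipy.stats import pearsonr

def suv_mean(suv_volume, mask):
    """Mean SUV of voxels inside a binary mask."""
    return float(suv_volume[mask.astype(bool)].mean())

def agreement(suv_volumes, manual_masks, ai_masks):
    manual = np.array([suv_mean(v, m) for v, m in zip(suv_volumes, manual_masks)])
    auto = np.array([suv_mean(v, m) for v, m in zip(suv_volumes, ai_masks)])
    diff = auto - manual
    r, _ = pearsonr(manual, auto)
    return {"mean_diff": diff.mean(), "sd_diff": diff.std(ddof=1), "pearson_r": r}
```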

PMID:38967895 | DOI:10.1007/s10554-024-03171-2

Categories: Literature Watch

Fast, high-quality, and unshielded 0.2 T low-field mobile MRI using minimal hardware resources

Fri, 2024-07-05 06:00

MAGMA. 2024 Jul 5. doi: 10.1007/s10334-024-01184-5. Online ahead of print.

ABSTRACT

OBJECTIVE: To propose a deep learning-based low-field mobile MRI strategy for fast, high-quality, unshielded imaging using minimal hardware resources.

METHODS: Firstly, we analyze the correlation of EMI signals between the sensing coil and the MRI coil to preliminarily verify the feasibility of active EMI shielding using a single sensing coil. Then, a powerful deep learning EMI elimination model is proposed, which can accurately predict the EMI components in the MRI coil signals using EMI signals from at least one sensing coil. Further, deep learning models with different task objectives (super-resolution and denoising) are strategically stacked for multi-level post-processing to enable fast and high-quality low-field MRI. Finally, extensive phantom and brain experiments were conducted on a home-built 0.2 T mobile brain scanner for the evaluation of the proposed strategy.
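
The EMI-elimination step amounts to learning a mapping from the sensing-coil signal to the EMI component picked up by the MRI coil and subtracting the prediction. The small 1D CNN below is an illustrative stand-in for the paper's model, not its architecture.

```python
# Sketch of EMI elimination: predict the EMI component in the MRI coil from the
# sensing-coil signal, then subtract it. The 1D CNN is an illustrative stand-in.
import torch
import torch.nn as nn

class EMIPredictor(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=9, padding=4),
        )

    def forward(self, sensing):              # sensing: (N, channels, T)
        return self.net(sensing)             # predicted EMI in the MRI coil, (N, 1, T)

# cleaned = mri_signal - EMIPredictor()(sensing_signal)   # subtract the predicted EMI
```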

RESULTS: 20 healthy volunteers were recruited to participate in the experiment. The results show that the proposed strategy enables the 0.2 T scanner to generate images with sufficient anatomical information and diagnostic value under unshielded conditions using a single sensing coil. In particular, the EMI elimination outperforms the state-of-the-art deep learning methods and numerical computation methods. In addition, 2 × super-resolution (DDSRNet) and denoising (SwinIR) techniques enable further improvements in imaging speed and quality.

DISCUSSION: The proposed strategy enables low-field mobile MRI scanners to achieve fast, high-quality imaging under unshielded conditions using minimal hardware resources, which has great significance for the widespread deployment of low-field mobile MRI scanners.

PMID:38967865 | DOI:10.1007/s10334-024-01184-5

Categories: Literature Watch

Enhanced deep learning model for detection and grading of lumbar disc herniation from MRI

Fri, 2024-07-05 06:00

Med Biol Eng Comput. 2024 Jul 5. doi: 10.1007/s11517-024-03161-5. Online ahead of print.

ABSTRACT

Lumbar disc herniation is one of the most prevalent orthopedic issues in clinical practice. The lumbar spine is a crucial joint for movement and weight-bearing, so back pain can significantly impact patients' everyday lives and is prone to recurrence. The pathogenesis of lumbar disc herniation is complex and diverse, making it difficult to identify and assess once it has occurred. Magnetic resonance imaging (MRI) is the most effective method for detecting the injury, requiring careful examination by medical experts to determine its extent. However, this examination process is time-consuming and susceptible to errors. This study proposes an enhanced model, BE-YOLOv5, for hierarchical detection of lumbar disc herniation from MRI images. To tailor the training of the model to the task requirements, a specialized dataset was created; the data were cleaned and improved before final calibration, yielding a training set of 2083 data points and a test set of 100 data points. The YOLOv5 model was enhanced by integrating the ECANet attention module with a 3 × 3 convolutional kernel size, substituting its feature extraction network with a BiFPN, and applying structural pruning. The model achieved an 89.7% mean average precision (mAP) and 48.7 frames per second (FPS) on the test set. Compared with Faster R-CNN, the original YOLOv5, and the latest YOLOv8, this model performs better in terms of both accuracy and speed for the detection and grading of lumbar disc herniation from MRI, validating the effectiveness of the multiple enhancements. The proposed model is expected to be used for diagnosing lumbar disc herniation from MRI images with efficient and high-precision performance.
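
As an illustration of the attention component mentioned above, the block below is a standard ECA-style channel attention module in PyTorch. The abstract cites a 3 × 3 kernel, whereas the original ECA design uses a 1D convolution across channel descriptors, as shown here; the kernel size and placement inside the network are therefore assumptions.

```python
# Sketch of an ECA-style channel attention block. Kernel size and where it is
# inserted into the YOLOv5 backbone are assumptions.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                              # x: (N, C, H, W)
        w = self.pool(x)                               # (N, C, 1, 1)
        w = self.conv(w.squeeze(-1).transpose(1, 2))   # 1D conv across channels
        w = torch.sigmoid(w.transpose(1, 2).unsqueeze(-1))
        return x * w                                   # reweight channels
```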

PMID:38967693 | DOI:10.1007/s11517-024-03161-5

Categories: Literature Watch

Fully Automatic Quantitative Measurement of Equilibrium Radionuclide Angiocardiography Using a Convolutional Neural Network

Fri, 2024-07-05 06:00

Clin Nucl Med. 2024 Aug 1;49(8):727-732. doi: 10.1097/RLU.0000000000005275. Epub 2024 May 31.

ABSTRACT

PURPOSE: The aim of this study was to generate deep learning-based regions of interest (ROIs) from equilibrium radionuclide angiography datasets for left ventricular ejection fraction (LVEF) measurement.

PATIENTS AND METHODS: Manually drawn ROIs (mROIs) on end-systolic and end-diastolic images were extracted from reports in a Picture Archiving and Communications System. To reduce observer variability, preprocessed ROIs (pROIs) were delineated using a 41% threshold of the maximal pixel counts of the extracted mROIs and were labeled as ground-truth. Background ROIs were automatically created using an algorithm to identify areas with minimum counts within specified probability areas around the end-systolic ROI. A 2-dimensional U-Net convolutional neural network architecture was trained to generate deep learning-based ROIs (dlROIs) from pROIs. The model's performance was evaluated using Lin's concordance correlation coefficient (CCC). Bland-Altman plots were used to assess bias and 95% limits of agreement.
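
Two pieces of this pipeline are easy to make concrete: the 41%-of-maximum threshold used to derive the preprocessed ROIs, and Lin's concordance correlation coefficient used to compare LVEF measurements. The sketch below is written from the textual description, not the authors' code.

```python
# Sketch of the 41% threshold ROI preprocessing and Lin's concordance
# correlation coefficient, written from the description above.
import numpy as np

def threshold_roi(counts, mask, frac=0.41):
    """Keep pixels inside the drawn ROI that exceed 41% of its maximum count."""
    roi_counts = np.where(mask, counts, 0)
    return roi_counts >= frac * roi_counts.max()

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```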

RESULTS: A total of 41,462 scans (19,309 patients) were included. Strong concordance was found between LVEF measurements from dlROIs and pROIs (CCC = 85.6%; 95% confidence interval, 85.4%-85.9%), and between LVEF measurements from dlROIs and mROIs (CCC = 86.1%; 95% confidence interval, 85.8%-86.3%). In the Bland-Altman analysis, the mean differences and 95% limits of agreement of the LVEF measurements were -0.6% and -6.6% to 5.3%, respectively, for dlROIs and pROIs, and -0.4% and -6.3% to 5.4% for dlROIs and mROIs, respectively. In 37,537 scans (91%), the absolute LVEF difference between dlROIs and mROIs was <5%.

CONCLUSIONS: Our 2-dimensional U-Net convolutional neural network architecture showed excellent performance in generating LV ROIs from equilibrium radionuclide angiography scans. It may enhance the convenience and reproducibility of LVEF measurements.

PMID:38967505 | DOI:10.1097/RLU.0000000000005275

Categories: Literature Watch

Advancing clinical understanding of surface electromyography biofeedback: bridging research, teaching, and commercial applications

Fri, 2024-07-05 06:00

Expert Rev Med Devices. 2024 Jul 5. doi: 10.1080/17434440.2024.2376699. Online ahead of print.

ABSTRACT

INTRODUCTION: Expanding the use of surface electromyography-biofeedback (EMG-BF) devices in different therapeutic settings highlights the gradually evolving role of visualizing muscle activity in the rehabilitation process. This review evaluates their concepts, uses, and trends, drawing on evidence-based research.

AREAS COVERED: This review dissects the anatomy of EMG-BF systems, emphasizing their transformative integration with machine-learning (ML) and deep-learning (DL) paradigms. Advances such as the application of sophisticated DL architectures for high-density EMG data interpretation, optimization techniques for heightened DL model performance, and the fusion of EMG with electroencephalogram (EEG) signals have been spotlighted for enhancing biomechanical analyses in rehabilitation. The literature survey also categorizes EMG-BF devices based on functionality and clinical usage, supported by insights from commercial sectors.

EXPERT OPINION: The current landscape of EMG-BF is rapidly evolving, chiefly propelled by innovations in artificial intelligence (AI). The incorporation of ML and DL into EMG-BF systems augments their accuracy, reliability, and scope, marking a leap in patient care. Despite challenges in model interpretability and signal noise, ongoing research promises to address these complexities, refining biofeedback modalities. The integration of AI not only predicts patient-specific recovery timelines but also tailors therapeutic interventions, heralding a new era of personalized medicine in rehabilitation and emotional detection.

PMID:38967375 | DOI:10.1080/17434440.2024.2376699

Categories: Literature Watch

GraphADT: Empowering interpretable predictions of acute dermal toxicity with Multi-View graph pooling and structure remapping

Fri, 2024-07-05 06:00

Bioinformatics. 2024 Jul 4:btae438. doi: 10.1093/bioinformatics/btae438. Online ahead of print.

ABSTRACT

MOTIVATION: Accurate prediction of acute dermal toxicity (ADT) is essential for the safe and effective development of contact drugs. Currently, graph neural networks (GNNs), a form of deep learning technology, accurately model the structure of compound molecules, enhancing predictions of their ADT. However, many existing methods emphasize atom-level information transfer and overlook crucial data conveyed by molecular bonds and their interrelationships. Additionally, these methods often generate "equal" node representations across the entire graph, failing to accentuate "important" substructures like functional groups, pharmacophores, and toxicophores, thereby reducing interpretability.

RESULTS: We introduce a novel model, GraphADT, utilizing structure remapping and multi-view graph pooling technologies to accurately predict compound ADT. Initially, our model applies structure remapping to better delineate bonds, transforming "bonds" into new nodes and "bond-atom-bond" interactions into new edges, thereby reconstructing the compound molecular graph. Subsequently, we employ multi-view graph pooling to amalgamate data from various perspectives, minimizing biases inherent to single-view analyses. Following this, the model generates a robust node ranking collaboratively, emphasizing critical nodes or substructures to enhance model interpretability. Lastly, we apply a graph comparison learning strategy to train both the original and structure remapped molecular graphs, deriving the final molecular representation. Experimental results on public datasets indicate that the GraphADT model outperforms existing state-of-the-art models. The GraphADT model has been demonstrated to effectively predict compound ADT, offering potential guidance for the development of contact drugs and related treatments.
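
The structure-remapping step, in which bonds become nodes and "bond-atom-bond" adjacencies become edges, corresponds to taking the line graph of the molecular graph. The sketch below illustrates that transform with RDKit and networkx; it is an interpretation of the description above, not the authors' implementation.

```python
# Sketch of the structure-remapping idea as a line-graph transform: nodes of
# the result are bonds, edges link bonds that share an atom.
import networkx as nx
from rdkit import Chem

def bond_graph(smiles: str) -> nx.Graph:
    mol = Chem.MolFromSmiles(smiles)
    g = nx.Graph()
    for bond in mol.GetBonds():
        g.add_edge(bond.GetBeginAtomIdx(), bond.GetEndAtomIdx(),
                   bond_type=str(bond.GetBondType()))
    # line graph: bonds become nodes; "bond-atom-bond" adjacency becomes an edge
    return nx.line_graph(g)

# lg = bond_graph("CC(=O)Oc1ccccc1C(=O)O")   # aspirin
# lg.number_of_nodes() equals the number of bonds in the molecule
```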

AVAILABILITY AND IMPLEMENTATION: Our code and data are accessible at: https://github.com/mxqmxqmxq/GraphADT.git.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38967119 | DOI:10.1093/bioinformatics/btae438

Categories: Literature Watch

Automatic soft-tissue analysis on orthodontic frontal and lateral facial photographs based on deep learning

Fri, 2024-07-05 06:00

Orthod Craniofac Res. 2024 Jul 5. doi: 10.1111/ocr.12830. Online ahead of print.

ABSTRACT

BACKGROUND: To establish an automatic soft-tissue analysis model based on deep learning that performs landmark detection and measurement calculation on orthodontic facial photographs, achieving a more comprehensive quantitative evaluation of soft tissues.

METHODS: A total of 578 frontal photographs and 450 lateral photographs of orthodontic patients were collected to construct datasets. All images were manually annotated by two orthodontists with 43 frontal-image landmarks and 17 lateral-image landmarks. Automatic landmark detection models were established, consisting of a high-resolution network, a feature fusion module based on depthwise separable convolution, and a prediction model based on pixel shuffle. Ten measurements for frontal images and eight measurements for lateral images were defined. The respective test sets were used to evaluate model performance. The mean radial error of the landmarks and the measurement error were calculated and statistically analysed to evaluate their reliability.
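
A hedged sketch of the prediction head outlined in these methods: a depthwise separable convolution followed by pixel shuffle to upsample per-landmark heatmaps, together with the mean radial error used for evaluation. Channel counts and the upscale factor are assumptions.

```python
# Sketch of a depthwise-separable-conv + pixel-shuffle landmark head and the
# mean radial error metric. Channel counts and upscale factor are assumptions.
import torch
import torch.nn as nn

class LandmarkHead(nn.Module):
    def __init__(self, in_ch=256, n_landmarks=43, upscale=2):
        super().__init__()
        out_ch = n_landmarks * upscale ** 2
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.shuffle = nn.PixelShuffle(upscale)        # -> (N, n_landmarks, 2H, 2W)

    def forward(self, feats):
        return self.shuffle(self.pointwise(self.depthwise(feats)))

def mean_radial_error(pred_xy, true_xy):
    """Average Euclidean distance (in pixels) between predicted and annotated landmarks."""
    return torch.linalg.norm(pred_xy - true_xy, dim=-1).mean()
```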

RESULTS: The mean radial error was 14.44 ± 17.20 pixels for the landmarks in the frontal images and 13.48 ± 17.12 pixels for the landmarks in the lateral images. There was no statistically significant difference between the model predictions and the manual annotation measurements except for the mid facial-lower facial height index. A total of 14 measurements showed high consistency.

CONCLUSION: Based on deep learning, we established automatic soft-tissue analysis models for orthodontic facial photographs that can automatically detect 43 frontal-image landmarks and 17 lateral-image landmarks while performing comprehensive soft-tissue measurements. The models can assist orthodontists in efficient and accurate quantitative soft-tissue evaluation for clinical application.

PMID:38967085 | DOI:10.1111/ocr.12830

Categories: Literature Watch

Research progress of deep learning applications in mass spectrometry imaging data analysis

Fri, 2024-07-05 06:00

Se Pu. 2024 Jul;42(7):669-680. doi: 10.3724/SP.J.1123.2023.10035.

ABSTRACT

Mass spectrometry imaging (MSI) is a promising method for characterizing the spatial distribution of compounds. Given the diversified development of acquisition methods and continuous improvements in the sensitivity of this technology, both the total amount of generated data and the complexity of analysis have increased exponentially, posing growing challenges for data postprocessing, such as large amounts of noise, background signal interference, and image registration deviations caused by sample position changes and scan deviations. Deep learning (DL) is a powerful tool widely used in data analysis and image reconstruction. It enables automatic feature extraction by building and training a neural network model and achieves comprehensive, in-depth analysis of target data through transfer learning, which gives it great potential for MSI data analysis. This paper reviews the current research status, application progress, and challenges of DL in MSI data analysis, focusing on four core stages: data preprocessing, image reconstruction, cluster analysis, and multimodal fusion. The application of DL combined with mass spectrometry imaging in tumor diagnosis and subtype classification is also illustrated. Finally, future development trends are discussed, with the aim of promoting a better combination of artificial intelligence and mass spectrometry technology.

PMID:38966975 | DOI:10.3724/SP.J.1123.2023.10035

Categories: Literature Watch

Microbial metaproteomics--From sample processing to data acquisition and analysis

Fri, 2024-07-05 06:00

Se Pu. 2024 Jul;42(7):658-668. doi: 10.3724/SP.J.1123.2024.02009.

ABSTRACT

Microorganisms are closely associated with human diseases and health. Understanding the composition and function of microbial communities requires extensive research. Metaproteomics has recently become an important method for the thorough and in-depth study of microorganisms. However, major challenges in terms of sample processing, mass spectrometric data acquisition, and data analysis limit the development of metaproteomics owing to the complexity and high heterogeneity of microbial community samples. In metaproteomic analysis, optimizing the preprocessing method for different types of samples and adopting different microbial isolation, enrichment, extraction, and lysis schemes are often necessary. Similar to those for single-species proteomics, the mass spectrometric data acquisition modes for metaproteomics include data-dependent acquisition (DDA) and data-independent acquisition (DIA). DIA can collect comprehensive peptide information from a sample and holds great potential for future development. However, data analysis for DIA is challenged by the complexity of metaproteome samples, which hinders the deeper coverage of metaproteomes. The most important step in data analysis is the construction of a protein sequence database. The size and completeness of the database strongly influence not only the number of identifications, but also analyses at the species and functional levels. The current gold standard for metaproteome database construction is the metagenomic sequencing-based protein sequence database. A public database-filtering method based on an iterative database search has been proven to have strong practical value. The peptide-centric DIA data analysis method is a mainstream data analysis strategy. The development of deep learning and artificial intelligence will greatly promote the accuracy, coverage, and speed of metaproteomic analysis. In terms of downstream bioinformatics analysis, a series of annotation tools that can perform species annotation at the protein, peptide, and gene levels has been developed in recent years to determine the composition of microbial communities. The functional analysis of microbial communities is a unique feature of metaproteomics compared with other omics approaches. Metaproteomics has become an important component of the multi-omics analysis of microbial communities, and has great development potential in terms of depth of coverage, sensitivity of detection, and completeness of data analysis.

PMID:38966974 | DOI:10.3724/SP.J.1123.2024.02009

Categories: Literature Watch

Stochastic calculus-guided reinforcement learning: A probabilistic framework for optimal decision-making

Fri, 2024-07-05 06:00

MethodsX. 2024 Jun 3;12:102790. doi: 10.1016/j.mex.2024.102790. eCollection 2024 Jun.

ABSTRACT

Stochastic Calculus-guided Reinforcement Learning (SCRL) is a new way to make decisions in uncertain situations. It uses mathematical principles from stochastic calculus to make better choices and improve decision-making in complex settings. In tests, SCRL adapted well and outperformed traditional Stochastic Reinforcement Learning (SRL) methods. SCRL had a lower dispersion value of 63.49 compared with SRL's 65.96, meaning less variation in its results. SCRL also carried lower short- and long-term risk than SRL: its short-term risk value was 0.64 and its long-term risk value 0.78, whereas SRL's were 18.64 and 10.41, respectively. Lower risk values are better because they mean less chance of something going wrong. Overall, SCRL is a better way to make decisions under uncertainty: it uses mathematics to make smarter choices and carries less risk than other methods. Different metrics, namely training rewards, learning progress, and rolling averages, were also assessed for SRL and SCRL, and the study found that SCRL outperforms SRL. This makes SCRL very useful for real-world situations where decisions must be made carefully.
•By leveraging mathematical principles derived from stochastic calculus, SCRL offers a robust framework for making informed choices and enhancing performance in complex scenarios.
•In comparison to traditional SRL methods, SCRL demonstrates superior adaptability and efficacy, as evidenced by empirical tests.

PMID:38966714 | PMC:PMC11223108 | DOI:10.1016/j.mex.2024.102790

Categories: Literature Watch

A comprehensive dataset for Arabic word sense disambiguation

Fri, 2024-07-05 06:00

Data Brief. 2024 Jun 4;55:110591. doi: 10.1016/j.dib.2024.110591. eCollection 2024 Aug.

ABSTRACT

This data paper introduces a comprehensive dataset tailored for word sense disambiguation tasks, explicitly focusing on a hundred polysemous words frequently employed in Modern Standard Arabic. The dataset encompasses a diverse set of senses for each word, ranging from 3 to 8, resulting in 367 unique senses. Each word sense is accompanied by contextual sentences comprising ten sentence examples that feature the polysemous word in various contexts. The data collection resulted in a dataset of 3670 samples. Significantly, the dataset is in Arabic, which is known for its rich morphology, complex syntax, and extensive polysemy. The data was meticulously collected from various web sources, spanning news, medicine, finance, and more domains. This inclusivity ensures the dataset's applicability across diverse fields, positioning it as a pivotal resource for Arabic Natural Language Processing (NLP) applications. The data collection timeframe spans from the first of April 2023 to the first of May 2023. The dataset provides comprehensive model learning by including all senses for a frequently used Arabic polysemous term, even rare senses that are infrequently used in real-world contexts, thereby mitigating biases. The dataset comprises synthetic sentences generated by GPT3.5-turbo, addressing instances where rare senses lack sufficient real-world data. The dataset collection process involved initial web scraping, followed by manual sorting to distinguish word senses, supplemented by thorough searches by a human expert to fill in missing contextual sentences. Finally, in instances where online data for rare word senses was lacking or insufficient, synthetic samples were generated. Beyond its primary utility in word sense disambiguation, this dataset holds considerable value for scientists and researchers across various domains, extending its relevance to sentiment analysis applications.

PMID:38966662 | PMC:PMC11222923 | DOI:10.1016/j.dib.2024.110591

Categories: Literature Watch
