Deep learning
Using AI to Differentiate Mpox From Common Skin Lesions in a Sexual Health Clinic: Algorithm Development and Validation Study
J Med Internet Res. 2024 Sep 13;26:e52490. doi: 10.2196/52490.
ABSTRACT
BACKGROUND: The 2022 global outbreak of mpox has significantly impacted health facilities and necessitated additional infection prevention and control measures and alterations to clinic processes. Early identification of suspected mpox cases will assist in mitigating these impacts.
OBJECTIVE: We aimed to develop and evaluate an artificial intelligence (AI)-based tool to differentiate mpox lesion images from other skin lesions seen in a sexual health clinic.
METHODS: We used a data set of 2200 images, including mpox and non-mpox lesion images, collected from Melbourne Sexual Health Centre and web resources. We adopted a deep learning approach involving 6 different deep learning architectures to train our AI models. We subsequently evaluated the performance of each model using a hold-out data set and an external validation data set to determine the optimal model for differentiating between mpox and non-mpox lesions.
RESULTS: The DenseNet-121 model outperformed other models with an overall area under the receiver operating characteristic curve (AUC) of 0.928, an accuracy of 0.848, a precision of 0.942, a recall of 0.742, and an F1-score of 0.834. Implementation of a region of interest approach significantly improved the performance of all models, with the AUC for the DenseNet-121 model increasing to 0.982. This approach resulted in an increase in the correct classification of mpox images from 79% (55/70) to 94% (66/70). The effectiveness of this approach was further validated by a visual analysis with gradient-weighted class activation mapping, demonstrating a reduction in false detection within the background of lesion images. On the external validation data set, ResNet-18 and DenseNet-121 achieved the highest performance. ResNet-18 achieved an AUC of 0.990 and an accuracy of 0.947, and DenseNet-121 achieved an AUC of 0.982 and an accuracy of 0.926.
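The gradient-weighted class activation mapping (Grad-CAM) analysis mentioned above can be illustrated with a minimal PyTorch sketch. This assumes a DenseNet-121 fine-tuned for binary mpox vs. non-mpox classification; the hook placement and variable names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)  # mpox vs. non-mpox head
model.eval()

store = {}
layer = model.features.denseblock4  # last convolutional block
layer.register_forward_hook(lambda m, i, o: store.update(act=o.detach()))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0].detach()))

def grad_cam(image):
    """image: normalized tensor of shape (1, 3, H, W); returns a heatmap in [0, 1]."""
    logits = model(image)
    score = logits[0, logits.argmax(dim=1).item()]
    model.zero_grad()
    score.backward()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)           # channel importance
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))  # weighted activations
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Overlaying such a heatmap on the input image makes it easy to see whether the classifier attends to the lesion or to the background, which is the check the region-of-interest approach is designed to pass.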
CONCLUSIONS: Our study demonstrated it was possible to use an AI-based image recognition algorithm to accurately differentiate between mpox and common skin lesions. Our findings provide a foundation for future investigations aimed at refining the algorithm and establishing the place of such technology in a sexual health clinic.
PMID:39269753 | DOI:10.2196/52490
MMFA-DTA: Multimodal Feature Attention Fusion Network for Drug-Target Affinity Prediction for Drug Repurposing Against SARS-CoV-2
J Chem Theory Comput. 2024 Sep 13. doi: 10.1021/acs.jctc.4c00663. Online ahead of print.
ABSTRACT
The continuous emergence of novel infectious diseases poses a significant threat to global public health security, necessitating the development of small-molecule inhibitors that directly target pathogens. The RNA-dependent RNA polymerase (RdRp) and main protease (Mpro) of SARS-CoV-2 have been validated as potential key antiviral drug targets for the treatment of COVID-19. However, the conventional new drug R&D cycle takes 10-15 years, failing to meet the urgent needs during epidemics. Here, we propose a general multimodal deep learning framework for drug repurposing, MMFA-DTA, to enable rapid virtual screening of known drugs and significantly improve discovery efficiency. By extracting graph topological and sequence features from both small molecules and proteins, we design attention mechanisms to achieve dynamic fusion across modalities. Results demonstrate the superior performance of MMFA-DTA in drug-target affinity prediction over several state-of-the-art baseline methods on Davis and KIBA data sets, validating the benefits of heterogeneous information integration for representation learning and interaction modeling. Further fine-tuning on COVID-19-relevant bioactivity data enhances model predictions for critical SARS-CoV-2 enzymes. Case studies screening the FDA-approved drug library successfully identify etacrynic acid as the potential lead compound against both RdRp and Mpro. Molecular dynamics simulations further confirm the stability and binding affinity of etacrynic acid to these targets. This study proves the great potential and advantages of deep learning and drug repurposing strategies in supporting antiviral drug discovery. The proposed general and rapid response computational framework holds significance for preparedness against future public health events.
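The core idea of fusing heterogeneous drug and protein features with attention can be sketched as follows. This is a minimal PyTorch illustration of cross-modal attention for affinity regression; the module layout, dimensions, and pooling are assumptions for illustration, not the published MMFA-DTA architecture.

```python
import torch
import torch.nn as nn

class AttentionFusionDTA(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        # cross-attention: drug tokens attend to protein tokens and vice versa
        self.drug_to_prot = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.prot_to_drug = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.regressor = nn.Sequential(
            nn.Linear(2 * d_model, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, drug_feats, prot_feats):
        # drug_feats: (B, n_atoms, d) per-atom graph embeddings
        # prot_feats: (B, n_residues, d) per-residue sequence embeddings
        d2p, _ = self.drug_to_prot(drug_feats, prot_feats, prot_feats)
        p2d, _ = self.prot_to_drug(prot_feats, drug_feats, drug_feats)
        fused = torch.cat([d2p.mean(dim=1), p2d.mean(dim=1)], dim=-1)  # pooled joint representation
        return self.regressor(fused).squeeze(-1)                      # predicted binding affinity

affinity = AttentionFusionDTA()(torch.randn(2, 30, 128), torch.randn(2, 200, 128))
```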
PMID:39269697 | DOI:10.1021/acs.jctc.4c00663
A zero precision loss framework for EEG channel selection: enhancing efficiency and maintaining interpretability
Comput Methods Biomech Biomed Engin. 2024 Sep 13:1-16. doi: 10.1080/10255842.2024.2401918. Online ahead of print.
ABSTRACT
Brain-computer interface (BCI) systems based on motor imagery typically rely on a large number of electrode channels to acquire information. The rational selection of electroencephalography (EEG) channel combinations is crucial for optimizing computational efficiency and enhancing practical applicability. However, evaluating all potential channel combinations individually is impractical. This study aims to explore a strategy for quickly achieving a balance between maximizing channel reduction and minimizing precision loss. To this end, we developed a spatio-temporal attention perception network named STAPNet. Based on the channel contributions adaptively generated by its subnetwork, we propose an extended-step bidirectional search strategy that includes variable ratio channel selection (VRCS) and strided greedy channel selection (SGCS), designed to enhance global search capabilities and accelerate the optimization process. Experimental results show that the framework achieved average maximum accuracies of 91.47% and 84.17% on the High Gamma and BCI Competition IV 2a public datasets, respectively. Under conditions of zero precision loss, the average number of channels was reduced by up to 87.5%. Additionally, to investigate the impact of neural information loss due to channel reduction on the interpretation of complex brain functions, we employed a heatmap visualization algorithm to verify the universal importance and complete symmetry of the selected optimal channel combination across multiple datasets. This is consistent with the brain's cooperative mechanism when processing tasks involving both the left and right hands.
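The strided greedy part of such a search can be sketched in a few lines. This is a generic illustration assuming a per-channel importance score (e.g., from an attention subnetwork) and an `evaluate(channels)` callback returning validation accuracy; both are placeholders, not the published STAPNet/SGCS implementation.

```python
import numpy as np

def strided_greedy_selection(importance, evaluate, stride=2, tolerance=0.0):
    """Drop the `stride` least important channels per step while accuracy holds."""
    channels = list(np.argsort(importance)[::-1])   # most important first
    best_acc = evaluate(channels)                   # accuracy with all channels
    while len(channels) > stride:
        candidate = channels[:-stride]              # remove the least important channels
        acc = evaluate(candidate)
        if acc + tolerance < best_acc:              # stop at the first precision loss
            break
        channels, best_acc = candidate, max(best_acc, acc)
    return channels, best_acc
```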
PMID:39269692 | DOI:10.1080/10255842.2024.2401918
Identification of Chemical Scaffolds That Inhibit the Mycobacterium tuberculosis Respiratory Complex Succinate Dehydrogenase
ACS Infect Dis. 2024 Sep 13. doi: 10.1021/acsinfecdis.3c00655. Online ahead of print.
ABSTRACT
Drug-resistant Mycobacterium tuberculosis is a significant cause of infectious disease morbidity and mortality for which new antimicrobials are urgently needed. Inhibitors of mycobacterial respiratory energy metabolism have emerged as promising next-generation antimicrobials, but a number of targets remain unexplored. Succinate dehydrogenase (SDH), a focal point in mycobacterial central carbon metabolism and respiratory energy production, is required for growth and survival in M. tuberculosis under a number of conditions, highlighting the potential of inhibitors targeting mycobacterial SDH enzymes. To advance SDH as a novel drug target in M. tuberculosis, we utilized a combination of biochemical screening and in-silico deep learning technologies to identify multiple chemical scaffolds capable of inhibiting mycobacterial SDH activity. Antimicrobial susceptibility assays show that lead inhibitors are bacteriostatic agents with activity against wild-type and drug-resistant strains of M. tuberculosis. Mode of action studies on lead compounds demonstrate that the specific inhibition of SDH activity dysregulates mycobacterial metabolism and respiration and results in the secretion of intracellular succinate. Interaction assays demonstrate that the chemical inhibition of SDH activity potentiates the activity of other bioenergetic inhibitors and prevents the emergence of resistance to a variety of drugs. Overall, this study shows that SDH inhibitors are promising next-generation antimicrobials against M. tuberculosis.
PMID:39268963 | DOI:10.1021/acsinfecdis.3c00655
GRABSEEDS: extraction of plant organ traits through image analysis
Plant Methods. 2024 Sep 12;20(1):140. doi: 10.1186/s13007-024-01268-2.
ABSTRACT
BACKGROUND: Phenotyping of plant traits presents a significant bottleneck in Quantitative Trait Loci (QTL) mapping and genome-wide association studies (GWAS). Computerized phenotyping using digital images promises rapid, robust, and reproducible measurements of dimension, shape, and color traits of plant organs, including grain, leaf, and floral traits.
RESULTS: We introduce GRABSEEDS, which is specifically tailored to extract a comprehensive set of features from plant images based on state-of-the-art computer vision and deep learning methods. This command-line tool, which is adept at managing varying light conditions, background disturbances, and overlapping objects, uses digital images to measure plant organ characteristics accurately and efficiently. GRABSEEDS also offers advanced features, including label recognition and color correction in a batch setting.
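The kind of trait extraction described here can be illustrated with a generic OpenCV sketch that measures seed dimensions, area, and mean color from a thresholded image. This is not GRABSEEDS itself; the input filename, threshold choice, and noise cutoff are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("seeds.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

traits = []
for c in contours:
    if cv2.contourArea(c) < 50:                    # skip specks / noise
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)   # fitted rotated bounding box
    obj_mask = np.zeros(mask.shape, np.uint8)
    cv2.drawContours(obj_mask, [c], -1, 255, -1)
    b, g, r = cv2.mean(img, mask=obj_mask)[:3]     # mean BGR color inside the object
    traits.append({"length": max(w, h), "width": min(w, h),
                   "area": cv2.contourArea(c), "rgb": (r, g, b)})
```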
CONCLUSION: GRABSEEDS streamlines the plant phenotyping process and is effective in a variety of seed, floral, and leaf trait studies for association with agronomic traits and stress conditions. Source code and documentation for GRABSEEDS are available at: https://github.com/tanghaibao/jcvi/wiki/GRABSEEDS.
PMID:39267072 | DOI:10.1186/s13007-024-01268-2
Mild cognitive impairment prediction based on multi-stream convolutional neural networks
BMC Bioinformatics. 2024 Sep 12;22(Suppl 5):638. doi: 10.1186/s12859-024-05911-6.
ABSTRACT
BACKGROUND: Mild cognitive impairment (MCI) is the transition stage between the cognitive decline expected in normal aging and more severe cognitive decline such as dementia. The early diagnosis of MCI plays an important role in human healthcare. Current methods of MCI detection include cognitive tests to screen for executive function impairments, possibly followed by neuroimaging tests. However, these methods are expensive and time-consuming. Several studies have demonstrated that MCI and dementia can be detected by machine learning technologies from different modality data. This study proposes a multi-stream convolutional neural network (MCNN) model to predict MCI from face videos.
RESULTS: The effective data comprise 48 facial videos from 45 participants, including 35 videos from cognitively normal participants and 13 videos from MCI participants. The videos are divided into several segments. Then, the MCNN captures the latent facial spatial features and facial dynamic features of each segment and classifies the segment as MCI or normal. Finally, the aggregation stage produces the final detection result for the input video. We evaluate 27 MCNN model combinations comprising three ResNet architectures, three optimizers, and three activation functions. The experimental results show that the ResNet-50 backbone with the Swish activation function and Ranger optimizer produces the best results, with an F1-score of 89% at the segment level, while the ResNet-18 backbone with Swish and Ranger achieves an F1-score of 100% at the participant level.
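The segment-then-aggregate idea can be sketched as follows: a backbone scores each segment and the segment probabilities are averaged to give the video-level decision. The single-stream ResNet-18 backbone, one-frame-per-segment input, and mean-probability aggregation rule are simplifying assumptions for illustration, not the authors' multi-stream model.

```python
import torch
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 2)   # MCI vs. normal head
backbone.eval()

@torch.no_grad()
def classify_video(segments):
    # segments: (n_segments, 3, 224, 224), one representative frame per segment
    probs = torch.softmax(backbone(segments), dim=1)         # segment-level predictions
    video_prob = probs.mean(dim=0)                           # aggregate over segments
    return ("MCI" if video_prob[1] > 0.5 else "normal"), video_prob
```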
CONCLUSIONS: This study presents an efficient new method for predicting MCI from facial videos. Studies have shown that MCI can be detected from facial videos, and facial data can be used as a biomarker for MCI. This approach is very promising for developing accurate models for screening MCI through facial data. It demonstrates that automated, non-invasive, and inexpensive MCI screening methods are feasible and do not require highly subjective paper-and-pencil questionnaires. Evaluation of 27 model combinations also found that ResNet-50 with Swish is more stable for different optimizers. Such results provide directions for hyperparameter tuning to further improve MCI predictions.
PMID:39266977 | DOI:10.1186/s12859-024-05911-6
Deep Learning for Automated Classification of Hip Hardware on Radiographs
J Imaging Inform Med. 2024 Sep 12. doi: 10.1007/s10278-024-01263-y. Online ahead of print.
ABSTRACT
PURPOSE: To develop a deep learning model for automated classification of orthopedic hardware on pelvic and hip radiographs, which can be clinically implemented to decrease radiologist workload and improve consistency among radiology reports.
MATERIALS AND METHODS: Pelvic and hip radiographs from 4279 studies in 1073 patients were retrospectively obtained and reviewed by musculoskeletal radiologists. Two convolutional neural networks, EfficientNet-B4 and NFNet-F3, were trained to classify each image into the following most represented categories: no hardware, total hip arthroplasty (THA), hemiarthroplasty, intramedullary nail, femoral neck cannulated screws, dynamic hip screw, lateral blade/plate, THA with additional femoral fixation, and post-infectious hip. Model performance was assessed on an independent test set of 851 studies from 262 patients and compared to the individual performance of five subspecialty-trained radiologists using leave-one-out analysis against an aggregate gold standard label.
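Setting up these two backbones for a nine-class task is straightforward with the timm library, as in the sketch below. The model variant strings, input resolution, and use of timm at all are assumptions for illustration; the paper does not state its implementation details here.

```python
import timm
import torch

NUM_CLASSES = 9  # no hardware, THA, hemiarthroplasty, IM nail, cannulated screws,
                 # dynamic hip screw, lateral blade/plate, THA + femoral fixation,
                 # post-infectious hip

effnet = timm.create_model("efficientnet_b4", pretrained=True, num_classes=NUM_CLASSES)
nfnet = timm.create_model("dm_nfnet_f3", pretrained=True, num_classes=NUM_CLASSES)

x = torch.randn(1, 3, 380, 380)          # EfficientNet-B4's nominal input resolution
print(effnet(x).shape)                   # -> torch.Size([1, 9]) class logits
```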
RESULTS: For multiclass classification, the area under the receiver operating characteristic curve (AUC) for NFNet-F3 was 0.99 or greater for all classes; for EfficientNet-B4 it was 0.99 or greater for all classes except post-infectious hip, which had an AUC of 0.97. When compared with human observers, both models achieved an accuracy of 97%, which was non-inferior to four of the five radiologists and superior to one. Cohen's kappa coefficient for both models ranged from 0.96 to 0.97, indicating excellent inter-reader agreement.
CONCLUSION: A deep learning model can be used to classify a range of orthopedic hip hardware with high accuracy and comparable performance to subspecialty-trained radiologists.
PMID:39266912 | DOI:10.1007/s10278-024-01263-y
Convolutional Neural Networks for Segmentation of Pleural Mesothelioma: Analysis of Probability Map Thresholds (CALGB 30901, Alliance)
J Imaging Inform Med. 2024 Sep 12. doi: 10.1007/s10278-024-01092-z. Online ahead of print.
ABSTRACT
The purpose of this study was to evaluate the impact of probability map threshold on pleural mesothelioma (PM) tumor delineations generated using a convolutional neural network (CNN). One hundred eighty-six CT scans from 48 PM patients were segmented by a VGG16/U-Net CNN. A radiologist modified the contours generated at a 0.5 probability threshold. Percent difference of tumor volume and overlap using the Dice Similarity Coefficient (DSC) were compared between the reference standard provided by the radiologist and CNN outputs for thresholds ranging from 0.001 to 0.9. CNN-derived contours consistently yielded smaller tumor volumes than radiologist contours. Reducing the probability threshold from 0.5 to 0.01 decreased the absolute percent volume difference, on average, from 42.93% to 26.60%. Median and mean DSC ranged from 0.57 to 0.59, with a peak at a threshold of 0.2; no distinct threshold was found for percent volume difference. The CNN exhibited deficiencies with specific disease presentations, such as severe pleural effusion or disease in the pleural fissure. No single output threshold in the CNN probability maps was optimal for both tumor volume and DSC. This study emphasized the importance of considering both figures of merit when evaluating deep learning-based tumor segmentations across probability thresholds. This work underscores the need to simultaneously assess tumor volume and spatial overlap when evaluating CNN performance. While automated segmentations may yield comparable tumor volumes to that of the reference standard, the spatial region delineated by the CNN at a specific threshold is equally important.
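Sweeping the probability-map threshold and computing the two figures of merit used here (Dice similarity coefficient and absolute percent volume difference) can be sketched as follows; the array contents are random placeholders standing in for a CNN probability map and a radiologist reference mask.

```python
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def percent_volume_difference(pred, ref):
    return abs(pred.sum() - ref.sum()) / (ref.sum() + 1e-8) * 100.0

prob_map = np.random.rand(64, 128, 128)          # CNN output probabilities (placeholder)
reference = np.random.rand(64, 128, 128) > 0.7   # reference-standard mask (placeholder)

for t in [0.001, 0.01, 0.1, 0.2, 0.5, 0.9]:
    mask = prob_map > t                           # binarize at this threshold
    print(f"t={t:<5} DSC={dice(mask, reference):.3f} "
          f"|dV|={percent_volume_difference(mask, reference):.1f}%")
```

Reporting both metrics per threshold, as in this loop, is exactly what reveals that the volume-optimal and overlap-optimal thresholds need not coincide.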
PMID:39266911 | DOI:10.1007/s10278-024-01092-z
Training and validation of a deep learning U-net architecture general model for automated segmentation of inner ear from CT
Eur Radiol Exp. 2024 Sep 12;8(1):104. doi: 10.1186/s41747-024-00508-3.
ABSTRACT
BACKGROUND: The intricate three-dimensional anatomy of the inner ear presents significant challenges in diagnostic procedures and critical surgical interventions. Recent advancements in deep learning (DL), particularly convolutional neural networks (CNN), have shown promise for segmenting specific structures in medical imaging. This study aimed to train and externally validate an open-source U-net DL general model for automated segmentation of the inner ear from computed tomography (CT) scans, using quantitative and qualitative assessments.
METHODS: In this multicenter study, we retrospectively collected a dataset of 271 CT scans to train an open-source U-net CNN model. An external set of 70 CT scans was used to evaluate the performance of the trained model. The model's efficacy was quantitatively assessed using the Dice similarity coefficient (DSC) and qualitatively assessed using a 4-level Likert score. For comparative analysis, manual segmentation served as the reference standard, with assessments made on both training and validation datasets, as well as stratified analysis of normal and pathological subgroups.
RESULTS: The optimized model yielded a mean DSC of 0.83 and achieved a Likert score of 1 in 42% of the cases, in conjunction with a significantly reduced processing time. Nevertheless, 27% of the patients received an indeterminate Likert score of 4. Overall, the mean DSCs were notably higher in the validation dataset than in the training dataset.
CONCLUSION: This study supports the external validation of an open-source U-net model for the automated segmentation of the inner ear from CT scans.
RELEVANCE STATEMENT: This study optimized and assessed an open-source general deep learning model for automated segmentation of the inner ear using temporal bone CT scans, offering perspectives for application in clinical routine. The model weights, study datasets, and baseline model are publicly accessible worldwide.
KEY POINTS: A general open-source deep learning model was trained for CT automated inner ear segmentation. The Dice similarity coefficient was 0.83 and a Likert score of 1 was attributed to 42% of automated segmentations. The influence of scanning protocols on the model performances remains to be assessed.
PMID:39266784 | DOI:10.1186/s41747-024-00508-3
Predicting multiple sclerosis disease progression and outcomes with machine learning and MRI-based biomarkers: a review
J Neurol. 2024 Sep 12. doi: 10.1007/s00415-024-12651-3. Online ahead of print.
ABSTRACT
Multiple sclerosis (MS) is a demyelinating neurological disorder with a highly heterogeneous clinical presentation and course of progression. Disease-modifying therapies are the only available treatment, as there is no known cure for the disease. Careful selection of suitable therapies is necessary, as they can be accompanied by serious risks and adverse effects such as infection. Magnetic resonance imaging (MRI) plays a central role in the diagnosis and management of MS, though MRI lesions have displayed only moderate associations with MS clinical outcomes, known as the clinico-radiological paradox. With the advent of machine learning (ML) in healthcare, the predictive power of MRI can be improved by leveraging both traditional and advanced ML algorithms capable of analyzing increasingly complex patterns within neuroimaging data. The purpose of this review was to examine the application of MRI-based ML for prediction of MS disease progression. Studies were divided into five main categories: predicting the conversion of clinically isolated syndrome to MS, cognitive outcome, EDSS-related disability, motor disability and disease activity. The performance of ML models is discussed along with highlighting the influential MRI-derived biomarkers. Overall, MRI-based ML presents a promising avenue for MS prognosis. However, integration of imaging biomarkers with other multimodal patient data shows great potential for advancing personalized healthcare approaches in MS.
PMID:39266777 | DOI:10.1007/s00415-024-12651-3
An open-source framework for end-to-end analysis of electronic health record data
Nat Med. 2024 Sep 12. doi: 10.1038/s41591-024-03214-0. Online ahead of print.
ABSTRACT
With progressive digitalization of healthcare systems worldwide, large-scale collection of electronic health records (EHRs) has become commonplace. However, an extensible framework for comprehensive exploratory analysis that accounts for data heterogeneity is missing. Here we introduce ehrapy, a modular open-source Python framework designed for exploratory analysis of heterogeneous epidemiology and EHR data. ehrapy incorporates a series of analytical steps, from data extraction and quality control to the generation of low-dimensional representations. Complemented by rich statistical modules, ehrapy facilitates associating patients with disease states, differential comparison between patient clusters, survival analysis, trajectory inference, causal inference and more. Leveraging ontologies, ehrapy further enables data sharing and training EHR deep learning models, paving the way for foundational models in biomedical research. We demonstrate ehrapy's features in six distinct examples. We applied ehrapy to stratify patients affected by unspecified pneumonia into finer-grained phenotypes. Furthermore, we reveal biomarkers for significant differences in survival among these groups. Additionally, we quantify medication-class effects of pneumonia medications on length of stay. We further leveraged ehrapy to analyze cardiovascular risks across different data modalities. We reconstructed disease state trajectories in patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) based on imaging data. Finally, we conducted a case study to demonstrate how ehrapy can detect and mitigate biases in EHR data. ehrapy, thus, provides a framework that we envision will standardize analysis pipelines on EHR data and serve as a cornerstone for the community.
PMID:39266748 | DOI:10.1038/s41591-024-03214-0
Inferring gene regulatory networks with graph convolutional network based on causal feature reconstruction
Sci Rep. 2024 Sep 12;14(1):21342. doi: 10.1038/s41598-024-71864-8.
ABSTRACT
Inferring gene regulatory networks through deep learning and causal inference methods is a crucial task in the field of computational biology and bioinformatics. This study presents a novel approach that uses a Graph Convolutional Network (GCN) guided by causal information to infer Gene Regulatory Networks (GRNs). Transfer entropy and a reconstruction layer are utilized to achieve causal feature reconstruction, mitigating the information loss caused by multiple rounds of neighbor aggregation in GCNs and resulting in a causal, integrated representation of node features. Separable features are extracted from gene expression data by a Gaussian-kernel autoencoder to improve computational efficiency. Experimental results on the DREAM5 and mDC datasets demonstrate that our method exhibits superior performance compared to existing algorithms, as indicated by higher AUPRC values. Furthermore, the incorporation of causal feature reconstruction enhances the inferred GRNs, rendering them more reasonable, accurate, and reliable.
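The causal quantity at the heart of this approach, pairwise transfer entropy between gene expression time series, can be estimated with a simple histogram-based sketch. The binning, lag of 1, and quantile discretization are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Transfer entropy from x to y (nats), lag 1, histogram probability estimates."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))          # (y_t, x_t)
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))           # (y_{t+1}, y_t)
    singles = Counter(yd[:-1])                         # (y_t)
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]           # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0] # p(y_{t+1} | y_t)
        te += p_joint * np.log(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.3 * rng.normal(size=500)   # y lags x, so TE(x -> y) should be high
print(transfer_entropy(x, y), transfer_entropy(y, x))
```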
PMID:39266676 | DOI:10.1038/s41598-024-71864-8
Heat wave attribution assessment using deep learning
Nat Comput Sci. 2024 Sep 12. doi: 10.1038/s43588-024-00700-w. Online ahead of print.
NO ABSTRACT
PMID:39266671 | DOI:10.1038/s43588-024-00700-w
An attentional mechanism model for segmenting multiple lesion regions in the diabetic retina
Sci Rep. 2024 Sep 12;14(1):21354. doi: 10.1038/s41598-024-72481-1.
ABSTRACT
Diabetic retinopathy (DR), a leading cause of blindness in diabetic patients, necessitates the precise segmentation of lesions for effective grading. DR multi-lesion segmentation faces two main challenges. On the one hand, retinal lesions vary in location, shape, and size. On the other hand, currently available multi-lesion segmentation models are insufficient in their extraction of minute features and are prone to overlooking microaneurysms. To solve these problems, we propose a novel deep learning method: the Multi-Scale Spatial Attention Gate (MSAG) mechanism network. The model takes images of varying scales as input in order to extract a range of semantic information. Our Spatial Attention Gate merges low-level spatial details with high-level semantic content, assigning hierarchical attention weights for accurate segmentation. Incorporating the modified spatial attention gate in the inference stage enhances precision by combining prediction scales hierarchically, thereby improving segmentation accuracy without increasing training costs. We conduct experiments on the public IDRiD and DDR datasets, and the results show that the proposed method achieves better performance than other methods.
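A spatial attention gate of the general kind described, which re-weights low-level (skip) features using high-level (gating) features, can be sketched in PyTorch as below. The channel sizes and exact wiring are illustrative assumptions in the spirit of attention-gated U-Nets, not the published MSAG design.

```python
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip, gate):
        # skip: low-level spatial detail; gate: upsampled high-level semantics (same H, W)
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))  # (B, 1, H, W)
        return skip * attn                       # re-weight fine details by the attention map

gated = SpatialAttentionGate(64, 128, 32)(torch.randn(1, 64, 56, 56), torch.randn(1, 128, 56, 56))
```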
PMID:39266650 | DOI:10.1038/s41598-024-72481-1
Advancements in supervised deep learning for metal artifact reduction in computed tomography: A systematic review
Eur J Radiol. 2024 Sep 7;181:111732. doi: 10.1016/j.ejrad.2024.111732. Online ahead of print.
ABSTRACT
BACKGROUND: Metallic artefacts caused by metal implants are a common problem in computed tomography (CT) imaging, degrading image quality and diagnostic accuracy. With advancements in artificial intelligence, novel deep learning (DL)-based metal artefact reduction (MAR) algorithms are entering clinical practice.
OBJECTIVE: This systematic review provides an overview of the performance of the current supervised DL-based MAR algorithms for CT, focusing on three different domains: sinogram, image, and dual domain.
METHODS: A literature search was conducted in PubMed, EMBASE, Web of Science, and Scopus. Outcomes were assessed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) or any other objective measure comparing MAR performance to uncorrected images.
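The two outcome measures named here, PSNR and SSIM, are readily computed with scikit-image, as in this minimal sketch comparing a MAR-corrected slice against an artifact-free reference; the arrays are random placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(512, 512).astype(np.float32)                       # ground-truth slice (placeholder)
corrected = (reference + 0.05 * np.random.randn(512, 512)).astype(np.float32) # MAR output (placeholder)

psnr = peak_signal_noise_ratio(reference, corrected, data_range=1.0)
ssim = structural_similarity(reference, corrected, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```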
RESULTS: After screening, fourteen studies that compared DL-based MAR-algorithms with uncorrected images were selected. MAR-algorithms were categorised into the three domains. Thirteen MAR-algorithms showed higher PSNR and SSIM values compared with the uncorrected images and with non-DL MAR-algorithms. One study showed statistically significantly better MAR performance on clinical data, based on Hounsfield unit calculations, compared with the uncorrected images and non-DL MAR-algorithms.
CONCLUSION: DL MAR-algorithms show promising results in reducing metal artefacts, but standardised methodologies are needed to evaluate DL-based MAR-algorithms on clinical data to improve comparability between algorithms.
CLINICAL RELEVANCE STATEMENT: Recent studies highlight the effectiveness of supervised Deep Learning-based MAR-algorithms in improving CT image quality by reducing metal artefacts in the sinogram, image and dual domain. A systematic review is needed to provide an overview of newly developed algorithms.
PMID:39265203 | DOI:10.1016/j.ejrad.2024.111732
Comparison and benchmark of deep learning methods for non-coding RNA classification
PLoS Comput Biol. 2024 Sep 12;20(9):e1012446. doi: 10.1371/journal.pcbi.1012446. Online ahead of print.
ABSTRACT
The involvement of non-coding RNAs in biological processes and diseases has made the exploration of their functions crucial. Most non-coding RNAs have yet to be studied, creating the need for methods that can rapidly classify large sets of non-coding RNAs into functional groups, or classes. In recent years, the success of deep learning in various domains led to its application to non-coding RNA classification. Multiple novel architectures have been developed, but these advancements are not covered by current literature reviews. We present an exhaustive comparison of the different methods proposed in the state-of-the-art and describe their associated datasets. Moreover, the literature lacks objective benchmarks. We perform experiments to fairly evaluate the performance of various tools for non-coding RNA classification on popular datasets. The robustness of methods to non-functional sequences and sequence boundary noise is explored. We also measure computation time and CO2 emissions. With regard to these results, we assess the relevance of the different architectural choices and provide recommendations to consider in future methods.
PMID:39264986 | DOI:10.1371/journal.pcbi.1012446
A New Method for Scoliosis Screening Incorporating Deep Learning With Back Images
Global Spine J. 2024 Sep 12:21925682241282581. doi: 10.1177/21925682241282581. Online ahead of print.
ABSTRACT
STUDY DESIGN: Retrospective observational study.
OBJECTIVES: Scoliosis is commonly observed in adolescents, with a worldwide prevalence of 0.5%. It is prone to being overlooked by parents during its early stages, as it often lacks overt characteristics. As a result, many individuals are not aware that they may have scoliosis until the symptoms become quite severe, significantly affecting the physical and mental well-being of patients. Traditional screening methods for scoliosis demand significant physician effort and entail unnecessary radiography exposure; thus, implementing large-scale screening is challenging. The application of deep learning algorithms has the potential to reduce unnecessary radiation risks as well as the costs of scoliosis screening.
METHODS: The data of 247 scoliosis patients observed between 2008 and 2021 were used for training. The dataset included frontal, lateral, and back upright images as well as X-ray images obtained during the same period. We proposed and validated deep learning algorithms for automated scoliosis screening using upright back images. The overall process involved the localization of the back region of interest (ROI), spinal region segmentation, and Cobb angle measurements.
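A simplified illustration of the final step, estimating a Cobb-like angle from a segmented spinal midline, is sketched below: fit local tangent slopes along the curve and take the maximum difference in inclination. This is an approximation for illustration only, not the paper's measurement pipeline.

```python
import numpy as np

def cobb_angle_from_midline(points, window=5):
    """points: (N, 2) array of (x, y) midline coordinates ordered from top to bottom."""
    angles = []
    for i in range(window, len(points) - window):
        seg = points[i - window:i + window + 1]
        slope, _ = np.polyfit(seg[:, 1], seg[:, 0], 1)      # x as a function of y
        angles.append(np.degrees(np.arctan(slope)))          # local inclination from vertical
    angles = np.asarray(angles)
    return angles.max() - angles.min()                       # angle between the most tilted tangents

# toy S-shaped midline: should yield a clearly nonzero angle
y = np.linspace(0, 400, 200)
x = 20 * np.sin(y / 80)
print(cobb_angle_from_midline(np.column_stack([x, y])))
```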
RESULTS: The results indicated that the accuracy of the Cobb angle measurement was superior to that of the traditional human visual recognition method, providing a concise and convenient scoliosis screening capability without causing any harm to the human body.
CONCLUSIONS: The method was automated, accurate, concise, and convenient. It is potentially applicable to a wide range of screening methods for the detection of early scoliosis.
PMID:39264983 | DOI:10.1177/21925682241282581
Unveiling the economic potential of sports industry in China: A data driven analysis
PLoS One. 2024 Sep 12;19(9):e0310131. doi: 10.1371/journal.pone.0310131. eCollection 2024.
ABSTRACT
The article examines the economic dynamics of the sports industry using deep learning algorithms and data mining methods. Despite substantial progress in sports industry research, a significant gap remains in the proper quantification of the industry's economic benefits. The current research therefore attempts to fill this gap by proposing a specific economic model for the sports sector. This paper examines sports industry data covering 2012 to 2022, using data mining technology for quantitative analyses. Deep learning algorithms and data mining techniques transform the information gained from sports industry databases into sophisticated economic models. The developed model then enables efficient analysis of diverse datasets for underlying patterns and insights, which are crucial to understanding the industry's economic trajectory. The findings of the study reveal the importance of the sports industry for the economic growth of China. Moreover, the application of deep learning algorithms highlights the importance of continuous learning and training on economic data from the sports industry. This is, therefore, a novel approach to building an economic simulation framework using deep learning and data mining, tailored to the intricate dynamics of the sports industry.
PMID:39264965 | DOI:10.1371/journal.pone.0310131
Enhancing regional wall abnormality detection accuracy: Integrating machine learning, optical flow algorithms, and temporal convolutional networks in multi-view echocardiography
PLoS One. 2024 Sep 12;19(9):e0310107. doi: 10.1371/journal.pone.0310107. eCollection 2024.
ABSTRACT
BACKGROUND: Regional Wall Motion Abnormality (RWMA) serves as an early indicator of myocardial infarction (MI), the leading cause of mortality worldwide. Accurate and early detection of RWMA is vital for the successful treatment of MI. Current automated echocardiography analyses typically concentrate on peak values from left ventricular (LV) displacement curves, based on LV contour annotations or key frames during the heart's systolic or diastolic phases within a single echocardiographic cycle. This approach may overlook the rich motion field features available in multi-cycle cardiac data, which could enhance RWMA detection.
METHODS: In this research, we put forward an innovative approach to detect RWMA by harnessing motion information across multiple echocardiographic cycles and multiple views. Our methodology combines U-Net-based segmentation with optical flow algorithms for detailed cardiac structure delineation, and Temporal Convolutional Networks (ConvNet) to extract nuanced motion features. We utilize a variety of machine learning and deep learning classifiers on both A2C and A4C view echocardiograms to enhance detection accuracy. A three-phase algorithm, developed on the HMC-QU dataset, incorporates U-Net for segmentation, followed by optical flow for cardiac wall motion field features. A Temporal ConvNet, inspired by the Temporal Segment Network (TSN), is then applied to interpret these motion field features, independent of traditional cardiac parameter curves or specific key-phase frame inputs.
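The optical flow stage of such a pipeline can be sketched with OpenCV's dense Farneback flow, restricted to a segmented myocardial mask; the parameter values and summary statistics below are illustrative assumptions, not the authors' feature set.

```python
import cv2
import numpy as np

def wall_motion_features(frame_prev, frame_next, myo_mask):
    # frame_prev / frame_next: uint8 grayscale frames; myo_mask: boolean myocardium mask
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return {"mean_disp": float(magnitude[myo_mask].mean()),   # average wall displacement
            "max_disp": float(magnitude[myo_mask].max()),
            "mean_dir": float(angle[myo_mask].mean())}
```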
RESULTS: Employing five-fold cross-validation, our SVM classifier demonstrated high performance, with a sensitivity of 93.13%, specificity of 83.61%, precision of 88.52%, and an F1 score of 90.39%. When compared with other studies using the HMC-QU dataset, these figures stand out, underlining our method's effectiveness. The classifier also attained an overall accuracy of 89.25% and an area under the curve (AUC) of 95%, reinforcing its potential for reliable RWMA detection in echocardiographic analysis.
CONCLUSIONS: This research not only demonstrates a novel technique but also contributes a more comprehensive and precise tool for early myocardial infarction diagnosis.
PMID:39264929 | DOI:10.1371/journal.pone.0310107
Antivirals for monkeypox virus: Proposing an effective machine/deep learning framework
PLoS One. 2024 Sep 12;19(9):e0299342. doi: 10.1371/journal.pone.0299342. eCollection 2024.
ABSTRACT
Monkeypox virus (MPXV) is an infectious virus that has caused considerable morbidity and mortality in recent years. Despite its danger to public health, there is no approved drug to treat MPXV infection. Drug repurposing, which relies on computational methods, is a promising screening strategy for the low-cost introduction of approved drugs against emerging diseases and viruses, and is therefore a promising approach for suggesting approved drugs against MPXV. This paper proposes a computational framework for MPXV antiviral prediction. To do this, we generated a new virus-antiviral dataset. Moreover, we applied several machine learning methods and one deep learning method for virus-antiviral prediction. The drugs suggested by the learning methods were investigated using docking studies. The target protein structure was modeled using homology modeling and then refined and validated. To the best of our knowledge, this is the first work to study deep learning methods for the prediction of MPXV antivirals. The screening results confirm that Tilorone, Valacyclovir, Ribavirin, Favipiravir, and Baloxavir marboxil are effective drugs for MPXV treatment.
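The general flavor of learning-based antiviral screening can be sketched by representing drugs as Morgan fingerprints (RDKit) and training a simple classifier to flag candidate antivirals. The SMILES strings, labels, and model choice are toy placeholders, not the paper's dataset or method.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CC(=O)Nc1ccc(O)cc1", "C1=CC=C(C=C1)C(=O)O", "CCO", "c1ccccc1"]  # toy molecules
labels = [1, 0, 0, 1]                                                      # toy antiviral labels

def featurize(smi):
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)          # radius-2 fingerprint
    return np.array(fp)

X = np.array([featurize(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict_proba(X[:1]))      # predicted probability of antiviral activity
```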
PMID:39264896 | DOI:10.1371/journal.pone.0299342