Deep learning
The emerging paradigm in pediatric rheumatology: harnessing the power of artificial intelligence
Rheumatol Int. 2024 Jul 16. doi: 10.1007/s00296-024-05661-x. Online ahead of print.
ABSTRACT
Artificial intelligence algorithms, whose roots extend into the past but which have undergone a resurgence and evolution in recent years owing to their superiority over traditional methods and their contributions to human capabilities, have begun to make their presence felt in pediatric rheumatology. In the ever-evolving realm of pediatric rheumatology, there have been incremental advancements supported by artificial intelligence in understanding and stratifying diseases, developing biomarkers, refining visual analyses, and facilitating individualized treatment approaches. However, as in many other domains, these strides have yet to gain clinical applicability and validation, and ethical issues remain unresolved. Furthermore, mastering different and novel terminologies appears challenging for clinicians. This review aims to provide a comprehensive overview of the current literature, categorizing algorithms and their applications, thus offering a fresh perspective on the nascent relationship between pediatric rheumatology and artificial intelligence and highlighting both its advancements and constraints.
PMID:39012357 | DOI:10.1007/s00296-024-05661-x
Automatic classification and grading of canine tracheal collapse on thoracic radiographs by using deep learning
Vet Radiol Ultrasound. 2024 Jul 16. doi: 10.1111/vru.13413. Online ahead of print.
ABSTRACT
Tracheal collapse is a chronic and progressively worsening disease; the severity of clinical symptoms experienced by affected individuals depends on the degree of airway collapse. Cutting-edge automated tools are necessary to modernize disease screening using radiographs across various veterinary settings, such as animal clinics and hospitals, primarily because of the inherent uncertainty of radiographic interpretation among veterinarians. In this study, an artificial intelligence model was developed to screen canine tracheal collapse using archived lateral cervicothoracic radiographs. This model can differentiate between a normal and a collapsed trachea, ranging from early to severe degrees. The you-only-look-once (YOLO) models, including YOLO v3, YOLO v4, and YOLO v4 tiny, were used to train and test the data sets under the in-house XXX platform. The results showed that the YOLO v4 tiny-416 model had satisfactory performance in screening among the normal trachea, grade 1-2 tracheal collapse, and grade 3-4 tracheal collapse, with 98.30% sensitivity, 99.20% specificity, and 98.90% accuracy. The area under the precision-recall curve was >0.8, which demonstrated high diagnostic accuracy. The agreement between the deep learning model and the radiologists was κ = 0.975 (P < .001), with all observers having excellent agreement (κ = 1.00, P < .001). The intraclass correlation coefficient between observers was >0.90, which represented excellent consistency. Therefore, the deep learning model can be a useful and reliable method for effective screening and classification of the degree of tracheal collapse based on routine lateral cervicothoracic radiographs.
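As a rough illustration of the evaluation reported above, the sketch below computes sensitivity, specificity, accuracy, and Cohen's kappa for a three-class screening task with scikit-learn; the class names and toy labels are hypothetical, not the study's data.

```python
# Hypothetical evaluation of a 3-class tracheal-collapse screener
# (normal, grade 1-2 collapse, grade 3-4 collapse); not the authors' code.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

labels = ["normal", "grade1-2", "grade3-4"]
y_true = ["normal", "grade1-2", "grade3-4", "normal", "grade3-4"]   # reference reads
y_pred = ["normal", "grade1-2", "grade3-4", "normal", "grade1-2"]   # model outputs

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa   :", cohen_kappa_score(y_true, y_pred))

# Per-class sensitivity/specificity from one-vs-rest confusion matrices.
for cls in labels:
    t = [int(v == cls) for v in y_true]
    p = [int(v == cls) for v in y_pred]
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"{cls}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```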
PMID:39012062 | DOI:10.1111/vru.13413
User experience of and satisfaction with computer-aided design software when designing dental prostheses: A multicenter survey study
Int J Comput Dent. 2024 Jul 16;0(0):0. doi: 10.3290/j.ijcd.b5582929. Online ahead of print.
ABSTRACT
AIM: The current study aimed to compare the responses and satisfaction reported by users with varying levels of experience when using different types of computer-aided design (CAD) software programs to design crowns.
MATERIALS AND METHODS: A questionnaire was used to evaluate user responses to five domains (software visibility, 3D-scanned data preparation, crown design and adjustment, finish line registration, and overall experience) of various CAD software programs. The study included 50 undergraduate dental students (inexperienced group) and 50 dentists or dental technicians from two hospitals (experienced group). The participants used four different CAD software programs (Meshmixer, Exocad, BlueSkyPlan, and Dentbird) to design crowns and recorded their evaluations of the features using the questionnaire. Statistical analyses included one-way and two-way analysis of variance (ANOVA) tests to compare scores and to examine the interaction between user response and experience.
RESULT: User evaluation scores in the domains of software visibility and 3D-scanned data preparation varied between software programs (P < 0.001), with Exocad being favored by the experienced group. When evaluating crown design and finish line registration, Dentbird and Exocad scored significantly higher than the other software in both groups as they offered automation of the process using deep learning (P < 0.001). Two-way ANOVA showed that prior experience of using CAD significantly affected the users' responses to all queries (P < 0.001).
CONCLUSION: User response and satisfaction varied with the type of CAD software used to design dental prostheses, with prior experience of using CAD playing a significant role. Automation of design functions can enhance user satisfaction with the software.
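A minimal sketch of the two-way ANOVA described in the methods, testing main effects of software and experience and their interaction with statsmodels; the data frame values and column names are made up for illustration.

```python
# Hypothetical two-way ANOVA: user score ~ CAD software x experience level.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score":      [4, 5, 3, 2, 5, 4, 3, 2, 4, 4, 3, 3, 5, 5, 2, 2],
    "software":   ["Exocad", "Dentbird", "Meshmixer", "BlueSkyPlan"] * 4,
    "experience": ["experienced"] * 8 + ["inexperienced"] * 8,
})

model = ols("score ~ C(software) * C(experience)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```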
PMID:39011633 | DOI:10.3290/j.ijcd.b5582929
Brief Review and Primer of Key Terminology for Artificial Intelligence and Machine Learning in Hypertension
Hypertension. 2024 Jul 16. doi: 10.1161/HYPERTENSIONAHA.123.22347. Online ahead of print.
ABSTRACT
Recent breakthroughs in artificial intelligence (AI) have caught the attention of many fields, including health care. The vision for AI is that a computer model can process information and provide output that is indistinguishable from that of a human and, in specific repetitive tasks, outperform a human's capability. The two critical underlying technologies in AI are supervised and unsupervised machine learning. Machine learning uses neural networks and deep learning, modeled after the human brain, to learn from structured or unstructured data sets, make decisions, and continuously improve the model. Natural language processing, which relies on supervised learning, is the understanding, interpretation, and generation of information in human language, as used in chatbots and generative and conversational AI. These breakthroughs result from increased computing power and access to large data sets, setting the stage for the release of large language models, such as ChatGPT, and new imaging models using computer vision. Hypertension management involves using blood pressure and other biometric data from connected devices and generative AI to communicate with patients and health care professionals. AI can potentially improve hypertension diagnosis and treatment through remote patient monitoring and digital therapeutics.
PMID:39011632 | DOI:10.1161/HYPERTENSIONAHA.123.22347
Moss-m7G: A Motif-Based Interpretable Deep Learning Method for RNA N7-Methylguanosine Site Prediction
J Chem Inf Model. 2024 Jul 16. doi: 10.1021/acs.jcim.4c00802. Online ahead of print.
ABSTRACT
N7-methylguanosine (m7G) modification plays a crucial role in various biological processes and is closely associated with the development and progression of many cancers. Accurate identification of m7G modification sites is essential for understanding their regulatory mechanisms and advancing cancer therapy. Previous studies often suffered from insufficient research data, underutilization of motif information, and lack of interpretability. In this work, we designed a novel motif-based interpretable method for m7G modification site prediction, called Moss-m7G. This approach enables the analysis of RNA sequences from a motif-centric perspective. Our proposed word-detection module and motif-embedding module within Moss-m7G extract motif information from sequences, transforming the raw sequences from the base level to the motif level and generating embeddings for these motif sequences. Compared with base sequences, motif sequences contain richer contextual information, which is further analyzed and integrated through the Transformer model. To address the data insufficiency noted in prior research, we constructed a comprehensive m7G data set for training and testing. Our experimental results affirm the effectiveness and superiority of Moss-m7G in predicting m7G modification sites. Moreover, the introduction of the word-detection module enhances the interpretability of the model, providing insights into the predictive mechanisms.
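The motif-level representation can be pictured with a toy sliding-window tokenizer; this is a simplified stand-in for the paper's word-detection and motif-embedding modules, not their implementation.

```python
# Toy illustration: turn a base-level RNA sequence into overlapping k-mer
# "motif" tokens (a simplified stand-in for Moss-m7G's motif-level input).
def to_motif_tokens(seq: str, k: int = 3) -> list[str]:
    seq = seq.upper().replace("T", "U")
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(to_motif_tokens("AGGCGUGGAU"))
# ['AGG', 'GGC', 'GCG', 'CGU', 'GUG', 'UGG', 'GGA', 'GAU']
```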
PMID:39011571 | DOI:10.1021/acs.jcim.4c00802
Classification of pain expression images in elderly with hip fractures based on improved ResNet50 network
Front Med (Lausanne). 2024 Jul 1;11:1421800. doi: 10.3389/fmed.2024.1421800. eCollection 2024.
ABSTRACT
The aim of this study was to design an improved ResNet50 network for automatic classification of pain expressions in elderly patients with hip fractures. Leveraging the strengths of deep learning in image recognition, a dataset was built using a hybrid of the Multi-Task Cascaded Convolutional Neural Networks (MTCNN), and the model was implemented with transfer learning on the ResNet50 framework. Hyperparameters were tuned by Bayesian optimization during training. Intraclass correlation was calculated between visual analog scale scores provided independently by clinicians and those produced by the pain expression evaluation assistant (PEEA). The automatic pain expression recognition model constructed with this algorithm achieved an accuracy of 99.6% on the training set, 98.7% on the validation set, and 98.2% on the test set. A substantial kappa coefficient of 0.683 confirmed the clinical efficacy of PEEA. This study demonstrates that the improved ResNet50 network can be used to construct a highly accurate automatic pain expression recognition model for elderly patients with hip fractures.
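A minimal transfer-learning sketch in the spirit of the approach described above, using a torchvision ResNet50 with a replaced classification head; the number of pain classes and the frozen-backbone choice are illustrative assumptions, not details from the paper.

```python
# Sketch: ResNet50 transfer learning for facial pain-expression classification.
# The class count and freezing strategy are illustrative assumptions.
import torch.nn as nn
from torchvision import models

num_classes = 4                      # hypothetical number of pain-expression levels
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

for p in model.parameters():         # freeze the pretrained backbone
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
```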
PMID:39011450 | PMC:PMC11247008 | DOI:10.3389/fmed.2024.1421800
Automated magnetic resonance imaging-based grading of the lumbar intervertebral disc and facet joints
JOR Spine. 2024 Jul 15;7(3):e1353. doi: 10.1002/jsp2.1353. eCollection 2024 Sep.
ABSTRACT
BACKGROUND: Degeneration of both intervertebral discs (IVDs) and facet joints in the lumbar spine has been associated with low back pain, but whether and how IVD/joint degeneration contributes to pain remains an open question. Joint degeneration can be identified by pairing T1 and T2 magnetic resonance imaging (MRI) with analysis techniques such as Pfirrmann grades (IVD degeneration) and Fujiwara scores (facet degeneration). However, these grades are subjective, prompting the need for an automated technique to enhance inter-rater reliability. This study introduces an automated convolutional neural network (CNN) technique trained on clinical MRI images of IVDs and facet joints obtained from the public-access Lumbar Spine MRI Dataset. The primary goal of the automated system is to classify the health of lumbar discs and facet joints according to the Pfirrmann and Fujiwara grading systems and to enhance the inter-rater reliability associated with these grading systems.
METHODS: Performance of the CNN on both the Pfirrmann and Fujiwara scales was measured by comparing the percent agreement, Pearson's correlation, and Fleiss kappa between the classifier's results and the grades assigned by an expert grader.
RESULTS: The CNN demonstrates comparable performance to human graders for both Pfirrmann and Fujiwara grading systems, but with larger errors in Fujiwara grading. The CNN improves the reliability of the Pfirrmann system, aligning with previous findings for IVD assessment.
CONCLUSION: The study highlights the potential of using deep learning in classifying the IVD and facet joint health, and due to the high variability in the Fujiwara scoring system, highlights the need for improved imaging and scoring techniques to evaluate facet joint health. All codes required to use the automatic grading routines described herein are available in the Data Repository for University of Minnesota (DRUM).
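The agreement statistics named in the methods can be computed along these lines; a small sketch with made-up grades, using statsmodels' Fleiss kappa.

```python
# Sketch: percent agreement and Fleiss' kappa between a CNN grader and human
# raters on made-up Pfirrmann grades (rows = subjects, columns = raters).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

grades = np.array([   # columns: CNN, rater 1, rater 2
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 4],
    [1, 2, 1],
    [5, 5, 5],
])

table, _ = aggregate_raters(grades)        # subjects x categories count table
print("Fleiss kappa:", fleiss_kappa(table))
print("CNN vs rater 1 agreement:", np.mean(grades[:, 0] == grades[:, 1]))
```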
PMID:39011368 | PMC:PMC11249006 | DOI:10.1002/jsp2.1353
Benchmarking Deep Learning-Based Image Retrieval of Oral Tumor Histology
Cureus. 2024 Jun 12;16(6):e62264. doi: 10.7759/cureus.62264. eCollection 2024 Jun.
ABSTRACT
INTRODUCTION: Oral tumors necessitate a dependable computer-assisted pathological diagnosis system considering their rarity and diversity. A content-based image retrieval (CBIR) system using deep neural networks has been successfully devised for digital pathology. No CBIR system for oral pathology has been investigated because of the lack of an extensive image database and feature extractors tailored to oral pathology.
MATERIALS AND METHODS: This study uses a large CBIR database constructed from 30 categories of oral tumors to compare deep learning methods as feature extractors.
RESULTS: The highest average area under the receiver operating characteristic curve (AUC) was achieved by models trained on database images using self-supervised learning (SSL) methods (0.900 with SimCLR and 0.897 with TiCo). The generalizability of the models was validated using query images from the same cases taken with smartphones; when smartphone images were tested as queries, the two models again yielded the highest mean AUCs (0.871 with SimCLR and 0.857 with TiCo). To confirm that the retrieved results would be readily interpretable, we also evaluated top-10 mean accuracy for returning the exact diagnostic category and its differential diagnostic categories.
CONCLUSION: Training deep learning models with SSL methods using image data specific to the target site is beneficial for CBIR tasks in oral tumor histology to obtain histologically meaningful results and high performance. This result provides insight into the effective development of a CBIR system to help improve the accuracy and speed of histopathology diagnosis and advance oral tumor research in the future.
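A rough sketch of the retrieval step in such a CBIR system: query and database feature vectors (for example, from an SSL-trained encoder) compared by cosine similarity to return the top-k most similar images. The embeddings below are random placeholders, not features from the study.

```python
# Sketch of embedding-based CBIR retrieval: cosine similarity between a query
# feature vector and a database of precomputed histology embeddings.
import numpy as np

rng = np.random.default_rng(0)
db_embeddings = rng.normal(size=(1000, 512))   # placeholder database features
query = rng.normal(size=512)                   # placeholder query feature

def top_k(query_vec, db, k=10):
    q = query_vec / np.linalg.norm(query_vec)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = d @ q                               # cosine similarities
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

indices, scores = top_k(query, db_embeddings)
print(indices, scores)
```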
PMID:39011227 | PMC:PMC11247249 | DOI:10.7759/cureus.62264
Artificial intelligence automatic measurement technology of lumbosacral radiographic parameters
Front Bioeng Biotechnol. 2024 Jul 1;12:1404058. doi: 10.3389/fbioe.2024.1404058. eCollection 2024.
ABSTRACT
BACKGROUND: Currently, manual measurement of lumbosacral radiological parameters is time-consuming and laborious, and inevitably produces considerable variability. This study aimed to develop and evaluate a deep learning-based model for automatically measuring lumbosacral radiographic parameters on lateral lumbar radiographs.
METHODS: We retrospectively collected 1,240 lateral lumbar radiographs to train the model. The included images were randomly divided into training, validation, and test sets in a ratio of approximately 8:1:1 for model training, fine-tuning, and performance evaluation, respectively. The parameters measured in this study were lumbar lordosis (LL), sacral horizontal angle (SHA), intervertebral space angle (ISA) at L4-L5 and L5-S1 segments, and the percentage of lumbar spondylolisthesis (PLS) at L4-L5 and L5-S1 segments. The model identified key points using image segmentation results and calculated measurements. The average results of key points annotated by the three spine surgeons were used as the reference standard. The model's performance was evaluated using the percentage of correct key points (PCK), intra-class correlation coefficient (ICC), Pearson correlation coefficient (r), mean absolute error (MAE), root mean square error (RMSE), and box plots.
RESULTS: The model's mean differences from the reference standard for LL, SHA, ISA (L4-L5), ISA (L5-S1), PLS (L4-L5), and PLS (L5-S1) were 1.69°, 1.36°, 1.55°, 1.90°, 1.60%, and 2.43%, respectively. When compared with the reference standard, the measurements of the model had better correlation and consistency (LL, SHA, and ISA: ICC = 0.91-0.97, r = 0.91-0.96, MAE = 1.89-2.47, RMSE = 2.32-3.12; PLS: ICC = 0.90-0.92, r = 0.90-0.91, MAE = 1.95-2.93, RMSE = 2.52-3.70), and the differences between them were not statistically significant (p > 0.05).
CONCLUSION: The model developed in this study could correctly identify key vertebral points on lateral lumbar radiographs and automatically calculate lumbosacral radiographic parameters. The measurement results of the model had good consistency and reliability compared to manual measurements. With additional training and optimization, this technology holds promise for future measurements in clinical practice and analysis of large datasets.
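The geometric step of turning detected key points into an angle can be sketched as below; the landmark coordinates are purely illustrative and do not reproduce the authors' parameter definitions.

```python
# Illustrative angle computation from detected vertebral key points:
# the angle between two lines (e.g., endplates), each defined by two points.
import numpy as np

def line_angle_deg(p1, p2, q1, q2):
    v1 = np.asarray(p2, float) - np.asarray(p1, float)
    v2 = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical pixel coordinates of two endplate lines on a lateral radiograph.
print(line_angle_deg((120, 310), (180, 300), (118, 402), (182, 420)))
```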
PMID:39011157 | PMC:PMC11246908 | DOI:10.3389/fbioe.2024.1404058
IRTCI: Item Response Theory for Categorical Imputation
Res Sq [Preprint]. 2024 Jul 2:rs.3.rs-4529519. doi: 10.21203/rs.3.rs-4529519/v1.
ABSTRACT
Most datasets suffer from partial or complete missing values, which has downstream limitations on the available models on which to test the data and on any statistical inferences that can be made from the data. Several imputation techniques have been designed to replace missing data with stand-in values. The various approaches have implications for calculating clinical scores, model building, and model testing. The work showcased here offers a novel means for categorical imputation based on item response theory (IRT) and compares it against several methodologies currently used in the machine learning field, including k-nearest neighbors (kNN), multiple imputation by chained equations (MICE), and the Amazon Web Services (AWS) deep learning method Datawig. Analyses comparing these techniques were performed on three different datasets that represented ordinal, nominal, and binary categories. The data were modified so that they also varied on both the proportion of data missing and the systematization of the missing data. Two different assessments of performance were conducted: accuracy in reproducing the missing values, and predictive performance using the imputed data. Results demonstrated that the new method, Item Response Theory for Categorical Imputation (IRTCI), fared quite well compared to currently used methods, outperforming several of them in many conditions. Given the theoretical basis for the new approach, and the unique generation of probabilistic terms for determining category belonging for missing cells, IRTCI offers a viable alternative to current approaches.
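For orientation, the two classical baselines mentioned above (kNN and MICE-style chained equations) are available in scikit-learn; the snippet below is a generic sketch on numeric placeholder data, not the IRTCI implementation.

```python
# Sketch of two baseline imputers compared in the paper: kNN imputation and a
# MICE-style iterative imputer (scikit-learn). Categorical variables would need
# ordinal or one-hot encoding first; the data below are placeholders.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 1.0],
              [np.nan, 4.0, 2.0],
              [2.0, 3.0, 1.0]])

print(KNNImputer(n_neighbors=2).fit_transform(X))
print(IterativeImputer(max_iter=10, random_state=0).fit_transform(X))
```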
PMID:39011102 | PMC:PMC11247932 | DOI:10.21203/rs.3.rs-4529519/v1
Analyzing heterogeneity in Alzheimer Disease using multimodal normative modeling on imaging-based ATN biomarkers
ArXiv [Preprint]. 2024 Jul 1:arXiv:2404.05748v2.
ABSTRACT
INTRODUCTION: Previous studies have applied normative modeling on a single neuroimaging modality to investigate Alzheimer Disease (AD) heterogeneity. We employed a deep learning-based multimodal normative framework to analyze individual-level variation across ATN (amyloid-tau-neurodegeneration) imaging biomarkers.
METHODS: We selected cross-sectional discovery (n = 665) and replication cohorts (n = 430) with available T1-weighted MRI, amyloid and tau PET. Normative modeling estimated individual-level abnormal deviations in amyloid-positive individuals compared to amyloid-negative controls. Regional abnormality patterns were mapped at different clinical group levels to assess intra-group heterogeneity. An individual-level disease severity index (DSI) was calculated using both the spatial extent and magnitude of abnormal deviations across ATN.
RESULTS: Greater intra-group heterogeneity in ATN abnormality patterns was observed in more severe clinical stages of AD. Higher DSI was associated with worse cognitive function and increased risk of disease progression.
DISCUSSION: Subject-specific abnormality maps across ATN reveal the heterogeneous impact of AD on the brain.
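A much-simplified numpy sketch of the normative-deviation idea: regional z-scores relative to a control distribution, summarized into a toy disease severity index combining the extent and magnitude of abnormal deviations. The deep learning normative model itself is not shown, and the threshold is an assumption.

```python
# Simplified normative-modeling sketch: regional z-scores of one patient
# against amyloid-negative controls, plus a toy disease severity index (DSI)
# combining the extent and magnitude of abnormal deviations.
import numpy as np

rng = np.random.default_rng(1)
controls = rng.normal(loc=1.2, scale=0.1, size=(400, 90))   # controls x regions
patient = rng.normal(loc=1.4, scale=0.2, size=90)           # one patient's regions

z = (patient - controls.mean(axis=0)) / controls.std(axis=0)
abnormal = np.abs(z) > 1.96                       # regions beyond the 95% limits
dsi = abnormal.mean() * np.abs(z[abnormal]).mean() if abnormal.any() else 0.0
print(f"abnormal regions: {abnormal.sum()}, DSI: {dsi:.2f}")
```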
PMID:39010871 | PMC:PMC11247918
Gender-based linguistic differences in letters of recommendation for rhinology fellowship over time: A dual-institutional follow-up study using natural language processing and deep learning
Int Forum Allergy Rhinol. 2024 Jul 16. doi: 10.1002/alr.23411. Online ahead of print.
ABSTRACT
This follow-up dual-institutional and longitudinal study further evaluated for underlying gender biases in LORs for rhinology fellowship. Explicit and implicit linguistic gender bias was found, heavily favoring male applicants.
PMID:39010845 | DOI:10.1002/alr.23411
Association between myosteatosis and impaired glucose metabolism: A deep learning whole-body magnetic resonance imaging population phenotyping approach
J Cachexia Sarcopenia Muscle. 2024 Jul 15. doi: 10.1002/jcsm.13527. Online ahead of print.
ABSTRACT
BACKGROUND: There is increasing evidence that myosteatosis, which is currently not assessed in clinical routine, plays an important role in risk estimation in individuals with impaired glucose metabolism, as it is associated with the progression of insulin resistance. With advances in artificial intelligence, automated and accurate algorithms have become feasible to fill this gap.
METHODS: In this retrospective study, we developed and tested a fully automated deep learning model using data from two prospective cohort studies (German National Cohort [NAKO] and Cooperative Health Research in the Region of Augsburg [KORA]) to quantify myosteatosis on whole-body T1-weighted Dixon magnetic resonance imaging as (1) intramuscular adipose tissue (IMAT; the current standard) and (2) quantitative skeletal muscle (SM) fat fraction (SMFF). Subsequently, we investigated the two measures for their discrimination of and association with impaired glucose metabolism beyond baseline demographics (age, sex and body mass index [BMI]) and cardiometabolic risk factors (lipid panel, systolic blood pressure, smoking status and alcohol consumption) in asymptomatic individuals from the KORA study. Impaired glucose metabolism was defined as impaired fasting glucose or impaired glucose tolerance (140-200 mg/dL) or prevalent diabetes mellitus.
RESULTS: Model performance was high, with Dice coefficients of ≥0.81 for IMAT and ≥0.91 for SM in the internal (NAKO) and external (KORA) testing sets. In the target population (380 KORA participants: mean age of 53.6 ± 9.2 years, BMI of 28.2 ± 4.9 kg/m2, 57.4% male), individuals with impaired glucose metabolism (n = 146; 38.4%) were older and more likely men and showed a higher cardiometabolic risk profile, higher IMAT (4.5 ± 2.2% vs. 3.9 ± 1.7%) and higher SMFF (22.0 ± 4.7% vs. 18.9 ± 3.9%) compared to normoglycaemic controls (all P ≤ 0.005). SMFF showed better discrimination for impaired glucose metabolism than IMAT (area under the receiver operating characteristic curve [AUC] 0.693 vs. 0.582, 95% confidence interval [CI] [0.06-0.16]; P < 0.001) but was not significantly different from BMI (AUC 0.733 vs. 0.693, 95% CI [-0.09 to 0.01]; P = 0.15). In univariable logistic regression, IMAT (odds ratio [OR] = 1.18, 95% CI [1.06-1.32]; P = 0.004) and SMFF (OR = 1.19, 95% CI [1.13-1.26]; P < 0.001) were associated with a higher risk of impaired glucose metabolism. This signal remained robust after multivariable adjustment for baseline demographics and cardiometabolic risk factors for SMFF (OR = 1.10, 95% CI [1.01-1.19]; P = 0.028) but not for IMAT (OR = 1.14, 95% CI [0.97-1.33]; P = 0.11).
CONCLUSIONS: Quantitative SMFF, but not IMAT, is an independent predictor of impaired glucose metabolism, and discrimination is not significantly different from BMI, making it a promising alternative for the currently established approach. Automated methods such as the proposed model may provide a feasible option for opportunistic screening of myosteatosis and, thus, a low-cost personalized risk assessment solution.
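The kind of adjusted logistic-regression association reported above can be sketched with statsmodels: odds of impaired glucose metabolism per unit of SMFF, adjusted for baseline covariates. The column names and simulated data are hypothetical.

```python
# Sketch: adjusted logistic regression for impaired glucose metabolism vs.
# skeletal muscle fat fraction (SMFF), with odds ratios and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 380
df = pd.DataFrame({
    "impaired_glucose": rng.integers(0, 2, n),   # hypothetical outcome
    "smff": rng.normal(20, 4, n),                # SMFF in percent
    "age": rng.normal(54, 9, n),
    "bmi": rng.normal(28, 5, n),
    "male": rng.integers(0, 2, n),
})

fit = smf.logit("impaired_glucose ~ smff + age + bmi + male", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)                 # OR per one-unit increase
conf_int = np.exp(fit.conf_int())                # 95% CI for the ORs
print(pd.concat([odds_ratios, conf_int], axis=1))
```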
PMID:39009381 | DOI:10.1002/jcsm.13527
Generating Synthetic MR Spectroscopic Imaging Data with Generative Adversarial Networks to Train Machine Learning Models
Magn Reson Med Sci. 2024 Jul 12. doi: 10.2463/mrms.mp.2023-0125. Online ahead of print.
ABSTRACT
PURPOSE: To develop a new method to generate synthetic MR spectroscopic imaging (MRSI) data for training machine learning models.
METHODS: This study targeted routine MRI examination protocols with single voxel spectroscopy (SVS). A novel model derived from pix2pix generative adversarial networks was proposed to generate synthetic MRSI data using MRI and SVS data as inputs. T1- and T2-weighted, SVS, and reference MRSI data were acquired from healthy brains with clinically available sequences. The proposed model was trained to generate synthetic MRSI data. Quantitative evaluation involved calculating the mean squared error (MSE) against the reference and the metabolite ratio values. The effects of the location and number of SVS data on the quality of the synthetic MRSI data were investigated using the MSE.
RESULTS: The synthetic MRSI data generated from the proposed model were visually closer to the reference. The 95% confidence interval (CI) of the metabolite ratio value of synthetic MRSI data overlapped with the reference for seven of eight metabolite ratios. The MSEs tended to be lower in the same location than in different locations. The MSEs among groups of numbers of SVS data were not significantly different.
CONCLUSION: A new method was developed to generate MRSI data by integrating MRI and SVS data. Our method can potentially increase the volume of MRSI training data for other machine learning models by adding SVS acquisition to routine MRI examinations.
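The quantitative checks described above, MSE against the reference and a 95% CI on a metabolite ratio, can be sketched as follows with placeholder arrays.

```python
# Sketch of the evaluation step: MSE between synthetic and reference MRSI maps,
# and a 95% confidence interval for a metabolite ratio (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
reference = rng.normal(size=(16, 16))
synthetic = reference + rng.normal(scale=0.05, size=(16, 16))

mse = np.mean((synthetic - reference) ** 2)

ratios = rng.normal(loc=1.6, scale=0.2, size=20)   # hypothetical per-voxel metabolite ratio
ci = stats.t.interval(0.95, len(ratios) - 1, loc=ratios.mean(), scale=stats.sem(ratios))
print(f"MSE={mse:.4f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```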
PMID:39010240 | DOI:10.2463/mrms.mp.2023-0125
Micro-CT determination of the porosity of two tricalcium silicate sealers applied using three obturation techniques
J Oral Sci. 2024;66(3):163-168. doi: 10.2334/josnusd.24-0031.
ABSTRACT
PURPOSE: Using X-ray micro-computed tomography (micro-CT), the aim of this study was to measure the porosity of two tricalcium silicate sealers (EndoSequence BC and NeoSealer Flo) applied using three obturation techniques (single-cone, warm-vertical, and cold-lateral) to six single-rooted human teeth.
METHODS: Six extracted, single-rooted human teeth were shaped with ProTaper Next rotary files and obturated with EndoSequence BC or NeoSealer Flo sealers and gutta-percha (GP) using one of the three techniques above. Micro-CT was used to map the full length of the canals. Deep learning cross-sectional segmentation was used to analyze image slices of the apical (0-2 mm) and coronal (14-16 mm from the apex) regions (n = 230-261 per tooth) for the areas of GP and sealer, as well as porosity. Median porosity (%) with interquartile range was calculated, and the results were statistically analyzed with the Kruskal-Wallis test.
RESULTS: In the apical region, EndoSequence BC had significantly fewer pores than NeoSealer Flo with the single-cone obturation (% median-interquartile range, IQR: 0.00-1.62) and warm-vertical condensation (5.57-10.32) techniques, whereas in the coronal region, NeoSealer Flo had significantly fewer pores than EndoSequence BC with these two techniques (0.39-5.02) and (0.10-0.19), respectively. There was no significant difference in porosity between the two sealers for the cold-lateral condensation technique in both the apical and coronal regions.
CONCLUSION: For optimal obturation, the choice of technique and sealer is critical.
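The porosity metric itself is straightforward once a labeled segmentation is available: the pore fraction within the filled canal cross-section. A toy sketch with a hypothetical label encoding:

```python
# Toy porosity computation from a labeled micro-CT slice:
# 0 = background, 1 = gutta-percha, 2 = sealer, 3 = pore (hypothetical labels).
import numpy as np

slice_labels = np.array([[0, 1, 1, 2],
                         [0, 1, 3, 2],
                         [0, 2, 2, 2]])

filling = np.isin(slice_labels, [1, 2, 3]).sum()   # total filled canal area
pores = (slice_labels == 3).sum()
print(f"porosity: {100 * pores / filling:.1f}%")   # pore area fraction
```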
PMID:39010164 | DOI:10.2334/josnusd.24-0031
UAlign: pushing the limit of template-free retrosynthesis prediction with unsupervised SMILES alignment
J Cheminform. 2024 Jul 15;16(1):80. doi: 10.1186/s13321-024-00877-2.
ABSTRACT
MOTIVATION: Retrosynthesis planning poses a formidable challenge in the organic chemical industry, particularly in pharmaceuticals. Single-step retrosynthesis prediction, a crucial step in the planning process, has witnessed a surge in interest in recent years due to advancements in AI for science. Various deep learning-based methods have been proposed for this task, incorporating diverse levels of dependency on additional chemical knowledge.
RESULTS: This paper introduces UAlign, a template-free graph-to-sequence pipeline for retrosynthesis prediction. By combining graph neural networks and Transformers, our method can more effectively leverage the inherent graph structure of molecules. Based on the fact that the majority of molecule structures remain unchanged during a chemical reaction, we propose a simple yet effective SMILES alignment technique to facilitate the reuse of unchanged structures for reactant generation. Extensive experiments show that our method substantially outperforms state-of-the-art template-free and semi-template-based approaches. Importantly, our template-free method achieves effectiveness comparable to, or even surpasses, established powerful template-based methods.
SCIENTIFIC CONTRIBUTION: We present a novel graph-to-sequence template-free retrosynthesis prediction pipeline that overcomes the limitations of Transformer-based methods in molecular representation learning and insufficient utilization of chemical information. We propose an unsupervised learning mechanism for establishing product-atom correspondence with reactant SMILES tokens, achieving even better results than supervised SMILES alignment methods. Extensive experiments demonstrate that UAlign significantly outperforms state-of-the-art template-free methods and rivals or surpasses template-based approaches, with up to 5% (top-5) and 5.4% (top-10) increased accuracy over the strongest baseline.
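Top-k exact-match accuracy for single-step retrosynthesis is typically computed on canonicalized SMILES; the small sketch below uses RDKit for canonicalization (RDKit usage here is an assumption for illustration, not taken from the paper).

```python
# Sketch: top-k exact-match accuracy for single-step retrosynthesis,
# comparing canonicalized predicted reactant SMILES with the ground truth.
from rdkit import Chem

def canon(smiles: str) -> str:
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else ""

def top_k_accuracy(predictions, truths, k=5):
    hits = 0
    for preds, truth in zip(predictions, truths):
        if canon(truth) in {canon(p) for p in preds[:k]}:
            hits += 1
    return hits / len(truths)

preds = [["CCO.CC(=O)O", "CCOC(C)=O"], ["c1ccccc1Br.CO"]]   # ranked candidates
truth = ["CCO.CC(=O)O", "c1ccccc1O"]
print(top_k_accuracy(preds, truth, k=2))
```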
PMID:39010144 | DOI:10.1186/s13321-024-00877-2
Deep learning approach to femoral AVN detection in digital radiography: differentiating patients and pre-collapse stages
BMC Musculoskelet Disord. 2024 Jul 16;25(1):547. doi: 10.1186/s12891-024-07669-7.
ABSTRACT
OBJECTIVE: This study aimed to evaluate a new deep-learning model for diagnosing avascular necrosis of the femoral head (AVNFH) by analyzing pelvic anteroposterior digital radiography.
METHODS: The study sample included 1167 hips. The radiographs were independently classified into 6 stages by a radiologist using the corresponding MRIs. The radiographs were then used to train and test the project's deep learning models, including an SVM and an ANFIS layer, implemented in Python with the TensorFlow library. In the last step, the test set of hip radiographs was provided to two independent radiologists with different levels of work experience, and their diagnostic performance was compared with that of the models using the F1 score and McNemar test.
RESULTS: The performance of SVM for AVNFH detection (AUC = 82.88%) was slightly higher than that of less experienced radiologists (79.68%) and slightly lower than that of experienced radiologists (88.4%), without reaching significance (p-value > 0.05). For pre-collapse AVNFH detection, SVM (AUC = 73.58%) showed significantly higher performance than less experienced radiologists (AUC = 60.70%, p-value < 0.001), whereas no significant difference was noted between experienced radiologists and SVM. The ANFIS algorithm for AVNFH detection (AUC = 86.60%) showed significantly higher performance than less experienced radiologists (AUC = 79.68%, p-value = 0.04); although its performance was lower than that of experienced radiologists, the difference was not statistically significant (AUC = 88.40%, p-value = 0.20).
CONCLUSIONS: Our study has shed light on the remarkable capabilities of SVM and ANFIS as diagnostic tools for AVNFH detection in radiography. Their ability to achieve high accuracy with remarkable efficiency makes them promising candidates for early detection and intervention, ultimately contributing to improved patient outcomes.
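A generic scikit-learn sketch of training an SVM on image-derived features and reporting ROC AUC, in the spirit of the comparison above; it does not reproduce the authors' pipeline or the ANFIS component, and the feature vectors are synthetic.

```python
# Generic sketch: SVM classifier on radiograph-derived feature vectors with
# ROC-AUC evaluation (not the authors' pipeline; features are synthetic).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(1167, 64))            # placeholder features, one row per hip
y = rng.integers(0, 2, 1167)               # 1 = AVNFH, 0 = normal (hypothetical)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```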
PMID:39010001 | DOI:10.1186/s12891-024-07669-7
Radiogenomics as an Integrated Approach to Glioblastoma Precision Medicine
Curr Oncol Rep. 2024 Jul 16. doi: 10.1007/s11912-024-01580-z. Online ahead of print.
ABSTRACT
PURPOSE OF REVIEW: Isocitrate dehydrogenase wild-type glioblastoma is the most aggressive primary brain tumour in adults. Its infiltrative nature and heterogeneity confer a dismal prognosis, despite multimodal treatment. Precision medicine is increasingly advocated to improve survival rates in glioblastoma management; however, conventional neuroimaging techniques are insufficient in providing the detail required for accurate diagnosis of this complex condition.
RECENT FINDINGS: Advanced magnetic resonance imaging allows more comprehensive understanding of the tumour microenvironment. Combining diffusion and perfusion magnetic resonance imaging to create a multiparametric scan enhances diagnostic power and can overcome the unreliability of tumour characterisation by standard imaging. Recent progress in deep learning algorithms establishes their remarkable ability in image-recognition tasks. Integrating these with multiparametric scans could transform the diagnosis and monitoring of patients by ensuring that the entire tumour is captured. As a corollary, radiomics has emerged as a powerful approach to offer insights into diagnosis, prognosis, treatment, and tumour response through extraction of information from radiological scans, and transformation of these tumour characteristics into quantitative data. Radiogenomics, which links imaging features with genomic profiles, has exhibited its ability in characterising glioblastoma, and determining therapeutic response, with the potential to revolutionise management of glioblastoma. The integration of deep learning algorithms into radiogenomic models has established an automated, highly reproducible means to predict glioblastoma molecular signatures, further aiding prognosis and targeted therapy. However, challenges including lack of large cohorts, absence of standardised guidelines and the 'black-box' nature of deep learning algorithms, must first be overcome before this workflow can be applied in clinical practice.
PMID:39009914 | DOI:10.1007/s11912-024-01580-z
An efficient learning based approach for automatic record deduplication with benchmark datasets
Sci Rep. 2024 Jul 15;14(1):16254. doi: 10.1038/s41598-024-63242-1.
ABSTRACT
With technological innovations, enterprises in the real world are managing every iota of data, as it can be mined to derive business intelligence (BI). However, when data comes from multiple sources, it may result in duplicate records. Because data is given paramount importance, it is also important to eliminate duplicate entities for data integration, performance, and resource optimization. Deep learning has lately offered exciting possibilities for realizing reliable record-deduplication systems through a learning-based approach. Deep ER is one of the deep learning-based methods used recently for eliminating duplicates in structured data. Using it as a reference model, in this paper we propose a framework known as Enhanced Deep Learning-based Record Deduplication (EDL-RD) for further improving performance. Towards this end, we exploited a variant of Long Short-Term Memory (LSTM) along with various attribute compositions, similarity metrics, and numerical and null value resolution. We propose an algorithm known as Efficient Learning based Record Deduplication (ELbRD), which extends the reference model with the aforementioned enhancements. An empirical study has revealed that the proposed framework with extensions outperforms existing methods.
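One way to picture the learning-based deduplication setup: derive attribute-wise similarity features for a pair of records and feed them to a match/non-match classifier. The snippet below uses simple string similarity as a stand-in for the LSTM-based attribute encodings described above.

```python
# Toy deduplication sketch: attribute-wise string similarities for a record
# pair, to be fed to a match/non-match classifier (a stand-in for the LSTM
# attribute encodings used by the proposed framework).
from difflib import SequenceMatcher

def pair_features(rec_a: dict, rec_b: dict, fields) -> list[float]:
    return [SequenceMatcher(None, str(rec_a.get(f, "")).lower(),
                            str(rec_b.get(f, "")).lower()).ratio()
            for f in fields]

a = {"name": "Acme Corp.", "city": "New York", "phone": "212-555-0101"}
b = {"name": "ACME Corporation", "city": "New York", "phone": "2125550101"}
print(pair_features(a, b, ["name", "city", "phone"]))
```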
PMID:39009682 | DOI:10.1038/s41598-024-63242-1
Deep learning application of vertebral compression fracture detection using mask R-CNN
Sci Rep. 2024 Jul 15;14(1):16308. doi: 10.1038/s41598-024-67017-6.
ABSTRACT
Vertebral compression fractures (VCFs) of the thoracolumbar spine are commonly caused by osteoporosis or result from traumatic events. Early diagnosis of vertebral compression fractures can prevent further damage to patients. When assessing these fractures, plain radiographs are used as the primary diagnostic modality. In this study, we developed a deep learning-based fracture detection model that could be used as a tool for primary care in the orthopedic department. We constructed a VCF dataset using 487 lateral radiographs, which included 598 fractures in the L1-T11 vertebrae. For detecting VCFs, a Mask R-CNN model was trained and optimized and compared with three other popular instance segmentation models: Cascade Mask R-CNN, YOLACT, and YOLOv5. With Mask R-CNN, we achieved the highest mean average precision of 0.58 and were able to locate each fracture pixel-wise. In addition, the model showed high overall sensitivity, specificity, and accuracy, indicating that it detected fractures accurately and without misdiagnosis. Our model can be a potential tool for detecting VCFs from a simple radiograph and assisting doctors in making appropriate decisions in initial diagnosis.
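A short torchvision sketch of configuring a Mask R-CNN for a single 'fracture' class plus background, along the lines described above; the layer replacements follow the standard torchvision fine-tuning recipe and are not taken from the paper.

```python
# Sketch: torchvision Mask R-CNN configured for one foreground class
# (vertebral compression fracture) plus background.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2   # background + fracture
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask-prediction head accordingly.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
```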
PMID:39009647 | DOI:10.1038/s41598-024-67017-6