Deep learning

Development of AI-Based Diagnostic Algorithm for Nasal Bone Fracture Using Deep Learning

Wed, 2024-01-31 06:00

J Craniofac Surg. 2024 Jan-Feb 01;35(1):29-32. doi: 10.1097/SCS.0000000000009856. Epub 2023 Nov 13.

ABSTRACT

Facial bone fractures are relatively common, and the nasal bone is the most frequently fractured facial bone. Computed tomography is the gold standard for diagnosing such fractures. Most nasal bone fractures can be treated using a closed reduction. However, delayed diagnosis may cause nasal deformity or other complications that are difficult and expensive to treat. In this study, the authors developed an algorithm for diagnosing nasal fractures by training an artificial intelligence model on facial bone computed tomography images through deep learning. The algorithm achieved significant concordance with physicians' readings, with 100% sensitivity and 77% specificity. Herein, the authors report the results of a pilot study on the first stage of developing an algorithm for analyzing fractures in the facial bone.

PMID:38294297 | DOI:10.1097/SCS.0000000000009856

Categories: Literature Watch

Computational Chemistry in Structure-Based Solute Carrier Transporter Drug Design: Recent Advances and Future Perspectives

Wed, 2024-01-31 06:00

J Chem Inf Model. 2024 Jan 31. doi: 10.1021/acs.jcim.3c01736. Online ahead of print.

ABSTRACT

Solute carrier transporters (SLCs) are a class of important transmembrane proteins that are involved in the transportation of diverse solute ions and small molecules into cells. There are approximately 450 SLCs within the human body, and more than a quarter of them are emerging as attractive therapeutic targets for multiple complex diseases, e.g., depression, cancer, and diabetes. However, only 44 unique transporters (∼9.8% of the SLC superfamily) with 3D structures and specific binding sites have been reported. To design innovative and effective drugs targeting diverse SLCs, there are a number of obstacles that need to be overcome. However, computational chemistry, including physics-based molecular modeling and machine learning- and deep learning-based artificial intelligence (AI), provides an alternative and complementary way to the classical drug discovery approach. Here, we present a comprehensive overview of recent advances and existing challenges of computational techniques in structure-based drug design of SLCs from three main aspects: (i) characterizing multiple conformations of the proteins during the functional process of transportation, (ii) identifying druggable sites, especially cryptic allosteric ones, on the transporters for substrate and drug binding, and (iii) discovering diverse small molecules or synthetic protein binders targeting the binding sites. This work is expected to provide guidelines for a deep understanding of the structure and function of the SLC superfamily to facilitate rational design of novel modulators of the transporters with the aid of state-of-the-art computational chemistry technologies including artificial intelligence.

PMID:38294194 | DOI:10.1021/acs.jcim.3c01736

Categories: Literature Watch

Rapid detection of multi-scale cotton pests based on lightweight GBW-YOLOv5 model

Wed, 2024-01-31 06:00

Pest Manag Sci. 2024 Jan 31. doi: 10.1002/ps.7978. Online ahead of print.

ABSTRACT

BACKGROUND: Pest infestation is one of the primary causes of decreased cotton yield and quality. Rapid and accurate identification of cotton pest categories is essential for producers to implement effective and expeditious control measures. Existing multi-scale cotton pest detection technology still suffers from limited detection accuracy and speed. This study proposed the pruned GBW-YOLOv5 (Ghost-BiFPN-WIoU You Only Look Once version 5), a novel model for the rapid detection of cotton pests.

RESULTS: The detection performance of the pruned GBW-YOLOv5 model for cotton pests was evaluated based on the self-built cotton pest dataset. In comparison with the original YOLOv5 model, the pruned GBW-YOLOv5 model demonstrated significant reductions in complexity, size, and parameters by 68.4%, 66.7%, and 68.2%, respectively. Remarkably, the mean average precision (mAP) decreased by a mere 3.8%. The pruned GBW-YOLOv5 model outperformed other classic object detection models, achieving an outstanding detection speed of 114.9 FPS.

CONCLUSION: The methodology proposed by our research enabled rapid and accurate identification of cotton pests, laying a solid foundation for the implementation of precise pest control measures. The pruned GBW-YOLOv5 model provides theoretical and technical support for detecting cotton pests under field conditions. © 2024 Society of Chemical Industry.

PMID:38294076 | DOI:10.1002/ps.7978

Categories: Literature Watch

Deep learning application to automated classification of recommendations made by hospital pharmacists during medication prescription review

Wed, 2024-01-31 06:00

Am J Health Syst Pharm. 2024 Jan 31:zxae011. doi: 10.1093/ajhp/zxae011. Online ahead of print.

ABSTRACT

DISCLAIMER: In an effort to expedite the publication of articles, AJHP is posting manuscripts online as soon as possible after acceptance. Accepted manuscripts have been peer-reviewed and copyedited, but are posted online before technical formatting and author proofing. These manuscripts are not the final version of record and will be replaced with the final article (formatted per AJHP style and proofed by the authors) at a later time.

PURPOSE: Recommendations to improve therapeutics are proposals made by pharmacists during the prescription review process to address suboptimal use of medicines. Recommendations are generated daily as text documents but are rarely reused beyond their primary use to alert prescribers and caregivers. If recommendation data were easier to summarize, they could be used retrospectively to improve safeguards for better prescribing. The objective of this work was to train a deep learning algorithm for automated recommendation classification in order to make better use of the large amount of recommendation data.

METHODS: The study was conducted in a French university hospital, at which recommendation data were collected throughout 2017. Data from the first 6 months of 2017 were labeled by 2 pharmacists who assigned recommendations to 1 of the 29 possible classes of the French Society of Clinical Pharmacy classification. A deep neural network classifier was trained to predict the class of recommendations.
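
To make the classification task described in the METHODS concrete, here is a hedged sketch of a small Keras text classifier mapping a free-text recommendation to 1 of the 29 classes of the French Society of Clinical Pharmacy classification. The tokenizer settings, architecture, and hyperparameters are illustrative assumptions, not the study's model, and `train_texts`/`train_labels` are hypothetical placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 29          # French Society of Clinical Pharmacy classification
VOCAB_SIZE = 20000        # assumption
SEQ_LEN = 64              # assumption

# Tokenizes raw recommendation text into integer sequences of fixed length.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=VOCAB_SIZE,
                                                output_sequence_length=SEQ_LEN)
# vectorizer.adapt(train_texts)  # fit the vocabulary on the labeled recommendations

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorizer,
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_texts, train_labels, validation_split=0.1, epochs=5)
```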

RESULTS: In total, 27,699 labeled recommendations from the first half of 2017 were used to train and evaluate a classifier. The prediction accuracy calculated on a validation dataset was 78.0%. We also predicted classes for unlabeled recommendations collected during the second half of 2017. Of the 4,460 predictions reviewed, 67 required correction. When these additional labeled data were concatenated with the original dataset and the neural network was retrained, accuracy reached 81.0%.

CONCLUSION: To facilitate analysis of recommendations, we have implemented an automated classification system using deep learning that achieves respectable performance. This tool can help to retrospectively highlight the clinical significance of daily medication reviews performed by hospital clinical pharmacists.

PMID:38294025 | DOI:10.1093/ajhp/zxae011

Categories: Literature Watch

Secure and privacy improved cloud user authentication in biometric multimodal multi fusion using blockchain-based lightweight deep instance-based DetectNet

Wed, 2024-01-31 06:00

Network. 2024 Jan 31:1-19. doi: 10.1080/0954898X.2024.2304707. Online ahead of print.

ABSTRACT

This research introduces an innovative solution addressing the challenge of user authentication in cloud-based systems, emphasizing heightened security and privacy. The proposed system integrates multimodal biometrics, deep learning (instance-based learning-based DetectNet, IL-DN), privacy-preserving techniques, and blockchain technology. Motivated by the escalating need for robust authentication methods in the face of evolving cyber threats, the research aims to overcome the trade-off between accuracy and user privacy inherent in current authentication methods. The proposed system swiftly and accurately identifies users using multimodal biometric data through IL-DN. To address privacy concerns, advanced techniques are employed to encode biometric data, ensuring user privacy. Additionally, the system utilizes blockchain technology to establish a decentralized, tamper-proof, and transparent authentication system. This is reinforced by smart contracts and an enhanced Proof of Work (PoW) mechanism. The research rigorously evaluates performance metrics, encompassing authentication accuracy, privacy preservation, security, and resource utilization, offering a comprehensive solution for secure and privacy-enhanced user authentication in cloud-based environments. This work significantly contributes to filling the existing research gap in this critical domain.

PMID:38293964 | DOI:10.1080/0954898X.2024.2304707

Categories: Literature Watch

Deep-Learning Reconstruction of High-Resolution CT Improves Interobserver Agreement for the Evaluation of Pulmonary Fibrosis

Wed, 2024-01-31 06:00

Can Assoc Radiol J. 2024 Jan 31:8465371241228468. doi: 10.1177/08465371241228468. Online ahead of print.

ABSTRACT

Objective: This study aimed to investigate whether deep-learning reconstruction (DLR) improves interobserver agreement in the evaluation of honeycombing for patients with interstitial lung disease (ILD) who underwent high-resolution computed tomography (CT) compared with hybrid iterative reconstruction (HIR). Methods: In this retrospective study, 35 consecutive patients suspected of having ILD who underwent CT including the chest region were included. High-resolution CT images of the unilateral lung with DLR and HIR were reconstructed for the right and left lungs. A radiologist placed regions of interest on the lung and measured the standard deviation of CT attenuation (i.e., quantitative image noise). In the qualitative image analyses, 5 blinded readers assessed the presence of honeycombing and reticulation, qualitative image noise, artifacts, and overall image quality using a 5-point scale (except for artifacts, which were evaluated using a 3-point scale). Results: The quantitative and qualitative image noise in DLR was remarkably reduced compared to that in HIR (P < .001). Artifacts and overall image quality of DLR were significantly improved compared to those of HIR (P < .001 for 4 out of 5 readers). Interobserver agreement in the evaluation of honeycombing and reticulation for DLR (0.557 [0.450-0.693] and 0.525 [0.470-0.541], respectively) was higher than that for HIR (0.321 [0.211-0.520] and 0.470 [0.354-0.533], respectively). A statistically significant difference was found for honeycombing (P = .014). Conclusions: DLR improved interobserver agreement in the evaluation of honeycombing in patients with ILD on CT compared to HIR.

PMID:38293802 | DOI:10.1177/08465371241228468

Categories: Literature Watch

Compound Activity Prediction with Dose-Dependent Transcriptomic Profiles and Deep Learning

Wed, 2024-01-31 06:00

J Chem Inf Model. 2024 Jan 31. doi: 10.1021/acs.jcim.3c01855. Online ahead of print.

ABSTRACT

Predicting compound activity in assays is a long-standing challenge in drug discovery. Computational models based on compound-induced gene expression signatures from a single profiling assay have shown promise toward predicting compound activity in other, seemingly unrelated, assays. Applications of such models include predicting mechanisms-of-action (MoA) for phenotypic hits, identifying off-target activities, and identifying polypharmacologies. Here, we introduce transcriptomics-to-activity transformer (TAT) models that leverage gene expression profiles observed over compound treatment at multiple concentrations to predict the compound activity in other biochemical or cellular assays. We built TAT models based on gene expression data from a RASL-seq assay to predict the activity of 2692 compounds in 262 dose-response assays. We obtained useful models for 51% of the assays, as determined through a realistic held-out set. Prospectively, we experimentally validated the activity predictions of a TAT model in a malaria inhibition assay. With a 63% hit rate, TAT successfully identified several submicromolar malaria inhibitors. Our results thus demonstrate the potential of transcriptomic responses over compound concentration and the TAT modeling framework as a cost-efficient way to identify the bioactivities of promising compounds across many assays.

PMID:38293736 | DOI:10.1021/acs.jcim.3c01855

Categories: Literature Watch

Attitudes, knowledge, and perceptions of dentists and dental students toward artificial intelligence: a systematic review

Wed, 2024-01-31 06:00

J Taibah Univ Med Sci. 2024 Jan 12;19(2):327-337. doi: 10.1016/j.jtumed.2023.12.010. eCollection 2024 Apr.

ABSTRACT

OBJECTIVES: This research was aimed at assessing comprehension, attitudes, and perspectives regarding artificial intelligence (AI) in dentistry. The null hypothesis was that there is a lack of foundational understanding of AI in dentistry.

METHODS: This systematic review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted in May 2023. The eligibility criteria included cross-sectional studies published in English until July 2023, focusing solely on dentists or dental students. Data on AI knowledge, use, and perceptions were extracted and assessed for bias risk with the Joanna Briggs Institute checklist.

RESULTS: Of 408 publications, 22 relevant articles were identified, and 13 studies were included in the review. The average basic AI knowledge score was 58.62% among dental students and 71.75% among dentists. More dental students (72.01%) than dentists (62.60%) believed in AI's potential for advancing dentistry.

CONCLUSIONS: Thorough AI instruction in dental schools and continuing education programs for practitioners are urgently needed to maximize AI's potential benefits in dentistry. An integrated PhD program could drive revolutionary discoveries and improve patient care globally. Embracing AI with informed understanding and training will position dental professionals at the forefront of technological advancements in the field.

PMID:38293587 | PMC:PMC10825554 | DOI:10.1016/j.jtumed.2023.12.010

Categories: Literature Watch

Comprehensive image dataset for enhancing object detection in chemical experiments

Wed, 2024-01-31 06:00

Data Brief. 2024 Jan 9;52:110054. doi: 10.1016/j.dib.2024.110054. eCollection 2024 Feb.

ABSTRACT

The application of image recognition in chemical experiments has the potential to enhance experiment recording and risk management. However, the current scarcity of suitable benchmarking datasets restricts the applications of machine vision techniques in chemical experiments. This data article presents an image dataset featuring common chemical apparatuses and experimenter's hands. The images have been meticulously annotated, providing detailed information for precise object detection through deep learning methods. The images were captured from videos filmed in organic chemistry laboratories. This dataset comprises a total of 5078 images including diverse backgrounds and situations surrounding the objects. Detailed annotations are provided in accompanying text files. The dataset is organized into training, validation, and test subsets. Each subset is stored within independent folders for easy access and utilization.

PMID:38293577 | PMC:PMC10827390 | DOI:10.1016/j.dib.2024.110054

Categories: Literature Watch

Leveraging Bayesian deep learning and ensemble methods for uncertainty quantification in image classification: A ranking-based approach

Wed, 2024-01-31 06:00

Heliyon. 2024 Jan 8;10(2):e24188. doi: 10.1016/j.heliyon.2024.e24188. eCollection 2024 Jan 30.

ABSTRACT

Bayesian deep learning (BDL) has emerged as a powerful technique for quantifying uncertainty in classification tasks, surpassing the effectiveness of traditional models by aligning with the probabilistic nature of real-world data. This alignment allows for informed decision-making by not only identifying the most likely outcome but also quantifying the surrounding uncertainty. Such capabilities hold great significance in fields like medical diagnosis and autonomous driving, where the consequences of misclassification are substantial. To further improve uncertainty quantification, the research community has introduced Bayesian model ensembles, which combine multiple Bayesian models to enhance predictive accuracy and uncertainty quantification. These ensembles have exhibited superior performance compared to individual Bayesian models and even non-Bayesian counterparts. In this study, we propose a novel approach that leverages the power of Bayesian ensembles for enhanced uncertainty quantification. The proposed method exploits the disparity between the predicted positive and negative classes and employs it as a ranking metric for model selection. For each instance or sample, the ensemble's output for each class is determined by selecting the top 'k' models based on this ranking. Experimental results on different medical image classification tasks demonstrate that the proposed method consistently outperforms or achieves comparable performance to conventional Bayesian ensembles. This investigation highlights the practical application of Bayesian ensemble techniques in refining predictive performance and enhancing uncertainty evaluation in image classification tasks.
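
The per-sample, top-k ranking step described in the abstract is simple enough to illustrate in a few lines. The sketch below is a hypothetical interpretation, not the authors' code: for a binary task it ranks ensemble members by the disparity between their predicted positive- and negative-class probabilities and averages the outputs of the top-k members.

```python
import numpy as np

def top_k_ensemble_prediction(member_probs: np.ndarray, k: int = 5) -> np.ndarray:
    """Aggregate a Bayesian ensemble for one sample by ranking its members.

    member_probs: array of shape (n_models, 2) holding each member's predicted
                  [negative, positive] class probabilities for a single sample.
    k:            number of top-ranked members to keep.
    """
    # Ranking metric: disparity between the predicted positive and negative classes.
    disparity = np.abs(member_probs[:, 1] - member_probs[:, 0])
    top_k = np.argsort(disparity)[::-1][:k]        # indices of the k most decisive members
    return member_probs[top_k].mean(axis=0)        # averaged class probabilities

# Toy usage: 10 ensemble members on a binary task.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2), size=10)
print(top_k_ensemble_prediction(probs, k=4))
```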

PMID:38293520 | PMC:PMC10825337 | DOI:10.1016/j.heliyon.2024.e24188

Categories: Literature Watch

An uncertainty approach for Electric Submersible Pump modeling through Deep Neural Network

Wed, 2024-01-31 06:00

Heliyon. 2024 Jan 9;10(2):e24047. doi: 10.1016/j.heliyon.2024.e24047. eCollection 2024 Jan 30.

ABSTRACT

This work proposes a new methodology to identify and validate deep learning models for artificial oil lift systems that use electric submersible pumps. The proposed methodology allows for obtaining the models and evaluating the prediction uncertainty jointly and systematically. The methodology employs a nonlinear model to generate training and validation data and the Markov Chain Monte Carlo algorithm to assess the neural network's epistemic uncertainty. The nonlinear model was used to overcome the need for large datasets to train deep learning models. However, the developed models are validated against experimental data after training and validation with synthetic data. The validation is also performed through the models' uncertainty assessment and experimental data. From the implementation point of view, the method was coded in Python, with the TensorFlow and Keras libraries used to build the neural networks and find the hyperparameters. The results show that the proposed methodology obtained models representing both the nonlinear model's dynamic behavior and the experimental data. It provides a most probable value close to the experimental data, and the uncertainty of the generated deep learning models has the same order of magnitude as that of the nonlinear model. This uncertainty assessment shows that the built models were adequately validated. The proposed deep learning models can be applied in several applications requiring a reliable and computationally lighter model. Hence, the obtained AI dynamic models can be employed for digital twin construction, control, and optimization.
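
The abstract states that the models were built in Python with TensorFlow/Keras and that epistemic uncertainty was assessed with a Markov Chain Monte Carlo algorithm. The sketch below is not that procedure; it substitutes Monte Carlo dropout as a much simpler stand-in to show how a Keras regressor can be sampled repeatedly to obtain a most probable value and an uncertainty band, which is the spirit of the validation described. Layer sizes and dropout rate are assumptions.

```python
import numpy as np
import tensorflow as tf

def build_model(n_inputs: int) -> tf.keras.Model:
    """Small fully connected regressor; dropout layers enable MC sampling."""
    inputs = tf.keras.Input(shape=(n_inputs,))
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def mc_predict(model: tf.keras.Model, x: np.ndarray, n_samples: int = 100):
    """Sample the network with dropout active to estimate epistemic uncertainty."""
    preds = np.stack([model(x, training=True).numpy().ravel()
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)   # most probable value and spread
```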

PMID:38293372 | PMC:PMC10827449 | DOI:10.1016/j.heliyon.2024.e24047

Categories: Literature Watch

Automatic tooth periodontal ligament segmentation of cone beam computed tomography based on instance segmentation network

Wed, 2024-01-31 06:00

Heliyon. 2024 Jan 9;10(2):e24097. doi: 10.1016/j.heliyon.2024.e24097. eCollection 2024 Jan 30.

ABSTRACT

OBJECTIVE: The three-dimensional morphological structures of periodontal ligaments (PDLs) are important data for periodontal, orthodontic, prosthodontic, and implant interventions. This study aimed to employ a deep learning (DL) algorithm to segment the PDL automatically in cone-beam computed tomography (CBCT).

METHOD: This was a retrospective study. We randomly selected 389 patients and 1734 axial CBCT images from the CBCT database and designed a fully automatic PDL segmentation computer-aided model based on the Mask R-CNN instance segmentation network. The labels used for model training were 'teeth' and 'alveolar bone', and the 'PDL' was defined as the region where the 'teeth' and 'alveolar bone' overlap. The model's segmentation performance was evaluated using CBCT data from eight patients outside the database.
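
Because the abstract defines the PDL as the region where the predicted 'teeth' and 'alveolar bone' masks overlap, that post-processing step can be sketched directly on binary masks. This is a hedged illustration of the overlap definition, not the authors' pipeline; the instance segmentation producing the two masks is assumed to have run upstream, and the Dice function mirrors the mDSC metric reported in the RESULTS.

```python
import numpy as np

def pdl_from_masks(teeth_mask: np.ndarray, bone_mask: np.ndarray) -> np.ndarray:
    """Binary PDL mask as the overlap of the predicted class masks.

    teeth_mask, bone_mask: boolean arrays of the same shape, e.g. per-class
    masks produced by Mask R-CNN for one axial CBCT slice.
    """
    return np.logical_and(teeth_mask.astype(bool), bone_mask.astype(bool))

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient, as used in the quantitative evaluation."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0
```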

RESULTS: Qualitative evaluation indicates that the PDL segmentation accuracy for incisors, canines, premolars, wisdom teeth, and implants reached 100%. The segmentation accuracy for molars was 96.4%. Quantitative evaluation indicates that the mIoU and mDSC of PDL segmentation were 0.667 ± 0.015 (>0.6) and 0.799 ± 0.015 (>0.7), respectively.

CONCLUSION: This study analysed a unique approach to AI-driven automatic segmentation of PDLs on CBCT imaging, possibly enabling chair-side measurements of PDLs to facilitate periodontists, orthodontists, prosthodontists, and implantologists in more efficient and accurate diagnosis and treatment planning.

PMID:38293338 | PMC:PMC10827460 | DOI:10.1016/j.heliyon.2024.e24097

Categories: Literature Watch

Editorial: Multi-modal learning and its application for biomedical data

Wed, 2024-01-31 06:00

Front Med (Lausanne). 2024 Jan 16;10:1342374. doi: 10.3389/fmed.2023.1342374. eCollection 2023.

NO ABSTRACT

PMID:38293296 | PMC:PMC10824823 | DOI:10.3389/fmed.2023.1342374

Categories: Literature Watch

TIE-GANs: single-shot quantitative phase imaging using transport of intensity equation with integration of GANs

Wed, 2024-01-31 06:00

J Biomed Opt. 2024 Jan;29(1):016010. doi: 10.1117/1.JBO.29.1.016010. Epub 2024 Jan 30.

ABSTRACT

SIGNIFICANCE: Artificial intelligence (AI) has become a prominent technology in computational imaging over the past decade. The expeditious and label-free characteristics of quantitative phase imaging (QPI) render it a promising contender for AI investigation. Though interferometric methodologies exhibit potential efficacy, their implementation involves complex experimental platforms and computationally intensive reconstruction procedures. Hence, non-interferometric methods, such as transport of intensity equation (TIE), are preferred over interferometric methods.

AIM: The TIE method, despite its effectiveness, is tedious because it requires the acquisition of many images at varying defocus planes. The proposed methodology can generate a phase image from a single intensity image using generative adversarial networks (GANs). We present a method called TIE-GANs to overcome the multi-shot scheme of conventional TIE.
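
For context (not stated in this abstract), the transport of intensity equation links the through-focus intensity derivative to the phase, which is why the conventional approach needs intensity images at several defocus planes. A standard form, with wavenumber k = 2π/λ, is

$$ -k\,\frac{\partial I(x,y,z)}{\partial z} \;=\; \nabla_{\perp}\cdot\bigl(I(x,y,z)\,\nabla_{\perp}\varphi(x,y)\bigr). $$

Estimating the left-hand side by finite differences over defocused captures is what forces the multi-shot acquisition that TIE-GANs aims to avoid.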

APPROACH: The present investigation employs TIE as a QPI methodology, which requires reduced experimental and computational effort. TIE is also used for dataset preparation: the proposed method captures images from different defocus planes for training. Our approach is based on GANs and uses an image-to-image translation technique to produce phase maps. The main contribution of this work is the combination of GANs with TIE (TIE-GANs), which gives better phase reconstruction results with shorter computation times. This is the first time GANs have been proposed for TIE phase retrieval.

RESULTS: The characterization of the system was carried out with 4 μm microbeads, and the structural similarity index (SSIM) for the microbeads was found to be 0.98. We demonstrated the application of the proposed method with oral cells, which yielded a maximum SSIM value of 0.95. The key characteristics include mean squared error and peak signal-to-noise ratio values of 140 and 26.42 dB for oral cells and 100 and 28.10 dB for microbeads.

CONCLUSIONS: The proposed methodology can generate a phase image from a single intensity image. Our method is feasible for digital cytology because of its reported high SSIM value. Our approach can handle defocused images: it can take an intensity image from any defocus plane within the provided range and generate the corresponding phase map.

PMID:38293292 | PMC:PMC10826717 | DOI:10.1117/1.JBO.29.1.016010

Categories: Literature Watch

Deep Survival Analysis for Interpretable Time-Varying Prediction of Preeclampsia Risk

Wed, 2024-01-31 06:00

medRxiv. 2024 Jan 19:2024.01.18.24301456. doi: 10.1101/2024.01.18.24301456. Preprint.

ABSTRACT

OBJECTIVE: Survival analysis is widely utilized in healthcare to predict the timing of disease onset. Traditional methods of survival analysis are usually based on the Cox Proportional Hazards model and assume proportional risk for all subjects. However, this assumption is rarely true for most diseases, as the underlying factors have complex, non-linear, and time-varying relationships. This concern is especially relevant for pregnancy, where the risk for pregnancy-related complications, such as preeclampsia, varies across gestation. Recently, deep learning survival models have shown promise in addressing the limitations of classical models, as these newer models allow for non-proportional risk handling, capturing nonlinear relationships, and navigating complex temporal dynamics.

METHODS: We present a methodology to model the temporal risk of preeclampsia during pregnancy and investigate the associated clinical risk factors. We utilized a retrospective dataset including 66,425 pregnant individuals who delivered in two tertiary care centers from 2015-2023. We modeled the preeclampsia risk by modifying DeepHit, a deep survival model, which leverages neural network architecture to capture time-varying relationships between covariates in pregnancy. We applied time series k-means clustering to DeepHit's normalized output and investigated interpretability using Shapley values.
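
The trajectory-clustering step (time series k-means on DeepHit's normalized, time-varying risk output) can be sketched as follows. This is a hedged illustration rather than the study code: it assumes a matrix of per-patient risk trajectories is already available and uses tslearn's TimeSeriesKMeans with dynamic time warping as one plausible library choice; the toy data below are random and not probabilities.

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans   # assumed library choice

# risk[i, t]: predicted preeclampsia risk for patient i at gestational week t,
# e.g. DeepHit's normalized output resampled onto a common weekly grid (assumption).
rng = np.random.default_rng(42)
risk = np.cumsum(rng.normal(0.0, 0.01, size=(200, 40)), axis=1)  # toy trajectories

km = TimeSeriesKMeans(n_clusters=3, metric="dtw", random_state=0)
labels = km.fit_predict(risk)   # e.g. low-risk, early-onset, late-onset groups
print(np.bincount(labels))      # cluster sizes
```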

RESULTS: We demonstrate that DeepHit can effectively handle high-dimensional data and evolving risk hazards over time, with performance similar to the Cox Proportional Hazards model: both models achieved an area under the curve (AUC) of 0.78. The deep survival model outperformed traditional methodology by identifying time-varying risk trajectories for preeclampsia, providing insights for early and individualized intervention. K-means clustering delineated patients into low-risk, early-onset, and late-onset preeclampsia groups; notably, each group has distinct risk factors.

CONCLUSION: This work demonstrates a novel application of deep survival analysis to time-varying prediction of preeclampsia risk. Our results highlight the advantage of deep survival models over Cox Proportional Hazards models in providing personalized risk trajectories, and they demonstrate the potential of deep survival models to generate interpretable and clinically meaningful applications in medicine.

PMID:38293230 | PMC:PMC10827248 | DOI:10.1101/2024.01.18.24301456

Categories: Literature Watch

Large-scale comparison of machine learning methods for profiling prediction of kinase inhibitors

Tue, 2024-01-30 06:00

J Cheminform. 2024 Jan 30;16(1):13. doi: 10.1186/s13321-023-00799-5.

ABSTRACT

Conventional machine learning (ML) and deep learning (DL) play a key role in the selectivity prediction of kinase inhibitors. A number of models based on available datasets can be used to predict the kinase profile of compounds, but there is still controversy about the advantages and disadvantages of ML and DL for such tasks. In this study, we constructed a comprehensive benchmark dataset of kinase inhibitors, comprising 141,086 unique compounds and 216,823 well-defined bioassay data points for 354 kinases. We then systematically compared the performance of 12 ML and DL methods on the kinase profiling prediction task. Extensive experimental results reveal that (1) descriptor-based ML models generally slightly outperform fingerprint-based ML models in terms of predictive performance, and RF, as an ensemble learning approach, displays the overall best predictive performance; (2) single-task graph-based DL models are generally inferior to conventional descriptor- and fingerprint-based ML models; however, the corresponding multi-task models generally improve the average accuracy of kinase profile prediction (for example, the multi-task FP-GNN model outperforms the conventional descriptor- and fingerprint-based ML models with an average AUC of 0.807); and (3) fusion models based on voting and stacking methods can further improve the performance of the kinase profiling prediction task; specifically, the RF::AtomPairs + FP2 + RDKitDes fusion model performs best, with the highest average AUC value of 0.825 on the test sets. These findings provide useful information for guiding the choice of ML and DL methods for kinase profiling prediction tasks. Finally, an online platform called KIPP ( https://kipp.idruglab.cn ) and Python software were developed based on the best models to support kinase profiling prediction, as well as various kinase inhibitor identification tasks including virtual screening, compound repositioning, and target fishing.
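
A minimal sketch of the descriptor-based versus fingerprint-based RF comparison the benchmark describes, using RDKit and scikit-learn. The descriptor set, fingerprint settings, hyperparameters, and the commented-out `smiles`/`active` arrays are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def morgan_fp(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Fingerprint-based representation (Morgan/ECFP-like bit vector)."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

def simple_descriptors(smiles: str) -> np.ndarray:
    """Descriptor-based representation (a tiny illustrative descriptor set)."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array([Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                     Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol)])

def evaluate(features: np.ndarray, labels: np.ndarray) -> float:
    """Train a random forest and report held-out AUC for one kinase assay."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                              test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# smiles (list of SMILES strings) and active (0/1 labels) are placeholders
# for one kinase's bioassay data:
# auc_fp  = evaluate(np.array([morgan_fp(s) for s in smiles]), active)
# auc_des = evaluate(np.array([simple_descriptors(s) for s in smiles]), active)
```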

PMID:38291477 | DOI:10.1186/s13321-023-00799-5

Categories: Literature Watch

Application of machine learning models on predicting the length of hospital stay in fragility fracture patients

Tue, 2024-01-30 06:00

BMC Med Inform Decis Mak. 2024 Jan 30;24(1):26. doi: 10.1186/s12911-024-02417-2.

ABSTRACT

BACKGROUND: The rate of geriatric hip fracture in Hong Kong is increasing steadily, and the associated mortality of fragility fractures is high. Moreover, fragility fracture patients increase the pressure on hospital bed demand. Hence, this study aims to develop a predictive model of the length of hospital stay (LOS) of geriatric fragility fracture patients using machine learning (ML) techniques.

METHODS: In this study, we use basic patient information, such as gender, age, and residence type, and medical parameters, such as the modified functional ambulation classification score (MFAC), elderly mobility scale (EMS), and modified Barthel index (MBI), to predict whether the length of stay would exceed 21 days.

RESULTS: Our results are promising despite the relatively small sample size of about 8,000 records. We developed various models with three approaches, namely (1) regularized gradient boosting frameworks, (2) a custom-built artificial neural network, and (3) Google's Wide & Deep learning technique. Our best results came from the Wide & Deep model, with an accuracy of 0.79, a precision of 0.73, and an area under the receiver operating characteristic curve (AUC-ROC) of 0.84. Feature importance analysis indicates that (1) the type of hospital the patient is admitted to, (2) the mental state of the patient, and (3) the length of stay at the acute hospital all have a relatively strong impact on the length of stay in palliative care.
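
A minimal Keras sketch of a wide & deep binary classifier of the kind described, predicting whether LOS exceeds 21 days. The split between "wide" and "deep" features, the layer sizes, and the input dimensions are assumptions for illustration; the study's actual inputs include MFAC, EMS, and MBI scores among others.

```python
import tensorflow as tf

def build_wide_and_deep(n_wide: int, n_deep: int) -> tf.keras.Model:
    """Wide (linear) path for sparse/categorical features, deep path for dense ones."""
    wide_in = tf.keras.Input(shape=(n_wide,), name="wide_features")  # e.g. one-hot hospital type, residence
    deep_in = tf.keras.Input(shape=(n_deep,), name="deep_features")  # e.g. age, MFAC, EMS, MBI

    deep = tf.keras.layers.Dense(64, activation="relu")(deep_in)
    deep = tf.keras.layers.Dense(32, activation="relu")(deep)

    combined = tf.keras.layers.concatenate([wide_in, deep])
    out = tf.keras.layers.Dense(1, activation="sigmoid", name="los_gt_21d")(combined)

    model = tf.keras.Model([wide_in, deep_in], out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc_roc"), "accuracy"])
    return model

model = build_wide_and_deep(n_wide=20, n_deep=10)
model.summary()
```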

CONCLUSIONS: Applying ML techniques to improve quality and efficiency in the healthcare sector is becoming popular in Hong Kong and around the globe, but there has not yet been research related to fragility fracture. The integration of machine learning may be useful for health-care professionals to better identify fragility fracture patients at risk of prolonged hospital stays. These findings underline the usefulness of machine learning techniques in optimizing resource allocation by identifying high-risk individuals and providing appropriate management to improve treatment outcomes.

PMID:38291406 | DOI:10.1186/s12911-024-02417-2

Categories: Literature Watch

Automatic dental age calculation from panoramic radiographs using deep learning: a two-stage approach with object detection and image classification

Tue, 2024-01-30 06:00

BMC Oral Health. 2024 Jan 31;24(1):143. doi: 10.1186/s12903-024-03928-0.

ABSTRACT

BACKGROUND: Dental age is crucial for treatment planning in pediatric and orthodontic dentistry. Dental age calculation methods can be categorized into morphological, biochemical, and radiological methods. Radiological methods are commonly used because they are non-invasive and reproducible. When radiographs are available, dental age can be calculated by evaluating the developmental stage of permanent teeth and converting it into an estimated age using a table, or by measuring lengths between landmarks such as the tooth, root, or pulp and substituting them into regression formulas. However, these methods depend heavily on manual, time-consuming processes. In this study, we proposed a novel and completely automatic dental age calculation method using panoramic radiographs and deep learning techniques.

METHODS: Overall, 8,023 panoramic radiographs were used as training data for Scaled-YOLOv4 to detect dental germs, and mean average precision was evaluated. In total, 18,485 single-root and 16,313 multi-root dental germ images were used as training data for EfficientNetV2 M to classify the developmental stages of detected dental germs, and Top-3 accuracy was evaluated, since adjacent developmental stages of a dental germ look similar and considerable morphological variation can be observed between stages. Scaled-YOLOv4 and EfficientNetV2 M were trained using cross-validation. We evaluated a single selection, a weighted average, and an expected value to convert the probabilities of the developmental stage classification into dental age. One hundred and fifty-seven panoramic radiographs were used to compare the automatic calculations with manual calculations by human experts.
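
The three probability-to-age conversion strategies named in the METHODS (single selection, weighted average, expected value) can be written down concisely. The sketch below is a hypothetical reading of those strategies, not the paper's exact definitions: it assumes a lookup table mapping each developmental stage to a representative age and a softmax probability vector over stages from the classifier, and it interprets the weighted average as a Top-3 weighted mean.

```python
import numpy as np

# Hypothetical lookup: representative age (years) for each developmental stage.
STAGE_AGE = np.array([4.0, 5.2, 6.5, 7.8, 9.1, 10.5, 12.0, 13.6])

def single_selection(probs: np.ndarray) -> float:
    """Use only the most probable developmental stage."""
    return float(STAGE_AGE[np.argmax(probs)])

def weighted_average(probs: np.ndarray, top_k: int = 3) -> float:
    """One plausible reading: average the table ages of the top-k stages,
    weighted by their renormalized probabilities."""
    idx = np.argsort(probs)[::-1][:top_k]
    w = probs[idx] / probs[idx].sum()
    return float(np.dot(w, STAGE_AGE[idx]))

def expected_value(probs: np.ndarray) -> float:
    """Expectation of age over the full stage distribution."""
    return float(np.dot(probs, STAGE_AGE))

probs = np.array([0.01, 0.02, 0.05, 0.60, 0.25, 0.05, 0.01, 0.01])  # classifier softmax output
print(single_selection(probs), weighted_average(probs), expected_value(probs))
```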

RESULTS: Dental germ detection achieved a mean average precision of 98.26%, and the single-root and multi-root dental germ classifiers achieved Top-3 accuracies of 98.46% and 98.36%, respectively. The mean absolute errors between the automatic and manual dental age calculations using single selection, weighted average, and expected value were 0.274, 0.261, and 0.396, respectively. The weighted average performed better than the other methods, with an error of less than one developmental stage.

CONCLUSION: Our study demonstrates the feasibility of automatic dental age calculation using panoramic radiographs and a two-stage deep learning approach with a clinically acceptable level of accuracy.

PMID:38291396 | DOI:10.1186/s12903-024-03928-0

Categories: Literature Watch

CCL-DTI: contributing the contrastive loss in drug-target interaction prediction

Tue, 2024-01-30 06:00

BMC Bioinformatics. 2024 Jan 30;25(1):48. doi: 10.1186/s12859-024-05671-3.

ABSTRACT

BACKGROUND: Drug-Target Interaction (DTI) prediction uses a drug molecule and a protein sequence as inputs to predict the binding affinity value. In recent years, deep learning-based models have received more attention. These methods have two modules: the feature extraction module and the task prediction module. In most deep learning-based approaches, a simple task prediction loss (i.e., categorical cross-entropy for the classification task and mean squared error for the regression task) is used to learn the model. In machine learning, contrastive loss functions have been developed to learn a more discriminative feature space. In a deep learning-based model, extracting a more discriminative feature space leads to performance improvement for the task prediction module.

RESULTS: In this paper, we have used multimodal knowledge as input and proposed an attention-based fusion technique to combine this knowledge. We also investigate how utilizing a contrastive loss function alongside the task prediction loss could help the approach learn a more powerful model. Four contrastive loss functions are considered: (1) the max-margin contrastive loss function, (2) the triplet loss function, (3) the multi-class N-pair loss objective, and (4) the NT-Xent loss function. The proposed model is evaluated using four well-known datasets: the Wang et al. dataset, Luo's dataset, and the Davis and KIBA datasets.
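
As an illustration of the first of the four losses listed above, a minimal PyTorch sketch of a max-margin (pairwise) contrastive loss is given below; the triplet, N-pair, and NT-Xent variants follow the same idea of pulling matched embeddings together and pushing mismatched ones apart. This is a generic textbook formulation, not the paper's implementation, and the embeddings and labels in the toy usage are random placeholders.

```python
import torch
import torch.nn.functional as F

def max_margin_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                                same: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pairwise max-margin contrastive loss.

    z1, z2: embedding batches of shape (B, d) from the feature extraction module.
    same:   float tensor of shape (B,), 1 where the pair shares a label
            (e.g. both are interacting drug-target pairs), 0 otherwise.
    """
    d = F.pairwise_distance(z1, z2)                     # Euclidean distance per pair
    pos = same * d.pow(2)                               # pull similar pairs together
    neg = (1.0 - same) * F.relu(margin - d).pow(2)      # push dissimilar pairs beyond the margin
    return (pos + neg).mean()

# Toy usage with random embeddings.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
same = torch.randint(0, 2, (8,)).float()
print(float(max_margin_contrastive_loss(z1, z2, same)))
```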

CONCLUSIONS: Accordingly, after reviewing the state-of-the-art methods, we developed a multimodal feature extraction network by combining protein sequences and drug molecules, along with protein-protein interaction networks and drug-drug interaction networks. The results show that it performs significantly better than comparable state-of-the-art approaches.

PMID:38291364 | DOI:10.1186/s12859-024-05671-3

Categories: Literature Watch

Structure-aware deep model for MHC-II peptide binding affinity prediction

Tue, 2024-01-30 06:00

BMC Genomics. 2024 Jan 30;25(1):127. doi: 10.1186/s12864-023-09900-6.

ABSTRACT

The prediction of major histocompatibility complex (MHC)-peptide binding affinity is an important branch of immune bioinformatics, especially helpful in accelerating the design of disease vaccines and immunotherapies. Although deep learning-based solutions have yielded promising results on MHC-II molecules in recent years, these methods ignore structural knowledge from each peptide when employing deep neural network models. Each peptide sequence has its own specific residue order, so it is worth adding this structural information to deep model training. In this work, we use positional encoding to represent the structural information of peptide sequences and effectively combine the positional encoding with existing models through different strategies. Experiments on three datasets show that introducing positional-encoding information can further improve performance over the existing models. The idea of introducing positional encoding to this field can serve as an important reference for optimizing deep network structures in the future.
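
The positional-encoding idea is the standard sinusoidal scheme from the Transformer literature; a minimal sketch of adding it to peptide residue embeddings is shown below. The embedding dimension, sequence length, and the additive combination are illustrative assumptions; the paper evaluates several combination strategies.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encoding of shape (seq_len, d_model)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    # Even dimensions use sine, odd dimensions use cosine.
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

# Toy peptide: 15 residues, each already embedded into 32 dimensions (assumption).
residue_embeddings = np.random.rand(15, 32)
encoded = residue_embeddings + sinusoidal_positional_encoding(15, 32)  # additive strategy
print(encoded.shape)
```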

PMID:38291350 | DOI:10.1186/s12864-023-09900-6

Categories: Literature Watch
