Deep learning

A fusion of deep neural networks and game theory for retinal disease diagnosis with OCT images

Fri, 2024-05-17 06:00

J Xray Sci Technol. 2024 May 14. doi: 10.3233/XST-240027. Online ahead of print.

ABSTRACT

Retinal disorders pose a serious threat to global healthcare because they frequently result in visual loss or impairment. Deep learning, a subset of artificial intelligence, is essential for the precise diagnosis, individualized treatment, and early detection of retinal disorders. This paper presents a complete approach to improving the accuracy and reliability of retinal disease identification using retinal optical coherence tomography (OCT) images. The hybrid GIGT model, which combines Generative Adversarial Networks (GANs), Inception, and Game Theory, is a novel method for diagnosing retinal diseases from OCT images. The technique, implemented in Python, comprises image preprocessing, feature extraction, GAN-based classification, and a game-theoretic examination. The preprocessing step includes resizing, grayscale conversion, noise reduction with Gaussian filters, contrast enhancement with Contrast Limited Adaptive Histogram Equalization (CLAHE), and edge detection via the Canny technique. These procedures prepare the OCT images for efficient analysis. The Inception model is used for feature extraction, enabling discriminative characteristics to be extracted from the preprocessed images. GANs are used for classification, improving accuracy and resilience by adding a strategic and dynamic aspect to the diagnostic process. Additionally, a game-theoretic analysis evaluates the security and dependability of the model in the face of adversarial attacks. Together, strategic analysis and deep learning provide a potent diagnostic tool. The proposed model's 98.2% accuracy rate shows the potential of this method to improve the detection of retinal diseases, improve patient outcomes, and address the worldwide issue of visual impairment.

PMID:38759091 | DOI:10.3233/XST-240027

Categories: Literature Watch

A deep learning approach for acute liver failure prediction with combined fully connected and convolutional neural networks

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 17. doi: 10.3233/THC-248048. Online ahead of print.

ABSTRACT

BACKGROUND: Acute Liver Failure (ALF) is a critical medical condition with rapid development, often caused by viral infections, hepatotoxic drug abuse, or other severe liver diseases. Timely and accurate prediction of ALF occurrence is clinically crucial. However, predicting ALF poses challenges due to the diverse physiological differences among patients and the dynamic nature of the disease.

OBJECTIVE: This study introduces a deep learning approach that combines fully connected and convolutional neural networks for effective ALF prediction. The goal is to overcome limitations of traditional machine learning methods and enhance predictive model performance and generalization.

METHODS: The proposed model integrates a fully connected neural network for handling basic patient features and a convolutional neural network dedicated to capturing temporal patterns in patient data. The combination allows automatic learning of complex patterns and abstract features present in highly dynamic medical data associated with ALF.

RESULTS: The model's effectiveness is demonstrated through comprehensive experiments and performance evaluations. It outperforms traditional machine learning methods, achieving 94.8% accuracy and superior generalization capabilities.

CONCLUSIONS: The study highlights the potential of deep learning in ALF prediction, emphasizing the importance of considering individualized medical factors. Future research should focus on improving model robustness, addressing imbalanced data, and further exploring personalized features for enhanced predictive accuracy in real-world clinical scenarios.

PMID:38759076 | DOI:10.3233/THC-248048

Categories: Literature Watch

Intelligent deep learning supports biomedical image detection and classification of oral cancer

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 18. doi: 10.3233/THC-248041. Online ahead of print.

ABSTRACT

BACKGROUND: Oral cancer is a malignant tumor that usually occurs within the tissues of the mouth. This type of cancer mainly includes tumors in the lining of the mouth, tongue, lips, buccal mucosa and gums. Oral cancer is on the rise globally, especially in some specific risk groups. The early stage of oral cancer is usually asymptomatic, while the late stage may present with ulcers, lumps, bleeding, etc.

OBJECTIVE: The objective of this paper is to propose an effective and accurate method for the identification and classification of oral cancer.

METHODS: We applied two deep learning methods, CNNs and Transformers. First, we propose a new CANet classification model for oral cancer, which uses attention mechanisms combined with otherwise neglected location information to explore the complex interplay of attention mechanisms and deep networks and fully tap the potential of attention mechanisms. Second, we design a classification model based on the Swin Transformer. The image is split into a series of two-dimensional patches, which are then processed by multiple layers of Transformer blocks.
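Splitting an image into a sequence of flattened two-dimensional patches, as done before the Transformer blocks, can be sketched in a few lines of NumPy; the function name and patch size are illustrative, not the authors' code:

```python
import numpy as np

def to_patches(img: np.ndarray, p: int) -> np.ndarray:
    """Split an (H, W, C) image into a sequence of flattened p x p patches,
    the standard tokenization step before ViT/Swin-style Transformer blocks."""
    h, w, c = img.shape
    assert h % p == 0 and w % p == 0, "image dims must be divisible by patch size"
    # (H//p, p, W//p, p, C) -> (H//p, W//p, p, p, C) -> (num_patches, p*p*C)
    patches = img.reshape(h // p, p, w // p, p, c).swapaxes(1, 2)
    return patches.reshape(-1, p * p * c)
```

Each row of the result is one patch token; a learned linear projection would then map it to the Transformer's embedding dimension.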

RESULTS: The proposed classification models were trained and evaluated on the Kaggle Oral Cancer Images Dataset, and satisfactory results were obtained. The average accuracy, sensitivity, specificity and F1-score of the Swin Transformer architecture were 94.95%, 95.37%, 95.52% and 94.66%, respectively. The average accuracy, sensitivity, specificity and F1-score of the CANet model were 97.00%, 97.82%, 97.82% and 96.61%, respectively.

CONCLUSIONS: We studied different deep learning algorithms for oral cancer classification, including convolutional neural networks and Transformers. The attention module in CANet leverages channel attention to model the relationships between channels while encoding precise location information that captures the long-range dependencies of the network. The model achieves a high classification accuracy of 97.00% and can be used for the automatic recognition and classification of oral cancer.

PMID:38759069 | DOI:10.3233/THC-248041

Categories: Literature Watch

Deep learning-based differentiation of ventricular septal defect from tetralogy of Fallot in fetal echocardiography images

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 18. doi: 10.3233/THC-248040. Online ahead of print.

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) seriously affects children's health and quality of life, and early detection of CHD can reduce its impact on children's health. Tetralogy of Fallot (TOF) and ventricular septal defect (VSD) are two types of CHD that appear similar on echocardiography; however, TOF carries a worse prognosis and higher mortality than VSD. Accurate differentiation between VSD and TOF is therefore highly important for administering appropriate treatment and improving the prognosis of affected fetuses.

OBJECTIVE: TOF and VSD were differentiated using convolutional neural network (CNN) models that classified fetal echocardiography images.

METHODS: We collected 105 fetal echocardiography images of TOF and 96 images of VSD. Four CNN models, namely, VGG19, ResNet50, NTS-Net, and the weakly supervised data augmentation network (WSDAN), were used to differentiate the two congenital heart diseases. The performance of these four models was compared based on sensitivity, accuracy, specificity, and AUC.

RESULTS: VGG19 and ResNet50 performed similarly, with AUCs of 0.799 and 0.802, respectively. NTS-Net and WSDAN, which are designed specifically for fine-grained image categorization tasks, performed better, with AUCs of 0.823 and 0.873, respectively. WSDAN had the best performance among all models tested.

CONCLUSIONS: WSDAN exhibited the best performance in differentiating between TOF and VSD and is worthy of further clinical popularization.

PMID:38759068 | DOI:10.3233/THC-248040

Categories: Literature Watch

Super-resolution of diffusion-weighted images using space-customized learning model

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 16. doi: 10.3233/THC-248037. Online ahead of print.

ABSTRACT

BACKGROUND: Diffusion-weighted imaging (DWI) is a noninvasive method used for investigating the microstructural properties of the brain. However, a tradeoff exists between resolution and scanning time in clinical practice. Super-resolution has been employed to enhance spatial resolution in natural images, but its application on high-dimensional and non-Euclidean DWI remains challenging.

OBJECTIVE: This study aimed to develop an end-to-end deep learning network for enhancing the spatial resolution of DWI through post-processing.

METHODS: We proposed a space-customized deep learning approach that leveraged convolutional neural networks (CNNs) for the grid structural domain (x-space) and graph CNNs (GCNNs) for the diffusion gradient domain (q-space). Moreover, we represented the output of CNN as a graph using correlations defined by a Gaussian kernel in q-space to bridge the gap between CNN and GCNN feature formats.
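The q-space graph construction described above (edge weights from a Gaussian kernel over diffusion gradient directions) can be sketched as follows; the antipodal-symmetry handling and the bandwidth `sigma` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def qspace_adjacency(bvecs: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    """Build a q-space graph: nodes are unit gradient directions, edge
    weights follow a Gaussian kernel on angular distance. The absolute
    value makes weights antipodally symmetric, since q and -q sample
    the same diffusion direction."""
    cos = np.clip(np.abs(bvecs @ bvecs.T), -1.0, 1.0)
    dist = np.arccos(cos)                       # angular distance in radians
    adj = np.exp(-dist**2 / (2 * sigma**2))     # Gaussian kernel weights
    np.fill_diagonal(adj, 0.0)                  # no self-loops
    return adj
```

A GCNN layer would then aggregate CNN feature maps over this adjacency, bridging the grid-domain and graph-domain representations.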

RESULTS: Our model was evaluated on the Human Connectome Project, demonstrating the effective improvement of DWI quality using our proposed method. Extended experiments also highlighted its advantages in downstream tasks.

CONCLUSION: The hybrid convolutional neural network exhibited distinct advantages in enhancing the spatial resolution of DWI scans for the feature learning of heterogeneous spatial data.

PMID:38759065 | DOI:10.3233/THC-248037

Categories: Literature Watch

Optimizing cardiovascular image segmentation through integrated hierarchical features and attention mechanisms

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 16. doi: 10.3233/THC-248035. Online ahead of print.

ABSTRACT

BACKGROUND: Cardiovascular diseases are the top cause of death in China. Manual segmentation of cardiovascular images, prone to errors, demands an automated, rapid, and precise solution for clinical diagnosis.

OBJECTIVE: The paper highlights deep learning in automatic cardiovascular image segmentation, efficiently identifying pixel regions of interest for auxiliary diagnosis and research in cardiovascular diseases.

METHODS: In our study, we introduce innovative Region Weighted Fusion (RWF) and Shape Feature Refinement (SFR) modules, utilizing polarized self-attention for significant performance improvement in multiscale feature integration and shape fine-tuning. The RWF module includes reshaping, weight computation, and feature fusion, enhancing high-resolution attention computation and reducing information loss. Model optimization through loss functions offers a more reliable solution for cardiovascular medical image processing.

RESULTS: Our method excels in segmentation accuracy, emphasizing the vital role of the RWF module. It demonstrates outstanding performance in cardiovascular image segmentation, potentially raising clinical practice standards.

CONCLUSIONS: Our method ensures reliable medical image processing, guiding cardiovascular segmentation for future advancements in practical healthcare and contributing scientifically to enhanced disease diagnosis and treatment.

PMID:38759064 | DOI:10.3233/THC-248035

Categories: Literature Watch

Applications of deep learning models in precision prediction of survival rates for heart failure patients

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 12. doi: 10.3233/THC-248029. Online ahead of print.

ABSTRACT

BACKGROUND: Heart failure poses a significant challenge in the global health domain, and accurate prediction of mortality is crucial for devising effective treatment plans. In this study, we employed a Seq2Seq model from deep learning, integrating 12 patient features. By finely modeling continuous medical records, we successfully enhanced the accuracy of mortality prediction.

OBJECTIVE: The objective of this research was to leverage the Seq2Seq model in conjunction with patient features for precise mortality prediction in heart failure cases, surpassing the performance of traditional machine learning methods.

METHODS: The study utilized a Seq2Seq model in deep learning, incorporating 12 patient features, to intricately model continuous medical records. The experimental design aimed to compare the performance of Seq2Seq with traditional machine learning methods in predicting mortality rates.

RESULTS: The experimental results demonstrated that the Seq2Seq model outperformed conventional machine learning methods in terms of predictive accuracy. Feature importance analysis provided critical patient risk factors, offering robust support for formulating personalized treatment plans.

CONCLUSIONS: This research sheds light on the significant applications of deep learning, specifically the Seq2Seq model, in enhancing the precision of mortality prediction in heart failure cases. The findings present a valuable direction for the application of deep learning in the medical field and provide crucial insights for future research and clinical practices.

PMID:38759059 | DOI:10.3233/THC-248029

Categories: Literature Watch

Intelligent quality control of traditional Chinese medical tongue diagnosis images based on deep learning

Fri, 2024-05-17 06:00

Technol Health Care. 2024 Apr 25. doi: 10.3233/THC-248018. Online ahead of print.

ABSTRACT

BACKGROUND: Computer-aided tongue and face diagnosis technology can make Traditional Chinese Medicine (TCM) more standardized, objective and quantified. However, many tongue images collected by the instrument may not meet the standard in clinical applications, which affects the subsequent quantitative analysis. A common tongue diagnosis instrument cannot determine whether the patient has fully extended the tongue or whether the face has been properly captured.

OBJECTIVE: This paper proposes an image quality control algorithm based on deep learning to verify the eligibility of TCM tongue diagnosis images.

METHODS: We first gathered enough images and categorized them into five states. Second, we preprocessed the training images. Third, we built a ResNet34 model and trained it with transfer learning. Finally, we input the test images into the trained model, which automatically filters out unqualified images and points out the reasons.

RESULTS: Experimental results show that the model's quality control accuracy rate of the test dataset is as high as 97.06%. Our methods have the strong discriminative power of the learned representation. Compared with previous studies, it can guarantee subsequent tongue image processing.

CONCLUSIONS: Our methods can guarantee the subsequent quantitative analysis of tongue shape, tongue state, tongue spirit, and facial complexion.

PMID:38759050 | DOI:10.3233/THC-248018

Categories: Literature Watch

A Fusion Learning Model Based on Deep Learning for Single-Cell RNA Sequencing Data Clustering

Fri, 2024-05-17 06:00

J Comput Biol. 2024 May 20. doi: 10.1089/cmb.2024.0512. Online ahead of print.

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) technology provides a means for studying biology from a cellular perspective. The fundamental goal of scRNA-seq data analysis is to discriminate single-cell types using unsupervised clustering. Despite the recent slew of proposals, few single-cell clustering algorithms take both deep and surface information into account. Consequently, this article constructs a fusion learning framework based on deep learning, namely scGASI. To learn a clustering similarity matrix, scGASI integrates data affinity recovery and deep feature embedding in a unified scheme based on various top feature sets. Next, scGASI learns the low-dimensional latent representation underlying the data using a graph autoencoder to mine the hidden information residing in the data. To efficiently merge the surface information from the raw data and the deeper latent information underlying it, we then construct a fusion learning model based on self-expression. scGASI uses this fusion learning model to learn the similarity matrix of each individual feature set as well as the clustering similarity matrix of all feature sets. Lastly, gene marker identification, visualization, and clustering are accomplished using the clustering similarity matrix. Extensive verification on real data sets demonstrates that scGASI outperforms many widely used clustering techniques in terms of clustering accuracy.
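The self-expression idea above (reconstructing each cell as a combination of the others, then turning the coefficients into a similarity matrix) has a simple ridge-regularized closed form. This is a generic sketch of that idea, not scGASI's actual objective:

```python
import numpy as np

def self_expression_similarity(X: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Generic self-expression: find C minimizing ||X - CX||^2 + lam*||C||^2,
    where rows of X are cells, then symmetrize |C| into a similarity matrix.
    Closed form: C = (XX^T + lam*I)^{-1} XX^T."""
    n = X.shape[0]
    G = X @ X.T
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)          # a cell should not explain itself
    return 0.5 * (np.abs(C) + np.abs(C).T)
```

Spectral clustering on the resulting similarity matrix is the usual downstream step in self-expression-based methods.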

PMID:38758925 | DOI:10.1089/cmb.2024.0512

Categories: Literature Watch

WilsonGenAI: a deep learning approach to classify pathogenic variants in Wilson Disease

Fri, 2024-05-17 06:00

PLoS One. 2024 May 17;19(5):e0303787. doi: 10.1371/journal.pone.0303787. eCollection 2024.

ABSTRACT

BACKGROUND: Advances in Next Generation Sequencing have made rapid variant discovery and detection widely accessible. To facilitate a better understanding of the nature of these variants, the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG-AMP) have issued a set of guidelines for variant classification. However, given the vast number of variants associated with any disorder, it is impossible to manually apply these guidelines to all known variants. Machine learning methodologies offer a rapid way to classify large numbers of variants, including variants of uncertain significance, as either pathogenic or benign. Here we classify ATP7B genetic variants by employing ML and AI algorithms trained on our well-annotated WilsonGen dataset.

METHODS: We have trained and validated two algorithms: TabNet and XGBoost on a high-confidence dataset of manually annotated, ACMG & AMP classified variants of the ATP7B gene associated with Wilson's Disease.

RESULTS: Using an independent validation dataset of ACMG & AMP classified variants, as well as a patient set of functionally validated variants, we showed how both algorithms perform and can be used to classify large numbers of variants in clinical as well as research settings.

CONCLUSION: We have created a ready to deploy tool, that can classify variants linked with Wilson's disease as pathogenic or benign, which can be utilized by both clinicians and researchers to better understand the disease through the nature of genetic variants associated with it.

PMID:38758754 | DOI:10.1371/journal.pone.0303787

Categories: Literature Watch

Dynamic Projection of Medication Nonpersistence and Nonadherence Among Patients With Early Breast Cancer

Fri, 2024-05-17 06:00

JAMA Netw Open. 2024 May 1;7(5):e2411909. doi: 10.1001/jamanetworkopen.2024.11909.

ABSTRACT

IMPORTANCE: Oral endocrine treatments have been shown to be effective when carefully adhered to. However, in patients with early breast cancer, adherence challenges are notable, with 17% experiencing nonpersistence and 41% nonadherence at least once.

OBJECTIVE: To model the persistence of and adherence to oral anticancer treatment of a patient with localized breast cancer.

DESIGN, SETTING, AND PARTICIPANTS: This cohort study was conducted using anonymous reimbursement data belonging to French female patients with breast cancer, extracted from the French Health Insurance database from January 2013 to December 2018. Data analysis was conducted from January 2021 to May 2022.

MAIN OUTCOMES AND MEASURES: The main outcome was the detection of episodes of nonpersistence and nonadherence 6 months before they happened. Adherence was defined as the ratio between the time covered by a drug purchase and the time between 2 purchases; patients were considered nonadherent if the ratio of their next 3 purchases was less than 80%. Disparities in persistence and adherence based on criteria such as age, treatment type, and income were identified.
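The adherence definition above can be made concrete in a few lines. Capping the ratio at 1 and averaging over the next three purchases are plausible readings of the abstract, not confirmed details of the study:

```python
from datetime import date

def adherence_ratios(purchase_dates: list, days_supplied: list) -> list:
    """Per-purchase adherence: days covered by a purchase divided by the
    gap before the next purchase (capped at 1 — an assumption here)."""
    ratios = []
    for i in range(len(purchase_dates) - 1):
        gap = (purchase_dates[i + 1] - purchase_dates[i]).days
        ratios.append(min(days_supplied[i] / gap, 1.0))
    return ratios

def is_nonadherent(next_three_ratios: list, threshold: float = 0.80) -> bool:
    """Nonadherent if the mean ratio over the next three purchases
    falls below 80% (the averaging rule is an assumption)."""
    return sum(next_three_ratios) / len(next_three_ratios) < threshold
```

For example, a 30-day supply refilled after 43 days yields a ratio of about 0.70, below the 80% threshold.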

RESULTS: A total of 229 695 female patients (median [IQR] age, 63 [52-72] years) with localized breast cancer were included. A deep learning model based on a gated recurrent unit architecture was used to detect episodes of nonpersistence or nonadherence. This model demonstrated an area under the receiver operating characteristic curve of 0.71 for persistence and 0.73 for adherence. Analyzing the Shapley Additive Explanations values also gave insights into the contribution of the different features to the model's decision. Patients older than 70 years, with past nonadherence, taking more than 1 treatment in the previous 3 months, and with low income had greater risk of episodes of nonpersistence. Age and past nonadherence, including regularity of past adherence, were also important features in the nonadherence model.

CONCLUSIONS AND RELEVANCE: This cohort study found associations of patient age and past adherence with nonpersistence or nonadherence. It also suggested that regular intervals in treatment purchases enhanced adherence, in contrast to irregular purchasing patterns. This research offers valuable tools for improving persistence of and adherence to oral anticancer treatment among patients with early breast cancer.

PMID:38758553 | DOI:10.1001/jamanetworkopen.2024.11909

Categories: Literature Watch

Deep learning-based super-resolution of structural brain MRI at 1.5 T: application to quantitative volume measurement

Fri, 2024-05-17 06:00

MAGMA. 2024 May 17. doi: 10.1007/s10334-024-01165-8. Online ahead of print.

ABSTRACT

OBJECTIVE: This study investigated the feasibility of using deep learning-based super-resolution (DL-SR) technique on low-resolution (LR) images to generate high-resolution (HR) MR images with the aim of scan time reduction. The efficacy of DL-SR was also assessed through the application of brain volume measurement (BVM).

MATERIALS AND METHODS: In vivo brain images acquired with 3D-T1W from various MRI scanners were utilized. For model training, LR images were generated by downsampling the original 1 mm-2 mm isotropic resolution images. Pairs of LR and HR images were used for training 3D residual dense net (RDN). For model testing, actual scanned 2 mm isotropic resolution 3D-T1W images with one-minute scan time were used. Normalized root-mean-square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used for model evaluation. The evaluation also included brain volume measurement, with assessments of subcortical brain regions.

RESULTS: The results showed that DL-SR model improved the quality of LR images compared with cubic interpolation, as indicated by NRMSE (24.22% vs 30.13%), PSNR (26.19 vs 24.65), and SSIM (0.96 vs 0.95). For volumetric assessments, there were no significant differences between DL-SR and actual HR images (p > 0.05, Pearson's correlation > 0.90) at seven subcortical regions.
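The NRMSE and PSNR figures quoted above have short NumPy definitions. Note that NRMSE conventions vary (normalization by intensity range, mean, or norm); range normalization is assumed here:

```python
import numpy as np

def nrmse(ref: np.ndarray, test: np.ndarray) -> float:
    """Root-mean-square error normalized by the reference intensity range
    (one common convention among several)."""
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return float(rmse / (ref.max() - ref.min()))

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak];
    higher means the test image is closer to the reference."""
    mse = np.mean((ref - test) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))
```

SSIM additionally compares local luminance, contrast, and structure; library implementations (e.g., in scikit-image) are typically used rather than hand-rolled code.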

DISCUSSION: The combination of LR MRI and DL-SR enables addressing prolonged scan time in 3D MRI scans while providing sufficient image quality without affecting brain volume measurement.

PMID:38758489 | DOI:10.1007/s10334-024-01165-8

Categories: Literature Watch

Application of machine-learning model to optimize colonic adenoma detection in India

Fri, 2024-05-17 06:00

Indian J Gastroenterol. 2024 May 17. doi: 10.1007/s12664-024-01530-4. Online ahead of print.

ABSTRACT

AIMS: There are limited data on the prevalence and risk factors of colonic adenoma from the Indian subcontinent. We aimed to develop a machine-learning model to optimize colonic adenoma detection in a prospective cohort.

METHODS: All consecutive adult patients undergoing diagnostic colonoscopy were enrolled between October 2020 and November 2022. Patients with a high risk of colonic adenoma were excluded. The predictive model was developed using the gradient-boosting machine (GBM)-learning method. The GBM model was optimized further by adjusting the learning rate and the number of trees and 10-fold cross-validation.

RESULTS: A total of 10,320 patients (mean age 45.18 ± 14.82 years; 69% men) were included in the study. In the overall population, 1152 (11.2%) patients had at least one adenoma. In patients with age > 50 years, hospital-based adenoma prevalence was 19.5% (808/4144). The area under the receiver operating curve (AUC) (SD) of the logistic regression model was 72.55% (4.91%), while the AUCs for the deep learning, decision tree, random forest and gradient-boosted tree models were 76.25% (4.22%), 65.95% (4.01%), 79.38% (4.91%) and 84.76% (2.86%), respectively. After model optimization and cross-validation, the AUC of the gradient-boosted tree model increased to 92.2% (1.1%).
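The tuning procedure described in the methods (adjusting the learning rate and number of trees under 10-fold cross-validation) can be sketched with scikit-learn. The synthetic data, feature set, and parameter grid below are illustrative assumptions, not the study's predictors:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the clinical features (the actual predictors
# are not listed in the abstract).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Tune learning rate and number of trees with 10-fold cross-validation,
# scoring by AUC as in the paper.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"learning_rate": [0.05, 0.1], "n_estimators": [50, 100]},
    scoring="roc_auc",
    cv=10,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

On real data, the cross-validated AUC of the best parameter combination is the quantity reported as the optimized model's performance.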

CONCLUSIONS: Machine-learning models may predict colorectal adenoma more accurately than logistic regression. A machine-learning model may help optimize the use of colonoscopy to prevent colorectal cancers.

TRIAL REGISTRATION: ClinicalTrials.gov (ID: NCT04512729).

PMID:38758433 | DOI:10.1007/s12664-024-01530-4

Categories: Literature Watch

Deep learning for automatic facial detection and recognition in Japanese macaques: illuminating social networks

Fri, 2024-05-17 06:00

Primates. 2024 May 17. doi: 10.1007/s10329-024-01137-5. Online ahead of print.

ABSTRACT

Individual identification plays a pivotal role in ecology and ethology, notably as a tool for understanding complex social structures. However, traditional identification methods often involve invasive physical tags and can prove both disruptive for animals and time-intensive for researchers. In recent years, the integration of deep learning in research has offered new methodological perspectives through the automation of complex tasks. Harnessing object detection and recognition technologies is increasingly used by researchers to achieve identification on video footage. This study represents a preliminary exploration into the development of a non-invasive tool for face detection and individual identification of Japanese macaques (Macaca fuscata) through deep learning. The ultimate goal of this research is, using identification done on the dataset, to automatically generate a social network representation of the studied population. The current main results are promising: (i) the creation of a Japanese macaque face detector (Faster-RCNN model), reaching an accuracy of 82.2%, and (ii) the creation of an individual recogniser for the Kōjima Island macaque population (YOLOv8n model), reaching an accuracy of 83%. We also created a Kōjima population social network by traditional methods, based on co-occurrences on videos. Thus, we provide a benchmark against which the automatically generated network will be assessed for reliability. These preliminary results are a testament to the potential of this approach to provide the scientific community with a tool for tracking individuals and social network studies in Japanese macaques.
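The traditional co-occurrence network mentioned above (individuals linked by joint appearances on videos) reduces to pairwise counting. A minimal standard-library sketch, with hypothetical individual labels:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(videos: list) -> Counter:
    """Weight each pair of individuals by the number of videos in which
    they appear together; each element of `videos` is the set of
    individuals detected in one clip."""
    edges = Counter()
    for present in videos:
        for a, b in combinations(sorted(present), 2):
            edges[(a, b)] += 1
    return edges
```

The same counting applied to automatically recognised identities yields the machine-generated network to be compared against this manual benchmark.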

PMID:38758427 | DOI:10.1007/s10329-024-01137-5

Categories: Literature Watch

Deep learning reconstruction computed tomography with low-dose imaging

Fri, 2024-05-17 06:00

Pediatr Radiol. 2024 May 17. doi: 10.1007/s00247-024-05950-4. Online ahead of print.

NO ABSTRACT

PMID:38758373 | DOI:10.1007/s00247-024-05950-4

Categories: Literature Watch

Adap-BDCM: Adaptive Bilinear Dynamic Cascade Model for Classification Tasks on CNV Datasets

Fri, 2024-05-17 06:00

Interdiscip Sci. 2024 May 17. doi: 10.1007/s12539-024-00635-w. Online ahead of print.

ABSTRACT

Copy number variation (CNV) is an essential genetic driving factor of cancer formation and progression, making intelligent classification based on CNV feasible. However, there are a few challenges in the current machine learning and deep learning methods, such as the design of base classifier combination schemes in ensemble methods and the selection of layers of neural networks, which often result in low accuracy. Therefore, an adaptive bilinear dynamic cascade model (Adap-BDCM) is developed to further enhance the accuracy and applicability of these methods for intelligent classification on CNV datasets. In this model, a feature selection module is introduced to mitigate the interference of redundant information, and a bilinear model based on the gated attention mechanism is proposed to extract more beneficial deep fusion features. Furthermore, an adaptive base classifier selection scheme is designed to overcome the difficulty of manually designing base classifier combinations and enhance the applicability of the model. Lastly, a novel feature fusion scheme with an attribute recall submodule is constructed, effectively avoiding getting stuck in local solutions and missing some valuable information. Numerous experiments have demonstrated that our Adap-BDCM model exhibits optimal performance in cancer classification, stage prediction, and recurrence on CNV datasets. This study can assist physicians in making diagnoses faster and better.

PMID:38758306 | DOI:10.1007/s12539-024-00635-w

Categories: Literature Watch

An Exaggeration? Reality?: Can ChatGPT Be Used in Neonatal Nursing?

Fri, 2024-05-17 06:00

J Perinat Neonatal Nurs. 2024 Apr-Jun 01;38(2):120-121. doi: 10.1097/JPN.0000000000000826. Epub 2024 May 13.

ABSTRACT

Artificial intelligence (AI) represents a system endowed with the ability to derive meaningful inferences from a diverse array of datasets. Rooted in the advancements of machine learning models, AI has spawned various transformative technologies such as deep learning, natural language processing, computer vision, and robotics. This technological evolution is poised to witness a broadened spectrum of applications across diverse domains, with a particular focus on revolutionizing healthcare services. Noteworthy among these innovations is OpenAI's creation, ChatGPT, which stands out for its profound capabilities in intricate analysis, primarily facilitated through extensive language modeling. In the realm of healthcare, AI applications, including ChatGPT, have showcased promising outcomes, especially in the domain of neonatal nursing. Areas such as pain assessment, feeding processes, and patient status determination have witnessed substantial enhancements through the integration of AI technologies. However, it is crucial to approach the deployment of such applications with a judicious mindset. The accuracy of the underlying data must undergo rigorous validation, and any results lacking a solid foundation in scientific insights should be approached with skepticism. The paramount consideration remains patient safety, necessitating that AI applications, like ChatGPT, undergo thorough scrutiny through controlled and evidence-based studies. Only through such meticulous evaluation can the transformative potential of AI be harnessed responsibly, ensuring its alignment with the highest standards of healthcare practice.

PMID:38758263 | DOI:10.1097/JPN.0000000000000826

Categories: Literature Watch

COMPAS-3: a dataset of peri-condensed polybenzenoid hydrocarbons

Fri, 2024-05-17 06:00

Phys Chem Chem Phys. 2024 May 17. doi: 10.1039/d4cp01027b. Online ahead of print.

ABSTRACT

We introduce the third installment of the COMPAS Project - a COMputational database of Polycyclic Aromatic Systems, focused on peri-condensed polybenzenoid hydrocarbons. In this installment, we develop two datasets containing the optimized ground-state structures and a selection of molecular properties of ∼39k and ∼9k peri-condensed polybenzenoid hydrocarbons (at the GFN2-xTB and CAM-B3LYP-D3BJ/cc-pvdz//CAM-B3LYP-D3BJ/def2-SVP levels, respectively). The manuscript details the enumeration and data generation processes and describes the information available within the datasets. An in-depth comparison between the two types of computation is performed, and it is found that the geometrical disagreement is maximal for slightly-distorted molecules. In addition, a data-driven analysis of the structure-property trends of peri-condensed PBHs is performed, highlighting the effect of the size of peri-condensed islands and linearly annulated rings on the HOMO-LUMO gap. The insights described herein are important for rational design of novel functional aromatic molecules for use in, e.g., organic electronics. The generated datasets provide a basis for additional data-driven machine- and deep-learning studies in chemistry.

PMID:38758092 | DOI:10.1039/d4cp01027b

Categories: Literature Watch

Deep learning for temporomandibular joint arthropathies: A systematic review and meta-analysis

Fri, 2024-05-17 06:00

J Oral Rehabil. 2024 May 17. doi: 10.1111/joor.13701. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: The accurate diagnosis of temporomandibular disorders continues to be a challenge, despite the existence of internationally agreed-upon diagnostic criteria. The purpose of this study is to review applications of deep learning models in the diagnosis of temporomandibular joint arthropathies.

MATERIALS AND METHODS: An electronic search was conducted on PubMed, Scopus, Embase, Google Scholar, IEEE, arXiv, and medRxiv up to June 2023. Studies that reported the efficacy (outcome) of prediction, object detection, or classification of TMJ arthropathies by deep learning models (intervention) in human joint-based or arthrogenous TMDs (population) against a reference standard (comparison) were included. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Diagnostic odds ratios (DOR) were calculated, and forest and funnel plots were created using STATA 17 and MetaDiSc.

RESULTS: Full-text review was performed on 46 of the 1056 identified studies; 21 met the eligibility criteria and were included in the systematic review. Four studies were graded as having a low risk of bias across all QUADAS-2 domains. Accuracy across the included studies ranged from 74% to 100%, sensitivity from 54% to 100%, specificity from 85% to 100%, Dice coefficient from 85% to 98%, and AUC from 77% to 99%. Seven studies qualified for meta-analysis and were pooled on the basis of sensitivity, specificity, and dataset size. The pooled sensitivity was 95% (85%-99%), specificity 92% (86%-96%), and AUC 97% (96%-98%); the pooled DOR was 232 (74-729). According to Deeks' funnel plot and statistical evaluation (p = .49), publication bias was not present.
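The pooled DOR reported above (232) comes from the meta-analytic model itself, but the standard 2x2 formula applied to the pooled sensitivity and specificity gives a value in the same range, which is a useful sanity check when reading such results. A minimal sketch (the formula is standard; the inputs are the pooled estimates quoted in the abstract):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) * (spec / (1 - spec)),
    i.e. the odds of a positive test in diseased vs. non-diseased subjects."""
    return (sensitivity / (1 - sensitivity)) * (specificity / (1 - specificity))

# Pooled estimates from the abstract: sensitivity 95%, specificity 92%.
print(round(diagnostic_odds_ratio(0.95, 0.92), 1))  # -> 218.5, same order as the reported 232
```

The gap between 218.5 and 232 is expected: pooled DORs are estimated jointly across studies, not derived from the pooled sensitivity and specificity point estimates.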

CONCLUSION: Deep learning models can detect TMJ arthropathies with high sensitivity and specificity. Clinicians, especially those not specialized in orofacial pain, may benefit from this methodology for assessing TMD, as it provides a rigorous, evidence-based framework with objective measurements and advanced analysis techniques, ultimately enhancing diagnostic accuracy.

PMID:38757865 | DOI:10.1111/joor.13701

Categories: Literature Watch

Public health nurse perspectives on predicting nonattendance for cervical cancer screening through classification, ensemble, and deep learning models

Fri, 2024-05-17 06:00

Public Health Nurs. 2024 May 17. doi: 10.1111/phn.13334. Online ahead of print.

ABSTRACT

OBJECTIVES: Women's attendance at cervical cancer screening (CCS) is a major concern for community healthcare providers. This study aims to use various algorithms to accurately predict the main barriers behind women's nonattendance at CCS.

DESIGN: Real-time data were collected from women presenting at the outpatient departments of primary health centers (PHCs). Data on 1046 women, covering both attendance and nonattendance at CCS, were included. We compared three families of models (classification, ensemble, and deep learning) on accuracy and AU-ROC for predicting nonattenders at CCS.

RESULTS: The current model employs 22 predictors. Among the ensemble models, soft voting shows slightly higher specificity (96%) and sensitivity (93%) than weighted averaging, while bagging achieves the highest accuracy (98.49%), specificity (97.3%), and ideal sensitivity (100%) with an AUC of 0.99. Among the classification models, Naive Bayes has higher specificity (97%) but lower sensitivity (91%) than Logistic Regression, and Random Forest and the Neural Network reach the highest accuracy (98.49%) with an AUC of 0.98. Among the deep learning models, LSTM attains an accuracy of 95.68%, with higher specificity (97.60%) and lower sensitivity (93.42%) than the other models; MLP and NN show the highest AUC values of 0.99.
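Soft voting, as compared with weighted averaging above, averages each model's class-probability estimates and then takes the argmax, so a confident model can outvote an uncertain one even when their hard labels disagree. A minimal stdlib sketch (the two probability vectors and the attend/nonattend labels are illustrative, not taken from the study):

```python
def soft_vote(prob_lists, weights=None):
    """Soft voting: weighted average of per-model class-probability
    vectors, then argmax over classes. Equal weights by default."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(prob_lists[0])
    avg = [sum(w * probs[c] for w, probs in zip(weights, prob_lists))
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Class 0 = attends, class 1 = does not attend (illustrative).
# Model A leans "attend" (0.6), model B is confident "nonattend" (0.9):
# averaged probabilities are [0.35, 0.65], so the ensemble predicts 1.
print(soft_vote([[0.6, 0.4], [0.1, 0.9]]))  # -> 1
```

Weighted averaging is the same computation with unequal `weights`; hard (majority) voting would instead count each model's argmax label once, discarding the confidence information.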

CONCLUSION: Ensemble and deep learning models proved most effective in predicting nonattendance at cervical cancer screening.

PMID:38757647 | DOI:10.1111/phn.13334

Categories: Literature Watch
