Deep learning
COVID-19 lateral flow test image classification using deep CNN and StyleGAN2
Front Artif Intell. 2024 Jan 29;6:1235204. doi: 10.3389/frai.2023.1235204. eCollection 2023.
ABSTRACT
INTRODUCTION: Artificial intelligence (AI) in healthcare can enhance clinical workflows and diagnoses, particularly in large-scale operations like COVID-19 mass testing. This study presents a deep Convolutional Neural Network (CNN) model for automated classification of COVID-19 rapid antigen test device (RATD) images.
METHODS: To address the absence of a RATD image dataset, we crowdsourced 900 real-world images focusing on positive and negative cases. Rigorous data augmentation was applied, and StyleGAN2-ADA was used to generate simulated images, to overcome dataset limitations and class imbalance.
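As a rough illustration of the kind of on-the-fly augmentation pipeline described above, the sketch below uses torchvision transforms; the specific transforms, parameters, and input size are assumptions for illustration, not the authors' exact setup, and the StyleGAN2-ADA image synthesis step is not reproduced here.

```python
# Hypothetical augmentation pipeline for RATD photos (transforms and
# parameters are assumptions, not the published configuration).
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),             # assumed input size for a standard CNN backbone
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Applied to each image during training, e.g. via
# torchvision.datasets.ImageFolder(root, transform=train_transforms) and a DataLoader.
```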
RESULTS: The best CNN model achieved a 93% validation accuracy. Test accuracies were 88% for simulated datasets and 82% for real datasets. Augmenting simulated images during training did not significantly improve real-world test image performance but enhanced simulated test image performance.
DISCUSSION: The findings of this study highlight the potential of the developed model in expediting COVID-19 testing processes and facilitating large-scale testing and tracking systems. The study also underscores the challenges in designing and developing such models, emphasizing the importance of addressing dataset limitations and class imbalances.
CONCLUSION: This research contributes to the deployment of large-scale testing and tracking systems, offering insights into the potential applications of AI in mitigating outbreaks similar to COVID-19. Future work could focus on refining the model and exploring its adaptability to other healthcare scenarios.
PMID:38348096 | PMC:PMC10860423 | DOI:10.3389/frai.2023.1235204
Learning Ordinal-Hierarchical Constraints for Deep Learning Classifiers
IEEE Trans Neural Netw Learn Syst. 2024 Feb 12;PP. doi: 10.1109/TNNLS.2024.3360641. Online ahead of print.
ABSTRACT
Real-world classification problems may disclose different hierarchical levels where the categories are displayed in an ordinal structure. However, no specific deep learning (DL) models simultaneously learn hierarchical and ordinal constraints while improving generalization performance. To fill this gap, we propose two novel ordinal-hierarchical DL methodologies, namely, the hierarchical cumulative link model (HCLM) and hierarchical-ordinal binary decomposition (HOBD), which are able to model the ordinal structure within different hierarchical levels of the labels. In particular, we decompose the hierarchical-ordinal problem into local and global graph paths that may encode an ordinal constraint for each hierarchical level. Thus, we frame this problem as simultaneously minimizing global and local losses. Furthermore, the ordinal constraints are imposed by two approaches, ordinal binary decomposition (OBD) and the cumulative link model (CLM), within each global and local function. The effectiveness of the proposed approach is measured on four real use-case datasets concerning industrial, biomedical, computer vision, and financial domains. The extracted results demonstrate a statistically significant improvement over state-of-the-art nominal, ordinal, and hierarchical approaches.
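As a minimal sketch of how an ordinal constraint can be imposed with a cumulative link model head, the PyTorch snippet below parameterizes ordered thresholds and trains with an ordinal negative log-likelihood; the layer sizes and parameterization are illustrative assumptions and do not reproduce the HCLM/HOBD architectures.

```python
# Minimal CLM output layer with an ordinal negative log-likelihood (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLMHead(nn.Module):
    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.projection = nn.Linear(in_features, 1)            # scalar latent score f(x)
        # Unconstrained parameters re-parameterized into ordered thresholds.
        self.raw_deltas = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        score = self.projection(features)                       # (batch, 1)
        thresholds = torch.cumsum(F.softplus(self.raw_deltas), dim=0)  # strictly increasing
        cum_probs = torch.sigmoid(thresholds - score)            # P(y <= k | x), (batch, K-1)
        ones = torch.ones_like(cum_probs[:, :1])
        zeros = torch.zeros_like(ones)
        cum_probs = torch.cat([zeros, cum_probs, ones], dim=1)
        return cum_probs[:, 1:] - cum_probs[:, :-1]              # class probabilities, (batch, K)

def ordinal_nll(class_probs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    return -torch.log(class_probs.gather(1, targets.unsqueeze(1)) + 1e-9).mean()

probs = CLMHead(in_features=128, num_classes=5)(torch.randn(4, 128))
loss = ordinal_nll(probs, torch.tensor([0, 2, 4, 1]))
```

Reparameterizing the thresholds through a cumulative softplus keeps them ordered, so the resulting class probabilities stay consistent with the ordinal scale.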
PMID:38347692 | DOI:10.1109/TNNLS.2024.3360641
SSBlazer: a genome-wide nucleotide-resolution model for predicting single-strand break sites
Genome Biol. 2024 Feb 12;25(1):46. doi: 10.1186/s13059-024-03179-w.
ABSTRACT
Single-strand breaks are the most common type of DNA damage in the genome and play a crucial role in various biological processes. To reveal the significance of single-strand breaks, multiple sequencing-based single-strand break detection methods have been developed, but these are costly and infeasible for large-scale analysis. Hence, we propose SSBlazer, an explainable and scalable deep learning framework for single-strand break site prediction at the nucleotide level. SSBlazer is a lightweight model with robust generalization capabilities across various species and enables numerous previously unexplored SSB-related applications.
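As an illustration of nucleotide-resolution site prediction in general (not the published SSBlazer architecture), the sketch below one-hot encodes a sequence window around a candidate site and scores it with a small 1D CNN; the window size and layer widths are assumptions.

```python
# Toy sequence-window classifier (illustrative, not SSBlazer itself).
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            x[BASES[b], i] = 1.0                 # ambiguous bases (e.g. N) stay all-zero
    return x

class SSBClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 1)       # logit: P(SSB at the window centre)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

model = SSBClassifier()
logit = model(one_hot("ACGT" * 26)[:, :101].unsqueeze(0))   # toy 101-nt window
```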
PMID:38347618 | DOI:10.1186/s13059-024-03179-w
Research and application of artificial intelligence in dentistry from lower-middle income countries - a scoping review
BMC Oral Health. 2024 Feb 12;24(1):220. doi: 10.1186/s12903-024-03970-y.
ABSTRACT
Artificial intelligence (AI) has been integrated into dentistry to improve current dental practice. While many studies have explored the utilization of AI in various fields, the potential of AI in dentistry, particularly in lower-middle-income countries (LMICs), remains understudied. This scoping review aimed to study the existing literature on the applications of artificial intelligence in dentistry in lower-middle-income countries. A comprehensive search strategy was applied utilizing three major databases: PubMed, Scopus, and EBSCO Dentistry & Oral Sciences Source. The search strategy included keywords related to AI, dentistry, and LMICs. The initial search yielded a total of 1587 articles, of which 25 were included in this review. Our findings demonstrate that limited studies have been carried out in LMICs on AI in dentistry. Most of the studies were related to orthodontics. In addition, gaps in the literature were noted; for example, cost utility and patient experience were not addressed in the included studies.
PMID:38347508 | DOI:10.1186/s12903-024-03970-y
DeepVAQ : an adaptive deep learning for prediction of vascular access quality in hemodialysis patients
BMC Med Inform Decis Mak. 2024 Feb 12;24(1):45. doi: 10.1186/s12911-024-02441-2.
ABSTRACT
BACKGROUND: Chronic kidney disease is a prevalent global health issue, particularly in advanced stages requiring dialysis. Vascular access (VA) quality is crucial for the well-being of hemodialysis (HD) patients, ensuring optimal blood transfer through a dialyzer machine. The ultrasound dilution technique (UDT) is used as the gold standard for assessing VA quality; however, its limited availability due to high costs impedes its widespread adoption. We aimed to develop a novel deep learning model specifically designed to predict VA quality from photoplethysmography (PPG) sensors.
METHODS: Clinical data were retrospectively gathered from 398 HD patients, spanning from February 2021 to February 2022. The DeepVAQ model leverages a convolutional neural network (CNN) to process PPG sensor data, pinpointing specific frequencies and patterns that are indicative of VA quality. Meticulous training and fine-tuning were applied to ensure the model's accuracy and reliability. Validation of the DeepVAQ model was carried out against established diagnostic standards using key performance metrics, including accuracy, specificity, precision, F-score, and area under the receiver operating characteristic curve (AUC).
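For illustration, a 1D CNN over PPG windows in the spirit of the approach described above might look like the sketch below; DeepVAQ itself is not reproduced, and the channel counts, window length, and number of quality classes are assumptions.

```python
# Toy 1D CNN over a PPG window (illustrative, not the DeepVAQ model).
import torch
import torch.nn as nn

class PPGQualityNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, padding=7), nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)   # VA quality classes (number assumed)

    def forward(self, ppg: torch.Tensor) -> torch.Tensor:   # ppg: (batch, 1, samples)
        return self.head(self.encoder(ppg).squeeze(-1))

scores = PPGQualityNet()(torch.randn(8, 1, 3000))            # e.g. a 30 s window at 100 Hz
```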
RESULTS: DeepVAQ demonstrated superior performance, achieving an accuracy of 0.9213 and a specificity of 0.9614. Its precision and F-score stood at 0.8762 and 0.8364, respectively, with an AUC of 0.8605. In contrast, traditional models like Decision Tree, Naive Bayes, and kNN demonstrated significantly lower performance across these metrics. This comparison underscores DeepVAQ's enhanced capability in accurately predicting VA quality compared to existing methodologies.
CONCLUSION: Exemplifying the potential of artificial intelligence in healthcare, particularly in the realm of deep learning, DeepVAQ represents a significant advancement in non-invasive diagnostics. Its precise multi-class classification ability for VA quality in hemodialysis patients holds substantial promise for improving patient outcomes, potentially leading to a reduction in mortality rates.
PMID:38347504 | DOI:10.1186/s12911-024-02441-2
TB-DROP: deep learning-based drug resistance prediction of Mycobacterium tuberculosis utilizing whole genome mutations
BMC Genomics. 2024 Feb 12;25(1):167. doi: 10.1186/s12864-024-10066-y.
ABSTRACT
The most widely practiced strategy for constructing deep learning (DL) models to predict drug resistance of Mycobacterium tuberculosis (MTB) is to adopt ready-made, state-of-the-art architectures originally proposed for non-biological problems. However, the ultimate goal is a customized model for predicting the drug resistance of MTB, and eventually biological phenotypes from genotypes. Here, we constructed a DL training framework that standardizes and modularizes each step of the training process using the TensorFlow 2 API. Within this framework, we systematically and comprehensively evaluated the contribution of each module in three currently representative models (Convolutional Neural Network, Denoising Autoencoder, and Wide & Deep, adopted by CNNGWP, DeepAMR, and WDNN, respectively) in order to assemble a novel model from well-suited dedicated modules. Based on whole-genome-level mutations, a de novo learning method was developed to overcome the intrinsic limitation of previous models that rely on known drug resistance-associated loci. A customized DL model with a multilayer perceptron architecture was constructed and achieved competitive performance (mean sensitivity and specificity of 0.90 and 0.87, respectively) compared to previous models. The new model was deployed in an end-to-end, user-friendly graphical tool named TB-DROP (TuBerculosis Drug Resistance Optimal Prediction: https://github.com/nottwy/TB-DROP ), in which users only provide sequencing data and TB-DROP completes the analysis for one sample within several minutes. Our study contributes both a new strategy for model construction and a clinical application of deep learning-based drug-resistance prediction.
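As a hedged sketch of the multilayer-perceptron pattern described above, built with the TensorFlow 2 Keras API that the framework standardizes on, the snippet below maps a binary whole-genome mutation vector to per-drug resistance probabilities; the input length, layer sizes, dropout, and number of drugs are assumptions, not the published TB-DROP model.

```python
# Illustrative MLP for multi-label drug-resistance prediction from mutation vectors.
import tensorflow as tf

NUM_VARIANTS = 50_000     # assumed length of the binary whole-genome mutation vector
NUM_DRUGS = 4             # assumed number of anti-TB drugs predicted jointly

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_VARIANTS,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_DRUGS, activation="sigmoid"),   # multi-label: one output per drug
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
# model.fit(mutation_matrix, resistance_labels, epochs=20, validation_split=0.1)
```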
PMID:38347478 | DOI:10.1186/s12864-024-10066-y
Atrial Septal Defect Detection in Children Based on Ultrasound Video Using Multiple Instances Learning
J Imaging Inform Med. 2024 Feb 12. doi: 10.1007/s10278-024-00987-1. Online ahead of print.
ABSTRACT
Transthoracic echocardiography (TTE) provides sufficient information on cardiac structure, allows evaluation of hemodynamics and cardiac function, and is an effective method for atrial septal defect (ASD) examination. This paper studies a deep learning method based on cardiac ultrasound video to assist ASD diagnosis. We chose four standard views in pediatric cardiac ultrasound to identify atrial septal defects: the subcostal sagittal view of the atrial septum (subSAS), the apical four-chamber view (A4C), the low parasternal four-chamber view (LPS4C), and the parasternal short-axis view of the large artery (PSAX). Data from 300 pediatric patients were enrolled in a double-blind experiment with five-fold cross-validation to verify the performance of our model. In addition, data from 30 pediatric patients (15 positive and 15 negative) were collected for clinician testing and compared with our model's test results (these 30 samples did not participate in model training). In our model, we present a block random selection strategy, a maximal agreement decision, and a frame sampling strategy for training and testing, respectively; ResNet18 and R3D networks are used to extract frame features and aggregate them into a rich video-level representation. We validated our model on our private dataset using five-fold cross-validation. For ASD detection, we achieved an AUC of [Formula: see text], an accuracy of [Formula: see text], a sensitivity of [Formula: see text], a specificity of [Formula: see text], and an F1 score of [Formula: see text]. The proposed model is a multiple-instance-learning-based deep learning model for video atrial septal defect detection that effectively improves ASD detection accuracy compared to previous networks and clinical doctors.
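The multiple-instance idea of pooling per-frame features into a video-level representation can be sketched as below with a torchvision ResNet18 backbone; the published block random selection and maximal-agreement decision rules are not reproduced, and the pooling and head sizes are assumptions.

```python
# Illustrative multiple-instance video classifier: per-frame ResNet18 features
# max-pooled across sampled frames into one video-level representation.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VideoMILClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                     # 512-d per-frame embedding
        self.backbone = backbone
        self.head = nn.Linear(512, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W) -> fold frames into the batch dimension
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        video_feat, _ = feats.max(dim=1)                # instance (frame) max-pooling
        return self.head(video_feat)

logits = VideoMILClassifier()(torch.randn(2, 8, 3, 224, 224))
```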
PMID:38347394 | DOI:10.1007/s10278-024-00987-1
Identifying Pathological Subtypes of Brain Metastasis from Lung Cancer Using MRI-Based Deep Learning Approach: A Multicenter Study
J Imaging Inform Med. 2024 Feb 12. doi: 10.1007/s10278-024-00988-0. Online ahead of print.
ABSTRACT
The aim of this study was to investigate the feasibility of deep learning (DL) based on multiparametric MRI to differentiate the pathological subtypes of brain metastasis (BM) in lung cancer patients. This retrospective analysis collected 246 patients (456 BMs) from five medical centers from July 2016 to June 2022. The BMs were from small-cell lung cancer (SCLC, n = 230) and non-small-cell lung cancer (NSCLC, n = 226; 119 adenocarcinoma and 107 squamous cell carcinoma). Patients from four medical centers were assigned to a training set and an internal validation set at a ratio of 4:1, and another medical center was selected as an external test set. An attention-guided residual fusion network (ARFN) based on a ResNet-18 backbone was developed for T1WI, T2WI, T2-FLAIR, DWI, and contrast-enhanced T1WI. The area under the receiver operating characteristic curve (AUC) was used to assess classification performance. Compared with models based on the five single sequences and other combinations, the multiparametric MRI model based on all five sequences had higher specificity in distinguishing BMs from different types of lung cancer. In the internal validation and external test sets, the AUCs of the model for classifying SCLC versus NSCLC brain metastases were 0.796 and 0.751, respectively; for differentiating adenocarcinoma from squamous cell carcinoma BMs, the AUCs of the five-sequence model were 0.771 and 0.738, respectively. DL combined with multiparametric MRI is feasible for discriminating the pathological type of BM from lung cancer.
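One way to fuse several MRI sequences with per-sequence ResNet-18 encoders and a learned attention weighting is sketched below; this is an illustration under assumed input shapes and sizes, not the published ARFN architecture.

```python
# Illustrative late fusion of five single-channel MRI sequences with
# per-sequence ResNet-18 encoders and attention-weighted averaging.
import torch
import torch.nn as nn
from torchvision.models import resnet18

SEQUENCES = ["T1WI", "T2WI", "T2-FLAIR", "DWI", "CE-T1WI"]

class MultiSequenceFusion(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def encoder():
            m = resnet18(weights=None)
            m.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
            m.fc = nn.Identity()
            return m
        self.encoders = nn.ModuleList(encoder() for _ in SEQUENCES)
        self.attention = nn.Linear(512, 1)              # per-sequence importance score
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 5, 1, H, W), one single-channel slice per sequence
        feats = torch.stack([enc(images[:, i]) for i, enc in enumerate(self.encoders)], dim=1)
        weights = torch.softmax(self.attention(feats), dim=1)   # (batch, 5, 1)
        fused = (weights * feats).sum(dim=1)
        return self.classifier(fused)

logits = MultiSequenceFusion()(torch.randn(2, 5, 1, 224, 224))
```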
PMID:38347392 | DOI:10.1007/s10278-024-00988-0
Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening
NPJ Digit Med. 2024 Feb 12;7(1):34. doi: 10.1038/s41746-024-01018-7.
ABSTRACT
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly. Effective and accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity measures (SSIM), etc., and subjective evaluation by two experienced ophthalmologists. The model generated realistic early-, mid- and late-phase ICGA images, with SSIM spanning from 0.57 to 0.65. The subjective quality scores ranged from 1.46 to 2.74 on a five-point scale (where 1 denotes real ICGA image quality; kappa 0.79-0.84). Moreover, we assessed the application of the translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) for classifying AMD. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggest that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical use.
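The objective image-quality metrics named above (MAE, PSNR, SSIM) can be computed for a generated-versus-real ICGA pair roughly as follows with NumPy and scikit-image; the array shapes and data range are assumptions.

```python
# Illustrative computation of MAE, PSNR, and SSIM for a generated-vs-real image pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

real = np.random.rand(512, 512).astype(np.float32)        # stand-in for a real ICGA frame
generated = np.clip(real + 0.05 * np.random.randn(512, 512), 0, 1).astype(np.float32)

mae = float(np.mean(np.abs(real - generated)))
psnr = peak_signal_noise_ratio(real, generated, data_range=1.0)
ssim = structural_similarity(real, generated, data_range=1.0)
print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```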
PMID:38347098 | DOI:10.1038/s41746-024-01018-7
A study on the DAM-EfficientNet hail rapid identification algorithm based on FY-4A_AGRI
Sci Rep. 2024 Feb 12;14(1):3505. doi: 10.1038/s41598-024-54142-5.
ABSTRACT
Hail is a highly destructive weather phenomenon, and its identification and forecasting are vital for protecting human lives and property. This research proposes a deep learning algorithm named Dual Attention Module EfficientNet (DAM-EfficientNet), based on EfficientNet, for detecting hail weather conditions. DAM-EfficientNet was evaluated using FY-4A satellite imagery and real hail fall records, achieving an accuracy of 98.53% in hail detection, a probability of detection of 97.92%, a false alarm rate of 2.08%, and a critical success index of 95.92%. DAM-EfficientNet outperforms existing deep learning models in terms of accuracy and detection capability, with fewer parameters and lower computational needs. The results validate DAM-EfficientNet's effectiveness and superior performance in hail weather detection. Case studies indicate that the model can accurately forecast potential hail-affected areas and times. Overall, the DAM-EfficientNet model proves effective in identifying hail weather, offering robust support for weather disaster alerts and prevention. It holds promise for further enhancements and broader application across more data sources and meteorological parameters, thereby increasing the precision and timeliness of hail forecasting to combat hail disasters and boost public safety.
PMID:38347073 | DOI:10.1038/s41598-024-54142-5
White blood cells classification using multi-fold pre-processing and optimized CNN model
Sci Rep. 2024 Feb 12;14(1):3570. doi: 10.1038/s41598-024-52880-0.
ABSTRACT
White blood cells (WBCs) play a vital role in immune responses against infections and foreign agents. Different WBC types exist, and anomalies within them can indicate diseases like leukemia. Previous research suffers from limited accuracy and inflated performance due to the use of less important features; moreover, such studies often focus on fewer WBC types, exaggerating accuracy. This study addresses the crucial task of classifying WBC types from microscopic images, introducing a novel approach that uses extensive pre-processing with data augmentation techniques to produce a richer feature set and achieve more reliable results. Experiments employ both conventional deep learning and transfer learning models, comparing performance with state-of-the-art machine and deep learning models. Results reveal that the pre-processed feature set with a convolutional neural network classifier achieves a significantly better accuracy of 0.99. The proposed method demonstrates superior accuracy and computational efficiency compared to existing state-of-the-art works.
PMID:38347011 | DOI:10.1038/s41598-024-52880-0
A new approach to classifying polymer type of microplastics based on Faster-RCNN-FPN and spectroscopic imagery under ultraviolet light
Sci Rep. 2024 Feb 12;14(1):3529. doi: 10.1038/s41598-024-53251-5.
ABSTRACT
Hazardous compounds from microplastics in coastal and marine environments are adsorbed by living organisms, affecting human and marine life. Studying the distribution and types of microplastics in the environment takes time, money, and effort, and requires expensive laboratory equipment. However, deep learning can assist in identifying and quantifying microplastics from an image. This paper presents a novel microplastic classification method that combines the benefits of UV light with deep learning. A Faster-RCNN model with a ResNet-50-FPN backbone was implemented to detect and identify microplastics. Microplastic images taken in the field under UV light were used to train and validate the model. This classification model achieved a high precision of 85.5-87.8%, with mAP scores of 33.9% on an internal test set and 35.7% on an external test set. This classification approach provides high-accuracy, low-cost, and time-effective automated identification and counting of microplastics.
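A Faster R-CNN detector with a ResNet-50-FPN backbone is available off the shelf in torchvision and can be reconfigured for a custom set of polymer classes roughly as sketched below; the class count, target format, and training step are illustrative assumptions rather than the authors' exact setup.

```python
# Illustrative use of torchvision's Faster R-CNN (ResNet-50-FPN) for custom classes.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 5 + 1      # assumed: 5 polymer types + background

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Training step: the model expects a list of images and a list of target dicts
# with "boxes" (x1, y1, x2, y2) and "labels".
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 120.0, 140.0]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)                 # dict of classification / box-regression losses
total_loss = sum(losses.values())
```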
PMID:38346972 | DOI:10.1038/s41598-024-53251-5
Diagnosis of Alzheimer's disease via optimized lightweight convolution-attention and structural MRI
Comput Biol Med. 2024 Feb 8;171:108116. doi: 10.1016/j.compbiomed.2024.108116. Online ahead of print.
ABSTRACT
Alzheimer's disease (AD) poses a substantial public health challenge, demanding accurate screening and diagnosis. Distinguishing AD in its early stages from mild cognitive impairment (MCI) and healthy controls (HC) is crucial given the global aging population. Structural magnetic resonance imaging (sMRI) is essential for understanding the brain's structural changes due to atrophy. While current deep learning networks overlook voxel long-term dependencies, vision transformers (ViT) excel at recognizing such dependencies in images, making them valuable in AD diagnosis. Our proposed method integrates convolution-attention mechanisms in transformer-based classifiers for AD brain datasets, enhancing performance without excessive computing resources. Replacing multi-head attention with lightweight multi-head self-attention (LMHSA), employing inverted residual (IRU) blocks, and introducing local feed-forward networks (LFFN) yields exceptional results. Training on AD datasets with a gradient-centralized Adam optimizer achieves an impressive accuracy of 94.31% for multi-class classification, rising to 95.37% for binary classification (AD vs. HC) and 92.15% for HC vs. MCI. These outcomes surpass existing AD diagnosis approaches, showcasing the model's efficacy. Identifying key brain regions aids future clinical solutions for AD and neurodegenerative diseases. However, this study focused exclusively on the AD Neuroimaging Initiative (ADNI) cohort, emphasizing the need for a more robust, generalizable approach incorporating diverse databases beyond ADNI in future research.
PMID:38346370 | DOI:10.1016/j.compbiomed.2024.108116
Motivations for Artificial Intelligence, for Deep Learning, for ALife: Mortality and Existential Risk
Artif Life. 2024 Feb 11:1-17. doi: 10.1162/artl_a_00427. Online ahead of print.
ABSTRACT
We survey the general trajectory of artificial intelligence (AI) over the last century, in the context of influences from Artificial Life. With a broad brush, we can divide technical approaches to solving AI problems into two camps: GOFAIstic (or computationally inspired) or cybernetic (or ALife inspired). The latter approach has enabled advances in deep learning and the astonishing AI advances we see today, bringing immense benefits but also societal risks. There is a similar divide, regrettably unrecognized, over the very way that such AI problems have been framed. To date, this has been overwhelmingly GOFAIstic, meaning that tools for humans to use have been developed; they have no agency or motivations of their own. We explore the implications of this for concerns about existential risk for humans of the "robots taking over." The risks may be blamed exclusively on human users: the robots could not care less.
PMID:38346273 | DOI:10.1162/artl_a_00427
MRI-based prostate cancer classification using 3D efficient capsule network
Med Phys. 2024 Feb 12. doi: 10.1002/mp.16975. Online ahead of print.
ABSTRACT
BACKGROUND: Prostate cancer (PCa) is the most common cancer in men and the second leading cause of male cancer-related death. Gleason score (GS) is the primary driver of PCa risk-stratification and medical decision-making, but can only be assessed at present via biopsy under anesthesia. Magnetic resonance imaging (MRI) is a promising non-invasive method to further characterize PCa, providing additional anatomical and functional information. Meanwhile, the diagnostic power of MRI is limited by qualitative or, at best, semi-quantitative interpretation criteria, leading to inter-reader variability.
PURPOSE: Computer-aided diagnosis employing quantitative MRI analysis has yielded promising results in non-invasive prediction of GS. However, convolutional neural networks (CNNs) do not implicitly impose a frame of reference on the objects they analyze. Thus, CNNs do not encode positional information properly, limiting robustness against simple image variations such as flipping, scaling, or rotation. The capsule network (CapsNet) has been proposed to address this limitation and achieves promising results in this domain. In this study, we develop a 3D Efficient CapsNet to stratify GS-derived PCa risk using T2-weighted (T2W) MRI images.
METHODS: In our method, we used 3D CNN modules to extract spatial features and primary capsule layers to encode vector features. We then propose to integrate fully-connected capsule layers (FC Caps) to create a deeper hierarchy for PCa grading prediction. FC Caps comprises a secondary capsule layer which routes active primary capsules and a final capsule layer which outputs PCa risk. To account for data imbalance, we propose a novel dynamic weighted margin loss. We evaluate our method on a public PCa T2W MRI dataset from the Cancer Imaging Archive containing data from 976 patients.
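As a hedged sketch of a class-weighted capsule margin loss in the spirit of the method described above (the published dynamic weighted margin loss is not reproduced; the margins and the weighting scheme are assumptions):

```python
# Illustrative class-weighted capsule margin loss over capsule output norms.
import torch

def weighted_margin_loss(capsule_lengths: torch.Tensor,
                         targets: torch.Tensor,
                         class_weights: torch.Tensor,
                         m_pos: float = 0.9, m_neg: float = 0.1,
                         lam: float = 0.5) -> torch.Tensor:
    # capsule_lengths: (batch, num_classes) capsule output norms in [0, 1]
    one_hot = torch.nn.functional.one_hot(targets, capsule_lengths.size(1)).float()
    pos_term = one_hot * torch.clamp(m_pos - capsule_lengths, min=0.0) ** 2
    neg_term = lam * (1.0 - one_hot) * torch.clamp(capsule_lengths - m_neg, min=0.0) ** 2
    per_class = (pos_term + neg_term) * class_weights           # up-weight rarer risk groups
    return per_class.sum(dim=1).mean()

lengths = torch.rand(4, 3)                                      # toy batch, 3 risk classes
loss = weighted_margin_loss(lengths, torch.tensor([0, 2, 1, 2]),
                            class_weights=torch.tensor([1.0, 1.5, 2.0]))
```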
RESULTS: Two groups of experiments were performed: (1) we first identified high-risk disease by classifying low + medium risk versus high risk; (2) we then stratified disease in a one-versus-one fashion: low versus high risk, medium versus high risk, and low versus medium risk. Five-fold cross-validation was performed. Our model achieved an area under the receiver operating characteristic curve (AUC) of 0.83 and an F1-score of 0.64 for low versus high grade, an AUC of 0.79 and an F1-score of 0.75 for low + medium versus high grade, an AUC of 0.75 and an F1-score of 0.69 for medium versus high grade, and an AUC of 0.59 and an F1-score of 0.57 for low versus medium grade. Our method outperformed state-of-the-art radiomics-based classification and deep learning methods with the highest metrics for each experiment. Our divide-and-conquer strategy achieved a weighted Cohen's kappa score of 0.41, suggesting moderate agreement with ground-truth PCa risks.
CONCLUSIONS: In this study, we proposed a novel 3D Efficient CapsNet for PCa risk stratification and demonstrated its feasibility. This developed tool provided a non-invasive approach to assess PCa risk from T2W MR images, which might have potential to personalize the treatment of PCa and reduce the number of unnecessary biopsies.
PMID:38346111 | DOI:10.1002/mp.16975
Stiffness-tunable substrate fabrication by DMD-based optical projection lithography for cancer cell invasion studies
IEEE Trans Biomed Eng. 2024 Feb 12;PP. doi: 10.1109/TBME.2024.3364971. Online ahead of print.
ABSTRACT
Cancer cell invasion is a critical cause of fatality in cancer patients. Physiologically relevant tumor models play a key role in revealing the mechanisms underlying the invasive behavior of cancer cells. However, most existing models only consider interactions between cells and extracellular matrix (ECM) components while neglecting the role of matrix stiffness in tumor invasion. Here, we propose an effective approach that can construct stiffness-tunable substrates using digital mirror device (DMD)-based optical projection lithography to explore the invasion behavior of cancer cells. The printability, mechanical properties, and cell viability of three-dimensional (3D) models can be tuned by the concentration of prepolymer and the exposure time. The invasion trajectories of gastric cancer cells in tumor models of different stiffness were automatically detected and tracked in real-time using a deep learning algorithm. The results show that tumor models of different mechanical stiffness can yield distinct regulatory effects. Moreover, owing to the biophysical characteristics of the 3D in vitro model, different cellular substructures of cancer cells were induced. The proposed tunable substrate construction method can be used to build various microstructures to achieve simulation of cancer invasion and antitumor screening, which has great potential in promoting personalized therapy.
PMID:38345950 | DOI:10.1109/TBME.2024.3364971
Bubbles-Induced Porous Structure-Based Flexible Piezoresistive Sensors for Speech Recognition
ACS Appl Mater Interfaces. 2024 Feb 12. doi: 10.1021/acsami.3c18233. Online ahead of print.
ABSTRACT
Flexible piezoresistive sensors with a porous structure that are used in the field of speech recognition are seldom characterized by both high sensitivity and ease of preparation. In this study, a piezoresistive sensor with a porous structure that is both highly sensitive and simple to prepare is proposed for speech recognition. The preparation process utilizes the interaction of bubbles generated by ethanol evaporation and active agents with polydimethylsiloxane to produce a porous flexible substrate. This preparation process requires neither templates nor harsh experimental conditions such as low temperature and low pressure. Furthermore, the prepared piezoresistive sensor has excellent properties, such as high sensitivity (27.6 kPa-1), a satisfactory response time (800 μs), and good stability (10,000 cycles). When used for speech recognition, more than 1500 vocalizations and silent speech signals obtained from subjects saying the numbers "0" to "9" were collected by the sensor to train a convolutional neural network model. The average recognition accuracy reached 94.8%. The simple preparation process and excellent performance of the prepared flexible piezoresistive sensor give it broad application prospects in the field of speech recognition.
PMID:38345942 | DOI:10.1021/acsami.3c18233
Length of Stay Prediction With Standardized Hospital Data From Acute and Emergency Care Using a Deep Neural Network
Med Care. 2024 Feb 12. doi: 10.1097/MLR.0000000000001975. Online ahead of print.
ABSTRACT
OBJECTIVE: Length of stay (LOS) is an important metric for the organization and scheduling of care activities. This study sought to propose a LOS prediction method based on deep learning using widely available administrative data from acute and emergency care and compare it with other methods.
PATIENTS AND METHODS: All admissions between January 1, 2011 and December 31, 2019, at 6 university hospitals of the Hospices Civils de Lyon metropolis were included, leading to a cohort of 1,140,100 stays of 515,199 patients. Data included demographics, primary and associated diagnoses, medical procedures, the medical unit, the admission type, socio-economic factors, and temporal information. A model based on embeddings and a feed-forward neural network (FFNN) was developed to provide fine-grained LOS predictions per hospitalization step. Performance was compared with random forest and logistic regression using accuracy, Cohen's kappa, and a Bland-Altman plot, through 5-fold cross-validation.
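The embedding-plus-FFNN pattern described above can be sketched as follows in PyTorch, with categorical administrative codes embedded and concatenated with numeric features; the vocabulary sizes, embedding dimensions, and number of LOS classes are assumptions, not the published model.

```python
# Illustrative embedding + feed-forward model for LOS prediction from administrative data.
import torch
import torch.nn as nn

class LOSNet(nn.Module):
    def __init__(self, diag_vocab=12000, proc_vocab=8000, unit_vocab=300,
                 num_numeric=10, num_classes=8):
        super().__init__()
        self.diag_emb = nn.Embedding(diag_vocab, 64)    # primary diagnosis code
        self.proc_emb = nn.Embedding(proc_vocab, 64)    # medical procedure code
        self.unit_emb = nn.Embedding(unit_vocab, 16)    # medical unit
        self.ffnn = nn.Sequential(
            nn.Linear(64 + 64 + 16 + num_numeric, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),                 # LOS predicted as a class (e.g. day bins)
        )

    def forward(self, diag, proc, unit, numeric):
        x = torch.cat([self.diag_emb(diag), self.proc_emb(proc),
                       self.unit_emb(unit), numeric], dim=1)
        return self.ffnn(x)

logits = LOSNet()(torch.tensor([3, 17]), torch.tensor([5, 2]),
                  torch.tensor([1, 4]), torch.randn(2, 10))
```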
RESULTS: The FFNN achieved an accuracy of 0.944 (CI: 0.937, 0.950) and a kappa of 0.943 (CI: 0.935, 0.950). For the same metrics, random forest yielded 0.574 (CI: 0.573, 0.575) and 0.602 (CI: 0.601, 0.603), respectively, and logistic regression 0.352 (CI: 0.346, 0.358) and 0.414 (CI: 0.408, 0.422). The FFNN had limits of agreement ranging from -2.73 to 2.67, better than random forest (-6.72 to 6.83) or logistic regression (-7.60 to 9.20).
CONCLUSION: The FFNN was better at predicting LOS than random forest or logistic regression. Implementing the FFNN model for routine acute care could be useful for improving the quality of patients' care.
PMID:38345863 | DOI:10.1097/MLR.0000000000001975
Improving the geographical origin classification of Radix glycyrrhizae (licorice) through hyperspectral imaging assisted by U-Net fine structure recognition
Analyst. 2024 Feb 12. doi: 10.1039/d3an02064a. Online ahead of print.
ABSTRACT
Radix glycyrrhizae (licorice) is extensively employed in traditional Chinese medicine and serves as a crucial raw material in industries such as food and cosmetics. The quality of licorice from different origins varies greatly, so classification of its geographical origin is particularly important. This study proposes a technique for fine structure recognition and segmentation of hyperspectral images of licorice using a deep learning U-Net neural network to segment the tissue structure patterns (phloem, xylem, and pith). Firstly, the three partitions were separately labeled using the Labelme tool, and these labels were used to train the U-Net model. Secondly, the obtained optimal U-Net model was applied to predict the three partitions of all samples. Lastly, various machine learning models (LDA, SVM, and PLS-DA) were trained based on the segmented hyperspectral data. In addition, a threshold method and a circumcircle method were applied to segment the licorice hyperspectral images for comparison. The results revealed that, compared with the threshold segmentation method (which yielded SVM classifier accuracies of 99.17%, 91.15%, and 92.50% on the training, validation, and test sets, respectively), the U-Net segmentation method significantly enhanced the accuracy of origin classification (99.06%, 94.72%, and 96.07%). Conversely, the circumcircle segmentation method did not effectively improve the accuracy of origin classification (99.65%, 91.16%, and 92.13%). By integrating Raman imaging of licorice, it can be inferred that the U-Net model, designed for region segmentation based on the inherent tissue structure of licorice, can effectively improve the accuracy of origin classification, which is of positive significance for the development of intelligent, information-driven quality control of Chinese medicine.
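The downstream step described above, feeding per-region spectra from the U-Net mask into a classical classifier, might look like the sketch below with scikit-learn; the array shapes, label coding, and SVM settings are assumptions.

```python
# Illustrative pipeline: mean spectrum per segmented tissue region -> SVM origin classifier.
import numpy as np
from sklearn.svm import SVC

def region_mean_spectra(cube: np.ndarray, mask: np.ndarray, labels=(1, 2, 3)) -> np.ndarray:
    # cube: (H, W, bands) hyperspectral image; mask: (H, W) with 1=phloem, 2=xylem, 3=pith
    return np.concatenate([cube[mask == lab].mean(axis=0) for lab in labels])

rng = np.random.default_rng(0)
cubes = [rng.random((64, 64, 200)) for _ in range(20)]          # toy hyperspectral samples
masks = [rng.integers(1, 4, size=(64, 64)) for _ in range(20)]  # toy U-Net tissue masks
X = np.stack([region_mean_spectra(c, m) for c, m in zip(cubes, masks)])
y = rng.integers(0, 4, size=20)                                 # toy geographical-origin labels

clf = SVC(kernel="rbf", C=10.0).fit(X, y)
print(clf.score(X, y))
```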
PMID:38345564 | DOI:10.1039/d3an02064a
Harnessing artificial intelligence for advancing early diagnosis in hidradenitis suppurativa
Ital J Dermatol Venerol. 2024 Feb;159(1):43-49. doi: 10.23736/S2784-8671.23.07829-5.
ABSTRACT
This perspective delves into the integration of artificial intelligence (AI) to enhance early diagnosis in hidradenitis suppurativa (HS). Despite significantly impacting Quality of Life, HS presents diagnostic challenges leading to treatment delays. We present a viewpoint on an AI-powered clinical decision support system designed for HS, emphasizing the transformative potential of AI in dermatology. HS diagnosis, primarily reliant on clinical evaluation and visual inspection, often results in late-stage identification with substantial tissue damage. The incorporation of AI, utilizing machine learning and deep learning algorithms, addresses this challenge by excelling in image analysis. AI adeptly recognizes subtle patterns in skin lesions, providing objective and standardized analyses to mitigate subjectivity in traditional diagnostic approaches. The AI integration encompasses diverse datasets, including clinical records, images, biochemical and immunological data, and omics data. AI algorithms enable nuanced comprehension, allowing for precise and customized diagnoses. We underscore AI's potential for continuous learning and adaptation, refining recommendations based on evolving data. Challenges in AI integration, such as data privacy, algorithm bias, and interpretability, are addressed, emphasizing the ethical considerations of responsible AI deployment, including transparency, human oversight, and striking a balance between automation and human intervention. From the dermatologists' standpoint, we illustrate how AI enhances diagnostic accuracy, treatment planning, and long-term follow-up in HS management. Dermatologists leverage AI to analyze clinical records, dermatological images, and various data types, facilitating a proactive and personalized approach. AI's dynamic nature supports continuous learning, refining diagnostic and treatment strategies, ultimately reshaping standards of care in dermatology.
PMID:38345291 | DOI:10.23736/S2784-8671.23.07829-5