Deep learning

A deep transfer learning based convolution neural network framework for air temperature classification using human clothing images

Tue, 2024-12-31 06:00

Sci Rep. 2024 Dec 30;14(1):31658. doi: 10.1038/s41598-024-80657-y.

ABSTRACT

Weather recognition is crucial due to its significant impact on various aspects of daily life, such as weather prediction, environmental monitoring, tourism, and energy production. Several studies have already investigated image-based weather recognition. However, previous work has covered only a few types of weather phenomena and recognized them from images with insufficient accuracy. In this paper, we propose a transfer learning CNN framework for classifying air temperature levels from human clothing images. The framework incorporates several deep transfer learning approaches: DeepLabV3 Plus for semantic segmentation, and BigTransfer (BiT), Vision Transformer (ViT), ResNet101, VGG16, VGG19, and DenseNet121 for classification. We also collected a dataset, the Human Clothing Image Dataset (HCID), consisting of 10,000 images in two categories (High and Low air temperature). All models were evaluated using standard classification metrics: the confusion matrix, loss, precision, F1-score, recall, accuracy, and AUC-ROC. Additionally, we applied Gradient-weighted Class Activation Mapping (Grad-CAM) to highlight the features and regions the models relied on during classification. The results show that DenseNet121 outperformed the other models with an accuracy of 98.13%. These promising experimental results highlight the potential of the proposed framework for detecting air temperature levels, aiding weather prediction and environmental monitoring.
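As a rough illustration of the classification stage, the following is a minimal Keras sketch of transfer learning with an ImageNet-pretrained DenseNet121 for the two temperature classes; the input size, hyperparameters, and training call are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained DenseNet121 for the two-class
# (High / Low air temperature) task. Dataset pipelines and hyperparameters are
# illustrative assumptions, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)          # assumed input resolution
NUM_CLASSES = 2                # High vs. Low air temperature

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False         # freeze the backbone for an initial training phase

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # supply your own tf.data pipelines
```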

PMID:39738164 | DOI:10.1038/s41598-024-80657-y

Categories: Literature Watch

The Theranostic Genome

Tue, 2024-12-31 06:00

Nat Commun. 2024 Dec 30;15(1):10904. doi: 10.1038/s41467-024-55291-x.

ABSTRACT

Theranostic drugs represent an emerging path to deliver on the promise of precision medicine. However, bottlenecks remain in characterizing theranostic targets, identifying theranostic lead compounds, and tailoring theranostic drugs. To overcome these bottlenecks, we present the Theranostic Genome, the part of the human genome whose expression can be utilized to combine therapeutic and diagnostic applications. Using a deep learning-based hybrid human-AI pipeline that cross-references PubMed, the Gene Expression Omnibus, DisGeNET, The Cancer Genome Atlas and the NIH Molecular Imaging and Contrast Agent Database, we bridge individual genes in human cancers with respective theranostic compounds. Cross-referencing the Theranostic Genome with RNAseq data from over 17,000 human tissues identifies theranostic targets and lead compounds for various human cancers, and allows tailoring targeted theranostics to relevant cancer subpopulations. We expect the Theranostic Genome to facilitate the development of new targeted theranostics to better diagnose, understand, treat, and monitor a variety of human cancers.

PMID:39738156 | DOI:10.1038/s41467-024-55291-x

Categories: Literature Watch

Piecing together the narrative of #longcovid: an unsupervised deep learning of 1,354,889 X (formerly Twitter) posts from 2020 to 2023

Tue, 2024-12-31 06:00

Front Public Health. 2024 Dec 16;12:1491087. doi: 10.3389/fpubh.2024.1491087. eCollection 2024.

ABSTRACT

OBJECTIVE: To characterize the public conversations around long COVID, as expressed through X (formerly Twitter) posts from May 2020 to April 2023.

METHODS: Using X as the data source, we extracted tweets containing #long-covid, #long_covid, or "long covid," posted from May 2020 to April 2023. We then conducted an unsupervised deep learning analysis using Bidirectional Encoder Representations from Transformers (BERT). This method allowed us to process and analyze large-scale textual data, focusing on individual user tweets. We then employed BERT-based topic modeling, followed by reflexive thematic analysis to categorize and further refine tweets into coherent themes to interpret the overarching narratives within the long COVID discourse. In contrast to prior studies, the constructs framing our analyses were data driven as well as informed by the tenets of social constructivism.
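As a rough illustration of this kind of BERT-based topic modeling, here is a minimal sketch using the BERTopic library; the embedding model, minimum topic size, and tweet texts are assumed placeholders, and the paper's own pipeline (including the reflexive thematic analysis step) is not reproduced.

```python
# Hedged sketch of BERT-based topic modelling with the BERTopic library.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# Placeholder: the cleaned, deduplicated English tweet texts would go here.
tweets = ["long covid fatigue is still ruining my days", "..."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed sentence-embedding model
topic_model = BERTopic(embedding_model=embedder, min_topic_size=50)

topics, probs = topic_model.fit_transform(tweets)    # one topic id per tweet
print(topic_model.get_topic_info().head())           # topic sizes and representative words
```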

RESULTS: Out of an initial dataset of 2,905,906 tweets, a total of 1,354,889 unique, English-language tweets from individual users were included in the final dataset for analysis. Three main themes were generated: (1) General discussions of long COVID, (2) Skepticism about long COVID, and (3) Adverse effects of long COVID on individuals. These themes highlighted various aspects, including public awareness, community support, misinformation, and personal experiences with long COVID. The analysis also revealed a stable temporal trend in long COVID discussions from 2020 to 2023, indicating sustained public interest in the topic.

CONCLUSION: Social media, specifically X, helped in shaping public awareness and perception of long COVID, and the posts demonstrate a collective effort in community building and information sharing.

PMID:39737451 | PMC:PMC11683113 | DOI:10.3389/fpubh.2024.1491087

Categories: Literature Watch

FacialNet: facial emotion recognition for mental health analysis using UNet segmentation with transfer learning model

Tue, 2024-12-31 06:00

Front Comput Neurosci. 2024 Dec 11;18:1485121. doi: 10.3389/fncom.2024.1485121. eCollection 2024.

ABSTRACT

Facial emotion recognition (FER) can serve as a valuable tool for assessing emotional states, which are often linked to mental health. However, mental health encompasses a broad range of factors that go beyond facial expressions. While FER provides insights into certain aspects of emotional well-being, it can be used in conjunction with other assessments to form a more comprehensive understanding of an individual's mental health. This research work proposes a framework for human FER using UNet image segmentation and transfer learning with the EfficientNetB4 model (called FacialNet). The proposed model demonstrates promising results, achieving an accuracy of 90% for six emotion classes (happy, sad, fear, pain, anger, and disgust) and 96.39% for binary classification (happy and sad). The significance of FacialNet is demonstrated through extensive experiments against various machine learning and deep learning models, as well as state-of-the-art prior work in FER. It is further validated using cross-validation, ensuring reliable performance across different data splits. The findings highlight the effectiveness of leveraging UNet image segmentation and EfficientNetB4 transfer learning for accurate and efficient human facial emotion recognition, offering promising avenues for real-world applications in emotion-aware systems and affective computing platforms. Experimental findings reveal that the proposed approach performs substantially better than existing works, with an improved accuracy of 96.39% compared to the existing 94.26%.
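To make the pipeline concrete, below is a hedged Keras sketch that applies a precomputed UNet segmentation mask to a face image and classifies it with an ImageNet-pretrained EfficientNetB4; the input size, six-class head, and masking step are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: masked-region classification with an ImageNet-pretrained EfficientNetB4,
# as a stand-in for the FacialNet pipeline. The mask source and hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (380, 380)      # EfficientNetB4's default resolution
NUM_CLASSES = 6            # happy, sad, fear, pain, anger, disgust

backbone = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")

inputs = layers.Input(shape=IMG_SIZE + (3,))
features = backbone(inputs)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(layers.Dropout(0.4)(features))
classifier = models.Model(inputs, outputs)   # the new head would be trained on FER data

def classify_masked(image, mask):
    """image: HxWx3 uint8 face photo, mask: HxW binary UNet output for the face region."""
    masked = image * mask[..., None]                       # zero out background pixels
    masked = tf.image.resize(masked, IMG_SIZE)
    masked = tf.keras.applications.efficientnet.preprocess_input(masked)
    return classifier(masked[None, ...], training=False)   # class probabilities
```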

PMID:39737446 | PMC:PMC11683786 | DOI:10.3389/fncom.2024.1485121

Categories: Literature Watch

Research on adverse event classification algorithm of da Vinci surgical robot based on Bert-BiLSTM model

Tue, 2024-12-31 06:00

Front Comput Neurosci. 2024 Dec 16;18:1476164. doi: 10.3389/fncom.2024.1476164. eCollection 2024.

ABSTRACT

This study aims to enhance the classification accuracy of adverse events associated with the da Vinci surgical robot through advanced natural language processing techniques, thereby ensuring medical device safety and protecting patient health. Addressing the issues of incomplete and inconsistent adverse event records, we employed a deep learning model that combines BERT and BiLSTM to predict whether adverse event reports resulted in patient harm. We developed the Bert-BiLSTM-Att_dropout model specifically for text classification tasks with small datasets, optimizing the model's generalization ability and key information capture through the integration of dropout and attention mechanisms. Our model demonstrated exceptional performance on a dataset comprising 4,568 da Vinci surgical robot adverse event reports collected from 2013 to 2023, achieving an average F1 score of 90.15%, significantly surpassing baseline models such as GRU, LSTM, BiLSTM-Attention, and BERT. This achievement not only validates the model's effectiveness in text classification within this specific domain but also substantially improves the usability and accuracy of adverse event reporting, contributing to the prevention of medical incidents and reduction of patient harm. Furthermore, our research experimentally confirmed the model's performance, alleviating the data classification and analysis burden for healthcare professionals. Through comparative analysis, we highlighted the potential of combining BERT and BiLSTM in text classification tasks, particularly for small datasets in the medical field. Our findings advance the development of adverse event monitoring technologies for medical devices and provide critical insights for future research and enhancements.
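For readers who want a concrete picture of such an architecture, the following is a minimal PyTorch sketch of a BERT + BiLSTM classifier with additive attention and dropout, in the spirit of the Bert-BiLSTM-Att_dropout model; the checkpoint name, layer sizes, and two-class head are assumptions, not the authors' configuration.

```python
# Hedged sketch of a BERT + BiLSTM classifier with additive attention and dropout.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMAtt(nn.Module):
    def __init__(self, ckpt="bert-base-uncased", hidden=128, num_classes=2, p_drop=0.3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(ckpt)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)        # additive attention score per token
        self.drop = nn.Dropout(p_drop)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)                                     # (B, T, 2*hidden)
        scores = self.att(h).squeeze(-1)                        # (B, T)
        scores = scores.masked_fill(attention_mask == 0, -1e9)  # ignore padding tokens
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)   # (B, T, 1)
        context = (weights * h).sum(dim=1)                      # attention-pooled report vector
        return self.fc(self.drop(context))                      # harm / no-harm logits

# tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# batch = tok(["patient injured during procedure"], return_tensors="pt", padding=True)
# logits = BertBiLSTMAtt()(batch["input_ids"], batch["attention_mask"])
```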

PMID:39737445 | PMC:PMC11682881 | DOI:10.3389/fncom.2024.1476164

Categories: Literature Watch

Retinal OCT Layer Segmentation via Joint Motion Correction and Graph-Assisted 3D Neural Network

Tue, 2024-12-31 06:00

IEEE Access. 2023;11:103319-103332. doi: 10.1109/access.2023.3317011. Epub 2023 Sep 18.

ABSTRACT

Optical Coherence Tomography (OCT) is a widely used 3D imaging technology in ophthalmology. Segmentation of retinal layers in OCT is important for the diagnosis and evaluation of various retinal and systemic diseases. While 2D segmentation algorithms have been developed, they do not fully utilize contextual information and suffer from inconsistency in 3D. We propose neural networks that combine motion correction and segmentation in 3D. The proposed segmentation network utilizes 3D convolution and a novel graph pyramid structure with graph-inspired building blocks. We also collected one of the largest OCT segmentation datasets, with manually corrected segmentation covering both normal examples and various diseases. Experimental results on three datasets with multiple instruments and various diseases show the proposed method achieves improved segmentation accuracy compared with commercial software and conventional or deep learning methods in the literature. Specifically, the proposed method reduced the average error from 38.47% to 11.43% compared to clinically available commercial software for severe deformations caused by diseases. The diagnosis and evaluation of diseases with large deformation such as DME, wet AMD and CRVO would greatly benefit from the improved accuracy, which affects tens of millions of patients.

PMID:39737086 | PMC:PMC11684756 | DOI:10.1109/access.2023.3317011

Categories: Literature Watch

MM-DRPNet: A multimodal dynamic radial partitioning network for enhanced protein-ligand binding affinity prediction

Tue, 2024-12-31 06:00

Comput Struct Biotechnol J. 2024 Dec 4;23:4396-4405. doi: 10.1016/j.csbj.2024.11.050. eCollection 2024 Dec.

ABSTRACT

Accurate prediction of drug-target binding affinity remains a fundamental challenge in contemporary drug discovery. Despite significant advances in computational methods for protein-ligand binding affinity prediction, current approaches still face substantial limitations in prediction accuracy. Moreover, the prevalent methodologies often overlook critical three-dimensional (3D) structural information, thereby constraining their practical utility in computer-aided drug design (CADD). Here we present MM-DRPNet, a multimodal deep learning framework that enhances binding affinity prediction by integrating protein-ligand structural information with interaction features and physicochemical properties. The core innovation lies in our dynamic radial partitioning (DRP) algorithm, which adaptively segments 3D space based on complex-specific interaction patterns, surpassing traditional fixed partitioning methods in capturing spatial interactions. MM-DRPNet further incorporates molecular topological features to comprehensively model both structural and spatial relationships. Extensive evaluations on benchmark datasets demonstrate that MM-DRPNet significantly outperforms state-of-the-art methods across multiple metrics, with ablation studies confirming the substantial contribution of each architectural component. Source code for MM-DRPNet is freely available for download at https://github.com/Bigrock-dd/MMDRPv1.

PMID:39737077 | PMC:PMC11683220 | DOI:10.1016/j.csbj.2024.11.050

Categories: Literature Watch

Leveraging compact convolutional transformers for enhanced COVID-19 detection in chest X-rays: a grad-CAM visualization approach

Tue, 2024-12-31 06:00

Front Big Data. 2024 Dec 16;7:1489020. doi: 10.3389/fdata.2024.1489020. eCollection 2024.

NO ABSTRACT

PMID:39736985 | PMC:PMC11683681 | DOI:10.3389/fdata.2024.1489020

Categories: Literature Watch

Artificial intelligence and glaucoma: a lucid and comprehensive review

Tue, 2024-12-31 06:00

Front Med (Lausanne). 2024 Dec 16;11:1423813. doi: 10.3389/fmed.2024.1423813. eCollection 2024.

ABSTRACT

Glaucoma is a pathologically irreversible ophthalmic disease. Because concealed and non-obvious progressive changes are difficult to detect, the clinical diagnosis and treatment of glaucoma are extremely challenging. At the same time, screening and monitoring for glaucoma disease progression are crucial. Artificial intelligence technology has advanced rapidly in all fields, particularly medicine, thanks to ongoing in-depth research and algorithmic advances. Simultaneously, research and applications of machine learning and deep learning in the field of glaucoma are evolving fast. Artificial intelligence, with its numerous advantages, will raise the accuracy and efficiency of glaucoma screening and diagnosis to new heights, as well as significantly cut the cost of diagnosis and treatment for the majority of patients. This review summarizes the relevant applications of artificial intelligence in the screening and diagnosis of glaucoma, reflects on the limitations and difficulties of current applications of artificial intelligence in the field, and presents promising prospects for the application of artificial intelligence in glaucoma and other eye diseases.

PMID:39736974 | PMC:PMC11682886 | DOI:10.3389/fmed.2024.1423813

Categories: Literature Watch

Aalto Gear Fault datasets for deep-learning based diagnosis

Tue, 2024-12-31 06:00

Data Brief. 2024 Dec 2;57:111171. doi: 10.1016/j.dib.2024.111171. eCollection 2024 Dec.

ABSTRACT

Accurate system health state prediction through deep learning requires extensive and varied data. However, real-world data scarcity poses a challenge for developing robust fault diagnosis models. This study introduces two extensive datasets, Aalto Shim Dataset and Aalto Gear Fault Dataset, collected under controlled laboratory conditions, aimed at advancing deep learning-based fault diagnosis. The datasets encompass a wide range of gear faults, including synthetic and realistic failure modes, replicated on a downsized azimuth thruster testbench equipped with multiple sensors. The data features various fault types and severities under different operating conditions. The comprehensive data collected, along with the methodologies for creating synthetic faults and replicating common gear failures, provide valuable resources for developing and testing intelligent fault diagnosis models, enhancing their generalization and robustness across diverse scenarios.

PMID:39736909 | PMC:PMC11683272 | DOI:10.1016/j.dib.2024.111171

Categories: Literature Watch

The cadenza woodwind dataset: Synthesised quartets for music information retrieval and machine learning

Tue, 2024-12-31 06:00

Data Brief. 2024 Dec 4;57:111199. doi: 10.1016/j.dib.2024.111199. eCollection 2024 Dec.

ABSTRACT

This paper presents the Cadenza Woodwind Dataset. This publicly available dataset consists of synthesised audio for woodwind quartets, including renderings of each instrument in isolation. The data was created to be used as training data within Cadenza's second open machine learning challenge (CAD2) for the task of rebalancing classical music ensembles. The dataset is also intended for developing other music information retrieval (MIR) algorithms using machine learning. It was created because of the lack of large-scale datasets of classical woodwind music with separate audio for each instrument and a permissive license for reuse. Music scores were selected from the OpenScore String Quartet corpus. These were rendered for two woodwind ensembles of (i) flute, oboe, clarinet and bassoon; and (ii) flute, oboe, alto saxophone and bassoon. This was done by a professional music producer using industry-standard software. Virtual instruments were used to create the audio for each instrument using software that interpreted expression markings in the score. Convolution reverberation was used to simulate a performance space, and the ensembles were then mixed. The dataset consists of the audio and associated metadata.

PMID:39736904 | PMC:PMC11683209 | DOI:10.1016/j.dib.2024.111199

Categories: Literature Watch

Csec-net: a novel deep features fusion and entropy-controlled firefly feature selection framework for leukemia classification

Tue, 2024-12-31 06:00

Health Inf Sci Syst. 2024 Dec 28;13(1):9. doi: 10.1007/s13755-024-00327-1. eCollection 2025 Dec.

ABSTRACT

Leukemia, a life-threatening form of cancer, poses a significant global health challenge affecting individuals of all age groups, including both children and adults. Currently, the diagnostic process relies on manual analysis of microscopic images of blood samples. In recent years, machine learning, and deep learning in particular, has emerged as a cutting-edge solution for image classification problems. Thus, the aim of this work was to develop and evaluate deep learning methods to enable computer-aided leukemia diagnosis. The proposed method is composed of multiple stages: Firstly, the given dataset images undergo preprocessing. Secondly, five pre-trained convolutional neural network models, namely MobileNetV2, EfficientNetB0, ConvNeXt-V2, EfficientNetV2, and DarkNet-19, are modified and trained via transfer learning. Thirdly, deep feature vectors are extracted from each convolutional neural network and combined using a convolutional sparse image decomposition fusion strategy. Fourthly, the proposed approach employs an entropy-controlled firefly feature selection technique, which selects the most optimal features for subsequent classification. Finally, the selected features are fed into a multi-class support vector machine for the final classification. The proposed algorithm was applied to a total of 15,562 images across four datasets, namely ALLID_B1, ALLID_B2, C_NMC 2019, and ASH, and demonstrated superior accuracies of 99.64%, 98.96%, 96.67%, and 98.89%, respectively, surpassing the performance of previous works in the field.
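As a simplified illustration of deep-feature fusion followed by an SVM, here is a hedged sketch that concatenates features from two pretrained backbones and trains a multi-class SVM; the convolutional sparse image decomposition fusion and the entropy-controlled firefly selection steps are not reproduced, and the backbones and hyperparameters are assumptions.

```python
# Hedged sketch: deep-feature fusion + multi-class SVM, simplified from the Csec-net idea.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

IMG_SIZE = (224, 224)
mobilenet = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                              input_shape=IMG_SIZE + (3,), pooling="avg")
efficientnet = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                                    input_shape=IMG_SIZE + (3,), pooling="avg")

def fused_features(images):
    """images: (N, 224, 224, 3) float array; returns concatenated deep feature vectors."""
    f1 = mobilenet.predict(tf.keras.applications.mobilenet_v2.preprocess_input(images.copy()))
    f2 = efficientnet.predict(tf.keras.applications.efficientnet.preprocess_input(images.copy()))
    return np.concatenate([f1, f2], axis=1)            # simple concatenation fusion

# X_train = fused_features(train_images)               # train_images / train_labels assumed
# clf = SVC(kernel="rbf", C=10).fit(X_train, train_labels)   # multi-class SVM on fused features
```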

PMID:39736875 | PMC:PMC11682032 | DOI:10.1007/s13755-024-00327-1

Categories: Literature Watch

PharmRL: pharmacophore elucidation with deep geometric reinforcement learning

Mon, 2024-12-30 06:00

BMC Biol. 2024 Dec 31;22(1):301. doi: 10.1186/s12915-024-02096-5.

ABSTRACT

BACKGROUND: Molecular interactions between proteins and their ligands are important for drug design. A pharmacophore consists of favorable molecular interactions in a protein binding site and can be utilized for virtual screening. Pharmacophores are easiest to identify from co-crystal structures of a bound protein-ligand complex. However, designing a pharmacophore in the absence of a ligand is a much harder task.

RESULTS: In this work, we develop a deep learning method that can identify pharmacophores in the absence of a ligand. Specifically, we train a CNN model to identify potential favorable interactions in the binding site, and develop a deep geometric Q-learning algorithm that attempts to select an optimal subset of these interaction points to form a pharmacophore. With this algorithm, we show better prospective virtual screening performance, in terms of F1 scores, on the DUD-E dataset than random selection of ligand-identified features from co-crystal structures. We also conduct experiments on the LIT-PCBA dataset and show that it provides efficient solutions for identifying active molecules. Finally, we test our method by screening the COVID moonshot dataset and show that it would be effective in identifying prospective lead molecules even in the absence of fragment screening experiments.

CONCLUSIONS: PharmRL addresses the need for automated methods in pharmacophore design, particularly in cases where a cognate ligand is unavailable. Experimental results demonstrate that PharmRL generates functional pharmacophores. Additionally, we provide a Google Colab notebook to facilitate the use of this method.

PMID:39736736 | DOI:10.1186/s12915-024-02096-5

Categories: Literature Watch

Development of an individualized dementia risk prediction model using deep learning survival analysis incorporating genetic and environmental factors

Mon, 2024-12-30 06:00

Alzheimers Res Ther. 2024 Dec 30;16(1):278. doi: 10.1186/s13195-024-01663-w.

ABSTRACT

BACKGROUND: Dementia is a major public health challenge in modern society. Early detection of high-risk dementia patients and timely intervention or treatment are of significant clinical importance. Neural network survival analysis represents the most advanced technology for survival analysis to date. However, there is a lack of deep learning-based survival analysis models that integrate both genetic and clinical factors to develop and validate individualized dynamic dementia risk prediction models.

METHODS AND RESULTS: This study is based on a large prospective cohort from the UK Biobank, which includes a total of 41,484 participants with an average follow-up period of 12.6 years. Initially, 364 candidate features (predictor variables) were screened. The top 30 key features were then identified by ranking the importance of each predictor variable using the Gradient Boosting Machine (GBM) model. A multi-model comparison strategy was employed to evaluate the predictive performance of four survival analysis models: DeepSurv, DeepHit, Kaplan-Meier estimation, and the Cox proportional hazards model (CoxPH). The results showed that the average Harrell's C-index for the DeepSurv model was 0.743, for the DeepHit model it was 0.633, for the CoxPH model it was 0.749, and for the Kaplan-Meier estimator model it was 0.500. In addition, the average D-Calibration Survival Measure was 6.014, 4408.086, 32274.743, and 1.508, respectively. The Brier score (BS) was used to assess the importance of features for the DeepSurv dementia prediction model, and the relationship between features and dementia was visualized using a partial dependence plot (PDP). To facilitate further research, the team deployed the DeepSurv dementia prediction model on AliCloud servers and designated it as the UKB-DementiaPre Tool.
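For context, DeepSurv-style networks are typically trained with the negative log partial likelihood of the Cox model; the PyTorch sketch below shows that loss with a small risk network. The feature dimension, network size, and tie handling are simplifying assumptions rather than the authors' implementation.

```python
# Hedged sketch: Cox negative log partial likelihood for a DeepSurv-style risk network.
import torch
import torch.nn as nn

class RiskNet(nn.Module):
    def __init__(self, n_features=30, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)          # log hazard ratio per subject

def cox_partial_likelihood(log_risk, time, event):
    """Negative log partial likelihood (ties ignored for brevity).
    time: follow-up times; event: 1 = dementia observed, 0 = censored."""
    order = torch.argsort(time, descending=True)        # sort so risk sets become cumulative
    log_risk, event = log_risk[order], event[order].float()
    log_cumsum = torch.logcumsumexp(log_risk, dim=0)    # log of risk-set sums
    return -((log_risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# model = RiskNet()
# loss = cox_partial_likelihood(model(x_batch), time_batch, event_batch)
```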

CONCLUSION: This study successfully developed and validated the DeepSurv dementia prediction model for individuals aged 60 years and above, integrating both genetic and clinical data. The model was then deployed on AliCloud servers to promote its clinical translation. It is anticipated that this prediction model will provide more accurate decision support for clinical treatment and will serve as a valuable tool for the primary prevention of dementia.

PMID:39736679 | DOI:10.1186/s13195-024-01663-w

Categories: Literature Watch

Autonomous detection of nail disorders using a hybrid capsule CNN: a novel deep learning approach for early diagnosis

Mon, 2024-12-30 06:00

BMC Med Inform Decis Mak. 2024 Dec 30;24(1):414. doi: 10.1186/s12911-024-02840-5.

ABSTRACT

Even minor nail infections can indicate major underlying health issues. Subungual melanoma is one of the most severe conditions since it is identified at a much later stage than others. The purpose of this research is to offer novel deep-learning algorithms for the autonomous categorization of six forms of nail disorders from images: Blue Finger, Clubbing, Pitting, Onychogryphosis, Acral Lentiginous Melanoma, and Normal (healthy) Nail Appearance. We first build a baseline CNN model and then advance it into a Hybrid Capsule CNN model that reduces the spatial-hierarchy deficiencies of the classic CNN. All models were trained and tested on the Nail Disease Detection dataset with intensive use of data augmentation techniques. The Hybrid Capsule CNN model provided superior classification accuracy compared to the others: training accuracy was 99.40% and validation accuracy 99.25%, and the hybrid model outperformed the base CNN model with precision and recall of 97.35% and 96.79%, respectively. The hybrid model additionally leverages the capsule network and dynamic routing, offering improved robustness to transformations and better use of spatial properties. The study consequently provides a viable, economical, and accessible diagnostic tool, especially for places with a paucity of medical services. The proposed methodology offers strong potential for early diagnosis and better patient outcomes in a healthcare setting. Clinical trial number: not applicable.

PMID:39736622 | DOI:10.1186/s12911-024-02840-5

Categories: Literature Watch

A prediction approach to COVID-19 time series with LSTM integrated attention mechanism and transfer learning

Mon, 2024-12-30 06:00

BMC Med Res Methodol. 2024 Dec 31;24(1):323. doi: 10.1186/s12874-024-02433-w.

ABSTRACT

BACKGROUND: The prediction of coronavirus disease 2019 (COVID-19) in broader regions has been widely researched, but predictive models for specific areas such as urban areas have rarely been studied. It may be inaccurate to apply predictive models from a broad region directly to a small area. This paper builds a prediction approach for small COVID-19 time series at the city level.

METHODS: Numbers of COVID-19 daily confirmed cases were collected from November 1, 2022 to November 16, 2023 in Xuzhou city of China. Classical deep learning models, including the recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU) and temporal convolutional network (TCN), are initially trained; then RNN, LSTM and GRU are integrated with a new attention mechanism and transfer learning to improve performance. Ablation experiments are repeated ten times to show the robustness of the prediction performance. Model performance is compared using the mean absolute error, root mean square error, and coefficient of determination.

RESULTS: LSTM outperforms the others, and TCN has the worst generalization ability. Thus, LSTM is integrated with the new attention mechanism to construct an LSTMATT model, which improves performance. LSTMATT is trained on the smoothed time series curve through frequency-domain convolution augmentation, and transfer learning is then adopted to transfer the learned features back to the original time series, resulting in a TLLA model that further improves performance. RNN and GRU are also integrated with the attention mechanism and transfer learning and their performance also improves, but TLLA still performs best.
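As a rough sketch of an LSTM with a temporal attention layer for one-step-ahead case-count forecasting (in the spirit of LSTMATT, but not the authors' code), one could write the following in PyTorch; the window length, layer sizes, and attention formulation are assumptions.

```python
# Hedged sketch: LSTM with temporal attention for one-step-ahead case-count forecasting.
import torch
import torch.nn as nn

class LSTMAtt(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (B, T, 1) window of daily counts
        h, _ = self.lstm(x)                      # (B, T, hidden)
        w = torch.softmax(self.att(h), dim=1)    # attention weight per time step
        context = (w * h).sum(dim=1)             # weighted summary of the window
        return self.head(context).squeeze(-1)    # next-day prediction

# model = LSTMAtt()
# y_hat = model(torch.randn(8, 14, 1))           # e.g. a batch of 14-day input windows
```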

CONCLUSIONS: The TLLA model has the best prediction performance for the time series of COVID-19 daily confirmed cases, and the new attention mechanism and transfer learning contribute to improving the prediction performance in the flat part and the jagged part of the curve, respectively.

PMID:39736527 | DOI:10.1186/s12874-024-02433-w

Categories: Literature Watch

A retrospective study on the effects of deep learning model-based optimized emergency nursing on treatment compliance and curative effect in patients with acute left heart failure

Mon, 2024-12-30 06:00

BMC Emerg Med. 2024 Dec 31;24(1):240. doi: 10.1186/s12873-024-01156-x.

ABSTRACT

BACKGROUND: Based on an explainable DenseNet model, the therapeutic effects of optimized emergency nursing on patients with acute left heart failure (ALHF) and its application value were discussed.

METHOD: In this study, 96 patients with ALHF in the emergency department of the Affiliated Hospital of Xuzhou Medical University were selected. According to the nursing method received, they were divided into a conventional group and an optimization group. The activities of daily living (ADL) scale was used to evaluate patients' ADL 6 months after discharge. The self-rating anxiety scale (SAS) and self-rating depression scale (SDS) were employed to assess patients' psychological state. The 45-minute improvement rate, 60-minute show efficiency, rescue success rate, and transfer rate were used to assess the effect of first aid. A 5-level Likert scale was used to evaluate nursing satisfaction.

RESULTS: The optimization group showed shorter durations for first aid, hospitalization, electrocardiography, vein channel establishment, and blood collection compared to the conventional group. However, their SBP, DBP, and HR were inferior. On the other hand, LVEF and FS were significantly better in the optimization group. After nursing intervention, SAS and SDS scores were lower in the optimization group. Additionally, the optimization group had higher 45-minute improvement rates, 60-minute show efficiency, rescue success, and transfer rates. They also performed better in 6-minute walking distance and ADL scores 6 months post-discharge. The optimization group had better compliance, total effective rates, and satisfaction than the conventional group.

CONCLUSION: The explainable DenseNet model was shown to have application value in the diagnosis of ALHF. The optimized emergency method could effectively shorten the duration of first aid, relieve anxiety and other adverse emotions, and improve the rescue success rate and short-term efficacy. Nursing intervention has a positive impact on the total effective rate and patient satisfaction.

PMID:39736523 | DOI:10.1186/s12873-024-01156-x

Categories: Literature Watch

Fluorescence images of skin lesions and automated diagnosis using convolutional neural networks

Mon, 2024-12-30 06:00

Photodiagnosis Photodyn Ther. 2024 Dec 28:104462. doi: 10.1016/j.pdpdt.2024.104462. Online ahead of print.

ABSTRACT

In recent years, interest in applying deep learning (DL) to medical diagnosis has rapidly increased, driven primarily by the development of Convolutional Neural Networks and Transformers. Despite advancements in DL, the automated diagnosis of skin cancer remains a significant challenge. Emulating dermatologists, deep learning approaches using clinical images acquired from smartphones and considering patient lesion information have achieved performance levels close to those of specialists. While including clinical information, such as whether the lesion bleeds, hurts, or itches, improves diagnostic metrics, it is insufficient for correctly differentiating some major skin cancer lesions. An alternative technology for diagnosing skin cancer is fluorescence widefield imaging, where the skin lesion is illuminated with excitation light, causing it to emit fluorescence. Since, to the best of our knowledge, there is no public dataset of fluorescence images of skin lesions, we collected 1,563 fluorescence images of major skin lesions taken with smartphones using a handheld LED widefield fluorescence device. The collected images were annotated and analyzed, creating a new dataset named FLUO-SC. Convolutional neural networks were then applied to classify skin lesions using these fluorescence images. Experimental results indicate that fluorescence images are competitive with clinical images (the baseline) for classifying major skin lesions and show promising potential for discrimination.

PMID:39736369 | DOI:10.1016/j.pdpdt.2024.104462

Categories: Literature Watch

Towards safe and reliable deep learning for lung nodule malignancy estimation using out-of-distribution detection

Mon, 2024-12-30 06:00

Comput Biol Med. 2024 Dec 29;186:109633. doi: 10.1016/j.compbiomed.2024.109633. Online ahead of print.

ABSTRACT

Artificial Intelligence (AI) models may fail or suffer from reduced performance when applied to unseen data that differs from the training data distribution, a problem referred to as dataset shift. Automatic detection of out-of-distribution (OOD) data contributes to safe and reliable clinical implementation of AI models. In this study, we apply a recognized OOD detection method that utilizes the Mahalanobis distance (MD) and compare its performance to widely known classical methods. The MD measures how close an unseen sample's features are to the distribution of development-sample features at intermediate model layers. We integrate the proposed method into an existing deep learning (DL) model for lung nodule malignancy risk estimation on chest CT and validate it across four dataset shifts known to reduce AI model performance. The results show that our proposed method outperforms the classical methods and can effectively detect near- and far-OOD samples across all datasets with different data distribution shifts. Additionally, we demonstrate that our proposed method can seamlessly incorporate additional in-distribution (ID) data while maintaining the ability to accurately differentiate the remaining OOD cases. Lastly, we searched for the optimal OOD threshold at which the performance of the DL model remains reliable; however, no decline in DL performance was observed as the OOD score increased.
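For concreteness, a Mahalanobis-distance OOD score over intermediate-layer features can be sketched as follows; the single-Gaussian fit and the choice of feature extractor are simplifying assumptions (class-conditional means with a tied covariance are common in practice), so this is an illustration rather than the study's implementation.

```python
# Hedged sketch: Mahalanobis-distance OOD scoring from intermediate-layer features.
import numpy as np

def fit_gaussian(train_features):
    """train_features: (N, D) features of in-distribution development samples."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False) + 1e-6 * np.eye(train_features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(features, mu, cov_inv):
    """Higher score = further from the training distribution = more likely OOD."""
    diff = features - mu
    return np.einsum("nd,dk,nk->n", diff, cov_inv, diff)

# mu, cov_inv = fit_gaussian(id_feats)       # id_feats taken from an intermediate model layer
# ood_flags = mahalanobis_score(test_feats, mu, cov_inv) > threshold
```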

PMID:39736253 | DOI:10.1016/j.compbiomed.2024.109633

Categories: Literature Watch

Using artificial intelligence and statistics for managing peritoneal metastases from gastrointestinal cancers

Mon, 2024-12-30 06:00

Brief Funct Genomics. 2024 Dec 30:elae049. doi: 10.1093/bfgp/elae049. Online ahead of print.

ABSTRACT

OBJECTIVE: The primary objective of this study is to investigate various applications of artificial intelligence (AI) and statistical methodologies for analyzing and managing peritoneal metastases (PM) caused by gastrointestinal cancers.

METHODS: Relevant keywords and search criteria were comprehensively researched on PubMed and Google Scholar to identify articles and reviews related to the topic. The AI approaches considered were conventional machine learning (ML) and deep learning (DL) models, and the relevant statistical approaches included biostatistics and logistic models.

RESULTS: The systematic literature review yielded nearly 30 articles meeting the predefined criteria. Analyses of these studies showed that AI methodologies consistently outperformed traditional statistical approaches. Among the AI approaches, DL consistently produced the most precise results, while classical ML demonstrated varied performance but maintained high predictive accuracy. Sample size was the recurring factor that increased prediction accuracy among models of the same type.

CONCLUSIONS: AI and statistical approaches can detect PM developing among patients with gastrointestinal cancers. Therefore, if clinicians integrated these approaches into diagnostics and prognostics, they could better analyze and manage PM, enhancing clinical decision-making and patient outcomes. Collaboration across multiple institutions would also help standardize data-collection methods and ensure consistent results.

PMID:39736152 | DOI:10.1093/bfgp/elae049

Categories: Literature Watch
