Deep learning

Revolutionizing crop disease detection with computational deep learning: a comprehensive review

Sat, 2024-02-24 06:00

Environ Monit Assess. 2024 Feb 24;196(3):302. doi: 10.1007/s10661-024-12454-z.

ABSTRACT

Digital image processing has witnessed a significant transformation, owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. As a conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms like capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend where the majority of research endeavours have concentrated on individual plant diseases, ML, or DL algorithms. In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
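
The unified ensemble framework the review calls for can be illustrated with a minimal hard-voting combiner (hypothetical model outputs and disease labels; plain Python):

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote across classifier outputs; ties resolve to the
    first-seen label (Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-image labels from three models (e.g. SVM, KNN, CNN)
votes = ["leaf_blight", "leaf_blight", "rust"]
print(hard_vote(votes))  # leaf_blight
```

Soft voting (averaging class probabilities) is the usual refinement when the base models expose calibrated scores.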

PMID:38401024 | DOI:10.1007/s10661-024-12454-z

Categories: Literature Watch

Efficacy of the methods of age determination using artificial intelligence in panoramic radiographs - a systematic review

Sat, 2024-02-24 06:00

Int J Legal Med. 2024 Feb 24. doi: 10.1007/s00414-024-03162-x. Online ahead of print.

ABSTRACT

The aim of this systematic review is to analyze the literature to determine whether the methods of artificial intelligence are effective in determining age in panoramic radiographs. Searches without language and year limits were conducted in PubMed/Medline, Embase, Web of Science, and Scopus databases. Hand searches were also performed, and unpublished manuscripts were searched in specialized journals. Thirty-six articles were included in the analysis. Significant differences in terms of root mean square error and mean absolute error were found between manual methods and artificial intelligence techniques, favoring the use of artificial intelligence (p < 0.00001). Few articles compared deep learning methods with machine learning models or manual models. Although there are advantages of machine learning in data processing and deep learning in data collection and analysis, non-comparable data was a limitation of this study. More information is needed on the comparison of these techniques, with particular emphasis on time as a variable.
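
For context, the root mean square error and mean absolute error used to compare the methods are computed as follows (standard definitions; the ages and estimates below are illustrative):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error; penalizes large errors more than MAE."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Illustrative chronological ages vs. model estimates (years)
ages  = [12.0, 15.5, 9.0, 20.0]
preds = [11.5, 16.5, 9.0, 18.0]
print(mae(ages, preds))   # 0.875
print(rmse(ages, preds))  # ~1.146
```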

PMID:38400923 | DOI:10.1007/s00414-024-03162-x

Categories: Literature Watch

EEG-based brain-computer interface methods with the aim of rehabilitating advanced stage ALS patients

Sat, 2024-02-24 06:00

Disabil Rehabil Assist Technol. 2024 Feb 24:1-11. doi: 10.1080/17483107.2024.2316312. Online ahead of print.

ABSTRACT

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease that leads to progressive muscle weakness and paralysis, ultimately resulting in the loss of ability to communicate and control the environment. EEG-based Brain-Computer Interface (BCI) methods have shown promise in providing communication and control with the aim of rehabilitating ALS patients. In particular, P300-based BCI has been widely studied and used for ALS rehabilitation. Other EEG-based BCI methods, such as Motor Imagery (MI) based BCI and Hybrid BCI, have also shown promise in ALS rehabilitation. Nonetheless, EEG-based BCI methods hold great potential for improvement. This review article introduces and reviews the FFT, WPD, CSP, CSSP, and GC feature extraction methods. The Common Spatial Pattern (CSP) is an efficient and common technique for extracting data properties used in BCI systems. In addition, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Neural Networks (NN), and Deep Learning (DL) classification methods are introduced and reviewed. SVM is the most appropriate classifier due to its insensitivity to the curse of dimensionality. Also, DL is used in the design of BCI systems and is a good choice for BCI systems based on motor imagery with big datasets. Despite the progress made in the field, there are still challenges to overcome, such as improving the accuracy and reliability of EEG signal detection and developing more intuitive and user-friendly interfaces. By using BCI, disabled patients can communicate with their caregivers and control their environment using various devices, including wheelchairs and robotic arms.
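
As a sketch of the CSP technique highlighted above, the spatial filters can be obtained from a generalized eigendecomposition of the two class covariance matrices (textbook CSP with NumPy; the synthetic two-channel signals are illustrative, not data from the reviewed studies):

```python
import numpy as np

def csp_filters(X1, X2):
    """Textbook CSP: X1, X2 are arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters (columns), sorted by class-1 variance ratio."""
    def mean_cov(X):
        # Trace-normalized average covariance across trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w
    evals, evecs = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order]

rng = np.random.default_rng(0)
# Class 1: strong variance on channel 0; class 2: strong variance on channel 1
X1 = rng.normal(size=(30, 2, 200)) * np.array([3.0, 1.0])[:, None]
X2 = rng.normal(size=(30, 2, 200)) * np.array([1.0, 3.0])[:, None]
W = csp_filters(X1, X2)
# The first filter should emphasize channel 0, the class-1 discriminative channel
print(abs(W[0, 0]) > abs(W[1, 0]))  # True
```

Projected log-variances of W.T @ x are then the features fed to LDA or SVM.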

PMID:38400897 | DOI:10.1080/17483107.2024.2316312

Categories: Literature Watch

An enumerative pre-processing approach for retinopathy severity grading using an interpretable classifier: a comparative study

Sat, 2024-02-24 06:00

Graefes Arch Clin Exp Ophthalmol. 2024 Feb 24. doi: 10.1007/s00417-024-06396-y. Online ahead of print.

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so does the delay in treatment for DR diagnosis. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved.

METHODS: An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of the optimization algorithms utilized in the deep network model and outlines the results.

RESULTS: The experimental results are carried out on the MESSIDOR dataset to assess the performance. The experimental results show that an enumerative pipeline combination K1-K2-K3-DFNN-LOA shows better results when compared with other combinations. When compared with various optimization algorithms and pre-trained models, the proposed model has better performance with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively.
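
The macro-averaged metric reported above is conventionally a macro-F1: the unweighted mean of per-class F1 scores. A minimal computation over hypothetical severity grades (standard definition; the labels are illustrative):

```python
def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical severity grades 0-2 for six fundus images
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
print(round(macro_f1(y_true, y_pred, [0, 1, 2]), 3))  # 0.822
```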

CONCLUSION: This study focussed on developing and implementing a DR screening system on color fundus photographs. This artificial intelligence-based system offers the possibility to enhance the efficacy and approachability of DR diagnosis.

PMID:38400856 | DOI:10.1007/s00417-024-06396-y

Categories: Literature Watch

Computer-aided diagnosis in real-time endoscopy for all stages of gastric carcinogenesis: Development and validation study

Sat, 2024-02-24 06:00

United European Gastroenterol J. 2024 Feb 24. doi: 10.1002/ueg2.12551. Online ahead of print.

ABSTRACT

OBJECTIVE: Using endoscopic images, we have previously developed computer-aided diagnosis models to predict the histopathology of gastric neoplasms. However, no model that categorizes every stage of gastric carcinogenesis has been published. In this study, a deep-learning-based diagnosis model was developed and validated to automatically classify all stages of gastric carcinogenesis, including atrophy and intestinal metaplasia, in endoscopy images.

DESIGN: A total of 18,701 endoscopic images were collected retrospectively and randomly divided into training, validation, and internal-test datasets in an 8:1:1 ratio. The primary outcome was lesion-classification accuracy in six categories: normal/atrophy/intestinal metaplasia/dysplasia/early/advanced gastric cancer. External validation of the established model's performance used 1427 novel images from other institutions that were not used in training, validation, or internal testing.

RESULTS: The internal-test lesion-classification accuracy was 91.2% (95% confidence interval: 89.9%-92.5%). For performance validation, the established model achieved an accuracy of 82.3% (80.3%-84.3%). The external-test per-class receiver operating characteristic in the diagnosis of atrophy and intestinal metaplasia was 93.4 ± 0% and 91.3 ± 0%, respectively.
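
The reported internal-test interval is consistent with a normal-approximation binomial confidence interval, assuming an internal-test set of about 1870 images (inferred from the 8:1:1 split of 18,701; the abstract does not state the per-split sizes explicitly):

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo, hi = wald_ci(0.912, 1870)  # accuracy 91.2% on an assumed n of 1870
print(f"{lo:.1%} - {hi:.1%}")  # ~89.9% - 92.5%, matching the reported interval
```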

CONCLUSIONS: The established model demonstrated high performance in the diagnosis of preneoplastic lesions (atrophy and intestinal metaplasia) as well as gastric neoplasms.

PMID:38400815 | DOI:10.1002/ueg2.12551

Categories: Literature Watch

Image deblurring by multi-scale modified U-Net using dilated convolution

Sat, 2024-02-24 06:00

Sci Prog. 2024 Jan-Mar;107(1):368504241231161. doi: 10.1177/00368504241231161.

ABSTRACT

In modern urban traffic systems, intersection monitoring systems are used to monitor traffic flows and track vehicles by recognizing license plates. However, intersection monitors often produce motion-blurred images because of the rapid movement of cars. If a deep learning network is used for image deblurring, the blurring can be eliminated first, and then the complete vehicle information can be obtained to improve the recognition rate. To restore a dynamic blurred image to a sharp image, this paper proposes a multi-scale modified U-Net image deblurring network using dilated convolution and employs a variable scaling iterative strategy to make the scheme more adaptable to actual blurred images. The multi-scale architecture uses scale changes to learn the characteristics of images at different scales, and dilated convolution enlarges the receptive field to obtain more information from features without increasing the computational cost. Experimental results are obtained using a synthetic motion-blurred image dataset and a real blurred image dataset for comparison with existing deblurring methods. The experimental results demonstrate that the proposed image deblurring method has a favorable effect on actual motion-blurred images.
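
The receptive-field advantage of dilated convolution can be checked with the standard formula for stacked stride-1 convolutions (a generic calculation, not tied to the paper's exact architecture):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 convolutions:
    rf = 1 + sum over layers of (kernel_size - 1) * dilation."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Same parameter count per layer, growing receptive field:
print(receptive_field(3, [1, 1, 1]))  # 7  (plain 3x3 stack)
print(receptive_field(3, [1, 2, 4]))  # 15 (dilated 3x3 stack)
```

Exponentially increasing dilation rates thus widen coverage at no extra compute, which is the advantage the abstract refers to.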

PMID:38400510 | DOI:10.1177/00368504241231161

Categories: Literature Watch

VELIE: A Vehicle-Based Efficient Low-Light Image Enhancement Method for Intelligent Vehicles

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 19;24(4):1345. doi: 10.3390/s24041345.

ABSTRACT

In Advanced Driving Assistance Systems (ADAS), Automated Driving Systems (ADS), and Driver Assistance Systems (DAS), RGB camera sensors are extensively utilized for object detection, semantic segmentation, and object tracking. Despite their popularity due to low costs, RGB cameras exhibit weak robustness in complex environments, particularly underperforming in low-light conditions, which raises a significant concern. To address these challenges, multi-sensor fusion systems or specialized low-light cameras have been proposed, but their high costs render them unsuitable for widespread deployment. On the other hand, improvements in post-processing algorithms offer a more economical and effective solution. However, current research in low-light image enhancement still shows substantial gaps in detail enhancement on nighttime driving datasets and is characterized by high deployment costs, failing to achieve real-time inference and edge deployment. Therefore, this paper leverages the Swin Vision Transformer combined with a gamma transformation integrated U-Net for the decoupled enhancement of initial low-light inputs, proposing a deep learning enhancement network named Vehicle-based Efficient Low-light Image Enhancement (VELIE). VELIE achieves state-of-the-art performance on various driving datasets with a processing time of only 0.19 s, significantly enhancing high-dimensional environmental perception tasks in low-light conditions.
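
The gamma transformation that VELIE integrates can be sketched in its common power-law form (a generic enhancement step; the paper's learned pipeline is more involved):

```python
import numpy as np

def gamma_enhance(img, gamma=2.2):
    """Power-law brightening of an image normalized to [0, 1]:
    out = in ** (1/gamma). Dark pixels are lifted far more than bright ones."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

dark = np.array([0.05, 0.2, 0.8])
print(gamma_enhance(dark))  # low intensities gain the most
```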

PMID:38400503 | DOI:10.3390/s24041345

Categories: Literature Watch

Semantic and Geometric-Aware Day-to-Night Image Translation Network

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 19;24(4):1339. doi: 10.3390/s24041339.

ABSTRACT

Autonomous driving systems heavily depend on perception tasks for optimal performance. However, the prevailing datasets are primarily focused on scenarios with clear visibility (i.e., sunny and daytime). This concentration poses challenges in training deep-learning-based perception models for environments with adverse conditions (e.g., rainy and nighttime). In this paper, we propose an unsupervised network designed for day-to-night image translation to solve the ill-posed problem of learning the mapping between domains with unpaired data. The proposed method involves extracting both semantic and geometric information from input images in the form of attention maps. We assume that the multi-task network can extract semantic and geometric information during the estimation of semantic segmentation and depth maps, respectively. The image-to-image translation network integrates the two distinct types of extracted information, employing them as spatial attention maps. We compare our method with related works both qualitatively and quantitatively. The proposed method shows both qualitative and quantitative improvements over related work.

PMID:38400497 | DOI:10.3390/s24041339

Categories: Literature Watch

FV-MViT: Mobile Vision Transformer for Finger Vein Recognition

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 19;24(4):1331. doi: 10.3390/s24041331.

ABSTRACT

In addressing challenges related to high parameter counts and limited training samples for finger vein recognition, we present the FV-MViT model. It serves as a lightweight deep learning solution, emphasizing high accuracy, portable design, and low latency. The FV-MViT introduces two key components. The Mul-MV2 Block utilizes a dual-path inverted residual connection structure for multi-scale convolutions, extracting additional local features. Simultaneously, the Enhanced MobileViT Block eliminates the large-scale convolution block at the beginning of the original MobileViT Block. It converts the Transformer's self-attention into separable self-attention with linear complexity, optimizing the back end of the original MobileViT Block with depth-wise separable convolutions. This aims to extract global features and effectively reduce parameter counts and feature extraction times. Additionally, we introduce a soft target center cross-entropy loss function to enhance generalization and increase accuracy. Experimental results indicate that the FV-MViT achieves a recognition accuracy of 99.53% and 100.00% on the Shandong University (SDU) and Universiti Teknologi Malaysia (USM) datasets, with equal error rates of 0.47% and 0.02%, respectively. The model has a parameter count of 5.26 million and exhibits a latency of 10.00 milliseconds from the sample input to the recognition output. Comparison with state-of-the-art (SOTA) methods reveals competitive performance for FV-MViT.
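
The soft-target idea behind the loss can be illustrated with a label-smoothing cross-entropy (a generic stand-in, not the exact soft target center cross-entropy of FV-MViT):

```python
import numpy as np

def soft_target_ce(logits, target, n_classes, smoothing=0.1):
    """Cross-entropy against a smoothed one-hot target: the true class gets
    probability 1 - smoothing, the rest share the remainder (illustrative)."""
    soft = np.full(n_classes, smoothing / (n_classes - 1))
    soft[target] = 1.0 - smoothing
    # Numerically stable log-softmax
    logp = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    return -(soft * logp).sum()

logits = np.array([4.0, 1.0, 0.5])
# Smoothing penalizes over-confident logits relative to hard targets
print(soft_target_ce(logits, 0, 3) > soft_target_ce(logits, 0, 3, smoothing=0.0))  # True
```

Distributing a little probability mass onto the wrong classes discourages extreme logits, which is one common route to the improved generalization the abstract describes.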

PMID:38400488 | DOI:10.3390/s24041331

Categories: Literature Watch

Electronic Nose Drift Suppression Based on Smooth Conditional Domain Adversarial Networks

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 18;24(4):1319. doi: 10.3390/s24041319.

ABSTRACT

Anti-drift is a new and serious challenge in the field related to gas sensors. Gas sensor drift causes the probability distribution of the measured data to be inconsistent with the probability distribution of the calibrated data, which leads to the failure of the original classification algorithm. In order to make the probability distributions of the drifted data and the regular data consistent, we introduce the Conditional Adversarial Domain Adaptation Network (CDAN) combined with the Sharpness-Aware Minimization (SAM) optimizer, a state-of-the-art deep transfer learning method. The core approach involves the construction of feature extractors and domain discriminators designed to extract shared features from both drift and clean data. These extracted features are subsequently input into a classifier, thereby amplifying the overall model's generalization capabilities. The method boasts three key advantages: (1) Implementation of semi-supervised learning, thereby negating the necessity for labels on drift data. (2) Unlike conventional deep transfer learning methods such as the Domain-adversarial Neural Network (DANN) and Wasserstein Domain-adversarial Neural Network (WDANN), it accommodates inter-class correlations. (3) It exhibits enhanced ease of training and convergence compared to traditional deep transfer learning networks. Through rigorous experimentation on two publicly available datasets, we substantiate the efficiency and effectiveness of our proposed anti-drift methodology when juxtaposed with state-of-the-art techniques.
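
The SAM optimizer mentioned above follows a two-step update: perturb the weights toward higher loss, then descend using the gradient at that perturbed point. A generic sketch on a toy quadratic loss (illustrative only, unrelated to the paper's network):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step (generic sketch):
    ascend within a rho-ball to the locally 'sharpest' point,
    then apply the gradient computed there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_sharp = grad_fn(w + eps)  # gradient at the perturbed (worst-case) weights
    return w - lr * g_sharp

# Toy quadratic loss L(w) = ||w||^2 / 2, so grad_fn(w) = w
w = np.array([2.0, -1.0])
for _ in range(50):
    w = sam_step(w, lambda w: w)
print(np.linalg.norm(w) < 0.1)  # True: converges to the (flat) minimum
```

In practice the two gradient evaluations per step roughly double training cost, traded for flatter minima and better generalization.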

PMID:38400477 | DOI:10.3390/s24041319

Categories: Literature Watch

Characterization of Partial Discharges in Dielectric Oils Using High-Resolution CMOS Image Sensor and Convolutional Neural Networks

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 18;24(4):1317. doi: 10.3390/s24041317.

ABSTRACT

In this work, an exhaustive analysis of the partial discharges that originate in the bubbles present in dielectric mineral oils is carried out. To achieve this, a low-cost, high-resolution CMOS image sensor is used. Partial discharge measurements using that image sensor are validated by a standard electrical detection system that uses a discharge capacitor. In order to accurately identify the images corresponding to partial discharges, a convolutional neural network is trained using a large set of images captured by the image sensor. The image classification model is developed using a deep convolutional network built with TensorFlow and Keras. The classification results of the experiments show that the accuracy achieved by our model is around 95% on the validation set and 82% on the test set. As a result of this work, a non-destructive diagnosis method has been developed, based on the use of an image sensor and the design of a convolutional neural network. This approach allows us to obtain information about the state of mineral oils before breakdown occurs, providing a valuable tool for the evaluation and maintenance of these dielectric oils.

PMID:38400475 | DOI:10.3390/s24041317

Categories: Literature Watch

Noninvasive Diabetes Detection through Human Breath Using TinyML-Powered E-Nose

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 17;24(4):1294. doi: 10.3390/s24041294.

ABSTRACT

Volatile organic compounds (VOCs) in exhaled human breath serve as pivotal biomarkers for disease identification and medical diagnostics. In the context of diabetes mellitus, the noninvasive detection of acetone, a primary biomarker using electronic noses (e-noses), has gained significant attention. However, employing e-noses requires pre-trained algorithms for precise diabetes detection, often requiring a computer with a programming environment to classify newly acquired data. This study focuses on the development of an embedded system integrating Tiny Machine Learning (TinyML) and an e-nose equipped with Metal Oxide Semiconductor (MOS) sensors for real-time diabetes detection. The study encompassed 44 individuals, comprising 22 healthy individuals and 22 diagnosed with various types of diabetes mellitus. Test results highlight the XGBoost Machine Learning algorithm's achievement of 95% detection accuracy. Additionally, the integration of deep learning algorithms, particularly deep neural networks (DNNs) and one-dimensional convolutional neural network (1D-CNN), yielded a detection efficacy of 94.44%. These outcomes underscore the potency of combining e-noses with TinyML in embedded systems, offering a noninvasive approach for diabetes mellitus detection.
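
Deploying such models with TinyML typically involves quantizing weights to int8 so they fit microcontroller memory; a generic symmetric-quantization sketch (not the authors' exact toolchain) is:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization, as commonly used when
    deploying models to microcontrollers (generic sketch)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

weights = np.array([0.8, -0.32, 0.05, -1.27])
q, scale = quantize_int8(weights)
restored = q.astype(np.float32) * scale
print(np.max(np.abs(restored - weights)) <= scale)  # error within one step
```

This cuts storage 4x relative to float32 and enables integer-only inference on MCU-class hardware.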

PMID:38400451 | DOI:10.3390/s24041294

Categories: Literature Watch

Fault Diagnosis of the Rolling Bearing by a Multi-Task Deep Learning Method Based on a Classifier Generative Adversarial Network

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 17;24(4):1290. doi: 10.3390/s24041290.

ABSTRACT

Accurate fault diagnosis is essential for the safe operation of rotating machinery. Recently, traditional deep learning-based fault diagnosis methods have achieved promising results. However, most of these methods focus only on supervised learning and tend to use small convolution kernels ineffectively, extracting features that are uncontrollable and poorly interpretable. To this end, this study proposes an innovative semi-supervised learning method for bearing fault diagnosis. Firstly, multi-scale dilated convolution squeeze-and-excitation residual blocks are designed to extract local and global features. Secondly, a classifier generative adversarial network is employed to achieve multi-task learning. Both unsupervised and supervised learning are performed simultaneously to improve the generalization ability. Finally, supervised learning is applied to fine-tune the final model, which can extract multi-scale features and be further improved by implicit data augmentation. Experiments on two datasets were carried out, and the results verified the superiority of the proposed method.

PMID:38400448 | DOI:10.3390/s24041290

Categories: Literature Watch

From Pixels to Precision: A Survey of Monocular Visual Odometry in Digital Twin Applications

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 17;24(4):1274. doi: 10.3390/s24041274.

ABSTRACT

This survey provides a comprehensive overview of traditional techniques and deep learning-based methodologies for monocular visual odometry (VO), with a focus on displacement measurement applications. This paper outlines the fundamental concepts and general procedures for VO implementation, including feature detection, tracking, motion estimation, triangulation, and trajectory estimation. This paper also explores the research challenges inherent in VO implementation, including scale estimation and ground plane considerations. The scientific literature is rife with diverse methodologies aiming to overcome these challenges, particularly focusing on the problem of accurate scale estimation. This issue has been typically addressed through the reliance on knowledge regarding the height of the camera from the ground plane and the evaluation of feature movements on that plane. Alternatively, some approaches have utilized additional tools, such as LiDAR or depth sensors. This survey of approaches concludes with a discussion of future research challenges and opportunities in the field of monocular visual odometry.
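
The camera-height scale recovery described above can be sketched as follows (simplified geometry: y axis up, camera at the origin, flat ground plane at y = -h; the points are illustrative):

```python
import numpy as np

def metric_scale(ground_points_est, camera_height_m):
    """Recover the metric scale of an up-to-scale monocular reconstruction
    from the known camera height above the ground plane (generic sketch).
    ground_points_est: (N, 3) triangulated ground points in the camera frame."""
    est_height = np.median(-ground_points_est[:, 1])  # up-to-scale camera height
    return camera_height_m / est_height

# Up-to-scale ground points; the true camera height is 1.5 m
pts = np.array([[0.3, -0.5, 2.0], [-0.2, -0.5, 3.1], [0.1, -0.5, 4.0]])
print(metric_scale(pts, 1.5))  # 3.0: multiply all estimated translations by this
```

The median makes the estimate robust to a few non-ground points leaking into the set.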

PMID:38400432 | DOI:10.3390/s24041274

Categories: Literature Watch

Single Person Identification and Activity Estimation in a Room from Waist-Level Contours Captured by 2D Light Detection and Ranging

Sat, 2024-02-24 06:00

Sensors (Basel). 2024 Feb 17;24(4):1272. doi: 10.3390/s24041272.

ABSTRACT

To develop socially assistive robots for monitoring older adults at home, a sensor is required to identify residents and capture activities within the room without violating privacy. We focused on 2D Light Detection and Ranging (2D-LIDAR) capable of robustly measuring human contours in a room. While horizontal 2D contour data can provide human location, identifying humans and activities from these contours is challenging. To address this issue, we developed novel methods using deep learning techniques. This paper proposes methods for person identification and activity estimation in a room using contour point clouds captured by a single 2D-LIDAR at hip height. In this approach, human contours were extracted from 2D-LIDAR data using density-based spatial clustering of applications with noise. Subsequently, the person and activity within a 10-s interval were estimated employing deep learning techniques. Two deep learning models, namely Long Short-Term Memory (LSTM) and image classification (VGG16), were compared. In the experiment, a total of 120 min of walking data and 100 min of additional activities (door opening, sitting, and standing) were collected from four participants. The LSTM-based and VGG16-based methods achieved accuracies of 65.3% and 89.7%, respectively, for person identification among the four individuals. Furthermore, these methods demonstrated accuracies of 94.2% and 97.9%, respectively, for the estimation of the four activities. Despite the 2D-LIDAR point clouds at hip height containing small features related to gait, the results indicate that the VGG16-based method has the capability to identify individuals and accurately estimate their activities.
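
The density-based clustering step used to extract human contours can be sketched with a minimal DBSCAN (illustrative 2D scan returns; a tuned library implementation would be used in practice):

```python
def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN over 2D points; returns one label per point, -1 = noise."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]
    labels, cluster = [None] * len(points), -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        if len(neighbors(i)) < min_pts:
            labels[i] = -1          # not a core point (may still join a cluster later)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = neighbors(i)
        while seeds:
            j = seeds.pop()
            if labels[j] not in (None, -1):
                continue
            if labels[j] is None and len(nj := neighbors(j)) >= min_pts:
                seeds.extend(k for k in nj if labels[k] is None)  # j is core: expand
            labels[j] = cluster
    return labels

# Two waist-level contour arcs plus one stray return
scan = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (3.0, 3.0), (3.1, 3.0), (3.0, 3.1), (9.0, 9.0)]
labels = dbscan(scan)
print(len({l for l in labels if l != -1}))  # 2 clusters, 1 noise point
```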

PMID:38400430 | DOI:10.3390/s24041272

Categories: Literature Watch

Highly robust reconstruction framework for three-dimensional optical imaging based on physical model constrained neural networks

Fri, 2024-02-23 06:00

Phys Med Biol. 2024 Feb 23. doi: 10.1088/1361-6560/ad2ca3. Online ahead of print.

ABSTRACT

OBJECTIVE: The reconstruction of three-dimensional optical imaging that can quantitatively acquire the target distribution from surface measurements is a serious ill-posed problem. The objective of this work is to develop a highly robust reconstruction framework to solve the existing problems.

APPROACH: This paper proposes a reconstruction framework based on physical-model-constrained neural networks. In the framework, the neural network generates a target distribution from surface measurements, while the physical model calculates the surface light distribution corresponding to that target distribution. The mean square error between the calculated surface light distribution and the surface measurements is then used as a loss function to optimize the neural network. To further reduce the dependence on a priori information, a moveable region is randomly selected and then traverses the entire solution interval. We reconstruct the target distribution within this moveable region, and the results are used as the basis for its next movement.
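
The self-supervised idea, fitting the target so the physical model reproduces the measurements, can be sketched with a toy linear forward model standing in for the (nonlinear) light-propagation physics:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))           # toy linear forward model (physics stand-in)
x_true = np.zeros(10); x_true[3] = 1.0  # sparse "target distribution"
y = A @ x_true                          # simulated surface measurements

# Self-supervised fit: minimize the physics residual; no training dataset needed
x = np.zeros(10)
lr = 0.5
for _ in range(500):
    residual = A @ x - y
    x -= lr * (A.T @ residual) / len(y)  # gradient of the mean squared error
print(np.linalg.norm(x - x_true) < 1e-2)  # True: measurements pin down the target
```

In the paper the plain vector x is replaced by a neural network's output, but the loss, the mismatch between modeled and measured surface light, plays the same role.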

MAIN RESULTS: The performance of the proposed framework is evaluated with a series of simulations and in vivo experiment, including accuracy robustness of different target distributions, noise immunity, depth robustness, and spatial resolution. The results collectively demonstrate that the framework can reconstruct targets with a high accuracy, stability and versatility.

SIGNIFICANCE: The proposed framework has high accuracy and robustness, as well as good generalizability. Compared with traditional regularization-based reconstruction methods, it eliminates the need to manually delineate feasible regions and adjust regularization parameters. Compared with emerging deep learning assisted methods, it does not require any training dataset, thus saving a lot of time and resources and solving the problem of poor generalization and robustness of deep learning methods. Thus, the framework opens up a new perspective for the reconstruction of three-dimensional optical imaging.

PMID:38394682 | DOI:10.1088/1361-6560/ad2ca3

Categories: Literature Watch

The incredible bulk: Human cytomegalovirus tegument architectures uncovered by AI-empowered cryo-EM

Fri, 2024-02-23 06:00

Sci Adv. 2024 Feb 23;10(8):eadj1640. doi: 10.1126/sciadv.adj1640. Epub 2024 Feb 23.

ABSTRACT

The compartmentalization of eukaryotic cells presents considerable challenges to the herpesvirus life cycle. The herpesvirus tegument, a bulky proteinaceous aggregate sandwiched between herpesviruses' capsid and envelope, is uniquely evolved to address these challenges, yet tegument structure and organization remain poorly characterized. We use deep-learning-enhanced cryogenic electron microscopy to investigate the tegument of human cytomegalovirus virions and noninfectious enveloped particles (NIEPs; a genome packaging-aborted state), revealing a portal-biased tegumentation scheme. We resolve atomic structures of portal vertex-associated tegument (PVAT) and identify multiple configurations of PVAT arising from layered reorganization of pUL77, pUL48 (large tegument protein), and pUL47 (inner tegument protein) assemblies. Analyses show that pUL77 seals the last-packaged viral genome end through electrostatic interactions, pUL77 and pUL48 harbor a head-linker-capsid-binding motif conducive to PVAT reconfiguration, and pUL47/48 dimers form 45-nm-long filaments extending from the portal vertex. These results provide a structural framework for understanding how herpesvirus tegument facilitates and evolves during processes spanning viral genome packaging to delivery.

PMID:38394211 | DOI:10.1126/sciadv.adj1640

Categories: Literature Watch

Research on the construction of information-based nursing quality control system based on deep learning model under the lean perspective

Fri, 2024-02-23 06:00

Technol Health Care. 2024 Jan 30. doi: 10.3233/THC-230730. Online ahead of print.

ABSTRACT

OBJECTIVE: In order to improve nursing quality management and protect patient medical safety, it is necessary to change the default mode and completely integrate information technology and nursing quality control utilising lean management.

METHODS: A database was created, and the nursing quality control scoring standard was entered into the computer; after each inspection, the inspection reports were likewise entered so that data could be preserved precisely and promptly. The computer was then utilised to accurately assess the intensity and quality of nursing work; to compute, count, and analyse the stored data; to output the quality of nursing work in each department as a report; and to apply lean management to the issues gathered.

RESULTS: Data analysis makes it simple to identify flaws and consistently strengthen weak points, thereby raising nursing quality. Lean management is brought into the construction process to create an information-based nursing quality control system with a simple and effective method and results that are scientific and objective.

PMID:38393932 | DOI:10.3233/THC-230730

Categories: Literature Watch

Altered Motor Activity Patterns within 10-Minute Timescale Predict Incident Clinical Alzheimer's Disease

Fri, 2024-02-23 06:00

J Alzheimers Dis. 2024 Feb 19. doi: 10.3233/JAD-230928. Online ahead of print.

ABSTRACT

BACKGROUND: Fractal motor activity regulation (FMAR), characterized by self-similar temporal patterns in motor activity across timescales, is robust in healthy young humans but degrades with aging and in Alzheimer's disease (AD).

OBJECTIVE: To determine the timescales where alterations of FMAR can best predict the clinical onset of AD.

METHODS: FMAR was assessed from actigraphy at baseline in 1,077 participants who had annual follow-up clinical assessments for up to 15 years. Survival analysis combined with deep learning (DeepSurv) was used to examine how baseline FMAR at different timescales from 3 minutes up to 6 hours contributed differently to the risk for incident clinical AD.
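
DeepSurv-style survival models are commonly evaluated with the concordance index; a minimal version (ignoring tied times and tied risks; the data are illustrative) is:

```python
def concordance_index(times, events, risks):
    """Fraction of comparable pairs in which the higher-risk subject fails first.
    times: follow-up times; events: 1 if the outcome (e.g. clinical AD) was
    observed; risks: model risk scores. Ties are ignored for simplicity."""
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # i failed before j's follow-up
                comparable += 1
                concordant += risks[i] > risks[j]
    return concordant / comparable

# Illustrative follow-up years, onset indicators, and predicted risks
times  = [2.0, 5.0, 8.0, 10.0]
events = [1, 1, 0, 0]
risks  = [0.9, 0.6, 0.4, 0.2]
print(concordance_index(times, events, risks))  # 1.0: risks perfectly rank onset order
```

A value of 0.5 corresponds to random ranking, 1.0 to perfect ranking of onset times.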

RESULTS: Clinical AD occurred in 270 participants during the follow-up. DeepSurv identified three potential regions of timescales in which FMAR alterations were significantly linked to the risk for clinical AD: <10, 20-40, and 100-200 minutes. Confirmed by the Cox and random survival forest models, the effect of FMAR alterations in the timescale of <10 minutes was the strongest, after adjusting for covariates.

CONCLUSIONS: Subtle changes in motor activity fluctuations predicted the clinical onset of AD, with the strongest association observed in activity fluctuations at timescales <10 minutes. These findings suggest that short actigraphy recordings may be used to assess the risk of AD.

PMID:38393904 | DOI:10.3233/JAD-230928

Categories: Literature Watch
