Deep learning

Editorial for Innovative Artificial Intelligence System in the Children's Hospital in Japan

Mon, 2025-05-26 06:00

JMA J. 2025 Apr 28;8(2):361-362. doi: 10.31662/jmaj.2025-0076. Epub 2025 Mar 21.

NO ABSTRACT

PMID:40416027 | PMC:PMC12095351 | DOI:10.31662/jmaj.2025-0076

Categories: Literature Watch

Innovative Artificial Intelligence System in the Children's Hospital in Japan

Mon, 2025-05-26 06:00

JMA J. 2025 Apr 28;8(2):354-360. doi: 10.31662/jmaj.2024-0312. Epub 2025 Feb 21.

ABSTRACT

The evolution of innovative artificial intelligence (AI) systems in pediatric hospitals in Japan promises benefits for patients and healthcare providers. We actively contribute to advancements in groundbreaking medical treatments by leveraging deep learning technology and using vast medical datasets. Our team of data scientists closely collaborates with departments within the hospital. Our research themes based on deep learning are wide-ranging, including acceleration of pathological diagnosis using image data, distinguishing bacterial species, early detection of eye diseases, and prediction of genetic disorders from physical features. Furthermore, we implement Information and Communication Technology to diagnose pediatric cancer. Moreover, we predict immune responses based on genomic data and diagnose autism by quantifying behavior and communication. Our expertise extends beyond research to provide comprehensive AI development services, including data collection, annotation, high-speed computing, utilization of machine learning frameworks, design of web services, and containerization. In addition, as active members of medical AI platform collaboration partnerships, we provide unique data and analytical technologies to facilitate the development of AI platforms. Furthermore, we address the challenges of securing medical data in the cloud to ensure compliance with stringent confidentiality standards. We discuss the advancements of AI in pediatric hospitals and their challenges.

PMID:40415999 | PMC:PMC12095641 | DOI:10.31662/jmaj.2024-0312

Categories: Literature Watch

Response to the Letter by Matsubara

Mon, 2025-05-26 06:00

JMA J. 2025 Apr 28;8(2):664. doi: 10.31662/jmaj.2024-0420. Epub 2025 Mar 7.

NO ABSTRACT

PMID:40415986 | PMC:PMC12095420 | DOI:10.31662/jmaj.2024-0420

Categories: Literature Watch

Editorial: Advances in computer vision: from deep learning models to practical applications

Mon, 2025-05-26 06:00

Front Neurosci. 2025 May 9;19:1615276. doi: 10.3389/fnins.2025.1615276. eCollection 2025.

NO ABSTRACT

PMID:40415892 | PMC:PMC12098266 | DOI:10.3389/fnins.2025.1615276

Categories: Literature Watch

Improving annotation efficiency for fully labeling a breast mass segmentation dataset

Mon, 2025-05-26 06:00

J Med Imaging (Bellingham). 2025 May;12(3):035501. doi: 10.1117/1.JMI.12.3.035501. Epub 2025 May 21.

ABSTRACT

PURPOSE: Breast cancer remains a leading cause of death for women. Screening programs are deployed to detect cancer at early stages. One current barrier identified by breast imaging researchers is a shortage of labeled image datasets. Addressing this problem is crucial to improve early detection models. We present an active learning (AL) framework for segmenting breast masses from 2D digital mammography, and we publish labeled data. Our method aims to reduce the input needed from expert annotators to reach a fully labeled dataset.

APPROACH: We create a dataset of 1136 mammographic masses with pixel-wise binary segmentation labels, with the test subset labeled independently by two different teams. With this dataset, we simulate a human annotator within an AL framework to develop and compare AI-assisted labeling methods, using a discriminator model and a simulated oracle to collect acceptable segmentation labels. A UNet model is retrained on these labels, generating new segmentations. We evaluate various oracle heuristics using the percentage of segmentations that the oracle relabels and measure the quality of the proposed labels by evaluating the intersection over union over a validation dataset.
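
As an illustration of the oracle heuristic described above, the minimal sketch below simulates an expert who relabels a proposed mask only when its intersection over union against a reference mask falls below a threshold; the 0.80 threshold and the random toy masks are illustrative assumptions, not values from the paper.

```python
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection over union for binary masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter) / union if union else 1.0

def oracle_pass(proposed, reference, threshold=0.80):
    """Accept good proposals; queue the rest for (simulated) expert relabeling."""
    accepted, relabel_queue = [], []
    for i, (p, r) in enumerate(zip(proposed, reference)):
        (accepted if iou(p, r) >= threshold else relabel_queue).append(i)
    return accepted, relabel_queue

rng = np.random.default_rng(0)
ref = rng.random((10, 64, 64)) > 0.5                                  # stand-in "expert" masks
prop = [np.logical_xor(m, rng.random(m.shape) > 0.9) for m in ref]    # noisy model proposals
ok, redo = oracle_pass(prop, ref)
print(f"accepted {len(ok)}, relabeled {len(redo)} ({100 * len(redo) / len(ref):.0f}%)")
```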

RESULTS: Our method reduces expert annotator input by 44%. We present a dataset of 1136 binary segmentation labels approved by board-certified radiologists and make the 143-image validation set public for comparison with other researchers' methods.

CONCLUSIONS: We demonstrate that AL can significantly improve the efficiency and time-effectiveness of creating labeled mammogram datasets. Our framework facilitates the development of high-quality datasets while minimizing manual effort in the domain of digital mammography.

PMID:40415867 | PMC:PMC12094908 | DOI:10.1117/1.JMI.12.3.035501

Categories: Literature Watch

Convolutional variational auto-encoder and vision transformer hybrid approach for enhanced early Alzheimer's detection

Mon, 2025-05-26 06:00

J Med Imaging (Bellingham). 2025 May;12(3):034501. doi: 10.1117/1.JMI.12.3.034501. Epub 2025 May 21.

ABSTRACT

PURPOSE: Alzheimer's disease (AD) is becoming more prevalent among the elderly, with projections indicating that it will affect a significantly larger population in the future. Despite substantial research efforts and investments focused on exploring the underlying biological factors, a definitive cure has yet to be discovered. The currently available treatments are effective only in slowing disease progression when it is identified in the early stages of the disease. Therefore, early diagnosis has become critical in treating AD.

APPROACH: Recently, the use of deep learning techniques has demonstrated remarkable improvement in enhancing the precision and speed of automatic AD diagnosis through medical image analysis. We propose a hybrid model that integrates a convolutional variational auto-encoder (CVAE) with a vision transformer (ViT). During the encoding phase, the CVAE captures key features from the MRI scans, whereas the decoding phase reduces irrelevant details in MRIs. These refined inputs enhance the ViT's ability to analyze complex patterns through its multihead attention mechanism.
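
A minimal PyTorch sketch of the hybrid idea follows: a small convolutional VAE reconstructs the input scan, and the reconstruction feeds a tiny ViT-style classifier. Layer sizes, the 64 × 64 toy input, the patch size, and the 3-class head are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Flatten())
        self.mu = nn.Linear(64 * 16 * 16, latent)
        self.logvar = nn.Linear(64 * 16 * 16, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

class TinyViT(nn.Module):
    def __init__(self, img=64, patch=8, dim=128, classes=3):
        super().__init__()
        n_tokens = (img // patch) ** 2
        self.embed = nn.Conv2d(1, dim, patch, patch)           # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        return self.head(self.encoder(tokens).mean(dim=1))     # mean-pool tokens

x = torch.rand(4, 1, 64, 64)                                   # toy MRI slices
vae, vit = ConvVAE(), TinyViT()
recon, mu, logvar = vae(x)
logits = vit(recon)                                            # classify the refined input
print(recon.shape, logits.shape)
```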

RESULTS: The model was trained and evaluated using 14,000 structural MRI samples from the ADNI and SCAN databases. Compared with three benchmark methods and previous studies with Alzheimer's classification techniques, our approach achieved a significant improvement, with a test accuracy of 93.3%.

CONCLUSIONS: Through this research, we identified the potential of the CVAE-ViT hybrid approach in detecting minor structural abnormalities related to AD. Integrating unsupervised feature extraction via CVAE can significantly enhance transformer-based models in distinguishing between stages of cognitive impairment, thereby identifying early indicators of AD.

PMID:40415866 | PMC:PMC12094909 | DOI:10.1117/1.JMI.12.3.034501

Categories: Literature Watch

Classifying chronic obstructive pulmonary disease status using computed tomography imaging and convolutional neural networks: comparison of model input image types and training data severity

Mon, 2025-05-26 06:00

J Med Imaging (Bellingham). 2025 May;12(3):034502. doi: 10.1117/1.JMI.12.3.034502. Epub 2025 May 22.

ABSTRACT

PURPOSE: Convolutional neural network (CNN)-based models using computed tomography images can classify chronic obstructive pulmonary disease (COPD) with high performance, but various input image types have been investigated, and it is unclear what image types are optimal. We propose a 2D airway-optimized topological multiplanar reformat (tMPR) input image and compare its performance with established 2D/3D input image types for COPD classification. As a secondary aim, we examined the impact of training on a dataset with predominantly mild COPD cases and testing on a more severe dataset to assess whether it improves generalizability.

APPROACH: CanCOLD study participants were used for training/internal testing; SPIROMICS participants were used for external testing. Several 2D/3D input image types were adapted from the literature. In the proposed models, 2D airway-optimized tMPR images (to convey shape and interior/contextual information) and 3D output fusion of axial/sagittal/coronal images were investigated. The area-under-the-receiver-operator-curve (AUC) was used to evaluate model performance and Brier scores were used to evaluate model calibration. To further examine how training dataset severity impacts generalization, we compared model performance when trained on the milder CanCOLD dataset versus the more severe SPIROMICS dataset, and vice versa.
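
The late-fusion idea can be sketched as two branches whose pooled features are concatenated before a single COPD logit: one 2D branch stands in for the airway-optimized tMPR image and one 3D branch stands in for the axial/coronal/sagittal lung view. Channel counts and input sizes below are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TwoBranchCOPD(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch2d = nn.Sequential(                      # tMPR image branch
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.branch3d = nn.Sequential(                      # lung-view volume branch
            nn.Conv3d(1, 8, 3, 2, 1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.classifier = nn.Linear(32 + 16, 1)             # fused COPD logit

    def forward(self, tmpr, volume):
        fused = torch.cat([self.branch2d(tmpr), self.branch3d(volume)], dim=1)
        return self.classifier(fused)

model = TwoBranchCOPD()
logit = model(torch.rand(2, 1, 128, 128), torch.rand(2, 1, 64, 64, 64))
print(torch.sigmoid(logit).shape)                           # probability of COPD per subject
```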

RESULTS: A total of n = 742 CanCOLD participants were used for training/validation and n = 309 for testing; n = 448 SPIROMICS participants were used for external testing. For the CanCOLD and SPIROMICS test sets, the proposed 2D tMPR on its own (CanCOLD: AUC = 0.79; SPIROMICS: AUC = 0.94) and combined with the 3D axial/coronal/sagittal lung view (CanCOLD: AUC = 0.82; SPIROMICS: AUC = 0.93) had the highest performance. The combined 2D tMPR and 3D axial/coronal/sagittal lung view had the lowest Brier score (CanCOLD: score = 0.16; SPIROMICS: score = 0.24). Conversely, using SPIROMICS for training/testing and CanCOLD for external testing resulted in lower performance when tested on CanCOLD for 2D tMPR on its own (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.74) and when combined with the 3D axial/coronal/sagittal lung view (SPIROMICS: AUC = 0.92; CanCOLD: AUC = 0.75).

CONCLUSIONS: The CNN-based model with the combined 2D tMPR images and 3D lung view as input image types had the highest performance for COPD classification, highlighting the importance of airway information and that the fusion of different types of information as input image can improve CNN-based model performance. In addition, models trained on CanCOLD demonstrated strong generalization to the more severe SPIROMICS cohort, whereas training on SPIROMICS resulted in lower performance when tested on CanCOLD. These findings suggest that training on milder COPD cases may improve classification performance across the disease spectrum.

PMID:40415865 | PMC:PMC12097752 | DOI:10.1117/1.JMI.12.3.034502

Categories: Literature Watch

Optimised Hybrid Attention-Based Capsule Network Integrated Three-Pathway Network for Chronic Disease Detection in Retinal Images

Mon, 2025-05-26 06:00

J Eval Clin Pract. 2025 Jun;31(4):e70126. doi: 10.1111/jep.70126.

ABSTRACT

BACKGROUND: Over the past 20 years, researchers have concentrated on retinal imaging as a means of detecting and classifying chronic diseases. Early diagnosis and treatment are essential to avoid chronic diseases. Manually grading retinal images is time-consuming, prone to errors, and not patient-friendly. Various Deep Learning (DL) algorithms are employed to detect chronic diseases from retinal fundus images; however, these methods have disadvantages such as overfitting and high computational cost.

OBJECTIVE: The proposed research aims to develop an optimized DL-based system for detecting chronic diseases in retinal images while addressing the existing issues.

METHODOLOGY: Initially, the retinal images are pre-processed to clean and organize the data, using normalization and HSI colour conversion. Inception-V3, ResNet-152, and a Convolutional Vision Transformer (Conv-ViT) are used to perform feature extraction. The classifier is an Optimized Hybrid Attention-based Capsule Network; an optimization step is included in the proposed model to increase the classifier's performance.
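
A hedged torchvision sketch of the feature-extraction stage follows: Inception-V3 and ResNet-152 produce pooled feature vectors that are concatenated before classification. The Conv-ViT branch and the hybrid attention capsule classifier are omitted; the linear head and five-class output are illustrative stand-ins, and weights=None avoids a pretrained-weight download.

```python
import torch
import torch.nn as nn
from torchvision import models

inception = models.inception_v3(weights=None, aux_logits=True)
inception.fc = nn.Identity()              # expose the 2048-d pooled features
resnet = models.resnet152(weights=None)
resnet.fc = nn.Identity()                 # expose the 2048-d pooled features
inception.eval()
resnet.eval()

head = nn.Linear(2048 + 2048, 5)          # e.g., five retinopathy grades (assumption)

x = torch.rand(2, 3, 299, 299)            # Inception-V3 expects 299x299 input
with torch.no_grad():
    fused = torch.cat([inception(x), resnet(x)], dim=1)   # concatenated backbone features
    logits = head(fused)
print(fused.shape, logits.shape)
```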

RESULTS: The proposed approach attains accuracies of 99.05% and 99.15% using Diabetic Retinopathy 224 × 224 (2019 Data) and the APTOS-2019 dataset, respectively. The superior performance of the proposed technique highlights its effectiveness in this domain.

CONCLUSION: The implementation of such automated methods can significantly improve the efficiency and accuracy of chronic disease diagnosis, benefiting both healthcare providers and patients.

PMID:40415584 | DOI:10.1111/jep.70126

Categories: Literature Watch

AI in Orthopedic Research: A Comprehensive Review

Mon, 2025-05-26 06:00

J Orthop Res. 2025 May 26. doi: 10.1002/jor.26109. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) is revolutionizing orthopedic research and clinical practice by enhancing diagnostic accuracy, optimizing treatment strategies, and streamlining clinical workflows. Recent advances in deep learning have enabled the development of algorithms that detect fractures, grade osteoarthritis, and identify subtle pathologies in radiographic and magnetic resonance images with performance comparable to expert clinicians. These AI-driven systems reduce missed diagnoses and provide objective, reproducible assessments that facilitate early intervention and personalized treatment planning. Moreover, AI has made significant strides in predictive analytics by integrating diverse patient data, including gait and imaging features, to forecast surgical outcomes, implant survivorship, and rehabilitation trajectories. Emerging applications in robotics, augmented reality, digital twin technologies, and exoskeleton control promise to further transform preoperative planning and intraoperative guidance. Despite these promising developments, challenges such as data heterogeneity, algorithmic bias, and the "black box" nature of many models, as well as issues with robust validation, remain. This comprehensive review synthesizes current developments, critically examines limitations, and outlines future directions for integrating AI into musculoskeletal care.

PMID:40415515 | DOI:10.1002/jor.26109

Categories: Literature Watch

Automated landmark-based mid-sagittal plane: reliability for 3-dimensional mandibular asymmetry assessment on head CT scans

Sun, 2025-05-25 06:00

Clin Oral Investig. 2025 May 26;29(6):311. doi: 10.1007/s00784-025-06397-z.

ABSTRACT

OBJECTIVE: The determination of the mid-sagittal plane (MSP) on three-dimensional (3D) head imaging is key to the assessment of facial asymmetry. The aim of this study was to evaluate the reliability of an automated landmark-based MSP to quantify mandibular asymmetry on head computed tomography (CT) scans.

MATERIALS AND METHODS: A dataset of 368 CT scans, including orthognathic surgery patients, was automatically annotated with 3D cephalometric landmarks via a previously published deep learning-based method. Five of these landmarks were used to automatically construct an MSP orthogonal to the Frankfurt horizontal plane. The reliability of automatic MSP construction was compared with the reliability of manual MSP construction based on 6 manual localizations by 3 experienced operators on 19 randomly selected CT scans. The mandibular asymmetry of the 368 CT scans with respect to the MSP was calculated and compared with clinical expert judgment.
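
As a simplified illustration of landmark-based asymmetry assessment (not the published five-landmark, Frankfurt-orthogonal construction), the sketch below fits a plane through midline landmarks and measures asymmetry as the distance between a right-side landmark and the mirror image of its left-side counterpart; all landmark names and coordinates are invented for the example.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    return centroid, vh[-1]                       # smallest singular vector = plane normal

def mirror(p, origin, normal):
    """Reflect point p across the plane defined by (origin, normal)."""
    return p - 2.0 * np.dot(p - origin, normal) * normal

midline = np.array([[0.2, 80.0, 40.0],            # e.g., a nasion-like midline landmark
                    [0.1, 60.0, 10.0],            # e.g., anterior nasal spine
                    [-0.1, 40.0, -30.0]])         # e.g., a chin midline point
origin, normal = fit_plane(midline)               # mid-sagittal plane estimate

gonion_left = np.array([45.0, 10.0, -20.0])
gonion_right = np.array([-43.0, 11.0, -21.0])
asym_mm = np.linalg.norm(gonion_right - mirror(gonion_left, origin, normal))
print(f"mandibular asymmetry at gonion: {asym_mm:.1f} mm")
```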

RESULTS: The construction of the MSP was found to be highly reliable, both manually and automatically. The manual reproducibility 95% limit of agreement was less than 1 mm for y-translation and less than 1.1° for x- and z-rotation, and the automatic measurement lay within the confidence interval of the manual method. The automatic MSP construction was shown to be clinically relevant, with the mandibular asymmetry measures being consistent with the expertly assessed levels of asymmetry.

CONCLUSION: The proposed automatic landmark-based MSP construction was found to be as reliable as manual construction and clinically relevant in assessing the mandibular asymmetry of 368 head CT scans.

CLINICAL RELEVANCE: Once implemented in a clinical software, fully automated landmark-based MSP construction could be clinically used to assess mandibular asymmetry on head CT scans.

PMID:40415151 | DOI:10.1007/s00784-025-06397-z

Categories: Literature Watch

Advancing e-waste classification with customizable YOLO based deep learning models

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18151. doi: 10.1038/s41598-025-94772-x.

ABSTRACT

The burgeoning problem of electronic waste (e-waste) management necessitates sophisticated, efficient, and precise classification techniques for recycling and repurposing. To address these critical environmental and health implications, this research delves into a comprehensive analysis of three cutting-edge object detection models: YOLOv5, YOLOv7, and YOLOv8. These models are examined through the lens of efficient e-waste classification, a pivotal step in recycling and repurposing efforts. The 'You Only Look Once' (YOLO) methodology underpins our research, highlighting the distinctive architectural features of each model, including the CSPDarknet53 backbone, PANet, and advanced anchor-free detection. This research approach involved the creation of a specialized image dataset encompassing seven distinct e-waste categories to facilitate the training and validation of these models. The performance of improved and customizable YOLOv5, YOLOv7, and YOLOv8 was meticulously evaluated across various parameters such as precision, recall, speed, and training efficiency. This evaluation explores the architectural nuances of each model and its efficacy in accurately detecting diverse e-waste components. The standout performer, YOLOv8, demonstrated exceptional capabilities with its enhanced feature pyramid networks and improved CSPDarknet53 backbone with 53 convolutional layers, achieving superior precision and accuracy. Notably, this model showcased a significant reduction in training time while leveraging the computational power of the Tesla T4 GPU on Google Colab. However, the research also identified challenges, particularly in object orientation detection, suggesting avenues for future refinement. This study underscores the vital role of advanced YOLO architectures in e-waste management, providing critical insights into their practical viability, applicability in real-world scenarios, and potential limitations. By setting a benchmark in real-time object detection, our work paves the way for future innovations and improvements in environmental management technologies, specifically tailored to meet the escalating challenge of e-waste management.
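
For readers wanting to reproduce the general workflow, a hedged sketch of fine-tuning YOLOv8 with the ultralytics package is shown below; the dataset YAML, image name, and training hyperparameters are illustrative assumptions, not the paper's setup.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # small pretrained checkpoint
model.train(data="ewaste.yaml",                  # hypothetical 7-class e-waste dataset config
            epochs=100, imgsz=640, batch=16)
metrics = model.val()                            # precision/recall/mAP on the validation split

results = model("sample_circuit_board.jpg")      # inference on a hypothetical test image
for r in results:
    print(r.boxes.cls, r.boxes.conf)             # predicted classes and confidences
```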

PMID:40415121 | DOI:10.1038/s41598-025-94772-x

Categories: Literature Watch

An advanced three stage lightweight model for underwater human detection

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18137. doi: 10.1038/s41598-025-03677-2.

ABSTRACT

This study presents StarEye, a lightweight deep learning model designed for underwater human body detection (UHBD) that addresses the challenges of complex underwater environments. The proposed model incorporates several innovative components: a comprehensive underwater dataset construction methodology, a StarBlock-based backbone structure for efficient feature extraction, a Context Anchor Attention (CAA) mechanism integrated into both backbone and neck components, and a Shared Convolution Batch Normalization (SCBN) detection head. Extensive experiments demonstrate that StarEye achieves 91.1% precision, 88.6% recall, and 95.1% mAP50 while reducing the model size to 3.8MB (16.9% of the original size). The model maintains robust performance across various underwater conditions, including poor visibility, varying illumination, and biological interference. The results indicate that StarEye effectively balances model efficiency and detection accuracy, making it particularly suitable for mobile device deployment in underwater scenarios.

PMID:40415110 | DOI:10.1038/s41598-025-03677-2

Categories: Literature Watch

Exploring treatment effects and fluid resuscitation strategies in septic shock: a deep learning-based causal inference approach

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18262. doi: 10.1038/s41598-025-03141-1.

ABSTRACT

Septic shock exhibits diverse etiologies and patient characteristics, necessitating tailored fluid management. We aimed to compare resuscitation strategies using normal saline, Ringer's lactate, and albumin, and to determine which patient factors are associated with improved outcomes. We analyzed septic shock patients from the MIMIC-IV database, categorizing them by the fluid administered: normal saline, Ringer's lactate, albumin, or their combinations. A deep learning-based causal inference model estimated treatment effects on in-hospital mortality and kidney outcomes (defined as a doubling of creatinine or the initiation of kidney replacement therapy). Multivariable logistic regression was then applied to the individual treatment effects to identify patient characteristics linked to better outcomes for Ringer's lactate and additional albumin infusion compared to normal saline alone. Among 13,527 patients, 17.8% experienced in-hospital mortality and 16.2% developed kidney injury. Ringer's lactate reduced mortality by 2.33% and kidney injury by 1.41% compared to normal saline. Adding albumin to normal saline further reduced mortality by 1.20% and kidney outcomes by 0.71%. The combination of Ringer's lactate and albumin provided the greatest benefit (mortality: -3.07%, kidney injury: -3.00%). Patients with high SOFA scores, low albumin, or high lactate levels benefited more from normal saline, whereas those with low eGFR or on vasopressors were less likely to benefit from albumin. Ringer's lactate, particularly when combined with albumin, is superior to normal saline in reducing mortality and kidney injury in septic shock patients, underscoring the need for personalized fluid management based on patient-specific factors.
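
The individual-treatment-effect idea can be illustrated with a simple T-learner on synthetic data: one model per fluid strategy, with the effect estimated as the per-patient difference in predicted mortality risk. This is a stand-in for, not a reproduction of, the authors' deep learning-based causal inference model; covariates, effect sizes, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))                      # patient covariates (e.g., SOFA, lactate)
t = rng.integers(0, 2, n)                        # 0 = normal saline, 1 = Ringer's lactate
p = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * t)))       # synthetic mortality risk, lower under t = 1
y = rng.binomial(1, p)                           # in-hospital mortality indicator

m0 = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X[t == 0], y[t == 0])
m1 = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X[t == 1], y[t == 1])

ite = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]   # per-patient risk difference
print(f"estimated average effect on mortality: {ite.mean():+.3f}")
```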

PMID:40415107 | DOI:10.1038/s41598-025-03141-1

Categories: Literature Watch

Bio inspired optimization techniques for disease detection in deep learning systems

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18202. doi: 10.1038/s41598-025-02846-7.

ABSTRACT

Numerous contemporary computer-aided disease detection methodologies depend predominantly on feature engineering, which has several drawbacks, including redundant features and excessive time consumption. Conventional feature engineering requires considerable manual effort, and superfluous features limit the performance a model can reach. In contrast, recent deep learning models can address these issues while capturing intricate structures within extensive medical image datasets. Deep learning models learn feature extraction autonomously but require substantial computational resources and extensive datasets to yield meaningful abstractions, and the dimensionality problem remains a key challenge in healthcare research. Despite promising advancements in disease identification with deep learning architectures in recent years, attaining high performance remains difficult, particularly in scenarios with limited data or intricate feature spaces. This research elucidates how bio-inspired optimization techniques can improve disease diagnostics with deep learning models. The targeted feature selection of bio-inspired methods enhances computational efficiency and operational efficacy by reducing model redundancy and computational cost, particularly when data availability is constrained. These algorithms employ models of natural selection and social behavior to explore feature spaces efficiently, enhancing the robustness and generalizability of deep learning systems. The paper examines strategies derived from biological systems, such as genetic algorithms, particle swarm optimization, ant colony optimization, artificial immune systems, and swarm intelligence, which have exhibited significant potential in addressing critical challenges in disease detection across many data types. The goal is to develop bio-inspired optimization methods that enable efficient and equitable deep learning for disease diagnosis, and to assist researchers in selecting the most effective bio-inspired algorithm for disease classification, prediction, and the analysis of high-dimensional biomedical data.
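
As a concrete example of the idea, the runnable sketch below uses a tiny genetic algorithm to evolve binary feature masks, scoring each mask by the cross-validated accuracy of a small classifier; the population size, mutation rate, and synthetic dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the masked features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))            # initial random feature masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05                # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected {best.sum()} of {X.shape[1]} features, CV accuracy {fitness(best):.3f}")
```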

PMID:40415068 | DOI:10.1038/s41598-025-02846-7

Categories: Literature Watch

MobNas ensembled model for breast cancer prediction

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18238. doi: 10.1038/s41598-025-01920-4.

ABSTRACT

Breast cancer poses an immense threat to humankind, hence the need for a way to diagnose this devastating disease early, accurately, and simply. While substantial progress has been made in developing machine learning, deep learning, and transfer learning models, issues with diagnostic accuracy and minimizing diagnostic errors persist. This paper introduces MobNAS, a model that uses MobileNetV2 and NASNetLarge to sort breast cancer images into benign, malignant, or normal classes. The study employs a multi-class classification design and uses a publicly available dataset comprising 1,578 ultrasound images, including 891 benign, 421 malignant, and 266 normal cases. MobileNetV2 works well on devices with less computational capability than NASNetLarge requires, which enhances the model's applicability and effectiveness across tasks. The performance of the proposed MobNAS model was tested on the breast cancer image dataset, achieving an accuracy of 97%, a Mean Absolute Error (MAE) of 0.05, and a Matthews Correlation Coefficient (MCC) of 95%. From these findings, it is evident that MobNAS can enhance diagnostic accuracy and reduce existing shortcomings in breast cancer detection.
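
A minimal Keras sketch of the MobNAS idea follows: MobileNetV2 and NASNetLarge act as parallel feature extractors whose pooled outputs are concatenated and fed to a three-class head (benign / malignant / normal). weights=None keeps the sketch download-free, whereas the published model presumably starts from pretrained weights; the fusion head itself is an illustrative assumption.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, NASNetLarge

inp_m = layers.Input((224, 224, 3))                     # MobileNetV2 default input size
inp_n = layers.Input((331, 331, 3))                     # NASNetLarge default input size
base_m = MobileNetV2(include_top=False, weights=None, pooling="avg", input_tensor=inp_m)
base_n = NASNetLarge(include_top=False, weights=None, pooling="avg", input_tensor=inp_n)

fused = layers.Concatenate()([base_m.output, base_n.output])   # joint feature vector
out = layers.Dense(3, activation="softmax")(fused)             # benign / malignant / normal

model = Model([inp_m, inp_n], out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```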

PMID:40415060 | DOI:10.1038/s41598-025-01920-4

Categories: Literature Watch

Prediction of reproductive and developmental toxicity using an attention and gate augmented graph convolutional network

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18186. doi: 10.1038/s41598-025-02590-y.

ABSTRACT

Due to the diverse molecular structures of chemical compounds and their intricate biological pathways of toxicity, predicting their reproductive and developmental toxicity remains a challenge. Traditional Quantitative Structure-Activity Relationship models that rely on molecular descriptors have limitations in capturing the complexity of reproductive and developmental toxicity to achieve high predictive performance. In this study, we developed a descriptor-free deep learning model by constructing a Graph Convolutional Network designed with multi-head attention and gated skip-connections to predict reproductive and developmental toxicity. By integrating structural alerts directly related to toxicity into the model, we enabled more effective learning of toxicologically relevant substructures. We built a dataset of 4,514 diverse compounds, including both organic and inorganic substances. The model was trained and validated using stratified 5-fold cross-validation. It demonstrated excellent predictive performance, achieving an accuracy of 81.19% on the test set. To address the interpretability of the deep learning model, we identified subgraphs corresponding to known structural alerts, providing insights into the model's decision-making process. This study was conducted in accordance with the OECD principles for reliable Quantitative Structure-Activity Relationship modeling and contributes to the development of robust in silico models for toxicity prediction.
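
A minimal PyTorch Geometric sketch of a graph attention network with a gated skip-connection, pooled to a graph-level toxicity logit, is shown below; the feature sizes, the gating form, and the toy three-atom graph are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool

class GatedGAT(nn.Module):
    def __init__(self, in_dim=16, hidden=64, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden, heads=heads, concat=False)   # multi-head attention
        self.conv2 = GATConv(hidden, hidden, heads=heads, concat=False)
        self.gate = nn.Linear(2 * hidden, hidden)      # gated skip-connection
        self.out = nn.Linear(hidden, 1)                # toxic / non-toxic logit

    def forward(self, x, edge_index, batch):
        h1 = torch.relu(self.conv1(x, edge_index))
        h2 = torch.relu(self.conv2(h1, edge_index))
        g = torch.sigmoid(self.gate(torch.cat([h1, h2], dim=-1)))
        h = g * h2 + (1 - g) * h1                      # gate mixes the two layer outputs
        return self.out(global_mean_pool(h, batch))    # graph-level prediction

x = torch.rand(3, 16)                                  # 3 atoms, 16 node features each
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])   # bonds in both directions
batch = torch.zeros(3, dtype=torch.long)               # all atoms belong to one molecule
print(torch.sigmoid(GatedGAT()(x, edge_index, batch)))  # predicted toxicity probability
```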

PMID:40415056 | DOI:10.1038/s41598-025-02590-y

Categories: Literature Watch

A lightweight and efficient gesture recognizer for traffic police commands using spatiotemporal feature fusion

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18256. doi: 10.1038/s41598-025-02833-y.

ABSTRACT

In response to the demand for efficient and accurate recognition of traffic police gestures by driverless vehicles, this paper introduces a novel traffic police gesture recognition framework (Novel Traffic Police Gesture Recognizer, NTPGR). Initially, keypoints related to traffic police gestures are extracted using the Efficient Progressive Feature Fusion Network (EPFFNet), followed by feature modeling and fusion to enable the recognition network to better learn the temporal characteristics of gestures. Additionally, a convolution network branch and a hybrid attention branch are incorporated to further extract skeleton information from the traffic police gesture data, assign different temporal weights to key frames, and enhance the focus on important channels. Finally, in conjunction with Long Short-Term Memory (LSTM), a multi-branch gesture recognition network, termed the Multi-Sequence Gesture Recognition Network (MSNet), is proposed to facilitate the integration of the three branches of gesture features, thereby enhancing the targeted extraction of temporal characteristics in traffic police gestures. Experimental results indicate that NTPGR achieves 97.56% and 96.76% accuracy on the Police Gesture Dataset and UTD-MHAD Dataset, respectively, as well as average response times of 0.76 s and 0.74 s. It not only recognizes traffic police gestures efficiently in real time but also demonstrates strong robustness and credibility in recognizing gestures in complex environments and dynamic scenarios.
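
As a simplified, single-branch illustration of the temporal modeling step, the sketch below flattens per-frame 2D keypoints and feeds the sequence to an LSTM whose final hidden state is classified into a gesture class; the keypoint count and number of gesture classes are assumptions, and the multi-branch MSNet design is not reproduced.

```python
import torch
import torch.nn as nn

class KeypointLSTM(nn.Module):
    def __init__(self, n_keypoints=18, hidden=128, n_gestures=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keypoints * 2, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_gestures)

    def forward(self, seq):                       # seq: (batch, frames, keypoints, 2)
        b, t = seq.shape[:2]
        out, _ = self.lstm(seq.reshape(b, t, -1))  # flatten (x, y) keypoints per frame
        return self.head(out[:, -1])               # classify from the last time step

clip = torch.rand(4, 60, 18, 2)                    # 4 clips, 60 frames, 18 keypoints (x, y)
print(KeypointLSTM()(clip).shape)                  # -> torch.Size([4, 8]) gesture logits
```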

PMID:40415045 | DOI:10.1038/s41598-025-02833-y

Categories: Literature Watch

Building molecular model series from heterogeneous CryoEM structures using Gaussian mixture models and deep neural networks

Sun, 2025-05-25 06:00

Commun Biol. 2025 May 25;8(1):798. doi: 10.1038/s42003-025-08202-9.

ABSTRACT

Cryogenic electron microscopy (CryoEM) produces structures of macromolecules at near-atomic resolution. However, building molecular models with good stereochemical geometry from those structures can be challenging and time-consuming, especially when many structures are obtained from datasets with conformational heterogeneity. Here we present a model refinement protocol that automatically generates series of molecular models from CryoEM datasets, which describe the dynamics of the macromolecular system and have near-perfect geometry scores. This method makes it easier to interpret the movement of the protein complex from heterogeneity analysis and to compare the structural dynamics observed from CryoEM data with results from other experimental and simulation techniques.

PMID:40415012 | DOI:10.1038/s42003-025-08202-9

Categories: Literature Watch

A novel feature fusion and mountain gazelle optimizer based framework for the recognition of jute pests in sustainable agriculture

Sun, 2025-05-25 06:00

Sci Rep. 2025 May 25;15(1):18148. doi: 10.1038/s41598-025-00642-x.

ABSTRACT

Sustainable agriculture is an approach that involves adopting and developing agricultural practices to increase efficiency and preserve resources, both environmentally and economically. Jute is a primary source of income in many countries, so increasing efficiency in jute production and protecting the crop from pests is essential. Detecting jute pests at an early stage will not only improve crop yield but also provide more income. In this paper, an artificial intelligence-based model is proposed to detect jute pests at an early stage. Two different pre-trained models, DarkNet-53 and DenseNet-201, are used for feature extraction, and their features are combined to improve performance. The metaheuristic Mountain Gazelle Optimizer (MGO) is then used for feature selection, allowing the model to work faster and achieve better results with fewer, more informative features. The proposed model was compared with six different models and five different classifiers accepted in the literature, and it detected 17 different jute pests with 96.779% accuracy. The achieved accuracy is promising for the successful detection of jute pests.

PMID:40414953 | DOI:10.1038/s41598-025-00642-x

Categories: Literature Watch

Pulse Pressure, White Matter Hyperintensities, and Cognition: Mediating Effects Across the Adult Lifespan

Sun, 2025-05-25 06:00

Ann Clin Transl Neurol. 2025 May 25. doi: 10.1002/acn3.70086. Online ahead of print.

ABSTRACT

OBJECTIVES: To investigate whether pulse pressure or mean arterial pressure mediates the relationship between age and white matter hyperintensity load and to examine the mediating effect of white matter hyperintensities on cognition.

METHODS: Demographic information, blood pressure, current medication lists, and Montreal Cognitive Assessment scores for 231 stroke- and dementia-free adults were retrospectively obtained from the Aging Brain Cohort study. Total WMH load was determined from T2-FLAIR magnetic resonance scans using the TrUE-Net deep learning tool for white matter segmentation. In separate models, we used mediation analysis to assess whether pulse pressure or MAP mediates the relationship between age and total white matter hyperintensity load, controlling for cardiovascular confounds. We also assessed whether white matter hyperintensity load mediated the relationship between age and cognitive scores.
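
The second mediation model (white matter hyperintensity load mediating the age-cognition relationship) can be sketched with statsmodels' Mediation class on synthetic data; the variable names, covariates, and effect sizes below are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(0)
n = 231
age = rng.uniform(20, 80, n)
wmh = 0.05 * age + rng.normal(scale=1.0, size=n)           # WMH load increases with age
moca = 30 - 0.4 * wmh - 0.02 * age + rng.normal(scale=1.0, size=n)
data = pd.DataFrame({"age": age, "wmh": wmh, "moca": moca})

outcome_model = sm.OLS.from_formula("moca ~ age + wmh", data)   # cognition model
mediator_model = sm.OLS.from_formula("wmh ~ age", data)         # mediator model
med = Mediation(outcome_model, mediator_model, exposure="age", mediator="wmh")
print(med.fit(n_rep=200).summary())                             # ACME, ADE, proportion mediated
```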

RESULTS: Pulse pressure, but not mean arterial pressure, significantly mediated the relationship between age and white matter hyperintensity load. White matter hyperintensity load partially mediated the relationship between age and Montreal Cognitive Assessment score.

INTERPRETATION: Our results indicate that pulse pressure, but not mean arterial pressure, is mechanistically associated with age-related accumulation of white matter hyperintensities, independent of other cardiovascular risk factors. White matter hyperintensity load was a mediator of cognitive scores across the adult lifespan. Effective management of pulse pressure may be especially important for maintenance of brain health and cognition.

PMID:40413732 | DOI:10.1002/acn3.70086

Categories: Literature Watch
