Deep learning

Physics-informed machine learning digital twin for reconstructing prostate cancer tumor growth via PSA tests

Tue, 2025-07-29 06:00

NPJ Digit Med. 2025 Jul 29;8(1):485. doi: 10.1038/s41746-025-01890-x.

ABSTRACT

Existing prostate cancer monitoring methods, reliant on prostate-specific antigen (PSA) measurements in blood tests, often fail to detect tumor growth. We develop a computational framework to reconstruct tumor growth from PSA measurements by integrating physics-based modeling and machine learning in digital twins. The physics-based model considers PSA secretion and its flux from tissue to blood, depending on local vascularity. This model is enhanced by deep learning, which regulates tumor growth dynamics through the patient's PSA blood tests and the 3D spatial interactions of physiological variables in the digital twin. We showcase our framework by reconstructing tumor growth in real patients over 2.5 years from diagnosis, with tumor volume relative errors ranging from 0.8% to 12.28%. Additionally, our results reveal scenarios of tumor growth despite no significant rise in PSA levels. Our framework therefore serves as a promising tool for prostate cancer monitoring, supporting the advancement of personalized monitoring protocols.
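
To make the physics-based coupling concrete, here is a minimal sketch of the kind of model the abstract alludes to: tumor volume drives PSA secretion into the bloodstream, where PSA is cleared at a first-order rate. The growth law, parameter names, and values below are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

# Minimal sketch (not the authors' model): exponential tumor growth feeding
# PSA secretion into blood, with first-order clearance. All parameters are
# illustrative assumptions.
rho   = 0.003   # tumor growth rate [1/day]
alpha = 0.5     # PSA secretion per unit tumor volume [ng/mL per cm^3 per day]
beta  = 0.35    # PSA clearance rate from blood [1/day]

def simulate(V0=1.0, P0=4.0, days=900, dt=1.0):
    """Forward-Euler integration of dV/dt = rho*V, dP/dt = alpha*V - beta*P."""
    t = np.arange(0, days + dt, dt)
    V = np.empty_like(t); P = np.empty_like(t)
    V[0], P[0] = V0, P0
    for k in range(1, len(t)):
        V[k] = V[k-1] + dt * rho * V[k-1]
        P[k] = P[k-1] + dt * (alpha * V[k-1] - beta * P[k-1])
    return t, V, P

t, V, P = simulate()
print(f"tumor volume after ~2.5 years: {V[-1]:.2f} cm^3, serum PSA: {P[-1]:.2f} ng/mL")
```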

PMID:40730645 | DOI:10.1038/s41746-025-01890-x

Categories: Literature Watch

Pretraining-improved Spatiotemporal graph network for the generalization performance enhancement of traffic forecasting

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27668. doi: 10.1038/s41598-025-11375-2.

ABSTRACT

Traffic forecasting is considered a cornerstone of smart city development. A key challenge is capturing the long-term spatiotemporal dependencies of traffic data while improving a model's generalization ability. To address these issues, various sophisticated modules are embedded into different models; however, this increases computational cost. Additionally, adding or replacing datasets in a trained model requires retraining, which decreases prediction accuracy and increases time cost. To address the challenges existing models face with long-term spatiotemporal dependencies and high computational costs, this study proposes an enhanced pre-training method called the Improved Spatiotemporal Diffusion Graph (ImPreSTDG). While existing traffic prediction models, particularly those based on Graph Convolutional Networks (GCNs) and deep learning, are effective at capturing short-term spatiotemporal dependencies, they often suffer accuracy degradation and increased computational demands when dealing with long-term dependencies. To overcome these limitations, we introduce a Denoising Diffusion Probabilistic Model (DDPM) into the pre-training process, which enhances the model's ability to learn from long-term spatiotemporal data while significantly reducing computational costs. During the pre-training phase, ImPreSTDG employs a data masking and recovery strategy, with the DDPM reconstructing the masked data segments, thereby enabling the model to capture long-term dependencies in the traffic data. Additionally, we incorporate a Mamba module, which leverages the Selective State Space Model (SSM) to effectively capture long-term multivariate spatiotemporal correlations. This module enables more efficient processing of long sequences, extracting essential patterns while minimizing computational resource consumption. By improving computational efficiency, the Mamba module addresses the challenge of modeling long-term dependencies without compromising accuracy in capturing extended spatiotemporal trends. In the fine-tuning phase, the decoder is replaced with a forecasting header and the pre-trained parameters are frozen. The forecasting header includes a meta-learning fusion module and a spatiotemporal convolutional layer, which together integrate long-term and short-term traffic data for accurate forecasting. The model is then trained and adapted to the specific forecasting task. Experiments conducted on three real-world traffic datasets demonstrate that the proposed pre-training method significantly improves the model's handling of long-term dependencies, missing data, and high computational costs, providing a more efficient solution for traffic prediction.
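
As an illustration of the mask-and-recover pretraining objective described above, the sketch below masks whole temporal segments of a spatiotemporal traffic tensor and penalizes reconstruction error only on the hidden steps. A plain feed-forward layer stands in for the DDPM reconstructor and Mamba encoder, and the tensor shapes, segment length, and masking ratio are assumptions.

```python
import torch
import torch.nn as nn

def mask_segments(x, seg_len=12, mask_ratio=0.25):
    """Zero out whole temporal segments of a (batch, nodes, time) tensor and
    return the masked tensor plus a boolean mask of the hidden positions.
    Assumes the time dimension is divisible by seg_len."""
    b, n, t = x.shape
    n_seg = t // seg_len
    keep = torch.rand(b, n, n_seg) > mask_ratio              # segments to keep
    mask = keep.repeat_interleave(seg_len, dim=-1)[..., :t]  # per-step mask
    return x * mask, ~mask

# Stand-in reconstructor (the paper uses a DDPM + Mamba encoder instead).
encoder = nn.Sequential(nn.Linear(288, 256), nn.ReLU(), nn.Linear(256, 288))

x = torch.randn(8, 207, 288)             # e.g. 207 sensors, 288 five-minute steps
x_masked, hidden = mask_segments(x)
recon = encoder(x_masked)                 # reconstruct the full sequence
loss = ((recon - x)[hidden] ** 2).mean()  # penalize errors only on masked steps
loss.backward()
```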

PMID:40730627 | DOI:10.1038/s41598-025-11375-2

Categories: Literature Watch

A novel contrastive Dual-Branch Network (CDB-Net) for robust EEG-Based Alzheimer's disease diagnosis

Tue, 2025-07-29 06:00

Brain Res. 2025 Jul 27:149863. doi: 10.1016/j.brainres.2025.149863. Online ahead of print.

ABSTRACT

Alzheimer's Disease (AD) is a neurodegenerative disorder that causes cognitive decline, memory loss, confusion, and changes in behavior. Early and accurate detection is important for timely intervention, but current diagnostic methods can be slow, expensive, and limited in sensitivity. Electroencephalography (EEG) offers a simple and non-invasive way to measure brain activity, and it has shown promise in supporting AD diagnosis. However, EEG signals are often affected by noise, such as muscle movement, blinking, or electrical interference, which can make it harder for models to give reliable results. To address these challenges, we introduce CDB-Net (Contrastive Dual-Branch Network), a deep learning model built to improve the accuracy and robustness of EEG-based AD classification. The model uses two parallel branches: one processes clean EEG data, while the other processes a noisy version of the same data. By training these branches together with contrastive learning, the model learns to focus on features that stay consistent even when the signal is distorted by noise. A classification head is trained jointly using cross-entropy loss for downstream diagnosis. We tested our method on a public EEG dataset and found that CDB-Net achieved 97.92% accuracy on clean data and 83.41% accuracy under adversarial attacks (FGSM), outperforming traditional machine learning classifiers and deep learning baseline models. These results highlight the effectiveness of contrastive dual-branch learning in enhancing model generalization and robustness, positioning CDB-Net as a promising tool for reliable EEG-based clinical decision support in the context of Alzheimer's Disease diagnosis.
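
The dual-branch idea above can be sketched as a shared encoder applied to a clean EEG window and a noise-perturbed copy, with a cosine-similarity contrastive term added to the cross-entropy loss. This is only a schematic stand-in for CDB-Net; the encoder layout, noise level, and loss weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranch(nn.Module):
    """Shared encoder applied to clean and noisy EEG; illustrative stand-in
    for the CDB-Net branches (layer sizes are assumptions)."""
    def __init__(self, n_channels=19, n_samples=512, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten(), nn.Linear(32 * 16, 128))
        self.head = nn.Linear(128, n_classes)

    def forward(self, x_clean, x_noisy):
        z_c, z_n = self.encoder(x_clean), self.encoder(x_noisy)
        return z_c, z_n, self.head(z_c)

model = DualBranch()
x = torch.randn(16, 19, 512)                 # batch of clean EEG windows
x_noisy = x + 0.1 * torch.randn_like(x)      # perturbed copy of the same windows
y = torch.randint(0, 2, (16,))
z_c, z_n, logits = model(x, x_noisy)

# Pull clean/noisy embeddings of the same window together (cosine similarity),
# and train the classifier head jointly with cross-entropy.
contrastive = 1.0 - F.cosine_similarity(z_c, z_n, dim=1).mean()
loss = F.cross_entropy(logits, y) + 0.5 * contrastive
loss.backward()
```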

PMID:40730254 | DOI:10.1016/j.brainres.2025.149863

Categories: Literature Watch

A scoping review of artificial intelligence applications in clinical trial risk assessment

Tue, 2025-07-29 06:00

NPJ Digit Med. 2025 Jul 30;8(1):486. doi: 10.1038/s41746-025-01886-7.

ABSTRACT

Artificial intelligence (AI) is increasingly applied to clinical trial risk assessment, aiming to improve safety and efficiency. This scoping review analyzed 142 studies published between 2013 and 2024, focusing on safety (n = 55), efficacy (n = 46), and operational (n = 45) risk prediction. AI techniques, including traditional machine learning, deep learning (e.g., graph neural networks, transformers), and causal machine learning, are used for tasks like adverse drug event prediction, treatment effect estimation, and phase transition prediction. These methods utilize diverse data sources, from molecular structures and clinical trial protocols to patient data and scientific publications. Recently, large language models (LLMs) have seen a surge in applications, featuring in 7 out of 33 studies in 2023. While some models achieve high performance (AUROC up to 96%), challenges remain, including selection bias, limited prospective studies, and data quality issues. Despite these limitations, AI-based risk assessment holds substantial promise for transforming clinical trials, particularly through improved risk-based monitoring frameworks.

PMID:40731070 | DOI:10.1038/s41746-025-01886-7

Categories: Literature Watch

Studying the performance of YOLOv11 incorporating DHSA BRA and PPA modules in railway track fasteners defect detection

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27698. doi: 10.1038/s41598-025-13435-z.

ABSTRACT

With the development of railway transportation and the advancement of deep learning, object detection algorithms are increasingly replacing manual inspection of track fasteners. However, current algorithms struggle with low accuracy in complex weather conditions or low-contrast backgrounds. To address this, we propose a track fastener defect detection algorithm based on YOLOv11 (You Only Look Once). First, we incorporate the DHSA (Dynamic-range Histogram Self-Attention) module into the backbone network of YOLOv11 to enhance noise robustness. Second, we introduce the BRA (Bi-Level Routing Attention) sparse attention mechanism into the neck network for improved efficiency. Finally, we add the PPA (Parallelized Patch-Aware Attention) module to the original neck network to enhance multi-scale feature extraction, specifically for small object detection. To validate the model, we created a dataset and conducted experiments. The experimental results show that YOLO-DRPA achieves a mAP@0.5 of 94.6% and a mAP@0.5:0.95 of 80.7%, marking improvements of 1.8% and 4.0% over YOLOv11n, respectively. The model also demonstrates competitive performance compared to other popular object detection algorithms, highlighting its potential to improve both detection accuracy and efficiency.

PMID:40731066 | DOI:10.1038/s41598-025-13435-z

Categories: Literature Watch

Sign language recognition based on dual-channel star-attention convolutional neural network

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27685. doi: 10.1038/s41598-025-13625-9.

NO ABSTRACT

PMID:40731054 | DOI:10.1038/s41598-025-13625-9

Categories: Literature Watch

Nondestructive freshness recognition of chicken breast meat based on deep learning

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27538. doi: 10.1038/s41598-025-13576-1.

ABSTRACT

Identifying chicken breast freshness is an important component of poultry food safety. Traditional methods for chicken breast freshness recognition suffer from issues such as high cost, difficulty of recognition, and low efficiency. In this study, the YOLOv8n_CA_DSC3 algorithm is employed for non-destructive recognition of chicken breast freshness. Specifically, chicken breast samples were collected under different lighting intensities, densities, sampling angles, and other conditions. Based on the total microbial count (TAC), coliform count (ANC), and pH value of the samples, chicken breast freshness is classified into 7 levels: fresh meat, slightly fresh meat 1, slightly fresh meat 2, slightly fresh meat 3, spoiled meat 1, spoiled meat 2, and spoiled meat 3. The dataset was augmented with eight types of data augmentation, resulting in 34,380 samples. The Conv convolutional layers were replaced with deformable convolution DCNv3 modules to improve network efficiency and the extraction of key chicken breast features through long-range dependencies, adaptive spatial aggregation, and sparse sampling, thereby enhancing the algorithm's generalization performance. The introduction of the CA attention mechanism module enhances feature fusion across multiple channels and long-distance dependencies between high-level and low-level features. Experimental results show that, among the improved algorithms, YOLOv8n_CA_DSC3 achieves a suboptimal recall rate but the best precision, average precision at IoU = 0.5, and average precision at IoU = 0.5:0.95. The accuracy of chicken breast freshness recognition is 95.6%, average precision at IoU = 0.5 is 97.5%, and average precision at IoU = 0.5:0.95 is 77.5%, representing improvements of 5.3%, 5.1%, and 6.1%, respectively, over the original YOLOv8n. In conclusion, the YOLOv8n_CA_DSC3 algorithm demonstrates good performance in feature extraction and in integrating upper- and lower-layer information for chicken breast freshness, exhibiting high robustness and providing technical support for non-destructive recognition of chicken breast freshness and food safety.
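
As a rough illustration of the CA attention mechanism mentioned above, the block below pools a feature map along height and width separately and re-weights it with the two directional gates, in the spirit of coordinate attention; channel sizes and the reduction ratio are assumptions, and this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal coordinate-attention block: pool along height and width
    separately, encode jointly, then re-weight the feature map with the two
    directional gates (the reduction ratio is an assumption)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                          # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)      # (b, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(2, 64, 40, 40)             # a backbone feature map
print(CoordinateAttention(64)(feat).shape)    # torch.Size([2, 64, 40, 40])
```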

PMID:40730876 | DOI:10.1038/s41598-025-13576-1

Categories: Literature Watch

A dual input dual spliced network with data augmentation for robust modulation recognition in communication countermeasures

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27543. doi: 10.1038/s41598-025-11166-9.

ABSTRACT

In the realm of communication reconnaissance, the accuracy of modulation recognition is of paramount importance. The modulation recognition task mainly encompasses two key aspects: the dataset and the network model. To enhance the recognition rate across diverse signal-to-noise ratio (SNR) conditions, this paper starts from current mainstream modulation recognition methods and seeks a methodology capable of achieving a higher recognition rate in both low-SNR and high-SNR scenarios. Feature extraction in the transform domain of the dataset can, to a certain extent, improve modulation recognition accuracy, but it significantly prolongs the algorithm's running time. Consequently, from the dataset perspective, this study incorporates data augmentation techniques and amplitude-phase features to enhance modulation recognition performance. Regarding the network model design, the baseline modulation recognition model is first replicated multiple times. Through numerous comparisons across different datasets, the characteristic features of the various models are identified. Concentrating on the high-recognition-rate models, the design is ultimately optimized based on the characteristics of two specific models, taking into account different SNRs, datasets, and numbers of network layers. This results in a novel dual-spliced deep-learning modulation recognition model. Deployed separately in the -20 to 0 dB and 0 to 20 dB intervals, the model was tested on six datasets and achieved remarkable results. Simulation results show that the proposed method outperforms 11 other modulation recognition methods in terms of recognition rate on six major datasets. Moreover, it can, to a certain extent, resolve the recognition confusion between AM-DSB and AM-SSB, as well as between 16QAM and 64QAM.
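
The amplitude-phase features and augmentation mentioned above can be sketched as follows: I/Q samples are converted to amplitude and instantaneous phase, and the training set is enlarged with constellation rotations. The rotation angles and array shapes are assumptions; the paper's exact augmentation set is not reproduced here.

```python
import numpy as np

def amplitude_phase(iq):
    """Convert an (N, 2, L) array of I/Q samples into amplitude/phase features."""
    z = iq[:, 0, :] + 1j * iq[:, 1, :]
    amp = np.abs(z)
    phase = np.angle(z)
    return np.stack([amp, phase], axis=1)          # (N, 2, L)

def rotate_iq(iq, angle):
    """Rotate the I/Q constellation by a fixed angle (a common augmentation)."""
    c, s = np.cos(angle), np.sin(angle)
    i, q = iq[:, 0, :], iq[:, 1, :]
    return np.stack([c * i - s * q, s * i + c * q], axis=1)

iq = np.random.randn(32, 2, 128)                    # toy batch of signals
augmented = np.concatenate([iq] + [rotate_iq(iq, a) for a in (np.pi/2, np.pi, 3*np.pi/2)])
features = amplitude_phase(augmented)
print(features.shape)                               # (128, 2, 128)
```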

PMID:40730853 | DOI:10.1038/s41598-025-11166-9

Categories: Literature Watch

A multimodal deep learning architecture for predicting interstitial glucose for effective type 2 diabetes management

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27625. doi: 10.1038/s41598-025-07272-3.

ABSTRACT

The accurate prediction of blood glucose is critical for the effective management of diabetes. Modern continuous glucose monitoring (CGM) technology enables real-time acquisition of interstitial glucose concentrations, which can be calibrated against blood glucose measurements. However, a key challenge in the effective management of type 2 diabetes lies in forecasting critical events driven by glucose variability. While recent advances in deep learning enable modeling of temporal patterns in glucose fluctuations, most existing methods rely on unimodal inputs and fail to account for individual physiological differences that influence interstitial glucose dynamics. These limitations highlight the need for multimodal approaches that integrate additional personalized physiological information. One of the primary reasons multimodal approaches have not been widely studied in this field is the bottleneck associated with the availability of subjects' health records. In this paper, we propose a multimodal approach trained on sequences of CGM values and enriched with physiological context derived from the health records of 40 individuals with type 2 diabetes. The CGM time series were processed using a stacked Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network followed by an attention mechanism. The BiLSTM learned long-term temporal dependencies, while the CNN captured local sequential features. Physiological heterogeneity was incorporated through a separate pipeline of neural networks that processed baseline health records and was later fused with the CGM modeling stream. To validate our model, we used 30 min of CGM values sampled with a 5-min moving window to predict CGM values at prediction horizons of (a) 15 min, (b) 30 min, and (c) 60 min. The multimodal architecture achieved Mean Absolute Point Errors (MAPE) of 14-24 mg/dL, 19-22 mg/dL, and 25-26 mg/dL for the Menarini sensor, and 6-11 mg/dL, 9-14 mg/dL, and 12-18 mg/dL for the Abbott sensor, at the 15-, 30-, and 60-min prediction horizons, respectively. The results suggest that the proposed multimodal model achieves higher prediction accuracy than unimodal approaches, with up to 96.7% prediction accuracy, supporting its potential as a generalizable solution for interstitial glucose prediction and personalized management in the type 2 diabetes population.
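
Below is a minimal sketch of the kind of architecture described above, with a Conv1d front end, a BiLSTM, temporal attention, and a small MLP over baseline health records fused before the output layer; all layer sizes and the step-wise output resolution are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class GlucoseForecaster(nn.Module):
    """Sketch of the CGM stream (Conv1d -> BiLSTM -> attention) fused with a
    small MLP over baseline health records; all sizes are assumptions."""
    def __init__(self, window=6, n_static=10, horizon_steps=3):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(16, 32, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(64, 1)
        self.static = nn.Sequential(nn.Linear(n_static, 16), nn.ReLU())
        self.out = nn.Linear(64 + 16, horizon_steps)

    def forward(self, cgm, records):
        h = self.cnn(cgm.unsqueeze(1)).transpose(1, 2)   # (B, T, 16)
        h, _ = self.lstm(h)                              # (B, T, 64)
        w = torch.softmax(self.attn(h), dim=1)           # attention over time
        ctx = (w * h).sum(dim=1)                         # (B, 64)
        return self.out(torch.cat([ctx, self.static(records)], dim=1))

model = GlucoseForecaster()
cgm = torch.randn(4, 6)           # 30 min of CGM at 5-min sampling
records = torch.randn(4, 10)      # baseline physiological features
print(model(cgm, records).shape)  # torch.Size([4, 3]) -> 15-min horizon at 5-min steps
```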

PMID:40730832 | DOI:10.1038/s41598-025-07272-3

Categories: Literature Watch

Supervised learning of the Jaynes-Cummings Hamiltonian

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27556. doi: 10.1038/s41598-025-02611-w.

ABSTRACT

We investigate the utility of deep neural networks (DNNs) in estimating the Jaynes-Cummings Hamiltonian's parameters from its energy spectrum alone. We assume that the energy spectrum may or may not be corrupted by noise. In the noiseless case, we use the vanilla DNN (vDNN) model and find that the error tends to decrease as the number of input nodes increases. The best-achieved root mean squared error is of the order of [Formula: see text]. The vDNN model, trained on noiseless data, demonstrates resilience to Gaussian noise, but only up to a certain extent. To cope with this issue, we employ a denoising U-Net and combine it with the vDNN to find that the new model reduces the error by up to about 77%. Our study exemplifies that deep learning models can help estimate the parameters of a Hamiltonian even when the data is corrupted by noise.
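
Because the Jaynes-Cummings dressed-state energies are analytic, training data for such a regressor can be synthesized directly; the sketch below (with hbar = 1) generates spectra and fits a plain MLP to recover (omega_c, omega_a, g). The parameter ranges, network size, and training loop are assumptions, and this is not the paper's vDNN/U-Net pipeline.

```python
import numpy as np
import torch
import torch.nn as nn

def jc_spectrum(wc, wa, g, n_max=10):
    """Dressed-state energies of the Jaynes-Cummings model (hbar = 1):
    E_g = -wa/2,  E_{n,+-} = wc*(n + 1/2) +- 0.5*sqrt(delta^2 + 4*g^2*(n+1))."""
    n = np.arange(n_max)
    delta = wa - wc
    rabi = np.sqrt(delta**2 + 4 * g**2 * (n + 1))
    levels = np.concatenate([[-wa / 2], wc * (n + 0.5) + rabi / 2, wc * (n + 0.5) - rabi / 2])
    return np.sort(levels)

# Synthesize (spectrum -> parameters) pairs and fit a small regressor.
rng = np.random.default_rng(0)
params = rng.uniform([0.8, 0.8, 0.01], [1.2, 1.2, 0.2], size=(2048, 3))  # wc, wa, g
X = np.stack([jc_spectrum(*p) for p in params]).astype(np.float32)
y = params.astype(np.float32)

net = nn.Sequential(nn.Linear(X.shape[1], 128), nn.ReLU(), nn.Linear(128, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(torch.from_numpy(X)), torch.from_numpy(y))
    loss.backward(); opt.step()
print(f"final training RMSE: {loss.sqrt().item():.4f}")
```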

PMID:40730829 | DOI:10.1038/s41598-025-02611-w

Categories: Literature Watch

Diagnosis of unilateral vocal fold paralysis using auto-diagnostic deep learning model

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27635. doi: 10.1038/s41598-025-09797-z.

ABSTRACT

Unilateral vocal fold paralysis (UVFP) is a condition characterized by impaired vocal fold mobility, typically diagnosed using laryngeal videoendoscopy. While deep learning (DL) models using static images have been explored for UVFP detection, they often lack the ability to assess vocal fold dynamics. We developed an auto-diagnostic DL system for UVFP using both image-based and video-based models. Using laryngeal videoendoscopic data from 500 participants, the model was trained and validated on 2639 video clips. The image-based DL model achieved over 98% accuracy for UVFP detection, but demonstrated limited performance in predicting laterality and paralysis type. In contrast, the video-based model achieved comparable accuracy (about 99%) in detecting UVFP, and substantially higher accuracy in predicting laterality and paralysis type, outperforming the image-based model in overall diagnostic utility. These results demonstrate the advantages of incorporating temporal motion cues in video-based analysis and support the use of DL for comprehensive, multi-task assessment of UVFP. This automated approach demonstrates high diagnostic performance and may serve as a complementary tool to assist clinicians in the assessment of UVFP, particularly in enhancing workflow efficiency and supporting multi-dimensional interpretation of laryngeal motion.

PMID:40730807 | DOI:10.1038/s41598-025-09797-z

Categories: Literature Watch

A hybrid filtering and deep learning approach for early Alzheimer's disease identification

Tue, 2025-07-29 06:00

Sci Rep. 2025 Jul 29;15(1):27694. doi: 10.1038/s41598-025-03472-z.

ABSTRACT

Alzheimer's disease is a progressive neurological disorder that profoundly affects cognitive functions and daily activities. Rapid and precise identification is essential for effective intervention and improved patient outcomes. This research introduces an innovative hybrid filtering approach with a deep transfer learning model for detecting Alzheimer's disease utilizing brain imaging data. The hybrid filtering method integrates the Adaptive Non-Local Means filter with a Sharpening filter for image preprocessing. Furthermore, the deep learning model used in this study is constructed on the EfficientNetV2B3 architecture, augmented with additional layers and fine-tuning to guarantee effective classification among four categories: Mild, moderate, very mild, and non-demented. The work employs Grad-CAM++ to enhance interpretability by localizing disease-relevant characteristics in brain images. The experimental assessment, performed on a publicly accessible dataset, illustrates the ability of the model to achieve an accuracy of 99.45%. These findings underscore the capability of sophisticated deep learning methodologies to aid clinicians in accurately identifying Alzheimer's disease.
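
The hybrid filtering step can be approximated with standard OpenCV operations, as in the sketch below: non-local means denoising (standing in for the adaptive variant) followed by a sharpening kernel and a resize to an assumed EfficientNetV2B3 input size. Filter strengths, the kernel, and the file name are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(img_path):
    """Sketch of the hybrid filtering step: denoise with non-local means
    (standing in for the adaptive variant) and then sharpen. Filter strengths
    and the kernel are illustrative assumptions."""
    img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.fastNlMeansDenoising(img, None, h=10, templateWindowSize=7,
                                        searchWindowSize=21)
    sharpen_kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(denoised, -1, sharpen_kernel)
    return cv2.resize(sharpened, (300, 300))   # assumed input size for EfficientNetV2B3

# x = preprocess("slice_001.png")  # hypothetical file name
```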

PMID:40730777 | DOI:10.1038/s41598-025-03472-z

Categories: Literature Watch

MalNet-DAF: Dual-Attentive Fusion Deep Learning Model for Malaria Parasite Classification

Tue, 2025-07-29 06:00

IEEE J Biomed Health Inform. 2025 Jul 29;PP. doi: 10.1109/JBHI.2025.3593638. Online ahead of print.

ABSTRACT

Malaria remains a life-threatening disease caused by Plasmodium parasites, necessitating accurate and rapid diagnosis for effective treatment. Conventional diagnostic methods are often prone to human error and limited by data insufficiency, impacting their reliability. This paper introduces MalNet-DAF (Malaria Network with Dual-Attentive Fusion), a novel deep-learning model designed to improve the diagnosis and classification of malaria-infected cells. The proposed architecture integrates stacked Convolutional Neural Networks (CNNs) for spatial feature extraction with Bidirectional Long Short-Term Memory (Bi-LSTM) networks for modeling temporal dependencies. To enhance interpretability and performance, MalNet-DAF incorporates dynamic attention mechanisms. The Spatial Attention Module (SAM) highlights critical disease-affected regions by refining CNN-extracted features, while the Temporal Attention Module (TAM), applied after Bi-LSTM processing, emphasizes informative time steps by suppressing irrelevant signals. The fused spatial and temporal feature vectors are processed through Dense Layers with ReLU activation and Dropout regularization. The model is trained and validated using a publicly available malaria dataset from the National Institutes of Health (NIH). Experimental results show that MalNet-DAF achieves a classification accuracy of 99.24%, outperforming traditional baseline models. These findings demonstrate the potential of dynamic attention-driven deep learning (DL) in supporting real-time clinical decision-making and addressing challenges in healthcare diagnostics.

PMID:40729719 | DOI:10.1109/JBHI.2025.3593638

Categories: Literature Watch

Diagnosis of Major Depressive Disorder Based on Multi-Granularity Brain Networks Fusion

Tue, 2025-07-29 06:00

IEEE J Biomed Health Inform. 2025 Jul 29;PP. doi: 10.1109/JBHI.2025.3593617. Online ahead of print.

ABSTRACT

Major Depressive Disorder (MDD) is a common mental disorder, and early, accurate diagnosis is crucial for effective treatment. Functional connectivity networks (FCNs) constructed from functional magnetic resonance imaging (fMRI) have demonstrated the potential to reveal the mechanisms underlying brain abnormalities. Deep learning has been widely employed to extract features from FCNs, but existing methods typically operate directly on the network and fail to fully exploit its deeper information. Although graph coarsening techniques offer certain advantages in extracting the brain's complex structure, they may also result in the loss of critical information. To address this issue, we propose the Multi-Granularity Brain Networks Fusion (MGBNF) framework. MGBNF models brain networks through multi-granularity analysis and constructs combinatorial modules to enhance feature extraction. Finally, a Constrained Attention Pooling (CAP) mechanism is employed to achieve effective integration of multi-channel features. In the feature extraction stage, a parameter-sharing mechanism is introduced and applied across multiple channels to capture similar connectivity patterns between channels while reducing the number of parameters. We validate the effectiveness of the MGBNF model on multiple classification tasks and various brain atlases. The results demonstrate that MGBNF outperforms baseline models in classification performance, and ablation experiments further validate its effectiveness. In addition, we conducted a thorough analysis of the variability of different MDD subtypes across multiple classification tasks, and the results support further clinical applications.
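
As context for the framework above, a functional connectivity network is typically built from ROI time series via Pearson correlation; the sketch below shows that construction step with an assumed sparsification threshold. The multi-granularity coarsening, combinatorial modules, and CAP pooling of MGBNF are not reproduced here.

```python
import numpy as np

def functional_connectivity(timeseries, threshold=0.3):
    """Build a functional connectivity network from ROI time series
    (n_rois x n_timepoints) via Pearson correlation; the sparsification
    threshold is an illustrative assumption."""
    fc = np.corrcoef(timeseries)          # (n_rois, n_rois) correlation matrix
    np.fill_diagonal(fc, 0.0)             # remove self-connections
    adjacency = (np.abs(fc) >= threshold).astype(float) * fc
    return fc, adjacency

bold = np.random.randn(116, 200)          # e.g. AAL atlas: 116 ROIs, 200 volumes
fc, adj = functional_connectivity(bold)
print(fc.shape, (adj != 0).sum() // 2, "edges retained")
```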

PMID:40729718 | DOI:10.1109/JBHI.2025.3593617

Categories: Literature Watch

Enhancing EEG-Based Schizophrenia Diagnosis with Explainable Multi-Branch Deep Learning

Tue, 2025-07-29 06:00

IEEE J Biomed Health Inform. 2025 Jul 29;PP. doi: 10.1109/JBHI.2025.3593647. Online ahead of print.

ABSTRACT

Schizophrenia poses diagnostic challenges due to a lack of objective assessment. We propose MBSzEEGNet, a multi-branch deep-learning (DL) model for robust and interpretable EEG-based schizophrenia classification. Its specialized branches capture oscillatory and spatial-spectral features, enhancing generalization across two resting-state schizophrenia EEG datasets. MBSzEEGNet consistently outperforms leading DL architectures, achieving up to 85.71% subject-wise accuracy on one dataset and 75.64% on the other. Saliency-based explanations highlight potential biomarkers in the delta (0.5-4 Hz) and alpha (8-12 Hz) bands and the temporal and right parietal region. Our findings suggest that integrating explainable multi-branch DL architecture with EEG can enhance schizophrenia diagnosis and provide deeper insights into schizophrenia-related neural markers.
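
Since the saliency analysis points to the delta (0.5-4 Hz) and alpha (8-12 Hz) bands, a simple way to inspect those ranges is per-channel band power from a Welch periodogram, sketched below; the sampling rate and segment length are assumptions, and this is not part of MBSzEEGNet itself.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs=250, bands=(("delta", 0.5, 4.0), ("alpha", 8.0, 12.0))):
    """Per-channel band power via Welch's PSD for the bands highlighted by the
    saliency analysis (sampling rate and segment length are assumptions)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)     # psd: (n_channels, n_freqs)
    out = {}
    for name, lo, hi in bands:
        idx = (freqs >= lo) & (freqs <= hi)
        out[name] = np.trapz(psd[:, idx], freqs[idx], axis=1)
    return out

eeg = np.random.randn(19, 250 * 60)              # 19 channels, one minute at 250 Hz
powers = band_power(eeg)
print({k: v.shape for k, v in powers.items()})   # each: (19,)
```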

PMID:40729717 | DOI:10.1109/JBHI.2025.3593647

Categories: Literature Watch

FedEMG: Achieving Generalization, Personalization, and Resource Efficiency in EMG-based Upper-Limb Rehabilitation through Federated Prototype Learning

Tue, 2025-07-29 06:00

IEEE Trans Biomed Eng. 2025 Jul 29;PP. doi: 10.1109/TBME.2025.3593485. Online ahead of print.

ABSTRACT

Upper extremity amputation, often necessitated by traumatic injuries, significantly impacts an individual's well-being. This paper addresses the critical challenges of deploying deep learning for real-time electromyography-based gesture recognition in prosthetic control: generalization across users and time, the personalization-generalization trade-off, and computational constraints. We propose Federated Electromyography (FedEMG), a novel Federated Prototype Learning (FPL) framework that leverages a prototype-based approach for efficient knowledge transfer and a unique adaptive personalization mechanism. Unlike existing Federated Learning (FL) methods, FedEMG balances global knowledge with user-specific adaptations, achieving high accuracy and personalization without sacrificing generalization. Furthermore, FedEMG utilizes a lightweight gesture detector in combination with an efficient neural network architecture optimized for resource-constrained devices, enabling real-time performance. Extensive evaluations on public and neural-prosthetic interface datasets demonstrate FedEMG's superior accuracy in intra- and inter-subject gesture recognition under various non-IID cases, while also highlighting its efficient resource utilization. FedEMG thus advances the field of upper-limb rehabilitation through improved and accessible prosthetic control.
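
The prototype-based knowledge transfer at the core of federated prototype learning can be sketched as clients computing per-gesture mean embeddings and the server averaging them; FedEMG's adaptive personalization, weighting, and gesture detector are not reproduced, and all shapes below are assumptions.

```python
import numpy as np

def client_prototypes(embeddings, labels, n_classes):
    """Per-class mean embedding ("prototype") computed locally on one client."""
    protos = np.zeros((n_classes, embeddings.shape[1]))
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(axis=0)
    return protos

def server_aggregate(all_protos):
    """Average the class prototypes uploaded by the clients (uniform weights
    here; FedEMG's weighting and personalization are not reproduced)."""
    return np.mean(np.stack(all_protos), axis=0)

rng = np.random.default_rng(0)
clients = []
for _ in range(5):                                   # five users
    emb = rng.normal(size=(120, 64))                 # local EMG window embeddings
    lab = rng.integers(0, 6, size=120)               # six gesture classes
    clients.append(client_prototypes(emb, lab, n_classes=6))
global_protos = server_aggregate(clients)
print(global_protos.shape)                           # (6, 64)
```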

PMID:40729714 | DOI:10.1109/TBME.2025.3593485

Categories: Literature Watch

SimAD: A Simple Dissimilarity-Based Approach for Time-Series Anomaly Detection

Tue, 2025-07-29 06:00

IEEE Trans Neural Netw Learn Syst. 2025 Jul 29;PP. doi: 10.1109/TNNLS.2025.3590220. Online ahead of print.

ABSTRACT

Despite the prevalence of reconstruction-based deep learning methods, time-series anomaly detection (TSAD) remains a tremendous challenge. Existing approaches often struggle with limited temporal contexts, insufficient representation of normal patterns, and flawed evaluation metrics, all of which hinder their effectiveness in detecting anomalous behavior. To address these issues, we introduce a simple dissimilarity-based approach for time-series anomaly detection (SimAD). Specifically, SimAD first incorporates a patching-based feature extractor capable of processing extended temporal windows and employs the EmbedPatch encoder to fully integrate normal behavioral patterns. Second, we design an innovative ContrastFusion module in SimAD, which strengthens the robustness of anomaly detection by highlighting the distributional differences between normal and abnormal data. Third, we introduce two robust enhanced evaluation metrics, unbiased affiliation (UAff) and normalized affiliation (NAff), designed to overcome the limitations of existing metrics by providing better distinctiveness and semantic clarity. The reliability of these two metrics has been demonstrated by both theoretical and experimental analyses. Experiments conducted on seven diverse time-series datasets clearly demonstrate SimAD's superior performance compared with state-of-the-art (SOTA) methods, achieving relative improvements of 19.85% on F1, 4.44% on Aff-F1, 77.79% on NAff-F1, and 9.69% on AUC on six multivariate datasets. Code and pretrained models are available at https://github.com/EmorZz1G/SimAD.

PMID:40729708 | DOI:10.1109/TNNLS.2025.3590220

Categories: Literature Watch

3D Deep-learning-based Segmentation of Human Skin Sweat Glands and Their 3D Morphological Response to Temperature Variations

Tue, 2025-07-29 06:00

IEEE Trans Med Imaging. 2025 Jul 29;PP. doi: 10.1109/TMI.2025.3593284. Online ahead of print.

ABSTRACT

Skin, the primary regulator of heat exchange, relies on sweat glands for thermoregulation. Alterations in sweat gland morphology play a crucial role in various pathological conditions and clinical diagnoses. Current methods for observing sweat gland morphology are limited by their two-dimensional, in vitro, and destructive nature, underscoring the urgent need for real-time, non-invasive, quantifiable technologies. We proposed a novel three-dimensional (3D) transformer-based segmentation framework, enabling quite precise 3D sweat gland segmentation from skin volume data captured by optical coherence tomography (OCT). We quantitatively reveal, for the first time, 3D sweat gland morphological changes with temperature: for instance, volume, surface area, and length increase by 42.0%, 26.4%, and 12.8% at 43°C vs. 10°C (all p < 0.001), while S/V ratio decreases (p = 0.01). By establishing a benchmark for normal sweat gland morphology and offering a real-time, non-invasive tool for quantifying 3D structural parameters, our approach facilitates the study of individual variability and pathological changes in sweat gland morphology, contributing to advancements in dermatological research and clinical applications.
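
The 3D morphological parameters reported above (volume, surface area, surface-to-volume ratio) can be computed from a binary segmentation mask with scikit-image, as sketched below on a toy ball-shaped mask; the voxel spacing is an illustrative assumption.

```python
import numpy as np
from skimage import measure

def gland_morphology(mask, spacing=(0.01, 0.01, 0.01)):
    """Volume, surface area, and surface-to-volume ratio of a binary 3D
    segmentation mask (voxel spacing in mm is an illustrative assumption)."""
    voxel_vol = np.prod(spacing)
    volume = mask.sum() * voxel_vol                             # mm^3
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                spacing=spacing)
    surface = measure.mesh_surface_area(verts, faces)           # mm^2
    return {"volume_mm3": volume, "surface_mm2": surface,
            "s_over_v": surface / volume}

# Toy mask: a solid ball standing in for one segmented gland.
z, y, x = np.ogrid[-20:21, -20:21, -20:21]
ball = (x**2 + y**2 + z**2 <= 15**2)
print(gland_morphology(ball))
```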

PMID:40729704 | DOI:10.1109/TMI.2025.3593284

Categories: Literature Watch

Indigenous wood species classification using a multi-stage deep learning with grad-CAM explainability and an ensemble technique for Northern Bangladesh

Tue, 2025-07-29 06:00

PLoS One. 2025 Jul 29;20(7):e0328102. doi: 10.1371/journal.pone.0328102. eCollection 2025.

ABSTRACT

Wood species recognition has recently emerged as a vital field in forestry and ecological conservation. Early studies in this domain have offered various methods for classifying distinct wood species found worldwide using data collected from particular regions. We developed an image dataset for wood species classification of Bangladeshi forests. Our aim is to address existing gaps by comparing and contrasting our sequential Convolutional Neural Network-based BdWood model with several deep learning, ensemble, and machine learning classification models for identifying wood species specific to Bangladeshi forests. Our dataset comprises more than 7,119 high-quality captured images representing seven types of wood species of Bangladesh. In our thorough evaluation of seven pre-trained models, DenseNet121 is the clear winner, achieving the highest accuracy of 97.09%. In addition, our customized BdWood model, tailored to the desired outcome, produced excellent results: a training accuracy of 99.80%, a validation accuracy of 97.93%, an F1-score of 97.94%, and an outstanding ROC-AUC of 99.85%, demonstrating its effectiveness in wood species classification. Gradient-weighted Class Activation Mapping (Grad-CAM) is used to interpret the model's predictions, providing insights into the features contributing to the classification decisions. Finally, to make our research practically applicable, we have also developed an Android application as a tangible outcome of this work.
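
A transfer-learning baseline of the kind compared in the study can be sketched with torchvision: an ImageNet-pretrained DenseNet121 whose classifier is replaced for the seven wood species and fine-tuned. The framework choice, frozen backbone, and hyperparameters below are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of a transfer-learning baseline: an ImageNet-pretrained DenseNet121
# with its classifier replaced for the seven Bangladeshi wood species.
# (The paper's training framework and hyperparameters are not specified here.)
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze the convolutional base
model.classifier = nn.Linear(model.classifier.in_features, 7)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)             # a toy batch of wood images
labels = torch.randint(0, 7, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```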

PMID:40729379 | DOI:10.1371/journal.pone.0328102

Categories: Literature Watch

Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support

Tue, 2025-07-29 06:00

PLOS Digit Health. 2025 Jul 29;4(7):e0000801. doi: 10.1371/journal.pdig.0000801. eCollection 2025 Jul.

ABSTRACT

Deep learning models offer transformative potential for personalized medicine by providing automated, data-driven support for complex clinical decision-making. However, their reliability degrades on out-of-distribution inputs, and traditional point-estimate predictors can give overconfident outputs even in regions where the model has little evidence. This shortcoming highlights the need for decision-support systems that quantify and communicate per-query epistemic (knowledge) uncertainty. Approximate Bayesian deep learning methods address this need by introducing principled uncertainty estimates over the model's function. In this work, we compare three such methods on the task of predicting prostate cancer-specific mortality for treatment planning, using data from the PLCO cancer screening trial. All approaches achieve strong discriminative performance (AUROC = 0.86) and produce well-calibrated probabilities in-distribution, yet they differ markedly in the fidelity of their epistemic uncertainty estimates. We show that implicit functional-prior methods, specifically neural network ensembles and factorized-weight-prior variational Bayesian neural networks, exhibit reduced fidelity when approximating the posterior distribution and yield systematically biased estimates of epistemic uncertainty. By contrast, models employing explicitly defined, distance-aware priors, such as spectral-normalized neural Gaussian processes (SNGP), provide more accurate posterior approximations and more reliable uncertainty quantification. These properties make explicitly distance-aware architectures particularly promising for building trustworthy clinical decision-support tools.
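
One common way to quantify the per-query epistemic uncertainty discussed above is the mutual-information decomposition for an ensemble: total predictive entropy minus the mean per-member entropy. The sketch below uses toy probabilities to show how member disagreement, not low confidence alone, drives the epistemic term; it is a generic illustration rather than the paper's exact estimator.

```python
import numpy as np

def epistemic_uncertainty(member_probs):
    """Mutual-information decomposition for an ensemble: total predictive
    entropy minus the mean per-member entropy. member_probs has shape
    (n_members, n_classes) for a single query."""
    eps = 1e-12
    mean_p = member_probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()                 # predictive entropy
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(axis=1).mean()
    return total - aleatoric                                       # epistemic part

in_dist  = np.array([[0.92, 0.08], [0.90, 0.10], [0.94, 0.06]])    # members agree
out_dist = np.array([[0.95, 0.05], [0.20, 0.80], [0.55, 0.45]])    # members disagree
print(epistemic_uncertainty(in_dist))    # ~0 nats: confident and consistent
print(epistemic_uncertainty(out_dist))   # larger: model disagreement flags an OOD input
```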

PMID:40729366 | DOI:10.1371/journal.pdig.0000801

Categories: Literature Watch
