Deep learning

scCobra allows contrastive cell embedding learning with domain adaptation for single cell data integration and harmonization

Thu, 2025-02-13 06:00

Commun Biol. 2025 Feb 13;8(1):233. doi: 10.1038/s42003-025-07692-x.

ABSTRACT

The rapid advancement of single-cell technologies has created an urgent need for effective methods to integrate and harmonize single-cell data. Technical and biological variations across studies complicate data integration, while conventional tools are often hampered by their reliance on gene expression distribution assumptions and by over-correction. Here, we present scCobra, a deep generative neural network designed to overcome these challenges through contrastive learning with domain adaptation. scCobra effectively mitigates batch effects, minimizes over-correction, and ensures biologically meaningful data integration without assuming specific gene expression distributions. It enables online label transfer across datasets with batch effects, allowing continuous integration of new data without retraining. Additionally, scCobra supports batch effect simulation, advanced multi-omic integration, and scalable processing of large datasets. By integrating and harmonizing datasets from similar studies, scCobra expands the data available for investigating specific biological problems, improves cross-study comparability, and reveals insights that may be obscured in isolated datasets.
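As a rough illustration of the contrastive-learning idea underlying methods of this kind (this is not scCobra's actual objective; the temperature value and toy embeddings below are assumptions for illustration), an InfoNCE-style loss pulls a cell embedding toward a matched positive and away from negatives:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss: low when the anchor is closer to the
    positive than to every negative, high otherwise."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

Minimizing such a loss over pairs of cells that should agree (and negatives that should not) is what drives batch-invariant embeddings in contrastive integration methods.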

PMID:39948393 | DOI:10.1038/s42003-025-07692-x

Categories: Literature Watch

Unraveling microglial spatial organization in the developing human brain with DeepCellMap, a deep learning approach coupled with spatial statistics

Thu, 2025-02-13 06:00

Nat Commun. 2025 Feb 13;16(1):1577. doi: 10.1038/s41467-025-56560-z.

ABSTRACT

Mapping cellular organization in the developing brain presents significant challenges due to the multidimensional nature of the data, characterized by complex spatial patterns that are difficult to interpret without high-throughput tools. Here, we present DeepCellMap, a deep-learning-assisted tool that integrates multi-scale image processing with advanced spatial and clustering statistics. This pipeline is designed to map microglial organization during normal and pathological brain development and has the potential to be adapted to any cell type. Using DeepCellMap, we capture the morphological diversity of microglia, identify strong coupling between proliferative and phagocytic phenotypes, and show that distinct spatial clusters rarely overlap as human brain development progresses. Additionally, we uncover an association between microglia and blood vessels in fetal brains exposed to maternal SARS-CoV-2. These findings offer insights into whether various microglial phenotypes form networks in the developing brain to occupy space, and in conditions involving haemorrhages, whether microglia respond to, or influence, changes in blood vessel integrity. DeepCellMap is available as open-source software and is a powerful tool for extracting spatial statistics and analyzing cellular organization in large tissue sections, accommodating various imaging modalities. This platform opens new avenues for studying brain development and related pathologies.

PMID:39948387 | DOI:10.1038/s41467-025-56560-z

Categories: Literature Watch

Functionally characterizing obesity-susceptibility genes using CRISPR/Cas9, in vivo imaging and deep learning

Thu, 2025-02-13 06:00

Sci Rep. 2025 Feb 13;15(1):5408. doi: 10.1038/s41598-025-89823-2.

ABSTRACT

Hundreds of loci have been robustly associated with obesity-related traits, but functional characterization of candidate genes remains a bottleneck. Aiming to systematically characterize candidate genes for a role in accumulation of lipids in adipocytes and other cardiometabolic traits, we developed a pipeline using CRISPR/Cas9, non-invasive, semi-automated fluorescence imaging and deep learning-based image analysis in live zebrafish larvae. Results from a dietary intervention show that 5 days of overfeeding is sufficient to increase the odds of lipid accumulation in adipocytes by 10 days post-fertilization (dpf, n = 275). However, subsequent experiments show that across 12 to 16 established obesity genes, 10 dpf is too early to detect an effect of CRISPR/Cas9-induced mutations on lipid accumulation in adipocytes (n = 1014), and effects on food intake at 8 dpf (n = 1127) are inconsistent with earlier results from mammals. Despite this, we observe effects of CRISPR/Cas9-induced mutations on ectopic accumulation of lipids in the vasculature (sh2b1 and sim1b) and liver (bdnf); as well as on body size (pcsk1, pomca, irs1); whole-body LDLc and/or total cholesterol content (irs2b and sh2b1); and pancreatic beta cell traits and/or glucose content (pcsk1, pomca, and sim1a). Taken together, our results illustrate that CRISPR/Cas9- and image-based experiments in zebrafish larvae can highlight direct effects of obesity genes on cardiometabolic traits, unconfounded by their not-yet-apparent effect on excess adiposity.
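The dietary-intervention result above is phrased in terms of odds. As a reminder of what that measures, the odds ratio between two groups follows from a 2x2 table of counts (the counts below are invented for illustration, not taken from the study):

```python
def odds_ratio(cases_a, noncases_a, cases_b, noncases_b):
    """Odds ratio between group A (e.g. overfed) and group B (e.g. control):
    the ratio of the two groups' odds of showing the outcome."""
    return (cases_a / noncases_a) / (cases_b / noncases_b)

# Hypothetical counts: 20/30 overfed larvae vs 10/30 controls show the outcome.
example = odds_ratio(20, 10, 10, 20)
```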

PMID:39948378 | DOI:10.1038/s41598-025-89823-2

Categories: Literature Watch

Prediction of InSAR deformation time-series using improved LSTM deep learning model

Thu, 2025-02-13 06:00

Sci Rep. 2025 Feb 13;15(1):5333. doi: 10.1038/s41598-024-83084-1.

ABSTRACT

Mining-induced subsidence is a major concern for the mining industry, mine owners, statutory bodies, and environmental organisations; monitoring and predicting it is therefore of utmost importance for effective management. In the present study, a modified LSTM (mLSTM) model is developed to predict InSAR deformation time series; the model may also be extended to time-series prediction in general. To assess the developed model's performance, InSAR deformation time-series results obtained from 26 TSX/TDX datasets of Mine-A in the Khetri Copper Belt, India, are used as input, and the mLSTM results are compared with those of two other models, RNN and LSTM. The efficiencies of RNN, LSTM, and modified LSTM over the applied single-reference PSI-derived deformation time series are 82.6%, 97.54%, and 98.57%, respectively, and the corresponding RMS errors are 6.58 mm/year, 5.34 mm/year, and 4.22 mm/year. The predictions of the mLSTM model are also closest to the observed deformation velocity values obtained from the single-reference PSI-derived result. Furthermore, prediction for the next five years using mLSTM shows a maximum deformation of -20.87 mm/year and a minimum of 4.99 mm/year; most of the area is stable, but points around the plant area show some deformation.
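The model comparison above rests on the RMS error between predicted and observed deformation velocities; a minimal sketch of that metric (units here would be mm/year):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between predicted and observed values."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))
```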

PMID:39948371 | DOI:10.1038/s41598-024-83084-1

Categories: Literature Watch

Diagnosis of microbial keratitis using smartphone-captured images: a deep-learning model

Thu, 2025-02-13 06:00

J Ophthalmic Inflamm Infect. 2025 Feb 13;15(1):8. doi: 10.1186/s12348-025-00465-x.

ABSTRACT

BACKGROUND: Microbial keratitis (MK) poses a substantial threat to vision and is the leading cause of corneal blindness. The outcome of MK is heavily reliant on immediate treatment following an accurate diagnosis. Current diagnostics are often hindered by the difficulties faced in low- and middle-income countries, where there may be a lack of access to ophthalmic units with clinical experts and standardized investigating equipment. Hence, it is crucial to develop new and expeditious diagnostic approaches. This study explores the application of deep learning (DL) in diagnosing and differentiating subtypes of MK using smartphone-captured images.

MATERIALS AND METHODS: The dataset comprised 889 cases of bacterial keratitis (BK), fungal keratitis (FK), and acanthamoeba keratitis (AK) collected from 2020 to 2023. A convolutional neural network-based model was developed and trained for classification.

RESULTS: The study demonstrates the model's overall classification accuracy of 83.8%, with specific accuracies for AK, BK, and FK of 81.2%, 82.3%, and 86.6%, respectively, and an AUC of 0.92 for the ROC curves.

CONCLUSION: The model exhibits practicality, especially with the ease of image acquisition using smartphones, making it applicable in diverse settings.

PMID:39946047 | DOI:10.1186/s12348-025-00465-x

Categories: Literature Watch

Dwarf Updated Pelican Optimization Algorithm for Depression and Suicide Detection from Social Media

Thu, 2025-02-13 06:00

Psychiatr Q. 2025 Feb 13. doi: 10.1007/s11126-024-10111-9. Online ahead of print.

ABSTRACT

Depression and suicidal thoughts are significant global health concerns typically diagnosed through clinical assessments, which can be constrained by issues of accessibility and stigma. Moreover, current automated methods often struggle to handle the variability of social media text, to integrate different models effectively, and to generalize across settings, which reduces their accuracy in new contexts. This research presents a novel approach to suicide and depression detection from social media (SADDSM) that addresses these challenges of variability and model generalization. The process involves four key stages: first, preprocessing the input data through stop word removal, tokenization, and stemming to improve text clarity; then, extracting relevant features such as TF-IDF, style features, and enhanced word2vec features to capture semantic relationships and emotional cues. A modified mutual information score is used for feature fusion, selecting the most informative features. Subsequently, deep learning models like RNN, DBN, and improved LSTM are stacked to form an ensemble model that boosts accuracy while reducing overfitting. The performance is further optimized using the Dwarf Updated Pelican Optimization Algorithm (DU-POA) to fine-tune model weights, achieving an impressive 0.962 accuracy at 90% training data, outperforming existing techniques.
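Of the features listed, TF-IDF is the most self-contained to illustrate; a minimal pure-Python sketch (the toy tokenized posts in the usage are invented for illustration, and real pipelines would use a library implementation with smoothing):

```python
import math

def tf_idf(docs):
    """TF-IDF weights for a corpus given as lists of tokens.
    Returns one {term: weight} dict per document."""
    n = len(docs)
    doc_freq = {}
    for doc in docs:
        for term in set(doc):
            doc_freq[term] = doc_freq.get(term, 0) + 1
    weighted = []
    for doc in docs:
        tf = {term: doc.count(term) / len(doc) for term in set(doc)}
        weighted.append({term: tf[term] * math.log(n / doc_freq[term])
                         for term in tf})
    return weighted
```

A term appearing in every document (here, "alone") gets zero weight, while rarer, more discriminative terms are weighted up:

```python
weights = tf_idf([["sad", "alone"], ["happy", "alone"]])
```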

PMID:39946018 | DOI:10.1007/s11126-024-10111-9

Categories: Literature Watch

Lesion segmentation method for multiple types of liver cancer based on balanced dice loss

Thu, 2025-02-13 06:00

Med Phys. 2025 Feb 13. doi: 10.1002/mp.17624. Online ahead of print.

ABSTRACT

BACKGROUND: Obtaining accurate segmentation regions for liver cancer is of paramount importance for the clinical diagnosis and treatment of the disease. In recent years, a large number of variants of deep learning based liver cancer segmentation methods have been proposed to assist radiologists. Due to the differences in characteristics between different types of liver tumors and data imbalance, it is difficult to train a deep model that can achieve accurate segmentation for multiple types of liver cancer.

PURPOSE: In this paper, we propose a balanced Dice Loss (BD Loss) function for balanced learning of segmentation features across multiple categories. We also introduce a comprehensive method based on BD Loss to achieve accurate segmentation of multiple categories of liver cancer.
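The exact BD Loss formulation is in the paper itself; purely as a hedged sketch, a class-weighted Dice loss of the general kind described (the weighting scheme below is an assumption, not the authors' definition) might look like this:

```python
def dice(pred, target, eps=1e-6):
    """Soft Dice coefficient between flattened probability/binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def weighted_dice_loss(preds_by_class, targets_by_class, weights):
    """Per-class Dice losses combined with class weights (e.g. inverse class
    frequency) so that rare tumor types are not drowned out by common ones."""
    return sum(w * (1.0 - dice(p, t))
               for w, p, t in zip(weights, preds_by_class, targets_by_class))
```

With equal weights and perfect predictions the loss is (numerically) zero, and mispredicting a rare class costs more when its weight is raised.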

MATERIALS AND METHODS: We retrospectively collected computed tomography (CT) screening images and tumor segmentations of 591 patients with malignant liver tumors from West China Hospital of Sichuan University. We used the proposed BD Loss to train a deep model that can segment multiple types of liver tumors and, through a greedy parameter averaging algorithm (GPA algorithm), obtained a more generalized segmentation model. Finally, we employed model integration and our proposed post-processing method, which leverages inter-slice information, to achieve more accurate segmentation of liver cancer lesions.

RESULTS: We evaluated the performance of our proposed automatic liver cancer segmentation method on the dataset we collected. The BD Loss we propose effectively mitigates the adverse effects of data imbalance on the segmentation model. Our proposed method achieves a Dice per case (DPC) of 0.819 (95% CI 0.798-0.841), significantly higher than the baseline, which achieves a DPC of 0.768 (95% CI 0.740-0.796).

CONCLUSIONS: The differences in CT images between different types of liver cancer necessitate deep learning models to learn distinct features. Our method addresses this challenge, enabling balanced and accurate segmentation performance across multiple types of liver cancer.

PMID:39945728 | DOI:10.1002/mp.17624

Categories: Literature Watch

Spatial-temporal activity-informed diarization and separation

Thu, 2025-02-13 06:00

J Acoust Soc Am. 2025 Feb 1;157(2):1162-1175. doi: 10.1121/10.0035830.

ABSTRACT

A robust multichannel speaker diarization and separation system is proposed by exploiting the spatiotemporal activity of the speakers. The system is realized in a hybrid architecture that combines array signal processing units and deep learning units. For speaker diarization, a spatial coherence matrix across time frames is computed based on the whitened Relative Transfer Functions of the microphone array. This serves as a robust feature for subsequent machine learning without the need for prior knowledge of the array configuration. A computationally efficient modified End-to-End Neural Diarization system with an Encoder-Decoder-based Attractor network is constructed to estimate the speaker activity from the spatial coherence matrix. For speaker separation, we propose the Global and Local Activity-driven Speaker Extraction network to separate speaker signals via speaker-specific global and local spatial activity functions. The local spatial activity functions depend on the coherence between the whitened Relative Transfer Functions of each time-frequency bin and the target speaker-dominant bins. The global spatial activity functions are computed from the global spatial coherence functions based on frequency-averaged local spatial activity functions. Experimental results have demonstrated superior speaker diarization, counting, and separation performance achieved by the proposed system with low computational complexity compared to the pre-selected baselines.

PMID:39945646 | DOI:10.1121/10.0035830

Categories: Literature Watch

Analytical Capabilities and Future Perspectives of Chemometrics in Omics for Food Microbial Investigation

Thu, 2025-02-13 06:00

Crit Rev Anal Chem. 2025 Feb 13:1-14. doi: 10.1080/10408347.2025.2463430. Online ahead of print.

ABSTRACT

Microbiomes significantly impact food flavor, food quality, and human health. The development of omics technologies has revolutionized our understanding of the microbiome, but the complex datasets generated demand careful processing and interpretation. Currently, chemometrics has shown huge potential in omics data analysis, which is crucial for revealing the functional attributes and mechanisms of microbiomes in food nutrition and safety. However, various chemometric tools have their own characteristics; selecting appropriate techniques and performing multiomics data fusion analysis to improve the precision and reliability of food microbial investigations remains a major challenge. In this review, we summarize the omics technologies used in food microbiome studies, give an overview of the principles and applicability of chemometrics in omics, and discuss the challenges and prospects of chemometrics. An urgent need is to integrate deep learning (DL) and artificial intelligence algorithms into chemometrics to enhance its analytical capabilities and prediction accuracy. We hope this review provides valuable insights into the integration of multiomics and bioinformatics with various chemometric techniques for data analysis in food microbial investigation. In the future, chemometrics combined with modern technologies for multiomics data analysis will further deepen our understanding of food microbiology and improve food safety.

PMID:39945579 | DOI:10.1080/10408347.2025.2463430

Categories: Literature Watch

Physics-informed model-based generative neural network for synthesizing scanner- and algorithm-specific low-dose CT exams

Thu, 2025-02-13 06:00

Med Phys. 2025 Feb 13. doi: 10.1002/mp.17680. Online ahead of print.

ABSTRACT

BACKGROUND: Accurate low-dose CT simulation is required to efficiently assess reconstruction and dose reduction techniques. Projection domain noise insertion requires proprietary information from manufacturers. Analytic image domain noise insertion methods are successful for linear reconstruction algorithms; however, extending them to non-linear algorithms remains challenging. Emerging deep-learning-based image domain noise insertion methods have potential, but few approaches have explicitly incorporated physics information and a texture-synthesis model to guide the generation of locally and globally correlated noise texture.

PURPOSE: We proposed a physics-informed model-based generative neural network for simulating scanner- and algorithm-specific low-dose CT exams (PALETTE). It is expected to provide an alternative to projection domain noise insertion methods in the absence of manufacturers' proprietary information and tools.

METHODS: PALETTE integrated a physics-based noise prior generation process, a Noise2Noisier sub-network, and a noise texture synthesis sub-network. The Noise2Noisier sub-network provided a bias prior, which, combined with the noise prior, served as the inputs to the noise texture synthesis sub-network. Explicit regularizations in the spatial and frequency domains were developed to account for noise spatial correlation and frequency characteristics. As a proof of concept, PALETTE was trained and validated for a commercial iterative reconstruction algorithm (SAFIRE, Siemens Healthineers), using paired routine-dose and 25%-dose images from CT phantoms (lateral size 30-40 cm; three training and four testing phantoms) and open-access patient cases (10 training and 20 testing cases). In phantom validation, noise power spectra (NPS) were compared in the water background and tissue-mimicking inserts, using peak frequency and mean absolute error (MAE). In patient case evaluation, visual inspection and quantitative assessment were conducted on axial, coronal, and sagittal planes. Local and global noise texture were visually inspected in low-dose CT images and in the difference images between routine and low dose. Noise levels in liver and fat were measured. Local and global 2D Fourier magnitude spectra of the difference images and the corresponding radial mean profiles were used to assess similarity in noise frequency components within tissues and the entire field-of-view, using the spectral correlation mapper (SCM) and spectral angle mapper (SAM). Several baseline neural network models (e.g., GAN) were included in the evaluation. Statistical significance was tested using a t-test for related samples.

RESULTS: PALETTE-derived NPS showed accurate noise peak frequency (PALETTE/reference: water 1.40/1.40 lp/cm; inserts 1.7/1.7 lp/cm) and small MAE (≤0.65 HU²cm²). PALETTE created anatomy-dependent noise texture, showing realistic local and global granularity and streaks. No statistically significant difference was observed in noise levels (p > 0.05). Noise ranges were comparable across the 3D image volume (PALETTE/reference): liver [18.0, 53.4]/[19.3, 50.0] HU; fat [11.7, 42.4]/[12.1, 41.3] HU. Percent absolute difference of local noise was small (mean ± standard deviation): liver 4.1% ± 3.1%, fat 4.6% ± 3.1%. Noise frequency distribution was close to the reference (mean per case): SCM ≥ 0.92, SAM ≤ 0.22. Additionally, PALETTE outperformed all baseline models in visual inspection and quantitative comparison.
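The NPS comparison metrics quoted above are straightforward to compute once a spectrum is in hand; a sketch, assuming the spectrum is given as paired frequency/power lists (the sample values in the test are illustrative, not from the study):

```python
def peak_frequency(freqs, nps):
    """Frequency (e.g. in lp/cm) at which the noise power spectrum peaks."""
    peak_index = max(range(len(nps)), key=nps.__getitem__)
    return freqs[peak_index]

def mae(spectrum_a, spectrum_b):
    """Mean absolute error between two spectra sampled on the same grid."""
    return sum(abs(a - b) for a, b in zip(spectrum_a, spectrum_b)) / len(spectrum_a)
```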

CONCLUSION: PALETTE can provide high-quality image domain noise insertion for simulating accurate low-dose CT images created with a commercial non-linear reconstruction algorithm.

PMID:39945452 | DOI:10.1002/mp.17680

Categories: Literature Watch

Simpler Protein Domain Identification Using Spectral Clustering

Thu, 2025-02-13 06:00

Proteins. 2025 Feb 13. doi: 10.1002/prot.26808. Online ahead of print.

ABSTRACT

The decomposition of a biomolecular complex into domains is an important step to investigate biological functions and ease structure determination. A successful approach to do so is the SPECTRUS algorithm, which provides a segmentation based on spectral clustering applied to a graph coding inter-atomic fluctuations derived from an elastic network model. We present SPECTRALDOM, which makes three straightforward and useful additions to SPECTRUS. For single structures, we show that high-quality partitionings can be obtained from a graph Laplacian derived from pairwise interactions, without normal modes. For sets of homologous structures, we introduce a Multiple Sequence Alignment (MSA) mode, exploiting both the sequence-based information and the geometric information embodied in experimental structures. Finally, we propose to analyze the clusters/domains delivered using the so-called D-family-matching algorithm, which establishes a correspondence between domains yielded by two decompositions, and can be used to handle fragmentation issues. Our domains compare favorably to those of the original SPECTRUS, and to those of the deep-learning-based method Chainsaw. Using two complex cases, we show in particular that SPECTRALDOM is the only method handling complex conformational changes involving several sub-domains. Finally, a comparison of SPECTRALDOM and Chainsaw against the manually curated domain classification ECOD as a reference shows that high-quality domains are obtained without using any evolutionarily related piece of information. SPECTRALDOM is provided in the Structural Bioinformatics Library, see http://sbl.inria.fr and https://sbl.inria.fr/doc/Spectral_domain_explorer-user-manual.html.
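The actual D-family-matching algorithm solves a graph-theoretic optimization; purely as an illustration of the underlying idea of establishing a correspondence between two domain decompositions, a greedy overlap matching might look like this (domain labels and residue sets below are invented):

```python
def match_domains(decomp_a, decomp_b):
    """Greedily pair domains from two decompositions by residue overlap.
    decomp_a, decomp_b: dicts mapping a domain label to a set of residue ids."""
    candidates = sorted(
        ((len(res_a & res_b), label_a, label_b)
         for label_a, res_a in decomp_a.items()
         for label_b, res_b in decomp_b.items()),
        reverse=True)  # largest overlaps first
    used_a, used_b, pairing = set(), set(), {}
    for overlap, label_a, label_b in candidates:
        if overlap and label_a not in used_a and label_b not in used_b:
            pairing[label_a] = label_b
            used_a.add(label_a)
            used_b.add(label_b)
    return pairing
```

A greedy pass like this handles the easy cases; the published algorithm additionally copes with fragmented domains, where several small pieces in one decomposition jointly correspond to one domain in the other.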

PMID:39945423 | DOI:10.1002/prot.26808

Categories: Literature Watch

A Veterinary DICOM-Based Deep Learning Denoising Algorithm Can Improve Subjective and Objective Brain MRI Image Quality

Thu, 2025-02-13 06:00

Vet Radiol Ultrasound. 2025 Mar;66(2):e70015. doi: 10.1111/vru.70015.

ABSTRACT

In this analytical cross-sectional method comparison study, we evaluated brain MR images in 30 dogs and cats with and without using a DICOM-based deep-learning (DL) denoising algorithm developed specifically for veterinary patients. Quantitative comparison was performed by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) on the same T2-weighted (T2W), T2-FLAIR, and Gradient Echo (GRE) MR brain images in each patient (native images and after denoising) in identical regions of interest. Qualitative comparisons were then conducted: three experienced veterinary radiologists independently evaluated each patient's T2W, T2-FLAIR, and GRE image series. Native and denoised images were evaluated separately, with observers blinded to the type of images they were assessing. For each image type (native and denoised) and pulse sequence, they assigned a subjective grade of coarseness, contrast, and overall quality. For all image series tested (T2W, T2-FLAIR, and GRE), the SNRs of cortical gray matter, subcortical white matter, deep gray matter, and internal capsule were statistically significantly higher on images treated with the DL denoising algorithm than on native images. Similarly, for all image series types tested, the CNRs between cortical gray and white matter and between deep gray matter and internal capsule were significantly higher on DL algorithm-treated images than on native images. The qualitative analysis confirmed these results, with generally better coarseness, contrast, and overall quality scores for the images treated with the DL denoising algorithm. In this study, this DICOM-based DL denoising algorithm reduced noise in 1.5T canine and feline brain MR images, and radiologists' perceived image quality improved.
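The quantitative metrics used here are standard; as a sketch, SNR and CNR computed from region-of-interest pixel samples (the ROI values in the test are illustrative):

```python
import statistics

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean signal over its
    standard deviation."""
    return statistics.mean(roi) / statistics.stdev(roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, normalized by the
    standard deviation measured in a noise (background) region."""
    return (abs(statistics.mean(roi_a) - statistics.mean(roi_b))
            / statistics.stdev(noise_roi))
```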

PMID:39945204 | DOI:10.1111/vru.70015

Categories: Literature Watch

MUNet: a novel framework for accurate brain tumor segmentation combining UNet and mamba networks

Thu, 2025-02-13 06:00

Front Comput Neurosci. 2025 Jan 29;19:1513059. doi: 10.3389/fncom.2025.1513059. eCollection 2025.

ABSTRACT

Brain tumors are one of the major health threats to humans, and their complex pathological features and anatomical structures make accurate segmentation and detection crucial. However, existing models based on Transformers and Convolutional Neural Networks (CNNs) still have limitations in medical image processing. While Transformers are proficient in capturing global features, they suffer from high computational complexity and require large amounts of data for training. On the other hand, CNNs perform well in extracting local features but have limited performance when handling global information. To address these issues, this paper proposes a novel network framework, MUNet, which combines the advantages of UNet and Mamba, specifically designed for brain tumor segmentation. MUNet introduces the SD-SSM module, which effectively captures both global and local features of the image through selective scanning and state-space modeling, significantly improving segmentation accuracy. Additionally, we design the SD-Conv structure, which reduces feature redundancy without increasing model parameters, further enhancing computational efficiency. Finally, we propose a new loss function that combines mIoU loss, Dice loss, and Boundary loss, which improves segmentation overlap, similarity, and boundary accuracy from multiple perspectives. Experimental results show that, on the BraTS2020 dataset, MUNet achieves DSC values of 0.835, 0.915, and 0.823 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively, and Hausdorff95 scores of 2.421, 3.755, and 6.437. On the BraTS2018 dataset, MUNet achieves DSC values of 0.815, 0.901, and 0.815, with Hausdorff95 scores of 4.389, 6.243, and 6.152, all outperforming existing methods and achieving significant performance improvements. Furthermore, when validated on the independent LGG dataset, MUNet demonstrated excellent generalization ability, proving its effectiveness in various medical imaging scenarios. 
The code is available at https://github.com/Dalin1977331/MUNet.
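The Hausdorff95 scores reported above are 95th-percentile boundary distances; a brute-force sketch over 2D boundary point sets (adequate for small sets, not an optimized implementation; real evaluations work on 3D voxel boundaries):

```python
import math

def _directed_percentile(points_a, points_b, q):
    """q-quantile of the nearest-neighbour distances from points_a to points_b."""
    dists = sorted(min(math.dist(p, r) for r in points_b) for p in points_a)
    index = min(len(dists) - 1, round(q * (len(dists) - 1)))
    return dists[index]

def hausdorff95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two boundary
    point sets; more robust to outliers than the plain maximum distance."""
    return max(_directed_percentile(points_a, points_b, 0.95),
               _directed_percentile(points_b, points_a, 0.95))
```

Taking the 95th percentile instead of the maximum is what makes the metric tolerant of a few stray boundary voxels.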

PMID:39944950 | PMC:PMC11814164 | DOI:10.3389/fncom.2025.1513059

Categories: Literature Watch

Application of deep learning for real-time detection, localization, and counting of the malignant invasive weed Solanum rostratum Dunal

Thu, 2025-02-13 06:00

Front Plant Sci. 2025 Jan 29;15:1486929. doi: 10.3389/fpls.2024.1486929. eCollection 2024.

ABSTRACT

Solanum rostratum Dunal (SrD) is a globally harmful invasive weed that has spread widely across many countries, posing a serious threat to agriculture and ecosystem security. A deep learning network model, TrackSolanum, was designed for real-time detection, localization, and counting of SrD in the field. The TrackSolanum network model comprises four modules: detection, tracking, localization, and counting. The detection module uses YOLO_EAND for SrD identification, the tracking module applies DeepSort for multi-target tracking of SrD in consecutive video frames, the localization module determines the position of the SrD through center-of-mass localization, and the counting module counts the plants using a target ID over-the-line invalidation method. The field test results show that for UAV video at a height of 2 m, TrackSolanum achieved precision and recall of 0.950 and 0.970, with MOTA and IDF1 scores of 0.826 and 0.960, a counting error rate of 2.438%, and an FPS of 17. For UAV video at a height of 3 m, the model reached precision and recall of 0.846 and 0.934, MOTA and IDF1 scores of 0.708 and 0.888, a counting error rate of 4.634%, and an FPS of 79. Thus, TrackSolanum supports real-time SrD detection, offering crucial technical support for hazard assessment and precise management of SrD.
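The "target ID over-the-line invalidation" counting idea can be sketched in a few lines: count a track ID the first time its centroid crosses a counting line, then ignore that ID afterward so re-crossings and re-detections are not double-counted. The frame/track data layout below is an assumption for illustration, not the paper's interface:

```python
def count_crossings(frames, line_y):
    """Count each track ID at most once, the first time its centroid crosses
    the horizontal counting line at y = line_y.
    frames: list of {track_id: (cx, cy)} dicts, one per video frame."""
    counted, last_y = set(), {}
    for frame in frames:
        for tid, (cx, cy) in frame.items():
            prev = last_y.get(tid)
            if (prev is not None and tid not in counted
                    and (prev - line_y) * (cy - line_y) < 0):  # sign change => crossed
                counted.add(tid)  # count once; later crossings are invalidated
            last_y[tid] = cy
    return len(counted)
```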

PMID:39944948 | PMC:PMC11814178 | DOI:10.3389/fpls.2024.1486929

Categories: Literature Watch

Sleep Apnea Detection Using EEG: A Systematic Review of Datasets, Methods, Challenges, and Future Directions

Wed, 2025-02-12 06:00

Ann Biomed Eng. 2025 Feb 12. doi: 10.1007/s10439-025-03691-5. Online ahead of print.

ABSTRACT

PURPOSE: Sleep Apnea (SA) affects an estimated 936 million adults globally, posing a significant public health concern. The gold standard for diagnosing SA, polysomnography, is costly and uncomfortable. Electroencephalogram (EEG)-based SA detection is promising due to its ability to capture distinctive sleep stage-related characteristics across different sub-band frequencies. This study aims to review and analyze research from the past decade on the potential of EEG signals in SA detection and classification, focusing on various deep learning and machine learning techniques, including signal decomposition, feature extraction, feature selection, and classification methodologies.
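EEG sub-band characteristics of the kind mentioned above are commonly summarized as relative band power. A naive-DFT sketch (O(n²), so only suitable for short windows; the band edges and test signal are illustrative, not from any reviewed study):

```python
import math

def relative_band_power(signal, fs, f_lo, f_hi):
    """Fraction of spectral power falling in the band [f_lo, f_hi) Hz,
    computed with a naive discrete Fourier transform."""
    n = len(signal)
    total = band = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        power = re * re + im * im
        total += power
        if f_lo <= freq < f_hi:
            band += power
    return band / total
```

A pure 10 Hz tone sampled at 100 Hz puts essentially all of its power in the 8-12 Hz (alpha) band.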

METHOD: A systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and PICO guidelines was conducted across five databases for publications from January 2010 to December 2024.

RESULTS: The review involved screening a total of 402 papers, with 63 selected for in-depth analysis to provide valuable insights into the application of EEG signals for SA detection. The findings underscore the potential of EEG-based methods in improving SA diagnosis.

CONCLUSION: This study provides valuable insights, showcasing significant advancements while identifying key areas for further exploration, thereby laying a strong foundation for future research in EEG-based SA detection.

PMID:39939549 | DOI:10.1007/s10439-025-03691-5

Categories: Literature Watch

Automated grading of oleaster fruit using deep learning

Wed, 2025-02-12 06:00

Sci Rep. 2025 Feb 12;15(1):5206. doi: 10.1038/s41598-025-89358-6.

ABSTRACT

The agriculture sector is crucial to many economies, particularly in developing regions, with post-harvest technology emerging as a key growth area. The oleaster, valued for its nutritional and medicinal properties, has traditionally been graded manually based on color and appearance. As global demand rises, there is a growing need for efficient automated grading methods. Therefore, this study aimed to develop a real-time machine vision system for classifying oleaster fruit at various grading velocities. Initially, in the offline phase, a dataset containing video frames of four different quality classes of oleaster, categorized based on the Iranian national standard, was acquired at different linear conveyor belt velocities (ranging from 4.82 to 21.51 cm/s). The Mask R-CNN algorithm was used to segment the extracted frames to obtain the position and boundary of the samples. Experimental results indicated that, with a 100% detection rate and an average instance segmentation error ranging from 4.17 to 5.79%, the Mask R-CNN algorithm is capable of accurately segmenting all classes of oleaster at all the examined grading velocity levels. The results of the fivefold cross-validation indicated that the general YOLOv8x and YOLOv8n models, created using the dataset obtained from all conveyor belt velocity levels, have similarly reliable classification performance. Therefore, given its simpler architecture and lower processing time requirements, the YOLOv8n model was used to evaluate the grading system in real-time mode. The overall classification accuracy of this model was 92%, with a sensitivity range of 87.10-94.89% for distinguishing different classes of oleaster at a grading velocity of 21.51 cm/s. The results of this study demonstrate the effectiveness of deep learning-based models in developing grading machines for the oleaster fruit.
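Overall accuracy and per-class sensitivity of the kind reported here follow directly from a confusion matrix; a sketch with an invented two-class matrix (the real study has four quality classes):

```python
def overall_accuracy(confusion):
    """confusion[i][j]: number of class-i samples predicted as class j."""
    total = sum(sum(row) for row in confusion)
    return sum(confusion[i][i] for i in range(len(confusion))) / total

def per_class_sensitivity(confusion):
    """Recall of each class: correct predictions over all samples of the class."""
    return [row[i] / sum(row) for i, row in enumerate(confusion)]
```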

PMID:39939355 | DOI:10.1038/s41598-025-89358-6

Categories: Literature Watch

Stroke Management and Analysis Risk Tool (SMART): An interpretable clinical application for diabetes-related stroke prediction

Wed, 2025-02-12 06:00

Nutr Metab Cardiovasc Dis. 2024 Dec 29:103841. doi: 10.1016/j.numecd.2024.103841. Online ahead of print.

ABSTRACT

BACKGROUND AND AIMS: The growing global burden of diabetes and stroke poses a significant public health challenge. This study aims to analyze factors and create an interpretable stroke prediction model for diabetic patients.

METHODS AND RESULTS: Data from 20,014 patients were collected from the Affiliated Drum Tower Hospital, Medical School of Nanjing University, between 2021 and 2022. After handling the missing values, feature engineering included LASSO, SVM-RFE, and multi-factor regression techniques. The dataset was split 8:2 for training and testing, with the Synthetic Minority Oversampling Technique (SMOTE) used to balance classes. Various machine learning and deep learning techniques, such as Random Forest (RF) and deep neural networks (DNN), were used for model training. SHAP analysis and a dedicated website demonstrated the interpretability and practicality of the model. This study identified 11 factors influencing stroke incidence, with the RF and DNN algorithms achieving AUC values of 0.95 and 0.91, respectively. The Stroke Management and Analysis Risk Tool (SMART) was developed for clinical use.
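SMOTE, mentioned above for class balancing, synthesizes new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. The following is a minimal stdlib-only sketch of that idea (in practice one would use a library implementation such as imbalanced-learn); the sample points and parameters are hypothetical.

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment
    between a random minority sample and one of its k nearest
    minority-class neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

# Hypothetical 2-D minority-class samples (e.g. stroke cases in feature space).
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_synthetic=4)
```

Because every synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's convex hull, which is what keeps SMOTE from inventing implausible cases.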

PRIMARY ENDPOINT: The predictive performance of SMART in assessing stroke risk in diabetic patients was evaluated using AUC.

SECONDARY ENDPOINTS: Accuracy (precision, recall, F1-score), interpretability via SHAP values, and clinical utility with an emphasis on the user interface were evaluated. EHR data were analyzed statistically using univariate and multivariate methods, with model validation on a separate test set.

CONCLUSIONS: An interpretable stroke-predictive model was created for patients with diabetes. This model proposes that standard clinical and laboratory parameters can predict the stroke risk in individuals with diabetes.

PMID:39939252 | DOI:10.1016/j.numecd.2024.103841

Categories: Literature Watch

Use of deep learning-accelerated T2 TSE for prostate MRI: Comparison with and without hyoscine butylbromide administration

Wed, 2025-02-12 06:00

Magn Reson Imaging. 2025 Feb 10:110358. doi: 10.1016/j.mri.2025.110358. Online ahead of print.

ABSTRACT

OBJECTIVE: To investigate the use of deep learning (DL) T2-weighted turbo spin echo (TSE) imaging sequence with deep learning acceleration (T2DL) in prostate MRI regarding the necessity of hyoscine butylbromide (HBB) administration for high image quality.

METHODS: One hundred twenty consecutive patients divided into four groups (30 per group) were included in this study. All patients received a T2DL (version 2022/23) and a conventional T2 TSE (cT2) sequence on a 3 T scanner with the implemented software system. Group A received cT2 with HBB and T2DL without HBB at a field of view (FOV) of 130 mm; group B followed the same protocol at a FOV of 160 mm. Group C received both sequences at a FOV of 160 mm with HBB, and group D without HBB. Two radiologists independently evaluated all imaging datasets in a blinded reading regarding motion, sharpness, noise, and diagnostic confidence. Furthermore, we analyzed quantitative parameters by calculating edge rise distance (ERD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). The Friedman test was used for group comparisons.

RESULTS: Baseline characteristics showed no significant differences between groups A-D. With HBB, cT2 showed fewer motion artifacts, greater sharpness, and higher diagnostic confidence than T2DL, though the DL sequences had significantly lower noise (p < 0.01). Quantitative analysis revealed higher SNR and CNR for T2DL sequences (p < 0.01), while ERD remained similar. Inter-reader agreement was good to excellent, with ICCs ranging from 0.84 to 0.93. T2DL acquisition time was significantly lower than for cT2.
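The SNR and CNR comparisons above rest on standard region-of-interest (ROI) definitions: SNR as mean signal over background-noise standard deviation, and CNR as the absolute difference of two tissue means over that same noise standard deviation. A minimal sketch with hypothetical pixel-intensity ROIs (not the study's measurements):

```python
import statistics

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal intensity divided by the
    standard deviation of a background-noise ROI."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute difference of two tissue ROI
    means, normalized by the background-noise standard deviation."""
    contrast = abs(statistics.mean(roi_a) - statistics.mean(roi_b))
    return contrast / statistics.stdev(noise_roi)

# Hypothetical pixel intensities sampled from three ROIs.
lesion = [180.0, 175.0, 185.0, 182.0]
muscle = [90.0, 95.0, 92.0, 88.0]
background = [5.0, 7.0, 6.0, 4.0]

snr_value = snr(lesion, background)
cnr_value = cnr(lesion, muscle, background)
```

One caveat worth noting: for DL-reconstructed sequences the noise distribution can be spatially nonuniform, so such ROI-based SNR/CNR values should be read as relative comparisons rather than absolute quality measures.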

CONCLUSIONS: In our study, cT2 sequences with HBB showed superior image quality and diagnostic confidence, while the T2DL sequence offers promising potential for reducing MRI acquisition times and performed better on quantitative measures such as SNR and CNR. Additional studies are required to evaluate further adjusted and developed DL applications for prostate MRI on upcoming scanner generations and to assess tumor detection rates.

PMID:39938669 | DOI:10.1016/j.mri.2025.110358

Categories: Literature Watch

Estimating the treatment effects of multiple drug combinations on multiple outcomes in hypertension

Wed, 2025-02-12 06:00

Cell Rep Med. 2025 Feb 5:101947. doi: 10.1016/j.xcrm.2025.101947. Online ahead of print.

ABSTRACT

Hypertension management is complex due to the need for multiple drug combinations and consideration of diverse outcomes. Traditional treatment effect estimation methods struggle to address this complexity, as they typically focus on binary treatments and binary outcomes. To overcome these challenges, we introduce a framework that accommodates multiple drug combinations and multiple outcomes (METO). METO uses multi-treatment encoding to handle drug combinations and sequences, distinguishing between effectiveness and safety outcomes by learning the outcome type during prediction. To mitigate confounding bias, METO employs an inverse probability weighting method for multiple treatments, assigning balance weights based on propensity scores. Evaluated on real-world data, METO achieves significant performance improvements over existing methods, with an average improvement of 6.4% in influence function-based precision of estimating heterogeneous effects. A case study demonstrates METO's ability to identify personalized antihypertensive treatments that optimize efficacy and minimize safety risks, highlighting its potential for improving hypertension treatment strategies.
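The inverse probability weighting step described above generalizes naturally from binary to multiple treatments: each patient is weighted by the reciprocal of the estimated propensity of the treatment combination they actually received. A minimal sketch with hypothetical propensity scores (the METO paper's own estimator and data are not reproduced here):

```python
def ipw_weights(treatments, propensities):
    """Inverse probability weights for multiple treatments: each unit is
    weighted by 1 / P(received treatment | covariates), so rarely assigned
    treatment groups are up-weighted to balance the pseudo-population."""
    return [1.0 / p[t] for t, p in zip(treatments, propensities)]

# Hypothetical example: 3 patients, 3 candidate drug combinations (0, 1, 2).
# propensities[i][t] = estimated P(treatment t | covariates of patient i),
# summing to 1 per patient.
treatments = [0, 2, 1]
propensities = [
    {0: 0.5, 1: 0.25, 2: 0.25},
    {0: 0.25, 1: 0.25, 2: 0.5},
    {0: 0.25, 1: 0.25, 2: 0.5},
]
weights = ipw_weights(treatments, propensities)
```

Patients who received a combination that was unlikely given their covariates get larger weights, which is what removes the confounding between covariates and treatment choice in the weighted sample.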

PMID:39938524 | DOI:10.1016/j.xcrm.2025.101947

Categories: Literature Watch

Segment Anything for Microscopy

Wed, 2025-02-12 06:00

Nat Methods. 2025 Feb 12. doi: 10.1038/s41592-024-02580-4. Online ahead of print.

ABSTRACT

Accurate segmentation of objects in microscopy images remains a bottleneck for many researchers despite the number of tools developed for this purpose. Here, we present Segment Anything for Microscopy (μSAM), a tool for segmentation and tracking in multidimensional microscopy data. It is based on Segment Anything, a vision foundation model for image segmentation. We extend it by fine-tuning generalist models for light and electron microscopy that clearly improve segmentation quality for a wide range of imaging conditions. We also implement interactive and automatic segmentation in a napari plugin that can speed up diverse segmentation tasks and provides a unified solution for microscopy annotation across different microscopy modalities. Our work constitutes the first application of vision foundation models in microscopy, laying the groundwork for solving image analysis tasks in this domain with a small set of powerful deep learning models.

PMID:39939717 | DOI:10.1038/s41592-024-02580-4

Categories: Literature Watch
