Deep learning

Predicting the risk of relapsed or refractory in patients with diffuse large B-cell lymphoma via deep learning

Tue, 2025-03-18 06:00

Front Oncol. 2025 Mar 3;15:1480645. doi: 10.3389/fonc.2025.1480645. eCollection 2025.

ABSTRACT

INTRODUCTION: Diffuse large B-cell lymphoma (DLBCL) is the most common type of non-Hodgkin lymphoma (NHL) in humans. It is a highly heterogeneous malignancy with a 40% to 50% risk of relapsed or refractory (R/R) disease, which leads to a poor prognosis. Early prediction of R/R risk is therefore of great significance for adjusting treatment and improving patient prognosis.

METHODS: We collected clinical information and H&E images of 227 patients diagnosed with DLBCL at Xuzhou Medical University Affiliated Hospital from 2015 to 2018. Patients were divided into an R/R group and a non-relapsed, non-refractory group based on clinical diagnosis, and the two groups were randomly assigned to training, validation, and test sets in a ratio of 7:1:2. We developed a model to predict patients' R/R risk from clinical features using the random forest algorithm. Additionally, a prediction model based on histopathological images was constructed using CLAM, a weakly supervised learning method, after extracting image features with convolutional neural networks. To improve prediction performance, we further integrated image features and clinical information in a fusion model.
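
As a rough illustration of the fusion step described above, the following is a minimal late-fusion sketch: slide-level embeddings (assumed to be exported from a CLAM-style model) are concatenated with clinical features. All feature names, dimensions, and the random data are assumptions, not the authors' pipeline.

```python
# Minimal late-fusion sketch; slide embeddings and clinical columns are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 227
clinical = rng.normal(size=(n, 12))      # e.g., age, LDH, IPI components (hypothetical)
slide_emb = rng.normal(size=(n, 512))    # slide-level CLAM-style embedding (hypothetical)
y = rng.integers(0, 2, size=n)           # 1 = relapsed/refractory

# Approximate 7:1:2 split mirroring the paper
idx = rng.permutation(n)
tr, va, te = idx[:159], idx[159:182], idx[182:]

# Clinical-only baseline: random forest, as in the abstract
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(clinical[tr], y[tr])

# Fusion: concatenate image embedding with clinical features
fused = np.hstack([slide_emb, clinical])
clf = LogisticRegression(max_iter=2000).fit(fused[tr], y[tr])
print("fusion test AUC:", roc_auc_score(y[te], clf.predict_proba(fused[te])[:, 1]))
```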

RESULTS: The average area under the ROC curve (AUC) of the fusion model was 0.71±0.07 in the validation dataset and 0.70±0.04 in the test dataset. This study proposed a novel method for predicting the R/R risk of DLBCL based on H&E images and clinical features.

DISCUSSION: For patients predicted to have high risk, follow-up monitoring can be intensified, and treatment plans can be adjusted promptly.

PMID:40098696 | PMC:PMC11911189 | DOI:10.3389/fonc.2025.1480645

Categories: Literature Watch

Deep learning imaging analysis to identify bacterial metabolic states associated with carcinogen production

Tue, 2025-03-18 06:00

Discov Imaging. 2025;2(1):2. doi: 10.1007/s44352-025-00006-1. Epub 2025 Mar 10.

ABSTRACT

BACKGROUND: Colorectal cancer (CRC) is a globally prevalent cancer. Emerging research implicates the gut microbiome in CRC pathogenesis. Bacteria such as Clostridium scindens can produce the carcinogenic bile acid deoxycholic acid (DCA). It is unknown whether imaging methods can differentiate DCA-producing and DCA-non-producing C. scindens cells.

METHODS: Light microscopy images of anaerobically cultured C. scindens in four conditions were acquired at 100× magnification using the Tissue FAX system: C. scindens in media alone (DCA-non-producing state), C. scindens in media with cholic acid (DCA-producing state), or C. scindens in co-culture with one of two Bacteroides species (intermediate DCA production states). We evaluated three approaches: whole-image classification, per-cell classification, and image segmentation-based classification. For whole-image classification, we used a custom Convolutional Neural Network (CNN), pre-trained DenseNet, pre-trained ResNet, and ResNet enhanced by integrating the Digital Images of Bacterial Species (DIBaS) dataset. For cell detection and classification, we applied thresholding (OTSU or adaptive thresholding) followed by a ResNet model. Finally, image segmentation-based classification was performed using nnU-Net.
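
A hedged sketch of the per-cell route named above: adaptive thresholding proposes candidate cells, each crop is then classified by a ResNet. The file name, patch sizes, area cutoff, and two-class head are illustrative assumptions; only the thresholding constant C = 3 comes from the results.

```python
# Per-cell detection + classification sketch (assumed parameters throughout).
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

img = cv2.imread("c_scindens_field.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

# Adaptive thresholding; C is the constant subtracted from the local mean
# (the paper reports C = 3 as optimal for per-cell analysis).
mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY_INV, 31, 3)
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # producing vs non-producing (assumed head)
model.eval()
prep = T.Compose([T.ToTensor(), T.Resize((224, 224))])

for i in range(1, num):                      # label 0 is background
    x, y, w, h, area = stats[i]
    if area < 20:                            # drop debris (assumed size cutoff)
        continue
    patch = cv2.cvtColor(img[y:y+h, x:x+w], cv2.COLOR_GRAY2RGB)
    with torch.no_grad():
        logits = model(prep(patch).unsqueeze(0))
    print(i, logits.softmax(-1).numpy().round(3))
```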

RESULTS: For whole-image analysis, DIBaS-enhanced ResNet models achieved the best performance in distinguishing C. scindens states in monoculture (accuracy 0.89 ± 0.006) and in co-cultures (accuracy 0.86 ± 0.004). Per-cell analysis was optimal with an adaptive-thresholding constant C of 3, the ResNet model achieving 62-74% accuracy for C. scindens states in monoculture. Segmentation-based analysis using nnU-Net resulted in Dice coefficients of 87% for C. scindens and 74-76% for the Bacteroides species.

CONCLUSIONS: This study demonstrates feasibility of image-based deep learning models in identifying health-relevant gut bacterial metabolic states.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s44352-025-00006-1.

PMID:40098681 | PMC:PMC11912549 | DOI:10.1007/s44352-025-00006-1

Categories: Literature Watch

An efficient deep learning strategy for accurate and automated detection of breast tumors in ultrasound image datasets

Tue, 2025-03-18 06:00

Front Oncol. 2025 Mar 3;14:1461542. doi: 10.3389/fonc.2024.1461542. eCollection 2024.

ABSTRACT

BACKGROUND: Breast cancer ranks as one of the leading malignant tumors among women worldwide in terms of incidence and mortality. Ultrasound examination is a critical method for breast cancer screening and diagnosis in China. However, conventional breast ultrasound examinations are time-consuming and labor-intensive, necessitating the development of automated and efficient detection models.

METHODS: We developed a novel approach based on an improved deep learning model for the intelligent auxiliary diagnosis of breast tumors. Combining an optimized U2NET-Lite model with the efficient DeepCardinal-50 model, this method demonstrates superior accuracy and efficiency in the precise segmentation and classification of breast ultrasound images compared to traditional deep learning models such as ResNet and AlexNet.
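
Since U2NET-Lite and DeepCardinal-50 are not public, here is a schematic two-stage sketch of the kind of segment-then-classify pipeline described: a segmentation network proposes the lesion region, which is cropped and passed to a classifier. The stand-in models, threshold, and crop size are assumptions.

```python
# Schematic segmentation -> classification pipeline with stand-in models.
import torch
import torchvision

seg_net = torchvision.models.segmentation.fcn_resnet50(num_classes=1)  # stand-in for U2NET-Lite
cls_net = torchvision.models.resnet50(num_classes=3)                   # stand-in; 3 = benign/malignant/normal

def predict(us_image: torch.Tensor):
    """us_image: (1, 3, H, W) preprocessed ultrasound frame."""
    seg_net.eval(); cls_net.eval()
    with torch.no_grad():
        mask = torch.sigmoid(seg_net(us_image)["out"]) > 0.5   # lesion mask
        ys, xs = torch.nonzero(mask[0, 0], as_tuple=True)
        if len(xs) == 0:
            return mask, None                                  # nothing segmented
        crop = us_image[..., ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        return mask, cls_net(crop).softmax(-1)                 # class probabilities
```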

RESULTS: Our proposed model demonstrated exceptional performance in experimental test sets. For segmentation, the U2NET-Lite model processed breast cancer images with an accuracy of 0.9702, a recall of 0.7961, and an IoU of 0.7063. In classification, the DeepCardinal-50 model excelled, achieving higher accuracy and AUC values compared to other models. Specifically, ResNet-50 achieved accuracies of 0.78 for benign, 0.67 for malignant, and 0.73 for normal cases, while DeepCardinal-50 achieved 0.76, 0.63, and 0.90 respectively. These results highlight our model's superior capability in breast tumor identification and classification.

CONCLUSION: The automatic detection of benign and malignant breast tumors using deep learning can rapidly and accurately identify breast tumor types at an early stage, which is crucial for the early diagnosis and treatment of malignant breast tumors.

PMID:40098633 | PMC:PMC11911202 | DOI:10.3389/fonc.2024.1461542

Categories: Literature Watch

Rediscovering histology - the application of artificial intelligence in inflammatory bowel disease histologic assessment

Tue, 2025-03-18 06:00

Therap Adv Gastroenterol. 2025 Mar 17;18:17562848251325525. doi: 10.1177/17562848251325525. eCollection 2025.

ABSTRACT

Integrating artificial intelligence (AI) into histologic disease assessment is transforming the management of inflammatory bowel disease (IBD). AI-aided histology enables precise, objective evaluations of disease activity by analysing whole-slide images, facilitating accurate predictions of histologic remission (HR) in ulcerative colitis and Crohn's disease. Additionally, AI shows promise in predicting adverse outcomes and therapeutic responses, making it a promising tool for clinical practice and clinical trials. By leveraging advanced algorithms, AI enhances diagnostic accuracy, reduces assessment variability and streamlines histological workflows in clinical settings. In clinical trials, AI aids in assessing histological endpoints, enabling real-time analysis, standardising evaluations and supporting adaptive trial designs. Recent advancements are further refining AI-aided digital pathology in IBD. New developments in multimodal AI models integrating clinical, endoscopic, histologic and molecular data pave the way for a comprehensive approach to precision medicine in IBD. Automated assessment of intestinal barrier healing - a deeper level of healing beyond endoscopic and HR - shows promise for improved outcome prediction and patient management. Preliminary evidence also suggests that AI applied to colitis-associated neoplasia can aid in the detection, characterisation and molecular profiling of lesions, holding potential for enhanced dysplasia management and organ-sparing approaches. Although challenges remain in standardisation, validation through randomised controlled trials, and ethical considerations, AI is poised to revolutionise IBD management by advancing towards a more personalised and efficient care model, even if the path to full clinical implementation may be lengthy. The transformative impact of AI on IBD care is already shining through.

PMID:40098604 | PMC:PMC11912177 | DOI:10.1177/17562848251325525

Categories: Literature Watch

Lit-OTAR Framework for Extracting Biological Evidences from Literature

Mon, 2025-03-17 06:00

Bioinformatics. 2025 Mar 17:btaf113. doi: 10.1093/bioinformatics/btaf113. Online ahead of print.

ABSTRACT

SUMMARY: The lit-OTAR framework, developed through a collaboration between Europe PMC and Open Targets, leverages deep learning to revolutionise drug discovery by extracting evidence from scientific literature for drug target identification and validation. This novel framework combines Named Entity Recognition (NER), for identifying gene/protein (target), disease, organism, and chemical/drug mentions within scientific texts, with entity normalisation, which maps these entities to databases such as Ensembl, the Experimental Factor Ontology (EFO), and ChEMBL. Continuously operational, it has processed over 39 million abstracts and 4.5 million full-text articles and preprints to date, identifying more than 48.5 million unique associations (>29.9 m distinct target-disease, 11.8 m distinct target-drug, and 8.3 m distinct disease-drug relationships) that help accelerate the drug discovery process and scientific research.

AVAILABILITY AND IMPLEMENTATION: The results are accessible through Europe PMC's SciLite web app (https://europepmc.org/) and its annotations API (https://europepmc.org/annotationsapi), as well as via the Open Targets Platform (https://platform.opentargets.org/). The daily pipeline is available at https://github.com/ML4LitS/otar-maintenance, and the Open Targets ETL processes are available at https://github.com/opentargets.
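
A quick sketch of pulling these annotations for one article via the Europe PMC Annotations API linked above. The endpoint, parameter names, and response fields follow the public documentation as best we recall them; treat them as assumptions and verify against the API reference before use.

```python
# Fetch NER annotations for one article (endpoint/params assumed from public docs).
import requests

url = "https://www.ebi.ac.uk/europepmc/annotations_api/annotationsByArticleIds"
params = {
    "articleIds": "MED:40097274",   # PMID prefixed with its source (this entry, as an example)
    "type": "Gene_Proteins",        # one of the NER types mentioned in the abstract
    "format": "JSON",
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
for article in resp.json():
    for ann in article.get("annotations", []):
        # "exact" is the matched text; tags carry the normalised database URIs
        print(ann.get("exact"), "->", [t.get("uri") for t in ann.get("tags", [])])
```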

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:40097274 | DOI:10.1093/bioinformatics/btaf113

Categories: Literature Watch

H2GnnDTI: hierarchical heterogeneous graph neural networks for drug target interaction prediction

Mon, 2025-03-17 06:00

Bioinformatics. 2025 Mar 17:btaf117. doi: 10.1093/bioinformatics/btaf117. Online ahead of print.

ABSTRACT

MOTIVATION: Identifying drug target interactions is a crucial step in drug repurposing and drug discovery. The significant increase in demand and the expense of experimentally identifying drug target interactions necessitate computational tools for automated prediction and comprehension of drug target interactions. Despite recent advancements, current methods fail to fully leverage the hierarchical information in drug target interactions.

RESULTS: Here we introduce H2GnnDTI, a novel two-level hierarchical heterogeneous graph learning model to predict drug target interactions, by integrating the structures of drugs and proteins via a low-level view GNN (LGNN) and a high-level view GNN (HGNN). The hierarchical graph consists of high-level heterogeneous nodes representing drugs and proteins, connected by edges representing known DTIs. Each drug or protein node is further detailed in a low-level graph, where nodes represent molecules within each drug or amino acids within each protein, accompanied by their respective chemical descriptors. Two distinct low-level graph neural networks are first deployed to capture structural and chemical features specific to drugs and proteins from these low-level graphs. Subsequently, a high-level graph encoder is employed to comprehensively capture and merge interactive features pertaining to drugs and proteins from the high-level graph. The high-level encoder incorporates a structure and attribute information fusion module designed to explicitly integrate representations acquired from both a feature encoder and a graph encoder, facilitating consensus representation learning. Extensive experiments conducted on three benchmark datasets have shown that our proposed H2GnnDTI model consistently outperforms state-of-the-art deep learning methods.
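
To make the two-level idea concrete, here is a minimal sketch (not the authors' code): a low-level GNN embeds each drug/protein graph, and the embeddings become node features of a high-level DTI graph whose edges are scored for interaction. Dimensions, toy graphs, and the dot-product scorer are assumptions; the paper uses two distinct low-level GNNs, only one is shown for brevity.

```python
# Two-level hierarchical GNN sketch with PyTorch Geometric (illustrative only).
import torch
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.data import Data, Batch

class LowLevelGNN(torch.nn.Module):
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.c1, self.c2 = GCNConv(in_dim, hid), GCNConv(hid, hid)
    def forward(self, batch: Batch):
        h = self.c1(batch.x, batch.edge_index).relu()
        h = self.c2(h, batch.edge_index).relu()
        return global_mean_pool(h, batch.batch)      # one embedding per graph

class HighLevelGNN(torch.nn.Module):
    def __init__(self, hid=64):
        super().__init__()
        self.c1 = GCNConv(hid, hid)
    def score(self, z, edge_index):
        h = self.c1(z, edge_index).relu()
        src, dst = edge_index
        return (h[src] * h[dst]).sum(-1)             # dot-product edge score

# Toy data: 3 drugs + 2 proteins, each a small low-level graph (random features)
graphs = [Data(x=torch.randn(5, 16), edge_index=torch.randint(0, 5, (2, 8)))
          for _ in range(5)]
z = LowLevelGNN(16)(Batch.from_data_list(graphs))    # (5, 64) high-level node features
dti_edges = torch.tensor([[0, 1, 2], [3, 4, 3]])     # known drug->protein links
print(HighLevelGNN().score(z, dti_edges))
```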

AVAILABILITY AND IMPLEMENTATION: The codes are freely available at https://github.com/LiminLi-xjtu/H2GnnDTI.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:40097269 | DOI:10.1093/bioinformatics/btaf117

Categories: Literature Watch

Development of an abdominal acupoint localization system based on AI deep learning

Mon, 2025-03-17 06:00

Zhongguo Zhen Jiu. 2025 Mar 12;45(3):391-396. doi: 10.13703/j.0255-2930.20240207-0003. Epub 2024 Oct 28.

ABSTRACT

This study aims to develop an abdominal acupoint localization system based on computer vision and convolutional neural networks (CNNs). To address the challenge of abdominal acupoint localization, a multi-task CNN architecture was constructed and trained to locate Shenque (CV8) and the human body boundary. Based on the identified Shenque (CV8), the system further deduces the positions of four acupoints: Shangwan (CV13), Qugu (CV2), and bilateral Daheng (SP15). An affine transformation matrix is applied to accurately map image coordinates to an acupoint template space, achieving precise localization of abdominal acupoints. Testing has verified that this system can accurately identify and locate abdominal acupoints in images. The development of this localization system provides technical support for TCM remote education, diagnostic assistance, and advanced TCM equipment, such as intelligent acupuncture robots, facilitating the standardization and intelligent advancement of acupuncture.
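
A sketch of the template-mapping step: given three corresponding landmarks (e.g., CV8 plus two body-boundary points), fit an affine transform and project template acupoint coordinates into the image. The landmark choice and all coordinates are illustrative placeholders, not clinical data.

```python
# Affine mapping from acupoint template space to image space (illustrative values).
import cv2
import numpy as np

# Corresponding points: template space -> image space (hypothetical)
template_pts = np.float32([[0, 0], [0, -40], [30, 0]])      # CV8, a point above, a point right
image_pts = np.float32([[412, 305], [410, 221], [475, 308]])

M = cv2.getAffineTransform(template_pts, image_pts)         # 2x3 affine matrix

# Template coordinates of the target acupoints (illustrative, not clinical data)
template_acupoints = {"CV13": (0, -55), "CV2": (0, 70), "SP15-L": (-42, 0), "SP15-R": (42, 0)}
for name, (tx, ty) in template_acupoints.items():
    x, y = M @ np.array([tx, ty, 1.0])                      # homogeneous coords
    print(f"{name}: image position ({x:.0f}, {y:.0f})")
```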

PMID:40097227 | DOI:10.13703/j.0255-2930.20240207-0003

Categories: Literature Watch

Artificial intelligence for predicting interstitial fibrosis and tubular atrophy using diagnostic ultrasound imaging and biomarkers

Mon, 2025-03-17 06:00

BMJ Health Care Inform. 2025 Mar 17;32(1):e101192. doi: 10.1136/bmjhci-2024-101192.

ABSTRACT

BACKGROUND: Chronic kidney disease (CKD) is a global health concern characterised by irreversible renal damage that is often assessed using invasive renal biopsy. Accurate evaluation of interstitial fibrosis and tubular atrophy (IFTA) is crucial for CKD management. This study aimed to leverage machine learning (ML) models to predict IFTA using a combination of ultrasonography (US) images and patient biomarkers.

METHODS: We retrospectively collected US images and biomarkers from 632 patients with CKD across three hospitals. The data were subjected to pre-processing, exclusion of sub-optimal images, and feature extraction using a dual-path convolutional neural network. Various ML models, including XGBoost, random forest and logistic regression, were trained and validated using fivefold cross-validation.
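
A minimal sketch of the tabular stage described above: CNN-derived image features are concatenated with biomarkers and several classifiers are compared under fivefold cross-validation. Feature dimensions, biomarker columns, and the random data are assumptions.

```python
# Feature-fusion + fivefold CV sketch (all data and dimensions are stand-ins).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
n = 632
img_feats = rng.normal(size=(n, 128))     # dual-path CNN features (hypothetical dim)
biomarkers = rng.normal(size=(n, 8))      # e.g., creatinine, eGFR (hypothetical columns)
y = rng.integers(0, 2, size=n)            # IFTA grade, binarized for illustration

X = np.hstack([img_feats, biomarkers])    # combined representation
for name, clf in [("logreg", LogisticRegression(max_iter=5000)),
                  ("rf", RandomForestClassifier(n_estimators=300)),
                  ("xgb", XGBClassifier(n_estimators=300))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUROC {auc.mean():.3f} +/- {auc.std():.3f}")
```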

RESULTS: The dataset was divided into training and test datasets. For image-level IFTA classification, the best performance was achieved by combining US image features and patient biomarkers, with logistic regression yielding an area under the receiver operating characteristic curve (AUROC) of 99%. At the patient level, logistic regression combining US image features and biomarkers provided an AUROC of 96%. Models trained solely on US image features or biomarkers also exhibited high performance, with AUROC exceeding 80%.

CONCLUSION: Our artificial intelligence-based approach to IFTA classification demonstrated high accuracy and AUROC across various ML models. By leveraging patient biomarkers alone, this method offers a non-invasive and robust tool for early CKD assessment, demonstrating that biomarkers alone may suffice for accurate predictions without the added complexity of image-derived features.

PMID:40097202 | DOI:10.1136/bmjhci-2024-101192

Categories: Literature Watch

Magnetic resonance imaging-based radiation treatment plans for dogs may be feasible with the use of generative adversarial networks

Mon, 2025-03-17 06:00

Am J Vet Res. 2025 Mar 17:1-8. doi: 10.2460/ajvr.24.08.0248. Online ahead of print.

ABSTRACT

OBJECTIVE: The purpose of this research was to examine the feasibility of utilizing generative adversarial networks (GANs) to generate accurate pseudo-CT images for dogs.

METHODS: This study used standard head CT images and contrast-enhanced transverse T1-weighted 3-D fast spoiled gradient echo head MRI images from 45 nonbrachycephalic dogs that received treatment between 2014 and 2023. Two conditional GANs (CGANs) were used to generate the pseudo-CT images: one with a U-Net generator and a PatchGAN discriminator, and another with a residual neural network (ResNet) U-Net generator and a ResNet discriminator.

RESULTS: The CGAN with a ResNet U-Net generator and ResNet discriminator had an average mean absolute error of 109.5 ± 153.7 HU, average peak signal-to-noise ratio of 21.2 ± 4.31 dB, normalized mutual information of 0.89 ± 0.05, and dice similarity coefficient of 0.91 ± 0.12. The dice similarity coefficient for the bone was 0.71 ± 0.17. Qualitative results indicated that the most common ranking was "slightly similar" for both models. The CGAN with a ResNet U-Net generator and ResNet discriminator produced more accurate pseudo-CT images than the CGAN with a U-Net generator and PatchGAN discriminator.
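
For reference, the reported similarity metrics can be computed directly from co-registered HU volumes; here is a plain-NumPy sketch with stand-in volumes and an assumed dynamic range and bone threshold.

```python
# Pseudo-CT vs. real CT evaluation metrics (inputs assumed co-registered, in HU).
import numpy as np

def mae_hu(ct, pseudo):                      # mean absolute error in HU
    return np.abs(ct - pseudo).mean()

def psnr(ct, pseudo, data_range=4000.0):     # HU span as dynamic range (assumption)
    mse = np.mean((ct - pseudo) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):                    # e.g., bone masks thresholded at 300 HU
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2 * inter / (mask_a.sum() + mask_b.sum())

ct = np.random.normal(0, 500, (64, 64, 64))          # stand-in volumes
pseudo = ct + np.random.normal(0, 100, ct.shape)
print(mae_hu(ct, pseudo), psnr(ct, pseudo), dice(ct > 300, pseudo > 300))
```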

CONCLUSIONS: The study concludes that CGAN can generate relatively accurate pseudo-CT images but suggests exploring alternative GAN extensions.

CLINICAL RELEVANCE: Implementing generative learning into veterinary radiation therapy planning demonstrates the potential to reduce imaging costs and time.

PMID:40096825 | DOI:10.2460/ajvr.24.08.0248

Categories: Literature Watch

Optimized attention-enhanced U-Net for autism detection and region localization in MRI

Mon, 2025-03-17 06:00

Psychiatry Res Neuroimaging. 2025 Mar 14;349:111970. doi: 10.1016/j.pscychresns.2025.111970. Online ahead of print.

ABSTRACT

Autism spectrum disorder (ASD) is a neurodevelopmental condition that affects a child's cognitive and social skills, often diagnosed only after symptoms appear around age 2. Leveraging MRI for early ASD detection can improve intervention outcomes. This study proposes a framework for autism detection and region localization using an optimized deep learning approach with attention mechanisms. The pipeline includes MRI image collection, pre-processing (bias field correction, histogram equalization, artifact removal, and non-local mean filtering), and autism classification with a Symmetric Structured MobileNet with Attention Mechanism (SSM-AM). Enhanced by Refreshing Awareness-aided Election-Based Optimization (RA-EBO), SSM-AM achieves robust classification. Abnormality region localization utilizes a Multiscale Dilated Attention-based Adaptive U-Net (MDA-AUnet) further optimized by RA-EBO. Experimental results demonstrate that our proposed model outperforms existing methods, achieving an accuracy of 97.29%, sensitivity of 97.27%, specificity of 97.36%, and precision of 98.98%, significantly improving classification and localization performance. These results highlight the potential of our approach for early ASD diagnosis and targeted interventions. The datasets utilized for this work are publicly available at https://fcon_1000.projects.nitrc.org/indi/abide/.
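
A hedged sketch of the pre-processing chain named above (bias field correction, histogram equalization, non-local means filtering) using SimpleITK and OpenCV; the file path and filter parameters are placeholders, and the proprietary SSM-AM/MDA-AUnet stages are not reproduced.

```python
# MRI pre-processing sketch: N4 bias correction, then slice-wise equalization + NLM.
import SimpleITK as sitk
import cv2
import numpy as np

img = sitk.ReadImage("subject_t1.nii.gz", sitk.sitkFloat32)   # hypothetical path

# N4 bias field correction over an Otsu head mask
mask = sitk.OtsuThreshold(img, 0, 1)
corrected = sitk.N4BiasFieldCorrection(img, mask)

# Slice-wise histogram equalization + non-local means, via uint8 conversion
vol = sitk.GetArrayFromImage(corrected)
vol8 = cv2.normalize(vol, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
processed = np.stack([
    cv2.fastNlMeansDenoising(cv2.equalizeHist(sl), None, 10) for sl in vol8
])
```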

PMID:40096789 | DOI:10.1016/j.pscychresns.2025.111970

Categories: Literature Watch

Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis

Mon, 2025-03-17 06:00

Psychiatry Res Neuroimaging. 2025 Mar 13;349:111969. doi: 10.1016/j.pscychresns.2025.111969. Online ahead of print.

ABSTRACT

Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ that helps to understand the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique that measures and maps brain activity by detecting changes in blood flow and oxygenation. This paper correlates the results of an explainable deep learning approach with group-level analysis to identify the significant regions in SZ patients for both structural MRI (sMRI) and fMRI data. The study found that Grad-CAM heat maps show clear visualization in the frontal lobe for the classification of SZ versus controls (CN), with 97.33% accuracy. The group difference analysis reveals that sMRI data show intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients. The group difference between SZ and CN during n-back tasks in the fMRI data likewise indicates significant voxel activation in the frontal cortex. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in planning treatment.
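
For readers unfamiliar with the explainability step, here is a minimal Grad-CAM sketch of the kind used above to visualize which regions drive a classifier; the model, target layer, and input are stand-ins, not the paper's network.

```python
# Minimal Grad-CAM: weight last-conv-block activations by pooled gradients.
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=2).eval()   # stand-in SZ-vs-CN classifier
feats, grads = {}, {}
layer = model.layer4                                        # last conv block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)         # stand-in brain slice
score = model(x)[0, 1]                                      # "SZ" logit
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)               # per-channel weights
cam = torch.relu((w * feats["a"]).sum(1)).squeeze()         # (H', W') heat map
cam = cam / (cam.max() + 1e-8)                              # normalize for overlay
```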

PMID:40096788 | DOI:10.1016/j.pscychresns.2025.111969

Categories: Literature Watch

Deep learning algorithm classification of tympanostomy tube images from a heterogenous pediatric population

Mon, 2025-03-17 06:00

Int J Pediatr Otorhinolaryngol. 2025 Mar 13;192:112311. doi: 10.1016/j.ijporl.2025.112311. Online ahead of print.

ABSTRACT

IMPORTANCE: The ability to augment routine post-operative tube check appointments with at-home digital otoscopes and deep learning AI could improve health care access as well as reduce financial and time burden on families.

OBJECTIVE: Tympanostomy tube checks are necessary but are also burdensome to families and impact access to care for other children seeking otolaryngologic care. Telemedicine care would be ideal, but ear exams are limited. This study aimed to assess whether an artificial intelligence (AI) algorithm trained with images from an over-the-counter digital otoscope can accurately assess tube status as in place and patent, extruded, or absent.

DESIGN: A prospective study of children aged 10 months to 10 years being seen for tympanostomy tube follow-up was carried out in three clinics from May-November 2023. A smartphone otoscope was used by non-MDs to capture images of the ear canal and tympanic membranes. Pediatric otolaryngologist exam findings (tube in place, extruded, absent) were used as a gold standard. A deep learning algorithm was trained and tested with these images. Statistical analysis was performed to determine the performance of the algorithm.

SETTING: 3 urban, pediatric otolaryngology clinics within an academic medical center.

PARTICIPANTS: Pediatric patients aged 10 months to 10 years with a past or current history of tympanostomy tubes were recruited. Patients were excluded from this study if they had a history of myringoplasty, tympanoplasty, or cholesteatoma.

MAIN OUTCOME MEASURE: Calculated accuracy, sensitivity, and specificity of the deep learning algorithm in classifying tube status as in place and patent, extruded into the external ear canal, or absent.

RESULTS: A heterogeneous group of 69 children yielded 296 images. Multiple types of tympanostomy tubes were included. The image capture success rate was 90.8% across all subjects and 80% in children with developmental delay/autism spectrum disorder. The classification accuracy was 97.1%, sensitivity 97.1%, and specificity 98.6%.

CONCLUSION: A deep learning algorithm was trained with images from a representative pediatric population. It was highly accurate, sensitive, and specific. These results suggest that AI technology could be used to augment tympanostomy tube checks.

PMID:40096786 | DOI:10.1016/j.ijporl.2025.112311

Categories: Literature Watch

Extraction of fetal heartbeat locations in abdominal phonocardiograms using deep attention transformer

Mon, 2025-03-17 06:00

Comput Biol Med. 2025 Mar 16;189:110002. doi: 10.1016/j.compbiomed.2025.110002. Online ahead of print.

ABSTRACT

Assessing fetal health traditionally involves techniques like echocardiography, which require skilled professionals and specialized equipment, making them unsuitable for low-resource settings. An emerging alternative is phonocardiography (PCG), which offers affordability but suffers from challenges related to accuracy and complexity. To address these limitations, we propose a deep learning model, Fetal Heart Sounds U-NetR (FHSU-NETR), capable of extracting both fetal and maternal heart rates directly from raw PCG signals. FHSU-NETR is designed for practical implementation in various healthcare environments, enhancing the accessibility and reliability of fetal monitoring. Owing to its capacity to model long-range interactions and capture global context, the proposed pipeline utilizes the transformer's self-attention mechanism. Validated with data from 20 normal subjects, including a case of fetal tachycardia arrhythmia, FHSU-NETR demonstrated exceptional performance. It accurately identified most fetal heartbeat locations, with a low mean difference in fetal heart rate estimation (-2.55±10.25 bpm) across the entire dataset, and successfully detected the arrhythmia case. Similarly, FHSU-NETR showed a low mean difference in maternal heart rate estimation (-1.15±5.76 bpm) compared to the ground-truth maternal ECG. The model's ability to identify the arrhythmia case within the dataset underscores its potential for real-world application and generalization. By leveraging the capabilities of deep learning, the proposed model holds promise to reduce reliance on medical experts for the interpretation of extensive PCG recordings, thereby enhancing efficiency in clinical settings.
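
As a small worked example of the final step: once beat locations are extracted from the PCG, heart rate follows from the inter-beat intervals. The beat times below are illustrative only.

```python
# Heart rate from detected beat locations (illustrative timestamps).
import numpy as np

fetal_beats = np.array([0.00, 0.42, 0.85, 1.27, 1.70, 2.12])   # seconds (assumed)
ibi = np.diff(fetal_beats)                                     # inter-beat intervals
fhr = 60.0 / ibi.mean()
print(f"fetal HR ~ {fhr:.0f} bpm")                             # ~141 bpm here
```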

PMID:40096767 | DOI:10.1016/j.compbiomed.2025.110002

Categories: Literature Watch

EEG-based emotion recognition with autoencoder feature fusion and MSC-TimesNet model

Mon, 2025-03-17 06:00

Comput Methods Biomech Biomed Engin. 2025 Mar 17:1-18. doi: 10.1080/10255842.2025.2477801. Online ahead of print.

ABSTRACT

Electroencephalography (EEG) signals are widely employed in emotion recognition due to their spontaneity and robustness against artifacts. However, existing methods are often unable to fully integrate high-dimensional features and capture changing patterns in time series when processing EEG signals, which results in limited classification performance. This paper proposes an emotion recognition method (AEF-DL) based on autoencoder fusion features and the MSC-TimesNet model. First, we segment the EEG signal in five frequency bands into time windows of 0.5 s, extract power spectral density (PSD) features and differential entropy (DE) features, and fuse them with an autoencoder to enhance feature representation. Building on the TimesNet model and incorporating multi-scale convolutional kernels, this paper proposes an innovative deep learning model (MSC-TimesNet) for processing the fused features. MSC-TimesNet efficiently extracts inter-period and intra-period information. To validate the performance of the proposed method, we conducted systematic experiments on the public datasets DEAP and Dreamer. In subject-dependent experiments, the classification accuracies reached 98.97% and 95.71%, respectively; in subject-independent experiments, the accuracies reached 97.23% and 92.95%, respectively. These results demonstrate that the proposed method exhibits significant advantages over existing methods, highlighting its effectiveness and broad applicability in emotion recognition tasks.
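
A sketch of the two feature types fused above: per-band power spectral density (via Welch's method) and differential entropy, which for a Gaussian band-limited signal is 0.5·log(2πeσ²). The sampling rate, band edges, and random window are assumptions; the 0.5 s window follows the text.

```python
# Per-band PSD and differential entropy features for one 0.5 s EEG window.
import numpy as np
from scipy.signal import welch, butter, filtfilt

fs = 128                                        # sampling rate (assumed)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}
x = np.random.randn(fs // 2)                    # one 0.5 s window (stand-in signal)

for name, (lo, hi) in bands.items():
    f, pxx = welch(x, fs=fs, nperseg=len(x))
    psd = pxx[(f >= lo) & (f < hi)].mean()      # band PSD feature
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xb = filtfilt(b, a, x)                      # band-limited signal
    de = 0.5 * np.log(2 * np.pi * np.e * xb.var())   # differential entropy
    print(f"{name}: PSD={psd:.4f}, DE={de:.3f}")
```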

PMID:40096584 | DOI:10.1080/10255842.2025.2477801

Categories: Literature Watch

Dynamic glucose enhanced imaging using direct water saturation

Mon, 2025-03-17 06:00

Magn Reson Med. 2025 Mar 17. doi: 10.1002/mrm.30447. Online ahead of print.

ABSTRACT

PURPOSE: Dynamic glucose enhanced (DGE) MRI studies employ CEST or spin lock (CESL) to study glucose uptake. Currently, these methods are hampered by low effect size and sensitivity to motion. To overcome this, we propose to utilize exchange-based linewidth (LW) broadening of the direct water saturation (DS) curve of the water saturation spectrum (Z-spectrum) during and after glucose infusion (DS-DGE MRI).

METHODS: To estimate the glucose-infusion-induced LW changes (ΔLW), Bloch-McConnell simulations were performed for normoglycemia and hyperglycemia in blood, gray matter (GM), white matter (WM), CSF, and malignant tumor tissue. Whole-brain DS-DGE imaging was implemented at 3 T using dynamic Z-spectral acquisitions (1.2 s per offset frequency, 38 s per spectrum) and assessed on four brain tumor patients using infusion of 35 g of D-glucose. To assess ΔLW, a deep learning-based Lorentzian fitting approach was used on voxel-based DS spectra acquired before, during, and post-infusion. Area-under-the-curve (AUC) images, obtained from the dynamic ΔLW time curves, were compared qualitatively to perfusion-weighted imaging parametric maps.
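
To illustrate the linewidth quantity being tracked, here is a sketch that fits a single Lorentzian to a synthetic DS spectrum with scipy; the offsets, amplitudes, and noise are stand-ins for the deep-learning-based Lorentzian fitting used in the paper.

```python
# Fit the DS dip of a Z-spectrum with a Lorentzian to extract the linewidth (LW).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(dw, amp, lw, dw0):
    """Dip shape: amp * (lw/2)^2 / ((dw - dw0)^2 + (lw/2)^2)."""
    return amp * (lw / 2) ** 2 / ((dw - dw0) ** 2 + (lw / 2) ** 2)

offsets = np.linspace(-3, 3, 61)                       # ppm (assumed sampling)
z = 1 - lorentzian(offsets, 0.9, 1.2, 0.0)             # simulated Z-spectrum
z += np.random.normal(0, 0.005, z.shape)               # measurement noise

popt, _ = curve_fit(lambda dw, a, lw, c: 1 - lorentzian(dw, a, lw, c),
                    offsets, z, p0=(0.8, 1.0, 0.0))
print(f"fitted linewidth: {popt[1]:.3f} ppm")          # compare pre/post infusion for dLW
```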

RESULTS: In simulations, ΔLW was 1.3%, 0.30%, 0.29/0.34%, 7.5%, and 13% in arterial blood, venous blood, GM/WM, malignant tumor tissue, and CSF, respectively. In vivo, ΔLW was approximately 1% in GM/WM, 5% to 20% for different tumor types, and 40% in CSF. The resulting DS-DGE AUC maps clearly outlined lesion areas.

CONCLUSIONS: DS-DGE MRI is highly promising for assessing D-glucose uptake. Initial results in brain tumor patients show high-quality AUC maps of glucose-induced line broadening and DGE-based lesion enhancement similar and/or complementary to perfusion-weighted imaging.

PMID:40096575 | DOI:10.1002/mrm.30447

Categories: Literature Watch

Accelerated EPR imaging using deep learning denoising

Mon, 2025-03-17 06:00

Magn Reson Med. 2025 Mar 17. doi: 10.1002/mrm.30473. Online ahead of print.

ABSTRACT

PURPOSE: Trityl OXO71-based pulse electron paramagnetic resonance imaging (EPRI) is an excellent technique to obtain partial pressure of oxygen (pO2) maps in tissues. In this study, we used deep learning techniques to denoise 3D EPR amplitude and pO2 maps.

METHODS: All experiments were performed using a 25 mT EPR imager, JIVA-25®. The MONAI implementation of four neural networks (autoencoder, Attention UNet, UNETR, and UNet) was tested, and the best model (UNet) was then enhanced with joint bilateral filters (JBF). The dataset comprised 227 3D images (56 in vivo and 171 in vitro): 159 images for training, 45 for validation, and 23 for testing. UNet with 1, 2, and 3 JBF layers was tested to improve image SNR, focusing on the multiscale structural similarity index measure and edge sensitivity preservation. The trained algorithm was tested using acquisitions with 15, 30, and 150 averages in vitro with a sealed deoxygenated OXO71 phantom and in vivo with fibrosarcoma tumors grown in a hind leg of C3H mice.
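
A minimal sketch of a MONAI UNet set up for 3D denoising as in the comparison above, trained with an MSE objective on noisy/clean volume pairs. The channel/stride configuration, volume sizes, and synthetic tensors are assumptions, and the JBF post-processing stage is omitted.

```python
# MONAI 3D UNet denoising skeleton (synthetic tensors stand in for EPR volumes).
import torch
from monai.networks.nets import UNet

model = UNet(spatial_dims=3, in_channels=1, out_channels=1,
             channels=(16, 32, 64, 128), strides=(2, 2, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

noisy = torch.randn(2, 1, 64, 64, 64)          # 15-average-like input (stand-in)
clean = torch.randn(2, 1, 64, 64, 64)          # 150-average-like target (stand-in)

for step in range(3):                          # skeleton training loop
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
    print(step, loss.item())
```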

RESULTS: We demonstrate that UNet with 2 JBF layers (UNet+JBF2) provides the best outcome. With the UNet+JBF2 model, 15-shot amplitude maps achieve higher SNR than 150-shot pre-filter maps, both in phantoms and in tumors, thereby allowing 10-fold accelerated imaging. The trained algorithm likewise improves SNR in pO2 maps.

CONCLUSIONS: We demonstrate the application of deep learning techniques to EPRI denoising. Higher SNR will bring the EPRI technique one step closer to clinics.

PMID:40096518 | DOI:10.1002/mrm.30473

Categories: Literature Watch

YOLO-ACE: Enhancing YOLO with Augmented Contextual Efficiency for Precision Cotton Weed Detection

Mon, 2025-03-17 06:00

Sensors (Basel). 2025 Mar 6;25(5):1635. doi: 10.3390/s25051635.

ABSTRACT

Effective weed management is essential for protecting crop yields in cotton production, yet conventional deep learning approaches often falter in detecting small or occluded weeds and can be restricted by large parameter counts. To tackle these challenges, we propose YOLO-ACE, an advanced extension of YOLOv5s, which was selected for its optimal balance of accuracy and speed, making it well suited for agricultural applications. YOLO-ACE integrates a Context Augmentation Module (CAM) and Selective Kernel Attention (SKAttention) to capture multi-scale features and dynamically adjust the receptive field, while a decoupled detection head separates classification from bounding box regression, enhancing overall efficiency. Experiments on the CottonWeedDet12 (CWD12) dataset show that YOLO-ACE achieves notable mAP@0.5 and mAP@0.5:0.95 scores of 95.3% and 89.5%, respectively, surpassing previous benchmarks. Additionally, we tested the model's transferability and generalization across different crops and environments using the CropWeed dataset, where it achieved a competitive mAP@0.5 of 84.3%, further showcasing its robust ability to adapt to diverse conditions. These results confirm that YOLO-ACE combines precise detection with parameter efficiency, meeting the exacting demands of modern cotton weed management.

PMID:40096500 | DOI:10.3390/s25051635

Categories: Literature Watch

Quality of Experience (QoE) in Cloud Gaming: A Comparative Analysis of Deep Learning Techniques via Facial Emotions in a Virtual Reality Environment

Mon, 2025-03-17 06:00

Sensors (Basel). 2025 Mar 5;25(5):1594. doi: 10.3390/s25051594.

ABSTRACT

Cloud gaming has rapidly transformed the gaming industry, allowing users to play games on demand from anywhere without the need for powerful hardware. Cloud service providers are striving to enhance user Quality of Experience (QoE) using traditional assessment methods. However, these traditional methods often fail to capture the actual user QoE because some users are not serious about providing feedback regarding cloud services. Additionally, some players, even after receiving services as per the Service Level Agreement (SLA), claim that they are not receiving services as promised. This poses a significant challenge for cloud service providers in accurately identifying QoE and improving actual services. In this paper, we compare our previously proposed technique, which utilizes a deep learning (DL) model to assess QoE through players' facial expressions during cloud gaming sessions in a virtual reality (VR) environment. The EmotionNET technique is based on a convolutional neural network (CNN) architecture. We compared the EmotionNET technique with three other DL techniques, namely ConvoNEXT, EfficientNET, and Vision Transformer (ViT). We trained the EmotionNET, ConvoNEXT, EfficientNET, and ViT models on our custom-developed dataset, achieving 98.9% training accuracy and 87.8% validation accuracy with the EmotionNET model. Based on the training and comparison results, it is evident that the EmotionNET technique predicts and performs better than the other techniques. Finally, we compared the EmotionNET results on two network datasets (WiFi and mobile data). Our findings indicate that facial expressions are strongly correlated with QoE.

PMID:40096493 | DOI:10.3390/s25051594

Categories: Literature Watch

Landsat Time Series Reconstruction Using a Closed-Form Continuous Neural Network in the Canadian Prairies Region

Mon, 2025-03-17 06:00

Sensors (Basel). 2025 Mar 6;25(5):1622. doi: 10.3390/s25051622.

ABSTRACT

The Landsat archive stands as one of the most critical datasets for studying landscape change, offering over 50 years of imagery. This invaluable historical record facilitates the monitoring of land cover and land use changes, helping to detect trends and dynamics in the Earth system. However, the relatively low temporal frequency and irregular clear-sky observations of Landsat data pose significant challenges for multi-temporal analysis. To address these challenges, this research explores the application of a closed-form continuous-depth neural network (CFC) integrated within a recurrent neural network (RNN), called CFC-mmRNN, for reconstructing historical Landsat time series in the Canadian Prairies region from 1985 to the present. The CFC method was evaluated against the continuous change detection (CCD) method, widely used for Landsat time series reconstruction and change detection. The findings indicate that the CFC method significantly outperforms CCD across all spectral bands, achieving higher accuracy with improvements ranging from 33% to 42% and providing more accurate dense time series reconstructions. The CFC approach excels in handling the irregular and sparse time series characteristic of Landsat data, offering improvements in capturing complex temporal patterns. This study underscores the potential of leveraging advanced deep learning techniques like CFC to enhance the quality of reconstructed satellite imagery, thus supporting a wide range of remote sensing (RS) applications. Furthermore, this work opens up avenues for further optimization and application of CFC in higher-density time series datasets such as MODIS and Sentinel-2, paving the way for improved environmental monitoring and forecasting.
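
To show why a continuous-depth cell suits irregular revisits, here is a schematic, time-gap-aware recurrent update in the spirit of a CfC cell: the gate depends on the elapsed time between observations, so long cloudy gaps decay the hidden state more. This is an illustrative simplification, not the paper's CFC-mmRNN; the band count and time gaps are assumptions.

```python
# Schematic CfC-style cell: time-gap-dependent gating for irregular time series.
import torch
import torch.nn as nn

class TinyCfCCell(nn.Module):
    def __init__(self, in_dim, hid):
        super().__init__()
        self.f = nn.Linear(in_dim + hid, hid)   # decay-rate head
        self.g = nn.Linear(in_dim + hid, hid)   # candidate-state head
    def forward(self, x, h, dt):
        """x: (B, in_dim), h: (B, hid), dt: (B, 1) days since last observation."""
        z = torch.cat([x, h], dim=-1)
        gate = torch.sigmoid(-nn.functional.softplus(self.f(z)) * dt)
        return gate * h + (1 - gate) * torch.tanh(self.g(z))   # longer gaps forget more

cell = TinyCfCCell(in_dim=6, hid=32)            # 6 Landsat reflectance bands (assumed)
h = torch.zeros(1, 32)
for x, dt in [(torch.randn(1, 6), torch.tensor([[16.]])),    # regular 16-day revisit
              (torch.randn(1, 6), torch.tensor([[112.]]))]:  # long cloudy gap
    h = cell(x, h, dt)
```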

PMID:40096481 | DOI:10.3390/s25051622

Categories: Literature Watch

Fault Diagnosis Method for Centrifugal Pumps in Nuclear Power Plants Based on a Multi-Scale Convolutional Self-Attention Network

Mon, 2025-03-17 06:00

Sensors (Basel). 2025 Mar 5;25(5):1589. doi: 10.3390/s25051589.

ABSTRACT

The health status of rotating machinery equipment in nuclear power plants is of paramount importance for ensuring the overall normal operation of the power plant system. In particular, significant failures in large rotating machinery equipment, such as main pumps, pose critical safety hazards to the system. Therefore, this paper takes pump equipment as a representative of rotating machinery in nuclear power plants and proposes a fault diagnosis method based on a multi-scale convolutional self-attention network for three types of faults: outer ring fracture, inner ring fracture, and rolling element pitting corrosion. Within the multi-scale convolutional self-attention network, a multi-scale hybrid feature complementarity mechanism is introduced. This mechanism leverages an adaptive encoder to capture deep feature information from the acoustic signals of rolling bearings and constructs a hybrid-scale feature set based on deep features and original signal characteristics in the time-frequency domain. This approach enriches the fault information present in the feature set and establishes a nonlinear mapping relationship between fault features and rolling bearing faults. The results demonstrate that, without significantly increasing model complexity or the volume of feature data, this method achieves a substantial increase in fault diagnosis accuracy, exceeding 99.5% under both vibration signal and acoustic signal conditions.

PMID:40096472 | DOI:10.3390/s25051589

Categories: Literature Watch
