Deep learning

Deep learning-driven multi-omics sequential diagnosis with Hybrid-OmniSeq: Unraveling breast cancer complexity

Wed, 2025-03-19 06:00

Technol Health Care. 2025 Mar;33(2):1099-1120. doi: 10.1177/09287329241296438. Epub 2024 Dec 4.

ABSTRACT

BACKGROUND: Breast cancer results from uncontrolled growth of breast tissue. Many diagnostic methods use multi-omics data to better understand the complexity of breast cancer.

OBJECTIVE: The new strategy laid out in this work, called "Hybrid-OmniSeq," is a deep learning-based multi-omics data analysis technology that uses molecular subtypes of breast cancer genes to increase the precision and effectiveness of breast cancer diagnosis.

METHODS: For preprocessing, the BC-VM procedure is utilized, and for molecular subtype analysis, the BC-MSA procedure is utilized. The implementation of Deep Neural Network (DNN) technology in conjunction with Sequential Forward Floating Selection (SFFS) and Truncated Singular Value Decomposition (TSVD) entropy enables adaptive learning from multi-omics gene data. Five machine learning classifiers are used for classification. Hybrid-OmniSeq uses a variety of machine learning classifiers in a thorough analytical process to achieve remarkable diagnostic accuracy. The deep learning-based multi-omics sequential approach was evaluated using METABRIC RNA-seq data sets of intrinsic subtypes of breast cancer.

RESULTS: According to test results, Logistic Regression (LR) achieved accuracies of 94.51% for ER (Estrogen Receptor) status, 96.33% for PR (Progesterone Receptor) status, and 92.3% for HER2 (Human Epidermal growth factor Receptor) status; Random Forest (RF) achieved 93.77% for ER status, 95.23% for PR status, and 93.4% for HER2 status.

CONCLUSION: LR and RF increase cancer detection accuracy for all subtypes when compared to alternative machine learning classifiers or the majority voting method, providing a comprehensive understanding of the underlying causes of breast cancer.
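
As a rough illustration of the classification stage described above, the sketch below combines TSVD dimensionality reduction, forward feature selection (sklearn's plain forward variant; the floating SFFS the paper names is available in mlxtend), and LR/RF classifiers joined by majority voting. The synthetic data and all sizes are placeholder assumptions, not the paper's pipeline.

```python
# Sketch of the classification stage: TSVD reduction, forward feature
# selection, and LR/RF combined by majority voting. Synthetic data stands
# in for METABRIC multi-omics features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=400, n_features=500, n_informative=40,
                           random_state=0)  # placeholder for omics features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# TSVD compresses the high-dimensional gene matrix; forward selection then
# keeps the most predictive components (mlxtend offers the floating variant).
reducer = TruncatedSVD(n_components=50, random_state=0)
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=20,
                                     direction="forward")

vote = VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                         ("rf", RandomForestClassifier(random_state=0))],
                        voting="hard")  # majority voting across classifiers
model = make_pipeline(reducer, selector, vote)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```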

PMID:40105178 | DOI:10.1177/09287329241296438

Categories: Literature Watch

Developing a method for predicting DNA nucleosomal sequences using deep learning

Wed, 2025-03-19 06:00

Technol Health Care. 2025 Mar;33(2):989-999. doi: 10.1177/09287329241297900. Epub 2024 Nov 20.

ABSTRACT

BACKGROUND: Deep learning excels at processing raw data because it automatically extracts and classifies high-level features. Despite biology's low popularity in data analysis, incorporating computer technology can improve biological research.

OBJECTIVE: To create a deep learning model that can identify nucleosomes from nucleotide sequences and to show that simpler models outperform more complicated ones in solving biological challenges.

METHODS: A classifier was created utilising deep learning and machine learning approaches. The final model consists of two convolutional layers, one max pooling layer, two fully connected layers, and a dropout regularisation layer. This structure was chosen on the basis of the 'less is frequently more' approach, which emphasises simple design without large hidden layers.

RESULTS: Experimental results show that deep learning methods, specifically deep neural networks, outperform typical machine learning algorithms for recognising nucleosomes. The simplified network architecture proved suitable without the requirement for numerous hidden neurons, resulting in effective network performance.

CONCLUSION: This study demonstrates that machine learning and other computational techniques may streamline and expedite the resolution of biological issues. The model helps identify nucleosomes and can be used in future research or labs. This study discusses the challenges of understanding and addressing simple biological problems with sophisticated computer technology and offers practical solutions for academic and economic sectors.
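
The abstract describes the architecture concretely enough for a sketch: two convolutional layers, one max-pooling layer, dropout, and two fully connected layers over one-hot encoded DNA. The PyTorch version below is a minimal interpretation; the 147-bp window, filter counts, and kernel sizes are assumptions.

```python
# Minimal sketch of the stated architecture: two conv layers, one max-pool,
# dropout, and two fully connected layers on one-hot DNA (4 channels).
import torch
import torch.nn as nn

class NucleosomeCNN(nn.Module):
    def __init__(self, seq_len: int = 147):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                  # dropout regularisation layer
            nn.Linear(64 * (seq_len // 2), 64), nn.ReLU(),
            nn.Linear(64, 2),                 # nucleosome vs. linker
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = NucleosomeCNN()
logits = model(torch.randn(8, 4, 147))  # batch of one-hot sequences
print(logits.shape)  # torch.Size([8, 2])
```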

PMID:40105177 | DOI:10.1177/09287329241297900

Categories: Literature Watch

Role of AI in empowering and redefining the oncology care landscape: perspective from a developing nation

Wed, 2025-03-19 06:00

Front Digit Health. 2025 Mar 4;7:1550407. doi: 10.3389/fdgth.2025.1550407. eCollection 2025.

ABSTRACT

Early diagnosis and accurate prognosis play a pivotal role in the clinical management of cancer and in preventing cancer-related mortality. The burgeoning population of Asia in general, and of South Asian countries like India in particular, poses significant challenges to the healthcare system. Regrettably, the demand for healthcare services in India far exceeds the available resources, resulting in overcrowded hospitals, prolonged wait times, and inadequate facilities. The scarcity of trained manpower in rural settings, lack of awareness, and low penetrance of screening programs further compound the problem. Artificial Intelligence (AI), driven by advancements in machine learning, deep learning, and natural language processing, can profoundly address these underlying shortcomings of the healthcare industry, all the more so for populous nations like India. With about 1.4 million cancer cases and 0.9 million deaths reported annually, India carries a significant cancer burden that surpasses that of several nations. Further, India's diverse and large ethnic population is a data goldmine for healthcare research. Under these circumstances, AI-assisted technology, coupled with digital health solutions, could support effective oncology care and reduce the economic burden of GDP loss, in terms of years of potential productive life lost (YPPLL), due to India's stupendous cancer burden. This review explores different aspects of cancer management, such as prevention, diagnosis, precision treatment, prognosis, and drug discovery, where AI has demonstrated promising clinical results. By harnessing the capabilities of AI in oncology research, healthcare professionals can enhance their ability to diagnose cancers at earlier stages, leading to more effective treatments and improved patient outcomes. With continued research and development, AI and digital health can play a transformative role in mitigating the challenges posed by the growing population and advancing the fight against cancer in India. Moreover, AI-driven technologies can assist in tailoring personalized treatment plans, optimizing therapeutic strategies, and supporting oncologists in making well-informed decisions. However, it is essential to ensure responsible implementation and address potential ethical and privacy concerns associated with using AI in healthcare.

PMID:40103737 | PMC:PMC11913822 | DOI:10.3389/fdgth.2025.1550407

Categories: Literature Watch

Magnetic resonance image generation using enhanced TransUNet in Temporomandibular disorder patients

Wed, 2025-03-19 06:00

Dentomaxillofac Radiol. 2025 Mar 18:twaf017. doi: 10.1093/dmfr/twaf017. Online ahead of print.

ABSTRACT

OBJECTIVES: Temporomandibular joint disorder (TMD) patients experience a variety of clinical symptoms, and magnetic resonance imaging (MRI) is the most effective tool for diagnosing temporomandibular joint (TMJ) disc displacement. This study aimed to develop a transformer-based deep learning model to generate T2-weighted (T2w) images from proton density-weighted (PDw) images, reducing MRI scan time for TMD patients.

METHODS: A dataset of 7,226 images from 178 patients who underwent TMJ MRI examinations was used. The proposed model employed a generative adversarial network framework with a TransUNet architecture as the generator for image translation. Additionally, a disc segmentation decoder was integrated to improve image quality in the TMJ disc region. The model performance was evaluated using metrics such as the structural similarity index measure (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID). Three experienced oral radiologists also performed a qualitative assessment through the mean opinion score (MOS).
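
Of the quantitative metrics listed, SSIM is the simplest to reproduce; the sketch below computes it with scikit-image on stand-in arrays (LPIPS and FID require pretrained networks, available e.g. in torchmetrics). The arrays and data range are placeholder assumptions.

```python
# Minimal SSIM evaluation sketch: compare a generated T2w slice against the
# acquired reference. Random arrays stand in for real MR images.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)   # acquired T2w slice
generated = reference + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)

# data_range must span the image intensity scale for a meaningful score.
score = structural_similarity(reference, generated, data_range=1.0)
print(f"SSIM: {score:.4f}")
```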

RESULTS: The model demonstrated high performance in generating T2w images from PDw images, achieving average SSIM, LPIPS, and FID values of 82.28%, 2.46, and 23.85, respectively, in the disc region. The model also obtained an average MOS score of 4.58, surpassing other models. Additionally, the model showed robust segmentation capabilities for the TMJ disc.

CONCLUSION: The proposed model using the transformer, complemented by an integrated disc segmentation task, demonstrated strong performance in MR image generation, both quantitatively and qualitatively. This suggests its potential clinical significance in reducing MRI scan times for TMD patients while maintaining high image quality.

PMID:40104864 | DOI:10.1093/dmfr/twaf017

Categories: Literature Watch

NucleoSeeker-precision filtering of RNA databases to curate high-quality datasets

Wed, 2025-03-19 06:00

NAR Genom Bioinform. 2025 Mar 18;7(1):lqaf021. doi: 10.1093/nargab/lqaf021. eCollection 2025 Mar.

ABSTRACT

The structural prediction of biomolecules via computational methods complements the often involved wet-lab experiments. Unlike protein structure prediction, RNA structure prediction remains a significant challenge in bioinformatics, primarily due to the scarcity of annotated RNA structure data and its varying quality. Many methods have used this limited data to train deep learning models but redundancy, data leakage and bad data quality hampers their performance. In this work, we present NucleoSeeker, a tool designed to curate high-quality, tailored datasets from the Protein Data Bank (PDB) database. It is a unified framework that combines multiple tools and streamlines an otherwise complicated process of data curation. It offers multiple filters at structure, sequence, and annotation levels, giving researchers full control over data curation. Further, we present several use cases. In particular, we demonstrate how NucleoSeeker allows the creation of a nonredundant RNA structure dataset to assess AlphaFold3's performance for RNA structure prediction. This demonstrates NucleoSeeker's effectiveness in curating valuable nonredundant tailored datasets to both train novel and judge existing methods. NucleoSeeker is very easy to use, highly flexible, and can significantly increase the quality of RNA structure datasets.

PMID:40104673 | PMC:PMC11915511 | DOI:10.1093/nargab/lqaf021

Categories: Literature Watch

A novel rotation and scale-invariant deep learning framework leveraging conical transformers for precise differentiation between meningioma and solitary fibrous tumor

Wed, 2025-03-19 06:00

J Pathol Inform. 2025 Feb 4;17:100422. doi: 10.1016/j.jpi.2025.100422. eCollection 2025 Apr.

ABSTRACT

Meningiomas, the most prevalent tumors of the central nervous system, can have overlapping histopathological features with solitary fibrous tumors (SFT), presenting a significant diagnostic challenge. Accurate differentiation between these two diagnoses is crucial for optimal medical management. Currently, immunohistochemistry and molecular techniques are the methods of choice for distinguishing between them; however, these techniques are expensive and not universally available. In this article, we propose a rotational and scale-invariant deep learning framework to enable accurate discrimination between these two tumor types. The proposed framework employs a novel architecture of conical transformers to capture both global and local imaging markers from whole-slide images, accommodating variations across different magnification scales. A weighted majority voting schema is utilized to combine individual scale decisions, ultimately producing a complementary and more accurate diagnostic outcome. A dataset comprising 92 patients (46 with meningioma and 46 with SFT) was used for evaluation. The experimental results demonstrate robust performance across different validation methods. In train-test evaluation, the model achieved 92.27% accuracy, 87.77% sensitivity, 97.55% specificity, and 92.46% F1-score. Performance further improved in 4-fold cross-validation, achieving 94.68% accuracy, 96.05% sensitivity, 93.11% specificity, and 95.07% F1-score. These findings highlight the potential of AI-based diagnostic approaches for precise differentiation between meningioma and SFT, paving the way for innovative diagnostic tools in pathology.
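
A weighted majority vote over per-scale decisions, as the abstract describes, can be sketched in a few lines; the scales, weights, and labels below are illustrative assumptions.

```python
# Sketch of weighted majority voting across magnification scales.
import numpy as np

def weighted_majority_vote(preds: np.ndarray, weights: np.ndarray) -> int:
    """preds: per-scale class labels; weights: per-scale reliability weights."""
    classes = np.unique(preds)
    scores = [weights[preds == c].sum() for c in classes]
    return int(classes[int(np.argmax(scores))])

# e.g. decisions at 5x, 10x, 20x magnification (0 = meningioma, 1 = SFT)
scale_preds = np.array([0, 1, 1])
scale_weights = np.array([0.2, 0.35, 0.45])   # higher trust in finer scales
print(weighted_majority_vote(scale_preds, scale_weights))  # -> 1
```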

PMID:40104410 | PMC:PMC11914819 | DOI:10.1016/j.jpi.2025.100422

Categories: Literature Watch

Integrated convolutional neural network for skin cancer classification with hair and noise restoration

Wed, 2025-03-19 06:00

Turk J Med Sci. 2023 Oct 16;55(1):161-177. doi: 10.55730/1300-0144.5954. eCollection 2025.

ABSTRACT

BACKGROUND/AIM: Skin lesions are commonly diagnosed and classified using dermoscopic images. There are many artifacts visible in dermoscopic images, including hair strands, noise, bubbles, blood vessels, poor illumination, and moles. These artifacts can obscure crucial information about lesions, which limits the ability to diagnose lesions automatically. This study investigated how hair and noise artifacts in lesion images affect classifier performance and how they can be removed to improve diagnostic accuracy.

MATERIALS AND METHODS: A synthetic dataset created using hair simulation and noise simulation was used in conjunction with the HAM10000 benchmark dataset. Moreover, integrated convolutional neural networks (CNNs) were proposed for removing hair artifacts using hair inpainting and classification of refined dehaired images, called integrated hair removal (IHR), and for removing noise artifacts using nonlocal mean denoising and classification of refined denoised images, called integrated noise removal (INR).
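
Both restoration steps have classical OpenCV counterparts: inpainting for hair removal and nonlocal-means for denoising. The sketch below uses a blackhat-based hair mask, a common heuristic that is an assumption here, not necessarily the paper's exact IHR/INR design; a dermoscopic image from HAM10000 would replace the synthetic array.

```python
# Sketch of the two restoration steps: hair inpainting and NL-means denoising.
import cv2
import numpy as np

img = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # placeholder image

# --- hair removal (IHR preprocessing): detect dark strands, then inpaint ---
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
dehaired = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)

# --- noise removal (INR preprocessing): nonlocal-means denoising ---
denoised = cv2.fastNlMeansDenoisingColored(dehaired, None, 10, 10, 7, 21)
print(denoised.shape)
```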

RESULTS: Five deep learning models were used for the classification: ResNet50, DenseNet121, ResNet152, VGG16, and VGG19. The proposed IHR-DenseNet121, IHR-ResNet50, and IHR-ResNet152 achieved 2.3%, 1.78%, and 1.89% higher accuracy than DenseNet121, ResNet50, and ResNet152, respectively, for hair removal. The proposed INR-DenseNet121, INR-ResNet50, and INR-VGG19 achieved 1.41%, 2.39%, and 18.4% higher accuracy than DenseNet121, ResNet50, and VGG19, respectively, for noise removal.

CONCLUSION: A significant proportion of pixels within lesion areas are influenced by hair and noise, resulting in reduced classification accuracy. However, the proposed CNNs based on IHR and INR exhibit notably improved performance when restoring pixels affected by hair and noise. The performance outcomes of this proposed approach surpass those of existing methods.

PMID:40104314 | PMC:PMC11913500 | DOI:10.55730/1300-0144.5954

Categories: Literature Watch

Duple-MONDNet: duple deep learning-based mobile net for motor neuron disease identification

Wed, 2025-03-19 06:00

Turk J Med Sci. 2024 Aug 6;55(1):140-151. doi: 10.55730/1300-0144.5952. eCollection 2025.

ABSTRACT

BACKGROUND/AIM: Motor neuron disease (MND) is a devastating neurological ailment that affects the motor neurons regulating voluntary muscular actions. It is a rare disorder that gradually destroys aspects of neurological function. In general, MND arises as a result of a combination of natural, behavioral, and genetic influences. However, early detection of MND is a challenging task and manual identification is time-consuming. To overcome this, a novel deep learning-based duple feature extraction framework is proposed for the early detection of MND.

MATERIALS AND METHODS: Diffusion tensor imaging tractography (DTI) images were initially analyzed for color and textural features using dual feature extraction. Local binary pattern (LBP)-based methods were used to extract textural data from the images by examining nearby pixel values. A color information feature was then combined with the LBP-based texture feature during the classification phase. The flattened feature image was then fed into MONDNet to classify normal and abnormal cases of MND based on color and texture features.
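
A minimal sketch of the dual feature extraction, assuming scikit-image's uniform LBP for texture and simple channel statistics for color; the image, bin counts, and statistics are illustrative, not the paper's exact features.

```python
# Sketch of dual feature extraction: LBP histogram (texture) plus
# channel-wise color statistics, concatenated into one vector.
import numpy as np
from skimage.feature import local_binary_pattern

rgb = np.random.rand(128, 128, 3).astype(np.float32)   # placeholder DTI slice
gray = rgb.mean(axis=2)

# Texture: uniform LBP over 8 neighbours at radius 1, summarised as a histogram.
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
texture_feat, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# Color: simple channel-wise statistics appended to the texture descriptor.
color_feat = np.concatenate([rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))])

feature_vector = np.concatenate([texture_feat, color_feat])
print(feature_vector.shape)  # (16,) -> fed to the classifier
```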

RESULTS: The proposed deep MONDNet is suitable because it achieved a detection rate of 99.66% and can identify MND in its early stages.

CONCLUSION: The proposed mobile net model achieved overall F1 score improvements of 13.26%, 6.15%, 5.56%, and 5.96% over the BPNN, CNN, SVM-RFE, and MLP algorithms, respectively.

PMID:40104302 | PMC:PMC11913516 | DOI:10.55730/1300-0144.5952

Categories: Literature Watch

Evaluating deep learning auto-contouring for lung radiation therapy: A review of accuracy, variability, efficiency and dose, in target volumes and organs at risk

Wed, 2025-03-19 06:00

Phys Imaging Radiat Oncol. 2025 Feb 21;33:100736. doi: 10.1016/j.phro.2025.100736. eCollection 2025 Jan.

ABSTRACT

BACKGROUND AND PURPOSE: Delineation of target volumes (TVs) and organs at risk (OARs) is a resource intensive process in lung radiation therapy and, despite the introduction of some auto-contouring, inter-observer variability remains a challenge. Deep learning algorithms may prove an efficient alternative and this review aims to map the evidence base on the use of deep learning algorithms for TV and OAR delineation in the radiation therapy planning process for lung cancer patients.

MATERIALS AND METHODS: A literature search identified studies relating to deep learning. Manual contouring and deep learning auto-contours were evaluated against one another for accuracy, inter-observer variability, contouring time and dose-volume effects. A total of 40 studies were included for review.

RESULTS: Thirty-nine of the 40 studies investigated the accuracy of deep learning auto-contours and determined that they were of comparable accuracy to manual contours. Inter-observer variability outcomes were heterogeneous in the seven relevant studies identified. Twenty-four studies analysed the time saving associated with deep learning auto-contours and reported a significant time reduction in comparison to manual contours. The eight studies that conducted a dose-volume metric evaluation on deep learning auto-contours identified negligible effects on treatment plans.

CONCLUSION: The accuracy and time-saving capacity of deep learning auto-contours in comparison to manual contours has been extensively studied. However, additional research is required in the areas of inter-observer variability and dose-volume metric evaluation to further substantiate its clinical use.

PMID:40104215 | PMC:PMC11914827 | DOI:10.1016/j.phro.2025.100736

Categories: Literature Watch

Sex Differences in Age-Related Changes in Retinal Arteriovenous Area Based on Deep Learning Segmentation Model

Wed, 2025-03-19 06:00

Ophthalmol Sci. 2025 Jan 28;5(3):100719. doi: 10.1016/j.xops.2025.100719. eCollection 2025 May-Jun.

NO ABSTRACT

PMID:40103835 | PMC:PMC11914739 | DOI:10.1016/j.xops.2025.100719

Categories: Literature Watch

A bibliometric analysis of artificial intelligence research in critical illness: a quantitative approach and visualization study

Wed, 2025-03-19 06:00

Front Med (Lausanne). 2025 Mar 4;12:1553970. doi: 10.3389/fmed.2025.1553970. eCollection 2025.

ABSTRACT

BACKGROUND: Critical illness medicine faces challenges such as high data complexity, large individual differences, and rapid changes in conditions. Artificial Intelligence (AI) technology, especially machine learning and deep learning, offers new possibilities for addressing these issues. By analyzing large amounts of patient data, AI can help identify diseases earlier, predict disease progression, and support clinical decision-making.

METHODS: In this study, scientific literature databases such as Web of Science were searched, and bibliometric methods along with visualization tools R-bibliometrix, VOSviewer 1.6.19, and CiteSpace 6.2.R4 were used to perform a visual analysis of the retrieved data.

RESULTS: This study analyzed 900 articles from 6,653 authors in 82 countries between 2005 and 2024. The United States is a major contributor in this field, with Harvard University having the highest betweenness centrality. Noseworthy PA is a core author in this field, and Frontiers in Cardiovascular Medicine and Diagnostics lead other journals in terms of the number of publications. Artificial Intelligence has tremendous potential in the identification and management of heart failure and sepsis.

CONCLUSION: The application of AI in critical illness holds great potential, particularly in enhancing diagnostic accuracy, personalized treatment, and clinical decision support. However, to achieve widespread application of AI technology in clinical practice, challenges such as data privacy, model interpretability, and ethical issues need to be addressed. Future research should focus on the transparency, interpretability, and clinical validation of AI models to ensure their effectiveness and safety in critical illness.

PMID:40103796 | PMC:PMC11914116 | DOI:10.3389/fmed.2025.1553970

Categories: Literature Watch

High-resolution dataset for tea garden disease management: Precision agriculture insights

Wed, 2025-03-19 06:00

Data Brief. 2025 Feb 12;59:111379. doi: 10.1016/j.dib.2025.111379. eCollection 2025 Apr.

ABSTRACT

The economic development of many countries depends in part on tea plantations, which suffer from diseases that adversely affect their productivity and quality. This study presents a high-resolution dataset aimed at advancing precision agriculture for managing tea garden diseases. The dataset comprises 3,960 images with pixel dimensions of 1024 × 1024, collected using smartphones and stored in JPG format. It contains detailed images of leaves afflicted by Tea Leaf Blight, Tea Red Leaf Spot, and Tea Red Scab, alongside environmental statistics and plant-health information. The main aim of this dataset is to provide a tool for the detection and classification of different types of tea garden disease. Applying this dataset will enable the development of early detection systems, best-practice care regimens, and enhanced general garden upkeep. A range of images presenting the most prevalent diseases afflicting tea plants are paired with images of healthy leaves to provide a comprehensive overview of the circumstances that can arise in a tea plantation. The dataset can therefore be used to automate disease tracking and targeted pesticide spraying, and to support the development of smart agricultural tools, enhancing sustainability and efficiency in tea production. It not only provides a strong foundation for applying precision techniques in tea cultivation but can also become an invaluable asset to scientists studying the issues of tea production.

PMID:40103762 | PMC:PMC11914274 | DOI:10.1016/j.dib.2025.111379

Categories: Literature Watch

CommRad RF: A dataset of communication radio signals for detection, identification and classification

Wed, 2025-03-19 06:00

Data Brief. 2025 Feb 12;59:111387. doi: 10.1016/j.dib.2025.111387. eCollection 2025 Apr.

ABSTRACT

The rapid growth in wireless technology has revolutionized the way of living but, at the same time, raises security concerns about unauthorized spectrum access in both military and commercial sectors. The subject of Radio Frequency (RF) fingerprinting has received special attention in recent years. Researchers have proposed various datasets of radio signals from different types of devices (drones, cell phones, IoT, and radar). However, there is presently no freely available dataset on walkie-talkies/commercial radios. To fill this void, we present an innovative dataset comprising more than 2,700 radio signals captured from 27 radios located in an indoor multipath environment. This dataset can enhance the security of communication channels by making it possible to analyse and detect any unauthorized source of transmission. Furthermore, we also propose two innovative deep learning models, named Light Weight 1DCNN and Light Weight Bivariate 1DCNN, for efficient data processing and learning of patterns from the complex dataset of radio signals.

PMID:40103755 | PMC:PMC11914181 | DOI:10.1016/j.dib.2025.111387

Categories: Literature Watch

Histogram matching-enhanced adversarial learning for unsupervised domain adaptation in medical image segmentation

Tue, 2025-03-18 06:00

Med Phys. 2025 Mar 18. doi: 10.1002/mp.17757. Online ahead of print.

ABSTRACT

BACKGROUND: Unsupervised domain adaptation (UDA) seeks to mitigate the performance degradation of deep neural networks when applied to new, unlabeled domains by leveraging knowledge from source domains. In medical image segmentation, prevailing UDA techniques often utilize adversarial learning to address domain shifts for cross-modality adaptation. Current research on adversarial learning tends to adopt increasingly complex models and loss functions, making the training process highly intricate and less stable/robust. Furthermore, most methods primarily focus on segmentation accuracy while neglecting the associated confidence levels and uncertainties.

PURPOSE: To develop a simple yet effective UDA method based on histogram matching-enhanced adversarial learning (HMeAL-UDA), and provide comprehensive uncertainty estimations of the model predictions.

METHODS: Aiming to bridge the domain gap while reducing the model complexity, we developed a novel adversarial learning approach to align multi-modality features. The method, termed HMeAL-UDA, integrates a plug-and-play histogram matching strategy to mitigate domain-specific image style biases across modalities. We employed adversarial learning to constrain the model in the prediction space, enabling it to focus on domain-invariant features during segmentation. Moreover, we quantified the model's prediction confidence using Monte Carlo (MC) dropouts to assess two voxel-level uncertainty estimates of the segmentation results, which were subsequently aggregated into a volume-level uncertainty score, providing an overall measure of the model's reliability. The proposed method was evaluated on three public datasets (Combined Healthy Abdominal Organ Segmentation [CHAOS], Beyond the Cranial Vault [BTCV], and Abdominal Multi-Organ Segmentation Challenge [AMOS]) and one in-house clinical dataset (UTSW). We used 30 MRI scans (20 from the CHAOS dataset and 10 from the in-house dataset) and 30 CT scans from the BTCV dataset for UDA-based, cross-modality liver segmentation. Additionally, 240 CT scans and 60 MRI scans from the AMOS dataset were utilized for cross-modality multi-organ segmentation. The training and testing sets for each modality were split with ratios of approximately 4:1-3:1.
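
Two ingredients of HMeAL-UDA lend themselves to a short sketch: histogram matching to align cross-modality intensity style (scikit-image) and MC dropout to derive voxel- and volume-level uncertainty from repeated stochastic forward passes. The tiny network, volumes, and 20-pass setting below are assumptions, not the paper's model.

```python
# Sketch of histogram matching plus MC-dropout uncertainty estimation.
import numpy as np
import torch
import torch.nn as nn
from skimage.exposure import match_histograms

ct = np.random.rand(32, 64, 64).astype(np.float32)    # source-domain volume
mri = np.random.rand(32, 64, 64).astype(np.float32)   # target-domain volume
mri_matched = match_histograms(mri, ct)                # align intensity style

net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout3d(0.2),                 # stays active below
                    nn.Conv3d(8, 2, 3, padding=1))
net.train()                                            # keep dropout stochastic

x = torch.from_numpy(mri_matched)[None, None]          # (1, 1, D, H, W)
with torch.no_grad():
    probs = torch.stack([net(x).softmax(dim=1) for _ in range(20)])

mean_prob = probs.mean(dim=0)                          # segmentation estimate
voxel_uncertainty = -(mean_prob * mean_prob.clamp_min(1e-8).log()).sum(dim=1)
volume_score = voxel_uncertainty.mean().item()         # volume-level summary
print(f"volume-level uncertainty: {volume_score:.4f}")
```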

RESULTS: Extensive experiments on cross-modality medical image segmentation demonstrated the superiority of HMeAL-UDA over two state-of-the-art approaches. HMeAL-UDA achieved a mean (± s.d.) Dice similarity coefficient (DSC) of 91.34% ± 1.23% and an HD95 of 6.18 ± 2.93 mm for cross-modality (from CT to MRI) adaptation of abdominal multi-organ segmentation, and a DSC of 87.13% ± 3.67% with an HD95 of 2.48 ± 1.56 mm for segmentation adaptation in the opposite direction (MRI to CT). The results are approaching or even outperforming those of supervised methods trained with "ground-truth" labels in the target domain. In addition, we provide a comprehensive assessment of the model's uncertainty, which can help with the understanding of segmentation reliability to guide clinical decisions.

CONCLUSION: HMeAL-UDA provides a powerful segmentation tool to address cross-modality domain shifts, with the potential to generalize to other deep learning applications in medical imaging.

PMID:40102198 | DOI:10.1002/mp.17757

Categories: Literature Watch

Multimodal feature-guided diffusion model for low-count PET image denoising

Tue, 2025-03-18 06:00

Med Phys. 2025 Mar 18. doi: 10.1002/mp.17764. Online ahead of print.

ABSTRACT

BACKGROUND: To minimize radiation exposure while obtaining high-quality Positron Emission Tomography (PET) images, various methods have been developed to derive standard-count PET (SPET) images from low-count PET (LPET) images. Although deep learning methods have enhanced LPET images, they rarely utilize the rich complementary information from MR images. Even when MR images are used, these methods typically employ early, intermediate, or late fusion strategies to merge features from different CNN streams, failing to fully exploit the complementary properties of multimodal fusion.

PURPOSE: In this study, we introduce a novel multimodal feature-guided diffusion model, termed MFG-Diff, designed for the denoising of LPET images with the full utilization of MRI.

METHODS: MFG-Diff replaces random Gaussian noise with LPET images and introduces a novel degradation operator to simulate the physical degradation processes of PET imaging. In addition, it uses a novel cross-modal guided restoration network to fully exploit the modality-specific features provided by the LPET and MR images, and a multimodal feature fusion module employing cross-attention mechanisms and positional encoding at multiple feature levels for better feature fusion.
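
A cross-attention fusion step with positional encoding, as named in the methods, might look like the PyTorch sketch below; the token shapes, dimensions, and residual design are assumptions rather than the paper's exact module.

```python
# Sketch of cross-attention multimodal fusion at one feature level:
# PET tokens query complementary anatomical detail from MR tokens.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, tokens: int = 256):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, tokens, dim))  # positional encoding
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pet: torch.Tensor, mr: torch.Tensor) -> torch.Tensor:
        q = pet + self.pos
        kv = mr + self.pos
        fused, _ = self.attn(query=q, key=kv, value=kv)
        return fused + pet                                    # residual fusion

pet_tokens = torch.randn(2, 256, 64)   # flattened LPET feature map
mr_tokens = torch.randn(2, 256, 64)    # flattened MR feature map
print(CrossModalFusion()(pet_tokens, mr_tokens).shape)  # torch.Size([2, 256, 64])
```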

RESULTS: Under four count levels (2.5%, 5.0%, 10%, and 25%), the images generated by our proposed network showed superior performance compared to those produced by other networks in both qualitative and quantitative evaluations, as well as in statistical analysis. In particular, at the 2.5% count level the peak signal-to-noise ratio of the generated PET images improved by more than 20%, the structural similarity index improved by more than 16%, and the root mean square error was reduced by nearly 50%. Moreover, the generated PET images showed strong correlation (Pearson correlation coefficient, 0.9924) and consistency with the SPET images, along with excellent quantitative evaluation results.

CONCLUSIONS: The proposed method outperformed existing state-of-the-art LPET denoising models and can be used to generate highly correlated and consistent SPET images obtained from LPET images.

PMID:40102174 | DOI:10.1002/mp.17764

Categories: Literature Watch

Envelope spectrum knowledge-guided domain invariant representation learning strategy for intelligent fault diagnosis of bearing

Tue, 2025-03-18 06:00

ISA Trans. 2025 Mar 11:S0019-0578(25)00145-4. doi: 10.1016/j.isatra.2025.03.004. Online ahead of print.

ABSTRACT

Deep learning has significantly advanced bearing fault diagnosis. Traditional models rely on the assumption of independent and identically distributed data, which is frequently violated due to variations in rotational speed and load during bearing fault diagnosis. Representation learning-based bearing fault diagnosis has also lacked consideration of spectrum knowledge and representation diversity under multiple working conditions. Therefore, this study presents a domain-invariant representation learning strategy (DIRLs) for diagnosing bearing faults across differing working conditions. By leveraging envelope spectrum knowledge distillation, DIRLs captures Fourier characteristics as domain-invariant features and secures robust health-state representations by aligning high-order statistics of the samples under different working conditions. Moreover, an innovative loss function, which maximizes the L2-norm metric of the health-state representation, is designed to enrich representation diversity. Experimental results demonstrate an average AUC improvement of 28.6% on the Paderborn bearing dataset and an overall diagnostic accuracy of 88.7% on a private bearing dataset, validating the effectiveness of the proposed method.
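
The envelope spectrum that the distillation strategy builds on is standard signal processing: a Hilbert-transform envelope followed by an FFT. The sketch below recovers a simulated 100 Hz fault modulation; the signal model and rates are assumptions.

```python
# Sketch of an envelope spectrum: demodulate bearing impacts via the Hilbert
# envelope, then inspect the FFT for the fault frequency.
import numpy as np
from scipy.signal import hilbert

fs = 12_000                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 3_000 * t)       # resonance excited by impacts
fault = 1 + np.sin(2 * np.pi * 100 * t)       # 100 Hz fault modulation
signal = fault * carrier + 0.1 * np.random.randn(t.size)

envelope = np.abs(hilbert(signal))            # demodulate the impacts
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(f"dominant envelope frequency: {freqs[spectrum.argmax()]:.1f} Hz")  # ~100 Hz
```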

PMID:40102111 | DOI:10.1016/j.isatra.2025.03.004

Categories: Literature Watch

DeepSMCP - Deep-learning powered denoising of Monte Carlo dose distributions within the Swiss Monte Carlo Plan

Tue, 2025-03-18 06:00

Z Med Phys. 2025 Mar 17:S0939-3889(25)00034-0. doi: 10.1016/j.zemedi.2025.02.004. Online ahead of print.

ABSTRACT

This work demonstrates the development of a fast deep-learning framework (DeepSMCP) to mitigate noise in Monte Carlo dose distributions (MC-DDs) of photon treatment plans with high statistical uncertainty (SU), and its integration into the Swiss Monte Carlo Plan (SMCP). To this end, a two-channel-input (MC-DD and computed tomography (CT) scan) 3D U-net was trained, validated, and tested (80%/10%/10%) on high/low-SU MC-DD pairs of 106 clinically motivated VMAT arcs for 29 available CTs, augmented to 3074 pairs. The model was integrated into SMCP to enable a "one-click" workflow of calculating and denoising MC-DDs of high SU to obtain MC-DDs of low SU. Model accuracy was evaluated on the test set using the Gamma passing rate (2% global, 2 mm, 10% threshold) comparing denoised and low-SU MC-DDs. Calculation time for the whole workflow was recorded. Denoised MC-DDs matched low-SU MC-DDs with an average (standard deviation) Gamma passing rate of 82.9% (4.7%). Additional application of DeepSMCP to 12 unseen clinically motivated cases of different treatment sites, including treatment sites not present during training, resulted in an average Gamma passing rate of 91.0%. Denoised DDs were obtained on average in 35.1 s, a 340-fold efficiency gain compared to low-SU MC-DD calculation. DeepSMCP presents a first, seamlessly integrated and promising denoising framework for MC-DDs.
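
The two-channel input is straightforward to sketch: dose and CT volumes stacked along the channel axis before entering the 3D U-net. Only an input stem is shown below, with illustrative sizes.

```python
# Sketch of the two-channel input: noisy MC dose + CT volume concatenated
# along the channel axis, entering a 3D convolutional stem.
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv3d(2, 16, kernel_size=3, padding=1),  # 2 channels: dose + CT
    nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

dose_high_su = torch.randn(1, 1, 64, 64, 64)     # noisy MC dose distribution
ct = torch.randn(1, 1, 64, 64, 64)               # matching CT volume
x = torch.cat([dose_high_su, ct], dim=1)         # (1, 2, 64, 64, 64)
print(stem(x).shape)                             # torch.Size([1, 16, 64, 64, 64])
```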

PMID:40102103 | DOI:10.1016/j.zemedi.2025.02.004

Categories: Literature Watch

Compressed chromatographic fingerprint of Artemisiae argyi Folium empowered by 1D-CNN: Reduce mobile phase consumption using chemometric algorithm

Tue, 2025-03-18 06:00

J Chromatogr A. 2025 Mar 13;1748:465874. doi: 10.1016/j.chroma.2025.465874. Online ahead of print.

ABSTRACT

INTRODUCTION: High-Performance Liquid Chromatography (HPLC) is widely used for its high sensitivity, stability, and accuracy. Nonetheless, it often involves lengthy analysis times and considerable solvent consumption, especially when dealing with complex systems and quality control, posing challenges to green and eco-friendly analytical practices.

OBJECTIVE: This study proposes a compressed fingerprint chromatogram analysis technique that combines a one-dimensional convolutional neural network (1D-CNN) with HPLC, aiming to improve the analytical efficiency of various compounds in complex systems while reducing the use of organic solvents.

MATERIALS AND METHODS: The natural product Artemisiae argyi Folium (AAF) was selected as the experimental subject. Firstly, HPLC fingerprints of AAF were developed based on conventional programs. Next, a compressed fingerprint was obtained without losing compound information. Finally, a 1D-CNN deep learning model was used to analyze and identify the compressed chromatograms, enabling quantitative analysis of 10 compounds in complex systems.
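
A 1D-CNN that maps a compressed chromatogram to contents of the 10 target compounds can be sketched as a small regression network; the layer sizes and 2048-point input below are assumptions, not the paper's model.

```python
# Sketch of the 1D-CNN stage: compressed chromatogram in, a 10-value
# regression vector (one per target compound) out.
import torch
import torch.nn as nn

class Chromatogram1DCNN(nn.Module):
    def __init__(self, n_points: int = 2048, n_compounds: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(32 * (n_points // 16), n_compounds),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

signal = torch.randn(4, 1, 2048)           # batch of compressed chromatograms
print(Chromatogram1DCNN()(signal).shape)   # torch.Size([4, 10])
```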

RESULTS: The results indicate that the 1D-CNN model can effectively extract features from complex data, reducing the analysis time for each sample by about 40 min. In addition, mobile phase consumption decreased significantly, by 78% compared with the conventional method. Nine of the ten compounds to be analyzed achieved good results, with the highest correlation coefficient reaching above 0.95, indicating that the model has strong explanatory power.

CONCLUSION: The proposed compressed fingerprint chromatograms recognition technique enhances the environmental sustainability and efficiency of traditional HPLC methods, offering valuable insights for future advancements in analytical methodologies and equipment development.

PMID:40101658 | DOI:10.1016/j.chroma.2025.465874

Categories: Literature Watch

Revolutionizing biological digital twins: Integrating internet of bio-nano things, convolutional neural networks, and federated learning

Tue, 2025-03-18 06:00

Comput Biol Med. 2025 Mar 17;189:109970. doi: 10.1016/j.compbiomed.2025.109970. Online ahead of print.

ABSTRACT

Digital twins (DTs) are advancing biotechnology by providing digital models for drug discovery, digital health applications, and biological assets, including microorganisms. However, the hypothesis posits that implementing micro- and nanoscale DTs, especially for biological entities like bacteria, presents substantial challenges. These challenges stem from the complexities of data extraction, transmission, and computation, along with the necessity for a specialized Internet of Things (IoT) infrastructure. To address these challenges, this article proposes a novel framework that leverages bio-network technologies, including the Internet of Bio-Nano Things (IoBNT), and decentralized deep learning algorithms such as federated learning (FL) and convolutional neural networks (CNN). The methodology involves using CNNs for robust pattern recognition and FL to reduce bandwidth consumption while enhancing security. IoBNT devices are utilized for precise microscopic data acquisition and transmission, which ensures minimal error rates. The results demonstrate a multi-class classification accuracy of 98.7% across 33 bacteria categories, achieving over 99% bandwidth savings. Additionally, IoBNT integration reduces biological data transfer errors by up to 98%, even under worst-case conditions. This framework is further supported by an adaptable, user-friendly dashboard, expanding its applicability across pharmaceutical and biotechnology industries.
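
The federated learning component can be illustrated with FedAvg, in which clients train locally and only model weights are aggregated, so raw biological data never leaves the device. The toy CNN, 33-class head (matching the 33 bacteria categories), and three clients below are assumptions.

```python
# Sketch of one FedAvg aggregation round over locally trained client models.
import copy
import torch
import torch.nn as nn

def fed_avg(client_models: list[nn.Module]) -> dict:
    """Average the parameters of locally trained client models."""
    states = [m.state_dict() for m in client_models]
    return {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
            for k in states[0]}

global_model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(),
                             nn.Flatten(), nn.LazyLinear(33))  # 33 classes
global_model(torch.randn(1, 1, 28, 28))          # materialise the lazy layer

clients = [copy.deepcopy(global_model) for _ in range(3)]
# ... each client would run local SGD on its own microscopy data here ...
global_model.load_state_dict(fed_avg(clients))   # one aggregation round
print("aggregated one FedAvg round across", len(clients), "clients")
```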

PMID:40101583 | DOI:10.1016/j.compbiomed.2025.109970

Categories: Literature Watch

Feature compensation and network reconstruction imaging with high-order helical modes in cylindrical waveguides

Tue, 2025-03-18 06:00

Ultrasonics. 2025 Mar 9;151:107631. doi: 10.1016/j.ultras.2025.107631. Online ahead of print.

ABSTRACT

Pipe wall loss assessment is crucial in oil and gas transportation. Ultrasonic guided waves are an effective technology for detecting pipe defects. However, accurately inverting weak-feature defects under limited-view conditions remains challenging due to constraints in transducer arrangements and inconsistent signal characteristics. This paper proposes a stepwise inversion method based on feature compensation and network reconstruction through deep learning, combined with high-order helical guided waves, to expand the imaging view and achieve high-resolution imaging of pipe defects. A forward model was established using the finite difference method, with the two-dimensional Pearson correlation coefficient and maximum wall loss estimation accuracy defined as imaging metrics to evaluate and compare the method. Among 50 randomly selected defect samples in the test set, the inversion model achieved a correlation coefficient of 0.9669 and a maximum wall loss estimation accuracy of 96.65%. Additionally, Gaussian noise was introduced to assess imaging robustness under pure-signal, 5 dB, and 3 dB conditions. Laboratory experiments validated the practical feasibility of the proposed method. This approach is generalizable and holds significant potential for nondestructive testing in cylindrical waveguide structures represented by pipes.
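
The two-dimensional Pearson correlation coefficient used as an imaging metric reduces to correlating flattened maps; the synthetic wall-loss maps in the sketch below are illustrative assumptions.

```python
# Sketch of the 2D Pearson correlation between a reconstructed wall-loss map
# and the ground-truth reference.
import numpy as np

def pearson_2d(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equally shaped 2D maps."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

truth = np.zeros((40, 120))
truth[18:24, 50:70] = 0.3                        # ground-truth wall-loss defect
recon = truth + 0.02 * np.random.randn(40, 120)  # inverted image with noise
print(f"2D Pearson correlation: {pearson_2d(truth, recon):.4f}")
```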

PMID:40101471 | DOI:10.1016/j.ultras.2025.107631

Categories: Literature Watch
