Literature Watch

Corrigendum to "Crystal structure and functional characterization of a novel bacterial lignin-degrading dye-decolorizing peroxidase" [Int. J. Biol. Macromol. 297 (2025) 139900]

Systems Biology - Sun, 2025-02-09 06:00

Int J Biol Macromol. 2025 Feb 8;304(Pt 1):140766. doi: 10.1016/j.ijbiomac.2025.140766. Online ahead of print.

NO ABSTRACT

PMID:39923600 | DOI:10.1016/j.ijbiomac.2025.140766

Categories: Literature Watch

Safety reporting in neonatal clinical trials: reflections towards optimal, globally relevant approaches

Drug-induced Adverse Events - Sun, 2025-02-09 06:00

Trials. 2025 Feb 9;26(1):46. doi: 10.1186/s13063-025-08723-y.

ABSTRACT

Adverse event (AE) collection is a key part of evidence generation in clinical trials and an integral element of safety reporting. AE assessment and documentation are particularly challenging in neonates, who are a heterogeneous population with high rates of co-morbidities. Neonatal research is finally gaining the attention of regulators regarding drug development and the need for optimal dosing specific to this population. However, further efforts are necessary to ensure that adverse events (AEs) are adequately collected, allowing for the generation of essential safety data. It is also crucial that the methodology used aligns with the intended trial outcomes to minimise the burden on trial sites. In resource-constrained settings, where pharmacovigilance implementation can be particularly challenging, a pragmatic approach to safety reporting is even more important given the significant public health need for effective drugs. This commentary reflects on some of the challenges and potential areas of improvement in safety reporting that could be addressed in future neonatal-focused trials.

PMID:39924475 | DOI:10.1186/s13063-025-08723-y

Protocol to study chloride regulation in cultured mouse cortical neurons using electrophysiology

Systems Biology - Sun, 2025-02-09 06:00

STAR Protoc. 2025 Feb 8;6(1):103628. doi: 10.1016/j.xpro.2025.103628. Online ahead of print.

ABSTRACT

Inhibitory synaptic transmission mediated by the neurotransmitter γ-aminobutyric acid (GABA) is dependent on the concentration of chloride ions (Cl-) in neurons, which can be assessed by making patch-clamp recordings of the reversal potential for GABA (EGABA). Here, we present a protocol to study the regulation of cation-chloride cotransporters and the strength of synaptic inhibition in cultured mouse cortical neurons using electrophysiology. We describe steps for culturing neurons isolated from postnatal pups and electrophysiological measurement of EGABA. For complete details on the use and execution of this protocol, please refer to Raveendran et al.1.
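
As background for readers, EGABA is often approximated by the Nernst potential for chloride when bicarbonate permeability is neglected. A minimal sketch, with illustrative concentrations that are not taken from the protocol:

```python
import math

def nernst_potential_mV(cl_out_mM: float, cl_in_mM: float, temp_c: float = 25.0) -> float:
    """Nernst reversal potential for chloride (valence z = -1), in mV."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    z = -1         # chloride valence
    T = temp_c + 273.15
    return (R * T / (z * F)) * math.log(cl_out_mM / cl_in_mM) * 1000.0

# Illustrative mature-neuron values: ~130 mM external, ~7 mM internal chloride
print(round(nernst_potential_mV(130.0, 7.0), 1))  # ≈ -75 mV
```

Lower internal chloride (e.g. after KCC2 upregulation) drives this value more negative, strengthening inhibition — the quantity the patch-clamp measurements above track.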

PMID:39923240 | DOI:10.1016/j.xpro.2025.103628

Next-generation sequencing based deep learning model for prediction of HER2 status and response to HER2-targeted neoadjuvant chemotherapy

Deep learning - Sun, 2025-02-09 06:00

J Cancer Res Clin Oncol. 2025 Feb 9;151(2):72. doi: 10.1007/s00432-025-06105-0.

ABSTRACT

INTRODUCTION: For patients with breast cancer, the amplification of Human Epidermal Growth Factor Receptor 2 (HER2) is closely related to their prognosis and treatment decisions. This study aimed to further improve the accuracy and efficiency of HER2 amplification status detection with a deep learning model, and apply the model to predict the efficacy of neoadjuvant therapy.

METHODS: We combined Next-Generation Sequencing (NGS) data and IHC staining images of 606 breast cancer patients and developed a Vision Transformer (ViT) deep learning model to identify the amplification of HER2 through these IHC staining images. This model was then applied to predict the efficacy of neoadjuvant therapy in 399 HER2-positive breast cancer patients.

RESULTS: The NGS data of 606 patients were split into training (N = 404), validation (N = 101), and testing (N = 101) sets. The top 3 genes with the highest mutation frequency were TP53, ERBB2 and PIK3CA. With the NGS results as deep learning model labels, the accuracy of our ViT model was 93.1% for HER2 amplification recognition. The misidentifications were likely due to the heterogeneity of HER2 expression in cancer tissues. For predicting the efficacy of neoadjuvant therapy, receiver operating characteristic (ROC) curves were plotted, and the combination of image recognition result and clinical pathological features yielded an area under the curve (AUC) value of 0.855 in the training set and 0.841 in the testing set.

CONCLUSIONS: Our study provided a method of HER2 status recognition based on IHC images, improving the efficiency and accuracy of HER2 status assessment, and can be used for predicting the efficacy of anti-HER2 targeted neoadjuvant therapy. We intend our deep learning model to assist pathologists in HER2 amplification recognition.
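
For readers unfamiliar with the AUC values quoted above: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney statistic). A minimal pure-Python sketch with invented scores and labels, not study data:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: probability that a randomly
    chosen positive outranks a randomly chosen negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy model scores and hypothetical labels (1 = HER2-amplified)
print(auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0]))  # 5/6 ≈ 0.833
```

An AUC of 0.841 on the test set, as above, means the combined model ranks a random responder above a random non-responder about 84% of the time.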

PMID:39923208 | DOI:10.1007/s00432-025-06105-0

Subject-Based Transfer Learning in Longitudinal Multiple Sclerosis Lesion Segmentation

Deep learning - Sun, 2025-02-09 06:00

J Neuroimaging. 2025 Jan-Feb;35(1):e70024. doi: 10.1111/jon.70024.

ABSTRACT

BACKGROUND AND PURPOSE: Accurate and consistent lesion segmentation from magnetic resonance imaging is required for longitudinal multiple sclerosis (MS) data analysis. In this work, we propose two new transfer learning-based pipelines to improve segmentation performance for subjects in longitudinal MS datasets.

METHOD: In general, transfer learning is used to improve deep learning model performance on an unseen dataset by fine-tuning a pretrained model with a limited number of labeled scans from that dataset. The proposed methodologies fine-tune the deep learning model for each subject using the first scan and improve segmentation performance for later scans of the same subject. We also investigated the statistical benefits of the proposed methodology by modeling lesion volume over time in progressors (defined by confirmed disability progression) and nonprogressors in a large in-house dataset (937 MS patients, 3210 scans) using a linear mixed effect (LME) model.

RESULTS: The results show statistically significant improvement for the proposed methodology compared with the traditional transfer learning method using Dice (improvement: 2%), sensitivity (6%), and average volumetric difference (16%), as well as visual analysis for public and in-house datasets. The LME result showed that the proposed subject-wise transfer learning method had increased statistical power for the measurement of longitudinal lesion volume.

CONCLUSION: The proposed method improved lesion segmentation performance and can reduce manual effort to correct the automatic segmentations for final data analysis in longitudinal studies.
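
The Dice coefficient reported above measures overlap between an automatic and a manual segmentation; for binary masks it is twice the intersection divided by the sum of mask sizes. A minimal sketch on toy flattened masks (illustrative values only):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 sequences."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # empty masks count as perfect

auto   = [0, 1, 1, 1, 0, 0, 1, 0]  # hypothetical automatic lesion mask
manual = [0, 1, 1, 0, 0, 1, 1, 0]  # hypothetical manual reference mask
print(dice(auto, manual))  # → 0.75
```

A 2% improvement in Dice, as reported above, therefore corresponds directly to a 2-point gain in this overlap fraction.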

PMID:39923192 | DOI:10.1111/jon.70024

An ontology-based rare disease common data model harmonising international registries, FHIR, and Phenopackets

Orphan or Rare Diseases - Sat, 2025-02-08 06:00

Sci Data. 2025 Feb 8;12(1):234. doi: 10.1038/s41597-025-04558-z.

ABSTRACT

Although rare diseases (RDs) affect over 260 million individuals worldwide, low data quality and scarcity challenge effective care and research. This work aims to harmonise the Common Data Set of the European Rare Disease Registry Infrastructure (ERDRI-CDS), Health Level 7 Fast Healthcare Interoperability Resources (FHIR) base resources, and the Global Alliance for Genomics and Health Phenopacket Schema into a novel rare disease common data model (RD-CDM), laying the foundation for developing international RD-CDMs aligned with these data standards. We developed a modular GitHub repository and documentation to account for flexibility, extensions and further development. Recommendations on the model's cardinalities are given, inviting further refinement and international collaboration. An ontology-based approach was selected to find a common denominator between the semantic and syntactic data standards. Our RD-CDM version 2.0.0 comprises 78 data elements, extending the ERDRI-CDS by 62 elements, with previous versions implemented in four German university hospitals capturing real-world data for development and evaluation. We identified three categories for evaluation: Medical Data Granularity, Clinical Reasoning and Medical Relevance, and Interoperability and Harmonisation.

PMID:39922817 | DOI:10.1038/s41597-025-04558-z

Decision-making and role preferences for receiving individual pharmacogenomic research results among participants at a Ugandan HIV research institute

Pharmacogenomics - Sat, 2025-02-08 06:00

BMC Med Ethics. 2025 Feb 8;26(1):23. doi: 10.1186/s12910-025-01181-w.

ABSTRACT

Little is known about how people living with HIV should be engaged in the decision-making process for returning individual pharmacogenomic research results. This study explored the role people living with HIV want to play in making decisions about whether and how individual results of pharmacogenomic research should be presented to them. A convergent parallel mixed methods study was conducted, comprising a survey of 221 research participants and five deliberative focus group discussions with 30 purposively selected research participants. Most participants (122, 55.2%) preferred the collaborative role, 67 (30.3%) preferred the active role and 32 (14.5%) preferred the passive role. Factors that significantly influenced preference for an active role compared with a collaborative role were marital status (OR: 0.282, p = 0.013), research experience (OR: 4.37, p = 0.028), and religion (OR: 2.346, p = 0.041). The reasons proffered for the active role included prior experience with antiretroviral treatment and increased exposure to research activities. The reasons given for preferring the passive role included limited level of awareness about the interaction between patients' genes and drugs, trust in researchers to make the right decision, and fear of making decisions with harmful implications. Overall, findings from our study show that participants want to be engaged in the decision-making process. Research teams ought to provide adequate and simple information about the pharmacogenomic research and implications of the results to support participants' informed decisions.

PMID:39923018 | DOI:10.1186/s12910-025-01181-w

Responsible governance of genomics data and biospecimens in the context of broad consent: experiences of a pioneering access committee in Africa

Pharmacogenomics - Sat, 2025-02-08 06:00

BMJ Glob Health. 2025 Feb 8;10(2):e016026. doi: 10.1136/bmjgh-2024-016026.

ABSTRACT

International collaboration in genomic research is gaining momentum in African countries and is often supported by external funding. Over the last decade, there has been an increased interest in African genomic data. The contribution of this rich data resource in understanding diseases predominant in both African and global populations has been limited to date. There has been some non-governmental funding dedicated to the advancement of genomic research and innovation by African-based and African-led research groups, but the impact of these initiatives is hard to quantify. However, there is now an opportunity for the global research community to leverage decades of genomic data and biospecimens originating from African populations. The experience we describe in this paper is of an access governance framework established under the Human Heredity and Health in Africa (H3Africa) consortium, given the task of managing wider access to the data and biospecimen resources collected via its various projects. The function of the Data and Biospecimen Access Committee (DBAC) is to facilitate the advancement of medicine and health while fostering the development of bioinformatics capabilities at Africa-based institutions or regional hubs. Our collective experiences and lessons learnt as a committee provide examples of nuanced considerations when evaluating access to African data. The committee was semi-autonomous in its establishment and had independence in decision-making. The DBAC continually advocates for the responsible use of genomic data and biospecimens that were obtained from African research participants, under broad consent, by primary researchers who no longer have oversight over the future use of these resources.

PMID:39922566 | DOI:10.1136/bmjgh-2024-016026

Predictive classification-based read-across for diverse functional vitiligo-linked chemical exposomes (ViCE): A new approach for the assessment of chemical safety for the vitiligo disease in humans

Pharmacogenomics - Sat, 2025-02-08 06:00

Toxicol In Vitro. 2025 Feb 6:106018. doi: 10.1016/j.tiv.2025.106018. Online ahead of print.

ABSTRACT

We have explored a new approach using a similarity measure-based read-across derived hypothesis to address the precise risk assessment of vitiligo active chemicals. In this analysis, we initially developed a data set by combining vitiligo active compounds taken from the previous literature with non-vitiligo chemicals, which are non-skin sensitizers reported in other literature. Afterward, we performed a manual curation process to obtain a curated dataset. Furthermore, the optimum similarity measure was identified from a validation set using a pool of 47 descriptors from the analysis of the most discriminating features. The identified optimum similarity measure (i.e., Euclidean distance-based similarity along with seven close source compounds) was utilized in the read-across derived similarity-based classification studies on close source congeners with respect to target compounds. In this study, we also identified the features contributing positively and negatively toward the assessment of vitiligo potential, and estimated the target chemicals with better accuracy. The applicability domain status of the reported compounds was also studied, and the outliers were identified. As there are no comparative studies in this regard to the best of our knowledge, we can affirm that this is the first report on the in-silico identification of potential vitiligo-linked chemical exposomes (ViCE) based on the similarity measure of the read-across.
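
The read-across classification described above can be illustrated schematically: rank source compounds by Euclidean distance in descriptor space and take a majority vote over the k closest analogues. A hypothetical sketch — descriptor values, labels, and k are invented for illustration (the study itself used 47 descriptors and seven close source compounds):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def read_across_class(target, sources, k=7):
    """Majority vote over the k source compounds closest to the target."""
    ranked = sorted(sources, key=lambda s: euclidean(target, s[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical 2-descriptor compounds labeled 1 (vitiligo-active) or 0
sources = [([0.1, 0.2], 1), ([0.2, 0.1], 1), ([0.9, 0.8], 0),
           ([0.8, 0.9], 0), ([0.15, 0.25], 1)]
print(read_across_class([0.12, 0.18], sources, k=3))  # → 1
```

The applicability-domain check mentioned above would additionally flag a target whose nearest-neighbour distances are all large, rather than forcing a vote.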

PMID:39922550 | DOI:10.1016/j.tiv.2025.106018

Identifying novel therapeutic targets in cystic fibrosis through advanced single-cell transcriptomics analysis

Cystic Fibrosis - Sat, 2025-02-08 06:00

Comput Biol Med. 2025 Feb 7;187:109748. doi: 10.1016/j.compbiomed.2025.109748. Online ahead of print.

ABSTRACT

BACKGROUND: Lung disease remains a leading cause of morbidity and mortality in individuals with cystic fibrosis (CF). Despite significant advances, the complex molecular mechanisms underlying CF-related airway pathology are not fully understood. Building upon previous single-cell transcriptomics studies in CF patients and healthy controls, this study employs enhanced analytical methodologies to deepen our understanding of CF-associated gene expression.

METHODS: We employed advanced single-cell transcriptomics techniques, integrating data from multiple sources and implementing rigorous normalization and mapping strategies using a comprehensive lung reference panel. These sophisticated methods were designed to enhance the accuracy and depth of our analysis, with a focus on elucidating differential gene expression and characterizing co-expression network dynamics associated with cystic fibrosis (CF).

RESULTS: Our analysis uncovered novel genes and regulatory networks that had not been previously associated with CF airway disease. These findings highlight new potential therapeutic targets that could be exploited to develop more effective interventions for managing CF-related lung conditions.

CONCLUSION: This study provides critical insights into the molecular landscape of CF airway disease, offering new avenues for targeted therapeutic strategies. By identifying key genes and networks involved in CF pathogenesis, our research contributes to the broader efforts to improve the prognosis and quality of life for patients with CF. These discoveries pave the way for future studies aimed at translating these findings into clinical practice.

PMID:39921941 | DOI:10.1016/j.compbiomed.2025.109748

CGNet: Few-shot learning for Intracranial Hemorrhage Segmentation

Deep learning - Sat, 2025-02-08 06:00

Comput Med Imaging Graph. 2025 Feb 5;121:102505. doi: 10.1016/j.compmedimag.2025.102505. Online ahead of print.

ABSTRACT

In recent years, with increasing attention from researchers towards medical imaging, deep learning-based image segmentation techniques have become mainstream in the field, requiring large amounts of manually annotated data. Annotating datasets for Intracranial Hemorrhage (ICH) is particularly tedious and costly. Few-shot segmentation therefore holds significant potential for medical imaging. In this work, we designed a novel segmentation model, CGNet, to leverage a limited dataset for segmenting ICH regions. We propose a Cross Feature Module (CFM) that enhances the understanding of lesion details by facilitating interaction between feature information from the query and support sets, and a Support Guide Query (SGQ) module that refines segmentation targets by integrating features from support and query sets at different scales, preserving the integrity of target feature information while further enhancing segmentation detail. We first propose transforming the ICH segmentation task into a few-shot learning problem. We evaluated our model using the publicly available BHSD dataset and the private IHSAH dataset. Our approach outperforms current state-of-the-art few-shot segmentation models, surpassing them by 3% and 1.8% in Dice coefficient scores on the two datasets, respectively, and also exceeds the performance of fully supervised segmentation models with the same amount of data.

PMID:39921928 | DOI:10.1016/j.compmedimag.2025.102505

DLPVI: Deep learning framework integrating projection, view-by-view backprojection, and image domains for high- and ultra-sparse-view CBCT reconstruction

Deep learning - Sat, 2025-02-08 06:00

Comput Med Imaging Graph. 2025 Feb 1;121:102508. doi: 10.1016/j.compmedimag.2025.102508. Online ahead of print.

ABSTRACT

This study proposes a deep learning framework, DLPVI, which integrates projection, view-by-view backprojection (VVBP), and image domains to improve the quality of high-sparse-view and ultra-sparse-view cone-beam computed tomography (CBCT) images. The DLPVI comprises a projection domain sub-framework, a VVBP domain sub-framework, and a Transformer-based image domain model. First, full-view projections were restored from sparse-view projections via the projection domain sub-framework, then filtered and view-by-view backprojected to generate VVBP raw data. Next, the VVBP raw data was processed by the VVBP domain sub-framework to suppress residual noise and artifacts, and produce CBCT axial images. Finally, the axial images were further refined using the image domain model. The DLPVI was trained, validated, and tested on CBCT data from 163, 30, and 30 real patients respectively. Quantitative metrics including root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) were calculated to evaluate the method performance. The DLPVI was compared with 15 state-of-the-art (SOTA) methods, including 2 projection domain models, 10 image domain models, and 3 projection-image dual-domain frameworks, on 1/8 high-sparse-view and 1/16 ultra-sparse-view reconstruction tasks. Statistical analysis was conducted using the Kruskal-Wallis test, followed by the post-hoc Dunn's test. Experimental results demonstrated that the DLPVI outperformed all 15 SOTA methods for both tasks, with statistically significant improvements (p < 0.05 in Kruskal-Wallis test and p < 0.05/15 in Dunn's test). The proposed DLPVI effectively improves the quality of high- and ultra-sparse-view CBCT images.
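
Of the quantitative metrics above, RMSE and PSNR are directly related: PSNR rescales the reconstruction error logarithmically against the maximum possible intensity. A minimal sketch on toy pixel values (not CBCT data):

```python
import math

def rmse(ref, img):
    """Root-mean-square error between a reference and a reconstruction."""
    return math.sqrt(sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref))

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * math.log10(max_val / e)

ref = [100, 120, 130, 140]  # toy reference intensities
rec = [102, 118, 131, 139]  # toy reconstruction
print(round(rmse(ref, rec), 3), round(psnr(ref, rec), 2))  # → 1.581 44.15
```

SSIM and FSIM, the other two metrics named above, compare local structure rather than raw intensity error and need windowed statistics, so they are omitted from this sketch.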

PMID:39921927 | DOI:10.1016/j.compmedimag.2025.102508

Exploration of the optimal deep learning model for english-Japanese machine translation of medical device adverse event terminology

Deep learning - Sat, 2025-02-08 06:00

BMC Med Inform Decis Mak. 2025 Feb 8;25(1):66. doi: 10.1186/s12911-025-02912-0.

ABSTRACT

BACKGROUND: In Japan, reporting of medical device malfunctions and related health problems is mandatory, and efforts are being made to standardize terminology through the Adverse Event Terminology Collection of the Japan Federation of Medical Device Associations (JFMDA). Internationally, the Adverse Event Terminology of the International Medical Device Regulators Forum (IMDRF-AET) provides a standardized terminology collection in English. Mapping between the JFMDA terminology collection and the IMDRF-AET is critical to international harmonization. However, the process of translating the terminology collections from English to Japanese and reconciling them is done manually, resulting in high human workloads and potential inaccuracies.

OBJECTIVE: The purpose of this study is to identify the optimal machine translation model for translating the IMDRF-AET into Japanese, as part of a function for an automatic terminology mapping system.

METHODS: English-Japanese parallel data for the IMDRF-AET published by the Ministry of Health, Labor and Welfare in Japan comprised 50 sentences randomly extracted from the terms and their definitions. These English sentences were fed into the following machine translation models to produce Japanese translations: mBART50, m2m-100, Google Translation, Multilingual T5, GPT-3, ChatGPT, and GPT-4. The evaluations included the quantitative metrics of BiLingual Evaluation Understudy (BLEU), Character Error Rate (CER), Word Error Rate (WER), Metric for Evaluation of Translation with Explicit ORdering (METEOR), and Bidirectional Encoder Representations from Transformers (BERT) score, as well as qualitative evaluations by four experts.

RESULTS: GPT-4 outperformed the other models in both the quantitative and qualitative evaluations; ChatGPT showed comparable capability in the qualitative evaluation but achieved lower quantitative scores. The other models, including mBART50 and m2m-100, lagged behind, particularly in the CER and BERT scores.

CONCLUSION: GPT-4's superior performance in translating medical terminology indicates its potential utility in improving the efficiency of the terminology mapping system.
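
Among the metrics above, CER and WER are both normalized Levenshtein (edit) distances, computed over characters and over whitespace-split tokens respectively. A minimal sketch — the example strings are invented, not taken from the study data:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via one-row dynamic programming."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = dp[0]          # dp value diagonally up-left
        dp[0] = i
        for j, h in enumerate(hyp, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (r != h))  # substitution (free on match)
            prev = cur
    return dp[-1]

def cer(ref, hyp):
    """Character error rate: edits per reference character."""
    return edit_distance(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word error rate: edits per reference token."""
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

print(cer("adverse event", "adverse evnt"))              # 1/13 ≈ 0.077
print(wer("device not functioning", "device is not working"))  # 2/3 ≈ 0.667
```

BLEU, METEOR, and BERTScore need n-gram statistics or learned embeddings, so a comparable stdlib sketch is not attempted here.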

PMID:39923074 | DOI:10.1186/s12911-025-02912-0

Molecular optimization using a conditional transformer for reaction-aware compound exploration with reinforcement learning

Deep learning - Sat, 2025-02-08 06:00

Commun Chem. 2025 Feb 8;8(1):40. doi: 10.1038/s42004-025-01437-x.

ABSTRACT

Designing molecules with desirable properties is a critical endeavor in drug discovery. Because of recent advances in deep learning, molecular generative models have been developed. However, existing compound exploration models often disregard the important issue of ensuring the feasibility of organic synthesis. To address this issue, we propose TRACER, a framework that integrates molecular property optimization with synthetic pathway generation. The model can predict the product derived from a given reactant via a conditional transformer under the constraints of a reaction type. The molecular optimization results of an activity prediction model targeting DRD2, AKT1, and CXCR4 revealed that TRACER effectively generated compounds with high scores. The transformer model, which recognizes entire molecular structures, captures the complexity of organic synthesis and enables navigation of a vast chemical space while considering real-world reactivity constraints.

PMID:39922979 | DOI:10.1038/s42004-025-01437-x

Severe deviation in protein fold prediction by advanced AI: a case study

Deep learning - Sat, 2025-02-08 06:00

Sci Rep. 2025 Feb 8;15(1):4778. doi: 10.1038/s41598-025-89516-w.

ABSTRACT

Artificial intelligence (AI) and deep learning are making groundbreaking strides in protein structure prediction. AlphaFold is remarkable in this arena for its outstanding accuracy in modelling protein folds based solely on amino acid sequences. In spite of these advances, experimental structure determination remains critical. Here we report severe deviations between the experimental structure of a two-domain protein and its equivalent AI prediction. These observations are particularly relevant to the relative orientation of the domains within the global protein scaffold. We observe positional divergence in equivalent residues beyond 30 Å, and an overall RMSD of 7.7 Å. Significant deviation between experimental structures and AI-predicted models points to unusual conformations, insufficient training data and the high complexity of protein folding that ultimately lead to current limitations in protein structure prediction.
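
The RMSD quoted above summarizes positional divergence over paired atoms: the square root of the mean squared distance between corresponding coordinates. In practice it is computed after optimal superposition (e.g. the Kabsch algorithm), which this sketch omits; the coordinates below are invented for illustration:

```python
import math

def rmsd(coords_a, coords_b):
    """RMSD between paired 3D coordinates (assumes structures already aligned)."""
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical C-alpha positions: experimental model vs. prediction (Å)
exp_ca  = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
pred_ca = [(0.0, 0.0, 0.0), (3.8, 1.0, 0.0), (7.6, 2.0, 0.0)]
print(round(rmsd(exp_ca, pred_ca), 3))  # → 1.291
```

Because the metric averages squared distances, a mis-oriented domain — residues displaced by tens of Å, as reported above — dominates the global value even when each domain is individually well predicted.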

PMID:39922965 | DOI:10.1038/s41598-025-89516-w

Applying genetic algorithm to extreme learning machine in prediction of tumbler index with principal component analysis for iron ore sintering

Deep learning - Sat, 2025-02-08 06:00

Sci Rep. 2025 Feb 8;15(1):4777. doi: 10.1038/s41598-025-88755-1.

ABSTRACT

As a major component of the blast furnace burden, sinter with the desired quality performance needs to be produced in sinter plants. The tumbler index (TI) is one of the most important indices characterizing the quality of sinter, and it depends on the raw material proportions, operating system parameters and chemical compositions. To accurately predict TI, an integrated model is proposed in this study. First, to decrease the data dimensionality, the sintering production data are processed through principal component analysis (PCA), and the principal components with an accumulated contribution rate of no more than 95% are extracted as the inputs of a predictive model based on the Extreme Learning Machine (ELM). Second, a genetic algorithm (GA) is applied to improve the robustness and generalization performance of the original ELM. Finally, the model is examined using a year of actual production data from a sinter plant and compared with a single ELM, GA-BP and a deep learning method to confirm its superiority over the traditional models. The results show that the GA-ELM approach improves predictive accuracy, with 81.85% of TI predictions falling within an absolute error of 0.7%.
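
The 95% accumulated-contribution-rate cutoff described above amounts to keeping leading principal components while their cumulative explained-variance ratio stays at or below the threshold. A minimal sketch with hypothetical eigenvalues (the PCA decomposition itself, and the ELM and GA stages, are omitted):

```python
def select_components(explained_variance, threshold=0.95):
    """Count leading components whose cumulative explained-variance
    ratio stays within the threshold (input sorted in descending order)."""
    total = sum(explained_variance)
    cumulative, kept = 0.0, 0
    for v in explained_variance:
        if (cumulative + v) / total > threshold:
            break
        cumulative += v
        kept += 1
    return kept

# Hypothetical eigenvalues of the sintering-data covariance matrix
print(select_components([5.0, 2.5, 1.5, 0.6, 0.4]))  # → 3
```

The selected components then serve as the ELM inputs, while the GA tunes the ELM's randomly initialized hidden-layer weights for robustness.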

PMID:39922958 | DOI:10.1038/s41598-025-88755-1

BiAF: research on dynamic goat herd detection and tracking based on machine vision

Deep learning - Sat, 2025-02-08 06:00

Sci Rep. 2025 Feb 8;15(1):4754. doi: 10.1038/s41598-025-89231-6.

ABSTRACT

As technology advances, rangeland management is rapidly transitioning toward intelligent systems. To optimize grassland resources and implement scientific grazing practices, livestock grazing monitoring has become a pivotal area of research. Traditional methods, such as manual tracking and wearable monitoring, often disrupt the natural movement and feeding behaviors of grazing livestock, posing significant challenges for in-depth studies of grazing patterns. In this paper, we propose a machine vision-based grazing goat herd detection algorithm that enhances the streamlined ELAN module in YOLOv7-tiny, incorporates an optimized CBAM attention mechanism, refines the SPPCSPC module to reduce the parameter count, and improves the anchor boxes in YOLOv7-tiny to enhance target detection accuracy. The BiAF-YOLOv7 algorithm achieves precision, recall, F1 score, and mAP values of 94.5, 96.7, 94.8, and 96.0%, respectively, on the goat herd dataset. Combined with DeepSORT, our system successfully tracks goat herds, demonstrating the effectiveness of the BiAF-YOLOv7 algorithm as a tool for livestock grazing monitoring. This study not only validates the practicality of the proposed algorithm but also highlights the broader applicability of machine vision-based monitoring in large-scale environments. It provides innovative approaches to achieve grass-animal balance through information-driven methods, such as monitoring and tracking.
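
For reference, the precision, recall, and F1 values reported above derive from true-positive (TP), false-positive (FP), and false-negative (FN) detection counts; F1 is the harmonic mean of precision and recall. A minimal sketch with made-up counts, not the goat-herd dataset:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical tally: 90 correct detections, 5 spurious boxes, 10 missed goats
p, r, f = prf1(tp=90, fp=5, fn=10)
print(round(p, 3), round(r, 3), round(f, 3))  # precision ≈ 0.947, recall 0.9, F1 ≈ 0.923
```

mAP, the fourth metric above, additionally averages precision over recall levels and IoU thresholds, so it needs ranked detections rather than a single count triple.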

PMID:39922902 | DOI:10.1038/s41598-025-89231-6

Looking outside the box with a pathology aware AI approach for analyzing OCT retinal images in Stargardt disease

Deep learning - Sat, 2025-02-08 06:00

Sci Rep. 2025 Feb 8;15(1):4739. doi: 10.1038/s41598-025-85213-w.

ABSTRACT

Stargardt disease type 1 (STGD1) is a genetic disorder that leads to progressive vision loss, with no approved treatments currently available. The development of effective therapies faces the challenge of identifying appropriate outcome measures that accurately reflect treatment benefits. Optical Coherence Tomography (OCT) provides high-resolution retinal images, serving as a valuable tool for deriving potential outcome measures, such as retinal thickness. However, automated segmentation of OCT images, particularly in regions disrupted by degeneration, remains complex. In this study, we propose a deep learning-based approach that incorporates a pathology-aware loss function to segment retinal sublayers in OCT images from patients with STGD1. This method targets relatively unaffected regions for sublayer segmentation, ensuring accurate boundary delineation in areas with minimal disruption. In severely affected regions, identified by a box detection model, the total retina is segmented as a single layer to avoid errors. Our model significantly outperforms standard models, achieving an average Dice coefficient of [Formula: see text] for total retina and [Formula: see text] for retinal sublayers. The most substantial improvement was in the segmentation of the photoreceptor inner segment, with Dice coefficient increasing by [Formula: see text]. This approach provides a balance between granularity and reliability, making it suitable for clinical application in tracking disease progression and evaluating therapeutic efficacy.

PMID:39922894 | DOI:10.1038/s41598-025-85213-w

A deep learning-driven multi-layered steganographic approach for enhanced data security

Deep learning - Sat, 2025-02-08 06:00

Sci Rep. 2025 Feb 8;15(1):4761. doi: 10.1038/s41598-025-89189-5.

ABSTRACT

In the digital era, ensuring data integrity, authenticity, and confidentiality is critical amid growing interconnectivity and evolving security threats. This paper addresses key limitations of traditional steganographic methods, such as limited payload capacity, susceptibility to detection, and lack of robustness against attacks. A novel multi-layered steganographic framework is proposed, integrating Huffman coding, Least Significant Bit (LSB) embedding, and a deep learning-based encoder-decoder to enhance imperceptibility, robustness, and security. Huffman coding compresses data and obfuscates statistical patterns, enabling efficient embedding within cover images, while the deep learning encoder adds a layer of protection by concealing one image within another. Extensive evaluations using benchmark datasets, including Tiny ImageNet, COCO, and CelebA, demonstrate the approach's superior performance. Key contributions include achieving high visual fidelity with Structural Similarity Index Metric (SSIM) values consistently above 99%, robust data recovery with text recovery accuracy reaching 100% under standard conditions, and enhanced resistance to common attacks such as noise and compression. The proposed framework significantly improves robustness, security, and computational efficiency compared to traditional methods. By balancing imperceptibility and resilience, this paper advances secure communication and digital rights management, addressing modern challenges in data hiding through an innovative combination of compression, adaptive embedding, and deep learning techniques.
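
The LSB stage of the pipeline described above can be sketched in isolation — Huffman compression and the learned encoder-decoder are omitted, and the pixel values and payload bits below are invented:

```python
def embed_lsb(pixels, payload_bits):
    """Hide payload bits in the least significant bit of each pixel byte."""
    assert len(payload_bits) <= len(pixels), "cover image too small for payload"
    stego = list(pixels)
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the bit
    return stego

def extract_lsb(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [203, 91, 144, 76, 255, 18, 64, 129]  # toy grayscale pixel bytes
bits  = [1, 0, 1, 1, 0, 1]                    # toy payload
stego = embed_lsb(cover, bits)
print(extract_lsb(stego, len(bits)))  # → [1, 0, 1, 1, 0, 1]
```

Each pixel changes by at most 1 intensity level, which is why LSB embedding is imperceptible — and why the paper layers compression and a learned encoder on top for capacity and robustness.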

PMID:39922893 | DOI:10.1038/s41598-025-89189-5

Deep learning based gasket fault detection: a CNN approach

Deep learning - Sat, 2025-02-08 06:00

Sci Rep. 2025 Feb 8;15(1):4776. doi: 10.1038/s41598-025-85223-8.

ABSTRACT

Gasket inspection is a critical step in the quality control of a product. The proposed method automates the detection of misaligned or incorrectly fitting gaskets, ensuring timely repair action. The suggested method uses deep learning approaches to recognize and evaluate radiator images, with a focus on identifying misaligned or incorrectly installed gaskets, and a convolutional neural network (CNN) performs feature extraction and classification in a single seamlessly connected pipeline. A gasket inspection system based on a CNN architecture is developed in this work. The system consists of two sets of convolution layers, followed by two batch normalization layers, two ReLU layers, a max pooling layer, and finally a fully connected layer for classification of gasket images. The obtained results indicate that our system has great potential for practical applications in the manufacturing industry. Moreover, our system provides a reliable and efficient mechanism for quality control, which can help reduce the risk of defects and ensure product reliability.

PMID:39922855 | DOI:10.1038/s41598-025-85223-8
