Literature Watch

MuSARCyto: Multi-Head Self-Attention-Based Representation Learning for Unsupervised Clustering of Cytometry Data

Deep learning - Mon, 2025-08-11 06:00

Cytometry A. 2025 Aug 11. doi: 10.1002/cyto.a.24956. Online ahead of print.

ABSTRACT

Cytometry enables simultaneous assessment of individual cellular characteristics, offering vital insights for diagnosis, prognosis, and monitoring of various human diseases. Despite its significance, the process of manual cell clustering, or gating, remains labor-intensive, tedious, and highly subjective, which restricts its broader application in both research and clinical settings. Although automated clustering solutions have been developed, manual gating continues to be the clinical gold standard, possibly due to the suboptimal performance of automated solutions. We hypothesize that their performance can be improved via an appropriate representation of data from the clustering point of view. To this end, this work presents a novel unsupervised deep learning (DL) architecture wherein an efficient cytometry data representation is learned that helps discover cluster assignments. Specifically, we propose MuSARCyto, a multi-head self-attention-based representation learning network (RN) for the unsupervised clustering of cytometry data, utilizing a fully-connected representation network backbone. To benchmark MuSARCyto against the state-of-the-art cytometry clustering methods, we propose a cluster evaluation metric, the adjudicator score (Ad_n), which is an ensemble of prevalent cluster evaluation metrics. Extensive experimentation demonstrates the superior performance of MuSARCyto against existing state-of-the-art cytometry clustering methods across six publicly available mass and flow cytometry datasets. The proposed DL architectures are small and easily deployable in clinical settings. This work further suggests using DL methods for identifying meaningful clusters, particularly in the context of critical immunology applications.
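The abstract describes Ad_n only as an ensemble of prevalent cluster evaluation metrics, without giving the formula. As a rough illustration of the general idea (not the authors' formulation), a rank-based ensemble of three common scikit-learn metrics could look like this; the dataset, candidate clusterings, and aggregation rule are all invented for the sketch:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

def adjudicator_score(X, candidate_labels):
    """Mean rank of each candidate clustering under three common metrics.

    Davies-Bouldin is negated because lower is better; the candidate with
    the highest mean rank wins.
    """
    sil = [silhouette_score(X, y) for y in candidate_labels]
    ch = [calinski_harabasz_score(X, y) for y in candidate_labels]
    db = [-davies_bouldin_score(X, y) for y in candidate_labels]
    return np.mean([np.argsort(np.argsort(m)) for m in (sil, ch, db)], axis=0)

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
ks = (2, 3, 4, 8)
candidates = [KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
              for k in ks]
scores = adjudicator_score(X, candidates)
best_k = ks[int(np.argmax(scores))]
```

Rank aggregation sidesteps the fact that the raw metrics live on incompatible scales, which is one reason ensembling them directly is nontrivial.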

PMID:40785593 | DOI:10.1002/cyto.a.24956

Categories: Literature Watch

DBFF-Net: A Dual-Branch Feature Fusion Network for low angular resolution fiber orientation distribution reconstruction

Deep learning - Mon, 2025-08-11 06:00

Magn Reson Med. 2025 Aug 11. doi: 10.1002/mrm.70025. Online ahead of print.

ABSTRACT

PURPOSE: Estimation of Fiber Orientation Distribution (FOD) is an essential step in tractography. However, traditional reconstruction methods such as Multi-shell Multi-Tissue Constrained Spherical Deconvolution (MSMT-CSD) are demanding in terms of data quality and hardware equipment, limiting their application to low angular resolution data. Deep learning has demonstrated significant potential for fiber orientation distribution reconstruction in recent years. Nevertheless, there is still room for improvement in the models, particularly in terms of reconstruction accuracy and the retention of fine details. This study aims to develop an efficient and reliable deep-learning framework to improve the accuracy of fiber orientation distribution reconstruction, namely, the Dual-Branch Feature Fusion Network (DBFF-Net).

METHODS: DBFF-Net learns the key features of high angular resolution FOD through a multi-branch network architecture, which incorporates high-quality MSMT-CSD data as the target during the training process, and by fusing multi-scale feature information, significantly improves the FOD reconstruction performance of Low Angular Resolution Diffusion Imaging (LARDI) data.

RESULTS: The experimental results show that DBFF-Net surpasses existing traditional and deep-learning methods across multiple metrics, particularly in the fiber crossing regions and under LARDI data conditions.

CONCLUSION: DBFF-Net provides an efficient and reliable FOD reconstruction scheme and offers a new white matter fiber imaging tool in clinical and scientific research.

PMID:40785496 | DOI:10.1002/mrm.70025

Categories: Literature Watch

A Deep Learning-Based Automatic Recognition Model for Polycystic Ovary Ultrasound Images

Deep learning - Mon, 2025-08-11 06:00

Balkan Med J. 2025 Aug 11. doi: 10.4274/balkanmedj.galenos.2025.2025-5-114. Online ahead of print.

ABSTRACT

BACKGROUND: Polycystic ovary syndrome (PCOS) has a significant impact on endocrine metabolism, reproductive function, and mental health in women of reproductive age. Ultrasound remains an essential diagnostic tool for PCOS, particularly in individuals presenting with oligomenorrhea or ovulatory dysfunction accompanied by polycystic ovaries, as well as hyperandrogenism associated with polycystic ovaries. However, the accuracy of ultrasound in identifying polycystic ovarian morphology remains variable.

AIMS: To develop a deep learning model capable of rapidly and accurately identifying PCOS using ovarian ultrasound images.

STUDY DESIGN: Prospective diagnostic accuracy study.

METHODS: This prospective study included data from 1,751 women with suspected PCOS who presented at two affiliated hospitals at Central South University, with clinical and ultrasound information collected and archived. Patients from center 1 were randomly divided into a training set and an internal validation set in a 7:3 ratio, while patients from center 2 served as the external validation set. Using the YOLOv11 deep learning framework, an automated recognition model for ovarian ultrasound images in PCOS cases was constructed, and its diagnostic performance was evaluated.

RESULTS: Ultrasound images from 933 patients (781 from center 1 and 152 from center 2) were analyzed. The mean average precision of the YOLOv11 model in detecting the target ovary was 95.7%, 97.6%, and 97.8% for the training, internal validation, and external validation sets, respectively. For diagnostic classification, the model achieved an F1 score of 95.0% in the training set and 96.9% in both validation sets. The area under the curve values were 0.953, 0.973, and 0.967 for the training, internal validation, and external validation sets, respectively. The model also demonstrated significantly faster evaluation of a single ovary compared to clinicians (doctor, 5.0 seconds; model, 0.1 seconds; p < 0.01).

CONCLUSION: The YOLOv11-based automatic recognition model for PCOS ovarian ultrasound images exhibits strong target detection and diagnostic performance. This approach can streamline the follicle counting process in conventional ultrasound and enhance the efficiency and generalizability of ultrasound-based PCOS assessment.

PMID:40785235 | DOI:10.4274/balkanmedj.galenos.2025.2025-5-114

Categories: Literature Watch

Multimodal Deep Learning Approaches for Early Detection of Alzheimer's Disease: A Comprehensive Systematic Review of Image Processing Techniques

Deep learning - Mon, 2025-08-11 06:00

Curr Alzheimer Res. 2025 Aug 7. doi: 10.2174/0115672050401817250721190509. Online ahead of print.

ABSTRACT

INTRODUCTION: Alzheimer's disease (AD) is the most common form of dementia, and it is important to diagnose the disease at an early stage to help people with the condition and their families. Recently, artificial intelligence, especially deep learning approaches applied to medical imaging, has shown potential in enhancing AD diagnosis. This comprehensive review investigates the current state of the art in multimodal deep learning for the early diagnosis of Alzheimer's disease using image processing.

METHODS: The research underpinning this review spanned several months. Numerous deep learning architectures are examined, including CNNs, transfer learning methods, and combined models that use different imaging modalities, such as structural MRI, functional MRI, and amyloid PET. The latest work on explainable AI (XAI) is also reviewed to improve the understandability of the models and identify the particular regions of the brain related to AD pathology.

RESULTS: The results indicate that multimodal approaches generally outperform single-modality methods, and three-dimensional (volumetric) data provides a better form of representation compared to two-dimensional images.

DISCUSSION: Current challenges are also discussed, including insufficient and/or poorly prepared datasets, computational expense, and the lack of integration with clinical practice. The findings highlight the potential of applying deep learning approaches for early AD diagnosis and for directing future research pathways.

CONCLUSION: The integration of multimodal imaging with deep learning techniques presents an exciting direction for developing improved AD diagnostic tools. However, significant challenges remain in achieving accurate, reliable, and understandable clinical applications.

PMID:40785178 | DOI:10.2174/0115672050401817250721190509

Categories: Literature Watch

Investigation of metric properties of the Londrina Activities of Daily Living Protocol in patients with idiopathic pulmonary fibrosis

Idiopathic Pulmonary Fibrosis - Mon, 2025-08-11 06:00

Physiother Theory Pract. 2025 Aug 10:1-12. doi: 10.1080/09593985.2025.2544192. Online ahead of print.

ABSTRACT

BACKGROUND: Patients with idiopathic pulmonary fibrosis (IPF) often experience a decline in activities of daily living (ADL) due to progressive lung impairment. Therefore, it is essential to evaluate ADL performance in a way that reflects real-life challenges.

PURPOSE: This study aims to investigate the metric properties of the Londrina ADL Protocol in patients with IPF.

METHODS: Thirty-three patients (66.7% men, age: 66.7 ± 5.2 years) participated in this observational metric analysis study. We evaluated the protocol's validity, reliability, standard error of measurement (SEM), and minimal detectable change with 95% confidence (MDC95). To assess validity, we calculated correlation coefficients between the Londrina-ADL protocol and the Glittre Test (TGlittre), the 6-Minute Walk Test (6MWT), the London Chest-ADL (LCADL) Scale, hand grip strength, knee extension strength, and respiratory functions. Intra-rater reliability was analyzed using the intraclass correlation coefficient (ICC), paired sample t-test, SEM, MDC95, and the learning effect.

RESULTS: The Londrina-ADL protocol showed significant correlations with the TGlittre, the 6MWT, the LCADL, hand grip strength, knee extension strength, and respiratory functions (r = 0.742, r = -0.619, r = 0.665, r = -0.601, r = -0.587, p < .001, respectively). The protocol demonstrated excellent intra-rater reliability (ICC = 0.939). Test durations differed significantly between tests (p < .001). The SEM and MDC95 values were 8.69 seconds and 24.01 seconds, respectively, with a learning effect of 4.7% observed between the first and second tests.
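The SEM and MDC95 values above can be checked for internal consistency, assuming the conventional formulas SEM = SD·sqrt(1 - ICC) and MDC95 = 1.96·sqrt(2)·SEM (the study does not state which variants it used, so this is a hedged sketch, not a reproduction of its analysis):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change with 95% confidence."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Back-solve the SD implied by the reported SEM (8.69 s) and ICC (0.939)...
sd = 8.69 / math.sqrt(1.0 - 0.939)
assert abs(sem(sd, 0.939) - 8.69) < 1e-9

# ...then MDC95 from the reported SEM:
change = mdc95(8.69)  # about 24.09 s, close to the reported 24.01 s
```

The small gap between the computed 24.09 s and the reported 24.01 s is what one would expect if the authors used unrounded intermediate values.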

CONCLUSION: Given its strong metric properties, the Londrina-ADL protocol may serve as a practical and sensitive tool for evaluating daily functional capacity and guiding individualized rehabilitation strategies in patients with IPF.

PMID:40785163 | DOI:10.1080/09593985.2025.2544192

Categories: Literature Watch

Adverse drug reactions, particularly liver disorders, drive interruptions in anti-tuberculosis treatment: A retrospective cohort study

Drug-induced Adverse Events - Mon, 2025-08-11 06:00

Br J Clin Pharmacol. 2025 Aug 11. doi: 10.1002/bcp.70197. Online ahead of print.

ABSTRACT

AIMS: Adverse drug reactions (ADRs) are a key driver of missed doses of anti-tuberculosis (TB) therapy. We aimed to determine the relative burden of ADR-driven missed doses, the missed dose patterns associated with ADRs, and the association between specific ADRs and missed doses.

METHODS: In this retrospective cohort study, adults (≥18 years) who began the standard 6-month drug-sensitive anti-TB regimen in an outpatient facility in Riga, Latvia (May 2015-September 2022) and missed at least one dose of treatment were included. Data were collected from medical records and observed therapy records. Missed doses were subdivided into early discontinuation or sporadically missed. Descriptive analyses and lasagne plots were used.

RESULTS: Across 174 patients, 54 (31.0%, CI: 24.2-37.9%) missed doses due to ADRs. Of 31 320 doses, 4217 (13.5%, CI: 13.1-13.9%) were missed, 20.9% (880/4217, CI: 19.6-22.1%) of which were due to ADRs. Eighteen (10.3%) of the 174 patients discontinued treatment early, two of which (11.1%) were due to ADRs. Doses missed due to ADRs caused longer yet less frequent periods of sporadic missed doses: 56.4% (479/849) of sporadic missed doses were 1 day in length vs. only 9.1% (7/77) for ADR-related ones. Hepatobiliary disorders were the leading ADR group causing missed doses and were associated with long durations of missed doses (median 15.0, CI: 13.0-22.0).

CONCLUSION: Our study underscores the importance of ADRs as a cause of missed doses of treatment, particularly hepatobiliary disorders. Regimens that are less prone to ADRs and strong healthcare system support structures for patients with ADRs are required to minimize missed doses, reducing unfavourable outcomes.

PMID:40785321 | DOI:10.1002/bcp.70197

Categories: Literature Watch

Deep learning and radiomics fusion for predicting the invasiveness of lung adenocarcinoma within ground glass nodules

Deep learning - Sun, 2025-08-10 06:00

Sci Rep. 2025 Aug 11;15(1):29285. doi: 10.1038/s41598-025-13447-9.

ABSTRACT

Microinvasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC) require distinct treatment strategies and are associated with different prognoses, underscoring the importance of accurate differentiation. This study aims to develop a predictive model that combines radiomics and deep learning to effectively distinguish between MIA and IAC. In this retrospective study, 252 pathologically confirmed cases of ground-glass nodules (GGNs) were included, with 177 allocated to the training set and 75 to the testing set. Radiomics, 2D deep learning, and 3D deep learning models were constructed based on CT images. In addition, two fusion strategies were employed to integrate these modalities: early fusion, which concatenates features from all modalities prior to classification, and late fusion, which ensembles the output probabilities of the individual models. The predictive performance of all five models was evaluated using the area under the receiver operating characteristic curve (AUC), and DeLong's test was performed to compare differences in AUC between models. The radiomics model achieved an AUC of 0.794 (95% CI: 0.684-0.898), while the 2D and 3D deep learning models achieved AUCs of 0.754 (95% CI: 0.594-0.882) and 0.847 (95% CI: 0.724-0.945), respectively, in the testing set. Among the fusion models, the late fusion strategy demonstrated the highest predictive performance, with an AUC of 0.898 (95% CI: 0.784-0.962), outperforming the early fusion model, which achieved an AUC of 0.857 (95% CI: 0.731-0.936). Although the differences were not statistically significant, the late fusion model yielded the highest numerical values for diagnostic accuracy, sensitivity, and specificity across all models. The fusion of radiomics and deep learning features shows potential in improving the differentiation of MIA and IAC in GGNs. The late fusion strategy demonstrated promising results, warranting further validation in larger, multicenter studies.
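The two fusion strategies described above, feature concatenation before classification (early fusion) versus ensembling of per-model output probabilities (late fusion), can be sketched generically. The feature blocks and classifiers below are stand-ins, not the study's radiomics or CNN pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-modality data: pretend the first 20 columns are "radiomics"
# features and the last 20 are "deep" features.
X, y = make_classification(n_samples=400, n_features=40, random_state=0)
Xr, Xd = X[:, :20], X[:, 20:]
Xr_tr, Xr_te, Xd_tr, Xd_te, y_tr, y_te = train_test_split(
    Xr, Xd, y, test_size=0.3, random_state=0)

# Early fusion: concatenate features from all modalities, train one model.
early = LogisticRegression(max_iter=1000).fit(np.hstack([Xr_tr, Xd_tr]), y_tr)
p_early = early.predict_proba(np.hstack([Xr_te, Xd_te]))[:, 1]

# Late fusion: train one model per modality, then average the probabilities.
m_r = LogisticRegression(max_iter=1000).fit(Xr_tr, y_tr)
m_d = LogisticRegression(max_iter=1000).fit(Xd_tr, y_tr)
p_late = (m_r.predict_proba(Xr_te)[:, 1] + m_d.predict_proba(Xd_te)[:, 1]) / 2
```

Late fusion keeps each modality's model independent, which is one reason it can outperform early fusion when the modalities differ greatly in dimensionality or noise, as the study's results suggest.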

PMID:40784883 | DOI:10.1038/s41598-025-13447-9

Categories: Literature Watch

Application of Artificial Intelligence in Bone Quality and Quantity Assessment for Dental Implant Planning: A Scoping Review

Deep learning - Sun, 2025-08-10 06:00

J Dent. 2025 Aug 8:106027. doi: 10.1016/j.jdent.2025.106027. Online ahead of print.

ABSTRACT

OBJECTIVES: To assess how artificial intelligence (AI) models perform in evaluating bone quality and quantity in the preoperative planning process for dental implants.

DATA: This review included studies that utilized AI-based assessments of bone quality and/or quantity based on radiographic images in the preoperative phase.

SOURCES: Studies published in English before April 2025 were used in this review, which were obtained from searches in PubMed/MEDLINE, Embase, Web of Science, Scopus, and the Cochrane Library, as well as from manual searches.

STUDY SELECTION: Eleven studies met the inclusion criteria. Five studies focused on bone quality evaluation and six studies included volumetric assessments using AI models. The performance measures included accuracy, sensitivity, specificity, precision, F1 score, and Dice coefficient, and were compared with human expert evaluations. AI models demonstrated high accuracy (76.2%-99.84%), high sensitivity (78.9%-100%), and high specificity (66.2%-99%).

CONCLUSIONS: AI models have potential for the evaluation of bone quality and quantity, although standardization and external validation studies are lacking. Future studies should propose multicenter datasets, integration into clinical workflows, and the development of refined models to better reflect real-life conditions.

CLINICAL SIGNIFICANCE: AI has the potential to offer clinicians reliable automated evaluations of bone quality and quantity, with the promise of a fully automated system for implant planning. It may also support more efficient, evidence-based clinical decision-making in preoperative workflows.

PMID:40784481 | DOI:10.1016/j.jdent.2025.106027

Categories: Literature Watch

Automated coronary artery segmentation/tissue characterization and detection of lipid-rich plaque: An integrated backscatter intravascular ultrasound study

Deep learning - Sun, 2025-08-10 06:00

Int J Cardiol. 2025 Aug 8:133744. doi: 10.1016/j.ijcard.2025.133744. Online ahead of print.

ABSTRACT

BACKGROUND: Intravascular ultrasound (IVUS)-based tissue characterization has been used to detect vulnerable plaque or lipid-rich plaque (LRP). Recently, advancements in artificial intelligence (AI) technology have enabled automated coronary arterial plaque segmentation and tissue characterization. The purpose of this study was to evaluate the feasibility and diagnostic accuracy of a deep learning model for plaque segmentation, tissue characterization and identification of LRP.

METHODS: A total of 1,098 IVUS images from 67 patients who underwent IVUS-guided percutaneous coronary intervention were selected for the training group, while 1,100 IVUS images from 100 vessels (88 patients) were used for the validation group. A 7-layer U-Net++ was applied for automated coronary artery segmentation and tissue characterization. Segmentation and quantification of the external elastic membrane (EEM), lumen, and guidewire artifact were performed and compared with manual measurements. Plaque tissue characterization was conducted using integrated backscatter (IB)-IVUS as the gold standard. LRP was defined as %lipid area of ≥65%.

RESULTS: The deep learning model accurately segmented EEM and lumen. AI-predicted %lipid area (R = 0.90, P < 0.001), %fibrosis area (R = 0.89, P < 0.001), %dense fibrosis area (R = 0.81, P < 0.001), and %calcification area (R = 0.89, P < 0.001) showed strong correlations with IB-IVUS measurements. The model predicted LRP with a sensitivity of 62%, specificity of 94%, positive predictive value of 69%, negative predictive value of 92%, and an area under the receiver operating characteristic curve of 0.919 (95% CI: 0.902-0.934).
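The four diagnostic measures reported above all derive from a 2x2 confusion matrix. The counts below are invented purely so that the rates reproduce the reported 62/94/69/92%; they are not the study's actual data:

```python
def diagnostics(tp, fp, tn, fn):
    # Standard definitions of the four reported measures.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts consistent with the reported percentages:
d = diagnostics(tp=62, fp=28, tn=440, fn=38)
```

The asymmetry (high specificity/NPV, lower sensitivity/PPV) is typical when the positive class, here LRP, is the rarer one.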

CONCLUSION: The deep-learning model demonstrated accurate automatic segmentation and tissue characterization of human coronary arteries, showing promise for identifying LRP.

PMID:40784375 | DOI:10.1016/j.ijcard.2025.133744

Categories: Literature Watch

Repurposing Asparaginase Therapy to Target Cisplatin-Resistant Cancer Cells

Drug Repositioning - Sun, 2025-08-10 06:00

Fundam Clin Pharmacol. 2025 Oct;39(5):e70044. doi: 10.1111/fcp.70044.

ABSTRACT

BACKGROUND: Cisplatin and its derivatives remain a cornerstone in the treatment of solid malignancies. Resistance is a major factor limiting their clinical utility.

OBJECTIVES: In the present study, we set out to interrogate therapeutic approaches to target cisplatin-resistant cancer cells. We focused on therapies exploiting metabolic pathways that are altered in drug-resistant cells. We sought to find an existing therapy that has monotherapy efficacy against cisplatin-resistant cancer cells that can also re-sensitize to cisplatin.

METHODS: We used lung and ovarian cancer cell lines with acquired resistance to cisplatin together with drug sensitivity assays, conducted both with monotherapies and cisplatin combinations.

RESULTS: We show that cancer cell lines with acquired resistance to cisplatin have altered levels of enzymes involved in glutamine metabolism, which can result in differential sensitivity to targeted agents. We show that expression of one of these enzymes, the glutamate-cystine antiporter SLC7A11 (up-regulated 6-fold in a cisplatin-resistant lung cancer cell line), has potential prognostic significance in lung cancer but not ovarian cancer. After identifying a common dependency of cisplatin-resistant cancer cells upon extracellular glutamine, we then evaluate the utility of the long-standing anti-leukemic therapy asparaginase (ASNase), which possesses both asparaginase and glutaminase activity, as a potential approach. We show ASNase preferentially inhibits the proliferation of cisplatin-resistant cancer cells and can potentially re-sensitize these cells to cisplatin.

CONCLUSIONS: Our results underpin the prevalence of altered metabolism in cisplatin-resistant cells and highlight the potential utility of re-purposing ASNase to target these cells, warranting further investigation.

PMID:40784667 | DOI:10.1111/fcp.70044

Categories: Literature Watch

The use of knowledge graphs for drug repurposing: From classical machine learning algorithms to graph neural networks

Drug Repositioning - Sun, 2025-08-10 06:00

Comput Biol Med. 2025 Aug 9;196(Pt C):110873. doi: 10.1016/j.compbiomed.2025.110873. Online ahead of print.

ABSTRACT

Drug repurposing, the development of new therapeutic indications for existing drugs, is a promising strategy in drug development. Computational methods and artificial intelligence may be used to identify new drug repurposing candidates. Knowledge graph (KG)-based methods have emerged as powerful tools for modeling and predicting drug-disease relationships because of their intuitive way of exploiting biomedical knowledge and data. This review provides an overview of computational drug repurposing methods based on KGs. The motivation for adopting KG-based knowledge representations is discussed, together with traditional machine learning and deep learning approaches, followed by an analysis of selected tools, their construction, link prediction capabilities, and inherent advantages and limitations.
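Link prediction over a drug-disease KG is the core operation such reviews cover. As a toy illustration of one common family, a TransE-style translational score (plausibility of a triple falls with the distance ||h + r - t||), with invented entities and embeddings rather than any real biomedical graph:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = {e: rng.normal(size=8) for e in ["aspirin", "headache", "diabetes"]}
treats = rng.normal(size=8)  # embedding of the "treats" relation

# Plant a known fact so that emb["aspirin"] + treats ≈ emb["headache"].
emb["headache"] = emb["aspirin"] + treats

def score(head, tail):
    # TransE plausibility of the triple (head, treats, tail); higher = better.
    return -np.linalg.norm(emb[head] + treats - emb[tail])

# Rank candidate diseases as repurposing targets for "aspirin":
ranked = sorted(["headache", "diabetes"], key=lambda t: -score("aspirin", t))
```

In a real system the embeddings are learned from known triples, and high-scoring unseen drug-disease pairs become repurposing hypotheses; graph neural networks replace the fixed translational score with learned message passing.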

PMID:40784078 | DOI:10.1016/j.compbiomed.2025.110873

Categories: Literature Watch

TRIAGE-GS: protocol for a randomised controlled trial of a genomics-first approach to rare disease diagnosis for patients awaiting assessment by a clinical geneticist

Orphan or Rare Diseases - Sun, 2025-08-10 06:00

BMJ Open. 2025 Aug 10;15(8):e107603. doi: 10.1136/bmjopen-2025-107603.

ABSTRACT

INTRODUCTION: Rare diseases (RD) are collectively common and often genetic. Families value and can benefit from precise molecular diagnoses. Prolonged diagnostic odysseys exacerbate the burden of RD on patients, families and the healthcare system. Genome sequencing (GS) is a near-comprehensive test for genetic RD, but existing care models, in which consultation with a medical geneticist is a prerequisite for testing, predate GS and may limit access or delay diagnosis. Evidence is needed to guide the optimal positioning of GS in care pathways. While initiating GS prior to geneticist consultation has been trialled in acute care settings, there are no data to inform the utility of this approach in outpatient care, where most patients with RD seek genetics services. We aim to evaluate the diagnostic yield, time to diagnosis, clinical and personal utility and incremental cost-effectiveness of GS initiated at the time of referral triage (pre-geneticist evaluation) compared with standard of care.

METHODS AND ANALYSIS: 200 paediatric patients referred to one of two large genetics centres in Ontario, Canada, for suspected genetic RD will be randomised in a 1:1 ratio to the intervention (GS first) or standard of care (geneticist first) arm. An unblinded, permuted block randomisation design will be used, stratified within each recruitment site by phenotype and prior genetic testing. The primary outcome measure is time to genetic diagnosis or to cessation of active follow-up. Survival analysis will be used to analyse time-to-event data. Additional measures will include patient-reported and family-reported measures of satisfaction, understanding and perceived test utility, clinician-reported measures of perceived test utility and management impact, and healthcare system utilisation and costs.

ETHICS AND DISSEMINATION: This study was approved by Clinical Trials Ontario. Results will be disseminated, at minimum, via peer-reviewed journals, professional conferences and internal reports to funding bodies. Efforts will be made to share aggregated study results with participants and their families.

TRIAL REGISTRATION NUMBER: NCT06935019.

PMID:40784761 | DOI:10.1136/bmjopen-2025-107603

Categories: Literature Watch

Synergistic effects of oncogene inhibition and pyruvate dehydrogenase kinase blockade in resistant NSCLC cells

Pharmacogenomics - Sun, 2025-08-10 06:00

Biochim Biophys Acta Mol Basis Dis. 2025 Aug 8:168014. doi: 10.1016/j.bbadis.2025.168014. Online ahead of print.

ABSTRACT

The metabolic reprogramming of tumor cells plays a critical role in cancer progression, contributing to drug resistance and tumor survival. Tyrosine kinase inhibitors (TKIs) have shown promising clinical results by targeting specific signaling pathways in cancer cell proliferation, survival, and metastasis and are now standard of care for NSCLC with actionable mutations. However, secondary resistance to TKIs remains a significant challenge. Here, we explored the rationale behind combining TKIs with an inhibitor of glucose metabolism (dichloroacetate, DCA), focusing on the synergistic effects of dual inhibition of oncogenic signaling and metabolic reprogramming. We selected three NSCLC cell line models (H1975, H1993, A549) with EGFR/MET/KRAS mutations and determined the optimal DCA dose (500 μM) to reverse the Warburg effect. TKIs in combination with DCA (CI < 1, indicating synergy) altered cell metabolism by improving oxidative phosphorylation via reduced glucose consumption (~50%, p < 0.05) and increased ATP (~50%, p < 0.0001), particularly mitoATP, confirmed by metabolite levels. The combination also reduced cell proliferation (S phase, p < 0.001), increased cell death (~40% less MMP, p < 0.0001; ~1.6-fold more BIM; 2.5-fold more autophagy), and blocked invasion (~3-fold fewer protrusions). Our findings show DCA potentiates TKIs at lower doses, likely via reversal of the Warburg effect. These changes in tumor behaviour lead to a higher pro-apoptotic status responsible for an increased tumor response; in parallel, the lower doses reduced alternative evasion pathways, contributing to decreased tumor invasion and resistance. This study sheds light on a new potential combined therapeutic approach to improve clinical outcomes in targeted cancer therapy.

PMID:40784600 | DOI:10.1016/j.bbadis.2025.168014

Categories: Literature Watch

Increasing Statin Prescribing through a Pharmacogenomics-Guided Initiative

Pharmacogenomics - Sun, 2025-08-10 06:00

J Am Pharm Assoc (2003). 2025 Aug 8:102898. doi: 10.1016/j.japh.2025.102898. Online ahead of print.

ABSTRACT

BACKGROUND: Despite clear benefits of statin therapy, utilization remains suboptimal. Concern for adverse effects is a top reason for declining or discontinuing a statin. Certain genetic variations can predispose a patient to statin intolerance.

OBJECTIVE: To offer Veterans pharmacogenomics testing to help guide statin therapy decision making and increase appropriate statin prescribing within a single Veterans Affairs Health Care System (VAHCS).

METHODS: A team of pharmacists designed a quality improvement (QI) initiative which included personalized phone calls offering pharmacogenomics testing and/or statin initiation. Patients initiated on a statin were assessed for adherence and tolerability at least four weeks after initiation.

RESULTS: A total of 107 patients were contacted for the statin initiative. About half [n = 50 (47%)] initiated a statin, and of those, 45 (90%) completed pharmacogenomics testing for a genomics-guided statin prescription. Most patients initiated on a statin (72%) reported adherence and tolerance at least 4 weeks after starting statin therapy.

CONCLUSION: Pharmacogenomics testing can potentially be used as a tool in the statin initiation process to facilitate a patient-centered discussion and increase shared clinical decision making.

PMID:40784538 | DOI:10.1016/j.japh.2025.102898

Categories: Literature Watch

Automated weed and crop recognition and classification model using deep transfer learning with optimization algorithm

Deep learning - Sun, 2025-08-10 06:00

Sci Rep. 2025 Aug 10;15(1):29279. doi: 10.1038/s41598-025-15275-3.

ABSTRACT

Weeds and crops compete relentlessly for the same resources, which leads to potential declines in crop production and increased agricultural expenses. Conventional weed control models, such as extensive pesticide use, come with the burden of environmental pollution and advancing weed resistance. As the need for organic agriculture and pollutant-free products increases, there is a crucial need for revolutionary solutions. The rise of smart agricultural tools, including satellite technology, unmanned aerial vehicles (UAVs), and intelligent robots, proves to be paramount in dealing with weed-related challenges. Deep learning (DL)-based object detection models have been applied in numerous applications; however, the need for instance-level analyses of weed datasets places constraints on the applicability of influential DL methods. Artificial intelligence (AI)-led image analysis for weed recognition, mainly machine learning (ML) and deep learning (DL) utilizing images from cultivated lands, has commonly been employed in the literature for identifying numerous kinds of weeds that grow alongside crops. This study develops an Automated Weed Recognition and Classification model using a Deep Learning Model with Lemurs Optimization (AWRC-DLMLO). The main purpose of the AWRC-DLMLO method is to effectively detect and classify weeds and crops. In the proposed AWRC-DLMLO technique, Gaussian filtering (GF) is first applied during image pre-processing to eliminate unwanted noise. Plant segmentation is then performed using the Residual Attention U-Net (RA-UNet) to generate segments. The ShuffleNetV2 approach is exploited in the AWRC-DLMLO method to derive feature vectors. Next, the lemurs optimization algorithm (LOA) is applied to tune the hyperparameters of the DL technique, further enhancing its performance. Finally, the cascading Q-network (CQN) model is employed for the classification process. To demonstrate the improved weed detection performance of the proposed AWRC-DLMLO method, a wide range of simulations was conducted. The extensive results highlighted the improvement of the developed AWRC-DLMLO technique over other existing models.

PMID:40785014 | DOI:10.1038/s41598-025-15275-3

Categories: Literature Watch

Diabetic retinopathy classification using a multi-attention residual refinement architecture

Deep learning - Sun, 2025-08-10 06:00

Sci Rep. 2025 Aug 10;15(1):29266. doi: 10.1038/s41598-025-15269-1.

ABSTRACT

Diabetic Retinopathy (DR) is a complication caused by diabetes that can damage the retina, leading to blurred vision and even blindness. We propose a multi-attention residual refinement architecture that enhances conventional CNN performance through three strategic modifications: class-specific multi-attention for diagnostic feature weighting, space-to-depth preprocessing for improved spatial information preservation, and Squeeze-and-Excitation blocks for enhanced representational capacity. Our framework demonstrates universal applicability across different CNN architectures (ResNet, DenseNet, EfficientNet, MobileNet), consistently achieving 2-5% performance improvements on the EyePACS dataset while maintaining computational efficiency. The attention mechanism provides interpretable visualizations that align with clinical pathological patterns, validating the model's diagnostic reasoning.
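Two of the building blocks named above, space-to-depth rearrangement and squeeze-and-excitation (SE) gating, can be sketched in plain NumPy. Shapes, toy weights, and the tiny excitation MLP are illustrative only, not the paper's architecture:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange (C, H, W) -> (C*block*block, H/block, W/block), losslessly."""
    c, h, w = x.shape
    x = x.reshape(c, h // block, block, w // block, block)
    return x.transpose(0, 2, 4, 1, 3).reshape(
        c * block * block, h // block, w // block)

def squeeze_excite(x, w1, w2):
    """SE gate: global-average squeeze, small MLP excitation, channel rescale."""
    s = x.mean(axis=(1, 2))              # squeeze: one value per channel
    z = np.maximum(w1 @ s, 0.0)          # ReLU bottleneck
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # sigmoid gates in (0, 1)
    return x * g[:, None, None]          # reweight each channel

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
y = space_to_depth(x)                    # (8, 2, 2): same elements, just moved
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(1, 2)), rng.normal(size=(2, 1))
x_se = squeeze_excite(x, w1, w2)         # channels rescaled, shape unchanged
```

Space-to-depth trades spatial resolution for channels without discarding pixels (unlike strided convolution or pooling), which is why it helps preserve the fine lesion detail DR grading depends on.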

PMID:40785010 | DOI:10.1038/s41598-025-15269-1

Categories: Literature Watch

Decoding fetal motion in 4D ultrasound with DeepLabCut

Deep learning - Sun, 2025-08-10 06:00

J Med Ultrason (2001). 2025 Aug 11. doi: 10.1007/s10396-025-01557-w. Online ahead of print.

ABSTRACT

PURPOSE: This study aimed to objectively and quantitatively analyze fetal motor behavior using DeepLabCut (DLC), a markerless posture estimation tool based on deep learning, applied to four-dimensional ultrasound (4DUS) data collected during the second trimester. We propose a novel clinical method for precise assessment of fetal neurodevelopment.

METHODS: Fifty 4DUS video recordings of normal singleton fetuses aged 12 to 22 gestational weeks were analyzed. Eight fetal joints were manually labeled in 2% of each video to train a customized DLC model. The model's accuracy was evaluated using likelihood scores. Intra- and inter-rater reliability of manual labeling were assessed using intraclass correlation coefficients (ICC). Angular velocity time series derived from joint coordinates were analyzed to quantify fetal movement patterns and developmental coordination.

RESULTS: Manual labeling demonstrated excellent reproducibility (inter-rater ICC = 0.990, intra-rater ICC = 0.961). The trained DLC model achieved a mean likelihood score of 0.960, confirming high tracking accuracy. Kinematic analysis revealed developmental trends: localized rapid limb movements were common at 12-13 weeks; movements became more coordinated and systemic by 18-20 weeks, reflecting advancing neuromuscular maturation. Although a modest increase in tracking accuracy was observed with gestational age, this trend did not reach statistical significance (p < 0.001).

CONCLUSION: DLC enables precise quantitative analysis of fetal motor behavior from 4DUS recordings. This AI-driven approach offers a promising, noninvasive alternative to conventional qualitative assessments, providing detailed insights into early fetal neurodevelopmental trajectories and potential early screening for neurodevelopmental disorders.
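The angular-velocity analysis described in the methods (joint coordinates tracked per frame, converted to joint angles and then differentiated over time) can be sketched as follows. The function names, the three-point angle definition, and the finite-difference scheme are illustrative assumptions; the paper does not specify its exact formulation.

```python
import numpy as np

def joint_angles(parent: np.ndarray, joint: np.ndarray, child: np.ndarray) -> np.ndarray:
    """Angle (radians) at `joint` between the segments to `parent` and `child`.

    Each argument is a (T, 2) array of tracked 2-D coordinates over T frames,
    e.g. shoulder/elbow/wrist keypoints from a DeepLabCut model.
    """
    v1 = parent - joint
    v2 = child - joint
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1)
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angular_velocity(angles: np.ndarray, fps: float) -> np.ndarray:
    """Finite-difference angular velocity (rad/s) from a per-frame angle series."""
    return np.diff(angles) * fps
```

Applied to the eight tracked fetal joints, such series would support the kind of movement-pattern comparison across gestational ages that the abstract reports.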

PMID:40785001 | DOI:10.1007/s10396-025-01557-w

Categories: Literature Watch

Next-generation AI framework for comprehensive oral leukoplakia evaluation and management

Deep learning - Sun, 2025-08-10 06:00

NPJ Digit Med. 2025 Aug 10;8(1):513. doi: 10.1038/s41746-025-01885-8.

ABSTRACT

Oral potentially malignant disorder poses a significant risk of malignant transformation, particularly in cases with epithelial dysplasia (OED). Current OED assessment methods are invasive and lack reliable decision-support tools for cancer risk evaluation and follow-up optimization. This study developed and validated OMMT-PredNet, a fully automated multimodal deep learning framework requiring no manual ROI annotation, for non-invasive OED identification and time-dependent cancer risk prediction. Utilizing data from 649 histopathologically confirmed leukoplakia cases across multiple institutions (2003-2024), including 598 cases in the primary cohort and 51 in the external validation set, the model integrated paired high-resolution clinical images and medical records. OMMT-PredNet achieved an AUC of 0.9592 (95% CI: 0.9491-0.9693) for cancer risk prediction and 0.9219 (95% CI: 0.9088-0.9349) for OED identification, with high specificity (MT: 0.9490; OED: 0.9182) and precision (MT: 0.9442; OED: 0.9303). Calibration and decision curve analyses confirmed clinical applicability, while external validation demonstrated robustness. This multidimensional model effectively predicts OED and cancer risk, highlighting its global applicability in enhancing oral cancer screening and improving patient outcomes.

PMID:40784991 | DOI:10.1038/s41746-025-01885-8

Categories: Literature Watch

An ensemble of deep representation learning with metaheuristic optimisation algorithm for critical health monitoring using internet of medical things

Deep learning - Sun, 2025-08-10 06:00

Sci Rep. 2025 Aug 10;15(1):29241. doi: 10.1038/s41598-025-15005-9.

ABSTRACT

The Internet of Things (IoT) plays a significant part in the healthcare field. The growth of smart devices, smart sensors, and advanced lightweight communication protocols has created an opportunity to connect medical devices for monitoring biomedical signals and identifying patients' illnesses without human involvement, known as the Internet of Medical Things (IoMT). The IoMT enables a medical method to connect various smart devices, such as hospital assets, wearable sensors, and medical examination instruments, to create an information platform. In recent times, the IoMT has been extensively utilized in various areas, including disease diagnosis, smart hospitals, infectious disease tracking, and remote health monitoring. Still, safety is one of the key requirements for the success of IoMT systems. Thus, deep learning (DL) is now considered for securing IoMT systems, as it can enhance system performance. In this manuscript, the Ensemble of Deep Learning and Metaheuristic Optimisation algorithms for Critical Health Monitoring (EDLMOA-CHM) technique is proposed. The EDLMOA-CHM technique aims to develop and evaluate effective methods for monitoring health conditions in the IoMT to enhance healthcare system security and patient safety. Initially, the Z-score normalization method is employed in the data pre-processing step to clean, transform, and organize raw data into an appropriate format. For the feature selection process, the binary grey wolf optimization (BGWO) model is employed to identify and retain the most significant features in the dataset. The classification process utilizes ensemble models, including the Temporal Convolutional Network (TCN), the Attention-based Bidirectional Gated Recurrent Unit (A-BiGRU), and the Hybrid Deep Belief Network (HDBN) techniques.
To further optimize model performance, the pelican optimization algorithm (POA) is utilized for hyperparameter tuning to ensure that the optimum hyperparameters are chosen for enhanced accuracy. To demonstrate the improved performance of the EDLMOA-CHM model, a comprehensive experimental analysis is conducted using the healthcare IoT dataset. The comparison analysis of the EDLMOA-CHM model demonstrated a superior accuracy value of 99.56% over existing techniques.
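Of the stages in this pipeline, the Z-score pre-processing step is standard and can be sketched directly; the example feature matrix below is synthetic (vitals-like columns on different scales), not data from the paper.

```python
import numpy as np

def zscore_normalize(X: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Column-wise Z-score normalisation: zero mean, unit variance per feature.

    `eps` guards against division by zero for constant columns.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / (sigma + eps)

# Illustrative sensor readings: heart rate, systolic BP, body temperature.
X = np.array([[72.0, 120.0, 36.6],
              [88.0, 135.0, 37.2],
              [65.0, 110.0, 36.4]])
Z = zscore_normalize(X)
```

After this step, the normalized features would pass to BGWO feature selection and then the TCN/A-BiGRU/HDBN ensemble in the paper's pipeline.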

PMID:40784985 | DOI:10.1038/s41598-025-15005-9

Categories: Literature Watch

Feature fusion and selection using handcrafted vs. deep learning methods for multimodal hand biometric recognition

Deep learning - Sun, 2025-08-10 06:00

Sci Rep. 2025 Aug 10;15(1):29237. doi: 10.1038/s41598-025-10075-1.

ABSTRACT

Feature fusion is a widely adopted strategy in multi-biometrics to enhance reliability, performance, and real-world applicability. While combining multiple biometric sources can improve recognition accuracy, practical performance depends heavily on feature dependencies, redundancies, and selection methods. This study provides a comprehensive analysis of multimodal hand biometric recognition systems. We aim to guide the design of efficient, high-accuracy biometric systems by evaluating trade-offs between classical and learning-based approaches. For feature extraction, we employ Zernike moments and log-Gabor filters, evaluating multiple selection techniques to optimize performance. While baseline palmprint and fingerprint systems exhibit varying classification rates, our feature fusion method achieves a consistent 99.29% identification rate across diverse classifiers. Additionally, we explore EfficientNet as an end-to-end feature extractor and classifier, comparing its fusion performance with the traditional approach. Our findings emphasize feature selection as the key to building efficient and stable recognition systems. Using the minimal optimal feature set, we achieve an equal error rate (EER) of 0.71%, demonstrating superior efficiency and accuracy.
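The reported equal error rate (EER) is the verification operating point where the false accept rate (FAR) equals the false reject rate (FRR). A minimal threshold-sweep sketch of estimating it from genuine and impostor match-score distributions is below; the scores used are synthetic, not the paper's data, and higher scores are assumed to indicate a genuine match.

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Estimate the EER by sweeping a decision threshold over all observed scores.

    Returns the average of FAR and FRR at the threshold where they are closest.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_far, best_frr = 1.0, 0.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # false accept rate at threshold t
        frr = np.mean(genuine < t)     # false reject rate at threshold t
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0
```

Well-separated genuine and impostor distributions yield an EER near zero; fully overlapping ones yield roughly 0.5, which is why the study's 0.71% figure indicates strong separation.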

PMID:40784983 | DOI:10.1038/s41598-025-10075-1

Categories: Literature Watch
