Deep learning

Advanced Anticounterfeiting: Angle-Dependent Structural Color-Based CuO/ZnO Nanopatterns with Deep Neural Network Supervised Learning

Wed, 2025-03-12 06:00

ACS Appl Mater Interfaces. 2025 Mar 12. doi: 10.1021/acsami.4c17414. Online ahead of print.

ABSTRACT

Current anticounterfeiting technologies rely on deterministic processes that are easily replicable, require specialized devices for authentication, and involve complex manufacturing, resulting in high costs and limited scalability. This study presents a low-cost, mass-producible structural color-based anticounterfeiting pattern and a simple algorithm for discrimination. Nanopatterns aligned with the direction of incident light were fabricated by electrospinning, while CuO and ZnO were grown independently through a solution process. CuO acts as a reflective layer, imparting angle-dependent coloration, while ZnO allows the structural color to be tuned by controlling the hydrothermal synthesis time. The inherent randomness of electrospinning enables the creation of unclonable patterns, providing a robust anticounterfeiting solution. The fabricated CuO/ZnO nanopatterns exhibit strong angular color dependence and are capable of encoding high-density information. The discrimination system uses deep learning algorithms to achieve an average accuracy of 94%, and its streamlined computational structure based on shape and color features yields a processing speed of 80 ms per sample. The training images are acquired with standard high-resolution cameras, ensuring accessibility and practicality. This approach offers an efficient and scalable next-generation solution for anticounterfeiting applications, including documents, currency, and brand labels.

PMID:40072024 | DOI:10.1021/acsami.4c17414

Categories: Literature Watch

Deep Learning-Based Contrast Boosting in Low-Contrast Media Pre-TAVR CT Imaging

Wed, 2025-03-12 06:00

Can Assoc Radiol J. 2025 Mar 12:8465371251322054. doi: 10.1177/08465371251322054. Online ahead of print.

ABSTRACT

Purpose: This study investigates the impact of deep learning-based contrast boosting (DL-CB) on image quality and measurement reliability in low-contrast media (low-CM) CT for pre-transcatheter aortic valve replacement (TAVR) assessment. Methods: This retrospective study included TAVR candidates with renal dysfunction who underwent low-CM (30-mL: 15-mL bolus of contrast followed by 50-mL of 30% iomeprol solution) pre-TAVR CT between April and December 2023, along with matched standard-CM controls (n = 68). Low-CM images were reconstructed as conventional, 50-keV, and DL-CB images. Qualitative and quantitative image quality were compared among image sets. The aortic annulus was measured by 2 independent readers on low-CM CT images, and interobserver reliability was assessed. Results: DL-CB significantly improved contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) compared to conventional and 50-keV images (CNR: 12.5-13.4, 18-19.8, and 21.9-24; SNR: 10.8-15.5, 10.7-15.5, and 16.8-26.7 on conventional, 50-keV, and DL-CB images, respectively; P < .001). DL-CB achieved comparable CNR (21.9-24 vs 27-27.7, P = .39-.61) and comparable to slightly higher SNR (16.8-26.7 vs 15.7-20.2, P = .003-.80) to standard-CM images. For aortic annular measurement, DL-CB demonstrated high interobserver reliability, with an intraclass correlation coefficient (ICC) of .96 and small mean differences (area: 0.01 cm², limits of agreement [LoA]: -0.52 to 0.55 cm²; perimeter: 0.02 mm, LoA: -4.49 to 4.53 mm). Conclusions: DL-CB improves image quality and provides high measurement reliability in low-CM CT for pre-TAVR assessment in patients with renal dysfunction, without requiring dual-energy CT.
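
As a rough illustration of the image-quality metrics compared above, SNR and CNR can be computed from region-of-interest (ROI) attenuation samples. The sketch below uses common generic definitions (CNR as the ROI-to-background attenuation difference over background noise), with hypothetical HU values rather than the study's actual ROI protocol:

```python
from statistics import mean, stdev

def snr(roi):
    """Signal-to-noise ratio: mean ROI attenuation over its standard deviation."""
    return mean(roi) / stdev(roi)

def cnr(roi, background):
    """Contrast-to-noise ratio: attenuation difference between the enhancing
    ROI and background tissue, normalised by background noise."""
    return (mean(roi) - mean(background)) / stdev(background)

# Hypothetical HU samples from an aortic-root ROI and paravertebral muscle.
aorta = [410, 425, 418, 402, 430]
muscle = [55, 60, 52, 58, 50]
```

Under these definitions, DL-based denoising raises both metrics by shrinking the standard-deviation terms while preserving mean attenuation.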

PMID:40071690 | DOI:10.1177/08465371251322054

Categories: Literature Watch

Optimizing sEMG Gesture Recognition with Stacked Autoencoder Neural Network for Bionic Hand

Wed, 2025-03-12 06:00

MethodsX. 2025 Feb 15;14:103207. doi: 10.1016/j.mex.2025.103207. eCollection 2025 Jun.

ABSTRACT

This study presents a novel deep learning approach for surface electromyography (sEMG) gesture recognition using stacked autoencoder neural networks (SAEs). The method leverages hierarchical representation learning to extract meaningful features from raw sEMG signals, enhancing the precision and robustness of gesture classification.
• MODWT decomposition: the sEMG signals were decomposed using the maximal overlap discrete wavelet transform (MODWT) to capture various frequency components.
• Time-domain parameters: a total of 28 features per subject were extracted from the time domain, including statistical and spectral features.
• Classifier evaluation: initial evaluations involved autoencoder and linear discriminant analysis (LDA) classifiers, with the autoencoder achieving an average accuracy of 77.96% ± 1.24, outperforming LDA's 65.36% ± 1.09.
• Stacked autoencoder neural network: to address challenges in distinguishing similar gestures within grasp groups, a stacked autoencoder neural network was employed. This advanced architecture improved classification accuracy to 100%, demonstrating its effectiveness in handling complex gesture recognition tasks.
These findings emphasize the significant potential of deep learning models in enhancing prosthetic control and rehabilitation technologies. To verify these findings, we developed a 3D hand model in ADAMS software, simulated using MATLAB-ADAMS co-simulation.
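
The study's exact 28-feature set is not listed in the abstract; as a hedged sketch, four standard sEMG time-domain features (mean absolute value, root mean square, waveform length, zero crossings) can be computed like this:

```python
import math

def semg_time_features(signal):
    """Common time-domain features used for sEMG gesture recognition."""
    n = len(signal)
    mav = sum(abs(x) for x in signal) / n                         # mean absolute value
    rms = math.sqrt(sum(x * x for x in signal) / n)               # root mean square
    wl = sum(abs(b - a) for a, b in zip(signal, signal[1:]))      # waveform length
    zc = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)  # zero crossings
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

Feature vectors like these would then feed the autoencoder or LDA classifiers described above.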

PMID:40071216 | PMC:PMC11894319 | DOI:10.1016/j.mex.2025.103207

Categories: Literature Watch

Parameter optimization of 3D convolutional neural network for dry-EEG motor imagery brain-machine interface

Wed, 2025-03-12 06:00

Front Neurosci. 2025 Feb 25;19:1469244. doi: 10.3389/fnins.2025.1469244. eCollection 2025.

ABSTRACT

Easing the behavioral restrictions of those in need of care not only improves their own quality of life (QoL) but also reduces the burden on care workers and may help reduce the number of care workers in countries with declining birthrates. The brain-machine interface (BMI), in which appliances and machines are controlled only by brain activity, can be used in nursing care settings to alleviate behavioral restrictions and reduce stress for those in need of care. It is also expected to reduce the workload of care workers. In this study, we focused on motor imagery (MI) classification by deep-learning to construct a system that can identify MI obtained by electroencephalography (EEG) measurements with high accuracy and a low latency response. By completing the system on the edge, the privacy of personal MI data can be ensured, and the system is ubiquitous, which improves user convenience. On the other hand, however, the edge is limited by hardware resources, and the implementation of models with a huge number of parameters and high computational cost, such as deep-learning, on the edge is challenging. Therefore, by optimizing the MI measurement conditions and various parameters of the deep-learning model, we attempted to reduce the power consumption and improve the response latency of the system by minimizing the computational cost while maintaining high classification accuracy. In addition, we investigated the use of a 3-dimensional convolutional neural network (3D CNN), which can retain spatial locality as a feature to further improve the classification accuracy. We propose a method to maintain a high classification accuracy while enabling processing on the edge by optimizing the size and number of kernels and the layer structure.
Furthermore, to develop a practical BMI system, we introduced dry electrodes, which are more comfortable for daily use, and optimized the number of parameters and the memory footprint of the proposed model to maintain classification accuracy even with fewer electrodes, less recall time, and a lower sampling rate. Compared to EEGNet, the proposed 3D CNN reduces the number of parameters, the number of multiply-accumulates, and the memory footprint by approximately 75.9%, 16.3%, and 12.5%, respectively, while maintaining the same level of classification accuracy under conditions of eight electrodes, a 3.5-second sample window, and a 125 Hz sampling rate in 4-class dry-EEG MI.
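
The parameter savings from shrinking kernel sizes can be illustrated by counting 3D-convolution parameters directly. The layer sizes below are hypothetical, not the paper's architecture:

```python
def conv3d_params(in_ch, out_ch, kernel):
    """Parameter count of a cubic 3D convolution: a kernel^3 * in_ch weight
    volume per output channel, plus one bias term each."""
    return out_ch * (in_ch * kernel ** 3 + 1)

# Hypothetical comparison: shrinking kernels from 5 to 3 in one layer.
big = conv3d_params(8, 16, 5)    # 16 * (8 * 125 + 1)
small = conv3d_params(8, 16, 3)  # 16 * (8 * 27 + 1)
reduction = 1 - small / big      # fraction of parameters removed
```

Since weights grow with the cube of the kernel size, even a modest kernel reduction cuts this layer's parameters by roughly 78% in the example, which is how edge-friendly footprints are achieved.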

PMID:40071135 | PMC:PMC11893816 | DOI:10.3389/fnins.2025.1469244

Categories: Literature Watch

Data augmented lung cancer prediction framework using the nested case control NLST cohort

Wed, 2025-03-12 06:00

Front Oncol. 2025 Feb 25;15:1492758. doi: 10.3389/fonc.2025.1492758. eCollection 2025.

ABSTRACT

PURPOSE: In the context of lung cancer screening, the scarcity of well-labeled medical images poses a significant challenge to implement supervised learning-based deep learning methods. While data augmentation is an effective technique for countering the difficulties caused by insufficient data, it has not been fully explored in the context of lung cancer screening. In this research study, we analyzed the state-of-the-art (SOTA) data augmentation techniques for lung cancer binary prediction.

METHODS: To comprehensively evaluate the efficiency of data augmentation approaches, we considered the nested case control National Lung Screening Trial (NLST) cohort comprising 253 individuals who had the commonly used CT scans without contrast. The CT scans were pre-processed into three-dimensional volumes based on the lung nodule annotations. Subsequently, we evaluated five basic (online) and two generative model-based offline data augmentation methods with ten SOTA 3D deep learning-based lung cancer prediction models.

RESULTS: Our results demonstrated that the performance improvement by data augmentation was highly dependent on the approach used. The Cutmix method resulted in the highest average performance improvement across all three metrics: 1.07%, 3.29%, and 1.19% for accuracy, F1 score, and AUC, respectively. MobileNetV2 with a simple data augmentation approach achieved the best AUC of 0.8719 among all lung cancer predictors, demonstrating a 7.62% improvement over the baseline. Furthermore, the MED-DDPM data augmentation approach improved prediction performance by rebalancing the training set and adding a moderate amount of synthetic data.
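
As a hedged illustration of the best-performing augmentation, a minimal CutMix sketch for 2D arrays is shown below (the study applied augmentation to 3D CT volumes, and the patch-sampling details here are simplified assumptions):

```python
import random

def cutmix(img_a, img_b, label_a, label_b, rng=random.Random(0)):
    """CutMix augmentation: paste a random rectangular patch of image B into
    image A and mix the labels in proportion to the pasted area."""
    h, w = len(img_a), len(img_a[0])
    lam = rng.random()                       # target fraction kept from A
    cut_h = int(h * (1 - lam) ** 0.5)        # patch height
    cut_w = int(w * (1 - lam) ** 0.5)        # patch width
    y0 = rng.randrange(h - cut_h + 1)        # top-left corner of the patch
    x0 = rng.randrange(w - cut_w + 1)
    mixed = [row[:] for row in img_a]
    for y in range(y0, y0 + cut_h):
        for x in range(x0, x0 + cut_w):
            mixed[y][x] = img_b[y][x]
    lam_eff = 1 - (cut_h * cut_w) / (h * w)  # exact area actually kept from A
    return mixed, lam_eff * label_a + (1 - lam_eff) * label_b
```

The mixed soft label matches the pasted-area fraction exactly, which is what lets a binary lung-cancer predictor learn from interpolated examples.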

CONCLUSIONS: The effectiveness of online and offline data augmentation methods was highly sensitive to the prediction model, highlighting the importance of carefully selecting the optimal data augmentation method. Our findings suggest that certain traditional methods can provide more stable and higher performance compared to SOTA online data augmentation approaches. Overall, these results offer meaningful insights for the development and clinical integration of data-augmented deep learning tools for lung cancer screening.

PMID:40071099 | PMC:PMC11893409 | DOI:10.3389/fonc.2025.1492758

Categories: Literature Watch

Deep learning model for the early prediction of pathologic response following neoadjuvant chemotherapy in breast cancer patients using dynamic contrast-enhanced MRI

Wed, 2025-03-12 06:00

Front Oncol. 2025 Feb 25;15:1491843. doi: 10.3389/fonc.2025.1491843. eCollection 2025.

ABSTRACT

PURPOSE: This study aims to investigate the diagnostic accuracy of various deep learning methods on DCE-MRI, in order to provide a simple and accessible tool for predicting pathologic response to NAC in breast cancer patients.

METHODS: In this study, we enrolled 313 breast cancer patients who had complete DCE-MRI data and underwent NAC followed by breast surgery. According to the Miller-Payne criteria, the efficacy of NAC was categorized into two groups: patients who achieved Miller-Payne grade 1-3 were classified as non-responders, while those who achieved grade 4-5 were classified as responders. Multiple deep learning frameworks, including ViT, VGG16, ShuffleNet_v2, ResNet18, MobileNet_v2, MnasNet-0.5, GoogleNet, DenseNet121, and AlexNet, were used for transfer learning of the classification model. The deep learning features were obtained from the final fully connected layer of the deep learning models, with 256 features extracted from the DCE-MRI data for each patient and each model. Various machine-learning techniques, including support vector machine (SVM), K-nearest neighbor (KNN), RandomForest, ExtraTrees, XGBoost, LightGBM, and multiple-layer perceptron (MLP), were employed to construct classification models.

RESULTS: We utilized various deep learning models to extract features and subsequently constructed machine learning models, selecting the classifier with the best AUC for each. ResNet18 exhibited superior performance, with AUCs of 0.87 (95% CI: 0.82 - 0.91) and 0.87 (95% CI: 0.78 - 0.96) in the training and test cohorts, respectively.
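
The AUC values reported above can be computed without any library via the Mann-Whitney formulation; a minimal sketch:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen responder scores higher
    than a randomly chosen non-responder (ties count 0.5) -- the
    Mann-Whitney U statistic divided by the number of pairs."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.87 therefore means an 87% chance that a true responder is ranked above a true non-responder by the classifier's score.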

CONCLUSIONS: Using pre-treatment DCE-MRI images, our study trained multiple deep learning models and developed the best-performing DLR model for predicting pathologic response to NAC in breast cancer patients. This prognostic tool provides a dependable and impartial basis for effectively identifying breast cancer patients who are most likely to benefit from NAC before its initiation. At the same time, it can also identify patients who are insensitive to NAC, allowing them to proceed directly to surgical treatment and avoiding the risk of losing the opportunity for surgery due to disease progression after NAC.

PMID:40071096 | PMC:PMC11893424 | DOI:10.3389/fonc.2025.1491843

Categories: Literature Watch

Advancements in the application of artificial intelligence in the field of colorectal cancer

Wed, 2025-03-12 06:00

Front Oncol. 2025 Feb 25;15:1499223. doi: 10.3389/fonc.2025.1499223. eCollection 2025.

ABSTRACT

Colorectal cancer (CRC) is a prevalent malignant tumor in the digestive system. As reported in the 2020 global cancer statistics, CRC accounted for more than 1.9 million new cases and 935,000 deaths, making it the third most common cancer worldwide in terms of incidence and the second leading cause of cancer-related deaths globally. This poses a significant threat to global public health. Early screening methods, such as fecal occult blood tests, colonoscopies, and imaging techniques, are crucial for detecting early lesions and enabling timely intervention before cancer becomes invasive. Early detection greatly enhances treatment possibilities, such as surgery, radiation therapy, and chemotherapy, with surgery being the main approach for treating early-stage CRC. In this context, artificial intelligence (AI) has shown immense potential in revolutionizing CRC management, serving as one of the most effective screening tools. AI, utilizing machine learning (ML) and deep learning (DL) algorithms, improves early detection, diagnosis, and treatment by processing large volumes of medical data, uncovering hidden patterns, and forecasting disease development. DL, a more advanced form of ML, simulates the brain's processing power, enhancing the accuracy of tumor detection, differentiation, and prognosis predictions. These innovations offer the potential to revolutionize cancer care by boosting diagnostic accuracy, refining treatment approaches, and ultimately enhancing patient outcomes.

PMID:40071094 | PMC:PMC11893421 | DOI:10.3389/fonc.2025.1499223

Categories: Literature Watch

Generative artificial intelligence ChatGPT in clinical nutrition - Advances and challenges

Tue, 2025-03-11 06:00

Nutr Hosp. 2025 Feb 26. doi: 10.20960/nh.05692. Online ahead of print.

ABSTRACT

ChatGPT and other artificial intelligence (AI) tools can modify nutritional management in clinical settings. These technologies, based on machine learning and deep learning, enable the identification of risks, the proposal of personalized interventions, and the monitoring of patient progress using data extracted from clinical records. ChatGPT excels in areas such as nutritional assessment by calculating caloric needs and suggesting nutrient-rich foods, and in diagnosis, by identifying nutritional issues with technical terminology. In interventions, it offers dietary and educational strategies but lacks critical abilities such as interpreting non-verbal cues or performing physical examinations. Recent studies indicate that ChatGPT achieves high accuracy in questions related to clinical guidelines but shows deficiencies in integrating multiple medical conditions or ensuring the accuracy of meal plans. Additionally, generated plans may exhibit significant caloric deviations and imbalances in micronutrients such as vitamin D and B12. Despite its limitations, this AI has the potential to complement clinical practice by improving accessibility and personalization in nutritional care. However, its effective implementation requires professional supervision, integration with existing healthcare systems, and constant updates to its databases. In conclusion, while it does not replace nutrition experts, ChatGPT can serve as a valuable tool to optimize nutrition education and management of our patients, always under the guidance of trained professionals.

PMID:40066572 | DOI:10.20960/nh.05692

Categories: Literature Watch

Diagnostic accuracy of artificial intelligence models in detecting congenital heart disease in the second-trimester fetus through prenatal cardiac screening: a systematic review and meta-analysis

Tue, 2025-03-11 06:00

Front Cardiovasc Med. 2025 Feb 24;12:1473544. doi: 10.3389/fcvm.2025.1473544. eCollection 2025.

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is a major contributor to morbidity and infant mortality and imposes the highest burden on global healthcare costs. Early diagnosis and prompt treatment of CHD contribute to enhanced neonatal outcomes and survival rates; however, there is a shortage of proficient examiners in remote regions. Artificial intelligence (AI)-powered ultrasound provides a potential solution to improve the diagnostic accuracy of fetal CHD screening.

METHODS: A literature search was conducted across seven databases for systematic review. Articles were retrieved based on PRISMA Flow 2020 and inclusion and exclusion criteria. Eligible diagnostic data were further meta-analyzed, and the risk of bias was tested using Quality Assessment of Diagnostic Accuracy Studies-Artificial Intelligence.

FINDINGS: A total of 374 studies were screened for eligibility, but only 9 studies were included. Most studies utilized deep learning models using either ultrasound or echocardiographic images. Overall, the AI models performed exceptionally well in accurately identifying normal and abnormal ultrasound images. A meta-analysis of these nine studies on CHD diagnosis resulted in a pooled sensitivity of 0.89 (0.81-0.94), a specificity of 0.91 (0.87-0.94), and an area under the curve of 0.952 using a random-effects model.

CONCLUSION: Although several limitations must be addressed before AI models can be implemented in clinical practice, AI has shown promising results in CHD diagnosis. Nevertheless, prospective studies with bigger datasets and more inclusive populations are needed to compare AI algorithms to conventional methods.

SYSTEMATIC REVIEW REGISTRATION: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023461738, PROSPERO (CRD42023461738).

PMID:40066351 | PMC:PMC11891181 | DOI:10.3389/fcvm.2025.1473544

Categories: Literature Watch

Unified resilience model using deep learning for assessing power system performance

Tue, 2025-03-11 06:00

Heliyon. 2025 Feb 19;11(4):e42802. doi: 10.1016/j.heliyon.2025.e42802. eCollection 2025 Feb 28.

ABSTRACT

Energy resilience in renewable energy system components such as batteries and inverters is crucial for achieving high operational fidelity. Resilience factors play a vital role in determining the performance of power systems, regardless of their operating environment and interruptions. This article introduces a Unified Resilience Model (URM) using Deep Learning (DL) to enhance power system performance. The proposed model analyzes environmental factors impacting the resilience of batteries and energy storage devices. The deep learning approach is trained on performance-impacting factors using previously known low-resilience drain data. The learning output is utilized to augment various strengthening factors, thereby improving resilience. Drain mitigation and performance improvements are combined for direct impact verification. This process validates the model's fidelity in enhancing power system performance, with a specific focus on the impact of weather factors.

PMID:40066024 | PMC:PMC11891688 | DOI:10.1016/j.heliyon.2025.e42802

Categories: Literature Watch

Temporal Radiographic Trajectory and Clinical Outcomes in COVID-19 Pneumonia: A Longitudinal Study

Tue, 2025-03-11 06:00

J Korean Med Sci. 2025 Mar 10;40(9):e25. doi: 10.3346/jkms.2025.40.e25.

ABSTRACT

BACKGROUND: Currently, little is known about the relationship between the temporal radiographic latent trajectories, which are based on the extent of coronavirus disease 2019 (COVID-19) pneumonia and clinical outcomes. This study aimed to elucidate the differences in the temporal trends of critical laboratory biomarkers, utilization of critical care support, and clinical outcomes according to temporal radiographic latent trajectories.

METHODS: We enrolled 2,385 patients who were hospitalized with COVID-19 and underwent serial chest radiographs from December 2019 to March 2022. The extent of radiographic pneumonia was quantified as a percentage using a previously developed deep-learning algorithm. A latent class growth model was used to identify the trajectories of the longitudinal changes of COVID-19 pneumonia extents during hospitalization. We investigated the differences in the temporal trends of critical laboratory biomarkers among the temporal radiographic trajectory groups. Cox regression analyses were conducted to investigate differences in the utilization of critical care supports and clinical outcomes among the temporal radiographic trajectory groups.

RESULTS: The mean age of the enrolled patients was 58.0 ± 16.9 years old, with 1,149 (48.2%) being male. Radiographic pneumonia trajectories were classified into three groups: The steady group (n = 1,925, 80.7%) exhibited stable minimal pneumonia, the downhill group (n = 135, 5.7%) exhibited initial worsening followed by improving pneumonia, and the uphill group (n = 325, 13.6%) exhibited progressive deterioration of pneumonia. There were distinct differences in the patterns of temporal blood urea nitrogen (BUN) and C-reactive protein (CRP) levels between the uphill group and the other two groups. Cox regression analyses revealed that the hazard ratios (HRs) for the need for critical care support and the risk of intensive care unit admission were significantly higher in both the downhill and uphill groups compared to the steady group. However, regarding in-hospital mortality, only the uphill group demonstrated a significantly higher risk than the steady group (HR, 8.2; 95% confidence interval, 3.08-21.98).

CONCLUSION: Stratified pneumonia trajectories, identified through serial chest radiographs, are linked to different patterns of temporal changes in BUN and CRP levels. These changes can predict the need for critical care support and clinical outcomes in COVID-19 pneumonia. Appropriate therapeutic strategies should be tailored based on these disease trajectories.

PMID:40065711 | DOI:10.3346/jkms.2025.40.e25

Categories: Literature Watch

Dementia Overdiagnosis in Younger, Higher Educated Individuals Based on MMSE Alone: Analysis Using Deep Learning Technology

Tue, 2025-03-11 06:00

J Korean Med Sci. 2025 Mar 10;40(9):e20. doi: 10.3346/jkms.2025.40.e20.

ABSTRACT

BACKGROUND: Dementia is a multifaceted disorder that affects cognitive function, necessitating accurate diagnosis for effective management and treatment. Although the Mini-Mental State Examination (MMSE) is widely used to assess cognitive impairment, its standalone efficacy is debated. This study examined the effectiveness of the MMSE alone versus in combination with other cognitive assessments in predicting dementia diagnosis, with the aim of refining the diagnostic accuracy for dementia.

METHODS: A total of 2,863 participants with subjective cognitive complaints who underwent comprehensive neuropsychological assessments were included. We developed two random forest models: one using only the MMSE and another incorporating additional cognitive tests. These models were evaluated based on their accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC) on a 70:30 training-to-testing split.
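
The evaluation metrics listed above (accuracy, precision, recall, F1-score) all derive from the binary confusion matrix; a minimal sketch with illustrative counts, not the study's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1-score from binary confusion counts
    (tp/fp/fn/tn = true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted-dementia cases, how many correct
    recall = tp / (tp + fn)             # of true dementia cases, how many found
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

AUC is computed separately from the model's probability scores rather than from a single thresholded confusion matrix, which is why a model can gain more AUC than accuracy when extra tests are added.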

RESULTS: The MMSE-alone model predicted dementia with an accuracy of 86% and AUC of 0.872. The expanded model demonstrated increased accuracy (88%) and an AUC of 0.934. Notably, 17.46% of the cases were reclassified from dementia to non-dementia category upon including additional tests. Higher educational level and younger age were associated with these shifts.

CONCLUSION: The findings suggest that although the MMSE is a valuable screening tool, it should not be used in isolation to determine dementia severity. The addition of diverse cognitive assessments can significantly enhance diagnostic precision, particularly in younger and more educated populations. Future diagnostic protocols should integrate multifaceted cognitive evaluations to reflect the complexity of dementia accurately.

PMID:40065710 | DOI:10.3346/jkms.2025.40.e20

Categories: Literature Watch

A CT-based interpretable deep learning signature for predicting PD-L1 expression in bladder cancer: a two-center study

Tue, 2025-03-11 06:00

Cancer Imaging. 2025 Mar 10;25(1):27. doi: 10.1186/s40644-025-00849-1.

ABSTRACT

BACKGROUND: To construct and assess a deep learning (DL) signature that employs computed tomography imaging to predict the expression status of programmed cell death ligand 1 in patients with bladder cancer (BCa).

METHODS: This retrospective study included 190 patients from two hospitals who underwent surgical removal of BCa (training set/external validation set, 127/63). We used a convolutional neural network and radiomics-based machine learning to generate prediction models. We then compared the performance of the DL signature with the radiomics machine learning signature and selected the optimal signature to build a nomogram with the clinical model. Finally, the internal forecasting process of the DL signature was explained using Shapley additive explanations (SHAP).

RESULTS: On the external validation set, the DL signature had an area under the curve of 0.857 (95% confidence interval: 0.745-0.932), and demonstrated superior prediction performance in comparison with the other models. SHAP expression images revealed that the prediction of PD-L1 expression status is mainly influenced by the tumor edge region, particularly the area close to the bladder wall.

CONCLUSIONS: The DL signature performed well in comparison with other models and proved to be a valuable, dependable, and interpretable tool for predicting programmed cell death ligand 1 expression status in patients with BCa.

PMID:40065444 | DOI:10.1186/s40644-025-00849-1

Categories: Literature Watch

Development of a deep learning-based model for guiding a dissection during robotic breast surgery

Tue, 2025-03-11 06:00

Breast Cancer Res. 2025 Mar 10;27(1):34. doi: 10.1186/s13058-025-01981-3.

ABSTRACT

BACKGROUND: Traditional surgical education is based on observation and assistance in surgical practice. Recently introduced deep learning (DL) techniques enable the recognition of the surgical view and automatic identification of surgical landmarks. However, no previous study has developed a surgical guide for robotic breast surgery. This study aimed to develop a DL model for guiding the dissection plane during robotic mastectomy for beginners and trainees.

METHODS: Ten surgical videos of robotic mastectomy procedures were recorded. Video frames taken at 1-s intervals were converted to PNG format. The ground truth was manually delineated by two experienced surgeons using ImageJ software. The evaluation metrics were the Dice similarity coefficient (DSC) and Hausdorff distance (HD).

RESULTS: A total of 8,834 images were extracted from ten surgical videos of robotic mastectomies performed between 2016 and 2020. Skin flap dissection during the robotic mastectomy console time was recorded. The median age and body mass index of the patients were 47.5 (38-52) years and 22.00 (19.30-29.52) kg/m², respectively, and the median console time was 32 (21-48) min. Among the 8,834 images, 428 were selected and divided into training, validation, and testing datasets at a ratio of 7:1:2. Two experts determined that the DSCs of our model were 0.828 ± 5.28 and 0.818 ± 6.96, while the HDs were 9.80 ± 2.57 and 10.32 ± 1.09.
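
The two evaluation metrics above, DSC and HD, can be sketched for segmentations represented as pixel-coordinate sets (a simplified 2D illustration, not the study's implementation):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: twice the overlap of two pixel sets
    divided by their total size (1.0 = identical segmentations)."""
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance: the worst-case Euclidean distance from
    a point on one contour to the nearest point on the other."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(src, dst):
        return max(min(d(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

DSC rewards area overlap while HD penalises outlier boundary points, so the pair together characterises how closely a predicted dissection plane tracks the expert's ground truth.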

CONCLUSION: DL can serve as a surgical guide for beginners and trainees, and can be used as a training tool to enhance surgeons' surgical skills.

PMID:40065440 | DOI:10.1186/s13058-025-01981-3

Categories: Literature Watch

Precise engineering of gene expression by editing plasticity

Tue, 2025-03-11 06:00

Genome Biol. 2025 Mar 10;26(1):51. doi: 10.1186/s13059-025-03516-7.

ABSTRACT

BACKGROUND: Identifying transcriptional cis-regulatory elements (CREs) and understanding their role in gene expression are essential for the precise manipulation of gene expression and associated phenotypes. This knowledge is fundamental for advancing genetic engineering and improving crop traits.

RESULTS: We here demonstrate that CREs can be accurately predicted and utilized to precisely regulate gene expression beyond the range of natural variation. We first build two sequence-to-expression deep learning models to identify distal and proximal CREs, respectively, combining them with interpretability methods in multiple crops. A large number of distal CREs are verified for enhancer activity in vitro using UMI-STARR-seq on 12,000 synthesized sequences. These comprehensively characterized CREs and their precisely predicted effects further contribute to the design of in silico editing schemes for precise engineering of gene expression. We introduce a novel concept of "editing plasticity" to evaluate the potential of promoter editing to alter expression of each gene. As a proof of concept, both exhaustive prediction and random knockout mutants are analyzed within the promoter region of ZmVTE4, a key gene affecting α-tocopherol content in maize. A high degree of agreement between predicted and observed expression is observed, extending the range of natural variation and thereby allowing the creation of an optimal phenotype.

CONCLUSIONS: Our study provides a robust computational framework that advances knowledge-guided gene editing for precise regulation of gene expression and crop improvement. By reliably predicting and validating CREs, we offer a tool for targeted genetic modifications, enhancing desirable traits in crops.

PMID:40065399 | DOI:10.1186/s13059-025-03516-7

Categories: Literature Watch

Advancing AI-driven thematic analysis in qualitative research: a comparative study of nine generative models on Cutaneous Leishmaniasis data

Tue, 2025-03-11 06:00

BMC Med Inform Decis Mak. 2025 Mar 10;25(1):124. doi: 10.1186/s12911-025-02961-5.

ABSTRACT

BACKGROUND: As part of qualitative research, the thematic analysis is time-consuming and technical. The rise of generative artificial intelligence (A.I.), especially large language models, has brought hope in enhancing and partly automating thematic analysis.

METHODS: The study assessed the relative efficacy of conventional versus AI-assisted thematic analysis when investigating the psychosocial impact of cutaneous leishmaniasis (CL) scars. Four hundred forty-eight participant responses from a core study were analysed, comparing nine generative A.I. models (Llama 3.1 405B, Claude 3.5 Sonnet, NotebookLM, Gemini 1.5 Advanced Ultra, ChatGPT o1-Pro, ChatGPT o1, GrokV2, DeepSeekV3, and Gemini 2.0 Advanced) with manual expert analysis. Methodological rigour was maintained through Cohen's kappa coefficient calculations for concordance assessment in jamovi and Jaccard index computations for similarity measurement in Python.
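
The two agreement measures named above can be sketched in plain Python (illustrative only; the study used jamovi and its own pipeline):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters labelling the same items:
    (observed agreement - expected agreement) / (1 - expected agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

def jaccard(themes_a, themes_b):
    """Jaccard index: overlap of two theme sets divided by their union."""
    a, b = set(themes_a), set(themes_b)
    return len(a & b) / len(a | b)
```

A Jaccard index of 1.00, as reported for some models, means the AI-extracted theme set matched the expert reference exactly.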

RESULTS: Advanced A.I. models showed impressive congruence with reference standards; some even had perfect concordance (Jaccard index = 1.00). Gender-specific analyses demonstrated consistent performance across subgroups, allowing a nuanced understanding of psychosocial consequences. The grounded theory process developed the framework for the fragile circle of vulnerabilities that incorporated new insights into CL-related psychosocial complexity while establishing novel dimensions.

CONCLUSIONS: This study shows how A.I. can be incorporated into qualitative research methodology, particularly in complex psychosocial analysis. The A.I. deep learning models proved to be highly efficient and accurate. These findings imply that future qualitative research methodology should focus on maintaining analytical rigour by combining A.I. capabilities with human expertise, following a standardised reporting checklist that ensures full process transparency.

PMID:40065373 | DOI:10.1186/s12911-025-02961-5

Categories: Literature Watch

Automated deep learning-based assessment of tumour-infiltrating lymphocyte density determines prognosis in colorectal cancer

Tue, 2025-03-11 06:00

J Transl Med. 2025 Mar 10;23(1):298. doi: 10.1186/s12967-025-06254-3.

ABSTRACT

BACKGROUND: The presence of tumour-infiltrating lymphocytes (TILs) is a well-established prognostic biomarker across multiple cancer types, with higher TIL counts being associated with lower recurrence rates and improved patient survival. We aimed to examine whether an automated intraepithelial TIL (iTIL) assessment could stratify patients by risk, with the ability to generalise across independent patient cohorts, using routine H&E slides of colorectal cancer (CRC). To our knowledge, no other existing fully automated iTIL system has demonstrated this capability.

METHODS: An automated method employing deep neural networks was developed to enumerate iTILs in H&E slides of CRC. The method was applied to a Stage III discovery cohort (n = 353) to identify an optimal threshold of 17 iTILs per mm² of tumour for stratifying relapse-free survival. Using this threshold, patients from two independent Stage II-III validation cohorts (n = 1070, n = 885) were classified as "TIL-High" or "TIL-Low".
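As a minimal sketch of the stratification rule, the 17 iTILs/mm² cut-point from the abstract can be applied as follows; whether the boundary value itself counts as TIL-High is an assumption, and the counts shown are invented for illustration:

```python
def classify_itil(itil_count, tumour_area_mm2, threshold=17):
    """Label a slide TIL-High or TIL-Low by iTIL density per mm² of tumour.

    The 17 iTILs/mm² threshold is the cut-point reported for the Stage III
    discovery cohort; the >= convention at the boundary is an assumption.
    """
    density = itil_count / tumour_area_mm2
    return "TIL-High" if density >= threshold else "TIL-Low"

print(classify_itil(itil_count=850, tumour_area_mm2=40))  # 21.25/mm² -> TIL-High
print(classify_itil(itil_count=300, tumour_area_mm2=40))  # 7.5/mm²  -> TIL-Low
```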

RESULTS: Significant stratification of overall survival was observed in the combined validation cohort in both univariate (HR 1.67, 95%CI 1.39-2.00; p < 0.001) and multivariate (HR 1.37, 95%CI 1.13-1.66; p = 0.001) analyses. Our iTIL classifier was an independent prognostic factor within proficient DNA mismatch repair (pMMR) Stage II CRC cases with clinical high-risk features. Of these, cases classified as TIL-High had outcomes similar to pMMR clinical low-risk cases, while those classified TIL-Low had significantly poorer outcomes (univariate HR 2.38, 95%CI 1.57-3.61; p < 0.001; multivariate HR 2.17, 95%CI 1.42-3.33; p < 0.001).

CONCLUSIONS: Our deep learning method is the first fully automated system to stratify patient outcome by analysing TILs in H&E slides of CRC, that has shown generalisation capabilities across multiple independent cohorts.

PMID:40065354 | DOI:10.1186/s12967-025-06254-3

Categories: Literature Watch

Assessment of CNNs, Transformers, and Hybrid Architectures in Dental Image Segmentation

Mon, 2025-03-10 06:00

J Dent. 2025 Mar 8:105668. doi: 10.1016/j.jdent.2025.105668. Online ahead of print.

ABSTRACT

OBJECTIVES: Convolutional Neural Networks (CNNs) have long dominated image analysis in dentistry, reaching remarkable results in a range of different tasks. However, Transformer-based architectures, originally proposed for Natural Language Processing, are also promising for dental image analysis. The present study aimed to compare CNNs with Transformers for different image analysis tasks in dentistry.

METHODS: Two CNNs (U-Net, DeepLabV3+), two Hybrids (SwinUNETR, UNETR) and two Transformer-based architectures (TransDeepLab, SwinUnet) were compared on three dental segmentation tasks on different image modalities. Datasets consisted of (1) 1881 panoramic radiographs used for tooth segmentation, (2) 1625 bitewings used for tooth structure segmentation, and (3) 2689 bitewings for caries lesions segmentation. All models were trained and evaluated using 5-fold cross-validation.

RESULTS: CNNs were found to be significantly superior to Hybrids and Transformer-based architectures for all three tasks. (1) Tooth segmentation showed a mean±SD F1-score of 0.89±0.009 for CNNs, 0.86±0.015 for Hybrids, and 0.83±0.22 for Transformer-based architectures. (2) In tooth structure segmentation, CNNs also led with 0.85±0.008, compared with 0.84±0.005 for Hybrids and 0.83±0.011 for Transformers. (3) The gap was even more pronounced for caries lesions segmentation: 0.49±0.031 for CNNs, 0.39±0.072 for Hybrids, and 0.32±0.039 for Transformer-based architectures.
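For readers reproducing this kind of comparison, the mean±SD summary over 5-fold cross-validation can be computed with the standard library. The per-fold F1 scores below are invented placeholders, since the abstract reports only the aggregated values:

```python
from statistics import mean, stdev

# Hypothetical per-fold F1 scores from 5-fold cross-validation,
# chosen to roughly match the aggregates reported in the abstract.
fold_f1 = {
    "U-Net (CNN)": [0.89, 0.90, 0.88, 0.89, 0.90],
    "SwinUnet (Transformer)": [0.83, 0.84, 0.82, 0.83, 0.84],
}

for model, scores in fold_f1.items():
    # Sample standard deviation across folds, as is conventional for CV.
    print(f"{model}: {mean(scores):.2f}±{stdev(scores):.3f}")
```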

CONCLUSION: CNNs significantly outperformed Transformer-based architectures and their Hybrids on three segmentation tasks (teeth, tooth structures, caries lesions) on varying dental data modalities (panoramic and bitewing radiographs).

PMID:40064460 | DOI:10.1016/j.jdent.2025.105668

Categories: Literature Watch

Photodiagnosis with Deep Learning: A GAN and Autoencoder-Based Approach for Diabetic Retinopathy Detection

Mon, 2025-03-10 06:00

Photodiagnosis Photodyn Ther. 2025 Mar 8:104552. doi: 10.1016/j.pdpdt.2025.104552. Online ahead of print.

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a leading cause of visual impairment and blindness worldwide, necessitating early detection and accurate diagnosis. This study proposes a novel framework integrating Generative Adversarial Networks (GANs) for data augmentation, denoising autoencoders for noise reduction, and transfer learning with EfficientNetB0 to enhance the performance of DR classification models.

METHODS: GANs were employed to generate high-quality synthetic retinal images, effectively addressing class imbalance and enriching the training dataset. Denoising autoencoders further improved image quality by reducing noise and eliminating common artifacts such as speckle noise, motion blur, and illumination inconsistencies, providing clean and consistent inputs for the classification model. EfficientNetB0 was fine-tuned on the augmented and denoised dataset.

RESULTS: The framework achieved exceptional classification metrics, including 99.00% accuracy, recall, and specificity, surpassing state-of-the-art methods. The study employed a custom-curated OCT dataset featuring high-resolution and clinically relevant images, addressing challenges such as limited annotated data and noisy inputs.

CONCLUSIONS: Unlike existing studies, our work uniquely integrates GANs, autoencoders, and EfficientNetB0, demonstrating the robustness, scalability, and clinical potential of the proposed framework. Future directions include integrating interpretability tools to enhance clinical adoption and exploring additional imaging modalities to further improve generalizability. This study highlights the transformative potential of deep learning in addressing critical challenges in diabetic retinopathy diagnosis.

PMID:40064432 | DOI:10.1016/j.pdpdt.2025.104552

Categories: Literature Watch

Genetic Distinctions Between Reticular Pseudodrusen and Drusen: A Genome-Wide Association Study

Mon, 2025-03-10 06:00

Am J Ophthalmol. 2025 Mar 8:S0002-9394(25)00119-9. doi: 10.1016/j.ajo.2025.03.007. Online ahead of print.

ABSTRACT

OBJECTIVE: To identify genetic determinants specific to reticular pseudodrusen (RPD) compared with drusen.

DESIGN: Genome-wide association study (GWAS).

SUBJECTS: Participants with RPD, drusen, and controls from the UK Biobank (UKBB), a large, multisite, community-based cohort.

METHODS: A deep learning framework analyzed 169,370 optical coherence tomography (OCT) volumes to identify cases and controls within the UKBB. Five retina specialists validated the cohorts using OCT and color fundus photographs. Several GWAS were undertaken utilizing the quantity and presence of RPD and drusen. Genome-wide significance was defined as p<5e-8.

MAIN OUTCOME MEASURES: Genetic associations were examined with the number of RPD and drusen within 'pure' cases, where only RPD or drusen were present in either eye. A candidate approach assessed 46 previously known AMD loci. Secondary GWAS were conducted for the number of RPD and drusen in mixed cases, and binary case-control analyses for pure RPD and pure drusen.

RESULTS: The study included 1,787 participants: 1,037 controls, 361 pure drusen, 66 pure RPD, and 323 mixed cases. The primary pure RPD GWAS identified four genome-wide significant loci: rs11200630 near ARMS2-HTRA1 (p=1.9e-09), rs79641866 at PARD3B (p=1.3e-08), rs143184903 near ITPR1 (p=8.1e-09), and rs76377757 near SLN (p=4.3e-08). The latter three are uncommon variants (minor allele frequency <5%). A significant association at the CFH locus was also observed using a candidate approach (p=1.8e-04). For pure drusen, two loci reached genome-wide significance: rs10801555 at CFH (p=6.0e-33) and rs61871744 at ARMS2-HTRA1 (p=4.2e-20).
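As a small sketch, the genome-wide significance filter (p < 5e-8, as defined in the methods) applied to the four reported pure-RPD loci looks like this:

```python
GWS = 5e-8  # genome-wide significance threshold defined in the methods

# The four pure-RPD loci and p-values as reported in the abstract.
hits = [
    ("rs11200630", "ARMS2-HTRA1", 1.9e-09),
    ("rs79641866", "PARD3B", 1.3e-08),
    ("rs143184903", "ITPR1", 8.1e-09),
    ("rs76377757", "SLN", 4.3e-08),
]

significant = [(rsid, locus) for rsid, locus, p in hits if p < GWS]
print(significant)  # all four loci pass the 5e-8 threshold
```

Note that rs76377757 (p = 4.3e-08) clears the threshold only narrowly, consistent with the abstract's caution that three of the associations involve uncommon variants and need replication in larger samples.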

CONCLUSIONS: The study highlights a clear association between the ARMS2-HTRA1 locus and higher RPD load. Although the CFH locus association did not achieve genome-wide significance, a suggestive link was observed. Three novel associations unique to RPD were identified, albeit for uncommon genetic variants. Further studies with larger sample sizes are needed to explore these findings.

PMID:40064387 | DOI:10.1016/j.ajo.2025.03.007

Categories: Literature Watch
