Deep learning
DENSE-SIM: A modular pipeline for the evaluation of cine DENSE images with sub-voxel ground-truth strain
J Cardiovasc Magn Reson. 2025 Feb 21:101866. doi: 10.1016/j.jocmr.2025.101866. Online ahead of print.
ABSTRACT
BACKGROUND: Myocardial strain is a valuable biomarker for diagnosing and predicting cardiac conditions, offering additional prognostic information to traditional metrics like ejection fraction. While cardiovascular magnetic resonance (CMR) methods, particularly cine displacement encoding with stimulated echoes (DENSE), are the gold standard for strain estimation, evaluation of regional strain estimation requires precise ground truth. This study introduces DENSE-Sim, an open-source simulation pipeline for generating realistic cine DENSE images with high-resolution known ground truth strain, enabling evaluation of accuracy and precision in strain analysis pipelines.
METHODS: This pipeline is a modular tool designed for simulating cine DENSE images and evaluating strain estimation performance. It comprises four main modules: 1) anatomy generation, for creating end-diastolic cardiac shapes; 2) motion generation, to produce myocardial deformations over time and Lagrangian strain; 3) DENSE image generation, using Bloch equation simulations with realistic noise, spiral sampling, and phase-cycling; and 4) strain evaluation. To illustrate the pipeline, a synthetic dataset of 180 short-axis slices was created and analysed using the commonly used DENSEanalysis tool. The impact of the spatial regularization parameter (k) in DENSEanalysis was evaluated against the ground-truth pixel strain, particularly to assess the resulting bias and variance characteristics.
RESULTS: Simulated strain profiles were generated with a myocardial SNR ranging from 3.9 to 17.7. For end-systolic radial strain, the DENSEanalysis average signed error (ASE) in Green strain ranged from 0.04 ± 0.09 (true minus calculated, mean ± std) at a typical regularization (k=0.9) to -0.01 ± 0.21 at low regularization (k=0.1). Circumferential strain ASE ranged from -0.00 ± 0.04 at k=0.9 to -0.01 ± 0.10 at k=0.1. Circumferential strain therefore closely matched the ground truth, while radial strain showed larger underestimation, particularly near the endocardium. A lower regularization parameter, from 0.3 to 0.6 depending on the myocardial SNR, would be more appropriate for estimating radial strain, as a compromise between noise compensation and global strain accuracy.
CONCLUSION: Generating realistic cine DENSE images with high-resolution ground-truth strain and myocardial segmentation enables accurate evaluation of strain analysis tools, while reproducing key in vivo acquisition features, and will facilitate the future development of deep-learning models for myocardial strain analysis, enhancing clinical CMR workflows.
PMID:39988298 | DOI:10.1016/j.jocmr.2025.101866
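The Green (Lagrangian) strain used as the error metric above is derived from the deformation gradient F as E = ½(FᵀF − I). A minimal pure-Python sketch (ours, not part of the DENSE-Sim pipeline) for the 2-D case:

```python
# Minimal sketch (not the paper's code): Green-Lagrange strain E = 1/2 (F^T F - I)
# for a 2-D deformation gradient F, of the kind compared against ground truth
# when evaluating radial and circumferential strain.

def green_strain(F):
    """Return the 2x2 Green-Lagrange strain tensor for deformation gradient F."""
    # C = F^T F (right Cauchy-Green deformation tensor)
    C = [[sum(F[k][i] * F[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    I = [[1.0, 0.0], [0.0, 1.0]]
    return [[0.5 * (C[i][j] - I[i][j]) for j in range(2)] for i in range(2)]

# A pure 20% stretch along x, F = diag(1.2, 1.0), gives E_xx = 0.5*(1.2^2 - 1) = 0.22
E = green_strain([[1.2, 0.0], [0.0, 1.0]])
```

Note that Green strain is nonlinear in the stretch, which is one reason large radial strains are harder to estimate accurately than the smaller circumferential ones.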
Missing-modality enabled multi-modal fusion architecture for medical data
J Biomed Inform. 2025 Feb 21:104796. doi: 10.1016/j.jbi.2025.104796. Online ahead of print.
ABSTRACT
BACKGROUND: Fusion of multi-modal data can improve the performance of deep learning models. However, missing modalities are common in medical data due to patient specificity, which is detrimental to the performance of multi-modal models in applications. Therefore, it is critical to adapt the models to missing modalities.
OBJECTIVE: This study aimed to develop an effective multi-modal fusion architecture for medical data that was robust to missing modalities and further improved the performance for clinical tasks.
METHODS: X-ray chest radiographs (image modality), radiology reports (text modality), and structured value data (tabular modality) were fused in this study. Each modality pair was fused with a Transformer-based bi-modal fusion module, and the three bi-modal fusion modules were then combined into a tri-modal fusion framework. Additionally, multivariate loss functions were introduced into the training process to improve the models' robustness to missing modalities during inference. Finally, we designed comparison and ablation experiments to validate the effectiveness of the fusion, the robustness to missing modalities, and the enhancements from each key component. Experiments were conducted on the MIMIC-IV and MIMIC-CXR datasets with a 14-label disease diagnosis task and a patient in-hospital mortality prediction task. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were used to evaluate model performance.
RESULTS: Our proposed architecture showed superior predictive performance, achieving an average AUROC and AUPRC of 0.916 and 0.551 in the 14-label classification task and 0.816 and 0.392 in the mortality prediction task, while the best average AUROC and AUPRC among the comparison methods were 0.876 and 0.492 in the 14-label classification task and 0.806 and 0.366 in the mortality prediction task. Both metrics decreased only slightly when tested with modal-incomplete data. Each of the three key components contributed measurable performance gains.
CONCLUSIONS: The proposed multi-modal fusion architecture effectively fused three modalities and showed strong robustness to missing modalities. This architecture holds promise for scaling up to more modalities to enhance the clinical practicality of the model.
PMID:39988001 | DOI:10.1016/j.jbi.2025.104796
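The paper's Transformer-based fusion modules are not reproduced here, but the core requirement — that the fused representation stay well-defined when a modality is absent — can be sketched with a simple mask-aware average over whichever embeddings are present (function and modality names are ours, for illustration only):

```python
# Toy sketch of missing-modality-tolerant fusion: average only the embeddings
# that are actually present, so inference works with any non-empty subset of
# {image, text, tabular}. This is a stand-in for the paper's learned fusion.

def fuse(embeddings):
    """embeddings: dict modality -> vector (list of floats), or None if missing."""
    present = [v for v in embeddings.values() if v is not None]
    if not present:
        raise ValueError("at least one modality is required")
    dim = len(present[0])
    # element-wise mean over the available modalities
    return [sum(v[i] for v in present) / len(present) for i in range(dim)]

full = fuse({"image": [1.0, 0.0], "text": [0.0, 1.0], "tabular": [1.0, 1.0]})
missing = fuse({"image": [1.0, 0.0], "text": None, "tabular": [1.0, 1.0]})
```

The same shape comes out in both cases, which is what lets a downstream classifier run unchanged on modal-incomplete inputs.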
Explainable paroxysmal atrial fibrillation diagnosis using an artificial intelligence-enabled electrocardiogram
Korean J Intern Med. 2025 Feb 21. doi: 10.3904/kjim.2024.130. Online ahead of print.
ABSTRACT
BACKGROUND/AIMS: Atrial fibrillation (AF) significantly contributes to global morbidity and mortality. Paroxysmal atrial fibrillation (PAF) is particularly common among patients with cryptogenic strokes or transient ischemic attacks and is often clinically silent. This study aims to develop reliable artificial intelligence (AI) algorithms to detect early signs of AF in patients with normal sinus rhythm (NSR) using a 12-lead electrocardiogram (ECG).
METHODS: Between 2013 and 2020, 552,372 ECG traces from 318,321 patients were collected and split into training (n = 331,422), validation (n = 110,475), and test sets (n = 110,475). Deep neural networks were then trained to predict AF onset within one month of NSR. Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC). An explainable AI technique was employed to identify the inference evidence underlying the predictions of deep learning models.
RESULTS: The AUROC for early diagnosis of PAF was 0.905 ± 0.007. The findings reveal that the vicinity of the T wave, including the ST segment and S-peak, significantly influences the ability of the trained neural network to diagnose PAF. Additionally, comparing the summarized ECG in NSR with those in PAF revealed that nonspecific ST-T abnormalities and inverted T waves were associated with PAF.
CONCLUSIONS: Deep learning can predict AF onset from NSR while detecting key features that influence decisions. This suggests that identifying undetected AF may serve as a predictive tool for PAF screening, offering valuable insights into cardiac dysfunction and stroke risk.
PMID:39987899 | DOI:10.3904/kjim.2024.130
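The AUROC figure reported above can be computed with the rank-based (Mann-Whitney U) formulation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch (not the study's code; real pipelines would use a library routine):

```python
# Minimal AUROC via the Mann-Whitney U statistic: count the fraction of
# (positive, negative) pairs where the positive is ranked higher, with ties
# contributing 1/2.

def auroc(y_true, y_score):
    """AUROC = P(score of random positive > score of random negative)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One misordered pair out of four gives 0.75; perfect separation gives 1.0.
a = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

This pairwise view also explains why AUROC is insensitive to class imbalance in the score threshold sense: it depends only on the ranking of cases.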
Multi-atlas multi-modality morphometry analysis of the South Texas Alzheimer's Disease Research Center postmortem repository
Neuroimage Clin. 2025 Feb 18;45:103752. doi: 10.1016/j.nicl.2025.103752. Online ahead of print.
ABSTRACT
Histopathology provides critical insights into the neurological processes underlying neurodegenerative diseases and their impact on the brain, but brain banks combining histology and neuroimaging data are difficult to create. As part of an ongoing global effort to establish new brain banks providing both high-quality neuroimaging scans and detailed histopathology examinations, the South Texas Alzheimer's Disease Research Center postmortem repository was recently created with the specific purpose of studying comorbid dementias. As the repository reaches a milestone of two hundred brain donations, with a hundred curated MRI sessions ready for processing, robust statistical analyses can now be conducted. In this work, we report the first morphometry analysis conducted with this new dataset. We describe the processing pipelines that were specifically developed to exploit the available MRI sequences, and we explain how we addressed several postmortem neuroimaging challenges, such as the separation of brain tissues from fixative fluids, the need for updated brain atlases, and the tissue contrast changes induced by brain fixation. Overall, our results establish that a combination of structural MRI sequences provides enough information for state-of-the-art deep learning algorithms to almost perfectly separate brain tissues from a formalin-buffered solution. Regional brain volumes are challenging to measure in postmortem scans, but robust estimates can be obtained that are sensitive to sex differences and age trends and that reflect clinical diagnosis, neuropathology findings, and the shrinkage induced by tissue fixation. We hope that the new processing methods developed in this work, such as the lightweight deep networks we used to identify the formalin signal in multimodal MRI scans and the MRI synthesis tools we used to correct our anisotropic-resolution brain scans, will inspire other research teams working with postmortem MRI scans.
PMID:39987858 | DOI:10.1016/j.nicl.2025.103752
Deep learning imputes DNA methylation states in single cells and enhances the detection of epigenetic alterations in schizophrenia
Cell Genom. 2025 Feb 15:100774. doi: 10.1016/j.xgen.2025.100774. Online ahead of print.
ABSTRACT
DNA methylation (DNAm) is a key epigenetic mark with essential roles in gene regulation, mammalian development, and human diseases. Single-cell technologies enable profiling DNAm at cytosines in individual cells, but they often suffer from low coverage for CpG sites. We introduce scMeFormer, a transformer-based deep learning model for imputing DNAm states at each CpG site in single cells. Comprehensive evaluations across five single-nucleus DNAm datasets from human and mouse demonstrate scMeFormer's superior performance over alternative models, achieving high-fidelity imputation even with coverage reduced to 10% of original CpG sites. Applying scMeFormer to a single-nucleus DNAm dataset from the prefrontal cortex of patients with schizophrenia and controls identified thousands of schizophrenia-associated differentially methylated regions that would have remained undetectable without imputation and added granularity to our understanding of epigenetic alterations in schizophrenia. We anticipate that scMeFormer will be a valuable tool for advancing single-cell DNAm studies.
PMID:39986279 | DOI:10.1016/j.xgen.2025.100774
Genetic association studies using disease liabilities from deep neural networks
Am J Hum Genet. 2025 Feb 19:S0002-9297(25)00019-9. doi: 10.1016/j.ajhg.2025.01.019. Online ahead of print.
ABSTRACT
The case-control study is a widely used method for investigating the genetic underpinnings of binary traits. However, long-term, prospective cohort studies often grapple with absent or evolving health-related outcomes. Here, we propose two methods, liability and meta, for conducting genome-wide association studies (GWASs) that leverage disease liabilities calculated from deep patient phenotyping. Analyzing 38 common traits in ∼300,000 UK Biobank participants, we identified an increased number of loci in comparison to the number identified by the conventional case-control approach, and there were high replication rates in larger external GWASs. Further analyses confirmed the disease specificity of the genetic architecture; the meta method demonstrated higher robustness when phenotypes were imputed with low accuracy. Additionally, polygenic risk scores based on disease liabilities more effectively predicted newly diagnosed cases in the 2022 dataset, which were controls in the earlier 2019 dataset. Our findings demonstrate that integrating high-dimensional phenotypic data into deep neural networks enhances genetic association studies while capturing disease-relevant genetic architecture.
PMID:39986278 | DOI:10.1016/j.ajhg.2025.01.019
Electrocardiographic-driven artificial intelligence model: a new approach to predicting one-year mortality in heart failure with reduced ejection fraction patients
Int J Med Inform. 2025 Feb 19;197:105843. doi: 10.1016/j.ijmedinf.2025.105843. Online ahead of print.
ABSTRACT
BACKGROUND: Despite the proliferation of heart failure (HF) mortality prediction models, their practical utility is limited. To address this, we used a large dataset to develop and validate a deep learning artificial intelligence (AI) model for predicting one-year mortality in patients with heart failure with reduced ejection fraction (HFrEF). The study's focus was to assess the effectiveness of an AI algorithm, trained on an extensive collection of ECG data, in predicting one-year mortality in HFrEF patients.
METHODS: We selected HFrEF patients who had high-quality baseline ECGs from two hospital visits between September 2016 and May 2021. A total of 3,894 HFrEF patients (64% male, mean age 64.3, mean ejection fraction 29.8%) were included. Using this ECG data, we developed a deep learning model and evaluated its performance using the area under the receiver operating characteristic curve (AUROC).
RESULTS: The model, validated against 16,228 independent ECGs from the original cohort, achieved an AUROC of 0.826 (95% CI, 0.794-0.859). It displayed a high sensitivity of 99.0%, a positive predictive value of 16.6%, and a negative predictive value of 98.4%. Importantly, the deep learning algorithm emerged as an independent predictor of one-year mortality in HFrEF patients, with an adjusted hazard ratio of 4.12 (95% CI 2.32-7.33, p < 0.001).
CONCLUSIONS: The depth and quality of our dataset and our AI-driven ECG analysis model significantly enhance the prediction of one-year mortality in HFrEF patients. This promises a more personalized, future-focused approach in HF patient management.
PMID:39986123 | DOI:10.1016/j.ijmedinf.2025.105843
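The sensitivity, PPV, and NPV quoted above all derive from the same confusion matrix. A small sketch with illustrative counts (ours, not the study's data):

```python
# Confusion-matrix metrics as reported for binary mortality prediction.
# tp/fp/fn/tn counts below are made up for illustration.

def binary_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # recall of the positive class
        "ppv": tp / (tp + fp),          # positive predictive value (precision)
        "npv": tn / (tn + fn),          # negative predictive value
        "specificity": tn / (tn + fp),
    }

m = binary_metrics(tp=99, fp=497, fn=1, tn=403)
```

The combination seen in the abstract — very high sensitivity with a low PPV — is typical when a rare outcome is screened with a threshold tuned to miss almost no events, at the cost of many false positives.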
Specific glycomacropeptide detection via polyacrylamide gel electrophoresis with dual imaging and signal-fusion deep learning
Food Chem. 2025 Feb 12;476:143293. doi: 10.1016/j.foodchem.2025.143293. Online ahead of print.
ABSTRACT
Herein, we report a sodium dodecyl-sulfate polyacrylamide gel electrophoresis (SDS-PAGE) method featuring dual imaging and signal-fusion deep learning for the specific identification and analysis of glycomacropeptide (GMP) in milk samples. Conventional SDS-PAGE methods lack specificity because of the single staining of protein bands and the overlap between GMP and β-lactoglobulin (βLg). Our dual imaging method generated a pair of complementary detection signals by combining intrinsic fluorescence imaging (IFI) and silver staining. Comparing the IFI image with the staining image highlighted the presence of GMP and differentiated it from βLg. Additionally, we trained a signal-fusion deep learning model to improve the quantitative performance of our method. The model fused the features extracted from the paired detection signals (IFI and staining) and accurately classified them into different mixing ratios (the proportion of GMP-containing whey in the sample), indicating its potential for quantitative analysis of the mixing ratio of GMP added to whey samples. The developed method has the merits of specificity, sensitivity, and simplicity, and holds potential for the analysis of proteins/peptides with unique IFI properties in food safety, basic research, biopharming, etc.
PMID:39986063 | DOI:10.1016/j.foodchem.2025.143293
Building rooftop extraction from high resolution aerial images using multiscale global perceptron with spatial context refinement
Sci Rep. 2025 Feb 22;15(1):6499. doi: 10.1038/s41598-025-91206-6.
ABSTRACT
Building rooftop extraction has been applied in various fields, such as cartography, urban planning, automatic driving, and intelligent city construction. Automatic building detection and extraction algorithms using high-spatial-resolution aerial images can provide precise location and geometry information, significantly reducing time, costs, and labor. Recently, deep learning algorithms, especially convolutional neural networks (CNNs) and Transformers, have shown robust local or global feature extraction ability, achieving advanced performance in intelligent interpretation compared with conventional methods. However, buildings often exhibit scale variation, spectral heterogeneity and similarity, and complex geometric shapes. Hence, building rooftop extraction results from these methods suffer from fragmentation and lack spatial detail. To address these issues, this study developed a multi-scale global perceptron network based on Transformer and CNN components, using novel encoder-decoders to enhance the contextual representation of buildings. Specifically, an improved multi-head-attention encoder constructs multi-scale tokens to enhance global semantic correlations. Meanwhile, a context refinement decoder synergistically uses high-level semantic representations and shallow features to restore spatial details. Overall, quantitative analysis and visual experiments confirmed that the proposed model is more efficient than and superior to other state-of-the-art methods, with a 95.18% F1 score on the WHU dataset and a 93.29% F1 score on the Massub dataset.
PMID:39987354 | DOI:10.1038/s41598-025-91206-6
Enhanced recognition and counting of high-coverage Amorphophallus konjac by integrating UAV RGB imagery and deep learning
Sci Rep. 2025 Feb 22;15(1):6501. doi: 10.1038/s41598-025-91364-7.
ABSTRACT
Accurate counting of Amorphophallus konjac (Konjac) plants can offer valuable insights for agricultural management and yield prediction. While current studies have primarily focused on detecting and counting crop plants during the early, low-coverage stages, there has been limited investigation of the later high-coverage stages, which could impact the accuracy of yield forecasting. High canopy coverage and severe occlusion in later stages pose significant challenges for plant detection and counting. Therefore, this study evaluated the performance of the Count Crops tool and a deep learning (DL) model derived from early-stage unmanned aerial vehicle (UAV) imagery in detecting and counting Konjac plants during the high-coverage growth stage. Additionally, the study proposed an approach that integrates the DL model with Konjac location information from both early-stage and high-coverage imagery to improve recognition accuracy during the high-canopy-coverage stage. The results indicated that the Count Crops tool outperformed the DL model built solely from early-stage imagery in detecting and counting Konjac plants during the high-coverage period. However, given the single-stem, erect growth habit of Konjac, combining the DL model with plant location information achieved the highest accuracy (Precision = 98.7%, Recall = 86.7%, F1-score = 92.3%). Our findings indicate that combining DL detection results from the early growth stages with plant positional information from both growth stages not only significantly improved detection and counting accuracy but also saved time on annotating and training DL samples in the later stages. This study introduces an innovative approach for detecting and counting Konjac plants during high-coverage periods, providing a new perspective for recognizing and counting other crops at highly overlapping growth stages.
PMID:39987316 | DOI:10.1038/s41598-025-91364-7
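As a quick consistency check, the F1-score above is the harmonic mean of precision and recall, and plugging in the reported values reproduces the 92.3% figure:

```python
# F1 = harmonic mean of precision and recall; verifying the abstract's numbers
# (Precision = 98.7%, Recall = 86.7% -> F1 ~ 92.3%).

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.987, 0.867)
```

Because the harmonic mean is dominated by the smaller operand, the reported F1 sits much closer to the 86.7% recall than to the 98.7% precision.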
A deep learning digital biomarker to detect hypertension and stratify cardiovascular risk from the electrocardiogram
NPJ Digit Med. 2025 Feb 22;8(1):120. doi: 10.1038/s41746-025-01491-8.
ABSTRACT
Hypertension is a major risk factor for cardiovascular disease (CVD), yet blood pressure is measured intermittently and under suboptimal conditions. We developed a deep learning model to identify hypertension and stratify risk of CVD using 12-lead electrocardiogram waveforms. HTN-AI was trained to detect hypertension using 752,415 electrocardiograms from 103,405 adults at Massachusetts General Hospital. We externally validated HTN-AI and demonstrated associations between HTN-AI risk and incident CVD in 56,760 adults at Brigham and Women's Hospital. HTN-AI accurately discriminated hypertension (internal and external validation AUROC 0.803 and 0.771, respectively). In Fine-Gray regression analyses model-predicted probability of hypertension was associated with mortality (hazard ratio per standard deviation: 1.47 [1.36-1.60], p < 0.001), HF (2.26 [1.90-2.69], p < 0.001), MI (1.87 [1.69-2.07], p < 0.001), stroke (1.30 [1.18-1.44], p < 0.001), and aortic dissection or rupture (1.69 [1.22-2.35], p < 0.001) after adjustment for demographics and risk factors. HTN-AI may facilitate diagnosis of hypertension and serve as a digital biomarker of hypertension-associated CVD.
PMID:39987256 | DOI:10.1038/s41746-025-01491-8
Semi-supervised tissue segmentation from histopathological images with consistency regularization and uncertainty estimation
Sci Rep. 2025 Feb 22;15(1):6506. doi: 10.1038/s41598-025-90221-x.
ABSTRACT
Pathologists have traditionally depended on their visual experience to assess tissue structures in smear images, a process that is time-consuming, error-prone, and inconsistent. Deep learning, particularly convolutional neural networks (CNNs), offers the ability to automate this procedure by recognizing patterns in tissue images. However, training these models requires large amounts of labeled data, which can be difficult to obtain due to the skill required for annotation and the scarcity of data, particularly for rare diseases. This work introduces a new semi-supervised method for semantic segmentation of tissue structures in histopathological images. The study presents a CNN-based teacher model that generates pseudo-labels to train a student model, aiming to overcome the drawbacks of conventional supervised learning approaches. Self-supervised training is used to improve the teacher model's performance on smaller datasets. Consistency regularization is integrated to efficiently train the student model on labeled data. Further, the study uses Monte Carlo dropout to estimate the uncertainty of the proposed model. The proposed model demonstrated promising results, achieving an mIoU score of 0.64 on a public dataset, highlighting its potential to improve segmentation accuracy in histopathological image analysis.
PMID:39987243 | DOI:10.1038/s41598-025-90221-x
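Monte Carlo dropout, used above for uncertainty estimation, keeps dropout active at inference time, runs T stochastic forward passes, and reads predictive uncertainty from the spread of the outputs. A toy one-layer sketch (the "model", weights, and inputs are ours, not the paper's):

```python
# Toy Monte Carlo dropout: repeat stochastic forward passes with dropout left
# on, then report the mean prediction and its variance as an uncertainty
# estimate. The single linear layer here stands in for a real segmentation net.

import random

def mc_dropout_predict(x, weights, p=0.5, T=200, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(T):
        # Bernoulli mask on the weights, rescaled so the expectation is unchanged
        masked = [w * (rng.random() > p) / (1 - p) for w in weights]
        preds.append(sum(wi * xi for wi, xi in zip(masked, x)))
    mean = sum(preds) / T
    var = sum((pr - mean) ** 2 for pr in preds) / T
    return mean, var

mean, var = mc_dropout_predict([1.0, 2.0], [0.5, -0.25])
```

A nonzero variance flags inputs the network is unsure about; in segmentation this is typically computed per pixel to produce an uncertainty map.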
An intelligent prediction method for rock core integrity based on deep learning
Sci Rep. 2025 Feb 22;15(1):6456. doi: 10.1038/s41598-025-90924-1.
ABSTRACT
To address the serious inefficiency of traditional manual evaluation of rock core integrity, a deep learning-based algorithm named IDA-RCF (Intelligent Detection Algorithm for Rock Core Fissure) is proposed in this paper, which automates the evaluation of rock core integrity based on fissure identification results. In IDA-RCF, a two-branch feature extraction network is first proposed: branch one fully extracts the complex and variable local detail fissure features via deformable convolution, and branch two captures the global context of the rock core images via the self-attention-based EfficientViT network. A multi-level feature fusion network is then proposed to adaptively fuse local and global features from the same level with the fused feature information from the previous level, thereby capturing more valid information and eliminating redundancy. The fused feature layers are decoded by the feature decoder to output the rock core fissure detection results. Finally, the fissure rate is automatically calculated from the detection results to predict the degree of rock core integrity. The experimental results show that the accuracy metrics F1, mAP@0.5, and mAP@0.5:0.95 of IDA-RCF are 93.09%, 94.44%, and 84.61%, respectively. The relative error between the predicted and manually measured fissure rate is only 4.38%, and the prediction accuracy for the degree of rock core integrity is 93.8%, indicating that the proposed method can accomplish the intelligent evaluation of rock core integrity with high precision.
PMID:39987183 | DOI:10.1038/s41598-025-90924-1
A hybrid inception-dilated-ResNet architecture for deep learning-based prediction of COVID-19 severity
Sci Rep. 2025 Feb 22;15(1):6490. doi: 10.1038/s41598-025-91322-3.
ABSTRACT
Chest computed tomography (CT) scans are essential for accurately assessing the severity of the novel coronavirus disease (COVID-19), facilitating appropriate therapeutic interventions and monitoring disease progression. However, determining COVID-19 severity requires a radiologist with significant expertise. This study introduces a pioneering use of deep learning (DL) to evaluate COVID-19 severity from lung CT images, presenting a novel and effective method for assessing the severity of pulmonary manifestations in COVID-19 patients. Inception-Residual networks (Inception-ResNet), advanced hybrid models known for their compactness and effectiveness, were used to extract relevant features from CT scans. Inception-ResNet incorporates a dilation mechanism into its ResNet component, enhancing its ability to accurately classify lung involvement stages. This study demonstrates that dilated residual networks (dResNet) outperform their non-dilated counterparts in image classification tasks, as their architectural design allows the network to acquire comprehensive global information by expanding its receptive field. Our study utilized an initial dataset of 1,548 human thoracic CT scans, meticulously annotated by two experienced specialists. Lung involvement was determined by calculating a percentage based on observations made for each scan. The hybrid methodology successfully distinguished the ten distinct severity levels associated with COVID-19, achieving a maximum accuracy of 96.40%. This system demonstrates its effectiveness as a diagnostic framework for assessing lung involvement in COVID-19-affected individuals, facilitating disease progression tracking.
PMID:39987169 | DOI:10.1038/s41598-025-91322-3
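The receptive-field argument for dilated convolutions can be made concrete: with stride 1, each layer with kernel size k and dilation d adds (k−1)·d to the receptive field, so a stack of dilated layers covers far more context than the same number of plain convolutions. A small back-of-the-envelope sketch (ours, not the paper's code):

```python
# Receptive-field growth for stacked stride-1 convolutions:
# rf = 1 + sum over layers of (kernel_size - 1) * dilation.

def receptive_field(layers):
    """layers: list of (kernel_size, dilation) tuples, all with stride 1."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

plain = receptive_field([(3, 1)] * 4)                         # four plain 3x3 convs
dilated = receptive_field([(3, 1), (3, 2), (3, 4), (3, 8)])   # exponential dilation
```

Four plain 3×3 layers see a 9-pixel-wide window, while the dilated stack sees 31 pixels with the same parameter count, which is the "global information" gain the abstract attributes to dResNet.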
SVEA: an accurate model for structural variation detection using multi-channel image encoding and enhanced AlexNet architecture
J Transl Med. 2025 Feb 22;23(1):221. doi: 10.1186/s12967-025-06213-y.
ABSTRACT
BACKGROUND: Structural variations (SVs) are a pervasive and impactful class of genetic variation within the genome, significantly influencing gene function, impacting human health, and contributing to disease. Recent advances in deep learning have shown promise for SV detection; however, current methods still encounter key challenges in effective feature extraction and accurately predicting complex variations.
METHODS: We introduce SVEA, an advanced deep learning model designed to address these challenges. SVEA employs a novel multi-channel image encoding approach that transforms SVs into multi-dimensional image formats, improving the model's ability to capture subtle genomic variations. Additionally, SVEA integrates multi-head self-attention mechanisms and multi-scale convolution modules, enhancing its ability to capture global context and multi-scale features. The model was trained and tested on a diverse range of genomic datasets to evaluate its accuracy and generalizability.
RESULTS: SVEA demonstrated superior performance in detecting complex SVs compared to existing methods, with improved accuracy across various genomic regions. The multi-channel encoding and advanced feature extraction techniques contributed to the model's enhanced ability to predict subtle and complex variations.
CONCLUSIONS: This study presents SVEA, a deep learning model incorporating advanced encoding and feature extraction techniques to enhance structural variation prediction. The model demonstrates high accuracy, outperforming existing methods by approximately 4%, while also identifying areas for further optimization.
PMID:39987107 | DOI:10.1186/s12967-025-06213-y
Association of Sarcopenia With Toxicity and Survival in Patients With Lung Cancer, a Multi-Institutional Study With External Dataset Validation
Clin Lung Cancer. 2025 Jan 28:S1525-7304(25)00021-X. doi: 10.1016/j.cllc.2025.01.010. Online ahead of print.
ABSTRACT
INTRODUCTION: Sarcopenia is associated with worse survival in non-small cell lung cancer (NSCLC), but its association with toxicity is less well studied. Here, we investigated the association between imaging-assessed sarcopenia and toxicity in patients with NSCLC.
METHODS: We analyzed a "chemoradiation" cohort (n = 318) of patients with NSCLC treated with chemoradiation, and an external validation "chemo-surgery" cohort (n = 108) treated with chemotherapy and surgery from 2002 to 2013 at a different institution. A deep-learning pipeline used pretreatment computed tomography scans to estimate skeletal muscle (SM) area at the third lumbar vertebral level. Sarcopenia was defined by dichotomizing the SM index (SM area adjusted for height and sex). The primary endpoint was NCI CTCAE v5.0 grade 3-5 (G3-5) toxicity within 21 days of the first chemotherapy cycle. Multivariable analyses (MVA) of toxicity endpoints with sarcopenia and baseline characteristics were performed by logistic regression, and overall survival (OS) was analyzed using Cox regression.
RESULTS: Sarcopenia was identified in 36% and 36% of patients in the chemoradiation and chemo-surgery cohorts, respectively. On MVA, sarcopenia was associated with worse G3-5 toxicity in chemoradiation (HR 2.00, P < .01) and chemo-surgery cohorts (HR 2.95, P = .02). In the chemoradiation cohort, worse OS was associated with G3-5 toxicity (HR 1.42, P = .02) but not sarcopenia on MVA. In chemo-surgery cohort, worse OS was associated with sarcopenia (HR 2.03, P = .02) but not G3-5 toxicity on MVA.
CONCLUSION: Sarcopenia, assessed by an automated deep-learning system, was associated with worse toxicity and survival outcomes in patients with NSCLC. Sarcopenia can be utilized to tailor treatment decisions to reduce adverse events and improve survival.
PMID:39986945 | DOI:10.1016/j.cllc.2025.01.010
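The dichotomization step described in the methods — flagging sarcopenia when the height- and sex-adjusted SM index falls below a cutoff — can be sketched as follows. The cutoff values here are placeholders for illustration, not the study's thresholds:

```python
# Sketch of sex-specific SM-index dichotomization. SMI = SM area / height^2,
# compared against a per-sex cutoff. The cutoffs below are hypothetical.

CUTOFFS = {"M": 45.0, "F": 34.0}  # cm^2/m^2, illustrative values only

def is_sarcopenic(sm_area_cm2, height_m, sex):
    smi = sm_area_cm2 / height_m ** 2
    return smi < CUTOFFS[sex]

flag = is_sarcopenic(sm_area_cm2=120.0, height_m=1.75, sex="M")
```

Dichotomizing a continuous index like this simplifies the multivariable models at the cost of discarding gradation near the threshold, which is one reason cutoff choice matters for reproducibility across cohorts.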
Mechanosensing alters platelet migration
Acta Biomater. 2025 Feb 20:S1742-7061(25)00136-9. doi: 10.1016/j.actbio.2025.02.042. Online ahead of print.
ABSTRACT
Platelets have long been established as a safeguard of our vascular system. Recently, haptotactic platelet migration has been discovered as a part of the immune response. In addition, platelets exhibit mechanosensing properties, changing their behavior in response to the stiffness of the underlying substrate. However, the influence of substrate stiffness on platelet migration behavior remains elusive. Here, we investigated the migration of platelets on fibrinogen-coated polydimethylsiloxane (PDMS) substrates with different stiffnesses. Using phase-contrast and fluorescence microscopy as well as a deep-learning neural network, we tracked single migrating platelets and measured their migration distance and velocity. We found that platelets migrated on stiff PDMS substrates (E = 2 MPa), while they did not migrate on soft PDMS substrates (E = 5 kPa). Platelets migrated also on PDMS substrates with intermediate stiffness (E = 100 kPa), but their velocity and the fraction of migrating platelets were diminished compared to platelets on stiff PDMS substrates. The straightness of platelet migration, however, was not significantly influenced by substrate stiffness. We used scanning ion conductance microscopy (SICM) to image the three-dimensional shape of migrating platelets, finding that platelets on soft substrates did not show the polarization and shape change associated with migration. Furthermore, the fibrinogen density gradient, which is generated by migrating platelets, was reduced for platelets on soft substrates. Our work demonstrates that substrate stiffness, and thus platelet mechanosensing, influences platelet migration. Substrate stiffness for optimal platelet migration is quite high (>100 kPa) in comparison to other cell types, with possible implications on platelet behavior in inflammatory and injured tissue. STATEMENT OF SIGNIFICANCE: Platelets can feel and react to the stiffness of their surroundings - a process called mechanosensation. 
Additionally, platelets migrate via substrate-bound fibrinogen as part of the innate immune response during injury or inflammation. It has been shown that the migration of immune cells is influenced by the stiffness of the underlying substrate, but the effect of substrate stiffness on the migration of platelets has not yet been investigated. Using PDMS substrates of different stiffnesses, we show that substrate stiffness affects platelet migration. Stiff substrates facilitate fast and frequent platelet migration with strong platelet shape anisotropy and strong fibrinogen removal, while soft substrates inhibit platelet migration. These findings highlight the influence of the stiffness of the surrounding tissue on the platelet immune response, possibly enhancing platelet migration in inflamed tissue.
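The per-platelet metrics described above (migration distance, velocity, straightness) can be computed directly from tracked coordinates. The following toy Python sketch, which is illustrative and not the authors' analysis code, takes straightness as net displacement divided by total path length:

```python
import math

def migration_metrics(track, dt=1.0):
    # Toy example (not the authors' code): distance, mean velocity, and
    # straightness from a tracked trajectory of (x, y) positions.
    # Straightness = net start-to-end displacement / total path length.
    steps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    path_length = sum(math.hypot(dx, dy) for dx, dy in steps)
    net = math.hypot(track[-1][0] - track[0][0],
                     track[-1][1] - track[0][1])
    velocity = path_length / (dt * len(steps))  # mean speed over the track
    straightness = net / path_length if path_length else 0.0
    return path_length, velocity, straightness

# A right-angle path: total distance 2.0, straightness sqrt(2)/2
d, v, s = migration_metrics([(0, 0), (1, 0), (1, 1)])
```

A perfectly straight track gives straightness 1.0 regardless of speed, which is why this quantity can stay constant across stiffnesses even when velocity drops.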
PMID:39986637 | DOI:10.1016/j.actbio.2025.02.042
Artificial intelligence and different image modalities in Uveal Melanoma diagnosis and prognosis: A narrative review
Photodiagnosis Photodyn Ther. 2025 Feb 20:104528. doi: 10.1016/j.pdpdt.2025.104528. Online ahead of print.
ABSTRACT
BACKGROUND: Uveal melanoma (UM) is the most common primary intraocular tumor in adults and can be curable if detected early enough. Various methods are available to treat UM, but the most commonly used and effective approach is plaque radiotherapy using Iodine-125 and Ruthenium-106.
METHOD: The authors searched three databases (PubMed, Scopus, and Google Scholar) to identify relevant studies published from 2017 to 2024.
RESULTS: Imaging technologies such as Ultrasound (US), Fundus Photography (FP), Optical Coherence Tomography (OCT), Fluorescein Angiography (FA), and Magnetic Resonance Imaging (MRI) play a vital role in the diagnosis and prognosis of UM. The present review assessed the power of these image modalities, when integrated with artificial intelligence (AI), for the diagnosis and prognosis of patients affected by UM.
CONCLUSION: The reviewed studies indicate that AI is a developing tool for image analysis that enhances diagnostic workflows, from data and image processing to clinical decisions, improving tailored treatment scenarios, response prediction, and prognostication.
PMID:39986588 | DOI:10.1016/j.pdpdt.2025.104528
A novel generative model for brain tumor detection using magnetic resonance imaging
Comput Med Imaging Graph. 2025 Feb 19;121:102498. doi: 10.1016/j.compmedimag.2025.102498. Online ahead of print.
ABSTRACT
Brain tumors kill thousands of people worldwide each year, and early identification through diagnosis is essential for monitoring and treating patients. The proposed study introduces a new method based on intelligent computational cells that segment the tumor region with high precision. The method uses deep learning to detect brain tumors with the "You Only Look Once" (YOLOv8) framework, followed by a fine-tuning process at the end of the network in which intelligent computational cells traverse the detected region and segment the edges of the brain tumor. In addition, the method uses a classification pipeline that combines a set of classifiers and feature extractors with grid search to find the best combination and the best parameters for the dataset. The method achieved accuracies above 98% for region detection, above 99% for brain tumor segmentation, and above 98% for binary classification of brain tumors, with a segmentation time under 1 s, surpassing the state of the art on the same database and demonstrating the effectiveness of the proposed method. The approach also classifies different databases through data fusion to detect the presence of tumors in MRI images, as well as to estimate the patient's life span. The segmentation and classification steps are validated against the literature, with comparisons to works that used the same dataset. The method further introduces a generative-AI component capable of producing a pre-diagnosis from the input data through a Large Language Model (LLM), and can be used in systems that aid medical imaging diagnosis. As a contribution, this study employs new detection models combined with innovative methods based on digital image processing to improve segmentation metrics, and uses data fusion, combining two tumor datasets, to enhance classification performance.
The study also utilizes LLM models to refine the pre-diagnosis obtained after classification. Thus, this study proposes a Computer-Aided Diagnosis (CAD) method through AI combining digital image processing, CNNs, and LLMs.
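The exhaustive extractor/classifier search the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: `evaluate` stands in for a cross-validated accuracy score, and the extractor/classifier names are placeholders.

```python
from itertools import product

def grid_search(extractors, classifiers, param_grids, evaluate):
    # Toy grid search (not the paper's implementation): try every
    # extractor/classifier/parameter combination, keep the best score.
    best_combo, best_score = None, -float("inf")
    for ext, clf in product(extractors, classifiers):
        for params in param_grids[clf]:
            score = evaluate(ext, clf, params)
            if score > best_score:
                best_combo, best_score = (ext, clf, params), score
    return best_combo, best_score

# Dummy precomputed scores standing in for cross-validated accuracy.
scores = {("hog", "svm", 0.1): 0.95, ("hog", "svm", 1.0): 0.98,
          ("hog", "rf", 50): 0.96,   ("hog", "rf", 100): 0.93,
          ("lbp", "svm", 0.1): 0.89, ("lbp", "svm", 1.0): 0.90,
          ("lbp", "rf", 50): 0.97,   ("lbp", "rf", 100): 0.94}
best, score = grid_search(
    ["hog", "lbp"], ["svm", "rf"],
    {"svm": [0.1, 1.0], "rf": [50, 100]},
    lambda e, c, p: scores[(e, c, p)])
```

In practice `evaluate` would train the chosen pipeline and return held-out accuracy; the exhaustive loop is the same.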
PMID:39985841 | DOI:10.1016/j.compmedimag.2025.102498
Enhancing Functional Protein Design Using Heuristic Optimization and Deep Learning for Anti-Inflammatory and Gene Therapy Applications
Proteins. 2025 Feb 22. doi: 10.1002/prot.26810. Online ahead of print.
ABSTRACT
Protein sequence design is a highly challenging task, aimed at discovering new proteins that are more functional and producible under laboratory conditions than their natural counterparts. Deep learning-based approaches developed to address this problem have achieved significant success. However, these approaches often do not adequately emphasize the functional properties of proteins. In this study, we developed a heuristic optimization method to enhance key functionalities such as solubility, flexibility, and stability, while preserving the structural integrity of proteins. This method aims to reduce laboratory demands by enabling a design that is both functional and structurally sound. This approach is particularly valuable for the synthetic production of proteins with anti-inflammatory properties and those used in gene therapy. The designed proteins were initially evaluated for their ability to preserve natural structures using recovery and confidence metrics, followed by assessments with the AlphaFold tool. Additionally, natural protein sequences were mutated using a genetic algorithm and compared with those designed by our method. The results demonstrate that the protein sequences generated by our method exhibit much greater similarity to native protein sequences and structures. The code and sequences for the designed proteins are available at https://github.com/aysenursoyturk/HMHO.
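The genetic-algorithm baseline used for comparison can be illustrated with a minimal sketch. This is hypothetical, not the HMHO code: a toy similarity-to-target fitness stands in for the solubility, flexibility, and stability objectives of the study.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, rate, rng):
    # Point mutation: each residue is replaced with a random amino acid
    # with probability `rate`.
    return "".join(rng.choice(AMINO_ACIDS) if rng.random() < rate else aa
                   for aa in seq)

def evolve(seq, fitness, generations=50, pop_size=20, rate=0.05, seed=0):
    # Minimal elitist loop: generate mutants of the current best sequence
    # each generation and keep the fittest (the parent survives, so
    # fitness never decreases).
    rng = random.Random(seed)
    best = seq
    for _ in range(generations):
        pop = [mutate(best, rate, rng) for _ in range(pop_size)] + [best]
        best = max(pop, key=fitness)
    return best

# Toy fitness: number of residues matching a target sequence.
target = "MKTAYIAKQR"
fit = lambda s: sum(a == b for a, b in zip(s, target))
result = evolve("AAAAAAAAAA", fit)
```

A real functional-design objective would replace `fit` with predicted solubility/stability scores; the mutate-select loop itself is unchanged.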
PMID:39985803 | DOI:10.1002/prot.26810