Deep learning

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data

Fri, 2025-01-10 06:00

J Magn Reson Imaging. 2025 Jan 10. doi: 10.1002/jmri.29686. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning-based segmentation of brain metastases relies on large amounts of fully annotated data by domain experts. Semi-supervised learning offers potential efficient methods to improve model performance without excessive annotation burden.

PURPOSE: This work tests the viability of semi-supervision for brain metastases segmentation.

STUDY TYPE: Retrospective.

SUBJECTS: There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases.

FIELD STRENGTH/SEQUENCES: 1.5 T and 3 T, 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR).

ASSESSMENT: Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full and half-sized training sets.
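As a rough illustration of the mean-teacher scheme named above (not the study's implementation): a "teacher" copy of the network tracks an exponential moving average (EMA) of the student's weights, and a consistency loss pulls the student's predictions on unlabeled scans toward the teacher's. The flat weight dictionaries and `alpha` value below are illustrative stand-ins for real network parameters.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """One EMA step of the teacher weights toward the student (mean teacher).

    Illustrative only: weights are flat NumPy arrays standing in for
    network parameters; a real U-Net would update per-tensor.
    """
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def consistency_loss(student_pred, teacher_pred):
    """Mean squared difference between student and teacher predictions
    on unlabeled scans -- the unsupervised part of the training signal."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

# Toy example: one EMA step pulls the teacher slightly toward the student.
student = {"w": np.array([1.0, 2.0])}
teacher = {"w": np.array([0.0, 0.0])}
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])  # [0.1 0.2]
```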

STATISTICAL TESTS: Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the following: the number of false-positive predictions, the number of true positive predictions, the 95th Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired samples t test for a single fold, and across all folds within a given cohort.
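For readers unfamiliar with the two overlap metrics used here, a minimal sketch (brute force, on binary masks and surface point sets; real pipelines use optimized surface-distance libraries):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def hd95(pred_pts, truth_pts):
    """95th-percentile Hausdorff distance between two point sets
    (e.g., voxel coordinates of mask surfaces). Brute-force sketch."""
    d = np.linalg.norm(pred_pts[:, None, :] - truth_pts[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

pred  = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 1], [0, 0, 0]])
print(round(dice(pred, truth), 3))  # 0.8
```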

RESULTS: Semi-supervision outperformed the supervised baseline for all sites. When trained on half the dataset, the best-performing semi-supervised method achieved average DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% over the supervised baseline on the four test cohorts; when trained on the full dataset, the improvements were 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7%. In addition, in three of four datasets, semi-supervised training produced results equal to or better than those of supervised models trained on twice the labeled data.

DATA CONCLUSION: Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable for independent external test sets when trained on small amounts of labeled data.

PLAIN LANGUAGE SUMMARY: Artificial intelligence requires extensive datasets with large amounts of annotated data from medical experts, which can be difficult to acquire due to the large workload. To compensate for this, it is possible to utilize large amounts of unannotated clinical data in addition to annotated data. However, this approach has not been widely tested for the most common intracranial tumor, brain metastases. This study shows that the approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners.

LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 2.

PMID:39792624 | DOI:10.1002/jmri.29686

Categories: Literature Watch

Visualizing Preosteoarthritis: Updates on UTE-Based Compositional MRI and Deep Learning Algorithms

Fri, 2025-01-10 06:00

J Magn Reson Imaging. 2025 Jan 10. doi: 10.1002/jmri.29710. Online ahead of print.

ABSTRACT

Osteoarthritis (OA) is heterogeneous and involves structural changes in the whole joint, such as cartilage, meniscus/labrum, ligaments, and tendons, mainly with short T2 relaxation times. Detecting OA before the onset of irreversible changes is crucial for early proactive management and for limiting the growing disease burden. Recent advanced quantitative imaging techniques and deep learning (DL) algorithms in musculoskeletal imaging have shown great potential for visualizing "pre-OA." In this review, we first focus on ultrashort echo time-based magnetic resonance imaging (MRI) techniques for direct visualization as well as quantitative morphological and compositional assessment of both short- and long-T2 musculoskeletal tissues, and second explore how DL is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the classification, prediction, and management of OA.

PLAIN LANGUAGE SUMMARY: Detecting osteoarthritis (OA) before the onset of irreversible changes is crucial for early proactive management. OA is heterogeneous and involves structural changes in the whole joint, such as cartilage, meniscus/labrum, ligaments, and tendons, mainly with short T2 relaxation times. Ultrashort echo time-based magnetic resonance imaging (MRI), in particular, enables direct visualization and quantitative compositional assessment of short-T2 tissues. Deep learning is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the detection, classification, and prediction of disease. Together, these advances are moving the field toward identification of imaging biomarkers/features for pre-OA.

LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 2.

PMID:39792443 | DOI:10.1002/jmri.29710

Categories: Literature Watch

deep-AMPpred: A Deep Learning Method for Identifying Antimicrobial Peptides and Their Functional Activities

Fri, 2025-01-10 06:00

J Chem Inf Model. 2025 Jan 10. doi: 10.1021/acs.jcim.4c01913. Online ahead of print.

ABSTRACT

Antimicrobial peptides (AMPs) are small peptides that play an important role in disease defense. As the problem of pathogen resistance caused by the misuse of antibiotics intensifies, the identification of AMPs as alternatives to antibiotics has become a hot topic. Accurately identifying AMPs using computational methods has been a key issue in bioinformatics in recent years. Although there are many machine learning-based AMP identification tools, most of them do not focus on, or focus on only a few, functional activities. Predicting the multiple activities of antimicrobial peptides can help discover candidate peptides with broad-spectrum antimicrobial ability. We propose a two-stage AMP predictor, deep-AMPpred, in which the first stage distinguishes AMPs from other peptides and the second stage addresses the multilabel problem of 13 common functional activities of AMPs. deep-AMPpred uses the ESM-2 model to encode AMP features and integrates CNN, BiLSTM, and CBAM models to discover AMPs and their functional activities. The ESM-2 model captures the global contextual features of the peptide sequence, while CNN, BiLSTM, and CBAM combine local feature extraction, long- and short-term dependency modeling, and attention mechanisms to improve the performance of deep-AMPpred in AMP and function prediction. Experimental results demonstrate that deep-AMPpred performs well in accurately identifying AMPs and predicting their functional activities. This confirms the effectiveness of using the ESM-2 model to capture meaningful peptide sequence features and integrating multiple deep learning models for AMP identification and activity prediction.
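The two-stage design can be sketched as follows. This is a toy illustration of the control flow only: the stage models below are constant-score stand-ins, not the ESM-2 + CNN/BiLSTM/CBAM networks, and the label names are examples, not the paper's 13 activities.

```python
# Toy two-stage predictor in the spirit of deep-AMPpred: stage 1 is a
# binary AMP/non-AMP gate; stage 2 is a multilabel head that emits one
# independent (sigmoid-like) score per functional activity.

ACTIVITIES = ["antibacterial", "antifungal", "antiviral"]  # example labels

def two_stage_predict(features, stage1, stage2, labels=ACTIVITIES, thr=0.5):
    """Return the predicted activity labels, or [] if stage 1 rejects."""
    if stage1(features) < thr:       # stage 1: P(peptide is an AMP)
        return []
    probs = stage2(features)         # stage 2: one score per activity
    return [lab for lab, p in zip(labels, probs) if p >= thr]

# Stand-in models returning fixed scores, purely for demonstration.
is_amp = lambda f: 0.9
activity_scores = lambda f: [0.8, 0.3, 0.6]
print(two_stage_predict(None, is_amp, activity_scores))
# ['antibacterial', 'antiviral']
```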

PMID:39792442 | DOI:10.1021/acs.jcim.4c01913

Categories: Literature Watch

Addendum to: The effectiveness of deep learning model in differentiating benign and malignant pulmonary nodules on spiral CT

Fri, 2025-01-10 06:00

Technol Health Care. 2025;33(1):695. doi: 10.3233/THC-249001.

NO ABSTRACT

PMID:39792355 | DOI:10.3233/THC-249001

Categories: Literature Watch

Multimodal deep-learning model using pre-treatment endoscopic images and clinical information to predict efficacy of neoadjuvant chemotherapy in esophageal squamous cell carcinoma

Fri, 2025-01-10 06:00

Esophagus. 2025 Jan 10. doi: 10.1007/s10388-025-01106-x. Online ahead of print.

ABSTRACT

BACKGROUND: Neoadjuvant chemotherapy is standard for advanced esophageal squamous cell carcinoma, though often ineffective. Therefore, predicting the response to chemotherapy before treatment is desirable. However, there is currently no established method for predicting response to neoadjuvant chemotherapy. This study aims to build a deep-learning model to predict the response of esophageal squamous cell carcinoma to preoperative chemotherapy by utilizing multimodal data integrating esophageal endoscopic images and clinical information.

METHODS: A total of 170 patients with locally advanced esophageal squamous cell carcinoma were retrospectively studied, and endoscopic images and clinical information before neoadjuvant chemotherapy were collected. Endoscopic images alone and endoscopic images plus clinical information were each analyzed with a deep-learning model based on ResNet50. Clinical information alone was analyzed using logistic regression machine learning models, and the area under the receiver operating characteristic curve was calculated to compare the accuracy of each model. Gradient-weighted Class Activation Mapping was used on the endoscopic images to analyze the trend of the regions of interest in this model.

RESULTS: The areas under the curve for clinical information alone, endoscopy alone, and both combined were 0.64, 0.55, and 0.77, respectively. The model combining endoscopic images and clinical information performed significantly better than the other models. This model focused more on the tumor when trained with clinical information.

CONCLUSIONS: The deep-learning model developed suggests that gastrointestinal endoscopic imaging, in combination with other clinical information, has the potential to predict the efficacy of neoadjuvant chemotherapy in locally advanced esophageal squamous cell carcinoma before treatment.

PMID:39792350 | DOI:10.1007/s10388-025-01106-x

Categories: Literature Watch

GraphkmerDTA: integrating local sequence patterns and topological information for drug-target binding affinity prediction and applications in multi-target anti-Alzheimer's drug discovery

Fri, 2025-01-10 06:00

Mol Divers. 2025 Jan 10. doi: 10.1007/s11030-024-11065-7. Online ahead of print.

ABSTRACT

Identifying drug-target binding affinity (DTA) plays a critical role in early-stage drug discovery. Despite the availability of various existing methods, there are still two limitations. First, sequence-based methods often extract features from fixed-length protein sequences, requiring truncation or padding, which can result in information loss or the introduction of unwanted noise. Second, structure-based methods prioritize extracting topological information but struggle to effectively capture sequence features. To address these challenges, we propose a novel deep learning model named GraphkmerDTA, which integrates Kmer features with structural topology. Specifically, GraphkmerDTA utilizes graph neural networks to extract topological features from both molecules and proteins, while fully connected networks learn local sequence patterns from the Kmer features of proteins. Experimental results indicate that GraphkmerDTA outperforms existing methods on benchmark datasets. Furthermore, a case study on lung cancer demonstrates the effectiveness of GraphkmerDTA, as it successfully identifies seven known EGFR inhibitors from a screening library of over two thousand compounds. To further assess the practical utility of GraphkmerDTA, we integrated it with network pharmacology to investigate the mechanisms underlying the therapeutic effects of Lonicera japonica flower in treating Alzheimer's disease. Through this interdisciplinary approach, three potential compounds were identified and subsequently validated through molecular docking studies. In conclusion, we not only present a novel AI model for the DTA task but also demonstrate its practical application in drug discovery by integrating modern AI approaches with traditional drug discovery methodologies.
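The Kmer representation that motivates the model is worth making concrete: counting k-mer frequencies maps a protein sequence of any length to a fixed-size vector, sidestepping the truncation/padding problem the abstract describes. A minimal sketch (the k and alphabet choices are illustrative, not the paper's settings):

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=2, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Normalized k-mer frequency vector for a protein sequence.

    Any-length sequence maps to a len(alphabet)**k vector, so no
    truncation or padding is needed.
    """
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    vocab = ["".join(p) for p in product(alphabet, repeat=k)]
    return [counts.get(km, 0) / total for km in vocab]

# Tiny alphabet for readability; vector order is AA, AC, CA, CC.
print(kmer_features("ACACA", k=2, alphabet="AC"))  # [0.0, 0.5, 0.5, 0.0]
```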

PMID:39792322 | DOI:10.1007/s11030-024-11065-7

Categories: Literature Watch

Assessing the efficiency of pixel-based and object-based image classification using deep learning in an agricultural Mediterranean plain

Fri, 2025-01-10 06:00

Environ Monit Assess. 2025 Jan 10;197(2):155. doi: 10.1007/s10661-024-13431-2.

ABSTRACT

Recent advancements in satellite technology have greatly expanded data acquisition capabilities, making satellite imagery more accessible. Despite these strides, unlocking the full potential of satellite images necessitates efficient interpretation. Image classification, a widely adopted approach for extracting valuable information, has seen a surge in the application of deep learning methodologies due to their effectiveness. However, the success of deep learning is contingent upon the quality of the training data. In our study, we compared the efficiency of pixel-based and object-based classification of Sentinel-2 satellite imagery using the Deeplabv3 deep learning method. Image sharpness was enhanced through a high-pass filter, aiding in data visualization and preparation. Training samples were extracted from the enhanced image, and Deeplabv3 was trained to develop the classifiers. The majority zonal statistic method was implemented to assign class values to objects in the workflow. The accuracy of pixel-based and object-based classification was 83.1% and 83.5%, respectively, with corresponding kappa values of 0.786 and 0.791. These accuracies highlight the efficient performance of the object-based method when integrated with a deep learning classifier. These results can serve as a valuable reference for future studies, aiding in the improvement of accuracy while potentially saving time and effort. By evaluating the nuanced impact of pixel- and object-based classification on overall and class-specific accuracy, this research contributes to the ongoing refinement of satellite image interpretation techniques in environmental applications.
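The majority zonal statistic step mentioned above has a simple core: each object (image segment) takes the most common class among its pixels. A sketch of that idea, assuming a pixel-wise class map and an object-ID map of the same shape (not the study's actual code):

```python
import numpy as np
from collections import Counter

def majority_zonal(class_map, object_map):
    """Assign each object (segment) the majority class of its pixels,
    turning a pixel-wise prediction into an object-based classification."""
    out = np.empty_like(class_map)
    for obj_id in np.unique(object_map):
        mask = object_map == obj_id
        out[mask] = Counter(class_map[mask].tolist()).most_common(1)[0][0]
    return out

classes = np.array([[1, 1, 2], [2, 2, 2]])   # per-pixel classes
objects = np.array([[0, 0, 0], [1, 1, 1]])   # per-pixel object IDs
print(majority_zonal(classes, objects))       # object 0 -> 1, object 1 -> 2
```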

PMID:39792312 | DOI:10.1007/s10661-024-13431-2

Categories: Literature Watch

Application of deep learning model based on unenhanced chest CT for opportunistic screening of osteoporosis: a multicenter retrospective cohort study

Fri, 2025-01-10 06:00

Insights Imaging. 2025 Jan 10;16(1):10. doi: 10.1186/s13244-024-01817-2.

ABSTRACT

INTRODUCTION: A large number of middle-aged and elderly patients have an insufficient understanding of osteoporosis and its harm. This study aimed to establish and validate a convolutional neural network (CNN) model based on unenhanced chest computed tomography (CT) images of the vertebral body and skeletal muscle for opportunistic screening in patients with osteoporosis.

MATERIALS AND METHODS: Our team retrospectively collected clinical information from participants who underwent unenhanced chest CT and dual-energy X-ray absorptiometry (DXA) examinations between January 1, 2022, and December 31, 2022, at four hospitals. These participants were divided into a training set (n = 581), external test set 1 (n = 229), external test set 2 (n = 198), and external test set 3 (n = 118). Five CNN models were constructed based on chest CT images to screen patients for osteoporosis and were compared with the SMI model in predicting osteoporosis.

RESULTS: All CNN models performed well in predicting osteoporosis. The average F1 score of Densenet121 across the three external test sets was 0.865. The areas under the curve (AUC) of Densenet121 in external test sets 1, 2, and 3 were 0.827, 0.859, and 0.865, respectively. Furthermore, the Densenet121 model demonstrated notably superior performance compared to the SMI model in predicting osteoporosis.

CONCLUSIONS: The CNN model based on unenhanced chest CT vertebral and skeletal muscle images can opportunistically screen patients for osteoporosis. Clinicians can use the CNN model to intervene promptly in patients with osteoporosis and help prevent fragility fractures.

CRITICAL RELEVANCE STATEMENT: The CNN model based on unenhanced chest CT vertebral and skeletal muscle images can opportunistically screen patients for osteoporosis. Clinicians can use the CNN model to intervene promptly in patients with osteoporosis and help prevent fragility fractures.

KEY POINTS: The application of unenhanced chest CT is increasing. Most people do not consciously use DXA to screen themselves for osteoporosis. A deep learning model was constructed based on CT images from four institutions.

PMID:39792306 | DOI:10.1186/s13244-024-01817-2

Categories: Literature Watch

Deep learning-based lymph node metastasis status predicts prognosis from muscle-invasive bladder cancer histopathology

Fri, 2025-01-10 06:00

World J Urol. 2025 Jan 10;43(1):65. doi: 10.1007/s00345-025-05440-8.

ABSTRACT

PURPOSE: To develop a deep learning (DL) model based on primary tumor tissue to predict the lymph node metastasis (LNM) status of muscle invasive bladder cancer (MIBC), while validating the prognostic value of the predicted aiN score in MIBC patients.

METHODS: A total of 323 patients from The Cancer Genome Atlas (TCGA) were used as the training and internal validation set, with image features extracted using a visual encoder called UNI. We investigated the ability to predict LNM status while assessing the prognostic value of the aiN score. External validation was conducted on 139 patients from Renmin Hospital of Wuhan University (RHWU; Wuhan, China).

RESULTS: The DL model achieved area under the receiver operating characteristic curves of 0.79 (95% confidence interval [CI], 0.69-0.88) in the internal validation set for predicting LNM status, and 0.72 (95% CI, 0.68-0.75) in the external validation set. In multivariable Cox analysis, the model-predicted aiN score emerged as an independent predictor of survival for MIBC patients, with a hazard ratio of 1.608 (95% CI, 1.128-2.291; p = 0.008) in the TCGA cohort and 2.746 (95% CI, 1.486-5.076; p < 0.001) in the RHWU cohort. Additionally, the aiN score maintained prognostic value across different subgroups.

CONCLUSION: In this study, DL-based image analysis showed promising results by directly extracting relevant prognostic information from H&E-stained histology to predict the LNM status of MIBC patients. It might be used for personalized management of MIBC patients following prospective validation in the future.

PMID:39792275 | DOI:10.1007/s00345-025-05440-8

Categories: Literature Watch

Deep learning-based image domain reconstruction enhances image quality and pulmonary nodule detection in ultralow-dose CT with adaptive statistical iterative reconstruction-V

Fri, 2025-01-10 06:00

Eur Radiol. 2025 Jan 10. doi: 10.1007/s00330-024-11317-y. Online ahead of print.

ABSTRACT

OBJECTIVES: To evaluate the image quality and lung nodule detectability of ultralow-dose CT (ULDCT) with adaptive statistical iterative reconstruction-V (ASiR-V) post-processed using a deep learning image reconstruction (DLIR)-based image domain compared to low-dose CT (LDCT) and ULDCT without DLIR.

MATERIALS AND METHODS: A total of 210 patients undergoing lung cancer screening underwent LDCT (mean ± SD, 0.81 ± 0.28 mSv) and ULDCT (0.17 ± 0.03 mSv) scans. ULDCT images were reconstructed with ASiR-V (ULDCT-ASiR-V) and post-processed using DLIR (ULDCT-DLIR). The quality of the three image sets was analyzed. Three radiologists detected and measured pulmonary nodules on all CT images, with LDCT results serving as references. Nodule conspicuity was assessed using a five-point Likert scale, followed by further statistical analyses.

RESULTS: A total of 463 nodules were detected using LDCT. The image noise of ULDCT-DLIR decreased by 60% compared to that of ULDCT-ASiR-V and was lower than that of LDCT (p < 0.001). The subjective image quality scores for ULDCT-DLIR (4.4 [4.1, 4.6]) were also higher than those for ULDCT-ASiR-V (3.6 [3.1, 3.9]) (p < 0.001). The overall nodule detection rates for ULDCT-ASiR-V and ULDCT-DLIR were 82.1% (380/463) and 87.0% (403/463), respectively (p < 0.001). The percentage of diameter differences > 1 mm was 2.9% (ULDCT-ASiR-V vs. LDCT) and 0.5% (ULDCT-DLIR vs. LDCT) (p = 0.009). Scores of nodule imaging sharpness on ULDCT-DLIR (4.0 ± 0.68) were significantly higher than those on ULDCT-ASiR-V (3.2 ± 0.50) (p < 0.001).

CONCLUSION: DLIR-based image domain improves image quality, nodule detection rate, nodule imaging sharpness, and nodule measurement accuracy of ASiR-V on ULDCT.

KEY POINTS: Question Deep learning post-processing is simple and cheap compared with raw data processing, but its performance is not clear on ultralow-dose CT. Findings Deep learning post-processing enhanced image quality and improved the nodule detection rate and accuracy of nodule measurement of ultralow-dose CT. Clinical relevance Deep learning post-processing improves the practicability of ultralow-dose CT and makes it possible for patients with less radiation exposure during lung cancer screening.

PMID:39792163 | DOI:10.1007/s00330-024-11317-y

Categories: Literature Watch

Automated classification of coronary LEsions fRom coronary computed Tomography angiography scans with an updated deep learning model: ALERT study

Fri, 2025-01-10 06:00

Eur Radiol. 2025 Jan 10. doi: 10.1007/s00330-024-11308-z. Online ahead of print.

ABSTRACT

OBJECTIVES: The use of deep learning models for quantitative measurements on coronary computed tomography angiography (CCTA) may reduce inter-reader variability and increase efficiency in clinical reporting. This study aimed to investigate the diagnostic performance of a recently updated deep learning model (CorEx-2.0) for quantifying coronary stenosis, compared separately with two expert CCTA readers as references.

METHODS: This single-center retrospective study included 50 patients who underwent CCTA to rule out obstructive coronary artery disease between 2017 and 2022. Two expert CCTA readers and CorEx-2.0 independently assessed all 150 vessels using Coronary Artery Disease-Reporting and Data System (CAD-RADS). Inter-reader agreement analysis and diagnostic performance of CorEx-2.0, compared with each expert reader as references, were evaluated using percent agreement, Cohen's kappa for the binary CAD-RADS classification (CAD-RADS 0-3 versus 4-5) at patient level, and linearly weighted kappa for the 6-group CAD-RADS classification at vessel level.

RESULTS: Overall, 50 patients and 150 vessels were evaluated. Inter-reader agreement using the binary classification at patient level was 91.8% (45/49) with a Cohen's kappa of 0.80. For the 6-group classification at vessel level, inter-reader agreement was 67.6% (100/148) with a linearly weighted kappa of 0.77. CorEx-2.0 showed 100% sensitivity for detecting CAD-RADS ≥ 4 and kappa values of 0.86 versus both readers using the binary classification at patient level. For the 6-group classification at vessel level, CorEx-2.0 demonstrated weighted kappa values of 0.71 versus reader 1 and 0.73 versus reader 2.
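The agreement statistics reported here can be computed from a confusion matrix of the two raters' labels; linear weights penalize disagreements in proportion to how many CAD-RADS grades apart they are. A small self-contained sketch (standard formula, not code from the study):

```python
import numpy as np

def cohens_kappa(a, b, n_cat, weighted=False):
    """Cohen's kappa (unweighted or linearly weighted) for two raters.

    a, b: integer category labels per case; n_cat: number of categories.
    Linear weights penalize a disagreement by |i - j| / (n_cat - 1).
    """
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        conf[i, j] += 1                      # build the confusion matrix
    conf /= conf.sum()
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1) if weighted else (i != j).astype(float)
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))  # chance agreement
    return 1 - (w * conf).sum() / (w * expected).sum()

# Perfect agreement gives kappa = 1; chance-level agreement gives 0.
print(cohens_kappa([0, 1, 2, 1], [0, 1, 2, 1], n_cat=3))  # 1.0
```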

CONCLUSION: CorEx-2.0 identified all patients with severe stenosis (CAD-RADS ≥ 4) compared with expert readers and approached expert reader performance at vessel level (weighted kappa > 0.70).

KEY POINTS: Question Can deep learning models improve objectivity in coronary stenosis grading and reporting as coronary CT angiography (CTA) workloads rise? Findings The deep learning model (CorEx-2.0) identified all patients with severe stenoses when compared with expert readers and approached expert reader performance at vessel level. Clinical relevance CorEx-2.0 is a reliable tool for identifying patients with severe stenoses (≥ 70%), underscoring the potential of using this deep learning model to prioritize coronary CTA reading by flagging patients at risk of severe obstructive coronary artery disease.

PMID:39792162 | DOI:10.1007/s00330-024-11308-z

Categories: Literature Watch

CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning

Fri, 2025-01-10 06:00

Med Image Comput Comput Assist Interv. 2024 Oct;15012:465-475. doi: 10.1007/978-3-031-72390-2_44. Epub 2024 Oct 23.

ABSTRACT

Recent advancements in Contrastive Language-Image Pre-training (CLIP) [21] have demonstrated notable success in self-supervised representation learning across various tasks. However, existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poorly suited to medical applications, in which large datasets are not always available. Meanwhile, the language model prompts are mainly manually derived from labels tied to images, potentially overlooking the richness of information within training samples. We introduce a novel language-image Contrastive Learning method with an Efficient large language model and prompt Fine-Tuning (CLEFT) that harnesses the strengths of extensive pre-trained language and visual models. Furthermore, we present an efficient strategy for learning context-based prompts that mitigates the gap between informative clinical diagnostic data and simple class labels. Our method demonstrates state-of-the-art performance on multiple chest X-ray and mammography datasets compared with various baselines. The proposed parameter-efficient framework can reduce the total trainable model size by 39% and reduce the trainable language model to only 4% compared with the current BERT encoder.

PMID:39791126 | PMC:PMC11709740 | DOI:10.1007/978-3-031-72390-2_44

Categories: Literature Watch

Differentiating Cystic Lesions in the Sellar Region of the Brain Using Artificial Intelligence and Machine Learning for Early Diagnosis: A Prospective Review of the Novel Diagnostic Modalities

Fri, 2025-01-10 06:00

Cureus. 2024 Dec 10;16(12):e75476. doi: 10.7759/cureus.75476. eCollection 2024 Dec.

ABSTRACT

This paper investigates the potential of artificial intelligence (AI) and machine learning (ML) to enhance the differentiation of cystic lesions in the sellar region, such as pituitary adenomas, Rathke cleft cysts (RCCs), and craniopharyngiomas (CPs), through the use of advanced neuroimaging techniques, particularly magnetic resonance imaging (MRI). The goal is to explore how AI-driven models, including convolutional neural networks (CNNs), deep learning, and ensemble methods, can overcome the limitations of traditional diagnostic approaches, providing more accurate and early differentiation of these lesions. The review incorporates findings from critical studies, such as using the Open Access Series of Imaging Studies (OASIS) dataset (Kaggle, San Francisco, USA) for MRI-based brain research, highlighting the significance of statistical rigor and automated segmentation in developing reliable AI models. By drawing on these insights and addressing the challenges posed by small, single-institutional datasets, the paper aims to demonstrate how AI applications can improve diagnostic precision, enhance clinical decision-making, and ultimately lead to better patient outcomes in managing sellar region cystic lesions.

PMID:39791061 | PMC:PMC11717160 | DOI:10.7759/cureus.75476

Categories: Literature Watch

Impact of cardiovascular magnetic resonance in single ventricle physiology: a narrative review

Fri, 2025-01-10 06:00

Cardiovasc Diagn Ther. 2024 Dec 31;14(6):1161-1175. doi: 10.21037/cdt-24-409. Epub 2024 Dec 19.

ABSTRACT

BACKGROUND AND OBJECTIVE: Cardiovascular magnetic resonance (CMR) is a routine cross-sectional imaging modality in adults with congenital heart disease. Developing CMR techniques and the knowledge that CMR is well suited to assess long-term complications and to provide prognostic information for single ventricle (SV) patients make CMR the ideal assessment tool for this patient cohort. Nevertheless, many of the techniques have not yet been incorporated into day-to-day practice. The aim of this review is to provide a comprehensive overview of CMR applications in SV patients together with recent scientific findings.

METHODS: Articles from 2009 to August 2024 retrieved from PubMed on CMR in SV patients were included. Case reports and non-English literature were excluded.

KEY CONTENT AND FINDINGS: CMR is essential for serial follow-up of SV patients and CMR-derived standard markers can improve patient management and prognosis assessment. Advanced CMR techniques likely will enhance our understanding of Fontan hemodynamics and are promising tools for a comprehensive patient evaluation and care.

CONCLUSIONS: There is increasing research that shows the advantages of CMR in Fontan patients. However, further research about the prognostic role of CMR in older Fontan patients and how new methods such as modeling and deep learning pipelines can be clinically implemented is warranted.

PMID:39790200 | PMC:PMC11707479 | DOI:10.21037/cdt-24-409

Categories: Literature Watch

Evaluating the effect of noise reduction strategies in CT perfusion imaging for predicting infarct core with deep learning

Fri, 2025-01-10 06:00

Neuroradiol J. 2025 Jan 9:19714009251313517. doi: 10.1177/19714009251313517. Online ahead of print.

ABSTRACT

This study evaluates the efficacy of deep learning models in identifying infarct tissue on computed tomography perfusion (CTP) scans from patients with acute ischemic stroke due to large vessel occlusion, specifically addressing the potential influence of varying noise reduction techniques implemented by different vendors. We analyzed CTP scans from 60 patients who underwent mechanical thrombectomy achieving a modified thrombolysis in cerebral infarction (mTICI) score of 2c or 3, ensuring minimal changes in the infarct core between the initial CTP and follow-up MR imaging. Noise reduction techniques, including principal component analysis (PCA), wavelet, non-local means (NLM), and a no-denoising approach, were employed to create hemodynamic parameter maps. Infarct regions identified on follow-up diffusion-weighted imaging (DWI) within 48 hours were co-registered with initial CTP scans and refined with ADC maps to serve as ground truth for training a data-augmented U-Net model. The performance of this convolutional neural network (CNN) was assessed using Dice coefficients across different denoising methods and infarct sizes, visualized through box plots for each parameter map. Our findings show no significant differences in model accuracy between PCA and other denoising methods, with minimal variation in Dice scores across techniques. This study confirms that CNNs are adaptable and capable of handling diverse processing schemas, indicating their potential to streamline diagnostic processes and effectively manage CTP input data quality variations.
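Of the denoising techniques compared, PCA is the easiest to sketch: stack the voxel time-attenuation curves into a matrix, keep only the top principal components, and discard the low-variance components that are mostly noise. The toy data below (a rank-1 signal plus Gaussian noise) is an illustration, not the study's CTP data:

```python
import numpy as np

def pca_denoise(tac, n_components=2):
    """Denoise voxel time-attenuation curves by keeping the top
    principal components -- a sketch of PCA noise reduction for CTP.

    tac: (n_voxels, n_timepoints) array.
    """
    mean = tac.mean(axis=0)
    centered = tac - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s[n_components:] = 0  # zero out low-variance (noise) components
    return u @ np.diag(s) @ vt + mean

rng = np.random.default_rng(0)
# Toy curves: per-voxel amplitude times a shared time course, plus noise.
signal = np.outer(np.linspace(0.5, 1.5, 50), np.sin(np.linspace(0, 3, 20)))
noisy = signal + 0.1 * rng.standard_normal(signal.shape)
clean = pca_denoise(noisy, n_components=1)
print(np.abs(clean - signal).mean() < np.abs(noisy - signal).mean())  # True
```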

PMID:39789894 | DOI:10.1177/19714009251313517

Categories: Literature Watch

LOGOWheat: deep learning-based prediction of regulatory effects for noncoding variants in wheats

Fri, 2025-01-10 06:00

Brief Bioinform. 2024 Nov 22;26(1):bbae705. doi: 10.1093/bib/bbae705.

ABSTRACT

Identifying the regulatory effects of noncoding variants presents a significant challenge. Recently, the accumulation of epigenomic profiling data in wheat has provided an opportunity to model the functional impacts of these variants. In this study, we introduce Language of Genome for Wheat (LOGOWheat), a deep learning-based tool designed to predict the regulatory effects of noncoding variants in wheat. LOGOWheat initially employs a self-attention-based, contextualized pretrained language model to acquire bidirectional representations of the unlabeled wheat reference genome. Epigenomic profiling data are also collected and utilized to fine-tune the model, enabling it to discern the regulatory code inherent in genomic sequences. The test results suggest that LOGOWheat is highly effective in predicting multiple chromatin features, achieving an average area under the receiver operating characteristic (AUROC) of 0.8531 and an average area under the precision-recall curve (AUPRC) of 0.7633. Two case studies illustrate and demonstrate the main functions provided by LOGOWheat: assigning scores and prioritizing causal variants within a given variant set and constructing a saturated mutagenesis map in silico to discover high-impact sites or functional motifs in a given sequence. Finally, we propose the concept of extracting potential functional variations from the wheat population by integrating evolutionary conservation information. LOGOWheat is available at http://logowheat.cn/.
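The in silico saturated mutagenesis map described above can be sketched generically: score every single-nucleotide substitution of a sequence and record the change relative to the reference score. The GC-content scorer below is a stand-in for the trained model's chromatin-feature score, purely for illustration:

```python
import numpy as np

def saturated_mutagenesis(seq, score_fn, alphabet="ACGT"):
    """Score every single-nucleotide substitution of `seq`.

    Returns a (len(seq), len(alphabet)) matrix of score changes versus
    the reference sequence; `score_fn` stands in for a trained model.
    """
    ref = score_fn(seq)
    effects = np.zeros((len(seq), len(alphabet)))
    for i in range(len(seq)):
        for j, base in enumerate(alphabet):
            if base != seq[i]:
                mut = seq[:i] + base + seq[i + 1:]
                effects[i, j] = score_fn(mut) - ref
    return effects

# Toy scorer: GC content, so A/T -> G/C substitutions raise the score.
score = lambda s: (s.count("G") + s.count("C")) / len(s)
eff = saturated_mutagenesis("ATGC", score)
print(eff[0])  # position 0 (A): columns A, C, G, T -> [0. 0.25 0.25 0.]
```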

PMID:39789857 | DOI:10.1093/bib/bbae705

Categories: Literature Watch

AutoGP: An Intelligent Breeding Platform for Enhancing Maize Genomic Selection

Fri, 2025-01-10 06:00

Plant Commun. 2025 Jan 8:101240. doi: 10.1016/j.xplc.2025.101240. Online ahead of print.

ABSTRACT

In the face of climate change and the growing global population, there is an urgent need to accelerate the development of high-yielding crop varieties. To this end, vast amounts of genotype-to-phenotype data have been collected, and many machine learning (ML) models have been developed to predict phenotype from a given genotype. However, the requirement for high densities of single-nucleotide polymorphisms (SNPs) and the labor-intensive collection of phenotypic data are hampering the use of these models to advance breeding. Furthermore, recently developed genomic selection (GS) models such as deep learning (DL) are complicated and inconvenient for breeders to navigate and optimize within their breeding programs. Here, we present the development of an intelligent breeding platform named AutoGP (http://autogp.hzau.edu.cn), which integrates genotype extraction, phenotypic extraction, and GS models of genotype-to-phenotype within a user-friendly web interface. AutoGP has three main advantages over previously developed platforms: 1) we designed an efficient sequencing chip to identify high-quality, high-confidence SNPs throughout gene regulatory networks; 2) we developed a complete workflow for plant phenotypic extraction (such as plant height and leaf count) from smartphone-captured video; 3) we provided a broad model pool, allowing users to select from five ML models (SVM, XGBoost, GBDT, MLP, and RF) and four commonly used DL models (DeepGS, DLGWAS, DNNGP, and SoyDNGP). For the convenience of breeders, we employ datasets from the maize (Zea mays) CUBIC population as a case study to demonstrate the usefulness of AutoGP. We show that our genotype chips can effectively extract high-quality SNPs associated with the days to tasseling and plant height. The models present reliable predictive accuracy on different populations, which can provide effective guidance for breeders. Overall, AutoGP offers a practical solution to streamline the breeding process, enabling breeders to achieve more efficient and accurate genomic selection.

PMID:39789848 | DOI:10.1016/j.xplc.2025.101240

Categories: Literature Watch

Two decades of advances in sequence-based prediction of MoRFs, disorder-to-order transitioning binding regions

Fri, 2025-01-10 06:00

Expert Rev Proteomics. 2025 Jan 9. doi: 10.1080/14789450.2025.2451715. Online ahead of print.

ABSTRACT

INTRODUCTION: Molecular recognition features (MoRFs) are regions in protein sequences that undergo induced folding upon binding partner molecules. MoRFs are common in nature and can be predicted from sequences based on their distinctive sequence signatures.

AREAS COVERED: We overview twenty years of progress in the sequence-based prediction of MoRFs, which has resulted in the development of 25 predictors of MoRFs that interact with proteins, peptides, and lipids. These methods range from simple discriminant analysis to sophisticated deep transformer networks that use protein language models. They generate relatively accurate predictions, as evidenced by the results of a recently published community-driven assessment.

EXPERT OPINION: MoRF prediction is a mature field of research that is poised to continue at a steady pace in the foreseeable future. We anticipate further expansion of the scope of MoRF predictions to additional partner molecules, such as nucleic acids, and continued use of recent machine learning advances. Other future efforts should concentrate on improving the availability of MoRF predictions by releasing, maintaining, and popularizing web servers and by depositing MoRF predictions in large databases of protein structure and function predictions. Furthermore, accurate MoRF predictions should be coupled with equally accurate prediction and modeling of the resulting structures of complexes.

PMID:39789785 | DOI:10.1080/14789450.2025.2451715

Categories: Literature Watch

Deep learning MRI models for the differential diagnosis of tumefactive demyelination versus IDH-wildtype glioblastoma

Thu, 2025-01-09 06:00

AJNR Am J Neuroradiol. 2025 Jan 9:ajnr.A8645. doi: 10.3174/ajnr.A8645. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Diagnosis of tumefactive demyelination can be challenging. The diagnosis of indeterminate brain lesions on MRI often requires tissue confirmation via brain biopsy. Noninvasive methods for accurate diagnosis of tumor and non-tumor etiologies allow for tailored therapy, optimal tumor control, and a reduced risk of iatrogenic morbidity and mortality. Tumefactive demyelination has imaging features that mimic isocitrate dehydrogenase-wildtype glioblastoma (IDHwt GBM). We hypothesized that deep learning applied to postcontrast T1-weighted (T1C) and T2-weighted (T2) MRI images can discriminate tumefactive demyelination from IDHwt GBM.

MATERIALS AND METHODS: Patients with tumefactive demyelination (n=144) and IDHwt GBM (n=455) were identified by clinical registries. A 3D DenseNet121 architecture was used to develop models to differentiate tumefactive demyelination and IDHwt GBM using both T1C and T2 MRI images, as well as only T1C and only T2 images. A three-stage design was used: (i) model development and internal validation via five-fold cross validation using a sex-, age-, and MRI technology-matched set of tumefactive demyelination and IDHwt GBM, (ii) validation of model specificity on independent IDHwt GBM, and (iii) prospective validation on tumefactive demyelination and IDHwt GBM. Stratified AUCs were used to evaluate model performance stratified by sex, age at diagnosis, MRI scanner strength, and MRI acquisition.

RESULTS: The deep learning model developed using both T1C and T2 images had a prospective validation area under the receiver operating characteristic curve (AUC) of 0.88 (95% CI: 0.82-0.95). In the prospective validation stage, a model score threshold of 0.28 resulted in 91% sensitivity (correctly classifying tumefactive demyelination) and 80% specificity (correctly classifying IDHwt GBM). Stratified AUCs demonstrated that model performance may be improved if thresholds were chosen stratified by age and MRI acquisition.
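The threshold-based classification described above can be sketched as follows (hypothetical scores and labels; 0.28 is the threshold reported, but the data here are illustrative):

```python
import numpy as np

def sens_spec_at_threshold(scores, labels, threshold):
    """Sensitivity and specificity when scores >= threshold are called positive.

    labels: 1 = tumefactive demyelination (positive class), 0 = IDHwt GBM.
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    calls = scores >= threshold
    tp = np.sum(calls & (labels == 1))
    fn = np.sum(~calls & (labels == 1))
    tn = np.sum(~calls & (labels == 0))
    fp = np.sum(calls & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold and recomputing these two rates traces out the ROC curve whose area is the AUC reported above.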

CONCLUSIONS: MRI images can provide the basis for applying deep learning models to aid in the differential diagnosis of brain lesions. Further validation is needed to evaluate how well the model generalizes across institutions, patient populations, and technology, and to evaluate optimal thresholds for classification. Next steps also should incorporate additional tumor etiologies such as CNS lymphoma and brain metastases.

ABBREVIATIONS: AUC = area under the receiver operating characteristic curve; CNS = central nervous system; CNSIDD = central nervous system inflammatory demyelinating disease; FeTS = federated tumor segmentation; GBM = glioblastoma; IDHwt = isocitrate dehydrogenase wildtype; IHC = immunohistochemistry; MOGAD = myelin oligodendrocyte glycoprotein antibody associated disorder; MS = multiple sclerosis; NMOSD = neuromyelitis optica spectrum disorder; wt = wildtype.

PMID:39788628 | DOI:10.3174/ajnr.A8645

Categories: Literature Watch

Computational pathology applied to clinical colorectal cancer cohorts identifies immune and endothelial cell spatial patterns predictive of outcome

Thu, 2025-01-09 06:00

J Pathol. 2025 Feb;265(2):198-210. doi: 10.1002/path.6378.

ABSTRACT

Colorectal cancer (CRC) is a histologically heterogeneous disease with variable clinical outcome. The role the tumour microenvironment (TME) plays in determining tumour progression is complex and not fully understood. To improve our understanding, it is critical that the TME is studied systematically within clinically annotated patient cohorts with long-term follow-up. Here we studied the TME in three clinical cohorts of metastatic CRC with diverse molecular subtype and treatment history. The MISSONI cohort included cases with microsatellite instability that received immunotherapy (n = 59, 24 months median follow-up). The BRAF cohort included BRAF V600E mutant microsatellite stable (MSS) cancers (n = 141, 24 months median follow-up). The VALENTINO cohort included RAS/RAF WT MSS cases who received chemotherapy and anti-EGFR therapy (n = 175, 32 months median follow-up). Using a deep learning cell classifier, trained on >38,000 pathologist annotations to detect eight cell types within H&E-stained sections of CRC, we quantified the spatial tissue organisation and colocalisation of cell types across these cohorts. We found that the ratio of infiltrating endothelial cells to cancer cells, a possible marker of vascular invasion, was an independent predictor of progression-free survival (PFS) in the BRAF+MISSONI cohort (p = 0.033, HR = 1.44, CI = 1.029-2.01). In the VALENTINO cohort, this pattern was also an independent PFS predictor in TP53 mutant patients (p = 0.009, HR = 0.59, CI = 0.40-0.88). Tumour-infiltrating lymphocytes were an independent predictor of PFS in BRAF+MISSONI (p = 0.016, HR = 0.36, CI = 0.153-0.83). Elevated tumour-infiltrating macrophages were predictive of improved PFS in the MISSONI cohort (p = 0.031). We validated our cell classification using highly multiplexed immunofluorescence for 17 markers applied to the same sections that were analysed by the classifier (n = 26 cases). These findings uncovered important microenvironmental factors that underpin treatment response across and within CRC molecular subtypes, while providing an atlas of the distribution of 180 million cells in 375 clinically annotated CRC patients. © 2025 The Author(s). The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
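The endothelial-to-cancer-cell ratio feature can be sketched from per-cell class labels (hypothetical label names; the paper's classifier distinguishes eight cell types):

```python
from collections import Counter

def endothelial_cancer_ratio(cell_types):
    """Ratio of endothelial to cancer cells from a list of per-cell class labels."""
    counts = Counter(cell_types)
    cancer = counts.get("cancer", 0)
    if cancer == 0:
        return float("nan")  # ratio undefined without cancer cells
    return counts.get("endothelial", 0) / cancer
```

Computed per patient, such a scalar feature can then enter a survival model (e.g. Cox regression) as a candidate predictor of PFS, as done in the study.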

PMID:39788558 | DOI:10.1002/path.6378

Categories: Literature Watch
