Deep learning

SFE-Net: Spatial-Frequency Enhancement Network for robust nuclei segmentation in histopathology images

Wed, 2024-03-06 06:00

Comput Biol Med. 2024 Feb 22;171:108131. doi: 10.1016/j.compbiomed.2024.108131. Online ahead of print.

ABSTRACT

Morphological features of individual nuclei serve as a dependable foundation for pathologists in making accurate diagnoses. Existing methods that rely on spatial information for feature extraction have achieved commendable results in nuclei segmentation tasks. However, these approaches are not sufficient to extract edge information of nuclei with small sizes and blurred outlines. Moreover, the lack of attention to the interior of the nuclei leads to significant internal inconsistencies. To address these challenges, we introduce a novel Spatial-Frequency Enhancement Network (SFE-Net) to incorporate spatial-frequency features and promote intra-nuclei consistency for robust nuclei segmentation. Specifically, SFE-Net incorporates a distinctive Spatial-Frequency Feature Extraction module and a Spatial-Guided Feature Enhancement module, which are designed to preserve spatial-frequency information and enhance feature representation respectively, to achieve comprehensive extraction of edge information. Furthermore, we introduce the Label-Guided Distillation method, which utilizes semantic features to guide the segmentation network in strengthening boundary constraints and learning the intra-nuclei consistency of individual nuclei, to improve the robustness of nuclei segmentation. Extensive experiments on three publicly available histopathology image datasets (MoNuSeg, TNBC and CryoNuSeg) demonstrate the superiority of our proposed method, which achieves 79.23%, 81.96% and 73.26% Aggregated Jaccard Index, respectively. The proposed model is available at https://github.com/jinshachen/SFE-Net.
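The SFE-Net implementation itself is available at the repository above; as a rough illustration of the general idea of combining spatial convolutional features with frequency-domain features, the following PyTorch sketch fuses a standard convolutional branch with a branch operating on the log-magnitude 2D FFT of the input. Module names, channel sizes and the fusion scheme are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch (not the authors' SFE-Net): fuse spatial convolutional
# features with frequency-domain features obtained via a 2D FFT.
import torch
import torch.nn as nn


class SpatialFrequencyBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU())
        # The frequency branch sees the log-magnitude spectrum of the input.
        self.frequency = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU())
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spat = self.spatial(x)
        # 2D FFT per channel; the log-magnitude keeps edge-related high
        # frequencies in a numerically stable range.
        freq = torch.fft.fft2(x, norm="ortho")
        freq = torch.log1p(torch.abs(freq))
        freq = self.frequency(freq)
        return self.fuse(torch.cat([spat, freq], dim=1))


if __name__ == "__main__":
    block = SpatialFrequencyBlock(3, 32)
    patch = torch.randn(2, 3, 256, 256)   # e.g. H&E image patches
    print(block(patch).shape)             # torch.Size([2, 32, 256, 256])
```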

PMID:38447498 | DOI:10.1016/j.compbiomed.2024.108131

Categories: Literature Watch

Magnetic resonance imaging-based deep learning imaging biomarker for predicting functional outcomes after acute ischemic stroke

Wed, 2024-03-06 06:00

Eur J Radiol. 2024 Mar 2;174:111405. doi: 10.1016/j.ejrad.2024.111405. Online ahead of print.

ABSTRACT

PURPOSE: Clinical risk scores are essential for predicting outcomes in stroke patients. The advancements in deep learning (DL) techniques provide opportunities to develop prediction applications using magnetic resonance (MR) images. We aimed to develop an MR-based DL imaging biomarker for predicting outcomes in acute ischemic stroke (AIS) and evaluate its additional benefit to current risk scores.

METHOD: This study included 3338 AIS patients. We trained a DL model using deep neural network architectures on MR images and radiomics to predict poor functional outcomes at three months post-stroke. The DL model generated a DL score, which served as the DL imaging biomarker. We compared the predictive performance of this biomarker to five risk scores on a holdout test set. Additionally, we assessed whether incorporating the imaging biomarker into the risk scores improved the predictive performance.

RESULTS: The DL imaging biomarker achieved an area under the receiver operating characteristic curve (AUC) of 0.788. The AUCs of the five studied risk scores were 0.789, 0.793, 0.804, 0.810, and 0.826, respectively. The imaging biomarker's predictive performance was comparable to four of the risk scores but inferior to one (p = 0.038). Adding the imaging biomarker to the risk scores improved the AUCs (p-values) to 0.831 (0.003), 0.825 (0.001), 0.834 (0.003), 0.836 (0.003), and 0.839 (0.177), respectively. The net reclassification improvement and integrated discrimination improvement indices also showed significant improvements (all p < 0.001).

CONCLUSIONS: Using DL techniques to create an MR-based imaging biomarker is feasible and enhances the predictive ability of current risk scores.
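As a generic illustration (not the study's code) of how a DL-derived imaging score can be added to an existing clinical risk score and its incremental value assessed, the following sketch fits logistic regression models with and without the DL score on synthetic data and compares their AUCs; variable names and data are assumptions.

```python
# Synthetic example: incremental value of a DL imaging score over a risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
risk_score = rng.normal(size=n)                    # stand-in for a clinical risk score
dl_score = 0.6 * risk_score + rng.normal(size=n)   # correlated DL imaging biomarker
logit = 0.8 * risk_score + 0.7 * dl_score
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # poor functional outcome (0/1)

X_risk = risk_score.reshape(-1, 1)
X_both = np.column_stack([risk_score, dl_score])
Xr_tr, Xr_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    X_risk, X_both, y, test_size=0.3, random_state=0)

auc_risk = roc_auc_score(y_te, LogisticRegression().fit(Xr_tr, y_tr).predict_proba(Xr_te)[:, 1])
auc_both = roc_auc_score(y_te, LogisticRegression().fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
print(f"risk score alone: AUC={auc_risk:.3f}; risk score + DL biomarker: AUC={auc_both:.3f}")
```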

PMID:38447430 | DOI:10.1016/j.ejrad.2024.111405

Categories: Literature Watch

Deep-Learning-Assisted Sensor with Multiple Perception Capabilities for an Intelligent Driver Assistance Monitoring System

Wed, 2024-03-06 06:00

ACS Appl Mater Interfaces. 2024 Mar 6. doi: 10.1021/acsami.3c15956. Online ahead of print.

ABSTRACT

Driver assistance systems can help drivers achieve better control of their vehicles while driving and reduce driver fatigue and errors. However, the current driver assistance devices have a complex structure and severely violate the privacy of drivers, hindering the development of driver assistance technology. To address these limitations, this article proposes an intelligent driver assistance monitoring system (IDAMS), which combines a Kresling origami structure-based triboelectric sensor (KOS-TS) and a convolutional neural network (CNN)-based data analysis. For different driving behaviors, the output signals of the KOS-TSs contain various features, such as a driver's pressing force, pressing time, and sensor triggering sequence. This study develops a multiscale CNN that employs different pooling methods to process KOS-TS data and analyze temporal information. The proposed IDAMS is verified by driver identification experiments, and the results show that the accuracy of the IDAMS in discriminating eight different users is improved from 96.25% to 99.38%. In addition, the results indicate that IDAMS can successfully monitor driving behaviors and can accurately distinguish between different driving behaviors. Finally, the proposed IDAMS has excellent hands-off detection (HOD), identification, and driving behavior monitoring capabilities and shows broad potential for application in the fields of safety warning, personalization, and human-computer interaction.
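As a hedged sketch of the multiscale idea described above (not the authors' network), the following PyTorch model processes multichannel triboelectric sensor signals with convolutional branches of different kernel sizes and combines max- and average-pooled features before classifying among eight drivers; channel counts and kernel sizes are illustrative.

```python
# Illustrative multiscale 1D CNN with mixed pooling for sensor time series.
import torch
import torch.nn as nn


class MultiScaleCNN(nn.Module):
    def __init__(self, n_channels: int = 4, n_classes: int = 8):
        super().__init__()
        def branch(kernel):
            return nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel, padding=kernel // 2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel, padding=kernel // 2), nn.ReLU())
        self.short = branch(3)    # fine temporal detail (e.g. press onsets)
        self.long = branch(15)    # slower envelope (e.g. press duration)
        self.max_pool = nn.AdaptiveMaxPool1d(1)
        self.avg_pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(64 * 4, n_classes)

    def forward(self, x):         # x: (batch, sensors, time)
        feats = []
        for conv in (self.short, self.long):
            h = conv(x)
            feats += [self.max_pool(h).squeeze(-1), self.avg_pool(h).squeeze(-1)]
        return self.head(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = MultiScaleCNN()
    signal = torch.randn(2, 4, 500)   # 2 samples, 4 KOS-TS channels, 500 time steps
    print(model(signal).shape)        # torch.Size([2, 8])
```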

PMID:38447140 | DOI:10.1021/acsami.3c15956

Categories: Literature Watch

Transformers enable accurate prediction of acute and chronic chemical toxicity in aquatic organisms

Wed, 2024-03-06 06:00

Sci Adv. 2024 Mar 8;10(10):eadk6669. doi: 10.1126/sciadv.adk6669. Epub 2024 Mar 6.

ABSTRACT

Environmental hazard assessments rely on toxicity data that cover multiple organism groups. Generating experimental toxicity data is, however, resource-intensive and time-consuming. Computational methods are fast and cost-efficient alternatives, but their low accuracy and narrow applicability domains have made adoption slow. Here, we present an AI-based model for predicting chemical toxicity. The model uses transformers to capture toxicity-specific features directly from the chemical structures and deep neural networks to predict effect concentrations. The model showed high predictive performance for all tested organism groups (algae, aquatic invertebrates and fish) and has, in comparison to commonly used QSAR methods, a larger applicability domain and a considerably lower error. When the model was trained on data with multiple effect concentrations (EC50/EC10), the performance was further improved. We conclude that deep learning and transformers have the potential to markedly advance computational prediction of chemical toxicity.
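A minimal sketch of the described approach (not the published model): a character-level transformer encoder over tokenized SMILES strings followed by a small regression head for effect concentrations. Vocabulary size, dimensions and the mean-pooling scheme are assumptions.

```python
# Illustrative SMILES transformer regressor for effect concentrations.
import torch
import torch.nn as nn


class SmilesToxicityRegressor(nn.Module):
    def __init__(self, vocab_size: int = 64, d_model: int = 128, max_len: int = 200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tokens):                 # tokens: (batch, seq_len) integer ids
        mask = tokens == 0                     # padding positions
        h = self.embed(tokens) + self.pos[:, : tokens.size(1)]
        h = self.encoder(h, src_key_padding_mask=mask)
        # Mean-pool over non-padded positions, then regress log effect concentration.
        h = h.masked_fill(mask.unsqueeze(-1), 0).sum(1) / (~mask).sum(1, keepdim=True)
        return self.head(h).squeeze(-1)


if __name__ == "__main__":
    model = SmilesToxicityRegressor()
    toks = torch.randint(1, 64, (2, 50))       # two tokenized SMILES strings
    print(model(toks).shape)                   # torch.Size([2])
```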

PMID:38446886 | DOI:10.1126/sciadv.adk6669

Categories: Literature Watch

Machine learning predictions of T cell antigen specificity from intracellular calcium dynamics

Wed, 2024-03-06 06:00

Sci Adv. 2024 Mar 8;10(10):eadk2298. doi: 10.1126/sciadv.adk2298. Epub 2024 Mar 6.

ABSTRACT

Adoptive T cell therapies rely on the production of T cells with an antigen receptor that directs their specificity toward tumor-specific antigens. Methods for identifying relevant T cell receptor (TCR) sequences, predominantly achieved through the enrichment of antigen-specific T cells, represent a major bottleneck in the production of TCR-engineered cell therapies. Fluctuation of intracellular calcium is a proximal readout of TCR signaling and candidate marker for antigen-specific T cell identification that does not require T cell expansion; however, calcium fluctuations downstream of TCR engagement are highly variable. We propose that machine learning algorithms may allow for T cell classification from complex datasets such as polyclonal T cell signaling events. Using deep learning tools, we demonstrate accurate prediction of TCR-transgenic CD8+ T cell activation based on calcium fluctuations and test the algorithm against T cells bearing a distinct TCR as well as polyclonal T cells. This provides the foundation for an antigen-specific TCR sequence identification pipeline for adoptive T cell therapies.
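As a hedged sketch of this kind of classifier (not the authors' model), a small 1D CNN can label a single-cell calcium fluorescence trace as activated or not; trace length and layer sizes below are illustrative.

```python
# Illustrative 1D CNN over single-cell calcium traces.
import torch
import torch.nn as nn

calcium_classifier = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                 # activated vs. non-activated T cell
)

trace = torch.randn(8, 1, 600)        # 8 cells, 600 imaging frames each
print(calcium_classifier(trace).shape)  # torch.Size([8, 2])
```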

PMID:38446885 | DOI:10.1126/sciadv.adk2298

Categories: Literature Watch

Melt electrowriting enabled 3D liquid crystal elastomer structures for cross-scale actuators and temperature field sensors

Wed, 2024-03-06 06:00

Sci Adv. 2024 Mar 8;10(10):eadk3854. doi: 10.1126/sciadv.adk3854. Epub 2024 Mar 6.

ABSTRACT

Liquid crystal elastomers (LCEs) have garnered attention for their remarkable reversible strains under various stimuli. Early studies on LCEs mainly focused on basic dimensional changes in macrostructures or quasi-three-dimensional (3D) microstructures. However, fabricating complex 3D microstructures and cross-scale LCE-based structures has remained challenging. In this study, we report a compatible method named melt electrowriting (MEW) to fabricate LCE-based microfiber actuators and various 3D actuators on the micrometer to centimeter scales. By controlling printing parameters, these actuators were fabricated with high resolutions (4.5 to 60 μm), actuation strains (10 to 55%), and a maximum work density of 160 J/kg. In addition, through the integration of a deep learning-based model, we demonstrated the application of LCE materials in temperature field sensing. Large-scale, real-time, LCE grid-based spatial temperature field sensors have been designed, exhibiting a low response time of less than 42 ms and a high precision of 94.79%.

PMID:38446880 | DOI:10.1126/sciadv.adk3854

Categories: Literature Watch

Deep learning for 3D biliary anatomy for living liver donor hepatectomy planning

Wed, 2024-03-06 06:00

Int J Surg. 2024 Mar 4. doi: 10.1097/JS9.0000000000001274. Online ahead of print.

NO ABSTRACT

PMID:38446840 | DOI:10.1097/JS9.0000000000001274

Categories: Literature Watch

Vision transformer with masked autoencoders for referable diabetic retinopathy classification based on large-size retina image

Wed, 2024-03-06 06:00

PLoS One. 2024 Mar 6;19(3):e0299265. doi: 10.1371/journal.pone.0299265. eCollection 2024.

ABSTRACT

Computer-aided diagnosis systems based on deep learning algorithms have shown potential for rapid diagnosis of diabetic retinopathy (DR). Given the superior performance of Transformers over convolutional neural networks (CNNs) on natural images, we attempted to develop a new model to classify referable DR based on a limited number of large-size retinal images using a Transformer. A Vision Transformer (ViT) with Masked Autoencoders (MAE) was applied in this study to improve the classification performance for referable DR. We collected over 100,000 publicly available fundus retinal images larger than 224×224 pixels and pre-trained a ViT on these retinal images using MAE. The pre-trained ViT was then applied to classify referable DR, and its performance was compared with that of a ViT pre-trained on ImageNet. Pre-training on over 100,000 retinal images with MAE improved classification performance more than pre-training on ImageNet. The accuracy, area under the curve (AUC), highest sensitivity and highest specificity of the present model are 93.42%, 0.9853, 0.973 and 0.9539, respectively. This study shows that MAE provides more flexibility with respect to the input image and substantially reduces the number of images required. Moreover, the pre-training dataset in this study is much smaller than ImageNet, and pre-trained ImageNet weights are not required.
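For readers unfamiliar with MAE, the following simplified PyTorch sketch illustrates the pre-training idea described above: mask a large fraction of image patches, encode only the visible ones, and reconstruct the masked pixels. It is not the study's code; patch size, embedding dimensions and the 75% mask ratio are standard MAE defaults used here as assumptions.

```python
# Simplified MAE-style pre-training objective (illustrative only).
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    def __init__(self, img=224, patch=16, dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.n = (img // patch) ** 2
        self.proj = nn.Linear(3 * patch * patch, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.n, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Linear(dim, 3 * patch * patch)

    def patchify(self, x):
        p = self.patch
        b = x.size(0)
        x = x.unfold(2, p, p).unfold(3, p, p)           # (b, c, H/p, W/p, p, p)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(b, self.n, -1)

    def forward(self, imgs):
        patches = self.patchify(imgs)                   # (b, n, 3*p*p)
        tokens = self.proj(patches) + self.pos
        n_keep = int(self.n * (1 - self.mask_ratio))
        idx = torch.rand(imgs.size(0), self.n, device=imgs.device).argsort(dim=1)
        keep, masked = idx[:, :n_keep], idx[:, n_keep:]
        visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        encoded = self.encoder(visible)                 # encode visible patches only
        # Scatter encoded tokens back; masked slots get a learned mask token.
        full = self.mask_token.expand(imgs.size(0), self.n, -1).clone()
        keep_idx = keep.unsqueeze(-1).expand(-1, -1, full.size(-1))
        full = full.scatter(1, keep_idx, encoded)
        recon = self.decoder(full + self.pos)
        # Reconstruction loss is computed on the masked patches only.
        target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
        pred = torch.gather(recon, 1, masked.unsqueeze(-1).expand(-1, -1, recon.size(-1)))
        return nn.functional.mse_loss(pred, target)


if __name__ == "__main__":
    loss = TinyMAE()(torch.randn(2, 3, 224, 224))
    print(float(loss))
```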

PMID:38446810 | DOI:10.1371/journal.pone.0299265

Categories: Literature Watch

Cracking the black box of deep sequence-based protein-protein interaction prediction

Wed, 2024-03-06 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae076. doi: 10.1093/bib/bbae076.

ABSTRACT

Identifying protein-protein interactions (PPIs) is crucial for deciphering biological pathways. Numerous prediction methods have been developed as cheap alternatives to biological experiments, reporting surprisingly high accuracy estimates. We systematically investigated how much reproducible deep learning models depend on data leakage, sequence similarities and node degree information, and compared them with basic machine learning models. We found that overlaps between training and test sets resulting from random splitting lead to strongly overestimated performances. In this setting, models learn solely from sequence similarities and node degrees. When data leakage is avoided by minimizing sequence similarities between training and test set, performances become random. Moreover, baseline models directly leveraging sequence similarity and network topology show good performances at a fraction of the computational cost. Thus, we advocate that any improvements should be reported relative to baseline methods in the future. Our findings suggest that predicting PPIs remains an unsolved task for proteins showing little sequence similarity to previously studied proteins, highlighting that further experimental research into the 'dark' protein interactome and better computational methods are needed.
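The kind of node-degree baseline the authors argue should be reported can be illustrated in a few lines. The sketch below builds a synthetic scale-free network (a stand-in for a PPI network, not their data) and predicts interactions from the two partners' degrees alone; under a random split such a baseline already separates positives from negatives well above chance.

```python
# Degree-only baseline for link prediction on a synthetic network.
import random
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

random.seed(0)
g = nx.barabasi_albert_graph(500, 3, seed=0)          # stand-in PPI network
pos_pairs = list(g.edges())
nodes = list(g.nodes())
neg_pairs = []
while len(neg_pairs) < len(pos_pairs):                # sample non-edges as negatives
    u, v = random.sample(nodes, 2)
    if not g.has_edge(u, v):
        neg_pairs.append((u, v))

def features(pairs):
    # Degree of each partner, sorted so the pair is order-invariant.
    return np.array([sorted((g.degree(u), g.degree(v))) for u, v in pairs])

X = np.vstack([features(pos_pairs), features(neg_pairs)])
y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("degree-only AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```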

PMID:38446741 | DOI:10.1093/bib/bbae076

Categories: Literature Watch

Diff-AMP: tailored designed antimicrobial peptide framework with all-in-one generation, identification, prediction and optimization

Wed, 2024-03-06 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae078. doi: 10.1093/bib/bbae078.

ABSTRACT

Antimicrobial peptides (AMPs), short peptides with diverse functions, effectively target and combat various organisms. The widespread misuse of chemical antibiotics has led to increasing microbial resistance. Due to their low drug resistance and toxicity, AMPs are considered promising substitutes for traditional antibiotics. While existing deep learning technology enhances AMP generation, it also presents certain challenges. First, current AMP generation overlooks the complex interdependencies among amino acids. Second, current models fail to integrate crucial tasks such as screening, attribute prediction and iterative optimization. Consequently, we develop an integrated deep learning framework, Diff-AMP, that automates AMP generation, identification, attribute prediction and iterative optimization. We innovatively integrate kinetic diffusion and attention mechanisms into the reinforcement learning framework for efficient AMP generation. Additionally, our prediction module incorporates pre-training and transfer learning strategies for precise AMP identification and screening. We employ a convolutional neural network for multi-attribute prediction and a reinforcement learning-based iterative optimization strategy to produce diverse AMPs. This framework automates molecule generation, screening, attribute prediction and optimization, thereby advancing AMP research. We have also deployed Diff-AMP on a web server, with code, data and server details available in the Data Availability section.
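As a hedged sketch of just the multi-attribute prediction module mentioned above (the diffusion-based generation and reinforcement learning components of Diff-AMP are not reproduced here), the following CNN scores one-hot-encoded peptide sequences on several attributes at once; the attribute list, layer sizes and example sequences are illustrative.

```python
# Illustrative multi-attribute CNN over one-hot amino acid sequences.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq: str, max_len: int = 50) -> torch.Tensor:
    x = torch.zeros(len(AMINO_ACIDS), max_len)
    for i, aa in enumerate(seq[:max_len]):
        x[AMINO_ACIDS.index(aa), i] = 1.0
    return x

multi_attribute_cnn = nn.Sequential(
    nn.Conv1d(20, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, 3),   # e.g. antimicrobial, hemolytic, anti-inflammatory logits
)

# Two example peptide sequences (arbitrary, for shape checking only).
batch = torch.stack([one_hot("GIGKFLHSAKKFGKAFVGEIMNS"),
                     one_hot("KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK")])
print(torch.sigmoid(multi_attribute_cnn(batch)))   # per-attribute probabilities
```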

PMID:38446739 | DOI:10.1093/bib/bbae078

Categories: Literature Watch

Prediction of protein-ligand binding affinity via deep learning models

Wed, 2024-03-06 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbae081. doi: 10.1093/bib/bbae081.

ABSTRACT

Accurately predicting the binding affinity between proteins and ligands is crucial in drug screening and optimization, but it remains a challenge in computer-aided drug design. The recent success of AlphaFold2 in predicting protein structures has brought new hope for deep learning (DL) models to accurately predict protein-ligand binding affinity. However, current DL models still face limitations due to low-quality databases, inaccurate input representations and inappropriate model architectures. In this work, we review the computational methods, specifically DL-based models, used to predict protein-ligand binding affinity. We start with a brief introduction to protein-ligand binding affinity and the traditional computational methods used to calculate it. We then introduce the basic principles of DL models for predicting protein-ligand binding affinity. Next, we review the commonly used databases, input representations and DL models in this field. Finally, we discuss the potential challenges and future work in accurately predicting protein-ligand binding affinity via DL models.

PMID:38446737 | DOI:10.1093/bib/bbae081

Categories: Literature Watch

A New Automated Prognostic Prediction Method Based on Multi-Sequence Magnetic Resonance Imaging for Hepatic Resection of Colorectal Cancer Liver Metastases

Wed, 2024-03-06 06:00

IEEE J Biomed Health Inform. 2024 Mar;28(3):1528-1539. doi: 10.1109/JBHI.2024.3350247.

ABSTRACT

Colorectal cancer is a prevalent and life-threatening disease, and colorectal cancer liver metastasis (CRLM) exhibits the highest mortality rate. Currently, surgery stands as the most effective curative option for eligible patients. However, owing to the insufficient performance of traditional methods and the lack of multi-modality MRI feature complementarity in existing deep learning methods, the prognosis of CRLM surgical resection has not been fully explored. This paper proposes a new method, the multi-modal guided complementary network (MGCNet), which employs multi-sequence MRI to predict 1-year recurrence and recurrence-free survival in patients after CRLM resection. In light of the complexity and redundancy of features in the liver region, we designed a multi-modal guided local feature fusion module that uses tumor features to guide the dynamic fusion of prognostically relevant local features within the liver. To address the loss of spatial information during multi-sequence MRI fusion, the cross-modal complementary external attention module adds an external mask branch to establish inter-layer correlation. The results show that the model achieves an accuracy (ACC) of 0.79, an area under the curve (AUC) of 0.84, a C-index of 0.73, and a hazard ratio (HR) of 4.0, which is a significant improvement over state-of-the-art methods. Additionally, MGCNet exhibits good interpretability.
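As a simple sketch of the tumor-guided fusion idea (not MGCNet itself), the module below uses a pooled tumor descriptor to produce channel-wise gates that reweight liver-region features from each MRI sequence before fusing them; shapes and the gating design are assumptions.

```python
# Illustrative tumor-guided gated fusion of multi-sequence MRI features.
import torch
import torch.nn as nn


class TumorGuidedFusion(nn.Module):
    def __init__(self, channels: int = 64, n_sequences: int = 3):
        super().__init__()
        self.n_sequences = n_sequences
        self.gate = nn.Sequential(
            nn.Linear(channels, channels * n_sequences), nn.Sigmoid())
        self.fuse = nn.Conv2d(channels * n_sequences, channels, kernel_size=1)

    def forward(self, liver_feats, tumor_feat):
        # liver_feats: list of (B, C, H, W), one per MRI sequence
        # tumor_feat:  (B, C) pooled descriptor of the tumor region
        b, c = tumor_feat.shape
        gates = self.gate(tumor_feat).view(b, self.n_sequences, c, 1, 1)
        gated = [g * f for g, f in zip(gates.unbind(1), liver_feats)]
        return self.fuse(torch.cat(gated, dim=1))


if __name__ == "__main__":
    module = TumorGuidedFusion()
    feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]   # e.g. features from three sequences
    tumor = torch.randn(2, 64)
    print(module(feats, tumor).shape)                        # torch.Size([2, 64, 32, 32])
```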

PMID:38446655 | DOI:10.1109/JBHI.2024.3350247

Categories: Literature Watch

Deep Learning-Augmented ECG Analysis for Screening and Genotype Prediction of Congenital Long QT Syndrome

Wed, 2024-03-06 06:00

JAMA Cardiol. 2024 Mar 6. doi: 10.1001/jamacardio.2024.0039. Online ahead of print.

ABSTRACT

IMPORTANCE: Congenital long QT syndrome (LQTS) is associated with syncope, ventricular arrhythmias, and sudden death. Although LQTS is often detected by QT prolongation on resting electrocardiography (ECG), half of patients with LQTS have a normal or borderline-normal QT interval.

OBJECTIVE: To develop a deep learning-based neural network for identification of LQTS and differentiation of genotypes (LQTS1 and LQTS2) using 12-lead ECG.

DESIGN, SETTING, AND PARTICIPANTS: This diagnostic accuracy study used ECGs from patients with suspected inherited arrhythmia enrolled in the Hearts in Rhythm Organization Registry (HiRO) from August 2012 to December 2021. The internal dataset was derived at 2 sites and an external validation dataset at 4 sites within the HiRO Registry; an additional cross-sectional validation dataset was from the Montreal Heart Institute. The cohort with LQTS included probands and relatives with pathogenic or likely pathogenic variants in KCNQ1 or KCNH2 genes with normal or prolonged corrected QT (QTc) intervals.

EXPOSURES: Convolutional neural network (CNN) discrimination between LQTS1, LQTS2, and negative genetic test results.

MAIN OUTCOMES AND MEASURES: The main outcomes were area under the curve (AUC), F1 scores, and sensitivity for detecting LQTS and differentiating genotypes using a CNN method compared with QTc-based detection.

RESULTS: A total of 4521 ECGs from 990 patients (mean [SD] age, 42 [18] years; 589 [59.5%] female) were analyzed. External validation within the national registry (101 patients) demonstrated the CNN's high diagnostic capacity for LQTS detection (AUC, 0.93; 95% CI, 0.89-0.96) and genotype differentiation (AUC, 0.91; 95% CI, 0.86-0.96). This surpassed expert-measured QTc intervals in detecting LQTS (F1 score, 0.84 [95% CI, 0.78-0.90] vs 0.22 [95% CI, 0.13-0.31]; sensitivity, 0.90 [95% CI, 0.86-0.94] vs 0.36 [95% CI, 0.23-0.47]), including in patients with normal or borderline QTc intervals (F1 score, 0.70 [95% CI, 0.40-1.00]; sensitivity, 0.78 [95% CI, 0.53-0.95]). In further validation in a cross-sectional cohort (406 patients) of high-risk patients and genotype-negative controls, the CNN detected LQTS with an AUC of 0.81 (95% CI, 0.80-0.85), which was better than QTc interval-based detection (AUC, 0.74; 95% CI, 0.69-0.78).

CONCLUSIONS AND RELEVANCE: The deep learning model improved detection of congenital LQTS from resting ECGs and allowed for differentiation between the 2 most common genetic subtypes. Broader validation over an unselected general population may support application of this model to patients with suspected LQTS.
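As a hedged sketch of this kind of model (not the study's CNN), a 1D convolutional network over the 12 ECG leads can output the three classes of interest; the sampling length and layer sizes below are illustrative.

```python
# Illustrative 1D CNN classifier over 12-lead ECG signals.
import torch
import torch.nn as nn

ecg_cnn = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=15, padding=7), nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=15, padding=7), nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(64, 128, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(128, 3),              # LQTS1 vs LQTS2 vs genotype-negative
)

ecg = torch.randn(4, 12, 5000)      # 4 ECGs, 12 leads, 10 s at 500 Hz
print(ecg_cnn(ecg).shape)           # torch.Size([4, 3])
```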

PMID:38446445 | DOI:10.1001/jamacardio.2024.0039

Categories: Literature Watch

Comparing code-free and bespoke deep learning approaches in ophthalmology

Wed, 2024-03-06 06:00

Graefes Arch Clin Exp Ophthalmol. 2024 Mar 6. doi: 10.1007/s00417-024-06432-x. Online ahead of print.

ABSTRACT

AIM: Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively review the advantages that CFDL offers over bespoke expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management.

METHODS: We performed a search for studies reporting CFDL applications in ophthalmology in MEDLINE (through PubMed) from inception to June 25, 2023, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies looking at our target tasks, we performed a subsequent search to find corresponding bespoke DL studies focused on the same tasks. Only English-written articles with full text available were included. Reviews, editorials, protocols and case reports or case series were excluded. We identified ten relevant studies for this review.

RESULTS: Overall, studies were optimistic about CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was one-dimensional and left wide applicability gaps. A high-quality assessment of whether CFDL is more applicable than bespoke DL requires a context-specific, weighted evaluation of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique strengths and that neither replaces the other; their benefits must be weighed case by case. Future studies should perform a multidimensional analysis of both techniques and address the limitations of suboptimal dataset quality, poorly characterized applicability and non-regulated study designs.

CONCLUSION: For clinicians without DL expertise or easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems. CFDL models can complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in implementing either type of model for a designated task is warranted.

PMID:38446200 | DOI:10.1007/s00417-024-06432-x

Categories: Literature Watch

Noninvasive Molecular Subtyping of Pediatric Low-Grade Glioma with Self-Supervised Transfer Learning

Wed, 2024-03-06 06:00

Radiol Artif Intell. 2024 Mar 6:e230333. doi: 10.1148/ryai.230333. Online ahead of print.

ABSTRACT

PURPOSE: To develop and externally test a scan-to-prediction deep learning pipeline for noninvasive, MRI-based BRAF mutational status classification in pediatric low-grade glioma (pLGG).

MATERIALS AND METHODS: This retrospective study included two pLGG datasets with linked genomic and diagnostic T2-weighted MRI data: BCH (development dataset, n = 214 [60 (28%) BRAF fusion, 50 (23%) BRAF V600E, 104 (49%) wild type]) and the Children's Brain Tumor Network (external testing, n = 112 [60 (53%) BRAF fusion, 17 (15%) BRAF V600E, 35 (32%) wild type]). A deep learning pipeline was developed to classify BRAF mutational status (V600E versus fusion versus wild type) via a two-stage process: (1) 3D tumor segmentation and extraction of axial tumor images, and (2) slice-wise, deep learning-based classification of mutational status. Knowledge-transfer and self-supervised approaches were investigated to prevent model overfitting, with a primary endpoint of the area under the receiver operating characteristic curve (AUC). To enhance model interpretability, we developed a novel metric, COMDist (center-of-mass distance), that quantifies model attention around the tumor.

RESULTS: A combination of transfer learning from a pretrained medical imaging-specific network and self-supervised label cross-training (TransferX), coupled with consensus logic, yielded the highest classification performance, with AUCs of 0.82 [95% CI: 0.72-0.91], 0.87 [95% CI: 0.61-0.97], and 0.85 [95% CI: 0.66-0.95] for wild type, BRAF fusion, and BRAF V600E, respectively, on internal testing. On external testing, the pipeline yielded AUCs of 0.72 [95% CI: 0.64-0.86], 0.78 [95% CI: 0.61-0.89], and 0.72 [95% CI: 0.64-0.88] for the wild-type, BRAF-fusion, and BRAF-V600E classes, respectively.

CONCLUSION: Transfer learning and self-supervised cross-training improved classification performance and generalizability for noninvasive pLGG mutational status prediction in a limited-data scenario. ©RSNA, 2024.

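A COMDist-style metric as described, the distance between the center of mass of a model attention map and that of the tumor mask, can be implemented compactly. The sketch below is an illustrative reading of the description; the paper's exact formulation may differ.

```python
# Illustrative COMDist-style metric: distance between attention and tumor centers of mass.
import numpy as np
from scipy.ndimage import center_of_mass

def comdist(attention: np.ndarray, tumor_mask: np.ndarray, spacing_mm=(1.0, 1.0)) -> float:
    """attention: non-negative 2D saliency map; tumor_mask: binary 2D mask."""
    com_att = np.array(center_of_mass(attention))
    com_tum = np.array(center_of_mass(tumor_mask.astype(float)))
    return float(np.linalg.norm((com_att - com_tum) * np.array(spacing_mm)))

# Toy example: attention centered away from a small square "tumor".
att = np.zeros((128, 128)); att[90:110, 90:110] = 1.0
mask = np.zeros((128, 128)); mask[40:60, 40:60] = 1
print(f"COMDist = {comdist(att, mask):.1f} mm")   # larger values = attention off-target
```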
PMID:38446044 | DOI:10.1148/ryai.230333

Categories: Literature Watch

Semisupervised Learning for Generalizable Intracranial Hemorrhage Detection and Segmentation

Wed, 2024-03-06 06:00

Radiol Artif Intell. 2024 Mar 6:e230077. doi: 10.1148/ryai.230077. Online ahead of print.

ABSTRACT

PURPOSE: To develop and evaluate a semisupervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set.

MATERIALS AND METHODS: This retrospective study used semisupervised learning to bootstrap performance. An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one U.S. institution from 2010-2017 and used to generate pseudo-labels on a separate unlabeled corpus of 25,000 examinations from the RSNA and ASNR. A second "student" model was trained on this combined pixel- and pseudo-labeled dataset. Hyperparameter tuning was performed on a validation set of 93 scans. Testing for both classification (n = 481 examinations) and segmentation (n = 23 examinations, or 529 images) was performed on CQ500, a dataset of 481 scans performed in India, to evaluate out-of-distribution generalizability. The semisupervised model was compared with a baseline model trained on only labeled data using area under the receiver operating characteristic curve (AUC), Dice similarity coefficient (DSC), and average precision (AP) metrics.

RESULTS: The semisupervised model achieved a statistically significantly higher examination-level AUC on CQ500 than the baseline (0.939 [0.938, 0.940] versus 0.907 [0.906, 0.908]; P = .009). It also achieved a higher DSC (0.829 [0.825, 0.833] versus 0.809 [0.803, 0.812]; P = .012) and pixel-level AP (0.848 [0.843, 0.853] versus 0.828 [0.817, 0.828]) than the baseline.

CONCLUSION: The addition of unlabeled data in a semisupervised learning framework demonstrates stronger generalizability potential for intracranial hemorrhage detection and segmentation compared with a supervised baseline. ©RSNA, 2024.

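A schematic sketch of the teacher-student pseudo-labeling loop described above (not the study's training code): a frozen teacher trained on pixel-labeled scans produces hard pseudo-labels for unlabeled scans, and a student is trained on the union. Models, thresholds and data loaders below are placeholders.

```python
# Illustrative teacher-student pseudo-labeling loop for segmentation.
import torch
import torch.nn as nn


def train_semisupervised(teacher: nn.Module, student: nn.Module,
                         labeled_loader, unlabeled_loader,
                         epochs: int = 10, threshold: float = 0.5):
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()
    teacher.eval()
    for _ in range(epochs):
        for (x_l, y_l), (x_u, _) in zip(labeled_loader, unlabeled_loader):
            with torch.no_grad():
                # Hard pseudo-labels from the frozen teacher's segmentation output.
                pseudo = (torch.sigmoid(teacher(x_u)) > threshold).float()
            x = torch.cat([x_l, x_u])
            y = torch.cat([y_l, pseudo])
            loss = bce(student(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return student


if __name__ == "__main__":
    from torch.utils.data import DataLoader, TensorDataset
    net = lambda: nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(8, 1, 3, padding=1))   # toy segmenter
    labeled = DataLoader(TensorDataset(torch.randn(8, 1, 64, 64),
                                       (torch.rand(8, 1, 64, 64) > 0.5).float()), batch_size=4)
    unlabeled = DataLoader(TensorDataset(torch.randn(8, 1, 64, 64),
                                         torch.zeros(8, 1, 64, 64)), batch_size=4)
    train_semisupervised(net(), net(), labeled, unlabeled, epochs=1)
    print("student trained on pixel- and pseudo-labeled data")
```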
PMID:38446043 | DOI:10.1148/ryai.230077

Categories: Literature Watch

Development and Validation of a Deep Learning Model to Reduce the Interference of Rectal Artifacts in MRI-based Prostate Cancer Diagnosis

Wed, 2024-03-06 06:00

Radiol Artif Intell. 2024 Mar 6:e230362. doi: 10.1148/ryai.230362. Online ahead of print.

ABSTRACT

PURPOSE: To develop an MRI-based model for clinically significant prostate cancer (csPCa) diagnosis that can resist rectal artifact interference.

MATERIALS AND METHODS: This retrospective study included 2203 male patients with prostate lesions who underwent biparametric MRI and biopsy between January 2019 and June 2023. A targeted adversarial training with proprietary adversarial samples (TPAS) strategy was proposed to enhance model resistance against rectal artifacts. The automated csPCa diagnostic models trained with and without TPAS were compared using multicenter validation datasets. The impact of rectal artifacts on the diagnostic performance of each model at the patient and lesion levels was compared using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC). AUCs were compared using the DeLong test, and AUPRCs were compared using the bootstrap method.

RESULTS: The TPAS model exhibited diagnostic performance improvements of 6% at the patient level (AUC: 0.87 versus 0.81; P < .001) and 7% at the lesion level (AUPRC: 0.84 versus 0.77; P = .007) compared with the control model. The TPAS model demonstrated less performance decline in the presence of rectal artifact-pattern adversarial noise than the control model (ΔAUC: -17% versus -19%; ΔAUPRC: -18% versus -21%). The TPAS model performed better than the control model in patients with moderate (AUC: 0.79 versus 0.73; AUPRC: 0.68 versus 0.61) and severe (AUC: 0.75 versus 0.57; AUPRC: 0.69 versus 0.59) artifacts.

CONCLUSION: This study demonstrates that the TPAS model can reduce rectal artifact interference in MRI-based PCa diagnosis, thereby improving its performance in clinical applications. Published under a CC BY 4.0 license.

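The paper's proprietary TPAS samples are not reproduced here; as a generic stand-in, the sketch below shows adversarial training with an FGSM-style perturbation confined to a region mask, mixed with ordinary supervised training. The mask, model and loss are placeholders.

```python
# Generic targeted adversarial training step: noise restricted to a region mask.
import torch
import torch.nn as nn


def targeted_adversarial_step(model, x, y, region_mask, eps=0.03):
    """region_mask: (B, 1, H, W) binary mask marking where perturbation is allowed."""
    criterion = nn.BCEWithLogitsLoss()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x_adv), y)
    loss.backward()                                        # gradient w.r.t. the input
    with torch.no_grad():
        noise = eps * x_adv.grad.sign() * region_mask      # perturb only inside the mask
        x_adv = (x + noise).clamp(0, 1)
    clean_loss = criterion(model(x), y)
    adv_loss = criterion(model(x_adv), y)
    return 0.5 * (clean_loss + adv_loss)                   # train on clean and adversarial inputs


if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))   # toy classifier
    x = torch.rand(4, 1, 64, 64)
    y = torch.randint(0, 2, (4, 1)).float()
    mask = torch.zeros_like(x); mask[:, :, 40:, :] = 1.0         # lower image region as a stand-in
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = targeted_adversarial_step(model, x, y, mask)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```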
PMID:38446042 | DOI:10.1148/ryai.230362

Categories: Literature Watch

Structural modeling of ion channels using AlphaFold2, RoseTTAFold2, and ESMFold

Wed, 2024-03-06 06:00

Channels (Austin). 2024 Dec;18(1):2325032. doi: 10.1080/19336950.2024.2325032. Epub 2024 Mar 6.

ABSTRACT

Ion channels play key roles in human physiology and are important targets in drug discovery. The atomic-scale structures of ion channels provide invaluable insights into a fundamental understanding of the molecular mechanisms of channel gating and modulation. Recent breakthroughs in deep learning-based computational methods, such as AlphaFold, RoseTTAFold, and ESMFold have transformed research in protein structure prediction and design. We review the application of AlphaFold, RoseTTAFold, and ESMFold to structural modeling of ion channels using representative voltage-gated ion channels, including human voltage-gated sodium (NaV) channel - NaV1.8, human voltage-gated calcium (CaV) channel - CaV1.1, and human voltage-gated potassium (KV) channel - KV1.3. We compared AlphaFold, RoseTTAFold, and ESMFold structural models of NaV1.8, CaV1.1, and KV1.3 with corresponding cryo-EM structures to assess details of their similarities and differences. Our findings shed light on the strengths and limitations of the current state-of-the-art deep learning-based computational methods for modeling ion channel structures, offering valuable insights to guide their future applications for ion channel research.
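A minimal example of the kind of model-versus-experiment comparison described: superposing a predicted structure onto a cryo-EM structure and reporting the C-alpha RMSD with Biopython. File paths are placeholders, and the naive residue pairing assumes matching numbering; real comparisons should align sequences first.

```python
# Illustrative C-alpha RMSD between a predicted model and a cryo-EM structure.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
predicted = parser.get_structure("model", "predicted_channel_model.pdb")   # placeholder path
experimental = parser.get_structure("cryoem", "cryoem_channel.pdb")        # placeholder path

def ca_atoms(structure, chain_id="A"):
    chain = structure[0][chain_id]
    return [res["CA"] for res in chain if "CA" in res]

pred_ca, exp_ca = ca_atoms(predicted), ca_atoms(experimental)
n = min(len(pred_ca), len(exp_ca))      # naive pairing; align sequences for a real comparison
sup = Superimposer()
sup.set_atoms(exp_ca[:n], pred_ca[:n])  # fixed (cryo-EM) vs. moving (predicted) atoms
sup.apply(list(predicted.get_atoms()))
print(f"C-alpha RMSD over {n} residues: {sup.rms:.2f} Å")
```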

PMID:38445990 | DOI:10.1080/19336950.2024.2325032

Categories: Literature Watch

pyM2aia: Python interface for mass spectrometry imaging with focus on Deep Learning

Wed, 2024-03-06 06:00

Bioinformatics. 2024 Mar 5:btae133. doi: 10.1093/bioinformatics/btae133. Online ahead of print.

ABSTRACT

SUMMARY: Python is the most commonly used language for deep learning (DL). Existing Python packages for mass spectrometry imaging (MSI) data are not optimized for DL tasks. We therefore introduce pyM2aia, a Python package for MSI data analysis with a focus on memory-efficient handling, processing and convenient data-access for DL applications. pyM2aia provides interfaces to its parent application M2aia, which offers interactive capabilities for exploring and annotating MSI data in imzML format. pyM2aia utilizes the image input and output routines, data formats, and processing functions of M2aia, ensures data interchangeability, and enables the writing of readable and easy-to-maintain DL pipelines by providing batch generators for typical MSI data access strategies. We showcase the package in several examples, including imzML metadata parsing, signal processing, ion-image generation, and, in particular, DL model training and inference for spectrum-wise approaches, ion-image-based approaches, and approaches that use spectral and spatial information simultaneously.

AVAILABILITY: Python package, code and examples are available at (https://m2aia.github.io/m2aia).

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
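To illustrate what a spectrum-wise batch generator over imzML data looks like, the sketch below is a generic stand-in: pyM2aia's own API is deliberately not reproduced here, the separate pyimzML parser is used instead, the file path is a placeholder, and continuous-mode data are assumed so that all spectra share an m/z axis.

```python
# Generic spectrum-wise batch generator over an imzML file (continuous mode assumed).
import numpy as np
from pyimzml.ImzMLParser import ImzMLParser

def spectrum_batches(imzml_path: str, batch_size: int = 32):
    parser = ImzMLParser(imzml_path)
    batch, coords = [], []
    for idx, (x, y, z) in enumerate(parser.coordinates):
        mzs, intensities = parser.getspectrum(idx)
        batch.append(np.asarray(intensities, dtype=np.float32))
        coords.append((x, y))
        if len(batch) == batch_size:
            yield np.stack(batch), coords      # (batch_size, n_mz_bins) plus pixel coordinates
            batch, coords = [], []
    if batch:
        yield np.stack(batch), coords

# Usage sketch: feed batches to a spectrum-wise model, e.g.
# for spectra, coords in spectrum_batches("example.imzML"):
#     predictions = model(torch.from_numpy(spectra))
```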

PMID:38445753 | DOI:10.1093/bioinformatics/btae133

Categories: Literature Watch
