Deep learning
Diff-AMP: tailor-designed antimicrobial peptide framework with all-in-one generation, identification, prediction and optimization
Brief Bioinform. 2024 Jan 22;25(2):bbae078. doi: 10.1093/bib/bbae078.
ABSTRACT
Antimicrobial peptides (AMPs), short peptides with diverse functions, effectively target and combat various organisms. The widespread misuse of chemical antibiotics has led to increasing microbial resistance. Due to their low drug resistance and toxicity, AMPs are considered promising substitutes for traditional antibiotics. While existing deep learning technology enhances AMP generation, it also presents certain challenges. Firstly, AMP generation overlooks the complex interdependencies among amino acids. Secondly, current models fail to integrate crucial tasks like screening, attribute prediction and iterative optimization. Consequently, we develop an integrated deep learning framework, Diff-AMP, that automates AMP generation, identification, attribute prediction and iterative optimization. We innovatively integrate kinetic diffusion and attention mechanisms into the reinforcement learning framework for efficient AMP generation. Additionally, our prediction module incorporates pre-training and transfer learning strategies for precise AMP identification and screening. We employ a convolutional neural network for multi-attribute prediction and a reinforcement learning-based iterative optimization strategy to produce diverse AMPs. This framework automates molecule generation, screening, attribute prediction and optimization, thereby advancing AMP research. We have also deployed Diff-AMP on a web server, with code, data and server details available in the Data Availability section.
PMID:38446739 | DOI:10.1093/bib/bbae078
Prediction of protein-ligand binding affinity via deep learning models
Brief Bioinform. 2024 Jan 22;25(2):bbae081. doi: 10.1093/bib/bbae081.
ABSTRACT
Accurately predicting the binding affinity between proteins and ligands is crucial in drug screening and optimization, but it remains a challenge in computer-aided drug design. The recent success of AlphaFold2 in predicting protein structures has brought new hope for deep learning (DL) models to accurately predict protein-ligand binding affinity. However, current DL models still face limitations due to low-quality databases, inaccurate input representations and inappropriate model architectures. In this work, we review the computational methods, specifically DL-based models, used to predict protein-ligand binding affinity. We start with a brief introduction to protein-ligand binding affinity and the traditional computational methods used to calculate it. We then introduce the basic principles of DL models for predicting protein-ligand binding affinity. Next, we review the commonly used databases, input representations and DL models in this field. Finally, we discuss the potential challenges and future work in accurately predicting protein-ligand binding affinity via DL models.
PMID:38446737 | DOI:10.1093/bib/bbae081
A New Automated Prognostic Prediction Method Based on Multi-Sequence Magnetic Resonance Imaging for Hepatic Resection of Colorectal Cancer Liver Metastases
IEEE J Biomed Health Inform. 2024 Mar;28(3):1528-1539. doi: 10.1109/JBHI.2024.3350247.
ABSTRACT
Colorectal cancer is a prevalent and life-threatening disease, and colorectal cancer liver metastasis (CRLM) exhibits the highest mortality rate. Currently, surgery stands as the most effective curative option for eligible patients. However, due to the insufficient performance of traditional methods and the lack of multi-modality MRI feature complementarity in existing deep learning methods, the prognosis of CRLM surgical resection has not been fully explored. This paper proposes a new method, the multi-modal guided complementary network (MGCNet), which employs multi-sequence MRI to predict 1-year recurrence and recurrence-free survival in patients after CRLM resection. In light of the complexity and redundancy of features in the liver region, we designed a multi-modal guided local feature fusion module that uses tumor features to guide the dynamic fusion of prognostically relevant local features within the liver. To address the loss of spatial information during multi-sequence MRI fusion, we designed a cross-modal complementary external attention module with an external mask branch that establishes inter-layer correlation. The results show that the model achieves an accuracy (ACC) of 0.79, an area under the curve (AUC) of 0.84, a C-index of 0.73, and a hazard ratio (HR) of 4.0, a significant improvement over state-of-the-art methods. Additionally, MGCNet exhibits good interpretability.
PMID:38446655 | DOI:10.1109/JBHI.2024.3350247
Deep Learning-Augmented ECG Analysis for Screening and Genotype Prediction of Congenital Long QT Syndrome
JAMA Cardiol. 2024 Mar 6. doi: 10.1001/jamacardio.2024.0039. Online ahead of print.
ABSTRACT
IMPORTANCE: Congenital long QT syndrome (LQTS) is associated with syncope, ventricular arrhythmias, and sudden death. Although LQTS is typically detected by QT prolongation on resting electrocardiography (ECG), half of patients with LQTS have a normal or borderline-normal QT interval.
OBJECTIVE: To develop a deep learning-based neural network for identification of LQTS and differentiation of genotypes (LQTS1 and LQTS2) using 12-lead ECG.
DESIGN, SETTING, AND PARTICIPANTS: This diagnostic accuracy study used ECGs from patients with suspected inherited arrhythmia enrolled in the Hearts in Rhythm Organization Registry (HiRO) from August 2012 to December 2021. The internal dataset was derived at 2 sites and an external validation dataset at 4 sites within the HiRO Registry; an additional cross-sectional validation dataset was from the Montreal Heart Institute. The cohort with LQTS included probands and relatives with pathogenic or likely pathogenic variants in KCNQ1 or KCNH2 genes with normal or prolonged corrected QT (QTc) intervals.
EXPOSURES: Convolutional neural network (CNN) discrimination between LQTS1, LQTS2, and negative genetic test results.
MAIN OUTCOMES AND MEASURES: The main outcomes were area under the curve (AUC), F1 scores, and sensitivity for detecting LQTS and differentiating genotypes using a CNN method compared with QTc-based detection.
RESULTS: A total of 4521 ECGs from 990 patients (mean [SD] age, 42 [18] years; 589 [59.5%] female) were analyzed. External validation within the national registry (101 patients) demonstrated the CNN's high diagnostic capacity for LQTS detection (AUC, 0.93; 95% CI, 0.89-0.96) and genotype differentiation (AUC, 0.91; 95% CI, 0.86-0.96). This surpassed expert-measured QTc intervals in detecting LQTS (F1 score, 0.84 [95% CI, 0.78-0.90] vs 0.22 [95% CI, 0.13-0.31]; sensitivity, 0.90 [95% CI, 0.86-0.94] vs 0.36 [95% CI, 0.23-0.47]), including in patients with normal or borderline QTc intervals (F1 score, 0.70 [95% CI, 0.40-1.00]; sensitivity, 0.78 [95% CI, 0.53-0.95]). In further validation in a cross-sectional cohort (406 patients) of high-risk patients and genotype-negative controls, the CNN detected LQTS with an AUC of 0.81 (95% CI, 0.80-0.85), which was better than QTc interval-based detection (AUC, 0.74; 95% CI, 0.69-0.78).
CONCLUSIONS AND RELEVANCE: The deep learning model improved detection of congenital LQTS from resting ECGs and allowed for differentiation between the 2 most common genetic subtypes. Broader validation over an unselected general population may support application of this model to patients with suspected LQTS.
PMID:38446445 | DOI:10.1001/jamacardio.2024.0039
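The CNN-versus-QTc comparison above is expressed in standard confusion-matrix metrics; a minimal sketch of how F1 score and sensitivity are computed (the counts below are illustrative, not the study's):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Recall: fraction of true positive cases the test flags."""
    return tp / (tp + fn)

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = sensitivity(tp, fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 9 cases detected, 1 false positive, 1 case missed.
print(sensitivity(9, 1))  # 0.9
print(f1_score(9, 1, 1))
```

Both metrics ignore true negatives, which is why they are preferred over raw accuracy when, as here, genotype-positive patients are a minority of those screened.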
Comparing code-free and bespoke deep learning approaches in ophthalmology
Graefes Arch Clin Exp Ophthalmol. 2024 Mar 6. doi: 10.1007/s00417-024-06432-x. Online ahead of print.
ABSTRACT
AIM: Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively review the advantages that CFDL offers over bespoke expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management.
METHODS: We performed a search for studies reporting CFDL applications in ophthalmology in MEDLINE (through PubMed) from inception to June 25, 2023, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies looking at our target tasks, we performed a subsequent search to find corresponding bespoke DL studies focused on the same tasks. Only English-written articles with full text available were included. Reviews, editorials, protocols and case reports or case series were excluded. We identified ten relevant studies for this review.
RESULTS: Overall, studies were optimistic about CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was mono-dimensional and left wide applicability gaps. A high-quality assessment of whether CFDL is more applicable than bespoke DL warrants a context-specific, weighted evaluation of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique assets and cannot replace each other; their benefits are valued differently on a case-by-case basis. Future studies are warranted to perform a multidimensional analysis of both techniques and to address the limitations of suboptimal dataset quality, poorly characterized applicability and non-regulated study designs.
CONCLUSION: For clinicians without DL expertise or easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems. CFDL models can complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in implementing these models for a designated task is warranted.
PMID:38446200 | DOI:10.1007/s00417-024-06432-x
Evolution in Development of a Predictive Deep-Learning Model for Total Hip Replacement Based on Radiographs: Commentary on an article by Yi Xu, MD, et al.: "Development and Validation of a Deep-Learning Model to Predict Total Hip Replacement on...
J Bone Joint Surg Am. 2024 Mar 6;106(5):e12. doi: 10.2106/JBJS.23.01317. Epub 2024 Mar 6.
NO ABSTRACT
PMID:38446184 | DOI:10.2106/JBJS.23.01317
Noninvasive Molecular Subtyping of Pediatric Low-Grade Glioma with Self-Supervised Transfer Learning
Radiol Artif Intell. 2024 Mar 6:e230333. doi: 10.1148/ryai.230333. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and externally test a scan-to-prediction deep-learning pipeline for noninvasive, MRI-based BRAF mutational status classification for pediatric low-grade glioma (pLGG). Materials and Methods This retrospective study included two pLGG datasets with linked genomic and diagnostic T2-weighted MRI data of patients: BCH (development dataset, n = 214 [60 (28%) BRAF-Fusion, 50 (23%) BRAF V600E, 104 (49%) wild-type), and the Children's Brain Tumor Network (external testing, n = 112 [60 (53%) with BRAF-Fusion, 17 (15%) BRAF-V600E, 35 (32%) wild-type]). A deep learning pipeline was developed to classify BRAF mutational status (V600E versus Fusion versus Wild-Type) via a two-stage process: 1) 3D tumor segmentation and extraction of axial tumor images, and 2) slice-wise, deep learning-based classification of mutational status. Knowledge-transfer and self-supervised approaches were investigated to prevent model overfitting, with a primary endpoint of the area under the receiver operating characteristic curve (AUC). To enhance model interpretability, we developed a novel metric, COMDist (Center of Mass Distance), that quantifies the model attention around the tumor. Results A combination of transfer learning from a pretrained medical imaging-specific network and self-supervised label cross-training (TransferX) coupled with consensus logic yielded the highest classification performance with AUC of 0.82 [95% CI: 0.72-0.91], 0.87 [95% CI: 0.61-0.97], and 0.85[95% CI: 0.66-0.95] for Wild-Type, BRAF-Fusion and BRAF-V600E, respectively, on internal testing. 
On external testing, the pipeline yielded AUC of 0.72 [95% CI: 0.64-0.86], 0.78 [95% CI: 0.61-0.89], and 0.72 [95% CI: 0.64-0.88] for Wild-Type, BRAF-Fusion and BRAF-V600E classes, respectively. Conclusion Transfer learning and self-supervised cross-training improved classification performance and generalizability for noninvasive pLGG mutational status prediction in a limited data scenario. ©RSNA, 2024.
PMID:38446044 | DOI:10.1148/ryai.230333
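The COMDist metric described above (quantifying how far the model's attention sits from the tumor) can be read as the Euclidean distance between two centers of mass. The sketch below is an illustrative interpretation, not the authors' implementation:

```python
import numpy as np

def center_of_mass(weights: np.ndarray) -> np.ndarray:
    """Intensity-weighted center of mass of a 2D map, as (row, col)."""
    grid = np.indices(weights.shape)  # shape (2, H, W)
    return (grid * weights).sum(axis=(1, 2)) / weights.sum()

def comdist(attention_map: np.ndarray, tumor_mask: np.ndarray) -> float:
    """Distance (in pixels) between the attention map's center of mass and
    the tumor mask's; smaller means attention is better centered on the tumor."""
    return float(np.linalg.norm(center_of_mass(attention_map) - center_of_mass(tumor_mask)))

# Attention concentrated exactly on the tumor gives a distance of 0.
mask = np.zeros((8, 8))
mask[2:4, 2:4] = 1.0
print(comdist(mask, mask))  # 0.0
```

In practice the attention map would come from a saliency method (e.g., Grad-CAM) and the mask from the pipeline's segmentation stage.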
Semisupervised Learning for Generalizable Intracranial Hemorrhage Detection and Segmentation
Radiol Artif Intell. 2024 Mar 6:e230077. doi: 10.1148/ryai.230077. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and evaluate a semisupervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set. Materials and Methods This retrospective study used semisupervised learning to bootstrap performance. An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one U.S. institution from 2010-2017 and used to generate pseudo-labels on a separate unlabeled corpus of 25,000 examinations from the RSNA and ASNR. A second "student" model was trained on this combined pixel-and pseudo-labeled dataset. Hyperparameter tuning was performed on a validation set of 93 scans. Testing for both classification (n = 481 examinations) and segmentation (n = 23 examinations, or 529 images) was performed on CQ500, a dataset of 481 scans performed in India, to evaluate out-of-distribution generalizability. The semisupervised model was compared with a baseline model trained on only labeled data using area under the receiver operating characteristic curve (AUC), Dice similarity coefficient (DSC), and average precision (AP) metrics. Results The semisupervised model achieved statistically significantly higher examination AUC on CQ500 compared with the baseline (0.939 [0.938, 0.940] versus 0.907 [0.906, 0.908]) (P = .009). It also achieved a higher DSC (0.829 [0.825, 0.833] versus 0.809 [0.803, 0.812]) (P = .012) and Pixel AP (0.848 [0.843, 0.853]) versus 0.828 [0.817, 0.828]) compared with the baseline. 
Conclusion The addition of unlabeled data in a semisupervised learning framework demonstrates stronger generalizability potential for intracranial hemorrhage detection and segmentation compared with a supervised baseline. ©RSNA, 2024.
PMID:38446043 | DOI:10.1148/ryai.230077
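The teacher-student bootstrapping described above follows a simple three-step recipe: train a teacher on the labeled data, pseudo-label the unlabeled corpus, then train a student on the union. The sketch below substitutes a toy tabular classifier for the CT segmentation networks, purely to show the loop:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for a small labeled set and a large unlabeled corpus.
X_labeled = rng.normal(size=(200, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(2000, 5))

# 1) "Teacher" trained on the labeled data only.
teacher = LogisticRegression().fit(X_labeled, y_labeled)

# 2) Teacher generates pseudo-labels for the unlabeled corpus.
pseudo = teacher.predict(X_unlabeled)

# 3) "Student" trained on the combined labeled + pseudo-labeled set.
X_all = np.vstack([X_labeled, X_unlabeled])
y_all = np.concatenate([y_labeled, pseudo])
student = LogisticRegression().fit(X_all, y_all)
```

A common refinement, not shown here, is to keep only pseudo-labels the teacher predicts with high confidence before training the student.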
Development and Validation of a Deep Learning Model to Reduce the Interference of Rectal Artifacts in MRI-based Prostate Cancer Diagnosis
Radiol Artif Intell. 2024 Mar 6:e230362. doi: 10.1148/ryai.230362. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop an MRI-based model for clinically significant prostate cancer (csPCa) diagnosis that can resist rectal artifact interference. Materials and Methods This retrospective study included 2203 male patients with prostate lesions who underwent biparametric MRI and biopsy between January 2019 and June 2023. A targeted adversarial training with proprietary adversarial samples (TPAS) strategy was proposed to enhance model resistance against rectal artifacts. The automated csPCa diagnostic models trained with and without TPAS were compared using multicenter validation datasets. The impact of rectal artifacts on diagnostic performance of each model at the patient and lesion levels was compared using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC). The AUC between models was compared using the Delong test, and the AUPRC was compared using the bootstrap method. Results The TPAS model exhibited diagnostic performance improvements of 6% at the patient level (AUC: 0.87 versus 0.81; P < .001) and 7% at the lesion level (AUPRC: 0.84 versus 0.77; P = .007) compared with the control model. The TPAS model demonstrated less performance decline in the presence of rectal artifact-pattern adversarial noise than the control model (ΔAUC: -17% versus -19%; ΔAUPRC: -18% versus -21%). The TPAS model performed better than the control model in patients with moderate (AUC: 0.79 versus 0.73; AUPRC: 0.68 versus 0.61), and severe (AUC: 0.75 versus 0.57; AUPRC: 0.69 versus 0.59) artifacts. 
Conclusion This study demonstrates that the TPAS model can reduce rectal artifact interference in MRI-based PCa diagnosis, thereby improving its performance in clinical applications. Published under a CC BY 4.0 license.
PMID:38446042 | DOI:10.1148/ryai.230362
Structural modeling of ion channels using AlphaFold2, RoseTTAFold2, and ESMFold
Channels (Austin). 2024 Dec;18(1):2325032. doi: 10.1080/19336950.2024.2325032. Epub 2024 Mar 6.
ABSTRACT
Ion channels play key roles in human physiology and are important targets in drug discovery. The atomic-scale structures of ion channels provide invaluable insights into a fundamental understanding of the molecular mechanisms of channel gating and modulation. Recent breakthroughs in deep learning-based computational methods, such as AlphaFold, RoseTTAFold, and ESMFold have transformed research in protein structure prediction and design. We review the application of AlphaFold, RoseTTAFold, and ESMFold to structural modeling of ion channels using representative voltage-gated ion channels, including human voltage-gated sodium (NaV) channel - NaV1.8, human voltage-gated calcium (CaV) channel - CaV1.1, and human voltage-gated potassium (KV) channel - KV1.3. We compared AlphaFold, RoseTTAFold, and ESMFold structural models of NaV1.8, CaV1.1, and KV1.3 with corresponding cryo-EM structures to assess details of their similarities and differences. Our findings shed light on the strengths and limitations of the current state-of-the-art deep learning-based computational methods for modeling ion channel structures, offering valuable insights to guide their future applications for ion channel research.
PMID:38445990 | DOI:10.1080/19336950.2024.2325032
pyM2aia: Python interface for mass spectrometry imaging with focus on Deep Learning
Bioinformatics. 2024 Mar 5:btae133. doi: 10.1093/bioinformatics/btae133. Online ahead of print.
ABSTRACT
SUMMARY: Python is the most commonly used language for deep learning (DL). Existing Python packages for mass spectrometry imaging (MSI) data are not optimized for DL tasks. We therefore introduce pyM2aia, a Python package for MSI data analysis with a focus on memory-efficient handling, processing and convenient data-access for DL applications. pyM2aia provides interfaces to its parent application M2aia, which offers interactive capabilities for exploring and annotating MSI data in imzML format. pyM2aia utilizes the image input and output routines, data formats, and processing functions of M2aia, ensures data interchangeability, and enables the writing of readable and easy-to-maintain DL pipelines by providing batch generators for typical MSI data access strategies. We showcase the package in several examples, including imzML metadata parsing, signal processing, ion-image generation, and, in particular, DL model training and inference for spectrum-wise approaches, ion-image-based approaches, and approaches that use spectral and spatial information simultaneously.
AVAILABILITY: Python package, code and examples are available at (https://m2aia.github.io/m2aia).
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:38445753 | DOI:10.1093/bioinformatics/btae133
The performance of large language models in intercollegiate Membership of the Royal College of Surgeons examination
Ann R Coll Surg Engl. 2024 Mar 6. doi: 10.1308/rcsann.2024.0023. Online ahead of print.
ABSTRACT
INTRODUCTION: Large language models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT) and Bard, utilise deep learning algorithms that have been trained on massive datasets of text and code to generate human-like responses. Several studies have demonstrated satisfactory performance on postgraduate examinations, including the United States Medical Licensing Examination. We aimed to evaluate artificial intelligence performance in Part A of the intercollegiate Membership of the Royal College of Surgeons (MRCS) examination.
METHODS: The MRCS mock examination from Pastest, a commonly used question bank for examinees, was used to assess the performance of three LLMs: GPT-3.5, GPT 4.0 and Bard. Three hundred mock questions were input into the three LLMs, and the responses provided by the LLMs were recorded and analysed. The pass mark was set at 70%.
RESULTS: The overall accuracies for GPT-3.5, GPT 4.0 and Bard were 67.33%, 71.67% and 65.67%, respectively (p = 0.27). The performances of GPT-3.5, GPT 4.0 and Bard in Applied Basic Sciences were 68.89%, 72.78% and 63.33% (p = 0.15), respectively. Furthermore, the three LLMs obtained correct answers in 65.00%, 70.00% and 69.17% of the Principles of Surgery in General questions (p = 0.67). There were no differences in performance in the overall and subcategories among the three LLMs.
CONCLUSIONS: Our findings demonstrated satisfactory performance for all three LLMs in the MRCS Part A examination, with GPT 4.0 being the only LLM that achieved the set pass mark.
PMID:38445611 | DOI:10.1308/rcsann.2024.0023
Deep learning-based multimodel prediction for disease-free survival status of patients with clear cell renal cell carcinoma after surgery: a multicenter cohort study
Int J Surg. 2024 Mar 4. doi: 10.1097/JS9.0000000000001222. Online ahead of print.
ABSTRACT
BACKGROUND: Although separate analyses of individual factors can somewhat improve prognostic performance, integration of multimodal information into a single signature is necessary to stratify patients with clear cell renal cell carcinoma (ccRCC) for adjuvant therapy after surgery.
METHODS: A total of 414 patients with whole slide images, computed tomography images, and clinical data from three patient cohorts were retrospectively analyzed. The authors applied deep learning and machine learning algorithms to construct three single-modality prediction models for disease-free survival of ccRCC based on whole slide images, cell segmentation, and computed tomography images, respectively. A multimodel prediction signature (MMPS) for disease-free survival was further developed by combining the three single-modality prediction models and the tumor stage/grade system. The prognostic performance of the model was also verified in two independent validation cohorts.
RESULTS: The single-modality prediction models performed well in predicting the disease-free survival status of ccRCC. The MMPS achieved higher area under the curve values of 0.742, 0.917, and 0.900 in the three independent patient cohorts, respectively. MMPS could distinguish patients with worse disease-free survival, with hazard ratios (HRs) of 12.90 (95% CI: 2.443-68.120, P<0.0001), 11.10 (95% CI: 5.467-22.520, P<0.0001), and 8.27 (95% CI: 1.482-46.130, P<0.0001) in the three patient cohorts. In addition, MMPS outperformed the single-modality prediction models and current clinical prognostic factors, and could complement current risk stratification for adjuvant therapy of ccRCC.
CONCLUSION: Our novel multimodel prediction analysis for disease-free survival exhibited significant improvements in prognostic prediction for patients with ccRCC. After further validation in multiple centers and regions, the multimodal system could be a practical tool for clinicians in the treatment of patients with ccRCC.
PMID:38445478 | DOI:10.1097/JS9.0000000000001222
Development and validation of a deep learning radiomics model with clinical-radiological characteristics for the identification of occult peritoneal metastases in patients with pancreatic ductal adenocarcinoma
Int J Surg. 2024 Mar 4. doi: 10.1097/JS9.0000000000001213. Online ahead of print.
ABSTRACT
BACKGROUND: Occult peritoneal metastases (OPM) in patients with pancreatic ductal adenocarcinoma (PDAC) are frequently overlooked during imaging. We aimed to develop and validate a CT-based deep learning-based radiomics (DLR) model to identify OPM in PDAC before treatment.
METHODS: This retrospective, bicentric study included 302 patients with PDAC (training: n=167, OPM-positive, n=22; internal test: n=72, OPM-positive, n=9; external test: n=63, OPM-positive, n=9) who had undergone baseline CT examinations between January 2012 and October 2022. Handcrafted radiomics (HCR) and DLR features of the tumor and HCR features of the peritoneum were extracted from CT images. Mutual information and least absolute shrinkage and selection operator (LASSO) algorithms were used for feature selection. A combined model, which incorporated the selected clinical-radiological, HCR, and DLR features, was developed with a logistic regression classifier using data from the training cohort and validated in the test cohorts.
RESULTS: Three clinical-radiological characteristics (carbohydrate antigen 19-9 and CT-based T and N stages), nine HCR features of the tumor, 14 DLR features of the tumor, and three HCR features of the peritoneum were retained after feature selection. The combined model yielded satisfactory predictive performance, with an area under the curve (AUC) of 0.853 (95% confidence interval [CI], 0.790-0.903), 0.845 (95% CI, 0.740-0.919), and 0.852 (95% CI, 0.740-0.929) in the training, internal test, and external test cohorts, respectively (all P<0.05). The combined model showed better discrimination than the clinical-radiological model in the training (AUC=0.853 vs. 0.612, P<0.001) and the total test (AUC=0.842 vs. 0.638, P<0.05) cohorts. The decision curves revealed that the combined model had greater clinical applicability than the clinical-radiological model.
CONCLUSIONS: The model combining CT-based deep learning radiomics and clinical-radiological features showed satisfactory performance for predicting occult peritoneal metastases in patients with pancreatic ductal adenocarcinoma.
PMID:38445459 | DOI:10.1097/JS9.0000000000001213
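The feature-selection recipe described in the methods (mutual-information screening followed by an L1/LASSO-style logistic regression classifier) can be sketched with scikit-learn on synthetic stand-in features; the feature counts, k, and C below are illustrative choices, not the study's:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Stand-ins for concatenated clinical-radiological, handcrafted, and deep features.
X = rng.normal(size=(300, 60))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),  # mutual-information screening
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),  # L1 sparsity
)
model.fit(X, y)
```

Screening by mutual information first keeps nonlinearly informative features that a purely linear penalty might discard, while the L1 term then drives the remaining uninformative coefficients to zero.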
Dynamic changes in AI-based analysis of endometrial cellular composition: Analysis of PCOS and RIF endometrium
J Pathol Inform. 2024 Feb 1;15:100364. doi: 10.1016/j.jpi.2024.100364. eCollection 2024 Dec.
ABSTRACT
BACKGROUND: The human endometrium undergoes a monthly cycle of tissue growth and degeneration. During the mid-secretory phase, the endometrium establishes an optimal niche for embryo implantation by regulating cellular composition (e.g., epithelial and stromal cells) and differentiation. Impaired endometrial development observed in conditions such as polycystic ovary syndrome (PCOS) and recurrent implantation failure (RIF) contributes to infertility. Surprisingly, despite the importance of the endometrial lining properly developing prior to pregnancy, precise measures of endometrial cellular composition in these two infertility-associated conditions are entirely lacking. Additionally, current methods for measuring the epithelial and stromal area have limitations, including intra- and inter-observer variability and efficiency.
METHODS: We utilized a deep-learning artificial intelligence (AI) model, created on a cloud-based platform and developed in our previous study. The AI model underwent training to segment areas populated by epithelial and stromal endometrial cells. During the training step, a total area of 28.36 mm2 was annotated, comprising 2.56 mm2 of epithelium and 24.87 mm2 of stroma. Two experienced pathologists validated the performance of the AI model. A total of 73 endometrial samples from healthy control women were included in the sample set to establish cycle phase-dependent dynamics of the endometrial epithelial-to-stroma ratio from the proliferative (PE) to secretory (SE) phases. In addition, 91 samples from PCOS cases, accounting for the presence or absence of ovulation and representing all menstrual cycle phases, and 29 samples from RIF patients on day 5 after progesterone administration in the hormone replacement treatment cycle were also included and analyzed in terms of cellular composition.
RESULTS: Our AI model exhibited reliable and reproducible performance in delineating epithelial and stromal compartments, achieving an accuracy of 92.40% and 99.23%, respectively. Moreover, the performance of the AI model was comparable to the pathologists' assessment, with F1 scores exceeding 82% for the epithelium and >96% for the stroma. Next, we compared the endometrial epithelial-to-stromal ratio during the menstrual cycle in women with PCOS and in relation to endometrial receptivity status in RIF patients. The ovulatory PCOS endometrium exhibited epithelial cell proportions similar to those of control and healthy women's samples in every cycle phase, from the PE to the late SE, correlating with progesterone levels (control SE, r2 = 0.64, FDR < 0.001; PCOS SE, r2 = 0.52, FDR < 0.001). The mid-SE endometrium showed the highest epithelial percentage compared to both the early and late SE endometrium in both healthy women and PCOS patients. Anovulatory PCOS cases showed epithelial cellular fractions comparable to those of PCOS cases in the PE (Anovulatory, 14.54%; PCOS PE, 15.56%, p = 1.00). We did not observe significant differences in the epithelial-to-stroma ratio in the hormone-induced endometrium in RIF patients with different receptivity statuses.
CONCLUSION: The AI model rapidly and accurately identifies endometrial histology features by calculating areas occupied by epithelial and stromal cells. The AI model demonstrates changes in epithelial cellular proportions according to the menstrual cycle phase and reveals no changes in epithelial cellular proportions based on PCOS and RIF conditions. In conclusion, the AI model can potentially improve endometrial histology assessment by accelerating the analysis of the cellular composition of the tissue and by ensuring maximal objectivity for research and clinical purposes.
PMID:38445292 | PMC:PMC10914580 | DOI:10.1016/j.jpi.2024.100364
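The epithelial-to-stroma composition the model reports reduces to a simple area fraction over the two segmented compartments; a minimal sketch with hypothetical areas:

```python
def epithelial_fraction(epithelium_mm2: float, stroma_mm2: float) -> float:
    """Epithelium as a percentage of the combined epithelial + stromal area."""
    return 100.0 * epithelium_mm2 / (epithelium_mm2 + stroma_mm2)

# Hypothetical segmented areas (mm2) for a single sample:
print(epithelial_fraction(4.0, 16.0))  # 20.0
```

Tracking this percentage across samples staged by cycle phase is what yields the phase-dependent dynamics described above.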
Automated volumetric evaluation of intracranial compartments and cerebrospinal fluid distribution on emergency trauma head CT scans to quantify mass effect
Front Neurosci. 2024 Feb 19;18:1341734. doi: 10.3389/fnins.2024.1341734. eCollection 2024.
ABSTRACT
BACKGROUND: Intracranial space is divided into three compartments by the falx cerebri and tentorium cerebelli. We assessed whether cerebrospinal fluid (CSF) distribution evaluated by a specifically developed deep-learning neural network (DLNN) could assist in quantifying mass effect.
METHODS: Head trauma CT scans from a high-volume emergency department between 2018 and 2020 were retrospectively analyzed. Manual segmentations of intracranial compartments and CSF served as the ground truth to develop a DLNN model to automate the segmentation process. Dice Similarity Coefficient (DSC) was used to evaluate the segmentation performance. Supratentorial CSF Ratio was calculated by dividing the volume of CSF on the side with reduced CSF reserve by the volume of CSF on the opposite side.
RESULTS: Two hundred and seventy-four patients (mean age, 61 years ± 18.6) after traumatic brain injury (TBI) who had an emergency head CT scan were included. The average DSCs for the training and validation datasets were 0.782 and 0.765, respectively. Lower DSCs were observed for CSF segmentation: 0.589, 0.615, and 0.572 for the right supratentorial, left supratentorial, and infratentorial CSF regions in the training dataset, with slightly lower values of 0.567, 0.574, and 0.556, respectively, in the validation dataset. Twenty-two patients (8%) had a midline shift exceeding 5 mm, and 24 (8.8%) presented with a high/mixed-density lesion exceeding 25 ml. Fifty-five patients (20.1%) exhibited mass effect requiring neurosurgical treatment. They had a lower supratentorial CSF volume and a lower Supratentorial CSF Ratio (both p < 0.001). A Supratentorial CSF Ratio below 60% had a sensitivity of 74.5% and a specificity of 87.7% (AUC 0.88, 95% CI 0.82-0.94) in identifying patients requiring neurosurgical treatment for mass effect. By contrast, patients with CSF constituting 10-20% of the intracranial space, with 80-90% of CSF specifically in the supratentorial compartment, and whose Supratentorial CSF Ratio exceeded 80% had minimal risk.
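The 60% cut-off reported above amounts to a simple threshold rule whose sensitivity and specificity can be tabulated from a labeled cohort. A minimal sketch, where `sens_spec` and the toy cohort are hypothetical (ratios expressed as fractions), not the study's data or code:

```python
def sens_spec(ratios, needs_surgery, threshold=0.60):
    """Sensitivity and specificity of the rule
    'Supratentorial CSF Ratio below threshold → flag for surgery'."""
    tp = sum(r < threshold and y for r, y in zip(ratios, needs_surgery))
    fn = sum(r >= threshold and y for r, y in zip(ratios, needs_surgery))
    tn = sum(r >= threshold and not y for r, y in zip(ratios, needs_surgery))
    fp = sum(r < threshold and not y for r, y in zip(ratios, needs_surgery))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical cohort: per-patient ratios and eventual need for surgery
ratios = [0.45, 0.55, 0.70, 0.85, 0.90, 0.50]
labels = [True, True, True, False, False, False]
sens, spec = sens_spec(ratios, labels)
```

Sweeping `threshold` over the observed ratios and plotting sensitivity against (1 − specificity) yields the ROC curve behind the reported AUC of 0.88.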
CONCLUSION: CSF distribution may be presented as quantifiable ratios that help to predict surgery in patients after TBI. Automated segmentation of intracranial compartments using the DLNN model demonstrates the potential of artificial intelligence in quantifying mass effect. Further validation of the described method is necessary to confirm its efficacy in triaging patients and identifying those who require neurosurgical treatment.
PMID:38445256 | PMC:PMC10913188 | DOI:10.3389/fnins.2024.1341734
Between neurons and networks: investigating mesoscale brain connectivity in neurological and psychiatric disorders
Front Neurosci. 2024 Feb 20;18:1340345. doi: 10.3389/fnins.2024.1340345. eCollection 2024.
ABSTRACT
The study of brain connectivity has been a cornerstone in understanding the complexities of neurological and psychiatric disorders. It has provided invaluable insights into the functional architecture of the brain and how it is perturbed in disorders. However, a persistent challenge has been achieving the proper spatial resolution and developing computational algorithms to address biological questions at the multi-cellular level, a scale often referred to as the mesoscale. Historically, neuroimaging studies of brain connectivity have predominantly focused on the macroscale, providing insights into inter-regional brain connections but often falling short of resolving the intricacies of neural circuitry at the cellular or mesoscale level. This limitation has hindered our ability to fully comprehend the underlying mechanisms of neurological and psychiatric disorders and to develop targeted interventions. In light of this issue, our review seeks to bridge this critical gap by delving into the domain of mesoscale neuroimaging. We aim to provide a comprehensive overview of conditions affected by aberrant neural connections, image acquisition techniques, feature extraction, and data analysis methods that are specifically tailored to the mesoscale. We further delineate the potential of brain connectivity research to elucidate complex biological questions, with a particular focus on schizophrenia and epilepsy. This review encompasses topics such as dendritic spine quantification, single neuron morphology, and brain region connectivity. We aim to showcase the applicability and significance of mesoscale neuroimaging techniques in the field of neuroscience, highlighting their potential for gaining insights into the complexities of neurological and psychiatric disorders.
PMID:38445254 | PMC:PMC10912403 | DOI:10.3389/fnins.2024.1340345
Dataset of directional room impulse responses for realistic speech data
Data Brief. 2024 Feb 22;53:110229. doi: 10.1016/j.dib.2024.110229. eCollection 2024 Apr.
ABSTRACT
Obtaining real-world multi-channel speech recordings is expensive and time-consuming. Therefore, multi-channel recordings are often artificially generated by convolving existing monaural speech recordings with simulated Room Impulse Responses (RIRs) from a so-called shoebox room [1] for stationary (non-moving) speakers. However, far-field speech processing for home automation or smart assistants has to cope with moving speakers in reverberant environments. With this dataset, we aim to support the generation of realistic speech data by providing multiple directional RIRs along a fine grid of locations in a real room. We provide directional RIR recordings for a classroom and a large corridor. These RIRs can be used to simulate moving speakers by generating random trajectories on that grid and quantizing the trajectories along the grid points. For each matching grid point, the monaural speech recording can be convolved with the RIR at that grid point. The spatialized recording can then be compiled using the overlap-add method for each grid point [2]. An example is provided with the data.
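The per-grid-point convolution and overlap-add compilation described above can be sketched as follows; `spatialize` and its arguments are illustrative assumptions (single output channel, one RIR per grid index), not the dataset's reference code:

```python
import numpy as np

def spatialize(speech, rirs, grid_path, hop):
    """Simulate a moving speaker by overlap-add of per-frame convolutions.

    speech    : monaural signal (1-D array)
    rirs      : dict mapping grid index -> RIR (1-D array)
    grid_path : quantized trajectory, one grid index per hop-sized frame
    hop       : frame length in samples
    """
    rir_len = max(len(r) for r in rirs.values())
    out = np.zeros(len(speech) + rir_len - 1)
    for i, g in enumerate(grid_path):
        start = i * hop
        frame = speech[start:start + hop]
        if frame.size == 0:
            break
        seg = np.convolve(frame, rirs[g])   # convolve frame with the local RIR
        out[start:start + len(seg)] += seg  # overlap-add into the output
    return out
```

With a unit-impulse RIR at every grid point, the output reproduces the dry speech, which is a convenient sanity check before using the measured directional RIRs.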
PMID:38445201 | PMC:PMC10912595 | DOI:10.1016/j.dib.2024.110229
A multichannel electromyography dataset for continuous intraoperative neurophysiological monitoring of cranial nerve
Data Brief. 2024 Feb 27;53:110250. doi: 10.1016/j.dib.2024.110250. eCollection 2024 Apr.
ABSTRACT
Continuous Intraoperative Neurophysiological Monitoring (cIONM) is a widely used technology to improve surgical outcomes and prevent cranial nerve injury during skull base surgery. Monitoring of the free-running electromyogram (EMG) plays an important role in cIONM, as it can be used to identify different discharge patterns and promptly alert the surgeon to potential nerve damage. In this dataset, we collected clinical multichannel EMG signals from 11 independent patients using a Neuromaster G1 MEE-2000 system (Nihon Kohden, Inc., Tokyo, Japan). Through innovative classification methods, these signals were categorized into seven different categories. Channels 1 and 2 captured continuous EMG signals from the facial nerve (cranial nerve VII), while channels 3 to 6 covered the V, XI, X, and XII cranial nerves. This is the first time that intraoperative EMG signals have been collated, presented as a dataset, and labelled by professional neurophysiologists. These data can be utilized to develop neural network architectures in deep learning, machine learning, pattern recognition, and other commonly employed biomedical engineering research methods, thereby providing valuable information to enhance the safety and efficacy of surgical procedures.
PMID:38445198 | PMC:PMC10914548 | DOI:10.1016/j.dib.2024.110250
SleepMI: An AI-based screening algorithm for myocardial infarction using nocturnal electrocardiography
Heliyon. 2024 Feb 16;10(4):e26548. doi: 10.1016/j.heliyon.2024.e26548. eCollection 2024 Feb 29.
ABSTRACT
Myocardial infarction (MI) is a common cardiovascular disease, the early diagnosis of which is essential for effective treatment and reduced mortality. Therefore, novel methods are required for automatic screening or early diagnosis of MI, and many studies have proposed diverse conventional methods for its detection. In this study, we aimed to develop a sleep-myocardial infarction (sleepMI) algorithm for automatic screening of MI based on nocturnal electrocardiography (ECG) findings from diagnostic polysomnography (PSG) data using artificial intelligence (AI) models. The proposed sleepMI algorithm was designed using representation and ensemble learning methods and optimized via dropout and batch normalization. In the sleepMI algorithm, a deep convolutional neural network and a Light Gradient Boosting Machine (LightGBM) model were combined to obtain robust and stable performance in screening for MI from nocturnal ECG findings. The nocturnal ECG signal was extracted from 2,691 participants (2,331 healthy individuals and 360 patients with MI) from the PSG data of the second follow-up stage of the Sleep Heart Health Study. For each participant, the nocturnal ECG signal was extracted starting 3 h after sleep onset and segmented into 30-s intervals. All ECG datasets were divided into training, validation, and test sets consisting of 574,729, 143,683, and 718,412 segments, respectively. The proposed sleepMI model exhibited very high performance, with precision, recall, and F1-score all at 99.38%. The total mean accuracy for automatic screening of MI using a nocturnal single-lead ECG was 99.387%. With our model, MI events can be detected from both conventional 12-lead ECG signals and polysomnographic ECG recordings.
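The preprocessing step described above (extraction starting 3 h after sleep onset, segmentation into 30-s windows) can be sketched as follows; `segment_ecg` and its parameters are hypothetical, not the authors' implementation:

```python
import numpy as np

def segment_ecg(ecg, fs, sleep_onset_s, offset_h=3, win_s=30):
    """Slice a nocturnal single-lead ECG into fixed-length segments.

    ecg           : 1-D array of ECG samples
    fs            : sampling rate in Hz
    sleep_onset_s : sleep onset time, in seconds from recording start
    offset_h      : hours after sleep onset at which extraction begins
    win_s         : segment length in seconds
    """
    start = int((sleep_onset_s + offset_h * 3600) * fs)
    win = int(win_s * fs)
    n = max(0, (len(ecg) - start) // win)  # only complete windows are kept
    return [ecg[start + i * win : start + (i + 1) * win] for i in range(n)]
```

Each returned 30-s segment would then be fed to the CNN feature extractor, with the LightGBM stage operating on the learned representations in an ensemble.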
PMID:38444951 | PMC:PMC10912038 | DOI:10.1016/j.heliyon.2024.e26548