Deep learning
Comparative analysis of image quality and interchangeability between standard and deep learning-reconstructed T2-weighted spine MRI
Magn Reson Imaging. 2024 Mar 19:S0730-725X(24)00078-X. doi: 10.1016/j.mri.2024.03.022. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: MRI reconstruction of undersampled data using a deep learning (DL) network has recently been adopted as part of accelerated imaging. Herein, we compared DL-reconstructed T2-weighted images (T2-WI) with conventional T2-WI regarding image quality and degenerative lesion detection.
MATERIALS AND METHODS: Sixty-two patients underwent C-spine (n = 27) or L-spine (n = 35) MRIs, including conventional and DL-reconstructed T2-WI. Image quality was assessed with non-uniformity measurement and 4-scale grading of structural visibility. Three readers (R1, R2, R3) independently assessed the presence and types of degenerative lesions. Student's t-test was used to compare non-uniformity measurements. Interprotocol and interobserver agreement of structural visibility was analyzed with the Wilcoxon signed-rank test and weighted-κ values, respectively. The diagnostic equivalence of degenerative lesion detection between the two protocols was assessed with an interchangeability test.
RESULTS: The acquisition time of DL-reconstructed images was reduced to about 21-58% of that of conventional images. Non-uniformity measurements did not differ significantly between the two images (p-value = 0.17). All readers rated DL-reconstructed images as showing the same or superior structural visibility compared to conventional images. Significantly improved visibility was observed at the disk margin of the C-spine (R1, p < 0.001; R2, p = 0.04) and the dorsal root ganglia (R1, p = 0.03; R3, p = 0.02) and facet joints (R1, p = 0.04; R2, p < 0.001; R3, p = 0.03) of the L-spine. Interobserver agreement on image quality varied by structure. Clinical interchangeability between the two protocols for degenerative lesion detection was verified, with the upper bounds of the 95% confidence intervals of the agreement-rate differences all below 5%.
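The interchangeability criterion above (upper bound of the 95% confidence interval of the agreement-rate difference below 5%) can be sketched as follows. The counts and the simple Wald-style interval are illustrative assumptions, not the study's exact individual-equivalence procedure:

```python
from math import sqrt

def interchangeable(agree_ref, agree_new, n, margin=0.05, z=1.96):
    """Wald-style check: the upper 95% CI bound of the agreement-rate
    difference (reference minus new protocol) must stay below the margin."""
    p1, p2 = agree_ref / n, agree_new / n
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    upper = diff + z * se
    return upper, upper < margin

# hypothetical: 600/620 reads agree under conventional T2-WI,
# 590/620 under DL-reconstructed T2-WI
upper, ok = interchangeable(600, 590, 620)
```

With these made-up counts the upper bound lands below the 5% margin, so the protocols would be declared interchangeable for that lesion type.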
CONCLUSIONS: DL-reconstructed T2-WI demonstrates image quality and diagnostic performance comparable to conventional T2-WI in spine imaging, with reduced acquisition time.
PMID:38513791 | DOI:10.1016/j.mri.2024.03.022
PhosAF: An integrated deep learning architecture for predicting protein phosphorylation sites with AlphaFold2 predicted structures
Anal Biochem. 2024 Mar 19:115510. doi: 10.1016/j.ab.2024.115510. Online ahead of print.
ABSTRACT
Phosphorylation is indispensable for understanding biological processes, yet experimental methods for identifying phosphorylation sites are tedious and arduous. With the rapid growth of biotechnology, deep learning methods have made significant progress in site prediction tasks. Nevertheless, most existing predictors consider only protein sequence information, which limits the capture of protein spatial information. Building upon the latest advances in protein structure prediction by AlphaFold2, a novel integrated deep learning architecture, PhosAF, is developed to predict phosphorylation sites in human proteins by integrating CMA-Net and MFC-Net, which consider sequence information and structure information predicted by AlphaFold2. The CMA-Net module combines multiple convolutional neural network layers with multi-head attention to capture local and long-term dependencies in sequence features. Meanwhile, the MFC-Net module, composed of deep neural network layers, captures complex representations of evolutionary and structure features. The different features are then combined to predict the final phosphorylation sites. In addition, we put forward a new strategy for constructing reliable negative samples via protein secondary structures. Experimental results on independent test data and a case study indicate that our model PhosAF surpasses the current most advanced methods in phosphorylation site prediction.
PMID:38513769 | DOI:10.1016/j.ab.2024.115510
Application of deep learning in predicting suspended sediment concentration: A case study in Jiaozhou Bay, China
Mar Pollut Bull. 2024 Mar 20;201:116255. doi: 10.1016/j.marpolbul.2024.116255. Online ahead of print.
ABSTRACT
Previous methodologies for quantifying Suspended Sediment Concentration (SSC) have encompassed in-situ observations, numerical simulations, and analyses of remote sensing datasets, each with inherent constraints. In this study, we harnessed Convolutional Neural Networks (CNNs) to create a deep learning model, applied to remote sensing data procured from the Geostationary Ocean Color Imager (GOCI) spanning April 2011 to March 2021. Our research indicates that on short time scales, wind and hydrodynamic forcing both have a significant impact on the CNN model's predictions, and considering both can effectively improve the model's prediction of SSC. Moreover, we employed CNNs to interpolate absent values within the remote sensing datasets, yielding improvements superior to those attained via linear or multivariate regression techniques. The correlation coefficient between CNN-derived SSC estimates for Jiaozhou Bay (JZB) and the corresponding remote sensing data is 0.72, although the correlation coefficient and root mean square error differ by region: in the shallow waters of JZB, where water-level changes limit the available data, the correlation coefficient is about 0.5-0.6, while in the central region of JZB, with sufficient data, it is generally higher than 0.75. We therefore believe this CNN model can be used to predict the hourly variation of SSC. Juxtaposed with alternative methodologies, the CNN approach economizes computational resources and enhances processing efficiency.
PMID:38513605 | DOI:10.1016/j.marpolbul.2024.116255
eDeeplepsy: An artificial neural framework to reveal different brain states in children with epileptic spasms
Epilepsy Behav. 2024 Mar 20;154:109744. doi: 10.1016/j.yebeh.2024.109744. Online ahead of print.
ABSTRACT
OBJECTIVE: Despite advances, analysis and interpretation of EEG still essentially rely on visual inspection by a super-specialized physician. Considering the vast amount of data that composes the EEG, much of the detail inevitably escapes ordinary human scrutiny. Significant information may not be evident and is missed, and misinterpretation remains a serious problem. Can we develop an artificial intelligence system to accurately and efficiently classify EEG and even reveal novel information? In this study, deep learning techniques and, in particular, Convolutional Neural Networks, have been used to develop a model (which we have named eDeeplepsy) for distinguishing different brain states in children with epilepsy.
METHODS: A novel EEG database from a homogenous pediatric population with epileptic spasms beyond infancy was constituted by epileptologists, representing a particularly intriguing seizure type and challenging EEG. The analysis was performed on such samples from long-term video-EEG recordings, previously coded as images showing how different parts of the epileptic brain are distinctly activated during varying states within and around this seizure type.
RESULTS: Results show that not only could eDeeplepsy differentiate ictal from interictal states but also discriminate brain activity between spasms within a cluster from activity away from clusters, usually undifferentiated by visual inspection. Accuracies between 86% and 94% were obtained for the proposed use cases.
SIGNIFICANCE: We present a model for computer-assisted discrimination that can consistently detect subtle differences in the various brain states of children with epileptic spasms, and which can be used in other settings in epilepsy with the purpose of reducing workload and discrepancies or misinterpretations. The research also reveals previously undisclosed information that allows for a better understanding of the pathophysiology and evolving characteristics of this particular seizure type. It does so by documenting a different state (interspasms) that indicates a potentially non-standard signal with distinctive epileptogenicity at that period.
PMID:38513569 | DOI:10.1016/j.yebeh.2024.109744
Glass box machine learning for retrospective cohort studies using many patient records. The complex example of bleeding peptic ulcer
Comput Biol Med. 2024 Feb 7;173:108085. doi: 10.1016/j.compbiomed.2024.108085. Online ahead of print.
ABSTRACT
Glass Box Machine Learning is, in this study, a type of partially supervised data mining and prediction technique, like a neural network in which each weight or pattern of mutually relevant weights is replaced by a meaningful "probabilistic knowledge element." We apply it to retrospective cohort studies using large numbers of structured medical records to help select candidate patients for future cohort studies and similar clinical trials. Here it is applied to aid the analysis of approaches to Deep Learning, but the method lends itself well to direct computation of odds with "explainability" in study design that can complement "Black Box" Deep Learning. Cohort studies and clinical trials traditionally involved at least one 2 × 2 contingency table, but in the age of emerging personalized medicine and the use of machine learning to discover and incorporate further relevant factors, these tables can extend into many extra dimensions as a 2 × 2 × 2 × 2 × … data structure by considering different conditional demographic and clinical factors of a patient or group, as well as variations in treatment. We consider this in terms of multiple 2 × 2 × 2 data substructures, each summarized by an appropriate measure of risk and success called DOR*: the diagnostic odds ratio (DOR) for a specified disease conditional on a favorable outcome divided by the corresponding DOR conditional on an unfavorable outcome. Bleeding peptic ulcer was chosen as a complex disease with many influencing factors, one that is still subject to controversy and that highlights the challenges of using Real World Data.
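The DOR* measure described above reduces to a ratio of two diagnostic odds ratios, one per outcome stratum of a 2 × 2 × 2 substructure. A minimal sketch, with hypothetical counts and function names (not the study's data):

```python
def dor(tp, fn, fp, tn):
    """Diagnostic odds ratio for one 2x2 table: (tp*tn)/(fp*fn)."""
    return (tp * tn) / (fp * fn)

def dor_star(fav, unfav):
    """DOR* = DOR conditional on a favorable outcome divided by the
    DOR conditional on an unfavorable outcome.
    Each argument is a (tp, fn, fp, tn) tuple for one outcome stratum."""
    return dor(*fav) / dor(*unfav)

# hypothetical counts for one conditional factor combination
ratio = dor_star(fav=(30, 10, 8, 40), unfav=(20, 15, 12, 25))
```

A DOR* well above 1 would indicate that the diagnostic association is much stronger among patients with favorable outcomes, which is the kind of "probabilistic knowledge element" the glass-box approach makes directly inspectable.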
PMID:38513393 | DOI:10.1016/j.compbiomed.2024.108085
Gas Graph Convolutional Transformer for Robust Generalization in Adaptive Gas Mixture Concentration Estimation
ACS Sens. 2024 Mar 21. doi: 10.1021/acssensors.3c02654. Online ahead of print.
ABSTRACT
Gas concentration estimation has a tremendous research significance in various fields. However, existing methods for estimating the concentration of mixed gases generally depend on specific data-preprocessing methods and suffer from poor generalizability to diverse types of gases. This paper proposes a graph neural network-based gas graph convolutional transformer model (GGCT) incorporating the information propagation properties and the physical characteristics of temporal sensor data. GGCT accurately predicts mixed gas concentrations and enhances its generalizability by analyzing the concentration tokens. The experimental results highlight the GGCT's robust performance, achieving exceptional levels of accuracy across most tested gas components, underscoring its strong potential for practical applications in mixed gas analysis.
PMID:38513127 | DOI:10.1021/acssensors.3c02654
HDS-Net: Achieving fine-grained skin lesion segmentation using hybrid encoding and dynamic sparse attention
PLoS One. 2024 Mar 21;19(3):e0299392. doi: 10.1371/journal.pone.0299392. eCollection 2024.
ABSTRACT
Skin cancer is one of the most common malignant tumors worldwide, and early detection is crucial for improving its cure rate. In the field of medical imaging, accurate segmentation of lesion areas within skin images is essential for precise diagnosis and effective treatment. Due to the capacity of deep learning models to conduct adaptive feature learning through end-to-end training, they have been widely applied in medical image segmentation tasks. However, challenges such as boundary ambiguity between normal skin and lesion areas, significant variations in the size and shape of lesion areas, and different types of lesions in different samples pose significant obstacles to skin lesion segmentation. Therefore, this study introduces a novel network model called HDS-Net (Hybrid Dynamic Sparse Network), aiming to address the challenges of boundary ambiguity and variations in lesion areas in skin image segmentation. Specifically, the proposed hybrid encoder can effectively extract local feature information and integrate it with global features. Additionally, a dynamic sparse attention mechanism is introduced, mitigating the impact of irrelevant redundancies on segmentation performance by precisely controlling the sparsity ratio. Experimental results on multiple public datasets demonstrate a significant improvement in Dice coefficients, reaching 0.914, 0.857, and 0.898, respectively.
PMID:38512922 | DOI:10.1371/journal.pone.0299392
Deep-Learning Driven, High-Precision Plasmonic Scattering Interferometry for Single-Particle Identification
ACS Nano. 2024 Mar 21. doi: 10.1021/acsnano.4c01411. Online ahead of print.
ABSTRACT
Label-free probing of the material composition of (bio)nano-objects directly in solution at the single-particle level is crucial in various fields, including colloid analysis and medical diagnostics. However, it remains challenging to decipher the constituents of heterogeneous mixtures of nano-objects with high sensitivity and resolution. Here, we present deep-learning plasmonic scattering interferometric microscopy, which is capable of identifying the composition of nanoparticles automatically with high throughput at the single-particle level. By employing deep learning to decode the quantitative relationship between the interferometric scattering patterns of nanoparticles and their intrinsic material properties, this technique is capable of high-throughput, label-free identification of diverse nanoparticle types. We demonstrate its versatility in analyzing dynamic surface chemical reactions on single nanoparticles, revealing its potential as a universal platform for nanoparticle imaging and reaction analysis. This technique not only streamlines the process of nanoparticle characterization, but also proposes a methodology for a deeper understanding of nanoscale dynamics, holding great potential for addressing extensive fundamental questions in nanoscience and nanotechnology.
PMID:38512797 | DOI:10.1021/acsnano.4c01411
Predicting ICU Interventions: A Transparent Decision Support Model Based on Multivariate Time Series Graph Convolutional Neural Network
IEEE J Biomed Health Inform. 2024 Mar 21;PP. doi: 10.1109/JBHI.2024.3379998. Online ahead of print.
ABSTRACT
In this study, we present a novel approach for predicting interventions for patients in the intensive care unit using a multivariate time series graph convolutional neural network. Our method addresses two critical challenges: the need for timely and accurate decisions based on changing physiological signals, drug administration information, and static characteristics; and the need for interpretability in the decision-making process. Drawing on real-world ICU records from the MIMIC-III dataset, we demonstrate that our approach significantly improves upon existing machine learning and deep learning methods for predicting two targeted interventions, mechanical ventilation and vasopressors. Our model achieved an accuracy improvement from 81.6% to 91.9% and an F1 score improvement from 0.524 to 0.606 for predicting mechanical ventilation interventions. For predicting vasopressor interventions, our model achieved an accuracy improvement from 76.3% to 82.7% and an F1 score improvement from 0.509 to 0.619. We also assessed the interpretability by performing an adjacency matrix importance analysis, which revealed that our model uses clinically meaningful and appropriate features for prediction. This critical aspect can help clinicians gain insights into the underlying mechanisms of interventions, allowing them to make more informed and precise clinical decisions. Overall, our study represents a significant step forward in the development of decision support systems for ICU patient care, providing a powerful tool for improving clinical outcomes and enhancing patient safety.
PMID:38512747 | DOI:10.1109/JBHI.2024.3379998
AutoNet-Generated Deep Layer-Wise Convex Networks for ECG Classification
IEEE Trans Pattern Anal Mach Intell. 2024 Mar 21;PP. doi: 10.1109/TPAMI.2024.3378843. Online ahead of print.
ABSTRACT
The design of neural networks typically involves trial-and-error, a time-consuming process for obtaining an optimal architecture, even for experienced researchers. Additionally, it is widely accepted that loss functions of deep neural networks are generally non-convex with respect to the parameters to be optimised. We propose the Layer-wise Convex Theorem to ensure that the loss is convex with respect to the parameters of a given layer, achieved by constraining each layer to be an overdetermined system of non-linear equations. Based on this theorem, we developed an end-to-end algorithm (the AutoNet) to automatically generate layer-wise convex networks (LCNs) for any given training set. We then demonstrate the performance of the AutoNet-generated LCNs (AutoNet-LCNs) compared to state-of-the-art models on three electrocardiogram (ECG) classification benchmark datasets, with further validation on two non-ECG benchmark datasets for more general tasks. The AutoNet-LCN was able to find networks customised for each dataset without manual fine-tuning in under 2 GPU-hours, and the resulting networks outperformed the state-of-the-art models with fewer than 5% of the parameters on all five benchmark datasets. The efficiency and robustness of the AutoNet-LCN markedly reduce model discovery costs and enable efficient training of deep learning models in resource-constrained settings.
PMID:38512733 | DOI:10.1109/TPAMI.2024.3378843
Development of Machine Learning Algorithm to Predict the Risk of Incontinence After Robot-Assisted Radical Prostatectomy
J Endourol. 2024 Mar 21. doi: 10.1089/end.2024.0057. Online ahead of print.
ABSTRACT
Introduction: Predicting postoperative incontinence beforehand is crucial for intensified and personalized rehabilitation after robot-assisted radical prostatectomy. Although nomograms exist, their retrospective limitations highlight artificial intelligence (AI)'s potential. This study seeks to develop a machine learning algorithm using robot-assisted radical prostatectomy (RARP) data to predict postoperative incontinence, advancing personalized care. Materials and Methods: In this prospective observational study, patients with localized prostate cancer undergoing RARP between April 2022 and January 2023 were assessed. Preoperative variables included age, body mass index, prostate-specific antigen (PSA) levels, digital rectal examination (DRE) results, Gleason score, International Society of Urological Pathology grade, and continence and potency questionnaire responses. Intraoperative factors, postoperative outcomes, and pathological variables were recorded. Urinary continence was evaluated using the Expanded Prostate Cancer Index Composite questionnaire, and machine learning models (XGBoost, Random Forest, Logistic Regression) were explored to predict incontinence risk. The chosen model's SHAP values elucidated variables impacting predictions. Results: A dataset of 227 patients undergoing RARP was considered for the study. Post-RARP complications were predominantly low grade, and urinary continence rates were 74.2%, 80.7%, and 91.4% at 7, 13, and 90 days after catheter removal, respectively. Employing machine learning, XGBoost proved the most effective in predicting postoperative incontinence risk. Significant variables identified by the algorithm included nerve-sparing approach, age, DRE, and total PSA. The model's threshold of 0.67 categorized patients into high or low risk, offering personalized predictions about the risk of incontinence after surgery. Conclusions: Predicting postoperative incontinence is crucial for tailoring rehabilitation after RARP.
A machine learning algorithm, particularly XGBoost, can effectively identify the variables that most heavily impact postoperative continence, allowing an AI-driven model to be built that addresses the current challenges in post-RARP rehabilitation.
PMID:38512711 | DOI:10.1089/end.2024.0057
Image-based artificial intelligence for the prediction of pathological complete response to neoadjuvant chemoradiotherapy in patients with rectal cancer: a systematic review and meta-analysis
Radiol Med. 2024 Mar 21. doi: 10.1007/s11547-024-01796-w. Online ahead of print.
ABSTRACT
OBJECTIVE: Artificial intelligence (AI) holds enormous potential for noninvasively identifying patients with rectal cancer who could achieve pathological complete response (pCR) following neoadjuvant chemoradiotherapy (nCRT). We aimed to conduct a meta-analysis to summarize the diagnostic performance of image-based AI models for predicting pCR to nCRT in patients with rectal cancer.
METHODS: This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A literature search of PubMed, Embase, Cochrane Library, and Web of Science was performed from inception to July 29, 2023. Studies that developed or utilized AI models for predicting pCR to nCRT in rectal cancer from medical images were included. The Quality Assessment of Diagnostic Accuracy Studies-AI was used to appraise the methodological quality of the studies. The bivariate random-effects model was used to summarize the individual sensitivities, specificities, and areas-under-the-curve (AUCs). Subgroup and meta-regression analyses were conducted to identify potential sources of heterogeneity. Protocol for this study was registered with PROSPERO (CRD42022382374).
RESULTS: Thirty-four studies (9933 patients) were identified. Pooled estimates of sensitivity, specificity, and AUC of AI models for pCR prediction were 82% (95% CI: 76-87%), 84% (95% CI: 79-88%), and 90% (95% CI: 87-92%), respectively. Higher specificity was seen for the Asian population, low risk of bias, and deep learning, compared with the non-Asian population, high risk of bias, and radiomics (all P < 0.05). Single-center studies had higher sensitivity than multi-center studies (P = 0.001). Retrospective designs had lower sensitivity (P = 0.012) but higher specificity (P < 0.001) than prospective designs. MRI showed higher sensitivity (P = 0.001) but lower specificity (P = 0.044) than non-MRI. The sensitivity and specificity of internal validation were higher than those of external validation (both P = 0.005).
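As a rough illustration of how per-study estimates are pooled, the sketch below uses fixed-effect inverse-variance pooling on the logit scale — a deliberate simplification of the bivariate random-effects model the review actually used, with hypothetical study sensitivities and sample sizes:

```python
from math import log, exp

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the
    logit scale (a simplified stand-in for a bivariate random-effects
    model, which additionally models between-study variance)."""
    num = den = 0.0
    for p, n in zip(props, ns):
        logit = log(p / (1 - p))
        var = 1 / (n * p * (1 - p))   # delta-method variance of the logit
        w = 1 / var                    # inverse-variance weight
        num += w * logit
        den += w
    pooled = num / den
    return exp(pooled) / (1 + exp(pooled))  # back-transform to a proportion

# hypothetical per-study sensitivities and sample sizes
sens = pool_logit([0.80, 0.85, 0.78], [120, 200, 150])
```

Larger studies receive proportionally more weight, so the pooled value sits closest to the estimates from the best-powered studies.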
CONCLUSIONS: Image-based AI models exhibited favorable performance for predicting pCR to nCRT in rectal cancer. However, further clinical trials are warranted to verify the findings.
PMID:38512622 | DOI:10.1007/s11547-024-01796-w
Computed tomography-based 3D convolutional neural network deep learning model for predicting micropapillary or solid growth pattern of invasive lung adenocarcinoma
Radiol Med. 2024 Mar 21. doi: 10.1007/s11547-024-01800-3. Online ahead of print.
ABSTRACT
PURPOSE: To investigate the value of a computed tomography (CT)-based deep learning (DL) model to predict the presence of micropapillary or solid (M/S) growth pattern in invasive lung adenocarcinoma (ILADC).
MATERIALS AND METHODS: From June 2019 to October 2022, 617 patients with ILADC who underwent preoperative chest CT scans in our institution were randomly placed into training and internal validation sets in a 4:1 ratio, and 353 patients with ILADC from another institution were included as an external validation set. Then, a self-paced learning (SPL) 3D Net was used to establish two DL models: model 1 was used to predict the M/S growth pattern in ILADC, and model 2 was used to predict that pattern in ≤ 2-cm-diameter ILADC.
RESULTS: For model 1, the training cohort's area under the curve (AUC), accuracy, recall, precision, and F1-score were 0.924, 0.845, 0.851, 0.842, and 0.843; the internal validation cohort's were 0.807, 0.744, 0.756, 0.750, and 0.743; and the external validation cohort's were 0.857, 0.805, 0.804, 0.806, and 0.804, respectively. For model 2, the training cohort's AUC, accuracy, recall, precision, and F1-score were 0.946, 0.858, 0.881, 0.844, and 0.851; the internal validation cohort's were 0.869, 0.809, 0.786, 0.794, and 0.790; and the external validation cohort's were 0.831, 0.792, 0.789, 0.790, and 0.790, respectively. The SPL 3D Net model performed better than the ResNet34, ResNet50, ResNeXt50, and DenseNet121 models.
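The accuracy, recall, precision, and F1 figures above derive from confusion-matrix counts in the usual way; a minimal sketch, with hypothetical counts chosen only to produce values of the same order as the training cohort's:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, recall, precision, and F1 from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    rec = tp / (tp + fn)               # sensitivity
    prec = tp / (tp + fp)              # positive predictive value
    f1 = 2 * prec * rec / (prec + rec) # harmonic mean of precision/recall
    return acc, rec, prec, f1

# hypothetical confusion-matrix counts for an M/S-pattern classifier
acc, rec, prec, f1 = metrics(tp=85, fp=16, fn=15, tn=84)
```

AUC, by contrast, is computed from the ranked prediction scores across all thresholds rather than from a single confusion matrix.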
CONCLUSION: The CT-based DL model performed well as a noninvasive screening tool capable of reliably detecting and distinguishing the subtypes of ILADC, even in small-sized tumors.
PMID:38512613 | DOI:10.1007/s11547-024-01800-3
Deep learning using contrast-enhanced ultrasound images to predict the nuclear grade of clear cell renal cell carcinoma
World J Urol. 2024 Mar 21;42(1):184. doi: 10.1007/s00345-024-04889-3.
ABSTRACT
PURPOSE: To assess the effectiveness of a deep learning model using contrast-enhanced ultrasound (CEUS) images in distinguishing between low-grade (grade I and II) and high-grade (grade III and IV) clear cell renal cell carcinoma (ccRCC).
METHODS: A retrospective study was conducted using CEUS images of 177 Fuhrman-graded ccRCCs (93 low-grade and 84 high-grade) from May 2017 to December 2020. A total of 6412 CEUS images were captured from the videos and normalized for subsequent analysis. A deep learning model using the RepVGG architecture was proposed to differentiate between low-grade and high-grade ccRCC. The model's performance was evaluated based on sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC). Class activation mapping (CAM) was used to visualize the specific areas that contribute to the model's predictions.
RESULTS: For discriminating high-grade ccRCC from low-grade, the deep learning model achieved a sensitivity of 74.8%, specificity of 79.1%, accuracy of 77.0%, and an AUC of 0.852 in the test set.
CONCLUSION: The deep learning model based on CEUS images can accurately differentiate between low-grade and high-grade ccRCC in a non-invasive manner.
PMID:38512539 | DOI:10.1007/s00345-024-04889-3
Deep learning-based image reconstruction for the multi-arterial phase images: improvement of the image quality to assess the small hypervascular hepatic tumor on gadoxetic acid-enhanced liver MRI
Abdom Radiol (NY). 2024 Mar 21. doi: 10.1007/s00261-024-04236-5. Online ahead of print.
ABSTRACT
PURPOSE: To evaluate the impact of deep learning (DL)-based image reconstruction on multi-arterial-phase magnetic resonance imaging (MA-MRI) for small hypervascular hepatic masses in patients who underwent gadoxetic acid-enhanced liver MRI.
METHODS: We retrospectively enrolled 55 adult patients (aged ≥ 18 years) with small hypervascular hepatic masses (≤ 3 cm) between December 2022 and February 2023. All patients underwent MA-MRI, subsequently reconstructed with a DL-based application. Qualitative assessment with a Likert scale, covering motion artifact (MA), liver edge (LE), hepatic vessel clarity (HVC), and image quality (IQ), was performed. Quantitative image analysis, including signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and noise, was performed.
RESULTS: On both arterial phases (APs), all qualitative parameters were significantly improved after DL-based image reconstruction (LE on 1st AP, 1.22 vs 1.61; LE on 2nd AP, 1.21 vs 1.65; HVC on 1st AP, 1.24 vs 1.39; HVC on 2nd AP, 1.24 vs 1.44; IQ on 1st AP, 1.17 vs 1.45; IQ on 2nd AP, 1.17 vs 1.47; all p values < 0.05). The SNR, CNR, and noise were also significantly improved after DL-based image reconstruction (SNR on AP1, 279.08 vs 176.14; SNR on AP2, 334.34 vs 199.24; CNR on AP1, 106.09 vs 64.14; CNR on AP2, 129.66 vs 73.73; noise on AP1, 1.51 vs 2.33; noise on AP2, 1.45 vs 2.28; all p values < 0.05).
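The SNR and CNR figures above follow the standard ROI-based definitions; a minimal sketch, with hypothetical ROI measurements rather than the study's data:

```python
def snr(mean_signal, sd_noise):
    """Signal-to-noise ratio: ROI mean signal over background noise SD."""
    return mean_signal / sd_noise

def cnr(mean_lesion, mean_background, sd_noise):
    """Contrast-to-noise ratio: lesion-background ROI difference
    over background noise SD."""
    return (mean_lesion - mean_background) / sd_noise

# hypothetical ROI measurements on a DL-reconstructed arterial phase
s = snr(420.0, 1.5)
c = cnr(580.0, 420.0, 1.5)
```

Because the noise SD sits in the denominator, the roughly halved noise reported after DL reconstruction is enough on its own to roughly double both SNR and CNR.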
CONCLUSIONS: Gadoxetic acid-enhanced MA-MRI with DL-based image reconstruction improved the qualitative and quantitative parameters. Despite the short acquisition time, high-quality MA-MRI is now achievable.
PMID:38512517 | DOI:10.1007/s00261-024-04236-5
Choroidal Vascularity Index and Choroidal Structural Changes in Children With Nephrotic Syndrome
Transl Vis Sci Technol. 2024 Mar 1;13(3):18. doi: 10.1167/tvst.13.3.18.
ABSTRACT
PURPOSE: To investigate the choroidal vascularity index (CVI) and choroidal structural changes in children with nephrotic syndrome.
METHODS: This was a cross-sectional study involving 45 children with primary nephrotic syndrome and 40 normal controls. All participants underwent enhanced depth imaging-optical coherence tomography examinations. An automatic segmentation method based on deep learning was used to segment the choroidal vessels and stroma, and the choroidal volume (CV), vascular volume (VV), and CVI within a 4.5 mm diameter circular area centered around the macular fovea were obtained. Clinical data, including blood lipids, serum proteins, renal function, and renal injury indicators, were collected from the patients.
RESULTS: Compared with normal controls, children with nephrotic syndrome had a significant increase in CV (nephrotic syndrome: 4.132 ± 0.464 vs. normal controls: 3.873 ± 0.574; P = 0.024); no significant change in VV (nephrotic syndrome: 1.276 ± 0.173 vs. normal controls: 1.277 ± 0.165; P = 0.971); and a significant decrease in the CVI (nephrotic syndrome: 0.308 [range, 0.270-0.386] vs. normal controls: 0.330 [range, 0.288-0.387]; P < 0.001). In the correlation analysis, the CVI was positively correlated with serum total protein, serum albumin, serum prealbumin, ratio of serum albumin to globulin, and 24-hour urine volume and was negatively correlated with total cholesterol, low-density lipoprotein cholesterol, urinary protein concentration, and ratio of urinary transferrin to creatinine (all P < 0.05).
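The CVI is the vascular (luminal) volume expressed as a fraction of the total choroidal volume; applying that definition to the mean CV and VV values reported above reproduces the direction of the group difference (the abstract reports medians for the CVI itself, so the exact figures differ slightly):

```python
def cvi(vascular_volume, choroidal_volume):
    """Choroidal vascularity index: vascular volume as a fraction of
    total choroidal volume within the 4.5 mm measurement circle."""
    return vascular_volume / choroidal_volume

# group mean VV and CV from the abstract
patient = cvi(1.276, 4.132)   # nephrotic syndrome
control = cvi(1.277, 3.873)   # normal controls
```

This makes the mechanism of the finding explicit: with vascular volume essentially unchanged, the increased total choroidal volume (presumably stromal swelling) is what drives the CVI down in the patient group.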
CONCLUSIONS: The CVI is significantly reduced in children with nephrotic syndrome, and the decrease in the CVI parallels the severity of kidney disease, indicating choroidal involvement in the process of nephrotic syndrome.
TRANSLATIONAL RELEVANCE: Our findings contribute to a deeper understanding of how nephrotic syndrome affects the choroid.
PMID:38512284 | DOI:10.1167/tvst.13.3.18
Exploring biased activation characteristics by molecular dynamics simulation and machine learning for the μ-opioid receptor
Phys Chem Chem Phys. 2024 Mar 21. doi: 10.1039/d3cp05050e. Online ahead of print.
ABSTRACT
Biased ligands selectively activating specific downstream signaling pathways (termed biased activation) exhibit significant therapeutic potential. However, the conformational characteristics revealed for biased activation are very limited, which is not conducive to biased drug development. Motivated by this issue, we combine extensive accelerated molecular dynamics simulations and an interpretable deep learning model to probe the biased activation features of two complex systems constructed from the inactive μOR and two different biased agonists (the G-protein-biased agonist TRV130 and the β-arrestin-biased agonist endomorphin2). The results indicate that TRV130 binds deeper into the receptor core than endomorphin2, located between W2936.48 and D1142.50, and forms hydrogen bonding with D1142.50, while endomorphin2 binds above W2936.48. The G protein-biased agonist induces greater outward movement of the TM6 intracellular end, forming a typical active conformation, while the β-arrestin-biased agonist leads to a smaller outward movement of TM6. Compared with TRV130, endomorphin2 causes more pronounced inward movement of the TM7 intracellular end and more complex conformational changes of H8 and ICL1. In addition, important residues determining the two different biased activation states were further identified using an interpretable deep learning classification model, including some biased-activation residues common across Class A GPCRs, such as key residues on the TM2 extracellular end, ECL2, the TM5 intracellular end, the TM6 intracellular end, and the TM7 intracellular end, as well as some important residues of ICL3 specific to μOR. These observations provide valuable information for understanding the biased activation mechanism of GPCRs.
PMID:38512140 | DOI:10.1039/d3cp05050e
Work Function Prediction by Graph Neural Networks for Configurationally Hybridized Boron-Doped Graphene
Langmuir. 2024 Mar 21. doi: 10.1021/acs.langmuir.4c00228. Online ahead of print.
ABSTRACT
Graphene, serving as an electrode material, is widely applied in electronic and optoelectronic devices. The work function, one of graphene's fundamental intrinsic characteristics, directly affects the interfacial properties of the electrodes and thereby the performance of the devices. Much work has been done to tune the work function of graphene to expand its range of applications, and doping has been demonstrated to be an effective method. However, the numerous possible configurations of doped graphene make investigating its work function time-consuming and labor-intensive. To quickly obtain the structure-property relationship, this study employs a deep learning method to predict the work function. Specifically, a dataset of over 30,000 boron-doped graphene configurations at different doping concentrations and positions, with work functions computed by density functional theory, was established. Then, a novel fusion model (GT-Net) combining transformers and graph neural networks (GNNs) was proposed, and improved, more effective GNN-based descriptors were developed. Finally, three different GNN methods were compared; the results show that the proposed method accurately predicts the work function, with R2 = 0.975 and RMSE = 0.027. This study not only opens the possibility of designing materials with specific properties at the atomic level but also demonstrates the performance of GNNs on graph-level tasks where the graphs share the same structure and number of atoms.
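The graph-level regression setup described above (atom graph in, one scalar out) can be sketched in a few lines. This is a generic GCN-style forward pass on a toy ring graph, not GT-Net: the node features, weights, and two-layer depth are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy molecular graph: a 6-node ring standing in for a doped-graphene patch.
n_nodes, n_feat, hidden = 6, 4, 8
A = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    A[i, (i + 1) % n_nodes] = A[(i + 1) % n_nodes, i] = 1.0
X = rng.normal(size=(n_nodes, n_feat))  # e.g. element one-hot plus site features

# GCN-style symmetric normalization: A_hat = D^-1/2 (A + I) D^-1/2.
A_self = A + np.eye(n_nodes)
d_inv_sqrt = 1.0 / np.sqrt(A_self.sum(axis=1))
A_hat = A_self * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Two message-passing layers, then mean pooling and a linear readout that
# maps the graph embedding to a single scalar (the predicted work function).
W1 = rng.normal(scale=0.5, size=(n_feat, hidden))
W2 = rng.normal(scale=0.5, size=(hidden, hidden))
w_out = rng.normal(scale=0.5, size=hidden)

def relu(z):
    return np.maximum(z, 0.0)

H = relu(A_hat @ X @ W1)   # layer 1: aggregate neighbors, transform
H = relu(A_hat @ H @ W2)   # layer 2
g = H.mean(axis=0)         # graph-level embedding (mean pooling)
work_function = float(g @ w_out)
print(work_function)
```

Because all boron-doped configurations share the same lattice graph, only the node features (which sites carry boron) change between samples, which is exactly the graph-level regime the abstract highlights.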
PMID:38511875 | DOI:10.1021/acs.langmuir.4c00228
Decoding phenotypic screening: A comparative analysis of image representations
Comput Struct Biotechnol J. 2024 Mar 12;23:1181-1188. doi: 10.1016/j.csbj.2024.02.022. eCollection 2024 Dec.
ABSTRACT
Biomedical imaging techniques such as high-content screening (HCS) are valuable for drug discovery, but high costs limit their use to pharmaceutical companies. To address this issue, the JUMP-CP consortium released a massive open image dataset of chemical and genetic perturbations, providing a valuable resource for deep learning research. In this work, we aim to use the JUMP-CP dataset to develop a universal representation model for HCS data, mainly data generated with U2OS cells and the Cell Painting protocol, using supervised and self-supervised learning approaches. We propose an evaluation protocol that assesses performance on mode-of-action and property-prediction tasks using a popular phenotypic screening dataset. Results show that the self-supervised approach, which uses data from multiple consortium partners, provides representations that are more robust to batch effects while achieving performance on par with standard approaches. Together with other conclusions, this work provides recommendations on the training strategy of a representation model for HCS images.
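To make "self-supervised representation learning" concrete, here is a minimal sketch of an NT-Xent (SimCLR-style) contrastive loss over a batch of paired image embeddings. The encoder is omitted and the embeddings are synthetic; the abstract does not specify which self-supervised objective was used, so treat this as one representative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical batch: N images with two augmented "views" each, already
# embedded by some encoder; rows are L2-normalized embedding vectors.
N, dim, tau = 4, 16, 0.5
z1 = rng.normal(size=(N, dim))
z2 = z1 + 0.1 * rng.normal(size=(N, dim))   # second view = perturbed first view
z = np.concatenate([z1, z2])
z /= np.linalg.norm(z, axis=1, keepdims=True)

# NT-Xent loss: views of the same image attract, every other sample in the
# batch repels. Row i (< N) pairs with row i + N, and vice versa.
sim = z @ z.T / tau
np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
loss = -log_prob[np.arange(2 * N), pos].mean()
print(float(loss))
```

Training an encoder to minimize this loss across images pooled from multiple consortium partners is one plausible route to the batch-effect robustness reported above.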
PMID:38510976 | PMC:PMC10951426 | DOI:10.1016/j.csbj.2024.02.022
Enhancement of handwritten text recognition using AI-based hybrid approach
MethodsX. 2024 Mar 10;12:102654. doi: 10.1016/j.mex.2024.102654. eCollection 2024 Jun.
ABSTRACT
Handwritten text recognition (HTR) stands as a prominent and challenging research domain within computer vision and image processing, with significant implications for diverse applications such as reading bank checks and prescriptions and deciphering characters on various forms. Optical character recognition (OCR) technology tailored to handwritten documents plays a pivotal role in transcribing characters from a range of file formats, encompassing both word and image documents. Challenges in HTR include intricate layout designs, varied handwriting styles, limited datasets, and low recognition accuracy. Recent advancements in deep learning and machine learning algorithms, coupled with vast repositories of unprocessed data, have enabled researchers to make remarkable progress in HTR. This paper addresses these challenges by proposing a hybrid approach whose primary objective is to improve the accuracy of recognizing handwritten text in images. By integrating Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) with a Connectionist Temporal Classification (CTC) decoder, the proposed hybrid model achieved impressive accuracies of 98.50% and 98.80% on the IAM and RIMES datasets, respectively. This underscores the potential and efficacy of combining these neural network architectures to enhance handwritten text recognition accuracy.
• The proposed method introduces a hybrid approach for handwritten text recognition, employing a CNN and BiLSTM with a CTC decoder.
• Results show accuracies of 98.50% and 98.80% on the IAM and RIMES datasets, emphasizing the model's potential for accurately recognizing handwritten text in images.
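The CTC decoder at the end of such a pipeline can be illustrated with greedy (best-path) decoding: take the argmax label per time step, collapse consecutive repeats, then drop the blank symbol. The probability matrix below is hand-made for illustration, not the output of the paper's trained CNN+BiLSTM encoder.

```python
import numpy as np

BLANK = 0
ALPHABET = {1: "c", 2: "a", 3: "t"}  # tiny illustrative label set

def ctc_greedy_decode(probs):
    """Greedy CTC decoding. probs: (T, num_classes) per-frame probabilities."""
    path = probs.argmax(axis=1)      # best label at each time step
    decoded, prev = [], None
    for label in path:
        if label != prev and label != BLANK:
            decoded.append(ALPHABET[int(label)])
        prev = label
    return "".join(decoded)

# 7 frames whose best path is: c, c, blank, a, t, t, blank  ->  "cat"
probs = np.array([
    [0.10, 0.80, 0.05, 0.05],
    [0.10, 0.70, 0.10, 0.10],
    [0.90, 0.02, 0.04, 0.04],
    [0.10, 0.10, 0.70, 0.10],
    [0.05, 0.05, 0.10, 0.80],
    [0.10, 0.10, 0.10, 0.70],
    [0.85, 0.05, 0.05, 0.05],
])
print(ctc_greedy_decode(probs))  # -> cat
```

The repeat-collapse step is why CTC can align a long frame sequence to a short transcript without per-frame character labels; beam-search decoding refines this by keeping multiple candidate paths.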
PMID:38510932 | PMC:PMC10950881 | DOI:10.1016/j.mex.2024.102654