Deep learning
IPCT-Net: Parallel information bottleneck modality fusion network for obstructive sleep apnea diagnosis
Neural Netw. 2024 Oct 20;181:106836. doi: 10.1016/j.neunet.2024.106836. Online ahead of print.
ABSTRACT
Obstructive sleep apnea (OSA) is a common sleep breathing disorder and timely diagnosis helps to avoid the serious medical expenses caused by related complications. Existing deep learning (DL)-based methods primarily focus on single-modal models, which cannot fully mine task-related representations. This paper develops a modality fusion representation enhancement (MFRE) framework adaptable to flexible modality fusion types with the objective of improving OSA diagnostic performance, and providing quantitative evidence for clinical diagnostic modality selection. The proposed parallel information bottleneck modality fusion network (IPCT-Net) can extract local-global multi-view representations and eliminate redundant information in modality fusion representations through branch sharing mechanisms. We utilize large-scale real-world home sleep apnea test (HSAT) multimodal data to comprehensively evaluate relevant modality fusion types. Extensive experiments demonstrate that the proposed method significantly outperforms existing methods in terms of participant numbers and OSA diagnostic performance. The proposed MFRE framework delves into modality fusion in OSA diagnosis and contributes to enhancing the screening performance of artificial intelligence (AI)-assisted diagnosis for OSA.
PMID:39471579 | DOI:10.1016/j.neunet.2024.106836
An end-to-end bi-objective approach to deep graph partitioning
Neural Netw. 2024 Oct 21;181:106823. doi: 10.1016/j.neunet.2024.106823. Online ahead of print.
ABSTRACT
Graphs are ubiquitous in real-world applications, such as computation graphs and social networks. Partitioning large graphs into smaller, balanced partitions is often essential, with the bi-objective graph partitioning problem aiming to minimize both the "cut" across partitions and the imbalance in partition sizes. However, existing heuristic methods face scalability challenges or overlook partition balance, leading to suboptimal results. Recent deep learning approaches, while promising, typically focus only on node-level features and lack a truly end-to-end framework, resulting in limited performance. In this paper, we introduce a novel method based on graph neural networks (GNNs) that leverages multilevel graph features and addresses the problem end-to-end through a bi-objective formulation. Our approach explores node-, local-, and global-level features, and introduces a well-bounded bi-objective function that minimizes the cut while ensuring partition-wise balance across all partitions. Additionally, we propose a GNN-based deep model incorporating a Hardmax operator, allowing the model to optimize partitions in a fully end-to-end manner. Experimental results on 12 datasets across various applications and scales demonstrate that our method significantly improves both partitioning quality and scalability compared to existing bi-objective and deep graph partitioning baselines.
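The bi-objective idea, minimizing the cut while keeping partition sizes balanced, can be illustrated with a small differentiable loss over soft assignments produced by a GNN. The sketch below is a generic formulation under assumed definitions of the cut and balance terms, not the paper's exact objective or its Hardmax operator.

```python
import torch

def bi_objective_partition_loss(S, A, alpha=1.0):
    """Generic bi-objective partitioning loss over soft assignments.

    S: (n, k) soft partition assignments (rows sum to 1, e.g. softmax output of a GNN)
    A: (n, n) dense adjacency matrix
    alpha: weight of the balance term
    """
    n, k = S.shape
    # Expected cut: probability that an edge's endpoints fall in different partitions.
    same_part = S @ S.t()                      # (n, n) P(u and v in the same partition)
    cut = (A * (1.0 - same_part)).sum() / 2.0  # each undirected edge counted once

    # Balance term: penalize deviation of expected partition sizes from n / k.
    sizes = S.sum(dim=0)                       # expected size of each partition
    balance = ((sizes - n / k) ** 2).sum() / n

    return cut + alpha * balance

# toy usage on a 6-node graph with two natural clusters
A = torch.tensor([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=torch.float32)
logits = torch.randn(6, 2, requires_grad=True)   # would come from a GNN in practice
loss = bi_objective_partition_loss(torch.softmax(logits, dim=1), A)
loss.backward()
```

In an end-to-end setting, the logits would be the output of the GNN over multilevel features, and a Hardmax-style operator would discretize the assignments while the soft loss supplies gradients.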
PMID:39471576 | DOI:10.1016/j.neunet.2024.106823
Enhancing soil nitrogen measurement via visible-near infrared spectroscopy: Integrating soil particle size distribution with long short-term memory models
Spectrochim Acta A Mol Biomol Spectrosc. 2024 Oct 22;327:125317. doi: 10.1016/j.saa.2024.125317. Online ahead of print.
ABSTRACT
High-quality soil nitrogen data, essential for both improved agricultural management and the ecological environment, traditionally depend on labor-intensive chemical procedures. Visible-near-infrared (Vis-NIR) spectroscopy, acknowledged for its efficiency, environmental compatibility, and rapidity, emerges as a promising alternative. However, the effectiveness of Vis-NIR measurement models is significantly compromised by soil particle size distribution (PSD), presenting a substantial challenge to improving measurement accuracy and reliability. Here, an innovative deep learning methodology that integrates PSD with Vis-NIR spectroscopy was proposed for measuring the nitrogen content of soil samples. Leveraging the LUCAS dataset, different strategies for integrating PSD with Vis-NIR spectral data in deep learning models were explored, revealing that the proposed InSGraL framework, which incorporates mixed features of PSD and spectra as LSTM inputs, achieves superior performance. Compared with models using Vis-NIR data alone, InSGraL exhibits a 39.47% reduction in RMSE and a 42.55% decrease in MAE, and demonstrates robust performance across various land cover types, achieving an R2 of 0.94 on grassland samples. Moreover, Shapley Additive exPlanations (SHAP) analysis revealed that incorporating PSD modifies the spectral input importance distribution, effectively mitigating spectral interference from particle size while highlighting critical wavelengths that were previously obscured. This study provides an innovative modeling strategy to mitigate the influence of PSD by integrating it within a Vis-NIR deep learning framework, contributing to a deeper understanding of the relationship between PSD and Vis-NIR spectra for nitrogen measurement and offering an effective means of obtaining soil nitrogen data.
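A minimal sketch of how PSD descriptors might be mixed with Vis-NIR reflectance as LSTM input: the spectrum is split into fixed-length band segments and the PSD fractions are appended to every step. The layer sizes, segment length, and concatenation scheme are assumptions for illustration, not the InSGraL configuration.

```python
import torch
import torch.nn as nn

class MixedPSDSpectraLSTM(nn.Module):
    """Sketch: spectrum split into band segments, PSD fractions appended to each step."""
    def __init__(self, seg_len=50, psd_dim=3, hidden=64):
        super().__init__()
        self.seg_len = seg_len
        self.lstm = nn.LSTM(seg_len + psd_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # soil nitrogen content

    def forward(self, spectra, psd):
        # spectra: (B, n_bands) reflectance; psd: (B, psd_dim) e.g. clay/silt/sand fractions
        B, n_bands = spectra.shape
        segs = spectra.view(B, n_bands // self.seg_len, self.seg_len)
        psd_rep = psd.unsqueeze(1).expand(-1, segs.size(1), -1)
        x = torch.cat([segs, psd_rep], dim=-1)           # mixed PSD + spectral features
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                     # regress from the last hidden state

model = MixedPSDSpectraLSTM()
nitrogen_hat = model(torch.rand(8, 200), torch.rand(8, 3))  # 200 bands, 3 PSD fractions
```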
PMID:39471554 | DOI:10.1016/j.saa.2024.125317
Deep contrastive learning for predicting cancer prognosis using gene expression values
Brief Bioinform. 2024 Sep 23;25(6):bbae544. doi: 10.1093/bib/bbae544.
ABSTRACT
Recent advancements in image classification have demonstrated that contrastive learning (CL) can aid in further learning tasks by acquiring good feature representation from a limited number of data samples. In this paper, we applied CL to tumor transcriptomes and clinical data to learn feature representations in a low-dimensional space. We then utilized these learned features to train a classifier to categorize tumors into a high- or low-risk group of recurrence. Using data from The Cancer Genome Atlas (TCGA), we demonstrated that CL can significantly improve classification accuracy. Specifically, our CL-based classifiers achieved an area under the receiver operating characteristic curve (AUC) greater than 0.8 for 14 types of cancer, and an AUC greater than 0.9 for 3 types of cancer. We also developed CL-based Cox (CLCox) models for predicting cancer prognosis. Our CLCox models trained with the TCGA data outperformed existing methods significantly in predicting the prognosis of 19 types of cancer under consideration. The performance of CLCox models and CL-based classifiers trained with TCGA lung and prostate cancer data were validated using the data from two independent cohorts. We also show that the CLCox model trained with the whole transcriptome significantly outperforms the Cox model trained with the 16 genes of Oncotype DX that is in clinical use for breast cancer patients. The trained models and the Python codes are publicly accessible and provide a valuable resource that will potentially find clinical applications for many types of cancer.
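The contrastive pretraining step can be sketched with a SimCLR-style NT-Xent loss in which two noisy views of the same tumor expression profile form a positive pair; the encoder, augmentations, and temperature below are illustrative assumptions, and the downstream risk classifier or Cox model would then be fit on the learned embeddings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# placeholder encoder mapping a ~20k-gene expression vector to a 64-d embedding
encoder = nn.Sequential(nn.Linear(20000, 512), nn.ReLU(), nn.Linear(512, 64))

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2B, d)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

x = torch.rand(32, 20000)                                # batch of expression profiles
view1 = x + 0.01 * torch.randn_like(x)                   # simple noise augmentation
view2 = F.dropout(x, p=0.1)                              # feature-dropout augmentation
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
```

After pretraining, `encoder(x)` would supply the low-dimensional features for the recurrence classifier or a Cox proportional-hazards model.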
PMID:39471411 | DOI:10.1093/bib/bbae544
Classifying histopathological growth patterns for resected colorectal liver metastasis with a deep learning analysis
BJS Open. 2024 Oct 29;8(6):zrae127. doi: 10.1093/bjsopen/zrae127.
ABSTRACT
BACKGROUND: Histopathological growth patterns are one of the strongest prognostic factors in patients with resected colorectal liver metastases. Development of an efficient, objective and ideally automated histopathological growth pattern scoring method can substantially help the implementation of histopathological growth pattern assessment in daily practice and research. This study aimed to develop and validate a deep-learning algorithm, namely neural image compression, to distinguish desmoplastic from non-desmoplastic histopathological growth patterns of colorectal liver metastases based on digital haematoxylin and eosin-stained slides.
METHODS: The algorithm was developed using digitalized whole-slide images obtained in a single-centre (Erasmus MC Cancer Institute, the Netherlands) cohort of patients who underwent first curative-intent resection for colorectal liver metastases between January 2000 and February 2019. External validation was performed on whole-slide images of patients resected between October 2004 and December 2017 in another institution (Radboud University Medical Center, the Netherlands). The primary outcome was the automated classification of the dichotomous hepatic growth patterns, distinguishing between desmoplastic and non-desmoplastic growth patterns, by the deep-learning model; the secondary outcome was the correlation with overall survival of the histopathological growth pattern as assessed manually by histopathology and as assessed using neural image compression.
RESULTS: Nine hundred and thirty-two patients, corresponding to 3641 whole-slide images, were reviewed to develop the algorithm, and 870 whole-slide images were used for external validation. Median follow-up for the development and validation cohorts was 43 and 29 months respectively. The neural image compression approach achieved significant discriminatory power for classifying 100% desmoplastic histopathological growth pattern, with an area under the curve of 0.93 in the development cohort and 0.95 upon external validation. Both the manually scored and the neural image compression-classified histopathological growth patterns achieved similar multivariable hazard ratios for desmoplastic versus non-desmoplastic growth pattern in the development cohort (manual score: 0.63 versus neural image compression: 0.64) and in the validation cohort (manual score: 0.40 versus neural image compression: 0.48).
CONCLUSIONS: The neural image compression approach is suitable for pathology-based classification tasks of colorectal liver metastases.
PMID:39471410 | DOI:10.1093/bjsopen/zrae127
Artificial Intelligence-based Software for Breast Arterial Calcification Detection on Mammograms
J Breast Imaging. 2024 Oct 29:wbae064. doi: 10.1093/jbi/wbae064. Online ahead of print.
ABSTRACT
OBJECTIVE: The performance of a commercially available artificial intelligence (AI)-based software that detects breast arterial calcifications (BACs) on mammograms is presented.
METHODS: This retrospective study was exempt from IRB approval and adhered to the HIPAA regulations. Breast arterial calcification detection using AI was assessed in 253 patients who underwent 314 digital mammography (DM) examinations and 143 patients who underwent 277 digital breast tomosynthesis (DBT) examinations between October 2004 and September 2022. Artificial intelligence performance for binary BAC detection was compared with the ground truth (GT) determined by the majority consensus of breast imaging radiologists. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV), accuracy, and BAC prevalence rates of the AI algorithm were compared.
RESULTS: The case-level AUCs of AI were 0.96 (0.93-0.98) for DM and 0.95 (0.92-0.98) for DBT. Sensitivity, specificity, and accuracy were 87% (79%-93%), 92% (88%-96%), and 91% (87%-94%) for DM and 88% (80%-94%), 90% (84%-94%), and 89% (85%-92%) for DBT. Positive predictive value and NPV were 82% (72%-89%) and 95% (92%-97%) for DM and 84% (76%-90%) and 92% (88%-96%) for DBT, respectively. Values in parentheses are 95% confidence intervals. Breast arterial calcification prevalence was similar for the AI and GT assessments.
CONCLUSION: Breast AI software for detection of BAC presence on mammograms showed promising performance for both DM and DBT examinations. Artificial intelligence has potential to aid radiologists in detection and reporting of BAC on mammograms, which is a known cardiovascular risk marker specific to women.
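The case-level metrics reported above can be computed from binary ground-truth labels and AI scores with standard tooling; the sketch below is a generic illustration with scikit-learn (the study's operating threshold and confidence-interval procedure are not shown).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def bac_detection_metrics(y_true, y_score, threshold=0.5):
    """Case-level AUC, sensitivity, specificity, PPV, NPV, and accuracy for binary BAC detection."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "auc": roc_auc_score(y_true, y_score),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# toy example: ground truth from radiologist consensus, scores from the AI software
print(bac_detection_metrics([0, 0, 1, 1, 1, 0], [0.1, 0.4, 0.8, 0.7, 0.3, 0.2]))
```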
PMID:39471407 | DOI:10.1093/jbi/wbae064
MAR-YOLOv9: A multi-dataset object detection method for agricultural fields based on YOLOv9
PLoS One. 2024 Oct 29;19(10):e0307643. doi: 10.1371/journal.pone.0307643. eCollection 2024.
ABSTRACT
With the development of deep learning technology, object detection has been widely applied in various fields. However, in cross-dataset object detection, conventional deep learning models often face performance degradation issues. This is particularly true in the agricultural field, where there is a multitude of crop types and a complex and variable environment. Existing technologies still face performance bottlenecks when dealing with diverse scenarios. To address these issues, this study proposes a lightweight, cross-dataset enhanced object detection method for the agricultural domain based on YOLOv9, named Multi-Adapt Recognition-YOLOv9 (MAR-YOLOv9). The traditional 32x downsampling Backbone network has been optimized, and a 16x downsampling Backbone network has been innovatively designed. A more streamlined and lightweight Main Neck structure has been introduced, along with innovative methods for feature extraction, up-sampling, and Concat connection. The hybrid connection strategy allows the model to flexibly utilize features from different levels. This solves the issues of increased training time and redundant weights caused by the detection neck and auxiliary branch structures in traditional YOLOv9, enabling MAR-YOLOv9 to maintain high performance while reducing the model's computational complexity and improving detection speed, making it more suitable for real-time detection tasks. In comparative experiments on four plant datasets, MAR-YOLOv9 improved the mAP@0.5 accuracy by 39.18% compared to seven mainstream object detection algorithms, and by 1.28% compared to the YOLOv9 model. At the same time, the model size was reduced by 9.3%, and the number of model layers was decreased, reducing computational costs and storage requirements. Additionally, MAR-YOLOv9 demonstrated significant advantages in detecting complex agricultural images, providing an efficient, lightweight, and adaptable solution for object detection tasks in the agricultural field. The curated data and code can be accessed at the following link: https://github.com/YangxuWangamI/MAR-YOLOv9.
PMID:39471150 | DOI:10.1371/journal.pone.0307643
Asymmetrical Contrastive Learning Network via Knowledge Distillation for No-Service Rail Surface Defect Detection
IEEE Trans Neural Netw Learn Syst. 2024 Oct 29;PP. doi: 10.1109/TNNLS.2024.3479453. Online ahead of print.
ABSTRACT
Owing to extensive research on deep learning, significant progress has recently been made in no-service rail surface defect detection (SDD). Nevertheless, existing algorithms face two main challenges. First, although depth features contain rich spatial structure information, most models accept only red-green-blue (RGB) features as input, which severely constrains performance. This study therefore proposes a dual-stream teacher model, termed the asymmetrical contrastive learning network (ACLNet-T), which extracts both RGB and depth features to achieve high performance. Second, the dual-stream design leads to a substantial increase in the number of parameters. As a solution, we designed a single-stream student model (ACLNet-S) that extracts only RGB features. We leveraged a contrastive distillation loss via knowledge distillation (KD) techniques to transfer rich multimodal features from ACLNet-T to ACLNet-S pixel by pixel and channel by channel. Furthermore, to compensate for the contrastive distillation loss focusing exclusively on local features, we employed multiscale graph mapping to establish long-range dependencies and transfer global features to ACLNet-S through a multiscale graph mapping distillation loss. Finally, an attentional distillation loss based on an adaptive attention decoder (AAD) was designed to further improve the performance of ACLNet-S. Consequently, we obtained ACLNet-S*, which achieves performance similar to that of ACLNet-T despite a nearly eightfold gap in parameter count. Through comprehensive experiments on the industrial RGB-D dataset NEU RSDDS-AUG, ACLNet-S* (ACLNet-S with KD) was confirmed to outperform 16 state-of-the-art methods. Moreover, to showcase the generalization capacity of ACLNet-S*, the proposed network was evaluated on three additional public datasets, on which it achieved comparable results. The code is available at https://github.com/Yuride0404127/ACLNet-KD.
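A minimal sketch of pixel-wise and channel-wise feature distillation from a frozen RGB-D teacher to an RGB-only student: the terms below are generic L2 and cosine alignments between matched feature maps, not the paper's contrastive, graph-mapping, or attentional distillation losses.

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(f_student, f_teacher):
    """f_student, f_teacher: (B, C, H, W) feature maps from matched network stages."""
    f_teacher = f_teacher.detach()                          # teacher is frozen during KD
    # Pixel-wise term: align the response at every spatial location.
    pixel_loss = F.mse_loss(f_student, f_teacher)
    # Channel-wise term: align normalized per-channel statistics.
    s = F.normalize(f_student.flatten(2).mean(-1), dim=1)   # (B, C)
    t = F.normalize(f_teacher.flatten(2).mean(-1), dim=1)
    channel_loss = (1.0 - (s * t).sum(dim=1)).mean()        # cosine distance
    return pixel_loss + channel_loss

loss = feature_distillation_loss(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```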
PMID:39471124 | DOI:10.1109/TNNLS.2024.3479453
Spectral Super-Resolution in Frequency Domain
IEEE Trans Neural Netw Learn Syst. 2024 Oct 29;PP. doi: 10.1109/TNNLS.2024.3481060. Online ahead of print.
ABSTRACT
Spectral super-resolution aims to reconstruct a hyperspectral image (HSI) from its corresponding RGB image and has drawn increasing attention in the remote sensing field. Recent advances in applying deep learning models to spectral super-resolution have demonstrated great potential. However, these methods work only in the spectral-spatial domain and rarely explore the potential properties of the frequency domain. In this work, we make a first attempt to address spectral super-resolution in the frequency domain. To merge frequency information into the super-resolution network, a spectral-spatial-frequency domain fusion network (SSFDF) is designed, which consists of three key parts: a frequency-domain feature learning module, a spectral-spatial domain feature learning module, and a feature fusion module. In more detail, a frequency-domain feature learning network is first employed to extract the frequency-domain information of the input data. Then, a symmetric convolutional neural network (CNN) is developed to acquire the spectral-spatial features of the input data, where a parameter-sharing strategy is used to reduce the number of network parameters. Finally, a feature fusion module is proposed to reconstruct the HSI. Comprehensive experiments on several datasets reveal that our method attains state-of-the-art reconstruction results compared with other spectral super-resolution techniques.
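The frequency-domain branch can be illustrated by taking the 2-D FFT of the RGB input and feeding the amplitude and phase to a small CNN whose output would later be fused with the spectral-spatial features. This is a generic construction under assumed channel counts, not the SSFDF architecture itself.

```python
import torch
import torch.nn as nn

class FrequencyBranch(nn.Module):
    """Extracts frequency-domain features from an RGB image via a 2-D FFT."""
    def __init__(self, out_ch=32):
        super().__init__()
        # amplitude + phase of 3 RGB channels -> 6 input channels
        self.conv = nn.Sequential(nn.Conv2d(6, out_ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())

    def forward(self, rgb):                        # rgb: (B, 3, H, W)
        spec = torch.fft.fft2(rgb, norm="ortho")   # complex spectrum per channel
        feats = torch.cat([spec.abs(), spec.angle()], dim=1)
        return self.conv(feats)

rgb = torch.rand(2, 3, 64, 64)
freq_feats = FrequencyBranch()(rgb)   # to be fused later with spectral-spatial features
```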
PMID:39471122 | DOI:10.1109/TNNLS.2024.3481060
A Multimodal Consistency-Based Self-Supervised Contrastive Learning Framework for Automated Sleep Staging in Patients with Disorders of Consciousness
IEEE J Biomed Health Inform. 2024 Oct 29;PP. doi: 10.1109/JBHI.2024.3487657. Online ahead of print.
ABSTRACT
Sleep is a fundamental human activity, and automated sleep staging holds considerable investigational potential. Although numerous deep learning methods proposed for sleep staging exhibit notable performance, several challenges remain unresolved, including inadequate representation and generalization capabilities, limitations in multimodal feature extraction, the scarcity of labeled data, and restricted practical applicability to patients with disorders of consciousness (DOC). This paper proposes MultiConsSleepNet, a multimodal consistency-based sleep staging network. The network comprises a unimodal feature extractor and a multimodal consistency feature extractor, aiming to explore universal representations of electroencephalograms (EEGs) and electrooculograms (EOGs) and to extract the consistency of intra- and intermodal features. Additionally, self-supervised contrastive learning strategies are designed for unimodal and multimodal consistency learning to address the situation in clinical practice where high-quality labeled data are difficult to obtain but unlabeled data are abundant. This effectively alleviates the model's dependence on labeled data and improves its generalizability for transfer to DOC patients. Experimental results on three publicly available datasets demonstrate that MultiConsSleepNet achieves state-of-the-art sleep staging performance with limited labeled data and effectively utilizes unlabeled data, enhancing its practical applicability. Furthermore, the proposed model yields promising results on a self-collected DOC dataset, offering a novel perspective for sleep staging research in patients with DOC.
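A minimal sketch of intermodal consistency learning: EEG and EOG embeddings of the same 30-s epoch are treated as a positive pair under a symmetric InfoNCE objective. The encoders, embedding size, and temperature are placeholders, not MultiConsSleepNet's actual design.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(z_eeg, z_eog, tau=0.1):
    """z_eeg, z_eog: (B, d) embeddings of the same sleep epochs from the two modalities."""
    z_eeg, z_eog = F.normalize(z_eeg, dim=1), F.normalize(z_eog, dim=1)
    logits = z_eeg @ z_eog.t() / tau            # (B, B) similarity of EEG epoch i vs EOG epoch j
    targets = torch.arange(z_eeg.size(0))       # the matching epoch is the positive pair
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy usage with random embeddings standing in for unimodal encoder outputs
loss = cross_modal_infonce(torch.randn(16, 128), torch.randn(16, 128))
```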
PMID:39471113 | DOI:10.1109/JBHI.2024.3487657
Attention-based q-space Deep Learning Generalized for Accelerated Diffusion Magnetic Resonance Imaging
IEEE J Biomed Health Inform. 2024 Oct 29;PP. doi: 10.1109/JBHI.2024.3487755. Online ahead of print.
ABSTRACT
Diffusion magnetic resonance imaging (dMRI) is a non-invasive method for capturing the microanatomical information of tissues by measuring the diffusion weighted signals along multiple directions, which is widely used in the quantification of microstructures. Obtaining microscopic parameters requires dense sampling in the q space, leading to significant time consumption. The most popular approach to accelerating dMRI acquisition is to undersample the q-space data, along with applying deep learning methods to reconstruct quantitative diffusion parameters. However, the reliance on a predetermined q-space sampling strategy often constrains traditional deep learning-based reconstructions. The present study proposed a novel deep learning model, named attention-based q-space deep learning (aqDL), to implement the reconstruction with variable q-space sampling strategies. The aqDL maps dMRI data from different scanning strategies onto a common feature space by using a series of Transformer encoders. The latent features are employed to reconstruct dMRI parameters via a multilayer perceptron. The performance of the aqDL model was assessed utilizing the Human Connectome Project datasets at varying undersampling numbers. To validate its generalizability, the model was further tested on two additional independent datasets. Our results showed that aqDL consistently achieves the highest reconstruction accuracy at various undersampling numbers, regardless of whether variable or predetermined q-space scanning strategies are employed. These findings suggest that aqDL has the potential to be used on general clinical dMRI datasets.
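The core idea, mapping a variable number of q-space measurements into a common feature space with Transformer encoders before an MLP regresses diffusion parameters, can be sketched as below. The token encoding (signal plus gradient direction and b-value), layer sizes, and pooling are assumptions for illustration, not the aqDL implementation.

```python
import torch
import torch.nn as nn

class QSpaceTransformer(nn.Module):
    """Each q-space sample is a token (signal, bvec_x, bvec_y, bvec_z, bval); sample count may vary."""
    def __init__(self, d_model=64, n_params=3):
        super().__init__()
        self.embed = nn.Linear(5, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.mlp = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, n_params))

    def forward(self, samples):                 # samples: (B, n_meas, 5), n_meas may vary
        tokens = self.encoder(self.embed(samples))
        return self.mlp(tokens.mean(dim=1))     # pool over measurements -> microstructural parameters

model = QSpaceTransformer()
out_sparse = model(torch.rand(2, 30, 5))        # an undersampled scheme with 30 measurements
out_dense = model(torch.rand(2, 90, 5))         # a denser scheme handled by the same model
```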
PMID:39471111 | DOI:10.1109/JBHI.2024.3487755
Comparison of time-series models for predicting physiological metrics under sedation
J Clin Monit Comput. 2024 Oct 29. doi: 10.1007/s10877-024-01237-z. Online ahead of print.
ABSTRACT
This study presents a comprehensive comparison of multiple time-series models applied to physiological metric prediction. It aims to explore the effectiveness of statistical and pharmacokinetic-pharmacodynamic prediction models as well as modern deep learning approaches. Specifically, the study focuses on predicting the bispectral index (BIS), a vital metric used in anesthesia to assess the depth of sedation during surgery, using datasets collected from real-life surgeries. The goal is to evaluate and compare model performance under both univariate and multivariate schemes. Accurate BIS prediction is essential for avoiding under- or over-sedation, which can lead to adverse outcomes. The study investigates a range of models. The traditional mathematical models include the pharmacokinetic-pharmacodynamic model and statistical models such as the autoregressive integrated moving average (ARIMA) and vector autoregression (VAR). The deep learning models encompass recurrent neural networks (RNNs), specifically long short-term memory (LSTM) and gated recurrent unit (GRU) networks, as well as temporal convolutional networks (TCNs) and Transformer models. The analysis evaluates model performance in predicting the BIS using two distinct datasets of physiological metrics collected from actual surgical procedures. It explores both univariate and multivariate prediction schemes and investigates how different combinations of features and input sequence lengths affect model accuracy. The experimental findings reveal significant performance differences among the models. In univariate BIS prediction, the LSTM model demonstrates a 2.88% improvement over the second-best model; in multivariate prediction, the LSTM model outperforms the next best model by 6.67%. Furthermore, adding electromyography (EMG) and mean arterial pressure (MAP) as inputs brings a significant accuracy improvement when predicting BIS. The study emphasizes the importance of selecting and building appropriate time-series models to achieve accurate predictions in biomedical applications. This research provides insights to guide future efforts in improving vital-sign prediction methodologies for clinical and research purposes. Clinically, improved prediction of physiological parameters can inform clinicians of the need for intervention when an anomaly is detected or predicted.
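A minimal sketch of the multivariate LSTM setup: a sliding window of past BIS, EMG, and MAP values is mapped to the next BIS value. The window length, layer sizes, and feature set are illustrative assumptions rather than the study's configuration; the univariate scheme is the same model with a single input feature.

```python
import torch
import torch.nn as nn

class BISForecaster(nn.Module):
    """Predict the next BIS value from a window of past physiological metrics."""
    def __init__(self, n_features=3, hidden=64):   # assumed features: BIS, EMG, MAP
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (B, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])               # one-step-ahead BIS estimate

model = BISForecaster()
window = torch.rand(4, 60, 3)                      # 60 past time steps, 3 metrics
next_bis = model(window)                           # univariate case: n_features=1
```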
PMID:39470955 | DOI:10.1007/s10877-024-01237-z
Style harmonization of panoramic radiography using deep learning
Oral Radiol. 2024 Oct 29. doi: 10.1007/s11282-024-00782-2. Online ahead of print.
ABSTRACT
OBJECTIVES: This study aimed to harmonize panoramic radiograph images from different equipment in a single institution to display similar styles.
METHODS: A total of 15,624 panoramic images were acquired using two different units: 8079 images from a Rayscan Alpha Plus (R-unit) and 7545 images from a Pax-i plus (P-unit). Among these, 222 image pairs (444 images) from the same patients comprised the test dataset for harmonizing the P-unit images with the R-unit image style using CycleGAN. Objective evaluation included Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS) assessments. Additionally, expert evaluation of the transformed P-unit and R-unit images was conducted by two oral and maxillofacial radiologists. The statistical analysis of LPIPS employed a Student's t-test.
RESULTS: The FID and mean LPIPS values of the transformed P-unit images (7.362, 0.488) were lower than those of the original P-unit images (8.380, 0.519), with a significant difference in LPIPS (p < 0.05). The experts classified 43.3-46.7% of the transformed P-unit images as R-unit images, 20.0-28.3% as P-unit images, and 28.3-33.3% as undetermined.
CONCLUSIONS: CycleGAN has the potential to harmonize panoramic radiograph image styles. Enhancement of the model is anticipated for the application of images produced by additional units.
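The LPIPS comparison between original and CycleGAN-transformed P-unit images against paired R-unit references could be reproduced along these lines; the lpips package, image preprocessing, random placeholder tensors, and the specific form of the t-test are assumptions about the evaluation, not the authors' exact pipeline.

```python
import torch
import lpips                         # pip install lpips
from scipy import stats

loss_fn = lpips.LPIPS(net='alex')    # perceptual-distance backbone used for LPIPS

def lpips_scores(pairs):
    """pairs: list of (img_a, img_b) tensors, each (1, 3, H, W) scaled to [-1, 1]."""
    with torch.no_grad():
        return [loss_fn(a, b).item() for a, b in pairs]

# Placeholder tensors standing in for loaded image pairs
# (original P-unit vs R-unit, and transformed P-unit vs R-unit).
pairs_original = [(torch.rand(1, 3, 256, 256) * 2 - 1,
                   torch.rand(1, 3, 256, 256) * 2 - 1) for _ in range(10)]
pairs_transformed = [(torch.rand(1, 3, 256, 256) * 2 - 1,
                      torch.rand(1, 3, 256, 256) * 2 - 1) for _ in range(10)]

d_orig = lpips_scores(pairs_original)
d_trans = lpips_scores(pairs_transformed)
t_stat, p_value = stats.ttest_ind(d_orig, d_trans)   # Student's t-test, as reported in the study
print(sum(d_orig) / len(d_orig), sum(d_trans) / len(d_trans), p_value)
```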
PMID:39470914 | DOI:10.1007/s11282-024-00782-2
Dynamic entrainment: A deep learning and data-driven process approach for synchronization in the Hodgkin-Huxley model
Chaos. 2024 Oct 1;34(10):103124. doi: 10.1063/5.0219848.
ABSTRACT
Resonance and synchronized rhythm are significant phenomena observed in dynamical systems in nature, particularly in biological contexts. These phenomena can either enhance or disrupt system functioning. Numerous examples illustrate the necessity for organs within the human body to maintain their rhythmic patterns for proper operation. For instance, in the brain, synchronized or desynchronized electrical activities can contribute to neurodegenerative conditions like Huntington's disease. In this paper, we utilize the well-established Hodgkin-Huxley (HH) model, which describes the propagation of action potentials in neurons through conductance-based mechanisms. Employing a "data-driven" approach alongside the outputs of the HH model, we introduce an innovative technique termed "dynamic entrainment." This technique leverages deep learning methodologies to dynamically sustain the system within its entrainment regime. Our findings show that the results of the dynamic entrainment technique match the outputs of the mechanistic (HH) model.
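The conductance-based Hodgkin-Huxley equations underlying this work can be integrated directly; the sketch below uses the standard single-compartment HH model with the classic squid-axon parameters and an illustrative periodic drive, not the paper's learned entrainment controller.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic squid-axon parameters (mV, ms, mS/cm^2, uF/cm^2)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def I_ext(t):
    return 10.0 + 3.0 * np.sin(2 * np.pi * t / 15.0)   # illustrative periodic drive (uA/cm^2)

def hh(t, y):
    V, m, h, n = y
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10));  b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20);                  b_h = 1 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10)); b_n = 0.125 * np.exp(-(V + 65) / 80)
    dV = (I_ext(t) - g_Na * m**3 * h * (V - E_Na)
          - g_K * n**4 * (V - E_K) - g_L * (V - E_L)) / C
    return [dV, a_m * (1 - m) - b_m * m, a_h * (1 - h) - b_h * h, a_n * (1 - n) - b_n * n]

sol = solve_ivp(hh, (0, 200), [-65.0, 0.05, 0.6, 0.32], max_step=0.05)
# sol.y[0] is the membrane-potential trace; comparing spike timing with the drive
# reveals whether the neuron is entrained to the external rhythm.
```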
PMID:39470595 | DOI:10.1063/5.0219848
Structures of Epstein-Barr virus and Kaposi's sarcoma-associated herpesvirus virions reveal species-specific tegument and envelope features
J Virol. 2024 Oct 29:e0119424. doi: 10.1128/jvi.01194-24. Online ahead of print.
ABSTRACT
Epstein-Barr virus (EBV) and Kaposi's sarcoma-associated herpesvirus (KSHV) are classified into the gammaherpesvirus subfamily of Herpesviridae, which stands out from its alpha- and betaherpesvirus relatives due to the tumorigenicity of its members. Although structures of human alpha- and betaherpesviruses by cryogenic electron tomography (cryoET) have been reported, reconstructions of intact human gammaherpesvirus virions remain elusive. Here, we structurally characterize extracellular virions of EBV and KSHV by deep learning-enhanced cryoET, resolving both previously known monomorphic capsid structures and previously unknown pleomorphic features beyond the capsid. Through subtomogram averaging and subsequent tomogram-guided sub-particle reconstruction, we determined the orientation of KSHV nucleocapsids from mature virions with respect to the portal to provide spatial context for the tegument within the virion. Both EBV and KSHV have an eccentric capsid position and polarized distribution of tegument. Tegument species span from the capsid to the envelope and may serve as scaffolds for tegumentation and envelopment. The envelopes of EBV and KSHV are less densely populated with glycoproteins than those of herpes simplex virus 1 (HSV-1) and human cytomegalovirus (HCMV), representative members of alpha- and betaherpesviruses, respectively. Also, we observed fusion protein gB trimers exist within triplet arrangements in addition to standalone complexes, which is relevant to understanding dynamic processes such as fusion pore formation. Taken together, this study reveals nuanced yet important differences in the tegument and envelope architectures among human herpesviruses and provides insights into their varied cell tropism and infection.
IMPORTANCE: Discovered in 1964, Epstein-Barr virus (EBV) is the first identified human oncogenic virus and the founding member of the gammaherpesvirus subfamily. In 1994, another cancer-causing virus was discovered in lesions of AIDS patients and later named Kaposi's sarcoma-associated herpesvirus (KSHV), the second human gammaherpesvirus. Despite the historical importance of EBV and KSHV, technical difficulties with isolating large quantities of these viruses and the pleiomorphic nature of their envelope and tegument layers have limited structural characterization of their virions. In this study, we employed the latest technologies in cryogenic electron microscopy (cryoEM) and tomography (cryoET) supplemented with an artificial intelligence-powered data processing software package to reconstruct 3D structures of the EBV and KSHV virions. We uncovered unique properties of the envelope glycoproteins and tegument layers of both EBV and KSHV. Comparison of these features with their non-tumorigenic counterparts provides insights into their relevance during infection.
PMID:39470208 | DOI:10.1128/jvi.01194-24
Hydrogen bond network structures of protonated 2,2,2-trifluoroethanol/ethanol mixed clusters probed by infrared spectroscopy combined with a deep-learning structure sampling approach: the origin of the linear type network preference in protonated...
Phys Chem Chem Phys. 2024 Oct 29. doi: 10.1039/d4cp03534h. Online ahead of print.
ABSTRACT
While the preferential hydrogen bond network structures of cold protonated alcohol clusters H+(ROH)n generally switch from a linear type to a cyclic one at n = 4-5, those of protonated 2,2,2-trifluoroethanol (TFE) clusters maintain linear type structures at least in the size range of n = 3-7. To explore the origin of the strong linear type network preference of H+(TFE)n, infrared spectra of protonated mixed clusters H+(TFE)m(ethanol)n (m + n = 5) were measured. An efficient structure sampling technique using parallelized basin-hopping algorithms and deep-learning neural network potentials was developed to search for essential isomers of the mixed clusters. Vibrational simulations based on the harmonic superposition approximation were compared with the observed spectra to identify the major isomer component at each mixing ratio. It was found that the cyclic structure forms only in mixed clusters with n ≥ 3, in which the proton-solvating sites and the double acceptor site are occupied by ethanol. The crucial role of the stability of the double acceptor site in the cyclic structure formation is discussed.
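The parallelized basin-hopping plus neural-network-potential sampling cannot be reproduced here, but the basic basin-hopping loop it builds on is available in SciPy. The sketch below samples low-energy minima of a small Lennard-Jones cluster as a stand-in for the learned H+(TFE)m(ethanol)n potential; the particle count and potential are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat_coords):
    """Lennard-Jones energy of a small cluster; stands in for a learned NN potential."""
    x = flat_coords.reshape(-1, 3)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)          # unique pairs only
    r = d[iu]
    return np.sum(4.0 * (r ** -12 - r ** -6))

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(5, 3)).flatten()   # 5 "molecules" as point particles
result = basinhopping(lj_energy, x0, niter=200,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print("lowest-energy isomer candidate:", result.fun)
```

In the study's setting, each local minimization would query the neural network potential instead of `lj_energy`, and many such hops would run in parallel to enumerate candidate isomers for the harmonic superposition analysis.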
PMID:39470069 | DOI:10.1039/d4cp03534h
Deep learning-assisted morphological segmentation for effective particle area estimation and prediction of interfacial properties in polymer composites
Nanoscale. 2024 Oct 29. doi: 10.1039/d4nr01018c. Online ahead of print.
ABSTRACT
The link between the macroscopic properties of polymer nanocomposites and the underlying microstructural features necessitates an understanding of nanoparticle dispersion. The dispersion of nanoparticles introduces variability, potentially leading to clustering and localized accumulation of nanoparticles. This non-uniform dispersion impacts the accuracy of predictive models. In response to this challenge, this study developed an automated and precise technique for particle recognition and detailed mapping of particle positions in scanning electron microscopy (SEM) micrographs. This was achieved by integrating deep convolutional neural networks with advanced image processing techniques. Following particle detection, two dispersion factors were introduced, namely size uniformity and supercritical clustering, to quantify the impact of particle dispersion on properties. These factors, estimated using the computer vision technique, were subsequently used to calculate the effective load-bearing area influenced by the particles. An adapted micromechanical model was then employed to quantify the interfacial strength and thickness of the nanocomposites. This approach enabled the establishment of a correlation between dispersion characteristics and interfacial properties by integrating experimental data, relevant micromechanical models, and quantified dispersion factors. The proposed systematic procedure demonstrates considerable promise in utilizing deep learning to capture and quantify particle dispersion characteristics for structure-property analyses in polymer nanocomposites.
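Once the CNN produces a particle segmentation mask, the per-particle areas and a simple size-uniformity descriptor can be obtained with standard image-processing calls. This is a generic sketch of that post-processing step; the coefficient of variation used here is an assumed stand-in, not the paper's specific dispersion factors.

```python
import numpy as np
from skimage import measure

def particle_statistics(mask, pixel_size_nm=1.0):
    """mask: binary segmentation of particles in an SEM micrograph (True = particle)."""
    labels = measure.label(mask)                       # connected components = particles
    regions = measure.regionprops(labels)
    areas = np.array([r.area for r in regions]) * pixel_size_nm ** 2
    return {
        "n_particles": len(areas),
        "total_area": areas.sum(),                     # effective particle area
        "mean_area": areas.mean(),
        "size_cv": areas.std() / areas.mean(),         # crude size-uniformity indicator
    }

mask = np.zeros((128, 128), dtype=bool)                # toy mask with two "particles"
mask[10:20, 10:20] = True
mask[60:90, 60:90] = True
print(particle_statistics(mask, pixel_size_nm=5.0))
```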
PMID:39469845 | DOI:10.1039/d4nr01018c
Artificial intelligence-based power market price prediction in smart renewable energy systems: Combining prophet and transformer models
Heliyon. 2024 Oct 18;10(20):e38227. doi: 10.1016/j.heliyon.2024.e38227. eCollection 2024 Oct 30.
ABSTRACT
With the increasing integration of smart renewable energy systems and power electronic converters, electricity market price prediction has become particularly important. It is not only crucial for the interests of power suppliers and market regulators but also plays a key role in ensuring the reliable and flexible operation of the power system, particularly during extreme weather events or abnormal conditions. This study develops a hybrid time-series forecasting model that combines Prophet and Transformer, taking advantage of deep learning to provide a new solution for electricity market price forecasting. By introducing a Stacking optimization strategy, the study improves the accuracy and stability of electricity price sequence prediction. In addition, the study integrates traditional time-series forecasting methods (such as the Prophet model) with deep learning models (such as the Transformer model), aiming to exploit their respective advantages to achieve more accurate and stable predictions. Through experimental evaluation on four electricity market datasets, the hybrid forecasting model exhibits significant improvements in the accuracy and stability of electricity market price predictions. The method not only provides a more accurate tool for electricity price prediction but also offers solid technical support for the efficient operation and sustainable development of smart renewable energy systems. The experimental results further show that combining deep learning models with traditional time-series methods and introducing the Stacking strategy is crucial to improving prediction performance; it also aids the understanding and design of smart renewable energy systems and of price and energy management strategies, thereby providing an effective means of achieving efficient and reliable power and energy transmission.
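The stacking idea, combining a Prophet forecast with a second learner through a meta-model, can be sketched as below. A gradient-boosting regressor on lagged values stands in for the Transformer branch, the synthetic hourly series is a placeholder for a real market dataset, and fitting the meta-model on the held-out slice is for brevity only (in practice it would be fit on a separate validation fold).

```python
import numpy as np
import pandas as pd
from prophet import Prophet
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

# synthetic hourly price series standing in for an electricity market dataset
ds = pd.date_range("2023-01-01", periods=1000, freq="h")
y = 50 + 10 * np.sin(np.arange(1000) * 2 * np.pi / 24) + np.random.normal(0, 2, 1000)
df = pd.DataFrame({"ds": ds, "y": y})
train, test = df.iloc[:800], df.iloc[800:]

# base learner 1: Prophet on the raw series
m = Prophet()
m.fit(train)
prophet_pred = m.predict(test[["ds"]])["yhat"].to_numpy()

# base learner 2: lag-based regressor (stand-in for the Transformer branch)
def lag_features(series, n_lags=24):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X_tr, y_tr = lag_features(train["y"].to_numpy())
gbr = GradientBoostingRegressor().fit(X_tr, y_tr)
X_te, _ = lag_features(df["y"].to_numpy()[800 - 24:])   # last 24 train points seed the test lags
gbr_pred = gbr.predict(X_te)

# Stacking: a ridge meta-model combines the two base forecasts.
meta = Ridge().fit(np.column_stack([prophet_pred, gbr_pred]), test["y"].to_numpy())
stacked = meta.predict(np.column_stack([prophet_pred, gbr_pred]))
```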
PMID:39469701 | PMC:PMC11513455 | DOI:10.1016/j.heliyon.2024.e38227
Modeling epithelial-mesenchymal transition in patient-derived breast cancer organoids
Front Oncol. 2024 Oct 14;14:1470379. doi: 10.3389/fonc.2024.1470379. eCollection 2024.
ABSTRACT
Cellular plasticity is enhanced by dedifferentiation processes such as epithelial-mesenchymal transition (EMT). The dynamic and transient nature of EMT-like processes challenges the investigation of cell plasticity in patient-derived breast cancer models. Here, we utilized patient-derived organoids (PDOs) as a model to study the susceptibility of primary breast cancer cells to EMT. Upon induction with TGF-β, PDOs exhibited EMT-like features, including morphological changes, E-cadherin downregulation and cytoskeletal reorganization, leading to an invasive phenotype. Image analysis and the integration of deep learning algorithms enabled the implementation of microscopy-based quantifications, demonstrating reproducible results across organoid lines from different breast cancer patients. Interestingly, epithelial plasticity was also expressed as alterations in luminal and myoepithelial distribution upon TGF-β induction. The effective modeling of dynamic processes such as EMT in organoids, together with their characteristic spatial diversity, highlights their potential to advance research on cancer cell plasticity in cancer patients.
PMID:39469640 | PMC:PMC11513879 | DOI:10.3389/fonc.2024.1470379
Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation
Uncertain Safe Util Mach Learn Med Imaging (2023). 2023 Oct;14291:53-63. doi: 10.1007/978-3-031-44336-7_6. Epub 2023 Oct 7.
ABSTRACT
Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning. However, quantifying and understanding the uncertainty associated with model predictions is crucial in critical clinical applications. While many techniques have been proposed for epistemic or model-based uncertainty estimation, it is unclear which method is preferred in the medical image analysis setting. This paper presents a comprehensive benchmarking study that evaluates epistemic uncertainty quantification methods in organ segmentation in terms of accuracy, uncertainty calibration, and scalability. We provide a comprehensive discussion of the strengths, weaknesses, and out-of-distribution detection capabilities of each method as well as recommendations for future improvements. These findings contribute to the development of reliable and robust models that yield accurate segmentations while effectively quantifying epistemic uncertainty.
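One widely used scalable baseline from this family, Monte Carlo dropout, can be sketched for a segmentation network: dropout stays active at test time, and the spread over repeated stochastic forward passes yields a per-pixel uncertainty map (here summarized by predictive entropy). The tiny network, pass count, and entropy summary are placeholders for illustration, not the benchmark's specific models or metrics.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Placeholder segmentation network with dropout layers."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
            nn.Conv2d(16, n_classes, 3, padding=1))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, image, n_passes=20):
    model.train()                                 # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(n_passes)])
    mean_prob = probs.mean(dim=0)                 # (B, C, H, W) averaged prediction
    entropy = -(mean_prob * mean_prob.clamp_min(1e-8).log()).sum(dim=1)  # predictive entropy map
    return mean_prob.argmax(dim=1), entropy

segmentation, uncertainty = mc_dropout_predict(TinySegNet(), torch.rand(1, 1, 64, 64))
```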
PMID:39469570 | PMC:PMC11514142 | DOI:10.1007/978-3-031-44336-7_6