Deep learning
Dynamic entrainment: A deep learning and data-driven process approach for synchronization in the Hodgkin-Huxley model
Chaos. 2024 Oct 1;34(10):103124. doi: 10.1063/5.0219848.
ABSTRACT
Resonance and synchronized rhythm are significant phenomena observed in dynamical systems in nature, particularly in biological contexts. These phenomena can either enhance or disrupt system functioning. Numerous examples illustrate the necessity for organs within the human body to maintain their rhythmic patterns for proper operation. For instance, in the brain, synchronized or desynchronized electrical activities can contribute to neurodegenerative conditions like Huntington's disease. In this paper, we utilize the well-established Hodgkin-Huxley (HH) model, which describes the propagation of action potentials in neurons through conductance-based mechanisms. Employing a "data-driven" approach alongside the outputs of the HH model, we introduce an innovative technique termed "dynamic entrainment." This technique leverages deep learning methodologies to dynamically sustain the system within its entrainment regime. Our findings show that the results of the dynamic entrainment technique match the outputs of the mechanistic HH model.
PMID:39470595 | DOI:10.1063/5.0219848
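For readers who want to experiment with the mechanistic side of this entry, the HH equations are straightforward to integrate directly. The sketch below is not the authors' code: it uses standard squid-axon parameters, simple forward-Euler stepping, and a sinusoidal current term as a hypothetical stand-in for an entraining input.

```python
import numpy as np

def hh_simulate(t_max=100.0, dt=0.01, i_dc=10.0, i_ac=0.0, freq_hz=50.0):
    """Forward-Euler integration of the Hodgkin-Huxley model.

    i_dc: constant drive (uA/cm^2); i_ac: amplitude of a sinusoidal
    drive used here as a stand-in for an entraining stimulus.
    Time is in ms, voltage in mV.
    """
    # Standard HH parameters (squid giant axon)
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4

    n_steps = int(t_max / dt)
    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177   # resting state
    trace = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        # Voltage-dependent rate constants for the gating variables
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)

        i_ext = i_dc + i_ac * np.sin(2e-3 * np.pi * freq_hz * t)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace[i] = v
    return trace

def count_spikes(v, thresh=0.0):
    """Count upward threshold crossings (action potentials)."""
    return int(np.sum((v[:-1] < thresh) & (v[1:] >= thresh)))
```

Entrainment experiments would then sweep `i_ac` and `freq_hz` and compare the spike timing against the drive; the paper's contribution is to keep the system in that regime adaptively with a learned controller.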
Structures of Epstein-Barr virus and Kaposi's sarcoma-associated herpesvirus virions reveal species-specific tegument and envelope features
J Virol. 2024 Oct 29:e0119424. doi: 10.1128/jvi.01194-24. Online ahead of print.
ABSTRACT
Epstein-Barr virus (EBV) and Kaposi's sarcoma-associated herpesvirus (KSHV) are classified into the gammaherpesvirus subfamily of Herpesviridae, which stands out from its alpha- and betaherpesvirus relatives due to the tumorigenicity of its members. Although structures of human alpha- and betaherpesviruses by cryogenic electron tomography (cryoET) have been reported, reconstructions of intact human gammaherpesvirus virions remain elusive. Here, we structurally characterize extracellular virions of EBV and KSHV by deep learning-enhanced cryoET, resolving both previously known monomorphic capsid structures and previously unknown pleomorphic features beyond the capsid. Through subtomogram averaging and subsequent tomogram-guided sub-particle reconstruction, we determined the orientation of KSHV nucleocapsids from mature virions with respect to the portal to provide spatial context for the tegument within the virion. Both EBV and KSHV have an eccentric capsid position and polarized distribution of tegument. Tegument species span from the capsid to the envelope and may serve as scaffolds for tegumentation and envelopment. The envelopes of EBV and KSHV are less densely populated with glycoproteins than those of herpes simplex virus 1 (HSV-1) and human cytomegalovirus (HCMV), representative members of alpha- and betaherpesviruses, respectively. Also, we observed that fusion protein gB trimers exist in triplet arrangements in addition to standalone complexes, which is relevant to understanding dynamic processes such as fusion pore formation. Taken together, this study reveals nuanced yet important differences in the tegument and envelope architectures among human herpesviruses and provides insights into their varied cell tropism and infection.
IMPORTANCE: Discovered in 1964, Epstein-Barr virus (EBV) is the first identified human oncogenic virus and the founding member of the gammaherpesvirus subfamily. In 1994, another cancer-causing virus was discovered in lesions of AIDS patients and later named Kaposi's sarcoma-associated herpesvirus (KSHV), the second human gammaherpesvirus. Despite the historical importance of EBV and KSHV, technical difficulties with isolating large quantities of these viruses and the pleomorphic nature of their envelope and tegument layers have limited structural characterization of their virions. In this study, we employed the latest technologies in cryogenic electron microscopy (cryoEM) and tomography (cryoET) supplemented with an artificial intelligence-powered data processing software package to reconstruct 3D structures of the EBV and KSHV virions. We uncovered unique properties of the envelope glycoproteins and tegument layers of both EBV and KSHV. Comparison of these features with their non-tumorigenic counterparts provides insights into their relevance during infection.
PMID:39470208 | DOI:10.1128/jvi.01194-24
Hydrogen bond network structures of protonated 2,2,2-trifluoroethanol/ethanol mixed clusters probed by infrared spectroscopy combined with a deep-learning structure sampling approach: the origin of the linear type network preference in protonated...
Phys Chem Chem Phys. 2024 Oct 29. doi: 10.1039/d4cp03534h. Online ahead of print.
ABSTRACT
While preferential hydrogen bond network structures of cold protonated alcohol clusters H+(ROH)n are generally switched from a linear type to a cyclic one at n = 4-5, those of protonated 2,2,2-trifluoroethanol (TFE) clusters maintain linear type structures at least in the size range of n = 3-7. To explore the origin of the strong linear type network preference of H+(TFE)n, infrared spectra of protonated mixed clusters H+(TFE)m(ethanol)n (m + n = 5) were measured. An efficient structure sampling technique using parallelized basin-hopping algorithms and deep-learning neural network potentials was developed to search for essential isomers of the mixed clusters. Vibrational simulations based on the harmonic superposition approximation were compared with the observed spectra to identify the major isomer component at each mixing ratio. It was found that the formation of the cyclic structure occurs only for n ≥ 3 in the mixed clusters, in which the proton solvating sites and the double acceptor site are occupied by ethanol. The crucial role of the stability of the double acceptor site in the cyclic structure formation is discussed.
PMID:39470069 | DOI:10.1039/d4cp03534h
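The basin-hopping idea itself can be illustrated apart from the neural-network potentials. The toy sketch below substitutes a made-up 1D double-well for a cluster potential-energy surface and a crude gradient descent for a proper local optimizer; it keeps only the hop-then-relax-then-accept skeleton.

```python
import numpy as np

def f(x):
    """Toy double-well 'potential energy': global minimum near
    x = -2.03, shallower local minimum near x = +1.97."""
    return (x**2 - 4.0)**2 + x

def grad_f(x):
    return 4.0 * x * (x**2 - 4.0) + 1.0

def local_minimize(x, lr=0.01, n_iter=300):
    """Crude gradient-descent relaxation, standing in for the local
    optimization that would run on a neural-network potential."""
    for _ in range(n_iter):
        x -= lr * grad_f(x)
    return x

def basin_hopping(x0, n_hops=100, step=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best_x = local_minimize(x0)
    best_e = f(best_x)
    for _ in range(n_hops):
        # Random hop from the current best, clipped so the crude
        # descent stays numerically stable
        trial = np.clip(best_x + step * rng.standard_normal(), -3.0, 3.0)
        trial = local_minimize(trial)
        if f(trial) < best_e:            # greedy acceptance
            best_x, best_e = trial, f(trial)
    return best_x, best_e
```

Started in the wrong basin (x0 = 2), the hops eventually land in the global basin and the relaxation finds the lower minimum. In the paper, many such hop-relax chains run in parallel and the energies come from learned potentials.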
Deep learning-assisted morphological segmentation for effective particle area estimation and prediction of interfacial properties in polymer composites
Nanoscale. 2024 Oct 29. doi: 10.1039/d4nr01018c. Online ahead of print.
ABSTRACT
The link between the macroscopic properties of polymer nanocomposites and the underlying microstructural features necessitates an understanding of nanoparticle dispersion. The dispersion of nanoparticles introduces variability, potentially leading to clustering and localized accumulation of nanoparticles. This non-uniform dispersion impacts the accuracy of predictive models. In response to this challenge, this study developed an automated and precise technique for particle recognition and detailed mapping of particle positions in scanning electron microscopy (SEM) micrographs. This was achieved by integrating deep convolutional neural networks with advanced image processing techniques. Following particle detection, two dispersion factors were introduced, namely size uniformity and supercritical clustering, to quantify the impact of particle dispersion on properties. These factors, estimated using the computer vision technique, were subsequently used to calculate the effective load-bearing area influenced by the particles. An adapted micromechanical model was then employed to quantify the interfacial strength and thickness of the nanocomposites. This approach enabled the establishment of a correlation between dispersion characteristics and interfacial properties by integrating experimental data, relevant micromechanical models, and quantified dispersion factors. The proposed systematic procedure demonstrates considerable promise in utilizing deep learning to capture and quantify particle dispersion characteristics for structure-property analyses in polymer nanocomposites.
PMID:39469845 | DOI:10.1039/d4nr01018c
Artificial intelligence-based power market price prediction in smart renewable energy systems: Combining prophet and transformer models
Heliyon. 2024 Oct 18;10(20):e38227. doi: 10.1016/j.heliyon.2024.e38227. eCollection 2024 Oct 30.
ABSTRACT
With the increasing integration of smart renewable energy systems and power electronic converters, electricity market price prediction is particularly important. It is not only crucial for the interests of power suppliers and market regulators but also plays a key role in ensuring the reliable and flexible operation of the power system, particularly during extreme weather events or abnormal conditions. This study develops a hybrid time series forecasting model that combines Prophet and Transformer, leveraging deep learning to provide a new solution for electricity market price forecasting. By introducing a Stacking optimization strategy, the study improves the accuracy and stability of price sequence prediction, integrating a traditional time series method (Prophet) with a deep learning model (Transformer) to exploit their respective advantages. Through experimental evaluation on four electricity market data sets, the study finds that the hybrid model significantly improves the accuracy and stability of electricity market price predictions. The method not only provides a more accurate tool for price prediction but also solid technical support for the efficient operation and sustainable development of smart renewable energy systems. The results further indicate that combining deep learning models with traditional time series methods, together with the Stacking strategy, is crucial to prediction performance, and that it informs the design of smart renewable energy systems and of price and energy management strategies, supporting efficient and reliable power and energy transmission.
PMID:39469701 | PMC:PMC11513455 | DOI:10.1016/j.heliyon.2024.e38227
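The Stacking strategy this entry leans on can be sketched independently of Prophet and Transformer. Below, two deliberately simple base forecasters (a seasonal-naive rule and a linear trend, stand-ins only, on synthetic price data) are combined by a least-squares meta-learner.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hourly "market price": trend + daily cycle + noise
t = np.arange(480.0)
y = 50 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

period = 24
idx = np.arange(period, t.size)          # indices we can forecast

# Base model 1: seasonal-naive (value one period earlier),
# a crude stand-in for a seasonal statistical model
pred_seasonal = y[idx - period]

# Base model 2: global linear trend, a crude stand-in for a
# learned trend model
a, b = np.polyfit(t, y, 1)
pred_trend = a * t[idx] + b

# Stacking: a least-squares meta-learner over the base predictions
X = np.column_stack([pred_seasonal, pred_trend, np.ones(idx.size)])
w, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
pred_stack = X @ w

def mse(p):
    return float(np.mean((y[idx] - p) ** 2))

mse_seasonal, mse_trend, mse_stack = mse(pred_seasonal), mse(pred_trend), mse(pred_stack)
```

Because either base model alone is a special case of the linear combination, the stacked fit can do no worse than the best base model on the data the meta-learner was fit on, which is the intuition behind the strategy.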
Modeling epithelial-mesenchymal transition in patient-derived breast cancer organoids
Front Oncol. 2024 Oct 14;14:1470379. doi: 10.3389/fonc.2024.1470379. eCollection 2024.
ABSTRACT
Cellular plasticity is enhanced by dedifferentiation processes such as epithelial-mesenchymal transition (EMT). The dynamic and transient nature of EMT-like processes challenges the investigation of cell plasticity in patient-derived breast cancer models. Here, we utilized patient-derived organoids (PDOs) as a model to study the susceptibility of primary breast cancer cells to EMT. Upon induction with TGF-β, PDOs exhibited EMT-like features, including morphological changes, E-cadherin downregulation and cytoskeletal reorganization, leading to an invasive phenotype. Image analysis and the integration of deep learning algorithms enabled the implementation of microscopy-based quantifications, demonstrating reproducible results across organoid lines from different breast cancer patients. Interestingly, epithelial plasticity was also reflected in alterations in luminal and myoepithelial distribution upon TGF-β induction. The effective modeling of dynamic processes such as EMT in organoids and their characteristic spatial diversity highlight their potential to advance research on cancer cell plasticity in cancer patients.
PMID:39469640 | PMC:PMC11513879 | DOI:10.3389/fonc.2024.1470379
Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation
Uncertain Safe Util Mach Learn Med Imaging (2023). 2023 Oct;14291:53-63. doi: 10.1007/978-3-031-44336-7_6. Epub 2023 Oct 7.
ABSTRACT
Deep learning-based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning. However, quantifying and understanding the uncertainty associated with model predictions is crucial in critical clinical applications. While many techniques have been proposed for epistemic or model-based uncertainty estimation, it is unclear which method is preferred in the medical image analysis setting. This paper presents a comprehensive benchmarking study that evaluates epistemic uncertainty quantification methods in organ segmentation in terms of accuracy, uncertainty calibration, and scalability. We provide a thorough discussion of the strengths, weaknesses, and out-of-distribution detection capabilities of each method as well as recommendations for future improvements. These findings contribute to the development of reliable and robust models that yield accurate segmentations while effectively quantifying epistemic uncertainty.
PMID:39469570 | PMC:PMC11514142 | DOI:10.1007/978-3-031-44336-7_6
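One scalable baseline in this space is the ensemble, whose member disagreement serves as the epistemic uncertainty signal. A toy version, a bootstrap ensemble of cubic fits rather than a segmentation network, shows the core idea; everything here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression data on [-1, 1]
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.1, x.size)

def fit_member(seed):
    """One ensemble member: a cubic fit on a bootstrap resample."""
    r = np.random.default_rng(seed)
    i = r.integers(0, x.size, x.size)
    return np.polyfit(x[i], y[i], 3)

ensemble = [fit_member(s) for s in range(20)]

def predict(x_query):
    """Mean and std across members; the spread is an epistemic
    uncertainty estimate that grows away from the training data."""
    preds = np.array([np.polyval(c, x_query) for c in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

_, std_in = predict(np.array([0.0]))    # inside the training range
_, std_out = predict(np.array([3.0]))   # far outside it
```

The spread growing far from the training distribution is also the basis of the out-of-distribution detection capability the benchmark evaluates.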
Diagnostic performance of deep learning for infectious keratitis: a systematic review and meta-analysis
EClinicalMedicine. 2024 Oct 18;77:102887. doi: 10.1016/j.eclinm.2024.102887. eCollection 2024 Nov.
ABSTRACT
BACKGROUND: Infectious keratitis (IK) is the leading cause of corneal blindness globally. Deep learning (DL) is an emerging tool for medical diagnosis, though its value in IK is unclear. We aimed to assess the diagnostic accuracy of DL for IK and its comparative accuracy with ophthalmologists.
METHODS: In this systematic review and meta-analysis, we searched EMBASE, MEDLINE, and clinical registries for studies related to DL for IK published between 1974 and July 16, 2024. We performed meta-analyses using bivariate models to estimate summary sensitivities and specificities. This systematic review was registered with PROSPERO (CRD42022348596).
FINDINGS: Of 963 studies identified, 35 studies (136,401 corneal images from >56,011 patients) were included. Most studies had low risk of bias (68.6%) and low applicability concern (91.4%) in all domains of QUADAS-2, except the index test domain. Against the reference standard of expert consensus and/or microbiological results (seven external validation studies; 10,675 images), the summary estimates (95% CI) for sensitivity and specificity of DL for IK were 86.2% (71.6-93.9) and 96.3% (91.5-98.5). From 28 internal validation studies (16,059 images), summary estimates for sensitivity and specificity were 91.6% (86.8-94.8) and 90.7% (84.8-94.5). Based on seven studies (4007 images), DL and ophthalmologists had comparable summary sensitivity [89.2% (82.2-93.6) versus 82.2% (71.5-89.5); P = 0.20] and specificity [93.2% (85.5-97.0) versus 89.6% (78.8-95.2); P = 0.45].
INTERPRETATION: DL models may have good diagnostic accuracy for IK and comparable performance to ophthalmologists. These findings should be interpreted with caution due to the image-based analysis that did not account for potential correlation within individuals, relatively homogeneous population studies, lack of pre-specification of DL thresholds, and limited external validation. Future studies should improve their reporting, data diversity, external validation, transparency, and explainability to increase the reliability and generalisability of DL models for clinical deployment.
FUNDING: NIH, Wellcome Trust, MRC, Fight for Sight, BHP, and ESCRS.
PMID:39469534 | PMC:PMC11513659 | DOI:10.1016/j.eclinm.2024.102887
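Summary estimates like those above come from pooling per-study sensitivities and specificities. The sketch below uses a much simpler fixed-effect univariate pooling on the logit scale; the paper fits bivariate random-effects models that model sensitivity and specificity jointly, and the study counts here are invented.

```python
import math

def pooled_logit(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the
    logit scale (a simplification of the bivariate models used in
    diagnostic-accuracy meta-analyses). For sensitivity, pass
    per-study TP and TP+FN; for specificity, TN and TN+FP."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        # 0.5 continuity correction guards against zero cells
        p = (e + 0.5) / (n + 1.0)
        var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))

# Hypothetical per-study true positives / diseased counts
tp = [45, 80, 30, 120]
diseased = [50, 95, 38, 130]
sens = pooled_logit(tp, diseased)
```

The pooled estimate necessarily lands within the range of the per-study sensitivities; the bivariate approach additionally accounts for between-study heterogeneity and the sensitivity-specificity trade-off.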
VP-net: an end-to-end deep learning network for elastic wave velocity prediction in human skin in vivo using optical coherence elastography
Front Bioeng Biotechnol. 2024 Oct 14;12:1465823. doi: 10.3389/fbioe.2024.1465823. eCollection 2024.
ABSTRACT
INTRODUCTION: Acne vulgaris, one of the most common skin conditions, affects up to 85% of late adolescents, yet there is currently no universally accepted assessment system. The biomechanical properties of skin provide valuable information for the assessment and management of skin conditions. Wave-based optical coherence elastography (OCE) quantitatively assesses these properties of tissues by analyzing induced elastic wave velocities. However, velocity estimation methods require significant expertise and lengthy image processing times, limiting the clinical translation of OCE technology. Recent advances in machine learning offer promising solutions to simplify the velocity estimation process.
METHODS: In this study, we proposed a novel end-to-end deep-learning model, named velocity prediction network (VP-Net), aiming to accurately predict elastic wave velocity from raw OCE data of in vivo healthy and abnormal human skin. A total of 16,424 raw phase slices from 1% to 5% agar-based tissue-mimicking phantoms, 28,270 slices from in vivo human skin sites including the palm, forearm, and back of the hand from 16 participants, and 580 slices of facial closed comedones were acquired to train, validate, and test VP-Net.
RESULTS: VP-Net demonstrated highly accurate velocity prediction performance compared to other deep-learning-based methods, as evidenced by low error metrics. Furthermore, VP-Net exhibited low model complexity and parameter requirements, enabling end-to-end velocity prediction from a single raw phase slice in 1.32 ms, enhancing processing speed by a factor of ∼100 compared to a conventional wave velocity estimation method. Additionally, we employed gradient-weighted class activation maps to showcase VP-Net's proficiency in discerning wave propagation patterns from raw phase slices. VP-Net predicted wave velocities that were consistent with the ground truth velocities in agar phantoms, two age groups (20s and 30s) of multiple human skin sites, and closed comedones datasets.
DISCUSSION: This study indicates that VP-Net could rapidly and accurately predict elastic wave velocities related to biomechanical properties of in vivo healthy and abnormal skin, offering potential clinical applications in characterizing skin aging, as well as assessing and managing the treatment of acne vulgaris.
PMID:39469517 | PMC:PMC11513296 | DOI:10.3389/fbioe.2024.1465823
Study on Univariate Modeling and Prediction Methods Using Monthly HIV Incidence and Mortality Cases in China
HIV AIDS (Auckl). 2024 Oct 24;16:397-412. doi: 10.2147/HIV.S476371. eCollection 2024.
ABSTRACT
PURPOSE: AIDS poses serious harm to public health worldwide. In this paper, we used five single models: ARIMA, SARIMA, Prophet, BP neural network, and LSTM to model and predict the number of monthly AIDS incidence cases and mortality cases in China. We also proposed the LSTM-SARIMA combination model to enhance prediction accuracy. This study provides strong data support for the prevention and treatment of AIDS.
METHODS: We collected data on monthly AIDS incidence cases and mortality cases in China from January 2010 to February 2024. Data from January 2010 to February 2021 were used for modeling and the remainder for validation. During modeling, the dataset was preprocessed according to its characteristics. All models in our study were implemented in Python 3.11.6. We then used the constructed models to predict monthly incidence and mortality cases from March 2024 to July 2024 and evaluated the predictions using RMSE, MAE, MAPE, and SMAPE.
RESULTS: The deep learning methods of LSTM and BPNN outperform ARIMA, SARIMA, and Prophet in predicting the number of mortality cases. When predicting the number of AIDS incidence cases, there is little difference between the two types of methods, and the LSTM method performs slightly better than the rest of the methods. Meanwhile, the average error in predicting AIDS mortality cases is significantly lower than in predicting AIDS incidence cases. The LSTM-SARIMA method outperforms other methods in predicting AIDS incidence and mortality.
CONCLUSION: Due to the different characteristics of the AIDS incidence and mortality case series, the performance of distinct methods differs slightly. The AIDS mortality series is smoother than the incidence series. The combined LSTM-SARIMA model outperforms both the traditional methods and the standalone LSTM in prediction, which is of practical significance for optimizing AIDS prediction results.
PMID:39469494 | PMC:PMC11514643 | DOI:10.2147/HIV.S476371
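The hybrid idea, a statistical model for the seasonal backbone plus a learned model for what it misses, can be sketched without the SARIMA or LSTM machinery. In the sketch below a seasonal-naive forecast stands in for SARIMA and a linear AR(1) on its residuals stands in for the LSTM; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly case counts: seasonality plus an AR(1) residual
n, period = 240, 12
season = 100 + 30 * np.sin(2 * np.pi * np.arange(n) / period)
resid = np.zeros(n)
for i in range(1, n):
    resid[i] = 0.7 * resid[i - 1] + rng.normal(0, 5)
y = season + resid

# Stage 1 ("SARIMA" stand-in): seasonal-naive forecast
idx = np.arange(period + 1, n)
base = y[idx - period]
r = y[idx] - base                 # residuals the stage-2 model sees

# Stage 2 ("LSTM" stand-in): AR(1) fit on the stage-1 residuals
X = np.column_stack([r[:-1], np.ones(r.size - 1)])
coef, *_ = np.linalg.lstsq(X, r[1:], rcond=None)
hybrid = base[1:] + X @ coef      # combined forecast

mse_base = float(np.mean((y[idx][1:] - base[1:]) ** 2))
mse_hybrid = float(np.mean((y[idx][1:] - hybrid) ** 2))
```

When the residual series retains temporal structure, as it does here by construction, the second stage recovers part of it and the hybrid beats the seasonal model alone, which mirrors the motivation for LSTM-SARIMA.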
Remote physiological signal recovery with efficient spatio-temporal modeling
Front Physiol. 2024 Oct 14;15:1428351. doi: 10.3389/fphys.2024.1428351. eCollection 2024.
ABSTRACT
Contactless physiological signal measurement has great applications in various fields, such as affective computing and health monitoring. Physiological measurements based on remote photoplethysmography (rPPG) are realized by capturing the weak periodic color changes. The changes are caused by the variation in the light absorption of the skin surface during the systole and diastole stages of a functioning heart. This measurement mode offers contactless measurement, simple operation, and low cost. In recent years, several deep learning-based rPPG measurement methods have been proposed. However, the features learned by deep learning models are vulnerable to motion and illumination artefacts, and are unable to fully exploit the intrinsic temporal characteristics of the rPPG. This paper presents an efficient spatiotemporal modeling-based rPPG recovery method for physiological signal measurements. First, two modules are utilized in the rPPG task: 1) 3D central difference convolution for temporal context modeling with enhanced representation and generalization capacity, and 2) Huber loss for robust intensity-level rPPG recovery. Second, a dual branch structure for both motion and appearance modeling and a soft attention mask are adapted to take full advantage of the central difference convolution. Third, a multi-task setting for joint cardiac and respiratory signal measurements is introduced to benefit from the internal relevance between the two physiological signals. Last, extensive experiments performed on three public databases show that the proposed method outperforms prior state-of-the-art methods, with Pearson's correlation coefficients higher than 0.96 on all three datasets. The generalization ability of the proposed method is also evaluated by cross-database and video compression experiments. The effectiveness and necessity of each module are confirmed by ablation studies.
PMID:39469440 | PMC:PMC11513465 | DOI:10.3389/fphys.2024.1428351
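The central difference convolution named above has a compact temporal analogue. The sketch below is a 1D simplification of the 3D operator, not the authors' implementation: a vanilla convolution aggregates intensities, while the central-difference term subtracts a scaled copy of the window center, emphasizing local temporal change.

```python
import numpy as np

def cdc_1d(x, w, theta=0.7):
    """1D central difference convolution.

    Equivalent form: sum_k w[k] * (x[i+k] - theta * x[i]),
    i.e. vanilla correlation minus theta * sum(w) * center.
    theta = 0 recovers the vanilla operation.
    """
    k = w.size // 2
    out = np.zeros(x.size - 2 * k)
    for i in range(out.size):
        window = x[i:i + w.size]
        out[i] = np.dot(w, window) - theta * w.sum() * window[k]
    return out
```

On a constant signal the response shrinks by a factor (1 - theta), which is exactly the design intent: static appearance is suppressed and subtle periodic variation, the rPPG signal, is relatively enhanced.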
Artificial Intelligence in Forensic Sciences: A Systematic Review of Past and Current Applications and Future Perspectives
Cureus. 2024 Sep 28;16(9):e70363. doi: 10.7759/cureus.70363. eCollection 2024 Sep.
ABSTRACT
The aim of this study is to review the available knowledge concerning the use of artificial intelligence (AI) across different areas of Forensic Sciences, from human identification to postmortem interval estimation and the estimation of different causes of death. This paper aims to emphasize the different uses of AI, especially in Forensic Medicine, and elucidate its technical part. This will be achieved through an explanation of different technologies that have been so far employed and through new ideas that may contribute as a first step to the adoption of new practices and to the development of new technologies. A systematic literature search was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines in the PubMed Database and Cochrane Central Library. No time or regional restrictions were applied, and all the included papers were written in English. Terms used were MACHINE AND LEARNING AND FORENSIC AND PATHOLOGY and ARTIFICIAL AND INTELIGENCE AND FORENSIC AND PATHOLOGY. Quality control was performed using the Joanna Briggs Institute critical appraisal tools. The initial search yielded 224 articles. Seven more articles were extracted from the references of the initial selection. After excluding all non-relevant articles, the remaining 45 articles were thoroughly reviewed through the whole text. A final number of 33 papers were identified as relevant to the subject, in accordance with the criteria previously established. It must be clear that AI is not meant to replace forensic experts but to assist them in their everyday work life.
PMID:39469392 | PMC:PMC11513614 | DOI:10.7759/cureus.70363
Nutritional composition analysis in food images: an innovative Swin Transformer approach
Front Nutr. 2024 Oct 14;11:1454466. doi: 10.3389/fnut.2024.1454466. eCollection 2024.
ABSTRACT
Accurate recognition of nutritional components in food is crucial for dietary management and health monitoring. Current methods often rely on traditional chemical analysis techniques, which are time-consuming, require destructive sampling, and are not suitable for large-scale or real-time applications. Therefore, there is a pressing need for efficient, non-destructive, and accurate methods to identify and quantify nutrients in food. In this study, we propose a novel deep learning model that integrates EfficientNet, Swin Transformer, and Feature Pyramid Network (FPN) to enhance the accuracy and efficiency of food nutrient recognition. Our model combines the strengths of EfficientNet for feature extraction, Swin Transformer for capturing long-range dependencies, and FPN for multi-scale feature fusion. Experimental results demonstrate that our model significantly outperforms existing methods. On the Nutrition5k dataset, it achieves a Top-1 accuracy of 79.50% and a Mean Absolute Percentage Error (MAPE) for calorie prediction of 14.72%. On the ChinaMartFood109 dataset, the model achieves a Top-1 accuracy of 80.25% and a calorie MAPE of 15.21%. These results highlight the model's robustness and adaptability across diverse food images, providing a reliable and efficient tool for rapid, non-destructive nutrient detection. This advancement supports better dietary management and enhances the understanding of food nutrition, potentially leading to more effective health monitoring applications.
PMID:39469326 | PMC:PMC11514735 | DOI:10.3389/fnut.2024.1454466
Deep Learning-Based Method for Rapid 3D Whole-Heart Modeling in Congenital Heart Disease: Correspondence
Cardiology. 2024 Oct 28:1-3. doi: 10.1159/000542318. Online ahead of print.
NO ABSTRACT
PMID:39467517 | DOI:10.1159/000542318
Clinical evaluation of accelerated diffusion-weighted imaging of rectal cancer using a denoising neural network
Eur J Radiol. 2024 Oct 24;181:111802. doi: 10.1016/j.ejrad.2024.111802. Online ahead of print.
ABSTRACT
BACKGROUND: To evaluate the effectiveness of a deep learning denoising approach to accelerate diffusion-weighted imaging (DWI) and thus improve diagnostic accuracy and image quality in restaging rectal MRI following total neoadjuvant therapy (TNT).
METHODS: This retrospective single-center study included patients with locally advanced rectal cancer who underwent restaging rectal MRI between December 30, 2021, and June 1, 2022, following TNT. A convolutional neural network trained with DWI data was employed to denoise accelerated DWI acquisitions (i.e., acquisitions performed with a reduced number of repetitions compared to standard acquisitions). Image characteristics and residual disease were independently assessed by two radiologists across original and denoised images. Statistical analyses included the Wilcoxon signed-rank test to compare image quality scores across denoised and original images, weighted kappa statistics for inter-reader agreement assessment, and the calculation of measures of diagnostic accuracy.
RESULTS: In 46 patients (median age, 60 years [IQR: 47-72]; 37 men and 9 women), 8- and 16-fold accelerated images maintained or exhibited enhanced lesion visibility and image quality compared with original images acquired with 16 repetitions. Denoised images maintained diagnostic accuracy, with conditional specificities of up to 96%. Moderate-to-high inter-reader agreement indicated reliable image and diagnostic assessment. The overall test yield for denoised DWI reconstructions ranged from 76% to 98%, demonstrating a reduction in equivocal interpretations.
CONCLUSION: Applying a denoising network to accelerate rectal DWI acquisitions can reduce scan times and enhance image quality while maintaining diagnostic accuracy, presenting a potential pathway for more efficient rectal cancer management.
PMID:39467396 | DOI:10.1016/j.ejrad.2024.111802
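The trade-off the denoising network recovers can be seen from first principles: averaging N repetitions shrinks noise by about √N, so a 16-fold acceleration forfeits a roughly 4× noise reduction that the denoiser must restore. A quick numerical check, with invented intensities and Gaussian noise rather than real DWI data:

```python
import numpy as np

rng = np.random.default_rng(1)

signal = 100.0          # hypothetical noiseless voxel intensity
sigma = 20.0            # per-repetition noise standard deviation
n_rep, n_voxels = 16, 100_000

# Each voxel measured n_rep times; the standard acquisition averages them
reps = signal + rng.normal(0, sigma, (n_voxels, n_rep))
avg16 = reps.mean(axis=1)     # 16 repetitions (standard)
avg1 = reps[:, 0]             # 1 repetition (16-fold accelerated)

noise_ratio = float(avg1.std() / avg16.std())   # close to sqrt(16) = 4
```

Real DWI noise is not exactly Gaussian (magnitude images are Rician at low SNR), which is one reason a learned denoiser can outperform simple averaging-equivalent assumptions.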
Anchoring temporal convolutional networks for epileptic seizure prediction
J Neural Eng. 2024 Oct 28. doi: 10.1088/1741-2552/ad8bf3. Online ahead of print.
ABSTRACT
OBJECTIVE: Accurate and timely prediction of epileptic seizures is crucial for empowering patients to mitigate their impact or prevent them altogether. Current studies predominantly focus on short-term seizure predictions, which causes the prediction horizon to be shorter than the onset time of antiepileptic drugs, thus failing to prevent seizures. However, longer-term epilepsy prediction faces the problem that, as the preictal period lengthens, it increasingly resembles the interictal period, complicating differentiation.
APPROACH: To address these issues, we employ the sample entropy method for feature extraction from electroencephalography (EEG) signals. Subsequently, we introduce the Anchoring Temporal Convolutional Networks (ATCN) model for longer-term, patient-specific epilepsy prediction. ATCN utilizes dilated causal convolutional networks to learn time-dependent features from previous data, capturing temporal causal correlations within and between samples. The model also incorporates anchoring data to further enhance prediction performance. Finally, we propose a multilayer sliding window prediction algorithm for seizure alarms.
MAIN RESULTS: Evaluation on the Freiburg intracranial EEG dataset shows our approach achieves 100% sensitivity, a false prediction rate (FPR) of 0.08 per hour, and an average prediction time (APT) of 99.98 minutes. Using the CHB-MIT scalp EEG dataset, we achieve 97.44% sensitivity, an FPR of 0.11 per hour, and an APT of 92.99 minutes.
SIGNIFICANCE: These results demonstrate that our approach is adequate for seizure prediction over a more extended prediction range on intracranial and scalp EEG datasets. The average prediction time of our approach exceeds the typical onset time of antiepileptic drugs. This approach is particularly beneficial for patients who need to take medication at regular intervals, as they may only need to take their medication when our method issues an alarm. This capability has the potential to prevent seizures, which would greatly improve patients' quality of life.
PMID:39467384 | DOI:10.1088/1741-2552/ad8bf3
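Sample entropy, the feature extractor used here, is straightforward to implement. The sketch below is a plain-NumPy version; the defaults (m = 2, r = 0.2 × std) follow common practice and are not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts template
    pairs of length m within Chebyshev distance r and A counts pairs
    of length m + 1. Lower values indicate a more regular signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = x.size

    def count_matches(length):
        # All overlapping templates of the given length (self-matches excluded)
        templ = np.array([x[i:i + length] for i in range(n - length)])
        c = 0
        for i in range(templ.shape[0] - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d < r))
        return c

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A regular signal such as a sine wave scores far lower than white noise, which is why the preictal-versus-interictal regularity changes in EEG make this a useful feature.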
Improving Accuracy and Reproducibility of Cartilage T<sub>2</sub> Mapping in the OAI Dataset Through Extended Phase Graph Modeling
J Magn Reson Imaging. 2024 Oct 28. doi: 10.1002/jmri.29646. Online ahead of print.
ABSTRACT
BACKGROUND: The Osteoarthritis Initiative (OAI) collected extensive imaging data, including Multi-Echo Spin-Echo (MESE) sequences for measuring knee cartilage T2 relaxation times. Mono-exponential models are used in the OAI for T2 fitting, which neglects stimulated echoes and B1 inhomogeneities. Extended Phase Graph (EPG) modeling addresses these limitations but has not been applied to the OAI dataset.
PURPOSE: To assess how different fitting methods, including EPG-based and exponential-based approaches, affect the accuracy and reproducibility of cartilage T2 in the OAI dataset.
STUDY TYPE: Retrospective.
POPULATION: From the OAI dataset, 50 subjects stratified by osteoarthritis (OA) severity using Kellgren-Lawrence grades (KLG), and 50 subjects without an OA diagnosis during the OAI duration, were selected (each group: 25 females, mean age ~61 years).
FIELD STRENGTH/SEQUENCE: 3-T, two-dimensional (2D) MESE sequence.
ASSESSMENT: Femoral and tibial cartilages were segmented from DESS images, subdivided into seven sub-regions, and co-registered to MESE. T2 maps were obtained using three EPG-based methods (nonlinear least squares, dictionary matching, and deep learning) and three mono-exponential approaches (linear least squares, nonlinear least squares, and noise-corrected exponential). Average T2 values within sub-regions were obtained. Pair-wise agreement among fitting methods was evaluated using the stratified subjects, while reproducibility was evaluated using the healthy subjects. Each method's T2 accuracy and repeatability under varying signal-to-noise ratio (SNR) were assessed with simulations.
STATISTICAL TESTS: Bland-Altman analysis, Lin's concordance coefficient, and coefficient of variation assessed agreement, repeatability, and reproducibility. Statistical significance was set at P-value <0.05.
RESULTS: EPG-based methods demonstrated superior T2 accuracy (mean absolute error below 0.5 msec at SNR > 100) compared to mono-exponential methods (error > 7 msec). EPG-based approaches had better reproducibility, with limits of agreement 1.5-5 msec narrower than exponential-based methods. T2 values from EPG methods were systematically 10-17 msec lower than those from mono-exponential fitting.
DATA CONCLUSION: EPG modeling improved agreement and reproducibility of cartilage T2 mapping in subjects from the OAI dataset.
EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 1.
PMID:39467097 | DOI:10.1002/jmri.29646
Air quality index prediction with optimisation enabled deep learning model in IoT application
Environ Technol. 2024 Oct 28:1-17. doi: 10.1080/09593330.2024.2409993. Online ahead of print.
ABSTRACT
The growth of industrial and urban areas has caused air pollution, which has affected individuals and the atmosphere in various ways over the years. The measurement of the air quality index (AQI) depends on various environmental conditions, such as emissions, dispersion, and chemical reactions. This paper develops an Internet of Things (IoT)-based Deep Learning (DL) technique for predicting air quality. Initially, the IoT simulation is performed, where the nodes receive input data. A routing technique is used to identify the best route toward the Base Station (BS); the proposed Tangent Two-Stage Algorithm (TTSA) is used in the routing mechanism. For AQI prediction, the time series data is transmitted to the BS. Z-score normalisation is employed to discard unessential data, and feature indicator extraction is employed to extract the relevant feature indicators. A Deep Feedforward Neural Network (DFNN) is used to predict air quality, and the proposed Fractional Tangent Two-Stage Optimisation (FTTSA) is employed to train the DFNN. Metrics such as energy, time, and distance are used to evaluate the routing process, with superior results of 0.979 J, 0.025 s, and 0.196 m obtained. The AQI prediction is evaluated with root mean square error (RMSE), R-squared (R2), mean square error (MSE), and mean absolute percentage error (MAPE), attaining superior values of 0.602, 0.598, 0.362, and 0.456, respectively.
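The Z-score normalisation step in the pipeline above standardises each feature to zero mean and unit variance before it reaches the predictor. A minimal sketch (the paper's exact preprocessing is not public; the sensor readings below are illustrative):

```python
import numpy as np

def z_score_normalise(x):
    """Standard Z-score normalisation: (x - mean) / std.

    A common preprocessing step for sensor time series, putting
    features on a comparable scale before model training.
    """
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()
    if sigma == 0:
        # Constant series carries no variation; map it to zeros
        return np.zeros_like(x)
    return (x - mu) / sigma

# Example: hypothetical PM2.5 readings from an IoT node
readings = [35.0, 50.0, 42.0, 120.0, 60.0]
z = z_score_normalise(readings)
```

After this transform the series has mean 0 and standard deviation 1, so outliers such as the 120.0 spike stand out as large positive Z-scores.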
PMID:39467096 | DOI:10.1080/09593330.2024.2409993
A Nuclei-Focused Strategy for Automated Histopathology Grading of Renal Cell Carcinoma
IEEE J Biomed Health Inform. 2024 Oct 28;PP. doi: 10.1109/JBHI.2024.3487004. Online ahead of print.
ABSTRACT
The rising incidence of kidney cancer underscores the need for precise and reproducible diagnostic methods. In particular, renal cell carcinoma (RCC), the most prevalent type of kidney cancer, requires accurate nuclear grading for better prognostic prediction. Recent advances in deep learning have facilitated end-to-end diagnostic methods using contextual features in histopathological images. However, most existing methods focus only on image-level features or lack an effective process for aggregating nuclei prediction results, limiting their diagnostic accuracy. In this paper, we introduce a novel framework, Nuclei feature Assisted Patch-level RCC grading (NuAP-RCC), that leverages nuclei-level features for enhanced patch-level RCC grading. Our approach employs a nuclei-level RCC grading network to extract grade-aware features, which serve as node features in a graph. These node features are aggregated using graph neural networks to capture the morphological characteristics and distributions of the nuclei. The aggregated features are then combined with global image-level features extracted by convolutional neural networks, resulting in a final feature for accurate RCC grading. In addition, we present a new dataset for patch-level RCC grading. Experimental results demonstrate the superior accuracy and generalizability of NuAP-RCC across datasets from different medical institutions, achieving a 6.15% improvement in accuracy over the second-best model on the USM-RCC dataset.
PMID:39466875 | DOI:10.1109/JBHI.2024.3487004
Noise Self-Regression: A New Learning Paradigm to Enhance Low-Light Images Without Task-Related Data
IEEE Trans Pattern Anal Mach Intell. 2024 Oct 28;PP. doi: 10.1109/TPAMI.2024.3487361. Online ahead of print.
ABSTRACT
Deep learning-based low-light image enhancement (LLIE) is the task of leveraging deep neural networks to enhance image illumination while keeping the image content unchanged. From the perspective of training data, existing methods complete the LLIE task driven by one of three data types: paired data, unpaired data, and zero-reference data. Each type of data-driven method has its own advantages; e.g., zero-reference methods have very low requirements on training data and can meet human needs in many scenarios. In this paper, we leverage pure Gaussian noise to complete the LLIE task, which further reduces the requirements for training data and offers another practical alternative. Specifically, we propose Noise SElf-Regression (NoiSER), which, without access to any task-related data, simply learns a convolutional neural network equipped with an instance-normalization layer by taking a random noise image, N(0,σ2) for each pixel, as both input and output of each training pair; a low-light image is then fed to the trained network to predict the normal-light image. Technically, an intuitive explanation for its effectiveness is as follows: 1) the self-regression reconstructs the contrast between adjacent pixels of the input image, 2) the instance-normalization layer may naturally remediate the overall magnitude/lighting of the input image, and 3) the N(0,σ2) assumption for each pixel enforces the output image to follow the well-known gray-world hypothesis [1] when the image size is large enough. Compared to current state-of-the-art LLIE methods with access to different task-related data, NoiSER is highly competitive in enhancement quality, yet with a much smaller model size and much lower training and inference cost. In addition, experiments demonstrate that NoiSER has great potential in overexposure suppression and in joint processing with other restoration tasks.
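The training paradigm described above can be sketched in a few lines of PyTorch: a small CNN with instance normalization is trained to regress each random Gaussian-noise image onto itself, with no low-light data ever seen. This is a minimal illustration under assumed layer sizes and hyperparameters, not the authors' released model:

```python
import torch
import torch.nn as nn

class NoiSERNet(nn.Module):
    """Tiny CNN with an instance-normalization layer, mirroring the
    paper's idea at sketch level (channel count and depth are assumptions)."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def train_noiser(steps=200, sigma=0.25, size=64, lr=1e-3):
    """Noise self-regression: each training pair uses the same
    N(0, sigma^2) image as both input and target, so no task-related
    (low-light) data is required at any point."""
    net = NoiSERNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        noise = sigma * torch.randn(1, 3, size, size)
        opt.zero_grad()
        loss = loss_fn(net(noise), noise)  # regress noise onto itself
        loss.backward()
        opt.step()
    return net

# At inference, a low-light image tensor is simply passed through the
# trained network to predict its normal-light counterpart.
```

The instance-normalization layer is the key design choice here: because it rescales each feature map to a fixed statistic, a dim input is automatically renormalized toward the brightness statistics the network saw during noise training.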
PMID:39466857 | DOI:10.1109/TPAMI.2024.3487361