Deep learning

A bibliometric literature review of stock price forecasting: From statistical model to deep learning approach

Fri, 2024-03-15 06:00

Sci Prog. 2024 Jan-Mar;107(1):368504241236557. doi: 10.1177/00368504241236557.

ABSTRACT

We introduce a comprehensive analysis of several approaches used in stock price forecasting, including statistical, machine learning, and deep learning models. The advantages and limitations of these models are discussed to provide insight into stock price forecasting. Traditional statistical methods, such as the autoregressive integrated moving average and its variants, are recognized for their efficiency, but they also have some limitations in addressing non-linear problems and providing long-term forecasts. Machine learning approaches, including algorithms such as artificial neural networks and random forests, are praised for their ability to grasp non-linear information without depending on stochastic data or economic theory. Moreover, deep learning approaches, such as convolutional neural networks and recurrent neural networks, can deal with complex patterns in stock prices. Additionally, this study further investigates hybrid models, combining various approaches to explore their strengths and counterbalance individual weaknesses, thereby enhancing predictive accuracy. By presenting a detailed review of various studies and methods, this study illuminates the direction of stock price forecasting and highlights potential approaches for further studies aimed at refining stock price forecasting models.

PMID:38490223 | DOI:10.1177/00368504241236557

Categories: Literature Watch

A novel measurement approach to dynamic change of limb length discrepancy using deep learning and wearable sensors

Fri, 2024-03-15 06:00

Sci Prog. 2024 Jan-Mar;107(1):368504241236345. doi: 10.1177/00368504241236345.

ABSTRACT

The accurate identification of dynamic change of limb length discrepancy (LLD) in non-clinical settings is of great significance for monitoring gait function change in people's everyday lives. Finding advanced techniques to measure LLD changes in non-clinical settings has been a persistent challenge in recent related research. In this study, we have proposed a novel approach to accurately measure the dynamic change of LLD outdoors by using deep learning and wearable sensors. The basic idea is to treat the measurement of dynamic LLD change as a multi-class gait classification task, since LLD change is clearly associated with its gait pattern. A hybrid deep learning model of convolutional neural network and long short-term memory (CNN-LSTM) was developed to precisely classify LLD gait patterns by discovering the most representative spatial-temporal LLD dynamic change features. Twenty-three healthy subjects were recruited to simulate four levels of LLD by wearing shoe lifts of different heights. The Delsys Trigno system was used to simultaneously acquire gait data from six sensors positioned on the hip, knee and ankle joints of the two lower limbs. The experimental results showed that the developed CNN-LSTM model achieved a higher accuracy of 93.24% and F1-score of 93.48% in classifying the four LLD gait patterns when compared with CNN, LSTM, and CNN-gated recurrent unit (CNN-GRU) models, and obtained better recall and precision (more than 92%) in detecting each LLD gait pattern accurately. Our model achieved excellent learning ability to discover the most representative LLD dynamic change features for classifying LLD gait patterns accurately. Our technical solution would help not only to accurately measure LLD dynamic change in non-clinical settings, but also potentially to identify lower limb joints with more abnormal compensatory changes caused by LLD.
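
A minimal sketch of such a CNN-LSTM classifier (in PyTorch): the six-channel input and four-class output follow the abstract, while layer sizes and window length are illustrative assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of a CNN-LSTM gait classifier: a 1-D CNN extracts features from
    the sensor channels at each time step, and an LSTM models the temporal
    dynamics; only the 6 channels and 4 classes come from the abstract."""
    def __init__(self, n_channels=6, n_classes=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.cnn(x)               # (batch, 64, time/4)
        feats = feats.permute(0, 2, 1)    # (batch, time/4, 64) for the LSTM
        _, (h, _) = self.lstm(feats)
        return self.fc(h[-1])             # class logits

# Example: a batch of 8 gait windows, 6 sensor channels, 200 samples each
logits = CNNLSTM()(torch.randn(8, 6, 200))
print(logits.shape)  # torch.Size([8, 4])
```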

PMID:38490169 | DOI:10.1177/00368504241236345

Categories: Literature Watch

Enhancing moisture detection in coal gravels: A deep learning-based adaptive microwave spectra fusion method

Fri, 2024-03-15 06:00

Spectrochim Acta A Mol Biomol Spectrosc. 2024 Mar 12;313:124147. doi: 10.1016/j.saa.2024.124147. Online ahead of print.

ABSTRACT

The accurate and effective detection of moisture in coal gravels is crucial. The conventional air oven-drying method suffers from prolonged processing times and its disruptive nature. This paper proposes a deep learning-based adaptive fusion method for multiple microwave spectra to non-destructively detect the moisture content of coal gravels. First, a purpose-built free-space measurement platform is employed to acquire microwave spectra of coal samples, encompassing the magnitude and phase spectra of reflection coefficients (S11) and transmission coefficients (S21). Subsequently, a Monte-Carlo cross-validation-based method is adopted to detect and eliminate outliers in the spectra. Furthermore, a novel feature extraction module is proposed, enhancing the traditional U-shaped network with residual learning (ResNet) and the convolutional block attention module (CBAM) to extract and reconstruct subtle spectral features. Inspired by high-level data fusion, an adaptive spectra fusion method is then introduced that can autonomously balance the contributions of the different spectra. The experimental results underscore the advantages of the proposed method, with the narrow frequency intervals 2.50-3.25 GHz, 3.75-4.00 GHz, and 4.75-5.00 GHz exhibiting superior detection accuracy compared to the entire frequency band, achieving R2 = 0.9034, MAE = 1.0254, RMSE = 1.2948 and RPIQ = 6.0630.
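
A toy sketch of the adaptive-fusion idea only: learnable weights balance the contribution of each spectrum's features before regression. The encoders, the number of spectral inputs, and the point counts below are assumptions; the study's actual feature extractor (U-shaped network with ResNet and CBAM) is not reproduced here.

```python
import torch
import torch.nn as nn

class AdaptiveSpectraFusion(nn.Module):
    """Illustrative high-level fusion: each spectrum (e.g. |S11|, phase(S11),
    |S21|, phase(S21)) gets its own encoder, and softmax-normalised learnable
    weights balance their contributions to the moisture regression head."""
    def __init__(self, n_spectra=4, n_points=256, embed=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(n_points, embed), nn.ReLU())
            for _ in range(n_spectra)
        )
        self.fusion_logits = nn.Parameter(torch.zeros(n_spectra))  # learned contributions
        self.head = nn.Linear(embed, 1)                            # moisture content

    def forward(self, spectra):                       # list of (batch, n_points) tensors
        feats = torch.stack([enc(s) for enc, s in zip(self.encoders, spectra)], dim=1)
        w = torch.softmax(self.fusion_logits, dim=0)  # adaptive weights, sum to 1
        fused = (w[None, :, None] * feats).sum(dim=1)
        return self.head(fused)

spectra = [torch.randn(16, 256) for _ in range(4)]
print(AdaptiveSpectraFusion()(spectra).shape)  # torch.Size([16, 1])
```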

PMID:38490123 | DOI:10.1016/j.saa.2024.124147

Categories: Literature Watch

Edge Computing Transformers for Fall Detection in Older Adults

Fri, 2024-03-15 06:00

Int J Neural Syst. 2024 Mar 16:2450026. doi: 10.1142/S0129065724500266. Online ahead of print.

ABSTRACT

The global trend of increasing life expectancy introduces new challenges with far-reaching implications. Among these, the risk of falls among older adults is particularly significant, affecting individual health and quality of life, and placing an additional burden on healthcare systems. Existing fall detection systems often have limitations, including delays due to continuous server communication, high false-positive rates, low adoption rates due to wearability and comfort issues, and high costs. In response to these challenges, this work presents a reliable, wearable, and cost-effective fall detection system. The proposed system consists of a fit-for-purpose device, with an embedded algorithm and an Inertial Measurement Unit (IMU), enabling real-time fall detection. The algorithm combines a Threshold-Based Algorithm (TBA) and a Transformer-based neural network with a low number of parameters. This system demonstrates notable performance with 95.29% accuracy, 93.68% specificity, and 96.66% sensitivity, while using only 0.38% of the trainable parameters of the compared approach.
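
A conceptual sketch of such a two-stage design, a threshold-based trigger followed by a small Transformer classifier. The threshold value and all network dimensions below are assumptions chosen for illustration, not the paper's settings.

```python
import torch
import torch.nn as nn

ACC_THRESHOLD_G = 2.5  # assumed candidate-impact threshold; the paper's value is not given

def tba_trigger(acc_magnitude: torch.Tensor) -> bool:
    """Stage 1 (threshold-based): only wake the neural network when the
    acceleration magnitude exceeds a candidate-impact threshold."""
    return bool((acc_magnitude > ACC_THRESHOLD_G).any())

class TinyFallTransformer(nn.Module):
    """Stage 2: a deliberately small Transformer encoder classifies the IMU
    window as fall / no fall, keeping the parameter count low."""
    def __init__(self, n_features=6, d_model=16, n_heads=2, n_layers=1):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=32,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):                 # x: (batch, time, 6 IMU channels)
        z = self.encoder(self.proj(x))
        return self.head(z.mean(dim=1))   # pool over time -> fall / no-fall logits

window = torch.randn(1, 128, 6)                      # one IMU window (toy data)
if tba_trigger(window[..., :3].norm(dim=-1)):        # accel magnitude from 3 axes
    print(TinyFallTransformer()(window))
```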

PMID:38490957 | DOI:10.1142/S0129065724500266

Categories: Literature Watch

ASPTF: A computational tool to predict abiotic stress-responsive transcription factors in plants by employing machine learning algorithms

Fri, 2024-03-15 06:00

Biochim Biophys Acta Gen Subj. 2024 Mar 13:130597. doi: 10.1016/j.bbagen.2024.130597. Online ahead of print.

ABSTRACT

BACKGROUND: Abiotic stresses pose serious threat to the growth and yield of crop plants. Several studies suggest that in plants, transcription factors (TFs) are important regulators of gene expression, especially when it comes to coping with abiotic stresses. Therefore, it is crucial to identify TFs associated with abiotic stress response for breeding of abiotic stress tolerant crop cultivars.

METHODS: Based on a machine learning framework, a computational model was envisaged to predict TFs associated with abiotic stress response in plants. To numerically encode TF sequences, four distinct sequence-derived features were generated. The prediction was performed using ten shallow learning and four deep learning algorithms. For prediction using more pertinent and informative features, feature selection techniques were also employed.

RESULTS: Using the features chosen by the light-gradient boosting machine-variable importance measure (LGBM-VIM), the LGBM achieved the highest cross-validation performance metrics (accuracy: 86.81%, auROC: 92.98%, and auPRC: 94.03%). Further evaluation of the proposed model (LGBM prediction method + LGBM-VIM selected features) was also done using an independent test dataset, where the accuracy, auROC and auPRC were observed 81.98%, 90.65% and 91.30%, respectively.
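
A minimal sketch of the LGBM-VIM idea described above (rank features by LightGBM importance, then cross-validate a classifier on the selected subset), shown on toy data; the top-100 cut-off, the encoding, and the hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the numerically encoded TF sequences: X (samples x features), y binary.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 400)), rng.integers(0, 2, size=500)

# Step 1: rank features by LightGBM's variable importance (the "LGBM-VIM" idea).
ranker = LGBMClassifier(n_estimators=200, random_state=0).fit(X, y)
top_idx = np.argsort(ranker.feature_importances_)[::-1][:100]  # keep top-100 (assumed cut-off)

# Step 2: cross-validate an LGBM classifier on the selected features only.
acc = cross_val_score(LGBMClassifier(random_state=0), X[:, top_idx], y,
                      cv=5, scoring="accuracy")
print(f"5-fold accuracy: {acc.mean():.3f}")
```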

CONCLUSIONS: To facilitate the adoption of the proposed strategy by users, the technique was implemented and made available as a prediction server called ASPTF, accessible at https://iasri-sg.icar.gov.in/asptf/. The developed approach and the corresponding web application are anticipated to complement experimental methods in the identification of transcription factors (TFs) responsive to abiotic stress in plants.

PMID:38490467 | DOI:10.1016/j.bbagen.2024.130597

Categories: Literature Watch

Generalization analysis of deep CNNs under maximum correntropy criterion

Fri, 2024-03-15 06:00

Neural Netw. 2024 Mar 5;174:106226. doi: 10.1016/j.neunet.2024.106226. Online ahead of print.

ABSTRACT

Convolutional neural networks (CNNs) have gained immense popularity in recent years, finding their utility in diverse fields such as image recognition, natural language processing, and bioinformatics. Despite the remarkable progress made in deep learning theory, most studies on CNNs, especially in regression tasks, tend to heavily rely on the least squares loss function. However, there are situations where such learning algorithms may not suffice, particularly in the presence of heavy-tailed noises or outliers. This predicament emphasizes the necessity of exploring alternative loss functions that can handle such scenarios more effectively, thereby unleashing the true potential of CNNs. In this paper, we investigate the generalization error of deep CNNs with the rectified linear unit (ReLU) activation function for robust regression problems within an information-theoretic learning framework. Our study demonstrates that when the regression function exhibits an additive ridge structure and the noise possesses a finite pth moment, the empirical risk minimization scheme, generated by the maximum correntropy criterion and deep CNNs, achieves fast convergence rates. Notably, these rates align with the minimax optimal convergence rates attained by fully connected neural network models with the Huber loss function up to a logarithmic factor. Additionally, we further establish the convergence rates of deep CNNs under the maximum correntropy criterion when the regression function resides in a Sobolev space on the sphere.
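
For readers unfamiliar with the criterion, a commonly used form of the correntropy-induced loss with kernel bandwidth sigma and its empirical risk is sketched below (standard notation, not quoted from the paper):

```latex
% Correntropy-induced loss with Gaussian kernel bandwidth \sigma:
\ell_\sigma\bigl(y, f(x)\bigr)
   = \sigma^{2}\Bigl(1 - \exp\Bigl(-\tfrac{(y - f(x))^{2}}{2\sigma^{2}}\Bigr)\Bigr),
\qquad
\mathcal{R}_{\sigma}(f) = \frac{1}{n}\sum_{i=1}^{n} \ell_\sigma\bigl(y_i, f(x_i)\bigr).
% As \sigma \to \infty, \ell_\sigma(y, f(x)) \to (y - f(x))^{2}/2, recovering least squares;
% for finite \sigma the loss is bounded by \sigma^{2}, which limits the influence of outliers.
```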

PMID:38490117 | DOI:10.1016/j.neunet.2024.106226

Categories: Literature Watch

Source-free unsupervised domain adaptation: A survey

Fri, 2024-03-15 06:00

Neural Netw. 2024 Mar 11;174:106230. doi: 10.1016/j.neunet.2024.106230. Online ahead of print.

ABSTRACT

Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain with source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize the popular techniques for improved generalizability of models learned without using source data. We finally discuss several promising future directions in this field.

PMID:38490115 | DOI:10.1016/j.neunet.2024.106230

Categories: Literature Watch

Automatic identification of hypertension and assessment of its secondary effects using artificial intelligence: A systematic review (2013-2023)

Fri, 2024-03-15 06:00

Comput Biol Med. 2024 Feb 28;172:108207. doi: 10.1016/j.compbiomed.2024.108207. Online ahead of print.

ABSTRACT

Artificial Intelligence (AI) techniques are increasingly used in computer-aided diagnostic tools in medicine. These techniques can also help to identify hypertension (HTN), a global health issue, in its early stage. Automated HTN detection uses socio-demographic, clinical data, and physiological signals. Additionally, signs of secondary HTN can also be identified using various imaging modalities. This systematic review examines related work on automated HTN detection. We identify datasets, techniques, and classifiers used to develop AI models from clinical data, physiological signals, and fused data (a combination of both). Image-based models for assessing secondary HTN are also reviewed. The majority of the studies have primarily utilized single-modality approaches, such as biological signals (e.g., electrocardiography, photoplethysmography), and medical imaging (e.g., magnetic resonance angiography, ultrasound). Surprisingly, only a small portion of the studies (22 out of 122) utilized a multi-modal fusion approach combining data from different sources. Even fewer investigated integrating clinical data, physiological signals, and medical imaging to understand the intricate relationships between these factors. Future research directions are discussed that could build better healthcare systems for early HTN detection through more integrated modeling of multi-modal data sources.

PMID:38489986 | DOI:10.1016/j.compbiomed.2024.108207

Categories: Literature Watch

Reliable prediction of difficult airway for tracheal intubation from patient preoperative photographs by machine learning methods

Fri, 2024-03-15 06:00

Comput Methods Programs Biomed. 2024 Mar 12;248:108118. doi: 10.1016/j.cmpb.2024.108118. Online ahead of print.

ABSTRACT

BACKGROUND: Estimating the risk of a difficult tracheal intubation should help clinicians in better anaesthesia planning, to maximize patient safety. Routine bedside screenings suffer from low sensitivity.

OBJECTIVE: To develop and evaluate machine learning (ML) and deep learning (DL) algorithms for the reliable prediction of intubation risk, using information about airway morphology.

METHODS: Observational, prospective cohort study enrolling n=623 patients who underwent tracheal intubation: 53/623 difficult cases (prevalence 8.51%). First, we used our previously validated deep convolutional neural network (DCNN) to extract 2D image coordinates for 27 + 13 relevant anatomical landmarks in two preoperative photos (frontal and lateral views). Here we propose a method to determine the 3D pose of the camera with respect to the patient and to obtain the 3D world coordinates of these landmarks. Then we compute a novel set of dM=59 morphological features (distances, areas, angles and ratios), engineered with our anaesthesiologists to characterize each individual's airway anatomy towards prediction. Subsequently, here we propose four ad hoc ML pipelines for difficult intubation prognosis, each with four stages: feature scaling, imputation, resampling for imbalanced learning, and binary classification (Logistic Regression, Support Vector Machines, Random Forests and eXtreme Gradient Boosting). These compound ML pipelines were fed with the dM=59 morphological features, alongside dD=7 demographic variables. Here we trained them with automatic hyperparameter tuning (Bayesian search) and probability calibration (Platt scaling). In addition, we developed an ad hoc multi-input DCNN to estimate the intubation risk directly from each pair of photographs, i.e. without any intermediate morphological description. Performance was evaluated using optimal Bayesian decision theory. It was compared against experts' judgement and against state-of-the-art methods (three clinical formulae, four ML, four DL models).
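
A minimal scikit-learn-style sketch of one such four-stage pipeline (imputation, scaling, resampling for the roughly 8.5% positive class, classification) with Platt-scaling calibration. SMOTE and XGBoost are assumed stand-ins for the resampler and classifier, imputation is placed before scaling so the scaler never sees missing values, and the Bayesian hyperparameter search is omitted for brevity; none of the settings are the study's.

```python
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.calibration import CalibratedClassifierCV
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Four-stage pipeline mirroring the abstract's structure on the 59 + 7 features.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("resample", SMOTE(random_state=0)),          # applied to training folds only
    ("clf", XGBClassifier(eval_metric="logloss")),
])

# Platt scaling: fit a sigmoid on held-out folds to calibrate the predicted risk.
calibrated = CalibratedClassifierCV(pipe, method="sigmoid", cv=5)
# calibrated.fit(X_train, y_train)
# risk = calibrated.predict_proba(X_new)[:, 1]   # calibrated difficult-intubation risk
```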

RESULTS: Our four ad hoc ML pipelines with engineered morphological features achieved similar discrimination capabilities: median AUCs between 0.746 and 0.766. They significantly outperformed both expert judgement and all state-of-the-art methods (highest AUC at 0.716). Conversely, our multi-input DCNN yielded low performance due to overfitting. This same behaviour occurred for the state-of-the-art DL algorithms. Overall, the best method was our XGB pipeline, with the fewest false negatives at the optimal Bayesian decision threshold.

CONCLUSIONS: We proposed and validated ML models to assist clinicians in anaesthesia planning, providing a reliable calibrated estimate of airway intubation risk, which outperformed expert assessments and state-of-the-art methods. Our novel set of engineered features succeeded in providing informative descriptions for prognosis.

PMID:38489935 | DOI:10.1016/j.cmpb.2024.108118

Categories: Literature Watch

Use of superpixels for improvement of inter-rater and intra-rater reliability during annotation of medical images

Fri, 2024-03-15 06:00

Med Image Anal. 2024 Mar 12;94:103141. doi: 10.1016/j.media.2024.103141. Online ahead of print.

ABSTRACT

In the context of automatic medical image segmentation based on statistical learning, raters' variability of ground truth segmentations in training datasets is a widely recognized issue. Indeed, the reference information is provided by experts, but bias due to their knowledge may affect the quality of the ground truth data, thus hindering the creation of robust and reliable datasets employed in segmentation, classification or detection tasks. In such a framework, automatic medical image segmentation would significantly benefit from utilizing some form of presegmentation during the training data preparation process, which could lower the impact of experts' knowledge and reduce time-consuming labeling efforts. The present manuscript proposes a superpixels-driven procedure for annotating medical images. Three different superpixeling methods with two different numbers of superpixels were evaluated on three different medical segmentation tasks and compared with manual annotations. Within the superpixel-based annotation procedure, medical experts interactively select superpixels of interest and apply manual corrections when necessary; the accuracy of the annotations, the time needed to prepare them, and the number of manual corrections are then assessed. In this study, it is shown that the proposed procedure reduces inter- and intra-rater variability, leading to more reliable annotation datasets which, in turn, may be beneficial for the development of more robust classification or segmentation models. In addition, the proposed approach reduces the time needed to prepare the annotations.
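
As an illustration of superpixel-driven annotation, the sketch below uses SLIC (one possible superpixeling method; the abstract does not name the three methods evaluated) to partition an image and build a mask from hypothetically selected superpixels.

```python
import numpy as np
from skimage import data, segmentation   # requires scikit-image >= 0.19 for channel_axis

image = data.camera()                    # stand-in for a grayscale medical image slice
labels = segmentation.slic(image, n_segments=400, compactness=0.1,
                           channel_axis=None, start_label=1)

# Annotation works at the superpixel level: the expert clicks superpixels of
# interest and the mask is the union of the selected regions.
selected_ids = {12, 13, 27}              # hypothetical expert selections
mask = np.isin(labels, list(selected_ids))
print(mask.shape, int(mask.sum()), "pixels annotated")
```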

PMID:38489896 | DOI:10.1016/j.media.2024.103141

Categories: Literature Watch

Machine learning aided single cell image analysis improves understanding of morphometric heterogeneity of human mesenchymal stem cells

Fri, 2024-03-15 06:00

Methods. 2024 Mar 13:S1046-2023(24)00068-9. doi: 10.1016/j.ymeth.2024.03.005. Online ahead of print.

ABSTRACT

The multipotent stem cells of our body have been largely harnessed in biotherapeutics. However, as they are derived from multiple anatomical sources, from different tissues, human mesenchymal stem cells (hMSCs) are a heterogeneous population showing ambiguity in their in vitro behavior. Intra-clonal population heterogeneity has also been identified and pre-clinical mechanistic studies suggest that these cumulatively depreciate the therapeutic effects of hMSC transplantation. Although various biomarkers identify these specific stem cell populations, recent artificial intelligence-based methods have capitalized on the cellular morphologies of hMSCs, opening a new approach to understand their attributes. A robust and rapid platform is required to accommodate and eliminate the heterogeneity observed in the cell population, to standardize the quality of hMSC therapeutics globally. Here, we report our primary findings of morphological heterogeneity observed within and across two sources of hMSCs namely, stem cells from human exfoliated deciduous teeth (SHEDs) and human Wharton jelly mesenchymal stem cells (hWJ MSCs), using real-time single-cell images generated on immunophenotyping by imaging flow cytometry (IFC). We used the ImageJ software for identification and comparison between the two types of hMSCs using statistically significant morphometric descriptors that are biologically relevant. To expand on these insights, we have further applied deep learning methods and successfully report the development of a Convolutional Neural Network-based image classifier. In our research, we introduced a machine learning methodology to streamline the entire procedure, utilizing convolutional neural networks and transfer learning for binary classification, achieving an accuracy rate of 97.54%. We have also critically discussed the challenges, comparisons between solutions and future directions of machine learning in hMSC classification in biotherapeutics.
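
A minimal sketch of a transfer-learning binary classifier of the kind described above; the backbone (ResNet-18), the three-channel 224x224 input, and the training settings are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning for a SHED vs. hWJ-MSC binary classifier: reuse an ImageNet
# backbone (weights downloaded on first use), freeze it, train only a new 2-class head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of single-cell images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```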

PMID:38490594 | DOI:10.1016/j.ymeth.2024.03.005

Categories: Literature Watch

Understanding cellulose pyrolysis via ab initio deep learning potential field

Fri, 2024-03-15 06:00

Bioresour Technol. 2024 Mar 13:130590. doi: 10.1016/j.biortech.2024.130590. Online ahead of print.

ABSTRACT

Comprehensive and dynamic studies of cellulose pyrolysis reaction mechanisms are crucial in designing experiments and processes with enhanced safety, efficiency, and sustainability. The details of the pyrolysis mechanism are not readily available from experiments but can be better described via molecular dynamics (MD) simulations. However, the large size of cellulose molecules challenges accurate ab initio MD simulations, while existing reactive force field parameters lack precision. In this work, a precise ab initio deep learning potential field (DLPF) is developed and applied in MD simulations to facilitate the study of cellulose pyrolysis mechanisms. The formation mechanisms and production rates of both valuable and greenhouse products from cellulose at temperatures greater than 1073 K are comprehensively described. This study underscores the critical role of advanced simulation techniques, particularly the DLPF, in achieving an efficient and accurate understanding of cellulose pyrolysis mechanisms, thus promoting wider industrial applications.

PMID:38490462 | DOI:10.1016/j.biortech.2024.130590

Categories: Literature Watch

Artificial intelligence: The future for multimodality imaging of right ventricle

Fri, 2024-03-15 06:00

Int J Cardiol. 2024 Mar 13:131970. doi: 10.1016/j.ijcard.2024.131970. Online ahead of print.

ABSTRACT

The crucial pathophysiological and prognostic roles of the right ventricle in various diseases have been well-established. Nonetheless, conventional cardiovascular imaging modalities are frequently associated with intrinsic limitations when evaluating right ventricular (RV) morphology and function. The integration of artificial intelligence (AI) in multimodality imaging presents a promising avenue to circumvent these obstacles, paving the way for future fully automated imaging paradigms. This review aimed to address the current challenges faced by clinicians and researchers in integrating RV imaging and AI technology, to provide a comprehensive overview of the current applications of AI in RV imaging, and to offer insights into future directions, opportunities, and potential challenges in this rapidly advancing field.

PMID:38490268 | DOI:10.1016/j.ijcard.2024.131970

Categories: Literature Watch

Non-invasive assessment of response to transcatheter arterial chemoembolization for hepatocellular carcinoma with the deep neural networks-based radiomics nomogram

Fri, 2024-03-15 06:00

Acta Radiol. 2024 Mar 15:2841851241229185. doi: 10.1177/02841851241229185. Online ahead of print.

ABSTRACT

BACKGROUND: Transcatheter arterial chemoembolization (TACE) is a mainstay treatment for intermediate and advanced hepatocellular carcinoma (HCC), with the potential to enhance patient survival. Preoperative prediction of postoperative response to TACE in patients with HCC is crucial.

PURPOSE: To develop a deep neural network (DNN)-based nomogram for the non-invasive and precise prediction of TACE response in patients with HCC.

MATERIAL AND METHODS: We retrospectively collected clinical and imaging data from 110 patients with HCC who underwent TACE surgery. Radiomics features were extracted from specific imaging methods. We employed conventional machine-learning algorithms and a DNN-based model to construct predictive probabilities (RScore). Logistic regression helped identify independent clinical risk factors, which were integrated with RScore to create a nomogram. We evaluated diagnostic performance using various metrics.

RESULTS: Among the radiomics models, the DNN_LASSO-based one demonstrated the highest predictive accuracy (area under the curve [AUC] = 0.847, sensitivity = 0.892, specificity = 0.791). Peritumoral enhancement and alkaline phosphatase were identified as independent risk factors. Combining RScore with these clinical factors, a DNN-based nomogram exhibited superior predictive performance (AUC = 0.871, sensitivity = 0.844, specificity = 0.873).
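
As an illustration of the nomogram idea (combining the radiomics-derived RScore with the two independent clinical risk factors in a single logistic model), here is a toy sketch on synthetic data; all variable names and values are hypothetical and only mirror the factors named above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 110                                               # cohort size from the abstract
rscore = rng.uniform(0, 1, n)                         # DNN-derived radiomics probability
peritumoral_enh = rng.integers(0, 2, n)               # 0/1 peritumoral enhancement
alp = rng.normal(100, 30, n)                          # alkaline phosphatase (toy values)
response = rng.integers(0, 2, n)                      # TACE response label (toy)

# The nomogram in miniature: one logistic model over RScore + clinical factors.
X = np.column_stack([rscore, peritumoral_enh, alp])
nomogram = LogisticRegression(max_iter=1000).fit(X, response)
print("predicted response probability:", nomogram.predict_proba(X[:3])[:, 1])
```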

CONCLUSION: In this study, we successfully developed a deep learning-based nomogram that can noninvasively and accurately predict TACE response in patients with HCC, offering significant potential for improving the clinical management of HCC.

PMID:38489805 | DOI:10.1177/02841851241229185

Categories: Literature Watch

Deep learning for automatic prediction of early activation of treatment naive non-exudative MNVs in AMD

Fri, 2024-03-15 06:00

Retina. 2024 Mar 14. doi: 10.1097/IAE.0000000000004106. Online ahead of print.

ABSTRACT

BACKGROUND: Around 30% of non-exudative macular neovascularizations (NE-MNVs) exudate within 2 years from diagnosis in patients with age-related macular degeneration (AMD). The aim of the study is to develop a deep learning classifier based on optical coherence tomography (OCT) and OCT angiography (OCTA) to identify NE-MNVs at risk of exudation.

METHODS: AMD patients showing OCTA and fluorescein angiography (FA) documented NE-MNV with a 2-year minimum imaging follow-up were retrospectively selected. Patients showing OCT B-scan-documented MNV exudation within the first 2 years formed the EX-GROUP, while the others formed the QU-GROUP. ResNet-101, Inception-ResNet-v2 and DenseNet-201 were independently trained on OCTA and OCT B-scan images. Combinations of the 6 models were evaluated with majority and soft voting techniques.

RESULTS: Eighty-nine (89) eyes of 89 patients with a follow-up of 5.7 ± 1.5 years were recruited (35 in the EX-GROUP and 54 in the QU-GROUP). Inception-ResNet-v2 was the best performing among the 3 single convolutional neural networks (CNNs). The majority voting model combining the 3 different CNNs improved performance for both OCTA and OCT B-scan inputs (both significantly higher than the human graders' performance). The soft voting model combining the OCTA- and OCT B-scan-based majority voting models showed a testing accuracy of 94.4%. Peripheral arcades and large vessels on OCTA en face imaging were more prevalent in the QU-GROUP.
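
A small sketch of the two ensembling schemes used above, soft voting (averaging class probabilities) and majority voting (counting per-model predictions), on made-up probabilities for four test eyes.

```python
import numpy as np

def soft_vote(prob_matrices, weights=None):
    """Average class probabilities from several models (optionally weighted)
    and return the winning class per sample plus the fused probabilities."""
    probs = np.average(np.stack(prob_matrices), axis=0, weights=weights)
    return probs.argmax(axis=1), probs

def majority_vote(prob_matrices):
    """Majority (hard) voting over the individual models' argmax predictions."""
    votes = np.stack([p.argmax(axis=1) for p in prob_matrices])
    return np.apply_along_axis(lambda v: np.bincount(v, minlength=2).argmax(), 0, votes)

# Three CNNs' predicted probabilities (columns: exudating, quiescent) for 4 eyes
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.6, 0.4]])
p3 = np.array([[0.7, 0.3], [0.5, 0.5], [0.1, 0.9], [0.8, 0.2]])
labels, fused = soft_vote([p1, p2, p3])
print(labels, majority_vote([p1, p2, p3]))
```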

CONCLUSIONS: Artificial intelligence shows high performance in identifying NE-MNVs at risk of exudation within the first 2 years of follow-up, allowing better customization of follow-up timing and avoiding treatment delay. Better results are obtained with the combination of OCTA and OCT B-scan image analysis.

PMID:38489765 | DOI:10.1097/IAE.0000000000004106

Categories: Literature Watch

Deep-Learning Density Functional Perturbation Theory

Fri, 2024-03-15 06:00

Phys Rev Lett. 2024 Mar 1;132(9):096401. doi: 10.1103/PhysRevLett.132.096401.

ABSTRACT

Calculating perturbation response properties of materials from first principles provides a vital link between theory and experiment, but is bottlenecked by the high computational cost. Here, a general framework is proposed to perform density functional perturbation theory (DFPT) calculations by neural networks, greatly improving the computational efficiency. Automatic differentiation is applied on neural networks, facilitating accurate computation of derivatives. High efficiency and good accuracy of the approach are demonstrated by studying electron-phonon coupling and related physical quantities. This work brings deep-learning density functional theory and DFPT into a unified framework, creating opportunities for developing ab initio artificial intelligence.

PMID:38489617 | DOI:10.1103/PhysRevLett.132.096401

Categories: Literature Watch

Deep learning-assisted detection and segmentation of intracranial hemorrhage in noncontrast computed tomography scans of acute stroke patients: a systematic review and meta-analysis

Fri, 2024-03-15 06:00

Int J Surg. 2024 Mar 15. doi: 10.1097/JS9.0000000000001266. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning (DL)-assisted detection and segmentation of intracranial hemorrhage in noncontrast computed tomography (NCCT) scans of acute stroke patients have been widely reported, but pooled evidence on their performance is lacking.

MATERIALS AND METHODS: PubMed and Embase databases were searched from their inception to November 2023 to identify related studies. The primary outcomes included sensitivity, specificity, and the Dice Similarity Coefficient (DSC), while the secondary outcomes were positive predictive value (PPV), negative predictive value (NPV), precision, area under the receiver operating characteristic curve (AUROC), processing time, and volume of bleeding. A random-effects model and a bivariate model were used to pool independent effect sizes and diagnostic meta-analysis data, respectively.

RESULTS: A total of 36 original studies were included in this meta-analysis. Pooled results indicated that DL technologies have a comparable performance in intracranial hemorrhage detection and segmentation, with high values of sensitivity (0.89, 95% CI: 0.88-0.90), specificity (0.91, 95% CI: 0.89-0.93), AUROC (0.94, 95% CI: 0.93-0.95), PPV (0.92, 95% CI: 0.91-0.93), NPV (0.94, 95% CI: 0.91-0.96), precision (0.83, 95% CI: 0.77-0.90), and DSC (0.84, 95% CI: 0.82-0.87). There is no significant difference between manual labeling and DL technologies in hemorrhage quantification (MD 0.08, 95% CI: -5.45-5.60, P=0.98), but the latter takes less processing time than manual labeling (WMD 2.26, 95% CI: 1.96-2.56, P=0.001).
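
For context, the univariate random-effects pooling step can be sketched in a few lines using the DerSimonian-Laird estimator; the bivariate model for jointly pooling sensitivity and specificity is more involved and omitted here. The numbers below are toy values, not study data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling: estimate the between-study variance tau^2, then
    return the tau^2-adjusted weighted mean effect and its 95% CI."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                  # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Toy example: per-study logit-sensitivities and their variances
print(dersimonian_laird([2.1, 1.8, 2.4, 2.0], [0.04, 0.06, 0.05, 0.03]))
```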

CONCLUSION: This systematic review has identified a range of DL algorithms whose performance in identifying, segmenting, and quantifying hemorrhage lesions was comparable to that of experienced clinicians, but with greater efficiency and reduced cost. It is highly emphasized that multicenter randomized controlled clinical trials will be needed to validate the performance of these tools in the future, paving the way for fast and efficient decision-making during clinical procedures in patients with acute hemorrhagic stroke.

PMID:38489547 | DOI:10.1097/JS9.0000000000001266

Categories: Literature Watch

Quantitative evaluation model of variable diagnosis for chest X-ray images using deep learning

Fri, 2024-03-15 06:00

PLOS Digit Health. 2024 Mar 15;3(3):e0000460. doi: 10.1371/journal.pdig.0000460. eCollection 2024 Mar.

ABSTRACT

The purpose of this study is to demonstrate the use of a deep learning model in quantitatively evaluating clinical findings typically subject to uncertain evaluations by physicians, using binary test results based on routine protocols. A chest X-ray is the most commonly used diagnostic tool for the detection of a wide range of diseases and is generally performed as a part of regular medical checkups. However, when it comes to findings that can be classified as within the normal range but are not considered disease-related, the thresholds of physicians' findings can vary to some extent; it is therefore necessary to define a new evaluation method and quantify it. The implementation of such methods is difficult and expensive in terms of time and labor. In this study, a total of 83,005 chest X-ray images were used to diagnose the common findings of pleural thickening and scoliosis. A novel method for quantitatively evaluating the probability that a physician would judge the images to have these findings was established. The proposed method successfully quantified the variation in physicians' findings using a deep learning model trained only on binary annotation data. It was also demonstrated that the developed method could be applied both to transfer learning using convolutional neural networks for general image analysis and to a newly learned deep learning model based on vector quantization variational autoencoders, with high correlations ranging from 0.89 to 0.97.

PMID:38489375 | DOI:10.1371/journal.pdig.0000460

Categories: Literature Watch

BetaBuddy: An automated end-to-end computer vision pipeline for analysis of calcium fluorescence dynamics in β-cells

Fri, 2024-03-15 06:00

PLoS One. 2024 Mar 15;19(3):e0299549. doi: 10.1371/journal.pone.0299549. eCollection 2024.

ABSTRACT

Insulin secretion from pancreatic β-cells is integral in maintaining the delicate equilibrium of blood glucose levels. Calcium is known to be a key regulator and triggers the release of insulin. This sub-cellular process can be monitored and tracked through live-cell imaging and subsequent cell segmentation, registration, tracking, and analysis of the calcium level in each cell. Current methods of analysis typically require the manual outlining of β-cells, involve multiple software packages, and necessitate multiple researchers, all of which tend to introduce biases. Utilizing deep learning algorithms, we have therefore created a pipeline to automatically segment and track thousands of cells, which greatly reduces the time required to gather and analyze a large number of sub-cellular images and improves accuracy. Tracking cells over a time-series image stack also allows researchers to isolate specific calcium spiking patterns and spatially identify those of interest, creating an efficient and user-friendly analysis tool. Using our automated pipeline, a previous dataset used to evaluate changes in calcium spiking activity in β-cells post-electric field stimulation was reanalyzed. Changes in spiking activity were found to have been underestimated previously with manual segmentation. Moreover, the machine learning pipeline provides a powerful and rapid computational approach to examine, for example, how calcium signaling is regulated by intracellular interactions.
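
As an illustration of the trace-extraction step such a pipeline performs after segmentation and tracking, the sketch below computes one mean-fluorescence trace per tracked cell. It assumes label images with consistent cell IDs across frames and is not BetaBuddy's actual implementation.

```python
import numpy as np
from skimage.measure import regionprops_table

def calcium_traces(label_stack, intensity_stack):
    """Given per-frame segmentation labels (T, H, W) with consistent cell IDs
    and the matching fluorescence frames, return one mean-intensity trace per cell."""
    cell_ids = np.unique(label_stack)
    cell_ids = cell_ids[cell_ids > 0]
    traces = {cid: [] for cid in cell_ids}
    for labels, frame in zip(label_stack, intensity_stack):
        props = regionprops_table(labels, intensity_image=frame,
                                  properties=("label", "mean_intensity"))
        frame_means = dict(zip(props["label"], props["mean_intensity"]))
        for cid in cell_ids:
            traces[cid].append(frame_means.get(cid, np.nan))  # NaN if cell absent this frame
    return {cid: np.array(t) for cid, t in traces.items()}

# Toy stack: 50 frames, 64x64 pixels, two "cells" labelled 1 and 2
labels = np.zeros((50, 64, 64), dtype=int)
labels[:, 10:20, 10:20], labels[:, 40:50, 40:50] = 1, 2
intensity = np.random.default_rng(0).random((50, 64, 64))
print({cid: trace.shape for cid, trace in calcium_traces(labels, intensity).items()})
```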

PMID:38489336 | DOI:10.1371/journal.pone.0299549

Categories: Literature Watch

Neural Computation-Based Methods for the Early Diagnosis and Prognosis of Alzheimer's Disease Not Using Neuroimaging Biomarkers: A Systematic Review

Fri, 2024-03-15 06:00

J Alzheimers Dis. 2024 Mar 10. doi: 10.3233/JAD-231271. Online ahead of print.

ABSTRACT

BACKGROUND: The growing number of older adults in recent decades has led to more prevalent geriatric diseases, such as strokes and dementia. Therefore, Alzheimer's disease (AD), as the most common type of dementia, has become more frequent too.

OBJECTIVE: The goals of this work are to present state-of-the-art studies focused on the automatic diagnosis and prognosis of AD and its early stages, mainly mild cognitive impairment, and to predict how the research on this topic may change in the future.

METHODS: Articles found in the existing literature needed to fulfill several selection criteria. Among others, their classification methods were based on artificial neural networks (ANNs), including deep learning, and data not from brain signals or neuroimaging techniques were used. Considering our selection criteria, 42 articles published in the last decade were finally selected.

RESULTS: The most medically significant results are shown. Similar quantities of articles based on shallow and deep ANNs were found. Recurrent neural networks and transformers were common with speech or in longitudinal studies. Convolutional neural networks (CNNs) were popular with gait or combined with others in modular approaches. More than one third of the cross-sectional studies utilized multimodal data. Non-public datasets were frequently used in cross-sectional studies, whereas the opposite was true in longitudinal ones. The most popular databases were indicated, which will be helpful for future researchers in this field.

CONCLUSIONS: The introduction of CNNs in the last decade and their superb results with neuroimaging data did not negatively affect the usage of other modalities. In fact, new ones emerged.

PMID:38489188 | DOI:10.3233/JAD-231271

Categories: Literature Watch
