Deep learning

A Siamese Swin-Unet for image change detection

Sun, 2024-02-25 06:00

Sci Rep. 2024 Feb 25;14(1):4577. doi: 10.1038/s41598-024-54096-8.

ABSTRACT

Change detection in remote sensing image processing is both difficult and important. It is used extensively in a variety of sectors, including land resource planning, monitoring and forecasting of agricultural plant health, and monitoring and assessment of natural disasters. Remote sensing images provide a large amount of long-term, full-coverage data for Earth environmental monitoring. Considerable progress has been made thanks to the rapid development of deep learning. However, the majority of deep learning-based change detection techniques currently in use rely on the well-known convolutional neural network (CNN). Owing to the locality of the convolutional operation, a CNN cannot capture the interplay between global and distant semantic information. Some studies have employed the Vision Transformer as a backbone in the remote sensing field. Inspired by this work, in this paper we propose Siam-Swin-Unet, a Siamese pure-Transformer network with a U-shaped architecture for remote sensing image change detection. The Swin Transformer is a hierarchical vision transformer with shifted windows that can extract global features. To learn local and global semantic feature information, the dual-time images are fed into Siam-Swin-Unet, which is composed of a Swin Transformer, a Unet, a Siamese network, and two feature fusion modules. Because the Unet and Siamese designs are effective for change detection, we applied them to the model. The feature fusion module is designed to fuse dual-time image features and is efficient and low-compute, as confirmed by our experiments. Our network achieved an F1 score of 94.67 on the CDD dataset (season varying).
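As background for the shifted-window mechanism the abstract relies on, here is a minimal pure-Python sketch (not from the paper) of non-overlapping window partitioning and the cyclic shift that the Swin Transformer applies between successive blocks; `window_partition` and `cyclic_shift` are illustrative names operating on a plain 2D grid rather than feature tensors.

```python
def cyclic_shift(grid, shift):
    """Cyclically shift a 2D grid by `shift` rows and columns, as in the
    shifted-window step between Swin blocks."""
    h, w = len(grid), len(grid[0])
    return [[grid[(r + shift) % h][(c + shift) % w] for c in range(w)]
            for r in range(h)]

def window_partition(grid, win):
    """Split an H x W grid into non-overlapping win x win windows
    (H and W assumed divisible by win); attention is computed per window."""
    h, w = len(grid), len(grid[0])
    windows = []
    for r0 in range(0, h, win):
        for c0 in range(0, w, win):
            windows.append([[grid[r][c] for c in range(c0, c0 + win)]
                            for r in range(r0, r0 + win)])
    return windows
```

In a Siamese setup, both dual-time images would pass through the same encoder with shared weights before their window-level features are fused.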

PMID:38403711 | DOI:10.1038/s41598-024-54096-8

Categories: Literature Watch

Automated neonatal nnU-Net brain MRI extractor trained on a large multi-institutional dataset

Sun, 2024-02-25 06:00

Sci Rep. 2024 Feb 26;14(1):4583. doi: 10.1038/s41598-024-54436-8.

ABSTRACT

Brain extraction, or skull-stripping, is an essential data preprocessing step for machine learning approaches to brain MRI analysis. Currently, there are limited extraction algorithms for the neonatal brain. We aim to adapt an established deep learning algorithm for the automatic segmentation of neonatal brains from MRI, trained on a large multi-institutional dataset for improved generalizability across image acquisition parameters. Our model, ANUBEX (automated neonatal nnU-Net brain MRI extractor), was designed using nnU-Net and was trained on a subset of participants (N = 433) enrolled in the High-dose Erythropoietin for Asphyxia and Encephalopathy (HEAL) study. We compared the performance of our model to five publicly available models (BET, BSE, CABINET, iBEATv2, ROBEX) across conventional and machine learning methods, tested on two public datasets (NIH and dHCP). We found that our model had a significantly higher Dice score on the aggregate of both data sets and comparable or significantly higher Dice scores on the NIH (low-resolution) and dHCP (high-resolution) datasets independently. ANUBEX performs similarly when trained on sequence-agnostic or motion-degraded MRI, but slightly worse on preterm brains. In conclusion, we created an automatic deep learning-based neonatal brain extraction algorithm that demonstrates accurate performance with both high- and low-resolution MRIs with fast computation time.

PMID:38403673 | DOI:10.1038/s41598-024-54436-8

Categories: Literature Watch

Digital pathology-based artificial intelligence models for differential diagnosis and prognosis of sporadic odontogenic keratocysts

Sun, 2024-02-25 06:00

Int J Oral Sci. 2024 Feb 26;16(1):16. doi: 10.1038/s41368-024-00287-y.

ABSTRACT

Odontogenic keratocyst (OKC) is a common jaw cyst with a high recurrence rate. OKC combined with basal cell carcinoma as well as skeletal and other developmental abnormalities is thought to be associated with Gorlin syndrome. Moreover, OKC needs to be differentiated from orthokeratinized odontogenic cyst and other jaw cysts. Because their prognoses differ, differential diagnosis of these cysts can contribute to clinical management. We collected 519 cases, comprising a total of 2,157 hematoxylin and eosin-stained images, to develop digital pathology-based artificial intelligence (AI) models for the diagnosis and prognosis of OKC. The Inception_v3 neural network was utilized to train and test models developed from patch-level images. Finally, whole slide image-level AI models were developed by integrating deep learning-generated pathology features with several machine learning algorithms. The AI models showed great performance in the diagnosis (AUC = 0.935, 95% CI: 0.898-0.973) and prognosis (AUC = 0.840, 95% CI: 0.751-0.930) of OKC. The advantages of the multiple-slide model for integrating histopathological information are demonstrated through a comparison with the single-slide model. Furthermore, the study investigates the correlation between AI features generated by deep learning and pathological findings, highlighting the interpretative potential of AI models in pathology. Here, we have developed robust diagnostic and prognostic models for OKC. The AI model based on digital pathology shows promising potential for applications in odontogenic diseases of the jaw.
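The reported AUCs can be understood via the Mann-Whitney formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A small illustrative sketch (not the authors' code):

```python
def auc(scores, labels):
    """AUC as P(score of a random positive > score of a random negative),
    counting ties as 0.5 (Mann-Whitney U formulation). labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.935 therefore means that in roughly 93.5% of positive-negative pairs the model ranks the positive case higher.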

PMID:38403665 | DOI:10.1038/s41368-024-00287-y

Categories: Literature Watch

Radiomics for residual tumour detection and prognosis in newly diagnosed glioblastoma based on postoperative [11C] methionine PET and T1c-w MRI

Sun, 2024-02-25 06:00

Sci Rep. 2024 Feb 25;14(1):4576. doi: 10.1038/s41598-024-55092-8.

ABSTRACT

Personalized treatment strategies based on non-invasive biomarkers have potential to improve patient management in patients with newly diagnosed glioblastoma (GBM). The residual tumour burden after surgery in GBM patients is a prognostic imaging biomarker. However, in clinical patient management, its assessment is a manual and time-consuming process that is at risk of inter-rater variability. Furthermore, the prediction of patient outcome prior to radiotherapy may identify patient subgroups that could benefit from escalated radiotherapy doses. Therefore, in this study, we investigate the capabilities of traditional radiomics and 3D convolutional neural networks for automatic detection of the residual tumour status and to prognosticate time-to-recurrence (TTR) and overall survival (OS) in GBM using postoperative [11C] methionine positron emission tomography (MET-PET) and gadolinium-enhanced T1-w magnetic resonance imaging (MRI). On the independent test data, the 3D-DenseNet model based on MET-PET achieved the best performance for residual tumour detection, while the logistic regression model with conventional radiomics features performed best for T1c-w MRI (AUC: MET-PET 0.95, T1c-w MRI 0.78). For the prognosis of TTR and OS, the 3D-DenseNet model based on MET-PET integrated with age and MGMT status achieved the best performance (Concordance-Index: TTR 0.68, OS 0.65). In conclusion, we showed that both deep-learning and conventional radiomics have potential value for supporting image-based assessment and prognosis in GBM. After prospective validation, these models may be considered for treatment personalization.
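The concordance index used above to evaluate the TTR and OS models can be illustrated with a minimal Harrell's C-index implementation; pairs are comparable only when the patient with the earlier time had an observed event. This is the standard definition, not the authors' code.

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (earlier time is an observed
    event, event flag 1), the fraction where the higher predicted risk fails
    earlier; ties in predicted risk count 0.5."""
    conc, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    conc += 1.0
                elif risks[i] == risks[j]:
                    conc += 0.5
    return conc / comparable
```

A C-index of 0.5 corresponds to random ranking, so the reported 0.65-0.68 values indicate modest but genuine prognostic signal.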

PMID:38403632 | DOI:10.1038/s41598-024-55092-8

Categories: Literature Watch

Prediction of chemical reaction yields with large-scale multi-view pre-training

Sun, 2024-02-25 06:00

J Cheminform. 2024 Feb 25;16(1):22. doi: 10.1186/s13321-024-00815-2.

ABSTRACT

Developing machine learning models with high generalization capability for predicting chemical reaction yields is of significant interest and importance. The efficacy of such models depends heavily on the representation of chemical reactions, which has commonly been learned from SMILES or graphs of molecules using deep neural networks. However, the progression of chemical reactions is inherently determined by the molecular 3D geometric properties, which have been recently highlighted as crucial features in accurately predicting molecular properties and chemical reactions. Additionally, large-scale pre-training has been shown to be essential in enhancing the generalization capability of complex deep learning models. Based on these considerations, we propose the Reaction Multi-View Pre-training (ReaMVP) framework, which leverages self-supervised learning techniques and a two-stage pre-training strategy to predict chemical reaction yields. By incorporating multi-view learning with 3D geometric information, ReaMVP achieves state-of-the-art performance on two benchmark datasets. Notably, the experimental results indicate that ReaMVP has a significant advantage in predicting out-of-sample data, suggesting an enhanced generalization ability to predict new reactions. Scientific Contribution: This study presents the ReaMVP framework, which improves the generalization capability of machine learning models for predicting chemical reaction yields. By integrating sequential and geometric views and leveraging self-supervised learning techniques with a two-stage pre-training strategy, ReaMVP achieves state-of-the-art performance on benchmark datasets. The framework demonstrates superior predictive ability for out-of-sample data and enhances the prediction of new reactions.

PMID:38403627 | DOI:10.1186/s13321-024-00815-2

Categories: Literature Watch

Developments of ex vivo cardiac electrical mapping and intelligent labeling of atrial fibrillation substrates

Sun, 2024-02-25 06:00

Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Feb 25;41(1):184-190. doi: 10.7507/1001-5515.202211046.

ABSTRACT

Cardiac three-dimensional electrophysiological labeling technology is the prerequisite and foundation of atrial fibrillation (AF) ablation surgery. Invasive labeling is the current clinical method, but it has many shortcomings, such as large trauma, long procedure duration, and low success rate. In recent years, because of its non-invasive and convenient characteristics, ex vivo labeling has become a new direction for the development of electrophysiological labeling technology. With the rapid development of computer hardware and software as well as the accumulation of clinical databases, the application of deep learning technology to electrocardiogram (ECG) data is becoming more extensive and has made great progress, which provides new ideas for research on ex vivo cardiac mapping and intelligent labeling of AF substrates. This paper reviewed research progress on the ECG forward problem, the ECG inverse problem, and the application of deep learning to AF labeling; discussed the problems of ex vivo intelligent labeling of AF substrates and possible approaches to solving them; and considered the challenges and future directions of ex vivo cardiac electrophysiological labeling.

PMID:38403620 | DOI:10.7507/1001-5515.202211046

Categories: Literature Watch

Identification of breast cancer subtypes based on graph convolutional network

Sun, 2024-02-25 06:00

Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Feb 25;41(1):121-128. doi: 10.7507/1001-5515.202306071.

ABSTRACT

Identification of molecular subtypes of malignant tumors plays a vital role in individualized diagnosis, personalized treatment, and prognosis prediction of cancer patients. The continuous improvement of comprehensive tumor genomics databases and ongoing breakthroughs in deep learning technology have driven further advancements in computer-aided tumor classification. Although the existing classification methods based on the Gene Expression Omnibus database take the complexity of cancer molecular classification into account, they ignore the internal correlation and synergism of genes. To solve this problem, we propose a multi-layer graph convolutional network model for breast cancer subtype classification combined with a hierarchical attention network. This model constructs graph-embedded datasets of patients' genes and develops a new end-to-end multi-classification model that can effectively recognize molecular subtypes of breast cancer. Extensive test data demonstrate the good performance of this new model in the classification of breast cancer subtypes. Compared with the original graph convolutional neural network and two mainstream graph neural network classification algorithms, the new model has remarkable advantages. The accuracy, weighted F1-score, weighted recall, and weighted precision of our model in the seven-category classification reached 0.8517, 0.8235, 0.8517 and 0.7936, respectively. In the four-category classification, the results were 0.9285, 0.8949, 0.9285 and 0.8650, respectively. In addition, compared with the latest breast cancer subtype classification algorithms, the method proposed in this paper also achieved the highest classification accuracy. In summary, the model proposed in this paper may serve as an auxiliary diagnostic technology, providing a reliable option for precise classification of breast cancer subtypes in the future and laying the theoretical foundation for computer-aided tumor classification.
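The support-weighted metrics reported above average per-class scores weighted by class frequency. An illustrative sketch of the weighted F1 computation (standard definition, not the authors' code):

```python
def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 scores averaged with weights
    proportional to each class's frequency in y_true."""
    classes = sorted(set(y_true))
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        support = sum(1 for t in y_true if t == c)
        score += f1 * support / total
    return score
```

Weighted recall and weighted precision follow the same pattern, substituting the per-class recall or precision for the per-class F1.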

PMID:38403612 | DOI:10.7507/1001-5515.202306071

Categories: Literature Watch

Deep learning approach for automatic segmentation of auricular acupoint divisions

Sun, 2024-02-25 06:00

Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Feb 25;41(1):114-120. doi: 10.7507/1001-5515.202309010.

ABSTRACT

The automatic segmentation of auricular acupoint divisions is the basis for realizing intelligent auricular acupoint therapy. However, due to the large number of ear acupuncture areas and the lack of clear boundary, existing solutions face challenges in automatically segmenting auricular acupoints. Therefore, a fast and accurate automatic segmentation approach of auricular acupuncture divisions is needed. A deep learning-based approach for automatic segmentation of auricular acupoint divisions is proposed, which mainly includes three stages: ear contour detection, anatomical part segmentation and keypoints localization, and image post-processing. In the anatomical part segmentation and keypoints localization stages, K-YOLACT was proposed to improve operating efficiency. Experimental results showed that the proposed approach achieved automatic segmentation of 66 acupuncture points in the frontal image of the ear, and the segmentation effect was better than existing solutions. At the same time, the mean average precision (mAP) of the anatomical part segmentation of the K-YOLACT was 83.2%, mAP of keypoints localization was 98.1%, and the running speed was significantly improved. The implementation of this approach provides a reliable solution for the accurate segmentation of auricular point images, and provides strong technical support for the modern development of traditional Chinese medicine.

PMID:38403611 | DOI:10.7507/1001-5515.202309010

Categories: Literature Watch

Application of electrical impedance tomography imaging technology combined with generative adversarial network in pulmonary ventilation monitoring

Sun, 2024-02-25 06:00

Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Feb 25;41(1):105-113. doi: 10.7507/1001-5515.202308026.

ABSTRACT

Electrical impedance tomography (EIT) plays a crucial role in the monitoring of pulmonary ventilation and regional pulmonary function test. However, the inherent ill-posed nature of EIT algorithms results in significant deviations in the reconstructed conductivity obtained from voltage data contaminated with noise, making it challenging to obtain accurate distribution images of conductivity change as well as clear boundary contours. In order to enhance the image quality of EIT in lung ventilation monitoring, a novel approach integrating the EIT with deep learning algorithm was proposed. Firstly, an optimized operator was introduced to enhance the Kalman filter algorithm, and Tikhonov regularization was incorporated into the state-space expression of the algorithm to obtain the initial lung image reconstructed. Following that, the imaging outcomes were fed into a generative adversarial network model in order to reconstruct accurate lung contours. The simulation experiment results indicate that the proposed method produces pulmonary images with clear boundaries, demonstrating increased robustness against noise interference. This methodology effectively achieves a satisfactory level of visualization and holds potential significance as a reference for the diagnostic purposes of imaging modalities such as computed tomography.
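Tikhonov regularization, which the authors incorporate into the state-space expression of their Kalman filter, stabilizes an ill-posed inversion by penalizing the solution norm. A minimal standalone sketch of the regularized least-squares solve (illustrative only; the paper's formulation is embedded in the filter, not a direct solve):

```python
def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via the normal equations
    (A^T A + lam * I) x = A^T b, using naive Gaussian elimination."""
    n = len(A[0])
    # Build M = A^T A + lam * I and rhs = A^T b.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
          + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Larger values of `lam` shrink the reconstructed conductivity change toward zero, trading fidelity to the noisy voltage data for stability.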

PMID:38403610 | DOI:10.7507/1001-5515.202308026

Categories: Literature Watch

Research on bark-frequency spectral coefficients heart sound classification algorithm based on multiple window time-frequency reassignment

Sun, 2024-02-25 06:00

Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Feb 25;41(1):51-59. doi: 10.7507/1001-5515.202212037.

ABSTRACT

Multi-window time-frequency reassignment helps to improve the time-frequency resolution of bark-frequency spectral coefficient (BFSC) analysis of heart sounds. To this end, a new heart sound classification algorithm combining feature extraction based on multi-window time-frequency reassignment BFSC with deep learning was proposed in this paper. First, randomly intercepted heart sound segments are preprocessed with amplitude normalization; the heart sounds are then framed, and time-frequency reassignments based on short-time Fourier transforms are computed using multiple orthogonal windows. A smooth spectrum estimate is calculated by arithmetically averaging the resulting independent spectra. Finally, the BFSC of the reassigned spectrum is extracted as a feature by the Bark filter bank. In this paper, a convolutional network and a recurrent neural network are used as classifiers for model comparison and performance evaluation of the extracted features. Ultimately, the multi-window time-frequency reassignment BFSC method extracts more discriminative features, with a binary classification accuracy of 0.936, a sensitivity of 0.946, and a specificity of 0.922. These results show that the algorithm proposed in this paper does not need to segment the heart sounds, instead randomly intercepting heart sound segments, which greatly simplifies the computational process and is expected to be used for screening of congenital heart disease.
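The Bark filter bank referenced above is built on the Bark critical-band scale. A small sketch using Zwicker's widely used Hz-to-Bark approximation (an assumption; the paper does not state which conversion formula it uses):

```python
import math

def hz_to_bark(f):
    """Zwicker's approximation of the Bark critical-band scale:
    z = 13 * atan(0.00076 f) + 3.5 * atan((f / 7500)^2)."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def bark_band_edges(fs, n_bands):
    """Edges of n_bands equal-width Bark bands spanning 0 .. fs/2;
    triangular filters placed on these edges form a Bark filter bank."""
    top = hz_to_bark(fs / 2.0)
    return [top * i / n_bands for i in range(n_bands + 1)]
```

Because bandwidth grows with frequency on the Bark scale, low-frequency heart sound content receives finer spectral resolution than high-frequency content.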

PMID:38403604 | DOI:10.7507/1001-5515.202212037

Categories: Literature Watch

A topological deep learning framework for neural spike decoding

Sun, 2024-02-25 06:00

Biophys J. 2024 Feb 8:S0006-3495(24)00041-9. doi: 10.1016/j.bpj.2024.01.025. Online ahead of print.

ABSTRACT

The brain's spatial orientation system uses different neuron ensembles to aid in environment-based navigation. Two of the ways brains encode spatial information are through head direction cells and grid cells. Brains use head direction cells to determine orientation, whereas grid cells consist of layers of decked neurons that overlay to provide environment-based navigation. These neurons fire in ensembles where several neurons fire at once to activate a single head direction or grid. We want to capture this firing structure and use it to decode head direction and animal location from head direction and grid cell activity. Understanding, representing, and decoding these neural structures require models that encompass higher-order connectivity, more than the one-dimensional connectivity that traditional graph-based models provide. To that end, in this work, we develop a topological deep learning framework for neural spike train decoding. Our framework combines unsupervised simplicial complex discovery with the power of deep learning via a new architecture we develop herein called a simplicial convolutional recurrent neural network. Simplicial complexes, topological spaces that use not only vertices and edges but also higher-dimensional objects, naturally generalize graphs and capture more than just pairwise relationships. Additionally, this approach does not require prior knowledge of the neural activity beyond spike counts, which removes the need for similarity measurements. The effectiveness and versatility of the simplicial convolutional neural network is demonstrated on head direction and trajectory prediction via head direction and grid cell datasets.
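A simplicial complex built from ensemble firing, as described above, can be sketched by promoting groups of neurons that fire together to higher-dimensional simplices. The co-firing threshold rule here is illustrative, not the authors' unsupervised discovery procedure:

```python
from itertools import combinations

def coactivation_simplices(spike_rows, threshold=1):
    """Build 0-, 1-, and 2-simplices from binned spike trains: a group of
    neurons spans a simplex if all of them fire in at least `threshold`
    common time bins. spike_rows maps neuron -> list of 0/1 per bin."""
    neurons = sorted(spike_rows)
    simplices = {0: [(n,) for n in neurons], 1: [], 2: []}
    for dim in (1, 2):
        for group in combinations(neurons, dim + 1):
            cofire = sum(1 for bins in zip(*(spike_rows[n] for n in group))
                         if all(bins))
            if cofire >= threshold:
                simplices[dim].append(group)
    return simplices
```

The 2-simplices (filled triangles) record three-way co-firing that an ordinary graph of pairwise edges cannot distinguish from three independent pairs.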

PMID:38402607 | DOI:10.1016/j.bpj.2024.01.025

Categories: Literature Watch

U-PASS: An uncertainty-guided deep learning pipeline for automated sleep staging

Sat, 2024-02-24 06:00

Comput Biol Med. 2024 Feb 23;171:108205. doi: 10.1016/j.compbiomed.2024.108205. Online ahead of print.

ABSTRACT

With the increasing prevalence of machine learning in critical fields like healthcare, ensuring the safety and reliability of these systems is crucial. Estimating uncertainty plays a vital role in enhancing reliability by identifying areas of high and low confidence and reducing the risk of errors. This study introduces U-PASS, a specialized human-centered machine learning pipeline tailored for clinical applications, which effectively communicates uncertainty to clinical experts and collaborates with them to improve predictions. U-PASS incorporates uncertainty estimation at every stage of the process, including data acquisition, training, and model deployment. Training is divided into a supervised pre-training step and a semi-supervised recording-wise finetuning step. We apply U-PASS to the challenging task of sleep staging and demonstrate that it systematically improves performance at every stage. By optimizing the training dataset, actively seeking feedback from domain experts for informative samples, and deferring the most uncertain samples to experts, U-PASS achieves an impressive expert-level accuracy of 85% on a challenging clinical dataset of elderly sleep apnea patients. This represents a significant improvement over the starting point at 75% accuracy. The largest improvement gain is due to the deferral of uncertain epochs to a sleep expert. U-PASS presents a promising AI approach to incorporating uncertainty estimation in machine learning pipelines, improving their reliability and unlocking their potential in clinical settings.
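The deferral step that drives U-PASS's largest gain can be illustrated with a generic entropy-based rule: epochs whose predictive entropy exceeds a threshold are routed to the expert. This is a sketch of the general idea, not the U-PASS implementation:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (natural log) of a class-probability vector;
    higher entropy means a less confident prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def defer_uncertain(batch_probs, threshold):
    """Split sample indices into (kept, deferred): samples whose predictive
    entropy exceeds the threshold are deferred to a human expert."""
    kept, deferred = [], []
    for i, probs in enumerate(batch_probs):
        (deferred if predictive_entropy(probs) > threshold else kept).append(i)
    return kept, deferred
```

Deferred samples are scored by the expert, so overall accuracy rises at the cost of expert workload on the most ambiguous epochs.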

PMID:38401452 | DOI:10.1016/j.compbiomed.2024.108205

Categories: Literature Watch

External validation of the RSNA 2020 pulmonary embolism detection challenge winning deep learning algorithm

Sat, 2024-02-24 06:00

Eur J Radiol. 2024 Feb 13;173:111361. doi: 10.1016/j.ejrad.2024.111361. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the diagnostic performance and generalizability of the winning DL algorithm of the RSNA 2020 PE detection challenge to a local population using CTPA data from two hospitals.

MATERIALS AND METHODS: Consecutive CTPA images from patients referred for suspected PE were retrospectively analysed. The winning RSNA 2020 DL algorithm was retrained on the RSNA-STR Pulmonary Embolism CT (RSPECT) dataset. The algorithm was tested in hospital A on multidetector CT (MDCT) images of 238 patients and in hospital B on spectral detector CT (SDCT) and virtual monochromatic images (VMI) of 114 patients. The output of the DL algorithm was compared with a reference standard, which included a consensus reading by at least two experienced cardiothoracic radiologists for both hospitals. Areas under the receiver operating characteristic curve (AUCs) were calculated. Sensitivity and specificity were determined using the maximum Youden index.
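The maximum-Youden-index thresholding described above can be sketched as a scan over candidate thresholds (standard procedure, not the study's code):

```python
def best_youden_threshold(scores, labels):
    """Return (threshold, sensitivity, specificity) maximizing Youden's
    J = sensitivity + specificity - 1, scanning observed scores as
    candidate thresholds; labels are 0/1, positive if score >= threshold."""
    best, best_j = (None, 0.0, 0.0), -1.0
    for thr in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best = j, (thr, sens, spec)
    return best
```

The Youden criterion weights sensitivity and specificity equally, which is a common default when no clinical cost asymmetry is specified.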

RESULTS: According to the reference standard, PE was present in 73 patients (30.7%) in hospital A and 33 patients (29.0%) in hospital B. For the DL algorithm the AUC was 0.96 (95% CI 0.92-0.98) in hospital A, 0.89 (95% CI 0.81-0.94) for conventional reconstruction in hospital B and 0.87 (95% CI 0.80-0.93) for VMI.

CONCLUSION: The winning DL algorithm of the RSNA 2020 pulmonary embolism detection on CTPA challenge, retrained on the RSPECT dataset, showed high diagnostic accuracy on MDCT images. A somewhat lower performance was observed on SDCT images, which suggests that additional training on novel CT technology may improve the generalizability of this DL algorithm.

PMID:38401407 | DOI:10.1016/j.ejrad.2024.111361

Categories: Literature Watch

A cognitive deep learning approach for medical image processing

Sat, 2024-02-24 06:00

Sci Rep. 2024 Feb 24;14(1):4539. doi: 10.1038/s41598-024-55061-1.

ABSTRACT

In ophthalmic diagnostics, achieving precise segmentation of retinal blood vessels is a critical yet challenging task, primarily due to the complex nature of retinal images. The intricacies of these images often hinder the accuracy and efficiency of segmentation processes. To overcome these challenges, we introduce the cognitive DL retinal blood vessel segmentation (CoDLRBVS), a novel hybrid model that synergistically combines the deep learning capabilities of the U-Net architecture with a suite of advanced image processing techniques. This model uniquely integrates a preprocessing phase using a matched filter (MF) for feature enhancement and a post-processing phase employing morphological techniques (MT) for refining the segmentation output. Also, the model incorporates multi-scale line detection and scale space methods to enhance its segmentation capabilities. Hence, CoDLRBVS leverages the strengths of these combined approaches within the cognitive computing framework, endowing the system with human-like adaptability and reasoning. This strategic integration enables the model to emphasize blood vessels, segment them accurately and efficiently, and proficiently detect vessels of varying sizes. CoDLRBVS achieves a notable mean accuracy of 96.7%, precision of 96.9%, sensitivity of 99.3%, and specificity of 80.4% across all of the studied datasets, including DRIVE, STARE, HRF, retinal blood vessel and Chase-DB1. CoDLRBVS has been compared with different models, and the resulting metrics surpass the compared models and establish a new benchmark in retinal vessel segmentation. The success of CoDLRBVS underscores its significant potential in advancing medical image processing, particularly in the realm of retinal blood vessel segmentation.

PMID:38402321 | DOI:10.1038/s41598-024-55061-1

Categories: Literature Watch

Novel antimicrobial peptides against Cutibacterium acnes designed by deep learning

Sat, 2024-02-24 06:00

Sci Rep. 2024 Feb 24;14(1):4529. doi: 10.1038/s41598-024-55205-3.

ABSTRACT

The increasing prevalence of antibiotic resistance in Cutibacterium acnes (C. acnes) requires the search for alternative therapeutic strategies. Antimicrobial peptides (AMPs) offer a promising avenue for the development of new treatments targeting C. acnes. In this study, to design peptides with the specific inhibitory activity against C. acnes, we employed a deep learning pipeline with generators and classifiers, using transfer learning and pretrained protein embeddings, trained on publicly available data. To enhance the training data specific to C. acnes inhibition, we constructed a phylogenetic tree. A panel of 42 novel generated linear peptides was then synthesized and experimentally evaluated for their antimicrobial selectivity and activity. Five of them demonstrated their high potency and selectivity against C. acnes with MIC of 2-4 µg/mL. Our findings highlight the potential of these designed peptides as promising candidates for anti-acne therapeutics and demonstrate the power of computational approaches for the rational design of targeted antimicrobial peptides.

PMID:38402320 | DOI:10.1038/s41598-024-55205-3

Categories: Literature Watch

Deep Learning for Perfusion Cerebral Blood Flow (CBF) and Volume (CBV) Predictions and Diagnostics

Sat, 2024-02-24 06:00

Ann Biomed Eng. 2024 Feb 24. doi: 10.1007/s10439-024-03471-7. Online ahead of print.

ABSTRACT

Dynamic susceptibility contrast magnetic resonance perfusion (DSC-MRP) is a non-invasive imaging technique for hemodynamic measurements. Various perfusion parameters, such as cerebral blood volume (CBV) and cerebral blood flow (CBF), can be derived from DSC-MRP, hence this non-invasive imaging protocol is widely used clinically for the diagnosis and assessment of intracranial pathologies. Currently, most institutions use commercially available software to compute the perfusion parametric maps. However, these conventional methods often have limitations, such as being time-consuming and sensitive to user input, which can lead to inconsistent results; this highlights the need for a more robust and efficient approach like deep learning. Using the relative cerebral blood volume (rCBV) and relative cerebral blood flow (rCBF) perfusion maps generated by FDA-approved software, we trained a multistage deep learning model. The model, featuring a combination of a 1D convolutional neural network (CNN) and a 2D U-Net encoder-decoder network, processes each 4D MRP dataset by integrating temporal and spatial features of the brain for voxel-wise perfusion parameters prediction. An auxiliary model, with similar architecture, but trained with truncated datasets that had fewer time-points, was designed to explore the contribution of temporal features. Both qualitatively and quantitatively evaluated, deep learning-generated rCBV and rCBF maps showcased effective integration of temporal and spatial data, producing comprehensive predictions for the entire brain volume. Our deep learning model provides a robust and efficient approach for calculating perfusion parameters, demonstrating comparable performance to FDA-approved commercial software, and potentially mitigating the challenges inherent to traditional techniques.

PMID:38402314 | DOI:10.1007/s10439-024-03471-7

Categories: Literature Watch

A novel neural network-based framework to estimate oil and gas pipelines life with missing input parameters

Sat, 2024-02-24 06:00

Sci Rep. 2024 Feb 24;14(1):4511. doi: 10.1038/s41598-024-54964-3.

ABSTRACT

Dry gas pipelines can encounter various operational, technical, and environmental issues, such as corrosion, leaks, spills, restrictions, and cyber threats. To address these difficulties, proactive maintenance and management and a new technological strategy are needed to increase safety, reliability, and efficiency. A novel neural network model for forecasting the life of a dry gas pipeline system exposed to a harsh environment, and for detecting the metal loss dimension class, is presented in this study to handle missing data. The proposed strategy blends the strength of deep learning techniques with industry-specific expertise. The main advantage of this study is predicting pipeline life while simultaneously predicting the dimension classification of metal loss, employing a Bayesian regularization-based neural network framework when there are missing inputs in the datasets. The proposed intelligent model, trained on four pipeline datasets of a dry gas pipeline system, can predict the health condition of pipelines with high accuracy, even if there are missing parameters in the dataset. The proposed model using neural network technology generated satisfactory results in terms of numerical performance, with MSE and R2 values close to 0 and 1, respectively. A few cases with missing input data are carried out, and the missing data is forecasted for each case. Then, a model is developed to predict the life condition of pipelines with the predicted missing input variables. The findings reveal that the model has the potential for real-world applications in the oil and gas sector for estimating the health condition of pipelines, even if there are missing input parameters. Additionally, multi-model comparative analysis and sensitivity analysis are incorporated, offering an extensive comprehension of multi-model prediction abilities and beneficial insights into the impact of various input variables on model outputs, thereby improving the interpretability and reliability of our results. The proposed framework could support business planning by lowering the chance of severe accidents and environmental harm through improved safety and reliability.

PMID:38402261 | DOI:10.1038/s41598-024-54964-3

Categories: Literature Watch

Prevalence and risk factors analysis of postpartum depression at early stage using hybrid deep learning model

Sat, 2024-02-24 06:00

Sci Rep. 2024 Feb 24;14(1):4533. doi: 10.1038/s41598-024-54927-8.

ABSTRACT

Postpartum Depression Disorder (PPDD) is a prevalent mental health condition that can lead to severe depression and suicide attempts. Tackling PPDD requires prompt action: quick recognition and accurate analysis of the probability factors associated with the condition. The primary aim of our research is to investigate the feasibility of anticipating an individual's mental state by separating individuals with depression from those without, using a dataset of text and audio recordings from patients diagnosed with PPDD. This research proposes a hybrid PPDD framework that combines an Improved Bi-directional Long Short-Term Memory (IBi-LSTM) network with Transfer Learning (TL) based on two Convolutional Neural Network (CNN) architectures, CNN-text and CNN-audio. In the proposed model, the CNN branches use TL to extract crucial knowledge from text and audio characteristics, while the improved Bi-LSTM module combines the text and audio streams to capture intricate temporal relationships; an attention mechanism is incorporated to augment the effectiveness of the Bi-LSTM scheme. An experimental analysis is conducted on the PPDD textual and speech audio dataset collected from the UCI repository. It includes textual features such as age, women's health tracks, medical histories, demographic information, daily life metrics, and psychological evaluations, as well as speech records of PPDD patients. Data pre-processing is applied to maintain data integrity and achieve reliable model performance. The proposed model achieves better precision, recall, accuracy, and F1-score than existing deep learning models, including VGG-16, Base-CNN, and CNN-LSTM. These metrics indicate the model's ability to differentiate between women at risk of PPDD and those who are not.
In addition, a feature importance analysis demonstrates that specific risk factors substantially impact the prediction of PPDD. The findings establish a basis for more precise and prompt assessment of PPDD risk, which may ultimately lead to earlier interventions and the establishment of support networks for women who are susceptible to PPDD.
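The fusion-plus-attention step described in the abstract above can be illustrated with a minimal numerical sketch. Everything here is an assumption for illustration: the per-time-step branch outputs are random stand-ins for the CNN-text/CNN-audio and IBi-LSTM features, and the additive attention shown is one common variant, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_text, d_audio = 10, 8, 6

text_feats = rng.normal(size=(T, d_text))    # stand-in for CNN-text outputs
audio_feats = rng.normal(size=(T, d_audio))  # stand-in for CNN-audio outputs

# Fuse the two modalities per time step, then attention-pool over time.
fused = np.concatenate([text_feats, audio_feats], axis=1)   # (T, 14)

w_att = rng.normal(size=fused.shape[1])      # learned in a real model
scores = fused @ w_att
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                         # softmax attention weights
context = alpha @ fused                      # (14,) attended summary vector

# Logistic head: probability the sequence belongs to the PPDD class.
w_out = rng.normal(size=fused.shape[1])
p_ppdd = 1.0 / (1.0 + np.exp(-(context @ w_out)))
```

The attention weights let the classifier emphasize the time steps where the combined text and audio features are most informative, rather than averaging all steps equally.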

PMID:38402249 | DOI:10.1038/s41598-024-54927-8

Categories: Literature Watch

Rapid deep learning-assisted predictive diagnostics for point-of-care testing

Sat, 2024-02-24 06:00

Nat Commun. 2024 Feb 24;15(1):1695. doi: 10.1038/s41467-024-46069-2.

ABSTRACT

Prominent techniques such as real-time polymerase chain reaction (RT-PCR), enzyme-linked immunosorbent assay (ELISA), and rapid kits are currently being explored to both enhance sensitivity and reduce assay time in diagnostic testing. Existing commercial molecular methods typically take several hours, while immunoassays range from tens of minutes to several hours. Rapid diagnostics are crucial in point-of-care testing (POCT). We propose an approach that integrates a time-series deep learning architecture with AI-based verification for enhanced analysis of lateral flow assay results, applicable to both infectious diseases and non-infectious biomarkers. In blind tests using clinical samples, our method achieved diagnostic times as short as 2 minutes, exceeding the accuracy of human analysis at 15 minutes. Furthermore, our technique significantly reduces assay time to just 1-2 minutes in the POCT setting. This advancement could greatly enhance POCT diagnostics, enabling both healthcare professionals and non-experts to make rapid, accurate decisions.
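The early-decision idea in the abstract above (classify from the first ~2 minutes of a lateral flow signal instead of waiting the full 15) can be sketched as follows. The synthetic intensity curves and the plain logistic classifier are illustrative stand-ins under stated assumptions, not the authors' time-series architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def assay_curve(positive, n=90):
    """Synthetic test-line intensity sampled every ~10 s over 15 minutes."""
    t = np.linspace(0.0, 15.0, n)
    level = 1.0 if positive else 0.05      # made-up saturation levels
    return level * (1.0 - np.exp(-t / 3.0)) + rng.normal(scale=0.02, size=n)

k = 12                                     # keep only the first ~2 minutes
X = np.array([assay_curve(i % 2 == 1)[:k] for i in range(100)])
y = np.array([i % 2 for i in range(100)], dtype=float)
Xc = X - X.mean(axis=0)                    # center so a bias-free model suffices

# Plain logistic regression trained by gradient descent on truncated curves.
w = np.zeros(k)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xc @ w)))
    w -= 0.1 * Xc.T @ (p - y) / len(y)

acc = (((1.0 / (1.0 + np.exp(-(Xc @ w)))) > 0.5) == (y == 1)).mean()
```

Even this toy model separates the classes from the truncated signal, because the positive curve's early rise already distinguishes it; the paper's contribution is doing this reliably on real, noisy clinical readouts.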

PMID:38402240 | DOI:10.1038/s41467-024-46069-2

Categories: Literature Watch

Artificial intelligence for radiographic imaging detection of caries lesions: a systematic review

Sat, 2024-02-24 06:00

BMC Oral Health. 2024 Feb 24;24(1):274. doi: 10.1186/s12903-024-04046-7.

ABSTRACT

BACKGROUND: The aim of this systematic review is to evaluate the diagnostic performance of Artificial Intelligence (AI) models designed for the detection of caries lesions (CL).

MATERIALS AND METHODS: An electronic literature search was conducted on PubMed, Web of Science, SCOPUS, LILACS and Embase databases for retrospective, prospective and cross-sectional studies published until January 2023, using the following keywords: artificial intelligence (AI), machine learning (ML), deep learning (DL), artificial neural networks (ANN), convolutional neural networks (CNN), deep convolutional neural networks (DCNN), radiology, detection, diagnosis and dental caries (DC). The quality assessment was performed using the guidelines of QUADAS-2.

RESULTS: Twenty articles that met the selection criteria were evaluated. Five studies were performed on periapical radiographs, nine on bitewings, and six on orthopantomography. The number of imaging examinations included ranged from 15 to 2900. Four studies investigated ANN models, fifteen CNN models, and two DCNN models. Twelve were retrospective studies, six cross-sectional and two prospective. The following diagnostic performance was achieved in detecting CL: sensitivity from 0.44 to 0.86, specificity from 0.85 to 0.98, precision from 0.50 to 0.94, PPV (Positive Predictive Value) 0.86, NPV (Negative Predictive Value) 0.95, accuracy from 0.73 to 0.98, area under the curve (AUC) from 0.84 to 0.98, intersection over union of 0.3-0.4 and 0.78, Dice coefficient 0.66 and 0.88, F1-score from 0.64 to 0.92. According to the QUADAS-2 evaluation, most studies exhibited a low risk of bias.
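For reference, the metrics quoted in the results above follow the standard confusion-matrix definitions. The counts below are made up for illustration and do not come from any reviewed study.

```python
# Illustrative confusion-matrix counts for binary caries detection (made up).
tp, fp, fn, tn = 86, 6, 14, 94

sensitivity = tp / (tp + fn)                      # recall
specificity = tn / (tn + fp)
precision = tp / (tp + fp)                        # PPV
npv = tn / (tn + fn)                              # NPV
accuracy = (tp + tn) / (tp + fp + fn + tn)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
# For binary detection masks, the Dice coefficient coincides with F1.
```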

CONCLUSION: AI-based models have demonstrated good diagnostic performance and could be an important aid in CL detection. Limitations of these studies relate to the size and heterogeneity of the datasets. Future studies should rely on comparable, large, and clinically meaningful datasets.

PROTOCOL: PROSPERO identifier: CRD42023470708.

PMID:38402191 | DOI:10.1186/s12903-024-04046-7

Categories: Literature Watch
