Deep learning

Predicting Risk of Mortality in Pediatric ICU Based on Ensemble Step-Wise Feature Selection

Fri, 2024-03-15 06:00

Health Data Sci. 2021 May 31;2021:9365125. doi: 10.34133/2021/9365125. eCollection 2021.

ABSTRACT

Background. Prediction of mortality risk in intensive care units (ICU) is an important task. Data-driven methods such as scoring systems, machine learning methods, and deep learning methods have been investigated for a long time. However, few data-driven methods have been developed specifically for the pediatric ICU. In this paper, we aim to fill this gap by building a simple yet effective linear machine learning model from a number of hand-crafted features for mortality prediction in the pediatric ICU. Methods. We use a recently released, publicly available pediatric ICU dataset named Pediatric Intensive Care (PIC) from the Children's Hospital of Zhejiang University School of Medicine in China. Unlike previous sophisticated machine learning methods, we want our method to remain simple enough to be easily understood by clinical staff. Thus, an ensemble step-wise feature ranking and selection method is proposed to select a small subset of effective features from the entire feature set. A logistic regression classifier is built upon the selected features for mortality prediction. Results. The final predictive linear model with 11 features achieves a 0.7531 ROC-AUC score on the hold-out test set, which is comparable with a logistic regression classifier using all 397 features (0.7610 ROC-AUC score) and is higher than the existing well-known pediatric mortality risk scorer PRISM III (0.6895 ROC-AUC score). Conclusions. Our method improves feature ranking and selection by utilizing an ensemble method while keeping a simple linear form of the predictive model, and therefore achieves better generalizability and performance on mortality prediction in the pediatric ICU.
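The step-wise selection the abstract describes can be pictured as greedy forward selection: repeatedly add the feature that most improves a subset-quality score and stop when nothing helps. A minimal sketch, with a toy scorer standing in for the paper's ensemble ranking plus logistic regression (all feature names and utilities below are invented):

```python
# Hypothetical sketch of step-wise (greedy forward) feature selection.
# `score` is a stand-in for the paper's actual subset evaluation.

def forward_select(features, score, k):
    """Greedily pick up to k features, each step adding the one that
    most improves the subset score; stop early if nothing improves."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy scorer: each feature has an individual utility, and redundant
# features (same group) add nothing beyond the best one in the group.
UTILITY = {"lactate": 0.6, "ph": 0.5, "hr": 0.3, "hr_max": 0.3}
GROUP = {"lactate": "a", "ph": "b", "hr": "c", "hr_max": "c"}

def toy_score(subset):
    best_per_group = {}
    for f in subset:
        g = GROUP[f]
        best_per_group[g] = max(best_per_group.get(g, 0.0), UTILITY[f])
    return sum(best_per_group.values())

picked = forward_select(UTILITY, toy_score, k=3)
# redundant "hr_max" is never worth adding once "hr" is in
```

The same greedy loop works with any scorer, e.g. cross-validated ROC-AUC of a logistic regression refit on each candidate subset.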

PMID:38487508 | PMC:PMC10880178 | DOI:10.34133/2021/9365125

Categories: Literature Watch

Advances in Deep Learning-Based Medical Image Analysis

Fri, 2024-03-15 06:00

Health Data Sci. 2021 May 19;2021:8786793. doi: 10.34133/2021/8786793. eCollection 2021.

ABSTRACT

Importance. With the booming growth of artificial intelligence (AI), especially the recent advancements in deep learning, utilizing advanced deep learning-based methods for medical image analysis has become an active research area in both industry and academia. This paper reviewed the recent progress of deep learning research in medical image analysis and clinical applications. It also discussed the existing problems in the field and provided possible solutions and future directions. Highlights. This paper reviewed the advancement of convolutional neural network-based techniques in clinical applications. More specifically, state-of-the-art clinical applications include four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models performed well in medical image analysis, but it cannot be ignored that algorithms derived from small-scale medical datasets impede clinical applicability. Future directions could include federated learning, benchmark dataset collection, and utilizing domain subject knowledge as priors. Conclusion. Recent advanced deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advancements that can alleviate the high demand for high-quality large-scale datasets could be one of the future developments in this area.

PMID:38487506 | PMC:PMC10880179 | DOI:10.34133/2021/8786793


2.5D UNet with context-aware feature sequence fusion for accurate esophageal tumor semantic segmentation

Thu, 2024-03-14 06:00

Phys Med Biol. 2024 Mar 14. doi: 10.1088/1361-6560/ad3419. Online ahead of print.

ABSTRACT

Segmenting esophageal tumors from computed tomography (CT) sequence images can assist doctors in diagnosing and treating patients with this malignancy. However, accurately extracting esophageal tumor features from CT images often presents challenges due to the tumors' small area, variable position and shape, and low contrast with surrounding tissues. As a result, current methods do not achieve the level of accuracy required for practical applications. To address this problem, we propose a 2.5D Context-Aware Feature Sequence Fusion UNet (2.5D CFSF-UNet) model for esophageal tumor segmentation in CT sequence images. Specifically, we embed Intra-slice Multiscale Attention Feature Fusion (Intra-slice MAFF) in each skip connection of UNet to improve feature learning capabilities, better expressing the differences between anatomical structures within CT sequence images. Additionally, the Inter-slice Context Fusion Block (Inter-slice CFB) is utilized in the final layer of UNet to enhance the depiction of context features between CT slices, thereby preventing the loss of structural information between slices. Experiments are conducted on a dataset of 430 esophageal tumor patients. The results show an 87.13% dice similarity coefficient, a 79.71% intersection over union (IOU), and a 2.4758 mm Hausdorff distance, which demonstrates that our approach can improve contouring consistency and can be applied in clinical settings.
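The "2.5D" idea is to give the network limited inter-slice context without full 3D cost: each 2D slice is stacked with its neighbours. A hypothetical sketch of that input construction (the paper's actual pipeline is more elaborate; boundary handling by clamping is an assumption here):

```python
# Hypothetical sketch of 2.5D input construction: each slice is stacked
# with `context` neighbours on either side, clamping at the volume edges.

def make_2p5d_stacks(volume, context=1):
    """volume: ordered list of 2D slices; returns one
    (2*context+1)-slice stack per slice."""
    n = len(volume)
    stacks = []
    for i in range(n):
        stack = [volume[min(max(i + d, 0), n - 1)]
                 for d in range(-context, context + 1)]
        stacks.append(stack)
    return stacks

volume = ["s0", "s1", "s2", "s3"]  # placeholder slices
stacks = make_2p5d_stacks(volume, context=1)
# stacks[0] == ["s0", "s0", "s1"]; stacks[2] == ["s1", "s2", "s3"]
```

In practice each placeholder would be a 2D array and the stack would form the channel dimension of the network input.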

PMID:38484399 | DOI:10.1088/1361-6560/ad3419


Automatic Detection and Tracking of Anatomical Landmarks in Transesophageal Echocardiography for Quantification of Left Ventricular Function

Thu, 2024-03-14 06:00

Ultrasound Med Biol. 2024 Mar 13:S0301-5629(24)00031-0. doi: 10.1016/j.ultrasmedbio.2024.01.017. Online ahead of print.

ABSTRACT

OBJECTIVE: Evaluation of left ventricular (LV) function in critical care patients is useful for guidance of therapy and early detection of LV dysfunction, but the tools currently available are too time-consuming. To resolve this issue, we previously proposed a method for the continuous and automatic quantification of global LV function in critical care patients based on the detection and tracking of anatomical landmarks on transesophageal heart ultrasound. In the present study, our aim was to improve the performance of mitral annulus detection in transesophageal echocardiography (TEE).

METHODS: We investigated several state-of-the-art networks for both the detection and tracking of the mitral annulus in TEE. We integrated the networks into a pipeline for automatic assessment of LV function through estimation of the mitral annular plane systolic excursion (MAPSE), called autoMAPSE. TEE recordings from a total of 245 patients were collected from St. Olav's University Hospital and used to train and test the respective networks. We evaluated the agreement between autoMAPSE estimates and manual references annotated by expert echocardiographers in 30 Echolab patients and 50 critical care patients. Furthermore, we proposed a prototype of autoMAPSE for clinical integration and tested it in critical care patients in the intensive care unit.

RESULTS: Compared with manual references, we achieved a mean difference of 0.8 (95% limits of agreement: -2.9 to 4.7) mm in Echolab patients, with a feasibility of 85.7%. In critical care patients, we reached a mean difference of 0.6 (95% limits of agreement: -2.3 to 3.5) mm and a feasibility of 88.1%. The clinical prototype of autoMAPSE achieved real-time performance.
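The mean difference with 95% limits of agreement quoted above follows the Bland-Altman convention: mean of the paired differences ± 1.96 times their standard deviation. A minimal sketch with invented MAPSE measurements:

```python
# Bland-Altman-style limits of agreement; all measurement values below
# are made up for illustration.
import math

def limits_of_agreement(auto, manual):
    diffs = [a - m for a, m in zip(auto, manual)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

auto_mapse = [12.0, 14.5, 11.0, 13.0]    # hypothetical autoMAPSE, mm
manual_mapse = [11.5, 14.0, 11.5, 12.5]  # hypothetical expert reference
mean_diff, lower, upper = limits_of_agreement(auto_mapse, manual_mapse)
# mean_diff == 0.25 mm, limits roughly (-0.73, 1.23) mm
```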

CONCLUSION: Automatic quantification of LV function had high feasibility in clinical settings. The agreement with manual references was comparable to inter-observer variability of clinical experts.

PMID:38485534 | DOI:10.1016/j.ultrasmedbio.2024.01.017


Deep learning model to predict lupus nephritis renal flare based on dynamic multivariable time-series data

Thu, 2024-03-14 06:00

BMJ Open. 2024 Mar 14;14(3):e071821. doi: 10.1136/bmjopen-2023-071821.

ABSTRACT

OBJECTIVES: To develop an interpretable deep learning model of lupus nephritis (LN) relapse prediction based on dynamic multivariable time-series data.

DESIGN: A single-centre, retrospective cohort study in China.

SETTING: A Chinese central tertiary hospital.

PARTICIPANTS: The cohort study consisted of 1694 LN patients who had been registered in the Nanjing Glomerulonephritis Registry at the National Clinical Research Center of Kidney Diseases, Jinling Hospital from January 1985 to December 2010.

METHODS: We developed a deep learning algorithm to predict LN relapse from 59 features, including demographic, clinical, immunological, pathological and therapeutic characteristics that were collected for baseline analysis. A total of 32 227 data points were collected by the sliding window method and randomly divided into training (80%), validation (10%) and testing (10%) sets. The resulting model is an interpretable multivariable long short-term memory network for LN relapse risk prediction that accounts for censored time-series data, built on the cohort of 1694 LN patients. A mixture attention mechanism was deployed to capture variable interactions at different time points and estimate the temporal importance of the variables. Model performance was assessed by the C-index (concordance index).
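The sliding-window construction mentioned above turns each patient's visit sequence into fixed-length samples: a window of consecutive visits predicts the outcome at the next visit. A hypothetical sketch (window size and the "next visit as target" convention are assumptions for illustration):

```python
# Hypothetical sliding-window sample generation for time-series models.

def sliding_windows(visits, size):
    """visits: chronological list of per-visit feature vectors.
    Yields (window, next_visit) training pairs."""
    for start in range(len(visits) - size):
        yield visits[start:start + size], visits[start + size]

visits = ["v0", "v1", "v2", "v3", "v4"]  # placeholder visit records
samples = list(sliding_windows(visits, size=3))
# two samples: (["v0","v1","v2"], "v3") and (["v1","v2","v3"], "v4")
```

Applied per patient, a long follow-up contributes many overlapping samples, which is how 1694 patients can yield 32 227 data points.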

RESULTS: The median follow-up time since remission was 4.1 (IQR, 1.7-6.7) years. The interpretable deep learning model based on dynamic multivariable time-series data achieved the best performance, with a C-index of 0.897, among models using only variables at the point of remission or time-variant variables. The importance of urinary protein, serum albumin and serum C3 showed time dependency in the model, that is, their contributions to the risk prediction increased over time.

CONCLUSIONS: Deep learning algorithms can effectively learn through time-series data to develop a predictive model for LN relapse. The model provides accurate predictions of LN relapse for different renal disease stages, which could be used in clinical practice to guide physicians on the management of LN patients.

PMID:38485471 | DOI:10.1136/bmjopen-2023-071821


Validation of a deep learning model for automatic detection and quantification of five OCT critical retinal features associated with neovascular age-related macular degeneration

Thu, 2024-03-14 06:00

Br J Ophthalmol. 2024 Mar 14:bjo-2023-324647. doi: 10.1136/bjo-2023-324647. Online ahead of print.

ABSTRACT

PURPOSE: To develop and validate a deep learning model for the segmentation of five retinal biomarkers associated with neovascular age-related macular degeneration (nAMD).

METHODS: 300 optical coherence tomography volumes from subject eyes with nAMD were collected. Images were manually segmented for the presence of five crucial nAMD features: intraretinal fluid, subretinal fluid, subretinal hyperreflective material, drusen/drusenoid pigment epithelium detachment (PED) and neovascular PED. A deep learning architecture based on a U-Net was trained to perform automatic segmentation of these retinal biomarkers and evaluated on the sequestered data. The main outcome measures were receiver operating characteristic curves for detection, summarised using the area under the curves (AUCs) both on a per slice and per volume basis, correlation score, enface topography overlap (reported as two-dimensional (2D) correlation score) and Dice coefficients.

RESULTS: The model obtained a mean (±SD) AUC of 0.93 (±0.04) per slice and 0.88 (±0.07) per volume for fluid detection. The correlation score (R2) between automatic and manual segmentation obtained by the model resulted in a mean (±SD) of 0.89 (±0.05). The mean (±SD) 2D correlation score was 0.69 (±0.04). The mean (±SD) Dice score resulted in 0.61 (±0.10).

CONCLUSIONS: We present a fully automated segmentation model for five features related to nAMD that performs at the level of experienced graders. The application of this model will open opportunities for the study of morphological changes and treatment efficacy in real-world settings. Furthermore, it can facilitate structured reporting in the clinic and reduce subjectivity in clinicians' assessments.

PMID:38485214 | DOI:10.1136/bjo-2023-324647


Results of an AI-Based Image Review System to Detect Patient Misalignment Errors in a Multi-Institutional Database of CBCT-Guided Radiotherapy Treatments

Thu, 2024-03-14 06:00

Int J Radiat Oncol Biol Phys. 2024 Mar 12:S0360-3016(24)00392-4. doi: 10.1016/j.ijrobp.2024.02.065. Online ahead of print.

ABSTRACT

PURPOSE: Present knowledge of patient setup and alignment errors in image-guided radiotherapy (IGRT) relies on voluntary reporting, which is thought to underestimate error frequencies. A manual retrospective patient-setup misalignment error search is infeasible due to the bulk of cases to be reviewed. We applied a deep learning-based misalignment error detection algorithm (EDA) to perform a fully-automated retrospective error search of clinical IGRT databases and determine an absolute gross patient misalignment error rate.

METHODS: The EDA was developed to analyze the registration between planning scans and pre-treatment CBCT scans, outputting a misalignment score ranging from 0 (most unlikely) to 1 (most likely). The algorithm was trained using simulated translational errors on a dataset obtained from 680 patients treated at two radiotherapy clinics between 2017 and 2022. A receiver operating characteristic analysis was performed to obtain target thresholds. A DICOM Query and Retrieval software was integrated with the EDA to interact with the clinical database and fully automate data retrieval and analysis during a retrospective error search from 2016-2017 and 2021-2022 for the two institutions, respectively. Registrations were flagged for human review using both a hard-thresholding method and a prediction trending analysis over each individual patient's treatment course. Flagged registrations were manually reviewed and categorized as errors (>1cm misalignment at the target) or non-errors.
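The two flagging rules described above can be sketched as a hard threshold on the per-registration score plus a simple per-patient trend check; the threshold values and the "jump above running mean" rule below are invented stand-ins for the paper's trending analysis:

```python
# Hypothetical flagging of registrations from misalignment scores.

def flag_registrations(scores, hard_thresh=0.9, jump=0.4):
    """scores: chronological misalignment scores (0 to 1) for one
    patient's course; returns indices flagged for human review."""
    flags = []
    for i, s in enumerate(scores):
        if s >= hard_thresh:
            flags.append(i)          # hard-threshold rule
            continue
        if i > 0:
            running_mean = sum(scores[:i]) / i
            if s - running_mean >= jump:
                flags.append(i)      # sudden-jump (trend) rule
    return flags

course = [0.05, 0.10, 0.08, 0.95, 0.55]  # made-up per-fraction scores
flagged = flag_registrations(course)
# only fraction 3 crosses the hard threshold; fraction 4's jump is
# diluted by the earlier spike in the running mean
```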

RESULTS: A total of 17,612 registrations were analyzed by the EDA, resulting in 7.7% flagged events. Three previously reported errors were successfully flagged by the EDA and four previously-unreported vertebral body misalignment errors were discovered during case reviews. False positive cases often displayed substantial image artifacts, patient rotation, and soft-tissue anatomy changes.

CONCLUSION: Our results validated the clinical utility of the EDA for bulk image reviews, and highlighted the reliability and safety of IGRT, with an absolute gross patient misalignment error rate of 0.04% ± 0.02% per delivered fraction.

PMID:38485098 | DOI:10.1016/j.ijrobp.2024.02.065


Deepm6A-MT: A deep learning-based method for identifying RNA N6-methyladenosine sites in multiple tissues

Thu, 2024-03-14 06:00

Methods. 2024 Mar 12:S1046-2023(24)00067-7. doi: 10.1016/j.ymeth.2024.03.004. Online ahead of print.

ABSTRACT

N6-methyladenosine (m6A) is the most prevalent, abundant, and conserved internal modification of eukaryotic messenger RNAs (mRNAs) and plays a crucial role in cellular processes. Although more than ten methods have been developed for m6A detection over the past decades, there is still room to improve predictive accuracy and efficiency. In this paper, we propose an improved method for predicting m6A modification sites, called Deepm6A-MT, based on a bi-directional gated recurrent unit (Bi-GRU) and convolutional neural networks (CNN). Deepm6A-MT has two input channels: one uses an embedding layer followed by the Bi-GRU and then the CNN, and the other uses one-hot encoding, dinucleotide one-hot encoding, and nucleotide chemical property codes. We trained and evaluated Deepm6A-MT by both 5-fold cross-validation and an independent test. The empirical tests showed that Deepm6A-MT achieves state-of-the-art performance. In addition, we conducted cross-species and cross-tissue tests to further verify the effectiveness and efficiency of Deepm6A-MT. Finally, for the convenience of academic research, we deployed Deepm6A-MT as a web server, which can be accessed at http://www.biolscience.cn/Deepm6A-MT/.
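One of the second-channel encodings named above, dinucleotide one-hot encoding, maps each overlapping pair of bases to one of 16 indicator positions. A minimal sketch (the pairing with the other encodings and the exact channel layout in Deepm6A-MT are not reproduced here):

```python
# Dinucleotide one-hot encoding of an RNA sequence.
from itertools import product

DINUCS = ["".join(p) for p in product("ACGU", repeat=2)]  # 16 pairs
INDEX = {d: i for i, d in enumerate(DINUCS)}

def dinuc_one_hot(rna):
    """Return one 16-dim indicator vector per overlapping dinucleotide."""
    vectors = []
    for i in range(len(rna) - 1):
        v = [0] * 16
        v[INDEX[rna[i:i + 2]]] = 1
        vectors.append(v)
    return vectors

enc = dinuc_one_hot("GACU")
# 3 overlapping pairs: GA, AC, CU
```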

PMID:38485031 | DOI:10.1016/j.ymeth.2024.03.004


Improved nested U-structure for accurate nailfold capillary segmentation

Thu, 2024-03-14 06:00

Microvasc Res. 2024 Mar 12:104680. doi: 10.1016/j.mvr.2024.104680. Online ahead of print.

ABSTRACT

Changes in the structure and function of nailfold capillaries may be indicators of numerous diseases. Noninvasive diagnostic tools are commonly used for the extraction of morphological information from segmented nailfold capillaries to study physiological and pathological changes therein. However, current segmentation methods for nailfold capillaries cannot accurately separate capillaries from the background, resulting in issues such as unclear segmentation boundaries. Therefore, improving the accuracy of nailfold capillary segmentation is necessary to facilitate more efficient clinical diagnosis and research. Herein, we propose a nailfold capillary image segmentation method based on a U2-Net backbone network combined with a Transformer structure. This method integrates the U2-Net and Transformer networks to establish a decoder-encoder network, which inserts Transformer layers into the nested two-layer U-shaped architecture of the U2-Net. This structure effectively extracts multiscale features within stages and aggregates multilevel features across stages to generate high-resolution feature maps. The experimental results demonstrate an overall accuracy of 98.23%, a Dice coefficient of 88.56%, and an IoU of 80.41% compared to the ground truth. Furthermore, our proposed method improves the overall accuracy by approximately 2%, 3%, and 5% compared to the original U2-Net, Res-Unet, and U-Net, respectively. These results indicate that the Transformer-U2Net network performs well in nailfold capillary image segmentation and provides more detailed and accurate information on the segmented nailfold capillary structure, which may aid clinicians in the more precise diagnosis and treatment of nailfold capillary-related diseases.
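The Dice coefficient and IoU reported here (and in several other abstracts above) are set-overlap metrics between the predicted and ground-truth masks. A minimal sketch on flat binary masks:

```python
# Dice and IoU for binary segmentation masks (flattened to 0/1 lists).

def dice_iou(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    psum, tsum = sum(pred), sum(truth)
    dice = 2 * inter / (psum + tsum)
    iou = inter / (psum + tsum - inter)
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_iou(pred, truth)
# intersection 2, so dice = 4/6 and iou = 2/4
```

The two always satisfy dice = 2·iou/(1+iou), which is why Dice scores run higher than IoU on the same masks.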

PMID:38484792 | DOI:10.1016/j.mvr.2024.104680


Efficiently improving the Wi-Fi-based human activity recognition, using auditory features, autoencoders, and fine-tuning

Thu, 2024-03-14 06:00

Comput Biol Med. 2024 Feb 27;172:108232. doi: 10.1016/j.compbiomed.2024.108232. Online ahead of print.

ABSTRACT

Human activity recognition (HAR) based on Wi-Fi signals has attracted significant attention due to its convenience and the availability of infrastructure and sensors. Channel State Information (CSI) measures how Wi-Fi signals propagate through the environment. However, many scenarios and applications have insufficient training data due to constraints such as cost, time, or resources. This poses a challenge for achieving high accuracy levels with machine learning techniques. In this study, multiple deep learning models for HAR were employed to achieve acceptable accuracy levels with much less training data than other methods. A pretrained encoder, taken from a Multi-Input Multi-Output Autoencoder (MIMO AE) trained on Mel Frequency Cepstral Coefficients (MFCC) of a small subset of data samples, was used for feature extraction. Then, fine-tuning was applied by adding the encoder as a fixed layer in the classifier, which was trained on a small fraction of the remaining data. The evaluation results (K-fold cross-validation with K = 5) showed that using only 30% of the training and validation data (equivalent to 24% of the total data), accuracy was improved by 17.7% compared to the case where the encoder was not used (an accuracy of 79.3% for the designed classifier versus 90.3% for the classifier with the fixed encoder). While higher accuracy is possible by making the pretrained encoder a trainable layer (up to 2.4% improvement) at a higher computational cost, this small gap demonstrates the effectiveness and efficiency of the proposed method for HAR using Wi-Fi signals.

PMID:38484697 | DOI:10.1016/j.compbiomed.2024.108232


Deep learning-assisted flavonoid-based fluorescent sensor array for the nondestructive detection of meat freshness

Thu, 2024-03-14 06:00

Food Chem. 2024 Mar 4;447:138931. doi: 10.1016/j.foodchem.2024.138931. Online ahead of print.

ABSTRACT

Gas sensors containing indicators have been widely used in meat freshness testing. However, concerns about the toxicity of indicators have prevented their commercialization. Here, we prepared three fluorescent sensors by complexing each flavonoid (fisetin, puerarin, daidzein) with a flexible film, forming a fluorescent sensor array. The fluorescent sensor array was used as a freshness indication label for packaged meat. Then, the images of the indication labels on the packaged meat under different freshness levels were collected by smartphones. A deep convolutional neural network (DCNN) model was built using the collected indicator label images and freshness labels as the dataset. Finally, the model was used to detect the freshness of meat samples, and the overall accuracy of the prediction model was as high as 97.1%. Unlike the TVB-N measurement, this method provides a nondestructive, real-time measurement of meat freshness.

PMID:38484548 | DOI:10.1016/j.foodchem.2024.138931


Circadian assessment of heart failure using explainable deep learning and novel multi-parameter polar images

Thu, 2024-03-14 06:00

Comput Methods Programs Biomed. 2024 Mar 6;248:108107. doi: 10.1016/j.cmpb.2024.108107. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Heart failure (HF) is a multi-faceted and life-threatening syndrome that affects more than 64.3 million people worldwide. Current gold-standard screening technique, echocardiography, neglects cardiovascular information regulated by the circadian rhythm and does not incorporate knowledge from patient profiles. In this study, we propose a novel multi-parameter approach to assess heart failure using heart rate variability (HRV) and patient clinical information.

METHODS: In this approach, features from 24-hour HRV and clinical information were combined into a single polar image and fed to a 2D deep learning model to infer the HF condition. The edges of the polar image correspond to the variation of different features over time, each of which carries information on the function of the heart, while the interior illustrates color-coded patient clinical information.
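The core geometric idea, mapping a 24-hour signal onto a circle so circadian patterns become shapes, can be sketched as hour-to-angle, value-to-radius conversion. A hypothetical sketch (the paper's actual image composition with multiple features and the color-coded interior is not reproduced):

```python
# Map 24 hourly feature values onto polar (x, y) coordinates.
import math

def polar_points(hourly_values):
    """hourly_values: 24 values, one per hour, scaled to [0, 1].
    Hour h maps to angle 2*pi*h/24; the value becomes the radius."""
    points = []
    for hour, r in enumerate(hourly_values):
        theta = 2 * math.pi * hour / 24
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

values = [1.0] * 24  # flat made-up signal -> points on the unit circle
pts = polar_points(values)
# hour 0 sits at (1, 0); hour 6 sits a quarter-turn away at (0, 1)
```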

RESULTS: Under a leave-one-subject-out cross-validation scheme and using 7,575 polar images from a multi-center cohort (American and Greek) of 303 coronary artery disease patients (median age: 58 years [50-65], median body mass index (BMI): 27.28 kg/m2 [24.91-29.41]), the model yielded mean values for the area under the receiver operating characteristics curve (AUC), sensitivity, specificity, normalized Matthews correlation coefficient (NMCC), and accuracy of 0.883, 90.68%, 95.19%, 0.93, and 92.62%, respectively. Moreover, interpretation of the model showed proper attention to key hourly intervals and clinical information for each HF stage.

CONCLUSIONS: The proposed approach could be a powerful early HF screening tool and a supplemental circadian enhancement to echocardiography which sets the basis for next-generation personalized healthcare.

PMID:38484409 | DOI:10.1016/j.cmpb.2024.108107


Cross noise level PET denoising with continuous adversarial domain generalization

Thu, 2024-03-14 06:00

Phys Med Biol. 2024 Mar 14. doi: 10.1088/1361-6560/ad341a. Online ahead of print.

ABSTRACT

Objective
Performing PET denoising within the image space proves effective in reducing the variance in PET images. In recent years, deep learning has demonstrated superior denoising performance, but models trained on a specific noise level typically fail to generalize well on different noise levels, due to inherent distribution shifts between inputs. The distribution shift usually results in bias in the denoised images. Our goal is to tackle such a problem using a domain generalization technique.
Approach
We propose to utilize the domain generalization technique with a novel feature-space continuous discriminator (CD) for adversarial training, using the fraction of events as a continuous domain label. The core idea is to enforce the extraction of noise-level-invariant features, thus minimizing the distribution divergence of the latent feature representation across different continuous noise levels and making the model generalize to arbitrary noise levels. We created three sets of 10%, 13-22% (uniformly randomly selected), or 25% fractions of events from 97 $^{18}$F-MK6240 tau PET studies of 60 subjects. For each set, we generated 20 noise realizations. Training, validation, and testing were implemented using 1400, 120, and 420 pairs of 3D image volumes from the same or different sets.
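The "fraction of events" noise realizations can be pictured as random event thinning: each recorded event is kept independently with probability equal to the target fraction. A hypothetical sketch on per-voxel counts (real PET realizations are drawn from list-mode data; this simplified count-thinning stand-in is an assumption):

```python
# Hypothetical low-count noise realization via binomial event thinning.
import random

def thin_counts(counts, fraction, seed=0):
    """counts: per-voxel event counts; returns one thinned realization
    keeping each event with probability `fraction`."""
    rng = random.Random(seed)
    thinned = []
    for c in counts:
        kept = sum(1 for _ in range(c) if rng.random() < fraction)
        thinned.append(kept)
    return thinned

full = [100, 200, 50]            # made-up full-count voxels
low = thin_counts(full, 0.25)    # ~25% fraction-of-events realization
```

Re-running with different seeds yields the independent realizations used to measure bias and standard deviation against the full-count reference.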
Main results
The proposed CD improves the denoising performance of our model trained in a 13-22% fraction set for testing in both 10% and 25% fraction sets, measured by bias and standard deviation using full-count images as references. In addition, our CD method can improve the SSIM and PSNR consistently for Alzheimer-related regions and the whole brain. 
Significance
To our knowledge, this is the first attempt to alleviate the performance degradation in cross-noise level denoising from the perspective of domain generalization. Our study is also a pioneer work of continuous domain generalization.

PMID:38484401 | DOI:10.1088/1361-6560/ad341a


CRISPR-M: Predicting sgRNA off-target effect using a multi-view deep learning network

Thu, 2024-03-14 06:00

PLoS Comput Biol. 2024 Mar 14;20(3):e1011972. doi: 10.1371/journal.pcbi.1011972. Online ahead of print.

ABSTRACT

Using the CRISPR-Cas9 system to perform base substitutions at the target site is a typical technique for genome editing, with potential applications in gene therapy and agricultural productivity. When the CRISPR-Cas9 system uses guide RNA to direct the Cas9 endonuclease to the target site, it may be misdirected to a potential off-target site, resulting in unintended genome editing. Although several computational methods have been proposed to predict off-target effects, there is still room for improvement in off-target effect prediction capability. In this paper, we present an effective approach called CRISPR-M with a new encoding scheme and a novel multi-view deep learning model to predict the sgRNA off-target effects for target sites containing indels and mismatches. CRISPR-M takes advantage of convolutional neural networks and bidirectional long short-term memory recurrent neural networks to construct a three-branch network over multiple views. Compared with existing methods, CRISPR-M demonstrates significant performance advantages on real-world datasets. Furthermore, experimental analysis of CRISPR-M under multiple metrics reveals its capability to extract features and validates its superiority in sgRNA off-target effect prediction.
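The simplest ingredient of any sgRNA/target-pair encoding is a position-wise view recording both bases plus a mismatch flag. A hypothetical sketch (CRISPR-M's actual scheme also handles indels and feeds several views; this shows only the basic pairing idea):

```python
# Hypothetical position-wise encoding of an aligned sgRNA/target pair.

def encode_pair(sgrna, target):
    """Returns one (sgRNA base, target base, mismatch) tuple per
    position of an equal-length aligned pair."""
    assert len(sgrna) == len(target)
    return [(s, t, int(s != t)) for s, t in zip(sgrna, target)]

pairs = encode_pair("GACGT", "GACTT")
mismatches = sum(m for _, _, m in pairs)
# one mismatch, at position 3 (G vs T)
```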

PMID:38483980 | DOI:10.1371/journal.pcbi.1011972


Deep learning in public health: Comparative predictive models for COVID-19 case forecasting

Thu, 2024-03-14 06:00

PLoS One. 2024 Mar 14;19(3):e0294289. doi: 10.1371/journal.pone.0294289. eCollection 2024.

ABSTRACT

The COVID-19 pandemic has had a significant impact on both the United Arab Emirates (UAE) and Malaysia, emphasizing the importance of developing accurate and reliable forecasting mechanisms to guide public health responses and policies. In this study, we compared several cutting-edge deep learning models, including Long Short-Term Memory (LSTM), bidirectional LSTM, Convolutional Neural Networks (CNN), hybrid CNN-LSTM, Multilayer Perceptrons, and Recurrent Neural Networks (RNN), to project COVID-19 cases in these regions. The models were calibrated and evaluated using a comprehensive dataset that includes confirmed case counts, demographic data, and relevant socioeconomic factors. To enhance their performance, Bayesian optimization techniques were employed, after which the models were re-evaluated to compare their effectiveness. Analytic approaches, both predictive and retrospective in nature, were used to interpret the data. Our primary objective was to determine the most effective model for predicting COVID-19 cases in the UAE and Malaysia. The findings indicate that the selected deep learning algorithms were proficient in forecasting COVID-19 cases, although their efficacy varied across models. After a thorough evaluation, the model architectures most suitable for the specific conditions in the UAE and Malaysia were identified. Our study contributes significantly to the ongoing efforts to combat the COVID-19 pandemic, providing crucial insights into the application of sophisticated deep learning algorithms for the precise and timely forecasting of COVID-19 cases. These insights hold substantial value for shaping public health strategies, enabling authorities to develop targeted, evidence-based interventions to manage the spread of the virus and its impact on the populations of the UAE and Malaysia. The study confirms the usefulness of deep learning methodologies in efficiently processing complex datasets and generating reliable projections, a capability of great importance in healthcare and professional settings.

PMID:38483948 | DOI:10.1371/journal.pone.0294289


Deep learning-based fully automated grading system for dry eye disease severity

Thu, 2024-03-14 06:00

PLoS One. 2024 Mar 14;19(3):e0299776. doi: 10.1371/journal.pone.0299776. eCollection 2024.

ABSTRACT

There is an increasing need for an objective grading system to evaluate the severity of dry eye disease (DED). In this study, a fully automated deep learning-based system for the assessment of DED severity was developed. Corneal fluorescein staining (CFS) images of DED patients from one hospital for system development (n = 1400) and from another hospital for external validation (n = 94) were collected. Three experts graded the CFS images using the NEI scale, and the median value was used as the ground truth. The system was developed in three steps: (1) corneal segmentation, (2) CFS candidate region classification, and (3) estimation of NEI grades by CFS density map generation. Also, two images taken on different days in 50 eyes (100 images) were compared to evaluate the probability of improvement or deterioration. The Dice coefficient of the segmentation model was 0.962. The correlation between the system and the ground truth data was 0.868 (p<0.001) and 0.863 (p<0.001) for the internal and external validation datasets, respectively. The agreement rate for improvement or deterioration was 88% (44/50). The fully automated deep learning-based grading system for DED severity can evaluate the CFS score with high accuracy and thus may have potential for clinical application.

PMID:38483911 | DOI:10.1371/journal.pone.0299776


PTransIPs: Identification of Phosphorylation Sites Enhanced by Protein PLM Embeddings

Thu, 2024-03-14 06:00

IEEE J Biomed Health Inform. 2024 Mar 14;PP. doi: 10.1109/JBHI.2024.3377362. Online ahead of print.

ABSTRACT

Phosphorylation is pivotal in numerous fundamental cellular processes and plays a significant role in the onset and progression of various diseases. The accurate identification of phosphorylation sites is crucial for unraveling the molecular mechanisms within cells and during viral infections, potentially leading to the discovery of novel therapeutic targets. In this study, we develop PTransIPs, a new deep learning framework for the identification of phosphorylation sites. Independent testing results demonstrate that PTransIPs outperforms existing state-of-the-art (SOTA) methods, achieving AUCs of 0.9232 and 0.9660 for the identification of phosphorylated S/T and Y sites, respectively. PTransIPs contributes in three aspects. 1) PTransIPs is the first to apply protein pre-trained language model (PLM) embeddings to this task. It utilizes ProtTrans and EMBER2 to extract sequence and structure embeddings, respectively, as additional model inputs, effectively addressing issues of dataset size and overfitting and thus enhancing model performance. 2) PTransIPs is based on the Transformer architecture, optimized through the integration of convolutional neural networks and the TIM loss function, providing practical insights for model design and training. 3) The encoding of amino acids in PTransIPs enables it to serve as a universal framework for other peptide bioactivity tasks, as shown by its excellent performance in the extended experiments of this paper. Our code, data and models are publicly available at https://github.com/StatXzy7/PTransIPs.

PMID:38483806 | DOI:10.1109/JBHI.2024.3377362

Categories: Literature Watch

Cross-Attention Enhanced Pyramid Multi-Scale Networks for Sensor-based Human Activity Recognition

Thu, 2024-03-14 06:00

IEEE J Biomed Health Inform. 2024 Mar 14;PP. doi: 10.1109/JBHI.2024.3377353. Online ahead of print.

ABSTRACT

Human Activity Recognition (HAR) has recently attracted widespread attention, with the effective application of this technology helping people in areas such as healthcare, smart homes, and gait analysis. Deep learning methods have shown remarkable performance in HAR. A pivotal challenge is the trade-off between recognition accuracy and computational efficiency, especially in resource-constrained mobile devices. This challenge necessitates the development of models that enhance feature representation capabilities without imposing additional computational burdens. Addressing this, we introduce a novel HAR model leveraging deep learning, ingeniously designed to navigate the accuracy-efficiency trade-off. The model comprises two innovative modules: 1) Pyramid Multi-scale Convolutional Network (PMCN), which is designed with a symmetric structure and is capable of obtaining a rich receptive field at a finer level through its multiscale representation capability; 2) Cross-Attention Mechanism, which establishes interrelationships among sensor dimensions, temporal dimensions, and channel dimensions, and effectively enhances useful information while suppressing irrelevant data. The proposed model is rigorously evaluated across four diverse datasets: UCI, WISDM, PAMAP2, and OPPORTUNITY. Additional ablation and comparative studies are conducted to comprehensively assess the performance of the model. Experimental results demonstrate that the proposed model achieves superior activity recognition accuracy while maintaining low computational overhead.
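The PMCN module obtains a rich receptive field by extracting features at several scales in parallel. As a toy illustration of multi-scale 1-D feature extraction on a sensor signal (uniform averaging kernels stand in for learned convolution weights; this is not the paper's architecture):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (really cross-correlation) of a signal with a kernel."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale_features(signal, kernel_sizes=(2, 3)):
    """Run kernels of several sizes over the same signal and collect
    all resulting feature maps, one per scale."""
    maps = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k  # uniform kernel as a stand-in for learned weights
        maps.append(conv1d(signal, kernel))
    return maps

signal = [1.0, 2.0, 3.0, 4.0]
print(multi_scale_features(signal))  # → [[1.5, 2.5, 3.5], [2.0, 3.0]]
```

Larger kernels summarize longer time windows; concatenating the maps gives the network both fine and coarse temporal views of the same activity signal.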

PMID:38483804 | DOI:10.1109/JBHI.2024.3377353

Categories: Literature Watch

Long-term Regional Influenza-like-illness Forecasting Using Exogenous Data

Thu, 2024-03-14 06:00

IEEE J Biomed Health Inform. 2024 Mar 14;PP. doi: 10.1109/JBHI.2024.3377529. Online ahead of print.

ABSTRACT

Disease forecasting is a longstanding problem for the research community, which aims at informing and improving decisions with the best available evidence. Specifically, interest in respiratory disease forecasting has increased dramatically since the beginning of the coronavirus pandemic, rendering the accurate prediction of influenza-like illness (ILI) a critical task. Although methods for short-term ILI forecasting and nowcasting have achieved good accuracy, their performance worsens for long-term ILI forecasts. Machine learning models have outperformed conventional forecasting approaches, enabling the use of diverse exogenous data sources such as social media, internet users' search query logs, and climate data. However, the most recent deep learning ILI forecasting models achieve state-of-the-art results using only historical occurrence data. Inspired by recent deep neural network architectures in time series forecasting, this work proposes the Regional Influenza-Like-Illness Forecasting (ReILIF) method for regional long-term ILI prediction. The proposed architecture takes advantage of diverse exogenous data, namely meteorological and population data, introducing an efficient intermediate fusion mechanism to combine the different types of information with the aim of capturing the variations of ILI from various views. The efficacy of the proposed approach compared to state-of-the-art ILI forecasting methods is confirmed by an extensive experimental study following standard evaluation measures.
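Intermediate fusion, in contrast to early fusion (raw concatenation of inputs) and late fusion (merging per-source predictions), encodes each data source separately and merges the intermediate representations before the forecasting head. The abstract does not specify the exact mechanism, so the sketch below is a generic illustration with hypothetical fixed weights standing in for learned encoders:

```python
def encode(features, weights):
    """Toy linear 'encoder': each row of weights produces one output unit."""
    return [sum(w * x for w, x in zip(row, features)) for row in weights]

def intermediate_fusion(ili_history, meteo, population):
    """Encode each source separately, then concatenate the intermediate
    representations; a forecasting head (omitted here) would consume the result."""
    z_ili = encode(ili_history, [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])
    z_met = encode(meteo, [[1.0, -1.0]])
    z_pop = encode(population, [[0.001]])
    return z_ili + z_met + z_pop  # fused intermediate representation

fused = intermediate_fusion([2.0, 4.0, 6.0], [10.0, 7.0], [1000.0])
print(fused)  # → [3.0, 5.0, 3.0, 1.0]
```

The appeal of this design is that each source keeps its own encoder suited to its statistics, while the downstream model still sees a single joint representation.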

PMID:38483802 | DOI:10.1109/JBHI.2024.3377529

Categories: Literature Watch

Advancing brain tumor classification through MTAP model: an innovative approach in medical diagnostics

Thu, 2024-03-14 06:00

Med Biol Eng Comput. 2024 Mar 14. doi: 10.1007/s11517-024-03064-5. Online ahead of print.

ABSTRACT

The early diagnosis of brain tumors is critical in healthcare, owing to the potentially life-threatening consequences that abnormal growths within the brain can pose to individuals. The accurate and early diagnosis of brain tumors enables prompt medical intervention. In this context, we have established a new model called MTAP to enable a highly accurate diagnosis of brain tumors. The MTAP model addresses dataset class imbalance by utilizing the ADASYN method, employs a network pruning technique to reduce unnecessary weights and nodes in the neural network, and incorporates the Avg-TopK pooling method for enhanced feature extraction. The primary goal of our research is to enhance the accuracy of brain tumor type detection, a critical aspect of medical imaging and diagnostics. The MTAP model introduces a novel classification strategy for brain tumors, leveraging the strength of deep learning methods and novel model refinement techniques. Following comprehensive experimental studies and meticulous design, the MTAP model has achieved a state-of-the-art accuracy of 99.69%. Our findings indicate that the use of deep learning and innovative model refinement techniques shows promise in facilitating the early detection of brain tumors. Analysis of the model's heat map revealed a notable focus on regions encompassing the parietal and temporal lobes.
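Avg-TopK pooling, as the name suggests, averages the K largest activations in each pooling window, interpolating between max pooling (K=1) and average pooling (K = window size). A minimal 1-D sketch of that idea (the paper applies it to 2-D feature maps; the window values and K below are illustrative):

```python
def avg_topk_pool(window, k):
    """Average of the k largest activations in a pooling window:
    k=1 recovers max pooling, k=len(window) recovers average pooling."""
    if not 1 <= k <= len(window):
        raise ValueError("k must be between 1 and the window size")
    top = sorted(window, reverse=True)[:k]
    return sum(top) / len(top)

window = [0.5, 1.0, 0.25, 0.75]
print(avg_topk_pool(window, 1))            # max pooling     → 1.0
print(avg_topk_pool(window, 2))            # avg of top-2    → 0.875
print(avg_topk_pool(window, len(window)))  # average pooling → 0.625
```

Compared with plain max pooling, keeping several strong activations makes the pooled feature less sensitive to a single noisy peak while still emphasizing salient responses.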

PMID:38483711 | DOI:10.1007/s11517-024-03064-5

Categories: Literature Watch
