Deep learning

SSF-DDI: a deep learning method utilizing drug sequence and substructure features for drug-drug interaction prediction

Tue, 2024-01-23 06:00

BMC Bioinformatics. 2024 Jan 23;25(1):39. doi: 10.1186/s12859-024-05654-4.

ABSTRACT

BACKGROUND: Drug-drug interactions (DDI) are prevalent in combination therapy, making it essential to identify and predict potential DDI. While various artificial intelligence methods can predict and identify potential DDI, they often overlook the sequence information of drug molecules and fail to comprehensively consider the contribution of molecular substructures to DDI.

RESULTS: In this paper, we propose a novel model for DDI prediction based on sequence and substructure features (SSF-DDI) to address these issues. Our model integrates drug sequence features and structural features from the drug molecule graph, providing richer information for DDI prediction and enabling a more comprehensive and accurate representation of drug molecules.
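
As a rough illustration of the two-branch design this abstract describes (a sequence encoder fused with substructure features), the sketch below uses PyTorch; all names, dimensions, and the fingerprint-style substructure input are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SequenceSubstructureFusion(nn.Module):
    """Toy two-branch drug encoder: a 1D CNN over tokenized SMILES plus a
    precomputed substructure vector (e.g. a fingerprint), fused to score a
    drug pair for interaction. Illustrative only."""
    def __init__(self, vocab_size=64, emb_dim=32, sub_dim=167, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.seq_cnn = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.sub_mlp = nn.Sequential(nn.Linear(sub_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(4 * hidden, 1)  # two drugs x two branches

    def encode(self, tokens, substructure):
        seq = self.seq_cnn(self.embed(tokens).transpose(1, 2)).squeeze(-1)
        return torch.cat([seq, self.sub_mlp(substructure)], dim=-1)

    def forward(self, tok_a, sub_a, tok_b, sub_b):
        pair = torch.cat([self.encode(tok_a, sub_a),
                          self.encode(tok_b, sub_b)], dim=-1)
        return torch.sigmoid(self.classifier(pair))  # probability of interaction
```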

CONCLUSION: The results of experiments and case studies have demonstrated that SSF-DDI significantly outperforms state-of-the-art DDI prediction models across multiple real datasets and settings. SSF-DDI performs better in predicting DDI involving unknown drugs, resulting in a 5.67% improvement in accuracy compared to state-of-the-art methods.

PMID:38262923 | DOI:10.1186/s12859-024-05654-4

Categories: Literature Watch

Deep Learning-Based Detection and Classification of Bone Lesions on Staging Computed Tomography in Prostate Cancer: A Development Study

Tue, 2024-01-23 06:00

Acad Radiol. 2024 Jan 22:S1076-6332(24)00008-4. doi: 10.1016/j.acra.2024.01.009. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs, and to compare its performance with that of radiologists.

MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36).
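
For reference, the segmentation metric named above can be computed as follows; this is a generic sketch of the Dice similarity coefficient, not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between two binary lesion masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))
```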

RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1-score for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median of 0 false positives (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%).

CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.

PMID:38262813 | DOI:10.1016/j.acra.2024.01.009

Categories: Literature Watch

GHA-DenseNet prediction and diagnosis of malignancy in femoral bone tumors using magnetic resonance imaging

Tue, 2024-01-23 06:00

J Bone Oncol. 2023 Dec 29;44:100520. doi: 10.1016/j.jbo.2023.100520. eCollection 2024 Feb.

ABSTRACT

BACKGROUND AND OBJECTIVE: Due to their aggressive nature and poor prognosis, malignant femoral bone tumors present considerable clinical hurdles. Early initiation of treatment is essential for improving survival and functional outcomes. In this investigation, deep learning algorithms were used to analyze magnetic resonance imaging (MRI) data to identify malignant bone tumors.

METHODOLOGY: The study cohort included 44 patients (22 women and 22 men) aged 17 to 78 years. To classify T1- and T2-weighted MRI data, this paper presents an improved DenseNet model for bone tumor MRI classification, named GHA-DenseNet. An attention module is added to the original DenseNet to reduce the loss of key features that occurs when the model, limited by its local receptive field, captures the location and content of femoral bone tumor tissue. In addition, sparse connectivity is used to prune the original model's connection pattern, removing unnecessary skip connections while retaining the more useful ones, which alleviates the overfitting caused by the small dataset size and the image characteristics. A clinical model designed to predict tumor malignancy combined the output values of the T1 and T2 classifiers with patient-specific clinical information.
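
The abstract does not specify the attention design; as a hedged sketch, a squeeze-and-excitation-style channel gate is one common way to re-weight DenseNet feature maps, shown below with illustrative sizes.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel gate: one plausible form of the
    attention module added to DenseNet features (illustrative, not GHA-DenseNet)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                 # x: (B, C, H, W)
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)      # per-channel weights
        return x * w                                      # emphasize key features
```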

RESULTS: The T1 classifier's accuracy during the training phase was 92.88%, whereas the T2 classifier's accuracy was 87.03%. Both classifiers demonstrated an accuracy of 95.24% during the validation phase. During training and validation, the clinical model's accuracy was 82.17% and 81.51%, respectively. The clinical model's receiver operating characteristic (ROC) curve demonstrated its capacity to separate classes.

CONCLUSIONS: The proposed method does not require manual segmentation of MRI scans because it makes use of pretrained deep learning classifiers. These algorithms can predict tumor malignancy and shorten diagnostic and therapeutic turnaround times. Although the procedure requires only minimal radiologist involvement, further testing on a larger patient cohort is needed to confirm its efficacy.

PMID:38261934 | PMC:PMC10797540 | DOI:10.1016/j.jbo.2023.100520

Categories: Literature Watch

Classification of Pulmonary Nodules in 2-[18F]FDG PET/CT Images with a 3D Convolutional Neural Network

Tue, 2024-01-23 06:00

Nucl Med Mol Imaging. 2024 Feb;58(1):9-24. doi: 10.1007/s13139-023-00821-6. Epub 2023 Aug 30.

ABSTRACT

PURPOSE: 2-[18F]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for classification of pulmonary nodules from 2-[18F]FDG PET images.

METHODS: One hundred thirteen participants were retrospectively selected, with one nodule per participant. The 2-[18F]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed randomly splitting the data into five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed on the remaining sets for training and evaluating a set of candidate models and for selecting the final model. Three types of 3D CNN architectures (stacked 3D CNN, VGG-like, and Inception-v2-like models) were trained from random weight initialization on both the original and augmented datasets. Transfer learning from ImageNet with ResNet-50 was also used.
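
The paper's exact architecture is not given here; the following is a minimal sketch of what a stacked 3D CNN nodule classifier over a PET patch might look like, with assumed patch size and layer widths.

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm3d(cout),
        nn.ReLU(),
        nn.MaxPool3d(2),
    )

class Stacked3DCNN(nn.Module):
    """Rough analogue of a stacked 3D CNN classifying a 32x32x32 PET patch
    as benign vs. malignant; sizes are assumptions, not the paper's."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16), conv_block(16, 32), conv_block(32, 64))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4 * 4, 1))

    def forward(self, x):                                   # x: (B, 1, 32, 32, 32)
        return torch.sigmoid(self.head(self.features(x)))   # P(malignant)
```

As the results below note, the operating threshold on this probability can be shifted below 0.5 so that false negatives (missed malignancies) incur a higher cost than false positives.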

RESULTS: The final model (stacked 3D CNN) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) in the test set. In the test set, the model had a sensitivity of 80.00%, a specificity of 69.23%, and an accuracy of 73.91% for an optimised decision threshold that assigns a higher cost to false negatives.

CONCLUSION: A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[18F]FDG PET images.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13139-023-00821-6.

PMID:38261899 | PMC:PMC10796312 | DOI:10.1007/s13139-023-00821-6

Categories: Literature Watch

Comparing three-dimensional and two-dimensional deep-learning, radiomics, and fusion models for predicting occult lymph node metastasis in laryngeal squamous cell carcinoma based on CT imaging: a multicentre, retrospective, diagnostic study

Tue, 2024-01-23 06:00

EClinicalMedicine. 2024 Jan 3;67:102385. doi: 10.1016/j.eclinm.2023.102385. eCollection 2024 Jan.

ABSTRACT

BACKGROUND: The occult lymph node metastasis (LNM) of laryngeal squamous cell carcinoma (LSCC) affects the treatment and prognosis of patients. This study aimed to comprehensively compare the performance of the three-dimensional and two-dimensional deep learning models, radiomics model, and the fusion models for predicting occult LNM in LSCC.

METHODS: In this retrospective diagnostic study, a total of 553 patients with clinical N0 stage LSCC, who underwent surgical treatment without distant metastasis and multiple primary cancers, were consecutively enrolled from four Chinese medical centres between January 01, 2016 and December 30, 2020. The participant data were manually retrieved from medical records, imaging databases, and pathology reports. The study cohort was divided into a training set (n = 300), an internal test set (n = 89), and two external test sets (n = 120 and 44, respectively). The three-dimensional deep learning (3D DL), two-dimensional deep learning (2D DL), and radiomics model were developed using CT images of the primary tumor. The clinical model was constructed based on clinical and radiological features. Two fusion strategies were utilized to develop the fusion model: the feature-based DLRad_FB model and the decision-based DLRad_DB model. The discriminative ability and correlation of 3D DL, 2D DL and radiomics features were analysed comprehensively. The performances of the predictive models were evaluated based on the pathological diagnosis.
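
To make the two fusion strategies concrete, here is a hedged sketch: feature-based fusion concatenates per-model representations for a single downstream classifier, while decision-based fusion combines per-model probabilities. The actual DLRad_FB and DLRad_DB implementations are not specified in the abstract.

```python
import numpy as np

def feature_fusion(feature_blocks: list[np.ndarray]) -> np.ndarray:
    """Feature-level fusion (DLRad_FB-style): concatenate per-model feature
    vectors, e.g. 3D DL, 2D DL, and radiomics features, per patient."""
    return np.concatenate(feature_blocks, axis=1)       # (n_patients, sum of dims)

def decision_fusion(prob_lists: list[np.ndarray], weights=None) -> np.ndarray:
    """Decision-level fusion (DLRad_DB-style): weighted average of per-model
    predicted probabilities of occult lymph node metastasis."""
    probs = np.stack(prob_lists, axis=0)                # (n_models, n_patients)
    w = np.ones(probs.shape[0]) if weights is None else np.asarray(weights, float)
    return (w[:, None] * probs).sum(axis=0) / w.sum()
```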

FINDINGS: The 3D DL features had superior discriminative ability and lower internal redundancy compared to the 2D DL and radiomics features. The DLRad_DB model achieved the highest AUC (0.89-0.90) among all the study sets, significantly outperforming the clinical model (AUC = 0.73-0.78, P = 0.0001-0.042, DeLong test). Compared to the DLRad_DB model, the AUC values for the DLRad_FB, 3D DL, 2D DL, and radiomics models were 0.82-0.84 (P = 0.025-0.46), 0.86-0.89 (P = 0.75-0.97), 0.83-0.86 (P = 0.029-0.66), and 0.79-0.82 (P = 0.0072-0.10), respectively, in the study sets. Additionally, the DLRad_DB model exhibited the best sensitivity (82-88%) and specificity (79-85%) in the test sets.

INTERPRETATION: The decision-based fusion model DLRad_DB, which combines 3D DL, 2D DL, radiomics, and clinical data, can be utilized to predict occult LNM in LSCC. This has the potential to minimize unnecessary lymph node dissection and prophylactic radiotherapy in patients with cN0 disease.

FUNDING: National Natural Science Foundation of China, Natural Science Foundation of Shandong Province.

PMID:38261897 | PMC:PMC10796944 | DOI:10.1016/j.eclinm.2023.102385

Categories: Literature Watch

Joint masking and self-supervised strategies for inferring small molecule-miRNA associations

Tue, 2024-01-23 06:00

Mol Ther Nucleic Acids. 2023 Dec 18;35(1):102103. doi: 10.1016/j.omtn.2023.102103. eCollection 2024 Mar 12.

ABSTRACT

Inferring small molecule-miRNA associations (MMAs) is crucial for revealing the intricacies of biological processes and disease mechanisms. Deep learning, renowned for its exceptional speed and accuracy, is extensively used for predicting MMAs. However, because these methods rely heavily on data, inaccuracies introduced during data collection make them susceptible to noise interference. To address this challenge, we introduce the joint masking and self-supervised (JMSS)-MMA model. This model synergizes graph autoencoders with a probability distribution-based masking strategy, effectively countering the impact of noisy data and enabling precise predictions of unknown MMAs. Operating in a self-supervised manner, it deeply encodes the relationship data of small molecules and miRNA through the graph autoencoder, delving into its latent information. Our masking strategy has successfully reduced data noise, enhancing prediction accuracy. To our knowledge, this is the first integration of a masking strategy with graph autoencoders for MMA prediction. Furthermore, the JMSS-MMA model incorporates a node-degree-based decoder, deepening the understanding of the network's structure. Experiments on two mainstream datasets confirm the model's efficiency and precision, and ablation studies further attest to its robustness. We firmly believe that this model will revolutionize drug development, personalized medicine, and biomedical research.
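
A minimal sketch of the masking step, assuming a binary association matrix; the paper's probability distribution-based strategy is more elaborate than this uniform-random version.

```python
import torch

def mask_associations(adj: torch.Tensor, mask_rate: float = 0.2):
    """Hide a random fraction of observed small molecule-miRNA links so a graph
    autoencoder must reconstruct them (self-supervised target construction)."""
    known = adj.nonzero(as_tuple=False)                  # (n_links, 2) indices
    n_mask = int(mask_rate * known.shape[0])
    picked = known[torch.randperm(known.shape[0])[:n_mask]]
    corrupted = adj.clone()
    corrupted[picked[:, 0], picked[:, 1]] = 0.0          # remove masked links
    return corrupted, picked                             # input and reconstruction targets
```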

PMID:38261851 | PMC:PMC10794920 | DOI:10.1016/j.omtn.2023.102103

Categories: Literature Watch

Automated detection of vitritis using ultrawide-field fundus photographs and deep learning

Tue, 2024-01-23 06:00

Retina. 2024 Jan 23. doi: 10.1097/IAE.0000000000004049. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the performance of a deep learning (DL) algorithm for the automated detection and grading of vitritis on ultra-wide field (UWF) imaging.

DESIGN: Cross-sectional non-interventional study.

METHOD: UWF fundus retinophotographs of patients with uveitis were used. Vitreous haze was defined according to the 6 steps of the SUN classification. The DL framework TensorFlow and the DenseNet121 convolutional neural network were used to perform the classification task. The best-fitting model was tested in a validation study.

RESULTS: A total of 1181 images were included. The performance of the model for the detection of vitritis was good, with a sensitivity of 91%, a specificity of 89%, an accuracy of 0.90, and an area under the ROC curve of 0.97. When used on an external set of images, the accuracy for the detection of vitritis was 0.78. The accuracy for classifying vitritis into one of the 6 SUN grades was limited (0.61) but improved to 0.75 when the grades were grouped into three categories. When accepting an error of one grade, the accuracy for the 6-class classification increased to 0.90, suggesting the need for a larger sample to improve model performance.
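
The one-grade-tolerance figure reported above can be computed as below; a generic sketch, not the study's code.

```python
import numpy as np

def within_one_grade_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of predictions within one SUN vitreous-haze grade (0-5) of the
    reference grade, the tolerance under which accuracy reached 0.90."""
    return float(np.mean(np.abs(pred.astype(int) - truth.astype(int)) <= 1))
```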

CONCLUSION: We describe a new DL model based on UWF fundus imaging that provides an efficient tool for the detection of vitritis. The performance of the model for grading into 3 categories of increasing vitritis severity was acceptable. The performance for the 6-class grading of vitritis was limited but can probably be improved with a larger set of images.

PMID:38261816 | DOI:10.1097/IAE.0000000000004049

Categories: Literature Watch

Multi-indicator comparative evaluation for deep learning-based protein sequence design methods

Tue, 2024-01-23 06:00

Bioinformatics. 2024 Jan 23:btae037. doi: 10.1093/bioinformatics/btae037. Online ahead of print.

ABSTRACT

MOTIVATION: Proteins found in nature represent only a fraction of the vast space of possible proteins. Protein design presents an opportunity to explore and expand this protein landscape. Within protein design, protein sequence design plays a crucial role, and numerous successful methods have been developed. Notably, deep learning-based protein sequence design methods have advanced significantly in recent years. However, a comprehensive and systematic comparison and evaluation of these methods has been lacking, and the indicators provided by different methods are often inconsistent or lack effectiveness.

RESULTS: To address this gap, we have designed a diverse set of indicators that cover several important aspects, including sequence recovery, diversity, root-mean-square deviation of protein structure, secondary structure, and the distribution of polar and non-polar amino acids. In our evaluation, we have employed an improved weighted inferiority-superiority distance method to comprehensively assess the performance of eight widely used deep learning-based protein sequence design methods. Our evaluation not only provides rankings of these methods but also offers optimization suggestions by analyzing the strengths and weaknesses of each method. Furthermore, we have developed a method to select the best temperature parameter and proposed solutions for the common issue of designing sequences with consecutive repetitive amino acids, which is often encountered in protein design methods. These findings can greatly assist users in selecting suitable protein sequence design methods. Overall, our work contributes to the field of protein sequence design by providing a comprehensive evaluation system and optimization suggestions for different methods.
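
Two of the indicators named above lend themselves to short sketches: per-position sequence recovery, and a check for the consecutive-repeat failure mode the abstract mentions. Both are generic illustrations, not the paper's implementations.

```python
def sequence_recovery(designed: str, native: str) -> float:
    """Per-position identity between a designed sequence and the native one."""
    assert len(designed) == len(native), "sequences must be aligned"
    return sum(a == b for a, b in zip(designed, native)) / len(native)

def longest_repeat_run(seq: str) -> int:
    """Length of the longest run of identical consecutive amino acids, used to
    flag designs with pathological repeats."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best
```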

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38261649 | DOI:10.1093/bioinformatics/btae037

Categories: Literature Watch

Detection and severity quantification of pulmonary embolism with 3D CT data using an automated deep learning-based artificial intelligence solution

Tue, 2024-01-23 06:00

Diagn Interv Imaging. 2023 Oct 10:S2211-5684(23)00180-8. doi: 10.1016/j.diii.2023.09.006. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of this study was to propose a deep learning-based approach to detect pulmonary embolism and quantify its severity using the Qanadli score and the right-to-left ventricle diameter (RV/LV) ratio on three-dimensional (3D) computed tomography pulmonary angiography (CTPA) examinations with limited annotations.

MATERIALS AND METHODS: Using a database of 3D CTPA examinations of 1268 patients with image-level annotations, and two other public datasets of CTPA examinations from 91 (CAD-PE) and 35 (FUME-PE) patients with pixel-level annotations, a pipeline was followed consisting of (i) detecting blood clots; (ii) performing PE-positive versus PE-negative classification; (iii) estimating the Qanadli score; and (iv) predicting the RV/LV diameter ratio. The method was evaluated on a test set including 378 patients. The performance of PE classification and severity quantification was quantitatively assessed using an area under the curve (AUC) analysis for PE classification and a coefficient of determination (R²) for the Qanadli score and the RV/LV diameter ratio.
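
For context, the clinical score the pipeline estimates can be written down directly; this is a simplified sketch of the standard Qanadli obstruction index (the paper predicts it from imaging rather than computing it this way), and the per-segment convention below is a summary, not the paper's definition.

```python
def qanadli_index(segmental_scores: list[int]) -> float:
    """Simplified Qanadli CT obstruction index: each of the 20 segmental
    pulmonary arteries (10 per lung) is scored 0 (no clot), 1 (partial) or
    2 (complete occlusion); the index is the sum as a percentage of 40."""
    assert len(segmental_scores) == 20
    assert all(s in (0, 1, 2) for s in segmental_scores)
    return 100.0 * sum(segmental_scores) / 40.0
```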

RESULTS: Quantitative evaluation led to an overall AUC of 0.870 (95% confidence interval [CI]: 0.850-0.900) for the PE classification task on the training set and an AUC of 0.852 (95% CI: 0.810-0.890) on the test set. Regression analysis yielded R² values of 0.717 (95% CI: 0.668-0.760) and 0.723 (95% CI: 0.668-0.766) for the Qanadli score and the RV/LV diameter ratio estimation, respectively, on the test set.

CONCLUSION: This study shows the feasibility of utilizing AI-based assistance tools in detecting blood clots and estimating PE severity scores with 3D CTPA examinations. This is achieved by leveraging blood clots and cardiac segmentations. Further studies are needed to assess the effectiveness of these tools in clinical practice.

PMID:38261553 | DOI:10.1016/j.diii.2023.09.006

Categories: Literature Watch

Radial Undersampled MRI Reconstruction Using Deep Learning with Mutual Constraints between Real and Imaginary Components of K-Space

Tue, 2024-01-23 06:00

IEEE J Biomed Health Inform. 2024 Jan 23;PP. doi: 10.1109/JBHI.2024.3357784. Online ahead of print.

ABSTRACT

The deep learning method is an efficient solution for improving the quality of undersampled magnetic resonance (MR) image reconstruction while reducing lengthy data acquisition. Most deep learning methods neglect the mutual constraints between the real and imaginary components of complex-valued k-space data. In this paper, a new complex-valued convolutional neural network (CNN), namely the Dense-U-Dense Net (DUD-Net), is proposed to interpolate the undersampled k-space data and reconstruct MR images. The proposed network comprises dense layers, a U-Net, and further dense layers in sequence. The dense layers are used to model the mutual constraints between the real and imaginary components, and the U-Net performs feature sparsification and interpolation estimation for the k-space data. Two MRI datasets were used to evaluate the proposed method: brain magnitude-only MR images and knee complex-valued k-space data. Several operations were conducted to simulate truly undersampled k-space. First, complex-valued MR images were synthesized by phase modulation of the magnitude-only images. Second, a particular radial trajectory based on the golden ratio was used for k-space undersampling, and a reversible normalization method was proposed to balance the distribution of positive and negative values in the k-space data. The optimal performance of DUD-Net was demonstrated by quantitative inter-method comparisons with widely used CNNs and intra-method comparisons via an ablation study. Compared with the other methods, PSNRs increased by at least 10.78 dB and 5.74 dB, and RMSEs decreased by at least 71.53% and 30.31%, for the magnitude and phase images, respectively. It is concluded that DUD-Net significantly improves the performance of complex-valued k-space interpolation and MR image reconstruction.
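
Two small, hedged sketches of the generic ingredients the abstract mentions: representing complex k-space as paired real/imaginary channels, and one reversible, sign-preserving normalization that balances large positive and negative values. DUD-Net's actual layout and normalization may differ.

```python
import torch

def kspace_to_channels(kspace: torch.Tensor) -> torch.Tensor:
    """Stack real and imaginary parts of complex k-space as two channels so a
    CNN can learn their mutual constraints."""
    return torch.stack([kspace.real, kspace.imag], dim=0)

def signed_log_normalize(x: torch.Tensor) -> torch.Tensor:
    """A reversible, sign-preserving log compression; invert with
    sign(y) * (exp(|y|) - 1). Shown only as one plausible choice."""
    return torch.sign(x) * torch.log1p(torch.abs(x))
```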

PMID:38261493 | DOI:10.1109/JBHI.2024.3357784

Categories: Literature Watch

Identification of congenital valvular murmurs in young patients using deep learning-based attention transformers and phonocardiograms

Tue, 2024-01-23 06:00

IEEE J Biomed Health Inform. 2024 Jan 23;PP. doi: 10.1109/JBHI.2024.3357506. Online ahead of print.

ABSTRACT

One in every four newborns suffers from congenital heart disease (CHD), which causes defects in the heart structure. The current gold-standard assessment technique, echocardiography, causes delays in diagnosis owing to the need for experts, who vary markedly in their ability to detect and interpret pathological patterns. Moreover, echocardiography remains costly for low- and middle-income countries. Here, we developed a deep learning-based attention transformer model to automate the detection of heart murmurs caused by CHD at an early stage of life using cost-effective and widely available phonocardiography (PCG). PCG recordings were obtained from 942 young patients at four major auscultation locations, including the aortic valve (AV), mitral valve (MV), pulmonary valve (PV), and tricuspid valve (TV), and were annotated by experts as murmur absent, present, or unknown. A transformation to wavelet features was performed to reduce the dimensionality before the deep learning stage that infers the medical condition. Performance was validated through 10-fold cross-validation and yielded an average accuracy and sensitivity of 90.23% and 72.41%, respectively. The accuracy of discriminating between murmur absence and presence reached 76.10% when evaluated on unseen data. The model had accuracies of 70%, 88%, and 86% in predicting murmur presence in infants, children, and adolescents, respectively. Interpretation of the model revealed proper discrimination between the learned attributes: the AV channel was found to be important (score > 0.75) for murmur-absence predictions, while the MV and TV channels were more important for murmur-presence predictions. The findings establish deep learning as a powerful front-line tool for inferring CHD status from PCG recordings, enabling early detection of heart anomalies in young people, and suggest it can be used independently of high-cost machinery or expert assessment. With additional validation on external datasets, more insight into the generalizability of deep learning tools could be obtained before implementation in real-world clinical settings.
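
A hedged sketch of the wavelet feature step described above, using PyWavelets; the wavelet family, decomposition depth, and energy summary are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt

def pcg_wavelet_features(signal: np.ndarray, wavelet: str = "db4", level: int = 5) -> np.ndarray:
    """Reduce a raw PCG recording to per-sub-band energies via a discrete
    wavelet decomposition, ahead of the transformer stage."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])  # one energy per band
```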

PMID:38261492 | DOI:10.1109/JBHI.2024.3357506

Categories: Literature Watch

QTNet: Deep Learning for Estimating QT Intervals Using a Single Lead ECG

Tue, 2024-01-23 06:00

Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul;2023:1-4. doi: 10.1109/EMBC40787.2023.10341204.

ABSTRACT

QT prolongation often leads to fatal arrhythmia and sudden cardiac death. Antiarrhythmic drugs can increase the risk of QT prolongation and therefore require strict post-administration monitoring and dosage control. Measurement of the QT interval from the 12-lead electrocardiogram (ECG) by a trained expert, in a clinical setting, is the accepted method for tracking QT prolongation. Recent advances in wearable ECG technology, however, raise the possibility of automated out-of-hospital QT tracking. Applications of Deep Learning (DL), a subfield of Machine Learning, in ECG analysis hold the promise of automation for a variety of classification and regression tasks. In this work, we propose a residual neural network, QTNet, for the regression of QT intervals from a single-lead (Lead-I) ECG. QTNet is trained in a supervised manner on a large ECG dataset from a U.S. hospital. We demonstrate the robustness and generalizability of QTNet on four test sets: one from the same hospital, one from another U.S. hospital, and two public datasets. Over all four datasets, the mean absolute error (MAE) in the estimated QT interval ranges between 9 ms and 15.8 ms, and Pearson correlation coefficients vary between 0.899 and 0.914. By contrast, QT interval estimation on these datasets with a standard method for automated ECG analysis (NeuroKit2) yields MAEs between 22.29 ms and 90.79 ms and Pearson correlation coefficients between 0.345 and 0.620. These results demonstrate the utility of QTNet across distinct datasets and patient populations, highlighting the potential of DL models for ubiquitous QT tracking. Clinical Relevance: QTNet can be applied to inpatient or ambulatory Lead-I ECG signals to track QT intervals, facilitating ambulatory monitoring of patients at risk of QT prolongation.
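
The two agreement metrics reported above are standard; a generic sketch, not the paper's evaluation code:

```python
import numpy as np

def qt_agreement(estimated_ms: np.ndarray, reference_ms: np.ndarray) -> tuple[float, float]:
    """Mean absolute error and Pearson correlation between estimated and
    expert-measured QT intervals (in milliseconds)."""
    mae = float(np.mean(np.abs(estimated_ms - reference_ms)))
    r = float(np.corrcoef(estimated_ms, reference_ms)[0, 1])
    return mae, r
```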

PMID:38261472 | DOI:10.1109/EMBC40787.2023.10341204

Categories: Literature Watch

GMFGRN: a matrix factorization and graph neural network approach for gene regulatory network inference

Tue, 2024-01-23 06:00

Brief Bioinform. 2024 Jan 22;25(2):bbad529. doi: 10.1093/bib/bbad529.

ABSTRACT

The recent advances of single-cell RNA sequencing (scRNA-seq) have enabled reliable profiling of gene expression at the single-cell level, providing opportunities for accurate inference of gene regulatory networks (GRNs) from scRNA-seq data. Most methods for inferring GRNs suffer from an inability to eliminate transitive interactions or necessitate expensive computational resources. To address these issues, we present a novel method, termed GMFGRN, for accurate graph neural network (GNN)-based GRN inference from scRNA-seq data. GMFGRN employs a GNN for matrix factorization and learns representative embeddings for genes. For transcription factor-gene pairs, it utilizes the learned embeddings to determine whether they interact with each other. An extensive suite of benchmarking experiments encompassing eight static scRNA-seq datasets and several state-of-the-art methods demonstrated mean improvements of 1.9% and 2.5% over the runner-up in area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). In addition, across four time-series datasets, maximum enhancements of 2.4% and 1.3% in AUROC and AUPRC were observed in comparison to the runner-up. Moreover, GMFGRN requires significantly less training time and memory, consuming less than 10% of the time and memory of the second-best method. These findings underscore the substantial potential of GMFGRN for the inference of GRNs. It is publicly available at https://github.com/Lishuoyy/GMFGRN.
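
Once per-gene embeddings are learned, scoring candidate regulatory links reduces to an inner product, as in this hedged sketch (the decoder details are illustrative, not GMFGRN's exact formulation):

```python
import torch

def score_tf_gene_pairs(tf_emb: torch.Tensor, gene_emb: torch.Tensor) -> torch.Tensor:
    """Score all transcription factor-gene pairs from learned embeddings:
    high inner products mark likely regulatory interactions.
    tf_emb: (n_tfs, d), gene_emb: (n_genes, d) -> (n_tfs, n_genes)."""
    return torch.sigmoid(tf_emb @ gene_emb.T)
```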

PMID:38261340 | DOI:10.1093/bib/bbad529

Categories: Literature Watch

Advantages and Pitfalls of the Use of Optical Coherence Tomography for Papilledema

Tue, 2024-01-23 06:00

Curr Neurol Neurosci Rep. 2024 Jan 23. doi: 10.1007/s11910-023-01327-6. Online ahead of print.

ABSTRACT

PURPOSE OF REVIEW: Papilledema refers to optic disc swelling caused by raised intracranial pressure. This syndrome arises from numerous potential causes, which may pose varying degrees of threat to patients. Manifestations of papilledema range from mild to severe, and early diagnosis is important to prevent vision loss and other deleterious outcomes. The purpose of this review is to highlight the role of optical coherence tomography (OCT) in the diagnosis and management of syndromes of raised intracranial pressure associated with papilledema.

RECENT FINDINGS: Ophthalmoscopy is an unreliable skill for many clinicians. Optical coherence tomography is a non-invasive ocular imaging technique which may fill a current care gap, by facilitating detection of papilledema for those who cannot perform a detailed fundus examination. Optical coherence tomography may help confirm the presence of papilledema, by detecting subclinical peripapillary retinal nerve fiber layer (pRNFL) thickening that might otherwise be missed with ophthalmoscopy. Enhanced depth imaging (EDI) and swept source OCT techniques may identify optic disc drusen as cause of pseudo-papilledema. Macular ganglion cell inner plexiform layer (mGCIPL) values may provide early signs of neuroaxonal injury in patients with papilledema and inform management for patients with syndromes of raised intracranial pressure. There are well-established advantages and disadvantages of OCT that need to be fully understood to best utilize this tool for the detection of papilledema. Overall, OCT may complement other existing tools by facilitating detection of papilledema and tracking response to therapies. Moving forward, OCT findings may be included in deep learning models to diagnose papilledema.

PMID:38261144 | DOI:10.1007/s11910-023-01327-6

Categories: Literature Watch

BDHusk: A comprehensive dataset of different husk species images as a component of cattle feed from different regions of Bangladesh

Tue, 2024-01-23 06:00

Data Brief. 2023 Dec 28;52:110018. doi: 10.1016/j.dib.2023.110018. eCollection 2024 Feb.

ABSTRACT

This study presents a recently compiled dataset called "BDHusk," which encompasses a wide range of husk images representing eight husk species used as components of cattle feed, sourced from different locales in Sirajganj, Bangladesh. The eight husk species are: Oryza sativa, Zea mays, Triticum aestivum, Cicer arietinum, Lens culinaris, Glycine max, Lathyrus sativus, and Pisum sativum var. arvense L. Poiret. The dataset consists of 2,400 original images and an additional 9,280 augmented images, all showcasing various husk species. Each original image was captured against a suitable backdrop and in sufficient natural light, and every image was sorted into its respective subfolder, enabling a wide variety of machine learning and deep learning models to make effective use of the images. By utilizing this extensive dataset and employing various machine learning and deep learning techniques, researchers have the potential to achieve significant advancements in agriculture, food and nutrition science, environmental monitoring, and computer science. The dataset allows researchers to improve cattle feeding using data-driven methods, and better feed composition can in turn improve cattle health and production. It therefore presents potential for substantial advancements in these fields and serves as a crucial resource for future research endeavors.

PMID:38260865 | PMC:PMC10801301 | DOI:10.1016/j.dib.2023.110018

Categories: Literature Watch

Comprehensive experimental dataset on large-amplitude Rayleigh-Plateau instability in continuous InkJet printing regime

Tue, 2024-01-23 06:00

Data Brief. 2023 Dec 19;52:109941. doi: 10.1016/j.dib.2023.109941. eCollection 2024 Feb.

ABSTRACT

The Rayleigh-Plateau instability, a phenomenon of paramount significance in fluid dynamics, finds widespread application in the Continuous InkJet (CIJ) printing process. This study presents a comprehensive dataset comprising experimental investigations of fluid jet breakup phenomena under large-amplitude stimulation conditions using an industrial CIJ print-head from Markem-Imaje. Unlike previous studies, this dataset encompasses a diverse range of experimental conditions, including nine different Newtonian fluids with meticulously measured rheological properties (viscosities, surface tensions and densities). The applied stimulation amplitudes vary from 5V to 45V, representing a substantial span of excitation levels. The experimental setup captures the intricate dynamics of fluid jets subjected to these varying conditions, producing a rich collection of over 5,000 high-resolution images depicting the breakup phenomena. Each amplitude of stimulation and fluid type yields more than 55 distinct images, providing detailed insights into the evolving jet morphologies. To ensure the accuracy and relevance of the dataset, all ejection parameters are rigorously documented and included. The dataset thus serves as a valuable resource for researchers seeking to explore the dynamics of large-amplitude Rayleigh-Plateau instability in CIJ printing. Its comprehensiveness and diversity make it particularly suitable for the application of novel machine learning and deep-learning approaches, enabling the study of jet morphological evolution beyond the confines of classical Rayleigh's theory. This dataset holds promise for advancing our understanding of fluid jet dynamics and enhancing the efficiency and quality of CIJ printing processes.

PMID:38260863 | PMC:PMC10801292 | DOI:10.1016/j.dib.2023.109941

Categories: Literature Watch

A novel dataset of date fruit for inspection and classification

Tue, 2024-01-23 06:00

Data Brief. 2024 Jan 2;52:110026. doi: 10.1016/j.dib.2023.110026. eCollection 2024 Feb.

ABSTRACT

Date fruit grading and inspection is a challenging and crucial process in the industry. The grading process requires skilled and experienced labour, and labour turnover in the date processing industries has increased steadily. Due to the resulting lack of trained labour, the quality of date fruit is often compromised, leading to fruit wastage and unstable fruit prices. Deep learning algorithms have recently attracted the research community's attention for solving problems in the agriculture sector. Pre-trained models such as VGG16 and VGG19 have been applied to the classification of date fruit [1,2], and machine learning techniques such as K-Nearest Neighbors, Support Vector Machine, Random Forest, and a few others [3], [4], [5], [6] have been used for grading. The classification and grading of date fruit therefore require a clean, well-curated dataset. In this article, an indigenous, state-of-the-art dataset of date fruit is offered. The dataset contains images of four date fruit varieties, comprising 3004 pre-processed images of different classes and grades. Images have been sorted by size as large, medium, and small, and graded by quality as grade 1, grade 2, and grade 3. The dataset is separated into eighteen directories for the researchers' convenience. It may contribute to the development of an intelligent system to grade and inspect date fruit, adding value to the sustainable economic growth of fruit processing industries and farmers locally and internationally.

PMID:38260861 | PMC:PMC10801328 | DOI:10.1016/j.dib.2023.110026

Categories: Literature Watch

Deep learning based on 68Ga-PSMA-11 PET/CT for predicting pathological upgrading in patients with prostate cancer

Tue, 2024-01-23 06:00

Front Oncol. 2024 Jan 8;13:1273414. doi: 10.3389/fonc.2023.1273414. eCollection 2023.

ABSTRACT

OBJECTIVES: To explore the feasibility and importance of deep learning (DL) based on 68Ga-prostate-specific membrane antigen (PSMA)-11 PET/CT in predicting pathological upgrading from biopsy to radical prostatectomy (RP) in patients with prostate cancer (PCa).

METHODS: In this retrospective study, all patients underwent 68Ga-PSMA-11 PET/CT, transrectal ultrasound (TRUS)-guided systematic biopsy, and RP for PCa sequentially between January 2017 and December 2022. Two DL models (three-dimensional [3D] ResNet-18 and 3D DenseNet-121) based on 68Ga-PSMA-11 PET and support vector machine (SVM) models integrating clinical data with DL signature were constructed. The model performance was evaluated using area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.

RESULTS: Of 109 patients, 87 (44 upgrading, 43 non-upgrading) were included in the training set and 22 (11 upgrading, 11 non-upgrading) in the test set. The combined SVM model, incorporating clinical features and signature of 3D ResNet-18 model, demonstrated satisfactory prediction in the test set with an AUC value of 0.628 (95% confidence interval [CI]: 0.365, 0.891) and accuracy of 0.727 (95% CI: 0.498, 0.893).

CONCLUSION: A DL method based on 68Ga-PSMA-11 PET may have a role in predicting pathological upgrading from biopsy to RP in patients with PCa.

PMID:38260839 | PMC:PMC10800856 | DOI:10.3389/fonc.2023.1273414

Categories: Literature Watch

Deep learning application for abdominal organs segmentation on 0.35 T MR-Linac images

Tue, 2024-01-23 06:00

Front Oncol. 2024 Jan 8;13:1285924. doi: 10.3389/fonc.2023.1285924. eCollection 2023.

ABSTRACT

INTRODUCTION: A linear accelerator (linac) incorporating a magnetic resonance (MR) imaging device that provides enhanced soft-tissue contrast is particularly suited for abdominal radiation therapy. In particular, it makes possible the accurate segmentation of abdominal tumors and organs at risk (OARs) required for treatment planning. Currently, this segmentation is performed manually by radiation oncologists, a process that is very time consuming and subject to inter- and intra-operator variability. In this work, deep learning-based automatic segmentation solutions were investigated for abdominal OARs on 0.35 T MR images.

METHODS: One hundred twenty-one sets of abdominal MR images and their corresponding ground-truth segmentations were collected and used for this work. The OARs of interest were the liver, kidneys, spinal cord, stomach, and duodenum. Several UNet-based models were trained in 2D (the classical UNet, the ResAttention UNet, the EfficientNet UNet, and the nnUNet). The best model was then trained with a 3D strategy to investigate possible improvements. Geometric metrics such as the Dice similarity coefficient (DSC), intersection over union (IoU), and Hausdorff distance (HD), together with an analysis of the calculated volumes (via Bland-Altman plots), were used to evaluate the results.

RESULTS: The nnUNet trained in 3D mode achieved the best performance, with DSC scores for the liver, kidneys, spinal cord, stomach, and duodenum of 0.96 ± 0.01, 0.91 ± 0.02, 0.91 ± 0.01, 0.83 ± 0.10, and 0.69 ± 0.15, respectively. The matching IoU scores were 0.92 ± 0.01, 0.84 ± 0.04, 0.84 ± 0.02, 0.54 ± 0.16, and 0.72 ± 0.13. The corresponding HD scores were 13.0 ± 6.0 mm, 16.0 ± 6.6 mm, 3.3 ± 0.7 mm, 35.0 ± 33.0 mm, and 42.0 ± 24.0 mm. The analysis of the calculated volumes showed the same trend.
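
The volume analysis cited above is a Bland-Altman comparison; a generic sketch of its summary statistics:

```python
import numpy as np

def bland_altman(auto_vol: np.ndarray, manual_vol: np.ndarray):
    """Mean bias and 95% limits of agreement between automatic and manual
    organ volumes (same units, e.g. mL), per the Bland-Altman method."""
    diff = auto_vol - manual_vol
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)
```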

DISCUSSION: Although the segmentation results for the duodenum were not optimal, these findings imply a potential clinical application of the 3D nnUNet model for the segmentation of abdominal OARs for images from 0.35 T MR-Linac.

PMID:38260833 | PMC:PMC10800957 | DOI:10.3389/fonc.2023.1285924

Categories: Literature Watch

Prediction of knee biomechanics with different tibial component malrotations after total knee arthroplasty: conventional machine learning vs. deep learning

Tue, 2024-01-23 06:00

Front Bioeng Biotechnol. 2024 Jan 8;11:1255625. doi: 10.3389/fbioe.2023.1255625. eCollection 2023.

ABSTRACT

The precise alignment of tibiofemoral components in total knee arthroplasty is a crucial factor in enhancing the longevity and functionality of the knee. However, quickly predicting the biomechanical response to malrotation of tibiofemoral components after total knee arthroplasty using musculoskeletal multibody dynamics models remains a substantial challenge. The objective of the present study was to compare a deep learning method with four conventional machine learning methods for predicting knee biomechanics under different tibial component malrotations during a walking gait after total knee arthroplasty. First, the knee contact forces and kinematics under tibial component malrotations in the range of ±5° in the three directions of anterior/posterior slope, internal/external rotation, and varus/valgus rotation during a walking gait were calculated with the developed musculoskeletal multibody dynamics model. Subsequently, the deep learning method and four conventional machine learning methods were developed using the resulting 343 sets of biomechanical data as the dataset. Finally, the results predicted by the deep learning method were compared with those predicted by the four conventional machine learning methods. The findings indicated that the deep learning method was more accurate than the four conventional machine learning methods in predicting knee contact forces and kinematics under different tibial component malrotations. The deep learning method developed in this study makes it possible to quickly determine the biomechanical response to different tibial component malrotations during a walking gait after total knee arthroplasty. The proposed method offers surgeons and surgical robots the ability to establish a calibration safety zone, which is essential for achieving precise alignment in both preoperative surgical planning and intraoperative robot-assisted surgical navigation.
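
A hedged sketch of the surrogate-model idea described above: map the three malrotation angles to predicted knee loads with a small neural network. Layer sizes and the two-force output are placeholders, not the authors' network.

```python
import torch
import torch.nn as nn

# Surrogate mapping the three tibial malrotation angles (anterior/posterior
# slope, internal/external rotation, varus/valgus rotation, in degrees) to
# knee contact forces at one gait instant; an untrained, illustrative stand-in.
surrogate = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),   # e.g. medial and lateral compartment forces
)

angles = torch.tensor([[2.0, -3.0, 1.0]])  # one malrotation combination
forces = surrogate(angles)                 # demo forward pass, untrained
```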

PMID:38260731 | PMC:PMC10800660 | DOI:10.3389/fbioe.2023.1255625

Categories: Literature Watch
