Deep learning

Deep learning-based characterization of neutrophil activation phenotypes in ex vivo human Candida blood infections

Fri, 2024-03-29 06:00

Comput Struct Biotechnol J. 2024 Mar 18;23:1260-1273. doi: 10.1016/j.csbj.2024.03.006. eCollection 2024 Dec.

ABSTRACT

Early identification of human pathogens is crucial for the effective treatment of bloodstream infections to prevent sepsis. Since pathogens that are present in small numbers are usually difficult to detect directly, we hypothesize that the behavior of the immune cells that are present in large numbers may provide indirect evidence about the causative pathogen of the infection. We previously applied time-lapse microscopy to observe that neutrophils isolated from human whole-blood samples, which had been infected with the human-pathogenic fungus Candida albicans or C. glabrata, indeed exhibited a characteristic morphodynamic behavior. By tracking the neutrophil movement and shape dynamics over time and applying a machine learning approach, the two Candida species could be differentiated with an accuracy of about 75%. In this study, the focus is on improving the classification accuracy for the Candida species using advanced deep learning methods. We implemented (i) gated recurrent unit (GRU) networks and transformer-based networks for video data, and (ii) convolutional neural networks (CNNs) for individual frames of the time-lapse microscopy data. While the GRU and transformer-based approaches yielded promising results with 96% and 100% accuracy, respectively, the classification based on videos proved to be very time-consuming and required several hours. In contrast, the CNN model for individual microscopy frames yielded results within minutes and, utilizing a majority-vote technique, achieved 100% accuracy both in identifying the pathogen-free blood samples and in distinguishing between the Candida species. The applied CNN demonstrates the potential for automatically differentiating bloodstream Candida infections with high accuracy and efficiency.
We further analysed the results of the CNN using explainable artificial intelligence (XAI) techniques to understand the critical features and patterns, thereby shedding light on potential key morphodynamic characteristics of neutrophils in response to different Candida species. This approach could provide new insights into host-pathogen interactions and may facilitate the development of rapid, automated diagnostic tools for differentiating fungal species in blood samples.
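
The frame-level majority-vote step described above can be sketched as follows (a minimal illustration, not the authors' code; the per-frame CNN is replaced by hypothetical precomputed labels):

```python
from collections import Counter

def classify_video(frame_predictions):
    """Assign a video-level label by majority vote over per-frame CNN predictions.

    frame_predictions: list of class labels, one per time-lapse frame.
    Ties are broken by the label that first reaches the winning count.
    """
    votes = Counter(frame_predictions)
    label, _ = votes.most_common(1)[0]
    return label

# Hypothetical per-frame CNN outputs for one video:
frames = ["C. albicans", "C. glabrata", "C. albicans", "C. albicans", "C. glabrata"]
print(classify_video(frames))  # the majority label wins
```

Aggregating many fast per-frame decisions this way is what lets the frame-based CNN match the video models' accuracy at a fraction of the runtime.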

PMID:38550973 | PMC:PMC10973576 | DOI:10.1016/j.csbj.2024.03.006

Categories: Literature Watch

SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction

Fri, 2024-03-29 06:00

Shape Med Imaging (2023). 2023 Oct;14350:287-300. doi: 10.1007/978-3-031-46914-5_23. Epub 2023 Oct 31.

ABSTRACT

3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis. While deep learning-based approaches have achieved impressive performance in this area, existing deep networks often fail to effectively utilize the shape structures of the objects present in images. As a result, the topology of reconstructed objects may not be well preserved, leading to artifacts such as discontinuities, holes, or mismatched connections between different parts. In this paper, we propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues. In contrast to previous methods that primarily rely on spatial correlations of image intensities for 3D reconstruction, our model leverages shape priors learned from the training data to guide the reconstruction process. To achieve this, we develop a joint learning network that simultaneously learns a mean shape under deformation models. Each reconstructed image is then considered a deformed variant of the mean shape. We validate our model, SADIR, on both brain and cardiac magnetic resonance images (MRIs). Experimental results show that our method outperforms the baselines with lower reconstruction error and better preservation of the shape structure of objects within the images.

PMID:38550968 | PMC:PMC10977919 | DOI:10.1007/978-3-031-46914-5_23

Categories: Literature Watch

Artificial intelligence for early detection of renal cancer in computed tomography: A review

Fri, 2024-03-29 06:00

Camb Prism Precis Med. 2022 Nov 11;1:e4. doi: 10.1017/pcm.2022.9. eCollection 2023.

ABSTRACT

Renal cancer is responsible for over 100,000 yearly deaths and is principally discovered in computed tomography (CT) scans of the abdomen. CT screening would likely increase the rate of early renal cancer detection, and improve general survival rates, but it is expected to have a prohibitively high financial cost. Given recent advances in artificial intelligence (AI), it may be possible to reduce the cost of CT analysis and enable CT screening by automating the radiological tasks that constitute the early renal cancer detection pipeline. This review seeks to facilitate further interdisciplinary research in early renal cancer detection by summarising our current knowledge across AI, radiology, and oncology and suggesting useful directions for future novel work. Initially, this review discusses existing approaches in automated renal cancer diagnosis, and methods across broader AI research, to summarise the existing state of AI cancer analysis. Then, this review matches these methods to the unique constraints of early renal cancer detection and proposes promising directions for future research that may enable AI-based early renal cancer detection via CT screening. The primary targets of this review are clinicians with an interest in AI and data scientists with an interest in the early detection of cancer.

PMID:38550952 | PMC:PMC10953744 | DOI:10.1017/pcm.2022.9

Categories: Literature Watch

Applications of artificial intelligence in dementia research

Fri, 2024-03-29 06:00

Camb Prism Precis Med. 2022 Dec 6;1:e9. doi: 10.1017/pcm.2022.10. eCollection 2023.

ABSTRACT

More than 50 million older people worldwide are suffering from dementia, and this number is estimated to increase to 150 million by 2050. Greater caregiver burdens and financial impacts on the healthcare system are expected as we wait for an effective treatment for dementia. Researchers are constantly exploring new therapies and screening approaches for the early detection of dementia. Artificial intelligence (AI) is widely applied in dementia research, including machine learning and deep learning methods for dementia diagnosis and progression detection. Computerized apps are also convenient tools for patients and caregivers to monitor cognitive function changes. Furthermore, social robots can potentially provide daily life support or guidance for the elderly who live alone. This review aims to provide an overview of AI applications in dementia research. We divided the applications into three categories according to different stages of cognitive impairment: (1) cognitive screening and training, (2) diagnosis and prognosis for dementia, and (3) dementia care and interventions. There are numerous studies on AI applications for dementia research. However, one challenge that remains is comparing the effectiveness of different AI methods in real clinical settings.

PMID:38550934 | PMC:PMC10953738 | DOI:10.1017/pcm.2022.10

Categories: Literature Watch

Erratum: Convolutional neural network (CNN)-enabled electrocardiogram (ECG) analysis: a comparison between standard twelve-lead and single-lead setups

Fri, 2024-03-29 06:00

Front Cardiovasc Med. 2024 Mar 14;11:1396396. doi: 10.3389/fcvm.2024.1396396. eCollection 2024.

ABSTRACT

[This corrects the article DOI: 10.3389/fcvm.2024.1327179.].

PMID:38550518 | PMC:PMC10973542 | DOI:10.3389/fcvm.2024.1396396

Categories: Literature Watch

Review of Deep Learning Based Autosegmentation for Clinical Target Volume: Current Status and Future Directions

Fri, 2024-03-29 06:00

Adv Radiat Oncol. 2024 Feb 8;9(5):101470. doi: 10.1016/j.adro.2024.101470. eCollection 2024 May.

ABSTRACT

PURPOSE: Manual contour work for radiation treatment planning takes significant time to ensure volumes are accurately delineated. The use of artificial intelligence with deep learning based autosegmentation (DLAS) models has made itself known in recent years to alleviate this workload. It is used for organs at risk contouring with significant consistency in performance and time saving. The purpose of this study was to evaluate the performance of present published data for DLAS of clinical target volume (CTV) contours, identify areas of improvement, and discuss future directions.

METHODS AND MATERIALS: A literature review was performed by using the key words "deep learning" AND ("segmentation" or "delineation") AND "clinical target volume" in an indexed search into PubMed. A total of 154 articles based on the search criteria were reviewed. The review considered the DLAS model used, disease site, targets contoured, guidelines used, and the overall performance.

RESULTS: Of the 53 articles investigating DLAS of CTV, only 6 were published before 2020. Publications have increased in recent years, with 46 articles published between 2020 and 2023. The cervix (n = 19) and the prostate (n = 12) were studied most frequently. Most studies (n = 43) involved a single institution. Median sample size was 130 patients (range, 5-1052). The most common metrics used to measure DLAS performance were the Dice similarity coefficient, followed by the Hausdorff distance. Dosimetric performance was seldom reported (n = 11). There was also variability in the specific guidelines used (Radiation Therapy Oncology Group [RTOG], European Society for Therapeutic Radiology and Oncology [ESTRO], and others). DLAS models had good overall performance for contouring CTV volumes across multiple disease sites, with most studies reporting Dice similarity coefficient values >0.7. DLAS models also delineated CTV volumes faster than manual contouring. However, some DLAS model contours still required at least minor edits, and further improvement of DLAS for CTV volumes is needed in future studies.
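
The Dice similarity coefficient used throughout these studies is straightforward to compute from two binary contour masks; the masks below are toy stand-ins for an autosegmented and a manual CTV contour:

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 means none.
    """
    auto_mask = np.asarray(auto_mask, dtype=bool)
    manual_mask = np.asarray(manual_mask, dtype=bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    denom = auto_mask.sum() + manual_mask.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # toy DLAS contour
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # toy manual contour
print(round(dice_coefficient(a, b), 3))  # → 0.8
```

A DSC above 0.7, the threshold most of the reviewed studies exceed, indicates substantial but not perfect overlap, which is why minor manual edits are often still required.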

CONCLUSIONS: DLAS demonstrates capability of completing CTV contour plans with increased efficiency and accuracy. However, most models are developed and validated by single institutions using guidelines followed by the developing institutions. Publications about DLAS of the CTV have increased in recent years. Future studies and DLAS models need to include larger data sets with different patient demographics, disease stages, validation in multi-institutional settings, and inclusion of dosimetric performance.

PMID:38550365 | PMC:PMC10966174 | DOI:10.1016/j.adro.2024.101470

Categories: Literature Watch

Inversion of winter wheat leaf area index from UAV multispectral images: classical vs. deep learning approaches

Fri, 2024-03-29 06:00

Front Plant Sci. 2024 Mar 14;15:1367828. doi: 10.3389/fpls.2024.1367828. eCollection 2024.

ABSTRACT

Precise and timely leaf area index (LAI) estimation for winter wheat is crucial for precision agriculture. The emergence of high-resolution unmanned aerial vehicle (UAV) data and machine learning techniques offers a revolutionary approach for fine-scale estimation of wheat LAI at low cost. While machine learning has proven valuable for LAI estimation, model limitations and variations still impede accurate and efficient LAI inversion. This study explores the potential of classical machine learning models and a deep learning model for estimating winter wheat LAI using multispectral images acquired by drones. Initially, the texture features and vegetation indices served as inputs for the partial least squares regression (PLSR) model and random forest (RF) model. Then, the ground-measured LAI data were combined to invert winter wheat LAI. In contrast, this study also employed a convolutional neural network (CNN) model that solely utilizes the cropped original image for LAI estimation. The results show that vegetation indices outperform the texture features in terms of correlation with LAI and estimation accuracy. However, in both conventional machine learning methods, the highest accuracy is achieved by combining vegetation indices and texture features to invert LAI. Among the three models, the CNN approach yielded the highest LAI estimation accuracy (R² = 0.83), followed by the RF model (R² = 0.82), with the PLSR model exhibiting the lowest accuracy (R² = 0.78). The spatial distributions and values of the estimated results for the RF and CNN models are similar, whereas the PLSR model differs significantly from the other two models. This study achieves rapid and accurate winter wheat LAI estimation using classical machine learning and deep learning methods. The findings can serve as a reference for real-time wheat growth monitoring and field management practices.
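
The combined-feature RF inversion can be sketched as below; the vegetation-index and texture arrays are synthetic stand-ins (the study's real inputs would be per-plot UAV spectral indices and texture statistics):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
vegetation_indices = rng.normal(size=(n, 4))  # synthetic stand-in for NDVI-like indices
texture_features = rng.normal(size=(n, 6))    # synthetic stand-in for texture statistics

# Synthetic "ground-measured" LAI driven mostly by the indices, as in the results:
lai = (vegetation_indices @ np.array([0.8, 0.5, -0.3, 0.2])
       + 0.1 * texture_features[:, 0]
       + rng.normal(scale=0.1, size=n))

# Combine both feature groups, as the study found most accurate:
X = np.hstack([vegetation_indices, texture_features])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:150], lai[:150])
r2 = r2_score(lai[150:], model.predict(X[150:]))
print(f"R^2 on held-out plots: {r2:.2f}")
```

The same train/evaluate pattern applies to the PLSR baseline by swapping in `sklearn.cross_decomposition.PLSRegression`.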

PMID:38550285 | PMC:PMC10972960 | DOI:10.3389/fpls.2024.1367828

Categories: Literature Watch

Editorial: Rising stars in PET and SPECT: 2022

Fri, 2024-03-29 06:00

Front Nucl Med. 2023;3:1326549. doi: 10.3389/fnume.2023.1326549. Epub 2023 Nov 10.

NO ABSTRACT

PMID:38550275 | PMC:PMC10976900 | DOI:10.3389/fnume.2023.1326549

Categories: Literature Watch

Spherical convolutional neural networks can improve brain microstructure estimation from diffusion MRI data

Fri, 2024-03-29 06:00

Front Neuroimaging. 2024 Mar 14;3:1349415. doi: 10.3389/fnimg.2024.1349415. eCollection 2024.

ABSTRACT

Diffusion magnetic resonance imaging is sensitive to the microstructural properties of brain tissue. However, estimating clinically and scientifically relevant microstructural properties from the measured signals remains a highly challenging inverse problem that machine learning may help solve. This study investigated whether recently developed rotationally invariant spherical convolutional neural networks can improve microstructural parameter estimation. We trained a spherical convolutional neural network to predict the ground-truth parameter values from efficiently simulated noisy data and applied the trained network to imaging data acquired in a clinical setting to generate microstructural parameter maps. Our network performed better than the spherical mean technique and multi-layer perceptron, achieving higher prediction accuracy than the spherical mean technique with less rotational variance than the multi-layer perceptron. Although we focused on a constrained two-compartment model of neuronal tissue, the network and training pipeline are generalizable and can be used to estimate the parameters of any Gaussian compartment model. To highlight this, we also trained the network to predict the parameters of a three-compartment model that enables the estimation of apparent neural soma density using tensor-valued diffusion encoding.
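
The spherical mean technique used as a baseline above reduces, at its core, to averaging the diffusion-weighted signal over gradient directions within each b-value shell; a conceptual sketch on synthetic values:

```python
import numpy as np

def spherical_mean(signals, shell_ids):
    """Spherical mean technique baseline: average the diffusion-weighted
    signal over all gradient directions within each b-value shell.

    The per-shell mean is rotationally invariant, so it depends only on
    microstructural (not orientational) tissue properties.
    """
    signals = np.asarray(signals, dtype=float)
    shell_ids = np.asarray(shell_ids)
    return {int(b): float(signals[shell_ids == b].mean())
            for b in np.unique(shell_ids)}

# Synthetic example: 30 gradient directions on two shells (b = 1000, 2000 s/mm^2)
shell_ids = np.array([1000] * 15 + [2000] * 15)
signals = np.concatenate([np.full(15, 0.6), np.full(15, 0.4)])
means = spherical_mean(signals, shell_ids)
print(means)
```

The spherical CNN in the study goes further by consuming the full directional signal while remaining rotationally invariant by construction, rather than discarding orientation information up front.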

PMID:38550242 | PMC:PMC10972853 | DOI:10.3389/fnimg.2024.1349415

Categories: Literature Watch

Artificial Intelligence Predicts Hospitalization for Acute Heart Failure Exacerbation in Patients Undergoing Myocardial Perfusion Imaging

Thu, 2024-03-28 06:00

J Nucl Med. 2024 Mar 28:jnumed.123.266761. doi: 10.2967/jnumed.123.266761. Online ahead of print.

ABSTRACT

Heart failure (HF) is a leading cause of morbidity and mortality in the United States and worldwide, with a high associated economic burden. This study aimed to assess whether artificial intelligence models incorporating clinical, stress test, and imaging parameters could predict hospitalization for acute HF exacerbation in patients undergoing SPECT/CT myocardial perfusion imaging. Methods: The HF risk prediction model was developed using data from 4,766 patients who underwent SPECT/CT at a single center (internal cohort). The algorithm used clinical risk factors, stress variables, SPECT imaging parameters, and fully automated deep learning-generated calcium scores from attenuation CT scans. The model was trained and validated using repeated hold-out (10-fold cross-validation). External validation was conducted on a separate cohort of 2,912 patients. During a median follow-up of 1.9 y, 297 patients (6%) in the internal cohort were admitted for HF exacerbation. Results: The final model demonstrated a higher area under the receiver-operating-characteristic curve (0.87 ± 0.03) for predicting HF admissions than did stress left ventricular ejection fraction (0.73 ± 0.05, P < 0.0001) or a model developed using only clinical parameters (0.81 ± 0.04, P < 0.0001). These findings were confirmed in the external validation cohort (area under the receiver-operating-characteristic curve: 0.80 ± 0.04 for final model, 0.70 ± 0.06 for stress left ventricular ejection fraction, 0.72 ± 0.05 for clinical model; P < 0.001 for all). Conclusion: Integrating SPECT myocardial perfusion imaging into an artificial intelligence-based risk assessment algorithm improves the prediction of HF hospitalization. The proposed method could enable early interventions to prevent HF hospitalizations, leading to improved patient care and better outcomes.
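
The study's central comparison, area under the ROC curve for the combined model versus a single parameter such as stress LVEF, can be reproduced in principle as follows; the cohort and risk scores here are synthetic stand-ins, not the study's data or model:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
admitted = rng.random(n) < 0.06  # ~6% HF admissions, mirroring the internal cohort

# Hypothetical risk scores: the combined model separates the classes better
# than the single-parameter score (both entirely synthetic here).
combined_score = admitted * 1.5 + rng.normal(size=n)
lvef_score = admitted * 0.7 + rng.normal(size=n)

auc_combined = roc_auc_score(admitted, combined_score)
auc_lvef = roc_auc_score(admitted, lvef_score)
print(f"combined model AUC={auc_combined:.2f}, single-parameter AUC={auc_lvef:.2f}")
```

With heavily imbalanced outcomes like a 6% event rate, AUC comparisons of this kind are the standard way to show that adding imaging and clinical variables improves discrimination.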

PMID:38548351 | DOI:10.2967/jnumed.123.266761

Categories: Literature Watch

Who Are the Anatomic Outliers Undergoing Total Knee Arthroplasty? A Computed Tomography (CT)-Based Analysis of the Hip-Knee-Ankle Axis Across 1,352 Preoperative CTs Using a Deep Learning and Computer Vision-Based Pipeline

Thu, 2024-03-28 06:00

J Arthroplasty. 2024 Mar 26:S0883-5403(24)00268-7. doi: 10.1016/j.arth.2024.03.053. Online ahead of print.

ABSTRACT

BACKGROUND: Dissatisfaction after total knee arthroplasty (TKA) ranges from 15 to 30%. While patient selection may be partially responsible, morphological and reconstructive challenges may be determinants. Preoperative computed tomography (CT) scans for TKA planning allow us to evaluate the hip-knee-ankle axis and establish a baseline phenotypic distribution across anatomic parameters. The purpose of this cross-sectional analysis was to establish the distributions of 27 parameters in a pre-TKA cohort and perform threshold analysis to identify anatomic outliers.

METHODS: A total of 1,352 pre-TKA CTs were processed. A two-step deep learning pipeline of classification and segmentation models identified landmark images and then generated contour representations. We utilized an open-source computer vision library to compute measurements for 27 anatomic metrics along the hip-knee axis. Normative distribution plots were established, and thresholds at the 15th percentile at both extremes were calculated. Metrics falling outside the central 70% were considered outlier indices. A threshold analysis of outlier indices against the proportion of the cohort was performed.
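
The outlier-index computation described above amounts to per-metric percentile thresholds and a per-subject count; a sketch on synthetic measurements (the real inputs would be the 27 CT-derived metrics):

```python
import numpy as np

def count_outlier_indices(measurements):
    """Count, per subject, how many anatomic metrics fall outside the
    central 70% of the cohort (below the 15th or above the 85th percentile).

    measurements: array of shape (n_subjects, n_metrics).
    Returns an integer 'outlier index' count per subject.
    """
    measurements = np.asarray(measurements, dtype=float)
    lo = np.percentile(measurements, 15, axis=0)
    hi = np.percentile(measurements, 85, axis=0)
    outside = (measurements < lo) | (measurements > hi)
    return outside.sum(axis=1)

rng = np.random.default_rng(0)
cohort = rng.normal(size=(1352, 27))  # synthetic stand-in for the 27 CT metrics
counts = count_outlier_indices(cohort)
# Subjects at or above the critical point of nine outlier indices get flagged:
print(f"{(counts >= 9).mean():.1%} flagged as anatomic outliers")
```

The flagged fraction on real data (31.2% in the study) depends on how correlated the metrics are; the synthetic cohort here is only for illustrating the mechanics.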

RESULTS: Significant variation exists in pre-TKA anatomy across 27 normally distributed metrics. Threshold analysis revealed a sigmoid function with a critical point at nine outlier indices, representing 31.2% of subjects as anatomic outliers. Metrics with the greatest variation related to deformity (tibiofemoral angle, medial proximal tibial angle, lateral distal femoral angle), bony size (tibial width, anteroposterior femoral size, femoral head size, medial femoral condyle size), intraoperative landmarks (posterior tibial slope, transepicondylar and posterior condylar axes), and neglected rotational considerations (acetabular and femoral version, femoral torsion).

CONCLUSION: In the largest non-industry database of pre-TKA CTs using a fully automated three-stage deep learning and computer vision-based pipeline, marked anatomic variation exists. In the pursuit of understanding the dissatisfaction rate after TKA, acknowledging that 31% of patients represent anatomic outliers may help us better achieve anatomically personalized TKA, with or without adjunctive technology.

PMID:38548237 | DOI:10.1016/j.arth.2024.03.053

Categories: Literature Watch

Precise tooth design using deep learning-based tooth templates

Thu, 2024-03-28 06:00

J Dent. 2024 Mar 26:104971. doi: 10.1016/j.jdent.2024.104971. Online ahead of print.

ABSTRACT

OBJECTIVES: In many prosthodontic procedures, traditional computer-aided design (CAD) is often time-consuming and lacks accuracy in shape restoration. In this study, we innovatively combined implicit template and deep learning (DL) to construct a precise neural network for personalized tooth defect restoration.

METHODS: Ninety models of the right maxillary central incisor (80 for training, 10 for validation) were collected. A DL model named ToothDIT was trained to establish an implicit template and a neural network capable of predicting unique identifications. In the validation stage, teeth in the validation set were processed into corner, incisive, and medium defects. The defective teeth were input into ToothDIT to predict the unique identification, which actuated the deformation of the implicit template to generate the highly customized template (DIT) for the target tooth. Morphological restorations were executed with templates from the template shape library (TSL), the average tooth template (ATT), and the DIT in Exocad (GmbH, Germany). RMSestimate, width, length, aspect ratio, incisal edge curvature, incisive end retraction, and guiding inclination were introduced to assess the restorative accuracy. Statistical analysis was conducted using two-way ANOVA and paired t-tests for overall and detailed differences.

RESULTS: DIT displayed a significantly smaller RMSestimate than TSL and ATT. In the 2D detailed analysis, DIT exhibited significantly smaller deviations from the natural teeth compared with TSL and ATT.

CONCLUSION: The proposed DL model successfully reconstructed the morphology of anterior teeth with various degrees of defects and achieved satisfactory accuracy. This approach provides a more reliable reference for prostheses design, resulting in enhanced accuracy in morphological restoration.

CLINICAL SIGNIFICANCE: This DL model holds promise in assisting dentists and technicians in obtaining morphology templates that closely resemble the original shape of the defective teeth. These customized templates serve as a foundation for enhancing the efficiency and precision of digital restorative design for defective teeth.

PMID:38548165 | DOI:10.1016/j.jdent.2024.104971

Categories: Literature Watch

Application of deep learning radiomics in oral squamous cell carcinoma-Extracting more information from medical images using advanced feature analysis

Thu, 2024-03-28 06:00

J Stomatol Oral Maxillofac Surg. 2024 Mar 26:101840. doi: 10.1016/j.jormas.2024.101840. Online ahead of print.

ABSTRACT

OBJECTIVE: To conduct a systematic review with meta-analyses to assess the recent scientific literature addressing the application of deep learning radiomics in oral squamous cell carcinoma (OSCC).

MATERIALS AND METHODS: Electronic and manual literature retrieval was performed using PubMed, Web of Science, EMbase, Ovid-MEDLINE, and IEEE databases from 2012 to 2023. The ROBINS-I tool was used for quality evaluation; random-effects model was used; and results were reported according to the PRISMA statement.

RESULTS: A total of 26 studies involving 64,731 medical images were included in the quantitative synthesis. The meta-analysis showed that the pooled sensitivity and specificity were 0.88 (95%CI: 0.87∼0.88) and 0.80 (95%CI: 0.80∼0.81), respectively. Deeks' asymmetry test revealed slight publication bias (P = 0.03).
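
A naive way to see where a pooled sensitivity comes from is to aggregate confusion-matrix counts across studies; note that meta-analyses like the one above typically fit bivariate random-effects models rather than this back-of-the-envelope pooling, and the counts here are hypothetical:

```python
def pooled_sensitivity(studies):
    """Naive pooled sensitivity: total true positives over total
    condition-positive cases, aggregated across studies.
    """
    tp = sum(s["tp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    return tp / (tp + fn)

# Hypothetical per-study confusion-matrix counts:
studies = [
    {"tp": 88, "fn": 12},
    {"tp": 170, "fn": 30},
    {"tp": 45, "fn": 5},
]
print(round(pooled_sensitivity(studies), 3))  # → 0.866
```

Random-effects models additionally weight studies by precision and model between-study heterogeneity, which simple count pooling ignores.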

CONCLUSIONS: The advances in the application of radiomics combined with deep learning algorithms in OSCC were reviewed, including the diagnosis and differential diagnosis of OSCC, efficacy assessment, and prognosis prediction. The limitations of deep learning radiomics at the current stage and its future development directions in medical imaging diagnosis were also summarized and analyzed at the end of the article.

PMID:38548062 | DOI:10.1016/j.jormas.2024.101840

Categories: Literature Watch

Evaluation of pore-fracture microstructure of gypsum rock fragments using micro-CT

Thu, 2024-03-28 06:00

Micron. 2024 Mar 17;181:103633. doi: 10.1016/j.micron.2024.103633. Online ahead of print.

ABSTRACT

This study utilized X-ray micro-computed tomography (micro-CT) to investigate weathered gypsum rocks, which can or do serve as a rock substrate for endolithic organisms, focusing on their internal pore-fracture microstructure, estimating porosity, and quantitatively comparing various samples. Examining sections and reconstructed 3D models provides a more detailed insight into the overall structural conditions within rock fragments and the interconnectivity of pore networks, surpassing the limitations of analyzing individual 2D images. Results revealed diverse gypsum forms, cavities, fractures, and secondary features influenced by weathering. Deep learning segmentation based on the U-Net models within the Dragonfly software made it possible to identify and visualize the porous systems and to determine the void space, which was used to calculate porosity. This approach allowed us to describe which types of microstructures and cavities are responsible for the porous spaces in different gypsum samples. A set of quantitative analyses of the detected voids and modeled networks provided needed information about the development of the pore system, connectivity, and pore size distribution. Comparison with mercury intrusion porosimetry showed that the two methods consider different populations of pores. In our case, micro-CT typically detects larger pores (> 10 μm), which is related to the effective resolution of the scanned images. Still, micro-CT proved to be an efficient tool for examining the internal microstructures of weathered gypsum rocks, with promising implications particularly in geobiology and microbiology for the characterization of lithic habitats.
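
Once a segmentation network has labelled the void space, porosity is simply the void-voxel fraction of the sample volume; a minimal sketch with a toy volume standing in for a U-Net segmentation:

```python
import numpy as np

def porosity(void_mask):
    """Porosity from a segmented micro-CT volume: the fraction of voxels
    labelled as void (pore/fracture) space within the sample volume.
    """
    void_mask = np.asarray(void_mask, dtype=bool)
    return void_mask.mean()

# Toy 3D volume standing in for a segmented gypsum fragment:
volume = np.zeros((10, 10, 10), dtype=bool)
volume[2:4, 2:4, 2:4] = True  # a small pore cluster
print(f"porosity = {porosity(volume):.1%}")  # → porosity = 0.8%
```

Pore connectivity and size distributions, as analyzed in the study, would additionally require labelling connected components of the void mask (e.g. with `scipy.ndimage.label`).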

PMID:38547790 | DOI:10.1016/j.micron.2024.103633

Categories: Literature Watch

Classifying alkaliphilic proteins using embeddings from protein language model

Thu, 2024-03-28 06:00

Comput Biol Med. 2024 Mar 26;173:108385. doi: 10.1016/j.compbiomed.2024.108385. Online ahead of print.

ABSTRACT

Alkaliphilic proteins have great potential as biocatalysts in biotechnology, especially for enzyme engineering. Extensive research has focused on exploring the enzymatic potential of alkaliphiles and characterizing alkaliphilic proteins. However, the current method for identifying these proteins requires wet-lab experiments and is time-consuming, labor-intensive, and expensive. Therefore, the development of a computational method for alkaliphilic protein identification would be invaluable for protein engineering and design. In this study, we present a novel approach that uses embeddings from a protein language model called ESM-2(3B) in a deep learning framework to classify alkaliphilic and non-alkaliphilic proteins. To our knowledge, this is the first attempt to employ embeddings from a pre-trained protein language model to classify alkaliphilic proteins. A reliable dataset comprising 1,002 alkaliphilic and 1,866 non-alkaliphilic proteins was constructed for training and testing the proposed model. The proposed model, dubbed ALPACA, achieves scores of 0.88, 0.84, and 0.75 for accuracy, F1-score, and Matthews correlation coefficient, respectively, on an independent dataset. ALPACA is likely to serve as a valuable resource for exploring protein alkalinity and its role in protein design and engineering.
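
The overall embed-then-classify pattern can be sketched as below. In the real pipeline each protein would be represented by a pooled ESM-2(3B) embedding; here random vectors stand in for those embeddings, and a logistic regression stands in for the paper's deep learning classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder features: random vectors standing in for per-protein
# ESM-2 embeddings (the true embeddings would be high-dimensional).
rng = np.random.default_rng(0)
n_alk, n_non, dim = 300, 300, 64
alk = rng.normal(loc=0.2, size=(n_alk, dim))
non = rng.normal(loc=-0.2, size=(n_non, dim))
X = np.vstack([alk, non])
y = np.array([1] * n_alk + [0] * n_non)  # 1 = alkaliphilic

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

The key idea is that the language model does the heavy lifting of representation learning, so even a simple downstream classifier can separate the classes when the embeddings encode the relevant biochemistry.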

PMID:38547659 | DOI:10.1016/j.compbiomed.2024.108385

Categories: Literature Watch

GraphormerDTI: A graph transformer-based approach for drug-target interaction prediction

Thu, 2024-03-28 06:00

Comput Biol Med. 2024 Mar 18;173:108339. doi: 10.1016/j.compbiomed.2024.108339. Online ahead of print.

ABSTRACT

The application of Artificial Intelligence (AI) to screen drug molecules with potential therapeutic effects has revolutionized the drug discovery process, with significantly lower economic cost and time consumption than the traditional drug discovery pipeline. With the great power of AI, it is possible to rapidly search the vast chemical space for potential drug-target interactions (DTIs) between candidate drug molecules and disease protein targets. However, only a small proportion of molecules have labelled DTIs, consequently limiting the performance of AI-based drug screening. To solve this problem, a machine learning-based approach with great ability to generalize DTI prediction across molecules is desirable. Many existing machine learning approaches for DTI identification failed to exploit the full information with respect to the topological structures of candidate molecules. To develop a better approach for DTI prediction, we propose GraphormerDTI, which employs the powerful Graph Transformer neural network to model molecular structures. GraphormerDTI embeds molecular graphs into vector-format representations through iterative Transformer-based message passing, which encodes molecules' structural characteristics by node centrality encoding, node spatial encoding and edge encoding. With a strong structural inductive bias, the proposed GraphormerDTI approach can effectively infer informative representations for out-of-sample molecules and as such, it is capable of predicting DTIs across molecules with an exceptional performance. GraphormerDTI integrates the Graph Transformer neural network with a 1-dimensional Convolutional Neural Network (1D-CNN) to extract the drugs' and target proteins' representations and leverages an attention mechanism to model the interactions between them. 
To examine GraphormerDTI's performance for DTI prediction, we conduct experiments on three benchmark datasets, where GraphormerDTI achieves a superior performance than five state-of-the-art baselines for out-of-molecule DTI prediction, including GNN-CPI, GNN-PT, DeepEmbedding-DTI, MolTrans and HyperAttentionDTI, and is on a par with the best baseline for transductive DTI prediction. The source codes and datasets are publicly accessible at https://github.com/mengmeng34/GraphormerDTI.
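
The node centrality encoding mentioned above can be illustrated with a simplified degree-based sketch (in Graphormer the embedding table is learned jointly with the network; here it is a fixed stand-in):

```python
import numpy as np

def centrality_encoding(adjacency, degree_table):
    """Simplified Graphormer-style node centrality encoding: each node's
    input feature is augmented with an embedding indexed by its degree,
    so highly connected 'hub' atoms are marked for the Transformer.

    adjacency: (n, n) 0/1 molecular graph.
    degree_table: (max_degree + 1, d) embedding table (learned in practice).
    """
    adjacency = np.asarray(adjacency)
    degrees = adjacency.sum(axis=1).astype(int)
    return degree_table[degrees]  # (n, d) encoding added to node features

# Toy molecular graph: a 3-atom chain (degrees 1, 2, 1)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
table = np.arange(12, dtype=float).reshape(4, 3)  # stand-in for learned weights
enc = centrality_encoding(adj, table)
print(enc)
```

Spatial and edge encodings work analogously, injecting shortest-path distances and bond features into the attention computation so the Transformer respects graph structure.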

PMID:38547658 | DOI:10.1016/j.compbiomed.2024.108339

Categories: Literature Watch

FLP: Factor lattice pattern-based automated detection of Parkinson's disease and specific language impairment using recorded speech

Thu, 2024-03-28 06:00

Comput Biol Med. 2024 Mar 20;173:108280. doi: 10.1016/j.compbiomed.2024.108280. Online ahead of print.

ABSTRACT

BACKGROUND: Timely detection of neurodevelopmental and neurological conditions is crucial for early intervention. Specific Language Impairment (SLI) in children and Parkinson's disease (PD) manifest in speech disturbances that may be exploited for diagnostic screening using recorded speech signals. We were motivated to develop an accurate yet computationally lightweight model for speech-based detection of SLI and PD, employing novel feature engineering techniques to mimic the adaptable dynamic weight assignment capability of deep learning networks.

MATERIALS AND METHODS: In this research, we have introduced an advanced feature engineering model incorporating a novel feature extraction function, the Factor Lattice Pattern (FLP), which is a quantum-inspired method and uses a superposition-like mechanism, making it dynamic in nature. The FLP encompasses eight distinct patterns, from which the most appropriate pattern was discerned based on the data structure. Through the implementation of the FLP, we automatically extracted signal-specific textural features. Additionally, we developed a new feature engineering model to assess the efficacy of the FLP. This model is self-organizing, producing nine potential results and subsequently choosing the optimal one. Our speech classification framework consists of (1) feature extraction using the proposed FLP and a statistical feature extractor; (2) feature selection employing iterative neighborhood component analysis and an intersection-based feature selector; (3) classification via support vector machine and k-nearest neighbors; and (4) outcome determination using combinational majority voting to select the most favorable results.
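
The four-stage framework can be sketched end to end with off-the-shelf components; the synthetic features below stand in for FLP and statistical speech features, and `SelectKBest` stands in for the paper's iterative neighborhood component analysis, which scikit-learn does not provide directly:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# (1) Synthetic stand-in for extracted FLP + statistical speech features:
X, y = make_classification(n_samples=400, n_features=60,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# (2) Feature selection (SelectKBest as a simple proxy for iterative NCA):
selector = SelectKBest(f_classif, k=20).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

# (3) SVM and kNN classifiers, then (4) majority vote over their outputs:
preds = np.stack([
    SVC(kernel="rbf").fit(X_tr_s, y_tr).predict(X_te_s),
    KNeighborsClassifier(n_neighbors=5).fit(X_tr_s, y_tr).predict(X_te_s),
])
voted = (preds.mean(axis=0) >= 0.5).astype(int)  # ties go to class 1
acc = (voted == y_te).mean()
print(f"voted accuracy: {acc:.2f}")
```

The appeal of this design is that every stage is cheap to compute, which is what keeps the model lightweight relative to deep networks.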

RESULTS: To validate the classification capabilities of our proposed feature engineering model, designed to automatically detect PD and SLI, we employed three speech datasets of PD and SLI patients. Our FLP-centric model achieved classification accuracies of more than 95% on the PD datasets and 99.79% on the SLI dataset, respectively.

CONCLUSIONS: Our results indicate that the proposed model is an accurate alternative to deep learning models in classifying neurological conditions using speech signals.

PMID:38547655 | DOI:10.1016/j.compbiomed.2024.108280

Categories: Literature Watch

AML leukocyte classification method for small samples based on ACGAN

Thu, 2024-03-28 06:00

Biomed Tech (Berl). 2024 Mar 29. doi: 10.1515/bmt-2024-0028. Online ahead of print.

ABSTRACT

Leukemia is a class of hematologic malignancies, of which acute myeloid leukemia (AML) is the most common. Screening and diagnosis of AML are performed by microscopic examination or chemical testing of images of the patient's peripheral blood smear. In smear microscopy, the ability to quickly identify, count, and differentiate different types of blood cells is critical for disease diagnosis. With the development of deep learning (DL), classification techniques based on neural networks have been applied to the recognition of blood cells. However, DL methods require large numbers of valid training samples. This study assesses the applicability of the auxiliary classifier generative adversarial network (ACGAN) to the classification of white blood cells from small samples. The method is trained on the TCIA dataset, and its classification accuracy is compared with that of two classical classifiers and current state-of-the-art methods. The results are evaluated using accuracy, precision, recall, and F1 score. On the validation set, the ACGAN achieves an accuracy of 97.1%, with precision, recall, and F1 scores of 97.5%, 97.3%, and 97.4%, respectively. In addition, the ACGAN scores higher than other advanced methods, indicating that it is competitive in classification accuracy.
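The four metrics used in the evaluation above can all be computed from a confusion matrix. A minimal binary-classification sketch in plain Python follows; the example labels are invented and do not come from the TCIA dataset.

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, precision, recall, and F1 for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy  = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical ground truth and predictions for two cell classes
y_true = ["AML", "AML", "normal", "AML", "normal"]
y_pred = ["AML", "normal", "normal", "AML", "normal"]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred, positive="AML")
```

For the multi-class white-blood-cell setting, these per-class scores would typically be averaged (macro or weighted) across cell types.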

PMID:38547466 | DOI:10.1515/bmt-2024-0028

Categories: Literature Watch

Self-replicating artificial neural networks give rise to universal evolutionary dynamics

Thu, 2024-03-28 06:00

PLoS Comput Biol. 2024 Mar 28;20(3):e1012004. doi: 10.1371/journal.pcbi.1012004. Online ahead of print.

ABSTRACT

In evolutionary models, mutations are typically introduced exogenously by the modeler rather than endogenously by the replicator itself. We present a new deep-learning-based computational model, the self-replicating artificial neural network (SeRANN). We train it to (i) copy its own genotype, like a biological organism, which introduces endogenous spontaneous mutations; and (ii) simultaneously perform a classification task that determines its fertility. Evolving 1,000 SeRANNs for 6,000 generations, we observed various evolutionary phenomena such as adaptation, clonal interference, epistasis, and evolution of both the mutation rate and the distribution of fitness effects of new mutations. Our results demonstrate that universal evolutionary phenomena can naturally emerge in a self-replicator model when both selection and mutation are implicit and endogenous. We therefore suggest that SeRANN can be applied to explore and test various evolutionary dynamics and hypotheses.
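The core idea, fertility set by task performance while mutations arise from imperfect self-copying, can be caricatured with a toy numeric replicator. This is not the SeRANN architecture: the genotype here is a plain vector, the "task" is distance to a fixed target, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.ones(8)                      # stand-in for the classification task
pop = rng.normal(size=(50, 8))           # 50 replicators, 8-gene genotypes

def fitness(genomes):
    # Higher fitness = better task performance (closer to the optimum)
    return 1.0 / (1.0 + np.linalg.norm(genomes - target, axis=1))

f0 = fitness(pop).mean()                 # baseline mean fitness

for generation in range(300):
    # Fertility: only the fitter half of the population reproduces
    best = pop[np.argsort(fitness(pop))[-25:]]
    # Self-copying is imperfect: the copy noise plays the role of
    # endogenous spontaneous mutation
    pop = np.repeat(best, 2, axis=0) + rng.normal(scale=0.05, size=(50, 8))
```

Unlike this sketch, a SeRANN's mutation process is itself encoded by the network that does the copying, which is what lets the mutation rate evolve.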

PMID:38547320 | DOI:10.1371/journal.pcbi.1012004

Categories: Literature Watch

DEW: A wavelet approach of rare sound event detection

Thu, 2024-03-28 06:00

PLoS One. 2024 Mar 28;19(3):e0300444. doi: 10.1371/journal.pone.0300444. eCollection 2024.

ABSTRACT

This paper presents a novel sound event detection (SED) system for rare events occurring in an open environment. Wavelet multiresolution analysis (MRA) is used to decompose the 30-second input audio clip into five levels. Wavelet denoising is then applied to the third and fifth levels of the MRA to filter out the background. Significant transitions, which may represent the onset of a rare event, are then estimated in these two levels by combining a peak-finding algorithm with K-medoids clustering. Small one-second portions, called 'chunks', are cropped from the input audio signal at the estimated locations of the significant transitions. Features are extracted from these chunks by a wavelet scattering network (WSN) and passed to a support vector machine (SVM) classifier. The proposed SED framework produces an error rate comparable to that of SED systems based on convolutional neural network (CNN) architectures. The proposed algorithm is also computationally efficient and lightweight compared to deep learning models, as it has no learnable parameters. It requires only a single epoch of training, which is 5, 10, 200, and 600 times less than models based on CNNs and deep neural networks (DNNs), CNN with long short-term memory (LSTM) network, convolutional recurrent neural network (CRNN), and CNN, respectively. The proposed model requires neither concatenation with previous frames for anomaly detection nor the additional training-data creation needed for other comparative deep learning models. It needs to check almost 360 times fewer chunks for the presence of rare events than the other baseline systems used for comparison in this paper. All these characteristics make the proposed system suitable for real-time applications on resource-limited devices.
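The wavelet-denoising step in this pipeline can be illustrated with a single-level Haar decomposition and soft thresholding in NumPy. This is a toy stand-in: the paper's five-level MRA, its wavelet family, and its threshold choice are not reproduced, and the test signal is synthetic.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def soft_threshold(c, thr):
    """Shrink detail coefficients toward zero to suppress background noise."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def denoise(x, thr):
    approx, detail = haar_level(x)
    detail = soft_threshold(detail, thr)
    # Inverse single-level Haar transform
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)            # slowly varying "signal"
noisy = clean + rng.normal(scale=0.3, size=t.size)
denoised = denoise(noisy, thr=0.3)
```

Because the low-frequency signal contributes little to the detail band, thresholding the detail coefficients removes mostly noise, which is the same rationale behind denoising the selected MRA levels before peak finding.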

PMID:38547253 | DOI:10.1371/journal.pone.0300444

Categories: Literature Watch
