Deep learning

Surveying the landscape of diagnostic imaging in dentistry's future: Four emerging technologies with promise

Sat, 2024-03-23 06:00

J Am Dent Assoc. 2024 Mar 23:S0002-8177(24)00056-4. doi: 10.1016/j.adaj.2024.01.005. Online ahead of print.

ABSTRACT

BACKGROUND: Advances in digital radiography for both intraoral and panoramic imaging and cone-beam computed tomography have led the way to an increase in diagnostic capabilities for the dental care profession. In this article, the authors provide information on 4 emerging technologies with promise.

TYPES OF STUDIES REVIEWED: The authors feature the following: artificial intelligence in the form of deep learning using convolutional neural networks, dental magnetic resonance imaging, stationary intraoral tomosynthesis, and second-generation cone-beam computed tomography sources based on carbon nanotube technology and multispectral imaging. The authors review and summarize articles featuring these technologies.

RESULTS: The history and background of these emerging technologies are previewed along with their development and potential impact on the practice of dental diagnostic imaging. The authors conclude that these emerging technologies have the potential to have a substantial influence on the practice of dentistry as these systems mature. The degree of influence most likely will vary, with artificial intelligence being the most influential of the 4.

CONCLUSIONS AND PRACTICAL IMPLICATIONS: The readers are informed about these emerging technologies and the potential effects on their practice going forward, giving them information on which to base decisions on adopting 1 or more of these technologies. The 4 technologies reviewed in this article have the potential to improve imaging diagnostics in dentistry thereby leading to better patient care and heightened professional satisfaction.

PMID:38520421 | DOI:10.1016/j.adaj.2024.01.005

Categories: Literature Watch

Assessing brain involvement in Fabry disease with deep learning and the brain-age paradigm

Sat, 2024-03-23 06:00

Hum Brain Mapp. 2024 Apr;45(5):e26599. doi: 10.1002/hbm.26599.

ABSTRACT

While neurological manifestations are core features of Fabry disease (FD), quantitative neuroimaging biomarkers that allow measurement of brain involvement are lacking. We used deep learning and the brain-age paradigm to assess whether FD patients' brains appear older than normal and to validate brain-predicted age difference (brain-PAD) as a possible disease severity biomarker. MRI scans of FD patients and healthy controls (HCs) from a single institution were retrospectively studied. The Fabry stabilization index (FASTEX) was recorded as a measure of disease severity. Using minimally preprocessed 3D T1-weighted brain scans of healthy subjects from eight publicly available sources (N = 2160; mean age = 33 years [range 4-86]), we trained a model predicting chronological age based on a DenseNet architecture and used it to generate brain-age predictions in the internal cohort. Within a linear modeling framework, brain-PAD was tested for age/sex-adjusted associations with diagnostic group (FD vs. HC), FASTEX score, and both global and voxel-level neuroimaging measures. We studied 52 FD patients (40.6 ± 12.6 years; 28F) and 58 HC (38.4 ± 13.4 years; 28F). The brain-age model achieved accurate out-of-sample performance (mean absolute error = 4.01 years, R2 = .90). FD patients had significantly higher brain-PAD than HC (estimated marginal means: 3.1 vs. -0.1, p = .01). Brain-PAD was associated with FASTEX score (B = 0.10, p = .02), brain parenchymal fraction (B = -153.50, p = .001), white matter hyperintensities load (B = 0.85, p = .01), and tissue volume reduction throughout the brain. We demonstrated that FD patients' brains appear older than normal. Brain-PAD correlates with FD-related multi-organ damage and is influenced by both global brain volume and white matter hyperintensities, offering a comprehensive biomarker of (neurological) disease severity.
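As a concrete illustration of the brain-age paradigm described above, the minimal sketch below computes brain-PAD (predicted minus chronological age) and an age/sex-adjusted group association on synthetic data; the column names and values are placeholders, and the actual study used a DenseNet trained on T1-weighted MRI rather than the toy predictions shown here.

```python
# Hedged sketch of the brain-PAD analysis; synthetic data, not the study cohort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
age = rng.uniform(20, 70, n)                       # chronological age
sex = rng.choice(["F", "M"], n)
group = rng.choice(["FD", "HC"], n)                # Fabry disease vs healthy control
# Placeholder brain-age predictions: FD brains appear ~3 years older on average.
predicted_age = age + np.where(group == "FD", 3.0, 0.0) + rng.normal(0, 4, n)

df = pd.DataFrame({"age": age, "sex": sex, "group": group, "predicted_age": predicted_age})
df["brain_pad"] = df["predicted_age"] - df["age"]  # brain-predicted age difference

# Age/sex-adjusted association between brain-PAD and diagnostic group.
model = smf.ols("brain_pad ~ C(group) + age + C(sex)", data=df).fit()
print(model.params)
```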

PMID:38520360 | DOI:10.1002/hbm.26599

Categories: Literature Watch

Discovery of Covalent Lead Compounds Targeting 3CL Protease with a Lateral Interactions Spiking Neural Network

Sat, 2024-03-23 06:00

J Chem Inf Model. 2024 Mar 23. doi: 10.1021/acs.jcim.3c01900. Online ahead of print.

ABSTRACT

Covalent drugs exhibit advantages that noncovalent drugs cannot match, and covalent docking is an important method for screening covalent lead compounds. However, it is difficult for covalent docking to screen covalent compounds on a large scale because covalent docking requires determination of the covalent reaction type of the compound. Here, we propose to use deep learning with a lateral interactions spiking neural network (LISNN) to construct a covalent lead compound screening model that can quickly screen covalent lead compounds. We used the 3CL protease (3CL Pro) of SARS-CoV-2 as the screening target and constructed two classification models based on LISNN to predict the covalent binding and inhibitory activity of compounds. The two classification models were trained on the covalent complex data set targeting cysteine (Cys) and the compound inhibitory activity data set targeting 3CL Pro, respectively, with good prediction accuracy (ACC > 0.9). We then screened the compound library with 6 covalent-binding screening models and 12 inhibitory-activity screening models. We tested the inhibitory activity of the 32 compounds, and the best compound inhibited SARS-CoV-2 3CL Pro with an IC50 value of 369.5 nM. A further assay implied that dithiothreitol can affect the compound's inhibitory activity against 3CL Pro, indicating that the compound may covalently bind 3CL Pro. A selectivity test showed that the compound had good target selectivity for 3CL Pro over cathepsin L. Together, these assays support the rationality of the covalent lead compound screening model. Finally, covalent docking was performed to demonstrate the binding conformation of the compound with 3CL Pro. The source code can be obtained from the GitHub repository (https://github.com/guzh970630/Screen_Covalent_Compound_by_LISNN).
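The following minimal sketch illustrates the two-stage screening logic described above (a covalent-binding classifier plus an activity classifier applied to a compound library); the random forests and random fingerprints are placeholders for illustration only and are not the paper's LISNN models.

```python
# Hedged sketch of two-stage covalent lead screening; placeholder data and models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder 2048-bit molecular fingerprints and binary labels.
X_cov, y_cov = rng.integers(0, 2, (500, 2048)), rng.integers(0, 2, 500)  # covalent binding to Cys
X_act, y_act = rng.integers(0, 2, (500, 2048)), rng.integers(0, 2, 500)  # 3CL Pro inhibitory activity

covalent_model = RandomForestClassifier(n_estimators=100).fit(X_cov, y_cov)
activity_model = RandomForestClassifier(n_estimators=100).fit(X_act, y_act)

# Screen a library: keep compounds flagged positive by both models for assay.
library = rng.integers(0, 2, (10_000, 2048))
hits = (covalent_model.predict(library) == 1) & (activity_model.predict(library) == 1)
print(f"{hits.sum()} candidate covalent lead compounds flagged for testing")
```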

PMID:38520328 | DOI:10.1021/acs.jcim.3c01900

Categories: Literature Watch

Unpaired deep learning for pharmacokinetic parameter estimation from dynamic contrast-enhanced MRI without AIF measurements

Fri, 2024-03-22 06:00

Neuroimage. 2024 Mar 20:120571. doi: 10.1016/j.neuroimage.2024.120571. Online ahead of print.

ABSTRACT

DCE-MRI provides information about vascular permeability and tissue perfusion through the acquisition of pharmacokinetic parameters. However, traditional methods for estimating these pharmacokinetic parameters involve fitting tracer kinetic models, which often suffer from computational complexity and low accuracy due to noisy arterial input function (AIF) measurements. Although some deep learning approaches have been proposed to tackle these challenges, most existing methods rely on supervised learning that requires paired input DCE-MRI and labeled pharmacokinetic parameter maps. This dependency on labeled data introduces significant time and resource constraints and potential noise in the labels, making supervised learning methods often impractical. To address these limitations, we present a novel unpaired deep learning method for estimating pharmacokinetic parameters and the AIF using a physics-driven CycleGAN approach. Our proposed CycleGAN framework is designed based on the underlying physics model, resulting in a simpler architecture with a single generator and discriminator pair. Crucially, our experimental results indicate that our method does not necessitate separate AIF measurements and produces more reliable pharmacokinetic parameters than other techniques.
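For context, the minimal sketch below shows the standard Tofts model that pharmacokinetic-parameter estimation in DCE-MRI is typically based on, with Ktrans and ve as the fitted parameters; the gamma-variate AIF and values are placeholders, and this is not the paper's CycleGAN method.

```python
# Hedged sketch of the standard Tofts pharmacokinetic model for DCE-MRI.
import numpy as np

def tofts_concentration(t, ktrans, ve, aif):
    """Tissue concentration: C_t(t) = Ktrans * (AIF convolved with exp(-Ktrans/ve * t))."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

t = np.arange(0, 300, 1.0)                       # seconds
aif = 5.0 * (t / 20.0) * np.exp(-t / 20.0)       # placeholder gamma-variate AIF
ct = tofts_concentration(t, ktrans=0.1 / 60, ve=0.2, aif=aif)
print(ct[:5])
```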

PMID:38518829 | DOI:10.1016/j.neuroimage.2024.120571

Categories: Literature Watch

Nacre-like block lattice metamaterials with targeted phononic band gap and mechanical properties

Fri, 2024-03-22 06:00

J Mech Behav Biomed Mater. 2024 Mar 18;154:106511. doi: 10.1016/j.jmbbm.2024.106511. Online ahead of print.

ABSTRACT

The extraordinary quasi-static mechanical properties of nacre-like composite metamaterials, such as high specific strength, stiffness, and toughness, are due to the periodic arrangement of two distinct phases in a "brick and mortar" structure. It is also theorized that the hierarchical periodic structure of nacre can provide wider band gaps at different frequency scales. However, the role of hierarchy in the dynamic behavior of metamaterials is largely unknown, and most current investigations are focused on a single objective and specialized applications. Nature, on the other hand, appears to develop systems that represent a trade-off between multiple objectives, such as stiffness, fatigue resistance, and wave attenuation. Given the wide range of design options available to these systems, a multidisciplinary strategy combining diverse objectives may be a useful opportunity provided by bioinspired artificial systems. This paper describes a class of hierarchically architected block lattice metamaterials with simultaneous wave filtering and enhanced mechanical properties, designed using deep learning based on artificial neural networks (ANNs), to overcome the shortcomings of traditional design methods for forward prediction, parameter design, and topology design of block lattice metamaterials. Our approach uses ANNs to efficiently describe the complicated interactions between nacre geometry and its attributes, and then uses Bayesian optimization to determine the optimal geometry constants that match the given fitness requirements. We numerically demonstrate that complete band gaps, attributed to the coupling of local resonances and Bragg scattering, exist. The coupling effects are naturally influenced by the topological arrangements of the continuous structures and the mechanical characteristics of the component phases. We also demonstrate how the frequency of the complete band gap can be tuned by modifying the geometrical configurations and volume fraction distribution of the metamaterials. This research contributes to the development of mechanically robust block lattice metamaterials and lenses capable of controlling acoustic and elastic waves in hostile settings.
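The sketch below illustrates one way a surrogate-plus-Bayesian-optimization loop of the kind described above can be organized; a Gaussian process stands in for the paper's trained ANN surrogate, and the objective function, parameter ranges, and acquisition rule are assumptions for illustration.

```python
# Hedged sketch of surrogate-based Bayesian optimization over geometry parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fitness(x):
    # Placeholder objective standing in for simulated band-gap width / stiffness.
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (8, 2))            # initial geometry samples (e.g., aspect ratio, volume fraction)
y = np.array([fitness(x) for x in X])

for _ in range(20):
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    candidates = rng.uniform(0, 1, (256, 2))
    mu, sigma = surrogate.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mu + 1.0 * sigma)]   # upper-confidence-bound acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, fitness(x_next))

print("best geometry:", X[np.argmax(y)], "best fitness:", y.max())
```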

PMID:38518512 | DOI:10.1016/j.jmbbm.2024.106511

Categories: Literature Watch

Foresight-a generative pretrained transformer for modelling of patient timelines using electronic health records: a retrospective modelling study

Fri, 2024-03-22 06:00

Lancet Digit Health. 2024 Apr;6(4):e281-e290. doi: 10.1016/S2589-7500(24)00025-6.

ABSTRACT

BACKGROUND: An electronic health record (EHR) holds detailed longitudinal information about a patient's health status and general clinical history, a large portion of which is stored as unstructured, free text. Existing approaches to model a patient's trajectory focus mostly on structured data and a subset of single-domain outcomes. This study aims to evaluate the effectiveness of Foresight, a generative transformer in temporal modelling of patient data, integrating both free text and structured formats, to predict a diverse array of future medical outcomes, such as disorders, substances (eg, to do with medicines, allergies, or poisonings), procedures, and findings (eg, relating to observations, judgements, or assessments).

METHODS: Foresight is a novel transformer-based pipeline that uses named entity recognition and linking tools to convert EHR document text into structured, coded concepts, followed by providing probabilistic forecasts for future medical events, such as disorders, substances, procedures, and findings. The Foresight pipeline has four main components: (1) CogStack (data retrieval and preprocessing); (2) the Medical Concept Annotation Toolkit (structuring of the free-text information from EHRs); (3) Foresight Core (deep-learning model for biomedical concept modelling); and (4) the Foresight web application. We processed the entire free-text portion from three different hospital datasets (King's College Hospital [KCH], South London and Maudsley [SLaM], and the US Medical Information Mart for Intensive Care III [MIMIC-III]), resulting in information from 811 336 patients and covering both physical and mental health institutions. We measured the performance of models using custom metrics derived from precision and recall.

FINDINGS: Foresight achieved a precision@10 (ie, of 10 forecasted candidates, at least one is correct) of 0·68 (SD 0·0027) for the KCH dataset, 0·76 (0·0032) for the SLaM dataset, and 0·88 (0·0018) for the MIMIC-III dataset, for forecasting the next new disorder in a patient timeline. Foresight also achieved a precision@10 value of 0·80 (0·0013) for the KCH dataset, 0·81 (0·0026) for the SLaM dataset, and 0·91 (0·0011) for the MIMIC-III dataset, for forecasting the next new biomedical concept. In addition, Foresight was validated on 34 synthetic patient timelines by five clinicians and achieved a relevancy of 33 (97% [95% CI 91-100]) of 34 for the top forecasted candidate disorder. As a generative model, Foresight can forecast follow-on biomedical concepts for as many steps as required.
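A minimal sketch of the precision@k metric as defined above (a forecast counts as correct if any of the top-k candidates matches the next new concept actually observed); the concept codes below are placeholders.

```python
# Hedged sketch of precision@k for next-concept forecasting; placeholder data.
def precision_at_k(forecasts, truths, k=10):
    hits = sum(1 for cands, truth in zip(forecasts, truths) if truth in cands[:k])
    return hits / len(truths)

forecasts = [["J45", "I10", "E11"], ["I10", "N18"]]   # ranked candidate concepts per timeline
truths = ["E11", "K21"]                               # next new concept per timeline
print(precision_at_k(forecasts, truths, k=10))        # 0.5
```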

INTERPRETATION: Foresight is a general-purpose model for biomedical concept modelling that can be used for real-world risk forecasting, virtual trials, and clinical research to study the progression of disorders, to simulate interventions and counterfactuals, and for educational purposes.

FUNDING: National Health Service Artificial Intelligence Laboratory, National Institute for Health and Care Research Biomedical Research Centre, and Health Data Research UK.

PMID:38519155 | DOI:10.1016/S2589-7500(24)00025-6

Categories: Literature Watch

A deep-learning model for intracranial aneurysm detection on CT angiography images in China: a stepwise, multicentre, early-stage clinical validation study

Fri, 2024-03-22 06:00

Lancet Digit Health. 2024 Apr;6(4):e261-e271. doi: 10.1016/S2589-7500(23)00268-6.

ABSTRACT

BACKGROUND: Artificial intelligence (AI) models in real-world implementation are scarce. Our study aimed to develop a CT angiography (CTA)-based AI model for intracranial aneurysm detection, assess how it helps clinicians improve diagnostic performance, and validate its application in real-world clinical implementation.

METHODS: We developed a deep-learning model using 16 546 head and neck CTA examination images from 14 517 patients at eight Chinese hospitals. Using an adapted, stepwise implementation and evaluation, 120 certified clinicians from 15 geographically different hospitals were recruited. Initially, the AI model was externally validated with images of 900 digital subtraction angiography-verified CTA cases (examinations) and compared with the performance of 24 clinicians who each viewed 300 of these cases (stage 1). Next, as a further external validation a multi-reader multi-case study enrolled 48 clinicians to individually review 298 digital subtraction angiography-verified CTA cases (stage 2). The clinicians reviewed each CTA examination twice (ie, with and without the AI model), separated by a 4-week washout period. Then, a randomised open-label comparison study enrolled 48 clinicians to assess the acceptance and performance of this AI model (stage 3). Finally, the model was prospectively deployed and validated in 1562 real-world clinical CTA cases.

FINDINGS: The AI model in the internal dataset achieved a patient-level diagnostic sensitivity of 0·957 (95% CI 0·939-0·971) and a higher patient-level diagnostic sensitivity than clinicians (0·943 [0·921-0·961] vs 0·658 [0·644-0·672]; p<0·0001) in the external dataset. In the multi-reader multi-case study, the AI-assisted strategy improved clinicians' diagnostic performance both on a per-patient basis (the area under the receiver operating characteristic curves [AUCs]; 0·795 [0·761-0·830] without AI vs 0·878 [0·850-0·906] with AI; p<0·0001) and a per-aneurysm basis (the area under the weighted alternative free-response receiver operating characteristic curves; 0·765 [0·732-0·799] vs 0·865 [0·839-0·891]; p<0·0001). Reading time decreased with the aid of the AI model (87·5 s vs 82·7 s, p<0·0001). In the randomised open-label comparison study, clinicians in the AI-assisted group had a high acceptance of the AI model (92·6% adoption rate), and a higher AUC when compared with the control group (0·858 [95% CI 0·850-0·866] vs 0·789 [0·780-0·799]; p<0·0001). In the prospective study, the AI model had a 0·51% (8/1570) error rate due to poor-quality CTA images and recognition failure. The model had a high negative predictive value of 0·998 (0·994-1·000) and significantly improved the diagnostic performance of clinicians; AUC improved from 0·787 (95% CI 0·766-0·808) to 0·909 (0·894-0·923; p<0·0001) and patient-level sensitivity improved from 0·590 (0·511-0·666) to 0·825 (0·759-0·880; p<0·0001).
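As a reminder of how the patient-level metrics above are defined, the minimal sketch below computes sensitivity and negative predictive value from a confusion matrix; the counts are placeholders, not the study's data.

```python
# Hedged sketch of patient-level sensitivity and NPV from confusion-matrix counts.
def sensitivity_npv(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # true-positive rate
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, npv

sens, npv = sensitivity_npv(tp=165, fp=40, tn=1320, fn=35)   # placeholder counts
print(f"sensitivity={sens:.3f}, NPV={npv:.3f}")
```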

INTERPRETATION: This AI model demonstrated strong clinical potential for intracranial aneurysm detection with improved clinician diagnostic performance, high acceptance, and practical implementation in real-world clinical cases.

FUNDING: National Natural Science Foundation of China.

TRANSLATION: For the Chinese translation of the abstract see Supplementary Materials section.

PMID:38519154 | DOI:10.1016/S2589-7500(23)00268-6

Categories: Literature Watch

Development and external validation of a dynamic risk score for early prediction of cardiogenic shock in cardiac intensive care units using machine learning

Fri, 2024-03-22 06:00

Eur Heart J Acute Cardiovasc Care. 2024 Mar 22:zuae037. doi: 10.1093/ehjacc/zuae037. Online ahead of print.

ABSTRACT

BACKGROUND: Myocardial infarction and heart failure are major cardiovascular diseases that affect millions of people in the US with the morbidity and mortality being highest among patients who develop cardiogenic shock. Early recognition of cardiogenic shock allows prompt implementation of treatment measures. Our objective is to develop a new dynamic risk score, called CShock, to improve early detection of cardiogenic shock in cardiac intensive care unit (ICU).

METHODS: We developed and externally validated a deep learning-based risk stratification tool, called CShock, for patients admitted to the cardiac ICU with acute decompensated heart failure and/or myocardial infarction to predict the onset of cardiogenic shock. We prepared a cardiac ICU dataset using the MIMIC-III database by annotating it with physician-adjudicated outcomes. This dataset, which consisted of 1500 patients (204 with cardiogenic/mixed shock), was then used to train CShock. The features used to train the model included patient demographics, cardiac ICU admission diagnoses, routinely measured laboratory values and vital signs, and relevant features manually extracted from echocardiogram and left heart catheterization reports. We externally validated the risk model on the New York University (NYU) Langone Health cardiac ICU database, which was also annotated with physician-adjudicated outcomes. The external validation cohort consisted of 131 patients, 25 of whom experienced cardiogenic/mixed shock.

RESULTS: CShock achieved an area under the receiver operating characteristic curve (AUROC) of 0.821 (95% CI 0.792-0.850). CShock was externally validated in the more contemporary NYU cohort and achieved an AUROC of 0.800 (95% CI 0.717-0.884), demonstrating its generalizability to other cardiac ICUs. Based on Shapley values, an elevated heart rate was the most predictive feature for cardiogenic shock development. The remaining top-10 predictors were an admission diagnosis of myocardial infarction with ST-segment elevation, an admission diagnosis of acute decompensated heart failure, Braden scale, Glasgow Coma Scale, blood urea nitrogen, systolic blood pressure, serum chloride, serum sodium, and arterial blood pH.
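The minimal sketch below shows how an AUROC with a bootstrap confidence interval, as reported above, can be computed; the risk scores and labels are synthetic, and the bootstrap procedure is an assumption, not necessarily the study's CI method.

```python
# Hedged sketch of AUROC with a bootstrap 95% CI; synthetic scores and labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)   # placeholder risk scores

auc = roc_auc_score(y_true, y_score)
boot = [
    roc_auc_score(y_true[idx], y_score[idx])
    for idx in (rng.integers(0, len(y_true), len(y_true)) for _ in range(1000))
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUROC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```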

CONCLUSIONS: The novel CShock score has the potential to provide automated detection and early warning for cardiogenic shock and improve the outcomes for the millions of patients who suffer from myocardial infarction and heart failure.

PMID:38518758 | DOI:10.1093/ehjacc/zuae037

Categories: Literature Watch

Multi modality fusion transformer with spatio-temporal feature aggregation module for psychiatric disorder diagnosis

Fri, 2024-03-22 06:00

Comput Med Imaging Graph. 2024 Mar 19;114:102368. doi: 10.1016/j.compmedimag.2024.102368. Online ahead of print.

ABSTRACT

Bipolar disorder (BD) is characterized by recurrent episodes of depression and mild mania. In this paper, to address the common issue of insufficient accuracy in existing methods and to meet the requirements of clinical diagnosis, we propose a framework called the Spatio-temporal Feature Fusion Transformer (STF2Former). It improves on our previous work, MFFormer, by introducing a Spatio-temporal Feature Aggregation Module (STFAM) to learn the temporal and spatial features of rs-fMRI data, and it promotes intra-modality attention and information fusion across different modalities. Specifically, the method decouples the temporal and spatial dimensions and designs two feature extraction modules to extract temporal and spatial information separately. Extensive experiments demonstrate the effectiveness of the proposed STFAM in extracting features from rs-fMRI and show that STF2Former significantly outperforms MFFormer and other state-of-the-art methods.
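The sketch below illustrates the general idea of decoupling the temporal and spatial dimensions of an ROI-by-time rs-fMRI matrix with two separate attention-based feature extractors; the module layout, dimensions, and pooling are assumptions for illustration and are not the paper's STFAM architecture.

```python
# Hedged sketch of decoupled temporal/spatial feature extraction for rs-fMRI.
import torch
import torch.nn as nn

class SpatioTemporalDecoupled(nn.Module):
    def __init__(self, n_rois=116, n_timepoints=200, dim=64):
        super().__init__()
        self.temporal = nn.MultiheadAttention(embed_dim=n_rois, num_heads=4, batch_first=True)
        self.spatial = nn.MultiheadAttention(embed_dim=n_timepoints, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(n_rois + n_timepoints, dim)

    def forward(self, x):                           # x: (batch, time, rois)
        t_feat, _ = self.temporal(x, x, x)          # attention across time steps
        s_in = x.transpose(1, 2)                    # (batch, rois, time)
        s_feat, _ = self.spatial(s_in, s_in, s_in)  # attention across ROIs
        pooled = torch.cat([t_feat.mean(dim=1), s_feat.mean(dim=1)], dim=-1)
        return self.fuse(pooled)                    # fused spatio-temporal embedding

x = torch.randn(2, 200, 116)                        # two subjects' ROI time series
print(SpatioTemporalDecoupled()(x).shape)           # torch.Size([2, 64])
```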

PMID:38518412 | DOI:10.1016/j.compmedimag.2024.102368

Categories: Literature Watch

Multi-scale feature fusion for prediction of IDH1 mutations in glioma histopathological images

Fri, 2024-03-22 06:00

Comput Methods Programs Biomed. 2024 Mar 3;248:108116. doi: 10.1016/j.cmpb.2024.108116. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Mutations in isocitrate dehydrogenase 1 (IDH1) play a crucial role in the prognosis, diagnosis, and treatment of gliomas. However, current methods for determining its mutation status, such as immunohistochemistry and gene sequencing, are difficult to implement widely in routine clinical diagnosis. Recent studies have shown that using deep learning methods based on pathological images of glioma can predict the mutation status of the IDH1 gene. However, our research focuses on utilizing multi-scale information in pathological images to improve the accuracy of predicting IDH1 gene mutations, thereby providing an accurate and cost-effective prediction method for routine clinical diagnosis.

METHODS: In this paper, we propose a multi-scale fusion gene identification network (MultiGeneNet). The network first uses two feature extractors to obtain feature maps from images at different scales, and then employs a bilinear pooling layer based on the Hadamard product to fuse the multi-scale features. By fully exploiting the complementarity among features at different scales, we obtain a more comprehensive and richer multi-scale representation.
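A minimal sketch of Hadamard-product bilinear pooling for fusing features from two scales, as described above; the feature dimensions, normalization steps, and classifier head are placeholders rather than the exact MultiGeneNet configuration.

```python
# Hedged sketch of Hadamard-product bilinear fusion of two-scale features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HadamardBilinearFusion(nn.Module):
    def __init__(self, dim_a, dim_b, dim_joint=512, n_classes=2):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_joint)   # project scale-A features
        self.proj_b = nn.Linear(dim_b, dim_joint)   # project scale-B features
        self.classifier = nn.Linear(dim_joint, n_classes)

    def forward(self, feat_a, feat_b):
        fused = self.proj_a(feat_a) * self.proj_b(feat_b)   # element-wise (Hadamard) product
        fused = torch.sign(fused) * torch.sqrt(torch.abs(fused) + 1e-8)  # signed square-root
        fused = F.normalize(fused, dim=-1)
        return self.classifier(fused)                        # IDH1 mutant vs wild-type logits

feat_low = torch.randn(4, 2048)    # e.g., features from low-magnification patches
feat_high = torch.randn(4, 1024)   # e.g., features from high-magnification patches
print(HadamardBilinearFusion(2048, 1024)(feat_low, feat_high).shape)   # torch.Size([4, 2])
```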

RESULTS: Based on the Hematoxylin and Eosin stained pathological section dataset of 296 patients, our method achieved an accuracy of 83.575 % and an AUC of 0.886, thus significantly outperforming other single-scale methods.

CONCLUSIONS: Our method can be deployed in medical aid systems at very low cost, serving as a diagnostic or prognostic tool for glioma patients in medically underserved areas.

PMID:38518408 | DOI:10.1016/j.cmpb.2024.108116

Categories: Literature Watch

Application of deep learning in radiation therapy for cancer

Fri, 2024-03-22 06:00

Cancer Radiother. 2024 Mar 21:S1278-3218(24)00026-X. doi: 10.1016/j.canrad.2023.07.015. Online ahead of print.

ABSTRACT

In recent years, with the development of artificial intelligence, deep learning has gradually been applied to clinical treatment and research. It has also found its way into applications in radiotherapy, a crucial method for cancer treatment. This study summarizes the commonly used and latest deep learning algorithms (including transformer and diffusion models), introduces the workflows of different radiotherapy modalities, and illustrates the application of different algorithms in different radiotherapy modules, as well as the shortcomings and challenges of deep learning in the field of radiotherapy, so as to support the development of automated radiotherapy for cancer.

PMID:38519291 | DOI:10.1016/j.canrad.2023.07.015

Categories: Literature Watch

Study on the saltiness-enhancing mechanism of chicken-derived umami peptides by sensory evaluation and molecular docking to transmembrane channel-like protein 4 (TMC4)

Fri, 2024-03-22 06:00

Food Res Int. 2024 Apr;182:114139. doi: 10.1016/j.foodres.2024.114139. Epub 2024 Feb 18.

ABSTRACT

The chicken-derived umami peptides previously obtained in the laboratory were evaluated for their saltiness-enhancing effect by sensory evaluation and S-curve analysis, and the results revealed that the peptides TPPKID, PKESEKPN, TEDWGR, LPLQDAH, NEFGYSNR, and LPLQD had significant saltiness-enhancing effects. In a binary solution system with salt, the ratio of the experimental detection threshold (129.17 mg/L) to the theoretical detection threshold (274.43 mg/L) of NEFGYSNR was 0.47, indicating a synergistic saltiness-enhancing effect with salt. A model of the transmembrane channel-like protein 4 (TMC4) channel protein, which has a 10-fold transmembrane structure, was constructed by homology modeling and was evaluated favorably. Molecular docking and frontier molecular orbital analysis showed that the main active sites of TMC4 were Lys 471, Met 379, Cys 475, Gln 377, and Pro 380, and the main active sites of NEFGYSNR were Tyr, Ser, and Asn. This study may provide a theoretical reference for low-sodium diets.

PMID:38519171 | DOI:10.1016/j.foodres.2024.114139

Categories: Literature Watch

RNAirport: a deep neural network-based database characterizing representative gene models in plants

Fri, 2024-03-22 06:00

J Genet Genomics. 2024 Mar 20:S1673-8527(24)00057-2. doi: 10.1016/j.jgg.2024.03.004. Online ahead of print.

ABSTRACT

A 5'-leader, known initially as the 5'-untranslated region, contains multiple isoforms due to alternative splicing (aS) and alternative transcription start sites (aTSS). Therefore, a representative 5'-leader is needed to examine the embedded RNA regulatory elements that control translation efficiency. Here, we develop a ranking algorithm and a deep-learning model to annotate representative 5'-leaders for five plant species. We rank the intra- and inter-sample frequency of aS-mediated transcript isoforms using a Kruskal-Wallis test-based algorithm and identify the representative aS-5'-leader. To further assign a representative 5'-end, we train the deep-learning model 5'leaderP to learn aTSS-mediated 5'-end distribution patterns from cap analysis of gene expression (CAGE) data. The model accurately predicts the 5'-end, confirmed experimentally in Arabidopsis and rice. The representative 5'-leader-containing gene models and 5'leaderP can be accessed at RNAirport (http://www.rnairport.com/leader5P/). This stage 1 5'-leader annotation records 5'-leader diversity and will pave the way to Ribo-Seq ORF annotation, analogous to the project recently initiated by human GENCODE.
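The minimal sketch below illustrates ranking isoforms by their per-sample frequencies with a Kruskal-Wallis test, in the spirit of the algorithm described above; the isoform frequencies and the selection rule are placeholders, not the paper's exact procedure.

```python
# Hedged sketch of Kruskal-Wallis-based ranking of 5'-leader isoforms; toy data.
import numpy as np
from scipy.stats import kruskal

# Per-sample usage frequencies of three alternative-splicing isoforms of one gene.
isoform_freqs = {
    "isoform_1": [0.62, 0.58, 0.65, 0.60],
    "isoform_2": [0.25, 0.30, 0.22, 0.28],
    "isoform_3": [0.13, 0.12, 0.13, 0.12],
}

stat, p = kruskal(*isoform_freqs.values())
print(f"Kruskal-Wallis H={stat:.2f}, p={p:.3g}")

# If isoform usage differs significantly, take the most frequent isoform
# as the representative aS-5'-leader for this gene.
if p < 0.05:
    representative = max(isoform_freqs, key=lambda k: np.mean(isoform_freqs[k]))
    print("representative 5'-leader:", representative)
```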

PMID:38518981 | DOI:10.1016/j.jgg.2024.03.004

Categories: Literature Watch

Integration of lanthanide MOFs/methylcellulose-based fluorescent sensor arrays and deep learning for fish freshness monitoring

Fri, 2024-03-22 06:00

Int J Biol Macromol. 2024 Mar 20:131011. doi: 10.1016/j.ijbiomac.2024.131011. Online ahead of print.

ABSTRACT

Preserving fish meat poses a significant challenge due to its high protein and low fat content. This study introduces a novel approach that utilizes a common type of lanthanide metal-organic frameworks (Ln-MOFs), EuMOFs, in combination with 5-fluorescein isothiocyanate (FITC) and methylcellulose (MC) to develop fluorescent sensor arrays for real-time monitoring of the freshness of fish meat. The EuMOF-FITC/MC fluorescence films were characterized by an excellent fluorescence response, ideal morphology, good mechanical properties, and improved hydrophobicity. The efficacy of the fluorescence sensor array was evaluated by testing various concentrations of spoilage gases (such as ammonia, dimethylamine, and trimethylamine) within a 20-min timeframe using a smartphone-based camera obscura device. This sensor array enables real-time monitoring of fish freshness, with the ability to preliminarily identify the freshness status of mackerel meat with the naked eye. Furthermore, the study employed four convolutional neural network (CNN) models to enhance the performance of freshness assessment, all of which achieved accuracy levels exceeding 93 %. Notably, the ResNext-101 model demonstrated a particularly high accuracy of 98.97 %. These results highlight the potential of the EuMOF-based fluorescence sensor array, in conjunction with a CNN model, as a reliable and accurate method for real-time monitoring of the freshness of fish meat.
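The sketch below shows one way to fine-tune a ResNeXt-101 image classifier on sensor-array photographs for freshness grades, as a hedged illustration of the CNN step described above; the class labels, hyperparameters, and random batch are placeholders, and pretrained ImageNet weights would normally be loaded.

```python
# Hedged sketch of fine-tuning ResNeXt-101 for freshness classification; toy batch.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 3                                            # e.g., fresh / less fresh / spoiled
model = models.resnext101_32x8d(weights=None)            # use pretrained weights in practice
model.fc = nn.Linear(model.fc.in_features, n_classes)    # replace the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One placeholder training step on a random batch of 224x224 RGB images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, n_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```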

PMID:38518947 | DOI:10.1016/j.ijbiomac.2024.131011

Categories: Literature Watch

Deep Learning-Based construction of a Drug-Like compound database and its application in virtual screening of HsDHODH inhibitors

Fri, 2024-03-22 06:00

Methods. 2024 Mar 20:S1046-2023(24)00080-X. doi: 10.1016/j.ymeth.2024.03.008. Online ahead of print.

ABSTRACT

The process of virtual screening relies heavily on compound databases, but conducting virtual screening against commercial databases is hampered by patent-protected compounds and compounds with high toxicity and side effects. Therefore, this paper utilizes generative recurrent neural networks (RNNs) containing long short-term memory (LSTM) cells to learn the properties of drug compounds in DrugBank, aiming to obtain a new virtual screening compound database with drug-like properties. Ultimately, a database consisting of 26,316 compounds is obtained by this method. To evaluate the potential of this compound database, a series of tests are performed, including chemical space, ADME properties, compound fragmentation, and synthesizability analyses. As a result, the database is shown to possess good drug-like properties and relatively new backbones, and its potential in virtual screening is further tested. Finally, a series of seedling (hit) compounds with completely new backbones are obtained through docking and binding free energy calculations.
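The minimal sketch below shows a character-level LSTM generator over SMILES strings, in the spirit of the generative RNN described above; the tiny vocabulary, training molecules, and sampling loop are placeholders for illustration, not the paper's DrugBank-trained model.

```python
# Hedged sketch of a character-level LSTM SMILES generator; toy training set.
import torch
import torch.nn as nn

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]                       # placeholder molecules
chars = sorted({c for s in smiles for c in s} | {"^", "$"})   # ^ = start, $ = end token
stoi = {c: i for i, c in enumerate(chars)}

class SmilesLSTM(nn.Module):
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = SmilesLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                                          # next-character training
    for s in smiles:
        seq = torch.tensor([[stoi[c] for c in "^" + s + "$"]])
        logits, _ = model(seq[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(chars)), seq[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

# Sample a new string character by character.
idx, out, state = torch.tensor([[stoi["^"]]]), [], None
for _ in range(40):
    logits, state = model(idx, state)
    idx = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1).view(1, 1)
    ch = chars[idx.item()]
    if ch == "$":
        break
    out.append(ch)
print("sampled SMILES:", "".join(out))
```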

PMID:38518843 | DOI:10.1016/j.ymeth.2024.03.008

Categories: Literature Watch

DMAF-Net: Deformable multi-scale adaptive fusion network for dental structure detection with panoramic radiographs

Fri, 2024-03-22 06:00

Dentomaxillofac Radiol. 2024 Mar 22:twae014. doi: 10.1093/dmfr/twae014. Online ahead of print.

ABSTRACT

OBJECTIVES: Panoramic radiography is one of the most commonly used diagnostic modalities in dentistry. Automatic recognition of panoramic radiographs helps dentists with decision support. In order to improve the accuracy of detecting dental structural problems in panoramic radiographs, we improved the YOLO network and verified the feasibility of this new method in aiding the detection of dental problems.

METHODS: We propose a Deformable Multi-scale Adaptive Fusion Net (DMAF-Net) to detect five types of dental situations (impacted teeth, missing teeth, implants, crown restorations and root canal-treated teeth) in panoramic radiography by improving the You Only Look Once (YOLO) network. In DMAF-Net, we propose different modules to enhance the feature extraction capability of the network as well as to acquire high-level features at different scales, while using adaptive spatial feature fusion to solve the problem of scale mismatches of different feature layers, which effectively improves the detection performance. In order to evaluate the detection performance of the models, we compare the experimental results of different models in the test set, and select the optimal results of the models by calculating the average of different metrics in each category as the evaluation criteria.

RESULTS: 1474 panoramic radiographs were divided into training, validation, and test sets in the ratio of 7:2:1. In the test set, the average precision and recall of DMAF-Net are 92.7% and 87.6%, respectively; the mean average precision values (mAP@0.5 and mAP@[0.5:0.95]) are 91.8% and 63.7%, respectively.
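For reference, the minimal sketch below computes the intersection-over-union that underlies the mAP metrics reported above; the bounding boxes are placeholders in [x1, y1, x2, y2] format.

```python
# Hedged sketch of IoU between a predicted and a ground-truth box; toy coordinates.
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# mAP@0.5 counts a detection as a true positive when IoU >= 0.5.
print(iou([100, 120, 180, 200], [110, 125, 185, 210]))
```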

CONCLUSIONS: The proposed DMAF-Net model improves existing deep learning models and achieves automatic detection of tooth structure problems in panoramic radiographs. This new method has great potential for new computer-aided diagnostic, teaching and clinical applications in the future.

PMID:38518093 | DOI:10.1093/dmfr/twae014

Categories: Literature Watch

Stochastic neuro-fuzzy system implemented in memristor crossbar arrays

Fri, 2024-03-22 06:00

Sci Adv. 2024 Mar 22;10(12):eadl3135. doi: 10.1126/sciadv.adl3135. Epub 2024 Mar 22.

ABSTRACT

Neuro-symbolic artificial intelligence has garnered considerable attention amid increasing industry demands for high-performance neural networks that are interpretable and adaptable to previously unknown problem domains with minimal reconfiguration. However, implementing neuro-symbolic hardware is challenging due to the complexity of symbolic knowledge representation and calculation. We experimentally demonstrated memristor-based neuro-fuzzy hardware built on TiN/TaOx/HfOx/TiN chips that is superior to its silicon-based counterpart in terms of throughput and energy efficiency, using array topological structure for knowledge representation and physical laws for computing. Intrinsic memristor variability is fully exploited to increase robustness in knowledge representation. A hybrid in situ training strategy is proposed for error minimization during training. The hardware adapts more easily to a previously unknown environment, achieving ~6.6 times faster convergence and ~6 times lower error than deep learning. The hardware energy efficiency is over two orders of magnitude greater than that of field-programmable gate arrays. This research greatly extends the capability of memristor-based neuromorphic computing systems in artificial intelligence.

PMID:38517972 | DOI:10.1126/sciadv.adl3135

Categories: Literature Watch

The effect of the re-segmentation method on improving the performance of rectal cancer image segmentation models

Fri, 2024-03-22 06:00

Technol Health Care. 2023 Nov 13. doi: 10.3233/THC-230690. Online ahead of print.

ABSTRACT

BACKGROUND: Rapid and accurate segmentation of tumor regions from rectal cancer images can help clinicians better understand the patient's lesions and surrounding tissues, providing more effective auxiliary diagnostic information. However, deep learning segmentation of rectal tumors still cannot match manual segmentation, and a major obstacle is the lack of high-quality datasets.

OBJECTIVE: We propose a Re-segmentation Method in which the regions segmented by the model are manually corrected and fed back into training, and we describe the associated training procedure. The dataset has been made publicly available.

METHODS: A total of 354 rectal cancer CT images and 308 rectal region images labeled by experts from Jiangxi Cancer Hospital were included in the dataset. Six network architectures were used to train on the dataset; the regions predicted by each model were manually revised and then put back into training to improve the model's segmentation ability, after which performance was measured.
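A minimal sketch of the Dice coefficient commonly used to compare segmentation masks before and after such a correction step; the masks below are placeholder arrays, not the study's data.

```python
# Hedged sketch of the Dice coefficient between two binary masks; toy masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred_mask = np.zeros((512, 512), dtype=np.uint8)
pred_mask[100:200, 100:220] = 1     # placeholder model prediction
gt_mask = np.zeros((512, 512), dtype=np.uint8)
gt_mask[110:205, 105:215] = 1       # placeholder expert label
print(f"Dice = {dice(pred_mask, gt_mask):.3f}")
```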

RESULTS: In this study, we applied the Re-segmentation Method to various popular network architectures.

CONCLUSION: By comparing the evaluation indicators before and after using the Re-segmentation Method, we prove that our proposed Re-segmentation Method can further improve the performance of the rectal cancer image segmentation model.

PMID:38517809 | DOI:10.3233/THC-230690

Categories: Literature Watch

Deep learning approaches for breast cancer detection in histopathology images: A review

Fri, 2024-03-22 06:00

Cancer Biomark. 2024 Mar 7. doi: 10.3233/CBM-230251. Online ahead of print.

ABSTRACT

BACKGROUND: Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images.

OBJECTIVE: To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques.

METHODS: This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of these architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models.

RESULTS: Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although the accuracy levels vary depending on the specific data set, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms in improving the accuracy and efficiency of breast cancer detection from histopathology images.

CONCLUSION: This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can act as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.

PMID:38517775 | DOI:10.3233/CBM-230251

Categories: Literature Watch

Multiview Deep Subspace Clustering Networks

Fri, 2024-03-22 06:00

IEEE Trans Cybern. 2024 Mar 22;PP. doi: 10.1109/TCYB.2024.3372309. Online ahead of print.

ABSTRACT

Multiview subspace clustering aims to discover the inherent structure of data by fusing multiple views of complementary information. Most existing methods first extract multiple types of handcrafted features and then learn a joint affinity matrix for clustering. The disadvantage of this approach lies in two aspects: 1) multiview relations are not embedded into feature learning, and 2) the end-to-end learning manner of deep learning is not suitable for multiview clustering. Even when deep features have been extracted, it is a nontrivial problem to choose a proper backbone for clustering on different datasets. To address these issues, we propose the multiview deep subspace clustering network (MvDSCN), which learns a multiview self-representation matrix in an end-to-end manner. The MvDSCN consists of two subnetworks, i.e., a diversity network (Dnet) and a universality network (Unet). A latent space is built using deep convolutional autoencoders, and a self-representation matrix is learned in the latent space using a fully connected layer. Dnet learns view-specific self-representation matrices, whereas Unet learns a common self-representation matrix for all views. To exploit the complementarity of multiview representations, the Hilbert-Schmidt independence criterion (HSIC) is introduced as a diversity regularizer that captures the nonlinear, high-order inter-view relations. Because different views share the same label space, the self-representation matrices of each view are aligned to the common one by universality regularization. The MvDSCN also unifies multiple backbones to boost clustering performance and avoid the need for model selection. Experiments demonstrate the superiority of the MvDSCN.
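A minimal sketch of the Hilbert-Schmidt independence criterion used above as a diversity regularizer between view-specific representations; the RBF kernels, bandwidth, and random features are placeholder choices for illustration.

```python
# Hedged sketch of a biased HSIC estimate between two views; toy features.
import numpy as np

def rbf_kernel(X, sigma=1.0):
    sq = np.sum(X ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
view1 = rng.normal(size=(50, 16))                  # self-representation features, view 1
view2 = rng.normal(size=(50, 16))                  # self-representation features, view 2
print(f"HSIC(view1, view2) = {hsic(view1, view2):.4f}")
```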

PMID:38517724 | DOI:10.1109/TCYB.2024.3372309

Categories: Literature Watch
