Deep learning

Microscale 3-D Capacitance Tomography with a CMOS Sensor Array

Thu, 2024-02-22 06:00

IEEE Biomed Circuits Syst Conf. 2023 Oct;2023. doi: 10.1109/biocas58349.2023.10388576. Epub 2024 Jan 18.

ABSTRACT

Electrical capacitance tomography (ECT) is a non-optical imaging technique in which a map of the interior permittivity of a volume is estimated by making capacitance measurements at its boundary and solving an inverse problem. While previous ECT demonstrations have often been at centimeter scales, ECT is not limited to macroscopic systems. In this paper, we demonstrate ECT imaging of polymer microspheres and bacterial biofilms using a CMOS microelectrode array, achieving a spatial resolution of 10 microns. Additionally, we propose a deep learning architecture and an improved multi-objective training scheme for reconstructing out-of-plane permittivity maps from the sensor measurements. Experimental results show that the proposed approach is able to resolve microscopic 3-D structures, achieving 91.5% prediction accuracy on the microsphere dataset and 82.7% on the biofilm dataset, an average improvement of 4.6% over baseline computational methods.
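
The abstract does not detail the network or the loss terms. Purely as a hypothetical illustration of what a multi-objective training objective for capacitance-to-permittivity reconstruction can look like, here is a minimal PyTorch sketch with made-up layer sizes and an edge-preserving term as the second objective:

```python
# Hypothetical sketch of a multi-objective reconstruction loss for ECT,
# assuming a network that maps boundary capacitance vectors to 3-D
# permittivity volumes; architecture and loss weights are illustrative,
# not the authors' design.
import torch
import torch.nn as nn

class ECTReconstructor(nn.Module):
    def __init__(self, n_measurements: int, volume_shape=(16, 32, 32)):
        super().__init__()
        self.volume_shape = volume_shape
        out_dim = volume_shape[0] * volume_shape[1] * volume_shape[2]
        self.net = nn.Sequential(
            nn.Linear(n_measurements, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, out_dim), nn.Sigmoid(),  # permittivity map in [0, 1]
        )

    def forward(self, capacitance):
        return self.net(capacitance).view(-1, *self.volume_shape)

def multi_objective_loss(pred, target, alpha=0.5):
    """Weighted sum of a voxel-wise MSE term and a gradient (edge) term."""
    mse = nn.functional.mse_loss(pred, target)
    # Penalize differences in spatial gradients to sharpen object boundaries.
    grad_pred = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    grad_true = target[:, :, 1:, :] - target[:, :, :-1, :]
    edge = nn.functional.l1_loss(grad_pred, grad_true)
    return alpha * mse + (1 - alpha) * edge
```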

PMID:38384749 | PMC:PMC10880799 | DOI:10.1109/biocas58349.2023.10388576

Categories: Literature Watch

Application of flash GC e-nose and FT-NIR combined with deep learning algorithm in preventing age fraud and quality evaluation of pericarpium citri reticulatae

Thu, 2024-02-22 06:00

Food Chem X. 2024 Feb 12;21:101220. doi: 10.1016/j.fochx.2024.101220. eCollection 2024 Mar 30.

ABSTRACT

Pericarpium citri reticulatae (PCR) is the dried mature fruit peel of Citrus reticulata Blanco and its cultivated varieties in the Rutaceae family. It can be used as both food and medicine, and has the effects of relieving cough and phlegm and promoting digestion. The smell and medicinal properties of PCR mature with aging, and only varieties with aging value can be called "Chenpi"; the storage year of PCR therefore has a great influence on its quality. Because the color and smell of PCR of different storage years are similar, unscrupulous merchants often pass off low-year PCR as high-year PCR for large profits. This study therefore aimed to establish a rapid, nondestructive method to detect counterfeiting of the PCR storage year and protect the legitimate rights and interests of consumers. A classification model for PCR was established by e-eye, flash GC e-nose, and Fourier transform near-infrared (FT-NIR) spectroscopy combined with machine learning algorithms, which can quickly and accurately distinguish PCRs of different storage years. DFA and PLS-DA models were built from flash GC e-nose data to distinguish PCRs of different ages, and 8 odor components were identified, among which (+)-limonene and γ-terpinene were the key components for distinguishing ages. In addition, classification and calibration models for PCR were established by combining FT-NIR with machine learning algorithms. The classification models included SVM, KNN, LSTM, and CNN-LSTM, while the calibration models included PLSR, LSTM, and CNN-LSTM. Among them, the CNN-LSTM model built by internal capsule had significantly better classification and calibration performance than the other models. The accuracy of the classification model was 98.21 %. The R2P of age, (+)-limonene and γ-terpinene was 0.9912, 0.9875 and 0.9891, respectively. These results show that flash GC e-nose and FT-NIR combined with a deep learning algorithm can quickly and accurately distinguish PCRs of different ages, providing an effective and reliable method to monitor the quality of PCR on the market.
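
As a rough sketch of the CNN-LSTM idea applied to 1-D spectra, the following PyTorch snippet stacks convolutional feature extraction and an LSTM over the resulting sequence; the layer sizes, input length, and number of classes are assumptions, not the authors' configuration:

```python
# Minimal CNN-LSTM sketch for 1-D FT-NIR spectra, assuming inputs of shape
# (batch, 1, n_wavelengths); all sizes are illustrative placeholders.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        feats = self.cnn(x)                 # (batch, 32, length / 4)
        feats = feats.permute(0, 2, 1)      # LSTM expects (batch, seq, features)
        _, (h_n, _) = self.lstm(feats)
        return self.fc(h_n[-1])             # class logits (e.g., storage year)

logits = CNNLSTM(n_classes=5)(torch.randn(8, 1, 1200))  # toy forward pass
```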

PMID:38384686 | PMC:PMC10879671 | DOI:10.1016/j.fochx.2024.101220

Categories: Literature Watch

An intelligent neural network model to detect red blood cells for various blood structure classification in microscopic medical images

Thu, 2024-02-22 06:00

Heliyon. 2024 Feb 13;10(4):e26149. doi: 10.1016/j.heliyon.2024.e26149. eCollection 2024 Feb 29.

ABSTRACT

Biomedical image analysis plays a crucial role in enabling high-performing imaging and various clinical applications. For the proper diagnosis of blood diseases related to red blood cells, red blood cells must be accurately identified and categorized. Manual analysis is time-consuming and prone to mistakes. Analyzing multi-label samples, which contain clusters of cells, is challenging due to difficulties in separating individual cells, such as touching or overlapping cells. High-performance biomedical imaging and several medical applications are made possible by advanced biosensors. We develop an intelligent neural network model that can automatically identify and categorize red blood cells from microscopic medical images using region-based convolutional neural networks (RCNN) and cutting-edge biosensors. Our model successfully navigates obstacles like touching or overlapping cells and accurately recognizes various blood structures. Additionally, we utilized data augmentation as a pre-processing method on microscopic images to enhance the model's computational efficiency and expand the sample size. To refine the data and eliminate noise from the dataset, we utilized the Radial Gradient Index filtering algorithm for imaging data equalization. Applied to medical imaging datasets, our proposed model exhibits improved detection accuracy and a reduced loss rate compared with existing ResNet and GoogleNet models. Our model precisely detected red blood cells in a collection of medical images with 99% training accuracy and 91.21% testing accuracy, outperforming earlier models such as ResNet-50 and GoogleNet by 10-15%. Our results demonstrate that artificial intelligence (AI)-assisted automated red blood cell detection has the potential to revolutionize and speed up blood cell analysis, minimizing human error and enabling early illness diagnosis.
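
For readers unfamiliar with region-based detectors, a stand-in setup using torchvision's Faster R-CNN is sketched below; the authors' exact RCNN variant, backbone, and training details are not given in the abstract, so this is only an illustrative starting point:

```python
# Illustrative region-based detector for cell detection using torchvision's
# Faster R-CNN as a stand-in; not the authors' implementation.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_rbc_detector(num_classes: int = 2):  # background + red blood cell
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_rbc_detector()
model.eval()  # inference: model([image_tensor]) returns boxes, labels, and scores
```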

PMID:38384569 | PMC:PMC10879026 | DOI:10.1016/j.heliyon.2024.e26149

Categories: Literature Watch

Advanced analysis of disintegrating pharmaceutical compacts using deep learning-based segmentation of time-resolved micro-tomography images

Thu, 2024-02-22 06:00

Heliyon. 2024 Feb 12;10(4):e26025. doi: 10.1016/j.heliyon.2024.e26025. eCollection 2024 Feb 29.

ABSTRACT

The mechanism governing pharmaceutical tablet disintegration is far from fully understood. Despite the importance of controlling a formulation's disintegration process to maximize the active pharmaceutical ingredient's bioavailability and ensure predictable and consistent release profiles, the current understanding of the process is based on indirect or superficial measurements. Formulation science would therefore benefit from a deeper understanding of the fundamental physical principles governing disintegration based on direct observations of the process. We aim to help bridge this gap by generating a series of time-resolved X-ray micro-computed tomography (μCT) scans capturing volumetric images of a broad range of mini-tablet formulations undergoing disintegration. Automated image segmentation was a prerequisite to overcoming the challenges of analyzing multiple time series of heterogeneous tomographic images at high magnification. We devised and trained a convolutional neural network (CNN) based on the U-Net architecture for autonomous, rapid, and consistent image segmentation. We created our own μCT data reconstruction pipeline and parameterized it to deliver image quality optimal for our CNN-based segmentation. Our approach enabled us to visualize the internal microstructures of the tablets during disintegration and to extract parameters of disintegration kinetics from the time-resolved data. We determined by factor analysis the influence of the different formulation components on the disintegration process in terms of both qualitative and quantitative experimental responses. We relate our findings to known formulation component properties and established experimental results. Our direct imaging approach, enabled by deep learning-based image processing, delivers new insights into the disintegration mechanism of pharmaceutical tablets.

PMID:38384517 | PMC:PMC10878950 | DOI:10.1016/j.heliyon.2024.e26025

Categories: Literature Watch

Differential diagnosis of frontotemporal dementia subtypes with explainable deep learning on structural MRI

Thu, 2024-02-22 06:00

Front Neurosci. 2024 Feb 7;18:1331677. doi: 10.3389/fnins.2024.1331677. eCollection 2024.

ABSTRACT

BACKGROUND: Frontotemporal dementia (FTD) represents a collection of neurobehavioral and neurocognitive syndromes that are associated with a significant degree of clinical, pathological, and genetic heterogeneity. Such heterogeneity hinders the identification of effective biomarkers, preventing effective targeted recruitment of participants in clinical trials for developing potential interventions and treatments. In the present study, we aim to automatically differentiate patients with three clinical phenotypes of FTD, behavioral-variant FTD (bvFTD), semantic variant primary progressive aphasia (svPPA), and nonfluent variant PPA (nfvPPA), based on their structural MRI by training a deep neural network (DNN).

METHODS: Data from 277 FTD patients (173 bvFTD, 63 nfvPPA, and 41 svPPA) were obtained from two multi-site neuroimaging datasets: the Frontotemporal Lobar Degeneration Neuroimaging Initiative and the ARTFL-LEFFTDS Longitudinal Frontotemporal Lobar Degeneration databases. Raw T1-weighted MRI data were preprocessed and parcellated into patch-based ROIs, with cortical thickness and volume features extracted and harmonized to control the confounding effects of sex, age, total intracranial volume, cohort, and scanner difference. A multi-type parallel feature embedding framework was trained to classify three FTD subtypes, with a weighted cross-entropy loss function used to account for unbalanced sample sizes. Feature visualization was achieved through post-hoc analysis using an integrated gradient approach.
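
The class-weighting idea for the unbalanced subtypes can be made concrete with a short sketch; the exact weighting scheme used by the authors is not stated in the abstract, so inverse-frequency weights are only one plausible choice:

```python
# Sketch of weighted cross-entropy for unbalanced FTD subtypes
# (173 bvFTD, 63 nfvPPA, 41 svPPA), using inverse-frequency weights.
import torch
import torch.nn as nn

counts = torch.tensor([173.0, 63.0, 41.0])        # bvFTD, nfvPPA, svPPA
weights = counts.sum() / (len(counts) * counts)    # rarer classes get larger weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 3)                        # toy batch of subject-level logits
labels = torch.randint(0, 3, (16,))
loss = criterion(logits, labels)
```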

RESULTS: The proposed differential diagnosis framework achieved a mean balanced accuracy of 0.80 for bvFTD, 0.82 for nfvPPA, 0.89 for svPPA, and an overall balanced accuracy of 0.84. Feature importance maps showed more localized differential patterns among different FTD subtypes compared to groupwise statistical mapping.

CONCLUSION: In this study, we demonstrated the efficiency and effectiveness of using explainable deep-learning-based parallel feature embedding and visualization framework on MRI-derived multi-type structural patterns to differentiate three clinically defined subphenotypes of FTD: bvFTD, nfvPPA, and svPPA, which could help with the identification of at-risk populations for early and precise diagnosis for intervention planning.

PMID:38384484 | PMC:PMC10879283 | DOI:10.3389/fnins.2024.1331677

Categories: Literature Watch

MHGTMDA: Molecular heterogeneous graph transformer based on biological entity graph for miRNA-disease associations prediction

Thu, 2024-02-22 06:00

Mol Ther Nucleic Acids. 2024 Feb 5;35(1):102139. doi: 10.1016/j.omtn.2024.102139. eCollection 2024 Mar 12.

ABSTRACT

MicroRNAs (miRNAs) play a crucial role in the prevention, prognosis, diagnosis, and treatment of complex diseases. Existing computational methods primarily focus on biologically relevant molecules directly associated with miRNA or disease, overlooking the fact that the human body is a highly complex system where miRNA or disease may indirectly correlate with various types of biomolecules. To address this, we propose a novel prediction model named MHGTMDA (miRNA and disease association prediction using a heterogeneous graph transformer based on a molecular heterogeneous graph). MHGTMDA integrates the biological entity relationships of eight biomolecules, constructing a relatively comprehensive heterogeneous biological entity graph. MHGTMDA serves as a powerful molecular heterogeneous graph transformer, capturing structural elements and properties of miRNAs and diseases and revealing potential associations. In a 5-fold cross-validation study, MHGTMDA achieved an area under the receiver operating characteristic curve of 0.9569, surpassing state-of-the-art methods by at least 3%. Feature ablation experiments suggest that considering features among multiple biomolecules is more effective in uncovering miRNA-disease correlations. Furthermore, we conducted differential expression analyses on breast cancer and lung cancer, using MHGTMDA to further validate differentially expressed miRNAs. The results demonstrate MHGTMDA's capability to identify novel miRNA-disease associations (MDAs).
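
To illustrate the general setup of a heterogeneous biological-entity graph with a heterogeneous graph transformer layer, here is a hedged sketch using PyTorch Geometric; the node and edge types, feature sizes, and HGTConv layer are stand-ins and do not reproduce MHGTMDA's eight-entity graph or architecture:

```python
# Toy heterogeneous graph with miRNA and disease nodes, processed by a
# heterogeneous graph transformer layer (PyTorch Geometric HGTConv) as a
# stand-in for the paper's model.
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HGTConv

data = HeteroData()
data['mirna'].x = torch.randn(100, 64)             # toy miRNA features
data['disease'].x = torch.randn(50, 64)            # toy disease features
mirna_idx = torch.randint(0, 100, (300,))
disease_idx = torch.randint(0, 50, (300,))
data['mirna', 'associates', 'disease'].edge_index = torch.stack([mirna_idx, disease_idx])
data['disease', 'rev_associates', 'mirna'].edge_index = torch.stack([disease_idx, mirna_idx])

conv = HGTConv(-1, 64, data.metadata(), 2)           # lazy input size, 64 out, 2 heads
embeddings = conv(data.x_dict, data.edge_index_dict)  # dict of per-type node embeddings
```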

PMID:38384447 | PMC:PMC10879798 | DOI:10.1016/j.omtn.2024.102139

Categories: Literature Watch

Enhancing digital health services: A machine learning approach to personalized exercise goal setting

Thu, 2024-02-22 06:00

Digit Health. 2024 Feb 20;10:20552076241233247. doi: 10.1177/20552076241233247. eCollection 2024 Jan-Dec.

ABSTRACT

BACKGROUND: The utilization of digital health has increased recently, and these services provide extensive guidance to encourage users to exercise frequently by setting daily exercise goals to promote a healthy lifestyle. These comprehensive guides evolved from the consideration of various personalized behavioral factors. Nevertheless, existing approaches frequently neglect users' dynamic behavior and the changes in their health conditions.

OBJECTIVE: This study aims to fill this gap by developing a machine learning algorithm that dynamically updates auto-suggested exercise goals using retrospective data and realistic behavior trajectories.

METHODS: We conducted a methodological study by designing a deep reinforcement learning algorithm to evaluate exercise performance, considering fitness-fatigue effects. The deep reinforcement learning algorithm combines deep learning techniques to analyze time series data and infer users' exercise behavior. In addition, we use the asynchronous advantage actor-critic algorithm for reinforcement learning to determine the optimal exercise intensity through exploration and exploitation. The personalized exercise data and biometric data used in this study were collected from publicly available datasets, encompassing walking, sports logs, and running.
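
A minimal (synchronous) advantage actor-critic update is sketched below to show the policy/value structure named in the methods; the asynchrony of A3C, the fitness-fatigue state model, and all dimensions are omitted or assumed:

```python
# Minimal advantage actor-critic sketch; state features, action set, and the
# one-step reward are toy placeholders, not the study's formulation.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, state_dim=8, n_intensities=5):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.policy = nn.Linear(64, n_intensities)   # exercise-intensity logits
        self.value = nn.Linear(64, 1)                 # state-value estimate

    def forward(self, state):
        h = self.shared(state)
        return self.policy(h), self.value(h)

model = ActorCritic()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

state = torch.randn(1, 8)                  # toy user state (history, fatigue, ...)
logits, value = model(state)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()                     # exploration: sampled exercise intensity
reward, next_value = 1.0, 0.0              # toy one-step return
advantage = reward + 0.99 * next_value - value.squeeze()
loss = (-dist.log_prob(action) * advantage.detach() + advantage.pow(2)).mean()
optimizer.zero_grad(); loss.backward(); optimizer.step()
```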

RESULTS: We conducted statistical analyses and inferential tests to compare the effectiveness of the machine learning approach across different exercise goal-setting strategies. The 95% confidence intervals demonstrated the robustness of these findings, emphasizing the superior outcomes of the machine learning approach.

CONCLUSIONS: Our study demonstrates the adaptability of the machine learning algorithm to users' exercise preferences and behaviors in exercise goal setting, emphasizing the substantial influence of goal design on service effectiveness.

PMID:38384365 | PMC:PMC10880527 | DOI:10.1177/20552076241233247

Categories: Literature Watch

Developing a Radiomics Atlas Dataset of normal Abdominal and Pelvic computed Tomography (RADAPT)

Wed, 2024-02-21 06:00

J Imaging Inform Med. 2024 Feb 21. doi: 10.1007/s10278-024-01028-7. Online ahead of print.

ABSTRACT

Atlases of normal genomics, transcriptomics, proteomics, and metabolomics have been published in an attempt to understand the biological phenotype in health and disease and to set the basis of comprehensive comparative omics studies. No such atlas exists for radiomics data. The purpose of this study was to systematically create a dataset of normal abdominal and pelvic radiomics that can be used for model development and validation. Young adults without any previously known disease, aged > 17 and ≤ 36 years, were retrospectively included. All patients had undergone CT scanning for emergency indications. Where abnormal findings were identified, the relevant anatomical structures were excluded. Deep learning was used to automatically segment the majority of visible anatomical structures with the TotalSegmentator model as applied in 3DSlicer. Radiomics features including first-order, texture, wavelet, and Laplacian of Gaussian transformed features were extracted with PyRadiomics. A GitHub repository was created to host the resulting dataset. Radiomics data were extracted from a total of 531 patients with a mean age of 26.8 ± 5.19 years, including 250 female and 281 male patients. A maximum of 53 anatomical structures were segmented and used for subsequent radiomics data extraction. Radiomics features were derived from a total of 526 non-contrast and 400 contrast-enhanced (portal venous) series. The dataset is publicly available for model development and validation purposes.
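
The extraction step can be pictured with a short PyRadiomics sketch; the file names, filter settings, and sigma values below are placeholders, not the study's actual configuration:

```python
# Hedged sketch of per-structure radiomics extraction with PyRadiomics,
# including wavelet and Laplacian-of-Gaussian filtered features.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableImageTypeByName('Wavelet')                                  # wavelet features
extractor.enableImageTypeByName('LoG', customArgs={'sigma': [1.0, 3.0]})    # LoG features

# 'ct.nii.gz' and 'liver_mask.nii.gz' are hypothetical file names; one call
# is made per segmented anatomical structure.
features = extractor.execute('ct.nii.gz', 'liver_mask.nii.gz')
first_order = {k: v for k, v in features.items() if 'firstorder' in k}
```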

PMID:38383807 | DOI:10.1007/s10278-024-01028-7

Categories: Literature Watch

Auto-segmentation of Adult-Type Diffuse Gliomas: Comparison of Transfer Learning-Based Convolutional Neural Network Model vs. Radiologists

Wed, 2024-02-21 06:00

J Imaging Inform Med. 2024 Feb 21. doi: 10.1007/s10278-024-01044-7. Online ahead of print.

ABSTRACT

Segmentation of glioma is crucial for quantitative brain tumor assessment, to guide therapeutic research and clinical management, but it is very time-consuming. Fully automated tools for the segmentation of multi-sequence MRI are needed. We developed and pretrained a deep learning (DL) model using publicly available datasets A (n = 210) and B (n = 369) containing FLAIR, T2WI, and contrast-enhanced (CE)-T1WI. This was then fine-tuned with our institutional dataset (n = 197) containing ADC, T2WI, and CE-T1WI, manually annotated by radiologists, and split into training (n = 100) and testing (n = 97) sets. The Dice similarity coefficient (DSC) was used to compare model outputs and manual labels. A third independent radiologist assessed segmentation quality on a semi-quantitative 5-point scale. Differences in DSC between new and recurrent gliomas, and between unifocal and multifocal gliomas, were analyzed using the Mann-Whitney test. Semi-quantitative analyses were compared using the chi-square test. We found that there was good agreement between segmentations from the fine-tuned DL model and ground truth manual segmentations (median DSC: 0.729, std-dev: 0.134). DSC was higher for newly diagnosed (0.807) than recurrent (0.698) (p < 0.001), and higher for unifocal (0.747) than multi-focal (0.613) cases (p = 0.001). Semi-quantitative scores of DL and manual segmentation were not significantly different (mean: 3.567 vs. 3.639; 93.8% vs. 97.9% scoring ≥ 3, p = 0.107). In conclusion, the proposed transfer learning DL performed similarly to human radiologists in glioma segmentation on both structural and ADC sequences. Further improvement in segmenting challenging postoperative and multifocal glioma cases is needed.
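
The two evaluation steps named above (Dice similarity coefficient and the Mann-Whitney comparison between subgroups) can be reproduced generically as follows; the arrays are toy stand-ins, not the study's data:

```python
# Dice similarity coefficient between binary masks, and a Mann-Whitney U test
# comparing DSC values across subgroups (toy values shown).
import numpy as np
from scipy.stats import mannwhitneyu

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

dsc_new = np.array([0.83, 0.79, 0.81])        # toy DSC values, newly diagnosed cases
dsc_recurrent = np.array([0.70, 0.66, 0.72])  # toy DSC values, recurrent cases
stat, p = mannwhitneyu(dsc_new, dsc_recurrent, alternative='two-sided')
```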

PMID:38383806 | DOI:10.1007/s10278-024-01044-7

Categories: Literature Watch

Automatic Tracking of Hyoid Bone Displacement and Rotation Relative to Cervical Vertebrae in Videofluoroscopic Swallow Studies Using Deep Learning

Wed, 2024-02-21 06:00

J Imaging Inform Med. 2024 Feb 21. doi: 10.1007/s10278-024-01039-4. Online ahead of print.

ABSTRACT

The hyoid bone displacement and rotation are critical kinematic events of the swallowing process in the assessment of videofluoroscopic swallow studies (VFSS). However, the quantitative analysis of such events requires frame-by-frame manual annotation, which is labor-intensive and time-consuming. Our work aims to develop a method of automatically tracking hyoid bone displacement and rotation in VFSS. We proposed a full high-resolution network, a deep learning architecture, to detect the anterior and posterior of the hyoid bone to identify its location and rotation. Meanwhile, the anterior-inferior corners of the C2 and C4 vertebrae were detected simultaneously to automatically establish a new coordinate system and eliminate the effect of posture change. The proposed model was developed using 59,468 VFSS frames collected from 1488 swallowing samples, and it achieved an average landmark localization error of 2.38 pixels (around 0.5% of the image with 448 × 448 pixels) and an average angle prediction error of 0.065 radians in predicting C2-C4 and hyoid bone angles. In addition, the displacement of the hyoid bone center was automatically tracked in a frame-by-frame analysis, achieving an average mean absolute error of 2.22 pixels and 2.78 pixels in the x-axis and y-axis, respectively. The results of this study support the effectiveness and accuracy of the proposed method in detecting hyoid bone displacement and rotation. Our study provided an automatic method of analyzing hyoid bone kinematics during VFSS, which could contribute to early diagnosis and effective disease management.
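
The vertebra-referenced coordinate system described above can be sketched with simple 2-D geometry; the point names are placeholders for the detected landmarks, and the exact axis convention used by the authors may differ:

```python
# Illustrative geometry: the C2-C4 line defines a posture-invariant axis,
# and hyoid position/rotation are expressed relative to it.
import numpy as np

def vertebra_frame(c2: np.ndarray, c4: np.ndarray):
    """Unit axes of a coordinate system anchored at C4, with y along C4 -> C2."""
    y_axis = (c2 - c4) / np.linalg.norm(c2 - c4)
    x_axis = np.array([y_axis[1], -y_axis[0]])        # in-plane perpendicular
    return x_axis, y_axis

def hyoid_in_frame(hyoid: np.ndarray, c2: np.ndarray, c4: np.ndarray):
    x_axis, y_axis = vertebra_frame(c2, c4)
    rel = hyoid - c4
    return np.array([rel @ x_axis, rel @ y_axis])      # posture-invariant coordinates

def hyoid_angle(anterior: np.ndarray, posterior: np.ndarray) -> float:
    """Rotation of the hyoid long axis in the image plane, in radians."""
    v = anterior - posterior
    return float(np.arctan2(v[1], v[0]))
```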

PMID:38383805 | DOI:10.1007/s10278-024-01039-4

Categories: Literature Watch

Intelligent ultrafast total-body PET for sedation-free pediatric [18F]FDG imaging

Wed, 2024-02-21 06:00

Eur J Nucl Med Mol Imaging. 2024 Feb 22. doi: 10.1007/s00259-024-06649-2. Online ahead of print.

ABSTRACT

PURPOSE: This study aims to develop deep learning techniques on total-body PET to bolster the feasibility of sedation-free pediatric PET imaging.

METHODS: A deformable 3D U-Net was developed based on 245 adult subjects with standard total-body PET imaging for the quality enhancement of simulated rapid imaging. The developed method was first tested on 16 children receiving total-body [18F]FDG PET scans with a standard 300-s acquisition time under sedation. Sixteen rapid scans (acquisition times of about 3 s, 6 s, 15 s, 30 s, and 75 s) were retrospectively simulated by selecting the reconstruction time window. Finally, the developed methodology was prospectively tested on five children without sedation to demonstrate its routine feasibility.

RESULTS: The approach significantly improved the subjective image quality and lesion conspicuity in abdominal and pelvic regions of the generated 6-s data. In the first test set, the proposed method enhanced the objective image quality metrics of 6-s data, such as PSNR (from 29.13 to 37.09, p < 0.01) and SSIM (from 0.906 to 0.921, p < 0.01). Furthermore, the errors of mean standardized uptake values (SUVmean) for lesions between 300-s data and 6-s data were reduced from 12.9 to 4.1% (p < 0.01), and the errors of max SUV (SUVmax) were reduced from 17.4 to 6.2% (p < 0.01). In the prospective test, radiologists reached a high degree of consistency on the clinical feasibility of the enhanced PET images.
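
The objective metrics reported above (PSNR and SSIM) can be computed generically with scikit-image; the arrays below are toy stand-ins for a reference 300-s slice and an enhanced 6-s slice:

```python
# PSNR and SSIM between a reference image and an enhanced image (toy data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)                      # toy 300-s reference slice
enhanced = reference + 0.01 * np.random.randn(256, 256)   # toy enhanced 6-s slice

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, data_range=1.0)
```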

CONCLUSION: The proposed method can effectively enhance the image quality of total-body PET scans with ultrafast acquisition times, meeting the clinical diagnostic requirements of lesion detectability and quantification in abdominal and pelvic regions. It has strong potential to resolve the dilemma between sedation and long acquisition times that affects the health of pediatric patients.

PMID:38383744 | DOI:10.1007/s00259-024-06649-2

Categories: Literature Watch

Development of a 3D tracking system for multiple marmosets under free-moving conditions

Wed, 2024-02-21 06:00

Commun Biol. 2024 Feb 21;7(1):216. doi: 10.1038/s42003-024-05864-9.

ABSTRACT

Assessment of social interactions and behavioral changes in nonhuman primates is useful for understanding brain function changes during life events and pathogenesis of neurological diseases. The common marmoset (Callithrix jacchus), which lives in a nuclear family like humans, is a useful model, but longitudinal automated behavioral observation of multiple animals has not been achieved. Here, we developed a Full Monitoring and Animal Identification (FulMAI) system for longitudinal detection of three-dimensional (3D) trajectories of each individual in multiple marmosets under free-moving conditions by combining video tracking, Light Detection and Ranging, and deep learning. Using this system, identification of each animal was more than 97% accurate. Location preferences and inter-individual distance could be calculated, and deep learning could detect grooming behavior. The FulMAI system allows us to analyze the natural behavior of individuals in a family over their lifetime and understand how behavior changes due to life events together with other data.

PMID:38383741 | DOI:10.1038/s42003-024-05864-9

Categories: Literature Watch

Deep learning-based, fully automated, pediatric brain segmentation

Wed, 2024-02-21 06:00

Sci Rep. 2024 Feb 22;14(1):4344. doi: 10.1038/s41598-024-54663-z.

ABSTRACT

The purpose of this study was to demonstrate the performance of a fully automated, deep learning-based brain segmentation (DLS) method in healthy controls and in patients under eleven years of age with a neurodevelopmental disorder caused by an SCN1A mutation. The whole-brain, cortical, and subcortical volumes of 21 previously enrolled participants under 11 years of age with an SCN1A mutation and 42 healthy controls were obtained using the DLS method and compared to volumes measured by FreeSurfer with manual correction. Additionally, the volumes calculated with the DLS method were compared between the patients and the control group. The volumes of total brain gray and white matter obtained using the DLS method were consistent with those measured by FreeSurfer with manual correction in healthy controls. Among the 68 parcellated cortical volumes, only 7 areas measured by the DLS method differed significantly from those measured by FreeSurfer with manual correction, and the differences decreased with increasing age in the subgroup analysis. The subcortical volumes measured by the DLS method were relatively smaller than those from the FreeSurfer analysis. Furthermore, the DLS method detected the volume reductions identified by FreeSurfer with manual correction in patients with SCN1A mutations, compared with healthy controls. In a pediatric population, this new, fully automated DLS method is compatible with the classic volumetric analysis with FreeSurfer software and manual correction, and it can also detect brain morphological changes in children with a neurodevelopmental disorder.

PMID:38383725 | DOI:10.1038/s41598-024-54663-z

Categories: Literature Watch

Raman spectrum combined with deep learning for precise recognition of Carbapenem-resistant Enterobacteriaceae

Wed, 2024-02-21 06:00

Anal Bioanal Chem. 2024 Feb 21. doi: 10.1007/s00216-024-05209-9. Online ahead of print.

ABSTRACT

Carbapenem-resistant Enterobacteriaceae (CRE) is a major pathogen that poses a serious threat to human health, and there are currently no effective measures to curb its rapid spread. To address this, an in-depth study of the surface-enhanced Raman spectroscopy (SERS) of 22 strains from 7 categories of CRE was conducted using a gold-silver composite SERS substrate. Residual networks with an attention mechanism were trained to classify the SERS spectra from three perspectives: pathogenic bacteria type, enzyme-producing subtype, and sensitive antibiotic type. The results show that the SERS spectra measured on the composite SERS substrate were repeatable and consistent. The SERS spectra of CRE showed varying degrees of species differences, and the strain differences were closely related to the enzyme-producing subtype. The introduced attention mechanism improved the classification accuracy of the residual network (ResNet) model. The accuracy of CRE classification for different strains and enzyme-producing subtypes reached 94.0% and 96.13%, respectively, and the accuracy of classification by the combination of pathogen and sensitive antibiotic reached 93.9%. This study is significant for guiding antibiotic use in CRE infection, as the antibiotic to which a strain is sensitive can be predicted directly from its measured spectra. Our study demonstrates the potential of combining SERS with deep learning algorithms to identify CRE without culture labels and to classify its sensitive antibiotics, providing a new approach for rapid and accurate clinical detection of CRE and helping to slow the development of CRE resistance.
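
One common way to add an attention mechanism to a residual network for 1-D spectra is a squeeze-and-excitation style channel gate; the block below is an illustrative example of that pattern, not necessarily the authors' exact design:

```python
# Residual block with squeeze-and-excitation channel attention for 1-D
# SERS spectra; all sizes are placeholders.
import torch
import torch.nn as nn

class SEResBlock1D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.attn = nn.Sequential(                    # channel attention (squeeze-excite)
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        w = self.attn(y).unsqueeze(-1)                # per-channel weights in [0, 1]
        return torch.relu(x + y * w)

out = SEResBlock1D(32)(torch.randn(4, 32, 1000))      # toy batch of spectra
```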

PMID:38383664 | DOI:10.1007/s00216-024-05209-9

Categories: Literature Watch

Data encoding for healthcare data democratization and information leakage prevention

Wed, 2024-02-21 06:00

Nat Commun. 2024 Feb 21;15(1):1582. doi: 10.1038/s41467-024-45777-z.

ABSTRACT

The lack of data democratization and information leakage from trained models hinder the development and acceptance of robust deep learning-based healthcare solutions. This paper argues that irreversible data encoding can provide an effective solution to achieve data democratization without violating the privacy constraints imposed on healthcare data and clinical models. An ideal encoding framework transforms the data into a new space where it is imperceptible to a manual or computational inspection. However, encoded data should preserve the semantics of the original data such that deep learning models can be trained effectively. This paper hypothesizes the characteristics of the desired encoding framework and then exploits random projections and random quantum encoding to realize this framework for dense and longitudinal or time-series data. Experimental evaluation highlights that models trained on encoded time-series data effectively uphold the information bottleneck principle and hence exhibit less information leakage from trained models.
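
The random-projection half of the encoding idea can be illustrated in a few lines with scikit-learn; the random quantum encoding described in the paper is not shown, and the data below are synthetic:

```python
# Minimal random-projection encoding example: project dense records into a
# new space before sharing them for model training.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
records = rng.normal(size=(100, 512))            # toy dense patient records

encoder = GaussianRandomProjection(n_components=128, random_state=0)
encoded = encoder.fit_transform(records)          # representation shared for training
# The projection matrix stays with the data owner; without it, recovering the
# original records from the encoded representation is not straightforward.
```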

PMID:38383571 | DOI:10.1038/s41467-024-45777-z

Categories: Literature Watch

Design of target specific peptide inhibitors using generative deep learning and molecular dynamics simulations

Wed, 2024-02-21 06:00

Nat Commun. 2024 Feb 21;15(1):1611. doi: 10.1038/s41467-024-45766-2.

ABSTRACT

We introduce a computational approach for the design of target-specific peptides. Our method integrates a Gated Recurrent Unit-based Variational Autoencoder with Rosetta FlexPepDock for peptide sequence generation and binding affinity assessment. Subsequently, molecular dynamics simulations are employed to narrow down the selection of peptides for experimental assays. We apply this computational strategy to design peptide inhibitors that specifically target β-catenin and NF-κB essential modulator. Among the twelve β-catenin inhibitors, six exhibit improved binding affinity compared to the parent peptide. Notably, the best C-terminal peptide binds β-catenin with an IC50 of 0.010 ± 0.06 μM, which is 15-fold better than the parent peptide. For NF-κB essential modulator, two of the four tested peptides display substantially enhanced binding compared to the parent peptide. Collectively, this study underscores the successful integration of deep learning and structure-based modeling and simulation for target specific peptide design.
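
The generative component named above (a Gated Recurrent Unit-based Variational Autoencoder over peptide sequences) can be sketched as follows; vocabulary, dimensions, and training details are assumptions, and the FlexPepDock scoring and molecular dynamics steps are external to this snippet:

```python
# Skeleton of a GRU-based VAE over amino-acid tokens (teacher forcing for
# decoding); all sizes are illustrative placeholders.
import torch
import torch.nn as nn

class GRUVAE(nn.Module):
    def __init__(self, vocab=21, embed=32, hidden=128, latent=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.encoder = nn.GRU(embed, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.latent_to_h = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        _, h = self.encoder(x)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        h0 = self.latent_to_h(z).unsqueeze(0)
        dec, _ = self.decoder(x, h0)
        return self.out(dec), mu, logvar                            # per-position logits

logits, mu, logvar = GRUVAE()(torch.randint(0, 21, (4, 12)))        # toy peptide batch
```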

PMID:38383543 | DOI:10.1038/s41467-024-45766-2

Categories: Literature Watch

A multimodal deep learning approach for the prediction of cognitive decline and its effectiveness in clinical trials for Alzheimer's disease

Wed, 2024-02-21 06:00

Transl Psychiatry. 2024 Feb 21;14(1):105. doi: 10.1038/s41398-024-02819-w.

ABSTRACT

Alzheimer's disease is one of the most important health-care challenges in the world. For decades, numerous efforts have been made to develop therapeutics for Alzheimer's disease, but most clinical trials have failed to show significant treatment effects on slowing or halting cognitive decline. Among several challenges in such trials, one recently recognized but unsolved issue is the biased allocation of fast and slow cognitive decliners to treatment and placebo groups during randomization, caused by the large individual variation in the speed of cognitive decline. This allocation bias directly results in either over- or underestimation of the treatment effect from the outcome of the trial. In this study, we propose a stratified randomization method using the degree of cognitive decline predicted by an artificial intelligence model as a stratification index to suppress the allocation bias in randomization, and we evaluate its effectiveness by simulation using the ADNI dataset.
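
The stratification idea can be made concrete with a short sketch: subjects are binned by predicted decline and shuffled to arms within each bin, so fast and slow decliners end up balanced across arms. The predictions below are simulated, not ADNI data, and the binning scheme is an assumption:

```python
# Stratified randomization using a predicted decline score as the
# stratification index (toy simulated predictions).
import numpy as np

rng = np.random.default_rng(42)
predicted_decline = rng.normal(loc=-2.0, scale=1.5, size=200)   # toy AI predictions

# Quartile-based strata; within each stratum, shuffle and split evenly.
strata = np.digitize(predicted_decline, np.quantile(predicted_decline, [0.25, 0.5, 0.75]))
arm = np.empty(len(predicted_decline), dtype=object)
for s in np.unique(strata):
    idx = rng.permutation(np.where(strata == s)[0])
    half = len(idx) // 2
    arm[idx[:half]] = "treatment"
    arm[idx[half:]] = "placebo"
```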

PMID:38383536 | DOI:10.1038/s41398-024-02819-w

Categories: Literature Watch

Suppressing HIFU interference in ultrasound images using 1D U-Net-based neural networks

Wed, 2024-02-21 06:00

Phys Med Biol. 2024 Feb 21. doi: 10.1088/1361-6560/ad2b95. Online ahead of print.

ABSTRACT

OBJECTIVE: One big challenge with high-intensity focused ultrasound (HIFU) is that the intense acoustic interference generated by HIFU irradiation overwhelms the B-mode monitoring images, compromising monitoring effectiveness. This study aims to overcome this problem using a one-dimensional (1D) deep convolutional neural network.

APPROACH: U-Net-based networks have been proven to be effective in image reconstruction and denoising, and the two-dimensional (2D) U-Net has already been investigated for suppressing HIFU interference in ultrasound monitoring images. In this study, we propose that the one-dimensional (1D) convolution in U-Net-based networks is more suitable for removing HIFU artifacts and can better recover the contaminated B-mode images compared to 2D convolution. Ex-vivo and in-vivo HIFU experiments were performed on a clinically equivalent ultrasound-guided HIFU platform to collect image data, and the 1D convolution in U-Net, Attention U-Net, U-Net++, and FUS-Net was applied to verify our proposal.
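
To make the 1-D-versus-2-D distinction concrete, the sketch below treats each image line as an independent 1-D signal and passes it through a single 1-D encoder-decoder stage; it is a toy illustration of the convolution choice, not a full 1-D U-Net or the authors' network:

```python
# Toy 1-D encoder-decoder stage operating on individual image lines of
# shape (batch, 1, samples_per_line); sizes are placeholders.
import torch
import torch.nn as nn

class Denoise1D(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(channels, channels * 2, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(channels * 2, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=1),
        )

    def forward(self, lines):
        return self.decode(self.encode(lines))

clean = Denoise1D()(torch.randn(8, 1, 2048))   # 8 contaminated image lines
```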

MAIN RESULTS: All 1D U-Net-based networks were more effective in suppressing HIFU interference than their 2D counterparts, with over 30% improvement in terms of structural similarity (SSIM) to the uncontaminated B-mode images. Additionally, 1D U-Nets trained using ex-vivo datasets demonstrated better generalization performance in in-vivo experiments.

SIGNIFICANCE: These findings indicate that the utilization of 1D convolution in U-Net-based networks offers great potential in addressing the challenges of monitoring in ultrasound-guided HIFU systems.

PMID:38382109 | DOI:10.1088/1361-6560/ad2b95

Categories: Literature Watch

S-Net: an S-shaped network for nodule detection in 3D CT images

Wed, 2024-02-21 06:00

Phys Med Biol. 2024 Feb 21. doi: 10.1088/1361-6560/ad2b96. Online ahead of print.

ABSTRACT

OBJECTIVE: Accurate and automatic detection of pulmonary nodules is critical for early lung cancer diagnosis, and promising progress has been achieved in developing effective deep models for nodule detection. However, most existing nodule detection methods merely focus on integrating elaborately designed feature extraction modules into the backbone of the detection network to extract rich nodule features, while ignoring disadvantages of the structure of the detection network itself. This study aims to address these disadvantages and develop a deep learning-based algorithm for pulmonary nodule detection to improve the accuracy of early lung cancer diagnosis.

APPROACH: In this paper, an S-shaped network called S-Net is developed with the U-shaped network as backbone. An information fusion branch propagates lower-level details and positional information critical for nodule detection to higher-level feature maps; a head-shared, scale-adaptive detection strategy captures information from different scales to better detect nodules of different shapes and sizes; and a feature-decoupling detection head allows the classification and regression branches to focus on the information required for their respective tasks. A hybrid loss function is utilized to fully exploit the interplay between the classification and regression branches.

MAIN RESULTS: The proposed S-Net network with ResSENet and three other U-shaped backbones from the SANet, OSAF-YOLOv3, and MSANet (R+SC+ECA) models achieves average CPM scores of 0.914, 0.915, 0.917, and 0.923 on the LUNA16 dataset, which are significantly higher than those achieved with other existing state-of-the-art models.

SIGNIFICANCE: The experimental results demonstrate that our proposed method effectively improves nodule detection performance, which implies potential applications of the proposed method in clinical practice.

PMID:38382097 | DOI:10.1088/1361-6560/ad2b96

Categories: Literature Watch

Multiscale Flow for robust and optimal cosmological analysis

Wed, 2024-02-21 06:00

Proc Natl Acad Sci U S A. 2024 Feb 27;121(9):e2309624121. doi: 10.1073/pnas.2309624121. Epub 2024 Feb 21.

ABSTRACT

We propose Multiscale Flow, a generative Normalizing Flow that creates samples and models the field-level likelihood of two-dimensional cosmological data such as weak lensing. Multiscale Flow uses hierarchical decomposition of cosmological fields via a wavelet basis and then models different wavelet components separately as Normalizing Flows. The log-likelihood of the original cosmological field can be recovered by summing over the log-likelihood of each wavelet term. This decomposition allows us to separate the information from different scales and identify distribution shifts in the data such as unknown scale-dependent systematics. The resulting likelihood analysis can not only identify these types of systematics, but can also be made optimal, in the sense that the Multiscale Flow can learn the full likelihood at the field level without any dimensionality reduction. We apply Multiscale Flow to weak lensing mock datasets for cosmological inference and show that it significantly outperforms traditional summary statistics such as power spectrum and peak counts, as well as machine learning-based summary statistics such as scattering transform and convolutional neural networks. We further show that Multiscale Flow is able to identify distribution shifts not present in the training data, such as baryonic effects. Finally, we demonstrate that Multiscale Flow can be used to generate realistic samples of weak lensing data.
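
The decomposition-and-sum structure described above can be illustrated with PyWavelets: split a 2-D field into wavelet levels, score each level with its own density model, and add the per-level log-likelihoods. The Gaussian log-density below is only a placeholder for the per-scale Normalizing Flow:

```python
# Wavelet decomposition of a 2-D field, with a placeholder per-scale
# log-likelihood summed across levels (standard normal stands in for a flow).
import numpy as np
import pywt

field = np.random.randn(128, 128)                        # toy convergence map
coeffs = pywt.wavedec2(field, wavelet='haar', level=3)    # [cA3, (cH3, cV3, cD3), ...]

def placeholder_loglike(arr) -> float:
    """Stand-in for a per-scale flow's log-likelihood."""
    a = np.asarray(arr)
    return float(-0.5 * np.sum(a ** 2 + np.log(2 * np.pi)))

total_loglike = placeholder_loglike(coeffs[0]) + sum(
    placeholder_loglike(np.stack(detail)) for detail in coeffs[1:]
)
```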

PMID:38381782 | DOI:10.1073/pnas.2309624121

Categories: Literature Watch
