Deep learning

Revolutionizing Healthcare: Qure.AI's Innovations in Medical Diagnosis and Treatment

Thu, 2024-07-04 06:00

Cureus. 2024 Jun 3;16(6):e61585. doi: 10.7759/cureus.61585. eCollection 2024 Jun.

ABSTRACT

Qure.AI, a leading company in artificial intelligence (AI) applied to healthcare, has developed a suite of innovative solutions to revolutionize medical diagnosis and treatment. With a plethora of FDA-approved tools for clinical use, Qure.AI continually strives for innovation in integrating AI into healthcare systems. This article delves into the efficacy of Qure.AI's chest X-ray interpretation tool, "qXR," in medicine, drawing from a comprehensive review of clinical trials conducted by various institutions. Key applications of AI in healthcare include machine learning, deep learning, and natural language processing (NLP), all of which contribute to enhanced diagnostic accuracy, efficiency, and speed. Through the analysis of vast datasets, AI algorithms assist physicians in interpreting medical data and making informed decisions, thereby improving patient care outcomes. Illustrative examples highlight AI's impact on medical imaging, particularly in the diagnosis of conditions such as breast cancer, heart failure, and pulmonary nodules. AI can significantly reduce diagnostic errors and expedite the interpretation of medical images, leading to more timely interventions and treatments. Furthermore, AI-powered predictive analytics enable early detection of diseases and facilitate personalized treatment plans, thereby reducing healthcare costs and improving patient outcomes. The efficacy of AI in healthcare is underscored by its ability to complement traditional diagnostic methods, providing physicians with valuable insights and support in clinical decision-making. As AI continues to evolve, its role in patient care and medical research is poised to expand, promising further advancements in diagnostic accuracy and treatment efficacy.

PMID:38962585 | PMC:PMC11221395 | DOI:10.7759/cureus.61585

Categories: Literature Watch

Challenges in Reducing Bias Using Post-Processing Fairness for Breast Cancer Stage Classification with Deep Learning

Thu, 2024-07-04 06:00

Algorithms. 2024 Apr;17(4):141. doi: 10.3390/a17040141. Epub 2024 Mar 28.

ABSTRACT

Breast cancer is the most common cancer affecting women globally. Despite the significant impact of deep learning models on breast cancer diagnosis and treatment, achieving fairness or equitable outcomes across diverse populations remains a challenge when some demographic groups are underrepresented in the training data. We quantified the bias of models trained to predict breast cancer stage from a dataset consisting of 1000 biopsies from 842 patients provided by AIM-Ahead (Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity). Notably, the majority of data (over 70%) were from White patients. We found that prior to post-processing adjustments, all deep learning models we trained consistently performed better for White patients than for non-White patients. After model calibration, we observed mixed results, with only some models demonstrating improved performance. This work provides a case study of bias in breast cancer medical imaging models and highlights the challenges in using post-processing to attempt to achieve fairness.
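
The abstract does not name the specific post-processing technique used; as a generic illustration of the idea (not the paper's implementation), one common family of methods tunes a separate decision threshold per demographic group so that true-positive rates are approximately equalized. All names and numbers below are made up for the sketch:

```python
# Illustrative post-processing fairness sketch (NOT the paper's method):
# choose a per-group decision threshold so that true-positive rates
# (recall on the positive class) are approximately equalized.

def tpr(scores, labels, threshold):
    """True-positive rate of thresholded scores against binary labels."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(s >= threshold for s in positives) / len(positives)

def equalize_tpr(scores_by_group, labels_by_group, target_tpr):
    """For each group, pick the highest threshold whose TPR meets the target."""
    thresholds = {}
    for g in scores_by_group:
        candidates = sorted(set(scores_by_group[g]), reverse=True)
        chosen = min(candidates)  # fall back to the most permissive cut
        for t in candidates:
            if tpr(scores_by_group[g], labels_by_group[g], t) >= target_tpr:
                chosen = t
                break
        thresholds[g] = chosen
    return thresholds

# Toy example: the model systematically scores group "b" lower.
scores = {"a": [0.9, 0.8, 0.3, 0.2], "b": [0.6, 0.5, 0.2, 0.1]}
labels = {"a": [1, 1, 0, 0], "b": [1, 1, 0, 0]}
thresholds = equalize_tpr(scores, labels, target_tpr=1.0)
print(thresholds)  # group "b" gets a lower threshold than group "a"
```

As the abstract's mixed results suggest, this kind of adjustment can equalize one metric on held-out data while leaving others (or overall performance) unchanged or worse.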

PMID:38962581 | PMC:PMC11221567 | DOI:10.3390/a17040141

Categories: Literature Watch

Integration between constrained optimization and deep networks: a survey

Thu, 2024-07-04 06:00

Front Artif Intell. 2024 Jun 19;7:1414707. doi: 10.3389/frai.2024.1414707. eCollection 2024.

ABSTRACT

Integration between constrained optimization and deep networks has garnered significant interest from both research and industrial laboratories. Optimization techniques can be employed to optimize the choice of network structure based not only on loss and accuracy but also on physical constraints. Additionally, constraints can be imposed during training to enhance the performance of networks in specific contexts. This study surveys the literature on the integration of constrained optimization with deep networks. Specifically, we examine the integration of hyper-parameter tuning with physical constraints, such as the number of FLOPs (floating-point operations, a measure of computational cost), latency, and other factors. This study also considers the use of context-specific knowledge constraints to improve network performance. We discuss the integration of constraints in neural architecture search (NAS), considering the problem both as a multi-objective optimization (MOO) challenge and through the imposition of penalties in the loss function. Furthermore, we explore various approaches that integrate logic with deep neural networks (DNNs). In particular, we examine logic-neural integration through constrained optimization applied during the training of NNs, and the use of semantic loss, which employs the probabilistic output of the networks to enforce constraints on the output.
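
The semantic loss mentioned above has a standard formulation: the negative log-probability that the network's independent output probabilities yield an assignment satisfying the logical constraint. A minimal sketch for an "exactly one output is true" constraint (enumeration over assignments, so only practical for small output spaces):

```python
import math
from itertools import product

# Semantic loss sketch for an "exactly one output is true" constraint:
# L(p) = -log( sum over satisfying assignments x of
#              prod_i [ p_i if x_i else (1 - p_i) ] ).

def semantic_loss_exactly_one(probs):
    total = 0.0
    for assignment in product([0, 1], repeat=len(probs)):
        if sum(assignment) != 1:      # keep only satisfying assignments
            continue
        weight = 1.0
        for p, x in zip(probs, assignment):
            weight *= p if x else (1.0 - p)
        total += weight
    return -math.log(total)

# A near-one-hot prediction satisfies the constraint almost surely, so its
# semantic loss is near 0; a uniform prediction is penalized more heavily.
confident = semantic_loss_exactly_one([0.97, 0.01, 0.02])
uniform = semantic_loss_exactly_one([1 / 3, 1 / 3, 1 / 3])
print(confident < uniform)  # True
```

In training, this term is added to the task loss with a weighting coefficient, which is the penalty-based integration route the survey contrasts with multi-objective formulations.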

PMID:38962503 | PMC:PMC11220227 | DOI:10.3389/frai.2024.1414707

Categories: Literature Watch

Calibrated geometric deep learning improves kinase-drug binding predictions

Thu, 2024-07-04 06:00

Nat Mach Intell. 2023 Dec;5(12):1390-1401. doi: 10.1038/s42256-023-00751-0. Epub 2023 Nov 6.

ABSTRACT

Protein kinases regulate various cellular functions and hold significant pharmacological promise in cancer and other diseases. Although kinase inhibitors are one of the largest groups of approved drugs, much of the human kinome remains unexplored but potentially druggable. Computational approaches, such as machine learning, offer efficient solutions for exploring kinase-compound interactions and uncovering novel binding activities. Despite the increasing availability of three-dimensional (3D) protein and compound structures, existing methods predominantly focus on exploiting local features from one-dimensional protein sequences and two-dimensional molecular graphs to predict binding affinities, overlooking the 3D nature of the binding process. Here we present KDBNet, a deep learning algorithm that incorporates 3D protein and molecule structure data to predict binding affinities. KDBNet uses graph neural networks to learn structure representations of protein binding pockets and drug molecules, capturing the geometric and spatial characteristics of binding activity. In addition, we introduce an algorithm to quantify and calibrate the uncertainties of KDBNet's predictions, enhancing its utility in model-guided discovery in chemical or protein space. Experiments demonstrated that KDBNet outperforms existing deep learning models in predicting kinase-drug binding affinities. The uncertainties estimated by KDBNet are informative and well-calibrated with respect to prediction errors. When integrated with a Bayesian optimization framework, KDBNet enables data-efficient active learning and accelerates the exploration and exploitation of diverse high-binding kinase-drug pairs.
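
The abstract states that KDBNet's uncertainties are "informative and well-calibrated with respect to prediction errors." One common sanity check for this property (not KDBNet's exact calibration algorithm, and with hypothetical numbers) is whether predicted uncertainties correlate with absolute prediction errors:

```python
import math

# Generic uncertainty sanity check (NOT KDBNet's calibration procedure):
# informative uncertainty estimates should correlate with the model's
# absolute prediction errors.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted binding affinities, targets, and uncertainties.
preds = [7.1, 6.4, 5.0, 8.2, 6.9]
truth = [7.0, 6.0, 5.9, 8.1, 6.7]
sigma = [0.1, 0.5, 0.9, 0.2, 0.3]

errors = [abs(p - t) for p, t in zip(preds, truth)]
r = pearson(sigma, errors)
print(round(r, 3))  # close to 1: uncertainty tracks error magnitude
```

Uncertainties that pass checks like this are what make the Bayesian-optimization loop described in the abstract data-efficient: the acquisition function can trust the model's confidence when trading off exploration and exploitation.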

PMID:38962391 | PMC:PMC11221792 | DOI:10.1038/s42256-023-00751-0

Categories: Literature Watch

Encoding temporal information in deep convolution neural network

Thu, 2024-07-04 06:00

Front Neuroergon. 2024 Jun 19;5:1287794. doi: 10.3389/fnrgo.2024.1287794. eCollection 2024.

ABSTRACT

Recent developments in deep learning techniques have attracted attention to the decoding and classification of electroencephalogram (EEG) signals. Despite several efforts to utilize different features in EEG signals, a significant research challenge is using time-dependent features in combination with local and global features. Several attempts have been made to remodel deep learning convolution neural networks (CNNs) to capture time-dependency information. These approaches usually rely either on handcrafted features, such as power ratios, or on splitting the data into smaller windows tied to specific properties, such as a peak at 300 ms. However, these approaches only partially solve the problem and simultaneously hinder the CNNs' capability to learn from unknown information that might be present in the data. Other approaches, such as recurrent neural networks, are well suited to learning time-dependent information from EEG signals in the presence of unrelated sequential data. To solve this, we propose the encoding kernel (EnK), a novel time-encoding approach that introduces time-decomposition information during the vertical convolution operation in CNNs. The encoded information lets CNNs learn time-dependent features in addition to local and global features. We performed extensive experiments on several EEG data sets: physical human-robot collaboration, P300 visual-evoked potentials, motor imagery, movement-related cortical potentials, and the Dataset for Emotion Analysis Using Physiological Signals. The EnK outperforms the state of the art with up to a 6.5% reduction in mean squared error (MSE) and a 9.5% improvement in F1-scores, averaged over all data sets, compared to base models. These results support our approach and show a high potential to improve performance on both physiological and non-physiological data. Moreover, the EnK can be applied to virtually any deep learning architecture with minimal effort.
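
The EnK itself operates inside the vertical convolution and is not specified in the abstract; as a heavily simplified sketch of the general idea of injecting explicit time information into a CNN's input (an assumption-laden stand-in, not the EnK operation), one can append a normalized time ramp as an extra channel so filters can condition on when in the trial a sample occurred:

```python
# Minimal sketch of injecting explicit time information into a CNN input
# (NOT the EnK operation itself): append a normalized time ramp in [0, 1]
# as an extra "channel" alongside the EEG channels.

def add_time_channel(eeg):
    """eeg: list of channels, each a list of T samples.
    Returns the same channels plus one time-encoding channel."""
    t_len = len(eeg[0])
    time_channel = [i / (t_len - 1) for i in range(t_len)]
    return eeg + [time_channel]

trial = [[0.2, 0.1, -0.3, 0.4],   # hypothetical channel 1, 4 samples
         [0.0, 0.5, 0.1, -0.2]]   # hypothetical channel 2
encoded = add_time_channel(trial)
print(len(encoded))   # 3 channels: two EEG channels plus the time ramp
print(encoded[-1])    # ramp from 0.0 up to 1.0
```

The appeal of encodings in this family, as the abstract notes for the EnK, is that they bolt onto essentially any architecture without changing the backbone.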

PMID:38962279 | PMC:PMC11220250 | DOI:10.3389/fnrgo.2024.1287794

Categories: Literature Watch

Decoding functional proteome information in model organisms using protein language models

Thu, 2024-07-04 06:00

NAR Genom Bioinform. 2024 Jul 2;6(3):lqae078. doi: 10.1093/nargab/lqae078. eCollection 2024 Sep.

ABSTRACT

Protein language models have been tested and proven reliable on curated datasets but have not yet been applied to full proteomes. Accordingly, we tested how two different machine learning-based methods performed when decoding functional information from the proteomes of selected model organisms. We found that protein language models are more precise and informative than deep learning methods for all the species tested and across the three gene ontologies studied, and that they better recover functional information from transcriptomic experiments. The results obtained indicate that these language models are likely to be suitable for large-scale annotation and downstream analyses, and we provide a guide for their use.

PMID:38962255 | PMC:PMC11217674 | DOI:10.1093/nargab/lqae078

Categories: Literature Watch

Generative preparation tasks in digital collaborative learning: actor and partner effects of constructive preparation activities on deep comprehension

Thu, 2024-07-04 06:00

Front Psychol. 2024 Jun 19;15:1335682. doi: 10.3389/fpsyg.2024.1335682. eCollection 2024.

ABSTRACT

Deep learning from collaboration occurs if the learner enacts interactive activities in the sense of leveraging the knowledge externalized by co-learners as a resource for their own inferencing processes, and if these interactive activities in turn promote the learner's deep comprehension outcomes. This experimental study investigates whether inducing dyad members to enact constructive preparation activities can promote deep learning from subsequent collaboration, examining prior knowledge as a moderator. In a digital collaborative learning environment, 122 non-expert university students assigned to 61 dyads studied a text about the human circulatory system and then prepared individually for collaboration according to their experimental conditions: the preparation tasks varied across dyads with respect to their generativity, that is, the degree to which they required the learners to enact constructive activities (note-taking, compare-contrast, or explanation). After externalizing their answer to the task, learners in all conditions inspected their partner's externalization and then jointly discussed their text understanding via chat. Results showed that more rather than less generative tasks fostered constructive preparation activities, but not interactive collaboration activities or deep comprehension outcomes. Moderated mediation analyses considering actor and partner effects indicated that the indirect effects of constructive preparation activities on deep comprehension outcomes via interactive activities depended on prior knowledge: when own prior knowledge was relatively low, self-performed but not partner-performed constructive preparation activities were beneficial. When own prior knowledge was relatively high, partner-performed constructive preparation activities were conducive while one's own were ineffective or even detrimental. Given these differential effects, suggestions are made for optimizing the instructional design around generative preparation tasks to enhance the effectiveness of constructive preparation activities for deep learning from digital collaboration.

PMID:38962237 | PMC:PMC11220279 | DOI:10.3389/fpsyg.2024.1335682

Categories: Literature Watch

A comprehensive standardized dataset of numerous pomegranate fruit diseases for deep learning

Thu, 2024-07-04 06:00

Data Brief. 2024 Mar 1;54:110284. doi: 10.1016/j.dib.2024.110284. eCollection 2024 Jun.

ABSTRACT

Computer vision-based detection and classification of pomegranate fruit diseases remains challenging because of the variety of diseases, which makes collecting or creating datasets extremely difficult. The usage of machine learning and deep learning in farming has increased significantly in recent years. For developing precise and consistent machine learning models and reducing misclassification in real-time situations, efficient and clean datasets are a key requirement. The currently available standardized and publicly accessible pomegranate fruit disease classification datasets for agriculture are not adequate to train models efficiently. To address this issue, the primary goal of the current study is to create a ready-to-use, publicly available image dataset of pomegranate fruits covering numerous diseases. We collected images of healthy pomegranate fruits and four disease types from different places such as Ballari, Bengaluru, and Bagalakote. These images were taken from July to October 2023. The dataset contains 5099 pomegranate fruit images, which are labeled and classified into 5 types: Healthy, Bacterial blight, Anthracnose, Cercospora fruit spot, and Alternaria fruit spot. The dataset comprises 5 folders named after the corresponding classes. This dataset might also be useful for locating pomegranate diseases in other nations as well as for increasing pomegranate yield. It is extremely useful for machine learning and deep learning researchers in the field of agriculture developing computer vision applications.
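
Folder-per-class layouts like the one described are consumed the same way by most image-classification pipelines: each subfolder name becomes the label of the images inside it. A stdlib-only sketch of that indexing step (underscored folder names and file names are assumptions for the demo, built on a temporary mock directory):

```python
import tempfile
from pathlib import Path

# Sketch of indexing a <root>/<class>/<image> dataset layout.
# Folder names below mirror the dataset's five classes (underscores
# assumed); the image file names are made up for illustration.

CLASSES = ["Healthy", "Bacterial_blight", "Anthracnose",
           "Cercospora_fruit_spot", "Alternaria_fruit_spot"]

def index_dataset(root):
    """Return (path, label) pairs from a folder-per-class layout."""
    root = Path(root)
    samples = []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for image in sorted(class_dir.iterdir()):
            samples.append((image, class_dir.name))
    return samples

# Build a tiny mock layout with one image per class, then index it.
root = Path(tempfile.mkdtemp())
for name in CLASSES:
    (root / name).mkdir()
    (root / name / "img_001.jpg").touch()

samples = index_dataset(root)
print(len(samples))  # 5: one image per class folder
```

Framework loaders such as torchvision's `ImageFolder` implement exactly this convention, which is why the folder-per-disease organization makes the dataset immediately usable.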

PMID:38962206 | PMC:PMC11220843 | DOI:10.1016/j.dib.2024.110284

Categories: Literature Watch

Mulberry leaf dataset for image classification task

Thu, 2024-07-04 06:00

Data Brief. 2024 Mar 1;54:110281. doi: 10.1016/j.dib.2024.110281. eCollection 2024 Jun.

ABSTRACT

This manuscript presents a mulberry leaf dataset collected from five provinces within three regions in Thailand. The dataset contains ten categories of mulberry leaves. We proposed this dataset due to the challenges of classifying leaf images taken in natural environments arising from high inter-class similarity and variations in illumination and background conditions (multiple leaves from a mulberry tree and shadows appearing in the leaf images). We highlight that our research team recorded mulberry leaves independently from various perspectives during our data acquisition using multiple camera types. The mulberry leaf dataset can serve as vital input data passed to computer vision algorithms (conventional deep learning and vision transformer algorithms) for creating image recognition systems. The dataset will allow other researchers to propose novel computer vision techniques to approach mulberry recognition challenges.

PMID:38962203 | PMC:PMC11220858 | DOI:10.1016/j.dib.2024.110281

Categories: Literature Watch

Deep Learning-based Hierarchical Brain Segmentation with Preliminary Analysis of the Repeatability and Reproducibility

Wed, 2024-07-03 06:00

Magn Reson Med Sci. 2024 Jul 2. doi: 10.2463/mrms.mp.2023-0124. Online ahead of print.

ABSTRACT

PURPOSE: We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that can segment T1-weighted MR images (T1WI) into 107 brain subregions and calculate the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools such as statistical parametric mapping (SPM) and FreeSurfer (FS).

METHODS: Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain mask pairs from 486 subjects were used to train the deep learning segmentation models. Training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Brain 3D-T1WI scan-rescan data of 11 healthy subjects were obtained using three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline. These volumes were then used to evaluate repeatability and reproducibility.
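
Scan-rescan repeatability of volume estimates is often summarized with the within-subject coefficient of variation (wCV); the sketch below illustrates that generic statistic, not necessarily the exact metric used in this study, on hypothetical thalamus volumes:

```python
import math

# Within-subject coefficient of variation (wCV) for scan-rescan data:
# pooled within-subject SD divided by the grand mean. Lower is better
# (more repeatable). A generic statistic, not this paper's exact metric.

def within_subject_cv(measurements):
    """measurements: per-subject lists of repeated volumes for one ROI."""
    within_vars, means = [], []
    for reps in measurements:
        m = sum(reps) / len(reps)
        within_vars.append(sum((v - m) ** 2 for v in reps) / (len(reps) - 1))
        means.append(m)
    pooled_sd = math.sqrt(sum(within_vars) / len(within_vars))
    grand_mean = sum(means) / len(means)
    return pooled_sd / grand_mean

# Hypothetical thalamus volumes (mL) for three subjects, scanned twice.
volumes = [[7.10, 7.05], [6.80, 6.90], [7.40, 7.35]]
print(round(within_subject_cv(volumes) * 100, 2), "%")  # wCV as a percent
```

Reproducibility across the three scanners would be assessed analogously, with the repeated measurements coming from different scanners rather than repeat scans on one.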

RESULTS: In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM. Furthermore, DLHBS demonstrated higher repeatability than FS across all eight ROIs. Additionally, higher reproducibility was observed with DLHBS in both hemispheres of six ROIs when compared with SPM, and in five ROIs compared with FS. DLHBS did not show lower repeatability or reproducibility in any comparison.

CONCLUSION: Our results showed that DLHBS achieved the best performance in both repeatability and reproducibility compared with SPM and FS.

PMID:38960679 | DOI:10.2463/mrms.mp.2023-0124

Categories: Literature Watch

Antibody design using deep learning: from sequence and structure design to affinity maturation

Wed, 2024-07-03 06:00

Brief Bioinform. 2024 May 23;25(4):bbae307. doi: 10.1093/bib/bbae307.

ABSTRACT

Deep learning has achieved impressive results in various fields such as computer vision and natural language processing, making it a powerful tool in biology. Its applications now encompass cellular image classification, genomic studies, and drug discovery. While deep learning applications in drug development have traditionally focused on small molecules, recent innovations have incorporated it into the discovery and development of biological molecules, particularly antibodies. Researchers have devised novel techniques to streamline antibody development, combining in vitro and in silico methods. In particular, computational power expedites lead candidate generation, scaling, and potential antibody development against complex antigens. This survey highlights significant advancements in protein design and optimization, specifically focusing on antibodies. This includes various aspects such as design, folding, antibody-antigen docking, and affinity maturation.

PMID:38960409 | DOI:10.1093/bib/bbae307

Categories: Literature Watch

AttABseq: an attention-based deep learning prediction method for antigen-antibody binding affinity changes based on protein sequences

Wed, 2024-07-03 06:00

Brief Bioinform. 2024 May 23;25(4):bbae304. doi: 10.1093/bib/bbae304.

ABSTRACT

The optimization of therapeutic antibodies through traditional techniques, such as candidate screening via hybridoma or phage display, is resource-intensive and time-consuming. In recent years, computational and artificial intelligence-based methods have been actively developed to accelerate and improve the development of therapeutic antibodies. In this study, we developed an end-to-end sequence-based deep learning model, termed AttABseq, for predicting antigen-antibody binding affinity changes caused by antibody mutations. AttABseq is a highly efficient and generic attention-based model that utilizes diverse antigen-antibody complex sequences as input to predict the binding affinity changes of residue mutations. Assessment on three benchmark datasets illustrates that AttABseq is 120% more accurate than other sequence-based models in terms of the Pearson correlation coefficient between the predicted and experimental binding affinity changes. Moreover, AttABseq also either outperforms or competes favorably with structure-based approaches. Furthermore, AttABseq consistently demonstrates robust predictive capabilities across a diverse array of conditions, underscoring its remarkable capacity for generalization across a wide spectrum of antigen-antibody complexes. It imposes no constraints on the quantity of altered residues, rendering it particularly applicable in scenarios where crystallographic structures remain unavailable. The attention-based interpretability analysis indicates that the causal effects of point mutations on antibody-antigen binding affinity changes can be visualized at the residue level, which might assist automated antibody sequence optimization. We believe that AttABseq provides a highly competitive solution for therapeutic antibody optimization.

PMID:38960407 | DOI:10.1093/bib/bbae304

Categories: Literature Watch

Comprehensive single-cell RNA-seq analysis using deep interpretable generative modeling guided by biological hierarchy knowledge

Wed, 2024-07-03 06:00

Brief Bioinform. 2024 May 23;25(4):bbae314. doi: 10.1093/bib/bbae314.

ABSTRACT

Recent advances in microfluidics and sequencing technologies allow researchers to explore cellular heterogeneity at single-cell resolution. In recent years, deep learning frameworks, such as generative models, have brought great changes to the analysis of transcriptomic data. Nevertheless, relying on the latent space of these generative models alone is insufficient to generate biological explanations. In addition, most of the previous work based on generative models is limited to shallow neural networks with one to three layers of latent variables, which may limit the capabilities of the models. Here, we propose a deep interpretable generative model called d-scIGM for single-cell data analysis. d-scIGM combines sawtooth connectivity techniques and residual networks, thereby constructing a deep generative framework. In addition, d-scIGM incorporates hierarchical prior knowledge of biological domains to enhance the interpretability of the model. We show that d-scIGM achieves excellent performance in a variety of fundamental tasks, including clustering, visualization, and pseudo-temporal inference. Through topic pathway studies, we found that d-scIGM-learned topics are better enriched for biologically meaningful pathways compared to the baseline models. Furthermore, the analysis of drug response data shows that d-scIGM can capture drug response patterns in large-scale experiments, which provides a promising way to elucidate the underlying biological mechanisms. Lastly, in the melanoma dataset, d-scIGM accurately identified different cell types and revealed multiple melanin-related driver genes and key pathways, which are critical for understanding disease mechanisms and drug development.

PMID:38960404 | DOI:10.1093/bib/bbae314

Categories: Literature Watch

Neoadjuvant Chemotherapy Response in Triple-Negative Apocrine Carcinoma: Comparing Apocrine Morphology, Androgen Receptor, and Immune Phenotypes

Wed, 2024-07-03 06:00

Arch Pathol Lab Med. 2024 Jul 4. doi: 10.5858/arpa.2023-0561-OA. Online ahead of print.

ABSTRACT

CONTEXT.—: Apocrine differentiation and androgen receptor (AR) positivity represent a specific subset of triple-negative breast cancer (TNBC) and are often considered potential prognostic or predictive factors.

OBJECTIVE.—: To evaluate the response of TNBC to neoadjuvant chemotherapy (NAC) and to assess the impact of apocrine morphology, AR status, Ki-67 labeling index (Ki-67LI), and tumor-infiltrating lymphocytes (TILs).

DESIGN.—: A total of 232 TNBC patients who underwent NAC followed by surgical resection in a single institute were analyzed. The study evaluated apocrine morphology and AR and Ki-67LI expression via immunohistochemistry from pre-NAC biopsy samples. Additionally, pre-NAC intratumoral TILs and stromal TILs (sTILs) were quantified from biopsies using a deep learning model. The response to NAC after surgery was assessed based on residual cancer burden.

RESULTS.—: Both apocrine morphology and high AR expression correlated with lower Ki-67LI (P < .001 for both). Apocrine morphology was associated with lower postoperative pathologic complete response (pCR) rates after NAC (P = .02), but the difference in TILs between TNBC cases with and without apocrine morphology was not statistically significant (P = .09 for sTILs). In contrast, AR expression did not significantly affect pCR (P = .13). Pre-NAC TILs strongly correlated with postoperative pCR in TNBCs without apocrine morphology (P < .001 for sTILs), whereas TNBC with apocrine morphology demonstrated an indeterminate trend (P = .82 for sTILs).

CONCLUSIONS.—: Although TIL counts did not vary significantly based on apocrine morphology, apocrine morphology itself was a more reliable predictor of NAC response than AR expression. Consequently, although apocrine morphology is a rare subtype of TNBC, its identification is clinically important.

PMID:38960391 | DOI:10.5858/arpa.2023-0561-OA

Categories: Literature Watch

A Deep-learning-based Threshold-free Method for Automated Analysis of Rodent Behavior in the Forced Swim Test and Tail Suspension Test

Wed, 2024-07-03 06:00

J Neurosci Methods. 2024 Jul 1:110212. doi: 10.1016/j.jneumeth.2024.110212. Online ahead of print.

ABSTRACT

BACKGROUND: The forced swim test (FST) and tail suspension test (TST) are widely used to assess depressive-like behaviors in animals. Immobility time is used as an important parameter in both FST and TST. Traditional methods for analyzing FST and TST rely on manually setting the threshold for immobility, which is time-consuming and subjective.

NEW METHOD: We proposed a threshold-free method for automated analysis of mice in these tests using a Dual-Stream Activity Analysis Network (DSAAN). Specifically, this network extracted spatial information of mice using a limited number of video frames and combined it with temporal information extracted from differential feature maps to determine the mouse's state. To do so, we developed the Mouse FSTST dataset, which consisted of annotated video recordings of FST and TST.

RESULTS: Using DSAAN, we identified immobility states with accuracies of 92.51% and 88.70% for the TST and FST, respectively. The immobility time predicted by DSAAN correlates well with manual scores, indicating the reliability of the proposed method. Importantly, DSAAN achieved over 80% accuracy for both FST and TST using only 94 annotated images, suggesting that even a very limited training dataset can yield good performance with our model.

COMPARISON WITH EXISTING METHOD(S): Compared with DBscorer and EthoVision XT, our method exhibits the highest Pearson correlation coefficient with manual annotation results on the Mouse FSTST dataset.

CONCLUSIONS: We established a powerful tool for analyzing depressive-like behavior independent of threshold, which is capable of freeing users from time-consuming manual analysis.

PMID:38960331 | DOI:10.1016/j.jneumeth.2024.110212

Categories: Literature Watch

Discovering 3D Hidden Elasticity in Isotropic and Transversely Isotropic Materials with Physics-informed UNets

Wed, 2024-07-03 06:00

Acta Biomater. 2024 Jul 1:S1742-7061(24)00353-2. doi: 10.1016/j.actbio.2024.06.038. Online ahead of print.

ABSTRACT

Three-dimensional variation in structural components or fiber alignments results in complex mechanical property distribution in tissues and biomaterials. In this paper, we use a physics-informed UNet-based neural network model (El-UNet) to discover the three-dimensional (3D) internal composition and space-dependent material properties of heterogeneous isotropic and transversely isotropic materials without a priori knowledge of the composition. We then show the capabilities of El-UNet by validating against data obtained from finite-element simulations of two soft tissues, namely, brain tissue and articular cartilage, under various loading conditions. We first simulated compressive loading of 3D brain tissue comprising distinct white matter and gray matter mechanical properties undergoing small strains with isotropic linear elastic behavior, where El-UNet reached mean absolute relative errors under 1.5% for elastic modulus and Poisson's ratio estimations across the 3D volume. We showed that the 3D solution achieved by El-UNet was superior to relative stiffness mapping by inverse of axial strain and two-dimensional plane stress/plane strain approximations. Additionally, we simulated a transversely isotropic articular cartilage with known fiber orientations undergoing compressive loading, and accurately estimated the spatial distribution of all five material parameters, with mean absolute relative errors under 5%. Our work demonstrates the application of the computationally efficient physics-informed El-UNet in 3D elasticity imaging and provides methods for translation to experimental 3D characterization of soft tissues and other materials. The proposed El-UNet offers a powerful tool for both in vitro and ex vivo tissue analysis, with potential extensions to in vivo diagnostics. STATEMENT OF SIGNIFICANCE: Elasticity imaging is a technique that reconstructs mechanical properties of tissue using deformation and force measurements.
Given the complexity of this reconstruction, most existing methods have focused on 2D problems. Our work is the first implementation of physics-informed UNets to reconstruct three-dimensional material parameter distributions for isotropic and transversely isotropic linear elastic materials from deformation and force measurements. We comprehensively validate our model using synthetic data generated with finite element models of biological tissues with high bio-fidelity: the brain and articular cartilage. Our method can be implemented in elasticity imaging scenarios for in vitro and ex vivo mechanical characterization of biomaterials and biological tissues, with potential extensions to in vivo diagnostics.
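
The core of any physics-informed loss is the combination of a data-mismatch term with the residual of a governing equation. The sketch below is a one-dimensional caricature under stated simplifying assumptions (uniform grid, linear elasticity, equilibrium d(sigma)/dx = 0 with sigma = E du/dx); El-UNet itself solves the full 3D elasticity problem:

```python
# 1D caricature of a physics-informed loss (NOT El-UNet's 3D formulation):
# total loss = data mismatch on displacements
#            + squared finite-difference residual of d/dx(E * du/dx) = 0.

def physics_informed_loss(u_pred, u_meas, E, dx):
    # Data term: mean squared mismatch with measured displacements.
    data = sum((p - m) ** 2 for p, m in zip(u_pred, u_meas)) / len(u_meas)
    # Physics term: equilibrium residual on a uniform grid.
    strain = [(u_pred[i + 1] - u_pred[i]) / dx for i in range(len(u_pred) - 1)]
    stress = [e * s for e, s in zip(E, strain)]
    residual = [(stress[i + 1] - stress[i]) / dx for i in range(len(stress) - 1)]
    physics = sum(r ** 2 for r in residual) / len(residual)
    return data + physics

# With homogeneous stiffness, a linear displacement field satisfies
# equilibrium exactly, so both terms vanish.
u = [0.0, 0.25, 0.5, 0.75]   # linear displacement field
E = [1.0, 1.0, 1.0]          # one stiffness value per grid interval
loss = physics_informed_loss(u, u, E, dx=1.0)
print(loss)  # 0.0
```

In the inverse setting of the paper, the stiffness field itself is unknown and is predicted by the network, with the physics residual providing the supervision that displacement data alone cannot.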

PMID:38960112 | DOI:10.1016/j.actbio.2024.06.038

Categories: Literature Watch

ReMAR: a preoperative CT angiography guided metal artifact reduction framework designed for follow-up CTA of endovascular coiling

Wed, 2024-07-03 06:00

Phys Med Biol. 2024 Jul 3. doi: 10.1088/1361-6560/ad5ef4. Online ahead of print.

ABSTRACT

Follow-up CT angiography (CTA) is necessary for ensuring the occlusion effect of endovascular coiling. However, the implanted metal coil introduces artifacts that negatively affect radiologic assessment.

Method. A framework named ReMAR is proposed in this paper for metal artifact reduction (MAR) in follow-up CTA of patients with coiled aneurysms. It employs preoperative CTA to provide prior knowledge of the aneurysm and the expected position of the coil as guidance, thus balancing metal artifact removal performance and clinical feasibility. ReMAR is composed of three modules: segmentation, registration, and MAR. The segmentation and registration modules obtain the metal-coil knowledge by delineating aneurysms on the preoperative CTA and aligning the follow-up CTA. The MAR module, consisting of hybrid CNN and transformer architectures, is utilized to restore the sinogram and remove the artifact from the reconstructed image. Both image quality and vessel rendering after metal artifact removal are assessed in order to address clinical concerns.

Main results. 137 patients who underwent endovascular coiling were enrolled in the study: 13 of them had complete diagnosis/follow-up records and were used for end-to-end validation, while the rest, lacking follow-up records, were used for model training. Quantitative metrics show that ReMAR significantly reduced the metal-artifact burden in follow-up CTA. Qualitative rankings show that ReMAR preserved the morphology of blood vessels during artifact removal, as desired by doctors.

Significance. ReMAR can significantly remove the artifacts caused by the implanted metal coil in follow-up CTA. It can be used to enhance overall image quality and make CTA a convincing alternative to invasive follow-up in treated intracranial aneurysms (IA).

PMID:38959913 | DOI:10.1088/1361-6560/ad5ef4

Categories: Literature Watch

Head and neck tumor segmentation from [18F]F-FDG PET/CT images based on 3D diffusion model

Wed, 2024-07-03 06:00

Phys Med Biol. 2024 Jul 3. doi: 10.1088/1361-6560/ad5ef2. Online ahead of print.

ABSTRACT

Head and neck (H&N) cancers are among the most prevalent types of cancer worldwide, and [18F]F-FDG PET/CT is widely used for H&N cancer management. Recently, the diffusion model has demonstrated remarkable performance in various image-generation tasks. In this work, we proposed a 3D diffusion model to accurately perform H&N tumor segmentation from 3D PET and CT volumes.

Approach. The 3D diffusion model was developed considering the 3D nature of the PET and CT images acquired. During the reverse process, the model utilized a 3D U-Net structure and took the concatenation of 3D PET, CT, and Gaussian noise volumes as the network input to generate the tumor mask. Experiments based on the HECKTOR challenge dataset were conducted to evaluate the effectiveness of the proposed diffusion model. Several state-of-the-art techniques based on U-Net and Transformer structures were adopted as the reference methods. The benefits of employing both PET and CT as the network input, as well as of extending the diffusion model from 2D to 3D, were investigated based on various quantitative metrics and the uncertainty maps generated.

Main results. Results showed that the proposed 3D diffusion model could generate more accurate segmentation results compared with the other methods (mean Dice of 0.739 compared to less than 0.726 for the other methods). Compared to the diffusion model in 2D format, the proposed 3D model yielded superior results (mean Dice of 0.739 compared to 0.669). Our experiments also highlighted the advantage of utilizing dual-modality PET and CT data over single-modality data for H&N tumor segmentation (with mean Dice less than 0.570).

Significance. This work demonstrated the effectiveness of the proposed 3D diffusion model in generating more accurate H&N tumor segmentation masks compared to the other reference methods.
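The key conditioning step in the reverse process above, concatenating the 3D PET, CT, and noise volumes into the network input, can be sketched as follows. This is a minimal numpy sketch: `toy_unet` is a hypothetical stand-in for the paper's 3D U-Net, and the channel layout is an assumption.

```python
import numpy as np

def reverse_step(pet, ct, noisy_mask, unet):
    # Concatenate PET, CT, and the current noisy mask along a channel
    # axis, as the abstract describes the network input, then let the
    # network predict a (less noisy) tumor mask.
    x = np.stack([pet, ct, noisy_mask], axis=0)  # (3, D, H, W)
    return unet(x)

# Toy "U-Net": averages the three channels; a real model would be a
# trained 3D U-Net predicting the denoised mask.
toy_unet = lambda x: x.mean(axis=0)

pet = np.random.rand(16, 64, 64)
ct = np.random.rand(16, 64, 64)
mask = np.random.randn(16, 64, 64)  # pure Gaussian noise at the first step

denoised = reverse_step(pet, ct, mask, toy_unet)
print(denoised.shape)  # (16, 64, 64)
```

In the full reverse diffusion process this step is iterated, feeding each output back in as the next `noisy_mask`, while the PET and CT channels stay fixed as conditioning.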

PMID:38959909 | DOI:10.1088/1361-6560/ad5ef2

Categories: Literature Watch

Thermal facial image analyses reveal quantitative hallmarks of aging and metabolic diseases

Wed, 2024-07-03 06:00

Cell Metab. 2024 Jul 2;36(7):1482-1493.e7. doi: 10.1016/j.cmet.2024.05.012.

ABSTRACT

Although human core body temperature is known to decrease with age, the age dependency of facial temperature and its potential to indicate aging rate or aging-related diseases remains uncertain. Here, we collected thermal facial images of 2,811 Han Chinese individuals 20-90 years old, developed the ThermoFace method to automatically process and analyze images, and then generated thermal age and disease prediction models. The ThermoFace deep learning model for thermal facial age has a mean absolute deviation of about 5 years in cross-validation and 5.18 years in an independent cohort. The difference between predicted and chronological age is highly associated with metabolic parameters, sleep time, and gene expression pathways like DNA repair, lipolysis, and ATPase in the blood transcriptome, and it is modifiable by exercise. Consistently, ThermoFace disease predictors forecast metabolic diseases like fatty liver with high accuracy (AUC > 0.80), with predicted disease probability correlated with metabolic parameters.
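The two evaluation quantities in the abstract, the mean absolute deviation of predicted thermal age and the gap between predicted and chronological age, are straightforward to compute. A minimal sketch with illustrative made-up numbers (not data from the study):

```python
import numpy as np

def mean_absolute_deviation(predicted, chronological):
    # The ~5-year error metric reported for the ThermoFace age model.
    return np.mean(np.abs(predicted - chronological))

def thermal_age_gap(predicted, chronological):
    # Positive gap: the face appears thermally older than the calendar
    # age; the abstract associates this gap with metabolic parameters.
    return predicted - chronological

chron = np.array([30.0, 45.0, 60.0, 72.0])  # hypothetical chronological ages
pred = np.array([34.0, 41.0, 66.0, 70.0])   # hypothetical model predictions

print(mean_absolute_deviation(pred, chron))  # 4.0
print(thermal_age_gap(pred, chron))          # [ 4. -4.  6. -2.]
```

The gap, rather than the raw prediction, is the quantity the study correlates with sleep time, blood transcriptome pathways, and exercise.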

PMID:38959862 | DOI:10.1016/j.cmet.2024.05.012

Categories: Literature Watch

A Deep Learning-Based Rotten Food Recognition App for Older Adults: Development and Usability Study

Wed, 2024-07-03 06:00

JMIR Form Res. 2024 Jul 3;8:e55342. doi: 10.2196/55342.

ABSTRACT

BACKGROUND: Older adults are at greater risk of eating rotten fruits and of getting food poisoning because cognitive function declines as they age, making it difficult to distinguish rotten fruits. To address this problem, researchers have developed and evaluated various tools to detect rotten food items in various ways. Nevertheless, little is known about how to create an app to detect rotten food items to support older adults at risk of health problems from eating rotten food items.

OBJECTIVE: This study aimed to (1) create a smartphone app that enables older adults to take a picture of food items with a camera and classifies the fruit as rotten or not rotten for older adults and (2) evaluate the usability of the app and the perceptions of older adults about the app.

METHODS: We developed a smartphone app that supports older adults in determining whether the 3 fruits selected for this study (apple, banana, and orange) were fresh enough to eat. We used several residual deep networks to check whether the fruit photos collected were of fresh fruit. We recruited healthy older adults aged over 65 years (n=15, 57.7%, males and n=11, 42.3%, females) as participants. We evaluated the usability of the app and the participants' perceptions about the app through surveys and interviews. We analyzed the survey responses, including an after-scenario questionnaire, as evaluation indicators of the usability of the app and collected qualitative data from the interviewees for in-depth analysis of the survey responses.

RESULTS: The participants were satisfied with using an app to determine whether a fruit is fresh by taking a picture of the fruit but were reluctant to use the paid version of the app. The survey results revealed that the participants tended to use the app efficiently to take pictures of fruits and determine their freshness. The qualitative data analysis on app usability and participants' perceptions about the app revealed that they found the app simple and easy to use, they had no difficulty taking pictures, and they found the app interface visually satisfactory.

CONCLUSIONS: This study suggests the possibility of developing an app that supports older adults in identifying rotten food items effectively and efficiently. Future work remains to extend the app to distinguish the freshness of food items beyond the 3 fruits selected.
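The "residual deep networks" named in the methods are built from blocks that add their own output back onto their input via a skip connection. A minimal numpy sketch of one such block; the weights, sizes, and activation are illustrative assumptions, not the study's trained models.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Core idea of a residual network: the block learns a correction
    # f(x) that is added back to its input (the skip connection),
    # which makes very deep classifiers easier to train.
    h = relu(x @ w1)
    return relu(x + h @ w2)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))          # toy feature vector for one photo
w1 = rng.standard_normal((8, 8)) * 0.1   # hypothetical learned weights
w2 = rng.standard_normal((8, 8)) * 0.1
out = residual_block(x, w1, w2)
print(out.shape)  # (1, 8)
```

In the app described above, a stack of such blocks would map a fruit photo to features, followed by a final classification layer producing the fresh/rotten decision.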

PMID:38959501 | DOI:10.2196/55342

Categories: Literature Watch
