Deep learning

Advancing diagnostic performance and clinical applicability of deep learning-driven generative adversarial networks for Alzheimer's disease

Fri, 2024-04-26 06:00

Psychoradiology. 2021 Dec 23;1(4):225-248. doi: 10.1093/psyrad/kkab017. eCollection 2021 Dec.

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disease that severely affects the activities of daily living of aged individuals and typically needs to be diagnosed at an early stage. Generative adversarial networks (GANs) are a deep learning method that shows good performance in image processing, but whether GANs bring benefit to AD diagnosis remains to be verified. The purpose of this research is to systematically review psychoradiological studies on the application of GANs to the diagnosis of AD, covering both classification of AD state and AD-related image processing, in comparison with other methods. In addition, we evaluated the research methodology and provided suggestions from the perspective of clinical application. Compared with other methods, GANs achieve higher accuracy in the classification of AD state and better performance in AD-related image processing (e.g. image denoising and segmentation). Most studies used data from public databases but lacked clinical validation, and the quantitative assessment and comparison in these studies lacked clinicians' participation, which may limit improvement of the generative quality and generalization ability of GAN models. The application value of GANs in the classification of AD state and AD-related image processing has been confirmed in the reviewed studies. Improvements toward better GAN architectures are also discussed in this paper. In sum, the present study demonstrates the advancing diagnostic performance and clinical applicability of GANs for AD, and suggests that future researchers should recruit clinicians to compare the algorithms with clinicians' manual methods and to evaluate their clinical effect.
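
For readers unfamiliar with the model class under review, all the surveyed architectures extend the standard GAN objective (Goodfellow et al., 2014), a minimax game between a generator G and a discriminator D:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\!\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to separate real images from generated ones while G is trained to fool it; the task-specific variants reviewed here replace the noise input z with an image (e.g. a noisy or incomplete scan) to be translated.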

PMID:38666217 | PMC:PMC10917234 | DOI:10.1093/psyrad/kkab017

Categories: Literature Watch

SpecReFlow: an algorithm for specular reflection restoration using flow-guided video completion

Fri, 2024-04-26 06:00

J Med Imaging (Bellingham). 2024 Mar;11(2):024012. doi: 10.1117/1.JMI.11.2.024012. Epub 2024 Apr 24.

ABSTRACT

PURPOSE: Specular reflections (SRs) are highlight artifacts commonly found in endoscopy videos that can severely disrupt a surgeon's observation and judgment. Despite numerous attempts to restore SRs, existing methods are inefficient and time-consuming and can lead to false clinical interpretations. Therefore, we propose the first complete deep-learning solution, SpecReFlow, to detect and restore SR regions from endoscopy video with spatial and temporal coherence.

APPROACH: SpecReFlow consists of three stages: (1) an image preprocessing stage to enhance contrast, (2) a detection stage to indicate where the SR region is present, and (3) a restoration stage in which we replace SR pixels with an accurate underlying tissue structure. Our restoration approach uses optical flow to seamlessly propagate color and structure from other frames of the endoscopy video.
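
The abstract does not give implementation details, but the core idea of the restoration stage, filling masked pixels with flow-warped content from a neighboring frame, can be sketched with OpenCV primitives. The function and argument names below are illustrative, not the authors' code:

```python
import cv2
import numpy as np

def fill_specular_pixels(curr_gray, curr_color, ref_gray, ref_color, sr_mask):
    """Warp a neighbouring frame onto the current one with dense optical
    flow and copy its pixels into the detected SR region (a generic
    flow-guided fill, not the published SpecReFlow pipeline)."""
    # Dense flow from the current frame to the reference frame.
    flow = cv2.calcOpticalFlowFarneback(
        curr_gray, ref_gray, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = curr_gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    # Backward-warp the reference frame into the current frame's geometry.
    warped = cv2.remap(ref_color, map_x, map_y, cv2.INTER_LINEAR)
    restored = curr_color.copy()
    restored[sr_mask > 0] = warped[sr_mask > 0]  # fill only SR pixels
    return restored
```

In practice a learned flow-completion network (as in flow-guided video inpainting) replaces the classical Farneback flow used here for brevity.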

RESULTS: Comprehensive quantitative and qualitative tests for each stage reveal that our SpecReFlow solution performs better than previous detection and restoration methods. Our detection stage achieves a Dice score of 82.8% and a sensitivity of 94.6%, and our restoration stage successfully incorporates temporal information with spatial information for more accurate restorations than existing techniques.

CONCLUSIONS: SpecReFlow is a first-of-its-kind solution that combines temporal and spatial information for effective detection and restoration of SR regions, surpassing previous methods relying on single-frame spatial information. Future work will focus on optimizing SpecReFlow for real-time applications. SpecReFlow is a software-only solution for restoring image content lost due to SR, making it readily deployable in existing clinical settings to improve endoscopy video quality for accurate diagnosis and treatment.

PMID:38666040 | PMC:PMC11042492 | DOI:10.1117/1.JMI.11.2.024012

Categories: Literature Watch

Incremental Learning for Heterogeneous Structure Segmentation in Brain Tumor MRI

Fri, 2024-04-26 06:00

Med Image Comput Comput Assist Interv. 2023 Oct;14221:46-56. doi: 10.1007/978-3-031-43895-0_5. Epub 2023 Oct 1.

ABSTRACT

Deep learning (DL) models for segmenting various anatomical structures have achieved great success via a static DL model trained in a single source domain. Yet, a static DL model is likely to perform poorly in a continually evolving environment, requiring appropriate model updates. In an incremental learning setting, we would expect well-trained static models to be updated, following continually evolving target domain data (e.g., additional lesions or structures of interest) collected from different sites, without catastrophic forgetting. This, however, poses challenges due to distribution shifts, additional structures not seen during the initial model training, and the absence of training data from the source domain. To address these challenges, in this work, we seek to progressively evolve an "off-the-shelf" trained segmentation model to diverse datasets with additional anatomical categories in a unified manner. Specifically, we first propose a divergence-aware dual-flow module with balanced rigidity and plasticity branches to decouple old and new tasks, guided by continuous batch renormalization. Then, a complementary pseudo-label training scheme with self-entropy regularized momentum MixUp decay is developed for adaptive network optimization. We evaluated our framework on a brain tumor segmentation task with continually changing target domains (i.e., new MRI scanners/modalities with incremental structures). Our framework was able to retain the discriminability of previously learned structures well, hence enabling realistic life-long extension of segmentation models alongside the widespread accumulation of big medical data.
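
The dual-flow module is specific to this paper, but the underlying incremental-segmentation recipe it refines, supervising new structures with fresh labels while distilling old-structure predictions from the frozen source model, can be sketched as follows. This is a generic baseline under assumed tensor shapes, not the authors' method:

```python
import torch.nn.functional as F

def incremental_seg_loss(new_logits, old_logits, target_new, n_old, T=2.0):
    """Generic incremental-segmentation objective: cross-entropy on the
    newly annotated structures plus distillation toward the frozen old
    model's predictions for previously learned structures.
    new_logits: (B, C_total, H, W) from the updated model;
    old_logits: (B, >=n_old, H, W) from the frozen source model."""
    # Supervised term for the new structures (255 = unlabeled pixels).
    ce = F.cross_entropy(new_logits, target_new, ignore_index=255)
    # Distillation term: keep old-class probabilities close to the
    # teacher's output (temperature-scaled KL divergence).
    p_teacher = F.softmax(old_logits[:, :n_old] / T, dim=1)
    log_p_student = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
    return ce + kd
```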

PMID:38665992 | PMC:PMC11045038 | DOI:10.1007/978-3-031-43895-0_5

Categories: Literature Watch

Creatinine-to-cystatin C ratio and body composition predict response to PD-1 inhibitors-based combination treatment in metastatic gastric cancer

Fri, 2024-04-26 06:00

Front Immunol. 2024 Apr 11;15:1364728. doi: 10.3389/fimmu.2024.1364728. eCollection 2024.

ABSTRACT

BACKGROUND: The creatinine-to-cystatin C ratio (CCR) and body composition (BC) parameters have emerged as significant prognostic factors in cancer patients. However, the potential effects of CCR in gastric cancer (GC) remain to be elucidated. This multi-center retrospective study explored the predictive and prognostic value of CCR and BC-parameters in patients with metastatic GC receiving PD-1 inhibitor-based combination therapy.

METHODS: One hundred and thirteen GC patients undergoing PD-1 inhibitor-based combination therapy were enrolled at three academic medical centers from January 2021 to July 2023. A deep-learning platform based on U-Net was developed to automatically segment and quantify the skeletal muscle index (SMI), subcutaneous adipose tissue index (SATI) and visceral adipose tissue index (VATI). Patients were divided into two groups based on the median CCR or the upper tertile of BC-parameters. Logistic and Cox regression analyses were used to determine the effect of CCR and BC-parameters in predicting response rates and survival rates.
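
CCR itself is a simple ratio; a minimal sketch of the dichotomization used in the analysis (column names, units and values are hypothetical, and unit conventions for CCR vary between studies):

```python
import pandas as pd

# Hypothetical lab values; CCR = serum creatinine / cystatin C.
df = pd.DataFrame({
    "creatinine_mg_dl": [0.9, 1.1, 0.7, 1.3],
    "cystatin_c_mg_l":  [0.8, 1.2, 0.9, 1.0],
})
df["ccr"] = df["creatinine_mg_dl"] / df["cystatin_c_mg_l"]
# Split at the cohort median, as described in the study.
df["ccr_group"] = (df["ccr"] >= df["ccr"].median()).map(
    {True: "high", False: "low"})
```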

RESULTS: The CCR was positively correlated with SMI (r=0.43; P<0.001), but not with SATI or VATI (P>0.05). Multivariable logistic analysis identified that both low CCR (OR=0.423, P=0.066 for ORR; OR=0.026, P=0.005 for DCR) and low SATI (OR=0.270, P=0.020 for ORR; OR=0.149, P=0.056 for DCR) were independently associated with worse objective response rate (ORR) and disease control rate (DCR). Patients with low CCR or low SATI had significantly lower 8-month progression-free survival (PFS) rate and 16-month overall survival (OS) rate than those with high CCR (PFS rate, 37.6% vs. 55.1%, P=0.011; OS rate, 19.4% vs. 44.9%, P=0.002) or those with high SATI (PFS rate, 37.2% vs. 53.8%, P=0.035; OS rate, 8.0% vs. 36.0%, P<0.001). Multivariate Cox analysis showed that low CCR (HR=2.395, 95% CI: 1.234-4.648, P=0.010 for PFS rate; HR=2.528, 95% CI: 1.317-4.854, P=0.005 for OS rate) and low SATI (HR=2.188, 95% CI: 1.050-4.560, P=0.037 for PFS rate; HR=2.818, 95% CI: 1.381-5.752, P=0.004 for OS rate) were both independent prognostic factors of poor 8-month PFS rate and 16-month OS rate. A nomogram based on CCR and BC-parameters showed a good performance in predicting the 12- and 16-month OS, with a concordance index of 0.756 (95% CI, 0.722-0.789).

CONCLUSIONS: Low pre-treatment CCR and SATI were independently associated with lower response rates and worse survival in patients with metastatic GC receiving PD-1 inhibitor-based combination therapy.

PMID:38665913 | PMC:PMC11043572 | DOI:10.3389/fimmu.2024.1364728

Categories: Literature Watch

Deep learning-assisted concentration gradient generation for the study of 3D cell cultures in hydrogel beads of varying stiffness

Fri, 2024-04-26 06:00

Front Bioeng Biotechnol. 2024 Apr 11;12:1364553. doi: 10.3389/fbioe.2024.1364553. eCollection 2024.

ABSTRACT

The study of dose-response relationships underpins analytical biosciences. Droplet microfluidics platforms can automate the generation of microreactors encapsulating varying concentrations of an assay component, providing datasets across a large chemical space in a single experiment. A classical method consists of varying the flow rate of multiple solutions co-flowing into a single microchannel (producing different volume fractions) before encapsulating the contents into water-in-oil droplets. This process can be automated through controlling the pumping elements but lacks the ability to adapt to unpredictable experimental scenarios, often requiring constant human supervision. In this paper, we introduce an image-based, closed-loop control system for assessing and adjusting volume fractions, thereby generating unsupervised, uniform concentration gradients. We trained a shallow convolutional neural network to assess the position of the laminar flow interface between two co-flowing fluids and used this model to adjust flow rates in real-time. We apply the method to generate alginate microbeads in which HEK293FT cells could grow in three dimensions. The stiffnesses ranged from 50 Pa to close to 1 kPa in Young's modulus and were encoded with a fluorescent marker. We trained deep learning models based on the YOLOv4 object detector to efficiently detect both microbeads and multicellular spheroids from high-content screening images. This allowed us to map relationships between hydrogel stiffness and multicellular spheroid growth.
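
The closed-loop idea can be illustrated with a minimal proportional controller. The gain and the interface parameterization below are hypothetical; in the published system, the feedback signal is the CNN's estimate of the laminar interface position:

```python
def update_flow_rates(q_a, q_b, interface_pos, target_pos, gain=0.05):
    """One step of a proportional feedback loop: nudge the two co-flow
    rates so the estimated interface position (0..1 across the channel
    width, a proxy for the volume fraction) moves toward the target."""
    q_total = q_a + q_b
    error = target_pos - interface_pos
    # Shift flow toward stream A in proportion to the error, clamped
    # to keep both pumps within [0, q_total].
    q_a = min(max(q_a + gain * error * q_total, 0.0), q_total)
    q_b = q_total - q_a  # keep total flow (and droplet size) constant
    return q_a, q_b
```

Holding the total flow constant is the key design choice: it decouples the concentration gradient from droplet generation frequency and size.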

PMID:38665812 | PMC:PMC11044700 | DOI:10.3389/fbioe.2024.1364553

Categories: Literature Watch

Artificial Intelligence-Powered Mammography: Navigating the Landscape of Deep Learning for Breast Cancer Detection

Fri, 2024-04-26 06:00

Cureus. 2024 Mar 26;16(3):e56945. doi: 10.7759/cureus.56945. eCollection 2024 Mar.

ABSTRACT

Worldwide, breast cancer (BC) is one of the most commonly diagnosed malignancies in women. Early detection is key to improving survival rates and health outcomes. This literature review focuses on how artificial intelligence (AI), especially deep learning (DL), can enhance the ability of mammography, a key tool in BC detection, to yield more accurate results. Artificial intelligence has shown promise in reducing diagnostic errors and increasing early cancer detection chances. Nevertheless, significant challenges exist, including the requirement for large amounts of high-quality data and concerns over data privacy. Despite these hurdles, AI and DL are advancing the field of radiology, offering better ways to diagnose, detect, and treat diseases. The U.S. Food and Drug Administration (FDA) has approved several AI diagnostic tools. Yet, the full potential of these technologies, especially for more advanced screening methods like digital breast tomosynthesis (DBT), depends on further clinical studies and the development of larger databases. In summary, this review highlights the exciting potential of AI in BC screening. It calls for more research and validation to fully employ the power of AI in clinical practice, ensuring that these technologies can help save lives by improving diagnosis accuracy and efficiency.

PMID:38665752 | PMC:PMC11044525 | DOI:10.7759/cureus.56945

Categories: Literature Watch

Multi-Head Graph Convolutional Network for Structural Connectome Classification

Fri, 2024-04-26 06:00

Graphs Biomed Image Anal Overlapped Cell Tissue Dataset Histopathol (2023). 2024;14373:27-36. doi: 10.1007/978-3-031-55088-1_3. Epub 2024 Mar 10.

ABSTRACT

We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain-connectivity input graph and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, thoroughly capturing representations from the input data. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies depending on sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model demonstrates the highest performance compared to the existing machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning. We provide a detailed analysis of each component of our model.
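
The abstract describes parallel edge- and node-focused heads; a minimal sketch of the multi-head pattern over a single connectome with normalized adjacency `a_hat` (the exact published convolutions differ):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Basic graph convolution: H' = relu(A_hat @ H @ W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, a_hat, h):
        return torch.relu(self.lin(a_hat @ h))

class MultiHeadGCN(nn.Module):
    """Parallel GCN heads over the same graph; per-head readouts are
    concatenated for classification (a generic reading of the design)."""
    def __init__(self, d_in, d_hid, n_heads=3, n_classes=2):
        super().__init__()
        self.heads = nn.ModuleList(
            [GCNLayer(d_in, d_hid) for _ in range(n_heads)])
        self.cls = nn.Linear(n_heads * d_hid, n_classes)
    def forward(self, a_hat, x):     # a_hat: (N, N), x: (N, d_in)
        # Mean-pool node embeddings from each head, then concatenate.
        z = torch.cat([head(a_hat, x).mean(dim=0) for head in self.heads])
        return self.cls(z)
```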

PMID:38665679 | PMC:PMC11044650 | DOI:10.1007/978-3-031-55088-1_3

Categories: Literature Watch

BT-CNN: a balanced binary tree architecture for classification of brain tumour using MRI imaging

Fri, 2024-04-26 06:00

Front Physiol. 2024 Apr 11;15:1349111. doi: 10.3389/fphys.2024.1349111. eCollection 2024.

ABSTRACT

Deep learning is a very important technique in clinical diagnosis and therapy in the present world. The Convolutional Neural Network (CNN) is a deep learning development widely used in computer vision. Our medical investigation focuses on the identification of brain tumours. To improve brain tumour classification performance, a Balanced binary Tree CNN (BT-CNN), framed in a binary tree-like structure, is proposed. It has two distinct modules: the convolution group and the depthwise separable convolution group. The convolution group achieves lower time at the cost of higher memory, while the opposite is true for the depthwise separable convolution group. This balanced binary tree-inspired CNN balances both groups to achieve maximum performance in terms of time and space. The proposed model, along with state-of-the-art models like CNN-KNN and the models proposed by Musallam et al., Saikat et al., and Amin et al., was experimented on public datasets. Before the data are fed into the model, the images are pre-processed using CLAHE, denoising, cropping, and scaling. The pre-processed dataset is partitioned into training and testing datasets using 5-fold cross-validation. The proposed model reported an average training accuracy of 99.61%, higher than the other models. The proposed model achieved 96.06% test accuracy, whereas the other models achieved 68.86%, 85.8%, 86.88%, and 90.41%, respectively. Further, the proposed model obtained the lowest standard deviation in training and test accuracies across all folds, making it robust to dataset variation.
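
The depthwise separable convolution that forms the second module is a standard building block; it factors a convolution into a per-channel spatial filter and a 1x1 channel mixer, trading a small accuracy cost for far fewer multiply-accumulates. A minimal PyTorch sketch:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel, groups=c_in) + pointwise (1x1) conv,
    the memory/time trade-off discussed in the abstract."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2,
                                   groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```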

PMID:38665597 | PMC:PMC11043606 | DOI:10.3389/fphys.2024.1349111

Categories: Literature Watch

Spleen volume is independently associated with non-alcoholic fatty liver disease, liver volume and liver fibrosis

Fri, 2024-04-26 06:00

Heliyon. 2024 Mar 17;10(8):e28123. doi: 10.1016/j.heliyon.2024.e28123. eCollection 2024 Apr 30.

ABSTRACT

Non-alcoholic fatty liver disease (NAFLD) can lead to irreversible liver damage manifesting in systemic effects (e.g., elevated portal vein pressure and splenomegaly) with an increased risk of fatal outcomes. However, the association of spleen volume with NAFLD and related type 2 diabetes (T2D) is not fully understood. The UK Biobank contains comprehensive health data of 500,000 participants, including clinical data and MR images of >40,000 individuals. The present study estimated the spleen volume of 37,066 participants through automated deep learning-based image segmentation of neck-to-knee MR images. The aim was to investigate the associations of spleen volume with NAFLD, T2D and liver fibrosis, while adjusting for natural confounders. The recent redefinition and new designation of NAFLD as metabolic dysfunction-associated steatotic liver disease (MASLD), promoted by major organisations in the field of liver disease, was not employed, as it was introduced after this study was conducted. The results showed that spleen volume decreased with age, correlated positively with body size and was smaller in females than in males. Larger spleens were observed in subjects with NAFLD and T2D compared to controls. Spleen volume was also positively and independently associated with liver fat fraction, liver volume and the fibrosis-4 score, with notable volumetric increases already at low liver fat fractions and volumes, but was not independently associated with T2D. These results suggest a link between spleen volume and NAFLD already at an early stage of the disease, potentially due to an initial rise in portal vein pressure.
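
The fibrosis-4 (FIB-4) score referenced above is computed from routine labs; a worked example using the commonly cited formula (the study's exact lab handling is not stated in the abstract):

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """Fibrosis-4 index as commonly defined:
    (age [y] * AST [U/L]) / (platelets [10^9/L] * sqrt(ALT [U/L]))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

fib4(55, 40, 35, 220)  # ~1.69, between the common 1.30 and 2.67 cutoffs
```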

PMID:38665588 | PMC:PMC11043861 | DOI:10.1016/j.heliyon.2024.e28123

Categories: Literature Watch

A transformer-based approach empowered by a self-attention technique for semantic segmentation in remote sensing

Fri, 2024-04-26 06:00

Heliyon. 2024 Apr 12;10(8):e29396. doi: 10.1016/j.heliyon.2024.e29396. eCollection 2024 Apr 30.

ABSTRACT

Semantic segmentation of Remote Sensing (RS) images involves the classification of each pixel in a satellite image into distinct and non-overlapping regions or segments. This task is crucial in various domains, including land cover classification, autonomous driving, and scene understanding. While deep learning has shown promising results, there is limited research that specifically addresses the challenge of processing fine details in RS images while also considering the high computational demands. To tackle this issue, we propose a novel approach that combines convolutional and transformer architectures. Our design incorporates convolutional layers with a low receptive field to generate fine-grained feature maps for small objects in very high-resolution images. On the other hand, transformer blocks are utilized to capture contextual information from the input. By leveraging convolution and self-attention in this manner, we reduce the need for extensive downsampling and enable the network to work with full-resolution features, which is particularly beneficial for handling small objects. Additionally, our approach eliminates the requirement for vast datasets, which is often necessary for purely transformer-based networks. In our experimental results, we demonstrate the effectiveness of our method in generating local and contextual features using convolutional and transformer layers, respectively. Our approach achieves a mean Dice score of 80.41%, outperforming other well-known techniques such as UNet, the Fully Convolutional Network (FCN), the Pyramid Scene Parsing Network (PSP Net), and the recent Convolutional vision Transformer (CvT) model, which achieved mean Dice scores of 78.57%, 74.57%, 73.45%, and 62.97% respectively, under the same training conditions and using the same training dataset.
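
A minimal sketch of the hybrid pattern described above: a small-receptive-field convolutional stem for fine detail, followed by a transformer encoder layer over the flattened feature map for global context. This is a generic illustration, not the published architecture, and real images would need windowed or downsampled tokens to keep the attention cost manageable:

```python
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    """Conv stem (local detail) + self-attention over tokens (context)."""
    def __init__(self, c_in=3, d=64, n_heads=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(c_in, d, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(d, d, 3, padding=1), nn.ReLU(inplace=True))
        self.attn = nn.TransformerEncoderLayer(
            d_model=d, nhead=n_heads, batch_first=True)
    def forward(self, x):
        f = self.stem(x)                       # (B, d, H, W), full resolution
        b, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W, d)
        tokens = self.attn(tokens)             # global self-attention
        return tokens.transpose(1, 2).reshape(b, d, h, w)
```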

PMID:38665569 | PMC:PMC11043938 | DOI:10.1016/j.heliyon.2024.e29396

Categories: Literature Watch

Unlocking the potential of generative AI in drug discovery

Thu, 2024-04-25 06:00

Drug Discov Today. 2024 Apr 23:103992. doi: 10.1016/j.drudis.2024.103992. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) is revolutionizing drug discovery by enhancing precision, reducing timelines and costs, and enabling AI-driven computer-aided drug design. This review focuses on recent advancements in deep generative models (DGMs) for de novo drug design, exploring diverse algorithms and their profound impact. It critically analyses the challenges that are intricately interwoven into these technologies, proposing strategies to unlock their full potential. It features case studies of both successes and failures in advancing drugs to clinical trials with AI assistance. Last, it outlines a forward-looking plan for optimizing DGMs in de novo drug design, thereby fostering faster and more cost-effective drug development.

PMID:38663579 | DOI:10.1016/j.drudis.2024.103992

Categories: Literature Watch

A lightweight deep learning approach for detecting electrocardiographic lead misplacement

Thu, 2024-04-25 06:00

Physiol Meas. 2024 Apr 25. doi: 10.1088/1361-6579/ad43ae. Online ahead of print.

ABSTRACT

OBJECTIVE: Electrocardiographic (ECG) lead misplacement can result in distorted waveforms and amplitudes, significantly impacting accurate interpretation. Although lead misplacement is a relatively low-probability event, with an incidence ranging from 0.4% to 4%, the large number of ECG records in clinical practice necessitates the development of an effective detection method. This paper aimed to address this gap by presenting a novel lead misplacement detection method based on deep learning models.

APPROACH: We developed two novel lightweight deep learning models for limb and chest lead misplacement detection, respectively. For limb lead misplacement detection, two limb leads and V6 were used as inputs, while for chest lead misplacement detection, six chest leads were used as inputs. Our models were trained and validated using the Chapman database, with an 8:2 train-validation split, and evaluated on the PTB-XL, PTB, and LUDB databases. Additionally, we examined model interpretability on the LUDB database. Limb lead misplacements were simulated using mathematical transformations, while chest lead misplacement scenarios were simulated by interchanging pairs of leads. Detection performance was assessed using metrics such as accuracy, precision, sensitivity, specificity, and Macro F1-score.
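
The limb-lead transformations follow from Einthoven's relations: a left-arm/right-arm electrode swap, for example, inverts lead I, interchanges leads II and III, interchanges aVR and aVL, and leaves aVF unchanged. A sketch of both simulation types (the dictionary layout is illustrative; the paper's exact simulation code is on its GitHub page):

```python
def simulate_la_ra_swap(leads):
    """Simulate a left-arm/right-arm interchange on a 12-lead record
    stored as {'I': array, 'II': array, ...}."""
    out = dict(leads)
    out["I"] = -leads["I"]                             # I = LA - RA flips sign
    out["II"], out["III"] = leads["III"], leads["II"]  # II and III swap
    out["aVR"], out["aVL"] = leads["aVL"], leads["aVR"]
    return out                                         # aVF is unchanged

def simulate_chest_swap(leads, a="V1", b="V2"):
    """Chest-lead misplacement: interchange one pair of precordial leads."""
    out = dict(leads)
    out[a], out[b] = leads[b], leads[a]
    return out
```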

MAIN RESULTS: Our experiments simulated three scenarios of limb lead misplacement and nine scenarios of chest lead misplacement. The two proposed models achieved Macro F1-scores ranging from 93.42% to 99.61% on two heterogeneous test sets, demonstrating their effectiveness in accurately detecting lead misplacement across various arrhythmias.

SIGNIFICANCE: The significance of this study lies in providing a reliable open-source algorithm for lead misplacement detection in ECG recordings. The source code is available at https://github.com/wjcai/ECG_lead_check.

PMID:38663434 | DOI:10.1088/1361-6579/ad43ae

Categories: Literature Watch

Improving speech depression detection using transfer learning with wav2vec 2.0 in low-resource environments

Thu, 2024-04-25 06:00

Sci Rep. 2024 Apr 25;14(1):9543. doi: 10.1038/s41598-024-60278-1.

ABSTRACT

Depression, a pervasive global mental disorder, profoundly impacts daily lives. Despite numerous deep learning studies focused on depression detection through speech analysis, the shortage of large annotated datasets hampers the development of effective models. In response to this challenge, our research introduces a transfer learning approach for detecting depression in speech, aiming to overcome constraints imposed by limited resources. In the context of feature representation, we obtain depression-related features by fine-tuning wav2vec 2.0. By integrating 1D-CNN and attention pooling structures, we generate advanced features at the segment level, thereby enhancing the model's capability to capture temporal relationships within audio frames. In the realm of prediction results, we integrate LSTM and self-attention mechanisms. This incorporation assigns greater weights to segments associated with depression, thereby augmenting the model's discernment of depression-related information. The experimental results indicate that our model has achieved impressive F1 scores, reaching 79% on the DAIC-WOZ dataset and 90.53% on the CMDC dataset. It outperforms recent baseline models in the field of speech-based depression detection. This provides a promising solution for effective depression detection in low-resource environments.
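
Attention pooling, the stage that collapses frame-level wav2vec 2.0 features into a single segment embedding, can be sketched in a few lines (a generic formulation, with tensor shapes assumed rather than taken from the paper):

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Weighted average of frame features with learned attention scores."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)
    def forward(self, h):                        # h: (B, T, d) frame features
        w = torch.softmax(self.score(h), dim=1)  # (B, T, 1) frame weights
        return (w * h).sum(dim=1)                # (B, d) segment embedding
```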

PMID:38664511 | DOI:10.1038/s41598-024-60278-1

Categories: Literature Watch

Resolution of tonic concentrations of highly similar neurotransmitters using voltammetry and deep learning

Thu, 2024-04-25 06:00

Mol Psychiatry. 2024 Apr 25. doi: 10.1038/s41380-024-02537-1. Online ahead of print.

ABSTRACT

With advances in our understanding of the neurochemical underpinnings of neurological and psychiatric diseases, there is an increased demand for advanced computational methods for neurochemical analysis. Despite having a variety of techniques for measuring tonic extracellular concentrations of neurotransmitters, including voltammetry, enzyme-based sensors, amperometry, and in vivo microdialysis, there is currently no means to resolve concentrations of structurally similar neurotransmitters from mixtures in the in vivo environment with high spatiotemporal resolution and limited tissue damage. Since a variety of research and clinical investigations involve brain regions containing electrochemically similar monoamines, such as dopamine and norepinephrine, developing a model to resolve the respective contributions of these neurotransmitters is of vital importance. Here we have developed a deep learning network, DiscrimNet, a convolutional autoencoder capable of accurately predicting individual tonic concentrations of dopamine, norepinephrine, and serotonin from both in vitro mixtures and the in vivo environment in anesthetized rats, measured using voltammetry. The architecture of DiscrimNet is described, and its ability to accurately predict in vitro and unseen in vivo concentrations is shown to vastly outperform a variety of shallow learning algorithms previously used for neurotransmitter discrimination. DiscrimNet is shown to generalize well to data captured from electrodes unseen during model training, eliminating the need to retrain the model for each new electrode. DiscrimNet is also shown to accurately predict the expected changes in dopamine and serotonin after cocaine and oxycodone administration in anesthetized rats in vivo. DiscrimNet therefore offers an exciting new method for real-time resolution of in vivo voltammetric signals into component neurotransmitters.
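
DiscrimNet itself is a convolutional autoencoder whose exact layout is given in the paper; as a generic stand-in for the task shape (voltammogram sweep in, three concentrations out), a 1D convolutional regressor looks like this, with the sweep length and channel sizes assumed:

```python
import torch.nn as nn

class VoltammetryRegressor(nn.Module):
    """1D conv encoder over a voltammetric sweep with a regression head
    for [dopamine, norepinephrine, serotonin] tonic concentrations."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.head = nn.Linear(32 * 8, 3)
    def forward(self, x):            # x: (B, 1, sweep_length)
        return self.head(self.enc(x))
```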

PMID:38664492 | DOI:10.1038/s41380-024-02537-1

Categories: Literature Watch

An efficient lightweight network for image denoising using progressive residual and convolutional attention feature fusion

Thu, 2024-04-25 06:00

Sci Rep. 2024 Apr 25;14(1):9554. doi: 10.1038/s41598-024-60139-x.

ABSTRACT

While deep learning has become the go-to method for image denoising due to its impressive noise removal capabilities, excessive network depth often plagues existing approaches, leading to significant computational burdens. To address this critical bottleneck, we propose a novel lightweight progressive residual and attention mechanism fusion network that effectively alleviates these limitations. This architecture tackles both Gaussian and real-world image noise with exceptional efficacy. The network begins with dense blocks (DBs) tasked with discerning the noise distribution; this substantially reduces network parameters while comprehensively extracting local image features. The network then adopts a progressive strategy, whereby shallow convolutional features are incrementally integrated with deeper features, establishing a residual fusion framework adept at extracting global features relevant to noise characteristics. The process concludes by integrating the output feature maps from each DB and the robust edge features from the convolutional attention feature fusion module (CAFFM). These combined elements are then directed to the reconstruction layer, ultimately producing the final denoised image. Empirical analyses conducted in environments characterized by Gaussian white noise and natural noise, spanning noise levels 15-50, indicate a marked enhancement in performance. This is quantitatively corroborated by increased average values of metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index for Color images (FSIMc), outperforming more than 20 existing methods across six varied datasets. Collectively, the network delineated in this research exhibits exceptional adeptness in image denoising while preserving essential image features such as edges and textures, signifying a notable progression in the domain of image processing. The proposed model finds applicability in a range of image-centric domains, encompassing image processing, computer vision, video analysis, and pattern recognition.
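
Of the reported metrics, PSNR has a simple closed form, 10 * log10(MAX^2 / MSE); a minimal reference implementation:

```python
import numpy as np

def psnr(clean, denoised, data_range=255.0):
    """Peak signal-to-noise ratio between a clean image and its
    denoised estimate, in dB (higher is better)."""
    mse = np.mean((clean.astype(np.float64) - denoised) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```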

PMID:38664440 | DOI:10.1038/s41598-024-60139-x

Categories: Literature Watch

Automatic brain-tumor diagnosis using cascaded deep convolutional neural networks with symmetric U-Net and asymmetric residual-blocks

Thu, 2024-04-25 06:00

Sci Rep. 2024 Apr 25;14(1):9501. doi: 10.1038/s41598-024-59566-7.

ABSTRACT

The use of various kinds of magnetic resonance imaging (MRI) techniques for examining brain tissue has increased significantly in recent years, and manual investigation of each of the resulting images can be a time-consuming task. This paper presents an automatic brain-tumor diagnosis system that uses a CNN for detection, classification, and segmentation of glioblastomas; the latter stage seeks to segment tumors inside glioma MRI images. The structure of the developed multi-unit system consists of two stages. The first stage is responsible for tumor detection and classification by categorizing brain MRI images into normal, high-grade glioma (glioblastoma), and low-grade glioma. The uniqueness of the proposed network lies in its use of different levels of features, including local and global paths. The second stage is responsible for tumor segmentation, and skip connections and residual units are used during this step. Using 1800 images extracted from the BraTS 2017 dataset, the detection and classification stage was found to achieve a maximum accuracy of 99%. The segmentation stage was then evaluated using the Dice score, specificity, and sensitivity. The results showed that the suggested deep-learning-based system ranks highest among a variety of different strategies reported in the literature.
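
The Dice score used to evaluate the segmentation stage has a simple closed form, 2|A and B| / (|A| + |B|); a minimal reference implementation for binary masks:

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient for two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0
```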

PMID:38664436 | DOI:10.1038/s41598-024-59566-7

Categories: Literature Watch

Deep learning the cis-regulatory code for gene expression in selected model plants

Thu, 2024-04-25 06:00

Nat Commun. 2024 Apr 25;15(1):3488. doi: 10.1038/s41467-024-47744-0.

ABSTRACT

Elucidating the relationship between non-coding regulatory element sequences and gene expression is crucial for understanding gene regulation and genetic variation. We explored this link with the training of interpretable deep learning models predicting gene expression profiles from gene flanking regions of the plant species Arabidopsis thaliana, Solanum lycopersicum, Sorghum bicolor, and Zea mays. With over 80% accuracy, our models enabled predictive feature selection, highlighting, for example, the significant role of UTR regions in determining gene expression levels. The models demonstrated remarkable cross-species performance, effectively identifying both conserved and species-specific regulatory sequence features and their predictive power for gene expression. We illustrated the application of our approach by revealing causal links between genetic variation and gene expression changes across fourteen tomato genomes. Lastly, our models efficiently predicted genotype-specific expression of key functional gene groups, exemplified by underscoring known phenotypic and metabolic differences between Solanum lycopersicum and its wild, drought-resistant relative, Solanum pennellii.
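
Sequence-to-expression models of this kind conventionally take one-hot encoded DNA as input; a minimal sketch of that encoding (the paper's exact preprocessing is not given in the abstract):

```python
import numpy as np

def one_hot_dna(seq):
    """One-hot encode a flanking-region sequence (A, C, G, T -> four
    channels); ambiguous bases such as N are left all-zero."""
    lut = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in lut:
            x[i, lut[base]] = 1.0
    return x
```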

PMID:38664394 | DOI:10.1038/s41467-024-47744-0

Categories: Literature Watch

Neuroimage analysis using artificial intelligence approaches: a systematic review

Thu, 2024-04-25 06:00

Med Biol Eng Comput. 2024 Apr 26. doi: 10.1007/s11517-024-03097-w. Online ahead of print.

ABSTRACT

In the contemporary era, artificial intelligence (AI) has undergone a transformative evolution, exerting a profound influence on neuroimaging data analysis. This development has significantly elevated our comprehension of intricate brain functions. This study investigates the ramifications of employing AI techniques on neuroimaging data, with the specific objective of improving diagnostic capabilities and contributing to the overall progress of the field. A systematic search was conducted in prominent scientific databases, including PubMed, IEEE Xplore, and Scopus, meticulously curating 456 relevant articles on AI-driven neuroimaging analysis spanning from 2013 to 2023. To maintain rigor and credibility, stringent inclusion criteria, quality assessments, and precise data extraction protocols were consistently enforced throughout this review. Following a rigorous selection process, 104 studies were selected for review, focusing on diverse neuroimaging modalities with an emphasis on mental and neurological disorders. Among these, 19.2% addressed mental illness, and 80.7% focused on neurological disorders. The prevailing clinical tasks were disease classification (58.7%) and lesion segmentation (28.9%), whereas image reconstruction constituted 7.3%, and image regression and prediction tasks represented 9.6%. AI-driven neuroimaging analysis holds tremendous potential, transforming both research and clinical applications. Machine learning and deep learning algorithms outperform traditional methods, reshaping the field significantly.

PMID:38664348 | DOI:10.1007/s11517-024-03097-w

Categories: Literature Watch

Feasibility of Artificial Intelligence Constrained Compressed SENSE Accelerated 3D Isotropic T1 VISTA Sequence for Vessel Wall MR Imaging: Exploring the Potential of Higher Acceleration Factors Compared to Traditional Compressed SENSE

Thu, 2024-04-25 06:00

Acad Radiol. 2024 Apr 24:S1076-6332(24)00206-X. doi: 10.1016/j.acra.2024.03.041. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: To investigate the feasibility of using deep learning-based accelerated 3D T1-weighted volumetric isotropic turbo spin-echo acquisition (VISTA) for vessel wall magnetic resonance imaging (VW-MRI) compared with traditional Compressed SENSE, and to optimize the acceleration factor (AF) to obtain high-quality clinical images.

METHODS: Forty patients with atherosclerotic plaques in the intracranial or carotid arteries were prospectively enrolled in our study from October 1, 2022 to October 31, 2023 and underwent high-resolution vessel wall imaging on a 3.0 T MR system using variable Compressed SENSE (CS) AFs. Images were reconstructed with both traditional CS and an optimized artificial intelligence-constrained Compressed SENSE (CS-AI). Two radiologists qualitatively assessed the image quality scores of CS and CS-AI across different segments and quantitatively evaluated SNR (signal-to-noise ratio) and CNR (contrast-to-noise ratio) metrics. Paired t-tests, ANOVA, and Friedman tests were used to analyze the image quality metrics. Written informed consent was obtained from all patients in this study.
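
The abstract does not state the exact ROI protocol, but the usual ROI-based definitions of the two reported metrics can be sketched as follows (the ROI placement is an assumption):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR = mean signal in a tissue ROI / SD in a background ROI."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """CNR = |mean(A) - mean(B)| / SD of background noise,
    e.g. A = plaque or vessel wall, B = lumen."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)
```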

RESULTS: The CS-AI groups demonstrated good image quality scores compared with reference scans at AFs up to 12 (P < 0.05). The CS-AI 10 protocol provided the best images of the lumen at both normal and lesion sites (P < 0.05). The plaque SNR was significantly higher in the CS-AI groups than in the CS groups until the AF increased to 12 (P < 0.05). CS-AI protocols had higher CNR than CS at every AF on both pre- and post-contrast T1WI (P < 0.05). The CNR was highest in the CS-AI 10 protocol on pre-contrast T1WI and in CS-AI 12 on post-contrast T1WI (P < 0.05).

CONCLUSION: The study demonstrated the feasibility of using CS-AI technology to diagnose arteriosclerotic vascular disease with 3D T1 VISTA sequences. The image quality and diagnostic efficiency of CS-AI images were comparable to or better than those of traditional CS images. Higher AFs are feasible and have potential for use in VW-MRI. Determining standardized AFs for clinical scanning protocols is expected to aid the empirical evaluation of this new imaging technology.

PMID:38664146 | DOI:10.1016/j.acra.2024.03.041

Categories: Literature Watch

Machine learning and new insights for breast cancer diagnosis

Thu, 2024-04-25 06:00

J Int Med Res. 2024 Apr;52(4):3000605241237867. doi: 10.1177/03000605241237867.

ABSTRACT

Breast cancer (BC) is the most prominent form of cancer among females all over the world. The current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine for their high efficiency in detection and intervention. The subsequent imaging features and mathematical analyses can then be used to generate ML models, which stratify, differentiate and detect benign and malignant breast lesions. Given its marked advantages, radiomics is a frequently used tool in recent research and clinics. Artificial neural networks and deep learning (DL) are novel forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds and language, and performs precise clinical image stratification, medical record analyses and tumour diagnosis. Herein, this review thoroughly summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, ML and DL. The aim was to provide guidance to scientists regarding the use of artificial intelligence and ML in research and the clinic.

PMID:38663911 | DOI:10.1177/03000605241237867

Categories: Literature Watch
