Deep learning

Deep learning analysis of fMRI data for predicting Alzheimer's Disease: A focus on convolutional neural networks and model interpretability

Wed, 2024-12-04 06:00

PLoS One. 2024 Dec 4;19(12):e0312848. doi: 10.1371/journal.pone.0312848. eCollection 2024.

ABSTRACT

The early detection of Alzheimer's Disease (AD) is thought to be important for effective intervention and management. Here, we explore deep learning methods for the early detection of AD. We consider both genetic risk factors and functional magnetic resonance imaging (fMRI) data. However, we found that the genetic factors do not notably enhance imaging-based AD prediction. Thus, we focus on building an effective imaging-only model. In particular, we utilize data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), employing a 3D Convolutional Neural Network (CNN) to analyze fMRI scans. Despite the limitations posed by our dataset (small size and class imbalance), our CNN model reaches an accuracy of 92.8% and an ROC AUC of 0.95. Our research highlights the complexities inherent in integrating multimodal medical datasets. It also demonstrates the potential of deep learning in medical imaging for AD prediction.
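For readers unfamiliar with the ROC figure reported above, a minimal sketch of how an ROC AUC can be computed from classifier scores may help; the Mann-Whitney rank formulation below uses invented scores and labels, not data from this study.

```python
# Hedged sketch: ROC AUC via the Mann-Whitney formulation -- the probability
# that a randomly chosen positive case scores above a randomly chosen
# negative case. Scores and labels here are invented for illustration.
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins"; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.7, 0.8, 0.35, 0.6]   # hypothetical model outputs
labels = [1,   0,   1,   0,    1]     # 1 = AD, 0 = control (hypothetical)
print(round(roc_auc(scores, labels), 3))  # -> 0.833
```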

PMID:39630834 | DOI:10.1371/journal.pone.0312848

Categories: Literature Watch

Multiscale effective connectivity analysis of brain activity using neural ordinary differential equations

Wed, 2024-12-04 06:00

PLoS One. 2024 Dec 4;19(12):e0314268. doi: 10.1371/journal.pone.0314268. eCollection 2024.

ABSTRACT

Neural mechanisms and the underlying directionality of signaling among brain regions depend on neural dynamics spanning multiple spatiotemporal scales of population activity. Despite recent advances in multimodal measurements of brain activity, there are no broadly accepted multiscale dynamical models for the collective activity represented in neural signals. Here we introduce a neurobiologically driven deep learning model, termed multiscale neural dynamics neural ordinary differential equation (msDyNODE), to describe multiscale brain communications governing cognition and behavior. We demonstrate that msDyNODE successfully captures multiscale activity using both simulations and electrophysiological experiments. The msDyNODE-derived causal interactions between recording locations and scales not only aligned well with the abstraction of the hierarchical neuroanatomy of the mammalian central nervous system but also exhibited behavioral dependencies. This work offers a new approach for mechanistic multiscale studies of neural processes.

PMID:39630698 | DOI:10.1371/journal.pone.0314268

Predicting progression-free survival in sarcoma using MRI-based automatic segmentation models and radiomics nomograms: a preliminary multicenter study

Wed, 2024-12-04 06:00

Skeletal Radiol. 2024 Dec 4. doi: 10.1007/s00256-024-04837-7. Online ahead of print.

ABSTRACT

OBJECTIVES: Some sarcomas are highly malignant and are associated with high recurrence rates despite treatment. This multicenter study aimed to develop and validate a radiomics signature to estimate sarcoma progression-free survival (PFS).

MATERIALS AND METHODS: The study retrospectively enrolled 202 consecutive patients with pathologically diagnosed sarcoma, who had pre-treatment axial fat-suppressed T2-weighted images (FS-T2WI), and included them in the ROI-Net model for training. Among them, 120 patients were included in the radiomics analysis, all of whom had pre-treatment axial T1-weighted and transverse FS-T2WI images, and were randomly divided into a development group (n = 96) and a validation group (n = 24). In the development cohort, Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression was used to select radiomics features for PFS prediction. By combining significant clinical features with the radiomics features, a nomogram was constructed using Cox regression.

RESULTS: The proposed ROI-Net framework achieved a Dice coefficient of 0.820 (0.791-0.848). The radiomics signature based on 21 features could distinguish high-risk patients with poor PFS. Univariate Cox analysis revealed that peritumoral edema, metastases, and the radiomics score were associated with poor PFS and were included in the construction of the nomogram. The Radiomics-T1WI-Clinical model exhibited the best performance, with AUC values of 0.947, 0.907, and 0.924 at 300 days, 600 days, and 900 days, respectively.
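As a reminder of the segmentation metric quoted above, the Dice coefficient can be computed as a simple set overlap; the voxel sets below are invented purely to illustrate the calculation, not taken from the study.

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) over foreground voxels.
# The voxel coordinates below are invented for illustration only.
def dice(pred, truth):
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

pred  = {(0, i) for i in range(8)}       # 8 predicted voxels
truth = {(0, i) for i in range(1, 11)}   # 10 ground-truth voxels, 7 shared
print(round(dice(pred, truth), 3))  # -> 0.778
```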

CONCLUSION: The proposed ROI-Net framework demonstrated high consistency between its segmentation results and expert annotations. The radiomics features and the combined nomogram have the potential to aid in predicting PFS for patients with sarcoma.

PMID:39630238 | DOI:10.1007/s00256-024-04837-7

Correction to: Deep learning-based reconstruction improves the image quality of low-dose CT enterography in patients with inflammatory bowel disease

Wed, 2024-12-04 06:00

Abdom Radiol (NY). 2024 Dec 4. doi: 10.1007/s00261-024-04694-x. Online ahead of print.

NO ABSTRACT

PMID:39630201 | DOI:10.1007/s00261-024-04694-x

Automatic Quantitative Analysis of Internal Quantum Efficiency Measurements of GaAs Solar Cells Using Deep Learning

Wed, 2024-12-04 06:00

Adv Sci (Weinh). 2024 Dec 4:e2407048. doi: 10.1002/advs.202407048. Online ahead of print.

ABSTRACT

A solar cell's internal quantum efficiency (IQE) measurement reveals critical information about the device's performance. This information can be obtained through a qualitative analysis of the shape of the curve, identifying and attributing current losses such as those at the front and rear interfaces, and by extracting key electrical and optical performance parameters. However, conventional methods to extract the performance parameters from IQE measurements are often time-consuming and require manual fitting approaches. While several methodologies exist to extract those parameters from silicon solar cells, there is a lack of accessible approaches for non-silicon cell technologies, like gallium arsenide cells, typically limiting the analysis to only the qualitative level. Therefore, this study proposes using a deep learning method to automatically predict multiple key parameters from IQE measurements of gallium arsenide cells. The proposed method is demonstrated to achieve a very high level of prediction accuracy across the entire range of parameter values and exhibits high resilience to noisy measurements. By enhancing the quantitative analysis of IQE measurements, the method will unlock the full potential of quantum efficiency measurements as a powerful characterization tool for diverse solar cell technologies.

PMID:39630124 | DOI:10.1002/advs.202407048

Automated Segmentation of Fetal Intracranial Volume in Three-Dimensional Ultrasound Using Deep Learning: Identifying Sex Differences in Prenatal Brain Development

Wed, 2024-12-04 06:00

Hum Brain Mapp. 2024 Dec 1;45(17):e70058. doi: 10.1002/hbm.70058.

ABSTRACT

The human brain undergoes major developmental changes during pregnancy. Three-dimensional (3D) ultrasound images offer the opportunity to investigate typical prenatal brain development on a large scale. Transabdominal ultrasound can be challenging due to the small fetal brain and its movement, as well as multiple sweeps that may not yield high-quality images, especially when brain structures are unclear. By applying the latest developments in artificial intelligence for automated image processing, which allow automated learning of brain anatomy from these images, retrieving reliable quantitative brain measurements becomes possible at a large scale. Here, we developed a convolutional neural network (CNN) model for automated segmentation of fetal intracranial volume (ICV) from 3D ultrasound. We applied the trained model in a large longitudinal population sample from the YOUth Baby and Child cohort, measured at 20 and 30 weeks of gestational age, to investigate biological sex differences in fetal ICV as a proof-of-principle and validation for our automated method (N = 2235 individuals with 43492 ultrasounds). A total of 168 annotated, randomly selected, good-quality 3D ultrasound whole-brain images were included to train a 3D CNN for automated fetal ICV segmentation. A data augmentation strategy provided physical variation to train the network. K-fold cross-validation and Bayesian optimization were used for network selection, and an ensemble-based system combined multiple networks to form the final ensemble network. The final ensemble network produced consistent and high-quality segmentations of ICV (Dice Similarity Coefficient (DSC) > 0.93, Hausdorff Distance (HD): HDvoxel < 4.6 voxels, and HDphysical < 1.4 mm). In addition, we developed an automated quality control procedure to include the ultrasound scans that successfully predicted ICV from all 43492 3D ultrasounds available in all individuals, no longer requiring manual selection of the best scan for analysis. Our trained model automatically retrieved ultrasounds with brain data and estimated ICV and ICV growth in 7672 (18%) of ultrasounds in 1762 participants who passed the automatic quality control procedure. Boys had significantly larger ICV at 20 weeks (81.7 ± 0.4 mL vs. 80.8 ± 0.5 mL; B = 2.86; p = 5.7e-14) and 30 weeks (257.0 ± 0.9 mL vs. 245.1 ± 0.9 mL; B = 12.35; p = 8.2e-27) of pregnancy, and more pronounced ICV growth than girls (delta growth 0.12 mL/day; p = 1.8e-5). Our automated artificial intelligence approach provides an opportunity to investigate fetal brain development on a much larger scale and to answer fundamental questions related to prenatal brain development.
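The Hausdorff distance (HD) quoted above measures the worst-case boundary disagreement between two segmentations. A toy sketch of the symmetric HD, with invented coordinates (the physical HD is just the voxel HD scaled by the voxel size):

```python
import math

# Symmetric Hausdorff distance: the largest distance from any point in one
# boundary set to its nearest point in the other set. Coordinates invented.
def hausdorff(a, b):
    def directed(u, v):
        return max(min(math.dist(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

a = [(0, 0), (1, 0), (2, 0)]  # boundary voxels of segmentation A
b = [(0, 1), (1, 1), (2, 4)]  # boundary voxels of segmentation B
print(hausdorff(a, b))  # -> 4.0 (driven by the outlier point (2, 4))
```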

PMID:39629904 | DOI:10.1002/hbm.70058

A Deep Learning Based Framework to Identify Undocumented Orphaned Oil and Gas Wells from Historical Maps: A Case Study for California and Oklahoma

Wed, 2024-12-04 06:00

Environ Sci Technol. 2024 Dec 4. doi: 10.1021/acs.est.4c04413. Online ahead of print.

ABSTRACT

Undocumented Orphaned Wells (UOWs) are wells without an operator that have limited or no documentation with regulatory authorities. An estimated 310,000 to 800,000 UOWs, whose locations are largely unknown, exist in the United States (US). These wells can potentially leak methane and other volatile organic compounds to the atmosphere and contaminate groundwater. In this study, we developed a novel framework utilizing a state-of-the-art computer vision neural network model to identify the precise locations of potential UOWs. The U-Net model is trained to detect oil and gas well symbols in georeferenced historical topographic maps, and potential UOWs are identified as symbols that are further than 100 m from any documented well. A custom tool was developed to rapidly validate the potential UOW locations. We applied this framework to four counties in California and Oklahoma, leading to the discovery of 1301 potential UOWs across >40,000 km2. We confirmed the presence of 29 UOWs from satellite images and 15 UOWs from magnetic surveys in the field, with a spatial accuracy on the order of 10 m. This framework can be scaled to identify potential UOWs across the US, since historical maps are available for the entire nation.
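The selection rule described above is a simple distance filter. A hedged sketch, assuming planar coordinates in metres (the coordinates and threshold usage below are illustrative, not the study's implementation):

```python
import math

# A detected well symbol counts as a *potential* UOW only if it lies more
# than 100 m from every documented well. Coordinates invented for illustration.
def potential_uows(detected, documented, threshold_m=100.0):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [p for p in detected
            if all(dist(p, q) > threshold_m for q in documented)]

documented = [(0.0, 0.0), (500.0, 500.0)]
detected   = [(30.0, 40.0),    # 50 m from a documented well -> excluded
              (300.0, 400.0)]  # >100 m from both -> flagged as potential UOW
print(potential_uows(detected, documented))  # -> [(300.0, 400.0)]
```

In practice the detections would come from the U-Net's map symbols after georeferencing, and distances would be computed in a projected coordinate system.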

PMID:39629830 | DOI:10.1021/acs.est.4c04413

Prediction of Brain Cancer Occurrence and Risk Assessment of Brain Hemorrhage Using Hybrid Deep Learning Technique

Wed, 2024-12-04 06:00

Cancer Invest. 2024 Dec 4:1-23. doi: 10.1080/07357907.2024.2431829. Online ahead of print.

ABSTRACT

The prediction of brain cancer occurrence and the risk assessment of brain hemorrhage using a hybrid deep learning (DL) technique is a critical area of research in medical imaging analysis. One prominent challenge in this field is the accurate identification and classification of brain tumors and hemorrhages, which can significantly impact patient prognosis and treatment planning. The objectives of the study are to predict brain cancer occurrence and to assess the risk levels associated with brain hemorrhage. A diverse dataset of brain MRI and CT scan images is used. An Unsymmetrical Trimmed Median Filter with OPTICS clustering removes noise while preserving edges and details, and the Chan-Vese process provides refined segmentation. Brain cancer detection is performed with a Multi-Head Self-Attention Dilated Convolutional Neural Network (MH-SA-DCNN) with an EfficientNet model, which trains the algorithm to predict cancerous regions in brain images. Further, a Graph-Based Deep Neural Network Model (G-DNN) is implemented to capture spatial relationships and risk factors from brain images, a Cox regression model estimates cancer risk over time, and the model's parameters and features are fine-tuned and optimized using the Osprey optimization algorithm (OPA).

PMID:39629783 | DOI:10.1080/07357907.2024.2431829

Deep learning-based hyperspectral image correction and unmixing for brain tumor surgery

Wed, 2024-12-04 06:00

iScience. 2024 Oct 28;27(12):111273. doi: 10.1016/j.isci.2024.111273. eCollection 2024 Dec 20.

ABSTRACT

Hyperspectral imaging for fluorescence-guided brain tumor resection improves the visualization of tissue differences, which can improve patient outcomes. However, current methods do not effectively correct for heterogeneous optical and geometric tissue properties, leading to less accurate results. We propose two deep learning models for correction and unmixing that can capture these effects. While one is trained with protoporphyrin IX (PpIX) concentration labels, the other is semi-supervised. The models were evaluated on phantom and pig brain data with known PpIX concentrations; the supervised and semi-supervised models achieved Pearson correlation coefficients (phantom, pig brain) between known and computed PpIX concentrations of (0.997, 0.990) and (0.98, 0.91), respectively, while the classical approach achieved (0.93, 0.82). The semi-supervised approach also generalizes better to human data, achieving a 36% lower false-positive rate for PpIX detection and giving qualitatively more realistic results than existing methods. These results show promise for using deep learning to improve hyperspectral fluorescence-guided neurosurgery.
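The evaluation metric above is the Pearson correlation between known and computed concentrations. A minimal sketch of that computation, with invented concentration values rather than the study's data:

```python
import math

# Pearson correlation coefficient between known and model-computed
# concentrations. All numbers below are invented for illustration.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

known    = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical true concentrations
computed = [1.1, 1.9, 3.2, 3.8, 5.1]   # hypothetical model outputs
print(round(pearson(known, computed), 3))  # -> 0.995
```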

PMID:39628576 | PMC:PMC11613202 | DOI:10.1016/j.isci.2024.111273

Self-Supervised Super-Resolution of 2D Pre-clinical MRI Acquisitions

Wed, 2024-12-04 06:00

Proc SPIE Int Soc Opt Eng. 2024 Feb;12930:129302K. doi: 10.1117/12.3016094. Epub 2024 Apr 2.

ABSTRACT

Animal models are pivotal in disease research and the advancement of therapeutic methods. The translation of results from these models to clinical applications is enhanced by employing technologies that are consistent between humans and animals, like Magnetic Resonance Imaging (MRI), which offers the advantage of longitudinal disease evaluation without compromising animal welfare. However, current animal MRI techniques predominantly employ 2D acquisitions due to constraints related to organ size, scan duration, image quality, and hardware limitations. While 3D acquisitions are feasible, they are constrained by longer scan times and ethical considerations related to extended sedation periods. This study evaluates the efficacy of SMORE, a self-supervised deep learning super-resolution approach, in enhancing the through-plane resolution of anisotropic 2D MRI scans to isotropic resolution. SMORE accomplishes this by self-training with high-resolution in-plane data, thereby eliminating domain discrepancies between the input data and external training sets. The approach is tested on mouse MRI scans acquired across a range of through-plane resolutions. Experimental results show that SMORE substantially outperforms traditional interpolation methods. Additionally, we find that pre-training offers a promising approach to reduce processing time without compromising performance.

PMID:39628511 | PMC:PMC11613139 | DOI:10.1117/12.3016094

Towards Explainable Detection of Alzheimer's Disease: A Fusion of Deep Convolutional Neural Network and Enhanced Weighted Fuzzy C-Mean

Wed, 2024-12-04 06:00

Curr Med Imaging. 2024;20(1):e15734056317205. doi: 10.2174/0115734056317205241014060633.

ABSTRACT

BACKGROUND: Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline, posing a significant challenge for individuals and society. Early detection and treatment are essential for effective disease management.

OBJECTIVE: The objective of this research is to develop a novel and interpretable deep learning model for rapid and accurate Alzheimer's disease detection, incorporating Explainable Artificial Intelligence (XAI) techniques. The model aims to ensure generalizability through cross-validation and data augmentation, while enhancing interpretability and transparency by using XAI methods such as Grad-CAM, SHAP, and LIME, alongside an Enhanced Fuzzy C-Means (FCM) algorithm to clarify feature categorization and improve understanding of the model's decision-making process.

METHODS: The proposed model employs a multi-stage approach. Initially, MRI scans are transformed into feature vectors suitable for input into a Deep Convolutional Neural Network (CNN). Subsequently, an Enhanced Fuzzy C-Means (FCM) algorithm, incorporating spatial information, refines these features to improve clustering precision. The model integrates Explainable Artificial Intelligence techniques, including Grad-CAM, SHAP, and LIME, to elucidate the critical features and regions influencing classification outcomes. Performance metrics such as accuracy, recall, and specificity are used to assess the model.

RESULTS: The XAI-DEF Alzheimer's disease detection model consistently demonstrated exceptional performance across both the ADNI and OASIS datasets. On ADNI, the model achieved an accuracy of 99.39%, recall of 99.47%, and specificity of 99.3%. Similarly, on OASIS, the model attained an accuracy of 99.36%, recall of 99.53%, and specificity of 99.15%. These results underscore the model's effectiveness in accurately classifying Alzheimer's disease cases while minimizing false positives and negatives.

CONCLUSION: Through the development of this model, we contribute to the advancement of dependable diagnostic tools tailored for the detection and management of Alzheimer's disease. By prioritizing interpretability alongside accuracy, our approach provides valuable insights into the decision-making process of the model, ultimately improving patient outcomes and facilitating further research in neurodegenerative disorders.

PMID:39629569 | DOI:10.2174/0115734056317205241014060633

Models for the marrow: A comprehensive review of AI-based cell classification methods and malignancy detection in bone marrow aspirate smears

Wed, 2024-12-04 06:00

Hemasphere. 2024 Dec 3;8(12):e70048. doi: 10.1002/hem3.70048. eCollection 2024 Dec.

ABSTRACT

Given the high prevalence of artificial intelligence (AI) research in medicine, the development of deep learning (DL) algorithms based on image recognition, such as the analysis of bone marrow aspirate (BMA) smears, is rapidly increasing in the field of hematology and oncology. The models are trained to identify the optimal regions of the BMA smear for differential cell count and subsequently detect and classify a number of cell types, which can ultimately be utilized for diagnostic purposes. Moreover, AI is capable of identifying genetic mutations phenotypically. This pipeline has the potential to offer an accurate and rapid preliminary analysis of the bone marrow in the clinical routine. However, the intrinsic complexity of hematological diseases presents several challenges for the automatic morphological assessment. To ensure general applicability across multiple medical centers and to deliver high accuracy on prospective clinical data, AI models would require highly heterogeneous training datasets. This review presents a systematic analysis of models for cell classification and detection of hematological malignancies published in the last 5 years (2019-2024). It provides insight into the challenges and opportunities of these DL-assisted tasks.

PMID:39629240 | PMC:PMC11612571 | DOI:10.1002/hem3.70048

Pushing the limits of zero-shot self-supervised super-resolution of anisotropic MR images

Wed, 2024-12-04 06:00

Proc SPIE Int Soc Opt Eng. 2024 Feb;12926:1292606. doi: 10.1117/12.3007304. Epub 2024 Apr 2.

ABSTRACT

Magnetic resonance images are often acquired as several 2D slices and stacked into a 3D volume, yielding a lower through-plane resolution than in-plane resolution. Many super-resolution (SR) methods have been proposed to address this, including those that use the inherent high-resolution (HR) in-plane signal as HR data to train deep neural networks. Techniques with this approach are generally both self-supervised and internally trained, so no external training data are required. However, in such a training paradigm, limited data are available for training machine learning models, and the frequency content of the in-plane data may be insufficient to capture the true HR image. In particular, the recovery of high-frequency information is usually lacking. In this work, we demonstrate this shortcoming with Fourier analysis; we subsequently propose and compare several approaches to address the recovery of high-frequency information. We test a particular internally trained self-supervised method named SMORE on ten subjects at three common clinical resolutions with three types of modification: frequency-type losses (Fourier and wavelet), feature-type losses, and low-resolution re-gridding strategies for estimating the residual. We find a particular combination that balances signal recovery in both the spatial and frequency domains, qualitatively and quantitatively, yet none of the modifications, alone or in tandem, yields a vastly superior result. We postulate that there may be limits on internally trained techniques that such modifications cannot address, limits on modeling SR as finding a map from low resolution to HR, or both.

PMID:39629198 | PMC:PMC11613508 | DOI:10.1117/12.3007304

Blood Pressure Predicted From Artificial Intelligence Analysis of Retinal Images Correlates With Future Cardiovascular Events

Wed, 2024-12-04 06:00

JACC Adv. 2024 Nov 18;3(12):101410. doi: 10.1016/j.jacadv.2024.101410. eCollection 2024 Dec.

ABSTRACT

BACKGROUND: High systolic blood pressure (SBP) is one of the leading modifiable risk factors for premature cardiovascular death. The retinal vasculature exhibits well-documented adaptations to high SBP and these vascular changes are known to correlate with atherosclerotic cardiovascular disease (ASCVD) events.

OBJECTIVES: The purpose of this study was to determine whether using artificial intelligence (AI) to predict an individual's SBP from retinal images would more accurately correlate with future ASCVD events compared to measured SBP.

METHODS: 95,665 macula-centered retinal images drawn from the 51,778 individuals in the UK Biobank who had not experienced an ASCVD event prior to retinal imaging were used. A deep-learning model was trained to predict an individual's SBP. The correlation of subsequent ASCVD events with the AI-predicted SBP and the mean of the measured SBP acquired at the time of retinal imaging was determined and compared.

RESULTS: The overall ASCVD event rate observed was 3.4%. The correlation between SBP and future ASCVD events was significantly higher when the AI-predicted SBP was used rather than the measured SBP: 0.067 vs. 0.049, P = 0.008. Measured SBP in the UK Biobank was variable (mean absolute difference = 8.2 mm Hg), which impacted the 10-year ASCVD risk score in 6% of the participants.

CONCLUSIONS: With the variability and challenges of real-world SBP measurement, AI analysis of retinal images may provide a more reliable and accurate biomarker for predicting future ASCVD events than traditionally measured SBP.

PMID:39629061 | PMC:PMC11612377 | DOI:10.1016/j.jacadv.2024.101410

A deep generative prior for high-resolution isotropic MR head slices

Wed, 2024-12-04 06:00

Proc SPIE Int Soc Opt Eng. 2023 Feb;12464:124640I. doi: 10.1117/12.2654032. Epub 2023 Apr 3.

ABSTRACT

Generative priors for magnetic resonance (MR) images have been used in a number of medical image analysis applications. Due to the plethora of deep learning methods based on 2D medical images, it would be beneficial to have a generator trained on complete, high-resolution 2D head MR slices from multiple orientations and multiple contrasts. In this work, we trained a StyleGAN3-T model on head MR slices with T1- and T2-weighted contrasts using public data. We restricted the training corpus of this model to slices from 1 mm isotropic volumes corresponding to three standard radiological views, with set pre-processing steps. In order to retain full applicability to downstream tasks, we did not skull-strip the images. Several analyses of the trained network, including examination of qualitative samples, interpolation of latent codes, and style mixing, demonstrate the expressivity of the network. Images from this network can be used for a variety of downstream tasks. The weights are open-sourced and available at https://gitlab.com/iacl/high-res-mri-head-slice-gan.

PMID:39629055 | PMC:PMC11613141 | DOI:10.1117/12.2654032

Learning-based object's stiffness and shape estimation with confidence level in multi-fingered hand grasping

Wed, 2024-12-04 06:00

Front Neurorobot. 2024 Nov 19;18:1466630. doi: 10.3389/fnbot.2024.1466630. eCollection 2024.

ABSTRACT

INTRODUCTION: When humans grasp an object, they are capable of recognizing its characteristics, such as its stiffness and shape, through the sensation of their hands. They can also determine their level of confidence in the estimated object properties. In this study, we developed a method for multi-fingered hands to estimate both physical and geometric properties, such as the stiffness and shape of an object, together with the confidence levels of those estimates, using proprioceptive signals such as joint angles and velocities.

METHOD: We have developed a learning framework based on probabilistic inference that does not require hyperparameters to balance the estimation of diverse types of properties. Using this framework, we implemented recurrent neural networks that estimate the stiffness and shape of grasped objects, with their uncertainty, in real time.

RESULTS: We demonstrated that the trained neural networks are capable of representing the confidence level of estimation that includes the degree of uncertainty and task difficulty in the form of variance and entropy.

DISCUSSION: We believe that this approach will contribute to reliable state estimation. Our approach could also be combined with flexible object manipulation and probabilistic-inference-based decision making.

PMID:39628962 | PMC:PMC11611863 | DOI:10.3389/fnbot.2024.1466630

Self-Supervised Super-Resolution for Anisotropic MR Images with and Without Slice Gap

Wed, 2024-12-04 06:00

Simul Synth Med Imaging. 2023 Oct;14288:118-128. doi: 10.1007/978-3-031-44689-4_12. Epub 2023 Oct 7.

ABSTRACT

Magnetic resonance (MR) images are often acquired as multi-slice volumes to reduce scan time and motion artifacts while improving signal-to-noise ratio. These slices are often thicker than their in-plane resolution and sometimes are acquired with gaps between slices. Such thick-slice image volumes (possibly with gaps) can impact the accuracy of volumetric analysis and 3D methods. While many super-resolution (SR) methods have been proposed to address thick slices, few have directly addressed the slice gap scenario. Furthermore, data-driven methods are sensitive to domain shift due to variability in resolution, acquisition contrast, pathology, and anatomy. In this work, we propose a self-supervised SR technique to address anisotropic MR images with and without slice gap. We compare against competing methods, validate both signal recovery and downstream task performance on two open-source datasets, and show improvements in all respects. Our code is publicly available at https://gitlab.com/iacl/smore.
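The thick-slice-with-gap acquisition described above can be sketched with a simple forward model: each acquired slice averages several high-resolution planes (partial-volume effect) and a gap skips planes entirely. The thickness and gap values below are invented for illustration, not taken from the paper.

```python
# Toy forward model of anisotropic acquisition: a thick slice averages
# `thickness` HR planes, then `gap` planes are skipped before the next slice.
def thick_slices(hr_planes, thickness=4, gap=2):
    """hr_planes: per-plane values at (say) 1 mm spacing; returns LR slices."""
    step = thickness + gap
    out = []
    for start in range(0, len(hr_planes) - thickness + 1, step):
        block = hr_planes[start:start + thickness]
        out.append(sum(block) / thickness)  # average within the slab
    return out

hr = list(range(12))        # 12 HR planes at 1 mm spacing
print(thick_slices(hr))     # -> [1.5, 7.5]: two 4 mm slices with a 2 mm gap
```

Self-supervised SR methods like the one above effectively learn to invert such a degradation using only the HR in-plane signal of the same volume.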

PMID:39628924 | PMC:PMC11613142 | DOI:10.1007/978-3-031-44689-4_12

SMART-PET: a Self-SiMilARiTy-aware generative adversarial framework for reconstructing low-count [18F]-FDG-PET brain imaging

Wed, 2024-12-04 06:00

Front Nucl Med. 2024 Nov 19;4:1469490. doi: 10.3389/fnume.2024.1469490. eCollection 2024.

ABSTRACT

INTRODUCTION: In Positron Emission Tomography (PET) imaging, the use of tracers increases radioactive exposure for longitudinal evaluations and in radiosensitive populations such as pediatrics. However, reducing injected PET activity potentially leads to an unfavorable compromise between radiation exposure and image quality, causing lower signal-to-noise ratios and degraded images. Deep learning-based denoising approaches can be employed to recover low-count PET image signals; nonetheless, most of these methods rely on structural or anatomic guidance from magnetic resonance imaging (MRI) and fail to effectively preserve global spatial features in denoised PET images without impacting signal-to-noise ratios.

METHODS: In this study, we developed a novel PET-only deep learning framework, the Self-SiMilARiTy-Aware Generative Adversarial Framework (SMART), which leverages Generative Adversarial Networks (GANs) and a self-similarity-aware attention mechanism for denoising [18F]-fluorodeoxyglucose ([18F]-FDG) PET images. This study employs a combination of prospective and retrospective datasets in its design. In total, 114 subjects were included: 34 patients who underwent [18F]-FDG PET imaging for drug-resistant epilepsy, 10 patients imaged for frontotemporal dementia indications, and 70 healthy volunteers. To effectively denoise PET images without anatomical details from MRI, a self-similarity attention block (SSAB) was devised, which learned the distinctive structural and pathological features. These SSAB-enhanced features were subsequently applied to the SMART GAN algorithm, which was trained to denoise the low-count PET images using the standard-dose PET image acquired from each individual participant as reference. The trained GAN algorithm was evaluated using image quality measures including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), Fréchet inception distance (FID), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR).

RESULTS: In comparison to the standard dose, SMART-PET achieved on average an SSIM of 0.984 ± 0.007, PSNR of 38.126 ± 2.631 dB, NRMSE of 0.091 ± 0.028, FID of 0.455 ± 0.065, SNR of 0.002 ± 0.001, and CNR of 0.011 ± 0.011. Region-of-interest measurements obtained with datasets decimated down to 10% of the original counts showed a deviation of less than 1.4% from the ground-truth values.
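Of the image-quality metrics listed above, PSNR is the simplest to state explicitly. A minimal sketch, with images as flat intensity lists and all numbers invented:

```python
import math

# Peak signal-to-noise ratio: PSNR = 10 * log10(max_val^2 / MSE), in dB.
# Intensities and dynamic range below are invented for illustration.
def psnr(ref, test, max_val=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref  = [10.0, 20.0, 30.0, 40.0]   # hypothetical reference intensities
test = [11.0, 19.0, 30.0, 42.0]   # hypothetical denoised intensities
print(round(psnr(ref, test), 2))  # -> 46.37
```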

DISCUSSION: In general, SMART-PET shows promise in reducing noise in PET images and can synthesize diagnostic quality images with a 90% reduction in standard of care injected activity. These results make it a potential candidate for clinical applications in radiosensitive populations and for longitudinal neurological studies.

PMID:39628873 | PMC:PMC11611550 | DOI:10.3389/fnume.2024.1469490

Dual assurance for healthcare and future education development: normalized assistance for low-income population in rural areas-evidence from the population identification

Wed, 2024-12-04 06:00

Front Public Health. 2024 Nov 19;12:1384474. doi: 10.3389/fpubh.2024.1384474. eCollection 2024.

ABSTRACT

INTRODUCTION: This study aims to explore the relationship between healthcare and future education among the rural low-income population, using J City in Guangdong Province as the focal area. Addressing both healthcare and educational concerns, this research seeks to provide insights that can guide policy and support for this demographic.

METHODS: Utilizing big data analysis and deep learning algorithms, a targeted intelligent identification classification model was developed to accurately detect and classify rural low-income individuals. Additionally, a questionnaire survey methodology was employed to separately investigate healthcare and future education dimensions among the identified population.

RESULTS: The proposed model achieved a population identification accuracy of 91.93%, surpassing other baseline neural network algorithms by at least 2.65%. Survey results indicated low satisfaction levels in healthcare areas, including medical resource distribution, medication costs, and access to basic medical facilities, with satisfaction rates below 50%. Regarding future education, issues such as tuition burdens, educational opportunity disparities, and accessibility challenges highlighted the concerns of rural low-income families.

DISCUSSION: The high accuracy of the model demonstrates its potential for precise identification and classification of low-income populations. Insights derived from healthcare and education surveys reveal systemic issues affecting satisfaction and accessibility. This research thus provides a valuable foundation for future studies and policy development targeting rural low-income populations in healthcare and education.

PMID:39628808 | PMC:PMC11611847 | DOI:10.3389/fpubh.2024.1384474

Sustainable and smart rail transit based on advanced self-powered sensing technology

Wed, 2024-12-04 06:00

iScience. 2024 Nov 5;27(12):111306. doi: 10.1016/j.isci.2024.111306. eCollection 2024 Dec 20.

ABSTRACT

As rail transit continues to develop, expanding railway networks increase the demand for sustainable energy supply and intelligent infrastructure management. In recent years, advanced rail self-powered technology has rapidly progressed toward artificial intelligence and the internet of things (AIoT). This review primarily discusses self-powered and self-sensing systems in rail transit, analyzing their current characteristics and innovative potential in different scenarios. Based on this analysis, we further explore an IoT framework supported by sustainable self-powered sensing systems, including device nodes, network communication, and platform deployment. Additionally, cloud computing and edge computing technologies deployed in the railway IoT enable more effective utilization of these systems. Intelligent algorithms such as machine learning (ML) and deep learning (DL) can provide comprehensive monitoring, management, and maintenance in railway environments. Furthermore, this study surveys research in related cross-disciplinary fields to investigate the potential of emerging technologies and analyze trends for future development in rail transit.

PMID:39628585 | PMC:PMC11612783 | DOI:10.1016/j.isci.2024.111306
