Deep learning

Neural network-assisted meta-router for fiber mode and polarization demultiplexing

Thu, 2024-12-05 06:00

Nanophotonics. 2024 Sep 5;13(22):4181-4189. doi: 10.1515/nanoph-2024-0338. eCollection 2024 Sep.

ABSTRACT

Advancements in computer science have propelled society into an era of data explosion, marked by a critical need for enhanced data transmission capacity, particularly in the realm of space-division multiplexing and demultiplexing devices for fiber communications. However, recently developed mode demultiplexers primarily focus on mode divisions within one dimension rather than multiple dimensions (i.e., intensity distributions and polarization states), which significantly limits their applicability in space-division multiplexing communications. In this context, we introduce a neural network-assisted meta-router that recognizes the intensity distributions and polarization states of optical fiber modes, achieved through a single layer of metasurface optimized via neural network techniques. Specifically, a four-mode meta-router is theoretically designed and experimentally characterized: it enables four modes, comprising two spatial modes with two polarization states, to be independently routed into distinct spatial regions and recognized by the positions of those regions. Our framework provides a paradigm for fiber mode demultiplexing apparatus characterized by application compatibility, transmission capacity, and function scalability, with an ultra-simple design and an ultra-compact device. Merging metasurfaces, neural networks, and mode routing, the proposed framework paves a practical pathway towards intelligent metasurface-aided optical interconnection, with applications in fiber communication, object recognition and classification, as well as information display, processing, and encryption.

PMID:39635450 | PMC:PMC11501066 | DOI:10.1515/nanoph-2024-0338

Categories: Literature Watch

Leveraging deep transfer learning and explainable AI for accurate COVID-19 diagnosis: Insights from a multi-national chest CT scan study

Wed, 2024-12-04 06:00

Comput Biol Med. 2024 Dec 3;185:109461. doi: 10.1016/j.compbiomed.2024.109461. Online ahead of print.

ABSTRACT

The COVID-19 pandemic has emerged as a global health crisis, impacting millions worldwide. Although chest computed tomography (CT) scan images are pivotal in diagnosing COVID-19, their manual interpretation by radiologists is time-consuming and potentially subjective. Automated computer-aided diagnostic (CAD) frameworks offer efficient and objective solutions. However, machine and deep learning methods often face reproducibility challenges due to underlying biases and methodological flaws. To address these issues, we propose XCT-COVID, an explainable, transferable, and reproducible CAD framework based on deep transfer learning to accurately predict COVID-19 infection from CT scan images. This is the first study to develop three distinct models within a unified framework by leveraging a previously unexplored large dataset and two widely used smaller datasets. We employed five well-known convolutional neural network architectures, both with and without pretrained weights, on the larger dataset, and optimized hyperparameters through extensive grid search and 5-fold cross-validation (CV), significantly enhancing model performance. Experimental results from the larger dataset showed that the VGG16 architecture with pretrained weights (XCT-COVID-L) consistently outperformed the other architectures, achieving the best performance on both the 5-fold CV and the independent test. When evaluated on external datasets, XCT-COVID-L performed well on data with similar distributions, demonstrating its transferability. However, its performance decreased significantly on smaller datasets with lower-quality images. To address this, we developed two further models, XCT-COVID-S1 and XCT-COVID-S2, specifically for the smaller datasets, outperforming existing methods. Moreover, eXplainable Artificial Intelligence (XAI) analyses were employed to interpret the models' functionalities. For prediction and reproducibility purposes, the implementation of XCT-COVID is publicly accessible at https://github.com/cbbl-skku-org/XCT-COVID/.
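The hyperparameter grid search with 5-fold cross-validation described above can be sketched in a library-free form. The parameter grid and scoring function below are hypothetical stand-ins, not the actual XCT-COVID search space (which is in the linked repository):

```python
from itertools import product

def k_fold_indices(n, k=5):
    """Split sample indices 0..n-1 into k contiguous folds (no shuffling, for brevity)."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def grid_search_cv(train_and_score, grid, n_samples, k=5):
    """Return the hyperparameter combination with the best mean k-fold CV score."""
    folds = k_fold_indices(n_samples, k)
    best_params, best_score = None, float("-inf")
    for params in (dict(zip(grid, vals)) for vals in product(*grid.values())):
        scores = []
        for i, val_idx in enumerate(folds):
            # Train on all folds except fold i, validate on fold i.
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(train_and_score(params, train_idx, val_idx))
        mean_score = sum(scores) / k
        if mean_score > best_score:
            best_params, best_score = params, mean_score
    return best_params, best_score
```

In practice, `train_and_score` would fit a CNN such as VGG16 (with or without pretrained weights) on the training folds and return the validation metric.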

PMID:39631112 | DOI:10.1016/j.compbiomed.2024.109461

Categories: Literature Watch

A smart CardioSenseNet framework with advanced data processing models for precise heart disease detection

Wed, 2024-12-04 06:00

Comput Biol Med. 2024 Dec 3;185:109473. doi: 10.1016/j.compbiomed.2024.109473. Online ahead of print.

ABSTRACT

Heart diseases remain one of the leading causes of death worldwide. As a result, early and accurate diagnosis has become an urgent need for treatment and management. Most conventional methods suffer from major drawbacks in accuracy, interpretability, and feature representation. This work therefore proposes CardioSenseNet, a new framework that improves accuracy and efficiency in heart disease detection. The approach introduces several new methods: DGPN for data preprocessing, STHIO for feature selection, and SADNet for prediction. DGPN normalizes the data according to its distribution characteristics, which improves the quality of the feature representation. STHIO adopts the Sheep Flock Optimization method for feature exploration and Tuna Swarm Optimization for feature exploitation, ensuring optimal feature selection. SADNet is a deep learning model designed to capture complicated patterns in high-dimensional data for better prediction accuracy. Extensive experiments on benchmark datasets such as Cleveland and CVD confirm the efficiency of CardioSenseNet, with a high accuracy of 99% and a minimum loss of 0.12%. The results indicate that CardioSenseNet is a promising solution for detecting heart diseases accurately and at an early stage, and can therefore contribute significantly to advances in cardiovascular healthcare.

PMID:39631110 | DOI:10.1016/j.compbiomed.2024.109473

Categories: Literature Watch

A review of convolutional neural network based methods for medical image classification

Wed, 2024-12-04 06:00

Comput Biol Med. 2024 Dec 3;185:109507. doi: 10.1016/j.compbiomed.2024.109507. Online ahead of print.

ABSTRACT

This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNNs in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main CNN techniques applied to medical image classification, which are also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNNs have great potential in medical image classification tasks and have achieved good results, clinical application remains difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address them. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.

PMID:39631108 | DOI:10.1016/j.compbiomed.2024.109507

Categories: Literature Watch

Physics-guided deep learning for skillful wind-wave modeling

Wed, 2024-12-04 06:00

Sci Adv. 2024 Dec 6;10(49):eadr3559. doi: 10.1126/sciadv.adr3559. Epub 2024 Dec 4.

ABSTRACT

Modeling sea surface wind-waves is crucial for both scientific research and engineering applications. Nowadays, the most accurate wave models are based on numerical methods, which model wave-spectrum evolution by solving wave action balance partial differential equations. These methods are computationally expensive and limited by incomplete physical representations of wave spectral evolution. Here, we present a deep learning-based wave model trained on observation-merged wave hindcasts. Guided by the physical knowledge that waves are generated either by local current winds or by remote historical winds, this method can directly model significant wave height, bypassing the need for wave spectral information. This feature engineering effectively reduces the complexity of the model inputs and outputs. The resulting artificial intelligence method can model one year of global significant wave heights at a 0.5° × 0.5° × 1-hour resolution within half an hour on a personal computer, achieving higher accuracy than state-of-the-art numerical wave models.

PMID:39630901 | DOI:10.1126/sciadv.adr3559

Categories: Literature Watch

Sunflower-like self-sustainable plant-wearable sensing probe

Wed, 2024-12-04 06:00

Sci Adv. 2024 Dec 6;10(49):eads1136. doi: 10.1126/sciadv.ads1136. Epub 2024 Dec 4.

ABSTRACT

Powering and communicating with wearable devices on bio-interfaces is challenging due to strict weight, size, and resource constraints. This study presents a sunflower-like plant-wearable sensing device that harnesses solar energy, achieving complete energy self-sustainability for long-term monitoring of plant sap flow, a crucial indicator of plant health. It features foldable solar panels along with all essential flexible electronic components, resulting in a compact system that is lightweight enough for small plants. To tackle the low-energy density of solar power, we developed an ultralow-energy light communication mechanism inspired by fireflies. Together with unmanned aerial vehicles and deep learning algorithms, this approach enables efficient data retrieval from multiple devices across large agricultural fields. With its simple deployment, it shows great potential as a low-cost plant phenotyping tool. We believe our energy and communication solution for wearable devices can be extended to similar resource-limited and challenging scenarios, leading to exciting applications.

PMID:39630896 | DOI:10.1126/sciadv.ads1136

Categories: Literature Watch

Deep learning analysis of fMRI data for predicting Alzheimer's Disease: A focus on convolutional neural networks and model interpretability

Wed, 2024-12-04 06:00

PLoS One. 2024 Dec 4;19(12):e0312848. doi: 10.1371/journal.pone.0312848. eCollection 2024.

ABSTRACT

The early detection of Alzheimer's Disease (AD) is thought to be important for effective intervention and management. Here, we explore deep learning methods for the early detection of AD. We consider both genetic risk factors and functional magnetic resonance imaging (fMRI) data. However, we found that the genetic factors do not notably enhance AD prediction from imaging. Thus, we focus on building an effective imaging-only model. In particular, we utilize data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), employing a 3D Convolutional Neural Network (CNN) to analyze fMRI scans. Despite the limitations posed by our dataset (small size and imbalanced nature), our CNN model demonstrates accuracy levels reaching 92.8% and an area under the ROC curve (AUC) of 0.95. Our research highlights the complexities inherent in integrating multimodal medical datasets. It also demonstrates the potential of deep learning in medical imaging for AD prediction.
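With a small, imbalanced dataset like the one described, the area under the ROC curve is a more informative metric than raw accuracy. A minimal sketch of computing it via the Mann-Whitney rank interpretation, with hypothetical labels and scores:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, this statistic is unaffected by the class ratio, which is why it is commonly reported alongside accuracy for imbalanced medical datasets.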

PMID:39630834 | DOI:10.1371/journal.pone.0312848

Categories: Literature Watch

Multiscale effective connectivity analysis of brain activity using neural ordinary differential equations

Wed, 2024-12-04 06:00

PLoS One. 2024 Dec 4;19(12):e0314268. doi: 10.1371/journal.pone.0314268. eCollection 2024.

ABSTRACT

Neural mechanisms and the underlying directionality of signaling among brain regions depend on neural dynamics spanning multiple spatiotemporal scales of population activity. Despite recent advances in multimodal measurements of brain activity, there are no broadly accepted multiscale dynamical models for the collective activity represented in neural signals. Here we introduce a neurobiologically driven deep learning model, termed multiscale neural dynamics neural ordinary differential equation (msDyNODE), to describe the multiscale brain communications governing cognition and behavior. We demonstrate that msDyNODE successfully captures multiscale activity using both simulations and electrophysiological experiments. The msDyNODE-derived causal interactions between recording locations and scales not only aligned well with the hierarchical neuroanatomy of the mammalian central nervous system but also exhibited behavioral dependencies. This work offers a new approach for mechanistic multiscale studies of neural processes.
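The core neural-ODE idea behind msDyNODE, a learned vector field integrated through time, can be sketched with a forward-Euler integrator. The vector field below is a fixed toy function standing in for the trained network; the actual msDyNODE dynamics are multiscale and learned from data:

```python
def integrate_ode(f, x0, t0, t1, steps=2000):
    """Forward-Euler integration of dx/dt = f(x, t). In a neural ODE the
    right-hand side f is a neural network, trained by backpropagating
    through (or adjoint-solving) this integration."""
    x, t = list(x0), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        dx = f(x, t)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

# Toy dynamics: a damped rotation, so trajectories spiral toward the origin.
damped_rotation = lambda x, t: [-0.1 * x[0] - x[1], x[0] - 0.1 * x[1]]
```

Production neural-ODE work typically uses an adaptive solver (e.g. `torchdiffeq.odeint`) rather than fixed-step Euler, but the modeling idea is the same.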

PMID:39630698 | DOI:10.1371/journal.pone.0314268

Categories: Literature Watch

Predicting progression-free survival in sarcoma using MRI-based automatic segmentation models and radiomics nomograms: a preliminary multicenter study

Wed, 2024-12-04 06:00

Skeletal Radiol. 2024 Dec 4. doi: 10.1007/s00256-024-04837-7. Online ahead of print.

ABSTRACT

OBJECTIVES: Some sarcomas are highly malignant and are associated with high recurrence rates despite treatment. This multicenter study aimed to develop and validate a radiomics signature to estimate sarcoma progression-free survival (PFS).

MATERIALS AND METHODS: The study retrospectively enrolled 202 consecutive patients with pathologically diagnosed sarcoma who had pre-treatment axial fat-suppressed T2-weighted images (FS-T2WI) and included them in the ROI-Net model for training. Among them, 120 patients were included in the radiomics analysis, all of whom had pre-treatment axial T1-weighted and transverse FS-T2WI images, and were randomly divided into a development group (n = 96) and a validation group (n = 24). In the development cohort, Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression was used to select the radiomics features for PFS prediction. By combining significant clinical features with the radiomics features, a nomogram was constructed using Cox regression.

RESULTS: The proposed ROI-Net framework achieved a Dice coefficient of 0.820 (0.791-0.848). The radiomics signature based on 21 features could distinguish high-risk patients with poor PFS. Univariate Cox analysis revealed that peritumoral edema, metastases, and the radiomics score were associated with poor PFS and were included in the construction of the nomogram. The Radiomics-T1WI-Clinical model exhibited the best performance, with AUC values of 0.947, 0.907, and 0.924 at 300 days, 600 days, and 900 days, respectively.

CONCLUSION: The proposed ROI-Net framework demonstrated high consistency between its segmentation results and expert annotations. The radiomics features and the combined nomogram have the potential to aid in predicting PFS for patients with sarcoma.
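The Dice coefficient reported for ROI-Net measures the overlap between predicted and expert segmentation masks. A minimal sketch on flat binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) between two binary masks,
    given as equal-length flat sequences of 0/1 voxels."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A value of 0.820, as reported above, means the predicted and expert masks share roughly 82% of their combined voxel mass.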

PMID:39630238 | DOI:10.1007/s00256-024-04837-7

Categories: Literature Watch

Correction to: Deep learning-based reconstruction improves the image quality of low-dose CT enterography in patients with inflammatory bowel disease

Wed, 2024-12-04 06:00

Abdom Radiol (NY). 2024 Dec 4. doi: 10.1007/s00261-024-04694-x. Online ahead of print.

NO ABSTRACT

PMID:39630201 | DOI:10.1007/s00261-024-04694-x

Categories: Literature Watch

Automatic Quantitative Analysis of Internal Quantum Efficiency Measurements of GaAs Solar Cells Using Deep Learning

Wed, 2024-12-04 06:00

Adv Sci (Weinh). 2024 Dec 4:e2407048. doi: 10.1002/advs.202407048. Online ahead of print.

ABSTRACT

A solar cell's internal quantum efficiency (IQE) measurement reveals critical information about the device's performance. This information can be obtained through a qualitative analysis of the shape of the curve, identifying and attributing current losses such as those at the front and rear interfaces, and by extracting key electrical and optical performance parameters. However, conventional methods to extract the performance parameters from IQE measurements are often time-consuming and require manual fitting approaches. While several methodologies exist to extract these parameters from silicon solar cells, there is a lack of accessible approaches for non-silicon cell technologies, such as gallium arsenide cells, typically limiting the analysis to the qualitative level. Therefore, this study proposes a deep learning method to automatically predict multiple key parameters from IQE measurements of gallium arsenide cells. The proposed method is demonstrated to achieve a very high level of prediction accuracy across the entire range of parameter values and exhibits high resilience to noisy measurements. By enhancing the quantitative analysis of IQE measurements, the method will unlock the full potential of quantum efficiency measurements as a powerful characterization tool for diverse solar cell technologies.

PMID:39630124 | DOI:10.1002/advs.202407048

Categories: Literature Watch

Automated Segmentation of Fetal Intracranial Volume in Three-Dimensional Ultrasound Using Deep Learning: Identifying Sex Differences in Prenatal Brain Development

Wed, 2024-12-04 06:00

Hum Brain Mapp. 2024 Dec 1;45(17):e70058. doi: 10.1002/hbm.70058.

ABSTRACT

The human brain undergoes major developmental changes during pregnancy. Three-dimensional (3D) ultrasound images offer the opportunity to investigate typical prenatal brain development on a large scale. Transabdominal ultrasound can be challenging due to the small fetal brain and its movement, as well as multiple sweeps that may not yield high-quality images, especially when brain structures are unclear. By applying the latest developments in artificial intelligence for automated image processing, which allow brain anatomy in these images to be learned automatically, retrieving reliable quantitative brain measurements becomes possible at a large scale. Here, we developed a convolutional neural network (CNN) model for automated segmentation of fetal intracranial volume (ICV) from 3D ultrasound. We applied the trained model to a large longitudinal population sample from the YOUth Baby and Child cohort, measured at 20 and 30 weeks of gestational age, to investigate biological sex differences in fetal ICV as a proof of principle and validation of our automated method (N = 2235 individuals with 43492 ultrasounds). A total of 168 annotated, randomly selected, good-quality 3D ultrasound whole-brain images were included to train a 3D CNN for automated fetal ICV segmentation. A data augmentation strategy provided physical variation to train the network. K-fold cross-validation and Bayesian optimization were used for network selection, and an ensemble-based system combined multiple networks to form the final ensemble network. The final ensemble network produced consistent and high-quality segmentations of ICV (Dice Similarity Coefficient (DSC) > 0.93, Hausdorff Distance (HD): HDvoxel < 4.6 voxels and HDphysical < 1.4 mm). In addition, we developed an automated quality control procedure to select the ultrasound scans from which ICV was successfully predicted among all 43492 3D ultrasounds available, no longer requiring manual selection of the best scan for analysis. Our trained model automatically retrieved ultrasounds with brain data and estimated ICV and ICV growth in 7672 (18%) of the ultrasounds, covering 1762 participants that passed the automatic quality control procedure. Boys had significantly larger ICV at 20 weeks (81.7 ± 0.4 mL vs. 80.8 ± 0.5 mL; B = 2.86; p = 5.7e-14) and 30 weeks (257.0 ± 0.9 mL vs. 245.1 ± 0.9 mL; B = 12.35; p = 8.2e-27) of pregnancy, and more pronounced ICV growth than girls (delta growth 0.12 mL/day; p = 1.8e-5). Our automated artificial intelligence approach provides an opportunity to investigate fetal brain development on a much larger scale and to answer fundamental questions related to prenatal brain development.
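Alongside Dice, the Hausdorff distances quoted above bound the worst-case surface disagreement between two segmentations. A minimal brute-force sketch, adequate only for small point sets used in illustration:

```python
import math

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets, e.g. the
    boundary voxels of two segmentation masks: the largest distance from
    any point in one set to its nearest neighbor in the other."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Real pipelines compute this on distance transforms rather than all point pairs, since boundary sets of 3D masks are large.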

PMID:39629904 | DOI:10.1002/hbm.70058

Categories: Literature Watch

A Deep Learning Based Framework to Identify Undocumented Orphaned Oil and Gas Wells from Historical Maps: A Case Study for California and Oklahoma

Wed, 2024-12-04 06:00

Environ Sci Technol. 2024 Dec 4. doi: 10.1021/acs.est.4c04413. Online ahead of print.

ABSTRACT

Undocumented Orphaned Wells (UOWs) are wells without an operator that have limited or no documentation with regulatory authorities. An estimated 310,000 to 800,000 UOWs exist in the United States (US), whose locations are largely unknown. These wells can potentially leak methane and other volatile organic compounds to the atmosphere, and contaminate groundwater. In this study, we developed a novel framework utilizing a state-of-the-art computer vision neural network model to identify the precise locations of potential UOWs. The U-Net model is trained to detect oil and gas well symbols in georeferenced historical topographic maps, and potential UOWs are identified as symbols that are further than 100 m from any documented well. A custom tool was developed to rapidly validate the potential UOW locations. We applied this framework to four counties in California and Oklahoma, leading to the discovery of 1301 potential UOWs across >40,000 km2. We confirmed the presence of 29 UOWs from satellite images and 15 UOWs from magnetic surveys in the field with a spatial accuracy on the order of 10 m. This framework can be scaled to identify potential UOWs across the US since the historical maps are available for the entire nation.
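The 100 m rule used above to flag potential UOWs can be sketched directly. The coordinates below are hypothetical projected (easting, northing) pairs in meters, standing in for the paper's georeferenced map data:

```python
import math

def flag_potential_uows(detected_symbols, documented_wells, threshold_m=100.0):
    """Return detected well symbols that lie farther than `threshold_m`
    from every documented well location."""
    return [s for s in detected_symbols
            if min(math.dist(s, d) for d in documented_wells) > threshold_m]
```

The U-Net detection stage that produces `detected_symbols` is not reproduced here; this only sketches the downstream distance filter that separates potential UOWs from already-documented wells.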

PMID:39629830 | DOI:10.1021/acs.est.4c04413

Categories: Literature Watch

Prediction of Brain Cancer Occurrence and Risk Assessment of Brain Hemorrhage Using Hybrid Deep Learning Technique

Wed, 2024-12-04 06:00

Cancer Invest. 2024 Dec 4:1-23. doi: 10.1080/07357907.2024.2431829. Online ahead of print.

ABSTRACT

The prediction of brain cancer occurrence and risk assessment of brain hemorrhage using a hybrid deep learning (DL) technique is a critical area of research in medical imaging analysis. One prominent challenge in this field is the accurate identification and classification of brain tumors and hemorrhages, which can significantly impact patient prognosis and treatment planning. The objectives of the study address the prediction of brain cancer occurrence and the assessment of the risk levels associated with brain cancers due to brain hemorrhage. The method uses a diverse dataset of brain MRI and CT scan images. An Unsymmetrical Trimmed Median Filter with OPTICS clustering is utilized for noise removal while preserving edges and details, followed by the Chan-Vese segmentation process for refined segmentation. Brain cancer is detected using a Multi-Head Self-Attention Dilated Convolutional Neural Network (MH-SA-DCNN) with an EfficientNet model, which trains the algorithm to predict cancerous regions in brain images. Further, a Graph-Based Deep Neural Network model (G-DNN) is implemented to capture spatial relationships and risk factors from brain images, and a Cox regression model estimates cancer risk over time. The model's parameters and features are fine-tuned and optimized using the Osprey optimization algorithm (OPA).

PMID:39629783 | DOI:10.1080/07357907.2024.2431829

Categories: Literature Watch

Deep learning-based hyperspectral image correction and unmixing for brain tumor surgery

Wed, 2024-12-04 06:00

iScience. 2024 Oct 28;27(12):111273. doi: 10.1016/j.isci.2024.111273. eCollection 2024 Dec 20.

ABSTRACT

Hyperspectral imaging for fluorescence-guided brain tumor resection improves visualization of tissue differences, which can ameliorate patient outcomes. However, current methods do not effectively correct for heterogeneous optical and geometric tissue properties, leading to less accurate results. We propose two deep learning models for correction and unmixing that can capture these effects. While one is trained with protoporphyrin IX (PpIX) concentration labels, the other is semi-supervised. The models were evaluated on phantom and pig brain data with known PpIX concentration; the supervised and semi-supervised models achieved Pearson correlation coefficients (phantom, pig brain) between known and computed PpIX concentrations of (0.997, 0.990) and (0.98, 0.91), respectively. The classical approach achieved (0.93, 0.82). The semi-supervised approach also generalizes better to human data, achieving a 36% lower false-positive rate for PpIX detection and giving qualitatively more realistic results than existing methods. These results show promise for using deep learning to improve hyperspectral fluorescence-guided neurosurgery.
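The Pearson correlation coefficients quoted above compare known and computed PpIX concentrations. A minimal sketch of the statistic itself:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences:
    covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near 1, such as the 0.997 reported for the supervised model on phantom data, indicate a nearly linear relationship between known and computed concentrations.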

PMID:39628576 | PMC:PMC11613202 | DOI:10.1016/j.isci.2024.111273

Categories: Literature Watch

Self-Supervised Super-Resolution of 2D Pre-clinical MRI Acquisitions

Wed, 2024-12-04 06:00

Proc SPIE Int Soc Opt Eng. 2024 Feb;12930:129302K. doi: 10.1117/12.3016094. Epub 2024 Apr 2.

ABSTRACT

Animal models are pivotal in disease research and the advancement of therapeutic methods. The translation of results from these models to clinical applications is enhanced by employing technologies which are consistent for both humans and animals, like Magnetic Resonance Imaging (MRI), offering the advantage of longitudinal disease evaluation without compromising animal welfare. However, current animal MRI techniques predominantly employ 2D acquisitions due to constraints related to organ size, scan duration, image quality, and hardware limitations. While 3D acquisitions are feasible, they are constrained by longer scan times and ethical considerations related to extended sedation periods. This study evaluates the efficacy of SMORE, a self-supervised deep learning super-resolution approach, to enhance the through-plane resolution of anisotropic 2D MRI scans into isotropic resolutions. SMORE accomplishes this by self-training with high-resolution in-plane data, thereby eliminating domain discrepancies between the input data and external training sets. The approach is tested on mouse MRI scans acquired across a range of through-plane resolutions. Experimental results show SMORE substantially outperforms traditional interpolation methods. Additionally, we find that pre-training offers a promising approach to reduce processing time without compromising performance.
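SMORE learns the through-plane upsampling from the in-plane signal; the traditional interpolation baseline it is compared against can be sketched as simple linear interpolation between acquired slices. Flat lists stand in for 2D slices here:

```python
def upsample_through_plane(slices, factor):
    """Insert `factor - 1` linearly interpolated slices between each pair of
    acquired slices, reducing through-plane spacing by `factor`."""
    out = []
    for a, b in zip(slices, slices[1:]):
        for step in range(factor):
            t = step / factor  # interpolation weight toward the next slice
            out.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    out.append(list(slices[-1]))
    return out
```

Interpolation like this cannot recover detail absent from the acquired slices, which is exactly the gap a learned super-resolution method such as SMORE aims to close.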

PMID:39628511 | PMC:PMC11613139 | DOI:10.1117/12.3016094

Categories: Literature Watch

Towards Explainable Detection of Alzheimer's Disease: A Fusion of Deep Convolutional Neural Network and Enhanced Weighted Fuzzy C-Mean

Wed, 2024-12-04 06:00

Curr Med Imaging. 2024;20(1):e15734056317205. doi: 10.2174/0115734056317205241014060633.

ABSTRACT

BACKGROUND: Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline, posing a significant challenge for individuals and society. Early detection and treatment are essential for effective disease management.

OBJECTIVE: The objective of this research is to develop a novel and interpretable deep learning model for rapid and accurate Alzheimer's disease detection, incorporating Explainable Artificial Intelligence (XAI) techniques. The model aims to ensure generalizability through cross-validation and data augmentation, while enhancing interpretability and transparency by using Explainable Artificial Intelligence methods such as Grad-CAM, SHAP, and LIME, alongside an Enhanced Fuzzy C-Means (FCM) algorithm to clarify feature categorization and improve understanding of the model's decision-making process.

METHODS: The proposed model employs a multi-stage approach. Initially, MRI scans are transformed into feature vectors suitable for input into a deep Convolutional Neural Network (CNN). Subsequently, an Enhanced Fuzzy C-Means (FCM) algorithm, incorporating spatial information, refines these features to improve clustering precision. The model integrates Explainable Artificial Intelligence techniques, including Grad-CAM, SHAP, and LIME, to elucidate the critical features and regions influencing classification outcomes. Performance metrics such as accuracy, recall, and specificity are used to assess the model.

RESULTS: The XAI-DEF Alzheimer's disease detection model consistently demonstrated exceptional performance across both the ADNI and OASIS datasets. On ADNI, the model achieved an accuracy of 99.39%, recall of 99.47%, and specificity of 99.3%. Similarly, on OASIS, the model attained an accuracy of 99.36%, recall of 99.53%, and specificity of 99.15%. These results underscore the model's effectiveness in accurately classifying Alzheimer's disease cases while minimizing false positives and negatives.

CONCLUSION: Through the development of this model, we contribute to the advancement of dependable diagnostic tools tailored for the detection and management of Alzheimer's disease. By prioritizing interpretability alongside accuracy, our approach provides valuable insights into the model's decision-making process, ultimately improving patient outcomes and facilitating further research in neurodegenerative disorders.
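The fuzzy C-means step at the heart of the method above assigns each sample a graded membership in every cluster rather than a hard label. A minimal sketch of the standard membership update for 1-D features; the paper's enhanced variant additionally incorporates spatial information:

```python
def fcm_memberships(points, centers, m=2.0):
    """Standard fuzzy C-means membership update: u[i][k] is the degree to
    which point i belongs to cluster k, with each row summing to 1.
    `m` > 1 is the fuzzifier controlling how soft the assignments are."""
    memberships = []
    for p in points:
        dists = [max(abs(p - c), 1e-12) for c in centers]  # avoid div by zero
        row = [1.0 / sum((dk / dj) ** (2.0 / (m - 1.0)) for dj in dists)
               for dk in dists]
        memberships.append(row)
    return memberships
```

A full FCM iteration alternates this update with recomputing each center as the membership-weighted mean of the points, until the memberships converge.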

PMID:39629569 | DOI:10.2174/0115734056317205241014060633

Categories: Literature Watch

Models for the marrow: A comprehensive review of AI-based cell classification methods and malignancy detection in bone marrow aspirate smears

Wed, 2024-12-04 06:00

Hemasphere. 2024 Dec 3;8(12):e70048. doi: 10.1002/hem3.70048. eCollection 2024 Dec.

ABSTRACT

Given the high prevalence of artificial intelligence (AI) research in medicine, the development of deep learning (DL) algorithms based on image recognition, such as the analysis of bone marrow aspirate (BMA) smears, is rapidly increasing in the field of hematology and oncology. The models are trained to identify the optimal regions of the BMA smear for differential cell count and subsequently detect and classify a number of cell types, which can ultimately be utilized for diagnostic purposes. Moreover, AI is capable of identifying genetic mutations phenotypically. This pipeline has the potential to offer an accurate and rapid preliminary analysis of the bone marrow in the clinical routine. However, the intrinsic complexity of hematological diseases presents several challenges for the automatic morphological assessment. To ensure general applicability across multiple medical centers and to deliver high accuracy on prospective clinical data, AI models would require highly heterogeneous training datasets. This review presents a systematic analysis of models for cell classification and detection of hematological malignancies published in the last 5 years (2019-2024). It provides insight into the challenges and opportunities of these DL-assisted tasks.

PMID:39629240 | PMC:PMC11612571 | DOI:10.1002/hem3.70048

Categories: Literature Watch

Pushing the limits of zero-shot self-supervised super-resolution of anisotropic MR images

Wed, 2024-12-04 06:00

Proc SPIE Int Soc Opt Eng. 2024 Feb;12926:1292606. doi: 10.1117/12.3007304. Epub 2024 Apr 2.

ABSTRACT

Magnetic resonance images are often acquired as several 2D slices and stacked into a 3D volume, yielding a lower through-plane resolution than in-plane resolution. Many super-resolution (SR) methods have been proposed to address this, including those that use the inherent high-resolution (HR) in-plane signal as HR data to train deep neural networks. Techniques with this approach are generally both self-supervised and internally trained, so no external training data is required. However, in such a training paradigm, limited data are available for training machine learning models, and the frequency content of the in-plane data may be insufficient to capture the true HR image. In particular, the recovery of high-frequency information is usually lacking. In this work, we demonstrate this shortcoming with Fourier analysis; we subsequently propose and compare several approaches to address the recovery of high-frequency information. We test a particular internally trained self-supervised method named SMORE on ten subjects at three common clinical resolutions with three types of modification: frequency-type losses (Fourier and wavelet), feature-type losses, and low-resolution re-gridding strategies for estimating the residual. We find a particular combination that balances signal recovery in both the spatial and frequency domains qualitatively and quantitatively, yet none of the modifications, alone or in tandem, yields a vastly superior result. We postulate that there may be limits on internally trained techniques that such modifications cannot address, limits on modeling SR as finding a map from low resolution to HR, or both.

PMID:39629198 | PMC:PMC11613508 | DOI:10.1117/12.3007304

Categories: Literature Watch

Blood Pressure Predicted From Artificial Intelligence Analysis of Retinal Images Correlates With Future Cardiovascular Events

Wed, 2024-12-04 06:00

JACC Adv. 2024 Nov 18;3(12):101410. doi: 10.1016/j.jacadv.2024.101410. eCollection 2024 Dec.

ABSTRACT

BACKGROUND: High systolic blood pressure (SBP) is one of the leading modifiable risk factors for premature cardiovascular death. The retinal vasculature exhibits well-documented adaptations to high SBP and these vascular changes are known to correlate with atherosclerotic cardiovascular disease (ASCVD) events.

OBJECTIVES: The purpose of this study was to determine whether using artificial intelligence (AI) to predict an individual's SBP from retinal images would more accurately correlate with future ASCVD events compared to measured SBP.

METHODS: A total of 95,665 macula-centered retinal images, drawn from the 51,778 individuals in the UK Biobank who had not experienced an ASCVD event prior to retinal imaging, were used. A deep-learning model was trained to predict an individual's SBP. The correlation of subsequent ASCVD events with the AI-predicted SBP and with the mean of the measured SBP acquired at the time of retinal imaging was determined and compared.

RESULTS: The overall ASCVD event rate observed was 3.4%. The correlation between SBP and future ASCVD events was significantly higher when the AI-predicted SBP was used rather than the measured SBP: 0.067 vs. 0.049, P = 0.008. Variability in measured SBP was present in the UK Biobank (mean absolute difference = 8.2 mm Hg), which impacted the 10-year ASCVD risk score in 6% of the participants.

CONCLUSIONS: With the variability and challenges of real-world SBP measurement, AI analysis of retinal images may provide a more reliable and accurate biomarker for predicting future ASCVD events than traditionally measured SBP.

PMID:39629061 | PMC:PMC11612377 | DOI:10.1016/j.jacadv.2024.101410

Categories: Literature Watch
