Deep learning

Smartwatch ECG and artificial intelligence in detecting acute coronary syndrome compared to traditional 12-lead ECG

Tue, 2024-12-17 06:00

Int J Cardiol Heart Vasc. 2024 Dec 1;56:101573. doi: 10.1016/j.ijcha.2024.101573. eCollection 2025 Feb.

ABSTRACT

BACKGROUND: Acute coronary syndromes (ACS) require prompt diagnosis through initial electrocardiograms (ECG), but ECG machines are not always accessible. Meanwhile, smartwatches offering ECG functionality have become widespread. This study evaluates the feasibility of an image-based ECG analysis artificial intelligence (AI) system with smartwatch-based multichannel, asynchronous ECG for diagnosing ACS.

METHODS: Fifty-six patients with ACS and 15 healthy participants were included, and their standard 12-lead and smartwatch-based 9-lead ECGs were analyzed. The ACS group was categorized into ACS with acute total occlusion (ACS-O(+), culprit stenosis ≥ 99 %, n = 44) and ACS without occlusion (ACS-O(-), culprit stenosis 70 % to < 99 %, n = 12) based on coronary angiography. A deep learning-based AI-ECG tool interpreting 2-dimensional ECG images generated probability scores for ST-elevation myocardial infarction (qSTEMI), ACS (qACS), and myocardial injury (qMI: troponin I > 0.1 ng/mL).

RESULTS: The AI-driven qSTEMI, qACS, and qMI demonstrated correlation coefficients of 0.882, 0.874, and 0.872 between standard and smartwatch ECGs (all P < 0.001). The qACS score effectively distinguished ACS-O(±) from control, with AUROC for both ECGs (0.991 for standard and 0.987 for smartwatch, P = 0.745). The AUROC of qSTEMI in identifying ACS-O(+) from control was 0.989 and 0.982 with 12-lead and smartwatch (P = 0.617). Discriminating ACS-O(+) from ACS-O(-) or control presented a slight challenge, with an AUROC for qSTEMI of 0.855 for 12-lead and 0.880 for smartwatch ECGs (P = 0.352).
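The reported AUROCs have a simple rank interpretation: the probability that a randomly chosen ACS case receives a higher AI score than a randomly chosen control, with ties counted as one half. A minimal, dependency-free sketch of that statistic (illustrative only; the study presumably used standard statistical software, and the scores below are made up):

```python
def auroc(scores_pos, scores_neg):
    """Empirical AUROC: fraction of (positive, negative) pairs where the
    positive case outscores the negative one; ties count 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical qACS scores for two cases and two controls
auc = auroc([0.9, 0.8], [0.4, 0.85])  # 3 of 4 pairs ranked correctly
```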

CONCLUSION: AI-ECG scores from standard and smartwatch-based ECGs showed high concordance, with comparable diagnostic performance in differentiating ACS-O(+) and ACS-O(-). With increasing smartwatch accessibility, they may hold promise for aiding ACS diagnosis, regardless of location.

PMID:39687687 | PMC:PMC11648863 | DOI:10.1016/j.ijcha.2024.101573

Categories: Literature Watch

Improving the generalizability of white blood cell classification with few-shot domain adaptation

Tue, 2024-12-17 06:00

J Pathol Inform. 2024 Nov 7;15:100405. doi: 10.1016/j.jpi.2024.100405. eCollection 2024 Dec.

ABSTRACT

The morphological classification of nucleated blood cells is fundamental for the diagnosis of hematological diseases. Many deep learning algorithms have been implemented to automate this classification task, but most fail to classify images coming from different sources. This is known as "domain shift". Although some research has been conducted in this area, domain adaptation techniques are often computationally expensive and can introduce significant modifications to the initial cell images. In this article, we propose an easy-to-implement workflow in which we trained a model to classify images from two datasets and tested it on images from eight other datasets. An EfficientNet model was trained on a source dataset comprising images from two different datasets. It was afterwards fine-tuned on each of the eight target datasets using 100 or fewer annotated images from these datasets. Images from both the source and the target datasets underwent a color transform to put them into a standardized color style. The importance of the color transform and fine-tuning was evaluated through an ablation study and visually assessed with scatter plots, and an extensive error analysis was carried out. The model achieved an accuracy higher than 80% for every dataset and exceeded 90% for more than half of the datasets. The presented workflow yielded promising results in terms of generalizability, significantly improving performance on target datasets while keeping computational cost low and maintaining consistent color transformations. Source code is available at: https://github.com/mc2295/WBC_Generalization.
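The color-standardization step can be illustrated with a simple per-channel mean/variance transfer; the paper's actual transform may differ in detail, and the target statistics used here (mean 128, std 30) are made-up reference values, not values from the paper:

```python
from statistics import mean, pstdev

def standardize_channel(values, target_mean, target_std):
    """Shift and scale one color channel so its mean and std match a
    reference style (a simplified, hypothetical stand-in for the
    workflow's color transform)."""
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return [target_mean for _ in values]
    return [(v - mu) / sigma * target_std + target_mean for v in values]

# Example: map one channel of a stained-cell image to the reference style
src = [90, 100, 110, 120, 130]
out = standardize_channel(src, 128.0, 30.0)
```

Applied independently to each channel of both source and target images, this puts all datasets into one color style before training and fine-tuning.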

PMID:39687668 | PMC:PMC11648780 | DOI:10.1016/j.jpi.2024.100405

Categories: Literature Watch

Application of a deep learning algorithm for the diagnosis of HCC

Tue, 2024-12-17 06:00

JHEP Rep. 2024 Sep 16;7(1):101219. doi: 10.1016/j.jhepr.2024.101219. eCollection 2025 Jan.

ABSTRACT

BACKGROUND & AIMS: Hepatocellular carcinoma (HCC) is characterized by a high mortality rate. The Liver Imaging Reporting and Data System (LI-RADS) results in a considerable number of indeterminate observations, rendering an accurate diagnosis difficult.

METHODS: We developed four deep learning models for diagnosing HCC on computed tomography (CT) via a training-validation-testing approach. Thin-slice triphasic CT liver images and relevant clinical information were collected and processed for deep learning. HCC was diagnosed and verified via a 12-month clinical composite reference standard. CT observations among at-risk patients were annotated using LI-RADS. Diagnostic performance was assessed by internal validation and independent external testing. We conducted sensitivity analyses of different subgroups, deep learning explainability evaluation, and misclassification analysis.

RESULTS: From 2,832 patients and 4,305 CT observations, the best-performing model was the Spatio-Temporal 3D Convolution Network (ST3DCN), achieving areas under receiver-operating-characteristic curves (AUCs) of 0.919 (95% CI, 0.903-0.935) and 0.901 (95% CI, 0.879-0.924) at the observation (n = 1,077) and patient (n = 685) levels, respectively, during internal validation, compared with 0.839 (95% CI, 0.814-0.864) and 0.822 (95% CI, 0.790-0.853), respectively, for standard-of-care radiological interpretation. The negative predictive values of ST3DCN were 0.966 (95% CI, 0.954-0.979) and 0.951 (95% CI, 0.931-0.971), respectively. The observation-level AUCs among at-risk patients, 2-5-cm observations, and singular portovenous phase analysis of ST3DCN were 0.899 (95% CI, 0.874-0.924), 0.872 (95% CI, 0.838-0.909) and 0.912 (95% CI, 0.895-0.929), respectively. In external testing (551/717 patients/observations), the AUC of ST3DCN was 0.901 (95% CI, 0.877-0.924), which was non-inferior to radiological interpretation (AUC 0.900; 95% CI, 0.877-0.923).

CONCLUSIONS: ST3DCN achieved strong, robust performance for accurate HCC diagnosis on CT. Thus, deep learning can expedite and improve the process of diagnosing HCC.

IMPACT AND IMPLICATIONS: The clinical applicability of deep learning in HCC diagnosis is potentially huge, especially considering the expected increase in the incidence and mortality of HCC worldwide. Early diagnosis through deep learning can lead to earlier definitive management, particularly for at-risk patients. The model can be broadly deployed for patients undergoing a triphasic contrast CT scan of the liver to reduce the currently high mortality rate of HCC.

PMID:39687602 | PMC:PMC11648772 | DOI:10.1016/j.jhepr.2024.101219

Categories: Literature Watch

Correction methods and applications of ERT in complex terrain

Tue, 2024-12-17 06:00

MethodsX. 2024 Nov 22;13:103012. doi: 10.1016/j.mex.2024.103012. eCollection 2024 Dec.

ABSTRACT

Electrical Resistivity Tomography (ERT) is an efficient geophysical exploration technique widely used in the exploration of groundwater resources, environmental monitoring, engineering geological assessment, and archaeology. However, the undulation of the terrain significantly affects the accuracy of ERT data, potentially leading to false anomalies in the resistivity images and increasing the complexity of interpreting subsurface structures. This paper reviews the progress in the research on terrain correction for resistivity methods since the early 20th century. From the initial physical simulation methods to modern numerical simulation techniques, terrain correction technology has evolved to accommodate a variety of exploration site types. The paper provides a detailed introduction to various terrain correction techniques, including the ratio method, numerical simulation methods (including the finite element method and finite difference method), the angular domain method, conformal transformation method, inversion method, and orthogonal projection method. These methods correct the distortions caused by terrain using different mathematical and physical models, aiming to enhance the interpretative accuracy of ERT data. Although existing correction methods have made progress in mitigating the effects of terrain, they still have limitations such as high computational demands and poor alignment with actual geological conditions. Future research could explore the improvement of existing methods, the enhancement of computational efficiency, the reduction of resource consumption, and the use of advanced technologies like deep learning to improve the precision and reliability of corrections.
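Among the listed techniques, the ratio method is the simplest to state: apparent resistivities measured in the field are divided by the purely topographic distortion, obtained by simulating the same electrode array over a homogeneous half-space that includes the real terrain. A toy numerical sketch of that final correction step (the forward-modeling step, e.g. by finite elements, is assumed done elsewhere, and all values below are invented):

```python
def ratio_correct(measured, terrain_model, background=100.0):
    """Ratio-method terrain correction (illustrative sketch).
    measured:      apparent resistivities observed in the field.
    terrain_model: apparent resistivities simulated over a homogeneous
                   half-space WITH the real topography.
    background:    the uniform true resistivity of that model.
    The ratio terrain_model / background isolates the terrain effect,
    which is divided out of the field data."""
    return [m * background / t for m, t in zip(measured, terrain_model)]

# A ridge inflates readings at one electrode pair and deflates another:
corrected = ratio_correct([120.0, 80.0], [150.0, 50.0], background=100.0)
```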

PMID:39687593 | PMC:PMC11648252 | DOI:10.1016/j.mex.2024.103012

Categories: Literature Watch

Action recognition in rehabilitation: combining 3D convolution and LSTM with spatiotemporal attention

Tue, 2024-12-17 06:00

Front Physiol. 2024 Dec 2;15:1472380. doi: 10.3389/fphys.2024.1472380. eCollection 2024.

ABSTRACT

This study addresses the limitations of traditional sports rehabilitation, emphasizing the need for improved accuracy and response speed in real-time action detection and recognition in complex rehabilitation scenarios. We propose the STA-C3DL model, a deep learning framework that integrates 3D Convolutional Neural Networks (C3D), Long Short-Term Memory (LSTM) networks, and spatiotemporal attention mechanisms to capture nuanced action dynamics more precisely. Experimental results on multiple datasets, including NTU RGB + D, Smarthome Rehabilitation, UCF101, and HMDB51, show that the STA-C3DL model significantly outperforms existing methods, achieving up to 96.42% accuracy and an F1 score of 95.83% on UCF101, with robust performance across other datasets. The model demonstrates particular strength in handling real-time feedback requirements, highlighting its practical application in enhancing rehabilitation processes. This work provides a powerful, accurate tool for action recognition, advancing the application of deep learning in rehabilitation therapy and offering valuable support to therapists and researchers. Future research will focus on expanding the model's adaptability to unconventional and extreme actions, as well as its integration into a wider range of rehabilitation settings to further support individualized patient recovery.

PMID:39687520 | PMC:PMC11646842 | DOI:10.3389/fphys.2024.1472380

Categories: Literature Watch

Toward improving reproducibility in neuroimaging deep learning studies

Tue, 2024-12-17 06:00

Front Neurosci. 2024 Dec 2;18:1509358. doi: 10.3389/fnins.2024.1509358. eCollection 2024.

NO ABSTRACT

PMID:39687491 | PMC:PMC11647000 | DOI:10.3389/fnins.2024.1509358

Categories: Literature Watch

Raw dataset of tensile tests in a 3D-printed nylon reinforced with oriented short carbon fibers

Tue, 2024-12-17 06:00

Data Brief. 2024 Nov 20;57:111149. doi: 10.1016/j.dib.2024.111149. eCollection 2024 Dec.

ABSTRACT

This dataset presents the results of tensile tests conducted on 3D-printed nylon composites reinforced with short carbon fibers, commercially known as Onyx™. Specimens were printed using a Markforged™ Mark 2 printer with three different printing orientations: 0°, ±45°, and 90°, following the ASTM D638-22 standard for Type IV tensile specimens. The dataset includes mechanical testing data, scanning electron microscope (SEM) images, and digital image correlation (DIC) images. Mechanical test data were collected using an Instron universal testing machine, while SEM images were captured to examine microstructural features and fracture surfaces, both before and after testing. DIC images were obtained using two cameras positioned orthogonally to capture deformation on multiple planes. Limitations include fracture at the radius of the testing region in some 0° specimens and premature failure of 90° specimens, which reduced the number of captured images. These data provide valuable insights into the anisotropic mechanical behavior of 3D-printed composites and can be reused for further research on material behavior under varying conditions like multiscale simulations and deep learning algorithms.

PMID:39687376 | PMC:PMC11647152 | DOI:10.1016/j.dib.2024.111149

Categories: Literature Watch

Dataset of Sentinel-1 SAR and Sentinel-2 RGB-NDVI imagery

Tue, 2024-12-17 06:00

Data Brief. 2024 Nov 20;57:111160. doi: 10.1016/j.dib.2024.111160. eCollection 2024 Dec.

ABSTRACT

This article presents a comprehensive dataset combining Synthetic Aperture Radar (SAR) imagery from the Sentinel-1 mission with optical imagery, including RGB and Normalized Difference Vegetation Index (NDVI), from the Sentinel-2 mission. The dataset consists of 8800 images, organized into four folders (SAR_VV, SAR_VH, RGB, and NDVI), each containing 2200 images with dimensions of 512 × 512 pixels. These images were collected from various global locations using random geographic coordinates and strict criteria for cloud cover, snow presence, and water percentage, ensuring high-quality and diverse data. The primary motivation for creating this dataset is to address the limitations of optical sensors, which are often hindered by cloud cover and atmospheric conditions. By integrating SAR data, which is unaffected by these factors, the dataset offers a robust tool for a wide range of applications, including land cover classification, vegetation monitoring, and environmental change detection. The dataset is particularly valuable for training machine learning models that require multimodal inputs, such as translating SAR images to optical imagery or enhancing the quality of noisy data. Additionally, the structure of the dataset and the preprocessing steps applied make it readily usable for various research purposes. The SAR images are processed to Level-1 Ground Range Detected (GRD) format, including radiometric calibration and terrain correction, while the optical images are filtered to ensure minimal cloud interference.
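For reference, the NDVI channel distributed in the dataset follows the standard definition from near-infrared and red reflectances; a minimal per-pixel sketch (the small epsilon guarding against division by zero is our addition, not part of the dataset's documented preprocessing):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel.
    nir, red: surface reflectances in [0, 1]. Dense vegetation pushes
    the value toward +1; bare soil and water sit near or below 0."""
    return (nir - red) / (nir + red + eps)

# A vegetated pixel reflects strongly in NIR and weakly in red:
v = ndvi(0.6, 0.2)   # about 0.5
w = ndvi(0.05, 0.1)  # water/soil, negative
```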

PMID:39687373 | PMC:PMC11648187 | DOI:10.1016/j.dib.2024.111160

Categories: Literature Watch

Diffusion data augmentation for enhancing Norberg hip angle estimation

Mon, 2024-12-16 06:00

Vet Radiol Ultrasound. 2025 Jan;66(1):e13463. doi: 10.1111/vru.13463.

ABSTRACT

The Norberg angle (NA) plays a crucial role in evaluating hip joint conformation, particularly in canines, by quantifying femoral head subluxation within the hip joint. It is therefore an important metric for evaluating hip joint quality and diagnosing canine hip dysplasia, the most prevalent hereditary orthopedic disorder in dogs. While contemporary tools offer automated quantification of the NA, their usage typically entails manual labeling and verification of radiographic images by professional veterinarians. To enhance efficiency and streamline this process, this study aims to develop a tool capable of predicting the NA directly from the image without the need for veterinary intervention. Due to the challenges in acquiring annotated, diverse, high-quality images, this study introduces diffusion models to expand the training dataset from 219 to 1493 images, including the originals. This augmentation enhances the dataset's diversity and scale, thereby improving the accuracy of Norberg angle estimation. The model predicts four key points (the centers of the left and right femoral heads and the edges of the left and right acetabulum), as well as the radii of the femoral heads and the Norberg angles. By evaluating 18 distinct pretrained ImageNet models, we investigate their performance before and after incorporating augmented data from generated images. The results demonstrate a significant enhancement, with an average 35.3% improvement in mean absolute percentage error when utilizing generated images from diffusion models. This study showcases the potential of diffusion modeling in advancing canine hip dysplasia diagnosis and underscores the value of incorporating augmented data to elevate model accuracy.
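Given the four predicted key points, the Norberg angle itself reduces to planar geometry: at each femoral head center, the angle between the line to the contralateral head center and the line to the ipsilateral acetabular edge. A sketch of that final step (coordinates below are illustrative; the paper's full pipeline also predicts the femoral head radii):

```python
import math

def norberg_angle(center_same, center_other, acetab_edge):
    """Norberg angle in degrees at one femoral head center, measured
    between the line to the contralateral femoral head center and the
    line to the ipsilateral acetabular edge. Points are (x, y) tuples."""
    def heading(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    a = heading(center_same, center_other)
    b = heading(center_same, acetab_edge)
    deg = abs(math.degrees(b - a))
    return min(deg, 360.0 - deg)

# Toy right-angle configuration: acetabular edge directly "above" the
# femoral head center, contralateral center directly to the side.
na = norberg_angle((0.0, 0.0), (10.0, 0.0), (0.0, 10.0))  # 90 degrees
```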

PMID:39681980 | DOI:10.1111/vru.13463

Categories: Literature Watch

Correction: Deep learning assists early-detection of hypertension-mediated heart change on ECG signals

Mon, 2024-12-16 06:00

Hypertens Res. 2024 Dec 16. doi: 10.1038/s41440-024-01980-5. Online ahead of print.

NO ABSTRACT

PMID:39681651 | DOI:10.1038/s41440-024-01980-5

Categories: Literature Watch

Novel approach for Arabic fake news classification using embedding from large language features with CNN-LSTM ensemble model and explainable AI

Mon, 2024-12-16 06:00

Sci Rep. 2024 Dec 16;14(1):30463. doi: 10.1038/s41598-024-82111-5.

ABSTRACT

The spread of fake news challenges the management of low-quality information, making effective detection strategies necessary. This study addresses this critical issue by advancing fake news detection in Arabic and overcoming limitations in existing approaches. Deep learning models (Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), EfficientNetB4, Inception, Xception, ResNet, and ConvLSTM) and a novel voting ensemble framework combining CNN and LSTM are employed for text classification. The proposed framework integrates the ELMo word embedding technique, with its contextual representation capabilities, which is compared with GloVe, BERT, FastText, and FastText subwords. Comprehensive experiments demonstrate that the proposed voting ensemble, combined with ELMo word embeddings, consistently outperforms previous approaches. It achieves an accuracy of 98.42%, precision of 98.54%, recall of 99.5%, and an F1 score of 98.93%, offering an efficient and highly effective solution for text classification tasks. Benchmarked against state-of-the-art transformer architectures, including BERT and RoBERTa, the proposed framework demonstrates competitive performance with significantly reduced inference time and enhanced interpretability, validated with a 5-fold cross-validation technique. Furthermore, this research utilizes the LIME XAI technique to provide deeper insights into the contribution of each feature in predicting a specific target class. These findings show the proposed framework's effectiveness in dealing with the issues of detecting false news, particularly in Arabic text. By generating higher performance metrics and displaying comparable results, this work opens the way for more reliable and interpretable text classification solutions.
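The voting ensemble at the heart of the framework can be sketched as soft voting: each member model outputs class probabilities, which are averaged before taking the argmax. A minimal illustration with two hypothetical probability vectors standing in for the CNN and LSTM members (the numbers are invented, not from the paper):

```python
def soft_vote(prob_lists):
    """Soft-voting ensemble: average the class-probability vectors of
    all member models, then return (winning class index, averaged
    probabilities)."""
    n = len(prob_lists)
    k = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n for i in range(k)]
    return max(range(k), key=lambda i: avg[i]), avg

# Hypothetical outputs over classes [real, fake]:
cnn_probs = [0.7, 0.3]
lstm_probs = [0.4, 0.6]
winner, avg = soft_vote([cnn_probs, lstm_probs])  # winner = 0 ("real")
```

Averaging probabilities (rather than hard labels) lets a confident member outvote an uncertain one, which is one common motivation for this fusion style.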

PMID:39681596 | DOI:10.1038/s41598-024-82111-5

Categories: Literature Watch

Artificial Intelligence for Dental Implant Classification and Peri-Implant Pathology Identification in 2D Radiographs: A Systematic Review

Mon, 2024-12-16 06:00

J Dent. 2024 Dec 14:105533. doi: 10.1016/j.jdent.2024.105533. Online ahead of print.

ABSTRACT

OBJECTIVE: This systematic review aimed to summarize and evaluate the available information regarding the performance of artificial intelligence on dental implant classification and peri-implant pathology identification in 2D radiographs.

DATA SOURCES: Electronic databases (Medline, Embase, and Cochrane) were searched up to September 2024 for relevant observational studies and both randomized and controlled clinical trials. The search was limited to studies published in English from the last 7 years. Two reviewers independently conducted both study selection and data extraction. Risk of bias assessment was also performed individually by both operators using the Quality Assessment Diagnostic Tool (QUADAS-2).

STUDY SELECTION: Of the 1,465 records identified, 29 references were selected for qualitative analysis. The study characteristics were tabulated in a self-designed table. The QUADAS-2 tool identified 10 and 15 studies as having a high and an unclear risk of bias, respectively, while only four were categorized as low risk of bias. Overall, accuracy rates for dental implant classification ranged from 67% to 99%. Peri-implant pathology identification showed accuracy detection rates above 78.6%.

CONCLUSIONS: While AI-based models, particularly convolutional neural networks, have shown high accuracy in dental implant classification and peri-implant pathology detection, several limitations must be addressed before widespread clinical application. More advanced AI techniques, such as Federated Learning should be explored to improve the generalizability and efficiency of these models in clinical practice.

CLINICAL SIGNIFICANCE: AI-based models can help clinicians accurately classify unknown dental implants and enable early detection of peri-implantitis, improving patient outcomes and streamlining treatment planning.

PMID:39681182 | DOI:10.1016/j.jdent.2024.105533

Categories: Literature Watch

Predicting molecular subtypes of breast cancer based on multi-parametric MRI dataset using deep learning method

Mon, 2024-12-16 06:00

Magn Reson Imaging. 2024 Dec 14:110305. doi: 10.1016/j.mri.2024.110305. Online ahead of print.

ABSTRACT

PURPOSE: To develop a multi-parametric MRI model for the prediction of molecular subtypes of breast cancer using five types of breast cancer preoperative MRI images.

METHODS: In this study, we retrospectively analyzed clinical data and five types of MRI images (FS-T1WI, T2WI, Contrast-enhanced T1-weighted imaging (T1-C), DWI, and ADC) from 325 patients with pathologically confirmed breast cancer. Using the five types of MRI images as inputs to the ResNeXt50 model respectively, five base models were constructed, and then the outputs of the five base models were fused using an ensemble learning approach to develop a multi-parametric MRI model. Breast cancer was classified into four molecular subtypes based on immunohistochemical results: luminal A, luminal B, human epidermal growth factor receptor 2-positive (HER2-positive), and triple-negative (TN). The whole dataset was randomly divided into a training set (n = 260; 76 luminal A, 80 luminal B, 50 HER2-positive, 54 TN) and a testing set (n = 65; 20 luminal A, 20 luminal B, 12 HER2-positive, 13 TN). Accuracy, sensitivity, specificity, receiver operating characteristic curve (ROC) and area under the curve (AUC) were calculated to assess the predictive performance of the models.

RESULTS: In the testing set, for the assessment of the four molecular subtypes of breast cancer, the multi-parametric MRI model yielded an AUC of 0.859-0.912; the FS-T1WI, T2WI, T1-C, DWI, and ADC base models achieved AUCs of 0.632-0.814, 0.641-0.788, 0.621-0.709, 0.620-0.701, and 0.611-0.785, respectively.

CONCLUSION: The multi-parametric MRI model we developed outperformed the base models in predicting breast cancer molecular subtypes. Our study also showed the potential of FS-T1WI base model in predicting breast cancer molecular subtypes.

PMID:39681144 | DOI:10.1016/j.mri.2024.110305

Categories: Literature Watch

Pose analysis in free-swimming adult zebrafish, Danio rerio: "fishy" origins of movement design

Mon, 2024-12-16 06:00

Brain Behav Evol. 2024 Dec 16:1-26. doi: 10.1159/000543081. Online ahead of print.

ABSTRACT

INTRODUCTION: Movement requires maneuvers that generate thrust to either make turns or move the body forward in physical space. The computational space for perpetually controlling the relative position of every point on the body surface can be vast. We hypothesize the evolution of an efficient design for movement that minimizes active (neural) control by leveraging the passive (reactive) forces at play between the body and the surrounding medium. To test our hypothesis, we investigate the presence of stereotypical postures during free swimming in adult zebrafish, Danio rerio.

METHODS: We perform markerless tracking using DeepLabCut, a deep learning pose estimation toolkit, to track geometric relationships between body parts. To identify putative clusters of postural configurations obtained from twelve freely behaving zebrafish, we use unsupervised multivariate time-series analysis (B-SOiD machine learning software).

RESULTS: When applied to single individuals, this method reveals a best fit of 36 to 50 clusters, in contrast to 86 clusters for data pooled from all 12 animals. The centroids of each cluster obtained over 14,000 sequential frames recorded for a single fish represent an a priori classification into relatively stable "target body postures" and inter-pose "transitional postures" that lead towards and away from a target pose. We use multidimensional scaling of mean parameter values for each cluster to map cluster centroids within two dimensions of postural space. From a post hoc visual analysis, we condense neighboring postural variants into 15 superclusters, or core body configurations. We develop a nomenclature specifying the anteroposterior level(s) (upper, mid, and lower) and degree of bending.

CONCLUSION: Our results suggest that constraining bends to mainly three levels in adult zebrafish preempts the neck, fore-, and hindlimb design for maneuverability in land vertebrates.

PMID:39681106 | DOI:10.1159/000543081

Categories: Literature Watch

Ultra-fast prediction of D-pi-A organic dye absorption maximum with advanced ensemble deep learning models

Mon, 2024-12-16 06:00

Spectrochim Acta A Mol Biomol Spectrosc. 2024 Dec 9;329:125536. doi: 10.1016/j.saa.2024.125536. Online ahead of print.

ABSTRACT

The quick and precise estimation of the absorption maxima of D-π-A organic dyes in different solvents is an important challenge for the efficient design of novel chemical structures that could improve the performance of dye-sensitized solar cells (DSSCs) and related technologies. Time-Dependent Density Functional Theory (TD-DFT) has often been employed for these predictions, but it has limitations, including high computing costs and functional dependence, particularly for solvent interactions. In this study, we introduce a rapid, high-accuracy deep learning ensemble method using Daylight fingerprints as chemical descriptors to predict the absorption maxima (λmax) of D-π-A organic dyes in 18 different solvent environments. Our approach leverages an ensemble of 10 deep learning models spanning multiple neural architectures, including convolutional networks, to capture the complex relationships between molecular structure, solvent interaction, and absorption maximum. Drawing on a comprehensive range of molecular descriptors from organic dye fingerprints, we developed a highly accurate ensemble model with an R2 of 0.94 and a mean absolute error (MAE) of 8.6 nm, which enhances predictive accuracy and significantly reduces computational time. Additionally, we developed a user-friendly web-based platform that allows quick prediction of absorption maxima, including solvent effects. This tool, which directly uses SMILES representations and advanced deep learning techniques, offers significant potential for accelerating the discovery of efficient dye candidates for various applications, including solar energy, environmental solutions, and medical research. This research opens the door to more effective next-generation dye design, facilitating rapid testing in a variety of fields and the design of efficient new materials.
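The two reported figures of merit are standard regression metrics; for clarity, minimal definitions are sketched below (the wavelength values are illustrative numbers in nm, not the paper's data):

```python
def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of prediction errors."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 minus residual sum of squares
    over total sum of squares around the mean."""
    mu = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mu) ** 2 for a in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted absorption maxima (nm):
y_true = [400.0, 450.0, 500.0]
y_pred = [410.0, 440.0, 510.0]
err = mae(y_true, y_pred)   # 10.0 nm
fit = r2(y_true, y_pred)
```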

PMID:39681030 | DOI:10.1016/j.saa.2024.125536

Categories: Literature Watch

Descriptive overview of AI applications in x-ray imaging and radiotherapy

Mon, 2024-12-16 06:00

J Radiol Prot. 2024 Dec 16. doi: 10.1088/1361-6498/ad9f71. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) is transforming medical radiation applications by handling complex data, learning patterns, and making accurate predictions, leading to improved patient outcomes. This article examines the use of AI in optimizing radiation doses for X-ray imaging, improving radiotherapy outcomes, and briefly addresses the benefits, challenges, and limitations of AI integration into clinical workflows. In diagnostic radiology, AI plays a pivotal role in optimizing radiation exposure, reducing noise, enhancing image contrast, and lowering radiation doses, especially in high-dose procedures like computed tomography. Deep learning-powered CT reconstruction methods have already been incorporated into clinical routine. Moreover, AI-powered methodologies have been developed to provide real-time, patient-specific radiation dose estimates. These AI-driven tools have the potential to streamline workflows and potentially become integral parts of imaging practices. In radiotherapy, AI's ability to automate and enhance the precision of treatment planning is emphasized. Traditional methods, such as manual contouring, are time-consuming and prone to variability. AI-driven techniques, particularly deep learning models, are automating the segmentation of organs and tumors, improving the accuracy of radiation delivery, and minimizing damage to healthy tissues. Moreover, AI supports adaptive radiotherapy, allowing continuous optimization of treatment plans based on changes in a patient's anatomy over time, ensuring the highest accuracy in radiation delivery and better therapeutic outcomes. Some of these methods have been validated and integrated into radiation treatment systems, while others are not yet ready for routine clinical use mainly due to challenges in validation, particularly ensuring reliability across diverse patient populations and clinical settings. 
Despite the potential of AI, there are challenges in fully integrating these technologies into clinical practice. Issues such as data protection, privacy, data quality, model validation, and the need for large and diverse datasets are crucial to ensuring the reliability of AI systems.

PMID:39681008 | DOI:10.1088/1361-6498/ad9f71

Categories: Literature Watch

Development of A Low-Dose Strategy for Propagation-based Imaging Helical Computed Tomography (PBI-HCT): High Image Quality and Reduced Radiation Dose

Mon, 2024-12-16 06:00

Biomed Phys Eng Express. 2024 Dec 16. doi: 10.1088/2057-1976/ad9f66. Online ahead of print.

ABSTRACT

BACKGROUND: Propagation-based imaging computed tomography (PBI-CT) has recently been emerging for visualizing low-density materials due to its excellent image contrast and high resolution. Building on this, PBI-CT with a helical acquisition mode (PBI-HCT) offers superior imaging quality (e.g., fewer ring artifacts) and dose uniformity, making it ideal for biomedical imaging applications. However, the excessive radiation dose associated with high-resolution PBI-HCT may potentially harm objects or hosts being imaged, especially in live animal imaging, raising a great need to reduce radiation dose.

METHODS: In this study, we strategically integrated Sparse2Noise (a deep learning approach) with PBI-HCT imaging to reduce radiation dose without compromising image quality. Sparse2Noise uses paired low-dose noisy images with different photon fluxes and projection numbers for high-quality reconstruction via a convolutional neural network (CNN). We then examined the imaging quality and radiation dose of PBI-HCT imaging using Sparse2Noise, as compared with Sparse2Noise in low-dose PBI-CT imaging (circular scanning mode). Furthermore, we conducted a comparison study of Sparse2Noise versus two other state-of-the-art low-dose imaging algorithms (Noise2Noise and Noise2Inverse) for imaging low-density materials using PBI-HCT at equivalent dose levels.

RESULTS: Sparse2Noise allowed a 90% dose reduction in PBI-HCT imaging while maintaining high image quality. Compared with PBI-CT imaging, the use of Sparse2Noise in PBI-HCT imaging proved more effective, reducing additional radiation dose by 30%-36%. Furthermore, the helical scanning mode also enhanced the performance of the existing low-dose algorithms (Noise2Noise and Noise2Inverse); nevertheless, Sparse2Noise showed a significantly higher signal-to-noise ratio (SNR) than Noise2Noise and Noise2Inverse at the same radiation dose level.

CONCLUSIONS AND SIGNIFICANCE: Our proposed low-dose imaging strategy, Sparse2Noise, can be effectively applied to the PBI-HCT imaging technique and requires a lower dose for acceptable imaging quality. This represents a significant advance for low-density materials imaging and for future live animal imaging applications.

PMID:39681007 | DOI:10.1088/2057-1976/ad9f66

Categories: Literature Watch

Validation of a Rapid Algorithm for Repeated Intensity Modulated Radiation Therapy Dose Calculations

Mon, 2024-12-16 06:00

Biomed Phys Eng Express. 2024 Dec 16. doi: 10.1088/2057-1976/ad9f6a. Online ahead of print.

ABSTRACT

As adaptive radiotherapy workflows and deep learning model training rise in popularity, the need for repeated applications of a rapid dose-calculation algorithm increases. In this work we evaluate the feasibility of a simple algorithm that can calculate dose directly from MLC positions in near real-time. Given the necessary machine parameters, the intensity modulated radiation therapy (IMRT) doses are calculated and can be used in optimization, deep learning model training, or other cases where fast repeated segment dose calculations are needed. The algorithm uses normalized beamlets to modify a pre-calculated patient-specific open field into any MLC segment shape. This algorithm was validated on 91 prostate IMRT plans as well as 20 lung IMRT plans generated for the Elekta Unity MR-Linac. IMRT plans calculated using the proposed method were found to match reference Monte Carlo calculated dose within 98.02±0.84% and 96.57±2.41% for prostate and lung patients, respectively, with a 3%/2 mm gamma criterion. After the patient-specific open field calculation, the algorithm can calculate the dose of a 9-field IMRT plan in 1.016±0.284 s for a single patient, or 0.264 ms per patient for a parallelized batch of 24 patients relevant for deep learning training. The presented algorithm demonstrates an alternative rapid IMRT dose calculator that does not rely on training a deep learning model while remaining competitive in speed and accuracy, making it a compelling choice in cases where repetitive dose calculation is desired.
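The core idea, composing a pre-calculated patient-specific open field from per-beamlet dose contributions and then re-weighting those beamlets by the MLC aperture, can be sketched as follows. This is a toy illustration with random beamlet kernels and binary apertures, not the authors' implementation:

```python
import numpy as np

# Hypothetical setup: the open field decomposes into N beamlets, each with a
# pre-calculated, patient-specific dose contribution on a small 2-D grid.
rng = np.random.default_rng(1)
n_beamlets, grid = 16, (32, 32)
beamlet_dose = rng.random((n_beamlets,) + grid)  # dose per unit-weight beamlet

# Open-field dose = all beamlets fully open.
open_field = beamlet_dose.sum(axis=0)

def segment_dose(aperture_weights: np.ndarray) -> np.ndarray:
    """Dose for one MLC segment: weight each normalized beamlet by how much
    of it the leaves expose (0 = fully blocked, 1 = fully open)."""
    return np.tensordot(aperture_weights, beamlet_dose, axes=1)

# An IMRT plan is then a sum of segment doses, cheap enough to repeat in a
# loop for optimization or for generating deep-learning training data.
segments = [rng.integers(0, 2, n_beamlets).astype(float) for _ in range(9)]
plan_dose = sum(segment_dose(s) for s in segments)

# Sanity check: a fully open segment reproduces the open field exactly.
assert np.allclose(segment_dose(np.ones(n_beamlets)), open_field)
```

Because `segment_dose` is a single tensor contraction, batching many patients or segments maps naturally onto vectorized or GPU execution, which is consistent with the per-patient timings the abstract reports.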

PMID:39681005 | DOI:10.1088/2057-1976/ad9f6a

Categories: Literature Watch

Bidirectional Interaction Directional Variance Attention Model Based on Increased-Transformer for Thyroid Nodule Classification

Mon, 2024-12-16 06:00

Biomed Phys Eng Express. 2024 Dec 16. doi: 10.1088/2057-1976/ad9f68. Online ahead of print.

ABSTRACT

Malignant thyroid nodules are closely linked to cancer, making the precise classification of thyroid nodules into benign and malignant categories highly significant. However, the subtle differences in contour between benign and malignant thyroid nodules, combined with the texture features obscured by the inherent noise in ultrasound images, often result in low classification accuracy in most models. To address this, we propose a Bidirectional Interaction Directional Variance Attention Model based on Increased-Transformer, named IFormer-DVNet. This paper proposes the Increased-Transformer, which enables global feature modeling of feature maps extracted by the Convolutional Feature Extraction Module (CFEM). This design maximally alleviates noise interference in ultrasound images. The Bidirectional Interaction Directional Variance Attention module (BIDVA) dynamically calculates attention weights using the variance of input tensors along both vertical and horizontal directions. This allows the model to focus more effectively on regions with rich information in the image. The vertical and horizontal features are interactively combined to enhance the model's representational capability. During the model training process, we designed a Multi-Dimensional Loss function (MD Loss) to stretch the boundary distance between different classes and reduce the distance between samples of the same class. Additionally, the MD Loss function helps mitigate issues related to class imbalance in the dataset. We evaluated our network model using the public TNCD dataset and a private dataset. The results show that our network achieved an accuracy of 76.55% on the TNCD dataset and 93.02% on the private dataset. Compared to other state-of-the-art classification networks, our model outperformed them across all evaluation metrics.
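The directional-variance idea, deriving attention weights from the variance of the feature map along the vertical and horizontal directions so that information-rich rows and columns are emphasized, can be sketched in a few lines. This is a simplified single-map illustration of the principle, not the BIDVA module itself:

```python
import numpy as np

def softmax(v: np.ndarray) -> np.ndarray:
    e = np.exp(v - v.max())
    return e / e.sum()

def directional_variance_attention(feat: np.ndarray) -> np.ndarray:
    """Toy sketch: weight rows and columns of a 2-D feature map by the
    variance of the features along each direction."""
    row_var = feat.var(axis=1)         # variance along the horizontal direction
    col_var = feat.var(axis=0)         # variance along the vertical direction
    row_w = softmax(row_var)[:, None]  # per-row attention weights
    col_w = softmax(col_var)[None, :]  # per-column attention weights
    # Bidirectional interaction: combine the two directional weightings.
    return feat * row_w * col_w

feat = np.array([[0.0, 0.0, 0.0],
                 [0.0, 5.0, 0.0],
                 [0.0, 0.0, 0.0]])
out = directional_variance_attention(feat)
# The high-variance center row and column receive the largest weight.
print(out[1, 1] == out.max())
```

In the actual model these weights would modulate multi-channel feature maps inside the Increased-Transformer rather than a single 2-D array, but the variance-driven weighting is the same mechanism.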

PMID:39681000 | DOI:10.1088/2057-1976/ad9f68

Categories: Literature Watch

Blood Pressure Estimation Using Explainable Deep-Learning Models Based on Photoplethysmography

Mon, 2024-12-16 06:00

Anesth Analg. 2025 Jan 1;140(1):119-128. doi: 10.1213/ANE.0000000000007295. Epub 2024 Dec 16.

ABSTRACT

BACKGROUND: Due to their invasiveness, arterial lines are not typically used in routine monitoring, despite their superior responsiveness in hemodynamic monitoring and detecting intraoperative hypotension. To address this issue, noninvasive, continuous arterial pressure monitoring is necessary. We developed a deep-learning model that reconstructs continuous mean arterial pressure (MAP) from the photoplethysmography (PPG) signal and compared it to the arterial-line gold standard.

METHODS: We analyzed high-frequency PPG signals from 117 patients in neuroradiology and digestive surgery, with a median of 2201 (interquartile range [IQR], 788-4775) measurements per patient. We compared models with different combinations of convolutional and recurrent layers, using the high-frequency PPG signal and derived features, including dicrotic notch relative amplitude, perfusion index, and heart rate, as inputs to our neural network. Mean absolute error (MAE) was used as the performance metric. Explainability of the deep-learning model was assessed with Grad-CAM, a visualization technique that uses saliency maps to highlight the parts of an input that are significant to a deep-learning model's decision-making process.
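The MAE metric used above is simply the average absolute deviation between the reconstructed and invasive pressures, in mm Hg. A minimal sketch with invented sample values (purely illustrative, not study data):

```python
import numpy as np

def mean_absolute_error(pred_map: np.ndarray, ref_map: np.ndarray) -> float:
    """Mean absolute error between predicted MAP and the arterial-line
    reference, in the same units (mm Hg)."""
    return float(np.mean(np.abs(pred_map - ref_map)))

# Invented example: reconstructed vs. invasive MAP samples (mm Hg).
pred = np.array([82.0, 79.5, 91.0, 74.0])
ref = np.array([80.0, 81.0, 88.0, 75.0])
print(mean_absolute_error(pred, ref))  # → 1.875
```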

RESULTS: An MAP baseline model, which consisted only of standard cuff measures, reached an MAE of 6.1 (± 14.5) mm Hg. In contrast, the deep-learning model achieved an MAE of 3.5 (± 4.4) mm Hg on the external test set (a 42.6% improvement). This model also achieved the narrowest confidence intervals and met international standards used within the community (grade A). The saliency map revealed that the deep-learning model primarily extracts information near the dicrotic notch region.

CONCLUSIONS: Our deep-learning model noninvasively estimates arterial pressure with high accuracy. This model may show potential as a decision-support tool in operating-room settings, particularly in scenarios where invasive blood pressure monitoring is unavailable.

PMID:39680992 | DOI:10.1213/ANE.0000000000007295

Categories: Literature Watch
