Deep learning
Application of a Diabetic Foot Smart APP in the measurement of diabetic foot ulcers
Int J Orthop Trauma Nurs. 2024 Mar 22;54:101095. doi: 10.1016/j.ijotn.2024.101095. Online ahead of print.
ABSTRACT
AIMS: In an earlier phase of this work, we developed an intelligent measurement application for diabetic foot ulcers, named the Diabetic Foot Smart APP. This study aimed to validate the APP for measuring the ulcer area of diabetic foot ulcers (DFU).
METHODS: We selected 150 DFU images and measured the ulcer areas using three assessment tools: the Smart APP software, the ruler method, and the gold-standard Image J software, then compared the measurement results and measurement times of the three tools. Intra-rater and inter-rater reliability were described by the Pearson correlation coefficient, the intraclass correlation coefficient, and the coefficient of variation.
RESULTS: The Image J software showed a median ulcer area of 4.02 cm2, with a mean measurement time of 66.37 ± 7.95 s. The ruler method showed a median ulcer area of 5.14 cm2, with a mean measurement time of 171.47 ± 46.43 s. The APP software showed a median ulcer area of 3.70 cm2, with a mean measurement time of 38.25 ± 6.81 s. There was a significant difference between the ruler method and the gold-standard Image J software (Z = -4.123, p < 0.05), but no significant difference between the APP software and the Image J software (Z = 1.103, p > 0.05). The APP software also showed good inter-rater and intra-rater reliability, with both reaching 0.99.
CONCLUSION: The Diabetic Foot Smart APP is a fast and reliable measurement tool with high measurement accuracy that can be easily used in clinical practice for the measurement of ulcer areas of DFU.
TRIAL REGISTRATION: Chinese clinical trial registration number: ChiCTR2100047210.
PMID:38599150 | DOI:10.1016/j.ijotn.2024.101095
Hybrid WT-CNN-GRU-based model for the estimation of reservoir water quality variables considering spatio-temporal features
J Environ Manage. 2024 Apr 9;358:120756. doi: 10.1016/j.jenvman.2024.120756. Online ahead of print.
ABSTRACT
Water quality indicators (WQIs), such as chlorophyll-a (Chl-a) and dissolved oxygen (DO), are crucial for understanding and assessing the health of aquatic ecosystems. Precise prediction of these indicators is fundamental for the efficient administration of rivers, lakes, and reservoirs. This research utilized two deep learning (DL) algorithms, namely convolutional neural networks (CNNs) and gated recurrent units (GRUs), alongside their combination, CNN-GRU, to estimate the concentration of these indicators within a reservoir. Moreover, to optimize the outcomes of the developed hybrid model, we considered the impact of a decomposition technique, specifically the wavelet transform (WT). In addition, we built two machine learning (ML) algorithms, namely random forest (RF) and support vector regression (SVR), to demonstrate the superior performance of the deep learning algorithms over individual ML ones. To achieve this, we first gathered WQIs from diverse locations and varying depths within the reservoir using an AAQ-RINKO device in the study area. It is important to highlight that, despite the use of diverse data-driven models for water quality estimation, a significant gap persists in the existing literature regarding a comprehensive hybrid algorithm that integrates the wavelet transform, CNN, and GRU methodologies to estimate WQIs accurately within a spatiotemporal framework. The effectiveness of the developed models was then assessed using various statistical metrics, encompassing the correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE), throughout both the training and testing phases. The findings demonstrated that the WT-CNN-GRU model outperformed the other algorithms by 13% (SVR), 13% (RF), 9% (CNN), and 8% (GRU) when R-squared was used as the evaluation index and DO as the target WQI.
PMID:38599080 | DOI:10.1016/j.jenvman.2024.120756
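The abstract above names the WT-CNN-GRU combination but gives no implementation details, so the following is only a minimal sketch of how a wavelet decomposition can feed a CNN-GRU regressor for a single water-quality series. The wavelet family, window length, layer sizes, and the use of PyWavelets/PyTorch are assumptions for illustration, not the authors' configuration.

```python
# Minimal WT-CNN-GRU sketch (illustrative only; all hyperparameters are assumptions)
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(series, wavelet="db4", level=2):
    """Decompose a 1D series and stack the reconstructed sub-bands as channels."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[: len(series)])
    return np.stack(bands)                     # shape: (level + 1, T)

class CNNGRU(nn.Module):
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # predicts one WQI value (e.g., DO or Chl-a)

    def forward(self, x):                      # x: (batch, channels, T)
        h = self.conv(x)                       # (batch, 32, T)
        out, _ = self.gru(h.transpose(1, 2))   # (batch, T, hidden)
        return self.head(out[:, -1])           # regression from the last time step

# toy usage: one decomposed input window of length 64
x = torch.tensor(wavelet_features(np.random.rand(64)), dtype=torch.float32).unsqueeze(0)
print(CNNGRU(in_channels=x.shape[1])(x).shape)   # torch.Size([1, 1])
```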
A feature-enhanced network for stroke lesion segmentation from brain MRI images
Comput Biol Med. 2024 Mar 26;174:108326. doi: 10.1016/j.compbiomed.2024.108326. Online ahead of print.
ABSTRACT
Accurate and expeditious segmentation of stroke lesions can greatly assist physicians in making accurate medical diagnoses and administering timely treatments. However, current deep learning methods have two limitations. On the one hand, the attention structure utilizes only local features, which misleads the subsequent segmentation; on the other hand, simple downsampling compromises task-relevant detailed semantic information. To address these challenges, we propose a novel feature refinement and protection network (FRPNet) for stroke lesion segmentation. FRPNet employs a symmetric encoding-decoding structure and incorporates twin attention gate (TAG) and multi-dimension attention pooling (MAP) modules. The TAG module leverages the self-attention mechanism and bi-directional attention to extract both global and local features of the lesion, while the MAP module establishes multidimensional pooling attention to effectively mitigate the loss of features during the encoding process. Extensive comparative experiments show that our method significantly outperforms state-of-the-art approaches, with 60.16% DSC and 36.20 px HD, and 85.72% DSC and 27.02 px HD, on two ischemic stroke datasets that contain all stroke stages and several sequences of stroke images. These results, which exceed those of existing methods, illustrate the efficacy and generalizability of the proposed method. The source code is released at https://github.com/wu2ze2lin2/FRPNet.
PMID:38599066 | DOI:10.1016/j.compbiomed.2024.108326
Deep learning-based glomerulus detection and classification with generative morphology augmentation in renal pathology images
Comput Med Imaging Graph. 2024 Mar 29;115:102375. doi: 10.1016/j.compmedimag.2024.102375. Online ahead of print.
ABSTRACT
Glomerulus morphology on renal pathology images provides valuable information for diagnosis and outcome prediction. To provide better care, an efficient, standardized, and scalable method is urgently needed to optimize the time-consuming and labor-intensive interpretation process performed by renal pathologists. This paper proposes a deep convolutional neural network (CNN)-based approach to automatically detect and classify glomeruli with different stains in renal pathology images. In the glomerulus detection stage, this paper proposes a flattened Xception with a feature pyramid network (FX-FPN). The FX-FPN is employed as a backbone in the framework of a faster region-based CNN to improve glomerulus detection performance. In the classification stage, this paper considers classification of five glomerulus morphologies using a flattened Xception classifier. To endow the classifier with higher discriminability, this paper proposes a generative data augmentation approach for patch-based glomerulus morphology augmentation. New glomerulus patches of different morphologies are generated for data augmentation through the cycle-consistent generative adversarial network (CycleGAN). The detection model alone achieves an F1 score of up to 0.9524 on H&E and PAS stains. The classification results show that the average sensitivity and specificity are 0.7077 and 0.9316, respectively, using the flattened Xception with the original training data, and increase to 0.7623 and 0.9443, respectively, with the generative data augmentation. Comparisons with different deep CNN models show the effectiveness and superiority of the proposed approach.
PMID:38599040 | DOI:10.1016/j.compmedimag.2024.102375
Development of an automatic surgical planning system for high tibial osteotomy using artificial intelligence
Knee. 2024 Apr 9;48:128-137. doi: 10.1016/j.knee.2024.03.008. Online ahead of print.
ABSTRACT
BACKGROUND: This study proposed an automatic surgical planning system for high tibial osteotomy (HTO) using deep learning-based artificial intelligence and validated its accuracy. The system simulates osteotomy and measures lower-limb alignment parameters in pre- and post-osteotomy simulations.
METHODS: A total of 107 whole-leg standing radiographs were obtained from 107 patients who underwent HTO. First, the system detected anatomical landmarks on radiographs. Then, it simulated osteotomy and automatically measured five parameters in pre- and post-osteotomy simulation (hip knee angle [HKA], weight-bearing line ratio [WBL ratio], mechanical lateral distal femoral angle [mLDFA], mechanical medial proximal tibial angle [mMPTA], and mechanical lateral distal tibial angle [mLDTA]). The accuracy of the measured parameters was validated by comparing them with the ground truth (GT) values given by two orthopaedic surgeons.
RESULTS: All absolute errors of the system were within 1.5° or 1.5%. All intraclass correlation coefficient (ICC) values between the system and the GT showed good reliability (>0.80). Excellent reliability was observed for the HKA (0.99) and WBL ratio (>0.99) in the pre-osteotomy simulation. The intra-rater reliability of the system was excellent, with an ICC value of 1.00 for all lower-limb alignment parameters in the pre- and post-osteotomy simulations. In addition, the measurement time per radiograph (0.24 s) was considerably shorter than that of an orthopaedic surgeon (118 s).
CONCLUSION: The proposed system is practically applicable because it can measure lower-limb alignment parameters accurately and quickly in pre- and post-osteotomy simulations. The system has potential applications in surgical planning systems.
PMID:38599029 | DOI:10.1016/j.knee.2024.03.008
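For readers unfamiliar with the alignment parameters listed in the abstract, the sketch below shows how the hip-knee-angle deviation (HKA) and the weight-bearing line (WBL) ratio are conventionally computed once landmark coordinates are available. It is a generic geometric illustration under the standard definitions, not the paper's deep learning pipeline; the landmark coordinates are hypothetical, and the clinical varus/valgus sign convention is ignored.

```python
# Generic geometry behind two alignment parameters (illustrative; not the authors' code).
import numpy as np

def angle_between(v1, v2):
    """Unsigned angle in degrees between two 2D vectors."""
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def hka_deviation(hip, knee, ankle):
    """Deviation of hip-knee-ankle alignment from a straight (180 deg) leg:
    the angle between the femoral (hip->knee) and tibial (knee->ankle) mechanical axes."""
    return angle_between(knee - hip, ankle - knee)

def wbl_ratio(hip, ankle, plateau_medial, plateau_lateral):
    """Fraction of the tibial plateau width (from the medial edge) at which the
    weight-bearing line (hip->ankle) crosses the plateau segment."""
    r = ankle - hip                       # direction of the weight-bearing line
    s = plateau_lateral - plateau_medial  # direction of the plateau segment
    cross = lambda a, b: a[0] * b[1] - a[1] * b[0]
    return cross(plateau_medial - hip, r) / cross(r, s)

# hypothetical landmark coordinates in image space (pixels)
hip, knee, ankle = np.array([100.0, 0.0]), np.array([95.0, 400.0]), np.array([100.0, 800.0])
plateau_medial, plateau_lateral = np.array([60.0, 390.0]), np.array([130.0, 390.0])
print(round(hka_deviation(hip, knee, ankle), 2),
      round(wbl_ratio(hip, ankle, plateau_medial, plateau_lateral), 3))
# -> roughly 1.43 (degrees of deviation) and 0.571 (57.1% from the medial edge)
```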
Noninvasive virtual biopsy using micro-registered optical coherence tomography (OCT) in human subjects
Sci Adv. 2024 Apr 12;10(15):eadi5794. doi: 10.1126/sciadv.adi5794. Epub 2024 Apr 10.
ABSTRACT
Histological hematoxylin and eosin-stained (H&E) tissue sections are used as the gold standard for pathologic detection of cancer, tumor margin detection, and disease diagnosis. Producing H&E sections, however, is invasive and time-consuming. While deep learning has shown promise in virtual staining of unstained tissue slides, true virtual biopsy requires staining of images taken from intact tissue. In this work, we developed a micron-accuracy coregistration method [micro-registered optical coherence tomography (OCT)] that can take a two-dimensional (2D) H&E slide and find the exact corresponding section in a 3D OCT image taken from the original fresh tissue. We trained a conditional generative adversarial network using the paired dataset and showed high-fidelity conversion of noninvasive OCT images to virtually stained H&E slices in both 2D and 3D. Applying these trained neural networks to in vivo OCT images should enable physicians to readily incorporate OCT imaging into their clinical practice, reducing the number of unnecessary biopsy procedures.
PMID:38598626 | DOI:10.1126/sciadv.adi5794
DeepDynaForecast: Phylogenetic-informed graph deep learning for epidemic transmission dynamic prediction
PLoS Comput Biol. 2024 Apr 10;20(4):e1011351. doi: 10.1371/journal.pcbi.1011351. Online ahead of print.
ABSTRACT
In the midst of an outbreak or sustained epidemic, reliable prediction of transmission risks and patterns of spread is critical to inform public health programs. Projections of transmission growth or decline among specific risk groups can aid in optimizing interventions, particularly when resources are limited. Phylogenetic trees have been widely used in the detection of transmission chains and high-risk populations. Moreover, tree topology and the incorporation of population parameters (phylodynamics) can be useful in reconstructing the evolutionary dynamics of an epidemic across space and time among individuals. We now demonstrate the utility of phylodynamic trees for transmission modeling and forecasting, developing a phylogeny-based deep learning system, referred to as DeepDynaForecast. Our approach leverages a primal-dual graph learning structure with shortcut multi-layer aggregation, which is suited for the early identification and prediction of transmission dynamics in emerging high-risk groups. We demonstrate the accuracy of DeepDynaForecast using simulated outbreak data and the utility of the learned model using empirical, large-scale data from the human immunodeficiency virus epidemic in Florida between 2012 and 2020. Our framework is available as open-source software (MIT license) at github.com/lab-smile/DeepDynaForcast.
PMID:38598563 | DOI:10.1371/journal.pcbi.1011351
From explanation to intervention: Interactive knowledge extraction from Convolutional Neural Networks used in radiology
PLoS One. 2024 Apr 10;19(4):e0293967. doi: 10.1371/journal.pone.0293967. eCollection 2024.
ABSTRACT
Deep Learning models such as Convolutional Neural Networks (CNNs) are very effective at extracting complex image features from medical X-rays. However, the limited interpretability of CNNs has hampered their deployment in medical settings as they failed to gain trust among clinicians. In this work, we propose an interactive framework to allow clinicians to ask what-if questions and intervene in the decisions of a CNN, with the aim of increasing trust in the system. The framework translates a layer of a trained CNN into a measurable and compact set of symbolic rules. Expert interactions with visualizations of the rules promote the use of clinically-relevant CNN kernels and attach meaning to the rules. The definition and relevance of the kernels are supported by radiomics analyses and permutation evaluations, respectively. CNN kernels that do not have a clinically-meaningful interpretation are removed without affecting model performance. By allowing clinicians to evaluate the impact of adding or removing kernels from the rule set, our approach produces an interpretable refinement of the data-driven CNN in alignment with medical best practice.
PMID:38598468 | DOI:10.1371/journal.pone.0293967
Deep learning to predict rapid progression of Alzheimer's disease from pooled clinical trials: A retrospective study
PLOS Digit Health. 2024 Apr 10;3(4):e0000479. doi: 10.1371/journal.pdig.0000479. eCollection 2024 Apr.
ABSTRACT
The rate of progression of Alzheimer's disease (AD) differs dramatically between patients. Identifying the most rapid progressors (RPs) is critical because, when their numbers differ between treated and control groups, the imbalance distorts the outcome, making it impossible to tell whether the treatment was beneficial. Much recent effort, therefore, has gone into identifying RPs. We pooled de-identified placebo-arm data from three randomized controlled trials (RCTs), EXPEDITION, EXPEDITION 2, and EXPEDITION 3, provided by Eli Lilly and Company. After processing, the data included 1603 mild-to-moderate AD patients with 80 weeks of longitudinal observations on neurocognitive health, brain volumes, and amyloid-beta (Aβ) levels. RPs were defined by changes in four neurocognitive/functional health measures. We built deep learning models using recurrent neural networks with attention mechanisms to predict RPs by week 80 based on varying observation periods from baseline (e.g., 12, 28 weeks). Feature importance scores for RP prediction were computed, and temporal feature trajectories were compared between RPs and non-RPs. Our evaluation and analysis focused on models trained with 28 weeks of observation. The models achieved robust internal-validation areas under the receiver operating characteristic curve (AUROCs) ranging from 0.80 (95% CI 0.79-0.82) to 0.82 (0.81-0.83), and areas under the precision-recall curve (AUPRCs) from 0.34 (0.32-0.36) to 0.46 (0.44-0.49). External-validation AUROCs ranged from 0.75 (0.70-0.81) to 0.83 (0.82-0.84) and AUPRCs from 0.27 (0.25-0.29) to 0.45 (0.43-0.48). Aβ plasma levels, regional brain volumetry, and neurocognitive health emerged as important factors for the model prediction. In addition, the trajectories were stratified between predicted RPs and non-RPs by factors such as ventricular volumes and neurocognitive domains. Our findings will greatly aid clinical trialists in designing trials for new medications, representing a key step toward identifying effective new AD therapies.
PMID:38598464 | DOI:10.1371/journal.pdig.0000479
Deep Learning Model for Quality Assessment of Urinary Bladder Ultrasound Images using Multi-scale and Higher-order Processing
IEEE Trans Ultrason Ferroelectr Freq Control. 2024 Apr 10;PP. doi: 10.1109/TUFFC.2024.3386919. Online ahead of print.
ABSTRACT
Autonomous Ultrasound Image Quality Assessment (US-IQA) is a promising tool to aid interpretation by practicing sonographers and to enable the future robotization of ultrasound procedures. However, autonomous US-IQA faces several challenges. Ultrasound images contain many spurious artifacts, such as noise due to handheld probe positioning, errors in the selection of probe parameters, and patient respiration during the procedure. Further, these images are highly variable in appearance with respect to the individual patient's physiology. We propose to use a deep Convolutional Neural Network (CNN), USQNet, which utilizes a Multi-scale and Local-to-Global Second-order Pooling (MS-L2GSoP) classifier to conduct a sonographer-like assessment of image quality. The classifier first extracts features at multiple scales to encode inter-patient anatomical variations, similar to a sonographer's understanding of anatomy. It then applies second-order pooling in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of multi-scale structural and multi-region textural features. The L2GSoP captures the higher-order relationships between different spatial locations and provides the seed for correlating local patches, much as a sonographer prioritizes regions across the image. We experimentally validated USQNet on a new dataset of human urinary bladder ultrasound images. The validation compared the network first with subjective assessments annotated by experienced radiologists, and then with state-of-the-art CNNs for US-IQA and with ablated counterparts of USQNet. The results demonstrate that USQNet achieves a remarkable accuracy of 92.4% and outperforms the SOTA models by 3-14% while requiring comparable computation time.
PMID:38598406 | DOI:10.1109/TUFFC.2024.3386919
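Second-order pooling is the central ingredient named in the USQNet abstract. The snippet below is a generic global second-order (covariance) pooling layer in PyTorch, included only to make the idea concrete; it is not the USQNet implementation, and the feature-map sizes are arbitrary.

```python
# Generic global second-order (covariance) pooling, as used conceptually by SoP classifiers.
import torch
import torch.nn as nn

class GlobalSecondOrderPooling(nn.Module):
    """Replaces global average pooling with the channel-wise covariance of the feature map."""
    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        feats = x.reshape(b, c, h * w)         # treat spatial positions as samples
        feats = feats - feats.mean(dim=2, keepdim=True)
        cov = feats @ feats.transpose(1, 2) / (h * w - 1)   # (B, C, C) covariance
        return cov.flatten(1)                  # (B, C*C) second-order descriptor

# toy usage: a 16-channel 8x8 feature map -> 256-D second-order descriptor per image
pool = GlobalSecondOrderPooling()
print(pool(torch.randn(2, 16, 8, 8)).shape)    # torch.Size([2, 256])
```

Used in place of global average pooling, this descriptor captures pairwise channel interactions rather than per-channel means, which is the kind of statistical dependency the abstract refers to.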
Prototype-based Semantic Segmentation
IEEE Trans Pattern Anal Mach Intell. 2024 Apr 10;PP. doi: 10.1109/TPAMI.2024.3387116. Online ahead of print.
ABSTRACT
Deep learning-based semantic segmentation solutions have yielded compelling results over the preceding decade. They encompass diverse network architectures (FCN-based or attention-based), along with various mask decoding schemes (parametric softmax-based or pixel-query-based). Despite the divergence, they can be grouped within a unified framework by interpreting the softmax weights or query vectors as learnable class prototypes. In light of this prototype view, we reveal inherent limitations within the parametric segmentation regime, and accordingly develop a nonparametric alternative based on non-learnable prototypes. In contrast to previous approaches that entail learning a single weight/query vector per class in a fully parametric manner, our approach represents each class as a set of non-learnable prototypes, relying solely upon the mean features of training pixels within that class. Pixel-wise prediction is thus achieved by nonparametric nearest-prototype retrieval. This allows our model to directly shape the pixel embedding space by optimizing the arrangement between embedded pixels and anchored prototypes, and it can accommodate an arbitrary number of classes with a constant number of learnable parameters. Through empirical evaluation with FCN-based and Transformer-based segmentation models (i.e., HRNet, Swin, SegFormer, Mask2Former) and backbones (i.e., ResNet, HRNet, Swin, MiT), our nonparametric framework shows superior performance on standard segmentation datasets (i.e., ADE20K, Cityscapes, COCO-Stuff), as well as in large-vocabulary semantic segmentation scenarios. We expect that this study will provoke a rethink of the current de facto semantic segmentation model design.
PMID:38598386 | DOI:10.1109/TPAMI.2024.3387116
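To make the non-learnable prototype idea concrete, the sketch below computes a class prototype as the mean embedding of that class's training pixels and labels test pixels by nearest-prototype retrieval. It follows the abstract only conceptually: a single prototype per class, cosine similarity, and the tensor shapes are assumptions (the paper describes sets of prototypes per class).

```python
# Conceptual sketch of nonparametric, prototype-based pixel classification.
import torch
import torch.nn.functional as F

def build_prototypes(embeddings, labels, num_classes):
    """Mean feature of training pixels per class (non-learnable prototypes).
    embeddings: (N, D) pixel embeddings, labels: (N,) class ids."""
    protos = torch.stack([embeddings[labels == c].mean(dim=0) for c in range(num_classes)])
    return F.normalize(protos, dim=1)          # (K, D), unit-normalized

def predict(embeddings, prototypes):
    """Label each pixel by its nearest prototype (here: highest cosine similarity)."""
    sims = F.normalize(embeddings, dim=1) @ prototypes.t()   # (N, K)
    return sims.argmax(dim=1)

# toy usage: 1000 pixels with 64-D embeddings, 3 classes
emb = torch.randn(1000, 64)
lab = torch.randint(0, 3, (1000,))
protos = build_prototypes(emb, lab, num_classes=3)
print(predict(emb, protos).shape)              # torch.Size([1000])
```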
Multimodal Drug Target Binding Affinity Prediction using Graph Local Substructure
IEEE J Biomed Health Inform. 2024 Apr 10;PP. doi: 10.1109/JBHI.2024.3386815. Online ahead of print.
ABSTRACT
Predicting drug-target binding affinity is essential to reduce drug development costs and cycles. Recently, several deep learning-based methods have been proposed that utilize the structural or sequential information of drugs and targets to predict the drug-target binding affinity (DTA). However, methods that rely solely on sequence features do not consider hydrogen atom data, which may result in information loss, while graph-based methods may contain information that is not directly related to the prediction process. Additionally, the lack of structured division can limit the representation of characteristics. To address these issues, we propose a multimodal DTA prediction model using graph local substructures, called MLSDTA. The model comprehensively integrates the graph and sequence modal information of drugs and targets, achieving multimodal fusion through a cross-attention approach on the multimodal features. Additionally, adaptive structure-aware pooling is applied to generate graphs containing local substructural information. The model also utilizes the DropNode strategy to enhance the distinctions between different molecules. Experiments on two benchmark datasets show that MLSDTA outperforms current state-of-the-art models, demonstrating the feasibility of MLSDTA.
PMID:38598378 | DOI:10.1109/JBHI.2024.3386815
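Cross-attention fusion of the drug and target representations is the fusion step described above. A minimal, generic version built on torch.nn.MultiheadAttention is sketched below; the embedding size, token counts, symmetric two-way attention, and mean pooling are assumptions rather than details taken from MLSDTA.

```python
# Generic cross-attention fusion of two modality token sets (drug graph nodes vs. target residues).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.drug_to_target = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.target_to_drug = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, drug_tokens, target_tokens):
        # each modality attends to the other, then pooled embeddings are concatenated
        d, _ = self.drug_to_target(drug_tokens, target_tokens, target_tokens)
        t, _ = self.target_to_drug(target_tokens, drug_tokens, drug_tokens)
        fused = torch.cat([d.mean(dim=1), t.mean(dim=1)], dim=-1)
        return self.head(fused)                # predicted binding affinity

# toy usage: 30 drug-node tokens and 200 residue tokens, batch of 2
model = CrossAttentionFusion()
print(model(torch.randn(2, 30, 128), torch.randn(2, 200, 128)).shape)   # torch.Size([2, 1])
```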
Predicting Blood Pressures for Pregnant Women by PPG and Personalized Deep Learning
IEEE J Biomed Health Inform. 2024 Apr 10;PP. doi: 10.1109/JBHI.2024.3386707. Online ahead of print.
ABSTRACT
This work predicts blood pressure (BP) from photoplethysmography (PPG) data to provide effective early warning of possible preeclampsia in pregnant women. To enable frequent BP measurement, a PPG sensor device is used in this study to offer continuous, cuffless blood pressure monitoring for pregnant women. PPG data were collected using a flexible sensor patch from the wrist arteries of 194 subjects, comprising 154 normal individuals and 40 pregnant women. Deep learning models were built and trained in three stages to predict BP. The first stage involves developing a baseline deep learning BP model using the dataset from the normal subjects. In the second stage, this model was fine-tuned with data from pregnant women, using a 1-Dimensional Convolutional Neural Network (1D-CNN) with Convolutional Block Attention Modules (CBAMs), followed by bi-directional Gated Recurrent Unit (GRU) layers and attention layers. The fine-tuned model achieves a mean error (ME) of -1.40 ± 7.15 (standard deviation, SD) mmHg for systolic blood pressure (SBP) and -0.44 (ME) ± 5.06 (SD) mmHg for diastolic blood pressure (DBP). The final stage is personalization for individual pregnant women, again using transfer learning, which further improves the accuracy to -0.17 (ME) ± 1.45 (SD) mmHg for SBP and 0.27 (ME) ± 0.64 (SD) mmHg for DBP. The proposed three-stage approach of modeling, fine-tuning, and personalization offers a promising solution for continuous, non-invasive, and precise BP monitoring.
PMID:38598377 | DOI:10.1109/JBHI.2024.3386707
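As a rough illustration of the backbone described above (1D convolutions followed by bidirectional GRUs and a regression head for SBP/DBP), here is a minimal PyTorch sketch. The CBAM and attention layers are omitted, and all sizes, including the assumed 125 Hz sampling rate, are placeholders; this is not the authors' model. The staged fine-tuning and per-subject personalization described in the abstract would then amount to continuing training of such a network on the pregnancy and individual data, typically with early layers frozen.

```python
# Simplified PPG-to-BP backbone: 1D-CNN -> bidirectional GRU -> SBP/DBP regression head.
import torch
import torch.nn as nn

class PPG2BP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # outputs [SBP, DBP]

    def forward(self, ppg):                    # ppg: (batch, 1, samples)
        h = self.cnn(ppg)                      # (batch, 64, samples/4)
        out, _ = self.gru(h.transpose(1, 2))
        return self.head(out[:, -1])

# toy usage: a batch of 8 five-second PPG windows at an assumed 125 Hz (625 samples)
print(PPG2BP()(torch.randn(8, 1, 625)).shape)  # torch.Size([8, 2])
```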
RefQSR: Reference-based Quantization for Image Super-Resolution Networks
IEEE Trans Image Process. 2024 Apr 10;PP. doi: 10.1109/TIP.2024.3385276. Online ahead of print.
ABSTRACT
Single image super-resolution (SISR) aims to reconstruct a high-resolution image from its low-resolution observation. Recent deep learning-based SISR models show high performance at the expense of increased computational costs, limiting their use in resource-constrained environments. As a promising solution for computationally efficient network design, network quantization has been extensively studied. However, existing quantization methods developed for SISR have yet to effectively exploit image self-similarity, which is a new direction for exploration in this study. We introduce a novel method called reference-based quantization for image super-resolution (RefQSR) that applies high-bit quantization to several representative patches and uses them as references for low-bit quantization of the rest of the patches in an image. To this end, we design dedicated patch clustering and reference-based quantization modules and integrate them into existing SISR network quantization methods. The experimental results demonstrate the effectiveness of RefQSR on various SISR networks and quantization methods.
PMID:38598375 | DOI:10.1109/TIP.2024.3385276
A noise robust image reconstruction using slice aware cycle interpolator network for parallel imaging in MRI
Med Phys. 2024 Apr 10. doi: 10.1002/mp.17066. Online ahead of print.
ABSTRACT
BACKGROUND: Reducing magnetic resonance imaging (MRI) scan time has been an important issue for clinical applications. Imaging acceleration can be achieved by undersampling k-space data and leveraging additional spatial information from multiple, independent receiver coils, thereby reducing the number of sampled k-space lines.
PURPOSE: The aim of this study is to develop a deep-learning method for parallel imaging with a reduced number of auto-calibration signals (ACS) lines in noisy environments.
METHODS: A cycle interpolator network is developed for robust reconstruction of parallel MRI with a small number of ACS lines in noisy environments. The network estimates missing (unsampled) lines of each coil data, and these estimated missing lines are then utilized to re-estimate the sampled k-space lines. In addition, a slice aware reconstruction technique is developed for noise-robust reconstruction while reducing the number of ACS lines. We conducted an evaluation study using retrospectively subsampled data obtained from three healthy volunteers at 3T MRI, involving three different slice thicknesses (1.5, 3.0, and 4.5 mm) and three different image contrasts (T1w, T2w, and FLAIR).
RESULTS: Despite the challenges posed by substantial noise in cases with a limited number of ACS lines and thinner slices, the slice aware cycle interpolator network reconstructs enhanced parallel images. It outperforms RAKI, effectively eliminating aliasing artifacts. Moreover, the proposed network outperforms GRAPPA and demonstrates the ability to successfully reconstruct brain images even under severely noisy conditions.
CONCLUSIONS: The slice aware cycle interpolator network has the potential to improve reconstruction accuracy for a reduced number of ACS lines in noisy environments.
PMID:38598259 | DOI:10.1002/mp.17066
SpiNet-QSM: model-based deep learning with Schatten p-norm regularization for improved quantitative susceptibility mapping
MAGMA. 2024 Apr 10. doi: 10.1007/s10334-024-01158-7. Online ahead of print.
ABSTRACT
OBJECTIVE: Quantitative susceptibility mapping (QSM) provides an estimate of the magnetic susceptibility of tissue using magnetic resonance (MR) phase measurements. The tissue magnetic susceptibility (source) from the measured magnetic field distribution/local tissue field (effect) inherent in the MR phase images is estimated by numerically solving the inverse source-effect problem. This study aims to develop an effective model-based deep-learning framework to solve the inverse problem of QSM.
MATERIALS AND METHODS: This work proposes a Schatten p-norm-driven model-based deep learning framework for QSM with a learnable norm parameter p to adapt to the data. In contrast to other model-based architectures that enforce the l2-norm or l1-norm for the denoiser, the proposed approach can enforce any p-norm (0 < p ≤ 2) on a trainable regulariser.
RESULTS: The proposed method was compared with deep learning-based approaches, such as QSMnet, and model-based deep learning approaches, such as learned proximal convolutional neural network (LPCNN). Reconstructions performed using 77 imaging volumes with different acquisition protocols and clinical conditions, such as hemorrhage and multiple sclerosis, showed that the proposed approach outperformed existing state-of-the-art methods by a significant margin in terms of quantitative merits.
CONCLUSION: The proposed SpiNet-QSM showed a consistent improvement of at least 5% in terms of the high-frequency error norm (HFEN) and normalized root mean squared error (NRMSE) over other QSM reconstruction methods with limited training data.
PMID:38598165 | DOI:10.1007/s10334-024-01158-7
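For reference, the Schatten p-norm that gives SpiNet-QSM its name is defined on the singular values of a matrix, and a generic regularized QSM recovery problem can be written as below; the exact operators, weighting, and transform used in the paper may differ.

```latex
% Schatten p-norm of a matrix X with singular values \sigma_i, and a generic
% regularized QSM recovery problem (notation illustrative, not the paper's exact formulation).
\[
\|X\|_{S_p} = \Big(\sum_{i} \sigma_i^p\Big)^{1/p}, \qquad 0 < p \le 2,
\]
\[
\hat{\chi} = \arg\min_{\chi}\; \tfrac{1}{2}\,\|D\chi - \phi\|_2^2 \;+\; \lambda\,\|\mathcal{R}(\chi)\|_{S_p}^p ,
\]
% where $\chi$ is the susceptibility map, $D$ the dipole convolution operator,
% $\phi$ the measured local field, $\mathcal{R}$ a (learned) transform of $\chi$,
% $\lambda$ a regularization weight, and $p$ a trainable parameter.
```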
Enhancing robotic telesurgery with sensorless haptic feedback
Int J Comput Assist Radiol Surg. 2024 Apr 10. doi: 10.1007/s11548-024-03117-y. Online ahead of print.
ABSTRACT
PURPOSE: This paper evaluates user performance in telesurgical tasks with the da Vinci Research Kit (dVRK), comparing unilateral teleoperation, bilateral teleoperation with force sensors and sensorless force estimation.
METHODS: A four-channel teleoperation system with disturbance observers and sensorless force estimation with learning-based dynamic compensation was developed. Palpation experiments were conducted with 12 users who tried to locate tumors hidden in tissue phantoms with their fingers or through handheld or teleoperated laparoscopic instruments with visual, force sensor, or sensorless force estimation feedback. In a peg transfer experiment with 10 users, the contribution of sensorless haptic feedback with/without learning-based dynamic compensation was assessed using NASA TLX surveys, measured free motion speeds and forces, environment interaction forces as well as experiment completion times.
RESULTS: The first study showed a 30% increase in accuracy in detecting tumors with sensorless haptic feedback over visual feedback, with only a 5-10% drop in accuracy compared with sensor feedback or direct instrument contact. The second study showed that sensorless feedback can help reduce interaction forces due to incidental contacts by about 3 times compared with unilateral teleoperation. The cost is an increase in free-motion forces and physical effort. We show that it is possible to improve this with dynamic compensation.
CONCLUSION: We demonstrate the benefits of sensorless haptic feedback in teleoperated surgery systems, especially with dynamic compensation, and that it can improve surgical performance without hardware modifications.
PMID:38598140 | DOI:10.1007/s11548-024-03117-y
Knowledge-based planning for Gamma Knife
Med Phys. 2024 Apr 10. doi: 10.1002/mp.17058. Online ahead of print.
ABSTRACT
BACKGROUND: Current methods for Gamma Knife (GK) treatment planning utilize either manual forward planning, where planners manually place shots in a tumor to achieve a desired dose distribution, or inverse planning, whereby the dose delivered to a tumor is optimized for multiple objectives based on established metrics. For other treatment modalities, such as IMRT and VMAT, there has been a recent push to develop knowledge-based planning (KBP) pipelines to address the limitations presented by forward and inverse planning. However, no complete KBP pipeline has been created for GK.
PURPOSE: To develop a novel KBP pipeline for GK using inverse optimization (IO) with 3D dose predictions.
METHODS: Data were obtained for 349 patients from Sunnybrook Health Sciences Centre. A 3D dose prediction model was trained using 322 patients, based on a previously published deep learning methodology, and dose predictions were generated for the remaining 27 out-of-sample patients. A generalized IO model was developed to learn objective function weights from dose predictions. These weights were then used in an inverse planning model to generate deliverable treatment plans. A dose mimicking (DM) model was also implemented for comparison. The quality of the resulting plans was compared to their clinical counterparts using standard GK quality metrics. The performance of the models was also characterized with respect to the dose predictions.
RESULTS: Across all quality metrics, plans generated using the IO pipeline performed at least as well as or better than the respective clinical plans. The average conformity and gradient indices of the IO plans were 0.737 ± 0.158 and 3.356 ± 1.030, respectively, compared with 0.713 ± 0.124 and 3.452 ± 1.123 for the clinical plans. IO plans also performed better than DM plans on five of the six quality metrics. Plans generated using IO also have average treatment times comparable to those of the clinical plans. With regard to the dose predictions, predictions with higher conformity tend to result in higher-quality KBP plans.
CONCLUSIONS: Plans resulting from an IO KBP pipeline are, on average, of equal or superior quality compared with those obtained through manual planning. The results demonstrate the potential of KBP to generate GK treatment plans with minimal human intervention.
PMID:38598107 | DOI:10.1002/mp.17058
Artificial intelligence in kidney transplant pathology
Pathologie (Heidelb). 2024 Apr 10. doi: 10.1007/s00292-024-01324-7. Online ahead of print.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) systems have shown promising results in digital pathology, including digital nephropathology and, specifically, kidney transplant pathology.
AIM: To summarize the current state of research and the limitations in the field of AI for kidney transplant pathology diagnostics and to provide an outlook on future developments.
MATERIALS AND METHODS: A literature search was performed in PubMed and Web of Science using the search terms "deep learning", "transplant", and "kidney". Based on these results and the studies cited in the identified literature, a selection was made of studies that have a histopathological focus and use AI to improve kidney transplant diagnostics.
RESULTS AND CONCLUSION: Many studies have already made important contributions, particularly to the automation of the quantification of some histopathological lesions in nephropathology. This can likely be extended to automatically quantify all lesions relevant to a kidney transplant, such as Banff lesions. Important limitations and challenges remain in the collection of representative data sets and in keeping pace with updates of the Banff classification, making large-scale studies challenging. The already positive study results make future AI support in kidney transplant pathology appear likely.
PMID:38598097 | DOI:10.1007/s00292-024-01324-7
Predicting the wicking rate of nitrocellulose membranes from recipe data: a case study using ANN at a membrane manufacturing in South Korea
Anal Sci. 2024 Apr 10. doi: 10.1007/s44211-024-00540-8. Online ahead of print.
ABSTRACT
Lateral flow assays have been widely used for detecting coronavirus disease 2019 (COVID-19). A lateral flow assay consists of a nitrocellulose (NC) membrane, which must have a specific lateral flow rate for the proteins to react. The wicking rate is conventionally used to assess the lateral flow of membranes. We used multiple regression and artificial neural networks (ANN) to predict the wicking rate of NC membranes based on membrane recipe data. The developed ANN predicted the wicking rate with a mean square error of 0.059, whereas multiple regression had a mean square error of 0.503. This research also highlighted the significant impact of water content on the wicking rate through images obtained from scanning electron microscopy. The findings of this research can significantly cut down the research and development costs of novel NC membranes with a specific wicking rate, as the algorithm can predict the wicking rate from the membrane recipe.
PMID:38598050 | DOI:10.1007/s44211-024-00540-8
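The modeling comparison described above (multiple linear regression versus a small ANN on tabular recipe features) can be reproduced in outline with scikit-learn. The sketch below uses synthetic data and arbitrary hyperparameters purely to show the workflow; the feature values, network size, and resulting errors have nothing to do with the paper's dataset.

```python
# Outline of the regression comparison: multiple linear regression vs. a small ANN (MLP).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 5))          # stand-in for recipe features (e.g., water content, polymer %)
y = 2.0 * X[:, 0] ** 2 + X[:, 1] + 0.05 * rng.standard_normal(300)   # synthetic nonlinear wicking rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

linreg = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("linear regression MSE:", mean_squared_error(y_te, linreg.predict(X_te)))
print("ANN (MLP) MSE:        ", mean_squared_error(y_te, ann.predict(X_te)))
```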