Deep learning

Automated segmentation of epilepsy surgical resection cavities: comparison of four methods to manual segmentation

Wed, 2024-06-12 06:00

Neuroimage. 2024 Jun 10:120682. doi: 10.1016/j.neuroimage.2024.120682. Online ahead of print.

ABSTRACT

Accurate resection cavity segmentation on MRI is important for neuroimaging research involving epilepsy surgical outcomes. Manual segmentation, the gold standard, is highly labour-intensive. Automated pipelines are an efficient potential solution; however, most have been developed for use following temporal epilepsy surgery. Our aim was to compare the accuracy of four automated segmentation pipelines following surgical resection in a mixed cohort of subjects who underwent temporal or extratemporal epilepsy surgery. We identified four open-source automated segmentation pipelines: Epic-CHOP and ResectVol utilise SPM-12 within MATLAB, while Resseg and Deep Resection utilise 3D U-net convolutional neural networks. We manually segmented the resection cavity of 50 consecutive subjects who underwent epilepsy surgery (30 temporal, 20 extratemporal). We calculated the Dice similarity coefficient (DSC) for each algorithm against the manual segmentation. No algorithm identified all resection cavities. ResectVol (n=44, 88%) and Epic-CHOP (n=42, 84%) detected more resection cavities than Resseg (n=22, 44%, P<0.001) and Deep Resection (n=23, 46%, P<0.001). The SPM-based pipelines (Epic-CHOP and ResectVol) performed better than the deep learning-based pipelines in the overall and extratemporal surgery cohorts. In the temporal cohort, the SPM-based pipelines had higher detection rates; however, there was no difference in accuracy between the methods. These pipelines could be applied to machine learning studies of outcome prediction to improve the efficiency of data pre-processing; however, human quality control is still required.
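For readers unfamiliar with the metric, a minimal sketch of the Dice similarity coefficient used to compare each algorithm against manual segmentation; the 1D toy masks here are illustrative stand-ins, not the study's 3D MRI data:

```python
import numpy as np

def dice_similarity(pred, ref):
    """Dice similarity coefficient: DSC = 2|A & B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 1D masks standing in for 3D resection-cavity segmentations
auto_mask = [1, 1, 1, 0, 0]
manual_mask = [0, 1, 1, 1, 0]
print(dice_similarity(auto_mask, manual_mask))  # 2*2/(3+3) ≈ 0.667
```

A DSC of 1 indicates perfect overlap and 0 indicates no overlap, which is why it is the standard accuracy measure for comparing automated against manual segmentations.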

PMID:38866195 | DOI:10.1016/j.neuroimage.2024.120682

Categories: Literature Watch

Dimensional measures of psychopathology in children and adolescents using large language models

Wed, 2024-06-12 06:00

Biol Psychiatry. 2024 Jun 10:S0006-3223(24)01299-X. doi: 10.1016/j.biopsych.2024.05.008. Online ahead of print.

ABSTRACT

BACKGROUND: To enable greater use of NIMH Research Domain Criteria (RDoC) in real-world settings, we applied large language models to estimate dimensional psychopathology from narrative clinical notes.

METHODS: We conducted a cohort study using health records from individuals age 18 years or younger evaluated in the psychiatric emergency department of a large academic medical center between November 2008 and March 2015. Outcomes were hospital admission and length of emergency department stay. RDoC domains were estimated using a HIPAA-compliant large language model (gpt-4-1106-preview), and compared to a previously-validated token-based approach.

RESULTS: The cohort included 3,059 individuals (median age 16 years (25th-75th percentile, 13-18); 1580 (52%) female, 1479 (48%) male; 105 (3.4%) identified as Asian, 329 (11%) as Black, 288 (9.4%) as Hispanic, 474 (15%) as another race, and 1863 (61%) as white), of whom 1695 (55%) were admitted. Correlation between LLM-extracted RDoC scores and the token-based scores ranged from small to medium by Kendall's Tau (0.14-0.22). In logistic regression models adjusted for sociodemographic and clinical features, admission likelihood was associated with greater scores on all domains, with the exception of sensorimotor, which was inversely associated (p<.001 for all adjusted associations). Tests for bias suggested modest but statistically significant differences in positive valence scores by race (p<.05 for Asian, Hispanic, and Black individuals).
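As a sketch of the rank correlation reported above, here is Kendall's tau-a computed by brute force over all item pairs; the ordinal scores are hypothetical, and the study's exact tie-handling variant (tau-b corrects for ties) is an assumption:

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / all pairs.
    (The tau-b variant additionally corrects for tied ranks.)"""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical ordinal domain scores from the two extraction methods
llm_scores = [2, 3, 1, 4, 2, 5]
token_scores = [1, 3, 2, 4, 4, 5]
print(kendall_tau_a(llm_scores, token_scores))  # 0.6
```

Values near 0.14-0.22, as reported, indicate that the LLM-based and token-based scores agree on pair orderings only slightly more often than they disagree.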

CONCLUSION: A large language model extracted estimates of 6 RDoC domains in an explainable manner, which were associated with clinical outcomes. This approach can contribute to a new generation of prediction models or biological investigations based on dimensional psychopathology.

PMID:38866172 | DOI:10.1016/j.biopsych.2024.05.008

Categories: Literature Watch

How can we quantify, explain, and apply the uncertainty of complex soil maps predicted with neural networks?

Wed, 2024-06-12 06:00

Sci Total Environ. 2024 Jun 10:173720. doi: 10.1016/j.scitotenv.2024.173720. Online ahead of print.

ABSTRACT

Artificial neural networks (ANNs) have proven to be a useful tool for complex questions that involve large amounts of data. Our use case of predicting soil maps with ANNs is in high demand by government agencies, construction companies, and farmers, given costly and time-intensive field work. However, there are two main challenges when applying ANNs. In their most common form, deep learning algorithms do not provide interpretable predictive uncertainty. This means that properties of an ANN, such as the certainty and plausibility of the predicted variables, rely on interpretation by experts rather than being quantified by evaluation metrics that validate the ANNs. Further, these algorithms have shown high confidence in their predictions in areas geographically distant from the training area or sparsely covered by training data. To tackle these challenges, we use the Bayesian deep learning approach "last-layer Laplace approximation", which is specifically designed to quantify uncertainty in deep networks, in our explorative study on soil classification. It corrects the overconfident areas without reducing the accuracy of the predictions, giving a more realistic expression of uncertainty in the model's predictions. In our study area in southern Germany, we subdivide the soils into soil regions, and as a test case we explicitly exclude two soil regions from the training area but include them in the prediction. Our results emphasize the need for uncertainty measurement to obtain more reliable and interpretable ANN results, especially for regions far from the training area. Moreover, the knowledge gained from this research addresses the problem of overconfidence in ANNs and provides valuable information on the predictability of soil types and the identification of knowledge gaps. By analyzing regions where the model has limited data support and, consequently, high uncertainty, stakeholders can recognize the areas that require more data collection efforts.
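The core idea of a last-layer Laplace approximation can be sketched on a toy linear "last layer": fit a MAP estimate, approximate the weight posterior as a Gaussian with covariance given by the inverse Hessian, and average predictions over weight samples. All data, dimensions, and parameters below are hypothetical stand-ins for the soil-classification setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification standing in for soil-class prediction;
# the 2-feature "last layer" and all data are hypothetical.
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = (X @ w_true + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

# MAP estimate of the last-layer weights (logistic regression, Gaussian prior)
tau = 1.0  # prior precision
w = np.zeros(2)
for _ in range(2000):
    grad = X.T @ (sigmoid(X @ w) - y) + tau * w
    w -= 0.1 * grad / len(X)

# Laplace approximation: posterior ~ N(w_map, H^-1), with H the Hessian
# of the negative log posterior at the MAP estimate
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X + tau * np.eye(2)
cov = np.linalg.inv(H)

def predict(x, n_samples=2000):
    """Predictive mean and spread by averaging over posterior weight samples."""
    probs = sigmoid(rng.multivariate_normal(w, cov, size=n_samples) @ x)
    return probs.mean(), probs.std()

mean_near, std_near = predict(np.array([0.5, 1.0]))   # near the training data
mean_far, std_far = predict(np.array([25.0, 50.0]))   # far outside it
# The distant point yields a wider predictive spread, i.e. higher uncertainty
```

This mirrors the behaviour described in the abstract: the MAP point prediction is unchanged, but points far from the training data receive appropriately inflated uncertainty.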

PMID:38866156 | DOI:10.1016/j.scitotenv.2024.173720

Categories: Literature Watch

Quantifying the biomimicry gap in biohybrid robot-fish pairs

Wed, 2024-06-12 06:00

Bioinspir Biomim. 2024 Jun 12. doi: 10.1088/1748-3190/ad577a. Online ahead of print.

ABSTRACT

Biohybrid systems in which robotic lures interact with animals have become compelling tools for probing and identifying the mechanisms underlying collective animal behavior. One key challenge lies in the transfer of social interaction models from simulations to reality, using robotics to validate the modeling hypotheses. This challenge arises in bridging what we term the "biomimicry gap", which is caused by imperfect robotic replicas, and by communication cues and physics constraints not incorporated in the simulations, which may elicit unrealistic behavioral responses in animals. In this work, we used a biomimetic lure of a rummy-nose tetra fish (Hemigrammus rhodostomus) and a neural network (NN) model for generating biomimetic social interactions. Through experiments with a biohybrid pair comprising a fish and the robotic lure, a pair of real fish, and simulations of pairs of fish, we demonstrate that our biohybrid system generates social interactions mirroring those of genuine fish pairs. Our analyses highlight that: 1) the lure and NN maintain minimal deviation in real-world interactions compared to simulations and fish-only experiments, 2) our NN controls the robot efficiently in real-time, and 3) a comprehensive validation is crucial to bridge the biomimicry gap, ensuring realistic biohybrid systems.

PMID:38866031 | DOI:10.1088/1748-3190/ad577a

Categories: Literature Watch

Comparative analysis of machine learning methods for prediction of chlorophyll-a in a river with different hydrology characteristics: A case study in Fuchun River, China

Wed, 2024-06-12 06:00

J Environ Manage. 2024 Jun 11;364:121386. doi: 10.1016/j.jenvman.2024.121386. Online ahead of print.

ABSTRACT

Eutrophication is a serious threat to water quality and human health, and chlorophyll-a (Chla) is a key indicator of eutrophication in rivers and lakes. Understanding the spatial-temporal distribution of Chla and predicting it accurately are significant for water system management. In this study, spatial-temporal analysis and correlation analysis were applied to reveal the Chla concentration pattern in the Fuchun River, China. Four exogenous variables (wind speed, water temperature, dissolved oxygen and turbidity) were then used to predict Chla concentrations with six models (three traditional machine learning models and three deep learning models), and their performance was compared in a river with different hydrology characteristics. Statistical analysis showed that the Chla concentration in the reservoir river segment was higher than in the natural river segment during August and September, while the dominant algae gradually changed from Cyanophyta to Cryptophyta. Moreover, air temperature, water temperature and dissolved oxygen had high correlations with Chla concentrations among environmental factors. The results of the prediction models demonstrate that extreme gradient boosting (XGBoost) and the long short-term memory neural network (LSTM) were the best-performing models in the reservoir river segment (NSE = 0.93; RMSE = 4.67) and the natural river segment (NSE = 0.94; RMSE = 1.84), respectively. This study provides a reference for further understanding eutrophication and early warning of algal blooms in different types of rivers.
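The two skill scores quoted above (NSE and RMSE) are simple to compute; a minimal sketch with hypothetical Chla observations and predictions (not the study's data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1 is a perfect fit; 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error in the units of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Hypothetical Chla observations (ug/L) against model predictions
observed = [12.0, 15.5, 30.2, 44.8, 25.1, 18.3]
predicted = [11.2, 16.8, 28.5, 47.0, 24.0, 19.9]
print(round(nse(observed, predicted), 3), round(rmse(observed, predicted), 3))
```

NSE values above 0.9, as reported for both river segments, indicate that the models capture almost all of the observed variance.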

PMID:38865920 | DOI:10.1016/j.jenvman.2024.121386

Categories: Literature Watch

ACDMBI: A deep learning model based on community division and multi-source biological information fusion predicts essential proteins

Wed, 2024-06-12 06:00

Comput Biol Chem. 2024 Jun 6;112:108115. doi: 10.1016/j.compbiolchem.2024.108115. Online ahead of print.

ABSTRACT

Accurately identifying essential proteins is vital for drug research and disease diagnosis. Traditional centrality methods and machine learning approaches often face challenges in accurately discerning essential proteins, primarily relying on information derived from protein-protein interaction (PPI) networks. Despite attempts by some researchers to integrate biological data and PPI networks for predicting essential proteins, designing effective integration methods remains a challenge. In response to these challenges, this paper presents the ACDMBI model, specifically designed to overcome the aforementioned issues. ACDMBI is comprised of two key modules: feature extraction and classification. In terms of capturing relevant information, we draw insights from three distinct data sources. Initially, structural features of proteins are extracted from the PPI network through community division. Subsequently, these features are further optimized using Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT). Moving forward, protein features are extracted from gene expression data utilizing Bidirectional Long Short-Term Memory networks (BiLSTM) and a multi-head self-attention mechanism. Finally, protein features are derived by mapping subcellular localization data to a one-dimensional vector and processing it through fully connected layers. In the classification phase, we integrate features extracted from three different data sources, crafting a multi-layer deep neural network (DNN) for protein classification prediction. Experimental results on brewing yeast data showcase the ACDMBI model's superior performance, with AUC reaching 0.9533 and AUPR reaching 0.9153. Ablation experiments further reveal that the effective integration of features from diverse biological information significantly boosts the model's performance.

PMID:38865861 | DOI:10.1016/j.compbiolchem.2024.108115

Categories: Literature Watch

MACFNet: Detection of Alzheimer's disease via multiscale attention and cross-enhancement fusion network

Wed, 2024-06-12 06:00

Comput Methods Programs Biomed. 2024 Jun 6;254:108259. doi: 10.1016/j.cmpb.2024.108259. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Alzheimer's disease (AD) is a dreaded degenerative disease that results in a profound decline in human cognition and memory. Due to its intricate pathogenesis and the lack of effective therapeutic interventions, early diagnosis plays a paramount role in AD. Recent research based on neuroimaging has shown that the application of deep learning methods to multimodal neuroimaging can effectively detect AD. However, these methods only concatenate and fuse the high-level features extracted from different modalities, ignoring the fusion and interaction of low-level features across modalities. This consequently leads to unsatisfactory classification performance.

METHOD: In this paper, we propose a novel multi-scale attention and cross-enhanced fusion network, MACFNet, which enables the interaction of multi-stage low-level features between inputs to learn shared feature representations. We first construct a novel Cross-Enhanced Fusion Module (CEFM), which fuses low-level features from different modalities through a multi-stage cross-structure. In addition, an Efficient Spatial Channel Attention (ECSA) module is proposed, which is able to focus on important AD-related features in images more efficiently and achieve feature enhancement from different modalities through two-stage residual concatenation. Finally, we also propose a multiscale attention guiding block (MSAG) based on dilated convolution, which can obtain rich receptive fields without increasing model parameters and computation, and effectively improve the efficiency of multiscale feature extraction.

RESULTS: Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our MACFNet has better classification performance than existing multimodal methods, with classification accuracies of 99.59%, 98.85%, 99.61%, and 98.23% for AD vs. CN, AD vs. MCI, CN vs. MCI and AD vs. CN vs. MCI, respectively; specificities of 98.92%, 97.07%, 99.58% and 99.04%; and sensitivities of 99.91%, 99.89%, 99.63% and 97.75%, respectively.

CONCLUSIONS: The proposed MACFNet is a high-accuracy multimodal AD diagnostic framework. Through the cross mechanism and efficient attention, MACFNet can make full use of the low-level features of different modal medical images and effectively pay attention to the local and global information of the images. This work provides a valuable reference for multi-mode AD diagnosis.

PMID:38865795 | DOI:10.1016/j.cmpb.2024.108259

Categories: Literature Watch

Exploring the frontier: Transformer-based models in EEG signal analysis for brain-computer interfaces

Wed, 2024-06-12 06:00

Comput Biol Med. 2024 Jun 8;178:108705. doi: 10.1016/j.compbiomed.2024.108705. Online ahead of print.

ABSTRACT

This review systematically explores the application of transformer-based models in EEG signal processing and brain-computer interface (BCI) development, with a distinct focus on ensuring methodological rigour and adhering to empirical validations within the existing literature. By examining various transformer architectures, such as the Temporal Spatial Transformer Network (TSTN) and EEG Conformer, this review delineates their capabilities in mitigating challenges intrinsic to EEG data, such as noise and artifacts, and their subsequent implications on decoding and classification accuracies across disparate mental tasks. The analytical scope extends to a meticulous examination of attention mechanisms within transformer models, delineating their role in illuminating critical temporal and spatial EEG features and facilitating interpretability in model decision-making processes. The discourse additionally encapsulates emerging works that substantiate the efficacy of transformer models in noise reduction of EEG signals and diversifying applications beyond the conventional motor imagery paradigm. Furthermore, this review elucidates evident gaps and propounds exploratory avenues in the applications of pre-trained transformers in EEG analysis and the potential expansion into real-time and multi-task BCI applications. Collectively, this review distils extant knowledge, navigates through the empirical findings, and puts forward a structured synthesis, thereby serving as a conduit for informed future research endeavours in transformer-enhanced, EEG-based BCI systems.

PMID:38865781 | DOI:10.1016/j.compbiomed.2024.108705

Categories: Literature Watch

Radiomics diagnostic performance for predicting lymph node metastasis in esophageal cancer: a systematic review and meta-analysis

Wed, 2024-06-12 06:00

BMC Med Imaging. 2024 Jun 12;24(1):144. doi: 10.1186/s12880-024-01278-5.

ABSTRACT

BACKGROUND: Esophageal cancer, a global health concern, impacts predominantly men, particularly in Eastern Asia. Lymph node metastasis (LNM) significantly influences prognosis, and current imaging methods exhibit limitations in accurate detection. The integration of radiomics, an artificial intelligence (AI) driven approach in medical imaging, offers a transformative potential. This meta-analysis evaluates existing evidence on the accuracy of radiomics models for predicting LNM in esophageal cancer.

METHODS: We conducted a systematic review following PRISMA 2020 guidelines, searching Embase, PubMed, and Web of Science for English-language studies up to November 16, 2023. Inclusion criteria focused on preoperatively diagnosed esophageal cancer patients with radiomics predicting LNM before treatment. Exclusion criteria were applied, including non-English studies and those lacking sufficient data or separate validation cohorts. Data extraction encompassed study characteristics and radiomics technical details. Quality assessment employed modified Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) and Radiomics Quality Score (RQS) tools. Statistical analysis involved random-effects models for pooled sensitivity, specificity, diagnostic odds ratio (DOR), and area under the curve (AUC). Heterogeneity and publication bias were assessed using Deek's test and funnel plots. Analysis was performed using Stata version 17.0 and meta-DiSc.

RESULTS: Out of 426 initially identified citations, nine studies met inclusion criteria, encompassing 719 patients. These retrospective studies utilized CT, PET, and MRI imaging modalities, and were predominantly conducted in China. Two studies employed deep learning-based radiomics. Quality assessment revealed acceptable QUADAS-2 scores. RQS scores ranged from 9 to 14, averaging 12.78. The diagnostic meta-analysis yielded a pooled sensitivity, specificity, and AUC of 0.72, 0.76, and 0.74, respectively, representing fair diagnostic performance. Meta-regression identified the use of combined models as a significant contributor to heterogeneity (p-value = 0.05). Other factors, such as sample size (> 75) and least absolute shrinkage and selection operator (LASSO) usage for feature extraction, showed potential influence but lacked statistical significance (0.05 < p-value < 0.10). Publication bias was not statistically significant by Deeks' test.
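The diagnostic odds ratio (DOR) pooled in such meta-analyses follows directly from sensitivity and specificity; a minimal sketch using the pooled estimates quoted above:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = positive LR / negative LR = (sens/(1-sens)) / ((1-spec)/spec)."""
    return (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)

# Pooled sensitivity and specificity reported above
dor = diagnostic_odds_ratio(0.72, 0.76)
print(round(dor, 2))  # ≈ 8.14
```

A DOR around 8 is consistent with the "fair" diagnostic performance the authors describe: clearly better than chance (DOR = 1), but well short of the values seen for highly discriminative tests.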

CONCLUSION: Radiomics shows potential for predicting LNM in esophageal cancer, with a moderate diagnostic performance. Standardized approaches, ongoing research, and prospective validation studies are crucial for realizing its clinical applicability.

PMID:38867143 | DOI:10.1186/s12880-024-01278-5

Categories: Literature Watch

Image-domain material decomposition for dual-energy CT using unsupervised learning with data-fidelity loss

Wed, 2024-06-12 06:00

Med Phys. 2024 Jun 12. doi: 10.1002/mp.17255. Online ahead of print.

ABSTRACT

BACKGROUND: Dual-energy computed tomography (DECT) and material decomposition play vital roles in quantitative medical imaging. However, the decomposition process may suffer from significant noise amplification, leading to severely degraded image signal-to-noise ratios (SNRs). While existing iterative algorithms perform noise suppression using different image priors, these heuristic image priors cannot accurately represent the features of the target image manifold. Although deep learning-based decomposition methods have been reported, these methods are in the supervised-learning framework requiring paired data for training, which is not readily available in clinical settings.

PURPOSE: This work aims to develop an unsupervised-learning framework with data-measurement consistency for image-domain material decomposition in DECT.

METHODS: The proposed framework combines iterative decomposition and deep learning-based image prior in a generative adversarial network (GAN) architecture. In the generator module, a data-fidelity loss is introduced to enforce the measurement consistency in material decomposition. In the discriminator module, the discriminator is trained to differentiate the low-noise material-specific images from the high-noise images. In this scheme, paired images of DECT and ground-truth material-specific images are not required for the model training. Once trained, the generator can perform image-domain material decomposition with noise suppression in a single step.

RESULTS: In the simulation studies of head and lung digital phantoms, the proposed method reduced the standard deviation (SD) in decomposed images by 97% and 91% from the values in direct inversion results. It also generated decomposed images with structural similarity index measures (SSIMs) greater than 0.95 against the ground truth. In the clinical head and lung patient studies, the proposed method suppressed the SD by 95% and 93% compared to the decomposed images of matrix inversion.
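The noise amplification that direct matrix inversion produces, which the proposed method suppresses, can be illustrated with a toy two-material model; the 2x2 sensitivity matrix, material fractions, and noise level below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 sensitivity matrix mapping two material fractions
# (e.g. water, iodine) to low-/high-kVp CT numbers; columns nearly collinear
A = np.array([[1.00, 4.90],
              [1.00, 2.50]])

true_fractions = np.array([0.8, 0.2])
clean = A @ true_fractions

# Simulated per-pixel detector noise on the two CT measurements
measurements = clean + rng.normal(scale=5.0, size=(10000, 2))

# Direct (matrix-inversion) image-domain decomposition
decomposed = measurements @ np.linalg.inv(A).T

print("measurement noise SD:", measurements.std(axis=0))
print("decomposed noise SD :", decomposed.std(axis=0))
# The ill-conditioned inversion amplifies noise in the material images
```

Because the two energy measurements are nearly collinear, the inverse matrix has large entries and the standard deviation in the decomposed images exceeds that of the raw measurements, which is the degradation the learned prior is designed to counteract.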

CONCLUSIONS: Since the invention of DECT, noise amplification during material decomposition has been one of the biggest challenges, impeding its quantitative use in clinical practice. The proposed method performs accurate material decomposition with efficient noise suppression. Furthermore, the proposed method is within an unsupervised-learning framework, which does not require paired data for model training and resolves the issue of lack of ground-truth data in clinical scenarios.

PMID:38865687 | DOI:10.1002/mp.17255

Categories: Literature Watch

Information-hiding cameras: Optical concealment of object information into ordinary images

Wed, 2024-06-12 06:00

Sci Adv. 2024 Jun 14;10(24):eadn9420. doi: 10.1126/sciadv.adn9420. Epub 2024 Jun 12.

ABSTRACT

We introduce an information-hiding camera integrated with an electronic decoder that is jointly optimized through deep learning. This system uses a diffractive optical processor, which transforms and hides input images into ordinary-looking patterns that deceive/mislead observers. This information-hiding transformation is valid for infinitely many combinations of secret messages, transformed into ordinary-looking output images through passive light-matter interactions within the diffractive processor. By processing these output patterns, an electronic decoder network accurately reconstructs the original information hidden within the deceptive output. We demonstrated our approach by designing information-hiding diffractive cameras operating under various lighting conditions and noise levels, showing their robustness. We further extended this framework to multispectral operation, allowing the concealment and decoding of multiple images at different wavelengths, performed simultaneously. The feasibility of our framework was also validated experimentally using terahertz radiation. This optical encoder-electronic decoder-based codesign provides a high-speed and energy-efficient information-hiding camera, offering a powerful solution for visual information security.

PMID:38865455 | DOI:10.1126/sciadv.adn9420

Categories: Literature Watch

GT-CAM: Game Theory based Class Activation Map for GCN

Wed, 2024-06-12 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Jun 12;PP. doi: 10.1109/TPAMI.2024.3413026. Online ahead of print.

ABSTRACT

Graph Convolutional Networks (GCNs) have shown outstanding performance in skeleton-based behavior recognition. However, their opacity hampers further development. Research on the explainability of deep learning has provided solutions to this issue, with Class Activation Map (CAM) algorithms being one class of explainable methods. However, existing CAM algorithms applied to GCNs often compute the contribution of individual nodes independently, overlooking the interactions between nodes in the skeleton. Therefore, we propose a game theory based class activation map for GCNs (GT-CAM). Firstly, GT-CAM integrates Shapley values with gradient weights to calculate node importance, producing an activation map that highlights the critical role of nodes in decision-making. It also reveals the cooperative dynamics between nodes or local subgraphs for a more comprehensive explanation. Secondly, to reduce the computational burden of Shapley values, we propose a method for calculating the Shapley values of node coalitions. Lastly, to evaluate the rationality of coalition partitioning, we propose a rationality evaluation method based on bipartite game interaction and cooperative game theory. Additionally, we introduce an efficient calculation method for the coalition rationality coefficient based on the Monte Carlo method. Experimental results demonstrate that GT-CAM outperforms other competitive interpretation methods in visualization and quantitative analysis.
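Shapley values are expensive because they average a player's marginal contribution over all coalition orderings; Monte Carlo estimation over random permutations, as alluded to above, is the standard workaround. This sketch uses a toy coalition value function with named "skeleton nodes" that are purely hypothetical, not the paper's model:

```python
import random

def monte_carlo_shapley(players, value_fn, n_permutations=2000, seed=0):
    """Monte Carlo Shapley estimate: average each player's marginal
    contribution over random permutations of the coalition build-up order."""
    rng = random.Random(seed)
    shapley = {p: 0.0 for p in players}
    for _ in range(n_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev_value = value_fn(coalition)
        for p in order:
            coalition.add(p)
            new_value = value_fn(coalition)
            shapley[p] += new_value - prev_value
            prev_value = new_value
    return {p: v / n_permutations for p, v in shapley.items()}

# Hypothetical coalition value: additive scores plus one synergy pair
def coalition_value(nodes):
    base = {"hip": 1.0, "knee": 2.0, "ankle": 0.5}
    v = sum(base[n] for n in nodes)
    if {"hip", "knee"} <= nodes:
        v += 1.0  # hip and knee contribute extra when acting together
    return v

print(monte_carlo_shapley(["hip", "knee", "ankle"], coalition_value))
```

The exact Shapley values here are 1.5, 2.5 and 0.5: the synergy bonus is split evenly between the two nodes that create it, which is the cooperative-attribution property GT-CAM exploits for node coalitions.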

PMID:38865236 | DOI:10.1109/TPAMI.2024.3413026

Categories: Literature Watch

Real-time Automatic M-mode Echocardiography Measurement with Panel Attention

Wed, 2024-06-12 06:00

IEEE J Biomed Health Inform. 2024 Jun 12;PP. doi: 10.1109/JBHI.2024.3413628. Online ahead of print.

ABSTRACT

Motion mode (M-mode) echocardiography is essential for measuring cardiac dimensions and ejection fraction. However, the current diagnostic process is time-consuming and suffers from variance in diagnostic accuracy. This work builds an automatic scheme based on well-designed and well-trained deep learning to address these issues. We propose RAMEM, an automatic scheme for real-time M-mode echocardiography, which makes three contributions to address the challenges: 1) we provide MEIS, the first dataset of M-mode echocardiograms, to enable consistent results and support the development of an automatic scheme. Detecting objects accurately in echocardiograms requires a large receptive field to cover the long-range diastole-to-systole cycle, yet the limited receptive field of a typical convolutional neural network (CNN) backbone and the risk of information loss in CNNs equipped with non-local blocks (NL) jeopardize the required accuracy. Therefore, we 2) propose panel attention embedding with updated UPANets V2, a convolutional backbone network, in a real-time instance segmentation (RIS) scheme for boosting large-object detection performance; and 3) introduce AMEM, an efficient algorithm for automatic M-mode echocardiography measurement, enabling automatic diagnosis. The experimental results show that RAMEM surpasses existing RIS schemes (CNNs with NL and Transformers as the backbone) on PASCAL 2012 SBD and human performance on MEIS. The implemented code and dataset are available at https://github.com/hanktseng131415go/RAMEM.

PMID:38865231 | DOI:10.1109/JBHI.2024.3413628

Categories: Literature Watch

Video-based Soft Tissue Deformation Tracking for Laparoscopic Augmented Reality-based Navigation in Kidney Surgery

Wed, 2024-06-12 06:00

IEEE Trans Med Imaging. 2024 Jun 12;PP. doi: 10.1109/TMI.2024.3413537. Online ahead of print.

ABSTRACT

Minimally invasive surgery (MIS) remains technically demanding due to the difficulty of tracking hidden critical structures within the moving anatomy of the patient. In this study, we propose a soft tissue deformation tracking augmented reality (AR) navigation pipeline for laparoscopic surgery of the kidneys. The proposed navigation pipeline addresses two main sub-problems: the initial registration and deformation tracking. Our method utilizes preoperative MR or CT data and binocular laparoscopes without any additional interventional hardware. The initial registration is resolved through a probabilistic rigid registration algorithm and elastic compensation based on dense point cloud reconstruction. For deformation tracking, the sparse feature point displacement vector field continuously provides temporal boundary conditions for the biomechanical model. To enhance the accuracy of the displacement vector field, a novel feature point selection strategy based on deep learning is proposed. Moreover, an ex-vivo experimental method for error assessment of internal structures is presented. The ex-vivo experiments indicate an external surface reprojection error of 4.07 ± 2.17 mm and a maximum mean absolute error for internal structures of 2.98 mm. In-vivo experiments indicate mean absolute errors of 3.28 ± 0.40 mm and 1.90 ± 0.24 mm, respectively. The combined qualitative and quantitative findings indicated the potential of our AR-assisted navigation system in improving the clinical application of laparoscopic kidney surgery.

PMID:38865220 | DOI:10.1109/TMI.2024.3413537

Categories: Literature Watch

Tipping points of evolving epidemiological networks: Machine learning-assisted, data-driven effective modeling

Wed, 2024-06-12 06:00

Chaos. 2024 Jun 1;34(6):063128. doi: 10.1063/5.0187511.

ABSTRACT

We study the tipping point collective dynamics of an adaptive susceptible-infected-susceptible (SIS) epidemiological network in a data-driven, machine learning-assisted manner. We identify a parameter-dependent effective stochastic differential equation (eSDE) in terms of physically meaningful coarse mean-field variables through a deep-learning ResNet architecture inspired by numerical stochastic integrators. We construct an approximate effective bifurcation diagram based on the identified drift term of the eSDE and contrast it with the mean-field SIS model bifurcation diagram. We observe a subcritical Hopf bifurcation in the evolving network's effective SIS dynamics that causes the tipping point behavior; this takes the form of large-amplitude collective oscillations that spontaneously, yet rarely, arise from the neighborhood of a (noisy) stationary state. We study the statistics of these rare events both through repeated brute-force simulations and by using established mathematical/computational tools exploiting the right-hand side of the identified SDE. We demonstrate that such a collective SDE can also be identified (and the rare event computations also performed) in terms of data-driven coarse observables, obtained here via manifold learning techniques, in particular, Diffusion Maps. The workflow of our study is straightforwardly applicable to other complex dynamic problems exhibiting tipping point dynamics.
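The mean-field SIS model against which the identified eSDE is contrasted can be simulated directly with Euler-Maruyama integration; this stylized sketch is not the paper's learned eSDE, and all coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean-field SIS dynamics with small additive noise, integrated by
# Euler-Maruyama: dI = (beta*I*(1-I) - gamma*I) dt + sigma dW
beta, gamma, sigma = 0.6, 0.2, 0.02   # infection, recovery, noise strength
dt, steps = 0.01, 50000

i = 0.5  # infected fraction
traj = np.empty(steps)
for t in range(steps):
    drift = beta * i * (1.0 - i) - gamma * i
    i += drift * dt + sigma * np.sqrt(dt) * rng.normal()
    i = min(max(i, 0.0), 1.0)  # keep the fraction in [0, 1]
    traj[t] = i

# The trajectory fluctuates around the endemic equilibrium I* = 1 - gamma/beta
print(traj[-10000:].mean())  # ≈ 0.667
```

In the paper's setting, the learned drift term replaces the analytic `drift` above, and rare large excursions away from this noisy stationary state constitute the tipping events.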

PMID:38865091 | DOI:10.1063/5.0187511

Categories: Literature Watch

Contraction assessment of abdominal muscles using automated segmentation designed for wearable ultrasound applications

Wed, 2024-06-12 06:00

Int J Comput Assist Radiol Surg. 2024 Jun 12. doi: 10.1007/s11548-024-03204-0. Online ahead of print.

ABSTRACT

PURPOSE: Wearable ultrasound devices can be used to continuously monitor muscle activity. One possible application is to provide real-time feedback during physiotherapy, showing a patient whether an exercise is performed correctly. Algorithms that automatically analyze the data are important for overcoming the need for manual assessment and annotation, and for speeding up evaluations, especially for real-time video sequences. They could even be used to present feedback to patients in an understandable manner in a home-use scenario. The following work investigates three deep learning based segmentation approaches for abdominal muscles in ultrasound videos during a segmental stabilizing exercise. The segmentations are used to automatically classify the contraction state of the muscles.

METHODS: The first approach employs a simple 2D network, while the remaining two integrate the temporal information from the videos either via additional tracking or directly into the network architecture. The contraction state is determined by comparing measures such as muscle thickness and center of mass between rest and exercise. In addition to a retrospective analysis, a real-time scenario is simulated in which classification is performed during exercise.
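As a rough illustration of the classification step, the sketch below derives muscle thickness and center of mass from a binary segmentation mask and labels a frame as contracted when thickness increases beyond a threshold; the masks, pixel spacing, and 15% threshold are hypothetical, not values from the paper:

```python
import numpy as np

def mask_features(mask, px_mm=0.1):
    """Muscle thickness (mean anteroposterior extent) and center of mass
    from a binary segmentation mask (rows = depth, cols = lateral)."""
    cols = mask.any(axis=0)
    thickness = mask[:, cols].sum(axis=0).mean() * px_mm  # mm
    rows_idx, cols_idx = np.nonzero(mask)
    com = np.array([rows_idx.mean(), cols_idx.mean()]) * px_mm
    return thickness, com

def is_contracted(rest_mask, ex_mask, ratio_thresh=1.15):
    """Label the exercise frame 'contracted' when thickness increases by
    more than a hypothetical 15% relative to rest."""
    t_rest, _ = mask_features(rest_mask)
    t_ex, _ = mask_features(ex_mask)
    return bool(t_ex / t_rest > ratio_thresh)

# Toy masks: the muscle band is thicker in the exercise frame
rest = np.zeros((40, 60), dtype=bool); rest[18:22, 5:55] = True       # 4 px thick
exercise = np.zeros((40, 60), dtype=bool); exercise[16:24, 5:55] = True  # 8 px thick
print(is_contracted(rest, exercise))  # True
```

The real pipeline would extract such features per video frame from the network segmentations and compare exercise frames against a rest baseline.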

RESULTS: Using the proposed segmentation algorithms, 71% of the muscle states are classified correctly in the retrospective analysis, compared to 90% accuracy with manual reference segmentation. For the real-time approach, the majority of the feedback given during exercise is correct in cases where the retrospective analysis also reached the correct result.

CONCLUSION: Both retrospective and real-time analyses prove feasible. While no substantial differences between the algorithms were observed regarding classification, the networks incorporating temporal information produced temporally more consistent segmentations. Limitations of the approaches, as well as reasons for segmentation, classification, and real-time assessment failures, are discussed, and requirements regarding image quality and hardware design are derived.

PMID:38865060 | DOI:10.1007/s11548-024-03204-0

Categories: Literature Watch

Parallel processing model for low-dose computed tomography image denoising

Wed, 2024-06-12 06:00

Vis Comput Ind Biomed Art. 2024 Jun 12;7(1):14. doi: 10.1186/s42492-024-00165-8.

ABSTRACT

Low-dose computed tomography (LDCT) has gained increasing attention owing to its crucial role in reducing radiation exposure in patients. However, LDCT-reconstructed images often suffer from significant noise and artifacts, negatively impacting radiologists' ability to diagnose accurately. To address this issue, many studies have focused on denoising LDCT images using deep learning (DL) methods. However, these DL-based denoising methods have been hindered by the highly variable feature distribution of LDCT data from different imaging sources, which adversely affects the performance of current denoising models. In this study, we propose a parallel processing model, the multi-encoder deep feature transformation network (MDFTN), which is designed to enhance the performance of LDCT imaging for multisource data. Unlike traditional network structures, which rely on continual learning to process multitask data, our approach can simultaneously handle LDCT images from various imaging sources within a unified framework. The proposed MDFTN consists of multiple encoders and decoders along with a deep feature transformation module (DFTM). During forward propagation in network training, each encoder extracts diverse features from its respective data source in parallel, and the DFTM compresses these features into a shared feature space. Subsequently, each decoder performs an inverse operation for multisource loss estimation. Through collaborative training, the proposed MDFTN leverages the complementary advantages of multisource data distribution to enhance its adaptability and generalization. Numerous experiments were conducted on two public datasets and one local dataset, which demonstrated that the proposed network model can simultaneously process multisource data while effectively suppressing noise and preserving fine structures. The source code is available at https://github.com/123456789ey/MDFTN.
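A schematic of the parallel multi-encoder/multi-decoder data flow described above, reduced to linear maps for brevity; this is an illustrative sketch only, not the authors' MDFTN architecture or its deep feature transformation module:

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearCodec:
    """Toy stand-in for one source-specific encoder/decoder pair: each
    imaging source gets its own codec, while a single shared transform
    operates in the common feature space."""
    def __init__(self, d_in, d_feat):
        self.enc = rng.normal(size=(d_feat, d_in)) * 0.1
        self.dec = rng.normal(size=(d_in, d_feat)) * 0.1

n_sources, d_in, d_feat = 2, 16, 8
codecs = [LinearCodec(d_in, d_feat) for _ in range(n_sources)]
shared = np.eye(d_feat)  # placeholder for the shared feature transform

# Forward pass: every source is encoded in parallel, mapped through the
# shared feature space, then decoded by its own source-specific decoder.
batches = [rng.normal(size=(4, d_in)) for _ in range(n_sources)]
recons = [(shared @ (c.enc @ x.T)).T @ c.dec.T for c, x in zip(codecs, batches)]
print([r.shape for r in recons])  # [(4, 16), (4, 16)]
```

In training, each decoder's reconstruction would contribute a per-source loss term, so all branches are optimized jointly.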

PMID:38865022 | DOI:10.1186/s42492-024-00165-8

Categories: Literature Watch

Streamlining Acute Abdominal Aortic Dissection Management-An AI-based CT Imaging Workflow

Wed, 2024-06-12 06:00

J Imaging Inform Med. 2024 Jun 12. doi: 10.1007/s10278-024-01164-0. Online ahead of print.

ABSTRACT

Life-threatening acute aortic dissection (AD) demands timely diagnosis for effective intervention. To streamline intrahospital workflows, automated detection of AD in abdominal computed tomography (CT) scans can assist clinicians. We aimed to create a robust convolutional neural network (CNN)-based pipeline capable of real-time screening for signs of abdominal AD in CT. In this retrospective study, abdominal CT data from patients presenting with AD and from non-AD patients were collected (n = 195, AD cases = 94, mean age 65.9 years, female ratio 35.8%). A CNN-based algorithm was developed with the goal of enabling robust, automated, and highly sensitive detection of abdominal AD. Two sets from internal (n = 32, AD cases = 16) and external sources (n = 1189, AD cases = 100) were procured for validation. The abdominal region was extracted, followed by automatic isolation of the aorta region of interest (ROI) and highlighting of the membrane via edge extraction, followed by classification of the aortic ROI as dissected or healthy. A fivefold cross-validation was employed on the internal set, and an ensemble of the 5 trained models was used to predict the internal and external validation sets. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) and balanced accuracy. The AUC, balanced accuracy, and sensitivity scores for the internal dataset were 0.932 (CI 0.891-0.963), 0.860, and 0.885, respectively. For the internal validation dataset, the AUC, balanced accuracy, and sensitivity scores were 0.887 (CI 0.732-0.988), 0.781, and 0.875, respectively. For the external validation dataset, the AUC, balanced accuracy, and sensitivity scores were 0.993 (CI 0.918-0.994), 0.933, and 1.000, respectively. The proposed automated pipeline could help expedite acute aortic dissection management when integrated into clinical workflows.
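The reported metrics can be reproduced from predictions as follows: balanced accuracy is the mean of sensitivity and specificity, and the five-fold ensemble is assumed here to average fold probabilities before thresholding (a common choice; the probabilities and threshold below are hypothetical):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity, as reported alongside AUC."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = (y_pred[y_true == 1] == 1).mean()  # true positive rate
    spec = (y_pred[y_true == 0] == 0).mean()  # true negative rate
    return 0.5 * (sens + spec)

def ensemble_predict(fold_probs, thresh=0.5):
    """Average the dissection probabilities of the five cross-validation
    models, then threshold."""
    return (np.mean(fold_probs, axis=0) >= thresh).astype(int)

# Hypothetical probabilities from 5 folds for 4 scans (2 dissected, 2 healthy)
probs = np.array([[0.9, 0.8, 0.2, 0.4],
                  [0.7, 0.9, 0.1, 0.3],
                  [0.8, 0.6, 0.3, 0.6],
                  [0.9, 0.7, 0.2, 0.2],
                  [0.6, 0.8, 0.4, 0.5]])
y_true = [1, 1, 0, 0]
y_pred = ensemble_predict(probs)
print(y_pred, balanced_accuracy(y_true, y_pred))  # [1 1 0 0] 1.0
```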

PMID:38864947 | DOI:10.1007/s10278-024-01164-0

Categories: Literature Watch

Accelerated musculoskeletal magnetic resonance imaging with deep learning-based image reconstruction at 0.55 T-3 T

Wed, 2024-06-12 06:00

Radiologie (Heidelb). 2024 Jun 12. doi: 10.1007/s00117-024-01325-w. Online ahead of print.

ABSTRACT

CLINICAL/METHODICAL ISSUE: Magnetic resonance imaging (MRI) is a central component of musculoskeletal imaging. However, long image acquisition times can pose practical barriers in clinical practice.

STANDARD RADIOLOGICAL METHODS: MRI is the established modality of choice in the diagnostic workup of injuries and diseases of the musculoskeletal system due to its high spatial resolution, excellent signal-to-noise ratio (SNR), and unparalleled soft tissue contrast.

METHODOLOGICAL INNOVATIONS: Continuous advances in hardware and software technology over the last few decades have enabled four-fold acceleration of 2D turbo spin-echo (TSE) imaging without compromising image quality or diagnostic performance. The recent clinical introduction of deep learning (DL)-based image reconstruction algorithms helps to further minimize the interdependency between SNR, spatial resolution, and image acquisition time, and allows the use of higher acceleration factors.

PERFORMANCE: The combined use of advanced acceleration techniques and DL-based image reconstruction holds enormous potential to maximize efficiency, patient comfort, access, and value of musculoskeletal MRI while maintaining excellent diagnostic accuracy.

ACHIEVEMENTS: Accelerated MRI with DL-based image reconstruction has rapidly found its way into clinical practice and proven to be of added value. Furthermore, recent investigations suggest that the potential of this technology has not yet been fully exploited.

PRACTICAL RECOMMENDATIONS: Deep learning-reconstructed fast musculoskeletal MRI examinations can be reliably used for diagnostic work-up and follow-up of musculoskeletal pathologies in clinical practice.

PMID:38864874 | DOI:10.1007/s00117-024-01325-w

Categories: Literature Watch
