Deep learning
A Survey of Few-Shot Learning for Biomedical Time Series
IEEE Rev Biomed Eng. 2024 Nov 6;PP. doi: 10.1109/RBME.2024.3492381. Online ahead of print.
ABSTRACT
Advancements in wearable sensor technologies and the digitization of medical records have contributed to the unprecedented ubiquity of biomedical time series data. Data-driven models have tremendous potential to assist clinical diagnosis and improve patient care by improving long-term monitoring capabilities, facilitating early disease detection and intervention, as well as promoting personalized healthcare delivery. However, accessing extensively labeled datasets to train data-hungry deep learning models encounters many barriers, such as long-tail distribution of rare diseases, cost of annotation, privacy and security concerns, data-sharing regulations, and ethical considerations. An emerging approach to overcome the scarcity of labeled data is to augment AI methods with human-like capabilities to leverage past experiences to learn new tasks with limited examples, called few-shot learning. This survey provides a comprehensive review and comparison of few-shot learning methods for biomedical time series applications. The clinical benefits and limitations of such methods are discussed in relation to traditional data-driven approaches. This paper aims to provide insights into the current landscape of few-shot learning for biomedical time series and its implications for future research and applications.
PMID:39504299 | DOI:10.1109/RBME.2024.3492381
Automated acute skin toxicity scoring in a mouse model through deep learning
Radiat Environ Biophys. 2024 Nov 6. doi: 10.1007/s00411-024-01096-x. Online ahead of print.
ABSTRACT
This study presents a novel approach to skin toxicity assessment in preclinical radiotherapy trials through an advanced imaging setup and deep learning. Skin reactions, a common undesirable side effect of radiotherapy, were meticulously evaluated in 160 mice across four studies. A comprehensive dataset containing 7542 images was derived from proton/electron trials with matched manual scoring of the acute toxicity on the right hind leg, which was the target area irradiated in the trials. This dataset was the foundation for the subsequent model training. The two-step deep learning framework incorporated an object detection model for hind leg detection and a classification model for toxicity classification. An observer study involving five experts and the deep learning model was conducted to analyze the retrospective capabilities and inter-observer variations. The results revealed that the hind leg object detection model exhibited a robust performance, achieving an accuracy of almost 99%. Subsequently, the classification model demonstrated an overall accuracy of about 85%, revealing nuanced challenges in specific toxicity grades. The observer study highlighted high inter-observer agreement and showcased the model's superiority in accuracy and misclassification distance. In conclusion, this study signifies an advancement in objective and reproducible skin toxicity assessment. The imaging and deep learning system not only allows for retrospective toxicity scoring, but also presents a potential for minimizing inter-observer variation and evaluation times, addressing critical gaps in manual scoring methodologies. Future recommendations include refining the system through an expanded training dataset, paving the way for its deployment in preclinical research and radiotherapy trials.
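The abstract compares observers and the model on accuracy and "misclassification distance" over ordinal toxicity grades. The paper's exact definition is not given here; a common reading, shown as an assumption, is the mean absolute distance between assigned and reference grades, which penalizes far misses more than near ones:

```python
import numpy as np

def misclassification_distance(true_grades, pred_grades):
    """Mean absolute distance between ordinal toxicity grades.

    Plain accuracy treats a grade-3 lesion scored as grade 0 the same as
    one scored grade 2; an ordinal distance penalizes far misses more.
    (Illustrative definition; the paper's metric may differ.)
    """
    t = np.asarray(true_grades, dtype=float)
    p = np.asarray(pred_grades, dtype=float)
    return float(np.mean(np.abs(t - p)))

true = [0, 1, 2, 3, 3]
pred = [0, 1, 3, 3, 1]
print(misclassification_distance(true, pred))  # (0+0+1+0+2)/5 = 0.6
```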
PMID:39503921 | DOI:10.1007/s00411-024-01096-x
Deep learning-based human gunshot wounds classification
Int J Legal Med. 2024 Nov 6. doi: 10.1007/s00414-024-03355-4. Online ahead of print.
ABSTRACT
In this paper, we present a forensic perspective on classifying gunshot wound patterns using Deep Learning (DL). Although DL has revolutionized various medical specialties, such as automating tasks like medical image classification, its applications in forensic contexts have been limited despite the inherently visual nature of the field. This study investigates the application of DL techniques (59 architectures) to classify gunshot wounds in a forensic context, focusing on distinguishing between entry and exit wounds and determining the Medical-Legal Shooting Distance (MLSD), which classifies wounds as contact, close range, or distant, based on digital images from real crime scene cases. A comprehensive database was constructed with 2,551 images, including 1,883 entry wounds and 668 exit wounds. The ResNet152 architecture demonstrated superior performance in both entry and exit wound classification and MLSD categorization. For the first task, it achieved an accuracy of 86.90% and an AUC of 82.09%. For MLSD, ResNet152 showed an accuracy of 92.48% and an AUC of up to 94.36%, though sample imbalance affected the metrics. Our findings underscore the challenges of standardizing wound images due to varying capture conditions but reflect the practical realities of forensic work. This research highlights the significant potential of DL in enhancing forensic pathology practices, advocating for Artificial Intelligence (AI) as a supportive tool to complement human expertise in forensic investigations.
PMID:39503869 | DOI:10.1007/s00414-024-03355-4
The Segment Anything foundation model achieves favorable brain tumor auto-segmentation accuracy in MRI to support radiotherapy treatment planning
Strahlenther Onkol. 2024 Nov 6. doi: 10.1007/s00066-024-02313-8. Online ahead of print.
ABSTRACT
BACKGROUND: Promptable foundation auto-segmentation models like Segment Anything (SA, Meta AI, New York, USA) represent a novel class of universal deep learning auto-segmentation models that could be employed for interactive tumor auto-contouring in RT treatment planning.
METHODS: Segment Anything was evaluated in an interactive point-to-mask auto-segmentation task for glioma brain tumor auto-contouring in 16,744 transverse slices from 369 MRI datasets (BraTS 2020 dataset). Up to nine interactive point prompts were automatically placed per slice. Tumor boundaries were auto-segmented on contrast-enhanced T1w sequences. Out of the three auto-contours predicted by SA, accuracy was evaluated for the contour with the highest calculated IoU (Intersection over Union, "oracle mask," simulating interactive model use with selection of the best tumor contour) and for the tumor contour with the highest model confidence ("suggested mask").
RESULTS: Mean best IoU (mbIoU) using the best predicted tumor contour (oracle mask) in full MRI slices was 0.762 (IQR 0.713-0.917). The best 2D mask was achieved after a mean of 6.6 interactive point prompts (IQR 5-9). Segmentation accuracy was significantly better for high- compared to low-grade glioma cases (mbIoU 0.789 vs. 0.668). Accuracy was worse using the suggested mask (0.572). Stacking best tumor segmentations from transverse MRI slices, mean 3D Dice score for tumor auto-contouring was 0.872, which was improved to 0.919 by combining axial, sagittal, and coronal contours.
CONCLUSION: The Segment Anything foundation segmentation model can achieve high accuracy for glioma brain tumor segmentation in MRI datasets. The results suggest that foundation segmentation models could facilitate RT treatment planning when properly integrated in a clinical application.
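The IoU and Dice figures reported above are standard overlap metrics. As a quick reference (a generic sketch, not the study's evaluation code), both can be computed directly from binary masks:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def dice(a, b):
    """Dice score; related to IoU by dice = 2*iou / (1 + iou)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt   = np.array([[1, 1, 0], [1, 0, 0]])
pred = np.array([[1, 1, 0], [0, 0, 1]])
print(iou(gt, pred))   # 2 overlapping / 4 in union = 0.5
print(dice(gt, pred))  # 2*2 / (3+3) ≈ 0.667
```

Selecting the "oracle mask" then amounts to keeping the predicted contour with the highest IoU against the reference, whereas the "suggested mask" ignores the reference and uses model confidence.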
PMID:39503868 | DOI:10.1007/s00066-024-02313-8
Automated detection of bone lesions using CT and MRI: a systematic review
Radiol Med. 2024 Nov 6. doi: 10.1007/s11547-024-01913-9. Online ahead of print.
ABSTRACT
PURPOSE: The aim of this study was to systematically review the use of automated detection systems for identifying bone lesions based on CT and MRI, focusing on advancements in artificial intelligence (AI) applications.
MATERIALS AND METHODS: A literature search was conducted on PubMed and MEDLINE. Data were extracted and grouped into three main categories, namely baseline study characteristics, model validation strategies, and the type of AI algorithms.
RESULTS: A total of 10 studies were selected and analyzed, including 2,768 patients overall with a median of 187 per study. These studies utilized various AI algorithms, predominantly deep learning models (6 studies) such as Convolutional Neural Networks. Among machine learning validation strategies, K-fold cross-validation was the most commonly used (5 studies). Clinical validation was performed using data from the same institution (internal testing) in 8 studies, and from both the same and different institutions (internal and external testing) in 1 study.
CONCLUSION: AI, particularly deep learning, holds significant promise in enhancing diagnostic accuracy and efficiency. However, the review highlights several limitations, such as the lack of standardized validation methods and the limited use of external datasets for testing. Future research should address these gaps to ensure the reliability and applicability of AI-based detection systems in clinical settings.
PMID:39503845 | DOI:10.1007/s11547-024-01913-9
Anatomic Interpretability in Neuroimage Deep Learning: Saliency Approaches for Typical Aging and Traumatic Brain Injury
Neuroinformatics. 2024 Nov 6. doi: 10.1007/s12021-024-09694-2. Online ahead of print.
ABSTRACT
The black box nature of deep neural networks (DNNs) makes researchers and clinicians hesitant to rely on their findings. Saliency maps can enhance DNN explainability by suggesting the anatomic localization of relevant brain features. This study compares seven popular attribution-based saliency approaches to assign neuroanatomic interpretability to DNNs that estimate biological brain age (BA) from magnetic resonance imaging (MRI). Cognitively normal (CN) adults (N = 13,394, 5,900 males; mean age: 65.82 ± 8.89 years) are included for DNN training, testing, validation, and saliency map generation to estimate BA. To study saliency robustness to the presence of anatomic deviations from normality, saliency maps are also generated for adults with mild traumatic brain injury (mTBI, N = 214, 135 males; mean age: 55.3 ± 9.9 years). We assess saliency methods' capacities to capture known anatomic features of brain aging and compare them to a surrogate ground truth whose anatomic saliency is known a priori. Anatomic aging features are identified most reliably by the integrated gradients method, which outperforms all others through its ability to localize relevant anatomic features. Gradient Shapley additive explanations, input × gradient, and masked gradient perform less consistently but still highlight ubiquitous neuroanatomic features of aging (ventricle dilation, hippocampal atrophy, sulcal widening). Saliency methods involving gradient saliency, guided backpropagation, and guided gradient-weighted class activation mapping localize saliency outside the brain, which is undesirable. Our research clarifies the relative tradeoffs among saliency methods for interpreting DNN findings during BA estimation in typical aging and after mTBI.
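Integrated gradients, the best-performing method above, averages gradients along a straight path from a baseline input to the actual input and scales by the input difference. A minimal NumPy sketch on a toy differentiable function (not a brain-age DNN; the function and step count are illustrative assumptions) shows the method and its completeness property:

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=200):
    """Approximate integrated gradients with a midpoint Riemann sum.

    f_grad   : callable returning the gradient of the model output
    x        : input to attribute
    baseline : reference input (e.g. a zero image)
    """
    alphas = (np.arange(steps) + 0.5) / steps          # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)  # straight-line path
    avg_grad = np.mean([f_grad(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy "model": f(x) = sum(x_i^2), so grad f(x) = 2x.
f = lambda v: np.sum(v ** 2)
grad_f = lambda v: 2 * v

x = np.array([1.0, -2.0, 3.0])
base = np.zeros(3)
attr = integrated_gradients(grad_f, x, base)
# Completeness axiom: attributions sum to f(x) - f(baseline) = 14.
print(attr, attr.sum())
```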
PMID:39503843 | DOI:10.1007/s12021-024-09694-2
Image quality in three-dimensional (3D) contrast-enhanced dynamic magnetic resonance imaging of the abdomen using deep learning denoising technique: intraindividual comparison between T1-weighted sequences with compressed sensing and with a modified...
Jpn J Radiol. 2024 Nov 6. doi: 10.1007/s11604-024-01687-0. Online ahead of print.
ABSTRACT
PURPOSE: To assess the image quality of a modified Fast three-dimensional (Fast 3D) mode wheel with sequential data filling (mFast 3D wheel) combined with a deep learning denoising technique (Advanced Intelligent Clear-IQ Engine [AiCE]) in contrast-enhanced (CE) 3D dynamic magnetic resonance (MR) imaging of the abdomen during a single breath hold (BH) by intra-individual comparison with compressed sensing (CS) with AiCE.
METHODS: Forty-two patients who underwent multiphasic CE dynamic MRI obtained with both mFast 3D wheel using AiCE and CS using AiCE in the same patient were retrospectively included. The conspicuity, artifacts, image quality, signal intensity ratio (SIR), signal-to-noise ratio (SNR), contrast ratio (CR), and contrast enhancement ratio (CER) of the organs were compared between these 2 sequences.
RESULTS: Conspicuity, artifacts, and overall image quality were significantly better in the mFast 3D wheel using AiCE than in the CS with AiCE (all p < 0.001). The SNR of the liver in CS with AiCE was significantly better than that in the mFast 3D wheel using AiCE (p < 0.01). There were no significant differences in the SIR, CR, and CER between the two sequences.
CONCLUSION: An mFast 3D wheel using AiCE as a deep learning denoising technique improved the conspicuity of abdominal organs and intrahepatic structures and the overall image quality with sufficient contrast enhancement effects, making it feasible for BH 3D CE dynamic MR imaging of the abdomen.
PMID:39503820 | DOI:10.1007/s11604-024-01687-0
Automatic 3-dimensional quantification of orthodontically induced root resorption in cone-beam computed tomography images based on deep learning
Am J Orthod Dentofacial Orthop. 2024 Nov 4:S0889-5406(24)00422-0. doi: 10.1016/j.ajodo.2024.09.009. Online ahead of print.
ABSTRACT
INTRODUCTION: Orthodontically induced root resorption (OIRR) is a common and undesirable consequence of orthodontic treatment. Traditionally, studies employ manual methods to conduct 3-dimensional quantitative analysis of OIRR via cone-beam computed tomography (CBCT), which is often subjective and time-consuming. With advancements in computer technology, deep learning-based approaches have gained traction in medical image processing. This study presents a deep learning-based model for the fully automatic extraction of root volume information and the localization of root resorption from CBCT images.
METHODS: In this cross-sectional, retrospective study, 4534 teeth from 105 patients were used to train and validate an automatic model for OIRR quantification. The protocol encompassed several steps: preprocessing of CBCT images involving automatic tooth segmentation and conversion into point clouds, followed by segmentation of tooth crowns and roots via the Dynamic Graph Convolutional Neural Network. The root volume was subsequently calculated, and OIRR localization was performed. The intraclass correlation coefficient was employed to validate the consistency between the automatic model and manual measurements.
RESULTS: The proposed method strongly correlated with manual measurements in terms of root volume and OIRR severity assessment. The intraclass correlation coefficient values for average volume measurements at each tooth position exceeded 0.95 (P <0.001), with the accuracy of different OIRR severity classifications surpassing 0.8.
CONCLUSIONS: The proposed methodology provides automatic and reliable tools for OIRR assessment, offering potential improvements in orthodontic treatment planning and monitoring.
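The intraclass correlation coefficient used to validate the automatic measurements against manual ones can be sketched via the classic ANOVA decomposition. The abstract does not state which ICC form was used, so the two-way mixed, single-measure consistency form ICC(3,1) is shown purely as an assumption:

```python
import numpy as np

def icc3_1(ratings):
    """Two-way mixed, single-measure consistency ICC(3,1).

    ratings : (n_subjects, n_raters) array, e.g. one column of manual
    root-volume measurements and one of model measurements.
    """
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

print(icc3_1([[1, 1], [2, 2], [3, 3]]))  # perfect agreement -> 1.0
print(icc3_1([[1, 2], [2, 1], [3, 3]]))  # partial agreement -> 0.5
```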
PMID:39503671 | DOI:10.1016/j.ajodo.2024.09.009
Direct Three-Dimensional Observation of the Plasmonic Near-Fields of a Nanoparticle with Circular Dichroism
ACS Nano. 2024 Nov 6. doi: 10.1021/acsnano.4c10677. Online ahead of print.
ABSTRACT
Characterizing the spatial distribution of the electromagnetic fields of a plasmonic nanoparticle is crucial for exploiting its strong light-matter interaction for optoelectronic and catalytic applications. However, observing the near-fields in three dimensions with a high spatial resolution is still challenging. To realize efficient three-dimensional (3D) nanoscale mapping of the plasmonic fields of nanoparticles with complex shapes, this work established autoencoder-embedded electron energy loss spectroscopy (EELS) tomography. A 432-symmetric chiral gold nanoparticle, a nanoparticle with a high optical dissymmetry factor, was analyzed to relate its geometrical features to its exotic optical properties. Our deep-learning-based feature extraction method discriminated plasmons with different energies in the EEL spectra of the nanoparticle in which signals from multiple plasmons were intermixed; this component was key for acceptable 3D visualization of each plasmonic field separately using EELS tomography. With this methodology, the electric field of the plasmon that induces far-field circular dichroism was observed in 3D. The field linked to this chiroptical property was strong along the swirling edges of the particle, as predicted by a numerical calculation. This study provides insight into the correlation between structural and optical chiralities through direct 3D observation of the plasmonic fields. Furthermore, the strategy of implementing an autoencoder for EELS tomography can be generalized to achieve competent 3D analysis of other features, including the optical properties of the dielectrics and chemical states.
PMID:39503616 | DOI:10.1021/acsnano.4c10677
Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer
Radiol Artif Intell. 2024 Nov 6:e240124. doi: 10.1148/ryai.240124. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To combine deep learning and biology-based modeling to predict the response of locally advanced, triple negative breast cancer before initiating neoadjuvant chemotherapy (NAC). Materials and Methods In this retrospective study, a biology-based mathematical model of tumor response to NAC was constructed and calibrated on a patient-specific basis using imaging data from patients enrolled in the MD Anderson ARTEMIS trial (ClinicalTrials.gov, NCT02276443) between April 2018 and May 2021. To relate the calibrated parameters in the biology-based model and pretreatment MRI data, a convolutional neural network (CNN) was employed. The CNN predictions of the calibrated model parameters were used to estimate tumor response at the end of NAC. CNN performance in the estimations of total tumor volume (TTV), total tumor cellularity (TTC), and tumor status was evaluated. Model-predicted TTC and TTV measurements were compared with MRI-based measurements using the concordance correlation coefficient (CCC), and area under the receiver operating characteristic curve (for predicting pathologic complete response at the end of NAC). Results The study included 118 female patients (median age, 51 [range, 29-78] years). For comparison of CNN predicted to measured change in TTC and TTV over the course of NAC, the CCCs were 0.95 (95% CI: 0.90-0.98) and 0.94 (95% CI: 0.87-0.97), respectively. CNN-predicted TTC and TTV had an AUC of 0.72 (95% CI: 0.34-0.94) and 0.72 (95% CI: 0.40-0.95) for predicting tumor status at the time of surgery, respectively. 
Conclusion Deep learning integrated with a biology-based mathematical model showed good performance in predicting the spatial and temporal evolution of a patient's tumor during NAC using only pre-NAC MRI data. ©RSNA, 2024.
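The concordance correlation coefficient (CCC) reported above measures both correlation and agreement in scale/location between predicted and measured quantities. As a generic sketch (Lin's formula with population variances, not the study's code):

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient, e.g. between
    CNN-predicted and MRI-measured change in tumor volume."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()             # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(concordance_cc([1, 2, 3], [1, 2, 3]))  # identical series -> 1.0
print(concordance_cc([1, 2, 3], [2, 4, 6]))  # correlated but biased -> 8/22
```

Unlike Pearson's r (which is 1.0 for both examples), CCC drops when predictions are systematically scaled or shifted away from the measurements.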
PMID:39503605 | DOI:10.1148/ryai.240124
SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans
Radiol Artif Intell. 2024 Nov 6:e240005. doi: 10.1148/ryai.240005. Online ahead of print.
ABSTRACT
Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023 from 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 males). The data consisted of T2-weighted MRI acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic and lumbar spine. A deep learning model, SCIseg, was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg and contrast-agnostic, all part of the Spinal Cord Toolbox). Wilcoxon signed-rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results SCIseg achieved a Dice score of 0.92 ± 0.07 (mean ± SD) and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively.
There was no evidence of a difference between lesion length (P = .42) and maximal axial damage ratio (P = .16) computed from manually annotated lesions and the lesion segmentations obtained using SCIseg. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and extracted relevant lesion biomarkers (namely, lesion volume, lesion length, and maximal axial damage ratio). SCIseg is open-source and accessible through the Spinal Cord Toolbox (v6.2 and above). Published under a CC BY 4.0 license.
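The three biomarkers above can be derived from binary segmentation masks roughly as follows. The exact definitions are assumptions for illustration: length is taken as the superior-inferior lesion extent, and maximal axial damage ratio as the largest per-slice lesion/cord area fraction; the axis order (z, y, x) and voxel size are also hypothetical.

```python
import numpy as np

def lesion_biomarkers(lesion, cord, voxel_mm=(0.5, 0.5, 0.5)):
    """Lesion volume (mm^3), lesion length (mm), and maximal axial
    damage ratio from binary 3-D masks ordered (z, y, x)."""
    lesion, cord = np.asarray(lesion, bool), np.asarray(cord, bool)
    dz, dy, dx = voxel_mm
    volume = lesion.sum() * dx * dy * dz
    z_slices = np.flatnonzero(lesion.any(axis=(1, 2)))  # slices with lesion
    length = (z_slices.max() - z_slices.min() + 1) * dz if z_slices.size else 0.0
    ratios = [lesion[z].sum() / cord[z].sum()
              for z in z_slices if cord[z].sum()]
    damage = max(ratios) if ratios else 0.0
    return volume, length, damage

cord_mask = np.ones((4, 2, 2))            # toy 4-slice cord, 2x2 per slice
lesion_mask = np.zeros((4, 2, 2))
lesion_mask[1, 0, 0] = 1                  # 1 of 4 cord voxels damaged
lesion_mask[2] = [[1, 1], [1, 0]]         # 3 of 4 cord voxels damaged
print(lesion_biomarkers(lesion_mask, cord_mask))  # (0.5, 1.0, 0.75)
```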
PMID:39503603 | DOI:10.1148/ryai.240005
Deep learning in image segmentation for cancer
J Med Radiat Sci. 2024 Nov 6. doi: 10.1002/jmrs.839. Online ahead of print.
NO ABSTRACT
PMID:39503190 | DOI:10.1002/jmrs.839
Fully Bayesian VIB-DeepSSM
Med Image Comput Comput Assist Interv. 2023 Oct;14222:346-356. doi: 10.1007/978-3-031-43898-1_34. Epub 2023 Oct 1.
ABSTRACT
Statistical shape modeling (SSM) enables population-based quantitative analysis of anatomical shapes, informing clinical diagnosis. Deep learning approaches predict correspondence-based SSM directly from unsegmented 3D images but require calibrated uncertainty quantification, motivating Bayesian formulations. Variational information bottleneck DeepSSM (VIB-DeepSSM) is an effective, principled framework for predicting probabilistic shapes of anatomy from images with aleatoric uncertainty quantification. However, VIB is only half-Bayesian and lacks epistemic uncertainty inference. We derive a fully Bayesian VIB formulation and demonstrate the efficacy of two scalable implementation approaches: concrete dropout and batch ensemble. Additionally, we introduce a novel combination of the two that further enhances uncertainty calibration via multimodal marginalization. Experiments on synthetic shapes and left atrium data demonstrate that the fully Bayesian VIB network predicts SSM from images with improved uncertainty reasoning without sacrificing accuracy.
PMID:39503046 | PMC:PMC11536909 | DOI:10.1007/978-3-031-43898-1_34
Comparative Phylogenetic Analysis and Protein Prediction Reveal the Taxonomy and Diverse Distribution of Virulence Factors in Foodborne <em>Clostridium</em> Strains
Evol Bioinform Online. 2024 Nov 4;20:11769343241294153. doi: 10.1177/11769343241294153. eCollection 2024.
ABSTRACT
BACKGROUND: Clostridium botulinum and Clostridium perfringens, 2 major foodborne pathogenic bacteria, produce a variety of virulence protein types with neurotoxic and enterotoxic pathogenic potential, respectively.
OBJECTIVE: The relationship between the molecular evolution of the 2 Clostridium genomes and virulence proteins was studied via a bioinformatics prediction method. The genetic stability, main features of gene coding and structural characteristics of virulence proteins were compared and analyzed to reveal the phylogenetic characteristics, diversity, and distribution of virulence factors of foodborne Clostridium strains.
METHODS: The phylogenetic analysis was performed via composition vector and average nucleotide identity based methods. Evolutionary distances of virulence genes relative to those of housekeeping genes were calculated via multilocus sequence analysis. Bioinformatics software and tools were used to predict and compare the main functional features of genes encoding virulence proteins, and the structures of virulence proteins were predicted and analyzed through homology modeling and a deep learning algorithm.
RESULTS: According to the diversity of toxins, genome evolution tended to cluster based on the protein-coding virulence genes. The evolutionary transfer distances of virulence genes relative to those of housekeeping genes in C. botulinum strains were greater than those in C. perfringens strains, and BoNTs and alpha toxin proteins were located extracellularly. The BoNTs have highly similar structures, but BoNT/A/B and BoNT/E/F have significantly different conformations. The beta2 toxin monomer structure is similar to but simpler than the alpha toxin monomer structure, which has 2 mobile loops in the N-terminal domain. The C-terminal domain of the CPE trimer forms a "claudin-binding pocket" shape, which suggests biological relevance, such as in pore formation.
CONCLUSIONS: According to the genotype of protein-coding virulence genes, the evolution of Clostridium showed a clustering trend. The genetic stability, functional and structural characteristics of foodborne Clostridium virulence proteins reveal the taxonomy and diverse distribution of virulence factors.
PMID:39502941 | PMC:PMC11536399 | DOI:10.1177/11769343241294153
LT-DeepLab: an improved DeepLabV3+ cross-scale segmentation algorithm for Zanthoxylum bungeanum Maxim leaf-trunk diseases in real-world environments
Front Plant Sci. 2024 Oct 22;15:1423238. doi: 10.3389/fpls.2024.1423238. eCollection 2024.
ABSTRACT
INTRODUCTION: Zanthoxylum bungeanum Maxim is an economically significant crop in Asia, but large-scale cultivation is often threatened by frequent diseases, leading to significant yield declines. Deep learning-based methods for crop disease recognition have emerged as a vital research area in agriculture.
METHODS: This paper presents a novel model, LT-DeepLab, for the semantic segmentation of leaf spot (folium macula), rust, frost damage (gelu damnum), and diseased leaves and trunks in complex field environments. The proposed model enhances DeepLabV3+ with an innovative Fission Depth Separable with CRCC Atrous Spatial Pyramid Pooling module, which reduces the structural parameters of the Atrous Spatial Pyramid Pooling module and improves cross-scale extraction capability. Incorporating Criss-Cross Attention with the Convolutional Block Attention Module provides a complementary boost to channel feature extraction. Additionally, deformable convolution enhances low-dimensional features, and a Fully Convolutional Network auxiliary header is integrated to optimize the network and enhance model accuracy without increasing the parameter count.
RESULTS: LT-DeepLab improves the mean Intersection over Union (mIoU) by 3.59%, the mean Pixel Accuracy (mPA) by 2.16%, and the Overall Accuracy (OA) by 0.94% compared to the baseline DeepLabV3+. It also reduces computational demands by 11.11% and decreases the parameter count by 16.82%.
DISCUSSION: These results indicate that LT-DeepLab demonstrates excellent disease segmentation capabilities in complex field environments while maintaining high computational efficiency, offering a promising solution for improving crop disease management efficiency.
PMID:39502917 | PMC:PMC11534726 | DOI:10.3389/fpls.2024.1423238
Recent technological advancements in Artificial Intelligence for orthopaedic wound management
J Clin Orthop Trauma. 2024 Oct 15;57:102561. doi: 10.1016/j.jcot.2024.102561. eCollection 2024 Oct.
ABSTRACT
In orthopaedics, wound care is crucial as surgical site infections carry disease burden due to increased length of stay, decreased quality of life and poorer patient outcomes. Artificial Intelligence (AI) has a vital role in revolutionising wound care in orthopaedics: ranging from wound assessment, early detection of complications, risk stratifying patients, and remote patient monitoring. Incorporating AI in orthopaedics has reduced dependency on manual physician assessment which is time-consuming. This article summarises current literature on how AI is used for wound assessment and management in the orthopaedic community.
PMID:39502891 | PMC:PMC11532955 | DOI:10.1016/j.jcot.2024.102561
Dynamic Glucose Enhanced Imaging using Direct Water Saturation
ArXiv [Preprint]. 2024 Oct 22:arXiv:2410.17119v1.
ABSTRACT
PURPOSE: Dynamic glucose enhanced (DGE) MRI studies employ chemical exchange saturation transfer (CEST) or spin lock (CESL) to study glucose uptake. Currently, these methods are hampered by low effect size and sensitivity to motion. To overcome this, we propose to utilize exchange-based linewidth (LW) broadening of the direct water saturation (DS) curve of the water saturation spectrum (Z-spectrum) during and after glucose infusion (DS-DGE MRI).
METHODS: To estimate the glucose-infusion-induced LW changes ($\Delta$LW), Bloch-McConnell simulations were performed for normoglycemia and hyperglycemia in blood, gray matter (GM), white matter (WM), CSF, and malignant tumor tissue. Whole-brain DS-DGE imaging was implemented at 3 tesla using dynamic Z-spectral acquisitions (1.2 s per offset frequency, 38 s per spectrum) and assessed on four brain tumor patients using infusion of 35 g of D-glucose. To assess $\Delta$LW, a deep learning-based Lorentzian fitting approach was employed on voxel-based DS spectra acquired before, during, and post-infusion. Area-under-the-curve (AUC) images, obtained from the dynamic $\Delta$LW time curves, were compared qualitatively to perfusion-weighted imaging (PWI).
RESULTS: In simulations, $\Delta$LW was 1.3%, 0.30%, 0.29/0.34%, 7.5%, and 13% in arterial blood, venous blood, GM/WM, malignant tumor tissue, and CSF, respectively. In vivo, $\Delta$LW was approximately 1% in GM/WM, 5-20% for different tumor types, and 40% in CSF. The resulting DS-DGE AUC maps clearly outlined lesion areas.
CONCLUSIONS: DS-DGE MRI is highly promising for assessing D-glucose uptake. Initial results in brain tumor patients show high-quality AUC maps of glucose-induced line broadening and DGE-based lesion enhancement similar and/or complementary to PWI.
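The study fits the DS curve with a deep-learning Lorentzian fitting approach. As an illustrative stand-in (not the paper's method), a direct least-squares fit of a single water Lorentzian to a Z-spectrum can recover the linewidth by grid search over the width with a closed-form amplitude; the offset grid, width grid, and dip amplitude below are assumed values:

```python
import numpy as np

def fit_lorentzian_lw(offsets, z, widths=np.linspace(0.2, 10, 491)):
    """Fit Z = 1 - A * L(w; G) for a single Lorentzian dip centered at
    the water frequency; returns the fitted full width G (the LW)."""
    w = np.asarray(offsets, float)
    dip = 1.0 - np.asarray(z, float)                  # saturation depth
    best_err, best_g = np.inf, None
    for g in widths:
        L = (g / 2) ** 2 / (w ** 2 + (g / 2) ** 2)    # unit Lorentzian
        a = (L @ dip) / (L @ L)                       # optimal amplitude
        err = ((dip - a * L) ** 2).sum()
        if err < best_err:
            best_err, best_g = err, g
    return best_g

offs = np.linspace(-8, 8, 161)                         # ppm-like offsets
z_curve = 1 - 0.85 * 1.0 / (offs ** 2 + 1.0)           # true G = 2.0
print(fit_lorentzian_lw(offs, z_curve))                # ~2.0
```

$\Delta$LW would then be the relative change in the fitted width between the pre-infusion and post-infusion spectra.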
PMID:39502884 | PMC:PMC11537340
Morphological analysis of Pd/C nanoparticles using SEM imaging and advanced deep learning
RSC Adv. 2024 Nov 5;14(47):35172-35183. doi: 10.1039/d4ra06113f. eCollection 2024 Oct 29.
ABSTRACT
In this study, we present a comprehensive approach for the morphological analysis of palladium on carbon (Pd/C) nanoparticles utilizing scanning electron microscopy (SEM) imaging and advanced deep learning techniques. A deep learning detection model based on an attention mechanism was implemented to accurately identify and delineate small nanoparticles within unlabeled SEM images. Following detection, a graph-based network was employed to analyze the structural characteristics of the nanoparticles, while density-based spatial clustering of applications with noise was utilized to cluster the detected nanoparticles, identifying meaningful patterns and distributions. Our results demonstrate the efficacy of the proposed model in detecting nanoparticles with high precision and reliability. Furthermore, the clustering analysis reveals significant insights into the morphological distribution and structural organization of Pd/C nanoparticles, contributing to the understanding of their properties and potential applications.
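The clustering step above uses density-based spatial clustering of applications with noise (DBSCAN). A minimal pure-NumPy sketch of the algorithm is shown below; real pipelines would typically use scikit-learn's implementation, and the `eps`/`min_pts` values here are arbitrary:

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: returns labels, where -1 marks noise."""
    pts = np.asarray(points, float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in dist]
    labels = np.full(n, -1)
    visited = np.zeros(n, bool)
    cid = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                        # not an unvisited core point
        stack, visited[i], labels[i] = [i], True, cid
        while stack:                        # grow the cluster from i
            j = stack.pop()
            if len(neighbors[j]) < min_pts:
                continue                    # border point: joins, no expansion
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cid
                if not visited[k]:
                    visited[k] = True
                    stack.append(k)
        cid += 1
    return labels

pts = [[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1],      # blob A
       [5, 5], [5.1, 5], [5, 5.1], [5.1, 5.1],      # blob B
       [10, 0]]                                      # isolated noise point
print(dbscan(pts, eps=0.5, min_pts=3))  # two clusters, last point = -1
```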
PMID:39502866 | PMC:PMC11536297 | DOI:10.1039/d4ra06113f
Graph neural networks are promising for phenotypic virtual screening on cancer cell lines
Biol Methods Protoc. 2024 Sep 3;9(1):bpae065. doi: 10.1093/biomethods/bpae065. eCollection 2024.
ABSTRACT
Artificial intelligence is increasingly driving early drug design, offering novel approaches to virtual screening. Phenotypic virtual screening (PVS) aims to predict how cancer cell lines respond to different compounds by focusing on observable characteristics rather than specific molecular targets. Some studies have suggested that deep learning may not be the best approach for PVS. However, these studies are limited by the small number of tested molecules, as well as by not employing suitable performance metrics or dissimilar-molecules splits, which better mimic the challenging chemical diversity of real-world screening libraries. Here we prepared 60 datasets, each containing approximately 30 000-50 000 molecules tested for their growth inhibitory activities on one of the NCI-60 cancer cell lines. We conducted multiple performance evaluations of each of the five machine learning algorithms for PVS on these 60 problem instances. To provide an even more comprehensive evaluation, we used two model validation types: the random split and the dissimilar-molecules split. Overall, about 14 440 training runs across datasets were carried out per algorithm. The models were primarily evaluated using hit rate, a more suitable metric in VS contexts. The results show that all models are more challenged by test molecules that are substantially different from those in the training data. In both validation types, the D-MPNN algorithm, a graph-based deep neural network, was found to be the most suitable for building predictive models for this PVS problem.
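The hit rate emphasized above is the fraction of true actives among the top-ranked molecules a screen would actually advance. A generic sketch (the cutoff `top_k` is an assumed parameter, not the study's setting):

```python
import numpy as np

def hit_rate(scores, is_active, top_k):
    """Fraction of true actives among the top_k highest-scoring molecules."""
    order = np.argsort(scores)[::-1][:top_k]   # rank by predicted score
    return np.asarray(is_active, float)[order].mean()

scores    = [0.9, 0.8, 0.7, 0.4, 0.2, 0.1]
is_active = [1,   0,   1,   1,   0,   0]
print(hit_rate(scores, is_active, top_k=3))  # 2 of top 3 are active -> 0.667
```

Unlike overall accuracy, this metric is insensitive to the large pool of correctly rejected inactives, which dominates screening libraries.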
PMID:39502795 | PMC:PMC11537795 | DOI:10.1093/biomethods/bpae065
M/EEG source localization for both subcortical and cortical sources using a convolutional neural network with a realistic head conductivity model
APL Bioeng. 2024 Oct 28;8(4):046104. doi: 10.1063/5.0226457. eCollection 2024 Dec.
ABSTRACT
While electroencephalography (EEG) and magnetoencephalography (MEG) are well-established noninvasive methods in neuroscience and clinical medicine, they suffer from low spatial resolution. Electrophysiological source imaging (ESI) addresses this by noninvasively exploring the neuronal origins of M/EEG signals. Although subcortical structures are crucial to many brain functions and neuronal diseases, accurately localizing subcortical sources of M/EEG remains particularly challenging, and the feasibility is still a subject of debate. Traditional ESIs, which depend on explicitly defined regularization priors, have struggled to set optimal priors and accurately localize brain sources. To overcome this, we introduced a data-driven, deep learning-based ESI approach without the need for these priors. We proposed a four-layered convolutional neural network (4LCNN) designed to locate both subcortical and cortical sources underlying M/EEG signals. We also employed a sophisticated realistic head conductivity model using the state-of-the-art segmentation method of ten different head tissues from individual MRI data to generate realistic training data. This is the first attempt at deep learning-based ESI targeting subcortical regions. Our method showed excellent accuracy in source localization, particularly in subcortical areas, compared to other methods. This was validated through M/EEG simulations, evoked responses, and invasive recordings. The accurate source localization demonstrated by the 4LCNN in this study suggests future contributions to various research endeavors such as clinical diagnosis, understanding of the pathophysiology of various neuronal diseases, and basic brain functions.
PMID:39502794 | PMC:PMC11537707 | DOI:10.1063/5.0226457