Deep learning

Deep learning reconstruction for coronary CT angiography in patients with origin anomaly, stent or bypass graft

Thu, 2024-07-18 06:00

Radiol Med. 2024 Jul 18. doi: 10.1007/s11547-024-01846-3. Online ahead of print.

ABSTRACT

PURPOSE: To develop and validate a deep learning (DL)-model for automatic reconstruction for coronary CT angiography (CCTA) in patients with origin anomaly, stent or bypass graft.

MATERIAL AND METHODS: In this retrospective study, a DL model for automatic CCTA reconstruction was developed with training and validation sets of 6063 and 1962 patients, respectively. The algorithm was evaluated on an independent external test set of 812 patients (357 with origin anomaly or revascularization, 455 without). The image quality of DL reconstruction and manual reconstruction (using dedicated cardiac reconstruction software provided by CT vendors) was compared using a 5-point scale. The successful reconstruction rates and post-processing times for the two methods were recorded.

RESULTS: In the external test set, 812 patients (mean age, 64.0 ± 11.6 years; 100 with origin anomalies, 152 with stents, 105 with bypass grafts) were evaluated. The success rates for automatic reconstruction were 100% (455/455), 97% (97/100), 100% (152/152), and 76.2% (80/105) in patients with native vessel, origin anomaly, stent, and bypass graft, respectively. The image quality scores were significantly higher for DL reconstruction than for the manual approach in all subgroups (4 vs. 3 for native vessel, 4 vs. 4 for origin anomaly, 4 vs. 3 for stent, and 4 vs. 3 for bypass graft; all p < 0.001). The overall post-processing time was markedly reduced for DL reconstruction compared with the manual method (11 s vs. 465 s, p < 0.001).

CONCLUSIONS: The developed DL model enabled accurate automatic CCTA reconstruction of bypass graft, stent and origin anomaly. It significantly reduced post-processing time and improved clinical workflow.

PMID:39023665 | DOI:10.1007/s11547-024-01846-3

Categories: Literature Watch

Evaluation of a deep image-to-image network (DI2IN) auto-segmentation algorithm across a network of cancer centers

Thu, 2024-07-18 06:00

J Cancer Res Ther. 2024 Apr 1;20(3):1020-1025. doi: 10.4103/jcrt.jcrt_769_23. Epub 2024 Jan 22.

ABSTRACT

PURPOSE/OBJECTIVE(S): Due to the challenges of manual organ-at-risk (OAR) contouring, various automatic contouring solutions have been introduced. Historically, common clinical auto-segmentation algorithms were atlas-based, which required maintaining a library of self-made contours. Searching the collection was computationally intensive and could take several minutes to complete. Deep learning approaches have shown significant benefits over atlas-based methods in segmentation accuracy and efficiency. This work represents the first multi-institutional study to describe and evaluate an AI algorithm for the auto-segmentation of OARs based on a deep image-to-image network (DI2IN).

MATERIALS/METHODS: The AI-Rad Companion Organs RT (AIRC) algorithm (Siemens Healthineers, Erlangen, Germany) uses a two-step approach for segmentation. In the first step, the target organ region in the optimal input image is extracted using a trained deep reinforcement learning network (DRL), which is then used as input to create the contours in the second step based on DI2IN. The study was initially designed as a prospective single-center evaluation. The automated contours generated by AIRC were evaluated by three experienced board-certified radiation oncologists using a four-point scale where 4 is clinically usable and 1 requires re-contouring. After seeing favorable results in a single-center pilot study, we decided to expand the study to six additional institutions, encompassing eight additional evaluators for a total of 11 physician evaluators across seven institutions.

RESULTS: One hundred and fifty-six patients and 1366 contours were prospectively evaluated. The five most commonly contoured organs were the lung (136 contours, average rating = 4.0), spinal cord (106 contours, average rating = 3.1), eye globe (80 contours, average rating = 3.9), lens (77 contours, average rating = 3.9), and optic nerve (75 contours, average rating = 4.0). The average rating per evaluator per contour was 3.6. On average, 124 contours were evaluated by each evaluator. 65% of the contours were rated as 4, and 31% were rated as 3. Only 4% of contours were rated as 1 or 2. Thirty-three organs were evaluated in the study, with 19 structures having a 3.5 or above average rating (ribs, abdominopelvic cavity, skeleton, larynx, lung, aorta, brachial plexus, lens, eye globe, glottis, heart, parotid glands, bladder, kidneys, supraglottic larynx, submandibular glands, esophagus, optic nerve, oral cavity) and the remaining organs having a rating of 3.0 or greater (female breast, proximal femur, seminal vesicles, rectum, sternum, brainstem, prostate, brain, lips, mandible, liver, optic chiasm, spinal cord, spleen). No organ had an average rating below 3.

CONCLUSION: AIRC performed well with greater than 95% of contours accepted by treating physicians with no or minor edits. It supported a fully automated workflow with the potential for time savings and increased standardization with the use of AI-powered algorithms for high-quality OAR contouring.

PMID:39023610 | DOI:10.4103/jcrt.jcrt_769_23

Categories: Literature Watch

AI-Based Strain Estimation in Echocardiography Using Open and Collaborative Data: The More Experts the Better?

Thu, 2024-07-18 06:00

JACC Cardiovasc Imaging. 2024 Jul 3:S1936-878X(24)00232-8. doi: 10.1016/j.jcmg.2024.05.020. Online ahead of print.

NO ABSTRACT

PMID:39023498 | DOI:10.1016/j.jcmg.2024.05.020

Categories: Literature Watch

Multi-Plexus Nonperfusion Area Segmentation in Widefield OCT Angiography Using a Deep Convolutional Neural Network

Thu, 2024-07-18 06:00

Transl Vis Sci Technol. 2024 Jul 1;13(7):15. doi: 10.1167/tvst.13.7.15.

ABSTRACT

PURPOSE: To train and validate a convolutional neural network to segment nonperfusion areas (NPAs) in multiple retinal vascular plexuses on widefield optical coherence tomography angiography (OCTA).

METHODS: This cross-sectional study included 202 participants with a full range of diabetic retinopathy (DR) severities (diabetes mellitus without retinopathy, mild to moderate non-proliferative DR, severe non-proliferative DR, and proliferative DR) and 39 healthy participants. Consecutive 6 × 6-mm OCTA scans at the central macula, optic disc, and temporal region in one eye from 202 participants in a clinical DR study were acquired with a 70-kHz OCT commercial system (RTVue-XR). Widefield OCTA en face images were generated by montaging the scans from these three regions. A projection-resolved OCTA algorithm was applied to remove projection artifacts at the voxel scale. A deep convolutional neural network with a parallel U-Net module was designed to detect NPAs and distinguish signal reduction artifacts from flow deficits in the superficial vascular complex (SVC), intermediate capillary plexus (ICP), and deep capillary plexus (DCP). Expert graders manually labeled NPAs and signal reduction artifacts for the ground truth. Sixfold cross-validation was used to evaluate the proposed algorithm on the entire dataset.

RESULTS: The proposed algorithm showed high agreement with the manually delineated ground truth for NPA detection in three retinal vascular plexuses on widefield OCTA (mean ± SD F-score: SVC, 0.84 ± 0.05; ICP, 0.87 ± 0.04; DCP, 0.83 ± 0.07). The extrafoveal avascular area in the DCP showed the best sensitivity for differentiating eyes with diabetes but no retinopathy (77%) from healthy controls and for differentiating DR by severity: DR versus no DR, 77%; referable DR (rDR) versus non-referable DR (nrDR), 79%; vision-threatening DR (vtDR) versus non-vision-threatening DR (nvtDR), 60%. The DCP also showed the best area under the receiver operating characteristic curve for distinguishing diabetes from healthy controls (96%), DR versus no DR (95%), and rDR versus nrDR (96%). The three-plexus-combined OCTA achieved the best result in differentiating vtDR and nvtDR (81.0%).
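The F-scores reported above are the standard overlap metric (equivalent to the Dice coefficient) for comparing a predicted binary nonperfusion mask against the manually labeled ground truth. As a generic illustration only (not the authors' code), the metric can be computed as:

```python
import numpy as np

def f_score(pred, truth):
    """F-score (Dice coefficient) between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()   # pixels flagged by both
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    return 2 * tp / (2 * tp + fp + fn)
```

A value of 1.0 indicates perfect overlap; the mean scores of 0.83-0.87 above indicate strong agreement at the plexus level.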

CONCLUSIONS: A deep learning network can accurately segment NPAs in individual retinal vascular plexuses and improve DR diagnostic accuracy.

TRANSLATIONAL RELEVANCE: Using a deep learning method to segment nonperfusion areas in widefield OCTA can potentially improve the diagnostic accuracy of diabetic retinopathy by OCT/OCTA systems.

PMID:39023443 | DOI:10.1167/tvst.13.7.15

Categories: Literature Watch

Pre-processing visual scenes for retinal prosthesis systems: A comprehensive review

Thu, 2024-07-18 06:00

Artif Organs. 2024 Jul 18. doi: 10.1111/aor.14824. Online ahead of print.

ABSTRACT

BACKGROUND: Retinal prostheses offer hope for individuals with degenerative retinal diseases by stimulating the remaining retinal cells to partially restore their vision. This review delves into the current advancements in retinal prosthesis technology, with a special emphasis on the pivotal role that image processing and machine learning techniques play in this evolution.

METHODS: We provide a comprehensive analysis of the existing implantable devices and optogenetic strategies, delineating their advantages, limitations, and challenges in addressing complex visual tasks. The review extends to various image processing algorithms and deep learning architectures that have been implemented to enhance the functionality of retinal prosthetic devices. We also present testing results drawn from clinical trials or from simulated prosthetic vision (SPV) based on phosphene simulations, a critical aspect of modeling visual perception for retinal prosthesis users.

RESULTS: Our review highlights the significant progress in retinal prosthesis technology, particularly its capacity to augment visual perception among the visually impaired. We discuss the integration of image processing and deep learning and their impact, as reported in clinical trials, on users' interaction with and navigation of the environment. We also note the limitations of applying some techniques to current devices: some approaches have been evaluated only in simulation, even on normally sighted individuals, or rely on qualitative analysis, and only some consider realistic phosphene perception models.

CONCLUSION: This interdisciplinary field holds promise for the future of retinal prostheses, with the potential to significantly enhance the quality of life for individuals with retinal prostheses. Future research directions should pivot towards optimizing phosphene simulations for SPV approaches, considering the distorted and confusing nature of phosphene perception, thereby enriching the visual perception provided by these prosthetic devices. This endeavor will not only improve navigational independence but also facilitate a more immersive interaction with the environment.

PMID:39023279 | DOI:10.1111/aor.14824

Categories: Literature Watch

Innovative approaches for coronary heart disease management: integrating biomedical sensors, deep learning, and stellate ganglion modulation

Thu, 2024-07-18 06:00

Comput Methods Biomech Biomed Engin. 2024 Jul 18:1-18. doi: 10.1080/10255842.2024.2378099. Online ahead of print.

ABSTRACT

Coronary heart disease (CHD) is a significant global health concern, necessitating continuous advancements in treatment modalities to improve patient outcomes. Traditional Chinese medicine (TCM) offers alternative therapeutic approaches, but integration with modern biomedical technologies remains relatively unexplored. This study aimed to assess the efficacy of a combined treatment approach for CHD, integrating traditional Chinese medicinal interventions with modern biomedical sensors and stellate ganglion modulation. The objective was to evaluate the impact of this combined treatment on symptom relief, clinical outcomes, hemorheological indicators, and inflammatory biomarkers. A randomized controlled trial was conducted on 117 CHD patients with phlegm-turbidity congestion and excessiveness type. Patients were divided into a combined treatment group (CTG) and a traditional Chinese medicinal group (CMG). The CTG group received a combination of herbal decoctions, thread-embedding therapy, and stellate ganglion modulation, while the CMG group only received traditional herbal decoctions. The CTG demonstrated superior outcomes compared to the CMG across multiple parameters. Significant reductions in TCM symptom scores, improved clinical effects, reduced angina manifestation, favorable changes in hemorheological indicators, and decreased serum inflammatory biomarkers were observed in the CTG post-intervention. The combination of traditional Chinese medicinal interventions with modern biomedical sensors and stellate ganglion modulation has shown promising results in improving symptoms, clinical outcomes, and inflammatory markers in CHD patients. This holistic approach enhances treatment efficacy and patient outcomes. Further research and advancements in sensor technology are needed to optimize this approach.

PMID:39023137 | DOI:10.1080/10255842.2024.2378099

Categories: Literature Watch

Causality-inspired crop pest recognition based on Decoupled Feature Learning

Thu, 2024-07-18 06:00

Pest Manag Sci. 2024 Jul 18. doi: 10.1002/ps.8314. Online ahead of print.

ABSTRACT

BACKGROUND: Ensuring the efficient recognition and management of crop pests is crucial for maintaining the balance in global agricultural ecosystems and ecological harmony. Deep learning-based methods have shown promise in crop pest recognition. However, prevailing methods often fail to address a critical issue: biased pest training dataset distribution stemming from the tendency to collect images primarily in certain environmental contexts, such as paddy fields. This oversight hampers recognition accuracy when encountering pest images dissimilar to training samples, highlighting the need for a novel approach to overcome this limitation.

RESULTS: We introduce the Decoupled Feature Learning (DFL) framework, leveraging causal inference techniques to handle training dataset bias. DFL manipulates the training data based on classification confidence to construct different training domains and employs center triplet loss for learning class-core features. The proposed DFL framework significantly boosts existing baseline models, attaining unprecedented recognition accuracies of 95.33%, 92.59%, and 74.86% on the Li, DFSPD, and IP102 datasets, respectively.
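Center triplet loss comes in several formulations; one common variant pulls each embedding toward its own class center while pushing it away from the nearest other-class center by a margin. The sketch below is an assumed generic form, not the paper's implementation:

```python
import numpy as np

def center_triplet_loss(embeddings, labels, centers, margin=1.0):
    """Generic center triplet loss: for each embedding, penalize cases where
    the distance to its own class center is not smaller (by at least the
    margin) than the distance to the nearest rival class center."""
    loss = 0.0
    for x, y in zip(embeddings, labels):
        d_pos = np.sum((x - centers[y]) ** 2)            # own-center distance
        d_neg = min(np.sum((x - centers[j]) ** 2)        # nearest rival center
                    for j in range(len(centers)) if j != y)
        loss += max(0.0, d_pos - d_neg + margin)
    return loss / len(embeddings)
```

In training, such a loss is typically added to the classification loss so that embeddings cluster tightly around class-core features, which is the behavior the DFL framework aims for.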

CONCLUSION: Extensive testing on three pest datasets using standard baseline models demonstrates the superiority of DFL in pest recognition. The visualization results show that DFL encourages the baseline models to capture the class-core features. The proposed DFL marks a pivotal step in mitigating the issue of data distribution bias, enhancing the reliability of deep learning in agriculture. © 2024 Society of Chemical Industry.

PMID:39022822 | DOI:10.1002/ps.8314

Categories: Literature Watch

An innovative approach to detecting the freshness of fruits and vegetables through the integration of convolutional neural networks and bidirectional long short-term memory network

Thu, 2024-07-18 06:00

Curr Res Food Sci. 2024 Mar 25;8:100723. doi: 10.1016/j.crfs.2024.100723. eCollection 2024.

ABSTRACT

Fruit and vegetable freshness testing can improve the efficiency of agricultural product management, reduce resource waste and economic losses, and plays a vital role in increasing the added value of fruit and vegetable agricultural products. At present, the detection of fruit and vegetable freshness mainly relies on manual feature extraction combined with machine learning. However, manually extracted features adapt poorly, resulting in low efficiency in freshness detection. Some studies have introduced deep learning methods to automatically learn deep features that characterize the freshness of fruits and vegetables and thereby cope with the diversity and variability of complex scenes, but their detection performance still needs improvement. Accordingly, this paper proposes a novel method that fuses different deep learning models to extract both the features of fruit and vegetable images and the correlations between areas within an image, so as to detect freshness more objectively and accurately. First, the images in the dataset are resized to meet the input requirements of the deep learning model. Then, deep features characterizing freshness are extracted by the fused model. Finally, the parameters of the fusion model are optimized based on its detection performance, and the performance of freshness detection is evaluated. Experimental results show that the CNN_BiLSTM model, which fuses a convolutional neural network (CNN) with a bidirectional long short-term memory network (BiLSTM), combined with parameter optimization, achieves an accuracy of 97.76% in detecting the freshness of fruits and vegetables. These results suggest that the method is promising for improving the performance of fruit and vegetable freshness detection.

PMID:39022740 | PMC:PMC11252168 | DOI:10.1016/j.crfs.2024.100723

Categories: Literature Watch

Evaluating a deep learning AI algorithm for detecting residual prostate cancer on MRI after focal therapy

Thu, 2024-07-18 06:00

BJUI Compass. 2024 May 12;5(7):665-667. doi: 10.1002/bco2.373. eCollection 2024 Jul.

NO ABSTRACT

PMID:39022660 | PMC:PMC11250150 | DOI:10.1002/bco2.373

Categories: Literature Watch

Insights about cervical lymph nodes: Evaluating deep learning-based reconstruction for head and neck computed tomography scan

Thu, 2024-07-18 06:00

Eur J Radiol Open. 2023 Oct 28;12:100534. doi: 10.1016/j.ejro.2023.100534. eCollection 2024 Jun.

ABSTRACT

PURPOSE: This study aimed to investigate differences in cervical lymph node image quality on dual-energy computed tomography (CT) scan with datasets reconstructed using filter back projection (FBP), hybrid iterative reconstruction (IR), and deep learning-based image reconstruction (DLIR) in patients with head and neck cancer.

METHOD: Seventy patients with head and neck cancer underwent follow-up contrast-enhanced dual-energy CT examinations. All datasets were reconstructed using FBP, hybrid IR with 30 % adaptive statistical IR (ASiR-V), and DLIR with three selectable levels (low, medium, and high) at 2.5- and 0.625-mm slice thicknesses. Herein, signal, image noise, signal-to-noise ratio, and contrast-to-noise ratio of lymph nodes and overall image quality, artifact, and noise of selected regions of interest were evaluated by two radiologists. Next, cervical lymph node sharpness was evaluated using full width at half maximum.
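The signal-to-noise and contrast-to-noise ratios measured here are typically computed from region-of-interest (ROI) statistics. The abstract does not give its exact formulas, so the following is a minimal sketch of the common definitions, with the background standard deviation taken as the noise estimate:

```python
import numpy as np

def snr_cnr(roi, background):
    """Common ROI-based definitions: SNR = mean(ROI) / noise,
    CNR = |mean(ROI) - mean(background)| / noise,
    where noise is the standard deviation of the background ROI."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    noise = np.std(background)
    snr = np.mean(roi) / noise
    cnr = abs(np.mean(roi) - np.mean(background)) / noise
    return snr, cnr
```

Under these definitions, a reconstruction that lowers background noise (as DLIR does here) raises both ratios even when the underlying signal is unchanged.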

RESULTS: DLIR exhibited significantly reduced noise, ranging from 3.8 % to 35.9 %, with improved signal-to-noise ratio (11.5-105.6 %) and contrast-to-noise ratio (10.5-107.5 %) compared with FBP and ASiR-V for cervical lymph nodes (p < 0.001). Further, 0.625-mm-thick images reconstructed using DLIR-medium and DLIR-high had lower noise than 2.5-mm-thick images reconstructed using FBP and ASiR-V. The lymph node margins and vessels on DLIR-medium and DLIR-high were sharper than those on FBP and ASiR-V (p < 0.05). Both readers agreed that DLIR offered better image quality than the conventional reconstruction algorithms.

CONCLUSION: DLIR-medium and -high provided superior cervical lymph node image quality in head and neck CT. Improved image quality affords thin-slice DLIR images for dose-reduction protocols in the future.

PMID:39022614 | PMC:PMC467078 | DOI:10.1016/j.ejro.2023.100534

Categories: Literature Watch

Simultaneous removal of noise and correction of motion warping in neuron calcium imaging using a pipeline structure of self-supervised deep learning models

Thu, 2024-07-18 06:00

Biomed Opt Express. 2024 Jun 17;15(7):4300-4317. doi: 10.1364/BOE.527919. eCollection 2024 Jul 1.

ABSTRACT

Calcium imaging is susceptible to motion distortions and background noise, particularly when monitoring active animals under low-dose laser irradiation, and this unavoidably hinders the critical analysis of neural functions. Current research efforts tend to focus on either denoising or dewarping and do not provide effective methods for videos distorted by both noise and motion artifacts simultaneously. We found that when the self-supervised denoising model DeepCAD [Nat. Methods 18, 1359 (2021); doi: 10.1038/s41592-021-01225-0] is applied to calcium imaging contaminated by noise and motion warping, it removes the motion artifacts effectively but regenerates noise. To address this issue, we develop a two-level deep-learning (DL) pipeline that dewarps and denoises the calcium imaging video sequentially. The pipeline consists of two 3D self-supervised DL models that do not require warp-free, high signal-to-noise ratio (SNR) observations for network optimization. Specifically, a high-frequency enhancement block in the denoising network restores more structural information during denoising, and a hierarchical perception module and a multi-scale attention module in the dewarping network tackle distortions of various sizes. Experiments conducted on seven videos from two-photon and confocal imaging systems demonstrate that our two-level DL pipeline can restore high-clarity neuron images distorted by both motion warping and background noise. Compared to the original DeepCAD, our denoising model achieves an improvement of approximately 30% in image resolution and up to 28% in signal-to-noise ratio; compared to traditional dewarping and denoising methods, our pipeline recovers more neurons, enhances signal fidelity, and improves data correlation among frames by 35% and 60%, respectively. This work may provide an attractive method for long-term monitoring of neural activity in awake animals and may also facilitate functional analysis of neural circuits.

PMID:39022541 | PMC:PMC11249678 | DOI:10.1364/BOE.527919

Categories: Literature Watch

ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images

Thu, 2024-07-18 06:00

Shape Med Imaging (2023). 2023 Oct;14350:90-104. doi: 10.1007/978-3-031-46914-5_8. Epub 2023 Oct 31.

ABSTRACT

Statistical shape models (SSM) have been well established as an excellent tool for identifying variations in the morphology of anatomy across an underlying population. Shape models use a consistent shape representation across all samples in a given cohort, which helps to compare shapes and identify variations that can detect pathologies and inform treatment plans. In medical imaging, computing these shape representations from CT/MRI scans requires time-intensive preprocessing operations, including but not limited to anatomy segmentation annotations, registration, and texture denoising. Deep learning models have demonstrated exceptional capabilities in learning shape representations directly from volumetric images, giving rise to highly effective and efficient Image-to-SSM networks. Nevertheless, these models are data-hungry, and given the limited availability of medical data they tend to overfit. Offline data augmentation techniques that use kernel density estimation (KDE)-based methods for generating shape-augmented samples have successfully helped Image-to-SSM networks achieve accuracy comparable to traditional SSM methods. However, these augmentation methods focus on shape augmentation, whereas deep learning models exhibit image-based texture bias, resulting in sub-optimal models. This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation, or texture augmentation. The proposed framework is trained as an adversary to the Image-to-SSM network, generating diverse and challenging noisy samples. Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
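KDE-based shape augmentation, as referenced above, amounts to sampling new shapes from a kernel density fitted to the training shapes. A hedged numpy sketch with a Gaussian kernel follows; the bandwidth and kernel choice are illustrative assumptions, not the cited methods' settings:

```python
import numpy as np

def kde_augment(shapes, n_samples, bandwidth=0.1, rng=None):
    """Sample from a Gaussian KDE fitted to the training shapes:
    pick a training shape uniformly at random, then perturb it with
    isotropic Gaussian noise of scale `bandwidth`."""
    shapes = np.asarray(shapes, dtype=float)
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(shapes), size=n_samples)
    noise = rng.normal(scale=bandwidth, size=(n_samples,) + shapes.shape[1:])
    return shapes[idx] + noise
```

Such samples vary only in shape; the paper's point is that the texture of the input images is left untouched, which is the gap its adversarial, image-space augmentation targets.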

PMID:39022299 | PMC:PMC11251192 | DOI:10.1007/978-3-031-46914-5_8

Categories: Literature Watch

Accelerated cardiac cine magnetic resonance imaging using deep low-rank plus sparse network: validation in patients

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):5131-5143. doi: 10.21037/qims-24-17. Epub 2024 Jun 27.

ABSTRACT

BACKGROUND: Accurate and reproducible assessment of left ventricular (LV) volumes is important in managing various cardiac conditions. However, patients are required to hold their breath multiple times during data acquisition, which may result in discomfort and restrict cardiac motion, potentially compromising the accuracy of the detected results. Accelerated imaging techniques can help reduce the number of breath holds needed, potentially improving patient comfort and the reliability of the LV assessment. This study aimed to prospectively evaluate the feasibility and accuracy of LV assessment with a model-based low-rank plus sparse network (L+S-Net) for accelerated magnetic resonance (MR) cine imaging.

METHODS: Forty-one patients with different cardiac conditions were recruited for this study. Both accelerated MR cine imaging with L+S-Net and traditional electrocardiogram (ECG)-gated segmented cine were performed for each patient. Subjective image quality (IQ) scores and quantitative LV volume function parameters were measured and compared between L+S-Net and the traditional standard. The IQ scores and LV volume measurements of cardiovascular magnetic resonance (CMR) images reconstructed by L+S-Net and standard cine were compared by paired t-test. The acquisition times of the two methods were also calculated.

RESULTS: In a quantitative analysis, L+S-Net and standard cine yielded similar measurements for all parameters of LV function (ejection fraction: 35±22 for standard vs. 33±23 for L+S-Net), although L+S-Net had slightly lower IQ scores than standard cine CMR (4.2±0.5 for L+S-Net vs. 4.8±0.4 for standard cine; P<0.001). The mean acquisition time of L+S-Net and standard cine was 0.83±0.08 vs. 6.35±0.78 s per slice (P<0.001).

CONCLUSIONS: Assessment of LV function with L+S-Net at 3.0 T yields comparable results to the reference standard, albeit with a reduced acquisition time. This feature enhances the clinical applicability of the L+S-Net approach, helping alleviate patient discomfort and motion artifacts that may arise due to prolonged acquisition time.

PMID:39022294 | PMC:PMC11250298 | DOI:10.21037/qims-24-17

Categories: Literature Watch

Deep learning methods for diagnosis of Graves' ophthalmopathy using magnetic resonance imaging

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):5099-5108. doi: 10.21037/qims-24-80. Epub 2024 Jun 11.

ABSTRACT

BACKGROUND: The effect of diagnosing Graves' ophthalmopathy (GO) through traditional measurement and observation in medical imaging is not ideal. This study aimed to develop and validate deep learning (DL) models that could be applied to the diagnosis of GO based on magnetic resonance imaging (MRI) and compare them to traditional measurement and judgment of radiologists.

METHODS: A total of 199 clinically verified consecutive GO patients and 145 normal controls undergoing MRI were retrospectively recruited, of whom 240 were randomly assigned to the training group and 104 to the validation group. Areas of superior, inferior, medial, and lateral rectus muscles and all rectus muscles on coronal planes were calculated respectively. Logistic regression models based on areas of extraocular muscles were built to diagnose GO. The DL models named ResNet101 and Swin Transformer with T1-weighted MRI without contrast as input were used to diagnose GO and the results were compared to the radiologist's diagnosis only relying on MRI T1-weighted scans.

RESULTS: The area on the coronal plane of each muscle in the GO group was significantly greater than that in the normal group. In the validation group, the areas under the curve (AUCs) of the logistic regression models based on the superior, inferior, medial, and lateral rectus muscles and all muscles were 0.897 [95% confidence interval (CI): 0.833-0.949], 0.705 (95% CI: 0.598-0.804), 0.799 (95% CI: 0.712-0.876), 0.681 (95% CI: 0.567-0.776), and 0.905 (95% CI: 0.843-0.955), respectively. ResNet101 and Swin Transformer achieved AUCs of 0.986 (95% CI: 0.977-0.994) and 0.936 (95% CI: 0.912-0.957), respectively. The accuracy, sensitivity, and specificity of ResNet101 were 0.933, 0.979, and 0.869, respectively; those of Swin Transformer were 0.851, 0.817, and 0.898. The ResNet101 model yielded a higher AUC than the all-muscle model and the radiologists (0.986 vs. 0.905 and 0.818; P<0.001).
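AUCs like those reported above can be estimated nonparametrically via the Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. This is a generic sketch, not the study's code:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the Mann-Whitney U statistic normalized by the number of
    positive/negative pairs; ties between scores count as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, so ResNet101's 0.986 indicates near-perfect separation of GO from normal controls on this validation set.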

CONCLUSIONS: The DL models based on MRI T1-weighted scans could accurately diagnose GO, and applying DL systems to MRI may improve radiologists' performance in diagnosing GO and detecting it early.

PMID:39022293 | PMC:PMC11250345 | DOI:10.21037/qims-24-80

Categories: Literature Watch

Research on ultrasound-based radiomics: a bibliometric analysis

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):4520-4539. doi: 10.21037/qims-23-1867. Epub 2024 Jun 18.

ABSTRACT

BACKGROUND: A large number of studies related to ultrasound-based radiomics have been published in recent years; however, a systematic bibliometric analysis of this topic has not yet been conducted. In this study, we attempted to identify the hotspots and frontiers in ultrasound-based radiomics through bibliometrics and to systematically characterize the overall framework and characteristics of studies through mapping and visualization.

METHODS: A literature search was carried out in Web of Science Core Collection (WoSCC) database from January 2016 to December 2023 according to a predetermined search formula. Bibliometric analysis and visualization of the results were performed using CiteSpace, VOSviewer, R, and other platforms.

RESULTS: Ultimately, 466 eligible papers were included in the study. Publication trend analysis showed that annual publication activity in ultrasound-based radiomics could be divided into three phases: no more than five documents were published in this field in any year before 2018, a small yearly increase in annual publications occurred between 2018 and 2022, and a high, stable number of publications appeared after 2022. In the analysis of publication sources, China was found to be the main contributor, with a much higher number of publications than other countries, followed by the United States and Italy. Frontiers in Oncology was the journal with the highest number of papers in this field, publishing 60 articles. Among the academic institutions, Fudan University, Sun Yat-sen University, and the Chinese Academy of Sciences ranked as the top three in terms of the number of documents. In the analysis of authors and cocited authors, the author with the most publications was Yuanyuan Wang, who has published 19 articles in 8 years, while Philippe Lambin was the most cited author, with 233 citations. Visualization of the results from the cocitation analysis of the literature revealed a strong centrality of the subject terms papillary thyroid cancer, biological behavior, potential biomarkers, and comparative assessment, which may be the main focal points of research in this subject. Based on the findings of the keyword analysis and cluster analysis, the keywords can be categorized into two major groups: (I) technological innovations that enable the construction of radiomics models, such as machine learning and deep learning, and (II) applications of predictive models to support clinical decision-making in certain diseases, such as papillary thyroid cancer, hepatocellular carcinoma (HCC), and breast cancer.

CONCLUSIONS: Ultrasound-based radiomics has received widespread attention in the medical field and has gradually been applied in clinical research. Radiomics, a relatively late development in medical technology, has made substantial contributions to the diagnosis, prediction, and prognostic evaluation of diseases. Additionally, the coupling of artificial intelligence techniques with ultrasound imaging has yielded a number of promising tools that facilitate clinical decision-making and enable the practice of precision medicine. Finally, the development of ultrasound-based radiomics requires multidisciplinary cooperation and joint efforts from the fields of biomedicine, information technology, statistics, and clinical medicine.

PMID:39022291 | PMC:PMC11250334 | DOI:10.21037/qims-23-1867

Categories: Literature Watch

Prediction of metastases in confusing mediastinal lymph nodes based on fluorine-18 fluorodeoxyglucose (<sup>18</sup>F-FDG) positron emission tomography/computed tomography (PET/CT) imaging using machine learning

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):4723-4734. doi: 10.21037/qims-24-100. Epub 2024 Jun 17.

ABSTRACT

BACKGROUND: For patient management and prognosis, accurate assessment of mediastinal lymph node (LN) status is essential. This study aimed to use machine learning approaches to assess the status of confusing LNs in the mediastinum using positron emission tomography/computed tomography (PET/CT) images; the results were then compared with the diagnostic conclusions of nuclear medicine physicians.

METHODS: A total of 509 confusing mediastinal LNs that had undergone pathological assessment or follow-up from 320 patients from three centres were retrospectively included in the study. LNs from centres I and II were randomised into a training cohort (N=324) and an internal validation cohort (N=81), while those from centre III patients formed an external validation cohort (N=104). Various parameters measured from PET and CT images and extracted radiomics and deep learning features were used to construct PET/CT-parameter, radiomics, and deep learning models, respectively. Model performance was compared with the diagnostic results of nuclear medicine physicians using the area under the curve (AUC), sensitivity, specificity, and decision curve analysis (DCA).

RESULTS: The coupled model of gradient boosting decision tree-logistic regression (GBDT-LR) incorporating radiomic features showed AUCs of 0.922 [95% confidence interval (CI), 0.890-0.953], 0.846 (95% CI, 0.761-0.930), and 0.846 (95% CI, 0.770-0.922) across the three cohorts. It significantly outperformed the deep learning model, the PET/CT-parameter model, and the physicians' diagnoses. DCA demonstrated the clinical usefulness of the GBDT-LR model.

CONCLUSIONS: The presented GBDT-LR model performed well in evaluating confusing mediastinal LNs in both the internal and external validation sets. It not only generated crossed radiomic features but also avoided overfitting.
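The abstract does not spell out how the GBDT and LR stages are coupled; the usual construction behind the name GBDT-LR (and the one consistent with "crossed features") is leaf encoding: each sample is represented by the one-hot-encoded indices of the leaves it reaches in the boosted trees, and that sparse vector is fed to the logistic regression. The sketch below shows only that encoding step under this assumption; `leaf_one_hot` and its arguments are illustrative, not from the paper, and tree training is omitted.

```python
def leaf_one_hot(leaf_indices, n_leaves_per_tree):
    """Turn per-tree leaf indices for one sample into a flat one-hot vector.

    leaf_indices      -- leaf_indices[t] is the leaf this sample reaches in tree t
    n_leaves_per_tree -- number of leaves in each tree (assumed equal here)
    """
    vec = [0] * (len(leaf_indices) * n_leaves_per_tree)
    for tree, leaf in enumerate(leaf_indices):
        # Each tree contributes one "hot" position; a pair of trees thus
        # encodes an implicit feature cross of the paths taken.
        vec[tree * n_leaves_per_tree + leaf] = 1
    return vec

# A sample that lands in leaf 2 of tree 0 and leaf 0 of tree 1,
# with 4 leaves per tree:
features = leaf_one_hot([2, 0], 4)  # -> [0, 0, 1, 0, 1, 0, 0, 0]
```

The logistic regression then operates on these binary vectors, which is why the coupled model can capture nonlinear feature interactions while the final classifier stays linear and regularizable.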

PMID:39022286 | PMC:PMC11250303 | DOI:10.21037/qims-24-100

Categories: Literature Watch

Evaluation of preoperative difficult airway prediction methods for adult patients without obvious airway abnormalities: a systematic review and meta-analysis

Wed, 2024-07-17 06:00

BMC Anesthesiol. 2024 Jul 17;24(1):242. doi: 10.1186/s12871-024-02627-1.

ABSTRACT

BACKGROUND: This systematic review aims to assist clinical decision-making in selecting appropriate preoperative prediction methods for difficult tracheal intubation by identifying and synthesizing literature on these methods in adult patients undergoing all types of surgery.

METHODS: A systematic review and meta-analysis were conducted following PRISMA guidelines. Comprehensive electronic searches across multiple databases were completed on March 28, 2023. Two researchers independently screened and selected studies and extracted data. A total of 227 articles representing 526 studies were included and evaluated for bias using the QUADAS-2 tool. Meta-Disc software computed pooled sensitivity (SEN), specificity (SPC), positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR). Heterogeneity was assessed using the Spearman correlation coefficient, Cochran's Q, and the I2 index, with meta-regression exploring sources of heterogeneity. Publication bias was evaluated using Deeks' funnel plot.

RESULTS: Out of 2906 articles retrieved, 227 met the inclusion criteria, encompassing a total of 686,089 patients. The review examined 11 methods for predicting difficult tracheal intubation, categorized into physical examination, multivariate scoring system, and imaging test. The modified Mallampati test (MMT) showed a SEN of 0.39 and SPC of 0.86, while the thyromental distance (TMD) had a SEN of 0.38 and SPC of 0.83. The upper lip bite test (ULBT) presented a SEN of 0.52 and SPC of 0.84. Multivariate scoring systems like LEMON and Wilson's risk score demonstrated moderate sensitivity and specificity. Imaging tests, particularly ultrasound-based methods such as the distance from the skin to the epiglottis (US-DSE), exhibited higher sensitivity (0.80) and specificity (0.77). Significant heterogeneity was identified across studies, influenced by factors such as sample size and study design.

CONCLUSION: No single preoperative prediction method shows clear superiority for predicting difficult tracheal intubation. The evidence supports a combined approach using multiple methods tailored to specific patient demographics and clinical contexts. Future research should focus on integrating advanced technologies like artificial intelligence and deep learning to improve predictive models. Standardizing testing procedures and establishing clear cut-off values are essential for enhancing prediction reliability and accuracy. Implementing a multi-modal predictive approach may reduce unanticipated difficult intubations, improving patient safety and outcomes.

PMID:39020308 | DOI:10.1186/s12871-024-02627-1

Categories: Literature Watch

Fully and Weakly Supervised Deep Learning for Meniscal Injury Classification, and Location Based on MRI

Wed, 2024-07-17 06:00

J Imaging Inform Med. 2024 Jul 17. doi: 10.1007/s10278-024-01198-4. Online ahead of print.

ABSTRACT

Meniscal injury is a common cause of knee joint pain and a precursor to knee osteoarthritis (KOA). The purpose of this study is to develop an automatic pipeline for meniscal injury classification and localization using fully and weakly supervised networks based on MRI images. In this retrospective study, data were from the Osteoarthritis Initiative (OAI). The MR images were reconstructed using a sagittal intermediate-weighted fat-suppressed turbo spin-echo sequence. (1) We used 130 knees from the OAI to develop the LGSA-UNet model, which fuses the features of adjacent slices and adjusts the Siamese blocks to enable the central slice to obtain rich contextual information. (2) One thousand seven hundred and fifty-six knees from the OAI were included to establish segmentation and classification models. The segmentation model achieved a Dice coefficient ranging from 0.84 to 0.93. The AUC values ranged from 0.85 to 0.95 in the binary models. The accuracy for the three types of menisci (normal, tear, and maceration) ranged from 0.60 to 0.88. Furthermore, 206 knees from the orthopedic hospital were used as an external validation data set to evaluate the performance of the model. The segmentation and classification models still performed well on the external validation set. To compare the diagnostic performances between the deep learning (DL) models and radiologists, the external validation sets were sent to two radiologists. The binary classification model outperformed the diagnostic performance of the junior radiologist (0.82-0.87 versus 0.74-0.88). This study highlights the potential of DL in knee meniscus segmentation and injury classification, which can help improve diagnostic efficiency.
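The Dice coefficient used to score the segmentation model (0.84-0.93 here) is twice the overlap of predicted and ground-truth masks divided by their total size. A minimal illustration on flat binary masks; the same formula applies voxel-wise to full MRI volumes, and the toy masks below are invented for the example.

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 sequences."""
    inter = sum(p * t for p, t in zip(pred, truth))   # overlapping foreground
    total = sum(pred) + sum(truth)                    # combined foreground size
    # Convention: two empty masks count as a perfect match.
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]   # model's foreground voxels
truth = [1, 0, 0, 1, 1, 0]   # ground-truth foreground voxels
score = dice(pred, truth)    # 2*2 / (3+3) = 0.666...
```

Unlike plain accuracy, Dice ignores the (typically huge) true-negative background, which is why it is the standard overlap metric for small structures like the menisci.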

PMID:39020156 | DOI:10.1007/s10278-024-01198-4

Categories: Literature Watch

Diagnostic Accuracy of Ultra-Low Dose CT Compared to Standard Dose CT for Identification of Fresh Rib Fractures by Deep Learning Algorithm

Wed, 2024-07-17 06:00

J Imaging Inform Med. 2024 Jul 17. doi: 10.1007/s10278-024-01027-8. Online ahead of print.

ABSTRACT

The present study aimed to evaluate the diagnostic accuracy of ultra-low dose computed tomography (ULD-CT) compared to standard dose computed tomography (SD-CT) in discerning recent rib fractures using a deep learning algorithm for detection of rib fractures (DLADRF). A total of 158 patients undergoing forensic diagnosis for rib fractures were included in this study: 50 underwent SD-CT, and 108 were assessed using ULD-CT. Junior and senior radiologists independently evaluated the images to identify and characterize the rib fractures. The sensitivity of rib fracture diagnosis by radiologists and radiologist + DLADRF was better using SD-CT than ULD-CT. However, the diagnostic sensitivity of DLADRF alone was slightly higher with ULD-CT than with SD-CT. Nonetheless, no substantial differences were observed in specificity, positive predictive value, and negative predictive value between SD-CT and ULD-CT for the same radiologist, radiologist + DLADRF, and DLADRF (P > 0.05). The area under the curve (AUC) of receiver operating characteristic analysis indicated that senior radiologist + DLADRF was significantly better than senior and junior radiologists, junior radiologists + DLADRF, and DLADRF alone using SD-CT or ULD-CT (all P < 0.05). Also, junior radiologists + DLADRF was better with ULD-CT than senior and junior radiologists (P < 0.05). The AUC of rib fracture diagnosis by senior radiologists did not differ from DLADRF using ULD-CT. Also, no significant differences were observed between junior radiologists + DLADRF and senior radiologists, or between junior radiologists and DLADRF, using SD-CT. DLADRF enhanced the diagnostic performance of radiologists in detecting recent rib fractures. The diagnostic outcomes between SD-CT and ULD-CT across radiologists' experience levels and DLADRF did not differ significantly.
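The AUC comparisons that carry the reader-study conclusions have a simple probabilistic reading: the AUC equals the probability that a randomly chosen fracture case receives a higher score than a randomly chosen non-fracture case (the Mann-Whitney interpretation). A minimal sketch of that rank-based computation on invented scores; the values are not from the study.

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: P(score of a positive > score of a negative),
    counting ties as half a win (Mann-Whitney U / (n_pos * n_neg))."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores
               for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical detector scores for fractured vs intact ribs:
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.4, 0.8, 0.2]
score = auc(pos, neg)  # 13.5 / 16 = 0.84375
```

Because it compares whole score distributions rather than a single operating point, AUC is the natural metric for contrasting reader + algorithm combinations whose sensitivity/specificity trade-offs differ.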

PMID:39020151 | DOI:10.1007/s10278-024-01027-8

Categories: Literature Watch

In Silico drug repurposing pipeline using deep learning and structure based approaches in epilepsy

Wed, 2024-07-17 06:00

Sci Rep. 2024 Jul 17;14(1):16562. doi: 10.1038/s41598-024-67594-6.

ABSTRACT

Due to its considerable global prevalence and high recurrence rate, the pursuit of effective new medication for epilepsy treatment remains an urgent and significant challenge. Drug repurposing emerges as a cost-effective and efficient strategy to combat this disorder. This study leverages transformer-based deep learning methods coupled with molecular binding affinity calculation to develop a novel in-silico drug repurposing pipeline for epilepsy. The number of candidate inhibitors against 24 target proteins encoded by gain-of-function genes implicated in epileptogenesis ranged from zero to several hundred. The pipeline recovered most anti-epileptic drugs and nearly half of the psychiatric medications among its repurposing candidates, highlighting its effectiveness. Furthermore, Lomitapide, a cholesterol-lowering drug, emerged as particularly noteworthy, exhibiting high binding affinity for 10 targets, as verified by molecular dynamics simulation and mechanism analysis. These findings provide a novel perspective on therapeutic strategies for other central nervous system diseases.

PMID:39020064 | DOI:10.1038/s41598-024-67594-6

Categories: Literature Watch
