Deep learning
Automatic detection of cognitive impairment in patients with white matter hyperintensity and causal analysis of related factors using artificial intelligence of MRI
Comput Biol Med. 2024 Jun 4;178:108684. doi: 10.1016/j.compbiomed.2024.108684. Online ahead of print.
ABSTRACT
PURPOSE: White matter hyperintensity (WMH) is a common feature of brain aging, often linked with cognitive decline and dementia. This study aimed to employ deep learning and radiomics to develop models for detecting cognitive impairment in WMH patients and to analyze the causal relationships between cognitive impairment and related factors.
MATERIALS AND METHODS: A total of 79 WMH patients from hospital 1 were randomly divided into a training set (62 patients) and a testing set (17 patients). Additionally, 29 patients from hospital 2 were included as an independent testing set. All participants underwent formal neuropsychological assessments to determine cognitive status. Automated identification and segmentation of WMH were conducted using VB-net, with extraction of radiomics features from the cortex, white matter, and nuclei. Four machine learning classifiers were trained on the training set and validated on the testing set to detect cognitive impairment. Model performances were evaluated and compared. Causal analyses were conducted among cognitive impairment and alterations in the cortex, white matter, and nuclei.
RESULTS: Among the models, the logistic regression (LR) model based on white matter features demonstrated the highest performance, achieving an AUC of 0.819 on the external test dataset. Causal analyses indicated that age, education level, and alterations in the cortex, white matter, and nuclei were causal factors for cognitive impairment.
CONCLUSION: The LR model based on white matter features exhibited high accuracy in detecting cognitive impairment in WMH patients. Furthermore, the possible causal relationships between cognitive impairment and alterations in the cortex, white matter, and nuclei were elucidated.
PMID:38852399 | DOI:10.1016/j.compbiomed.2024.108684
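A minimal sketch of the classification step described above (not the authors' code): a logistic regression trained on radiomics features and scored by AUC. The random feature matrix is a placeholder; only the cohort sizes mirror the split described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder radiomics feature matrices: 62 training / 17 test patients.
X_train, y_train = rng.normal(size=(62, 100)), rng.integers(0, 2, size=62)
X_test, y_test = rng.normal(size=(17, 100)), rng.integers(0, 2, size=17)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```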
Deep learning promoted target volumes delineation of total marrow and total lymphoid irradiation for accelerated radiotherapy: A multi-institutional study
Phys Med. 2024 Jun 8;123:103393. doi: 10.1016/j.ejmp.2024.103393. Online ahead of print.
ABSTRACT
BACKGROUND AND PURPOSE: One of the current roadblocks to the widespread use of Total Marrow Irradiation (TMI) and Total Marrow and Lymphoid Irradiation (TMLI) is the difficulty of the tumor target contouring workflow. This study aims to develop a hybrid neural network model that enables accurate, automatic, and rapid segmentation of multi-class clinical target volumes.
MATERIALS AND METHODS: Patients who underwent TMI and TMLI from January 2018 to May 2022 were included. Two independent oncologists manually contoured eight target volumes for patients on CT images. A novel Dual-Encoder Alignment Network (DEA-Net) was developed and trained using 46 patients from one internal institution and independently evaluated on a total of 39 internal and external patients. Performance was evaluated on accuracy metrics and delineation time.
RESULTS: The DEA-Net achieved a mean Dice similarity coefficient of 90.1% ± 1.8% on the internal testing dataset (23 patients) and 91.1% ± 2.5% on the external testing dataset (16 patients). The 95% Hausdorff distance and average symmetric surface distance were 2.04 ± 0.62 mm and 0.57 ± 0.11 mm on the internal testing dataset, and 2.17 ± 0.68 mm and 0.57 ± 0.20 mm on the external testing dataset, respectively, outperforming most existing state-of-the-art methods. In addition, the automatic segmentation workflow reduced delineation time by 98% compared with the conventional manual contouring process (mean 173 ± 29 s vs. 12,168 ± 1,690 s; P < 0.001). An ablation study validated the effectiveness of the hybrid structure.
CONCLUSION: The proposed deep learning framework achieved comparable or superior target volume delineation accuracy, significantly accelerating the radiotherapy planning process.
PMID:38852363 | DOI:10.1016/j.ejmp.2024.103393
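For reference, the headline metric can be implemented in a few lines; this Dice similarity coefficient for binary masks is illustrative and not the study's evaluation code.

```python
import numpy as np

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two overlapping toy masks.
a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), dtype=bool); b[20:52, 20:52] = True
print(f"DSC = {dice(a, b):.3f}")
```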
Prediction of viral families and hosts of single-stranded RNA viruses based on K-Mer coding from phylogenetic gene sequences
Comput Biol Chem. 2024 May 31;112:108114. doi: 10.1016/j.compbiolchem.2024.108114. Online ahead of print.
ABSTRACT
There are billions of virus species worldwide, and viruses, the smallest parasitic entities, pose a serious threat. Fighting the disorders they cause therefore requires an understanding of the genetic structure of viruses. Given the wide diversity and rapid evolution of viruses, there is a critical need to quickly and accurately classify viral species and their potential hosts to better understand transmission dynamics, facilitating the development of targeted therapies. Recognizing this, this study investigated the classes of RNA viruses based on their genomic sequences using Machine Learning (ML) and Deep Learning (DL) models. The PhyVirus dataset, consisting of pathogenic single-stranded RNA viruses of Baltimore groups four (+ssRNA) and five (-ssRNA) with different hosts and species, was analyzed. The viral gene sequences were encoded with the K-Mer technique, which represents a sequence by the frequencies of base words of various lengths. The study used classical ML algorithms (Random Forest, Gradient Boosting, and Extra Trees) and a Fully Connected Deep Neural Network, a DL algorithm, to predict viral families and hosts. Detailed analyses were performed on classifier performance in scenarios with different train-test ratios and different word lengths (k-values) for K-Mer coding. The results show that the Fully Connected Deep Neural Network achieved a high success rate of 99.60% in predicting virus families. In predicting virus hosts, the Extra Trees classifier achieved the highest success rate of 81.53%. This is considered to be the first classification study in the literature on this dataset, which comprises gene sequences of single-stranded RNA viruses with very large family and host diversity. Our detailed investigation of how varying K-Mer word lengths affects classification into viral families and hosts makes this study particularly valuable. This study shows that ML and DL methods have the potential to produce valuable results in phylogenetic studies. In addition, the high performance values indicate that these methods can also support applications such as reconstructing gene sequences or recovering missing regions within them.
PMID:38852362 | DOI:10.1016/j.compbiolchem.2024.108114
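A hedged sketch of the K-Mer coding idea: each RNA sequence becomes a vector of normalized k-mer frequencies, which a tree ensemble can then classify. The toy sequences, labels, and k=3 are illustrative assumptions, not the PhyVirus pipeline.

```python
from itertools import product

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def kmer_vector(seq, k=3):
    """Normalized counts of all 4**k RNA k-mers in `seq`."""
    vocab = {"".join(p): i for i, p in enumerate(product("ACGU", repeat=k))}
    v = np.zeros(len(vocab))
    for i in range(len(seq) - k + 1):
        idx = vocab.get(seq[i:i + k])
        if idx is not None:
            v[idx] += 1
    return v / max(len(seq) - k + 1, 1)

seqs = ["AUGGCUACGUAGCUAGCUA", "GCGCGAUAUCGCGAUAUGC", "AUGGCUACGUAGCUAGCUU"]
labels = [0, 1, 0]  # hypothetical family labels
X = np.stack([kmer_vector(s) for s in seqs])
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))
```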
A comprehensive survey on the use of deep learning techniques in glioblastoma
Artif Intell Med. 2024 Jun 4;154:102902. doi: 10.1016/j.artmed.2024.102902. Online ahead of print.
ABSTRACT
Glioblastoma, characterized as a grade 4 astrocytoma, stands out as the most aggressive brain tumor, often leading to dire outcomes. The challenge of treating glioblastoma is exacerbated by the convergence of genetic mutations and disruptions in gene expression, driven by alterations in epigenetic mechanisms. The integration of artificial intelligence, inclusive of machine learning algorithms, has emerged as an indispensable asset in medical analyses. Current research on glioblastoma predominantly revolves around non-omics data modalities, prominently including magnetic resonance imaging, computed tomography, and positron emission tomography. Nonetheless, the assimilation of omics data (encompassing gene expression through transcriptomics and epigenomics) offers pivotal insights into patients' conditions. These insights, reciprocally, hold significant value in refining diagnoses, guiding decision-making processes, and devising efficacious treatment strategies. This survey's core objective is a comprehensive exploration of noteworthy applications of machine learning methodologies in the domain of glioblastoma, alongside closely associated research pursuits. The study accentuates the deployment of artificial intelligence techniques for both non-omics and omics data across a range of tasks. Furthermore, the survey underscores the intricate challenges posed by the inherent heterogeneity of glioblastoma, delving into strategies aimed at addressing its multifaceted nature.
PMID:38852314 | DOI:10.1016/j.artmed.2024.102902
Automated detection of steps in videos of strabismus surgery using deep learning
BMC Ophthalmol. 2024 Jun 10;24(1):242. doi: 10.1186/s12886-024-03504-8.
ABSTRACT
BACKGROUND: Learning to perform strabismus surgery is an essential aspect of ophthalmologists' surgical training. An automated classification strategy for surgical steps can improve the effectiveness of training curricula and the efficiency of evaluating residents' performance. To this end, we aimed to develop and validate a deep learning (DL) model for automated detection of strabismus surgery steps in videos.
METHODS: In this study, we gathered 479 strabismus surgery videos from Shanghai Children's Hospital, affiliated with Shanghai Jiao Tong University School of Medicine, spanning July 2017 to October 2021. The videos were manually cut into 3,345 clips of the eight strabismus surgical steps based on the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubrics (ICO-OSCAR: strabismus). The video dataset was randomly split at the eye level into training (60%), validation (20%), and testing (20%) sets. We evaluated two hybrid DL algorithms: a Recurrent Neural Network (RNN)-based model and a Transformer-based model. The evaluation metrics included accuracy, area under the receiver operating characteristic curve (AUC), precision, recall, and F1-score.
RESULTS: In identifying the steps in video clips of strabismus surgery, the DL models achieved macro-average AUCs of 1.00 (95% CI 1.00-1.00) with the Transformer-based model and 0.98 (95% CI 0.97-1.00) with the RNN-based model. The Transformer-based model yielded a higher accuracy than the RNN-based model (0.96 vs. 0.83, p < 0.001). In detecting the different steps of strabismus surgery, the predictive ability of the Transformer-based model was better than that of the RNN. Precision ranged between 0.90 and 1 for the Transformer-based model and 0.75 to 0.94 for the RNN-based model; the F1-score ranged between 0.93 and 1 for the Transformer-based model and 0.78 to 0.92 for the RNN-based model.
CONCLUSION: DL models can automatically identify the steps of strabismus surgery in videos with high accuracy, and Transformer-based algorithms show excellent performance when modeling the spatiotemporal features of video frames.
PMID:38853240 | DOI:10.1186/s12886-024-03504-8
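As a rough illustration of the Transformer-based design (the abstract does not specify the architecture, so the feature dimensions and pooling below are assumptions), a clip classifier can apply temporal self-attention over per-frame CNN features:

```python
import torch
import torch.nn as nn

class ClipTransformer(nn.Module):
    """Temporal self-attention over per-frame features -> step logits."""

    def __init__(self, feat_dim=512, n_steps=8, n_layers=2, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_steps)  # 8 surgical steps

    def forward(self, frame_feats):           # (batch, frames, feat_dim)
        x = self.encoder(frame_feats)         # attend across time
        return self.head(x.mean(dim=1))       # mean-pool frames -> logits

feats = torch.randn(4, 16, 512)               # 4 clips x 16 frame features
print(ClipTransformer()(feats).shape)          # torch.Size([4, 8])
```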
Revolutionizing breast cancer Ki-67 diagnosis: ultrasound radiomics and fully connected neural networks (FCNN) combination method
Breast Cancer Res Treat. 2024 Jun 9. doi: 10.1007/s10549-024-07375-x. Online ahead of print.
ABSTRACT
PURPOSE: This study aims to assess the diagnostic value of ultrasound habitat sub-region radiomics feature parameters, combined through a fully connected neural network (FCNN) with L2,1-norm feature selection, in relation to breast cancer Ki-67 status.
METHODS: Ultrasound images from 528 cases of female breast cancer at the Affiliated Hospital of Xiangnan University and 232 cases of female breast cancer at the Affiliated Rehabilitation Hospital of Xiangnan University were selected for this study. We utilized deep learning methods to automatically outline the gross tumor volume and perform habitat clustering. Subsequently, habitat sub-regions were extracted to identify radiomics features, which underwent feature engineering using the L2,1-norm. A prediction model for the Ki-67 status of breast cancer patients was then developed using an FCNN. The model's performance was evaluated using accuracy (ACC), area under the curve (AUC), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), recall, and F1-score. In addition, calibration curves and clinical decision curves were plotted for the test set to visually assess the predictive accuracy and clinical benefit of the model.
RESULT: Based on feature engineering using the L2,1-norm, a total of 9 core features were identified. The predictive model, constructed by the FCNN based on these 9 features, achieved the following scores: ACC 0.856, AUC 0.915, Spe 0.843, PPV 0.920, NPV 0.747, Recall 0.974, and F1 0.890. Furthermore, calibration curves and clinical decision curves of the validation set demonstrated a high level of confidence in the model's performance and its clinical benefit.
CONCLUSION: Habitat clustering of ultrasound images of breast cancer is effectively supported by the combined implementation of the L2,1-norm and FCNN algorithms, allowing for accurate classification of the Ki-67 status in breast cancer patients.
PMID:38853220 | DOI:10.1007/s10549-024-07375-x
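One plausible reading of the L2,1-norm feature engineering, hedged because the abstract gives no details, is a row-wise group-sparsity penalty on the first FCNN layer that pushes entire input features toward zero:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def l21_norm(W):
    """L2,1-norm: sum over rows of each row's L2 norm."""
    return W.norm(p=2, dim=1).sum()

fcnn = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(16, 9), torch.randint(0, 2, (16, 1)).float()
# Rows of the transposed first-layer weight correspond to input features.
loss = F.binary_cross_entropy_with_logits(fcnn(x), y) \
       + 1e-3 * l21_norm(fcnn[0].weight.T)
loss.backward()
print(f"penalized loss: {loss.item():.3f}")
```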
ssVERDICT: Self-supervised VERDICT-MRI for enhanced prostate tumor characterization
Magn Reson Med. 2024 Jun 9. doi: 10.1002/mrm.30186. Online ahead of print.
ABSTRACT
PURPOSE: Demonstrating and assessing self-supervised machine-learning fitting of the VERDICT (vascular, extracellular and restricted diffusion for cytometry in tumors) model for prostate cancer.
METHODS: We derive a self-supervised neural network for fitting VERDICT (ssVERDICT) that estimates parameter maps without training data. We compare the performance of ssVERDICT to two established baseline methods for fitting diffusion MRI models: conventional nonlinear least squares and supervised deep learning. We do this quantitatively on simulated data by comparing the Pearson's correlation coefficient, mean-squared error, bias, and variance with respect to the simulated ground truth. We also calculate in vivo parameter maps on a cohort of 20 prostate cancer patients and compare the methods' performance in discriminating benign from cancerous tissue via Wilcoxon's signed-rank test.
RESULTS: In simulations, ssVERDICT outperforms the baseline methods (nonlinear least squares and supervised deep learning) in estimating all the parameters from the VERDICT prostate model in terms of Pearson's correlation coefficient, bias, and mean-squared error. In vivo, ssVERDICT shows stronger lesion conspicuity across all parameter maps, and improves discrimination between benign and cancerous tissue over the baseline methods.
CONCLUSION: ssVERDICT significantly outperforms state-of-the-art methods for VERDICT model fitting and shows, for the first time, fitting of a detailed multicompartment biophysical diffusion MRI model with machine learning without the requirement of explicit training labels.
PMID:38852195 | DOI:10.1002/mrm.30186
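The core ssVERDICT idea is that the training loss compares each measured signal with the signal re-synthesized by the biophysical forward model, so no parameter labels are needed. Below is a sketch with a mono-exponential decay standing in for the full multicompartment VERDICT model (all sizes and the simplified signal model are assumptions):

```python
import torch
import torch.nn as nn

b = torch.linspace(0.0, 3.0, 10)                 # toy b-values
true_adc = torch.rand(256, 1) * 2.0 + 0.5        # hidden "ground truth"
signals = torch.exp(-b * true_adc) + 0.01 * torch.randn(256, 10)

# Network maps each measured signal to a positive model parameter.
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                    nn.Linear(64, 1), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    adc_hat = net(signals)                       # estimated parameter map
    recon = torch.exp(-b * adc_hat)              # forward model re-synthesis
    loss = ((recon - signals) ** 2).mean()       # self-supervised loss
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final reconstruction MSE: {loss.item():.4f}")
```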
Recovering high-quality fiber orientation distributions from a reduced number of diffusion-weighted images using a model-driven deep learning architecture
Magn Reson Med. 2024 Jun 9. doi: 10.1002/mrm.30187. Online ahead of print.
ABSTRACT
PURPOSE: The aim of this study was to develop a model-based deep learning architecture to accurately reconstruct fiber orientation distributions (FODs) from a reduced number of diffusion-weighted images (DWIs), facilitating accurate analysis with reduced acquisition times.
METHODS: Our proposed architecture, the Spherical Deconvolution Network (SDNet), performed FOD reconstruction by mapping 30 DWIs to fully sampled FODs that had been fit to 288 DWIs. SDNet included DWI-consistency blocks within the network architecture and a fixel-classification penalty within the loss function. SDNet was trained on a subset of the Human Connectome Project, and its performance was compared with FOD-Net and multi-shell multi-tissue constrained spherical deconvolution.
RESULTS: SDNet achieved the strongest results with respect to angular correlation coefficient and sum of squared errors. When the impact of the fixel-classification penalty was increased, we observed an improvement in performance metrics reliant on segmenting the FODs into the correct number of fixels.
CONCLUSION: Inclusion of DWI-consistency blocks improved reconstruction performance, and the fixel-classification penalty term offered increased control over the angular separation of fixels in the reconstructed FODs.
PMID:38852179 | DOI:10.1002/mrm.30187
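For reference, the angular correlation coefficient between two FODs expressed as spherical-harmonic coefficient vectors is conventionally computed with the l=0 term excluded; a small illustrative version (assuming the l=0 coefficient is stored first):

```python
import numpy as np

def acc(u, v):
    """Angular correlation of two SH coefficient vectors, l=0 excluded."""
    u, v = u[1:], v[1:]                  # assumes the l=0 term is first
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

rng = np.random.default_rng(1)
fod_a = rng.normal(size=45)              # 45 coefficients = SH order 8
fod_b = fod_a + 0.1 * rng.normal(size=45)
print(f"ACC = {acc(fod_a, fod_b):.3f}")
```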
Enhancing Chicago Classification diagnoses with functional lumen imaging probe-mechanics (FLIP-MECH)
Neurogastroenterol Motil. 2024 Jun 9:e14841. doi: 10.1111/nmo.14841. Online ahead of print.
ABSTRACT
BACKGROUND: Esophageal motility disorders can be diagnosed by either high-resolution manometry (HRM) or the functional lumen imaging probe (FLIP) but there is no systematic approach to synergize the measurements of these modalities or to improve the diagnostic metrics that have been developed to analyze them. This work aimed to devise a formal approach to bridge the gap between diagnoses inferred from HRM and FLIP measurements using deep learning and mechanics.
METHODS: The "mechanical health" of the esophagus was analyzed in 740 subjects including a spectrum of motility disorder patients and normal subjects. The mechanical health was quantified through a set of parameters including wall stiffness, active relaxation, and contraction pattern. These parameters were used by a variational autoencoder to generate a parameter space called virtual disease landscape (VDL). Finally, probabilities were assigned to each point (subject) on the VDL through linear discriminant analysis (LDA), which in turn was used to compare with FLIP and HRM diagnoses.
RESULTS: Subjects clustered into different regions of the VDL with their location relative to each other (and normal) defined by the type and severity of dysfunction. The two major categories that separated best on the VDL were subjects with normal esophagogastric junction (EGJ) opening and those with EGJ obstruction. Both HRM and FLIP diagnoses correlated well within these two groups.
CONCLUSION: Mechanics-based parameters effectively estimated esophageal health using FLIP measurements to position subjects in a 3-D VDL that segregated subjects in good alignment with motility diagnoses gleaned from HRM and FLIP studies.
PMID:38852150 | DOI:10.1111/nmo.14841
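A compact sketch of the pipeline's shape: mechanics parameters compressed into a low-dimensional latent space, then per-subject class probabilities assigned with LDA. PCA stands in for the variational autoencoder here, and all data are synthetic placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
params = rng.normal(size=(740, 6))        # stiffness, relaxation, ... (toy)
labels = rng.integers(0, 2, size=740)     # e.g. normal EGJ vs. obstruction
latent = PCA(n_components=3).fit_transform(params)   # 3-D "landscape"
lda = LinearDiscriminantAnalysis().fit(latent, labels)
print(lda.predict_proba(latent[:3]))      # per-subject probabilities
```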
Using artificial intelligence to study atherosclerosis from computed tomography imaging: A state-of-the-art review of the current literature
Atherosclerosis. 2024 May 19:117580. doi: 10.1016/j.atherosclerosis.2024.117580. Online ahead of print.
ABSTRACT
With the enormous progress in the field of cardiovascular imaging in recent years, computed tomography (CT) has become readily available to phenotype atherosclerotic coronary artery disease. New analytical methods using artificial intelligence (AI) enable the analysis of complex phenotypic information of atherosclerotic plaques. In particular, deep learning-based approaches using convolutional neural networks (CNNs) facilitate tasks such as lesion detection, segmentation, and classification. New radiotranscriptomic techniques even capture underlying bio-histochemical processes through higher-order structural analysis of voxels on CT images. In the near future, the international large-scale Oxford Risk Factors And Non-invasive Imaging (ORFAN) study will provide a powerful platform for testing and validating prognostic AI-based models. The goal is the transition of these new approaches from research settings into a clinical workflow. In this review, we present an overview of existing AI-based techniques with a focus on imaging biomarkers that determine the degree of coronary inflammation, coronary plaques, and the associated risk. Further, the current limitations of AI-based approaches and the priorities for addressing these challenges are discussed. This will pave the way for an AI-enabled risk assessment tool to detect vulnerable atherosclerotic plaques and to guide treatment strategies for patients.
PMID:38852022 | DOI:10.1016/j.atherosclerosis.2024.117580
Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning
Commun Med (Lond). 2024 Jun 8;4(1):110. doi: 10.1038/s43856-024-00528-5.
ABSTRACT
BACKGROUND: Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in the clinical workflow. Therefore, accurate uncertainty quantification and its downstream utilization are critical.
METHODS: Here we propose uncertainty-aware deep learning for OPC GTVp segmentation, and illustrate the utility of uncertainty in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach.
RESULTS: We show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations likely fail.
CONCLUSIONS: Our BDL-based analysis provides a first step towards more widespread implementation of uncertainty quantification in OPC GTVp segmentation.
PMID:38851837 | DOI:10.1038/s43856-024-00528-5
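A hedged sketch of one standard BDL uncertainty recipe, Monte-Carlo dropout (the paper's two BDL models and eight uncertainty measures are not specified in the abstract): keep dropout active at test time, average stochastic forward passes, and take the per-voxel predictive entropy as the uncertainty map:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout2d(0.5), nn.Conv2d(8, 1, 3, padding=1))
net.train()                                # keep dropout active at test time
x = torch.randn(1, 1, 64, 64)              # toy image slice
with torch.no_grad():
    probs = torch.stack([torch.sigmoid(net(x)) for _ in range(20)])
p = probs.mean(dim=0)                      # mean segmentation probability
entropy = -(p * p.clamp_min(1e-8).log()
            + (1 - p) * (1 - p).clamp_min(1e-8).log())
print(p.shape, float(entropy.max()))       # uncertainty map statistics
```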
A Clinical Bacterial Dataset for Deep Learning in Microbiological Rapid On-Site Evaluation
Sci Data. 2024 Jun 8;11(1):608. doi: 10.1038/s41597-024-03370-5.
ABSTRACT
Microbiological Rapid On-Site Evaluation (M-ROSE) is based on smear staining and microscopic observation, providing critical references for the diagnosis and treatment of pulmonary infectious disease. Automatic identification of pathogens is key to improving the quality and speed of M-ROSE. Recent advancements in deep learning have yielded numerous identification algorithms and datasets. However, most studies focus on artificially cultured bacteria and lack clinical data and algorithms. Therefore, we collected Gram-stained bacteria images, obtained by M-ROSE from 2018 to 2022, from lower respiratory tract specimens of patients with lung infections in the Chinese PLA General Hospital, and de-identified them to produce 1,705 images (4,912 × 3,684 pixels). A total of 4,833 cocci and 6,991 bacilli were manually labelled and differentiated into Gram-negative and Gram-positive. In addition, we applied detection and segmentation networks for benchmark testing. The data and benchmark algorithms provided here may benefit the study of automated bacterial identification in clinical specimens.
PMID:38851809 | DOI:10.1038/s41597-024-03370-5
The interplay of group size and flow velocity modulates fish exploratory behaviour
Sci Rep. 2024 Jun 8;14(1):13186. doi: 10.1038/s41598-024-63975-z.
ABSTRACT
Social facilitation is a well-known phenomenon whereby the presence of organisms belonging to the same species enhances an individual organism's performance in a specific task. As far as fishes are concerned, most studies on social facilitation have been conducted in standing-water conditions. However, riverine fish species most commonly live in moving waters, and the effects of hydrodynamics on social facilitation remain largely unknown. To bridge this knowledge gap, we designed and performed flume experiments in which the behaviour of wild juvenile Italian riffle dace (Telestes muticellus) was studied in varying group sizes and at different mean flow velocities. An artificial intelligence (AI) deep learning algorithm was developed and employed to track fish positions in time and subsequently assess their exploration, swimming activity, and space use. Results indicate that energy-saving strategies dictated space use in flowing waters regardless of group size. Instead, exploration and swimming activity increased with group size, but the magnitude of this enhancement (which quantifies social facilitation) was modulated by flow velocity. These results have implications for how future research efforts should be designed to understand the social dynamics of riverine fish populations, which can no longer ignore the contribution of hydrodynamics.
PMID:38851769 | DOI:10.1038/s41598-024-63975-z
Biologically meaningful genome interpretation models to address data underdetermination for the leaf and seed ionome prediction in Arabidopsis thaliana
Sci Rep. 2024 Jun 8;14(1):13188. doi: 10.1038/s41598-024-63855-6.
ABSTRACT
Genome interpretation (GI) encompasses the computational attempts to model the relationship between genotype and phenotype, with the goal of understanding how the first leads to the second. While traditional approaches have focused on sub-problems such as predicting the effect of single nucleotide variants or finding genetic associations, recent advances in neural networks (NNs) have made it possible to develop end-to-end GI models that take genomic data as input and predict phenotypes as output. However, technical and modeling issues still need to be fixed for these models to be effective, including the widespread underdetermination of genomic datasets, which makes them unsuitable for training large, overfitting-prone NNs. Here we propose novel GI models to address this issue, exploring the use of two types of transfer learning approaches and proposing a novel Biologically Meaningful Sparse NN layer specifically designed for end-to-end GI. Our models predict the leaf and seed ionome in A. thaliana, obtaining results comparable to our previous over-parameterized model while reducing the number of parameters 8.8-fold. We also investigate how population stratification influences the evaluation of model performance, highlighting how it leads to (1) an instance of Simpson's paradox, and (2) limitations in model generalization.
PMID:38851759 | DOI:10.1038/s41598-024-63855-6
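A minimal sketch of what a biologically meaningful sparse layer can look like: a linear layer whose weights are elementwise-masked by a fixed 0/1 connectivity pattern (for instance, variants connected only to their annotated genes). The random mask and sizes are placeholders, not the authors' design:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer restricted to a fixed 0/1 connectivity mask."""

    def __init__(self, mask):                     # mask: (in, out)
        super().__init__()
        self.register_buffer("mask", mask.float())
        self.weight = nn.Parameter(torch.randn(mask.shape) * 0.01)
        self.bias = nn.Parameter(torch.zeros(mask.shape[1]))

    def forward(self, x):
        return x @ (self.weight * self.mask) + self.bias  # masked edges only

mask = torch.rand(1000, 50) < 0.01     # ~1% of variant-gene edges (random)
layer = MaskedLinear(mask)
print(layer(torch.randn(8, 1000)).shape)   # torch.Size([8, 50])
```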
Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation
Biomed Eng Online. 2024 Jun 8;23(1):52. doi: 10.1186/s12938-024-01238-8.
ABSTRACT
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
PMID:38851691 | DOI:10.1186/s12938-024-01238-8
Gray Matters: ViT-GAN Framework for Identifying Schizophrenia Biomarkers Linking Structural MRI and Functional Connectivity
Neuroimage. 2024 Jun 6:120674. doi: 10.1016/j.neuroimage.2024.120674. Online ahead of print.
ABSTRACT
Brain disorders are often associated with changes in brain structure and function, where functional changes may be due to underlying structural variations. Gray matter (GM) volume segmentation from 3D structural MRI offers vital structural information for brain disorders like schizophrenia, as GM encompasses essential brain tissues such as neuronal cell bodies, dendrites, and synapses, which are crucial for neural signal processing and transmission; changes in GM volume can thus indicate alterations in these tissues, reflecting underlying pathological conditions. In addition, functional network connectivity (FNC) matrices, obtained by transforming high-dimensional fMRI data with the ICA algorithm, serve as an effective carrier of functional information. In our study, we introduce a new generative deep learning architecture, the conditional efficient vision transformer generative adversarial network (cEViT-GAN), which adeptly generates FNC matrices conditioned on GM to facilitate the exploration of potential connections between brain structure and function. We developed a new, lightweight self-attention mechanism for our ViT-based generator, enhancing the generation of refined attention maps critical for identifying structural biomarkers based on GM. Our approach not only generates high-quality FNC matrices with a Pearson correlation of 0.74 compared with real FNC data, but also uses attention-map technology to identify potential biomarkers in GM structure that could lead to functional abnormalities in schizophrenia patients. Visualization experiments within our study have highlighted these structural biomarkers, including the medial prefrontal cortex (mPFC), dorsolateral prefrontal cortex (DL-PFC), and cerebellum. In addition, through cross-domain analysis comparing generated and real FNC matrices, we have identified the functional connections most strongly correlated with structural information, further validating the structure-function connections. This comprehensive analysis helps to elucidate the intricate relationship between brain structure and its functional manifestations, providing more refined insight into the neurobiological research of schizophrenia.
PMID:38851549 | DOI:10.1016/j.neuroimage.2024.120674
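A drastically simplified skeleton of the conditional-generation idea; the layer sizes, the GM embedding, and the 53-network FNC dimension are assumptions, and the actual cEViT-GAN uses a ViT generator with a custom lightweight self-attention:

```python
import torch
import torch.nn as nn

class CondFNCGenerator(nn.Module):
    """Maps a GM embedding plus noise to a symmetric FNC matrix."""

    def __init__(self, gm_dim=128, z_dim=64, n_networks=53):
        super().__init__()
        self.n = n_networks
        self.mlp = nn.Sequential(
            nn.Linear(gm_dim + z_dim, 256), nn.ReLU(),
            nn.Linear(256, n_networks * n_networks))

    def forward(self, gm_embed, z):
        m = self.mlp(torch.cat([gm_embed, z], dim=1)).view(-1, self.n, self.n)
        return 0.5 * (m + m.transpose(1, 2))   # symmetrize like real FNC

fnc = CondFNCGenerator()(torch.randn(2, 128), torch.randn(2, 64))
print(fnc.shape)                               # torch.Size([2, 53, 53])
```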
Thigh Muscle Composition Changes in Knee Osteoarthritis Patients During Weight Loss: Sex-Specific Analysis Using Data from Osteoarthritis Initiative
Osteoarthritis Cartilage. 2024 Jun 6:S1063-4584(24)01213-5. doi: 10.1016/j.joca.2024.05.013. Online ahead of print.
ABSTRACT
OBJECTIVES: Sex of patients with knee osteoarthritis (KOA) may impact changes in thigh muscle composition during weight loss, the most well-known disease-modifying intervention. We investigated longitudinal sex-based changes in thigh muscle quality during weight loss in participants with KOA.
METHODS: Using Osteoarthritis Initiative (OAI) cohort data, we included females and males with baseline radiographic KOA who experienced a >5% reduction in BMI over four years. Using a previously validated deep-learning algorithm, we measured MRI-derived biomarkers of the thigh muscles at baseline and year 4. Outcomes were changes in intra- and intermuscular adipose tissue (Intra-MAT and Inter-MAT) and the contractile percentage of the thigh muscles, compared between females and males. The analysis adjusted for potential confounders, such as demographics, risk factors, BMI change, physical activity, diet, and KOA status.
RESULTS: A retrospective selection of available thigh MRIs from KOA participants who also had a 4-year weight loss (>5% of BMI) yielded a sample comprising 313 thighs (192 females and 121 males). Female and male participants exhibited a comparable degree of weight loss (females: -9.72±4.38, males: -8.83±3.64, P-value=0.060). However, the changes in thigh muscle quality were less beneficial for females than for males, as shown by a smaller longitudinal decrease in Intra-MAT (change difference, 95% CI: 783.44 mm2/4 years, 505.70 to 1061.19, P-value<0.001) and a smaller longitudinal increase in contractile percentage (change difference, 95% CI: -3.9%/4 years, -6.5 to -1.4, P-value=0.019).
CONCLUSIONS: In participants with KOA and 4-year weight loss, the longitudinal changes in thigh muscle quality were overall beneficial, but to a lesser degree in females than in males. Further research is warranted to investigate the underlying mechanisms and develop sex-specific interventions to optimize muscle quality during weight loss.
PMID:38851527 | DOI:10.1016/j.joca.2024.05.013
Artificial intelligence for detecting periapical radiolucencies: A systematic review and meta-analysis
J Dent. 2024 Jun 6:105104. doi: 10.1016/j.jdent.2024.105104. Online ahead of print.
ABSTRACT
OBJECTIVES: Dentists' diagnostic accuracy in detecting periapical radiolucency varies considerably. This systematic review and meta-analysis aimed to investigate the accuracy of artificial intelligence (AI) for detecting periapical radiolucency.
DATA: Studies reporting diagnostic accuracy and utilizing AI for periapical radiolucency detection, published until November 2023, were eligible for inclusion. Meta-analysis was conducted using the online MetaDTA Tool to calculate pooled sensitivity and specificity. Risk of bias was evaluated using QUADAS-2.
SOURCES: A comprehensive search was conducted in the PubMed/MEDLINE, ScienceDirect, and Institute of Electrical and Electronics Engineers (IEEE) Xplore databases.
STUDY SELECTION: We identified 210 articles, of which 24 met the criteria for inclusion in the review. All but one study used one type of convolutional neural network. The body of evidence comes with an overall unclear to high risk of bias and several applicability concerns. Four of the twenty-four studies were included in a meta-analysis. AI showed a pooled sensitivity and specificity of 0.94 (95% CI = 0.90-0.96) and 0.96 (95% CI = 0.91-0.98), respectively.
CONCLUSIONS: AI demonstrated high specificity and sensitivity for detecting periapical radiolucencies. However, the current landscape suggests a need for diverse study designs beyond traditional diagnostic accuracy studies. Prospective real-life randomized controlled trials using heterogeneous data are needed to demonstrate the true value of AI.
CLINICAL SIGNIFICANCE: Artificial intelligence tools seem to have the potential to support the detection of periapical radiolucencies on imagery. Notably, nearly all studies did not test fully fledged software systems but measured the mere accuracy of AI models in diagnostic accuracy studies. The true value of currently available AI-based software for lesion detection on both 2D and 3D radiographs remains uncertain.
PMID:38851523 | DOI:10.1016/j.jdent.2024.105104
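To show the flavor of the pooling arithmetic (MetaDTA fits a bivariate random-effects model; this univariate fixed-effect sketch with invented counts is only illustrative):

```python
import numpy as np

tp = np.array([90, 45, 120, 60])     # invented true-positive counts
fn = np.array([6, 4, 9, 3])          # invented false-negative counts
logit = np.log((tp + 0.5) / (fn + 0.5))           # continuity-corrected logit
var = 1 / (tp + 0.5) + 1 / (fn + 0.5)             # approximate logit variance
pooled = np.sum(logit / var) / np.sum(1 / var)    # inverse-variance pooling
print(f"pooled sensitivity ~ {1 / (1 + np.exp(-pooled)):.3f}")
```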
Volumetric Analysis of Acute Uncomplicated Type B Aortic Dissection Using an Automated Deep Learning Aortic Zone Segmentation Model
J Vasc Surg. 2024 Jun 6:S0741-5214(24)01245-X. doi: 10.1016/j.jvs.2024.06.001. Online ahead of print.
ABSTRACT
INTRODUCTION: Machine learning techniques have shown excellent performance in 3D medical image analysis but have not been applied to acute uncomplicated type B aortic dissection (auTBAD) utilizing SVS/STS-defined aortic zones. The purpose of this study was to establish a trained, automatic machine learning aortic zone segmentation model to facilitate an aortic zone volumetric comparison between auTBAD patients stratified by rate of aortic growth.
METHODS: Patients with auTBAD and serial imaging were identified. For each patient, imaging characteristics from two CT scans were analyzed: (1) the baseline CTA at index admission, and (2) either the most recent surveillance CTA, or the most recent CTA prior to an aortic intervention. Patients were stratified into two comparative groups based on aortic growth: rapid growth (diameter increase ≥5mm/year) and no/slow growth (diameter increase <5mm/year). Deidentified images were imported into an open-source software package for medical image analysis and images were annotated based on SVS/STS criteria for aortic zones. Our model was trained using 4-fold cross-validation. The segmentation output was used to calculate aortic zone volumes from each imaging study.
RESULTS: Of the 59 patients identified for inclusion, rapid growth was observed in 33 (56%) and no/slow growth in 26 (44%). There were no differences in baseline demographics, comorbidities, admission mean arterial pressure, number of discharge antihypertensives, or high-risk imaging characteristics between groups (p>0.05 for all). The median duration between the baseline and interval CT was 1.07 years (IQR 0.38-2.57). A post-discharge aortic intervention was performed in 13 (22%) patients at a mean of 1.5±1.2 years, with no difference between groups (p>0.05). Among all patients, the largest relative percent increases in zone volumes over time were found in zone 4 (13.9%, IQR -6.82 to 35.1) and zone 5 (13.4%, IQR -7.78 to 37.9). There were no differences in baseline zone volumes between groups (p>0.05 for all). The average Dice coefficient, a performance measure of the model output, was 0.73. Performance was best in zone 5 (0.84) and zone 9 (0.91).
CONCLUSIONS: We describe an automatic deep learning segmentation model incorporating SVS-defined aortic zones. The open-source, trained model demonstrates concordance with the manually segmented aortas, with the strongest performance in zones 5 and 9, providing a framework for further clinical applications. In our limited sample, there were no differences in baseline aortic zone volumes between rapid-growth and no/slow-growth patients.
PMID:38851467 | DOI:10.1016/j.jvs.2024.06.001
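Once zones are segmented, the volumetric step reduces to voxel counting; a brief sketch under assumed label conventions and voxel spacing:

```python
import numpy as np

def zone_volume_ml(label_map, zone, spacing_mm):
    """Volume of one labeled aortic zone in millilitres."""
    voxel_mm3 = float(np.prod(spacing_mm))         # mm^3 per voxel
    return int((label_map == zone).sum()) * voxel_mm3 / 1000.0

seg = np.random.default_rng(0).integers(0, 12, size=(64, 64, 64))
print(f"zone 5 volume: {zone_volume_ml(seg, 5, (0.8, 0.8, 1.0)):.1f} mL")
```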
Verification of image quality improvement by deep learning reconstruction to 1.5 T MRI in T2-weighted images of the prostate gland
Radiol Phys Technol. 2024 Jun 8. doi: 10.1007/s12194-024-00819-5. Online ahead of print.
ABSTRACT
This study aimed to evaluate whether the image quality of 1.5 T magnetic resonance imaging (MRI) of the prostate is equal to or higher than that of 3 T MRI when deep learning reconstruction (DLR) is applied. For the objective analysis of images from the 13 healthy volunteers, we measured the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of images obtained by the 1.5 T scanner with and without DLR, as well as of images obtained by the 3 T scanner. For the subjective analysis, T2-weighted (T2W) images of the prostate were visually evaluated by two board-certified radiologists. The SNRs and CNRs of the 1.5 T images with DLR were higher than those of the 3 T images. Subjective image scores were better for the 1.5 T images with DLR than for the 3 T images. The use of the DLR technique in 1.5 T MRI substantially improved the SNR and image quality of T2W images of the prostate gland compared with 3 T MRI.
PMID:38850389 | DOI:10.1007/s12194-024-00819-5
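For context, ROI-based SNR and CNR are commonly computed as below; the ROI placements and the toy image are assumptions, since the paper's exact definitions are not restated in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(256, 256))   # toy T2W image
prostate = img[100:140, 100:140]                # hypothetical ROIs
muscle = img[30:60, 30:60] * 0.6
background = img[0:20, 220:250] * 0.05

snr = prostate.mean() / background.std()                      # signal / noise
cnr = abs(prostate.mean() - muscle.mean()) / background.std()
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```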