Deep learning
Artificial Intelligence in Infectious Disease Clinical Practice: An Overview of Gaps, Opportunities, and Limitations
Trop Med Infect Dis. 2024 Sep 30;9(10):228. doi: 10.3390/tropicalmed9100228.
ABSTRACT
The integration of artificial intelligence (AI) in clinical medicine marks a revolutionary shift, enhancing diagnostic accuracy, therapeutic efficacy, and overall healthcare delivery. This review explores the current uses, benefits, limitations, and future applications of AI in infectious diseases, highlighting its specific applications in diagnostics, clinical decision making, and personalized medicine. The transformative potential of AI in infectious diseases is emphasized, addressing gaps in rapid and accurate disease diagnosis, surveillance, outbreak detection and management, and treatment optimization. Despite these advancements, significant limitations and challenges exist, including data privacy concerns, potential biases, and ethical dilemmas. The article underscores the need for stringent regulatory frameworks and inclusive databases to ensure equitable, ethical, and effective AI utilization in the field of clinical and laboratory infectious diseases.
PMID:39453255 | DOI:10.3390/tropicalmed9100228
Early Detection of Lumpy Skin Disease in Cattle Using Deep Learning-A Comparative Analysis of Pretrained Models
Vet Sci. 2024 Oct 17;11(10):510. doi: 10.3390/vetsci11100510.
ABSTRACT
Lumpy Skin Disease (LSD) poses a significant threat to agricultural economies, particularly in livestock-dependent countries like India, due to its high transmission rate leading to severe morbidity and mortality among cattle. This underscores the urgent need for early and accurate detection to effectively manage and mitigate outbreaks. Leveraging advancements in computer vision and artificial intelligence, our research develops an automated system for LSD detection in cattle using deep learning techniques. We utilized two publicly available datasets comprising images of healthy cattle and those with LSD, including additional images of cattle affected by other diseases to enhance specificity and ensure the model detects LSD specifically rather than general illness signs. Our methodology involved preprocessing the images, applying data augmentation, and balancing the datasets to improve model generalizability. We evaluated ten pretrained deep learning models (Xception, VGG16, VGG19, ResNet152V2, InceptionV3, MobileNetV2, DenseNet201, NASNetMobile, NASNetLarge, and EfficientNetV2S) using transfer learning. The models were rigorously trained and tested under diverse conditions, with performance assessed using metrics such as accuracy, sensitivity, specificity, precision, F1-score, and AUC-ROC. Notably, VGG16 and MobileNetV2 emerged as the most effective, achieving accuracies of 96.07% and 96.39%, sensitivities of 93.75% and 98.57%, and specificities of 97.14% and 94.59%, respectively. Our study critically highlights the strengths and limitations of each model, demonstrating that while high accuracy is achievable, sensitivity and specificity are crucial for clinical applicability. By meticulously detailing the performance characteristics and including images of cattle with other diseases, we ensured the robustness and reliability of the models.
This comprehensive comparative analysis enriches our understanding of deep learning applications in veterinary diagnostics and makes a substantial contribution to the field of automated disease recognition in livestock farming. Our findings suggest that adopting such AI-driven diagnostic tools can enhance the early detection and control of LSD, ultimately benefiting animal health and the agricultural economy.
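The metrics used to compare these models all follow directly from a binary confusion matrix. A minimal Python sketch (the counts below are illustrative, not the study's data):

```python
# Standard binary classification metrics, as used to compare the
# pretrained models: accuracy, sensitivity, specificity, precision, F1.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased cattle found
    specificity = tn / (tn + fp)   # true-negative rate: healthy cattle cleared
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Illustrative counts only (not the paper's confusion matrix):
acc, sens, spec, prec, f1 = classification_metrics(tp=90, fp=5, tn=95, fn=10)
```

A model with high accuracy but low sensitivity would still miss diseased animals, which is why the abstract stresses sensitivity and specificity for clinical applicability.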
PMID:39453102 | DOI:10.3390/vetsci11100510
Artificial Intelligence and Advanced Technology in Glaucoma: A Review
J Pers Med. 2024 Oct 16;14(10):1062. doi: 10.3390/jpm14101062.
ABSTRACT
BACKGROUND: Glaucoma is a leading cause of irreversible blindness worldwide, necessitating precise management strategies tailored to individual patient characteristics. Artificial intelligence (AI) holds promise in revolutionizing the approach to glaucoma care by providing personalized interventions.
AIM: This review explores the current landscape of AI applications in the personalized management of glaucoma patients, highlighting advancements, challenges, and future directions.
METHODS: A systematic search of electronic databases, including PubMed, Scopus, and Web of Science, was conducted to identify relevant studies published up to 2024. Studies exploring the use of AI techniques in personalized management strategies for glaucoma patients were included.
RESULTS: The review identified diverse AI applications in glaucoma management, ranging from early detection and diagnosis to treatment optimization and prognosis prediction. Machine learning algorithms, particularly deep learning models, demonstrated high accuracy in diagnosing glaucoma from various imaging modalities such as optical coherence tomography (OCT) and visual field tests. AI-driven risk stratification tools facilitated personalized treatment decisions by integrating patient-specific data with predictive analytics, enhancing therapeutic outcomes while minimizing adverse effects. Moreover, AI-based teleophthalmology platforms enabled remote monitoring and timely intervention, improving patient access to specialized care.
CONCLUSIONS: Integrating AI technologies in the personalized management of glaucoma patients holds immense potential for optimizing clinical decision-making, enhancing treatment efficacy, and mitigating disease progression. However, challenges such as data heterogeneity, model interpretability, and regulatory concerns warrant further investigation. Future research should focus on refining AI algorithms, validating their clinical utility through large-scale prospective studies, and ensuring seamless integration into routine clinical practice to realize the full benefits of personalized glaucoma care.
PMID:39452568 | DOI:10.3390/jpm14101062
AI-ADC: Channel and Spatial Attention-Based Contrastive Learning to Generate ADC Maps from T2W MRI for Prostate Cancer Detection
J Pers Med. 2024 Oct 9;14(10):1047. doi: 10.3390/jpm14101047.
ABSTRACT
BACKGROUND/OBJECTIVES: Apparent Diffusion Coefficient (ADC) maps in prostate MRI can reveal tumor characteristics, but their accuracy can be compromised by artifacts related to patient motion or distortions associated with rectal gas. To address these challenges, we propose a novel approach that utilizes a Generative Adversarial Network to synthesize ADC maps from T2-weighted magnetic resonance images (T2W MRI).
METHODS: By leveraging contrastive learning, our model accurately maps axial T2W MRI to ADC maps within the cropped region of the prostate organ boundary, capturing subtle variations and intricate structural details by learning similar and dissimilar pairs from two imaging modalities. We trained our model on a comprehensive dataset of unpaired T2-weighted images and ADC maps from 506 patients. In evaluating our model, named AI-ADC, we compared it against three state-of-the-art methods: CycleGAN, CUT, and StyTr2.
RESULTS: Our model demonstrated a higher mean Structural Similarity Index (SSIM) of 0.863 on a test dataset of 3240 2D MRI slices from 195 patients, compared to values of 0.855, 0.797, and 0.824 for CycleGAN, CUT, and StyTr2, respectively. Similarly, our model achieved a significantly lower Fréchet Inception Distance (FID) value of 31.992, compared to values of 43.458, 179.983, and 58.784 for the other three models, indicating its superior performance in generating ADC maps. Furthermore, we evaluated our model on 147 patients from the publicly available ProstateX dataset, where it demonstrated a higher SSIM of 0.647 and a lower FID of 113.876 compared to the other three models.
CONCLUSIONS: These results highlight the efficacy of our proposed model in generating ADC maps from T2W MRI, showcasing its potential for enhancing clinical diagnostics and radiological workflows.
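For intuition about the FID values reported above: between two Gaussians the Fréchet distance has a closed form. The real FID fits multivariate Gaussians to Inception-network features of real and generated images; this univariate stdlib sketch is only a conceptual reduction showing why lower values mean more similar distributions.

```python
import math

# Fréchet distance between two univariate Gaussians N(mu1, var1) and
# N(mu2, var2): (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2).
def frechet_distance_1d(mu1, var1, mu2, var2):
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

identical = frechet_distance_1d(0.0, 1.0, 0.0, 1.0)  # 0.0: same distribution
shifted = frechet_distance_1d(2.0, 1.0, 0.0, 1.0)    # grows with mean offset
```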
PMID:39452554 | DOI:10.3390/jpm14101047
Statistical Analysis of nnU-Net Models for Lung Nodule Segmentation
J Pers Med. 2024 Sep 24;14(10):1016. doi: 10.3390/jpm14101016.
ABSTRACT
This paper aims to conduct a statistical analysis of different components of nnU-Net models to build an optimal pipeline for lung nodule segmentation in computed tomography (CT) images. This study focuses on semantic segmentation of lung nodules, using the UniToChest dataset. Our approach is based on the nnU-Net framework and is designed to configure a whole segmentation pipeline, thereby avoiding many complex design choices, such as data properties and architecture configuration. Although these framework results provide a good starting point, many configurations in this problem can be optimized. In this study, we tested two U-Net-based architectures, using different preprocessing techniques, and we modified the existing hyperparameters provided by nnU-Net. To study the impact of different settings on model segmentation accuracy, we conducted an analysis of variance (ANOVA). The factors studied included the dataset (grouped by nodule diameter), model, preprocessing, polynomial learning rate scheduler, and number of epochs. The ANOVA revealed significant differences across datasets, models, and preprocessing techniques.
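The one-way ANOVA F statistic behind such an analysis is the ratio of between-group to within-group variance. A minimal sketch (the group values are illustrative segmentation scores for two configurations, not data from the study):

```python
# One-way ANOVA F statistic from scratch: between-group mean square
# divided by within-group mean square.
def anova_f(groups):
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that the factor, e.g. the preprocessing choice, has a significant effect on segmentation accuracy.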
PMID:39452524 | DOI:10.3390/jpm14101016
A Specialized Pipeline for Efficient and Reliable 3D Semantic Model Reconstruction of Buildings from Indoor Point Clouds
J Imaging. 2024 Oct 19;10(10):261. doi: 10.3390/jimaging10100261.
ABSTRACT
Recent advances in laser scanning systems have enabled the acquisition of 3D point cloud representations of scenes, revolutionizing the fields of Architecture, Engineering, and Construction (AEC). This paper presents a novel pipeline for the automatic generation of 3D semantic models of multi-level buildings from indoor point clouds. The architectural components are extracted hierarchically. After segmenting the point clouds into potential building floors, a wall detection process is performed on each floor segment. Then, room, ground, and ceiling extraction is conducted using the 2D constellation of walls obtained by projecting them onto the ground plan. The identification of openings in the walls is performed using a deep learning-based classifier that separates doors and windows from non-consistent holes. Based on the geometric and semantic information from previously detected elements, the final model is generated in IFC format. The effectiveness and reliability of the proposed pipeline are demonstrated through extensive experiments and visual inspections. The results reveal high precision and recall values in the extraction of architectural elements, ensuring the fidelity of the generated models. In addition, the pipeline's efficiency and accuracy offer valuable contributions to future advancements in point cloud processing.
PMID:39452424 | DOI:10.3390/jimaging10100261
Investigating the Sim-to-Real Generalizability of Deep Learning Object Detection Models
J Imaging. 2024 Oct 18;10(10):259. doi: 10.3390/jimaging10100259.
ABSTRACT
State-of-the-art object detection models need large and diverse datasets for training. As these are hard to acquire for many practical applications, training images from simulation environments are gaining more and more attention. However, deep learning models trained on simulation images usually struggle to generalize to real-world images, as evidenced by a sharp performance drop, and the definitive causes of this drop have not yet been identified. While previous work mostly investigated the influence of the data as well as the use of domain adaptation, this work provides a novel perspective by investigating the influence of the object detection model itself. Against this background, first, a corresponding measure called sim-to-real generalizability is defined, comprising the capability of an object detection model to generalize from simulation training images to real-world evaluation images. Second, 12 different deep learning-based object detection models are trained and their sim-to-real generalizability is evaluated. The models are trained with a variation of hyperparameters, resulting in a total of 144 trained and evaluated versions. The results show a clear influence of the feature extractor and offer further insights and correlations. They open up future research on investigating influences on the sim-to-real generalizability of deep learning-based object detection models as well as on developing feature extractors with better sim-to-real generalizability.
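The abstract defines sim-to-real generalizability conceptually; one plausible operationalization (our assumption for illustration, not the authors' formula) scores it as the ratio of real-world to simulation detection performance, averaged over the trained versions of a model:

```python
# Hypothetical sim-to-real generalizability score: for each trained
# version, divide the detection score (e.g. mAP) measured on real-world
# images by the score on simulation images, then average. A value of 1.0
# means no sim-to-real performance drop; values near 0 mean a severe drop.
def sim_to_real_generalizability(real_scores, sim_scores):
    ratios = [r / s for r, s in zip(real_scores, sim_scores)]
    return sum(ratios) / len(ratios)
```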
PMID:39452422 | DOI:10.3390/jimaging10100259
CSA-Net: Channel and Spatial Attention-Based Network for Mammogram and Ultrasound Image Classification
J Imaging. 2024 Oct 16;10(10):256. doi: 10.3390/jimaging10100256.
ABSTRACT
Breast cancer persists as a critical global health concern, emphasizing the need for reliable diagnostic strategies to improve patient survival rates. To address this challenge, a computer-aided diagnostic methodology for breast cancer classification is proposed. An architecture that incorporates a pre-trained EfficientNet-B0 model along with channel and spatial attention mechanisms is employed, and the efficiency of leveraging attention mechanisms for breast cancer classification is investigated. The proposed model demonstrates commendable performance in classification tasks, showing significant improvements upon integrating attention mechanisms. Furthermore, the model is versatile across imaging modalities, as demonstrated by its robust performance in classifying breast lesions not only in mammograms but also in ultrasound images during cross-modality evaluation. It achieved an accuracy of 99.9% for binary classification on the mammogram dataset and 92.3% on the cross-modality multi-class dataset. The experimental results emphasize the superiority of our proposed method over current state-of-the-art approaches for breast cancer classification.
PMID:39452419 | DOI:10.3390/jimaging10100256
Artificial intelligence in forensic medicine and related sciences - selected issues
Arch Med Sadowej Kryminol. 2024;74(1):64-76. doi: 10.4467/16891716AMSIK.24.005.19650.
ABSTRACT
AIM: The aim of the work is to provide an overview of the potential application of artificial intelligence in forensic medicine and related sciences, and to identify concerns related to providing medico-legal opinions and legal liability in cases in which possible harm in terms of diagnosis and/or treatment is likely to occur when using an advanced system of computer-based information processing and analysis.
MATERIAL AND METHODS: The material for the study comprised scientific literature related to the issue of artificial intelligence in forensic medicine and related sciences. For this purpose, Google Scholar, PubMed and ScienceDirect databases were searched. To identify useful articles, such terms as "artificial intelligence," "deep learning," "machine learning," "forensic medicine," "legal medicine," "forensic pathology" and "medicine" were used. In some cases, articles were identified based on the semantic proximity of the introduced terms.
CONCLUSIONS: The dynamic development of computing power and the ability of artificial intelligence to analyze vast data volumes have made it possible to transfer artificial intelligence methods to forensic medicine and related sciences. Artificial intelligence has numerous applications in forensic medicine and related sciences and can be helpful in thanatology, forensic traumatology, post-mortem identification examinations, as well as post-mortem microscopic and toxicological diagnostics. From the legal and medico-legal perspective, artificial intelligence in medicine should be treated as an auxiliary tool, whereas the final diagnostic and therapeutic decisions, and the extent to which they are implemented, should remain the responsibility of humans.
PMID:39450596 | DOI:10.4467/16891716AMSIK.24.005.19650
Automated breast density assessment for full-field digital mammography and digital breast tomosynthesis
Cancer Prev Res (Phila). 2024 Oct 25. doi: 10.1158/1940-6207.CAPR-24-0338. Online ahead of print.
ABSTRACT
Mammographic density is a strong risk factor for breast cancer (BC) and is reported clinically as part of Breast Imaging Reporting and Data System (BI-RADS) results issued by radiologists. An automated density assessment is needed that can be applied to both full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), as both types of exams are acquired in standard clinical practice. We trained a deep learning model to automate the estimation of BI-RADS density in a prospective Washington University (WashU) clinic-based cohort of 9,714 women, who entered the cohort in 2013 and were followed through October 31, 2020; 27% were non-Hispanic Black women. The trained algorithm was assessed in an external validation cohort of 18,360 women screened at Emory beginning January 1, 2013 and followed through December 31, 2020, of whom 42% were non-Hispanic Black women. Our model-estimated BI-RADS density demonstrated substantial agreement with the density assessed by radiologists. In the external validation, agreement with radiologists was 81% for category B and 77% for category C on FFDM, and 83% and 74%, respectively, on DBT, an important distinction for separating women with dense breasts. We obtained a Cohen's κ of 0.72 (95% CI: 0.71, 0.73) for FFDM and 0.71 (95% CI: 0.69, 0.73) for DBT. We thus provide a consistent, fully automated BI-RADS density estimate for both FFDM and DBT using a deep learning model. The software can be easily implemented anywhere for clinical use and risk prediction.
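Cohen's κ, the agreement statistic reported here, corrects raw agreement between two raters for the agreement expected by chance. A stdlib sketch (the category labels are illustrative, not study data):

```python
from collections import Counter

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
# agreement), where chance agreement comes from each rater's marginal
# category frequencies.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Values around 0.7, as reported above, are conventionally read as substantial agreement.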
PMID:39450526 | DOI:10.1158/1940-6207.CAPR-24-0338
A Deep Learning Framework to Localize the Epileptogenic Zone from Dynamic Functional Connectivity Using a Combined Graph Convolutional and Transformer Network
Proc IEEE Int Symp Biomed Imaging. 2023 Apr;2023. doi: 10.1109/isbi53787.2023.10230831. Epub 2023 Sep 1.
ABSTRACT
Localizing the epileptogenic zone (EZ) is a critical step in the treatment of medically refractory epilepsy. Resting-state fMRI (rs-fMRI) offers a new window into this task by capturing dynamically evolving co-activation patterns, also known as connectivity, in the brain. In this work, we present the first automated framework that uses dynamic functional connectivity from rs-fMRI to localize the EZ across a heterogeneous epilepsy cohort. Our framework uses a graph convolutional network for feature extraction, followed by a transformer network, whose attention mechanism learns which time points of the rs-fMRI scan are important for EZ localization. We train our framework on augmented data derived from the Human Connectome Project and evaluate it on a clinical epilepsy dataset. Our results demonstrate the clear advantages of our convolutional + transformer combination and data augmentation procedure over ablated and comparison models.
PMID:39450418 | PMC:PMC11500832 | DOI:10.1109/isbi53787.2023.10230831
DeepO-GlcNAc: a web server for prediction of protein O-GlcNAcylation sites using deep learning combined with attention mechanism
Front Cell Dev Biol. 2024 Oct 10;12:1456728. doi: 10.3389/fcell.2024.1456728. eCollection 2024.
ABSTRACT
INTRODUCTION: Protein O-GlcNAcylation is a dynamic post-translational modification involved in major cellular processes and associated with many human diseases. Bioinformatic prediction of O-GlcNAc sites before experimental validation is a challenging task in O-GlcNAc research. Recent advancements in deep learning algorithms and the availability of O-GlcNAc proteomics data present an opportunity to improve O-GlcNAc site prediction.
OBJECTIVES: This study aims to develop a deep learning-based tool to improve O-GlcNAcylation site prediction.
METHODS: We construct an annotated, unbalanced O-GlcNAcylation dataset and propose a new deep learning framework, DeepO-GlcNAc, which combines Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs) with an attention mechanism.
RESULTS: The ablation study confirms that the additional model components in DeepO-GlcNAc, such as attention mechanisms and LSTM, contribute positively to improving prediction performance. Our model demonstrates strong robustness across five cross-species datasets, excluding humans. We also compare our model with three external predictors using an independent dataset. Our results demonstrate that DeepO-GlcNAc outperforms the external predictors, achieving an accuracy of 92%, an average precision of 72%, an MCC of 0.60, and an AUC of 92% in ROC analysis. Moreover, we have implemented DeepO-GlcNAc as a web server to facilitate further investigation and usage by the scientific community.
CONCLUSION: Our work demonstrates the feasibility of utilizing deep learning for O-GlcNAc site prediction and provides a novel tool for O-GlcNAc investigation.
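Among the metrics above, the Matthews correlation coefficient (MCC) is the one that stays informative on unbalanced data such as site vs. non-site residues. A stdlib sketch:

```python
import math

# Matthews correlation coefficient: a balanced summary of the binary
# confusion matrix, returning values in [-1, 1] (1 = perfect prediction,
# 0 = no better than chance, -1 = total disagreement).
def mcc(tp, fp, tn, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # conventional value when any marginal count is zero
    return (tp * tn - fp * fn) / denom
```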
PMID:39450274 | PMC:PMC11500328 | DOI:10.3389/fcell.2024.1456728
Enhancing prediction accuracy of foliar essential oil content, growth, and stem quality in Eucalyptus globulus using multi-trait deep learning models
Front Plant Sci. 2024 Oct 10;15:1451784. doi: 10.3389/fpls.2024.1451784. eCollection 2024.
ABSTRACT
Eucalyptus globulus Labill. is a recognized multipurpose tree, which stands out not only for the valuable qualities of its wood but also for the medicinal applications of the essential oil extracted from its leaves. In this study, we implemented an integrated strategy comprising genomic and phenomic approaches to predict foliar essential oil content, stem quality, and growth-related traits within a 9-year-old breeding population of E. globulus. The strategy involved evaluating uni- and multi-trait deep learning (DL) models by incorporating genomic data related to single nucleotide polymorphisms (SNPs) and haplotypes, as well as phenomic data from leaf near-infrared (NIR) spectroscopy. Our results showed that essential oil content (oil yield) ranged from 0.01 to 1.69% v/fw and had no significant correlation with any growth-related traits. This suggests that selection solely based on growth-related traits did not influence the essential oil content. Genomic heritability estimates ranged from 0.25 (diameter at breast height (DBH) and oil yield) to 0.71 (DBH and stem straightness (ST)), while pedigree-based heritability exhibited a broader range, from 0.05 to 0.88. Notably, oil yield was found to be moderately to highly heritable, with genomic values ranging from 0.25 to 0.60, alongside a pedigree-based estimate of 0.48. The DL prediction models consistently achieved higher prediction accuracy (PA) values with a multi-trait approach for most traits analyzed, including oil yield (0.699), tree height (0.772), DBH (0.745), slenderness coefficient (0.616), stem volume (0.757), and ST (0.764). The uni-trait approach achieved superior PA values solely for branching quality (0.861). NIR spectral absorbance was the best omics data for CNN or MLP models with a multi-trait approach.
These results highlight considerable genetic variation within the Eucalyptus progeny trial, particularly regarding oil production. Our results contribute significantly to understanding omics-assisted deep learning models as a breeding strategy to improve growth-related traits and optimize essential oil production in this species.
PMID:39450087 | PMC:PMC11499176 | DOI:10.3389/fpls.2024.1451784
FIDMT-GhostNet: a lightweight density estimation model for wheat ear counting
Front Plant Sci. 2024 Oct 10;15:1435042. doi: 10.3389/fpls.2024.1435042. eCollection 2024.
ABSTRACT
Wheat is one of the important food crops in the world, and the stability and growth of wheat production have a decisive impact on global food security and economic prosperity. Wheat ear counting is of great significance for agricultural management, yield prediction, and resource allocation. Research shows that deep learning-based wheat ear counting methods achieve remarkable results with high model accuracy. However, the complex background of wheat fields, dense wheat ears, small wheat ear targets, and variation in wheat ear size mean that accurate positioning and counting of wheat ears still face great challenges. To this end, an automatic positioning and counting method for wheat ears based on FIDMT-GhostNet (focal inverse distance transform maps - GhostNet) is proposed. Firstly, a lightweight wheat ear counting network using GhostNet as the feature extraction network is proposed, aiming to obtain multi-scale wheat ear features. Secondly, in view of the difficulty in counting caused by dense wheat ears, the point annotation-based network FIDMT (focal inverse distance transform maps) is introduced as a baseline network to improve counting accuracy. Furthermore, to address the limited feature information carried by small wheat ear targets, a dense upsampling convolution module is introduced to increase image resolution and extract more detailed information. Finally, to overcome background noise and wheat ear interference, a local maximum detection strategy is designed to automate wheat ear counting. To verify the effectiveness of the FIDMT-GhostNet model, the constructed wheat image datasets WEC, WEDD, and GWHD were used for training and testing. Experimental results show that the counting accuracy of the model reaches 0.9145 with 8.42M parameters, indicating that the proposed FIDMT-GhostNet has good performance.
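The local maximum detection step can be pictured with a minimal stdlib sketch (our illustration of the general idea, not the authors' implementation): each ear appears as a peak in the predicted density map, so pixels that exceed a noise threshold and strictly dominate their 8-neighbourhood are counted.

```python
# Count peaks in a 2D density map (list of lists of floats). The
# threshold suppresses low-level background noise; the strict ">"
# comparison keeps only genuine local maxima.
def count_peaks(density, threshold=0.5):
    h, w = len(density), len(density[0])
    count = 0
    for i in range(h):
        for j in range(w):
            v = density[i][j]
            if v < threshold:
                continue
            neighbours = [density[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di or dj) and 0 <= i + di < h and 0 <= j + dj < w]
            if all(v > nv for nv in neighbours):
                count += 1
    return count
```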
PMID:39450085 | PMC:PMC11499103 | DOI:10.3389/fpls.2024.1435042
Analysis and visualization of the effect of multiple sclerosis on biological brain age
Front Neurol. 2024 Oct 10;15:1423485. doi: 10.3389/fneur.2024.1423485. eCollection 2024.
ABSTRACT
INTRODUCTION: The rate of neurodegeneration in multiple sclerosis (MS) is an important biomarker for disease progression but can be challenging to quantify. The brain age gap, which quantifies the difference between a patient's chronological and their estimated biological brain age, might be a valuable biomarker of neurodegeneration in patients with MS. Thus, the aim of this study was to investigate the value of an image-based prediction of the brain age gap using a deep learning model and compare brain age gap values between healthy individuals and patients with MS.
METHODS: A multi-center dataset consisting of 5,294 T1-weighted magnetic resonance images of the brain from healthy individuals aged between 19 and 89 years was used to train a convolutional neural network (CNN) for biological brain age prediction. The trained model was then used to calculate the brain age gap in 195 patients with relapsing remitting MS (20-60 years). Additionally, saliency maps were generated for healthy subjects and patients with MS to identify brain regions that were deemed important for the brain age prediction task by the CNN.
RESULTS: Overall, the application of the CNN revealed accelerated brain aging with a larger brain age gap for patients with MS with a mean of 6.98 ± 7.18 years in comparison to healthy test set subjects (0.23 ± 4.64 years). The brain age gap for MS patients was weakly to moderately correlated with age at disease onset (ρ = -0.299, p < 0.0001), EDSS score (ρ = 0.206, p = 0.004), disease duration (ρ = 0.162, p = 0.024), lesion volume (ρ = 0.630, p < 0.0001), and brain parenchymal fraction (ρ = -0.718, p < 0.0001). The saliency maps indicated significant differences in the lateral ventricle (p < 0.0001), insula (p < 0.0001), third ventricle (p < 0.0001), and fourth ventricle (p = 0.0001) in the right hemisphere. In the left hemisphere, the inferior lateral ventricle (p < 0.0001) and the third ventricle (p < 0.0001) showed significant differences. Furthermore, the Dice similarity coefficient showed the highest overlap of salient regions between the MS patients and the oldest healthy subjects, indicating that neurodegeneration is accelerated in this patient cohort.
DISCUSSION: In conclusion, the results of this study show that the brain age gap is a valuable surrogate biomarker to measure disease progression in patients with multiple sclerosis.
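Two quantities from this study are easy to make concrete: the brain age gap (model-predicted minus chronological age) and Spearman's rank correlation, used above to relate the gap to clinical variables. A stdlib sketch (all numbers illustrative, not patient data):

```python
# Brain age gap: positive values indicate a brain that looks older than
# the patient's chronological age.
def brain_age_gap(predicted, chronological):
    return [p - c for p, c in zip(predicted, chronological)]

def _ranks(xs):
    # Average ranks (1-based), handling ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Spearman's rho: Pearson correlation computed on the ranks.
def spearman_rho(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```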
PMID:39450049 | PMC:PMC11499186 | DOI:10.3389/fneur.2024.1423485
Deep learning improves test-retest reproducibility of regional strain in echocardiography
Eur Heart J Imaging Methods Pract. 2024 Oct 23;2(4):qyae092. doi: 10.1093/ehjimp/qyae092. eCollection 2024 Oct.
ABSTRACT
AIMS: The clinical utility of regional strain measurements in echocardiography is challenged by suboptimal reproducibility. In this study, we aimed to evaluate the test-retest reproducibility of regional longitudinal strain (RLS) per coronary artery perfusion territory (RLSTerritory) and basal-to-apical level of the left ventricle (RLSLevel), measured by a novel fully automated deep learning (DL) method based on point tracking.
METHODS AND RESULTS: We measured strain in a dual-centre test-retest data set that included 40 controls and 40 patients with suspected non-ST elevation acute coronary syndrome. Two consecutive echocardiograms per subject were recorded by different operators. The reproducibility of RLSTerritory and RLSLevel measured by the DL method and by three experienced observers using semi-automatic software (2D Strain, EchoPAC, GE HealthCare) was evaluated as minimal detectable change (MDC). The DL method had MDC for RLSTerritory and RLSLevel ranging from 3.6 to 4.3%, corresponding to a 33-35% improved reproducibility compared with the inter- and intraobserver scenarios (MDC 5.5-6.4% and 4.9-5.4%). Furthermore, the DL method had a lower variance of test-retest differences for both RLSTerritory and RLSLevel compared with inter- and intraobserver scenarios (all P < 0.001). Bland-Altman analyses demonstrated superior reproducibility by the DL method for the whole range of strain values compared with the best observer scenarios. The feasibility of the DL method was 93% and measurement time was only 1 s per echocardiogram.
CONCLUSION: The novel DL method provided fully automated measurements of RLS, with improved test-retest reproducibility compared with semi-automatic measurements by experienced observers. RLS measured by the DL method has the potential to advance patient care through a more detailed, more efficient, and less user-dependent clinical assessment of myocardial function.
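The minimal detectable change (MDC) used above quantifies the smallest strain change that exceeds test-retest measurement noise. A stdlib sketch using one common definition (MDC95 = 1.96 · √2 · SEM with SEM = SD of test-retest differences / √2); the study's exact computation may differ, and the values below are illustrative:

```python
import math

# MDC at 95% confidence from paired test-retest measurements.
def mdc95(test, retest):
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))  # sample SD
    sem = sd / math.sqrt(2)
    return 1.96 * math.sqrt(2) * sem
```

A smaller MDC, as reported for the DL method, means smaller true changes in regional strain can be distinguished from measurement error.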
PMID:39449961 | PMC:PMC11498295 | DOI:10.1093/ehjimp/qyae092
Role of Radiology in the Diagnosis and Treatment of Breast Cancer in Women: A Comprehensive Review
Cureus. 2024 Sep 24;16(9):e70097. doi: 10.7759/cureus.70097. eCollection 2024 Sep.
ABSTRACT
Breast cancer remains a leading cause of morbidity and mortality among women worldwide. Early detection and precise diagnosis are critical for effective treatment and improved patient outcomes. This review explores the evolving role of radiology in the diagnosis and treatment of breast cancer, highlighting advancements in imaging technologies and the integration of artificial intelligence (AI). Traditional imaging modalities such as mammography, ultrasound, and magnetic resonance imaging have been the cornerstone of breast cancer diagnostics, with each modality offering unique advantages. The advent of radiomics, which involves extracting quantitative data from medical images, has further augmented the diagnostic capabilities of these modalities. AI, particularly deep learning algorithms, has shown potential in improving diagnostic accuracy and reducing observer variability across imaging modalities. AI-driven tools are increasingly being integrated into clinical workflows to assist in image interpretation, lesion classification, and treatment planning. Additionally, radiology plays a crucial role in guiding treatment decisions, particularly in the context of image-guided radiotherapy and monitoring response to neoadjuvant chemotherapy. The review also discusses the emerging field of theranostics, where diagnostic imaging is combined with therapeutic interventions to provide personalized cancer care. Despite these advancements, challenges such as the need for large annotated datasets and the integration of AI into clinical practice remain. The review concludes that while the role of radiology in breast cancer management is rapidly evolving, further research is required to fully realize the potential of these technologies in improving patient outcomes.
PMID:39449897 | PMC:PMC11500669 | DOI:10.7759/cureus.70097
Deep learning assisted cancer disease prediction from gene expression data using WT-GAN
BMC Med Inform Decis Mak. 2024 Oct 24;24(1):311. doi: 10.1186/s12911-024-02712-y.
ABSTRACT
Several diverse fields, including the healthcare system and the drug development sector, have benefited immensely from the adoption of deep learning (DL), a subset of artificial intelligence (AI) and machine learning (ML). Cancer accounts for a significant share of the illnesses that cause early human mortality across the globe, and this burden is likely to rise in the coming years, even when other non-communicable illnesses are set aside. As a result, cancer patients would benefit greatly from precise and timely diagnosis and prediction. DL has become a common technique in healthcare owing to the abundance of computational power. Gene expression datasets are frequently used in major DL-based applications for illness detection, notably in cancer therapy. The quantity of medical data, however, is often insufficient to meet deep learning requirements: microarray gene expression datasets used for training suffer from extreme dimensionality, limited sample volumes, and sparsely available information. Data augmentation is therefore commonly used to expand the training sample size for gene data. In this proposed work, a Wasserstein Tabular Generative Adversarial Network (WT-GAN) model generates synthetic data for augmentation. A correlation-based feature selection technique selects the most relevant characteristics against threshold values, and a deep feedforward neural network (FNN) and ML algorithms are trained to classify the gene expression samples. With WT-GAN augmentation, the data give better classification results (> 97%) for cancer diagnosis.
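The correlation-based feature selection step described above can be sketched in a few lines of NumPy: keep only the genes whose absolute Pearson correlation with the class label clears a threshold. This is a minimal illustration, not the paper's implementation; the function name and threshold value are hypothetical.

```python
import numpy as np

def select_features_by_correlation(X, y, threshold=0.3):
    """Keep only the genes (columns of X) whose absolute Pearson
    correlation with the class label y meets the threshold."""
    Xc = X - X.mean(axis=0)          # center each gene
    yc = y - y.mean()                # center the label
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    corr = (Xc * yc[:, None]).sum(axis=0) / denom
    mask = np.abs(corr) >= threshold
    return X[:, mask], np.where(mask)[0]

# Toy expression matrix: 4 samples x 3 genes, binary class labels.
X = np.array([[1., 0.,  5.],
              [2., 1.,  3.],
              [3., 0.,  1.],
              [4., 1., -1.]])
y = np.array([0., 0., 1., 1.])
X_sel, kept = select_features_by_correlation(X, y, threshold=0.3)
# Genes 0 and 2 track the label; gene 1 is uncorrelated and is dropped.
```

In the actual pipeline this reduced matrix, expanded with WT-GAN synthetic samples, would feed the deep FNN and ML classifiers.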
PMID:39449042 | DOI:10.1186/s12911-024-02712-y
Deep learning-based segmentation of abdominal aortic aneurysms and intraluminal thrombus in 3D ultrasound images
Med Biol Eng Comput. 2024 Oct 25. doi: 10.1007/s11517-024-03216-7. Online ahead of print.
ABSTRACT
Ultrasound (US)-based patient-specific rupture risk analysis of abdominal aortic aneurysms (AAAs) has shown promising results. Input for these models is the patient-specific geometry of the AAA. However, segmentation of the intraluminal thrombus (ILT) remains challenging in US images due to the low ILT-blood contrast. This study aims to improve AAA and ILT segmentation in time-resolved three-dimensional (3D + t) US images using a deep learning approach. In this study, a "no new net" (nnU-Net) model was trained on 3D + t US data using either US-based or (co-registered) computed tomography (CT)-based annotations. The optimal training strategy for this low-contrast data was determined for a limited dataset. The merit of augmentation was investigated, as well as the inclusion of low-contrast areas. Segmentation results were validated with CT-based geometries as the ground truth. The model trained on CT-based masks showed the best performance in terms of Dice index, Hausdorff distance, and diameter differences, covering a larger part of the AAA. With higher accuracy and less manual input, the model outperforms conventional methods, with a mean Hausdorff distance of 4.4 mm for the vessel and 7.8 mm for the lumen. However, visibility of the lumen-ILT interface remains the limiting factor, necessitating improvements in image acquisition to ensure broader patient inclusion and enable rupture risk assessment of AAAs in the future.
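The Dice index used above to compare predicted and CT-based ground-truth geometries can be computed directly from two binary voxel masks. A minimal sketch (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def dice_index(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Two overlapping slabs in a 4x4x4 volume, half of each mask shared.
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[0:2] = True   # 32 voxels
truth[1:3] = True  # 32 voxels, 16 of them shared
score = dice_index(pred, truth)  # → 0.5
```

The Hausdorff distances reported alongside it measure boundary disagreement in millimetres rather than volume overlap, so the two metrics are complementary.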
PMID:39448511 | DOI:10.1007/s11517-024-03216-7
Tooth numbering with polygonal segmentation on periapical radiographs: an artificial intelligence study
Clin Oral Investig. 2024 Oct 25;28(11):610. doi: 10.1007/s00784-024-05999-3.
ABSTRACT
OBJECTIVES: Accurate identification and numbering of teeth on radiographs are essential for any clinician. The aim of the present study was to validate the hypothesis that YOLOv5, an artificial intelligence model, can be trained to detect and number teeth in periapical radiographs.
MATERIALS AND METHODS: Six thousand four hundred and forty-six anonymized periapical radiographs without motion-related artifacts were randomly selected from the database. All periapical radiographs in which all boundaries of every tooth could be distinguished were included in the study. The radiographic images were randomly divided into three groups: 80% training, 10% validation, and 10% testing. A confusion matrix was used to evaluate model performance.
RESULTS: During the test phase, 2578 labelings were performed on 644 periapical radiographs. There were 2434 true positives (94.4%), 115 false positives (4.4%), and 29 false negatives (1.2%). The recall, precision, and F1 scores were 0.9882, 0.9548, and 0.9712, respectively. Moreover, the model yielded an area under the receiver operating characteristic (ROC) curve (AUC) of 0.603.
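The recall, precision, and F1 scores reported above follow directly from the confusion-matrix counts; a quick check (the function name is illustrative):

```python
def detection_metrics(tp, fp, fn):
    """Recall, precision, and F1 score from confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Counts from the test phase: 2434 TP, 115 FP, 29 FN.
recall, precision, f1 = detection_metrics(2434, 115, 29)
# Agrees with the reported 0.9882, 0.9548, and 0.9712 to rounding.
```

Note that true negatives are undefined for a detection task, which is why specificity is not among the reported metrics.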
CONCLUSIONS: This study showed that YOLOv5 performs nearly perfectly at numbering teeth on periapical radiographs. Although high success rates were achieved, it should not be forgotten that, at present, artificial intelligence can only guide dentists toward accurate and rapid diagnosis.
CLINICAL RELEVANCE: By using YOLOv5, dentists may shorten radiographic examination time, and inexperienced dentists may reduce their error rates. Additionally, YOLOv5 can be used in the education of dental students.
PMID:39448462 | DOI:10.1007/s00784-024-05999-3