Deep learning

GAN Inversion for Data Augmentation to Improve Colonoscopy Lesion Classification

Tue, 2024-05-07 06:00

IEEE J Biomed Health Inform. 2024 May 7;PP. doi: 10.1109/JBHI.2024.3397611. Online ahead of print.

ABSTRACT

A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label to a semantically rich and disentangled latent space and manipulate latent representations to produce new synthetic images. These synthetic images maintain the same label as the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI). We also generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve the downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showcased an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
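
For readers unfamiliar with inversion-based augmentation, the core recipe is short: encode two same-label images into latent codes, decode convex combinations of them, and treat each output as a new labeled sample. A minimal PyTorch sketch with toy stand-in networks (the paper's actual pre-trained encoder and StyleGAN-like generator are assumed, not reproduced):

```python
import torch
import torch.nn as nn

# Toy stand-ins (hypothetical shapes and architectures): an encoder that
# inverts an image into a latent code and a generator that decodes a
# latent code back into an image.
LATENT_DIM = 512
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, LATENT_DIM))
generator = nn.Sequential(nn.Linear(LATENT_DIM, 3 * 64 * 64),
                          nn.Unflatten(1, (3, 64, 64)))

def augment_pair(img_a, img_b, alphas=(0.25, 0.5, 0.75)):
    """Invert two same-label images and interpolate their latent codes;
    each interpolated code decodes to a new synthetic image that keeps
    the shared label."""
    with torch.no_grad():
        w_a, w_b = encoder(img_a), encoder(img_b)
        return [generator((1 - a) * w_a + a * w_b) for a in alphas]

# Usage: two (1, 3, 64, 64) polyp images sharing the same label.
synthetic = augment_pair(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(len(synthetic), synthetic[0].shape)  # 3 torch.Size([1, 3, 64, 64])
```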

PMID:38713568 | DOI:10.1109/JBHI.2024.3397611

Categories: Literature Watch

MPCNN: A Novel Matrix Profile Approach for CNN-based Single Lead Sleep Apnea Classification Problem

Tue, 2024-05-07 06:00

IEEE J Biomed Health Inform. 2024 May 7;PP. doi: 10.1109/JBHI.2024.3397653. Online ahead of print.

ABSTRACT

Sleep apnea (SA) is a significant respiratory condition that poses a major global health challenge. Deep Learning (DL) has emerged as an efficient tool for the classification problem in electrocardiogram (ECG)-based SA diagnoses. Despite these advancements, the most common conventional feature extractions derived from ECG signals in DL, such as R-peaks and RR intervals, may fail to capture crucial information encompassed within the complete ECG segments. In this study, we propose an innovative approach to address this diagnostic gap by delving deeper into the comprehensive segments of the ECG signal. The proposed methodology draws inspiration from Matrix Profile algorithms, which generate a Euclidean distance profile from fixed-length signal subsequences. From this, we derived the Min Distance Profile (MinDP), Max Distance Profile (MaxDP), and Mean Distance Profile (MeanDP) based on the minimum, maximum, and mean of the profile distances, respectively. To validate the effectiveness of our approach, we use a modified LeNet-5 architecture as the primary CNN model, along with two existing lightweight models, BAFNet and SE-MSCNN. Our experimental results on the PhysioNet Apnea-ECG dataset (70 overnight recordings) and the UCDDB dataset (25 overnight recordings) revealed that our new feature extraction method achieved per-segment accuracies of up to 92.11% and 81.25%, respectively. Moreover, using the PhysioNet data, we achieved a per-recording accuracy of 100% and yielded the highest correlation of 0.989 compared to state-of-the-art methods. By introducing a new feature extraction method based on distance relationships, we enhanced the performance of certain lightweight models in DL, showing potential for home sleep apnea test (HSAT) and SA detection in IoT devices. The source code for this work is publicly available on GitHub: https://github.com/vinuni-vishc/MPCNN-Sleep-Apnea.
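
As an illustration of the distance-profile features described above, here is a brute-force sketch: slide a fixed-length window over a segment, compute the pairwise Euclidean distances between subsequences, and reduce each row to its min, max, and mean. The window length, normalization, and any exclusion zone are assumptions, not the paper's exact settings:

```python
import numpy as np

def distance_profiles(segment, m=32):
    """Matrix-profile-style feature extraction: for each length-m
    subsequence of an ECG segment, compute Euclidean distances to every
    other subsequence, then reduce to MinDP, MaxDP, and MeanDP."""
    subs = np.lib.stride_tricks.sliding_window_view(segment, m)   # (k, m)
    d = np.linalg.norm(subs[:, None, :] - subs[None, :, :], axis=-1)
    np.fill_diagonal(d, np.nan)  # exclude the trivial self-match
    return np.nanmin(d, 1), np.nanmax(d, 1), np.nanmean(d, 1)

# Usage on a toy synthetic segment.
ecg = np.sin(np.linspace(0, 40 * np.pi, 600)) + 0.05 * np.random.randn(600)
min_dp, max_dp, mean_dp = distance_profiles(ecg)
print(min_dp.shape, max_dp.shape, mean_dp.shape)  # (569,) each
```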

PMID:38713565 | DOI:10.1109/JBHI.2024.3397653

Categories: Literature Watch

Learning with Style: Continual Semantic Segmentation Across Tasks and Domains

Tue, 2024-05-07 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 May 7;PP. doi: 10.1109/TPAMI.2024.3397461. Online ahead of print.

ABSTRACT

Deep learning models dealing with image understanding in real-world settings must be able to adapt to a wide variety of tasks across different domains. Domain adaptation and class incremental learning deal with domain and task variability separately, whereas their unified solution is still an open problem. We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces. We start by formally introducing continual learning under task and domain shift. Then, we address the proposed setup by using style transfer techniques to extend knowledge across domains when learning incremental tasks and a robust distillation framework to effectively recollect task knowledge under incremental domain shift. The devised framework (LwS, Learning with Style) is able to generalize incrementally acquired task knowledge across all the domains encountered, proving to be robust against catastrophic forgetting. Extensive experimental evaluation on multiple autonomous driving datasets shows how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift. The code is available at https://lttm.dei.unipd.it/paper_data/LwS.
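
As a generic illustration of the distillation half of such a framework (LwS couples it with style transfer and its own robust distillation design; this is not the authors' exact loss), a sketch of output-level distillation for continual segmentation:

```python
import torch
import torch.nn.functional as F

def continual_seg_loss(student_logits, teacher_logits, labels,
                       old_classes, T=2.0, alpha=0.5):
    """Cross-entropy on the current task plus distillation from the
    frozen previous-step model over the old classes. Logits are
    (B, C, H, W); labels are (B, H, W)."""
    ce = F.cross_entropy(student_logits, labels)
    log_p_s = F.log_softmax(student_logits[:, :old_classes] / T, dim=1)
    log_p_t = F.log_softmax(teacher_logits[:, :old_classes] / T, dim=1)
    kd = F.kl_div(log_p_s, log_p_t, log_target=True,
                  reduction="batchmean") * (T * T)
    return ce + alpha * kd

# Usage: 10 total classes, 6 learned in earlier steps.
s, t = torch.randn(2, 10, 8, 8), torch.randn(2, 10, 8, 8)
y = torch.randint(0, 10, (2, 8, 8))
print(continual_seg_loss(s, t, y, old_classes=6))
```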

PMID:38713563 | DOI:10.1109/TPAMI.2024.3397461

Categories: Literature Watch

T2-weighted imaging-based deep-learning method for noninvasive prostate cancer detection and Gleason grade prediction: a multicenter study

Tue, 2024-05-07 06:00

Insights Imaging. 2024 May 7;15(1):111. doi: 10.1186/s13244-024-01682-z.

ABSTRACT

OBJECTIVES: To noninvasively detect prostate cancer and predict the Gleason grade using single-modality T2-weighted imaging with a deep-learning approach.

METHODS: Patients with prostate cancer, confirmed by histopathology, who underwent magnetic resonance imaging examinations at our hospital during September 2015-June 2022 were retrospectively included in an internal dataset. An external dataset from another medical center and a public challenge dataset were used for external validation. A deep-learning approach was designed for prostate cancer detection and Gleason grade prediction. The area under the curve (AUC) was calculated to compare the model performance.

RESULTS: For prostate cancer detection, the internal datasets comprised data from 195 healthy individuals (age: 57.27 ± 14.45 years) and 302 patients (age: 72.20 ± 8.34 years) diagnosed with prostate cancer. The AUC of our model for prostate cancer detection in the validation set (n = 96, 19.7%) was 0.918. For Gleason grade prediction, datasets comprising data from 283 of 302 patients with prostate cancer were used, with 227 (age: 72.06 ± 7.98 years) and 56 (age: 72.78 ± 9.49 years) patients being used for training and testing, respectively. The external and public challenge datasets comprised data from 48 (age: 72.19 ± 7.81 years) and 91 patients (unavailable information on age), respectively. The AUC of our model for Gleason grade prediction in the training set (n = 227) was 0.902, whereas those of the validation (n = 56), external validation (n = 48), and public challenge validation sets (n = 91) were 0.854, 0.776, and 0.838, respectively.

CONCLUSION: Through multicenter dataset validation, our proposed deep-learning method could detect prostate cancer and predict the Gleason grade better than human experts.

CRITICAL RELEVANCE STATEMENT: Precise prostate cancer detection and Gleason grade prediction have great significance for clinical treatment and decision making.

KEY POINTS: For radiologists, the prostate is easier to annotate than prostate cancer lesions. Our deep-learning method detected prostate cancer and predicted the Gleason grade, outperforming human experts. Noninvasive Gleason grade prediction can reduce the number of unnecessary biopsies.

PMID:38713377 | DOI:10.1186/s13244-024-01682-z

Categories: Literature Watch

A novel artificial intelligence-based endoscopic ultrasonography diagnostic system for diagnosing the invasion depth of early gastric cancer

Tue, 2024-05-07 06:00

J Gastroenterol. 2024 May 7. doi: 10.1007/s00535-024-02102-1. Online ahead of print.

ABSTRACT

BACKGROUND: We developed an artificial intelligence (AI)-based endoscopic ultrasonography (EUS) system for diagnosing the invasion depth of early gastric cancer (EGC), and we evaluated the performance of this system.

METHODS: A total of 8280 EUS images from 559 EGC cases were collected from 11 institutions. Within this dataset, 3451 images (285 cases) from one institution were used as a development dataset. The AI model consisted of segmentation and classification steps, followed by the CycleGAN method to bridge differences in EUS images captured by different equipment. AI model performance was evaluated using an internal validation dataset collected from the same institution as the development dataset (1726 images, 135 cases). External validation was conducted using images collected from the other 10 institutions (3103 images, 139 cases).
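
A hedged sketch of the two-step design described above, with placeholder networks standing in for the authors' segmentation and classification models (shapes and class definitions are illustrative):

```python
import torch
import torch.nn as nn

# Placeholder networks; the authors' architectures, image sizes, and
# class definitions are not specified here.
segmenter = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 1), nn.Sigmoid())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))

def predict_depth(eus_image):  # (1, 1, 64, 64) grayscale EUS frame
    mask = (segmenter(eus_image) > 0.5).float()  # step 1: segment the lesion
    return classifier(eus_image * mask)          # step 2: classify invasion depth

print(predict_depth(torch.rand(1, 1, 64, 64)).shape)  # two depth-class logits
```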

RESULTS: The area under the curve (AUC) of the AI model in the internal validation dataset was 0.870 (95% CI: 0.796-0.944). Regarding diagnostic performance, the accuracy/sensitivity/specificity values of the AI model, experts (n = 6), and nonexperts (n = 8) were 82.2/63.4/90.4%, 81.9/66.3/88.7%, and 68.3/60.9/71.5%, respectively. The AUC of the AI model in the external validation dataset was 0.815 (95% CI: 0.743-0.886). The accuracy/sensitivity/specificity values of the AI model (74.1/73.1/75.0%) and the real-time diagnoses of experts (75.5/79.1/72.2%) in the external validation dataset were comparable.

CONCLUSIONS: Our AI model demonstrated a diagnostic performance equivalent to that of experts.

PMID:38713263 | DOI:10.1007/s00535-024-02102-1

Categories: Literature Watch

Evaluation of a Cascaded Deep Learning-based Algorithm for Prostate Lesion Detection at Biparametric MRI

Tue, 2024-05-07 06:00

Radiology. 2024 May;311(2):e230750. doi: 10.1148/radiol.230750.

ABSTRACT

BACKGROUND: Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required.

PURPOSE: To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results.

MATERIALS AND METHODS: This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC).

RESULTS: A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29.

CONCLUSION: The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416.
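
For reference, the lesion-level metrics reported here reduce to simple ratios, and the Dice similarity coefficient (DSC) is the usual mask-overlap measure; a short sketch recomputing the abstract's sensitivity and PPV from its own counts:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Lesion-level sensitivity and PPV recomputed from the counts above.
sensitivity = 569 / 1029   # detected true lesions / all true lesions
ppv = 535 / 934            # true detections / all detections
print(round(sensitivity, 2), round(ppv, 2))  # 0.55 0.57
```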

PMID:38713024 | DOI:10.1148/radiol.230750

Categories: Literature Watch

Predicting response to neoadjuvant chemotherapy for colorectal liver metastasis using deep learning on prechemotherapy cross-sectional imaging

Tue, 2024-05-07 06:00

J Surg Oncol. 2024 May 7. doi: 10.1002/jso.27673. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVES: Deep learning models (DLMs) are applied across domains of health sciences to generate meaningful predictions. DLMs make use of neural networks to generate predictions from discrete data inputs. This study employs a DLM on prechemotherapy cross-sectional imaging to predict patients' response to neoadjuvant chemotherapy.

METHODS: Adult patients with colorectal liver metastasis who underwent surgery after neoadjuvant chemotherapy were included. A DLM was trained on computed tomography images using attention-based multiple-instance learning. A logistic regression model incorporating clinical parameters of the Fong clinical risk score was used for comparison. Both model performances were benchmarked against the Response Evaluation Criteria in Solid Tumors criteria. A receiver operating characteristic curve was created, and the resulting area under the curve (AUC) was determined.
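
As an illustration of attention-based multiple-instance learning, the pooling step can be written in a few lines: per-image features from one patient form a bag, attention weights are learned over the instances, and the weighted sum feeds a patient-level classifier. A minimal sketch with illustrative feature dimensions (not the study's architecture):

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL head: per-slice CT features are weighted and
    pooled into one patient-level prediction."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.clf = nn.Linear(feat_dim, 1)

    def forward(self, instance_feats):  # (num_instances, feat_dim)
        w = torch.softmax(self.attn(instance_feats), dim=0)  # attention weights
        bag = (w * instance_feats).sum(dim=0)                # weighted pooling
        return torch.sigmoid(self.clf(bag))                  # P(responder)

# Usage: 33 image-level feature vectors from one patient.
print(AttentionMIL()(torch.randn(33, 512)))
```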

RESULTS: Ninety-five patients were included, with 33,619 images available for study inclusion. Ninety-five percent of patients underwent 5-fluorouracil-based chemotherapy with oxaliplatin and/or irinotecan. Sixty percent of the patients were categorized as chemotherapy responders (30% reduction in tumor diameter). The DLM had an AUC of 0.77. The AUC for the clinical model was 0.41.

CONCLUSIONS: Image-based DLM for prediction of response to neoadjuvant chemotherapy in patients with colorectal cancer liver metastases was superior to a clinical-based model. These results demonstrate potential to identify nonresponders to chemotherapy and guide select patients toward earlier curative resection.

PMID:38712939 | DOI:10.1002/jso.27673

Categories: Literature Watch

STF-Net: sparsification transformer coding guided network for subcortical brain structure segmentation

Tue, 2024-05-07 06:00

Biomed Tech (Berl). 2024 May 8. doi: 10.1515/bmt-2023-0121. Online ahead of print.

ABSTRACT

Subcortical brain structure segmentation plays an important role in neuroimaging diagnosis and has become the basis of computer-aided diagnosis. Due to the blurred boundaries and complex shapes of subcortical brain structures, labeling these structures by hand is a time-consuming and subjective task, greatly limiting their potential for clinical applications. Thus, this paper proposes the sparsification transformer (STF) module for accurate brain structure segmentation. The self-attention mechanism is used to establish global dependencies to efficiently extract the global information of the feature map with low computational complexity. A shallow network is also used to recover low-level detail through the localization of convolutional operations, promoting the representation capability of the network. In addition, a hybrid residual dilated convolution (HRDC) module is introduced at the bottom layer of the network to extend the receptive field and extract multi-scale contextual information. Meanwhile, the octave convolution edge feature extraction (OCT) module is applied at the skip connections of the network to pay more attention to the edge features of brain structures. The proposed network is trained with a hybrid loss function. Experimental evaluation on two public datasets, IBSR and MALC, shows outstanding performance in terms of objective and subjective quality.
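
A sketch of what an HRDC-style block can look like, with parallel dilated branches fused residually; the paper's exact design may differ:

```python
import torch
import torch.nn as nn

class HybridResidualDilatedConv(nn.Module):
    """Parallel convolutions with increasing dilation rates enlarge the
    receptive field and capture multi-scale context; a residual
    connection preserves the input features."""
    def __init__(self, ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations)
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(multi)  # residual fusion of multi-scale context

print(HybridResidualDilatedConv(16)(torch.randn(1, 16, 32, 32)).shape)
```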

PMID:38712825 | DOI:10.1515/bmt-2023-0121

Categories: Literature Watch

Deep learning-based whole-body PSMA PET/CT attenuation correction utilizing Pix-2-Pix GAN

Tue, 2024-05-07 06:00

Oncotarget. 2024 May 7;15:288-300. doi: 10.18632/oncotarget.28583.

ABSTRACT

PURPOSE: The number of sequential PET/CT studies oncology patients can undergo during their treatment follow-up course is limited by radiation dosage. We propose an artificial intelligence (AI) tool to produce attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images to reduce the need for low-dose CT scans.

METHODS: A deep learning algorithm based on the 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET-CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: Standard Uptake Value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed in regions of interest prospectively delineated by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling.
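
The scan-level metrics named above are standard; a short sketch of how they can be computed (normalization conventions here are assumptions, using scikit-image for SSIM):

```python
import numpy as np
from skimage.metrics import structural_similarity

def scan_metrics(pred, ref):
    """NMSE, MAE, PSNR, and SSIM between a generated and a reference image."""
    data_range = float(ref.max() - ref.min())
    mse = np.mean((pred - ref) ** 2)
    nmse = mse / np.mean(ref ** 2)               # normalized MSE
    mae = np.mean(np.abs(pred - ref))
    psnr = 10 * np.log10(data_range ** 2 / mse)  # peak signal-to-noise ratio
    ssim = structural_similarity(ref, pred, data_range=data_range)
    return nmse, mae, psnr, ssim

ref = np.random.rand(128, 128)
pred = ref + 0.01 * np.random.randn(128, 128)
print(scan_metrics(pred, ref))
```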

RESULTS: Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. The ICCs for SUVmax and SUVmean were 0.88 and 0.89, indicating a high correlation between original and AI-generated quantitative imaging markers. Lesion location, density (Hounsfield units), and lesion uptake were all shown to impact the relative error in generated SUV metrics (all p < 0.05).

CONCLUSION: The Pix-2-Pix GAN model for generating AC-PET demonstrates SUV metrics that correlate highly with those of the original images. AI-generated PET images show clinical potential for reducing the need for CT scans for attenuation correction while preserving quantitative markers and image quality.

PMID:38712741 | DOI:10.18632/oncotarget.28583

Categories: Literature Watch

Factors influencing the development of artificial intelligence in orthodontics

Tue, 2024-05-07 06:00

Orthod Craniofac Res. 2024 May 7. doi: 10.1111/ocr.12806. Online ahead of print.

ABSTRACT

OBJECTIVES: Since developing AI procedures demands significant computing resources and time, the implementation of a careful experimental design is essential. The purpose of this study was to investigate factors influencing the development of AI in orthodontics.

MATERIALS AND METHODS: A total of 162 AI models were developed, with various combinations of sample sizes (170, 340, 679), input variables (40, 80, 160), output variables (38, 76, 154), training sessions (100, 500, 1000), and computer specifications (new vs. old). The TabNet deep-learning algorithm was used to develop these AI models, and leave-one-out cross-validation was applied in training. The goodness-of-fit of the regression models was compared using the adjusted coefficient of determination values, and the best-fit model was selected accordingly. Multiple linear regression analyses were employed to investigate the relationship between the influencing factors.

RESULTS: Increasing the number of training sessions enhanced the effectiveness of the AI models. The best-fit regression model for predicting the computational time of AI, which included logarithmic transformation of time, sample size, and training session variables, demonstrated an adjusted coefficient of determination of 0.99.
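
A hedged sketch of that model form, fitting log-transformed time against log-transformed sample size and training sessions on synthetic placeholder timings (not the study's data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic placeholder timings just to show the model form:
# log(time) regressed on log(sample size) and log(sessions).
rng = np.random.default_rng(0)
samples = rng.choice([170, 340, 679], size=30)
sessions = rng.choice([100, 500, 1000], size=30)
time_sec = 0.05 * samples * sessions * rng.lognormal(0, 0.05, size=30)

X = np.log(np.column_stack([samples, sessions]))
model = LinearRegression().fit(X, np.log(time_sec))
print(model.score(X, np.log(time_sec)))              # R^2 of the fit
print(np.exp(model.predict(np.log([[679, 1000]]))))  # predicted seconds
```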

CONCLUSION: The study results show that estimating the time required for AI development may be possible using logarithmic transformations of time, sample size, and training session variables, followed by applying coefficients estimated through several pilot studies with reduced sample sizes and reduced training sessions.

PMID:38712670 | DOI:10.1111/ocr.12806

Categories: Literature Watch

Application of MR Images in Radiotherapy Planning for Brain Tumor Based on Deep Learning

Tue, 2024-05-07 06:00

Int J Neurosci. 2024 May 7:1-11. doi: 10.1080/00207454.2024.2352784. Online ahead of print.

ABSTRACT

PURPOSE: To explore the function and dose calculation accuracy of MRI images in radiotherapy planning using deep learning methods.

METHODS: A total of 131 brain tumor patients who underwent radiotherapy and had previous MR and CT images were recruited for this study. A new MRI series from the aligned MR was first registered to the CT images using MIM software and then resampled. A deep learning method (U-NET) was used to establish an MRI-to-CT conversion model, with images from 105 patients used as the training set and images from 26 patients used as the tuning set. Data from an additional 8 patients were collected as the test set, and the accuracy of the model was evaluated from a dosimetric standpoint.

RESULTS: Comparing the synthetic CT images with the original CT images, the differences in the dosimetric parameters D98, D95, D2, and Dmean of the PTV in the 8 patients were less than 0.5%. The gamma pass rates for the PTV were 93.96% ± 6.75% (1%/1 mm), 99.87% ± 0.30% (2%/2 mm), and 100.00% ± 0.00% (3%/3 mm); for the whole body volume, they were 99.14% ± 0.80% (1%/1 mm), 99.92% ± 0.08% (2%/2 mm), and 99.99% ± 0.01% (3%/3 mm).
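
For readers unfamiliar with the dosimetric notation, Dx is the minimum dose received by the best-covered x% of the PTV; a simplified sketch computing D98, D95, D2, and Dmean from a flat voxel-dose array (clinical systems interpolate DVHs, so details differ):

```python
import numpy as np

def ptv_dose_metrics(dose_voxels):
    """Dx = minimum dose received by the best-covered x% of the PTV,
    computed from a flat voxel-dose array."""
    d = np.sort(dose_voxels)[::-1]  # descending dose
    dx = lambda x: d[int(np.ceil(x / 100 * d.size)) - 1]
    return {"D98": dx(98), "D95": dx(95), "D2": dx(2), "Dmean": d.mean()}

doses = np.random.default_rng(1).normal(60.0, 1.5, 10_000)  # toy PTV doses (Gy)
print(ptv_dose_metrics(doses))
```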

CONCLUSION: MR images can be used both for delineation and treatment efficacy evaluation and for dose calculation. Converting MR images to CT images with deep learning is a viable method that can be further used for dose calculation.

PMID:38712669 | DOI:10.1080/00207454.2024.2352784

Categories: Literature Watch

Comparative assessment of established and deep learning-based segmentation methods for hippocampal volume estimation in brain magnetic resonance imaging analysis

Tue, 2024-05-07 06:00

NMR Biomed. 2024 May 7:e5169. doi: 10.1002/nbm.5169. Online ahead of print.

ABSTRACT

In this study, our objective was to assess the performance of two deep learning-based hippocampal segmentation methods, SynthSeg and TigerBx, which are readily available to the public. We contrasted their performance with that of two established techniques, FreeSurfer-Aseg and FSL-FIRST, using three-dimensional T1-weighted MRI scans (n = 1447) procured from public databases. Our evaluation focused on the accuracy and reproducibility of these tools in estimating hippocampal volume. The findings suggest that both SynthSeg and TigerBx are on a par with Aseg and FIRST in terms of segmentation accuracy and reproducibility, but offer a significant advantage in processing speed, generating results in less than 1 min compared with several minutes to hours for the latter tools. In terms of Alzheimer's disease classification based on the hippocampal atrophy rate, SynthSeg and TigerBx exhibited superior performance. In conclusion, we evaluated the capabilities of two deep learning-based segmentation techniques. The results underscore their potential value in clinical and research environments, particularly when investigating neurological conditions associated with hippocampal structures.
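
The volume estimate itself is the trivial last step of any such pipeline: count labeled voxels and multiply by the voxel volume. A minimal sketch, assuming 1 mm isotropic voxels:

```python
import numpy as np

def structure_volume_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Reduce a binary segmentation to a volume estimate:
    voxel count times voxel volume."""
    return mask.sum() * np.prod(voxel_size_mm) / 1000.0  # mm^3 -> mL

mask = np.zeros((256, 256, 256), dtype=bool)
mask[100:130, 100:140, 100:135] = True   # toy segmentation blob
print(structure_volume_ml(mask))         # 30*40*35/1000 = 42.0 mL
```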

PMID:38712667 | DOI:10.1002/nbm.5169

Categories: Literature Watch

Artificial intelligence-image learning and its applications in neurooncology: a review

Tue, 2024-05-07 06:00

J Pak Med Assoc. 2024 Apr;74(4 (Supple-4)):S158-S160. doi: 10.47391/JPMA.AKU-9S-24.

ABSTRACT

Image learning involves using artificial intelligence (AI) to analyse radiological images. Various machine and deep learning-based techniques have been employed to process images and extract relevant features. These can later be used to detect tumours early and to predict survival based on tumour grading and classification. Radiomics is now also used to predict genetic mutations and to differentiate between tumour progression and treatment-related side effects; such assessments were once completely dependent on invasive procedures like biopsy and histopathology. The use and feasibility of these techniques are now widely being explored in neurooncology to devise more accurate management plans and limit morbidity and mortality. Hence, the future of oncology lies in the exploration of AI-based image learning techniques, which can be applied to formulate management plans based on less invasive diagnostic techniques, earlier detection of tumours, and prediction of prognosis based on radiomic features. In this review, we discuss some of these applications of image learning in current medical dynamics.

PMID:38712425 | DOI:10.47391/JPMA.AKU-9S-24

Categories: Literature Watch

Transforming breast cancer care: harnessing the power of artificial intelligence and imaging for predicting pathological complete response. a narrative review

Tue, 2024-05-07 06:00

J Pak Med Assoc. 2024 Apr;74(4 (Supple-4)):S43-S48. doi: 10.47391/JPMA.AKU-9S-07.

ABSTRACT

This narrative review explores the transformative potential of Artificial Intelligence (AI) and advanced imaging techniques in predicting Pathological Complete Response (pCR) in Breast Cancer (BC) patients undergoing Neo-Adjuvant Chemotherapy (NACT). Summarizing recent research findings, it underscores the significant strides made in the accurate assessment of pCR using AI, including deep learning and radiomics. Such AI-driven models offer promise in optimizing clinical decisions, personalizing treatment strategies, and potentially reducing the burden of unnecessary treatments, thereby improving patient outcomes. Furthermore, the review acknowledges the potential of AI to address healthcare disparities in Low- and Middle-Income Countries (LMICs), where accessible and scalable AI solutions may enhance BC management. Collaboration and international efforts are essential to fully unlock the potential of AI in BC care, offering hope for a more equitable and effective approach to treatment worldwide.

PMID:38712408 | DOI:10.47391/JPMA.AKU-9S-07

Categories: Literature Watch

An Artificial Intelligence model for implant segmentation on periapical radiographs

Tue, 2024-05-07 06:00

J Pak Med Assoc. 2024 Apr;74(4 (Supple-4)):S5-S9. doi: 10.47391/JPMA.AKU-9S-02.

ABSTRACT

OBJECTIVE: To segment dental implants on periapical (PA) radiographs using a Deep Learning (DL) algorithm, and to compare the performance of the algorithm with ground truth determined by a human annotator.

METHODOLOGY: Three hundred PA radiographs were retrieved from the radiographic database and subsequently annotated to label implants as well as teeth using the LabelMe annotation software. The dataset was augmented to increase the number of images in the training data, and a total of 1294 images were used to train, validate, and test the DL algorithm. An untrained U-Net was downloaded and trained on the annotated dataset to allow detection of implants using polygons on PA radiographs.

RESULTS: A total of one hundred and thirty unseen images were run through the trained U-Net to determine its ability to segment implants on PA radiographs. The performance metrics were as follows: accuracy of 93.8%, precision of 90%, recall of 83%, F1-score of 86%, intersection over union of 86.4%, and loss of 21%.
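
These metrics all derive from true/false positive/negative counts; a short sketch with made-up pixel counts to show the formulas (the abstract's aggregate percentages come from many images, so these numbers will not reproduce them exactly):

```python
# Illustrative pixel-level counts, not the study's raw data.
tp, fp, fn, tn = 830, 92, 170, 8908

precision = tp / (tp + fp)                          # ~0.90
recall = tp / (tp + fn)                             # 0.83
f1 = 2 * precision * recall / (precision + recall)  # ~0.86
iou = tp / (tp + fp + fn)                           # intersection over union
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(precision, recall, f1, iou, accuracy)
```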

CONCLUSIONS: The trained DL algorithm segmented implants on PA radiographs with high performance similar to that of the humans who labelled the images forming the ground truth.

PMID:38712403 | DOI:10.47391/JPMA.AKU-9S-02

Categories: Literature Watch

Automatic Optic Nerve Assessment From Transorbital Ultrasound Images: A Deep Learning-based Approach

Tue, 2024-05-07 06:00

Curr Med Imaging. 2024 May 6. doi: 10.2174/0115734056293608240430073630. Online ahead of print.

ABSTRACT

BACKGROUND: Transorbital Ultrasonography (TOS) is a promising imaging technology that can be used to characterize the structures of the optic nerve and the potential alterations that may occur in those structures as a result of an increase in intracranial pressure (ICP) or the presence of other disorders such as multiple sclerosis (MS) and hydrocephalus.

OBJECTIVE: In this paper, the primary objective is to develop a fully automated system capable of segmenting and measuring the structures associated with the optic nerve in TOS images, namely the optic nerve sheath diameter (ONSD) and the optic nerve diameter (OND).

METHODS: The segmentation method builds on a pre-trained fully convolutional neural network (FCN) model. The developed method was applied to 464 images collected from 110 subjects with four different ultrasound devices.

RESULTS: The automatic measurements were compared with those of a manual operator. Relative to the operator, OND and ONSD showed typical errors of -0.12 ± 0.32 mm and 0.14 ± 0.58 mm, respectively. The Pearson correlation coefficient (PCC) was 0.71 for OND and 0.64 for ONSD, indicating a positive correlation between the two measurement approaches.

CONCLUSION: The developed technique is fully automatic, and the average error (AE) achieved for the ONSD measurement is compatible with the inter-operator variability ranges reported in the literature.

PMID:38712376 | DOI:10.2174/0115734056293608240430073630

Categories: Literature Watch

The 100 most cited articles in artificial intelligence related to orthopedics

Tue, 2024-05-07 06:00

Front Surg. 2024 Apr 17;11:1370335. doi: 10.3389/fsurg.2024.1370335. eCollection 2024.

ABSTRACT

BACKGROUND: This bibliometric study aimed to identify and analyze the top 100 articles related to artificial intelligence in the field of orthopedics.

METHODS: The articles were assessed based on their number of citations, publication years, countries, journals, authors, affiliations, and funding agencies. Additionally, they were analyzed in terms of their themes and objectives. Keyword co-occurrence, co-citation of authors, and co-citation of references analyses were conducted using VOSviewer (version 1.6.19).

RESULTS: The number of citations of these articles ranged from 32 to 272, with six papers having more than 200 citations. The years 2019 (n: 37) and 2020 (n: 19) together constituted 56% of the list. The USA was the leading contributor country to this field (n: 61). The most frequently used keywords were "machine learning" (n: 26), "classification" (n: 18), "deep learning" (n: 16), and "artificial intelligence" (n: 14). The most common themes were decision support (n: 25), fracture detection (n: 24), and osteoarthritis staging (n: 21). The majority of the studies were diagnostic in nature (n: 85), with only two articles focused on treatment.

CONCLUSIONS: This study provides valuable insights and presents the historical perspective of scientific development on artificial intelligence in the field of orthopedics. The literature in this field is expanding rapidly. Currently, research is generally done for diagnostic purposes and predominantly focused on decision support systems, fracture detection, and osteoarthritis classification.

PMID:38712339 | PMC:PMC11072182 | DOI:10.3389/fsurg.2024.1370335

Categories: Literature Watch

Deep learning-driven imaging of cell division and cell growth across an entire eukaryotic life cycle

Tue, 2024-05-07 06:00

bioRxiv [Preprint]. 2024 Apr 27:2024.04.25.591211. doi: 10.1101/2024.04.25.591211.

ABSTRACT

The life cycle of biomedical and agriculturally relevant eukaryotic microorganisms involves complex transitions between proliferative and non-proliferative states such as dormancy, mating, meiosis, and cell division. New drugs, pesticides, and vaccines can be created by targeting specific life cycle stages of parasites and pathogens. However, defining the structure of a microbial life cycle often relies on partial observations that are theoretically assembled into an ideal life cycle path. To create a more quantitative approach to studying complete eukaryotic life cycles, we generated a deep learning-driven imaging framework to track microorganisms across sexually reproducing generations. Our approach combines microfluidic culturing, life cycle stage-specific segmentation of microscopy images using convolutional neural networks, and a novel cell tracking algorithm, FIEST, based on enhancing the overlap of single-cell masks in consecutive images through deep learning video frame interpolation. As proof of principle, we used this approach to quantitatively image and compare cell growth and cell cycle regulation across the sexual life cycle of Saccharomyces cerevisiae. We developed a fluorescent reporter system based on a fluorescently labeled Whi5 protein, the yeast analog of mammalian Rb, and a new High-Cdk1 activity sensor, LiCHI, designed to report during DNA replication, mitosis, meiotic homologous recombination, meiosis I, and meiosis II. We found that cell growth preceded the exit from non-proliferative states such as mitotic G1, pre-meiotic G1, and the G0 spore state during germination. A decrease in the total cell concentration of Whi5 characterized the exit from non-proliferative states, which is consistent with a Whi5 dilution model. The nuclear accumulation of Whi5 was developmentally regulated, being at its highest during meiotic exit and spore formation. The temporal coordination of cell division and growth was not significantly different across three sexually reproducing generations. Our framework could be used to quantitatively characterize other single-cell eukaryotic life cycles that remain incompletely described. An off-the-shelf user interface, Yeastvision, provides free access to our image processing and single-cell tracking algorithms.
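
The overlap-based linking at the heart of such tracking can be sketched in a few lines: match each cell mask at frame t to the next-frame mask with the highest IoU (FIEST's distinguishing step, interpolating intermediate video frames to boost these overlaps, is omitted here):

```python
import numpy as np

def link_masks(masks_t, masks_t1, iou_thresh=0.3):
    """Greedy overlap-based linking: each cell mask at frame t is matched
    to the next-frame mask with the highest IoU."""
    links = {}
    for i, a in enumerate(masks_t):
        ious = [np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
                for b in masks_t1]
        j = int(np.argmax(ious))
        if ious[j] >= iou_thresh:
            links[i] = j  # cell i at frame t -> cell j at frame t+1
    return links

a = np.zeros((16, 16), bool); a[2:6, 2:6] = True
b = np.zeros((16, 16), bool); b[3:7, 3:7] = True
print(link_masks([a], [b]))  # {0: 0}
```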

PMID:38712227 | PMC:PMC11071524 | DOI:10.1101/2024.04.25.591211

Categories: Literature Watch

Exploring the Potential of Structure-Based Deep Learning Approaches for T cell Receptor Design

Tue, 2024-05-07 06:00

bioRxiv [Preprint]. 2024 Apr 24:2024.04.19.590222. doi: 10.1101/2024.04.19.590222.

ABSTRACT

Deep learning methods, trained on the increasing set of available protein 3D structures and sequences, have substantially impacted the protein modeling and design field. These advancements have facilitated the creation of novel proteins, or the optimization of existing ones designed for specific functions, such as binding a target protein. Despite the demonstrated potential of such approaches in designing general protein binders, their application in designing immunotherapeutics remains relatively unexplored. A relevant application is the design of T cell receptors (TCRs). Given the crucial role of T cells in mediating immune responses, redirecting these cells to tumor or infected target cells through the engineering of TCRs has shown promising results in treating diseases, especially cancer. However, the computational design of TCR interactions presents challenges for current physics-based methods, particularly due to the unique natural characteristics of these interfaces, such as low affinity and cross-reactivity. For this reason, in this study, we explored the potential of two structure-based deep learning protein design methods, ProteinMPNN and ESM-IF, in designing fixed-backbone TCRs for binding target antigenic peptides presented by the MHC through different design scenarios. To evaluate TCR designs, we employed a comprehensive set of sequence- and structure-based metrics, highlighting the benefits of these methods in comparison to classical physics-based design methods and identifying deficiencies for improvement.
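
Sequence recovery is one plausible member of such a sequence-based evaluation panel (the paper's exact metric set is not reproduced here); as a worked example:

```python
def sequence_recovery(designed: str, native: str) -> float:
    """Fraction of positions where a fixed-backbone design reproduces
    the native residue, a standard sequence-based design metric."""
    assert len(designed) == len(native)
    return sum(d == n for d, n in zip(designed, native)) / len(native)

print(sequence_recovery("ACDEFG", "ACDEYG"))  # 5 of 6 positions -> 0.833...
```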

PMID:38712216 | PMC:PMC11071404 | DOI:10.1101/2024.04.19.590222

Categories: Literature Watch

Revolutionizing Postoperative Ileus Monitoring: Exploring GRU-D's Real-Time Capabilities and Cross-Hospital Transferability

Tue, 2024-05-07 06:00

medRxiv [Preprint]. 2024 Apr 25:2024.04.24.24306295. doi: 10.1101/2024.04.24.24306295.

ABSTRACT

BACKGROUND: Postoperative ileus (POI) after colorectal surgery leads to increased morbidity, costs, and hospital stays. Identifying POI risk early enough for intervention is important for improving surgical outcomes, especially given the increasing trend towards early discharge after surgery. While existing studies have assessed POI risk with regression models, the role of deep learning remains unexplored.

METHODS: We assessed the performance and transferability (brute-force/instance/parameter transfer) of the Gated Recurrent Unit with Decay (GRU-D), a longitudinal deep learning architecture, for real-time risk assessment of POI among 7,349 colorectal surgeries performed across three hospital sites operated by Mayo Clinic with two electronic health record (EHR) systems. The results were compared with atemporal models on a panel of benchmark metrics.

RESULTS: GRU-D exhibits robust transferability across different EHR systems and hospital sites, showing enhanced performance by integrating new measurements, even amid the extreme sparsity of real-world longitudinal data. On average, 72.2% of labs, 26.9% of vitals, and 49.3% of assisted living status entries lack measurements within 24 hours after surgery; over the follow-up period with 4-hour intervals, 98.7%, 84%, and 95.8% of data points are missing, respectively. A maximum 5% decrease in AUROC was observed in brute-force transfer between different EHR systems with non-overlapping surgery date frames. Multi-source instance transfer achieved the best performance, with a maximum 2.6% improvement in AUROC over local learning; the more significant benefit, however, lies in the reduction of variance (a maximum 86% decrease). The GRU-D model's performance mainly depends on the prediction task's difficulty, especially the case prevalence rate, whereas the impact of training data and transfer strategy is less crucial, underscoring the challenge of effectively leveraging transfer learning for rare outcomes. While atemporal logit models show notably superior performance at certain pre-surgical points, their performance fluctuates significantly and generally underperforms GRU-D in post-surgical hours.
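
The mechanism that lets GRU-D integrate new measurements amid such sparsity is its input decay: a missing feature is imputed by decaying its last observation toward the empirical mean as the time since measurement grows. A sketch of the standard GRU-D formulation (Che et al.), not the study's full model:

```python
import torch
import torch.nn as nn

class GRUDInputDecay(nn.Module):
    """Input-decay step that distinguishes GRU-D from a plain GRU."""
    def __init__(self, n_features):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_features))
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x, mask, delta, x_last, x_mean):
        gamma = torch.exp(-torch.relu(self.w * delta + self.b))  # decay in (0, 1]
        x_hat = gamma * x_last + (1 - gamma) * x_mean            # decayed imputation
        return mask * x + (1 - mask) * x_hat                     # keep observed values

decay = GRUDInputDecay(3)
x = torch.tensor([[5.0, 0.0, 2.0]]); m = torch.tensor([[1.0, 0.0, 1.0]])
print(decay(x, m, delta=torch.tensor([[0.0, 8.0, 0.0]]),
            x_last=torch.tensor([[5.0, 7.0, 2.0]]),
            x_mean=torch.tensor([[4.0, 6.0, 3.0]])))
```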

CONCLUSION: GRU-D demonstrated robust transferability across EHR systems and hospital sites with highly sparse real-world EHR data. Further research on built-in explainability for meaningful intervention would be highly valuable for its integration into clinical practice.

PMID:38712199 | PMC:PMC11071561 | DOI:10.1101/2024.04.24.24306295

Categories: Literature Watch
