Deep learning

Use of Conventional Chest Imaging and Artificial Intelligence in COVID-19 Infection. A Review of the Literature

Mon, 2024-04-15 06:00

Open Respir Arch. 2021 Jan 8;3(1):100078. doi: 10.1016/j.opresp.2020.100078. eCollection 2021 Jan-Mar.

ABSTRACT

The coronavirus disease caused by SARS-CoV-2 is a pandemic with millions of confirmed cases around the world and a high death toll. Currently, real-time reverse transcription polymerase chain reaction (RT-PCR) is the standard diagnostic method for confirming COVID-19 infection. Various failures in detecting the disease from laboratory samples have raised doubts about the characterisation of the infection and the tracing of contacts. In clinical practice, chest radiography (RT) and chest computed tomography (CT) are extremely helpful and have been widely used in the detection and diagnosis of COVID-19. RT is the most common and widely available diagnostic imaging technique; however, its reading by less qualified personnel, often working under overload, leads to a high number of interpretation errors. Chest CT can be used for triage, diagnosis, and assessment of severity, progression, and response to treatment. Currently, artificial intelligence (AI) algorithms have shown promise in image classification, suggesting that they can reduce diagnostic errors by at least matching the diagnostic performance of radiologists. This review shows how AI applied to thoracic radiology speeds up and improves diagnosis, making it possible to optimise radiologists' workflow. It can provide an objective evaluation, reducing subjectivity and variability. AI can also help optimise resources and increase efficiency in the management of COVID-19 infection.

PMID:38620646 | PMC:PMC7834680 | DOI:10.1016/j.opresp.2020.100078

Categories: Literature Watch

Current status and prospects of artificial intelligence in breast cancer pathology: convolutional neural networks to prospective Vision Transformers

Mon, 2024-04-15 06:00

Int J Clin Oncol. 2024 Apr 15. doi: 10.1007/s10147-024-02513-3. Online ahead of print.

ABSTRACT

Breast cancer is the most prevalent cancer among women, and its diagnosis requires the accurate identification and classification of histological features for effective patient management. Artificial intelligence, particularly through deep learning, represents the next frontier in cancer diagnosis and management. Notably, the use of convolutional neural networks and emerging Vision Transformers (ViT) has been reported to automate pathologists' tasks, including tumor detection and classification, in addition to improving the efficiency of pathology services. Deep learning applications have also been extended to the prediction of protein expression, molecular subtype, mutation status, therapeutic efficacy, and outcome prediction directly from hematoxylin and eosin-stained slides, bypassing the need for immunohistochemistry or genetic testing. This review explores the current status and prospects of deep learning in breast cancer diagnosis with a focus on whole-slide image analysis. Artificial intelligence applications are increasingly applied to many tasks in breast pathology ranging from disease diagnosis to outcome prediction, thus serving as valuable tools for assisting pathologists and supporting breast cancer management.

PMID:38619651 | DOI:10.1007/s10147-024-02513-3

Categories: Literature Watch

Harnessing machine learning to predict cytochrome P450 inhibition through molecular properties

Mon, 2024-04-15 06:00

Arch Toxicol. 2024 Apr 15. doi: 10.1007/s00204-024-03756-9. Online ahead of print.

ABSTRACT

Cytochrome P450 enzymes are a superfamily of enzymes responsible for the metabolism of a variety of medicines and xenobiotics. Among the Cytochrome P450 family, five isozymes (1A2, 2C9, 2C19, 2D6, and 3A4) are the most important for the metabolism of xenobiotics. Inhibition of any of these five CYP isozymes causes drug-drug interactions with significant pharmacological and toxicological consequences, so predicting whether a compound inhibits these isozymes is of great importance. Many techniques based on machine learning and deep learning algorithms are currently being used for this prediction task. In this study, three molecular fingerprint types, namely Morgan, MACCS and Morgan (combined), and RDKit, are computed for the various molecules and used to train a distinct SVM model for each isozyme (1A2, 2C9, 2C19, 2D6, and 3A4). On the independent dataset, Morgan fingerprints provided the best results, while the MACCS and Morgan (combined) fingerprints achieved comparable results in terms of balanced accuracy (BA), sensitivity (Sn), and Matthews correlation coefficient (MCC). For the Morgan fingerprints, BA, MCC, and Sn against each CYP isozyme (1A2, 2C9, 2C19, 2D6, and 3A4) on the independent dataset ranged between 0.81 and 0.85, 0.61 and 0.70, and 0.72 and 0.83, respectively. Similarly, on the independent dataset, the MACCS and Morgan (combined) fingerprints achieved competitive results, with BA, MCC, and Sn against each CYP isozyme ranging between 0.79 and 0.85, 0.59 and 0.69, and 0.69 and 0.82, respectively.
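The abstract reports three metrics: balanced accuracy, Matthews correlation coefficient, and sensitivity. As a minimal sketch, these can be computed from binary confusion-matrix counts as follows (the counts below are illustrative only, not data from the study):

```python
from math import sqrt

def classification_metrics(tp, tn, fp, fn):
    """Balanced accuracy, MCC, and sensitivity from confusion-matrix counts."""
    sn = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    ba = (sn + sp) / 2                       # balanced accuracy
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return ba, mcc, sn

# Illustrative counts only (not from the study)
ba, mcc, sn = classification_metrics(tp=8, tn=9, fp=1, fn=2)
print(round(ba, 3), round(mcc, 3), round(sn, 3))  # 0.85 0.704 0.8
```

Balanced accuracy is preferred over plain accuracy here because inhibitor/non-inhibitor datasets are typically imbalanced.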

PMID:38619593 | DOI:10.1007/s00204-024-03756-9

Categories: Literature Watch

Human-multimodal deep learning collaboration in 'precise' diagnosis of lupus erythematosus subtypes and similar skin diseases

Mon, 2024-04-15 06:00

J Eur Acad Dermatol Venereol. 2024 Apr 15. doi: 10.1111/jdv.20031. Online ahead of print.

ABSTRACT

BACKGROUND: Lupus erythematosus (LE) is a spectrum of autoimmune diseases. Due to the complexity of cutaneous LE (CLE), clinical skin image-based artificial intelligence is still experiencing difficulties in distinguishing subtypes of LE.

OBJECTIVES: We aim to develop a multimodal deep learning system (MMDLS) for human-AI collaboration in diagnosis of LE subtypes.

METHODS: This is a multi-centre study based on 25 institutions across China, designed to assist in the diagnosis of LE subtypes, eight other similar skin diseases, and healthy subjects. In total, 446 cases with 800 clinical skin images, 3786 multicolor-immunohistochemistry (multi-IHC) images, and clinical data were collected; EfficientNet-B3 and ResNet-18 were utilized in this study.

RESULTS: In the multi-classification task, the overall performance of the MMDLS on 13 skin conditions is much higher than that of single- or dual-modality models (Sen = 0.8288, Spe = 0.9852, Pre = 0.8518, AUC = 0.9844). Further, MMDLS-based diagnostic support improves the accuracy of dermatologists from 66.88% ± 6.94% to 81.25% ± 4.23% (p = 0.0004).

CONCLUSIONS: These results highlight the benefit of a human-MMDLS collaboration framework in telemedicine, assisting dermatologists and rheumatologists in the differential diagnosis of LE subtypes and similar skin diseases.

PMID:38619440 | DOI:10.1111/jdv.20031

Categories: Literature Watch

Revolutionizing dementia detection: Leveraging vision and Swin transformers for early diagnosis

Mon, 2024-04-15 06:00

Am J Med Genet B Neuropsychiatr Genet. 2024 Apr 15:e32979. doi: 10.1002/ajmg.b.32979. Online ahead of print.

ABSTRACT

Dementia, an increasingly prevalent neurological disorder with a projected threefold rise globally by 2050, necessitates early detection for effective management. The risk notably increases after age 65. Dementia leads to a progressive decline in cognitive functions, affecting memory, reasoning, and problem-solving abilities. This decline can impact the individual's ability to perform daily tasks and make decisions, underscoring the crucial importance of timely identification. With the advent of technologies like computer vision and deep learning, the prospect of early detection becomes even more promising. Employing sophisticated algorithms on imaging data, such as positron emission tomography scans, facilitates the recognition of subtle structural brain changes, enabling diagnosis at an earlier stage for potentially more effective interventions. In an experimental study, the Swin transformer algorithm demonstrated superior overall accuracy compared to the vision transformer and convolutional neural network, emphasizing its efficiency. Detecting dementia early is essential for proactive management, personalized care, and implementing preventive measures, ultimately enhancing outcomes for individuals and lessening the overall burden on healthcare systems.

PMID:38619385 | DOI:10.1002/ajmg.b.32979

Categories: Literature Watch

Local and global changes in cell density induce reorganisation of 3D packing in a proliferating epithelium

Mon, 2024-04-15 06:00

Development. 2024 Apr 15:dev.202362. doi: 10.1242/dev.202362. Online ahead of print.

ABSTRACT

Tissue morphogenesis is intimately linked to the changes in shape and organisation of individual cells. In curved epithelia, cells can intercalate along their own apicobasal axes, adopting a shape named "scutoid" that allows energy minimization in the tissue. Although several geometric and biophysical factors have been associated with this 3D reorganisation, the dynamic changes underlying scutoid formation in 3D epithelial packing remain poorly understood. Here we use live imaging of the sea star embryo coupled with deep learning-based segmentation to dissect the relative contributions of cell density, tissue compaction, and cell proliferation on epithelial architecture. We find that tissue compaction, which naturally occurs in the embryo, is necessary for the appearance of scutoids. Physical compression experiments identify cell density as the factor promoting scutoid formation at a global level. Finally, the comparison of the developing embryo with computational models indicates that the increase in the proportion of scutoids is directly associated with cell divisions. Our results suggest that apicobasal intercalations appearing just after mitosis may help accommodate the new cells within the tissue. We propose that proliferation in a compact epithelium induces 3D cell rearrangements during development.

PMID:38619327 | DOI:10.1242/dev.202362

Categories: Literature Watch

Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks

Mon, 2024-04-15 06:00

Surg Innov. 2024 Apr 15:15533506241246576. doi: 10.1177/15533506241246576. Online ahead of print.

ABSTRACT

OBJECTIVE: To propose a transfer learning-based method for tumor segmentation in intraoperative fluorescence images that will assist surgeons in efficiently and accurately identifying tumor boundaries.

METHODS: We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance the segmentation performance of fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and the traditional level-set method.

RESULTS: The transfer learning-based UNet++ model achieved high segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained DeepLabv3+ network performed exceptionally well, with a segmentation accuracy of 96.48%. Furthermore, all models achieved segmentation accuracies of over 90% on the DTHP dataset.
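Segmentation quality in studies like this is commonly scored with the Dice coefficient, which measures overlap between predicted and ground-truth masks. A minimal sketch on flat binary masks (the masks below are hypothetical, and the paper's exact accuracy metric is not specified in the abstract):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for flat binary masks (lists of 0/1)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / total if total else 1.0

# Hypothetical 2x2 masks flattened to length-4 lists
pred = [1, 1, 0, 0]
gt = [1, 0, 1, 0]
print(dice_coefficient(pred, gt))  # 0.5
```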

CONCLUSION: To the best of our knowledge, this study explores tumor segmentation on intraoperative fluorescent images for the first time. The results show that compared to traditional methods, deep learning has significant advantages in improving segmentation performance. Transfer learning enables deep learning models to perform better on small-sample fluorescence image data compared to end-to-end training. This discovery provides strong support for surgeons to obtain more reliable and accurate image segmentation results during surgery.

PMID:38619039 | DOI:10.1177/15533506241246576

Categories: Literature Watch

Prediction of cardiovascular risk factors from retinal fundus photographs: Validation of a deep learning algorithm in a prospective non-interventional study in Kenya

Mon, 2024-04-15 06:00

Diabetes Obes Metab. 2024 Apr 15. doi: 10.1111/dom.15587. Online ahead of print.

ABSTRACT

AIM: Hypertension and diabetes mellitus (DM) are major causes of morbidity and mortality, with growing burdens in low-income countries where they are underdiagnosed and undertreated. Advances in machine learning may provide opportunities to enhance diagnostics in settings with limited medical infrastructure.

MATERIALS AND METHODS: A non-interventional study was conducted to develop and validate a machine learning algorithm to estimate cardiovascular clinical and laboratory parameters. At two sites in Kenya, digital retinal fundus photographs were collected alongside blood pressure (BP), laboratory measures and medical history. The performance of machine learning models, originally trained using data from the UK Biobank, were evaluated for their ability to estimate BP, glycated haemoglobin, estimated glomerular filtration rate and diagnoses from fundus images.

RESULTS: In total, 301 participants were enrolled. Compared with the UK Biobank population used for algorithm development, participants from Kenya were younger and more likely to report Black/African ethnicity, with a higher body mass index and prevalence of DM and hypertension. The mean absolute error was comparable or slightly greater for systolic BP, diastolic BP, glycated haemoglobin, and estimated glomerular filtration rate. The model trained to identify DM had an area under the receiver operating curve of 0.762 (0.818 in the UK Biobank), and the hypertension model had an area under the receiver operating curve of 0.765 (0.738 in the UK Biobank).

CONCLUSIONS: In a Kenyan population, machine learning models estimated cardiovascular parameters with comparable or slightly lower accuracy than in the population where they were trained, suggesting model recalibration may be appropriate. This study represents an incremental step toward leveraging machine learning to make early cardiovascular screening more accessible, particularly in resource-limited settings.

PMID:38618987 | DOI:10.1111/dom.15587

Categories: Literature Watch

Artificial Intelligence in Cataract Surgery: A Systematic Review

Mon, 2024-04-15 06:00

Transl Vis Sci Technol. 2024 Apr 2;13(4):20. doi: 10.1167/tvst.13.4.20.

ABSTRACT

PURPOSE: The purpose of this study was to assess the current use and reliability of artificial intelligence (AI)-based algorithms for analyzing cataract surgery videos.

METHODS: A systematic review of the literature on intra-operative analysis of cataract surgery videos with machine learning techniques was performed. Cataract diagnosis and detection algorithms were excluded. The resulting algorithms were compared, descriptively analyzed, and their metrics summarized or visually reported. The reproducibility and reliability of the methods and results were assessed using a modified version of the Medical Image Computing and Computer-Assisted Intervention (MICCAI) checklist.

RESULTS: Thirty-eight of the 550 screened studies were included: 20 addressed the challenge of instrument detection or tracking, 9 focused on phase discrimination, and 8 predicted skill and complications. Instrument detection achieves an area under the receiver operating characteristic curve (ROC AUC) between 0.976 and 0.998, instrument tracking an mAP between 0.685 and 0.929, phase recognition an ROC AUC between 0.773 and 0.990, and complication or surgical skill prediction an ROC AUC between 0.570 and 0.970.
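The ROC AUC values quoted above can be computed without plotting a curve: AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney U statistic). A minimal sketch with toy scores (not data from the review):

```python
def roc_auc(labels, scores):
    """ROC AUC as the fraction of positive/negative pairs where the
    positive outscores the negative; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 2 negatives, 2 positives
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This pairwise formulation makes the metric's threshold independence explicit, which is one reason ROC AUC is so common in the studies surveyed here.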

CONCLUSIONS: The studies showed wide variation in quality and pose a replication challenge due to the small number of public datasets (none for manual small-incision cataract surgery) and seldom-published source code. There is no standard for reported outcome metrics, and validation of the models on external datasets is rare, making comparisons difficult. The data suggest that instrument tracking and phase detection work well, but surgical skill and complication recognition remains a challenge for deep learning.

TRANSLATIONAL RELEVANCE: This overview of cataract surgery analysis with AI models provides translational value for improving training of the clinician by identifying successes and challenges.

PMID:38618893 | DOI:10.1167/tvst.13.4.20

Categories: Literature Watch

Graphical models for identifying pore-forming proteins

Mon, 2024-04-15 06:00

Proteins. 2024 Apr 15. doi: 10.1002/prot.26687. Online ahead of print.

ABSTRACT

Pore-forming toxins (PFTs) are proteins that form lesions in biological membranes. Better understanding of the structure and function of these proteins will be beneficial in a number of biotechnological applications, including the development of new pest control methods in agriculture. When searching for new pore formers, existing sequence homology-based methods fail to discover truly novel proteins with low sequence identity to known proteins. Search methodologies based on protein structures would help us move beyond this limitation. Because the number of known structures for PFTs is very limited, it is quite challenging to identify new proteins with similar structures using computational approaches such as deep learning. In this article, we therefore propose a sample-efficient graphical model in which a protein structure graph is first constructed according to consensus secondary structures. A semi-Markov conditional random field model is then developed to perform protein sequence segmentation. We demonstrate that our method is able to distinguish structurally similar proteins even in the absence of sequence similarity (pairwise sequence identity < 0.4), a feat not achievable by traditional approaches such as HMMs. To extract proteins of interest from a genome-wide protein database for further study, we also develop an efficient framework for UniRef50, a database of 43 million proteins.

PMID:38618860 | DOI:10.1002/prot.26687

Categories: Literature Watch

Establishing a novel deep learning model for detecting peri-implantitis

Mon, 2024-04-15 06:00

J Dent Sci. 2024 Apr;19(2):1165-1173. doi: 10.1016/j.jds.2023.11.017. Epub 2023 Dec 11.

ABSTRACT

BACKGROUND/PURPOSE: The diagnosis of peri-implantitis using periapical radiographs is crucial. Recently, artificial intelligence may apply in radiographic image analysis effectively. The aim of this study was to differentiate the degree of marginal bone loss of an implant, and also to classify the severity of peri-implantitis using a deep learning model.

MATERIALS AND METHODS: A dataset of 800 periapical radiographic images containing implants was divided into training (n = 600), validation (n = 100), and test (n = 100) datasets for deep learning. An object detection algorithm (YOLOv7) was used to identify peri-implantitis. The classification performance of this model was evaluated using metrics including specificity, precision, recall, and F1 score.

RESULTS: Considering the classification performance, the specificity was 100%, precision was 100%, recall was 94.44%, and F1 score was 97.10%.
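Object detectors such as YOLOv7 are typically scored by matching predicted boxes to ground truth via intersection-over-union (IoU): a prediction counts as a true positive only if its IoU with a ground-truth box exceeds a threshold (0.5 is a common default; the abstract does not state the threshold used here). A minimal sketch of box IoU with hypothetical boxes:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical boxes: 25-unit overlap, 175-unit union
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
```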

CONCLUSION: Results of this study suggested that implants can be identified from periapical radiographic images using deep learning-based object detection. This identification system could help dentists and patients suffering from implant problems. However, more images of other implant systems are needed to increase the learning performance to apply this system in clinical practice.

PMID:38618118 | PMC:PMC11010782 | DOI:10.1016/j.jds.2023.11.017

Categories: Literature Watch

Deep learning framework for bovine iris segmentation

Mon, 2024-04-15 06:00

J Anim Sci Technol. 2024 Jan;66(1):167-177. doi: 10.5187/jast.2023.e51. Epub 2024 Jan 31.

ABSTRACT

Iris segmentation is an initial step for identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of bovine iris with a minimized use of annotation labels utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. This framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy and dice coefficient score of 99.50% and 98.35%, respectively. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.

PMID:38618036 | PMC:PMC11007464 | DOI:10.5187/jast.2023.e51

Categories: Literature Watch

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

Mon, 2024-04-15 06:00

J Anim Sci Technol. 2024 Jan;66(1):31-56. doi: 10.5187/jast.2024.e4. Epub 2024 Jan 31.

ABSTRACT

Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for pig disease and piglet crush symptom monitoring on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, providing advantages such as non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT offers a quick and labor-saving approach to acquiring physiological data impacted by environmental temperature, crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include body surface emissivity variations affecting measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. Integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques. It also discusses the benefits and limitations of IRT technology, providing an overview of the current research field. This study provides valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.

PMID:38618025 | PMC:PMC11007457 | DOI:10.5187/jast.2024.e4

Categories: Literature Watch

GoldDigger and Checkers, computational developments in cryo-scanning transmission electron tomography to improve the quality of reconstructed volumes

Mon, 2024-04-15 06:00

Biol Imaging. 2024 Mar 27;4:e6. doi: 10.1017/S2633903X24000047. eCollection 2024.

ABSTRACT

In this work, we present a pair of tools to improve the fiducial tracking and reconstruction quality of cryo-scanning transmission electron tomography (STET) datasets. We then demonstrate the effectiveness of these two tools on experimental cryo-STET data. The first tool, GoldDigger, improves the tracking of fiducials in cryo-STET by accommodating the changed appearance of highly defocussed fiducial markers. Since defocus effects are much stronger in scanning transmission electron microscopy than in conventional transmission electron microscopy, existing alignment tools do not perform well without manual intervention. The second tool, Checkers, combines image inpainting and unsupervised deep learning for denoising tomograms. Existing tools for denoising cryo-tomography often rely on paired noisy image frames, which are unavailable in cryo-STET datasets, necessitating a new approach. Finally, we make the two software tools freely available for the cryo-STET community.

PMID:38617998 | PMC:PMC11016363 | DOI:10.1017/S2633903X24000047

Categories: Literature Watch

Deep learning of pretreatment multiphase CT images for predicting response to lenvatinib and immune checkpoint inhibitors in unresectable hepatocellular carcinoma

Mon, 2024-04-15 06:00

Comput Struct Biotechnol J. 2024 Apr 3;24:247-257. doi: 10.1016/j.csbj.2024.04.001. eCollection 2024 Dec.

ABSTRACT

OBJECTIVES: Combination therapy of lenvatinib and immune checkpoint inhibitors (CLICI) has emerged as a promising approach for managing unresectable hepatocellular carcinoma (HCC). However, the response to such treatment is observed in only a subset of patients, underscoring the pressing need for reliable methods to identify potential responders.

MATERIALS & METHODS: This was a retrospective analysis involving 120 patients with unresectable HCC, divided into training (n = 72) and validation (n = 48) cohorts. We developed an interpretable deep learning model using multiphase computed tomography (CT) images to predict whether patients will respond to CLICI treatment, based on the Response Evaluation Criteria in Solid Tumors, version 1.1 (RECIST v1.1). We evaluated the model's performance and analyzed the impact of each CT phase. Critical regions influencing predictions were identified and visualized through heatmaps.

RESULTS: The multiphase model outperformed the best biphase and uniphase models, achieving an area under the curve (AUC) of 0.802 (95% CI = 0.780-0.824). The portal phase images were found to significantly enhance the model's predictive accuracy. Heatmaps identified six critical features influencing treatment response, offering valuable insights to clinicians. Additionally, we have made this model accessible via a web server at http://uhccnet.com/ for ease of use.

CONCLUSIONS: The integration of multiphase CT images with deep learning-generated heatmaps for predicting treatment response provides a robust and practical tool for guiding CLICI therapy in patients with unresectable HCC.

PMID:38617891 | PMC:PMC11015163 | DOI:10.1016/j.csbj.2024.04.001

Categories: Literature Watch

Advanced Abdominal MRI Techniques and Problem-Solving Strategies

Mon, 2024-04-15 06:00

J Korean Soc Radiol. 2024 Mar;85(2):345-362. doi: 10.3348/jksr.2023.0067. Epub 2024 Mar 26.

ABSTRACT

MRI plays an important role in abdominal imaging because of its ability to detect and characterize focal lesions. However, MRI examinations have several challenges, such as comparatively long scan times and motion management through breath-holding maneuvers. Techniques for reducing scan time with acceptable image quality, such as parallel imaging, compressed sensing, and cutting-edge deep learning techniques, have been developed to enable problem-solving strategies. Additionally, free-breathing techniques for dynamic contrast-enhanced imaging, such as extra-dimensional-volumetric interpolated breath-hold examination, golden-angle radial sparse parallel, and liver acceleration volume acquisition Star, can help patients with severe dyspnea or those under sedation to undergo abdominal MRI. We aimed to present various advanced abdominal MRI techniques for reducing the scan time while maintaining image quality and free-breathing techniques for dynamic imaging and illustrate cases using the techniques mentioned above. A review of these advanced techniques can assist in the appropriate interpretation of sequences.

PMID:38617869 | PMC:PMC11009130 | DOI:10.3348/jksr.2023.0067

Categories: Literature Watch

Toward Perception-based Anticipation of Cortical Breach During K-wire Fixation of the Pelvis

Mon, 2024-04-15 06:00

Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12031:120311N. doi: 10.1117/12.2612989. Epub 2022 Apr 4.

ABSTRACT

Intraoperative imaging using C-arm X-ray systems enables percutaneous management of fractures by providing real-time visualization of tool to tissue relationships. However, estimating appropriate positioning of surgical instruments, such as K-wires, relative to safe bony corridors is challenging due to the projective nature of X-ray images: tool pose in the plane containing the principal ray is difficult to assess, necessitating the acquisition of numerous views onto the anatomy. This task is especially demanding in complex anatomy, such as the superior pubic ramus of the pelvis, and results in high cognitive load and repeat attempts even in experienced trauma surgeons. A perception-based algorithm that interprets interventional radiographs during internal fixation to infer the likelihood of cortical breach - especially early on, when the wire has not been advanced - might reduce both the amount of X-rays acquired for verification and the likelihood of repeat attempts. In this manuscript, we present first steps towards developing such an algorithm. We devise a strategy for in silico collection and annotation of X-ray images suitable for detecting cortical breach of a K-wire in the superior pubic ramus, including those with visible fractures. Beginning with minimal manual annotations of correct trajectories, we randomly perturb entry and exit points and project the 3D scene using a physics-based forward model to obtain a large number of 2D X-ray images with and without cortical breach. We report baseline results for anticipating cortical breach at various K-wire insertion depths, achieving an AUROC score of 0.68 for 50% insertion. Code and data are available at github.com/benjamindkilleen/cortical-breach-detection.
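The in silico data-generation strategy described above, randomly perturbing annotated entry and exit points before forward projection, might look like the following sketch. The Gaussian jitter, the millimetre scale, and the coordinates are all illustrative assumptions; the paper's actual perturbation scheme may differ.

```python
import random

def perturb_trajectory(entry, exit_pt, sigma_mm=2.0, rng=None):
    """Jitter the 3D entry/exit points of a K-wire trajectory with Gaussian
    noise, producing a new candidate trajectory for in silico X-ray rendering.
    sigma_mm (assumed value) controls the perturbation magnitude."""
    rng = rng or random.Random()

    def jitter(point):
        return tuple(c + rng.gauss(0.0, sigma_mm) for c in point)

    return jitter(entry), jitter(exit_pt)

# One annotated "correct" trajectory (hypothetical coordinates, in mm)
entry, exit_pt = (10.0, 20.0, 30.0), (40.0, 25.0, 35.0)
# Seeded generators make the sampled dataset reproducible
samples = [perturb_trajectory(entry, exit_pt, rng=random.Random(i)) for i in range(100)]
```

Each perturbed trajectory would then be rendered through the physics-based forward model and labeled as breaching or non-breaching against the 3D anatomy.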

PMID:38617810 | PMC:PMC11016333 | DOI:10.1117/12.2612989

Categories: Literature Watch

Combining Deep Learning and Structural Modeling to Identify Potential Acetylcholinesterase Inhibitors from Hericium erinaceus

Mon, 2024-04-15 06:00

ACS Omega. 2024 Mar 26;9(14):16311-16321. doi: 10.1021/acsomega.3c10459. eCollection 2024 Apr 9.

ABSTRACT

Alzheimer's disease (AD) is the most common type of dementia, affecting over 50 million people worldwide. Currently, most approved medications for AD inhibit the activity of acetylcholinesterase (AChE), but these treatments often come with harmful side effects. There is growing interest in the use of natural compounds for disease prevention, alleviation, and treatment. This trend is driven by the anticipation that these substances may incur fewer side effects than existing medications. This research presents a computational approach combining machine learning with structural modeling to discover compounds from medicinal mushrooms with a high potential to inhibit the activity of AChE. First, we developed a deep neural network capable of rapidly screening a vast number of compounds to indicate their potential to inhibit AChE activity. Subsequently, we applied deep learning models to screen the compounds in the BACMUSHBASE database, which catalogs the bioactive compounds from cultivated and wild mushroom varieties local to Thailand, resulting in the identification of five promising compounds. Next, the five identified compounds underwent molecular docking techniques to calculate the binding energy between the compounds and AChE. This allowed us to refine the selection to two compounds, erinacerin A and hericenone B. Further analysis of the binding energy patterns between these compounds and the target protein revealed that both compounds displayed binding energy profiles similar to the combined characteristics of donepezil and galanthamine, the prescription drugs for AD. We propose that these two compounds, derived from Hericium erinaceus (also known as lion's mane mushroom), are suitable candidates for further research and development into symptom-alleviating AD medications.

PMID:38617639 | PMC:PMC11007777 | DOI:10.1021/acsomega.3c10459

Categories: Literature Watch

Dose-Incorporated Deep Ensemble Learning for Improving Brain Metastasis SRS Outcome Prediction

Sun, 2024-04-14 06:00

Int J Radiat Oncol Biol Phys. 2024 Apr 12:S0360-3016(24)00505-4. doi: 10.1016/j.ijrobp.2024.04.006. Online ahead of print.

ABSTRACT

PURPOSE/OBJECTIVE(S): To develop a novel deep ensemble learning model for accurate prediction of brain metastasis (BM) local control outcomes following stereotactic radiosurgery (SRS).

MATERIALS/METHODS: A total of 114 BMs from 82 patients were evaluated, including 26 BMs that developed biopsy-confirmed local failure post-SRS. The SRS spatial dose distribution (Dmap) of each BM was registered to the planning contrast-enhanced T1 (T1-CE) MRI. Axial slices of the Dmap, T1-CE, and PTV segmentation (PTVseg) intersecting the BM center were extracted within a fixed field of view determined by the V60% in Dmap. A spherical projection was implemented to transform planar image content onto a spherical surface using multiple projection centers, and the resultant T1-CE/Dmap/PTVseg projections were stacked as a 3-channel variable. Four VGG-19 deep encoders were utilized in an ensemble design, with each sub-model using a different spherical projection formula as input for BM outcome prediction. In each sub-model, clinical features after positional encoding were fused with VGG-19 deep features to generate logit results. The ensemble's outcome was synthesized from the four sub-model results via logistic regression. A total of 10 model versions with random validation sample assignments were trained to study model robustness. Performance was compared to 1) a single VGG-19 encoder; 2) an ensemble with T1-CE MRI as the sole image input after projections; and 3) an ensemble with the same image input design without clinical feature inclusion.
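The final fusion step described above — synthesizing the four sub-model logits into one prediction via logistic regression — can be sketched in a few lines. This is an illustrative NumPy version under assumed inputs; the weights, bias, and logit values below are hypothetical, and in the actual study the logistic-regression parameters would be fit on held-out folds.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ensemble_predict(sub_logits, weights, bias):
    """Fuse per-sub-model logits into one probability of local failure.

    sub_logits : (n_lesions, 4) array, one logit per spherical-projection sub-model.
    weights, bias : logistic-regression parameters (here, assumed values).
    """
    return sigmoid(sub_logits @ weights + bias)

# Two hypothetical lesions: one the sub-models agree is high risk, one low risk.
logits = np.array([[1.2, 0.4, 0.7, 0.9],
                   [-2.0, -1.1, -0.5, -1.7]])
w = np.array([0.3, 0.25, 0.2, 0.25])   # assumed learned weights
b = -0.1                               # assumed learned bias
p_failure = ensemble_predict(logits, w, b)
```

The logistic-regression stage acts as a learned weighted vote, so sub-models whose projection formula is more informative can dominate the final prediction.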

RESULTS: The ensemble model achieved an excellent AUC-ROC of 0.89 ± 0.02, with high sensitivity (0.82 ± 0.05), specificity (0.84 ± 0.11), and accuracy (0.84 ± 0.08). This outperformed the MRI-only VGG-19 encoder (sensitivity: 0.35 ± 0.01, AUC: 0.64 ± 0.08), the MRI-only deep ensemble (sensitivity: 0.60 ± 0.09, AUC: 0.68 ± 0.06), and the 3-channel ensemble without clinical feature fusion (sensitivity: 0.78 ± 0.08, AUC: 0.84 ± 0.03).
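For readers comparing the figures above, the reported sensitivity, specificity, and accuracy follow directly from a binary confusion matrix. A minimal sketch (the labels below are made-up toy data, not the study's results):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels and predictions."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # local failures correctly flagged
    tn = np.sum(~y_true & ~y_pred)  # controlled lesions correctly cleared
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / y_true.size
    return sensitivity, specificity, accuracy

# Toy example: 2 true failures, 2 controlled lesions; one failure is missed.
sens, spec, acc = binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

The large sensitivity gap between the full ensemble (0.82) and the MRI-only encoder (0.35) is the headline result here: most of the gain comes from adding dose and clinical information, not from the encoder itself.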

CONCLUSION: Facilitated by the spherical image projection method, a deep ensemble model incorporating Dmap and clinical variables demonstrated excellent performance in predicting post-SRS local failure of BMs. Our novel approach could improve other radiotherapy outcome models and warrants further evaluation.

PMID:38615888 | DOI:10.1016/j.ijrobp.2024.04.006

Categories: Literature Watch

Deep learning for the automatic detection and segmentation of parotid gland tumors on MRI

Sun, 2024-04-14 06:00

Oral Oncol. 2024 Apr 12;152:106796. doi: 10.1016/j.oraloncology.2024.106796. Online ahead of print.

ABSTRACT

OBJECTIVES: Parotid gland tumors (PGTs) often occur as incidental findings on magnetic resonance images (MRI) that may be overlooked. This study aimed to construct and validate a deep learning model to automatically identify parotid glands (PGs) with a PGT from normal PGs, and in those with a PGT to segment the tumor.

MATERIALS AND METHODS: The nnUNet, combined with a PG-specific post-processing procedure, was used to develop the deep learning model for detecting and segmenting PGTs with five-fold cross-validation, trained on T1-weighted images (T1WI) from 311 patients (180 PGs with tumors and 442 normal PGs) and fat-suppressed (FS) T2WI from 257 patients (125 PGs with tumors and 389 normal PGs). An additional validation set separated by time, comprising T1WI from 34 patients and FS-T2WI from 41 patients, was used to validate the model performance.

RESULTS AND CONCLUSION: To identify PGs with tumors from normal PGs, using combined T1WI and FS-T2WI, the deep learning model achieved an accuracy, sensitivity and specificity of 98.2% (497/506), 100% (119/119) and 97.7% (378/387), respectively, in the cross-validation set and 98.5% (67/68), 100% (20/20) and 97.9% (47/48), respectively, in the validation set. For patients with PGTs, automatic segmentation of PGTs on T1WI and FS-T2WI achieved mean Dice coefficients of 86.1% and 84.2%, respectively, in the cross-validation set, and of 85.9% and 81.0%, respectively, in the validation set. The proposed deep learning model may assist the detection and segmentation of PGTs and, by acting as a second pair of eyes, ensure that incidentally detected PGTs on MRI are not missed.
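The segmentation quality above is reported as the Dice coefficient, the standard overlap measure for comparing a predicted mask against a reference mask. A minimal sketch on toy 2D masks (the study applies the same measure to 3D tumor segmentations):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy masks: a perfect prediction and a completely disjoint one.
reference = np.zeros((4, 4), dtype=int)
reference[1:3, 1:3] = 1            # 2x2 "tumor"
perfect = reference.copy()
disjoint = np.zeros((4, 4), dtype=int)
disjoint[0, 0] = 1

d_perfect = dice_coefficient(perfect, reference)    # close to 1.0
d_disjoint = dice_coefficient(disjoint, reference)  # close to 0.0
```

A Dice of ~85% therefore means the predicted tumor mask and the radiologist's reference overlap substantially, though boundaries still differ.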

PMID:38615586 | DOI:10.1016/j.oraloncology.2024.106796

Categories: Literature Watch

Pages