Deep learning
The prediction of pCR and chemosensitivity for breast cancer patients using DLG3, RADL and Pathomics signatures based on machine learning and deep learning
Transl Oncol. 2024 May 27;46:101985. doi: 10.1016/j.tranon.2024.101985. Online ahead of print.
ABSTRACT
BACKGROUND: Limited studies have investigated the predictive value of multiomics signatures (radiomics, deep learning features, pathological features and DLG3) in breast cancer patients who underwent neoadjuvant chemotherapy (NAC). However, no study has explored the relationships among radiomic signatures, pathomic signatures and chemosensitivity. This study aimed to predict pathological complete response (pCR) using multiomics signatures, and to evaluate the predictive utility of radiomic and pathomic signatures for guiding chemotherapy selection.
METHODS: The oncogenic function of DLG3 was explored in breast cancer cells via DLG3 knockdown. Immunohistochemistry (IHC) was used to evaluate the relationship between DLG3 expression and docetaxel/epirubicin sensitivity. Machine learning (ML) and deep learning (DL) algorithms were used to develop multiomics signatures. Survival analysis was conducted with Kaplan-Meier curves and log-rank tests. Multivariate logistic regression analysis was used to develop nomograms.
RESULTS: A total of 311 patients with malignant breast tumours who underwent NAC were retrospectively included in this multicentre study. Multiomics (DLG3, RADL and PATHO) signatures could accurately predict pCR (AUC: training: 0.900; testing: 0.814; external validation: 0.792). Their performance was also superior to that of clinical TNM staging and the single RADL signature in different cohorts. Patients in the low DLG3 group were more likely to achieve pCR, as were those in the high RADL_Signature_pCR and PATHO_Signature_pCR (OR = 7.93, 95% CI: 3.49-18, P < 0.001) groups. In the TEC regimen NAC group, patients who achieved pCR had a lower DLG3 score (4.00 ± 2.33 vs. 6.43 ± 3.01, P < 0.05). Patients in the low RADL_Signature_DLG3 and PATHO_Signature_DLG3 groups had lower DLG3 IHC scores (P < 0.05). Patients in the high RADL signature, PATHO signature and DLG3 signature groups had worse DFS and OS.
CONCLUSIONS: Multiomics signatures (RADL, PATHO and DLG3) demonstrated great potential in predicting the pCR of breast cancer patients who underwent NAC. The RADL and PATHO signatures are associated with DLG3 status and could help doctors or patients choose proper neoadjuvant chemotherapy regimens (TEC regimens). This simple, structured, convenient and inexpensive multiomics model could help clinicians and patients make treatment decisions.
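To make the modelling step concrete: fusing per-patient signature scores with multivariate logistic regression, as the abstract describes for the nomograms, can be sketched as below. The synthetic data and feature names are placeholders, not the study's variables or coefficients.

    # Sketch: multivariate logistic regression fusing three hypothetical
    # per-patient signature scores (RADL, PATHO, DLG3) into a pCR
    # predictor. Synthetic data; not the study's model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))        # columns: RADL, PATHO, DLG3
    y_train = rng.integers(0, 2, size=200)     # 1 = achieved pCR
    model = LogisticRegression().fit(X_train, y_train)

    X_test = rng.normal(size=(60, 3))
    y_test = rng.integers(0, 2, size=60)
    probs = model.predict_proba(X_test)[:, 1]  # predicted pCR probability
    print("AUC:", roc_auc_score(y_test, probs))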
PMID:38805774 | DOI:10.1016/j.tranon.2024.101985
Deep learning for risk stratification of thymoma pathological subtypes based on preoperative CT images
BMC Cancer. 2024 May 28;24(1):651. doi: 10.1186/s12885-024-12394-4.
ABSTRACT
OBJECTIVES: This study aims to develop an innovative deep model for thymoma risk stratification using preoperative CT images. Current algorithms predominantly focus on radiomic features or 2D deep features and require manual tumor segmentation by radiologists, limiting their practical applicability.
METHODS: The deep model was trained and tested on a dataset comprising CT images from 147 patients (82 female; mean age, 54 years ± 10) who underwent surgical resection and received subsequent pathological confirmation. The eligible participants were divided into a training cohort (117 patients) and a testing cohort (30 patients) based on the CT scan time. The model consists of two stages: 3D tumor segmentation and risk stratification. The radiomic model and deep model (2D) were constructed for comparative analysis. Model performance was evaluated through dice coefficient, area under the curve (AUC), and accuracy.
RESULTS: In both the training and testing cohorts, the deep model demonstrated better performance in differentiating thymoma risk, with AUCs of 0.998 and 0.893, respectively, compared with the radiomic model (AUCs of 0.773 and 0.769) and the deep model (2D) (AUCs of 0.981 and 0.760). Notably, the deep model was capable of simultaneously identifying lesions, segmenting the region of interest (ROI), and differentiating the risk of thymoma on arterial phase CT images. Its diagnostic performance outperformed that of the baseline models.
CONCLUSIONS: The deep model has the potential to serve as an innovative decision-making tool, assisting in clinical prognosis evaluation and the discernment of suitable treatments for different thymoma pathological subtypes.
KEY POINTS: • This study incorporated both tumor segmentation and risk stratification. • The deep model, using clinical and 3D deep features, effectively predicted thymoma risk. • The deep model improved AUCs by 16.1 and 17.5 percentage points compared with the radiomic model and the deep model (2D), respectively.
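For reference, the Dice coefficient used above to score the 3D segmentation stage is computed as follows; this is the generic definition, not the authors' code.

    # Generic Dice coefficient for binary segmentation masks (any shape,
    # including 3D volumes); not taken from the paper's implementation.
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Example: two random 3D masks.
    a = np.random.rand(16, 64, 64) > 0.5
    b = np.random.rand(16, 64, 64) > 0.5
    print(dice_coefficient(a, b))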
PMID:38807039 | DOI:10.1186/s12885-024-12394-4
Classification of Caries Based on CBCT: A Deep Learning Network Interpretability Study
J Imaging Inform Med. 2024 May 28. doi: 10.1007/s10278-024-01143-5. Online ahead of print.
ABSTRACT
This study aimed to create a caries classification scheme based on cone-beam computed tomography (CBCT) and develop two deep learning models to improve caries classification accuracy. A total of 2713 axial slices were obtained from CBCT images of 204 carious teeth. Both classification models were trained and tested using the same pretrained classification networks on the dataset, including ResNet50_vd, MobileNetV3_large_ssld, and ResNet50_vd_ssld. The first model was used directly to classify the original images (direct classification model). The second model incorporated a presegmentation step for interpretation (interpretable classification model). Performance evaluation metrics including accuracy, precision, recall, and F1 score were calculated. The Local Interpretable Model-agnostic Explanations (LIME) method was employed to elucidate the decision-making process of the two models. In addition, a minimum distance between caries and pulp was introduced for determining the treatment strategies for type II carious teeth. The direct model that utilized the ResNet50_vd_ssld network achieved top accuracy, precision, recall, and F1 score of 0.700, 0.786, 0.606, and 0.616, respectively. Conversely, the interpretable model consistently yielded metrics surpassing 0.917, irrespective of the network employed. The LIME algorithm confirmed the interpretability of the classification models by identifying key image features for caries classification. Evaluation of treatment strategies for type II carious teeth revealed a significant negative correlation (p < 0.01) with the minimum distance. These results demonstrated that the CBCT-based caries classification scheme and the two classification models appeared to be acceptable tools for the diagnosis and categorization of dental caries.
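The LIME call pattern referenced above follows the lime package's image API; a minimal sketch with a placeholder classifier, not the study's trained networks:

    # Sketch of LIME for an image classifier using the `lime` package.
    # `classifier_fn` is a placeholder standing in for any trained model
    # that maps a batch of RGB images to class probabilities.
    import numpy as np
    from lime import lime_image

    def classifier_fn(images):
        # Placeholder: random two-class probabilities per image.
        p = np.random.rand(len(images), 1)
        return np.hstack([p, 1 - p])

    explainer = lime_image.LimeImageExplainer()
    image = np.random.rand(224, 224, 3)
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=2, hide_color=0, num_samples=100
    )
    # Mask of superpixels that most influenced the top predicted class.
    _, mask = explanation.get_image_and_mask(explanation.top_labels[0])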
PMID:38806951 | DOI:10.1007/s10278-024-01143-5
Enhancing AI Research for Breast Cancer: A Comprehensive Review of Tumor-Infiltrating Lymphocyte Datasets
J Imaging Inform Med. 2024 May 28. doi: 10.1007/s10278-024-01043-8. Online ahead of print.
ABSTRACT
The field of immunology is fundamental to our understanding of the intricate dynamics of the tumor microenvironment. In particular, tumor-infiltrating lymphocyte (TIL) assessment emerges as an essential aspect in breast cancer cases. To gain comprehensive insights, the quantification of TILs through computer-assisted pathology (CAP) tools has become a prominent approach, employing advanced artificial intelligence models based on deep learning techniques. The successful recognition of TILs requires the models to be trained, a process that demands access to annotated datasets. Unfortunately, this task is hampered not only by the scarcity of such datasets, but also by the time-consuming nature of the annotation phase required to create them. Our review endeavors to examine publicly accessible datasets pertaining to the TIL domain and thereby become a valuable resource for the TIL community. The overall aim of the present review is thus to make it easier to train and validate current and upcoming CAP tools for TIL assessment by inspecting and evaluating existing publicly available online datasets.
PMID:38806950 | DOI:10.1007/s10278-024-01043-8
DeepCCR: large-scale genomics-based deep learning method for improving rice breeding
Plant Biotechnol J. 2024 May 28. doi: 10.1111/pbi.14384. Online ahead of print.
NO ABSTRACT
PMID:38805625 | DOI:10.1111/pbi.14384
Deep learning for identifying bee species from images of wings and pinned specimens
PLoS One. 2024 May 28;19(5):e0303383. doi: 10.1371/journal.pone.0303383. eCollection 2024.
ABSTRACT
One of the most challenging aspects of bee ecology and conservation is species-level identification, which is costly, time consuming, and requires taxonomic expertise. Recent advances in the application of deep learning and computer vision have shown promise for identifying large bumble bee (Bombus) species. However, most bees, such as sweat bees in the genus Lasioglossum, are much smaller and can be difficult, even for trained taxonomists, to identify. For this reason, the great majority of bees are poorly represented in the crowdsourced image datasets often used to train computer vision models. But even larger bees, such as bumble bees from the B. vagans complex, can be difficult to separate morphologically. Using images of specimens from our research collections, we assessed how deep learning classification models perform on these more challenging taxa, qualitatively comparing models trained on images of whole pinned specimens or on images of bee forewings. The pinned specimen and wing image datasets represent 20 and 18 species from 6 and 4 genera, respectively, and were used to train the EfficientNetV2L convolutional neural network. Mean test precision was 94.9% and 98.1% for pinned and wing images respectively. Results show that computer vision holds great promise for classifying smaller, more difficult to identify bees that are poorly represented in crowdsourced datasets. Images from research and museum collections will be valuable for expanding classification models to include additional species, which will be essential for large scale conservation monitoring efforts.
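A bare-bones EfficientNetV2L transfer-learning setup in Keras might look like the following; the 20-class head matches the pinned-specimen dataset, while the input size, optimizer and frozen-base strategy are illustrative assumptions rather than the authors' configuration.

    # Transfer learning with EfficientNetV2L in Keras. Only the class
    # count (20 pinned-specimen species) comes from the paper; everything
    # else is an illustrative assumption.
    import tensorflow as tf

    base = tf.keras.applications.EfficientNetV2L(
        include_top=False, weights="imagenet", input_shape=(480, 480, 3)
    )
    base.trainable = False                      # freeze for initial training

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(20, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)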
PMID:38805521 | DOI:10.1371/journal.pone.0303383
Fuzzy ensemble of fine-tuned BERT models for domain-specific sentiment analysis of software engineering dataset
PLoS One. 2024 May 28;19(5):e0300279. doi: 10.1371/journal.pone.0300279. eCollection 2024.
ABSTRACT
Software engineers post their opinions about various topics on social media, and these opinions can be collectively mined using sentiment analysis. Analyzing them is useful because it can provide insight into developers' feedback about various tools and topics. General-purpose sentiment analysis tools do not work well in the software domain because most of them are trained on movie and product review datasets. Therefore, efforts are underway to develop domain-specific sentiment analysis tools for the Software Engineering (SE) domain. However, existing domain-specific tools for SE struggle to compute negative and neutral sentiments and cannot be used on all SE datasets. This work presents a hybrid technique based on deep learning and fine-tuned BERT models, i.e., Bert-Base, Bert-Large, Bert-LSTM, Bert-GRU, and Bert-CNN, adapted as a domain-specific sentiment analysis tool for Community Question Answering datasets (named Fuzzy Ensemble). Five variants of BERT fine-tuned on the SE dataset are developed, and an ensemble of these fine-tuned models is taken using fuzzy logic. The trained model is evaluated on four publicly available benchmark datasets, i.e., Stack Overflow, JavaLib, Jira, and Code Review, using various evaluation metrics. The Fuzzy Ensemble model is also compared with state-of-the-art sentiment analysis tools for the software engineering domain, i.e., SentiStrength-SE, Senti4SD, SentiCR, and the Generative Pre-Training Transformer (GPT), which the authors fine-tuned for domain-specific sentiment analysis. The Fuzzy Ensemble model addresses the limitations of existing tools and improves the accuracy of predicting neutral sentiments, even on diverse datasets. It performs superior to the state-of-the-art tools, achieving a maximum F1-score of 0.883.
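The abstract does not spell out the fuzzy aggregation; a common stand-in is a confidence-weighted average of per-model softmax outputs, sketched below (the weighting scheme is an assumption, not the paper's exact fuzzy logic).

    # Sketch: combining softmax outputs of several fine-tuned BERT
    # variants. The confidence-weighted average here is an illustrative
    # stand-in for the paper's fuzzy-logic ensemble.
    import numpy as np

    def fuzzy_ensemble(prob_list):
        """prob_list: one (n_samples, n_classes) softmax array per model."""
        probs = np.stack(prob_list)                 # (n_models, n, c)
        # Membership weight per model/sample: its own top-class confidence.
        weights = probs.max(axis=2, keepdims=True)  # (n_models, n, 1)
        fused = (weights * probs).sum(axis=0) / weights.sum(axis=0)
        return fused.argmax(axis=1)                 # predicted labels

    # Five hypothetical models, 4 samples, 3 sentiment classes.
    outs = [np.random.dirichlet(np.ones(3), size=4) for _ in range(5)]
    print(fuzzy_ensemble(outs))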
PMID:38805433 | DOI:10.1371/journal.pone.0300279
Improved Denoising of Cryo-Electron Microscopy Micrographs with Simulation-Aware Pretraining
J Comput Biol. 2024 May 28. doi: 10.1089/cmb.2024.0513. Online ahead of print.
ABSTRACT
Cryo-electron microscopy (cryo-EM) has emerged as a potent technique for determining the structure and functionality of biological macromolecules. However, limited by the physical imaging conditions, such as low electron beam dose, micrographs in cryo-EM typically contend with an extremely low signal-to-noise ratio (SNR), impeding the efficiency and efficacy of subsequent analyses. Therefore, there is a growing demand for an efficient denoising algorithm designed for cryo-EM micrographs, aiming to enhance the quality of macromolecular analysis. However, owing to the absence of a comprehensive and well-defined dataset with ground truth images, supervised image denoising methods exhibit limited generalization when applied to experimental micrographs. To tackle this challenge, we introduce a simulation-aware image denoising (SaID) pretrained model designed to enhance the SNR of cryo-EM micrographs, where the training is based solely on an accurately simulated dataset. First, we propose a parameter calibration algorithm for simulated dataset generation, aiming to align simulation parameters with those of experimental micrographs. Second, leveraging the accurately simulated dataset, we propose to train a general deep denoising model that generalizes well to real experimental cryo-EM micrographs. Comprehensive experimental results demonstrate that our pretrained denoising model achieves excellent denoising performance on experimental cryo-EM micrographs, significantly streamlining downstream analysis.
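The training strategy, supervised denoising on simulated clean/noisy pairs, can be illustrated with a minimal PyTorch loop; the toy CNN and Gaussian noise below are placeholders for SaID's architecture and calibrated cryo-EM noise simulation.

    # Minimal supervised denoising loop on simulated pairs (PyTorch).
    # Toy network and noise model only; SaID uses a calibrated cryo-EM
    # simulation to generate its training data.
    import torch
    import torch.nn as nn

    denoiser = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

    for step in range(100):
        clean = torch.rand(8, 1, 64, 64)               # simulated ground truth
        noisy = clean + 0.5 * torch.randn_like(clean)  # simulated low-SNR input
        loss = nn.functional.mse_loss(denoiser(noisy), clean)
        opt.zero_grad(); loss.backward(); opt.step()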
PMID:38805340 | DOI:10.1089/cmb.2024.0513
DIPO: Differentiable Parallel Operation Blocks for Surgical Neural Architecture Search
IEEE J Biomed Health Inform. 2024 May 28;PP. doi: 10.1109/JBHI.2024.3406065. Online ahead of print.
ABSTRACT
Deep learning has been used across a large number of computer vision tasks; however, designing the network architectures for each task is time-consuming. Neural Architecture Search (NAS) promises to automatically build neural networks, optimised for the given task and dataset. However, most NAS methods are constrained to a specific macro-architecture design, which makes them hard to apply to different tasks (classification, detection, segmentation). Following the work in Differentiable NAS (DNAS), we present a simple and efficient NAS method, Differentiable Parallel Operation (DIPO), that constructs a local search space in the form of a DIPO block and can easily be applied to any convolutional network by injecting it in place of the convolutions. The DIPO block's internal architecture and parameters are automatically optimised end-to-end for each task. We demonstrate the flexibility of our approach by applying DIPO to 4 model architectures (U-Net, HRNET, KAPAO and YOLOX) across different surgical tasks (surgical scene segmentation, surgical instrument detection, and surgical instrument pose estimation), evaluated across 5 datasets. Results show significant improvements in surgical scene segmentation (+10.5% in CholecSeg8K, +13.2% in CaDIS), instrument detection (+1.5% in ROBUST-MIS, +5.3% in RoboKP), and instrument pose estimation (+9.8% in RoboKP).
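The general DNAS pattern DIPO builds on, parallel candidate operations mixed by softmax-weighted architecture parameters, can be sketched in PyTorch; the candidate op set below is an assumption, not DIPO's actual search space.

    # DNAS-style block: parallel candidate ops mixed by softmax-weighted
    # architecture parameters (the general DARTS pattern; this op set is
    # illustrative, not DIPO's search space).
    import torch
    import torch.nn as nn

    class ParallelOpBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, channels, 5, padding=2),
                nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU()),
                nn.Identity(),
            ])
            # One learnable architecture weight per candidate operation.
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

        def forward(self, x):
            w = torch.softmax(self.alpha, dim=0)
            return sum(wi * op(x) for wi, op in zip(w, self.ops))

    block = ParallelOpBlock(16)      # drop-in replacement for a convolution
    y = block(torch.rand(2, 16, 32, 32))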
PMID:38805333 | DOI:10.1109/JBHI.2024.3406065
Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction
IEEE Trans Med Imaging. 2024 May 28;PP. doi: 10.1109/TMI.2024.3406559. Online ahead of print.
ABSTRACT
Multi-modal magnetic resonance imaging (MRI) plays a crucial role in comprehensive disease diagnosis in clinical medicine. However, acquiring certain modalities, such as T2-weighted images (T2WIs), is time-consuming and prone to motion artifacts, which negatively impacts subsequent multi-modal image analysis. To address this issue, we propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WI acquisition. While image pre-processing is capable of mitigating misalignment, improper parameter selection leads to adverse pre-processing effects, requiring iterative experimentation and adjustment. To overcome this shortcoming, we employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis, effectively mitigating spatial misalignment effects. Furthermore, we adopt an alternating iteration framework between the reconstruction task and the cross-modal synthesis task to optimize the final results. We then prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as iterations increase, and further show that an improved reconstruction result enhances the synthesis process, whereas an enhanced synthesis result improves the reconstruction process. Finally, experimental results from FastMRI and internal datasets confirm the effectiveness of our method, demonstrating significant improvements in image reconstruction quality even at low sampling rates.
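The alternating scheme reduces to a simple iteration pattern, sketched below with placeholder callables standing in for the paper's learned reconstruction and OT-based synthesis operators.

    # Structural sketch of the alternating reconstruction/synthesis loop.
    # `reconstruct` and `synthesize` are placeholders for the paper's
    # learned operators; only the iteration pattern is shown.
    def alternating_recon(undersampled_t2, t1, reconstruct, synthesize, n_iters=5):
        t2_rec = reconstruct(undersampled_t2, prior=None)
        for _ in range(n_iters):
            t2_syn = synthesize(t1, reference=t2_rec)    # OT-aligned synthesis
            t2_rec = reconstruct(undersampled_t2, prior=t2_syn)
        return t2_rec

    # Toy usage with identity placeholders:
    out = alternating_recon([1, 2], [1, 2],
                            reconstruct=lambda x, prior: x,
                            synthesize=lambda t1, reference: reference)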
PMID:38805327 | DOI:10.1109/TMI.2024.3406559
OrganoIDNet: a deep learning tool for identification of therapeutic effects in PDAC organoid-PBMC co-cultures from time-resolved imaging data
Cell Oncol (Dordr). 2024 May 28. doi: 10.1007/s13402-024-00958-2. Online ahead of print.
ABSTRACT
PURPOSE: Pancreatic Ductal Adenocarcinoma (PDAC) remains a challenging disease due to its complex biology and aggressive behavior with an urgent need for efficient therapeutic strategies. To assess therapy response, pre-clinical PDAC organoid-based models in combination with accurate real-time monitoring are required.
METHODS: We established stable live-imaging organoid/peripheral blood mononuclear cells (PBMCs) co-cultures and introduced OrganoIDNet, a deep-learning-based algorithm, capable of analyzing bright-field images of murine and human patient-derived PDAC organoids acquired with live-cell imaging. We investigated the response to the chemotherapy gemcitabine in PDAC organoids and the PD-L1 inhibitor Atezolizumab, cultured with or without HLA-matched PBMCs over time. Results obtained with OrganoIDNet were validated with the endpoint proliferation assay CellTiter-Glo.
RESULTS: Live cell imaging in combination with OrganoIDNet accurately detected size-specific drug responses of organoids to gemcitabine over time, showing that large organoids were more prone to cytotoxic effects. This approach also allowed distinguishing between healthy and unhealthy status and measuring eccentricity as a readout of the organoids' reaction to therapy. Furthermore, imaging of a new organoid/PBMC sandwich-based co-culture enabled longitudinal analysis of organoid responses to Atezolizumab, showing increased PBMC tumor-killing potency in an organoid-individual manner when Atezolizumab was added.
CONCLUSION: Optimized PDAC organoid imaging analyzed by OrganoIDNet represents a platform capable of accurately detecting organoid responses to standard PDAC chemotherapy over time. Moreover, organoid/immune cell co-cultures allow monitoring of organoid responses to immunotherapy, offering dynamic insights into treatment behavior within a co-culture setting with PBMCs. This setup holds promise for real-time assessment of immunotherapeutic effects in individual patient-derived PDAC organoids.
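Size and eccentricity readouts of the kind described can be extracted from a binary organoid mask with scikit-image; a generic sketch, not the OrganoIDNet pipeline:

    # Measuring organoid size and eccentricity from a binary mask with
    # scikit-image; a generic sketch, not the OrganoIDNet pipeline.
    import numpy as np
    from skimage.measure import label, regionprops

    mask = np.zeros((128, 128), dtype=bool)
    mask[40:80, 50:100] = True                 # placeholder "organoid"

    for region in regionprops(label(mask)):
        print("area:", region.area, "eccentricity:", region.eccentricity)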
PMID:38805131 | DOI:10.1007/s13402-024-00958-2
Editorial for "Deep Learning Model for Grading and Localization of Lumbar Disc Herniation on Magnetic Resonance Imaging"
J Magn Reson Imaging. 2024 May 28. doi: 10.1002/jmri.29457. Online ahead of print.
NO ABSTRACT
PMID:38804734 | DOI:10.1002/jmri.29457
Integrating deep learning with non-destructive thermal imaging for precision guava ripeness determination
J Sci Food Agric. 2024 May 28. doi: 10.1002/jsfa.13614. Online ahead of print.
ABSTRACT
BACKGROUND: To mitigate post-harvest losses and inform harvesting decisions while ensuring fruit quality, precise ripeness determination is essential. Assessing guava ripeness is complex because some varieties show only subtle alterations during the ripening process, making visual assessment less reliable. The present study proposes a non-destructive method employing thermal imaging for guava ripeness assessment, involving obtaining thermal images of guava samples at different ripeness stages, followed by data pre-processing. Five deep learning models (AlexNet, Inception-v3, GoogLeNet, ResNet-50 and VGGNet-16) were applied, and their performances were systematically evaluated and compared.
RESULTS: VGGNet-16 demonstrated outstanding performance, achieving average precision of 0.92, average sensitivity of 0.93, average specificity of 0.96, average F1-score of 0.92 and accuracy of 0.92 within a training duration of 484 s.
CONCLUSION: The present study presents a scalable and non-destructive approach for guava ripeness determination, contributing to waste reduction and enhancing efficiency in supply chains and fruit production. These initiatives align with environmentally friendly practices in agriculture. © 2024 Society of Chemical Industry.
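The reported per-class metrics follow directly from a confusion matrix; a generic computation, not the study's evaluation script:

    # Per-class precision, sensitivity (recall) and specificity from a
    # multi-class confusion matrix; a generic sketch.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = [0, 1, 2, 2, 1, 0, 2]             # placeholder ripeness stages
    y_pred = [0, 1, 2, 1, 1, 0, 2]
    cm = confusion_matrix(y_true, y_pred)

    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    print("precision:  ", tp / (tp + fp))
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))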
PMID:38804719 | DOI:10.1002/jsfa.13614
Effective prediction of human skin cancer using a stacking-based ensemble deep learning algorithm
Network. 2024 May 28:1-37. doi: 10.1080/0954898X.2024.2346608. Online ahead of print.
ABSTRACT
Automated diagnosis of cancer from skin lesion data has been the focus of numerous studies. Nevertheless, interpreting these images can be challenging because of features such as colour illumination changes and variation in the sizes and forms of the lesions. To tackle these problems, the proposed model develops an ensemble of deep learning techniques for skin cancer diagnosis. Initially, skin imaging data are collected and preprocessed using resizing and anisotropic diffusion to enhance image quality. The preprocessed images are fed into the Fuzzy C-Means clustering technique to segment the diseased regions. A stacking-based ensemble deep learning approach is used for classification, with an LSTM acting as the meta-classifier and a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN) providing its inputs. The segmented images are fed into the CNN, while the local binary pattern (LBP) technique is employed to extract features from the image segments for the DNN. The outputs of these two classifiers are fed into the LSTM meta-classifier, which classifies the input data and predicts skin cancer. The proposed approach achieved an accuracy of 97%, demonstrating that the developed model accurately predicts skin cancer.
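For the LBP step, scikit-image provides the standard descriptor; a sketch of the general technique with illustrative parameters, not the paper's settings:

    # Local binary pattern (LBP) texture features from a grayscale image
    # with scikit-image; parameter values are illustrative only.
    import numpy as np
    from skimage.feature import local_binary_pattern

    image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # placeholder segment
    P, R = 8, 1                                 # neighbors, radius
    lbp = local_binary_pattern(image, P, R, method="uniform")
    # Normalized histogram of LBP codes -> fixed-length feature vector.
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    print(hist)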
PMID:38804548 | DOI:10.1080/0954898X.2024.2346608
Clinical Case of Mild Tatton-Brown-Rahman Syndrome Caused by a Nonsense Variant in DNMT3A Gene
Clin Pract. 2024 May 21;14(3):928-933. doi: 10.3390/clinpract14030073.
ABSTRACT
Tatton-Brown-Rahman syndrome is a rare autosomal dominant hereditary disease caused by pathogenic variants in the DNMT3A gene, which is an important participant in epigenetic regulation, especially during embryonic development, and is highly expressed in all tissues. The main features of the syndrome are overgrowth (tall stature), macrocephaly, intellectual disability, and facial dysmorphic features. We present a clinical case of Tatton-Brown-Rahman syndrome in a ten-year-old boy with macrocephaly, learning difficulties, progressive eye impairment, and fatigue, in whom the diagnosis was suspected by a deep learning-based diagnosis assistance system, Face2Gene. The proband underwent whole-exome sequencing, which revealed a recurrent nonsense variant in the 12th exon of DNMT3A leading to the formation of a premature stop codon, NM_022552.5:c.1443C>A (p.Tyr481Ter), in a heterozygous state. This variant was not found in the parents, confirming its de novo status. The case described here contributes to the understanding of the clinical diversity of Tatton-Brown-Rahman syndrome, and its mild clinical presentation expands the phenotypic spectrum of the syndrome. We report the first recurrent nonsense variant in the DNMT3A gene, suggesting a mutational hotspot. Differential diagnosis of this syndrome from Sotos syndrome, Weaver syndrome, and Cowden syndrome, as well as molecular confirmation, is extremely important, since the presence of certain types of pathogenic variants in the DNMT3A gene significantly increases the risk of developing acute myeloid leukemia.
PMID:38804405 | DOI:10.3390/clinpract14030073
A Novel Deep Learning Model for Drug-drug Interactions
Curr Comput Aided Drug Des. 2024;20(5):666-672. doi: 10.2174/0115734099265663230926064638.
ABSTRACT
INTRODUCTION: Drug-drug interactions (DDIs) can lead to adverse events and compromised treatment efficacy that emphasize the need for accurate prediction and understanding of these interactions.
METHODS: In this paper, we propose a novel approach for DDI prediction using two separate message-passing neural network (MPNN) models, each focused on one drug in a pair. By capturing the unique characteristics of each drug and their interactions, the proposed method aims to improve the accuracy of DDI prediction. The outputs of the individual MPNN models are combined to integrate the information from both drugs and their molecular features. Evaluating the proposed method on a comprehensive dataset, we demonstrate its superior performance with an accuracy of 0.90, an area under the curve (AUC) of 0.99, and an F1-score of 0.80. These results highlight the effectiveness of the proposed approach in accurately identifying potential drug-drug interactions.
RESULTS: The use of two separate MPNN models offers a flexible framework for capturing drug characteristics and interactions, contributing to our understanding of DDIs. The findings of this study have significant implications for patient safety and personalized medicine, with the potential to optimize treatment outcomes by preventing adverse events.
CONCLUSION: Further research and validation on larger datasets and real-world scenarios are necessary to explore the generalizability and practicality of this approach.
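The pairing idea, one encoder per drug with merged outputs feeding a binary interaction head, can be sketched generically in PyTorch; the MLP encoders over fingerprint-length vectors below stand in for the paper's MPNNs over molecular graphs.

    # Generic two-branch drug-pair model: one encoder per drug, outputs
    # merged into a binary interaction head. The MLP encoders are an
    # illustrative stand-in for the paper's paired MPNNs.
    import torch
    import torch.nn as nn

    class DrugPairNet(nn.Module):
        def __init__(self, in_dim=1024, hidden=128):
            super().__init__()
            self.enc_a = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, drug_a, drug_b):
            h = torch.cat([self.enc_a(drug_a), self.enc_b(drug_b)], dim=1)
            return torch.sigmoid(self.head(h))   # interaction probability

    model = DrugPairNet()
    p = model(torch.rand(4, 1024), torch.rand(4, 1024))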
PMID:38804324 | DOI:10.2174/0115734099265663230926064638
fMRI-based spatio-temporal parcellations of the human brain
Curr Opin Neurol. 2024 May 28. doi: 10.1097/WCO.0000000000001280. Online ahead of print.
ABSTRACT
PURPOSE OF REVIEW: Human brain parcellation based on functional magnetic resonance imaging (fMRI) plays an essential role in neuroscience research. By segmenting vast and intricate fMRI data into functionally similar units, researchers can better decipher the brain's structure in both healthy and diseased states. This article reviews current methodologies and ideas in this field, while also outlining the obstacles and directions for future research.
RECENT FINDINGS: Traditional brain parcellation techniques, which often rely on cytoarchitectonic criteria, overlook the functional and temporal information accessible through fMRI. The adoption of machine learning techniques, notably deep learning, offers the potential to harness both spatial and temporal information for more nuanced brain segmentation. However, the search for a one-size-fits-all solution to brain segmentation is impractical, with the choice between group-level or individual-level models and the intended downstream analysis influencing the optimal parcellation strategy. Additionally, evaluating these models is complicated by our incomplete understanding of brain function and the absence of a definitive "ground truth".
SUMMARY: While recent methodological advancements have significantly enhanced our grasp of the brain's spatial and temporal dynamics, challenges persist in advancing fMRI-based spatio-temporal representations. Future efforts will likely focus on refining model evaluation and selection as well as developing methods that offer clear interpretability for clinical usage, thereby facilitating further breakthroughs in our comprehension of the brain.
PMID:38804205 | DOI:10.1097/WCO.0000000000001280
Deep learning unlocks label-free viability assessment of cancer spheroids in microfluidics
Lab Chip. 2024 May 28. doi: 10.1039/d4lc00197d. Online ahead of print.
ABSTRACT
Despite recent advances in cancer treatment, refining therapeutic agents remains a critical task for oncologists. Precise evaluation of drug effectiveness necessitates the use of 3D cell culture instead of traditional 2D monolayers. Microfluidic platforms have enabled high-throughput drug screening with 3D models, but current viability assays for 3D cancer spheroids have limitations in reliability and cytotoxicity. This study introduces a deep learning model for non-destructive, label-free viability estimation based on phase-contrast images, providing a cost-effective, high-throughput solution for continuous spheroid monitoring in microfluidics. Microfluidic technology facilitated the creation of a high-throughput cancer spheroid platform with approximately 12 000 spheroids per chip for drug screening. Validation involved tests with eight conventional chemotherapeutic drugs, revealing a strong correlation between viability assessed via LIVE/DEAD staining and phase-contrast morphology. Extending the model's application to novel compounds and cell lines not in the training dataset yielded promising results, implying the potential for a universal viability estimation model. Experiments with an alternative microscopy setup supported the model's transferability across different laboratories. Using this method, we also tracked the dynamic changes in spheroid viability during the course of drug administration. In summary, this research integrates a robust platform with high-throughput microfluidic cancer spheroid assays and deep learning-based viability estimation, with broad applicability to various cell lines, compounds, and research settings.
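Label-free viability estimation of this kind reduces to image regression; a structural PyTorch sketch with a sigmoid output in [0, 1], not the paper's architecture:

    # Minimal image-to-viability regressor: CNN features -> sigmoid score
    # in [0, 1]. A structural sketch only, not the paper's model.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )
    phase_contrast = torch.rand(8, 1, 128, 128)     # placeholder spheroid crops
    viability = model(phase_contrast)               # one score per spheroid
    # Train against LIVE/DEAD-derived viability labels, e.g. with
    # nn.functional.mse_loss(viability, targets).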
PMID:38804084 | DOI:10.1039/d4lc00197d
Deep learning system for screening AIDS-related cytomegalovirus retinitis with ultra-wide-field fundus images
Heliyon. 2024 May 15;10(10):e30881. doi: 10.1016/j.heliyon.2024.e30881. eCollection 2024 May 30.
ABSTRACT
BACKGROUND: Ophthalmological screening for cytomegalovirus retinitis (CMVR) in HIV/AIDS patients is important to prevent lifelong blindness. Previous studies have shown good performance of automated CMVR screening using digital fundus images. However, the application of a deep learning (DL) system to CMVR with ultra-wide-field (UWF) fundus images has not been studied, and the feasibility and efficiency of this method are uncertain.
METHODS: In this study, we developed, internally validated, externally validated, and prospectively validated a DL system to detect AIDS-related CMVR from UWF fundus images from different clinical datasets. We independently used the InceptionResnetV2 network to develop and internally validate a DL system for identifying active CMVR, inactive CMVR, and non-CMVR in 6960 UWF fundus images from 862 AIDS patients and validated the system in a prospective and an external validation dataset using the area under the curve (AUC), accuracy, sensitivity, and specificity. A heat map identified the most important areas (lesions) used by the DL system for differentiating CMVR.
RESULTS: The DL system showed AUCs of 0.945 (95 % confidence interval [CI]: 0.929, 0.962), 0.964 (95 % CI: 0.870, 0.999) and 0.968 (95 % CI: 0.860, 1.000) for detecting active CMVR from non-CMVR and 0.923 (95 % CI: 0.908, 0.938), 0.902 (0.857, 0.948) and 0.884 (0.851, 0.917) for detecting inactive CMVR from non-CMVR in the internal cross-validation, external validation, and prospective validation, respectively. Deep learning performed promisingly in screening CMVR. It also showed the ability to differentiate active CMVR from non-CMVR and inactive CMVR as well as to identify active CMVR and inactive CMVR from non-CMVR (all AUCs in the three independent data sets >0.900). The heat maps successfully highlighted lesion locations.
CONCLUSIONS: Our UWF fundus image-based DL system showed reliable performance for screening AIDS-related CMVR showing its potential for screening CMVR in HIV/AIDS patients, especially in the absence of ophthalmic resources.
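One generic way to produce a heat map like the one described is an input-gradient saliency map (the abstract does not specify the method used); a minimal PyTorch sketch:

    # Simple input-gradient saliency map for a classifier; a generic way
    # to highlight influential regions, not the paper's heat-map method.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(8, 3))      # 3 classes: active/inactive/non-CMVR
    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    score = model(image)[0].max()               # top-class logit
    score.backward()
    saliency = image.grad.abs().max(dim=1)[0]   # (1, 224, 224) heat map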
PMID:38803983 | PMC:PMC11128864 | DOI:10.1016/j.heliyon.2024.e30881
Teeth segmentation and carious lesions segmentation in panoramic X-ray images using CariSeg, a networks' ensemble
Heliyon. 2024 May 10;10(10):e30836. doi: 10.1016/j.heliyon.2024.e30836. eCollection 2024 May 30.
ABSTRACT
BACKGROUND: Dental cavities are common oral diseases that can lead to pain, discomfort, and eventually, tooth loss. Early detection and treatment of cavities can prevent these negative consequences. We propose CariSeg, an intelligent system composed of four neural networks that detects cavities in dental X-rays with 99.42% accuracy.
METHOD: The first model of CariSeg, trained using the U-Net architecture, segments the area of interest, the teeth, and crops the radiograph around it. The next component, an ensemble of three architectures (U-Net, Feature Pyramid Network, and DeeplabV3), segments the carious lesions. For tooth identification, two merged datasets were used: the Tufts Dental Database, consisting of 1000 panoramic radiography images, and another dataset of 116 anonymized panoramic X-rays taken at the Noor Medical Imaging Center, Qom. For carious lesion segmentation, a dataset of 150 panoramic X-ray images was acquired from the Department of Oral and Maxillofacial Surgery and Radiology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca.
RESULTS: The experiments demonstrate that our approach achieves 99.42% accuracy and a mean Dice coefficient of 68.2%.
CONCLUSIONS: AI helps in detecting carious lesions by analyzing dental X-rays and identifying cavities that might be missed by human observers, leading to earlier detection and treatment of cavities and resulting in better oral health outcomes.
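Averaging the probability maps of a U-Net, an FPN and a DeepLabV3 is one common way to build such an ensemble; with segmentation_models_pytorch (an assumed library choice, not necessarily the authors') this could be sketched as:

    # Sketch: ensembling U-Net, FPN and DeepLabV3 probability maps by
    # averaging. segmentation_models_pytorch and its settings are an
    # assumed stand-in for the paper's implementation.
    import torch
    import segmentation_models_pytorch as smp

    nets = [
        smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=1, classes=1),
        smp.FPN(encoder_name="resnet34", encoder_weights=None, in_channels=1, classes=1),
        smp.DeepLabV3(encoder_name="resnet34", encoder_weights=None, in_channels=1, classes=1),
    ]
    x = torch.rand(1, 1, 256, 256)                  # cropped radiograph
    with torch.no_grad():
        probs = torch.stack([net(x).sigmoid() for net in nets]).mean(0)
    caries_mask = probs > 0.5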
PMID:38803980 | PMC:PMC11128823 | DOI:10.1016/j.heliyon.2024.e30836