Deep learning

A synthetic shadow dataset of agricultural settings

Tue, 2024-04-09 06:00

Data Brief. 2024 Mar 23;54:110364. doi: 10.1016/j.dib.2024.110364. eCollection 2024 Jun.

ABSTRACT

Shadow, a natural phenomenon resulting from the absence of direct lighting, finds diverse real-world applications beyond computer vision, such as studying its effect on photosynthesis in plants and on the reduction of solar energy harvesting through photovoltaic panels. This article presents a dataset comprising 50,000 pairs of photorealistic computer-rendered images along with their corresponding physics-based shadow masks, primarily focused on agricultural settings with human activity in the field. The images are generated by simulating a scene in 3D modeling software to produce a pair of top-down images, consisting of a regular image and an overexposed image achieved by adjusting lighting parameters. Specifically, the strength of the light source representing the sun is increased, and all indirect lighting, including global illumination and light bouncing, is disabled. The resulting overexposed image is later converted into a physically accurate shadow mask with minimal annotation errors through post-processing techniques. This dataset holds promise for future research, serving as a basis for transfer learning or as a benchmark for model evaluation in the realm of shadow-related applications such as shadow detection and removal.
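
The post-processing step is not spelled out in the abstract; as a minimal sketch, assuming the overexposed render saturates every directly lit pixel while shadowed pixels stay dark, the shadow mask can be recovered with a single intensity threshold (the function name and threshold value are illustrative, not the authors' pipeline):

```python
import numpy as np

def shadow_mask(overexposed, threshold=0.9):
    """Return a binary mask (1 = shadow) from an overexposed grayscale render.

    With indirect lighting disabled and the sun strength boosted, directly
    lit pixels saturate near 1.0, so anything below the threshold is shadow.
    """
    img = np.asarray(overexposed, dtype=float)
    return (img < threshold).astype(np.uint8)

# Toy 2x3 "render": left column fully lit (saturated), the rest shadowed.
render = np.array([[1.0, 0.2, 0.1],
                   [1.0, 0.3, 0.0]])
mask = shadow_mask(render)
```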

PMID:38590617 | PMC:PMC10999793 | DOI:10.1016/j.dib.2024.110364

Categories: Literature Watch

Deep learning-based image annotation for leukocyte segmentation and classification of blood cell morphology

Mon, 2024-04-08 06:00

BMC Med Imaging. 2024 Apr 8;24(1):83. doi: 10.1186/s12880-024-01254-z.

ABSTRACT

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate leukocytes, background mask creation, erythrocyte mask creation, and leukocyte mask creation are performed on the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep Convolutional Neural Network (CNN) model is employed on the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight Rectified Linear Unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. For this research, a publicly available dataset from Kaggle consisting of a total of 12,444 images of the four leukocyte types was used to conduct the experiments. Results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing steps establishes a foundation for future developments in the field of medical image analysis.
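
The three-mask isolation step could be sketched as intensity thresholding on a normalized grayscale smear; the `make_masks` function and its cut-off values are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def make_masks(gray):
    """Split a normalized grayscale blood-smear image into three masks.

    Assumed intensity ordering: bright plasma background, mid-intensity
    erythrocytes, darkly stained leukocyte nuclei.
    """
    img = np.asarray(gray, dtype=float)
    background = img > 0.8                     # bright plasma regions
    erythrocytes = (img > 0.4) & ~background   # mid-intensity red cells
    leukocytes = img <= 0.4                    # dark leukocyte nuclei
    return background, erythrocytes, leukocytes

smear = np.array([[0.90, 0.50, 0.20],
                  [0.85, 0.45, 0.10]])
bg, rbc, wbc = make_masks(smear)
```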

PMID:38589793 | DOI:10.1186/s12880-024-01254-z

Categories: Literature Watch

Lung pneumonia severity scoring in chest X-ray images using transformers

Mon, 2024-04-08 06:00

Med Biol Eng Comput. 2024 Apr 9. doi: 10.1007/s11517-024-03066-3. Online ahead of print.

ABSTRACT

To create robust and adaptable methods for lung pneumonia diagnosis and the assessment of its severity using chest X-rays (CXR), access to well-curated, extensive datasets is crucial. Many current severity quantification approaches require resource-intensive training for optimal results. Healthcare practitioners require efficient computational tools to swiftly identify COVID-19 cases and predict the severity of the condition. In this research, we introduce a novel image augmentation scheme as well as a neural network model founded on Vision Transformers (ViT) with a small number of trainable parameters for quantifying COVID-19 severity and other lung diseases. Our method, named Vision Transformer Regressor Infection Prediction (ViTReg-IP), leverages a ViT architecture and a regression head. To assess the model's adaptability, we evaluate its performance on diverse chest radiograph datasets from various open sources. We conduct a comparative analysis against several competing deep learning methods. Our method achieved a minimum Mean Absolute Error (MAE) of 0.569 and 0.512 and a maximum Pearson Correlation Coefficient (PC) of 0.923 and 0.855 for the geographic extent score and the lung opacity score, respectively, when the CXRs from the RALO dataset were used in training. The experimental results reveal that our model delivers exceptional performance in severity quantification while maintaining robust generalizability, all with relatively modest computational requirements. The source code used in our work is publicly available at https://github.com/bouthainas/ViTReg-IP.
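
The two reported evaluation metrics, MAE and Pearson correlation, can be computed as follows (the severity-score values here are hypothetical, chosen only to exercise the functions):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error between true and predicted severity scores."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def pearson(y_true, y_pred):
    """Pearson correlation coefficient between two score vectors."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# Hypothetical geographic-extent scores for five CXRs.
truth = [0.0, 2.0, 4.0, 6.0, 8.0]
preds = [0.5, 1.5, 4.5, 5.5, 8.5]
```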

PMID:38589723 | DOI:10.1007/s11517-024-03066-3

Categories: Literature Watch

Diagnostic performance of a deep-learning model using (18)F-FDG PET/CT for evaluating recurrence after radiation therapy in patients with lung cancer

Mon, 2024-04-08 06:00

Ann Nucl Med. 2024 Apr 8. doi: 10.1007/s12149-024-01925-5. Online ahead of print.

ABSTRACT

OBJECTIVE: We developed a deep learning model for distinguishing radiation therapy (RT)-related changes and tumour recurrence in patients with lung cancer who underwent RT, and evaluated its performance.

METHODS: We retrospectively recruited 308 patients with lung cancer with RT-related changes observed on 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET/CT) performed after RT. Patients were labelled as positive or negative for tumour recurrence through histologic diagnosis or clinical follow-up after 18F-FDG PET/CT. A two-dimensional (2D) slice-based convolutional neural network (CNN) model was created with a total of 3329 slices as input, and performance was evaluated with five independent test sets.

RESULTS: For the five independent test sets, the area under the curve (AUC) of the receiver operating characteristic curve, sensitivity, and specificity were in the range of 0.98-0.99, 95-98%, and 87-95%, respectively. The region determined by the model was confirmed as an actual recurred tumour through the explainable artificial intelligence (AI) using gradient-weighted class activation mapping (Grad-CAM).

CONCLUSION: The 2D slice-based CNN model using 18F-FDG PET imaging was able to distinguish well between RT-related changes and tumour recurrence in patients with lung cancer.
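
The Grad-CAM step used for the explainability analysis can be sketched as follows, assuming feature maps and class-score gradients of shape (channels, H, W); this is the standard Grad-CAM formulation, not the authors' implementation:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps, gradients: arrays of shape (channels, H, W).
    Channel weights alpha_k are the global-average-pooled gradients; the
    map is the ReLU of the weighted sum of feature maps.
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
    return np.maximum(cam, 0)                          # ReLU

# Tiny 2-channel, 2x2 example with constant per-channel gradients.
fmaps = np.array([[[1.0, 0.0], [0.0, 1.0]],
                  [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],
                  [[-1.0, -1.0], [-1.0, -1.0]]])
cam = grad_cam(fmaps, grads)
```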

PMID:38589677 | DOI:10.1007/s12149-024-01925-5

Categories: Literature Watch

Accuracy of Artificial Intelligence Models in the Prediction of Periodontitis: A Systematic Review

Mon, 2024-04-08 06:00

JDR Clin Trans Res. 2024 Apr 8:23800844241232318. doi: 10.1177/23800844241232318. Online ahead of print.

ABSTRACT

INTRODUCTION: Periodontitis is the main cause of tooth loss and is related to many systemic diseases. Artificial intelligence (AI) in periodontics has the potential to improve the accuracy of risk assessment and provide personalized treatment planning for patients with periodontitis. This systematic review aims to examine the actual evidence on the accuracy of various AI models in predicting periodontitis.

METHODS: Using a mix of MeSH keywords and free-text words pooled by Boolean operators ('AND', 'OR'), a search strategy without a time frame setting was conducted on the following databases: Web of Science, ProQuest, PubMed, Scopus, and IEEE Xplore. The QUADAS-2 risk of bias assessment was then performed.

RESULTS: From a total of 961 identified records screened, 8 articles were included for qualitative analysis: 4 studies showed an overall low risk of bias, 2 studies an unclear risk, and the remaining 2 studies a high risk. The most employed algorithms for periodontitis prediction were artificial neural networks, followed by support vector machines, decision trees, logistic regression, and random forest. The models showed good predictive performance for periodontitis according to different evaluation metrics, but the presented methods were heterogeneous.

CONCLUSIONS: AI algorithms may, in the future, improve the accuracy and reliability of periodontitis prediction. However, to date, most studies have had a retrospective design and did not consider the most modern deep learning networks. Although the available evidence is limited by a lack of standardized data collection and protocols, the potential benefits of using AI in periodontics are significant and warrant further research and development in this area.

KNOWLEDGE TRANSFER STATEMENT: The use of AI in periodontics can lead to more accurate diagnosis and treatment planning, as well as improved patient education and engagement. Despite the current challenges and limitations of the available evidence, particularly the lack of standardized data collection and analysis protocols, the potential benefits of using AI in periodontics are significant and warrant further research and development in this area.

PMID:38589339 | DOI:10.1177/23800844241232318

Categories: Literature Watch

A multi-modal feature fusion method based on deep learning for predicting immunotherapy response

Mon, 2024-04-08 06:00

J Theor Biol. 2024 Apr 6:111816. doi: 10.1016/j.jtbi.2024.111816. Online ahead of print.

ABSTRACT

Immune checkpoint therapy (ICT) has greatly improved the survival of cancer patients in the past few years, but only a small number of patients respond to it. To predict ICT response, we developed a multi-modal feature fusion model based on deep learning (MFMDL). This model uses graph neural networks to map gene-gene relationships in gene networks to low-dimensional vector spaces, and then fuses biological pathway features and immune cell infiltration features to make robust predictions of ICT response. We used five datasets, spanning multiple cancer types including melanoma, lung cancer, and gastric cancer, to validate the predictive performance of MFMDL. We found that its prediction performance is superior to that of traditional ICT biomarkers, such as ICT targets or tumor microenvironment-associated markers. We also conducted ablation experiments, which demonstrate that fusing the different modal features is necessary and improves the prediction accuracy of the model.

PMID:38589007 | DOI:10.1016/j.jtbi.2024.111816

Categories: Literature Watch

Deep learning-based assay for programmed death ligand 1 immunohistochemistry scoring in non-small cell lung carcinoma: Does it help pathologists score?

Mon, 2024-04-08 06:00

Mod Pathol. 2024 Apr 6:100485. doi: 10.1016/j.modpat.2024.100485. Online ahead of print.

ABSTRACT

Several studies have developed various artificial intelligence (AI) models for immunohistochemical analysis of programmed death ligand 1 (PD-L1) in patients with non-small cell lung carcinoma; however, none have focused on specific ways by which AI-assisted systems could help pathologists determine the tumor proportion score (TPS). Herein, we developed an AI model to calculate the TPS of the PD-L1 22C3 assay and evaluated whether and how this AI-assisted system could help pathologists determine the TPS, and how it could affect their assessment accuracy. We assessed four methods of AI assistance: 1) and 2) pathologists first assessed and then referred to the automated AI scoring results (1, positive tumor cell percentage; 2, positive tumor cell percentage and a visualized overlay image) for final confirmation; 3) and 4) pathologists referred to the automated AI scoring results (3, positive tumor cell percentage; 4, positive tumor cell percentage and a visualized overlay image) while determining the TPS. Mixed-model analysis was used to calculate the odds ratios (ORs) with 95% confidence intervals for AI-assisted TPS methods 1) to 4) compared with pathologists' unassisted scoring. For all 584 tissue microarray samples, the ORs for AI-assisted TPS methods 1) to 4) were 0.94-1.07 and not statistically significant. Among them were 332 discordant cases, on which the pathologists' judgments were inconsistent; for these, the ORs for AI-assisted TPS methods 1), 2), 3), and 4) were 1.28 (1.06-1.54, p = 0.012), 1.29 (1.06-1.55, p = 0.010), 1.28 (1.06-1.54, p = 0.012), and 1.29 (1.06-1.55, p = 0.010), respectively, all statistically significant. For the discordant cases, the OR for each AI-assisted TPS method compared with the others was 0.99-1.01 and not statistically significant. This study emphasizes the usefulness of the AI-assisted system for cases in which pathologists have difficulty determining the PD-L1 TPS.
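
For reference, the tumor proportion score is the percentage of viable tumor cells showing PD-L1 membrane staining; a minimal computation with hypothetical counts:

```python
def tumor_proportion_score(positive_tumor_cells, viable_tumor_cells):
    """TPS = 100 * (PD-L1-positive viable tumor cells) / (viable tumor cells)."""
    if viable_tumor_cells <= 0:
        raise ValueError("viable tumor cell count must be positive")
    return 100.0 * positive_tumor_cells / viable_tumor_cells

# Hypothetical counts from one tissue core.
tps = tumor_proportion_score(positive_tumor_cells=45, viable_tumor_cells=300)
```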

PMID:38588885 | DOI:10.1016/j.modpat.2024.100485

Categories: Literature Watch

Lung cancer diagnosis on virtual histologically stained tissue using weakly supervised learning

Mon, 2024-04-08 06:00

Mod Pathol. 2024 Apr 6:100487. doi: 10.1016/j.modpat.2024.100487. Online ahead of print.

ABSTRACT

Lung adenocarcinoma (LUAD) is the most common primary lung cancer and accounts for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is based on the pathologists' interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire the H&E-stained images and the subsequent cancer diagnosis procedures are labor-intensive and time-consuming with tedious sample preparation steps and repetitive manual interpretation, respectively. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. The autofluorescence images of label-free tissue with histopathological information can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained the attention-based multiple instance learning (MIL) model with different settings on the open-source LUAD H&E whole-slide images (WSIs) dataset from the Cancer Genome Atlas (TCGA). The model is validated on the 150 H&E WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E WSIs, comparable to the results on 58 standard H&E WSIs with an average AUC of 0.977. The attention heatmaps of virtual H&E and ground truth H&E can indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations. The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
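
Attention-based MIL pooling, the aggregation step named above, can be sketched as follows; the projection matrices here are random placeholders standing in for trained parameters, and the shapes are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(instances, V, w):
    """Attention-based MIL pooling.

    instances: (n, d) tile embeddings from one slide (the "bag").
    V: (d, h) projection; w: (h,) attention vector.
    Returns the softmax attention weights and the weighted bag embedding.
    """
    scores = np.tanh(instances @ V) @ w   # (n,) raw attention scores
    a = np.exp(scores - scores.max())
    a /= a.sum()                          # softmax over the bag's tiles
    return a, a @ instances               # weights, slide-level embedding

tiles = rng.normal(size=(5, 8))  # 5 tiles, 8-dim embeddings
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
attn, bag = attention_pool(tiles, V, w)
```

The attention weights are what a heatmap like the one described above visualizes: tiles with high weight are the regions the model treats as tumor evidence.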

PMID:38588884 | DOI:10.1016/j.modpat.2024.100487

Categories: Literature Watch

Graph perceiver network for lung tumor and bronchial premalignant lesion stratification from histopathology

Mon, 2024-04-08 06:00

Am J Pathol. 2024 Apr 6:S0002-9440(24)00124-X. doi: 10.1016/j.ajpath.2024.03.009. Online ahead of print.

ABSTRACT

Bronchial premalignant lesions (PMLs) precede the development of invasive lung squamous carcinoma (LUSC), posing a significant challenge in distinguishing those likely to advance to LUSC from those that might regress without intervention. In this context, we present a novel computational approach, the Graph Perceiver Network (GRAPE-Net), leveraging hematoxylin and eosin (H&E) stained whole slide images (WSIs) to stratify endobronchial biopsies of PMLs across a spectrum from normal to tumor lung tissues. GRAPE-Net outperforms existing frameworks in classification accuracy predicting LUSC, lung adenocarcinoma (LUAD), and non-tumor (normal) lung tissue on The Cancer Genome Atlas (TCGA) and Clinical Proteomic Tumor Analysis Consortium (CPTAC) datasets containing lung resection tissues while efficiently generating pathologist-aligned, class-specific heatmaps. The network was further tested using endobronchial biopsies from two data cohorts, containing normal to carcinoma in situ histology, and it demonstrated a unique capability to differentiate carcinoma in situ lung squamous PMLs based on their progression status to invasive carcinoma. The network may have utility in stratifying PMLs for chemoprevention trials or more aggressive follow-up.

PMID:38588853 | DOI:10.1016/j.ajpath.2024.03.009

Categories: Literature Watch

Deep Learning Methods in Biomedical Informatics

Mon, 2024-04-08 06:00

Methods. 2024 Apr 6:S1046-2023(24)00084-7. doi: 10.1016/j.ymeth.2024.04.002. Online ahead of print.

NO ABSTRACT

PMID:38588786 | DOI:10.1016/j.ymeth.2024.04.002

Categories: Literature Watch

Higher blood biochemistry-based biological age developed by advanced deep learning techniques is associated with frailty in geriatric rehabilitation inpatients: RESORT

Mon, 2024-04-08 06:00

Exp Gerontol. 2024 Apr 6:112421. doi: 10.1016/j.exger.2024.112421. Online ahead of print.

ABSTRACT

BACKGROUND: Accelerated biological ageing is a major underlying mechanism of frailty development. This study aimed to investigate if the biological age measured by a blood biochemistry-based ageing clock is associated with frailty in geriatric rehabilitation inpatients.

METHODS: Within the REStORing health of acutely unwell adulTs (RESORT) cohort, patients' biological age was measured by an ageing clock based on complete data for 30 routine blood test variables measured at rehabilitation admission. The delta of biological age minus chronological age (years) was calculated. Ordinal logistic regression and multinomial logistic regression were performed to evaluate the association of the delta of ages with frailty assessed by the Clinical Frailty Scale. Effect modification by the Cumulative Illness Rating Scale (CIRS) score was tested.

RESULTS: A total of 1187 geriatric rehabilitation patients were included (median age: 83.4 years, IQR: 77.7-88.5; 57.4% female). The biochemistry-based biological age was strongly correlated with chronological age (Spearman r = 0.883). After adjustment for age, sex, and primary reasons for acute admission, higher biological age (per 1-year increase in the delta of ages) was associated with more severe frailty at admission (OR: 1.053, 95% CI: 1.012-1.096) in patients with a CIRS score of <12, but not in patients with a CIRS score >12. The delta of ages was not associated with frailty change from admission to discharge. Specific frailty manifestations such as cardiac, hematological, respiratory, renal, and endocrine conditions were associated with higher biological age.

CONCLUSION: Higher biological age was associated with severe frailty in geriatric rehabilitation inpatients with less comorbidity burden.

PMID:38588752 | DOI:10.1016/j.exger.2024.112421

Categories: Literature Watch

CBCT-DRRs superior to CT-DRRs for target-tracking applications for pancreatic SBRT

Mon, 2024-04-08 06:00

Biomed Phys Eng Express. 2024 Apr 8. doi: 10.1088/2057-1976/ad3bb9. Online ahead of print.

ABSTRACT

In current radiograph-based intra-fraction markerless target-tracking, digitally reconstructed radiographs (DRRs) from planning CTs (CT-DRRs) are often used to train deep learning models that extract information from the intra-fraction radiographs acquired during treatment. Traditional DRR algorithms were designed for patient alignment (i.e. bone matching) and may not replicate the radiographic image quality of intra-fraction radiographs at treatment. Hypothetically, generating DRRs from pre-treatment Cone-Beam CTs (CBCT-DRRs) with DRR algorithms incorporating physical modelling of on-board imagers (OBIs) could improve the similarity between intra-fraction radiographs and DRRs by eliminating inter-fraction variation and reducing image-quality mismatches between radiographs and DRRs. In this study, we test the two hypotheses that intra-fraction radiographs are more similar to CBCT-DRRs than CT-DRRs, and that intra-fraction radiographs are more similar to DRRs from algorithms incorporating physical models of OBI components than DRRs from algorithms omitting these models.

MAIN RESULTS: Intra-fraction radiographs were more similar to CBCT-DRRs than CT-DRRs for both metrics across all algorithms, with all p<0.007. Source-spectrum modelling improved radiograph-DRR similarity for both metrics, with all p<1E-6. OBI detector modelling and patient material modelling did not influence radiograph-DRR similarity for either metric.

SIGNIFICANCE: Generating CBCT-DRRs from pre-treatment CBCTs is feasible, and incorporating CBCT-DRRs into markerless target-tracking methods may promote improved target-tracking accuracies. Incorporating source-spectrum modelling into a treatment planning system's DRR algorithms may reinforce the safe treatment of cancer patients by aiding in patient alignment.

PMID:38588646 | DOI:10.1088/2057-1976/ad3bb9

Categories: Literature Watch

Prediction of protein N-terminal acetylation modification sites based on CNN-BiLSTM-attention model

Mon, 2024-04-08 06:00

Comput Biol Med. 2024 Mar 21;174:108330. doi: 10.1016/j.compbiomed.2024.108330. Online ahead of print.

ABSTRACT

N-terminal acetylation is one of the most common and important post-translational modifications (PTMs) of eukaryotic proteins. PTMs play a crucial role in various cellular processes and disease pathogenesis. Thus, the accurate identification of N-terminal acetylation modifications is important to gain insight into cellular processes and other possible functional mechanisms. Although some algorithmic models have been proposed, most have been developed based on traditional machine learning algorithms and small training datasets, so their practical applications are limited. Deep learning algorithmic models, by contrast, are better at handling high-throughput and complex data. In this study, DeepCBA, a model based on a hybrid deep learning framework of convolutional neural network (CNN), bidirectional long short-term memory network (BiLSTM), and attention mechanism, was constructed to detect N-terminal acetylation sites. DeepCBA was built as follows: First, a benchmark dataset was generated by selecting low-redundancy protein sequences from the UniProt database and further reducing the redundancy of the protein sequences with the CD-HIT tool. Subsequently, based on the skip-gram model in the word2vec algorithm, tripeptide word vector features were generated on the benchmark dataset. Finally, the CNN, BiLSTM, and attention mechanism were combined, and the tripeptide word vector features were fed into the stacked model for multiple rounds of training. The model performed excellently on the independent test dataset, with an accuracy and area under the curve of 80.51% and 87.36%, respectively. Altogether, DeepCBA achieved superior performance compared with the baseline model and significantly outperformed most existing predictors. Additionally, our model can be used to identify disease loci and drug targets.
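
The tripeptide tokenization that feeds the skip-gram model can be sketched as a sliding window of overlapping 3-mers over the protein sequence (the word2vec training itself is omitted; the sequence is a made-up example):

```python
def tripeptides(sequence):
    """Return the overlapping 3-mers ("tripeptide words") of a protein sequence."""
    return [sequence[i:i + 3] for i in range(len(sequence) - 2)]

# A made-up 6-residue sequence yields four overlapping tripeptide tokens.
tokens = tripeptides("MKVLAA")
```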

PMID:38588617 | DOI:10.1016/j.compbiomed.2024.108330

Categories: Literature Watch

AI-based support for optical coherence tomography in age-related macular degeneration

Mon, 2024-04-08 06:00

Int J Retina Vitreous. 2024 Apr 8;10(1):31. doi: 10.1186/s40942-024-00549-1.

ABSTRACT

Artificial intelligence (AI) has emerged as a transformative technology across various fields, and its applications in the medical domain, particularly in ophthalmology, have gained significant attention. The vast amount of high-resolution image data, such as optical coherence tomography (OCT) images, has been a driving force behind AI growth in this field. Age-related macular degeneration (AMD) is one of the leading causes of blindness in the world, affecting approximately 196 million people worldwide in 2020. Multimodal imaging has long been the gold standard for diagnosing patients with AMD; however, treatment and follow-up in routine disease management are currently driven mainly by OCT imaging. By virtue of their precision, reproducibility, and speed, AI-based algorithms have the potential to reliably quantify biomarkers, predict disease progression, and assist treatment decisions in clinical routine as well as academic studies. This review paper aims to provide a summary of the current state of AI in AMD, focusing on its applications, challenges, and prospects.

PMID:38589936 | DOI:10.1186/s40942-024-00549-1

Categories: Literature Watch

Deep transfer learning with fuzzy ensemble approach for the early detection of breast cancer

Mon, 2024-04-08 06:00

BMC Med Imaging. 2024 Apr 8;24(1):82. doi: 10.1186/s12880-024-01267-8.

ABSTRACT

Breast cancer is a significant global health challenge, particularly affecting women, with higher mortality than many other cancer types. Timely detection of such cancers is crucial, and recent research employing deep learning techniques shows promise for earlier detection. This research focuses on the early detection of such tumors in mammogram images using deep-learning models. The paper utilized four public databases, from which 986 mammograms were taken for each of three classes (normal, benign, malignant) for evaluation. Herein, three deep CNN models, VGG-11, Inception v3, and ResNet50, are employed as base classifiers. The research adopts an ensemble method in which the proposed approach uses a modified Gompertz function to build a fuzzy ranking of the base classification models, and their decision scores are integrated adaptively to construct the final prediction. The classification results of the proposed fuzzy ensemble approach outperform transfer learning models and other ensemble approaches such as weighted average and Sugeno integral techniques. The proposed ResNet50 ensemble network using the modified Gompertz function-based fuzzy ranking approach provides a superior classification accuracy of 98.986%.
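
A fuzzy-rank ensemble along these lines can be sketched as below. The paper's exact modified Gompertz function and adaptive fusion rule are not given in the abstract, so the rank function 1 - exp(-exp(-2x)) and product fusion (lowest fused rank wins) used here are assumptions based on common fuzzy-rank ensembles, not the authors' code:

```python
import math

def gompertz_rank(score):
    """Map a confidence score in [0, 1] to a fuzzy rank; higher score -> lower rank."""
    return 1.0 - math.exp(-math.exp(-2.0 * score))

def fuse(score_lists):
    """Fuse per-classifier class scores by multiplying fuzzy ranks.

    score_lists: one list of per-class confidence scores per base classifier.
    Returns the index of the winning class (minimum fused rank).
    """
    n_classes = len(score_lists[0])
    fused = [1.0] * n_classes
    for scores in score_lists:
        for c, s in enumerate(scores):
            fused[c] *= gompertz_rank(s)
    return fused.index(min(fused))

# Three base classifiers, three classes (normal, benign, malignant);
# all three favour class 2, so the fused decision is class 2.
pred = fuse([[0.1, 0.2, 0.9],
             [0.2, 0.1, 0.8],
             [0.1, 0.3, 0.7]])
```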

PMID:38589813 | DOI:10.1186/s12880-024-01267-8

Categories: Literature Watch

Improving the performance of supervised deep learning for regulatory genomics using phylogenetic augmentation

Mon, 2024-04-08 06:00

Bioinformatics. 2024 Apr 8:btae190. doi: 10.1093/bioinformatics/btae190. Online ahead of print.

ABSTRACT

MOTIVATION: Supervised deep learning is used to model the complex relationship between genomic sequence and regulatory function. Understanding how these models make predictions can provide biological insight into regulatory functions. Given the complexity of the sequence to regulatory function mapping (the cis-regulatory code), it has been suggested that the genome contains insufficient sequence variation to train models with suitable complexity. Data augmentation is a widely used approach to increase the data variation available for model training, however current data augmentation methods for genomic sequence data are limited.

RESULTS: Inspired by the success of comparative genomics, we show that augmenting genomic sequences with evolutionarily related sequences from other species, which we term phylogenetic augmentation, improves the performance of deep learning models trained on regulatory genomic sequences to predict high-throughput functional assay measurements. Additionally, we show that phylogenetic augmentation can rescue model performance when the training set is down-sampled and permits deep learning on a real-world small dataset, demonstrating that this approach improves data efficiency. Overall, this data augmentation method represents a solution for improving model performance that is applicable to many supervised deep learning problems in genomics.
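
The augmentation itself reduces to pairing each training sequence with evolutionarily related sequences that inherit its label; the ortholog lookup below is a hypothetical stand-in for a real orthology mapping or whole-genome alignment:

```python
def phylogenetic_augment(dataset, orthologs):
    """Augment a regulatory-sequence training set with related sequences.

    dataset: list of (sequence, label) pairs.
    orthologs: dict mapping a sequence to evolutionarily related sequences
    from other species; each related sequence inherits the original label.
    """
    augmented = list(dataset)
    for seq, label in dataset:
        for related in orthologs.get(seq, []):
            augmented.append((related, label))
    return augmented

# Toy example: one training sequence plus two hypothetical orthologs.
train = [("ACGTACGT", 1.2)]
orthos = {"ACGTACGT": ["ACGTACGA", "ACCTACGT"]}
big = phylogenetic_augment(train, orthos)
```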

AVAILABILITY: The open-source GitHub repository agduncan94/phylogenetic_augmentation_paper includes the code for rerunning the analyses here and recreating the figures.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38588559 | DOI:10.1093/bioinformatics/btae190

Categories: Literature Watch

Association Between Body Composition and Survival in Patients With Gastroesophageal Adenocarcinoma: An Automated Deep Learning Approach

Mon, 2024-04-08 06:00

JCO Clin Cancer Inform. 2024 Apr;8:e2300231. doi: 10.1200/CCI.23.00231.

ABSTRACT

PURPOSE: Body composition (BC) may play a role in outcome prognostication in patients with gastroesophageal adenocarcinoma (GEAC). Artificial intelligence provides new possibilities to opportunistically quantify BC from computed tomography (CT) scans. We developed a deep learning (DL) model for fully automatic BC quantification on routine staging CTs and determined its prognostic role in a clinical cohort of patients with GEAC.

MATERIALS AND METHODS: We developed and tested a DL model to quantify BC measures defined as subcutaneous and visceral adipose tissue (VAT) and skeletal muscle on routine CT and investigated their prognostic value in a cohort of patients with GEAC using baseline, 3-6-month, and 6-12-month postoperative CTs. Primary outcome was all-cause mortality, and secondary outcome was disease-free survival (DFS). Cox regression assessed the association between (1) BC at baseline and mortality and (2) the decrease in BC between baseline and follow-up scans and mortality/DFS.

RESULTS: Model performance was high with Dice coefficients ≥0.94 ± 0.06. Among 299 patients with GEAC (age 63.0 ± 10.7 years; 19.4% female), 140 deaths (47%) occurred over a median follow-up of 31.3 months. At baseline, no BC measure was associated with DFS. Only a substantial decrease in VAT >70% after a 6- to 12-month follow-up was associated with mortality (hazard ratio [HR], 1.99 [95% CI, 1.18 to 3.34]; P = .009) and DFS (HR, 1.73 [95% CI, 1.01 to 2.95]; P = .045) independent of age, sex, BMI, Union for International Cancer Control stage, histologic grading, resection status, neoadjuvant therapy, and time between surgery and follow-up CT.

CONCLUSION: DL enables opportunistic estimation of BC from routine staging CT to quantify prognostic information. In patients with GEAC, only a substantial decrease of VAT 6-12 months postsurgery was an independent predictor for DFS beyond traditional risk factors, which may help to identify individuals at high risk who go otherwise unnoticed.
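
The survival modelling itself is beyond a short sketch, but the predictor identified above, a >70% decrease in visceral adipose tissue (VAT) between baseline and the 6- to 12-month scan, is simple to compute (function names and example values are illustrative):

```python
def vat_decrease_pct(baseline_vat, followup_vat):
    """Percent decrease in VAT from the baseline scan to the follow-up scan."""
    if baseline_vat <= 0:
        raise ValueError("baseline VAT must be positive")
    return 100.0 * (baseline_vat - followup_vat) / baseline_vat

def high_risk(baseline_vat, followup_vat, threshold=70.0):
    """Flag patients whose VAT decrease exceeds the study's 70% cut-off."""
    return vat_decrease_pct(baseline_vat, followup_vat) > threshold

# Hypothetical VAT areas: 200 at baseline, 50 at follow-up = 75% decrease.
flag = high_risk(baseline_vat=200.0, followup_vat=50.0)
```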

PMID:38588476 | DOI:10.1200/CCI.23.00231

Categories: Literature Watch

Artificial intelligence and natural product research

Mon, 2024-04-08 06:00

Nat Prod Res. 2024 Apr 8:1-3. doi: 10.1080/14786419.2024.2333048. Online ahead of print.

NO ABSTRACT

PMID:38588438 | DOI:10.1080/14786419.2024.2333048

Categories: Literature Watch

How Artificial Intelligence Unravels the Complex Web of Cancer Drug Response

Mon, 2024-04-08 06:00

Cancer Res. 2024 Apr 8. doi: 10.1158/0008-5472.CAN-24-1123. Online ahead of print.

ABSTRACT

The intersection of precision medicine and artificial intelligence (AI) holds profound implications for cancer treatment, with the potential to significantly advance our understanding of drug responses based on the intricate architecture of tumor cells. A recent study by Park and colleagues titled "A deep learning model of tumor cell architecture elucidates response and resistance to CDK4/6 inhibitors," epitomizes this intersection by leveraging an interpretable deep learning model grounded in a comprehensive map of multiprotein assemblies in cancer, known as Nested Systems in Tumors (NeST). This study not only elucidates mechanisms underlying the response to CDK4/6 inhibitors in breast cancer therapy but also highlights the critical role of model interpretability leading to new mechanistic insights.

PMID:38588311 | DOI:10.1158/0008-5472.CAN-24-1123

Categories: Literature Watch

Statistical Machine Learning for Power Flow Analysis Considering the Influence of Weather Factors on Photovoltaic Power Generation

Mon, 2024-04-08 06:00

IEEE Trans Neural Netw Learn Syst. 2024 Apr 8;PP. doi: 10.1109/TNNLS.2024.3382763. Online ahead of print.

ABSTRACT

It is generally accepted that the impact of weather variation is gradually increasing in modern distribution networks with the integration of high-proportion photovoltaic (PV) power generation and weather-sensitive loads. This article analyzes power flow using a novel stochastic weather generator (SWG) based on statistical machine learning (SML). The proposed SML model, which incorporates generative adversarial networks (GANs), probability theory, and information theory, enables the generation and evaluation of simulated hourly weather data throughout the year. The GAN model captures various weather variation characteristics, including weather uncertainties, diurnal variations, and seasonal patterns. Compared to shallow learning models, the proposed deep learning model exhibits significant advantages in stochastic weather simulation. The simulated data generated by the proposed model closely resemble real data in terms of time-series regularity, integrity, and stochasticity. The SWG is applied to model PV power generation and weather-sensitive loads. We then conduct a power flow analysis (PFA) on a real distribution network in Guangdong, China, using simulated data for an entire year. The results provide evidence that the GAN-based SWG surpasses the shallow machine learning approach in terms of accuracy. The proposed model ensures accurate analysis of weather-related power flow and provides valuable insights for the analysis, planning, and design of distribution networks.

PMID:38587954 | DOI:10.1109/TNNLS.2024.3382763

Categories: Literature Watch
