Deep learning

Optimized robust learning framework based on big data for forecasting cardiovascular crises

Fri, 2024-11-15 06:00

Sci Rep. 2024 Nov 15;14(1):28224. doi: 10.1038/s41598-024-76569-6.

ABSTRACT

Numerous Deep Learning (DL) scenarios have been developed for new healthcare systems that leverage large datasets, distributed computing, and the Internet of Things (IoT). However, the data used in these scenarios tend to be noisy, necessitating robust pre-processing techniques, including data cleaning, preparation, normalization, and imbalance handling. These steps are crucial for generating a robust training dataset. Designing frameworks capable of handling such data without compromising efficiency is essential to ensuring robustness. This research proposes a novel healthcare framework that selects the best features and enhances performance. The robust deep learning framework, called R-DLH2O, is designed for forecasting cardiovascular crises. Unlike existing methods, R-DLH2O integrates five distinct phases: robust pre-processing, feature selection, feed-forward neural network, prediction, and performance evaluation. This multi-phase approach ensures superior accuracy and efficiency in crisis prediction, offering a significant advancement in healthcare analytics. H2O is utilized in the R-DLH2O framework for processing big data. The main contribution of this paper is a new variant of the Whale Optimization Algorithm (WOA), the Modified WOA (MWOA). A Gaussian-distribution random walk is combined with a diffusion strategy to choose the optimal MWOA solution during the growth phase. To validate the R-DLH2O framework, six performance tests were conducted. Surprisingly, MWOA-2 outperformed the other heuristic algorithms in speed, despite exhibiting lower accuracy and scalability. The proposed MWOA was further analyzed using benchmark functions from CEC2005, demonstrating its advantages in accuracy and robustness over WOA.
The framework achieves a processing time of 436 s, a mean per-class error of 0.150125, accuracy of 95.93%, precision of 92.57%, and recall of 93.6% across all datasets. These findings highlight the framework's potential to produce significant and robust results, outperforming previous frameworks in both time and accuracy.
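
The key algorithmic ingredient described above, a Gaussian random walk folded into a whale-optimization loop, can be sketched as a toy on the CEC2005-style sphere benchmark. This is an illustrative simplification, not the authors' R-DLH2O code: the population size, `sigma`, and the greedy acceptance rule are assumptions.

```python
import numpy as np

def sphere(x):
    # CEC2005-style sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin
    return float(np.sum(x ** 2))

def mwoa_gaussian_walk(obj, dim=5, n_whales=20, iters=100, sigma=0.1, seed=0):
    """Toy whale-style optimizer with a Gaussian random-walk diffusion step.

    Each candidate either moves toward the current best solution
    (encircling-prey step) or takes a Gaussian random walk around it;
    a move is kept only if it improves that candidate (greedy selection).
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(n_whales, dim))
    fit = np.array([obj(x) for x in pop])
    best = pop[np.argmin(fit)].copy()
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)  # coefficient shrinks linearly, as in standard WOA
        for i in range(n_whales):
            if rng.random() < 0.5:
                # encircling-prey move toward the best solution
                A = a * (2.0 * rng.random(dim) - 1.0)
                cand = best - A * np.abs(2.0 * rng.random(dim) * best - pop[i])
            else:
                # Gaussian random-walk diffusion around the best solution
                cand = best + sigma * rng.standard_normal(dim)
            if obj(cand) < fit[i]:
                pop[i], fit[i] = cand, obj(cand)
        best = pop[np.argmin(fit)].copy()
    return best, float(fit.min())
```

The greedy acceptance makes each candidate's fitness monotonically non-increasing, so the population contracts toward the best solution while the Gaussian walk keeps probing its neighborhood.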

PMID:39548142 | DOI:10.1038/s41598-024-76569-6

Categories: Literature Watch

Study on intelligent recognition of urban road subgrade defect based on deep learning

Fri, 2024-11-15 06:00

Sci Rep. 2024 Nov 15;14(1):28119. doi: 10.1038/s41598-024-72580-z.

ABSTRACT

China's operational highway subgrades exhibit increasingly diverse defect types and a growing number of defects, leading to more frequent urban road safety incidents. Starting from non-destructive testing of urban road subgrade defects with geological radar, this paper aims to achieve intelligent identification of subgrade defects. The GprMax forward simulation software is used to establish multi-layer composite structural models of the subgrade, studying the characteristics of geological radar images for different types of subgrade defects. Based on the forward-simulated geological radar images of subgrade defects and field measurement data, a geological radar subgrade defect image database is established. The Faster R-CNN deep learning algorithm is applied to achieve target detection, recognition, and classification of subgrade defect images. Using the loss value, total number of identified regions, and recognition accuracy as metrics, the study compares four improved versions of the Faster R-CNN algorithm. The results indicate that the faster_rcnn_inception_v2 version is more suitable for intelligent identification in non-destructive testing of urban road subgrade defects.

PMID:39548115 | DOI:10.1038/s41598-024-72580-z

Categories: Literature Watch

Topographic and quantitative correlation of structure and function using deep learning in subclinical biomarkers of intermediate age-related macular degeneration

Fri, 2024-11-15 06:00

Sci Rep. 2024 Nov 15;14(1):28165. doi: 10.1038/s41598-024-72522-9.

ABSTRACT

To examine the morphological impact of deep learning (DL)-quantified biomarkers on point-wise sensitivity (PWS) using microperimetry (MP) and optical coherence tomography (OCT) in intermediate AMD (iAMD). Patients with iAMD were examined by OCT (Spectralis). DL-based algorithms quantified ellipsoid zone (EZ) thickness, hyperreflective foci (HRF), and drusen volume. Outer nuclear layer (ONL) thickness and subretinal drusenoid deposits (SDD) were quantified by human experts. All patients completed four MP examinations using an identical custom 45-stimuli grid on MP-3 (NIDEK) and MAIA (CenterVue). MP stimuli were co-registered with the corresponding OCT using image registration algorithms. Multivariable mixed-effect models were calculated. In total, 3600 PWS from 20 eyes of 20 patients were analyzed. Decreased EZ thickness, decreased ONL thickness, increased HRF, and increased drusen volume had a significant negative effect on PWS (all p < 0.001), with significant interaction with eccentricity (p < 0.001). Mean PWS was 26.25 ± 3.43 dB on MP-3 and 22.63 ± 3.69 dB on MAIA. Univariate analyses revealed a negative association between PWS and SDD (p < 0.001). Subclinical changes in EZ integrity, HRF, and drusen volume are quantifiable structural biomarkers associated with reduced retinal function. Topographic co-registration between structure on OCT volumes and sensitivity in MP broadens the understanding of pathognomonic biomarkers with potential for the evaluation of quantifiable functional endpoints.

PMID:39548108 | DOI:10.1038/s41598-024-72522-9

Categories: Literature Watch

MIMIC-BP: A curated dataset for blood pressure estimation

Fri, 2024-11-15 06:00

Sci Data. 2024 Nov 15;11(1):1233. doi: 10.1038/s41597-024-04041-1.

ABSTRACT

Blood pressure (BP) is one of the most prominent indicators of potential cardiovascular disorders. Traditionally, BP measurement relies on inflatable cuffs, which are inconvenient and limit the acquisition of this important health-related information in the general population. Given large amounts of well-collected and annotated data, deep-learning approaches offer a generalization potential that has arisen as an alternative for enabling more pervasive measurement. However, most existing work in this area uses datasets with limitations, such as a lack of subject identification and severe data imbalance, which can result in data leakage and algorithm bias. To offer a more properly curated source of information, we propose a derivative dataset composed of 380 hours of the most common biomedical signals, including arterial blood pressure, photoplethysmography, and electrocardiogram, for 1,524 anonymized subjects, each having 30 segments of 30 seconds of those signals. We also validated the proposed dataset through experiments using state-of-the-art deep-learning methods, highlighting the importance of standardized benchmarks for calibration-free blood pressure estimation scenarios.
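
The dataset layout described above (each subject contributing 30 non-overlapping segments of 30 seconds) implies a simple preprocessing shape. The sketch below assumes a 125 Hz sampling rate, which is common for such waveforms but is not stated in the abstract.

```python
import numpy as np

FS = 125          # assumed sampling rate in Hz (not stated in the abstract)
SEG_SECONDS = 30  # segment length described in the dataset
N_SEGMENTS = 30   # segments per subject described in the dataset

def to_segments(signal, fs=FS, seg_seconds=SEG_SECONDS):
    """Reshape a 1-D waveform into non-overlapping fixed-length segments.

    Trailing samples that do not fill a whole segment are dropped.
    """
    seg_len = fs * seg_seconds
    n = len(signal) // seg_len
    return np.asarray(signal[: n * seg_len]).reshape(n, seg_len)

# one subject's record: 30 segments of 30 s, plus a ragged tail of 17 samples
record = np.random.randn(N_SEGMENTS * SEG_SECONDS * FS + 17)
segments = to_segments(record)  # shape (30, 3750)
```

Keeping segments grouped by subject identifier, as the dataset does, is what prevents the subject-level data leakage the authors criticize in earlier corpora.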

PMID:39548096 | DOI:10.1038/s41597-024-04041-1

Categories: Literature Watch

Correction: Multiparametric MRI based deep learning model for prediction of early recurrence of hepatocellular carcinoma after SR following TACE

Fri, 2024-11-15 06:00

J Cancer Res Clin Oncol. 2024 Nov 16;150(11):504. doi: 10.1007/s00432-024-06027-3.

NO ABSTRACT

PMID:39547976 | DOI:10.1007/s00432-024-06027-3

Categories: Literature Watch

A Systematic Review of the Diagnostic Accuracy of Deep Learning Models for the Automatic Detection, Localization, and Characterization of Clinically Significant Prostate Cancer on Magnetic Resonance Imaging

Fri, 2024-11-15 06:00

Eur Urol Oncol. 2024 Nov 14:S2588-9311(24)00248-7. doi: 10.1016/j.euo.2024.11.001. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Magnetic resonance imaging (MRI) plays a critical role in prostate cancer diagnosis, but is limited by variability in interpretation and diagnostic accuracy. This systematic review evaluates the current state of deep learning (DL) models in enhancing the automatic detection, localization, and characterization of clinically significant prostate cancer (csPCa) on MRI.

METHODS: A systematic search was conducted across Medline/PubMed, Embase, Web of Science, and ScienceDirect for studies published between January 2020 and September 2023. Studies were included if they presented and validated fully automated DL models for csPCa detection on MRI, with pathology confirmation. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Checklist for Artificial Intelligence in Medical Imaging.

KEY FINDINGS AND LIMITATIONS: Twenty-five studies met the inclusion criteria, showing promising results in detecting and characterizing csPCa. However, significant heterogeneity in study designs, validation strategies, and datasets complicates direct comparisons. Only one-third of studies performed external validation, highlighting a critical gap in generalizability. The reliance on internal validation limits a broader application of these findings, and the lack of standardized methodologies hinders the integration of DL models into clinical practice.

CONCLUSIONS AND CLINICAL IMPLICATIONS: DL models demonstrate significant potential in improving prostate cancer diagnostics on MRI. However, challenges in validation, generalizability, and clinical implementation must be addressed. Future research should focus on standardizing methodologies, ensuring external validation and conducting prospective clinical trials to facilitate the adoption of artificial intelligence (AI) in routine clinical settings. These findings support the cautious integration of AI into clinical practice, with further studies needed to confirm their efficacy in diverse clinical environments.

PATIENT SUMMARY: In this study, we reviewed how artificial intelligence (AI) models can help doctors better detect and understand aggressive prostate cancer using magnetic resonance imaging scans. We found that while these AI tools show promise, they need more testing and validation in different hospitals before they can be used widely in patient care.

PMID:39547898 | DOI:10.1016/j.euo.2024.11.001

Categories: Literature Watch

Decoding the Digital Pulse: Bibliometric Analysis of 25 Years in Digital Health Research Through the Journal of Medical Internet Research

Fri, 2024-11-15 06:00

J Med Internet Res. 2024 Nov 15;26:e60057. doi: 10.2196/60057.

ABSTRACT

BACKGROUND: As the digital health landscape continues to evolve, analyzing the progress and direction of the field can yield valuable insights. The Journal of Medical Internet Research (JMIR) has been at the forefront of disseminating digital health research since 1999. A comprehensive network analysis of JMIR publications can help illuminate the evolution and trends in digital medicine over the past 25 years.

OBJECTIVE: This study aims to conduct a detailed network analysis of JMIR's publications to uncover the growth patterns, dominant themes, and potential future trajectories in digital health research.

METHODS: We retrieved 8068 JMIR papers from PubMed using the Biopython library. Keyword metrics were assessed using accuracy, recall, and F1-scores to evaluate the effectiveness of keyword identification from Claude 3 Opus and Gemini 1.5 Pro, in addition to 2 conventional natural language processing methods based on bidirectional encoder representations from transformers (BERT). Future trends for 2024-2026 were predicted using Claude 3 Opus, Google's Time Series Foundation Model, autoregressive integrated moving average, exponential smoothing, and Prophet. Network visualization techniques were used to represent and analyze the complex relationships between collaborating countries, paper types, and keyword co-occurrence.
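
The keyword-identification metrics mentioned here reduce to set overlap between extracted and reference keyword lists. A minimal sketch (the keyword lists below are hypothetical, not from the study):

```python
def keyword_scores(predicted, reference):
    """Set-based precision, recall, and F1 between extracted and reference keywords."""
    pred, ref = set(predicted), set(reference)
    tp = len(pred & ref)  # keywords present in both lists
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, `keyword_scores(["covid-19", "telemedicine", "ehealth"], ["covid-19", "telemedicine", "machine learning", "mhealth"])` gives precision 2/3, recall 1/2, and F1 4/7.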

RESULTS: JMIR's publication volume showed consistent growth, with a peak in 2020. The United States dominated country contributions, with China showing a notable increase in recent years. Keyword analysis from 1999 to 2023 showed significant thematic shifts, from an early internet and digital health focus to the dominance of COVID-19 and advanced technologies such as machine learning. Predictions for 2024-2026 suggest an increased focus on artificial intelligence, digital health, and mental health.

CONCLUSIONS: Network analysis of JMIR publications provides a macroscopic view of the evolution of the digital health field. The journal's trajectory reflects broader technological advances and shifting research priorities, including the impact of the COVID-19 pandemic. The predicted trends underscore the growing importance of computational technology in future health care research and practice. The findings from JMIR provide a glimpse into the future of digital medicine, suggesting a robust integration of artificial intelligence and continued emphasis on mental health in the postpandemic era.

PMID:39546778 | DOI:10.2196/60057

Categories: Literature Watch

Advancements in Using AI for Dietary Assessment Based on Food Images: Scoping Review

Fri, 2024-11-15 06:00

J Med Internet Res. 2024 Nov 15;26:e51432. doi: 10.2196/51432.

ABSTRACT

BACKGROUND: To accurately capture an individual's food intake, dietitians are often required to ask clients about their food frequencies and portions, and they have to rely on the client's memory, which can be burdensome. While taking food photos alongside food records can alleviate user burden and reduce errors in self-reporting, this method still requires trained staff to translate food photos into dietary intake data. Image-assisted dietary assessment (IADA) is an innovative approach that uses computer algorithms to mimic human performance in estimating dietary information from food images. This field has seen continuous improvement through advancements in computer science, particularly in artificial intelligence (AI). However, the technical nature of this field can make it challenging for those without a technical background to understand it completely.

OBJECTIVE: This review aims to fill the gap by providing a current overview of AI's integration into dietary assessment using food images. The content is organized chronologically and presented in an accessible manner for those unfamiliar with AI terminology. In addition, we discuss the systems' strengths and weaknesses and propose enhancements to improve IADA's accuracy and adoption in the nutrition community.

METHODS: This scoping review used PubMed and Google Scholar databases to identify relevant studies. The review focused on computational techniques used in IADA, specifically AI models, devices, and sensors, or digital methods for food recognition and food volume estimation published between 2008 and 2021.

RESULTS: A total of 522 articles were initially identified. On the basis of a rigorous selection process, 84 (16.1%) articles were ultimately included in this review. The selected articles reveal that early systems, developed before 2015, relied on handcrafted machine learning algorithms to manage traditional sequential processes, such as segmentation, food identification, portion estimation, and nutrient calculations. Since 2015, these handcrafted algorithms have been largely replaced by deep learning algorithms for handling the same tasks. More recently, the traditional sequential process has been superseded by advanced algorithms, including multitask convolutional neural networks and generative adversarial networks. Most of the systems were validated for macronutrient and energy estimation, while only a few were capable of estimating micronutrients, such as sodium. Notably, significant advancements have been made in the field of IADA, with efforts focused on replicating humanlike performance.

CONCLUSIONS: This review highlights the progress made by IADA, particularly in the areas of food identification and portion estimation. Advancements in AI techniques have shown great potential to improve the accuracy and efficiency of this field. However, it is crucial to involve dietitians and nutritionists in the development of these systems to ensure they meet the requirements and trust of professionals in the field.

PMID:39546777 | DOI:10.2196/51432

Categories: Literature Watch

Advances in Aerosol Nanostructuring: Functions and Control of Next-Generation Particles

Fri, 2024-11-15 06:00

Langmuir. 2024 Nov 15. doi: 10.1021/acs.langmuir.4c02867. Online ahead of print.

ABSTRACT

Nanostructured particles (NSPs), with their remarkable properties at the nanoscale, possess key functions required for unlocking a sustainable future. Fabricating these particles using aerosol methods and spraying processes enables precise control over particle morphology, structure, composition, and crystallinity during in-flight transformation. In this Perspective, the significant impact of NSPs on technological advancement for energy and environmental applications is discussed. Furthermore, the incorporation of in situ/operando assessment techniques alongside machine and deep learning is explored. Finally, future development trends and perspectives on advancing NSP synthesis via aerosol processes are elaborated to further drive innovation toward a supersmart, carbon-neutral society.

PMID:39546762 | DOI:10.1021/acs.langmuir.4c02867

Categories: Literature Watch

Deep learning-based temporal deconvolution for photon time-of-flight distribution retrieval

Fri, 2024-11-15 06:00

Opt Lett. 2024 Nov 15;49(22):6457-6460. doi: 10.1364/OL.533923.

ABSTRACT

The acquisition of the time of flight (ToF) of photons has found numerous applications in the biomedical field. Over the last decades, a few strategies have been proposed to deconvolve the temporal instrument response function (IRF) that distorts the experimental time-resolved data. However, these methods require burdensome computational strategies and regularization terms to mitigate noise contributions. Herein, we propose a deep learning model specifically to perform the deconvolution task in fluorescence lifetime imaging (FLI). The model is trained and validated with representative simulated FLI data with the goal of retrieving the true photon ToF distribution. Its performance and robustness are validated with well-controlled in vitro experiments using three time-resolved imaging modalities with markedly different temporal IRFs. The model aptitude is further established with in vivo preclinical investigation. Overall, these in vitro and in vivo validations demonstrate the flexibility and accuracy of deep learning model-based deconvolution in time-resolved FLI and diffuse optical imaging.
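
For context, the classical alternative that the proposed model replaces is regularized frequency-domain deconvolution. Below is a minimal Wiener-style baseline on synthetic FLI-like data (a mono-exponential decay blurred by a Gaussian IRF); the decay constant, IRF width, and regularization weight are all illustrative, not the paper's settings.

```python
import numpy as np

def wiener_deconvolve(measured, irf, reg=1e-4):
    """Frequency-domain deconvolution of a temporal IRF with Tikhonov damping.

    The `reg` term suppresses noise amplification at frequencies where
    the IRF spectrum has little power.
    """
    n = len(measured)
    H = np.fft.rfft(irf, n)
    Y = np.fft.rfft(measured, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(X, n)

# synthetic example: mono-exponential decay blurred by a Gaussian IRF
t = np.arange(256)
true_tof = np.exp(-t / 40.0)                 # photon ToF distribution
irf = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)   # instrument response
irf /= irf.sum()
measured = np.convolve(true_tof, irf)[:256]  # distorted measurement
recovered = wiener_deconvolve(measured, irf)
```

The `reg` knob is exactly the hand-tuned regularization the abstract says burdens classical pipelines; a learned deconvolution model dispenses with it.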

PMID:39546693 | DOI:10.1364/OL.533923

Categories: Literature Watch

Efficient labeling for fine-tuning chest X-ray bone-suppression networks for pediatric patients

Fri, 2024-11-15 06:00

Med Phys. 2024 Nov 15. doi: 10.1002/mp.17516. Online ahead of print.

ABSTRACT

BACKGROUND: Pneumonia, a major infectious cause of morbidity and mortality among children worldwide, is typically diagnosed using low-dose pediatric chest radiography (CXR). In pediatric CXR images, bone occlusion leads to a risk of missed diagnosis. Deep learning-based bone-suppression networks relying on training data have enabled considerable progress in bone suppression for adult CXR images; however, these networks generalize poorly to pediatric CXR images because of the lack of labeled pediatric CXR images (i.e., bone images vs. soft-tissue images). Dual-energy subtraction imaging can produce labeled adult CXR images, but its application is limited because it requires specialized equipment and is infrequently employed in pediatric settings. Traditional image processing-based models can be used to label pediatric CXR images, but they are semiautomatic and have suboptimal performance.

PURPOSE: We developed an efficient labeling approach for fine-tuning pediatric CXR bone-suppression networks capable of automatically suppressing bone structures in CXR images for pediatric patients without the need for specialized equipment and technologist training.

METHODS: Three steps were employed to label pediatric CXR images and fine-tune pediatric bone-suppression networks: distance transform-based bone-edge detection, traditional image processing-based bone suppression, and fully automated pediatric bone suppression. In distance transform-based bone-edge detection, bone edges were automatically detected by predicting bone-edge distance-transform images, which were then used as inputs in traditional image processing. In this processing, pediatric CXR images were labeled by obtaining bone images through a series of traditional image processing techniques. Finally, the pediatric bone-suppression network was fine-tuned using the labeled pediatric CXR images. This network was initially pretrained on a public adult dataset comprising 240 adult CXR images (A240) and then fine-tuned and validated on 40 pediatric CXR images (P260_40labeled) from our customized dataset (named P260) through five-fold cross-validation; finally, the network was tested on 220 pediatric CXR images (P260_220unlabeled dataset).
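
The bone-edge distance-transform targets used in the first step can be illustrated with a brute-force Euclidean distance transform. This O(pixels × foreground) version is for small examples only; the authors presumably used an optimized implementation.

```python
import numpy as np

def distance_transform(mask):
    """Euclidean distance from every pixel to the nearest foreground pixel.

    Foreground pixels (mask != 0) get distance 0. Brute-force version:
    compares every pixel against every foreground pixel.
    """
    fg = np.argwhere(mask)
    if len(fg) == 0:
        return np.full(mask.shape, np.inf)
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    d = np.sqrt(((pts[:, None, :] - fg[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(mask.shape)
```

A network regressing such a distance map, rather than a thin binary edge, gets a dense, smooth training signal everywhere in the image, which is presumably why the authors predict distance-transform images instead of raw edges.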

RESULTS: The distance transform-based bone-edge detection network achieved a mean boundary distance of 1.029. Moreover, the traditional image processing-based bone-suppression model obtained bone images exhibiting a relative Weber contrast of 93.0%. Finally, the fully automated pediatric bone-suppression network achieved a relative mean absolute error of 3.38%, a peak signal-to-noise ratio of 35.5 dB, a structural similarity index measure of 98.1%, and a bone-suppression ratio of 90.1% on P260_40labeled.

CONCLUSIONS: The proposed fully automated pediatric bone-suppression network, together with the proposed distance transform-based bone-edge detection network, can automatically acquire bone and soft-tissue images solely from CXR images for pediatric patients and has the potential to help diagnose pneumonia in children.

PMID:39546640 | DOI:10.1002/mp.17516

Categories: Literature Watch

RASP v2.0: an updated atlas for RNA structure probing data

Fri, 2024-11-15 06:00

Nucleic Acids Res. 2024 Nov 15:gkae1117. doi: 10.1093/nar/gkae1117. Online ahead of print.

ABSTRACT

RNA molecules function in numerous biological processes by folding into intricate structures. Here we present RASP v2.0, an updated database for RNA structure probing data featuring a substantially expanded collection of datasets along with enhanced online structural analysis functionalities. Compared to the previous version, RASP v2.0 includes the following improvements: (i) the number of RNA structure datasets has increased from 156 to 438, comprising 216 transcriptome-wide RNA structure datasets, 141 target-specific RNA structure datasets, and 81 RNA-RNA interaction datasets, thereby broadening species coverage from 18 to 24; (ii) a deep learning-based model has been implemented to impute missing structural signals for 59 transcriptome-wide RNA structure datasets with low structure score coverage, significantly enhancing data quality, particularly for low-abundance RNAs; and (iii) three new online analysis modules have been deployed to assist RNA structure studies, including missing structure score imputation, RNA secondary and tertiary structure prediction, and RNA-binding protein (RBP) binding prediction. By providing a much more comprehensive resource of RNA structure data, RASP v2.0 is poised to facilitate the exploration of RNA structure-function relationships across diverse biological processes. RASP v2.0 is freely accessible at http://rasp2.zhanglab.net/.

PMID:39546630 | DOI:10.1093/nar/gkae1117

Categories: Literature Watch

Generative adversarial networks accurately reconstruct pan-cancer histology from pathologic, genomic, and radiographic latent features

Fri, 2024-11-15 06:00

Sci Adv. 2024 Nov 15;10(46):eadq0856. doi: 10.1126/sciadv.adq0856. Epub 2024 Nov 15.

ABSTRACT

Artificial intelligence models have been increasingly used in the analysis of tumor histology to perform tasks ranging from routine classification to identification of molecular features. These approaches distill cancer histologic images into high-level features, which are used in predictions, but understanding the biologic meaning of such features remains challenging. We present and validate a custom generative adversarial network, HistoXGAN, capable of reconstructing representative histology using feature vectors produced by common feature extractors. We evaluate HistoXGAN across 29 cancer subtypes and demonstrate that reconstructed images retain information regarding tumor grade, histologic subtype, and gene expression patterns. We leverage HistoXGAN to illustrate the underlying histologic features used by deep learning models for actionable mutations, identify model reliance on histologic batch effect in predictions, and demonstrate accurate reconstruction of tumor histology from radiographic imaging for a "virtual biopsy."

PMID:39546597 | DOI:10.1126/sciadv.adq0856

Categories: Literature Watch

MaskDGNets: Masked-attention guided dynamic graph aggregation network for event extraction

Fri, 2024-11-15 06:00

PLoS One. 2024 Nov 15;19(11):e0306673. doi: 10.1371/journal.pone.0306673. eCollection 2024.

ABSTRACT

Traditional deep learning event extraction methods ignore the correlation between word features and sequence information and thus cannot fully explore the hidden associations between events, or between events and their primary attributes. To solve these problems, we developed a new framework for event extraction called the masked attention-guided dynamic graph aggregation network. On the one hand, to obtain effective word and sequence representations, an interactive and complementary relationship is established between word vectors and character vectors. At the same time, a squeeze layer is introduced into the bidirectional independent recurrent unit to model the sentence sequence in both forward and backward directions while retaining local spatial details to the maximum extent and establishing practical long-term dependencies and rich global context representations. On the other hand, the designed masked attention mechanism effectively balances word vector features and sequence semantics and refines these features. The designed dynamic graph aggregation module establishes effective connections between events, and between events and their primary attributes, strengthens their interactivity and association, and realizes feature transfer and aggregation on graph nodes in the neighborhood through dynamic strategies to improve event extraction performance. We designed a reconstructed weighted loss function to supervise and adjust each module individually to ensure optimal feature representation. Finally, the proposed MaskDGNets framework is evaluated on two baseline datasets, DuEE and CCKS2020, demonstrating its robustness and event extraction performance, with F1-scores of 81.443% and 87.382%, respectively.
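
The masked attention at the core of such a framework typically follows the standard scaled dot-product form, with disallowed positions forced to exactly zero weight. A minimal single-head numpy sketch (the head size and causal mask below are illustrative, not the paper's configuration):

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Single-head scaled dot-product attention with a boolean mask.

    `mask[i, j]` is True where query i may attend to key j; disallowed
    positions get -inf logits, so their softmax weight is exactly zero.
    """
    logits = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled similarity scores
    logits = np.where(mask, logits, -np.inf)  # block disallowed positions
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w
```

Zeroing weights through the logits (rather than multiplying the outputs) keeps each attention row a proper probability distribution over the allowed positions.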

PMID:39546454 | DOI:10.1371/journal.pone.0306673

Categories: Literature Watch

Multimodality deep learning radiomics predicts pathological response after neoadjuvant chemoradiotherapy for esophageal squamous cell carcinoma

Fri, 2024-11-15 06:00

Insights Imaging. 2024 Nov 15;15(1):277. doi: 10.1186/s13244-024-01851-0.

ABSTRACT

OBJECTIVES: This study aimed to develop and validate a deep-learning radiomics model using CT, T2, and DWI images for predicting pathological complete response (pCR) in patients with esophageal squamous cell carcinoma (ESCC) undergoing neoadjuvant chemoradiotherapy (nCRT).

MATERIALS AND METHODS: Patients with ESCC undergoing nCRT followed by surgery were retrospectively enrolled from three institutions and divided into training and testing cohorts. Both traditional and deep-learning radiomics features were extracted from pre-treatment CT, T2, and DWI. Multiple radiomics models were developed, both single modality and integrated, using machine learning algorithms. The models' performance was assessed using receiver operating characteristic curve analysis, with the area under the curve (AUC) as a primary metric, alongside sensitivity and specificity from the cut-off analysis.
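
The primary metric, AUC from ROC analysis, can be computed directly from predicted scores and labels via the Mann-Whitney identity. A minimal sketch, not tied to the study's data:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: three of the four positive-negative pairs are ranked correctly.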

RESULTS: The study involved 151 patients, among whom 63 achieved pCR. The training cohort consisted of 89 patients from Institution 1 (median age 62, 73 males) and the testing cohort included 52 patients from Institution 2 (median age 62, 41 males), and 10 in a clinical trial from Institution 3 (median age 69, 9 males). The integrated model, combining traditional and deep learning radiomics features from CT, T2, and DWI, demonstrated the best performance with an AUC of 0.868 (95% CI: 0.766-0.959), sensitivity of 88% (95% CI: 73.9-100), and specificity of 78.4% (95% CI: 63.6-90.2) in the testing cohort. This model outperformed single-modality models and the clinical model.

CONCLUSION: A multimodality deep learning radiomics model, utilizing CT, T2, and DWI images, was developed and validated for accurately predicting pCR of ESCC following nCRT.

CRITICAL RELEVANCE STATEMENT: Our research demonstrates the satisfactory predictive value of multimodality deep learning radiomics for the response to nCRT in ESCC and provides a potentially helpful tool for personalized treatment, including organ-preservation strategies.

KEY POINTS: After neoadjuvant chemoradiotherapy, patients with ESCC have pCR rates of about 40%. The multimodality deep learning radiomics model could predict pCR after nCRT with high accuracy. Multimodality radiomics can be helpful in the personalized treatment of esophageal cancer.

PMID:39546168 | DOI:10.1186/s13244-024-01851-0

Categories: Literature Watch

Computed tomography enterography-based deep learning radiomics to predict stratified healing in patients with Crohn's disease: a multicenter study

Fri, 2024-11-15 06:00

Insights Imaging. 2024 Nov 15;15(1):275. doi: 10.1186/s13244-024-01854-x.

ABSTRACT

OBJECTIVES: This study developed a deep learning radiomics (DLR) model utilizing baseline computed tomography enterography (CTE) to non-invasively predict stratified healing in Crohn's disease (CD) patients following infliximab (IFX) treatment.

METHODS: The study included 246 CD patients diagnosed at three hospitals. From the first two hospitals, 202 patients were randomly divided into a training cohort (n = 141) and a testing cohort (n = 61) in a 7:3 ratio. The remaining 44 patients from the third hospital served as the validation cohort. Radiomics and deep learning features were extracted from both the active lesion wall and mesenteric adipose tissue. The most valuable features were selected using univariate analysis and least absolute shrinkage and selection operator (LASSO) regression. Multivariate logistic regression was then employed to construct the radiomics, deep learning, and DLR models. Model performance was evaluated using receiver operating characteristic (ROC) curves.
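
The LASSO step used here for feature selection can be sketched with iterative soft-thresholding (ISTA). The synthetic design below, with two informative features among ten, is purely illustrative; the study presumably used a library implementation on its radiomics features.

```python
import numpy as np

def lasso_ista(X, y, lam=0.05, iters=500):
    """LASSO via iterative soft-thresholding (ISTA).

    Minimizes 0.5/n * ||y - Xw||^2 + lam * ||w||_1; the L1 penalty drives
    the weights of uninformative features exactly to zero.
    """
    n, p = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(iters):
        w = w - lr * (X.T @ (X @ w - y) / n)                    # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# synthetic design: 2 informative features among 10
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[0], w_true[1] = 2.0, -3.0
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = lasso_ista(X, y)
```

The features surviving with nonzero weights are the "most valuable features" that then feed the downstream logistic model.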

RESULTS: The DLR model achieved an area under the ROC curve (AUC) of 0.948 (95% CI: 0.916-0.980), 0.889 (95% CI: 0.803-0.975), and 0.938 (95% CI: 0.868-1.000) in the training, testing, and validation cohorts, respectively, in predicting mucosal healing (MH). Furthermore, the diagnostic performance of the DLR model in predicting transmural healing (TH) was 0.856 (95% CI: 0.776-0.935).

CONCLUSIONS: We have developed a DLR model based on the radiomics and deep learning features of baseline CTE to predict stratified healing (MH and TH) in CD patients following IFX treatment with high accuracies in both testing and external cohorts.

CRITICAL RELEVANCE STATEMENT: The deep learning radiomics model developed in our study, along with the nomogram, can intuitively, accurately, and non-invasively predict stratified healing at baseline CT enterography.

KEY POINTS: Early prediction of mucosal and transmural healing in Crohn's disease patients is beneficial for treatment planning. The model demonstrated excellent performance in predicting mucosal healing and achieved a diagnostic performance of 0.856 in predicting transmural healing. CT enterography images of active lesion walls and mesenteric adipose tissue exhibit an association with stratified healing in Crohn's disease patients.

PMID:39546153 | DOI:10.1186/s13244-024-01854-x

Categories: Literature Watch

Development and validation of preeclampsia predictive models using key genes from bioinformatics and machine learning approaches

Fri, 2024-11-15 06:00

Front Immunol. 2024 Oct 31;15:1416297. doi: 10.3389/fimmu.2024.1416297. eCollection 2024.

ABSTRACT

BACKGROUND: Preeclampsia (PE) poses significant diagnostic and therapeutic challenges. This study aims to identify novel genes for potential diagnostic and therapeutic targets, illuminating the immune mechanisms involved.

METHODS: Three GEO datasets were analyzed: two were merged into a training set, and the third was used for external validation. Intersection analysis of differentially expressed genes (DEGs) and weighted gene co-expression network analysis (WGCNA) highlighted candidate genes. These were further refined through LASSO, SVM-RFE, and RF algorithms to identify diagnostic hub genes. Diagnostic efficacy was assessed using ROC curves. A predictive nomogram and a Fully Connected Neural Network (FCNN) were developed for PE prediction. ssGSEA and correlation analysis were employed to investigate the immune landscape. Further validation was provided by qRT-PCR on human placental samples.
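The triple-algorithm refinement step (LASSO, SVM-RFE, and random forest, intersected to yield hub genes) can be sketched as below. This is an illustrative outline on synthetic data; the gene counts, the top-10 cutoffs, and the seeds are assumptions, not details from the study.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LassoCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic expression matrix: 120 samples x 30 candidate genes,
# with genes 0-4 carrying the signal (placeholder for real DEGs).
X = rng.normal(size=(120, 30))
y = (X[:, :5].mean(axis=1) + rng.normal(scale=0.3, size=120) > 0).astype(int)

# 1) LASSO: keep genes with non-zero coefficients.
lasso_genes = set(np.flatnonzero(LassoCV(cv=5, random_state=1).fit(X, y).coef_))

# 2) SVM-RFE: recursive feature elimination with a linear SVM.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)
svm_genes = set(np.flatnonzero(rfe.support_))

# 3) Random forest: top-10 genes by impurity-based importance.
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
rf_genes = set(np.argsort(rf.feature_importances_)[-10:])

# Hub genes = intersection of the three independent selections.
hub_genes = lasso_genes & svm_genes & rf_genes
print("hub genes:", sorted(hub_genes))
```

Requiring agreement among three methods with different inductive biases (sparse linear, margin-based, and tree-ensemble) is what makes the resulting hub-gene set more robust than any single selector.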

RESULTS: Five biomarkers were identified with validation AUCs: CGB5 (0.663, 95% CI: 0.577-0.750), LEP (0.850, 95% CI: 0.792-0.908), LRRC1 (0.797, 95% CI: 0.728-0.867), PAPPA2 (0.839, 95% CI: 0.775-0.902), and SLC20A1 (0.811, 95% CI: 0.742-0.880), all of which are involved in key biological processes. The nomogram showed strong predictive power (C-index 0.873), while the FCNN achieved an optimal AUC of 0.911 (95% CI: 0.732-1.000) in five-fold cross-validation. Immune infiltration analysis revealed the importance of T cell subsets, neutrophils, and NK cells in PE, linking these genes to immune mechanisms underlying PE pathogenesis.

CONCLUSION: CGB5, LEP, LRRC1, PAPPA2, and SLC20A1 are validated as key diagnostic biomarkers for PE. Nomogram and FCNN could credibly predict PE. Their association with immune infiltration underscores the crucial role of immune responses in PE pathogenesis.

PMID:39544937 | PMC:PMC11560445 | DOI:10.3389/fimmu.2024.1416297

Categories: Literature Watch

Intelligent Evaluation Method for Design Education and Comparison Research between visualizing Heat-Maps of Class Activation and Eye-Movement

Fri, 2024-11-15 06:00

J Eye Mov Res. 2024 Oct 10;17(2). doi: 10.16910/jemr.17.2.1. eCollection 2024.

ABSTRACT

The evaluation of design results plays a crucial role in the development of design. This study presents a design work evaluation system for design education that assists design instructors in conducting objective evaluations. An automatic design evaluation model based on convolutional neural networks (CNNs) was established, enabling intelligent evaluation of student design works. During the evaluation process, the Class Activation Map (CAM) is obtained. Simultaneously, an eye-tracking experiment was designed to collect gaze data and generate eye-tracking heat maps. By comparing these heat maps with the CAM, the study explores the correlation between the attention focus of human design evaluators and that of the CNN's intelligent evaluation. The experimental results indicate a certain correlation between humans and the CNN in the key points they focus on during evaluation, but significant differences in background observation. The results demonstrate that the CNN-based intelligent evaluation model can automatically evaluate product design works and effectively classify and predict design product images. The comparison shows a correlation in evaluation strategy between artificial intelligence and subjective human visual evaluation. Introducing artificial intelligence into design evaluation for education has strong potential to promote the development of design education.
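The core comparison described here, measuring how well a model's CAM agrees with a human gaze heat map, reduces to a similarity score between two 2D maps. A minimal sketch using Pearson correlation on flattened maps (one common choice; the study's actual metric is not stated in the abstract):

```python
import numpy as np

def heatmap_correlation(cam: np.ndarray, gaze: np.ndarray) -> float:
    """Pearson correlation between a flattened CAM and a gaze heat map."""
    a = cam.ravel().astype(float) - cam.mean()
    b = gaze.ravel().astype(float) - gaze.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 8x8 maps: a shared central hotspot plus independent noise,
# standing in for a real CAM and a real eye-tracking heat map.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:8, 0:8]
hotspot = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 8.0)
cam = hotspot + 0.1 * rng.random((8, 8))
gaze = hotspot + 0.1 * rng.random((8, 8))

score = heatmap_correlation(cam, gaze)
print(f"correlation = {score:.3f}")
```

A high score on the hotspot region but a low score over background pixels would mirror the paper's finding: agreement on key points, divergence in background observation.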

PMID:39544878 | PMC:PMC11561857 | DOI:10.16910/jemr.17.2.1

Categories: Literature Watch

LS-VIT: Vision Transformer for action recognition based on long and short-term temporal difference

Fri, 2024-11-15 06:00

Front Neurorobot. 2024 Oct 31;18:1457843. doi: 10.3389/fnbot.2024.1457843. eCollection 2024.

ABSTRACT

Over the past few years, a growing number of researchers have dedicated their efforts to temporal modeling. The advent of transformer-based methods has notably advanced 2D image-based vision tasks. However, for 3D video tasks such as action recognition, applying temporal transformers directly to video data significantly increases both computational and memory demands. This surge in resource consumption stems from the multiplication of data patches and the added complexity of self-attention computations. Accordingly, building efficient and precise 3D self-attentive models for video content represents a major challenge for transformers. In our research, we introduce a Long and Short-term Temporal Difference Vision Transformer (LS-VIT). This method incorporates short-term motion details into images by weighting the differences across several consecutive frames, thereby equipping the original image with the ability to model short-term motion. Concurrently, we integrate a module designed to capture long-term motion details. This module enhances the model's capacity for long-term motion modeling by directly integrating temporal differences from various segments via motion excitation. Our thorough analysis confirms that LS-VIT achieves high recognition accuracy across multiple benchmarks (e.g., UCF101, HMDB51, Kinetics-400). These results indicate that LS-VIT has potential for further optimization to improve real-time performance and action prediction capabilities.
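The short-term mechanism described above, weighting differences across consecutive frames and folding them back into the original image, can be sketched as below. The weights and the add-back formulation are illustrative assumptions, not the exact LS-VIT operator.

```python
import numpy as np

def short_term_difference(frames: np.ndarray,
                          weights=(0.5, 0.3, 0.2)) -> np.ndarray:
    """Weight the differences between a center frame and its successors,
    then add the result back onto the center frame so the single image
    carries short-term motion cues (hypothetical weights)."""
    center = frames[0]
    diffs = [w * (frames[i + 1] - center) for i, w in enumerate(weights)]
    return center + np.sum(diffs, axis=0)

# Four consecutive 4x4 grayscale frames with one brightening pixel.
frames = np.zeros((4, 4, 4))
for t in range(4):
    frames[t, 1, 1] = t  # motion: intensity grows over time

enhanced = short_term_difference(frames)
print("motion-enhanced pixel:", enhanced[1, 1])  # static pixels stay 0
```

Only the moving pixel is amplified (0.5*1 + 0.3*2 + 0.2*3 = 1.7), while static regions are untouched, which is exactly why the transformer backbone can then treat the result as an ordinary 2D image.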

PMID:39544849 | PMC:PMC11560894 | DOI:10.3389/fnbot.2024.1457843

Categories: Literature Watch

LCGSC-YOLO: a lightweight apple leaf diseases detection method based on LCNet and GSConv module under YOLO framework

Fri, 2024-11-15 06:00

Front Plant Sci. 2024 Oct 31;15:1398277. doi: 10.3389/fpls.2024.1398277. eCollection 2024.

ABSTRACT

INTRODUCTION: In response to the large number of learned parameters in current mainstream deep learning detection methods and the complexity of apple leaf disease scenarios, this paper proposes a lightweight method named LCGSC-YOLO. The method is based on the YOLO (You Only Look Once) framework, modified with LCNet (a lightweight CPU convolutional neural network) and the GSConv (Group Shuffle Convolution) module.

METHODS: Firstly, the lightweight LCNet is utilized to reconstruct the backbone network in order to reduce the number of parameters and computations of the model. Secondly, the GSConv module and the VOVGSCSP (Slim-neck by GSConv) module are introduced in the neck network, which minimizes the number of model parameters and computations while preserving the fusion capability among the different feature layers. Finally, coordinate attention is embedded at the tail of the backbone and after each VOVGSCSP module to mitigate the detection accuracy degradation caused by model lightweighting.
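GSConv's parameter savings come from mixing a cheap grouped/depthwise path with a channel shuffle so information still flows between groups. The shuffle step alone can be sketched as below; this is a generic channel-shuffle illustration, not the paper's implementation.

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Group-shuffle channels of a (C, H, W) feature map: reshape to
    (groups, C//groups, H, W), swap the two group axes, flatten back.
    This interleaves channels so grouped convolutions can exchange
    information across groups at no parameter cost."""
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# 4-channel feature map, each channel filled with its own index
# so the post-shuffle channel order is easy to inspect.
x = np.stack([np.full((2, 2), float(i)) for i in range(4)])
order = channel_shuffle(x, groups=2)[:, 0, 0]
print("channel order after shuffle:", order)
```

With 4 channels in 2 groups, channels [0, 1 | 2, 3] interleave to [0, 2, 1, 3], so a following grouped convolution sees one channel from each original group.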

RESULTS: The experimental results show that LCGSC-YOLO achieves excellent detection performance, with a mean average precision of 95.5% and a detection speed of 53 frames per second (FPS) on the mixed Plant Pathology 2021 (FGVC8) and AppleLeaf9 datasets.

DISCUSSION: The number of parameters and floating point operations (FLOPs) of LCGSC-YOLO are much lower than those of the other algorithms in the comparative experiments.

PMID:39544536 | PMC:PMC11560749 | DOI:10.3389/fpls.2024.1398277

Categories: Literature Watch
