Deep learning

Development of a Deep Learning Tool to Support the Assessment of Thyroid Follicular Cell Hypertrophy in the Rat

Sat, 2025-01-18 06:00

Toxicol Pathol. 2025 Jan 17:1926233241309328. doi: 10.1177/01926233241309328. Online ahead of print.

ABSTRACT

Thyroid tissue is sensitive to the effects of endocrine disrupting substances, and this represents a significant health concern. Histopathological analysis of tissue sections of the rat thyroid gland remains the gold standard for the evaluation of agrochemical effects on the thyroid. However, there is a high degree of variability in the appearance of the rat thyroid gland, and toxicologic pathologists often struggle to decide on and consistently apply a threshold for recording low-grade thyroid follicular hypertrophy. This research project developed a deep learning image analysis solution that provides a quantitative score based on the morphological measurements of individual follicles and can be integrated into the standard pathology workflow. To achieve this, a U-Net convolutional deep learning neural network was used that not only identifies the various tissue components but also delineates individual follicles. Further steps to process the raw individual follicle data were developed using empirical models optimized to produce thyroid activity scores that were shown to be superior to the mean epithelial area approach when compared with pathologists' scores. These scores can be used for pathologist decision support using appropriate statistical methods to assess the presence or absence of low-grade thyroid hypertrophy at the group level.
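
The abstract does not specify the empirical scoring models, so the snippet below is only a rough, hypothetical sketch of the idea: turning per-follicle morphology into a single quantitative activity score. It scores each follicle by its epithelial fraction and aggregates with a median; the field names and data are invented for illustration.

```python
from statistics import median

def follicle_activity_score(follicles):
    """Aggregate per-follicle morphology into a single activity score.

    Each follicle is a dict with 'epithelial_area' and 'colloid_area'
    (hypothetical field names); hypertrophic follicles have a higher
    epithelial fraction. The median is used so a few oddly segmented
    follicles do not dominate the score.
    """
    fractions = [
        f["epithelial_area"] / (f["epithelial_area"] + f["colloid_area"])
        for f in follicles
        if f["epithelial_area"] + f["colloid_area"] > 0
    ]
    return median(fractions)

follicles = [
    {"epithelial_area": 120.0, "colloid_area": 880.0},
    {"epithelial_area": 300.0, "colloid_area": 700.0},
    {"epithelial_area": 150.0, "colloid_area": 850.0},
]
print(round(follicle_activity_score(follicles), 3))
```

The actual pipeline derives these measurements from U-Net segmentations and applies optimized empirical models rather than a fixed ratio.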

PMID:39825517 | DOI:10.1177/01926233241309328

Categories: Literature Watch

Preparing physiotherapists for the future: the development and evaluation of an innovative curriculum

Fri, 2025-01-17 06:00

BMC Med Educ. 2025 Jan 17;25(1):83. doi: 10.1186/s12909-024-06537-1.

ABSTRACT

BACKGROUND: Educational innovation in health professional education is needed to keep up with rapidly changing healthcare systems and societal needs. This study evaluates the implementation of PACE, an innovative curriculum designed by the physiotherapy department of the HAN University of Applied Sciences in The Netherlands. The PACE concept features an integrated approach to learning and assessment based on pre-set learning outcomes, personalized learning goals, flexible learning routes, and programmatic assessment. PACE distinguishes itself from traditional education because of the flexible learning routes, vertical organization in learning communities, absence of pre-defined learning activities and class schedules, and a culture of continuous learning and development. PACE is based on three guiding principles: 1) flexible and varied, 2) self-directed and collaborative, 3) future-oriented. PACE was implemented in 2021 for first-year students. This study evaluates the implementation to inform future curriculum development.

METHODS: A sequential explanatory mixed methods design was used to evaluate the implementation of PACE using a questionnaire, focus groups, in-depth interviews, and a national progress test allowing for benchmarking results. Participants were undergraduate physiotherapy students of cohort 2021-2022, the first group who experienced PACE and teachers involved with this cohort. Questionnaire data were analyzed using descriptive statistics. To compare mean total scores of the national progress test between four different universities a one-way ANOVA was conducted including a post-hoc analysis. Reflexive thematic analysis guidelines were applied to analyze the interview data.

RESULTS: In total, 82 first-year students (44.6%) of cohort 2021-2022 and 36 teachers (60%) completed the questionnaire. Results show that the guiding principles were implemented as intended. Results of the national progress test on knowledge and clinical reasoning showed that students of the HAN University performed well compared with other universities. Thematic analysis of interviews and focus groups resulted in three themes and nine subthemes: 1) navigating a personalized curriculum, 2) caring and sharing, and 3) shaping professional identity. PACE contributed positively to students' intrinsic motivation, joy of learning, identity development, and life-long learning skills. Areas for improvement were self-directed learning support and teaching strategies to prompt deep learning.

CONCLUSION: The evaluation showed that the guiding principles of PACE were implemented as intended and that the innovation positively contributed to student learning.

PMID:39825299 | DOI:10.1186/s12909-024-06537-1

Categories: Literature Watch

A multi-stage weakly supervised design for spheroid segmentation to explore mesenchymal stem cell differentiation dynamics

Fri, 2025-01-17 06:00

BMC Bioinformatics. 2025 Jan 17;26(1):20. doi: 10.1186/s12859-024-06031-x.

NO ABSTRACT

PMID:39825265 | DOI:10.1186/s12859-024-06031-x

Categories: Literature Watch

Explainable deep learning and virtual evolution identifies antimicrobial peptides with activity against multidrug-resistant human pathogens

Fri, 2025-01-17 06:00

Nat Microbiol. 2025 Jan 17. doi: 10.1038/s41564-024-01907-3. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) is a promising approach to identify new antimicrobial compounds in diverse microbial species. Here we developed an AI-based, explainable deep learning model, EvoGradient, that predicts the potency of antimicrobial peptides (AMPs) and virtually modifies peptide sequences to produce more potent AMPs, akin to in silico directed evolution. We applied this model to peptides encoded in low-abundance human oral bacteria, resulting in the virtual evolution of 32 peptides into potent AMPs. Of these, the 6 most effective were synthesized and tested against multidrug-resistant pathogens and demonstrated activity against carbapenem-resistant species Escherichia coli, Klebsiella pneumoniae and Acinetobacter baumannii, and vancomycin-resistant Enterococcus faecium. The most potent AMP, pep-19-mod, was validated in vivo, achieving over 95% reduction in bacterial loads in mouse models of thigh infection through both systemic and local administration. Our approach advances the automatic identification and optimization of AMPs.
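
EvoGradient's virtual evolution is gradient-guided and explainable, which is beyond the abstract's detail; the sketch below shows only the general shape of in silico directed evolution, with a deliberately crude stand-in scorer (fraction of cationic residues, a rough AMP heuristic) and a greedy single-substitution search. Both are assumptions for illustration, not the paper's method.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_score(peptide):
    # Stand-in for the learned potency predictor: the fraction of
    # cationic residues (K/R), a crude AMP-like heuristic.
    return sum(peptide.count(a) for a in "KR") / len(peptide)

def virtual_evolution(peptide, score, rounds=3):
    """Greedy in-silico directed evolution: each round, try every
    single-residue substitution and keep the best-scoring variant."""
    for _ in range(rounds):
        best, best_s = peptide, score(peptide)
        for i in range(len(peptide)):
            for aa in AMINO_ACIDS:
                variant = peptide[:i] + aa + peptide[i + 1:]
                s = score(variant)
                if s > best_s:
                    best, best_s = variant, s
        if best == peptide:
            break
        peptide = best
    return peptide

evolved = virtual_evolution("GLFDIVKK", toy_score, rounds=2)
print(toy_score("GLFDIVKK"), toy_score(evolved))
```

With a learned differentiable scorer, the exhaustive substitution loop would be replaced by gradient-informed edits to the sequence.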

PMID:39825096 | DOI:10.1038/s41564-024-01907-3

Categories: Literature Watch

Pre-trained artificial intelligence-aided analysis of nanoparticles using the segment anything model

Fri, 2025-01-17 06:00

Sci Rep. 2025 Jan 17;15(1):2341. doi: 10.1038/s41598-025-86327-x.

ABSTRACT

Complex structures can be understood as compositions of smaller, more basic elements. The characterization of these structures requires an analysis of their constituents and their spatial configuration. Examples can be found in systems as diverse as galaxies, alloys, living tissues, cells, and even nanoparticles. In the latter field, the most challenging examples are those of subdivided particles and particle-based materials, due to the close proximity of their constituents. The characterization of such nanostructured materials is typically conducted through the utilization of micrographs. Despite the importance of micrograph analysis, the extraction of quantitative data is often constrained. The presented effort demonstrates the morphological characterization of subdivided particles utilizing a pre-trained artificial intelligence model. The results are validated using three types of nanoparticles: nanospheres, dumbbells, and trimers. The automated segmentation of whole particles, as well as their individual subdivisions, is investigated using the Segment Anything Model, which is based on a pre-trained neural network. The subdivisions of the particles are organized into sets, which presents a novel approach in this field. These sets collate data derived from a large ensemble of specific particle domains indicating to which particle each subdomain belongs. The arrangement of subdivisions into sets to characterize complex nanoparticles expands the information gathered from microscopy analysis. The presented method, which employs a pre-trained deep learning model, outperforms traditional techniques by circumventing systemic errors and human bias. It can effectively automate the analysis of particles, thereby providing more accurate and efficient results.
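
As a toy illustration of the "subdivisions organized into sets" idea, the sketch below groups sub-domain centroids under whole-particle bounding boxes; the data structures are hypothetical stand-ins for the masks a Segment Anything pipeline would produce.

```python
def assign_subdivisions(particles, subdivisions):
    """Group subdivision centroids into sets, one per whole particle.

    `particles` maps a particle id to its bounding box (xmin, ymin,
    xmax, ymax); `subdivisions` is a list of (x, y) centroids. Both
    are hypothetical stand-ins for SAM's whole-particle and sub-domain
    masks. Returns {particle_id: [subdivision indices]}.
    """
    sets = {pid: [] for pid in particles}
    for i, (x, y) in enumerate(subdivisions):
        for pid, (x0, y0, x1, y1) in particles.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                sets[pid].append(i)
                break
    return sets

particles = {"dumbbell_1": (0, 0, 10, 10), "dumbbell_2": (20, 0, 30, 10)}
subs = [(3, 5), (7, 5), (24, 4), (27, 6)]
print(assign_subdivisions(particles, subs))
```

In practice the membership test would use mask overlap rather than bounding boxes, but the set structure (each sub-domain indexed to its parent particle) is the same.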

PMID:39825089 | DOI:10.1038/s41598-025-86327-x

Categories: Literature Watch

Epicardial adipose tissue, cardiac damage, and mortality in patients undergoing TAVR for aortic stenosis

Fri, 2025-01-17 06:00

Int J Cardiovasc Imaging. 2025 Jan 18. doi: 10.1007/s10554-024-03307-4. Online ahead of print.

ABSTRACT

Computed tomography (CT)-derived Epicardial Adipose Tissue (EAT) is linked to cardiovascular disease outcomes. However, its role in patients undergoing Transcatheter Aortic Valve Replacement (TAVR) and its interplay with aortic stenosis (AS) cardiac damage (CD) remain unexplored. We aimed to investigate the relationship between EAT characteristics, AS CD, and all-cause mortality. We retrospectively included consecutive patients who underwent CT-TAVR followed by TAVR. EAT volume and density were estimated using a deep-learning platform, and CD was assessed using echocardiography. Patients were classified according to low/high EAT volume and density. All-cause mortality at 4 years was compared using Kaplan-Meier and Cox regression analyses. A total of 666 patients (median age 81 [74-86] years; 54% female) were included. After a median follow-up of 1.28 (IQR 0.53-2.57) years, 11.7% (n = 77) of patients died. The EAT volume decreased (p = 0.017) and density increased (p < 0.001) with worsening AS CD. Patients with low EAT volume (<49 cm3) and high density (≥ -86 HU) had higher all-cause mortality (log-rank p = 0.02 and p = 0.01, respectively), even when adjusted for age, sex, and clinical characteristics (HR 1.71, p = 0.02 and HR 1.73, p = 0.03, respectively). When CD was added to the model, low EAT volume (HR 1.67, p = 0.03) and CD stages 3 and 4 (HR 3.14, p = 0.03) remained associated with all-cause mortality. In patients with AS undergoing TAVR, CT-derived low EAT volume and high density were independently associated with increased 4-year mortality and worse CD stage. Only EAT volume remained associated when adjusted for CD.
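
The survival comparison rests on the standard Kaplan-Meier estimator, which can be sketched in a few lines; the follow-up data below are invented for illustration.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    `times` are follow-up times; `events` is 1 for death, 0 for
    censoring. At each event time t, survival is multiplied by
    (1 - deaths/at_risk). Returns (time, survival) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = total = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            total += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= total  # censored subjects leave the risk set here
    return curve

times = [1, 2, 2, 3, 4, 5]   # years of follow-up (hypothetical)
events = [1, 1, 0, 1, 0, 1]  # 1 = died, 0 = censored
print(kaplan_meier(times, events))
```

The study's log-rank tests then compare such curves between the low- and high-EAT groups, and Cox regression adds covariate adjustment.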

PMID:39825067 | DOI:10.1007/s10554-024-03307-4

Categories: Literature Watch

Deep Equilibrium Unfolding Learning for Noise Estimation and Removal in Optical Molecular Imaging

Fri, 2025-01-17 06:00

Comput Med Imaging Graph. 2025 Jan 8;120:102492. doi: 10.1016/j.compmedimag.2025.102492. Online ahead of print.

ABSTRACT

In clinical optical molecular imaging, the need for real-time high frame rates and low excitation doses to ensure patient safety inherently increases susceptibility to detection noise. Faced with the challenge of image degradation caused by severe noise, image denoising is essential for mitigating the trade-off between acquisition cost and image quality. However, prevailing deep learning methods exhibit uncontrollable and suboptimal performance with limited interpretability, primarily due to neglecting underlying physical model and frequency information. In this work, we introduce an end-to-end model-driven Deep Equilibrium Unfolding Mamba (DEQ-UMamba) that integrates proximal gradient descent technique and learnt spatial-frequency characteristics to decouple complex noise structures into statistical distributions, enabling effective noise estimation and suppression in fluorescent images. Moreover, to address the computational limitations of unfolding networks, DEQ-UMamba trains an implicit mapping by directly differentiating the equilibrium point of the convergent solution, thereby ensuring stability and avoiding non-convergent behavior. With each network module aligned to a corresponding operation in the iterative optimization process, the proposed method achieves clear structural interpretability and strong performance. Comprehensive experiments conducted on both clinical and in vivo datasets demonstrate that DEQ-UMamba outperforms current state-of-the-art alternatives while utilizing fewer parameters, facilitating the advancement of cost-effective and high-quality clinical molecular imaging.
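
The deep equilibrium idea (train the fixed point of one repeated block rather than an unrolled stack of layers) can be illustrated with a scalar toy example; this is not the DEQ-UMamba architecture, only the underlying fixed-point principle it builds on.

```python
def fixed_point(f, x, z0=0.0, tol=1e-10, max_iter=500):
    """Solve z = f(z, x) by iteration: the 'infinite-depth' output of
    a deep equilibrium layer is the fixed point of one shared block.
    Training then differentiates implicitly through the equilibrium
    rather than through the unrolled iterations."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# A contractive toy block: z* satisfies z = 0.5*z + x, i.e. z* = 2x.
block = lambda z, x: 0.5 * z + x
z_star = fixed_point(block, x=3.0)
print(z_star)  # converges to ~6.0
```

Because only the converged point matters, memory does not grow with iteration count, which is the computational advantage the abstract alludes to over conventional unfolding networks.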

PMID:39823663 | DOI:10.1016/j.compmedimag.2025.102492

Categories: Literature Watch

Explainable Predictive Model for Suicidal Ideation During COVID-19: Social Media Discourse Study

Fri, 2025-01-17 06:00

J Med Internet Res. 2025 Jan 17;27:e65434. doi: 10.2196/65434.

ABSTRACT

BACKGROUND: Studying the impact of COVID-19 on mental health is both compelling and imperative for the health care system's preparedness development. Discovering how pandemic conditions and governmental strategies and measures have impacted mental health is a challenging task. Mental health issues, such as depression and suicidal tendencies, are traditionally explored through psychological battery tests and clinical procedures. To address the stigma associated with mental illness, social media is used to examine language patterns in posts related to suicide. This strategy enhances the comprehension and interpretation of suicidal ideation. Although easily expressed via social media, suicidal thoughts remain sensitive and complex to comprehend and detect. Suicidal ideation during the COVID-19 pandemic involved new suicidal statements that represent a different context of expression.

OBJECTIVE: In this study, our aim was to detect suicidal ideation by mining textual content extracted from social media by leveraging state-of-the-art natural language processing (NLP) techniques.

METHODS: The work was divided into 2 major phases, one to classify suicidal ideation posts and the other to extract factors that cause suicidal ideation. We proposed a hybrid deep learning-based neural network approach (Bidirectional Encoder Representations from Transformers [BERT]+convolutional neural network [CNN]+long short-term memory [LSTM]) to classify suicidal and nonsuicidal posts. Two state-of-the-art deep learning approaches (CNN and LSTM) were combined based on features (terms) selected from term frequency-inverse document frequency (TF-IDF), Word2vec, and BERT. Explainable artificial intelligence (XAI) was used to extract key factors that contribute to suicidal ideation in order to provide a reliable and sustainable solution.
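
Of the three feature extractors combined here, TF-IDF is simple enough to sketch directly from its definition (tf = term count / document length, idf = log(N / document frequency)); the example documents below are invented.

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: one weight per (document, term).

    This is the classical weighting the study feeds, alongside
    Word2vec and BERT embeddings, into the CNN and LSTM branches.
    """
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    weights = []
    for doc in tokenized:
        counts = Counter(doc)
        weights.append({
            t: (c / len(doc)) * math.log(n / df[t])
            for t, c in counts.items()
        })
    return weights

docs = ["life feels hopeless", "hopeless and alone", "a good day"]
w = tfidf(docs)
print(sorted(w[0], key=w[0].get, reverse=True))
```

Terms that appear in many documents (here "hopeless") are down-weighted by the idf factor, so rarer, more document-specific terms rank higher.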

RESULTS: Of 348,110 records, 3154 (0.9%) were selected, resulting in 1338 (42.4%) suicidal and 1816 (57.6%) nonsuicidal instances. The CNN+LSTM+BERT model achieved superior performance, with a precision of 94%, a recall of 95%, an F1-score of 94%, and an accuracy of 93.65%.

CONCLUSIONS: Considering the dynamic nature of posts expressing suicidal behavior, we proposed a fused architecture that captures both localized and generalized contextual information, which is important for understanding language patterns and predicting the evolution of suicidal ideation over time. According to the Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) XAI algorithms, there was a drift in the features before and during COVID-19. The COVID-19 pandemic introduced new features that are associated with suicidal tendencies. In the future, strategies need to be developed to combat this deadly threat.

PMID:39823631 | DOI:10.2196/65434

Categories: Literature Watch

Prognostic value of manual versus automatic methods for assessing extents of resection and residual tumor volume in glioblastoma

Fri, 2025-01-17 06:00

J Neurosurg. 2025 Jan 17:1-9. doi: 10.3171/2024.8.JNS24415. Online ahead of print.

ABSTRACT

OBJECTIVE: The extent of resection (EOR) and postoperative residual tumor (RT) volume are prognostic factors in glioblastoma. Calculations of EOR and RT rely on accurate tumor segmentations. Raidionics is an open-access software that enables automatic segmentation of preoperative and early postoperative glioblastoma using pretrained deep learning models. The aim of this study was to compare the prognostic value of manually versus automatically assessed volumetric measurements in glioblastoma patients.

METHODS: Adult patients who underwent resection of histopathologically confirmed glioblastoma were included from 12 different hospitals in Europe and North America. Patient characteristics and survival data were collected as part of local tumor registries or were retrieved from patient medical records. The prognostic value of manually and automatically assessed EOR and RT volume was compared using Cox regression models.

RESULTS: Both manually and automatically assessed RT volumes were a negative prognostic factor for overall survival (manual vs automatic: HR 1.051, 95% CI 1.034-1.067 [p < 0.001] vs HR 1.019, 95% CI 1.007-1.030 [p = 0.001]). Both manual and automatic EOR models showed that patients with gross-total resection have significantly longer overall survival compared with those with subtotal resection (manual vs automatic: HR 1.580, 95% CI 1.291-1.932 [p < 0.001] vs HR 1.395, 95% CI 1.160-1.679 [p < 0.001]), but no significant prognostic difference of gross-total compared with near-total (90%-99%) resection was found. According to the Akaike information criterion and the Bayesian information criterion, all multivariable Cox regression models showed similar goodness-of-fit.
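
The underlying volumetrics are simple arithmetic; below is a sketch using the resection classes compared in the results (gross-total, near-total 90%-99%, subtotal), with hypothetical tumor volumes and a hypothetical exact-100% boundary for gross-total.

```python
def extent_of_resection(pre_volume_ml, post_volume_ml):
    """EOR as the percentage of preoperative tumor volume removed."""
    return 100.0 * (pre_volume_ml - post_volume_ml) / pre_volume_ml

def resection_class(eor_percent):
    """Resection classes as compared in the study: gross-total
    (assumed here as 100%), near-total (90%-99%), else subtotal."""
    if eor_percent >= 100.0:
        return "gross-total"
    if eor_percent >= 90.0:
        return "near-total"
    return "subtotal"

print(resection_class(extent_of_resection(30.0, 0.0)))
print(resection_class(extent_of_resection(30.0, 1.5)))
print(resection_class(extent_of_resection(30.0, 6.0)))
```

The study's point is that these volumes, and hence the EOR class, can come from either manual or Raidionics segmentations with comparable prognostic value.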

CONCLUSIONS: Automatically and manually measured EOR and RT volumes have comparable prognostic properties. Automatic segmentation with Raidionics can be used in future studies in patients with glioblastoma.

PMID:39823581 | DOI:10.3171/2024.8.JNS24415

Categories: Literature Watch

Computational Methods for Predicting Chemical Reactivity of Covalent Compounds

Fri, 2025-01-17 06:00

J Chem Inf Model. 2025 Jan 17. doi: 10.1021/acs.jcim.4c01591. Online ahead of print.

ABSTRACT

In recent decades, covalent inhibitors have emerged as a promising strategy for therapeutic development, leveraging their unique mechanism of forming covalent bonds with target proteins. This approach offers advantages such as prolonged drug efficacy, precise targeting, and the potential to overcome resistance. However, the inherent reactivity of covalent compounds presents significant challenges, leading to off-target effects and toxicities. Accurately predicting and modulating this reactivity have become a critical focus in the field. In this work, we compiled a data set of 419 cysteine-targeted covalent compounds and their reactivity through an extensive literature review. Employing machine learning, deep learning, and quantum mechanical calculations, we evaluated the intrinsic reactivity of the covalent compounds. Our FP-Stack models demonstrated robust Pearson and Spearman correlations of approximately 0.80 and 0.75 on the test set, respectively. This empowers rapid and accurate reactivity predictions, significantly reducing computational costs and streamlining structural handling and experimental procedures. Experimental validation on acrylamide compounds underscored the predictive efficacy of our model. This study presents an efficient computational tool for the reactivity prediction of covalent compounds and is expected to offer valuable insights for guiding covalent drug discovery and development.
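
The two reported correlations measure different things: Pearson captures linear agreement between predicted and measured reactivity, while Spearman is Pearson applied to ranks and thus rewards any monotone relationship. A stdlib-only sketch with invented data:

```python
def rank(values):
    """Average ranks (ties share the mean of their positions)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Spearman = Pearson on ranks; robust to monotone nonlinearity."""
    return pearson(rank(x), rank(y))

pred = [0.1, 0.4, 0.35, 0.8]
true = [1.0, 2.0, 1.5, 8.0]  # monotone in pred, but nonlinear
print(round(pearson(pred, true), 3), round(spearman(pred, true), 3))
```

A model can therefore score higher on Spearman than Pearson when it orders compounds correctly without matching reactivity magnitudes linearly.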

PMID:39823568 | DOI:10.1021/acs.jcim.4c01591

Categories: Literature Watch

Motion-Compensated Multishot Pancreatic Diffusion-Weighted Imaging With Deep Learning-Based Denoising

Fri, 2025-01-17 06:00

Invest Radiol. 2025 Jan 20. doi: 10.1097/RLI.0000000000001148. Online ahead of print.

ABSTRACT

OBJECTIVES: Pancreatic diffusion-weighted imaging (DWI) has numerous clinical applications, but conventional single-shot methods suffer from off-resonance-induced artifacts such as distortion and blurring, while cardiovascular motion-induced phase inconsistency leads to quantitative errors and signal loss, limiting its utility. Multishot DWI (msDWI) offers reduced image distortion and blurring relative to single-shot methods but increases sensitivity to motion artifacts. Motion-compensated diffusion-encoding gradients (MCGs) reduce motion artifacts and could improve the motion robustness of msDWI, but come at the cost of an extended echo time, further reducing signal. Thus, a method that combines msDWI with MCGs while minimizing the echo time penalty and maximizing signal would improve pancreatic DWI. In this work, we combine MCGs generated via convex-optimized diffusion encoding (CODE), which reduces the echo time penalty of motion compensation, with deep learning (DL)-based denoising to address residual signal loss. We hypothesize this method will qualitatively and quantitatively improve msDWI of the pancreas.

MATERIALS AND METHODS: This prospective institutional review board-approved study included 22 patients who underwent abdominal MR examinations between August 22, 2022, and May 17, 2023, on 3.0 T scanners. Following informed consent, 2-shot spin-echo echo-planar DWI (b = 0, 800 s/mm2) without (M0) and with (M1) CODE-generated first-order gradient moment nulling was added to their clinical examinations. DL-based denoising was applied to the M1 images (M1 + DL) off-line. ADC maps were reconstructed for all 3 methods. Blinded pair-wise comparisons of b = 800 s/mm2 images were done by 3 subspecialist radiologists. Five metrics were compared: pancreatic boundary delineation, motion artifacts, signal homogeneity, perceived noise, and diagnostic preference. Regions of interest of the pancreatic head, body, and tail were drawn, and mean ADC values were computed. Repeated analysis of variance and post hoc pairwise t test with Bonferroni correction were used for comparing mean ADC values. Bland-Altman analysis compared mean ADC values. Reader preferences were tabulated and compared using Wilcoxon signed rank test with Bonferroni correction and Fleiss κ.

RESULTS: M1 was significantly preferred over M0 for perceived motion artifacts and signal homogeneity (P < 0.001). M0 was significantly preferred over M1 for perceived noise (P < 0.001), but DL-based denoising (M1 + DL) reversed this trend and was significantly favored over M0 (P < 0.001). ADC measurements from M0 varied between different regions of the pancreas (P = 0.001), whereas motion correction with M1 and M1 + DL resulted in homogeneous ADC values (P = 0.24), with values similar to those reported for ssDWI with motion correction. ADC values from M0 were significantly higher than M1 in the head (bias 16.6%; P < 0.0001), body (bias 11.0%; P < 0.0001), and tail (bias 8.6%; P = 0.001). A small but significant bias (2.6%) existed between ADC values from M1 and M1 + DL.

CONCLUSIONS: CODE-generated motion-compensating gradients improved multishot pancreatic DWI as interpreted by expert readers and eliminated ADC variation throughout the pancreas. DL-based denoising mitigated signal losses from motion compensation while maintaining ADC consistency. Integrating both techniques could improve the accuracy and reliability of multishot pancreatic DWI.
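
The ADC values discussed above come from the standard two-point monoexponential model; a minimal sketch with invented signal intensities shows how motion-induced signal loss in the b = 800 s/mm2 image inflates the apparent ADC, matching the direction of the M0-vs-M1 bias reported here.

```python
import math

def adc(s0, sb, b=800.0):
    """Apparent diffusion coefficient from a two-point fit:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / Sb) / b.
    With b in s/mm^2, ADC comes out in mm^2/s."""
    return math.log(s0 / sb) / b

# Hypothetical signal pair: extra motion-induced signal loss in the
# diffusion-weighted image raises the computed ADC.
print(f"{adc(1000.0, 300.0):.2e}")  # intact b=800 signal
print(f"{adc(1000.0, 260.0):.2e}")  # same tissue, motion-attenuated signal
```

This is why phase-inconsistency signal loss without motion compensation produces regionally varying, overestimated ADC values.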

PMID:39823511 | DOI:10.1097/RLI.0000000000001148

Categories: Literature Watch

Field-scale detection of Bacterial Leaf Blight in rice based on UAV multispectral imaging and deep learning frameworks

Fri, 2025-01-17 06:00

PLoS One. 2025 Jan 17;20(1):e0314535. doi: 10.1371/journal.pone.0314535. eCollection 2025.

ABSTRACT

Bacterial Leaf Blight (BLB) usually attacks rice in the flowering stage and can cause yield losses of up to 50% in severely infected fields. The resulting yield losses severely impact farmers, necessitating compensation from the regulatory authorities. This study introduces a new pipeline specifically designed for detecting BLB in rice fields using unmanned aerial vehicle (UAV) imagery. Employing the U-Net architecture with a ResNet-101 backbone, we explore three band combinations (multispectral, multispectral+NDVI, and multispectral+NDRE) to achieve superior segmentation accuracy. Due to the lack of suitable UAV-based datasets for rice disease, we generated our own dataset through disease inoculation techniques in experimental paddy fields. The dataset was expanded using data augmentation and patch extraction methods to improve training robustness. Our findings demonstrate that the U-Net model with a ResNet-101 backbone trained on multispectral+NDVI data significantly outperforms the other band combinations, achieving high accuracy metrics, including a mean Intersection over Union (mIoU) of up to 97.20%, mean accuracy of up to 99.42%, mean F1-score of up to 98.56%, mean precision of 97.97%, and mean recall of 99.16%. Additionally, this approach efficiently segments healthy rice from other classes, minimizing misclassification and improving disease severity assessment. The results indicate that accurate mapping of disease extent and severity across the field can reliably support compensation allocation. The developed methodology has the potential for broader application in diagnosing other rice diseases, such as Blast, Bacterial Panicle Blight, and Sheath Blight, and could significantly enhance agricultural management through accurate damage mapping and yield loss estimation.
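
The two auxiliary index bands are standard normalized differences computed per pixel from the multispectral channels; a sketch with hypothetical reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red)."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge: (NIR - RedEdge)/(NIR + RedEdge)."""
    return (nir - red_edge) / (nir + red_edge)

healthy = ndvi(0.50, 0.05)   # vigorous canopy: strong NIR, low red
infected = ndvi(0.35, 0.12)  # BLB lesions: NIR drops, red rises
print(round(healthy, 3), round(infected, 3))
```

Appending such an index as an extra input channel gives the U-Net an explicit vegetation-health signal on top of the raw bands, which is consistent with the multispectral+NDVI combination performing best here.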

PMID:39823436 | DOI:10.1371/journal.pone.0314535

Categories: Literature Watch

Glaucoma detection and staging from visual field images using machine learning techniques

Fri, 2025-01-17 06:00

PLoS One. 2025 Jan 17;20(1):e0316919. doi: 10.1371/journal.pone.0316919. eCollection 2025.

ABSTRACT

PURPOSE: In this study, we investigated the performance of deep learning (DL) models to differentiate between normal and glaucomatous visual fields (VFs) and classify glaucoma from early to the advanced stage to observe if the DL model can stage glaucoma as Mills criteria using only the pattern deviation (PD) plots. The DL model results were compared with a machine learning (ML) classifier trained on conventional VF parameters.

METHODS: A total of 265 PD plots and 265 numerical datasets of Humphrey 24-2 VF images were collected from 119 normal and 146 glaucomatous eyes to train the DL models to classify the images into four groups: normal, early glaucoma, moderate glaucoma, and advanced glaucoma. Two popular pre-trained DL models, ResNet18 and VGG16, were trained on the PD images using five-fold cross-validation (CV), and performance was assessed using balanced, pre-augmented data (n = 476 images), the imbalanced original data (n = 265), and feature extraction. The trained models were further investigated using the Grad-CAM visualization technique. Moreover, four ML models were trained on the global indices, mean deviation (MD), pattern standard deviation (PSD), and visual field index (VFI), using five-fold CV to compare their classification performance with the DL models' results.

RESULTS: The DL model, ResNet18 trained from balanced, pre-augmented PD images, achieved high accuracy in classifying the groups with an overall F1-score: 96.8%, precision: 97.0%, recall: 96.9%, and specificity: 99.0%. The highest F1 score was 87.8% for ResNet18 with the original dataset and 88.7% for VGG16 with feature extraction. The DL models successfully localized the affected VF loss in PD plots. Among the ML models, the random forest (RF) classifier performed best with an F1 score of 96%.

CONCLUSION: The DL model trained on PD plots was promising in differentiating normal and glaucomatous groups and performed similarly to conventional global indices. Hence, the DL model demonstrated that glaucoma can be staged from PD plots alone, in line with the Mills criteria. This automated DL model will assist clinicians in precision glaucoma detection and progression management during extensive glaucoma screening.
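
The abstract does not restate the Mills criteria; as a sketch, the widely used MD cut-offs of -6 and -12 dB (assumed here) separate early, moderate, and advanced field loss. The full criteria also weigh PSD and the location and depth of defects, which this sketch ignores.

```python
def stage_from_md(md_db):
    """Stage glaucomatous field loss from mean deviation (MD) alone.

    The -6 and -12 dB cut-offs are the commonly cited MD component of
    Mills-style staging; this ignores PSD and defect topography.
    """
    if md_db >= -6.0:
        return "early"
    if md_db >= -12.0:
        return "moderate"
    return "advanced"

for md in (-3.2, -8.5, -15.0):
    print(md, stage_from_md(md))
```

The study's contribution is that a DL model can reproduce such staging from the PD plot image alone, without access to the numeric global indices.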

PMID:39823435 | DOI:10.1371/journal.pone.0316919

Categories: Literature Watch

Investigating the performance of multivariate LSTM models to predict the occurrence of Distributed Denial of Service (DDoS) attack

Fri, 2025-01-17 06:00

PLoS One. 2025 Jan 17;20(1):e0313930. doi: 10.1371/journal.pone.0313930. eCollection 2025.

ABSTRACT

In the current cybersecurity landscape, Distributed Denial of Service (DDoS) attacks have become a prevalent form of cybercrime. These attacks are relatively easy to execute but can cause significant disruption and damage to targeted systems and networks. Attackers typically launch them in reprisal, although a sudden traffic surge can occasionally be legitimate. This paper discusses several deep learning models that deliver good accuracy in predicting DDoS attacks. The study evaluates various models, including Vanilla LSTM, Stacked LSTM, and Deep Neural Networks (DNN), alongside machine learning models such as Random Forest, AdaBoost, and Gaussian Naive Bayes, comparing these approaches to determine which yields the best predictive performance. The rationale for selecting Long Short-Term Memory (LSTM) networks for evaluation is their proven effectiveness in modeling sequential and time-series data, which are inherent characteristics of network traffic and cybersecurity data. The benchmark dataset CICDDoS2019 is used; of its 88 features, a convenient subset of 22 features is extracted before the deep learning models are applied. The results obtained are significantly better than those of existing approaches in this context based on machine learning models, data mining techniques, and some IoT-based methods. Although a server cannot be shielded from these threats entirely, the techniques discussed here can prevent such attacks to a considerable extent and help the server fulfil genuine requests rather than being tied up handling requests created by unauthentic users.
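
The abstract does not say how the 22 features were chosen from the 88; as one plausible stand-in for that reduction step, a variance-based ranking can be sketched (the data below are invented).

```python
from statistics import pvariance

def top_k_by_variance(rows, k):
    """Rank feature columns by variance and keep the top k.

    A hypothetical stand-in for the paper's unspecified reduction of
    the 88 CICDDoS2019 features to 22: near-constant columns carry
    little signal for the LSTM and are dropped first.
    """
    n_features = len(rows[0])
    variances = [pvariance([r[j] for r in rows]) for j in range(n_features)]
    keep = sorted(range(n_features), key=lambda j: variances[j], reverse=True)[:k]
    return sorted(keep)

rows = [
    [1.0, 100.0, 5.0, 0.0],
    [1.0, 250.0, 7.0, 0.0],
    [1.0, 90.0, 6.0, 0.0],
]
print(top_k_by_variance(rows, 2))
```

Whatever selection rule is used, the retained columns become the per-timestep input vector for the multivariate LSTM.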

PMID:39823417 | DOI:10.1371/journal.pone.0313930

Categories: Literature Watch

Normalized Protein-Ligand Distance Likelihood Score for End-to-End Blind Docking and Virtual Screening

Fri, 2025-01-17 06:00

J Chem Inf Model. 2025 Jan 17. doi: 10.1021/acs.jcim.4c01014. Online ahead of print.

ABSTRACT

Molecular Docking is a critical task in structure-based virtual screening. Recent advancements have showcased the efficacy of diffusion-based generative models for blind docking tasks. However, these models do not inherently estimate protein-ligand binding strength thus cannot be directly applied to virtual screening tasks. Protein-ligand scoring functions serve as fast and approximate computational methods to evaluate the binding strength between the protein and ligand. In this work, we introduce normalized mixture density network (NMDN) score, a deep learning (DL)-based scoring function learning the probability density distribution of distances between protein residues and ligand atoms. The NMDN score addresses limitations observed in existing DL scoring functions and performs robustly in both pose selection and virtual screening tasks. Additionally, we incorporate an interaction module to predict the experimental binding affinity score to fully utilize the learned protein and ligand representations. Finally, we present an end-to-end blind docking and virtual screening protocol named DiffDock-NMDN. For each protein-ligand pair, we employ DiffDock to sample multiple poses, followed by utilizing the NMDN score to select the optimal binding pose, and estimating the binding affinity using scoring functions. Our protocol achieves an average enrichment factor of 4.96 on the LIT-PCBA data set, proving effective in real-world drug discovery scenarios where binder information is limited. This work not only presents a robust DL-based scoring function with superior pose selection and virtual screening capabilities but also offers a blind docking protocol and benchmarks to guide future scoring function development.
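
NMDN's mixture density network is learned from data; the scoring idea itself, a length-normalized log-likelihood of protein-ligand distances under a density model, can be sketched with a hand-set Gaussian mixture standing in for the learned one (all parameters and distances below are invented).

```python
import math

def mixture_density(d, components):
    """Gaussian mixture density p(d); components are (weight, mu, sigma)."""
    return sum(
        w * math.exp(-0.5 * ((d - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, mu, s in components
    )

def nmdn_like_score(distances, components):
    """Length-normalized sum of log-likelihoods of residue-atom
    distances under a density model (hand-set here, learned in NMDN):
    higher means the pose's contact geometry looks more native-like."""
    return sum(math.log(mixture_density(d, components))
               for d in distances) / len(distances)

# Hypothetical 2-component model favoring contacts near 4 and 6 angstroms.
model = [(0.6, 4.0, 1.0), (0.4, 6.0, 1.5)]
good_pose = [3.8, 4.2, 5.9]   # distances near the modes
bad_pose = [9.5, 10.0, 11.0]  # far, non-contacting geometry
print(nmdn_like_score(good_pose, model) > nmdn_like_score(bad_pose, model))
```

Normalizing by the number of distances keeps the score comparable across ligands of different sizes, which is what lets it rank poses and compounds rather than just accumulate more terms for bigger molecules.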

PMID:39823352 | DOI:10.1021/acs.jcim.4c01014

Categories: Literature Watch

Biologically relevant integration of transcriptomics profiles from cancer cell lines, patient-derived xenografts, and clinical tumors using deep learning

Fri, 2025-01-17 06:00

Sci Adv. 2025 Jan 17;11(3):eadn5596. doi: 10.1126/sciadv.adn5596. Epub 2025 Jan 17.

ABSTRACT

Cell lines and patient-derived xenografts are essential to cancer research; however, the results derived from such models often lack clinical translatability because they do not fully recapitulate the complex biology of cancer. Identifying preclinical models that sufficiently resemble the biological characteristics of clinical tumors across different cancers is therefore critically important. Here, we developed MOBER (Multi-Origin Batch Effect Remover), a method that simultaneously extracts biologically meaningful embeddings while removing confounder information. Applying MOBER to 932 cancer cell lines, 434 patient-derived tumor xenografts, and 11,159 clinical tumors, we identified the preclinical models with the greatest transcriptional fidelity to clinical tumors, as well as models that are transcriptionally unrepresentative of their respective clinical tumors. MOBER can transform the transcriptional profiles of preclinical models to resemble those of clinical tumors and can therefore be used to improve the clinical translation of insights gained from preclinical models. MOBER is a versatile batch effect removal method applicable to diverse transcriptomic datasets, enabling the integration of multiple datasets simultaneously.
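The idea of removing source-specific confounders can be illustrated with a deliberately simple stand-in. MOBER itself learns nonlinear embeddings with a neural encoder; the linear per-batch centering below is only a sketch of the goal, namely that after correction, variation shared across sources (cell line, xenograft, tumor) remains while source-specific offsets are gone.

```python
from statistics import mean

def remove_batch_offsets(profiles, batches):
    """Toy confounder removal: subtract each batch's per-gene mean so
    the remaining variation is comparable across sources. profiles is a
    list of expression vectors; batches labels each vector's origin.
    (This linear centering is an illustration, not MOBER's method.)"""
    n_genes = len(profiles[0])
    by_batch = {}
    for prof, b in zip(profiles, batches):
        by_batch.setdefault(b, []).append(prof)
    centers = {b: [mean(p[g] for p in ps) for g in range(n_genes)]
               for b, ps in by_batch.items()}
    return [[x - centers[b][g] for g, x in enumerate(prof)]
            for prof, b in zip(profiles, batches)]
```

With two sources differing only by a constant offset, the corrected profiles coincide, which is the behavior a batch effect remover should show on confounder-only variation.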

PMID:39823329 | DOI:10.1126/sciadv.adn5596

Categories: Literature Watch

Evaluation of a Deep Learning Denoising Algorithm for Dose Reduction in Whole-Body Photon-Counting CT Imaging: A Cadaveric Study

Thu, 2025-01-16 06:00

Acad Radiol. 2025 Jan 15:S1076-6332(24)01040-7. doi: 10.1016/j.acra.2024.12.052. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: Photon-counting CT (PCCT) offers advanced imaging capabilities with potential for substantial radiation dose reduction; however, achieving this without compromising image quality remains a challenge due to increased noise at lower doses. This study aims to evaluate the effectiveness of a deep learning (DL)-based denoising algorithm in maintaining diagnostic image quality in whole-body PCCT imaging at reduced radiation levels, using real intraindividual cadaveric scans.

MATERIALS AND METHODS: Twenty-four cadaveric human bodies underwent whole-body CT scans on a PCCT scanner (NAEOTOM Alpha, Siemens Healthineers) at four different dose levels (100%, 50%, 25%, and 10% mAs). Each scan was reconstructed using both ADMIRE level 2 and a DL algorithm (ClariCT.AI, ClariPi Inc.), resulting in 192 datasets. Objective image quality was assessed by measuring CT value stability, image noise, and contrast-to-noise ratio (CNR) across consistent regions of interest (ROIs) in the liver parenchyma. Two radiologists independently evaluated subjective image quality based on overall image clarity, sharpness, and contrast. Inter-rater agreement was determined using Spearman's correlation coefficient, and statistical analysis included mixed-effects modeling to assess objective and subjective image quality.

RESULTS: Objective analysis showed that the DL denoising algorithm did not significantly alter CT values (p ≥ 0.9975). Noise levels were consistently lower in denoised datasets compared to the original datasets (p < 0.0001). No significant differences were observed between the 25% mAs denoised and the 100% mAs original datasets in terms of noise and CNR (p ≥ 0.7870). Subjective analysis revealed strong inter-rater agreement (r ≥ 0.78), with the 50% mAs denoised datasets rated superior to the 100% mAs original datasets (p < 0.0001) and no significant differences detected between the 25% mAs denoised and 100% mAs original datasets (p ≥ 0.9436).
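The noise and CNR metrics compared here are standard ROI statistics. A common definition is sketched below; the study does not state its exact formula, so the choice of background standard deviation as the noise term is an assumption.

```python
from statistics import mean, stdev

def image_noise(roi):
    """Noise as the standard deviation of HU values within a homogeneous ROI."""
    return stdev(roi)

def contrast_to_noise_ratio(roi_tissue, roi_background):
    """CNR between two ROIs (e.g. liver parenchyma vs. background):
    absolute mean difference divided by background noise.
    One common convention; definitions vary between studies."""
    return abs(mean(roi_tissue) - mean(roi_background)) / image_noise(roi_background)
```

Denoising lowers the background standard deviation, so for the same ROI means, the CNR rises, which is why a 25% mAs denoised scan can match a 100% mAs original.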

CONCLUSION: The DL denoising algorithm maintains image quality in PCCT imaging while enabling up to a 75% reduction in radiation dose. This approach offers a promising method for reducing radiation exposure in clinical PCCT without compromising diagnostic quality.

PMID:39818525 | DOI:10.1016/j.acra.2024.12.052

Categories: Literature Watch

Social associations across species during nocturnal bird migration

Thu, 2025-01-16 06:00

Curr Biol. 2025 Jan 10:S0960-9822(24)01701-9. doi: 10.1016/j.cub.2024.12.033. Online ahead of print.

ABSTRACT

An emerging frontier in ecology explores how organisms integrate social information into movement behavior and the extent to which information exchange occurs across species boundaries.1,2,3 Most migratory landbirds are thought to undertake nocturnal migratory flights independently, guided by endogenous programs and individual experience.4,5 Little research has addressed the potential for social information exchange aloft during nocturnal migration, but social influences that aid navigation, orientation, or survival could be valuable during high-risk migration periods.1,2,6,7,8 We captured audio of >18,000 h of nocturnal bird migration and used deep learning to extract >175,000 in-flight vocalizations of 27 species of North American landbirds.9,10,11,12 We used vocalizations to test whether migrating birds distribute non-randomly relative to other species in flight, accounting for migration phenology, geography, and other non-social factors. We found that migrants engaged in distinct associations with an average of 2.7 ± 1.9 SD other species. Social associations were stronger among species with similar wing morphologies and vocalizations. These results suggest that vocal signals maintain in-flight associations that are structured by flight speed and behavior.11,13,14 For small-bodied and short-lived bird species, transient social associations could play an important role in migratory decision-making by supplementing endogenous or experiential information sources.15,16,17 This research provides the first quantitative evidence of interspecific social associations during nocturnal bird migration, supporting recent calls to rethink songbird migration with a social lens.2 Substantial recent declines in bird populations18,19 may diminish the frequency and strength of social associations during migration, with currently unknown consequences for populations.

PMID:39818216 | DOI:10.1016/j.cub.2024.12.033

Categories: Literature Watch

The regulatory landscape of 5' UTRs in translational control during zebrafish embryogenesis

Thu, 2025-01-16 06:00

Dev Cell. 2025 Jan 13:S1534-5807(24)00777-9. doi: 10.1016/j.devcel.2024.12.038. Online ahead of print.

ABSTRACT

The 5' UTRs of mRNAs are critical for translation regulation during development, but their in vivo regulatory features are poorly characterized. Here, we report the regulatory landscape of 5' UTRs during early zebrafish embryogenesis using a massively parallel reporter assay of 18,154 sequences coupled to polysome profiling. We found that the 5' UTR suffices to confer temporal dynamics to translation initiation and identified 86 motifs enriched in 5' UTRs with distinct ribosome recruitment capabilities. A quantitative deep learning model, Danio Optimus 5-Prime (DaniO5P), identified a combined role for 5' UTR length, translation initiation site context, upstream AUGs, and sequence motifs on ribosome recruitment. DaniO5P predicts the activities of maternal and zygotic 5' UTR isoforms and indicates that modulating 5' UTR length and motif grammar contributes to translation initiation dynamics. This study provides a first quantitative model of 5' UTR-based translation regulation in development and lays the foundation for identifying the underlying molecular effectors.
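The feature families DaniO5P combines (5' UTR length, upstream AUGs, motif occurrences) are simple to extract from sequence. The sketch below shows only this feature-extraction step; the actual model is a deep network, and the feature names here are illustrative.

```python
def utr_features(utr, motifs):
    """Count simple 5' UTR features of the kind DaniO5P integrates:
    UTR length, upstream-AUG count, and per-motif occurrence counts
    (overlapping matches included). utr is an RNA string like 'GGAUG...'."""
    feats = {
        "length": len(utr),
        "uAUGs": sum(utr[i:i + 3] == "AUG" for i in range(len(utr) - 2)),
    }
    for m in motifs:
        feats[m] = sum(utr[i:i + len(m)] == m
                       for i in range(len(utr) - len(m) + 1))
    return feats
```

A learned model would weight such features (and the initiation-site context) jointly; upstream AUGs, for instance, typically reduce ribosome recruitment to the main open reading frame.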

PMID:39818206 | DOI:10.1016/j.devcel.2024.12.038

Categories: Literature Watch

Machine learning outperforms humans in microplastic characterization and reveals human labelling errors in FTIR data

Thu, 2025-01-16 06:00

J Hazard Mater. 2024 Dec 31;487:136989. doi: 10.1016/j.jhazmat.2024.136989. Online ahead of print.

ABSTRACT

Microplastics are ubiquitous and appear to be harmful; however, the full extent of the harm they inflict has not been elucidated. Analysing environmental sample data is challenging, as the complexity of real data makes both automated and manual analysis either unreliable or time-consuming. To address these challenges, we explored a dense feed-forward neural network (DNN) for classifying Fourier transform infrared (FTIR) spectroscopic data. The DNN provides conditional class distributions over 16 microplastic categories given an FTIR spectrum, exceeding the number of categories in other works. Our results indicate that this DNN, which is significantly smaller than contemporary models, outperforms other models and even human classification performance. Specifically, while the model broadly reproduces the decisions of human annotators, in cases of disagreement either both were incorrect or the human annotation was incorrect. That these errors are not reproduced indicates that the DNN is making informed, generalisable decisions. Additionally, this work indicates that there exists an upper limit on performance metrics that measure agreement between human and model predictions. A small and efficient DNN can thus make high-throughput analysis of difficult FTIR data possible, with predictions that match or exceed the reliability typical of low-throughput methods.

PMID:39818049 | DOI:10.1016/j.jhazmat.2024.136989

Categories: Literature Watch
