Deep learning

CMV2U-Net: A U-shaped network with edge-weighted features for detecting and localizing image splicing

Thu, 2025-04-03 06:00

J Forensic Sci. 2025 Apr 3. doi: 10.1111/1556-4029.70033. Online ahead of print.

ABSTRACT

The practice of cutting and pasting portions of one image into another, known as "image splicing," is commonplace in the field of image manipulation, and image splicing detection using deep learning has been a hot research topic for the past few years. However, current deep learning implementations suffer from two problems: feature fusion is inadequate, and only simple models are used for feature extraction and encoding, which makes the models vulnerable to overfitting. To tackle these problems, this research proposes CMV2U-Net, an edge-weighted U-shaped network for image splicing forgery localization. The first step is a feature extraction module that processes two input streams simultaneously, extracting semantically related and semantically agnostic features in parallel. A hierarchical fusion approach prevents data loss in shallow features, whether semantically related or semantically irrelevant, and a channel attention mechanism tracks manipulation traces across multiple levels. Extensive trials on numerous public datasets show that CMV2U-Net achieves high AUC and F1 in localizing tampered regions, outperforming state-of-the-art techniques, and successfully resists post-processing attacks such as noise, Gaussian blur, and JPEG compression.

PMID:40177991 | DOI:10.1111/1556-4029.70033

Categories: Literature Watch

Deep Learning-Powered Colloidal Digital SERS for Precise Monitoring of Cell Culture Media

Thu, 2025-04-03 06:00

Nano Lett. 2025 Apr 3. doi: 10.1021/acs.nanolett.5c01071. Online ahead of print.

ABSTRACT

Maintaining consistent quality in biomanufacturing is essential for producing high-quality complex biologics. Yet, current process analytical technologies (PAT) often fall short in achieving rapid and accurate monitoring of small-molecule critical process parameters and critical quality attributes. Surface-enhanced Raman spectroscopy (SERS) holds great promise but faces challenges like intensity fluctuations, compromising reproducibility. Herein, we propose a deep learning-powered colloidal digital SERS platform. This innovation converts SERS spectra into binary "ON/OFF" signals based on defined intensity thresholds, which allows single-molecule event visualization and reduces false positives. Through integration with deep learning, this platform enables detection of a broad range of analytes, unlimited by the lack of characteristic SERS peaks. Furthermore, we demonstrate its accuracy and reproducibility for studying AMBIC 1.1 mammalian cell culture media. These results highlight its rapidity, accuracy, and precision, paving the way for widespread adoption and scale-up as a novel PAT tool in biomanufacturing and diagnostics.

PMID:40177940 | DOI:10.1021/acs.nanolett.5c01071

Categories: Literature Watch

ChatGPT for speech-impaired assistance

Thu, 2025-04-03 06:00

Disabil Rehabil Assist Technol. 2025 Apr 3:1-3. doi: 10.1080/17483107.2025.2483300. Online ahead of print.

ABSTRACT

Background: Speech and language impairments, though the terms are often used interchangeably, are two very distinct types of challenges. A speech impairment may reduce the ability to produce speech sounds, while communication may be affected by a lack of fluency or articulation of words. Consequently, an impaired ability to articulate may affect academic achievement, social development, and progress in life. ChatGPT (Generative Pretrained Transformer) is an open-access AI (Artificial Intelligence) tool developed by OpenAI® based on large language models (LLMs), able to respond to human prompts and generate text using supervised and unsupervised machine learning (ML) algorithms. This article explores the current role and future perspectives of the ChatGPT AI tool for speech-impaired assistance.

Methods: A cumulative search strategy using databases of PubMed, Google Scholar, Scopus and grey literature was conducted to generate this narrative review.

Results: A spectrum of enabling technologies for speech and language impairment has been explored. Augmentative and alternative communication (AAC) technology, integration with neuroprosthesis technology, and speech therapy applications offer considerable potential to aid speech- and language-impaired individuals.

Conclusion: Current applications of AI, ChatGPT, and other LLMs offer promising solutions for enhancing communication in people affected by speech and language impairment. However, further research and development are required to ensure the affordability, accessibility, and authenticity of these AI tools in clinical practice.

PMID:40177878 | DOI:10.1080/17483107.2025.2483300

Categories: Literature Watch

Age-sex-specific burden of urological cancers attributable to risk factors in China and its provinces, 1990-2021, and forecasts with scenarios simulation: a systematic analysis for the Global Burden of Disease Study 2021

Thu, 2025-04-03 06:00

Lancet Reg Health West Pac. 2025 Mar 18;56:101517. doi: 10.1016/j.lanwpc.2025.101517. eCollection 2025 Mar.

ABSTRACT

BACKGROUND: As global aging intensifies, urological cancers pose increasing health and economic burdens. In China, home to one-fifth of the world's population, monitoring the distribution and determinants of these cancers and simulating the effects of health interventions are crucial for global and national health.

METHODS: Using the Global Burden of Disease (GBD) China database, the present study analyzed age-sex-specific patterns of incidence, prevalence, mortality, disability-adjusted life years (DALYs), years lived with disability (YLDs), and years of life lost (YLLs) in China and its 34 provinces, as well as the association between gross domestic product per capita (GDPPC) and these patterns. Importantly, a multi-attentive deep learning pipeline (iTransformer) was pioneered to model the spatiotemporal patterns of urological cancers, risk factors, GDPPC, and population; to provide age-sex-location-specific long-term forecasts of urological cancer burdens; and to investigate the impacts of risk-factor-directed interventions on future burdens.

FINDINGS: From 1990 to 2021, the incidence and prevalence of urological cancers in China have increased, leading to 266,887 new cases (95% confidence interval: 205,304-346,033) and 159,506,067 (122,360,000-207,447,070) prevalent cases in 2021, driven primarily by males aged 55+ years. In 2021, Taiwan, Beijing, and Zhejiang had the highest age-standardized incidence rate (ASIR) and age-standardized prevalence rates of urological cancer in China, highlighting significant regional disparities in the disease burden. Conversely, the national age-standardized mortality rate (ASMR) declined from 6.5 (5.1-7.8) per 100,000 population in 1990 to 5.6 (4.4-7.2) in 2021, most notably in Jilin [-166.7% (-237 to -64.6)], Tibet [-135.4% (-229.1 to 4.4)], and Heilongjiang [-118.5% (-206.5 to -4.6)]. Specifically, the national ASMR for bladder and testicular cancers fell by 32.1% (-47.9 to 1.9) and 31.1% (-50.2 to 7.2), respectively, whereas that for prostate and kidney cancers rose by 7.9% (-18.4 to 43.6) and 9.2% (-12.2 to 36.5). Age-standardized DALYs, YLDs, and YLLs for urological cancers were consistent with the ASMR. Males bore higher burdens of urological cancers than females in all populations except those aged <5 years. Regionally and provincially, high-GDPPC provinces had the highest burden of prostate cancer, while the main burden in other provinces was bladder cancer. The main risk factors for urological cancers in 2021 were smoking [accounting for 55.1% (42.7-67.4)], high body mass index [13.9% (5.3-22.4)], and high fasting glycemic index [5.9% (-0.8 to 13.4)] for both males and females, with smoking chiefly affecting males and high body mass index affecting females. Between 2022 and 2040, the ASIR of urological cancers is forecast to increase from 10.09 (9.19-10.99) to 14.42 (14.30-14.54), despite a decreasing ASMR. Notably, prostate cancer surpassed bladder cancer as the primary subcategory, with those aged 55+ years showing the highest increase in ASIR, highlighting the aging-related transformation of the urological cancer burden. Following the implementation of targeted interventions, smoking control achieved the greatest reduction in urological cancer burden, mainly affecting male bladder cancer (a 45.8% decline). In females, controlling smoking and high fasting plasma glucose reduced urological cancer ASMR by 5.3% and 5.8%, respectively. Finally, the averaged mean squared percentage error, absolute percentage error, and root mean squared logarithmic error of the forecasting model were 0.54 ± 0.22, 1.51 ± 1.26, and 0.15 ± 0.07, respectively, indicating good model performance.

INTERPRETATION: Urological cancers exhibit an aging trend, with increased incidence rates among the population aged 55+ years, making prostate cancer the most burdensome subcategory. Moreover, urological cancer burden is imbalanced by age, sex, and province. Based on our findings, authorities and policymakers could refine or tailor population-specific health strategies, including promoting smoking cessation, weight reduction, and blood sugar control.

FUNDING: Bill & Melinda Gates Foundation.

PMID:40177596 | PMC:PMC11964562 | DOI:10.1016/j.lanwpc.2025.101517

Categories: Literature Watch

The promise and limitations of artificial intelligence in CTPA-based pulmonary embolism detection

Thu, 2025-04-03 06:00

Front Med (Lausanne). 2025 Mar 19;12:1514931. doi: 10.3389/fmed.2025.1514931. eCollection 2025.

ABSTRACT

Computed tomography pulmonary angiography (CTPA) is an essential diagnostic tool for identifying pulmonary embolism (PE). The integration of AI has significantly advanced CTPA-based PE detection, enhancing diagnostic accuracy and efficiency. This review investigates the growing role of AI in the diagnosis of pulmonary embolism using CTPA imaging. The review examines the capabilities of AI algorithms, particularly deep learning models, in analyzing CTPA images for PE detection. It assesses their sensitivity and specificity compared to human radiologists. AI systems, using large datasets and complex neural networks, demonstrate remarkable proficiency in identifying subtle signs of PE, aiding clinicians in timely and accurate diagnosis. In addition, AI-powered CTPA analysis shows promise in risk stratification, prognosis prediction, and treatment optimization for PE patients. Automated image interpretation and quantitative analysis facilitate rapid triage of suspected cases, enabling prompt intervention and reducing diagnostic delays. Despite these advancements, several limitations remain, including algorithm bias, interpretability issues, and the necessity for rigorous validation, which hinder widespread adoption in clinical practice. Furthermore, integrating AI into existing healthcare systems requires careful consideration of regulatory, ethical, and legal implications. In conclusion, AI-driven CTPA-based PE detection presents unprecedented opportunities to enhance diagnostic precision and efficiency. However, addressing the associated limitations is critical for safe and effective implementation in routine clinical practice. Successful utilization of AI in revolutionizing PE care necessitates close collaboration among researchers, medical professionals, and regulatory organizations.

PMID:40177281 | PMC:PMC11961422 | DOI:10.3389/fmed.2025.1514931

Categories: Literature Watch

Construction of a predictive model for the efficacy of anti-VEGF therapy in macular edema patients based on OCT imaging: a retrospective study

Thu, 2025-04-03 06:00

Front Med (Lausanne). 2025 Mar 19;12:1505530. doi: 10.3389/fmed.2025.1505530. eCollection 2025.

ABSTRACT

BACKGROUND: Macular edema (ME) is an ophthalmic disease that poses a serious threat to human vision. Anti-vascular endothelial growth factor (anti-VEGF) therapy has become the first-line treatment for ME due to its safety and high efficacy. However, there are still cases of refractory macular edema and non-responding patients. Therefore, it is crucial to develop automated and efficient methods for predicting therapeutic outcomes.

METHODS: We developed a predictive model of surgical efficacy in ME patients based on deep learning and optical coherence tomography (OCT) imaging, aimed at predicting treatment outcomes at different time points. Building on traditional attention mechanisms for visual recognition tasks, the model innovatively introduces group convolutions and multiple convolutional kernels to handle multidimensional features, while using spatial pyramid pooling (SPP) to combine and extract the most useful features. Additionally, the model uses ResNet50 as a pre-trained backbone, integrating knowledge from multiple models through model fusion.

RESULTS: Our proposed model demonstrated the best performance across various experiments. In the ablation study, the model achieved an F1 score of 0.9937, an MCC of 0.7653, an AUC of 0.9928, and an ACC of 0.9877 in the test conducted on the first day after surgery. In comparison experiments, the ACC of our model was 0.9930 and 0.9915 in the first and the third months post-surgery, respectively, with AUC values of 0.9998 and 0.9996, significantly outperforming other models. In conclusion, our model consistently exhibited superior performance in predicting outcomes at various time points, validating its excellence in processing OCT images and predicting postoperative efficacy.
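Metrics such as the F1 scores reported above balance precision and recall. As a minimal illustration (with made-up confusion-matrix counts, not the study's data):

```python
# F1 score: the harmonic mean of precision and recall, computed here
# from binary confusion-matrix counts.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for illustration only (not the study's data).
print(round(f1_score(tp=90, fp=10, fn=10), 4))  # -> 0.9
```

With precision and recall both at 0.9, the harmonic mean is also 0.9; the metric drops quickly when the two diverge.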

CONCLUSION: Through precise prediction of the response to anti-VEGF therapy in ME patients, deep learning technology provides a revolutionary tool for the treatment of ophthalmic diseases, significantly enhancing treatment outcomes and improving patients' quality of life.

PMID:40177270 | PMC:PMC11961644 | DOI:10.3389/fmed.2025.1505530

Categories: Literature Watch

Measurement-guided therapeutic-dose prediction using multi-level gated modality-fusion model for volumetric-modulated arc radiotherapy

Thu, 2025-04-03 06:00

Front Oncol. 2025 Mar 19;15:1468232. doi: 10.3389/fonc.2025.1468232. eCollection 2025.

ABSTRACT

OBJECTIVES: Radiotherapy is a fundamental cancer treatment method, and pre-treatment patient-specific quality assurance (prePSQA) plays a crucial role in ensuring dose accuracy and patient safety. Artificial intelligence models for measurement-free prePSQA have been investigated over the last few years. However, these models stack successive pooling layers to carry out sequential learning and directly splice different modalities together along the channel dimension before feeding them into a shared encoder-decoder network, which greatly dilutes the anatomical features specific to each modality. Furthermore, the existing models exploit only low-dimensional dosimetry information, meaning that spatial features of the complex dose distribution may be lost, limiting the models' predictive power. The purpose of this study is to develop a novel deep learning model for measurement-guided therapeutic-dose (MDose) prediction from head and neck cancer radiotherapy data.

METHODS: The 310 enrolled patients who underwent volumetric-modulated arc radiotherapy (VMAT) were randomly divided into a training set (186 cases, 60%), a validation set (62 cases, 20%), and a test set (62 cases, 20%). The prediction model explicitly integrates multi-scale features specific to the CT and dose images, takes into account useful spatial dose information, and fully exploits the mutual promotion between the different modalities. It enables medical physicists to analyze the detailed locations of spatial dose differences and to simultaneously generate clinically applicable dose-volume histogram (DVH) metrics and gamma passing rate (GPR) outcomes.

RESULTS: The proposed model achieved better MDose prediction performance and closer dosimetric congruence of DVHs and GPR with the ground truth than several state-of-the-art models. Quantitatively, the proposed model achieved the lowest mean absolute error (37.99) and root mean square error (4.916), and the highest peak signal-to-noise ratio (52.622), structural similarity (0.986), and universal quality index (0.932). The predicted dose values of all voxels were within 6 Gy in the dose difference maps, except for areas near the skin or thermoplastic mask indentation boundaries.

CONCLUSIONS: We have developed a feasible MDose prediction model that could potentially improve the efficiency and accuracy of prePSQA for head and neck cancer radiotherapy, providing a boost for clinical adaptive radiotherapy.

PMID:40177241 | PMC:PMC11961879 | DOI:10.3389/fonc.2025.1468232

Categories: Literature Watch

A flexible transoral swab sampling robot system with visual-tactile fusion approach

Thu, 2025-04-03 06:00

Front Robot AI. 2025 Mar 19;12:1520374. doi: 10.3389/frobt.2025.1520374. eCollection 2025.

ABSTRACT

A significant number of individuals have been affected by pandemic diseases, such as COVID-19 and seasonal influenza. Nucleic acid testing is a common method for identifying infected patients. However, manual sampling methods require the involvement of numerous healthcare professionals. To address this challenge, we propose a novel transoral swab sampling robot designed to autonomously perform nucleic acid sampling using a visual-tactile fusion approach. The robot comprises a series-parallel hybrid flexible mechanism for precise distal posture adjustment and a visual-tactile perception module for navigation within the subject's oral cavity. The series-parallel hybrid mechanism, driven by flexible shafts, enables omnidirectional bending through coordinated movement of the two segments of the bendable joint. The visual-tactile perception module incorporates a camera to capture oral images of the subject and recognize the nucleic acid sampling point using a deep learning method. Additionally, a force sensor positioned at the distal end of the robot provides feedback on contact force as the swab is inserted into the subject's oral cavity. The sampling robot is capable of autonomously performing transoral swab sampling while navigating using the visual-tactile perception algorithm. Preliminary experimental trials indicate that the designed robot system is feasible, safe, and accurate for sample collection from subjects.

PMID:40177224 | PMC:PMC11961991 | DOI:10.3389/frobt.2025.1520374

Categories: Literature Watch

Developing predictive models for opioid receptor binding using machine learning and deep learning techniques

Thu, 2025-04-03 06:00

Exp Biol Med (Maywood). 2025 Mar 19;250:10359. doi: 10.3389/ebm.2025.10359. eCollection 2025.

ABSTRACT

Opioids exert their analgesic effect by binding to the µ opioid receptor (MOR), which initiates a downstream signaling pathway, eventually inhibiting pain transmission in the spinal cord. However, current opioids are addictive, often leading to overdose contributing to the opioid crisis in the United States. Therefore, understanding the structure-activity relationship between MOR and its ligands is essential for predicting MOR binding of chemicals, which could assist in the development of non-addictive or less-addictive opioid analgesics. This study aimed to develop machine learning and deep learning models for predicting MOR binding activity of chemicals. Chemicals with MOR binding activity data were first curated from public databases and the literature. Molecular descriptors of the curated chemicals were calculated using software Mold2. The chemicals were then split into training and external validation datasets. Random forest, k-nearest neighbors, support vector machine, multi-layer perceptron, and long short-term memory models were developed and evaluated using 5-fold cross-validations and external validations, resulting in Matthews correlation coefficients of 0.528-0.654 and 0.408, respectively. Furthermore, prediction confidence and applicability domain analyses highlighted their importance to the models' applicability. Our results suggest that the developed models could be useful for identifying MOR binders, potentially aiding in the development of non-addictive or less-addictive drugs targeting MOR.
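The Matthews correlation coefficients reported above summarize binary classification quality over the whole confusion matrix, staying informative even on imbalanced data. A minimal sketch with hypothetical counts (not the study's curated chemicals):

```python
import math

# Matthews correlation coefficient from binary confusion-matrix counts;
# returns 0.0 when any marginal sum is zero (the undefined case).
def mcc(tp, tn, fp, fn):
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator if denominator else 0.0

# Hypothetical counts: a perfect predictor gives MCC = 1.0,
# and chance-level performance gives MCC = 0.0.
print(mcc(tp=50, tn=50, fp=0, fn=0))    # -> 1.0
print(mcc(tp=25, tn=25, fp=25, fn=25))  # -> 0.0
```

Values in the 0.4-0.65 range, as reported, indicate moderate correlation between predicted and true MOR binding labels.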

PMID:40177220 | PMC:PMC11961360 | DOI:10.3389/ebm.2025.10359

Categories: Literature Watch

Global trends in artificial intelligence applications in liver disease over seventeen years

Thu, 2025-04-03 06:00

World J Hepatol. 2025 Mar 27;17(3):101721. doi: 10.4254/wjh.v17.i3.101721.

ABSTRACT

BACKGROUND: In recent years, the utilization of artificial intelligence (AI) technology has gained prominence in the field of liver disease.

AIM: To analyze AI research in the field of liver disease, summarize the current research status, and identify hot spots.

METHODS: We searched the Web of Science Core Collection database for all articles and reviews on hepatopathy and AI published from January 2007 to August 2023. We included 4051 studies and collected information on authors, countries, institutions, publication years, keywords, and references. VOSviewer, CiteSpace, R 4.3.1, and Scimago Graphica were used to visualize the results.

RESULTS: A total of 4051 articles were analyzed. China was the leading contributor, with 1568 publications, while the United States had the most international collaborations. The most productive institution and journal were the Chinese Academy of Sciences and Frontiers in Oncology, respectively. Keyword co-occurrence analysis can be roughly summarized into four clusters: risk prediction, diagnosis, treatment, and prognosis of liver diseases. "Machine learning", "deep learning", "convolutional neural network", "CT", and "microvascular infiltration" have been popular research topics in recent years.

CONCLUSION: AI is widely applied in the risk assessment, diagnosis, treatment, and prognosis of liver diseases, with a shift from invasive to noninvasive treatment approaches.

PMID:40177211 | PMC:PMC11959664 | DOI:10.4254/wjh.v17.i3.101721

Categories: Literature Watch

Conditioning generative latent optimization for sparse-view computed tomography image reconstruction

Thu, 2025-04-03 06:00

J Med Imaging (Bellingham). 2025 Mar;12(2):024004. doi: 10.1117/1.JMI.12.2.024004. Epub 2025 Apr 1.

ABSTRACT

PURPOSE: Concern over delivered dose during computed tomography (CT) scans has encouraged sparser sets of X-ray projections, which severely degrade reconstructions from conventional methods. Although most deep learning approaches benefit from large supervised datasets, they cannot generalize to new acquisition protocols (geometry, source/detector specifications). To address this issue, we developed a method that works without training data and independently of experimental setups. In addition, our model may be initialized on small unsupervised datasets to enhance reconstructions.

APPROACH: We propose a conditioned generative latent optimization (cGLO) approach in which a decoder reconstructs multiple slices simultaneously with a shared objective. It is tested on full-dose sparse-view CT for varying projection sets: (a) without training data against Deep Image Prior (DIP) and (b) with training datasets of multiple sizes against state-of-the-art score-based generative models (SGMs). Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics are used to quantify reconstruction quality.

RESULTS: cGLO demonstrates better SSIM than SGMs (between +0.034 and +0.139) and has an increasing advantage for smaller datasets, reaching a +6.06 dB PSNR gain. Our strategy also outperforms DIP with at least a +1.52 dB PSNR advantage and peaks at +3.15 dB with fewer angles. Moreover, cGLO does not create artifacts or structural deformations, contrary to DIP and SGMs.
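The PSNR gains quoted above follow from the standard definition PSNR = 10·log10(MAX²/MSE). A small self-contained sketch (not the authors' implementation):

```python
import math

# Peak signal-to-noise ratio (dB) between a reference image and its
# reconstruction, both flattened to sequences of values in [0, max_val].
def psnr(reference, reconstruction, max_val=1.0):
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] scale gives an MSE of 0.01, i.e. 20 dB.
print(round(psnr([0.0, 0.0, 0.0, 0.0], [0.1, 0.1, 0.1, 0.1]), 2))  # -> 20.0
```

Because the scale is logarithmic, the reported +6.06 dB gain corresponds to roughly a fourfold reduction in mean squared error.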

CONCLUSIONS: We propose a parsimonious and robust reconstruction technique offering comparable or better performance than state-of-the-art methods on full-dose sparse-view CT. Our strategy could be readily applied to any imaging reconstruction task without any assumption about the acquisition protocol or the quantity of available data.

PMID:40177097 | PMC:PMC11961077 | DOI:10.1117/1.JMI.12.2.024004

Categories: Literature Watch

Accurate V2X traffic prediction with deep learning architectures

Thu, 2025-04-03 06:00

Front Artif Intell. 2025 Mar 18;8:1565287. doi: 10.3389/frai.2025.1565287. eCollection 2025.

ABSTRACT

Vehicle-to-Everything (V2X) communication promises to revolutionize road safety and efficiency. However, challenges in data sharing and network reliability impede its full realization. This paper addresses these challenges by proposing a novel Deep Learning (DL) approach for traffic prediction in V2X environments. We employ Bidirectional Long Short-Term Memory (BiLSTM) networks and compare their performance against other prominent DL architectures, including unidirectional LSTM and Gated Recurrent Unit (GRU) networks. Our findings demonstrate that the BiLSTM model exhibits superior accuracy in predicting traffic patterns. This enhanced prediction capability enables more efficient resource allocation, improved network performance, and enhanced safety for all road users, leading to reduced fuel consumption, decreased emissions, and a more sustainable transportation system.
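Recurrent forecasters such as the BiLSTM discussed here are typically trained on fixed-length windows of past measurements paired with the next value. That framing step can be sketched as follows (toy data and a hypothetical window length, not the paper's pipeline):

```python
# Frame a univariate traffic series as supervised (window, next-value) pairs,
# the standard preprocessing for LSTM/GRU/BiLSTM forecasters.
def make_windows(series, window=3):
    return [(series[i:i + window], series[i + window])
            for i in range(len(series) - window)]

# Toy series for illustration; real inputs would be V2X traffic measurements.
print(make_windows([1, 2, 3, 4, 5], window=3))  # -> [([1, 2, 3], 4), ([2, 3, 4], 5)]
```

A bidirectional model additionally processes each window back-to-front, which is what gives BiLSTM its edge over the unidirectional baselines compared in the paper.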

PMID:40176965 | PMC:PMC11962783 | DOI:10.3389/frai.2025.1565287

Categories: Literature Watch

Automated Sleep Staging in Epilepsy Using Deep Learning on Standard Electroencephalogram and Wearable Data

Thu, 2025-04-03 06:00

J Sleep Res. 2025 Apr 3:e70061. doi: 10.1111/jsr.70061. Online ahead of print.

ABSTRACT

Automated sleep staging on wearable data could improve our understanding and management of epilepsy. This study evaluated sleep scoring by a deep learning model on 223 night-sleep recordings from 50 patients measured in the hospital with an electroencephalogram (EEG) and a wearable device. The model scored the sleep stage of every 30-s epoch on the EEG and wearable data, and we compared the output with a clinical expert on 20 nights, each from a different patient. Bland-Altman analysis examined differences in the automated staging between the two modalities, and using mixed-effects models, we explored sleep differences between patients with and without seizures. Overall, the model's scoring achieved moderate accuracy and Cohen's kappa against the clinical expert on the standard EEG (0.73 and 0.59) and on the wearable (0.61 and 0.43). F1 scores also varied between patients and modalities. Sensitivity varied by sleep stage and was very low for stage N1. Moreover, sleep staging on the wearable data underestimated the duration of most sleep macrostructure parameters except N2. On the other hand, patients with seizures during the hospital admission slept more each night (6.37, 95% confidence interval [CI] 5.86-7.87) than patients without seizures (5.68, 95% CI 5.24-6.13), p = 0.001, and also spent more time in stage N2. In conclusion, wearable EEG and accelerometry could monitor sleep in patients with epilepsy, and our approach can help automate the analysis. However, further steps are essential to improve the model performance before clinical implementation. Trial Registration: The SeizeIT2 trial was registered at clinicaltrials.gov, NCT04284072.
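The Cohen's kappa values above measure epoch-level agreement with the expert, corrected for agreement expected by chance. A minimal sketch on hypothetical label sequences (not the study's data):

```python
from collections import Counter

# Cohen's kappa between two raters' epoch labels: observed agreement
# corrected for chance agreement, (p_o - p_e) / (1 - p_e).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label]
              for label in set(rater_a) | set(rater_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 30-s epoch labels, not the study's scorings.
print(cohens_kappa(["N1", "N2", "N2", "REM"], ["N1", "N2", "N2", "REM"]))  # -> 1.0
```

Kappa of 1.0 means perfect agreement; values near 0.6 (EEG) and 0.4 (wearable), as reported, are conventionally read as moderate agreement.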

PMID:40176726 | DOI:10.1111/jsr.70061

Categories: Literature Watch

Machine learning fusion for glioma tumor detection

Wed, 2025-04-02 06:00

Sci Rep. 2025 Apr 2;15(1):11236. doi: 10.1038/s41598-025-89911-3.

ABSTRACT

The early detection of brain tumors is very important for treating them and improving patients' quality of life. Advanced imaging techniques now allow doctors to make more informed decisions. This paper introduces a framework for a tumor detection system capable of grading gliomas. The system's implementation begins with the acquisition and analysis of brain magnetic resonance images. Key features indicative of tumors are extracted and classified as independent components. A deep learning model then classifies the tumors into three primary categories: meningioma, pituitary tumor, and glioma. Performance evaluation demonstrates high accuracy (99.21%), specificity (98.3%), and sensitivity (97.83%). Further research and validation are essential to refine the system and ensure its clinical applicability. Accurate and efficient tumor detection systems hold significant promise for enhancing patient care and improving survival rates.
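The sensitivity and specificity figures above are standard confusion-matrix rates. A minimal sketch with illustrative counts (not the study's data):

```python
# Sensitivity (true-positive rate) and specificity (true-negative rate)
# from binary confusion-matrix counts.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical counts for illustration only.
print(sensitivity(tp=45, fn=5), specificity(tn=90, fp=10))  # -> 0.9 0.9
```

High sensitivity means few missed tumors; high specificity means few healthy scans flagged as tumors, and a clinically useful system needs both.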

PMID:40175410 | DOI:10.1038/s41598-025-89911-3

Categories: Literature Watch

Artificial intelligence applied to epilepsy imaging: Current status and future perspectives

Wed, 2025-04-02 06:00

Rev Neurol (Paris). 2025 Apr 1:S0035-3787(25)00487-4. doi: 10.1016/j.neurol.2025.03.006. Online ahead of print.

ABSTRACT

In recent years, artificial intelligence (AI) has become an increasingly prominent focus of medical research, significantly impacting epileptology as well. Studies on deep learning (DL) and machine learning (ML) - the core of AI - have explored their applications in epilepsy imaging, primarily focusing on lesion detection, lateralization and localization of epileptogenic areas, postsurgical outcome prediction and automatic differentiation between people with epilepsy and healthy individuals. Various AI-driven approaches are being investigated across different neuroimaging modalities, with the ultimate goal of integrating these tools into clinical practice to enhance the diagnosis and treatment of epilepsy. As computing power continues to advance, the development, research integration, and clinical implementation of AI applications are expected to accelerate, making them even more effective and accessible. However, ensuring the safety of patient data will require strict regulatory measures. Despite these challenges, AI represents a transformative opportunity for medicine, particularly in epilepsy neuroimaging. Since ML and DL models thrive on large datasets, fostering collaborations and expanding open-access databases will become increasingly pivotal in the future.

PMID:40175210 | DOI:10.1016/j.neurol.2025.03.006

Categories: Literature Watch

Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model

Wed, 2025-04-02 06:00

Acad Radiol. 2025 Apr 1:S1076-6332(25)00210-7. doi: 10.1016/j.acra.2025.03.015. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-weighted gradient echo (T2*W) images, which are useful in several clinical scenarios, such as the assessment of cartilage, synovial blooming (hemosiderin deposition), and chondrocalcinosis, and the evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee from fat-suppressed intermediate-weighted images.

MATERIALS AND METHODS: A cycleGAN model was trained on 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of the synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed using the normalized root mean squared error (NRMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Code, the model, and a standalone executable file are provided on GitHub.

RESULTS: The model achieved a median NRMSE, PSNR, and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found to be interchangeable, with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifacts were identified: geometric distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and an incomplete fat-suppression artifact (120/163 cases), with a median impact score of 0 (no impact) on the diagnosis.

CONCLUSION: The developed open-source GAN model creates synthetic T2*W images of the knee of high quality and diagnostic value. The identified artifacts had no or minor effect on the diagnostic value of the images.

PMID:40175204 | DOI:10.1016/j.acra.2025.03.015

Categories: Literature Watch

Emerging horizons of AI in pharmaceutical research

Wed, 2025-04-02 06:00

Adv Pharmacol. 2025;103:325-348. doi: 10.1016/bs.apha.2025.01.016. Epub 2025 Feb 16.

ABSTRACT

Artificial intelligence (AI) has revolutionized drug discovery by enhancing data collection, integration, and predictive modeling across critical stages. It aggregates vast biological and chemical data, including genomic information, protein structures, and chemical interactions with biological targets. AI applies machine learning techniques and QSAR models to predict compound behavior and identify potential drug candidates. Docking simulations predict drug-protein interactions, while virtual screening efficiently sifts large chemical databases to discard unpromising compounds. AI also supports de novo drug design, using generative models such as generative adversarial networks (GANs) to generate novel molecules optimized against a particular biological target and to find lead compounds with desirable pharmacological properties. In clinical trials, AI improves efficiency by pinpointing responsive patient cohorts from genetic profiles and biomarkers while safeguarding dataset diversity and regulatory compliance. This chapter summarizes and analyzes how AI accelerates drug discovery by streamlining these processes, enabling informed decisions and bringing potentially life-saving therapies to market faster, a breakthrough in pharmaceutical research and development.

PMID:40175048 | DOI:10.1016/bs.apha.2025.01.016

Categories: Literature Watch

Deep learning: A game changer in drug design and development

Wed, 2025-04-02 06:00

Adv Pharmacol. 2025;103:101-120. doi: 10.1016/bs.apha.2025.01.008. Epub 2025 Feb 6.

ABSTRACT

The lengthy and costly drug discovery process is being transformed by deep learning, a subfield of artificial intelligence. Deep learning technologies expedite the process, increasing success rates and speeding the delivery of life-saving treatments. Deep learning stands out in target identification and lead selection: it greatly accelerates these initial stages by analyzing large biological datasets to identify possible therapeutic targets and rank candidate drug molecules with the desired features. Predicting possible adverse effects is another significant challenge. Deep learning offers prompt and efficient assistance with toxicology prediction: in a very short time, its algorithms can forecast a new drug's potential harm. This enables researchers to concentrate on safer alternatives and avoid late-stage failures caused by unanticipated toxicity. Deep learning also unlocks drug repurposing; by examining currently available medications, it can uncover entirely new therapeutic uses, speeding the development of treatments for diseases that were previously incurable. Combined with sophisticated computational modeling, deep learning makes de novo drug discovery possible, creating completely new medications from the ground up. By examining the molecular structures of disease targets, it can recommend new drug candidates with high binding affinities and the intended therapeutic effects, enabling focused and personalized medication. Lastly, drug characteristics can be optimized with the aid of deep learning: by forecasting pharmacokinetics, researchers can create medications with higher bioavailability and lower toxicity. In conclusion, deep learning promises to accelerate drug development, reduce costs, and ultimately save lives.

PMID:40175037 | DOI:10.1016/bs.apha.2025.01.008

Categories: Literature Watch

Integrative network analysis reveals novel moderators of Aβ-Tau interaction in Alzheimer's disease

Wed, 2025-04-02 06:00

Alzheimers Res Ther. 2025 Apr 2;17(1):70. doi: 10.1186/s13195-025-01705-x.

ABSTRACT

BACKGROUND: Although interactions between amyloid-beta (Aβ) and tau proteins have been implicated in Alzheimer's disease (AD), the precise mechanisms by which these interactions contribute to disease progression are not yet fully understood. Moreover, despite the growing application of deep learning in various biomedical fields, its use for integrating networks to analyze disease mechanisms in AD research remains limited. In this study, we employed BIONIC, a deep learning-based network integration method, to integrate proteomics and protein-protein interaction data, with the aim of uncovering factors that moderate the effects of the Aβ-tau interaction on mild cognitive impairment (MCI) and early-stage AD.

METHODS: Proteomic data from the ROSMAP cohort were integrated with protein-protein interaction (PPI) data using a deep learning-based model. Linear regression analysis was applied to histopathological and gene expression data, and mutual information was used to detect moderating factors. Statistical significance was determined using the Benjamini-Hochberg correction (p < 0.05).
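The Benjamini-Hochberg step-up procedure used here for multiple-testing correction can be sketched as follows. This is a generic illustration, not the study's analysis code; the function name and interface are ours.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    # Step-up FDR control: find the largest k with p_(k) <= (k/m) * alpha
    # and reject the k hypotheses with the smallest p-values.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject
```

For example, `benjamini_hochberg([0.2, 0.01, 0.04, 0.03])` rejects only the hypothesis with p = 0.01 at an FDR level of 0.05.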

RESULTS: Our results suggested that astrocytes and GPNMB+ microglia moderate the Aβ-tau interaction. Based on linear regression with histopathological and gene expression data, GFAP and IBA1 levels and GPNMB gene expression positively contributed to the interaction of tau with Aβ in non-dementia cases, replicating the results of the network analysis.

CONCLUSIONS: These findings suggest that GPNMB+ microglia moderate the Aβ-tau interaction in early AD and are therefore a novel therapeutic target. To facilitate further research, we have made the integrated network available as a visualization tool for the scientific community (URL: https://igcore.cloud/GerOmics/AlzPPMap ).

PMID:40176187 | DOI:10.1186/s13195-025-01705-x

Categories: Literature Watch

Deep learning-based reconstruction and superresolution for MR-guided thermal ablation of malignant liver lesions

Wed, 2025-04-02 06:00

Cancer Imaging. 2025 Apr 2;25(1):47. doi: 10.1186/s40644-025-00869-x.

ABSTRACT

OBJECTIVE: This study evaluates the impact of deep learning-enhanced T1-weighted VIBE sequences (DL-VIBE) on image quality and procedural parameters during MR-guided thermoablation of liver malignancies, compared to standard VIBE (SD-VIBE).

METHODS: Between September 2021 and February 2023, 34 patients (mean age: 65.4 years; 13 women) underwent MR-guided microwave ablation on a 1.5 T scanner. Intraprocedural SD-VIBE sequences were retrospectively processed with a deep learning algorithm (DL-VIBE) to reduce noise and enhance sharpness. Two interventional radiologists independently assessed image quality, noise, artifacts, sharpness, diagnostic confidence, and procedural parameters using a 5-point Likert scale. Interrater agreement was analyzed, and noise maps were created to assess signal-to-noise ratio improvements.

RESULTS: DL-VIBE significantly improved image quality, reduced artifacts and noise, and enhanced sharpness of liver contours and portal vein branches compared to SD-VIBE (p < 0.01). Procedural metrics, including needle tip detectability, confidence in needle positioning, and ablation zone assessment, were significantly better with DL-VIBE (p < 0.01). Interrater agreement was high (Cohen κ = 0.86). Reconstruction times for DL-VIBE were 3 s for k-space reconstruction and 1 s for superresolution processing. Simulated acquisition modifications reduced breath-hold duration by approximately 2 s.
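The interrater agreement reported here (Cohen κ = 0.86) can be computed with a short stdlib-only sketch; this is an illustration of the statistic, not the study's analysis code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Agreement between two raters over the same items, corrected for
    # the agreement expected by chance from each rater's label frequencies.
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    # Undefined (division by zero) if chance agreement is exactly 1
    return (observed - expected) / (1 - expected)
```

A κ of 1 indicates perfect agreement and 0 indicates chance-level agreement; values above 0.8, as in this study, are conventionally read as almost perfect agreement on the Landis-Koch scale.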

CONCLUSION: DL-VIBE enhances image quality during MR-guided thermal ablation while improving efficiency through reduced processing and acquisition times.

PMID:40176185 | DOI:10.1186/s40644-025-00869-x

Categories: Literature Watch
