Deep learning

Automated Sleep Staging in Epilepsy Using Deep Learning on Standard Electroencephalogram and Wearable Data

Thu, 2025-04-03 06:00

J Sleep Res. 2025 Apr 3:e70061. doi: 10.1111/jsr.70061. Online ahead of print.

ABSTRACT

Automated sleep staging on wearable data could improve our understanding and management of epilepsy. This study evaluated sleep scoring by a deep learning model on 223 overnight sleep recordings from 50 patients measured in the hospital with an electroencephalogram (EEG) and a wearable device. The model scored the sleep stage of every 30-s epoch on the EEG and wearable data, and we compared the output with scoring by a clinical expert on 20 nights, each from a different patient. The Bland-Altman analysis examined differences in the automated staging in both modalities, and using mixed-effect models, we explored sleep differences between patients with and without seizures. Overall, we found moderate accuracy and Cohen's kappa on the model scoring of standard EEG (0.73 and 0.59) and the wearable (0.61 and 0.43) versus the clinical expert. F1 scores also varied between patients and modalities. The sensitivity varied by sleep stage and was very low for stage N1. Moreover, sleep staging on the wearable data underestimated the duration of most sleep macrostructure parameters except N2. On the other hand, patients with seizures during the hospital admission slept more each night (6.37, 95% confidence interval [CI] 5.86-7.87) compared with patients without seizures (5.68, 95% CI 5.24-6.13), p = 0.001, but also spent more time in stage N2. In conclusion, wearable EEG and accelerometry could monitor sleep in patients with epilepsy, and our approach can help automate the analysis. However, further steps are essential to improve the model performance before clinical implementation. Trial Registration: The SeizeIT2 trial was registered in clinicaltrials.gov, NCT04284072.
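Agreement between the model and the expert is reported as overall accuracy and Cohen's kappa over 30-s epochs. A minimal sketch of how these epoch-level agreement metrics are computed; the stage labels below are illustrative, not data from the study:

```python
from collections import Counter

def epoch_agreement(rater_a, rater_b):
    """Overall accuracy and Cohen's kappa for two sequences of
    30-s epoch sleep-stage labels (e.g. 'W', 'N1', 'N2', 'N3', 'R')."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of epochs with identical labels.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over stages of the product of marginal frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[s] * cb[s] for s in ca) / n**2
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Illustrative toy example (6 epochs, two scorers):
acc, kappa = epoch_agreement(["W", "W", "N2", "N2", "R", "R"],
                             ["W", "N2", "N2", "N2", "R", "W"])
```

Kappa discounts the agreement expected by chance, which is why it is lower than raw accuracy in the study's results as well.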

PMID:40176726 | DOI:10.1111/jsr.70061

Categories: Literature Watch

Machine learning fusion for glioma tumor detection

Wed, 2025-04-02 06:00

Sci Rep. 2025 Apr 2;15(1):11236. doi: 10.1038/s41598-025-89911-3.

ABSTRACT

Early detection of brain tumors is critical for treatment and for improving patients' quality of life. Through advanced imaging techniques, doctors can now make more informed decisions. This paper introduces a framework for a tumor detection system capable of grading gliomas. The system's implementation begins with the acquisition and analysis of brain magnetic resonance images. Key features indicative of tumors and gliomas are extracted and classified as independent components. A deep learning model is then employed to categorize these tumors. The proposed model classifies brain tumors into three primary categories: meningioma, pituitary tumor, and glioma. Performance evaluation demonstrates a high level of accuracy (99.21%), specificity (98.3%), and sensitivity (97.83%). Further research and validation are essential to refine the system and ensure its clinical applicability. The development of accurate and efficient tumor detection systems holds significant promise for enhancing patient care and improving survival rates.
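The reported accuracy, specificity, and sensitivity follow directly from confusion-matrix counts. A minimal sketch with illustrative counts (not the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts for a binary tumor/no-tumor split:
acc, sens, spec = classification_metrics(tp=9, fp=2, tn=8, fn=1)
```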

PMID:40175410 | DOI:10.1038/s41598-025-89911-3

Categories: Literature Watch

Artificial intelligence applied to epilepsy imaging: Current status and future perspectives

Wed, 2025-04-02 06:00

Rev Neurol (Paris). 2025 Apr 1:S0035-3787(25)00487-4. doi: 10.1016/j.neurol.2025.03.006. Online ahead of print.

ABSTRACT

In recent years, artificial intelligence (AI) has become an increasingly prominent focus of medical research, significantly impacting epileptology as well. Studies on deep learning (DL) and machine learning (ML) - the core of AI - have explored their applications in epilepsy imaging, primarily focusing on lesion detection, lateralization and localization of epileptogenic areas, postsurgical outcome prediction and automatic differentiation between people with epilepsy and healthy individuals. Various AI-driven approaches are being investigated across different neuroimaging modalities, with the ultimate goal of integrating these tools into clinical practice to enhance the diagnosis and treatment of epilepsy. As computing power continues to advance, the development, research integration, and clinical implementation of AI applications are expected to accelerate, making them even more effective and accessible. However, ensuring the safety of patient data will require strict regulatory measures. Despite these challenges, AI represents a transformative opportunity for medicine, particularly in epilepsy neuroimaging. Since ML and DL models thrive on large datasets, fostering collaborations and expanding open-access databases will become increasingly pivotal in the future.

PMID:40175210 | DOI:10.1016/j.neurol.2025.03.006

Categories: Literature Watch

Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model

Wed, 2025-04-02 06:00

Acad Radiol. 2025 Apr 1:S1076-6332(25)00210-7. doi: 10.1016/j.acra.2025.03.015. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-w gradient echo (T2*W) images, which are useful in several clinical scenarios such as the assessment of cartilage, synovial blooming (deposition of hemosiderin), chondrocalcinosis and the evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee using fat-suppressed intermediate-weighted images.

MATERIALS AND METHODS: A cycleGAN model was trained with 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed with the use of normalized root mean squared error (NRMSE), structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). Code, model and a standalone executable file are provided on GitHub.
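The three image-quality metrics can be sketched directly; for brevity this uses a global (single-window) SSIM rather than the sliding-window variant typically used in practice, and the arrays are illustrative:

```python
import numpy as np

def nrmse(ref, test):
    """Root mean squared error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range**2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Global SSIM (no sliding window) -- a simplification of the usual metric."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Tiny illustrative "images":
ref = np.array([[0.0, 0.0], [1.0, 1.0]])
deg = np.array([[0.0, 0.5], [1.0, 1.0]])
```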

RESULTS: The model achieved a median NRMSE, PSNR and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found interchangeable with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifacts were identified: geometrical distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and an incomplete fat-suppression artifact (120/163 cases); the artifacts had a median impact score of 0 (no impact) on the diagnosis.

CONCLUSION: The developed open-source GAN model creates synthetic T2*W images of the knee of high diagnostic value and quality. The identified artifacts had no or minor effect on the diagnostic value of the images.

PMID:40175204 | DOI:10.1016/j.acra.2025.03.015

Categories: Literature Watch

Emerging horizons of AI in pharmaceutical research

Wed, 2025-04-02 06:00

Adv Pharmacol. 2025;103:325-348. doi: 10.1016/bs.apha.2025.01.016. Epub 2025 Feb 16.

ABSTRACT

Artificial intelligence (AI) has revolutionized drug discovery by enhancing data collection, integration, and predictive modeling across critical stages. It aggregates vast biological and chemical data, including genomic information, protein structures, and chemical interactions with biological targets. Machine learning techniques and QSAR models are applied to predict compound behavior and identify potential drug candidates. Docking simulations predict drug-protein interactions, while virtual screening efficiently sifts through large chemical databases to eliminate unsuitable compounds. Similarly, AI supports de novo drug design by generating novel molecules optimized against a particular biological target, using generative models such as generative adversarial networks (GANs) to find lead compounds with desirable pharmacological properties. In clinical trials, AI improves efficiency by pinpointing responsive patient cohorts from genetic profiles and biomarkers, while addressing priorities such as dataset diversity and regulatory compliance. This chapter summarizes and analyzes how AI accelerates drug discovery by streamlining these processes, enabling informed decisions and bringing potentially life-saving therapies to market faster, amounting to a breakthrough in pharmaceutical research and development.
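At its simplest, a QSAR model maps molecular descriptors to an activity value. A toy least-squares sketch; the descriptor values and activities are made up for illustration, not drawn from the chapter:

```python
import numpy as np

# Toy descriptor matrix: each row is a compound, each column a molecular
# descriptor (e.g. logP, molecular weight) -- values are illustrative.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2.0 * X[:, 0] + 3.0          # activity follows an exact linear rule here

# Fit a linear QSAR model with an intercept via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    """Predicted activity for a new descriptor vector."""
    return float(np.dot(coef[:-1], x) + coef[-1])
```

Real QSAR pipelines use richer descriptors and nonlinear models, but the fit-then-predict structure is the same.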

PMID:40175048 | DOI:10.1016/bs.apha.2025.01.016

Categories: Literature Watch

Deep learning: A game changer in drug design and development

Wed, 2025-04-02 06:00

Adv Pharmacol. 2025;103:101-120. doi: 10.1016/bs.apha.2025.01.008. Epub 2025 Feb 6.

ABSTRACT

The lengthy and costly drug discovery process is being transformed by deep learning, a subfield of artificial intelligence. Deep learning technologies expedite the process, increasing treatment success rates and speeding up life-saving development. Deep learning stands out in target identification and lead selection: by analyzing large biological datasets, it identifies possible therapeutic targets and ranks candidate drug molecules with desired features, greatly accelerating this initial stage. Predicting possible adverse effects is another significant challenge. Deep learning offers prompt and efficient assistance with toxicology prediction; in a very short time, its algorithms can forecast a new drug's possible harm. This enables researchers to concentrate on safer alternatives and steer clear of late-stage failures brought on by unanticipated toxicity. Deep learning also unlocks the possibility of drug repurposing: by examining currently available medications, it is possible to find entirely new therapeutic uses, speeding the development of treatments for diseases that were previously incurable. De novo drug discovery is made possible when deep learning is combined with sophisticated computational modeling, allowing completely new medications to be created from the ground up. By examining the molecular structures of disease targets, deep learning can recommend and direct researchers toward new drug candidates with high binding affinities and the intended therapeutic effects, enabling focused and personalized medication. Lastly, drug characteristics can be optimized with the aid of deep learning: by forecasting drug pharmacokinetics, researchers can create medications with higher bioavailability and lower toxicity. In conclusion, deep learning promises to accelerate drug development, reduce costs, and ultimately save lives.

PMID:40175037 | DOI:10.1016/bs.apha.2025.01.008

Categories: Literature Watch

Integrative network analysis reveals novel moderators of Aβ-Tau interaction in Alzheimer's disease

Wed, 2025-04-02 06:00

Alzheimers Res Ther. 2025 Apr 2;17(1):70. doi: 10.1186/s13195-025-01705-x.

ABSTRACT

BACKGROUND: Although interactions between amyloid-beta and tau proteins have been implicated in Alzheimer's disease (AD), the precise mechanisms by which these interactions contribute to disease progression are not yet fully understood. Moreover, despite the growing application of deep learning in various biomedical fields, its application in integrating networks to analyze disease mechanisms in AD research remains limited. In this study, we employed BIONIC, a deep learning-based network integration method, to integrate proteomics and protein-protein interaction data, with the aim of uncovering factors that moderate the effects of the Aβ-tau interaction on mild cognitive impairment (MCI) and early-stage AD.

METHODS: Proteomic data from the ROSMAP cohort were integrated with protein-protein interaction (PPI) data using a deep learning-based model. Linear regression analysis was applied to histopathological and gene expression data, and mutual information was used to detect moderating factors. Statistical significance was determined using the Benjamini-Hochberg correction (p < 0.05).
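Mutual information, used here to flag moderating factors, can be estimated for discretized variables by summing over the joint distribution. A minimal sketch on toy binary data (not the study's variables):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """MI in bits between two discrete variables, given paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# A balanced binary variable shares exactly 1 bit of information with itself:
mi_self = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
# Independent variables share 0 bits:
mi_indep = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
```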

RESULTS: Our results suggested that astrocytes and GPNMB + microglia moderate the Aβ-tau interaction. Based on linear regression with histopathological and gene expression data, GFAP and IBA1 levels and GPNMB gene expression positively contributed to the interaction of tau with Aβ in non-dementia cases, replicating the results of the network analysis.

CONCLUSIONS: These findings suggest that GPNMB + microglia moderate the Aβ-tau interaction in early AD and therefore are a novel therapeutic target. To facilitate further research, we have made the integrated network available as a visualization tool for the scientific community (URL: https://igcore.cloud/GerOmics/AlzPPMap ).

PMID:40176187 | DOI:10.1186/s13195-025-01705-x

Categories: Literature Watch

Deep learning-based reconstruction and superresolution for MR-guided thermal ablation of malignant liver lesions

Wed, 2025-04-02 06:00

Cancer Imaging. 2025 Apr 2;25(1):47. doi: 10.1186/s40644-025-00869-x.

ABSTRACT

OBJECTIVE: This study evaluates the impact of deep learning-enhanced T1-weighted VIBE sequences (DL-VIBE) on image quality and procedural parameters during MR-guided thermoablation of liver malignancies, compared to standard VIBE (SD-VIBE).

METHODS: Between September 2021 and February 2023, 34 patients (mean age: 65.4 years; 13 women) underwent MR-guided microwave ablation on a 1.5 T scanner. Intraprocedural SD-VIBE sequences were retrospectively processed with a deep learning algorithm (DL-VIBE) to reduce noise and enhance sharpness. Two interventional radiologists independently assessed image quality, noise, artifacts, sharpness, diagnostic confidence, and procedural parameters using a 5-point Likert scale. Interrater agreement was analyzed, and noise maps were created to assess signal-to-noise ratio improvements.

RESULTS: DL-VIBE significantly improved image quality, reduced artifacts and noise, and enhanced sharpness of liver contours and portal vein branches compared to SD-VIBE (p < 0.01). Procedural metrics, including needle tip detectability, confidence in needle positioning, and ablation zone assessment, were significantly better with DL-VIBE (p < 0.01). Interrater agreement was high (Cohen κ = 0.86). Reconstruction times for DL-VIBE were 3 s for k-space reconstruction and 1 s for superresolution processing. Simulated acquisition modifications reduced breath-hold duration by approximately 2 s.

CONCLUSION: DL-VIBE enhances image quality during MR-guided thermal ablation while improving efficiency through reduced processing and acquisition times.

PMID:40176185 | DOI:10.1186/s40644-025-00869-x

Categories: Literature Watch

A compact deep learning approach integrating depthwise convolutions and spatial attention for plant disease classification

Wed, 2025-04-02 06:00

Plant Methods. 2025 Apr 2;21(1):48. doi: 10.1186/s13007-025-01325-4.

ABSTRACT

Plant leaf diseases significantly threaten agricultural productivity and global food security, emphasizing the importance of early and accurate detection and effective crop health management. Current deep learning models, often used for plant disease classification, have limitations in capturing intricate features such as texture, shape, and color of plant leaves. Furthermore, many of these models are computationally expensive and less suitable for deployment in resource-constrained environments such as farms and rural areas. We propose a novel lightweight deep learning model, Depthwise Separable Convolution with Spatial Attention (LWDSC-SA), designed to address these limitations and enhance feature extraction while maintaining computational efficiency. By integrating spatial attention and depthwise separable convolution, the LWDSC-SA model improves the ability to detect and classify plant diseases. In our comprehensive evaluation using the PlantVillage dataset, which consists of 38 classes and 55,000 images from 14 plant species, the LWDSC-SA model achieved 98.7% accuracy, outperforming MobileNet by 5.25%, MobileNetV2 by 4.50%, AlexNet by 7.40%, and VGGNet16 by 5.95%. Furthermore, to validate its robustness and generalizability, we employed K-fold cross-validation (K = 5), which demonstrated consistently high performance, with an average accuracy of 98.58%, precision of 98.30%, recall of 98.90%, and F1 score of 98.58%. These results highlight the superior performance of the proposed model, demonstrating its ability to outperform state-of-the-art models in terms of accuracy while remaining lightweight and efficient. This research offers a promising solution for real-world agricultural applications, enabling effective plant disease detection in resource-limited settings and contributing to more sustainable agricultural practices.
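The parameter savings of depthwise separable convolution over a standard convolution follow from a direct count. A sketch with an illustrative layer shape (not necessarily one used in LWDSC-SA):

```python
def params_standard_conv(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def params_depthwise_separable(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv (bias omitted)."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 3 x 3 kernel, 32 input channels, 64 output channels.
std = params_standard_conv(3, 32, 64)
dsc = params_depthwise_separable(3, 32, 64)
```

Here the separable layer needs roughly an eighth of the weights, which is why such models suit resource-constrained deployment.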

PMID:40176127 | DOI:10.1186/s13007-025-01325-4

Categories: Literature Watch

Forecasting motion trajectories of elbow and knee joints during infant crawling based on long-short-term memory (LSTM) networks

Wed, 2025-04-02 06:00

Biomed Eng Online. 2025 Apr 2;24(1):39. doi: 10.1186/s12938-025-01360-1.

ABSTRACT

BACKGROUND: Hands-and-knees crawling is a promising rehabilitation intervention for infants with motor impairments, while research on assistive crawling devices for rehabilitation training was still in its early stages. In particular, precisely generating motion trajectories is a prerequisite to controlling exoskeleton assistive devices, and deep learning-based prediction algorithms, such as Long-Short-Term Memory (LSTM) networks, have proven effective in forecasting joint trajectories of gait. Despite this, no previous studies have focused on forecasting the more variable and complex trajectories of infant crawling. Therefore, this paper aims to explore the feasibility of using LSTM networks to predict crawling trajectories, thereby advancing our understanding of how to actively control crawling rehabilitation training robots.

METHODS: We collected joint trajectory data from 20 healthy infants (11 males and 9 females, aged 8-15 months) as they crawled on hands and knees. This study implemented LSTM networks to forecast bilateral elbow and knee trajectories based on corresponding joint angles. The data set comprised 58,782 time steps, each containing 4 joint angles. We partitioned the data set into 70% for training and 30% for testing to evaluate predictive performance. We investigated a total of 24 combinations of input and output time-frames, with input window sizes of 10, 15, 20, 30, 40, 50, 70, and 100 time steps, and output window sizes of 5, 10, and 15 steps. Evaluation metrics included Mean Absolute Error (MAE), Mean Squared Error (MSE), and Correlation Coefficient (CC) to assess prediction accuracy.
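The input/output combinations come from sliding a window over each joint-angle series. A minimal sketch of the windowing step, using the study's best-performing 30-in/5-out configuration on a synthetic stand-in series:

```python
def make_windows(series, n_in, n_out):
    """Split a 1-D series into (input, target) pairs for sequence forecasting:
    each input covers n_in consecutive steps, the target the next n_out."""
    samples = []
    for start in range(len(series) - n_in - n_out + 1):
        x = series[start : start + n_in]
        y = series[start + n_in : start + n_in + n_out]
        samples.append((x, y))
    return samples

series = list(range(100))            # stand-in for one joint-angle trajectory
pairs = make_windows(series, n_in=30, n_out=5)
```

Each pair would then be fed to the LSTM as one training sample; with 4 joint angles per step, each input element becomes a length-4 vector.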

RESULTS: The results indicate that across various input-output windows, the MAE for elbow joints ranged from 0.280 to 4.976°, MSE ranged from 0.203° to 59.186°, and CC ranged from 89.977% to 99.959%. For knee joints, MAE ranged from 0.277 to 4.262°, MSE from 0.229 to 53.272°, and CC from 89.454% to 99.944%. Results also show that smaller output window sizes lead to lower prediction errors. As expected, the LSTM predicting 5 output time steps has the lowest average error, while the LSTM predicting 15 time steps has the highest average error. In addition, variations in input window size had a minimal impact on average error when the output window size was fixed. Overall, the optimal performance for both elbow and knee joints was observed with input-output window sizes of 30 and 5 time steps, respectively, yielding an MAE of 0.295°, MSE of 0.260°, and CC of 99.938%.

CONCLUSIONS: This study demonstrates the feasibility of forecasting infant crawling trajectories using LSTM networks, which could potentially integrate with exoskeleton control systems. It experimentally explores how different input and output time-frames affect prediction accuracy and sets the stage for future research focused on optimizing models and developing effective control strategies to improve assistive crawling devices.

PMID:40176123 | DOI:10.1186/s12938-025-01360-1

Categories: Literature Watch

Prediction of Future Risk of Moderate to Severe Kidney Function Loss Using a Deep Learning Model-Enabled Chest Radiography

Wed, 2025-04-02 06:00

J Imaging Inform Med. 2025 Apr 2. doi: 10.1007/s10278-025-01489-4. Online ahead of print.

ABSTRACT

Chronic kidney disease (CKD) remains a major public health concern, requiring better predictive models for early intervention. This study evaluates a deep learning model (DLM) that utilizes raw chest X-ray (CXR) data to predict moderate to severe kidney function decline. We analyzed data from 79,219 patients with an estimated Glomerular Filtration Rate (eGFR) between 65 and 120, segmented into development (n = 37,983), tuning (n = 15,346), internal validation (n = 14,113), and external validation (n = 11,777) sets. Our DLM, pretrained on CXR-report pairs, was fine-tuned with the development set. We retrospectively examined data spanning April 2011 to February 2022, with a 5-year maximum follow-up. Primary and secondary endpoints included CKD stage 3b progression, ESRD/dialysis, and mortality. The overall concordance index (C-index) values for the internal and external validation sets were 0.903 (95% CI, 0.885-0.922) and 0.851 (95% CI, 0.819-0.883), respectively. In these sets, the incidences of progression to CKD stage 3b at 5 years were 19.2% and 13.4% in the high-risk group, significantly higher than those in the median-risk (5.9% and 5.1%) and low-risk groups (0.9% and 0.9%), respectively. The sex, age, and eGFR-adjusted hazard ratios (HR) for the high-risk group compared to the low-risk group were 16.88 (95% CI, 10.84-26.28) and 7.77 (95% CI, 4.77-12.64), respectively. The high-risk group also exhibited higher probabilities of progressing to ESRD/dialysis or experiencing mortality compared to the low-risk group. Further analysis revealed that the high-risk group compared to the low/median-risk group had a higher prevalence of complications and abnormal blood/urine markers. Our findings demonstrate that a DLM utilizing CXR can effectively predict CKD stage 3b progression, offering a potential tool for early intervention in high-risk populations.
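The concordance index counts, over all comparable patient pairs, how often the higher-risk patient is the one who progresses first. A minimal sketch of Harrell's C with made-up survival data (not the study's cohort):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored outcomes.
    A pair is comparable when the earlier time belongs to an observed event."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:   # i progressed first
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5               # ties count half
    return concordant / comparable

# Toy cohort: follow-up time, event flag (1 = progressed), model risk score.
c = concordance_index(times=[1, 2, 3], events=[1, 1, 0],
                      risks=[0.9, 0.95, 0.2])
```

A C-index of 0.5 is chance-level ranking; the study's 0.903 and 0.851 indicate strong risk discrimination.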

PMID:40175823 | DOI:10.1007/s10278-025-01489-4

Categories: Literature Watch

Leveraging Fine-Scale Variation and Heterogeneity of the Wetland Soil Microbiome to Predict Nutrient Flux on the Landscape

Wed, 2025-04-02 06:00

Microb Ecol. 2025 Apr 2;88(1):22. doi: 10.1007/s00248-025-02516-1.

ABSTRACT

Shifts in agricultural land use over the past 200 years have led to a loss of nearly 50% of existing wetlands in the USA, and agricultural activities contribute up to 65% of the nutrients that reach the Mississippi River Basin, directly contributing to biological disasters such as the hypoxic Gulf of Mexico "Dead" Zone. Federal efforts to construct and restore wetland habitats have been employed to mitigate the detrimental effects of eutrophication, with an emphasis on the restoration of ecosystem services such as nutrient cycling and retention. Soil microbial assemblages drive biogeochemical cycles and offer a unique and sensitive framework for the accurate evaluation, restoration, and management of ecosystem services. The purpose of this study was to elucidate patterns of soil bacteria within and among wetlands by developing diversity profiles from high-throughput sequencing data, link functional gene copy number of nitrogen cycling genes to measured nutrient flux rates collected from flow-through incubation cores, and predict nutrient flux using microbial assemblage composition. Soil microbial assemblages showed fine-scale turnover in soil cores collected across the topsoil horizon (0-5 cm; top vs bottom partitions) and were structured by restoration practices on the easements (tree planting, shallow water, remnant forest). Connections between soil assemblage composition, functional gene copy number, and nutrient flux rates show the potential for soil bacterial assemblages to be used as bioindicators for nutrient cycling on the landscape. In addition, the predictive accuracy of flux rates was improved when implementing deep learning models that paired connected samples across time.
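Diversity profiles of this kind are commonly built from indices such as Shannon entropy over taxon abundances. A minimal sketch; the abundance counts are illustrative, not from the study:

```python
from math import log

def shannon_diversity(abundances):
    """Shannon index H' = -sum(p_i * ln p_i) over taxon relative abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * log(p) for p in props)

# Four equally abundant taxa give the maximum H' for 4 taxa, ln(4):
h = shannon_diversity([25, 25, 25, 25])
```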

PMID:40175811 | DOI:10.1007/s00248-025-02516-1

Categories: Literature Watch

scAtlasVAE: a deep learning framework for generating a human CD8(+) T cell atlas

Wed, 2025-04-02 06:00

Nat Rev Cancer. 2025 Apr 2. doi: 10.1038/s41568-025-00811-0. Online ahead of print.

NO ABSTRACT

PMID:40175619 | DOI:10.1038/s41568-025-00811-0

Categories: Literature Watch

Estimating strawberry weight for grading by picking robot with point cloud completion and multimodal fusion network

Wed, 2025-04-02 06:00

Sci Rep. 2025 Apr 2;15(1):11227. doi: 10.1038/s41598-025-92641-1.

ABSTRACT

Strawberry grading by picking robots can eliminate the manual classification, reducing labor costs and minimizing the damage to the fruit. Strawberry size or weight is a key factor in grading, with accurate weight estimation being crucial for proper classification. In this paper, we collected 1521 sets of strawberry RGB-D images using a depth camera and manually measured the weight and size of the strawberries to construct a training dataset for the strawberry weight regression model. To address the issue of incomplete depth images caused by environmental interference with depth cameras, this study proposes a multimodal point cloud completion method specifically designed for symmetrical objects, leveraging RGB images to guide the completion of depth images in the same scene. The method follows a process of locating strawberry pixel regions, calculating centroid coordinates, determining the symmetry axis via PCA, and completing the depth image. Based on this approach, a multimodal fusion regression model for strawberry weight estimation, named MMF-Net, is developed. The model uses the completed point cloud and RGB image as inputs, and extracts features from the RGB image and point cloud by EfficientNet and PointNet, respectively. These features are then integrated at the feature level through gradient blending, combining the strengths of both modalities. Using the Percent Correct Weight (PCW) metric as the evaluation standard, this study compares the performance of four traditional machine learning methods (Support Vector Regression (SVR), Multilayer Perceptron (MLP), Linear Regression, and Random Forest Regression), four point cloud-based deep learning models (PointNet, PointNet++, PointMLP, and Point Cloud Transformer), and two image-based deep learning models (EfficientNet and ResNet) on single-modal datasets. The results indicate that among traditional machine learning methods, the SVR model achieved the best performance with an accuracy of 77.7% (PCW@0.2). Among deep learning methods, the image-based EfficientNet model obtained the highest accuracy, reaching 85% (PCW@0.2), while the PointNet++ model demonstrated the best performance among point cloud-based models, with an accuracy of 54.3% (PCW@0.2). The proposed multimodal fusion model, MMF-Net, achieved an accuracy of 87.66% (PCW@0.2), significantly outperforming both traditional machine learning methods and single-modal deep learning models in terms of precision.
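The symmetry-axis step can be sketched with plain PCA: subtract the centroid and take the dominant eigenvector of the point covariance. The point cloud below is synthetic, not strawberry data:

```python
import numpy as np

def principal_axis(points):
    """Centroid and dominant principal axis of an (N, 3) point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return centroid, eigvecs[:, -1]          # axis with the largest variance

# Synthetic elongated cloud along x with slight jitter in y:
pts = [[0, 0, 0], [1, 0.01, 0], [2, -0.01, 0], [3, 0.02, 0], [4, 0, 0]]
centroid, axis = principal_axis(pts)
```

For a roughly symmetric fruit, points can then be mirrored across this axis to fill holes left by depth-sensing failures, which is the idea behind the completion step.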

PMID:40175474 | DOI:10.1038/s41598-025-92641-1

Categories: Literature Watch

Investigation on potential bias factors in histopathology datasets

Wed, 2025-04-02 06:00

Sci Rep. 2025 Apr 2;15(1):11349. doi: 10.1038/s41598-025-89210-x.

ABSTRACT

Deep neural networks (DNNs) have demonstrated remarkable capabilities in medical applications, including digital pathology, where they excel at analyzing complex patterns in medical images to assist in accurate disease diagnosis and prognosis. However, concerns have arisen about potential biases in The Cancer Genome Atlas (TCGA) dataset, a comprehensive repository of digitized histopathology data which serves as both a training and validation source for deep learning models, suggesting that over-optimistic results of model performance may be due to reliance on biased features rather than histological characteristics. Surprisingly, recent studies have confirmed the existence of site-specific bias in the embedded features extracted for cancer-type discrimination, leading to high accuracy in acquisition site classification. This biased behavior motivated us to conduct an in-depth analysis to investigate potential causes behind this unexpected biased ability toward site-specific pattern recognition. The analysis was conducted on two cutting-edge DNN models: KimiaNet, a state-of-the-art DNN trained on TCGA images, and the self-trained EfficientNet. In this research study, the balanced accuracy metric is used to evaluate how well a model originally designed to learn cancerous patterns classifies data centers, with the aim of investigating the potential factors contributing to the higher balanced accuracy in data center detection.
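Balanced accuracy averages per-class recall, so it is robust to class imbalance across acquisition sites. A minimal sketch with illustrative site labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall over the classes present in y_true."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Illustrative: 3 samples from site 0, 1 sample from site 1.
ba = balanced_accuracy([0, 0, 0, 1], [0, 0, 1, 1])
```

Plain accuracy on the same labels would be 3/4; balanced accuracy weights the rare site equally, giving (2/3 + 1)/2.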

PMID:40175463 | DOI:10.1038/s41598-025-89210-x

Categories: Literature Watch

Experiment study on UAV target detection algorithm based on YOLOv8n-ACW

Wed, 2025-04-02 06:00

Sci Rep. 2025 Apr 2;15(1):11352. doi: 10.1038/s41598-025-91394-1.

ABSTRACT

To address the challenges associated with dense and occluded targets in small target detection utilizing unmanned aerial vehicles (UAVs), we propose an enhanced detection algorithm referred to as YOLOv8n-ACW. Building upon the YOLOv8n baseline network model, we have integrated ADown into the Backbone and developed a CCDHead to further improve the drone's capability to recognize small targets. Additionally, WIoU-V3 has been introduced as the loss function. Experimental results on the VisDrone2019 dataset indicate that YOLOv8n-ACW achieves a 4.2% increase in mAP50 (%) compared to the baseline model, while simultaneously reducing the parameter count by 36.7%, exhibiting superior capabilities in detecting small targets. Furthermore, in target detection experiments on a self-constructed dataset of G5-Pro drones, the results indicate that the enhanced model has robust generalization capabilities in real-world environments. The UAV target detection experiment combines experimental simulation with real-world testing, while combining scientific exploration with educational objectives. The experiment has high fidelity, excellent functional scalability, and strong practicality, aiming to cultivate students' comprehensive practical and innovative abilities.
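WIoU-V3 builds on the standard intersection-over-union between predicted and ground-truth boxes. A sketch of the underlying IoU computation only; WIoU's dynamic focusing weights are omitted, and the boxes are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extent along each axis, clamped at zero for disjoint boxes.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

v = iou((0, 0, 2, 2), (1, 1, 3, 3))
```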

PMID:40175443 | DOI:10.1038/s41598-025-91394-1

Categories: Literature Watch

PixelPrint4D: A 3D Printing Method of Fabricating Patient-Specific Deformable CT Phantoms for Respiratory Motion Applications

Wed, 2025-04-02 06:00

Invest Radiol. 2025 Apr 2. doi: 10.1097/RLI.0000000000001182. Online ahead of print.

ABSTRACT

OBJECTIVES: Respiratory motion poses a significant challenge for clinical workflows in diagnostic imaging and radiation therapy. Many technologies such as motion artifact reduction and tumor tracking have been developed to compensate for its effect. To assess these technologies, respiratory motion phantoms (RMPs) are required as preclinical testing environments, for instance, in computed tomography (CT). However, current CT RMPs are highly simplified and do not exhibit realistic tissue structures or deformation patterns. With the rise of more complex motion compensation technologies such as deep learning-based algorithms, there is a need for more realistic RMPs. This work introduces PixelPrint4D, a 3D printing method for fabricating lifelike, patient-specific deformable lung phantoms for CT imaging.

MATERIALS AND METHODS: A 4DCT dataset of a lung cancer patient was acquired. The volumetric image data of the right lung at end inhalation was converted into 3D printer instructions using the previously developed PixelPrint software. A flexible 3D printing material was used to replicate variable densities voxel-by-voxel within the phantom. The accuracy of the phantom was assessed by acquiring CT scans of the phantom at rest, and under various levels of compression. These phantom images were then compiled into a pseudo-4DCT dataset and compared to the reference patient 4DCT images. Metrics used to assess the phantom structural accuracy included mean attenuation errors, 2-sample 2-sided Kolmogorov-Smirnov (KS) test on histograms, and structural similarity index (SSIM). The phantom deformation properties were assessed by calculating displacement errors of the tumor and throughout the full lung volume, attenuation change errors, and Jacobian errors, as well as the relationship between Jacobian and attenuation changes.

RESULTS: The phantom closely replicated patient lung structures, textures, and attenuation profiles. SSIM was measured as 0.93 between the patient and phantom lung, suggesting a high level of structural accuracy. Furthermore, it exhibited realistic nonrigid deformation patterns. The mean tumor motion errors in the phantom were ≤0.7 ± 0.6 mm in each orthogonal direction. Finally, the relationship between attenuation and local volume changes in the phantom had a strong correlation with that of the patient, with analysis of covariance yielding P = 0.83 and f = 0.04, suggesting no significant difference between the phantom and patient.

CONCLUSIONS: PixelPrint4D facilitates the creation of highly realistic RMPs, exceeding the capabilities of existing models to provide enhanced testing environments for a wide range of emerging CT technologies.

PMID:40173424 | DOI:10.1097/RLI.0000000000001182

Categories: Literature Watch

Beyond the Posts: Analyzing Breast Implant Illness Discourse With Natural Language Processing and Deep Learning

Wed, 2025-04-02 06:00

Aesthet Surg J. 2025 Apr 2:sjaf047. doi: 10.1093/asj/sjaf047. Online ahead of print.

ABSTRACT

BACKGROUND: Breast Implant Illness (BII) is a spectrum of symptoms some people attribute to breast implants. While causality remains unproven, patient interest has grown significantly. Understanding patient perceptions of BII on social media is crucial as these platforms increasingly influence healthcare decisions.

OBJECTIVES: The purpose of this study is to analyze patient perceptions and emotional responses to BII on social media using RoBERTa, a natural language processing model trained on 124 million X posts.

METHODS: Posts mentioning BII from 2014-2023 were analyzed using two NLP models: one for sentiment (positive/negative) and another for emotions (fear, sadness, anger, disgust, neutral, surprise, and joy). Posts were then classified by their highest-scoring emotion. Results were compared across two periods, 2014-2018 and 2019-2023, with correlation analysis (Pearson correlation coefficient) against published implant explantation and augmentation data.

RESULTS: Analysis of 6,099 posts over 10 years showed 75.4% were negative, with a monthly average of 50.85 posts, peaking at 213 in March 2019. Fear and neutral emotions dominated, representing 35.9% and 35.6%, respectively. The strongest emotions were neutral and fear, with average scores of 0.293 and 0.286 per post, respectively. Fear scores increased from 0.219 (2014-2018) to 0.303 (2019-2023). Strong positive correlations (r > 0.70) existed between annual explantation rates/explantation-to-augmentation ratios and total, negative, neutral, and fear posts.
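The correlation analysis reported above can be sketched as follows. This is an illustrative sketch only: the yearly counts below are hypothetical stand-ins, not the study's data, and serve merely to show how an r > 0.70 threshold would be applied.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical annual fear-post counts and explantation counts (illustrative only)
fear_posts = [120, 180, 240, 610, 520, 480]
explantations = [28000, 30000, 33000, 44000, 41000, 39000]

r = pearson_r(fear_posts, explantations)
strong_positive = r > 0.70  # the study's threshold for a strong correlation
```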

CONCLUSIONS: BII discourse on X peaked in 2019, characterized predominantly by negative sentiment and fear. The strong correlation between fear/negative-based posts and explantation rates suggests social media discourse significantly influences patient decisions regarding breast implant removal.

PMID:40173420 | DOI:10.1093/asj/sjaf047

Categories: Literature Watch

Enlightened prognosis: Hepatitis prediction with an explainable machine learning approach

Wed, 2025-04-02 06:00

PLoS One. 2025 Apr 2;20(4):e0319078. doi: 10.1371/journal.pone.0319078. eCollection 2025.

ABSTRACT

Hepatitis is a widespread inflammatory condition of the liver, presenting a formidable global health challenge. Accurate and timely detection of hepatitis is crucial for effective patient management, yet existing methods exhibit limitations that underscore the need for innovative approaches. Early-stage detection of hepatitis is now possible with the recent adoption of machine learning and deep learning approaches. With this in mind, the study investigates the use of traditional machine learning models, specifically classifiers such as logistic regression, support vector machines (SVM), decision trees, random forest, multilayer perceptron (MLP), and other models, to predict hepatitis infections. After extensive data preprocessing including outlier detection, dataset balancing, and feature engineering, we evaluated the performance of these models. We explored three modeling approaches: machine learning with default hyperparameters, hyperparameter-tuned models using GridSearchCV, and ensemble modeling techniques. The SVM model demonstrated outstanding performance, achieving 99.25% accuracy and a perfect AUC score of 1.00, with consistent results across other metrics: 99.27% precision and 99.24% for both recall and F1-measure. The MLP and random forest models kept pace with the SVM, each exhibiting an accuracy of 99.00%. To ensure robustness, we employed a 5-fold cross-validation technique. For deeper insight into model interpretability and validation, we performed an explainability analysis of our best-performing models to identify the most influential features for hepatitis detection. Our proposed model, particularly the SVM, exhibits better prediction performance across different performance metrics compared with the existing literature.
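The hyperparameter-tuned SVM pipeline described above can be sketched with scikit-learn. This is an illustrative sketch, not the study's code: it uses a synthetic stand-in for the hepatitis dataset, and the parameter grid is a minimal example of what GridSearchCV with 5-fold cross-validation might search.

```python
# Sketch: 5-fold GridSearchCV over an RBF-kernel SVC on synthetic data
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the preprocessed hepatitis dataset
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,                 # 5-fold cross-validation, as in the study
    scoring="accuracy",
)
grid.fit(X_tr, y_tr)
test_acc = grid.score(X_te, y_te)  # held-out accuracy of the tuned model
```

Scaling inside the pipeline keeps the standardization statistics inside each cross-validation fold, which avoids leaking test-fold information into the tuning step.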

PMID:40173410 | DOI:10.1371/journal.pone.0319078

Categories: Literature Watch

Predicting Atlantic and Benguela Niño events with deep learning

Wed, 2025-04-02 06:00

Sci Adv. 2025 Apr 4;11(14):eads5185. doi: 10.1126/sciadv.ads5185. Epub 2025 Apr 2.

ABSTRACT

Atlantic and Benguela Niño events substantially affect the tropical Atlantic region, with far-reaching consequences for local marine ecosystems, African climates, and the El Niño Southern Oscillation. While accurate forecasts of these events are invaluable, state-of-the-art dynamic forecasting systems have shown limited predictive capabilities. Thus, the extent to which tropical Atlantic variability is predictable remains an open question. This study explores the potential of deep learning in this context. Using a simple convolutional neural network architecture, we show that Atlantic/Benguela Niños can be predicted up to 3 to 4 months ahead. Our model excels in forecasting peak-season events with remarkable accuracy, extending the lead time to 5 months. Detailed analysis reveals our model's ability to exploit known physical precursors, such as long-wave ocean dynamics, for accurate predictions of these events. This study challenges the perception that the tropical Atlantic is unpredictable and highlights deep learning's potential to advance our understanding and forecasting of critical climate events.

PMID:40173237 | DOI:10.1126/sciadv.ads5185

Categories: Literature Watch