Deep learning
Accelerating polymer self-consistent field simulation and inverse DSA-lithography with deep neural networks
J Chem Phys. 2025 Mar 14;162(10):104105. doi: 10.1063/5.0255288.
ABSTRACT
Self-consistent field theory (SCFT) is a powerful polymer field-theoretic simulation tool that plays a crucial role in the study of block copolymer (BCP) self-assembly. However, the computational cost of SCFT simulations is comparatively high, particularly in demanding applications where repeated forward simulations are needed. Herein, we propose a deep learning-based method to accelerate SCFT simulations. By directly mapping early SCFT results to equilibrium structures using a deep neural network (DNN), this method bypasses most of the time-consuming SCFT iterations, significantly reducing the simulation time. We first applied this method to two- and three-dimensional large-cell bulk system simulations. Both results demonstrate that a DNN can be trained to accurately predict equilibrium states from early-iteration outputs. The number of early SCFT iterations can be tailored to optimize the trade-off between computational speed and predictive accuracy. The effect of training set size on DNN performance was also examined, offering guidance on minimizing dataset generation costs. Furthermore, we applied this method to the more computationally demanding inverse directed self-assembly (DSA) lithography problem, proposing an inverse design method based on the covariance matrix adaptation evolution strategy (CMA-ES). By replacing the forward simulation model in this method with a trained DNN, we were able to determine the guiding template shapes that direct the BCP to self-assemble into the target structure under certain constraints, eliminating the need for any SCFT simulations. This improved the inverse design efficiency by a factor of 100, and the computational cost of training the network can easily be amortized over repeated tasks.
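The surrogate-assisted inverse-design loop described above can be sketched generically: an evolution strategy proposes template parameters, and a trained network scores them instead of a full SCFT run. In the sketch below, `surrogate_loss` is a hypothetical quadratic stand-in for the trained DNN, and the optimizer is a minimal (mu, lambda) evolution strategy, not the authors' full CMA-ES:

```python
import numpy as np

def surrogate_loss(template_params):
    # Hypothetical stand-in for the trained DNN surrogate: in the paper, a
    # network maps template parameters to a predicted BCP structure, which
    # is scored against the target. Here we use a quadratic distance to a
    # fictitious "target" parameter vector.
    target = np.array([0.3, -0.7, 0.5])
    return float(np.sum((template_params - target) ** 2))

def evolve_template(loss, dim=3, pop=20, parents=5, sigma=0.5,
                    generations=60, seed=0):
    """Minimal (mu, lambda) evolution strategy; the paper's CMA-ES
    additionally adapts a full covariance matrix per generation."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    for _ in range(generations):
        samples = mean + sigma * rng.standard_normal((pop, dim))
        scores = np.array([loss(s) for s in samples])
        elite = samples[np.argsort(scores)[:parents]]
        mean = elite.mean(axis=0)   # recombination of the best candidates
        sigma *= 0.95               # simple step-size decay
    return mean

best = evolve_template(surrogate_loss)
```

Because each candidate is scored by a cheap network evaluation rather than an SCFT run, the population loop becomes the dominant cost, which is what enables the reported speedup.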
PMID:40062757 | DOI:10.1063/5.0255288
Advancements in machine learning and biomarker integration for prenatal Down syndrome screening
Turk J Obstet Gynecol. 2025 Mar 10;22(1):75-82. doi: 10.4274/tjod.galenos.2025.12689.
ABSTRACT
The use of machine learning (ML) in biomarker analysis for predicting Down syndrome exemplifies an innovative strategy that enhances diagnostic accuracy and enables early detection. Recent studies demonstrate the effectiveness of ML algorithms in identifying genetic variations and expression patterns associated with Down syndrome by comparing genomic data from affected individuals and their typically developing peers. This review examines how ML and biomarker analysis improve prenatal screening for Down syndrome. Advancements show that integrating maternal serum markers, nuchal translucency measurements, and ultrasonographic images with algorithms, such as random forests and deep learning convolutional neural networks, raises detection rates to above 85% while keeping false positive rates low. Moreover, non-invasive prenatal testing with soft ultrasound markers has increased diagnostic sensitivity and specificity, marking a significant shift in prenatal care. The review highlights the importance of implementing robust screening protocols that utilize ultrasound biomarkers, along with developing personalized screening tools through advanced statistical methods. It also explores the potential of combining genetic and epigenetic biomarkers with ML to further improve diagnostic accuracy and understanding of Down syndrome pathophysiology. The findings stress the need for ongoing research to optimize algorithms, validate their effectiveness across diverse populations, and incorporate these cutting-edge approaches into routine clinical practice. Ultimately, blending advanced imaging techniques with ML shows promise for enhancing prenatal care outcomes and aiding informed decision-making for expectant parents.
PMID:40062699 | DOI:10.4274/tjod.galenos.2025.12689
Inferring gene regulatory networks from time-series scRNA-seq data via GRANGER causal recurrent autoencoders
Brief Bioinform. 2025 Mar 4;26(2):bbaf089. doi: 10.1093/bib/bbaf089.
ABSTRACT
The development of single-cell RNA sequencing (scRNA-seq) technology provides valuable data resources for inferring gene regulatory networks (GRNs), enabling deeper insights into cellular mechanisms and diseases. While many methods exist for inferring GRNs from static scRNA-seq data, current approaches face challenges in accurately handling time-series scRNA-seq data due to high noise levels and data sparsity. The temporal dimension introduces additional complexity by requiring models to capture dynamic changes, increasing sensitivity to noise, and exacerbating data sparsity across time points. In this study, we introduce GRANGER, an unsupervised deep learning-based method that integrates multiple advanced techniques, including a recurrent variational autoencoder, Granger causality, sparsity-inducing penalties, and negative binomial (NB)-based loss functions, to infer GRNs. GRANGER was evaluated using multiple popular benchmarking datasets, where it demonstrated superior performance compared to eight well-known GRN inference methods. The integration of an NB-based loss function and sparsity-inducing penalties in GRANGER significantly enhanced its capacity to address dropout noise and sparsity in scRNA-seq data. Additionally, GRANGER exhibited robustness against high levels of dropout noise. We applied GRANGER to scRNA-seq data from the whole mouse brain obtained through the BRAIN Initiative project and identified GRNs for five transcription regulators: E2f7, Gbx1, Sox10, Prox1, and Onecut2, which play crucial roles in diverse brain cell types. The inferred GRNs not only recalled many known regulatory relationships but also revealed sets of novel regulatory interactions with functional potential. These findings demonstrate that GRANGER is a highly effective tool for real-world applications in discovering novel gene regulatory relationships.
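As background for the Granger component above: a linear Granger score asks whether lagged values of one series reduce the prediction error of another beyond its own history. The sketch below (plain least squares on a toy pair of series, not GRANGER's deep recurrent architecture) illustrates the idea:

```python
import numpy as np

def granger_score(x, y, lag=1):
    """Classic linear Granger statistic in variance-ratio form:
    does adding lagged x improve prediction of y over y's own lags?"""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    full = np.column_stack(
        [own] + [x[lag - k - 1:n - k - 1, None] for k in range(lag)])
    def rss(A):
        A = np.column_stack([np.ones(len(A)), A])   # add intercept
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return float(r @ r)
    # score > 0 means lagged x reduces the residual error of y
    return np.log(rss(own) / rss(full))

# toy system where x drives y with a one-step delay
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

Here `granger_score(x, y)` is large while `granger_score(y, x)` is near zero, recovering the simulated direction of regulation.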
PMID:40062616 | DOI:10.1093/bib/bbaf089
A novel integrative multimodal classifier to enhance the diagnosis of Parkinson's disease
Brief Bioinform. 2025 Mar 4;26(2):bbaf088. doi: 10.1093/bib/bbaf088.
ABSTRACT
Parkinson's disease (PD) is a complex, progressive neurodegenerative disorder with high heterogeneity, making early diagnosis difficult. Early detection and intervention are crucial for slowing PD progression. Understanding PD's diverse pathways and mechanisms is key to advancing knowledge. Recent advances in noninvasive imaging and multi-omics technologies have provided valuable insights into PD's underlying causes and biological processes. However, integrating these diverse data sources remains challenging, especially when deriving meaningful low-level features that can serve as diagnostic indicators. This study developed and validated a novel integrative, multimodal predictive model for detecting PD based on features derived from multimodal data, including hematological information, proteomics, RNA sequencing, metabolomics, and dopamine transporter scan imaging, sourced from the Parkinson's Progression Markers Initiative. Several model architectures were investigated and evaluated, including support vector machine, eXtreme Gradient Boosting, fully connected neural networks with concatenation and joint modeling (FCNN_C and FCNN_JM), and a multimodal encoder-based model with multi-head cross-attention (MMT_CA). The MMT_CA model demonstrated superior predictive performance, achieving a balanced classification accuracy of 97.7%, thus highlighting its ability to capture and leverage cross-modality inter-dependencies to aid predictive analytics. Furthermore, feature importance analysis using SHapley Additive exPlanations not only identified crucial diagnostic biomarkers to inform the predictive models in this study but also holds potential for future research aimed at integrated functional analyses of PD from a multi-omics perspective, ultimately revealing targets for precision medicine approaches that aim to slow PD progression.
PMID:40062615 | DOI:10.1093/bib/bbaf088
TopoQA: a topological deep learning-based approach for protein complex structure interface quality assessment
Brief Bioinform. 2025 Mar 4;26(2):bbaf083. doi: 10.1093/bib/bbaf083.
ABSTRACT
Even with the significant advances of AlphaFold-Multimer (AF-Multimer) and AlphaFold3 (AF3) in protein complex structure prediction, their accuracy is still not comparable with monomer structure prediction. Efficient and effective quality assessment (QA), or estimation of model accuracy, models that can evaluate the quality of predicted protein complexes without knowing their native structures are of key importance for protein structure generation and model selection. In this paper, we leverage persistent homology (PH) to capture the atomic-level topological information around residues and design a topological deep learning-based QA method, TopoQA, to assess the accuracy of protein complex interfaces. We integrate PH from topological data analysis into graph neural networks (GNNs) to characterize complex higher-order structures that GNNs might overlook, enhancing the learning of the relationship between the topological structure of complex interfaces and quality scores. Our TopoQA model is extensively validated on the two most widely used benchmark datasets, Docking Benchmark5.5 AF2 (DBM55-AF2) and Heterodimer-AF2 (HAF2), along with our newly constructed ABAG-AF3 dataset, which facilitates comparisons with AF3. For all three datasets, TopoQA outperforms AF-Multimer-based AF2Rank and shows an advantage over AF3 in nearly half of the targets. In particular, on the DBM55-AF2 dataset, TopoQA obtains a ranking loss 73.6% lower than that of AF-Multimer-based AF2Rank. Furthermore, beyond AF-Multimer and AF3, we extensively compared TopoQA with nearly all state-of-the-art models (to our knowledge); TopoQA achieves the highest Top 10 hit rate on the DBM55-AF2 dataset and the lowest ranking loss on the HAF2 dataset. Ablation experiments show that our topological features significantly improve the model's performance. At the same time, our method also provides a new paradigm for protein structure representation learning.
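The simplest ingredient of the persistent homology referenced above is 0-dimensional persistence: in a Vietoris-Rips filtration, connected components merge (die) as the distance scale grows, along the edges of a minimum spanning tree. A minimal sketch via Kruskal's algorithm with union-find (TopoQA's actual atomic-level PH features are far richer):

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """Death times of 0-dimensional features in a Vietoris-Rips
    filtration: components merge along minimum-spanning-tree edges."""
    n = len(points)
    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)   # one component dies at scale d
    return deaths

# two tight clusters far apart: two short deaths, one long one
pts = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 0.0], [5.0, 0.1]])
deaths = h0_persistence(pts)
```

The long-lived component (death near 5.0) reflects the large-scale cluster structure, which is the kind of multiscale signal PH feeds to the GNN.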
PMID:40062613 | DOI:10.1093/bib/bbaf083
Hybrid transformer-CNN network-driven optical-scanning undersampling for photoacoustic remote sensing microscopy
Photoacoustics. 2025 Feb 17;42:100697. doi: 10.1016/j.pacs.2025.100697. eCollection 2025 Apr.
ABSTRACT
Imaging speed is critical for photoacoustic microscopy as it affects the capability to capture dynamic biological processes and support real-time clinical applications. Conventional approaches for increasing imaging speed typically involve high-repetition-rate lasers, which pose a risk of thermal damage to samples. Here, we propose a deep-learning-driven optical-scanning undersampling method for photoacoustic remote sensing (PARS) microscopy, accelerating image acquisition while maintaining a constant laser repetition rate and reducing laser dosage. We develop a hybrid Transformer-Convolutional Neural Network, HTC-GAN, to address the challenges of both nonuniform sampling and motion misalignment inherent in optical-scanning undersampling. A mouse ear vasculature image dataset is created through our customized galvanometer-scanned PARS system to train and validate HTC-GAN. The network successfully restores high-quality images from 1/2-undersampled and 1/4-undersampled data, closely approximating the ground truth images. A series of performance experiments demonstrates that HTC-GAN surpasses the basic misalignment compensation algorithm and standalone CNN or Transformer networks in terms of perceptual quality and quantitative metrics. Moreover, three-dimensional imaging results validate the robustness and versatility of the proposed optical-scanning undersampling imaging method across multiscale scanning modes. Our method achieves a fourfold improvement in PARS imaging speed without hardware upgrades, offering a practical solution for enhancing imaging speed in other optical-scanning microscopic systems.
PMID:40062321 | PMC:PMC11889609 | DOI:10.1016/j.pacs.2025.100697
Review of models for estimating 3D human pose using deep learning
PeerJ Comput Sci. 2025 Feb 4;11:e2574. doi: 10.7717/peerj-cs.2574. eCollection 2025.
ABSTRACT
Human pose estimation (HPE) is designed to detect and localize various parts of the human body and represent them as a kinematic structure based on input data like images and videos. Three-dimensional (3D) HPE involves determining the positions of articulated joints in 3D space. Given its wide-ranging applications, HPE has become one of the fastest-growing areas in computer vision and artificial intelligence. This review highlights the latest advances in 3D deep-learning-based HPE models, addressing the major challenges such as accuracy, real-time performance, and data constraints. We assess the most widely used datasets and evaluation metrics, providing a comparison of leading algorithms in terms of precision and computational efficiency in tabular form. The review identifies key applications of HPE in industries like healthcare, security, and entertainment. Our findings suggest that while deep learning models have made significant strides, challenges in handling occlusion, real-time estimation, and generalization remain. This study also outlines future research directions, offering a roadmap for both new and experienced researchers to further develop 3D HPE models using deep learning.
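One concrete example of the evaluation metrics surveyed above is the mean per-joint position error (MPJPE), the most common accuracy measure for 3D HPE; a minimal sketch:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance between
    predicted and ground-truth joints, in the units of the input
    (typically millimetres)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# toy two-joint skeleton: errors of 3 and 4 units average to 3.5
gt   = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
pred = np.array([[0.0, 0.0, 3.0], [1.0, 4.0, 0.0]])
err = mpjpe(pred, gt)
```

Variants in the literature first align the prediction to the ground truth (e.g. Procrustes alignment for PA-MPJPE) before applying the same per-joint average.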
PMID:40062308 | PMC:PMC11888865 | DOI:10.7717/peerj-cs.2574
Weight Differences-Based Multi-level Signal Profiling for Homogeneous and Ultrasensitive Intelligent Bioassays
ACS Nano. 2025 Mar 10. doi: 10.1021/acsnano.5c01436. Online ahead of print.
ABSTRACT
Current high-sensitivity immunoassay protocols often involve complex signal generation designs or rely on sophisticated signal-loading and readout devices, making it challenging to strike a balance between sensitivity and ease of use. In this study, we propose a homogeneous intelligent analysis strategy, called Mata, which uses weight analysis to quantify basic immune signals through signal subunits. We perform nanomagnetic labeling of target capture events on micrometer-scale polystyrene subunits, enabling magnetically regulated kinetic signal expression. Signal subunits are classified through a multi-level signal classifier in synergy with the developed signal weight analysis and deep learning recognition models. Subsequently, the basic immune signals are quantified to achieve ultra-high sensitivity. Mata achieves a detection limit of 0.61 pg/mL in 20 min for interleukin-6, demonstrating sensitivity comparable to conventional digital immunoassays and over 22-fold that of chemiluminescence immunoassay, while reducing detection time by more than 70%. The entire process relies on a homogeneous reaction and can be performed using standard bright-field optical imaging. This intelligent analysis strategy balances high sensitivity with convenient operation and has few hardware requirements, presenting a promising high-sensitivity analysis solution with wide accessibility.
PMID:40059671 | DOI:10.1021/acsnano.5c01436
Multifunctional Terahertz Biodetection Enabled by Resonant Metasurfaces
Adv Mater. 2025 Mar 10:e2418147. doi: 10.1002/adma.202418147. Online ahead of print.
ABSTRACT
Testing diverse biomolecules and observing their dynamic interactions in complex biological systems in a label-free manner is critically important for terahertz (THz) absorption spectroscopy. However, traditionally employed micro/nanophotonic techniques suffer from a narrow operating resonance and strong interference from the absorption bands of polar solutions, seriously hindering reliable, on-demand biosensor integration. Here, we propose a multifunctional THz plasmonic biosensing platform that leverages multiple interfering resonances from quasi-bound states in the continuum, designed to noninvasively and in situ track the temporal evolution of molecules in multiple analyte systems. In contrast to conventional microphotonic sensors, this platform demonstrates a substantially broader operating bandwidth and a reduced footprint, allowing simultaneous detection of diverse molecular vibrations at multiple spectral points through robust near-field interactions. Furthermore, this sensor enables real-time analysis of amino acid absorption as water evaporates, despite water's strongly overlapping absorption bands in the THz range. By utilizing the real-time reflectance method to acquire comprehensive spectro-temporal data, this approach supports the development of a deep neural network that discriminates and predicts the composition and proportions of multiple mixtures, obviating the need for frequency scanning or microfluidic devices. This approach offers innovative viewpoints for exploring biological processes and provides valuable tools for biological analysis.
PMID:40059582 | DOI:10.1002/adma.202418147
Transparency and Representation in Clinical Research Utilizing Artificial Intelligence in Oncology: A Scoping Review
Cancer Med. 2025 Mar;14(5):e70728. doi: 10.1002/cam4.70728.
ABSTRACT
INTRODUCTION: Artificial intelligence (AI) has significant potential to improve health outcomes in oncology. However, as AI utility increases, it is imperative to ensure that these models do not systematize racial and ethnic bias and further perpetuate disparities in health. This scoping review evaluates the transparency of demographic data reporting and diversity of participants included in published clinical studies utilizing AI in oncology.
METHODS: We utilized PubMed to search for peer-reviewed research articles published between 2016 and 2021 with the query "("deep learning" or "machine learning" or "neural network" or "artificial intelligence") and ("neoplas$" or "cancer$" or "tumor$" or "tumour$")." We included clinical trials and original research studies and excluded reviews and meta-analyses. Oncology-related studies that described data sets used in training or validation of the AI models were eligible. Data regarding public reporting of patient demographics were collected, including age, sex at birth, and race. We used descriptive statistics to analyze these data across studies.
RESULTS: Out of 220 total studies, 118 were eligible, and 47 (40%) had at least one described training or validation data set publicly available. Sixty-nine studies (58%) reported age data for patients included in training or validation sets, 60 studies (51%) reported sex, and six studies (5%) reported race. Of the studies that reported race, 70.7%-93.4% of included individuals were White. Only three studies reported racial demographic data with more than two categories (i.e., more granular than "White" vs. "non-White" or "White" vs. "Black").
CONCLUSIONS: We found that only a small minority of the studies analyzed (5%) reported racial and ethnic demographic data. Furthermore, the studies that did report racial demographic data included few non-White patients. Increased transparency in demographic reporting and greater representation in data sets are essential to ensure fair and unbiased clinical integration of AI in oncology.
PMID:40059400 | DOI:10.1002/cam4.70728
Paradigms and methods of noninvasive brain-computer interfaces in motor or communication assistance and rehabilitation: a systematic review
Med Biol Eng Comput. 2025 Mar 10. doi: 10.1007/s11517-025-03340-y. Online ahead of print.
ABSTRACT
Noninvasive brain-computer interfaces (BCIs) have rapidly developed over the past decade. This technology uses magneto-electrical recording or hemodynamic imaging to acquire neurophysiological signals noninvasively, such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). These noninvasive signals have temporal resolutions ranging from milliseconds to seconds and spatial resolutions ranging from centimeters to millimeters. Building on these neuroimaging technologies, various BCI modalities, such as steady-state visual evoked potential (SSVEP), P300, and motor imagery (MI), have been proposed to rehabilitate or assist patients who have lost motor or communication function. This review focuses on the recent development of paradigms, methods, and applications of noninvasive BCIs for motor or communication assistance and rehabilitation. The selection of papers followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, yielding 223 research articles published since 2016. We observe that EEG-based BCIs have gained more research focus due to their low cost and portability, as well as more translational studies in rehabilitation, robotic device control, etc. In the past decade, decoding approaches such as deep learning and source imaging have flourished in BCI research. Still, many challenges remain to be solved, such as designing more convenient electrodes, improving decoding accuracy and efficiency, and designing systems better suited to target patients, before this new technology matures enough to benefit clinical users.
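As an example of the decoding methods discussed above, SSVEP responses are commonly detected with canonical correlation analysis (CCA) against sinusoidal reference templates at each candidate flicker frequency. A minimal sketch on synthetic single-channel data (signal names and parameters are illustrative assumptions, not from any specific study):

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between signal X (samples x channels)
    and reference Y (samples x 2*harmonics), via QR + SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def ssvep_references(freq, fs, n_samples, harmonics=2):
    """Sine/cosine templates at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

# toy single-channel "EEG" flickering at 12 Hz, classified among candidates
fs, n = 250, 500
t = np.arange(n) / fs
eeg = (np.sin(2 * np.pi * 12 * t + 0.4)[:, None]
       + 0.1 * np.random.default_rng(0).standard_normal((n, 1)))
freqs = [8.0, 10.0, 12.0, 15.0]
detected = max(freqs, key=lambda f: cca_max_corr(eeg, ssvep_references(f, fs, n)))
```

The candidate frequency whose template correlates most strongly with the recording is taken as the attended stimulus; production SSVEP decoders extend this with filter banks and multichannel spatial filters.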
PMID:40059266 | DOI:10.1007/s11517-025-03340-y
Automated detection of small hepatocellular carcinoma in cirrhotic livers: applying deep learning to Gd-EOB-DTPA-enhanced MRI
Abdom Radiol (NY). 2025 Mar 10. doi: 10.1007/s00261-025-04853-8. Online ahead of print.
ABSTRACT
OBJECTIVES: To develop an automated deep learning (DL) methodology for detecting small hepatocellular carcinoma (sHCC) in cirrhotic livers, leveraging Gd-EOB-DTPA-enhanced MRI.
METHODS: This retrospective study included 120 patients with cirrhosis, comprising 78 patients with sHCC and 42 patients with non-HCC cirrhosis, selected through stratified sampling. The dataset was divided into training and testing sets (8:2 ratio). An nnU-Net, which exhibits enhanced capabilities in segmenting small objects, was used for segmentation; segmentation performance was assessed using the Dice coefficient. The ability to distinguish between sHCC and non-HCC lesions was evaluated through ROC curves, AUC values, and P values. Case-level detection performance for sHCC was evaluated through several metrics: accuracy, sensitivity, and specificity.
RESULTS: The AUCs for distinguishing sHCC patients from non-HCC patients at the lesion level were 0.967 and 0.864 for the training and test cohorts, respectively, both of which were statistically significant at P < 0.001. At the case level, distinguishing between patients with sHCC and patients with cirrhosis resulted in accuracies of 92.5% (95% CI, 85.1-96.9%) and 81.5% (95% CI, 61.9-93.7%), sensitivities of 95.1% (95% CI, 86.3-99.0%) and 88.2% (95% CI, 63.6-98.5%), and specificities of 87.5% (95% CI, 71.0-96.5%) and 70% (95% CI, 34.8-93.3%) for the training and test sets, respectively.
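The case-level metrics reported above follow from a standard 2x2 confusion matrix; the sketch below uses hypothetical counts, not figures taken from the paper:

```python
def case_level_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on positive (e.g. sHCC) cases
        "specificity": tn / (tn + fp),   # recall on negative (e.g. non-HCC) cases
    }

# hypothetical counts for a small test cohort
m = case_level_metrics(tp=15, fp=3, tn=7, fn=2)
```

Confidence intervals like those quoted in the abstract are then computed from these proportions with binomial (e.g. Clopper-Pearson) methods.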
CONCLUSION: The DL methodology demonstrated its efficacy in detecting sHCC within a cohort of patients with cirrhosis.
PMID:40059243 | DOI:10.1007/s00261-025-04853-8
Vision Mamba and xLSTM-UNet for medical image segmentation
Sci Rep. 2025 Mar 10;15(1):8163. doi: 10.1038/s41598-025-88967-5.
ABSTRACT
Deep learning-based medical image segmentation methods are generally divided into convolutional neural networks (CNNs) and Transformer-based models. Traditional CNNs are limited by their receptive field, making it challenging to capture long-range dependencies. While Transformers excel at modeling global information, their high computational complexity restricts their practical application in clinical scenarios. To address these limitations, this study introduces VMAXL-UNet, a novel segmentation network that integrates Structured State Space Models (SSM) and lightweight LSTMs (xLSTM). The network incorporates Visual State Space (VSS) and ViL modules in the encoder to efficiently fuse local boundary details with global semantic context. The VSS module leverages SSM to capture long-range dependencies and extract critical features from distant regions. Meanwhile, the ViL module employs a gating mechanism to enhance the integration of local and global features, thereby improving segmentation accuracy and robustness. Experiments on datasets such as ISIC17, ISIC18, CVC-ClinicDB, and Kvasir demonstrate that VMAXL-UNet significantly outperforms traditional CNNs and Transformer-based models in capturing lesion boundaries and their distant correlations. These results highlight the model's superior performance and provide a promising approach for efficient segmentation in complex medical imaging scenarios.
PMID:40059111 | DOI:10.1038/s41598-025-88967-5
Virtual Monochromatic Imaging of Half-Iodine-Load, Contrast-Enhanced Computed Tomography with Deep Learning Image Reconstruction in Patients with Renal Insufficiency: A Clinical Pilot Study
J Nippon Med Sch. 2025;92(1):69-79. doi: 10.1272/jnms.JNMS.2025_92-112.
ABSTRACT
BACKGROUND: We retrospectively examined image quality (IQ) of thin-slice virtual monochromatic imaging (VMI) of half-iodine-load, abdominopelvic, contrast-enhanced CT (CECT) by dual-energy CT (DECT) with deep learning image reconstruction (DLIR).
METHODS: In 28 oncology patients with moderate-to-severe renal impairment undergoing half-iodine-load (300 mgI/kg) CECT by DECT during the nephrographic phase, we reconstructed VMI at 40-70 keV with a slice thickness of 0.625 mm using filtered back-projection (FBP), hybrid iterative reconstruction (HIR), and DLIR; measured contrast-noise ratio (CNR) of the liver, spleen, aorta, portal vein, and prostate/uterus; and determined the optimal keV to achieve the maximal CNR. At the optimal keV, two independent radiologists compared each organ's CNR and subjective IQ scores among FBP, HIR, and DLIR to subjectively grade image noise, contrast, sharpness, delineation of small structures, and overall IQ.
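The contrast-to-noise ratio measured above has several definitions in the CT literature; one common form divides the organ-background mean difference by the background standard deviation. A minimal sketch with illustrative ROI values, not the study's measurement protocol:

```python
import numpy as np

def cnr(roi_organ, roi_background):
    """Contrast-to-noise ratio between an organ ROI and a background ROI,
    using the background standard deviation as the noise estimate."""
    organ = np.asarray(roi_organ, dtype=float)
    bg = np.asarray(roi_background, dtype=float)
    return float(abs(organ.mean() - bg.mean()) / bg.std())

# hypothetical HU samples: enhancing organ vs. noisy background
value = cnr([110, 110, 110, 110], [50, 60, 50, 60])
```

Lower-keV virtual monochromatic images raise iodine contrast (the numerator) but also noise (the denominator), which is why the reconstruction algorithm matters for the net CNR.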
RESULTS: CNR of each organ increased continuously from 70 to 40 keV using FBP, HIR, and DLIR. At 40 keV, CNR of the prostate/uterus was significantly higher with DLIR than with FBP; however, CNR was similar between FBP and HIR and between HIR and DLIR. The CNR of all other organs increased significantly from FBP to HIR to DLIR (P < 0.05). All IQ scores significantly improved from FBP to HIR to DLIR (P < 0.05) and were acceptable in all patients with DLIR only.
CONCLUSIONS: The combination of 40 keV and DLIR offers the maximal CNR and a subjectively acceptable IQ for thin-slice VMI of half-iodine-load CECT.
PMID:40058838 | DOI:10.1272/jnms.JNMS.2025_92-112
AI-driven approaches for air pollution modeling: A comprehensive systematic review
Environ Pollut. 2025 Mar 7:125937. doi: 10.1016/j.envpol.2025.125937. Online ahead of print.
ABSTRACT
In recent years, air quality has become a global issue with the rise of harmful pollutants and their effects on climate change. Urban areas are especially affected by air pollution, resulting in a deterioration of the environment and a surge in health complications. Numerous studies have sought to accurately predict future pollutant concentration levels utilising different methods. This paper introduces the current physical models for air quality prediction and conducts an extensive systematic literature review on Machine Learning and Deep Learning techniques for predicting pollutants. This work compares different methodologies and techniques by grouping studies that utilise similar approaches and comparing them. Furthermore, a distinction is made between temporal and spatiotemporal models to understand and highlight how both approaches impact predictions of future air pollutant concentration levels. The review differs from similar works in that it focuses not only on comparing models and approaches but also on analysing how the use of external features, such as meteorological data, traffic information, and land usage, affects pollutant levels and model accuracy in air quality forecasting. Performance and limitations are explored for both Machine and Deep Learning approaches, and the work offers a discussion of their comparison and possible future developments in this research space. This review highlights how Deep Learning models tend to be more suitable for forecasting problems due to their feature-representation and spatio-temporal-correlation abilities, and it provides directions for further work, from model utilisation to feature inclusion.
PMID:40058557 | DOI:10.1016/j.envpol.2025.125937
Deep-learning analysis of greenspace and metabolic syndrome: a street-view and remote-sensing approach
Environ Res. 2025 Mar 7:121349. doi: 10.1016/j.envres.2025.121349. Online ahead of print.
ABSTRACT
Evidence linking greenspace exposure to metabolic syndrome (MetS) remains sparse and inconsistent. This exploratory study evaluates the relationship of the green visibility index (GVI) and the normalized difference vegetation index (NDVI) with MetS prevalence, and quantifies the potential reduction in MetS burden from increased greenspace exposure. Participants were selected from the baseline survey of the Wuhan Chronic Disease Cohort. Street-view imagery was procured within buffer zones ranging from 50 to 500 m surrounding participants' residences. GVI was extracted from street-view images using a convolutional neural network model trained on CityScapes, while NDVI was ascertained from satellite remote sensing data. We employed generalized linear mixed-effects models to assess the associations between greenspace and the risk of MetS. Additionally, a restricted cubic spline function was applied to generate exposure-response curves. Leveraging a counterfactual causal inference framework, we quantified the potential reduction in MetS cases consequent to an elevation in NDVI levels within Wuhan. Within the 150-m buffer zone, each 0.1-unit increase in GVI and NDVI corresponded to a 13% and 31% decline, respectively, in the odds of MetS in the fully adjusted regression models. A negative non-linear relationship between GVI and MetS was observed when the GVI level exceeded 0.209, and a negative linear association for NDVI when its level exceeded 0.299. Assuming causality, 74,183 cases of MetS could be avoided by achieving the NDVI greenness threshold, accounting for 8.16% of total MetS prevalence in 2019. Our findings offer a compelling justification for the integration of greening policies in initiatives aimed at promoting metabolic health.
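For reference, the NDVI used above is a simple band ratio computed from satellite imagery, while GVI is the vegetation-pixel fraction of a segmented street-view image; a minimal sketch with hypothetical inputs and label conventions:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from near-infrared and red
    reflectance; values near +1 indicate dense vegetation."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def gvi(segmentation_mask, vegetation_label=1):
    """Green visibility index: fraction of street-view pixels the
    segmentation model labels as vegetation (label id is an assumption)."""
    mask = np.asarray(segmentation_mask)
    return float((mask == vegetation_label).mean())

v = ndvi(0.5, 0.1)                 # vegetated pixel: NDVI ~ 0.67
g = gvi([[1, 0], [1, 1]])          # 3 of 4 pixels are vegetation
```

Both indices are then averaged within each residential buffer (50-500 m) before entering the regression models.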
PMID:40058546 | DOI:10.1016/j.envres.2025.121349
Harnessing machine learning for predicting successful weaning from mechanical ventilation: A systematic review
Aust Crit Care. 2025 Mar 8;38(3):101203. doi: 10.1016/j.aucc.2025.101203. Online ahead of print.
ABSTRACT
BACKGROUND: Machine learning (ML) models represent advanced computational approaches with increasing application in predicting successful weaning from mechanical ventilation (MV). Whilst ML itself has a long history, its application to MV weaning outcomes has emerged more recently. In this systematic review, we assessed the effects of ML on the prediction of successful weaning outcomes amongst adult patients undergoing MV.
METHODS: PubMed, EMBASE, Scopus, Web of Science, and Google Scholar electronic databases were searched up to May 2024. In addition, ACM Digital Library and IEEE Xplore databases were searched. We included peer-reviewed studies examining ML models for the prediction of successful MV in adult patients. We used a modified version of the Joanna Briggs Institute checklist for quality assessment.
RESULTS: Eleven studies (n = 18 336) were included. Boosting algorithms, including extreme gradient boosting (XGBoost) and Light Gradient-Boosting Machine, were amongst the most frequently used methods, followed by random forest, multilayer perceptron, logistic regression, artificial neural networks, and convolutional neural networks, a deep learning model. The most common cross-validation methods were five-fold and 10-fold cross-validation. Model performance varied, with artificial neural network accuracy ranging from 77% to 80%, the multilayer perceptron achieving 87% accuracy and 94% precision, and the convolutional neural network showing areas under the curve of 91% and 94%. XGBoost generally outperformed other models in area-under-the-curve comparisons. Quality assessment indicated that almost all studies were of high quality, with seven out of 10 studies receiving full scores.
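The five-fold cross-validation mentioned above partitions the sample into disjoint validation folds, training on the remainder each time; a minimal sketch of the index bookkeeping:

```python
import random

def k_fold_indices(n_samples, k=5, seed=42):
    """Shuffled k-fold split: yields (train, validation) index lists."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]       # k near-equal folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(k_fold_indices(20, k=5))
```

For clinical outcome data like weaning success, a stratified variant that preserves the class ratio in each fold is usually preferred.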
CONCLUSIONS: ML models effectively predicted weaning outcomes in adult patients undergoing MV, with XGBoost outperforming other models. However, the absence of studies utilising newer architectures, such as transformer models, highlights an opportunity for further exploration and refinement in this field.
PMID:40058181 | DOI:10.1016/j.aucc.2025.101203
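The five- and 10-fold cross-validation the review highlights can be sketched in a model-agnostic way. The `fit` and `score` callables below are placeholders for any of the reviewed algorithms, not a reconstruction of any study's pipeline:

```python
import random

def k_fold_indices(n: int, k: int, seed: int = 0) -> list[list[int]]:
    """Split n sample indices into k shuffled, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(fit, score, X, y, k: int = 5) -> float:
    """Mean held-out score over k folds.

    `fit(X_train, y_train)` returns a fitted model;
    `score(model, X_test, y_test)` returns a scalar metric.
    """
    folds = k_fold_indices(len(X), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        model = fit([X[j] for j in train_idx], [y[j] for j in train_idx])
        scores.append(score(model, [X[j] for j in test_idx],
                            [y[j] for j in test_idx]))
    return sum(scores) / k
```

Swapping `k=5` for `k=10` reproduces the two validation schemes the included studies most often used.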
Research on the performance of the SegFormer model with fusion of edge feature extraction for metal corrosion detection
Sci Rep. 2025 Mar 8;15(1):8134. doi: 10.1038/s41598-025-92531-6.
ABSTRACT
To address the challenge that existing deep learning models face in accurately segmenting metal corrosion boundaries and small corrosion areas, this paper proposes a SegFormer-based metal corrosion detection method built on parallel extraction of edge features. Firstly, to resolve the boundary ambiguity of metal corrosion images, an edge-feature extraction module (EEM) is introduced to construct a spatial branch of the network that helps the model extract shallow details and edge information from the images. Secondly, to mitigate the loss of target feature information during decoder reconstruction, this paper adopts a gradual upsampling decoding layer design and introduces a feature fusion module (FFM) to achieve hierarchical, progressive feature fusion, thereby enhancing the detection of small corroded areas. Experimental results show that the proposed method outperforms other semantic segmentation models, achieving an accuracy of 86.56% on the public metal surface corrosion image dataset and a mean intersection over union (mIoU) of 91.41% on the BSData defect dataset. On a self-built tubing corrosion pit image dataset, the model uses only 3.60 MB of parameters to achieve an accuracy of 96.52%, confirming the effectiveness and performance advantages of the proposed method in practical applications.
PMID:40057599 | DOI:10.1038/s41598-025-92531-6
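The abstract does not specify the EEM's internals; as an illustration only, a classical Sobel gradient captures the kind of shallow edge information such a spatial branch supplies alongside the SegFormer backbone:

```python
def sobel_edges(img: list[list[float]]) -> list[list[float]]:
    """Sobel gradient magnitude for a 2-D grayscale image (list of lists);
    a stand-in for the shallow edge features an edge-extraction branch learns."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge yields strong responses along the boundary columns.
img = [[0] * 4 + [1] * 4 for _ in range(8)]
edges = sobel_edges(img)
```

In the paper's architecture a learned module replaces this fixed filter, but the role (a parallel spatial branch feeding boundary detail into the fusion stage) is the same.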
A swarm-optimization based fusion model of sentiment analysis for cryptocurrency price prediction
Sci Rep. 2025 Mar 8;15(1):8119. doi: 10.1038/s41598-025-92563-y.
ABSTRACT
Social media has engaged society for decades due to its reciprocal, real-life nature, influencing almost all societal entities, including governments, academia, industry, health, and finance. Social networks generate unstructured information about brands, political issues, cryptocurrencies, and global pandemics. The major challenge is translating this information into reliable consumer opinion, as it contains jargon, abbreviations, and reference links to previous content. Several ensemble models have been introduced to mine the enormous volume of noisy content on social platforms, but these models lack predictive power and generalizability for social sentiment analysis. Hence, an optimized stacked Long Short-Term Memory (LSTM)-based sentiment analysis model is proposed for cryptocurrency price prediction. The model can capture the relationships of latent contextual semantic and co-occurrence statistical features between phrases in a sentence. Additionally, the proposed model comprises multiple LSTM layers, each optimized with the Particle Swarm Optimization (PSO) technique to learn with the best hyperparameters. The model's efficiency is measured in terms of the confusion matrix, weighted F1-score, weighted precision, weighted recall, training accuracy, and testing accuracy, and comparative results reveal that the optimized stacked LSTM outperformed competing models. The objective of the proposed model is to provide a benchmark sentiment analysis model for predicting cryptocurrency prices that will also be useful for other societal sentiment predictions. Notably, the model can process multilingual and cross-platform social media data, achieved by combining LSTMs with multilingual embeddings, fine-tuning, and effective preprocessing to provide accurate and robust sentiment analysis across diverse languages, platforms, and communication styles.
PMID:40057585 | DOI:10.1038/s41598-025-92563-y
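The PSO-based hyperparameter tuning described above can be illustrated with a minimal particle swarm over a toy objective. The quadratic objective below merely stands in for an LSTM's validation loss over, say, (learning rate, dropout); none of this reflects the paper's actual search space:

```python
import random

def pso_minimize(f, bounds, n_particles=12, iters=40, seed=1):
    """Minimal particle swarm optimizer over a box-constrained search space,
    the kind of search used to tune per-layer LSTM hyperparameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best so far
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for validation loss, minimized at (0.3, 0.6).
best, loss = pso_minimize(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.6) ** 2,
                          bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In practice `f` would train a stacked LSTM with the candidate hyperparameters and return its validation loss, which makes each evaluation expensive and motivates the modest swarm sizes used in such studies.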
A large-scale open image dataset for deep learning-enabled intelligent sorting and analyzing of raw coal
Sci Data. 2025 Mar 8;12(1):403. doi: 10.1038/s41597-025-04719-0.
ABSTRACT
Under the strategic objectives of carbon peaking and carbon neutrality, energy transition driven by new quality productive forces has emerged as a central theme in China's energy development. Within this transition, the intelligent sorting and analysis of raw coal using deep learning constitute a pivotal technical process. However, the progress of intelligent coal preparation in China has been constrained by the absence of accurate, large-scale data. To address this gap, this study introduces DsCGF, a large-scale, open-source raw coal image dataset. Over the past five years, extensive raw coal image samples were systematically collected and meticulously annotated from three representative mining regions in China, yielding a dataset of over 270,000 visible-light images. The images are annotated at multiple levels for three primary categories (coal, gangue, and foreign objects) and are designed for three core computer vision tasks: image classification, object detection, and instance segmentation. Comprehensive evaluation results indicate that DsCGF can effectively support further research into the intelligent sorting of raw coal.
PMID:40057526 | DOI:10.1038/s41597-025-04719-0