Deep learning
An optimized deep learning based hybrid model for prediction of daily average global solar irradiance using CNN-SLSTM architecture
Sci Rep. 2025 Mar 28;15(1):10761. doi: 10.1038/s41598-025-95118-3.
ABSTRACT
Global horizontal irradiance prediction is essential for balancing supply and demand and minimizing energy costs when integrating solar photovoltaic systems into the electric power grid. However, its stochastic nature makes accurate prediction difficult. This study develops a hybrid deep learning model that integrates a Convolutional Neural Network and a Stacked Long Short-Term Memory network (CNN-SLSTM) to predict daily average global solar irradiance using real-time meteorological parameters and daily solar irradiance data recorded at the study site. First, 14 significant relevant features were selected from the dataset using recursive feature elimination. The hyperparameters of the developed models were optimized using a metaheuristic, the Slime Mould Optimization algorithm. Model performance was evaluated using tenfold cross-validation. Using statistical performance metrics, the predictive performance of the developed model was compared with Gated Recurrent Unit, LSTM, CNN-LSTM, and SLSTM networks, as well as machine learning regressors such as Support Vector Machine, Decision Tree, and Random Forest. In the experiments, the developed CNN-SLSTM model outperformed the other models, with an MSE, R2, and Adj_R2 of 0.0359, 0.9790, and 0.9789, respectively.
PMID:40155655 | DOI:10.1038/s41598-025-95118-3
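The abstract above reports MSE, R2, and adjusted R2 as its headline metrics. As an illustrative sketch (not the paper's code), these regression metrics can be computed as follows, where `n_features` would be the number of selected predictors (14 after recursive feature elimination in the study):

```python
import numpy as np

def regression_metrics(y_true, y_pred, n_features):
    """MSE, R^2, and adjusted R^2 for a regression model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = y_true.size
    mse = np.mean((y_true - y_pred) ** 2)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    # Adjusted R^2 penalizes R^2 for the number of predictors used
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)
    return mse, r2, adj_r2
```

Adjusted R2 is always at most R2 for a fitted model, which matches the abstract's pairing of 0.9790 and 0.9789.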
UrbanEV: An Open Benchmark Dataset for Urban Electric Vehicle Charging Demand Prediction
Sci Data. 2025 Mar 28;12(1):523. doi: 10.1038/s41597-025-04874-4.
ABSTRACT
The recent surge in electric vehicles (EVs), driven by a collective push to enhance global environmental sustainability, has underscored the significance of exploring EV charging prediction. To catalyze further research in this domain, we introduce UrbanEV - an open dataset showcasing EV charging space availability and electricity consumption in a pioneering city for vehicle electrification, namely Shenzhen, China. UrbanEV offers a rich repository of charging data (i.e., charging occupancy, duration, volume, and price) captured at hourly intervals across an extensive six-month span for over 20,000 individual charging stations. Beyond these core attributes, the dataset also encompasses diverse influencing factors like weather conditions and spatial proximity. Comprehensive experiments have been conducted to showcase the predictive capabilities of various models, including statistical, deep learning, and transformer-based approaches, using the UrbanEV dataset. This dataset is poised to propel advancements in EV charging prediction and management, positioning itself as a benchmark resource within this burgeoning field.
PMID:40155635 | DOI:10.1038/s41597-025-04874-4
Correction: Detection and recognition of foreign objects in Pu-erh Sun-dried green tea using an improved YOLOv8 based on deep learning
PLoS One. 2025 Mar 28;20(3):e0321409. doi: 10.1371/journal.pone.0321409. eCollection 2025.
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pone.0312112.].
PMID:40153338 | DOI:10.1371/journal.pone.0321409
Towards Unified Deep Image Deraining: A Survey and A New Benchmark
IEEE Trans Pattern Anal Mach Intell. 2025 Mar 28;PP. doi: 10.1109/TPAMI.2025.3556133. Online ahead of print.
ABSTRACT
Recent years have witnessed significant advances in image deraining due to progress in effective image priors and deep learning models. As each deraining approach has its own settings (e.g., training and test datasets, evaluation criteria), fairly and comprehensively evaluating existing approaches is not a trivial task. Although existing surveys aim to review image deraining approaches thoroughly, few of them unify evaluation settings to examine deraining capability and practicality. In this paper, we provide a comprehensive review of existing image deraining methods and a unified evaluation setting for assessing their performance. Furthermore, we construct a new high-quality benchmark named HQ-RAIN, consisting of 5,000 paired high-resolution synthetic images with high harmony and realism, to conduct extensive evaluations. We also discuss existing challenges and highlight several future research opportunities worth exploring. To facilitate reproduction and tracking of the latest deraining technologies for general users, we build an online platform that provides an off-the-shelf toolkit, including large-scale performance evaluation. This online platform and the proposed benchmark are publicly available at http://www.deraining.tech/.
PMID:40153286 | DOI:10.1109/TPAMI.2025.3556133
A Flexible Spatio-Temporal Architecture Design for Artifact Removal in EEG with Arbitrary Channel-Settings
IEEE J Biomed Health Inform. 2025 Mar 28;PP. doi: 10.1109/JBHI.2025.3555813. Online ahead of print.
ABSTRACT
Electroencephalography (EEG) data are easily contaminated by artifacts from various sources, significantly affecting subsequent analyses in neuroscience and clinical applications. Therefore, effective artifact removal is a key step in EEG preprocessing. While current deep learning methods have demonstrated notable efficacy in EEG denoising, single-channel approaches primarily focus on temporal features and neglect inter-channel correlations, whereas multi-channel methods mainly prioritize spatial features and often overlook the unique temporal dependencies of individual channels. A common limitation of both is their strict requirements on the input channel setting, which restricts their practical applicability. To address these issues, we design a flexible architecture named the Artifact removal Spatio-Temporal Integration Network (ASTI-Net), a dual-branch denoising model capable of handling arbitrary EEG channel settings. ASTI-Net utilizes spatio-temporal attention weighting with dual branches that capture inter-channel spatial characteristics and intra-channel temporal dependencies. Its architecture incorporates deformable convolutional operations and channel-wise temporal processing, accommodating varying numbers of EEG channels and enhancing applicability across diverse clinical and research settings. By integrating features from both branches through a fusion reconstruction module, ASTI-Net effectively restores clean multi-channel EEG. Extensive evaluation on two semi-simulated datasets, along with qualitative assessment on real task-state EEG data, validates that ASTI-Net outperforms existing artifact removal methods.
PMID:40153283 | DOI:10.1109/JBHI.2025.3555813
Develop a Deep-Learning Model to Predict Cancer Immunotherapy Response Using In-Born Genomes
IEEE J Biomed Health Inform. 2025 Mar 28;PP. doi: 10.1109/JBHI.2025.3555596. Online ahead of print.
ABSTRACT
The emergence of immune checkpoint inhibitors (ICIs) has significantly advanced cancer treatment. However, only 15-30% of cancer patients respond to ICI treatment, which stimulates and enhances host immunity to eliminate tumor cells. ICI treatment is very expensive and has potential adverse reactions; therefore, it is crucial to develop a method to accurately and rapidly assess a patient's suitability before ICI treatment. We compiled germline whole-exome sequencing (WES) data from 37 melanoma patients who were treated with ICIs and sequenced in our lab previously, together with WES data from another 700 ICI-treated cancer patients in the public domain. Using these data, we proposed a novel double-channel attention neural network (DANN) model to predict cancer ICI response and validated the predictions. DANN achieved a mean accuracy and AUC of 0.95 and 0.98, respectively, outperforming traditional machine learning methods. Enrichment analysis of the DANN-identified genes indicated that cancer patients' in-born genomic variants may affect the host immune system in a wide-ranging manner and thereby influence ICI response. Finally, we found that a set of 12 genes bearing genomic variants was significantly associated with patient survival after ICI treatment.
PMID:40153282 | DOI:10.1109/JBHI.2025.3555596
Deep learning in the discovery of antiviral peptides and peptidomimetics: databases and prediction tools
Mol Divers. 2025 Mar 28. doi: 10.1007/s11030-025-11173-y. Online ahead of print.
ABSTRACT
Antiviral peptides (AVPs) represent a novel and promising therapeutic alternative to conventional antiviral treatments due to their broad-spectrum activity, high specificity, and low toxicity. The emergence of zoonotic viruses such as Zika, Ebola, and SARS-CoV-2 has accelerated AVP research, driven by advances in data availability and artificial intelligence (AI). This review focuses on the development of AVP databases, the physicochemical properties of AVPs, and predictive tools that utilize machine learning for AVP discovery. Machine learning plays a pivotal role in advancing antiviral peptides and peptidomimetics, particularly through specialized databases such as DRAVP, AVPdb, and DBAASP. These resources facilitate AVP characterization but face limitations, including small datasets, incomplete annotations, and inadequate integration with multi-omics data. The antiviral efficacy of AVPs is closely linked to their physicochemical properties, such as hydrophobicity and amphipathic α-helical structures, which enable viral membrane disruption and specific target interactions. Computational prediction tools employing machine learning and deep learning have significantly advanced AVP discovery. However, challenges such as overfitting, limited experimental validation, and a lack of mechanistic insight hinder clinical translation. Future advancements should focus on improved validation frameworks, integration of in vivo data, and the development of interpretable models to elucidate AVP mechanisms. Expanding predictive models to address multi-target interactions and incorporating complex biological environments will be crucial for translating AVPs into effective clinical therapies.
PMID:40153158 | DOI:10.1007/s11030-025-11173-y
Deep learning-based prediction of cervical canal stenosis from mid-sagittal T2-weighted MRI
Skeletal Radiol. 2025 Mar 28. doi: 10.1007/s00256-025-04917-2. Online ahead of print.
ABSTRACT
OBJECTIVE: This study aims to establish a large degenerative cervical myelopathy cohort and develop deep learning models for predicting cervical canal stenosis from sagittal T2-weighted MRI.
MATERIALS AND METHODS: Data was collected retrospectively from patients who underwent a cervical spine MRI from January 2007 to December 2022 at a single institution. Ground truth labels for cervical canal stenosis were obtained from sagittal T2-weighted MRI using Kang's grade, a four-level scoring system that classifies stenosis with the degree of subarachnoid space obliteration and cord indentation. ResNet50, VGG16, MobileNetV3, and EfficientNetV2 were trained using threefold cross-validation, and the models exhibiting the largest area under the receiver operating characteristic curve (AUC) were selected to produce the ensemble model. Gradient-weighted class activation mapping was adopted for qualitative assessment. Models that incorporate demographic features were trained, and their corresponding AUCs on the test set were evaluated.
RESULTS: Of 8676 patients, 7645 were eligible for developing deep learning models, where 6880 (mean age, 56.0 ± 14.3 years, 3480 men) were used for training while 765 (mean age, 56.5 ± 14.4 years, 386 men) were set aside for testing. The ensemble model exhibited the largest AUC of 0.95 (0.94-0.97). Accuracy was 0.875 (0.851-0.898), sensitivity was 0.885 (0.855-0.915), and specificity was 0.861 (0.824-0.898). Qualitative analyses demonstrated that the models accurately pinpoint radiologic findings suggestive of cervical canal stenosis and myelopathy. Incorporation of demographic features did not result in a gain of AUC.
CONCLUSION: We have developed deep learning models from a large degenerative cervical myelopathy cohort and thoroughly explored their robustness and explainability.
PMID:40152984 | DOI:10.1007/s00256-025-04917-2
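The AUC, accuracy, sensitivity, and specificity reported in the cervical stenosis study can all be derived from predicted scores and ground-truth labels. A minimal sketch, assuming NumPy, a single binary task, both classes present, and untied scores (the rank-sum AUC below omits tie correction); this is illustrative, not the study's evaluation code:

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity, and AUC for a binary classifier."""
    y_true = np.asarray(y_true, dtype=int)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = (y_score >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    acc = (tp + tn) / y_true.size
    sens = tp / (tp + fn)   # fraction of stenosis-positive cases detected
    spec = tn / (tn + fp)   # fraction of negative cases correctly rejected
    # AUC via the rank-sum (Mann-Whitney U) identity, assuming untied scores
    ranks = np.empty_like(y_score)
    ranks[np.argsort(y_score)] = np.arange(1, y_score.size + 1)
    n_pos = int(y_true.sum())
    n_neg = y_true.size - n_pos
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return acc, sens, spec, auc
```

The rank-sum form avoids explicitly sweeping thresholds: AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one.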
Multichannel Contribution Aware Network for Prostate Cancer Grading in Histopathology Images
J Comput Biol. 2025 Mar 28. doi: 10.1089/cmb.2024.0872. Online ahead of print.
ABSTRACT
Gleason grading of prostate histopathology images is widely used by pathologists for diagnosis and prognosis. The spatial characteristics of cells and tissues captured in stained images are essential for accurate grading of prostate cancer. Although considerable efforts have been made to train grading models, they mainly rely on basic preprocessed images and largely overlook the multiple staining aspects of histopathology images that are crucial for capturing spatial information. This article proposes a novel deep learning model for automated prostate cancer grading that integrates several staining characteristics. Image deconvolution is applied to separate the staining channels of the histopathology image, enabling the model to identify effective feature information. A channel and pixel attention-based encoder is designed to extract cell and tissue structure information from the multiple staining channel images. We propose a dual-branch decoder, in which a classical convolutional neural network branch specializes in local feature extraction and a Transformer branch focuses on global feature extraction, to effectively fuse and refine features from the different staining channels. Taking full advantage of the complementarity of the staining channels makes the features more compact and discriminative, leading to precise grading. Extensive experiments on relevant public datasets demonstrate the effectiveness and scalability of the proposed model.
PMID:40152893 | DOI:10.1089/cmb.2024.0872
Multimodal Artificial Intelligence Models Predicting Glaucoma Progression Using Electronic Health Records and Retinal Nerve Fiber Layer Scans
Transl Vis Sci Technol. 2025 Mar 3;14(3):27. doi: 10.1167/tvst.14.3.27.
ABSTRACT
PURPOSE: The purpose of this study was to develop models that predict which patients with glaucoma will progress to require surgery, combining structured data from electronic health records (EHRs) and retinal nerve fiber layer optical coherence tomography (RNFL OCT) scans.
METHODS: EHR data (demographics and clinical eye examinations) and RNFL OCT scans were identified for patients with glaucoma from an academic center (2008-2023). Comparing the novel TabNet deep learning architecture to a baseline XGBoost model, we trained and evaluated single modality models using either EHR or RNFL features, as well as fusion models combining both EHR and RNFL features as inputs, to predict glaucoma surgery within 12 months (binary).
RESULTS: A total of 1472 patients with glaucoma were included in this study, of whom 29.9% (N = 367) progressed to glaucoma surgery. The TabNet fusion model achieved the highest performance on the test set with an area under the receiver operating characteristic curve (AUROC) of 0.832, compared to the XGBoost fusion model (AUROC = 0.747). EHR-only models achieved AUROCs of 0.764 and 0.720 for the deep learning and XGBoost models, respectively. RNFL-only models achieved AUROCs of 0.624 and 0.633 for the deep learning and XGBoost models, respectively.
CONCLUSIONS: Fusion models that integrate RNFL with EHR data outperform models utilizing only one data type in predicting glaucoma progression. The deep learning TabNet architecture demonstrated superior performance to traditional XGBoost models.
TRANSLATIONAL RELEVANCE: Prediction models that utilize the wealth of structured clinical and imaging data to predict glaucoma progression could form the basis of future clinical decision support tools to personalize glaucoma care.
PMID:40152766 | DOI:10.1167/tvst.14.3.27
Automated Measurements of Spinal Parameters for Scoliosis Using Deep Learning
Spine (Phila Pa 1976). 2025 Mar 28. doi: 10.1097/BRS.0000000000005280. Online ahead of print.
ABSTRACT
STUDY DESIGN: Retrospective single-institution study.
OBJECTIVE: To develop and validate an automated convolutional neural network (CNN) to measure the Cobb angle, T1 tilt angle, coronal balance, clavicular angle, height of the shoulders, T5-T12 Cobb angle, and sagittal balance for accurate scoliosis diagnosis.
SUMMARY OF BACKGROUND DATA: Scoliosis, characterized by a Cobb angle >10°, requires accurate and reliable measurements to guide treatment. Traditional manual measurements are time-consuming and have low inter- and intra-observer reliability. While some automated tools exist, they often require manual intervention and focus primarily on the Cobb angle.
METHODS: In this study, we utilized four datasets comprising the anterior-posterior (AP) and lateral radiographs of 1682 patients with scoliosis. The CNN includes coarse segmentation, landmark localization, and fine segmentation. The measurements were evaluated using the dice coefficient, mean absolute error (MAE), and percentage of correct key-points (PCK) with a 3-mm threshold. An internal testing set, including 87 adolescent (7-16 years) and 26 older adult patients (≥60 years), was used to evaluate the agreement between automated and manual measurements.
RESULTS: The automated measurements by the CNN achieved high mean dice coefficients (>0.90), a PCK of 89.7%-93.7%, and an MAE for vertebral corners of 2.87 mm-3.62 mm on AP radiographs. Agreement between automated and manual measurements on the internal testing set was acceptable, with an MAE of 0.26 mm/°-0.51 mm/° for the adolescent subgroup and 0.29 mm/°-4.93 mm/° for the older adult subgroup on AP radiographs. The MAE for the T5-T12 Cobb angle and sagittal balance, on lateral radiographs, was 1.03° and 0.84 mm, respectively, in adolescents, and 4.60° and 9.41 mm, respectively, in older adults. Automated measurement time was significantly shorter than manual measurement.
CONCLUSION: The deep learning automated system provides rapid, accurate, and reliable measurements for scoliosis diagnosis, which could improve clinical workflow efficiency and guide scoliosis treatment.
THE LEVEL OF EVIDENCE OF THIS STUDY: Level 3.
PMID:40152470 | DOI:10.1097/BRS.0000000000005280
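As an illustration of the kind of measurement this pipeline automates, the Cobb angle is the angle between the endplate lines of the two end vertebrae, which the landmark localization stage identifies as coordinate pairs. A minimal geometric sketch (the landmark format here is a hypothetical illustration, not the paper's implementation):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate lines on an AP radiograph.

    Each endplate is given as a pair of landmark points ((x1, y1), (x2, y2)).
    """
    def slope_angle(p, q):
        # orientation of the line through landmarks p and q, in radians
        return math.atan2(q[1] - p[1], q[0] - p[0])

    diff = math.degrees(slope_angle(*upper_endplate) - slope_angle(*lower_endplate))
    ang = abs(diff) % 180.0
    return min(ang, 180.0 - ang)  # acute angle between the two lines
```

Against the >10° scoliosis criterion quoted in the abstract, a returned value of, say, 15° would classify the curve as scoliotic.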
Spider-Inspired Ion Gel Sensor for Dual-Mode Detection of Force and Speed via Magnetic Induction
ACS Sens. 2025 Mar 28. doi: 10.1021/acssensors.5c00403. Online ahead of print.
ABSTRACT
In the field of flexible sensors, developing sensors that combine multifunctionality, high sensitivity, a wide detection range, and excellent durability remains a significant challenge. This paper presents the design and fabrication of a dual-mode ion gel sensor inspired by the spider's sensing mechanism, integrating both wind speed and pressure detection. The wind speed sensor employs magnetic fiber flocking and inductive resonance principles, providing accurate detection within a wind speed range of 2 to 11.5 m/s, with a good linear response and high sensitivity; the impedance signal exhibits a maximum variation of 6.89-fold. The pressure sensor, combining a microstructured ion gel with a capacitive design, demonstrates high sensitivity (15.93 kPa-1) and an excellent linear response within a pressure range of 0.5 Pa to 40 kPa, with strong adaptability and good stability. The sensor shows outstanding performance in human motion monitoring, accurately capturing physiological signals such as joint movements and respiratory frequency, offering robust support for motion health management. Furthermore, combined with deep learning algorithms, the sensor achieves an accuracy of 96.83% in an intelligent motion recognition system, effectively enhancing the precision of motion performance analysis. This study provides a new solution for flexible motion monitoring and health management systems, with broad application prospects.
PMID:40152352 | DOI:10.1021/acssensors.5c00403
Transformer-based deep learning structure-conductance relationships in gold and silver nanowires
Phys Chem Chem Phys. 2025 Mar 28. doi: 10.1039/d4cp04605f. Online ahead of print.
ABSTRACT
Due to their inherently stochastic nature, microscopic configurations and conductance values of nano-junctions fabricated using break-junction techniques vary and fluctuate in and between experiments. Unfortunately, it is extremely difficult to observe the structural evolution of nano-junctions while measuring their conductance, a fact that prevents the establishment of their structure-conductance relationship. Herein, we conduct classical molecular dynamics (MD) simulations with neural-network potentials to simulate the stretching of Au and Ag nanowires followed by training a transformer-based neural network to predict their conductance. In addition to achieving an accuracy comparable to ab initio molecular dynamics within a computational cost similar to classical force fields, our approach can acquire the conductance of a large number of junction structures efficiently. Our calculations show that the transformer-based neural network, leveraging its self-attention mechanism, exhibits remarkable stability, accuracy and scalability in the prediction of zero-bias conductance of longer, larger and even structurally different gold nanowires when trained only on smaller systems. The simulated conductance histograms of gold nanowires are highly consistent with experiments. By examining the MD trajectories of gold nanowires simulated at 150 K and 300 K, we find that the formation probability of a three-strand planar structure appearing at 300 K is much higher than that at 150 K. This may be the dominating factor for the observed blueshift of the main peak positioned between 1.5-2 G0 in the conductance histogram following the temperature increase. Moreover, our transformer-based neural network pretrained on Au has an excellent transferability, which can be fine-tuned to predict accurately the conductance of Ag nanowires with much less training data. Our findings pave the way for using deep learning techniques in molecule-scale electronics and are helpful for elucidating the conducting mechanism of molecular junctions and improving their performance.
PMID:40152302 | DOI:10.1039/d4cp04605f
A hybrid long short-term memory-convolutional neural network multi-stream deep learning model with Convolutional Block Attention Module incorporated for monkeypox detection
Sci Prog. 2025 Jan-Mar;108(1):368504251331706. doi: 10.1177/00368504251331706. Epub 2025 Mar 28.
ABSTRACT
BACKGROUND: Monkeypox (mpox) is a zoonotic infectious disease caused by the mpox virus and characterized by painful body lesions, fever, headaches, and exhaustion. Since the report of the first human case of mpox in Africa, there have been multiple outbreaks, even in nonendemic regions of the world. The emergence and re-emergence of mpox highlight the critical need for early detection, which has spurred research into applying deep learning to improve diagnostic capabilities.
OBJECTIVE: This research aims to develop a robust hybrid long short-term memory (LSTM)-convolutional neural network (CNN) model with a Convolutional Block Attention Module (CBAM) to provide a potential tool for the early detection of mpox.
METHODS: A hybrid LSTM-CNN multi-stream deep learning model with CBAM was developed and trained using the Mpox Skin Lesion Dataset Version 2.0 (MSLD v2.0). We employed LSTM layers for preliminary feature extraction, CNN layers for further feature extraction, and CBAM for feature conditioning. The model was evaluated with standard metrics, and gradient-weighted class activation maps (Grad-CAM) and local interpretable model-agnostic explanations (LIME) were used for interpretability.
RESULTS: The model achieved an F1-score, recall, and precision of 94%, an area under the curve of 95.04%, and an accuracy of 94%, demonstrating competitive performance compared to state-of-the-art models. This robust performance highlights the reliability of our model. LIME and Grad-CAM offered insights into the model's decision-making process.
CONCLUSION: The hybrid LSTM-CNN multi-stream deep learning model with CBAM successfully detects mpox, providing a promising early detection tool that can be integrated into web and mobile platforms for convenient and widespread use.
PMID:40152267 | DOI:10.1177/00368504251331706
Negative dataset selection impacts machine learning-based predictors for multiple bacterial species promoters
Bioinformatics. 2025 Mar 27:btaf135. doi: 10.1093/bioinformatics/btaf135. Online ahead of print.
ABSTRACT
MOTIVATION: Advances in machine learning (ML)-based bacterial promoter predictors have greatly improved identification metrics. However, existing models have overlooked the impact of negative datasets, a bias previously identified as GC-content discrepancies between positive and negative datasets in single-species models. This study investigates whether multiple-species models for promoter classification are inherently biased due to the selection criteria of negative datasets. We further explore whether generating synthetic random sequences (SRS) that mimic the GC-content distribution of promoters can partly reduce this bias.
RESULTS: Multiple-species predictors exhibited GC-content bias when using CDS as the negative dataset, as suggested by species-specific sensitivity and specificity metrics and further investigated by dimensionality reduction. We demonstrated a reduction in this bias by employing the SRS dataset, with less background noise detected in real genomic data. In both scenarios, DNABERT showed the best metrics. These findings suggest that GC-balanced datasets can enhance the generalizability of promoter predictors across Bacteria.
AVAILABILITY AND IMPLEMENTATION: The source code of the experiments is freely available at https://github.com/maigonzalezh/MultispeciesPromoterClassifier.
PMID:40152247 | DOI:10.1093/bioinformatics/btaf135
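A minimal sketch of how synthetic random sequences (SRS) matching a target GC content might be generated; the per-base sampling scheme below is an assumption for illustration, not the authors' exact procedure:

```python
import random

def synthetic_random_sequence(length, gc_fraction, seed=None):
    """One synthetic random sequence whose expected GC content matches a
    target fraction (e.g. the mean GC content of a promoter set)."""
    rng = random.Random(seed)
    return "".join(
        # draw G/C with probability gc_fraction, A/T otherwise
        rng.choice("GC") if rng.random() < gc_fraction else rng.choice("AT")
        for _ in range(length)
    )

def gc_content(seq):
    """Observed GC fraction of a DNA sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)
```

In practice one would estimate `gc_fraction` from the positive (promoter) set so that the negative SRS set shares its GC distribution, removing GC content as a trivially discriminative signal.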
scMUSCL: Multi-Source Transfer Learning for Clustering scRNA-seq Data
Bioinformatics. 2025 Mar 27:btaf137. doi: 10.1093/bioinformatics/btaf137. Online ahead of print.
ABSTRACT
MOTIVATION: Single-cell RNA sequencing (scRNA-seq) analysis relies heavily on effective clustering to facilitate numerous downstream applications. Although several machine learning methods have been developed to enhance single-cell clustering, most are fully unsupervised and overlook the rich repository of annotated datasets available from previous single-cell experiments. Since cells are inherently high-dimensional entities, unsupervised clustering can often result in clusters that lack biological relevance. Leveraging annotated scRNA-seq datasets as a reference can significantly enhance clustering performance, enabling the identification of biologically meaningful clusters in target datasets.
RESULTS: In this paper, we propose Single Cell MUlti-Source CLustering (scMUSCL), a novel transfer learning method designed to identify cell clusters in a target dataset by leveraging knowledge from multiple annotated reference datasets. scMUSCL employs a deep neural network to extract domain- and batch-invariant cell representations, effectively addressing discrepancies across various source datasets and between source and target datasets within the new representation space. Unlike existing methods, scMUSCL does not require prior knowledge of the number of clusters in the target dataset and eliminates the need for batch correction between source and target datasets. We conduct extensive experiments using 20 real-life datasets, demonstrating that scMUSCL consistently outperforms existing unsupervised and transfer learning-based methods. Furthermore, our experiments show that scMUSCL benefits from multiple source datasets as learning references and accurately estimates the number of clusters.
AVAILABILITY: The Python implementation of scMUSCL is available at https://github.com/arashkhoeini/scMUSCL.
SUPPLEMENTARY INFORMATION: Supplementary data are available and include additional experimental details, performance evaluations, and implementation guidelines.
PMID:40152244 | DOI:10.1093/bioinformatics/btaf137
Fitting Atomic Structures into Cryo-EM Maps by Coupling Deep Learning-Enhanced Map Processing with Global-Local Optimization
J Chem Inf Model. 2025 Mar 28. doi: 10.1021/acs.jcim.5c00004. Online ahead of print.
ABSTRACT
With the breakthroughs in protein structure prediction technology, constructing atomic structures from cryo-electron microscopy (cryo-EM) density maps through structural fitting has become increasingly critical. However, the accuracy of the constructed models heavily relies on the precision of the structure-to-map fitting. In this study, we introduce DEMO-EMfit, a progressive method that integrates deep learning-based backbone map extraction with a global-local structural pose search to fit atomic structures into density maps. DEMO-EMfit was extensively evaluated on a benchmark data set comprising both cryo-electron tomography (cryo-ET) and cryo-EM maps of protein and nucleic acid complexes. The results demonstrate that DEMO-EMfit outperforms state-of-the-art approaches, offering an efficient and accurate tool for fitting atomic structures into density maps.
PMID:40152222 | DOI:10.1021/acs.jcim.5c00004
Deep Learning-Based Auto-Segmentation for Liver Yttrium-90 Selective Internal Radiation Therapy
Technol Cancer Res Treat. 2025 Jan-Dec;24:15330338251327081. doi: 10.1177/15330338251327081. Epub 2025 Mar 28.
ABSTRACT
The aim was to evaluate a deep learning-based auto-segmentation method for liver delineation in Y-90 selective internal radiation therapy (SIRT). A deep learning (DL)-based liver segmentation model using the U-Net3D architecture was built. Auto-segmentation of the liver was tested on CT images of SIRT patients. DL auto-segmented liver contours were evaluated against physician manually delineated contours. The Dice similarity coefficient (DSC) and mean distance to agreement (MDA) were calculated. The DL-model-generated contours were also compared with contours generated using an Atlas-based method. The ratio of volume (RV, the ratio of the DL-model auto-segmented liver volume to the manually delineated liver volume) and the ratio of activity (RA, the ratio of the Y-90 activity calculated using the DL-model auto-segmented liver volume to that calculated using the manually delineated liver volume) were assessed. Compared with the contours generated with the Atlas method, the contours generated with the DL model agreed better with the manually delineated contours, with larger DSCs (average: 0.94 ± 0.01 vs 0.83 ± 0.10) and smaller MDAs (average: 1.8 ± 0.4 mm vs 7.1 ± 5.1 mm). The average RV and RA calculated using the DL-model-generated volumes were 0.99 ± 0.03 and 1.00 ± 0.00, respectively. The DL segmentation model reliably identified and segmented livers in the CT images and outperformed the Atlas method. The model can be applied in SIRT procedures.
PMID:40152005 | DOI:10.1177/15330338251327081
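The DSC and RV metrics used above are straightforward to compute from binary segmentation masks. An illustrative NumPy sketch, in which voxel counts stand in for volumes (not the study's code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def volume_ratio(auto_mask, manual_mask):
    """RV: auto-segmented volume over manually delineated volume (voxel counts)."""
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    return auto.sum() / manual.sum()
```

Note that RV near 1.0 does not by itself imply good overlap: two equal-sized masks in different locations still give RV = 1.0 but a low DSC, which is why the study reports both.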
Hybrid fruit bee optimization algorithm-based deep convolution neural network for brain tumour classification using MRI images
Network. 2025 Mar 28:1-23. doi: 10.1080/0954898X.2025.2476079. Online ahead of print.
ABSTRACT
Accurate classification of brain tumours is important in diagnosing cancer. Several deep learning (DL) methods have been used to identify and categorize tumour illness. Nevertheless, traditional DL procedures did not consistently obtain good categorization results; optimized DL approaches offer a superior answer to this problem. Here, brain tumour categorization (BTC) is performed using the devised Hybrid Fruit Bee Optimization based Deep Convolution Neural Network (HFBO-based DCNN). Noise in the image is removed through pre-processing with a Gaussian filter. Next, feature extraction is performed using SegNet, which helps extract the relevant data from the input image. Feature selection is then done with the help of the HFBO algorithm. Finally, brain tumour classification is performed by the Deep CNN, with the established HFBO algorithm used to train the weights. The devised model was analysed using testing accuracy, sensitivity, and specificity, producing values of 0.926, 0.926, and 0.931, respectively.
PMID:40151966 | DOI:10.1080/0954898X.2025.2476079
An Overview and Comparative Analysis of CRISPR-SpCas9 gRNA Activity Prediction Tools
CRISPR J. 2025 Mar 27. doi: 10.1089/crispr.2024.0058. Online ahead of print.
ABSTRACT
Design of guide RNAs (gRNAs) with high efficiency and specificity is vital for successful application of CRISPR gene editing technology. Although many machine learning (ML)- and deep learning (DL)-based tools have been developed to predict gRNA activities, a systematic and unbiased evaluation of their predictive performance is still needed. Here, we provide a brief overview of in silico tools for CRISPR design and assess the CRISPR datasets and statistical metrics used for evaluating model performance. We benchmark seven ML- and DL-based CRISPR-Cas9 editing efficiency prediction tools across nine CRISPR datasets covering six cell types and three species. The DL models CRISPRon and DeepHF outperform the other models, exhibiting greater accuracy and higher Spearman correlation coefficients across multiple datasets. We compile all CRISPR datasets and in silico prediction tools into a GuideNet resource web portal, aiming to facilitate and streamline the sharing of CRISPR datasets. Furthermore, we summarize features affecting CRISPR gene editing activity, providing important insights into model performance and the further development of more accurate CRISPR prediction models.
PMID:40151952 | DOI:10.1089/crispr.2024.0058
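Spearman correlation, used above to benchmark the prediction tools, is the Pearson correlation of the ranks of predicted versus measured gRNA efficiencies. A minimal NumPy sketch without tie correction (an illustrative assumption; production code would use an implementation that averages tied ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation between two score vectors (no tie handling)."""
    def ranks(v):
        # rank 1 for the smallest value, n for the largest
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        return r

    rx = ranks(np.asarray(x, dtype=float))
    ry = ranks(np.asarray(y, dtype=float))
    rx -= rx.mean()
    ry -= ry.mean()
    # Pearson correlation of the centered ranks
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

Because it depends only on ranks, Spearman correlation rewards a tool for ordering gRNAs correctly even when its raw efficiency scores are miscalibrated, which is why it is a common benchmark metric in this setting.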