Deep learning
Artificial intelligence driven plaque characterization and functional assessment from CCTA using OCT-based automation: A prospective study
Int J Cardiol. 2025 Mar 8:133140. doi: 10.1016/j.ijcard.2025.133140. Online ahead of print.
ABSTRACT
BACKGROUND: We aimed to develop and validate an Artificial Intelligence (AI) model that leverages CCTA and optical coherence tomography (OCT) images for automated analysis of plaque characteristics and coronary function.
METHODS: A total of 100 patients who underwent invasive coronary angiography, OCT, and CCTA before discharge were included in this study. The data were randomly divided into a training set (80%) and a test set (20%). The training set, comprising 21,471 tomography images, was used to train a deep-learning convolutional neural network. Subsequently, the AI model was integrated with flow reserve score calculation software developed by Ruixin Medical.
RESULTS: The results from the test set demonstrated excellent agreement between the AI model and OCT analysis for calcified plaque (McNemar test, p = 0.683), non-calcified plaque (McNemar test, p = 0.752), mixed plaque (McNemar test, p = 1.000), and low-attenuation plaque (McNemar test, p = 1.000). Additionally, there was excellent agreement for deep learning-derived minimum lumen diameter (intraclass correlation coefficient [ICC] 0.91, p < 0.001), mean vessel diameter (ICC 0.88, p < 0.001), and percent diameter stenosis (ICC 0.82, p < 0.001). In diagnosing >50% coronary stenosis, the diagnostic accuracy of the AI model surpassed that of conventional CCTA (AUC 0.98 vs. 0.76, p = 0.008). There was also excellent agreement between quantitative flow ratio (QFR) and AI-derived CT-FFR (ICC 0.745, p < 0.0001).
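The paired-agreement statistic reported above can be reproduced with a few lines of standard-library Python. This is a generic exact (binomial) McNemar test, not the authors' code, and the cell counts in the usage note are illustrative:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact McNemar test p-value from the two discordant cells of a
    paired 2x2 table (b: positive by method A only, c: positive by
    method B only). Under H0, b ~ Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided exact p-value: 2 * P(X <= min(b, c)), capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)
```

A large p-value (as for all four plaque types above) means the two methods' discordant classifications are balanced, i.e., no systematic disagreement is detected.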
CONCLUSION: Our AI model effectively provides automated analysis of plaque characteristics from CCTA images, with the analysis results showing strong agreement with OCT findings. Moreover, the CT-FFR automatically analyzed by the AI model exhibits high consistency with QFR derived from coronary angiography.
PMID:40064207 | DOI:10.1016/j.ijcard.2025.133140
Addressing underestimation and explanation of retinal fundus photo-based cardiovascular disease risk score: Algorithm development and validation
Comput Biol Med. 2025 Mar 9;189:109941. doi: 10.1016/j.compbiomed.2025.109941. Online ahead of print.
ABSTRACT
OBJECTIVE: To resolve the underestimation problem and investigate the mechanism of the AI model employed to predict cardiovascular disease (CVD) risk scores from retinal fundus photos.
METHODS: An ordinal regression Deep Learning (DL) model was proposed to predict 10-year CVD risk scores. The mechanism of the DL model in understanding CVD risk was explored using methods such as transfer learning and saliency maps.
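The abstract does not specify the ordinal head used; a common choice, and a minimal sketch of the idea, is the cumulative (CORAL-style) encoding, in which a K-level risk score becomes K-1 monotone binary targets. The function names below are illustrative, not from the paper:

```python
def encode_ordinal(level: int, n_levels: int) -> list[int]:
    """Encode an ordinal class as K-1 cumulative binary targets:
    target[k] = 1 iff level > k. E.g. level 2 of 4 -> [1, 1, 0]."""
    return [1 if level > k else 0 for k in range(n_levels - 1)]

def decode_ordinal(probs: list[float], threshold: float = 0.5) -> int:
    """Decode per-threshold sigmoid outputs back to a risk level by
    counting thresholds passed (monotone outputs assumed)."""
    return sum(p > threshold for p in probs)
```

Because each binary target penalizes predictions on the wrong side of every threshold, this framing discourages the systematic underestimation that plain classification or regression heads can exhibit on skewed risk distributions.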
RESULTS: Model development used data from 34,652 participants with good-quality fundus photographs from the UK Biobank, together with an external validation dataset collected in Australia comprising 1376 fundus photos of 401 participants acquired with a desktop retinal camera and a portable retinal camera. On the UK Biobank dataset, the mean [SD] risk-level accuracy across cross-validation folds was 0.772 [0.008], the AUROC for moderate-or-higher risk was 0.849 [0.005], and the AUROC for high risk was 0.874 [0.007]. On the external dataset, risk-level accuracy was 0.715 for images acquired with the desktop camera and 0.656 for the portable camera.
CONCLUSIONS: The DL model described in this study has minimized the underestimation problem. Our analysis confirms that the DL model learned CVD risk score prediction primarily from age- and sex-related image representation. Model performance was only slightly degraded when features such as the retinal vessels and colours were removed from the images. Our analysis identified some image features associated with high CVD risk status, such as the peripheral small vessels and the macula areas.
PMID:40064120 | DOI:10.1016/j.compbiomed.2025.109941
How much data is enough? Optimization of data collection for artifact detection in EEG recordings
J Neural Eng. 2025 Mar 10. doi: 10.1088/1741-2552/adbebe. Online ahead of print.
ABSTRACT
Objective. Electroencephalography (EEG) is a widely used neuroimaging technique known for its cost-effectiveness and user-friendliness. However, the presence of various artifacts leads to a poor signal-to-noise ratio, limiting the precision of analyses and applications. The proposed work focuses on electromyography (EMG) artifacts, which are among the most challenging biological artifacts. Currently reported EMG artifact cleaning performance largely depends on the data used for validation and, in the case of machine learning approaches, also on the data used for training. The data are typically gathered either by recruiting subjects to perform specific EMG artifact tasks or by integrating existing datasets. Prevailing approaches, however, tend to rely on intuitive, concept-oriented data collection with minimal justification for the selection of artifacts and their quantities. Given the substantial costs associated with biological data collection and the pressing need for effective data utilization, we propose an optimization procedure for data-oriented data collection design using deep learning-based artifact detection. Approach. We apply a binary classification differentiating between artifact epochs (time intervals containing EMG artifacts) and non-artifact epochs (time intervals containing no EMG artifacts) using three different neural architectures. Our aim is to minimize data collection efforts while preserving cleaning efficiency. Main results. We were able to reduce the number of EMG artifact tasks from twelve to three and decrease repetitions of isometric contraction tasks from ten to three, or sometimes even just one. Significance. Our work addresses the need for effective data utilization in biological data collection, offering a systematic and dynamic quantitative approach. By providing clear justifications for the choices of artifacts and their quantity, we aim to guide future studies toward more effective and economical data collection in EEG and EMG research.
PMID:40064096 | DOI:10.1088/1741-2552/adbebe
Metal Suppression Magnetic Resonance Imaging Techniques in Orthopaedic and Spine Surgery
J Am Acad Orthop Surg. 2025 Mar 11. doi: 10.5435/JAAOS-D-24-01057. Online ahead of print.
ABSTRACT
Implantation of metallic instrumentation is the mainstay of a variety of orthopaedic and spine surgeries. Postoperatively, imaging of the soft tissues around these implants is commonly required to assess for persistent, recurrent, and/or new pathology (ie, instrumentation loosening, particle disease, infection, neural compression); visualization of these pathologies often requires the superior soft-tissue contrast of magnetic resonance imaging (MRI). As susceptibility artifacts from ferromagnetic implants can result in unacceptable image quality, unique MRI approaches are often necessary to provide accurate imaging. In this text, a comprehensive review is provided on common artifacts encountered in orthopaedic MRI, including comparisons of artifacts from different metallic alloys and common nonproprietary/proprietary MR metallic artifact reduction methods. The newest metal-artifact suppression imaging technology and future directions (ie, deep learning/artificial intelligence) in this important field will be considered.
PMID:40063737 | DOI:10.5435/JAAOS-D-24-01057
Color correction methods for underwater image enhancement: A systematic literature review
PLoS One. 2025 Mar 10;20(3):e0317306. doi: 10.1371/journal.pone.0317306. eCollection 2025.
ABSTRACT
Underwater vision is essential in numerous applications, such as marine resource surveying, autonomous navigation, object detection, and target monitoring. However, raw underwater images often suffer from significant color deviations due to light attenuation, presenting challenges for practical use. This systematic literature review examines the latest advancements in color correction methods for underwater image enhancement. The core objectives of the review are to identify and critically analyze existing approaches, highlighting their strengths, limitations, and areas for future research. A comprehensive search across eight scholarly databases resulted in the identification of 67 relevant studies published between 2010 and 2024. These studies introduce 13 distinct methods for enhancing underwater images, which can be categorized into three groups: physical models, non-physical models, and deep learning-based methods. Physical model-based methods aim to reverse the effects of underwater image degradation by simulating the physical processes of light attenuation and scattering. In contrast, non-physical model-based methods focus on manipulating pixel values without modeling these underlying degradation processes. Deep learning-based methods, by leveraging data-driven approaches, aim to learn mappings between degraded and enhanced images through large datasets. However, challenges persist across all categories, including algorithmic limitations, data dependency, computational complexity, and performance variability across diverse underwater environments. This review consolidates the current knowledge, providing a taxonomy of methods while identifying critical research gaps. It emphasizes the need to improve adaptability across diverse underwater conditions and reduce computational complexity for real-time applications. The review findings serve as a guide for future research to overcome these challenges and advance the field of underwater image enhancement.
PMID:40063649 | DOI:10.1371/journal.pone.0317306
Deep learning-based prediction of atrial fibrillation from polar transformed time-frequency electrocardiogram
PLoS One. 2025 Mar 10;20(3):e0317630. doi: 10.1371/journal.pone.0317630. eCollection 2025.
ABSTRACT
Portable and wearable electrocardiogram (ECG) devices are increasingly utilized in healthcare for monitoring heart rhythms and detecting cardiac arrhythmias or other heart conditions. The integration of ECG signal visualization with AI-based abnormality detection empowers users to independently and confidently assess their physiological signals. In this study, we investigated a novel method for visualizing ECG signals using polar transformations of short-time Fourier transform (STFT) spectrograms and evaluated the performance of deep convolutional neural networks (CNNs) in predicting atrial fibrillation from these polar transformed spectrograms. The ECG data, which are available from the PhysioNet/CinC Challenge 2017, were categorized into four classes: normal sinus rhythm, atrial fibrillation, other rhythms, and noise. Preprocessing steps included ECG signal filtering, STFT-based spectrogram generation, and reverse polar transformation to generate final polar spectrogram images. These images were used as inputs for deep CNN models, where three pre-trained deep CNNs were used for comparisons. The results demonstrated that deep learning-based predictions using polar transformed spectrograms were comparable to existing methods. Furthermore, the polar transformed images offer a compact and intuitive representation of rhythm characteristics in ECG recordings, highlighting their potential for wearable applications.
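The abstract does not detail the exact reverse polar transformation used; the standard-library sketch below shows the generic idea of wrapping a time-frequency grid into a disc image, with angle encoding time and radius encoding frequency, using nearest-neighbour sampling. All names here are illustrative:

```python
import math

def polar_remap(spec, size=64):
    """Map a spectrogram spec[t][f] (time x frequency) onto a square
    polar image: angle encodes time, radius encodes frequency.
    Nearest-neighbour sampling; pixels outside the disc stay 0."""
    n_t, n_f = len(spec), len(spec[0])
    c = (size - 1) / 2.0
    img = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            dx, dy = x - c, y - c
            r = math.hypot(dx, dy) / c                    # 0 at centre
            if r > 1.0:
                continue                                  # outside disc
            theta = (math.atan2(dy, dx) + math.pi) / (2 * math.pi)
            t = min(int(theta * n_t), n_t - 1)            # angle -> time
            f = min(int(r * n_f), n_f - 1)                # radius -> freq
            img[y][x] = spec[t][f]
    return img
```

One full rhythm cycle around the disc gives the compact, at-a-glance representation of rhythm regularity that the abstract highlights for wearable displays.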
PMID:40063554 | DOI:10.1371/journal.pone.0317630
Predicting and Explaining Cognitive Load, Attention, and Working Memory in Virtual Multitasking
IEEE Trans Vis Comput Graph. 2025 Mar 10;PP. doi: 10.1109/TVCG.2025.3549850. Online ahead of print.
ABSTRACT
As VR technology advances, the demand for multitasking within virtual environments escalates. Negotiating multiple tasks within the immersive virtual setting presents cognitive challenges, where users experience difficulty executing multiple concurrent tasks. This phenomenon highlights the importance of cognitive functions like attention and working memory, which are vital for navigating intricate virtual environments effectively. In addition to attention and working memory, assessing the extent of physical and mental strain induced by the virtual environment and the concurrent tasks performed by the participant is key. While previous research has focused on investigating factors influencing attention and working memory in virtual reality, more comprehensive approaches addressing the prediction of physical and mental strain alongside these cognitive aspects remain lacking. This gap inspired our investigation, where we utilized an open dataset, VRWalking, which included eye and head tracking and physiological measures like heart rate (HR) and galvanic skin response (GSR). The VRWalking dataset has timestamped labeled data for physical and mental load, working memory, and attention metrics. In our investigation, we employed straightforward deep learning models to predict these labels, achieving noteworthy performance with 91%, 96%, 93%, and 91% accuracy in predicting physical load, mental load, working memory, and attention, respectively. Additionally, we conducted SHAP (SHapley Additive exPlanations) analysis to identify the most critical features driving these predictions. Our findings contribute to understanding the overall cognitive state of a participant and effective data collection practices for future researchers, as well as provide insights for virtual reality developers. Developers can utilize these predictive approaches to adaptively optimize user experience in real time and minimize cognitive strain, ultimately enhancing the effectiveness and usability of virtual reality applications.
PMID:40063446 | DOI:10.1109/TVCG.2025.3549850
Learning to Explore Sample Relationships
IEEE Trans Pattern Anal Mach Intell. 2025 Mar 10;PP. doi: 10.1109/TPAMI.2025.3549300. Online ahead of print.
ABSTRACT
Despite the great success achieved, deep learning technologies usually suffer from data scarcity issues in real-world applications, where existing methods mainly explore sample relationships in a vanilla way from the perspectives of either the input or the loss function. In this paper, we propose a batch transformer module, BatchFormerV1, to equip deep neural networks themselves with the abilities to explore sample relationships in a learnable way. Basically, the proposed method enables data collaboration, e.g., head-class samples will also contribute to the learning of tail classes. Considering that exploring instance-level relationships has very limited impacts on dense prediction, we generalize and refer to the proposed module as BatchFormerV2, which further enables exploring sample relationships for pixel-/patch-level dense representations. In addition, to address the train-test inconsistency where a mini-batch of data samples are neither necessary nor desirable during inference, we also devise a two-stream training pipeline, i.e., a shared model is first jointly optimized with and without BatchFormerV2 which is then removed during testing. The proposed module is plug-and-play without requiring any extra inference cost. Lastly, we evaluate the proposed method on over ten popular datasets, including 1) different data scarcity settings such as long-tailed recognition, zero-shot learning, domain generalization, and contrastive learning; and 2) different visual recognition tasks ranging from image classification to object detection and panoptic segmentation. Code is available at https://zhihou7.github.io/BatchFormer.
PMID:40063428 | DOI:10.1109/TPAMI.2025.3549300
Identification of Camellia Oil Adulteration With Excitation-Emission Matrix Fluorescence Spectra and Deep Learning
J Fluoresc. 2025 Mar 10. doi: 10.1007/s10895-025-04229-7. Online ahead of print.
ABSTRACT
Camellia oil (CAO), known for its high nutritional and commercial value, has raised increasing concerns about adulteration. Developing an accurate and non-destructive method to identify CAO adulterants is crucial for safeguarding public health and well-being. This study simulates potential real-world adulteration cases by designing representative adulteration scenarios, followed by the acquisition and analysis of corresponding excitation-emission matrix fluorescence (EEMF) spectra. Parallel factor analysis (PARAFAC) was employed to characterize and explore the variations of fluorophores in the EEMF spectra of the different adulteration scenarios, which showed a linear correlation between the relative concentration of PARAFAC components and adulteration levels. A deep learning model named ResTransformer, which combines residual modules with a Transformer, was proposed for both the qualitative detection of adulteration types and the quantitative detection of adulteration concentrations from local and global perspectives. The global ResTransformer qualitative models achieved accuracies of over 96.92% based on EEMF spectra and PARAFAC, and the quantitative models showed a coefficient of determination for validation ([Formula: see text]) > 0.978, a root mean square error of validation ([Formula: see text]) < 3.0643%, and a ratio of performance to deviation (RPD) > 7.6741. Compared to traditional chemometric models, the ResTransformer model demonstrated superior performance. The integration of EEMF and ResTransformer presents a highly promising strategy for rapid and reliable detection of CAO adulteration.
PMID:40063235 | DOI:10.1007/s10895-025-04229-7
Myocardial perfusion imaging SPECT left ventricle segmentation with graphs
EJNMMI Phys. 2025 Mar 10;12(1):21. doi: 10.1186/s40658-025-00728-5.
ABSTRACT
PURPOSE: Various specialized and general collimators are used for myocardial perfusion imaging (MPI) with single-photon emission computed tomography (SPECT) to assess different types of coronary artery disease (CAD). Alongside the wide variability in imaging characteristics, the a priori "learnt" information of left ventricular (LV) shape can affect the final diagnosis of the imaging protocol. This study evaluates the effect of prior information incorporation into the segmentation process, compared to deep learning (DL) approaches, as well as the differences between 4 collimation techniques on 5 different datasets.
METHODS: This study was implemented on a database of 80 patients. Forty patients came from mixed black-box collimators, and the remaining 40 came, 10 each, from multi-pinhole (MPH), low-energy high-resolution (LEHR), CardioC, and CardioD collimators. Testing was evaluated on a new continuous graph-based approach, which automatically segments the left ventricular volume with prior information on the cardiac geometry. The technique is based on the continuous max-flow (CMF) min-cut algorithm, whose performance was evaluated using precision, recall, IoU, and Dice score metrics.
RESULTS: Testing showed that the developed method improved on deep learning, reaching higher scores in most of the evaluation metrics. Further investigating the different collimation techniques, the evaluation of receiver operating characteristic (ROC) curves showed different stabilities for the various collimators. A Wilcoxon signed-rank test on the outlines of the LVs showed that the collimation procedures are differentiable. To further investigate these phenomena, the model parameters of the LVs were reconstructed and evaluated with the uniform manifold approximation and projection (UMAP) method, which further proved that collimators can be differentiated based on the projected LV shapes alone.
CONCLUSIONS: The results show that prior information incorporation can enhance the performance of segmentation methods and that collimation strategies have a strong effect on the projected cardiac geometry.
PMID:40063231 | DOI:10.1186/s40658-025-00728-5
I-BrainNet: Deep Learning and Internet of Things (DL/IoT)-Based Framework for the Classification of Brain Tumor
J Imaging Inform Med. 2025 Mar 10. doi: 10.1007/s10278-025-01470-1. Online ahead of print.
ABSTRACT
Brain tumors are categorized as one of the most fatal forms of cancer due to their location and diagnostic difficulty. Medical experts rely on two key approaches, biopsy and MRI. However, these techniques have several drawbacks, including the need for medical experts, inaccuracy, and misdiagnosis as a result of anxiety or workload, which may lead to patient morbidity and mortality. This opens a gap for precise diagnosis and staging to guide appropriate clinical decisions. In this study, we propose the application of deep learning (DL)-based techniques for the classification of MRI vs. non-MRI and tumor vs. no tumor. To accurately discriminate between classes, we acquired brain tumor multimodal image (CT and MRI) datasets comprising 9616 MRI and CT scans, of which 8000 were selected for discrimination between MRI and non-MRI and 4000 for discrimination between tumor and no-tumor cases. The acquired images underwent image pre-processing, data splitting, data augmentation, and model training. The images were trained using four DL networks: MobileNetV2, ResNet, InceptionV3, and VGG16. Performance evaluation of the DL architectures and comparative analysis showed that pre-trained MobileNetV2 achieved the best result across all metrics, with 99.94% accuracy for the discrimination between MRI and non-MRI and 99.00% for the discrimination between tumor and no tumor. Moreover, I-BrainNet, a DL/IoT-based framework, is developed for the real-time classification of brain tumors.
PMID:40063173 | DOI:10.1007/s10278-025-01470-1
SADiff: A Sinogram-Aware Diffusion Model for Low-Dose CT Image Denoising
J Imaging Inform Med. 2025 Mar 10. doi: 10.1007/s10278-025-01469-8. Online ahead of print.
ABSTRACT
CT image denoising is a crucial task in medical imaging systems, aimed at enhancing the quality of acquired visual signals. The emergence of diffusion models in machine learning has revolutionized the generation of high-quality CT images. However, diffusion-based CT image denoising methods suffer from two key shortcomings. First, they do not incorporate image formation priors from CT imaging, which limits their adaptability to the CT image denoising task. Second, they are trained on CT images with varying structures and textures at the signal phase, which hinders the model generalization capability. To address the first limitation, we propose a novel conditioning module for our diffusion model that leverages image formation priors from the sinogram domain to generate rich features. To tackle the second issue, we introduce a two-phase training mechanism in which the network gradually learns different anatomical textures and structures. Extensive experimental results demonstrate the effectiveness of both approaches in enhancing CT image quality, with improvements of up to 17% in PSNR and 38% in SSIM, highlighting their superiority over state-of-the-art methods.
PMID:40063172 | DOI:10.1007/s10278-025-01469-8
Non-invasive derivation of instantaneous wave-free ratio from invasive coronary angiography using a new deep learning artificial intelligence model and comparison with human operators' performance
Int J Cardiovasc Imaging. 2025 Mar 10. doi: 10.1007/s10554-025-03369-y. Online ahead of print.
ABSTRACT
Invasive coronary physiology is underused and carries risks and costs. Artificial intelligence (AI) might enable non-invasive physiology from invasive coronary angiography (CAG), possibly outperforming humans, but has seldom been explored, especially for the instantaneous wave-free ratio (iFR). We aimed to develop binary iFR lesion classification AI models and compare them with human performance. In a single-center retrospective study of patients undergoing CAG and iFR, a validated encoder-decoder convolutional neural network (CNN) performed segmentation, followed by manual annotation of the target vessel and pressure sensor location on a segmented telediastolic frame. Three AI models classified lesions as positive (iFR ≤ 0.89) or negative (> 0.89). Model 1 uses preprocessed vessel diameters with a transformer. Models 2 and 3 are EfficientNet-B5 CNNs using concatenated angiography and segmentation; Model 3 employs class-frequency-weighted cross-entropy loss. Previous findings demonstrated Model 3's superiority for the left anterior descending artery (LAD) and Model 1's for the circumflex (Cx) and right coronary artery (RCA), so they were unified into a vessel-based model. Ten-fold patient-level cross-validation enabled training and testing on the full sample. Three experienced operators performed binary iFR classification using single frames of raw or segmented images. Comparison metrics were accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Across 250 measurements, AI accuracy was 72%, PPV 48%, NPV 90%, sensitivity 77%, and specificity 71%. Human accuracy ranged from 54% to 74%. NPV was high for the Cx/RCA (AI: 96/98%; operators: 94/97%), but AI significantly outperformed humans in the LAD (78% vs. 60-64%). An AI model capable of binary iFR lesion classification mildly outperformed interventional cardiologists, supporting further validation studies.
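The five comparison metrics quoted above all follow from a binary confusion matrix; a minimal helper makes the relationships explicit. The counts in the test are made up for illustration, not the study's data:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, sensitivity, specificity, PPV, and NPV from a binary
    confusion matrix (here, positive would mean iFR <= 0.89)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,        # all correct / all cases
        "sensitivity": tp / (tp + fn),        # recall of positives
        "specificity": tn / (tn + fp),        # recall of negatives
        "ppv": tp / (tp + fp),                # precision of positives
        "npv": tn / (tn + fn),                # precision of negatives
    }
```

Note how a high NPV with a modest PPV, as reported for the AI model, is the profile of a rule-out tool: negative calls are trustworthy even when positive calls need confirmation.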
PMID:40063156 | DOI:10.1007/s10554-025-03369-y
deep-Sep: a deep learning-based method for fast and accurate prediction of selenoprotein genes in bacteria
mSystems. 2025 Mar 10:e0125824. doi: 10.1128/msystems.01258-24. Online ahead of print.
ABSTRACT
Selenoproteins are a special group of proteins with major roles in cellular antioxidant defense. They contain the 21st amino acid selenocysteine (Sec) in the active sites, which is encoded by an in-frame UGA codon. Compared to eukaryotes, identification of selenoprotein genes in bacteria remains challenging due to the absence of an effective strategy for distinguishing the Sec-encoding UGA codon from a normal stop signal. In this study, we have developed a deep learning-based algorithm, deep-Sep, for quickly and precisely identifying selenoprotein genes in bacterial genomic sequences. This algorithm uses a Transformer-based neural network architecture to construct an optimal model for detecting Sec-encoding UGA codons and a homology search-based strategy to remove additional false positives. During the training and testing stages, deep-Sep has demonstrated commendable performance, including an F1 score of 0.939 and an area under the receiver operating characteristic curve of 0.987. Furthermore, when applied to 20 bacterial genomes as independent test data sets, deep-Sep exhibited remarkable capability in identifying both known and new selenoprotein genes, which significantly outperforms the existing state-of-the-art method. Our algorithm has proved to be a powerful tool for comprehensively characterizing selenoprotein genes in bacterial genomes, which should not only assist in accurate annotation of selenoprotein genes in genome sequencing projects but also provide new insights for a deeper understanding of the roles of selenium in bacteria. IMPORTANCE Selenium is an essential micronutrient present in selenoproteins in the form of Sec, which is a rare amino acid encoded by the opal stop codon UGA. Identification of all selenoproteins is of vital importance for investigating the functions of selenium in nature.
Previous strategies for predicting selenoprotein genes mainly relied on the identification of a special cis-acting Sec insertion sequence (SECIS) element within mRNAs. However, due to the complexity and variability of SECIS elements, recognition of all selenoprotein genes in bacteria is still a major challenge in the annotation of bacterial genomes. We have developed a deep learning-based algorithm to predict selenoprotein genes in bacterial genomic sequences, which demonstrates superior performance compared to currently available methods. This algorithm can be utilized in either web-based or local (standalone) modes, serving as a promising tool for identifying the complete set of selenoprotein genes in bacteria.
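The core discrimination problem, deciding whether an in-frame UGA is a Sec codon or a true stop, starts from enumerating the candidates. A minimal sketch (not part of deep-Sep itself; the function name is illustrative) that lists internal in-frame TGA codons in an ORF:

```python
def inframe_uga_positions(orf: str) -> list[int]:
    """Return 0-based offsets of in-frame TGA (UGA in mRNA) codons
    inside an ORF sequence, excluding the terminal codon. Each
    internal hit is a candidate selenocysteine (Sec) site that a
    classifier like deep-Sep would then score."""
    orf = orf.upper().replace("U", "T")  # accept RNA or DNA alphabets
    return [i for i in range(0, len(orf) - 3, 3) if orf[i:i + 3] == "TGA"]
```

In practice each candidate would be expanded with its flanking sequence context before being fed to the Transformer classifier; out-of-frame TGA triplets are never candidates, which is why the scan steps by whole codons.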
PMID:40062874 | DOI:10.1128/msystems.01258-24
Artificial Intelligence Versus Rules-Based Approach for Segmenting Nonperfusion Area in a DRCR Retina Network Optical Coherence Tomography Angiography Dataset
Invest Ophthalmol Vis Sci. 2025 Mar 3;66(3):22. doi: 10.1167/iovs.66.3.22.
ABSTRACT
PURPOSE: Loss of retinal perfusion is associated with both onset and worsening of diabetic retinopathy (DR). Optical coherence tomography angiography is a noninvasive method for measuring the nonperfusion area (NPA) and has promise as a scalable screening tool. This study compares two optical coherence tomography angiography algorithms for quantifying NPA.
METHODS: Adults with (N = 101) and without (N = 274) DR were recruited from 20 U.S. sites. We collected 3 × 3-mm macular scans using an Optovue RTVue-XR. Rules-based (RB) and deep-learning-based artificial intelligence (AI) algorithms were used to segment the NPA in four anatomical slabs. For comparison, NPA was graded manually in a subset of scans (n = 50).
RESULTS: The AI method outperformed the RB method in intersection over union, recall, and F1 score, but the RB method has better precision relative to manual grading in all anatomical slabs (all P ≤ 0.001). The AI method had a stronger rank correlation with Early Treatment of Diabetic Retinopathy Study DR severity than the RB method in all slabs (all P < 0.001). NPAs graded using the AI method had a greater area under the receiver operating characteristic curve for diagnosing referable DR than the RB method in the superficial vascular complex, intermediate capillary plexus, and combined inner retina (all P ≤ 0.001), but not in the deep capillary plexus (P = 0.92).
CONCLUSIONS: Our results indicate that output from the AI-based method agrees better with manual grading and can better distinguish between clinically relevant DR severity levels than the RB approach in most plexuses.
PMID:40062815 | DOI:10.1167/iovs.66.3.22
Chemically Engineered Peptide Efficiently Blocks Malaria Parasite Entry into Red Blood Cells
Biochemistry. 2025 Mar 10. doi: 10.1021/acs.biochem.4c00465. Online ahead of print.
ABSTRACT
Chemical peptide engineering, enabled by residue insertion, backbone cyclization, and incorporation of an additional disulfide bond, led to a unique cyclic peptide that efficiently inhibits the invasion of red blood cells by malaria parasites. The engineered peptide exhibits a 20-fold enhanced affinity toward its receptor (PfAMA1) compared to the native peptide ligand (PfRON2), as determined by surface plasmon resonance. An in vitro parasite growth inhibition assay revealed augmented potency of the engineered peptide. The structure of the PfAMA1-cyclic peptide complex, predicted by the deep learning-based structure prediction tool ColabFold-AlphaFold2 with protein-cyclic peptide complex offset, provided valuable insights into the observed activity of the peptide analogs. Rational editing of the peptide backbone and side chain described here proved to be an effective strategy for designing peptide-based inhibitors to interfere with disease-related protein-protein interactions.
PMID:40062812 | DOI:10.1021/acs.biochem.4c00465
Accelerating polymer self-consistent field simulation and inverse DSA-lithography with deep neural networks
J Chem Phys. 2025 Mar 14;162(10):104105. doi: 10.1063/5.0255288.
ABSTRACT
Self-consistent field theory (SCFT) is a powerful polymer field-theoretic simulation tool that plays a crucial role in the study of block copolymer (BCP) self-assembly. However, the computational cost of implementing SCFT simulations is comparatively high, particularly in computationally demanding applications where repeated forward simulations are needed. Herein, we propose a deep learning-based method to accelerate the SCFT simulations. By directly mapping early SCFT results to equilibrium structures using a deep neural network (DNN), this method bypasses most of the time-consuming SCFT iterations, significantly reducing the simulation time. We first applied this method to two- and three-dimensional large-cell bulk system simulations. Both results demonstrate that a DNN can be trained to predict equilibrium states based on early iteration outputs accurately. The number of early SCFT iterations can be tailored to optimize the trade-off between computational speed and predictive accuracy. The effect of training set size on DNN performance was also examined, offering guidance on minimizing dataset generation costs. Furthermore, we applied this method to the more computationally demanding inverse directed self-assembly-lithography problem. A covariance matrix adaptation evolution strategy-based inverse design method was proposed. By replacing the forward simulation model in this method with a trained DNN, we were able to determine the guiding template shapes that direct the BCP to self-assemble into the target structure with certain constraints, eliminating the need for any SCFT simulations. This improved the inverse design efficiency by a factor of 100, and the computational cost for training the network can be easily averaged out over repeated tasks.
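The core trick, learning a map from an early iterate to the converged solution, can be illustrated with a toy scalar fixed-point problem. The Babylonian square-root iteration and the least-squares "surrogate" below are illustrative stand-ins for SCFT and the DNN, not the paper's method:

```python
import math

def babylonian(a: float, steps: int, x0: float = 1.0) -> float:
    """A few Babylonian (Heron) iterations toward sqrt(a) -- a toy
    stand-in for early SCFT iterations toward equilibrium."""
    x = x0
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

def fit_linear(xs, ys):
    """Closed-form least-squares line y = w*x + b (the 'surrogate')."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

# Train: map the 2-step early iterate to the true converged value.
targets = [1.0 + 0.05 * i for i in range(21)]      # a in [1, 2]
early = [babylonian(a, 2) for a in targets]
truth = [math.sqrt(a) for a in targets]
w, b = fit_linear(early, truth)

# Predict equilibrium for an unseen input from its early iterate only,
# skipping the remaining iterations entirely.
pred = w * babylonian(1.75, 2) + b
```

As in the paper, the trade-off is explicit: taking more early steps before applying the learned correction costs more compute but leaves the surrogate an easier residual to fit.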
PMID:40062757 | DOI:10.1063/5.0255288
Advancements in machine learning and biomarker integration for prenatal Down syndrome screening
Turk J Obstet Gynecol. 2025 Mar 10;22(1):75-82. doi: 10.4274/tjod.galenos.2025.12689.
ABSTRACT
The use of machine learning (ML) in biomarker analysis for predicting Down syndrome exemplifies an innovative strategy that enhances diagnostic accuracy and enables early detection. Recent studies demonstrate the effectiveness of ML algorithms in identifying genetic variations and expression patterns associated with Down syndrome by comparing genomic data from affected individuals and their typically developing peers. This review examines how ML and biomarker analysis improve prenatal screening for Down syndrome. Advancements show that integrating maternal serum markers, nuchal translucency measurements, and ultrasonographic images with algorithms, such as random forests and deep learning convolutional neural networks, raises detection rates to above 85% while keeping false positive rates low. Moreover, non-invasive prenatal testing with soft ultrasound markers has increased diagnostic sensitivity and specificity, marking a significant shift in prenatal care. The review highlights the importance of implementing robust screening protocols that utilize ultrasound biomarkers, along with developing personalized screening tools through advanced statistical methods. It also explores the potential of combining genetic and epigenetic biomarkers with ML to further improve diagnostic accuracy and understanding of Down syndrome pathophysiology. The findings stress the need for ongoing research to optimize algorithms, validate their effectiveness across diverse populations, and incorporate these cutting-edge approaches into routine clinical practice. Ultimately, blending advanced imaging techniques with ML shows promise for enhancing prenatal care outcomes and aiding informed decision-making for expectant parents.
PMID:40062699 | DOI:10.4274/tjod.galenos.2025.12689
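The marker-combination idea reviewed above can be sketched with a minimal logistic-regression screener on synthetic data. The marker names, effect directions (lower PAPP-A, higher free beta-hCG, thicker nuchal translucency in affected pregnancies), and all numeric values here are illustrative assumptions, not data or results from the review.

```python
import numpy as np

# Hypothetical sketch: combine serum markers (log multiples of the median)
# with a nuchal translucency measurement in a single logistic risk model.
rng = np.random.default_rng(1)
n = 2000
y = (rng.random(n) < 0.05).astype(float)    # synthetic 5% affected rate
papp_a = rng.normal(-0.5 * y, 0.3)          # lower PAPP-A if affected
b_hcg = rng.normal(0.5 * y, 0.3)            # higher free beta-hCG if affected
nt = rng.normal(1.8 + 1.2 * y, 0.4)         # thicker NT (mm) if affected
X = np.column_stack([np.ones(n), papp_a, b_hcg, nt])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# plain gradient descent on the average logistic loss
w = np.zeros(4)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n

# summarize discrimination with AUC via the Mann-Whitney rank statistic
p = sigmoid(X @ w)
order = np.argsort(p)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n1 = y.sum()
auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (n - n1))
print(f"AUC: {auc:.3f}")
```

In practice the threshold on the predicted risk is what trades detection rate against false-positive rate, the two quantities the review uses to judge screening protocols.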
Inferring gene regulatory networks from time-series scRNA-seq data via GRANGER causal recurrent autoencoders
Brief Bioinform. 2025 Mar 4;26(2):bbaf089. doi: 10.1093/bib/bbaf089.
ABSTRACT
The development of single-cell RNA sequencing (scRNA-seq) technology provides valuable data resources for inferring gene regulatory networks (GRNs), enabling deeper insights into cellular mechanisms and diseases. While many methods exist for inferring GRNs from static scRNA-seq data, current approaches face challenges in accurately handling time-series scRNA-seq data due to high noise levels and data sparsity. The temporal dimension introduces additional complexity by requiring models to capture dynamic changes, increasing sensitivity to noise, and exacerbating data sparsity across time points. In this study, we introduce GRANGER, an unsupervised deep learning-based method that integrates multiple advanced techniques, including a recurrent variational autoencoder, Granger causality, sparsity-inducing penalties, and negative binomial (NB)-based loss functions, to infer GRNs. GRANGER was evaluated using multiple popular benchmarking datasets, where it demonstrated superior performance compared to eight well-known GRN inference methods. The integration of an NB-based loss function and sparsity-inducing penalties in GRANGER significantly enhanced its capacity to address dropout noise and sparsity in scRNA-seq data. Additionally, GRANGER exhibited robustness against high levels of dropout noise. We applied GRANGER to scRNA-seq data from the whole mouse brain obtained through the BRAIN Initiative project and identified GRNs for five transcription regulators: E2f7, Gbx1, Sox10, Prox1, and Onecut2, which play crucial roles in diverse brain cell types. The inferred GRNs not only recalled many known regulatory relationships but also revealed sets of novel regulatory interactions with functional potential. These findings demonstrate that GRANGER is a highly effective tool for real-world applications in discovering novel gene regulatory relationships.
PMID:40062616 | DOI:10.1093/bib/bbaf089
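The Granger-causality-with-sparsity idea underlying GRANGER can be reduced to a minimal linear sketch (the method itself uses a recurrent variational autoencoder and an NB loss, neither of which appears here): regress each gene's next-step expression on all genes' lagged values with an L1 penalty, and read nonzero cross-gene lag coefficients as candidate regulatory edges. Gene counts, dynamics, and coefficients below are synthetic.

```python
import numpy as np

# Sparse linear VAR(1) as a Granger-causality sketch, fit by ISTA
# (proximal gradient with soft-thresholding for the L1 penalty).
rng = np.random.default_rng(2)
G, T = 5, 400
A_true = np.zeros((G, G))
A_true[1, 0] = 0.8    # gene 0 activates gene 1
A_true[3, 2] = -0.7   # gene 2 represses gene 3
x = np.zeros((T, G))
for t in range(1, T):
    x[t] = 0.3 * x[t - 1] + x[t - 1] @ A_true.T + rng.normal(0, 0.5, G)

X, Y = x[:-1], x[1:]
L = np.linalg.norm(X.T @ X, 2)       # Lipschitz constant of the gradient
lr, lam = 1.0 / L, 5.0
A = np.zeros((G, G))
for _ in range(2000):
    grad = (X @ A.T - Y).T @ X       # gradient of 0.5 * ||X A^T - Y||^2
    A = A - lr * grad
    A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # L1 prox step

np.fill_diagonal(A, 0.0)             # self-lags are autoregression, not edges
edges = np.argwhere(np.abs(A) > 0.3)
print("candidate regulatory edges (target, source):", edges.tolist())
```

The L1 penalty plays the same role as GRANGER's sparsity-inducing penalties: it zeroes out spurious lagged dependencies so that only well-supported edges survive.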
A novel integrative multimodal classifier to enhance the diagnosis of Parkinson's disease
Brief Bioinform. 2025 Mar 4;26(2):bbaf088. doi: 10.1093/bib/bbaf088.
ABSTRACT
Parkinson's disease (PD) is a complex, progressive neurodegenerative disorder with high heterogeneity, making early diagnosis difficult. Early detection and intervention are crucial for slowing PD progression. Understanding PD's diverse pathways and mechanisms is key to advancing knowledge. Recent advances in noninvasive imaging and multi-omics technologies have provided valuable insights into PD's underlying causes and biological processes. However, integrating these diverse data sources remains challenging, especially when deriving meaningful low-level features that can serve as diagnostic indicators. This study developed and validated a novel integrative, multimodal predictive model for detecting PD based on features derived from multimodal data, including hematological information, proteomics, RNA sequencing, metabolomics, and dopamine transporter scan imaging, sourced from the Parkinson's Progression Markers Initiative. Several model architectures were investigated and evaluated, including support vector machine, eXtreme Gradient Boosting, fully connected neural networks with concatenation and joint modeling (FCNN_C and FCNN_JM), and a multimodal encoder-based model with multi-head cross-attention (MMT_CA). The MMT_CA model demonstrated superior predictive performance, achieving a balanced classification accuracy of 97.7%, thus highlighting its ability to capture and leverage cross-modality inter-dependencies to aid predictive analytics. Furthermore, feature importance analysis using SHapley Additive exPlanations (SHAP) not only identified crucial diagnostic biomarkers to inform the predictive models in this study but also holds potential for future research on integrated, multi-omics functional analyses of PD, ultimately revealing targets for precision medicine approaches aimed at slowing its progression.
PMID:40062615 | DOI:10.1093/bib/bbaf088
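The multi-head cross-attention fusion at the heart of the MMT_CA model can be sketched in a few lines of numpy. This is an illustration of the general mechanism, not the authors' architecture: tokens derived from one modality act as queries against tokens from another, so the fused representation can encode the cross-modality inter-dependencies the abstract credits for the model's performance. The modality names, token counts, and dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multihead_cross_attention(xq, xkv, n_heads, Wq, Wk, Wv, Wo):
    # queries from one modality, keys/values from another
    d = Wq.shape[1] // n_heads
    Q = (xq @ Wq).reshape(xq.shape[0], n_heads, d).transpose(1, 0, 2)
    K = (xkv @ Wk).reshape(xkv.shape[0], n_heads, d).transpose(1, 0, 2)
    V = (xkv @ Wv).reshape(xkv.shape[0], n_heads, d).transpose(1, 0, 2)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d))  # (heads, Tq, Tkv)
    out = (attn @ V).transpose(1, 0, 2).reshape(xq.shape[0], -1)
    return out @ Wo, attn

d_model, n_heads = 16, 4
prot = rng.normal(size=(6, d_model))   # e.g. 6 proteomics-derived tokens
img = rng.normal(size=(8, d_model))    # e.g. 8 imaging-derived tokens
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4))
fused, attn = multihead_cross_attention(prot, img, n_heads, Wq, Wk, Wv, Wo)
print(fused.shape, attn.shape)  # each attention row sums to 1 over imaging tokens
```

Compared with the concatenation and joint-modeling baselines (FCNN_C, FCNN_JM), this mechanism lets every token of one modality weight the tokens of the other explicitly, rather than leaving the fusion implicit in fully connected layers.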