Literature Watch
Smart Grain Storage Solution: Integrated Deep Learning Framework for Grain Storage Monitoring and Risk Alert
Foods. 2025 Mar 18;14(6):1024. doi: 10.3390/foods14061024.
ABSTRACT
In order to overcome the notable limitations of current methods for monitoring grain storage states, particularly in the early warning of potential risks and the analysis of the spatial distribution of grain temperatures within the granary, this study proposes a multi-model fusion approach based on a deep learning framework for grain storage state monitoring and risk alert. This approach combines two advanced three-dimensional deep learning models, a grain storage state classification model based on 3D DenseNet and a temperature field prediction model based on 3DCNN-LSTM. First, the grain storage state classification model based on 3D DenseNet efficiently extracts features from three-dimensional grain temperature data to achieve the accurate classification of storage states. Second, the temperature prediction model based on 3DCNN-LSTM incorporates historical grain temperature and absolute water potential data to precisely predict the dynamic changes in the granary's temperature field. Finally, the grain temperature prediction results are input into the 3D DenseNet to provide early warnings for potential condensation and mildew risks within the grain pile. Comparative experiments with multiple baseline models show that the 3D DenseNet model achieves an accuracy of 97.38% in the grain storage state classification task, significantly outperforming other models. The 3DCNN-LSTM model shows high prediction accuracy in temperature forecasting, with MAE of 0.24 °C and RMSE of 0.28 °C. Furthermore, in potential risk alert experiments, the model effectively captures the temperature trend in the grain storage environment and provides early warnings, particularly for mildew and condensation risks, demonstrating the potential of this method for grain storage safety monitoring and risk alerting. This study provides a smart grain storage solution which contributes to ensuring food safety and enhancing the efficiency of grain storage management.
PMID:40232114 | DOI:10.3390/foods14061024
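A minimal sketch of the kind of 3D-CNN + LSTM temperature-field predictor the abstract describes, not the authors' implementation; the sensor-grid size, the two input channels (temperature and absolute water potential), and the layer widths are illustrative assumptions (PyTorch):

```python
import torch
import torch.nn as nn

class Conv3dEncoder(nn.Module):
    """Encode one 3D grain-temperature grid (plus water potential) into a feature vector."""
    def __init__(self, in_channels=2, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # -> (B, 32, 1, 1, 1)
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                        # x: (B, C, D, H, W)
        return self.fc(self.net(x).flatten(1))   # -> (B, feat_dim)

class CNN3dLSTM(nn.Module):
    """Apply the 3D encoder to each time step, then an LSTM over the sequence."""
    def __init__(self, in_channels=2, feat_dim=128, hidden=64, out_cells=8 * 8 * 4):
        super().__init__()
        self.encoder = Conv3dEncoder(in_channels, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_cells)  # next-step temperature at every grid cell

    def forward(self, seq):                       # seq: (B, T, C, D, H, W)
        B, T = seq.shape[:2]
        feats = torch.stack([self.encoder(seq[:, t]) for t in range(T)], dim=1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])              # (B, out_cells)

# Example: 7 historical time steps of an 8x8x4 sensor grid with 2 channels
model = CNN3dLSTM()
x = torch.randn(5, 7, 2, 8, 8, 4)
print(model(x).shape)  # torch.Size([5, 256])
```

The predicted temperature field could then be fed to a separate classifier (the 3D DenseNet role in the abstract) to flag condensation or mildew risk.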
The Fermentation Degree Prediction Model for Tieguanyin Oolong Tea Based on Visual and Sensing Technologies
Foods. 2025 Mar 13;14(6):983. doi: 10.3390/foods14060983.
ABSTRACT
The fermentation of oolong tea is a critical process that determines its quality and flavor. Current fermentation control relies on tea makers' sensory experience, which is labor-intensive and time-consuming. In this study, using Tieguanyin oolong tea as the research object, features including the tea water loss rate, aroma, image color, and texture were obtained using weight sensors, a tin oxide-type gas sensor, and a visual acquisition system. Support vector regression (SVR), random forest (RF) machine learning, and long short-term memory (LSTM) deep learning algorithms were employed to establish models for assessing the fermentation degree based on both single features and fused multi-source features, respectively. The results showed that in the test set of the fermentation degree models based on single features, the mean absolute error (MAE) ranged from 4.537 to 6.732, the root mean square error (RMSE) ranged from 5.980 to 9.416, and the coefficient of determination (R2) values varied between 0.898 and 0.959. In contrast, the data fusion models demonstrated superior performance, with the MAE reduced to 2.232-2.783, the RMSE reduced to 2.693-3.969, and R2 increased to 0.982-0.991, confirming that feature fusion enhanced characterization accuracy. Finally, the Sparrow Search Algorithm (SSA) was applied to optimize the data fusion models. After optimization, the models exhibited a MAE ranging from 1.703 to 2.078, a RMSE from 2.258 to 3.230, and R2 values between 0.988 and 0.994 on the test set. The application of the SSA further enhanced model accuracy, with the Fusion-SSA-LSTM model demonstrating the best performance. The research results enable online real-time monitoring of the fermentation degree of Tieguanyin oolong tea, which contributes to the automated production of Tieguanyin oolong tea.
PMID:40231982 | DOI:10.3390/foods14060983
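An illustration of the single-feature versus fused-feature comparison described above, using scikit-learn SVR on simulated sensor features; the feature dimensions, SVR settings, and data are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 200
water_loss = rng.normal(size=(n, 1))       # weight-sensor feature
aroma      = rng.normal(size=(n, 4))       # gas-sensor features
color_tex  = rng.normal(size=(n, 12))      # image colour/texture features
y = rng.uniform(0, 100, size=n)            # fermentation degree (%), placeholder target

def evaluate(X, y, label):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(C=10.0, gamma="scale"))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{label:>6}: MAE={mean_absolute_error(y_te, pred):.3f} "
          f"RMSE={rmse:.3f} R2={r2_score(y_te, pred):.3f}")

evaluate(aroma, y, "aroma")                                      # single-source model
evaluate(np.hstack([water_loss, aroma, color_tex]), y, "fused")  # data-fusion model
```

The SSA step reported in the abstract would sit on top of this, searching the model hyperparameters instead of using the fixed values above.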
The multifaceted role of auxin in root growth and branching: Insights from non-seed vascular plants
Physiol Plant. 2025 Mar-Apr;177(2):e70210. doi: 10.1111/ppl.70210.
ABSTRACT
Plant root systems play a crucial role in taking up water and nutrients, as well as in facilitating symbiotic partnerships with microorganisms like rhizobia and mycorrhizae that enhance nutrient fixation and assimilation. Extensive research in seed plants has demonstrated the dominant role of the phytohormone auxin during root development in this group of vascular plants. Non-seed vascular plants (lycophytes, horsetails and ferns) occupy a key phylogenetic position as the sister group to seed plants, making them essential for understanding the evolution of roots. These lineages exhibit distinct root development and branching patterns, in which the hormone auxin might play a pivotal role. However, the molecular basis underlying its function during root development in these plant groups remains poorly understood. In this review, we summarize the current progress in our understanding of auxin-mediated root initiation, patterning, and branching in vascular non-seed plants while highlighting outstanding key questions. Despite limited research, the available evidence suggests that both conserved and lineage-specific auxin-dependent genetic circuits regulate root development in these species. While such networks remain relatively limited in lycophytes and ferns, seed plants have evolved extensive, environmentally sensitive regulatory networks that adapt their branching strategies to perceived external cues. These networks likely emerged through the duplication and neofunctionalization of gene families involved in auxin transport and signalling, as well as their downstream factors, such as LBD and PLT genes.
PMID:40231754 | DOI:10.1111/ppl.70210
Cilostazol versus clopidogrel in acute large-vessel moderate and moderate-to-severe ischemic stroke: a randomized controlled trial
Neurol Sci. 2025 Apr 15. doi: 10.1007/s10072-025-08107-9. Online ahead of print.
ABSTRACT
BACKGROUND: More than one-third of all ischemic strokes are induced by large vessel occlusion (LVO). All the wide-scale trials that assessed the impacts of cilostazol versus clopidogrel in stroke management have been conducted in Asia and involved patients with minor stroke or TIA. Our trial is the first-ever study to evaluate cilostazol versus clopidogrel in acute LVO with moderate to severe ischemic stroke in North Africa.
OBJECTIVES: We assessed the efficacy and safety of cilostazol versus clopidogrel in first-ever LVO moderate and moderate to severe ischemic stroke patients.
METHODS: 580 moderate and moderate-to-severe LVO ischemic stroke participants were randomly enrolled to receive loading and maintenance doses of cilostazol or clopidogrel.
RESULTS: 580 patients were included in the intention-to-treat analysis. 29 (10.0%) participants in the cilostazol arm and 43 (14.8%) participants in the clopidogrel arm experienced a new stroke (HR 0.37; 95% CI, 0.29-0.73; P-value = 0.03). Eight participants (2.8%) in the cilostazol arm and 17 patients (5.9%) in the clopidogrel arm had drug-related hemorrhagic complications (HR 0.29; 95% CI, 0.18-0.63; P-value = 0.008).
CONCLUSION: Patients who experienced acute LVO moderate and moderate-to-severe ischemic stroke and received loading and maintenance doses of cilostazol within the first 24 h after stroke onset had better clinical outcomes based on recurrent stroke rates and better safety outcomes regarding hemorrhagic transformation of brain infarction and drug-induced peripheral hemorrhagic side effects compared to those who received loading and maintenance doses of clopidogrel. There were no significant differences between the two groups regarding death due to vascular events and unfavorable mRS after three months of stroke onset.
REGISTRATION: Retrospectively registered on ClinicalTrials.gov, NCT06242145, 27-01-2024.
PMID:40232632 | DOI:10.1007/s10072-025-08107-9
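For readers unfamiliar with how hazard ratios such as those reported above are obtained, the sketch below fits a Cox proportional-hazards model with lifelines on synthetic two-arm time-to-event data; it is not the trial dataset or the trial's analysis code, and all values are placeholders:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 580
treatment = rng.integers(0, 2, size=n)                    # 1 = cilostazol, 0 = clopidogrel
time_to_event = rng.exponential(scale=90, size=n)          # days to recurrent stroke (synthetic)
event = rng.binomial(1, np.where(treatment == 1, 0.10, 0.148))  # observed recurrence (synthetic)

df = pd.DataFrame({"cilostazol": treatment,
                   "duration": np.clip(time_to_event, 1, 90),
                   "event": event})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # the exp(coef) column is the hazard ratio with its 95% CI
```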
A disproportionality analysis of interstitial lung disease associated with drug therapy in spontaneous adverse event reports
Expert Opin Drug Saf. 2025 Apr 15. doi: 10.1080/14740338.2025.2494689. Online ahead of print.
ABSTRACT
BACKGROUND: Interstitial lung disease (ILD) is a group of disorders characterized by inflammation and fibrosis of lung tissue that make it hard to carry oxygen. Our study aimed to comprehensively evaluate the risk of drug-induced ILD using data from the FDA Adverse Event Reporting System (FAERS) database.
RESEARCH DESIGN AND METHODS: We queried the ILD reports from 2004 to 2023. The reporting odds ratio (ROR) and Bayesian Confidence Propagation Neural Network (BCPNN) were calculated to detect disproportionality signals for drugs associated with ILD.
RESULTS: A total of 39,332 ILD-related reports were identified. The most frequently reported drugs were Methotrexate (N = 1245), followed by Pembrolizumab (N = 1026), Amiodarone (N = 975), Rituximab (N = 915), and Doxorubicin (N = 911). Disproportionality analysis revealed significant signals for the top 50 drugs, including Trastuzumab deruxtecan (ROR 56.25, 95% CI 51.27-61.72; IC025 5.49), Ramucirumab (ROR 27.80, 95% CI 24.16-31.99; IC025 4.50), Amiodarone (ROR 24.35, 95% CI 22.82-25.99; IC025 4.40), Gefitinib (ROR 23.02, 95% CI 20.66-25.66; IC025 4.29), and Doxorubicin (ROR 13.99, 95% CI 13.09-14.95; IC025 3.64).
CONCLUSIONS: Drug-induced ILD represents a significant challenge in clinical practice. Our findings underscore the importance of maintaining a high index of suspicion for drug-induced ILD, particularly when prescribing medications identified as having significant associations with ILD.
PMID:40232264 | DOI:10.1080/14740338.2025.2494689
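A worked example of the reporting odds ratio (ROR) and its 95% confidence interval from a 2x2 contingency table, the core disproportionality measure used above; the counts are illustrative placeholders, not FAERS figures:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """a: target drug & ILD reports, b: target drug & other AEs,
    c: other drugs & ILD reports, d: other drugs & other AEs."""
    ror = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se_log)
    upper = math.exp(math.log(ror) + 1.96 * se_log)
    return ror, lower, upper

# Placeholder counts for one drug against the rest of the database
ror, lower, upper = reporting_odds_ratio(a=120, b=4_500, c=39_000, d=18_000_000)
print(f"ROR = {ror:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

A signal is conventionally flagged when the lower CI bound exceeds 1 (or, for the BCPNN used alongside it, when IC025 > 0).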
Corrigendum: A pharmacogenetically-guided acenocoumarol dosing algorithm for Chilean patients: a discovery cohort study
Front Pharmacol. 2025 Mar 31;16:1588440. doi: 10.3389/fphar.2025.1588440. eCollection 2025.
ABSTRACT
[This corrects the article DOI: 10.3389/fphar.2020.00325.].
PMID:40230696 | PMC:PMC11994894 | DOI:10.3389/fphar.2025.1588440
ERS Congress 2024: highlights from the Respiratory Infections Assembly
ERJ Open Res. 2025 Apr 14;11(2):01262-2024. doi: 10.1183/23120541.01262-2024. eCollection 2025 Mar.
ABSTRACT
This highlights article shares key updates in the field of respiratory infections from the 2024 #ERSCongress, focusing on new research and the need for fair access to care to help tackle global challenges in respiratory infections and improve patient care https://bit.ly/40gDmrj.
PMID:40230434 | PMC:PMC11995276 | DOI:10.1183/23120541.01262-2024
Unlocking chickpea flour potential: AI-powered prediction for quality assessment and compositional characterisation
Curr Res Food Sci. 2025 Mar 21;10:101030. doi: 10.1016/j.crfs.2025.101030. eCollection 2025.
ABSTRACT
The growing demand for sustainable, nutritious, and environmentally friendly food sources has placed chickpea flour as a vital component in the global shift to plant-based diets. However, the inherent variability in the composition of chickpea flour, influenced by genetic diversity, environmental conditions, and processing techniques, poses significant challenges to standardisation and quality control. This study explores the integration of deep learning models with near-infrared (NIR) spectroscopy to improve the accuracy and efficiency of chickpea flour quality assessment. Using a dataset comprising 136 chickpea varieties, the research compares the performance of several state-of-the-art deep learning models, including Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Graph Convolutional Networks (GCNs), and compares the most effective model, CNN, against the traditional Partial Least Squares Regression (PLSR) method. The results demonstrate that CNN-based models outperform PLSR, providing more accurate predictions for key quality attributes such as protein content, starch, soluble sugars, insoluble fibres, total lipids, and moisture levels. The study highlights the potential of AI-enhanced NIR spectroscopy to revolutionise quality assessment in the food industry by offering a non-destructive, rapid, and reliable method for analysing chickpea flour. Despite the challenges posed by the limited dataset, deep learning models exhibit capabilities that suggest that further advancements would allow their industrial applicability. This research paves the way for broader applications of AI-driven quality control in food production, contributing to the development of more consistent and high-quality plant-based food products.
PMID:40231315 | PMC:PMC11995126 | DOI:10.1016/j.crfs.2025.101030
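A minimal sketch of the PLSR baseline that the deep learning models are compared against, applied to simulated NIR spectra with scikit-learn; the number of wavelengths and the reference protein values are placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 136, 700                 # e.g. 136 varieties, 700 NIR wavelengths
X = rng.normal(size=(n_samples, n_wavelengths))      # absorbance spectra (placeholder)
protein = 18 + 2 * X[:, 100] + rng.normal(scale=0.5, size=n_samples)  # % protein (placeholder)

pls = PLSRegression(n_components=10)
pred = cross_val_predict(pls, X, protein, cv=5).ravel()
print(f"PLSR protein: R2={r2_score(protein, pred):.3f}, "
      f"MAE={mean_absolute_error(protein, pred):.3f}")
# A 1D-CNN over the same spectra would replace `pls` here; the abstract reports
# that such CNN models outperformed this PLSR baseline.
```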
Deep Learning-Based Heterogeneity Correction of the Homogeneous Dose Distribution for Single Brain Tumors in Gamma Knife Radiosurgery
Adv Radiat Oncol. 2025 Mar 8;10(5):101757. doi: 10.1016/j.adro.2025.101757. eCollection 2025 May.
ABSTRACT
PURPOSE: Heterogeneity correction is vital in radiation therapy treatment planning to ensure accurate dose delivery. Brain cancer stereotactic treatments, like Gamma Knife radiosurgery (GKRS), often rely on homogeneous water-based calculations despite the potential heterogeneity impact near bony structures. This study aims to develop a method for generating synthetic dose plans incorporating heterogeneity effects without additional computed tomography (CT) scans.
METHODS AND MATERIALS: Magnetic resonance imaging and CT images, together with TMR10-based and convolution-based dose distributions, were used from 100 retrospectively collected and 22 prospectively collected GKRS patients. A conditional generative adversarial network was trained to translate TMR10 doses into synthetic convolution (sConv) doses.
RESULTS: The generated sConv dose demonstrated qualitative and quantitative similarity to the actual convolution (Conv) dose, showing better agreement of dose distributions and improved isodose volume similarity with the Conv dose in comparison to the TMR10 dose (γ pass rate: sConv dose 92.43% vs TMR10 dose 74.18%; prescription isodose Dice: sConv dose 91.7% vs TMR10 dose 89.7%). Skull-induced scatter and attenuation effects were accurately reflected in the sConv dose, indicating the usefulness of the new dose prediction model as an alternative to the time-consuming convolution dose calculations.
CONCLUSIONS: Our deep learning approach offers a feasible solution for heterogeneity-corrected dose planning in GKRS, circumventing additional CT scans and lengthy calculation times. This method's effectiveness in preserving dose distribution characteristics in a heterogeneous medium while only requiring a homogeneous dose plan highlights its utility for including the process in the routine treatment planning workflows. Further refinement and validation with diverse patient cohorts can enhance its applicability and impact in clinical settings.
PMID:40231287 | PMC:PMC11994306 | DOI:10.1016/j.adro.2025.101757
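An illustrative computation of the prescription isodose Dice agreement reported above: binarize two 3D dose grids at the prescription dose and compare the resulting isodose volumes. The dose arrays are random placeholders, not clinical data:

```python
import numpy as np

def isodose_dice(dose_a, dose_b, prescription):
    """Dice overlap of the volumes receiving at least the prescription dose."""
    a = dose_a >= prescription
    b = dose_b >= prescription
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

rng = np.random.default_rng(0)
conv_dose = rng.uniform(0, 24, size=(64, 64, 64))                     # reference convolution dose (Gy)
sconv_dose = conv_dose + rng.normal(scale=0.5, size=conv_dose.shape)  # synthetic predicted dose

print(f"Prescription isodose Dice: {isodose_dice(sconv_dose, conv_dose, prescription=12.0):.3f}")
```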
Deep Neural Networks Based on Sp7 Protein Sequence Prediction in Peri-Implant Bone Formation
Int J Dent. 2025 Apr 7;2025:7583275. doi: 10.1155/ijod/7583275. eCollection 2025.
ABSTRACT
Objective: Peri-implant bone regeneration is crucial for dental implant success, particularly in managing peri-implantitis, which causes inflammation and bone loss. SP7 (Osterix) is vital for osteoblast differentiation and bone matrix formation. Advances in deep neural networks (DNNs) offer new ways to analyze protein sequences, potentially improving our understanding of SP7's role in bone formation. This study aims to develop and utilize DNNs to predict the SP7 protein sequence and understand its role in peri-implant bone formation. Materials and Methods: Sequences were retrieved from UniProt IDs Q8TDD2 and Q9V3Z2 using the UniProt dataset. The sequences were Sp7 FASTA sequences. These sequences were located, and their quality was assessed. We built an architecture that can handle a wide range of input sequences using a DNN technique, with computing needs based on the length of the input sequences. Results: Protein sequences were analyzed using a DNN architecture with the Adam optimizer over 50 epochs, achieving a sensitivity of 0.89 and a specificity of 0.82. The receiver operating characteristic (ROC) curve demonstrated high true-positive rates and low false-positive rates, indicating robust model performance. Precision-recall analysis underscored the model's effectiveness in handling imbalanced data, with significant area under the curve (AUC-PR). Epoch plots highlighted consistent model accuracy throughout training, confirming its reliability for protein sequence analysis. Conclusion: The DNN employed with the Adam optimizer demonstrated robust performance in analyzing protein sequences, achieving an accuracy of 0.85 and high sensitivity and specificity. The ROC curve highlighted the model's effectiveness in distinguishing true positives from false positives, which is essential for reliable protein classification. These findings suggest that the developed model is promising for enhancing predictive capabilities in computational biology and biomedical research, particularly in protein function prediction and therapeutic development applications.
PMID:40231202 | PMC:PMC11996267 | DOI:10.1155/ijod/7583275
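A sketch of a dense network over one-hot-encoded protein sequences, trained with the Adam optimizer and evaluated with ROC and precision-recall AUC, in the spirit of the model described above; the sequence length, layer sizes, and labels are assumptions, not the published architecture:

```python
import numpy as np
import tensorflow as tf

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq, max_len=450):
    """Pad/truncate to max_len and one-hot encode the 20 standard amino acids."""
    x = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for i, aa in enumerate(seq[:max_len]):
        if aa in AA_INDEX:
            x[i, AA_INDEX[aa]] = 1.0
    return x.ravel()

# Placeholder data: random sequences labelled 1 (Sp7-like) or 0 (other)
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(AMINO_ACIDS), size=400)) for _ in range(200)]
X = np.stack([one_hot(s) for s in seqs])
y = rng.integers(0, 2, size=len(seqs)).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="ROC"), tf.keras.metrics.AUC(curve="PR")])
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)
```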
Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces
Adv Neural Inf Process Syst. 2024;37:133975-133998.
ABSTRACT
People with brain or spinal cord-related paralysis often need to rely on others for basic tasks, limiting their independence. A potential solution is brain-machine interfaces (BMIs), which could allow them to voluntarily control external devices (e.g., robotic arm) by decoding brain activity to movement commands. In the past decade, deep-learning decoders have achieved state-of-the-art results in most BMI applications, ranging from speech production to finger control. However, the 'black-box' nature of deep-learning decoders could lead to unexpected behaviors, resulting in major safety concerns in real-world physical control scenarios. In these applications, explainable but lower-performing decoders, such as the Kalman filter (KF), remain the norm. In this study, we designed a BMI decoder based on KalmanNet, an extension of the KF that augments its operation with recurrent neural networks to compute the Kalman gain. This results in a varying "trust" that shifts between inputs and dynamics. We used this algorithm to predict finger movements from the brain activity of two monkeys. We compared KalmanNet results offline (pre-recorded data, n = 13 days) and online (real-time predictions, n = 5 days) with a simple KF and two recent deep-learning algorithms: tcFNN (non-ReFIT version) and LSTM. KalmanNet achieved comparable or better results than other deep learning models in offline and online modes, relying on the dynamical model for stopping while depending more on neural inputs for initiating movements. We further validated this mechanism by implementing a heteroscedastic KF that used the same strategy, and it also approached state-of-the-art performance while remaining in the explainable domain of standard KFs. However, we also see two downsides to KalmanNet. KalmanNet shares the limited generalization ability of existing deep-learning decoders, and its usage of the KF as an inductive bias limits its performance in the presence of unseen noise distributions. Despite this trade-off, our analysis successfully integrates traditional controls and modern deep-learning approaches to motivate high-performing yet still explainable BMI designs.
PMID:40231170 | PMC:PMC11996206
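A minimal linear Kalman filter velocity decoder of the kind this work builds on, with the classic predict/update recursion whose gain computation KalmanNet replaces by a recurrent network; all data and fitted matrices below are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_state = 500, 40, 2                             # state = 2D finger velocity
X = np.cumsum(rng.normal(size=(T, n_state)), axis=0) * 0.01    # "true" kinematics (synthetic)
C_true = rng.normal(size=(n_neurons, n_state))
Y = X @ C_true.T + rng.normal(scale=0.5, size=(T, n_neurons))  # firing rates (synthetic)

# Fit model parameters by least squares: A (dynamics), C (observation)
A = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
C = np.linalg.lstsq(X, Y, rcond=None)[0].T
W = np.cov((X[1:] - X[:-1] @ A.T).T)    # process-noise covariance
Q = np.cov((Y - X @ C.T).T)             # observation-noise covariance

x_hat, P = np.zeros(n_state), np.eye(n_state)
decoded = []
for y in Y:
    # Predict: propagate state and uncertainty through the dynamics model
    x_hat, P = A @ x_hat, A @ P @ A.T + W
    # Update: the Kalman gain trades off dynamics against the neural observation
    # (KalmanNet computes this gain with an RNN instead of the closed form below)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(n_state) - K @ C) @ P
    decoded.append(x_hat.copy())

decoded = np.array(decoded)
print("correlation with true velocity:", np.corrcoef(decoded[:, 0], X[:, 0])[0, 1])
```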
FaciaVox: A diverse multimodal biometric dataset of facial images and voice recordings
Data Brief. 2025 Mar 21;60:111489. doi: 10.1016/j.dib.2025.111489. eCollection 2025 Jun.
ABSTRACT
FaciaVox is a multimodal biometric dataset that consists of face images and voice recordings under both masked and unmasked conditions. The term "FaciaVox" is strategically chosen to create a distinct and easily memorable name. This name selection serves to highlight the dataset's multimodal characteristics, as well as its relevance to biometric recognition tasks. The FaciaVox dataset consists of contributions from 100 participants from 20 different countries, each providing 18 facial images and 60 audio recordings. The facial images are stored in JPG format, while the audio recordings are saved as WAV files, ensuring compatibility with standard processing tools. Participants are categorized by age into four distinct groups: Group 1 includes individuals below 16 years of age; Group 2 corresponds to those aged 16 up to less than 31; Group 3 encompasses participants aged 31 up to less than 46; and Group 4 represents individuals aged 46 and above. The data collection was conducted in two distinct environments: a professional soundproof studio and a conventional classroom. While the studio provided a controlled setting, the classroom introduced variables such as echo and sound reflections. Some participants were recorded in the studio, while others were recorded in the classroom, as detailed in the file named 'FaciaVox list' which specifies where each participant was recorded. Participants were positioned at 70-100 cm from the iPhone's rear camera, utilizing three specific zoom levels (1x, 3x, and 5x) to obtain a collection of facial photos. Each participant submitted a total of 18 facial photos, comprising six different images captured at each magnification level. The six different images encompassed a sequence of conditions: the initial set was captured without the use of a face mask, followed by subsequent images where participants donned a disposable mask, transitioned to a reusable mask, then advanced to a dual-layer cloth mask. Subsequently, a silicon face shield was introduced along with the cloth mask, concluding in final images where the silicon shield was worn independently. Each participant was instructed to speak ten sentences, switching between English and Arabic, under the six previously mentioned conditions. The speech was recorded using the Zoom H6 Handy Recorder. The FaciaVox dataset provides an extensive range of study options in the fields of face images and audio signals with and without face mask. This broad dataset serves as a foundational resource for investigating a wide range of cutting-edge applications, including but not limited to multimodal biometrics, cross-domain biometric fusion, age and gender estimation, human-machine interaction, deep learning, speech intelligence, voice cloning, image inpainting, and security and surveillance.
PMID:40231156 | PMC:PMC11994902 | DOI:10.1016/j.dib.2025.111489
A weakly supervised deep learning framework for automated PD-L1 expression analysis in lung cancer
Front Immunol. 2025 Mar 31;16:1540087. doi: 10.3389/fimmu.2025.1540087. eCollection 2025.
ABSTRACT
The growing application of immune checkpoint inhibitors (ICIs) in cancer immunotherapy has underscored the critical need for reliable methods to identify patient populations likely to respond to ICI treatments, particularly in lung cancer. Currently, the tumor proportion score (TPS), a crucial biomarker for patient selection, relies on manual interpretation by pathologists, which often shows substantial variability and inconsistency. To address these challenges, we developed multi-instance learning for TPS (MiLT), an artificial intelligence (AI)-powered tool that predicts TPS from whole slide images. Our approach leverages multiple instance learning (MIL), which significantly reduces the need for labor-intensive cell-level annotations while maintaining high accuracy. In comprehensive validation studies, MiLT demonstrated remarkable consistency with pathologist assessments (intraclass correlation coefficient = 0.960, 95% confidence interval = 0.950-0.971) and robust performance across both internal and external cohorts. This tool not only standardizes TPS evaluation but also adapts to various clinical standards and provides time-efficient predictions, potentially transforming routine pathological practice. By offering a reliable, AI-assisted solution, MiLT could significantly improve patient selection for immunotherapy and reduce inter-observer variability among pathologists. These promising results warrant further exploration in prospective clinical trials and suggest new possibilities for integrating advanced AI in pathological diagnostics. MiLT represents a significant step toward more precise and efficient cancer immunotherapy decision-making.
PMID:40230846 | PMC:PMC11994606 | DOI:10.3389/fimmu.2025.1540087
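A sketch of attention-based multiple instance learning (MIL) pooling over tile embeddings, the general mechanism behind slide-level prediction from weak labels; the feature dimension and the regression head are assumptions, not MiLT's published architecture:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Pool a bag of tile features into one slide-level prediction via learned attention."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.head = nn.Linear(feat_dim, 1)   # slide-level TPS regression head

    def forward(self, bag):                  # bag: (n_tiles, feat_dim) for one slide
        weights = torch.softmax(self.attention(bag), dim=0)   # attention over tiles
        slide_feat = (weights * bag).sum(dim=0)               # weighted average of tiles
        return torch.sigmoid(self.head(slide_feat)) * 100     # predicted TPS in [0, 100]

model = AttentionMIL()
tiles = torch.randn(2000, 512)   # e.g. 2000 tile embeddings from one whole slide image
print(model(tiles))              # predicted slide-level TPS
```

Only the slide-level TPS label is needed to train such a model, which is what lets MIL avoid cell-level annotations.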
Fast TILs: a pipeline for efficient TILs estimation in non-small cell lung cancer
J Pathol Inform. 2025 Mar 12;17:100437. doi: 10.1016/j.jpi.2025.100437. eCollection 2025 Apr.
ABSTRACT
The prognostic relevance of tumor-infiltrating lymphocytes (TILs) in non-small cell lung cancer (NSCLC) is well-established. However, manual TIL quantification in hematoxylin and eosin (H&E) whole slide images (WSIs) is laborious and prone to variability. To address this, we aim to develop and validate an automated computational pipeline for the quantification of TILs in WSIs of NSCLC. Such a solution in computational pathology can accelerate TIL evaluation, thereby standardizing the prognostication process and facilitating personalized treatment strategies. We develop an end-to-end automated pipeline for TIL estimation in lung cancer WSIs by integrating a patch extraction approach based on hematoxylin component filtering with a machine learning-based patch classification and cell quantification method using the HoVer-Net model architecture. Additionally, we employ randomized patch sampling to further reduce the processed patch amount. We evaluate the effectiveness of the patch sampling procedure, the pipeline's ability to identify informative patches and computational efficiency, and the clinical value of produced scores using patient survival data. Our pipeline demonstrates the ability to selectively process informative patches, achieving a balance between computational efficiency and prognostic integrity. The pipeline filtering excludes approximately 70% of all patch candidates. Further, only 5% of eligible patches are necessary to retain the pipeline's prognostic accuracy (c-index = 0.65), resulting in a linear reduction of the total computational time compared to the filtered patch subset analysis. The pipeline's TILs score has a strong association with patient survival and outperforms traditional CD8 immunohistochemical scoring (c-index = 0.59). Kaplan-Meier analysis further substantiates the TILs score's prognostic value. This study introduces an automated pipeline for TIL evaluation in lung cancer WSIs, providing a prognostic tool with potential to improve personalized treatment in NSCLC. The pipeline's computational advances, particularly in reducing processing time, and clinical relevance demonstrate a step forward in computational pathology.
PMID:40230809 | PMC:PMC11994347 | DOI:10.1016/j.jpi.2025.100437
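A hedged sketch of the hematoxylin-based patch filtering and randomized patch sampling steps described above, using scikit-image's HED stain decomposition; the median-based threshold rule and the placeholder image are assumptions, not the pipeline's exact criteria:

```python
import numpy as np
from skimage.color import rgb2hed

rng = np.random.default_rng(0)
wsi_region = rng.random((2048, 2048, 3))      # placeholder RGB region of a WSI
patch_size, keep_fraction = 256, 0.05         # 5% sampling, as in the abstract

coords, h_means = [], []
for r in range(0, wsi_region.shape[0], patch_size):
    for c in range(0, wsi_region.shape[1], patch_size):
        patch = wsi_region[r:r + patch_size, c:c + patch_size]
        h_means.append(rgb2hed(patch)[..., 0].mean())   # hematoxylin channel intensity
        coords.append((r, c))

# Keep nucleus-rich (informative) patches: here, those above the median hematoxylin signal
threshold = np.median(h_means)
informative = [xy for xy, m in zip(coords, h_means) if m >= threshold]

# Randomized sub-sampling before HoVer-Net-style cell classification and counting
n_keep = max(1, int(len(informative) * keep_fraction))
sampled_idx = rng.choice(len(informative), size=n_keep, replace=False)
sampled = [informative[i] for i in sampled_idx]
print(f"{len(coords)} candidates, {len(informative)} informative, {len(sampled)} sampled")
```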
Role of Artificial Intelligence in Congenital Heart Disease and Interventions
J Soc Cardiovasc Angiogr Interv. 2025 Mar 18;4(3Part B):102567. doi: 10.1016/j.jscai.2025.102567. eCollection 2025 Mar.
ABSTRACT
Artificial intelligence has a promising impact on patients with congenital heart disease, a vulnerable population with life-long health care needs and, often, a substantially higher risk of death than the general population. This review explores the role artificial intelligence has had in cardiac imaging, electrophysiology, interventional procedures, and intensive care monitoring as it relates to children and adults with congenital heart disease. Machine learning and deep learning algorithms have enhanced not only imaging segmentation and processing but also diagnostic accuracy, notably by reducing interobserver variability. This has a meaningful impact in complex congenital heart disease, improving anatomic diagnosis, assessment of cardiac function, and prediction of long-term outcomes. Image processing has benefited procedural planning for interventional cardiology, allowing for a higher quality and density of information to be extracted from the same imaging modalities. In electrophysiology, deep learning models have enhanced the diagnostic potential of electrocardiograms, detecting subtle yet meaningful variation in signals that enable early diagnosis of cardiac dysfunction, risk stratification of mortality, and more accurate diagnosis and prediction of arrhythmias. In the congenital heart disease population, this has the potential for meaningful prolongation of life. Postoperative care in the cardiac intensive care unit is a data-rich environment that is often overwhelming. Detecting subtle data trends in this environment to identify morbidity early is a ripe avenue for artificial intelligence algorithms. Examples like early detection of catheter-induced thrombosis have already been published. Despite their great promise, artificial intelligence algorithms are still limited by hurdles such as data standardization, algorithm validation, drift, and explainability.
PMID:40230672 | PMC:PMC11993855 | DOI:10.1016/j.jscai.2025.102567
Artificial Intelligence in Cardiovascular Imaging and Interventional Cardiology: Emerging Trends and Clinical Implications
J Soc Cardiovasc Angiogr Interv. 2025 Mar 18;4(3Part B):102558. doi: 10.1016/j.jscai.2024.102558. eCollection 2025 Mar.
ABSTRACT
Artificial intelligence (AI) has revolutionized the field of cardiovascular imaging, serving as a unifying force that brings together multiple modalities under a single platform. The utility of noninvasive imaging ranges from diagnostic assessment and guiding interventions to prognostic stratification. Multimodality imaging has demonstrated important potential, particularly in patients with heterogeneous diseases, such as heart failure and atrial fibrillation. Facilitating complex interventional procedures requires accurate image acquisition and interpretation along with precise decision-making. The unique nature of interventional cardiology procedures benefiting from different imaging modalities presents an ideal target for the development of AI-assisted decision-making tools to improve workflow in the catheterization laboratory and personalize the need for transcatheter interventions. This review explores the advancements of AI in noninvasive cardiovascular imaging and interventional cardiology, addressing the clinical use and challenges of current imaging modalities, emerging trends, and promising applications as well as considerations for safe implementation of AI tools in clinical practice. Current practice has moved well beyond the question of whether we should or should not use AI in clinical health care settings. AI, in all its forms, has become deeply embedded in clinical workflows, particularly in cardiovascular imaging and interventional cardiology. It can, in the future, not only add precision and quantification but also serve as a means by which to fuse and link multiple modalities together. It is only by understanding how AI techniques work that the field can be harnessed for the greater good and avoid uninformed bias or misleading diagnoses.
PMID:40230671 | PMC:PMC11993891 | DOI:10.1016/j.jscai.2024.102558
Robust soybean seed yield estimation using high-throughput ground robot videos
Front Plant Sci. 2025 Mar 31;16:1554193. doi: 10.3389/fpls.2025.1554193. eCollection 2025.
ABSTRACT
We present a novel method for soybean [Glycine max (L.) Merr.] yield estimation leveraging high-throughput seed counting via computer vision and deep learning techniques. Traditional methods for collecting yield data are labor-intensive, costly, and prone to equipment failures at critical data collection times and require transportation of equipment across field sites. Computer vision, the field of teaching computers to interpret visual data, allows us to extract detailed yield information directly from images. By treating it as a computer vision task, we report a more efficient alternative, employing a ground robot equipped with fisheye cameras to capture comprehensive videos of soybean plots from which images are extracted in a variety of development programs. These images are processed through the P2PNet-Yield model, a deep learning framework, where we combined a feature extraction module (the backbone of the P2PNet-Soy) and a yield regression module to estimate seed yields of soybean plots. Our results are built on 2 years of yield testing plot data: 8,500 plots in 2021 and 650 plots in 2023. With these datasets, our approach incorporates several innovations to further improve the accuracy and generalizability of the seed counting and yield estimation architecture, such as the fisheye image correction and data augmentation with random sensor effects. The P2PNet-Yield model achieved a genotype ranking accuracy score of up to 83%. It demonstrates up to a 32% reduction in time to collect yield data as well as costs associated with traditional yield estimation, offering a scalable solution for breeding programs and agricultural productivity enhancement.
PMID:40230608 | PMC:PMC11994694 | DOI:10.3389/fpls.2025.1554193
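A sketch of the fisheye image correction step mentioned above using OpenCV's fisheye camera model; the intrinsic matrix and distortion coefficients are placeholders that would normally come from a camera calibration:

```python
import cv2
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)          # one frame extracted from the robot video
h, w = frame.shape[:2]

K = np.array([[600.0, 0.0, w / 2],                          # placeholder camera intrinsics
              [0.0, 600.0, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])               # placeholder fisheye distortion coefficients

# Build the undistortion maps once, then remap every extracted frame
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
# The rectified frames would then be passed to the P2PNet-Yield counting model.
```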
Modified 1-min sit-to-stand test for evaluating exercise capacity in pulmonary fibrosis
ERJ Open Res. 2025 Apr 14;11(2):00745-2024. doi: 10.1183/23120541.00745-2024. eCollection 2025 Mar.
ABSTRACT
QUESTION: The reference test for the functional evaluation of pulmonary fibrosis (PF) during exercise is the 6-min walk test (6MWT). However, the 6MWT involves temporal and spatial constraints that the 1-min sit-to-stand test (1-MSTST) does not have. Previous studies have not validated 1-MSTST use in this context, mainly because it elicits far less oxygen desaturation. We hypothesise that the modified 1-MSTST (m1-MSTST), taking into account the recovery phase, could compensate for this shortcoming.
PATIENTS AND METHODS: This was a randomised, crossover, single-centre trial conducted in 36 patients with PF. A 6MWT and 1-MSTST were performed 30 min apart for each patient in a randomised order. An equivalence test was performed on the peripheral oxygen saturation (SpO2) nadir.
RESULTS: The 36 patients comprised eight with idiopathic PF, five with nonspecific idiopathic pneumonia, eight with collagen tissue disease-associated PF, four with hypersensitivity pneumonitis, two with sarcoidosis and nine with other PF. Mean±sd nadir desaturation was 84.9±4.3% for the 6MWT and 88±3.5% for the m1-MSTST, with a strong correlation between both tests. 33 patients (91.7%) had concordant results in the two tests regarding significant desaturation (SpO2 delta >4% or nadir <88%), which is a known prognostic factor.
CONCLUSION: The m1-MSTST, taking into account the recovery phase, is a sensible alternative to the 6MWT in measuring exercise performance in people with PF. As many clinical endpoints transfer from hospital to outpatient care, the m1-MSTST is technically easier and more practical for patients. Further studies are warranted to determine the minimal clinically important difference and norms in healthy subjects.
PMID:40230431 | PMC:PMC11995277 | DOI:10.1183/23120541.00745-2024
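A sketch of a paired equivalence test (two one-sided t-tests, TOST) on the SpO2 nadir, the kind of analysis the design above calls for; the equivalence margin and the simulated data are illustrative, not the trial's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 36
nadir_6mwt = rng.normal(84.9, 4.3, size=n)                    # simulated SpO2 nadir, 6MWT (%)
nadir_m1mstst = nadir_6mwt + rng.normal(3.0, 2.0, size=n)      # simulated m1-MSTST nadir (%)

diff = nadir_m1mstst - nadir_6mwt
margin = 4.0   # assumed equivalence margin in SpO2 percentage points

# TOST: equivalence is claimed only if BOTH one-sided tests reject
t_low, p_low = stats.ttest_1samp(diff, -margin, alternative="greater")
t_upp, p_upp = stats.ttest_1samp(diff, margin, alternative="less")
print(f"TOST p-value = {max(p_low, p_upp):.4f}")
```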
Mitochondrial dysfunction and alveolar type II epithelial cell senescence: The destroyer and rescuer of idiopathic pulmonary fibrosis
Front Cell Dev Biol. 2025 Mar 31;13:1535601. doi: 10.3389/fcell.2025.1535601. eCollection 2025.
ABSTRACT
Idiopathic pulmonary fibrosis (IPF) is a chronic respiratory disease with an unknown origin and complex pathogenic mechanisms. A deeper understanding of these mechanisms is essential for effective treatment. Pulmonary fibrosis is associated with the senescence of alveolar type II epithelial (ATII) cells. Additionally, ATII senescence can lead to a senescence-associated secretory phenotype, which affects cellular communication and disrupts lung tissue repair, contributing to the development of IPF. The role of mitochondrial dysfunction in senescence-related diseases is increasingly recognized. It can induce ATII senescence through apoptosis, impaired autophagy, and disrupted energy metabolism, potentially playing a key role in IPF progression. This article explores the therapeutic potential of targeting cellular senescence and mitochondrial dysfunction, emphasizing their significant roles in IPF pathogenesis.
PMID:40230412 | PMC:PMC11994736 | DOI:10.3389/fcell.2025.1535601
Prediction and Evaluation of Coronavirus and Human Protein-Protein Interactions Integrating Five Different Computational Methods
Proteins. 2025 Apr 15. doi: 10.1002/prot.26826. Online ahead of print.
ABSTRACT
The high lethality and infectiousness of coronaviruses, particularly SARS-CoV-2, pose a significant threat to human society. Understanding coronaviruses, especially the interactions between these viruses and humans, is crucial for mitigating the coronavirus pandemic. In this study, we conducted a comprehensive comparison and evaluation of five prevalent computational methods: interolog mapping, domain-domain interaction methodology, domain-motif interaction methodology, structure-based approaches, and machine learning techniques. These methods were assessed using unbiased datasets that include C1, C2h, C2v, and C3 test sets. Ultimately, we integrated these five methodologies into a unified model for predicting protein-protein interactions (PPIs) between coronaviruses and human proteins. Our final model demonstrates relatively better performance, particularly with the C2v and C3 test sets, which are frequently used datasets in practical applications. Based on this model, we further established a high-confidence PPI network between coronaviruses and humans, consisting of 18,012 interactions between 3843 human proteins and 129 coronavirus proteins. The reliability of our predictions was further validated through the current knowledge framework and network analysis. This study is anticipated to enhance mechanistic understanding of the coronavirus-human relationship while facilitating the rediscovery of antiviral drug targets. The source codes and datasets are accessible at https://github.com/covhppilab/CoVHPPI.
PMID:40231383 | DOI:10.1002/prot.26826
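The abstract does not specify how the five methods are combined, so the sketch below shows one plausible integration scheme only: stacking the five per-method scores as features for a logistic-regression meta-model (scikit-learn) on placeholder data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs = 5000
# Columns: interolog, domain-domain, domain-motif, structure-based, machine-learning scores
method_scores = rng.random((n_pairs, 5))
labels = rng.integers(0, 2, size=n_pairs)      # 1 = interacting virus-human pair (placeholder)

meta = LogisticRegression(max_iter=1000)
auc = cross_val_score(meta, method_scores, labels, cv=5, scoring="roc_auc")
print(f"Integrated model AUROC (5-fold CV): {auc.mean():.3f}")
# Pairs whose predicted probability exceeds a chosen threshold would form the
# high-confidence PPI network described in the abstract.
```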