Literature Watch
Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model
Acad Radiol. 2025 Apr 1:S1076-6332(25)00210-7. doi: 10.1016/j.acra.2025.03.015. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-weighted gradient echo (T2*W) images, which are useful in several clinical scenarios, such as the assessment of cartilage, synovial blooming (hemosiderin deposition), and chondrocalcinosis, and the evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee from fat-suppressed intermediate-weighted images.
MATERIALS AND METHODS: A cycleGAN model was trained on 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of the synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed using the normalized root mean squared error (NRMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Code, the model, and a standalone executable file are provided on GitHub.
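The authors provide their own code on GitHub; purely as an illustration of how these three metrics are conventionally computed, here is a minimal scikit-image sketch with synthetic arrays standing in for a conventional and a synthesized T2*W slice:

```python
# Illustrative only: not the authors' released code. Synthetic arrays
# stand in for a conventional slice and its cycleGAN-generated pair.
import numpy as np
from skimage.metrics import (
    normalized_root_mse,
    peak_signal_noise_ratio,
    structural_similarity,
)

rng = np.random.default_rng(0)
real = rng.random((256, 256)).astype(np.float32)        # conventional T2*W slice
synthetic = real + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)

data_range = float(real.max() - real.min())
nrmse = normalized_root_mse(real, synthetic)
psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
ssim = structural_similarity(real, synthetic, data_range=data_range)
print(f"NRMSE={nrmse:.3f}  PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")
```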
RESULTS: The model achieved a median NRMSE, PSNR, and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found interchangeable, with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifact were identified: geometric distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and an incomplete fat-suppression artifact (120/163 cases); their median impact on diagnosis was 0 (no impact).
CONCLUSION: The developed open-source GAN model creates synthetic T2*W images of the knee of high diagnostic value and quality. The identified artifacts had no or only minor effects on the diagnostic value of the images.
PMID:40175204 | DOI:10.1016/j.acra.2025.03.015
Emerging horizons of AI in pharmaceutical research
Adv Pharmacol. 2025;103:325-348. doi: 10.1016/bs.apha.2025.01.016. Epub 2025 Feb 16.
ABSTRACT
Artificial Intelligence (AI) has revolutionized drug discovery by enhancing data collection, integration, and predictive modeling across various critical stages. It aggregates vast biological and chemical data, including genomic information, protein structures, and chemical interactions with biological targets. Machine learning techniques, including QSAR models, are applied to predict compound behavior and identify potential drug candidates. Docking simulations predict drug-protein interactions, while virtual screening efficiently sifts through large chemical databases. Similarly, AI supports de novo drug design by generating novel molecules optimized against a particular biological target, using generative models such as generative adversarial networks (GANs), to find lead compounds with the most desirable pharmacological properties. In clinical trials, AI improves efficiency by pinpointing responsive patient cohorts based on genetic profiles and biomarkers, while attending to concerns such as dataset diversity and regulatory compliance. This chapter summarizes and analyzes how AI accelerates drug discovery by streamlining these processes, enabling informed decisions and bringing potentially life-saving therapies to market faster, amounting to a breakthrough in pharmaceutical research and development.
PMID:40175048 | DOI:10.1016/bs.apha.2025.01.016
Deep learning: A game changer in drug design and development
Adv Pharmacol. 2025;103:101-120. doi: 10.1016/bs.apha.2025.01.008. Epub 2025 Feb 6.
ABSTRACT
The lengthy and costly drug discovery process is being transformed by deep learning, a subfield of artificial intelligence. Deep learning technologies expedite the process, increasing treatment success rates and speeding up life-saving procedures. Deep learning stands out in target identification and lead selection: it greatly accelerates these initial stages by analyzing large biological datasets to identify possible therapeutic targets and to rank candidate drug molecules with desired features. Predicting possible adverse effects is another significant challenge. Deep learning offers prompt and efficient assistance with toxicology prediction: in a very short time, its algorithms can forecast a new drug's potential for harm, enabling researchers to concentrate on safer alternatives and steer clear of late-stage failures caused by unanticipated toxicity. Deep learning also unlocks drug repurposing: by examining currently available medications, entirely new therapeutic uses can be found, speeding up the development of treatments for diseases that were previously incurable. De novo drug discovery becomes possible when deep learning is combined with sophisticated computational modeling to create completely new medications from the ground up. By examining the molecular structures of disease targets, deep learning can recommend new drug candidates with high binding affinities and the intended therapeutic effects, supporting focused and personalized medication. Lastly, drug properties can be optimized with the aid of deep learning: by forecasting pharmacokinetics, researchers can design medications with higher bioavailability and lower toxicity. In conclusion, deep learning promises to accelerate drug development, reduce costs, and ultimately save lives.
PMID:40175037 | DOI:10.1016/bs.apha.2025.01.008
Targeting disease: Computational approaches for drug target identification
Adv Pharmacol. 2025;103:163-184. doi: 10.1016/bs.apha.2025.01.011. Epub 2025 Feb 16.
ABSTRACT
With advancing technology, the path to drug discovery has evolved. The use of AI and computational methods has revolutionized the development of novel therapeutics. Previously, methods to discover new drugs relied on high-throughput screening and bioassays, which were labor-intensive, extremely expensive, and prone to inaccurate results. The introduction of computational studies has changed the process by providing various methods to identify hit compounds and analyze them. Methods such as molecular docking, virtual screening, and molecular dynamics have changed the path to optimizing and producing lead molecules. Similarly, network pharmacology identifies target proteins in complex disease pathways with the help of protein-protein interactions and the extraction of hub proteins. Tools such as the STRING database, Cytoscape, and Metascape are employed to construct a network of the proteins responsible for disease progression and to obtain the vital target proteins, simplifying drug-target identification. When employed together, these approaches yield results with better precision and accuracy that can be validated experimentally, saving resources and time. This chapter highlights the foundations of computational approaches in drug discovery and provides a detailed understanding of how these approaches are helping researchers produce novel solutions using artificial intelligence and machine learning.
PMID:40175040 | DOI:10.1016/bs.apha.2025.01.011
Optimising electronic documentation of medication in Hungary: itemised, complete, historical, and standardised event recording
Eur J Pharm Sci. 2025 Mar 31:107079. doi: 10.1016/j.ejps.2025.107079. Online ahead of print.
ABSTRACT
Hospital care is a highly complex process, requiring comprehensive documentation of all aspects of the patient journey in electronic health records. A critical component of this care is the accurate tracking of patient medications. International standards are not consistently incorporated into the electronic medication systems currently in use worldwide, and their interoperability remains an unresolved issue. We recognised the need to develop a set of standardised data elements that ensure consistent and accurate documentation. Although the medication systems studied exhibit various strengths and weaknesses and can satisfactorily document certain aspects of the medication process, none achieve the necessary level of optimal documentation. Our paper presents a new perspective on medication recording by identifying the electronic data requirements for all events in an itemised, complete, historical, and standardised manner. To address this gap, we collected, defined, and introduced, for the first time, the essential data elements required for the comprehensive documentation of medication sub-processes. The Fast Healthcare Interoperability Resources (FHIR) data exchange standard was employed to design these data requirements. Our research identified and categorised 138 data elements essential for describing the complete medication process, including medication description, requests, dispensation, and administration. These data elements were divided into fundamental and supplementary categories. We developed a survey form to assess medication systems. In a pilot study, we tested the quality of five medication systems currently in operation in Hungary. Our analysis assessed the accuracy of the electronic recording of medication and the correspondence of the recorded data elements with international standards. None of the systems demonstrated the ability to document medication accurately or to capture all fundamental data elements. The best-performing system recorded 63% of all fundamental data elements, while the worst-performing system documented only 30%. The names and values of data elements in these systems did not comply with international standards either. The primary clinical pharmaceutical value of this study lies in enhancing the digital documentation of medication in hospitals to meet comprehensive data-recording requirements, ensure greater compliance, and improve suitability for enriching clinical health data files, enabling real-world studies, pharmacovigilance analyses, and the identification of drug repositioning opportunities.
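As an illustration of the FHIR resources these data elements map onto, here is a minimal, hypothetical FHIR R4 MedicationRequest sketched as a Python dict; the codes, patient reference, and dose are invented for the example:

```python
# A hypothetical, minimal FHIR R4 MedicationRequest illustrating the kind
# of resource the paper's "request" data elements map onto. All codes,
# references, and values are invented for the example.
import json

medication_request = {
    "resourceType": "MedicationRequest",
    "status": "active",
    "intent": "order",
    "medicationCodeableConcept": {
        "coding": [{
            "system": "http://www.whocc.no/atc",  # ATC coding system
            "code": "N02BE01",                    # illustrative: paracetamol
            "display": "Paracetamol",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient id
    "authoredOn": "2025-01-15",
    "dosageInstruction": [{
        "text": "500 mg orally, three times daily",
        "doseAndRate": [{
            "doseQuantity": {
                "value": 500,
                "unit": "mg",
                "system": "http://unitsofmeasure.org",
                "code": "mg",
            }
        }],
    }],
}
print(json.dumps(medication_request, indent=2))
```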
PMID:40174662 | DOI:10.1016/j.ejps.2025.107079
A Narrative Medicine Approach to Navigating Barriers to the Diagnosis of Pediatric Neurotrophic Keratopathy
Am J Ophthalmol. 2025 Mar 31:S0002-9394(25)00162-X. doi: 10.1016/j.ajo.2025.03.043. Online ahead of print.
ABSTRACT
OBJECTIVE: Neurotrophic keratopathy (NK) is a rare disease characterized by the loss of corneal innervation and increased vulnerability to injury. The diagnosis and treatment of NK can be challenging for pediatric patients and their caregivers. This study explores the experiences of caregivers navigating the diagnostic and treatment journey of pediatric patients with neurotrophic keratopathy.
DESIGN: This study is a qualitative study using semi-structured interviews.
SUBJECTS: Ten caregivers of pediatric patients with NK who had undergone corneal neurotization (CN) surgery.
METHODS: Caregivers were interviewed about their experiences related to the diagnostic process, treatment challenges, lifestyle changes, and the impact of CN surgery. Interviews were recorded, transcribed, and analyzed using an inductive-deductive approach to identify recurring themes.
MAIN OUTCOMES: Caregiver experiences and perceptions of diagnostic delays, information-seeking behaviors, lifestyle changes, and the effects of CN surgery on corneal health and quality of life.
RESULTS: Five key themes emerged from the analysis: (1) Delays in diagnosis due to insufficient specialist knowledge; (2) Caregivers' proactive efforts in seeking information; (3) Substantial lifestyle changes required by NK; (4) The impact of CN surgery on corneal health and quality of life; and (5) Variability in healthcare experiences, highlighting the need for effective communication. Caregivers expressed frustration with diagnostic delays and highlighted their reliance on external support networks.
CONCLUSIONS: This study illustrates the need for enhanced awareness among clinicians about NK and the benefits of narrative medicine in fostering caregiver-provider relationships. The challenges reported by families navigating NK inform strategies that may improve diagnosis and treatment of NK.
PMID:40174715 | DOI:10.1016/j.ajo.2025.03.043
Discovery and optimization of AAK1 inhibitors based on 1H-indazole scaffold for the potential treatment of SARS-CoV-2 infection
Mol Divers. 2025 Apr 2. doi: 10.1007/s11030-025-11135-4. Online ahead of print.
ABSTRACT
The entry of various viruses, including SARS-CoV-2, into host cells is mediated by clathrin-mediated endocytosis (CME). AP-2 plays a crucial role in this process by recognizing membrane receptors and binding clathrin, facilitating the formation of clathrin-coated vesicles and promoting CME. AAK1 catalyzes the phosphorylation of the AP2M1 subunit at Thr156. Therefore, suppressing AAK1 activity can hinder virus invasion by blocking CME, indicating that AAK1 could be a potential target for developing novel antiviral drugs against SARS-CoV-2. In this study, we present a series of novel AAK1 inhibitors based on previously reported AAK1 inhibitors. Drug design was carried out by fusing the 1H-indazole scaffold of SGC-AAK1-1 with pharmacophore groups of compound 6 and was further optimized with the assistance of molecular docking. Among the 42 newly synthesized compounds, 9i, 9s, 11f and 11l exhibited antiviral activity against SARS-CoV-2 infection comparable to that of reference compound 6 at a concentration of 3 μM. Notably, 11f showed almost no cytotoxicity at any tested concentration. Additionally, 11f exhibited favorable predicted pharmacokinetic properties. These findings support the potential of 11f as a lead compound for developing antiviral drugs targeting SARS-CoV-2 infection, and potentially other viruses that depend on CME to enter host cells. In summary, we have expanded the structural types of AAK1 inhibitors and successfully obtained effective AAK1 inhibitors with antiviral capabilities.
PMID:40175846 | DOI:10.1007/s11030-025-11135-4
Low-cost generation of clinical-grade, layperson-friendly pharmacogenetic passports using oligonucleotide arrays
Am J Hum Genet. 2025 Mar 24:S0002-9297(25)00102-8. doi: 10.1016/j.ajhg.2025.03.003. Online ahead of print.
ABSTRACT
Pharmacogenomic (PGx) information is essential for precision medicine, enabling drug prescriptions to be personalized according to an individual's genetic background. Almost all individuals will carry a genetic marker that affects their drug response, so the ideal drug prescription for these individuals will differ from the population-level guidelines. Currently, PGx information is often not available at first prescription, reducing its effectiveness. In the Netherlands, pharmacogenetic information is most often obtained using dedicated single-gene assays, making it expensive and time consuming to generate complete multi-gene PGx profiles. We therefore hypothesized that we could also use genome-wide oligonucleotide genotyping arrays to generate comprehensive PGx information (PGx passports), thereby decreasing the cost and time required for PGx testing and lowering the barrier to generating PGx information prior to first prescription. Taking advantage of existing genetic data generated in two biobanks, we developed and validated Asterix, a low-cost, clinical-grade PGx passport pipeline for 12 PGx genes. In these biobanks, we performed and clinically validated genetic variant calling and statistical phasing and imputation. In addition, we developed and validated a CYP2D6 copy-number-variant-calling tool, forgoing the need to use separate PCR-based copy-number detection. Ultimately, we returned 1,227 PGx passports to biobank participants via a layperson-friendly app, improving knowledge of PGx among citizens. Our study demonstrates the feasibility of a low-cost, clinical-grade PGx passport pipeline that could be readily implemented in clinical settings to enhance personalized healthcare, ensuring that patients receive the most effective and safe drug therapy based on their unique genetic makeup.
PMID:40174590 | DOI:10.1016/j.ajhg.2025.03.003
Whole proteome-integrated and vaccinomics-based next generation mRNA vaccine design against Pseudomonas aeruginosa-A hierarchical subtractive proteomics approach
Int J Biol Macromol. 2025 Mar 31:142627. doi: 10.1016/j.ijbiomac.2025.142627. Online ahead of print.
ABSTRACT
Pseudomonas aeruginosa (P. aeruginosa) is a multidrug-resistant opportunistic pathogen responsible for chronic obstructive pulmonary disease (COPD), cystic fibrosis, and ventilator-associated pneumonia (VAP), leading to cancer. Developing an efficacious vaccine remains the most promising strategy for combating P. aeruginosa infections. In this study, we employed an advanced in silico strategy to design a highly efficient and stable mRNA vaccine using immunoinformatics tools. Whole proteome data were utilized to identify highly immunogenic vaccine candidates using subtractive proteomics. Three extracellular proteins were prioritized for T- and linear B-cell epitope prediction. A beta-defensin protein sequence was incorporated as an adjuvant at the N-terminus of the construct. A total of three highly immunogenic CTL, three HTL, and three linear B-cell epitopes were combined using specific linkers to design this multi-peptide construct. The 5' and 3' UTR sequences, a Kozak sequence with a stop codon, and signal peptides followed by a poly-A tail were incorporated into the construct to create the final mRNA vaccine. The vaccine exhibited antigenicity scores >0.88, ensuring high antigenicity with no allergenicity or toxicity. Physicochemical analysis revealed high solubility and thermostability. Three-dimensional structural analysis yielded high-quality structures. Vaccine-receptor docking and molecular dynamics simulations demonstrated strong molecular interactions, stable binding affinities, and the dynamic nature and structural stability of the vaccine, with significant immune responses against it. Immunological simulation indicated successful cellular and humoral immune responses to defend against P. aeruginosa infection. Validation of the study outcomes necessitates both experimental and clinical testing.
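As a rough illustration of the multi-peptide assembly described above, the sketch below concatenates placeholder epitopes with linker motifs commonly used in multi-epitope designs (EAAAK, AAY, GPGPG, KK); the abstract does not name the paper's linkers, so these choices and all sequences are stand-ins:

```python
# Placeholder sequences throughout; linker motifs are common conventions
# in multi-epitope vaccine design, not the paper's stated choices.
adjuvant = "BETADEFENSIN"                             # stand-in for the beta-defensin adjuvant
ctl = ["CTLEPITOPE1", "CTLEPITOPE2", "CTLEPITOPE3"]   # hypothetical CTL epitopes
htl = ["HTLEPITOPE1", "HTLEPITOPE2", "HTLEPITOPE3"]   # hypothetical HTL epitopes
bcl = ["BEPITOPE1", "BEPITOPE2", "BEPITOPE3"]         # hypothetical linear B-cell epitopes

construct = (
    adjuvant + "EAAAK"             # rigid linker after the adjuvant
    + "AAY".join(ctl) + "GPGPG"    # AAY links CTL epitopes
    + "GPGPG".join(htl) + "KK"     # GPGPG links HTL epitopes
    + "KK".join(bcl)               # KK links B-cell epitopes
)
print(len(construct), construct)
```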
PMID:40174835 | DOI:10.1016/j.ijbiomac.2025.142627
Computational fluid dynamics of small airway disease in chronic obstructive pulmonary disease
EBioMedicine. 2025 Apr 1;114:105670. doi: 10.1016/j.ebiom.2025.105670. Online ahead of print.
ABSTRACT
BACKGROUND: Small airways (<2 mm in diameter) are major sites of airflow obstruction in chronic obstructive pulmonary disease (COPD). This study aimed to quantify the impact of small airway disease, characterized by narrowing, occlusion, and obliteration, on airflow parameters in smokers and in patients with end-stage COPD.
METHODS: We performed computational fluid dynamics (CFD) simulations of inspiratory airflow in three lung groups: unused donor lungs with no smoking/emphysema history (controls), unused donor lungs with a smoking history and emphysema, and explanted end-stage COPD lungs. Each group included four lungs, with two tissue cylinders. Micro-CT-scanned small airways were segmented into 3D models for CFD simulations quantifying pressure, resistance, and shear stress. CFD results were benchmarked against simplified linear and Weibel models.
FINDINGS: CFD simulations showed higher pressures in COPD vs. controls (p = 0.0091) and smokers (p = 0.015), along with increased resistance (p = 0.0057 vs. controls; p = 0.0083 vs. smokers) and up to a tenfold rise in shear stress (p = 0.010 vs. controls). Narrowing and occlusion were shown to independently increase pressure, resistance, and shear stress, which was validated through segmentation corrections. Pressures and resistance assessed with the simplified models were up to seven-fold higher for smokers and even 72-fold higher for COPD compared with CFD values.
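For context on the simplified models used as a benchmark, a Poiseuille-type resistance calculation shows why small changes in airway caliber dominate: laminar resistance scales with the inverse fourth power of the radius. A minimal sketch with illustrative (not study-derived) dimensions:

```python
# Poiseuille resistance of a rigid cylindrical airway: R = 8*mu*L / (pi*r^4).
# Halving the radius therefore raises resistance 16-fold. Values are
# illustrative, not taken from the study.
import math

MU_AIR = 1.81e-5  # dynamic viscosity of air at ~20 C, Pa*s

def poiseuille_resistance(radius_m: float, length_m: float) -> float:
    """Laminar flow resistance (Pa*s/m^3) of a cylindrical airway."""
    return 8 * MU_AIR * length_m / (math.pi * radius_m ** 4)

healthy = poiseuille_resistance(radius_m=0.5e-3, length_m=5e-3)    # 1 mm diameter
narrowed = poiseuille_resistance(radius_m=0.25e-3, length_m=5e-3)  # 50% narrowing
print(f"resistance ratio: {narrowed / healthy:.0f}x")              # -> 16x
```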
INTERPRETATION: These findings show that increased airflow parameters can explain the association between small airway disease and airflow limitation in COPD, underscoring small airway vulnerability. Additionally, they highlight the limitations of theoretical models in accurately capturing small airway disease.
FUNDING: Supported by the KU Leuven (C16/19/005).
PMID:40174553 | DOI:10.1016/j.ebiom.2025.105670
Integrative network analysis reveals novel moderators of Aβ-Tau interaction in Alzheimer's disease
Alzheimers Res Ther. 2025 Apr 2;17(1):70. doi: 10.1186/s13195-025-01705-x.
ABSTRACT
BACKGROUND: Although interactions between amyloid-beta and tau proteins have been implicated in Alzheimer's disease (AD), the precise mechanisms by which these interactions contribute to disease progression are not yet fully understood. Moreover, despite the growing application of deep learning in various biomedical fields, its application in integrating networks to analyze disease mechanisms in AD research remains limited. In this study, we employed BIONIC, a deep learning-based network integration method, to integrate proteomics and protein-protein interaction data, with an aim to uncover factors that moderate the effects of the Aβ-tau interaction on mild cognitive impairment (MCI) and early-stage AD.
METHODS: Proteomic data from the ROSMAP cohort were integrated with protein-protein interaction (PPI) data using a deep learning-based model. Linear regression analysis was applied to histopathological and gene expression data, and mutual information was used to detect moderating factors. Statistical significance was determined using the Benjamini-Hochberg correction (p < 0.05).
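A minimal sketch of the two generic statistical steps named here, mutual-information screening and Benjamini-Hochberg correction, using scikit-learn and statsmodels on synthetic data (not the authors' pipeline):

```python
# Illustrative only: synthetic data stand in for candidate moderator
# features and an Abeta-tau interaction proxy.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))             # 50 candidate moderator features
y = 0.8 * X[:, 0] + rng.standard_normal(200)   # interaction-effect proxy

mi = mutual_info_regression(X, y, random_state=1)   # screen moderators

# Hypothetical per-feature p-values (e.g., from the regression step),
# corrected with Benjamini-Hochberg at the paper's alpha of 0.05.
pvals = rng.uniform(0, 1, 50)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(mi.argmax(), int(reject.sum()))
```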
RESULTS: Our results suggested that astrocytes and GPNMB+ microglia moderate the Aβ-tau interaction. Based on linear regression with histopathological and gene expression data, GFAP and IBA1 levels and GPNMB gene expression positively contributed to the interaction of tau with Aβ in non-dementia cases, replicating the results of the network analysis.
CONCLUSIONS: These findings suggest that GPNMB+ microglia moderate the Aβ-tau interaction in early AD and therefore are a novel therapeutic target. To facilitate further research, we have made the integrated network available as a visualization tool for the scientific community (URL: https://igcore.cloud/GerOmics/AlzPPMap).
PMID:40176187 | DOI:10.1186/s13195-025-01705-x
Deep learning-based reconstruction and superresolution for MR-guided thermal ablation of malignant liver lesions
Cancer Imaging. 2025 Apr 2;25(1):47. doi: 10.1186/s40644-025-00869-x.
ABSTRACT
OBJECTIVE: This study evaluates the impact of deep learning-enhanced T1-weighted VIBE sequences (DL-VIBE) on image quality and procedural parameters during MR-guided thermoablation of liver malignancies, compared to standard VIBE (SD-VIBE).
METHODS: Between September 2021 and February 2023, 34 patients (mean age: 65.4 years; 13 women) underwent MR-guided microwave ablation on a 1.5 T scanner. Intraprocedural SD-VIBE sequences were retrospectively processed with a deep learning algorithm (DL-VIBE) to reduce noise and enhance sharpness. Two interventional radiologists independently assessed image quality, noise, artifacts, sharpness, diagnostic confidence, and procedural parameters using a 5-point Likert scale. Interrater agreement was analyzed, and noise maps were created to assess signal-to-noise ratio improvements.
RESULTS: DL-VIBE significantly improved image quality, reduced artifacts and noise, and enhanced sharpness of liver contours and portal vein branches compared to SD-VIBE (p < 0.01). Procedural metrics, including needle tip detectability, confidence in needle positioning, and ablation zone assessment, were significantly better with DL-VIBE (p < 0.01). Interrater agreement was high (Cohen κ = 0.86). Reconstruction times for DL-VIBE were 3 s for k-space reconstruction and 1 s for superresolution processing. Simulated acquisition modifications reduced breath-hold duration by approximately 2 s.
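A minimal sketch of the interrater-agreement computation on 5-point Likert ratings using scikit-learn; the ratings are invented, and whether the authors used plain or weighted kappa is not stated in the abstract:

```python
# Two hypothetical readers scoring the same ten images on a 5-point scale.
from sklearn.metrics import cohen_kappa_score

reader1 = [5, 4, 4, 5, 3, 4, 5, 5, 4, 4]
reader2 = [5, 4, 5, 5, 3, 4, 5, 4, 4, 4]

# Plain kappa, plus a quadratically weighted variant often preferred
# for ordinal scales.
print(cohen_kappa_score(reader1, reader2))
print(cohen_kappa_score(reader1, reader2, weights="quadratic"))
```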
CONCLUSION: DL-VIBE enhances image quality during MR-guided thermal ablation while improving efficiency through reduced processing and acquisition times.
PMID:40176185 | DOI:10.1186/s40644-025-00869-x
A compact deep learning approach integrating depthwise convolutions and spatial attention for plant disease classification
Plant Methods. 2025 Apr 2;21(1):48. doi: 10.1186/s13007-025-01325-4.
ABSTRACT
Plant leaf diseases significantly threaten agricultural productivity and global food security, emphasizing the importance of early, accurate detection and effective crop health management. Current deep learning models used for plant disease classification have limitations in capturing intricate features such as the texture, shape, and color of plant leaves. Furthermore, many of these models are computationally expensive and less suitable for deployment in resource-constrained environments such as farms and rural areas. We propose a novel lightweight deep learning model, Depthwise Separable Convolution with Spatial Attention (LWDSC-SA), designed to address these limitations and enhance feature extraction while maintaining computational efficiency. By integrating spatial attention and depthwise separable convolution, the LWDSC-SA model improves the ability to detect and classify plant diseases. In a comprehensive evaluation on the PlantVillage dataset, which consists of 38 classes and 55,000 images from 14 plant species, the LWDSC-SA model achieved 98.7% accuracy, a substantial improvement over MobileNet by 5.25%, MobileNetV2 by 4.50%, AlexNet by 7.40%, and VGGNet16 by 5.95%. Furthermore, to validate its robustness and generalizability, we employed K-fold cross-validation (K = 5), which demonstrated consistently high performance, with an average accuracy of 98.58%, precision of 98.30%, recall of 98.90%, and F1 score of 98.58%. These results highlight the superior performance of the proposed model, demonstrating its ability to outperform state-of-the-art models in accuracy while remaining lightweight and efficient. This research offers a promising solution for real-world agricultural applications, enabling effective plant disease detection in resource-limited settings and contributing to more sustainable agricultural practices.
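A minimal PyTorch sketch of the two building blocks the model's name refers to, a depthwise separable convolution followed by CBAM-style spatial attention; the channel sizes and composition are assumptions, not the paper's exact architecture:

```python
# Sketch of the LWDSC-SA building blocks under stated assumptions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch),
        # then a 1x1 pointwise conv mixes channels cheaply.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels, then learn a per-pixel attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

block = nn.Sequential(DepthwiseSeparableConv(3, 32), SpatialAttention())
print(block(torch.randn(1, 3, 224, 224)).shape)  # -> torch.Size([1, 32, 224, 224])
```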
PMID:40176127 | DOI:10.1186/s13007-025-01325-4
Forecasting motion trajectories of elbow and knee joints during infant crawling based on long-short-term memory (LSTM) networks
Biomed Eng Online. 2025 Apr 2;24(1):39. doi: 10.1186/s12938-025-01360-1.
ABSTRACT
BACKGROUND: Hands-and-knees crawling is a promising rehabilitation intervention for infants with motor impairments, but research on assistive crawling devices for rehabilitation training is still in its early stages. In particular, precisely generating motion trajectories is a prerequisite for controlling exoskeleton assistive devices, and deep learning-based prediction algorithms, such as Long Short-Term Memory (LSTM) networks, have proven effective in forecasting joint trajectories of gait. Despite this, no previous studies have focused on forecasting the more variable and complex trajectories of infant crawling. Therefore, this paper explores the feasibility of using LSTM networks to predict crawling trajectories, thereby advancing our understanding of how to actively control crawling rehabilitation training robots.
METHODS: We collected joint trajectory data from 20 healthy infants (11 males and 9 females, aged 8-15 months) as they crawled on hands and knees. This study implemented LSTM networks to forecast bilateral elbow and knee trajectories based on the corresponding joint angles. The data set comprised 58,782 time steps, each containing 4 joint angles. We partitioned the data set into 70% for training and 30% for testing to evaluate predictive performance. We investigated a total of 24 combinations of input and output time-frames, with input window sizes of 10, 15, 20, 30, 40, 50, 70, and 100 time steps and output window sizes of 5, 10, and 15 steps. Evaluation metrics included Mean Absolute Error (MAE), Mean Squared Error (MSE), and Correlation Coefficient (CC) to assess prediction accuracy.
RESULTS: The results indicate that across various input-output windows, the MAE for elbow joints ranged from 0.280 to 4.976°, MSE ranged from 0.203° to 59.186°, and CC ranged from 89.977% to 99.959%. For knee joints, MAE ranged from 0.277 to 4.262°, MSE from 0.229 to 53.272°, and CC from 89.454% to 99.944%. Results also show that smaller output window sizes lead to lower prediction errors. As expected, the LSTM predicting 5 output time steps has the lowest average error, while the LSTM predicting 15 time steps has the highest average error. In addition, variations in input window size had a minimal impact on average error when the output window size was fixed. Overall, the optimal performance for both elbow and knee joints was observed with input-output window sizes of 30 and 5 time steps, respectively, yielding an MAE of 0.295°, MSE of 0.260°, and CC of 99.938%.
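A minimal PyTorch sketch of the best-performing configuration above (30 input steps, 5 output steps, 4 joint angles), with sliding-window data preparation; the hidden size and single-layer design are assumptions:

```python
# Sliding-window forecasting sketch under stated assumptions; a synthetic
# trajectory stands in for the crawling joint-angle data.
import torch
import torch.nn as nn

def make_windows(series: torch.Tensor, n_in: int = 30, n_out: int = 5):
    """series: (T, 4) joint-angle trajectory -> (N, n_in, 4), (N, n_out, 4)."""
    xs, ys = [], []
    for t in range(series.shape[0] - n_in - n_out + 1):
        xs.append(series[t:t + n_in])
        ys.append(series[t + n_in:t + n_in + n_out])
    return torch.stack(xs), torch.stack(ys)

class CrawlLSTM(nn.Module):
    def __init__(self, n_joints: int = 4, hidden: int = 64, n_out: int = 5):
        super().__init__()
        self.n_out, self.n_joints = n_out, n_joints
        self.lstm = nn.LSTM(n_joints, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_out * n_joints)

    def forward(self, x):                  # x: (B, 30, 4)
        _, (h, _) = self.lstm(x)           # last hidden state summarizes the window
        return self.head(h[-1]).view(-1, self.n_out, self.n_joints)

X, Y = make_windows(torch.randn(500, 4))   # synthetic trajectory
pred = CrawlLSTM()(X[:8])
print(pred.shape)                          # -> torch.Size([8, 5, 4])
```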
CONCLUSIONS: This study demonstrates the feasibility of forecasting infant crawling trajectories using LSTM networks, which could potentially integrate with exoskeleton control systems. It experimentally explores how different input and output time-frames affect prediction accuracy and sets the stage for future research focused on optimizing models and developing effective control strategies to improve assistive crawling devices.
PMID:40176123 | DOI:10.1186/s12938-025-01360-1
Prediction of Future Risk of Moderate to Severe Kidney Function Loss Using a Deep Learning Model-Enabled Chest Radiography
J Imaging Inform Med. 2025 Apr 2. doi: 10.1007/s10278-025-01489-4. Online ahead of print.
ABSTRACT
Chronic kidney disease (CKD) remains a major public health concern, requiring better predictive models for early intervention. This study evaluates a deep learning model (DLM) that utilizes raw chest X-ray (CXR) data to predict moderate to severe kidney function decline. We analyzed data from 79,219 patients with an estimated Glomerular Filtration Rate (eGFR) between 65 and 120, segmented into development (n = 37,983), tuning (n = 15,346), internal validation (n = 14,113), and external validation (n = 11,777) sets. Our DLM, pretrained on CXR-report pairs, was fine-tuned with the development set. We retrospectively examined data spanning April 2011 to February 2022, with a 5-year maximum follow-up. Primary and secondary endpoints included CKD stage 3b progression, ESRD/dialysis, and mortality. The overall concordance index (C-index) values for the internal and external validation sets were 0.903 (95% CI, 0.885-0.922) and 0.851 (95% CI, 0.819-0.883), respectively. In these sets, the incidences of progression to CKD stage 3b at 5 years were 19.2% and 13.4% in the high-risk group, significantly higher than those in the median-risk (5.9% and 5.1%) and low-risk groups (0.9% and 0.9%), respectively. The sex, age, and eGFR-adjusted hazard ratios (HR) for the high-risk group compared to the low-risk group were 16.88 (95% CI, 10.84-26.28) and 7.77 (95% CI, 4.77-12.64), respectively. The high-risk group also exhibited higher probabilities of progressing to ESRD/dialysis or experiencing mortality compared to the low-risk group. Further analysis revealed that the high-risk group compared to the low/median-risk group had a higher prevalence of complications and abnormal blood/urine markers. Our findings demonstrate that a DLM utilizing CXR can effectively predict CKD stage 3b progression, offering a potential tool for early intervention in high-risk populations.
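A minimal sketch of the headline metric, the concordance index, computed with lifelines on synthetic time-to-event data (not the study's data or model):

```python
# C-index sketch: fraction of comparable patient pairs whose predicted
# ordering matches the observed event ordering. Data are synthetic.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
time_to_event = rng.exponential(5.0, 100)                # years to CKD 3b / censoring
risk_score = -time_to_event + rng.standard_normal(100)   # model output (higher = riskier)
observed = rng.integers(0, 2, 100)                       # 1 = event, 0 = censored

# concordance_index expects larger scores to mean longer survival,
# so the risk score is negated.
print(concordance_index(time_to_event, -risk_score, observed))
```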
PMID:40175823 | DOI:10.1007/s10278-025-01489-4
Leveraging Fine-Scale Variation and Heterogeneity of the Wetland Soil Microbiome to Predict Nutrient Flux on the Landscape
Microb Ecol. 2025 Apr 2;88(1):22. doi: 10.1007/s00248-025-02516-1.
ABSTRACT
Shifts in agricultural land use over the past 200 years have led to a loss of nearly 50% of existing wetlands in the USA, and agricultural activities contribute up to 65% of the nutrients that reach the Mississippi River Basin, directly contributing to biological disasters such as the hypoxic Gulf of Mexico "Dead" Zone. Federal efforts to construct and restore wetland habitats have been employed to mitigate the detrimental effects of eutrophication, with an emphasis on the restoration of ecosystem services such as nutrient cycling and retention. Soil microbial assemblages drive biogeochemical cycles and offer a unique and sensitive framework for the accurate evaluation, restoration, and management of ecosystem services. The purpose of this study was to elucidate patterns of soil bacteria within and among wetlands by developing diversity profiles from high-throughput sequencing data, link functional gene copy number of nitrogen cycling genes to measured nutrient flux rates collected from flow-through incubation cores, and predict nutrient flux using microbial assemblage composition. Soil microbial assemblages showed fine-scale turnover in soil cores collected across the topsoil horizon (0-5 cm; top vs bottom partitions) and were structured by restoration practices on the easements (tree planting, shallow water, remnant forest). Connections between soil assemblage composition, functional gene copy number, and nutrient flux rates show the potential for soil bacterial assemblages to be used as bioindicators for nutrient cycling on the landscape. In addition, the predictive accuracy of flux rates was improved when implementing deep learning models that paired connected samples across time.
PMID:40175811 | DOI:10.1007/s00248-025-02516-1
scAtlasVAE: a deep learning framework for generating a human CD8(+) T cell atlas
Nat Rev Cancer. 2025 Apr 2. doi: 10.1038/s41568-025-00811-0. Online ahead of print.
NO ABSTRACT
PMID:40175619 | DOI:10.1038/s41568-025-00811-0
Estimating strawberry weight for grading by picking robot with point cloud completion and multimodal fusion network
Sci Rep. 2025 Apr 2;15(1):11227. doi: 10.1038/s41598-025-92641-1.
ABSTRACT
Strawberry grading by picking robots can eliminate manual classification, reducing labor costs and minimizing damage to the fruit. Strawberry size or weight is a key factor in grading, with accurate weight estimation being crucial for proper classification. In this paper, we collected 1521 sets of strawberry RGB-D images using a depth camera and manually measured the weight and size of the strawberries to construct a training dataset for the strawberry weight regression model. To address the issue of incomplete depth images caused by environmental interference with depth cameras, this study proposes a multimodal point cloud completion method specifically designed for symmetrical objects, leveraging RGB images to guide the completion of depth images in the same scene. The method locates strawberry pixel regions, calculates centroid coordinates, determines the symmetry axis via PCA, and completes the depth image. Based on this approach, a multimodal fusion regression model for strawberry weight estimation, named MMF-Net, is developed. The model takes the completed point cloud and the RGB image as inputs, extracting features from the RGB image with EfficientNet and from the point cloud with PointNet. These features are then integrated at the feature level through gradient blending, combining the strengths of both modalities. Using the Percent Correct Weight (PCW) metric as the evaluation standard, this study compares the performance of four traditional machine learning methods (Support Vector Regression (SVR), Multilayer Perceptron (MLP), Linear Regression, and Random Forest Regression), four point cloud-based deep learning models (PointNet, PointNet++, PointMLP, and Point Cloud Transformer), and two image-based deep learning models (EfficientNet and ResNet) on single-modal datasets. The results indicate that among traditional machine learning methods, the SVR model achieved the best performance, with an accuracy of 77.7% (PCW@0.2). Among deep learning methods, the image-based EfficientNet model obtained the highest accuracy, reaching 85% (PCW@0.2), while the PointNet++ model performed best among point cloud-based models, with an accuracy of 54.3% (PCW@0.2). The proposed multimodal fusion model, MMF-Net, achieved an accuracy of 87.66% (PCW@0.2), significantly outperforming both the traditional machine learning methods and the single-modal deep learning models.
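A minimal NumPy sketch of the symmetry-axis step described above: PCA on the visible points yields the principal axis, and mirroring (a 180° rotation about that axis) fills in the occluded side. The reflection convention is an assumption; the paper's exact completion procedure may differ:

```python
# PCA symmetry-axis sketch under stated assumptions; random points stand
# in for a partial strawberry point cloud.
import numpy as np

def symmetry_axis(points: np.ndarray):
    """points: (N, 3) partial cloud -> (centroid, unit symmetry axis)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axis = covariance eigenvector with the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    return centroid, eigvecs[:, np.argmax(eigvals)]

def mirror_about_axis(points, centroid, axis):
    """Rotate points 180 degrees about the axis: v' = 2(v.a)a - v."""
    centered = points - centroid
    along = (centered @ axis)[:, None] * axis   # component along the axis
    return centroid + 2 * along - centered

pts = np.random.rand(1000, 3)                   # visible half of the fruit
c, ax = symmetry_axis(pts)
completed = np.vstack([pts, mirror_about_axis(pts, c, ax)])
print(completed.shape)                          # -> (2000, 3)
```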
PMID:40175474 | DOI:10.1038/s41598-025-92641-1
Investigation on potential bias factors in histopathology datasets
Sci Rep. 2025 Apr 2;15(1):11349. doi: 10.1038/s41598-025-89210-x.
ABSTRACT
Deep neural networks (DNNs) have demonstrated remarkable capabilities in medical applications, including digital pathology, where they excel at analyzing complex patterns in medical images to assist in accurate disease diagnosis and prognosis. However, concerns have arisen about potential biases in The Cancer Genome Atlas (TCGA) dataset, a comprehensive repository of digitized histopathology data that serves as both a training and validation source for deep learning models; over-optimistic model performance may reflect reliance on biased features rather than histological characteristics. Indeed, recent studies have confirmed site-specific bias in the embedded features extracted for cancer-type discrimination, leading to high accuracy in acquisition-site classification. This biased behavior motivated us to conduct an in-depth analysis of the potential causes behind this unexpected capacity for site-specific pattern recognition. The analysis was conducted on two cutting-edge DNN models: KimiaNet, a state-of-the-art DNN trained on TCGA images, and a self-trained EfficientNet. In this study, the balanced accuracy metric is used to evaluate how well a model originally designed to learn cancerous patterns performs when classifying data centers, with the aim of investigating the factors contributing to the unexpectedly high balanced accuracy in data-center detection.
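A minimal sketch of the balanced accuracy metric used here, the mean of per-class recalls, computed with scikit-learn on made-up site labels:

```python
# Balanced accuracy averages recall over classes, so a model cannot score
# well by only predicting the majority acquisition site. Labels are invented.
from sklearn.metrics import balanced_accuracy_score

y_true = ["siteA"] * 80 + ["siteB"] * 15 + ["siteC"] * 5   # imbalanced sites
y_pred = (["siteA"] * 78 + ["siteB"] * 2      # predictions for true siteA
          + ["siteB"] * 13 + ["siteC"] * 2    # predictions for true siteB
          + ["siteA"] * 5)                    # predictions for true siteC

print(balanced_accuracy_score(y_true, y_pred))  # mean of per-class recalls
```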
PMID:40175463 | DOI:10.1038/s41598-025-89210-x