Deep learning
Hybrid Data Augmentation Strategies for Robust Deep Learning Classification of Corneal Topographic Maps
Biomed Phys Eng Express. 2025 Jan 20. doi: 10.1088/2057-1976/adabea. Online ahead of print.
ABSTRACT
Deep learning has emerged as a powerful tool in medical imaging, particularly for corneal topographic map classification. However, the scarcity of labeled data poses a significant challenge to achieving robust performance. This study investigates the impact of various data augmentation strategies on enhancing the performance of a customized convolutional neural network model for corneal topographic map classification. We propose a hybrid data augmentation approach that combines traditional transformations, generative adversarial networks, and specific generative models. Experimental results demonstrate that the hybrid data augmentation method achieves the highest accuracy of 99.54%, significantly outperforming individual data augmentation techniques. This hybrid approach not only improves model accuracy but also mitigates overfitting issues, making it a promising solution for medical image classification tasks with limited data availability.
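The traditional-transformation arm of such a hybrid augmentation pipeline can be sketched with numpy (the GAN and generative-model arms are omitted; the function name and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def augment(image, rng):
    """Apply random traditional transformations: flip, rotation, noise."""
    if rng.random() < 0.5:
        image = np.fliplr(image)                          # horizontal flip
    image = np.rot90(image, k=rng.integers(0, 4))         # random 90-degree rotation
    image = image + rng.normal(0.0, 0.01, image.shape)    # mild Gaussian noise
    return np.clip(image, 0.0, 1.0)                       # keep intensities in [0, 1]

rng = np.random.default_rng(0)
img = rng.random((64, 64))    # stand-in for a normalized topographic map
aug = augment(img, rng)
```

In a hybrid setup, batches drawn from such classical transforms would be mixed with GAN-synthesized samples during training.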
PMID:39832385 | DOI:10.1088/2057-1976/adabea
A deep learning tissue classifier based on differential Co-expression genes predicts the pregnancy outcomes of cattle
Biol Reprod. 2025 Jan 20:ioaf009. doi: 10.1093/biolre/ioaf009. Online ahead of print.
ABSTRACT
Economic losses in cattle farms are frequently associated with failed pregnancies. Some studies found that the transcriptomic profiles of blood and endometrial tissues in cattle with varying pregnancy outcomes display discrepancies even before artificial insemination (AI) or embryo transfer (ET). In this study, 330 samples from seven distinct sources and two tissue types were integrated and divided into two groups based on the ability to establish and maintain pregnancy after AI or ET: P (pregnant) and NP (nonpregnant). By analyzing gene co-variation and employing machine learning algorithms, the objective was to identify genes that could predict pregnancy outcomes in cattle. Initially, within each tissue type, the top 100 differentially co-expressed genes (DCEG) were identified based on the analysis of changes in correlation coefficients and network topological structure. Subsequently, these genes were used in models trained by seven different machine learning algorithms. Overall, models trained on DCEGs exhibited superior predictive accuracy compared to those trained on an equivalent number of differentially expressed genes (DEGs). Among them, the deep learning models based on differential co-expression genes in blood and endometrial tissue achieved prediction accuracies of 91.7% and 82.6%, respectively. Finally, the importance of DCEGs was ranked using SHapley Additive exPlanations (SHAP) and enrichment analysis, identifying key signaling pathways that influence pregnancy. In summary, this study identified a set of genes potentially affecting pregnancy by analyzing the overall co-variation of gene connections between multiple sources. These key genes facilitated the development of interpretable machine learning models that accurately predict pregnancy outcomes in cattle.
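The core of the correlation-change criterion can be sketched as follows: compute gene-gene correlation matrices per group and score each gene by how much its co-expression pattern shifts. This is a minimal numpy illustration of the idea, not the paper's exact pipeline (which also uses network topology):

```python
import numpy as np

def diff_coexpression(expr_p, expr_np):
    """Score genes by the change in their co-expression between groups.
    expr_p / expr_np: samples x genes expression matrices (P and NP groups)."""
    corr_p = np.corrcoef(expr_p, rowvar=False)    # gene-gene correlations, group P
    corr_np = np.corrcoef(expr_np, rowvar=False)  # gene-gene correlations, group NP
    delta = np.abs(corr_p - corr_np)              # per-pair correlation change
    np.fill_diagonal(delta, 0.0)                  # ignore self-correlation
    return delta.sum(axis=0)                      # per-gene differential score

rng = np.random.default_rng(1)
scores = diff_coexpression(rng.random((30, 5)), rng.random((25, 5)))
top = np.argsort(scores)[::-1][:3]                # top-3 DCEG candidates
```

The top-ranked genes would then feed the downstream classifiers in place of ordinary DEGs.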
PMID:39832283 | DOI:10.1093/biolre/ioaf009
Enhancing panoramic dental imaging with AI-driven arch surface fitting: Achieving improved clarity and accuracy through an optimal reconstruction zone
Dentomaxillofac Radiol. 2025 Jan 20:twaf006. doi: 10.1093/dmfr/twaf006. Online ahead of print.
ABSTRACT
OBJECTIVES: This study aimed to develop an automated method for generating clearer, well-aligned panoramic views by creating an optimized three-dimensional (3D) reconstruction zone centered on the teeth. The approach focused on achieving high contrast and clarity in key dental features, including tooth roots, morphology, and periapical lesions, by applying a 3D U-Net deep learning model to generate an arch surface and align the panoramic view.
METHODS: This retrospective study analyzed anonymized cone-beam CT (CBCT) scans from 312 patients (mean age 40 years; range 10-78; 41.3% male, 58.7% female). A 3D U-Net deep learning model segmented the jaw and dentition, facilitating panoramic view generation. During preprocessing, CBCT scans were binarized, and a cylindrical reconstruction method aligned the arch along a straight coordinate system, reducing data size for efficient processing. The 3D U-Net segmented the jaw and dentition in two steps, after which the panoramic view was reconstructed using 3D spline curves fitted to the arch, defining the optimal 3D reconstruction zone. This ensured the panoramic view captured essential anatomical details with high contrast and clarity. To evaluate performance, we compared contrast between tooth roots and alveolar bone and assessed intersection over union (IoU) values for tooth shapes and periapical lesions (#42, #44, #46) relative to the conventional method, demonstrating enhanced clarity and improved visualization of critical dental structures.
RESULTS: The proposed method outperformed the conventional approach, showing significant improvements in the contrast between tooth roots and alveolar bone, particularly for tooth #42. It also demonstrated higher IoU values in tooth morphology comparisons, indicating superior shape alignment. Additionally, when evaluating periapical lesions, our method achieved higher performance with thinner layers, resulting in several statistically significant outcomes. Specifically, average pixel values within lesions were higher for certain layer thicknesses, demonstrating enhanced visibility of lesion boundaries and better visualization.
CONCLUSIONS: The fully automated AI-based panoramic view generation method successfully created a 3D reconstruction zone centered on the teeth, enabling consistent observation of dental and surrounding tissue structures with high contrast across reconstruction widths. By accurately segmenting the dental arch and defining the optimal reconstruction zone, this method shows significant advantages in detecting pathological changes, potentially reducing clinician fatigue during interpretation while enhancing clinical decision-making accuracy. Future research will focus on further developing and testing this approach to ensure robust performance across diverse patient cases with varied dental and maxillofacial structures, thereby increasing the model's utility in clinical settings.
ADVANCES IN KNOWLEDGE: This study introduces a novel method for achieving clearer, well-aligned panoramic views focused on the dentition, providing significant improvements over conventional methods.
PMID:39832267 | DOI:10.1093/dmfr/twaf006
Multispectral imaging-based detection of apple bruises using segmentation network and classification model
J Food Sci. 2025 Jan;90(1):e70003. doi: 10.1111/1750-3841.70003.
ABSTRACT
Bruises can affect the appearance and nutritional value of apples and cause economic losses. Therefore, the accurate detection of bruise levels and bruise time of apples is crucial. In this paper, we proposed a method that combines a self-designed multispectral imaging system with deep learning to accurately detect the level and time of bruising on apples. To enhance the accuracy of extracting bruised regions with subtle features and irregular edges, an improved DeepLabV3+ was proposed. More specifically, depthwise separable convolution and efficient channel attention were employed, and the loss function was replaced with a focal loss. With these improvements, DeepLabV3+ achieved maximum intersection over union of 95.5% and 91.0% for segmenting bruises on two types of apples in the test set, as well as maximum F1-scores of 97.5% and 95.2%. In addition, the spectral data of the bruised regions were extracted. After spectral preprocessing, EfficientNetV2, DenseNet121, and ShuffleNetV2 were utilized to identify the bruise levels and times, and DenseNet121 exhibited the best performance. To improve the identification accuracy, an improved DenseNet121 was proposed. The learning rate was adjusted using the cosine annealing algorithm, and the squeeze-and-excitation attention mechanism and the Gaussian error linear unit activation function were utilized. Test set results demonstrated that the accuracies of the bruise levels were 99.5% and 99.1%, and those of the bruise time were 99.0% and 99.3%, respectively. This provides a new method for detecting bruise levels and bruise time on apples.
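Two of the training-side modifications named above are standard and easy to state concretely. A minimal sketch of a binary focal loss and a cosine-annealed learning-rate schedule (the hyperparameter values are conventional defaults, not the paper's):

```python
import math
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples via (1 - p_t)^gamma.
    p: predicted probabilities, y: binary labels (0/1)."""
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12)))

def cosine_annealing(step, total_steps, lr_max=1e-3, lr_min=1e-6):
    """Cosine-annealed learning rate: lr_max at step 0, lr_min at the end."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))
```

Confident correct predictions contribute almost nothing to the focal loss, which is why it helps with the class imbalance typical of small bruise regions.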
PMID:39832229 | DOI:10.1111/1750-3841.70003
Performance analysis of image retrieval system using deep learning techniques
Network. 2025 Jan 20:1-21. doi: 10.1080/0954898X.2025.2451388. Online ahead of print.
ABSTRACT
Image retrieval is the process of retrieving the images relevant to a query image from the internet with minimal search time. The problem with conventional Content-Based Image Retrieval (CBIR) systems is that they produce retrieval results for either colour images or grey-scale images alone. Moreover, such CBIR systems are complex and consume considerable time to produce significant retrieval results. These problems are overcome by the methodologies proposed in this work. In this paper, General Images (GI) and Medical Images (MI) are retrieved using a deep learning architecture. The proposed system is designed with a feature computation module, a Retrieval Convolutional Neural Network (RETCNN) module, and a distance computation algorithm. The distance computation algorithm computes the distances between the query image and the images in the datasets and produces the retrieval results. The average precision and recall for the proposed RETCNN-based CBIRS are 98.98% and 99.15% respectively for the GI category, and 99.04% and 98.89% respectively for the MI category. These experimental results demonstrate the high image retrieval rate of the proposed system.
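The distance-computation step of such a system reduces to nearest-neighbour search in the CNN feature space. A minimal sketch with Euclidean distance (the feature dimensionality and variable names are illustrative; the paper's actual distance metric is not specified here):

```python
import numpy as np

def retrieve(query_feat, db_feats, k=5):
    """Return indices of the k database images closest to the query,
    ranked by Euclidean distance in feature space."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(2)
db = rng.random((100, 128))    # 100 images, 128-D CNN features
q = db[7] + 0.001              # a query nearly identical to image 7
top5 = retrieve(q, db)
```

In practice the database features would be precomputed once by the RETCNN module, so each query costs a single pass over stored vectors.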
PMID:39832139 | DOI:10.1080/0954898X.2025.2451388
Machine learning models for predicting postoperative peritoneal metastasis after hepatocellular carcinoma rupture: a multicenter cohort study in China
Oncologist. 2025 Jan 17;30(1):oyae341. doi: 10.1093/oncolo/oyae341.
ABSTRACT
BACKGROUND: Peritoneal metastasis (PM) after the rupture of hepatocellular carcinoma (HCC) is a critical issue that negatively affects patient prognosis. Machine learning models have shown great potential in predicting clinical outcomes; however, the optimal model for this specific problem remains unclear.
METHODS: Clinical data were collected and analyzed from 522 patients with ruptured HCC who underwent surgery at 7 different medical centers. Patients were assigned to the training, validation, and test groups in a random manner, with a distribution ratio of 7:1.5:1.5. Overall, 78 (14.9%) patients experienced postoperative PM. Five different types of models, including logistic regression, support vector machines, classification trees, random forests, and deep learning (DL) models, were trained using these data and evaluated based on their receiver operating characteristic curve and area under the curve (AUC) values and F1 scores.
RESULTS: The DL models achieved the highest AUC values (10-fold training set: 0.943, validation set: 0.928, and test set: 0.892) and F1 scores (10-fold training set: 0.917, validation set: 0.908, and test set: 0.899). The results of the analysis indicate that tumor size, timing of hepatectomy, alpha-fetoprotein levels, and microvascular invasion are the most important predictive factors closely associated with the incidence of postoperative PM.
CONCLUSION: The DL model outperformed all other machine learning models in predicting postoperative PM after the rupture of HCC based on clinical data. This model provides valuable information for clinicians to formulate individualized treatment plans that can improve patient outcomes.
PMID:39832130 | DOI:10.1093/oncolo/oyae341
On the Effect of the Patient Table on Attenuation in Myocardial Perfusion Imaging SPECT
EJNMMI Phys. 2025 Jan 20;12(1):3. doi: 10.1186/s40658-024-00713-4.
ABSTRACT
BACKGROUND: The topic of the effect of the patient table on attenuation in myocardial perfusion imaging (MPI) SPECT is gaining new relevance due to deep learning methods. Existing studies of this effect are dated and scarce, and consider only phantom measurements, not patient studies. This study investigates the effect of the patient table on attenuation based on the difference between reconstructions of phantom scans and polar maps of patient studies.
METHODS: Jaszczak phantom scans are acquired according to quality control and MPI procedures. An algorithm is developed to automatically remove the patient table from the CT for attenuation correction. The scans are then reconstructed with attenuation correction either with or without the patient table in the CT. The reconstructions are compared qualitatively and on the basis of their percentage difference. In addition, a small retrospective cohort of 15 patients is examined by comparing the resulting polar maps. Polar maps are compared qualitatively and based on the segment perfusion scores.
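The reconstruction comparison described above amounts to a voxelwise percentage difference between the two attenuation-corrected volumes. A minimal sketch (array values are illustrative; the reference volume is taken to be the with-table reconstruction):

```python
import numpy as np

def percentage_difference(recon_with_table, recon_without_table):
    """Voxelwise percentage difference between two reconstructions,
    relative to the reconstruction with the table included."""
    eps = 1e-12  # avoid division by zero in empty voxels
    return 100.0 * np.abs(recon_with_table - recon_without_table) / (
        np.abs(recon_with_table) + eps)

a = np.array([100.0, 200.0, 50.0])   # with-table voxel intensities
b = np.array([ 99.0, 190.0, 50.0])   # without-table voxel intensities
pd = percentage_difference(a, b)
```

In the study, this map stayed below 17.5% everywhere, with the largest values in the lower part of the phantom nearest the table.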
RESULTS: The phantom reconstructions look qualitatively similar in both the quality control and MPI procedures. The percentage difference is highest in the lower part of the phantom, but it always remains below 17.5%. Polar maps from patient studies also look qualitatively similar. Furthermore, the segment scores are not significantly different (p=0.83).
CONCLUSIONS: The effect of the patient table on attenuation in MPI SPECT is negligible.
PMID:39832088 | DOI:10.1186/s40658-024-00713-4
Perfusion estimation from dynamic non-contrast computed tomography using self-supervised learning and a physics-inspired U-net transformer architecture
Int J Comput Assist Radiol Surg. 2025 Jan 20. doi: 10.1007/s11548-025-03323-2. Online ahead of print.
ABSTRACT
PURPOSE: Pulmonary perfusion imaging is a key lung health indicator with clinical utility as a diagnostic and treatment planning tool. However, current nuclear medicine modalities face challenges like low spatial resolution and long acquisition times, which limit clinical utility to non-emergency settings and often place an extra financial burden on the patient. This study introduces a novel deep learning approach to predict perfusion imaging from non-contrast inhale and exhale computed tomography scans (IE-CT).
METHODS: We developed a U-Net Transformer architecture modified for Siamese IE-CT inputs, integrating insights from physical models and utilizing a self-supervised learning strategy tailored for lung function prediction. We aggregated 523 IE-CT images from nine different 4DCT imaging datasets for self-supervised training, aiming to learn a low-dimensional IE-CT feature space by reconstructing image volumes from random data augmentations. Supervised training for perfusion prediction used this feature space and transfer learning on a cohort of 44 patients who had both IE-CT and single-photon emission CT (SPECT/CT) perfusion scans.
RESULTS: Testing with random bootstrapping, we estimated the mean and standard deviation of the spatial Spearman correlation between our predictions and the ground truth (SPECT perfusion) to be 0.742 ± 0.037, with a mean median correlation of 0.792 ± 0.036. These results represent a new state-of-the-art accuracy for predicting perfusion imaging from non-contrast CT.
CONCLUSION: Our approach combines low-dimensional feature representations of both inhale and exhale images into a deep learning model, aligning with previous physical modeling methods for characterizing perfusion from IE-CT. This likely contributes to the high spatial correlation with ground truth. With further development, our method could provide faster and more accurate lung function imaging, potentially expanding its clinical applications beyond what is currently possible with nuclear medicine.
PMID:39832070 | DOI:10.1007/s11548-025-03323-2
Deep learning-based MVIT-MLKA model for accurate classification of pancreatic lesions: a multicenter retrospective cohort study
Radiol Med. 2025 Jan 20. doi: 10.1007/s11547-025-01949-5. Online ahead of print.
ABSTRACT
BACKGROUND: Accurate differentiation between benign and malignant pancreatic lesions is critical for effective patient management. This study aimed to develop and validate a novel deep learning network using baseline computed tomography (CT) images to predict the classification of pancreatic lesions.
METHODS: This retrospective study included 864 patients (422 men, 442 women) with confirmed histopathological results across three medical centers, forming a training cohort, internal testing cohort, and external validation cohort. A novel hybrid model, Multi-Scale Large Kernel Attention with Mobile Vision Transformer (MVIT-MLKA), was developed, integrating CNN and Transformer architectures to classify pancreatic lesions. The model's performance was compared with traditional machine learning methods and advanced deep learning models. We also evaluated the diagnostic accuracy of radiologists with and without the assistance of the optimal model. Model performance was assessed through discrimination, calibration, and clinical applicability.
RESULTS: The MVIT-MLKA model demonstrated superior performance in classifying pancreatic lesions, achieving an AUC of 0.974 (95% CI 0.967-0.980) in the training set, 0.935 (95% CI 0.915-0.954) in the internal testing set, and 0.924 (95% CI 0.902-0.945) in the external validation set, outperforming traditional models and other deep learning models (P < 0.05). Radiologists aided by the MVIT-MLKA model showed significant improvements in diagnostic accuracy and sensitivity compared to those without model assistance (P < 0.05). Grad-CAM visualization enhanced model interpretability by effectively highlighting key lesion areas.
CONCLUSION: The MVIT-MLKA model efficiently differentiates between benign and malignant pancreatic lesions, surpassing traditional methods and significantly improving radiologists' diagnostic performance. The integration of this advanced deep learning model into clinical practice has the potential to reduce diagnostic errors and optimize treatment strategies.
PMID:39832039 | DOI:10.1007/s11547-025-01949-5
scHiClassifier: a deep learning framework for cell type prediction by fusing multiple feature sets from single-cell Hi-C data
Brief Bioinform. 2024 Nov 22;26(1):bbaf009. doi: 10.1093/bib/bbaf009.
ABSTRACT
Single-cell high-throughput chromosome conformation capture (Hi-C) technology enables capturing chromosomal spatial structure information at the cellular level. However, to effectively investigate changes in chromosomal structure across different cell types, there is a requisite for methods that can identify cell types utilizing single-cell Hi-C data. Current frameworks for cell type prediction based on single-cell Hi-C data are limited, often struggling with feature interpretability and biological significance, and lacking convincing and robust validation of classification performance. In this study, we propose four new feature sets based on the contact matrix with clear interpretability and biological significance. Furthermore, we develop a novel deep learning framework named scHiClassifier based on a multi-head self-attention encoder, 1D convolution, and feature fusion, which integrates information from these four feature sets to predict cell types accurately. Through comprehensive comparison experiments with benchmark frameworks on six datasets, we demonstrate the superior classification performance and the universality of the scHiClassifier framework. We further assess the robustness of scHiClassifier through data perturbation experiments and data dropout experiments. Moreover, we demonstrate that using all feature sets in the scHiClassifier framework yields optimal performance, supported by comparisons of different feature set combinations. The effectiveness and superiority of the multiple-feature-set extraction are demonstrated by comparison with four unsupervised dimensionality reduction methods. Additionally, we analyze the importance of different feature sets and chromosomes using the "SHapley Additive exPlanations" method. Furthermore, the accuracy and reliability of the scHiClassifier framework in cell classification for single-cell Hi-C data are supported through enrichment analysis.
The source code of scHiClassifier is freely available at https://github.com/HaoWuLab-Bioinformatics/scHiClassifier.
PMID:39831891 | DOI:10.1093/bib/bbaf009
ds-FCRN: three-dimensional dual-stream fully convolutional residual networks and transformer-based global-local feature learning for brain age prediction
Brain Struct Funct. 2025 Jan 18;230(2):32. doi: 10.1007/s00429-024-02889-y.
ABSTRACT
The brain undergoes atrophy and cognitive decline with advancing age. The utilization of brain age prediction represents a pioneering methodology in the examination of brain aging. This study aims to develop a deep learning model with high predictive accuracy and interpretability for brain age prediction tasks. The gray matter (GM) density maps obtained from T1 MRI data of 16,377 healthy participants aged 45 to 82 years from the UKB database were included in this study (mean age, 64.27 ± 7.52 , 7811 men). We propose an innovative deep learning architecture for predicting brain age based on GM density maps. The architecture combines a 3D dual-stream fully convolutional residual network (ds-FCRN) with a Transformer-based global-local feature learning paradigm to enhance prediction accuracy. Moreover, we employed Shapley values to elucidate the influence of various brain regions on prediction precision. On a test set of 3,276 healthy subjects (mean age, 64.15 ± 7.45 , 1561 men), our 3D ds-FCRN model achieved a mean absolute error of 2.2 years in brain age prediction, outperforming existing models on the same dataset. The posterior interpretation revealed that the temporal lobe plays the most significant role in the brain age prediction process, while frontal lobe aging is associated with the greatest number of lifestyle factors. Our designed 3D ds-FCRN model achieved high predictive accuracy and high decision transparency. The brain age vectors constructed using Shapley values provided brain region-level insights into life factors associated with abnormal brain aging.
PMID:39826018 | DOI:10.1007/s00429-024-02889-y
Development and Validation of KCPREDICT: A Deep Learning Model for Early Detection of Coronary Artery Lesions in Kawasaki Disease Patients
Pediatr Cardiol. 2025 Jan 18. doi: 10.1007/s00246-024-03762-9. Online ahead of print.
ABSTRACT
Kawasaki disease (KD) is a febrile vasculitis disorder, with coronary artery lesions (CALs) being the most severe complication. Early detection of CALs is challenging due to limitations in echocardiographic equipment (UCG). This study aimed to develop and validate an artificial intelligence algorithm to distinguish CALs in KD patients and support diagnostic decision-making at admission. A deep learning algorithm named KCPREDICT was developed using 24 features, including basic patient information, five classic KD clinical signs, and 14 laboratory measurements. Data were collected from patients diagnosed with KD between February 2017 and May 2023 at Shanghai Children's Medical Center. Patients were split into training and internal validation cohorts at an 80:20 ratio, and fivefold cross-validation was employed to assess model performance. Among the 1474 KD cases, the decision tree model performed best during the full feature experiment, achieving an accuracy of 95.42%, a precision of 98.83%, a recall of 93.58%, an F1 score of 96.14%, and an area under the receiver operating characteristic curve (AUROC) of 96.00%. The KCPREDICT algorithm can aid frontline clinicians in distinguishing KD patients with and without CALs, facilitating timely treatment and prevention of severe complications. The use of the complete set of 24 diagnostic features is the optimal choice for predicting CALs in children with KD.
PMID:39825907 | DOI:10.1007/s00246-024-03762-9
VGX: VGG19-Based Gradient Explainer Interpretable Architecture for Brain Tumor Detection in Microscopy Magnetic Resonance Imaging (MMRI)
Microsc Res Tech. 2025 Jan 17. doi: 10.1002/jemt.24809. Online ahead of print.
ABSTRACT
The development of deep learning algorithms has transformed medical image analysis, especially in brain tumor recognition. This research introduces a robust automatic microbrain tumor identification method utilizing the VGG16 deep learning model. Microscopy magnetic resonance imaging (MMRI) scans extract detailed features, providing multi-modal insights. VGG16, known for its depth and high performance, is utilized for this purpose. The study demonstrates the model's potential for precise and effective diagnosis by examining how well it can differentiate between areas of normal brain tissue and cancerous regions, leveraging both MRI and microscopy data. We describe in full the pre-processing actions taken to improve the quality of input data and maximize model efficiency. A carefully selected dataset, incorporating diverse tumor sizes and types from both microscopy and MRI sources, is used during the training phase to ensure representativeness. The proposed modified VGG19 model achieved 98.81% validation accuracy. Despite the good accuracy, the interpretation of the results remains questionable. The proposed methodology integrates explainable AI (XAI) for brain tumor detection to interpret system decisions. The proposed study uses a gradient explainer to interpret classification results. Comparative statistical analysis highlights the effectiveness of the proposed explainer model over other XAI techniques.
PMID:39825619 | DOI:10.1002/jemt.24809
TagGen: Diffusion-based generative model for cardiac MR tagging super resolution
Magn Reson Med. 2025 Jan 17. doi: 10.1002/mrm.30422. Online ahead of print.
ABSTRACT
PURPOSE: The aim of the work is to develop a cascaded diffusion-based super-resolution model for low-resolution (LR) MR tagging acquisitions, which is integrated with parallel imaging to achieve highly accelerated MR tagging while enhancing the tag grid quality of low-resolution images.
METHODS: We introduced TagGen, a diffusion-based conditional generative model that uses low-resolution MR tagging images as guidance to generate corresponding high-resolution tagging images. The model was developed on 50 patients with long-axis-view, high-resolution tagging acquisitions. During training, we retrospectively synthesized LR tagging images using an undersampling rate (R) of 3.3 with truncated outer phase-encoding lines. During inference, we evaluated the performance of TagGen and compared it with REGAIN, a generative adversarial network-based super-resolution model that was previously applied to MR tagging. In addition, we prospectively acquired data from 6 subjects with three heartbeats per slice using 10-fold acceleration achieved by combining low-resolution R = 3.3 with GRAPPA-3 (generalized autocalibrating partially parallel acquisitions 3).
RESULTS: For synthetic data (R = 3.3), TagGen outperformed REGAIN in terms of normalized root mean square error, peak signal-to-noise ratio, and structural similarity index (p < 0.05 for all). For prospectively 10-fold accelerated data, TagGen provided better tag grid quality, signal-to-noise ratio, and overall image quality than REGAIN, as scored by two (blinded) radiologists (p < 0.05 for all).
CONCLUSIONS: We developed a diffusion-based generative super-resolution model for MR tagging images and demonstrated its potential to integrate with parallel imaging to reconstruct highly accelerated cine MR tagging images acquired in three heartbeats with enhanced tag grid quality.
PMID:39825522 | DOI:10.1002/mrm.30422
Development of a Deep Learning Tool to Support the Assessment of Thyroid Follicular Cell Hypertrophy in the Rat
Toxicol Pathol. 2025 Jan 17:1926233241309328. doi: 10.1177/01926233241309328. Online ahead of print.
ABSTRACT
Thyroid tissue is sensitive to the effects of endocrine disrupting substances, and this represents a significant health concern. Histopathological analysis of tissue sections of the rat thyroid gland remains the gold standard for the evaluation of agrochemical effects on the thyroid. However, there is a high degree of variability in the appearance of the rat thyroid gland, and toxicologic pathologists often struggle to decide on and consistently apply a threshold for recording low-grade thyroid follicular hypertrophy. This research project developed a deep learning image analysis solution that provides a quantitative score based on the morphological measurements of individual follicles that can be integrated into the standard pathology workflow. To achieve this, a U-Net convolutional deep learning neural network was used that not only identifies the various tissue components but also delineates individual follicles. Further steps to process the raw individual follicle data were developed using empirical models optimized to produce thyroid activity scores that were shown to be superior to the mean epithelial area approach when compared with pathologists' scores. These scores can be used for pathologist decision support using appropriate statistical methods to assess the presence or absence of low-grade thyroid hypertrophy at the group level.
PMID:39825517 | DOI:10.1177/01926233241309328
Preparing physiotherapists for the future: the development and evaluation of an innovative curriculum
BMC Med Educ. 2025 Jan 17;25(1):83. doi: 10.1186/s12909-024-06537-1.
ABSTRACT
BACKGROUND: Educational innovation in health professional education is needed to keep up with rapidly changing healthcare systems and societal needs. This study evaluates the implementation of PACE, an innovative curriculum designed by the physiotherapy department of the HAN University of Applied Sciences in The Netherlands. The PACE concept features an integrated approach to learning and assessment based on pre-set learning outcomes, personalized learning goals, flexible learning routes, and programmatic assessment. PACE distinguishes itself from traditional education because of the flexible learning routes, vertical organization in learning communities, absence of pre-defined learning activities and class schedules, and a culture of continuous learning and development. PACE is based on three guiding principles: 1) flexible and varied, 2) self-directed and collaborative, 3) future-oriented. PACE was implemented in 2021 for first-year students. This study evaluates the implementation to inform future curriculum development.
METHODS: A sequential explanatory mixed methods design was used to evaluate the implementation of PACE using a questionnaire, focus groups, in-depth interviews, and a national progress test allowing for benchmarking results. Participants were undergraduate physiotherapy students of cohort 2021-2022, the first group who experienced PACE and teachers involved with this cohort. Questionnaire data were analyzed using descriptive statistics. To compare mean total scores of the national progress test between four different universities a one-way ANOVA was conducted including a post-hoc analysis. Reflexive thematic analysis guidelines were applied to analyze the interview data.
RESULTS: In total, 82 first-year students (44.6%) of cohort 2021-2022 and 36 teachers (60%) completed the questionnaire. Results show that the guiding principles were implemented as intended. Results of the national progress test on knowledge and clinical reasoning showed that students of the HAN University performed well compared to other universities. Thematic analysis of interviews and focus groups resulted in three themes and nine subthemes: 1) navigating a personalized curriculum, 2) caring and sharing, and 3) shaping professional identity. PACE contributed positively to students' intrinsic motivation, learning joy, identity development, and life-long learning skills. Areas for improvement were self-directed learning support, and teaching strategies to prompt deep learning.
CONCLUSION: The evaluation showed that the guiding principles of PACE were implemented as intended and that the innovation positively contributed to student learning.
PMID:39825299 | DOI:10.1186/s12909-024-06537-1
A multi-stage weakly supervised design for spheroid segmentation to explore mesenchymal stem cell differentiation dynamics
BMC Bioinformatics. 2025 Jan 17;26(1):20. doi: 10.1186/s12859-024-06031-x.
NO ABSTRACT
PMID:39825265 | DOI:10.1186/s12859-024-06031-x
Explainable deep learning and virtual evolution identifies antimicrobial peptides with activity against multidrug-resistant human pathogens
Nat Microbiol. 2025 Jan 17. doi: 10.1038/s41564-024-01907-3. Online ahead of print.
ABSTRACT
Artificial intelligence (AI) is a promising approach to identify new antimicrobial compounds in diverse microbial species. Here we developed an AI-based, explainable deep learning model, EvoGradient, that predicts the potency of antimicrobial peptides (AMPs) and virtually modifies peptide sequences to produce more potent AMPs, akin to in silico directed evolution. We applied this model to peptides encoded in low-abundance human oral bacteria, resulting in the virtual evolution of 32 peptides into potent AMPs. Of these, the 6 most effective were synthesized and tested against multidrug-resistant pathogens and demonstrated activity against carbapenem-resistant species Escherichia coli, Klebsiella pneumoniae and Acinetobacter baumannii, and vancomycin-resistant Enterococcus faecium. The most potent AMP, pep-19-mod, was validated in vivo, achieving over 95% reduction in bacterial loads in mouse models of thigh infection through both systemic and local administration. Our approach advances the automatic identification and optimization of AMPs.
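The "in silico directed evolution" idea described above can be illustrated with a toy greedy loop: repeatedly apply the single-residue substitution that most improves a potency score. The scoring function here is a crude net-charge proxy standing in for EvoGradient's learned predictor (an assumption for illustration, not the paper's actual model):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def potency(seq: str) -> float:
    """Toy potency proxy: fraction of cationic (K/R) minus anionic (D/E)
    residues. A placeholder for a learned AMP-potency predictor."""
    pos = sum(seq.count(a) for a in "KR")
    neg = sum(seq.count(a) for a in "DE")
    return (pos - neg) / len(seq)

def virtual_evolution(seq: str, steps: int = 5) -> str:
    """Greedy in silico evolution: at each step, keep the single-residue
    substitution that most increases the score; stop at a local optimum."""
    for _ in range(steps):
        best_seq, best_score = seq, potency(seq)
        for i in range(len(seq)):
            for aa in AMINO_ACIDS:
                cand = seq[:i] + aa + seq[i + 1:]
                if potency(cand) > best_score:
                    best_seq, best_score = cand, potency(cand)
        if best_seq == seq:  # no improving mutation found
            break
        seq = best_seq
    return seq
```

EvoGradient itself uses model gradients to guide the edits rather than exhaustive greedy search; the sketch only conveys the optimization loop.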
PMID:39825096 | DOI:10.1038/s41564-024-01907-3
Pre-trained artificial intelligence-aided analysis of nanoparticles using the segment anything model
Sci Rep. 2025 Jan 17;15(1):2341. doi: 10.1038/s41598-025-86327-x.
ABSTRACT
Complex structures can be understood as compositions of smaller, more basic elements. The characterization of these structures requires an analysis of their constituents and their spatial configuration. Examples can be found in systems as diverse as galaxies, alloys, living tissues, cells, and even nanoparticles. In the latter field, the most challenging examples are those of subdivided particles and particle-based materials, due to the close proximity of their constituents. The characterization of such nanostructured materials is typically conducted through the utilization of micrographs. Despite the importance of micrograph analysis, the extraction of quantitative data is often constrained. The presented effort demonstrates the morphological characterization of subdivided particles utilizing a pre-trained artificial intelligence model. The results are validated using three types of nanoparticles: nanospheres, dumbbells, and trimers. The automated segmentation of whole particles, as well as their individual subdivisions, is investigated using the Segment Anything Model, which is based on a pre-trained neural network. The subdivisions of the particles are organized into sets, which presents a novel approach in this field. These sets collate data derived from a large ensemble of specific particle domains indicating to which particle each subdomain belongs. The arrangement of subdivisions into sets to characterize complex nanoparticles expands the information gathered from microscopy analysis. The presented method, which employs a pre-trained deep learning model, outperforms traditional techniques by circumventing systemic errors and human bias. It can effectively automate the analysis of particles, thereby providing more accurate and efficient results.
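The organization of particle subdivisions into sets, as described above, amounts to assigning each subdomain mask to the whole-particle mask it overlaps most. A minimal sketch with masks represented as sets of pixel coordinates (the paper's actual pipeline operates on Segment Anything Model outputs, which this simplifies):

```python
def group_subdomains(particles, subdomains):
    """Assign each subdomain mask to the particle whose mask it overlaps
    most, collating subdivisions into per-particle sets.

    particles:  dict mapping particle id -> set of (row, col) pixels
    subdomains: list of sets of (row, col) pixels
    """
    groups = {pid: [] for pid in particles}
    for sid, sub in enumerate(subdomains):
        # Pick the parent particle with maximal pixel overlap
        best = max(particles, key=lambda pid: len(sub & particles[pid]))
        if sub & particles[best]:  # skip subdomains touching no particle
            groups[best].append(sid)
    return groups

# Two particles: a 2x2 square and a 1x2 bar; three subdomains
particles = {0: {(0, 0), (0, 1), (1, 0), (1, 1)}, 1: {(5, 5), (5, 6)}}
subs = [{(0, 0), (0, 1)}, {(1, 0), (1, 1)}, {(5, 5)}]
groups = group_subdomains(particles, subs)
```

For a dumbbell, for example, the two lobes' masks would land in the same particle's set, which is what lets per-particle statistics be computed over the ensemble.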
PMID:39825089 | DOI:10.1038/s41598-025-86327-x
Epicardial adipose tissue, cardiac damage, and mortality in patients undergoing TAVR for aortic stenosis
Int J Cardiovasc Imaging. 2025 Jan 18. doi: 10.1007/s10554-024-03307-4. Online ahead of print.
ABSTRACT
Computed tomography (CT)-derived Epicardial Adipose Tissue (EAT) is linked to cardiovascular disease outcomes. However, its role in patients undergoing Transcatheter Aortic Valve Replacement (TAVR) and the interplay with aortic stenosis (AS) cardiac damage (CD) remains unexplored. We aim to investigate the relationship between EAT characteristics, AS CD, and all-cause mortality. We retrospectively included consecutive patients who underwent CT-TAVR followed by TAVR. EAT volume and density were estimated using a deep-learning platform and CD was assessed using echocardiography. Patients were classified according to low/high EAT volume and density. All-cause mortality at 4 years was compared using Kaplan-Meier and Cox regression analyses. A total of 666 patients (median age 81 [74-86] years; 54% female) were included. After a median follow-up of 1.28 (IQR 0.53-2.57) years, 11.7% (n = 77) of patients died. EAT volume decreased (p = 0.017) and density increased (p < 0.001) with worsening AS CD. Patients with low EAT volume (< 49 cm³) and high density (≥ -86 HU) had higher all-cause mortality (log-rank p = 0.02 and p = 0.01, respectively), even when adjusted for age, sex, and clinical characteristics (HR 1.71, p = 0.02 and HR 1.73, p = 0.03, respectively). When CD was added to the model, low EAT volume (HR 1.67, p = 0.03) and CD stages 3 and 4 (HR 3.14, p = 0.03) remained associated with all-cause mortality. In patients with AS undergoing TAVR, CT-derived low EAT volume and high density were independently associated with increased 4-year mortality and worse CD stage. Only EAT volume remained associated when adjusted for CD.
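The survival comparison above rests on the Kaplan-Meier estimator, S(t) = prod over event times t_i <= t of (1 - d_i/n_i), where d_i is the number of deaths at t_i and n_i the number still at risk. A minimal pure-Python sketch on toy follow-up data (not the study's cohort; real analyses would use a package such as lifelines, which also provides the log-rank test and Cox regression):

```python
def kaplan_meier(data):
    """Kaplan-Meier survival curve.

    data: list of (time, event) pairs, event = 1 for death, 0 for censoring.
    Returns [(event_time, survival_probability), ...] in time order.
    """
    event_times = sorted({t for t, e in data if e == 1})
    curve, s = [], 1.0
    for t in event_times:
        n = sum(1 for ti, _ in data if ti >= t)           # at risk at t
        d = sum(1 for ti, e in data if ti == t and e == 1)  # deaths at t
        s *= 1 - d / n
        curve.append((t, s))
    return curve

# Toy cohort: deaths at t=1, 2, 4; one subject censored at t=3
curve = kaplan_meier([(1, 1), (2, 1), (3, 0), (4, 1)])
```

Note how the censored subject still contributes to the risk sets at t = 1 and t = 2 but generates no drop of its own, which is the estimator's key property.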
PMID:39825067 | DOI:10.1007/s10554-024-03307-4