Deep learning
Modeling CAPRI Targets of Round 55 by Combining AlphaFold and Docking
Proteins. 2025 Jun 6. doi: 10.1002/prot.26853. Online ahead of print.
ABSTRACT
In recent years, the field of structural biology has seen remarkable advancements, particularly in the modeling of protein tertiary and quaternary structures. The AlphaFold deep learning approach revolutionized protein structure prediction by achieving near-experimental accuracy on many targets. This paper presents a detailed account of structural modeling of oligomeric targets in Round 55 of CAPRI by combining deep learning-based predictions (AlphaFold2 multimer pipeline) with traditional docking techniques in a hybrid approach to protein-protein docking. To complement the AlphaFold models generated for the given oligomeric state of the targets, we built docking predictions by combining models generated for lower-oligomeric states: dimers for trimeric targets and trimers/dimers for tetrameric targets. In addition, we used a template-based docking procedure applied to AlphaFold-predicted structures of the monomers. We analyzed the clustering of the generated AlphaFold models, the confidence in the prediction of intra- and inter-chain residue-residue contacts, and the correlation between the stability of the AlphaFold predictions and the quality of the submitted models.
PMID:40476317 | DOI:10.1002/prot.26853
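As a minimal illustration of the inter-chain contact analysis mentioned above, residue-residue contacts between two chains can be counted from C-alpha coordinates. This is a hypothetical simplification: the paper works with full AlphaFold models and their predicted contact confidences, not raw coordinate lists.

```python
from math import dist

def interchain_contacts(chain_a, chain_b, cutoff=8.0):
    """Count residue pairs, one per chain, whose C-alpha atoms lie within
    `cutoff` angstroms -- a common proxy for inter-chain contacts."""
    return sum(1 for a in chain_a for b in chain_b if dist(a, b) <= cutoff)

# Toy coordinates (angstroms); a real pipeline would parse PDB/mmCIF models.
chain_a = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
chain_b = [(6.0, 0.0, 0.0), (30.0, 0.0, 0.0)]
print(interchain_contacts(chain_a, chain_b))  # → 2
```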
Performance of ChatGPT-3.5 and ChatGPT-4 in Solving Questions Based on Core Concepts in Cardiovascular Physiology
Cureus. 2025 May 6;17(5):e83552. doi: 10.7759/cureus.83552. eCollection 2025 May.
ABSTRACT
Background Medical students often struggle to apply previously learned concepts to new problems in subjects such as cardiovascular physiology. ChatGPT, an AI chatbot trained through deep learning, can analyze basic problems and produce human-like language in various subjects. Many medical schools give students multiple-choice questions (MCQs) before exams, but due to time constraints, instructors frequently lack the resources necessary to adequately explain the practice questions. Even when explanations are given, they might not give students sufficient information to grasp the concepts completely. This study aimed to examine ChatGPT's ability to solve various reasoning problems based on the core concepts of cardiovascular physiology. Materials and methods Multiple-choice questions were presented manually to both chatbots (ChatGPT-4 and ChatGPT-3.5), and the answers generated were compared with the faculty-led answer key using various statistical tests. Results The accuracy rates of ChatGPT-4 and ChatGPT-3.5 were 83.33% and 60%, respectively, a statistically significant difference. Compared to ChatGPT-3.5, ChatGPT-4's explanations of the responses were substantially more appropriate. ChatGPT-4 also performed better than ChatGPT-3.5 in certain core concept areas such as mass balance (75% vs. 50%), scientific reasoning (60% vs. 40%), and homeostasis (100% vs. 66.67%). Conclusion When it came to responding to concept-based questions about cardiovascular physiology, ChatGPT-4 outperformed ChatGPT-3.5. However, to ensure accuracy, faculty members should review the generated explanations, and the growing application of generative AI as a virtual-assisted learning approach in medical education needs to be carefully considered.
PMID:40476113 | PMC:PMC12138729 | DOI:10.7759/cureus.83552
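The headline accuracy comparison above reduces to scoring each chatbot's answers against the faculty answer key. A minimal sketch, with hypothetical five-question data (not the study's actual items):

```python
def mcq_accuracy(predictions, answer_key):
    """Fraction of MCQ responses that match the faculty answer key."""
    correct = sum(p == k for p, k in zip(predictions, answer_key))
    return correct / len(answer_key)

# Hypothetical illustration only.
key   = list("ABCDA")
gpt4  = list("ABCDB")
gpt35 = list("ABDDB")
print(mcq_accuracy(gpt4, key), mcq_accuracy(gpt35, key))  # → 0.8 0.6
```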
Dental practitioners versus artificial intelligence software in assessing alveolar bone loss using intraoral radiographs
J Taibah Univ Med Sci. 2025 May 9;20(3):272-279. doi: 10.1016/j.jtumed.2025.04.001. eCollection 2025 Jun.
ABSTRACT
OBJECTIVES: Integrating artificial intelligence (AI) in the dental field can potentially enhance the efficiency of dental care. However, few studies have investigated whether AI software can achieve results comparable to those obtained by dental practitioners (general practitioners (GPs) and specialists) when assessing alveolar bone loss in a clinical setting. Thus, this study compared the performance of AI in assessing periodontal bone loss with those of GPs and specialists.
METHODS: This comparative cross-sectional study evaluated the performance of dental practitioners and AI software in assessing alveolar bone loss. Radiographs were randomly selected to ensure representative samples. Dental practitioners independently evaluated the radiographs, and the AI software "Second Opinion Software" was tested using the same set of radiographs evaluated by the dental practitioners. The results produced by the AI software were then compared with the baseline values to measure their accuracy and allow direct comparison with the performance of human specialists.
RESULTS: The survey received 149 responses; each respondent answered 10 questions comparing the measurements made by AI and by dental practitioners when assessing the amount of bone loss radiographically. The mean estimates of the participants had a moderate positive correlation with the radiographic measurements (rho = 0.547, p < 0.001) and a weaker but still significant correlation with AI measurements (rho = 0.365, p < 0.001). AI measurements had a stronger positive correlation with the radiographic measurements (rho = 0.712, p < 0.001) compared with their correlation with the estimates of dental practitioners.
CONCLUSION: This study highlights the capacity of AI software to enhance the accuracy and efficiency of radiograph-based evaluations of alveolar bone loss. Dental practitioners are vital for the clinical experience but AI technology provides a consistent and replicable methodology. Future collaborations between AI experts, researchers, and practitioners could potentially optimize patient care.
PMID:40476084 | PMC:PMC12136790 | DOI:10.1016/j.jtumed.2025.04.001
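The rho values reported above are Spearman rank correlations. For tie-free data the coefficient has a simple closed form, sketched here in plain Python (SciPy's `spearmanr` would be the practical choice and also handles ties):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free data: 1 - 6*sum(d^2) / (n(n^2-1)),
    where d is the per-item difference in ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly monotonic toy data gives rho = 1.0.
print(spearman_rho([1.2, 3.4, 2.2, 5.0], [10, 30, 20, 50]))  # → 1.0
```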
Prediction of the space group and cell volume by training a convolutional neural network with primitive 'ideal' diffraction profiles and its application to 'real' experimental data
J Appl Crystallogr. 2025 Apr 25;58(Pt 3):718-730. doi: 10.1107/S1600576725002419. eCollection 2025 Jun 1.
ABSTRACT
This study describes a deep learning approach to predict the space group and unit-cell volume of inorganic crystals from their powder X-ray diffraction profiles. Using an inorganic crystallographic database, convolutional neural network (CNN) models were successfully constructed with the δ-function-like 'ideal' X-ray diffraction profiles derived solely from the intrinsic properties of the crystal structure, which are dependent on neither the incident X-ray wavelength nor the line shape of the profiles. We examined how the statistical metrics (e.g. the prediction accuracy, precision and recall) are influenced by the ensemble averaging technique and the multi-task learning approach; six CNN models were created from an identical data set for the former, and the space group classification was coupled with the unit-cell volume prediction in a CNN architecture for the latter. The CNN models trained in the 'ideal' world were tested with 'real' X-ray profiles for eleven materials such as TiO2, LiNiO2 and LiMnO2. While the models mostly fared well in the 'real' world, the cases at odds were scrutinized to elucidate the causes of the mismatch. Specifically for Li2MnO3, detailed crystallographic considerations revealed that the mismatch can stem from the state of the specific material and/or from the quality of the experimental data, and not from the CNN models. The present study demonstrates that we can obviate the need for emulating experimental diffraction profiles in training CNN models to elicit structural information, thereby focusing efforts on further improvements.
PMID:40475932 | PMC:PMC12135985 | DOI:10.1107/S1600576725002419
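The key idea above is that the CNN input is a δ-function-like profile built from intrinsic peak positions, independent of wavelength and line shape. A hedged sketch of such an input vector, binning Bragg peaks on a Q axis (Q = 2π/d); the `q_max` and `n_bins` values are illustrative, not the paper's:

```python
def ideal_profile(q_peaks, intensities, q_max=10.0, n_bins=100):
    """Delta-function-like 'ideal' diffraction profile: Bragg peaks placed on a
    wavelength-independent Q axis and binned into a fixed-length vector
    suitable as 1-D CNN input."""
    profile = [0.0] * n_bins
    for q, intensity in zip(q_peaks, intensities):
        b = min(int(q / q_max * n_bins), n_bins - 1)
        profile[b] += intensity
    return profile

p = ideal_profile([1.0, 5.0], [2.0, 3.0])
print(p[10], p[50], sum(p))  # → 2.0 3.0 5.0
```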
YOLO-ODD: an improved YOLOv8s model for onion foliar disease detection
Front Plant Sci. 2025 May 22;16:1551794. doi: 10.3389/fpls.2025.1551794. eCollection 2025.
ABSTRACT
Onion crops are affected by many diseases at different stages of growth, resulting in significant yield loss. Early detection of diseases helps in the timely adoption of management practices, thereby reducing yield losses. However, manual identification of plant diseases requires considerable effort and is prone to mistakes. Thus, adopting cutting-edge technologies such as machine learning (ML) and deep learning (DL) can help overcome these difficulties by enabling the early detection of plant diseases. This study presents a cross-layer integration of the YOLOv8 architecture for the detection of onion leaf diseases, viz. anthracnose, Stemphylium blight, purple blotch (PB), and Twister disease. The experimental results demonstrate that the customized YOLOv8 model YOLO-ODD, integrated with CBAM and DTAH attention modules, outperforms the YOLOv5 and YOLOv8 base models in most disease categories, particularly in detecting anthracnose, purple blotch, and Twister disease. The proposed model achieved the highest overall accuracy of 77.30%, precision of 81.50%, and recall of 72.10%; this YOLOv8-based deep learning approach can thus detect and classify major onion foliar diseases while optimizing for accuracy, real-time application, and adaptability in diverse field conditions.
PMID:40475906 | PMC:PMC12137250 | DOI:10.3389/fpls.2025.1551794
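The precision and recall figures above depend on matching predicted boxes to ground truth by overlap. The standard criterion is intersection-over-union (IoU); a minimal sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes -- the overlap
    criterion YOLO-style detectors use to match predictions to ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (= 1/7)
```

A detection typically counts as a true positive when IoU with a ground-truth box exceeds a threshold such as 0.5.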
Artificial intelligence driven mental health diagnosis based on physiological signals
MethodsX. 2025 May 7;14:103358. doi: 10.1016/j.mex.2025.103358. eCollection 2025 Jun.
ABSTRACT
Mental health disorders like stress, anxiety, and depression are increasing rapidly. Diagnosing such disorders early is therefore a matter of concern, and a cost-effective and efficient detection method is needed. With this aim, stress is monitored in this work with the help of physiological signals. This work uses a machine learning approach to distinguish stressed from non-stressed subjects, with the goal of automatically detecting stressful situations in humans using physiological data collected during anxiety-inducing scenarios. Diagnosing stress at an early stage can help minimize the risk of stress-related issues and enhance the overall well-being of the patient. Traditional methods for diagnosing stress are based on patient self-reporting, an approach with clear limitations. This research therefore aims to develop a stress-assessment model with a machine learning approach.•Stress and anxiety have become a prevalent issue affecting individuals' well-being and productivity. Early detection of these conditions is crucial for timely intervention and prevention of associated health complications. This paper presents a machine learning model for stress diagnosis.•The dataset consists of recordings obtained from individuals under different stress levels. The physiological signals used are ECG, EMG, HR, RESP, foot GSR, and hand GSR. Machine learning algorithms, namely decision tree and kernel support vector machine, are employed for the classification tasks. Additionally, a deep learning framework based on feed-forward artificial neural networks is introduced for comparative analysis.•The study evaluates the accuracies of both binary (stressed vs. non-stressed) and three-class (relaxed vs. baseline vs. stressed) classification. Results demonstrate promising accuracies, with machine learning techniques achieving up to 91.87% for binary and 66.68% for three-class classification.
This paper highlights the potential of machine learning methods for accurately detecting mental disorders, offering insights for the development of effective detection and management tools.
PMID:40475896 | PMC:PMC12139511 | DOI:10.1016/j.mex.2025.103358
Bridging language gaps: The role of NLP and speech recognition in oral English instruction
MethodsX. 2025 May 7;14:103359. doi: 10.1016/j.mex.2025.103359. eCollection 2025 Jun.
ABSTRACT
Natural Language Processing (NLP) and speech recognition have transformed language learning by providing interactive, real-time feedback, enhancing oral English proficiency. These technologies facilitate personalized and adaptive learning, making pronunciation and fluency improvement more efficient. Traditional methods lack real-time speech assessment and individualized feedback, limiting learners' progress. Existing speech recognition models struggle with diverse accents, variations in speaking styles, and computational efficiency, reducing their effectiveness in real-world applications. This study utilizes three datasets, including a custom dataset of 882 English teachers, the CMU ARCTIC corpus, and LibriSpeech Clean, to ensure generalizability and robustness. The methodology integrates Hidden Markov Models for speech recognition, NLP-based text analysis, and computer vision-based lip movement detection to create an adaptive multimodal learning system. The novelty of this study lies in its real-time Bayesian feedback mechanism and multimodal integration of audio, visual, and textual data, enabling dynamic and personalized oral instruction. Performance is evaluated using recognition accuracy, processing speed, and statistical significance testing. The continuous HMM model achieves up to 97.5% accuracy and significantly outperforms existing models such as MLP-LSTM and GPT-3.5-turbo (p < 0.05) across all datasets.•Developed a multimodal system that combines speech, text, and visual data to enhance real-time oral English learning.•Collected and annotated a diverse dataset of English speech recordings from teachers across various accents and speaking styles.•Designed an adaptive feedback framework to provide learners with immediate, personalized insights into their pronunciation and fluency.
PMID:40475890 | PMC:PMC12139008 | DOI:10.1016/j.mex.2025.103359
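At the core of HMM-based recognition is scoring an observation sequence against candidate word models. A sketch of the forward algorithm for a discrete HMM follows; the paper uses continuous HMMs, so this two-state model and its probabilities are purely illustrative:

```python
def forward_likelihood(obs, states, start_p, trans_p, emit_p):
    """Forward-algorithm likelihood P(obs | model) for a discrete HMM --
    the quantity an HMM recognizer maximizes over candidate models."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Hypothetical two-state model over a toy observation alphabet {'x', 'y'}.
states = ('A', 'B')
start  = {'A': 0.6, 'B': 0.4}
trans  = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit   = {'A': {'x': 0.5, 'y': 0.5}, 'B': {'x': 0.1, 'y': 0.9}}
print(forward_likelihood(('x',), states, start, trans, emit))  # = 0.6*0.5 + 0.4*0.1
```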
Enhancing patient-specific deep learning based segmentation for abdominal magnetic resonance imaging-guided radiation therapy: A framework conditioned on prior segmentation
Phys Imaging Radiat Oncol. 2025 Apr 17;34:100766. doi: 10.1016/j.phro.2025.100766. eCollection 2025 Apr.
ABSTRACT
BACKGROUND AND PURPOSE: Conventionally, the contours annotated during magnetic resonance-guided radiation therapy (MRgRT) planning are manually corrected during the RT fractions, which is a time-consuming task. Deep learning-based segmentation can be helpful, but the available patient-specific approaches require training at least one model per patient, which is computationally expensive. In this work, we introduced a novel framework that integrates fraction MR volumes and planning segmentation maps to generate robust fraction MR segmentations without the need for patient-specific retraining.
MATERIALS AND METHODS: The dataset included 69 patients (222 fraction MRs in total) treated with MRgRT for abdominal cancers with a 0.35 T MR-Linac, and annotations for eight clinically relevant abdominal structures (aorta, bowel, duodenum, left kidney, right kidney, liver, spinal canal and stomach). In the framework, we implemented two alternative models capable of generating patient-specific segmentations using the planning segmentation as prior information. The first one is a 3D UNet with dual-channel input (i.e. fraction MR and planning segmentation map) and the second one is a modified 3D UNet with double encoder for the same two inputs.
RESULTS: On average, the two models with prior anatomical information outperformed the conventional population-based 3D UNet, with an increase in Dice similarity coefficient of more than 4%. In particular, the dual-channel-input 3D UNet outperformed the one with a double encoder, especially when the alignment between the two input channels was satisfactory.
CONCLUSION: The proposed workflow was able to generate accurate patient-specific segmentations while avoiding training one model per patient and allowing for a seamless integration into clinical practice.
PMID:40475848 | PMC:PMC12138391 | DOI:10.1016/j.phro.2025.100766
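The Dice similarity coefficient used above to compare the models measures overlap between a predicted and a reference mask. A minimal sketch on flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient, 2|A∩B| / (|A|+|B|), between two
    flattened binary segmentation masks."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```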
Leveraging network uncertainty to identify regions in rectal cancer clinical target volume auto-segmentations likely requiring manual edits
Phys Imaging Radiat Oncol. 2025 May 8;34:100771. doi: 10.1016/j.phro.2025.100771. eCollection 2025 Apr.
ABSTRACT
BACKGROUND AND PURPOSE: While Deep Learning (DL) auto-segmentation has the potential to improve segmentation efficiency in the radiotherapy workflow, manual adjustments of the predictions are still required. Network uncertainty quantification has been proposed as a quality assurance tool to ensure an efficient segmentation workflow. However, the interpretation is often complicated due to various sources of uncertainty interacting non-trivially. In this work, we compared network predictions with both independent manual segmentations and manual corrections of the predictions. We assume that manual corrections only address clinically relevant errors and are therefore associated with lower aleatoric uncertainty due to less inter-observer variability. We expect the remaining epistemic uncertainty to be a better predictor of segmentation corrections.
MATERIALS AND METHODS: We considered DL auto-segmentations of the mesorectum clinical target volume. Uncertainty maps of nnU-Net outputs were generated using Monte Carlo dropout. On a global level, we investigated the correlation between mean network uncertainty and network segmentation performance. On a local level, we compared the uncertainty envelope width with the length of the error from both independent contours and corrected predictions. The uncertainty envelope widths were used to classify the error lengths as above or below a predefined threshold.
RESULTS: We achieved an AUC above 0.9 in identifying regions manually corrected with edits larger than 8 mm, while the AUC for inconsistencies with the independent contours was significantly lower at approximately 0.7.
CONCLUSIONS: Our results validate the hypothesis that epistemic uncertainty estimates are a valuable tool to capture regions likely requiring clinically relevant edits.
PMID:40475847 | PMC:PMC12140033 | DOI:10.1016/j.phro.2025.100771
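The Monte Carlo dropout scheme above estimates epistemic uncertainty from the spread of repeated stochastic forward passes. A hedged scalar sketch, where `stochastic_forward` stands in for a network evaluated with dropout left active:

```python
import random

def mc_uncertainty(stochastic_forward, n_samples=30, seed=0):
    """Monte Carlo dropout sketch: run several dropout-enabled forward passes
    and take the standard deviation of the scalar prediction as an
    epistemic-uncertainty estimate."""
    random.seed(seed)
    preds = [stochastic_forward() for _ in range(n_samples)]
    mean = sum(preds) / n_samples
    var = sum((p - mean) ** 2 for p in preds) / n_samples
    return var ** 0.5

# A deterministic 'network' yields zero uncertainty; a noisy one does not.
print(mc_uncertainty(lambda: 0.5))                             # → 0.0
print(mc_uncertainty(lambda: 0.5 + random.gauss(0, 0.1)) > 0)  # → True
```

In segmentation this is applied per voxel, giving the uncertainty maps whose envelope widths are compared against edit lengths.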
AttentionAML: An Attention-based Deep Learning Framework for Accurate Molecular Categorization of Acute Myeloid Leukemia
bioRxiv [Preprint]. 2025 May 22:2025.05.20.655179. doi: 10.1101/2025.05.20.655179.
ABSTRACT
Acute myeloid leukemia (AML) is an aggressive hematopoietic malignancy defined by aberrant clonal expansion of abnormal myeloid progenitor cells. Characterized by morphological, molecular, and genetic alterations, AML encompasses multiple distinct subtypes that exhibit subtype-specific responses to treatment and prognosis, underscoring the critical need to accurately identify AML subtypes for effective clinical management and tailored therapeutic approaches. Traditional wet-lab approaches to identifying AML subtypes, such as immunophenotyping, cytogenetic analysis, morphological analysis, or molecular profiling, are labor-intensive, costly, and time-consuming. To address these challenges, we propose AttentionAML, a novel attention-based deep learning framework for accurately categorizing AML subtypes based on transcriptomic profiling alone. Benchmarking tests based on 1,661 AML patients suggested that AttentionAML outperformed state-of-the-art methods across all evaluated metrics (accuracy: 0.96, precision: 0.96, recall: 0.96, F1-score: 0.96, and Matthews correlation coefficient: 0.96). Furthermore, we demonstrated the superiority of AttentionAML over conventional approaches in terms of AML patient clustering visualization and subtype-specific gene marker characterization. We believe AttentionAML will have a remarkable positive impact on downstream AML risk stratification and personalized treatment design. To enhance its impact, a user-friendly Python package implementing AttentionAML is publicly available at https://github.com/wan-mlab/AttentionAML.
PMID:40475602 | PMC:PMC12139891 | DOI:10.1101/2025.05.20.655179
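Among the metrics reported above, the Matthews correlation coefficient is the least familiar; from a binary confusion matrix (applied one-vs-rest per subtype in the multi-class setting) it is computed as:

```python
def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

print(mcc(50, 40, 5, 5))  # strong but imperfect classifier, ≈ 0.798
```

Unlike accuracy, MCC stays informative under class imbalance, which matters when some AML subtypes are rare.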
Artificial intelligence-based detection of dens invaginatus in panoramic radiographs
BMC Oral Health. 2025 Jun 5;25(1):917. doi: 10.1186/s12903-025-06317-3.
ABSTRACT
OBJECTIVE: The aim of this study was to automatically detect teeth with dens invaginatus (DI) in panoramic radiographs using deep learning algorithms and to compare the success of the algorithms.
MATERIALS AND METHODS: For this purpose, 400 panoramic radiographs with DI were collected from the faculty database and split into 60% training, 20% validation, and 20% test images. The training and validation images were labeled by oral, dental and maxillofacial radiologists and augmented with various augmentation methods. The trained models were then evaluated on the images allocated for the test phase, and the results were assessed using performance measures including accuracy, sensitivity, F1 score, and mean detection time.
RESULTS: According to the test results, YOLOv8 achieved a precision, sensitivity and F1 score of 0.904 and was the fastest detection model, with an average detection time of 0.041 s. The Faster R-CNN model achieved 0.912 precision, 0.904 sensitivity and 0.907 F1 score, with an average detection time of 0.1 s. The YOLOv9 algorithm showed the most successful performance with 0.946 precision, 0.930 sensitivity and a 0.937 F1 score; its average detection speed per image was 0.158 s.
CONCLUSION: According to the results obtained, all models achieved over 90% success. YOLOv8 was relatively more successful in detection speed and YOLOv9 in other performance criteria. Faster R-CNN ranked second in all criteria.
PMID:40474238 | DOI:10.1186/s12903-025-06317-3
Automatic cervical tumor segmentation in PET/MRI by parallel encoder U-net
Radiat Oncol. 2025 Jun 5;20(1):95. doi: 10.1186/s13014-025-02664-1.
ABSTRACT
BACKGROUND: Automatic segmentation of cervical tumors is important in quantitative analysis and radiotherapy planning.
METHODS: A parallel encoder U-Net (PEU-Net) integrating the multi-modality information of PET/MRI was proposed to segment cervical tumors; it consisted of two parallel encoders with the same structure for PET and MR images. The features of the two modalities were extracted separately and fused at each layer of the decoder. A Res2Net module on the skip connections aggregated features at various scales and refined the segmentation performance. PET/MRI images of 165 patients with cervical cancer were included in this study. U-Net, TransUNet, and nnU-Net with single- or multi-modality (PET or/and T2WI) input were used for comparison. The Dice similarity coefficient (DSC) on volume data, and the DSC and the 95th percentile of the Hausdorff distance (HD95) on tumor slices, were calculated to evaluate performance.
RESULTS: The proposed PEU-Net exhibited the best performance (DSC3d: 0.726 ± 0.204, HD95: 4.603 ± 4.579 mm); its DSC2d (0.871 ± 0.113) was comparable to the best result of TransUNet with PET/MRI (0.873 ± 0.125).
CONCLUSIONS: The networks with multi-modality input outperformed those with single-modality images as input. The results showed that the proposed PEU-Net could use multi-modality information more effectively through the redesigned structure and achieved competitive performance.
PMID:40474211 | DOI:10.1186/s13014-025-02664-1
Construction of a deep learning-based predictive model to evaluate the influence of mechanical stretching stimuli on MMP-2 gene expression levels in fibroblasts
Biomed Eng Online. 2025 Jun 5;24(1):71. doi: 10.1186/s12938-025-01399-0.
ABSTRACT
BACKGROUND: Matrix metalloproteinase-2 (MMP-2) secretion homeostasis, governed by the multifaceted interplay of skin stretching, is a pivotal determinant influencing wound healing dynamics. This investigation endeavors to devise an artificial intelligence (AI) prediction framework delineating the modulation of MMP-2 expression under stretching conditions, thereby unravelling profound insights into the mechanobiological orchestration of MMP-2 secretion and fostering novel mechanotherapeutic strategies targeted at MMP-2 modulation.
METHODS: Employing a bespoke mechanical tensile loading apparatus, diverse mechanical tensile stimuli were administered to fibroblasts, with parameters such as tensile shape and frequency duration constituting the mechanical loading regimen. Furthermore, reverse transcription polymerase chain reaction (RT‒PCR) assays were conducted to measure MMP-2 gene expression levels in fibroblasts subjected to mechanical stretching. Subsequently, the resulting data were partitioned into training and validation cohorts at a 7:3 ratio, facilitating the development of the deep learning (DL) model via a backpropagation neural network trained on the training set. An external validation set was also curated from pertinent literature in the PubMed database to assess the predictive ability of the model.
RESULTS: Analysis of 336 data points related to MMP-2 gene expression via RT‒PCR corroborated the variability in MMP-2 gene expression levels in response to distinct mechanical stretching regimens. Consequently, a DL model was successfully crafted via the backpropagation algorithm to delineate the impact of mechanical stretching stimuli on MMP-2 gene expression levels. The model, characterized by an R2 value of 0.73, evinced a commendable fit with the training data set, elucidating the intricate interplay between the input features and the target variable. Notably, the model exhibited minimal prediction errors, as evidenced by a root mean square error (RMSE) of 0.42 and a mean absolute error (MAE) of 0.28. Furthermore, the model showcased robust generalization capabilities during validation, yielding R2 values of 0.70 and 0.71 for the validation and external validation sets, respectively, revealing its predictive accuracy.
CONCLUSIONS: The DL model fashioned through the backpropagation algorithm adeptly forecasts the impact of mechanical stretching stimuli on MMP-2 gene expression levels in fibroblasts with relative precision. These findings provide a foundation for the modulation of MMP homeostasis via mechanical stretching to expedite the healing of recalcitrant chronic refractory wound (CRW).
PMID:40474209 | DOI:10.1186/s12938-025-01399-0
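The fit statistics reported above (R², RMSE, MAE) all derive from the residuals between measured and predicted expression levels. A minimal sketch of how they are computed:

```python
def regression_metrics(y_true, y_pred):
    """R^2, RMSE and MAE from paired true/predicted values."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    rmse = (ss_res / n) ** 0.5
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r2, rmse, mae

# A perfect prediction gives R^2 = 1, RMSE = 0, MAE = 0.
print(regression_metrics([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))  # → (1.0, 0.0, 0.0)
```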
Multitask deep learning model based on multimodal data for predicting prognosis of rectal cancer: a multicenter retrospective study
BMC Med Inform Decis Mak. 2025 Jun 5;25(1):209. doi: 10.1186/s12911-025-03050-3.
ABSTRACT
BACKGROUND: Prognostic prediction is crucial to guide individual treatment for patients with rectal cancer. We aimed to develop and validate a multitask deep learning model for predicting prognosis in rectal cancer patients.
METHODS: This retrospective study enrolled 321 rectal cancer patients (training set: 212; internal testing set: 53; external testing set: 56) who directly received total mesorectal excision at five hospitals between March 2014 and April 2021. A multitask deep learning model was developed to simultaneously predict recurrence/metastasis and disease-free survival (DFS). The model integrated clinicopathologic data and multiparametric magnetic resonance imaging (MRI) images including diffusion kurtosis imaging (DKI), without performing tumor segmentation. The receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index) were used to evaluate the predictive performance of the proposed model.
RESULTS: The deep learning model achieved good discrimination capability of recurrence/metastasis, with area under the curve (AUC) values of 0.885, 0.846, and 0.797 in the training, internal testing and external testing sets, respectively. Furthermore, the model successfully predicted DFS in the training set (C-index: 0.812), internal testing set (C-index: 0.794), and external testing set (C-index: 0.733), and classified patients into significantly distinct high- and low-risk groups (p < 0.05).
CONCLUSIONS: The multitask deep learning model, incorporating clinicopathologic data and multiparametric MRI, effectively predicted both recurrence/metastasis and survival for patients with rectal cancer. It has the potential to be an essential tool for risk stratification, and assist in making individualized treatment decisions.
CLINICAL TRIAL NUMBER: Not applicable.
PMID:40474195 | DOI:10.1186/s12911-025-03050-3
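The C-index used above to evaluate DFS prediction measures, over comparable patient pairs, how often the model ranks risk consistently with observed failure order. A simplified sketch (real implementations such as lifelines' `concordance_index` handle censoring more carefully):

```python
def c_index(times, events, risks):
    """Harrell's concordance index: among comparable pairs (the earlier time is
    an observed event), the fraction where the higher risk score goes to the
    patient who fails earlier; score ties count 0.5."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Risk ordering that matches failure order is perfectly concordant.
print(c_index([1, 2, 3], [1, 1, 1], [0.9, 0.5, 0.1]))  # → 1.0
```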
Exploring deep learning in third-year undergraduate nursing students: a mixed methods study
BMC Nurs. 2025 Jun 5;24(1):643. doi: 10.1186/s12912-025-03303-6.
ABSTRACT
BACKGROUND: Deep learning is an important way for nursing undergraduates to develop professional skills. It is helpful for these students to successfully complete clinical practice and provide high-quality care. However, research focusing on deep learning in nursing undergraduates is scarce.
AIMS: To develop an intervention program for deep learning using unfolding case-based learning (CBL) based on the CoI framework and to evaluate the effects of this intervention program among nursing undergraduates.
METHODS: A sequential explanatory mixed methods design was used. The quantitative study was followed by a qualitative study, the results of which were used to better explain and understand the results of the quantitative study. From September 2023 to January 2024, 132 students participated in the study. The quantitative component consisted of pretest-posttest of students' deep learning and academic assessment scores. The qualitative component consisted of semistructured interviews with 12 students.
RESULTS: The quantitative results revealed that students' deep learning significantly improved during unfolding case-based learning (P < 0.05), with scores greater than those of students with traditional learning (P < 0.05). Students taught with blended teaching exhibited no significant change in deep learning (P > 0.05). The qualitative data analysis identified three themes: (1) gain and experience, (2) difficulties and challenges, and (3) individual career development.
CONCLUSION: Incorporating case-based training into a course helps enhance deep learning for students. In the future, consideration may be given to continuing targeted reforms in the integration of learning methods to help students enhance their career confidence and prepare to become professional nurses.
CLINICAL TRIAL NUMBER: Not applicable.
PMID:40474169 | DOI:10.1186/s12912-025-03303-6
A radiogenomics study on (18)F-FDG PET/CT in endometrial cancer by a novel deep learning segmentation algorithm
BMC Cancer. 2025 Jun 5;25(1):1006. doi: 10.1186/s12885-025-14392-6.
ABSTRACT
OBJECTIVE: To create an automated PET/CT segmentation method and radiomics model to forecast mismatch repair (MMR) and TP53 gene expression in endometrial cancer patients, and to examine the effect of gene expression variability on image texture features.
MATERIALS AND METHODS: We generated two datasets in this retrospective and exploratory study. The first, with 123 histopathologically confirmed patient cases, was used to develop an endometrial cancer segmentation model. The second dataset, including 249 patients for MMR and 179 for TP53 mutation prediction, was derived from PET/CT exams and immunohistochemical analysis. A PET-based Attention U-Net was used for segmentation, followed by region growing with co-registered PET and CT images. Feature models were constructed using PET, CT, and combined data, with model selection based on performance comparison.
RESULTS: Our segmentation model achieved 99.99% training accuracy and a dice coefficient of 97.35%, with validation accuracy at 99.93% and a dice coefficient of 84.81%. The combined PET + CT model demonstrated superior predictive power for both genes, with AUCs of 0.8146 and 0.8102 for MMR, and 0.8833 and 0.8150 for TP53 in training and test sets, respectively. MMR-related protein heterogeneity and TP53 expression differences were predominantly seen in PET images.
CONCLUSION: An efficient deep learning algorithm for endometrial cancer segmentation has been established, highlighting the enhanced predictive power of integrated PET and CT radiomics for MMR and TP53 expression. The study underscores the distinct influences of MMR and TP53 gene expression on tumor characteristics.
PMID:40474131 | DOI:10.1186/s12885-025-14392-6
Association between age and lung cancer risk: evidence from lung lobar radiomics
BMC Med Imaging. 2025 Jun 5;25(1):204. doi: 10.1186/s12880-025-01747-5.
ABSTRACT
BACKGROUND: Previous studies have highlighted the prominent role of age in lung cancer risk, with signs of lung aging visible in computed tomography (CT) imaging. This study aims to characterize lung aging using quantitative radiomic features extracted from five delineated lung lobes and to explore how age contributes to lung cancer development through these features.
METHODS: We analyzed baseline CT scans from the Wenling lung cancer screening cohort, comprising 29,810 participants. A deep learning-based segmentation method was used to delineate the lung lobes, and 1,470 radiomic features were extracted from each lobe. The minimum redundancy maximum relevance algorithm was applied to identify the top 10 age-related radiomic features among 13,137 never smokers. Multiple regression analyses were used to adjust for confounders in the associations among age, lung lobar radiomic features, and lung cancer. Linear, Cox proportional hazards, and parametric accelerated failure time models were applied as appropriate. Mediation analyses were conducted to evaluate whether lobar radiomic features mediate the relationship between age and lung cancer risk.
RESULTS: Age was significantly associated with increased lung cancer risk, particularly among current smokers (hazard ratio = 1.07, P = 2.81 × 10⁻¹³). Age-related radiomic features exhibited distinct effects across lung lobes. Specifically, the wavelet-filtered first-order mean (mean attenuation value) in the right upper lobe increased with age (β = 0.019, P = 2.41 × 10⁻²⁷⁶), whereas it decreased in the right lower lobe (β = -0.028, P = 7.83 × 10⁻²⁷⁷). Three features, namely wavelet_HL_firstorder_Mean of the right upper lobe, wavelet_LH_firstorder_Mean of the right lower lobe, and original_shape_MinorAxisLength of the left upper lobe, were independently associated with lung cancer risk at the Bonferroni-adjusted P value threshold. Mediation analyses revealed that density and shape features partially mediated the relationship between age and lung cancer risk, while a suppression effect was observed for the wavelet first-order mean of the right upper lobe.
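For context on the hazard ratio above: in a Cox proportional hazards model, the hazard ratio is the exponential of the fitted log-hazard coefficient, so a per-year HR of 1.07 compounds multiplicatively over longer spans. A minimal arithmetic sketch (illustrative, not the study's fitted model):

```python
import math

# A Cox model reports a log-hazard coefficient beta per unit of the covariate.
# HR = exp(beta); HR = 1.07 per year means ~7% higher hazard per year of age.
beta = math.log(1.07)
hr_per_year = math.exp(beta)
hr_per_decade = math.exp(10 * beta)  # compounds to 1.07**10 over 10 years

print(round(hr_per_year, 2), round(hr_per_decade, 2))
```

This is why a seemingly modest per-year ratio translates into a substantially elevated hazard over a decade of aging.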
CONCLUSIONS: The study reveals lobe-specific heterogeneity in lung aging patterns through radiomics and their associations with lung cancer risk. These findings may help identify new approaches for early intervention in aging-related lung cancer.
CLINICAL TRIAL NUMBER: Not applicable.
PMID:40474072 | DOI:10.1186/s12880-025-01747-5
A Multi-Task Deep Learning Approach for Simultaneous Sleep Staging and Apnea Detection for Elderly People
Interdiscip Sci. 2025 Jun 5. doi: 10.1007/s12539-025-00721-7. Online ahead of print.
NO ABSTRACT
PMID:40474036 | DOI:10.1007/s12539-025-00721-7
Epistasis regulates genetic control of cardiac hypertrophy
Nat Cardiovasc Res. 2025 Jun 5. doi: 10.1038/s44161-025-00656-8. Online ahead of print.
ABSTRACT
Although genetic variant effects often interact nonadditively, strategies to uncover epistasis remain in their infancy. Here we develop low-signal signed iterative random forests to elucidate the complex genetic architecture of cardiac hypertrophy, using deep learning-derived left ventricular mass estimates from 29,661 UK Biobank cardiac magnetic resonance images. We report epistatic variants near CCDC141, IGF1R, TTN and TNKS, identifying loci deemed insignificant in genome-wide association studies. Functional genomic and integrative enrichment analyses reveal that genes mapped from these loci share biological process gene ontologies and myogenic regulatory factors. Transcriptomic network analyses using 313 human hearts demonstrate strong co-expression correlations among these genes in healthy hearts, with significantly reduced connectivity in failing hearts. To assess causality, RNA silencing in human induced pluripotent stem cell-derived cardiomyocytes, combined with novel microfluidic single-cell morphology analysis, confirms that cardiomyocyte hypertrophy is nonadditively modifiable by interactions between CCDC141, TTN and IGF1R. Our results expand the scope of cardiac genetic regulation to epistasis.
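Epistasis, as described here, is a nonadditive interaction between variants: the joint effect on a phenotype exceeds the sum of the individual effects. A minimal toy sketch on synthetic genotypes (ordinary least squares standing in for the study's signed iterative random forests; all data simulated), showing how an interaction term captures variance an additive model cannot:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent binary variant genotypes and a phenotype whose largest
# component is a purely epistatic (product) term plus small noise.
g1 = rng.integers(0, 2, 500)
g2 = rng.integers(0, 2, 500)
y = 0.5 * g1 + 0.5 * g2 + 1.5 * (g1 * g2) + rng.normal(0, 0.1, 500)

def r2(predictors, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

additive = r2([g1, g2], y)                     # main effects only
with_interaction = r2([g1, g2, g1 * g2], y)    # adds the epistatic term
print(additive < with_interaction)  # True: interaction explains extra variance
```

The gap between the two fits is the signal that genome-wide additive models, such as standard GWAS, can miss, which is the motivation for interaction-aware methods like those used in the paper.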
PMID:40473955 | DOI:10.1038/s44161-025-00656-8
The Role of AI and Voice-Activated Technology in Religious Education in China: Capturing Emotional Depth for Deeper Learning
J Relig Health. 2025 Jun 5. doi: 10.1007/s10943-025-02347-x. Online ahead of print.
ABSTRACT
Integrating artificial intelligence (AI) into religious education is an emerging area of research. This study explores the potential of AI and voice-activated technologies for capturing the emotional depth of chanting during spiritual practices. The study used pre-trained voice recognition models combined with deep learning to analyze vocal characteristics, with the objective of developing AI algorithms that assess the emotional states of practitioners. For this purpose, 110 first- and second-year Chinese university students majoring in vocal performance were recruited and divided into an experimental group (trained with AI assistance) and a control group (trained traditionally). The Spielberger State-Anxiety Inventory, the Positive and Negative Affect Schedule (PANAS), and the Perceived Stress Scale (PSS) were used to measure emotional states, and correlation analysis related vocal parameters to those states. Participants trained with AI-assisted tools demonstrated significant improvements in intonation, volume, timbre, and frequency spectrum, as well as increased calmness; compared to the control group, which did not use AI technologies, these improvements were statistically significant. Correlation analysis confirmed a strong relationship between vocal parameters and participants' emotional states. This research highlights the effectiveness of AI in religious education and opens new avenues for enhancing educational processes by providing participants with objective feedback on their spiritual practices.
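The correlation analysis reported here quantifies the linear association between a vocal parameter and an emotional-state score with Pearson's r. A minimal sketch on hypothetical paired observations (the variable names and values are invented for illustration, not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance normalized by both standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: a pitch-stability measure vs. a calmness score per student.
pitch_stability = [0.62, 0.70, 0.75, 0.81, 0.88, 0.93]
calmness_score = [41, 48, 52, 57, 63, 70]
print(round(pearson_r(pitch_stability, calmness_score), 3))
```

An r near +1, as in this toy data, is the kind of "strong relationship between vocal parameters and emotional states" the abstract describes.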
PMID:40473902 | DOI:10.1007/s10943-025-02347-x