Deep learning
MRI-based whole-brain elastography and volumetric measurements to predict brain age
Biol Methods Protoc. 2024 Nov 20;10(1):bpae086. doi: 10.1093/biomethods/bpae086. eCollection 2025.
ABSTRACT
Brain age, as a correlate of an individual's chronological age obtained from structural and functional neuroimaging data, enables assessing developmental or neurodegenerative pathology relative to the overall population. Accurately inferring brain age from brain magnetic resonance imaging (MRI) data requires imaging methods sensitive to tissue health and sophisticated statistical models to identify the underlying age-related brain changes. Magnetic resonance elastography (MRE) is a specialized MRI technique that has emerged as a reliable, non-invasive method to measure the brain's mechanical properties, such as viscoelastic shear stiffness and damping ratio. These mechanical properties have been shown to change across the life span, to reflect neurodegenerative diseases, and to be associated with individual differences in cognitive function. Here, we aim to develop a machine learning framework to accurately predict a healthy individual's chronological age from maps of brain mechanical properties. This framework can later be applied to understand neurostructural deviations from normal in individuals with neurodevelopmental or neurodegenerative conditions. Using 3D convolutional networks as deep learning models alongside more traditional statistical models, we model chronological age as a function of multiple modalities of whole-brain measurements: stiffness, damping ratio, and volume. Evaluations on held-out subjects show that combining stiffness and volume in a multimodal approach achieves the most accurate predictions. Interpretation of the different models highlights important regions that are distinct between the modalities. The results demonstrate the complementary value of MRE measurements in brain age models, which, in future studies, could improve model sensitivity to brain integrity differences in individuals with neuropathology.
PMID:39902188 | PMC:PMC11790219 | DOI:10.1093/biomethods/bpae086
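A minimal sketch of the kind of multimodal 3D convolutional regressor the abstract above describes, assuming a two-channel input (stiffness and volume maps) and an arbitrary 96³ input resolution; the architecture and training details are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a small multimodal 3D CNN that
# regresses chronological age from co-registered stiffness and volume maps.
import torch
import torch.nn as nn

class BrainAge3DCNN(nn.Module):
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(64, 1)  # predicted age in years

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.regressor(h).squeeze(1)

# Example: a batch of 4 two-channel (stiffness, volume) volumes of 96^3 voxels.
model = BrainAge3DCNN(in_channels=2)
ages = model(torch.randn(4, 2, 96, 96, 96))
loss = nn.functional.l1_loss(ages, torch.tensor([25.0, 40.0, 61.0, 33.0]))
```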
Automated classification of elongated styloid processes using deep learning models-an artificial intelligence diagnostics
Front Oral Health. 2025 Jan 20;5:1424840. doi: 10.3389/froh.2024.1424840. eCollection 2024.
ABSTRACT
BACKGROUND: The styloid process (SP) is a bony projection from the temporal bone that can become elongated, resulting in cervical pain, throat discomfort, and headaches. Associated with Eagle syndrome, this elongation can compress nearby nerves and blood vessels, leading to potentially severe complications. Traditional imaging-based methods for classifying various types of elongated styloid processes (ESP) are challenging due to variations in image quality, patient positioning, and anatomical differences, which limit diagnostic accuracy. Recent advancements in artificial intelligence, particularly deep learning, offer more efficient classification of elongated styloid processes.
OBJECTIVE: This study aims to develop an automated classification system for elongated styloid processes using deep learning models and to evaluate the performance of two distinct architectures, EfficientNetB5 and InceptionV3, in classifying elongated styloid processes.
METHODS: This retrospective analysis classified elongated styloid processes using orthopantomograms (OPG) sourced from our oral radiology archives. Styloid process lengths were measured using ImageJ software. A dataset of 330 elongated and 120 normal styloid images was curated for deep learning model training and testing. Pre-processing included median filtering and resizing, with data augmentation applied to improve generalization. The EfficientNetB5 and InceptionV3 models, utilized as feature extractors, captured unique styloid characteristics. Model performance was evaluated based on accuracy, precision, recall, and F1-score, with a comparative analysis conducted to identify the most effective model and support advancements in patient care.
RESULTS: The EfficientNetB5 model achieved an accuracy of 97.49%, a precision of 98.00%, a recall of 97.00%, and an F1-score of 97.00%, demonstrating strong overall performance. Additionally, the model achieved an AUC of 0.9825. By comparison, the InceptionV3 model achieved an accuracy of 84.11%, a precision of 85.00%, a recall of 84.00%, and an F1-score of 84.00%, with an AUC of 0.8943. This comparison indicates that EfficientNetB5 outperformed InceptionV3 across all key metrics.
CONCLUSION: Our study presents a deep learning-based approach utilizing EfficientNetB5 and InceptionV3 to accurately categorize elongated styloid processes into distinct types based on their morphological characteristics from digital panoramic radiographs. Our results indicate that these models, particularly EfficientNetB5, can enhance diagnostic accuracy and streamline clinical workflows, contributing to improved patient care.
PMID:39902080 | PMC:PMC11788325 | DOI:10.3389/froh.2024.1424840
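A hedged sketch of how EfficientNetB5 and InceptionV3 can be repurposed as pretrained feature extractors with new binary heads for elongated-vs-normal styloid classification, using torchvision backbones; the head sizes and which layers are fine-tuned are assumptions, not the study's exact setup.

```python
# Illustrative transfer-learning sketch: ImageNet-pretrained backbones with
# replaced classification heads for a two-class (elongated vs. normal) task.
import torch.nn as nn
from torchvision import models

def build_efficientnet_b5(num_classes: int = 2) -> nn.Module:
    m = models.efficientnet_b5(weights=models.EfficientNet_B5_Weights.IMAGENET1K_V1)
    m.classifier[1] = nn.Linear(m.classifier[1].in_features, num_classes)  # new head
    return m

def build_inception_v3(num_classes: int = 2) -> nn.Module:
    # InceptionV3 expects 299x299 inputs and has an auxiliary classifier.
    m = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1, aux_logits=True)
    m.fc = nn.Linear(m.fc.in_features, num_classes)
    m.AuxLogits.fc = nn.Linear(m.AuxLogits.fc.in_features, num_classes)
    return m

# Both backbones act as feature extractors; typically only the new heads
# (and optionally the last blocks) are fine-tuned on the OPG dataset.
```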
Fully automatic reconstruction of prostate high-dose-rate brachytherapy interstitial needles using two-phase deep learning-based segmentation and object tracking algorithms
Clin Transl Radiat Oncol. 2025 Jan 19;51:100925. doi: 10.1016/j.ctro.2025.100925. eCollection 2025 Mar.
ABSTRACT
The critical aspect of successful brachytherapy (BT) is accurate detection of applicator/needle trajectories, which remains an ongoing challenge. This study proposes a two-phase deep learning-based method to automate localization of high-dose-rate (HDR) prostate BT catheters from the patient's CT images. The process is divided into two phases using two different deep neural networks. First, BT needle segmentation was accomplished with a pix2pix Generative Adversarial Network (pix2pix GAN). Second, Generic Object Tracking Using Regression Networks (GOTURN) was used to predict the needle trajectories. These models were trained and tested on a clinical prostate BT dataset. Of the 25 patients, 5 (comprising 592 CT slices) were dedicated to the test set, and the rest were used as the training/validation set. These slices contained 8764 needle instances in total, of which the pix2pix network segmented 98.72% (8652). The Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) between the network output and the ground truth were 0.95 and 0.90, respectively. Moreover, the F1-score, recall, and precision were 0.95, 0.93, and 0.97, respectively. Regarding the location of the needle shafts, the proposed model has an error of 0.41 mm. The current study proposed a novel methodology to automatically localize and reconstruct prostate HDR-BT interstitial needles from 3D CT images. The presented method can be utilized as a computer-aided module in clinical applications to automatically detect and delineate multiple catheters, potentially enhancing treatment quality.
PMID:39901943 | PMC:PMC11788795 | DOI:10.1016/j.ctro.2025.100925
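For reference, the two overlap metrics reported above (DSC 0.95, IoU 0.90) can be computed from binary needle masks as in this small sketch; the input arrays are placeholders.

```python
# Dice similarity coefficient (DSC) and intersection over union (IoU) between a
# predicted needle mask and the ground-truth mask.
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dsc, iou
```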
Fundus camera-based precision monitoring of blood vitamin A level for Wagyu cattle using deep learning
Sci Rep. 2025 Feb 3;15(1):4125. doi: 10.1038/s41598-025-85372-w.
ABSTRACT
In the wagyu industry worldwide, high-quality marbled beef is produced by promoting intramuscular fat deposition during the cattle fattening stage through dietary vitamin A control. As a result, however, cattle become susceptible to either vitamin A deficiency or excess, which not only influences cattle performance and beef quality but also causes health problems. Researchers have been exploring eye-photography-based methods for monitoring cattle blood vitamin A levels, based on the relation between vitamin A and retinal colour changes, but previous approaches cannot provide real-time monitoring, and their prediction accuracy still needs improvement in a practical sense. This study developed a handheld camera system capable of capturing cattle fundus images and predicting vitamin A levels in real time using deep learning. 4000 fundus images from 50 Japanese Black cattle were used to train and test the prediction algorithms, and the model achieved average accuracies of 87%, 83%, and 80% for three levels of vitamin A deficiency classification (notably 87% for the severe level), demonstrating the effectiveness of the camera system in vitamin A deficiency prediction, especially for screening and early warning. More importantly, a new method was exemplified for utilising visualisation heatmaps in colour-related DNN tasks, and it was found that chromatic features extracted from LRP heatmap-highlighted ROIs could account for 70% accuracy in the prediction of vitamin A deficiency. This system can assist farmers in blood vitamin A level monitoring and related disease prevention, contributing to precision livestock management and animal well-being in the wagyu industry.
PMID:39900776 | DOI:10.1038/s41598-025-85372-w
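An illustrative sketch of the heatmap-guided chromatic-feature idea mentioned above: average CIELAB chromatic values inside the region highlighted by an LRP relevance map. The ROI threshold and the specific features are assumptions, not the paper's exact procedure.

```python
# Chromatic features from the top-relevance region of a fundus image.
import numpy as np
from skimage.color import rgb2lab

def chromatic_features(image_rgb: np.ndarray, relevance: np.ndarray, q: float = 0.9):
    """image_rgb: HxWx3 float in [0, 1]; relevance: HxW heatmap (e.g. from LRP)."""
    lab = rgb2lab(image_rgb)
    roi = relevance >= np.quantile(relevance, q)      # keep the top-10% relevant pixels
    a, b = lab[..., 1][roi], lab[..., 2][roi]
    return {"mean_a": a.mean(), "mean_b": b.mean(), "mean_chroma": np.hypot(a, b).mean()}
```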
Annotation-free deep learning for predicting gene mutations from whole slide images of acute myeloid leukemia
NPJ Precis Oncol. 2025 Feb 3;9(1):35. doi: 10.1038/s41698-025-00804-0.
ABSTRACT
The rapid development of deep learning has revolutionized medical image processing, including analyzing whole slide images (WSIs). Despite the demonstrated potential for characterizing gene mutations directly from WSIs in certain cancers, challenges remain due to image resolution and reliance on manual annotations for acute myeloid leukemia (AML). We, therefore, propose a deep learning model based on multiple instance learning (MIL) with ensemble techniques to predict gene mutations from AML WSIs. Our model predicts NPM1 mutations and FLT3-ITD without requiring patch-level or cell-level annotations. Using a dataset of 572 WSIs, the largest database with both WSI and genetic mutation information, our model achieved an AUC of 0.90 ± 0.08 for NPM1 and 0.80 ± 0.10 for FLT3-ITD in the testing cohort. Additionally, we found that blasts are pivotal indicators for gene mutation predictions, with their proportions varying between mutated and standard WSIs, highlighting the clinical potential of AML WSI analysis.
PMID:39900774 | DOI:10.1038/s41698-025-00804-0
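A hedged sketch of annotation-free slide-level prediction in the MIL setting described above, using attention-based pooling over patch embeddings as one common instantiation; the embedding size, aggregator, and ensembling used in the paper may differ.

```python
# Attention-based MIL over patch embeddings from one WSI, producing a slide-level
# mutation logit without patch- or cell-level annotations.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(in_dim, 1)  # logit for NPM1 (or FLT3-ITD) mutation

    def forward(self, patch_embeddings):                       # (num_patches, in_dim)
        w = torch.softmax(self.attn(patch_embeddings), dim=0)  # per-patch attention weights
        slide_vec = (w * patch_embeddings).sum(dim=0)          # weighted pooling to one vector
        return self.head(slide_vec)

logit = AttentionMIL()(torch.randn(1000, 512))  # one slide represented by 1000 patch embeddings
```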
Pathological and radiological assessment of benign breast lesions with BIRADS IVc/V subtypes. Should we repeat the biopsy?
BMC Womens Health. 2025 Feb 3;25(1):47. doi: 10.1186/s12905-025-03569-7.
ABSTRACT
BACKGROUND: Timely diagnosis is a crucial factor in decreasing the death rate of patients with breast cancer. BI-RADS categories IVc and V indicate a strong suspicion of cancer. The categorisation of each group is determined by the characteristics of the lesion. Certain benign breast lesions might have radiological features indicative of malignancy; thus, biopsy is mandatory. This study aimed to identify the histopathological diagnosis of benign breast masses classified into BIRADS IVc and V subgroups, investigate the radiological characteristics of these masses, and identify ultrasound features that could lead to false positive results (benign lesions that mimic malignancy on imaging).
METHODS: This was a retrospective cross-sectional study at a single facility. Breast lesions reported as BIRADS IVc and V that underwent needle core/stereotactic vacuum-assisted biopsy were reviewed. Patients with benign pathologic diagnoses were analysed, delineating pathological diagnoses. Radiological descriptors were compared to those of a matched control of 50 malignant cases with BIRADS IVc.
RESULTS: A total of 828 breast lesions classified as BIRADS IVc or V were detected during the period spanning from 2015 to 2022. Forty-four lesions (44/828, 5.3%) were benign at initial biopsy, while 784 lesions (784/828, 94.7%) were malignant. After histopathological testing and repeat biopsy, 26/828 (3.14%) patients had a discordant benign diagnosis. Half of the repeated biopsies (10/20, 50%) showed malignant pathology. Compared with the control group, an oval mass shape was significantly more common in patients with benign pathology (p = 0.035). Conversely, posterior shadowing was significantly less common (p = 0.050) in benign lesions. No significant differences were observed for the other radiological characteristics. The most common histopathological diagnosis was fibrocystic change.
CONCLUSION: This study highlights key findings regarding the sonographic imaging descriptors and histopathological diagnoses of benign breast lesions categorised as BIRADS IVc/V. The study recommends correlating clinical and radiological findings and encourages multidisciplinary decision-making among radiologists, pathologists, and clinicians to determine whether a repeat biopsy is warranted. Continuous research is needed to improve the diagnosis and treatment of breast lesions and to reduce false-positive rates by incorporating other methodologies, such as sonoelastography, deep learning, and artificial intelligence, into decision-making to eliminate unnecessary procedures.
PMID:39901102 | DOI:10.1186/s12905-025-03569-7
Comparative analysis of DCNN- and HFCNN-based computerized detection of liver cancer
BMC Med Imaging. 2025 Feb 3;25(1):37. doi: 10.1186/s12880-025-01578-4.
ABSTRACT
Liver cancer detection is critically important in biomedical image analysis and diagnosis. Researchers have explored numerous machine learning (ML) techniques and deep learning (DL) approaches aimed at the automated recognition of liver disease by analysing computed tomography (CT) images. This study compares two frameworks, the Deep Convolutional Neural Network (DCNN) and the Hierarchical Fusion Convolutional Neural Network (HFCNN), to assess their effectiveness in liver cancer segmentation. The contribution includes enhancing the edges and textures of CT images through filtering to achieve precise liver segmentation. Additionally, an existing DL framework was employed for liver cancer detection and segmentation. The strengths of this paper include a clear emphasis on the criticality of liver cancer detection in biomedical imaging and diagnostics. It also highlights the challenges associated with CT image detection and segmentation and provides a comprehensive summary of recent literature. However, certain difficulties arise during detection in CT images due to overlapping structures such as bile ducts and blood vessels, image noise, textural changes, size and location variations, and inherent heterogeneity. These factors may lead to segmentation errors and, subsequently, to differing analyses. The evaluation of DCNN and HFCNN in liver cancer detection is conducted using multiple performance metrics, including precision, F1-score, recall, and accuracy. This comprehensive assessment provides a detailed evaluation of these models' effectiveness compared with other state-of-the-art methods in identifying liver cancer.
PMID:39901085 | DOI:10.1186/s12880-025-01578-4
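A small illustrative sketch of the preprocessing step described above (filtering to enhance edges and texture before segmentation), using median filtering followed by unsharp masking; the filter sizes are assumptions, not the paper's settings.

```python
# CT slice enhancement: suppress noise with a median filter, then sharpen edges
# and texture with unsharp masking before feeding the slice to a segmentation model.
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import unsharp_mask

def enhance_ct_slice(ct_slice: np.ndarray) -> np.ndarray:
    """ct_slice: 2D float array normalized to [0, 1]."""
    denoised = median_filter(ct_slice, size=3)               # remove impulse-like noise
    return unsharp_mask(denoised, radius=2.0, amount=1.0)    # boost edges and texture
```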
Synchronization-based graph spatio-temporal attention network for seizure prediction
Sci Rep. 2025 Feb 3;15(1):4080. doi: 10.1038/s41598-025-88492-5.
ABSTRACT
Epilepsy is a common neurological disorder in which abnormal brain waves propagate rapidly through the brain in the form of a graph network during seizures, and seizures strike extremely suddenly. Accurate and reliable prediction methods can therefore provide early warning for patients, which is crucial for improving their lives. In recent years, many studies have applied deep learning models to open epileptic electroencephalogram (EEG) datasets with good results, but due to individual differences there remain subjects whose seizure features cannot be accurately captured and are harder to differentiate, leading to poor prediction results. Important time-varying information may also be overlooked if only graph-space features during seizures are considered. To address these issues, we propose a synchronization-based graph spatio-temporal attention network (SGSTAN). This model effectively leverages the intricate information embedded within EEG recordings through spatio-temporal correlations. Experimental results on public datasets demonstrate the efficacy of our approach. On the CHB-MIT dataset, our method achieves accuracy, specificity, and sensitivity of 98.2%, 98.07%, and 97.85%, respectively. For challenging subjects that are difficult to classify, we achieved an outstanding average classification accuracy of 97.59%, surpassing the results of previous studies.
PMID:39901056 | DOI:10.1038/s41598-025-88492-5
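A hedged sketch of the "synchronization-based graph" ingredient: build a channel-by-channel adjacency matrix from pairwise synchronization of an EEG window, here approximated by absolute Pearson correlation; the paper's actual synchronization measure and graph construction may differ.

```python
# Build a synchronization graph over EEG channels for one analysis window.
import numpy as np

def synchronization_graph(eeg: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """eeg: (n_channels, n_samples) window; returns an (n_channels, n_channels) adjacency matrix."""
    corr = np.abs(np.corrcoef(eeg))          # pairwise channel synchronization (|Pearson r|)
    adj = (corr >= threshold).astype(float)  # sparsify into an unweighted graph
    np.fill_diagonal(adj, 0.0)               # no self-loops
    return adj
```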
AI-driven video summarization for optimizing content retrieval and management through deep learning techniques
Sci Rep. 2025 Feb 3;15(1):4058. doi: 10.1038/s41598-025-87824-9.
ABSTRACT
With the rapid advancement of artificial intelligence, stakeholders are increasingly asking how such technologies can enhance the environmental, social, and governance outcomes of organizations. In this study, challenges related to the organization and retrieval of video content within large, heterogeneous media archives are addressed. Existing methods, often reliant on human intervention or low-complexity algorithms, struggle with the growing demands of online video quantity and quality. To address these limitations, a novel approach is proposed in which convolutional neural networks and long short-term memory networks are utilized to extract both frame-level and temporal video features. A 50-layer residual network (ResNet50) is integrated for enhanced content representation, and two-frame video flow is employed to improve system performance. The framework achieves precision, recall, and F-score of 79.2%, 86.5%, and 83%, respectively, on the YouTube, EPFL, and TVSum datasets. Beyond technological advancements, opportunities for effective content management are highlighted, emphasizing the promotion of sustainable digital practices. By minimizing data duplication and optimizing resource usage, scalable solutions for large media collections are supported by the proposed system.
PMID:39901035 | DOI:10.1038/s41598-025-87824-9
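A minimal sketch of the described pipeline's core: ResNet50 frame features fed to an LSTM that scores frame importance for summary selection; the dimensions, frozen backbone, and scoring head are illustrative assumptions.

```python
# Per-frame ResNet-50 features -> LSTM -> frame importance scores.
import torch
import torch.nn as nn
from torchvision import models

class FrameScorer(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()                     # 2048-d frame features
        self.backbone = backbone.eval()                 # frozen feature extractor
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    @torch.no_grad()
    def encode(self, frames):                           # frames: (T, 3, 224, 224)
        return self.backbone(frames)

    def forward(self, frames):
        feats = self.encode(frames).unsqueeze(0)        # (1, T, 2048)
        h, _ = self.lstm(feats)                         # temporal modeling
        return torch.sigmoid(self.score(h)).squeeze()   # one importance score per frame

scores = FrameScorer()(torch.randn(16, 3, 224, 224))    # scores for 16 video frames
```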
A novel early stage drip irrigation system cost estimation model based on management and environmental variables
Sci Rep. 2025 Feb 3;15(1):4089. doi: 10.1038/s41598-025-88446-x.
ABSTRACT
One of the most significant, intricate, and little-discussed aspects of pressurized irrigation is cost estimation. This study models the early-stage cost of drip irrigation systems using a database of 515 projects divided into four sections: the cost of the pumping station and central control system (TCP), the cost of on-farm equipment (TCF), the cost of on-farm and pumping-station installation and operation (TCI), and the total cost (TCT). First, 39 environmental and management features affecting the cost of the listed sections were extracted for each of the 515 projects. A database (a 515 × 43 matrix) was created, and the costs of all projects were updated to the baseline year of 2022. Then, several feature selection algorithms, such as WCC, LCA, GA, PSO, ACO, ICA, LA, HTS, FOA, DSOS, and CUK, were employed to choose the features with the greatest influence on system cost. Feature selection was carried out both for all features (39 in total) and for easily available features (those that exist before the irrigation system's design phase; 18 features). Different machine learning models, such as Multivariate Linear Regression, Support Vector Regression, Artificial Neural Networks, Gene Expression Programming, Genetic Algorithms, Deep Learning, and Decision Trees, were then used to estimate the costs of each of the aforementioned sections. The support vector machine (SVM) and wrapper optimization algorithms were found to be the best learner and feature selection techniques, respectively. The LCA and FOA algorithms produced the best estimates according to the evaluation criteria: their RMSE for all features was 0.0020 and 0.0018, respectively, and their R2 was 0.94 and 0.94; for readily available features, these criteria were 0.0006 and 0.95 for both algorithms. For the full feature set, early-stage cost modeling with the selected features revealed that the SVM model (with an RBF kernel) is the best model among the four cost sections discussed, with evaluation criteria of R2 = 0.923, RMSE = 0.008, and VE = 0.082 in the training stage and R2 = 0.893, RMSE = 0.009, and VE = 0.102 in the testing stage. The ANN (MLP) model was the best model for the subset of easily available features, with R2 = 0.912, RMSE = 0.008, and VE = 0.083 in the training stage and R2 = 0.882, RMSE = 0.009, and VE = 0.103 in the testing stage. The findings of this study can be used to estimate the cost of local irrigation systems with high accuracy based on the recognized environmental and management parameters and by employing the specified models.
PMID:39900997 | DOI:10.1038/s41598-025-88446-x
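A minimal sketch of the best-performing learner reported above, an RBF-kernel support vector regressor fitted on selected project features to predict normalized cost; the feature matrix, scaling, and hyperparameters are placeholders, not the study's values.

```python
# RBF-kernel support vector regression for early-stage cost estimation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X = np.random.rand(515, 18)   # placeholder: 515 projects x 18 readily available features
y = np.random.rand(515)       # placeholder: normalized early-stage cost per project

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
predicted_cost = model.predict(X[:5])   # cost estimates for five projects
```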
An explainable deep learning model for diabetic foot ulcer classification using swin transformer and efficient multi-scale attention-driven network
Sci Rep. 2025 Feb 3;15(1):4057. doi: 10.1038/s41598-025-87519-1.
ABSTRACT
Diabetic Foot Ulcer (DFU) is a severe complication of diabetes mellitus, resulting in significant health and socio-economic challenges for the diagnosed individual. Severe cases of DFU can lead to lower limb amputation in diabetic patients, making their diagnosis a complex and costly process that poses challenges for medical professionals. Manual identification of DFU is particularly difficult due to their diverse visual characteristics, leaving many cases undiagnosed. To address this challenge, Deep Learning (DL) methods offer an efficient and automated approach to facilitate timely treatment and improve patient outcomes. This research proposes a novel feature fusion-based model that incorporates two parallel tracks for efficient feature extraction. The first track utilizes the Swin transformer, which captures long-range dependencies by employing shifted windows and self-attention mechanisms. The second track involves the Efficient Multi-Scale Attention-Driven Network (EMADN), which leverages Light-weight Multi-scale Deformable Shuffle (LMDS) and Global Dilated Attention (GDA) blocks to extract local features efficiently. These blocks dynamically adjust kernel sizes and leverage attention modules, enabling effective feature extraction. To the best of our knowledge, this is the first work reporting a dual-track architecture for DFU classification that leverages the Swin transformer and EMADN networks. The feature maps obtained from both networks are concatenated and subjected to shuffle attention for feature refinement at a reduced computational cost. The proposed work also incorporates Grad-CAM-based Explainable Artificial Intelligence (XAI) to visualize and interpret the decision-making of the network. The proposed model demonstrated better performance on the DFUC-2021 dataset, surpassing existing works and pre-trained CNN architectures with an accuracy of 78.79% and a macro F1-score of 80%.
PMID:39900977 | DOI:10.1038/s41598-025-87519-1
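A hedged sketch of a dual-track design in the spirit of the abstract above: a Swin transformer branch for global context and a lightweight CNN standing in for the paper's EMADN local-feature track, with the two feature vectors concatenated for classification; the stand-in CNN, feature sizes, class count, and head are assumptions.

```python
# Dual-track classifier: Swin-T global features + a small CNN as a local-feature
# stand-in for EMADN, fused by concatenation.
import torch
import torch.nn as nn
from torchvision import models

class DualTrackDFU(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        swin = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
        swin.head = nn.Identity()                      # 768-d global features
        self.swin = swin
        self.local = nn.Sequential(                    # simple stand-in for the EMADN local track
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(768 + 64, num_classes)

    def forward(self, x):                              # x: (B, 3, 224, 224)
        fused = torch.cat([self.swin(x), self.local(x)], dim=1)  # feature fusion
        return self.classifier(fused)

logits = DualTrackDFU()(torch.randn(2, 3, 224, 224))
```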
Enhancing depression recognition through a mixed expert model by integrating speaker-related and emotion-related features
Sci Rep. 2025 Feb 3;15(1):4064. doi: 10.1038/s41598-025-88313-9.
ABSTRACT
The World Health Organization predicts that by 2030, depression will be the most common mental disorder, significantly affecting individuals, families, and society. Speech, as a sensitive indicator, reveals noticeable acoustic changes linked to physiological and cognitive variations, making it a crucial behavioral marker for detecting depression. However, existing studies often overlook the separation of speaker-related and emotion-related features in speech when recognizing depression. To tackle this challenge, we propose a Mixture-of-Experts (MoE) method that integrates speaker-related and emotion-related features for depression recognition. Our approach begins with a Time Delay Neural Network to pre-train a speaker-related feature extractor using a large-scale speaker recognition dataset while simultaneously pre-training a speaker's emotion-related feature extractor with a speech emotion dataset. We then apply transfer learning to extract both features from a depression dataset, followed by fusion. A multi-domain adaptation algorithm trains the MoE model for depression recognition. Experimental results demonstrate that our method achieves 74.3% accuracy on a self-built Chinese localized depression dataset and an MAE of 6.32 on the AVEC2014 dataset. Thus, it outperforms state-of-the-art deep learning methods that use speech features. Additionally, our approach shows strong performance across Chinese and English speech datasets, highlighting its effectiveness in addressing cultural variations.
PMID:39900968 | DOI:10.1038/s41598-025-88313-9
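A hedged sketch of the fusion idea: a small mixture-of-experts head whose gate weighs experts operating on concatenated speaker-related and emotion-related embeddings; the embedding sizes, expert count, and gating scheme are illustrative assumptions.

```python
# Gated mixture-of-experts over fused speaker and emotion embeddings.
import torch
import torch.nn as nn

class MoEDepressionHead(nn.Module):
    def __init__(self, spk_dim=192, emo_dim=128, n_experts=4, n_classes=2):
        super().__init__()
        d = spk_dim + emo_dim
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n_classes))
             for _ in range(n_experts)]
        )
        self.gate = nn.Linear(d, n_experts)

    def forward(self, spk_emb, emo_emb):
        x = torch.cat([spk_emb, emo_emb], dim=-1)                       # fused features
        gate = torch.softmax(self.gate(x), dim=-1)                      # (B, n_experts)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (B, n_experts, n_classes)
        return (gate.unsqueeze(-1) * expert_out).sum(dim=1)             # gated combination

logits = MoEDepressionHead()(torch.randn(8, 192), torch.randn(8, 128))
```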
A mechanism-informed deep neural network enables prioritization of regulators that drive cell state transitions
Nat Commun. 2025 Feb 3;16(1):1284. doi: 10.1038/s41467-025-56475-9.
ABSTRACT
Cells are regulated at multiple levels, from regulations of individual genes to interactions across multiple genes. Some recent neural network models can connect molecular changes to cellular phenotypes, but their design lacks modeling of regulatory mechanisms, limiting the decoding of regulations behind key cellular events, such as cell state transitions. Here, we present regX, a deep neural network incorporating both gene-level regulation and gene-gene interaction mechanisms, which enables prioritizing potential driver regulators of cell state transitions and providing mechanistic interpretations. Applied to single-cell multi-omics data on type 2 diabetes and hair follicle development, regX reliably prioritizes key transcription factors and candidate cis-regulatory elements that drive cell state transitions. Some regulators reveal potential new therapeutic targets, drug repurposing possibilities, and putative causal single nucleotide polymorphisms. This method to analyze single-cell multi-omics data demonstrates how the interpretable design of neural networks can better decode biological systems.
PMID:39900922 | DOI:10.1038/s41467-025-56475-9
Automated contouring for breast cancer radiotherapy in the isocentric lateral decubitus position: a neural network-based solution for enhanced precision and efficiency
Strahlenther Onkol. 2025 Feb 3. doi: 10.1007/s00066-024-02364-x. Online ahead of print.
ABSTRACT
BACKGROUND: Adjuvant radiotherapy is essential for reducing local recurrence and improving survival in breast cancer patients, but it carries a risk of ischemic cardiac toxicity, which increases with heart exposure. The isocentric lateral decubitus position, where the breast rests flat on a support, reduces heart exposure and leads to delivery of a more uniform dose. This position is particularly beneficial for patients with unique anatomies, such as those with pectus excavatum or larger breast sizes. While artificial intelligence (AI) algorithms for autocontouring have shown promise, they have not been tailored to this specific position. This study aimed to develop and evaluate a neural network-based autocontouring algorithm for patients treated in the isocentric lateral decubitus position.
MATERIALS AND METHODS: In this single-center study, 1189 breast cancer patients treated after breast-conserving surgery were included. Their simulation CT scans (1209 scans) were used to train and validate a neural network-based autocontouring algorithm (nnU-Net). Of these, 1087 scans were used for training, and 122 scans were reserved for validation. The algorithm's performance was assessed using the Dice similarity coefficient (DSC) to compare the automatically delineated volumes with manual contours. A clinical evaluation of the algorithm was performed on 30 additional patients, with contours rated by two expert radiation oncologists.
RESULTS: The neural network-based algorithm achieved a segmentation time of approximately 4 min, compared to 20 min for manual segmentation. The DSC values for the validation cohort were 0.88 for the treated breast, 0.90 for the heart, 0.98 for the right lung, and 0.97 for the left lung. In the clinical evaluation, 90% of the automatically contoured breast volumes were rated as acceptable without corrections, while the remaining 10% required minor adjustments. All lung contours were accepted without corrections, and heart contours were rated as acceptable in 93.3% of cases, with minor corrections needed in 6.6% of cases.
CONCLUSION: This neural network-based autocontouring algorithm offers a practical, time-saving solution for breast cancer radiotherapy planning in the isocentric lateral decubitus position. Its strong geometric performance, clinical acceptability, and significant time efficiency make it a valuable tool for modern radiotherapy practices, particularly in high-volume centers.
PMID:39900818 | DOI:10.1007/s00066-024-02364-x
Artificial intelligence in arthroplasty
Orthopadie (Heidelb). 2025 Feb 3. doi: 10.1007/s00132-025-04619-6. Online ahead of print.
ABSTRACT
BACKGROUND: Artificial intelligence is very likely to be a pioneering technology in arthroplasty, with a wide range of pre-, intra- and post-operative applications. The opportunities for patients, doctors and healthcare policy are considerable, especially in the context of optimized and individualized patient care.
DATA AVAILABILITY: Despite these diverse possibilities, there are currently only a few AI applications in routine clinical practice, mainly due to the limited availability of analyzable health data. AI systems are only as good as the data they are trained with. If the data are insufficient, incomplete, or biased, the AI may draw false conclusions. The current results of such AI applications in arthroplasty must, therefore, be viewed critically, especially as previous databases were not designed a priori for AI applications.
PROSPECTS: The successful integration of AI therefore requires a targeted focus on developing a specific data structure. To exploit the full potential of AI, comprehensive clinical data volumes are required, which can only be realized through a multicentric approach. In this context, ethical and data protection issues remain open questions, and not only in orthopaedics. Cooperative efforts at national and international levels are therefore essential for researching and developing new AI applications.
PMID:39900780 | DOI:10.1007/s00132-025-04619-6
De Novo Synthesis of Reticuline and Taxifolin Using Re-engineered Homologous Recombination in Yarrowia lipolytica
ACS Synth Biol. 2025 Feb 3. doi: 10.1021/acssynbio.4c00853. Online ahead of print.
ABSTRACT
Yarrowia lipolytica has been widely engineered as a eukaryotic cell factory to produce various important compounds. However, the difficulty of gene editing and the lack of efficient neutral sites make rewiring of Y. lipolytica metabolism challenging. Herein, a Cas9 system was established to redesign the Y. lipolytica homologous recombination (HR) system, yielding a more than 56-fold increase in HR efficiency. Fusion expression of the hBrex27 sequence at the C-terminus of Cas9 recruited more Rad51 protein, and the engineered Cas9 decreased non-homologous end joining (NHEJ), achieving 85% single-gene positive efficiency and 25% multigene editing efficiency. With this system, neutral sites on different chromosomes were characterized, and a deep learning model was developed for gRNA activity prediction, thus providing the corresponding integration efficiency and expression intensity. Subsequently, the tool and platform strains were validated by applying them to the de novo synthesis of (S)-reticuline and (2S)-taxifolin. The developed platform strains and tools helped transform Y. lipolytica into an easy-to-operate model cell factory, similar to Saccharomyces cerevisiae.
PMID:39899813 | DOI:10.1021/acssynbio.4c00853
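A hedged sketch of the gRNA-activity prediction component mentioned above: one-hot encode a 20-nt protospacer and score it with a small 1D CNN; the sequence length, architecture, and output scale are assumptions, not the authors' model.

```python
# One-hot encoding of a guide sequence and a small 1D CNN activity regressor.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(4, len(seq))
    for i, base in enumerate(seq.upper()):
        x[BASES.index(base), i] = 1.0
    return x

class GuideActivityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):             # x: (B, 4, sequence_length)
        return self.net(x).squeeze(1)

score = GuideActivityCNN()(one_hot("GTTCAGAGCTATGCTGGAAA").unsqueeze(0))  # example 20-nt guide
```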
Machine Learning-Enabled Drug-Induced Toxicity Prediction
Adv Sci (Weinh). 2025 Feb 3:e2413405. doi: 10.1002/advs.202413405. Online ahead of print.
ABSTRACT
Unexpected toxicity has become a significant obstacle to drug candidate development, accounting for 30% of drug discovery failures. Traditional toxicity assessment through animal testing is costly and time-consuming. Big data and artificial intelligence (AI), especially machine learning (ML), are robustly contributing to innovation and progress in toxicology research. However, the optimal AI model for different types of toxicity usually varies, making it essential to conduct comparative analyses of AI methods across toxicity domains. The diverse data sources also pose challenges for researchers focusing on specific toxicity studies. In this review, 10 categories of drug-induced toxicity are examined, summarizing the characteristics and applicable ML models, including both predictive and interpretable algorithms, while striking a balance between breadth and depth. Key databases and tools used in toxicity prediction are also highlighted, including toxicology, chemical, multi-omics, and benchmark databases, organized by their focus and function to clarify their roles in drug-induced toxicity prediction. Finally, strategies to turn challenges into opportunities are analyzed and discussed. This review may provide researchers with a valuable reference for understanding and utilizing the available resources to bridge prediction and mechanistic insights, and to further advance the application of ML in drug-induced toxicity prediction.
PMID:39899688 | DOI:10.1002/advs.202413405
Adaptive wavelet base selection for deep learning-based ECG diagnosis: A reinforcement learning approach
PLoS One. 2025 Feb 3;20(2):e0318070. doi: 10.1371/journal.pone.0318070. eCollection 2025.
ABSTRACT
Electrocardiogram (ECG) signals are crucial in diagnosing cardiovascular diseases (CVDs). While wavelet-based feature extraction has demonstrated effectiveness in deep learning (DL)-based ECG diagnosis, selecting the optimal wavelet base poses a significant challenge, as it directly influences feature quality and diagnostic accuracy. Traditional methods typically rely on fixed wavelet bases chosen heuristically or through trial-and-error, which can fail to cover the distinct characteristics of individual ECG signals, leading to suboptimal performance. To address this limitation, we propose a reinforcement learning-based wavelet base selection (RLWBS) framework that dynamically customizes the wavelet base for each ECG signal. In this framework, a reinforcement learning (RL) agent iteratively optimizes its wavelet base selection (WBS) strategy based on successive feedback of classification performance, aiming to achieve progressively optimized feature extraction. Experiments conducted on the clinically collected PTB-XL dataset for ECG abnormality classification show that the proposed RLWBS framework could obtain more detailed time-frequency representation of ECG signals, yielding enhanced diagnostic performance compared to traditional WBS approaches.
PMID:39899639 | DOI:10.1371/journal.pone.0318070
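A hedged sketch of treating the wavelet base as a selectable action: decompose an ECG segment with a candidate wavelet and return sub-band energy features, whose downstream classification performance would serve as the RL reward; the candidate set and feature choice are illustrative assumptions.

```python
# Wavelet-base-dependent feature extraction for one ECG segment.
import numpy as np
import pywt

CANDIDATE_WAVELETS = ["db4", "db6", "sym5", "coif3", "bior3.3"]   # example action space

def wavelet_features(ecg: np.ndarray, wavelet: str, level: int = 4) -> np.ndarray:
    """Decompose the segment with the chosen wavelet and return sub-band energies."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

feats = wavelet_features(np.random.randn(5000), CANDIDATE_WAVELETS[0])  # placeholder signal
```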
Capturing continuous, long timescale behavioral changes in Drosophila melanogaster postural data
PLoS Comput Biol. 2025 Feb 3;21(2):e1012753. doi: 10.1371/journal.pcbi.1012753. Online ahead of print.
ABSTRACT
Animal behavior spans many timescales, from short, seconds-scale actions to daily rhythms over many hours to life-long changes during aging. To access longer timescales of behavior, we continuously recorded individual Drosophila melanogaster at 100 frames per second for up to 7 days at a time in featureless arenas on sucrose-agarose media. We use the deep learning framework SLEAP to produce a full-body postural dataset for 47 individuals resulting in nearly 2 billion pose instances. We identify stereotyped behaviors such as grooming, proboscis extension, and locomotion and use the resulting ethograms to explore how the flies' behavior varies across time of day and days in the experiment. We find distinct daily patterns in all stereotyped behaviors, adding specific information about trends in different grooming modalities, proboscis extension duration, and locomotion speed to what is known about the D. melanogaster circadian cycle. Using our holistic measurements of behavior, we find that the hour after dawn is a unique time point in the flies' daily pattern of behavior, and that the behavioral composition of this hour tracks well with other indicators of health such as locomotion speed and the fraction of time spent moving vs. resting. The method, data, and analysis presented here give us a new and clearer picture of D. melanogaster behavior across timescales, revealing novel features that hint at unexplored underlying biological mechanisms.
PMID:39899595 | DOI:10.1371/journal.pcbi.1012753
Unsupervised monocular depth estimation with omnidirectional camera for 3D reconstruction of grape berries in the wild
PLoS One. 2025 Feb 3;20(2):e0317359. doi: 10.1371/journal.pone.0317359. eCollection 2025.
ABSTRACT
Japanese table grapes are quite expensive because their production is highly labor-intensive. In particular, grape berry pruning is a labor-intensive task performed to produce grapes with desirable characteristics. Because the task is considered difficult to master, it is desirable to assist new entrants by using information technology to show which berries are recommended for cutting. In this research, we aim to build a system that identifies which grape berries should be removed during the pruning process. To realize this, the 3D positions of individual grape berries need to be estimated. A practical constraint of our setting is that bunches hang from trellises at a height of about 1.6 meters in outdoor grape orchards. Depth sensors are hard to use in such circumstances, and an omnidirectional camera with a wide field of view is desirable for conveniently shooting videos. Obtaining 3D information on grape berries from videos is challenging because they have textureless surfaces, highly symmetric shapes, and crowded arrangements. For these reasons, it is hard to use conventional 3D reconstruction methods, which rely on matching local unique features. To satisfy the practical constraints of this task, we extend a deep learning-based unsupervised monocular depth estimation method to omnidirectional cameras and propose using it. Our experiments demonstrate the effectiveness of the proposed method for estimating the 3D positions of grape berries in the wild.
PMID:39899513 | DOI:10.1371/journal.pone.0317359