Deep learning

Mental Health Screening Using the Heart Rate Variability and Frontal Electroencephalography Features: A Machine Learning-Based Approach

Wed, 2025-02-19 06:00

JMIR Ment Health. 2025 Feb 19. doi: 10.2196/72803. Online ahead of print.

ABSTRACT

BACKGROUND: Heart rate variability (HRV) is a physiological marker of cardiac autonomic modulation and related emotional regulation. Electroencephalography (EEG) reflects cortical brain activity and related psychopathology. HRV and EEG features have been employed in machine learning- and deep learning-based algorithms, either alone or alongside other wearable device-based features, to classify patients with psychiatric disorders (PT) and healthy controls (HC). However, few studies have examined the utility of wearable device-based physiological markers in discerning PT with various psychiatric diagnoses from HC.

OBJECTIVE: This study identified the HRV and prefrontal EEG features most frequently selected in the support vector machine (SVM) model achieving the highest classification accuracy for PT versus HC, with a view to individual-level initial screening of PT and minimizing the duration of untreated psychiatric illness.

METHODS: Five-minute photoplethysmography (PPG; measured on the right ear lobe) and resting-state EEG (eyes closed; using two electrodes located on the left and right forehead) were acquired simultaneously from 182 participants [87 PT, including major depressive disorder (70.1%) and panic disorder (12.6%), and 95 HC]. The PPG-based HRV features were quantified in both the time and frequency domains. The time-varying EEG signals were converted into frequency-domain power spectral densities. For feature selection in the Gaussian radial basis function kernel-based SVM models, each candidate model comprised the top N (1 ≤ N ≤ 22) highest-scored HRV/EEG features ranked by one-way ANOVA F-value. Classification performance (PT vs. HC) of the SVM model with N estimators was assessed using leave-one-out cross-validation (LOOCV; N = 182), and the model showing the highest balanced accuracy and area under the receiver operating characteristic curve (AUROC) was confirmed as the final classification model.
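The ANOVA-based filter step described here can be sketched in a few lines. The following is an illustrative pure-Python reconstruction (not the authors' code): it scores each feature by its one-way ANOVA F-value across the PT and HC groups and keeps the top N as SVM inputs.

```python
def anova_f(feature_by_group):
    """One-way ANOVA F-value for one feature across groups (e.g. PT vs. HC)."""
    groups = list(feature_by_group.values())
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def top_n_features(feature_table, labels, n):
    """Rank features by F-value and keep the top n as candidate SVM estimators."""
    scores = {}
    for name, values in feature_table.items():
        by_group = {}
        for v, y in zip(values, labels):
            by_group.setdefault(y, []).append(v)
        scores[name] = anova_f(by_group)
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

In the study this ranking was recomputed inside each LOOCV fold, and models with N = 1 to 22 selected features were compared.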

RESULTS: The final SVM model, with 13 estimators, showed a balanced accuracy of 0.76 and an AUROC of 0.78. The top 13-scored classification features in >90% of the LOOCV iterations comprised the power spectral densities of HRV in the high-frequency, very-low-frequency, and low-frequency (LF) bands and the total power; the product of the mean 5-minute standard deviation of all NN intervals (SDNN) and the normalized LF power of HRV; and the power spectral densities of frontal EEG in the high-alpha band and at the alpha peak frequency.
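The frequency-domain HRV quantities named here follow the standard short-term bands (VLF 0.0033-0.04 Hz, LF 0.04-0.15 Hz, HF 0.15-0.4 Hz). A minimal sketch of how band powers and normalized LF power are typically derived from a PSD (illustrative only; the authors' exact integration scheme is not given in the abstract):

```python
# Standard short-term HRV frequency bands in Hz.
BANDS = {"vlf": (0.0033, 0.04), "lf": (0.04, 0.15), "hf": (0.15, 0.4)}

def band_powers(freqs, psd):
    """Integrate a PSD over the HRV bands (rectangle rule, uniform grid)
    and derive normalized LF power = LF / (LF + HF)."""
    df = freqs[1] - freqs[0]
    p = {name: sum(s * df for f, s in zip(freqs, psd) if lo <= f < hi)
         for name, (lo, hi) in BANDS.items()}
    p["total"] = p["vlf"] + p["lf"] + p["hf"]
    p["lf_norm"] = p["lf"] / (p["lf"] + p["hf"])
    return p
```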

CONCLUSIONS: This study showed a possible synergistic effect of combining HRV and prefrontal EEG features in machine learning-based mental health screening. Future studies are required to predict treatment response and to propose preferred treatment regimens based on baseline physiological markers.

CLINICALTRIAL: N/A.

PMID:39971280 | DOI:10.2196/72803

Categories: Literature Watch

FusionNet: Dual Input Feature Fusion Network with Ensemble Based Filter Feature Selection for Enhanced Brain Tumor Classification

Wed, 2025-02-19 06:00

Brain Res. 2025 Feb 17:149507. doi: 10.1016/j.brainres.2025.149507. Online ahead of print.

ABSTRACT

Brain tumors pose a significant threat to human health and require a precise, rapid diagnosis for effective treatment. However, achieving high diagnostic accuracy with traditional methods remains challenging due to the complex nature of brain tumors. Recent advances in deep learning have shown promise in automating brain tumor classification from brain MRI images, offering the potential to enhance diagnostic results. This paper presents FusionNet, a novel approach that utilizes both normal and segmented MRI images to achieve better classification accuracy. First, segmented images are generated using a pre-trained model based on dual residual blocks. Second, the model uses an attention mechanism and ensemble-based filter feature selection to prioritize the most relevant features and improve classification performance. Third, the proposed model fuses the features of both image types (normal and segmented) to enrich the selected feature set for better classification. The proposed model achieved high accuracy across multiple datasets: 99.62%, 99.54%, 99.39%, and 99.57% on the Figshare, Kaggle, Sartaj, and combined datasets, respectively, with higher accuracy, precision, recall, and F1-score than existing models. The utility of this study lies in its contribution to the scientific community as a robust, efficient tool that advances brain tumor classification, supporting medical professionals in achieving superior diagnostic outcomes.

PMID:39970997 | DOI:10.1016/j.brainres.2025.149507

Categories: Literature Watch

A deep learning approach: physics-informed neural networks for solving a nonlinear telegraph equation with different boundary conditions

Wed, 2025-02-19 06:00

BMC Res Notes. 2025 Feb 19;18(1):77. doi: 10.1186/s13104-025-07142-1.

ABSTRACT

The nonlinear telegraph equation appears in a variety of engineering and science problems. This paper presents a deep learning algorithm, physics-informed neural networks, to solve a hyperbolic nonlinear telegraph equation with Dirichlet, Neumann, and periodic boundary conditions. To incorporate physical information about the problem, a multi-objective loss function is defined, consisting of the residual of the governing partial differential equation together with the initial and boundary conditions. Using densely connected feedforward deep neural networks, the proposed scheme is trained to minimize the total loss resulting from this multi-objective loss function. Three computational examples are provided to demonstrate the efficacy and applications of the suggested method. Using a Python software package, we conducted several tests across model optimizers, activation functions, neural network architectures, and numbers of hidden layers to choose the hyper-parameters yielding the best physics-informed neural network model for the problem. Furthermore, using graphs and tables, the results of the suggested approach are compared with analytical solutions from the literature based on relative error analyses and statistical performance measures. According to the results, the suggested computational method is effective in solving difficult nonlinear physical problems with various boundary conditions.
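A multi-objective PINN loss of the kind described typically takes the following form. The specific residual operator below is an assumption (the abstract does not state the exact PDE); it is written for a generic telegraph-type equation $u_{tt} + \alpha u_t + \beta u = c^2 u_{xx} + f(u, x, t)$:

```latex
\mathcal{L}(\theta) =
\underbrace{\frac{1}{N_r}\sum_{i=1}^{N_r}
  \bigl|\partial_{tt}u_\theta + \alpha\,\partial_t u_\theta + \beta\,u_\theta
  - c^2\,\partial_{xx}u_\theta - f(u_\theta, x_i, t_i)\bigr|^2}_{\text{PDE residual at collocation points}}
+ \underbrace{\frac{1}{N_0}\sum_{j=1}^{N_0}
  \bigl(|u_\theta(x_j,0)-u_0(x_j)|^2
  + |\partial_t u_\theta(x_j,0)-v_0(x_j)|^2\bigr)}_{\text{initial conditions}}
+ \underbrace{\frac{1}{N_b}\sum_{k=1}^{N_b}
  \bigl|\mathcal{B}[u_\theta](x_k,t_k)\bigr|^2}_{\text{boundary conditions}}
```

where $\mathcal{B}$ is the Dirichlet, Neumann, or periodic boundary operator, and the derivatives of the network $u_\theta$ are obtained by automatic differentiation.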

PMID:39972356 | DOI:10.1186/s13104-025-07142-1

Categories: Literature Watch

UAS-based MT-YOLO model for detecting missed tassels in hybrid maize detasseling

Wed, 2025-02-19 06:00

Plant Methods. 2025 Feb 19;21(1):21. doi: 10.1186/s13007-025-01341-4.

ABSTRACT

Accurate detection of missed tassels is crucial for maintaining the purity of hybrid maize seed production. This study introduces the MT-YOLO model, designed to replace or assist manual detection by leveraging deep learning and unmanned aerial systems (UASs). A comprehensive dataset was constructed, informed by an analysis of the agronomic characteristics of missed tassels during the detasseling period, including factors such as tassel visibility, plant height variability, and tassel development stages. The dataset captures diverse tassel images under varying lighting conditions, planting densities, and growth stages, with special attention to early tasseling stages when tassels are partially wrapped in leaves, a critical yet underexplored challenge for accurate detasseling. The MT-YOLO model demonstrates significant improvements in detection metrics, achieving an average precision (AP) of 93.1%, precision of 93.3%, recall of 91.6%, and an F1-score of 92.4%, outperforming Faster R-CNN, SSD, and various YOLO models. Compared to the baseline YOLO v5s, the MT-YOLO model increased recall by 1.1%, precision by 4.9%, and F1-score by 3.0%, while maintaining a detection speed of 124 fps. Field tests further validated its robustness, achieving a mean missed rate of 9.1%. These results highlight the potential of MT-YOLO as a reliable and efficient solution for enhancing detasseling efficiency in hybrid maize seed production.
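The reported F1-score is internally consistent with the stated precision and recall; as a quick arithmetic check (F1 is the harmonic mean of the two):

```python
def f1(precision, recall):
    """F1-score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# MT-YOLO's reported precision (93.3%) and recall (91.6%) reproduce its F1 of 92.4%.
round(f1(0.933, 0.916), 3)  # 0.924
```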

PMID:39972352 | DOI:10.1186/s13007-025-01341-4

Categories: Literature Watch

De novo design of transmembrane fluorescence-activating proteins

Wed, 2025-02-19 06:00

Nature. 2025 Feb 19. doi: 10.1038/s41586-025-08598-8. Online ahead of print.

ABSTRACT

The recognition of ligands by transmembrane proteins is essential for the exchange of materials, energy and information across biological membranes. Progress has been made in the de novo design of transmembrane proteins [1-6], as well as in designing water-soluble proteins to bind small molecules [7-12], but de novo design of transmembrane proteins that tightly and specifically bind to small molecules remains an outstanding challenge [13]. Here we present the accurate design of ligand-binding transmembrane proteins by integrating deep learning and energy-based methods. We designed pre-organized ligand-binding pockets in high-quality four-helix backbones for a fluorogenic ligand, and generated a transmembrane span using gradient-guided hallucination. The designer transmembrane proteins specifically activated fluorescence of the target fluorophore with mid-nanomolar affinity, exhibiting higher brightness and quantum yield compared to those of enhanced green fluorescent protein. These proteins were highly active in the membrane fraction of live bacterial and eukaryotic cells following expression. The crystal and cryogenic electron microscopy structures of the designer protein-ligand complexes were very close to the structures of the design models. We showed that the interactions between ligands and transmembrane proteins within the membrane can be accurately designed. Our work paves the way for the creation of new functional transmembrane proteins, with a wide range of applications including imaging, ligand sensing and membrane transport.

PMID:39972138 | DOI:10.1038/s41586-025-08598-8

Categories: Literature Watch

Assessment of hydrological loading displacement from GNSS and GRACE data using deep learning algorithms

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 19;15(1):6070. doi: 10.1038/s41598-025-90363-y.

ABSTRACT

This work introduces a novel method for estimating hydrological loading displacement using 3D Convolutional Neural Networks (3D-CNN). This approach utilizes vertical displacement time series data from 41 Global Navigation Satellite System (GNSS) stations across Yunnan Province, China, and its adjacent areas, coupled with spatiotemporal variations in terrestrial water storage derived from the Gravity Recovery and Climate Experiment satellites (GRACE). The 3D-CNN method demonstrates markedly higher inversion precision than conventional load Green's function inversion techniques. This improvement is evidenced by substantial reductions in deviations from GNSS observations across various statistical metrics: the maximum deviation decreased by 1.34 millimeters, the absolute minimum deviation by 1.47 millimeters, the absolute mean deviation by 79.6%, and the standard deviation by 31.4%. An in-depth analysis of terrestrial water storage and loading displacement from 2019 to 2022 in Yunnan Province revealed distinct seasonal fluctuations, primarily driven by dominant annual and semi-annual cycles; these periodic signals accounted for over 90% of the variance. The spatial distribution of terrestrial water loading displacement is strongly associated with regional precipitation patterns, showing smaller amplitudes in the northeast and northwest and larger amplitudes in the southwest. The research findings presented in this paper offer a novel perspective on the spatiotemporal variations of environmental load effects, particularly those related to terrestrial water loading deformation with significant spatial heterogeneity. Accurate assessment of the effects of terrestrial water loading displacement (TWLD) is of considerable importance for precise geodetic observations, as well as for the establishment and maintenance of high-precision dynamic reference frames. Furthermore, the development of a TWLD model that integrates GRACE and GNSS data provides valuable data support for higher-precision inversion of changes in terrestrial water storage.

PMID:39972111 | DOI:10.1038/s41598-025-90363-y

Categories: Literature Watch

Towards realistic simulation of disease progression in the visual cortex with CNNs

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 19;15(1):6099. doi: 10.1038/s41598-025-89738-y.

ABSTRACT

Convolutional neural networks (CNNs) and mammalian visual systems share architectural and information processing similarities. We leverage these parallels to develop an in-silico CNN model simulating diseases affecting the visual system. This model aims to replicate neural complexities in an experimentally controlled environment. Therefore, we examine object recognition and internal representations of a CNN under neurodegeneration and neuroplasticity conditions simulated through synaptic weight decay and retraining. This approach can model neurodegeneration from events like tau accumulation, reflecting cognitive decline in diseases such as posterior cortical atrophy, a condition that can accompany Alzheimer's disease and primarily affects the visual system. After each degeneration iteration, we retrain unaffected synapses to simulate ongoing neuroplasticity. Our results show that with significant synaptic decay and limited retraining, the model's representational similarity decreases compared to a healthy model. Early CNN layers retain high similarity to the healthy model, while later layers are more prone to degradation. The results of this study reveal a progressive decline in object recognition proficiency, mirroring posterior cortical atrophy progression. In-silico modeling of neurodegenerative diseases can enhance our understanding of disease progression and aid in developing targeted rehabilitation and treatments.
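The degeneration-retraining loop can be sketched abstractly. This toy version is an assumption (the abstract does not specify the decay schedule): it scales a random subset of synaptic weights toward zero each iteration and reports the affected indices, so a subsequent retraining pass can be restricted to the unaffected synapses.

```python
import random

def degeneration_step(weights, fraction=0.1, factor=0.5, rng=None):
    """One simulated neurodegeneration iteration: scale a random subset of
    synaptic weights toward zero. Returns the decayed weights and the set of
    affected indices, so retraining can target only unaffected synapses."""
    rng = rng or random
    hit = set(rng.sample(range(len(weights)), int(len(weights) * fraction)))
    decayed = [w * factor if i in hit else w for i, w in enumerate(weights)]
    return decayed, hit
```

Repeating this step with interleaved retraining mirrors the paper's progressive-degradation protocol, in which early-layer representations stay closest to the healthy model.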

PMID:39972104 | DOI:10.1038/s41598-025-89738-y

Categories: Literature Watch

Ensemble fuzzy deep learning for brain tumor detection

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 19;15(1):6124. doi: 10.1038/s41598-025-90572-5.

ABSTRACT

This research presents a novel ensemble fuzzy deep learning approach for brain Magnetic Resonance Imaging (MRI) analysis, aiming to improve the segmentation of brain tissues and abnormalities. The method integrates multiple components, including diverse deep learning architectures enhanced with volumetric fuzzy pooling, a model fusion strategy, and an attention mechanism to focus on the most relevant regions of the input data. The process begins by collecting medical data using sensors to acquire MRI images. These data are then used to train several deep learning models that are specifically designed to handle various aspects of brain MRI segmentation. To enhance the model's performance, an efficient ensemble learning method is employed to combine the predictions of multiple models, ensuring that the final decision accounts for different strengths of each individual model. A key feature of the approach is the construction of a knowledge base that stores data from training images and associates it with the most suitable model for each specific sample. During the inference phase, this knowledge base is consulted to quickly identify and select the best model for processing new test images, based on the similarity between the test data and previously encountered samples. The proposed method is rigorously tested on real-world brain MRI segmentation benchmarks, demonstrating superior performance in comparison to existing techniques. Our proposed method achieves an Intersection over Union (IoU) of 95% on the complete Brain MRI Segmentation dataset, demonstrating a 10% improvement over baseline solutions.
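The reported IoU metric is the standard segmentation overlap measure; for reference, a minimal implementation over flat binary masks (illustrative, not the authors' evaluation code):

```python
def iou(pred, target):
    """Intersection over Union for binary masks given as flat 0/1 sequences.
    Two empty masks are treated as a perfect match."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0
```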

PMID:39972098 | DOI:10.1038/s41598-025-90572-5

Categories: Literature Watch

Temporal and spatial self supervised learning methods for electrocardiograms

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 19;15(1):6029. doi: 10.1038/s41598-025-90084-2.

ABSTRACT

The limited availability of labeled ECG data restricts the application of supervised deep learning methods in ECG detection. Although existing self-supervised learning approaches have been applied to ECG analysis, they are predominantly image-based, which limits their effectiveness. To address these limitations and provide novel insights, we propose a Temporal-Spatial Self-Supervised Learning (TSSL) method specifically designed for ECG detection. TSSL leverages the intrinsic temporal and spatial characteristics of ECG signals to enhance feature representation. Temporally, ECG signals retain consistent identity information over time, enabling the model to generate stable representations for the same individual across different time points while isolating representations of different leads to preserve their unique features. Spatially, ECG signals from various leads capture the heart's activity from different perspectives, revealing both commonalities and distinct patterns. TSSL captures these correlations by maintaining consistency in the relationships between signals and their representations across different leads. Experimental results on the CPSC2018, Chapman, and PTB-XL databases demonstrate that TSSL introduces new capabilities by effectively utilizing temporal and spatial information, achieving superior performance compared to existing methods and approaching the performance of full-label training with only 10% of the labeled data. This highlights TSSL's ability to provide deeper insights and enhanced feature extraction beyond mere performance improvements. We make our code publicly available on https://github.com/cwp9731/temporal-spatial-self-supervised-learning.

PMID:39972080 | DOI:10.1038/s41598-025-90084-2

Categories: Literature Watch

A skin disease classification model based on multi scale combined efficient channel attention module

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 19;15(1):6116. doi: 10.1038/s41598-025-90418-0.

ABSTRACT

Skin diseases, a significant category in the medical field, have always been challenging to diagnose and have a high misdiagnosis rate. Deep learning for skin disease classification has considerable value in clinical diagnosis and treatment. This study proposes a skin disease classification model based on multi-scale channel attention. The network architecture consists of three main parts: an input module, four processing blocks, and an output module. First, the model improves the pyramid segmentation attention module to fully extract multi-scale features of the image. Second, the reverse residual structure replaces the residual structure in the backbone network, and the attention module is integrated into the reverse residual structure to achieve better multi-scale feature extraction. Finally, the output module consists of an adaptive average pool and a fully connected layer, which convert the aggregated global features into class scores to generate the final output for the classification task. To verify the performance of the proposed model, this study used two commonly used skin disease datasets, ISIC2019 and HAM10000, for validation. The experimental results showed an accuracy of 77.6% on the ISIC2019 dataset and 88.2% on the HAM10000 dataset. External validation data were also evaluated, and the comprehensive results confirmed the effectiveness of the proposed model.

PMID:39972014 | DOI:10.1038/s41598-025-90418-0

Categories: Literature Watch

An extensive experimental analysis for heart disease prediction using artificial intelligence techniques

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 20;15(1):6132. doi: 10.1038/s41598-025-90530-1.

ABSTRACT

The heart is an important organ that plays a crucial role in maintaining life. Unfortunately, heart disease is one of the major causes of mortality globally. Early and accurate detection can significantly improve the situation by enabling preventive measures and personalized healthcare recommendations. Artificial intelligence is emerging as a powerful tool for healthcare applications, particularly in predicting heart diseases. Researchers are actively working on this, but challenges remain in achieving accurate heart disease prediction. Therefore, experimenting with various models to identify the most effective one for heart disease prediction is crucial. This paper addresses that need by conducting an extensive investigation of various models. The proposed research considered 11 feature selection techniques and 21 classifiers for the experiment. The feature selection techniques considered for the research are Information Gain, Chi-Square Test, Fisher Discriminant Analysis (FDA), Variance Threshold, Mean Absolute Difference (MAD), Dispersion Ratio, Relief, LASSO, Random Forest Importance, Linear Discriminant Analysis (LDA), and Principal Component Analysis (PCA). The classifiers considered for the research are Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Gaussian Naïve Bayes (GNB), XGBoost, AdaBoost, Stochastic Gradient Descent (SGD), Gradient Boosting Classifier, Extra Tree Classifier, CatBoost, LightGBM, Multilayer Perceptron (MLP), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional LSTM (BiLSTM), Bidirectional GRU (BiGRU), Convolutional Neural Network (CNN), and a Hybrid Model (CNN, RNN, LSTM, GRU, BiLSTM, BiGRU). Among all the extensive experiments, XGBoost outperformed all others, achieving an accuracy of 0.97, precision of 0.97, sensitivity of 0.98, specificity of 0.98, F1 score of 0.98, and AUC of 0.98.
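Several of the listed filter-style selectors (Variance Threshold, Mean Absolute Difference, Dispersion Ratio) reduce to simple per-feature statistics; an illustrative sketch, not the paper's implementation:

```python
import math

def variance(x):
    """Variance-threshold score: features scoring below a cutoff are dropped."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def mean_absolute_difference(x):
    """MAD score: average absolute deviation from the feature mean."""
    m = sum(x) / len(x)
    return sum(abs(v - m) for v in x) / len(x)

def dispersion_ratio(x):
    """Arithmetic mean / geometric mean (strictly positive features):
    1.0 for a constant feature, larger for more dispersed features."""
    am = sum(x) / len(x)
    gm = math.exp(sum(math.log(v) for v in x) / len(x))
    return am / gm
```

Each scorer is applied per feature column; the lowest-scoring features are discarded before the classifier is trained.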

PMID:39972004 | DOI:10.1038/s41598-025-90530-1

Categories: Literature Watch

Real-time warning method for sand plugging in offshore fracturing wells

Wed, 2025-02-19 06:00

Sci Rep. 2025 Feb 19;15(1):6062. doi: 10.1038/s41598-025-90768-9.

ABSTRACT

Sand plugging during hydraulic fracturing is one of the primary causes of operational failure. Existing methods for identifying sand plugging during fracturing suffer from long processing times, low accuracy, and an inability to provide real-time warnings. Addressing these challenges, this study leverages offshore hydraulic fracturing operational data and reports to propose a novel method for intelligent identification and real-time warning of sand plugging. Initially, we employ an attention mechanism-based Long Short-Term Memory network (Att-LSTM) to establish a real-time pressure prediction model during fracturing, capable of forecasting pressure within 40 s with an accuracy exceeding 92%. Subsequently, we enhance the structure of an attention mechanism-based convolutional Long Short-Term Memory network (Att-CNN-LSTM) to develop a model for identifying sand plugging during fracturing, achieving identification with an error margin of less than 1 min. Finally, by integrating the Att-LSTM and Att-CNN-LSTM networks with transfer learning techniques, we introduce a continual learning approach for sand plugging warning during fracturing operations, significantly improving accuracy and efficiency in sand plugging identification and advancing intelligent decision-making for hydraulic fracturing. These methodologies not only contribute theoretical innovations but also demonstrate substantial practical effectiveness, providing critical technical support and guidance to enhance safety and efficiency in hydraulic fracturing operations.

PMID:39971998 | DOI:10.1038/s41598-025-90768-9

Categories: Literature Watch

TGF-Net: Transformer and gist CNN fusion network for multi-modal remote sensing image classification

Wed, 2025-02-19 06:00

PLoS One. 2025 Feb 19;20(2):e0316900. doi: 10.1371/journal.pone.0316900. eCollection 2025.

ABSTRACT

In the field of earth sciences and remote sensing, the classification and identification of surface materials on Earth has been a significant research area posing considerable challenges. Although deep learning technology has achieved certain results in remote sensing image classification, challenges remain for multimodal remote sensing data classification. In this paper, we propose a fusion network based on a transformer and a gist convolutional neural network (CNN), namely TGF-Net. To minimize the duplication of information in multimodal data, TGF-Net incorporates a feature reconstruction module (FRM) that employs matrix factorization and a self-attention mechanism to decompose multimodal features and evaluate their similarity, enabling the extraction of both distinct and common features. Meanwhile, the transformer-based spectral feature extraction module (TSFEM) was designed to exploit the differing characteristics of remote sensing images while accounting for the ordered sequence of hyperspectral image (HSI) channels. To address the issue of representing the relative positions of spatial targets in synthetic aperture radar (SAR) images, we propose a gist-based spatial feature extraction module (GSFEM). To assess the efficacy and superiority of TGF-Net, we performed experiments on two datasets comprising HSI and SAR data.

PMID:39970154 | DOI:10.1371/journal.pone.0316900

Categories: Literature Watch

EBHOA-EMobileNetV2: a hybrid system based on efficient feature selection and classification for cardiovascular disease diagnosis

Wed, 2025-02-19 06:00

Comput Methods Biomech Biomed Engin. 2025 Feb 19:1-23. doi: 10.1080/10255842.2025.2466081. Online ahead of print.

ABSTRACT

The accurate prediction of cardiovascular disease (CVD), or heart disease, is an essential and challenging task for treating patients efficiently before a heart attack occurs. Many deep learning and machine learning frameworks have been developed recently to predict cardiovascular disease in intelligent healthcare. However, for lack of well-established data and appropriate prediction methodologies, most existing strategies have failed to improve cardiovascular disease prediction accuracy. Motivated by these issues, this paper presents an intelligent healthcare framework based on a deep learning model to detect cardiovascular heart disease. Initially, the proposed system compiles data on heart disease from multiple publicly accessible data sources. To improve the quality of the dataset, effective pre-processing techniques are used: (i) the interquartile range (IQR) method identifies and eliminates outliers; (ii) data standardization handles missing values; and (iii) the K-Means SMOTE oversampling method addresses class imbalance. The most appropriate features of the dataset are then chosen using the Enhanced Binary Grasshopper Optimization Algorithm (EBHOA). Finally, the presence or absence of CVD is predicted using the Enhanced MobileNetV2 (EMobileNetV2) model. Training and evaluation of the proposed approach were conducted using the UCI Heart Disease and Framingham Heart Study datasets. The proposed approach beats current approaches on performance evaluation metrics: for the UCI Heart Disease dataset, it achieves an accuracy of 98.78%, precision of 99%, recall of 99%, and F1 score of 99%; for the Framingham dataset, an accuracy of 99.39%, precision of 99.50%, recall of 99.50%, and F1 score of 99%. The proposed deep learning-based classification model combined with an effective feature selection technique yielded the best results. This innovative method has the potential to enhance the accuracy and consistency of heart disease prediction, which would be advantageous for clinical practice and patient care.
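The IQR outlier step can be illustrated as follows. The 1.5×IQR fence and the linear-interpolation quantile used here are common defaults and an assumption (the paper's exact settings are not given in the abstract):

```python
def iqr_filter(values, k=1.5):
    """Keep only values inside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(values)

    def quantile(q):
        # Linear interpolation between the two nearest order statistics.
        pos = q * (len(s) - 1)
        lo = int(pos)
        frac = pos - lo
        return s[lo] + (s[min(lo + 1, len(s) - 1)] - s[lo]) * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if low <= v <= high]
```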

PMID:39970065 | DOI:10.1080/10255842.2025.2466081

Categories: Literature Watch

A novel dataset for nuclei and tissue segmentation in melanoma with baseline nuclei segmentation and tissue segmentation benchmarks

Wed, 2025-02-19 06:00

Gigascience. 2025 Jan 6;14:giaf011. doi: 10.1093/gigascience/giaf011.

ABSTRACT

BACKGROUND: Melanoma is an aggressive form of skin cancer in which tumor-infiltrating lymphocytes (TILs) are a biomarker for recurrence and treatment response. Manual TIL assessment is prone to interobserver variability, and current deep learning models are either not publicly accessible or have low performance. Deep learning models, however, can provide consistent spatial evaluation of TILs and other immune cell subsets, with the potential for improved prognostic and predictive value. To make the development of these models possible, we created the Panoptic Segmentation of nUclei and tissue in advanced MelanomA (PUMA) dataset and assessed the performance of several state-of-the-art deep learning models. In addition, we show how to further improve model performance using heuristic postprocessing, in which nuclei classes are updated based on their tissue localization.

RESULTS: The PUMA dataset includes 155 primary and 155 metastatic melanoma hematoxylin and eosin-stained regions of interest with nuclei and tissue annotations from a single melanoma referral institution. The Hover-NeXt model, trained on the PUMA dataset, demonstrated the best performance for lymphocyte detection, approaching human interobserver agreement. In addition, heuristic postprocessing of deep learning models improved the detection of noncommon classes, such as epithelial nuclei.

CONCLUSION: The PUMA dataset is the first melanoma-specific dataset that can be used to develop melanoma-specific nuclei and tissue segmentation models. These models can, in turn, be used for prognostic and predictive biomarker development. Incorporating tissue and nuclei segmentation is a step toward improved deep learning nuclei segmentation performance. To support the development of these models, this dataset is used in the PUMA challenge.

PMID:39970004 | DOI:10.1093/gigascience/giaf011

Categories: Literature Watch

Identifying Research Priorities in Digital Education for Health Care: Umbrella Review and Modified Delphi Method Study

Wed, 2025-02-19 06:00

J Med Internet Res. 2025 Feb 19;27:e66157. doi: 10.2196/66157.

ABSTRACT

BACKGROUND: In recent years, the use of digital technology in the education of health care professionals has surged, partly driven by the COVID-19 pandemic. However, there is still a need for focused research to establish evidence of its effectiveness.

OBJECTIVE: This study aimed to define the gaps in the evidence for the efficacy of digital education and to identify priority areas where future research has the potential to contribute to our understanding and use of digital education.

METHODS: We used a 2-stage approach to identify research priorities. First, an umbrella review of the recent literature (published between 2020 and 2023) was performed to identify and build on existing work. Second, expert consensus on the priority research questions was obtained using a modified Delphi method.

RESULTS: A total of 8857 potentially relevant papers were identified. Using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology, we included 217 papers for full review. All papers were either systematic reviews or meta-analyses. A total of 151 research recommendations were extracted from the 217 papers. These were analyzed, recategorized, and consolidated to create a final list of 63 questions. From these, a modified Delphi process with 42 experts was used to produce the top-five rated research priorities: (1) How do we measure the learning transfer from digital education into the clinical setting? (2) How can we optimize the use of artificial intelligence, machine learning, and deep learning to facilitate education and training? (3) What are the methodological requirements for high-quality rigorous studies assessing the outcomes of digital health education? (4) How does the design of digital education interventions (eg, format and modality) in health professionals' education and training curriculum affect learning outcomes? and (5) How should learning outcomes in the field of health professions' digital education be defined and standardized?

CONCLUSIONS: This review provides a prioritized list of research gaps in digital education in health care, which will be of use to researchers, educators, education providers, and funding agencies. Additional proposals are discussed regarding the next steps needed to advance this agenda, aiming to promote meaningful and practical research on the use of digital technologies and drive excellence in health care education.

PMID:39969988 | DOI:10.2196/66157


Categories: Literature Watch

Artificial intelligence in the management of metabolic disorders: a comprehensive review

Wed, 2025-02-19 06:00

J Endocrinol Invest. 2025 Feb 19. doi: 10.1007/s40618-025-02548-x. Online ahead of print.

ABSTRACT

This review explores the significant role of artificial intelligence (AI) in managing metabolic disorders like diabetes, obesity, metabolic dysfunction-associated fatty liver disease (MAFLD), and thyroid dysfunction. AI applications in this context encompass early diagnosis, personalized treatment plans, risk assessment, prevention, and biomarker discovery for early and accurate disease management. This review also delves into techniques involving machine learning (ML), deep learning (DL), natural language processing (NLP), computer vision, and reinforcement learning associated with AI and their application in metabolic disorders. The review also examines the challenges and ethical considerations associated with AI implementation, such as data privacy, model interpretability, and bias mitigation. We have reviewed various AI-based tools utilized for the diagnosis and management of metabolic disorders, such as Idx, Guardian Connect system, and DreaMed for diabetes. Further, the paper emphasizes the potential of AI to revolutionize the management of metabolic disorders through collaborations among clinicians and AI experts, the integration of AI into clinical practice, and the necessity for long-term validation studies. The references provided in the paper cover a range of studies related to AI, ML, personalized medicine, metabolic disorders, and diagnostic tools in healthcare, including research on disease diagnostics, personalized therapy, chronic disease management, and the application of AI in diabetes care and nutrition.

PMID:39969797 | DOI:10.1007/s40618-025-02548-x


Sex estimation with convolutional neural networks using the patella magnetic resonance image slices

Wed, 2025-02-19 06:00

Forensic Sci Med Pathol. 2025 Feb 19. doi: 10.1007/s12024-025-00943-7. Online ahead of print.

ABSTRACT

Sex estimation from bones using morphometric methods requires experienced staff and is time-consuming, which increases the need for automated image analysis. In this study, sex estimation was performed with the EfficientNetB3, MobileNetV2, Visual Geometry Group 16 (VGG16), ResNet50, and DenseNet121 architectures on patellar magnetic resonance images via a developed model. Within the scope of the study, 6710 magnetic resonance sagittal patella image slices of 696 patients (293 males and 403 females) were obtained. The performance of artificial intelligence algorithms was examined through deep learning architectures and the developed classification model. Considering the performance evaluation criteria, the best accuracy result of 88.88% was obtained with the ResNet50 model. In addition, the proposed model was among the best-performing models with an accuracy of 85.70%. When all these results were examined, it was concluded that sex could be estimated successfully from patellar magnetic resonance image (MRI) slices without the use of morphometric methods.

PMID:39969760 | DOI:10.1007/s12024-025-00943-7


Artificial Intelligence Methods for Diagnostic and Decision-Making Assistance in Chronic Wounds: A Systematic Review

Wed, 2025-02-19 06:00

J Med Syst. 2025 Feb 19;49(1):29. doi: 10.1007/s10916-025-02153-8.

ABSTRACT

Chronic wounds, which take over four weeks to heal, are a major global health issue linked to conditions such as diabetes, venous insufficiency, arterial diseases, and pressure ulcers. These wounds cause pain, reduce quality of life, and impose significant economic burdens. This systematic review explores the impact of technological advancements on the diagnosis of chronic wounds, focusing on how computational methods in wound image and data analysis improve diagnostic precision and patient outcomes. A literature search was conducted in databases including ACM, IEEE, PubMed, Scopus, and Web of Science, covering studies from 2013 to 2023. The focus was on articles applying complex computational techniques to analyze chronic wound images and clinical data. Exclusion criteria were non-image samples, review articles, and non-English or non-Spanish texts. From 2,791 articles identified, 93 full-text studies were selected for final analysis. The review identified significant advancements in tissue classification, wound measurement, segmentation, prediction of wound aetiology, risk indicators, and healing potential. The use of image-based and data-driven methods has proven to enhance diagnostic accuracy and treatment efficiency in chronic wound care. The integration of technology into chronic wound diagnosis has shown a transformative effect, improving diagnostic capabilities and patient care while reducing healthcare costs. Continued research and innovation in computational techniques are essential to unlock their full potential in managing chronic wounds effectively.

PMID:39969674 | DOI:10.1007/s10916-025-02153-8


Exploring a decade of deep learning in dentistry: A comprehensive mapping review

Wed, 2025-02-19 06:00

Clin Oral Investig. 2025 Feb 19;29(2):143. doi: 10.1007/s00784-025-06216-5.

ABSTRACT

OBJECTIVES: Artificial Intelligence (AI), particularly deep learning, has significantly impacted healthcare, including dentistry, by improving diagnostics, treatment planning, and prognosis prediction. This systematic mapping review explores the current applications of deep learning in dentistry, offering a comprehensive overview of trends, models, and their clinical significance.

MATERIALS AND METHODS: Following a structured methodology, relevant studies published from January 2012 to September 2023 were identified through database searches in PubMed, Scopus, and Embase. Key data, including clinical purpose, deep learning tasks, model architectures, and data modalities, were extracted for qualitative synthesis.

RESULTS: From 21,242 screened studies, 1,007 were included. Of these, 63.5% targeted diagnostic tasks, primarily with convolutional neural networks (CNNs). Classification (43.7%) and segmentation (22.9%) were the main methods, and imaging data, such as cone-beam computed tomography and orthopantomograms, were used in 84.4% of cases. Most studies (95.2%) applied fully supervised learning, emphasizing the need for annotated data. Pathology (21.5%), radiology (17.5%), and orthodontics (10.2%) were prominent fields, with 24.9% of studies relating to more than one specialty.

CONCLUSION: This review explores the advancements in deep learning in dentistry, particularly for diagnostics, and identifies areas for further improvement. While CNNs have been used successfully, it is essential to explore emerging model architectures, learning approaches, and ways to obtain diverse and reliable data. Furthermore, fostering trust among all stakeholders by advancing explainable AI and addressing ethical considerations is crucial for transitioning AI from research to clinical practice.

CLINICAL RELEVANCE: This review offers a comprehensive overview of a decade of deep learning in dentistry, showcasing its significant growth in recent years. By mapping its key applications and identifying research trends, it provides a valuable guide for future studies and highlights emerging opportunities for advancing AI-driven dental care.

PMID:39969623 | DOI:10.1007/s00784-025-06216-5

