Deep learning

Erratum: Volumetric Breast Density Estimation From Three-Dimensional Reconstructed Digital Breast Tomosynthesis Images Using Deep Learning

Tue, 2025-01-14 06:00

JCO Clin Cancer Inform. 2025 Jan;9:e2400325. doi: 10.1200/CCI-24-00325. Epub 2025 Jan 14.

NO ABSTRACT

PMID:39807853 | DOI:10.1200/CCI-24-00325

Categories: Literature Watch

Sleep stages classification based on feature extraction from music of brain

Tue, 2025-01-14 06:00

Heliyon. 2024 Dec 12;11(1):e41147. doi: 10.1016/j.heliyon.2024.e41147. eCollection 2025 Jan 15.

ABSTRACT

Sleep stage classification is one of the essential factors in sleep disorder diagnosis, which can contribute to treating many functional diseases or preventing primary cognitive risks in daily activities. In this study, a novel method of mapping EEG signals to music is proposed to classify sleep stages. A total of 4752 selected 1-min sleep records extracted from the CAP Sleep database are used as the statistical population for this assessment. In this process, the tempo and scale parameters are first extracted from the signal according to the rules of music; next, by applying them and changing the dominant frequency of the pre-processed single-channel EEG signal, a sequence of musical notes is produced. A total of 19 features are extracted from the sequence of notes and fed into feature reduction algorithms; the selected features are applied to a two-stage classification structure: 1) a 5-class classification (merging S1 and REM; S2; S3; S4; W) achieves an accuracy of 89.5 % (CAP Sleep database), 85.9 % (Sleep-EDF database), and 86.5 % (Sleep-EDF Expanded database), and 2) a 2-class classification (S1 vs. REM) achieves an accuracy of 90.1 % (CAP Sleep database), 88.9 % (Sleep-EDF database), and 90.1 % (Sleep-EDF Expanded database). The overall percentages of correct classification for 6 sleep stages are 88.13 %, 84.3 %, and 86.1 % for those databases, respectively. The other objective of this study is to present a new single-channel EEG sonification method. The classification accuracy obtained is higher than or comparable to that of contemporary methods, which demonstrates the efficiency of the proposed approach.
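The pipeline above hinges on tracking the dominant frequency of the pre-processed EEG signal before mapping it to notes. As an illustration only (the paper's exact sonification rules are not reproduced here, and the function name is hypothetical), a minimal pure-Python sketch of dominant-frequency extraction via a naive DFT:

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest DFT magnitude.

    Naive O(N^2) DFT over the positive-frequency bins; adequate for
    short illustrative epochs, not for production-scale processing.
    """
    n = len(signal)
    best_bin, best_mag = 0, -1.0
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * fs / n

# A 10 Hz sine sampled at 100 Hz should peak at the 10 Hz bin.
fs = 100
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
```

In a real system one would use an FFT; the naive form is kept only to make the frequency-bin arithmetic explicit.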

PMID:39807512 | PMC:PMC11728888 | DOI:10.1016/j.heliyon.2024.e41147

Categories: Literature Watch

AxonFinder: Automated segmentation of tumor innervating neuronal fibers

Tue, 2025-01-14 06:00

Heliyon. 2024 Dec 15;11(1):e41209. doi: 10.1016/j.heliyon.2024.e41209. eCollection 2025 Jan 15.

ABSTRACT

Neurosignaling is increasingly recognized as a critical factor in cancer progression, where neuronal innervation of primary tumors contributes to the disease's advancement. This study focuses on segmenting individual axons within the prostate tumor microenvironment, which have been challenging to detect and analyze due to their irregular morphologies. We present AxonFinder, a novel deep learning-based approach for the automated segmentation of axons that leverages a U-Net model with a ResNet-101 encoder and is based on a multiplexed imaging approach. Utilizing a dataset of whole-slide images from low-, intermediate-, and high-risk prostate cancer patients, we manually annotated axons to train our model, achieving significant accuracy in detecting axonal structures that were previously hard to segment. Our method achieves high performance, with a validation F1-score of 94 % and an IoU of 90.78 %. In addition, morphometric analysis shows strong alignment between manual annotations and automated segmentation, with nerve length and tortuosity closely matching manual measurements. Furthermore, our analysis includes a comprehensive assessment of axon density and morphological features across CAPRA-S prostate cancer risk categories, revealing a significant decline in axon density with higher CAPRA-S risk scores. Our findings suggest the potential utility of neuronal markers in the prognostic assessment of prostate cancer, aiding the pathologist's assessment of tumor sections and advancing our understanding of neurosignaling in the tumor microenvironment.
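Among the morphometric features compared above is tortuosity. A common definition (assumed here, since the abstract does not state the paper's exact formula) is the arc-chord ratio of the traced fiber, sketched below for a 2-D polyline:

```python
import math

def polyline_length(points):
    """Total traced length of a fiber given as a list of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def tortuosity(points):
    """Arc-chord ratio: traced fiber length over end-to-end distance.

    Always >= 1; exactly 1 for a perfectly straight fiber.
    """
    chord = math.dist(points[0], points[-1])
    return polyline_length(points) / chord

straight = [(0, 0), (1, 0), (2, 0)]   # tortuosity 1.0
bent = [(0, 0), (1, 1), (2, 0)]       # tortuosity sqrt(2)
```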

PMID:39807499 | PMC:PMC11728976 | DOI:10.1016/j.heliyon.2024.e41209

Categories: Literature Watch

An empirical study of LLaMA3 quantization: from LLMs to MLLMs

Tue, 2025-01-14 06:00

Vis Intell. 2024;2(1):36. doi: 10.1007/s44267-024-00070-x. Epub 2024 Dec 30.

ABSTRACT

The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language model (LLM) families and a popular backbone for multi-modal large language models (MLLMs), widely used in computer vision and natural language understanding tasks. In particular, LLaMA3 models were recently released and have achieved impressive performance across various domains with super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-constrained scenarios, we explore LLaMA3's capabilities when quantized to low bit-widths. This exploration can provide new insights and expose new challenges for the low-bit quantization of LLaMA3 and other future LLMs, especially in addressing the performance degradation that arises in LLM compression. Specifically, we comprehensively evaluate 10 existing post-training quantization and LoRA fine-tuning (LoRA-FT) methods on LLaMA3 at 1-8 bits and on various datasets to reveal the low-bit quantization performance of LLaMA3. To uncover the capabilities of low-bit quantized MLLMs, we assessed the performance of the LLaMA3-based LLaVA-Next-8B model under 2-4 ultra-low bit-widths with post-training quantization methods. Our experimental results indicate that LLaMA3 still suffers from non-negligible degradation in linguistic and visual contexts, particularly under ultra-low bit-widths. This highlights the significant performance gap at low bit-widths that needs to be addressed in future developments. We expect this empirical study to prove valuable in advancing future models, driving LLMs and MLLMs to higher accuracy at lower bit-widths to enhance practicality.
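To make the degradation mechanism concrete: the simplest post-training scheme, symmetric uniform round-to-nearest quantization, maps weights onto a small grid of levels, and the reconstruction error grows sharply as the bit-width shrinks. A minimal sketch (illustrative only; the paper evaluates far more sophisticated methods):

```python
def quantize_dequantize(weights, bits):
    """Symmetric uniform round-to-nearest post-training quantization:
    map floats onto 2**(bits-1) - 1 signed levels, then back to floats."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

def mean_sq_error(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

w = [0.9, -0.5, 0.31, -0.07, 0.002]
err8 = mean_sq_error(w, quantize_dequantize(w, 8))  # near-lossless
err2 = mean_sq_error(w, quantize_dequantize(w, 2))  # ultra-low bit: large error
```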

PMID:39807379 | PMC:PMC11728678 | DOI:10.1007/s44267-024-00070-x

Categories: Literature Watch

Advances in modeling cellular state dynamics: integrating omics data and predictive techniques

Tue, 2025-01-14 06:00

Anim Cells Syst (Seoul). 2025 Jan 10;29(1):72-83. doi: 10.1080/19768354.2024.2449518. eCollection 2025.

ABSTRACT

Dynamic modeling of cellular states has emerged as a pivotal approach for understanding complex biological processes such as cell differentiation, disease progression, and tissue development. This review provides a comprehensive overview of current approaches for modeling cellular state dynamics, focusing on techniques ranging from dynamic or static biomolecular network models to deep learning models. We highlight how these approaches, integrated with various omics data such as transcriptomics and single-cell RNA sequencing, can be used to capture and predict cellular behavior and transitions. We also discuss applications of these modeling approaches in predicting gene knockout effects, designing targeted interventions, and simulating organ development. This review emphasizes the importance of selecting appropriate modeling strategies based on scalability and resolution requirements, which vary according to the complexity and size of the biological systems under study. By evaluating the strengths, limitations, and recent advancements of these methodologies, we aim to guide future research in developing more robust and interpretable models for understanding and manipulating cellular state dynamics in various biological contexts, ultimately advancing therapeutic strategies and precision medicine.
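One of the simplest biomolecular network models mentioned in such reviews is the Boolean network, where each gene is on or off and states are updated synchronously until an attractor (a stable state or cycle) is reached. A toy sketch under that assumption, with hypothetical gene names and rules:

```python
def step(state, rules):
    """One synchronous update: each gene's next value is a Boolean
    function of the current state."""
    return {gene: rule(state) for gene, rule in rules.items()}

def attractor(state, rules, max_steps=100):
    """Iterate until a previously seen state recurs; return that state."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return state
        seen.append(state)
        state = step(state, rules)
    return state

# Toy network: gene A is switched on by itself or by B; B maintains itself.
rules = {"A": lambda s: s["A"] or s["B"], "B": lambda s: s["B"]}
fixed = attractor({"A": 0, "B": 1}, rules)  # converges to the A-on, B-on state
```

Attractors in such models are often interpreted as stable cell states, which is what links this formalism to cellular state dynamics.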

PMID:39807350 | PMC:PMC11727055 | DOI:10.1080/19768354.2024.2449518

Categories: Literature Watch

Assessment of the Accuracy of a Deep Learning Algorithm- and Video-based Motion Capture System in Estimating Snatch Kinematics

Tue, 2025-01-14 06:00

Int J Exerc Sci. 2024 Dec 1;17(1):1629-1647. doi: 10.70252/PRVV4165. eCollection 2024.

ABSTRACT

In weightlifting, quantitative kinematic analysis is essential for evaluating snatch performance. While marker-based (MB) approaches are commonly used, they are impractical for training or competitions. Markerless video-based (VB) systems utilizing deep learning-based pose estimation algorithms could address this issue. This study assessed the comparability and applicability of VB systems in obtaining snatch kinematics by comparing the outcomes to an MB reference system. Twenty-one weightlifters (15 male, 6 female) performed 2-3 snatches at 65%, 75%, and 80% of their one-repetition maximum. Snatch kinematics were analyzed using an MB (Vicon Nexus) and a VB (Contemplas with Theia3D) system. Analysis of 131 trials revealed that corresponding lower-limb joint center positions of the two systems differed on average by 4.7 ± 1.2 cm, and upper-limb joint centers by 5.7 ± 1.5 cm. VB and MB lower-limb joint angles showed the highest agreement in the frontal plane (root-mean-square difference (RMSD): 11.2 ± 5.9°), followed by the sagittal plane (RMSD: 13.6 ± 4.7°). Statistical Parametric Mapping analysis revealed significant differences throughout most of the movement for all degrees of freedom. Maximum extension angles and velocities during the second pull also displayed significant differences (p < .05) for the lower limbs. Our data showed significant differences in estimated kinematics between the two systems, indicating a lack of comparability. These differences are likely due to differing models and assumptions rather than measurement accuracy. However, given the rapid advancement of neural network-based approaches, VB systems hold promise as a suitable alternative to MB systems in weightlifting analysis.
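The agreement statistic reported above, RMSD between joint-angle curves, is straightforward to compute once the two systems' curves are sampled at the same instants. A minimal sketch (illustrative values, not the study's data):

```python
import math

def rmsd(series_a, series_b):
    """Root-mean-square difference between two joint-angle curves
    (degrees) sampled at the same time points."""
    assert len(series_a) == len(series_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(series_a, series_b))
                     / len(series_a))

mb = [10.0, 20.0, 30.0]  # marker-based angles at three instants
vb = [12.0, 18.0, 33.0]  # video-based angles at the same instants
```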

PMID:39807293 | PMC:PMC11728585 | DOI:10.70252/PRVV4165

Categories: Literature Watch

Glomerular and Nephron Size and Kidney Disease Outcomes: A Comparison of Manual Versus Deep Learning Methods in Kidney Pathology

Tue, 2025-01-14 06:00

Kidney Med. 2024 Nov 28;7(1):100939. doi: 10.1016/j.xkme.2024.100939. eCollection 2025 Jan.

NO ABSTRACT

PMID:39807248 | PMC:PMC11728938 | DOI:10.1016/j.xkme.2024.100939

Categories: Literature Watch

Deep learning radiomics analysis for prediction of survival in patients with unresectable gastric cancer receiving immunotherapy

Tue, 2025-01-14 06:00

Eur J Radiol Open. 2024 Dec 19;14:100626. doi: 10.1016/j.ejro.2024.100626. eCollection 2025 Jun.

ABSTRACT

OBJECTIVE: Immunotherapy has become an option for the first-line therapy of advanced gastric cancer (GC), with improved survival. Our study aimed to investigate unresectable GC from an imaging perspective combined with clinicopathological variables to identify patients who were most likely to benefit from immunotherapy.

METHOD: Patients with unresectable GC who were consecutively treated with immunotherapy at two different medical centers of Chinese PLA General Hospital were included and divided into the training and validation cohorts, respectively. A deep learning neural network, using a multimodal ensemble approach based on CT imaging data before immunotherapy, was trained in the training cohort to predict survival, and an internal validation cohort was constructed to select the optimal ensemble model. Data from another cohort were used for external validation. The area under the receiver operating characteristic curve was analyzed to evaluate performance in predicting survival. Detailed clinicopathological data and peripheral blood prior to immunotherapy were collected for each patient. Univariate and multivariable logistic regression analysis of imaging models and clinicopathological variables was also applied to identify the independent predictors of survival. A nomogram based on multivariable logistic regression was constructed.
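The discrimination metric used above, the area under the receiver operating characteristic curve, can be computed without tracing the curve via its rank (Mann-Whitney) formulation. A minimal sketch with toy labels and scores:

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    outscores a randomly chosen negative case; ties count one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

score = auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1])  # 3 of 4 pairs ranked correctly
```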

RESULT: A total of 79 GC patients in the training cohort and 97 patients in the external validation cohort were enrolled in this study. A multi-model ensemble approach was applied to train a model to predict the 1-year survival of GC patients. Compared to individual models, the ensemble model showed improved performance metrics in both the internal and external validation cohorts. There was a significant difference in overall survival (OS) between patients stratified by the imaging model at the optimal cutoff score of 0.5 (HR = 0.20, 95 % CI: 0.10-0.37, P < 0.001). Multivariate Cox regression analysis revealed that the imaging model, PD-L1 expression, and the lung immune prognostic index were independent prognostic factors for OS. We combined these variables and built a nomogram, whose C-index was 0.85 and 0.78 in the training and validation cohorts, respectively.

CONCLUSION: The deep learning model in combination with several clinical factors showed predictive value for survival in patients with unresectable GC receiving immunotherapy.

PMID:39807092 | PMC:PMC11728962 | DOI:10.1016/j.ejro.2024.100626

Categories: Literature Watch

A semi-supervised deep neuro-fuzzy iterative learning system for automatic segmentation of hippocampus brain MRI

Tue, 2025-01-14 06:00

Math Biosci Eng. 2024 Dec 11;21(12):7830-7853. doi: 10.3934/mbe.2024344.

ABSTRACT

The hippocampus is a small yet intricate seahorse-shaped structure located deep within the brain's medial temporal lobe. It is a crucial component of the limbic system, which is responsible for regulating emotions, memory, and spatial navigation. This research focuses on automatic hippocampus segmentation from Magnetic Resonance (MR) images of the human head with high accuracy and low false-positive and false-negative rates; this segmentation technique is significantly faster than the manual segmentation used in clinics. Unlike existing approaches such as UNet and Convolutional Neural Networks (CNNs), the proposed algorithm generates an image similar to a real image by learning the distribution much more quickly through the semi-supervised iterative learning algorithm of the Deep Neuro-Fuzzy (DNF) technique. To assess its effectiveness, the proposed segmentation technique was evaluated on a large dataset of 18,900 images from Kaggle, and the results were compared with those of existing methods. Based on the results reported in the experimental section, the proposed Semi-Supervised Deep Neuro-Fuzzy Iterative Learning System (SS-DNFIL) achieved a Dice coefficient of 0.97, a Jaccard coefficient of 0.93, a sensitivity (true-positive rate) of 0.95, a specificity (true-negative rate) of 0.97, a false-positive rate of 0.09, and a false-negative rate of 0.08 when compared to existing approaches. Thus, the proposed technique outperforms existing ones and produces the desired result, enabling accurate diagnosis at the earliest stage to save human lives and increase life span.
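The headline metrics above, the Dice and Jaccard coefficients, are standard overlap scores for binary segmentation masks. A minimal sketch computing both from flat 0/1 mask sequences (toy masks, not the study's data):

```python
def dice_jaccard(pred, truth):
    """Overlap scores for binary masks given as flat 0/1 sequences.

    Dice    = 2|A∩B| / (|A| + |B|)
    Jaccard = |A∩B| / |A∪B|
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    dice = 2 * inter / (a + b)
    jaccard = inter / (a + b - inter)
    return dice, jaccard

# Toy 4-pixel masks: 2 overlapping foreground pixels out of 3 vs. 2.
d, j = dice_jaccard([1, 1, 0, 1], [1, 0, 0, 1])
```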

PMID:39807055 | DOI:10.3934/mbe.2024344

Categories: Literature Watch

Enhanced Pneumonia Detection in Chest X-Rays Using Hybrid Convolutional and Vision Transformer Networks

Tue, 2025-01-14 06:00

Curr Med Imaging. 2025 Jan 9. doi: 10.2174/0115734056326685250101113959. Online ahead of print.

ABSTRACT

OBJECTIVE: The objective of this research is to enhance pneumonia detection in chest X-rays by leveraging a novel hybrid deep learning model that combines Convolutional Neural Networks (CNNs) with modified Swin Transformer blocks. This study aims to significantly improve diagnostic accuracy, reduce misclassifications, and provide a robust, deployable solution for underdeveloped regions where access to conventional diagnostics and treatment is limited.

METHODS: The study developed a hybrid model architecture integrating CNNs with modified Swin Transformer blocks to work seamlessly within the same model. The CNN layers perform initial feature extraction, capturing local patterns within the images. At the same time, the modified Swin Transformer blocks handle long-range dependencies and global context through window-based self-attention mechanisms. Preprocessing steps included resizing images to 224x224 pixels and applying Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance image features. Data augmentation techniques, such as horizontal flipping, rotation, and zooming, were utilized to prevent overfitting and ensure model robustness. Hyperparameter optimization was conducted using Optuna, employing Bayesian optimization (Tree-structured Parzen Estimator) to fine-tune key parameters of both the CNN and Swin Transformer components, ensuring optimal model performance.
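The window-based self-attention described above restricts attention to non-overlapping local windows, which is what keeps the Swin-style blocks cheap. A minimal index-bookkeeping sketch (illustrative; the actual model operates on feature maps, not raw pixel indices):

```python
def partition_windows(height, width, window):
    """Return, per non-overlapping window, the flat indices of the
    positions that attend to each other in window-based self-attention."""
    assert height % window == 0 and width % window == 0
    windows = []
    for wy in range(0, height, window):
        for wx in range(0, width, window):
            windows.append([(wy + dy) * width + (wx + dx)
                            for dy in range(window) for dx in range(window)])
    return windows

# A 224x224 map with 7x7 windows yields (224/7)^2 = 1024 windows of 49 tokens,
# so each attention matrix is 49x49 instead of one global 50176x50176 matrix.
wins = partition_windows(224, 224, 7)
```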

RESULTS: The proposed hybrid model was trained and validated on a dataset provided by the Guangzhou Women and Children's Medical Center. The model achieved an overall accuracy of 98.72% and a loss of 0.064 on an unseen dataset, significantly outperforming a baseline CNN model. Detailed performance metrics indicated a precision of 0.9738 for the normal class and 1.0000 for the pneumonia class, with an overall F1-score of 0.9872. The hybrid model consistently outperformed the CNN model across all performance metrics, demonstrating higher accuracy, precision, recall, and F1-score. Confusion matrices revealed high sensitivity and specificity with minimal misclassifications.

CONCLUSION: The proposed hybrid CNN-ViT model, which integrates modified Swin Transformer blocks within the CNN architecture, provides a significant advancement in pneumonia detection by effectively capturing both local and global features within chest X-ray images. The modifications to the Swin Transformer blocks enable them to work seamlessly with the CNN layers, enhancing the model's ability to understand complex visual patterns and dependencies. This results in superior classification performance. The lightweight design of the model eliminates the need for extensive hardware, facilitating easy deployment in resource-constrained settings. This innovative approach not only improves pneumonia diagnosis but also has the potential to enhance patient outcomes and support healthcare providers in underdeveloped regions. Future research will focus on further refining the model architecture, incorporating more advanced image processing techniques, and exploring explainable AI methods to provide deeper insights into the model's decision-making process.

PMID:39806960 | DOI:10.2174/0115734056326685250101113959

Categories: Literature Watch

Comparing prediction accuracy for 30-day readmission following primary total knee arthroplasty: the ACS-NSQIP risk calculator versus a novel artificial neural network model

Mon, 2025-01-13 06:00

Knee Surg Relat Res. 2025 Jan 13;37(1):3. doi: 10.1186/s43019-024-00256-z.

ABSTRACT

BACKGROUND: Unplanned readmission, a measure of surgical quality, occurs after 4.8% of primary total knee arthroplasties (TKA). Although the prediction of individualized readmission risk may inform appropriate preoperative interventions, current predictive models, such as the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) surgical risk calculator (SRC), have limited utility. This study aims to compare the predictive accuracy of the SRC with a novel artificial neural network (ANN) algorithm for 30-day readmission after primary TKA, using the same set of clinical variables from a large national database.

METHODS: Patients undergoing primary TKA between 2013 and 2020 were identified from the ACS-NSQIP database and randomly stratified into training and validation cohorts. The ANN was developed using data from the training cohort with fivefold cross-validation performed five times. ANN and SRC performance were subsequently evaluated in the distinct validation cohort, and predictive performance was compared on the basis of discrimination, calibration, accuracy, and clinical utility.

RESULTS: The overall cohort consisted of 365,394 patients (training: N = 362,559; validation: N = 2,835), with 11,392 (3.1%) readmitted within 30 days. While the ANN demonstrated good discrimination and calibration in the validation cohort (area under the curve (AUC) = 0.72, slope = 1.32, intercept = -0.09), the SRC demonstrated poor discrimination (AUC = 0.55) and underestimated readmission risk (slope = -0.21, intercept = 0.04). Although both models possessed similar accuracy (Brier score: ANN = 0.03; SRC = 0.02), only the ANN demonstrated a higher net benefit than intervening in all or no patients on decision curve analysis. The strongest predictors of readmission were body mass index (> 33.5 kg/m2), age (> 69 years), and male sex.
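The accuracy measure quoted above, the Brier score, is simply the mean squared difference between predicted probability and the observed 0/1 outcome, so it rewards calibration as well as discrimination. A minimal sketch with toy values:

```python
def brier(labels, probs):
    """Brier score: mean squared gap between predicted probability and
    the 0/1 outcome; lower is better (0 = perfect)."""
    return sum((p - y) ** 2 for y, p in zip(labels, probs)) / len(labels)

# Four toy patients: two readmitted (1), two not (0).
b = brier([1, 0, 1, 0], [0.9, 0.2, 0.6, 0.4])
```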

CONCLUSIONS: This study demonstrates the superior predictive ability and potential clinical utility of the ANN over the conventional SRC when constrained to the same variables. By identifying the most important predictors of readmission following TKA, our findings may assist in the development of novel clinical decision support tools, potentially improving preoperative counseling and postoperative monitoring practices in at-risk patients.

PMID:39806502 | DOI:10.1186/s43019-024-00256-z

Categories: Literature Watch

Effect of flipped classroom method on the reflection ability in nursing students in the professional ethics course; Solomon four-group design

Mon, 2025-01-13 06:00

BMC Med Educ. 2025 Jan 13;25(1):56. doi: 10.1186/s12909-024-06556-y.

ABSTRACT

BACKGROUND AND PURPOSE: The purpose of reflection in the learning process is to create meaningful and deep learning. Given the importance of active, student-centered learning methods and the necessity of learners' participation in the education process, the present study investigated the effect of the flipped classroom teaching method on the reflection ability of nursing students in the professional ethics course.

STUDY METHOD: This quasi-experimental study used Solomon's four-group design. The statistical population included all nursing students taking the professional ethics course at Kermanshah University of Medical Sciences. The study instrument was a 26-item questionnaire with acceptable validity and reliability. A sample of 80 nursing students was selected by simple random sampling and divided into four groups: 1) experimental group 1, 2) experimental group 2, 3) control group 1, and 4) control group 2. The collected data were analyzed in SPSS software using descriptive statistics, two-way analysis of variance, and analysis of covariance.

FINDINGS: The four groups did not differ significantly in gender composition (p = 0.599). There was no significant difference between the control and experimental groups in any of the 5 reflection components in the pre-test. A significant difference was observed between the reflection scores of the experimental and control groups.

CONCLUSION: Given the controversial issues addressed in the professional ethics course, this method can be effective in promoting deep learning among students.

PMID:39806386 | DOI:10.1186/s12909-024-06556-y

Categories: Literature Watch

Optimizing hip MRI: enhancing image quality and elevating inter-observer consistency using deep learning-powered reconstruction

Mon, 2025-01-13 06:00

BMC Med Imaging. 2025 Jan 13;25(1):17. doi: 10.1186/s12880-025-01554-y.

ABSTRACT

BACKGROUND: Conventional hip joint MRI scans necessitate lengthy scan durations, posing challenges for patient comfort and clinical efficiency. Previously, accelerated imaging techniques were constrained by a trade-off between noise and resolution. Leveraging deep learning-based reconstruction (DLR) holds the potential to mitigate scan time without compromising image quality.

METHODS: We enrolled a cohort of sixty patients who underwent DL-MRI, conventional MRI, and No-DL MRI examinations to evaluate image quality. Key metrics considered in the assessment included scan duration, overall image quality, quantitative assessments of Relative Signal-to-Noise Ratio (rSNR), Relative Contrast-to-Noise Ratio (rCNR), and diagnostic efficacy. Two experienced radiologists independently assessed image quality using a 5-point scale (5 indicating the highest quality). To gauge interobserver agreement for the assessed pathologies across image sets, we employed weighted kappa statistics. Additionally, the Wilcoxon signed rank test was employed to compare image quality and quantitative rSNR and rCNR measurements.

RESULTS: Scan time was significantly reduced with DL-MRI, an approximate 66.5% reduction. DL-MRI consistently exhibited superior image quality in both coronal T2WI and axial T2WI when compared to both conventional MRI (p < 0.01) and No-DL MRI (p < 0.01). Interobserver agreement was robust, with kappa values exceeding 0.735. For rSNR, coronal fat-saturated (FS) T2WI and axial FS T2WI in DL-MRI consistently outperformed No-DL MRI, with statistical significance (p < 0.01) in all cases. Similarly, rCNR showed significant improvements (p < 0.01) in coronal FS T2WI of DL-MRI compared to No-DL MRI. Importantly, our findings indicated that DL-MRI demonstrated diagnostic performance comparable to conventional MRI.
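The quantitative metrics above build on standard SNR and CNR definitions; the "relative" versions compare the DL reconstruction against a reference series. A minimal sketch under common definitions (mean ROI signal over background noise SD; the exact ROI protocol is an assumption, as the abstract does not specify it):

```python
from statistics import mean, stdev

def snr(roi, background):
    """Signal-to-noise ratio: mean ROI signal over background SD."""
    return mean(roi) / stdev(background)

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio: absolute ROI mean difference over background SD."""
    return abs(mean(roi_a) - mean(roi_b)) / stdev(background)

def relative(metric_dl, metric_ref):
    """Relative metric (e.g., rSNR): DL-series value over reference value."""
    return metric_dl / metric_ref

# Toy pixel samples: a bright ROI, a darker ROI, and a background patch.
s = snr([10, 12, 11], [0, 1, 2])
c = cnr([10, 12, 11], [4, 5, 6], [0, 1, 2])
```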

CONCLUSION: Integrating deep learning-based reconstruction methods into standard clinical workflows holds promise for accelerating image acquisition, enhancing image clarity, and increasing patient throughput, thereby optimizing diagnostic efficiency.

TRIAL REGISTRATION: Retrospectively registered.

PMID:39806303 | DOI:10.1186/s12880-025-01554-y

Categories: Literature Watch

MDFGNN-SMMA: prediction of potential small molecule-miRNA associations based on multi-source data fusion and graph neural networks

Mon, 2025-01-13 06:00

BMC Bioinformatics. 2025 Jan 13;26(1):13. doi: 10.1186/s12859-025-06040-4.

ABSTRACT

BACKGROUND: MicroRNAs (miRNAs) are pivotal in the initiation and progression of complex human diseases and have been identified as targets for small molecule (SM) drugs. However, the expensive and time-intensive characteristics of conventional experimental techniques for identifying SM-miRNA associations highlight the necessity for efficient computational methodologies in this field.

RESULTS: In this study, we proposed a deep learning method called Multi-source Data Fusion and Graph Neural Networks for Small Molecule-MiRNA Association (MDFGNN-SMMA) to predict potential SM-miRNA associations. Firstly, MDFGNN-SMMA extracted features of Atom Pairs fingerprints and Molecular ACCess System fingerprints to derive fusion feature vectors for small molecules (SMs). The K-mer features were employed to generate the initial feature vectors for miRNAs. Secondly, cosine similarity measures were computed to construct the adjacency matrices for SMs and miRNAs, respectively. Thirdly, these feature vectors and adjacency matrices were input into a model comprising GAT and GraphSAGE, which were utilized to generate the final feature vectors for SMs and miRNAs. Finally, the averaged final feature vectors were utilized as input for a multilayer perceptron to predict the associations between SMs and miRNAs.
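Two of the building blocks above, K-mer feature vectors for miRNAs and cosine similarity for the adjacency matrices, are easy to make concrete. A minimal sketch with a toy RNA alphabet (illustrative only; the paper's exact K and normalization are not specified in the abstract):

```python
from itertools import product
from math import sqrt

def kmer_vector(seq, k=3, alphabet="ACGU"):
    """Count vector over all alphabet**k possible k-mers of an RNA sequence."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return [counts[m] for m in kmers]

def cosine(u, v):
    """Cosine similarity between two count vectors (used to build
    the similarity-based adjacency matrices)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

v1 = kmer_vector("ACGUACGU")
v2 = kmer_vector("UGCAUGCA")  # shares no 3-mer with the first sequence
```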

CONCLUSIONS: The performance of MDFGNN-SMMA was assessed using 10-fold cross-validation, demonstrating performance superior to four state-of-the-art models in terms of both AUC and AUPR. Moreover, the experimental results on an independent test set confirmed the model's generalization capability. Additionally, the efficacy of MDFGNN-SMMA was substantiated through three case studies. The findings indicated that among the top 50 predicted miRNAs associated with Cisplatin, 5-Fluorouracil, and Doxorubicin, 42, 36, and 36 miRNAs, respectively, were corroborated by existing literature and the RNAInter database.

PMID:39806287 | DOI:10.1186/s12859-025-06040-4

Categories: Literature Watch

Diagnosis and prognosis of melanoma from dermoscopy images using machine learning and deep learning: a systematic literature review

Mon, 2025-01-13 06:00

BMC Cancer. 2025 Jan 13;25(1):75. doi: 10.1186/s12885-024-13423-y.

ABSTRACT

BACKGROUND: Melanoma is a highly aggressive skin cancer, where early and accurate diagnosis is crucial to improve patient outcomes. Dermoscopy, a non-invasive imaging technique, aids in melanoma detection but can be limited by subjective interpretation. Recently, machine learning and deep learning techniques have shown promise in enhancing diagnostic precision by automating the analysis of dermoscopy images.

METHODS: This systematic review examines recent advancements in machine learning (ML) and deep learning (DL) applications for melanoma diagnosis and prognosis using dermoscopy images. We conducted a thorough search across multiple databases, ultimately reviewing 34 studies published between 2016 and 2024. The review covers a range of model architectures, including DenseNet and ResNet, and discusses datasets, methodologies, and evaluation metrics used to validate model performance.

RESULTS: Our results highlight that certain deep learning architectures, such as DenseNet and DCNN demonstrated outstanding performance, achieving over 95% accuracy on the HAM10000, ISIC and other datasets for melanoma detection from dermoscopy images. The review provides insights into the strengths, limitations, and future research directions of machine learning and deep learning methods in melanoma diagnosis and prognosis. It emphasizes the challenges related to data diversity, model interpretability, and computational resource requirements.

CONCLUSION: This review underscores the potential of machine learning and deep learning methods to transform melanoma diagnosis through improved diagnostic accuracy and efficiency. Future research should focus on creating accessible, large datasets and enhancing model interpretability to increase clinical applicability. By addressing these areas, machine learning and deep learning models could play a central role in advancing melanoma diagnosis and patient care.

PMID:39806282 | DOI:10.1186/s12885-024-13423-y

Categories: Literature Watch

A novel deep learning-based pipeline architecture for pulp stone detection on panoramic radiographs

Mon, 2025-01-13 06:00

Oral Radiol. 2025 Jan 14. doi: 10.1007/s11282-025-00804-7. Online ahead of print.

ABSTRACT

OBJECTIVES: Pulp stones are ectopic calcifications located in pulp tissue. The aim of this study is to introduce a novel method for detecting pulp stones on panoramic radiography images using a deep learning-based two-stage pipeline architecture.

MATERIALS AND METHODS: The first stage involved tooth localization with the YOLOv8 model, followed by pulp stone classification using ResNeXt. A total of 375 panoramic images were included in this study, and a comprehensive set of evaluation metrics, including precision, recall, false-negative rate, false-positive rate, accuracy, and F1 score, was employed to rigorously assess the performance of the proposed architecture.

RESULTS: Despite the limited annotated training data, the proposed method achieved impressive results: an accuracy of 95.4%, precision of 97.1%, recall of 96.1%, a false-negative rate of 3.9%, a false-positive rate of 6.1%, and an F1 score of 96.6%, outperforming existing approaches in pulp stone detection.
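The metrics reported above all derive from the confusion counts of the classifier. A minimal sketch of the standard formulas (toy counts, not the study's data):

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Toy example: 8 true positives, 2 false positives, 2 false negatives,
# 8 true negatives -> every metric works out to 0.8.
p, r, f1, acc = detection_metrics(8, 2, 2, 8)
```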

CONCLUSIONS: Unlike current studies, this approach adopted a more realistic scenario by utilizing a small dataset with few annotated samples, acknowledging the time-consuming and error-prone nature of expert labeling. The proposed system is particularly beneficial for dental students and newly graduated dentists who lack sufficient clinical experience, as it aids in the automatic detection of pulpal calcifications. To the best of our knowledge, this is the first study in the literature to propose a pipeline architecture for pulp stone detection on panoramic images.

PMID:39806222 | DOI:10.1007/s11282-025-00804-7

Categories: Literature Watch

Artificial intelligence in clinical genetics

Mon, 2025-01-13 06:00

Eur J Hum Genet. 2025 Jan 13. doi: 10.1038/s41431-024-01782-w. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) has been growing more powerful and accessible, and will increasingly impact many areas, including virtually all aspects of medicine and biomedical research. This review focuses on previous, current, and especially emerging applications of AI in clinical genetics. Topics covered include a brief explanation of different general categories of AI, including machine learning, deep learning, and generative AI. After introductory explanations and examples, the review discusses AI in clinical genetics in three main categories: clinical diagnostics, management and therapeutics, and clinical support. The review concludes with short-, medium-, and long-term predictions about the ways that AI may affect the field of clinical genetics. Overall, while the precise speed at which AI will continue to change clinical genetics is unclear, as are the overall ramifications for patients, families, clinicians, researchers, and others, it is likely that AI will result in dramatic evolution in clinical genetics. It will be important for all those involved in clinical genetics to prepare accordingly in order to minimize the risks and maximize the benefits related to the use of AI in the field.

PMID:39806188 | DOI:10.1038/s41431-024-01782-w

Categories: Literature Watch

Video-based robotic surgical action recognition and skills assessment on porcine models using deep learning

Mon, 2025-01-13 06:00

Surg Endosc. 2025 Jan 13. doi: 10.1007/s00464-024-11486-3. Online ahead of print.

ABSTRACT

OBJECTIVES: This study aimed to develop an automated skills assessment tool for surgical trainees using deep learning.

BACKGROUND: Optimal surgical performance in robot-assisted surgery (RAS) is essential for ensuring good surgical outcomes. This requires effective training of new surgeons, which currently relies on supervision and skill assessment by experienced surgeons. Artificial Intelligence (AI) presents an opportunity to augment existing human-based assessments.

METHODS: We used a network architecture consisting of a convolutional neural network combined with a long short-term memory (LSTM) layer to create two networks for the extraction and analysis of spatial and temporal features from video recordings of surgical procedures, facilitating action recognition and skill assessment.
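The architecture class described above can be illustrated with a minimal sketch. This is not the authors' exact network; layer sizes, clip length, and the number of action classes (16, matching the number of procedures) are assumptions. A small CNN encodes each frame, and an LSTM models the temporal sequence:

```python
# Minimal CNN + LSTM sketch for video-based action recognition: per-frame
# spatial features from a CNN are fed as a sequence to an LSTM, and the
# final hidden state is classified. All sizes are illustrative.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_actions: int = 16, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                # per-frame spatial features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)  # temporal model
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])             # classify from last time step

model = CNNLSTM()
logits = model(torch.rand(2, 8, 3, 64, 64))      # 2 clips, 8 frames each
```

The same backbone pattern, with a two-class head, would serve the skill-assessment task (novice vs. experienced).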

RESULTS: Twenty-one participants (16 novices and 5 experienced) performed 16 different intra-abdominal robot-assisted surgical procedures on porcine models. The action recognition network achieved an accuracy of 96.0% in identifying surgical actions. A GradCAM filter was used to enhance the model interpretability. The skill assessment network had an accuracy of 81.3% in classifying novices and experienced participants. Procedure plots were created to visualize the skill assessment.

CONCLUSION: Our study demonstrated that AI can be used to automate surgical action recognition and skill assessment. The use of a porcine model enables effective data collection at different levels of surgical performance, which is normally not available in the clinical setting. Future studies need to test how well AI developed within a porcine setting can be used to detect errors and provide feedback and actionable skills assessment in the clinical setting.

PMID:39806176 | DOI:10.1007/s00464-024-11486-3

Categories: Literature Watch

A Deep Learning and PSSM Profile Approach for Accurate SNARE Protein Prediction

Mon, 2025-01-13 06:00

Methods Mol Biol. 2025;2887:79-89. doi: 10.1007/978-1-0716-4314-3_5.

ABSTRACT

SNARE proteins play a pivotal role in membrane fusion and various cellular processes. Accurate identification of SNARE proteins is crucial for elucidating their functions in both health and disease contexts. This chapter presents a novel approach employing multiscan convolutional neural networks (CNNs) combined with position-specific scoring matrix (PSSM) profiles to accurately recognize SNARE proteins. By leveraging deep learning techniques, our method significantly enhances the accuracy and efficacy of SNARE protein classification. We detail the step-by-step methodology, including dataset preparation, feature extraction using PSI-BLAST, and the design of the multiscan CNN architecture. Our results demonstrate that this approach outperforms existing methods, providing a robust and reliable tool for bioinformatics research.
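A "multiscan" CNN over a PSSM profile can be sketched as parallel one-dimensional convolution branches with different kernel widths scanning the L x 20 scoring matrix, whose pooled outputs are concatenated for binary SNARE/non-SNARE prediction. The layer sizes and kernel widths below are assumptions, not the chapter's exact design:

```python
# Hedged sketch of a multiscan CNN for PSSM input: each branch scans the
# profile at a different kernel width; pooled branch outputs are
# concatenated and classified. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScanCNN(nn.Module):
    def __init__(self, kernel_sizes=(3, 5, 7), channels: int = 32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(20, channels, k, padding=k // 2),  # 20 PSSM columns
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),
            )
            for k in kernel_sizes
        )
        self.head = nn.Linear(channels * len(kernel_sizes), 2)

    def forward(self, pssm: torch.Tensor) -> torch.Tensor:
        # pssm: (batch, 20, sequence_length), e.g. parsed from PSI-BLAST output
        pooled = [branch(pssm).squeeze(-1) for branch in self.branches]
        return self.head(torch.cat(pooled, dim=1))

model = MultiScanCNN()
logits = model(torch.rand(4, 20, 300))  # 4 proteins of length 300
```

The adaptive max pooling makes the network length-independent, which matters because protein sequences (and hence PSSM profiles) vary in length.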

PMID:39806147 | DOI:10.1007/978-1-0716-4314-3_5

Categories: Literature Watch

Toward efficient slide-level grading of liver biopsy via explainable deep learning framework

Mon, 2025-01-13 06:00

Med Biol Eng Comput. 2025 Jan 13. doi: 10.1007/s11517-024-03266-x. Online ahead of print.

ABSTRACT

In the context of chronic liver diseases, where variability in progression necessitates early and precise diagnosis, this study addresses the limitations of traditional histological analysis and the shortcomings of existing deep learning approaches. A novel patch-level classification model employing multi-scale feature extraction and fusion was developed to enhance the grading accuracy and interpretability of liver biopsies, analyzing 1322 cases across various staining methods. The study also introduces a slide-level aggregation framework, comparing different diagnostic models, to efficiently integrate local histological information. Results from extensive validation show that the slide-level model consistently achieved high F1 scores, notably 0.9 for inflammatory activity and steatosis, and demonstrated rapid diagnostic capabilities with less than one minute per slide on average. The patch-level model also performed well, with an F1 score of 0.64 for ballooning and 0.99 for other indicators, and proved transferable to public datasets. The conclusion drawn is that the proposed analytical framework offers a reliable basis for the diagnosis and treatment of chronic liver diseases, with the added benefit of robust interpretability, suggesting its practical utility in clinical settings.
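One simple form of the slide-level aggregation step described above is to pool per-patch class probabilities across the whole slide. This is an illustrative sketch under that assumption, not the paper's exact aggregation model:

```python
# Illustrative patch-to-slide aggregation: average the per-patch class
# probabilities across the slide and take the argmax as the slide grade.
import numpy as np

def aggregate_slide(patch_probs: np.ndarray) -> int:
    """patch_probs: (n_patches, n_grades) softmax outputs from a patch model."""
    slide_probs = patch_probs.mean(axis=0)   # pool local histological evidence
    return int(slide_probs.argmax())         # slide-level grade

patches = np.array([[0.1, 0.7, 0.2],
                    [0.2, 0.6, 0.2],
                    [0.3, 0.3, 0.4]])        # three patches, three grades
grade = aggregate_slide(patches)  # -> 1
```

More elaborate aggregators (attention-weighted pooling, learned slide-level heads) follow the same pattern of reducing many patch predictions to one slide decision.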

PMID:39806118 | DOI:10.1007/s11517-024-03266-x

Categories: Literature Watch

Pages