Deep learning

Dementia Care Research and Psychosocial Factors

Thu, 2025-01-09 06:00

Alzheimers Dement. 2024 Dec;20 Suppl 4:e087844. doi: 10.1002/alz.087844.

ABSTRACT

BACKGROUND: Evaluation of the Clinical Dementia Rating (CDR) has become increasingly important given the rising prevalence of dementia beyond the age of 60. Early identification of dementia allows preventive measures to be taken, as many contributing conditions are treatable. The cellular automata (CA) framework is a powerful tool for analyzing brain dynamics and modeling the prognosis of Alzheimer's disease.

METHOD: The proposed algorithm uses the CA framework to construct features for classification of the classes in the dataset. Each subject is assigned to a CA cell grid, with its feature values mapped to rows containing a specified number of cells. When a CA grid receives a subject's feature, the feature value is distributed among the cells using a transfer function, spreading from the initialized cell to the neighboring cells in the row with a diffusion rate of 20%. Redistributed CA images are thereby obtained for all subjects. A deep learning architecture with four Conv2D layers was then modeled to classify the CA images into low, moderate, and severe cognitive impairment.
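
As an illustration of this row-wise redistribution, the following is a minimal sketch of spreading one feature value along a CA row with a 20% diffusion rate. The cell count, number of steps, and the symmetric transfer rule are assumptions for illustration; the paper's exact transfer function is not specified here.

```python
import numpy as np

def diffuse_feature(value, n_cells, init_cell, rate=0.2, steps=5):
    """Spread a feature value from an initial cell to its row neighbours.

    At each step, every cell passes a fraction `rate` of its content,
    split equally between its left and right neighbours (reflected at
    the row boundaries). This is one plausible reading of the 20%
    diffusion rule; the original transfer function may differ.
    """
    row = np.zeros(n_cells)
    row[init_cell] = value
    for _ in range(steps):
        out = row * rate                 # amount leaving each cell
        new = row - out
        new[1:] += out[:-1] / 2          # flow to the right neighbour
        new[:-1] += out[1:] / 2          # flow to the left neighbour
        new[0] += out[0] / 2             # reflect at the left edge
        new[-1] += out[-1] / 2           # reflect at the right edge
        row = new
    return row

# Example: a hypothetical feature value 0.7 placed in cell 4 of a 10-cell row
print(diffuse_feature(0.7, n_cells=10, init_cell=4))
```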

RESULT: CDR data from the ADNI dataset, comprising 1948 subjects, were preprocessed into six features and three classes (low, moderate, and severe cognitive impairment), with 70% of subjects used for training and 30% for testing. A balanced dataset of 89 subjects for moderate versus severe cognitive impairment yielded a classification accuracy of 96%. A balanced dataset of 363 subjects for low versus moderate cognitive impairment yielded a classification accuracy of 95%.

CONCLUSION: A CA framework for the classification of cognitive impairment was achieved with good accuracy. The implementation of the CA approach and its runtime performance offer an advantage over well-known algorithms, providing a promising pathway for contributing to classification problems.

PMID:39782453 | DOI:10.1002/alz.087844

Categories: Literature Watch

Dementia Care Research and Psychosocial Factors

Thu, 2025-01-09 06:00

Alzheimers Dement. 2024 Dec;20 Suppl 4:e083965. doi: 10.1002/alz.083965.

ABSTRACT

BACKGROUND: With the advent of new media, more people, possibly including caregivers of persons with dementia, are turning to social media platforms to share their thoughts and emotions related to personal life experiences. This presents an opportunity to leverage social media to gain insights into the key issues faced by dementia caregivers. We examined salient concerns of dementia caregivers through Twitter posts, aiming to shed light on how to better support and engage such caregivers.

METHOD: English tweets related to "dementia" and "caregiver" (or related terms such as "Alzheimer's disease" and "carer") were extracted between 1st January 2013 and 31st December 2022. A supervised deep learning model (Bidirectional Encoder Representations from Transformers, BERT) was trained to select tweets describing individuals' accounts of dementia caregiving. An unsupervised deep learning approach (BERT-based topic modelling) was then applied to identify topics in the selected tweets, with topics grouped into themes manually using thematic analysis.
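
A rough sketch of this two-stage idea is given below, using sentence-transformers embeddings plus k-means clustering as a generic stand-in for BERT-based topic modelling. The model name, cluster count, and tweets are illustrative placeholders, not the authors' actual configuration.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Hypothetical caregiving-related tweets already filtered by a supervised classifier
tweets = [
    "Caring for my mum with dementia is exhausting but I would not trade it",
    "People stare at my dad when he gets confused in public, the stigma is real",
    "Small wins today: dad remembered my name and we laughed together",
]

# Embed tweets with a pretrained sentence encoder (illustrative model choice)
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(tweets)

# Cluster embeddings into candidate topics; themes are then grouped manually
n_topics = 2  # placeholder; chosen via coherence or stability checks in practice
topics = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(embeddings)
print(list(zip(topics, tweets)))
```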

RESULT: A total of 44,527 tweets were analysed and stratified using the emergence of the COVID-19 pandemic as a cut-off. Three themes were derived: challenges of caregiving in dementia, positive aspects of dementia caregiving, and dementia-related stigmatization. Over time, there was a rising trend in tweets relating to dementia caregiving. Post-pandemic, challenges of caregiving remained the most discussed topic, tweets related to dementia-related stigmatization increased, and tweets related to positive aspects of caregiving decreased (p-value < .001).

CONCLUSION: Social media is increasingly being used by dementia caregivers to share their thoughts. The findings uncover a worrying trend of growing dementia-related stigmatization among caregivers and its manifestation in the form of devaluing others. The persistence of these issues post-pandemic underscores the need for caregiver support and resources.

PMID:39782315 | DOI:10.1002/alz.083965

Categories: Literature Watch

Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI)

Thu, 2025-01-09 06:00

J Magn Reson Imaging. 2025 Jan 9. doi: 10.1002/jmri.29687. Online ahead of print.

ABSTRACT

Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementations for multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges like variable false-positive and false-negative rates, complexity in interpreting extensive imaging data, variability in system performance, and a lack of large-scale studies and multicentric models, limiting their generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. LEVEL OF EVIDENCE: NA. TECHNICAL EFFICACY: Stage 2.

PMID:39781684 | DOI:10.1002/jmri.29687

Categories: Literature Watch

Multiparametric MRI for Assessment of the Biological Invasiveness and Prognosis of Pancreatic Ductal Adenocarcinoma in the Era of Artificial Intelligence

Thu, 2025-01-09 06:00

J Magn Reson Imaging. 2025 Jan 9. doi: 10.1002/jmri.29708. Online ahead of print.

ABSTRACT

Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignant tumors, with a grim 5-year overall survival rate of about 12%. As its incidence and mortality rates rise, it is likely to become the second-leading cause of cancer-related death. Radiological assessment determines the stage and management of PDAC. However, PDAC is a highly heterogeneous disease with a complex tumor microenvironment, and it is challenging to adequately reflect its biological aggressiveness and prognosis through morphological evaluation alone. With the rapid development of artificial intelligence (AI), multiparametric magnetic resonance imaging (mpMRI) using specific contrast media and special techniques can provide morphological and functional information with high image quality and has become a powerful tool for quantifying intratumoral characteristics. In addition, AI has become widespread in medical imaging analysis. Radiomics is the high-throughput mining of quantitative image features from medical imaging, enabling data to be extracted and applied for better decision support. Deep learning is a subset of artificial neural network algorithms that can automatically learn feature representations from data. AI-enabled imaging biomarkers derived from mpMRI hold enormous promise for bridging the gap between medical imaging and personalized medicine and show substantial advantages in predicting the biological characteristics and prognosis of PDAC. However, current AI-based models of PDAC operate mainly on a single modality with relatively small sample sizes, and technical reproducibility and biological interpretation present new challenges. In the future, the integration of multi-omics data, such as radiomics and genomics, alongside the establishment of standardized analytical frameworks, will provide opportunities to increase the robustness and interpretability of AI-enabled imaging biomarkers and bring them closer to clinical practice. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 4.

PMID:39781607 | DOI:10.1002/jmri.29708

Categories: Literature Watch

Precision Opioid Prescription in ICU Surgery: Insights from an Interpretable Deep Learning Framework

Thu, 2025-01-09 06:00

J Surg (Lisle). 2024;9(15):11189. doi: 10.29011/2575-9760.11189. Epub 2024 Nov 27.

ABSTRACT

PURPOSE: Appropriate opioid management is crucial for reducing the risk of opioid overdose in ICU surgical patients, which can lead to severe complications. Accurately predicting postoperative opioid needs and understanding the associated factors can effectively guide appropriate opioid use, significantly enhancing patient safety and recovery outcomes. Although machine learning models can accurately predict postoperative opioid needs, their lack of interpretability hinders adoption in clinical practice.

METHODS: We developed an interpretable deep learning framework to evaluate each feature's impact on postoperative opioid use and identify important factors. A Permutation Feature Importance Test (PermFIT) was employed to assess these impacts with rigorous statistical inference for machine learning models including Support Vector Machines, eXtreme Gradient Boosting, Random Forest, and Deep Neural Networks (DNN). Mean Squared Error (MSE) and the Pearson Correlation Coefficient (PCC) were used to evaluate model performance.
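
PermFIT itself is a specific statistical framework, but the underlying idea of permutation feature importance can be sketched with scikit-learn as follows. The synthetic data and generic neural-network regressor are purely illustrative assumptions, not the authors' dataset or DNN architecture.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the opioid-consumption regression problem
X, y = make_regression(n_samples=500, n_features=25, n_informative=13,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Generic multi-layer network regressor; hyperparameters are placeholders
dnn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

# Permute each feature on held-out data and measure the drop in performance
result = permutation_importance(dnn, X_te, y_te, n_repeats=30, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```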

RESULTS: We analyzed the electronic health records of 4,912 surgical patients from the Medical Information Mart for Intensive Care database. In 10-fold cross-validation, the DNN outperformed the other machine learning models, achieving the lowest MSE (7889.2 mcg) and the highest PCC (0.283). Among 25 features, 13, including age and surgery type among others, were identified as significant predictors of postoperative opioid use (p < 0.05).

CONCLUSION: The DNN proved to be an effective model for predicting postoperative opioid consumption and identifying significant features through the PermFIT framework. This approach offers a valuable tool for precise opioid prescription tailored to the individual needs of ICU surgical patients, improving patient outcomes and enhancing safety.

PMID:39781484 | PMC:PMC11709741 | DOI:10.29011/2575-9760.11189

Categories: Literature Watch

Tech Bytes-Harnessing Artificial Intelligence for Pediatric Oral Health: A Scoping Review

Thu, 2025-01-09 06:00

Int J Clin Pediatr Dent. 2024 Nov;17(11):1289-1295. doi: 10.5005/jp-journals-10005-2971. Epub 2024 Dec 19.

ABSTRACT

AIM AND BACKGROUND: The applications of artificial intelligence (AI) are expanding across all fields, particularly healthcare. AI is the umbrella term for a set of technologies that enable machines to independently solve problems they have not been explicitly programmed to address. With its aid, patient management, diagnostics, treatment planning, and interventions can be significantly improved. The aim of this review is to analyze current data on the applications of artificial intelligence in pediatric dentistry and assess their clinical effectiveness.

MATERIALS AND METHODS: The PubMed, Web of Science, Scopus, and Google Scholar databases were searched for studies published up to January 2024.

RESULTS: This review included 30 published studies in the English language. AI has been employed in the detection of dental caries and dental plaque, behavioral science, interceptive orthodontics, dental age prediction, and tooth identification, all of which can enhance patient care.

CONCLUSION: Artificial intelligence models can be used as an aid to the clinician, as they are of significant help at individual and community levels in identifying an increased risk of dental diseases.

CLINICAL SIGNIFICANCE: Artificial intelligence can be used as an asset in preventive school health programs, dental education for students and parents, and to assist the clinician in the dental practice. Further advancements in technology will give rise to newer potential innovations and applications.

HOW TO CITE THIS ARTICLE: Tanna DA, Bhandary S, Hegde SK. Tech Bytes-Harnessing Artificial Intelligence for Pediatric Oral Health: A Scoping Review. Int J Clin Pediatr Dent 2024;17(11):1289-1295.

PMID:39781392 | PMC:PMC11703760 | DOI:10.5005/jp-journals-10005-2971

Categories: Literature Watch

Comparing the Artificial Intelligence Detection Models to Standard Diagnostic Methods and Alternative Models in Identifying Alzheimer's Disease in At-Risk or Early Symptomatic Individuals: A Scoping Review

Thu, 2025-01-09 06:00

Cureus. 2024 Dec 9;16(12):e75389. doi: 10.7759/cureus.75389. eCollection 2024 Dec.

ABSTRACT

Alzheimer's disease (AD) and other neurodegenerative illnesses place a heavy strain on the world's healthcare systems, particularly among the aging population. With a focus on research from January 2022 to September 2023, this scoping review, which adheres to Preferred Reporting Items for Systematic Reviews and Meta-Analysis extension for Scoping Reviews (PRISMA-Scr) criteria, examines the changing landscape of artificial intelligence (AI) applications for early AD detection and diagnosis. Forty-four carefully chosen articles were selected from a pool of 2,966 articles for the qualitative synthesis. The research reveals impressive advancements in AI-driven approaches, including neuroimaging, genomics, cognitive tests, and blood-based biomarkers. Notably, AI models focusing on deep learning (DL) algorithms demonstrate outstanding accuracy in early AD identification, often even before the onset of clinical symptoms. Multimodal approaches, which combine information from various sources, including neuroimaging and clinical assessments, provide comprehensive insights into the complex nature of AD. The study also emphasizes the critical role that blood-based and genetic biomarkers play in strengthening AD diagnosis and risk assessment. When combined with clinical or imaging data, genetic variations and polygenic risk scores help to improve prediction models. In a similar vein, blood-based biomarkers provide non-invasive instruments for detecting metabolic changes linked to AD. Cognitive and functional evaluations, which include neuropsychological examinations and assessments of daily living activities, serve as essential benchmarks for monitoring the course of AD and directing treatment interventions. When these evaluations are included in machine learning models, the diagnosis accuracy is improved, and treatment monitoring is made more accessible. In addition, including methods that support model interpretability and explainability helps in the thorough understanding and valuable implementation of AI-driven insights in clinical contexts. This review further identifies several gaps in the research landscape, including the need for diverse, high-quality datasets to address data heterogeneity and improve model generalizability. Practical implementation challenges, such as integrating AI systems into clinical workflows and clinician adoption, are highlighted as critical barriers to real-world application. Moreover, ethical considerations, particularly surrounding data privacy and informed consent, must be prioritized as AI adoption in healthcare accelerates. Performance metrics (e.g., sensitivity, specificity, and area under the curve (AUC)) for AI-based approaches are discussed, with a need for clearer reporting and comparative analyses. Addressing these limitations, alongside methodological clarity and critical evaluation of biases, would strengthen the credibility of AI applications in AD detection. By expanding its scope, this study highlights areas for improvement and future opportunities in early detection, aiming to bridge the gap between innovative AI technologies and practical clinical utility.

PMID:39781179 | PMC:PMC11709138 | DOI:10.7759/cureus.75389

Categories: Literature Watch

Brain-inspired learning rules for spiking neural network-based control: a tutorial

Thu, 2025-01-09 06:00

Biomed Eng Lett. 2024 Dec 2;15(1):37-55. doi: 10.1007/s13534-024-00436-6. eCollection 2025 Jan.

ABSTRACT

Robotic systems rely on spatio-temporal information to solve control tasks. With advancements in deep neural networks, reinforcement learning has significantly enhanced the performance of control tasks by leveraging deep learning techniques. However, as deep neural networks grow in complexity, they consume more energy and introduce greater latency. This complexity hampers their application in robotic systems that require real-time data processing. To address this issue, spiking neural networks, which emulate the biological brain by transmitting spatio-temporal information through spikes, have been developed alongside neuromorphic hardware that supports their operation. This paper reviews brain-inspired learning rules and examines the application of spiking neural networks in control tasks. We begin by exploring the features and implementations of biologically plausible spike-timing-dependent plasticity. Subsequently, we investigate the integration of a global third factor with spike-timing-dependent plasticity and its utilization and enhancements in both theoretical and applied research. We also discuss a method for locally applying a third factor that sophisticatedly modifies each synaptic weight through weight-based backpropagation. Finally, we review studies utilizing these learning rules to solve control tasks using spiking neural networks.
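
For readers unfamiliar with spike-timing-dependent plasticity, the canonical pair-based update used as the starting point in such tutorials can be sketched as below. The amplitude and time-constant values are illustrative placeholders, not parameters taken from this paper.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair (times in ms).

    If the presynaptic spike precedes the postsynaptic spike (dt > 0) the
    synapse is potentiated; if it follows (dt < 0) it is depressed, with
    exponentially decaying magnitude in both cases.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # pre before post -> potentiation
print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # post before pre -> depression
```

Third-factor rules discussed in the paper modulate updates of this kind with a global or local signal (for example, a reward term multiplying the STDP change).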

PMID:39781065 | PMC:PMC11704115 | DOI:10.1007/s13534-024-00436-6

Categories: Literature Watch

A Review for automated classification of knee osteoarthritis using KL grading scheme for X-rays

Thu, 2025-01-09 06:00

Biomed Eng Lett. 2024 Oct 10;15(1):1-35. doi: 10.1007/s13534-024-00437-5. eCollection 2025 Jan.

ABSTRACT

Osteoarthritis (OA) is a musculoskeletal disorder that affects weight-bearing joints like the hip, knee, spine, feet, and fingers. It is a chronic disorder that causes joint stiffness and leads to functional impairment. Knee osteoarthritis (KOA) is a degenerative knee joint disease that is a significant disability for over 60 years old, with the most prevalent symptom of knee pain. Radiography is the gold standard for the evaluation of KOA. These radiographs are evaluated using different classification systems. Kellgren and Lawrence's (KL) classification system is used to classify X-rays into five classes (Normal = 0 to Severe = 4) based on osteoarthritis severity levels. In recent years, with the advent of artificial intelligence, machine learning, and deep learning, more emphasis has been given to automated medical diagnostic systems or decision support systems. Computer-aided diagnosis is needed for the improvement of health-related information systems. This survey aims to review the latest advances in automated radiographic classification and detection of KOA using the KL system. A total of 85 articles are reviewed as original research or survey articles. This survey will benefit researchers, practitioners, and medical experts interested in X-rays-based KOA diagnosis and prediction.

PMID:39781063 | PMC:PMC11704124 | DOI:10.1007/s13534-024-00437-5

Categories: Literature Watch

Systematic review of computational techniques, dataset utilization, and feature extraction in electrocardiographic imaging

Wed, 2025-01-08 06:00

Med Biol Eng Comput. 2025 Jan 9. doi: 10.1007/s11517-024-03264-z. Online ahead of print.

ABSTRACT

This study aimed to analyze computational techniques in ECG imaging (ECGI) reconstruction, focusing on dataset identification, problem-solving, and feature extraction. We employed a PRISMA approach to review studies from Scopus and Web of Science, applying Cochrane principles to assess risk of bias. The selection was limited to English peer-reviewed papers published from 2010 to 2023, excluding studies that lacked computational technique descriptions. From 99 reviewed papers, trends show a preference for traditional methods like the boundary element and Tikhonov methods, alongside a rising use of advanced technologies including hybrid techniques and deep learning. These advancements have enhanced cardiac diagnosis and treatment precision. Our findings underscore the need for robust data utilization and innovative computational integration in ECGI, highlighting promising areas for future research and advances. This shift toward tailored cardiac care suggests significant progress in diagnostic and treatment methods.
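
Tikhonov regularization, one of the traditional ECGI reconstruction methods noted above, solves a regularized least-squares inverse problem. The numpy sketch below uses a random transfer matrix and simulated body-surface potentials as stand-ins for real ECGI data.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zero-order Tikhonov solution: x = argmin ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 300))      # stand-in forward matrix: torso leads x heart nodes
x_true = rng.standard_normal(300)        # stand-in epicardial potentials
b = A @ x_true + 0.05 * rng.standard_normal(120)  # noisy body-surface potentials

x_hat = tikhonov_solve(A, b, lam=1e-2)   # lam is typically chosen via an L-curve or CV
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```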

PMID:39779645 | DOI:10.1007/s11517-024-03264-z

Categories: Literature Watch

Multi-Class Brain Tumor Grades Classification Using a Deep Learning-Based Majority Voting Algorithm and Its Validation Using Explainable-AI

Wed, 2025-01-08 06:00

J Imaging Inform Med. 2025 Jan 8. doi: 10.1007/s10278-024-01368-4. Online ahead of print.

ABSTRACT

Biopsy is considered the gold standard for diagnosing brain tumors, but its invasive nature can pose risks to patients, and tissue analysis can be cumbersome and inconsistent among observers. This research aims to develop a cost-effective, non-invasive, MRI-based computer-aided diagnosis tool that can reliably, accurately, and swiftly identify brain tumor grades. Our system employs ensemble deep learning (EDL) within an MRI multiclass framework that includes five datasets: two-class (C2), three-class (C3), four-class (C4), five-class (C5), and six-class (C6). The EDL uses a majority voting algorithm to classify brain tumors by combining seven renowned deep learning (DL) models (EfficientNet, VGG16, ResNet18, GoogleNet, ResNet50, Inception-V3, and DarkNet) and seven machine learning (ML) models (support vector machine, K-nearest neighbour, Naïve Bayes, decision tree, linear discriminant analysis, artificial neural network, and random forest). Additionally, local interpretable model-agnostic explanations (LIME) are employed as an explainable AI algorithm, providing a visual representation of the CNN's internal workings to enhance the credibility of the results. Through extensive five-fold cross-validation experiments, the DL-based majority voting algorithm outperformed the ML-based majority voting algorithm, achieving the highest average accuracies of 100 ± 0.00%, 98.55 ± 0.35%, 98.47 ± 0.63%, 95.34 ± 1.17%, and 96.61 ± 0.85% for the C2, C3, C4, C5, and C6 datasets, respectively. The majority voting algorithms yield consistent results across different folds of the brain tumor data and enhance performance compared with any individual deep learning or machine learning model.
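
The majority-voting step can be sketched as a simple per-image mode over the class labels predicted by each model. The three-class predictions below are made-up placeholders, not outputs of the seven networks used in the paper.

```python
import numpy as np

# Rows: models (e.g., EfficientNet, VGG16, ...); columns: test images.
# Entries are predicted class labels; the values here are illustrative only.
predictions = np.array([
    [0, 1, 2, 1],
    [0, 1, 2, 2],
    [0, 0, 2, 1],
    [1, 1, 2, 1],
    [0, 1, 1, 1],
    [0, 1, 2, 1],
    [0, 1, 2, 0],
])

# Majority vote across models for each image (ties break toward the lower label)
voted = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, predictions)
print(voted)  # -> [0 1 2 1]
```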

PMID:39779641 | DOI:10.1007/s10278-024-01368-4

Categories: Literature Watch

Multi-site, multi-vendor development and validation of a deep learning model for liver stiffness prediction using abdominal biparametric MRI

Wed, 2025-01-08 06:00

Eur Radiol. 2025 Jan 9. doi: 10.1007/s00330-024-11312-3. Online ahead of print.

ABSTRACT

BACKGROUND: Chronic liver disease (CLD) is a substantial cause of morbidity and mortality worldwide. Liver stiffness, as measured by MR elastography (MRE), is well-accepted as a surrogate marker of liver fibrosis.

PURPOSE: To develop and validate deep learning (DL) models for predicting MRE-derived liver stiffness using routine clinical non-contrast abdominal T1-weighted (T1w) and T2-weighted (T2w) data from multiple institutions/system manufacturers in pediatric and adult patients.

MATERIALS AND METHODS: We identified pediatric and adult patients with known or suspected CLD from four institutions, who underwent clinical MRI with MRE from 2011 to 2022. We used T1w and T2w data to train DL models for liver stiffness classification. Patients were categorized into two groups for binary classification using liver stiffness thresholds (≥ 2.5 kPa, ≥ 3.0 kPa, ≥ 3.5 kPa, ≥ 4 kPa, or ≥ 5 kPa), reflecting various degrees of liver stiffening.

RESULTS: We identified 4695 MRI examinations from 4295 patients (mean ± SD age, 47.6 ± 18.7 years; 428 (10.0%) pediatric; 2159 males [50.2%]). With a primary liver stiffness threshold of 3.0 kPa, our model correctly classified patients into no/minimal (< 3.0 kPa) vs moderate/severe (≥ 3.0 kPa) liver stiffness with AUROCs of 0.83 (95% CI: 0.82, 0.84) in our internal multi-site cross-validation (CV) experiment, 0.82 (95% CI: 0.80, 0.84) in our temporal hold-out validation experiment, and 0.79 (95% CI: 0.75, 0.81) in our external leave-one-site-out CV experiment. The developed model is publicly available ( https://github.com/almahdir1/Multi-channel-DeepLiverNet2.0.git ).

CONCLUSION: Our DL models exhibited reasonable diagnostic performance for categorical classification of liver stiffness on a large diverse dataset using T1w and T2w MRI data.

KEY POINTS: Question: Can DL models accurately predict liver stiffness using routine clinical biparametric MRI in pediatric and adult patients with CLD? Findings: DeepLiverNet2.0 used biparametric MRI data to classify liver stiffness, achieving AUROCs of 0.83, 0.82, and 0.79 for multi-site CV, hold-out validation, and external CV. Clinical relevance: Our DeepLiverNet2.0 AI model can categorically classify the severity of liver stiffening using anatomic biparametric MR images in children and young adults. Model refinements and incorporation of clinical features may decrease the need for MRE.

PMID:39779515 | DOI:10.1007/s00330-024-11312-3

Categories: Literature Watch

Deep Learning-Based Super-Resolution Reconstruction on Undersampled Brain Diffusion-Weighted MRI for Infarction Stroke: A Comparison to Conventional Iterative Reconstruction

Wed, 2025-01-08 06:00

AJNR Am J Neuroradiol. 2025 Jan 8;46(1):41-48. doi: 10.3174/ajnr.A8482.

ABSTRACT

BACKGROUND AND PURPOSE: DWI is crucial for detecting infarction stroke. However, its spatial resolution is often limited, hindering accurate lesion visualization. Our aim was to evaluate the image quality and diagnostic confidence of deep learning (DL)-based super-resolution reconstruction for brain DWI of infarction stroke.

MATERIALS AND METHODS: This retrospective study enrolled 114 consecutive participants who underwent brain DWI. DWI images were reconstructed with two schemes: 1) DL-based super-resolution reconstruction (DWIDL); and 2) conventional compressed sensing reconstruction (DWICS). Qualitative image analysis included overall image quality, lesion conspicuity, and diagnostic confidence in infarction stroke across different lesion sizes. Quantitative image quality assessments were performed through measurements of SNR, contrast-to-noise ratio (CNR), ADC, and edge rise distance. Group comparisons were conducted using a paired t test for normally distributed data and the Wilcoxon test for non-normally distributed data. The overall agreement between readers for qualitative ratings was assessed using the Cohen κ coefficient. A P value less than .05 was considered statistically significant.
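
The choice between a paired t test and the Wilcoxon signed-rank test hinges on whether the paired differences are approximately normal; a small scipy sketch of that decision is shown below. The scores are fabricated example ratings, not study data, and the Shapiro normality check is an assumed way of operationalizing "normally distributed."

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

# Fabricated paired quality scores for the two reconstructions
dwi_dl = np.array([4.5, 4.0, 4.5, 5.0, 4.0, 4.5, 5.0, 4.5])
dwi_cs = np.array([3.5, 3.5, 4.0, 4.0, 3.5, 4.0, 4.5, 4.0])

diff = dwi_dl - dwi_cs
if shapiro(diff).pvalue > 0.05:            # differences look roughly normal
    stat, p = ttest_rel(dwi_dl, dwi_cs)    # paired t test
else:                                      # non-normal differences
    stat, p = wilcoxon(dwi_dl, dwi_cs)     # Wilcoxon signed-rank test
print(f"statistic={stat:.3f}, p={p:.4f}")
```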

RESULTS: A total of 114 DWI examinations constituted the study cohort. In the qualitative assessment, overall image quality, lesion conspicuity, and diagnostic confidence in infarction stroke lesions (lesion size < 1.5 cm) improved with DWIDL compared with DWICS (all P < .001). In the quantitative analysis, the edge rise distance of DWIDL was reduced compared with that of DWICS (P < .001), with no significant differences in SNR, CNR, or ADC values (all P > .05).

CONCLUSIONS: Compared with the conventional compressed sensing reconstruction, the DL-based super-resolution reconstruction demonstrated superior image quality and was feasible for achieving higher diagnostic confidence in infarction stroke.

PMID:39779291 | DOI:10.3174/ajnr.A8482

Categories: Literature Watch

Predicting Parkinson's Disease Using a Deep-Learning Algorithm to Analyze Prodromal Medical and Prescription Data

Wed, 2025-01-08 06:00

J Clin Neurol. 2025 Jan;21(1):21-30. doi: 10.3988/jcn.2024.0175.

ABSTRACT

BACKGROUND AND PURPOSE: Parkinson's disease (PD) is characterized by various prodromal symptoms, and these symptoms are mostly investigated retrospectively. While some symptoms such as rapid eye movement sleep behavior disorder are highly specific, others are common. This makes it challenging to predict those at risk of PD based solely on less-specific prodromal symptoms. The prediction accuracy when using only less-specific symptoms can be improved by analyzing the vast amount of information available using sophisticated deep-learning techniques. This study aimed to improve the performance of deep-learning-based screening in detecting prodromal PD using medical-claims data, including prescription information.

METHODS: We sampled 820 PD patients and 8,200 age- and sex-matched non-PD controls from Korean National Health Insurance cohort data. A deep-learning algorithm was developed using various combinations of diagnostic codes, medication codes, and prodromal periods.

RESULTS: During the prodromal period from year -3 to year 0, predicting PD using only diagnostic codes yielded a high accuracy of 0.937. Adding medication codes for the same period did not increase the accuracy (0.931-0.935). For the earlier prodromal period (year -6 to year -3), the accuracy of PD prediction decreased to 0.890 when using only diagnostic codes. The inclusion of all medication-codes data increased that accuracy markedly to 0.922.

CONCLUSIONS: A deep-learning algorithm using both prodromal diagnostic and medication codes was effective in screening PD. Developing a surveillance system with automatically collected medical-claims data for those at risk of developing PD could be cost-effective. This approach could streamline the process of developing disease-modifying drugs by focusing on the most-appropriate candidates for inclusion in accurate diagnostic tests.

PMID:39778564 | DOI:10.3988/jcn.2024.0175

Categories: Literature Watch

Identity Model Transformation for boosting performance and efficiency in object detection network

Wed, 2025-01-08 06:00

Neural Netw. 2024 Dec 31;184:107098. doi: 10.1016/j.neunet.2024.107098. Online ahead of print.

ABSTRACT

Modifying the structure of an existing network is a common way to further improve its performance. However, modifying layers in a network often results in pre-trained weight mismatches, and the fine-tuning process is time-consuming and resource-inefficient. To address this issue, we propose a novel technique called Identity Model Transformation (IMT), which keeps the outputs before and after transformation equal through rigorous algebraic transformations. This approach ensures that the original model's performance is preserved when layers are modified. Additionally, IMT significantly reduces the total training time required to achieve optimal results while further enhancing network performance. IMT establishes a bridge for rapid transformation between model architectures, enabling a model to quickly perform analytic continuation and derive a family of tree-like models with better performance. This model family possesses greater potential for optimization improvements than a single model. Extensive experiments across various object detection tasks validated the effectiveness and efficiency of the proposed IMT solution, which saved 94.76% of the time required to fine-tune the base model YOLOv4-Rot on the DOTA 1.5 dataset; using the IMT method, we observed stable performance improvements of 9.89%, 6.94%, 2.36%, and 4.86% on the four datasets AI-TOD, DOTA 1.5, COCO 2017, and MRSAText, respectively.
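
The paper's specific algebraic transformations are not reproduced here, but the general idea of inserting a layer that initially computes the identity, so pre-trained behaviour is preserved exactly, can be sketched in PyTorch. The layer size and the Dirac initialization are illustrative choices under that assumption, not the authors' IMT construction.

```python
import torch
import torch.nn as nn

class IdentityConvBlock(nn.Module):
    """A 3x3 conv inserted into an existing network, initialized so that its
    output equals its input; training can then move it away from the identity
    without disturbing the pre-trained weights elsewhere."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=True)
        nn.init.dirac_(self.conv.weight)   # identity (Dirac delta) kernel
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return self.conv(x)

x = torch.randn(1, 64, 32, 32)
block = IdentityConvBlock(64)
print(torch.allclose(block(x), x, atol=1e-6))  # True: output matches input at init
```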

PMID:39778291 | DOI:10.1016/j.neunet.2024.107098

Categories: Literature Watch

Skin image analysis for detection and quantitative assessment of dermatitis, vitiligo and alopecia areata lesions: a systematic literature review

Wed, 2025-01-08 06:00

BMC Med Inform Decis Mak. 2025 Jan 8;25(1):10. doi: 10.1186/s12911-024-02843-2.

ABSTRACT

Vitiligo, alopecia areata, atopic dermatitis, and stasis dermatitis are common skin conditions that pose diagnostic and assessment challenges. Skin image analysis is a promising noninvasive approach for objective, automated detection and quantitative assessment of skin diseases. This review provides a systematic literature search on computer vision techniques applied to these benign skin conditions, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The review examines deep learning architectures and image processing algorithms for the segmentation, feature extraction, and classification tasks employed for disease detection. It also focuses on practical applications, emphasizing quantitative disease assessment and the performance of various computer vision approaches for each condition, while highlighting their strengths and limitations. Finally, the review notes the need for disease-specific datasets with curated annotations and suggests future directions toward unsupervised or self-supervised approaches. Additionally, the findings underscore the importance of developing accurate, automated tools for disease severity score calculation to improve ML-based monitoring and diagnosis in dermatology. TRIAL REGISTRATION: Not applicable.

PMID:39780145 | DOI:10.1186/s12911-024-02843-2

Categories: Literature Watch

Feasibility of occlusal plane in predicting the changes in anteroposterior mandibular position: a comprehensive analysis using deep learning-based three-dimensional models

Wed, 2025-01-08 06:00

BMC Oral Health. 2025 Jan 8;25(1):42. doi: 10.1186/s12903-024-05345-9.

ABSTRACT

BACKGROUND: A comprehensive analysis of occlusal plane (OP) inclination for predicting anteroposterior mandibular position (APMP) changes is still lacking. This study aimed to analyse the relationships between the inclinations of different OPs and APMP metrics and to explore the feasibility of using OP inclination to predict changes in APMP.

METHODS: Overall, 115 three-dimensional (3D) models were reconstructed using deep learning-based cone-beam computed tomography (CBCT) segmentation, and their accuracy in supporting cusps was compared with that of intraoral scanning models. The anatomical landmarks of seven OPs and three APMP metrics were identified, and their values were measured on the sagittal reference plane. The receiver operating characteristic curves of inclinations of seven OPs in distinguishing different anteroposterior skeletal patterns and correlations between inclinations of these OPs and APMP metrics were calculated and compared. For the OP inclination with the highest area under the curve (AUC) values and correlation coefficients, the regression models between this OP inclination and APMP metrics were further calculated.

RESULTS: The deviations in supporting cusps between deep learning-based and intraoral scanning models were < 0.300 mm. The improved functional OP (IFOP) inclination could distinguish different skeletal classification determinations (AUC for Class I vs Class II = 0.693, AUC for Class I vs Class III = 0.763, AUC for Class II vs Class III = 0.899, all P values < 0.01), and its AUC value for skeletal Class II and III determination was statistically higher than that of the other OP inclinations (all P values < 0.01). Moreover, the IFOP inclination showed statistical correlations with APMP metrics (r_APDI = -0.557, r_ANB = 0.543, r_AF-BF = 0.731, all P values < 0.001) and had the highest correlation coefficients among all OP inclinations (all P values < 0.05). The regression models between IFOP inclination and APMP metrics were y_APDI = -0.917x + 91.144, y_ANB = 0.395x + 0.292, and y_AF-BF = 0.738x - 2.331.

CONCLUSIONS: Constructing the OP using deep learning-based 3D models from CBCT data is feasible. IFOP inclination can be used to predict APMP changes, with a steeper IFOP inclination corresponding to a more retrognathic mandibular posture.

PMID:39780117 | DOI:10.1186/s12903-024-05345-9

Categories: Literature Watch

Hybrid natural language processing tool for semantic annotation of medical texts in Spanish

Wed, 2025-01-08 06:00

BMC Bioinformatics. 2025 Jan 8;26(1):7. doi: 10.1186/s12859-024-05949-6.

ABSTRACT

BACKGROUND: Natural language processing (NLP) enables the extraction of information embedded within unstructured texts, such as clinical case reports and trial eligibility criteria. By identifying relevant medical concepts, NLP facilitates the generation of structured and actionable data, supporting complex tasks like cohort identification and the analysis of clinical records. To accomplish these tasks, we introduce a deep learning-based and lexicon-based named entity recognition (NER) tool for texts in Spanish. It performs medical NER and normalization, medication information extraction, and detection of temporal entities, negation and speculation, and temporality or experiencer attributes (Age, Contraindicated, Negated, Speculated, Hypothetical, Future, Family_member, Patient and Other). We built the tool with a dedicated lexicon and rules adapted from NegEx and HeidelTime. Using these resources, we annotated a corpus of 1200 texts, with high inter-annotator agreement (average F1 = 0.841 ± 0.045 for entities, and average F1 = 0.881 ± 0.032 for attributes). We used this corpus to train Transformer-based models (RoBERTa-based models, mBERT and mDeBERTa). We integrated them with the dictionary-based system in a hybrid tool and distributed the models via the Hugging Face hub. For internal validation, we used a held-out test set and conducted an error analysis. For external validation, eight medical professionals evaluated the system by revising the annotation of 200 new texts not used in development.
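
Since the released models live on the Hugging Face hub, a typical way to run one of them would follow the standard transformers token-classification pipeline, sketched below. The model identifier is a hypothetical placeholder, as the abstract does not name the exact checkpoints.

```python
from transformers import pipeline

# Placeholder model ID: substitute the actual MedSpaNER checkpoint from the Hugging Face hub
MODEL_ID = "your-org/medspaner-roberta-ner"  # hypothetical identifier

ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

texto = "Paciente de 68 años con sospecha de neumonía, en tratamiento con amoxicilina 500 mg."
for entity in ner(texto):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```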

RESULTS: In the internal validation, the models yielded F1 values up to 0.915. In the external validation with 100 clinical trials, the tool achieved an average F1 score of 0.858 (± 0.032); and in 100 anonymized clinical cases, it achieved an average F1 score of 0.910 (± 0.019).

CONCLUSIONS: The tool is available at https://claramed.csic.es/medspaner . We also release the code ( https://github.com/lcampillos/medspaner ) and the annotated corpus to train the models.

PMID:39780059 | DOI:10.1186/s12859-024-05949-6

Categories: Literature Watch

Effective BCDNet-based breast cancer classification model using hybrid deep learning with VGG16-based optimal feature extraction

Wed, 2025-01-08 06:00

BMC Med Imaging. 2025 Jan 8;25(1):12. doi: 10.1186/s12880-024-01538-4.

ABSTRACT

PROBLEM: Breast cancer is a leading cause of death among women, and early detection is crucial for improving survival rates. Manual breast cancer diagnosis is time-consuming and subjective. In addition, previous CAD models mostly depend on handcrafted visual features that are difficult to generalize across ultrasound images acquired with different techniques. Previous works have used other imaging modalities, such as mammography and MRI, but these are costly and less portable than ultrasound imaging, which is a non-invasive method commonly used for breast cancer screening. Hence, this paper presents a novel deep learning model, BCDNet, for classifying breast tumors as benign or malignant using ultrasound images.

AIM: The primary aim of the study is to design an effective breast cancer diagnosis model that can accurately classify tumors in their early stages, thus reducing mortality rates. The model aims to optimize the weight and parameters using the RPAOSM-ESO algorithm to enhance accuracy and minimize false negative rates.

METHODS: The BCDNet model utilizes transfer learning from a pre-trained VGG16 network for feature extraction and employs an AHDNAM classification approach, which includes ASPP, DTCN, 1DCNN, and an attention mechanism. The RPAOSM-ESO algorithm is used to fine-tune the weights and parameters.
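
A minimal Keras sketch of the VGG16 transfer-learning step is shown below. The dense head stands in for the paper's AHDNAM module (ASPP, DTCN, 1DCNN and attention), and the input shape is an assumption for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen VGG16 backbone pre-trained on ImageNet, used purely as a feature extractor
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Simple stand-in head; the paper's AHDNAM classifier is not reproduced here
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # benign vs malignant
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```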

RESULTS: The RPAOSM-ESO-BCDNet-based breast cancer diagnosis model achieved an accuracy of 94.5%. This is higher than that of previous models such as DTCN (88.2%), 1DCNN (89.6%), MobileNet (91.3%), and ASPP-DTC-1DCNN-AM (93.8%). Hence, the designed RPAOSM-ESO-BCDNet produces more accurate classifications than the previous models.

CONCLUSION: The BCDNet model, with its sophisticated feature extraction and classification techniques optimized by the RPAOSM-ESO algorithm, shows promise in accurately classifying breast tumors using ultrasound images. The study suggests that the model could be a valuable tool in the early detection of breast cancer, potentially saving lives and reducing the burden on healthcare systems.

PMID:39780045 | DOI:10.1186/s12880-024-01538-4

Categories: Literature Watch

A practical approach to the spatial-domain calculation of nonprewhitening model observers in computed tomography

Wed, 2025-01-08 06:00

Med Phys. 2025 Jan 8. doi: 10.1002/mp.17599. Online ahead of print.

ABSTRACT

BACKGROUND: Modern reconstruction algorithms for computed tomography (CT) can exhibit nonlinear properties, including non-stationarity of noise and contrast dependence of both noise and spatial resolution. Model observers have been recommended as a tool for the task-based assessment of image quality (Samei E et al., Med Phys. 2019; 46(11): e735-e756), but the common Fourier domain approach to their calculation assumes quasi-stationarity.

PURPOSE: A practical spatial-domain approach is proposed for the calculation of the nonprewhitening (NPW) family of model observers in CT, avoiding the disadvantages of the Fourier domain. The methodology avoids explicit estimation of a noise covariance matrix. A formula is also provided for the uncertainty of detectability index estimates, for a given number of slices and repeat scans. The purpose of this work is to demonstrate the method and provide comparisons to the conventional Fourier approach for both iterative reconstruction (IR) and a deep learning-based reconstruction (DLR) algorithm.

MATERIALS AND METHODS: Acquisitions were made on a Revolution CT scanner (GE Healthcare, Waukesha, Wisconsin, USA) and reconstructed using the vendor's IR and DLR algorithms (ASiR-V and TrueFidelity). Several reconstruction kernels were investigated (Standard, Lung, and Bone for IR and Standard for DLR). An in-house developed phantom with two flat contrast levels (2 and 8 mgI/mL) and varying feature size (1-10 mm diameter) was used. Two single-energy protocols (80 and 120 kV) were investigated with two dose levels (CTDIvol = 5 and 13 mGy). The spatial domain calculations relied on repeated scanning, region-of-interest placement and simple operations with image matrices. No more repeat scans were utilized than required for Fourier domain estimations. Fourier domain calculations were made using techniques described in a previous publication (Thor D et al., Med Phys. 2023;50(5):2775-2786). Differences between the calculations in the two domains were assessed using the normalized root-mean-square discrepancy (NMRSD).
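
In the spatial domain, the NPW observer uses the mean difference image as its template and obtains the detectability index from the distribution of template responses over repeated realizations. The numpy sketch below illustrates this general approach under simple assumptions, with synthetic ROIs standing in for the repeated phantom acquisitions; it is not the authors' exact methodology.

```python
import numpy as np

rng = np.random.default_rng(1)
n_repeats, roi = 40, (16, 16)

# Synthetic stand-ins: repeated signal-present and signal-absent ROIs
signal = np.zeros(roi); signal[6:10, 6:10] = 5.0            # low-contrast feature
g_sig = signal + rng.normal(0, 3, (n_repeats, *roi))        # signal-present realizations
g_bkg = rng.normal(0, 3, (n_repeats, *roi))                 # signal-absent realizations

# NPW template: mean difference image (no noise prewhitening)
w = g_sig.mean(axis=0) - g_bkg.mean(axis=0)

# Template responses for each realization; no covariance matrix is ever formed
t_sig = (g_sig * w).sum(axis=(1, 2))
t_bkg = (g_bkg * w).sum(axis=(1, 2))

d_prime = (t_sig.mean() - t_bkg.mean()) / np.sqrt(
    0.5 * (t_sig.var(ddof=1) + t_bkg.var(ddof=1)))
print(f"NPW detectability index d' ~ {d_prime:.2f}")
```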

RESULTS: Fourier domain calculations agreed closely with those in the spatial domain for all zero-strength IR reconstructions, which most closely resemble traditional filtered backprojection. The Fourier-based calculations, however, displayed higher detectability compared to those in the spatial domain for IR with strong iterative strength and for the DLR algorithm. The NRMSD remained within 10% for the NPW model observer without eye filter, but reached larger values when an eye filter was included. The formula for the uncertainty on the detectability index was validated by bootstrap estimates.

CONCLUSION: A practical methodology was demonstrated for calculating NPW observers in the spatial domain. In addition to being a valuable tool for verifying the applicability of typical Fourier-based methodologies, it lends itself to routine calculations for features embedded in a phantom. Higher estimates of detectability were observed when adopting the Fourier domain methodology for IR and for a DLR algorithm, demonstrating that use of the Fourier domain can indicate greater benefit to noise suppression than suggested by spatial domain calculations. This is consistent with the results of previous authors for the Fourier domain, who have compared to human and other model observers, but not, as in this study, to the NPW model observer calculated in the spatial domain.

PMID:39780034 | DOI:10.1002/mp.17599

Categories: Literature Watch
