Deep learning

AI-based multimodal multi-task analysis reveals tumor molecular heterogeneity, predicts preoperative lymph node metastasis and prognosis in papillary thyroid carcinoma: A retrospective study

Thu, 2024-07-11 06:00

Int J Surg. 2024 Jul 11. doi: 10.1097/JS9.0000000000001875. Online ahead of print.

ABSTRACT

BACKGROUND: Papillary thyroid carcinoma (PTC) is the predominant form of thyroid cancer globally, and its management is particularly challenging when lymph node metastasis (LNM) occurs. Molecular heterogeneity, driven by genetic alterations and tumor microenvironment components, contributes to the complexity of PTC. Understanding these complexities is essential for precise risk stratification and therapeutic decisions.

METHODS: This study involved a comprehensive analysis of 521 patients with PTC from our hospital and 499 patients from The Cancer Genome Atlas (TCGA). The real-world cohort 1 comprised 256 patients with stage I-III PTC. Tissues from 252 patients were analyzed by DNA-based next-generation sequencing, and tissues from four patients were analyzed by single-cell RNA sequencing (scRNA-seq). Additionally, 586 PTC pathological sections were collected from TCGA, and 275 PTC pathological sections were collected from the real-world cohort 2. A deep learning multimodal model was developed using matched histopathology images, genomic, transcriptomic, and immune cell data to predict LNM and disease-free survival (DFS).

RESULTS: This study included a total of 1,011 PTC patients, comprising 256 patients from cohort 1, 275 patients from cohort 2, and 499 patients from TCGA. In cohort 1, we categorized PTC into four molecular subtypes based on BRAF, RAS, RET, and other mutations. BRAF mutations were significantly associated with LNM and impacted DFS. ScRNA-seq identified distinct T cell subtypes and reduced B cell diversity in BRAF-mutated PTC with LNM. The study also explored cancer-associated fibroblasts and macrophages, highlighting their associations with LNM. The deep learning model was trained using 405 pathology slides and RNA sequences from 328 PTC patients and validated with 181 slides and RNA sequences from 140 PTC patients in the TCGA cohort. It achieved high accuracy, with an AUC of 0.86 in the training cohort, 0.84 in the validation cohort, and 0.83 in the real-world cohort 2. High-risk patients in the training cohort had significantly lower DFS rates (P<0.001). Model AUCs were 0.91 at 1 year, 0.93 at 3 years, and 0.87 at 5 years. In the validation cohort, high-risk patients also had lower DFS (P<0.001); the AUCs were 0.89, 0.87, and 0.80 at 1, 3, and 5 years. We utilized the GradCAM algorithm to generate heatmaps from pathology-based deep learning models, which visually highlighted high-risk tumor areas in PTC patients. This enhanced clinicians' understanding of the model's predictions and improved diagnostic accuracy, especially in cases with lymph node metastasis.
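The GradCAM step mentioned above has a simple core computation: the channel weights are the global-average-pooled gradients, and the heatmap is the ReLU of the weighted sum of the activation maps. A framework-free Python sketch of that computation (illustrative only; the hooks that capture activations and gradients from a real network are omitted, and the nested-list inputs are hypothetical):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap from K feature maps and their gradients (each H x W)."""
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # channel weights: global average pooling of the gradients
    alphas = [sum(sum(row) for row in g) / (H * W) for g in gradients]
    # ReLU of the weighted sum of activation maps
    cam = [[max(0.0, sum(alphas[k] * activations[k][i][j] for k in range(K)))
            for j in range(W)] for i in range(H)]
    # normalise to [0, 1] so the map can be rendered as a heatmap overlay
    peak = max(max(row) for row in cam) or 1.0
    return [[v / peak for v in row] for row in cam]
```

In practice the heatmap is upsampled to the slide resolution and overlaid on the pathology image; channels whose gradients are positive highlight the regions that increased the predicted risk.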

CONCLUSION: The AI-based analysis uncovered vital insights into PTC molecular heterogeneity, emphasizing BRAF mutations' impact. The integrated deep learning model shows promise in predicting metastasis, offering valuable contributions to improved diagnostic and therapeutic strategies.

PMID:38990290 | DOI:10.1097/JS9.0000000000001875

Categories: Literature Watch

Navigating the landscape of enzyme design: from molecular simulations to machine learning

Thu, 2024-07-11 06:00

Chem Soc Rev. 2024 Jul 11. doi: 10.1039/d4cs00196f. Online ahead of print.

ABSTRACT

Global environmental issues and sustainable development call for new technologies for fine chemical synthesis and waste valorization. Biocatalysis has attracted great attention as an alternative to traditional organic synthesis. However, it is challenging to navigate the vast sequence space to identify proteins with desirable biocatalytic functions. The recent development of deep-learning-based structure prediction methods such as AlphaFold2, reinforced by computational simulations and multiscale calculations, has greatly expanded 3D structure databases and enabled structure-based design. While structure-based approaches shed light on site-specific enzyme engineering, they are not suitable for large-scale screening of potential biocatalysts. Effective utilization of big data using machine learning techniques opens a new era of accelerated prediction. Here, we review the approaches and applications of structure-based and machine-learning-guided enzyme design. We also give our view on the challenges and prospects of enzyme design approaches that integrate traditional molecular simulations with machine learning, and on the importance of database construction and algorithm development for building predictive ML models that explore the sequence fitness landscape in the design of capable biocatalysts.

PMID:38990263 | DOI:10.1039/d4cs00196f

Categories: Literature Watch

Integrating deep learning and regression models for accurate prediction of groundwater fluoride contamination in old city in Bitlis province, Eastern Anatolia Region, Turkiye

Thu, 2024-07-11 06:00

Environ Sci Pollut Res Int. 2024 Jul 11. doi: 10.1007/s11356-024-34194-w. Online ahead of print.

ABSTRACT

Groundwater resources in Bitlis province and its surroundings in Türkiye's Eastern Anatolia Region are pivotal for drinking water, yet they face a significant threat from fluoride contamination, compounded by the region's volcanic rock structure. To address this concern, fluoride levels were meticulously measured at 30 points in the June 2019 dry period and the September 2019 rainy period. Despite the accuracy of current measurement techniques, their time-consuming nature renders them economically unviable. Therefore, this study aims to assess the distribution of probable geogenic contamination of groundwater and develop a robust prediction model by analyzing the relationship between predictive variables and target contaminants. In this pursuit, various machine learning techniques and regression models, including Linear Regression, Random Forest, Decision Tree, K-Neighbors, and XGBoost, as well as deep learning models such as ANN, DNN, CNN, and LSTM, were employed. Elements such as aluminum (Al), boron (B), cadmium (Cd), cobalt (Co), chromium (Cr), copper (Cu), iron (Fe), manganese (Mn), nickel (Ni), phosphorus (P), lead (Pb), and zinc (Zn) were used as features to predict fluoride levels. The SelectKBest feature selection method was used to improve the accuracy of the prediction model; it identifies the most important features in the dataset for different values of k and increases model efficiency, allowing the models to produce more accurate predictions from the most informative variables. The findings highlight the superior performance of the XGBoost regressor and CNN in predicting groundwater quality, with XGBoost consistently outperforming the other models, exhibiting the lowest values for evaluation metrics such as mean squared error (MSE), mean absolute error (MAE), and root mean squared error (RMSE) across different k values.
For instance, when considering all features, XGBoost attained an MSE of 0.07, an MAE of 0.22, an RMSE of 0.27, a MAPE of 9.25%, and an NSE of 0.75. Conversely, the Decision Tree regressor consistently displayed inferior performance, with its maximum MSE reaching 0.11 (k = 5) and maximum RMSE of 0.33 (k = 5). Furthermore, feature selection analysis revealed the consistent significance of boron (B) and cadmium (Cd) across all datasets, underscoring their pivotal roles in groundwater contamination. Notably, in the machine learning framework evaluation, the XGBoost regressor excelled in modeling both the "all" and "rainy season" datasets, while the convolutional neural network (CNN) outperformed on the "dry season" dataset. This study emphasizes the potential of the XGBoost regressor and CNN for accurate groundwater quality prediction and recommends their utilization, while acknowledging the limitations of the Decision Tree regressor.
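The evaluation metrics quoted above (MSE, MAE, RMSE, MAPE, NSE) all have closed forms; a minimal stdlib Python sketch of them, not the study's code:

```python
import math

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(mse)
    # MAPE assumes no zero observations
    mape = 100.0 * sum(abs(e / t) for e, t in zip(errors, y_true)) / n
    mean_t = sum(y_true) / n
    # Nash-Sutcliffe efficiency: 1 minus SSE over the variance of observations
    nse = 1.0 - sum(e * e for e in errors) / sum((t - mean_t) ** 2 for t in y_true)
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape, "NSE": nse}
```

Lower MSE/MAE/RMSE/MAPE and an NSE closer to 1 indicate a better fit, which is the sense in which XGBoost "outperformed" above.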

PMID:38990257 | DOI:10.1007/s11356-024-34194-w

Categories: Literature Watch

Deep learning classification performance for diagnosing condylar osteoarthritis in patients with dentofacial deformities using panoramic temporomandibular joint projection images

Thu, 2024-07-11 06:00

Oral Radiol. 2024 Jul 11. doi: 10.1007/s11282-024-00768-0. Online ahead of print.

ABSTRACT

OBJECTIVE: The present study aimed to assess the consistencies and performances of deep learning (DL) models in the diagnosis of condylar osteoarthritis (OA) among patients with dentofacial deformities using panoramic temporomandibular joint (TMJ) projection images.

METHODS: A total of 68 TMJs with or without condylar OA in dentofacial deformity patients were tested to verify the consistencies and performances of DL models created using 252 TMJs with or without OA in TMJ disorder and dentofacial deformity patients; these models were used to diagnose OA on conventional panoramic (Con-Pa) images and open (Open-TMJ) and closed (Closed-TMJ) mouth TMJ projection images. The GoogLeNet and VGG-16 networks were used to create the DL models. For comparison, two dental residents with < 1 year of experience interpreting radiographs evaluated the same condyle data that had been used to test the DL models.

RESULTS: On Open-TMJ images, the DL models showed moderate to very good consistency, whereas the residents demonstrated fair consistency on all images. The areas under the curve (AUCs) of both DL models on Con-Pa images (0.84 for GoogLeNet and 0.75 for VGG-16) and Open-TMJ images (0.89 for both models) were significantly higher than the residents' AUCs (p < 0.01). The AUCs of the DL models on Open-TMJ images (0.89 for both models) were also higher than their AUCs on Closed-TMJ images (0.72 for both models).
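The AUCs reported above are rank statistics: the AUC equals the probability that a randomly chosen OA-positive case receives a higher model score than a randomly chosen negative one, with ties counting half. A stdlib Python sketch using hypothetical labels and scores:

```python
def auc_from_scores(labels, scores):
    # AUC as the Wilcoxon-Mann-Whitney statistic: the fraction of
    # positive/negative pairs ranked correctly (ties count half)
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```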

CONCLUSIONS: The DL models created in this study could help residents to interpret Con-Pa and Open-TMJ images in the diagnosis of condylar OA.

PMID:38990220 | DOI:10.1007/s11282-024-00768-0

Categories: Literature Watch

Approaches for the use of Artificial Intelligence in workplace health promotion and prevention: A systematic scoping review

Thu, 2024-07-11 06:00

JMIR AI. 2024 Jul 10. doi: 10.2196/53506. Online ahead of print.

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is an umbrella term for various algorithms and rapidly emerging technologies with huge potential for workplace health promotion and prevention (WHPP). WHPP interventions aim to improve people's health and well-being through behavioral and organizational measures or by minimizing the burden of workplace-related diseases and associated risk factors. While AI has been the focus of research in other health-related fields, such as public health or biomedicine, the transition of AI into WHPP research has yet to be systematically investigated.

OBJECTIVE: This systematic scoping review aims to provide a comprehensive overview of the current use of AI in WHPP; the results will then be used to point to future research directions. The following research questions were derived: (1) what are the characteristics of studies on AI algorithms and technologies in the context of WHPP, (2) which specific WHPP fields (prevention, behavioral, and organizational approaches) were addressed by the AI algorithms and technologies, and (3) what kinds of interventions led to which outcomes?

METHODS: A systematic scoping literature review (PRISMA-ScR) was conducted in the three academic databases PubMed, IEEE, and ACM in July 2023, searching for articles published between January 2000 and December 2023. Studies needed to be (1) peer-reviewed, (2) written in English, and (3) focused on an AI-based algorithm or technology that (4) was applied in the context of WHPP or (5) in an associated field. Information on study design, AI algorithms and technologies, WHPP fields, and the PICO framework was extracted blindly with Rayyan and summarized.

RESULTS: A total of ten studies were included. Risk prevention and modeling was the most frequently identified WHPP field (n=6), followed by behavioral health promotion (n=4) and organizational health promotion (n=1). Four studies focused on mental health. Most AI algorithms were machine learning-based, and three studies used combined deep learning algorithms. AI algorithms and technologies were primarily implemented in smartphone applications (eg, in the form of a chatbot) or used the smartphone as a data source (eg, GPS). Behavioral interventions ranged from 8 to 12 weeks in duration and were compared to control groups. Three studies evaluated the robustness and accuracy of an AI model or framework.

CONCLUSIONS: Although AI has attracted increasing attention in health-related research, this review reveals that AI in WHPP remains only marginally investigated. Our results indicate that AI is promising for individualization and risk prediction in WHPP, but current research does not cover the full scope of WHPP. Future research would therefore benefit from broader coverage of all WHPP fields, longitudinal data, and reporting guidelines.

CLINICALTRIAL: Registered on 5th July 2023 at Open Science Framework [1].

PMID:38989904 | DOI:10.2196/53506

Categories: Literature Watch

Deep demosaicking convolution neural network and quantum wavelet transform-based image denoising

Thu, 2024-07-11 06:00

Network. 2024 Jul 11:1-25. doi: 10.1080/0954898X.2024.2358950. Online ahead of print.

ABSTRACT

Demosaicking is a popular scientific area explored by a vast number of researchers. Current digital imaging technologies capture colour images with a single monochrome sensor coupled with a Colour Filter Array (CFA), so a demosaicking procedure is required to obtain a full-colour image. Image denoising and image demosaicking are two important image restoration techniques that have grown in popularity in recent years, and finding a suitable strategy for joint image restoration is critical for researchers. Hence, a deep learning (DL) based image denoising and demosaicking method is developed in this research. The Autoregressive Circle Wave Optimization (ACWO) based Demosaicking Convolutional Neural Network (DMCNN) is designed for image demosaicking, while the Quantum Wavelet Transform (QWT) is used for image denoising, analysing the abrupt changes in the noisy input image. The transformed image is subjected to a thresholding technique that determines an appropriate threshold range, after which soft thresholding is applied to the resulting wavelet coefficients. The original image is then reconstructed using the Inverse Quantum Wavelet Transform (IQWT). Finally, the denoised and demosaicked images are combined into a fused image using a weighted average. The proposed QWT+DMCNN-ACWO model achieved a peak signal-to-noise ratio (PSNR) of 49.549 dB, a second-derivative-like measure of enhancement (SDME) of 59.53 dB, a structural similarity index (SSIM) of 0.963, a figure of merit (FOM) of 0.890, and a computational time of 0.571.
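The soft-thresholding and weighted-average fusion steps described above are compact to express; a stdlib Python sketch (the wavelet transform is assumed to have already produced the coefficients, and the threshold and weight values are hypothetical):

```python
import math

def soft_threshold(coeffs, t):
    # shrink each wavelet coefficient toward zero by t; values below t vanish
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def fuse(denoised, demosaicked, w=0.5):
    # weighted average of the two restored images (flattened pixel lists)
    return [w * a + (1.0 - w) * b for a, b in zip(denoised, demosaicked)]
```

Soft thresholding suppresses small coefficients that are likely noise while shrinking, rather than clipping, the large ones, which is why it is preferred over hard thresholding for denoising.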

PMID:38989778 | DOI:10.1080/0954898X.2024.2358950

Categories: Literature Watch

A transformative framework reshaping sustainable drought risk management through advanced early warning systems

Thu, 2024-07-11 06:00

iScience. 2024 May 31;27(7):110066. doi: 10.1016/j.isci.2024.110066. eCollection 2024 Jul 19.

ABSTRACT

In light of the increasing vulnerability to drought occurrences and the heightened impact of drought-related disasters on numerous communities, it is imperative for drought-sensitive sectors to adopt proactive measures, implementing early warning systems to effectively mitigate potential risks. Guided by Toulmin's model of argumentation, this research proposes a framework of eight interconnected modules that introduce Fourth Industrial Revolution technologies to enhance drought early warning capabilities. The framework emphasizes the Internet of Things, drones, big data analytics, and deep learning for real-time monitoring and accurate drought forecasts. Another key component is the role of natural language processing in analyzing data from unstructured sources such as social media and reviews, which is essential for improving alerts, dissemination, and interoperability. While the framework optimizes resource use in agriculture, water, and the environment, overcoming impending limitations is crucial; hence, practical implementation and amendment of policies are necessary.

PMID:38989469 | PMC:PMC11233914 | DOI:10.1016/j.isci.2024.110066

Categories: Literature Watch

The Future of Orthodontics: Deep Learning Technologies

Thu, 2024-07-11 06:00

Cureus. 2024 Jun 10;16(6):e62045. doi: 10.7759/cureus.62045. eCollection 2024 Jun.

ABSTRACT

Deep learning has emerged as a revolutionary technical advancement in modern orthodontics, offering novel methods for diagnosis, treatment planning, and outcome prediction. Over the past 25 years, the field of dentistry has widely adopted information technology (IT), resulting in several benefits, including decreased expenses, increased efficiency, decreased need for human expertise, and reduced errors. The transition from preset rules to learning from real-world examples, particularly machine learning (ML) and artificial intelligence (AI), has greatly benefited the organization, analysis, and storage of medical data. Deep learning, a type of AI, enables robots to mimic human neural networks, allowing them to learn and make decisions independently without the need for explicit programming. Its ability to automate cephalometric analysis and enhance diagnosis through 3D imaging has revolutionized orthodontic operations. Deep learning models have the potential to significantly improve treatment outcomes and reduce human errors by accurately identifying anatomical characteristics on radiographs, thereby expediting analytical processes. Additionally, the use of 3D imaging technologies such as cone-beam computed tomography (CBCT) can facilitate precise treatment planning, allowing for comprehensive examinations of craniofacial architecture, tooth movements, and airway dimensions. In today's era of personalized medicine, deep learning's ability to customize treatments for individual patients has propelled the field of orthodontics forward tremendously. However, it is essential to address issues related to data privacy, model interpretability, and ethical considerations before orthodontic practices can use deep learning in an ethical and responsible manner. Modern orthodontics is evolving, thanks to the ability of deep learning to deliver more accurate, effective, and personalized orthodontic treatments, improving patient care as technology develops.

PMID:38989357 | PMC:PMC11234326 | DOI:10.7759/cureus.62045

Categories: Literature Watch

Association of retinal age gap with chronic kidney disease and subsequent cardiovascular disease sequelae: a cross-sectional and longitudinal study from the UK Biobank

Thu, 2024-07-11 06:00

Clin Kidney J. 2024 Jul 4;17(7):sfae088. doi: 10.1093/ckj/sfae088. eCollection 2024 Jul.

ABSTRACT

BACKGROUND: Chronic kidney disease (CKD) increases the risk of cardiovascular disease (CVD) and is more prevalent in older adults. Retinal age gap, a biomarker of aging based on fundus images, has been previously developed and validated. This study aimed to investigate the association of retinal age gap with CKD and subsequent CVD complications.

METHODS: A deep learning model was trained to predict retinal age using 19 200 fundus images of 11 052 participants without any medical history at baseline. The retinal age gap, defined as predicted retinal age minus chronological age, was then calculated for the remaining 35 906 participants. Logistic regression models and Cox proportional hazards regression models were used for the association analyses.

RESULTS: A total of 35 906 participants (56.75 ± 8.04 years, 55.68% female) were included in this study. In the cross-sectional analysis, each 1-year increase in retinal age gap was associated with a 2% increase in the risk of CKD prevalence [odds ratio 1.02, 95% confidence interval (CI) 1.01-1.04, P = .012]. A longitudinal analysis of 35 039 participants demonstrated that 2.87% of them developed CKD in follow-up, and each 1-year increase in retinal age gap was associated with a 3% increase in the risk of CKD incidence (hazard ratio 1.03, 95% CI 1.01-1.05, P = .004). In addition, a total of 111 CKD patients (15.81%) developed CVD in follow-up, and each 1-year increase in retinal age gap was associated with a 10% increase in the risk of incident CVD (hazard ratio 1.10, 95% CI 1.03-1.17, P = .005).
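The quantities in these results follow directly from their definitions: the retinal age gap is predicted minus chronological age, and a per-year hazard ratio scales multiplicatively with the gap under the proportional-hazards assumption. A stdlib Python sketch with illustrative numbers only:

```python
def retinal_age_gap(predicted_retinal_age, chronological_age):
    # positive gap: the retina "looks older" than the person's actual age
    return predicted_retinal_age - chronological_age

def hazard_ratio_for_gap(per_year_hr, gap_years):
    # under proportional hazards, a g-year gap multiplies risk by HR_1 ** g
    return per_year_hr ** gap_years
```

For example, with the study's per-year hazard ratio of 1.03 for incident CKD, a 5-year gap would correspond to roughly a 16% relative increase in hazard under this assumption.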

CONCLUSIONS: We found that retinal age gap was independently associated with the prevalence and incidence of CKD, and also associated with CVD complications in CKD patients. This supports the use of this novel biomarker in identifying individuals at high risk of CKD and CKD patients with increased risk of CVD.

PMID:38989278 | PMC:PMC11233993 | DOI:10.1093/ckj/sfae088

Categories: Literature Watch

Characterizing drought prediction with deep learning: A literature review

Thu, 2024-07-11 06:00

MethodsX. 2024 Jun 12;13:102800. doi: 10.1016/j.mex.2024.102800. eCollection 2024 Dec.

ABSTRACT

Drought is a complex phenomenon that impacts human activities and the environment, so predicting its behavior is crucial to mitigating such effects. Deep learning techniques are emerging as a powerful tool for this task. The main goal of this work is to review the state of the art in order to characterize the deep learning techniques used for drought prediction. The results suggest that the most widely used climate indices were the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI). Among multispectral indices, the Normalized Difference Vegetation Index (NDVI) is the indicator most utilized. The countries with the highest production of scientific knowledge in this area are located in Asia and Oceania, whereas the Americas and Africa are the regions with few publications. Concerning deep learning methods, the Long Short-Term Memory (LSTM) network is the algorithm most frequently implemented for this task, either in its canonical form or combined with other deep learning techniques (hybrid methods). In conclusion, this review reveals a need for more scientific knowledge about drought prediction using multispectral indices and deep learning techniques in the Americas and Africa; this is therefore an opportunity to characterize the phenomenon in developing countries.
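The indices named above have simple closed forms: NDVI is a band ratio, and the SPI is, at its simplest, a standardized precipitation anomaly (the operational SPI first fits a gamma distribution to the precipitation record). A stdlib Python sketch with hypothetical reflectance and precipitation values:

```python
import statistics

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from near-infrared and red bands
    return (nir - red) / (nir + red)

def spi_simplified(precip, history):
    # standardized anomaly against the historical record; the operational SPI
    # instead transforms through a fitted gamma distribution
    return (precip - statistics.mean(history)) / statistics.stdev(history)
```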

PMID:38989261 | PMC:PMC11234152 | DOI:10.1016/j.mex.2024.102800

Categories: Literature Watch

Knowledge, attitude, and practice of artificial intelligence among medical students in Sudan: a cross-sectional study

Thu, 2024-07-11 06:00

Ann Med Surg (Lond). 2024 Apr 24;86(7):3917-3923. doi: 10.1097/MS9.0000000000002070. eCollection 2024 Jul.

ABSTRACT

INTRODUCTION: In this cross-sectional study, the authors explored the knowledge, attitudes, and practices related to artificial intelligence (AI) among medical students in Sudan. With AI increasingly impacting healthcare, understanding its integration into medical education is crucial. This study aimed to assess the current state of AI awareness, perceptions, and practical experiences among medical students in Sudan, to evaluate the extent of their AI familiarity by examining their attitudes toward its application in medicine, and to identify the factors influencing knowledge levels and the practical implementation of AI in the medical field.

METHOD: A web-based survey was distributed to medical students in Sudan via social media platforms and e-mail during October 2023. The survey included questions on demographic information, knowledge of AI, attitudes toward its applications, and practical experiences. Descriptive statistics, χ2 tests, logistic regression, and correlation analyses were performed using SPSS version 26.0.

RESULTS: Out of the 762 participants, the majority exhibited a basic understanding of AI, but detailed knowledge of its applications was limited. Positive attitudes toward the importance of AI in diagnosis, radiology, and pathology were prevalent. However, practical application of these methods was infrequent, with only a minority of the participants having hands-on experience. Factors influencing knowledge included the lack of a formal curriculum and gender disparities.

CONCLUSION: This study highlights the need for comprehensive AI education in medical training programs in Sudan. While participants displayed positive attitudes, there was a notable gap in practical experience. Addressing these gaps through targeted educational interventions is crucial for preparing future healthcare professionals to navigate the evolving landscape of AI in medicine.

RECOMMENDATIONS: Policy efforts should focus on integrating AI education into the medical curriculum to ensure readiness for the technological advancements shaping the future of healthcare.

PMID:38989161 | PMC:PMC11230734 | DOI:10.1097/MS9.0000000000002070

Categories: Literature Watch

A clinical-radiomics nomogram based on automated segmentation of chest CT to discriminate PRISm and COPD patients

Thu, 2024-07-11 06:00

Eur J Radiol Open. 2024 Jun 14;13:100580. doi: 10.1016/j.ejro.2024.100580. eCollection 2024 Dec.

ABSTRACT

PURPOSE: It is vital to develop noninvasive approaches with high accuracy to discriminate the preserved ratio impaired spirometry (PRISm) group from the chronic obstructive pulmonary disease (COPD) group. Radiomics has emerged as a quantitative image analysis technique. This study aims to develop and validate a new radiomics-based noninvasive approach to discriminate these two groups.

METHODS: In total, 1066 subjects from 4 centers were included in this retrospective study and classified into training, internal validation, or external validation sets. The chest computed tomography (CT) images were segmented by a fully automated deep learning segmentation algorithm (Unet231) for radiomics feature extraction. We established the radiomics signature (Rad-score) using the least absolute shrinkage and selection operator (LASSO) algorithm, then conducted ten-fold cross-validation on the training set. Finally, we constructed a radiomics nomogram by incorporating independent risk factors using a multivariate logistic regression model. Model performance was evaluated by receiver operating characteristic (ROC) curve, calibration curve, and decision curve analyses (DCA).
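A LASSO-derived Rad-score is a sparse linear combination of radiomic features, and a radiomics nomogram combines it with clinical covariates through logistic regression. A hedged stdlib Python sketch with made-up coefficients, not the study's fitted model:

```python
import math

def rad_score(features, coefficients, intercept=0.0):
    # LASSO zeroes most coefficients, so only the selected features contribute
    return intercept + sum(c * f for c, f in zip(coefficients, features))

def nomogram_probability(rad, clinical, weights, intercept):
    # multivariate logistic model combining Rad-score and clinical risk factors
    z = intercept + weights[0] * rad + sum(
        w * x for w, x in zip(weights[1:], clinical))
    return 1.0 / (1.0 + math.exp(-z))
```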

RESULTS: The Rad-score, comprising 15 radiomic features from the whole-lung region and thus suitable for diffuse lung diseases, was demonstrated to be effective for discriminating between PRISm and COPD. Its diagnostic accuracy improved when integrated with a clinical model: the areas under the ROC curve (AUC) were 0.82 (95% CI 0.79-0.86), 0.77 (95% CI 0.72-0.83), and 0.841 (95% CI 0.78-0.91) for the training, internal validation, and external validation sets, respectively. The radiomics nomogram showed good fit and superior clinical utility.

CONCLUSIONS: The present work constructed the new radiomics-based nomogram and verified its reliability for discriminating between PRISm and COPD.

PMID:38989052 | PMC:PMC11233899 | DOI:10.1016/j.ejro.2024.100580

Categories: Literature Watch

Feasibility of remote monitoring for fatal coronary heart disease using Apple Watch ECGs

Thu, 2024-07-11 06:00

Cardiovasc Digit Health J. 2024 Apr 5;5(3):115-121. doi: 10.1016/j.cvdhj.2024.03.007. eCollection 2024 Jun.

ABSTRACT

BACKGROUND: Fatal coronary heart disease (FCHD), which affects >4 million people per year, is often described as sudden cardiac death in which coronary artery disease is the only identified condition. Electrocardiographic artificial intelligence (ECG-AI) models for FCHD risk prediction using ECG data from wearable devices could enable wider screening/monitoring efforts.

OBJECTIVES: To develop a single-lead ECG-based deep learning model for FCHD risk prediction and assess concordance between clinical and Apple Watch ECGs.

METHODS: An FCHD single-lead ("lead I" from 12-lead ECGs) ECG-AI model was developed using 167,662 ECGs (50,132 patients) from the University of Tennessee Health Sciences Center. Eighty percent of the data (5-fold cross-validation) was used for training and 20% as a holdout. Cox proportional hazards (CPH) models incorporating ECG-AI predictions with age, sex, and race were also developed. The models were tested on paired clinical single-lead and Apple Watch ECGs from 243 St. Jude Lifetime Cohort Study participants. The correlation and concordance of the predictions were assessed using Pearson correlation (R), Spearman correlation (ρ), and Cohen's kappa.

RESULTS: The ECG-AI and CPH models resulted in AUC = 0.76 and 0.79, respectively, on the 20% holdout, and AUC = 0.85 and 0.87 on the Atrium Health Wake Forest Baptist external validation data. There was a moderate-to-strong positive correlation between predictions (R = 0.74, ρ = 0.67, and κ = 0.58) when tested on the 243 paired ECGs. The clinical (lead I) and Apple Watch predictions led to the same low/high-risk FCHD classification for 99% of the participants. The CPH prediction correlations were R = 0.81, ρ = 0.76, and κ = 0.78.
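The concordance statistics used here are standard. A stdlib Python sketch of Pearson's R and Cohen's kappa (Spearman's ρ is Pearson's R computed on ranks); the paired values and labels below are hypothetical:

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cohens_kappa(a, b):
    # agreement between two raters beyond chance, e.g. low/high-risk labels
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)
```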

CONCLUSION: Risk of FCHD can be predicted from single-lead ECGs obtained from wearable devices, and these predictions are statistically concordant with those from lead I of a 12-lead ECG.

PMID:38989042 | PMC:PMC11232422 | DOI:10.1016/j.cvdhj.2024.03.007

Categories: Literature Watch

Projected pooling loss for red nucleus segmentation with soft topology constraints

Thu, 2024-07-11 06:00

J Med Imaging (Bellingham). 2024 Jul;11(4):044002. doi: 10.1117/1.JMI.11.4.044002. Epub 2024 Jul 9.

ABSTRACT

PURPOSE: Deep learning is the standard for medical image segmentation, but it may encounter difficulties when the training set is small and may generate anatomically aberrant segmentations. Anatomical knowledge can serve as a useful constraint in deep learning segmentation methods. We propose a loss function based on projected pooling to introduce soft topological constraints. Our main application is the segmentation of the red nucleus from quantitative susceptibility mapping (QSM), which is of interest in parkinsonian syndromes.

APPROACH: This new loss function introduces soft constraints on the topology by magnifying small parts of the structure to be segmented, preventing them from being discarded during segmentation. To that end, we project the structure onto the three planes and then apply a series of MaxPooling operations with increasing kernel sizes. These operations are performed on both the ground truth and the prediction, and their difference is computed to obtain the loss function. As a result, the loss can reduce topological errors as well as defects in the structure boundary. The approach is easy to implement and computationally efficient.
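A framework-free sketch of the idea: max-project the 3D mask onto the three planes, max-pool each projection with a series of growing kernels, and accumulate the absolute differences between prediction and ground truth. This is illustrative only (shapes and kernel sizes are hypothetical); a trainable implementation would use a deep learning framework's differentiable MaxPooling:

```python
def project_max(volume, axis):
    # max-intensity projection of a 3D volume (nested lists) along one axis
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    if axis == 0:
        return [[max(volume[d][h][w] for d in range(D)) for w in range(W)] for h in range(H)]
    if axis == 1:
        return [[max(volume[d][h][w] for h in range(H)) for w in range(W)] for d in range(D)]
    return [[max(volume[d][h][w] for w in range(W)) for h in range(H)] for d in range(D)]

def max_pool2d(img, k):
    # non-overlapping k x k max pooling with edge clamping
    H, W = len(img), len(img[0])
    return [[max(img[i2][j2] for i2 in range(i, min(i + k, H))
                             for j2 in range(j, min(j + k, W)))
             for j in range(0, W, k)] for i in range(0, H, k)]

def projected_pooling_loss(pred, gt, kernel_sizes=(1, 2, 4)):
    # compare pooled projections of prediction and ground truth on all planes;
    # growing kernels magnify small structure parts so they are not discarded
    loss = 0.0
    for axis in range(3):
        p, g = project_max(pred, axis), project_max(gt, axis)
        for k in kernel_sizes:
            pp, gg = max_pool2d(p, k), max_pool2d(g, k)
            loss += sum(abs(a - b) for ra, rb in zip(pp, gg)
                                   for a, b in zip(ra, rb))
    return loss
```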

RESULTS: When applied to the segmentation of the red nucleus from QSM data, the approach led to very high accuracy (Dice 89.9%) and no topological errors. Moreover, the proposed loss function improved the Dice accuracy over the baseline when the training set was small. We also studied three tasks from the Medical Segmentation Decathlon (MSD) challenge (heart, spleen, and hippocampus); for these tasks, the Dice accuracies were similar for both approaches, but the topological errors were reduced.

CONCLUSIONS: We propose an effective method to automatically segment the red nucleus which is based on a new loss for introducing topology constraints in deep learning segmentation.

PMID:38988992 | PMC:PMC11232703 | DOI:10.1117/1.JMI.11.4.044002

Categories: Literature Watch

Pulmonary nodule detection in low dose computed tomography using a medical-to-medical transfer learning approach

Thu, 2024-07-11 06:00

J Med Imaging (Bellingham). 2024 Jul;11(4):044502. doi: 10.1117/1.JMI.11.4.044502. Epub 2024 Jul 9.

ABSTRACT

PURPOSE: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT will greatly improve the existing clinical workflow. Most existing methods for lung nodule detection are designed for high-dose CTs (HDCTs), and those methods cannot be directly applied to LDCTs due to domain shift and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs.

APPROACH: In this work, we developed an algorithm based on the You Only Look Once (YOLO) object detection model to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as initial weights when retraining the model on LDCTs in a medical-to-medical transfer learning approach. The dataset for this study came from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over 3 consecutive years (T1, T2, and T3). HDCTs from about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated using a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the 3 years. For comparative analysis, the detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and a chi-squared test with an alpha value of 0.05 were used for statistical significance testing.

RESULTS: The results compare the model developed with HDCT pre-trained weights against the model using COCO pre-trained weights. The former versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis performed in the comparative study resulted in a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity.
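The slice-level metrics reported above follow directly from confusion-matrix counts. The sketch below is a generic illustration of those definitions; the counts in the usage example are hypothetical and are not the study's actual confusion matrix.

```python
def detection_metrics(tp, fp, tn, fn):
    # Slice-level counts: tp = slices with a correctly detected nodule,
    # fp = nodule-free slices wrongly flagged, tn = nodule-free slices
    # correctly passed, fn = slices whose nodule was missed.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also reported as sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, specificity, recall, f1
```

For example, hypothetical counts of tp=8, fp=2, tn=8, fn=2 give precision, specificity, recall, and F1 all equal to 0.8.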

CONCLUSIONS: In this study, a semi-automated method was developed to detect lung nodules in LDCTs using HDCT pre-trained weights as the initial weights and retraining the model. Further, the results were compared by replacing HDCT pre-trained weights in the above approach with COCO pre-trained weights. The proposed method may identify early lung nodules during the screening program, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, start treatment options in the affected patients, and lower the mortality rate.

PMID:38988991 | PMC:PMC11232701 | DOI:10.1117/1.JMI.11.4.044502

Categories: Literature Watch

Lung vessel connectivity map as anatomical prior knowledge for deep learning-based lung lobe segmentation

Thu, 2024-07-11 06:00

J Med Imaging (Bellingham). 2024 Jul;11(4):044001. doi: 10.1117/1.JMI.11.4.044001. Epub 2024 Jul 9.

ABSTRACT

PURPOSE: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans.

APPROACH: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net.

RESULTS: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases.

CONCLUSIONS: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.

PMID:38988990 | PMC:PMC11231955 | DOI:10.1117/1.JMI.11.4.044001

Categories: Literature Watch

Author Correction: A fully automated classification of third molar development stages using deep learning

Wed, 2024-07-10 06:00

Sci Rep. 2024 Jul 10;14(1):15932. doi: 10.1038/s41598-024-66731-5.

NO ABSTRACT

PMID:38987634 | DOI:10.1038/s41598-024-66731-5

Categories: Literature Watch

Nano fuzzy alarming system for blood transfusion requirement detection in cancer using deep learning

Wed, 2024-07-10 06:00

Sci Rep. 2024 Jul 10;14(1):15958. doi: 10.1038/s41598-024-66607-8.

ABSTRACT

Periodic blood transfusion is a need in cancer patients, in whom the disease process as well as chemotherapy can disrupt the natural production of blood cells. However, there are concerns about blood transfusion side effects, cost, and the availability of donated blood. Predicting the timely requirement for blood transfusion while accounting for patient variability is therefore needed, and here we address this issue for the first time in blood cancer using in vivo data. First, a dataset of 98 samples from blood cancer patients, comprising 61 demographic, clinical, and laboratory features, is collected. After multivariate analysis and approval by an expert, the effective parameters are derived. Then, using a deep recurrent neural network, a system is presented to predict the need for packed red blood cell transfusion. We use a Long Short-Term Memory (LSTM) neural network for modeling and 5-fold cross-validation for model validation, and we compare the results with network-based and non-network machine learning algorithms, including bidirectional LSTM, AdaBoost, bagging with decision trees, bagging with KNeighbors, and a Multi-Layer Perceptron (MLP). Results show that the LSTM outperforms the other methods. Then, using a swarm of fuzzy bioinspired nanomachines and the most effective parameters (Hgb, PaO2, and pH), we propose a feasibility study on a nano fuzzy alarming system for blood transfusion (NFABT) requirements. Alarming decisions are delivered through an Internet of Things (IoT) gateway to the physician for performing medical actions. NFABT is also considered a real-time, non-invasive, AI-based hemoglobin monitoring and alarming method. Results show the merits of the proposed method.
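The recurrent core of such a model is the LSTM cell, which steps through a patient's time series and keeps a hidden state for the downstream classifier. The NumPy sketch below shows one forward pass only; the study's actual architecture, gate ordering, and hyperparameters are not specified here, so the stacked-gate layout below is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM step. W maps the input, U maps the previous hidden state,
    # b is the bias. Gates are stacked [input, forget, cell, output]
    # along the first axis of the (4 * n_hidden,) pre-activation vector.
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c_new = f * c + i * g            # update the cell memory
    h_new = o * np.tanh(c_new)       # expose a gated view of the memory
    return h_new, c_new

def lstm_forward(seq, W, U, b, n_hidden):
    # Run the cell over a sequence of feature vectors; the final hidden
    # state would feed a classifier head predicting transfusion need.
    h, c = np.zeros(n_hidden), np.zeros(n_hidden)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```

Because the output gate and tanh both squash their inputs, every component of the final hidden state lies strictly inside (-1, 1).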

PMID:38987580 | DOI:10.1038/s41598-024-66607-8

Categories: Literature Watch

Classification of osteoarthritic and healthy cartilage using deep learning with Raman spectra

Wed, 2024-07-10 06:00

Sci Rep. 2024 Jul 10;14(1):15902. doi: 10.1038/s41598-024-66857-6.

ABSTRACT

Raman spectroscopy is a rapid method for analysing the molecular composition of biological material. However, noise contamination in the spectral data necessitates careful pre-processing prior to analysis. Here we propose an end-to-end Convolutional Neural Network that automatically learns an optimal combination of pre-processing strategies for the classification of Raman spectra of superficial and deep layers of cartilage harvested from 45 osteoarthritis patients and 19 osteoporosis patients (healthy controls). Using 6-fold cross-validation, the Multi-Convolutional Neural Network achieves comparable or improved classification accuracy against the best-performing Convolutional Neural Network applied to either the raw or pre-processed spectra. We utilised Integrated Gradients to identify the contributing features (Raman signatures) in the network's decision process, showing that they are biologically relevant. Using these features, we compared Artificial Neural Networks, Decision Trees, and Support Vector Machines for the feature selection task. Results show that training on fewer than 3 and 300 features, respectively, for the disease classification and layer assignment tasks provides performance comparable to the best-performing CNN-based network applied to the full dataset. Our approach, incorporating multi-channel input and Integrated Gradients, can potentially facilitate the clinical translation of Raman spectroscopy-based diagnosis without the need for laborious manual pre-processing and feature selection.
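Integrated Gradients, used above to attribute the network's decisions to Raman features, averages the model's gradient along a straight path from a baseline to the input and scales by the input difference. The sketch below is a didactic version for a generic scalar model using numerical gradients, not the paper's implementation, which would use the CNN's analytic gradients.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50, eps=1e-5):
    # Approximate IG: average the gradient of the scalar function f at
    # points along the straight path from baseline to x, then scale the
    # averaged gradient by (x - baseline).
    grads = np.zeros_like(x, dtype=float)
    for a in np.linspace(0.0, 1.0, steps):
        point = baseline + a * (x - baseline)
        # Central-difference gradient, one coordinate at a time.
        g = np.zeros_like(x, dtype=float)
        for j in range(x.size):
            d = np.zeros_like(x, dtype=float)
            d[j] = eps
            g[j] = (f(point + d) - f(point - d)) / (2 * eps)
        grads += g
    grads /= steps
    return (x - baseline) * grads
```

For a linear model the gradient is constant along the path, so the attributions reduce exactly to weight times input difference, a convenient sanity check.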

PMID:38987563 | DOI:10.1038/s41598-024-66857-6

Categories: Literature Watch

Exploiting the Role of Features for Antigens-Antibodies Interaction Site Prediction

Wed, 2024-07-10 06:00

Methods Mol Biol. 2024;2780:303-325. doi: 10.1007/978-1-0716-3985-6_16.

ABSTRACT

Antibodies are a class of proteins that recognize and neutralize pathogens by binding to their antigens. They are the most significant category of biopharmaceuticals for both diagnostic and therapeutic applications. Understanding how antibodies interact with their antigens plays a fundamental role in drug and vaccine design and helps to comprehend the complex antigen-binding mechanisms. Computational methods for predicting antibody-antigen interaction sites are of great value due to the overall cost of experimental methods. Machine learning and deep learning techniques have obtained promising results. In this work, we predict antibody interaction interface sites by applying HSS-PPI, a hybrid method defined to predict the interface sites of general proteins. The approach abstracts proteins in terms of a hierarchical representation and uses a graph convolutional network to classify the amino acids as interface or non-interface. Moreover, we also equip the amino acids with different sets of physicochemical features, together with structural ones, to describe the residues. Analyzing the results, we observe that the structural features play a fundamental role in the amino acid descriptions. We compare the obtained performances, evaluated using standard metrics, with those obtained with an SVM with 3D Zernike descriptors, Parapred, Paratome, and Antibody i-Patch.
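A single graph-convolution layer of the kind such residue classifiers build on can be sketched in NumPy. This is a generic Kipf-and-Welling-style layer with self-loops and symmetric normalization; the actual HSS-PPI architecture, hierarchical representation, and feature sets are not reproduced here.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution layer: add self-loops to the residue contact
    # graph A, symmetrically normalize, aggregate neighbor features H,
    # apply the learned projection W, then a ReLU nonlinearity.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Here A would be a residue-residue contact adjacency matrix and each row of H a residue's physicochemical and structural feature vector; stacking such layers and ending with a two-class head yields the interface versus non-interface prediction.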

PMID:38987475 | DOI:10.1007/978-1-0716-3985-6_16

Categories: Literature Watch
