Deep learning

A framework for measuring the training efficiency of a neural architecture

Thu, 2024-10-31 06:00

Artif Intell Rev. 2024;57(12):349. doi: 10.1007/s10462-024-10943-8. Epub 2024 Oct 28.

ABSTRACT

Measuring efficiency in neural network system development is an open research problem. This paper presents an experimental framework to measure the training efficiency of a neural architecture. To demonstrate our approach, we analyze the training efficiency of Convolutional Neural Networks (CNNs) and their Bayesian equivalents (BCNNs) on the MNIST and CIFAR-10 tasks. Our results show that training efficiency decays as training progresses and varies across different stopping criteria for a given neural model and learning task. We also find a non-linear relationship between training stopping criteria, model size, and training efficiency. Furthermore, we illustrate the potential confounding effects of overtraining on measuring the training efficiency of a neural architecture. Regarding relative training efficiency across different architectures, our results indicate that CNNs are more efficient than BCNNs on both datasets. More generally, as a learning task becomes more complex, the relative difference in training efficiency between different architectures becomes more pronounced.

PMID:39478973 | PMC:PMC11519118 | DOI:10.1007/s10462-024-10943-8

Categories: Literature Watch

Deep learning models for hepatitis E incidence prediction leveraging Baidu index

Thu, 2024-10-31 06:00

BMC Public Health. 2024 Oct 31;24(1):3014. doi: 10.1186/s12889-024-20532-7.

ABSTRACT

BACKGROUND: Infectious diseases are major medical and social challenges of the 21st century. Accurately predicting incidence is of great significance for public health organizations seeking to prevent the spread of disease. Internet search engine data, such as the Baidu search index, may be useful for analyzing epidemics and improving prediction.

METHODS: We collected data on hepatitis E incidence and cases in Shandong province from January 2009 to December 2022; the Baidu index was collected for the same period. Pearson correlation analysis was used to validate the relationship between the Baidu index and hepatitis E incidence. We utilized various LSTM architectures, including LSTM, stacked LSTM, attention-based LSTM, and attention-based stacked LSTM, to forecast hepatitis E incidence both with and without incorporating the Baidu index. We also introduced Kolmogorov-Arnold networks (KAN) into the LSTM models to improve their nonlinear learning capability. Model performance was evaluated using three standard quality metrics: root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE).
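
As a rough illustration (not the authors' code), the sketch below shows a stacked LSTM forecaster of the kind described, taking a window of past monthly incidence values plus the Baidu index as a second input feature, together with the MAPE metric used for evaluation. The window length, hidden sizes, and variable names are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

class StackedLSTM(nn.Module):
    """Two stacked LSTM layers followed by a linear head (sizes are assumptions)."""
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, n_features)
        out, _ = self.lstm(x)            # out: (batch, window, hidden)
        return self.head(out[:, -1])     # predict the next month's incidence

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Toy usage: batch of 8 windows, 12 months each, 2 features (incidence, Baidu index)
model = StackedLSTM()
x = torch.randn(8, 12, 2)
print(model(x).shape)  # torch.Size([8, 1])
```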

RESULTS: Adjusting for the Baidu index altered the correlation between hepatitis E incidence and the Baidu index from -0.1654 to 0.1733. Without the Baidu index, LSTM and attention-based stacked LSTM achieved MAPEs of 17.04±0.13% and 17.19±0.57%, respectively. With the Baidu index, the same methods achieved MAPEs of 15.36±0.16% and 15.15±0.07%, an improvement in prediction accuracy of about 2%. The methods with KAN improved performance by a further 0.3%. More detailed results are presented in the Results section of the paper.

CONCLUSIONS: Our experiments reveal a weak correlation and similar trends between the Baidu index and hepatitis E incidence. The Baidu index proves valuable for predicting hepatitis E incidence. Furthermore, stacked layers and KAN can also improve the representational ability of LSTM models.

PMID:39478514 | DOI:10.1186/s12889-024-20532-7

Categories: Literature Watch

Does the FARNet neural network algorithm accurately identify Posteroanterior cephalometric landmarks?

Thu, 2024-10-31 06:00

BMC Med Imaging. 2024 Oct 30;24(1):294. doi: 10.1186/s12880-024-01478-z.

ABSTRACT

BACKGROUND: We explored whether the feature aggregation and refinement network (FARNet) algorithm accurately identified posteroanterior (PA) cephalometric landmarks.

METHODS: We identified 47 landmarks on 1,431 PA cephalograms, of which 1,177 were used for training, 117 for validation, and 137 for testing. A FARNet-based artificial intelligence (AI) algorithm automatically detected the landmarks. Model effectiveness was assessed using the mean radial error (MRE) and the successful detection rates (SDRs) within 2, 2.5, 3, and 4 mm. The Mann-Whitney U test was performed on the Euclidean differences between repeated manual identifications and AI trials. The direction of the differences was also analyzed, i.e., whether they moved in the same or opposite direction relative to ground truth on the x- and y-axes.
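
For clarity, here is a minimal sketch (not from the paper) of the two landmark metrics defined above, mean radial error (MRE) and successful detection rate (SDR) at a set of distance thresholds; coordinates are assumed to already be expressed in millimetres.

```python
import numpy as np

def mre_and_sdr(pred, gt, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """pred, gt: arrays of shape (n_images, n_landmarks, 2), in mm."""
    radial_err = np.linalg.norm(pred - gt, axis=-1)          # per-landmark radial error
    mre = float(radial_err.mean())
    sdr = {t: float((radial_err <= t).mean() * 100) for t in thresholds}
    return mre, sdr

# Toy usage with random coordinates for 10 images and 47 landmarks
pred = np.random.rand(10, 47, 2) * 5
gt = np.random.rand(10, 47, 2) * 5
print(mre_and_sdr(pred, gt))
```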

RESULTS: The AI system (web-based CranioCatch annotation software, Eskişehir, Turkey) identified 47 anatomical landmarks in PA cephalograms. The right gonion SDRs were the highest: 96.4%, 97.8%, 100%, and 100% within 2, 2.5, 3, and 4 mm, respectively, with an MRE of 0.94 ± 0.53 mm. The right condylion SDRs were the lowest: 32.8%, 45.3%, 54.0%, and 67.9% within the same thresholds, with an MRE of 3.31 ± 2.25 mm. The AI model's reliability and accuracy were similar to a human expert's. The AI was better at four skeletal points than the expert, whereas the expert was better at one skeletal and seven dental points (P < 0.05). Most of the points exhibited significant deviations along the y-axis. Compared with ground truth, most points in the AI and second manual trial moved in opposite directions on the x-axis and the same direction on the y-axis.

CONCLUSIONS: The FARNet algorithm streamlined orthodontic diagnosis.

PMID:39478475 | DOI:10.1186/s12880-024-01478-z

Categories: Literature Watch

GASIDN: identification of sub-Golgi proteins with multi-scale feature fusion

Thu, 2024-10-31 06:00

BMC Genomics. 2024 Oct 30;25(1):1019. doi: 10.1186/s12864-024-10954-3.

ABSTRACT

The Golgi apparatus is a crucial component of the endomembrane system in eukaryotic cells, playing a central role in protein biosynthesis. Dysfunction of the Golgi apparatus has been linked to neurodegenerative diseases. Accurate identification of sub-Golgi protein types is therefore essential for developing effective treatments for such diseases. Because experimental methods for identifying sub-Golgi protein types are expensive and time-consuming, various computational methods have been developed as identification tools. However, the majority of these methods rely solely on neighboring features in the protein sequence and neglect the crucial spatial structure information of the protein. To provide an alternative method for accurately identifying sub-Golgi proteins, we have developed a model called GASIDN. GASIDN extracts multi-dimensional features by applying a 1D convolution module to protein sequences and a graph learning module to contact maps constructed from AlphaFold2 structures. The model uses the deep representation learning model SeqVec to initialize protein sequences. GASIDN achieved accuracy values of 98.4% and 96.4% in independent testing and ten-fold cross-validation, respectively, outperforming the majority of previous predictors. To the best of our knowledge, this is the first method that utilizes multi-scale feature fusion to identify and locate sub-Golgi proteins. To assess the generalizability and scalability of our model, we conducted experiments applying it to the identification of proteins from other organelles, including plant vacuoles and peroxisomes. The results of these experiments were promising, indicating the effectiveness and versatility of our model. The source code and datasets can be accessed at https://github.com/SJNNNN/GASIDN .
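
A minimal sketch of the multi-scale idea described above (not the authors' implementation): a 1D convolution over per-residue sequence embeddings (e.g., SeqVec vectors, assumed here to be 1024-dimensional) fused with a simple graph convolution over a residue contact map, such as one derived from an AlphaFold2 structure. Layer sizes and the number of sub-Golgi classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeqGraphFusion(nn.Module):
    def __init__(self, embed_dim=1024, hidden=128, n_classes=4):
        super().__init__()
        self.conv1d = nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1)
        self.graph_lin = nn.Linear(embed_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, emb, contact):
        # emb: (batch, seq_len, embed_dim); contact: (batch, seq_len, seq_len), 0/1
        seq_feat = self.conv1d(emb.transpose(1, 2)).mean(dim=2)        # sequence branch
        deg = contact.sum(dim=-1, keepdim=True).clamp(min=1.0)
        graph_feat = self.graph_lin(contact @ emb / deg).mean(dim=1)   # contact-map branch
        return self.classifier(torch.cat([seq_feat, graph_feat], dim=1))

# Toy usage: 2 proteins of length 200
model = SeqGraphFusion()
emb = torch.randn(2, 200, 1024)
contact = (torch.rand(2, 200, 200) > 0.95).float()
print(model(emb, contact).shape)  # torch.Size([2, 4])
```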

PMID:39478465 | DOI:10.1186/s12864-024-10954-3

Categories: Literature Watch

BASE: a web service for providing compound-protein binding affinity prediction datasets with reduced similarity bias

Thu, 2024-10-31 06:00

BMC Bioinformatics. 2024 Oct 30;25(1):340. doi: 10.1186/s12859-024-05968-3.

ABSTRACT

BACKGROUND: Deep learning-based drug-target affinity (DTA) prediction methods have shown impressive performance, despite a high number of training parameters relative to the available data. Previous studies have highlighted the presence of dataset bias by suggesting that models trained solely on protein or ligand structures may perform similarly to those trained on complex structures. However, these studies did not propose solutions and focused solely on analyzing complex structure-based models. Even when ligands are excluded, protein-only models trained on complex structures still incorporate some ligand information at the binding sites. Therefore, it is unclear whether binding affinity can be accurately predicted using only compound or protein features due to potential dataset bias. In this study, we expanded our analysis to comprehensive databases and investigated dataset bias through compound and protein feature-based methods using multilayer perceptron models. We assessed the impact of this bias on current prediction models and proposed the binding affinity similarity explorer (BASE) web service, which provides bias-reduced datasets.

RESULTS: By analyzing eight binding affinity databases using multilayer perceptron models, we confirmed a bias where the compound-protein binding affinity can be accurately predicted using compound features alone. This bias arises because most compounds show consistent binding affinities due to high sequence or functional similarity among their target proteins. Our Uniform Manifold Approximation and Projection analysis based on compound fingerprints further revealed that low and high variation compounds do not exhibit significant structural differences. This suggests that the primary factor driving the consistent binding affinities is protein similarity rather than compound structure. We addressed this bias by creating datasets with progressively reduced protein similarity between the training and test sets, observing significant changes in model performance. We developed the BASE web service to allow researchers to download and utilize these datasets. Feature importance analysis revealed that previous models heavily relied on protein features. However, using bias-reduced datasets increased the importance of compound and interaction features, enabling a more balanced extraction of key features.
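
A minimal sketch of the kind of bias probe described above (not the BASE implementation): fit a multilayer perceptron on compound features alone, here assumed to be precomputed 2048-bit fingerprint vectors, and check how well it predicts binding affinity; strong performance without any protein input is the signature of the dataset bias discussed. The arrays below are random placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# X: (n_pairs, 2048) compound fingerprint bits; y: binding affinities (e.g., pKd)
X = np.random.randint(0, 2, size=(1000, 2048)).astype(float)
y = np.random.rand(1000) * 10

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=300, random_state=0)
probe.fit(X_tr, y_tr)
print("compound-only R^2:", r2_score(y_te, probe.predict(X_te)))
```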

CONCLUSIONS: We propose the BASE web service, providing both the affinity prediction results of existing models and bias-reduced datasets. These resources contribute to the development of generalized and robust predictive models, enhancing the accuracy and reliability of DTA predictions in the drug discovery process. BASE is freely available online at https://synbi2024.kaist.ac.kr/base .

PMID:39478454 | DOI:10.1186/s12859-024-05968-3

Categories: Literature Watch

Ki-67 evaluation using deep-learning model-assisted digital image analysis in breast cancer

Thu, 2024-10-31 06:00

Histopathology. 2024 Oct 31. doi: 10.1111/his.15356. Online ahead of print.

ABSTRACT

AIMS: To test the efficacy of artificial intelligence (AI)-assisted Ki-67 digital image analysis in invasive breast carcinoma (IBC) with quantitative assessment of AI model performance.

METHODS AND RESULTS: This study used Ki-67 slide images of IBC core needle biopsies from 494 cases. The methods were divided into two steps: (i) construction of deep-learning (DL) models and (ii) DL implementation for Ki-67 analysis. First, a DL tissue classifier model (DL-TC) and a DL nuclear detection model (DL-ND) were constructed using the HALO AI DenseNet V2 algorithm with 31,924 annotations in 300 Ki-67 digital slide images. Pixel-level agreement between the classes predicted by DL-TC in the test set and the ground-truth annotations was evaluated. Second, DL-TC- and DL-ND-assisted digital image analysis (DL-DIA) was performed in the other 194 luminal-type cases, and correlations with manual counting and clinical outcome were investigated to confirm the accuracy and prognostic potential of DL-DIA. The performance of DL-TC was excellent, and invasive carcinoma nests were well segmented from other elements (average precision: 0.851; recall: 0.878; F1-score: 0.858). Ki-67 index data and the number of nuclei from DL-DIA were positively correlated with manual counts (ρ = 0.961 and 0.928, respectively). High Ki-67 index (cutoff 20%) cases showed significantly worse recurrence-free survival and breast cancer-specific survival (P = 0.024 and 0.032, respectively).
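
As a small illustration (not from the paper), the concordance check reported above can be computed as follows, assuming the reported ρ is a rank correlation between DL-DIA and manual Ki-67 values; the arrays here are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

ki67_manual = np.random.rand(194) * 100                 # manual Ki-67 index (%)
ki67_dl_dia = ki67_manual + np.random.randn(194) * 3    # DL-DIA estimate

rho, p = spearmanr(ki67_manual, ki67_dl_dia)
print(f"rho = {rho:.3f}, p = {p:.2e}")
```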

CONCLUSION: The performances of DL-TC and DL-ND were excellent. DL-DIA demonstrated a high degree of concordance with manual counting of Ki-67 and the results of this approach have prognostic potential.

PMID:39478421 | DOI:10.1111/his.15356

Categories: Literature Watch

Accuracy of tooth segmentation algorithm based on deep learning

Thu, 2024-10-31 06:00

Shanghai Kou Qiang Yi Xue. 2024 Aug;33(4):339-344.

ABSTRACT

PURPOSE: To use the established automatic AI tooth segmentation algorithm to achieve rapid and automatic tooth segmentation from CBCT images, and to verify its accuracy using the three-dimensional data obtained by scanning real isolated teeth as the gold standard.

METHODS: Thirty sets of CBCT data and 59 corresponding isolated teeth were collected from Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine. The three-dimensional tooth data in the CBCT images were segmented by the algorithm, and the digital data obtained by scanning the extracted teeth after processing served as the gold standard. To compare the differences between the algorithm's segmentation results and the scan results, the Dice coefficient (Dice), sensitivity (Sen), and average symmetric surface distance (ASSD) were selected to evaluate segmentation accuracy. The intra-class correlation coefficient (ICC) was used to evaluate differences in length, area, and volume between single teeth obtained by the AI system and the digital isolated teeth. Because the CBCT scans had different resolutions, ANOVA was used to analyze differences between resolution groups, and the Student-Newman-Keuls (SNK) method was used for pairwise comparisons. SPSS 25.0 was used for data analysis.
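
A minimal sketch (not the authors' code) of the three segmentation metrics named above, computed on binary 3D masks: Dice, sensitivity (Sen), and an approximate average symmetric surface distance (ASSD) based on distance transforms. Isotropic 1 mm voxels are assumed for simplicity.

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def sensitivity(pred, gt):
    return np.logical_and(pred, gt).sum() / gt.sum()

def surface(mask):
    """One-voxel-thick boundary of a binary mask."""
    return np.logical_and(mask, ~ndimage.binary_erosion(mask))

def assd(pred, gt):
    dist_to_gt = ndimage.distance_transform_edt(~surface(gt))
    dist_to_pred = ndimage.distance_transform_edt(~surface(pred))
    d_pg = dist_to_gt[surface(pred)]     # pred surface -> gt surface distances
    d_gp = dist_to_pred[surface(gt)]     # gt surface -> pred surface distances
    return (d_pg.sum() + d_gp.sum()) / (len(d_pg) + len(d_gp))

# Toy usage on two slightly offset cubes
pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
gt = np.zeros((32, 32, 32), bool); gt[9:21, 9:21, 9:21] = True
print(dice(pred, gt), sensitivity(pred, gt), assd(pred, gt))
```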

RESULTS: Comparing the segmentation results with the in vitro dental scan results, the average Dice value was (94.7±1.88)%, the average Sen was (95.8±2.02)%, and the average ASSD was (0.49±0.12) mm. Comparing the length, area, and volume of single teeth obtained by the AI system with those of the digital isolated teeth, the ICC values were 0.734, 0.719, and 0.885, respectively. The single teeth segmented by the AI system showed good consistency with the digital models in length, area, and volume, although the specific values still differed from the ground truth. The smaller the CBCT voxel size (i.e., the higher the resolution), the better the segmentation results.

CONCLUSIONS: The CBCT tooth segmentation algorithm established in this study can accurately segment the teeth of the whole dentition in CBCT at all resolutions, and higher CBCT resolution makes the algorithm more accurate. Compared with current segmentation algorithms, our algorithm performs better; however, some differences from the ground truth remain, and the algorithm needs further improvement and verification.

PMID:39478388

Categories: Literature Watch

Advances in Miniaturized Computational Spectrometers

Wed, 2024-10-30 06:00

Adv Sci (Weinh). 2024 Oct 30:e2404448. doi: 10.1002/advs.202404448. Online ahead of print.

ABSTRACT

Miniaturized computational spectrometers have emerged as a promising route to spectrometer miniaturization, breaking the compromise between footprint and performance in traditional miniaturized spectrometers by introducing computational resources. They have attracted widespread attention, and a variety of materials, optical structures, and photodetectors have been adopted to fabricate computational spectrometers in cooperation with reconstruction algorithms. Here, a comprehensive review of miniaturized computational spectrometers is provided, focusing on two crucial components: spectral encoding and reconstruction algorithms. The principles, features, and recent progress of spectral encoding strategies are summarized in detail, including space-modulated, time-modulated, and light-source spectral encoding. The reconstruction algorithms are classified into traditional and deep learning algorithms and are carefully analyzed based on the mathematical models required for spectral reconstruction. Drawing on the analysis of the two components, cooperation between them is considered, figures of merit for miniaturized computational spectrometers are highlighted, optimization strategies for improving their performance are outlined, and considerations for operating these systems are provided. The application of miniaturized computational spectrometers to hyperspectral imaging is also discussed. Finally, insights into the potential future applications and developments of computational spectrometers are provided.
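
To make the encoding/reconstruction principle concrete, here is a minimal sketch (not from the review) of one simple traditional reconstruction: the measurement y is the spectrum s passed through a set of broadband encoder responses A (y = A s), and the spectrum is recovered by solving the inverse problem with a ridge regularizer. The encoder matrix and spectrum below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_wavelengths = 16, 200          # few detector channels, many spectral points
A = rng.random((n_channels, n_wavelengths))  # encoder response functions (assumed known)
wl = np.linspace(0, 1, n_wavelengths)
s_true = np.exp(-((wl - 0.4) / 0.05) ** 2) + 0.5 * np.exp(-((wl - 0.7) / 0.03) ** 2)

y = A @ s_true + rng.normal(0, 1e-3, n_channels)             # encoded, noisy measurement

lam = 1e-2                                                    # regularization strength
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_wavelengths), A.T @ y)
print("relative reconstruction error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```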

PMID:39477813 | DOI:10.1002/advs.202404448

Categories: Literature Watch

The Evolution and Clinical Impact of Deep Learning Technologies in Breast MRI

Wed, 2024-10-30 06:00

Magn Reson Med Sci. 2024 Oct 29. doi: 10.2463/mrms.rev.2024-0056. Online ahead of print.

ABSTRACT

The integration of deep learning (DL) in breast MRI has revolutionized the field of medical imaging, notably enhancing diagnostic accuracy and efficiency. This review discusses the substantial influence of DL technologies across various facets of breast MRI, including image reconstruction, classification, object detection, segmentation, and prediction of clinical outcomes such as response to neoadjuvant chemotherapy and recurrence of breast cancer. Utilizing sophisticated models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks, DL has improved image quality and precision, enabling more accurate differentiation between benign and malignant lesions and providing deeper insights into disease behavior and treatment responses. DL's predictive capabilities for patient-specific outcomes also suggest potential for more personalized treatment strategies. The advancements in DL are pioneering a new era in breast cancer diagnostics, promising more personalized and effective healthcare solutions. Nonetheless, the integration of this technology into clinical practice faces challenges, necessitating further research, validation, and development of legal and ethical frameworks to fully leverage its potential.

PMID:39477506 | DOI:10.2463/mrms.rev.2024-0056

Categories: Literature Watch

Artificial Intelligence in Obstetric and Gynecological MR Imaging

Wed, 2024-10-30 06:00

Magn Reson Med Sci. 2024 Oct 29. doi: 10.2463/mrms.rev.2024-0077. Online ahead of print.

ABSTRACT

This review explores the significant progress and applications of artificial intelligence (AI) in obstetrics and gynecological MRI, charting its development from foundational algorithmic techniques to deep learning strategies and advanced radiomics. This review features research published over the last few years that has used AI with MRI to identify specific conditions such as uterine leiomyosarcoma, endometrial cancer, cervical cancer, ovarian tumors, and placenta accreta. In addition, it covers studies on the application of AI for segmentation and quality improvement in obstetrics and gynecology MRI. The review also outlines the existing challenges and envisions future directions for AI research in this domain. The growing accessibility of extensive datasets across various institutions and the application of multiparametric MRI are significantly enhancing the accuracy and adaptability of AI. This progress has the potential to enable more accurate and efficient diagnosis, offering opportunities for personalized medicine in the field of obstetrics and gynecology.

PMID:39477505 | DOI:10.2463/mrms.rev.2024-0077

Categories: Literature Watch

Effect of Training Data Differences on Accuracy in MR Image Generation Using Pix2pix

Wed, 2024-10-30 06:00

Nihon Hoshasen Gijutsu Gakkai Zasshi. 2024 Oct 29. doi: 10.6009/jjrt.2024-1487. Online ahead of print.

ABSTRACT

PURPOSE: Using a magnetic resonance (MR) image generation technique with deep learning, we elucidated whether changing the training data patterns affected image generation accuracy.

METHODS: Pix2pix models were trained to generate T1-weighted images from T2-weighted or FLAIR images, using head MR images obtained at our hospital. We prepared 300 cases and four training data patterns for each model (a: 150 cases from one MR system, b: 300 cases from one MR system, c: 150 cases plus augmentation data from one MR system, and d: 300 cases from two MR systems). The augmentation data were the images of the 150 cases rotated in the XY plane. The similarity between the images generated by each trained model and the evaluation data in each group was evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
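
For reference, a minimal sketch (not from the paper) of the two similarity metrics named above, computed with scikit-image between a generated image and its ground-truth counterpart; the arrays here are random placeholders scaled to [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.random.rand(256, 256)                                   # ground-truth T1-weighted slice
gen = np.clip(gt + np.random.normal(0, 0.05, gt.shape), 0, 1)   # generated slice

psnr = peak_signal_noise_ratio(gt, gen, data_range=1.0)
ssim = structural_similarity(gt, gen, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```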

RESULTS: For both MR systems, the PSNR and SSIM were higher for training dataset b than training dataset a. The PSNR and SSIM were lower for training dataset d.

CONCLUSION: MR image generation accuracy varied among training data patterns.

PMID:39477465 | DOI:10.6009/jjrt.2024-1487

Categories: Literature Watch

A Multi-label Artificial Intelligence Approach for Improving Breast Cancer Detection With Mammographic Image Analysis

Wed, 2024-10-30 06:00

In Vivo. 2024 Nov-Dec;38(6):2864-2872. doi: 10.21873/invivo.13767.

ABSTRACT

BACKGROUND/AIM: Breast cancer remains a major global health concern. This study aimed to develop a deep-learning-based artificial intelligence (AI) model that predicts the malignancy of mammographic lesions and reduces unnecessary biopsies in patients with breast cancer.

PATIENTS AND METHODS: In this retrospective study, we used a deep-learning-based AI model to predict whether lesions in mammographic images are malignant. Through multi-label training, the model learned the malignancy as well as the margins and shapes of mass lesions, similar to the diagnostic process of a radiologist. We used the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), which includes annotations for mass lesions, and developed an algorithm to determine the exact location of the lesions for accurate classification. The multi-label classification approach enabled the model to recognize both malignancy and lesion attributes.
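
A minimal sketch of the multi-label training setup described above (not the authors' model): a shared image backbone with independent sigmoid outputs for malignancy, mass shape, and mass margin, optimized jointly with binary cross-entropy. The backbone choice and the numbers of shape and margin categories are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision

n_shape, n_margin = 4, 5                  # assumed category counts for shape / margin
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1 + n_shape + n_margin)

criterion = nn.BCEWithLogitsLoss()        # one binary target per label
images = torch.randn(4, 3, 224, 224)      # placeholder mammographic patches
targets = torch.randint(0, 2, (4, 1 + n_shape + n_margin)).float()

logits = backbone(images)                 # malignancy + attribute logits
loss = criterion(logits, targets)
loss.backward()
print(float(loss))
```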

RESULTS: Our multi-label classification model, trained on both lesion shape and margin, demonstrated superior performance compared with models trained solely on malignancy. Gradient-weighted class activation mapping analysis revealed that by considering the margin and shape, the model assigned higher importance to border areas and analyzed pixels more uniformly when classifying malignant lesions. This approach improved diagnostic accuracy, particularly in challenging cases, such as American College of Radiology Breast Imaging-Reporting and Data System categories 3 and 4, where the breast density exceeded 50%.

CONCLUSION: This study highlights the potential of AI in improving the diagnosis of breast cancer. By integrating advanced techniques and modern neural network designs, we developed an AI model with enhanced accuracy for mammographic image analysis.

PMID:39477432 | DOI:10.21873/invivo.13767

Categories: Literature Watch

An Artificial Intelligence-assisted Diagnostic System Improves Upper Urine Tract Cytology Diagnosis

Wed, 2024-10-30 06:00

In Vivo. 2024 Nov-Dec;38(6):3016-3021. doi: 10.21873/invivo.13785.

ABSTRACT

BACKGROUND/AIM: To evaluate the efficacy of the AIxURO system, a deep learning-based artificial intelligence (AI) tool, in enhancing the accuracy and reliability of urine cytology for diagnosing upper urinary tract cancers.

MATERIALS AND METHODS: One hundred and eighty-five cytology samples of the upper urinary tract were collected and categorized according to The Paris System for Reporting Urinary Cytology (TPS), yielding 168 negative for high-grade urothelial carcinoma (NHGUC), 14 atypical urothelial cells (AUC), 2 suspicious for high-grade urothelial carcinoma (SHGUC), and 1 high-grade urothelial carcinoma (HGUC). The AIxURO system, trained on annotated cytology images, was employed to analyze these samples. Independent assessments by a cytotechnologist and a cytopathologist were conducted to validate the initial AIxURO assessment.

RESULTS: AIxURO identified discrepancies in 37 of the 185 cases, a 20% discrepancy rate. The cytotechnologist achieved an accuracy of 85% for NHGUC and 21.4% for AUC, whereas the cytopathologist attained accuracies of 95% for NHGUC and 85.7% for AUC. The cytotechnologist exhibited overcall rates of roughly 15% and undercall rates of more than 50%, while the cytopathologist showed markedly lower miscall rates for both undercalls and overcalls. AIxURO significantly enhanced diagnostic accuracy and consistency, particularly in complex cases involving atypical cells.

CONCLUSION: AIxURO can improve the accuracy and reliability of cytology diagnosis for upper urinary tract urothelial carcinomas by providing precise detection of atypical urothelial cells and reducing subjectivity in assessments. Integrating AIxURO into clinical practice could significantly improve diagnostic outcomes, highlighting the synergistic potential of AI technology and human expertise in cytology.

PMID:39477382 | DOI:10.21873/invivo.13785

Categories: Literature Watch

Development and effect of hybrid simulation program for nursing students: focusing on a case of pediatric cardiac catheterization in Korea: quasi-experimental study

Wed, 2024-10-30 06:00

Child Health Nurs Res. 2024 Oct;30(4):277-287. doi: 10.4094/chnr.2024.020. Epub 2024 Oct 31.

ABSTRACT

PURPOSE: Hybrid simulation has emerged to increase the practicality of simulation training by combining simulators with standardized patients (SPs), implementing realistic clinical environments at a high level. This study aimed to develop a hybrid simulation program focused on a case of pediatric cardiac catheterization and to evaluate its effectiveness.

METHODS: The hybrid simulation program was developed according to the Analyze, Design, Develop, Implement, and Evaluate (ADDIE) model, and a deep learning-based analysis program was used to analyze non-verbal communication with the SP and applied in debriefing sessions. To verify the effect of the program, a quasi-experimental study with a random assignment design was conducted. In total, 48 nursing students (n=24 in the experimental group; n=24 in the control group) participated in the study.

RESULTS: Knowledge (F=3.53, p=.038), confidence in clinical performance (F=9.73, p<.001), and communication self-efficacy (F=5.20, p=.007) showed significant differences between groups and significant group-by-time interactions, and the communication ability of the experimental group increased significantly (t=3.32, p=.003).

CONCLUSION: The hybrid simulation program developed in this study proved effective and can be implemented in child nursing education. Future research should focus on developing and incorporating various hybrid simulation programs using SPs into the nursing curriculum and evaluating their effectiveness.

PMID:39477234 | DOI:10.4094/chnr.2024.020

Categories: Literature Watch

Deep Learning Significantly Boosts CRT Response Prediction Using Synthetic Longitudinal Strain Data: Training on Synthetic Data and Testing on Real Patients

Wed, 2024-10-30 06:00

Biomed J. 2024 Oct 28:100803. doi: 10.1016/j.bj.2024.100803. Online ahead of print.

ABSTRACT

BACKGROUND: As a relatively novel technology, artificial intelligence (especially deep learning) has recently received increasing attention from researchers and has been successfully applied to many biomedical domains. Nonetheless, only a few studies have used deep learning techniques to predict the cardiac resynchronization therapy (CRT) response of heart failure patients.

OBJECTIVE: To construct a deep learning-based model that predicts the CRT response of patients with high accuracy, precision, and sensitivity.

METHODS: Using two-dimensional echocardiographic strain traces from 131 patients, we pre-processed the data and synthesized 2,000 model inputs with the synthetic minority oversampling technique (SMOTE). These inputs were used to train and optimize deep neural networks (DNNs) and one-dimensional convolutional neural networks (1D-CNNs). Prediction results were visualized using t-distributed stochastic neighbor embedding (t-SNE), and model performance was evaluated using accuracy, precision, sensitivity, F1 score, and specificity. Variable importance was assessed using Shapley additive explanations (SHAP) analysis.
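
A minimal sketch of the pipeline outlined above (not the authors' code): oversample the minority class with SMOTE and train a small one-dimensional CNN on the strain traces. The trace length, channel counts, and hyperparameters are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE

# X: (n_patients, trace_length) longitudinal strain traces; y: CRT response (0/1)
X = np.random.randn(131, 300)
y = np.random.randint(0, 2, 131)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # synthetic minority oversampling

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
)
xb = torch.tensor(X_res[:8], dtype=torch.float32).unsqueeze(1)   # (8, 1, 300)
yb = torch.tensor(y_res[:8], dtype=torch.float32).unsqueeze(1)
loss = nn.BCEWithLogitsLoss()(model(xb), yb)
loss.backward()
print(float(loss))
```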

RESULTS: Both the optimal DNN and 1D-CNN models demonstrated exceptional predictive performance, with prediction accuracy, precision, and sensitivity all around 90%. Furthermore, the area under the receiver operating characteristic curve (AUROC) of the optimal 1D-CNN and DNN models achieved 0.8734 and 0.9217, respectively. Crucially, the most significant input variables for both models align well with clinical experience, further corroborating their robustness and applicability in real-world settings.

CONCLUSIONS: We believe both DL models could serve as auxiliary tools to help clinicians predict treatment response, given their excellent predictive performance and the ease of obtaining the input data needed to predict the CRT response of patients clinically.

PMID:39477070 | DOI:10.1016/j.bj.2024.100803

Categories: Literature Watch

Uncertainty-Aware Deep Learning Characterization of Knee Radiographs for Large-Scale Registry Creation

Wed, 2024-10-30 06:00

J Arthroplasty. 2024 Oct 28:S0883-5403(24)01143-4. doi: 10.1016/j.arth.2024.10.103. Online ahead of print.

ABSTRACT

BACKGROUND: We present an automated image ingestion pipeline for a knee radiography registry, integrating a multilabel image-semantic classifier with conformal prediction-based uncertainty quantification and an object detection model for knee hardware.

METHODS: Annotators retrospectively classified 26,000 knee images detailing presence, laterality, prostheses, and radiographic views. They further annotated surgical construct locations in 11,841 knee radiographs. An uncertainty-aware multilabel EfficientNet-based classifier was trained to identify the knee laterality, implants, and radiographic view. A classifier trained with embeddings from the EfficientNet model detected out-of-domain images. An object detection model was trained to identify 20 different knee implants. Model performance was assessed against a held-out internal and an external dataset using per-class F1 score, accuracy, sensitivity, and specificity. Conformal prediction was evaluated with marginal coverage and efficiency.
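
To illustrate the conformal-prediction component, here is a minimal sketch (not the authors' pipeline) of split conformal prediction for one classification head: calibrate a score threshold at level 1 - alpha, form prediction sets at test time, and report marginal coverage and efficiency, taken here as the fraction of singleton sets (other definitions exist). The probabilities below are random placeholders.

```python
import numpy as np

def calibrate(cal_probs, cal_labels, alpha=0.05):
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    return np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def prediction_sets(test_probs, qhat):
    return test_probs >= 1.0 - qhat            # boolean set membership, (n_test, n_classes)

cal_probs = np.random.dirichlet(np.ones(5), 500)
cal_labels = np.random.randint(0, 5, 500)
test_probs = np.random.dirichlet(np.ones(5), 200)
test_labels = np.random.randint(0, 5, 200)

qhat = calibrate(cal_probs, cal_labels)
sets = prediction_sets(test_probs, qhat)
coverage = sets[np.arange(200), test_labels].mean()     # marginal coverage
efficiency = (sets.sum(axis=1) == 1).mean()             # fraction of singleton sets
print(f"coverage = {coverage:.3f}, efficiency = {efficiency:.3f}")
```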

RESULTS: CLASSIFICATION MODEL WITH CONFORMAL PREDICTION: F1 scores for each label output were >0.98. Coverage of each label output was >0.99, and the average efficiency was 0.97.

DOMAIN DETECTION MODEL: The F1 score was 0.99, with precision and recall for knee radiographs of 0.99.

OBJECT DETECTION MODEL: Mean average precision across all classes was 0.945 and ranged from 0.695 to 1.000. Average precision and recall across all classes were 0.950 and 0.886.

CONCLUSIONS: We present a multilabel classifier with domain detection and an object detection model to characterize knee radiographs. Conformal prediction enhances transparency in cases when the model is uncertain.

PMID:39477040 | DOI:10.1016/j.arth.2024.10.103

Categories: Literature Watch

Linking Joint Exposures to Residential Greenness and Air Pollution with Adults' Social Health in Dense Hong Kong

Wed, 2024-10-30 06:00

Environ Pollut. 2024 Oct 28:125207. doi: 10.1016/j.envpol.2024.125207. Online ahead of print.

ABSTRACT

Despite growing recognition of the impact of urban environments on social health, limited research explores the combined associations of multiple urban exposures, particularly in dense cities. This study examines the interplay between greenspace, air pollution, and social health, as well as the underlying pathways and population heterogeneity, in Hong Kong, using cross-sectional survey data from 1,977 adults and residential environmental data. Social health includes social contacts, relations, and support. Greenspace was measured using street-view greenness (SVG), park density, and the normalized difference vegetation index (NDVI). Daily ground-level NO2 and O3 at 100-m resolution, indicative of air pollution, were derived using a spatiotemporal deep learning model. Mediators were physical activity and negative emotions. Main analyses were performed in a 1000-m buffer with multivariate logistic regressions, stratification, interaction, and Partial Least Squares Structural Equation Modelling (PLS-SEM). Multi-exposure models revealed positive associations between park density/SVG and social contacts, as well as between SVG and social relations, while O3 was negatively associated with social relations/support. Significant moderators included age, birthplace, employment, and education. PLS-SEM indicated direct positive associations between SVG and social contacts/relations and significant indirect negative associations between NO2/O3 and social health via negative emotions. This study adds to urban health research by exploring complex relationships between greenspace, air pollution, and social health, highlighting the role of the environment in fostering social restoration.

PMID:39476997 | DOI:10.1016/j.envpol.2024.125207

Categories: Literature Watch

Challenges and Opportunities in the Clinical Translation of High-Resolution Spatial Transcriptomics

Wed, 2024-10-30 06:00

Annu Rev Pathol. 2024 Oct 30. doi: 10.1146/annurev-pathmechdis-111523-023417. Online ahead of print.

ABSTRACT

Pathology has always been fueled by technological advances. Histology powered the study of tissue architecture at single-cell resolution and remains a cornerstone of clinical pathology today. In the last decade, next-generation sequencing has become informative for the targeted treatment of many diseases, demonstrating the importance of genome-scale molecular information for personalized medicine. Today, revolutionary developments in spatial transcriptomics technologies digitalize gene expression at subcellular resolution in intact tissue sections, enabling the computational analysis of cell types, cellular phenotypes, and cell-cell communication in routinely collected and archival clinical samples. Here we review how such molecular microscopes work, highlight their potential to identify disease mechanisms and guide personalized therapies, and provide guidance for clinical study design. Finally, we discuss remaining challenges to the swift translation of high-resolution spatial transcriptomics technologies and how integration of multimodal readouts and deep learning approaches is bringing us closer to a holistic understanding of tissue biology and pathology.

PMID:39476415 | DOI:10.1146/annurev-pathmechdis-111523-023417

Categories: Literature Watch

Diffusion network with spatial channel attention infusion and frequency spatial attention for brain tumor segmentation

Wed, 2024-10-30 06:00

Med Phys. 2024 Oct 30. doi: 10.1002/mp.17482. Online ahead of print.

ABSTRACT

BACKGROUND: Accurate segmentation of gliomas is crucial for diagnosis, treatment planning, and evaluating therapeutic efficacy. Physicians typically analyze and delineate tumor regions in brain magnetic resonance imaging (MRI) images based on personal experience, which is often time-consuming and subject to individual interpretation. Despite advancements in deep learning technology for image segmentation, current techniques still face challenges in clearly defining tumor boundary contours and enhancing segmentation accuracy.

PURPOSE: To address these issues, this paper proposes a conditional diffusion network (SF-Diff) with a spatial channel attention infusion (SCAI) module and a frequency spatial attention (FSA) mechanism to achieve accurate segmentation of the whole tumor (WT) region in brain tumors.

METHODS: SF-Diff first extracts multiscale information from multimodal MRI images and then employs a diffusion model to restore boundaries and details, thereby enabling accurate brain tumor segmentation. Specifically, a SCAI module is developed to capture multiscale information within and between encoder layers. A dual-channel upsampling block (DUB) is designed to assist in detail recovery during upsampling. An FSA mechanism is introduced to better match the conditional features with the diffusion probability distribution information. Furthermore, a cross-model loss function is implemented to supervise the feature extraction of the conditional model and the noise distribution of the diffusion model.

RESULTS: The dataset used in this paper is publicly available and includes 369 patient cases from the Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS2020). Experiments on BraTS2020 demonstrate that SF-Diff performs better than other state-of-the-art models, achieving a Dice score of 91.87%, a Hausdorff 95 of 5.47 mm, an IoU of 84.96%, a sensitivity of 92.29%, and a specificity of 99.95%.

CONCLUSIONS: The proposed SF-Diff identifies the WT region of brain tumors better than other state-of-the-art models, especially in terms of boundary contours and non-contiguous lesion regions, which is clinically significant. In the future, we will further develop this method for the three-class brain tumor segmentation task.

PMID:39476317 | DOI:10.1002/mp.17482

Categories: Literature Watch

Assessing small molecule conformational sampling methods in molecular docking

Wed, 2024-10-30 06:00

J Comput Chem. 2024 Oct 30. doi: 10.1002/jcc.27516. Online ahead of print.

ABSTRACT

Small molecule conformational sampling plays a pivotal role in molecular docking. Recent advancements have led to the emergence of various conformational sampling methods, each employing distinct algorithms. This study investigates the impact of different small molecule conformational sampling methods in molecular docking using UCSF DOCK 3.7. Specifically, six traditional sampling methods (Omega, BCL::Conf, CCDC Conformer Generator, ConfGenX, Conformator, RDKit ETKDGv3) and a deep learning-based model (Torsional Diffusion) for generating conformational ensembles are evaluated. These ensembles are subsequently docked against the Platinum Diverse Dataset, the PoseBusters dataset and the DUDE-Z dataset to assess binding pose reproducibility and screening power. Notably, different sampling methods exhibit varying performance due to their unique preferences, such as dihedral angle sampling ranges on rotatable bonds. Combining complementary methods may lead to further improvements in docking performance.
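
As a concrete example of one of the samplers evaluated, here is a minimal sketch of generating a conformer ensemble with RDKit's ETKDGv3, with an optional MMFF minimization step that the study may or may not have applied; the SMILES string and conformer count are arbitrary.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin as a toy ligand
params = AllChem.ETKDGv3()
params.randomSeed = 0
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, params=params)
AllChem.MMFFOptimizeMoleculeConfs(mol)                           # optional force-field cleanup
print(f"generated {len(conf_ids)} conformers")
```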

PMID:39476310 | DOI:10.1002/jcc.27516

Categories: Literature Watch
