Deep learning
Retraction Note: AMLnet, A deep-learning pipeline for the differential diagnosis of acute myeloid leukemia from bone marrow smears
J Hematol Oncol. 2024 Aug 1;17(1):59. doi: 10.1186/s13045-024-01582-1.
NO ABSTRACT
PMID:39090716 | DOI:10.1186/s13045-024-01582-1
Exploring the deep learning of artificial intelligence in nursing: a concept analysis with Walker and Avant's approach
BMC Nurs. 2024 Aug 1;23(1):529. doi: 10.1186/s12912-024-02170-x.
ABSTRACT
BACKGROUND: In recent years, increased attention has been given to using deep learning (DL), a branch of artificial intelligence (AI), to address nursing challenges in healthcare. The adoption of new technologies in nursing remains limited, and AI in nursing is still in its early stages. Moreover, the current literature lacks conceptual clarity, which affects clinical practice, research, and theory development. This study aimed to clarify the meaning of deep learning and identify the defining attributes of the deep learning of AI within nursing.
METHODS: We conducted a concept analysis of the deep learning of AI in nursing care using Walker and Avant's 8-step approach. Our search strategy employed Boolean techniques and MeSH terms across databases, including BMC, CINAHL, ClinicalKey for Nursing, Embase, Ovid, Scopus, SpringerLink, Springer Nature, ProQuest, PubMed, and Web of Science. By focusing on relevant keywords in titles and abstracts from articles published between 2018 and 2024, we initially found 571 sources.
RESULTS: Thirty-seven articles that met the inclusion criteria were analyzed in this study. The defining attributes comprised four themes: focus and immersion, coding and understanding, arranging layers and algorithms, and implementing use cases to modify recommendations. Antecedents included unclear systems and communication, insufficient data-management knowledge and support, and compound challenges, which can lead to suffering and risky caregiving tasks. Applying deep learning techniques enables nurses to simulate scenarios, predict outcomes, and plan care more precisely. Embracing deep learning equipment allows nurses to make better decisions and empowers them with enhanced knowledge, while ensuring the adequate support and resources essential for caregiver and patient well-being. Access to the necessary equipment is vital for high-quality home healthcare.
CONCLUSION: This study provides a clearer understanding of the use of deep learning in nursing and its implications for nursing practice. Future research should focus on exploring the impact of deep learning on healthcare operations management through quantitative and qualitative studies. Additionally, developing a framework to guide the integration of deep learning into nursing practice is recommended to facilitate its adoption and implementation.
PMID:39090714 | DOI:10.1186/s12912-024-02170-x
Current genomic deep learning models display decreased performance in cell type-specific accessible regions
Genome Biol. 2024 Aug 1;25(1):202. doi: 10.1186/s13059-024-03335-2.
ABSTRACT
BACKGROUND: A number of deep learning models have been developed to predict epigenetic features such as chromatin accessibility from DNA sequence. Model evaluations commonly report performance genome-wide; however, cis regulatory elements (CREs), which play critical roles in gene regulation, make up only a small fraction of the genome. Furthermore, cell type-specific CREs contain a large proportion of complex disease heritability.
RESULTS: We evaluate genomic deep learning models in chromatin accessibility regions with varying degrees of cell type specificity. We assess two modeling directions in the field: general purpose models trained across thousands of outputs (cell types and epigenetic marks) and models tailored to specific tissues and tasks. We find that the accuracy of genomic deep learning models, including two state-of-the-art general purpose models, Enformer and Sei, varies across the genome and is reduced in cell type-specific accessible regions. Using accessibility models trained on cell types from specific tissues, we find that increasing model capacity to learn cell type-specific regulatory syntax, through single-task learning or high-capacity multi-task models, can improve performance in cell type-specific accessible regions. We also observe that improving reference sequence predictions does not consistently improve variant effect predictions, indicating that novel strategies are needed to improve performance on variants.
CONCLUSIONS: Our results provide a new perspective on the performance of genomic deep learning models, showing that performance varies across the genome and is particularly reduced in cell type-specific accessible regions. We also identify strategies to maximize performance in cell type-specific accessible regions.
PMID:39090688 | DOI:10.1186/s13059-024-03335-2
Integrating MRI-based radiomics and clinicopathological features for preoperative prognostication of early-stage cervical adenocarcinoma patients: in comparison to deep learning approach
Cancer Imaging. 2024 Aug 1;24(1):101. doi: 10.1186/s40644-024-00747-y.
ABSTRACT
OBJECTIVES: The roles of magnetic resonance imaging (MRI)-based radiomics and deep learning approaches in cervical adenocarcinoma (AC) have not been explored. Herein, we aim to develop prognosis-predictive models based on MRI radiomics and clinical features for AC patients.
METHODS: Clinical and pathological information from one hundred and ninety-seven patients with cervical AC was collected and analyzed. For each patient, 107 radiomics features were extracted from T2-weighted MRI images. Feature selection was performed using Spearman correlation and random forest (RF) algorithms, and predictive models were built using the support vector machine (SVM) technique. Deep learning models were also trained with T2-weighted MRI images and clinicopathological features through a convolutional neural network (CNN). Kaplan-Meier curves were analyzed using significant features. In addition, information from another group of 56 AC patients was used for independent validation.
RESULTS: A total of 107 radiomics features and 6 clinicopathological features (age, FIGO stage, differentiation, invasion depth, lymphovascular space invasion (LVSI), and lymph node metastasis (LNM)) were included in the analysis. When predicting 3-year, 4-year, and 5-year disease-free survival (DFS), the model trained solely on radiomics features achieved AUC values of 0.659 (95% CI: 0.620-0.716), 0.791 (95% CI: 0.603-0.922), and 0.853 (95% CI: 0.745-0.912), respectively. However, the combined model, incorporating both radiomics and clinicopathological features, outperformed the radiomics model with AUC values of 0.934 (95% CI: 0.885-0.981), 0.937 (95% CI: 0.867-0.995), and 0.916 (95% CI: 0.857-0.970), respectively. The MRI-based deep learning models achieved AUCs of 0.857, 0.777, and 0.828 for 3-year, 4-year, and 5-year DFS prediction, respectively, and the combined deep learning models improved on this performance, with AUCs of 0.903, 0.862, and 0.969. In the independent test set, the combined model achieved AUCs of 0.873, 0.858, and 0.914 for 3-year, 4-year, and 5-year DFS prediction, respectively.
CONCLUSIONS: We demonstrated the prognostic value of integrating MRI-based radiomics and clinicopathological features in cervical adenocarcinoma. Both radiomics and deep learning models showed improved predictive performance when combined with clinical data, emphasizing the importance of a multimodal approach in patient management.
PMID:39090668 | DOI:10.1186/s40644-024-00747-y
Deep learning-based automated bone age estimation for Saudi patients on hand radiograph images: a retrospective study
BMC Med Imaging. 2024 Aug 1;24(1):199. doi: 10.1186/s12880-024-01378-2.
ABSTRACT
PURPOSE: In pediatric medicine, precise estimation of bone age is essential for skeletal maturity evaluation, growth disorder diagnosis, and therapeutic intervention planning. Conventional techniques for determining bone age depend on radiologists' subjective judgments, which may lead to non-negligible differences in the estimated bone age. This study proposes a deep learning-based model utilizing a fully connected convolutional neural network (CNN) to predict bone age from left-hand radiographs.
METHODS: The data set used in this study, consisting of 473 patients, was retrospectively retrieved from the PACS (Picture Archiving and Communication System) of a single institution. We developed a fully connected CNN consisting of four convolutional blocks, three fully connected layers, and a single output neuron. The model was trained and validated on 80% of the data using the mean squared error as a cost function, minimized through the Adam optimization algorithm, to reduce the difference between the predicted and reference bone age values. Data augmentation was applied to the training and validation sets, doubling the number of data samples. The performance of the trained model was evaluated on a test data set (20%) using various metrics, including the mean absolute error (MAE), median absolute error (MedAE), root-mean-squared error (RMSE), and mean absolute percentage error (MAPE). The code of the model developed in this study for predicting bone age is publicly available on GitHub at https://github.com/afiosman/deep-learning-based-bone-age-estimation .
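The four test metrics named above can be computed directly from predicted and reference ages; a minimal sketch (the function name and toy bone-age values are ours, not from the paper):

```python
import numpy as np

def bone_age_metrics(y_true, y_pred):
    """Compute the evaluation metrics named in the study: MAE, MedAE,
    RMSE (same units as the inputs) and MAPE (a fraction)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    return {
        "MAE": float(np.mean(np.abs(err))),
        "MedAE": float(np.median(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAPE": float(np.mean(np.abs(err) / y_true)),
    }

# Toy reference vs. predicted bone ages in years (illustrative only).
m = bone_age_metrics([10.0, 8.0, 12.0, 6.0], [11.0, 7.5, 12.0, 7.0])
```

On real data these would be evaluated over the held-out 20% test set described in the abstract.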
RESULTS: Experimental results demonstrate the sound capability of our model in predicting bone age from left-hand radiographs: in the majority of cases, the predicted and reference bone ages are close to each other, with a calculated MAE of 2.3 [1.9, 2.7; 0.95 confidence level] years, MedAE of 2.1 years, RMSE of 3.0 [1.5, 4.5; 0.95 confidence level] years, and MAPE of 0.29 (29%) on the test data set.
CONCLUSION: These findings highlight the feasibility of estimating bone age from left-hand radiographs, helping radiologists verify their own results while accounting for the model's margin of error. The performance of our proposed model could be improved with additional refinement and validation.
PMID:39090563 | DOI:10.1186/s12880-024-01378-2
A QR code-enabled framework for fast biomedical image processing in medical diagnosis using deep learning
BMC Med Imaging. 2024 Aug 1;24(1):198. doi: 10.1186/s12880-024-01351-z.
ABSTRACT
In the realm of disease prognosis and diagnosis, a plethora of medical images are utilized. These images are typically stored either within the local on-premises servers of healthcare providers or within cloud storage infrastructures. However, this conventional storage approach often incurs high infrastructure costs and results in sluggish information retrieval, ultimately leading to delays in diagnosis and consequential wastage of valuable time for patients. The methodology proposed in this paper offers a pioneering solution to expedite the diagnosis of medical conditions while simultaneously reducing infrastructure costs associated with data storage. Through this study, a high-speed biomedical image processing approach is designed to facilitate rapid prognosis and diagnosis. The proposed framework includes a deep learning QR code technique with an optimized database design aimed at alleviating the burden of intensive on-premises database requirements. The work is evaluated on medical datasets from the Crawford Image and Data Archive and Duke CIVM using different performance metrics, and is compared against previous research, further demonstrating the system's efficiency. By providing healthcare providers with high-speed access to medical records, this system enables swift retrieval of comprehensive patient details, thereby improving accuracy in diagnosis and supporting informed decision-making.
PMID:39090546 | DOI:10.1186/s12880-024-01351-z
Adaptive neighborhood triplet loss: enhanced segmentation of dermoscopy datasets by mining pixel information
Int J Comput Assist Radiol Surg. 2024 Aug 2. doi: 10.1007/s11548-024-03241-9. Online ahead of print.
ABSTRACT
PURPOSE: The integration of deep learning in image segmentation technology markedly improves the automation capabilities of medical diagnostic systems, reducing the dependence on the clinical expertise of medical professionals. However, the accuracy of image segmentation is still impacted by various interference factors encountered during image acquisition.
METHODS: To address this challenge, this paper proposes a loss function designed to mine specific pixel information that changes dynamically during the training process. Based on the triplet concept, this dynamic change is leveraged to drive the predicted image boundaries closer to the real boundaries.
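The triplet concept invoked here can be illustrated on per-pixel embeddings; a toy formulation with the standard triplet hinge, not the authors' adaptive-neighborhood loss (all names and values are ours):

```python
import numpy as np

def neighborhood_triplet_loss(emb, labels, margin=1.0):
    """Toy pixel-wise triplet loss: for each pixel (anchor), the positive
    is the mean embedding of same-class pixels (anchor included) and the
    negative is the mean embedding of other-class pixels; the standard
    hinge max(0, d(a,p) - d(a,n) + margin) is averaged over anchors."""
    losses = []
    for i in range(len(emb)):
        same = emb[labels == labels[i]]
        diff = emb[labels != labels[i]]
        if len(diff) == 0:
            continue  # no negatives available for this anchor
        d_ap = np.linalg.norm(emb[i] - same.mean(axis=0))
        d_an = np.linalg.norm(emb[i] - diff.mean(axis=0))
        losses.append(max(0.0, d_ap - d_an + margin))
    return float(np.mean(losses))

# Well-separated foreground/background embeddings incur zero loss.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0], [2.1, 0.0]])
labels = np.array([0, 0, 1, 1])
loss = neighborhood_triplet_loss(emb, labels)
```

A trainable version would operate on network feature maps and restrict positives/negatives to a spatial neighborhood, as the paper's title suggests.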
RESULTS: Extensive experiments on the PH2 and ISIC2017 dermoscopy datasets validate that our proposed loss function overcomes the limitations of traditional triplet loss methods in image segmentation applications. The loss function not only improves the Jaccard indices of neural networks by 2.42% and 2.21% on PH2 and ISIC2017, respectively, but also enables networks trained with it to generally surpass those trained without it in segmentation performance.
CONCLUSION: This work proposed a loss function that deeply mines the information of specific pixels without incurring additional training costs, significantly improving the automation of neural networks in image segmentation tasks. The loss function adapts to dermoscopic images of varying quality and demonstrates higher effectiveness and robustness than other boundary loss functions, making it suitable for image segmentation tasks across various neural networks.
PMID:39090504 | DOI:10.1007/s11548-024-03241-9
Preoperative prediction of microvascular invasion risk in hepatocellular carcinoma with MRI: peritumoral versus tumor region
Insights Imaging. 2024 Aug 1;15(1):188. doi: 10.1186/s13244-024-01760-2.
ABSTRACT
OBJECTIVES: To explore the predictive performance of tumor and multiple peritumoral regions on dynamic contrast-enhanced magnetic resonance imaging (MRI), to identify optimal regions of interest for developing a preoperative predictive model for the grade of microvascular invasion (MVI).
METHODS: A total of 147 patients who were surgically diagnosed with hepatocellular carcinoma, and had a maximum tumor diameter ≤ 5 cm were recruited and subsequently divided into a training set (n = 117) and a testing set (n = 30) based on the date of surgery. We utilized a pre-trained AlexNet to extract deep learning features from seven different regions of the maximum transverse cross-section of tumors in various MRI sequence images. Subsequently, an extreme gradient boosting (XGBoost) classifier was employed to construct the MVI grade prediction model, with evaluation based on the area under the curve (AUC).
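The peritumoral regions of interest described above can be illustrated with a brute-force ring extraction around a binary tumor mask (a sketch only; the function name, Chebyshev distance, and mm-to-pixel conversion are our assumptions):

```python
import numpy as np

def peritumoral_ring(tumor_mask, width_px):
    """Ring of `width_px` pixels around a binary tumor mask, built by
    brute-force dilation (Chebyshev distance) minus the tumor itself.
    Mirrors the 5/10/20-mm peritumoral regions in the study; converting
    mm to pixels depends on the image resolution."""
    h, w = tumor_mask.shape
    ys, xs = np.nonzero(tumor_mask)
    ring = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # in the dilation if any tumor pixel lies within width_px
            near = np.any((np.abs(ys - y) <= width_px) & (np.abs(xs - x) <= width_px))
            ring[y, x] = near and not tumor_mask[y, x]
    return ring
```

Features for the XGBoost classifier would then be extracted from the image restricted to such rings (or to ring plus tumor).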
RESULTS: The XGBoost classifier trained with data from the 20-mm peritumoral region showed superior AUC compared to the tumor region alone. AUC values consistently increased when utilizing data from the 5-mm, 10-mm, and 20-mm peritumoral regions. Combining arterial and delayed-phase data yielded the highest predictive performance, with micro- and macro-average AUCs of 0.78 and 0.74, respectively. Integration of clinical data further improved the AUCs to 0.83 and 0.80.
CONCLUSION: Compared with those of the tumor region, the deep learning features of the peritumoral region provide more important information for predicting the grade of MVI. Combining the tumor region and the 20-mm peritumoral region resulted in a relatively ideal and accurate region within which the grade of MVI can be predicted.
CLINICAL RELEVANCE STATEMENT: The 20-mm peritumoral region holds more significance than the tumor region in predicting MVI grade. Deep learning features can indirectly predict MVI by extracting information from the tumor region and directly capturing MVI information from the peritumoral region.
KEY POINTS: We investigated tumor and different peritumoral regions, as well as their fusion. MVI predominantly occurs in the peritumoral region, a superior predictor compared to the tumor region. The peritumoral 20 mm region is reasonable for accurately predicting the three-grade MVI.
PMID:39090456 | DOI:10.1186/s13244-024-01760-2
Improving quality control of whole slide images by explicit artifact augmentation
Sci Rep. 2024 Aug 1;14(1):17847. doi: 10.1038/s41598-024-68667-2.
ABSTRACT
The problem of artifacts in whole slide image acquisition, prevalent in both clinical workflows and research-oriented settings, necessitates human intervention and re-scanning. Overcoming this challenge requires developing quality control algorithms, an effort hindered by the limited availability of relevant annotated data in histopathology. The manual annotation of ground truth for artifact detection methods is expensive and time-consuming. This work addresses the issue by proposing a method for augmenting whole slide images with artifacts. The tool seamlessly generates artifacts from an external library and blends them into a given histopathology dataset. The augmented datasets are then used to train artifact classification methods. The evaluation shows their usefulness for artifact classification, with improvements ranging from 0.01 to 0.10 AUROC depending on the artifact type. The framework, model, weights, and ground-truth annotations are freely released to facilitate open science and reproducible research.
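The core blending step such a tool needs can be sketched as per-pixel alpha compositing (an assumption about the mechanism on our part; the released framework's actual blending may be more elaborate):

```python
import numpy as np

def blend_artifact(tile, artifact, alpha_mask):
    """Alpha-blend an artifact patch into an image tile, per pixel:
    out = alpha * artifact + (1 - alpha) * tile. `alpha_mask` is HxW
    with values in [0, 1]; `tile` and `artifact` are HxWx3."""
    alpha = alpha_mask[..., None]  # broadcast the mask over RGB channels
    out = alpha * artifact + (1.0 - alpha) * tile
    return out.astype(tile.dtype)

# Toy 2x2 tiles: a gray tile, a brighter "artifact", and a soft mask.
tile = np.full((2, 2, 3), 100, dtype=np.uint8)
art = np.full((2, 2, 3), 200, dtype=np.uint8)
alpha = np.array([[1.0, 0.5], [0.0, 0.25]])
out = blend_artifact(tile, art, alpha)
```

A soft-edged alpha mask is what makes the inserted artifact blend "seamlessly" rather than appearing as a hard-edged paste.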
PMID:39090284 | DOI:10.1038/s41598-024-68667-2
Significance of AI-assisted techniques for epiphyte plant monitoring and identification from drone images
J Environ Manage. 2024 Jul 31;367:121996. doi: 10.1016/j.jenvman.2024.121996. Online ahead of print.
ABSTRACT
Monitoring forest canopies is vital for ecological studies, particularly for assessing epiphytes in rain forest ecosystems. Traditional methods for studying epiphytes, such as climbing trees and building observation structures, are labor- and cost-intensive and risky. Unmanned Aerial Vehicles (UAVs) have emerged as a valuable tool in this domain, offering botanists a safer and more cost-effective means to collect data. This study leverages AI-assisted techniques to enhance the identification and mapping of epiphytes using UAV imagery. The primary objective of this research is to evaluate the effectiveness of AI-assisted methods compared to traditional approaches in segmenting and identifying epiphytes from UAV images collected in a reserve forest in Costa Rica. Specifically, the study investigates whether Deep Learning (DL) models can accurately identify epiphytes against complex backgrounds, even with a limited dataset of varying image quality. Systematically, this study compares three traditional image segmentation methods (Auto Cluster, Watershed, and Level Set) with two DL-based segmentation networks: the UNet and the Vision Transformer-based TransUNet. Results obtained from this study indicate that traditional methods struggle with the complexity of vegetation backgrounds and variability in target characteristics. Epiphyte identification results were quantitatively evaluated using the Jaccard score. Among traditional methods, Watershed scored 0.10, Auto Cluster 0.13, and Level Set failed to identify the target. In contrast, AI-assisted models performed better, with UNet scoring 0.60 and TransUNet 0.65. These results highlight the potential of DL approaches to improve the accuracy and efficiency of epiphyte identification and mapping, advancing ecological research and conservation.
PMID:39088905 | DOI:10.1016/j.jenvman.2024.121996
Enhanced river suspended sediment concentration identification via multimodal video image, optical flow, and water temperature data fusion
J Environ Manage. 2024 Jul 31;367:122048. doi: 10.1016/j.jenvman.2024.122048. Online ahead of print.
ABSTRACT
Monitoring suspended sediment concentration (SSC) in rivers is pivotal for water quality management and sustainable river ecosystem development. However, achieving continuous and precise SSC monitoring is fraught with challenges, including low automation, lengthy measurement processes, and high cost. This study proposes an innovative approach for SSC identification in rivers using multimodal data fusion. We developed a robust model by harnessing colour features from video images, motion characteristics from the Lucas-Kanade (LK) optical flow method, and temperature data. By integrating ResNet with a mixed density network (MDN), our method fused the image and optical flow fields, and temperature data to enhance accuracy and reliability. Validated at a hydropower station in the Xinjiang Uygur Autonomous Region, China, the results demonstrated that while the image field alone offers a baseline level of SSC identification, it experiences local errors under specific conditions. The incorporation of optical flow and water temperature information enhanced model robustness, particularly when coupling the image and optical flow fields, yielding a Nash-Sutcliffe efficiency (NSE) of 0.91. Further enhancement was observed with the combined use of all three data types, attaining an NSE of 0.93. This integrated approach offers a more accurate SSC identification solution, enabling non-contact, low-cost measurements, facilitating remote online monitoring, and supporting water resource management and river water-sediment element monitoring.
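The Nash-Sutcliffe efficiency used to validate the fused model is straightforward to compute (the values below are illustrative, not the study's data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations
    from the observed mean. 1.0 is a perfect match; 0.0 means the model
    is no better than predicting the observed mean; negative is worse."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    svar = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / svar
```

The reported NSE values of 0.91 (image + optical flow) and 0.93 (all three modalities) therefore indicate that the fused predictions explain most of the variance in observed SSC.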
PMID:39088903 | DOI:10.1016/j.jenvman.2024.122048
Deep learning algorithms for melanoma detection using dermoscopic images: A systematic review and meta-analysis
Artif Intell Med. 2024 Jul 25;155:102934. doi: 10.1016/j.artmed.2024.102934. Online ahead of print.
ABSTRACT
BACKGROUND: Melanoma is a serious risk to human health and early identification is vital for treatment success. Deep learning (DL) has the potential to detect cancer using imaging technologies and many studies provide evidence that DL algorithms can achieve high accuracy in melanoma diagnostics.
OBJECTIVES: To critically assess different DL performances in diagnosing melanoma using dermatoscopic images and discuss the relationship between dermatologists and DL.
METHODS: Ovid-Medline, Embase, IEEE Xplore, and the Cochrane Library were systematically searched from inception until 7th December 2021. Studies that reported diagnostic DL model performances in detecting melanoma using dermatoscopic images were included if they had specific outcomes and histopathologic confirmation. Binary diagnostic accuracy data and contingency tables were extracted to analyze outcomes of interest, which included sensitivity (SEN), specificity (SPE), and area under the curve (AUC). Subgroup analyses were performed according to human-machine comparison and cooperation. The study was registered in PROSPERO, CRD42022367824.
RESULTS: 2309 records were initially retrieved, of which 37 studies met our inclusion criteria, and 27 provided sufficient data for meta-analytical synthesis. The pooled SEN was 82 % (range 77-86), SPE was 87 % (range 84-90), with an AUC of 0.92 (range 0.89-0.94). Human-machine comparison had pooled AUCs of 0.87 (0.84-0.90) and 0.83 (0.79-0.86) for DL and dermatologists, respectively. Pooled AUCs were 0.90 (0.87-0.93), 0.80 (0.76-0.83), and 0.88 (0.85-0.91) for DL, and junior and senior dermatologists, respectively. Analyses of human-machine cooperation were 0.88 (0.85-0.91) for DL, 0.76 (0.72-0.79) for unassisted, and 0.87 (0.84-0.90) for DL-assisted dermatologists.
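For readers unfamiliar with how study-level accuracies are pooled, a minimal fixed-effect sketch of inverse-variance pooling on the logit scale follows. This is a deliberate simplification; diagnostic meta-analyses such as this one typically use bivariate random-effects models, and all counts below are illustrative, not the review's data:

```python
import math

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling on the logit scale: each
    study's proportion is logit-transformed, weighted by the inverse of
    its approximate variance 1/e + 1/(n - e), averaged, and then
    back-transformed with the logistic function."""
    num = den = 0.0
    for e, n in zip(events, totals):
        logit = math.log(e / (n - e))
        var = 1.0 / e + 1.0 / (n - e)
        num += logit / var
        den += 1.0 / var
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Toy sensitivity data: true positives / melanoma cases in two studies.
pooled_sen = pool_proportions([80, 41], [100, 50])
```

The pooled estimate always lies between the individual study proportions, weighted toward the larger, more precise studies.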
CONCLUSIONS: Evidence suggests that DL algorithms are as accurate as senior dermatologists in melanoma diagnostics. Therefore, DL could be used to support dermatologists in diagnostic decision-making. However, further high-quality, large-scale multicenter studies are required to address the specific challenges associated with medical AI-based diagnostics.
PMID:39088883 | DOI:10.1016/j.artmed.2024.102934
Improving 3D dose prediction for breast radiotherapy using novel glowing masks and gradient-weighted loss functions
Med Phys. 2024 Aug 1. doi: 10.1002/mp.17326. Online ahead of print.
ABSTRACT
BACKGROUND: The quality of treatment plans for breast cancer can vary greatly. This variation could be reduced by using dose prediction to automate treatment planning. Our work investigates novel methods for training deep-learning models that are capable of producing high-quality dose predictions for breast cancer treatment planning.
PURPOSE: The goal of this work was to compare the performance impact of two novel techniques for deep learning dose prediction models for tangent field treatments for breast cancer. The first technique, a "glowing" mask algorithm, encodes the distance from a contour into each voxel in a mask. The second, a gradient-weighted mean squared error (MSE) loss function, emphasizes the error in high-dose gradient regions in the predicted image.
METHODS: Four 3D U-Net deep learning models were trained using the planning CT and contours of the heart, lung, and tumor bed as inputs. The dataset consisted of 305 treatment plans split into 213/46/46 training/validation/test sets using a 70/15/15% split. We compared the impact of novel "glowing" anatomical mask inputs and a novel gradient-weighted MSE loss function to their standard counterparts, binary anatomical masks, and MSE loss, using an ablation study methodology. To assess performance, we examined the mean error and mean absolute error (ME/MAE) in dose across all within-body voxels, the error in mean dose to heart, ipsilateral lung, and tumor bed, dice similarity coefficient (DSC) across isodose volumes defined by 0%-100% prescribed dose thresholds, and gamma analysis (3%/3 mm).
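The two novel techniques can be sketched in a few lines of NumPy. The exponential distance fall-off and the gradient-magnitude weighting scheme below are our assumptions for illustration, not the paper's exact definitions:

```python
import numpy as np

def glowing_mask(binary_mask, decay=0.5):
    """'Glowing' mask sketch: every voxel receives a value that falls
    off exponentially with Chebyshev distance to the structure, instead
    of a hard 0/1 (contour extraction is omitted for brevity)."""
    h, w = binary_mask.shape
    ys, xs = np.nonzero(binary_mask)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d = np.min(np.maximum(np.abs(ys - y), np.abs(xs - x)))
            out[y, x] = np.exp(-decay * d)
    return out

def gradient_weighted_mse(pred, target, base=1.0):
    """MSE with per-voxel weights that grow with the magnitude of the
    target dose gradient, emphasising high-dose-gradient regions."""
    gy, gx = np.gradient(target)
    weight = base + np.hypot(gy, gx)
    return float(np.mean(weight * (pred - target) ** 2))
```

In a flat dose region the weighted loss reduces to plain MSE; near steep gradients the same voxel-wise error is penalised more.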
RESULTS: The combination of novel glowing masks and the gradient-weighted loss function yielded the best-performing model in this study. This model resulted in a mean ME of 0.40%, MAE of 2.70%, an error in mean dose to the heart and lung of -0.10 and 0.01 Gy, and an error in mean dose to the tumor bed of -0.01%. The median DSCs at the 50/95/100% isodose levels were 0.91/0.87/0.82. The mean 3D gamma pass rate (3%/3 mm) was 93%.
CONCLUSIONS: This study found the combination of novel anatomical mask inputs and loss function for dose prediction resulted in superior performance to their standard counterparts. These results have important implications for the field of radiotherapy dose prediction, as the methods used here can be easily incorporated into many other dose prediction models for other treatment sites. Additionally, this dose prediction model for breast radiotherapy has sufficient performance to be used in an automated planning pipeline for tangent field radiotherapy and has the major benefit of not requiring a PTV for accurate dose prediction.
PMID:39088756 | DOI:10.1002/mp.17326
The Role of Artificial Intelligence in Predicting Optic Neuritis Subtypes From Ocular Fundus Photographs
J Neuroophthalmol. 2024 Aug 1. doi: 10.1097/WNO.0000000000002229. Online ahead of print.
ABSTRACT
BACKGROUND: Optic neuritis (ON) is a complex clinical syndrome that has diverse etiologies and treatments based on its subtypes. Notably, ON associated with multiple sclerosis (MS ON) has a good prognosis for recovery irrespective of treatment, whereas ON associated with other conditions including neuromyelitis optica spectrum disorders or myelin oligodendrocyte glycoprotein antibody-associated disease is often associated with less favorable outcomes. Delay in treatment of these non-MS ON subtypes can lead to irreversible vision loss. It is important to distinguish MS ON from other ON subtypes early, to guide appropriate management. Yet, identifying ON and differentiating subtypes can be challenging as MRI and serological antibody test results are not always readily available in the acute setting. The purpose of this study is to develop a deep learning artificial intelligence (AI) algorithm to predict subtype based on fundus photographs, to aid the diagnostic evaluation of patients with suspected ON.
METHODS: This was a retrospective study of patients with ON seen at our institution between 2007 and 2022. Fundus photographs (1,599) were retrospectively collected from a total of 321 patients classified into 2 groups: MS ON (262 patients; 1,114 photographs) and non-MS ON (59 patients; 485 photographs). The dataset was divided into training and holdout test sets with an 80%/20% ratio, using stratified sampling to ensure equal representation of MS ON and non-MS ON patients in both sets. Model hyperparameters were tuned using 5-fold cross-validation on the training dataset. The overall performance and generalizability of the model was subsequently evaluated on the holdout test set.
RESULTS: The receiver operating characteristic (ROC) curve for the developed model, evaluated on the holdout test dataset, yielded an area under the ROC curve of 0.83 (95% confidence interval [CI], 0.72-0.92). The model attained an accuracy of 76.2% (95% CI, 68.4-83.1), a sensitivity of 74.2% (95% CI, 55.9-87.4) and a specificity of 76.9% (95% CI, 67.6-85.0) in classifying images as non-MS-related ON.
CONCLUSION: This study provides preliminary evidence supporting a role for AI in differentiating non-MS ON subtypes from MS ON. Future work will aim to increase the size of the dataset and explore the role of combining clinical and paraclinical measures to refine deep learning models over time.
PMID:39088711 | DOI:10.1097/WNO.0000000000002229
PhAI: A deep-learning approach to solve the crystallographic phase problem
Science. 2024 Aug 2;385(6708):522-528. doi: 10.1126/science.adn2777. Epub 2024 Aug 1.
ABSTRACT
X-ray crystallography provides a distinctive view on the three-dimensional structure of crystals. To reconstruct the electron density map, the complex structure factors F of a sufficiently large number of diffracted reflections must be known. In a conventional experiment, only the amplitudes |F| are obtained, and the phases ϕ are lost. This is the crystallographic phase problem. In this work, we show that a neural network, trained on millions of artificial structure data, can solve the phase problem at a resolution of only 2 angstroms, using only 10 to 20% of the data needed for direct methods. The network works in common space groups and for modest unit-cell dimensions and suggests that neural networks could be used to solve the phase problem in the general case for weakly scattering crystals.
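The amplitude/phase relationship the abstract invokes can be demonstrated in one dimension with a discrete Fourier transform (a sketch of the general principle only, not the paper's method; the toy "density" is ours):

```python
import numpy as np

# A 1-D "electron density" with two atoms (Gaussian peaks).
x = np.linspace(0.0, 1.0, 64, endpoint=False)
rho = np.exp(-((x - 0.3) ** 2) / 2e-3) + np.exp(-((x - 0.7) ** 2) / 2e-3)

F = np.fft.fft(rho)          # complex structure factors F = |F| * exp(i*phi)
amplitudes = np.abs(F)       # what a conventional experiment measures
phases = np.angle(F)         # what is lost: the phase problem

# With the true phases, the density is recovered exactly...
rho_ok = np.fft.ifft(amplitudes * np.exp(1j * phases)).real
# ...but with the phases zeroed out, it is not.
rho_bad = np.fft.ifft(amplitudes).real
```

PhAI's contribution is, in effect, learning to predict the `phases` array from the `amplitudes` array alone.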
PMID:39088613 | DOI:10.1126/science.adn2777
Factors affecting the intention to use COVID-19 contact tracing application "StaySafe PH": Integrating protection motivation theory, UTAUT2, and system usability theory
PLoS One. 2024 Aug 1;19(8):e0306701. doi: 10.1371/journal.pone.0306701. eCollection 2024.
ABSTRACT
PURPOSE: StaySafe PH is the Philippines' official contact tracing software for controlling the propagation of COVID-19 and promoting a uniform contact tracing strategy. StaySafe PH has various features, such as a social distancing system, an LGU heat map and response system, real-time monitoring, graphs, infographics, and, as its primary purpose, a contact tracing system. The application is mandatory in establishments such as fast-food restaurants, banks, and malls.
OBJECTIVE AND METHODOLOGY: The purpose of this research was to determine the country's willingness to utilize StaySafe PH. Specifically, this study utilized 12 latent variables from the integrated Protection Motivation Theory (PMT), Unified Theory of Acceptance and Use of Technology (UTAUT2), and System Usability Scale (SUS). Data from 646 respondents in the Philippines were employed through Structural Equation Modelling (SEM), Deep Learning Neural Network (DLNN), and SUS.
RESULTS: Using SEM, it was found that understanding of the COVID-19 vaccine, understanding of the COVID-19 Delta variant, perceived vulnerability, perceived severity, performance expectancy, social influence, hedonic motivation, behavioral intention, actual use, and the system usability scale are major determinants of intention to use the application. Understanding of the COVID-19 Delta variant was found to be the most important factor by the DLNN, which is congruent with the SEM results. The application's SUS score corresponds to a grade of "D", which implies poor usability.
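The "D" grade reported above derives from the standard 0-100 SUS score; the canonical scoring rule is shown below (mapping scores to letter grades then follows published curved-grading scales, which the code omits):

```python
def sus_score(responses):
    """Standard System Usability Scale scoring: 10 Likert items rated
    1-5; odd-numbered items contribute (r - 1), even-numbered items
    contribute (5 - r); the sum is scaled by 2.5 onto a 0-100 range."""
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers (3 on every item) give the scale midpoint of 50.
midpoint = sus_score([3] * 10)
```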
IMPLICATIONS: The findings imply that large concerns stem from trust issues around privacy, data security, and consent to the information collected. Addressing these concerns requires clarifying how the data are stored, used, and protected by the system, how assurance can be provided to consumers, and how the government will manage the information obtained. Building trust is crucial to the development and deployment of these types of technology. The results also suggest that individuals in the Philippines expected vaccination to protect them from contracting the virus and thus from being vulnerable, leading to positive actual use of the application.
NOVELTY: This study encompassed health-related behaviors using the PMT, integrated with the technology acceptance model UTAUT2 and the usability perspective of the SUS. It was the first study to evaluate and assess a contact tracing application in the Philippines and to integrate these frameworks into a holistic measurement.
PMID:39088508 | DOI:10.1371/journal.pone.0306701
High-Performance Method and Architecture for Attention Computation in DNN Inference
IEEE Trans Biomed Circuits Syst. 2024 Aug 1;PP. doi: 10.1109/TBCAS.2024.3436837. Online ahead of print.
ABSTRACT
In recent years, the combination of the attention mechanism and deep learning has found a wide range of applications in medical imaging. However, because of its complex computational processes, existing hardware architectures suffer from high resource consumption or low accuracy, and deploying them efficiently to DNN accelerators is a challenge. This paper proposes an online-programmable attention hardware architecture based on a compute-in-memory (CIM) macro, which reduces the complexity of attention in hardware and improves integration density, energy efficiency, and calculation accuracy. First, the attention computation process is decomposed into multiple cascaded combinatorial matrix operations to reduce the complexity of its hardware implementation; second, to reduce the influence of the non-ideal characteristics of the hardware, an online-programmable CIM architecture is designed that improves calculation accuracy by dynamically adjusting the weights; and lastly, SPICE simulation verifies that the proposed attention hardware architecture can be applied to the inference of deep neural networks. Based on a 100 nm CMOS process, compared with traditional attention hardware architectures, integration density and energy efficiency are increased by at least 91.38 times, and latency and computing efficiency are improved by about 12.5 times.
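The decomposition into cascaded matrix operations mirrors the standard attention formulation, Attention(Q, K, V) = softmax(QKᵀ/√d)·V, which is itself two matrix multiplies separated by an elementwise nonlinearity. A minimal NumPy sketch of that computation (the paper's specific hardware mapping and CIM details are not reproduced here):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention as cascaded matrix operations:
    scores = Q @ K.T / sqrt(d), weights = softmax(scores), out = weights @ V.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # matrix multiply 1
    weights = softmax(scores)       # elementwise nonlinearity
    return weights @ V              # matrix multiply 2

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
```

Each of the two matrix multiplies is the kind of operation a CIM macro accelerates natively; the softmax between them is what typically forces the hardware decompositions the paper discusses.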
PMID:39088504 | DOI:10.1109/TBCAS.2024.3436837
Attention-Aware Non-Rigid Image Registration for Accelerated MR Imaging
IEEE Trans Med Imaging. 2024 Aug;43(8):3013-3026. doi: 10.1109/TMI.2024.3385024.
ABSTRACT
Accurate motion estimation at high acceleration factors enables rapid motion-compensated reconstruction in Magnetic Resonance Imaging (MRI) without compromising the diagnostic image quality. In this work, we introduce an attention-aware deep learning-based framework that can perform non-rigid pairwise registration for fully sampled and accelerated MRI. We extract local visual representations to build similarity maps between the registered image pairs at multiple resolution levels and additionally leverage long-range contextual information using a transformer-based module to alleviate ambiguities in the presence of artifacts caused by undersampling. We combine local and global dependencies to perform simultaneous coarse and fine motion estimation. The proposed method was evaluated on in-house acquired fully sampled and accelerated data of 101 patients and 62 healthy subjects undergoing cardiac and thoracic MRI. The impact of motion estimation accuracy on the downstream task of motion-compensated reconstruction was analyzed. We demonstrate that our model derives reliable and consistent motion fields across different sampling trajectories (Cartesian and radial) and acceleration factors of up to 16x for cardiac motion and 30x for respiratory motion and achieves superior image quality in motion-compensated reconstruction qualitatively and quantitatively compared to conventional and recent deep learning-based approaches. The code is publicly available at https://github.com/lab-midas/GMARAFT.
PMID:39088484 | DOI:10.1109/TMI.2024.3385024
AmiR-P3: An AI-based microRNA prediction pipeline in plants
PLoS One. 2024 Aug 1;19(8):e0308016. doi: 10.1371/journal.pone.0308016. eCollection 2024.
ABSTRACT
BACKGROUND: MicroRNAs (miRNAs) are small noncoding RNAs that play important post-transcriptional regulatory roles in animals and plants. Despite the importance of plant miRNAs, the inherent complexity of miRNA biogenesis in plants hampers the application of standard miRNA prediction tools, which are often optimized for animal sequences. Therefore, computational approaches to predict putative miRNAs (merely) from genomic sequences, regardless of their expression levels or tissue specificity, are of great interest.
RESULTS: Here, we present AmiR-P3, a novel ab initio plant miRNA prediction pipeline that leverages the strengths of various utilities for its key computational steps. Users can readily adjust the prediction criteria based on the state-of-the-art biological knowledge of plant miRNA properties. The pipeline starts with finding the potential homologs of the known plant miRNAs in the input sequence(s) and ensures that they do not overlap with protein-coding regions. Then, by computing the secondary structure of the presumed RNA sequence based on the minimum free energy, a deep learning classification model is employed to predict potential pre-miRNA structures. Finally, a set of criteria is used to select the most likely miRNAs from the set of predicted miRNAs. We show that our method yields acceptable predictions in a variety of plant species.
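One concrete step in the pipeline above is ensuring that candidate loci do not overlap protein-coding regions. A minimal sketch of that interval-overlap filter, with hypothetical coordinates and function names (not AmiR-P3's actual code):

```python
def overlaps(a, b):
    """True if half-open genomic intervals a=(start, end) and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def filter_noncoding(candidates, coding_regions):
    """Keep only candidate loci that overlap no protein-coding region."""
    return [
        c for c in candidates
        if not any(overlaps(c, cds) for cds in coding_regions)
    ]

# Hypothetical coding regions and candidate miRNA loci on one sequence.
cds = [(100, 500), (900, 1400)]
hits = [(50, 80), (450, 520), (600, 700), (1390, 1450)]
kept = filter_noncoding(hits, cds)
```

In the real pipeline this filter sits between the homology search and the secondary-structure/classification stages, discarding homolog hits that fall inside annotated genes.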
CONCLUSION: AmiR-P3 does not (necessarily) require sequencing reads and/or assembled reference genomes, enabling it to identify conserved and novel putative miRNAs from any genomic or transcriptomic sequence. Therefore, AmiR-P3 is suitable for miRNA prediction even in less-studied plants, as it does not require any prior knowledge of the miRNA repertoire of the organism. AmiR-P3 is provided as a docker container, which is a portable and self-contained software package that can be readily installed and run on any platform and is freely available for non-commercial use from: https://hub.docker.com/r/micrornaproject/amir-p3.
PMID:39088479 | DOI:10.1371/journal.pone.0308016
Automated Behavioral Coding to Enhance the Effectiveness of Motivational Interviewing in a Chat-Based Suicide Prevention Helpline: Secondary Analysis of a Clinical Trial
J Med Internet Res. 2024 Aug 1;26:e53562. doi: 10.2196/53562.
ABSTRACT
BACKGROUND: With the rise of computer science and artificial intelligence, analyzing large data sets promises enormous potential in gaining insights for developing and improving evidence-based health interventions. One such intervention is the counseling strategy motivational interviewing (MI), which has been found effective in improving a wide range of health-related behaviors. Despite the simplicity of its principles, MI can be a challenging skill to learn and requires expertise to apply effectively.
OBJECTIVE: This study aims to investigate the performance of artificial intelligence models in classifying MI behavior and explore the feasibility of using these models in online helplines for mental health as an automated support tool for counselors in clinical practice.
METHODS: We used a coded data set of 253 MI counseling chat sessions from the 113 Suicide Prevention helpline. With 23,982 messages coded with the MI Sequential Code for Observing Process Exchanges codebook, we trained and evaluated 4 machine learning models and 1 deep learning model to classify client and counselor MI behavior based on language use.
RESULTS: The deep learning model BERTje outperformed all machine learning models, accurately predicting counselor behavior (accuracy=0.72, area under the curve [AUC]=0.95, Cohen κ=0.69). It differentiated MI congruent and incongruent counselor behavior (AUC=0.92, κ=0.65) and evocative and nonevocative language (AUC=0.92, κ=0.66). For client behavior, the model achieved an accuracy of 0.70 (AUC=0.89, κ=0.55). The model's interpretable predictions discerned client change talk and sustain talk, counselor affirmations, and reflection types, facilitating valuable counselor feedback.
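The Cohen κ values reported above measure agreement between the model's labels and the human coding, corrected for chance agreement: κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected from the two raters' label frequencies. A small from-scratch sketch with hypothetical behavioral codes:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), agreement corrected for chance."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Expected agreement from each rater's marginal label frequencies.
    p_e = sum(true_counts[c] * pred_counts[c] for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical client codes: perfect agreement gives kappa = 1.
a = ["change", "sustain", "change", "neutral"]
```

Agreement no better than chance yields κ near 0, which is why κ is preferred over raw accuracy when, as here, some behavioral codes are far more frequent than others.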
CONCLUSIONS: The results of this study demonstrate that artificial intelligence techniques can accurately classify MI behavior, indicating their potential as a valuable tool for enhancing MI proficiency in online helplines for mental health. Provided that the data set size is sufficiently large with enough training samples for each behavioral code, these methods can be trained and applied to other domains and languages, offering a scalable and cost-effective way to evaluate MI adherence, accelerate behavioral coding, and provide therapists with personalized, quick, and objective feedback.
PMID:39088244 | DOI:10.2196/53562