Deep learning

A Recognition System for Diagnosing Salivary Gland Neoplasms Based on Vision Transformer

Sun, 2024-11-03 06:00

Am J Pathol. 2024 Oct 26:S0002-9440(24)00396-1. doi: 10.1016/j.ajpath.2024.09.010. Online ahead of print.

ABSTRACT

Salivary gland neoplasms (SGNs) represent a group of human neoplasms characterized by remarkable cytomorphological diversity, which frequently poses diagnostic challenges. Accurate histological categorization of salivary tumors is crucial for making precise diagnoses and guiding decisions regarding patient management. Within the scope of this study, a computer-aided diagnosis model utilizing Vision Transformer, a cutting-edge deep-learning model in computer vision, was developed to accurately classify the most prevalent subtypes of SGNs: pleomorphic adenoma, myoepithelioma, Warthin's tumor, basal cell adenoma, oncocytic adenoma, cystadenoma, mucoepidermoid carcinoma, and salivary adenoid cystic carcinoma. The dataset comprised 3046 whole slide images (WSIs) of histologically confirmed salivary gland tumors, encompassing nine distinct tissue categories. SGN-ViT exhibited impressive performance in classifying the eight salivary gland tumors, achieving an accuracy of 0.9966, an AUC of 0.9899, a precision of 0.9848, a recall of 0.9848, and an F1-score of 0.9848, surpassing benchmark models in diagnostic performance. On a subset of 100 WSIs, SGN-ViT demonstrated diagnostic performance comparable to that of the chief pathologist while significantly reducing diagnosis time, indicating that SGN-ViT holds potential as a valuable computer-aided diagnostic tool for salivary tumors that can enhance the diagnostic accuracy of junior pathologists.
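
The abstract does not include code; as a rough illustration, a Vision Transformer classifier for an eight-class problem of this kind can be fine-tuned on WSI-derived patches roughly as follows. This is a minimal sketch assuming the timm library; the model variant, input size, and hyperparameters are placeholders, not the authors' SGN-ViT configuration.

```python
# Minimal fine-tuning sketch for an 8-class tumor-subtype classifier on
# WSI-derived patches. Model variant, input size, and hyperparameters are
# illustrative assumptions, not the authors' SGN-ViT configuration.
import timm
import torch
import torch.nn as nn

NUM_CLASSES = 8  # the eight SGN subtypes listed in the abstract

model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One update on a batch of (B, 3, 224, 224) patch tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.randn(4, 3, 224, 224),
                  torch.randint(0, NUM_CLASSES, (4,)))
print(loss)
```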

PMID:39490441 | DOI:10.1016/j.ajpath.2024.09.010

Categories: Literature Watch

Knowledge-based planning, multicriteria optimization, and plan scorecards: A winning combination

Sun, 2024-11-03 06:00

Radiother Oncol. 2024 Oct 26:110598. doi: 10.1016/j.radonc.2024.110598. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: The ESTRO 2023 Physics Workshop hosted the Fully-Automated Radiotherapy Treatment Planning (Auto-RTP) Challenge, where participants were provided with CT images from 16 prostate cancer patients (6 prostate only, 6 prostate + nodes, and 4 prostate bed + nodes) across 3 challenge phases, with the goal of automatically generating treatment plans with minimal user intervention. Here, we present our team's winning approach, developed to adapt swiftly to contouring guidelines and treatment prescriptions different from those used in our clinic.

MATERIALS AND METHODS: Our planning pipeline comprises two main components: 1) an auto-contouring engine and 2) an auto-planning engine, both internally developed and activated via DICOM operations. The auto-contouring engine employs 3D U-Net models trained on a dataset of 600 prostate cancer patients for normal tissues, 253 cases for pelvic lymph nodes, and 32 cases for prostate bed. The auto-planning engine, utilizing the Eclipse Scripting Application Programming Interface, automates target volume definition, field geometry, planning parameters, optimization, and dose calculation. RapidPlan models, combined with multicriteria optimization and scorecards defined on the challenge scoring criteria, were employed to ensure plans met challenge objectives. We report leaderboard scores (0-100, where 100 is a perfect score), which combine organ-at-risk and target dose metrics on the provided cases.
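
The challenge's exact scoring function is not given in the abstract; the following is a minimal sketch of how a plan scorecard of this kind can aggregate dose metrics into a 0-100 score, with each metric earning points through a piecewise-linear function. All metric names, goals, limits, and weights below are illustrative assumptions.

```python
# Illustrative plan scorecard: each dose metric earns points through a
# piecewise-linear function, and the points sum to a 0-100 plan score.
# Metric names, goals, limits, and weights are assumptions for illustration,
# not the challenge's actual scoring criteria.
def linear_score(value, ideal, limit, max_points):
    """Full points at or better than `ideal`, zero at or beyond `limit`."""
    better_low = ideal < limit          # True when lower values are better
    if (value <= ideal) if better_low else (value >= ideal):
        return max_points
    if (value >= limit) if better_low else (value <= limit):
        return 0.0
    return max_points * abs(limit - value) / abs(limit - ideal)

SCORECARD = [
    # (metric key, ideal, limit, max points)
    ("PTV_D98_pct", 98.0, 93.0, 40.0),        # target coverage: higher better
    ("Rectum_V60Gy_pct", 10.0, 35.0, 30.0),   # OAR sparing: lower better
    ("Bladder_V65Gy_pct", 15.0, 50.0, 30.0),
]

def plan_score(dose_metrics):
    return sum(linear_score(dose_metrics[key], ideal, limit, pts)
               for key, ideal, limit, pts in SCORECARD)

print(plan_score({"PTV_D98_pct": 97.0, "Rectum_V60Gy_pct": 12.0,
                  "Bladder_V65Gy_pct": 20.0}))  # ~85.3 of 100
```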

RESULTS: Our team secured 1st place across all three challenge phases, achieving leaderboard scores of 79.9, 77.3, and 78.5, outperforming the 2nd-place scores by margins of 6.4, 0.4, and 2.9 points for each phase, respectively. The highest plan scores were for prostate-only cases, with an average score exceeding 90. Upon challenge completion, a "Plan Only" phase was opened in which the organizers provided contours for planning. Our current score of 90.0 places us at the top of the "Plan Only" leaderboard.

CONCLUSIONS: Our automated pipeline demonstrates adaptability to diverse guidelines, indicating progress towards fully automated radiotherapy planning. Future studies are needed to assess the clinical acceptability and integration of automatically generated plans.

PMID:39490417 | DOI:10.1016/j.radonc.2024.110598

Categories: Literature Watch

Evaluation of a deep learning-based software to automatically detect and quantify breast arterial calcifications on digital mammogram

Sun, 2024-11-03 06:00

Diagn Interv Imaging. 2024 Oct 25:S2211-5684(24)00233-X. doi: 10.1016/j.diii.2024.10.001. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of this study was to evaluate an artificial intelligence (AI) software that automatically detects and quantifies breast arterial calcifications (BAC).

MATERIALS AND METHODS: Women who underwent both mammography and thoracic computed tomography (CT) from 2009 to 2018 were retrospectively included in this single-center study. Deep learning-based software was used to automatically detect and quantify BAC with a BAC AI score ranging from 0 to 10 points. Results were compared using the Spearman correlation test with a previously described BAC manual score based on radiologists' visual quantification of BAC on the mammogram. The coronary artery calcification (CAC) score was assessed manually using a 12-point scale on CT. The diagnostic performance of the marked BAC AI score (defined as a BAC AI score ≥ 5) for the detection of marked CAC (CAC score ≥ 4) was analyzed in terms of sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC).

RESULTS: A total of 502 women with a median age of 62 years (age range: 42-96 years) were included. The BAC AI score showed a very strong correlation with the BAC manual score (r = 0.83). Marked BAC AI score had 32.7 % sensitivity (37/113; 95 % confidence interval [CI]: 24.2-42.2), 96.1 % specificity (374/389; 95 % CI: 93.7-97.8), 71.2 % positive predictive value (37/52; 95 % CI: 56.9-82.9), 83.1 % negative predictive value (374/450; 95 % CI: 79.3-86.5), and 81.9 % accuracy (411/502; 95 % CI: 78.2-85.1) for the diagnosis of marked CAC. The AUC of the marked BAC AI score for the diagnosis of marked CAC was 0.64 (95 % CI: 0.60-0.69).
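
The reported metrics follow directly from the confusion counts quoted in parentheses (TP = 37, FN = 76, TN = 374, FP = 15); the short sketch below reproduces them and illustrates the Spearman test named in the methods, using toy score arrays since the patient-level data are not public.

```python
# Reproducing the reported diagnostic metrics from the confusion counts in
# the abstract (TP=37, FN=76, TN=374, FP=15), plus the Spearman test named
# in the methods (toy score arrays; patient-level data are not public).
from scipy.stats import spearmanr

tp, fn, tn, fp = 37, 76, 374, 15
sensitivity = tp / (tp + fn)                 # 37/113  -> 0.327
specificity = tn / (tn + fp)                 # 374/389 -> 0.961
ppv = tp / (tp + fp)                         # 37/52   -> 0.712
npv = tn / (tn + fn)                         # 374/450 -> 0.831
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 411/502 -> 0.819
print(sensitivity, specificity, ppv, npv, accuracy)

# Spearman correlation between automated and manual BAC scores (toy data):
rho, p = spearmanr([0, 2, 5, 7, 9, 1, 4, 8], [0, 1, 4, 8, 9, 2, 3, 7])
print(f"Spearman r = {rho:.2f} (p = {p:.3f})")
```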

CONCLUSION: The automated BAC AI score shows a very strong correlation with manual BAC scoring in this external validation cohort. The automated BAC AI score may be a useful tool to promote the integration of BAC into mammography reports and to improve awareness of a woman's cardiovascular risk status.

PMID:39490357 | DOI:10.1016/j.diii.2024.10.001

Categories: Literature Watch

Automated grading system for quantifying KOH microscopic images in dermatophytosis

Sun, 2024-11-03 06:00

Diagn Microbiol Infect Dis. 2024 Oct 18;111(1):116565. doi: 10.1016/j.diagmicrobio.2024.116565. Online ahead of print.

ABSTRACT

Concerning the progression of dermatophytosis and its prognosis, quantification studies play a significant role. The present work aims to develop an automated grading system for quantifying fungal loads in KOH microscopic images of skin scrapings collected from dermatophytosis patients. Fungal filaments in the images were segmented using a U-Net model to obtain the pixel counts. In the absence of any threshold pixel count for grading these images as low, moderate, or high, experts were assigned the task of manual grading. Grades and corresponding pixel counts were subjected to statistical procedures involving cumulative receiver operating characteristic curve analysis to develop an automated grading system. The model's specificity, accuracy, precision, and sensitivity metrics exceeded 92%, 86%, 82%, and 76%, respectively. 'Almost perfect agreement' (Fleiss kappa = 0.847) was obtained between automated and manual grading. This pixel count-based grading of KOH images offers a novel, cost-effective solution for quantifying fungal load.
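
A minimal sketch of the grading step described above: the U-Net's binary filament mask is reduced to a positive-pixel count, which thresholds map to a grade. The cutoff values below are placeholders; in the paper they are derived from ROC analysis against expert grades.

```python
# Sketch of the grading step: a U-Net yields a binary fungal-filament mask,
# and the positive-pixel count maps to a grade via thresholds. The cutoffs
# below are placeholders; the paper derives them from ROC analysis against
# expert grades.
import numpy as np

LOW_MAX, MODERATE_MAX = 5_000, 20_000   # assumed pixel-count cutoffs

def grade_from_mask(mask: np.ndarray) -> str:
    """mask: binary (H, W) segmentation output for one KOH image."""
    pixels = int(mask.sum())
    if pixels <= LOW_MAX:
        return "low"
    if pixels <= MODERATE_MAX:
        return "moderate"
    return "high"

toy_mask = (np.random.rand(512, 512) > 0.97).astype(np.uint8)
print(grade_from_mask(toy_mask), int(toy_mask.sum()))
```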

PMID:39490258 | DOI:10.1016/j.diagmicrobio.2024.116565

Categories: Literature Watch

The emerging role of artificial intelligence in neuropathology: Where are we and where do we want to go?

Sun, 2024-11-03 06:00

Pathol Res Pract. 2024 Oct 23;263:155671. doi: 10.1016/j.prp.2024.155671. Online ahead of print.

ABSTRACT

The field of neuropathology, a subspecialty of pathology which studies the diseases affecting the nervous system, is experiencing significant changes due to advancements in artificial intelligence (AI). Traditionally reliant on histological methods and clinical correlations, neuropathology is now undergoing a revolution due to the development of AI technologies like machine learning (ML) and deep learning (DL). These technologies enhance diagnostic accuracy, optimize workflows, and enable personalized treatment strategies. AI algorithms excel at analyzing histopathological images, often revealing subtle morphological changes missed by conventional methods. For example, deep learning models applied to digital pathology can effectively differentiate tumor grades and detect rare pathologies, leading to earlier and more precise diagnoses. Neuroimaging is another area where AI contributes: enhanced analysis of MRI and CT scans supports early detection of neurodegenerative diseases. By identifying biomarkers and progression patterns, AI aids in timely therapeutic interventions, potentially slowing disease progression. In molecular pathology, AI's ability to analyze complex genomic data helps uncover the genetic and molecular basis of neuropathological conditions, facilitating personalized treatment plans. AI-driven automation streamlines routine diagnostic tasks, allowing pathologists to focus on complex cases, especially in settings with limited resources. This review explores AI's integration into neuropathology, highlighting its current applications, benefits, challenges, and future directions.

PMID:39490225 | DOI:10.1016/j.prp.2024.155671

Categories: Literature Watch

Optimized deep learning networks for accurate identification of cancer cells in bone marrow

Sun, 2024-11-03 06:00

Neural Netw. 2024 Oct 18;181:106822. doi: 10.1016/j.neunet.2024.106822. Online ahead of print.

ABSTRACT

Radiologists utilize images from X-rays, magnetic resonance imaging, or computed tomography scans to diagnose bone cancer. Manual methods are labor-intensive and may need specialized knowledge. As a result, creating an automated process for distinguishing between malignant and healthy bone is essential. Bones that have cancer have a different texture than bones in unaffected areas. Diagnosing hematological illnesses relies on correctly labeling and categorizing nucleated cells in the bone marrow. However, timely diagnosis and treatment are hampered by the need for pathologists to identify specimens manually, which can be error-prone and time-consuming. Humanity's ability to evaluate and identify these more complicated illnesses has been significantly bolstered by the development of artificial intelligence, particularly machine and deep learning. Nevertheless, much research and development is needed to enhance cancer cell identification and lower false alarm rates. We built a deep learning model for morphological analysis to solve this problem. This paper introduces a novel deep convolutional neural network architecture in which hybrid multi-objective and category-based optimization algorithms adaptively optimize the hyperparameters. Using the processed cell images as input, the proposed model is then trained with an optimized attention-based multi-scale convolutional neural network to identify the kind of cancer cells in the bone marrow. Extensive experiments are run on publicly available datasets, with the results measured and evaluated using a wide range of performance indicators. The overall accuracy of 99.7% was found to be superior to that of previously trained deep learning models.
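
The abstract names an "attention-based multi-scale convolutional neural network" without architectural detail. One plausible reading, sketched below in PyTorch, uses parallel convolutions at several kernel sizes fused by squeeze-and-excitation-style channel attention; every dimension and design choice here is an assumption.

```python
# One plausible reading of an "attention-based multi-scale CNN": parallel
# convolutions at several kernel sizes, concatenated and reweighted by a
# squeeze-and-excitation-style channel attention block. Purely illustrative;
# the paper's exact architecture and optimized hyperparameters are not given.
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5))
        fused = out_ch * 3
        self.attn = nn.Sequential(            # channel attention
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(fused, fused // 4), nn.ReLU(),
            nn.Linear(fused // 4, fused), nn.Sigmoid())

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)  # multi-scale
        w = self.attn(y).unsqueeze(-1).unsqueeze(-1)
        return y * w                           # reweight fused channels

x = torch.randn(2, 3, 64, 64)                  # toy cell-image batch
print(MultiScaleAttentionBlock(3, 16)(x).shape)  # torch.Size([2, 48, 64, 64])
```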

PMID:39490023 | DOI:10.1016/j.neunet.2024.106822

Categories: Literature Watch

Deep learning assisted femtosecond laser-ablation spark-induced breakdown spectroscopy employed for rapid and accurate identification of bismuth brass

Sun, 2024-11-03 06:00

Anal Chim Acta. 2024 Nov 22;1330:343271. doi: 10.1016/j.aca.2024.343271. Epub 2024 Sep 25.

ABSTRACT

BACKGROUND: Owing to its excellent machinability and low toxicity, bismuth brass has been widely used in manufacturing various industrial products. Thus, it is of significance to perform rapid and accurate identification of bismuth brass to reveal its alloying properties. However, the analytical lines of various elements in bismuth brass alloy products based on conventional laser-induced breakdown spectroscopy (LIBS) are usually weak. Moreover, the analytical lines of various elements often overlap, seriously interfering with the identification of bismuth brass alloys. To address these challenges, an advanced strategy that achieves ultra-high-accuracy identification of bismuth brass alloys is highly desirable.

RESULTS: This work proposed a novel method for rapidly and accurately identifying bismuth brass samples using deep learning-assisted femtosecond laser-ablation spark-induced breakdown spectroscopy (fs-LA-SIBS). With the help of fs-LA-SIBS, a spectral database containing high-quality LIBS spectra of element components was constructed. Then, a one-dimensional convolutional neural network (CNN) was introduced to distinguish five species of bismuth brass alloy. Notably, the optimal CNN model achieved an identification accuracy of 100 % for species identification. To figure out the spectral features, we proposed a novel approach named "segmented fs-LA-SIBS wavelength". The identification contributions from various wavelength intervals were extracted by the optimal CNN model. This clearly showed that the spectral feature differences in the wavelength interval from 336.05 to 364.66 nm produced the largest identification contribution toward the accuracy of 100 %. More importantly, the feature differences for four elements, namely Ni, Cu, Sn, and Zn, were verified to contribute the most to the identification accuracy of 100 %.
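
One way to implement a "segmented wavelength" contribution analysis of this kind is occlusion: mask one wavelength interval at a time and measure the drop in the trained classifier's accuracy. The sketch below assumes this reading; the toy 1D CNN and random spectra stand in for the paper's model and data.

```python
# Sketch of a segmented-wavelength contribution analysis: mask one spectral
# interval at a time and measure the drop in a trained 1D-CNN's accuracy.
# The CNN below is an untrained toy stand-in; the paper's model and exact
# contribution definition are not specified in the abstract.
import torch
import torch.nn as nn

cnn = nn.Sequential(                        # toy 1D CNN over a LIBS spectrum
    nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 5))         # five bismuth-brass species

def accuracy(model, spectra, labels):
    with torch.no_grad():
        return (model(spectra).argmax(1) == labels).float().mean().item()

def segment_contributions(model, spectra, labels, n_segments=10):
    base = accuracy(model, spectra, labels)
    seg_len = spectra.shape[-1] // n_segments
    drops = []
    for i in range(n_segments):
        masked = spectra.clone()
        masked[..., i * seg_len:(i + 1) * seg_len] = 0.0  # occlude interval
        drops.append(base - accuracy(model, masked, labels))
    return drops  # larger drop => larger identification contribution

spectra = torch.randn(32, 1, 2000)          # toy spectra (batch, 1, channels)
labels = torch.randint(0, 5, (32,))
print(segment_contributions(cnn, spectra, labels))
```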

SIGNIFICANCE: To the best of our knowledge, this is the first study in which a one-dimensional CNN assisted by fs-LA-SIBS was successfully employed for the identification of bismuth brass. Compared with conventional machine learning methods, the CNN showed significantly better performance. To reveal the subtle spectral differences, the classification contributions from spectral features were accurately quantified by our proposed "segmented fs-LA-SIBS wavelength" method. It can be expected that CNNs assisted by fs-LA-SIBS hold great promise for identifying differences among various element components in the metallurgical field.

PMID:39489954 | DOI:10.1016/j.aca.2024.343271

Categories: Literature Watch

The Impact of Deep Learning on Determining the Necessity of Bronchoscopy in Pediatric Foreign Body Aspiration: Can Negative Bronchoscopy Rates Be Reduced?

Sun, 2024-11-03 06:00

J Pediatr Surg. 2024 Oct 19;60(2):162014. doi: 10.1016/j.jpedsurg.2024.162014. Online ahead of print.

ABSTRACT

INTRODUCTION: This study aimed to evaluate the role of deep learning methods in diagnosing foreign body aspiration (FBA) to reduce the frequency of negative bronchoscopy and minimize potential complications.

METHODS: We retrospectively analysed data and radiographs from 47 pediatric patients who presented to our hospital with suspected FBA between 2019 and 2023, together with a control group of 63 healthy children, yielding a total of 110 posteroanterior chest X-ray (PA CXR) images. The images were analysed using both a convolutional neural network (CNN)-based deep learning method and multiple logistic regression (MLR).

RESULTS: The CNN-based deep learning method correctly predicted 16 out of 17 bronchoscopy-positive images, while the MLR model correctly predicted 13. The CNN method misclassified one positive image as negative and two negative images as positive. The MLR model misclassified four positive images as negative and two negative images as positive. The sensitivity of the CNN predictor was 94.1 %, specificity was 97.8 %, accuracy was 97.3 %, and the F1 score was 0.914. The sensitivity of the MLR predictor was 76.5 %, specificity was 97.8 %, accuracy was 94.5 %, and the F1 score was 0.812.
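
The reported CNN metrics follow directly from the stated counts (17 bronchoscopy-positive and 93 negative images among the 110), as the short check below confirms.

```python
# Quick check that the reported CNN metrics follow from the stated counts:
# 17 bronchoscopy-positive and 93 negative images (110 total), with 1 false
# negative and 2 false positives.
tp, fn, fp, tn = 16, 1, 2, 91
sensitivity = tp / (tp + fn)                # 16/17   = 0.941
specificity = tn / (tn + fp)                # 91/93   = 0.978
accuracy = (tp + tn) / 110                  # 107/110 = 0.973
precision = tp / (tp + fp)                  # 16/18   = 0.889
f1 = 2 * precision * sensitivity / (precision + sensitivity)  # 0.914
print(sensitivity, specificity, accuracy, f1)
```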

CONCLUSION: The CNN-deep learning method demonstrated high accuracy in determining the necessity for bronchoscopy in children with suspected FBA, significantly reducing the rate of negative bronchoscopies. This reduction may contribute to fewer unnecessary bronchoscopy procedures and complications. However, considering the risk of missing a positive case, this method should be used in conjunction with clinical evaluations. To overcome the limitations of our study, future research with larger multi-center datasets is needed to validate and enhance the findings.

TYPE OF STUDY: Original article.

LEVEL OF EVIDENCE: III.

PMID:39489944 | DOI:10.1016/j.jpedsurg.2024.162014

Categories: Literature Watch

AI derived ECG global longitudinal strain compared to echocardiographic measurements

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26458. doi: 10.1038/s41598-024-78268-8.

ABSTRACT

Left ventricular (LV) global longitudinal strain (LVGLS) is versatile; however, it is difficult to obtain. We evaluated the potential of an artificial intelligence (AI)-generated electrocardiography score for LVGLS estimation (ECG-GLS score) to diagnose LV systolic dysfunction and predict prognosis of patients with heart failure (HF). A convolutional neural network-based deep-learning algorithm was trained to estimate the echocardiography-derived GLS (LVGLS). ECG-GLS score performance was evaluated using data from an acute HF registry at another tertiary hospital (n = 1186). In the validation cohort, the ECG-GLS score could identify patients with impaired LVGLS (≤ 12%) (area under the receiver-operating characteristic curve [AUROC], 0.82; sensitivity, 85%; specificity, 59%). The performance of ECG-GLS in identifying patients with an LV ejection fraction (LVEF) < 40% (AUROC, 0.85) was comparable to that of LVGLS (AUROC, 0.83) (p = 0.08). Five-year outcomes (all-cause death; composite of all-cause death and hospitalization for HF) occurred significantly more frequently in patients with low ECG-GLS scores. Low ECG-GLS score was a significant risk factor for these outcomes after adjustment for other clinical risk factors and LVEF. The ECG-GLS score demonstrated a meaningful correlation with the LVGLS and is effective in risk stratification for long-term prognosis after acute HF, possibly acting as a practical alternative to the LVGLS.

PMID:39488646 | DOI:10.1038/s41598-024-78268-8

Categories: Literature Watch

Development of a method for estimating asari clam distribution by combining three-dimensional acoustic coring system and deep neural network

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26467. doi: 10.1038/s41598-024-77893-7.

ABSTRACT

Developing non-contact, non-destructive monitoring methods for marine life is crucial for sustainable resource management. Recent advancements in monitoring technologies and machine learning analysis have enhanced underwater image and acoustic data acquisition. Systems to obtain 3D acoustic data from beneath the seafloor are being developed; however, manual analysis of large 3D datasets is challenging. Therefore, an automatic method for analyzing benthic resource distribution is needed. This study developed a system to estimate benthic resource distribution non-destructively by combining high-precision habitat data acquisition using high-frequency ultrasonic waves with prediction models based on a 3D convolutional neural network (3D-CNN). The system estimated the distribution of asari clams (Ruditapes philippinarum) in Lake Hamana, Japan. Clam presence and count per voxel were successfully estimated with an ROC-AUC of 0.9 and a macro-average ROC-AUC of 0.8, respectively. This system visualized clam distribution and estimated numbers, demonstrating its effectiveness for quantifying marine resources beneath the seafloor.
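
A minimal PyTorch sketch of a 3D-CNN voxel classifier of the kind described, taking a sub-seafloor acoustic volume and emitting presence logits; the input shape, depth, and classification head are assumptions, and the count-estimation branch is omitted.

```python
# Minimal 3D-CNN sketch for classifying clam presence in an acoustic voxel
# volume. Input shape, depth, and head are assumptions; the paper's
# architecture and its count-estimation output are not specified here.
import torch
import torch.nn as nn

class VoxelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8 * 8 * 8, 2))

    def forward(self, x):                    # x: (B, 1, 32, 32, 32) voxels
        return self.head(self.features(x))   # logits: clam absent / present

print(VoxelNet()(torch.randn(4, 1, 32, 32, 32)).shape)  # torch.Size([4, 2])
```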

PMID:39488638 | DOI:10.1038/s41598-024-77893-7

Categories: Literature Watch

Data-driven and privacy-preserving risk assessment method based on federated learning for smart grids

Sun, 2024-11-03 06:00

Commun Eng. 2024 Nov 2;3(1):154. doi: 10.1038/s44172-024-00300-6.

ABSTRACT

Timely and precise security risk evaluation is essential for optimal operational planning, threat detection, and the reliable operation of smart grids. The smart grid can integrate extensive high-dimensional operational data. However, conventional risk assessment techniques often struggle with managing such data volumes. Moreover, many methods use centralized evaluation, potentially neglecting privacy issues. Additionally, power grid operators are often reluctant to share sensitive risk-related data due to privacy concerns. Here we introduce a data-driven and privacy-preserving risk assessment method that safeguards power grid operators' data privacy by integrating deep learning and secure encryption in a federated learning framework. The method involves: (1) developing a two-tier risk indicator system and an expanded dataset; (2) using a deep convolutional neural network-based model to analyze the relationship between system variables and risk levels; and (3) creating a secure, federated risk assessment protocol with homomorphic encryption to protect model parameters during training. Experiments on IEEE 14-bus and IEEE 118-bus systems show that our approach ensures high assessment accuracy and data privacy.
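
The core of step (3) can be illustrated with additively homomorphic encryption: each operator encrypts its local model update so the aggregator can sum ciphertexts without seeing any plaintext parameters. The sketch below uses the python-paillier (phe) package as a stand-in for the paper's scheme; the toy updates and key size are illustrative.

```python
# Core idea of the secure aggregation step: each grid operator encrypts its
# local model update with an additively homomorphic scheme so the aggregator
# can sum ciphertexts without seeing plaintext parameters. Paillier via the
# python-paillier `phe` package is a tooling assumption, standing in for the
# paper's scheme; updates and key size are toy values.
from phe import paillier

pubkey, privkey = paillier.generate_paillier_keypair(n_length=1024)

operators = [[0.12, -0.30, 0.05],   # toy local updates from three operators
             [0.10, -0.25, 0.00],
             [0.08, -0.35, 0.10]]
encrypted = [[pubkey.encrypt(w) for w in update] for update in operators]

# Aggregator: homomorphic addition of ciphertexts only.
summed = encrypted[0]
for enc in encrypted[1:]:
    summed = [a + b for a, b in zip(summed, enc)]

# Only the private-key holder recovers the averaged update.
avg_update = [privkey.decrypt(c) / len(operators) for c in summed]
print(avg_update)   # [0.10, -0.30, 0.05]
```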

PMID:39488597 | DOI:10.1038/s44172-024-00300-6

Categories: Literature Watch

Enhancing runoff predictions in data-sparse regions through hybrid deep learning and hydrologic modeling

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26450. doi: 10.1038/s41598-024-77678-y.

ABSTRACT

Amidst growing concerns over climate-induced extreme weather events, precise flood forecasting becomes imperative, especially in regions like the Chaersen Basin where data scarcity compounds the challenge. Traditional hydrologic models, while reliable, often fall short in areas with insufficient observational data. This study introduces a hybrid modeling approach that combines the deep learning capabilities of the Informer model with the robust hydrological simulation of the WRF-Hydro model to enhance runoff predictions in such data-sparse regions. Trained initially on the diverse and extensive CAMELS dataset in the United States, the Informer model successfully applied its learned insights to predict runoff in the Chaersen Basin, leveraging transfer learning to bridge data gaps. Concurrently, the WRF-Hydro model, when integrated with Global Forecast System (GFS) data, provided a basis for comparison and further refinement of flood prediction accuracy. The integration of these models significantly improved prediction precision: the synergy between the Informer's advanced pattern recognition and the WRF-Hydro's physical modeling strength enhanced the prediction accuracy. The final predictions for the years 2015 and 2016 demonstrated notable increases in the Nash-Sutcliffe Efficiency (NSE) and the Index of Agreement (IOA) metrics, confirming the effectiveness of the hybrid model in capturing complex hydrological dynamics during runoff predictions. Specifically, in 2015, the NSE improved from 0.5 with WRF-Hydro and 0.63 with the Informer model to 0.66 using the hybrid model, while in 2016, the NSE increased from 0.42 to 0.76. Similarly, the IOA in 2015 rose from 0.83 with WRF-Hydro and 0.84 with the Informer model to 0.87 using the hybrid approach, and in 2016, it increased from 0.78 to 0.92. Further investigation into the respective contributions of the two models revealed that the hybrid model achieved optimal performance when the Informer model's contribution was maintained between 60% and 80%.
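
The two reported skill scores and the hybrid weighting can be sketched directly from their standard definitions: NSE and Willmott's Index of Agreement, with the hybrid prediction formed as a weighted blend of the two models' runoff. The toy series and weight grid below are illustrative; only the 60-80% finding comes from the paper.

```python
# Sketch of the two reported skill scores and the hybrid weighting: blend
# Informer and WRF-Hydro runoff with weight alpha and pick the alpha that
# maximizes NSE. Formulas are the standard definitions; the toy series and
# search grid are illustrative.
import numpy as np

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ioa(obs, sim):  # Willmott's Index of Agreement
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((obs - sim) ** 2) / denom

obs = np.array([1.0, 3.0, 8.0, 5.0, 2.0, 1.5])       # toy observed runoff
informer = np.array([1.2, 2.6, 7.1, 5.5, 2.4, 1.2])   # toy model outputs
wrf_hydro = np.array([0.8, 3.5, 6.0, 4.2, 2.9, 1.9])

best = max(((a, nse(obs, a * informer + (1 - a) * wrf_hydro))
            for a in np.linspace(0, 1, 21)), key=lambda t: t[1])
blend = best[0] * informer + (1 - best[0]) * wrf_hydro
print(f"best Informer weight = {best[0]:.2f}, NSE = {best[1]:.3f}, "
      f"IOA = {ioa(obs, blend):.3f}")
```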

PMID:39488589 | DOI:10.1038/s41598-024-77678-y

Categories: Literature Watch

Predicting removal of arsenic from groundwater by iron based filters using deep neural network models

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26428. doi: 10.1038/s41598-024-76758-3.

ABSTRACT

Arsenic (As) contamination in drinking water has been highlighted for its environmental significance and potential health implications. Iron-based filters are cost-effective and sustainable solutions for As removal from contaminated water. The application of Machine Learning (ML) models to investigate and optimize As removal using iron-based filters has so far been limited. The present study developed Deep Learning Neural Network (DLNN) models for predicting the removal of As and other contaminants by iron-based filters from groundwater. A small Original Dataset (ODS) consisting of 20 data points and 13 groundwater parameters was obtained from the field performances of 20 individual iron-amended ceramic filters. Cubic-spline interpolation (CSI) expanded the ODS, generating 1600 interpolated data points (IDPs) without duplication. A Bayesian optimization algorithm tuned the model hyper-parameters, and the IDPs were used to train all the models in a stratified fivefold cross-validation (CV) setup. The models demonstrated reliable performances, with a coefficient of determination (R2) of 0.990-0.999 for As, 0.774-0.976 for iron (Fe), 0.934-0.954 for phosphorus (P), and 0.878-0.998 for manganese (Mn) in the effluent. Sobol sensitivity analysis revealed that As (total-order index (ST) = 0.563), P (ST = 0.441), Eh (ST = 0.712), and Temp (ST = 0.371) are the most sensitive parameters for the removal of As, Fe, P, and Mn, respectively. The comprehensive approach, from data expansion through DLNN model development, provides a valuable tool for estimating optimal As removal conditions from groundwater.
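
Two of the methods steps map directly onto standard tooling: cubic-spline interpolation (SciPy) to densify a small dataset, and Sobol total-order indices (SALib) for sensitivity analysis. The sketch below assumes those libraries; the toy surrogate model and parameter bounds are placeholders for the study's 13 groundwater parameters.

```python
# Sketch of two steps named in the methods: cubic-spline interpolation to
# densify a small dataset (SciPy) and Sobol total-order sensitivity indices
# (SALib). The problem definition, bounds, and surrogate are placeholders,
# not the study's 13 groundwater parameters or trained DLNN.
import numpy as np
from scipy.interpolate import CubicSpline
from SALib.sample import saltelli
from SALib.analyze import sobol

# 1) Expand 20 measurements of one parameter into a dense interpolated set.
x = np.arange(20)                        # 20 original data points (ODS)
y = np.random.rand(20)                   # toy parameter values
dense_x = np.linspace(0, 19, 1600)       # 1600 interpolated points (IDPs)
dense_y = CubicSpline(x, y)(dense_x)

# 2) Sobol total-order sensitivity of a model f with respect to its inputs.
problem = {"num_vars": 3, "names": ["As_in", "Eh", "Temp"],
           "bounds": [[0, 1], [-0.3, 0.5], [5, 35]]}
samples = saltelli.sample(problem, 256)
f = lambda s: s[:, 0] * 0.7 + s[:, 1] * 0.2 + s[:, 2] * 0.01  # toy surrogate
St = sobol.analyze(problem, f(samples))["ST"]
print(dict(zip(problem["names"], St.round(3))))
```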

PMID:39488582 | DOI:10.1038/s41598-024-76758-3

Categories: Literature Watch

A deep learning approach for ovarian cancer detection and classification based on fuzzy deep learning

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26463. doi: 10.1038/s41598-024-75830-2.

ABSTRACT

Different oncologists make their own decisions about the detection and classification of the type of ovarian cancer from histopathological whole slide images. However, an automated system that is more accurate and standardized is needed for decision-making, which is essential for early detection of ovarian cancer. To help doctors, an automated detection and classification system for ovarian cancer is proposed. This model starts by extracting the main features from the histopathology images based on the ResNet-50 model to detect and classify the cancer. Then, recursive feature elimination based on a decision tree is introduced to remove unnecessary features extracted during the feature extraction process. Adam optimizers were implemented to optimize the network's weights during training. Finally, deep learning and fuzzy logic are combined to classify the images of ovarian cancer. The dataset consists of 288 hematoxylin and eosin (H&E)-stained whole slides with clinical information from 78 patients, comprising 162 effective and 126 invalid whole slide images (WSIs) obtained from different tissue blocks of post-treatment specimens. Experimental results show that the model can diagnose ovarian cancer with a potential accuracy of 98.99%, sensitivity of 99%, specificity of 98.96%, and F1-score of 98.99%. These promising results indicate the potential of fuzzy deep-learning classifiers for predicting ovarian cancer.
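
The feature-selection stage described above corresponds closely to scikit-learn's recursive feature elimination driven by a decision tree. A minimal sketch, with random vectors standing in for ResNet-50 embeddings and the downstream fuzzy classifier omitted:

```python
# Sketch of the feature-selection stage: recursive feature elimination (RFE)
# driven by a decision tree over deep features. The 2048-dim random vectors
# stand in for ResNet-50 embeddings; the counts and the downstream fuzzy
# deep-learning classifier are not reproduced here.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(288, 2048))        # toy "ResNet-50" slide features
y = rng.integers(0, 2, size=288)        # effective vs. invalid WSI labels

selector = RFE(DecisionTreeClassifier(random_state=0),
               n_features_to_select=128, step=0.25)  # drop 25% per round
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                  # (288, 128) features kept for the
                                        # downstream classifier
```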

PMID:39488573 | DOI:10.1038/s41598-024-75830-2

Categories: Literature Watch

A spatiotemporal correlation and attention-based model for pipeline deformation prediction in foundation pit engineering

Sun, 2024-11-03 06:00

Sci Rep. 2024 Nov 2;14(1):26387. doi: 10.1038/s41598-024-77601-5.

ABSTRACT

In foundation pit engineering, the deformation prediction of adjacent pipelines is crucial for construction safety. Existing approaches depend on constitutive models, grey correlation prediction, or traditional feedforward neural networks. Due to the complex hydrological and geological conditions, as well as the nonstationary and nonlinear characteristics of monitoring data, this problem remains a challenge. By formulating the deformation of monitoring points as multivariate time series, a deep learning-based prediction model is proposed, which utilizes a convolutional neural network to extract the spatial dependencies among monitoring points and leverages a bidirectional long short-term memory (BiLSTM) network to extract temporal features. Notably, an attention mechanism is introduced to adjust the trainable weights of the spatial-temporal features extracted in the prediction. The evaluation of a real-world subway project demonstrates that the proposed model has advantages over current models, particularly in long-term prediction. It improves the adjusted R2 index by 19.4% to 61.6% on average compared with existing models, and reduces the mean absolute error by 51.5% to 70.3% compared with other models. Experiments and analyses verify that modeling the spatial-temporal dependencies in the time series and attention learning over spatial-temporal features can improve the prediction of such engineering problems.
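
A minimal PyTorch sketch of the described architecture: a 1D convolution mixes the monitoring points (spatial dependencies), a BiLSTM models the time axis, and a learned attention weighting pools the timesteps before the forecast head. All dimensions are illustrative.

```python
# Minimal sketch of the described pipeline: Conv1d over monitoring points
# (spatial), BiLSTM over time (temporal), and attention pooling of timesteps
# before a next-step deformation head. All dimensions are illustrative.
import torch
import torch.nn as nn

class PipelineDeformationNet(nn.Module):
    def __init__(self, n_points=8, hidden=32):
        super().__init__()
        self.spatial = nn.Conv1d(n_points, 16, kernel_size=3, padding=1)
        self.temporal = nn.LSTM(16, hidden, batch_first=True,
                                bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_points)  # next-step deformation

    def forward(self, x):                  # x: (B, T, n_points)
        h = self.spatial(x.transpose(1, 2)).transpose(1, 2)   # (B, T, 16)
        h, _ = self.temporal(h)                                # (B, T, 2H)
        w = torch.softmax(self.attn(h), dim=1)                 # (B, T, 1)
        return self.head((w * h).sum(dim=1))                   # (B, n_points)

x = torch.randn(4, 30, 8)   # 4 series, 30 timesteps, 8 monitoring points
print(PipelineDeformationNet()(x).shape)  # torch.Size([4, 8])
```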

PMID:39488572 | DOI:10.1038/s41598-024-77601-5

Categories: Literature Watch

Automated estimation of offshore polymetallic nodule abundance based on seafloor imagery using deep learning

Sat, 2024-11-02 06:00

Sci Total Environ. 2024 Oct 31:177225. doi: 10.1016/j.scitotenv.2024.177225. Online ahead of print.

ABSTRACT

The burgeoning demand for critical metals used in high-tech and green technology industries has turned attention toward the vast resources of polymetallic nodules on the ocean floor. Traditional methods for estimating the abundance of these nodules, such as direct sampling or acoustic imagery, are time- and labour-intensive or often insufficient for large-scale or accurate assessment. This paper advocates for the automation of polymetallic nodule detection and abundance estimation using deep learning algorithms applied to seabed photographs. We propose a U-Net convolutional neural network framework specifically trained to process the unique features of seabed imagery, which can reliably detect polymetallic nodules and estimate their abundance from thousands of seabed photographs in significantly reduced time (below 10 h for 30,000 photographs). Our approach addresses the challenges of data preparation, variable image quality, the coverage-abundance transition model, and sediment presence. We show that this approach can substantially increase the efficiency and accuracy of resource estimation, dramatically reducing the time and cost currently required for manual assessment. Furthermore, we discuss the potential of this method to be integrated into large-scale systems for sustainable exploitation of these undersea resources.

PMID:39488283 | DOI:10.1016/j.scitotenv.2024.177225

Categories: Literature Watch

GraphCVAE: Uncovering cell heterogeneity and therapeutic target discovery through residual and contrastive learning

Sat, 2024-11-02 06:00

Life Sci. 2024 Oct 31:123208. doi: 10.1016/j.lfs.2024.123208. Online ahead of print.

ABSTRACT

Advancements in Spatial Transcriptomics (ST) technologies in recent years have transformed the analysis of tissue structure and function within spatial contexts. However, accurately identifying spatial domains remains challenging due to data sparsity and noise. Traditional clustering methods often fail to capture spatial dependencies, while spatial clustering methods struggle with batch effects and data integration. We introduce GraphCVAE, a model designed to enhance spatial domain identification by integrating spatial and morphological information, correcting batch effects, and managing heterogeneous data. GraphCVAE employs a multi-layer Graph Convolutional Network (GCN) and a variational autoencoder to improve the representation and integration of spatial information. Through contrastive learning, the model captures subtle differences between cell types and states. Extensive testing on various ST datasets demonstrates GraphCVAE's robustness and biological contributions. In the dorsolateral prefrontal cortex (DLPFC) dataset, it accurately delineates cortical layer boundaries. In glioblastoma, GraphCVAE reveals critical therapeutic targets such as TF and NFIB. In colorectal cancer, it explores the role of the extracellular matrix. The model's performance metrics consistently surpass those of existing methods, validating its effectiveness. GraphCVAE's advanced visualization capabilities further highlight its precision in resolving spatial structures, making it a powerful tool for spatial transcriptomics analysis and offering new insights into disease studies.
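
The encoder at the heart of such a model can be sketched as a graph-convolutional variational encoder: GCN propagation over the spot-neighborhood graph feeds latent mean and log-variance heads, sampled via the reparameterization trick. The sketch below uses plain dense matrix operations for self-containment and omits the paper's residual layers, contrastive loss, and decoder.

```python
# Minimal sketch of a graph-convolutional variational encoder, the core of
# the described model: GCN propagation over a spot-neighborhood graph feeds
# latent mean/log-variance heads, sampled with the reparameterization trick.
# Dense matrix ops keep it self-contained; the paper's residual layers,
# contrastive loss, and decoder are not reproduced.
import torch
import torch.nn as nn

class GCNVAEEncoder(nn.Module):
    def __init__(self, n_genes, hidden=64, latent=16):
        super().__init__()
        self.w1 = nn.Linear(n_genes, hidden)
        self.w_mu = nn.Linear(hidden, latent)
        self.w_logvar = nn.Linear(hidden, latent)

    def forward(self, x, a_hat):
        # a_hat: normalized adjacency of the spatial spot graph (N, N)
        h = torch.relu(a_hat @ self.w1(x))       # first GCN layer
        h = a_hat @ h                            # second propagation step
        mu, logvar = self.w_mu(h), self.w_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return z, mu, logvar

n_spots, n_genes = 100, 500
x = torch.randn(n_spots, n_genes)                # toy expression matrix
a = torch.eye(n_spots)                           # toy (self-loop) adjacency
z, mu, logvar = GCNVAEEncoder(n_genes)(x, a)
print(z.shape)                                   # torch.Size([100, 16])
```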

PMID:39488267 | DOI:10.1016/j.lfs.2024.123208

Categories: Literature Watch

Automatic detection of temporomandibular joint osteoarthritis radiographic features using deep learning artificial intelligence. A Diagnostic accuracy study

Sat, 2024-11-02 06:00

J Stomatol Oral Maxillofac Surg. 2024 Oct 31:102124. doi: 10.1016/j.jormas.2024.102124. Online ahead of print.

ABSTRACT

OBJECTIVE: The purpose of this study was to investigate the diagnostic performance of a neural network Artificial Intelligence model for the radiographic confirmation of Temporomandibular Joint Osteoarthritis in reference to an experienced radiologist.

MATERIALS AND METHODS: The diagnostic performance of an AI model in identifying radiographic features in patients with TMJ-OA was evaluated in a diagnostic accuracy cohort study. Adult patients elected for radiographic examination by the Diagnostic Criteria for Temporomandibular Disorders decision tree were included. Cone-beam computed Tomography images were evaluated by object detection YOLO deep learning model. The diagnostic performance was verified against examiner radiographic evaluation.

RESULTS: The differences between the AI model and the examiner were statistically non-significant, except for the subcortical cyst (P=0.049). The AI model showed substantial to near-perfect levels of agreement with the examiner data. Regarding each radiographic phenotype, the AI model reported favorable sensitivity, specificity, and accuracy, and highly statistically significant receiver operating characteristic (ROC) analysis (p < 0.001). The area under the curve ranged from 0.872 for surface erosion to 0.911 for the subcortical cyst.

CONCLUSION: The AI object detection model could pave the way for a valid, automated, and convenient modality for TMJ-OA radiographic confirmation and radiomic feature identification with significant diagnostic power.

PMID:39488247 | DOI:10.1016/j.jormas.2024.102124

Categories: Literature Watch

Zero-shot counting with a dual-stream neural network model

Sat, 2024-11-02 06:00

Neuron. 2024 Oct 29:S0896-6273(24)00729-3. doi: 10.1016/j.neuron.2024.10.008. Online ahead of print.

ABSTRACT

To understand a visual scene, observers need to both recognize objects and encode relational structure. For example, a scene comprising three apples requires the observer to encode concepts of "apple" and "three." In the primate brain, these functions rely on dual (ventral and dorsal) processing streams. Object recognition in primates has been successfully modeled with deep neural networks, but how scene structure (including numerosity) is encoded remains poorly understood. Here, we built a deep learning model, based on the dual-stream architecture of the primate brain, which is able to count items "zero-shot", even if the objects themselves are unfamiliar. Our dual-stream network forms spatial response fields and lognormal number codes that resemble those observed in the macaque posterior parietal cortex. The dual-stream network also makes successful predictions about human counting behavior. Our results provide evidence for an enactive theory of the role of the posterior parietal cortex in visual scene understanding.

PMID:39488209 | DOI:10.1016/j.neuron.2024.10.008

Categories: Literature Watch

AI-empowered perturbation proteomics for complex biological systems

Sat, 2024-11-02 06:00

Cell Genom. 2024 Oct 24:100691. doi: 10.1016/j.xgen.2024.100691. Online ahead of print.

ABSTRACT

The insufficient availability of comprehensive protein-level perturbation data is impeding the widespread adoption of systems biology. In this perspective, we introduce the rationale, essentiality, and practicality of perturbation proteomics. Biological systems are perturbed with diverse biological, chemical, and/or physical factors, followed by proteomic measurements at various levels, including changes in protein expression and turnover, post-translational modifications, protein interactions, transport, and localization, along with phenotypic data. Computational models, employing traditional machine learning or deep learning, identify or predict perturbation responses, mechanisms of action, and protein functions, aiding in therapy selection, compound design, and efficient experiment design. We propose to outline a generic PMMP (perturbation, measurement, modeling to prediction) pipeline and build foundation models or other suitable mathematical models based on large-scale perturbation proteomic data. Finally, we contrast modeling between artificially and naturally perturbed systems and highlight the importance of perturbation proteomics for advancing our understanding and predictive modeling of biological systems.

PMID:39488205 | DOI:10.1016/j.xgen.2024.100691

Categories: Literature Watch
