Deep learning
Enhancing <em>De Novo</em> Drug Design across Multiple Therapeutic Targets with CVAE Generative Models
ACS Omega. 2024 Oct 18;9(43):43963-43976. doi: 10.1021/acsomega.4c08027. eCollection 2024 Oct 29.
ABSTRACT
Drug discovery is a costly and time-consuming process, necessitating innovative strategies to enhance efficiency across different stages, from initial hit identification to final market approval. Recent advancements in deep learning (DL), particularly in de novo drug design, show promise. Generative models, a subclass of DL algorithms, have significantly accelerated the de novo drug design process by exploring vast areas of chemical space. Here, we introduce a Conditional Variational Autoencoder (CVAE) generative model tailored for de novo molecular design tasks, utilizing both SMILES and SELFIES as molecular representations. Our computational framework successfully generates molecules with specific property profiles, validated through metrics such as uniqueness, validity, novelty, quantitative estimate of drug-likeness (QED), and synthetic accessibility (SA). We evaluated our model's efficacy in generating novel molecules capable of binding to three therapeutic molecular targets: CDK2, PPARγ, and DPP-IV. Comparison with state-of-the-art frameworks demonstrated our model's ability to achieve higher structural diversity while maintaining the molecular property ranges observed in the training set molecules. The proposed model stands as a valuable resource for advancing de novo molecular design capabilities.
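The validity, uniqueness, and novelty metrics mentioned above have standard definitions that can be sketched in a few lines. In this illustrative sketch the validity predicate is a caller-supplied stand-in (a real pipeline would use a chemistry parser such as RDKit), and the toy strings are not from the paper:

```python
def evaluate_generated(generated, training_set, is_valid):
    """Compute validity, uniqueness, and novelty for generated molecules.

    `is_valid` is a caller-supplied predicate (e.g. an RDKit round-trip
    parse check); it is stubbed here to keep the sketch self-contained.
    """
    valid = [s for s in generated if is_valid(s)]          # parseable strings
    validity = len(valid) / len(generated) if generated else 0.0
    unique = set(valid)                                     # deduplicated
    uniqueness = len(unique) / len(valid) if valid else 0.0
    novel = unique - set(training_set)                      # unseen in training
    novelty = len(novel) / len(unique) if unique else 0.0
    return validity, uniqueness, novelty

# Toy illustration with a trivial validity predicate.
gen = ["CCO", "CCO", "c1ccccc1", "XX"]
train = {"CCO"}
v, u, n = evaluate_generated(gen, train, lambda s: s != "XX")
# validity = 3/4, uniqueness = 2/3, novelty = 1/2
```

QED and SA scores would be computed per valid molecule with cheminformatics tooling rather than set arithmetic, which is why they are omitted from this sketch.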
PMID:39493989 | PMC:PMC11525747 | DOI:10.1021/acsomega.4c08027
Big data and artificial intelligence applied to blood and CSF fluid biomarkers in multiple sclerosis
Front Immunol. 2024 Oct 18;15:1459502. doi: 10.3389/fimmu.2024.1459502. eCollection 2024.
ABSTRACT
Artificial intelligence (AI) has meant a turning point in data analysis, allowing predictions of unseen outcomes with unprecedented levels of accuracy. In multiple sclerosis (MS), a chronic inflammatory-demyelinating condition of the central nervous system with a complex pathogenesis and potentially devastating consequences, AI-based models have shown promising preliminary results, especially when using neuroimaging data as model input or predictor variables. The application of AI-based methodologies to serum/blood and CSF biomarkers has been less explored, according to the literature, despite its great potential. In this review, we aimed to investigate and summarise the recent advances in AI methods applied to body fluid biomarkers in MS, highlighting the key features of the most representative studies, while illustrating their limitations and future directions.
PMID:39493759 | PMC:PMC11527669 | DOI:10.3389/fimmu.2024.1459502
Spatio-Temporal Attention and Gaussian Processes for Personalized Video Gaze Estimation
Conf Comput Vis Pattern Recognit Workshops. 2024 Jun;2024:604-614. doi: 10.1109/cvprw63382.2024.00065. Epub 2024 Sep 27.
ABSTRACT
Gaze is an essential prompt for analyzing human behavior and attention. Recently, there has been an increasing interest in determining gaze direction from facial videos. However, video gaze estimation faces significant challenges, such as understanding the dynamic evolution of gaze in video sequences, dealing with static backgrounds, and adapting to variations in illumination. To address these challenges, we propose a simple and novel deep learning model designed to estimate gaze from videos, incorporating a specialized attention module. Our method employs a spatial attention mechanism that tracks spatial dynamics within videos. This technique enables accurate gaze direction prediction through a temporal sequence model, adeptly transforming spatial observations into temporal insights, thereby significantly improving gaze estimation accuracy. Additionally, our approach integrates Gaussian processes to include individual-specific traits, facilitating the personalization of our model with just a few labeled samples. Experimental results confirm the efficacy of the proposed approach, demonstrating its success in both within-dataset and cross-dataset settings. Specifically, our proposed approach achieves state-of-the-art performance on the Gaze360 dataset, improving by 2.5° without personalization. Further, by personalizing the model with just three samples, we achieved an additional improvement of 0.8°. The code and pre-trained models are available at https://github.com/jswati31/stage.
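The Gaussian-process personalization step described above can be sketched as GP regression on a base model's residual errors, so that a few labeled samples shift predictions toward an individual's gaze. This is a generic illustration (RBF kernel, scalar residuals), not the paper's exact formulation:

```python
import numpy as np

def gp_personalize(X_train, residuals, X_query, length_scale=1.0, noise=1e-2):
    """GP regression on a base gaze model's residuals: predict a per-person
    correction at query points from a few labeled calibration samples."""
    def rbf(A, B):
        # Squared-exponential kernel between row sets A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length_scale ** 2))
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))  # regularized Gram
    Ks = rbf(X_query, X_train)
    return Ks @ np.linalg.solve(K, residuals)  # posterior mean correction

# Three labeled calibration samples with a constant 0.8-degree residual:
X = np.array([[0.0], [1.0], [2.0]])
r = np.array([0.8, 0.8, 0.8])
corr = gp_personalize(X, r, np.array([[1.0]]))  # correction near 0.8 at x=1
```

The corrected prediction is then `base_prediction + corr`; with only a handful of samples the kernel smooths the correction rather than memorizing it.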
PMID:39493731 | PMC:PMC11529379 | DOI:10.1109/cvprw63382.2024.00065
Reproducibility and explainability in digital pathology: The need to make black-box artificial intelligence systems more transparent
J Public Health Res. 2024 Oct 29;13(4):22799036241284898. doi: 10.1177/22799036241284898. eCollection 2024 Oct.
ABSTRACT
Artificial intelligence (AI), and more specifically Machine Learning (ML) and Deep learning (DL), has permeated the digital pathology field in recent years, with many algorithms successfully applied as new advanced tools to analyze pathological tissues. The introduction of high-resolution scanners in histopathology services has represented a real revolution for pathologists, allowing the analysis of digital whole-slide images (WSI) on a screen without a microscope at hand. However, this has meant a transition from microscope to algorithms in the absence of specific training for most pathologists involved in clinical practice. The WSI approach represents a major transformation, even from a computational point of view. The multiple ML and DL tools specifically developed for WSI analysis may enhance the diagnostic process in many fields of human pathology. AI-driven models allow the achievement of more consistent results, providing valid support for detecting, from H&E-stained sections, multiple biomarkers, including microsatellite instability, that may otherwise be missed by expert pathologists.
PMID:39493704 | PMC:PMC11528586 | DOI:10.1177/22799036241284898
The application of explainable artificial intelligence (XAI) in electronic health record research: A scoping review
Digit Health. 2024 Oct 30;10:20552076241272657. doi: 10.1177/20552076241272657. eCollection 2024 Jan-Dec.
ABSTRACT
Machine Learning (ML) and Deep Learning (DL) models show potential in surpassing traditional methods including generalised linear models for healthcare predictions, particularly with large, complex datasets. However, low interpretability hinders practical implementation. To address this, Explainable Artificial Intelligence (XAI) methods are proposed, but a comprehensive evaluation of their effectiveness is currently limited. The aim of this scoping review is to critically appraise the application of XAI methods in ML/DL models using Electronic Health Record (EHR) data. In accordance with PRISMA scoping review guidelines, the study searched PUBMED and OVID/MEDLINE (including EMBASE) for publications related to tabular EHR data that employed ML/DL models with XAI. Out of 3220 identified publications, 76 were included. The selected publications, published between February 2017 and June 2023, demonstrated an exponential increase over time. Extreme Gradient Boosting and Random Forest models were the most frequently used ML/DL methods, with 51 and 50 publications, respectively. Among XAI methods, Shapley Additive Explanations (SHAP) was predominant in 63 out of 76 publications, followed by partial dependence plots (PDPs) in 11 publications, and Locally Interpretable Model-Agnostic Explanations (LIME) in 8 publications. Despite the growing adoption of XAI methods, their applications varied widely and lacked critical evaluation. This review identifies the increasing use of XAI in tabular EHR research and highlights a deficiency in the reporting of methods and a lack of critical appraisal of validity and robustness. The study emphasises the need for further evaluation of XAI methods and underscores the importance of cautious implementation and interpretation in healthcare settings.
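For intuition on the SHAP method that dominates the reviewed literature: in the special case of a linear model with independent features, Shapley values have a closed form, phi_i = w_i * (x_i - E[x_i]), so each feature's attribution is its coefficient times its deviation from the background mean. A minimal sketch of that special case (toy numbers, not from any reviewed study):

```python
import numpy as np

def linear_shap(w, X_background, x):
    """Exact SHAP values for a linear model f(x) = w.x + b under the
    feature-independence assumption: phi_i = w_i * (x_i - E[x_i])."""
    return w * (x - X_background.mean(axis=0))

w = np.array([2.0, -1.0])
Xb = np.array([[0.0, 0.0], [2.0, 4.0]])     # background mean = (1, 2)
phi = linear_shap(w, Xb, np.array([3.0, 2.0]))
# phi = [2*(3-1), -1*(2-2)] = [4, 0]; phi sums to f(x) - E[f(X)]
```

For gradient-boosted trees (the review's most common model class), the same additivity property holds but the values are computed with TreeSHAP rather than this closed form.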
PMID:39493635 | PMC:PMC11528818 | DOI:10.1177/20552076241272657
Are ICD codes reliable for observational studies? Assessing coding consistency for data quality
Digit Health. 2024 Oct 29;10:20552076241297056. doi: 10.1177/20552076241297056. eCollection 2024 Jan-Dec.
ABSTRACT
OBJECTIVE: International Classification of Diseases (ICD) codes recorded in electronic health records (EHRs) are frequently used to create patient cohorts or define phenotypes. Inconsistent assignment of codes may reduce the utility of such cohorts. We assessed the reliability across time and location of the assignment of ICD codes in a US health system at the time of the transition from ICD-9-CM (ICD, 9th Revision, Clinical Modification) to ICD-10-CM (ICD, 10th Revision, Clinical Modification).
MATERIALS AND METHODS: Using clusters of equivalent codes derived from the US Centers for Disease Control and Prevention General Equivalence Mapping (GEM) tables, ICD assignments occurring during the ICD-9-CM to ICD-10-CM transition were investigated in EHR data from the US Veterans Administration Central Data Warehouse using deep learning and statistical models. These models were then used to detect abrupt changes across the transition; additionally, changes at each VA station were examined.
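The cluster-based consistency check described above can be sketched as: map each code to its equivalence cluster, then compare each cluster's share of assignments before and after the transition. The GEM mapping here is a hypothetical two-cluster toy, and a fixed difference takes the place of the paper's statistical and deep learning change detectors:

```python
def cluster_shift(gem, pre_counts, post_counts):
    """Flag equivalence clusters whose usage share changes across the
    ICD-9-CM -> ICD-10-CM transition. `gem` maps each code to a cluster id
    (a simplification of the GEM tables)."""
    def shares(counts):
        by_cluster = {}
        for code, n in counts.items():
            by_cluster[gem[code]] = by_cluster.get(gem[code], 0) + n
        total = sum(by_cluster.values())
        return {c: n / total for c, n in by_cluster.items()}
    pre, post = shares(pre_counts), shares(post_counts)
    # Positive = cluster gained share after the transition.
    return {c: post.get(c, 0) - pre.get(c, 0) for c in set(pre) | set(post)}

gem = {"250.00": "diabetes", "E11.9": "diabetes", "401.9": "htn", "I10": "htn"}
pre = {"250.00": 30, "401.9": 70}   # ICD-9-CM era assignment counts
post = {"E11.9": 60, "I10": 40}     # ICD-10-CM era assignment counts
shift = cluster_shift(gem, pre, post)   # diabetes +0.30, htn -0.30
```

In a stable coding environment the shifts should be near zero; the abstract reports that many real clusters instead changed abruptly.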
RESULTS: Many of the 687 most-used code clusters had ICD-10-CM assignments differing greatly from those predicted from the codes used in ICD-9-CM. Manual reviews of a random sample found that 66% of the clusters showed problematic changes, with 37% having no apparent explanations. Notably, the observed pattern of changes varied widely across care locations.
DISCUSSION AND CONCLUSION: The observed coding variability across time and across location suggests that ICD codes in EHRs are insufficient to establish a semantically reliable cohort or phenotype. While some variation might be expected with a change in coding structure, the inconsistency across locations suggests other difficulties. Researchers should consider carefully how cohorts and phenotypes of interest are selected and defined.
PMID:39493629 | PMC:PMC11528819 | DOI:10.1177/20552076241297056
Osteoarthritis Year in Review 2024: Imaging
Osteoarthritis Cartilage. 2024 Oct 25:S1063-4584(24)01440-7. doi: 10.1016/j.joca.2024.10.009. Online ahead of print.
ABSTRACT
OBJECTIVE: To review recent literature evidence describing imaging of osteoarthritis (OA) and to identify the current trends in research on OA imaging.
METHOD: This is a narrative review of publications in English, published between April 2023 and March 2024. A PubMed search was conducted using the following search terms: osteoarthritis/OA, radiography, ultrasound/US, computed tomography/CT, magnetic resonance imaging/MRI, DXA/DEXA, and artificial intelligence/AI/deep learning. Most publications focus on OA imaging in the knee and hip. Imaging of OA in other joints and OA imaging with artificial intelligence (AI) are also reviewed.
RESULTS: Compared to the same period last year (April 2022 - March 2023), there has been no significant change in the number of publications utilizing CT, MRI, and artificial intelligence. There was a notable reduction in the number of OA research papers using radiography and ultrasound. There were several observational studies focusing on imaging of knee OA, such as the Multicenter Osteoarthritis Study, Rotterdam Study, Strontium ranelate efficacy in knee OA (SEKOIA) study, and the Osteoarthritis Initiative FNIH Biomarker study. Hip OA observational studies included, but were not limited to, the Cohort Hip and Cohort Knee study and the UK Biobank study. Studies on emerging applications of AI in OA imaging were also covered. A small number of OA clinical trials were published with a focus on imaging-based outcomes.
CONCLUSION: MRI-based OA imaging research continues to play an important role compared to other modalities. Usage of various AI tools as an adjunct to human assessment is increasingly applied in OA imaging research.
PMID:39490728 | DOI:10.1016/j.joca.2024.10.009
Demonstration-based learning for few-shot biomedical named entity recognition under machine reading comprehension
J Biomed Inform. 2024 Oct 25:104739. doi: 10.1016/j.jbi.2024.104739. Online ahead of print.
ABSTRACT
OBJECTIVE: Although deep learning techniques have shown significant achievements, they frequently depend on extensive amounts of hand-labeled data and tend to perform inadequately in few-shot scenarios. The objective of this study is to devise a strategy that can improve the model's capability to recognize biomedical entities in scenarios of few-shot learning.
METHODS: By redefining biomedical named entity recognition (BioNER) as a machine reading comprehension (MRC) problem, we propose a demonstration-based learning method to address few-shot BioNER, which involves constructing appropriate task demonstrations. In assessing our proposed method, we compared the proposed method with existing advanced methods using six benchmark datasets, including BC4CHEMD, BC5CDR-Chemical, BC5CDR-Disease, NCBI-Disease, BC2GM, and JNLPBA.
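The MRC reformulation described above pairs each entity type with a natural-language question and treats the sentence as the reading-comprehension context, so the model predicts answer spans instead of per-token labels. A minimal sketch of building such examples (the query wordings below are illustrative, not the paper's):

```python
def build_mrc_examples(sentence, entity_queries):
    """Recast NER as machine reading comprehension: one (question, context)
    pair per entity type; a span-prediction model answers each question."""
    return [{"question": q, "context": sentence, "entity_type": t}
            for t, q in entity_queries.items()]

queries = {
    "Chemical": "Which chemical compounds are mentioned?",
    "Disease": "Which diseases are mentioned?",
}
examples = build_mrc_examples(
    "Cisplatin-induced nephrotoxicity was observed.", queries)
# Two MRC examples over the same sentence, one per entity type.
```

Task demonstrations for few-shot learning would then be prepended to the context, giving the model labeled exemplars of the span format it should produce.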
RESULTS: We examined the models' efficacy by reporting F1 scores from both the 25-shot and 50-shot learning experiments. In 25-shot learning, we observed a 1.1% improvement in the average F1 score compared to the baseline method, reaching 61.7%, 84.1%, 69.1%, 70.1%, 50.6%, and 59.9% on the six datasets, respectively. In 50-shot learning, we further improved the average F1 score by 1.0% compared to the baseline method, reaching 73.1%, 86.8%, 76.1%, 75.6%, 61.7%, and 65.4%, respectively.
CONCLUSION: We found that in few-shot BioNER, MRC-based language models are much more proficient at recognizing biomedical entities than the sequence labeling approach. Furthermore, our MRC-based language models can compete successfully with fully-supervised learning methodologies that rely heavily on the availability of abundant annotated data. These results highlight possible pathways for future advancements in few-shot BioNER methodologies.
PMID:39490610 | DOI:10.1016/j.jbi.2024.104739
A Recognition System for Diagnosing Salivary Gland Neoplasms Based on Vision Transformer
Am J Pathol. 2024 Oct 26:S0002-9440(24)00396-1. doi: 10.1016/j.ajpath.2024.09.010. Online ahead of print.
ABSTRACT
Salivary gland neoplasms (SGNs) represent a group of human neoplasms characterized by a remarkable cyto-morphological diversity, which frequently poses diagnostic challenges. Accurate histological categorization of salivary tumors is crucial to make precise diagnoses and guide decisions regarding patient management. Within the scope of this study, a computer-aided diagnosis model utilizing Vision Transformer, a cutting-edge deep-learning model in computer vision, has been developed to accurately classify the most prevalent subtypes of SGNs. These subtypes include pleomorphic adenoma, myoepithelioma, Warthin's tumor, basal cell adenoma, oncocytic adenoma, cystadenoma, mucoepidermoid carcinoma and salivary adenoid cystic carcinoma. The dataset comprised 3046 whole slide images (WSIs) of histologically confirmed salivary gland tumors, encompassing nine distinct tissue categories. SGN-ViT exhibited impressive performance in classifying the eight salivary gland tumors, achieving an accuracy of 0.9966, an AUC value of 0.9899, precision of 0.9848, recall of 0.9848, and an F1-score of 0.9848. When compared to benchmark models, SGN-ViT surpassed them in terms of diagnostic performance. In a subset of 100 WSIs, SGN-ViT demonstrated comparable diagnostic performance to that of the chief pathologist while significantly reducing the diagnosis time, indicating that SGN-ViT held the potential to serve as a valuable computer-aided diagnostic tool for salivary tumors, enhancing the diagnostic accuracy of junior pathologists.
PMID:39490441 | DOI:10.1016/j.ajpath.2024.09.010
Knowledge-based planning, multicriteria optimization, and plan scorecards: A winning combination
Radiother Oncol. 2024 Oct 26:110598. doi: 10.1016/j.radonc.2024.110598. Online ahead of print.
ABSTRACT
BACKGROUND AND PURPOSE: The ESTRO 2023 Physics Workshop hosted the Fully-Automated Radiotherapy Treatment Planning (Auto-RTP) Challenge, where participants were provided with CT images from 16 prostate cancer patients (6 prostate only, 6 prostate + nodes, and 4 prostate bed + nodes) across 3 challenge phases with the goal of automatically generating treatment plans with minimal user intervention. Here, we present our team's winning approach, developed to adapt swiftly to contouring guidelines and treatment prescriptions different from those used in our clinic.
MATERIALS AND METHODS: Our planning pipeline comprises two main components: 1) auto-contouring and 2) auto-planning engines, both internally developed and activated via DICOM operations. The auto-contouring engine employs 3D U-Net models trained on a dataset of 600 prostate cancer patients for normal tissues, 253 cases for pelvic lymph node, and 32 cases for prostate bed. The auto-planning engine, utilizing the Eclipse Scripting Application Programming Interface, automates target volume definition, field geometry, planning parameters, optimization, and dose calculation. RapidPlan models, combined with multicriteria optimization and scorecards defined on challenge scoring criteria, were employed to ensure plans met challenge objectives. We report leaderboard scores (0-100, where 100 is a perfect score) which combine organ-at-risk and target dose-metrics on the provided cases.
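The scorecard idea above reduces many dose metrics to one 0-100 number by giving each metric partial credit between an ideal and an unacceptable bound. This sketch uses a piecewise-linear credit function; the metric names, bounds, and weights are illustrative, not the challenge's actual scoring criteria:

```python
def plan_score(metrics, scorecard):
    """Score a treatment plan 0-100 against weighted objectives.

    Each scorecard entry is (metric name, ideal value, unacceptable value,
    weight); credit falls linearly between the two bounds."""
    total_w = sum(w for *_, w in scorecard)
    score = 0.0
    for name, ideal, worst, w in scorecard:
        x = metrics[name]
        if ideal <= worst:  # lower is better (e.g. an OAR mean dose)
            frac = 1.0 if x <= ideal else max(0.0, (worst - x) / (worst - ideal))
        else:               # higher is better (e.g. target coverage)
            frac = 1.0 if x >= ideal else max(0.0, (x - worst) / (ideal - worst))
        score += w * frac
    return 100.0 * score / total_w

card = [("PTV_V95%", 99.0, 90.0, 2.0),     # % of target getting 95% of dose
        ("Rectum_mean_Gy", 20.0, 40.0, 1.0)]
s = plan_score({"PTV_V95%": 99.5, "Rectum_mean_Gy": 30.0}, card)
# PTV gets full credit, rectum half credit: (2*1 + 1*0.5)/3 * 100 ~ 83.3
```

Driving multicriteria optimization with such a scorecard lets the planning engine trade off objectives in exactly the units the leaderboard rewards.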
RESULTS: Our team secured 1st place across all three challenge phases, achieving leaderboard scores of 79.9, 77.3, and 78.5 outperforming 2nd place scores by margins of 6.4, 0.4, and 2.9 points for each phase, respectively. Highest plan scores were for prostate only cases, with an average score exceeding 90. Upon challenge completion, a "Plan Only" phase was opened where organizers provided contours for planning. Our current score of 90.0 places us at the top of the "Plan Only" leaderboard.
CONCLUSIONS: Our automated pipeline demonstrates adaptability to diverse guidelines, indicating progress towards fully automated radiotherapy planning. Future studies are needed to assess the clinical acceptability and integration of automatically generated plans.
PMID:39490417 | DOI:10.1016/j.radonc.2024.110598
Evaluation of a deep learning-based software to automatically detect and quantify breast arterial calcifications on digital mammogram
Diagn Interv Imaging. 2024 Oct 25:S2211-5684(24)00233-X. doi: 10.1016/j.diii.2024.10.001. Online ahead of print.
ABSTRACT
PURPOSE: The purpose of this study was to evaluate an artificial intelligence (AI) software that automatically detects and quantifies breast arterial calcifications (BAC).
MATERIALS AND METHODS: Women who underwent both mammography and thoracic computed tomography (CT) from 2009 to 2018 were retrospectively included in this single-center study. Deep learning-based software was used to automatically detect and quantify BAC with a BAC AI score ranging from 0 to 10-points. Results were compared using Spearman correlation test with a previously described BAC manual score based on radiologists' visual quantification of BAC on the mammogram. Coronary artery calcification (CAC) score was manually scored using a 12-point scale on CT. The diagnostic performance of the marked BAC AI score (defined as BAC AI score ≥ 5) for the detection of marked CAC (CAC score ≥ 4) was analyzed in terms of sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUC).
RESULTS: A total of 502 women with a median age of 62 years (age range: 42-96 years) were included. The BAC AI score showed a very strong correlation with the BAC manual score (r = 0.83). Marked BAC AI score had 32.7 % sensitivity (37/113; 95 % confidence interval [CI]: 24.2-42.2), 96.1 % specificity (374/389; 95 % CI: 93.7-97.8), 71.2 % positive predictive value (37/52; 95 % CI: 56.9-82.9), 83.1 % negative predictive value (374/450; 95 % CI: 79.3-86.5), and 81.9 % accuracy (411/502; 95 % CI: 78.2-85.1) for the diagnosis of marked CAC. The AUC of the marked BAC AI score for the diagnosis of marked CAC was 0.64 (95 % CI: 0.60-0.69).
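The diagnostic-performance figures above follow directly from a 2x2 confusion matrix; the fractions reported in the abstract (37/113, 374/389, 37/52, 374/450, 411/502) pin down the counts. A short sketch recomputing them:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Counts implied by the fractions in the abstract:
m = diagnostic_metrics(tp=37, fn=76, tn=374, fp=15)
# sensitivity 32.7 %, specificity 96.1 %, PPV 71.2 %, NPV 83.1 %, accuracy 81.9 %
```

The high specificity with modest sensitivity explains the pattern: a marked BAC AI score rarely raises a false alarm for marked CAC but misses many true cases.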
CONCLUSION: The automated BAC AI score shows a very strong correlation with manual BAC scoring in this external validation cohort. The automated BAC AI score may be a useful tool to promote the integration of BAC into mammography reports and to improve awareness of a woman's cardiovascular risk status.
PMID:39490357 | DOI:10.1016/j.diii.2024.10.001
Automated grading system for quantifying KOH microscopic images in dermatophytosis
Diagn Microbiol Infect Dis. 2024 Oct 18;111(1):116565. doi: 10.1016/j.diagmicrobio.2024.116565. Online ahead of print.
ABSTRACT
Concerning the progression of dermatophytosis and its prognosis, quantification studies play a significant role. The present work aims to develop an automated grading system for quantifying fungal loads in KOH microscopic images of skin scrapings collected from dermatophytosis patients. Fungal filaments in the images were segmented using a U-Net model to obtain pixel counts. In the absence of any threshold value for pixel counts to grade these images as low, moderate, or high, experts were assigned the task of manual grading. Grades and corresponding pixel counts were subjected to statistical procedures involving cumulative receiver operating characteristic curve analysis to develop an automated grading system. The model's specificity, accuracy, precision, and sensitivity crossed 92%, 86%, 82%, and 76%, respectively. 'Almost perfect agreement', with a Fleiss kappa of 0.847, was obtained between automated and manual gradings. This pixel-count-based grading of KOH images offers a novel, cost-effective solution for quantifying fungal load.
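Once ROC analysis has fixed the pixel-count cutoffs, the grading step itself is a simple threshold rule. In this sketch the cutoff values are placeholders, since the abstract does not report the thresholds derived from the expert gradings:

```python
def grade_fungal_load(pixel_count, low_cutoff, high_cutoff):
    """Map a U-Net-derived fungal-filament pixel count to a grade.

    The cutoffs would come from the cumulative ROC analysis against
    expert grading; the values used below are illustrative only."""
    if pixel_count < low_cutoff:
        return "low"
    if pixel_count < high_cutoff:
        return "moderate"
    return "high"

grades = [grade_fungal_load(n, 500, 5000) for n in (120, 2300, 9800)]
# -> ["low", "moderate", "high"]
```

Agreement between this automated rule and the expert grades is then summarized with Fleiss kappa, as reported above.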
PMID:39490258 | DOI:10.1016/j.diagmicrobio.2024.116565
The emerging role of artificial intelligence in neuropathology: Where are we and where do we want to go?
Pathol Res Pract. 2024 Oct 23;263:155671. doi: 10.1016/j.prp.2024.155671. Online ahead of print.
ABSTRACT
The field of neuropathology, a subspecialty of pathology which studies the diseases affecting the nervous system, is experiencing significant changes due to advancements in artificial intelligence (AI). Traditionally reliant on histological methods and clinical correlations, neuropathology is now experiencing a revolution due to the development of AI technologies like machine learning (ML) and deep learning (DL). These technologies enhance diagnostic accuracy, optimize workflows, and enable personalized treatment strategies. AI algorithms excel at analyzing histopathological images, often revealing subtle morphological changes missed by conventional methods. For example, deep learning models applied to digital pathology can effectively differentiate tumor grades and detect rare pathologies, leading to earlier and more precise diagnoses. Neuroimaging is another area where AI is proving helpful, as enhanced analysis of MRI and CT scans supports early detection of neurodegenerative diseases. By identifying biomarkers and progression patterns, AI aids in timely therapeutic interventions, potentially slowing disease progression. In molecular pathology, AI's ability to analyze complex genomic data helps uncover the genetic and molecular basis of neuropathological conditions, facilitating personalized treatment plans. AI-driven automation streamlines routine diagnostic tasks, allowing pathologists to focus on complex cases, especially in settings with limited resources. This review explores AI's integration into neuropathology, highlighting its current applications, benefits, challenges, and future directions.
PMID:39490225 | DOI:10.1016/j.prp.2024.155671
Optimized deep learning networks for accurate identification of cancer cells in bone marrow
Neural Netw. 2024 Oct 18;181:106822. doi: 10.1016/j.neunet.2024.106822. Online ahead of print.
ABSTRACT
Radiologists utilize images from X-rays, magnetic resonance imaging, or computed tomography scans to diagnose bone cancer. Manual methods are labor-intensive and may require specialized knowledge. As a result, creating an automated process for distinguishing between malignant and healthy bone is essential. Bones affected by cancer have a different texture than bones in unaffected areas. Diagnosing hematological illnesses relies on correctly labeling and categorizing nucleated cells in the bone marrow. However, timely diagnosis and treatment are hampered by the need for pathologists to identify specimens manually, which can be subjective and time-consuming. Humanity's ability to evaluate and identify these more complicated illnesses has been significantly bolstered by the development of artificial intelligence, particularly machine and deep learning. However, much research and development is still needed to enhance cancer cell identification and lower false alarm rates. We built a deep learning model for morphological analysis to solve this problem. This paper introduces a novel deep convolutional neural network architecture in which hybrid multi-objective and category-based optimization algorithms are used to adaptively optimize the hyperparameters. Using the processed cell images as input, the proposed model is then trained with an optimized attention-based multi-scale convolutional neural network to identify the type of cancer cells in the bone marrow. Extensive experiments were run on publicly available datasets, with the results measured and evaluated using a wide range of performance indicators. Compared with previously trained deep learning models, the proposed model's overall accuracy of 99.7% was found to be superior.
PMID:39490023 | DOI:10.1016/j.neunet.2024.106822
Deep learning assisted femtosecond laser-ablation spark-induced breakdown spectroscopy employed for rapid and accurate identification of bismuth brass
Anal Chim Acta. 2024 Nov 22;1330:343271. doi: 10.1016/j.aca.2024.343271. Epub 2024 Sep 25.
ABSTRACT
BACKGROUND: Owing to its excellent machinability and low toxicity, bismuth brass has been widely used in manufacturing various industrial products. It is therefore important to perform rapid and accurate identification of bismuth brass to reveal its alloying properties. However, the analytical lines of the various elements in bismuth brass alloy products obtained with conventional laser-induced breakdown spectroscopy (LIBS) are usually weak. Moreover, the analytical lines of various elements often overlap, seriously interfering with the identification of bismuth brass alloys. To address these challenges, an advanced strategy that achieves ultra-high-accuracy identification of bismuth brass alloys is highly desirable.
RESULTS: This work proposes a novel method for rapidly and accurately identifying bismuth brass samples using deep learning assisted femtosecond laser-ablation spark-induced breakdown spectroscopy (fs-LA-SIBS). With the help of fs-LA-SIBS, a spectral database containing high-quality LIBS spectra of the element components was constructed. A one-dimensional convolutional neural network (CNN) was then introduced to distinguish five species of bismuth brass alloy. Notably, the optimal CNN model achieved an identification accuracy of 100% for species identification. To identify the relevant spectral features, we propose a novel approach named "segmented fs-LA-SIBS wavelength", in which the identification contributions from various wavelength intervals are extracted by the optimal CNN model. The spectral feature differences in the wavelength interval from 336.05 to 364.66 nm produced the largest identification contribution, and feature differences in four elements, namely Ni, Cu, Sn, and Zn, were verified to contribute most to the 100% identification accuracy.
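A segmented-wavelength contribution analysis can be illustrated with a generic occlusion scheme: zero out one wavelength interval at a time and measure how much identification accuracy drops. This is a stand-in for the paper's CNN-based attribution, shown here with a toy classifier whose decision depends only on the second half of the spectrum:

```python
import numpy as np

def segment_contributions(spectra, labels, predict, n_segments):
    """Estimate each wavelength interval's contribution to identification
    accuracy via occlusion: zero one segment, measure the accuracy drop."""
    base = (predict(spectra) == labels).mean()
    bounds = np.linspace(0, spectra.shape[1], n_segments + 1, dtype=int)
    drops = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        occluded = spectra.copy()
        occluded[:, lo:hi] = 0.0          # remove this wavelength interval
        drops.append(base - (predict(occluded) == labels).mean())
    return np.array(drops)

# Toy setup: class is decided entirely by the mean of channels 50-99.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))
y = (X[:, 50:].mean(axis=1) > 0).astype(int)
model = lambda s: (s[:, 50:].mean(axis=1) > 0).astype(int)
drops = segment_contributions(X, y, model, 2)
# drops[1] > drops[0]: the informative second half contributes more
```

The real analysis attributes contributions within the trained CNN rather than by occlusion, but the ranking logic over wavelength intervals is the same.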
SIGNIFICANCE: To the best of our knowledge, this is the first study in which a one-dimensional CNN assisted with fs-LA-SIBS has been successfully employed for the identification of bismuth brass. Compared with conventional machine learning methods, the CNN showed significantly better performance. To reveal subtle spectral differences, the classification contributions of spectral features were accurately quantified by our proposed "segmented fs-LA-SIBS wavelength" method. CNN-assisted fs-LA-SIBS therefore holds great promise for identifying the differences among various element components in the metallurgical field.
PMID:39489954 | DOI:10.1016/j.aca.2024.343271
The Impact of Deep Learning on Determining the Necessity of Bronchoscopy in Pediatric Foreign Body Aspiration: Can Negative Bronchoscopy Rates Be Reduced?
J Pediatr Surg. 2024 Oct 19;60(2):162014. doi: 10.1016/j.jpedsurg.2024.162014. Online ahead of print.
ABSTRACT
INTRODUCTION: This study aimed to evaluate the role of deep learning methods in diagnosing foreign body aspiration (FBA) to reduce the frequency of negative bronchoscopy and minimize potential complications.
METHODS: We retrospectively analysed data and radiographs from 47 pediatric patients who presented to our hospital with suspected FBA between 2019 and 2023, together with a control group of 63 healthy children, yielding a total of 110 posteroanterior chest radiograph (PA CXR) images. The images were analysed using both convolutional neural network (CNN)-based deep learning methods and multiple logistic regression (MLR).
RESULTS: CNN-deep learning method correctly predicted 16 out of 17 bronchoscopy-positive images, while the MLR model correctly predicted 13. The CNN method misclassified one positive image as negative and two negative images as positive. The MLR model misclassified four positive images as negative and two negative images as positive. The sensitivity of the CNN predictor was 94.1 %, specificity was 97.8 %, accuracy was 97.3 %, and the F1 score was 0.914. The sensitivity of the MLR predictor was 76.5 %, specificity was 97.8 %, accuracy was 94.5 %, and the F1 score was 0.812.
CONCLUSION: The CNN-deep learning method demonstrated high accuracy in determining the necessity for bronchoscopy in children with suspected FBA, significantly reducing the rate of negative bronchoscopies. This reduction may contribute to fewer unnecessary bronchoscopy procedures and complications. However, considering the risk of missing a positive case, this method should be used in conjunction with clinical evaluations. To overcome the limitations of our study, future research with larger multi-center datasets is needed to validate and enhance the findings.
TYPE OF STUDY: Original article.
LEVEL OF EVIDENCE: III.
PMID:39489944 | DOI:10.1016/j.jpedsurg.2024.162014
AI derived ECG global longitudinal strain compared to echocardiographic measurements
Sci Rep. 2024 Nov 2;14(1):26458. doi: 10.1038/s41598-024-78268-8.
ABSTRACT
Left ventricular (LV) global longitudinal strain (LVGLS) is versatile; however, it is difficult to obtain. We evaluated the potential of an artificial intelligence (AI)-generated electrocardiography score for LVGLS estimation (ECG-GLS score) to diagnose LV systolic dysfunction and predict prognosis of patients with heart failure (HF). A convolutional neural network-based deep-learning algorithm was trained to estimate the echocardiography-derived GLS (LVGLS). ECG-GLS score performance was evaluated using data from an acute HF registry at another tertiary hospital (n = 1186). In the validation cohort, the ECG-GLS score could identify patients with impaired LVGLS (≤ 12%) (area under the receiver-operating characteristic curve [AUROC], 0.82; sensitivity, 85%; specificity, 59%). The performance of ECG-GLS in identifying patients with an LV ejection fraction (LVEF) < 40% (AUROC, 0.85) was comparable to that of LVGLS (AUROC, 0.83) (p = 0.08). Five-year outcomes (all-cause death; composite of all-cause death and hospitalization for HF) occurred significantly more frequently in patients with low ECG-GLS scores. Low ECG-GLS score was a significant risk factor for these outcomes after adjustment for other clinical risk factors and LVEF. The ECG-GLS score demonstrated a meaningful correlation with the LVGLS and is effective in risk stratification for long-term prognosis after acute HF, possibly acting as a practical alternative to the LVGLS.
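The AUROC figures above have a rank-based interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as half. A minimal sketch of that definition (toy scores, not study data):

```python
def roc_auc(scores, labels):
    """Rank-based AUROC: P(score_pos > score_neg), ties counted half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
# positive scores {0.9, 0.4} vs negative {0.8, 0.3}: 3 of 4 pairs won -> 0.75
```

Reporting a sensitivity/specificity pair (85%/59% here) corresponds to picking one operating threshold on the same score; the AUROC summarizes all thresholds at once.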
PMID:39488646 | DOI:10.1038/s41598-024-78268-8
Development of a method for estimating asari clam distribution by combining three-dimensional acoustic coring system and deep neural network
Sci Rep. 2024 Nov 2;14(1):26467. doi: 10.1038/s41598-024-77893-7.
ABSTRACT
Developing non-contact, non-destructive monitoring methods for marine life is crucial for sustainable resource management. Recent advances in monitoring technologies and machine learning analysis have enhanced underwater image and acoustic data acquisition. Systems to obtain 3D acoustic data from beneath the seafloor are being developed; however, manual analysis of large 3D datasets is challenging. Therefore, an automatic method for analyzing benthic resource distribution is needed. This study developed a system to estimate benthic resource distribution non-destructively by combining high-precision habitat data acquisition using high-frequency ultrasonic waves with prediction models based on a 3D convolutional neural network (3D-CNN). The system estimated the distribution of asari clams (Ruditapes philippinarum) in Lake Hamana, Japan. Clam presence and count per voxel were successfully estimated with an ROC-AUC of 0.9 and a macro-average ROC-AUC of 0.8, respectively. The system visualized clam distribution and estimated numbers, demonstrating its effectiveness for quantifying marine resources beneath the seafloor.
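The core operation of a 3D-CNN is a 3D convolution that sweeps a small kernel through the voxelized acoustic volume. A minimal NumPy sketch of that single operation (valid mode, single channel, loop-based for clarity; real models stack many such layers with optimized framework kernels) is:

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation of a voxel volume with a small kernel."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output voxel is the weighted sum of a local 3D neighborhood.
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out
```

A per-voxel presence classifier would follow such filtered feature volumes with a final layer scoring each voxel, which is the kind of output the voxel-level ROC-AUC above evaluates.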
PMID:39488638 | DOI:10.1038/s41598-024-77893-7
Data-driven and privacy-preserving risk assessment method based on federated learning for smart grids
Commun Eng. 2024 Nov 2;3(1):154. doi: 10.1038/s44172-024-00300-6.
ABSTRACT
Timely and precise security risk evaluation is essential for optimal operational planning, threat detection, and the reliable operation of smart grids. The smart grid can integrate extensive high-dimensional operational data; however, conventional risk assessment techniques often struggle with managing such data volumes. Moreover, many methods rely on centralized evaluation, potentially neglecting privacy issues, and power grid operators are often reluctant to share sensitive risk-related data due to privacy concerns. Here we introduce a data-driven and privacy-preserving risk assessment method that safeguards power grid operators' data privacy by integrating deep learning and secure encryption in a federated learning framework. The method involves: (1) developing a two-tier risk indicator system and an expanded dataset; (2) using a deep convolutional neural network-based model to analyze the relationship between system variables and risk levels; and (3) creating a secure, federated risk assessment protocol with homomorphic encryption to protect model parameters during training. Experiments on IEEE 14-bus and IEEE 118-bus systems show that our approach ensures high assessment accuracy and data privacy.
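In the federated workflow of step (3), raw operator data never leave the local site; only model parameters are exchanged and aggregated. The toy sketch below shows the server-side FedAvg-style aggregation step (all names and values are illustrative, and the homomorphic-encryption layer is omitted; in the described protocol, parameters would be encrypted before any aggregation):

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Sample-size-weighted average of per-client parameter vectors.

    client_params: one flattened parameter vector per grid operator
    client_sizes:  number of local training samples each operator holds
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))

# One communication round: each operator trains locally, then the server aggregates.
clients = [np.array([1.0, 2.0]), np.array([3.0, 6.0])]  # hypothetical local weights
sizes = [100, 300]                                       # hypothetical local data sizes
global_params = fed_avg(clients, sizes)
```

The operator with more local data (300 vs. 100 samples) pulls the global model proportionally closer to its own parameters.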
PMID:39488597 | DOI:10.1038/s44172-024-00300-6
Enhancing runoff predictions in data-sparse regions through hybrid deep learning and hydrologic modeling
Sci Rep. 2024 Nov 2;14(1):26450. doi: 10.1038/s41598-024-77678-y.
ABSTRACT
Amidst growing concerns over climate-induced extreme weather events, precise flood forecasting becomes imperative, especially in regions like the Chaersen Basin, where data scarcity compounds the challenge. Traditional hydrologic models, while reliable, often fall short in areas with insufficient observational data. This study introduces a hybrid modeling approach that combines the deep learning capabilities of the Informer model with the robust hydrological simulation of the WRF-Hydro model to enhance runoff predictions in such data-sparse regions. Trained initially on the diverse and extensive CAMELS dataset in the United States, the Informer model applied its learned insights to predict runoff in the Chaersen Basin, leveraging transfer learning to bridge data gaps. Concurrently, the WRF-Hydro model, driven by Global Forecast System (GFS) data, provided a basis for comparison and further refinement of flood prediction accuracy. The synergy between the Informer's advanced pattern recognition and the physical modeling strength of WRF-Hydro significantly improved prediction accuracy. The final predictions for 2015 and 2016 showed notable increases in the Nash-Sutcliffe Efficiency (NSE) and the Index of Agreement (IOA), confirming the effectiveness of the hybrid model in capturing complex hydrological dynamics during runoff prediction. Specifically, in 2015, the NSE improved from 0.50 with WRF-Hydro and 0.63 with the Informer model to 0.66 with the hybrid model, while in 2016, the NSE increased from 0.42 to 0.76. Similarly, the IOA in 2015 rose from 0.83 with WRF-Hydro and 0.84 with the Informer model to 0.87 with the hybrid approach, and in 2016 it increased from 0.78 to 0.92.
Further investigation into the respective contributions of the WRF-Hydro and Informer models revealed that the hybrid model achieved optimal performance when the Informer model's contribution was kept between 60% and 80%.
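The two skill scores reported above, and the weighted blending of the two models' runoff series, can be sketched as follows. NSE and IOA follow their standard definitions; the 70% Informer weight is an illustrative value inside the 60-80% range the authors found optimal, not a figure from the paper:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ioa(obs, sim):
    """Willmott's Index of Agreement, bounded in [0, 1]."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((obs - sim) ** 2) / denom

def hybrid_runoff(informer_sim, wrfhydro_sim, w_informer=0.7):
    """Convex combination of the two models' runoff predictions."""
    return (w_informer * np.asarray(informer_sim, float)
            + (1.0 - w_informer) * np.asarray(wrfhydro_sim, float))
```

Sweeping `w_informer` over a grid and scoring `hybrid_runoff` against observed discharge with `nse` is one straightforward way such an optimal contribution range could be identified.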
PMID:39488589 | DOI:10.1038/s41598-024-77678-y