Deep learning

Deep learning-based multimodal image analysis predicts bone cement leakage during percutaneous kyphoplasty: protocol for model development, and validation by prospective and external datasets

Fri, 2024-10-04 06:00

Front Med (Lausanne). 2024 Sep 19;11:1479187. doi: 10.3389/fmed.2024.1479187. eCollection 2024.

ABSTRACT

BACKGROUND: Bone cement leakage (BCL) is one of the most prevalent complications of percutaneous kyphoplasty (PKP) for treating osteoporotic vertebral compression fracture (OVCF), and it may result in severe secondary complications and poor outcomes. Previous studies employed several traditional machine learning (ML) models to predict BCL preoperatively, but effective and intelligent methods to bridge the gap between current models and real-life clinical applications remain lacking.

METHODS: We will develop a deep learning (DL)-based prediction model that directly analyzes preoperative computed tomography (CT) and magnetic resonance imaging (MRI) of patients with OVCF to accurately predict BCL occurrence and classification during PKP. The study includes a retrospective internal dataset for DL model training and validation, a prospective internal dataset, and a cross-center external dataset for model testing. We will evaluate not only the model's predictive performance but also its reliability, by calculating its consistency with reference standards and comparing it with clinicians' predictions.

DISCUSSION: The model holds important clinical significance. By preoperatively identifying patients at high risk for each BCL subtype, clinicians can formulate more targeted treatment strategies to minimize the incidence of BCL, thereby improving clinical outcomes. In particular, the model holds great potential to be extended and applied in remote areas where medical resources are relatively scarce, so that more patients can benefit from quality perioperative evaluation and management strategies. Moreover, the model will efficiently promote information sharing and decision-making between clinicians and patients, thereby increasing the overall quality of healthcare services.

PMID:39364028 | PMC:PMC11446777 | DOI:10.3389/fmed.2024.1479187

Categories: Literature Watch

Hyperspectral imaging and artificial intelligence enhance remote phenotyping of grapevine rootstock influence on whole vine photosynthesis

Fri, 2024-10-04 06:00

Front Plant Sci. 2024 Sep 19;15:1409821. doi: 10.3389/fpls.2024.1409821. eCollection 2024.

ABSTRACT

Rootstocks are gaining importance in viticulture as a strategy to combat abiotic challenges and to enhance scion physiology. Photosynthetic parameters such as the maximum rate of carboxylation of RuBP (Vcmax) and the maximum rate of electron transport driving RuBP regeneration (Jmax) have been identified as ideal targets for potential influence by rootstock and breeding. However, leaf-specific direct measurement of these photosynthetic parameters is time consuming, limiting the information scope and the number of individuals that can be screened. This study aims to overcome these limitations by employing hyperspectral imaging combined with artificial intelligence (AI) to predict these key photosynthetic traits at the canopy level. Hyperspectral imaging captures detailed optical properties across a broad range of wavelengths (400 to 1000 nm), enabling use of all wavelengths in a comprehensive analysis of the entire vine's photosynthetic performance (Vcmax and Jmax). AI-based prediction models that blend the strengths of deep learning and machine learning were developed using two growing seasons' data, measured post-solstice at 15 h, 14 h, 13 h, and 12 h daylengths, for Vitis hybrid 'Marquette' grafted to five commercial rootstocks and 'Marquette' grafted to 'Marquette'. Significant differences in photosynthetic efficiency (Vcmax and Jmax) were noted for both direct and indirect measurements across the six rootstocks, indicating that rootstock genotype and daylength have a significant influence on scion photosynthesis. Evaluation of multiple feature-extraction algorithms indicated that the proposed Vitis base model incorporating a 1D convolutional neural network (CNN) had the best prediction performance, with an R2 of 0.60 for Vcmax and Jmax. Inclusion of weather and chlorophyll parameters slightly improved model performance for both photosynthetic parameters. Integrating AI with hyperspectral remote phenotyping provides potential for high-throughput whole-vine assessment of photosynthetic performance and selection of rootstock genotypes that confer improved photosynthetic performance potential in the scion.
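The reported R2 of 0.60 measures how well the model's predictions track the directly measured Vcmax and Jmax values. A minimal sketch of the coefficient of determination (the measurement values below are illustrative, not data from the study):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# Hypothetical direct gas-exchange measurements vs. model predictions
vcmax_measured = [45.0, 52.0, 60.0, 48.0, 55.0]
vcmax_predicted = [47.0, 50.0, 58.0, 50.0, 54.0]
print(round(r_squared(vcmax_measured, vcmax_predicted), 3))  # → 0.877
```

An R2 of 0.60 means the hyperspectral model explains 60% of the variance in the direct measurements.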

PMID:39363918 | PMC:PMC11446806 | DOI:10.3389/fpls.2024.1409821

Categories: Literature Watch

The Value of Topological Radiomics Analysis in Predicting Malignant Risk of Pulmonary Ground-Glass Nodules: A Multi-Center Study

Fri, 2024-10-04 06:00

Technol Cancer Res Treat. 2024 Jan-Dec;23:15330338241287089. doi: 10.1177/15330338241287089.

ABSTRACT

BACKGROUND: Early detection and accurate differentiation of malignant ground-glass nodules (GGNs) in lung CT scans are crucial for the effective treatment of lung adenocarcinoma. However, existing imaging diagnostic methods often struggle to distinguish between benign and malignant GGNs in the early stages. This study aims to predict the malignancy risk of GGNs observed in lung CT scans by applying two radiomics methods: topological data analysis and texture analysis.

METHODS: A retrospective analysis was conducted on 3223 patients from two centers between January 2018 and June 2023. The dataset was divided into training, testing, and validation sets to ensure robust model development and validation. We developed topological features for GGNs using radiomics analysis based on homology. This approach emphasizes the integration of topological information, capturing complex geometric and spatial relationships within GGNs. By combining machine learning and deep learning algorithms, we established a predictive model that integrates clinical parameters, previous radiomics features, and topological radiomics features.

RESULTS: Incorporating topological radiomics into our model significantly enhanced the ability to distinguish between benign and malignant GGNs. The topological radiomics model achieved areas under the curve (AUC) of 0.85 and 0.862 in two independent validation sets, outperforming previous radiomics models. Furthermore, this model demonstrated higher sensitivity compared to models based solely on clinical parameters, with sensitivities of 80.7% in validation set 1 and 82.3% in validation set 2. The most comprehensive model, which combined clinical parameters, previous radiomics features, and topological radiomics features, achieved the highest AUC value of 0.879 across all datasets.

CONCLUSION: This study validates the potential of topological radiomics in improving the predictive performance for distinguishing between benign and malignant GGNs. By integrating topological features with previous radiomics and clinical parameters, our comprehensive model provides a more accurate and reliable basis for developing treatment strategies for patients with GGNs.

PMID:39363876 | DOI:10.1177/15330338241287089

Categories: Literature Watch

Prediction of testicular histology in azoospermia patients through deep learning-enabled two-dimensional grayscale ultrasound

Fri, 2024-10-04 06:00

Asian J Androl. 2024 Oct 4. doi: 10.4103/aja202480. Online ahead of print.

ABSTRACT

Testicular histology based on testicular biopsy is an important factor in determining the appropriate testicular sperm extraction surgery and in predicting sperm retrieval outcomes in patients with azoospermia. Therefore, we developed a deep learning (DL) model to establish the associations between testicular grayscale ultrasound images and testicular histology. We retrospectively included two-dimensional testicular grayscale ultrasound images from patients with azoospermia (353 men with 4357 images between July 2017 and December 2021 at The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China) to develop the DL model. Testicular histology was obtained during conventional testicular sperm extraction. Our DL model was trained on ultrasound images or on fusion data (ultrasound images fused with the corresponding testicular volume) to distinguish spermatozoa presence in pathology (SPP) from spermatozoa absence in pathology (SAP) and to classify maturation arrest (MA) and Sertoli cell-only syndrome (SCOS) in patients with SAP. Areas under the receiver operating characteristic curve (AUCs), accuracy, sensitivity, and specificity were used to analyze model performance. The DL model based on images achieved an AUC of 0.922 (95% confidence interval [CI]: 0.908-0.935), a sensitivity of 80.9%, a specificity of 84.6%, and an accuracy of 83.5% in predicting SPP (including normal spermatogenesis and hypospermatogenesis) and SAP (including MA and SCOS). In identifying SCOS and MA, the DL model trained on fusion data yielded better diagnostic performance, with an AUC of 0.979 (95% CI: 0.969-0.989), a sensitivity of 89.7%, a specificity of 97.1%, and an accuracy of 92.1%. Our study provides a noninvasive method to predict testicular histology in patients with azoospermia, which could avoid unnecessary testicular biopsy.

PMID:39363830 | DOI:10.4103/aja202480

Categories: Literature Watch

Artificial Intelligence Detection of Cervical Spine Fractures Using Convolutional Neural Network Models

Fri, 2024-10-04 06:00

Neurospine. 2024 Sep;21(3):833-841. doi: 10.14245/ns.2448580.290. Epub 2024 Sep 30.

ABSTRACT

OBJECTIVE: To develop and evaluate a technique using convolutional neural networks (CNNs) for the computer-assisted diagnosis of cervical spine fractures from radiographic x-ray images. By leveraging deep learning techniques, the study could lead to improved patient outcomes and clinical decision-making.

METHODS: This study obtained 500 lateral cervical spine radiographic x-ray images from standard open-source dataset repositories to develop a classification model using CNNs. All images contained diagnostic information: normal cervical radiographic images (n=250) and cervical spine fracture images (n=250). The model classifies whether or not a patient has a cervical spine fracture. Seventy percent of the images were used for model training and 30% for testing. Konstanz Information Miner (KNIME)'s graphical user interface-based programming enabled class label annotation, data preprocessing, CNN model training, and performance evaluation.

RESULTS: The performance evaluation of the cervical spine fracture detection model presents compelling results across various metrics. The model exhibits high sensitivity (recall) values of 0.886 for fractures and 0.957 for normal cases, indicating its proficiency in identifying true positives. Precision values of 0.954 for fractures and 0.893 for normal cases highlight the model's ability to minimize false positives. With specificity values of 0.957 for fractures and 0.886 for normal cases, the model effectively identifies true negatives. The overall accuracy of 92.14%, together with the area under the receiver operating characteristic curve, underscores the model's reliability in correctly classifying cases.
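All four reported metrics follow directly from binary confusion-matrix counts. The sketch below uses one hypothetical confusion matrix for the fracture class (the paper reports only the derived metrics, not raw counts) that is consistent with the stated sensitivity, specificity, precision, and accuracy:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity (recall), specificity, precision, and accuracy
    from binary confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for the fracture class on a 140-image test set
m = binary_metrics(tp=62, fp=3, fn=8, tn=67)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.886, 'specificity': 0.957, 'precision': 0.954, 'accuracy': 0.921}
```

Note the symmetry in a two-class setting: the sensitivity for "fracture" equals the specificity for "normal", and vice versa, exactly as in the reported values.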

CONCLUSION: We successfully used deep learning models for computer-assisted diagnosis of cervical spine fractures from radiographic x-ray images. This approach can assist the radiologist in screening, detecting, and diagnosing cervical spine fractures.

PMID:39363462 | DOI:10.14245/ns.2448580.290

Categories: Literature Watch

Adaptation to space conditions of novel bacterial species isolated from the International Space Station revealed by functional gene annotations and comparative genome analysis

Fri, 2024-10-04 06:00

Microbiome. 2024 Oct 4;12(1):190. doi: 10.1186/s40168-024-01916-8.

ABSTRACT

BACKGROUND: The extreme environment of the International Space Station (ISS) puts selective pressure on microorganisms unintentionally introduced during its 20+ years of service as a low-orbit science platform and human habitat. Such pressure leads to the development of new features not found in their Earth-bound relatives, enabling these microorganisms to adapt to unfavorable conditions.

RESULTS: In this study, we generated functional annotations for the genomes of five newly identified species of Gram-positive bacteria, four non-spore-forming and one spore-forming, all isolated from the ISS. Using a deep learning-based tool, deepFRI, we were able to functionally annotate close to 100% of the protein-coding genes in all studied species, outperforming other annotation tools. Our comparative genomic analysis highlights common characteristics across all five species and specific genetic traits that appear unique to these ISS microorganisms. Proteome analysis mirrored these genomic patterns, revealing similar traits. The collective annotations suggest adaptations to life in space, including management of the hypoosmotic stress related to microgravity via mechanosensitive channel proteins, increased DNA repair activity to counteract heightened radiation exposure, and the presence of mobile genetic elements enhancing metabolism. In addition, our findings suggest the evolution of certain genetic traits indicative of potential pathogenic capabilities, such as small-molecule and peptide synthesis and ATP-dependent transporters. These traits, exclusive to the ISS microorganisms, further substantiate previous reports explaining why microbes exposed to space conditions demonstrate enhanced antibiotic resistance and pathogenicity.

CONCLUSION: Our findings indicate that the microorganisms we studied, isolated from the ISS, have adapted to life in space. Evidence such as mechanosensitive channel proteins, increased DNA repair activity, metallopeptidases, and novel S-layer oxidoreductases suggests a convergent adaptation among these diverse microorganisms, potentially complementing one another within the context of the microbiome. The common genes that facilitate adaptation to the ISS environment may enable bioproduction of essential biomolecules needed during future space missions, or serve as potential drug targets if these microorganisms pose health risks.

PMID:39363369 | DOI:10.1186/s40168-024-01916-8

Categories: Literature Watch

Generalizing deep learning electronic structure calculation to the plane-wave basis

Thu, 2024-10-03 06:00

Nat Comput Sci. 2024 Oct 3. doi: 10.1038/s43588-024-00701-9. Online ahead of print.

ABSTRACT

Deep neural networks capable of representing the density functional theory (DFT) Hamiltonian as a function of material structure hold great promise for revolutionizing future electronic structure calculations. However, a notable limitation of previous neural networks is their compatibility solely with the atomic-orbital (AO) basis, excluding the widely used plane-wave (PW) basis. Here we overcome this critical limitation by proposing an accurate and efficient real-space reconstruction method for directly computing AO Hamiltonian matrices from PW DFT results. The reconstruction method is orders of magnitude faster than traditional projection-based methods for converting PW results to the AO basis, and the reconstructed Hamiltonian matrices can faithfully reproduce the PW electronic structure, thus bridging the longstanding gap between the AO-basis deep learning electronic structure approach and PW DFT. Advantages of the PW methods, such as high accuracy, high flexibility, and wide applicability, can thus all be integrated into deep learning electronic structure methods without sacrificing the latter's inherent benefits. This allows for the construction of large-scale, high-fidelity training datasets from PW DFT results, supporting the development of precise and broadly applicable deep learning electronic structure models.
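Schematically, the quantity being reconstructed is the standard AO-basis matrix element of the Kohn-Sham Hamiltonian (textbook notation, not taken from the paper):

```latex
H^{\mathrm{AO}}_{i\alpha,\,j\beta} \;=\; \langle \phi_{i\alpha} \,|\, \hat{H}_{\mathrm{KS}} \,|\, \phi_{j\beta} \rangle ,
```

where \phi_{i\alpha} is the \alpha-th atomic orbital centered on atom i and \hat{H}_{\mathrm{KS}} is the self-consistent Hamiltonian from the converged PW calculation; the paper's contribution is evaluating these matrix elements efficiently in real space rather than through costly projections in the PW basis.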

PMID:39363113 | DOI:10.1038/s43588-024-00701-9

Categories: Literature Watch

Synergistic application of digital outcrop characterization techniques and deep learning algorithms in geological exploration

Thu, 2024-10-03 06:00

Sci Rep. 2024 Oct 3;14(1):22948. doi: 10.1038/s41598-024-74903-6.

ABSTRACT

To meet geologists' needs for analyzing data that characterize field outcrops (rock sections or formations exposed at the ground surface), this study developed a field digital outcrop visualization platform based on Cesium (a 3D geospatial visualization technology) and digital outcrop characterization technology. The platform was built on WebGL (a protocol for rendering interactive graphics in web pages), overcoming the shortcomings of traditional software in visualization, cross-device and cross-platform support, and ease of use. First, UAV oblique photography is used for data collection, transforming a large amount of geological data into an intuitive 3D geological model. The visualization platform provides rich measurement and mapping tools for the identified features, displays outcrop information more intuitively, helps geological explorers understand field geological conditions more quickly and comprehensively, and improves the efficiency and ease of use of outcrop characterization analysis. Combined with an improved VGG19 (a deep convolutional neural network architecture) model, the platform performs well on the fine textures and complex structures of rocks, significantly improving the accuracy of lithology identification. The reliability of combining Cesium digital outcrop characterization with the VGG19 lithology identification algorithm in geological exploration is verified through case studies. The synergistic application of these technologies gives geologists a faster and more comprehensive means to understand field geological conditions, greatly enhances the efficiency and ease of outcrop characterization analysis, and offers new perspectives for future geoscience research.

PMID:39363057 | DOI:10.1038/s41598-024-74903-6

Categories: Literature Watch

Integrative analysis of H&E and IHC identifies prognostic immune subtypes in HPV related oropharyngeal cancer

Thu, 2024-10-03 06:00

Commun Med (Lond). 2024 Oct 3;4(1):190. doi: 10.1038/s43856-024-00604-w.

ABSTRACT

BACKGROUND: Deep learning techniques excel at identifying tumor-infiltrating lymphocytes (TILs) and immune phenotypes in hematoxylin and eosin (H&E)-stained slides. However, their ability to elucidate the detailed functional characteristics of diverse cellular phenotypes within the tumor immune microenvironment (TME) is limited. We aimed to enhance our understanding of cellular composition and functional characteristics across TME regions and to improve patient stratification by integrating H&E images with adjacent immunohistochemistry (IHC) images.

METHODS: A retrospective study was conducted on patients with Human Papillomavirus-positive oropharyngeal squamous cell carcinoma (OPSCC). Using paired H&E and IHC slides for 11 proteins, a deep learning pipeline was used to quantify tumor, stroma, and TILs in the TME. Patients were classified into immune inflamed (IN), immune excluded (IE), or immune desert (ID) phenotypes. By registering the IHC and H&E slides, we integrated IHC data to capture protein expression in the corresponding tumor regions. We further stratified patients into specific immune subtypes, such as IN, with increased or reduced CD8+ cells, based on the abundance of these proteins. This characterization provided functional insight into the H&E-based subtypes.

RESULTS: Analysis of 88 primary tumors and 70 involved lymph node tissue images reveals an improved prognosis in patients classified as IN in primary tumors with high CD8 and low CD163 expression (p = 0.007). Multivariate Cox regression analysis confirms a significantly better prognosis for these subtypes.

CONCLUSIONS: Integrating H&E and IHC data enhances the functional characterization of immune phenotypes of the TME with biological interpretability, and improves patient stratification in HPV(+) OPSCC.

PMID:39363031 | DOI:10.1038/s43856-024-00604-w

Categories: Literature Watch

Enhancing human computer interaction with coot optimization and deep learning for multi language identification

Thu, 2024-10-03 06:00

Sci Rep. 2024 Oct 3;14(1):22963. doi: 10.1038/s41598-024-74327-2.

ABSTRACT

Human-Computer Interaction (HCI) is a multidisciplinary field focused on the design and use of computer technology, with emphasis on the interfaces between computers and humans. HCI aims to create systems that allow users to interact with computers effectively, efficiently, and pleasantly. Multiple Spoken Language Identification (SLI) for HCI (MSLI for HCI) denotes the ability of a computer system to recognize and distinguish various spoken languages, enabling more complete and convenient interactions between users and technology. SLI using deep learning (DL) employs artificial neural networks (ANNs) to automatically detect and recognize the language spoken in an audio signal. DL techniques, particularly neural networks (NNs), have succeeded in various pattern-detection tasks, including speech and language processing. This paper develops a novel Coot Optimizer Algorithm with DL-Driven Multiple SLI and Detection (COADL-MSLID) technique for HCI applications. The COADL-MSLID approach aims to detect multiple spoken languages from input audio regardless of gender, speaking style, and age. In the COADL-MSLID technique, the audio files are first transformed into spectrogram images. The technique then employs the SqueezeNet model to produce feature vectors, with the COA applied to tune the hyperparameters of the SqueezeNet model. Finally, the COADL-MSLID technique exploits a convolutional autoencoder (CAE) model for the SLID process. To underline the importance of the COADL-MSLID technique, a series of experiments were conducted on a benchmark dataset. The experimental validation of the COADL-MSLID technique exhibits a superior accuracy of 98.33% over other techniques.
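The first step of the pipeline, turning audio into spectrogram images, can be sketched with a naive windowed DFT (pure Python for clarity; the paper does not specify frame length, hop, or window, so the values below are assumptions):

```python
import math, cmath

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram via a naive windowed DFT: one column per frame."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Hann window reduces spectral leakage at frame edges
        windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * n / (frame_len - 1)))
                    for n, x in enumerate(frame)]
        column = []
        for k in range(frame_len // 2 + 1):  # keep non-negative frequencies only
            coeff = sum(x * cmath.exp(-2j * math.pi * k * n / frame_len)
                        for n, x in enumerate(windowed))
            column.append(abs(coeff))
        frames.append(column)
    return frames  # shape: (num_frames, frame_len // 2 + 1)

# A pure tone at 8 cycles per 64-sample frame should peak at frequency bin 8
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = spectrogram(tone)
print(max(range(len(spec[0])), key=lambda k: spec[0][k]))  # → 8
```

In practice such magnitude matrices are rendered as images and fed to the CNN (here, SqueezeNet) like any other picture.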

PMID:39362948 | DOI:10.1038/s41598-024-74327-2

Categories: Literature Watch

ChemAP: predicting drug approval with chemical structures before clinical trial phase by leveraging multi-modal embedding space and knowledge distillation

Thu, 2024-10-03 06:00

Sci Rep. 2024 Oct 3;14(1):23010. doi: 10.1038/s41598-024-72868-0.

ABSTRACT

Recent studies showed that the likelihood of drug approval can be predicted computationally from clinical data and drug structure information. Predicting the likelihood of drug approval can be innovative and of high impact. However, models that leverage clinical data are applicable only in clinical stages, which is not very practical. Prioritizing drug candidates and early-stage decision-making in the de novo drug development process are crucial in pharmaceutical research to optimize resource allocation. For early-stage decision-making, we need a computational model that uses only chemical structures. This seemingly impossible task may be addressed by borrowing the predictive power of multi-modal features, including clinical data. In this work, we introduce ChemAP (Chemical structure-based drug Approval Predictor), a novel deep learning scheme for drug approval prediction in the early-stage drug discovery phase. ChemAP aims to enhance the possibility of early-stage decision-making by enriching semantic knowledge to fill the gap between multi-modal and single-modal chemical spaces through knowledge distillation. This approach facilitates the effective construction of a chemical space solely from chemical structure data, guided by multi-modal knowledge related to efficacy, such as clinical trials and patents of drugs. In this study, ChemAP achieved state-of-the-art performance, outperforming both traditional machine learning and deep learning models in drug approval prediction, with AUROC and AUPRC scores of 0.782 and 0.842, respectively, on the drug approval benchmark dataset. Additionally, we demonstrated its generalizability by outperforming baseline models on a recent external dataset, which included drugs from the 2023 FDA-approved list and the 2024 clinical trial failure drug list, achieving AUROC and AUPRC scores of 0.694 and 0.851. These results demonstrate that ChemAP effectively predicts drug approval using only the chemical structure information of a drug, so that decision-making can occur at the early stages of the drug development process. To the best of our knowledge, our work is the first to show that drug approval can be predicted from structure information alone, by defining the chemical space of approved and unapproved drugs using deep learning.
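The knowledge-distillation idea, a structure-only student matching a multi-modal teacher's embedding space while still fitting the approval label, can be sketched as a combined loss (the functional form and weighting below are illustrative assumptions, not ChemAP's exact objective):

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_emb, teacher_emb, student_logit, label, alpha=0.5):
    """Combined objective: match the multi-modal teacher's embedding
    while fitting the approval label. `alpha` balances the two terms.
    (Illustrative form; ChemAP's actual losses are defined in the paper.)"""
    task = (student_logit - label) ** 2          # squared-error task loss
    distill = mse(student_emb, teacher_emb)      # embedding-matching loss
    return alpha * distill + (1 - alpha) * task

# Toy example: student embedding near the teacher's, prediction near the label
loss = distillation_loss([0.9, 0.1, 0.4], [1.0, 0.0, 0.5], 0.8, 1.0)
print(round(loss, 3))  # → 0.025
```

At inference time only the student is needed, which is what makes the approach usable before any clinical data exist.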

PMID:39362916 | DOI:10.1038/s41598-024-72868-0

Categories: Literature Watch

Computer vision and deep transfer learning for automatic gauge reading detection

Thu, 2024-10-03 06:00

Sci Rep. 2024 Oct 3;14(1):23019. doi: 10.1038/s41598-024-71270-0.

ABSTRACT

This manuscript proposes an automatic reading detection system for analogue gauges using a combination of deep learning, machine learning, and image processing. Because gauge images are typically unlabeled, the study applies image-processing techniques to generate readings for each image, providing supervised data, and achieves better accuracy using DenseNet 169 than with other approaches. The system automates reading detection using deep transfer learning models, namely DenseNet 169, InceptionNet V3, and VGG19. The models were trained on 1011 labeled pictures spanning 9 classes, with readings from 0 to 8. The VGG19 model exhibits a high training precision of 97.00% but a comparatively lower testing precision of 75.00%, indicating possible overfitting. InceptionNet V3 demonstrates consistent precision across both datasets, while DenseNet 169 surpasses the other models in precision and generalization.

PMID:39362865 | DOI:10.1038/s41598-024-71270-0

Categories: Literature Watch

Validation of an Artificial Intelligence-Based Prediction Model Using 5 External PET/CT Datasets of Diffuse Large B-Cell Lymphoma

Thu, 2024-10-03 06:00

J Nucl Med. 2024 Oct 3:jnumed.124.268191. doi: 10.2967/jnumed.124.268191. Online ahead of print.

ABSTRACT

The aim of this study was to validate a previously developed deep learning model in 5 independent clinical trials. The predictive performance of this model was compared with the International Prognostic Index (IPI) and 2 models incorporating radiomic PET/CT features (the clinical PET and PET models). Methods: In total, 1,132 diffuse large B-cell lymphoma patients were included: 296 for training and 836 for external validation. The primary outcome was 2-y time to progression. The deep learning model was trained on maximum-intensity projections from PET/CT scans. The clinical PET model included metabolic tumor volume, maximum distance from the bulkiest lesion to another lesion, SUVpeak, age, and performance status. The PET model included metabolic tumor volume, maximum distance from the bulkiest lesion to another lesion, and SUVpeak. Model performance was assessed using the area under the curve (AUC) and Kaplan-Meier curves. Results: The IPI yielded an AUC of 0.60 on all external data. The deep learning model yielded a significantly higher AUC of 0.66 (P < 0.01). For each individual clinical trial, the model was consistently better than the IPI. The radiomic models' AUCs remained higher across all clinical trials. The deep learning and clinical PET models showed equivalent performance (AUC, 0.69; P > 0.05). The PET model yielded the highest AUC of all models (AUC, 0.71; P < 0.05). Conclusion: The deep learning model predicted outcome in all trials with higher performance than the IPI and better survival curve separation. This model can predict treatment outcome in diffuse large B-cell lymphoma without tumor delineation, but at the cost of lower prognostic performance than with radiomics.
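The AUCs compared here have a direct probabilistic reading: the chance that a randomly chosen progressing patient receives a higher risk score than a randomly chosen non-progressing one (the Mann-Whitney statistic). A minimal sketch with hypothetical scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U statistic), counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model risk scores: patients with vs. without 2-y progression
progressed = [0.9, 0.8, 0.6]
progression_free = [0.7, 0.4, 0.3, 0.2]
print(auc(progressed, progression_free))
```

On this reading, the IPI's 0.60 means it ranks a progressing patient above a non-progressing one only 60% of the time, which is why even the modest-looking gains of the deep learning (0.66) and PET (0.71) models are clinically meaningful.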

PMID:39362767 | DOI:10.2967/jnumed.124.268191

Categories: Literature Watch

The Updated Registry of Fast Myocardial Perfusion Imaging with Next-Generation SPECT (REFINE SPECT 2.0)

Thu, 2024-10-03 06:00

J Nucl Med. 2024 Oct 3:jnumed.124.268292. doi: 10.2967/jnumed.124.268292. Online ahead of print.

ABSTRACT

The Registry of Fast Myocardial Perfusion Imaging with Next-Generation SPECT (REFINE SPECT) has been expanded to include more patients and CT attenuation correction imaging. We present the design and initial results from the updated registry. Methods: The updated REFINE SPECT is a multicenter, international registry with clinical data and image files. SPECT images were processed by quantitative software and CT images by deep learning software detecting coronary artery calcium (CAC). Patients were followed for major adverse cardiovascular events (MACEs) (death, myocardial infarction, unstable angina, late revascularization). Results: The registry included scans from 45,252 patients from 13 centers (55.9% male, 64.7 ± 11.8 y). Correlating invasive coronary angiography was available for 3,786 (8.4%) patients. CT attenuation correction imaging was available for 13,405 patients. MACEs occurred in 6,514 (14.4%) patients during a median follow-up of 3.6 y (interquartile range, 2.5-4.8 y). Patients with a stress total perfusion deficit of 5% to less than 10% (unadjusted hazard ratio [HR], 2.42; 95% CI, 2.23-2.62) and a stress total perfusion deficit of at least 10% (unadjusted HR, 3.85; 95% CI, 3.56-4.16) were more likely to experience MACEs. Patients with a deep learning CAC score of 101-400 (unadjusted HR, 3.09; 95% CI, 2.57-3.72) and a CAC of more than 400 (unadjusted HR, 5.17; 95% CI, 4.41-6.05) were at increased risk of MACEs. Conclusion: The REFINE SPECT registry contains a comprehensive set of imaging and clinical variables. It will aid in understanding the value of SPECT myocardial perfusion imaging, leverage hybrid imaging, and facilitate validation of new artificial intelligence tools for improving prediction of adverse outcomes incorporating multimodality imaging.

PMID:39362762 | DOI:10.2967/jnumed.124.268292

Categories: Literature Watch

Artificial Intelligence, Large Language Models, and Digital Health in the Management of Alcohol-Associated Liver Disease

Thu, 2024-10-03 06:00

Clin Liver Dis. 2024 Nov;28(4):819-830. doi: 10.1016/j.cld.2024.06.016. Epub 2024 Aug 14.

ABSTRACT

Artificial intelligence (AI) has the potential to aid in the diagnosis and management of alcohol-associated liver disease (ALD). Machine learning algorithms can analyze medical data, such as patient records and imaging results, to identify patterns and predict disease progression. Newer advances such as large language models (LLMs) can enhance early detection and personalized treatment strategies for individuals with chronic diseases such as ALD. However, it is essential to integrate LLMs and other AI tools responsibly, considering ethical concerns in health care applications and ensuring an evidence base for real-world applications of the existing knowledge.

PMID:39362724 | DOI:10.1016/j.cld.2024.06.016

Categories: Literature Watch

Unveiling Thymoma Typing Through Hyperspectral Imaging and Deep Learning

Thu, 2024-10-03 06:00

J Biophotonics. 2024 Oct 3:e202400325. doi: 10.1002/jbio.202400325. Online ahead of print.

ABSTRACT

Thymoma, a rare tumor arising from thymic epithelial cells, presents diagnostic challenges because of the subjective nature of traditional methods, leading to high false-negative rates and long diagnosis times. This study introduces a thymoma classification technique that integrates hyperspectral imaging with deep learning. We first capture pathological slice images of thymoma using a hyperspectral camera and delineate regions of interest to extract spectral data, which then undergoes reflectance calibration and noise reduction. Next, we transform the spectral data into two-dimensional images via the Gramian Angular Field (GAF) method. A residual network variant is then used to extract features from and classify these images. Our results demonstrate that this model significantly enhances classification accuracy and efficiency, achieving an average accuracy of 95%. The method proves highly effective for automated thymoma diagnosis, optimizing data utilization and feature representation learning.
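The GAF step encodes a 1D spectrum as a 2D image a CNN can consume. A minimal sketch of the summation variant (GASF); the spectrum values are illustrative, and the paper does not specify which GAF variant it uses:

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: rescale a 1D series to [-1, 1],
    map each value to an angle phi = arccos(x), and form the matrix
    G[i][j] = cos(phi_i + phi_j), encoding the series as a 2D image."""
    lo, hi = min(series), max(series)
    scaled = [2 * (x - lo) / (hi - lo) - 1 for x in series]
    phis = [math.acos(x) for x in scaled]
    return [[math.cos(pi + pj) for pj in phis] for pi in phis]

# Toy reflectance spectrum (illustrative values, not real thymoma data)
spectrum = [0.12, 0.30, 0.45, 0.28, 0.10]
image = gasf(spectrum)
print(len(image), len(image[0]))  # → 5 5
```

Each entry of the resulting matrix mixes two positions of the spectrum, so the image preserves temporal/spectral correlations that a plain 1D input would force the network to rediscover.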

PMID:39362657 | DOI:10.1002/jbio.202400325

Categories: Literature Watch

On-site burn severity assessment using smartphone-captured color burn wound images

Thu, 2024-10-03 06:00

Comput Biol Med. 2024 Oct 2;182:109171. doi: 10.1016/j.compbiomed.2024.109171. Online ahead of print.

ABSTRACT

Accurate assessment of burn severity is crucial for the management of burn injuries. Clinicians currently rely mainly on visual inspection to assess burns, a practice marked by notable inter-observer discrepancies. In this study, we introduce an innovative analysis platform that uses color burn wound images for automatic burn severity assessment. To this end, we propose a novel joint-task deep learning model capable of simultaneously segmenting both burn regions and body parts, the two crucial components in calculating the percentage of total body surface area (%TBSA). An asymmetric attention mechanism is introduced, allowing attention guidance from the body part segmentation task to the burn region segmentation task. A user-friendly mobile application was developed to facilitate fast assessment of burn severity in clinical settings. The proposed framework was evaluated on a dataset comprising 1340 color burn wound images captured on-site in clinical settings. The average Dice coefficients for burn depth segmentation and body part segmentation are 85.12% and 85.36%, respectively. The R2 for %TBSA assessment is 0.9136. The source code for the joint-task framework and the application is released on GitHub (https://github.com/xjtu-mia/BurnAnalysis). The proposed platform holds the potential to be widely used in clinical settings to facilitate fast and precise burn assessment.
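For readers unfamiliar with the Dice coefficient reported above, here is a minimal sketch of how it is computed on binary segmentation masks (the function, variable names, and toy masks are illustrative, not taken from the authors' released code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks; eps avoids 0/0."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: one overlapping pixel, 2 + 1 foreground pixels total.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 2*1/(2+1) -> 0.667
```

With both a burn mask and a body mask available, a %TBSA estimate follows directly as the ratio of burn pixels to body pixels times 100, which is why the paper treats the two segmentation tasks jointly.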

PMID:39362001 | DOI:10.1016/j.compbiomed.2024.109171

Categories: Literature Watch

Ascle-A Python Natural Language Processing Toolkit for Medical Text Generation: Development and Evaluation Study

Thu, 2024-10-03 06:00

J Med Internet Res. 2024 Oct 3;26:e60601. doi: 10.2196/60601.

ABSTRACT

BACKGROUND: Medical texts present significant domain-specific challenges, and manually curating these texts is a time-consuming and labor-intensive process. To address this, natural language processing (NLP) algorithms have been developed to automate text processing. In the biomedical field, various toolkits for text processing exist, which have greatly improved the efficiency of handling unstructured text. However, these existing toolkits tend to emphasize different perspectives, and none of them offer generation capabilities, leaving a significant gap in the current offerings.

OBJECTIVE: This study aims to describe the development and preliminary evaluation of Ascle, an easy-to-use, all-in-one toolkit tailored for biomedical researchers and clinical staff that requires minimal programming expertise. For the first time, Ascle provides 4 advanced and challenging generative functions: question-answering, text summarization, text simplification, and machine translation. In addition, Ascle integrates 12 essential NLP functions, along with query and search capabilities for clinical databases.

METHODS: We fine-tuned 32 domain-specific language models and evaluated them thoroughly on 27 established benchmarks. For the question-answering task, we developed a retrieval-augmented generation (RAG) framework for large language models that incorporated a medical knowledge graph with ranking techniques to enhance the reliability of generated answers. Finally, we conducted a physician validation to assess the quality of generated content beyond automated metrics.

RESULTS: The fine-tuned models and RAG framework consistently enhanced text generation tasks. For example, the fine-tuned models improved the machine translation task by 20.27 BLEU points. In the question-answering task, the RAG framework raised the ROUGE-L score by 18% over the vanilla models. Physician validation of generated answers showed high scores for readability (4.95/5) and relevancy (4.43/5), with lower scores for accuracy (3.90/5) and completeness (3.31/5).
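As background on the ROUGE-L metric cited above: it scores a generated text against a reference via their longest common subsequence (LCS) of word tokens, combined into an F-measure. A minimal sketch (not the evaluation code used in the study; the function name and example strings are illustrative):

```python
def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1: LCS-based precision/recall over word tokens."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, cw in enumerate(c, 1):
        for j, rw in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if cw == rw \
                else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

# LCS = 3 words; precision 3/3, recall 3/4, F1 = 0.857.
print(round(rouge_l("the cat sat", "the cat sat down"), 3))  # 0.857
```

Because LCS preserves word order without requiring contiguity, ROUGE-L rewards answers that cover the reference's content in sequence, which is why it is a common choice for question-answering evaluation.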

CONCLUSIONS: This study introduces the development and evaluation of Ascle, a user-friendly NLP toolkit designed for medical text generation. All code is publicly available through the Ascle GitHub repository. All fine-tuned language models can be accessed through Hugging Face.

PMID:39361955 | DOI:10.2196/60601

Categories: Literature Watch

Impaired motor-to-sensory transformation mediates auditory hallucinations

Thu, 2024-10-03 06:00

PLoS Biol. 2024 Oct 3;22(10):e3002836. doi: 10.1371/journal.pbio.3002836. eCollection 2024 Oct.

ABSTRACT

Distinguishing reality from hallucinations requires efficient monitoring of agency. It has been hypothesized that a copy of motor signals, termed efference copy (EC) or corollary discharge (CD), suppresses sensory responses to yield a sense of agency, and that impairment of this inhibitory function leads to hallucinations. However, how can the mere absence of inhibition yield the positive symptoms of hallucinations? We hypothesize that selective impairments in the functionally distinct signals of CD and EC during motor-to-sensory transformation cause the positive symptoms of hallucinations. In an electroencephalography (EEG) experiment using a delayed articulation paradigm in patients with schizophrenia with (AVH) and without (non-AVH) auditory verbal hallucinations, we found that preparing to speak without knowing the contents (general preparation) did not suppress auditory responses in either patient group, suggesting an absence of the inhibitory function of CD. Preparing to speak a specific syllable (specific preparation) enhanced auditory responses to the prepared syllable in non-AVH patients, whereas AVH patients showed enhanced responses to unprepared syllables, opposite to observations in the normal population, suggesting that the enhancement function of EC is imprecise in AVH. A computational model with a virtual lesion of an inhibitory interneuron and disproportionate sensitization of auditory cortices fitted the empirical data and further quantified the distinct impairments in motor-to-sensory transformation in AVH. These results suggest that a "broken" CD plus a "noisy" EC causes erroneous monitoring of the imprecise generation of internal auditory representations and yields auditory hallucinations. Specific impairments in the functional granularity of motor-to-sensory transformation mediate positive symptoms of agency abnormality in mental disorders.

PMID:39361912 | DOI:10.1371/journal.pbio.3002836

Categories: Literature Watch

Creation of de novo cryptic splicing for ALS and FTD precision medicine

Thu, 2024-10-03 06:00

Science. 2024 Oct 4;386(6717):61-69. doi: 10.1126/science.adk2539. Epub 2024 Oct 3.

ABSTRACT

Loss of function of the RNA-binding protein TDP-43 (TDP-LOF) is a hallmark of amyotrophic lateral sclerosis (ALS) and other neurodegenerative disorders. Here we describe TDP-REG, which exploits the specificity of cryptic splicing induced by TDP-LOF to drive protein expression when and where the disease process occurs. The SpliceNouveau algorithm combines deep learning with rational design to generate customizable cryptic splicing events within protein-coding sequences. We demonstrate that expression of TDP-REG reporters is tightly coupled to TDP-LOF in vitro and in vivo. TDP-REG enables genomic prime editing to ablate the UNC13A cryptic donor splice site specifically upon TDP-LOF. Finally, we design TDP-REG vectors encoding a TDP-43/Raver1 fusion protein that rescues key pathological cryptic splicing events, paving the way for the development of precision therapies for TDP-43-related disorders.

PMID:39361759 | DOI:10.1126/science.adk2539

Categories: Literature Watch
