Deep learning

The changing landscape of text mining: a review of approaches for ecology and evolution

Wed, 2024-07-31 06:00

Proc Biol Sci. 2024 Jul;291(2027):20240423. doi: 10.1098/rspb.2024.0423. Epub 2024 Jul 31.

ABSTRACT

In ecology and evolutionary biology, the synthesis and modelling of data from published literature are commonly used to generate insights and test theories across systems. However, the tasks of searching, screening, and extracting data from literature are often arduous. Researchers may manually process hundreds to thousands of articles for systematic reviews, meta-analyses, and compiling synthetic datasets. As relevant articles expand to tens or hundreds of thousands, computer-based approaches can increase the efficiency, transparency and reproducibility of literature-based research. Methods available for text mining are rapidly changing owing to developments in machine learning-based language models. We review the growing landscape of approaches, mapping them onto three broad paradigms (frequency-based approaches, traditional Natural Language Processing and deep learning-based language models). This serves as an entry point to learn foundational and cutting-edge concepts, vocabularies, and methods to foster integration of these tools into ecological and evolutionary research. We cover approaches for modelling ecological texts, generating training data, developing custom models and interacting with large language models and discuss challenges and possible solutions to implementing these methods in ecology and evolution.
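As a minimal illustration of the first paradigm surveyed here (frequency-based approaches), the sketch below builds a TF-IDF representation of a few toy ecological "abstracts" with scikit-learn and ranks the highest-weighted terms; the example texts and parameters are hypothetical and not drawn from the review.

```python
# Minimal sketch of a frequency-based text-mining step (TF-IDF), assuming scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy "abstracts"; real pipelines would load thousands of articles.
docs = [
    "warming alters species range shifts in alpine plants",
    "phylogenetic signal in body size evolution of mammals",
    "meta-analysis of predator effects on prey populations",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(docs)          # sparse matrix: documents x terms

# Rank the highest-weighted terms in the first document.
terms = vectorizer.get_feature_names_out()
row = tfidf[0].toarray().ravel()
top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:5]
print(top)
```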

PMID:39082244 | DOI:10.1098/rspb.2024.0423

Categories: Literature Watch

Label-Free Single-Cell Cancer Classification from the Spatial Distribution of Adhesion Contact Kinetics

Wed, 2024-07-31 06:00

ACS Sens. 2024 Jul 31. doi: 10.1021/acssensors.4c01139. Online ahead of print.

ABSTRACT

There is an increasing need for simple-to-use, noninvasive, and rapid tools to identify and separate various cell types or subtypes at the single-cell level with sufficient throughput. Often, the selection of cells based on their direct biological activity would be advantageous. These steps are critical in immune therapy, regenerative medicine, cancer diagnostics, and effective treatment. Today, live cell selection procedures incorporate some kind of biomolecular labeling or other invasive measures, which may impact cellular functionality or cause damage to the cells. In this study, we first introduce a highly accurate single-cell segmentation methodology by combining the high spatial resolution of a phase-contrast microscope with the adhesion kinetic recording capability of a resonant waveguide grating (RWG) biosensor. We then present a classification workflow that semiautomatically separates and classifies single cells from the combined RWG adhesion kinetics data and phase-contrast images. The methodology was tested with one healthy and six cancer cell types recorded with two functionalized coatings. The data set contains over 5000 single-cell samples for each surface and over 12,000 samples in total. We compare and evaluate the classification using these two types of surfaces (fibronectin and noncoated) with different segmentation strategies and measurement timespans applied to our classifiers. The overall classification performance reached nearly 95% with the best models, showing that our proof-of-concept methodology could be adapted for real-life automatic diagnostics use cases. The label-free measurement technique has no impact on cellular functionality, directly measures cellular activity, and can be easily tuned to a specific application by varying the sensor coating. These features make it suitable for applications requiring further processing of selected cells.
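To make the classification step concrete, here is a minimal sketch that classifies single cells from features summarizing a synthetic adhesion-kinetics curve; the features, the random-forest classifier, and the data are illustrative assumptions rather than the authors' workflow.

```python
# Illustrative sketch: classify single cells from adhesion-kinetics-derived features.
# Synthetic (random) data, so accuracy will be near chance; the point is the pipeline shape.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells, n_timepoints = 500, 120

# Each row is one cell's adhesion kinetics curve (e.g., RWG signal over time).
curves = rng.normal(size=(n_cells, n_timepoints)).cumsum(axis=1)
labels = rng.integers(0, 2, size=n_cells)       # e.g., healthy vs. one cancer type

# Simple per-cell summary features: final signal, maximum slope, area under the curve.
features = np.column_stack([
    curves[:, -1],
    np.diff(curves, axis=1).max(axis=1),
    curves.sum(axis=1),
])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, labels, cv=5).mean())
```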

PMID:39082162 | DOI:10.1021/acssensors.4c01139

Categories: Literature Watch

A Hybrid GNN Approach for Improved Molecular Property Prediction

Wed, 2024-07-31 06:00

J Comput Biol. 2024 Jul 31. doi: 10.1089/cmb.2023.0452. Online ahead of print.

ABSTRACT

The development of new drugs is a vital effort that has the potential to improve human health, well-being and life expectancy. Molecular property prediction is a crucial step in drug discovery, as it helps to identify potential therapeutic compounds. However, experimental methods for drug development can often be time-consuming and resource-intensive, with a low probability of success. To address such limitations, deep learning (DL) methods have emerged as a viable alternative due to their ability to identify highly discriminative patterns in molecular data. In particular, graph neural networks (GNNs) operate on graph-structured data to identify promising drug candidates with desirable molecular properties. These methods represent molecules as a set of node (atom) and edge (chemical bond) features to aggregate local information for molecular graph representation learning. Despite the availability of several GNN frameworks, each approach has its own shortcomings: although some GNNs excel at certain tasks, they may not perform as well at others. In this work, we propose a hybrid approach that incorporates different graph-based methods to combine their strengths and mitigate their limitations to accurately predict molecular properties. The proposed approach consists of a multi-layered hybrid GNN architecture that integrates multiple GNN frameworks to compute graph embeddings for molecular property prediction. Furthermore, we conduct extensive experiments on multiple benchmark datasets to demonstrate that our hybrid approach significantly outperforms state-of-the-art graph-based models. The data and code scripts to reproduce the results are available in the repository, https://github.com/pedro-quesado/HybridGNN.
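As a rough sketch of the hybrid idea, the example below stacks two different message-passing layers from PyTorch Geometric (GCN and GAT) and pools the node embeddings into a graph embedding for property prediction; the specific layers, sizes, and synthetic molecule are assumptions, not the architecture from the paper.

```python
# A minimal sketch of a hybrid GNN: stacking different message-passing layers
# (here GCN and GAT from PyTorch Geometric) and pooling to a graph embedding.
import torch
from torch import nn
from torch_geometric.nn import GCNConv, GATConv, global_mean_pool

class HybridGNN(nn.Module):
    def __init__(self, in_dim=16, hidden=32, out_dim=1):
        super().__init__()
        self.gcn = GCNConv(in_dim, hidden)            # spectral-style convolution
        self.gat = GATConv(hidden, hidden, heads=1)   # attention-based convolution
        self.head = nn.Linear(hidden, out_dim)        # molecular property regressor

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.gcn(x, edge_index))
        x = torch.relu(self.gat(x, edge_index))
        g = global_mean_pool(x, batch)                # aggregate atoms -> molecule
        return self.head(g)

# Tiny synthetic "molecule": 4 atoms, 3 bonds (edges listed in both directions).
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)              # all atoms belong to graph 0
print(HybridGNN()(x, edge_index, batch))
```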

PMID:39082155 | DOI:10.1089/cmb.2023.0452

Categories: Literature Watch

Artificial intelligence-enhanced electrocardiography analysis as a promising tool for predicting obstructive coronary artery disease in patients with stable angina

Wed, 2024-07-31 06:00

Eur Heart J Digit Health. 2024 May 14;5(4):444-453. doi: 10.1093/ehjdh/ztae038. eCollection 2024 Jul.

ABSTRACT

AIMS: The clinical feasibility of artificial intelligence (AI)-based electrocardiography (ECG) analysis for predicting obstructive coronary artery disease (CAD) has not been sufficiently validated in patients with stable angina, especially in large sample sizes.

METHODS AND RESULTS: A deep learning framework for quantitative ECG (QCG) analysis was trained and internally tested to derive the risk scores (0-100) for obstructive CAD (QCGObstCAD) and extensive CAD (QCGExtCAD) using 50 756 ECG images from 21 866 patients who underwent coronary artery evaluation for chest pain (invasive coronary or computed tomography angiography). External validation was performed in 4517 patients with stable angina who underwent coronary imaging to identify obstructive CAD. The QCGObstCAD and QCGExtCAD scores were significantly increased in the presence of obstructive and extensive CAD (all P < 0.001) and with increasing degrees of stenosis and disease burden, respectively (all P trend < 0.001). In the internal and external tests, QCGObstCAD exhibited a good predictive ability for obstructive CAD [area under the curve (AUC), 0.781 and 0.731, respectively] and severe obstructive CAD (AUC, 0.780 and 0.786, respectively), and QCGExtCAD exhibited a good predictive ability for extensive CAD (AUC, 0.689 and 0.784). In the external test, the QCGObstCAD and QCGExtCAD scores demonstrated independent and incremental predictive value for obstructive and extensive CAD, respectively, beyond conventional clinical risk factors. The QCG scores demonstrated significant associations with lesion characteristics, such as the fractional flow reserve, coronary calcification score, and total plaque volume.

CONCLUSION: The AI-based QCG analysis for predicting obstructive CAD in patients with stable angina, including those with severe stenosis and multivessel disease, is feasible.

PMID:39081950 | PMC:PMC11284006 | DOI:10.1093/ehjdh/ztae038

Categories: Literature Watch

Simple models vs. deep learning in detecting low ejection fraction from the electrocardiogram

Wed, 2024-07-31 06:00

Eur Heart J Digit Health. 2024 Apr 25;5(4):427-434. doi: 10.1093/ehjdh/ztae034. eCollection 2024 Jul.

ABSTRACT

AIMS: Deep learning methods have recently gained success in detecting left ventricular systolic dysfunction (LVSD) from electrocardiogram (ECG) waveforms. Despite their high level of accuracy, they are difficult to interpret and deploy broadly in the clinical setting. In this study, we set out to determine whether simpler models based on standard ECG measurements could detect LVSD with similar accuracy to that of deep learning models.

METHODS AND RESULTS: Using an observational data set of 40 994 matched 12-lead ECGs and transthoracic echocardiograms, we trained a range of models with increasing complexity to detect LVSD based on ECG waveforms and derived measurements. The training data were acquired from the Stanford University Medical Center. External validation data were acquired from the Columbia Medical Center and the UK Biobank. The Stanford data set consisted of 40 994 matched ECGs and echocardiograms, of which 9.72% had LVSD. A random forest model using 555 discrete, automated measurements achieved an area under the receiver operating characteristic curve (AUC) of 0.92 (0.91-0.93), similar to a deep learning waveform model with an AUC of 0.94 (0.93-0.94). A logistic regression model based on five measurements achieved high performance [AUC of 0.86 (0.85-0.87)], close to the deep learning model and better than N-terminal prohormone brain natriuretic peptide (NT-proBNP). Finally, in experiments at two independent, external sites, we found that the simpler models were more portable across sites.
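A minimal sketch of the comparison described above, contrasting a random forest on many measurements with a logistic regression on a handful, on synthetic data (the 555-versus-5 feature counts only echo the study's setup):

```python
# Synthetic comparison: random forest on many features vs. logistic regression on 5.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=555, n_informative=20,
                           weights=[0.9, 0.1], shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr[:, :5], y_tr)   # only 5 measurements

print("RF AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("LR AUC:", roc_auc_score(y_te, lr.predict_proba(X_te[:, :5])[:, 1]))
```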

CONCLUSION: Our study demonstrates the value of simple electrocardiographic models that perform nearly as well as deep learning models, while being much easier to implement and interpret.

PMID:39081946 | PMC:PMC11284011 | DOI:10.1093/ehjdh/ztae034

Categories: Literature Watch

Machine learning in cardiac stress test interpretation: a systematic review

Wed, 2024-07-31 06:00

Eur Heart J Digit Health. 2024 Apr 17;5(4):401-408. doi: 10.1093/ehjdh/ztae027. eCollection 2024 Jul.

ABSTRACT

Coronary artery disease (CAD) is a leading health challenge worldwide. Exercise stress testing is a foundational non-invasive diagnostic tool. Nonetheless, its variable accuracy prompts the exploration of more reliable methods. Recent advancements in machine learning (ML), including deep learning and natural language processing, have shown potential in refining the interpretation of stress testing data. Adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a systematic review of ML applications in stress electrocardiogram (ECG) and stress echocardiography for CAD prognosis. Medical Literature Analysis and Retrieval System Online, Web of Science, and the Cochrane Library were used as databases. We analysed the ML models, outcomes, and performance metrics. Overall, seven relevant studies were identified. Machine-learning applications in stress ECGs resulted in sensitivity and specificity improvements. Some models achieved rates of above 96% in both metrics and reduced false positives by up to 21%. In stress echocardiography, ML models demonstrated an increase in diagnostic precision. Some models achieved specificity and sensitivity rates of up to 92.7 and 84.4%, respectively. Natural language processing applications enabled the categorization of stress echocardiography reports, with accuracy rates nearing 98%. Limitations include a small, retrospective study pool and the exclusion of nuclear stress testing, due to its well-documented status. This review indicates the potential of artificial intelligence applications in refining CAD stress testing assessment. Further development for real-world use is warranted.

PMID:39081945 | PMC:PMC11284008 | DOI:10.1093/ehjdh/ztae027

Categories: Literature Watch

Dynamic risk stratification of worsening heart failure using a deep learning-enabled implanted ambulatory single-lead electrocardiogram

Wed, 2024-07-31 06:00

Eur Heart J Digit Health. 2024 May 8;5(4):435-443. doi: 10.1093/ehjdh/ztae035. eCollection 2024 Jul.

ABSTRACT

AIMS: Implantable loop recorders (ILRs) provide continuous single-lead ambulatory electrocardiogram (aECG) monitoring. Whether these aECGs could be used to identify worsening heart failure (HF) is unknown.

METHODS AND RESULTS: We linked ILR aECGs from the Medtronic device database to left ventricular ejection fraction (LVEF) measurements in the Optum® de-identified electronic health record dataset. We trained an artificial intelligence (AI) algorithm [aECG-convolutional neural network (CNN)] on a dataset of 35 741 aECGs from 2247 patients to identify LVEF ≤ 40% and assessed its performance using the area under the receiver operating characteristic curve. Ambulatory electrocardiogram-CNN was then used to identify patients with increasing risk of HF hospitalization in a real-world cohort of 909 patients with prior HF diagnosis. This dataset provided 12 467 follow-up monthly evaluations, with 201 HF hospitalizations. For every month, time-series features from these predictions were used to categorize patients into high- and low-risk groups and predict HF hospitalization in the next month. The risk of HF hospitalization in the next 30 days was significantly higher in the cohort that aECG-CNN identified as high risk [hazard ratio (HR) 1.89; 95% confidence interval (CI) 1.28-2.79; P = 0.001] compared with low risk, even after adjusting for patient demographics (HR 1.88; 95% CI 1.27-2.79; P = 0.002).
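The monthly risk-stratification step can be illustrated with a small sketch that summarizes each patient's recent model probabilities into time-series features and flags a high-risk group; the thresholds, features, and synthetic data are assumptions, not the study's definitions.

```python
# Illustrative monthly risk stratification from model probabilities of low LVEF.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
records = pd.DataFrame({
    "patient": np.repeat(np.arange(50), 6),
    "month": np.tile(np.arange(6), 50),
    "p_low_ef": rng.uniform(0, 1, size=300),   # monthly model probability of LVEF <= 40%
})

def monthly_features(g):
    # Mean level and recent trend of the probability trajectory.
    return pd.Series({
        "mean_p": g["p_low_ef"].mean(),
        "trend": np.polyfit(g["month"], g["p_low_ef"], 1)[0],
    })

feats = records.groupby("patient")[["month", "p_low_ef"]].apply(monthly_features)
feats["high_risk"] = (feats["mean_p"] > 0.5) | (feats["trend"] > 0.05)
print(feats["high_risk"].value_counts())
```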

CONCLUSION: An AI algorithm trained to detect LVEF ≤40% using ILR aECGs can also readily identify patients at increased risk of HF hospitalizations by monitoring changes in the probability of HF over 30 days.

PMID:39081943 | PMC:PMC11284004 | DOI:10.1093/ehjdh/ztae035

Categories: Literature Watch

Prospects for artificial intelligence-enhanced electrocardiogram as a unified screening tool for cardiac and non-cardiac conditions: an explorative study in emergency care

Wed, 2024-07-31 06:00

Eur Heart J Digit Health. 2024 May 12;5(4):454-460. doi: 10.1093/ehjdh/ztae039. eCollection 2024 Jul.

ABSTRACT

AIMS: Current deep learning algorithms for automatic ECG analysis have shown notable accuracy but are typically narrowly focused on singular diagnostic conditions. This exploratory study aims to investigate the capability of a single deep learning model to predict a diverse range of both cardiac and non-cardiac discharge diagnoses based on a single ECG collected in the emergency department.

METHODS AND RESULTS: In this study, we assess the performance of a model trained to predict a broad spectrum of diagnoses. We find that the model can reliably predict 253 ICD codes (81 cardiac and 172 non-cardiac), in the sense that their AUROC scores exceed 0.8 in a statistically significant manner.
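A minimal sketch of this kind of per-diagnosis check, estimating each label's AUROC with a bootstrap lower confidence bound and keeping labels whose bound exceeds 0.8; the synthetic scores and bootstrap settings are assumptions for illustration.

```python
# Per-label AUROC with a bootstrap lower bound; flag labels whose bound exceeds 0.8.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_labels = 2000, 5
y_true = rng.integers(0, 2, size=(n, n_labels))
y_score = y_true * 0.6 + rng.normal(0, 0.4, size=(n, n_labels))  # informative scores

def auroc_lower_bound(y, s, n_boot=200, alpha=0.05):
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():     # skip degenerate resamples
            continue
        aucs.append(roc_auc_score(y[idx], s[idx]))
    return np.quantile(aucs, alpha / 2)

for k in range(n_labels):
    lb = auroc_lower_bound(y_true[:, k], y_score[:, k])
    print(f"label {k}: AUROC lower bound {lb:.3f} -> {'keep' if lb > 0.8 else 'drop'}")
```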

CONCLUSION: The model demonstrates proficiency in handling a wide array of cardiac and non-cardiac diagnostic scenarios, indicating its potential as a comprehensive screening tool for diverse medical encounters.

PMID:39081937 | PMC:PMC11284007 | DOI:10.1093/ehjdh/ztae039

Categories: Literature Watch

Hypertrophic cardiomyopathy detection with artificial intelligence electrocardiography in international cohorts: an external validation study

Wed, 2024-07-31 06:00

Eur Heart J Digit Health. 2024 Apr 15;5(4):416-426. doi: 10.1093/ehjdh/ztae029. eCollection 2024 Jul.

ABSTRACT

AIMS: Recently, deep learning artificial intelligence (AI) models have been trained to detect cardiovascular conditions, including hypertrophic cardiomyopathy (HCM), from the 12-lead electrocardiogram (ECG). In this external validation study, we sought to assess the performance of an AI-ECG algorithm for detecting HCM in diverse international cohorts.

METHODS AND RESULTS: A convolutional neural network-based AI-ECG algorithm was developed previously in a single-centre North American HCM cohort (Mayo Clinic). This algorithm was applied to the raw 12-lead ECG data of patients with HCM and non-HCM controls from three external cohorts (Bern, Switzerland; Oxford, UK; and Seoul, South Korea). The algorithm's ability to distinguish HCM vs. non-HCM status from the ECG alone was examined. A total of 773 patients with HCM and 3867 non-HCM controls were included across three sites in the merged external validation cohort. The HCM study sample comprised 54.6% East Asian, 43.2% White, and 2.2% Black patients. Median AI-ECG probabilities of HCM were 85% for patients with HCM and 0.3% for controls (P < 0.001). Overall, the AI-ECG algorithm had an area under the receiver operating characteristic curve (AUC) of 0.922 [95% confidence interval (CI) 0.910-0.934], with diagnostic accuracy 86.9%, sensitivity 82.8%, and specificity 87.7% for HCM detection. In age- and sex-matched analysis (case-control ratio 1:2), the AUC was 0.921 (95% CI 0.909-0.934) with accuracy 88.5%, sensitivity 82.8%, and specificity 90.4%.

CONCLUSION: The AI-ECG algorithm determined HCM status from the 12-lead ECG with high accuracy in diverse international cohorts, providing evidence for external validity. The value of this algorithm in improving HCM detection in clinical practice and screening settings requires prospective evaluation.

PMID:39081936 | PMC:PMC11284003 | DOI:10.1093/ehjdh/ztae029

Categories: Literature Watch

ProLesA-Net: A multi-channel 3D architecture for prostate MRI lesion segmentation with multi-scale channel and spatial attentions

Wed, 2024-07-31 06:00

Patterns (N Y). 2024 May 15;5(7):100992. doi: 10.1016/j.patter.2024.100992. eCollection 2024 Jul 12.

ABSTRACT

Prostate cancer diagnosis and treatment rely on precise MRI lesion segmentation, a challenge particularly for small (<15 mm) and intermediate (15-30 mm) lesions. Our study introduces ProLesA-Net, a multi-channel 3D deep-learning architecture with multi-scale squeeze-and-excitation and attention gate mechanisms. Tested against six models across two datasets, ProLesA-Net significantly outperformed them on key metrics: the Dice score increased by 2.2%, the Hausdorff distance and average surface distance improved by 0.5 mm, and recall and precision also improved. Specifically, for lesions under 15 mm, our model showed a notable increase across five key metrics. In summary, ProLesA-Net consistently ranked at the top, demonstrating enhanced performance and stability. This advancement addresses crucial challenges in prostate lesion segmentation, enhancing clinical decision making and expediting treatment processes.
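One ingredient named above, channel attention via squeeze and excitation, can be sketched for 3D feature maps as follows; the channel count and reduction ratio are illustrative assumptions, not the ProLesA-Net implementation.

```python
# Minimal squeeze-and-excitation (SE) channel-attention block for 3D feature maps.
import torch
from torch import nn

class SEBlock3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # "squeeze": global context per channel
        self.fc = nn.Sequential(                       # "excitation": channel-wise gates
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                   # reweight channels

feat = torch.randn(2, 32, 8, 16, 16)                   # batch x channels x D x H x W
print(SEBlock3D(32)(feat).shape)
```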

PMID:39081575 | PMC:PMC11284496 | DOI:10.1016/j.patter.2024.100992

Categories: Literature Watch

Human monkeypox disease prediction using novel modified restricted Boltzmann machine-based equilibrium optimizer

Tue, 2024-07-30 06:00

Sci Rep. 2024 Jul 30;14(1):17612. doi: 10.1038/s41598-024-68836-3.

ABSTRACT

While the globe continues to struggle to recover from the devastation brought on by the extensive spread of the COVID-19 virus, the recent worrying rise in human monkeypox outbreaks in several nations raises the possibility of a new worldwide pandemic. The symptoms of human monkeypox resemble those of chickenpox and traditional measles, with a few subtle differences such as the various kinds of skin blisters. A range of deep learning techniques have demonstrated encouraging results in image-based tumor cell classification, COVID-19 diagnosis, and skin disease prediction tasks. Hence, it becomes natural to apply deep learning techniques to the prediction of the new monkeypox disease. In this paper, image-based human monkeypox disease prediction is performed with the help of a novel deep learning methodology. Initially, the data are gathered from the standard benchmark dataset called the Monkeypox Skin Lesion Dataset. The collected images are pre-processed using image resizing, image normalization, and data augmentation techniques. Features are then extracted from the pre-processed images with the Convolutional Block Attention Module (CBAM) approach. The extracted features are passed to the final prediction phase, a Modified Restricted Boltzmann Machine (MRBM), in which the RBM parameters are tuned by the nature-inspired Equilibrium Optimizer (EO) with error minimization as the main objective function. Simulation findings demonstrate that the proposed model performed better than the competing models at monkeypox prediction. In terms of RMSE, the proposed MRBM-EO is 75.68%, 70%, 60.87%, and 43.75% better than PSO-SVM, Xception-CBAM-Dense, ShuffleNet, and RBM, respectively. Similarly, in terms of accuracy, the proposed MRBM-EO is 9.22%, 7.75%, 3.77%, and 10.90% better than PSO-SVM, Xception-CBAM-Dense, ShuffleNet, and RBM, respectively.
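The pre-processing steps named above (resizing, normalization, augmentation) might look roughly like the torchvision sketch below; the image size, statistics, and augmentations are assumptions rather than the paper's settings.

```python
# Sketch of image resizing, normalization, and augmentation with torchvision.
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # image resizing
    transforms.RandomHorizontalFlip(),                 # simple data augmentation
    transforms.RandomRotation(15),
    transforms.ToTensor(),                             # to [0, 1] tensor
    transforms.Normalize(mean=[0.5, 0.5, 0.5],         # image normalization
                         std=[0.5, 0.5, 0.5]),
])

img = Image.new("RGB", (600, 450))                     # placeholder for a skin-lesion image
print(preprocess(img).shape)                           # torch.Size([3, 224, 224])
```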

PMID:39080387 | DOI:10.1038/s41598-024-68836-3

Categories: Literature Watch

Training high-performance deep learning classifier for diagnosis in oral cytology using diverse annotations

Tue, 2024-07-30 06:00

Sci Rep. 2024 Jul 30;14(1):17591. doi: 10.1038/s41598-024-67879-w.

ABSTRACT

The uncertainty of true labels in medical images, owing to variability across professionals, hinders diagnosis when applying deep learning models. We used deep learning to obtain an optimal convolutional neural network (CNN) for oral exfoliative cytology by adequately annotating the data with labels from multiple oral pathologists. Six whole-slide images were processed using QuPath to segment them into tiles. The images were labeled by three oral pathologists, resulting in 14,535 images with the corresponding pathologists' annotations. Images for which the three pathologists provided the same diagnosis were labeled as ground truth (GT) and used for testing. We investigated six models trained using the annotations of (1) pathologist A, (2) pathologist B, (3) pathologist C, (4) GT, (5) majority voting, and (6) a probabilistic model. We performed per-slide cross-validation and examined the classification performance of a CNN with a ResNet50 baseline. Statistical evaluation was performed repeatedly and independently, using every slide as test data 10 times. For the area under the curve, three cases showed the highest values (0.861, 0.955, and 0.991) for the probabilistic model. Regarding accuracy, two cases showed the highest values (0.988 and 0.967). For the models trained with the individual pathologists' and GT annotations, many slides showed very low accuracy and large variation across tests. Hence, the classifier trained with probabilistic labels provided the optimal CNN for oral exfoliative cytology considering diagnoses from multiple pathologists. These results may lead to trusted medical artificial intelligence solutions that reflect the diverse diagnoses of various professionals.
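Training on probabilistic labels can be sketched as a cross-entropy against a per-image class distribution derived from several annotators' votes; the toy voting scheme and stand-in classifier head below are assumptions for illustration.

```python
# Soft-label training sketch: targets are class distributions from annotator votes.
import torch
from torch import nn
import torch.nn.functional as F

def soft_label_loss(logits, target_probs):
    # Cross-entropy against a probability distribution over classes.
    return -(target_probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Three pathologists' votes for 4 images over 2 classes -> probabilistic labels.
votes = torch.tensor([[3, 0], [2, 1], [1, 2], [0, 3]], dtype=torch.float)
target_probs = votes / votes.sum(dim=1, keepdim=True)

model = nn.Linear(128, 2)                   # stand-in for a ResNet50-based classifier head
features = torch.randn(4, 128)
loss = soft_label_loss(model(features), target_probs)
loss.backward()
print(float(loss))
```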

PMID:39080384 | DOI:10.1038/s41598-024-67879-w

Categories: Literature Watch

CBIL-VHPLI: a model for predicting viral-host protein-lncRNA interactions based on machine learning and transfer learning

Tue, 2024-07-30 06:00

Sci Rep. 2024 Jul 30;14(1):17549. doi: 10.1038/s41598-024-68750-8.

ABSTRACT

Virus-host protein-lncRNA interaction (VHPLI) predictions are critical for decoding the molecular mechanisms of viral pathogens and host immune processes. Although protein-lncRNA interactions have been predicted in both plants and animals, they have not been extensively studied for viruses. For the first time, we propose a new deep learning-based approach, named CBIL-VHPLI, that combines convolutional neural network and bidirectional long short-term memory modules with transfer learning to predict viral-host protein-lncRNA interactions. The models were first trained on large and diverse datasets (including plants, animals, etc.). Protein sequence features were extracted using a k-mer method combined with one-hot encoding and composition-transition-distribution (CTD) methods, and lncRNA sequence features were extracted using a k-mer method combined with one-hot encoding and Z-curve methods. The results obtained on three independent external validation datasets showed that the pre-trained CBIL-VHPLI model performed best, with an accuracy of approximately 0.9. Pretraining was followed by transfer learning on a viral protein-human lncRNA dataset, and after fine-tuning, the accuracy of CBIL-VHPLI reached 0.946, significantly greater than that of previous models. A final case study showed that CBIL-VHPLI achieved a prediction reproducibility rate of 91.6% against RIP-Seq experimental screening results. The model was then used to predict the interaction between the human lncRNA PIK3CD-AS2 and the nonstructural protein 1 (NS1) of the H5N1 virus, and RNA pull-down experiments were used to corroborate the prediction. The source code of CBIL-VHPLI and the datasets used in this work are available at https://github.com/Liu-Lab-Lnu/CBIL-VHPLI for academic use.
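One of the feature-extraction ideas mentioned above, k-mer counting over a sequence alphabet, can be sketched as follows; the k values, alphabets, and normalization are assumptions and do not reproduce the paper's exact k-mer, one-hot, CTD, or Z-curve encodings.

```python
# Simple k-mer frequency features for protein or lncRNA sequences.
from itertools import product
import numpy as np

def kmer_vector(seq, alphabet, k=2):
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:
            v[index[km]] += 1
    return v / max(1, len(seq) - k + 1)     # normalize by number of k-mers

rna = kmer_vector("AUGGCUAAGGCU", alphabet="ACGU", k=2)
protein = kmer_vector("MKTLLVAGS", alphabet="ACDEFGHIKLMNPQRSTVWY", k=1)
print(rna.shape, protein.shape)              # (16,) (20,)
```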

PMID:39080344 | DOI:10.1038/s41598-024-68750-8

Categories: Literature Watch

Enhanced 3D dose prediction for hypofractionated SRS (gamma knife radiosurgery) in brain tumor using cascaded-deep-supervised convolutional neural network

Tue, 2024-07-30 06:00

Phys Eng Sci Med. 2024 Jul 30. doi: 10.1007/s13246-024-01457-2. Online ahead of print.

ABSTRACT

Gamma Knife radiosurgery (GKRS) is a well-established technique in radiation therapy (RT) for treating small brain tumors. It administers highly concentrated doses during each treatment fraction, and even minor dose errors pose a significant risk of severe damage to healthy tissues. This underscores the critical need for precision in GKRS. However, the planning process for GKRS is complex and time-consuming, relying heavily on the expertise of medical physicists. Incorporating deep learning approaches for GKRS dose prediction can reduce this dependency, improve planning efficiency and homogeneity, streamline clinical workflows, and reduce patient waiting times. Despite this, precise prediction of Gamma Knife plan dose distributions using existing models remains a significant challenge. The complexity stems from the intricate nature of dose distributions, subtle contrasts in CT scans, and the interdependence of dosimetric metrics. To overcome these challenges, we have developed a "Cascaded-Deep-Supervised" Convolutional Neural Network (CDS-CNN) that employs a hybrid-weighted optimization scheme. Our method incorporates multi-level deep supervision and a strategic sequential multi-network training approach. It enables the extraction of intra-slice and inter-slice features, leading to more realistic dose predictions with additional contextual information. CDS-CNN was trained and evaluated using data from 105 brain cancer patients who underwent GKRS treatment, with 85 cases used for training and 20 for testing. Quantitative assessments and statistical analyses demonstrated high consistency between the predicted dose distributions and the reference doses from the treatment planning system (TPS). The 3D overall gamma passing rates (GPRs) reached 97.15% ± 1.36% (3 mm/3%, 10% threshold), surpassing the previous best performance by 2.53% using the 3D Dense U-Net model. When evaluated against more stringent criteria (2 mm/3%, 10% threshold, and 1 mm/3%, 10% threshold), the overall GPRs still achieved 96.53% ± 1.08% and 95.03% ± 1.18%. Furthermore, the average target coverage (TC) was 98.33% ± 1.16%, dose selectivity (DS) was 0.57 ± 0.10, gradient index (GI) was 2.69 ± 0.30, and homogeneity index (HI) was 1.79 ± 0.09. Compared with the 3D Dense U-Net, CDS-CNN predictions demonstrated a 3.5% improvement in TC and yielded better outcomes across all evaluation criteria. These results demonstrate that the proposed CDS-CNN model outperformed other models in predicting GKRS dose distributions, with predictions closely matching the TPS doses.
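The deep-supervision idea can be sketched as auxiliary dose predictions at intermediate depths contributing weighted loss terms alongside the final output; the toy layers, shapes, and loss weights below are assumptions, not the CDS-CNN implementation.

```python
# Toy deep-supervision setup: an auxiliary head at an intermediate depth adds a weighted loss.
import torch
from torch import nn
import torch.nn.functional as F

class ToyDoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv3d(1, 8, 3, padding=1)
        self.mid = nn.Conv3d(8, 8, 3, padding=1)
        self.aux_head = nn.Conv3d(8, 1, 1)          # deep-supervision head (intermediate)
        self.out_head = nn.Conv3d(8, 1, 1)          # final dose head

    def forward(self, x):
        h1 = torch.relu(self.enc(x))
        h2 = torch.relu(self.mid(h1))
        return self.aux_head(h1), self.out_head(h2)

ct = torch.randn(1, 1, 16, 32, 32)
dose = torch.rand(1, 1, 16, 32, 32)

aux, final = ToyDoseNet()(ct)
loss = 1.0 * F.mse_loss(final, dose) + 0.4 * F.mse_loss(aux, dose)  # hybrid weighting
loss.backward()
print(float(loss))
```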

PMID:39080208 | DOI:10.1007/s13246-024-01457-2

Categories: Literature Watch

Clinical Utility of a Rapid 2D Balanced Steady State Free Precession Sequence with Deep Learning Reconstruction

Tue, 2024-07-30 06:00

J Cardiovasc Magn Reson. 2024 Jul 28:101069. doi: 10.1016/j.jocmr.2024.101069. Online ahead of print.

ABSTRACT

BACKGROUND: Cardiovascular magnetic resonance (CMR) cine imaging is still limited by long acquisition times. This study evaluated the clinical utility of an accelerated two-dimensional (2D) cine sequence with deep learning reconstruction (Sonic DL) to decrease acquisition time without compromising quantitative volumetry or image quality.

METHODS: A sub-study of 16 participants was performed using Sonic DL at two different acceleration factors (8x and 12x). Quantitative left-ventricular volumetry, function, and mass measurements at each acceleration factor were compared against a standard cine method. Following this sub-study, 108 participants were prospectively recruited and imaged using a standard cine method and the Sonic DL method with the acceleration factor that more closely matched the reference method. Two experienced clinical readers rated images based on their diagnostic utility and performed all image contouring. Quantitative contrast difference and endocardial border sharpness were also assessed. Left- and right-ventricular volumetry, left-ventricular mass, and myocardial strain measurements were compared between cine methods using Bland-Altman plots, Pearson's correlation, and paired t-tests. Comparative analysis of image quality was performed using Wilcoxon signed-rank tests and visualized using bar graphs.
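The agreement analysis named here (Bland-Altman bias and limits of agreement, Pearson correlation, paired t-test) can be sketched for a single volumetric measurement as follows; the synthetic values are placeholders, not study data.

```python
# Bland-Altman bias/limits of agreement, Pearson correlation, and a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
standard = rng.normal(60, 8, size=50)                 # e.g., EF (%) from standard cine
accelerated = standard + rng.normal(0.5, 2, size=50)  # accelerated cine with a small bias

diff = accelerated - standard
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

r, _ = stats.pearsonr(standard, accelerated)
t, p = stats.ttest_rel(accelerated, standard)
print(f"bias={bias:.2f}, LoA={loa}, r={r:.3f}, paired t p={p:.3f}")
```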

RESULTS: Sonic DL at an acceleration factor of 8 more closely matched the reference cine method. There were no significant differences found across left ventricular volumetry, function, or mass measurements. In contrast, an acceleration factor of 12 resulted in a 6% reduction of measured ejection fraction when compared to the standard cine method and a 4% reduction of measured ejection fraction when compared to Sonic DL at an acceleration factor of 8. Thus, Sonic DL at an acceleration factor of 8 was chosen for downstream analysis. In the larger cohort, this accelerated cine sequence was successfully performed in all participants and significantly reduced the acquisition time of cine images compared to the standard 2D method (reduction of 40%, p < 0.0001). Diagnostic image quality ratings and quantitative image quality evaluations were statistically not different between the two methods (p > 0.05). Left- and right-ventricular volumetry and circumferential and radial strain were also similar between methods (p > 0.05) but left-ventricular mass and longitudinal strain were over-estimated using the proposed accelerated cine method (mass over-estimated by 3.36g/m2, p < 0.0001; longitudinal strain over-estimated by 1.97%, p = 0.001).

CONCLUSIONS: This study found that an accelerated 2D cine method with DL reconstruction at an acceleration factor of 8 can reduce CMR cine acquisition time by 40% without significantly affecting volumetry or image quality. Given the increase of scan time efficiency, this undersampled acquisition method using deep learning reconstruction should be considered for routine clinical CMR.

PMID:39079600 | DOI:10.1016/j.jocmr.2024.101069

Categories: Literature Watch

MoCab: A framework for the deployment of machine learning models across health information systems

Tue, 2024-07-30 06:00

Comput Methods Programs Biomed. 2024 Jul 20;255:108336. doi: 10.1016/j.cmpb.2024.108336. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Machine learning models are vital for enhancing healthcare services. However, integrating them into health information systems (HISs) introduces challenges beyond clinical decision making, such as interoperability and diverse electronic health record (EHR) formats. We propose the Model Cabinet Architecture (MoCab), a framework designed to leverage Fast Healthcare Interoperability Resources (FHIR) as the standard for data storage and retrieval when deploying machine learning models across various HISs, addressing the challenges highlighted by platforms such as EPOCH®, ePRISM®, KETOS, and others.

METHODS: The MoCab architecture is designed to streamline predictive modeling in healthcare through a structured framework incorporating several specialized parts. The Data Service Center manages patient data retrieval from FHIR servers. These data are then processed by the Knowledge Model Center, where they are formatted and fed into predictive models. The Model Retraining Center is crucial in continuously updating these models to maintain accuracy in dynamic clinical environments. The framework further incorporates Clinical Decision Support (CDS) Hooks for issuing clinical alerts. It uses Substitutable Medical Apps Reusable Technologies (SMART) on FHIR to develop applications for displaying alerts, prediction results, and patient records.
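The Data Service Center's role, pulling patient data from a FHIR server, can be sketched with a plain REST search against the standard Observation endpoint; the server URL, patient ID, and LOINC code below are hypothetical, and authentication and error handling are omitted.

```python
# Illustrative FHIR Observation retrieval over the standard REST search API.
import requests

FHIR_BASE = "https://fhir.example.org/fhir"            # hypothetical FHIR endpoint

def fetch_observations(patient_id, loinc_code):
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code, "_sort": "-date"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()                                # FHIR Bundle resource
    return [e["resource"] for e in bundle.get("entry", [])]

# Example: latest serum creatinine observations to feed a predictive model.
# observations = fetch_observations("12345", "2160-0")
```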

RESULTS: The MoCab framework was demonstrated using three types of predictive models: a scoring model (qCSI), a machine learning model (NSTI), and a deep learning model (SPC), applied to synthetic data that mimic a major EHR system. The implementations showed how MoCab integrates predictive models with health data for clinical decision support, utilizing CDS Hooks and SMART on FHIR for seamless HIS integration. The demonstration confirmed the practical utility of MoCab in supporting clinical decision making, validated by its application in various healthcare settings.

CONCLUSIONS: We demonstrate MoCab's potential to promote the interoperability of machine learning models and enhance their utility across various EHRs. Despite hurdles such as FHIR adoption, MoCab addresses key challenges in adapting machine learning models to healthcare settings, paving the way for further enhancements and broader adoption.

PMID:39079482 | DOI:10.1016/j.cmpb.2024.108336

Categories: Literature Watch

Real time detection and identification of fish quality using low-power multimodal artificial olfaction system

Tue, 2024-07-30 06:00

Talanta. 2024 Jul 22;279:126601. doi: 10.1016/j.talanta.2024.126601. Online ahead of print.

ABSTRACT

Single gas quantification and mixed gas identification have been major challenges in the field of gas detection. To address the shortcomings of chemo-resistive gas sensors, sensor arrays have been the subject of recent research. In this work, the research focused on both the optimization of gas-sensing materials and the analysis of pattern recognition algorithms. Four bimetallic oxide-based gas sensors capable of operating at room temperature were first developed by introducing different modulation techniques on the sensing layer, including constructing surface oxygen defects, polymerizing conducting polymers, nano-metal modification, and compositing with flexible substrates. The signals derived from the gas sensor array were then processed to eliminate noise and reduce dimensionality through feature engineering. The gases were qualitatively identified by a support vector machine (SVM) model with an accuracy of 98.86%. Meanwhile, a combined model of a convolutional neural network and a long short-term memory network (CNN-LSTM) was established to remove interference samples and quantitatively estimate the concentrations of the target gases. The combined deep learning model, which avoids overfitting to locally optimal solutions, effectively boosts concentration-recognition performance, with a lowest root mean square error (RMSE) of 2.3. Finally, a low-power artificial olfactory system was established by merging the multi-sensor data and applied to real-time, accurate judgment of food freshness.
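A combined CNN-LSTM regressor for sensor-array time series, of the kind used above for concentration estimation, can be sketched as follows; the layer sizes and synthetic input are assumptions, not the authors' architecture.

```python
# Minimal CNN-LSTM regressor for multi-sensor time series (e.g., gas concentration).
import torch
from torch import nn

class CNNLSTM(nn.Module):
    def __init__(self, n_sensors=4, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_sensors, 16, kernel_size=5, padding=2)  # local patterns
        self.lstm = nn.LSTM(16, hidden, batch_first=True)               # temporal dynamics
        self.head = nn.Linear(hidden, 1)                                # concentration (ppm)

    def forward(self, x):                     # x: batch x time x sensors
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        _, (h_n, _) = self.lstm(h)
        return self.head(h_n[-1])

responses = torch.randn(8, 200, 4)            # 8 samples, 200 time steps, 4 sensors
print(CNNLSTM()(responses).shape)             # torch.Size([8, 1])
```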

PMID:39079435 | DOI:10.1016/j.talanta.2024.126601

Categories: Literature Watch

Shape prior-constrained deep learning network for medical image segmentation

Tue, 2024-07-30 06:00

Comput Biol Med. 2024 Jul 29;180:108932. doi: 10.1016/j.compbiomed.2024.108932. Online ahead of print.

ABSTRACT

We propose a shape prior representation-constrained, multi-scale feature fusion segmentation network for medical image segmentation, comprising training and testing stages. The novelty of our training framework lies in two modules: the shape prior constraint and multi-scale feature fusion. The shape prior learning model is embedded into the segmentation neural network to address the problems of low contrast and neighboring organs with intensities similar to the target organ. The multi-scale feature fusion provides both local and global context to address large variations in patient posture as well as organ shape. In the testing stage, we propose a circular collaboration strategy that combines a shape-generator auto-encoder network with the segmentation network, allowing the two models to collaborate with each other and produce accurate segmentations. Our proposed method is evaluated on three datasets, the ACDC MICCAI'17 Challenge dataset, a COVID-19 CT lung dataset, and the LiTS2017 liver dataset, and its results are compared with the recent state of the art in these areas. Our method ranked 1st on the ACDC dataset in terms of Dice score and achieved very competitive performance on COVID-19 CT lung and LiTS2017 liver segmentation.
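Combining a segmentation loss with a shape-prior term can be sketched by letting a (stand-in) shape auto-encoder reconstruct the predicted mask and penalizing the deviation; the toy auto-encoder and weighting are assumptions for illustration only.

```python
# Dice loss plus a toy shape-prior penalty from a stand-in shape auto-encoder.
import torch
from torch import nn
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

shape_ae = nn.Sequential(                      # stand-in for a pretrained shape auto-encoder
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
)

pred_mask = torch.rand(1, 1, 64, 64, requires_grad=True)   # segmentation network output
gt_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()

loss = dice_loss(pred_mask, gt_mask) + 0.1 * F.mse_loss(shape_ae(pred_mask), pred_mask)
loss.backward()
print(float(loss))
```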

PMID:39079416 | DOI:10.1016/j.compbiomed.2024.108932

Categories: Literature Watch

Robust visual question answering via polarity enhancement and contrast

Tue, 2024-07-30 06:00

Neural Netw. 2024 Jul 20;179:106560. doi: 10.1016/j.neunet.2024.106560. Online ahead of print.

ABSTRACT

The Visual Question Answering (VQA) task is an important research direction in the field of artificial intelligence, requiring a model that can simultaneously understand visual images and natural language questions and answer questions about the images. Recent studies have shown that many Visual Question Answering models rely on statistical regularities between questions and answers, which in turn weakens the link between visual content and textual information. In this work, we propose an unbiased Visual Question Answering method that mitigates language priors by strengthening the contrast between the correct answer and the positive and negative predictions. We design a new model consisting of two modules with different roles. The image and its corresponding question are fed into the Answer Visual Attention Modules to generate the positive prediction, and a Dual Channels Joint Module then generates the negative prediction, which carries strong linguistic prior knowledge. Finally, the positive and negative predictions, together with the correct answer, are fed into our newly designed loss function for training. Our method achieves high performance (61.24%) on the VQA-CP v2 dataset. In addition, most existing debiasing methods improve performance on the VQA-CP v2 dataset at the cost of reduced performance on the VQA v2 dataset, whereas our method does not sacrifice accuracy on VQA v2; instead, it improves performance on both datasets.
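The contrast idea can be sketched as a loss that pushes the positive branch to score the correct answer higher than the prior-driven negative branch by a margin; the margin value and synthetic logits are assumptions, not the paper's loss.

```python
# Margin-based contrast between positive and negative answer logits, plus standard CE.
import torch
import torch.nn.functional as F

def polarity_contrast_loss(pos_logits, neg_logits, answer_idx, margin=0.5):
    pos_score = pos_logits.gather(1, answer_idx.unsqueeze(1)).squeeze(1)
    neg_score = neg_logits.gather(1, answer_idx.unsqueeze(1)).squeeze(1)
    ce = F.cross_entropy(pos_logits, answer_idx)                 # standard VQA loss
    contrast = F.relu(margin - (pos_score - neg_score)).mean()   # widen the gap
    return ce + contrast

pos = torch.randn(4, 10, requires_grad=True)   # positive-branch answer logits
neg = torch.randn(4, 10)                       # negative (prior-only) branch logits
ans = torch.randint(0, 10, (4,))
loss = polarity_contrast_loss(pos, neg, ans)
loss.backward()
print(float(loss))
```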

PMID:39079376 | DOI:10.1016/j.neunet.2024.106560

Categories: Literature Watch

Differentiation of tuberculous and brucellar spondylitis using conventional MRI-based deep learning algorithms

Tue, 2024-07-30 06:00

Eur J Radiol. 2024 Jul 27;178:111655. doi: 10.1016/j.ejrad.2024.111655. Online ahead of print.

ABSTRACT

PURPOSE: To investigate the feasibility of deep learning (DL) based on conventional MRI to differentiate tuberculous spondylitis (TS) from brucellar spondylitis (BS).

METHODS: A total of 383 patients with TS (n = 182) or BS (n = 201) were enrolled from April 2013 to May 2023 and randomly divided into training (n = 307) and validation (n = 76) sets. Sagittal T1WI, T2WI, and fat-suppressed (FS) T2WI images were used to construct single-sequence DL models and combined models based on VGG19, VGG16, ResNet18, and DenseNet121 network. The area under the receiver operating characteristic curve (AUC) was used to assess the classification performance. The AUC of DL models was compared with that of two radiologists with different levels of experience.

RESULTS: The AUCs based on VGG19, ResNet18, VGG16, and DenseNet121 ranged from 0.885 to 0.973, 0.873 to 0.944, 0.882 to 0.929, and 0.801 to 0.933, respectively, and the VGG19 models performed best. The combined models outperformed the single-sequence DL models in diagnostic efficiency. The combined model of T1WI, T2WI, and FS T2WI based on VGG19 achieved optimal performance, with an AUC of 0.973. In addition, the performance of all combined models based on T1WI, T2WI, and FS T2WI was better than that of the two radiologists (P<0.05).

CONCLUSION: DL models based on conventional MRI have potential value in guiding the differentiation of TS from BS and may serve as a reference for clinical practice.

PMID:39079324 | DOI:10.1016/j.ejrad.2024.111655

Categories: Literature Watch
