Deep learning
Protein language models enable prediction of polyreactivity of monospecific, bispecific, and heavy-chain-only antibodies
Antib Ther. 2024 May 30;7(3):199-208. doi: 10.1093/abt/tbae012. eCollection 2024 Jul.
ABSTRACT
BACKGROUND: Early assessment of antibody off-target binding is essential for mitigating developability risks such as fast clearance, reduced efficacy, toxicity, and immunogenicity. The baculovirus particle (BVP) binding assay has been widely utilized to evaluate polyreactivity of antibodies. As a complementary approach, computational prediction of polyreactivity is desirable for counter-screening antibodies from in silico discovery campaigns. However, there is a lack of such models.
METHODS: Herein, we present the development of an ensemble of three deep learning models built on two general-purpose foundational protein language models (PLMs), ESM2 and ProtT5, and an antibody-specific PLM, AntiBERTy. These models were trained in a transfer-learning framework to predict outcomes in the BVP assay and in a bovine serum albumin binding assay developed as a complement to the BVP assay. Training was conducted on a large dataset of antibody sequences, augmented with experimental conditions, collected through a highly efficient application system.
RESULTS: The resulting models demonstrated robust performance on canonical mAbs (monospecific antibodies with heavy and light chains), bispecific antibodies, and single-domain Fc constructs (VHH-Fc). The PLM-based models outperformed a model built on molecular descriptors calculated from AlphaFold2-predicted structures. Embeddings from the antibody-specific and foundational PLMs yielded similar performance.
CONCLUSION: To our knowledge, this represents the first application of PLMs to predict assay data on bispecifics and VHH-Fcs.
PMID:39036071 | PMC:PMC11259759 | DOI:10.1093/abt/tbae012
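The abstract above does not include code, but the embedding-plus-classifier pattern it describes is straightforward to sketch. The minimal example below, a rough illustration rather than the authors' pipeline, mean-pools per-residue embeddings from ESM2 (one of the three named PLMs, loaded here through Hugging Face) into fixed-length features and fits a simple classifier head; the checkpoint choice, sequences, and labels are hypothetical placeholders.

```python
# Minimal sketch: mean-pooled ESM2 embeddings feeding a small classifier head.
# The BVP/BSA labels and the three-model ensemble from the paper are not
# reproduced; `sequences` and `labels` below are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, EsmModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t12_35M_UR50D")
model = EsmModel.from_pretrained("facebook/esm2_t12_35M_UR50D").eval()

def embed(sequences):
    """Mean-pool the final hidden states into one vector per antibody sequence."""
    feats = []
    with torch.no_grad():
        for seq in sequences:
            inputs = tokenizer(seq, return_tensors="pt")
            hidden = model(**inputs).last_hidden_state  # (1, L, D)
            feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return feats

sequences = ["EVQLVESGGGLVQPGGSLRLSCAAS", "QVQLQQSGAELARPGASVKMSCKAS"]  # toy VH fragments
labels = [0, 1]                                                        # toy assay outcomes
clf = LogisticRegression().fit(embed(sequences), labels)
```

In the paper, three such embedding models (ESM2, ProtT5, AntiBERTy) feed a transfer-learning network and are ensembled; the sketch shows only the single-backbone case.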
Mammogram mastery: A robust dataset for breast cancer detection and medical education
Data Brief. 2024 Jun 17;55:110633. doi: 10.1016/j.dib.2024.110633. eCollection 2024 Aug.
ABSTRACT
This data article presents a comprehensive dataset of breast cancer images collected from patients, comprising two distinct sets: one from individuals diagnosed with breast cancer and another from those without the condition. Expert physicians carefully selected, verified, and categorized the images to guarantee the dataset's quality and reliability for use in research and teaching. The dataset, which originates from Sulaymaniyah, Iraq, provides a distinctive viewpoint on the frequency and features of breast cancer in the region. With 745 original images and 9,685 augmented images, it offers a wealth of material for developing and testing deep learning algorithms for breast cancer detection, and the inclusion of augmented X-ray images increases its adaptability for algorithm development and instructional projects. This dataset holds immense potential for advancing medical research, aiding the development of innovative diagnostic tools, and fostering educational opportunities for medical students interested in breast cancer detection and diagnosis.
PMID:39035836 | PMC:PMC11259914 | DOI:10.1016/j.dib.2024.110633
Deep Learning of radiology-genomics integration for computational oncology: A mini review
Comput Struct Biotechnol J. 2024 Jun 20;23:2708-2716. doi: 10.1016/j.csbj.2024.06.019. eCollection 2024 Dec.
ABSTRACT
In the field of computational oncology, patient status is often assessed using radiology-genomics, the combination of two key technologies and data types: radiological imaging and genomics. Recent advances in deep learning have facilitated the integration of radiology-genomics data, and even of new omics data, significantly improving the robustness and accuracy of clinical predictions. These factors are driving artificial intelligence (AI) closer to practical clinical application. In particular, deep learning models, supported by explainable AI (xAI) methods, are crucial in identifying new radiology-genomics biomarkers and therapeutic targets. This review focuses on recent developments in deep learning for radiology-genomics integration, highlights current challenges, and outlines research directions for the multimodal integration and biomarker discovery of radiology-genomics or radiology-omics that are urgently needed in computational oncology.
PMID:39035833 | PMC:PMC11260400 | DOI:10.1016/j.csbj.2024.06.019
Construction of a multi-tissue compound-target interaction network of Qingfei Paidu decoction in COVID-19 treatment based on deep learning and transcriptomic analysis
J Bioinform Comput Biol. 2024 Jul 20:2450016. doi: 10.1142/S0219720024500161. Online ahead of print.
ABSTRACT
The Qingfei Paidu decoction (QFPDD) is a widely acclaimed therapeutic formula employed nationwide for the clinical management of coronavirus disease 2019 (COVID-19). QFPDD exerts a synergistic therapeutic effect characterized by multi-component, multi-target, and multi-pathway action. However, the intricate interactions among the ingredients and targets within QFPDD, and their systematic effects in multiple tissues, remain undetermined. To address this, we qualitatively characterized the chemical components of QFPDD and integrated multi-tissue transcriptomic analysis with GraphDTA, a deep learning model, to screen for potential compound-target interactions of QFPDD in multiple tissues. We predicted 13 key active compounds, 127 potential targets, and 27 pathways associated with QFPDD across six different tissues. Notably, the oleanolic acid-AXL pair exhibited the highest predicted affinity in the heart, blood, and liver, and molecular docking and molecular dynamics simulation confirmed their strong binding. This robust interaction between oleanolic acid and the AXL receptor suggests that AXL is a promising target for developing clinical intervention strategies. By constructing a multi-tissue compound-target interaction network, our study further elucidates the mechanisms through which QFPDD combats COVID-19 in multiple tissues, and establishes a framework for future investigations into the systemic effects of other Traditional Chinese Medicine (TCM) formulas in disease treatment.
PMID:39036847 | DOI:10.1142/S0219720024500161
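GraphDTA, the published model the authors applied, pairs a graph neural network over the compound with a 1-D convolutional encoder over the protein sequence. The sketch below illustrates that general pattern under assumed layer sizes; it is not the authors' configuration or the published GraphDTA code.

```python
# Hedged sketch of a GraphDTA-style affinity model: a GCN encodes the compound
# graph, a 1-D CNN encodes the protein sequence, and an MLP regresses affinity.
# All layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class DTAModel(nn.Module):
    def __init__(self, atom_dim=78, vocab=26, emb=128):
        super().__init__()
        self.g1 = GCNConv(atom_dim, emb)
        self.g2 = GCNConv(emb, emb)
        self.prot_emb = nn.Embedding(vocab, emb)      # protein residues as tokens
        self.conv = nn.Conv1d(emb, emb, kernel_size=8)
        self.head = nn.Sequential(nn.Linear(2 * emb, 512), nn.ReLU(), nn.Linear(512, 1))

    def forward(self, x, edge_index, batch, seq):
        h = torch.relu(self.g1(x, edge_index))
        h = torch.relu(self.g2(h, edge_index))
        drug = global_mean_pool(h, batch)                    # (B, emb) graph summary
        p = self.prot_emb(seq).transpose(1, 2)               # (B, emb, L)
        prot = torch.relu(self.conv(p)).max(dim=2).values    # (B, emb) sequence summary
        return self.head(torch.cat([drug, prot], dim=1))     # predicted affinity

# Toy call: a 6-atom molecule graph and a 100-residue protein sequence.
x = torch.rand(6, 78)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
batch = torch.zeros(6, dtype=torch.long)
seq = torch.randint(0, 26, (1, 100))
print(DTAModel()(x, edge_index, batch, seq))
```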
NDMNN: A novel deep residual network based MNN method to remove batch effects from scRNA-seq data
J Bioinform Comput Biol. 2024 Jul 20:2450015. doi: 10.1142/S021972002450015X. Online ahead of print.
ABSTRACT
The rapid development of single-cell RNA sequencing (scRNA-seq) technology has generated vast amounts of data. However, these data often exhibit batch effects due to factors such as different time points, experimental personnel, and instruments, which can obscure the biological differences in the data itself. Based on the characteristics of scRNA-seq data, we designed a dense deep residual network model, referred to as NDnetwork. We then combined the NDnetwork model with the MNN method to correct batch effects in scRNA-seq data, naming the combined approach the NDMNN method. Comprehensive experiments demonstrate that NDMNN outperforms existing commonly used methods for correcting batch effects in scRNA-seq data. As the scale of single-cell sequencing continues to expand, we believe that NDMNN will be a valuable tool for researchers in the biological community. The source code and experimental results of the NDMNN method can be found at https://github.com/mustang-hub/NDMNN.
PMID:39036845 | DOI:10.1142/S021972002450015X
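The MNN step that NDMNN builds on can be illustrated compactly: two cells from different batches form a pair when each is among the other's k nearest neighbours. The sketch below shows only this pairing logic on toy data; the NDnetwork residual correction itself is not reproduced.

```python
# Hedged sketch of mutual-nearest-neighbour (MNN) pairing between two batches.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mnn_pairs(A, B, k=20):
    """Return index pairs (i, j) where A[i] and B[j] are mutual nearest neighbours."""
    nn_ab = NearestNeighbors(n_neighbors=k).fit(B)   # neighbours of A-cells within B
    nn_ba = NearestNeighbors(n_neighbors=k).fit(A)   # neighbours of B-cells within A
    ab = nn_ab.kneighbors(A, return_distance=False)
    ba = nn_ba.kneighbors(B, return_distance=False)
    return [(i, j) for i in range(len(A)) for j in ab[i] if i in ba[j]]

A = np.random.rand(100, 50)   # batch A: 100 cells x 50 components (toy data)
B = np.random.rand(120, 50)   # batch B
print(len(mnn_pairs(A, B)))
```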
Towards Automatic Cartilage Quantification in Clinical Trials - Continuing from the 2019 IWOAI Knee Segmentation Challenge
Osteoarthr Imaging. 2023 Mar;3(1):100087. doi: 10.1016/j.ostima.2023.100087. Epub 2023 Feb 10.
ABSTRACT
OBJECTIVE: To evaluate whether the deep learning (DL) segmentation methods from the six teams that participated in the IWOAI 2019 Knee Cartilage Segmentation Challenge are appropriate for quantifying cartilage loss in longitudinal clinical trials.
DESIGN: We included 556 subjects from the Osteoarthritis Initiative study with manually read cartilage volume scores for the baseline and 1-year visits. The teams used their methods, originally trained for the IWOAI 2019 challenge, to segment the 1,130 knee MRIs. These scans were anonymized, and the teams were blinded to all subject and visit identifiers. Two teams also submitted updated methods. The resulting 9,040 segmentations are available online. The segmentations included the tibial, femoral, and patellar compartments. In post-processing, we extracted medial and lateral tibial compartments and geometrically defined central medial and lateral femoral sub-compartments. The primary study outcome was the sensitivity to measure cartilage loss, as defined by the standardized response mean (SRM).
RESULTS: For the tibial compartments, several of the DL segmentation methods had SRMs similar to the gold standard manual method. The highest DL SRM was for the lateral tibial compartment at 0.38 (the gold standard had 0.34). For the femoral compartments, the gold standard had higher SRMs than the automatic methods at 0.31/0.30 for medial/lateral compartments.
CONCLUSION: The lower SRMs for the DL methods in the femoral compartments (around 0.2) were possibly due to the simple sub-compartment extraction performed during post-processing. The study demonstrated that state-of-the-art DL segmentation methods may be used in standardized longitudinal single-scanner clinical trials for well-defined cartilage compartments.
PMID:39036792 | PMC:PMC11258861 | DOI:10.1016/j.ostima.2023.100087
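The standardized response mean used as the primary outcome is simply the mean longitudinal change divided by the standard deviation of that change; a minimal computation on toy cartilage volumes:

```python
# Standardized response mean (SRM): mean change / SD of change.
import numpy as np

def srm(baseline, year1):
    change = np.asarray(year1) - np.asarray(baseline)
    return change.mean() / change.std(ddof=1)

# Toy example: cartilage volumes (mm^3) at baseline and 1 year for 5 knees.
print(srm([1500, 1420, 1610, 1380, 1550], [1470, 1400, 1575, 1365, 1520]))
```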
Role of Artificial Intelligence in Endoscopic Intervention: A Clinical Review
J Community Hosp Intern Med Perspect. 2024 May 7;14(3):37-43. doi: 10.55729/2000-9666.1341. eCollection 2024.
ABSTRACT
Gastrointestinal diseases are increasing in global prevalence, and their contribution to both mortality and healthcare costs is increasing with them. While interventions utilizing endoscopic techniques or ultrasound are crucial to the timely diagnosis and management of illness, these techniques have several limitations. Artificial intelligence, in the form of computerized diagnosis, deep learning systems, and neural networks, is increasingly being employed across medicine to improve the performance and outcomes of these tools. This review therefore discusses applications of artificial intelligence in endoscopy, colonoscopy, and endoscopic ultrasound.
PMID:39036586 | PMC:PMC11259475 | DOI:10.55729/2000-9666.1341
Towards automated organs at risk and target volumes contouring: Defining precision radiation therapy in the modern era
J Natl Cancer Cent. 2022 Oct 11;2(4):306-313. doi: 10.1016/j.jncc.2022.09.003. eCollection 2022 Dec.
ABSTRACT
Precision radiotherapy is a critical and indispensable cancer treatment in the modern clinical workflow, with the goal of achieving "quality-up and cost-down" patient care. The challenge lies in developing computerized clinical-assistant solutions with precision, automation, and reproducibility built in, so that the therapy can be delivered at scale. In this work, we provide a comprehensive, though necessarily incomplete, survey of recent progress in utilizing advanced deep learning, semantic organ parsing, multimodal imaging fusion, neural architecture search, and medical image analysis techniques to address four cornerstone problems required by all precision radiotherapy workflows: organs at risk (OARs) segmentation, gross tumor volume (GTV) segmentation, metastasized lymph node (LN) detection, and clinical target volume (CTV) segmentation. Without loss of generality, we mainly use esophageal and head-and-neck cancers as examples, but the methods can be extrapolated to other cancer types. High-precision, automated, and highly reproducible OAR/GTV/LN/CTV auto-delineation techniques have demonstrated their effectiveness in reducing inter-practitioner variability and time cost, permitting rapid treatment planning and adaptive replanning for the benefit of patients. Through the presentation of the achievements and limitations of these techniques in this review, we hope to encourage more collective, multidisciplinary precision radiotherapy workflows to transpire.
PMID:39036546 | PMC:PMC11256697 | DOI:10.1016/j.jncc.2022.09.003
CSXAI: a lightweight 2D CNN-SVM model for detection and classification of various crop diseases with explainable AI visualization
Front Plant Sci. 2024 Jul 5;15:1412988. doi: 10.3389/fpls.2024.1412988. eCollection 2024.
ABSTRACT
Plant diseases significantly impact crop productivity and quality, posing a serious threat to global agriculture. Identifying and categorizing these diseases is often time-consuming and error-prone. This research addresses the issue with a convolutional neural network and support vector machine (CNN-SVM) hybrid model that classifies diseases in four economically important crops: strawberries, peaches, cherries, and soybeans. The objective is to categorize 10 classes, six diseased and four healthy, across these crops using the deep learning-based CNN-SVM model. Several pre-trained models, including VGG16, VGG19, DenseNet, Inception, MobileNetV2, MobileNet, Xception, and ShuffleNet, were also trained, achieving accuracies ranging from 53.82% to 98.8%. The proposed model achieved an average accuracy of 99.09%. While this accuracy is comparable to that of the pre-trained VGG16 model, the proposed model's significantly smaller number of trainable parameters makes it more efficient. The CNN-SVM model was selected over VGG16 and the other models because of its superior performance metrics: a 99% F1-score, a 99.98% area under the curve (AUC), and 99% precision, demonstrating its efficacy in both accuracy and efficiency for plant disease classification. Additionally, class activation maps were generated with the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to provide a visual explanation of the detected diseases; the resulting heatmaps highlight the image regions driving each classification, further supporting the model's accuracy and interpretability.
PMID:39036360 | PMC:PMC11257924 | DOI:10.3389/fpls.2024.1412988
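As a rough illustration of the CNN-SVM hybrid pattern described above (not the authors' architecture), the sketch below uses a small CNN as a feature extractor and replaces the usual softmax layer with an SVM; the layer widths, image sizes, and toy data are assumptions.

```python
# Hedged sketch of a CNN-SVM hybrid: a 2D CNN extracts features and an SVM
# classifies them in place of a softmax head. Sizes are illustrative.
import torch
import torch.nn as nn
from sklearn.svm import SVC

cnn = nn.Sequential(                       # lightweight 2D CNN feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

def features(images):                      # images: (N, 3, H, W) tensor
    with torch.no_grad():
        return cnn(images).numpy()

X_train = torch.rand(40, 3, 64, 64)                # toy stand-ins for leaf images
y_train = torch.randint(0, 10, (40,)).numpy()      # 10 disease/healthy classes
svm = SVC(kernel="rbf").fit(features(X_train), y_train)
print(svm.predict(features(torch.rand(2, 3, 64, 64))))
```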
Lightweight tomato ripeness detection algorithm based on the improved RT-DETR
Front Plant Sci. 2024 Jul 5;15:1415297. doi: 10.3389/fpls.2024.1415297. eCollection 2024.
ABSTRACT
Tomatoes, widely cherished for their high nutritional value, require precise ripeness identification and selective harvesting of mature fruits to significantly enhance the efficiency and economic benefits of harvest management. Previous studies on intelligent harvesting often focused solely on identifying tomatoes as the target, without fine-grained detection of ripeness. This deficiency leads to the inadvertent harvesting of immature and rotten fruits, resulting in economic losses. Moreover, in natural settings, uneven illumination, occlusion by leaves, and fruit overlap hinder the precise assessment of ripeness by robotic systems. The demand for high accuracy and rapid response in ripeness detection is compounded by the need for a lightweight model to mitigate hardware costs. This study proposes a lightweight model named PDSI-RTDETR to address these challenges. First, the PConv_Block module, integrating partial convolution with residual blocks, replaces the Basic_Block structure in the legacy backbone to reduce computing load and improve feature-extraction efficiency. Next, a deformable attention module is combined with the intra-scale feature interaction structure, strengthening the extraction of detailed features for fine-grained classification. Additionally, the proposed slimneck-SSFF feature fusion structure, which merges the Scale Sequence Feature Fusion framework with a slim-neck design built on GSConv and VoVGSCSP modules, reduces computational cost and inference latency. Lastly, Inner-IoU is combined with EIoU to form Inner-EIoU, replacing the original GIoU to accelerate convergence, while the auxiliary bounding boxes enhance small-object detection. Comprehensive assessments show that PDSI-RTDETR achieves an mAP50 of 86.8%, a 3.9% improvement over the original RT-DETR, along with a 38.7% increase in FPS and a 17.6% reduction in GFLOPs. Surpassing the baseline RT-DETR and other prevalent methods in both precision and speed, the model shows considerable potential for tomato ripeness detection. Applied to intelligent harvesting robots in the future, this approach can improve harvest quality by reducing the collection of immature and spoiled fruits.
PMID:39036358 | PMC:PMC11257922 | DOI:10.3389/fpls.2024.1415297
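Of the components named above, the EIoU term is a published loss that can be sketched directly: the IoU is augmented with centre-distance, width, and height penalties relative to the smallest enclosing box. The code below is an illustrative implementation of plain EIoU only; Inner-EIoU would additionally evaluate the IoU term on auxiliary boxes scaled about the box centres, which is omitted here.

```python
# Hedged sketch of the EIoU loss: 1 - IoU plus centre-distance, width, and
# height penalties against the smallest enclosing box. Boxes are (x1, y1, x2, y2).
import torch

def eiou_loss(p, g, eps=1e-7):
    inter_w = (torch.min(p[:, 2], g[:, 2]) - torch.max(p[:, 0], g[:, 0])).clamp(0)
    inter_h = (torch.min(p[:, 3], g[:, 3]) - torch.max(p[:, 1], g[:, 1])).clamp(0)
    inter = inter_w * inter_h
    area_p = (p[:, 2] - p[:, 0]) * (p[:, 3] - p[:, 1])
    area_g = (g[:, 2] - g[:, 0]) * (g[:, 3] - g[:, 1])
    iou = inter / (area_p + area_g - inter + eps)

    cw = torch.max(p[:, 2], g[:, 2]) - torch.min(p[:, 0], g[:, 0])  # enclosing box width
    ch = torch.max(p[:, 3], g[:, 3]) - torch.min(p[:, 1], g[:, 1])  # enclosing box height
    dx = (p[:, 0] + p[:, 2] - g[:, 0] - g[:, 2]) / 2                 # centre offsets
    dy = (p[:, 1] + p[:, 3] - g[:, 1] - g[:, 3]) / 2
    rho2 = dx**2 + dy**2                                             # squared centre distance
    dw2 = ((p[:, 2] - p[:, 0]) - (g[:, 2] - g[:, 0])) ** 2           # width mismatch
    dh2 = ((p[:, 3] - p[:, 1]) - (g[:, 3] - g[:, 1])) ** 2           # height mismatch
    return 1 - iou + rho2 / (cw**2 + ch**2 + eps) + dw2 / (cw**2 + eps) + dh2 / (ch**2 + eps)

print(eiou_loss(torch.tensor([[0., 0., 10., 10.]]), torch.tensor([[1., 1., 11., 12.]])))
```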
Intelligent oncology: The convergence of artificial intelligence and oncology
J Natl Cancer Cent. 2022 Dec 5;3(1):83-91. doi: 10.1016/j.jncc.2022.11.004. eCollection 2023 Mar.
ABSTRACT
With the ideas and technologies underlying potential applications of artificial intelligence (AI) in oncology being increasingly explored, we here describe a holistic and structured concept termed intelligent oncology. Intelligent oncology is defined as a cross-disciplinary specialty that integrates oncology, radiology, pathology, molecular biology, multi-omics, and computer science, aiming to promote cancer prevention, screening, early diagnosis, and precision treatment. The development of intelligent oncology has been facilitated by rapid advances in AI technologies such as natural language processing, machine/deep learning, computer vision, and robotic process automation. While the concept and applications of intelligent oncology are still in their infancy, and many hurdles and challenges remain, we are optimistic that it will play a pivotal role in the future of basic, translational, and clinical oncology.
PMID:39036310 | PMC:PMC11256531 | DOI:10.1016/j.jncc.2022.11.004
Image Quality Assessment Using Convolutional Neural Network in Clinical Skin Images
JID Innov. 2024 Apr 27;4(4):100285. doi: 10.1016/j.xjidi.2024.100285. eCollection 2024 Jul.
ABSTRACT
The image quality received for clinical evaluation is often suboptimal. Our goal was to develop an image-quality analysis tool for assessing patient- and primary care physician-derived images using a deep learning model. The dataset included patient- and primary care physician-derived images from August 21, 2018 to June 30, 2022 with 4 unique quality labels. A VGG16 model was fine-tuned on the input data, and the optimal threshold was determined by Youden's index. Because the model distinguishes between 2 categories (good vs bad), the ordinal labels were transformed into binary labels using a majority vote. At a threshold of 0.587, the area under the curve (AUC) for the test set was 0.885 (95% confidence interval = 0.838-0.933); sensitivity, specificity, positive predictive value, and negative predictive value were 0.829, 0.784, 0.906, and 0.645, respectively. Independent validation on 300 additional images from patients and from primary care physicians demonstrated AUCs of 0.864 (95% confidence interval = 0.818-0.909) and 0.902 (95% confidence interval = 0.85-0.95), respectively. The sensitivity, specificity, positive predictive value, and negative predictive value for the 300 images were 0.827, 0.800, 0.959, and 0.450, respectively. We demonstrate a practical approach to improving image quality in the clinical workflow. Although users may have to capture additional images, this is offset by the improved workload and efficiency for clinical teams.
PMID:39036289 | PMC:PMC11260318 | DOI:10.1016/j.xjidi.2024.100285
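The Youden-index thresholding mentioned above picks the ROC operating point that maximizes J = sensitivity + specificity - 1; a minimal sketch on toy scores:

```python
# Youden's index threshold selection: maximize J = TPR - FPR along the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

y_true = [0, 0, 1, 1, 1, 0, 1]                   # toy binary quality labels
y_score = [0.2, 0.4, 0.8, 0.55, 0.9, 0.35, 0.6]  # toy model probabilities
print(youden_threshold(y_true, y_score))
```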
Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours
Front Med (Lausanne). 2024 Jul 5;11:1402967. doi: 10.3389/fmed.2024.1402967. eCollection 2024.
ABSTRACT
OBJECTIVES: This study aimed to develop a deep learning radiomic model using multimodal imaging to differentiate benign and malignant breast tumours.
METHODS: Multimodality imaging data, including ultrasonography (US), mammography (MG), and magnetic resonance imaging (MRI), from 322 patients (112 with benign breast tumours and 210 with malignant breast tumours) with histopathologically confirmed breast tumours were retrospectively collected between December 2018 and May 2023. Based on multimodal imaging, the experiment was divided into three parts: traditional radiomics, deep learning radiomics, and feature fusion. We tested the performance of seven classifiers, namely, SVM, KNN, random forest, extra trees, XGBoost, LightGBM, and LR, on different feature models. Through feature fusion using ensemble and stacking strategies, we obtained the optimal classification model for benign and malignant breast tumours.
RESULTS: For traditional radiomics, the ensemble fusion strategy achieved the highest accuracy, AUC, and specificity, with values of 0.892, 0.942 [0.886-0.996], and 0.956 [0.873-1.000], respectively. The early fusion strategy with US, MG, and MRI achieved the highest sensitivity of 0.952 [0.887-1.000]. For deep learning radiomics, the stacking fusion strategy achieved the highest accuracy, AUC, and sensitivity, with values of 0.937, 0.947 [0.887-1.000], and 1.000 [0.999-1.000], respectively. The early fusion strategies of US+MRI and US+MG achieved the highest specificity of 0.954 [0.867-1.000]. For feature fusion, the ensemble and stacking approaches of the late fusion strategy achieved the highest accuracy of 0.968. In addition, stacking achieved the highest AUC and specificity, 0.997 [0.990-1.000] and 1.000 [0.999-1.000], respectively. The combined traditional radiomic and deep features of US+MG+MRI achieved the highest sensitivity of 1.000 [0.999-1.000] under the early fusion strategy.
CONCLUSION: This study demonstrated the potential of integrating deep learning and radiomic features from multimodal images. As a single modality, MRI with radiomic features achieved greater accuracy than US or MG. The US and MG models achieved higher accuracy with transfer learning than the single-modality or radiomic models. The combined traditional radiomic and deep features of US+MG+MRI achieved the highest sensitivity under the early fusion strategy, showed higher diagnostic performance, and provided more valuable information for differentiating benign from malignant breast tumours.
PMID:39036101 | PMC:PMC11257849 | DOI:10.3389/fmed.2024.1402967
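The stacking fusion strategy evaluated above maps naturally onto scikit-learn's StackingClassifier; the sketch below combines a subset of the named base classifiers on stand-in feature data and is illustrative only, not the study's tuned pipeline.

```python
# Hedged sketch of a stacking fusion over classifiers named in the abstract
# (subset shown); the toy matrix stands in for extracted radiomic/deep features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier()),
    ],
    final_estimator=LogisticRegression(),  # meta-learner on base predictions
)
X = np.random.rand(60, 30)                 # toy radiomic feature matrix
y = np.random.randint(0, 2, 60)            # benign (0) vs malignant (1)
print(stack.fit(X, y).predict(X[:3]))
```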
High Resolution TOF-MRA Using Compressed Sensing-based Deep Learning Image Reconstruction for the Visualization of Lenticulostriate Arteries: A Preliminary Study
Magn Reson Med Sci. 2024 Jul 20. doi: 10.2463/mrms.mp.2024-0025. Online ahead of print.
ABSTRACT
PURPOSE: To investigate the visibility of the lenticulostriate arteries (LSAs) in time-of-flight (TOF)-MR angiography (MRA) using compressed sensing (CS)-based deep learning (DL) image reconstruction by comparing its image quality with that obtained by the conventional CS algorithm.
METHODS: Five healthy volunteers were included. High-resolution TOF-MRA images with a reduction (R) factor of 1 were acquired as full-sampling data. Images with R-factors of 2, 4, and 6 were then reconstructed using CS-DL and conventional CS (the combination of CS and sensitivity encoding; CS-SENSE) reconstruction, respectively. In the quantitative assessment, the number of visible LSAs (identified by two radiologists), the length of each depicted LSA (evaluated by one radiological technologist), and the normalized mean squared error (NMSE) were assessed. In the qualitative assessment, the overall image quality and the visibility of the peripheral LSAs were visually evaluated by two radiologists.
RESULTS: In the quantitative assessment of the CS-DL images, the number of visible LSAs was significantly higher than with CS-SENSE at R-factors of 4 and 6 (Reader 1) and at an R-factor of 6 (Reader 2). The length of the depicted LSAs in the CS-DL images was significantly longer at an R-factor of 6 compared with CS-SENSE. The NMSE value for CS-DL was significantly lower than for CS-SENSE at R-factors of 4 and 6. In the qualitative assessment of the CS-DL images, the overall image quality was significantly higher than with CS-SENSE at R-factors of 4 and 6 (Reader 1) and at an R-factor of 4 (Reader 2). The visibility of the peripheral LSAs was significantly higher than with CS-SENSE at all R-factors (Reader 1) and at R-factors of 2 and 4 (Reader 2).
CONCLUSION: CS-DL reconstruction preserved image quality for the depiction of LSAs compared with conventional CS-SENSE at elevated R-factors.
PMID:39034144 | DOI:10.2463/mrms.mp.2024-0025
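The NMSE used in the quantitative assessment is the squared reconstruction error normalized by the energy of the fully sampled reference; a minimal computation:

```python
# Normalized mean squared error between a reconstruction and its reference.
import numpy as np

def nmse(recon, reference):
    recon, reference = np.asarray(recon), np.asarray(reference)
    return np.sum(np.abs(recon - reference) ** 2) / np.sum(np.abs(reference) ** 2)

reference = np.random.rand(64, 64)                 # toy fully sampled (R = 1) image
recon = reference + 0.05 * np.random.randn(64, 64)  # toy undersampled reconstruction
print(nmse(recon, reference))
```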
Windy events detection in big bioacoustics datasets using a pre-trained Convolutional Neural Network
Sci Total Environ. 2024 Jul 19:174868. doi: 10.1016/j.scitotenv.2024.174868. Online ahead of print.
ABSTRACT
Passive Acoustic Monitoring (PAM), which uses autonomous recording units to study wildlife behaviour and distribution, often requires handling big acoustic datasets collected over extended periods. While these data offer invaluable insights about wildlife, their analysis can be complicated by geophonic sources. A major issue in detecting target sounds is wind-induced noise, which can lead to false positives, i.e., energy peaks from wind gusts misclassified as biological sounds, or false negatives, i.e., wind noise masking the presence of biological sounds. Acoustic data dominated by wind noise makes the analysis of vocal activity unreliable, compromising the detection of target sounds and, subsequently, the interpretation of the results. Our work introduces a straightforward approach for detecting recordings affected by windy events using a pre-trained convolutional neural network, facilitating the identification of wind-compromised data. We consider this dataset pre-processing crucial for the reliable use of PAM data. We implemented the preprocessing with YAMNet, a deep learning model for sound classification. We evaluated the ability of YAMNet as-is to detect wind-induced noise and tested its performance in a transfer learning scenario using our annotated data from the Stony Point Penguin Colony in South Africa. While YAMNet as-is achieved a precision of 0.71 and a recall of 0.66, both metrics improved strongly after training on our annotated dataset, reaching a precision of 0.91 and a recall of 0.92, a relative improvement of >28%. Our study demonstrates the promising application of YAMNet in bioacoustics and ecoacoustics, addressing the need for wind-noise-free acoustic data. We released open-access code that, combined with the efficiency and peak performance of YAMNet, can be used on standard laptops by a broad user base.
PMID:39034006 | DOI:10.1016/j.scitotenv.2024.174868
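YAMNet's as-is interface is small enough to sketch: the TensorFlow Hub model takes a mono 16 kHz float32 waveform and returns per-frame class scores, 1024-dimensional embeddings, and a log-mel spectrogram. A transfer-learned wind classifier, as in the study, would train a head on those embeddings; the waveform below is a toy stand-in, not PAM data.

```python
# Minimal sketch of running YAMNet as-is on a waveform.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")
waveform = tf.constant(np.random.randn(16000 * 5), dtype=tf.float32)  # toy 5-s clip @ 16 kHz
scores, embeddings, spectrogram = yamnet(waveform)
print(embeddings.shape)  # (num_frames, 1024): features for a downstream wind classifier
```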
Application and performance enhancement of FAIMS spectral data for deep learning analysis using generative adversarial network reinforcement
Anal Biochem. 2024 Jul 19:115627. doi: 10.1016/j.ab.2024.115627. Online ahead of print.
ABSTRACT
When using high-field asymmetric waveform ion mobility spectrometry (FAIMS) to process complex mixtures for deep learning analysis, recognition performance suffers from the lack of high-quality data and low sample diversity. In this paper, a generative adversarial network (GAN) method is introduced to simulate and generate highly realistic and diverse spectra, expanding a dataset of real mixture spectra from 15 classes collected by FAIMS. The mixed datasets were tested with VGG and ResNeXt, respectively, and the experiments showed that the best recognition was achieved at a 1:4 ratio of real to generated data: accuracy improved by 24.19% and 6.43%; precision improved by 23.71% and 6.97%; recall improved by 21.08% and 7.09%; and F1-score improved by 24.50% and 8.23%. These results strongly demonstrate that GANs can effectively expand data volume and increase sample diversity without additional experimental cost, significantly enhancing deep learning analysis of FAIMS spectra for complex mixtures.
PMID:39033946 | DOI:10.1016/j.ab.2024.115627
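As a rough illustration of the augmentation scheme described above (not the paper's network), the sketch below trains a minimal GAN on stand-in 1-D spectra: the generator maps noise to synthetic spectra, the discriminator scores real versus generated, and the generator's output would then be mixed with real data at the reported 1:4 ratio. The spectrum length and latent size are assumptions.

```python
# Hedged sketch of GAN-based augmentation for 1-D spectra.
import torch
import torch.nn as nn

SPEC_LEN, NOISE = 512, 100   # assumed spectrum length and latent size

G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(),
                  nn.Linear(256, SPEC_LEN), nn.Tanh())
D = nn.Sequential(nn.Linear(SPEC_LEN, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, SPEC_LEN)            # toy stand-in for real FAIMS spectra
for _ in range(100):
    fake = G(torch.randn(32, NOISE))
    # Discriminator step: push real toward 1, generated toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()
    # Generator step: make generated spectra score as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

augmented = G(torch.randn(128, NOISE)).detach()  # synthetic spectra to mix with real data
```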
Differentiating loss of consciousness causes through artificial intelligence-enabled decoding of functional connectivity
Neuroimage. 2024 Jul 19:120749. doi: 10.1016/j.neuroimage.2024.120749. Online ahead of print.
ABSTRACT
Differential diagnosis of acute loss of consciousness (LOC) is crucial due to the need for different therapeutic strategies despite similar clinical presentations among etiologies such as nonconvulsive status epilepticus, metabolic encephalopathy, and benzodiazepine intoxication. While altered functional connectivity (FC) plays a pivotal role in the pathophysiology of LOC, there has been a lack of efforts to develop differential diagnosis artificial intelligence (AI) models that feature the distinctive FC change patterns specific to each LOC cause. Three approaches were applied for extracting features for the AI models: three-dimensional FC adjacency matrices, vectorized FC values, and graph theoretical measurements. Deep learning using convolutional neural networks (CNN) and various machine learning algorithms were implemented to compare classification accuracy using electroencephalography (EEG) data with different epoch sizes. The CNN model using FC adjacency matrices achieved the highest accuracy with an AUC of 0.905, with 20-s epoch data being optimal for classifying the different LOC causes. The high accuracy of the CNN model was maintained in a prospective cohort. Key distinguishing features among the LOC causes were found in the delta and theta brain wave bands. This research advances the understanding of LOC's underlying mechanisms and shows promise for enhancing diagnosis and treatment selection. Moreover, the AI models can provide accurate LOC differentiation with a relatively small amount of EEG data in 20-s epochs, which may be clinically useful.
PMID:39033787 | DOI:10.1016/j.neuroimage.2024.120749
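The FC-adjacency-matrix features that drove the best model above can be sketched simply: Pearson correlations between all channel pairs over one epoch yield a channels-by-channels matrix treated as an image by the CNN. The montage size and sampling rate below are assumptions.

```python
# Hedged sketch of building a functional-connectivity adjacency matrix from one
# EEG epoch; channel count and sampling rate are assumed, not from the study.
import numpy as np

fs, channels, epoch_s = 200, 19, 20            # assumed rate, montage, 20-s epoch
eeg = np.random.randn(channels, fs * epoch_s)  # toy epoch: channels x samples
fc = np.corrcoef(eeg)                          # (19, 19) channel-pair correlations
print(fc.shape)                                # fed to the CNN as a one-channel image
```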
Improving diagnostic confidence in low-dose dual-energy CTE with low energy level and deep learning reconstruction
Eur J Radiol. 2024 Jul 10;178:111607. doi: 10.1016/j.ejrad.2024.111607. Online ahead of print.
ABSTRACT
OBJECTIVE: To demonstrate the value of using 50 keV virtual monochromatic images with deep learning image reconstruction (DLIR) in low-dose dual-energy CT enterography (CTE).
METHODS: In this prospective study, 114 participants (62% male; 41.9 ± 16 years) underwent dual-energy CTE. The early-enteric phase was performed at standard dose (noise index (NI): 8), and images were reconstructed at 70 keV and 50 keV with 40% strength ASIR-V (ASIR-V40%). The late-enteric phase used a low dose (NI: 12), and images were reconstructed at 50 keV with ASIR-V40% and with DLIR at medium (DLIR-M) and high strength (DLIR-H). Image standard deviation (SD), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and edge-rise slope (ERS) were computed. The quantitative comb-sign score was calculated for the 27 patients with Crohn's disease. The subjective noise, image contrast, and display of the rectus artery were scored blindly by two radiologists using a 5-point scale.
RESULTS: The effective dose in the late-enteric phase was reduced by 50% to 3.26 mSv (P < 0.001). The lower-dose 50 keV DLIR-H images (SD: 17.7 ± 0.5 HU) had image noise similar (P = 0.97) to the standard-dose 70 keV ASIR-V40% images (SD: 17.7 ± 0.73 HU), but higher (P < 0.001) SNR, CNR, ERS, and quantitative comb-sign score (5.7 ± 0.17, 1.8 ± 0.12, 156.04 ± 5.21, and 5.05 ± 0.73, respectively). Furthermore, the lower-dose 50 keV DLIR-H images obtained the highest score for rectus artery visibility (4.27 ± 0.6).
CONCLUSIONS: The 50 keV images in dual-energy CTE with DLIR provide high-quality images with a 50% reduction in radiation dose. Their high contrast and density resolution significantly enhance diagnostic confidence in Crohn's disease and support the clinical development of individualized treatment plans.
PMID:39033690 | DOI:10.1016/j.ejrad.2024.111607
Improved sleep stage predictions by deep learning of photoplethysmogram and respiration patterns
Comput Biol Med. 2024 Jul 20;179:108679. doi: 10.1016/j.compbiomed.2024.108679. Online ahead of print.
ABSTRACT
Sleep staging is a crucial tool for diagnosing and monitoring sleep disorders, but the standard clinical approach using polysomnography (PSG) in a sleep lab is time-consuming, expensive, uncomfortable, and limited to a single night. Advancements in sensor technology have enabled home sleep monitoring, but existing devices still lack sufficient accuracy to inform clinical decisions. To address this challenge, we propose a deep learning architecture that combines a convolutional neural network and bidirectional long short-term memory to accurately classify sleep stages. By supplementing photoplethysmography (PPG) signals with respiratory sensor inputs, we demonstrated significant improvements in prediction accuracy and Cohen's kappa (k) for 2- (92.7%; k = 0.768), 3- (80.2%; k = 0.714), 4- (76.8%; k = 0.550), and 5-stage (76.7%; k = 0.616) sleep classification using raw data. This readily translatable approach, using a less computationally intensive AI model and only a few inexpensive sensors, shows promise for accurately staging sleep, with potential for diagnosing and managing sleep disorders in a more accessible and practical manner, possibly even at home.
PMID:39033682 | DOI:10.1016/j.compbiomed.2024.108679
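A rough sketch of the CNN-plus-BiLSTM pattern the abstract describes (sizes and layers are illustrative assumptions, not the authors' model): a 1-D CNN summarizes each epoch of raw PPG and respiration, and a bidirectional LSTM models stage transitions across epochs.

```python
# Hedged sketch of a CNN + BiLSTM sleep stager over raw PPG/respiration epochs.
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    def __init__(self, in_ch=2, n_stages=5, feat=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-epoch feature extractor
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, feat, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(feat, feat, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * feat, n_stages)

    def forward(self, x):                 # x: (batch, epochs, channels, samples)
        b, e, c, t = x.shape
        f = self.cnn(x.view(b * e, c, t)).view(b, e, -1)
        out, _ = self.lstm(f)             # context across the night
        return self.head(out)             # per-epoch stage logits

logits = SleepStager()(torch.randn(2, 10, 2, 3000))  # 10 toy epochs of PPG + resp
print(logits.shape)                                   # (2, 10, 5)
```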
Automatic segmentation of intraluminal thrombosis of abdominal aortic aneurysms from CT angiography using a mixed-scale-driven multiview perception network (M2Net) model
Comput Biol Med. 2024 Jul 19;179:108838. doi: 10.1016/j.compbiomed.2024.108838. Online ahead of print.
ABSTRACT
Intraluminal thrombosis (ILT) plays a critical role in the progression of abdominal aortic aneurysms (AAAs). Understanding the role of ILT can improve the evaluation and management of AAAs. However, compared with the highly developed automatic methods for vessel-lumen segmentation, ILT segmentation remains challenging. Angiographic contrast agents enhance the vessel lumen but cannot improve boundary delineation of the ILT regions; the lack of intrinsic contrast in the ILT structure significantly limits accurate segmentation. Additionally, ILT is not evenly distributed within AAAs; its sparse, scattered distribution in the imaging data poses challenges to the learning process of neural networks. We therefore propose a multiview fusion approach, the Mixed-scale-driven Multiview Perception Network (M2Net), that obtains high-quality ILT delineation from computed tomography angiography (CTA) data in two major steps. Following image preprocessing, the 2D mixed-scale ZoomNet segments ILT from each orthogonal view (axial, sagittal, and coronal) to enhance the prior information. The proposed context-aware volume integration network (CVIN) then effectively fuses the multiview results. We evaluated M2Net using contrast-enhanced CTA data from human subjects with AAAs. A quantitative analysis shows that it achieved superior performance (e.g., a Dice score of 0.88 with a sensitivity of 0.92) compared with other state-of-the-art deep learning models. In closing, the proposed M2Net model provides high-quality, automated delineation of ILT and has the potential to be translated into the clinical workflow.
PMID:39033681 | DOI:10.1016/j.compbiomed.2024.108838
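The Dice score reported for M2Net measures overlap between predicted and ground-truth ILT masks; a minimal computation on toy masks:

```python
# Dice coefficient between a predicted binary mask and a ground-truth mask.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    return 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True   # toy ILT masks
pred = np.zeros((64, 64), bool); pred[22:42, 20:40] = True
print(dice(pred, truth))
```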