Deep learning
Artificial Intelligence in Lung Cancer Imaging: From Data to Therapy
Crit Rev Oncog. 2024;29(2):1-13. doi: 10.1615/CritRevOncog.2023050439.
ABSTRACT
Lung cancer remains a global health challenge, leading to substantial morbidity and mortality. While prevention and early detection strategies have improved, the need for precise diagnosis, prognosis, and treatment remains crucial. In this comprehensive review article, we explore the role of artificial intelligence (AI) in reshaping the management of lung cancer. AI has a range of potential applications in lung cancer characterization and outcome prediction. Manual segmentation is a time-consuming task with high inter-observer variability; it can be replaced by AI-based approaches, including deep learning models such as U-Net, BCDU-Net, and others, to quantify lung nodules and cancers objectively and to extract radiomics features for tissue characterization. AI models have also demonstrated their ability to predict treatment responses, such as immunotherapy and targeted therapy, by integrating radiomic features with clinical data. Additionally, AI-based prognostic models have been developed to identify patients at higher risk and personalize treatment strategies. In conclusion, this review article provides a comprehensive overview of the current state of AI applications in lung cancer management, spanning from segmentation and virtual biopsy to outcome prediction. The evolving role of AI in improving the precision and effectiveness of lung cancer diagnosis and treatment underscores its potential to significantly impact clinical practice and patient outcomes.
PMID:38505877 | DOI:10.1615/CritRevOncog.2023050439
Deep learning in spatial transcriptomics: Learning from the next next-generation sequencing
Biophys Rev (Melville). 2023 Feb 7;4(1):011306. doi: 10.1063/5.0091135. eCollection 2023 Mar.
ABSTRACT
Spatial transcriptomics (ST) technologies are rapidly becoming the extension of single-cell RNA sequencing (scRNAseq), holding the potential of profiling gene expression at a single-cell resolution while maintaining cellular compositions within a tissue. Having both expression profiles and tissue organization enables researchers to better understand cellular interactions and heterogeneity, providing insight into complex biological processes that would not be possible with traditional sequencing technologies. Data generated by ST technologies are inherently noisy, high-dimensional, sparse, and multi-modal (including histological images, count matrices, etc.), thus requiring specialized computational tools for accurate and robust analysis. However, many ST studies currently utilize traditional scRNAseq tools, which are inadequate for analyzing complex ST datasets. On the other hand, many of the existing ST-specific methods are built upon traditional statistical or machine learning frameworks, which have been shown to be sub-optimal in many applications due to the scale, multi-modality, and limitations of spatially resolved data (such as spatial resolution, sensitivity, and gene coverage). Given these intricacies, researchers have developed deep learning (DL)-based models to alleviate ST-specific challenges. These methods include new state-of-the-art models in alignment, spatial reconstruction, and spatial clustering, among others. However, DL models for ST analysis are nascent and remain largely underexplored. In this review, we provide an overview of existing state-of-the-art tools for analyzing spatially resolved transcriptomics while delving deeper into the DL-based approaches. We discuss the new frontiers and the open questions in this field and highlight domains in which we anticipate transformational DL applications.
PMID:38505815 | PMC:PMC10903438 | DOI:10.1063/5.0091135
DCENet-based low-light image enhancement improved by spiking encoding and convLSTM
Front Neurosci. 2024 Mar 5;18:1297671. doi: 10.3389/fnins.2024.1297671. eCollection 2024.
ABSTRACT
The direct utilization of low-light images hinders downstream visual tasks. Traditional low-light image enhancement (LLIE) methods, such as Retinex-based networks, require paired images. A spiking-coding methodology called intensity-to-latency has been used to gradually acquire the structural characteristics of an image, and convLSTM has been used to connect the features. This study introduces a simplified DCENet to achieve unsupervised LLIE, adopts the spiking coding mode of a spiking neural network, and applies the comprehensive coding features of convLSTM to improve the subjective and objective quality of LLIE. In the ablation experiment for the proposed structure, the convLSTM structure was replaced by a convolutional neural network, and the classical CBAM attention was introduced for comparison. The proposed method was compared with nine LLIE methods of strong overall performance on five objective evaluation metrics, with PSNR, SSIM, MSE, UQI, and VIFP exceeding the second-best method by 4.4% (0.8%), 3.9% (17.2%), 0% (15%), 0.1% (0.2%), and 4.3% (0.9%) on the LOL (SCIE) datasets. Further user-study experiments on five no-reference datasets were conducted to subjectively evaluate the enhanced images. These experiments verified the remarkable performance of the proposed method.
PMID:38505773 | PMC:PMC10948416 | DOI:10.3389/fnins.2024.1297671
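The abstract above reports gains in PSNR and MSE, among other metrics. The paper's own evaluation code is not included here; as a generic illustration of these two full-reference metrics, a minimal pure-Python sketch over flattened pixel sequences:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in decibels; higher means closer to the reference."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)
```

On 8-bit images, PSNR values above roughly 30 dB are usually read as good fidelity; SSIM, UQI, and VIFP additionally model structural and perceptual similarity and need windowed statistics rather than a single global error.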
Extracting quantitative biological information from bright-field cell images using deep learning
Biophys Rev (Melville). 2021 Jul 20;2(3):031401. doi: 10.1063/5.0044782. eCollection 2021 Sep.
ABSTRACT
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time consuming, labor intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this is a robust and fast-converging approach to generate virtually stained images from the bright-field images and, in subsequent downstream analyses, to quantify the properties of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually stained images to extract quantitative measures about these cell structures. Generating virtually stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell. To make this deep-learning-powered approach readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific virtual-staining and cell-profiling applications.
PMID:38505631 | PMC:PMC10903417 | DOI:10.1063/5.0044782
An efficient decision support system for leukemia identification utilizing nature-inspired deep feature optimization
Front Oncol. 2024 Mar 5;14:1328200. doi: 10.3389/fonc.2024.1328200. eCollection 2024.
ABSTRACT
In the field of medicine, decision support systems play a crucial role by harnessing cutting-edge technology and data analysis to assist doctors in disease diagnosis and treatment. Leukemia is a malignancy that emerges from the uncontrolled growth of immature white blood cells within the human body. An accurate and prompt diagnosis of leukemia is desired due to its swift progression to distant parts of the body. Acute lymphoblastic leukemia (ALL) is an aggressive type of leukemia that affects both children and adults. Computer vision-based identification of leukemia is challenging due to structural irregularities and morphological similarities of blood entities. Deep neural networks have shown promise in extracting valuable information from image datasets, but they have high computational costs due to their extensive feature sets. This work presents an efficient pipeline for binary and subtype classification of acute lymphoblastic leukemia. The proposed method first unveils a novel neighborhood pixel transformation method using differential evolution to improve the clarity and discriminability of blood cell images for better analysis. Next, a hybrid feature extraction approach is presented leveraging transfer learning from selected deep neural network models, InceptionV3 and DenseNet201, to extract comprehensive feature sets. To optimize feature selection, a customized binary Grey Wolf Algorithm is utilized, achieving an impressive 80% reduction in feature size while preserving key discriminative information. These optimized features subsequently empower multiple classifiers, potentially capturing diverse perspectives and amplifying classification accuracy. The proposed pipeline is validated on publicly available standard datasets of ALL images. For binary classification, the best average accuracy of 98.1% is achieved with 98.1% sensitivity and 98% precision. 
For ALL subtype classification, the best accuracy of 98.14% was attained with 78.5% sensitivity and 98% precision. The proposed feature selection method shows better convergence behavior than classical population-based meta-heuristics. The suggested solution also demonstrates comparable or better performance than several existing techniques.
PMID:38505591 | PMC:PMC10949894 | DOI:10.3389/fonc.2024.1328200
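The paper's customized binary Grey Wolf Algorithm is not published in this abstract; as a hedged sketch of the general technique it adapts, a generic binary Grey Wolf optimizer for feature-subset selection (the transfer function, coefficients, and best-ever tracking below are common illustrative choices, not the authors'):

```python
import math
import random

def binary_gwo(fitness, n_bits, n_wolves=20, n_iters=40, seed=0):
    """Generic binary Grey Wolf Optimizer for feature-subset selection.

    Wolves are 0/1 masks over features. Each iteration pulls every wolf
    towards the three fittest wolves (alpha, beta, delta); the continuous
    update is squashed through a sigmoid and sampled back to bits.
    """
    rng = random.Random(seed)
    wolves = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_wolves)]
    best = max(wolves, key=fitness)  # best-ever mask seen so far
    for it in range(n_iters):
        ranked = sorted(wolves, key=fitness, reverse=True)
        if fitness(ranked[0]) > fitness(best):
            best = list(ranked[0])
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        a = 2.0 * (1.0 - it / n_iters)  # exploration coefficient decays to 0
        nxt = []
        for w in wolves:
            bits = []
            for j in range(n_bits):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    x += leader[j] - A * abs(C * leader[j] - w[j])
                x /= 3.0  # average pull of the three leaders
                prob = 1.0 / (1.0 + math.exp(-10.0 * (x - 0.5)))  # sigmoid transfer
                bits.append(1 if rng.random() < prob else 0)
            nxt.append(bits)
        wolves = nxt
    final = max(wolves, key=fitness)
    return best if fitness(best) >= fitness(final) else final
```

In a real pipeline the fitness function would score a classifier trained on the selected deep features, typically with a penalty term on subset size to drive the reported ~80% feature reduction.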
Deep learning from atrioventricular plane displacement in patients with Takotsubo syndrome: lighting up the black-box
Eur Heart J Digit Health. 2023 Dec 6;5(2):134-143. doi: 10.1093/ehjdh/ztad077. eCollection 2024 Mar.
ABSTRACT
AIMS: The spatiotemporal deep convolutional neural network (DCNN) helps reduce echocardiographic readers' erroneous 'judgement calls' on Takotsubo syndrome (TTS). The aim of this study was to improve the interpretability of the spatiotemporal DCNN to discover latent imaging features associated with causative TTS pathophysiology.
METHODS AND RESULTS: We applied gradient-weighted class activation mapping analysis to visualize an established spatiotemporal DCNN based on the echocardiographic videos to differentiate TTS (150 patients) from anterior wall ST-segment elevation myocardial infarction (STEMI, 150 patients). Forty-eight human expert readers interpreted the same echocardiographic videos and prioritized the regions of interest on myocardium for the differentiation. Based on visualization results, we completed optical flow measurement, myocardial strain, and Doppler/tissue Doppler echocardiography studies to investigate regional myocardial temporal dynamics and diastology. While human readers' visualization predominantly focused on the apex of the heart in TTS patients, the DCNN temporal arm's saliency visualization was attentive on the base of the heart, particularly at the atrioventricular (AV) plane. Compared with STEMI patients, TTS patients consistently showed weaker peak longitudinal displacement (in pixels) in the basal inferoseptal (systolic: 2.15 ± 1.41 vs. 3.10 ± 1.66, P < 0.001; diastolic: 2.36 ± 1.71 vs. 2.97 ± 1.69, P = 0.004) and basal anterolateral (systolic: 2.70 ± 1.96 vs. 3.44 ± 2.13, P = 0.003; diastolic: 2.73 ± 1.70 vs. 3.45 ± 2.20, P = 0.002) segments, and worse longitudinal myocardial strain in the basal inferoseptal (-8.5 ± 3.8% vs. -9.9 ± 4.1%, P = 0.013) and basal anterolateral (-8.6 ± 4.2% vs. -10.4 ± 4.1%, P = 0.006) segments. Meanwhile, TTS patients showed worse diastolic mechanics than STEMI patients (E'/septal: 5.1 ± 1.2 cm/s vs. 6.3 ± 1.5 cm/s, P < 0.001; S'/septal: 5.8 ± 1.3 cm/s vs. 6.8 ± 1.4 cm/s, P < 0.001; E'/lateral: 6.0 ± 1.4 cm/s vs. 7.9 ± 1.6 cm/s, P < 0.001; S'/lateral: 6.3 ± 1.4 cm/s vs. 7.3 ± 1.5 cm/s, P < 0.001; E/E': 15.5 ± 5.6 vs. 12.5 ± 3.5, P < 0.001).
CONCLUSION: The spatiotemporal DCNN saliency visualization helps identify the pattern of myocardial temporal dynamics and navigates the quantification of regional myocardial mechanics. Reduced AV plane displacement in TTS patients likely correlates with impaired diastolic mechanics.
PMID:38505490 | PMC:PMC10944681 | DOI:10.1093/ehjdh/ztad077
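Gradient-weighted class activation mapping, as used in the study above, weights each feature-map channel by the global average of its gradients and keeps only positive evidence. A minimal framework-free sketch on precomputed toy tensors (a real implementation would hook into the DCNN's backward pass to obtain the gradients):

```python
def grad_cam(feature_maps, gradients):
    """Gradient-weighted Class Activation Mapping on precomputed tensors.

    feature_maps, gradients: lists of K maps, each an H x W list of lists.
    Returns an H x W heatmap: ReLU of the alpha_k-weighted sum of the maps,
    where alpha_k is the global average of the k-th gradient map.
    """
    K = len(feature_maps)
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: global-average-pooled gradient for channel k
    alphas = [sum(sum(row) for row in g) / (H * W) for g in gradients]
    cam = [[0.0] * W for _ in range(H)]
    for k in range(K):
        for i in range(H):
            for j in range(W):
                cam[i][j] += alphas[k] * feature_maps[k][i][j]
    # ReLU: keep only regions with positive evidence for the class
    return [[max(0.0, v) for v in row] for row in cam]
```

Upsampling the resulting heatmap to the input video frame is what produces the saliency overlays that, in this study, highlighted the atrioventricular plane rather than the apex.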
Machine learning-based gait analysis to predict clinical frailty scale in elderly patients with heart failure
Eur Heart J Digit Health. 2023 Dec 20;5(2):152-162. doi: 10.1093/ehjdh/ztad082. eCollection 2024 Mar.
ABSTRACT
AIMS: Although frailty assessment is recommended for guiding treatment strategies and outcome prediction in elderly patients with heart failure (HF), most frailty scales are subjective, and the scores vary among raters. We sought to develop a machine learning-based model for automatic rating of the clinical frailty scale (CFS) in patients with HF.
METHODS AND RESULTS: We prospectively examined 417 elderly (≥75 years) patients with symptomatic chronic HF from 7 centres between January 2019 and October 2023. The patients were divided into derivation (n = 194) and validation (n = 223) cohorts. We obtained body-tracking motion data using a deep learning-based pose estimation library on a smartphone camera. Predicted CFS was calculated from 128 key features, including gait parameters, using the light gradient boosting machine (LightGBM) model. To evaluate the performance of this model, we calculated Cohen's weighted kappa (CWK) and intraclass correlation coefficient (ICC) between the predicted and actual CFSs. In the derivation and validation datasets, the LightGBM models showed excellent agreement between the actual and predicted CFSs [CWK 0.866, 95% confidence interval (CI) 0.807-0.911; ICC 0.866, 95% CI 0.827-0.898; CWK 0.812, 95% CI 0.752-0.868; ICC 0.813, 95% CI 0.761-0.854, respectively]. During a median follow-up period of 391 (inter-quartile range 273-617) days, a higher predicted CFS was independently associated with a higher risk of all-cause death (hazard ratio 1.60, 95% CI 1.02-2.50) after adjusting for significant prognostic covariates.
CONCLUSION: Machine learning-based algorithms for automatic CFS rating are feasible, and the predicted CFS is associated with the risk of all-cause death in elderly patients with HF.
PMID:38505484 | PMC:PMC10944685 | DOI:10.1093/ehjdh/ztad082
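Cohen's weighted kappa (CWK), the agreement statistic reported above, is one minus the ratio of weighted observed to weighted expected disagreement between two raters (here, the actual and predicted CFS). A sketch using the common quadratic weighting; the abstract does not state which weighting scheme the authors used:

```python
def quadratic_weighted_kappa(y1, y2, n_classes):
    """Quadratic-weighted Cohen's kappa between two integer rating lists.

    Disagreements are penalised by (i - j)^2, so being off by two CFS
    grades costs four times as much as being off by one.
    """
    n = len(y1)
    O = [[0.0] * n_classes for _ in range(n_classes)]  # observed confusion matrix
    for a, b in zip(y1, y2):
        O[a][b] += 1
    row = [sum(O[i]) for i in range(n_classes)]
    col = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * O[i][j]                 # weighted observed disagreement
            den += w * row[i] * col[j] / n     # weighted chance disagreement
    return 1.0 - num / den  # undefined (den == 0) if all ratings share one class
```

A value of 1 means perfect agreement, 0 means chance-level agreement, and values near 0.87/0.81 as reported above are conventionally read as excellent agreement.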
Non-invasive prediction of overall survival time for glioblastoma multiforme patients based on multimodal MRI radiomics
Int J Imaging Syst Technol. 2023 Jul;33(4):1261-1274. doi: 10.1002/ima.22869. Epub 2023 Mar 10.
ABSTRACT
Glioblastoma multiforme (GBM) is the most common and deadly primary malignant brain tumor. As GBM tumors are aggressive and show high biological heterogeneity, the overall survival (OS) time is extremely short even with the most aggressive treatment. If the OS time could be predicted before surgery, personalized treatment plans could be developed for GBM patients. Magnetic resonance imaging (MRI) is a commonly used diagnostic tool for brain tumors, offering high resolution and good image quality. However, in clinical practice, doctors mainly rely on manually segmenting the tumor regions in MRI and predicting the OS time of GBM patients, which is time-consuming, subjective, and repetitive, limiting the effectiveness of clinical diagnosis and treatment. Therefore, it is crucial to segment the brain tumor regions in MRI, and an accurate pre-operative prediction of OS time for personalized treatment is highly desired. In this study, we present a multimodal MRI radiomics-based automatic framework for non-invasive prediction of the OS time of GBM patients. A modified 3D-UNet model is built to segment tumor subregions in MRI of GBM patients; then, the radiomic features of the tumor subregions are extracted, combined with clinical features, and input into a Support Vector Regression (SVR) model to predict the OS time. In the experiments, the BraTS2020, BraTS2019, and BraTS2018 datasets are used to evaluate our framework. Our model achieves competitive OS time prediction accuracy compared with typical existing approaches.
PMID:38505467 | PMC:PMC10946632 | DOI:10.1002/ima.22869
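The radiomics step above turns each segmented tumor subregion into quantitative descriptors. As a toy illustration of the simplest class, first-order intensity features over a masked region (real pipelines such as PyRadiomics compute many more features with standardized binning; the bin count here is an arbitrary choice):

```python
import math

def first_order_features(image, mask, n_bins=8):
    """Toy first-order radiomic features over the masked region.

    image, mask: flat lists of equal length; mask entries are 0/1.
    Returns mean, variance, and Shannon entropy of an n_bins histogram
    of the intensities inside the mask.
    """
    vals = [v for v, m in zip(image, mask) if m]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    lo, hi = min(vals), max(vals)
    width = (hi - lo) / n_bins or 1.0  # constant region -> single bin of width 1
    hist = [0] * n_bins
    for v in vals:
        hist[min(int((v - lo) / width), n_bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return {"mean": mean, "variance": var, "entropy": entropy}
```

Feature vectors like this, concatenated across subregions and modalities and fused with clinical variables, are what a regressor such as SVR consumes to predict survival time.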
Anchored-fusion enables targeted fusion search in bulk and single-cell RNA sequencing data
Cell Rep Methods. 2024 Mar 12:100733. doi: 10.1016/j.crmeth.2024.100733. Online ahead of print.
ABSTRACT
Here, we present Anchored-fusion, a highly sensitive fusion gene detection tool. It anchors a gene of interest, which often involves driver fusion events, and recovers non-unique matches of short-read sequences that are typically filtered out by conventional algorithms. In addition, Anchored-fusion contains a module based on a deep learning hierarchical structure that incorporates self-distillation learning (hierarchical view learning and distillation [HVLD]), which effectively filters out false positive chimeric fragments generated during sequencing while maintaining true fusion genes. Anchored-fusion enables highly sensitive detection of fusion genes, thus allowing for application in cases with low sequencing depths. We benchmarked Anchored-fusion under various conditions and found that it outperformed other tools in detecting fusion events in simulated data, bulk RNA sequencing (bRNA-seq) data, and single-cell RNA sequencing (scRNA-seq) data. Our results demonstrate that Anchored-fusion can be a useful tool for fusion detection tasks in clinically relevant RNA-seq data and can be applied to investigate intratumor heterogeneity in scRNA-seq data.
PMID:38503288 | DOI:10.1016/j.crmeth.2024.100733
A retinal vessel segmentation network with multiple-dimension attention and adaptive feature fusion
Comput Biol Med. 2024 Mar 15;172:108315. doi: 10.1016/j.compbiomed.2024.108315. Online ahead of print.
ABSTRACT
The incidence of blinding eye diseases is highly correlated with changes in retinal morphology, which is clinically assessed by segmenting retinal structures in fundus images. However, some existing methods have limitations in accurately segmenting thin vessels. In recent years, deep learning has made a splash in medical image segmentation, but the loss of edge information caused by repeated convolution and pooling limits the final segmentation accuracy. To this end, this paper proposes a pixel-level retinal vessel segmentation network with multiple-dimension attention and adaptive feature fusion. Here, a multiple dimension attention enhancement (MDAE) block is proposed to acquire more local edge information. Meanwhile, a deep guidance fusion (DGF) block and a cross-pooling semantic enhancement (CPSE) block are proposed simultaneously to acquire more global context. Further, the predictions of different decoding stages are learned and aggregated by an adaptive weight learner (AWL) unit to obtain the best weights for effective feature fusion. The experimental results on three public fundus image datasets show that the proposed network effectively enhances segmentation performance on retinal blood vessels. In particular, the proposed method achieves AUCs of 98.30%, 98.75%, and 98.71% on the DRIVE, CHASE_DB1, and STARE datasets, respectively, while the F1 score exceeds 83% on all three datasets. The source code of the proposed model is available at https://github.com/gegao310/VesselSeg-Pytorch-master.
PMID:38503093 | DOI:10.1016/j.compbiomed.2024.108315
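For binary segmentation masks, the F1 score reported above coincides with the Dice coefficient, the standard overlap measure in this field. A minimal sketch on flattened 0/1 masks:

```python
def dice_f1(pred, truth):
    """Dice coefficient of two binary masks (flat 0/1 lists).

    For binary segmentation, Dice == F1: 2*TP / (2*TP + FP + FN).
    Assumes at least one foreground pixel in pred or truth.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)       # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)   # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)   # false negatives
    return 2 * tp / (2 * tp + fp + fn)
```

Because the metric ignores true negatives, it stays informative even when vessels occupy only a small fraction of the fundus image, which is why it is preferred over plain accuracy here.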
PPII-AEAT: Prediction of protein-protein interaction inhibitors based on autoencoders with adversarial training
Comput Biol Med. 2024 Mar 16;172:108287. doi: 10.1016/j.compbiomed.2024.108287. Online ahead of print.
ABSTRACT
Protein-protein interactions (PPIs) have shown increasing potential as novel drug targets. The design and development of small molecule inhibitors targeting specific PPIs are crucial for the prevention and treatment of related diseases. Accordingly, effective computational methods are highly desired to meet the emerging need for large-scale, accurate prediction of PPI inhibitors. However, existing machine learning models rely heavily on the manual screening of features and lack generalizability. Here, we propose a new PPI inhibitor prediction method based on autoencoders with adversarial training (named PPII-AEAT) that can adaptively learn molecule representation to cope with different PPI targets. First, extended-connectivity fingerprints and Mordred descriptors are employed to extract the primary features of small molecular compounds. Then, an autoencoder architecture is trained in three phases to learn high-level representations and predict inhibitory scores. We evaluate PPII-AEAT on nine PPI targets and two different tasks: PPI inhibitor identification and inhibitory potency prediction. The experimental results show that our proposed PPII-AEAT outperforms state-of-the-art methods.
PMID:38503089 | DOI:10.1016/j.compbiomed.2024.108287
A deep ensemble medical image segmentation with novel sampling method and loss function
Comput Biol Med. 2024 Mar 13;172:108305. doi: 10.1016/j.compbiomed.2024.108305. Online ahead of print.
ABSTRACT
Medical image segmentation is a critical task in computer vision because it facilitates precise identification of regions of interest in medical images. This task plays an important role in disease diagnosis and treatment planning. In recent years, deep learning algorithms have exhibited remarkable performance in this domain. However, unresolved issues remain, including challenges related to class imbalance and achieving higher levels of accuracy. Considering these challenges, we propose a novel approach to the semantic segmentation of medical images. In this study, a new sampling method is proposed to handle class imbalance in medical datasets, ensuring a comprehensive understanding of both abnormal tissues and background characteristics. Additionally, we propose a novel pixel-level loss function inspired by exponential loss. To enhance segmentation performance further, we present an ensemble model comprising two UNet models with a ResNet backbone. The initial model is trained on the primary dataset, while the second model is trained on the dataset obtained through our sampling method. The predictions of both models are combined in an ensemble. We have assessed the effectiveness of our approach using three publicly available datasets: Kvasir-SEG, FLAIR MRI Low-Grade Glioma (LGG), and ISIC 2018. In our evaluation, we compared the performance of our loss function against four other loss functions. Furthermore, we demonstrated the strength of our approach by comparing it with various state-of-the-art methods.
PMID:38503087 | DOI:10.1016/j.compbiomed.2024.108305
US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation
Comput Biol Med. 2024 Mar 15;172:108282. doi: 10.1016/j.compbiomed.2024.108282. Online ahead of print.
ABSTRACT
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
PMID:38503085 | DOI:10.1016/j.compbiomed.2024.108282
Brain MR image simulation for deep learning based medical image analysis networks
Comput Methods Programs Biomed. 2024 Mar 7;248:108115. doi: 10.1016/j.cmpb.2024.108115. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data is limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences and overall realism.
METHODS: We propose a realistic simulation framework by incorporating patient-specific phantoms and Bloch equations-based analytical solutions for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework along with GT annotations can be utilized directly to train a 3D brain segmentation network. To evaluate our model further on a larger set of real multi-source MRI data without GT, we compared our model to existing brain segmentation tools, FSL-FAST and SynthSeg.
RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR and resolution. The brain segmentation network for WM/GM/CSF trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model exhibits performance close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937.
CONCLUSIONS: Our proposed simulation framework is the initial step towards achieving truly physics-based MRI image generation, providing flexibility to generate large sets of variable MRI data for desired anatomy, sequence, contrast, SNR, and resolution. Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
PMID:38503072 | DOI:10.1016/j.cmpb.2024.108115
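The framework's exact analytical solutions are not given in this abstract; as one standard example of the kind of closed-form Bloch-equation result such simulators build on, the steady-state spoiled gradient-echo signal often used for T1-weighted simulation (the tissue parameter values in the test are only illustrative):

```python
import math

def spgr_signal(m0, t1, t2s, tr, te, flip_deg):
    """Steady-state spoiled gradient-echo signal from the Bloch equations.

    S = M0 * sin(a) * (1 - E1) / (1 - cos(a) * E1) * exp(-TE/T2*),
    with E1 = exp(-TR/T1). Times share one unit (e.g. ms).
    """
    a = math.radians(flip_deg)
    e1 = math.exp(-tr / t1)
    return m0 * math.sin(a) * (1.0 - e1) / (1.0 - math.cos(a) * e1) * math.exp(-te / t2s)
```

Evaluating this voxel-wise over a labelled phantom, with per-tissue M0/T1/T2* values, is what lets a simulator render the same anatomy at arbitrary TR, TE, and flip angle, and hence at variable contrast and SNR.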
Circular RNAs in the KRAS pathway: Emerging players in cancer progression
Pathol Res Pract. 2024 Mar 11;256:155259. doi: 10.1016/j.prp.2024.155259. Online ahead of print.
ABSTRACT
Circular RNAs (circRNAs) have been recognized as key components in the intricate regulatory network of the KRAS pathway across various cancers. The KRAS pathway, a central signalling cascade crucial in tumorigenesis, has gained substantial emphasis as a possible therapeutic target. CircRNAs, a subgroup of non-coding RNAs known for their closed circular arrangement, play diverse roles in gene regulation, contributing to the intricate landscape of cancer biology. This review consolidates existing knowledge on circRNAs within the framework of the KRAS pathway, emphasizing their multifaceted functions in cancer progression. Notable circRNAs, such as Circ_GLG1 and circITGA7, have been identified as pivotal regulators in colorectal cancer (CRC), influencing KRAS expression and the Ras signaling pathway. Aside from their significance in gene regulation, circRNAs contribute to immune evasion, apoptosis, and drug tolerance within KRAS-driven cancers, adding complexity to the intricate interplay. While our comprehension of circRNAs in the KRAS pathway is evolving, challenges such as the diverse landscape of KRAS mutant tumors and the necessity for synergistic combination therapies persist. Integrating cutting-edge technologies, including deep learning-based prediction methods, holds the potential for unveiling disease-associated circRNAs and identifying novel therapeutic targets. Sustained research efforts are crucial to comprehensively unravel the molecular mechanisms governing the intricate interplay between circRNAs and the KRAS pathway, offering insights that could potentially revolutionize cancer diagnostics and treatment strategies.
PMID:38503004 | DOI:10.1016/j.prp.2024.155259
Artificial intelligence system for automatic maxillary sinus segmentation on cone beam computed tomography images
Dentomaxillofac Radiol. 2024 Mar 19:twae012. doi: 10.1093/dmfr/twae012. Online ahead of print.
ABSTRACT
OBJECTIVES: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in Cone Beam Computed Tomography (CBCT) volumes and to evaluate the performance of this model.
METHODS: In 101 CBCT scans, MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into three parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. The model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The performance of the model in automatically segmenting the MS on CBCT scans was assessed using several parameters, including F1-score, accuracy, sensitivity, precision, Area Under Curve (AUC), Dice Coefficient (DC), 95% Hausdorff Distance (95% HD), and Intersection over Union (IoU) values.
RESULTS: The F1-score, accuracy, sensitivity, and precision values for successful segmentation of the maxillary sinus in CBCT images were 0.96, 0.99, 0.96, and 0.96, respectively. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively.
CONCLUSIONS: Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
PMID:38502963 | DOI:10.1093/dmfr/twae012
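The 95% Hausdorff Distance (95% HD) reported above is the 95th percentile of nearest-boundary distances between the predicted and ground-truth surfaces, which makes it robust to single outlier voxels. A small sketch on 2D point sets (real evaluations run on 3D surface voxels, usually scaled by voxel spacing):

```python
import math

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets.

    For every point, take the distance to the nearest point of the other
    set; HD95 is the 95th percentile of all these directed distances.
    """
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    d = [nearest(p, points_b) for p in points_a]
    d += [nearest(q, points_a) for q in points_b]
    d.sort()
    idx = min(len(d) - 1, round(0.95 * (len(d) - 1)))  # nearest-rank percentile
    return d[idx]
```

Unlike Dice or IoU, this is a boundary metric in physical units, so the 1.19 reported above describes how far the predicted sinus surface strays from the annotation rather than how much volume overlaps.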
Deep learning to detect left ventricular structural abnormalities in chest X-rays
Eur Heart J. 2024 Mar 20:ehad782. doi: 10.1093/eurheartj/ehad782. Online ahead of print.
ABSTRACT
BACKGROUND AND AIMS: Early identification of cardiac structural abnormalities indicative of heart failure is crucial to improving patient outcomes. Chest X-rays (CXRs) are routinely conducted on a broad population of patients, presenting an opportunity to build scalable screening tools for structural abnormalities indicative of Stage B or worse heart failure with deep learning methods. In this study, a model was developed to identify severe left ventricular hypertrophy (SLVH) and dilated left ventricle (DLV) using CXRs.
METHODS: A total of 71 589 unique CXRs from 24 689 different patients completed within 1 year of echocardiograms were identified. Labels for SLVH, DLV, and a composite label indicating the presence of either were extracted from echocardiograms. A deep learning model was developed and evaluated using area under the receiver operating characteristic curve (AUROC). Performance was additionally validated on 8003 CXRs from an external site and compared against visual assessment by 15 board-certified radiologists.
RESULTS: The model yielded an AUROC of 0.79 (0.76-0.81) for SLVH, 0.80 (0.77-0.84) for DLV, and 0.80 (0.78-0.83) for the composite label, with similar performance on an external data set. The model outperformed all 15 individual radiologists for predicting the composite label and achieved a sensitivity of 71% vs. 66% against the consensus vote across all radiologists at a fixed specificity of 73%.
CONCLUSIONS: Deep learning analysis of CXRs can accurately detect the presence of certain structural abnormalities and may be useful in early identification of patients with LV hypertrophy and dilation. As a resource to promote further innovation, 71 589 CXRs with adjoining echocardiographic labels have been made publicly available.
PMID:38503537 | DOI:10.1093/eurheartj/ehad782
Rapid and automated design of two-component protein nanomaterials using ProteinMPNN
Proc Natl Acad Sci U S A. 2024 Mar 26;121(13):e2314646121. doi: 10.1073/pnas.2314646121. Epub 2024 Mar 19.
ABSTRACT
The design of protein-protein interfaces using physics-based design methods such as Rosetta requires substantial computational resources and manual refinement by expert structural biologists. Deep learning methods promise to simplify protein-protein interface design and enable its application to a wide variety of problems by researchers from various scientific disciplines. Here, we test the ability of a deep learning method for protein sequence design, ProteinMPNN, to design two-component tetrahedral protein nanomaterials and benchmark its performance against Rosetta. ProteinMPNN had a similar success rate to Rosetta, yielding 13 new experimentally confirmed assemblies, but required orders of magnitude less computation and no manual refinement. The interfaces designed by ProteinMPNN were substantially more polar than those designed by Rosetta, which facilitated in vitro assembly of the designed nanomaterials from independently purified components. Crystal structures of several of the assemblies confirmed the accuracy of the design method at high resolution. Our results showcase the potential of deep learning-based methods to unlock the widespread application of designed protein-protein interfaces and self-assembling protein nanomaterials in biotechnology.
PMID:38502697 | DOI:10.1073/pnas.2314646121
Efficient Prediction Model of mRNA End-to-End Distance and Conformation: Three-Dimensional RNA Illustration Program (TRIP)
Methods Mol Biol. 2024;2784:191-200. doi: 10.1007/978-1-0716-3766-1_13.
ABSTRACT
The secondary and tertiary structures of RNA play a vital role in the regulation of biological reactions. These structures have been studied experimentally through in vivo and in vitro analyses, and in silico models have become increasingly accurate in predicting them. Over the past decade, approaches to RNA structure prediction have diversified, from the earliest thermodynamics- and molecular dynamics-based methods to deep learning-based conformation predictions. While most research on RNA structure prediction has focused on short non-coding RNAs, there has been limited work on predicting the conformation of longer mRNAs. Our study introduces a computer simulation model called the Three-dimensional RNA Illustration Program (TRIP), which is based on single-chain models with angle restriction of each bead component derived from previously reported single-molecule fluorescence in situ hybridization (smFISH) experiments. TRIP is fast and efficient, requiring no more than three inputs to produce its outputs. It can also provide a rough visualization of the 3D conformation of an RNA, making it a valuable tool for predicting RNA end-to-end distance.
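The general idea behind bead-based single-chain models of end-to-end distance can be sketched with a freely jointed chain, the simplest such model. Note this is a deliberate simplification: unlike TRIP, it applies no angle restriction between beads, so it serves only as an ideal-chain baseline (for which theory predicts an RMS end-to-end distance of bond_len * sqrt(n_bonds)):

```python
import math
import random

def rms_end_to_end(n_bonds, bond_len, n_trials=2000, seed=7):
    """Monte Carlo estimate of the RMS end-to-end distance of a
    freely jointed chain: each bond points in a uniformly random
    3D direction, independent of its neighbours."""
    rng = random.Random(seed)
    total_r2 = 0.0
    for _ in range(n_trials):
        x = y = z = 0.0
        for _ in range(n_bonds):
            cz = rng.uniform(-1.0, 1.0)           # cos(theta), uniform on the sphere
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - cz * cz)
            x += bond_len * s * math.cos(phi)
            y += bond_len * s * math.sin(phi)
            z += bond_len * cz
        total_r2 += x * x + y * y + z * z
    return math.sqrt(total_r2 / n_trials)

# For 100 bonds of unit length, theory predicts an RMS distance near 10
rms = rms_end_to_end(100, 1.0)
```

Restricting the bond angles, as TRIP does from smFISH-derived constraints, stiffens the chain and systematically increases the end-to-end distance relative to this baseline.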
PMID:38502487 | DOI:10.1007/978-1-0716-3766-1_13
Exploring the Low-Dose Limit for Focal Hepatic Lesion Detection with a Deep Learning-Based CT Reconstruction Algorithm: A Simulation Study on Patient Images
J Imaging Inform Med. 2024 Mar 19. doi: 10.1007/s10278-024-01080-3. Online ahead of print.
ABSTRACT
This study aims to investigate the maximum achievable dose reduction when applying a new deep learning-based reconstruction algorithm, the artificial intelligence iterative reconstruction (AIIR), in computed tomography (CT) for hepatic lesion detection. A total of 40 patients with 98 clinically confirmed hepatic lesions were retrospectively included. The mean volume CT dose index was 13.66 ± 1.73 mGy in routine-dose portal venous CT examinations, where the images were originally obtained with hybrid iterative reconstruction (HIR). Low-dose simulations were performed in the projection domain at 40%-, 20%-, and 10%-dose levels, followed by reconstruction with both HIR and AIIR. Two radiologists were asked to detect hepatic lesions on each set of low-dose images in separate sessions. Qualitative metrics, including lesion conspicuity, diagnostic confidence, and overall image quality, were evaluated on a 5-point scale. The lesion contrast-to-noise ratio (CNR) was also calculated for quantitative assessment. Lesion CNR on AIIR at reduced doses was significantly higher than on routine-dose HIR (all p < 0.05). Qualitative image quality decreased as the radiation dose was reduced, although there were no significant differences between 40%-dose AIIR and routine-dose HIR images. The lesion detection rate was 100%, 98% (96/98), and 73.5% (72/98) on 40%-, 20%-, and 10%-dose AIIR, respectively, versus 98% (96/98), 73.5% (72/98), and 40% (39/98) on the corresponding low-dose HIR images. AIIR outperformed HIR in simulated low-dose CT examinations of the liver. The use of AIIR allows up to 60% dose reduction for lesion detection while maintaining image quality comparable to routine-dose HIR.
PMID:38502435 | DOI:10.1007/s10278-024-01080-3