Deep learning

Machine learning-aided search for ligands of P2Y6 and other P2Y receptors

Mon, 2024-03-25 06:00

Purinergic Signal. 2024 Mar 25. doi: 10.1007/s11302-024-10003-4. Online ahead of print.

ABSTRACT

The P2Y6 receptor, activated by uridine diphosphate (UDP), is a target for antagonists in inflammatory, neurodegenerative, and metabolic disorders, yet few potent and selective antagonists are known to date. This prompted us to use machine learning as a novel approach to aid ligand discovery, with pharmacological evaluation at three P2YR subtypes: initially P2Y6 and subsequently P2Y1 and P2Y14. Relying on extensive published data for P2Y6R agonists, we generated and validated an array of classification machine learning models using the following algorithms: deep learning (DL), AdaBoost (ada), Bernoulli naive Bayes (bnb), k-nearest neighbors (kNN), logistic regression (lreg), random forest (rf), support vector classification (SVC), and XGBoost (XGB). A common consensus of these models was applied to the molecular selection of 21 diverse structures. Compounds were screened using human P2Y6R-induced functional calcium transients in transfected 1321N1 astrocytoma cells and fluorescent binding inhibition at the closely related hP2Y14R expressed in CHO cells. The hit compound ABBV-744, an experimental anticancer drug with a 6-methyl-7-oxo-6,7-dihydro-1H-pyrrolo[2,3-c]pyridine scaffold, had multifaceted interactions with the P2YR family: hP2Y6R inhibition in a non-surmountable fashion, suggesting noncompetitive antagonism; hP2Y1R enhancement; but no hP2Y14R binding inhibition. Other machine learning-selected compounds were either weak (the experimental anti-asthmatic drug AZD5423, with a phenyl-1H-indazole scaffold) or inactive in inhibiting the hP2Y6R. The experimental drugs TAK-593 and GSK1070916 (100 µM) inhibited P2Y14R fluorescent binding by 50% and 38%, respectively, and all other compounds by < 20%. Thus, machine learning has led the way toward revealing previously unknown modulators of several P2YR subtypes that have varied effects.
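As an illustration of the consensus step described above, the following minimal Python sketch trains several of the named scikit-learn classifier families on toy fingerprint data and keeps only candidates that a majority predicts to be active. The feature matrix, labels, candidate set, and majority threshold are hypothetical stand-ins, not the paper's actual descriptors or criteria.

```python
# Minimal sketch of consensus (majority-vote) classification across several
# model families, loosely mirroring the ensemble described in the abstract.
# X, y, and X_candidates are hypothetical stand-ins for molecular fingerprints
# and activity labels; the paper's actual descriptors are not reproduced here.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 128))           # binary fingerprints (toy data)
y = rng.integers(0, 2, size=200)                  # 1 = active at P2Y6R, 0 = inactive
X_candidates = rng.integers(0, 2, size=(21, 128)) # 21 candidate structures

models = [
    AdaBoostClassifier(),
    BernoulliNB(),
    KNeighborsClassifier(),
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(),
    SVC(),
]
votes = np.zeros(len(X_candidates), dtype=int)
for model in models:
    model.fit(X, y)
    votes += model.predict(X_candidates)

# Keep candidates that a majority of the models predict to be active.
consensus_hits = np.where(votes > len(models) / 2)[0]
print("consensus-selected candidate indices:", consensus_hits)
```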

PMID:38526670 | DOI:10.1007/s11302-024-10003-4

Categories: Literature Watch

AIxSuture: vision-based assessment of open suturing skills

Mon, 2024-03-25 06:00

Int J Comput Assist Radiol Surg. 2024 Mar 25. doi: 10.1007/s11548-024-03093-3. Online ahead of print.

ABSTRACT

PURPOSE: Efficient and precise surgical skills are essential in ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, this cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery skills.

METHODS: We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset.
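For readers unfamiliar with temporal segment networks, the sketch below illustrates the general segment-consensus idea under stated assumptions: a placeholder backbone scores a few snippets sampled from a video, and the logits are averaged into one video-level prediction over three skill levels. It is not the paper's I3D or Video Swin implementation.

```python
# Sketch of the temporal-segment idea: sample a few snippets from a video,
# score each with a backbone, and average the logits into one video-level
# prediction over the three skill levels. The backbone is a placeholder.
import torch
import torch.nn as nn

class SegmentConsensus(nn.Module):
    def __init__(self, backbone: nn.Module, n_classes: int = 3):
        super().__init__()
        self.backbone = backbone            # maps one snippet -> feature vector
        self.head = nn.LazyLinear(n_classes)

    def forward(self, snippets):            # (batch, n_segments, C, T, H, W)
        b, s = snippets.shape[:2]
        feats = self.backbone(snippets.flatten(0, 1))   # score each snippet
        logits = self.head(feats).view(b, s, -1)
        return logits.mean(dim=1)           # consensus = average over segments

backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU())
model = SegmentConsensus(backbone)
video = torch.randn(2, 4, 3, 8, 64, 64)     # 2 toy videos, 4 segments each
print(model(video).shape)                   # torch.Size([2, 3])
```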

RESULTS: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking results are an accuracy and F1 score of up to 75% and 72%, respectively. This is comparable to the performance achieved by individual raters, considering the observed inter-rater agreement and rater variability. We present the first end-to-end trained approach for skill assessment for open surgery training.

CONCLUSION: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the doors to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques with the potential to improve the surgical outcome of patients.

PMID:38526613 | DOI:10.1007/s11548-024-03093-3

Categories: Literature Watch

Personalized AI-Driven Real-Time Models to Predict Stress-Induced Blood Pressure Spikes Using Wearable Devices: Proposal for a Prospective Cohort Study

Mon, 2024-03-25 06:00

JMIR Res Protoc. 2024 Mar 25;13:e55615. doi: 10.2196/55615.

ABSTRACT

BACKGROUND: Referred to as the "silent killer," elevated blood pressure (BP) often goes unnoticed due to the absence of apparent symptoms, resulting in cumulative harm over time. Chronic stress has been consistently linked to increased BP. Prior studies have found that elevated BP often arises from a stressful lifestyle, although the effect of specific stressors varies drastically between individuals. The heterogeneous nature of both the stress and BP responses to a multitude of lifestyle decisions can make it difficult, if not impossible, to pinpoint the most deleterious behaviors using the traditional mechanism of clinical interviews.

OBJECTIVE: The aim of this study is to leverage machine learning (ML) algorithms for real-time predictions of stress-induced BP spikes using consumer wearable devices such as Fitbit, providing actionable insights to both patients and clinicians to improve diagnostics and enable proactive health monitoring. This study also seeks to address the significant challenges in identifying specific deleterious behaviors associated with stress-induced hypertension through the development of personalized artificial intelligence models for individual patients, departing from the conventional approach of using generalized models.

METHODS: The study proposes the development of ML algorithms to analyze biosignals obtained from these wearable devices, aiming to make real-time predictions about BP spikes. Given the longitudinal nature of the data set, comprising time-series data from wearables (eg, Fitbit) and corresponding time-stamped labels representing stress levels from Ecological Momentary Assessment reports, we propose self-supervised learning for pretraining the network and transformer models for fine-tuning on a personalized prediction task. Transformer models, with their self-attention mechanisms, dynamically weigh the importance of different time steps, enabling the model to focus on relevant temporal features and dependencies and facilitating accurate prediction.
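A minimal sketch of the proposed transformer stage follows, assuming per-step wearable feature vectors as input; the dimensions, mean pooling, and binary spike head are illustrative assumptions rather than the protocol's final architecture.

```python
# Sketch of a transformer encoder over wearable time series for binary
# BP-spike prediction, assuming per-step feature vectors (e.g. heart rate,
# steps, sleep stage) were already extracted. Dimensions are illustrative.
import torch
import torch.nn as nn

class BPSpikePredictor(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)   # logit for "BP spike within horizon"

    def forward(self, x):                   # x: (batch, time_steps, n_features)
        h = self.encoder(self.input_proj(x))
        return self.head(h.mean(dim=1)).squeeze(-1)  # pool over time steps

model = BPSpikePredictor()
logits = model(torch.randn(4, 96, 8))       # e.g. 96 five-minute windows
print(torch.sigmoid(logits))                # per-subject spike probabilities
```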

RESULTS: Supported as a pilot project from the Robert C Perry Fund of the Hawaii Community Foundation, the study team has developed the core study app, CardioMate. CardioMate not only reminds participants to initiate BP readings using an Omron HeartGuide wearable monitor but also prompts them multiple times a day to report stress levels. Additionally, it collects other useful information including medications, environmental conditions, and daily interactions. Through the app's messaging system, efficient contact and interaction between users and study admins ensure smooth progress.

CONCLUSIONS: Personalized ML when applied to biosignals offers the potential for real-time digital health interventions for chronic stress and its symptoms. The project's clinical use for Hawaiians with stress-induced high BP combined with its methodological innovation of personalized artificial intelligence models highlights its significance in advancing health care interventions. Through iterative refinement and optimization, the aim is to develop a personalized deep-learning framework capable of accurately predicting stress-induced BP spikes, thereby promoting individual well-being and health outcomes.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/55615.

PMID:38526539 | DOI:10.2196/55615

Categories: Literature Watch

Influence of spatio-temporal filtering on hand kinematics estimation from high-density EMG signals

Mon, 2024-03-25 06:00

J Neural Eng. 2024 Mar 25;21(2). doi: 10.1088/1741-2552/ad3498.

ABSTRACT

OBJECTIVE: Surface electromyography (sEMG) is a non-invasive technique that records the electrical signals generated by muscles through electrodes placed on the skin. sEMG is the state-of-the-art method used to control active upper limb prostheses because of the association between its amplitude and the neural drive sent from the spinal cord to muscles. However, accurately estimating the kinematics of a freely moving human hand using sEMG from extrinsic hand muscles remains a challenge. Deep learning has recently been successfully applied to this problem by mapping raw sEMG signals into kinematics. Nonetheless, the optimal number of EMG signals and the type of pre-processing that would maximize performance have not yet been investigated.

APPROACH: Here, we analyze the impact of these factors on the accuracy of kinematics estimates. For this purpose, we processed monopolar sEMG signals that were originally recorded from 320 electrodes over the forearm muscles of 13 subjects. We used a previously published deep learning method that can map the kinematics of the human hand with real-time resolution.

MAIN RESULTS: While myocontrol algorithms essentially use the temporal envelope of the EMG signal as the only EMG feature, we show that our approach requires the full bandwidth of the signal in the temporal domain for accurate estimates. Spatial filtering, however, had a smaller impact, and low-order spatial filters may be suitable. Moreover, reducing the number of channels by ablation resulted in large performance losses. The highest accuracy was reached with the highest number of available sensors (n = 320). Importantly and unexpectedly, our results suggest that increasing the number of channels above those used in this study may further enhance the accuracy in predicting the kinematics of the human hand.

SIGNIFICANCE: We conclude that full-bandwidth, high-density EMG systems of hundreds of electrodes are needed for accurate kinematic estimates of the human hand.
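To make the spatial-filtering comparison concrete, here is a small sketch of low-order spatial filters applied to a toy 8 × 40 electrode grid (320 channels, matching the study's count); the grid layout and signals are stand-ins, not the study's recordings.

```python
# Sketch of low-order spatial filters on an HD-EMG electrode grid: the
# identity (monopolar), a single differential along grid rows, and a 2D
# Laplacian. The 8 x 40 grid and random signals are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
emg = rng.standard_normal((8, 40, 1000))   # 8 x 40 electrodes, 1000 samples

monopolar = emg                            # no spatial filtering
single_diff = np.diff(emg, axis=0)         # difference between adjacent rows
laplacian = (4 * emg[1:-1, 1:-1]           # center minus 4 nearest neighbors
             - emg[:-2, 1:-1] - emg[2:, 1:-1]
             - emg[1:-1, :-2] - emg[1:-1, 2:])

print(monopolar.shape, single_diff.shape, laplacian.shape)
```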

PMID:38525843 | DOI:10.1088/1741-2552/ad3498

Categories: Literature Watch

The prognostic value of visual and automatic coronary calcium scoring from low-dose CT [15O]-water PET

Mon, 2024-03-25 06:00

Eur Heart J Cardiovasc Imaging. 2024 Mar 25:jeae081. doi: 10.1093/ehjci/jeae081. Online ahead of print.

ABSTRACT

PURPOSE: Firstly, to validate automatically and visually scored coronary artery calcium (CAC) on low dose CT (LDCT) scans with a dedicated calcium scoring CT (CSCT) scan. Secondly, to assess the added value of CAC scored from LDCT scans acquired during [15O]-water-PET myocardial perfusion imaging (MPI) on prediction of major adverse cardiac events (MACE).

METHODS: In total, 572 consecutive patients with suspected coronary artery disease who underwent [15O]-water PET MPI with LDCT and a dedicated CSCT scan were included. On the reference CSCT scans, manual CAC scoring was performed, while LDCT scans were scored visually and automatically using a deep learning approach. Subsequently, based on the CAC score results from the CSCT and LDCT scans, each patient's scan was assigned to one of five cardiovascular risk groups (0; 1-100; 101-400; 401-1000; >1000), and the agreement in risk group classification between CSCT and LDCT scans was investigated. MACE was defined as a composite of all-cause death, nonfatal myocardial infarction, coronary revascularization, and unstable angina.
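The five-group risk assignment is simple enough to encode directly; the sketch below maps a CAC score to the groups stated in the methods (the function name and Agatston-style input are assumptions).

```python
# The five CAC risk groups stated in the methods, encoded as a lookup.
def cac_risk_group(cac_score: float) -> str:
    """Map an Agatston-style CAC score to the abstract's five risk groups."""
    if cac_score == 0:
        return "0"
    if cac_score <= 100:
        return "1-100"
    if cac_score <= 400:
        return "101-400"
    if cac_score <= 1000:
        return "401-1000"
    return ">1000"

for score in (0, 57, 250, 900, 1500):
    print(score, "->", cac_risk_group(score))
```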

RESULTS: The agreement in risk group classification between reference CSCT manual scoring and visual/automatic LDCT scoring was 0.66 (95% CI: 0.62-0.70) and 0.58 (95% CI: 0.53-0.62), respectively. Based on visual and automatic CAC scoring from LDCT scans, patients with CAC > 100 and CAC > 400, respectively, were at increased risk of MACE, independently of ischemic information from the [15O]-water PET scan.

CONCLUSIONS: There is a moderate agreement in risk classification between visual and automatic CAC scoring from LDCT and reference CSCT scans. Visual and automatic CAC scoring from LDCT scans improve identification of patients at higher risk of MACE.

PMID:38525588 | DOI:10.1093/ehjci/jeae081

Categories: Literature Watch

Diffusion Models To Predict 3D Late Mechanical Activation From Sparse 2D Cardiac MRIs

Mon, 2024-03-25 06:00

Proc Mach Learn Res. 2023 Dec;225:190-200.

ABSTRACT

Identifying regions of late mechanical activation (LMA) of the left ventricular (LV) myocardium is critical in determining the optimal pacing site for cardiac resynchronization therapy in patients with heart failure. Several deep learning-based approaches have been developed to predict 3D LMA maps of the LV myocardium from a stack of sparse 2D cardiac magnetic resonance images (MRIs). However, these models often only loosely consider the geometric shape structure of the myocardium. This makes the reconstructed activation maps suboptimal, reducing the accuracy of predicting the late-activating regions of the heart. In this paper, we propose to use shape-constrained diffusion models to better reconstruct a 3D LMA map given a limited number of 2D cardiac MRI slices. In contrast to previous methods that primarily rely on spatial correlations of image intensities for 3D reconstruction, our model leverages object shape as a prior learned from the training data to guide the reconstruction process. To achieve this, we develop a joint learning network that simultaneously learns a mean shape under deformation models. Each reconstructed image is then considered a deformed variant of the mean shape. To validate the performance of our model, we train and test the proposed framework on a publicly available mesh dataset of 3D myocardium and compare it with state-of-the-art deep learning-based reconstruction models. Experimental results show that our model achieves superior performance in reconstructing 3D LMA maps compared to the state-of-the-art models.

PMID:38525446 | PMC:PMC10958778

Categories: Literature Watch

BioVDB: biological vector database for high-throughput gene expression meta-analysis

Mon, 2024-03-25 06:00

Front Artif Intell. 2024 Mar 8;7:1366273. doi: 10.3389/frai.2024.1366273. eCollection 2024.

ABSTRACT

High-throughput sequencing has created an exponential increase in the amount of gene expression data, much of which is freely, publicly available in repositories such as NCBI's Gene Expression Omnibus (GEO). Querying this data for patterns such as similarity and distance, however, becomes increasingly challenging as the total amount of data increases. Furthermore, vectorization of the data is commonly required in Artificial Intelligence and Machine Learning (AI/ML) approaches. We present BioVDB, a vector database for storage and analysis of gene expression data, which enhances the potential for integrating biological studies with AI/ML tools. We used a previously developed approach called Automatic Label Extraction (ALE) to extract sample labels from metadata, including age, sex, and tissue/cell-line. BioVDB stores 438,562 samples from eight microarray GEO platforms. We show that it allows for efficient querying of data using similarity search, which can also be useful for identifying and inferring missing labels of samples, and for rapid similarity analysis.
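The similarity-search use case can be sketched with a generic nearest-neighbor index over toy expression vectors; BioVDB's actual storage layer and vector dimensionality are not reproduced here.

```python
# Minimal sketch of the similarity-search use case: given vectorized
# expression profiles, retrieve the nearest samples by cosine distance.
# The toy matrix stands in for BioVDB's stored sample vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
profiles = rng.random((1000, 256))   # 1000 samples x 256-dim expression vectors

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(profiles)
distances, neighbors = index.kneighbors(profiles[:1])  # query with one sample

print("nearest sample indices:", neighbors[0])  # the query itself ranks first
print("cosine distances:", np.round(distances[0], 3))
# A missing label (e.g. tissue) could then be inferred by majority vote
# over the labels of these nearest neighbors.
```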

PMID:38525301 | PMC:PMC10957786 | DOI:10.3389/frai.2024.1366273

Categories: Literature Watch

Task-based transferable deep-learning scatter correction in cone beam computed tomography: a simulation study

Mon, 2024-03-25 06:00

J Med Imaging (Bellingham). 2024 Mar;11(2):024006. doi: 10.1117/1.JMI.11.2.024006. Epub 2024 Mar 23.

ABSTRACT

PURPOSE: X-ray scatter significantly affects the image quality of cone beam computed tomography (CBCT). Although convolutional neural networks (CNNs) have shown promise in correcting x-ray scatter, their effectiveness is hindered by two main challenges: the necessity for extensive datasets and the uncertainty regarding model generalizability. This study introduces a task-based paradigm to overcome these obstacles, enhancing the application of CNNs in scatter correction.

APPROACH: Using a CNN with a U-net architecture, the proposed methodology employs a two-stage training process for scatter correction in CBCT scans. Initially, the CNN is pre-trained on approximately 4000 image pairs from geometric phantom projections, then fine-tuned using transfer learning (TL) on 250 image pairs of anthropomorphic projections, enabling task-specific adaptations with minimal data. 2D scatter ratio (SR) maps derived from projection data were used as CNN targets for scatter prediction. The fine-tuning process for specific imaging tasks, such as head and neck imaging, involved simulating scans of an anthropomorphic phantom and pre-processing the data for CNN retraining.
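The two-stage pre-train/fine-tune pattern can be sketched as follows, with a tiny encoder-decoder standing in for the U-net and random tensors standing in for the phantom and anthropomorphic projection pairs; the encoder-freezing strategy is an illustrative assumption.

```python
# Sketch of the two-stage training pattern: a small encoder-decoder stands
# in for the U-net; after pre-training on phantom projections, the encoder
# is frozen and only the decoder is fine-tuned on the much smaller
# anthropomorphic set. Shapes and data loaders are placeholders.
import torch
import torch.nn as nn

class TinyScatterNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())  # SR map in [0, 1]

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, pairs, epochs, lr=1e-3):
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for projection, sr_map in pairs:
            opt.zero_grad()
            loss = loss_fn(model(projection), sr_map)
            loss.backward()
            opt.step()

model = TinyScatterNet()
phantom = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))]  # toy batches
anthro = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))]
train(model, phantom, epochs=5)            # stage 1: pre-training
for p in model.encoder.parameters():       # stage 2: freeze encoder,
    p.requires_grad = False                # fine-tune decoder only
train(model, anthro, epochs=5)
```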

RESULTS: For the pre-training stage, SR predictions were quite accurate (SSIM ≥ 0.9). The accuracy of SR predictions was further improved after TL, with a relatively short retraining time (≈70 times faster than pre-training) and considerably fewer samples than the pre-training dataset (≈12 times smaller).

CONCLUSIONS: A fast and low-cost methodology to generate task-specific CNNs for scatter correction in CBCT was developed. CNN models trained with the proposed methodology successfully corrected x-ray scatter in anthropomorphic structures unknown to the network, for simulated data.

PMID:38525293 | PMC:PMC10960584 | DOI:10.1117/1.JMI.11.2.024006

Categories: Literature Watch

CMNet: deep learning model for colon polyp segmentation based on dual-branch structure

Mon, 2024-03-25 06:00

J Med Imaging (Bellingham). 2024 Mar;11(2):024004. doi: 10.1117/1.JMI.11.2.024004. Epub 2024 Mar 23.

ABSTRACT

PURPOSE: Colon cancer is one of the top three gastrointestinal cancers, and colon polyps are an important trigger of colon cancer. Early diagnosis and removal of colon polyps can prevent the development of colon cancer. Currently, colon polyp removal surgery is mainly based on artificial intelligence (AI) colonoscopy, supplemented by deep-learning technology to help doctors remove colon polyps. With the development of deep learning, the use of advanced AI technology to assist in medical diagnosis has become mainstream; it can make the best use of doctors' diagnostic time and help them better formulate medical plans.

APPROACH: We propose a deep-learning model for segmenting colon polyps. The model adopts a dual-branch structure, combines a convolutional neural network (CNN) with a transformer, and replaces ordinary convolution with depthwise separable convolution based on ResNet; a strip pooling module is introduced to obtain more effective information. An aggregated attention module (AAM) is proposed for high-dimensional semantic information, effectively combining the two different structures for the high-dimensional information fusion problem. Deep supervision and multi-scale training are added during model training to enhance the learning effect and generalization performance of the model.
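For reference, a depthwise separable convolution block of the kind substituted for ordinary convolution looks like this in PyTorch; this is a generic building block, not the authors' exact module.

```python
# A depthwise separable convolution: a per-channel (depthwise) 3x3
# convolution followed by a 1x1 pointwise convolution that mixes channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv(64, 128)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 32, 32])
```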

RESULTS: The experimental results show that the proposed dual-branch structure is significantly better than the single-branch structure, and the model using the AAM has a significant performance improvement over the model not using the AAM. Our model leads 1.1% and 1.5% in mIoU and mDice, respectively, when compared with state-of-the-art models in a fivefold cross-validation on the Kvasir-SEG dataset.

CONCLUSIONS: We propose and validate a deep learning model for segmenting colon polyps using a dual-branch network structure. Our results demonstrate the feasibility of traditional CNNs and transformers complementing each other. We also verified the feasibility of fusing different structures on high-dimensional semantics, successfully and effectively retaining the high-dimensional information of both structures.

PMID:38525292 | PMC:PMC10960180 | DOI:10.1117/1.JMI.11.2.024004

Categories: Literature Watch

Advanced detection of coronary artery disease via deep learning analysis of plasma cytokine data

Mon, 2024-03-25 06:00

Front Cardiovasc Med. 2024 Mar 8;11:1365481. doi: 10.3389/fcvm.2024.1365481. eCollection 2024.

ABSTRACT

The 2017 World Health Organization Fact Sheet highlights that coronary artery disease (CAD) is the leading cause of death globally, responsible for approximately 30% of all deaths. In this context, machine learning (ML) technology is crucial in identifying coronary artery disease, thereby saving lives. ML algorithms can potentially analyze complex patterns and correlations within medical data, enabling early detection and accurate diagnosis of CAD. By leveraging ML technology, healthcare professionals can make informed decisions and implement timely interventions, ultimately leading to improved outcomes and potentially reducing the mortality rate associated with coronary artery disease. Machine learning algorithms enable non-invasive, quick, accurate, and economical diagnoses. As a result, machine learning algorithms can be employed to supplement existing approaches or as a forerunner to them. This study shows how a CNN classifier and an LSTM-based RNN classifier can be used to attain targeted "risk" CAD categorization utilizing an evolving set of 450 cytokine biomarkers that could serve as solid predictive variables for treatment. The two classifiers are based on these "45" different cytokine prediction characteristics. The best area under the receiver operating characteristic curve (AUROC) score achieved was 0.98 at a 95% confidence interval (CI); the RNN-LSTM classifier using the "450" cytokine biomarkers had an AUROC score of 0.99 (95% CI), while the CNN model containing cytokines received the second-best AUROC score (0.92). The RNN-LSTM classifier considerably beats the CNN classifier regarding AUROC scores, as evidenced by a p-value smaller than 7.48 obtained via an independent t-test. As large-scale initiatives to achieve early, rapid, reliable, inexpensive, and accessible individual identification of CAD risk gain traction, robust machine learning algorithms can now augment older methods such as angiography. Incorporating 65 new sensitive cytokine biomarkers could increase early detection even more. Investigating the novel involvement of cytokines in CAD could lead to better risk detection, disease mechanism discovery, and new therapy options.

PMID:38525188 | PMC:PMC10957635 | DOI:10.3389/fcvm.2024.1365481

Categories: Literature Watch

The diagnostic performance of impacted third molars in the mandible: A review of deep learning on panoramic radiographs

Mon, 2024-03-25 06:00

Saudi Dent J. 2024 Mar;36(3):404-412. doi: 10.1016/j.sdentj.2023.11.025. Epub 2023 Nov 25.

ABSTRACT

BACKGROUND: The mandibular third molar is prone to impaction, resulting in its inability to erupt into the oral cavity. Radiographic examination is required to support the odontectomy of impacted teeth. With the advancement of artificial intelligence (AI) technology, the use of computer-aided diagnosis based on deep learning is emerging in the fields of medicine and dentistry. This review describes the performance and prospects of deep learning for the detection, classification, and evaluation of third molar-mandibular canal relationships on panoramic radiographs.

METHODS: This work was conducted using three databases: PubMed, Google Scholar, and Science Direct. Following the literature selection, 49 articles were reviewed, with the 12 main articles discussed in this review.

RESULTS: Several models of deep learning are currently used for segmentation and classification of third molar impaction with or without the combination of other techniques. Deep learning has demonstrated significant diagnostic performance in identifying mandibular impacted third molars (ITM) on panoramic radiographs, with an accuracy range of 78.91% to 90.23%. Meanwhile, the accuracy of deep learning in determining the relationship between ITM and the mandibular canal (MC) ranges from 72.32% to 99%.

CONCLUSION: Deep learning-based AI with high performance for the detection, classification, and evaluation of the relationship of ITM to the MC using panoramic radiographs has been developed over the past decade. However, deep learning must be improved using large datasets, and the evaluation of diagnostic performance for deep learning models should be aligned with medical diagnostic test protocols. Future studies involving collaboration among oral radiologists, clinicians, and computer scientists are required to identify appropriate AI development models that are accurate, efficient, and applicable to clinical services.

PMID:38525176 | PMC:PMC10960107 | DOI:10.1016/j.sdentj.2023.11.025

Categories: Literature Watch

Sugarcane breeding: a fantastic past and promising future driven by technology and methods

Mon, 2024-03-25 06:00

Front Plant Sci. 2024 Mar 8;15:1375934. doi: 10.3389/fpls.2024.1375934. eCollection 2024.

ABSTRACT

Sugarcane is the most important sugar and energy crop in the world. In sugarcane breeding, technology is the requirement and methods are the means. Seed is the cornerstone of the development of the sugarcane industry. Over the past century, with the advancement of technology and the expansion of methods, sugarcane breeding has continued to improve, and sugarcane production has grown by leaps and bounds, providing a large amount of essential sugar and clean energy for long-term human development, especially in the face of future threats from population growth, the reduction of available arable land, and various biotic and abiotic stresses. Moreover, given the narrow genetic foundation, serious varietal degradation, lack of breakthrough varieties, long breeding cycles, and low probability of gene pyramiding, it is particularly important to realize the leapfrog development of sugarcane breeding by seizing the opportunity of the emerging Breeding 4.0 and making full use of modern biotechnology, including but not limited to whole-genome selection, transgenics, gene editing, and synthetic biology, combined with information technologies such as remote sensing and deep learning. In view of this, we focus on sugarcane breeding from the perspective of technology and methods, reviewing its main history, describing the current status and challenges, and providing a reasonable outlook on the prospects of smart breeding.

PMID:38525140 | PMC:PMC10957636 | DOI:10.3389/fpls.2024.1375934

Categories: Literature Watch

Time-Series Field Phenotyping of Soybean Growth Analysis by Combining Multimodal Deep Learning and Dynamic Modeling

Mon, 2024-03-25 06:00

Plant Phenomics. 2024 Mar 20;6:0158. doi: 10.34133/plantphenomics.0158. eCollection 2024.

ABSTRACT

The rate of soybean canopy establishment largely determines photoperiodic sensitivity and subsequently influences yield potential. However, assessing the rate of soybean canopy development in large-scale field breeding trials is both laborious and time-consuming. High-throughput phenotyping methods based on unmanned aerial vehicle (UAV) systems can be used to monitor and quantitatively describe the development of soybean canopies for different genotypes. In this study, high-resolution, time-series raw data from field soybean populations were collected using UAVs. RGB (red, green, and blue) and infrared images were used as inputs to construct the multimodal image segmentation model, the RGB & Infrared Feature Fusion Segmentation Network (RIFSeg-Net). Subsequently, the Segment Anything Model was employed to extract complete individual leaves from the segmentation results obtained from RIFSeg-Net. The aspect ratios of these leaves facilitated the accurate categorization of soybean populations into two distinct varieties: an oval leaf type and a lanceolate leaf type. Finally, dynamic modeling was conducted to identify five phenotypic traits associated with the canopy development rate that differed substantially among the classified soybean varieties. The results showed that the developed multimodal image segmentation model RIFSeg-Net for extracting soybean canopy cover from UAV images outperformed traditional deep learning image segmentation networks (precision = 0.94, recall = 0.93, F1-score = 0.93). The proposed method has high practical value in the field of germplasm resource identification and could serve as a practical tool for further genotypic differentiation analysis and the selection of target genes.
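One plausible reading of the leaf aspect-ratio step is sketched below: the ratio of the principal-axis spreads of a binary leaf mask, thresholded into the two leaf types. The PCA-based estimator and the 2.5 cutoff are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of the leaf-shape step: given a binary mask of one extracted
# leaf, estimate its aspect ratio from the principal axes of the pixel
# coordinates and threshold it into the two leaf types.
import numpy as np

def leaf_aspect_ratio(mask: np.ndarray) -> float:
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)
    # Eigenvalues of the covariance give variance along major/minor axes.
    evals = np.linalg.eigvalsh(np.cov(coords.T))
    return float(np.sqrt(evals[-1] / max(evals[0], 1e-9)))

def leaf_type(mask: np.ndarray, cutoff: float = 2.5) -> str:
    # The 2.5 cutoff is an illustrative assumption, not the paper's value.
    return "lanceolate" if leaf_aspect_ratio(mask) > cutoff else "oval"

mask = np.zeros((100, 100), dtype=np.uint8)
mask[45:55, 20:80] = 1                      # elongated toy "leaf"
print(leaf_aspect_ratio(mask), leaf_type(mask))
```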

PMID:38524738 | PMC:PMC10959008 | DOI:10.34133/plantphenomics.0158

Categories: Literature Watch

Prediction of immunotherapy response in idiopathic membranous nephropathy using deep learning-pathological and clinical factors

Mon, 2024-03-25 06:00

Front Endocrinol (Lausanne). 2024 Mar 8;15:1328579. doi: 10.3389/fendo.2024.1328579. eCollection 2024.

ABSTRACT

BACKGROUND: Owing to individual heterogeneity, patients with idiopathic membranous nephropathy (IMN) exhibit varying sensitivities to immunotherapy. This study aimed to establish and validate a model incorporating pathological and clinical features using deep learning training to evaluate the response of patients with IMN to immunosuppressive therapy.

METHODS: A total of 291 patients were randomly categorized into training (n = 219) and validation (n = 72) cohorts. Patch-level convolutional neural network training in a weakly supervised manner was utilized to analyze whole-slide histopathological features. We developed a machine-learning model to assess the predictive value of pathological signatures compared with clinical factors. The performance of the models was evaluated using the area under the receiver operating characteristic curve (AUC) on the training and validation sets, and the prediction accuracies of the models for immunotherapy response were compared.

RESULTS: Multivariate analysis indicated that diabetes and smoking were independent risk factors affecting the response to immunotherapy in IMN patients. The model integrating pathological features had favorable predictive value for determining the response to immunotherapy in IMN patients, with AUCs of 0.85 and 0.77 in the training and test cohorts, respectively. However, incorporating clinical features into the model diminished its predictive efficacy, as evidenced by lower AUC values of 0.75 and 0.62 in the training and testing cohorts, respectively.

CONCLUSIONS: The model incorporating pathological signatures demonstrated a superior predictive ability for determining the response to immunosuppressive therapy in IMN patients compared to the integration of clinical factors.

PMID:38524629 | PMC:PMC10958378 | DOI:10.3389/fendo.2024.1328579

Categories: Literature Watch

Trans-Atlantic Differences in Approach to Sudden Death Prevention in Hypertrophic Cardiomyopathy

Sun, 2024-03-24 06:00

Can J Cardiol. 2024 Mar 22:S0828-282X(24)00272-1. doi: 10.1016/j.cjca.2024.03.011. Online ahead of print.

ABSTRACT

The American approach to predicting sudden cardiac death (SCD) in patients with hypertrophic cardiomyopathy (HCM) diverges from the European method in that it relies on major risk factors independently justifying the implantation of an implantable cardioverter-defibrillator (ICD) for primary prevention, whereas the European approach utilizes a mathematical equation to estimate a 5-year risk percentage. The aim of this review is to outline the differences between the American and European guidelines and to demonstrate how they have arisen. Furthermore, it provides insight into the future of SCD risk prediction in HCM. The American SCD risk prediction method demonstrates high sensitivity but limited specificity, whereas the European method exhibits the opposite. These differences in sensitivity and specificity likely contribute to the fact that primary prevention ICD utilization is two-fold higher in the U.S. It is highly likely that new insights and new imaging modalities will enhance prediction models in the near future. Genotyping could potentially assume a significant role. Left ventricular global longitudinal strain was recently found to be an independent predictor of SCD. Furthermore, beyond late gadolinium enhancement, additional cardiac magnetic resonance techniques such as T1 mapping and diffusion tensor imaging are demonstrating encouraging outcomes in predicting SCD. Ultimately, it is conceivable that integrating diverse morphological and genetic characteristics through deep learning will yield novel insights and enhance SCD prediction methods.

PMID:38522619 | DOI:10.1016/j.cjca.2024.03.011

Categories: Literature Watch

Quality assurance of late gadolinium enhancement cardiac MRI images: a deep learning classifier for confidence in the presence or absence of abnormality with potential to prompt real-time image optimisation

Sun, 2024-03-24 06:00

J Cardiovasc Magn Reson. 2024 Mar 22:101040. doi: 10.1016/j.jocmr.2024.101040. Online ahead of print.

ABSTRACT

BACKGROUND: Late gadolinium enhancement (LGE) of the myocardium has significant diagnostic and prognostic implications, with even small areas of enhancement being important. Distinguishing between definitely normal and definitely abnormal LGE images is usually straightforward, but diagnostic uncertainty arises when reporters are not sure whether the observed LGE is genuine or not. This uncertainty might be resolved by repetition (to remove artefact) or further acquisition of intersecting images, but this must take place before the scan finishes. Real-time quality assurance by humans is a complex task requiring training and experience, so being able to identify which images have an intermediate likelihood of LGE while the scan is ongoing, without the presence of an expert, is of high value. This decision support could prompt immediate image optimisation or acquisition of supplementary images to confirm or refute the presence of genuine LGE. This could reduce ambiguity in reports.

METHODS: Short-axis, phase-sensitive inversion recovery (PSIR) late gadolinium images were extracted from our clinical CMR database and shuffled. Two independent, blinded experts scored each individual slice for 'LGE likelihood' on a visual analogue scale, from 0 (absolute certainty of no LGE) to 100 (absolute certainty of LGE), with 50 representing clinical equipoise. The scored images were split into two classes: either "high certainty" of whether LGE was present or not, or "low certainty". The dataset was split into training, validation, and test sets (70:15:15). A deep learning binary classifier based on the EfficientNetV2 convolutional neural network architecture was trained to distinguish between these categories. Classifier performance on the test set was evaluated by calculating the accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (ROC AUC). Performance was also evaluated on an external test set of images from a different centre.
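A plausible reconstruction of the binary classifier setup follows, assuming a torchvision EfficientNetV2-S backbone with its head replaced for two classes; the training details are placeholders, not the authors' code.

```python
# Sketch of the binary "high vs. low certainty" classifier: a torchvision
# EfficientNetV2-S backbone with its classification head replaced. The
# batch of random tensors stands in for PSIR LGE slices.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

model = efficientnet_v2_s(weights=None)          # or ImageNet-pretrained weights
in_features = model.classifier[1].in_features    # 1280 for the -S variant
model.classifier[1] = nn.Linear(in_features, 2)  # {low certainty, high certainty}

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)             # toy stand-in batch
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)          # one illustrative train step
loss.backward()
optimizer.step()
print(float(loss))
```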

RESULTS: In total, 1645 images (from 272 patients) were labelled and split at the patient level into training (1151 images), validation (247 images), and test (247 images) sets for the deep learning binary classifier. Of these, 1208 images were 'high certainty' (255 for LGE, 953 for no LGE) and 437 were 'low certainty'. An external test set comprising 247 images from 41 patients from another centre was also employed. After 100 epochs, the performance on the internal test set was: accuracy = 94%, recall = 0.80, precision = 0.97, F1-score = 0.87, and ROC AUC = 0.94. The classifier also performed robustly on the external test set (accuracy = 91%, recall = 0.73, precision = 0.93, F1-score = 0.82, and ROC AUC = 0.91). These results were benchmarked against a reference inter-expert accuracy of 86%.

CONCLUSIONS: Deep learning shows potential to automate quality control of late gadolinium imaging in CMR. The ability to identify short-axis images with intermediate LGE likelihood in real-time may serve as a useful decision support tool. This approach has the potential to guide immediate further imaging while the patient is still in the scanner, thereby reducing the frequency of recalls and inconclusive reports due to diagnostic indecision.

PMID:38522522 | DOI:10.1016/j.jocmr.2024.101040

Categories: Literature Watch

A novel validated real-world dataset for the diagnosis of multi-class serous effusion cytology according to TIS and ground-truth validation data

Sun, 2024-03-24 06:00

Acta Cytol. 2024 Mar 24. doi: 10.1159/000538465. Online ahead of print.

ABSTRACT

INTRODUCTION: The application of AI algorithms in serous fluid cytology is lacking due to the deficiency of standardized, publicly available datasets. Here, we develop a novel public serous effusion cytology dataset. Furthermore, we apply AI algorithms to it to test its diagnostic utility and safety in clinical practice.

METHODS: The work is divided into three phases. Phase 1 entails building the dataset based on the multi-tiered, evidence-based classification scheme proposed by the international system (TIS) for serous fluid cytology, along with ground-truth tissue diagnosis for malignancy. To ensure reliable results of future AI research on this dataset, we carefully consider all the steps of preparation and staining from a real-world cytopathology perspective. In Phase 2, we pay special attention to the image acquisition pipeline to ensure image integrity, and we then utilize the power of transfer learning, using the convolutional layers of the VGG16 deep learning model for feature extraction. Finally, in Phase 3, we apply a random forest classifier to the constructed dataset.
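A hedged sketch of the Phase 2/3 pipeline: frozen VGG16 convolutional layers as a feature extractor feeding a random forest. A torchvision VGG16 is used here for illustration (the abstract does not specify the framework), and toy tensors stand in for cytology images.

```python
# Sketch of the transfer-learning pipeline: VGG16 convolutional layers as a
# frozen feature extractor, followed by a random forest on pooled features.
import numpy as np
import torch
from torchvision.models import vgg16
from sklearn.ensemble import RandomForestClassifier

backbone = vgg16(weights=None).features.eval()   # convolutional layers only
pool = torch.nn.AdaptiveAvgPool2d(1)             # collapse spatial dims -> 512-d

@torch.no_grad()
def extract(images: torch.Tensor) -> np.ndarray:
    return pool(backbone(images)).flatten(1).numpy()

images = torch.randn(40, 3, 224, 224)            # toy stand-in for cytology tiles
labels = np.random.randint(0, 4, 40)             # four TIS diagnostic categories

features = extract(images)                       # (40, 512) feature matrix
clf = RandomForestClassifier(n_estimators=200).fit(features, labels)
print(clf.predict(features[:5]))
```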

RESULTS: The dataset comprises 3731 images distributed among the four TIS diagnostic categories. The model achieves 74% accuracy on this multiclass classification problem. Using a one-versus-all classifier, the fall-out rate for images that are misclassified as negative for malignancy despite being a higher-risk diagnosis is 0.13. Most of these misclassified images (77%) belong to the atypia-of-undetermined-significance category, in concordance with real-life statistics.

CONCLUSION: This is the first and largest publicly available serous fluid cytology dataset based on a standardized diagnostic system. It is also the first dataset to include various types of effusions, the first to include pericardial fluid specimens, and the first to include the diagnostically challenging atypical categories. AI algorithms applied to this novel dataset show reliable results that can be incorporated into actual clinical practice with minimal risk of missing a diagnosis of malignancy. This work provides a foundation for researchers to develop and test further AI algorithms for the diagnosis of serous effusions.

PMID:38522415 | DOI:10.1159/000538465

Categories: Literature Watch

RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate

Sun, 2024-03-24 06:00

Comput Biol Med. 2024 Mar 19;173:108318. doi: 10.1016/j.compbiomed.2024.108318. Online ahead of print.

ABSTRACT

Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.
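The Dice coefficient used to compare cancer labels on MRI with the deformed histopathology is straightforward to compute for binary masks; the toy masks below are illustrative.

```python
# Dice coefficient between two binary masks, as used to score the overlap
# of cancer regions on MRI and the deformed histopathology.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-9) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

mri_label = np.zeros((64, 64), bool); mri_label[20:40, 20:40] = True
hist_label = np.zeros((64, 64), bool); hist_label[25:45, 22:42] = True
print(round(dice(mri_label, hist_label), 3))
```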

PMID:38522253 | DOI:10.1016/j.compbiomed.2024.108318

Categories: Literature Watch

Linear semantic transformation for semi-supervised medical image segmentation

Sun, 2024-03-24 06:00

Comput Biol Med. 2024 Mar 21;173:108331. doi: 10.1016/j.compbiomed.2024.108331. Online ahead of print.

ABSTRACT

Medical image segmentation is a research focus and a foundation for developing intelligent medical systems. Recently, deep learning for medical image segmentation has become a standard process and has succeeded significantly, promoting developments in disease diagnosis, reconstruction, and surgical planning. However, semantic learning is often inefficient owing to the lack of supervision of feature maps, with the result that high-quality segmentation models always rely on numerous, accurate data annotations. Learning robust semantic representations in latent spaces remains a challenge. In this paper, we propose a novel semi-supervised learning framework to learn vital attributes in medical images, constructing a generalized representation from diverse semantics to realize medical image segmentation. We first build a self-supervised learning part that achieves context recovery by reconstructing the space and intensity of medical images, which provides semantic representations for the feature maps. Subsequently, we combine semantic-rich feature maps and utilize a simple linear semantic transformation to convert them into image segmentations. The proposed framework was tested on five medical segmentation datasets. Quantitative assessments indicate that our method achieves the highest scores on the IXI (73.78%), ScaF (47.50%), COVID-19-Seg (50.72%), PC-Seg (65.06%), and Brain-MR (72.63%) datasets. Finally, we compared our method with the latest semi-supervised learning methods and obtained DSC values of 77.15% and 75.22%, ranking first on two representative datasets. The experimental results not only prove that the proposed linear semantic transformation is effectively applicable to medical image segmentation, but also demonstrate its simplicity and ease of use in pursuing robust segmentation with semi-supervised learning. Our code is now open at: https://github.com/QingYunA/Linear-Semantic-Transformation-for-Semi-Supervised-Medical-Image-Segmentation.
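One natural reading of a "linear semantic transformation" is a per-pixel linear map, i.e. a 1 × 1 convolution from semantic-rich feature maps to segmentation logits; the sketch below illustrates that interpretation, not the authors' exact layer.

```python
# Illustrative reading of a "linear semantic transformation": a 1x1
# convolution that linearly maps semantic-rich feature maps to per-pixel
# segmentation logits.
import torch
import torch.nn as nn

features = torch.randn(2, 64, 128, 128)        # (batch, channels, H, W) features
linear_head = nn.Conv2d(64, 2, kernel_size=1)  # linear per-pixel map to 2 classes
logits = linear_head(features)
masks = logits.argmax(dim=1)                   # per-pixel class predictions
print(logits.shape, masks.shape)
```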

PMID:38522252 | DOI:10.1016/j.compbiomed.2024.108331

Categories: Literature Watch

PHE-SICH-CT-IDS: A benchmark CT image dataset for evaluating semantic segmentation, object detection, and radiomic feature extraction of perihematomal edema in spontaneous intracerebral hemorrhage

Sun, 2024-03-24 06:00

Comput Biol Med. 2024 Mar 20;173:108342. doi: 10.1016/j.compbiomed.2024.108342. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH) typically presents acutely, and prompt, expedited radiological examination is crucial for diagnosis, localization, and quantification of the hemorrhage. Early detection and accurate segmentation of perihematomal edema (PHE) play a critical role in guiding appropriate clinical intervention and enhancing patient prognosis. However, the progress and assessment of computer-aided diagnostic methods for PHE segmentation and detection face challenges due to the scarcity of publicly accessible brain CT image datasets.

METHODS: This study establishes a publicly available CT dataset named PHE-SICH-CT-IDS for perihematomal edema in spontaneous intracerebral hemorrhage. The dataset comprises 120 brain CT scans and 7,022 CT images, along with corresponding medical information of the patients. To demonstrate its effectiveness, classical algorithms for semantic segmentation, object detection, and radiomic feature extraction are evaluated. The experimental results confirm the suitability of PHE-SICH-CT-IDS for assessing the performance of segmentation, detection and radiomic feature extraction methods.

RESULTS: This study conducts numerous experiments using classical machine learning and deep learning methods, demonstrating the differences in various segmentation and detection methods on the PHE-SICH-CT-IDS. The highest precision achieved in semantic segmentation is 76.31%, while object detection attains a maximum precision of 97.62%. The experimental results on radiomic feature extraction and analysis prove the suitability of PHE-SICH-CT-IDS for evaluating image features and highlight the predictive value of these features for the prognosis of SICH patients.

CONCLUSION: To the best of our knowledge, this is the first publicly available dataset for PHE in SICH, comprising various data formats suitable for applications across diverse medical scenarios. We believe that PHE-SICH-CT-IDS will attract researchers to explore novel algorithms, providing valuable support for clinicians and patients in the clinical setting. PHE-SICH-CT-IDS is freely published for non-commercial purposes at https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937.

PMID:38522249 | DOI:10.1016/j.compbiomed.2024.108342

Categories: Literature Watch
