Deep learning

A deep learning-based image analysis for assessing the extent of abduction in abducens nerve palsy patients before and after strabismus surgery

Fri, 2024-11-01 06:00

Adv Ophthalmol Pract Res. 2024 Jun 25;4(4):202-208. doi: 10.1016/j.aopr.2024.06.004. eCollection 2024 Nov-Dec.

ABSTRACT

PURPOSE: This study aimed to propose a novel deep learning-based approach to assess the extent of abduction in patients with abducens nerve palsy before and after strabismus surgery.

METHODS: This study included 13 patients who were diagnosed with abducens nerve palsy and underwent strabismus surgery in a tertiary hospital. Photographs of the primary, dextroversion, and levoversion positions were collected before and after strabismus surgery. The eye location and eye segmentation networks were trained via recurrent residual convolutional neural networks with attention gate connections based on U-Net (R2AU-Net). Facial images of abducens nerve palsy patients were used as the test set, and parameters were measured automatically from the masked images. Absolute abduction was also measured manually, and relative abduction was calculated. Agreement between manual and automatic measurements, as well as between repeated automatic measurements, was analyzed. Preoperative and postoperative results were compared.

RESULTS: The intraclass correlation coefficients (ICCs) between manual and automatic measurements of absolute abduction ranged from 0.985 to 0.992 (P<0.001), and the bias ranged from -0.25 mm to -0.05 mm. The ICCs between two repeated automatic measurements ranged from 0.994 to 0.997 (P<0.001), and the bias ranged from -0.11 mm to 0.05 mm. After strabismus surgery, absolute abduction of the affected eye increased from 2.18 ± 1.40 mm to 3.36 ± 1.93 mm (P<0.05). Relative abduction improved in 76.9% of patients (10/13) after surgery (P<0.01).
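The bias figures above are Bland-Altman agreement statistics. As an illustrative sketch (not the authors' code, and using hypothetical measurements in mm), the mean bias and 95% limits of agreement between automatic and manual abduction measurements can be computed like this:

```python
from statistics import mean, stdev

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical abduction measurements (mm) for five eyes
manual = [2.1, 3.4, 1.8, 4.0, 2.7]
auto = [2.0, 3.3, 1.7, 3.9, 2.5]
bias, loa = bland_altman(auto, manual)  # bias is about -0.12 mm here
```

A negative bias, as reported in the study, means the automatic method reads slightly lower than the manual one on average.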

CONCLUSIONS: This image analysis technique demonstrated excellent accuracy and repeatability for automatic measurement of ocular abduction and holds promise for objectively assessing surgical outcomes in patients with abducens nerve palsy.

PMID:39484054 | PMC:PMC11526073 | DOI:10.1016/j.aopr.2024.06.004

Categories: Literature Watch

Early Detection of Breast Cancer in MRI Using AI

Thu, 2024-10-31 06:00

Acad Radiol. 2024 Oct 30:S1076-6332(24)00774-8. doi: 10.1016/j.acra.2024.10.014. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: To develop and evaluate an AI algorithm that detects breast cancer in MRI scans up to one year before radiologists typically identify it, potentially enhancing early detection in high-risk women.

MATERIALS AND METHODS: A convolutional neural network (CNN) AI model, pre-trained on breast MRI data, was fine-tuned using a retrospective dataset of 3029 MRI scans from 910 patients. These contained 115 cancers that were diagnosed within one year of a negative MRI. The model aimed to identify these cancers, with the goal of predicting cancer development up to one year in advance. The network was fine-tuned and tested with 10-fold cross-validation. Mean age of patients was 52 years (range, 18-88 years), with average follow-up of 4.3 years (range 1-12 years).

RESULTS: The AI detected cancers one year earlier with an area under the ROC curve of 0.72 (0.67-0.76). Retrospective analysis by a radiologist of the top 10% highest-risk MRIs, as ranked by the AI, could have increased early detection by up to 30% (35/115; CI: 22.2-39.7%; 30% sensitivity). A radiologist identified a visual correlate to biopsy-proven cancers in 83 of the prior-year MRIs (83/115; CI: 62.1-79.4%). The AI algorithm identified the anatomic region where cancer would be detected in 66 cases (66/115; CI: 47.8-66.5%), with both agreeing in 54 cases (54/115; CI: 37.5-56.4%).
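The sensitivity figure can be sanity-checked directly: 35 of 115 prior-year cancers is 30.4%. The abstract does not state which interval method was used; a Wilson score interval, a common choice for binomial proportions, gives bounds close to the reported 22.2-39.7%. A minimal sketch:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    centre = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - margin) / denom, (centre + margin) / denom

lo, hi = wilson_ci(35, 115)  # roughly (0.228, 0.394)
```

The small remaining gap to the published bounds suggests the authors may have used an exact (Clopper-Pearson) interval instead.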

CONCLUSION: This novel AI-aided re-evaluation of "benign" breasts shows promise for improving early breast cancer detection with MRI. As datasets grow and image quality improves, this approach is expected to become even more impactful.

PMID:39482209 | DOI:10.1016/j.acra.2024.10.014

Categories: Literature Watch

Novel multimodal sensing and machine learning strategies to classify cognitive workload in laparoscopic surgery

Thu, 2024-10-31 06:00

Eur J Surg Oncol. 2024 Oct 15:108735. doi: 10.1016/j.ejso.2024.108735. Online ahead of print.

ABSTRACT

BACKGROUND: Surgeons can experience elevated cognitive workload (CWL) during surgery due to various factors, including operative technicalities and the environmental demands of the operating theatre. This can result in poorer outcomes and have a detrimental effect on surgeon well-being. Objective measurement of CWL offers a potential route to classifying workload levels; however, results are variable when physiological measures are used in isolation. The aim of this study was to develop a multimodal machine learning (ML) approach that classifies CWL levels using a bespoke sensor platform, and an ML approach that imputes pupil diameter measures lost to blinking or noise.

MATERIALS AND METHODS: Ten surgical trainees performed a simulated laparoscopic cholecystectomy under cognitive conditions of increasing difficulty, namely a modified auditory N-back task and a verbal clinical scenario. Physiological measures were recorded using a novel platform (MAESTRO). Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were used as direct measures of CWL. Indirect measures included electromyography (EMG), electrocardiography (ECG), and pupil diameter (PD). A reference point for validation was provided by subjective assessment of perceived CWL using the SURG-TLX. A multimodal machine learning approach that systematically combines a CNN-BiLSTM, a binary version of the metaheuristic Manta Ray Foraging Optimisation (BMRFO), and a version of Fuzzy C-Means (FCM) called the Optimal Completion Strategy (OCS) was used to classify the associated perceived CWL state.

RESULTS: Compared with other state-of-the-art classification techniques, cross-validation results for the classification of CWL levels suggest that the CNN-BiLSTM and BMRFO approach provides an average accuracy of 97% based on the confusion matrix. Additionally, OCS demonstrated a superior average performance of 9.15% in terms of Root-Mean-Square Error (RMSE) when compared with other PD imputation methods.

CONCLUSION: Perceived CWL levels were correctly classified using a multimodal ML approach. This approach provides a potential route to accurately classify CWL levels, which may have application in future surgical training and assessment programs as well as the development of cognitive support systems in the operating room.

PMID:39482204 | DOI:10.1016/j.ejso.2024.108735

Categories: Literature Watch

Transcriptional regulation of hypoxic cancer cell metabolism and artificial intelligence

Thu, 2024-10-31 06:00

Trends Cancer. 2024 Oct 30:S2405-8033(24)00222-X. doi: 10.1016/j.trecan.2024.10.003. Online ahead of print.

ABSTRACT

Gene expression regulation in hypoxic tumor microenvironments is mediated by O2-responsive transcription factors (O2R-TFs), fine-tuning cancer cell metabolic demand for O2 according to its availability. Here, we discuss key O2R-TFs and emerging artificial intelligence (AI)-based applications suitable for interrogating the O2R-TF relationships that specify cancer cell metabolic adaptations to hypoxia.

PMID:39482194 | DOI:10.1016/j.trecan.2024.10.003

Categories: Literature Watch

Interpreting hourly mass concentrations of PM(2.5) chemical components with an optimal deep-learning model

Thu, 2024-10-31 06:00

J Environ Sci (China). 2025 May;151:125-139. doi: 10.1016/j.jes.2024.03.037. Epub 2024 Mar 29.

ABSTRACT

PM2.5 constitutes a complex and diverse mixture that significantly impacts the environment, human health, and climate change. However, existing observation and numerical simulation techniques have limitations, such as a lack of data, high acquisition costs, and multiple uncertainties. These limitations hinder the acquisition of comprehensive information on PM2.5 chemical composition and the effective implementation of refined air pollution prevention and control strategies. In this study, we developed an optimal deep learning model to acquire hourly mass concentrations of key PM2.5 chemical components without complex chemical analysis. The model was trained using a randomly partitioned multivariate dataset arranged in chronological order, including atmospheric state indicators that previous studies did not consider. Our results showed that the correlation coefficients of key chemical components were no less than 0.96, and the root mean square errors ranged from 0.20 to 2.11 µg/m3 for the entire process (training and testing combined). The model accurately captured the temporal characteristics of key chemical components, outperforming typical machine-learning models, previous studies, and global reanalysis datasets such as the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) and the Copernicus Atmosphere Monitoring Service ReAnalysis (CAMSRA). We also quantified feature importance using a random forest model, which showed that PM2.5, PM1, visibility, and temperature were the most influential variables for key chemical components. In conclusion, this study presents a practical approach to accurately obtaining chemical composition information that can contribute to filling data gaps, improving air pollution monitoring, and identifying pollution sources. This approach has the potential to enhance air pollution control strategies and promote public health and environmental sustainability.
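The two headline metrics here, the correlation coefficient and the root mean square error, are straightforward to reproduce for any predicted-versus-observed component series. A minimal sketch with hypothetical hourly concentrations (µg/m3), not the study's data:

```python
from math import sqrt

def corr_and_rmse(pred, obs):
    """Pearson correlation coefficient and RMSE of predictions vs observations."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = sqrt(sum((p - mp) ** 2 for p in pred))
    so = sqrt(sum((o - mo) ** 2 for o in obs))
    rmse = sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    return cov / (sp * so), rmse

obs = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical observed sulfate series
pred = [1.1, 1.9, 3.2, 3.8, 5.1]   # hypothetical model output
r, rmse = corr_and_rmse(pred, obs)
```

By the study's criteria, a component prediction would need r of at least 0.96 and an RMSE within the reported 0.20-2.11 µg/m3 band.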

PMID:39481927 | DOI:10.1016/j.jes.2024.03.037

Categories: Literature Watch

Application of Artificial Intelligence in Prediction of Ki-67 Index in Meningiomas: A Systematic Review and Meta-Analysis

Thu, 2024-10-31 06:00

World Neurosurg. 2024 Oct 29:S1878-8750(24)01793-5. doi: 10.1016/j.wneu.2024.10.089. Online ahead of print.

ABSTRACT

BACKGROUND: The Ki-67 index is a histopathological marker that has been reported to be a crucial factor in the biological behavior and prognosis of meningiomas. Several studies have developed artificial intelligence (AI) models to predict the Ki-67 based on radiomics. In this study, we aimed to perform a systematic review and meta-analysis of AI models that predicted the Ki-67 index in meningioma.

METHODS: Literature records were retrieved on April 27th, 2024, using the relevant key terms without filters in PubMed, Embase, Scopus, and Web of Science. Records were screened according to the eligibility criteria, and the data from included studies were extracted. The quality assessment was performed using the QUADAS-2 tool. The meta-analysis, sensitivity analysis, and meta-regression were conducted using R software.

RESULTS: Our review included six studies. The mean Ki-67 ranged from 2.7 ± 2.97 to 4.8 ± 40.3. Of the six studies, five utilized an ML method. The most commonly used AI method was the least absolute shrinkage and selection operator (LASSO). The AUC and accuracy (ACC) ranged from 0.83 to 0.99 and 0.81 to 0.95, respectively. AI models demonstrated a pooled sensitivity of 87.5% (95% CI: 75.2%, 94.2%), a specificity of 86.9% (95% CI: 75.8%, 93.4%), and a diagnostic odds ratio (DOR) of 40.02 (95% CI: 13.5, 156.4). The summary receiver operating characteristic (SROC) curve indicated an AUC of 0.931 for prediction of Ki-67 index status in intracranial meningiomas.
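For context, the diagnostic odds ratio is defined as DOR = [sens/(1-sens)] / [(1-spec)/spec]. Plugging the pooled point estimates into this identity gives roughly 46 rather than the reported 40.02, which is expected: bivariate meta-analytic pooling does not reduce to combining the pooled sensitivity and specificity. A sketch of the identity itself:

```python
def diagnostic_odds_ratio(sens, spec):
    """DOR: odds of a positive test in diseased vs non-diseased subjects."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

dor = diagnostic_odds_ratio(0.875, 0.869)  # about 46.4 from the point estimates
```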

CONCLUSION: AI models have demonstrated promising performance for predicting the Ki-67 index in meningiomas and can optimize the treatment strategy.

PMID:39481846 | DOI:10.1016/j.wneu.2024.10.089

Categories: Literature Watch

Alg-MFDL: A multi-feature deep learning framework for allergenic proteins prediction

Thu, 2024-10-31 06:00

Anal Biochem. 2024 Oct 29:115701. doi: 10.1016/j.ab.2024.115701. Online ahead of print.

ABSTRACT

The escalating global incidence of allergies illustrates the growing impact of allergic disease on global health. Allergens are small-molecule antigens that trigger allergic reactions. A widely recognized strategy for allergy prevention involves identifying allergens and avoiding re-exposure; however, laboratory methods to identify allergenic proteins are often time-consuming and resource-intensive. There is a crucial need for efficient and reliable computational approaches to identify allergenic proteins. In this study, we developed a novel allergenic protein predictor named Alg-MFDL, which integrates pre-trained protein language models (PLMs) and traditional handcrafted features to achieve a more complete protein representation. First, we compared the performance of eight pre-trained PLMs from ProtTrans and ESM-2 and selected the best-performing one from each of the two groups. In addition, we evaluated the performance of three handcrafted features and their combinations to select the optimal feature or feature combination. These three protein representations were then fused and used as inputs to train a convolutional neural network (CNN). Finally, independent validation was performed on benchmark datasets to evaluate the performance of Alg-MFDL. Alg-MFDL achieved an accuracy of 0.973, a precision of 0.996, a sensitivity of 0.951, and an F1 score of 0.973, outperforming most current state-of-the-art (SOTA) methods across all key metrics. We anticipate that the proposed model could serve as a useful tool for predicting allergenic proteins. The datasets and code utilized in this study are freely available at https://github.com/Hupenpen/Alg-MFDL.
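The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall (sensitivity), and the stated precision and sensitivity reproduce the stated F1. A one-line check:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.996, 0.951)  # about 0.973, matching the reported F1
```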

PMID:39481588 | DOI:10.1016/j.ab.2024.115701

Categories: Literature Watch

Understanding the role of machine learning in predicting progression of osteoarthritis

Thu, 2024-10-31 06:00

Bone Joint J. 2024 Nov 1;106-B(11):1216-1222. doi: 10.1302/0301-620X.106B11.BJJ-2024-0453.R1.

ABSTRACT

AIMS: Machine learning (ML), a branch of artificial intelligence that uses algorithms to learn from data and make predictions, offers a pathway towards more personalized and tailored surgical treatments. This approach is particularly relevant to prevalent joint diseases such as osteoarthritis (OA). In contrast to end-stage disease, where joint arthroplasty provides excellent results, early stages of OA currently lack effective therapies to halt or reverse progression. Accurate prediction of OA progression is crucial if timely interventions are to be developed, to enhance patient care and optimize the design of clinical trials.

METHODS: A systematic review was conducted in accordance with PRISMA guidelines. We searched MEDLINE and Embase on 5 May 2024 for studies utilizing ML to predict OA progression. Titles and abstracts were independently screened, followed by full-text reviews for studies that met the eligibility criteria. Key information was extracted and synthesized for analysis, including types of data (such as clinical, radiological, or biochemical), definitions of OA progression, ML algorithms, validation methods, and outcome measures.

RESULTS: Out of 1,160 studies initially identified, 39 were included. Most studies (85%) were published between 2020 and 2024, with 82% using publicly available datasets, primarily the Osteoarthritis Initiative. ML methods were predominantly supervised, with significant variability in the definitions of OA progression: most studies focused on structural changes (59%), while fewer addressed pain progression or both. Deep learning was used in 44% of studies, while automated ML was used in 5%. There was a lack of standardization in evaluation metrics and limited external validation. Interpretability was explored in 54% of studies, primarily using SHapley Additive exPlanations.

CONCLUSION: Our systematic review demonstrates the feasibility of ML models in predicting OA progression, but also uncovers critical limitations that currently restrict their clinical applicability. Future priorities should include diversifying data sources, standardizing outcome measures, enforcing rigorous validation, and integrating more sophisticated algorithms. This paradigm shift from predictive modelling to actionable clinical tools has the potential to transform patient care and disease management in orthopaedic practice.

PMID:39481441 | DOI:10.1302/0301-620X.106B11.BJJ-2024-0453.R1

Categories: Literature Watch

Integrating (deep) machine learning and cheminformatics for predicting human intestinal absorption of small molecules

Thu, 2024-10-31 06:00

Comput Biol Chem. 2024 Oct 28;113:108270. doi: 10.1016/j.compbiolchem.2024.108270. Online ahead of print.

ABSTRACT

The oral route is the most preferred route for drug delivery, which is why oral drugs represent the largest share of the pharmaceutical market. Human intestinal absorption (HIA) is closely related to oral bioavailability, making it an important factor in predicting drug absorption. In this study, we focus on predicting drug permeability at the HIA level as a marker for oral bioavailability. A set of 2648 compounds was collected from earlier as well as recent works and curated to build a robust dataset. Five machine learning (ML) algorithms were trained on a set of molecular descriptors of these compounds selected after rigorous feature engineering. Additionally, two deep learning models, a graph convolutional neural network (GCNN) and a graph attention network (GAT), were developed using the same set of compounds to exploit the predictive power of automatically extracted features. The numerical analyses show that, of the five ML models, Random Forest and LightGBM predicted with accuracies of 87.71% and 86.04% on the test set and 81.43% and 77.30% on the external validation set, respectively, whereas the GCNN- and GAT-based models achieved final accuracies of 77.69% and 78.58% on the test set and 79.29% and 79.42% on the external validation set, respectively. We believe deployment of these models for screening oral drugs can provide promising results and have therefore deposited the dataset and models on GitHub (https://github.com/hridoy69/HIA).

PMID:39481232 | DOI:10.1016/j.compbiolchem.2024.108270

Categories: Literature Watch

Radiographer Education and Learning in Artificial Intelligence (REAL-AI): A survey of radiographers, radiologists, and students' knowledge of and attitude to education on AI

Thu, 2024-10-31 06:00

Radiography (Lond). 2024 Oct 30;30 Suppl 2:79-87. doi: 10.1016/j.radi.2024.10.010. Online ahead of print.

ABSTRACT

INTRODUCTION: In Autumn 2023, amendments to the Health and Care Professions Council's (HCPC) Standards of Proficiency for Radiographers were introduced, requiring clinicians to 'demonstrate awareness of the principles of AI and deep learning technology, and its application to practice' (HCPC 2023; standard 12.25). With the rapid deployment of AI in departments, staff must be prepared to implement and utilise AI. AI readiness is crucial for adoption, with education as a key factor in overcoming fear and resistance. This survey aimed to assess the current understanding of AI among students and qualified staff in clinical practice.

METHODS: A survey targeting radiographers (diagnostic and therapeutic), radiologists and students was conducted to gather demographic data and assess awareness of AI in clinical practice. Hosted online via JISC, the survey included both closed and open-ended questions and was launched in March 2023 at the European Congress of Radiology (ECR).

RESULTS: A total of 136 responses were collected from participants across 25 countries and 5 continents. The majority were diagnostic radiographers (56.6%), followed by students (27.2%), dual-qualified respondents (3.7%), and radiologists (2.9%). Of the respondents, 30.1% indicated that their highest level of qualification was a Bachelor's degree, 29.4% stated that they are currently using AI in their role, and 27% were unsure. Only 10.3% had received formal AI training.

CONCLUSION: This study reveals significant gaps in training and understanding of AI among medical imaging staff. These findings will guide further research into AI education for medical imaging professionals.

IMPLICATIONS FOR PRACTICE: This paper lays foundations for future qualitative studies on the provision of AI education for medical imaging professionals, helping to prepare the workforce for the evolving role of AI in medical imaging.

PMID:39481214 | DOI:10.1016/j.radi.2024.10.010

Categories: Literature Watch

Rectangling and enhancing underwater stitched image via content-aware warping and perception balancing

Thu, 2024-10-31 06:00

Neural Netw. 2024 Oct 18;181:106809. doi: 10.1016/j.neunet.2024.106809. Online ahead of print.

ABSTRACT

Single underwater images often face limitations in field-of-view and visual perception due to scattering and absorption. Numerous image stitching techniques have attempted to provide a wider viewing range, but the resulting stitched images may exhibit unsightly irregular boundaries. Unlike natural landscapes, the absence of reliable high-fidelity references in water complicates the replicability of these deep learning-based methods, leading to unpredictable distortions in cross-domain applications. To address these challenges, we propose an Underwater Wide-field Image Rectangling and Enhancement (UWIRE) framework that incorporates two procedures, i.e., the R-procedure and E-procedure, both of which employ self-coordinated modes, requiring only a single underwater stitched image as input. The R-procedure rectangles the irregular boundaries in stitched images by employing the initial shape resizing and mesh-based image preservation warping. Instead of local linear constraints, we use complementary optimization of boundary-structure-content to ensure a natural appearance with minimal distortion. The E-procedure enhances the rectangled image by employing parameter-adaptive correction to balance information distribution across channels. We further propose an attentive weight-guided fusion method to balance the perception of color restoration, contrast enhancement, and texture sharpening in a complementary manner. Comprehensive experiments demonstrate the superior performance of our UWIRE framework over state-of-the-art image rectangling and enhancement methods, both in quantitative and qualitative evaluation.

PMID:39481203 | DOI:10.1016/j.neunet.2024.106809

Categories: Literature Watch

Exploring structural diversity across the protein universe with The Encyclopedia of Domains

Thu, 2024-10-31 06:00

Science. 2024 Nov;386(6721):eadq4946. doi: 10.1126/science.adq4946. Epub 2024 Nov 1.

ABSTRACT

The AlphaFold Protein Structure Database (AFDB) contains more than 214 million predicted protein structures composed of domains, which are independently folding units found in multiple structural and functional contexts. Identifying domains can enable many functional and evolutionary analyses but has remained challenging because of the sheer scale of the data. Using deep learning methods, we have detected and classified every domain in the AFDB, producing The Encyclopedia of Domains. We detected nearly 365 million domains, over 100 million more than can be found by sequence methods, covering more than 1 million taxa. Reassuringly, 77% of the nonredundant domains are similar to known superfamilies, greatly expanding representation of their domain space. We uncovered more than 10,000 new structural interactions between superfamilies and thousands of new folds across the fold space continuum.

PMID:39480926 | DOI:10.1126/science.adq4946

Categories: Literature Watch

Exploring the feasibility of FOCUS DWI with deep learning reconstruction for breast cancer diagnosis: A comparative study with conventional DWI

Thu, 2024-10-31 06:00

PLoS One. 2024 Oct 31;19(10):e0313011. doi: 10.1371/journal.pone.0313011. eCollection 2024.

ABSTRACT

PURPOSE: This study compared field-of-view (FOV) optimized and constrained undistorted single-shot diffusion-weighted imaging (FOCUS DWI) with deep-learning-based reconstruction (DLR) to conventional DWI for breast imaging.

METHODS: This study prospectively enrolled 49 female patients suspected of breast cancer from July to December 2023. The patients underwent conventional and FOCUS breast DWI and data were reconstructed with and without DLR. Two radiologists independently evaluated three images per patient using a 5-point Likert scale. Objective evaluations, including signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and apparent diffusion coefficient (ADC), were conducted using manual region of interest-based analysis. The subjective and objective evaluations were compared using the Friedman test.

RESULTS: The scores for the overall image quality, anatomical details, lesion conspicuity, artifacts, and distortion in FOCUS-DLR DWI were higher than in conventional DWI (all P < 0.001). The SNR of FOCUS-DLR DWI was higher than that of conventional and FOCUS DWI (both P < 0.001), while FOCUS and conventional DWI were similar (P = 0.096). Conventional, FOCUS, and FOCUS-DLR DWI had similar CNR and ADC values.

CONCLUSION: Our findings indicate that images produced by FOCUS-DLR DWI were superior to conventional DWI, supporting the applicability of this technique in clinical practice. DLR provides a new approach to optimize breast DWI.

PMID:39480865 | DOI:10.1371/journal.pone.0313011

Categories: Literature Watch

HepNet: Deep Neural Network for Classification of Early-Stage Hepatic Steatosis Using Microwave Signals

Thu, 2024-10-31 06:00

IEEE J Biomed Health Inform. 2024 Oct 31;PP. doi: 10.1109/JBHI.2024.3489626. Online ahead of print.

ABSTRACT

Hepatic steatosis, a key factor in chronic liver diseases, is difficult to diagnose early. This study introduces a classifier for hepatic steatosis using microwave technology, validated through clinical trials. Our method combines microwave signals and deep learning to improve detection reliability. It includes a pipeline with simulation data, a new deep-learning model called HepNet, and transfer learning. The simulation data, created with 3D electromagnetic tools, is used for training and evaluating the model. HepNet uses skip connections in convolutional layers and two fully connected layers for better feature extraction and generalization. Calibration and uncertainty assessments ensure the model's robustness. In simulation, HepNet achieved an F1-score of 0.91 and a confidence level of 0.97 for classifications with entropy ≤0.1, outperforming traditional models like LeNet (0.81) and ResNet (0.87). We also use transfer learning to adapt HepNet to clinical data with limited patient samples. Using 1H-MRS as the reference standard for two microwave liver scanners, HepNet achieved high F1-scores of 0.95 and 0.88 for 94 and 158 patient samples, respectively, showing its clinical potential.
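The entropy threshold above refers to the entropy of the model's predicted class distribution, a common proxy for prediction uncertainty. As an illustrative sketch (the abstract does not specify the log base; natural log is assumed here):

```python
from math import log

def predictive_entropy(probs):
    """Shannon entropy (nats) of a predicted class distribution."""
    return -sum(p * log(p) for p in probs if p > 0)

confident = predictive_entropy([0.98, 0.02])  # about 0.098, passes entropy <= 0.1
uncertain = predictive_entropy([0.50, 0.50])  # about 0.693, maximal for 2 classes
```

Filtering to predictions below such a threshold is how a classifier can report a higher confidence level on the retained subset, as HepNet does.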

PMID:39480722 | DOI:10.1109/JBHI.2024.3489626

Categories: Literature Watch

Deep Power-aware Tunable Weighting for Ultrasound Microvascular Imaging

Thu, 2024-10-31 06:00

IEEE Trans Ultrason Ferroelectr Freq Control. 2024 Oct 31;PP. doi: 10.1109/TUFFC.2024.3488729. Online ahead of print.

ABSTRACT

Ultrasound microvascular imaging (UMI), including ultrafast power Doppler imaging (uPDI) and ultrasound localization microscopy (ULM), obtains blood flow information through plane wave transmissions at high frame rates. However, the low signal-to-noise ratio of plane waves leads to low image quality. Adaptive beamformers have been proposed to suppress noise energy and achieve higher image quality, at the cost of increasing computational complexity. Deep learning (DL) leverages powerful hardware to enable rapid noise suppression, at the cost of flexibility. To enhance the applicability of DL-based methods, in this work we propose a deep power-aware tunable (DPT) weighting (i.e., postfilter) for delay-and-sum (DAS) beamforming that improves UMI by enhancing plane wave images. The model, called Yformer, is a hybrid structure combining convolution and Transformer. With the DAS-beamformed and compounded envelope image as input, Yformer can estimate both noise power and signal power. Furthermore, we use the obtained powers to compute pixel-wise weights by introducing a tunable noise control factor tailored to improving the quality of different UMI applications. In vivo experiments on the rat brain demonstrate that Yformer can accurately estimate the powers of noise and signal, with a structural similarity index (SSIM) higher than 0.95. The performance of the DPT weighting is comparable to that of a superior adaptive beamformer in uPDI, with low computational cost. The DPT weighting was then applied to four different ULM datasets, including public simulation, public rat brain, private rat brain, and private rat liver datasets, showing excellent generalizability using the model trained on the private rat brain dataset only. In particular, our method indirectly improves the resolution of liver ULM from 25.24 μm to 18.77 μm by highlighting small vessels. In addition, the DPT weighting exhibits more details of blood vessels with faster processing, which has the potential to facilitate the clinical applications of high-quality UMI.

PMID:39480714 | DOI:10.1109/TUFFC.2024.3488729

Categories: Literature Watch

Spike-and-Slab Shrinkage Priors for Structurally Sparse Bayesian Neural Networks

Thu, 2024-10-31 06:00

IEEE Trans Neural Netw Learn Syst. 2024 Oct 31;PP. doi: 10.1109/TNNLS.2024.3485529. Online ahead of print.

ABSTRACT

Network complexity and computational efficiency have become increasingly significant aspects of deep learning. Sparse deep learning addresses these challenges by recovering a sparse representation of the underlying target function by reducing heavily overparameterized deep neural networks. Specifically, deep neural architectures compressed via structured sparsity (e.g., node sparsity) provide low-latency inference, higher data throughput, and reduced energy consumption. In this article, we explore two well-established shrinkage techniques, Lasso and Horseshoe, for model compression in Bayesian neural networks (BNNs). To this end, we propose structurally sparse BNNs, which systematically prune excessive nodes with the following: 1) spike-and-slab group Lasso (SS-GL) and 2) SS group Horseshoe (SS-GHS) priors, and develop computationally tractable variational inference, including continuous relaxation of Bernoulli variables. We establish the contraction rates of the variational posterior of our proposed models as a function of the network topology, layerwise node cardinalities, and bounds on the network weights. We empirically demonstrate the competitive performance of our models compared with the baseline models in prediction accuracy, model compression, and inference latency.

PMID:39480710 | DOI:10.1109/TNNLS.2024.3485529

Categories: Literature Watch

Active Machine Learning for Pre-procedural Prediction of Time-Varying Boundary Condition After Fontan Procedure Using Generative Adversarial Networks

Thu, 2024-10-31 06:00

Ann Biomed Eng. 2024 Oct 31. doi: 10.1007/s10439-024-03640-8. Online ahead of print.

ABSTRACT

The Fontan procedure is the definitive palliation for pediatric patients born with single ventricles. Surgical planning for the Fontan procedure has emerged as a promising vehicle toward optimizing outcomes, where pre-operative measurements are used prospectively as post-operative boundary conditions for simulation. Nevertheless, actual post-operative measurements can be very different from pre-operative states, which raises questions about the accuracy of surgical planning. The goal of this study is to apply machine learning techniques to describing pre-operative and post-operative vena caval flow conditions in Fontan patients in order to develop predictions of post-operative boundary conditions for use in surgical planning. Based on a virtual cohort synthesized by lumped-parameter models, we proposed a novel diversity-aware generative adversarial active learning framework to successfully train predictive deep neural networks on the very limited number of cases generally available in cardiovascular studies. Results of 14 groups of experiments, each uniquely combining different data query strategies, metrics, and data augmentation options with generative adversarial networks, demonstrated that the proposed method exhibited the highest overall prediction accuracy and coefficient of determination. This framework serves as a first step toward deep learning for cardiovascular flow prediction/regression with reduced labeling requirements and an augmented learning space.

PMID:39480609 | DOI:10.1007/s10439-024-03640-8

Categories: Literature Watch

Real-time monitoring of single dendritic cell maturation using deep learning-assisted surface-enhanced Raman spectroscopy

Thu, 2024-10-31 06:00

Theranostics. 2024 Oct 14;14(17):6818-6830. doi: 10.7150/thno.100298. eCollection 2024.

ABSTRACT

Background: Dynamic real-time detection of dendritic cell (DC) maturation is pivotal for accurately predicting immune system activation, assessing vaccine efficacy, and determining the effectiveness of immunotherapy. The heterogeneity of cells underscores the significance of assessing the maturation status of each individual cell, while achieving real-time monitoring of DC maturation at the single-cell level poses significant challenges. Surface-enhanced Raman spectroscopy (SERS) holds great potential for providing specific fingerprinting information of DCs to detect biochemical alterations and evaluate their maturation status. Methods: We developed an Au@CpG@PEG nanoparticle as a self-reporting nanovaccine for DC activation and maturation state assessment, utilizing a label-free SERS strategy. Fingerprint vibrational spectra of the biological components in different states of DCs were collected and analyzed using deep learning convolutional neural network (CNN) algorithms, aiding in the rapid and efficient identification of DC maturation. Results: This approach enables dynamic real-time detection of DC maturation, maintaining accuracy levels above 98.92%. Conclusion: By employing molecular profiling, we revealed that the signal ratio of tryptophan-to-carbohydrate holds potential as a prospective marker for distinguishing the maturation status of DCs.
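A CNN for Raman spectra operates on a 1-D signal of intensity versus Raman shift. As a minimal sketch (the spectrum and smoothing kernel here are invented, not the paper's data or trained filters), the basic building block is a 1-D convolution over the fingerprint region:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical SERS fingerprint: a 1-D spectrum of 128 Raman-shift bins.
spectrum = rng.normal(size=128)

def conv1d(x, kernel, stride=1):
    """Valid-mode 1-D convolution, the elementary operation a CNN
    stacks and learns to classify immature vs. mature DC spectra."""
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

feat = conv1d(spectrum, kernel=np.ones(5) / 5)  # width-5 averaging filter
print(feat.shape)  # (124,)
```

In a trained network the kernels are learned rather than fixed, and several convolution/pooling stages feed a final classifier over maturation states.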

PMID:39479453 | PMC:PMC11519801 | DOI:10.7150/thno.100298

Categories: Literature Watch

AI-enabled workflow for automated classification and analysis of feto-placental Doppler images

Thu, 2024-10-31 06:00

Front Digit Health. 2024 Oct 16;6:1455767. doi: 10.3389/fdgth.2024.1455767. eCollection 2024.

ABSTRACT

INTRODUCTION: Extraction of Doppler-based measurements from feto-placental Doppler images is crucial in identifying vulnerable newborns prenatally. However, this process is time-consuming, operator dependent, and prone to errors.

METHODS: To address this, our study introduces an artificial intelligence (AI) enabled workflow for automating feto-placental Doppler measurements from four sites (i.e., Umbilical Artery (UA), Middle Cerebral Artery (MCA), Aortic Isthmus (AoI) and Left Ventricular Inflow and Outflow (LVIO)), involving classification and waveform delineation tasks. Derived from data from a low- and middle-income country, our approach's versatility was tested and validated using a dataset from a high-income country, showcasing its potential for standardized and accurate analysis across varied healthcare settings.

RESULTS: The classification of Doppler views was approached through three distinct blocks: (i) a Doppler velocity amplitude-based model with an accuracy of 94%, (ii) two Convolutional Neural Networks (CNN) with accuracies of 89.2% and 67.3%, and (iii) Doppler view- and dataset-dependent confidence models to detect misclassifications with an accuracy higher than 85%. The extraction of Doppler indices utilized Doppler-view dependent CNNs coupled with post-processing techniques. Results yielded a mean absolute percentage error of 6.1 ± 4.9% (n = 682), 1.8 ± 1.5% (n = 1,480), 4.7 ± 4.0% (n = 717), 3.5 ± 3.1% (n = 1,318) for the magnitude location of the systolic peak in LVIO, UA, AoI and MCA views, respectively.
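The post-processing step that locates the systolic peak on a delineated waveform can be illustrated with a simple local-maximum detector. The synthetic envelope below is a stand-in for a CNN-delineated velocity trace, not real Doppler data, and the paper's actual post-processing pipeline is more involved.

```python
import numpy as np

# Hypothetical Doppler velocity envelope: a periodic synthetic waveform
# standing in for a delineated umbilical-artery (UA) trace, 2 s at 200 Hz.
t = np.linspace(0, 2, 400)
envelope = 0.6 + 0.4 * np.sin(2 * np.pi * 1.5 * t)  # ~1.5 beats/s

def systolic_peaks(y):
    """Return indices of strict local maxima, the kind of simple
    peak picker applied after waveform delineation."""
    return [i for i in range(1, len(y) - 1) if y[i - 1] < y[i] >= y[i + 1]]

peaks = systolic_peaks(envelope)
print(len(peaks))  # 3 beats in the 2-second window
```

The reported mean absolute percentage errors then compare the automatically located peak magnitudes against expert annotations, per Doppler view.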

CONCLUSIONS: The developed models proved to be highly accurate in classifying Doppler views and extracting essential measurements from Doppler images. The integration of this AI-enabled workflow holds significant promise in reducing the manual workload and enhancing the efficiency of feto-placental Doppler image analysis, even for non-trained readers.

PMID:39479252 | PMC:PMC11521966 | DOI:10.3389/fdgth.2024.1455767

Categories: Literature Watch

Identification of middle cerebral artery stenosis in transcranial Doppler using a modified VGG-16

Thu, 2024-10-31 06:00

Front Neurol. 2024 Oct 16;15:1394435. doi: 10.3389/fneur.2024.1394435. eCollection 2024.

ABSTRACT

OBJECTIVES: The diagnosis of intracranial atherosclerotic stenosis (ICAS) is of great significance for the prevention of stroke. Deep learning (DL)-based artificial intelligence techniques may aid in the diagnosis. The study aimed to identify ICAS in the middle cerebral artery (MCA) based on a modified DL model.

METHODS: This retrospective study included two datasets. Dataset1 consisted of 3,068 transcranial Doppler (TCD) images of the MCA from 1,729 patients, which were assessed as normal or stenosis by three physicians with varying levels of experience, in conjunction with other medical imaging data. The data were used to improve and train the VGG16 models. Dataset2 consisted of TCD images of 90 people who underwent physical examination, which were used to verify the robustness of the model and compare the consistency between the model and human physicians.

RESULTS: The accuracy, precision, specificity, sensitivity, and area under the curve (AUC) of the best model, VGG16 + Squeeze-and-Excitation (SE) + skip connection (SC), reached 85.67 ± 0.43%, 87.23 ± 1.17%, 87.73 ± 1.47%, 83.60 ± 1.60%, and 0.857 ± 0.004 on dataset1, and 93.70 ± 2.80%, 62.65 ± 11.27%, 93.00 ± 3.11%, 100.00 ± 0.00%, and 0.965 ± 0.016 on dataset2. The kappa coefficient showed that the model reached the recognition level of senior doctors.
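The Squeeze-and-Excitation module added to VGG16 can be sketched in a few lines. The feature map and bottleneck weights below are random placeholders, not the study's trained parameters; the sketch only shows the squeeze/excite/rescale pattern itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feature map: 8 channels of 16x16 activations, as might
# come out of a VGG16 convolutional stage.
fmap = rng.normal(size=(8, 16, 16))
w1 = rng.normal(size=(8, 2))   # squeeze: channels -> bottleneck
w2 = rng.normal(size=(2, 8))   # excite: bottleneck -> channels

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel, pass
    the pooled vector through a two-layer bottleneck, and rescale the
    channels by the resulting sigmoid gates."""
    z = x.mean(axis=(1, 2))                               # squeeze: (C,)
    gates = 1 / (1 + np.exp(-(np.maximum(z @ w1, 0) @ w2)))  # excite: (C,)
    return x * gates[:, None, None]                       # recalibrate

out = se_block(fmap, w1, w2)
print(out.shape)  # (8, 16, 16)
```

The skip connection in the modified model would simply add the block's input back to its output (`x + se_block(x, ...)`), letting the network learn a residual channel recalibration.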

CONCLUSION: The improved DL model has a good diagnostic effect for MCA stenosis in TCD images and is expected to help in ICAS screening.

PMID:39479004 | PMC:PMC11521853 | DOI:10.3389/fneur.2024.1394435

Categories: Literature Watch
