Deep learning

Spontaneous breaking of symmetry in overlapping cell instance segmentation using diffusion models

Wed, 2024-12-11 06:00

Biol Methods Protoc. 2024 Nov 9;9(1):bpae084. doi: 10.1093/biomethods/bpae084. eCollection 2024.

ABSTRACT

Instance segmentation is the task of assigning unique identifiers to individual objects in images. Solving this task requires breaking an inherent symmetry: semantically similar objects must be mapped to distinct outputs. Deep learning algorithms bypass this symmetry breaking by training specialized predictors or by utilizing intermediate label representations. However, many of these approaches break down when faced with overlapping labels, which are ubiquitous in biomedical imaging, for instance when segmenting cell layers. Here, we discuss the reason for this failure and offer a novel approach to instance segmentation, based on diffusion models, that breaks this symmetry spontaneously. Our method outputs pixel-level instance segmentations matching the performance of models such as Cellpose on the Cellpose fluorescent cell dataset, while also permitting overlapping labels.

PMID:39659670 | PMC:PMC11631529 | DOI:10.1093/biomethods/bpae084

Categories: Literature Watch

Deep learning and transfer learning for brain tumor detection and classification

Wed, 2024-12-11 06:00

Biol Methods Protoc. 2024 Nov 19;9(1):bpae080. doi: 10.1093/biomethods/bpae080. eCollection 2024.

ABSTRACT

Convolutional neural networks (CNNs) are powerful tools that can be trained on image classification tasks and share many structural and functional similarities with biological visual systems and mechanisms of learning. In addition to serving as a model of biological systems, CNNs possess the convenient feature of transfer learning where a network trained on one task may be repurposed for training on another, potentially unrelated, task. In this retrospective study of public domain MRI data, we investigate the ability of neural network models to be trained on brain cancer imaging data while introducing a unique camouflage animal detection transfer learning step as a means of enhancing the networks' tumor detection ability. Training on glioma and normal brain MRI data, post-contrast T1-weighted and T2-weighted, we demonstrate the potential success of this training strategy for improving neural network classification accuracy. Qualitative metrics such as feature space and DeepDreamImage analysis of the internal states of trained models were also employed, which showed improved generalization ability by the models following camouflage animal transfer learning. Image saliency maps further this investigation by allowing us to visualize the most important image regions from a network's perspective while learning. Such methods demonstrate that the networks not only 'look' at the tumor itself when deciding, but also at the impact on the surrounding tissue in terms of compressions and midline shifts. These results suggest an approach to brain tumor MRIs that is comparable to that of trained radiologists while also exhibiting a high sensitivity to subtle structural changes resulting from the presence of a tumor.

PMID:39659666 | PMC:PMC11631523 | DOI:10.1093/biomethods/bpae080


Adaptive Multicore Dual-Path Fusion Multimodel Extraction of Heterogeneous Features for FAIMS Spectral Analysis

Tue, 2024-12-10 06:00

Rapid Commun Mass Spectrom. 2025 Mar;39(5):e9967. doi: 10.1002/rcm.9967.

ABSTRACT

With the increasing range of application scenarios and detection needs of high-field asymmetric waveform ion mobility spectrometry (FAIMS), deep learning-assisted spectral analysis has become an important way to improve analytical performance and work efficiency. However, a single model generalizes poorly across different types of tasks, and a model trained on one batch of spectral data struggles to achieve good results on another task with large differences. To address this problem, this study proposes a FAIMS spectral analysis model built on adaptive multicore dual-path fusion and multimodel extraction of heterogeneous features, designed for small-sample FAIMS data scenarios. Multimodel feature extraction provides multinetwork complementarity, an adaptive feature fusion module adjusts feature size and dimension to fuse the heterogeneous features, and multicore dual-path fusion captures and integrates information at all scales and levels. The model's performance improves dramatically on complex mixture multiclassification tasks: accuracy, precision, recall, F1-score, and micro-AUC reach 98.11%, 98.66%, 98.33%, 98.30%, and 98.98%. The metrics for the generalization test on untrained xylene isomer data were 96.42%, 96.66%, 96.96%, 96.65%, and 97.60%. The model not only exhibits excellent analytical results on preexisting data but also demonstrates good generalization ability on untrained data.

PMID:39658821 | DOI:10.1002/rcm.9967


FlavorMiner: a machine learning platform for extracting molecular flavor profiles from structural data

Tue, 2024-12-10 06:00

J Cheminform. 2024 Dec 10;16(1):140. doi: 10.1186/s13321-024-00935-9.

ABSTRACT

Flavor is the main factor driving consumers' acceptance of food products. However, tracking the biochemistry of flavor is a formidable challenge due to the complexity of food composition. Current methodologies for linking individual molecules to flavor in foods and beverages are expensive and time-consuming. Predictive models based on machine learning (ML) are emerging as an alternative to speed up this process. Nonetheless, the optimal approach to predicting the flavor features of molecules remains elusive. In this work, we present FlavorMiner, an ML-based multilabel flavor predictor. FlavorMiner seamlessly integrates different combinations of algorithms and mathematical representations, augmented with class-balance strategies to address the inherent class imbalance of the input dataset. Notably, Random Forest and K-Nearest Neighbors combined with Extended Connectivity Fingerprint and RDKit molecular descriptors consistently outperform other combinations in most cases. Resampling strategies surpass weight-balance methods in mitigating bias associated with class imbalance. FlavorMiner exhibits remarkable accuracy, with an average ROC AUC score of 0.88. The algorithm was used to analyze cocoa metabolomics data, unveiling its profound potential to help extract valuable insights from intricate food metabolomics data. FlavorMiner can be used for flavor mining in any food product, drawing from a diverse training dataset that spans over 934 distinct food products.

Scientific contribution: FlavorMiner is an advanced machine learning (ML)-based tool designed to predict molecular flavor features with high accuracy and efficiency, addressing the complexity of food metabolomics. By leveraging robust algorithmic combinations paired with mathematical representations, FlavorMiner achieves high predictive performance. Applied to cocoa metabolomics, FlavorMiner demonstrated its capacity to extract meaningful insights, showcasing its versatility for flavor analysis across diverse food products.
This study underscores the transformative potential of ML in accelerating flavor biochemistry research, offering a scalable solution for the food and beverage industry.
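The resampling the abstract credits can be illustrated with plain random oversampling, where minority-class examples are duplicated until every class matches the most frequent one. The sketch below is a generic illustration (the function name and data layout are invented here), not FlavorMiner's actual pipeline:

```python
import random

def oversample_minority(samples, labels, seed=42):
    """Random oversampling: duplicate minority-class examples until every
    class is as frequent as the largest one (generic illustration)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # keep all originals, then draw random duplicates up to the target count
        for x in xs + [rng.choice(xs) for _ in range(target - len(xs))]:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y
```

Weight-balance methods instead leave the data untouched and scale each class's contribution to the loss; per the abstract, resampling mitigated the class-imbalance bias better in this setting.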

PMID:39658805 | DOI:10.1186/s13321-024-00935-9


Evaluation of the mandibular canal and the third mandibular molar relationship by CBCT with a deep learning approach

Tue, 2024-12-10 06:00

Oral Radiol. 2024 Dec 11. doi: 10.1007/s11282-024-00793-z. Online ahead of print.

ABSTRACT

OBJECTIVE: The mandibular canal (MC) houses the inferior alveolar nerve. Extraction of the mandibular third molar (MM3) is a common dental surgery, often complicated by nerve damage. CBCT is the most effective imaging method to assess the relationship between the MM3 and the MC. With advancements in artificial intelligence, deep learning has shown promising results in dentistry. The aim of this study is to evaluate the MC-MM3 relationship using CBCT and a deep learning technique, as well as to automatically segment the impacted mandibular third molar, the mandibular canal, and the mental and mandibular foramina.

METHODS: This retrospective study analyzed CBCT data from 300 patients. Segmentation was used for labeling, dividing the data into training (n = 270) and test (n = 30) sets. The nnU-NetV2 architecture was employed to develop an optimal deep learning model. The model's success was validated using the test set, with metrics including accuracy, sensitivity, precision, Dice score, Jaccard index, and AUC.

RESULTS: For the MM3 annotated on CBCT, accuracy was 0.99, sensitivity 0.90, precision 0.85, Dice score 0.85, Jaccard index 0.78, and AUC 0.95. For the MC, accuracy was 0.99, sensitivity 0.75, precision 0.78, Dice score 0.76, Jaccard index 0.62, and AUC 0.88. For the mental foramen, accuracy was 0.99, sensitivity 0.64, precision 0.66, Dice score 0.64, Jaccard index 0.57, and AUC 0.82. For the mandibular foramen, accuracy was 0.99, sensitivity 0.79, precision 0.68, Dice score 0.71, and AUC 0.90. In evaluating the MM3-MC relationship, the model showed an 80% correlation with observer assessments.
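The per-structure Dice and Jaccard values above are plain overlap ratios between the predicted and ground-truth masks. A minimal sketch of how such pixel-level metrics are computed (illustrative only, not the study's evaluation code):

```python
def segmentation_metrics(pred, truth):
    """Pixel-level accuracy, Dice score, and Jaccard index for two
    binary masks given as flat sequences of 0/1 values."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = len(pred) - tp - fp - fn
    dice = 2 * tp / (2 * tp + fp + fn)      # overlap weighted toward agreement
    jaccard = tp / (tp + fp + fn)           # intersection over union
    accuracy = (tp + tn) / len(pred)
    return accuracy, dice, jaccard
```

The two overlap scores are monotonically related (J = D / (2 - D)), which is why the Dice and Jaccard columns above always rank the structures in the same order.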

CONCLUSION: The nnU-NetV2 deep learning architecture reliably identifies the MC-MM3 relationship in CBCT images, aiding in diagnosis, surgical planning, and complication prediction.

PMID:39658743 | DOI:10.1007/s11282-024-00793-z


Evaluating deep learning and radiologist performance in volumetric prostate cancer analysis with biparametric MRI and histopathologically mapped slides

Tue, 2024-12-10 06:00

Abdom Radiol (NY). 2024 Dec 11. doi: 10.1007/s00261-024-04734-6. Online ahead of print.

NO ABSTRACT

PMID:39658736 | DOI:10.1007/s00261-024-04734-6


Artificial intelligence-guided design of lipid nanoparticles for pulmonary gene therapy

Tue, 2024-12-10 06:00

Nat Biotechnol. 2024 Dec 10. doi: 10.1038/s41587-024-02490-y. Online ahead of print.

ABSTRACT

Ionizable lipids are a key component of lipid nanoparticles, the leading nonviral messenger RNA delivery technology. Here, to advance the identification of ionizable lipids beyond current methods, which rely on experimental screening and/or rational design, we introduce lipid optimization using neural networks, a deep-learning strategy for ionizable lipid design. We created a dataset of >9,000 lipid nanoparticle activity measurements and used it to train a directed message-passing neural network for prediction of nucleic acid delivery with diverse lipid structures. Lipid optimization using neural networks predicted RNA delivery in vitro and in vivo and extrapolated to structures divergent from the training set. We evaluated 1.6 million lipids in silico and identified two structures, FO-32 and FO-35, with local mRNA delivery to the mouse muscle and nasal mucosa. FO-32 matched the state of the art for nebulized mRNA delivery to the mouse lung, and both FO-32 and FO-35 efficiently delivered mRNA to ferret lungs. Overall, this work shows the utility of deep learning for improving nanoparticle delivery.

PMID:39658727 | DOI:10.1038/s41587-024-02490-y


Convolutional neural networks for automatic MR classification of myocardial iron overload in thalassemia major patients

Tue, 2024-12-10 06:00

Eur Radiol. 2024 Dec 10. doi: 10.1007/s00330-024-11245-x. Online ahead of print.

ABSTRACT

OBJECTIVES: To develop a deep-learning model for supervised classification of myocardial iron overload (MIO) from magnitude T2* multi-echo MR images.

MATERIALS AND METHODS: Eight hundred twenty-three cardiac magnitude T2* multi-slice, multi-echo MR images from 496 thalassemia major patients (285 females, 57%), labeled for MIO level (normal: T2* > 20 ms; moderate: 10 ≤ T2* ≤ 20 ms; severe: T2* < 10 ms), were retrospectively studied. Two 2D convolutional neural networks (CNN) developed for multi-slice (MS-HippoNet) and single-slice (SS-HippoNet) analysis were trained using 5-fold cross-validation. Performance was assessed using micro-average, multi-class accuracy, and single-class accuracy, sensitivity, and specificity. CNN performance was compared with the inter-observer agreement between radiologists on 20% of the patients. Agreement between classifications was assessed with Cohen's kappa test.

RESULTS: Among the 165 images in the test set, multi-class accuracies of 0.885 and 0.836 were obtained for MS- and SS-HippoNet, respectively. Network performance was confirmed on an external test set (0.827 and 0.793 multi-class accuracy; 29 patients from the CHMMOTv1 database). Agreement between automatic and ground-truth classification was good (MS: κ = 0.771; SS: κ = 0.614), comparable with the inter-observer agreement (MS: κ = 0.872; SS: κ = 0.907) evaluated on the test set.
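The κ values reported here are Cohen's kappa, which corrects raw agreement for the agreement expected by chance given each rater's label frequencies. A self-contained sketch of the generic formula (not the study's code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the chance agreement implied by each rater's
    marginal label counts."""
    n = len(rater_a)
    p_o = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[label] * count_b[label] for label in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

On the usual qualitative scale, values between 0.61 and 0.80 count as good/substantial agreement, which is how the MS κ = 0.771 is characterized above.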

CONCLUSION: The developed networks classified MIO level from multi-echo, bright-blood T2* images with good performance.

KEY POINTS: Question: MRI T2* represents the established clinical tool for MIO assessment; quality control of the image analysis is a problem in small centers. Findings: Deep learning models can perform MIO staging with good accuracy, comparable to the inter-observer variability of the standard procedure. Clinical relevance: CNNs can perform automated staging of cardiac iron overload from multi-echo MR sequences, facilitating non-invasive evaluation of patients with various hematologic disorders.

PMID:39658686 | DOI:10.1007/s00330-024-11245-x


The role of deep learning in diagnostic imaging of spondyloarthropathies: a systematic review

Tue, 2024-12-10 06:00

Eur Radiol. 2024 Dec 10. doi: 10.1007/s00330-024-11261-x. Online ahead of print.

ABSTRACT

AIM: Diagnostic imaging is an integral part of identifying spondyloarthropathies (SpA), yet the interpretation of these images can be challenging. This review evaluated the use of deep learning models to enhance the diagnostic accuracy of SpA imaging.

METHODS: Following PRISMA guidelines, we systematically searched major databases up to February 2024, focusing on studies that applied deep learning to SpA imaging. Performance metrics, model types, and diagnostic tasks were extracted and analyzed. Study quality was assessed using QUADAS-2.

RESULTS: We analyzed 21 studies employing deep learning in SpA imaging diagnosis across MRI, CT, and X-ray modalities. These models, particularly advanced CNNs and U-Nets, demonstrated high accuracy in diagnosing SpA, differentiating arthritis forms, and assessing disease progression. Performance metrics frequently surpassed traditional methods, with some models achieving AUCs up to 0.98 and matching expert radiologist performance.

CONCLUSION: This systematic review underscores the effectiveness of deep learning in SpA imaging diagnostics across MRI, CT, and X-ray modalities. The studies reviewed demonstrated high diagnostic accuracy. However, the presence of small sample sizes in some studies highlights the need for more extensive datasets and further prospective and external validation to enhance the generalizability of these AI models.

KEY POINTS: Question: How can deep learning models improve diagnostic accuracy in imaging for spondyloarthropathies (SpA), addressing challenges in early detection and differentiation from other forms of arthritis? Findings: Deep learning models, especially CNNs and U-Nets, showed high accuracy in SpA imaging across MRI, CT, and X-ray, often matching or surpassing expert radiologists. Clinical relevance: Deep learning models can enhance diagnostic precision in SpA imaging, potentially reducing diagnostic delays and improving treatment decisions, but further validation on larger datasets is required for clinical integration.

PMID:39658683 | DOI:10.1007/s00330-024-11261-x


Age-dependent changes in CT vertebral attenuation values in opportunistic screening for osteoporosis: a nationwide multi-center study

Tue, 2024-12-10 06:00

Eur Radiol. 2024 Dec 10. doi: 10.1007/s00330-024-11263-9. Online ahead of print.

ABSTRACT

OBJECTIVES: To examine how vertebral attenuation changes with aging, and to establish age-adjusted CT attenuation value cutoffs for diagnosing osteoporosis.

MATERIALS AND METHODS: This multi-center retrospective study included 11,246 patients (mean age ± standard deviation, 50 ± 13 years; 7139 men) who underwent CT and dual-energy X-ray absorptiometry (DXA) in six health-screening centers between 2022 and 2023. Using deep-learning-based software, attenuation values of L1 vertebral bodies were measured. Segmented linear regression in women and simple linear regression in men were used to assess how attenuation values change with aging. A multivariable linear regression analysis was performed to determine whether age is associated with CT attenuation values independently of the DXA T-score. Age-adjusted cutoffs targeting either 90% sensitivity or 90% specificity were derived using quantile regression. Performance of both age-adjusted and age-unadjusted cutoffs was measured, where the target sensitivity or specificity was considered achieved if a 95% confidence interval encompassed 90%.

RESULTS: While attenuation values declined consistently with age in men, they declined abruptly in women aged > 42 years. Such decline occurred independently of the DXA T-score (p < 0.001). Age adjustment seemed critical for age ≥ 65 years, where the age-adjusted cutoffs achieved the target (sensitivity of 91.5% (86.3-95.2%) when targeting 90% sensitivity and specificity of 90.0% (88.3-91.6%) when targeting 90% specificity), but age-unadjusted cutoffs did not (95.5% (91.2-98.0%) and 73.8% (71.4-76.1%), respectively).
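The sensitivity-targeted cutoffs can be understood as empirical quantiles: to catch 90% of osteoporotic patients with a "flag if attenuation is at or below the cutoff" rule, the cutoff must sit at the 90th percentile of attenuation among DXA-confirmed osteoporotic patients in each age stratum. The sketch below uses a plain empirical quantile as a simplified stand-in for the study's quantile regression; the function names and data layout are invented for illustration:

```python
def sensitivity_cutoff(diseased_values, target_sens=0.90):
    """Cutoff such that ~target_sens of diseased patients fall at or
    below it (those at or below the cutoff are flagged as osteoporotic)."""
    v = sorted(diseased_values)
    idx = max(0, int(round(target_sens * len(v))) - 1)
    return v[idx]

def age_adjusted_cutoffs(records, target_sens=0.90):
    """records: (age_group, L1 attenuation) pairs for diseased patients;
    returns one cutoff per age group."""
    groups = {}
    for age_group, attenuation in records:
        groups.setdefault(age_group, []).append(attenuation)
    return {g: sensitivity_cutoff(vals, target_sens) for g, vals in groups.items()}
```

A specificity-targeted cutoff works symmetrically, using a low quantile of the non-diseased group so that 90% of healthy patients fall above it.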

CONCLUSION: Age-adjusted cutoffs provided a more reliable diagnosis of osteoporosis than age-unadjusted cutoffs since vertebral attenuation values decrease with age, regardless of DXA T-scores.

KEY POINTS: Question: How does vertebral CT attenuation change with age? Findings: Independent of the dual-energy X-ray absorptiometry T-score, vertebral attenuation values on CT declined at a constant rate in men and abruptly in women over 42 years of age. Clinical relevance: Age adjustments are needed in opportunistic osteoporosis screening, especially among the elderly.

PMID:39658682 | DOI:10.1007/s00330-024-11263-9


UPicker: a semi-supervised particle picking transformer method for cryo-EM micrographs

Tue, 2024-12-10 06:00

Brief Bioinform. 2024 Nov 22;26(1):bbae636. doi: 10.1093/bib/bbae636.

ABSTRACT

Automatic single particle picking is a critical step in the data processing pipeline of cryo-electron microscopy structure reconstruction. In recent years, several deep learning-based algorithms have been developed, demonstrating their potential to solve this challenge. However, current methods depend heavily on manually labeled training data, which is labor-intensive and prone to bias, especially for high-noise and low-contrast micrographs, resulting in suboptimal precision and recall. To address these problems, we propose UPicker, a semi-supervised transformer-based particle-picking method with a two-stage training process: unsupervised pretraining and supervised fine-tuning. During the unsupervised pretraining, an Adaptive Laplacian-of-Gaussian region proposal generator obtains pseudo-labels from unlabeled data for initial feature learning. For the supervised fine-tuning, UPicker needs only a small amount of labeled data to achieve high accuracy in particle picking. To further enhance model performance, UPicker employs a contrastive denoising training strategy to reduce redundant detections and accelerate convergence, along with a hybrid data augmentation strategy to deal with limited labeled data. Comprehensive experiments on both simulated and experimental datasets demonstrate that UPicker outperforms state-of-the-art particle-picking methods in terms of accuracy and robustness while requiring less labeled data than other transformer-based models. Furthermore, ablation studies demonstrate the effectiveness and necessity of each component of UPicker. The source code and data are available at https://github.com/JachyLikeCoding/UPicker.
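The Adaptive-LoG proposal generator builds on the classical Laplacian-of-Gaussian blob detector, whose kernel responds strongly to roughly circular particles at a matching scale. Below is the standard textbook LoG kernel in pure Python (the generic operator only, not UPicker's adaptive variant):

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel:
    LoG(x, y) = -1/(pi*sigma^4) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2)).
    Convolving an image with it gives strong negative-center responses
    on bright blobs whose radius matches the scale sigma."""
    half = size // 2
    kernel = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            r2 = (i - half) ** 2 + (j - half) ** 2
            kernel[i][j] = (-1.0 / (math.pi * sigma ** 4)
                            * (1 - r2 / (2 * sigma ** 2))
                            * math.exp(-r2 / (2 * sigma ** 2)))
    # shift to zero mean so flat image regions produce zero response
    mean = sum(map(sum, kernel)) / size ** 2
    return [[v - mean for v in row] for row in kernel]
```

Thresholded extrema of the LoG-filtered micrograph can then serve as candidate particle boxes, which is the kind of pseudo-label the pretraining stage consumes.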

PMID:39658205 | DOI:10.1093/bib/bbae636


Deep learning reveals pathology-confirmed neuroimaging signatures in Alzheimer's, vascular and Lewy body dementias

Tue, 2024-12-10 06:00

Brain. 2024 Dec 9:awae388. doi: 10.1093/brain/awae388. Online ahead of print.

ABSTRACT

Concurrent neurodegenerative and vascular pathologies pose a diagnostic challenge in the clinical setting, with histopathology remaining the definitive modality for dementia-type diagnosis. To address this clinical challenge, we introduce a neuropathology-based, data-driven, multi-label deep learning framework to identify and quantify in-vivo biomarkers for Alzheimer's disease (AD), vascular dementia (VD), and Lewy body dementia (LBD) using antemortem T1-weighted MRI scans of 423 demented and 361 control participants from the NACC and ADNI datasets. Based on the best-performing deep learning model, explainable heatmaps are extracted to visualize disease patterns, and the novel Deep Signature of Pathology Atrophy REcognition (DeepSPARE) indices are developed, where a higher DeepSPARE score indicates more brain alterations associated with that specific pathology. A substantial discrepancy between clinical and neuropathology diagnosis was observed in the demented patients: 71% of them had more than one pathology, but 67% of them were clinically diagnosed as AD only. Based on these neuropathology diagnoses and leveraging cross-validation principles, the deep learning model achieved the best performance with balanced accuracies of 0.844, 0.839, and 0.623 for AD, VD, and LBD, respectively, and was used to generate the explainable deep-learning heatmaps and DeepSPARE indices. The explainable deep-learning heatmaps revealed distinct neuroimaging brain alteration patterns for each pathology: the AD heatmap highlighted bilateral hippocampal regions, the VD heatmap emphasized white matter regions, and the LBD heatmap exposed occipital alterations. The DeepSPARE indices were validated by examining their associations with cognitive testing, neuropathological, and neuroimaging measures using linear mixed-effects models. The DeepSPARE-AD index was associated with MMSE, Trail B, memory, hippocampal volume, Braak stages, CERAD scores, and Thal phases (FDR-adjusted P < 0.05). The DeepSPARE-VD index was associated with white matter hyperintensity volume and cerebral amyloid angiopathy (FDR-adjusted P < 0.001). The DeepSPARE-LBD index was associated with Lewy body stages (FDR-adjusted P < 0.05). The findings were replicated in an out-of-sample ADNI dataset by testing associations with cognitive, imaging, plasma, and CSF measures. CSF and plasma pTau181 were significantly associated with DeepSPARE-AD in the AD/MCI Aβ+ group (FDR-adjusted P < 0.001), and CSF α-synuclein was associated solely with DeepSPARE-LBD (FDR-adjusted P = 0.036). Overall, these findings demonstrate the advantages of our innovative deep-learning framework in detecting antemortem neuroimaging signatures linked to different pathologies. The new deep learning-derived DeepSPARE indices are precise, pathology-sensitive, single-valued, noninvasive neuroimaging metrics, bridging widely available in-vivo T1 imaging with histopathology.

PMID:39657969 | DOI:10.1093/brain/awae388


End-to-end deep learning patient level classification of affected territory of ischemic stroke patients in DW-MRI

Tue, 2024-12-10 06:00

Neuroradiology. 2024 Dec 10. doi: 10.1007/s00234-024-03520-x. Online ahead of print.

ABSTRACT

PURPOSE: To develop an end-to-end DL model for automated classification of affected territory in DWI of stroke patients.

MATERIALS AND METHODS: In this retrospective multicenter study, brain DWI studies from January 2017 to April 2020 from Center 1, from June 2020 to December 2020 from Center 2, and from November 2019 to April 2020 from Center 3 were included. Four radiologists labeled images into five classes: anterior cerebral artery (ACA), middle cerebral artery (MCA), posterior circulation (PC), and watershed (WS) regions, as well as normal images. Additionally, for Center 1, clinical information was encoded as a domain-knowledge vector and incorporated into the image embeddings. A 3D convolutional neural network (CNN) and an attention-gate-integrated version were employed for direct 3D encoding, and a long short-term memory network (LSTM-CNN) with a time-distributed layer was used for slice-based encoding. Balanced classification accuracy, macro-averaged F1 score, AUC, and inter-rater Cohen's kappa were calculated.

RESULTS: Overall, 624 DWI MRIs from 3 centers were utilized (mean age 66.89 years, range 29-95 years; 345 male), with 439 patients in the training, 103 in the validation, and 82 in the test sets. The best model was a slice-based parallel-encoding model with 0.88 balanced accuracy, 0.80 macro-F1 score, and an AUC of 0.98. Clinical domain-knowledge integration improved performance, reaching a best overall accuracy of 0.93 with parallel-stream model embeddings and support vector machine classifiers. The mean kappa value for inter-rater agreement was 0.87.
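The headline numbers combine balanced accuracy (the mean of per-class recalls, which keeps rare classes such as ACA and WS from being drowned out by MCA) with the macro-averaged F1. A minimal sketch of both metrics (illustrative, not the study's evaluation code):

```python
def balanced_accuracy_macro_f1(y_true, y_pred, classes):
    """Balanced accuracy = unweighted mean of per-class recalls;
    macro-F1 = unweighted mean of per-class F1 scores, so every
    class counts equally regardless of its frequency."""
    recalls, f1s = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        recall = tp / (tp + fn) if tp + fn else 0.0
        precision = tp / (tp + fp) if tp + fp else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        recalls.append(recall)
        f1s.append(f1)
    return sum(recalls) / len(classes), sum(f1s) / len(classes)
```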

CONCLUSION: Developed end-to-end deep learning models performed well in classifying affected regions from stroke in DWI.

CLINICAL RELEVANCE STATEMENT: The end-to-end deep learning model with a parallel stream encoding strategy for classifying stroke regions in DWI has performed comparably with radiologists.

PMID:39656236 | DOI:10.1007/s00234-024-03520-x


Utilizing deep learning-based causal inference to explore vancomycin's impact on continuous kidney replacement therapy necessity in blood culture-positive intensive care unit patients

Tue, 2024-12-10 06:00

Microbiol Spectr. 2024 Dec 10:e0266224. doi: 10.1128/spectrum.02662-24. Online ahead of print.

ABSTRACT

Patients with positive blood cultures in the intensive care unit (ICU) are at high risk for septic acute kidney injury requiring continuous kidney replacement therapy (CKRT), especially when treated with vancomycin. This study developed a machine learning model to predict CKRT and examined vancomycin's impact using deep learning-based causal inference. We analyzed ICU patients with positive blood cultures, utilizing the Medical Information Mart for Intensive Care III data set. The primary outcome was defined as the initiation of CKRT during the ICU stay. Machine learning models were developed to predict this outcome, and a deep learning-based causal inference model was used to quantify the impact of vancomycin on the probability of CKRT initiation. Logistic regression was performed to analyze the relationship between the variables and susceptibility to vancomycin. A total of 1,318 patients were included in the analysis, with 41 requiring CKRT. The Random Forest and Light Gradient Boosting Machine exhibited the best performance, with areas under the receiver operating characteristic curve of 0.905 and 0.886, respectively. The deep learning-based causal inference model demonstrated an average 7.7% increase in the probability of CKRT occurrence when administering vancomycin in the total data set. Additionally, younger age, lower diastolic blood pressure, higher heart rate, higher baseline creatinine, and lower bicarbonate levels sensitized the probability of CKRT initiation in response to vancomycin treatment. Deep learning-based causal inference models showed that vancomycin administration increases CKRT risk, identifying specific patient characteristics associated with higher susceptibility.

IMPORTANCE: This study assesses the impact of vancomycin on the risk of continuous kidney replacement therapy (CKRT) in intensive care unit (ICU) patients with blood culture-positive infections. Utilizing deep learning-based causal inference and machine learning models, the research quantifies how vancomycin administration increases CKRT risk by an average of 7.7%. Key variables influencing susceptibility include baseline creatinine, diastolic blood pressure, heart rate, and bicarbonate levels. These findings offer insights into managing vancomycin-induced kidney risk and may inform patient-specific treatment strategies in ICU settings.
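The 7.7% figure is the kind of estimate a counterfactual probe produces: run every patient through the fitted risk model twice, with the vancomycin indicator forced on and then off, and average the difference in predicted CKRT probability. A generic sketch of that probe (the model interface and feature names here are invented; this is not the paper's implementation):

```python
def average_treatment_effect(model, patients, treat_key="vancomycin"):
    """Average over patients of P(CKRT | treated) - P(CKRT | untreated),
    obtained by flipping the treatment flag through a fitted risk model.
    `model` maps a feature dict to a probability; `patients` is a list
    of feature dicts."""
    diffs = []
    for p in patients:
        treated = dict(p, **{treat_key: 1})    # counterfactual: forced on
        untreated = dict(p, **{treat_key: 0})  # counterfactual: forced off
        diffs.append(model(treated) - model(untreated))
    return sum(diffs) / len(diffs)
```

The per-patient differences, rather than their average, are what reveal susceptibility: patients whose individual difference is largest are the ones flagged as sensitized to vancomycin.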

PMID:39656005 | DOI:10.1128/spectrum.02662-24


Rapid diagnosis of bacterial vaginosis using machine-learning-assisted surface-enhanced Raman spectroscopy of human vaginal fluids

Tue, 2024-12-10 06:00

mSystems. 2024 Dec 10:e0105824. doi: 10.1128/msystems.01058-24. Online ahead of print.

ABSTRACT

Bacterial vaginosis (BV) is an abnormal gynecological condition caused by the overgrowth of specific bacteria in the vagina. This study aims to develop a novel method for BV detection by integrating surface-enhanced Raman scattering (SERS) with machine learning (ML) algorithms. Vaginal fluid samples were classified as BV positive or BV negative using the BVBlue Test and clinical microscopy, followed by SERS spectral acquisition to construct the data set. Preliminary SERS spectral analysis revealed notable disparities in characteristic peak features. Multiple ML models were constructed and optimized, with the convolutional neural network (CNN) model achieving the highest prediction accuracy at 99%. Gradient-weighted class activation mapping (Grad-CAM) was used to highlight the regions most important for prediction. Moreover, the CNN model was blindly tested on SERS spectra of vaginal fluid samples collected from 40 participants with unknown BV infection status, achieving a prediction accuracy of 90.75% compared with the results of the BVBlue Test combined with clinical microscopy. This novel technique is simple, inexpensive, and rapid, and accurately diagnoses bacterial vaginosis, potentially complementing current diagnostic methods in clinical laboratories.

IMPORTANCE: The accurate and rapid diagnosis of bacterial vaginosis (BV) is crucial due to its high prevalence and association with serious health complications, including increased risk of sexually transmitted infections and adverse pregnancy outcomes. Although widely used, traditional diagnostic methods have significant limitations in subjectivity, complexity, and cost. The development of a novel diagnostic approach that integrates SERS with ML offers a promising solution. The CNN model's high prediction accuracy, cost-effectiveness, and extraordinary rapidity underscore its significant potential to enhance the diagnosis of BV in clinical settings. This method not only addresses the limitations of current diagnostic tools but also provides a more accessible and reliable option for healthcare providers, ultimately enhancing patient care and health outcomes.

PMID:39655908 | DOI:10.1128/msystems.01058-24


Systematic review of experimental paradigms and deep neural networks for electroencephalography-based cognitive workload detection

Tue, 2024-12-10 06:00

Prog Biomed Eng (Bristol). 2024 Oct 21;6(4). doi: 10.1088/2516-1091/ad8530.

ABSTRACT

This article summarizes a systematic literature review of deep neural network-based cognitive workload (CWL) estimation from electroencephalographic (EEG) signals. The focus of this article can be delineated into two main elements: first, the identification of experimental paradigms prevalently employed for CWL induction, and second, an inquiry into the data structures and input formulations commonly utilized in deep neural network (DNN)-based CWL detection. The survey revealed several experimental paradigms that can reliably induce either graded levels of CWL or a desired cognitive state through sustained induction of CWL. This article characterizes them with respect to the number of distinct CWL levels, cognitive states, experimental environment, and agents in focus. Further, this literature analysis found that DNNs can successfully detect distinct levels of CWL despite the inter-subject and inter-session variability typically observed in EEG signals. Several methodologies were found that use EEG signals in their native representation of a two-dimensional matrix as input to the classification algorithm, bypassing traditional feature selection steps. More often than not, researchers used DNNs as black-box models, and only a few studies employed interpretable or explainable DNNs for CWL detection. However, these algorithms were mostly post hoc data analysis and classification schemes, and only a few studies adopted real-time CWL estimation methodologies. It has also been suggested that interpretable deep learning methodologies may shed light on EEG correlates of CWL, but this remains a mostly unexplored area. This systematic review suggests using networks sensitive to temporal dependencies, and appropriate input formulations for each type of DNN architecture, to achieve robust classification performance. An additional suggestion is to utilize transfer learning methods to achieve high generalizability across tasks (task-independent classifiers), while simple cross-subject data pooling may achieve the same for subject-independent classifiers.
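Feeding EEG to a DNN in its "native representation of a two-dimensional matrix", as many of the reviewed methodologies do, amounts to segmenting the multichannel recording into fixed-length channels-by-samples epochs with no hand-crafted feature extraction. A minimal sketch of that windowing step (the function name, 250 Hz sampling rate, and window sizes are illustrative assumptions, not drawn from any reviewed study):

```python
def segment_eeg(recording, window_samples, step_samples):
    """Split a multichannel EEG recording into 2D epochs.

    recording: list of per-channel sample lists, shape (channels, total_samples).
    Returns a list of epochs, each shaped (channels, window_samples),
    usable directly as raw 2D input to a DNN (no feature selection step).
    """
    total = len(recording[0])
    epochs = []
    for start in range(0, total - window_samples + 1, step_samples):
        # One epoch = the same time window taken from every channel.
        epoch = [ch[start:start + window_samples] for ch in recording]
        epochs.append(epoch)
    return epochs

# Illustrative use: 4 channels, 2 s of data at a hypothetical 250 Hz,
# 1 s windows with 50 % overlap -> 3 epochs of shape (4, 250).
fs = 250
rec = [[float(i) for i in range(2 * fs)] for _ in range(4)]
epochs = segment_eeg(rec, window_samples=fs, step_samples=fs // 2)
```

Overlapping windows are a common way to multiply the training examples a subject-independent classifier sees, which is relevant to the cross-subject data-pooling suggestion above.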

PMID:39655862 | DOI:10.1088/2516-1091/ad8530

Categories: Literature Watch

Ultrasound imaging based recognition of prenatal anomalies: a systematic clinical engineering review

Tue, 2024-12-10 06:00

Prog Biomed Eng (Bristol). 2024 May 7;6(2). doi: 10.1088/2516-1091/ad3a4b.

ABSTRACT

For prenatal screening, ultrasound (US) imaging allows real-time observation of developing fetal anatomy. Extensive fetal structural assessment, covering both normal and aberrant forms, enables early detection and intervention. However, the reliability of anomaly diagnosis varies with operator expertise and device limitations. First-trimester scans, in conjunction with circulating biochemical markers, are critical for identifying high-risk pregnancies, but they also pose technical challenges. Recent engineering advances in automated diagnosis, such as artificial intelligence (AI)-based US image processing and multimodal data fusion, aim to improve screening efficiency, accuracy, and consistency. Still, building trust in these data-driven solutions is necessary for their integration and acceptance in clinical settings. Transparency can be promoted by explainable AI (XAI) techniques that provide visual interpretations and illustrate the underlying diagnostic decision-making process. An explanatory framework based on deep learning is suggested to construct charts depicting anomaly screening results from US video feeds. AI modelling can then be applied to these charts to connect defects with probable deformations. Overall, engineering approaches that improve imaging, automation, and interpretability hold enormous promise for transforming traditional workflows and expanding diagnostic capabilities for better prenatal care.

PMID:39655845 | DOI:10.1088/2516-1091/ad3a4b

Categories: Literature Watch

Machine Learning-Based Prediction for In-Hospital Mortality After Acute Intracerebral Hemorrhage Using Real-World Clinical and Image Data

Tue, 2024-12-10 06:00

J Am Heart Assoc. 2024 Dec 10:e036447. doi: 10.1161/JAHA.124.036447. Online ahead of print.

ABSTRACT

BACKGROUND: Machine learning (ML) techniques are widely employed across various domains to achieve accurate predictions. This study assessed the effectiveness of ML in predicting early mortality risk among patients with acute intracerebral hemorrhage (ICH) in real-world settings.

METHODS AND RESULTS: ML-based models were developed to predict in-hospital mortality in 527 patients with ICH using raw brain imaging data from brain computed tomography and clinical data. The models' performances were evaluated using the area under the receiver operating characteristic curves and calibration plots, comparing them with traditional risk scores such as the ICH score and ICH grading scale. Kaplan-Meier curves were used to examine the post-ICH survival rates, stratified by ML-based risk assessment. The net benefit of ML-based models was evaluated using decision curve analysis. The area under the receiver operating characteristic curves were 0.91 (95% CI, 0.86-0.95) for the ICH score, 0.93 (95% CI, 0.89-0.97) for the ICH grading scale, 0.83 (95% CI, 0.71-0.91) for the ML-based model fitted with raw image data only, and 0.87 (95% CI, 0.76-0.93) for the ML-based model fitted using clinical data without specialist expertise. The area under the receiver operating characteristic curve increased significantly to 0.97 (95% CI, 0.94-0.99) when the ML model was fitted using clinical and image data assessed by specialists. All ML-based models demonstrated good calibration, and the survival rates showed significant differences between risk groups. Decision curve analysis indicated the highest net benefit when utilizing the findings assessed by specialists.
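The area-under-the-curve values reported above can be computed directly from predicted risk scores and observed in-hospital mortality labels. A rank-based (Mann-Whitney) formulation of AUROC, shown with made-up toy data rather than the study's cohort:

```python
def roc_auc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher score than a
    randomly chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example (1 = died in hospital): perfect separation gives AUC 1.0.
labels = [0, 0, 1, 1]
scores = [0.1, 0.2, 0.8, 0.9]
auc = roc_auc(labels, scores)  # -> 1.0
```

This pairwise definition is equivalent to the area under the ROC curve and makes clear why AUROC is insensitive to any monotone rescaling of the model's scores, which is also why calibration plots are reported separately in the study.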

CONCLUSIONS: ML-based prediction models exhibit satisfactory performance in predicting post-ICH in-hospital mortality when utilizing raw imaging data or nonspecialist input. Nevertheless, incorporating specialist expertise notably improves performance.

PMID:39655759 | DOI:10.1161/JAHA.124.036447

Categories: Literature Watch

Assessment of image quality on the diagnostic performance of clinicians and deep learning models: Cross-sectional comparative reader study

Tue, 2024-12-10 06:00

J Eur Acad Dermatol Venereol. 2024 Dec 10. doi: 10.1111/jdv.20462. Online ahead of print.

ABSTRACT

BACKGROUND: Skin cancer is a prevalent and clinically significant condition, with early and accurate diagnosis being crucial for improved patient outcomes. Dermoscopy and artificial intelligence (AI) hold promise in enhancing diagnostic accuracy. However, the impact of image quality, particularly high dynamic range (HDR) conversion in smartphone images, on diagnostic performance remains poorly understood.

OBJECTIVE: This study aimed to investigate the effect of varying image qualities, including HDR-enhanced dermoscopic images, on the diagnostic capabilities of clinicians and a convolutional neural network (CNN) model.

METHODS: Eighteen dermatology clinicians assessed 303 images of 101 skin lesions, categorized into three image-quality groups: low quality (LQ), high quality (HQ) and enhanced quality (EQ), the last produced using HDR-style conversion. Clinicians participated in a two-part reader study that required a diagnosis, a management decision and a confidence level for each image assessed.

RESULTS: In the binary classification of lesions, clinicians had the greatest diagnostic performance with HQ images, with sensitivity (77.3%; CI 69.1-85.5), specificity (63.1%; CI 53.7-72.5) and accuracy (70.2%; CI 61.3-79.1). For the multiclass classification, the overall performance was also best with HQ images, attaining the greatest specificity (91.9%; CI 83.2-95.0) and accuracy (51.5%; CI 48.4-54.7). Clinicians had a superior performance (median correct diagnoses) to the CNN model for the binary classification of LQ and EQ images, but their performance was comparable on the HQ images. However, in the multiclass classification, the CNN model significantly outperformed the clinicians on HQ images (p < 0.01).
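The binary-classification metrics quoted above all derive from the same four confusion-matrix counts. A small sketch of that arithmetic, using invented toy counts rather than the study's reader data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Toy counts (hypothetical, not the study's data):
sens, spec, acc = binary_metrics(tp=8, fp=1, tn=9, fn=2)
# sens = 0.8, spec = 0.9, acc = 0.85
```

Note that accuracy mixes the two error types according to class prevalence, which is why the abstract reports sensitivity and specificity alongside it.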

CONCLUSION: This study highlights the importance of image quality on the diagnostic performance of clinicians and deep learning models. This has significant implications for telehealth reporting and triage.

PMID:39655640 | DOI:10.1111/jdv.20462

Categories: Literature Watch

Automatic classification of fungal-fungal interactions using deep learning models

Tue, 2024-12-10 06:00

Comput Struct Biotechnol J. 2024 Nov 14;23:4222-4231. doi: 10.1016/j.csbj.2024.11.027. eCollection 2024 Dec.

ABSTRACT

Fungi provide valuable solutions for diverse biotechnological applications, such as enzymes in the food industry, bioactive metabolites for healthcare, and biocontrol organisms in agriculture. Current workflows for identifying new biocontrol fungi often rely on subjective visual observations of strains' performance in microbe-microbe interaction studies, making the process time-consuming and difficult to reproduce. To overcome these challenges, we developed an AI-automated image classification approach using a machine learning algorithm based on deep neural networks. Our method focuses on analyzing standardized images of 96-well microtiter plates with solid medium for fungal-fungal challenge experiments. We used our model to categorize the outcome of interactions between the plant pathogen Fusarium graminearum and individual isolates from a collection of 38,400 fungal strains. We trained multiple deep learning architectures and evaluated their performance. The results strongly support our approach, achieving a peak accuracy of 95.0 % with the DenseNet121 model and a maximum macro-averaged F1-score of 93.1 across five folds. To the best of our knowledge, this paper introduces the first automated method for classifying fungal-fungal interactions using deep learning, and it can easily be adapted to other fungal species.
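The macro-averaged F1-score used above weights every interaction class equally, so rare outcomes count as much as common ones. A minimal sketch of that computation; the three class names are hypothetical placeholders, not the categories used in the paper:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: compute F1 per class, then average with
    equal weight regardless of how often each class occurs."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy labels for three hypothetical interaction outcomes:
y_true = ["inhibition", "inhibition", "overgrowth", "neutral"]
y_pred = ["inhibition", "overgrowth", "overgrowth", "neutral"]
score = macro_f1(y_true, y_pred, ["inhibition", "overgrowth", "neutral"])
```

Reporting the maximum macro-F1 across five cross-validation folds, as the abstract does, guards against a single lucky data split inflating the headline accuracy.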

PMID:39655263 | PMC:PMC11626056 | DOI:10.1016/j.csbj.2024.11.027

Categories: Literature Watch
