Deep learning
Deep learning models for the prediction of acute postoperative pain in PACU for video-assisted thoracoscopic surgery
BMC Med Res Methodol. 2024 Oct 7;24(1):232. doi: 10.1186/s12874-024-02357-5.
ABSTRACT
BACKGROUND: Postoperative pain is a prevalent symptom experienced by patients undergoing surgical procedures. This study aims to develop deep learning algorithms for predicting acute postoperative pain using both essential patient details and real-time vital sign data during surgery.
METHODS: Through a retrospective observational approach, we utilized Graph Attention Network (GAT) and Graph Transformer Network (GTN) deep learning algorithms to construct the DoseFormer model, incorporating an attention mechanism. This model employed patient information and intraoperative vital signs obtained during video-assisted thoracoscopic surgery (VATS) to anticipate postoperative pain. By categorizing the static and dynamic data, the DoseFormer model performed binary classification to predict the likelihood of postoperative acute pain.
RESULTS: A total of 1758 patients were initially included, with 1552 patients remaining after data cleaning. These patients were then divided into a training set (n = 931) and a testing set (n = 621). In the testing set, the DoseFormer model exhibited a significantly higher AUROC (0.98) compared to classical machine learning algorithms. Furthermore, the DoseFormer model displayed a significantly higher F1 score (0.85) than the other classical machine learning algorithms. Notably, the anesthesiologists' F1 scores (attending: 0.49, fellow: 0.43, resident: 0.16) were significantly lower than that of the DoseFormer model in predicting acute postoperative pain.
CONCLUSIONS: A deep learning model can predict acute postoperative pain events based on patients' basic information and intraoperative vital signs.
PMID:39375589 | DOI:10.1186/s12874-024-02357-5
Screening chronic kidney disease through deep learning utilizing ultra-wide-field fundus images
NPJ Digit Med. 2024 Oct 7;7(1):275. doi: 10.1038/s41746-024-01271-w.
ABSTRACT
To address challenges in screening for chronic kidney disease (CKD), we devised a deep learning-based CKD screening model named UWF-CKDS. It utilizes ultra-wide-field (UWF) fundus images to predict the presence of CKD. We validated the model with data from 23 tertiary hospitals across China. Retinal vessels and retinal microvascular parameters (RMPs) were extracted to enhance model interpretability, which revealed a significant correlation between renal function and RMPs. UWF-CKDS, utilizing UWF images, RMPs, and relevant medical history, can accurately determine CKD status. Importantly, UWF-CKDS exhibited superior performance compared to CTR-CKDS, a model developed using the central region (CTR) cropped from UWF images, underscoring the contribution of the peripheral retina in predicting renal function. The study presents UWF-CKDS as a highly implementable method for large-scale and accurate CKD screening at the population level.
PMID:39375513 | DOI:10.1038/s41746-024-01271-w
GeneCompass: deciphering universal gene regulatory mechanisms with a knowledge-informed cross-species foundation model
Cell Res. 2024 Oct 8. doi: 10.1038/s41422-024-01034-y. Online ahead of print.
ABSTRACT
Deciphering universal gene regulatory mechanisms in diverse organisms holds great potential for advancing our knowledge of fundamental life processes and facilitating clinical applications. However, the traditional research paradigm primarily focuses on individual model organisms and does not integrate various cell types across species. Recent breakthroughs in single-cell sequencing and deep learning techniques present an unprecedented opportunity to address this challenge. In this study, we built an extensive dataset of over 120 million human and mouse single-cell transcriptomes. After data preprocessing, we obtained 101,768,420 single-cell transcriptomes and developed a knowledge-informed cross-species foundation model, named GeneCompass. During pre-training, GeneCompass effectively integrated four types of prior biological knowledge to enhance our understanding of gene regulatory mechanisms in a self-supervised manner. By fine-tuning for multiple downstream tasks, GeneCompass outperformed state-of-the-art models in diverse applications for a single species and unlocked new realms of cross-species biological investigations. We also employed GeneCompass to search for key factors associated with cell fate transition and showed that the predicted candidate genes could successfully induce the differentiation of human embryonic stem cells into the gonadal fate. Overall, GeneCompass demonstrates the advantages of using artificial intelligence technology to decipher universal gene regulatory mechanisms and shows tremendous potential for accelerating the discovery of critical cell fate regulators and candidate drug targets.
PMID:39375485 | DOI:10.1038/s41422-024-01034-y
Influence of OCT biomarkers on microperimetry intra- and interdevice repeatability in diabetic macular edema
Sci Rep. 2024 Oct 7;14(1):23342. doi: 10.1038/s41598-024-74230-w.
ABSTRACT
To evaluate the intra- and interdevice repeatability of microperimetry (MP) assessments in patients with diabetic macular edema (DME), two consecutive MP tests (45 fovea-centered stimuli, 4-2 staircase strategy) were performed using the MP3 (NIDEK, Aichi, Japan) and MAIA (CenterVue, Padova, Italy) devices, respectively. Intraretinal fluid (IRF) and ellipsoid zone (EZ) thickness were automatically segmented by published deep learning algorithms. Hard exudates (HEs) were annotated semi-automatically, and disorganization of retinal inner layers (DRIL) was segmented manually. Point-to-point registration of MP stimuli to corresponding spectral-domain OCT (Spectralis, Heidelberg Engineering, Germany) locations was performed for both devices. Repeatability was assessed overall and in areas of disease-specific OCT biomarkers using Bland-Altman coefficients of repeatability (CoR). A total of 3600 microperimetry stimuli were tested in 20 eyes with DME. Global CoR was high for both devices (MP3: ± 6.55 dB, MAIA: ± 7.69 dB). Higher retest variances were observed in stimuli with IRF (MP3: CoR ± 7.4 dB vs. ± 6.0 dB, p = 0.001; MAIA: CoR ± 9.2 dB vs. ± 6.8 dB, p = 0.002) and with DRIL on MP3 (CoR ± 6.9 dB vs. ± 3.2 dB, p < 0.001) compared to stimuli without. Repeatability was reduced in areas with thinner EZ layers (both p < 0.05). Fixation (Fuji classification) was relatively unstable independent of device and run. These findings emphasize the need for caution when using MP in patients with DME.
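For readers unfamiliar with the repeatability statistic reported in this abstract, a minimal sketch of the Bland-Altman coefficient of repeatability (the ± dB values above) might look like the following; the paired sensitivity thresholds here are hypothetical, not data from the study.

```python
import numpy as np

def coefficient_of_repeatability(run1, run2):
    """Bland-Altman coefficient of repeatability (CoR) for test-retest data.

    CoR = 1.96 * SD of the paired differences; roughly 95% of retest
    differences are expected to fall within +/- CoR.
    """
    diffs = np.asarray(run1, dtype=float) - np.asarray(run2, dtype=float)
    return 1.96 * diffs.std(ddof=1)

# Hypothetical paired sensitivity thresholds (dB) from two consecutive MP runs
run1 = [26, 28, 25, 30, 27, 24, 29, 26]
run2 = [27, 26, 25, 32, 25, 24, 30, 27]
cor = coefficient_of_repeatability(run1, run2)
```

A larger CoR means a wider interval is needed to cover typical retest scatter, which is why stimuli over IRF or DRIL (higher CoR) are read as less repeatable.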
PMID:39375434 | DOI:10.1038/s41598-024-74230-w
Rapid detection of mouse spermatogenic defects by testicular cellular composition analysis via enhanced deep learning model
Andrology. 2024 Oct 7. doi: 10.1111/andr.13773. Online ahead of print.
ABSTRACT
BACKGROUND: Histological analysis of the testicular sections is paramount in infertility research but tedious and often requires months of training and practice.
OBJECTIVES: To establish an expeditious histopathological analysis of mutant mouse testicular sections stained with commonly available hematoxylin and eosin (H&E) via an enhanced deep learning model.
MATERIALS AND METHODS: Automated segmentation and cellular composition analysis of the testes of six mouse reproductive mutants of the key reproductive DAZ and PUMILIO gene families via H&E-stained mouse testicular sections.
RESULTS: We improved the deep learning model with human interaction to achieve better pixel accuracy and reduced annotation time for histologists; revealed distinctive cell composition features consistent with previously published phenotypes for four mutants and novel spermatogenic defects in two newly generated mutants; established a fast spermatogenic defect detection protocol for quantitative and qualitative assessment of testicular defects within 2.5-3 h, requiring as few as 8 H&E-stained testis sections; uncovered novel defects in AcDKO and a meiotic arrest defect in HDBKO, supporting the synergistic interaction of Sertoli Pum1 and Pum2 as well as redundant meiotic function of Dazl and Boule.
DISCUSSION: Our testicular compositional analysis could reveal spermatogenic defects not only from staged seminiferous tubules but also from unstaged seminiferous tubule sections.
CONCLUSION: Our SCSD-Net model offers a rapid protocol for detecting reproductive defects from H&E-stained testicular sections in as few as 3 h, providing both quantitative and qualitative assessments of spermatogenic defects. Our analysis uncovered evidence supporting the synergistic interaction of Sertoli PUM1 and PUM2 in maintaining average testis size, and redundant roles of DAZ family proteins DAZL and BOULE in meiosis.
PMID:39375288 | DOI:10.1111/andr.13773
Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification
J Imaging Inform Med. 2024 Oct 7. doi: 10.1007/s10278-024-01286-5. Online ahead of print.
ABSTRACT
Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, a robust UQ framework, has been receiving increasing attention as a valuable tool for improving model trustworthiness. An area of active research is the method of non-conformity score calculation for conformal prediction. We propose deep conformal supervision (DCS), which leverages the intermediate outputs of deep supervision for non-conformity score calculation via weighted averaging based on the inverse of the mean calibration error for each stage. We benchmarked our method on two publicly available datasets focused on medical image classification: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved mean coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 1e-4, 10e-4), compared to baseline mean coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 on both datasets). Based on our findings, the baseline results of conformal prediction already exhibit small coverage errors. However, our method shows a significant improvement in coverage error, particularly noticeable in scenarios involving smaller datasets or smaller acceptable error levels, which are crucial in developing UQ frameworks for healthcare AI applications.
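The core idea of DCS, combining per-stage non-conformity scores by weighting each deep-supervision stage with the inverse of its mean calibration error, can be sketched as below. The stage scores, calibration errors, and the normalization of the weights are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def weighted_nonconformity(stage_scores, stage_cal_errors):
    """Combine per-stage non-conformity scores by weighted averaging.

    Each stage is weighted by the inverse of its mean calibration error,
    so better-calibrated (typically deeper) stages contribute more.
    """
    w = 1.0 / np.asarray(stage_cal_errors, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1 (an assumption)
    return np.tensordot(w, np.asarray(stage_scores, dtype=float), axes=1)

# Hypothetical: 3 deep-supervision stages, scores for 4 calibration samples
scores = np.array([[0.20, 0.50, 0.10, 0.90],
                   [0.30, 0.40, 0.20, 0.80],
                   [0.25, 0.45, 0.15, 0.85]])
cal_err = [0.04, 0.02, 0.01]  # deeper stages assumed better calibrated
combined = weighted_nonconformity(scores, cal_err)
```

The combined scores would then feed a standard conformal quantile step to form prediction sets at the desired coverage level.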
PMID:39375270 | DOI:10.1007/s10278-024-01286-5
A Deep Learning-Driven Sampling Technique to Explore the Phase Space of an RNA Stem-Loop
J Chem Theory Comput. 2024 Oct 7. doi: 10.1021/acs.jctc.4c00669. Online ahead of print.
ABSTRACT
The folding and unfolding of RNA stem-loops are critical biological processes; however, their computational studies are often hampered by the ruggedness of their folding landscape, necessitating long simulation times at the atomistic scale. Here, we adapted DeepDriveMD (DDMD), an advanced deep learning-driven sampling technique originally developed for protein folding, to address the challenges of RNA stem-loop folding. Although tempering- and order parameter-based techniques are commonly used for similar rare-event problems, the computational costs or the need for a priori knowledge about the system often present a challenge in their effective use. DDMD overcomes these challenges by adaptively learning from an ensemble of running MD simulations using generic contact maps as the raw input. DeepDriveMD enables on-the-fly learning of a low-dimensional latent representation and guides the simulation toward the undersampled regions while optimizing the resources to explore the relevant parts of the phase space. We showed that DDMD estimates the free energy landscape of the RNA stem-loop reasonably well at room temperature. Our simulation framework runs at a constant temperature without external biasing potential, hence preserving the information on transition rates, with a computational cost much lower than that of the simulations performed with external biasing potentials. We also introduced a reweighting strategy for obtaining unbiased free energy surfaces and presented a qualitative analysis of the latent space. This analysis showed that the latent space captures the relevant slow degrees of freedom for the RNA folding problem of interest. Finally, throughout the manuscript, we outlined how different parameters are selected and optimized to adapt DDMD for this system. We believe this compendium of decision-making processes will help new users adapt this technique for the rare-event sampling problems of their interest.
PMID:39374435 | DOI:10.1021/acs.jctc.4c00669
A review of deep learning approaches for multimodal image segmentation of liver cancer
J Appl Clin Med Phys. 2024 Oct 7:e14540. doi: 10.1002/acm2.14540. Online ahead of print.
ABSTRACT
This review examines recent developments in deep learning (DL) techniques applied to multimodal fusion image segmentation for liver cancer. Hepatocellular carcinoma is a highly dangerous malignant tumor that requires accurate image segmentation for effective treatment and disease monitoring. Multimodal image fusion has the potential to offer more comprehensive information and more precise segmentation, and DL techniques have achieved remarkable progress in this domain. This paper starts with an introduction to liver cancer, then explains the preprocessing and fusion methods for multimodal images, and then explores the application of DL methods in this area. Various DL architectures, such as convolutional neural networks (CNNs) and U-Net, are discussed along with their benefits in multimodal image fusion segmentation. Furthermore, the evaluation metrics and datasets currently used to measure the performance of segmentation models are reviewed. While reviewing this progress, the challenges of current research, such as data imbalance, model generalization, and model interpretability, are emphasized, and future research directions are suggested. The application of DL in multimodal image segmentation for liver cancer is transforming the field of medical imaging and is expected to further enhance the accuracy and efficiency of clinical decision making. This review provides useful insights and guidance for medical practitioners.
PMID:39374312 | DOI:10.1002/acm2.14540
Enhancing stereotactic ablative boost radiotherapy dose prediction for bulky lung cancer: A multi-scale dilated network approach with scale-balanced structure loss
J Appl Clin Med Phys. 2024 Oct 7:e14546. doi: 10.1002/acm2.14546. Online ahead of print.
ABSTRACT
PURPOSE: Partial stereotactic ablative boost radiotherapy (P-SABR) effectively treats bulky lung cancer; however, the planning process for P-SABR requires repeated dose calculations. To improve planning efficiency, we proposed a novel deep learning method that utilizes limited data to accurately predict the three-dimensional (3D) dose distribution of the P-SABR plan for bulky lung cancer.
METHODS: We utilized data on 74 patients diagnosed with bulky lung cancer who received P-SABR treatment. The patient dataset was randomly divided into a training set (51 plans) with augmentation, a validation set (7 plans), and a testing set (16 plans). We devised a 3D multi-scale dilated network (MD-Net) and integrated a scale-balanced structure loss into the loss function. A comparative analysis with a classical network and other advanced networks with multi-scale analysis capabilities and other loss functions was conducted based on the dose distributions in the axial view, average dose scores (ADSs), and average absolute differences of dosimetric indices (AADDIs). Finally, we analyzed the predicted dosimetric indices against the ground-truth values and compared the predicted dose-volume histogram (DVH) with the ground-truth DVH.
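The DVH comparison mentioned above reduces to a simple computation: for each dose level, the fraction of an ROI's volume receiving at least that dose. A minimal sketch (with a hypothetical dose grid and mask, not the study's data) might be:

```python
import numpy as np

def dvh(dose, mask, bins=100):
    """Cumulative dose-volume histogram for a region of interest (ROI):
    for each dose level d, the fraction of ROI volume receiving >= d."""
    roi = np.asarray(dose)[np.asarray(mask, dtype=bool)]
    levels = np.linspace(0, roi.max(), bins)
    volume_frac = np.array([(roi >= d).mean() for d in levels])
    return levels, volume_frac

rng = np.random.default_rng(1)
dose = rng.uniform(0, 60, size=(8, 8))           # hypothetical dose grid (Gy)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                            # hypothetical GTV mask
levels, vol = dvh(dose, mask)
```

Comparing predicted and ground-truth plans then amounts to comparing these curves (and scalar indices read off them) per ROI.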
RESULTS: Our proposed dose prediction method for P-SABR plans for bulky lung cancer demonstrated strong performance, exhibiting a significant improvement in predicting multiple indicators of regions of interest (ROIs), particularly the gross target volume (GTV). Our network demonstrated increased accuracy in most dosimetric indices and dose scores in different ROIs. The proposed loss function significantly enhanced the predictive performance of the dosimetric indices. The predicted dosimetric indices and DVHs were equivalent to the ground-truth values.
CONCLUSION: Our study presents an effective model based on limited datasets, and it exhibits high accuracy in the dose prediction of P-SABR plans for bulky lung cancer. This method has potential as an automated tool for P-SABR planning and can help optimize treatments and improve planning efficiency.
PMID:39374302 | DOI:10.1002/acm2.14546
Solving Zero-Shot Sparse-View CT Reconstruction With Variational Score Solver
IEEE Trans Med Imaging. 2024 Oct 7;PP. doi: 10.1109/TMI.2024.3475516. Online ahead of print.
ABSTRACT
Computed tomography (CT) stands as a ubiquitous medical diagnostic tool. Nonetheless, the radiation-related concerns associated with CT scans have raised public apprehension. Mitigating radiation dosage in CT imaging poses an inherent challenge, as it inevitably compromises the fidelity of CT reconstructions, impacting diagnostic accuracy. While previous deep learning techniques have exhibited promise in enhancing CT reconstruction quality, they remain hindered by their reliance on paired data, which is arduous to procure. In this study, we present a novel approach named Variational Score Solver (VSS) for solving sparse-view reconstruction without paired data. Our approach entails the acquisition of a probability distribution from densely sampled CT reconstructions, employing a latent diffusion model. High-quality reconstruction outcomes are achieved through an iterative process, wherein the diffusion model serves as the prior term, subsequently integrated with the data consistency term. Notably, rather than directly employing the prior diffusion model, we distill prior knowledge by finding the fixed point of the diffusion model. This framework empowers us to exercise precise control over the process. Moreover, we depart from modeling the reconstruction outcomes as deterministic values, opting instead for a distribution-based approach. This enables us to achieve more accurate reconstructions utilizing a trainable model. Our approach introduces a fresh perspective to the realm of zero-shot CT reconstruction, circumventing the constraints of supervised learning. Our extensive qualitative and quantitative experiments unequivocally demonstrate that VSS surpasses other contemporary unsupervised methods and achieves results comparable with the most advanced supervised methods in sparse-view reconstruction tasks. Code is available at https://github.com/fpsandnoob/vss.
PMID:39374276 | DOI:10.1109/TMI.2024.3475516
Deep spectral improvement for unsupervised image instance segmentation
PLoS One. 2024 Oct 7;19(10):e0307432. doi: 10.1371/journal.pone.0307432. eCollection 2024.
ABSTRACT
Recently, there has been growing interest in deep spectral methods for image localization and segmentation, influenced by traditional spectral segmentation approaches. These methods reframe the image decomposition process as a graph partitioning task by extracting features using self-supervised learning and utilizing the Laplacian of the affinity matrix to obtain eigensegments. However, instance segmentation has received less attention than other tasks within the context of deep spectral methods. This paper observes that not all channels of the feature map extracted from a self-supervised backbone contain sufficient information for instance segmentation purposes; some channels are noisy and hinder the accuracy of the task. To overcome this issue, this paper proposes two channel-reduction modules: Noise Channel Reduction (NCR) and Deviation-based Channel Reduction (DCR). NCR retains channels with lower entropy, as they are less likely to be noisy, while DCR prunes channels with low standard deviation, as they lack sufficient information for effective instance segmentation. Furthermore, the paper demonstrates that the dot product, commonly used in deep spectral methods, is not suitable for instance segmentation due to its sensitivity to feature map values, potentially leading to incorrect instance segments. A novel similarity metric called Bray-Curtis over Chebyshev (BoC) is proposed to address this issue. This metric considers the distribution of features in addition to their values, providing a more robust similarity measure for instance segmentation. Quantitative and qualitative results on the YouTube-VIS 2019 and OVIS datasets highlight the improvements achieved by the proposed channel-reduction methods and by using BoC instead of the conventional dot product for creating the affinity matrix. The improvements in mean Intersection over Union (mIoU) and in the quality of the extracted instance segments demonstrate enhanced instance segmentation performance.
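The two channel-reduction modules are described precisely enough to sketch: keep the lowest-entropy channels (NCR), then drop channels whose standard deviation is too small (DCR). The fraction kept, the standard-deviation threshold, and the histogram-based entropy estimate below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def channel_reduction(fmap, keep_frac=0.5, std_thresh=0.05):
    """Sketch of NCR followed by DCR on a (channels, H, W) feature map.

    NCR: keep the keep_frac fraction of channels with the LOWEST
         entropy (assumed less likely to be noisy).
    DCR: then prune channels whose standard deviation falls below
         std_thresh (too little spread to separate instances).
    """
    c = fmap.reshape(fmap.shape[0], -1)          # (channels, pixels)

    def entropy(x):
        # entropy of a 32-bin histogram of the channel's values
        hist, _ = np.histogram(x, bins=32)
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    ent = np.array([entropy(ch) for ch in c])
    n_keep = max(1, int(keep_frac * len(ent)))
    kept = c[np.argsort(ent)[:n_keep]]           # NCR: lowest entropy
    kept = kept[kept.std(axis=1) >= std_thresh]  # DCR: enough spread
    return kept.reshape(-1, *fmap.shape[1:])

rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 8, 8)).astype(np.float32)
reduced = channel_reduction(fmap)
```

The reduced feature map would then feed the affinity-matrix construction (with BoC in place of the dot product, per the paper) before the spectral decomposition.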
The code is available at: https://github.com/farnooshar/SpecUnIIS.
PMID:39374253 | DOI:10.1371/journal.pone.0307432
UNet-based multi-organ segmentation in photon counting CT using virtual monoenergetic images
Med Phys. 2024 Oct 7. doi: 10.1002/mp.17440. Online ahead of print.
ABSTRACT
BACKGROUND: Multi-organ segmentation aids in disease diagnosis, treatment, and radiotherapy. The recently emerged photon counting detector-based CT (PCCT) provides spectral information of the organs and the background tissue and may improve segmentation performance.
PURPOSE: We propose UNet-based multi-organ segmentation in PCCT using virtual monoenergetic images (VMI) to exploit spectral information effectively.
METHODS: The proposed method consists of the following steps: noise reduction in bin-wise images, image-based material decomposition, VMI generation, and deep learning-based segmentation. VMIs are synthesized for various x-ray energies using the basis images. UNet-based networks (3D UNet, Swin UNETR) were used for segmentation, with dice similarity coefficients (DSC) and 3D visualization of the segmented results as evaluation indicators. We validated the proposed method for liver, pancreas, and spleen segmentation using abdominal phantoms from 55 subjects for dual- and quad-energy bins, and compared it to conventional PCCT-based segmentation, which uses only the (noise-reduced) bin-wise images. The experiments were conducted for two cases by adjusting the dose levels.
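The VMI synthesis step described above is, at its core, a pixel-wise weighted sum: each basis material image is scaled by that material's attenuation coefficient at the chosen energy. A minimal sketch follows; the water/iodine maps and the attenuation coefficients are illustrative values, not measured data.

```python
import numpy as np

def synthesize_vmi(basis_images, mu_at_energy):
    """Synthesize a virtual monoenergetic image (VMI): a pixel-wise
    weighted sum of basis material maps, weighted by each material's
    linear attenuation coefficient at the target energy."""
    vmi = np.zeros_like(basis_images[0], dtype=float)
    for basis_map, mu in zip(basis_images, mu_at_energy):
        vmi += mu * basis_map
    return vmi

# Hypothetical water/iodine basis maps (unitless fractions)
water = np.ones((4, 4))
iodine = np.zeros((4, 4))
iodine[1:3, 1:3] = 0.01                 # small iodine uptake region
mu_70keV = [0.193, 4.93]                # illustrative coefficients (1/cm)
vmi_70 = synthesize_vmi([water, iodine], mu_70keV)
```

Sweeping the energy (i.e., the coefficient vector) yields the stack of VMIs that the segmentation networks consume in place of the raw bin-wise images.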
RESULTS: The proposed method improved training stability in most cases. With the proposed method, the average DSC for the three organs slightly increased from 0.933 to 0.95, and the standard deviation decreased from 0.066 to 0.047, for example, in the low-dose case (VMIs vs. bin-wise images from dual-energy bins; 3D UNet).
CONCLUSIONS: The proposed method using VMIs improves training stability for multi-organ segmentation in PCCT, particularly when the number of energy bins is small.
PMID:39374095 | DOI:10.1002/mp.17440
Space-Confined Amplification for In Situ Imaging of Single Nucleic Acid and Single Pathogen on Biological Samples
Adv Sci (Weinh). 2024 Oct 7:e2407055. doi: 10.1002/advs.202407055. Online ahead of print.
ABSTRACT
Direct in situ imaging of nucleic acids on biological samples is advantageous for rapid analysis without DNA extraction. However, traditional nucleic acid amplification in aqueous solutions tends to lose spatial information because of the high mobility of molecules. Similar to a cellular matrix, hydrogels with biomimetic 3D nanoconfined spaces can limit the free diffusion of nucleic acids, thereby allowing for ultrafast in situ enzymatic reactions. In this study, hydrogel-based in situ space-confined interfacial amplification (iSCIA) is developed for direct imaging of single nucleic acids and single pathogens on biological samples without formaldehyde fixation. With a polyethylene glycol hydrogel coating, nucleic acids on the sample are nanoconfined with restricted movement, while in situ amplification can be successfully performed. As a result, the nucleic acids are lit up across the large-scale surface in 20 min, with a detection limit as low as 1 copy/10 cm2. Multiplex imaging with a deep learning model is also established to automatically analyze multiple targets. Furthermore, iSCIA imaging of pathogens on plant leaves and food is successfully used to monitor plant health and food safety. The proposed technique, a rapid and flexible system for in situ imaging, has great potential for food, environmental, and clinical applications.
PMID:39373849 | DOI:10.1002/advs.202407055
Deep-learning-based Attenuation Correction for (68)Ga-DOTATATE Whole-body PET Imaging: A Dual-center Clinical Study
Mol Imaging Radionucl Ther. 2024 Oct 7;33(3):138-146. doi: 10.4274/mirt.galenos.2024.86422.
ABSTRACT
OBJECTIVES: Attenuation correction is a critical step in quantitative positron emission tomography (PET) imaging, with its own special challenges. However, the computed tomography (CT) modality used for attenuation correction and anatomical localization increases the patient's radiation dose. This study aimed to develop a deep learning model for attenuation correction of whole-body 68Ga-DOTATATE PET images.
METHODS: Non-attenuation-corrected and computed tomography-based attenuation-corrected (CTAC) whole-body 68Ga-DOTATATE PET images of 118 patients from two different imaging centers were used. We implemented a residual deep learning model using the NiftyNet framework. The model was trained four times and evaluated six times using the test data from the centers. The quality of the synthesized PET images was compared with the PET-CTAC images using different evaluation metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), mean square error (MSE), and root mean square error (RMSE).
RESULTS: Quantitative analysis of the four network training sessions and six evaluations revealed highest and lowest PSNR values of 52.86±6.6 and 47.96±5.09, respectively. Similarly, the highest and lowest SSIM values were 0.99±0.003 and 0.97±0.01, respectively. Additionally, the highest and lowest RMSE and MSE values were (0.0117±0.003, 0.0015±0.000103) and (0.01072±0.002, 0.000121±5.07e-5), respectively. The study found that using datasets from the same center resulted in the highest PSNR, while using datasets from different centers led to lower PSNR and SSIM values. In addition, scenarios involving datasets from both centers achieved the best SSIM and the lowest MSE and RMSE.
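For reference, the image-quality metrics reported in this abstract (PSNR, MSE, RMSE) are standard quantities; a minimal sketch with toy arrays (not the study's images, and with the data range taken from the reference image, one common convention) is:

```python
import numpy as np

def image_metrics(ref, test):
    """PSNR, MSE, and RMSE between a reference image and a synthesized one."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    rmse = np.sqrt(mse)
    data_range = ref.max() - ref.min()
    psnr = 10 * np.log10(data_range ** 2 / mse) if mse > 0 else np.inf
    return psnr, mse, rmse

ref = np.array([[0.0, 1.0], [0.5, 0.25]])
test = ref + 0.01                  # toy "synthesized" image: small offset
psnr, mse, rmse = image_metrics(ref, test)
```

Higher PSNR and lower MSE/RMSE indicate the synthesized attenuation-corrected image is closer to the CT-based reference; SSIM additionally compares local structure rather than pixel-wise error.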
CONCLUSION: The acceptable accuracy of attenuation correction of 68Ga-DOTATATE PET images using a deep learning model could potentially eliminate the need for additional X-ray imaging modalities, which impose a high radiation dose on the patient.
PMID:39373140 | DOI:10.4274/mirt.galenos.2024.86422
Deep learning in template-free de novo biosynthetic pathway design of natural products
Brief Bioinform. 2024 Sep 23;25(6):bbae495. doi: 10.1093/bib/bbae495.
ABSTRACT
Natural products (NPs) are indispensable in drug development, particularly in combating infections, cancer, and neurodegenerative diseases. However, their limited availability poses significant challenges. Template-free de novo biosynthetic pathway design provides a strategic solution for NP production, with deep learning standing out as a powerful tool in this domain. This review delves into state-of-the-art deep learning algorithms in NP biosynthesis pathway design. It provides an in-depth discussion of databases like Kyoto Encyclopedia of Genes and Genomes (KEGG), Reactome, and UniProt, which are essential for model training, along with chemical databases such as Reaxys, SciFinder, and PubChem for transfer learning to expand models' understanding of the broader chemical space. It evaluates the potential and challenges of sequence-to-sequence and graph-to-graph translation models for accurate single-step prediction. Additionally, it discusses search algorithms for multistep prediction and deep learning algorithms for predicting enzyme function. The review also highlights the pivotal role of deep learning in improving catalytic efficiency through enzyme engineering, which is essential for enhancing NP production. Moreover, it examines the application of large language models in pathway design, enzyme discovery, and enzyme engineering. Finally, it addresses the challenges and prospects associated with template-free approaches, offering insights into potential advancements in NP biosynthesis pathway design.
PMID:39373052 | DOI:10.1093/bib/bbae495
scDFN: enhancing single-cell RNA-seq clustering with deep fusion networks
Brief Bioinform. 2024 Sep 23;25(6):bbae486. doi: 10.1093/bib/bbae486.
ABSTRACT
Single-cell ribonucleic acid sequencing (scRNA-seq) technology can be used to perform high-resolution analysis of the transcriptomes of individual cells. Therefore, its application has gained popularity for accurately analyzing the ever-increasing content of heterogeneous single-cell datasets. Central to interpreting scRNA-seq data is the clustering of cells to decipher transcriptomic diversity and infer cell behavior patterns. However, its complexity necessitates the application of advanced methodologies capable of resolving the inherent heterogeneity and limited gene expression characteristics of single-cell data. Herein, we introduce a novel deep learning-based algorithm for single-cell clustering, designated scDFN, which can significantly enhance the clustering of scRNA-seq data through a fusion network strategy. The scDFN algorithm applies a dual mechanism involving an autoencoder to extract attribute information and an improved graph autoencoder to capture topological nuances, integrated via a cross-network information fusion mechanism complemented by a triple self-supervision strategy. This fusion is optimized through a holistic consideration of four distinct loss functions. A comparative analysis with five leading scRNA-seq clustering methodologies across multiple datasets revealed the superiority of scDFN, as determined by better Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI) metrics. Additionally, scDFN demonstrated robust performance on multi-cluster datasets and exceptional resilience to batch effects. Ablation studies highlighted the key roles of the autoencoder and the improved graph autoencoder components, along with the critical contribution of the four joint loss functions to the overall efficacy of the algorithm. Through these advancements, scDFN sets a new benchmark in single-cell clustering and can be used as an effective tool for the nuanced analysis of single-cell transcriptomics.
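The ARI metric used to rank the clustering methods here is straightforward to compute from the contingency table of two labelings; a minimal sketch (with toy labels, not scRNA-seq data) is:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two clusterings: chance-corrected
    agreement on which pairs of items share a cluster. Invariant to
    permuting cluster labels; 1.0 means identical partitions."""
    n = len(labels_true)
    pair_counts = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)   # expected index under chance
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

truth = [0, 0, 0, 1, 1, 1]
# Same partition with swapped labels still scores 1.0:
perfect = adjusted_rand_index(truth, [1, 1, 1, 0, 0, 0])
```

NMI plays a complementary role, measuring the mutual information between the two partitions normalized to [0, 1]; in practice both are typically computed with `sklearn.metrics`.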
PMID:39373051 | DOI:10.1093/bib/bbae486
Deep Learning-Based Detection of Impacted Teeth on Panoramic Radiographs
Biomed Eng Comput Biol. 2024 Oct 5;15:11795972241288319. doi: 10.1177/11795972241288319. eCollection 2024.
ABSTRACT
OBJECTIVE: The aim is to detect impacted teeth on panoramic radiographs by refining the pretrained MedSAM model.
STUDY DESIGN: Impacted teeth are dental issues that can cause complications and are diagnosed via radiographs. We modified the SAM model for individual tooth segmentation using 1016 X-ray images. The dataset was split into training, validation, and testing sets with a ratio of 16:3:1. We enhanced the SAM model to automatically detect impacted teeth by focusing on the tooth's center for more accurate results.
RESULTS: The model was trained on randomly selected images for 200 epochs with a batch size of 1 and a learning rate of 0.001. Results on the test set showed performance of up to 86.73% accuracy, an F1-score of 0.5350, and an IoU of 0.3652 for the SAM-based models.
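The IoU and F1-score reported above are standard overlap metrics between a predicted and a ground-truth mask; for binary masks, F1 coincides with the Dice coefficient. A minimal sketch with toy masks (not the study's data) is:

```python
import numpy as np

def segmentation_scores(pred, target):
    """Pixel-wise IoU and F1 (Dice) for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.logical_and(pred, target).sum()    # true positives
    fp = np.logical_and(pred, ~target).sum()   # false positives
    fn = np.logical_and(~pred, target).sum()   # false negatives
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

pred = [[1, 1, 0], [0, 1, 0]]      # toy predicted tooth mask
target = [[1, 0, 0], [0, 1, 1]]    # toy ground-truth mask
iou, f1 = segmentation_scores(pred, target)
```

Since F1 weights true positives twice, it is always at least as large as IoU on the same masks, which is consistent with the 0.5350 vs. 0.3652 values reported.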
CONCLUSION: This study fine-tunes MedSAM for impacted tooth segmentation in X-ray images, aiding dental diagnoses. Further improvements on model accuracy and selection are essential for enhancing dental practitioners' diagnostic capabilities.
PMID:39372969 | PMC:PMC11456186 | DOI:10.1177/11795972241288319
Empowering precision medicine: regenerative AI in breast cancer
Front Oncol. 2024 Sep 20;14:1465720. doi: 10.3389/fonc.2024.1465720. eCollection 2024.
ABSTRACT
Regenerative AI is transforming breast cancer diagnosis and treatment through enhanced imaging analysis, personalized medicine, drug discovery, and remote patient monitoring. AI algorithms can detect subtle patterns in mammograms and other imaging modalities with high accuracy, potentially leading to earlier diagnoses. In treatment planning, AI integrates patient-specific data to predict individual responses and optimize therapies. For drug discovery, generative AI models rapidly design and screen novel molecules targeting breast cancer pathways. Remote monitoring tools powered by AI provide real-time insights to guide care. Examples include Google's LYNA for analyzing pathology slides, Kheiron's Mia for mammogram interpretation, and Tempus's platform for integrating clinical and genomic data. While promising, challenges remain, including limited high-quality training data, integration into clinical workflows, interpretability of AI decisions, and regulatory/ethical concerns. Strategies to address these include collaborative data-sharing initiatives, user-centered design, explainable AI techniques, and robust oversight frameworks. In developing countries, AI tools like MammoAssist and Niramai's thermal imaging system are improving access to screening. Overall, regenerative AI offers significant potential to enhance breast cancer care, but judicious implementation with awareness of limitations is crucial. Coordinated efforts across the healthcare ecosystem are needed to fully realize AI's benefits while addressing challenges.
PMID:39372870 | PMC:PMC11449872 | DOI:10.3389/fonc.2024.1465720
Fast and full characterization of large earthquakes from prompt elastogravity signals
Commun Earth Environ. 2024;5(1):561. doi: 10.1038/s43247-024-01725-9. Epub 2024 Oct 4.
ABSTRACT
Prompt ElastoGravity Signals are light-speed, gravity-induced signals recorded before the arrival of seismic waves. They have raised interest for early-warning applications, but their weak amplitudes, close to the background seismic noise, have cast doubt on their actual potential for operational use. A deep-learning model recently demonstrated its ability to mitigate this noise limitation and to provide the earthquake magnitude (Mw) in near real time. However, that approach was effective only for large earthquakes (Mw ≥ 8.3) of known focal mechanism. Here we show unprecedented performance in full earthquake characterization using the dense broadband seismic network deployed in Alaska and Western Canada. Our deep-learning model provides accurate magnitude and focal-mechanism estimates for Mw ≥ 7.8 earthquakes within 2 minutes of origin time, and hence their tsunamigenic potential. Our results represent a major step towards the routine use of prompt elastogravity signals in operational warning systems and demonstrate their potential for tsunami warning in densely instrumented areas.
PMID:39372845 | PMC:PMC11452338 | DOI:10.1038/s43247-024-01725-9
Brain tumor grade classification using the ConvNext architecture
Digit Health. 2024 Sep 28;10:20552076241284920. doi: 10.1177/20552076241284920. eCollection 2024 Jan-Dec.
ABSTRACT
OBJECTIVE: Brain tumor grade is an important aspect of brain tumor diagnosis and helps to plan for treatment. Traditional methods of diagnosis, including biopsy and manual examination of medical images, are either invasive or may result in inaccurate diagnoses. This study proposes a brain tumor grade classification technique using a modern convolutional neural network (CNN) architecture called ConvNext that inputs magnetic resonance imaging (MRI) data.
METHODS: Deep learning-based techniques are replacing invasive procedures for consistent, accurate, and non-invasive diagnosis of brain tumors. A well-known challenge of applying deep learning architectures to medical imaging is data scarcity: modern architectures have huge numbers of trainable parameters and require massive datasets to achieve the desired accuracy and avoid overfitting. Transfer learning is therefore popular among researchers working with medical imaging data. Transformer-based architectures have recently surpassed CNNs on image data; however, newly proposed CNNs have regained superior accuracy by incorporating design tweaks inspired by vision transformers. This study proposes a technique that extracts features from the ConvNext architecture and feeds these features to a fully connected neural network for final classification.
RESULTS: The proposed study achieved state-of-the-art performance on the BraTS 2019 dataset using pre-trained ConvNext. The best accuracy of 99.5% was achieved when three MRI sequences were input as three channels of the pre-trained CNN.
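The three-sequences-as-three-channels input can be sketched with NumPy: each MRI sequence is min-max normalized and stacked into the 3-channel array a pre-trained CNN expects. The choice of min-max normalization, and the naming of the sequences, are illustrative assumptions; the abstract does not specify the preprocessing.

```python
import numpy as np

def to_three_channel(seq_a, seq_b, seq_c):
    """Min-max normalize three MRI sequences and stack them as channels."""
    channels = []
    for img in (seq_a, seq_b, seq_c):
        img = img.astype(np.float32)
        lo, hi = img.min(), img.max()
        # constant images map to zeros to avoid division by zero
        channels.append((img - lo) / (hi - lo) if hi > lo else np.zeros_like(img))
    return np.stack(channels, axis=-1)  # shape (H, W, 3)
```

The resulting (H, W, 3) array matches the RGB input layout of ImageNet-pretrained backbones, which is what makes transfer learning from a pre-trained ConvNext directly applicable.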
CONCLUSION: The study demonstrated the efficacy of the representations learned by a modern CNN architecture, which has a higher inductive bias for the image data than vision transformers for brain tumor grade classification.
PMID:39372816 | PMC:PMC11452878 | DOI:10.1177/20552076241284920