Deep learning
Evaluating enrichment use in group-housed rhesus macaques (<em>Macaca mulatta</em>): A machine learning approach
Anim Welf. 2024 Dec 9;33:e59. doi: 10.1017/awf.2024.65. eCollection 2024.
ABSTRACT
Environmental enrichment programmes are widely used to improve the welfare of captive and laboratory animals, especially non-human primates. Monitoring enrichment use over time is crucial, as animals may habituate and reduce their interaction with it. In this study we aimed to monitor interaction with enrichment items in groups of rhesus macaques (Macaca mulatta), each consisting of an average of ten individuals, living in a breeding colony. To streamline the time-intensive task of assessing enrichment programmes, we automated the evaluation process using machine learning technologies. We built two computer vision-based pipelines to evaluate the monkeys' interactions with different enrichment items: a white drum containing raisins and a non-food-based puzzle. The first pipeline analyses use of the white drum in nine groups, both when it contains food and when it is empty. The second pipeline counts the number of monkeys interacting with the puzzle across twelve groups. The data derived from the two pipelines reveal that the macaques consistently express interest in the food-based white drum enrichment, even several months after its introduction. The puzzle enrichment was monitored for one month, showing a gradual decline in interaction over time. These pipelines are valuable for assessing enrichment because they minimise the time spent on animal observation and data analysis; this study demonstrates that automated methods can consistently monitor macaque engagement with enrichments, systematically tracking habituation responses and long-term effectiveness. Such advancements have significant implications for enhancing animal welfare, enabling the discontinuation of ineffective enrichments and the adaptation of enrichment plans to meet the animals' needs.
PMID:39703214 | PMC:PMC11655280 | DOI:10.1017/awf.2024.65
Role of artificial intelligence in treatment planning and outcome prediction of jaw corrective surgeries by using 3-D imaging-a systematic review
Oral Surg Oral Med Oral Pathol Oral Radiol. 2024 Oct 1:S2212-4403(24)00507-8. doi: 10.1016/j.oooo.2024.09.010. Online ahead of print.
ABSTRACT
OBJECTIVE: Artificial intelligence (AI) has been increasingly utilized in the diagnosis of skeletal deformities, while its role in treatment planning and outcome prediction of jaw corrective surgeries with 3-dimensional (3D) imaging remains underexplored.
METHODS: A comprehensive search was conducted in PubMed, Google Scholar, Semantic Scholar, and the Cochrane Library for studies published between January 2000 and May 2024. Inclusion criteria encompassed studies on AI applications in treatment planning and outcome prediction for jaw corrective surgeries using 3D imaging. Data extracted included study details, AI algorithms, and performance metrics. The modified PROBAST tool was used to assess the risk of bias (ROB).
RESULTS: Fourteen studies were included. Eleven studies used deep learning algorithms, and three employed machine learning on CT data. In treatment planning, prediction errors ranged from 0.292 to 3.32 mm (N = 5), and Dice scores ranged from 92.24% to 96% (N = 2). Accuracy of outcome predictions varied from 85.7% to 99.98% (N = 2). ROB was low in most of the included studies. A meta-analysis was not conducted owing to significant heterogeneity and insufficient data reporting in the included studies.
CONCLUSION: 3D imaging-based AI models for treatment planning and outcome prediction in jaw corrective surgeries show promise but remain at proof-of-concept. Further prospective multicentric studies are needed to validate these findings.
PMID:39701860 | DOI:10.1016/j.oooo.2024.09.010
The quality and accuracy of radiomics model in diagnosing osteoporosis: a systematic review and meta-analysis
Acad Radiol. 2024 Dec 18:S1076-6332(24)00940-1. doi: 10.1016/j.acra.2024.11.065. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: The purpose of this study is to conduct a meta-analysis to evaluate the diagnostic performance of current radiomics models for diagnosing osteoporosis, as well as to assess the methodology and reporting quality of these radiomics studies.
METHODS: Following PRISMA guidelines, four databases (MEDLINE, Web of Science, Embase, and the Cochrane Library) were searched systematically to select relevant studies published before July 18, 2024. Articles that used radiomics models for diagnosing osteoporosis were considered eligible. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool and the radiomics quality score (RQS) were used to assess the quality of included studies. The pooled diagnostic odds ratio (DOR), sensitivity, specificity, and area under the summary receiver operating characteristic curve (AUC) were calculated to estimate the diagnostic efficiency of the pooled model.
RESULTS: A total of 25 studies were included, of which 24 provided usable data for the meta-analysis, covering 1553 patients with osteoporosis and 2200 patients without osteoporosis. The mean RQS of the included studies was 11.48 ± 4.92, with an adherence rate of 31.89%. The pooled DOR, sensitivity, and specificity of the models for diagnosing osteoporosis were 81.72 (95% CI: 51.08-130.73), 0.90 (95% CI: 0.87-0.93), and 0.90 (95% CI: 0.87-0.93), respectively. The AUC was 0.96, indicating high diagnostic capability. Subgroup analysis revealed that the use of different imaging modalities to construct radiomics models might be one source of heterogeneity. Radiomics models built using CT images and deep learning algorithms demonstrated higher diagnostic accuracy for osteoporosis.
CONCLUSION: Radiomics models for the diagnosis of osteoporosis have high diagnostic efficacy and could become an efficient instrument to assist clinicians in screening for osteoporosis. However, relevant guidelines should be followed strictly to improve the quality of radiomics studies.
PMID:39701845 | DOI:10.1016/j.acra.2024.11.065
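As a sanity check on the pooled estimates above, the diagnostic odds ratio can be derived directly from sensitivity and specificity via DOR = (sens/(1-sens)) / ((1-spec)/spec); with both at 0.90 this gives 81, close to the reported pooled DOR of 81.72 (which comes from a bivariate pooling model rather than this simple identity). A minimal sketch:

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = odds of a positive test in the diseased / odds in the non-diseased."""
    return (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)

# Pooled point estimates from the meta-analysis above.
pooled = diagnostic_odds_ratio(0.90, 0.90)
print(round(pooled, 1))  # 81.0
```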
Predicting Intracerebral Hemorrhage Outcomes Using Deep Learning Models to Extract Head CT Imaging Features
Acad Radiol. 2024 Dec 18:S1076-6332(24)00967-X. doi: 10.1016/j.acra.2024.12.019. Online ahead of print.
NO ABSTRACT
PMID:39701844 | DOI:10.1016/j.acra.2024.12.019
Bayesian unsupervised clustering identifies clinically relevant osteosarcoma subtypes
Brief Bioinform. 2024 Nov 22;26(1):bbae665. doi: 10.1093/bib/bbae665.
ABSTRACT
Identification of cancer subtypes is a critical step for developing precision medicine. Most cancer subtyping is based on the analysis of RNA sequencing (RNA-seq) data from patient cohorts using unsupervised machine learning methods such as hierarchical cluster analysis, but these computational approaches disregard the heterogeneous composition of individual cancer samples. Here, we used a more sophisticated unsupervised Bayesian model termed latent process decomposition (LPD), which handles individual cancer sample heterogeneity and deconvolutes the structure of transcriptome data to provide clinically relevant information. The work was performed on the pediatric tumor osteosarcoma, which is a prototypical model for a rare and heterogeneous cancer. The LPD model detected three osteosarcoma subtypes. The subtype with the poorest prognosis was validated using independent patient datasets. This new stratification framework will be important for more accurate diagnostic labeling, expediting precision medicine, and improving clinical trial success. Our results emphasize the importance of using more sophisticated machine learning approaches for RNA-seq data analysis (with implications for how deep learning and artificial intelligence are taught), which may assist drug targeting and clinical management.
PMID:39701601 | DOI:10.1093/bib/bbae665
BCDB: A Dual-Branch Network Based on Transformer for Predicting Transcription Factor Binding Sites
Methods. 2024 Dec 17:S1046-2023(24)00279-2. doi: 10.1016/j.ymeth.2024.12.006. Online ahead of print.
ABSTRACT
Transcription factor binding sites (TFBSs) are critical in regulating gene expression. Precisely locating TFBSs can reveal the mechanisms of action of different transcription factors in gene transcription. Various deep learning methods have been proposed to predict TFBSs; however, these models often struggle to achieve ideal performance under limited data conditions. Furthermore, these models typically have complex structures, which makes their decision-making processes difficult to interpret. To address these issues, we have developed a framework named BCDB. This framework integrates multi-scale DNA information and employs a dual-branch output strategy. Integrating DNABERT, convolutional neural networks (CNNs), and multi-head attention mechanisms enhances the feature extraction capabilities, significantly improving the accuracy of predictions. This innovative method aims to balance the extraction of global and local information, enhancing predictive performance while utilizing attention mechanisms to provide an intuitive way to explain the model's predictions, thus strengthening the overall interpretability of the model. Prediction results on 165 ChIP-seq datasets show that BCDB significantly outperforms other existing deep learning methods. Additionally, since the BCDB model utilizes transfer learning, it can transfer knowledge learned from large amounts of unlabeled data to specific cell line prediction tasks, enabling cross-cell-line TFBS prediction. The source code for BCDB is available at https://github.com/ZhangLab312/BCDB.
PMID:39701486 | DOI:10.1016/j.ymeth.2024.12.006
ToxinPredictor: Computational models to predict the toxicity of molecules
Chemosphere. 2024 Dec 17:143900. doi: 10.1016/j.chemosphere.2024.143900. Online ahead of print.
ABSTRACT
Predicting the toxicity of molecules is essential in fields like drug discovery, environmental protection, and industrial chemical management. While traditional experimental methods are time-consuming and costly, computational models offer an efficient alternative. In this study, we introduce ToxinPredictor, a machine learning-based model to predict the toxicity of small molecules using their structural properties. The model was trained on a curated dataset of 7,550 toxic and 6,514 non-toxic molecules, leveraging feature selection techniques like Boruta and PCA. The best-performing model, a Support Vector Machine (SVM), achieved state-of-the-art results with an AUROC of 91.7%, F1-score of 84.9%, and accuracy of 85.4%, outperforming existing solutions. SHAP analysis was applied to the SVM model to identify the most important molecular descriptors contributing to toxicity predictions, enhancing interpretability. Despite challenges related to data quality, ToxinPredictor provides a reliable framework for toxicity risk assessment, paving the way for safer drug development and improved environmental health assessments. We also created a user-friendly webserver, ToxinPredictor (https://cosylab.iiitd.edu.in/toxinpredictor) to facilitate the search and prediction of toxic compounds.
PMID:39701316 | DOI:10.1016/j.chemosphere.2024.143900
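A minimal sketch of the kind of SVM classifier the abstract describes; the data below are toy stand-ins for molecular descriptors, not the paper's Boruta/PCA-selected feature set, and ToxinPredictor's actual pipeline may differ:

```python
# Toy SVM toxicity classifier in the spirit of ToxinPredictor.
# Features here are random stand-ins, NOT real molecular descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical descriptor vectors: toxic molecules shifted in feature space.
X_toxic = rng.normal(loc=1.0, size=(100, 5))
X_nontoxic = rng.normal(loc=-1.0, size=(100, 5))
X = np.vstack([X_toxic, X_nontoxic])
y = np.array([1] * 100 + [0] * 100)  # 1 = toxic, 0 = non-toxic

# Scale descriptors, then fit an RBF-kernel SVM.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y) > 0.9)  # True on this well-separated toy data
```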
Cracking the code of adaptive immunity: The role of computational tools
Cell Syst. 2024 Dec 18;15(12):1156-1167. doi: 10.1016/j.cels.2024.11.009.
ABSTRACT
In recent years, advances in high-throughput and deep sequencing have generated a vast and diverse amount of adaptive immune repertoire data. This surge in data has been matched by a proportional increase in computational methods aimed at characterizing T cell receptor (TCR) repertoires. In this perspective, we provide a brief commentary on the various domains of TCR repertoire analysis, their respective computational methods, and the ongoing challenges. Given the breadth of methods and applications of TCR analysis, we focus our perspective on sequence-based computational methods.
PMID:39701033 | DOI:10.1016/j.cels.2024.11.009
Glaucoma detection: Binocular approach and clinical data in machine learning
Artif Intell Med. 2024 Dec 12;160:103050. doi: 10.1016/j.artmed.2024.103050. Online ahead of print.
ABSTRACT
In this work, we present a multi-modal machine learning method to automate early glaucoma diagnosis. The proposed methodology introduces two novel aspects for automated diagnosis not previously explored in the literature: simultaneous use of ocular fundus images from both eyes and integration with the patient's additional clinical data. We begin by establishing a baseline, termed monocular mode, which adheres to the traditional approach of considering the data from each eye as a separate instance. We then explore the binocular mode, investigating how combining information from both eyes of the same patient can enhance glaucoma diagnosis accuracy. This exploration employs the PAPILA dataset, comprising information from both eyes, clinical data, ocular fundus images, and expert segmentation of these images. Additionally, we compare two image-derived data modalities: direct ocular fundus images and morphological data from manual expert segmentation. Our method integrates Gradient-Boosted Decision Trees (GBDT) and Convolutional Neural Networks (CNN), specifically focusing on the MobileNet, VGG16, ResNet-50, and Inception models. SHAP values are used to interpret GBDT models, while the Deep Explainer method is applied in conjunction with SHAP to analyze the outputs of convolutional-based models. Our findings show the viability of considering both eyes, which improves the model performance. The binocular approach, incorporating information from morphological and clinical data yielded an AUC of 0.796 (±0.003 at a 95% confidence interval), while the CNN, using the same approach (both eyes), achieved an AUC of 0.764 (±0.005 at a 95% confidence interval).
PMID:39701017 | DOI:10.1016/j.artmed.2024.103050
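The "binocular mode" above treats both eyes of a patient, plus clinical data, as one instance rather than two. A sketch under the assumption of simple feature concatenation (the feature names are illustrative, not the PAPILA dataset schema):

```python
# Build one per-patient instance from both eyes plus clinical data,
# instead of the traditional one-instance-per-eye (monocular) setup.
import numpy as np

def binocular_instance(left_eye: np.ndarray, right_eye: np.ndarray,
                       clinical: np.ndarray) -> np.ndarray:
    """One feature row per patient: left-eye + right-eye + clinical features."""
    return np.concatenate([left_eye, right_eye, clinical])

left = np.array([0.45, 0.62])     # e.g., morphological measures (hypothetical)
right = np.array([0.48, 0.60])
clinical = np.array([67.0, 1.0])  # e.g., age, sex flag (hypothetical)
x = binocular_instance(left, right, clinical)
print(x.shape)  # (6,)
```

A GBDT or CNN head can then be trained on such rows, as in the study's tabular branch.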
Graph convolution networks model identifies and quantifies gene and cancer specific transcriptome signatures of cancer driver events
Comput Biol Med. 2024 Dec 18;185:109491. doi: 10.1016/j.compbiomed.2024.109491. Online ahead of print.
ABSTRACT
BACKGROUND: The identification and drug targeting of cancer-causing (driver) genetic alterations have seen immense improvement in recent years, with many new targeted therapies developed. However, identifying, prioritizing, and treating genetic alterations is insufficient for most cancer patients. Current clinical practices rely mainly on DNA-level mutational analyses, which in many cases fail to identify treatable driver events. Arguably, signal strength may determine cell fate more than the mutational status that initiated it. The use of transcriptomics, a complex and highly informative representation of cellular and tumor state, has been suggested to enhance diagnostics and treatment success. A gene-expression-based model trained on known genetic alterations could improve identification and quantification of the signal strength of cancer-related biological aberrations.
METHODS: We present STAMP (Signatures in Transcriptome Associated with Mutated Protein), a Graph Convolution Networks (GCN) based framework for the identification of gene expression signatures related to cancer driver events. STAMP was trained to identify the p53 dysfunction of cancer samples from gene expression, utilizing comprehensive curated graph structures of gene interactions. Predictions were modified for generating a quantitative score to rank the severity of a driver event in each sample. STAMP was then extended to almost 300 tumor type-specific predictive models for important cancer genes/pathways, by training to identify well-established driver events' annotations from the literature.
RESULTS: STAMP achieved very high AUC on unseen data across several tumor types and on an independent cohort. The framework was validated on p53-related genetic and clinical characteristics, including the effect of Variants of Unknown Significance, and showed strong correlation with protein function. For genes and tumor types where targeted therapy is available, STAMP showed correlation with drug sensitivity (IC50) in an independent cell line database, and it stratified drug effects among samples with similar mutational profiles. STAMP was also validated for drug-response prediction in clinical patient cohorts, improving over a state-of-the-art method and suggesting potential biomarkers for cancer treatments.
CONCLUSIONS: The STAMP models provide a learning framework that successfully identifies and quantifies driver events' signal strength, showing utility in portraying the molecular landscape of tumors based on transcriptomics. Importantly, STAMP manifested the ability to improve targeted therapy selection and hence can contribute to better treatment.
PMID:39700860 | DOI:10.1016/j.compbiomed.2024.109491
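STAMP's exact architecture is not given in the abstract, but its GCN building block is standard. A sketch of one graph-convolution step over a toy gene-interaction graph, using the common Kipf-Welling propagation rule as an assumed stand-in:

```python
# One graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
# The 3-gene chain graph and weights are toy values, not STAMP's model.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # gene chain 0-1-2
H = np.eye(3)              # toy per-gene expression features
W = np.ones((3, 2))        # toy layer weights
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2): two hidden features per gene node
```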
Generating synthetic CT images from unpaired head and neck CBCT images and validating the importance of detailed nasal cavity acquisition through simulations
Comput Biol Med. 2024 Dec 18;185:109568. doi: 10.1016/j.compbiomed.2024.109568. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Computed tomography (CT) of the head and neck is crucial for diagnosing internal structures. There is demand to substitute traditional CT with cone-beam CT (CBCT) because of its cost-effectiveness and reduced radiation exposure; however, CBCT cannot accurately depict airway shapes owing to image noise. This study proposes a strategy utilizing a cycle-consistent generative adversarial network (cycleGAN) for denoising CBCT images with various loss functions and augmentation strategies, resulting in the generation of denoised synthetic CT (sCT) images. Furthermore, through a rule-based approach, we were able to automatically segment the upper airway in sCT images with high accuracy. Additionally, we analyzed the impact of finely segmented nasal cavities on airflow using computational fluid dynamics (CFD).
METHODS: We trained the cycleGAN model using various loss functions and compared the quality of the sCT images generated by each model. We improved the artifact removal performance by incorporating CT images with added Gaussian noise augmentation into the training dataset. We developed a rule-based automatic segmentation methodology using threshold and watershed algorithms to compare the accuracy of airway segmentation for noise-reduced sCT and original CBCT. Furthermore, we validated the significance of the nasal cavity by conducting CFD based on automatically segmented shapes obtained from sCT.
RESULT: The generated sCT images exhibited improved quality, with the mean absolute error decreasing from 161.60 to 100.54, peak signal-to-noise ratio increasing from 22.33 to 28.65, and structural similarity index map increasing from 0.617 to 0.865. Furthermore, by comparing the airway segmentation performances of CBCT and sCT using our proposed automatic rule-based algorithm, the Dice score improved from 0.849 to 0.960. Airway segmentation performance is closely associated with the accuracy of fluid dynamics simulations. Detailed airway segmentation is crucial for altering flow dynamics and contributes significantly to diagnostics.
CONCLUSION: Our deep learning methodology enhances the image quality of CBCT to provide anatomical information to medical professionals and enables precise and accurate biomechanical analysis. This allows clinicians to obtain precise quantitative metrics and facilitates accurate assessment.
PMID:39700859 | DOI:10.1016/j.compbiomed.2024.109568
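One of the image-quality metrics reported above, peak signal-to-noise ratio (PSNR), can be sketched as follows; the data range used for the peak value is an assumption, since the abstract does not state the HU window:

```python
# PSNR = 10 * log10(MAX^2 / MSE), here for CT-style images in HU.
# The 2000 HU data range is an illustrative assumption.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 10.0  # constant 10 HU error -> MSE = 100
print(round(psnr(ref, noisy, data_range=2000.0), 2))  # 46.02
```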
AI-Enhanced Interface for Colonic Polyp Segmentation Using DeepLabv3+ with Comparative Backbone Analysis
Biomed Phys Eng Express. 2024 Dec 19. doi: 10.1088/2057-1976/ada15f. Online ahead of print.
ABSTRACT
Polyps are one of the early stages of colon cancer. Detecting polyps by segmentation and removing them by surgical intervention is of great importance for treatment decisions. Manual detection of polyps in colonoscopy images requires multiple experts, is time-consuming, and is prone to human error. Therefore, automatic, fast, and highly accurate segmentation of polyps from colonoscopy images is important, and many methods have been proposed, including deep learning-based approaches. In this study, a method using DeepLabv3+ with an encoder-decoder structure and a ResNet architecture as the backbone network is proposed for the segmentation of colonic polyps. The Kvasir-SEG polyp dataset was used to train and test the proposed method. After the images were preprocessed, the proposed network was trained, tested, and evaluated with performance metrics; additionally, a GUI (graphical user interface) was designed to enable polyp segmentation of colonoscopy images. The experimental results showed that the ResNet-50-based DeepLabv3+ model achieved high performance (DSC: 0.9609, mIoU: 0.9246), demonstrating its effectiveness in the segmentation of colonic polyps. In conclusion, our method utilizing DeepLabv3+ with a ResNet-50 backbone achieves highly accurate colonic polyp segmentation. The obtained results demonstrate its potential to significantly enhance colorectal cancer diagnosis and planning for polypectomy surgery through automated image analysis.
PMID:39700528 | DOI:10.1088/2057-1976/ada15f
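The DSC figure reported above is the Dice similarity coefficient, 2|A∩B| / (|A| + |B|). A minimal sketch for binary segmentation masks:

```python
# Dice similarity coefficient for binary masks (predicted vs. ground truth).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted polyp mask
truth = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(round(dice(pred, truth), 3))  # 0.667
```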
Mitigating epidemic spread in complex networks based on deep reinforcement learning
Chaos. 2024 Dec 1;34(12):123159. doi: 10.1063/5.0235689.
ABSTRACT
Complex networks are susceptible to contagious cascades, underscoring the urgency of effective epidemic mitigation strategies. While physical quarantine is a proven mitigation measure, it can lead to substantial economic repercussions if not managed properly. This study presents an innovative approach to selecting quarantine targets within complex networks, aiming for an efficient and economical epidemic response. We model epidemic spread in complex networks as a Markov chain, accounting for stochastic state transitions and node quarantines. We then leverage deep reinforcement learning (DRL) to design a quarantine strategy that minimizes both infection rates and quarantine costs through a sequence of strategic node quarantines. Our DRL agent is trained with the proximal policy optimization algorithm to optimize these dual objectives. Through simulations in both synthetic small-world and real-world community networks, we demonstrate the efficacy of our strategy in controlling epidemics. Notably, we observe a non-linear pattern in the mitigation effect as the daily maximum quarantine scale increases: the mitigation rate is most pronounced at first but plateaus after reaching a critical threshold. This insight is crucial for setting the most effective epidemic mitigation parameters.
PMID:39700518 | DOI:10.1063/5.0235689
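The abstract models spread as a Markov chain with node quarantines. A toy sketch of one such stochastic transition (the infection probability and graph are hypothetical, and no reinforcement learning is implemented here):

```python
# One Markov-style epidemic step on a network with quarantined nodes.
# The DRL agent in the paper would choose which nodes to quarantine;
# here the quarantine set is fixed by hand for illustration.
import random

def step(adj, infected, quarantined, beta=0.5, rng=random.Random(0)):
    """Each infected, non-quarantined node infects each susceptible,
    non-quarantined neighbour independently with probability beta."""
    new_infected = set(infected)
    for u in infected - quarantined:
        for v in adj[u]:
            if v not in infected and v not in quarantined and rng.random() < beta:
                new_infected.add(v)
    return new_infected

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-node path graph
infected = {0}
quarantined = {2}  # quarantining node 2 cuts the only path to node 3
for _ in range(10):
    infected = step(adj, infected, quarantined)
print(3 in infected)  # False: node 3 is shielded by the quarantine
```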
Ligand Identification in CryoEM and X-ray Maps Using Deep Learning
Bioinformatics. 2024 Dec 19:btae749. doi: 10.1093/bioinformatics/btae749. Online ahead of print.
ABSTRACT
MOTIVATION: Accurately identifying ligands plays a crucial role in the process of structure-guided drug design. Based on density maps from X-ray diffraction or cryogenic-sample electron microscopy (cryoEM), scientists verify whether small-molecule ligands bind to active sites of interest. However, the interpretation of density maps is challenging, and cognitive bias can sometimes mislead investigators into modeling fictitious compounds. Ligand identification can be aided by automatic methods, but existing approaches are available only for X-ray diffraction and are based on iterative fitting or feature-engineered machine learning rather than end-to-end deep learning.
RESULTS: Here, we propose to identify ligands using a deep learning approach that treats density maps as 3D point clouds. We show that the proposed model is on par with existing machine learning methods for X-ray crystallography while also being applicable to cryoEM density maps. Our study demonstrates that electron density map fragments can aid the training of models that can later be applied to cryoEM structures but also highlights challenges associated with the standardization of electron microscopy maps and the quality assessment of cryoEM ligands.
AVAILABILITY: Code and model weights are available on GitHub at https://github.com/jkarolczak/ligands-classification . Datasets used for training and testing are hosted at Zenodo: 10.5281/zenodo.10908325. An accompanying ChimeraX bundle is available at https://github.com/wtaisner/chimerax-ligand-recognizer.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:39700427 | DOI:10.1093/bioinformatics/btae749
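The point-cloud representation the model consumes can be sketched by thresholding a density grid into voxel coordinates; the threshold value and the inclusion of density as a fourth column are illustrative assumptions, not the authors' preprocessing:

```python
# Convert a density-map fragment to a point cloud: one row per voxel
# above threshold, with (z, y, x) indices plus the density value.
import numpy as np

def map_to_point_cloud(density: np.ndarray, threshold: float) -> np.ndarray:
    """Return an (N, 4) array: voxel coordinates + density values."""
    mask = density > threshold
    coords = np.argwhere(mask)          # (N, 3) voxel indices, C order
    values = density[mask][:, None]     # (N, 1) densities, same order
    return np.hstack([coords.astype(float), values])

grid = np.zeros((3, 3, 3))
grid[1, 1, 1] = 0.8  # a single strong density peak
grid[0, 2, 1] = 0.3  # weak density, below threshold
cloud = map_to_point_cloud(grid, threshold=0.5)
print(cloud.shape)  # (1, 4)
```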
Spatiotemporal Profiling Defines Persistence and Resistance Dynamics During Targeted Treatment of Melanoma
Cancer Res. 2024 Dec 19. doi: 10.1158/0008-5472.CAN-24-0690. Online ahead of print.
ABSTRACT
Resistance of BRAF-mutant melanomas to targeted therapy arises from the ability of cells to enter a persister state, evade treatment with relative dormancy, and repopulate the tumor when reactivated. A better understanding of the temporal dynamics and specific pathways leading into and out of the persister state is needed to identify strategies to prevent treatment failure. Using spatial transcriptomics in patient-derived xenograft models, we captured clonal lineage evolution during treatment. The persister state showed increased oxidative phosphorylation, decreased proliferation, and increased invasive capacity, with central-to-peripheral gradients. Phylogenetic tracing identified intrinsic and acquired resistance mechanisms (e.g., dual specific phosphatases, reticulon-4, and CDK2) and suggested specific temporal windows of potential therapeutic susceptibility. Deep learning-enabled analysis of histopathological slides revealed morphological features correlating with specific cell states, demonstrating that juxtaposition of transcriptomics and histological data enabled identification of phenotypically distinct populations from using imaging data alone. In summary, this study defined state change and lineage selection during melanoma treatment with spatiotemporal resolution, elucidating how choice and timing of therapeutic agents will impact the ability to eradicate resistant clones.
PMID:39700408 | DOI:10.1158/0008-5472.CAN-24-0690
Machine and deep learning algorithms for sentiment analysis during COVID-19: A vision to create fake news resistant society
PLoS One. 2024 Dec 19;19(12):e0315407. doi: 10.1371/journal.pone.0315407. eCollection 2024.
ABSTRACT
Informal education via social media plays a crucial role in modern learning, offering self-directed and community-driven opportunities to gain knowledge, skills, and attitudes beyond traditional educational settings. These platforms provide access to a broad range of learning materials, such as tutorials, blogs, forums, and interactive content, making education more accessible and tailored to individual interests and needs. However, challenges like information overload and the spread of misinformation highlight the importance of digital literacy in ensuring users can critically evaluate the credibility of information. Consequently, the significance of sentiment analysis has grown in contemporary times due to the widespread utilization of social media platforms as a means for individuals to articulate their viewpoints. Twitter (now X) is well recognized as a prominent social media platform predominantly utilized for microblogging. Individuals commonly express their viewpoints on contemporary events, presenting a significant challenge for scholars seeking to categorize the sentiment of such expressions effectively. The spread of fake news during the COVID-19 pandemic has created significant challenges for public health and safety, because misinformation about the virus, its transmission, and treatments has led to confusion and distrust among the public. This research study introduces highly effective techniques for detecting misinformation related to the COVID-19 pandemic. The methodology of this work includes gathering a dataset comprising fabricated news articles sourced from a corpus and subjected to the natural language processing (NLP) cycle.
After applying some filters, five machine learning classifiers and three deep learning classifiers were employed to classify news articles as authentic or fabricated. The machine learning classifiers were Support Vector Machine, Logistic Regression, K-Nearest Neighbors, Decision Trees, and Random Forest; the deep learning classifiers were Convolutional Neural Networks, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). The results indicate that the BiGRU deep learning classifier demonstrates high accuracy and efficiency: accuracy of 0.91, precision of 0.90, recall of 0.93, and F1-score of 0.92. For the same algorithm, the true negatives and true positives were 555 and 580, respectively, whereas the false negatives and false positives were 81 and 68, respectively. In conclusion, this research highlights the effectiveness of the BiGRU deep learning classifier in detecting misinformation related to COVID-19, emphasizing its significance for fostering media literacy and resilience against fake news in contemporary society. The implications are significant for higher education and lifelong learners, highlighting the potential of advanced machine learning to help educators and institutions combat the spread of misinformation and promote critical thinking skills among students. By applying these methods to analyze and classify news articles, educators can develop more effective tools and curricula for teaching media literacy and information validation, equipping students with the skills needed to discern between authentic and fabricated information in the context of the COVID-19 pandemic and beyond.
The implications of this research extend to the creation of a society that is resistant to the spread of fake news through social media platforms.
PMID:39700256 | DOI:10.1371/journal.pone.0315407
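The headline metrics can be recomputed from the reported confusion-matrix counts (TP = 580, TN = 555, FP = 68, FN = 81); small differences from the paper's rounded figures may reflect averaging across runs or folds. A sketch:

```python
# Standard classification metrics from confusion-matrix counts.
def metrics(tp: int, tn: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Counts reported for the BiGRU classifier in the abstract above.
p, r, a, f1 = metrics(tp=580, tn=555, fp=68, fn=81)
print(round(p, 2))  # 0.9
```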
Noninvasive Quantitative CT for Diffuse Liver Diseases: Steatosis, Iron Overload, and Fibrosis
Radiographics. 2025 Jan;45(1):e240176. doi: 10.1148/rg.240176.
ABSTRACT
Chronic diffuse liver disease continues to increase in prevalence and represents a global health concern. Noninvasive detection and quantification of hepatic steatosis, iron overload, and fibrosis are critical, especially given the many relative disadvantages and potential risks of invasive liver biopsy. Although MRI techniques have emerged as the preferred reference standard for quantification of liver fat, iron, and fibrosis, CT can play an important role in opportunistic detection of unsuspected disease and is performed at much higher volumes. For hepatic steatosis, noncontrast CT provides a close approximation to MRI-based proton-density fat fraction (PDFF) quantification, with liver attenuation values less than or equal to 40 HU signifying at least moderate steatosis. Liver fat quantification with postcontrast CT is less precise but can generally provide categorical assessment (eg, mild vs moderate steatosis). Noncontrast CT can also trigger appropriate assessment for iron overload when increased parenchymal attenuation values are observed (eg, >75 HU). A variety of morphologic and functional CT features indicate the presence of underlying hepatic fibrosis and cirrhosis. Beyond subjective assessment, quantitative CT methods for staging fibrosis can provide comparable performance to that of elastography. Furthermore, quantitative CT assessment can be performed retrospectively, since prospective techniques are not required. Many of these CT quantitative measures are now fully automated via artificial intelligence (AI) deep learning algorithms. These retrospective and automated advantages have important implications for longitudinal clinical care and research. Ultimately, regardless of the indication for CT, opportunistic detection of steatosis, iron overload, and fibrosis can result in appropriate clinical awareness and management. ©RSNA, 2024 See the invited commentary by Yeh in this issue.
PMID:39700040 | DOI:10.1148/rg.240176
Hormone-responsive progenitors have a unique identity and exhibit high motility during mammary morphogenesis
Cell Rep. 2024 Dec 17;43(12):115073. doi: 10.1016/j.celrep.2024.115073. Online ahead of print.
ABSTRACT
Hormone-receptor-positive (HR+) luminal cells largely mediate the response to estrogen and progesterone during mammary gland morphogenesis. However, there remains a lack of consensus on the precise nature of the precursor cells that maintain this essential HR+ lineage. Here we refine the identification of HR+ progenitors and demonstrate their unique regenerative capacity compared to mature HR+ cells. HR+ progenitors proliferate but do not expand, suggesting rapid differentiation. Subcellular-resolution 3D intravital microscopy was performed on terminal end buds (TEBs) during puberty to dissect the contribution of each luminal lineage. Surprisingly, HR+ TEB progenitors were highly elongated and motile compared to columnar HR- progenitors and static, conoid HR+ cells within ducts. This dynamic behavior was also observed in response to hormones. Development of an AI model for motility dynamics analysis highlighted stark behavioral changes in HR+ progenitors as they transitioned to mature cells. This work provides valuable insights into how progenitor behavior contributes to mammary morphogenesis.
PMID:39700014 | DOI:10.1016/j.celrep.2024.115073
Deep learning for opportunistic, end-to-end automated assessment of epicardial adipose tissue in pre-interventional, ECG-gated spiral computed tomography
Insights Imaging. 2024 Dec 19;15(1):301. doi: 10.1186/s13244-024-01875-6.
ABSTRACT
OBJECTIVES: Recently, epicardial adipose tissue (EAT) assessed by CT was identified as an independent mortality predictor in patients with various cardiac diseases. Our goal was to develop a deep learning pipeline for robust automatic EAT assessment in CT.
METHODS: Contrast-enhanced ECG-gated cardiac and thoraco-abdominal spiral CT imaging from 1502 patients undergoing transcatheter aortic valve replacement (TAVR) was included. Slice selection at aortic valve (AV)-level and EAT segmentation were performed manually as ground truth. For slice extraction, two approaches were compared: A regression model with a 2D convolutional neural network (CNN) and a 3D CNN utilizing reinforcement learning (RL). Performance evaluation was based on mean absolute z-deviation to the manually selected AV-level (Δz). For tissue segmentation, a 2D U-Net was trained on single-slice images at AV-level and compared to the open-source body and organ analysis (BOA) framework using Dice score. Superior methods were selected for end-to-end evaluation, where mean absolute difference (MAD) of EAT area and tissue density were compared. 95% confidence intervals (CI) were assessed for all metrics.
RESULTS: Slice extraction using RL was slightly more precise (Δz: RL 1.8 mm (95% CI: [1.6, 2.0]), 2D CNN 2.0 mm (95% CI: [1.8, 2.3])). For EAT segmentation at AV-level, the 2D U-Net outperformed BOA significantly (Dice score: 2D U-Net 91.3% (95% CI: [90.7, 91.8]), BOA 85.6% (95% CI: [84.7, 86.5])). The end-to-end evaluation revealed high agreement between automatic and manual measurements of EAT (MAD area: 1.1 cm2 (95% CI: [1.0, 1.3]), MAD density: 2.2 Hounsfield units (95% CI: [2.0, 2.5])).
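The evaluation above rests on two standard metrics: the Dice score for segmentation overlap and the mean absolute difference (MAD) for agreement between automatic and manual measurements. A minimal NumPy-based sketch of both (not the authors' code; array shapes and names are illustrative):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect (vacuous) agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def mean_absolute_difference(auto, manual) -> float:
    """MAD between paired automatic and manual measurements,
    e.g. EAT area in cm^2 or mean density in HU."""
    return float(np.mean(np.abs(np.asarray(auto) - np.asarray(manual))))

# Toy example: 2x2 masks sharing one voxel
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(dice_score(pred, gt))                        # 2*1/(2+1) ≈ 0.667
print(mean_absolute_difference([10, 12], [11, 11]))  # 1.0
```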
CONCLUSIONS: We propose a method for robust automatic EAT assessment in spiral CT scans enabling opportunistic evaluation in clinical routine.
CRITICAL RELEVANCE STATEMENT: Since inflammatory changes in epicardial adipose tissue (EAT) are associated with an increased risk of cardiac diseases, automated evaluation can serve as a basis for developing automated cardiac risk assessment tools, which are essential for efficient, large-scale assessment in opportunistic settings.
KEY POINTS: Deep learning methods for automatic assessment of epicardial adipose tissue (EAT) have great potential. A 2-step approach with slice extraction and tissue segmentation enables robust automated evaluation of EAT. End-to-end automation enables large-scale research on the value of EAT for outcome analysis.
PMID:39699798 | DOI:10.1186/s13244-024-01875-6
Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading
Eur Radiol. 2024 Dec 19. doi: 10.1007/s00330-024-11287-1. Online ahead of print.
ABSTRACT
OBJECTIVES: This study aims to evaluate a deep learning pipeline for detecting clinically significant prostate cancer (csPCa), defined as Gleason Grade Group (GGG) ≥ 2, using biparametric MRI (bpMRI) and compare its performance with radiological reading.
MATERIALS AND METHODS: The training dataset included 4381 bpMRI cases (3800 positive and 581 negative) across three continents, with 80% annotated using PI-RADS and 20% with Gleason Scores. The testing set comprised 328 cases from the PROSTATEx dataset, including 34% positive (GGG ≥ 2) and 66% negative cases. A 3D nnU-Net was trained on bpMRI for lesion detection, evaluated using histopathology-based annotations, and assessed with patient- and lesion-level metrics, along with lesion volume and GGG. The algorithm was compared to non-expert radiologists using multi-parametric MRI (mpMRI).
RESULTS: The model achieved an AUC of 0.83 (95% CI: 0.80, 0.87). Lesion-level sensitivity was 0.85 (95% CI: 0.82, 0.94) at 0.5 False Positives per volume (FP/volume) and 0.88 (95% CI: 0.79, 0.92) at 1 FP/volume. Average Precision was 0.55 (95% CI: 0.46, 0.64). The model showed over 0.90 sensitivity for lesions larger than 650 mm³ and exceeded 0.85 across GGGs. It had higher true positive rates (TPRs) than radiologists at equivalent FP rates, achieving TPRs of 0.93 and 0.79 compared to radiologists' 0.87 and 0.68 for PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions (p ≤ 0.05).
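Lesion-level sensitivity "at 0.5 FP/volume" means the detection threshold is chosen so that the false-positive rate per scan stays at or below the target, and sensitivity is read off at that operating point (the FROC convention). A minimal sketch of that computation, assuming per-candidate detection scores already split into true-lesion and false-positive matches (names and data layout are illustrative, not the authors' pipeline):

```python
import numpy as np

def sensitivity_at_fp_rate(lesion_scores, fp_scores, n_volumes, fp_per_volume):
    """Lesion-level sensitivity at a fixed false-positive-per-volume budget.

    lesion_scores: detection scores of candidates matched to true lesions
    fp_scores:     detection scores of unmatched (false-positive) candidates
    n_volumes:     number of scans in the test set
    fp_per_volume: allowed false positives per scan (e.g. 0.5 or 1.0)
    """
    lesion_scores = np.asarray(lesion_scores, dtype=float)
    fp_scores = np.asarray(fp_scores, dtype=float)
    # Scan candidate thresholds from low to high; the lowest admissible
    # threshold maximises sensitivity within the FP budget.
    for t in np.sort(np.unique(np.concatenate([lesion_scores, fp_scores]))):
        if np.sum(fp_scores >= t) / n_volumes <= fp_per_volume:
            return np.sum(lesion_scores >= t) / len(lesion_scores)
    return 0.0

# Toy example: 3 true lesions, 2 false positives, 2 scans, budget 0.5 FP/scan
print(sensitivity_at_fp_rate([0.9, 0.8, 0.3], [0.7, 0.2], 2, 0.5))  # 1.0
```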
CONCLUSION: The DL model showed strong performance in detecting csPCa on an independent test cohort, surpassing radiological interpretation and demonstrating AI's potential to improve diagnostic accuracy for non-expert radiologists. However, detecting small lesions remains challenging.
KEY POINTS: Question Current prostate cancer detection methods often do not involve non-expert radiologists, highlighting the need for more accurate deep learning approaches using biparametric MRI. Findings Our model significantly outperforms radiologists, showing consistent performance across Gleason Grade Groups and for medium to large lesions. Clinical relevance This AI model improves prostate cancer detection accuracy in prostate imaging, serves as a benchmark with reference performance on a public dataset, and offers public PI-RADS annotations, enhancing transparency and facilitating further research and development.
PMID:39699671 | DOI:10.1007/s00330-024-11287-1