Deep learning

Automatic tooth periodontal ligament segmentation of cone beam computed tomography based on instance segmentation network

Wed, 2024-01-31 06:00

Heliyon. 2024 Jan 9;10(2):e24097. doi: 10.1016/j.heliyon.2024.e24097. eCollection 2024 Jan 30.

ABSTRACT

OBJECTIVE: The three-dimensional morphological structures of periodontal ligaments (PDLs) are important data for periodontal, orthodontic, prosthodontic, and implant interventions. This study aimed to employ a deep learning (DL) algorithm to segment the PDL automatically in cone-beam computed tomography (CBCT).

METHOD: This was a retrospective study. We randomly selected 389 patients and 1734 axial CBCT images from the CBCT database and designed a fully automatic PDL segmentation computer-aided model based on the instance segmentation network Mask R-CNN. The training labels were 'teeth' and 'alveolar bone', and the 'PDL' was defined as the region where 'teeth' and 'alveolar bone' overlap. The model's segmentation performance was evaluated using CBCT data from eight patients outside the database.
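
The overlap definition above reduces PDL extraction to a boolean intersection of the two predicted instance masks. A minimal sketch of that post-processing step (the mask layout and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def extract_pdl(teeth_mask: np.ndarray, bone_mask: np.ndarray) -> np.ndarray:
    """Return the PDL region as the pixel-wise overlap of two binary masks."""
    return np.logical_and(teeth_mask > 0, bone_mask > 0)

# Toy example: two overlapping 2-D binary masks on a 5x5 grid.
teeth = np.zeros((5, 5), dtype=np.uint8)
bone = np.zeros((5, 5), dtype=np.uint8)
teeth[1:4, 1:4] = 1   # tooth region
bone[2:5, 2:5] = 1    # alveolar bone region around the socket
pdl = extract_pdl(teeth, bone)
print(int(pdl.sum()))  # → 4 overlapping pixels
```

In the paper this intersection is taken per axial slice on the Mask R-CNN outputs; the same boolean operation extends unchanged to 3-D volumes.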

RESULTS: Qualitative evaluation indicates that the PDL segmentation accuracy for incisors, canines, premolars, wisdom teeth, and implants reached 100%, and that for molars was 96.4%. Quantitative evaluation indicates that the mIoU and mDSC of PDL segmentation were 0.667 ± 0.015 (>0.6) and 0.799 ± 0.015 (>0.7), respectively.

CONCLUSION: This study analysed a unique approach to AI-driven automatic segmentation of PDLs on CBCT imaging, possibly enabling chair-side measurement of PDLs to help periodontists, orthodontists, prosthodontists, and implantologists make more efficient and accurate diagnoses and treatment plans.

PMID:38293338 | PMC:PMC10827460 | DOI:10.1016/j.heliyon.2024.e24097

Categories: Literature Watch

Editorial: Multi-modal learning and its application for biomedical data

Wed, 2024-01-31 06:00

Front Med (Lausanne). 2024 Jan 16;10:1342374. doi: 10.3389/fmed.2023.1342374. eCollection 2023.

NO ABSTRACT

PMID:38293296 | PMC:PMC10824823 | DOI:10.3389/fmed.2023.1342374

Categories: Literature Watch

TIE-GANs: single-shot quantitative phase imaging using transport of intensity equation with integration of GANs

Wed, 2024-01-31 06:00

J Biomed Opt. 2024 Jan;29(1):016010. doi: 10.1117/1.JBO.29.1.016010. Epub 2024 Jan 30.

ABSTRACT

SIGNIFICANCE: Artificial intelligence (AI) has become a prominent technology in computational imaging over the past decade. The expeditious and label-free characteristics of quantitative phase imaging (QPI) render it a promising contender for AI investigation. Though interferometric methodologies exhibit potential efficacy, their implementation involves complex experimental platforms and computationally intensive reconstruction procedures. Hence, non-interferometric methods, such as transport of intensity equation (TIE), are preferred over interferometric methods.

AIM: The TIE method, despite its effectiveness, is tedious because it requires acquiring many images at varying defocus planes. The proposed methodology can generate a phase image from a single intensity image using generative adversarial networks (GANs). We present a method called TIE-GANs to overcome the multi-shot scheme of conventional TIE.

APPROACH: The present investigation employs the TIE as a QPI methodology, which requires reduced experimental and computational effort; TIE is also used for dataset preparation, with images captured at different defocus planes for training. Our approach is based on GANs and uses an image-to-image translation technique to produce phase maps. The main contribution of this work is the introduction of GANs with TIE (TIE-GANs), which gives better phase reconstruction results with shorter computation times. This is the first time GANs have been proposed for TIE phase retrieval.
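
For context, the conventional TIE inversion that the GAN replaces at inference time solves a Poisson-type equation for the phase from the axial intensity derivative. A minimal FFT-based sketch under the common uniform-intensity assumption (variable names and the regularizer are assumptions, not the paper's implementation):

```python
import numpy as np

def tie_phase(dIdz, I0, k, pixel_size, eps=1e-9):
    """Recover phase from the axial intensity derivative via the transport
    of intensity equation, assuming near-uniform intensity I0:
        laplacian(phi) = -(k / I0) * dI/dz
    Solved in Fourier space with a small regularizer eps."""
    ny, nx = dIdz.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    lap = -4 * np.pi**2 * (FX**2 + FY**2)      # Fourier symbol of the Laplacian
    rhs = -(k / I0) * dIdz
    phi_hat = np.fft.fft2(rhs) / (lap - eps)   # regularized inverse Laplacian
    phi_hat[0, 0] = 0.0                        # phase is defined up to a constant
    return np.real(np.fft.ifft2(phi_hat))

# Hypothetical example: random intensity derivative, 633 nm illumination.
phi = tie_phase(np.random.randn(64, 64), I0=1.0,
                k=2 * np.pi / 633e-9, pixel_size=1e-6)
print(phi.shape)
```

In practice dI/dz is approximated by a finite difference of two or more defocused intensity images, which is exactly the multi-shot burden the single-shot GAN avoids.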

RESULTS: The system was characterized with 4 μm microbeads, for which the structural similarity index (SSIM) was 0.98. We demonstrated the proposed method on oral cells, which yielded a maximum SSIM of 0.95. The key metrics include mean squared error and peak signal-to-noise ratio values of 140 and 26.42 dB for oral cells and 100 and 28.10 dB for microbeads.

CONCLUSIONS: The proposed methodology can generate a phase image from a single intensity image. The reported high SSIM values make the method feasible for digital cytology. Our approach handles defocused inputs: it can take an intensity image from any defocus plane within the provided range and generate the corresponding phase map.

PMID:38293292 | PMC:PMC10826717 | DOI:10.1117/1.JBO.29.1.016010

Categories: Literature Watch

Deep Survival Analysis for Interpretable Time-Varying Prediction of Preeclampsia Risk

Wed, 2024-01-31 06:00

medRxiv. 2024 Jan 19:2024.01.18.24301456. doi: 10.1101/2024.01.18.24301456. Preprint.

ABSTRACT

OBJECTIVE: Survival analysis is widely utilized in healthcare to predict the timing of disease onset. Traditional methods of survival analysis are usually based on the Cox proportional hazards model and assume proportional risk for all subjects. However, this assumption rarely holds, as the underlying factors have complex, non-linear, and time-varying relationships. This concern is especially relevant for pregnancy, where the risk for pregnancy-related complications, such as preeclampsia, varies across gestation. Recently, deep learning survival models have shown promise in addressing the limitations of classical models, as they allow for non-proportional risk handling, capture nonlinear relationships, and navigate complex temporal dynamics.

METHODS: We present a methodology to model the temporal risk of preeclampsia during pregnancy and investigate the associated clinical risk factors. We utilized a retrospective dataset of 66,425 pregnant individuals who delivered in two tertiary care centers from 2015 to 2023. We modeled preeclampsia risk by modifying DeepHit, a deep survival model that leverages a neural network architecture to capture time-varying relationships between covariates in pregnancy. We applied time series k-means clustering to DeepHit's normalized output and investigated interpretability using Shapley values.
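
DeepHit outputs a discrete probability mass function over event-time bins; cumulating it gives the time-varying risk trajectory that the k-means step then clusters. A minimal sketch with hypothetical values:

```python
import numpy as np

def pmf_to_risk(pmf: np.ndarray) -> np.ndarray:
    """Convert a DeepHit-style discrete event-time PMF (per subject, per time
    bin) into a cumulative incidence curve F(t) = P(T <= t)."""
    return np.cumsum(pmf, axis=-1)

# Toy output for 3 subjects over 5 gestational-time bins (rows sum to <= 1;
# the remainder is the probability of no event within the horizon).
pmf = np.array([[0.05, 0.05, 0.10, 0.20, 0.30],
                [0.01, 0.02, 0.02, 0.05, 0.10],
                [0.20, 0.30, 0.20, 0.10, 0.05]])
cif = pmf_to_risk(pmf)
print(cif[:, -1])  # cumulative preeclampsia risk by end of horizon
```

Each row of `cif` is one patient's risk trajectory; feeding these curves (after normalization) to time series k-means yields the low-risk, early-onset, and late-onset groupings described in the results.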

RESULTS: We demonstrate that DeepHit can effectively handle high-dimensional data and evolving risk hazards over time, with performance similar to the Cox proportional hazards model: both achieved an area under the curve (AUC) of 0.78. The deep survival model outperformed traditional methodology by identifying time-varying risk trajectories for preeclampsia, providing insights for early and individualized intervention. K-means clustering delineated patients into low-risk, early-onset, and late-onset preeclampsia groups; notably, each group has distinct risk factors.

CONCLUSION: This work demonstrates a novel application of deep survival analysis to time-varying prediction of preeclampsia risk. Our results highlight the advantage of deep survival models over Cox proportional hazards models in providing personalized risk trajectories, and demonstrate the potential of deep survival models to yield interpretable and clinically meaningful applications in medicine.

PMID:38293230 | PMC:PMC10827248 | DOI:10.1101/2024.01.18.24301456

Categories: Literature Watch

Large-scale comparison of machine learning methods for profiling prediction of kinase inhibitors

Tue, 2024-01-30 06:00

J Cheminform. 2024 Jan 30;16(1):13. doi: 10.1186/s13321-023-00799-5.

ABSTRACT

Conventional machine learning (ML) and deep learning (DL) play a key role in the selectivity prediction of kinase inhibitors. A number of models based on available datasets can be used to predict the kinase profile of compounds, but there is still controversy about the advantages and disadvantages of ML and DL for such tasks. In this study, we constructed a comprehensive benchmark dataset of kinase inhibitors, involving 141,086 unique compounds and 216,823 well-defined bioassay data points for 354 kinases. We then systematically compared the performance of 12 ML and DL methods on the kinase profiling prediction task. Extensive experimental results reveal that (1) descriptor-based ML models generally slightly outperform fingerprint-based ML models in predictive performance, with RF, an ensemble learning approach, displaying the overall best predictive performance; (2) single-task graph-based DL models are generally inferior to conventional descriptor- and fingerprint-based ML models, but the corresponding multi-task models generally improve the average accuracy of kinase profile prediction. For example, the multi-task FP-GNN model outperforms the conventional descriptor- and fingerprint-based ML models with an average AUC of 0.807. (3) Fusion models based on voting and stacking methods can further improve performance on the kinase profiling prediction task; specifically, the RF::AtomPairs + FP2 + RDKitDes fusion model performs best, with the highest average AUC value of 0.825 on the test sets. These findings provide useful information for guiding the choice of ML and DL methods for kinase profiling prediction tasks. Finally, an online platform called KIPP ( https://kipp.idruglab.cn ) and python software were developed based on the best models to support kinase profiling prediction, as well as various kinase inhibitor identification tasks including virtual screening, compound repositioning and target fishing.
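
The voting-based fusion mentioned in point (3) can be as simple as averaging the base models' predicted probabilities (stacking would instead fit a meta-learner on them). A minimal sketch with hypothetical base-model outputs:

```python
import numpy as np

def soft_vote(prob_list):
    """Fuse per-model predicted probabilities (each of shape
    n_compounds x n_kinases) by averaging: a simple soft-voting fusion."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

# Hypothetical outputs of three base models for 2 compounds x 2 kinases.
p1 = np.array([[0.9, 0.2], [0.4, 0.7]])
p2 = np.array([[0.8, 0.3], [0.5, 0.6]])
p3 = np.array([[0.7, 0.1], [0.6, 0.8]])
fused = soft_vote([p1, p2, p3])
print(fused)  # averaged activity probabilities per compound-kinase pair
```

Averaging tends to reduce variance across heterogeneous base learners, which is consistent with the fusion models' higher average AUC reported above.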

PMID:38291477 | DOI:10.1186/s13321-023-00799-5

Categories: Literature Watch

Application of machine learning models on predicting the length of hospital stay in fragility fracture patients

Tue, 2024-01-30 06:00

BMC Med Inform Decis Mak. 2024 Jan 30;24(1):26. doi: 10.1186/s12911-024-02417-2.

ABSTRACT

BACKGROUND: The rate of geriatric hip fracture in Hong Kong is increasing steadily, and the mortality associated with fragility fracture is high. Moreover, fragility fracture patients increase the pressure on hospital bed demand. Hence, this study aims to develop a predictive model of the length of hospital stay (LOS) of geriatric fragility fracture patients using machine learning (ML) techniques.

METHODS: In this study, we use basic patient information, such as gender, age, and residence type, and medical parameters, such as the modified functional ambulation classification (MFAC) score, elderly mobility scale (EMS), and modified Barthel index (MBI), to predict whether the length of stay will exceed 21 days.

RESULTS: Our results are promising despite the relatively small sample size of about 8,000 records. We developed models with three approaches: (1) regularized gradient boosting frameworks, (2) a custom-built artificial neural network, and (3) Google's Wide & Deep Learning technique. Our best results came from the Wide & Deep model, with an accuracy of 0.79, a precision of 0.73, and an area under the receiver operating characteristic curve (AUC-ROC) of 0.84. Feature importance analysis indicates that (1) the type of hospital the patient is admitted to, (2) the patient's mental state, and (3) the length of stay at the acute hospital all have a relatively strong impact on the length of stay in palliative care.

CONCLUSIONS: Applying ML techniques to improve quality and efficiency in the healthcare sector is becoming popular in Hong Kong and around the globe, but there has not yet been research related to fragility fracture. Integrating machine learning may help health-care professionals better identify fragility fracture patients at risk of prolonged hospital stays. These findings underline the usefulness of machine learning techniques in optimizing resource allocation by identifying high-risk individuals and providing appropriate management to improve treatment outcomes.

PMID:38291406 | DOI:10.1186/s12911-024-02417-2

Categories: Literature Watch

Automatic dental age calculation from panoramic radiographs using deep learning: a two-stage approach with object detection and image classification

Tue, 2024-01-30 06:00

BMC Oral Health. 2024 Jan 31;24(1):143. doi: 10.1186/s12903-024-03928-0.

ABSTRACT

BACKGROUND: Dental age is crucial for treatment planning in pediatric and orthodontic dentistry. Dental age calculation methods can be categorized into morphological, biochemical, and radiological methods. Radiological methods are commonly used because they are non-invasive and reproducible. When radiographs are available, dental age can be calculated by evaluating the developmental stage of permanent teeth and converting it into an estimated age using a table, or by measuring lengths between landmarks such as the tooth, root, or pulp and substituting them into regression formulas. However, these methods depend heavily on manual, time-consuming processes. In this study, we propose a novel, completely automatic dental age calculation method using panoramic radiographs and deep learning techniques.

METHODS: Overall, 8,023 panoramic radiographs were used as training data for Scaled-YOLOv4 to detect dental germs, and mean average precision was evaluated. In total, 18,485 single-root and 16,313 multi-root dental germ images were used as training data for EfficientNetV2 M to classify the developmental stages of detected dental germs; Top-3 accuracy was evaluated because adjacent developmental stages look similar and considerable morphological variation can be observed within each stage. Scaled-YOLOv4 and EfficientNetV2 M were trained using cross-validation. We evaluated single selection, a weighted average, and an expected value to convert the probabilities of developmental-stage classification into dental age. One hundred and fifty-seven panoramic radiographs were used to compare automatic and manual (human expert) dental age calculations.
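
The three probability-to-age conversions can be sketched as follows; the stage-to-age table and the Top-k weighting shown here are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

# Hypothetical mapping from developmental-stage index to mean age (years);
# the paper's actual conversion table is not reproduced here.
STAGE_AGE = np.array([4.0, 5.5, 7.0, 8.5, 10.0])

def single_selection(p):
    """Take the age of the single most probable stage."""
    return float(STAGE_AGE[int(np.argmax(p))])

def expected_value(p):
    """Expectation of age over the full stage distribution."""
    return float(np.sum(p * STAGE_AGE))

def weighted_average(p, k=3):
    """Average the ages of the Top-k stages, weighted by their probabilities."""
    top = np.argsort(p)[-k:]
    return float(np.sum(p[top] * STAGE_AGE[top]) / np.sum(p[top]))

p = np.array([0.05, 0.10, 0.60, 0.20, 0.05])  # classifier output for one germ
print(single_selection(p), expected_value(p), weighted_average(p))
```

The weighted average restricts the expectation to the most probable stages, which damps the influence of low-probability outlier stages; that is one plausible reading of why it gave the lowest mean absolute error.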

RESULTS: Dental germ detection achieved a mean average precision of 98.26%, and the dental germ classifiers for single- and multi-root teeth achieved Top-3 accuracies of 98.46% and 98.36%, respectively. The mean absolute errors between automatic and manual dental age calculations using single selection, weighted average, and expected value were 0.274, 0.261, and 0.396, respectively. The weighted average was better than the other methods and was accurate to within one developmental-stage error.

CONCLUSION: Our study demonstrates the feasibility of automatic dental age calculation using panoramic radiographs and a two-stage deep learning approach with a clinically acceptable level of accuracy.

PMID:38291396 | DOI:10.1186/s12903-024-03928-0

Categories: Literature Watch

CCL-DTI: contributing the contrastive loss in drug-target interaction prediction

Tue, 2024-01-30 06:00

BMC Bioinformatics. 2024 Jan 30;25(1):48. doi: 10.1186/s12859-024-05671-3.

ABSTRACT

BACKGROUND: Drug-Target Interaction (DTI) prediction uses a drug molecule and a protein sequence as inputs to predict the binding affinity value. In recent years, deep learning-based models have received more attention. These methods have two modules: a feature extraction module and a task prediction module. In most deep learning-based approaches, a simple task prediction loss (i.e., categorical cross-entropy for classification and mean squared error for regression) is used to learn the model. In machine learning, contrastive loss functions are developed to learn a more discriminative feature space, and in a deep learning-based model, a more discriminative feature space leads to performance improvement in the task prediction module.

RESULTS: In this paper, we use multimodal knowledge as input and propose an attention-based fusion technique to combine this knowledge. We also investigate how utilizing a contrastive loss function alongside the task prediction loss helps the approach learn a more powerful model. Four contrastive loss functions are considered: (1) the max-margin contrastive loss, (2) the triplet loss, (3) the multi-class N-pair loss objective, and (4) the NT-Xent loss. The proposed model is evaluated using four well-known datasets: the Wang et al. dataset, Luo's dataset, and the Davis and KIBA datasets.
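
As one example of the four losses, the max-margin contrastive loss pulls embeddings of interacting drug-target pairs together and pushes non-interacting pairs at least a margin apart. A minimal NumPy sketch (embeddings and labels are hypothetical):

```python
import numpy as np

def max_margin_contrastive(z1, z2, y, margin=1.0):
    """Max-margin contrastive loss on embedding pairs.
    y = 1 for similar pairs (pulled together), y = 0 for dissimilar pairs
    (pushed at least `margin` apart)."""
    d = np.linalg.norm(z1 - z2, axis=1)          # Euclidean pair distances
    loss_pos = y * d**2                          # penalize distant similar pairs
    loss_neg = (1 - y) * np.maximum(0.0, margin - d)**2  # penalize close dissimilar pairs
    return float(np.mean(loss_pos + loss_neg))

# Hypothetical 2-D drug/target embeddings for two pairs.
z1 = np.array([[0.0, 0.0], [1.0, 0.0]])
z2 = np.array([[0.1, 0.0], [0.0, 0.0]])
y = np.array([1, 0])  # first pair interacts, second does not
loss = max_margin_contrastive(z1, z2, y)
print(loss)
```

In CCL-DTI this term would be added to the task prediction loss (cross-entropy or MSE) with some weighting; the weighting scheme is not shown here.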

CONCLUSIONS: After reviewing the state-of-the-art methods, we developed a multimodal feature extraction network that combines protein sequences and drug molecules with protein-protein interaction and drug-drug interaction networks. The results show that it performs significantly better than comparable state-of-the-art approaches.

PMID:38291364 | DOI:10.1186/s12859-024-05671-3

Categories: Literature Watch

Structure-aware deep model for MHC-II peptide binding affinity prediction

Tue, 2024-01-30 06:00

BMC Genomics. 2024 Jan 30;25(1):127. doi: 10.1186/s12864-023-09900-6.

ABSTRACT

The prediction of major histocompatibility complex (MHC)-peptide binding affinity is an important branch of immune bioinformatics, especially helpful in accelerating the design of disease vaccines and immunity therapy. Although deep learning-based solutions have yielded promising results on MHC-II molecules in recent years, these methods ignore structural knowledge of each peptide when employing deep neural network models. Each peptide sequence has its specific combination order, so it is worth adding the structural information of the peptide sequence to deep model training. In this work, we use positional encoding to represent the structural information of peptide sequences and combine the positional encoding with existing models through different strategies. Experiments on three datasets show that introducing positional information further improves performance over the existing models. The idea of introducing positional encoding to this field can serve as an important reference for optimizing deep network structures in the future.
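
The positional encoding referred to here is commonly the sinusoidal scheme from the Transformer literature, added to per-residue embeddings; a minimal sketch (peptide length and embedding size are illustrative):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding: even feature indices get sine,
    odd indices get cosine, at geometrically spaced frequencies."""
    pos = np.arange(seq_len)[:, None]            # position of each residue
    i = np.arange(d_model)[None, :]              # feature index
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

# Hypothetical 15-mer MHC-II peptide with an 8-dimensional embedding:
pe = positional_encoding(seq_len=15, d_model=8)
print(pe.shape)  # one row per residue position
```

Adding `pe` to the residue embedding matrix injects each residue's position in the sequence, which is the structural information the paper argues existing MHC-II models discard.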

PMID:38291350 | DOI:10.1186/s12864-023-09900-6

Categories: Literature Watch

AI-driven estimation of O6 methylguanine-DNA-methyltransferase (MGMT) promoter methylation in glioblastoma patients: a systematic review with bias analysis

Tue, 2024-01-30 06:00

J Cancer Res Clin Oncol. 2024 Jan 31;150(2):57. doi: 10.1007/s00432-023-05566-5.

ABSTRACT

BACKGROUND: Accurate and non-invasive estimation of MGMT promoter methylation status in glioblastoma (GBM) patients is of paramount clinical importance, as it is a predictive biomarker associated with improved overall survival (OS). In response to the clinical need, recent studies have focused on the development of non-invasive artificial intelligence (AI)-based methods for MGMT estimation. In this systematic review, we not only delve into the technical aspects of these AI-driven MGMT estimation methods but also emphasize their profound clinical implications. Specifically, we explore the potential impact of accurate non-invasive MGMT estimation on GBM patient care and treatment decisions.

METHODS: Employing a PRISMA search strategy, we identified 33 relevant studies from reputable databases, including PubMed, ScienceDirect, Google Scholar, and IEEE Xplore. These studies were comprehensively assessed using 21 diverse attributes, encompassing factors such as types of imaging modalities, machine learning (ML) methods, and cohort sizes, with clear rationales for attribute scoring. Subsequently, we ranked these studies and established a cutoff value to categorize them into low-bias and high-bias groups.

RESULTS: By analyzing the 'cumulative plot of mean score' and the 'frequency plot curve' of the studies, we determined a cutoff value of 6.00. A higher mean score indicated a lower risk of bias, with studies scoring above the cutoff mark categorized as low-bias (73%), while 27% fell into the high-bias category.

CONCLUSION: Our findings underscore the immense potential of AI-based machine learning (ML) and deep learning (DL) methods in non-invasively determining MGMT promoter methylation status. Importantly, the clinical significance of these AI-driven advancements lies in their capacity to transform GBM patient care by providing accurate and timely information for treatment decisions. However, the translation of these technical advancements into clinical practice presents challenges, including the need for large multi-institutional cohorts and the integration of diverse data types. Addressing these challenges will be critical in realizing the full potential of AI in improving the reliability and accessibility of MGMT estimation while lowering the risk of bias in clinical decision-making.

PMID:38291266 | DOI:10.1007/s00432-023-05566-5

Categories: Literature Watch

Deep learning for protein structure prediction and design-progress and applications

Tue, 2024-01-30 06:00

Mol Syst Biol. 2024 Jan 30. doi: 10.1038/s44320-024-00016-x. Online ahead of print.

ABSTRACT

Proteins are the key molecular machines that orchestrate all biological processes of the cell. Most proteins fold into three-dimensional shapes that are critical for their function. Studying the 3D shape of proteins can inform us of the mechanisms that underlie biological processes in living cells and can have practical applications in the study of disease mutations or the discovery of novel drug treatments. Here, we review the progress made in sequence-based prediction of protein structures with a focus on applications that go beyond the prediction of single monomer structures. This includes the application of deep learning methods for the prediction of structures of protein complexes, different conformations, the evolution of protein structures and the application of these methods to protein design. These developments create new opportunities for research that will have impact across many areas of biomedical research.

PMID:38291232 | DOI:10.1038/s44320-024-00016-x

Categories: Literature Watch

Machine learning identifies key metabolic reactions in bacterial growth on different carbon sources

Tue, 2024-01-30 06:00

Mol Syst Biol. 2024 Jan 30. doi: 10.1038/s44320-024-00017-w. Online ahead of print.

ABSTRACT

Carbon source-dependent control of bacterial growth is fundamental to bacterial physiology and survival. However, pinpointing the metabolic steps important for cell growth is challenging due to the complexity of cellular networks. Here, an elastic net model and a multilayer perceptron model that integrate genome-wide gene-deletion data and simulated flux distributions were constructed to identify metabolic reactions beneficial or detrimental to Escherichia coli grown on 30 different carbon sources. Both models outperformed traditional in silico methods by identifying not just essential reactions but also nonessential ones that promote growth. They successfully predicted metabolic reactions beneficial to cell growth, with high convergence between the models. The models revealed that biosynthetic pathways generally promote growth across various carbon sources, whereas the impact of energy-generating pathways varies with the carbon source. Intriguing predictions were experimentally validated, including findings beyond the experimental training data and the impact of various carbon sources on the glyoxylate shunt, the pyruvate dehydrogenase reaction, and redundant purine biosynthesis reactions. These results highlight the practical significance and predictive power of the models for understanding and engineering microbial metabolism.

PMID:38291231 | DOI:10.1038/s44320-024-00017-w

Categories: Literature Watch

GMean-a semi-supervised GRU and K-mean model for predicting the TF binding site

Tue, 2024-01-30 06:00

Sci Rep. 2024 Jan 30;14(1):2539. doi: 10.1038/s41598-024-52933-4.

ABSTRACT

The transcription factor binding site is a deoxyribonucleic acid (DNA) sequence that binds to transcription factors. Transcription factors are proteins that regulate gene transcription. Abnormal turnover of transcription factors can lead to uncontrolled cell growth. Therefore, discovering the relationships between transcription factors and DNA sequences is an important component of bioinformatics research. Numerous deep learning and machine learning language models have been developed for these tasks. Our goal in this work is to propose the GMean model for predicting unlabelled DNA sequences. GMean is a hybrid model combining a gated recurrent unit and k-means clustering, developed in three phases. The labelled and unlabelled data are processed based on k-mers and tokenization; the labelled data are used for training, and the unlabelled data are used for testing and prediction. The experimental data consist of DNA sequences from the GM12878, K562 and HepG2 cell lines. The experimental results show that GMean is feasible and effective in predicting DNA sequences, with the highest accuracy of 91.85% when predicting between K562 and HepG2, followed by prediction between GM12878 and K562 with an accuracy of 89.13%; the lowest accuracy, 88.80%, is for prediction between HepG2 and GM12878.
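
The k-mer and tokenization preprocessing mentioned above can be sketched in a few lines (the choice of k and the vocabulary scheme are illustrative assumptions):

```python
def kmer_tokenize(seq: str, k: int = 3):
    """Split a DNA sequence into overlapping k-mers (stride 1)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_vocab(kmer_lists):
    """Map each distinct k-mer to an integer token id, in order of first use."""
    vocab = {}
    for kmers in kmer_lists:
        for km in kmers:
            vocab.setdefault(km, len(vocab))
    return vocab

kmers = kmer_tokenize("ACGTAC", k=3)
print(kmers)                         # → ['ACG', 'CGT', 'GTA', 'TAC']
vocab = build_vocab([kmers])
print([vocab[km] for km in kmers])   # → [0, 1, 2, 3]
```

The resulting integer token sequences are what a GRU consumes; the unlabelled sequences would be tokenized with the same vocabulary before prediction.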

PMID:38291225 | DOI:10.1038/s41598-024-52933-4

Categories: Literature Watch

Author Correction: Application of deep learning technology for temporal analysis of videofluoroscopic swallowing studies

Tue, 2024-01-30 06:00

Sci Rep. 2024 Jan 30;14(1):2526. doi: 10.1038/s41598-024-52899-3.

NO ABSTRACT

PMID:38291119 | DOI:10.1038/s41598-024-52899-3

Categories: Literature Watch

Direct estimation of the noise power spectrum from patient data to generate synthesized CT noise for denoising network training

Tue, 2024-01-30 06:00

Med Phys. 2024 Jan 30. doi: 10.1002/mp.16963. Online ahead of print.

ABSTRACT

BACKGROUND: Developing a deep-learning network for denoising low-dose CT (LDCT) images necessitates paired computed tomography (CT) images acquired at different dose levels. However, it is challenging to obtain these images from the same patient.

PURPOSE: In this study, we introduce a novel approach to generate CT images at different dose levels.

METHODS: Our method involves direct estimation of the quantum noise power spectrum (NPS) from patient CT images without the need for prior information. We model the anatomical NPS using a power-law function, estimate the quantum NPS by removing the anatomical NPS from the measured NPS, and create synthesized quantum noise by applying the estimated quantum NPS as a filter to random noise. By adding the synthesized noise to CT images, synthesized CT images can be generated as if they had been acquired at a lower dose. This leads to the generation of paired images at different dose levels for training denoising networks.
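
The final synthesis step (filtering random noise with the estimated quantum NPS) can be sketched as follows; the NPS shape used here is a hypothetical stand-in for a spectrum estimated from patient data:

```python
import numpy as np

def synthesize_noise(quantum_nps, seed=None):
    """Generate a 2-D noise realization whose power spectrum follows the
    estimated quantum NPS, by filtering white noise in Fourier space with
    sqrt(NPS)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(quantum_nps.shape)
    filt = np.sqrt(np.maximum(quantum_nps, 0.0))   # amplitude filter
    return np.real(np.fft.ifft2(np.fft.fft2(white) * filt))

# Hypothetical isotropic quantum NPS on a 64x64 frequency grid.
f = np.fft.fftfreq(64)
FX, FY = np.meshgrid(f, f)
nps = np.exp(-(FX**2 + FY**2) / 0.05)
noise = synthesize_noise(nps, seed=0)
print(noise.shape)
# Adding `noise` to a routine-dose slice simulates the lower-dose partner image.
```

Because the NPS is the squared magnitude of the noise spectrum, the filter is its square root; any negative values left after anatomical-NPS subtraction are clipped to zero before filtering.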

RESULTS: The proposed method accurately estimates the reference quantum NPS. The denoising network trained with paired data generated using synthesized quantum noise achieves denoising performance comparable to networks trained using Mayo Clinic data, as evidenced by the mean-squared-error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) scores.

CONCLUSIONS: This approach offers a promising solution for LDCT image denoising network development without the need for multiple scans of the same patient at different doses.

PMID:38289987 | DOI:10.1002/mp.16963

Categories: Literature Watch

Artificial intelligence in fracture detection with different image modalities and data types: A systematic review and meta-analysis

Tue, 2024-01-30 06:00

PLOS Digit Health. 2024 Jan 30;3(1):e0000438. doi: 10.1371/journal.pdig.0000438. eCollection 2024 Jan.

ABSTRACT

Artificial Intelligence (AI), encompassing Machine Learning and Deep Learning, has increasingly been applied to fracture detection using diverse imaging modalities and data types. This systematic review and meta-analysis aimed to assess the efficacy of AI in detecting fractures through various imaging modalities and data types (image, tabular, or both) and to synthesize the existing evidence related to AI-based fracture detection. Peer-reviewed studies developing and validating AI for fracture detection were identified through searches in multiple electronic databases without time limitations. A hierarchical meta-analysis model was used to calculate pooled sensitivity and specificity. A diagnostic accuracy quality assessment was performed to evaluate bias and applicability. Of the 66 eligible studies, 54 identified fractures using imaging-related data, nine using tabular data, and three using both. Vertebral fractures were the most common outcome (n = 20), followed by hip fractures (n = 18). Hip fractures exhibited the highest pooled sensitivity (92%; 95% CI: 87-96, p < 0.01) and specificity (90%; 95% CI: 85-93, p < 0.01). Pooled sensitivity and specificity using image data (92%; 95% CI: 90-94, p < 0.01; and 91%; 95% CI: 88-93, p < 0.01) were higher than those using tabular data (81%; 95% CI: 77-85, p < 0.01; and 83%; 95% CI: 76-88, p < 0.01), respectively. Radiographs demonstrated the highest pooled sensitivity (94%; 95% CI: 90-96, p < 0.01) and specificity (92%; 95% CI: 89-94, p < 0.01). Patient selection and reference standards were major concerns in assessing diagnostic accuracy for bias and applicability. AI displays high diagnostic accuracy for various fracture outcomes, indicating potential utility in healthcare systems for fracture diagnosis. However, enhanced transparency in reporting and adherence to standardized guidelines are necessary to improve the clinical applicability of AI. Review Registration: PROSPERO (CRD42021240359).

PMID:38289965 | DOI:10.1371/journal.pdig.0000438

Categories: Literature Watch

Reservoir parameters prediction based on spatially transferred long short-term memory network

Tue, 2024-01-30 06:00

PLoS One. 2024 Jan 30;19(1):e0296506. doi: 10.1371/journal.pone.0296506. eCollection 2024.

ABSTRACT

Reservoir reconstruction, in which parameter prediction plays a key role, is an extremely important part of oil and gas reservoir exploration. With the maturing of artificial intelligence, parameter prediction methods are gradually shifting from petrophysical models to deep learning models, which bring clear improvements in accuracy and efficiency. However, the large amounts of data required for deep learning are difficult to acquire due to detection costs, technical difficulties, and the limitations of complex geological parameters. To address this data shortage, this paper proposes a transfer learning prediction model based on long short-term memory neural networks, with the model structure determined by parameter search and optimization. The proposed approach transfers knowledge from historical data to enhance new-well prediction by sharing some parameters in the neural network structure. The practicality and effectiveness of the method were tested by comparison on two block datasets. The results showed that the method can significantly improve the prediction accuracy of reservoir parameters when data are scarce.

PMID:38289937 | DOI:10.1371/journal.pone.0296506

Categories: Literature Watch

ParaPET: non-invasive deep learning method for direct parametric brain PET reconstruction using histoimages

Tue, 2024-01-30 06:00

EJNMMI Res. 2024 Jan 30;14(1):10. doi: 10.1186/s13550-024-01072-y.

ABSTRACT

BACKGROUND: The indirect method for generating parametric images in positron emission tomography (PET) involves the acquisition and reconstruction of dynamic images and temporal modelling of tissue activity given a measured arterial input function. This approach is not robust, as noise in each dynamic image leads to a degradation in parameter estimation. Direct methods incorporate into the image reconstruction step both the kinetic and noise models, leading to improved parametric images. These methods require extensive computational time and large computing resources. Machine learning methods have demonstrated significant potential in overcoming these challenges. But they are limited by the requirement of a paired training dataset. A further challenge within the existing framework is the use of state-of-the-art arterial input function estimation via temporal arterial blood sampling, which is an invasive procedure, or an additional magnetic resonance imaging (MRI) scan for selecting a region where arterial blood signal can be measured from the PET image. We propose a novel machine learning approach for reconstructing high-quality parametric brain images from histoimages produced from time-of-flight PET data without requiring invasive arterial sampling, an MRI scan, or paired training data from standard field-of-view scanners.

RESULT: The proposed method was tested on a simulated phantom and on five oncological subjects undergoing an 18F-FDG PET scan of the brain on a Siemens Biograph Vision Quadra. Kinetic parameters set in the brain phantom correlated strongly with the estimated parameters (K1, k2, and k3; Pearson correlation coefficients of 0.91, 0.92, and 0.93), with a mean squared error below 0.0004. In addition, our method significantly outperformed (p < 0.05, paired t-test) the conventional nonlinear least squares method in terms of contrast-to-noise ratio. Finally, the proposed method was 37% faster than the conventional method.
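The phantom evaluation above rests on two standard metrics: the Pearson correlation between the true and estimated kinetic parameters, and their mean squared error. A minimal sketch with synthetic stand-in values (not the study's data) shows how both are computed:

```python
import numpy as np

# Hypothetical ground-truth vs. estimated kinetic parameter (e.g. K1)
# across phantom voxels; the values are illustrative only.
rng = np.random.default_rng(1)
true_k1 = rng.uniform(0.05, 0.15, size=500)   # set in the phantom
est_k1 = true_k1 + rng.normal(scale=0.005, size=500)  # model estimates

r = np.corrcoef(true_k1, est_k1)[0, 1]    # Pearson correlation coefficient
mse = np.mean((true_k1 - est_k1) ** 2)    # mean squared error
```

The same two numbers would be reported per parameter (K1, k2, k3), which is how the abstract's 0.91-0.93 correlations and sub-0.0004 MSE figures arise.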

CONCLUSION: We proposed a direct non-invasive DL-based reconstruction method and produced high-quality parametric maps of the brain. The use of histoimages holds promising potential for enhancing the estimation of parametric images, an area that has not been extensively explored thus far. The proposed method can be applied to subject-specific dynamic PET data alone.

PMID:38289518 | DOI:10.1186/s13550-024-01072-y

Categories: Literature Watch

Tailored Intraoperative MRI Strategies in High-Grade Glioma Surgery: A Machine Learning-Based Radiomics Model Highlights Selective Benefits

Tue, 2024-01-30 06:00

Oper Neurosurg (Hagerstown). 2023 Dec 22. doi: 10.1227/ons.0000000000001023. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVES: In high-grade glioma (HGG) surgery, intraoperative MRI (iMRI) has traditionally been the gold standard for maximizing tumor resection and improving patient outcomes. However, recent Level 1 evidence juxtaposes the efficacy of iMRI and 5-aminolevulinic acid (5-ALA), questioning the continued justification of iMRI because of its associated costs and extended surgical duration. Nonetheless, drawing from our clinical observations, we postulated that a subset of intricate HGGs may continue to benefit from the adjunctive application of iMRI.

METHODS: In a prospective study of 73 patients with HGG, 5-ALA was the primary technique for tumor delineation, complemented by iMRI to detect residual contrast-enhanced regions. Suboptimal 5-ALA efficacy was defined when (1) iMRI detected contrast-enhanced remnants despite 5-ALA's indication of a gross total resection or (2) surgeons observed residual fluorescence, contrary to iMRI findings. Radiomic features from preoperative MRIs were extracted using a U2-Net deep learning algorithm. Binary logistic regression was then used to predict compromised 5-ALA performance.

RESULTS: Resections guided solely by 5-ALA achieved an average removal of 93.14% of contrast-enhancing tumor; with iMRI integration this increased to 97%, although the difference was not statistically significant. Notably, for tumors with suboptimal 5-ALA performance, the inclusion of iMRI significantly improved resection outcomes (P = .00013). The deep learning-based model accurately identified these scenarios and, when enriched with radiomic parameters, showed high predictive accuracy, as indicated by a Nagelkerke R2 of 0.565 and an area under the receiver operating characteristic curve of 0.901.
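The two reported fit statistics, Nagelkerke R2 and area under the ROC curve, can be computed for any binary logistic model from its predicted probabilities. A sketch on synthetic data (the features and labels below are toy stand-ins for the radiomic predictors, not the study's cohort):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, log_loss

# Toy stand-in: "radiomic" features predicting suboptimal 5-ALA performance.
X, y = make_classification(n_samples=200, n_features=8, n_informative=4,
                           random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)
p = clf.predict_proba(X)[:, 1]
auc = roc_auc_score(y, p)          # area under the ROC curve

# Nagelkerke pseudo-R^2 from the model vs. intercept-only log-likelihoods.
n = len(y)
ll_model = -log_loss(y, p, normalize=False)
ll_null = -log_loss(y, np.full(n, y.mean()), normalize=False)
cox_snell = 1 - np.exp((2 / n) * (ll_null - ll_model))
r2_nagelkerke = cox_snell / (1 - np.exp((2 / n) * ll_null))
```

Nagelkerke's R2 simply rescales the Cox-Snell pseudo-R2 so that its maximum attainable value is 1, which makes it comparable across models.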

CONCLUSION: Our machine learning-driven radiomics approach predicts scenarios where 5-ALA alone may be suboptimal in HGG surgery compared with its combined use with iMRI. Although 5-ALA typically yields favorable results, our analyses reveal that HGGs characterized by significant volume, complex morphology, and left-sided location compromise the effectiveness of resections relying exclusively on 5-ALA. For these intricate cases, we advocate for the continued relevance of iMRI.

PMID:38289331 | DOI:10.1227/ons.0000000000001023

Categories: Literature Watch

Idiopathic Pulmonary Fibrosis: From Common Microscopy to Single-Cell Biology and Precision Medicine

Tue, 2024-01-30 06:00

Am J Respir Crit Care Med. 2024 Jan 30. doi: 10.1164/rccm.202309-1573PP. Online ahead of print.

ABSTRACT

Idiopathic pulmonary fibrosis (IPF) is a chronic, progressive, and usually fatal lung disease of unknown etiology, included in the large, complex, and heterogeneous group of interstitial lung diseases (ILDs). IPF is characterized by aberrant activation of epithelial cells and fibroblasts and excessive accumulation of extracellular matrix, resulting in progressive destruction of the lung architecture. Here we trace the fascinating historical evolution of our knowledge of this disorder from the past century, when most efforts focused on ILD classification and terminology, to the current century, when explosive progress in molecular genetics, epigenetics, and multiomics, together with the development of complex computational tools, has produced a huge advance in unveiling the pathogenic mechanisms of the disease. Recent advances in artificial intelligence, machine and deep learning, and artificial neural networks show promising results in identifying lung fibrotic patterns on high-resolution computed tomography and in helping to create a tissue genomic classifier for specific diagnosis. Finally, incipient approaches to precision medicine are ongoing in the area of pharmacogenetics.

PMID:38289233 | DOI:10.1164/rccm.202309-1573PP

Categories: Literature Watch