Deep learning

Development of a predictive model for 1-year postoperative recovery in patients with lumbar disk herniation based on deep learning and machine learning

Wed, 2024-06-26 06:00

Front Neurol. 2024 Jun 11;15:1255780. doi: 10.3389/fneur.2024.1255780. eCollection 2024.

ABSTRACT

BACKGROUND: The aim of this study is to develop a predictive model utilizing deep learning and machine learning techniques that will inform clinical decision-making by predicting the 1-year postoperative recovery of patients with lumbar disk herniation.

METHODS: The clinical data of 470 inpatients who underwent tubular microdiscectomy (TMD) between January 2018 and January 2021 were retrospectively analyzed as variables. The dataset was randomly divided into a training set (n = 329) and a test set (n = 141) using a 10-fold cross-validation technique. Various deep learning and machine learning algorithms, including Random Forests, Extreme Gradient Boosting, Support Vector Machines, Extra Trees, K-Nearest Neighbors, Logistic Regression, Light Gradient Boosting Machine, and a multilayer perceptron (MLP) artificial neural network, were employed to develop predictive models for the recovery of patients with lumbar disk herniation 1 year after surgery. The cure rate of the lumbar JOA score 1 year after TMD was used as the outcome indicator. The primary evaluation metric was the area under the receiver operating characteristic curve (AUC), with additional measures including decision curve analysis (DCA), accuracy, sensitivity, specificity, and others.
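
As a rough illustration of the comparison workflow described above, the sketch below cross-validates a few of the named classifier families with scikit-learn and scores them by AUC. The feature matrix, outcome definition, and hyperparameters are placeholders (the abstract does not report them), and the XGBoost and LightGBM learners from the study are omitted for brevity.

```python
# Minimal sketch: comparing several classifiers with stratified 10-fold
# cross-validation on the AUC metric, as the abstract describes.
# X and y are placeholders; the study's 15 engineered variables are not public.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(470, 15))          # placeholder: 470 patients, 15 features
y = rng.integers(0, 2, size=470)        # placeholder: recovered vs. not recovered

models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "KNN": KNeighborsClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} ± {auc.std():.3f}")
```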

RESULTS: The heat map of the correlation matrix revealed low inter-feature correlation. The predictive model employing both machine learning and deep learning algorithms was constructed using 15 variables after feature engineering. Among the eight algorithms utilized, the MLP algorithm demonstrated the best performance.

CONCLUSION: Our study findings demonstrate that the MLP algorithm provides superior predictive performance for the recovery of patients with lumbar disk herniation 1 year after surgery.

PMID:38919973 | PMC:PMC11197993 | DOI:10.3389/fneur.2024.1255780

Categories: Literature Watch

Development of a deep learning model for predicting recurrence of hepatocellular carcinoma after liver transplantation

Wed, 2024-06-26 06:00

Front Med (Lausanne). 2024 Jun 11;11:1373005. doi: 10.3389/fmed.2024.1373005. eCollection 2024.

ABSTRACT

BACKGROUND: Liver transplantation (LT) is one of the main curative treatments for hepatocellular carcinoma (HCC). The Milan criteria have long been applied to select LT candidates with HCC. However, the Milan criteria fail to precisely identify patients at risk of recurrence. We therefore aimed to establish and validate a deep learning model, compare it with the Milan criteria, and better guide post-LT treatment.

METHODS: A total of 356 HCC patients who received LT with complete follow-up data were evaluated. The entire cohort was randomly divided into a training set (n = 286) and a validation set (n = 70). A multilayer perceptron model provided by the pycox library was first used to construct the recurrence prediction model. A tabular neural network (TabNet), which combines elements of deep learning and tabular data processing techniques, was then utilized to compare against the Milan criteria and verify the performance of the proposed model.
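
For readers unfamiliar with TabNet, the sketch below shows how a tabular recurrence classifier of this kind is typically set up with the pytorch-tabnet package. The feature set, split, and hyperparameters are placeholders, not those of the study.

```python
# Minimal sketch of a TabNet recurrence classifier, assuming the
# pytorch-tabnet package; features, split, and settings are illustrative.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(356, 12)).astype(np.float32)   # placeholder clinical features
y = rng.integers(0, 2, size=356)                    # placeholder: recurrence yes/no

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=70, stratify=y, random_state=0)

clf = TabNetClassifier(n_d=8, n_a=8, n_steps=3, seed=0)
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=["auc"],
    max_epochs=200, patience=30,
)
print("validation accuracy:", accuracy_score(y_valid, clf.predict(X_valid)))
```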

RESULTS: Patients with tumor size over 7 cm, poorer tumor differentiation grade, and multiple tumors were first classified as being at high risk of recurrence. We trained a classification model with TabNet, and our proposed model performed better than the Milan criteria in terms of accuracy (0.95 vs. 0.86, p < 0.05). In addition, our model showed better performance with improved AUC, NRI, and hazard ratio, demonstrating its robustness.

CONCLUSION: A prognostic model was proposed based on applying TabNet to various parameters from HCC patients. The model performed well in post-LT recurrence prediction and in the identification of high-risk subgroups.

PMID:38919938 | PMC:PMC11196752 | DOI:10.3389/fmed.2024.1373005

Categories: Literature Watch

BrainCDNet: a concatenated deep neural network for the detection of brain tumors from MRI images

Wed, 2024-06-26 06:00

Front Hum Neurosci. 2024 Jun 11;18:1405586. doi: 10.3389/fnhum.2024.1405586. eCollection 2024.

ABSTRACT

INTRODUCTION: Brain cancer is a frequently occurring disease around the globe and mostly develops due to the presence of tumors in or around the brain. Generally, the prevalence and incidence of brain cancer are much lower than those of other cancer types (breast, skin, lung, etc.). However, brain cancers are associated with high mortality rates, especially in adults, due to false identification of tumor types and delays in diagnosis. Therefore, minimizing the false detection of brain tumor types and enabling early diagnosis play a crucial role in improving patient survival rates. To achieve this, many researchers have recently developed deep learning (DL)-based approaches, since these have shown remarkable performance, particularly in classification tasks.

METHODS: This article proposes a novel DL architecture named BrainCDNet. The model is built by concatenating pooling layers and addresses overfitting by initializing the layer weights with 'He Normal' initialization along with batch normalization and global average pooling (GAP). Initially, we sharpen the input images using a Nimble filter, which maintains edges and fine details. After that, we employ the suggested BrainCDNet for the extraction of relevant features and for classification. In this work, two different forms of magnetic resonance imaging (MRI) databases, binary (healthy vs. pathological) and multiclass (glioma vs. meningioma vs. pituitary), are utilized to perform all experiments.
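
The following is an illustrative sketch, not the published BrainCDNet: a small Keras CNN that combines 'He Normal' initialization, batch normalization, concatenated max- and average-pooling branches, and global average pooling, as the description above mentions. Input size and layer counts are assumptions.

```python
# Illustrative CNN combining he_normal initialization, batch norm,
# concatenated pooling branches, and GAP (not the published BrainCDNet).
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same",
                      kernel_initializer="he_normal")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Concatenate two pooling paths instead of choosing one.
    p_max = layers.MaxPooling2D()(x)
    p_avg = layers.AveragePooling2D()(x)
    return layers.Concatenate()([p_max, p_avg])

inputs = layers.Input(shape=(224, 224, 1))   # assumed MRI slice size
x = conv_block(inputs, 32)
x = conv_block(x, 64)
x = conv_block(x, 128)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # glioma / meningioma / pituitary

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```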

RESULTS AND DISCUSSION: Empirical evidence suggests that the presented model attained significant accuracy on both datasets compared to state-of-the-art approaches, with 99.45% (binary) and 96.78% (multiclass), respectively. Hence, the proposed model can be used as a decision-support tool for radiologists during the diagnosis of brain cancer patients.

PMID:38919881 | PMC:PMC11196409 | DOI:10.3389/fnhum.2024.1405586

Categories: Literature Watch

A semi-automatic deep learning model based on biparametric MRI scanning strategy to predict bone metastases in newly diagnosed prostate cancer patients

Wed, 2024-06-26 06:00

Front Oncol. 2024 Jun 11;14:1298516. doi: 10.3389/fonc.2024.1298516. eCollection 2024.

ABSTRACT

OBJECTIVE: To develop a semi-automatic model integrating radiomics, deep learning, and clinical features for Bone Metastasis (BM) prediction in prostate cancer (PCa) patients using Biparametric MRI (bpMRI) images.

METHODS: A retrospective study included 414 PCa patients (BM, n = 136; no BM, n = 278) from two institutions (Center 1, n = 318; Center 2, n = 96) between January 2016 and December 2022. BM status was confirmed via PET-CT or ECT before treatment. Tumor areas on bpMRI images were delineated as the tumor region of interest (ROI) using automatic tumor delineation models, evaluated with the Dice similarity coefficient (DSC). Samples were auto-sketched, refined, and used to train the ResNet BM prediction model. Clinical, radiomics, and deep learning data were synthesized into the ResNet-C model, which was evaluated using receiver operating characteristic (ROC) analysis.

RESULTS: The auto-segmentation model achieved a DSC of 0.607. The clinical BM prediction model achieved an accuracy (ACC) of 0.650 and an area under the curve (AUC) of 0.713 on internal validation, and an ACC of 0.668 and AUC of 0.757 on the external cohort. The deep learning model yielded an ACC of 0.875 and AUC of 0.907 for the internal cohort, and an ACC of 0.833 and AUC of 0.862 for the external cohort. The radiomics model registered an ACC of 0.819 and AUC of 0.852 internally, and an ACC of 0.885 and AUC of 0.903 externally. ResNet-C demonstrated the highest performance, with an ACC of 0.902 and AUC of 0.934 for the internal cohort, and an ACC of 0.885 and AUC of 0.903 for the external cohort.

CONCLUSION: The ResNet-C model, utilizing bpMRI scanning strategy, accurately assesses bone metastasis (BM) status in newly diagnosed prostate cancer (PCa) patients, facilitating precise treatment planning and improving patient prognoses.

PMID:38919538 | PMC:PMC11196796 | DOI:10.3389/fonc.2024.1298516

Categories: Literature Watch

Developing and comparing deep learning and machine learning algorithms for osteoporosis risk prediction

Wed, 2024-06-26 06:00

Front Artif Intell. 2024 Jun 11;7:1355287. doi: 10.3389/frai.2024.1355287. eCollection 2024.

ABSTRACT

INTRODUCTION: Osteoporosis, characterized by low bone mineral density (BMD), is an increasingly serious public health issue. So far, several traditional regression models and machine learning (ML) algorithms have been proposed for predicting osteoporosis risk. However, these models have shown relatively low accuracy in clinical implementation. Recently proposed deep learning (DL) approaches, such as deep neural network (DNN), which can discover knowledge from complex hidden interactions, offer a new opportunity to improve predictive performance. In this study, we aimed to assess whether DNN can achieve a better performance in osteoporosis risk prediction.

METHODS: Utilizing hip BMD and extensive demographic and routine clinical data from 8,134 subjects aged over 40 from the Louisiana Osteoporosis Study (LOS), we developed a novel DNN framework for predicting osteoporosis risk and compared its performance against four conventional ML models, namely random forest (RF), artificial neural network (ANN), k-nearest neighbor (KNN), and support vector machine (SVM), as well as a traditional regression model, the osteoporosis self-assessment tool (OST). Model performance was assessed by the area under the receiver operating characteristic curve (AUC) and accuracy.
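
For context, the OST baseline referred to above is commonly computed as 0.2 × (body weight in kg − age in years), with lower values indicating higher risk. A minimal worked example follows; the cut-offs used by the study are not stated in the abstract and none are assumed here.

```python
# Commonly cited form of the osteoporosis self-assessment tool (OST) index:
# 0.2 * (weight in kg - age in years); risk increases as the index decreases.
def ost_index(weight_kg: float, age_years: float) -> float:
    return 0.2 * (weight_kg - age_years)

for weight, age in [(80, 45), (60, 70), (50, 82)]:
    print(f"weight={weight} kg, age={age} y -> OST index = {ost_index(weight, age):.1f}")
```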

RESULTS: Using 16 discriminative variables, we observed that the DNN approach achieved the best predictive performance (AUC = 0.848) in classifying osteoporosis (hip BMD T-score ≤ -1.0) and non-osteoporosis (hip BMD T-score > -1.0) subjects, compared to the other approaches. Feature importance analysis showed that the top 10 most important variables identified by the DNN model were weight, age, gender, grip strength, height, beer drinking, diastolic pressure, alcohol drinking, smoking years, and economic level. Furthermore, we performed subsampling analysis to assess the effects of varying sample size and number of variables on the predictive performance of the tested models. Notably, the DNN model performed equally well (AUC = 0.846) even when utilizing only the top 10 most important variables for osteoporosis risk prediction. Meanwhile, the DNN model could still achieve high predictive performance (AUC = 0.826) when the sample size was reduced to 50% of the original dataset.

CONCLUSION: We developed a novel DNN model that serves as an effective algorithm for early diagnosis and intervention of osteoporosis in the aging population.

PMID:38919268 | PMC:PMC11196804 | DOI:10.3389/frai.2024.1355287

Categories: Literature Watch

Rapid Mold Detection in Chinese Herbal Medicine Using Enhanced Deep Learning Technology

Wed, 2024-06-26 06:00

J Med Food. 2024 Jun 26. doi: 10.1089/jmf.2024.k.0004. Online ahead of print.

ABSTRACT

Mold contamination poses a significant challenge in the processing and storage of Chinese herbal medicines (CHM), leading to quality degradation and reduced efficacy. To address this issue, we propose a rapid and accurate detection method for molds in CHM, with a specific focus on Atractylodes macrocephala, using electronic nose (e-nose) technology. The proposed method introduces an eccentric temporal convolutional network (ETCN) model, which effectively captures temporal and spatial information from the e-nose data, enabling efficient and precise mold detection in CHM. In our approach, we employ the stochastic resonance (SR) technique to eliminate noise from the raw e-nose data. By comprehensively analyzing data from eight sensors, the SR-enhanced ETCN (SR-ETCN) method achieves an impressive accuracy of 94.3%, outperforming seven other comparative models that use only the response time of 7.0 seconds before the rise phase. The experimental results showcase the ETCN model's accuracy and efficiency, providing a reliable solution for mold detection in Chinese herbal medicine. This study contributes significantly to expediting the assessment of herbal medicine quality, thereby helping to ensure the safety and efficacy of traditional medicinal practices.

PMID:38919153 | DOI:10.1089/jmf.2024.k.0004

Categories: Literature Watch

Engineering a Robust UDP-Glucose Pyrophosphorylase for Enhanced Biocatalytic Synthesis via ProteinMPNN and Ancestral Sequence Reconstruction

Wed, 2024-06-26 06:00

J Agric Food Chem. 2024 Jun 25. doi: 10.1021/acs.jafc.4c03126. Online ahead of print.

ABSTRACT

UDP-glucose is a key metabolite in carbohydrate metabolism and plays a vital role in glycosyl transfer reactions. Its significance spans the food and agricultural industries. This study focuses on UDP-glucose synthesis via multienzyme catalysis using dextrin, incorporating UTP production and ATP regeneration modules to reduce costs. To address the thermal stability limitations of the key UDP-glucose pyrophosphorylase (UGP), a deep learning-based protein sequence design approach and ancestral sequence reconstruction are employed to engineer a thermally stable UGP variant. The engineered UGP variant is 500-fold more thermally stable at 60 °C, with a half-life of 49.8 h, compared to the wild-type enzyme. MD simulations and umbrella sampling calculations provide insights into the mechanism behind the enhanced thermal stability. Experimental validation demonstrates that the engineered UGP variant can produce 52.6 mM UDP-glucose within 6 h in an in vitro cascade reaction. This study offers practical insights into efficient UDP-glucose synthesis methods.
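
As a back-of-the-envelope check on the reported half-life, the sketch below applies simple first-order inactivation kinetics; the 500-fold-shorter wild-type half-life used for comparison is an illustrative assumption, not a value from the paper.

```python
# Assuming first-order inactivation, residual activity after t hours at 60 °C
# is 0.5 ** (t / t_half). With the variant's 49.8 h half-life, most activity
# survives the 6 h cascade reaction; a hypothetical ~500-fold shorter
# half-life (an assumption for illustration) does not.
import math

def residual_activity(t_hours: float, half_life_hours: float) -> float:
    return 0.5 ** (t_hours / half_life_hours)

t_reaction = 6.0
print("engineered UGP after 6 h:", round(residual_activity(t_reaction, 49.8), 3))
print("hypothetical wild type after 6 h:", round(residual_activity(t_reaction, 49.8 / 500), 6))
print("inactivation rate constant k =", round(math.log(2) / 49.8, 4), "1/h")
```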

PMID:38918953 | DOI:10.1021/acs.jafc.4c03126

Categories: Literature Watch

Analyzing variation of water inflow to inland lakes under climate change: Integrating deep learning and time series data mining

Tue, 2024-06-25 06:00

Environ Res. 2024 Jun 23:119478. doi: 10.1016/j.envres.2024.119478. Online ahead of print.

ABSTRACT

The alarming depletion of global inland lakes in recent decades makes it essential to predict the trend of water inflow from rivers to lakes (WIRL) and unveil the dominant influencing drivers, particularly in the context of climate change. Raw time series data contain multiple components (i.e., long-term trend, seasonal periodicity, and random noise), which makes it challenging for traditional machine/deep learning techniques to effectively capture long-term trend information. In this study, a novel FactorConvSTLnet (FCS) method is developed by integrating STL decomposition, convolutional neural networks (CNN), and factorial analysis into a general framework. FCS is more robust for long-term WIRL trend prediction, separating trend information as a modeling predictor and unveiling predominant drivers. FCS is applied to typical inland lakes (the Aral Sea and Lake Balkhash) in Central Asia, and results indicate that FCS (Nash-Sutcliffe efficiency = 0.88, root mean squared error = 67 m³/s, mean relative error = 10%) outperforms the traditional CNN. The main findings are: (i) during 1960-1990, reservoir water storage (WSR) was the dominant driver for the two lakes, contributing 71% and 49%, respectively; during 1991-2014 and 2015-2099, evaporation (EVAP) would be the dominant driver, with contributions of 30% and 47%; (ii) climate change would shift the dominant driver from human activities to natural factors, where EVAP and surface snow amount (SNW) have an increasing influence on WIRL; (iii) compared to SSP1-2.6, the SNW contribution would decrease by 26% under SSP5-8.5, while the EVAP contribution would increase by 9%. The findings reveal the main drivers of the shrinkage of these inland lakes and provide a scientific basis for promoting regional ecological sustainability.
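
To make the decomposition idea concrete, the sketch below splits a synthetic monthly inflow series into trend, seasonal, and residual components with STL (statsmodels) and scores a reconstruction with Nash-Sutcliffe efficiency. The synthetic series and window settings are illustrative, and the downstream CNN and factorial analysis of FactorConvSTLnet are not reproduced.

```python
# STL decomposition of a synthetic monthly inflow series plus a
# Nash-Sutcliffe efficiency (NSE) helper; data are placeholders.
import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
months = np.arange(12 * 40)                          # 40 years of monthly inflow
inflow = (500 - 0.5 * months                         # long-term decline
          + 80 * np.sin(2 * np.pi * months / 12)     # seasonal cycle
          + rng.normal(0, 20, months.size))          # noise

res = STL(inflow, period=12).fit()
trend, seasonal, resid = res.trend, res.seasonal, res.resid

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# e.g. treat the extracted trend + seasonal parts as a naive "prediction"
print("NSE of trend+seasonal reconstruction:", round(nse(inflow, trend + seasonal), 3))
```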

PMID:38917931 | DOI:10.1016/j.envres.2024.119478

Categories: Literature Watch

A deep learning model integrating a wind direction-based dynamic graph network for ozone prediction

Tue, 2024-06-25 06:00

Sci Total Environ. 2024 Jun 23:174229. doi: 10.1016/j.scitotenv.2024.174229. Online ahead of print.

ABSTRACT

Ozone pollution is an important environmental issue in many countries. Accurate forecasting of ozone concentration enables relevant authorities to enact timely policies to mitigate adverse impacts. This study develops a novel hybrid deep learning model, named the wind direction-based dynamic spatio-temporal graph network (WDDSTG-Net), for hourly ozone concentration prediction. The model uses a dynamic directed graph structure based on hourly changing wind direction data to capture evolving spatial relationships between air quality monitoring stations. It applies a graph attention mechanism to compute dynamic weights between connected stations, thereby aggregating neighborhood information adaptively. For temporal modeling, it utilizes a sequence-to-sequence model with an attention mechanism to extract long-range temporal dependencies. Additionally, it integrates meteorological predictions to guide the ozone forecasting. The model achieves mean absolute errors of 6.69 μg/m³ and 18.63 μg/m³ for 1-h and 24-h prediction, respectively, outperforming several classic models. The model's IAQI accuracy is above 75% at all stations, with a maximum of 81.74%. It also exhibits strong capabilities in predicting severe ozone pollution events, with a 24-h true positive rate of 0.77. Compared to traditional static graph models, WDDSTG-Net demonstrates the importance of incorporating short-term wind fluctuations and transport dynamics in data-driven air quality modeling. In principle, it may serve as an effective data-driven approach for concentration prediction of other airborne pollutants.
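
The sketch below illustrates the neighborhood-aggregation step only: a minimal single-head graph attention layer over a directed station adjacency matrix. The wind-direction-based dynamic graph construction, the sequence-to-sequence decoder, and the meteorological inputs of WDDSTG-Net are not reproduced, and all sizes are assumptions.

```python
# Minimal single-head graph attention layer over a directed station graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) station features; adj: (N, N) directed adjacency (0/1)
        z = self.W(h)                                    # (N, out_dim)
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)             # source copies
        zj = z.unsqueeze(0).expand(n, n, -1)             # target copies
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))       # only connected stations
        alpha = torch.softmax(e, dim=-1)                 # attention weights per row
        alpha = torch.nan_to_num(alpha)                  # guard rows with no neighbors
        return alpha @ z                                 # aggregated features

# toy example: 4 stations, 8 features each, adjacency changing with wind
layer = GraphAttentionLayer(8, 16)
h = torch.randn(4, 8)
adj = torch.tensor([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1]], dtype=torch.float32)
print(layer(h, adj).shape)   # torch.Size([4, 16])
```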

PMID:38917895 | DOI:10.1016/j.scitotenv.2024.174229

Categories: Literature Watch

HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge

Tue, 2024-06-25 06:00

Radiother Oncol. 2024 Jun 23:110410. doi: 10.1016/j.radonc.2024.110410. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge.

MATERIALS AND METHODS: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. The performance was evaluated in terms of the Dice similarity coefficient (DSC) and 95-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test.
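
For reference, the two evaluation metrics can be computed on binary masks roughly as follows; voxel spacing is assumed isotropic here, and the challenge's official evaluation code may differ in detail.

```python
# Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance
# (HD95) on surface voxels of binary masks; toy data, isotropic spacing.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask: np.ndarray) -> np.ndarray:
    border = mask & ~binary_erosion(mask)
    return np.argwhere(border)

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    pa, pb = surface_points(a.astype(bool)), surface_points(b.astype(bool))
    d = cdist(pa, pb)                      # pairwise surface distances (voxels)
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(directed, 95))

# toy 3D example: two slightly shifted cubes
ref = np.zeros((32, 32, 32), dtype=bool); ref[8:24, 8:24, 8:24] = True
pred = np.zeros_like(ref);                pred[10:26, 8:24, 8:24] = True
print("DSC :", round(dice(ref, pred), 3))
print("HD95:", round(hd95(ref, pred), 2), "voxels")
```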

RESULTS: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9 % and a HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR to CT registration combined with network entry-level concatenation of both modalities.

CONCLUSION: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.

PMID:38917883 | DOI:10.1016/j.radonc.2024.110410

Categories: Literature Watch

Artificial intelligence: a new cutting-edge tool in spine surgery

Tue, 2024-06-25 06:00

Asian Spine J. 2024 Jun 25. doi: 10.31616/asj.2023.0382. Online ahead of print.

ABSTRACT

The purpose of this narrative review was to comprehensively elaborate the various components of artificial intelligence (AI), their applications in spine surgery, practical concerns, and future directions. Over the years, spine surgery has been continuously transformed in various aspects, including diagnostic strategies, surgical approaches, procedures, and instrumentation, to provide better-quality patient care. Surgeons have also augmented their surgical expertise with rapidly growing technological advancements. AI is an advancing field that has the potential to revolutionize many aspects of spine surgery. We performed a comprehensive narrative review of the various aspects of AI and machine learning in spine surgery. To elaborate on the current role of AI in spine surgery, a review of the literature was performed using the PubMed and Google Scholar databases for articles published in English in the last 20 years. The initial search using the keywords "artificial intelligence" AND "spine," "machine learning" AND "spine," and "deep learning" AND "spine" extracted a total of 78, 60, and 37 articles from PubMed and 11,500, 4,610, and 2,270 articles from Google Scholar, respectively. After the initial screening and exclusion of unrelated articles, duplicates, and non-English articles, 405 articles were identified. After the second stage of screening, 93 articles were included in the review. Studies have shown that AI can be used to analyze patient data and provide personalized treatment recommendations in spine care. It also provides valuable insights for planning surgeries and assists with precise surgical maneuvers and decision-making during procedures. As more data become available and with further advancements, AI is likely to improve patient outcomes.

PMID:38917854 | DOI:10.31616/asj.2023.0382

Categories: Literature Watch

Deep learning-based voxel sampling for particle therapy treatment planning

Tue, 2024-06-25 06:00

Phys Med Biol. 2024 Jun 25. doi: 10.1088/1361-6560/ad5bba. Online ahead of print.

ABSTRACT

Scanned particle therapy often requires complex treatment plans, robust optimization, as well as treatment adaptation. Plan optimization is especially complicated for heavy ions due to the variable relative biological effectiveness. We present a novel deep-learning model to select a subset of voxels in the planning process, thus reducing the planning problem size for improved computational efficiency.

APPROACH: Using only a subset of the voxels in the target and organs at risk (OARs), we produced high-quality treatment plans, but heuristic selection strategies require manual input. We designed a deep-learning model based on P-Net to obtain an optimal voxel sampling without relying on patient-specific user input. A cohort of 70 head and neck patients who received carbon ion therapy was used for model training (50), validation (10), and testing (10). For training, a total of 12,500 carbon ion plans were optimized, using a highly efficient artificial intelligence (AI) infrastructure implemented in a research treatment planning platform. A custom loss function increased sampling density in underdosed regions while aiming to reduce the total number of voxels.

MAIN RESULTS: On the test dataset, the number of voxels in the optimization could be reduced by 84.8% (median) at <1% median loss in plan quality. When the model was trained to reduce sampling in the target only while keeping all voxels in OARs, a median reduction of up to 71.6% was achieved, with 0.5% loss in plan quality. The optimization time was reduced by a factor of 7.5 for the full AI selection model and a factor of 3.7 for the model with target-only selection.

SIGNIFICANCE: The novel deep-learning voxel sampling technique achieves a significant reduction in computational time with a negligible loss in plan quality. The reduction in optimization time can be especially useful for future real-time adaptation strategies.

PMID:38917844 | DOI:10.1088/1361-6560/ad5bba

Categories: Literature Watch

Position paper on how technology for human motion analysis and relevant clinical applications have evolved over the past decades: Striking a balance between accuracy and convenience

Tue, 2024-06-25 06:00

Gait Posture. 2024 Jun 13;113:191-203. doi: 10.1016/j.gaitpost.2024.06.007. Online ahead of print.

ABSTRACT

BACKGROUND: Over the past decades, tremendous technological advances have emerged in human motion analysis (HMA).

RESEARCH QUESTION: How has technology for analysing human motion evolved over the past decades, and what clinical applications has it enabled?

METHODS: The literature on HMA has been extensively reviewed, focusing on three main approaches: Fully-Instrumented Gait Analysis (FGA), Wearable Sensor Analysis (WSA), and Deep-Learning Video Analysis (DVA), considering both technical and clinical aspects.

RESULTS: FGA techniques relying on data collected using stereophotogrammetric systems, force plates, and electromyographic sensors have been dramatically improved providing highly accurate estimates of the biomechanics of motion. WSA techniques have been developed with the advances in data collection at home and in community settings. DVA techniques have emerged through artificial intelligence, which has marked the last decade. Some authors have considered WSA and DVA techniques as alternatives to "traditional" HMA techniques. They have suggested that WSA and DVA techniques are destined to replace FGA.

SIGNIFICANCE: We argue that FGA, WSA, and DVA complement each other and hence should be regarded as "synergistic" in the context of modern HMA and its clinical applications. We point out that DVA techniques are especially attractive as screening techniques, WSA methods enable data collection in the home and community for extensive periods of time, and FGA maintains superior accuracy and should be the preferred technique when complete and highly accurate biomechanical data are required. Accordingly, we envision that future clinical applications of HMA would favour screening patients using DVA in the outpatient setting. If deemed clinically appropriate, WSA would then be used to collect data in the home and community to derive relevant information. If accurate kinetic data are needed, patients should be referred to specialized centres where an FGA system is available, together with medical imaging and thorough clinical assessments.

PMID:38917666 | DOI:10.1016/j.gaitpost.2024.06.007

Categories: Literature Watch

State-of-art technologies, challenges, and emerging trends of computer vision in dental images

Tue, 2024-06-25 06:00

Comput Biol Med. 2024 Jun 24;178:108800. doi: 10.1016/j.compbiomed.2024.108800. Online ahead of print.

ABSTRACT

Computer vision falls under the broad umbrella of artificial intelligence that mimics human vision and plays a vital role in dental imaging. Dental practitioners visualize and interpret teeth and the structures surrounding them, and detect abnormalities by manually examining various dental imaging modalities. Due to the complexity and cognitive difficulty of comprehending medical data, human error makes correct diagnosis difficult. Automated diagnosis may help alleviate delays, hasten practitioners' interpretation of positive cases, and lighten their workload. Several medical imaging modalities employed in dentistry, such as X-rays, CT scans, and color images, are briefly described in this survey. Dentists employ dental imaging as a diagnostic tool in several specialties, including orthodontics, endodontics, and periodontics. In the discipline of dentistry, computer vision has progressed from classic image processing to machine learning with mathematical approaches and robust deep learning techniques. Dental radiograph analysis is covered here both with conventional image processing techniques alone and in conjunction with intelligent machine learning algorithms, as well as with sophisticated deep learning architectures. This study provides a detailed summary of several tasks, including anatomical segmentation, identification, and categorization of different dental anomalies, along with their shortfalls and future perspectives in this field.

PMID:38917534 | DOI:10.1016/j.compbiomed.2024.108800

Categories: Literature Watch

iProL: identifying DNA promoters from sequence information based on Longformer pre-trained model

Tue, 2024-06-25 06:00

BMC Bioinformatics. 2024 Jun 25;25(1):224. doi: 10.1186/s12859-024-05849-9.

ABSTRACT

Promoters are essential elements of DNA sequences, usually located in the immediate region of gene transcription start sites, and play a critical role in the regulation of gene transcription. Their importance in molecular biology and genetics has attracted considerable research interest, and seeking computational methods to efficiently identify promoters has become a consensus. Still, existing methods suffer from imbalanced recognition capabilities for positive and negative samples, and their recognition performance can still be improved. We conducted research on E. coli promoters and propose a more advanced prediction model, iProL, based on the Longformer pre-trained model from the field of natural language processing. iProL does not rely on prior biological knowledge but simply uses promoter DNA sequences as plain text to identify promoters. It also combines one-dimensional convolutional neural networks and bidirectional long short-term memory to extract both local and global features. Experimental results show that iProL has a more balanced and superior performance than currently published methods. Additionally, we constructed a novel independent test set following the previous specification and compared iProL with three existing methods on this independent test set.
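
A simplified sketch of the local-plus-global feature stack (1D CNN followed by a bidirectional LSTM) is shown below on one-hot encoded sequences; iProL itself feeds Longformer embeddings into this stack, so the one-hot front end and all layer sizes here are simplifying assumptions.

```python
# Simplified 1D CNN + BiLSTM promoter classifier on one-hot encoded DNA
# (the Longformer embedding front end of iProL is not reproduced).
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            x[BASES[b], i] = 1.0
    return x

class PromoterNet(nn.Module):
    def __init__(self, channels: int = 32, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=7, padding=3), nn.ReLU())
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                      # x: (batch, 4, length)
        h = self.conv(x).transpose(1, 2)       # local features -> (batch, length, channels)
        out, _ = self.lstm(h)                  # global context in both directions
        return self.head(out[:, -1])           # logit: promoter vs. non-promoter

seqs = ["TATAATGCGC" * 8, "GCGCGCGCGC" * 8]    # toy 80-bp sequences
batch = torch.stack([one_hot(s) for s in seqs])
print(PromoterNet()(batch).shape)              # torch.Size([2, 1])
```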

PMID:38918692 | DOI:10.1186/s12859-024-05849-9

Categories: Literature Watch

Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools

Tue, 2024-06-25 06:00

Nat Methods. 2024 Jun 25. doi: 10.1038/s41592-024-02319-1. Online ahead of print.

ABSTRACT

Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce 'Lightning Pose', an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We released a cloud application that allows users to label data, train networks and process new videos directly from the browser.
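
One simple form of the motion-continuity idea on unlabeled frames is sketched below: penalize predicted keypoint jumps between consecutive frames beyond a plausibility threshold. The losses actually implemented in Lightning Pose (including the multi-view and posture-plausibility terms) are more elaborate than this.

```python
# Toy motion-continuity penalty on predicted keypoint trajectories.
import torch

def temporal_continuity_loss(keypoints: torch.Tensor, max_px_per_frame: float = 20.0):
    # keypoints: (frames, num_keypoints, 2) predicted (x, y) coordinates
    step = torch.linalg.norm(keypoints[1:] - keypoints[:-1], dim=-1)  # per-frame jump
    excess = torch.clamp(step - max_px_per_frame, min=0.0)            # penalize only large jumps
    return excess.mean()

preds = torch.cumsum(torch.randn(100, 17, 2) * 3.0, dim=0)  # toy random-walk trajectory
print("continuity penalty:", float(temporal_continuity_loss(preds)))
```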

PMID:38918605 | DOI:10.1038/s41592-024-02319-1

Categories: Literature Watch

Automated detection of type 1 ROP, type 2 ROP and A-ROP based on deep learning

Tue, 2024-06-25 06:00

Eye (Lond). 2024 Jun 25. doi: 10.1038/s41433-024-03184-0. Online ahead of print.

ABSTRACT

PURPOSE: To provide automatic detection of Type 1 retinopathy of prematurity (ROP), Type 2 ROP, and A-ROP by deep learning-based analysis of fundus images obtained by clinical examination using convolutional neural networks.

MATERIAL AND METHODS: A total of 634 fundus images of 317 premature infants born at 23-34 weeks of gestation were evaluated. After image pre-processing, we obtained a rectangular region of interest (ROI). RegNetY002 was used for algorithm training, and stratified 10-fold cross-validation was applied during training to evaluate and standardize our model. The model's performance was reported as accuracy and specificity and described by the receiver operating characteristic (ROC) curve and area under the curve (AUC).

RESULTS: The model achieved 0.98 accuracy and 0.98 specificity in detecting Type 2 ROP versus Type 1 ROP and A-ROP. On the other hand, as a result of the analysis of ROI regions, the model achieved 0.90 accuracy and 0.95 specificity in detecting Stage 2 ROP versus Stage 3 ROP and 0.91 accuracy and 0.92 specificity in detecting A-ROP versus Type 1 ROP. The AUC scores were 0.98 for Type 2 ROP versus Type 1 ROP and A-ROP, 0.85 for Stage 2 ROP versus Stage 3 ROP, and 0.91 for A-ROP versus Type 1 ROP.

CONCLUSION: Our study demonstrated that ROP types can be distinguished with high accuracy and specificity by DL-based analysis of fundus images. Integrating DL-based artificial intelligence algorithms into clinical practice may reduce the workload of ophthalmologists in the future and provide support for decision-making in the management of ROP.

PMID:38918566 | DOI:10.1038/s41433-024-03184-0

Categories: Literature Watch

A preliminary study of super-resolution deep learning reconstruction with cardiac option for evaluation of endovascular-treated intracranial aneurysms

Tue, 2024-06-25 06:00

Br J Radiol. 2024 Jun 25:tqae117. doi: 10.1093/bjr/tqae117. Online ahead of print.

ABSTRACT

OBJECTIVES: To investigate the usefulness of super-resolution deep learning reconstruction (SR-DLR) with cardiac option for the assessment of image quality in patients treated with stent-assisted coil embolization, coil embolization, and flow-diverting stent placement, compared with other image reconstructions.

METHODS: This single-center retrospective study included 50 patients (mean age, 59 years; range, 44-81 years; 13 men) who were treated with stent-assisted coil embolization, coil embolization, or flow-diverting stent placement between January and July 2023. The images were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (IR), and SR-DLR. The objective image analysis included image noise in Hounsfield units (HU), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and full width at half maximum (FWHM). Subjectively, two radiologists evaluated the overall image quality for the visualization of the flow-diverting stent, coil, and stent.
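
For reference, the objective measures named above can be computed from ROI statistics and a line profile roughly as follows; ROI placement, pixel spacing, and the synthetic values are illustrative assumptions.

```python
# Objective image-quality measures: SNR and CNR from ROI statistics,
# FWHM from a line profile across a stent strut or coil (toy data).
import numpy as np

def snr(roi: np.ndarray) -> float:
    return roi.mean() / roi.std()

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    return abs(roi.mean() - background.mean()) / background.std()

def fwhm(profile: np.ndarray, spacing_mm: float = 0.5) -> float:
    half = (profile.max() + profile.min()) / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * spacing_mm    # crude width at half maximum

rng = np.random.default_rng(0)
vessel = rng.normal(300, 7, size=500)      # HU samples in an enhanced vessel ROI
brain = rng.normal(35, 7, size=500)        # HU samples in adjacent parenchyma
profile = 100 * np.exp(-0.5 * ((np.arange(41) - 20) / 3.0) ** 2)  # Gaussian strut profile

print("SNR :", round(snr(vessel), 1))
print("CNR :", round(cnr(vessel, brain), 1))
print("FWHM:", fwhm(profile), "mm")
```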

RESULTS: The image noise in HU in SR-DLR was 6.99 ± 1.49, which was significantly lower than that in images reconstructed with FBP (12.32 ± 3.01) and hybrid IR (8.63 ± 2.12) (p < 0.001). Both the mean SNR and CNR were significantly higher in SR-DLR than in FBP and hybrid IR (p < 0.001 and p < 0.001). The FWHMs for the stent (p < 0.004), flow-diverting stent (p < 0.001), and coil (p < 0.001) were significantly lower in SR-DLR than in FBP and hybrid IR. The subjective visual scores were significantly higher in SR-DLR than in other image reconstructions (p < 0.001).

CONCLUSIONS: SR-DLR with cardiac option is useful for follow-up imaging in stent-assisted coil embolization and flow-diverting stent placement in terms of lower image noise, higher SNR and CNR, superior subjective image analysis, and less blooming artifact than other image reconstructions.

ADVANCES IN KNOWLEDGE: SR-DLR with cardiac option allows better visualization of the peripheral and smaller cerebral arteries. SR-DLR with cardiac option can be beneficial for CT imaging after stent-assisted coil embolization and flow-diverting stent placement.

PMID:38917414 | DOI:10.1093/bjr/tqae117

Categories: Literature Watch

Intelligent Gas Detection: g-C<sub>3</sub>N<sub>4</sub>/Polypyrrole Decorated Alginate Paper as Smart Selective NH<sub>3</sub>/NO<sub>2</sub> Sensors at Room Temperature

Tue, 2024-06-25 06:00

Inorg Chem. 2024 Jun 25. doi: 10.1021/acs.inorgchem.4c01242. Online ahead of print.

ABSTRACT

Chemiresistive NH3/NO2 sensors are attracting considerable attention for use in air-conditioning systems. However, existing sensors suffer from cross-sensitivity, limited detection limits, and high power consumption, owing to the inadequate charge-transfer ability of gas-sensing materials. Herein, we develop a flexible NH3/NO2 sensor based on graphitic carbon nitride/polypyrrole decorated alginate paper (AP@g-CN/PPy). The flexible sensor can work at room temperature and exhibits a positive response of 23-246% and a negative response of 37-262% toward 0.1-5 ppm of NH3 and NO2, respectively, ∼4.5 times and ∼7.0 times higher than a pristine PPy sensor. Moreover, the sensor exhibits flexibility, reproducibility, long-term stability, anti-interference, and high resilience to humidity, indicating promising potential for real applications. Using the nine feature parameters extracted from the transient response, a matched deep learning model was developed to achieve qualitative recognition of different types of gases with distinguished decision boundaries. This work not only provides an alternative gas-sensing material for dual NH3/NO2 sensing but also establishes an intelligent strategy to identify hazardous gases under an interfering atmosphere.

PMID:38917357 | DOI:10.1021/acs.inorgchem.4c01242

Categories: Literature Watch

Temporal Dynamic Synchronous Functional Brain Network for Schizophrenia Classification and Lateralization Analysis

Tue, 2024-06-25 06:00

IEEE Trans Med Imaging. 2024 Jun 25;PP. doi: 10.1109/TMI.2024.3419041. Online ahead of print.

ABSTRACT

Available evidence suggests that dynamic functional connectivity can capture time-varying abnormalities in brain activity in resting-state functional magnetic resonance imaging (rs-fMRI) data and has a natural advantage in uncovering mechanisms of abnormal brain activity in schizophrenia (SZ) patients. Hence, an advanced dynamic brain network analysis model called the temporal brain category graph convolutional network (Temporal-BCGCN) was employed. First, a unique dynamic brain network analysis module, DSF-BrainNet, was designed to construct dynamic synchronization features. Subsequently, a novel graph convolution method, TemporalConv, was proposed based on the synchronous temporal properties of the features. Finally, the first deep learning-based modular test tool for abnormal hemispherical lateralization from rs-fMRI data, named CategoryPool, was proposed. This study was validated on the COBRE and UCLA datasets and achieved average accuracies of 83.62% and 89.71%, respectively, outperforming the baseline model and other state-of-the-art methods. The ablation results also demonstrate the advantages of TemporalConv over the traditional edge feature graph convolution approach and the improvement of CategoryPool over the classical graph pooling approach. Interestingly, this study showed that the lower-order perceptual system and higher-order network regions in the left hemisphere are more severely dysfunctional than those in the right hemisphere in SZ, reaffirming the importance of the left medial superior frontal gyrus in SZ. Our code is available at: https://github.com/swfen/Temporal-BCGCN.
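
As background, models of this kind operate on dynamic functional connectivity inputs such as sliding-window correlations between ROI time series; a minimal sketch follows. Window length, stride, and atlas size are assumptions, and the synchronization features and graph convolutions of Temporal-BCGCN are not reproduced.

```python
# Sliding-window dynamic functional connectivity from ROI time series.
import numpy as np

def dynamic_fc(ts: np.ndarray, window: int = 30, stride: int = 5) -> np.ndarray:
    # ts: (timepoints, n_rois) ROI time series -> (n_windows, n_rois, n_rois)
    mats = []
    for start in range(0, ts.shape[0] - window + 1, stride):
        mats.append(np.corrcoef(ts[start:start + window].T))
    return np.stack(mats)

rng = np.random.default_rng(0)
roi_ts = rng.normal(size=(200, 90))        # 200 TRs, 90 ROIs (placeholder atlas)
fc = dynamic_fc(roi_ts)
print(fc.shape)                            # (35, 90, 90) connectivity matrices
```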

PMID:38917293 | DOI:10.1109/TMI.2024.3419041

Categories: Literature Watch
