Deep learning

An innovative approach to detecting the freshness of fruits and vegetables through the integration of convolutional neural networks and bidirectional long short-term memory network

Thu, 2024-07-18 06:00

Curr Res Food Sci. 2024 Mar 25;8:100723. doi: 10.1016/j.crfs.2024.100723. eCollection 2024.

ABSTRACT

Fruit and vegetable freshness testing can improve the efficiency of agricultural product management, reduce resource waste and economic losses, and plays a vital role in increasing the added value of fruit and vegetable products. At present, the detection of fruit and vegetable freshness mainly relies on manual feature extraction combined with machine learning. However, manually extracted features adapt poorly to new data, resulting in low detection efficiency. Some studies have introduced deep learning methods to automatically learn deep features that characterize freshness and thereby cope with the diversity and variability of complex scenes, but their detection performance still needs improvement. Motivated by this, this paper proposes a novel method that fuses different deep learning models to extract both the features of fruit and vegetable images and the correlations between regions within an image, so as to detect the freshness of fruits and vegetables more objectively and accurately. First, the images in the dataset are resized to meet the input requirements of the deep learning model. Then, deep features characterizing the freshness of fruits and vegetables are extracted by the fused deep learning model. Finally, the parameters of the fusion model are optimized based on its detection performance, and the freshness-detection performance is evaluated. Experimental results show that the CNN_BiLSTM model, which fuses a convolutional neural network (CNN) with a bidirectional long short-term memory network (BiLSTM), combined with parameter optimization, achieves an accuracy of 97.76% in detecting the freshness of fruits and vegetables. These results suggest that the method is a promising way to improve the performance of fruit and vegetable freshness detection.
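
The CNN_BiLSTM fusion described in this abstract can be sketched roughly as follows; this is an illustrative reconstruction, not the authors' code, and the layer sizes, the 224×224 input, and the row-wise sequence reading are assumptions:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Illustrative CNN + BiLSTM fusion: the CNN extracts spatial feature
    maps, whose rows are then read as a sequence by a bidirectional LSTM
    to model correlations between image regions."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Each row of the 64-channel feature map becomes one time step.
        self.bilstm = nn.LSTM(input_size=64, hidden_size=128,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, num_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        f = self.cnn(x)                        # (B, 64, H/4, W/4)
        seq = f.mean(dim=3).permute(0, 2, 1)   # (B, H/4, 64): one step per row
        out, _ = self.bilstm(seq)              # (B, H/4, 256)
        return self.head(out[:, -1])           # class logits

model = CNNBiLSTM(num_classes=2)
logits = model(torch.randn(4, 3, 224, 224))
```

The CNN supplies local appearance features; the BiLSTM then aggregates them across regions in both directions, which is the "correlation between various areas in the image" the abstract refers to.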

PMID:39022740 | PMC:PMC11252168 | DOI:10.1016/j.crfs.2024.100723

Categories: Literature Watch

Evaluating a deep learning AI algorithm for detecting residual prostate cancer on MRI after focal therapy

Thu, 2024-07-18 06:00

BJUI Compass. 2024 May 12;5(7):665-667. doi: 10.1002/bco2.373. eCollection 2024 Jul.

NO ABSTRACT

PMID:39022660 | PMC:PMC11250150 | DOI:10.1002/bco2.373

Categories: Literature Watch

Insights about cervical lymph nodes: Evaluating deep learning-based reconstruction for head and neck computed tomography scan

Thu, 2024-07-18 06:00

Eur J Radiol Open. 2023 Oct 28;12:100534. doi: 10.1016/j.ejro.2023.100534. eCollection 2024 Jun.

ABSTRACT

PURPOSE: This study aimed to investigate differences in cervical lymph node image quality on dual-energy computed tomography (CT) scan with datasets reconstructed using filter back projection (FBP), hybrid iterative reconstruction (IR), and deep learning-based image reconstruction (DLIR) in patients with head and neck cancer.

METHOD: Seventy patients with head and neck cancer underwent follow-up contrast-enhanced dual-energy CT examinations. All datasets were reconstructed using FBP, hybrid IR with 30 % adaptive statistical IR (ASiR-V), and DLIR with three selectable levels (low, medium, and high) at 2.5- and 0.625-mm slice thicknesses. Herein, signal, image noise, signal-to-noise ratio, and contrast-to-noise ratio of lymph nodes and overall image quality, artifact, and noise of selected regions of interest were evaluated by two radiologists. Next, cervical lymph node sharpness was evaluated using full width at half maximum.
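
A minimal numpy sketch of the ROI-based SNR and CNR measures evaluated here, using the conventional definitions (mean ROI signal over noise SD, and lesion-background contrast over background SD); the abstract does not give the exact formulas, so these definitions are assumptions, and the HU values below are invented:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean attenuation / noise (SD)."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std(ddof=1)

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: (mean lesion - mean background) / background SD."""
    lesion = np.asarray(lesion_roi, dtype=float)
    bg = np.asarray(background_roi, dtype=float)
    return (lesion.mean() - bg.mean()) / bg.std(ddof=1)

node = np.array([110.0, 112.0, 108.0, 110.0])   # HU samples inside a lymph node
muscle = np.array([60.0, 62.0, 58.0, 60.0])     # HU samples in background muscle
node_snr = snr(node)
node_cnr = cnr(node, muscle)
```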

RESULTS: DLIR exhibited significantly reduced noise, ranging from 3.8 % to 35.9 %, with improved signal-to-noise ratio (11.5-105.6 %) and contrast-to-noise ratio (10.5-107.5 %) compared with FBP and ASiR-V for cervical lymph nodes (p < 0.001). Further, 0.625-mm-thick images reconstructed using DLIR-medium and DLIR-high had lower noise than 2.5-mm-thick images reconstructed using FBP and ASiR-V. The lymph node margins and vessels on DLIR-medium and DLIR-high were sharper than those on FBP and ASiR-V (p < 0.05). Both readers agreed that DLIR provided better image quality than the conventional reconstruction algorithms.

CONCLUSION: DLIR-medium and -high provided superior cervical lymph node image quality in head and neck CT. The improved image quality makes thin-slice DLIR images a candidate for future dose-reduction protocols.

PMID:39022614 | PMC:PMC467078 | DOI:10.1016/j.ejro.2023.100534

Categories: Literature Watch

Simultaneous removal of noise and correction of motion warping in neuron calcium imaging using a pipeline structure of self-supervised deep learning models

Thu, 2024-07-18 06:00

Biomed Opt Express. 2024 Jun 17;15(7):4300-4317. doi: 10.1364/BOE.527919. eCollection 2024 Jul 1.

ABSTRACT

Calcium imaging is susceptible to motion distortions and background noise, particularly when monitoring active animals under low-dose laser irradiation, which unavoidably hinders the critical analysis of neural function. Current research efforts tend to focus on either denoising or dewarping and do not provide effective methods for videos distorted by both noise and motion artifacts simultaneously. We found that when the self-supervised denoising model DeepCAD [Nat. Methods 18, 1359 (2021); doi: 10.1038/s41592-021-01225-0] is applied to calcium imaging contaminated by noise and motion warping, it can remove the motion artifacts effectively but regenerates noise. To address this issue, we develop a two-level deep-learning (DL) pipeline to dewarp and denoise the calcium imaging video sequentially. The pipeline consists of two 3D self-supervised DL models that do not require warp-free and high signal-to-noise ratio (SNR) observations for network optimization. Specifically, a high-frequency enhancement block is presented in the denoising network to restore more structural information in the denoising process; a hierarchical perception module and a multi-scale attention module are designed in the dewarping network to tackle distortions of various sizes. Experiments conducted on seven videos from two-photon and confocal imaging systems demonstrate that our two-level DL pipeline can restore high-clarity neuron images distorted by both motion warping and background noise. Compared with the original DeepCAD, our denoising model achieves a significant improvement of approximately 30% in image resolution and up to 28% in signal-to-noise ratio; compared with traditional dewarping and denoising methods, our proposed pipeline network recovers more neurons, enhancing signal fidelity and improving data correlation among frames by 35% and 60%, respectively.
This work may provide an attractive method for long-term neural activity monitoring in awake animals and also facilitate functional analysis of neural circuits.

PMID:39022541 | PMC:PMC11249678 | DOI:10.1364/BOE.527919

Categories: Literature Watch

ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images

Thu, 2024-07-18 06:00

Shape Med Imaging (2023). 2023 Oct;14350:90-104. doi: 10.1007/978-3-031-46914-5_8. Epub 2023 Oct 31.

ABSTRACT

Statistical shape models (SSM) are well established as an excellent tool for identifying variations in the morphology of anatomy across an underlying population. Shape models use a consistent shape representation across all samples in a given cohort, which helps to compare shapes and identify variations that can detect pathologies and inform treatment plans. In medical imaging, computing these shape representations from CT/MRI scans requires time-intensive preprocessing operations, including but not limited to anatomy segmentation annotations, registration, and texture denoising. Deep learning models have demonstrated exceptional capabilities in learning shape representations directly from volumetric images, giving rise to highly effective and efficient Image-to-SSM networks. Nevertheless, these models are data-hungry, and given the limited availability of medical data, they tend to overfit. Offline data augmentation techniques that use kernel density estimation (KDE)-based methods to generate shape-augmented samples have successfully aided Image-to-SSM networks in achieving accuracy comparable to traditional SSM methods. However, these augmentation methods focus on shape augmentation, whereas deep learning models exhibit image-based texture bias, resulting in sub-optimal models. This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation. The proposed framework is trained as an adversary to the Image-to-SSM network, generating diverse and challenging noisy samples. Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.

PMID:39022299 | PMC:PMC11251192 | DOI:10.1007/978-3-031-46914-5_8

Categories: Literature Watch

Accelerated cardiac cine magnetic resonance imaging using deep low-rank plus sparse network: validation in patients

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):5131-5143. doi: 10.21037/qims-24-17. Epub 2024 Jun 27.

ABSTRACT

BACKGROUND: Accurate and reproducible assessment of left ventricular (LV) volumes is important in managing various cardiac conditions. However, patients are required to hold their breath multiple times during data acquisition, which may result in discomfort and restrict cardiac motion, potentially compromising the accuracy of the detected results. Accelerated imaging techniques can help reduce the number of breath holds needed, potentially improving patient comfort and the reliability of the LV assessment. This study aimed to prospectively evaluate the feasibility and accuracy of LV assessment with a model-based low-rank plus sparse network (L+S-Net) for accelerated magnetic resonance (MR) cine imaging.

METHODS: Forty-one patients with different cardiac conditions were recruited in this study. Both accelerated MR cine imaging with L+S-Net and traditional electrocardiogram (ECG)-gated segmented cine were performed for each patient. Subjective image quality (IQ) scores and quantitative LV volume function parameters were measured and compared between L+S-Net and the traditional standard. The IQ scores and LV volume measurements of cardiovascular magnetic resonance (CMR) images reconstructed by L+S-Net and standard cine were compared by paired t-test. The acquisition time of the two methods was also calculated.

RESULTS: In a quantitative analysis, L+S-Net and standard cine yielded similar measurements for all parameters of LV function (ejection fraction: 35±22 for standard vs. 33±23 for L+S-Net), although L+S-Net had slightly lower IQ scores than standard cine CMR (4.2±0.5 for L+S-Net vs. 4.8±0.4 for standard cine; P<0.001). The mean acquisition time of L+S-Net and standard cine was 0.83±0.08 vs. 6.35±0.78 s per slice (P<0.001).

CONCLUSIONS: Assessment of LV function with L+S-Net at 3.0 T yields comparable results to the reference standard, albeit with a reduced acquisition time. This feature enhances the clinical applicability of the L+S-Net approach, helping alleviate patient discomfort and motion artifacts that may arise due to prolonged acquisition time.

PMID:39022294 | PMC:PMC11250298 | DOI:10.21037/qims-24-17

Categories: Literature Watch

Deep learning methods for diagnosis of Graves' ophthalmopathy using magnetic resonance imaging

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):5099-5108. doi: 10.21037/qims-24-80. Epub 2024 Jun 11.

ABSTRACT

BACKGROUND: The effect of diagnosing Graves' ophthalmopathy (GO) through traditional measurement and observation in medical imaging is not ideal. This study aimed to develop and validate deep learning (DL) models that could be applied to the diagnosis of GO based on magnetic resonance imaging (MRI) and compare them to traditional measurement and judgment of radiologists.

METHODS: A total of 199 clinically verified consecutive GO patients and 145 normal controls undergoing MRI were retrospectively recruited, of whom 240 were randomly assigned to the training group and 104 to the validation group. The coronal-plane areas of the superior, inferior, medial, and lateral rectus muscles, as well as of all rectus muscles combined, were calculated. Logistic regression models based on the areas of the extraocular muscles were built to diagnose GO. The DL models ResNet101 and Swin Transformer, taking non-contrast T1-weighted MRI as input, were used to diagnose GO, and their results were compared with radiologists' diagnoses relying only on the T1-weighted scans.

RESULTS: Areas on the coronal plane of each muscle in the GO group were significantly greater than those in the normal group. In the validation group, the areas under the curve (AUCs) of logistic regression models by superior, inferior, medial, and lateral rectus muscles and all muscles were 0.897 [95% confidence interval (CI): 0.833-0.949], 0.705 (95% CI: 0.598-0.804), 0.799 (95% CI: 0.712-0.876), 0.681 (95% CI: 0.567-0.776), and 0.905 (95% CI: 0.843-0.955). ResNet101 and Swin Transformer achieved AUCs of 0.986 (95% CI: 0.977-0.994) and 0.936 (95% CI: 0.912-0.957), respectively. The accuracy, sensitivity, and specificity of ResNet101 were 0.933, 0.979, and 0.869, respectively. The accuracy, sensitivity, and specificity of Swin Transformer were 0.851, 0.817, and 0.898, respectively. The ResNet101 model yielded higher AUC than models of all muscles and radiologists (0.986 vs. 0.905, 0.818; P<0.001).
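
The 95% confidence intervals reported for these AUCs are often obtained by percentile bootstrap resampling; a generic scikit-learn sketch of that procedure (not the authors' code; the toy labels and scores below are invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point-estimate AUC plus a percentile bootstrap (1-alpha) CI."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats, n = [], len(y_true)
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:   # a resample needs both classes
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Toy example: model scores vs. ground-truth GO labels (0 = control, 1 = GO).
y = [0, 0, 0, 0, 1, 1, 1, 1]
s = [0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9]
auc, (ci_lo, ci_hi) = auc_with_bootstrap_ci(y, s, n_boot=500)
```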

CONCLUSIONS: The DL models based on T1-weighted MRI scans could accurately diagnose GO, and applying DL systems to MRI may improve radiologists' diagnostic performance and support earlier detection of GO.

PMID:39022293 | PMC:PMC11250345 | DOI:10.21037/qims-24-80

Categories: Literature Watch

Research on ultrasound-based radiomics: a bibliometric analysis

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):4520-4539. doi: 10.21037/qims-23-1867. Epub 2024 Jun 18.

ABSTRACT

BACKGROUND: A large number of studies related to ultrasound-based radiomics have been published in recent years; however, a systematic bibliometric analysis of this topic has not yet been conducted. In this study, we attempted to identify the hotspots and frontiers in ultrasound-based radiomics through bibliometrics and to systematically characterize the overall framework and characteristics of studies through mapping and visualization.

METHODS: A literature search was carried out in Web of Science Core Collection (WoSCC) database from January 2016 to December 2023 according to a predetermined search formula. Bibliometric analysis and visualization of the results were performed using CiteSpace, VOSviewer, R, and other platforms.

RESULTS: Ultimately, 466 eligible papers were included in the study. Publication trend analysis showed that the annual publication trend of journals in ultrasound-based radiomics could be divided into three phases: there were no more than five documents published in this field in any year before 2018, a small yearly increase in the number of annual publications occurred between 2018 and 2022, and a high, stable number of publications appeared after 2022. In the analysis of publication sources, China was found to be the main contributor, with a much higher number of publications than other countries, and was followed by the United States and Italy. Frontiers in Oncology was the journal with the highest number of papers in this field, publishing 60 articles. Among the academic institutions, Fudan University, Sun Yat-sen University, and the Chinese Academy of Sciences ranked as the top three in terms of the number of documents. In the analysis of authors and cocited authors, the author with the most publications was Yuanyuan Wang, who has published 19 articles in 8 years, while Philippe Lambin was the most cited author, with 233 citations. Visualization of the results from the cocitation analysis of the literature revealed a strong centrality of the subject terms papillary thyroid cancer, biological behavior, potential biomarkers, and comparative assessment, which may be the main focal points of research in this subject. Based on the findings of the keyword analysis and cluster analysis, the keywords can be categorized into two major groups: (I) technological innovations that enable the construction of radiomics models such as machine learning and deep learning and (II) applications of predictive models to support clinical decision-making in certain diseases, such as papillary thyroid cancer, hepatocellular carcinoma (HCC), and breast cancer.

CONCLUSIONS: Ultrasound-based radiomics has received widespread attention in the medical field and has gradually been applied in clinical research. Radiomics, a relatively late development in medical technology, has made substantial contributions to the diagnosis, prediction, and prognostic evaluation of diseases. Additionally, the coupling of artificial intelligence techniques with ultrasound imaging has yielded a number of promising tools that facilitate clinical decision-making and enable the practice of precision medicine. Finally, the development of ultrasound-based radiomics requires multidisciplinary cooperation and joint efforts from the fields of biomedicine, information technology, statistics, and clinical medicine.

PMID:39022291 | PMC:PMC11250334 | DOI:10.21037/qims-23-1867

Categories: Literature Watch

Prediction of metastases in confusing mediastinal lymph nodes based on fluorine-18 fluorodeoxyglucose (<sup>18</sup>F-FDG) positron emission tomography/computed tomography (PET/CT) imaging using machine learning

Thu, 2024-07-18 06:00

Quant Imaging Med Surg. 2024 Jul 1;14(7):4723-4734. doi: 10.21037/qims-24-100. Epub 2024 Jun 17.

ABSTRACT

BACKGROUND: For patient management and prognosis, accurate assessment of mediastinal lymph node (LN) status is essential. This study aimed to use machine learning approaches to assess the status of confusing LNs in the mediastinum using positron emission tomography/computed tomography (PET/CT) images; the results were then compared with the diagnostic conclusions of nuclear medicine physicians.

METHODS: A total of 509 confusing mediastinal LNs that had undergone pathological assessment or follow-up from 320 patients from three centres were retrospectively included in the study. LNs from centres I and II were randomised into a training cohort (N=324) and an internal validation cohort (N=81), while those from centre III patients formed an external validation cohort (N=104). Various parameters measured from PET and CT images and extracted radiomics and deep learning features were used to construct PET/CT-parameter, radiomics, and deep learning models, respectively. Model performance was compared with the diagnostic results of nuclear medicine physicians using the area under the curve (AUC), sensitivity, specificity, and decision curve analysis (DCA).

RESULTS: The coupled model of gradient boosting decision tree-logistic regression (GBDT-LR) incorporating radiomic features showed AUCs of 0.922 [95% confidence interval (CI): 0.890-0.953], 0.846 (95% CI: 0.761-0.930), and 0.846 (95% CI: 0.770-0.922) across the three cohorts. It significantly outperformed the deep learning model, the PET/CT-parameter model, and the physicians' diagnoses. DCA demonstrated the clinical usefulness of the GBDT-LR model.

CONCLUSIONS: The presented GBDT-LR model performed well in evaluating confusing mediastinal LNs in both the internal and external validation sets. It not only exploited crossings of the radiomic features but also avoided overfitting.
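
A GBDT-LR coupling of the kind named in this abstract is commonly implemented by one-hot encoding the leaf indices that the gradient-boosted trees assign to each sample and feeding them to a logistic regression, which is how the trees supply crossed features. A generic sketch of that standard construction on synthetic data (not the authors' pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: GBDT learns feature crossings; each sample maps to one leaf per tree.
gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X_tr, y_tr)
leaves_tr = gbdt.apply(X_tr)[:, :, 0]   # (n_samples, n_trees) leaf indices
leaves_te = gbdt.apply(X_te)[:, :, 0]

# Stage 2: logistic regression on one-hot leaf indicators.
enc = OneHotEncoder(handle_unknown="ignore")
lr = LogisticRegression(max_iter=1000)
lr.fit(enc.fit_transform(leaves_tr), y_tr)
acc = lr.score(enc.transform(leaves_te), y_te)
```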

PMID:39022286 | PMC:PMC11250303 | DOI:10.21037/qims-24-100

Categories: Literature Watch

Evaluation of preoperative difficult airway prediction methods for adult patients without obvious airway abnormalities: a systematic review and meta-analysis

Wed, 2024-07-17 06:00

BMC Anesthesiol. 2024 Jul 17;24(1):242. doi: 10.1186/s12871-024-02627-1.

ABSTRACT

BACKGROUND: This systematic review aims to assist clinical decision-making in selecting appropriate preoperative prediction methods for difficult tracheal intubation by identifying and synthesizing literature on these methods in adult patients undergoing all types of surgery.

METHODS: A systematic review and meta-analysis were conducted following PRISMA guidelines. Comprehensive electronic searches across multiple databases were completed on March 28, 2023. Two researchers independently screened, selected studies, and extracted data. A total of 227 articles representing 526 studies were included and evaluated for bias using the QUADAS-2 tool. Meta-Disc software computed pooled sensitivity (SEN), specificity (SPC), positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR). Heterogeneity was assessed using the Spearman correlation coefficient, Cochran's-Q, and I2 index, with meta-regression exploring sources of heterogeneity. Publication bias was evaluated using Deeks' funnel plot.
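
For reference, the per-study indices pooled in this analysis all derive from a study's 2×2 diagnostic table; a minimal sketch of the standard definitions (the counts below are illustrative, not data from the review):

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, likelihood ratios, and diagnostic odds
    ratio from one study's 2x2 table (true/false positives/negatives)."""
    sen = tp / (tp + fn)
    spc = tn / (tn + fp)
    plr = sen / (1 - spc)            # positive likelihood ratio
    nlr = (1 - sen) / spc            # negative likelihood ratio
    dor = plr / nlr                  # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return {"SEN": sen, "SPC": spc, "PLR": plr, "NLR": nlr, "DOR": dor}

idx = diagnostic_indices(tp=40, fp=30, fn=10, tn=120)
```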

RESULTS: Out of 2906 articles retrieved, 227 met the inclusion criteria, encompassing a total of 686,089 patients. The review examined 11 methods for predicting difficult tracheal intubation, categorized into physical examination, multivariate scoring system, and imaging test. The modified Mallampati test (MMT) showed a SEN of 0.39 and SPC of 0.86, while the thyromental distance (TMD) had a SEN of 0.38 and SPC of 0.83. The upper lip bite test (ULBT) presented a SEN of 0.52 and SPC of 0.84. Multivariate scoring systems like LEMON and Wilson's risk score demonstrated moderate sensitivity and specificity. Imaging tests, particularly ultrasound-based methods such as the distance from the skin to the epiglottis (US-DSE), exhibited higher sensitivity (0.80) and specificity (0.77). Significant heterogeneity was identified across studies, influenced by factors such as sample size and study design.

CONCLUSION: No single preoperative prediction method shows clear superiority for predicting difficult tracheal intubation. The evidence supports a combined approach using multiple methods tailored to specific patient demographics and clinical contexts. Future research should focus on integrating advanced technologies like artificial intelligence and deep learning to improve predictive models. Standardizing testing procedures and establishing clear cut-off values are essential for enhancing prediction reliability and accuracy. Implementing a multi-modal predictive approach may reduce unanticipated difficult intubations, improving patient safety and outcomes.

PMID:39020308 | DOI:10.1186/s12871-024-02627-1

Categories: Literature Watch

Fully and Weakly Supervised Deep Learning for Meniscal Injury Classification and Location Based on MRI

Wed, 2024-07-17 06:00

J Imaging Inform Med. 2024 Jul 17. doi: 10.1007/s10278-024-01198-4. Online ahead of print.

ABSTRACT

Meniscal injury is a common cause of knee joint pain and a precursor to knee osteoarthritis (KOA). The purpose of this study is to develop an automatic pipeline for meniscal injury classification and localization using fully and weakly supervised networks based on MRI images. In this retrospective study, data were from the Osteoarthritis Initiative (OAI). The MR images were reconstructed using a sagittal intermediate-weighted fat-suppressed turbo spin-echo sequence. (1) We used 130 knees from the OAI to develop the LGSA-UNet model, which fuses the features of adjacent slices and adjusts the blocks in Siam to enable the central slice to obtain rich contextual information. (2) One thousand seven hundred and fifty-six knees from the OAI were included to establish the segmentation and classification models. The segmentation model achieved a Dice coefficient ranging from 0.84 to 0.93. The AUC values ranged from 0.85 to 0.95 in the binary models. The accuracy for the three types of menisci (normal, tear, and maceration) ranged from 0.60 to 0.88. Furthermore, 206 knees from an orthopedic hospital were used as an external validation data set to evaluate the performance of the model. The segmentation and classification models still performed well on the external validation set. To compare the diagnostic performance of the deep learning (DL) models with that of radiologists, the external validation sets were read by two radiologists. The binary classification model outperformed the diagnostic performance of the junior radiologist (0.82-0.87 versus 0.74-0.88). This study highlights the potential of DL in knee meniscus segmentation and injury classification, which can help improve diagnostic efficiency.
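
The Dice coefficient used to score the segmentation model is the standard overlap measure between predicted and ground-truth masks; a minimal numpy sketch (the toy masks below are invented, not OAI data):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A|+|B|) for binary masks; eps guards empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
gt   = np.array([[0, 1, 1],
                 [1, 1, 0]])
d = dice(pred, gt)   # 3 overlapping pixels, 3 + 4 total foreground pixels
```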

PMID:39020156 | DOI:10.1007/s10278-024-01198-4

Categories: Literature Watch

Diagnostic Accuracy of Ultra-Low Dose CT Compared to Standard Dose CT for Identification of Fresh Rib Fractures by Deep Learning Algorithm

Wed, 2024-07-17 06:00

J Imaging Inform Med. 2024 Jul 17. doi: 10.1007/s10278-024-01027-8. Online ahead of print.

ABSTRACT

The present study aimed to evaluate the diagnostic accuracy of ultra-low dose computed tomography (ULD-CT) compared to standard dose computed tomography (SD-CT) in discerning recent rib fractures using a deep learning algorithm for the detection of rib fractures (DLADRF). A total of 158 patients undergoing forensic diagnosis for rib fractures were included in this study: 50 underwent SD-CT, and 108 were assessed using ULD-CT. Junior and senior radiologists independently evaluated the images to identify and characterize the rib fractures. The sensitivity of rib fracture diagnosis by radiologists and radiologist + DLADRF was better using SD-CT than ULD-CT. However, the diagnostic sensitivity of DLADRF alone was slightly higher using ULD-CT than SD-CT. Nonetheless, no substantial differences were observed in specificity, positive predictive value, or negative predictive value between SD-CT and ULD-CT for the same radiologist, radiologist + DLADRF, or DLADRF (P > 0.05). The area under the receiver operating characteristic curve (AUC) indicated that senior radiologist + DLADRF was significantly better than senior and junior radiologists, junior radiologist + DLADRF, and DLADRF alone using SD-CT or ULD-CT (all P < 0.05). Also, junior radiologist + DLADRF was better with ULD-CT than senior and junior radiologists (P < 0.05). The AUC of rib fractures diagnosed by senior radiologists did not differ from that of DLADRF using ULD-CT, and no significant differences were observed between junior radiologist + DLADRF and senior radiologists, or between junior radiologists and DLADRF, using SD-CT. DLADRF enhanced the diagnostic performance of radiologists in detecting recent rib fractures. The diagnostic outcomes between SD-CT and ULD-CT did not differ significantly across radiologists' experience levels or DLADRF.

PMID:39020151 | DOI:10.1007/s10278-024-01027-8

Categories: Literature Watch

In silico drug repurposing pipeline using deep learning and structure-based approaches in epilepsy

Wed, 2024-07-17 06:00

Sci Rep. 2024 Jul 17;14(1):16562. doi: 10.1038/s41598-024-67594-6.

ABSTRACT

Due to its considerable global prevalence and high recurrence rate, the pursuit of effective new medication for epilepsy treatment remains an urgent and significant challenge. Drug repurposing emerges as a cost-effective and efficient strategy to combat this disorder. This study leverages transformer-based deep learning methods coupled with molecular binding affinity calculation to develop a novel in-silico drug repurposing pipeline for epilepsy. The number of candidate inhibitors against 24 target proteins encoded by gain-of-function genes implicated in epileptogenesis ranged from zero to several hundred. Among the repurposed medications, our pipeline recovered most anti-epileptic drugs and nearly half of the psychiatric medications, highlighting its effectiveness. Furthermore, Lomitapide, a cholesterol-lowering drug, emerged as particularly noteworthy, exhibiting high binding affinity for 10 targets, which was verified by molecular dynamics simulation and mechanism analysis. These findings provide a novel perspective on therapeutic strategies for other central nervous system diseases.

PMID:39020064 | DOI:10.1038/s41598-024-67594-6

Categories: Literature Watch

Finite element models with automatic computed tomography bone segmentation for failure load computation

Wed, 2024-07-17 06:00

Sci Rep. 2024 Jul 17;14(1):16576. doi: 10.1038/s41598-024-66934-w.

ABSTRACT

Bone segmentation is an important step in performing biomechanical failure load simulations on in-vivo CT data of patients with bone metastasis, as it is a mandatory operation to obtain the meshes needed for numerical simulations. Segmentation can be a tedious and time-consuming task when done manually, and expert segmentations are subject to intra- and inter-operator variability. Deep learning methods are increasingly employed to automatically carry out image segmentation tasks. These networks usually need to be trained on a large image dataset along with the manual segmentations to maximize generalization to new images, but it is not always possible to have access to a multitude of CT scans with the associated ground truth. It then becomes necessary to use training techniques that make the best use of the limited available data. In this paper, we propose a dedicated pipeline of preprocessing, deep learning based segmentation, and post-processing for in-vivo human femur and vertebra segmentation from CT-scan volumes. We experimented with three U-Net architectures and showed that out-of-the-box models enable automatic and high-quality volume segmentation if carefully trained. We compared the failure load simulation results obtained on femurs and vertebrae using either automatic or manual segmentations and studied the sensitivity of the simulations to small variations of the automatic segmentation. The failure loads obtained using automatic segmentations were comparable to those obtained using manual expert segmentations for all the femurs and vertebrae tested, demonstrating the effectiveness of the automated segmentation approach for failure load simulations.
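
Post-processing in segmentation pipelines like the one described often includes keeping only the largest connected component of the predicted mask to remove spurious islands before meshing; the paper's exact post-processing is not specified here, so this scipy-based sketch is an assumption:

```python
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Keep only the largest connected component of a binary mask,
    discarding small spurious islands left by the network."""
    labels, n = ndimage.label(mask)            # label connected regions
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)    # labels start at 1

# Toy 2D mask with three islands; the same call works on 3D CT masks.
mask = np.array([[1, 1, 0, 0, 1],
                 [1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1]], dtype=bool)
clean = largest_component(mask)
```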

PMID:39019937 | DOI:10.1038/s41598-024-66934-w

Categories: Literature Watch

A quantitative MRI comparative study of imaging markers for cerebral small vessel disease in middle-aged and elderly patients with and without hypertension

Wed, 2024-07-17 06:00

Zhonghua Yi Xue Za Zhi. 2024 Jul 23;104(28):2619-2625. doi: 10.3760/cma.j.cn112137-20240110-00076.

ABSTRACT

Objective: To explore the difference of MRI markers of small cerebral vascular disease in middle-aged and elderly patients with hypertension and non-hypertension. Methods: A cross-sectional study. The clinical data of 316 patients who underwent head MRI with susceptibility weighted imaging scans at the Affiliated Zhongda Hospital of Southeast University from November 2013 to August 2022 were retrospectively analyzed, including 190 males and 126 females, with the age of (71.6±8.9)years. According to the history of hypertension, the patients were divided into hypertension group(n=259) and the non-hypertension group(n=57). The patients in the non-hypertension group were further divided into abnormal blood pressure group on admission (n=19) and normal blood pressure group on admission (n=38). The imaging features of different CSVD dimensions in the patient's images were quantified or graded and compared between hypertensive and non-hypertensive patient groups. Deep learning methods were employed to segment white matter lesions, and voxel-wise analysis was conducted to investigate the differences in whole-brain white matter lesion probability between patients in both groups. Spearman correlation analysis was used to analyze the correlation between hypertension and small cerebral vascular disease. Results: Compared with the non-hypertensive group, the cerebral microhemorrhage count, deep microhemorrhage count, basal ganglia level lacunae count, perivascular space (EPVS) grade of hemioval center level and EPVS grade of basal ganglia level were higher in the hypertensive group (all P<0.05). The cerebral microhemorrhage count [3.0(1.0, 15.0) vs 1.0 (0, 4.2)], deep microhemorrhage count [1.0 (0, 7.0) vs 1.0 (0, 4.2)] and EPVS classification at basal ganglium level [2.0(1.0, 3.0) vs 1.0(1.0, 2.0)] in the group with history of hypertension were higher than those in the group with normal blood pressure at admission (all P<0.05). 
The centrum semiovale-level EPVS grade in the hypertension group was higher than that of the group with normal blood pressure at admission [2.0 (1.0, 2.0) vs 1.0 (1.0, 2.0)] and also higher than that of the group with abnormal blood pressure at admission [2.0 (1.0, 2.0) vs 1.0 (1.0, 2.0)] (both P<0.05). Voxel-wise analysis showed no significant difference in whole-brain white matter lesion probability between patients with and without a history of hypertension, but patients with a history of hypertension showed more extensive periventricular white matter hyperintensities than those without. Spearman correlation analysis showed that hypertension grade was correlated with the deep microbleed count (r=0.149), the centrum semiovale-level lacune count (r=0.209), and the basal ganglia-level lacune count (r=0.204) (all P<0.05). Conclusions: Chronic hypertension affects multiple imaging dimensions of cerebral small vessel disease, manifested primarily as increased deep microbleed counts and EPVS grades.
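The study's correlation step (the r-values above) uses Spearman rank correlation. As a minimal sketch of that statistic, the pure-Python version below ranks both variables (averaging tied ranks) and takes the Pearson correlation of the ranks; the patient values shown are made-up illustrations, not the study's data.

```python
# Spearman rank correlation: Pearson correlation of average ranks.
# The input lists below are hypothetical, not the 316-patient cohort.

def ranks(values):
    """Assign 1-based ranks, averaging over tie groups."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation computed on the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

hypertension_grade = [0, 1, 1, 2, 2, 3, 3, 3]  # hypothetical grades
deep_microbleeds = [0, 1, 0, 2, 3, 2, 5, 4]    # hypothetical counts
print(round(spearman(hypertension_grade, deep_microbleeds), 3))
```

A perfectly monotone relationship gives r = 1 (or -1), and ties are handled by the average-rank step, which matters here because grades and lesion counts are heavily tied.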

PMID:39019818 | DOI:10.3760/cma.j.cn112137-20240110-00076

Categories: Literature Watch

Artificial intelligence in cardiac surgery: A systematic review

Wed, 2024-07-17 06:00

World J Surg. 2024 Jul 17. doi: 10.1002/wjs.12265. Online ahead of print.

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has emerged as a tool to potentially increase the efficiency and efficacy of cardiovascular care and improve clinical outcomes. This study aims to provide an overview of applications of AI in cardiac surgery.

METHODS: A systematic literature search on AI applications in cardiac surgery from inception to February 2024 was conducted. Articles were then filtered based on the inclusion and exclusion criteria and the risk of bias was assessed. Key findings were then summarized.

RESULTS: A total of 81 studies were found that reported on AI applications in cardiac surgery. There has been a rapid rise in studies since 2020. The most popular machine learning technique was random forest (n = 48), followed by support vector machine (n = 33), logistic regression (n = 32), and eXtreme Gradient Boosting (n = 31). Most of the studies were on adult patients, conducted in China, and involved procedures such as valvular surgery (24.7%), heart transplant (9.4%), coronary revascularization (11.8%), congenital heart disease surgery (3.5%), and aortic dissection repair (2.4%). Regarding evaluation outcomes, 35 studies examined model performance, 26 studies examined clinician outcomes, and 20 studies examined patient outcomes.

CONCLUSION: AI was mainly used to predict complications following cardiac surgeries and improve clinicians' decision-making by providing better preoperative risk assessment, stratification, and prognostication. While the application of AI in cardiac surgery has greatly progressed in the last decade, further studies need to be conducted to verify accuracy and ensure safety before use in clinical practice.
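The most common use case the review identifies is preoperative risk prediction. As a minimal sketch of the shape such models take (logistic regression was one of the most frequent techniques above), the snippet below maps weighted preoperative features through a sigmoid to a complication probability; the features, weights, and bias are hypothetical placeholders, not values from any reviewed model.

```python
import math

# Logistic-regression-style risk score: sigmoid of a weighted feature sum.
# All coefficients and feature values are made up for illustration.

def complication_risk(features, weights, bias):
    """Return a probability in (0, 1) for a postoperative complication."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical preoperative features:
# [age/100, ejection fraction (fraction), creatinine mg/dL, redo surgery flag]
weights = [2.0, -3.0, 0.8, 1.2]
bias = -1.5
patient = [0.72, 0.45, 1.1, 0]
print(f"predicted complication risk: {complication_risk(patient, weights, bias):.2f}")
```

In real studies the weights are fitted to outcome data and the score feeds risk stratification; here the point is only the input-to-probability mapping.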

PMID:39019775 | DOI:10.1002/wjs.12265

Categories: Literature Watch

Leveraging artificial intelligence in vaccine development: A narrative review

Wed, 2024-07-17 06:00

J Microbiol Methods. 2024 Jul 15:106998. doi: 10.1016/j.mimet.2024.106998. Online ahead of print.

ABSTRACT

Vaccine development stands as a cornerstone of public health efforts, pivotal in curbing infectious diseases and reducing global morbidity and mortality. However, traditional vaccine development methods are often time-consuming, costly, and inefficient. The advent of artificial intelligence (AI) has ushered in a new era in vaccine design, offering unprecedented opportunities to expedite the process. This narrative review explores the role of AI in vaccine development, focusing on antigen selection, epitope prediction, adjuvant identification, and optimization strategies. AI algorithms, including machine learning and deep learning, leverage genomic data, protein structures, and immune system interactions to predict antigenic epitopes, assess immunogenicity, and prioritize antigens for experimentation. Furthermore, AI-driven approaches facilitate the rational design of immunogens and the identification of novel adjuvant candidates with optimal safety and efficacy profiles. Challenges such as data heterogeneity, model interpretability, and regulatory considerations must be addressed to realize the full potential of AI in vaccine development. Integrating emerging technologies, such as single-cell omics and synthetic biology, promises to enhance vaccine design precision and scalability. This review underscores the transformative impact of AI on vaccine development and highlights the need for interdisciplinary collaborations and regulatory harmonization to accelerate the delivery of safe and effective vaccines against infectious diseases.
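Epitope prediction, one of the AI tasks named above, is often framed as scoring sliding windows of a protein sequence and prioritizing the highest-scoring candidates. The sketch below shows that framing only: the per-residue propensity table is an invented placeholder (real tools learn such scores from immunological data), and the sequence is a toy string, not a real antigen.

```python
# Sliding-window candidate-epitope scoring. PROPENSITY is a made-up
# per-residue table standing in for a learned immunogenicity model.

PROPENSITY = dict(zip(
    "ACDEFGHIKLMNPQRSTVWY",
    [0.2, 0.1, 0.6, 0.6, 0.3, 0.4, 0.5, 0.2, 0.7, 0.2,
     0.3, 0.6, 0.6, 0.5, 0.7, 0.5, 0.5, 0.2, 0.3, 0.4],
))

def score_windows(sequence, width=9):
    """Average propensity for every window of `width` residues."""
    return [
        (i, sequence[i:i + width],
         sum(PROPENSITY[aa] for aa in sequence[i:i + width]) / width)
        for i in range(len(sequence) - width + 1)
    ]

def top_epitope(sequence, width=9):
    """Highest-scoring window: a crude candidate-epitope call."""
    return max(score_windows(sequence, width), key=lambda t: t[2])

seq = "MKDERRNSKADVLLHG"  # toy sequence for illustration
pos, window, score = top_epitope(seq)
print(pos, window, round(score, 3))
```

Deep learning replaces the fixed table with sequence- and structure-aware features, but the prioritize-the-top-windows workflow is the same.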

PMID:39019262 | DOI:10.1016/j.mimet.2024.106998

Categories: Literature Watch

Efficient determination of Born-effective charges, LO-TO splitting, and Raman tensors of solids with a real-space atom-centered deep learning approach

Wed, 2024-07-17 06:00

J Phys Condens Matter. 2024 Jul 17. doi: 10.1088/1361-648X/ad64a2. Online ahead of print.

ABSTRACT

We introduce a deep neural network (DNN) framework called the Real-space Atomic Decomposition NETwork (RADNET), which is capable of making accurate predictions of polarization and of electronic dielectric permittivity tensors in solids. This framework builds on previous, atom-centered approaches while utilizing deep convolutional neural networks. We report excellent accuracies on direct predictions for two prototypical examples: GaAs and BN. We then use automatic differentiation to calculate the Born-effective charges, longitudinal optical-transverse optical (LO-TO) splitting frequencies, and Raman tensors of these materials. We compute the Raman spectra, and find agreement with ab initio results. Lastly, we explore ways to generalize polarization predictions while taking into account periodic boundary conditions and symmetries.
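The Born-effective charges above come from differentiating the network's predicted polarization with respect to atomic displacement (in the usual convention, Z* is proportional to ∂P/∂u). As a minimal numerical sketch of that step, the code below applies a central finite difference to a toy analytic polarization model standing in for RADNET's prediction; the paper itself uses automatic differentiation, and the toy model and its coefficients are invented.

```python
# Born-effective-charge-style derivative dP/du by central finite difference.
# toy_polarization is a made-up stand-in for a learned P(u) model.

def toy_polarization(u):
    """Toy P(u): linear response (slope 2.1) plus a small anharmonic term."""
    return 2.1 * u + 0.3 * u ** 3

def dP_du(p_of_u, u0=0.0, h=1e-4):
    """Central-difference derivative of polarization at displacement u0."""
    return (p_of_u(u0 + h) - p_of_u(u0 - h)) / (2 * h)

print(round(dP_du(toy_polarization), 4))  # ~2.1, the linear-response slope
```

Autodiff gives the same quantity exactly and cheaply for a neural network, which is why the paper can extract LO-TO splittings and Raman tensors from one trained model.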

PMID:39019077 | DOI:10.1088/1361-648X/ad64a2

Categories: Literature Watch

A deep-learning-based surrogate model for Monte-Carlo simulations of the linear energy transfer in primary brain tumor patients treated with proton-beam radiotherapy

Wed, 2024-07-17 06:00

Phys Med Biol. 2024 Jul 17. doi: 10.1088/1361-6560/ad64b7. Online ahead of print.

ABSTRACT

This study explores the use of neural networks (NNs) as surrogate models for Monte-Carlo (MC) simulations in predicting the dose-averaged linear energy transfer (LETd) of protons in proton-beam therapy based on the planned dose distribution and patient anatomy in the form of computed tomography (CT) images. As LETd is associated with variability in the relative biological effectiveness (RBE) of protons, we also evaluate the implications of using NN predictions for normal tissue complication probability (NTCP) models within a variable-RBE context.

Approach: The predictive performance of three-dimensional NN architectures was evaluated using five-fold cross-validation on a cohort of brain tumor patients (n=151). The best-performing model was identified and externally validated on patients from a different center (n=107). LETd predictions were compared to MC-simulated results in clinically relevant regions of interest. We assessed the impact on NTCP models by leveraging LETd predictions to derive RBE-weighted doses, using the Wedenberg RBE model.

Main results: We found NNs based solely on the planned dose profile, i.e. without additional usage of CT images, can approximate MC-based LETd distributions. Root mean squared errors (RMSE) for the median LETd within the brain, brainstem, CTV, chiasm, lacrimal glands (ipsilateral/contralateral) and optic nerves (ipsilateral/contralateral) were 0.36, 0.87, 0.31, 0.73, 0.68, 1.04, 0.69 and 1.24 keV/μm, respectively. Although model predictions showed statistically significant differences from MC outputs, these did not result in substantial changes in NTCP predictions, with RMSEs of at most 3.2 percentage points.

Significance: The ability of NNs to predict LETd based solely on planned dose profiles suggests a viable alternative to the compute-intensive MC simulations in a variable-RBE setting. 
This is particularly useful in scenarios where MC simulation data are unavailable, facilitating resource-constrained proton therapy treatment planning, retrospective patient data analysis and further investigations on the variability of proton RBE.
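The per-structure numbers above are root mean squared errors between NN-predicted and MC-simulated median LETd. As a minimal sketch of that evaluation metric, the function below computes RMSE over paired values; the LETd values shown are illustrative, not the study's data.

```python
import math

# RMSE between predicted and reference values, as used to compare
# NN-predicted and MC-simulated median LETd per region of interest.

def rmse(predicted, reference):
    """Root mean squared error over paired values."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

nn_letd = [2.4, 3.1, 2.2, 4.0]  # hypothetical median LETd, keV/um (NN)
mc_letd = [2.6, 3.0, 2.5, 3.6]  # hypothetical median LETd, keV/um (MC)
print(f"RMSE = {rmse(nn_letd, mc_letd):.3f} keV/um")
```

RMSE penalizes large per-structure deviations quadratically, which is why the small organ-level RMSEs reported above translate into only minor NTCP shifts.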

PMID:39019053 | DOI:10.1088/1361-6560/ad64b7

Categories: Literature Watch

ALFREDO: Active Learning with FeatuRe disEntanglement and DOmain adaptation for medical image classification

Wed, 2024-07-17 06:00

Med Image Anal. 2024 Jul 4;97:103261. doi: 10.1016/j.media.2024.103261. Online ahead of print.

ABSTRACT

State-of-the-art deep learning models often fail to generalize in the presence of distribution shifts between training (source) data and test (target) data. Domain adaptation methods are designed to address this issue using labeled samples (supervised domain adaptation) or unlabeled samples (unsupervised domain adaptation). Active learning is a method to select informative samples to obtain maximum performance from minimum annotations. Selecting informative target domain samples can improve model performance and robustness, and reduce data demands. This paper proposes a novel pipeline called ALFREDO (Active Learning with FeatuRe disEntanglement and DOmain adaptation) that performs active learning under domain shift. We propose a novel feature disentanglement approach to decompose image features into domain-specific and task-specific components. Domain-specific components refer to features that provide source-specific information, e.g., scanners, vendors, or hospitals. Task-specific components are discriminative features for classification, segmentation, or other tasks. Thereafter we define multiple novel cost functions that identify informative samples under domain shift. We test our proposed method for medical image classification using one histopathology dataset and two chest X-ray datasets. Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods, as well as state-of-the-art active domain adaptation methods.
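The core idea above, splitting features into domain-specific and task-specific parts and then scoring unlabeled target samples for annotation, can be sketched with one plausible informativeness score: task-prediction uncertainty (entropy) plus the distance of a sample's domain component from the source-domain mean. This is an invented illustrative score, not ALFREDO's actual cost functions, and all numbers below are made up.

```python
import math

# Toy active-learning score under domain shift: uncertain AND
# domain-shifted samples rank highest for labeling. Illustrative only.

def entropy(probs):
    """Shannon entropy (natural log) of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def domain_shift(domain_feat, source_mean):
    """Euclidean distance of the domain component from the source mean."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(domain_feat, source_mean)))

def informativeness(task_probs, domain_feat, source_mean, lam=0.5):
    """Combine task uncertainty and domain shift into one ranking score."""
    return entropy(task_probs) + lam * domain_shift(domain_feat, source_mean)

source_mean = [0.0, 0.0]
candidates = [
    ([0.95, 0.05], [0.1, 0.0]),  # confident, near source: low score
    ([0.55, 0.45], [1.2, 0.9]),  # uncertain, shifted: high score
]
scores = [informativeness(p, d, source_mean) for p, d in candidates]
print([round(s, 3) for s in scores])
```

The disentanglement step matters because the shift distance is measured only on the domain component, so task-relevant variation does not masquerade as domain shift.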

PMID:39018722 | DOI:10.1016/j.media.2024.103261

Categories: Literature Watch
