Deep learning

Advancements in Uric Acid Stone Detection: Integrating Deep Learning with CT Imaging and Clinical Assessments in the Upper Urinary Tract

Sun, 2024-03-03 06:00

Urol Int. 2024 Mar 2. doi: 10.1159/000538133. Online ahead of print.

ABSTRACT

The incidence of urinary tract stones is increasing worldwide, with a notably high recurrence rate. Among upper urinary tract stones, a significant proportion comprises uric acid stones. This study aims at the rapid and reliable identification of uric acid stones in the upper urinary tract by gathering comprehensive biochemical profiles, urinalysis, and CT scan data from 276 patients diagnosed with kidney and ureteral stones. Leveraging machine learning techniques, the goal is to establish multiple predictive models that can accurately identify uric acid stones.

PMID:38432217 | DOI:10.1159/000538133

Categories: Literature Watch

A multi-task fusion model based on a residual-Multi-layer perceptron network for mammographic breast cancer screening

Sun, 2024-03-03 06:00

Comput Methods Programs Biomed. 2024 Feb 24;247:108101. doi: 10.1016/j.cmpb.2024.108101. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms.

METHODS: We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis.

RESULTS: The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively.

CONCLUSIONS: Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.

PMID:38432087 | DOI:10.1016/j.cmpb.2024.108101

Categories: Literature Watch

Multi-task global optimization-based method for vascular landmark detection

Sun, 2024-03-03 06:00

Comput Med Imaging Graph. 2024 Mar 1;114:102364. doi: 10.1016/j.compmedimag.2024.102364. Online ahead of print.

ABSTRACT

Vascular landmark detection plays an important role in medical analysis and clinical treatment. However, due to the complex topology and similar local appearance around landmarks, popular heatmap-regression-based methods often suffer from the landmark confusion problem. Vascular landmarks are connected by vascular segments and have special spatial correlations, which can be utilized for performance improvement. In this paper, we propose a multi-task global optimization-based framework for accurate and automatic vascular landmark detection. A multi-task deep learning network is exploited to accomplish landmark heatmap regression, vascular semantic segmentation, and orientation field regression simultaneously. The two auxiliary objectives are highly correlated with the heatmap regression task and help the network incorporate structural prior knowledge. During inference, instead of performing a max-voting strategy, we propose a global optimization-based post-processing method for the final landmark decision. The spatial relationships between neighboring landmarks are utilized explicitly to tackle the landmark confusion problem. We evaluated our method on a cerebral MRA dataset with 564 volumes, a cerebral CTA dataset with 510 volumes, and an aorta CTA dataset with 50 volumes. The experiments demonstrate that the proposed method is effective for vascular landmark localization and achieves state-of-the-art performance.

PMID:38432060 | DOI:10.1016/j.compmedimag.2024.102364

Categories: Literature Watch

A novel radiological software prototype for automatically detecting the inner ear and classifying normal from malformed anatomy

Sun, 2024-03-03 06:00

Comput Biol Med. 2024 Feb 16;171:108168. doi: 10.1016/j.compbiomed.2024.108168. Online ahead of print.

ABSTRACT

BACKGROUND: To develop an effective radiological software prototype that could read Digital Imaging and Communications in Medicine (DICOM) files, crop the inner ear automatically based on head computed tomography (CT), and classify normal and inner ear malformation (IEM).

METHODS: A retrospective analysis was conducted on 2053 patients from 3 hospitals. We extracted 1200 inner ear CTs for importing, cropping, and training, testing, and validating an artificial intelligence (AI) model. Automated cropping algorithms based on CTs were developed to precisely isolate the inner ear volume. Additionally, a simple graphical user interface (GUI) was implemented for user interaction. Using cropped CTs as input, a deep learning convolutional neural network (DL CNN) with 5-fold cross-validation was used to classify inner ear anatomy as normal or abnormal. Five specific IEM types (cochlear hypoplasia, ossification, incomplete partition types I and III, and common cavity) were included, with data equally distributed between classes. Both the cropping tool and the AI model were extensively validated.

RESULTS: The newly developed DICOM viewer/software successfully achieved its objectives: reading CT files, automatically cropping inner ear volumes, and classifying them as normal or malformed. The cropping tool demonstrated an average accuracy of 92.25%. The DL CNN model achieved an area under the curve (AUC) of 0.86 (95% confidence interval: 0.81-0.91). Performance metrics for the AI model were: accuracy (0.812), precision (0.791), recall (0.8), and F1-score (0.766).

CONCLUSION: This study successfully developed and validated a fully automated workflow for classifying normal versus abnormal inner ear anatomy using a combination of advanced image processing and deep learning techniques. The tool exhibited good diagnostic accuracy, suggesting its potential application in risk stratification. However, it is crucial to emphasize the need for supervision by qualified medical professionals when utilizing this tool for clinical decision-making.

PMID:38432006 | DOI:10.1016/j.compbiomed.2024.108168

Categories: Literature Watch

Automated detection of Alzheimer's disease: a multi-modal approach with 3D MRI and amyloid PET

Sun, 2024-03-03 06:00

Sci Rep. 2024 Mar 3;14(1):5210. doi: 10.1038/s41598-024-56001-9.

ABSTRACT

Recent advances in deep learning and imaging technologies have revolutionized automated medical image analysis, especially in diagnosing Alzheimer's disease through neuroimaging. Despite the availability of various imaging modalities for the same patient, the development of multi-modal models leveraging these modalities remains underexplored. This paper addresses this gap by proposing and evaluating classification models using 2D and 3D MRI images and amyloid PET scans in uni-modal and multi-modal frameworks. Our findings demonstrate that models using volumetric data learn more effective representations than those using only 2D images. Furthermore, integrating multiple modalities enhances model performance over single-modality approaches significantly. We achieved state-of-the-art performance on the OASIS-3 cohort. Additionally, explainability analyses with Grad-CAM indicate that our model focuses on crucial AD-related regions for its predictions, underscoring its potential to aid in understanding the disease's causes.

PMID:38433282 | DOI:10.1038/s41598-024-56001-9

Categories: Literature Watch

Predicting microvascular invasion in hepatocellular carcinoma with a CT- and MRI-based multimodal deep learning model

Sun, 2024-03-03 06:00

Abdom Radiol (NY). 2024 Mar 3. doi: 10.1007/s00261-024-04202-1. Online ahead of print.

ABSTRACT

PURPOSE: To investigate the value of a multimodal deep learning (MDL) model based on computed tomography (CT) and magnetic resonance imaging (MRI) for predicting microvascular invasion (MVI) in hepatocellular carcinoma (HCC).

METHODS: A total of 287 patients with HCC from our institution and 58 patients from another institution were included. Among these, 119 patients with only CT data and 116 patients with only MRI data were selected for single-modality deep learning model development, after which selected parameters were migrated for MDL model development with transfer learning (TL). In addition, 110 patients with simultaneous CT and MRI data were divided into a training cohort (n = 66) and a validation cohort (n = 44). We input the features extracted from DenseNet121 into an extreme learning machine (ELM) classifier to construct a classification model.
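The extreme-learning-machine step is simple enough to sketch on its own: the hidden layer is a fixed random projection, so only the output weights are fitted, in closed form. A minimal numpy illustration with toy stand-in data (the dimensions and data are assumptions, not the study's configuration):

```python
import numpy as np

def elm_fit(X, y, n_hidden=64, seed=0):
    """Extreme learning machine: the hidden layer is a fixed random
    projection, so only the output weights are solved (closed form)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy feature vectors standing in for DenseNet121 embeddings (hypothetical data).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
W, b, beta = elm_fit(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
```

Because only `beta` is fitted, training reduces to a single pseudo-inverse, which is why ELMs are popular as lightweight heads on top of deep features.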

RESULTS: The area under the curve (AUC) of the MDL model was 0.844, superior to that of the single-phase CT (AUC = 0.706-0.776, P < 0.05), single-sequence MRI (AUC = 0.706-0.717, P < 0.05), single-modality DL (all-phase CT AUC = 0.722, all-sequence MRI AUC = 0.731; P < 0.05), and clinical (AUC = 0.648, P < 0.05) models, but not to that of the delay-phase (DP) and in-phase (IP) MRI and portal venous phase (PVP) CT models. The MDL model achieved better performance than the models described above (P < 0.05). When combined with clinical features, the AUC of the MDL model increased from 0.844 to 0.871. A nomogram combining deep learning signatures (DLS) and clinical indicators for the MDL models demonstrated a greater overall net gain than the MDL models (P < 0.05).

CONCLUSION: The MDL model is a valuable noninvasive technique for preoperatively predicting MVI in HCC.

PMID:38433144 | DOI:10.1007/s00261-024-04202-1

Categories: Literature Watch

Deep learning solutions for smart city challenges in urban development

Sat, 2024-03-02 06:00

Sci Rep. 2024 Mar 2;14(1):5176. doi: 10.1038/s41598-024-55928-3.

ABSTRACT

In the realm of urban planning, the integration of deep learning technologies has emerged as a transformative force, promising to revolutionize the way cities are designed, managed, and optimized. This research embarks on a multifaceted exploration that combines the power of deep learning with Bayesian regularization techniques to enhance the performance and reliability of neural networks tailored for urban planning applications. Deep learning, characterized by its ability to extract complex patterns from vast urban datasets, has the potential to offer unprecedented insights into urban dynamics, transportation networks, and environmental sustainability. However, the complexity of these models often leads to challenges such as overfitting and limited interpretability. To address these issues, Bayesian regularization methods are employed to imbue neural networks with a principled framework that enhances generalization while quantifying predictive uncertainty. This research unfolds with the practical implementation of Bayesian regularization within neural networks, focusing on applications ranging from traffic prediction to urban infrastructure, data privacy, and safety and security. By integrating Bayesian regularization, the aim is not only to improve model performance in terms of accuracy and reliability but also to provide planners and decision-makers with probabilistic insights into the outcomes of various urban interventions. In tandem with quantitative assessments, graphical analysis is wielded as a crucial tool to visualize the inner workings of deep learning models in the context of urban planning. Through graphical representations, network visualizations, and decision boundary analysis, we uncover how Bayesian regularization influences neural network architecture and enhances interpretability.
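The mechanics of Bayesian regularization are easiest to see for a linear model, where the Gaussian-prior MAP estimate (equivalent to L2 weight decay) and the predictive uncertainty are both available in closed form. The sketch below uses assumed noise and prior scales on synthetic data, not the paper's setup:

```python
import numpy as np

def bayesian_ridge(X, y, alpha=1.0, sigma2=0.1):
    """MAP weights under a zero-mean Gaussian prior (equivalent to L2
    regularization), plus the posterior covariance for uncertainty."""
    A = X.T @ X / sigma2 + alpha * np.eye(X.shape[1])  # posterior precision
    cov = np.linalg.inv(A)
    w = cov @ X.T @ y / sigma2                         # posterior mean = MAP estimate
    return w, cov

def predict_with_uncertainty(Xs, w, cov, sigma2=0.1):
    mean = Xs @ w
    var = sigma2 + np.sum((Xs @ cov) * Xs, axis=1)     # predictive variance per point
    return mean, var

# Synthetic regression standing in for a traffic-prediction task (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)
w, cov = bayesian_ridge(X, y)
mean, var = predict_with_uncertainty(X[:5], w, cov)
```

The same prior-as-regularizer idea carries over to neural networks, where the posterior is approximated rather than computed exactly.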

PMID:38431741 | DOI:10.1038/s41598-024-55928-3

Categories: Literature Watch

Migraine headache (MH) classification using machine learning methods with data augmentation

Sat, 2024-03-02 06:00

Sci Rep. 2024 Mar 2;14(1):5180. doi: 10.1038/s41598-024-55874-0.

ABSTRACT

Migraine headache, a prevalent and intricate neurovascular disease, presents significant challenges in its clinical identification. Existing techniques that rely on subjective pain-intensity measures are insufficiently accurate for a reliable diagnosis. Even though headaches are a common condition with poor diagnostic specificity, they have a significant negative influence on the brain, body, and general human function. In this era of deeply intertwined health and technology, machine learning (ML) has emerged as a crucial force in transforming every aspect of healthcare, with groundbreaking achievements in classification and automated prediction. Deep learning models, in particular, have proven effective in solving complex problems spanning computer vision and data analytics. Consequently, the integration of ML in healthcare has become vital, especially in developing countries where limited medical resources and lack of awareness prevail, making the need to forecast and categorize migraines using artificial intelligence (AI) even more urgent. This study leverages state-of-the-art ML algorithms, including support vector machine (SVM), K-nearest neighbors (KNN), random forest (RF), decision tree (DST), and deep neural networks (DNN), to predict and classify various types of migraine. The models were trained on a publicly available dataset, with and without data augmentation, to classify seven types of migraine. With data augmentation, DNN, SVM, KNN, DST, and RF achieved accuracies of 99.66%, 94.60%, 97.10%, 88.20%, and 98.50%, respectively, highlighting the transformative potential of AI in enhancing migraine diagnosis.
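The general recipe here (augment a small tabular dataset, then fit a classical classifier) can be sketched minimally; the jitter-based augmentation scheme, parameters, and toy data below are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def augment(X, y, n_copies=3, noise=0.05, seed=0):
    """Expand a small tabular dataset by appending jittered copies of each sample."""
    rng = np.random.default_rng(seed)
    Xs = [X] + [X + rng.normal(scale=noise, size=X.shape) for _ in range(n_copies)]
    return np.vstack(Xs), np.concatenate([y] * (n_copies + 1))

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal K-nearest-neighbors classifier (majority vote over the k closest)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Two well-separated synthetic classes standing in for migraine feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
X_aug, y_aug = augment(X, y)
acc = np.mean(knn_predict(X_aug, y_aug, X) == y)
```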

PMID:38431729 | DOI:10.1038/s41598-024-55874-0

Categories: Literature Watch

Domain generalization enables general cancer cell annotation in single-cell and spatial transcriptomics

Sat, 2024-03-02 06:00

Nat Commun. 2024 Mar 2;15(1):1929. doi: 10.1038/s41467-024-46413-6.

ABSTRACT

Single-cell and spatial transcriptome sequencing, two recently optimized transcriptome sequencing methods, are increasingly used to study cancer and related diseases. Cell annotation, particularly malignant cell annotation, is essential and crucial for in-depth analyses in these studies. However, current algorithms lack accuracy and generalization, making it difficult to consistently and rapidly infer malignant cells from pan-cancer data. To address this issue, we present Cancer-Finder, a domain generalization-based deep-learning algorithm that can rapidly identify malignant cells in single-cell data with an average accuracy of 95.16%. More importantly, by replacing the single-cell training data with spatial transcriptomic datasets, Cancer-Finder can accurately identify malignant spots on spatial slides. Applied to 5 clear cell renal cell carcinoma spatial transcriptomic samples, Cancer-Finder demonstrates a good ability to identify malignant spots and identifies a signature of 10 genes that are significantly co-localized and enriched at the tumor-normal interface and strongly correlated with the prognosis of clear cell renal cell carcinoma patients. In conclusion, Cancer-Finder is an efficient and extensible tool for malignant cell annotation.

PMID:38431724 | DOI:10.1038/s41467-024-46413-6

Categories: Literature Watch

IInception-CBAM-IBiGRU based fault diagnosis method for asynchronous motors

Sat, 2024-03-02 06:00

Sci Rep. 2024 Mar 2;14(1):5192. doi: 10.1038/s41598-024-55367-0.

ABSTRACT

Aiming at the problems of insufficient extraction of asynchronous motor fault features by traditional deep learning algorithms and poor fault diagnosis in noisy environments, this paper proposes an end-to-end fault diagnosis method for asynchronous motors based on IInception-CBAM-IBiGRU. The method first uses a signal-to-grayscale-image conversion to turn one-dimensional vibration signals into two-dimensional images and initially extracts shallow features through two-dimensional convolution. The Improved Inception (IInception) module is then used as a residual block to learn features at different scales with a residual structure, and the Convolutional Block Attention Module (CBAM) extracts important feature information and adjusts the weight parameters. The feature information is next input to the Improved Bi-directional Gated Recurrent Unit (IBiGRU) to further extract its timing features; finally, fault identification is achieved by the SoftMax function. The primary hyperparameters of the model are optimized by the Weighted Mean of Vectors algorithm (INFO). The experimental results show that the method is effective in fault diagnosis of asynchronous motors, with an accuracy rate close to 100%, and can still maintain a high accuracy rate at low signal-to-noise ratios, with good robustness and generalization ability.

PMID:38431682 | DOI:10.1038/s41598-024-55367-0

Categories: Literature Watch

Jewelry rock discrimination as interpretable data using laser-induced breakdown spectroscopy and a convolutional LSTM deep learning algorithm

Sat, 2024-03-02 06:00

Sci Rep. 2024 Mar 2;14(1):5169. doi: 10.1038/s41598-024-55502-x.

ABSTRACT

In this study, the convolutional neural network-long short-term memory (CNN-LSTM) deep learning algorithm is used to classify various jewelry rocks, such as agate, turquoise, calcites, and azure, from various historical periods and styles related to Shahr-e Sokhteh. The CNN-LSTM architecture uses CNN layers to extract features from the input data, combined with LSTMs to support sequence forecasting. Interpretable deep-learning-assisted laser-induced breakdown spectroscopy (LIBS) helped achieve excellent performance. For the first time, this paper interprets the effectiveness of the convolutional LSTM layer by layer in self-adaptively obtaining LIBS features and quantitative data on the major chemical elements in jewelry rocks. Moreover, the Lasso method is applied to the data to investigate interpretability. The results demonstrate that LIBS can be effectively combined with a deep learning algorithm for the classification of different jewelry rocks. The proposed methodology yielded high accuracy, confirming the effectiveness and suitability of the approach in the discrimination process.

PMID:38431680 | DOI:10.1038/s41598-024-55502-x

Categories: Literature Watch

Evaluating the accuracy of the Ophthalmologist Robot for multiple blindness-causing eye diseases: a multicentre, prospective study protocol

Sat, 2024-03-02 06:00

BMJ Open. 2024 Mar 1;14(3):e077859. doi: 10.1136/bmjopen-2023-077859.

ABSTRACT

INTRODUCTION: Early eye screening and treatment can reduce the incidence of blindness by detecting and addressing eye diseases at an early stage. The Ophthalmologist Robot is an automated device that can simultaneously capture ocular surface and fundus images without the need for ophthalmologists, making it highly suitable for primary application. However, the accuracy of the device's screening capabilities requires further validation. This study aims to evaluate and compare the screening accuracies of ophthalmologists and deep learning models using images captured by the Ophthalmologist Robot, in order to identify a screening method that is both highly accurate and cost-effective. Our findings may provide valuable insights into the potential applications of remote eye screening.

METHODS AND ANALYSIS: This is a multicentre, prospective study that will recruit approximately 1578 participants from 3 hospitals. All participants will undergo ocular surface and fundus images taken by the Ophthalmologist Robot. Additionally, 695 participants will have their ocular surface imaged with a slit lamp. Relevant information from outpatient medical records will be collected. The primary objective is to evaluate the accuracy of ophthalmologists' screening for multiple blindness-causing eye diseases using device images through receiver operating characteristic curve analysis. The targeted diseases include keratitis, corneal scar, cataract, diabetic retinopathy, age-related macular degeneration, glaucomatous optic neuropathy and pathological myopia. The secondary objective is to assess the accuracy of deep learning models in disease screening. Furthermore, the study aims to compare the consistency between the Ophthalmologist Robot and the slit lamp in screening for keratitis and corneal scar using the Kappa test. Additionally, the cost-effectiveness of three eye screening methods, based on non-telemedicine screening, ophthalmologist-telemedicine screening and artificial intelligence-telemedicine screening, will be assessed by constructing Markov models.

ETHICS AND DISSEMINATION: The study has obtained approval from the ethics committee of the Ophthalmology and Optometry Hospital of Wenzhou Medical University (reference: 2023-026 K-21-01). This work will be disseminated by peer-review publications, abstract presentations at national and international conferences and data sharing with other researchers.

TRIAL REGISTRATION NUMBER: ChiCTR2300070082.

PMID:38431298 | DOI:10.1136/bmjopen-2023-077859

Categories: Literature Watch

BERT-siRNA: siRNA target prediction based on BERT pre-trained interpretable model

Sat, 2024-03-02 06:00

Gene. 2024 Feb 29:148330. doi: 10.1016/j.gene.2024.148330. Online ahead of print.

ABSTRACT

Silencing mRNA through siRNA is vital for RNA interference (RNAi), necessitating accurate computational methods for siRNA selection. Current approaches, relying on machine learning, often face challenges with large data requirements and intricate data preprocessing, leading to reduced accuracy. To address this challenge, we propose a BERT-based siRNA target-gene knockdown-efficiency prediction method called BERT-siRNA, which consists of a pre-trained DNA-BERT module and a multilayer perceptron (MLP) module. It applies the concept of transfer learning to avoid the limitations of a small sample size and the need for extensive preprocessing: the model is pretrained on extensive genomic data using DNA-BERT and then fine-tuned on various siRNA datasets to enhance its predictive capability. Our model clearly outperforms all existing siRNA prediction models in testing on an independent public siRNA dataset. Furthermore, the model's consistent predictions of high-efficiency siRNA knockdown for SARS-CoV-2, as well as its alignment with experimental results for PDCD1, CD38, and IL6, demonstrate its reliability and stability. In addition, the attention scores for all 19-nt positions in the dataset indicate that the model's attention is predominantly focused on the 5' end of the siRNA. A step-by-step visualization of the hidden layers progressively clarifies and explains the effective feature extraction of the MLP layer. Explaining the model by analyzing its attention scores and hidden layers is also a main purpose of this work, making it more explainable and reliable for biological researchers.

PMID:38431236 | DOI:10.1016/j.gene.2024.148330

Categories: Literature Watch

Experience of Implementing Deep Learning-Based Automatic Contouring in Breast Radiation Therapy Planning: Insights from Over 2,000 Cases

Sat, 2024-03-02 06:00

Int J Radiat Oncol Biol Phys. 2024 Feb 29:S0360-3016(24)00352-3. doi: 10.1016/j.ijrobp.2024.02.041. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: This study evaluates the impact and clinical utility of an auto-contouring system for radiation therapy treatments.

MATERIALS AND METHODS: The auto-contouring system was implemented in 2019. We evaluated data from 2,428 patients who underwent adjuvant breast radiation therapy before and after the system's introduction. We collected the finalized treatment contours, reviewed and revised by a multidisciplinary team. After implementation, the treatment contours underwent a finalization process that involved manual review and adjustment of the initial auto-contours. For the pre-implementation group (n = 369), auto-contours were generated retrospectively. We compared the auto-contours and final contours using the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95).
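Both reported metrics are easy to compute for binary masks. A minimal sketch with toy 2-D masks (real contour-QA pipelines work on 3-D volumes and use optimized libraries such as SciPy or ITK):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.
    Brute force -- fine for toy masks, too slow for full CT volumes."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True  # 6x6 "contour"
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True  # same contour shifted by 1 voxel
dsc = dice(a, b)                                        # 2*25 / (36+36) ~ 0.694
h = hd95(np.argwhere(a), np.argwhere(b))
```

DSC rewards volume overlap while HD95 penalizes boundary outliers, which is why the two are usually reported together.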

RESULTS: We analyzed 22,215 structures from final and corresponding auto-contours. The final contours were generally larger, encompassing more slices in the superior or inferior directions. For organs-at-risk (OAR), the heart, esophagus, spinal cord, and contralateral breast demonstrated significantly increased DSC and reduced HD95 post-implementation (all p < 0.05), except for the lungs, which presented inaccurate segmentation. Among target volumes, CTVn_L2, L3, L4, and IMN showed increased DSC and reduced HD95 in the post-implementation group (all p < 0.05), although the increase was less pronounced than OAR outcomes. The analysis also covered factors contributing to significant differences, pattern identification, and outlier detection.

CONCLUSIONS: In our study, the adoption of the auto-contouring system was associated with an increased reliance on automated settings, underscoring its utility and the potential risk of automation bias. Given these findings, we believe it may be important to consider the integration of stringent risk assessments and quality management strategies as a precautionary measure for the optimal use of such systems.

PMID:38431232 | DOI:10.1016/j.ijrobp.2024.02.041

Categories: Literature Watch

A joint brain extraction and image quality assessment framework for fetal brain MRI slices

Sat, 2024-03-02 06:00

Neuroimage. 2024 Feb 29:120560. doi: 10.1016/j.neuroimage.2024.120560. Online ahead of print.

ABSTRACT

Brain extraction and image quality assessment are two fundamental steps in fetal brain magnetic resonance imaging (MRI) 3D reconstruction and quantification. However, the randomness of fetal position and orientation, the variability of fetal brain morphology, the maternal organs around the fetus, and the scarcity of data samples all add excessive noise and pose a great challenge to automated brain extraction and quality assessment of fetal MRI slices. Conventionally, brain extraction and quality assessment are performed independently. However, both focus on the brain image representation, so they can be jointly optimized to ensure the network learns more effective features and avoids overfitting. To this end, we propose a novel two-stage dual-task deep learning framework with a brain localization stage and a dual-task stage for joint brain extraction and quality assessment of fetal MRI slices. Specifically, the dual-task module compactly contains a feature extraction module, a quality assessment head, and a segmentation head with feature fusion for simultaneous brain extraction and quality assessment. In addition, a transformer architecture is introduced into the feature extraction module and the segmentation head. We utilize a multi-step training strategy to guarantee stable and successful training of all modules. Finally, we validate our method with a 5-fold cross-validation and ablation study on a dataset of fetal brain MRI slices of varying quality, and additionally perform a cross-dataset validation. Experiments show that the proposed framework achieves very promising performance.

PMID:38431181 | DOI:10.1016/j.neuroimage.2024.120560

Categories: Literature Watch

Miffi: Improving the accuracy of CNN-based cryo-EM micrograph filtering with fine-tuning and Fourier space information

Sat, 2024-03-02 06:00

J Struct Biol. 2024 Feb 29:108072. doi: 10.1016/j.jsb.2024.108072. Online ahead of print.

ABSTRACT

Efficient and high-accuracy filtering of cryo-electron microscopy (cryo-EM) micrographs is an emerging challenge with the growing speed of data collection and sizes of datasets. Convolutional neural networks (CNNs) are machine learning models that have been proven successful in many computer vision tasks, and have been previously applied to cryo-EM micrograph filtering. In this work, we demonstrate that two strategies, fine-tuning models from pretrained weights and including the power spectrum of micrographs as input, can greatly improve the attainable prediction accuracy of CNN models. The resulting software package, Miffi, is open-source and freely available for public use (https://github.com/ando-lab/miffi).
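The second strategy — feeding the power spectrum alongside the raw micrograph — can be sketched with a plain 2-D FFT. This illustrates the idea only; the array sizes and normalization are assumptions, not Miffi's actual preprocessing:

```python
import numpy as np

def log_power_spectrum(img, eps=1e-8):
    """Centered log power spectrum of a 2-D image, rescaled to [0, 1].
    CTF rings and aberrations show up here far more clearly than in real space."""
    F = np.fft.fftshift(np.fft.fft2(img))        # zero frequency at the center
    ps = np.log(np.abs(F) ** 2 + eps)
    return (ps - ps.min()) / (ps.max() - ps.min() + eps)

rng = np.random.default_rng(0)
micrograph = rng.normal(size=(64, 64))           # stand-in for a cryo-EM micrograph
spec = log_power_spectrum(micrograph)
stacked = np.stack([micrograph, spec])           # 2-channel input for a CNN
```

Fine-tuning from pretrained weights then mainly requires adapting the first convolution to accept the extra channel.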

PMID:38431179 | DOI:10.1016/j.jsb.2024.108072

Categories: Literature Watch

Thickness and design features of clinical cranial implants-what should automated methods strive to replicate?

Sat, 2024-03-02 06:00

Int J Comput Assist Radiol Surg. 2024 Mar 2. doi: 10.1007/s11548-024-03068-4. Online ahead of print.

ABSTRACT

PURPOSE: New deep learning and statistical shape modelling approaches aim to automate the design process for patient-specific cranial implants, as highlighted by the MICCAI AutoImplant Challenges. To ensure applicability, it is important to determine if the training data used in developing these algorithms represent the geometry of implants designed for clinical use.

METHODS: Calavera Surgical Design provided a dataset of 206 post-craniectomy skull geometries and their clinically used implants. The MUG500+ dataset includes 29 post-craniectomy skull geometries and implants designed for automating design. For both implant and skull shapes, the inner and outer cortical surfaces were segmented, and the thickness between them was measured. For the implants, a 'rim' was defined that transitions from the repaired defect to the surrounding skull. For unilateral defect cases, skull implants were mirrored to the contra-lateral side and thickness differences were quantified.

RESULTS: The average thickness of the clinically used implants was 6.0 ± 0.5 mm, which approximates the thickness on the contra-lateral side of the skull (relative difference of -0.3 ± 1.4 mm). The average thickness of the MUG500+ implants was 2.9 ± 1.0 mm, significantly thinner than the intact skull thickness (relative difference of 2.9 ± 1.2 mm). Rim transitions in the clinical implants (average width of 8.3 ± 3.4 mm) were used to cap and create a smooth boundary with the skull.

CONCLUSIONS: For implant modelers or manufacturers, this shape analysis quantified differences in cranial implants (thickness, rim width, surface area, and volume) to help guide future automated design algorithms. After skull completion, a thicker implant can be more versatile for cases involving muscle hollowing or thin skulls, and wider rims can smooth over the defect margins to provide more stability. For clinicians, the differing measurements and implant designs can help inform the options available for patient-specific treatment.

PMID:38430381 | DOI:10.1007/s11548-024-03068-4

Categories: Literature Watch

Subtracting-adding strategy for necrotic lesion segmentation in osteonecrosis of the femoral head

Sat, 2024-03-02 06:00

Int J Comput Assist Radiol Surg. 2024 Mar 2. doi: 10.1007/s11548-024-03073-7. Online ahead of print.

ABSTRACT

PURPOSE: Osteonecrosis of the femoral head (ONFH) is a severe bone disease that can progressively lead to hip dysfunction. Accurately segmenting the necrotic lesion helps in diagnosing and treating ONFH. This paper aims to enhance deep learning models for necrotic lesion segmentation.

METHODS: Necrotic lesions of ONFH are confined to the femoral head. Considering this domain knowledge, we introduce a preprocessing procedure, termed the "subtracting-adding" strategy, which explicitly incorporates this domain knowledge into the downstream deep neural network input. This strategy first removes the voxels outside the predefined volume of interest to "subtract" irrelevant information, and then it concatenates the bone mask with raw data to "add" anatomical structure information.
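The two steps of the strategy map directly onto simple array operations. A minimal NumPy sketch, assuming the volume, volume-of-interest mask, and bone mask are same-shaped arrays (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def subtract_add(volume, voi_mask, bone_mask):
    """Sketch of the 'subtracting-adding' preprocessing.

    'Subtract': zero out voxels outside the predefined volume of interest.
    'Add': concatenate the bone mask as an extra input channel so the
    downstream network receives explicit anatomical structure information.
    """
    cropped = np.where(voi_mask, volume, 0)  # remove irrelevant voxels
    # Stack intensity and bone-mask channels -> shape (2, D, H, W)
    return np.stack([cropped, bone_mask.astype(volume.dtype)], axis=0)
```

The resulting two-channel array can then be fed to any off-the-shelf segmentation network, which is what makes the strategy model-agnostic.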

RESULTS: Each of the tested off-the-shelf networks performed better with the help of the "subtracting-adding" strategy. The Dice similarity coefficients increased by 10.93%, 9.23%, 9.38% and 1.60% for FCN, HRNet, SegNet and UNet, respectively. The improvements in FCN and HRNet were statistically significant.
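The Dice similarity coefficient used to report these gains can be computed from two binary masks. A minimal NumPy sketch (array names and the epsilon guard are illustrative choices, not from the paper):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding against empty masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```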

CONCLUSIONS: The "subtracting-adding" strategy enhances the performance of general-purpose networks in necrotic lesion segmentation. This strategy is compatible with various semantic segmentation networks, alleviating the need to design task-specific models.

PMID:38430380 | DOI:10.1007/s11548-024-03073-7

Categories: Literature Watch

Machine learning-based medical imaging diagnosis in patients with temporomandibular disorders: a diagnostic test accuracy systematic review and meta-analysis

Sat, 2024-03-02 06:00

Clin Oral Investig. 2024 Mar 2;28(3):186. doi: 10.1007/s00784-024-05586-6.

ABSTRACT

OBJECTIVES: Temporomandibular disorders (TMDs) are the second most common musculoskeletal condition and pose a diagnostic challenge for most clinicians. Recent research used machine learning (ML) algorithms to diagnose TMDs intelligently. This study aimed to systematically evaluate the quality of these studies and assess the diagnostic accuracy of existing models.

MATERIALS AND METHODS: Twelve databases (Europe PMC, Embase, etc.) and two registers were searched for published and unpublished studies using ML algorithms on medical images. Two reviewers extracted the characteristics of studies and assessed the methodological quality using the QUADAS-2 tool independently.

RESULTS: A total of 28 studies (29 reports) were included: one was at unclear risk of bias and the others were at high risk. Thus, the certainty of evidence was quite low. These studies used many types of algorithms, including 8 machine learning models (logistic regression, support vector machine, random forest, etc.) and 15 deep learning models (Resnet152, Yolo v5, Inception V3, etc.). The diagnostic accuracy of a few models was relatively satisfactory. The pooled sensitivity and specificity were 0.745 (0.660-0.814) and 0.770 (0.700-0.828) in random forest, 0.765 (0.686-0.829) and 0.766 (0.688-0.830) in XGBoost, and 0.781 (0.704-0.843) and 0.781 (0.704-0.843) in LightGBM.
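The per-study ingredients behind these pooled estimates are the standard 2x2 diagnostic counts. A minimal sketch of how sensitivity and specificity are derived from them (the review itself pools studies with a meta-analytic model, which this sketch does not reproduce):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Per-study sensitivity and specificity from a 2x2 diagnostic table.

    tp/fn: diseased cases correctly / incorrectly classified.
    tn/fp: healthy cases correctly / incorrectly classified.
    """
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity
```

For example, hypothetical counts of 75/25 and 77/23 would yield a sensitivity of 0.75 and a specificity of 0.77, in the range of the pooled random-forest estimates above.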

CONCLUSIONS: Most studies had high risks of bias in Patient Selection and Index Test. Some algorithms are relatively satisfactory and might be promising for intelligent diagnosis. Overall, more high-quality studies should be conducted and a wider range of algorithms evaluated in the future.

CLINICAL RELEVANCE: We evaluated the diagnostic accuracy of the existing models and provided clinicians with practical advice on the selection of algorithms. This study outlines a promising direction for future research, and we believe it will promote the intelligent diagnosis of TMDs.

PMID:38430334 | DOI:10.1007/s00784-024-05586-6

Categories: Literature Watch

Retinal imaging for the assessment of stroke risk: a systematic review

Sat, 2024-03-02 06:00

J Neurol. 2024 Mar 2. doi: 10.1007/s00415-023-12171-6. Online ahead of print.

ABSTRACT

BACKGROUND: Stroke is a leading cause of morbidity and mortality. Retinal imaging allows non-invasive assessment of the microvasculature. Consequently, retinal imaging is a technology which is garnering increasing attention as a means of assessing cardiovascular health and stroke risk.

METHODS: A biomedical literature search was performed to identify prospective studies that assess the role of retinal imaging derived biomarkers as indicators of stroke risk.

RESULTS: Twenty-four studies were included in this systematic review. The available evidence suggests that wider retinal venules, lower fractal dimension, increased arteriolar tortuosity, presence of retinopathy, and presence of retinal emboli are associated with increased likelihood of stroke. There is weaker evidence to suggest that narrower arterioles and the presence of individual retinopathy traits such as microaneurysms and arteriovenous nicking indicate increased stroke risk. Our review identified three models utilizing artificial intelligence algorithms for the analysis of retinal images to predict stroke. Two of these focused on fundus photographs, whilst one also utilized optical coherence tomography (OCT) images. The constructed models performed similarly to conventional risk scores but did not significantly exceed their performance. Only two studies identified in this review used OCT imaging, despite the higher dimensionality of this data.

CONCLUSION: Whilst there is strong evidence that retinal imaging features can be used to indicate stroke risk, there is currently no predictive model which significantly outperforms conventional risk scores. To develop clinically useful tools, future research should focus on utilization of deep learning algorithms, validation in external cohorts, and analysis of OCT images.

PMID:38430271 | DOI:10.1007/s00415-023-12171-6

Categories: Literature Watch
