Deep learning
Effective prediction of human skin cancer using stacking based ensemble deep learning algorithm
Network. 2024 May 28:1-37. doi: 10.1080/0954898X.2024.2346608. Online ahead of print.
ABSTRACT
Automated diagnosis of cancer from skin lesion data has been the focus of numerous research efforts. Nevertheless, these images can be challenging to interpret because of features such as colour illumination changes and variation in the sizes and shapes of lesions. To tackle these problems, the proposed model develops an ensemble of deep learning techniques for skin cancer diagnosis. Initially, skin imaging data are collected and preprocessed using resizing and anisotropic diffusion to enhance image quality. The preprocessed images are fed into the Fuzzy C-Means clustering technique to segment the diseased regions. A stacking-based ensemble deep learning approach is used for classification, with an LSTM acting as the meta-classifier. A Deep Neural Network (DNN) and a Convolutional Neural Network (CNN) provide the inputs to the LSTM. The segmented images are used as input to the CNN, while the local binary pattern (LBP) technique extracts features from the image segments for the DNN. The outputs of these two classifiers are fed into the LSTM meta-classifier, which classifies the input data and predicts skin cancer disease. The proposed approach achieved an accuracy of 97%. Hence, the developed model accurately predicts skin cancer disease.
PMID:38804548 | DOI:10.1080/0954898X.2024.2346608
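The stacking arrangement the abstract describes (base learners whose probability outputs feed a meta-classifier) can be sketched with scikit-learn. This is a minimal illustration, not the paper's pipeline: logistic regression stands in for the LSTM meta-classifier, simple classifiers stand in for the CNN and DNN branches, and synthetic data replaces the segmented-image and LBP features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for image-derived features (e.g. LBP descriptors)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Two base learners play the roles of the CNN and DNN branches;
# a logistic regression stands in for the LSTM meta-classifier.
stack = StackingClassifier(
    estimators=[
        ("cnn_branch", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                     random_state=0)),
        ("dnn_branch", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",  # base-learner probabilities feed the meta-level
)
stack.fit(X, y)
print(round(stack.score(X, y), 2))
```

The key design point mirrored here is that the meta-classifier never sees raw features, only the base learners' outputs.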
Clinical Case of Mild Tatton-Brown-Rahman Syndrome Caused by a Nonsense Variant in DNMT3A Gene
Clin Pract. 2024 May 21;14(3):928-933. doi: 10.3390/clinpract14030073.
ABSTRACT
Tatton-Brown-Rahman syndrome is a rare autosomal dominant hereditary disease caused by pathogenic variants in the DNMT3A gene, an important participant in epigenetic regulation, especially during embryonic development, that is highly expressed in all tissues. The main features of the syndrome are overgrowth (tall stature), macrocephaly, intellectual disability, and facial dysmorphic features. We present a clinical case of Tatton-Brown-Rahman syndrome in a ten-year-old boy with macrocephaly, learning difficulties, progressive eye impairment, and fatigue, suspected by a deep learning-based diagnosis assistance system, Face2Gene. The proband underwent whole-exome sequencing, which revealed a recurrent nonsense variant in the 12th exon of the DNMT3A gene leading to the formation of a premature stop codon, NM_022552.5:c.1443C>A (p.Tyr481Ter), in a heterozygous state. This variant was not found in the parents, confirming its de novo status. The case described here contributes to the understanding of the clinical diversity of Tatton-Brown-Rahman syndrome, and its mild clinical presentation expands the phenotypic spectrum of the syndrome. We report the first recurrent nonsense variant in the DNMT3A gene, suggesting a mutational hotspot. Differential diagnosis of this syndrome against Sotos syndrome, Weaver syndrome, and Cowden syndrome, as well as molecular confirmation, is extremely important, since the presence of certain types of pathogenic variants in the DNMT3A gene significantly increases the risk of developing acute myeloid leukemia.
PMID:38804405 | DOI:10.3390/clinpract14030073
A Novel Deep Learning Model for Drug-drug Interactions
Curr Comput Aided Drug Des. 2024;20(5):666-672. doi: 10.2174/0115734099265663230926064638.
ABSTRACT
INTRODUCTION: Drug-drug interactions (DDIs) can lead to adverse events and compromised treatment efficacy that emphasize the need for accurate prediction and understanding of these interactions.
METHODS: In this paper, we propose a novel approach for DDI prediction using two separate message-passing neural network (MPNN) models, each focused on one drug in a pair. By capturing the unique characteristics of each drug and their interactions, the proposed method aims to improve the accuracy of DDI prediction. The outputs of the individual MPNN models are combined to integrate the information from both drugs and their molecular features. Evaluating the proposed method on a comprehensive dataset, we demonstrate its superior performance with an accuracy of 0.90, an area under the curve (AUC) of 0.99, and an F1-score of 0.80. These results highlight the effectiveness of the proposed approach in accurately identifying potential drug-drug interactions.
RESULTS: The use of two separate MPNN models offers a flexible framework for capturing drug characteristics and interactions, contributing to our understanding of DDIs. The findings of this study have significant implications for patient safety and personalized medicine, with the potential to optimize treatment outcomes by preventing adverse events.
CONCLUSION: Further research and validation on larger datasets and real-world scenarios are necessary to explore the generalizability and practicality of this approach.
PMID:38804324 | DOI:10.2174/0115734099265663230926064638
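The dual-MPNN idea can be sketched in numpy: one round of message passing per drug, mean-pooling to a fixed-size embedding, and concatenation of the two drug vectors for an interaction score. The graphs, features, and weights here are random stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpnn_embedding(adj, feats, w):
    """One round of message passing, then mean-pool to a graph embedding."""
    messages = adj @ feats                     # each atom sums its neighbours' features
    hidden = np.tanh((feats + messages) @ w)   # update with shared weights
    return hidden.mean(axis=0)                 # pool atoms -> fixed-size drug vector

# Toy 'molecules': random adjacency matrices and per-atom features
adj_a, feats_a = rng.integers(0, 2, (5, 5)), rng.normal(size=(5, 8))
adj_b, feats_b = rng.integers(0, 2, (4, 4)), rng.normal(size=(4, 8))
w = rng.normal(size=(8, 16))

# One MPNN per drug; the outputs are concatenated for the interaction head
pair_vec = np.concatenate([mpnn_embedding(adj_a, feats_a, w),
                           mpnn_embedding(adj_b, feats_b, w)])
score = 1 / (1 + np.exp(-pair_vec.sum()))  # stand-in interaction probability
print(pair_vec.shape)
```

Mean-pooling makes the embedding size independent of molecule size, which is what lets two differently sized drugs be concatenated into one fixed-length pair vector.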
fMRI-based spatio-temporal parcellations of the human brain
Curr Opin Neurol. 2024 May 28. doi: 10.1097/WCO.0000000000001280. Online ahead of print.
ABSTRACT
PURPOSE OF REVIEW: Human brain parcellation based on functional magnetic resonance imaging (fMRI) plays an essential role in neuroscience research. By segmenting vast and intricate fMRI data into functionally similar units, researchers can better decipher the brain's structure in both healthy and diseased states. This article reviews current methodologies and ideas in this field, while also outlining the obstacles and directions for future research.
RECENT FINDINGS: Traditional brain parcellation techniques, which often rely on cytoarchitectonic criteria, overlook the functional and temporal information accessible through fMRI. The adoption of machine learning techniques, notably deep learning, offers the potential to harness both spatial and temporal information for more nuanced brain segmentation. However, the search for a one-size-fits-all solution to brain segmentation is impractical, with the choice between group-level or individual-level models and the intended downstream analysis influencing the optimal parcellation strategy. Additionally, evaluating these models is complicated by our incomplete understanding of brain function and the absence of a definitive "ground truth".
SUMMARY: While recent methodological advancements have significantly enhanced our grasp of the brain's spatial and temporal dynamics, challenges persist in advancing fMRI-based spatio-temporal representations. Future efforts will likely focus on refining model evaluation and selection as well as developing methods that offer clear interpretability for clinical usage, thereby facilitating further breakthroughs in our comprehension of the brain.
PMID:38804205 | DOI:10.1097/WCO.0000000000001280
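One common baseline behind fMRI parcellation, clustering voxels by the similarity of their time courses, can be sketched as follows. The "fMRI" data are synthetic and the use of k-means is purely illustrative, not a recommendation from the review.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: 500 voxels x 120 time points,
# generated from three latent time courses plus noise.
latent = rng.normal(size=(3, 120))
labels_true = rng.integers(0, 3, 500)
voxels = latent[labels_true] + 0.5 * rng.normal(size=(500, 120))

# Cluster voxels by temporal similarity -> a crude functional parcellation
parcels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
print(np.bincount(parcels))
```

The choice of the number of parcels, fixed at 3 here, is exactly the kind of model-selection question the review flags as unresolved.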
Deep learning unlocks label-free viability assessment of cancer spheroids in microfluidics
Lab Chip. 2024 May 28. doi: 10.1039/d4lc00197d. Online ahead of print.
ABSTRACT
Despite recent advances in cancer treatment, refining therapeutic agents remains a critical task for oncologists. Precise evaluation of drug effectiveness necessitates the use of 3D cell culture instead of traditional 2D monolayers. Microfluidic platforms have enabled high-throughput drug screening with 3D models, but current viability assays for 3D cancer spheroids have limitations in reliability and cytotoxicity. This study introduces a deep learning model for non-destructive, label-free viability estimation based on phase-contrast images, providing a cost-effective, high-throughput solution for continuous spheroid monitoring in microfluidics. Microfluidic technology facilitated the creation of a high-throughput cancer spheroid platform with approximately 12 000 spheroids per chip for drug screening. Validation involved tests with eight conventional chemotherapeutic drugs, revealing a strong correlation between viability assessed via LIVE/DEAD staining and phase-contrast morphology. Extending the model's application to novel compounds and cell lines not in the training dataset yielded promising results, implying the potential for a universal viability estimation model. Experiments with an alternative microscopy setup supported the model's transferability across different laboratories. Using this method, we also tracked the dynamic changes in spheroid viability during the course of drug administration. In summary, this research integrates a robust platform with high-throughput microfluidic cancer spheroid assays and deep learning-based viability estimation, with broad applicability to various cell lines, compounds, and research settings.
PMID:38804084 | DOI:10.1039/d4lc00197d
Deep learning system for screening AIDS-related cytomegalovirus retinitis with ultra-wide-field fundus images
Heliyon. 2024 May 15;10(10):e30881. doi: 10.1016/j.heliyon.2024.e30881. eCollection 2024 May 30.
ABSTRACT
BACKGROUND: Ophthalmological screening for cytomegalovirus retinitis (CMVR) in HIV/AIDS patients is important to prevent lifelong blindness. Previous studies have shown good performance of automated CMVR screening using digital fundus images. However, the application of a deep learning (DL) system to CMVR with ultra-wide-field (UWF) fundus images has not been studied, and the feasibility and efficiency of this approach are uncertain.
METHODS: In this study, we developed, internally validated, externally validated, and prospectively validated a DL system to detect AIDS-related CMVR from UWF fundus images from different clinical datasets. We used the InceptionResnetV2 network to develop and internally validate a DL system for identifying active CMVR, inactive CMVR, and non-CMVR in 6960 UWF fundus images from 862 AIDS patients, and validated the system in a prospective and an external validation dataset using the area under the curve (AUC), accuracy, sensitivity, and specificity. A heat map identified the most important areas (lesions) used by the DL system for differentiating CMVR.
RESULTS: The DL system showed AUCs of 0.945 (95% confidence interval [CI]: 0.929, 0.962), 0.964 (95% CI: 0.870, 0.999) and 0.968 (95% CI: 0.860, 1.000) for detecting active CMVR from non-CMVR and 0.923 (95% CI: 0.908, 0.938), 0.902 (0.857, 0.948) and 0.884 (0.851, 0.917) for detecting active CMVR from non-CMVR in the internal cross-validation, external validation, and prospective validation, respectively. Deep learning performed promisingly in screening CMVR. It also showed the ability to differentiate active CMVR from non-CMVR and inactive CMVR, as well as to identify active CMVR and inactive CMVR from non-CMVR (all AUCs in the three independent datasets >0.900). The heat maps successfully highlighted lesion locations.
CONCLUSIONS: Our UWF fundus image-based DL system showed reliable performance for screening AIDS-related CMVR, demonstrating its potential for screening CMVR in HIV/AIDS patients, especially in settings lacking ophthalmic resources.
PMID:38803983 | PMC:PMC11128864 | DOI:10.1016/j.heliyon.2024.e30881
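AUC values with 95% confidence intervals, as reported above, are often obtained by bootstrap resampling of the test cases. A generic sketch (with synthetic labels and scores, not the study's data) might look like this:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic labels and classifier scores standing in for screening predictions
y = rng.integers(0, 2, 400)
scores = y + rng.normal(scale=0.8, size=400)  # informative but noisy scores

aucs = []
for _ in range(1000):                      # bootstrap resampling of cases
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:         # both classes needed to compute AUC
        continue
    aucs.append(roc_auc_score(y[idx], scores[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])  # 95% percentile interval
print(f"AUC {roc_auc_score(y, scores):.3f} (95% CI {lo:.3f}, {hi:.3f})")
```

Percentile bootstrap is only one of several CI constructions; the paper does not state which method it used.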
Teeth segmentation and carious lesions segmentation in panoramic X-ray images using CariSeg, a networks' ensemble
Heliyon. 2024 May 10;10(10):e30836. doi: 10.1016/j.heliyon.2024.e30836. eCollection 2024 May 30.
ABSTRACT
BACKGROUND: Dental cavities are common oral diseases that can lead to pain, discomfort, and eventually tooth loss. Early detection and treatment of cavities can prevent these negative consequences. We propose CariSeg, an intelligent system composed of four neural networks that detects cavities in dental X-rays with 99.42% accuracy.
METHOD: The first model of CariSeg, trained using the U-Net architecture, segments the area of interest, the teeth, and crops the radiograph around it. The next component, an ensemble of three architectures (U-Net, Feature Pyramid Network, and DeepLabV3), segments the carious lesions. For tooth identification, two merged datasets were used: the Tufts Dental Database, consisting of 1000 panoramic radiography images, and another dataset of 116 anonymized panoramic X-rays taken at Noor Medical Imaging Center, Qom. For carious lesion segmentation, a dataset of 150 panoramic X-ray images was acquired from the Department of Oral and Maxillofacial Surgery and Radiology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca.
RESULTS: The experiments demonstrate that our approach results in 99.42% accuracy and a mean 68.2% Dice coefficient.
CONCLUSIONS: AI helps in detecting carious lesions by analyzing dental X-rays and identifying cavities that might be missed by human observers, leading to earlier detection and treatment of cavities and resulting in better oral health outcomes.
PMID:38803980 | PMC:PMC11128823 | DOI:10.1016/j.heliyon.2024.e30836
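The Dice coefficient reported above measures overlap between predicted and ground-truth segmentation masks; a minimal sketch on toy masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy 'lesion' masks: the prediction overlaps the ground truth partially
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # 4x4 lesion
pred  = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1    # shifted by 1 px
print(round(dice(pred, truth), 3))  # overlap 3x3=9 -> 2*9/(16+16) = 0.562
```

A Dice of 0.68, as in the results, means roughly two-thirds overlap between predicted and annotated lesion pixels, which is typical for small, low-contrast targets like caries.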
Automatic detection of potholes using VGG-16 pre-trained network and Convolutional Neural Network
Heliyon. 2024 May 14;10(10):e30957. doi: 10.1016/j.heliyon.2024.e30957. eCollection 2024 May 30.
ABSTRACT
A self-driving car is necessary to implement traffic intelligence because it can vastly enhance both driving safety and driver comfort by adjusting to the circumstances of the road ahead. Road hazards such as potholes pose a major challenge for autonomous vehicles, increasing the risk of crashes and vehicle damage. Real-time identification of road potholes is required to solve this issue. To this end, various approaches have been tried, including notifying the appropriate authorities, utilizing vibration-based sensors, and three-dimensional laser imaging. Unfortunately, these approaches have several drawbacks, such as large initial expenditures and the possibility of being discovered. Transfer learning is considered a potential answer to the pressing necessity of automating pothole identification. A Convolutional Neural Network (CNN) is constructed to categorize potholes effectively, using the pre-trained VGG-16 model for transfer learning during training. A Super-Resolution Generative Adversarial Network (SRGAN) is suggested to enhance overall image quality. Experiments with the suggested approach to classifying road potholes revealed a high accuracy rate of 97.3%, and its effectiveness was tested using various criteria. The developed transfer learning technique obtained the best accuracy rate compared with many other deep learning algorithms.
PMID:38803954 | PMC:PMC11128863 | DOI:10.1016/j.heliyon.2024.e30957
Automated system for classifying uni-bicompartmental knee osteoarthritis by using redefined residual learning with convolutional neural network
Heliyon. 2024 May 14;10(10):e31017. doi: 10.1016/j.heliyon.2024.e31017. eCollection 2024 May 30.
ABSTRACT
Knee osteoarthritis (OA) is one of the most common joint diseases and may cause physical disability associated with a significant personal and socioeconomic burden. X-ray imaging is the cheapest and most common method to detect knee OA. Accurate classification of knee OA can help physicians manage treatment efficiently and slow knee OA progression. This study aims to classify knee OA X-ray images according to anatomical type, i.e., unicompartmental or bicompartmental. The study proposes a deep learning model for classifying unicompartmental and bicompartmental knee OA based on redefined residual learning with a CNN. The proposed model was trained, validated, and tested on a dataset containing 733 knee X-ray images (331 normal knee images, 205 unicompartmental, and 197 bicompartmental). The results show 61.81% and 68.33% for accuracy and specificity, respectively. The performance of the proposed model was then compared with different pre-trained CNNs; the proposed model achieved better results than all pre-trained CNNs.
PMID:38803931 | PMC:PMC11128872 | DOI:10.1016/j.heliyon.2024.e31017
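The residual learning the model builds on routes the input around each transformation via a skip connection. A bare-bones numpy forward pass (illustrative dimensions and weights, not the paper's architecture) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Forward pass of a simple residual block: relu(f(x) + x)."""
    h = relu(x @ w1)          # first transformation
    return relu(h @ w2 + x)   # second transformation plus identity shortcut

x = rng.normal(size=(4, 16))                # batch of 4 feature vectors
w1 = rng.normal(size=(16, 16)) * 0.1
w2 = rng.normal(size=(16, 16)) * 0.1
out = residual_block(x, w1, w2)
print(out.shape)  # identity shortcut requires matching shapes: (4, 16)
```

The shortcut lets gradients flow directly through `+ x`, which is what makes deep residual stacks trainable.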
XAI-FusionNet: Diabetic foot ulcer detection based on multi-scale feature fusion with explainable artificial intelligence
Heliyon. 2024 May 14;10(10):e31228. doi: 10.1016/j.heliyon.2024.e31228. eCollection 2024 May 30.
ABSTRACT
Diabetic foot ulcer (DFU) poses a significant threat to individuals affected by diabetes, often leading to limb amputation. Early detection of DFU can greatly improve the chances of survival for diabetic patients. This work introduces FusionNet, a novel multi-scale feature fusion network designed to accurately differentiate DFU skin from healthy skin using multiple pre-trained convolutional neural network (CNN) algorithms. A dataset comprising 6963 skin images (3574 healthy and 3389 ulcer) from various patients was divided into training (6080 images), validation (672 images), and testing (211 images) sets. Initially, three image preprocessing techniques - Gaussian filter, median filter, and motion blur estimation - were applied to eliminate irrelevant, noisy, and blurry data. Subsequently, three pre-trained CNN algorithms - DenseNet201, VGG19, and NASNetMobile - were utilized to extract high-frequency features from the input images. These features were then fed into a meta-tuner module to predict DFU by selecting the most discriminative features. Statistical tests, including Friedman and analysis of variance (ANOVA), were employed to identify significant differences between FusionNet and other sub-networks. Finally, three eXplainable Artificial Intelligence (XAI) algorithms - SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Grad-CAM (Gradient-weighted Class Activation Mapping) - were integrated into FusionNet to enhance transparency and explainability. The FusionNet classifier achieved exceptional classification results with 99.05% accuracy, 98.18% recall, 100.00% precision, 99.09% AUC, and 99.08% F1 score. We believe that our proposed FusionNet will be a valuable tool in the medical field to distinguish DFU from healthy skin.
PMID:38803883 | PMC:PMC11129011 | DOI:10.1016/j.heliyon.2024.e31228
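The reported accuracy, precision, recall, and F1 all derive from the confusion matrix. The counts below are hypothetical, chosen only to be consistent with a 211-image test set; they are not the paper's actual confusion matrix.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical counts on a 211-image test set (illustrative only)
acc, prec, rec, f1 = classification_metrics(tp=108, fp=0, fn=2, tn=101)
print(f"acc={acc:.4f} precision={prec:.4f} recall={rec:.4f} F1={f1:.4f}")
```

Note how zero false positives forces precision to exactly 100%, matching the pattern in the reported figures.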
A hands-on guide to use network video recorders, internet protocol cameras, and deep learning models for dynamic monitoring of trout and salmon in small streams
Ecol Evol. 2024 May 27;14(5):e11246. doi: 10.1002/ece3.11246. eCollection 2024 May.
ABSTRACT
This study outlines a method for using surveillance cameras and an algorithm that calls a deep learning model to generate video segments featuring salmon and trout in small streams. This automated process greatly reduces the need for human intervention in video surveillance. Furthermore, a comprehensive guide is provided on setting up and configuring surveillance equipment, along with instructions on training a deep learning model tailored to specific requirements. Access to video data and knowledge about deep learning models makes monitoring of trout and salmon dynamic and hands-on, as the collected data can be used to train and further improve deep learning models. Hopefully, this setup will encourage fisheries managers to conduct more monitoring as the equipment is relatively cheap compared with customized solutions for fish monitoring. To make effective use of the data, natural markings of the camera-captured fish can be used for individual identification. While the automated process greatly reduces the need for human intervention in video surveillance and speeds up the initial sorting and detection of fish, the manual identification of individual fish based on natural markings still requires human effort and involvement. Individual encounter data hold many potential applications, such as capture-recapture and relative abundance models, and for evaluating fish passages in streams with hydropower by spatial recaptures, that is, the same individual identified at different locations. There is much to gain by using this technique as camera captures are the better option for the fish's welfare and are less time-consuming compared with physical captures and tagging.
PMID:38803608 | PMC:PMC11128984 | DOI:10.1002/ece3.11246
Image Quality and Lesion Detection of Multiplanar Reconstruction Images Using Deep Learning: Comparison with Hybrid Iterative Reconstruction
Yonago Acta Med. 2024 Apr 22;67(2):100-107. doi: 10.33160/yam.2024.05.001. eCollection 2024 May.
ABSTRACT
BACKGROUND: We assessed and compared the image quality of normal and pathologic structures as well as the image noise in chest computed tomography images using "adaptive statistical iterative reconstruction-V" (ASiR-V) or deep learning reconstruction "TrueFidelity".
METHODS: Forty consecutive patients with suspected lung disease were evaluated. The 1.25-mm axial images and 2.0-mm coronal multiplanar images were reconstructed under the following three conditions: (i) ASiR-V, lung kernel with 60% of ASiR-V; (ii) TF-M, standard kernel, image filter (Lung) with TrueFidelity at medium strength; and (iii) TF-H, standard kernel, image filter (Lung) with TrueFidelity at high strength. Two radiologists (readers) independently evaluated the image quality of anatomic structures using a scale ranging from 1 (best) to 5 (worst). In addition, readers ranked their image preference. Objective image noise was measured using a circular region of interest in the lung parenchyma. Subjective image quality scores, total scores for normal and abnormal structures, and lesion detection were compared using Wilcoxon's signed-rank test. Objective image quality was compared using Student's paired t-test and Wilcoxon's signed-rank test. The Bonferroni correction was applied to the P value, and significance was assumed only for values of P < 0.016.
RESULTS: Both readers rated TF-M and TF-H images significantly better than ASiR-V images in terms of visualization of the centrilobular region in axial images. The preference scores of TF-M and TF-H images for reader 1 were better than those of ASiR-V images, and the preference scores of TF-H images for reader 2 were significantly better than those of ASiR-V and TF-M images. TF-M images showed significantly lower objective image noise than ASiR-V or TF-H images.
CONCLUSION: TrueFidelity showed better image quality, especially in the centrilobular region, than ASiR-V in subjective and objective evaluations. In addition, the image texture preference for TrueFidelity was better than that for ASiR-V.
PMID:38803592 | PMC:PMC11128077 | DOI:10.33160/yam.2024.05.001
Differentiation of benign and malignant parotid gland tumors based on the fusion of radiomics and deep learning features on ultrasound images
Front Oncol. 2024 May 13;14:1384105. doi: 10.3389/fonc.2024.1384105. eCollection 2024.
ABSTRACT
OBJECTIVE: The pathological classification and imaging manifestation of parotid gland tumors are complex, while accurate preoperative identification plays a crucial role in clinical management and prognosis assessment. This study aims to construct and compare the performance of clinical models, traditional radiomics models, deep learning (DL) models, and deep learning radiomics (DLR) models based on ultrasound (US) images in differentiating between benign parotid gland tumors (BPGTs) and malignant parotid gland tumors (MPGTs).
METHODS: Retrospective analysis was conducted on 526 patients with confirmed PGTs after surgery, who were randomly divided into a training set and a testing set in the ratio of 7:3. Traditional radiomics and three DL models (DenseNet121, VGG19, ResNet50) were employed to extract handcrafted radiomics (HCR) features and DL features followed by feature fusion. Seven machine learning classifiers including logistic regression (LR), support vector machine (SVM), RandomForest, ExtraTrees, XGBoost, LightGBM and multi-layer perceptron (MLP) were combined to construct predictive models. The most optimal model was integrated with clinical and US features to develop a nomogram. Receiver operating characteristic (ROC) curve was employed for assessing performance of various models while the clinical utility was assessed by decision curve analysis (DCA).
RESULTS: The DLR model based on ExtraTrees demonstrated superior performance with AUC values of 0.943 (95% CI: 0.918-0.969) and 0.916 (95% CI: 0.861-0.971) for the training and testing sets, respectively. The combined model, the DLR nomogram (DLRN), further enhanced performance, with AUC values of 0.960 (95% CI: 0.940-0.979) and 0.934 (95% CI: 0.876-0.991) for the training and testing sets, respectively. DCA indicated that the DLRN provided greater clinical benefits than the other models.
CONCLUSION: DLRN based on US images shows exceptional performance in distinguishing BPGTs and MPGTs, providing more reliable information for personalized diagnosis and treatment plans in clinical practice.
PMID:38803533 | PMC:PMC11128676 | DOI:10.3389/fonc.2024.1384105
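Feature-level fusion followed by an ExtraTrees classifier with a 7:3 split, as described above, can be sketched with scikit-learn; the "radiomics" and "deep" features here are synthetic stand-ins, not data from the study.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: handcrafted radiomics (HCR) and deep features per lesion
n = 500
y = rng.integers(0, 2, n)                     # benign (0) vs malignant (1)
hcr = y[:, None] + rng.normal(size=(n, 20))   # 20 'radiomics' features
deep = y[:, None] + rng.normal(size=(n, 64))  # 64 'DL-extracted' features

fused = np.hstack([hcr, deep])                # simple feature-level fusion

# 7:3 train/test split, mirroring the study's protocol
X_tr, X_te, y_tr, y_te = train_test_split(fused, y, test_size=0.3,
                                          random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

Concatenating the two feature blocks before fitting is the simplest fusion strategy; the study additionally compared seven classifier families on the fused representation.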
Deep learning facilitates efficient optimization of antisense oligonucleotide drugs
Mol Ther Nucleic Acids. 2024 May 16;35(2):102208. doi: 10.1016/j.omtn.2024.102208. eCollection 2024 Jun 11.
NO ABSTRACT
PMID:38803420 | PMC:PMC11129084 | DOI:10.1016/j.omtn.2024.102208
A scoping review of machine learning for sepsis prediction - feature engineering strategies and model performance: a step towards explainability
Crit Care. 2024 May 28;28(1):180. doi: 10.1186/s13054-024-04948-6.
ABSTRACT
BACKGROUND: Sepsis, an acute and potentially fatal systemic response to infection, significantly impacts global health by affecting millions annually. Prompt identification of sepsis is vital, as treatment delays lead to increased fatalities through progressive organ dysfunction. While recent studies have delved into leveraging Machine Learning (ML) for predicting sepsis, focusing on aspects such as prognosis, diagnosis, and clinical application, there remains a notable deficiency in the discourse regarding feature engineering. Specifically, the role of feature selection and extraction in enhancing model accuracy has been underexplored.
OBJECTIVES: This scoping review aims to fulfill two primary objectives: to identify pivotal features for predicting sepsis across a variety of ML models, providing valuable insights for future model development, and to assess model efficacy through performance metrics, including AUROC, sensitivity, and specificity.
RESULTS: The analysis included 29 studies across diverse clinical settings such as Intensive Care Units (ICU), Emergency Departments, and others, encompassing 1,147,202 patients. The review highlighted the diversity in prediction strategies and timeframes. It was found that feature extraction techniques notably outperformed others in terms of sensitivity and AUROC values, thus indicating their critical role in improving sepsis prediction models.
CONCLUSION: Key dynamic indicators, including vital signs and critical laboratory values, are instrumental in the early detection of sepsis. Applying feature selection methods significantly boosts model precision, with models like Random Forest and XGBoost showing promising results. Furthermore, Deep Learning (DL) models reveal unique insights, spotlighting the pivotal role of feature engineering in sepsis prediction, which could greatly benefit clinical practice.
PMID:38802973 | DOI:10.1186/s13054-024-04948-6
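Feature selection ahead of model fitting, one of the strategies the review examines, can be sketched with scikit-learn. The data and the choice of univariate F-test selection are illustrative; the reviewed studies used a variety of selection and extraction techniques.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Synthetic 'vital signs / labs' matrix: 5 of 30 features are informative
X, y = make_classification(n_samples=600, n_features=30, n_informative=5,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Univariate feature selection before model fitting
selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
acc = clf.score(selector.transform(X_te), y_te)
print(round(acc, 2))
```

Fitting the selector on the training split only, as here, avoids the information leakage that inflates performance in some published sepsis models.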
Advanced MRI techniques in abdominal imaging
Abdom Radiol (NY). 2024 May 28. doi: 10.1007/s00261-024-04369-7. Online ahead of print.
ABSTRACT
Magnetic resonance imaging (MRI) is a crucial modality for abdominal imaging evaluation of focal lesions and tissue properties. However, several obstacles, such as prolonged scan times, limitations in patients' breath-hold capacity, and contrast agent-associated artifacts, remain in abdominal MR images. Recent techniques, including parallel imaging, three-dimensional acquisition, compressed sensing, and deep learning, have been developed to reduce the scan time while ensuring acceptable image quality or to achieve higher resolution without extending the scan duration. Quantitative measurements using MRI techniques enable the noninvasive evaluation of specific materials. A comprehensive understanding of these advanced techniques is essential for accurate interpretation of MRI sequences. Herein, we therefore review advanced abdominal MRI techniques.
PMID:38802629 | DOI:10.1007/s00261-024-04369-7
Patient-specific cerebral 3D vessel model reconstruction using deep learning
Med Biol Eng Comput. 2024 May 28. doi: 10.1007/s11517-024-03136-6. Online ahead of print.
ABSTRACT
Three-dimensional vessel model reconstruction from patient-specific magnetic resonance angiography (MRA) images often requires manual maneuvers. This study aimed to establish a deep learning (DL)-based method for vessel model reconstruction. Time-of-flight MRA of 40 patients with internal carotid artery aneurysms was prepared, and three-dimensional vessel models were constructed using the threshold and region-growing method. Using those datasets, supervised deep learning with a 2D U-Net was performed to reconstruct 3D vessel models. The accuracy of the DL-based vessel segmentations was assessed using 20 MRA images outside the training dataset. The Dice coefficient was used as the indicator of model accuracy, and blood flow simulation was performed using the DL-based vessel model. The DL model successfully reconstructed a three-dimensional model in all 60 cases. The Dice coefficient in the test dataset was 0.859. Of note, the DL-generated model proved its efficacy even for large aneurysms (> 10 mm in diameter). The reconstructed model was feasible for blood flow simulation to assist clinical decision-making. Our DL-based method successfully reconstructed three-dimensional vessel models with moderate accuracy. Future studies are warranted to demonstrate that DL-based technology can advance medical image processing.
PMID:38802608 | DOI:10.1007/s11517-024-03136-6
Proximal femur fracture detection on plain radiography via feature pyramid networks
Sci Rep. 2024 May 27;14(1):12046. doi: 10.1038/s41598-024-63001-2.
ABSTRACT
Hip fractures exceed 250,000 cases annually in the United States, with the worldwide incidence projected to increase by 240-310% by 2050. Hip fractures are predominantly diagnosed by radiologist review of radiographs. In this study, we developed a deep learning model by extending the VarifocalNet Feature Pyramid Network (FPN) for detection and localization of proximal femur fractures from plain radiography with clinically relevant metrics. We used a dataset of 823 hip radiographs of 150 subjects with proximal femur fractures and 362 controls to develop and evaluate the deep learning model. Our model attained 0.94 specificity and 0.95 sensitivity in fracture detection over the diverse imaging dataset. We compared the performance of our model against five benchmark FPN models, demonstrating 6-14% sensitivity and 1-9% accuracy improvement. In addition, we demonstrated that our model outperforms a state-of-the-art transformer model based on DINO network by 17% sensitivity and 5% accuracy, while taking half the time on average to process a radiograph. The developed model can aid radiologists and support on-premise integration with hospital cloud services to enable automatic, opportunistic screening for hip fractures.
PMID:38802519 | DOI:10.1038/s41598-024-63001-2
Impact of functional electrical stimulation on nerve-damaged muscles by quantifying fat infiltration using deep learning
Sci Rep. 2024 May 28;14(1):12158. doi: 10.1038/s41598-024-62805-6.
ABSTRACT
Quantitative imaging in life sciences has evolved into a powerful approach combining advanced microscopy acquisition and automated analysis of image data. The focus of the present study is the imaging-based evaluation of the posterior cricoarytenoid muscle (PCA) influenced by long-term functional electrical stimulation (FES), which may assist the inspiration of patients with bilateral vocal fold paresis. To this end, muscle cross-sections of the PCA of sheep were examined by quantitative image analysis. Previous investigations of the muscle fibers and the collagen amount did not reveal signs of atrophy or fibrosis due to FES by a laryngeal pacemaker. It was therefore hypothesized that, regardless of the stimulation parameters, the fat in the muscle cross-sections would not be significantly altered. Here, we extend our previous investigations with quantitative imaging of intramuscular fat in these cross-sections. To perform this analysis both reliably and faster than qualitative evaluation and time-consuming manual annotation, the selection of the automated method was of crucial importance. To this end, our recently established deep neural network IMFSegNet, which provides more accurate results than standard machine learning approaches, was applied to more than 300 H&E-stained muscle cross-sections from 22 sheep. There were no significant differences in the amount of intramuscular fat between the PCA with and without long-term FES, nor between the low and high duty cycle stimulated groups. This study in a human-like animal model not only confirms the hypothesis that FES with the selected parameters has no negative impact on the PCA, but also demonstrates that objective, automated deep learning-based quantitative imaging is a powerful tool for such a challenging analysis.
PMID:38802457 | DOI:10.1038/s41598-024-62805-6
Mechanism-based organization of neural networks to emulate systems biology and pharmacology models
Sci Rep. 2024 May 27;14(1):12082. doi: 10.1038/s41598-024-59378-9.
ABSTRACT
Deep learning neural networks are often described as black boxes, as it is difficult to trace model outputs back to model inputs due to a lack of clarity over the internal mechanisms. This is even true for those neural networks designed to emulate mechanistic models, which simply learn a mapping between the inputs and outputs of mechanistic models, ignoring the underlying processes. Using a mechanistic model studying the pharmacological interaction between opioids and naloxone as a proof-of-concept example, we demonstrated that by reorganizing the neural networks' layers to mimic the structure of the mechanistic model, it is possible to achieve better training rates and prediction accuracy relative to the previously proposed black-box neural networks, while maintaining the interpretability of the mechanistic simulations. Our framework can be used to emulate mechanistic models in a large parameter space and offers an example on the utility of increasing the interpretability of deep learning networks.
PMID:38802422 | DOI:10.1038/s41598-024-59378-9
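The core idea, wiring a network so its layers mirror the mechanistic model's structure, can be sketched in numpy. The competitive-binding occupancy layer and all parameter values below are illustrative assumptions, not the paper's actual opioid-naloxone model.

```python
import numpy as np

def occupancy(opioid, naloxone, k_op, k_nx):
    """Receptor occupancy by the opioid under competitive binding."""
    return (opioid / k_op) / (1 + opioid / k_op + naloxone / k_nx)

def structured_net(opioid, naloxone, params):
    """Tiny network whose first 'layer' mirrors the binding mechanism;
    a learnable readout maps occupancy to the modelled effect."""
    k_op, k_nx, w, b = params
    occ = occupancy(opioid, naloxone, k_op, k_nx)  # mechanistic layer
    return 1 / (1 + np.exp(-(w * occ + b)))        # learnable readout

params = (1.0, 0.5, 4.0, -2.0)  # illustrative parameter values
effect_alone = structured_net(opioid=2.0, naloxone=0.0, params=params)
effect_blocked = structured_net(opioid=2.0, naloxone=5.0, params=params)
print(effect_blocked < effect_alone)  # naloxone lowers the predicted effect
```

Because the first layer encodes the binding mechanism explicitly, its fitted parameters (here `k_op`, `k_nx`) retain a pharmacological interpretation, unlike the weights of a black-box emulator.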