Deep learning
A Hybrid Intelligence Approach for Circulating Tumor Cell Enumeration in Digital Pathology by Using CNN and Weak Annotations
IEEE Access. 2023;11:142992-143003. doi: 10.1109/access.2023.3343701. Epub 2023 Dec 18.
ABSTRACT
Counting Circulating Tumor Cells (CTCs) for cancer screening is currently done by cytopathologists at a heavy cost in time and effort. AI, especially deep learning, has shown great potential in medical imaging domains. The aim of this paper is to develop a novel hybrid intelligence approach to automatically enumerate CTCs by combining cytopathologist expertise with the efficiency of deep learning convolutional neural networks (CNNs). This hybrid intelligence approach includes three major components: CNN-based CTC detection/localization using weak annotations, CNN-based CTC segmentation, and a classifier to ultimately determine CTCs. A support vector machine (SVM) was investigated for classification efficiency. The B-scale transform was also introduced to find the maximum sphericality of a given region. The SVM classifier takes a three-element vector as its input, consisting of the B-scale (size), texture, and area values from the detection and segmentation results. We collected 466 fluoroscopic images for CTC detection/localization, 473 images for CTC segmentation, and another 198 images with 323 CTCs as an independent data set for CTC enumeration. Precision and recall for CTC detection are 0.98 and 0.92, comparable with state-of-the-art results that required much larger and more strictly curated training data sets. The counting error on an independent testing set was 2-3% with the B-scale and 9% without it, far better than previous thresholding approaches, whose counting error rates were around 30%. The approach may also facilitate other types of research in object localization and segmentation.
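The three-element SVM input described above can be sketched with a minimal linear decision function. This is an illustrative stand-in, not the trained classifier from the paper: the weights, bias, and feature values are hypothetical.

```python
# Hypothetical sketch: classify a candidate region as CTC / non-CTC from a
# three-element feature vector (B-scale, texture, area), mirroring the SVM
# input described in the abstract. Weights and bias are illustrative only,
# standing in for parameters a trained linear SVM would provide.

def ctc_decision(b_scale: float, texture: float, area: float,
                 weights=(0.8, 0.5, 0.3), bias=-1.0) -> bool:
    """Linear decision function f(x) = w.x + b; True means 'CTC'."""
    features = (b_scale, texture, area)
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0.0

# A region with a large B-scale (high sphericality) crosses the threshold:
print(ctc_decision(1.5, 0.6, 0.4))  # score of about 0.62 exceeds 0 -> True
```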
PMID:38957613 | PMC:PMC11218908 | DOI:10.1109/access.2023.3343701
Effects of different ground segmentation methods on the accuracy of UAV-based canopy volume measurements
Front Plant Sci. 2024 Jun 18;15:1393592. doi: 10.3389/fpls.2024.1393592. eCollection 2024.
ABSTRACT
The nonuniform distribution of fruit tree canopies in space poses a challenge for precision management. In recent years, with the development of Structure from Motion (SFM) technology, unmanned aerial vehicle (UAV) remote sensing has been widely used to measure canopy features in orchards to balance efficiency and accuracy. A pipeline of canopy volume measurement based on UAV remote sensing was developed, in which RGB and digital surface model (DSM) orthophotos were constructed from captured RGB images, the canopy was segmented using U-Net, OTSU, and RANSAC methods, and the volume was calculated. The accuracies of the segmentation and the canopy volume measurement were compared. The results show that the U-Net trained with RGB and DSM achieves the best accuracy in the segmentation task, with a mean intersection over union (MIoU) of 84.75% and mean pixel accuracy (MPA) of 92.58%. However, in the canopy volume estimation task, the U-Net trained with DSM alone achieved the best accuracy, with a root mean square error (RMSE) of 0.410 m3, relative root mean square error (rRMSE) of 6.40%, and mean absolute percentage error (MAPE) of 4.74%. The deep learning-based segmentation method achieved higher accuracy in both the segmentation task and the canopy volume measurement task. For canopy volumes up to 7.50 m3, OTSU and RANSAC achieve an RMSE of 0.521 m3 and 0.580 m3, respectively. Therefore, when manually labeled datasets are available, using U-Net to segment the canopy region can achieve higher accuracy of canopy volume measurement. If it is difficult to cover the cost of data labeling, ground segmentation using partitioned OTSU can yield more accurate canopy volumes than RANSAC.
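The volume-calculation step of such a pipeline can be illustrated with a toy example: once the canopy is segmented, volume is approximated by summing per-pixel height above ground over the masked DSM and multiplying by the ground area of one pixel. The grid, mask, and elevations below are invented for illustration, not data from the paper.

```python
# Illustrative sketch: approximate canopy volume from a segmented DSM by
# summing (pixel elevation - ground elevation) * pixel ground area over
# the canopy mask. The 3x3 grid and all values are made up.

def canopy_volume(dsm, mask, ground_elev, pixel_area):
    """Sum per-pixel height above ground over the segmented canopy region."""
    volume = 0.0
    for row_dsm, row_mask in zip(dsm, mask):
        for height, is_canopy in zip(row_dsm, row_mask):
            if is_canopy:
                volume += max(height - ground_elev, 0.0) * pixel_area
    return volume

dsm = [[101.0, 102.0, 100.0],   # elevations in meters
       [101.5, 103.0, 100.0],
       [100.0, 100.0, 100.0]]
mask = [[1, 1, 0],              # 1 = canopy pixel, 0 = ground
        [1, 1, 0],
        [0, 0, 0]]
print(canopy_volume(dsm, mask, ground_elev=100.0, pixel_area=0.25))  # 1.875
```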
PMID:38957596 | PMC:PMC11217331 | DOI:10.3389/fpls.2024.1393592
Evaluating surgical expertise with AI-based automated instrument recognition for robotic distal gastrectomy
Ann Gastroenterol Surg. 2024 Feb 27;8(4):611-619. doi: 10.1002/ags3.12784. eCollection 2024 Jul.
ABSTRACT
INTRODUCTION: The complexities of robotic distal gastrectomy (RDG) give reason to assess a physician's surgical skill. Varying levels of surgical skill affect patient outcomes. We aim to investigate how a novel artificial intelligence (AI) model can be used to evaluate surgical skill in RDG by recognizing surgical instruments.
METHODS: Fifty-five consecutive robotic surgical videos of RDG for gastric cancer were analyzed. We used DeepLab, a multi-stage temporal convolutional network, trained on 1234 manually annotated images. The model was then tested on 149 annotated images for accuracy. Deep learning metrics such as Intersection over Union (IoU) and accuracy were assessed, and experienced and non-experienced surgeons were compared based on their usage of instruments during infrapyloric lymph node dissection.
RESULTS: We annotated 540 Cadiere forceps, 898 Fenestrated bipolars, 359 Suction tubes, 307 Maryland bipolars, 688 Harmonic scalpels, 400 Staplers, and 59 Large clips. The average IoU and accuracy were 0.82 ± 0.12 and 87.2 ± 11.9%, respectively. Moreover, the percentage of each instrument's usage relative to the overall infrapyloric lymphadenectomy duration, as predicted by AI, was compared. The usage of the Stapler and Large clip was significantly shorter in the experienced group than in the non-experienced group.
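The Intersection over Union metric reported above can be sketched for a pair of binary masks; the toy masks below are illustrative only.

```python
# Minimal sketch of Intersection over Union (IoU) for two flat binary
# masks, the segmentation metric reported in the abstract. Toy data only.

def iou(mask_a, mask_b):
    """IoU = |A intersect B| / |A union B| for two flat binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

pred = [1, 1, 1, 0, 0, 0]   # predicted instrument pixels
true = [0, 1, 1, 1, 0, 0]   # ground-truth instrument pixels
print(iou(pred, true))  # 2 intersecting / 4 in union = 0.5
```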
CONCLUSIONS: This study is the first to report that surgical skill can be successfully and accurately determined by an AI model for RDG. Our AI gives us a way to recognize and automatically generate instance segmentation of the surgical instruments present in this procedure. Use of this technology allows unbiased, more accessible RDG surgical skill.
PMID:38957567 | PMC:PMC11216797 | DOI:10.1002/ags3.12784
Enhancing brain tumor detection in MRI with a rotation invariant Vision Transformer
Front Neuroinform. 2024 Jun 18;18:1414925. doi: 10.3389/fninf.2024.1414925. eCollection 2024.
ABSTRACT
BACKGROUND: The Rotation Invariant Vision Transformer (RViT) is a novel deep learning model tailored for brain tumor classification using MRI scans.
METHODS: RViT incorporates rotated patch embeddings to enhance the accuracy of brain tumor identification.
RESULTS: Evaluation on the Brain Tumor MRI Dataset from Kaggle demonstrates RViT's superior performance, with a sensitivity of 1.0, specificity of 0.975, F1-score of 0.984, Matthews Correlation Coefficient (MCC) of 0.972, and an overall accuracy of 0.986.
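The reported metrics (sensitivity, specificity, F1, MCC, accuracy) all derive from the confusion matrix. A minimal binary-case sketch, using made-up counts rather than the paper's data:

```python
import math

# Sketch of the confusion-matrix metrics named in the abstract, for the
# binary case. The counts (tp, fp, tn, fn) are invented for illustration.

def binary_metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                    # sensitivity (recall)
    spec = tn / (tn + fp)                    # specificity
    prec = tp / (tp + fp)                    # precision
    f1 = 2 * prec * sens / (prec + sens)     # harmonic mean of prec/recall
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom        # Matthews Correlation Coefficient
    acc = (tp + tn) / (tp + fp + tn + fn)    # overall accuracy
    return sens, spec, f1, mcc, acc

sens, spec, f1, mcc, acc = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(round(sens, 3), round(spec, 3), round(f1, 3))  # 0.9 0.95 0.923
```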
CONCLUSION: RViT outperforms the standard Vision Transformer model and several existing techniques, highlighting its efficacy in medical imaging. The study confirms that integrating rotational patch embeddings improves the model's capability to handle diverse orientations, a common challenge in tumor imaging. The specialized architecture and rotational invariance approach of RViT have the potential to enhance current methodologies for brain tumor detection and extend to other complex imaging tasks.
PMID:38957549 | PMC:PMC11217563 | DOI:10.3389/fninf.2024.1414925
Image reconstruction of multispectral sparse sampling photoacoustic tomography based on deep algorithm unrolling
Photoacoustics. 2024 Jun 4;38:100618. doi: 10.1016/j.pacs.2024.100618. eCollection 2024 Aug.
ABSTRACT
Photoacoustic tomography (PAT), as a novel medical imaging technology, provides structural, functional, and metabolism information of biological tissue in vivo. Sparse Sampling PAT, or SS-PAT, generates images with a smaller number of detectors, yet its image reconstruction is inherently ill-posed. Model-based methods are the state-of-the-art method for SS-PAT image reconstruction, but they require design of complex handcrafted prior. Owing to their ability to derive robust prior from labeled datasets, deep-learning-based methods have achieved great success in solving inverse problems, yet their interpretability is poor. Herein, we propose a novel SS-PAT image reconstruction method based on deep algorithm unrolling (DAU), which integrates the advantages of model-based and deep-learning-based methods. We firstly provide a thorough analysis of DAU for PAT reconstruction. Then, in order to incorporate the structural prior constraint, we propose a nested DAU framework based on plug-and-play Alternating Direction Method of Multipliers (PnP-ADMM) to deal with the sparse sampling problem. Experimental results on numerical simulation, in vivo animal imaging, and multispectral un-mixing demonstrate that the proposed DAU image reconstruction framework outperforms state-of-the-art model-based and deep-learning-based methods.
PMID:38957484 | PMC:PMC11217744 | DOI:10.1016/j.pacs.2024.100618
SEA-Net: Structure-Enhanced Attention Network for Limited-Angle CBCT Reconstruction of Clinical Projection Data
IEEE Trans Instrum Meas. 2023;72:4507613. doi: 10.1109/tim.2023.3318712. Epub 2023 Oct 9.
ABSTRACT
This work aims to improve limited-angle (LA) cone beam computed tomography (CBCT) by developing deep learning (DL) methods for real clinical CBCT projection data; to the best of our knowledge, this is the first feasibility study of clinical-projection-data-based LA-CBCT. In radiation therapy (RT), CBCT is routinely used as the on-board imaging modality for patient setup. Compared to diagnostic CT, CBCT has a long acquisition time, e.g., 60 seconds for a full 360° rotation, which makes it subject to motion artifacts. Therefore, LA-CBCT, if achievable, is of great interest for RT, as it proportionally reduces both scanning time and radiation dose. However, LA-CBCT suffers from severe wedge artifacts and image distortions. Targeting real clinical projection data, we explored various DL methods, including image-domain, data-domain, and hybrid-domain methods, and developed a Structure-Enhanced Attention Network (SEA-Net) that achieves the best image quality from clinical projection data among the DL methods we implemented. Specifically, the proposed SEA-Net employs a specialized structure-enhancement sub-network to promote texture preservation. Based on the observation that the distribution of wedge artifacts in reconstructed images is non-uniform, a spatial attention module is utilized to emphasize the relevant regions while ignoring the irrelevant ones, which leads to more accurate texture restoration.
PMID:38957474 | PMC:PMC11218899 | DOI:10.1109/tim.2023.3318712
Editorial: Artificial intelligence in predicting, determining and controlling cell phenotype or tissue function in inflammatory diseases
Front Immunol. 2024 Jun 18;15:1443534. doi: 10.3389/fimmu.2024.1443534. eCollection 2024.
NO ABSTRACT
PMID:38957459 | PMC:PMC11217532 | DOI:10.3389/fimmu.2024.1443534
Justifying the prediction of major soil nutrients levels (N, P, and K) in cabbage cultivation
MethodsX. 2024 Jun 4;12:102793. doi: 10.1016/j.mex.2024.102793. eCollection 2024 Jun.
ABSTRACT
In a recent paper by Sajindra et al. [1], the soil nutrient levels, specifically nitrogen, phosphorus, and potassium, in organic cabbage cultivation were predicted using a deep learning model. This model was designed with a total of four hidden layers, excluding the input and output layers, with each hidden layer containing ten nodes. The selection of the tangent sigmoid transfer function as the optimal activation function for the dataset was based on considerations such as the coefficient of correlation, mean squared error, and the accuracy of the predicted results. The objective of this study is to justify the tangent sigmoid transfer function and provide mathematical justification for the obtained results.
• This paper presents a comprehensive methodology for developing a deep neural network to predict soil nutrient levels.
• The use of the tangent sigmoid transfer function in the predictions is justified.
• The methodology can be adapted to similar real-world scenarios.
PMID:38957375 | PMC:PMC11217682 | DOI:10.1016/j.mex.2024.102793
Application of CT and MRI images based on artificial intelligence to predict lymph node metastases in patients with oral squamous cell carcinoma: a subgroup meta-analysis
Front Oncol. 2024 Jun 18;14:1395159. doi: 10.3389/fonc.2024.1395159. eCollection 2024.
ABSTRACT
BACKGROUND: The performance of artificial intelligence (AI) in the prediction of lymph node (LN) metastasis in patients with oral squamous cell carcinoma (OSCC) has not been quantitatively evaluated. The purpose of this study was to conduct a systematic review and meta-analysis of published data on the diagnostic performance of CT and MRI based on AI algorithms for predicting LN metastases in patients with OSCC.
METHODS: We searched the Embase, PubMed (Medline), Web of Science, and Cochrane databases for studies on the use of AI in predicting LN metastasis in OSCC. Binary diagnostic accuracy data were extracted to obtain the outcomes of interest, namely, the area under the curve (AUC), sensitivity, and specificity, and the diagnostic performance of AI was compared with that of radiologists. Subgroup analyses were performed with regard to different types of AI algorithms and imaging modalities.
RESULTS: Fourteen eligible studies were included in the meta-analysis. The AUC, sensitivity, and specificity of the AI models for the diagnosis of LN metastases were 0.92 (95% CI 0.89-0.94), 0.79 (95% CI 0.72-0.85), and 0.90 (95% CI 0.86-0.93), respectively. Promising diagnostic performance was observed in the subgroup analyses based on algorithm types [machine learning (ML) or deep learning (DL)] and imaging modalities (CT vs. MRI). The pooled diagnostic performance of AI was significantly better than that of experienced radiologists.
DISCUSSION: In conclusion, AI based on CT and MRI imaging has good diagnostic accuracy in predicting LN metastasis in patients with OSCC and thus has the potential for clinical application.
SYSTEMATIC REVIEW REGISTRATION: https://www.crd.york.ac.uk/PROSPERO/#recordDetails, PROSPERO (No. CRD42024506159).
PMID:38957322 | PMC:PMC11217320 | DOI:10.3389/fonc.2024.1395159
Defect detection of photovoltaic modules based on improved VarifocalNet
Sci Rep. 2024 Jul 2;14(1):15170. doi: 10.1038/s41598-024-66234-3.
ABSTRACT
Detecting and replacing defective photovoltaic modules is essential, as they directly impact power generation efficiency. Many current deep learning-based methods for detecting defects in photovoltaic modules focus solely on either detection speed or accuracy, which limits their practical application. To address this issue, an improved VarifocalNet is proposed to enhance both the detection speed and accuracy for defective photovoltaic modules. Firstly, a new bottleneck module is designed to replace the first bottleneck module of the last-stage convolution group in the backbone. This new module includes both standard convolution and dilated convolution, enabling an increase in network depth and receptive field without reducing the output feature map size, which helps enhance the accuracy of defect detection. Secondly, another bottleneck module is designed to replace the original bottleneck module in the fourth-stage convolution group of the backbone. This module has fewer parameters than the original, which improves the defect detection speed. Thirdly, a feature interactor is designed in the detection head to enhance feature expression in the classification branch, improving detection accuracy. Finally, an improved intersection over union is introduced into the loss function to measure the difference between the predicted and ground-truth boxes, which further improves defect detection accuracy. Compared to other methods, the proposed method achieves the highest detection accuracy, and it is also faster than all compared methods except the DDH-YOLOv5 method and the improved YOLOv7 method.
PMID:38956270 | DOI:10.1038/s41598-024-66234-3
SIGNIFICANCE deep learning based platform to fight illicit trafficking of Cultural Heritage goods
Sci Rep. 2024 Jul 2;14(1):15081. doi: 10.1038/s41598-024-65885-6.
ABSTRACT
The illicit traffic of cultural goods remains a persistent global challenge, despite the proliferation of comprehensive legislative frameworks developed to address and prevent cultural property crimes. Online platforms, especially social media and e-commerce, have facilitated illegal trade and pose significant challenges for law enforcement agencies. To address this issue, the European project SIGNIFICANCE was born, with the aim of combating illicit traffic of Cultural Heritage (CH) goods. This paper presents the outcomes of the project, introducing a user-friendly platform that employs Artificial Intelligence (AI) and Deep learning (DL) to prevent and combat illicit activities. The platform enables authorities to identify, track, and block illegal activities in the online domain, thereby aiding successful prosecutions of criminal networks. Moreover, it incorporates an ontology-based approach, providing comprehensive information on the cultural significance, provenance, and legal status of identified artefacts. This enables users to access valuable contextual information during the scraping and classification phases, facilitating informed decision-making and targeted actions. To accomplish these objectives, computationally intensive tasks are executed on the HPC CyClone infrastructure, optimizing computing resources, time, and cost efficiency. Notably, the infrastructure supports algorithm modelling and training, as well as web, dark web and social media scraping and data classification. Preliminary results indicate a 10-15% increase in the identification of illicit artifacts, demonstrating the platform's effectiveness in enhancing law enforcement capabilities.
PMID:38956250 | DOI:10.1038/s41598-024-65885-6
A deep learning model accurately predicts 1-year mortality but at the risk of unfairness
Nat Aging. 2024 Jul 2. doi: 10.1038/s43587-024-00665-5. Online ahead of print.
NO ABSTRACT
PMID:38956193 | DOI:10.1038/s43587-024-00665-5
An automatic deep reinforcement learning bolus calculator for automated insulin delivery systems
Sci Rep. 2024 Jul 2;14(1):15245. doi: 10.1038/s41598-024-62912-4.
ABSTRACT
In hybrid automatic insulin delivery (HAID) systems, meal disturbance is compensated by feedforward control, which requires the announcement of the meal by the patient with type 1 diabetes (DM1) to achieve the desired glycemic control performance. The calculation of the insulin bolus in the HAID system is based on the amount of carbohydrates (CHO) in the meal and patient-specific parameters, i.e., the carbohydrate-to-insulin ratio (CR) and the insulin sensitivity-related correction factor (CF). The estimation of CHO in a meal is prone to errors and is burdensome for patients. This study proposes a fully automatic insulin delivery (FAID) system that eliminates patient intervention by compensating for unannounced meals. This study exploits a deep reinforcement learning (DRL) algorithm to calculate the insulin bolus for unannounced meals without utilizing information on CHO content. The DRL bolus calculator is integrated with a closed-loop controller and a meal detector (both previously developed by our group) to implement the FAID system. An adult cohort of 68 virtual patients based on the modified UVa/Padova simulator was used for in-silico trials. The percentage of the overall duration spent in the target range of 70-180 mg/dL was 71.2% and 76.2%, below 70 mg/dL was 0.9% and 0.1%, and above 180 mg/dL was 26.7% and 21.1%, respectively, for the FAID system and the HAID system utilizing a standard bolus calculator (SBC) including CHO misestimation. The proposed algorithm can be exploited to realize FAID systems in the future.
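The time-in-range outcomes above can be sketched as the fraction of glucose readings below, within, and above the 70-180 mg/dL target; the readings below are invented, not simulator output.

```python
# Sketch of the time-in-range metrics used to compare the FAID and HAID
# systems: percentage of glucose readings below, within, and above the
# 70-180 mg/dL target range. The readings are made up for illustration.

def time_in_ranges(glucose_mg_dl, low=70, high=180):
    n = len(glucose_mg_dl)
    below = 100.0 * sum(1 for g in glucose_mg_dl if g < low) / n
    in_range = 100.0 * sum(1 for g in glucose_mg_dl if low <= g <= high) / n
    above = 100.0 * sum(1 for g in glucose_mg_dl if g > high) / n
    return below, in_range, above

readings = [65, 90, 120, 150, 175, 190, 210, 140, 110, 95]
print(time_in_ranges(readings))  # (10.0, 70.0, 20.0)
```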
PMID:38956183 | DOI:10.1038/s41598-024-62912-4
LMBiS-Net: A lightweight bidirectional skip connection based multipath CNN for retinal blood vessel segmentation
Sci Rep. 2024 Jul 2;14(1):15219. doi: 10.1038/s41598-024-63496-9.
ABSTRACT
Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as vessel edges. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for the segmentation of retinal vessels. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for information flow between the encoder and decoder. In addition, we have optimised the efficiency of the model by carefully selecting the number of filters to avoid filter overlap, which significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, we conducted comprehensive evaluations on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. The proposed LMBiS-Net obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%, specificity values of 98.83%, 98.77%, 98.96%, and 98.77%, accuracy (acc) scores of 97.08%, 97.69%, 97.75%, and 96.90%, and AUC values of 98.80%, 98.82%, 98.71%, and 88.77% on the DRIVE, STARE, CHASE_DB1, and HRF datasets, respectively. In addition, it records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. Our evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy while exhibiting both robustness and generalisability across various retinal image datasets. This combination of qualities makes LMBiS-Net a promising tool for various clinical applications.
PMID:38956117 | DOI:10.1038/s41598-024-63496-9
1 Million Segmented Red Blood Cells With 240 K Classified in 9 Shapes and 47 K Patches of 25 Manual Blood Smears
Sci Data. 2024 Jul 2;11(1):722. doi: 10.1038/s41597-024-03570-z.
ABSTRACT
Around 20% of complete blood count samples necessitate visual review using light microscopes or digital pathology scanners. There is currently no technological alternative to the visual examination of red blood cell (RBC) morphology/shapes. True/non-artifact teardrop-shaped RBCs and schistocytes/fragmented RBCs are commonly associated with serious medical conditions that could be fatal; increased ovalocytes are associated with almost all types of anemias. 25 distinct blood smears, each from a different patient, were manually prepared, stained, and then sorted into four groups. Each group underwent imaging using different cameras integrated into light microscopes with 40X objective lenses, resulting in a total of 47K+ field images/patches. Two hematologists worked cell by cell to provide one million+ segmented RBCs with their XYWH coordinates and classified 240K+ RBCs into nine shapes. This dataset (Elsafty_RBCs_for_AI) enables the development/testing of deep learning-based (DL) automation of RBC morphology/shape examination, including specific normalization of blood smear stains (different from histopathology stains), detection/counting, segmentation, and classification. Two codes are provided (Elsafty_Codes_for_AI), one for semi-automated image processing and another for training/testing of a DL-based image classifier.
PMID:38956115 | DOI:10.1038/s41597-024-03570-z
Rapid diagnosis of celiac disease based on plasma Raman spectroscopy combined with deep learning
Sci Rep. 2024 Jul 1;14(1):15056. doi: 10.1038/s41598-024-64621-4.
ABSTRACT
Celiac disease (CD) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. CD negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. Therefore, effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. This study combines Raman spectroscopy with deep learning models to achieve a non-invasive, rapid, and accurate method for distinguishing celiac disease patients from healthy controls. A total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. Convolutional Neural Network (CNN), Multi-Scale Convolutional Neural Network (MCNN), Residual Network (ResNet), and Deep Residual Shrinkage Network (DRSN) classification models were employed, with accuracy rates of 86.67%, 90.76%, 86.67%, and 95.00%, respectively. Comparative validation revealed that the DRSN model performed best, with an AUC of 97.60% and accuracy of 95.00%. This confirms the superiority of Raman spectroscopy combined with deep learning in the diagnosis of celiac disease.
PMID:38956075 | DOI:10.1038/s41598-024-64621-4
Review of Machine Learning Techniques in Soft Tissue Biomechanics and Biomaterials
Cardiovasc Eng Technol. 2024 Jul 2. doi: 10.1007/s13239-024-00737-y. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Advanced material models and material characterization of soft biological tissues play an essential role in pre-surgical planning for vascular surgeries and transcatheter interventions. Recent advances in heart valve engineering, medical device and patch design are built upon these models. Furthermore, understanding vascular growth and remodeling in native and tissue-engineered vascular biomaterials, as well as designing and testing drugs on soft tissue, are crucial aspects of predictive regenerative medicine. Traditional nonlinear optimization methods and finite element (FE) simulations have served as biomaterial characterization tools combined with soft tissue mechanics and tensile testing for decades. However, results obtained through nonlinear optimization methods are reliable only to a certain extent due to mathematical limitations, and FE simulations may require substantial computing time and resources, which might not be justified for patient-specific simulations. To a significant extent, machine learning (ML) techniques have gained increasing prominence in the field of soft tissue mechanics in recent years, offering notable advantages over conventional methods. This review article presents an in-depth examination of emerging ML algorithms utilized for estimating the mechanical characteristics of soft biological tissues and biomaterials. These algorithms are employed to analyze crucial properties such as stress-strain curves and pressure-volume loops. The focus of the review is on applications in cardiovascular engineering, and the fundamental mathematical basis of each approach is also discussed.
METHODS: The review employed two strategies. First, recent studies by major research groups actively engaged in cardiovascular soft tissue mechanics were compiled, and research papers utilizing ML and deep learning (DL) techniques were included in our review. The second strategy involved a standard keyword search across major databases, which provided 11 relevant ML articles, meticulously selected from reputable sources including ScienceDirect, Springer, PubMed, and Google Scholar. The selection process used specific keywords such as "machine learning" or "deep learning" in conjunction with "soft biological tissues", "cardiovascular", "patient-specific", "strain energy", "vascular" or "biomaterials". Initially, a total of 25 articles were selected; 14 of these were excluded because they did not focus on biomaterials specifically employed for soft tissue repair and regeneration. The remaining 11 articles were categorized based on the ML techniques employed and the training data utilized.
RESULTS: ML techniques utilized for assessing the mechanical characteristics of soft biological tissues and biomaterials are broadly classified into two categories: standard ML algorithms and physics-informed ML algorithms. The standard ML models are then organized based on their tasks, being grouped into Regression and Classification subcategories. Within these categories, studies employ various supervised learning models, including support vector machines (SVMs), bagged decision trees (BDTs), artificial neural networks (ANNs) or deep neural networks (DNNs), and convolutional neural networks (CNNs). Additionally, the utilization of unsupervised learning approaches, such as autoencoders incorporating principal component analysis (PCA) and/or low-rank approximation (LRA), is based on the specific characteristics of the training data. The training data predominantly consists of three types: experimental mechanical data, including uniaxial or biaxial stress-strain data; synthetic mechanical data generated through non-linear fitting and/or FE simulations; and image data such as 3D second harmonic generation (SHG) images or computed tomography (CT) images. The evaluation of performance for physics-informed ML models primarily relies on the coefficient of determination R². In contrast, various metrics and error measures are utilized to assess the performance of standard ML models. Furthermore, our review includes an extensive examination of prevalent biomaterial models that can serve as physical laws for physics-informed ML models.
CONCLUSION: ML models offer an accurate, fast, and reliable approach for evaluating the mechanical characteristics of diseased soft tissue segments and selecting optimal biomaterials for time-critical soft tissue surgeries. Among the various ML models examined in this review, physics-informed neural network models exhibit the capability to forecast the mechanical response of soft biological tissues accurately, even with limited training samples. These models achieve high R² values ranging from 0.90 to 1.00. This is particularly significant considering the challenges associated with obtaining a large number of living tissue samples for experimental purposes, which can be time-consuming and impractical. Additionally, the review not only discusses the advantages identified in the current literature but also sheds light on the limitations and offers insights into future perspectives.
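The coefficient of determination used to evaluate these models is R² = 1 - SS_res/SS_tot. A minimal sketch with made-up values:

```python
# Sketch of the coefficient of determination (R^2) used to score the
# physics-informed ML models in the review. Toy values, not paper data.

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot, with SS_tot about the mean of y_true."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]   # e.g. measured stresses
y_pred = [1.1, 1.9, 3.2, 3.8]   # e.g. model predictions
print(round(r_squared(y_true, y_pred), 3))  # 0.98
```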
PMID:38956008 | DOI:10.1007/s13239-024-00737-y
Fine-Tuned Large Language Model for Extracting Patients on Pretreatment for Lung Cancer from a Picture Archiving and Communication System Based on Radiological Reports
J Imaging Inform Med. 2024 Jul 2. doi: 10.1007/s10278-024-01186-8. Online ahead of print.
ABSTRACT
This study aimed to investigate the performance of a fine-tuned large language model (LLM) in extracting patients on pretreatment for lung cancer from picture archiving and communication systems (PACS) and comparing it with that of radiologists. Patients whose radiological reports contained the term lung cancer (3111 for training, 124 for validation, and 288 for test) were included in this retrospective study. Based on clinical indication and diagnosis sections of the radiological report (used as input data), they were classified into four groups (used as reference data): group 0 (no lung cancer), group 1 (pretreatment lung cancer present), group 2 (after treatment for lung cancer), and group 3 (planning radiation therapy). Using the training and validation datasets, fine-tuning of the pretrained LLM was conducted ten times. Due to group imbalance, group 2 data were undersampled in the training. The performance of the best-performing model in the validation dataset was assessed in the independent test dataset. For testing purposes, two other radiologists (readers 1 and 2) were also involved in classifying radiological reports. The overall accuracy of the fine-tuned LLM, reader 1, and reader 2 was 0.983, 0.969, and 0.969, respectively. The sensitivity for differentiating group 0/1/2/3 by LLM, reader 1, and reader 2 was 1.000/0.948/0.991/1.000, 0.750/0.879/0.996/1.000, and 1.000/0.931/0.978/1.000, respectively. The time required for classification by LLM, reader 1, and reader 2 was 46s/2539s/1538s, respectively. Fine-tuned LLM effectively extracted patients on pretreatment for lung cancer from PACS with comparable performance to radiologists in a shorter time.
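The undersampling step mentioned above can be sketched as follows; the helper function, labels, and report names are hypothetical illustrations, not the authors' code.

```python
import random

# Hypothetical sketch of the class-balancing step described in the
# abstract: the over-represented group (here group 2, 'after treatment')
# is randomly subsampled to the size of the largest remaining group
# before fine-tuning. All data below are synthetic.

def undersample(reports, labels, majority_label, seed=0):
    rng = random.Random(seed)
    other = [(r, l) for r, l in zip(reports, labels) if l != majority_label]
    major = [(r, l) for r, l in zip(reports, labels) if l == majority_label]
    counts = {}
    for _, l in other:
        counts[l] = counts.get(l, 0) + 1
    target = max(counts.values())           # size of largest remaining group
    kept = rng.sample(major, min(target, len(major)))
    balanced = other + kept
    rng.shuffle(balanced)
    return balanced

labels = [0] * 10 + [1] * 20 + [2] * 100 + [3] * 5
reports = [f"report_{i}" for i in range(len(labels))]
balanced = undersample(reports, labels, majority_label=2)
print(len(balanced))  # 10 + 20 + 5, plus group 2 cut to 20 -> 55
```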
PMID:38955964 | DOI:10.1007/s10278-024-01186-8
Development and validation of a predictive model for vertebral fracture risk in osteoporosis patients
Eur Spine J. 2024 Jul 2. doi: 10.1007/s00586-024-08235-4. Online ahead of print.
ABSTRACT
OBJECTIVE: This study aimed to develop and validate a predictive model for osteoporotic vertebral fractures (OVFs) risk by integrating demographic, bone mineral density (BMD), CT imaging, and deep learning radiomics features from CT images.
METHODS: A total of 169 patients diagnosed with osteoporosis from three hospitals were divided into OVFs (n = 77) and Non-OVFs (n = 92) groups and randomly split into training (n = 135) and test (n = 34) cohorts. Demographic data, BMD, and CT imaging details were collected. Deep transfer learning (DTL) features from ResNet-50 and radiomics features were fused, with the best model chosen via logistic regression. Cox proportional hazards models identified clinical factors. Three models were constructed: clinical, radiomics-DTL, and fusion (clinical-radiomics-DTL). Performance was assessed using AUC, C-index, Kaplan-Meier, and calibration curves. The best model was depicted as a nomogram, and clinical utility was evaluated using decision curve analysis (DCA).
RESULTS: BMD, CT values of paravertebral muscles (PVM), and paravertebral muscles' cross-sectional area (CSA) significantly differed between OVFs and Non-OVFs groups (P < 0.05). No significant differences were found between the training and test cohorts. Multivariate Cox models identified BMD, CT values of PVM, and CSAPS reduction as independent OVFs risk factors (P < 0.05). The fusion model exhibited the highest predictive performance (C-index: 0.839 in training, 0.795 in test). DCA confirmed the nomogram's utility in OVFs risk prediction.
CONCLUSION: This study presents a robust predictive model for OVFs risk, integrating BMD, CT data, and radiomics-DTL features, offering high sensitivity and specificity. The model's visualizations can inform OVFs prevention and treatment strategies.
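The C-index values reported above (0.839 training, 0.795 test) are Harrell's concordance index, the standard discrimination metric for Cox-type survival models. A minimal self-contained implementation, not the authors' code, looks like this:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs
    in which the higher-risk patient experiences the event earlier.
    events[i] is 1 if the event (here, OVF) was observed, 0 if censored."""
    concordant = tied = comparable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # skip tied event times in this simple sketch
        lo, hi = (i, j) if times[i] < times[j] else (j, i)
        if not events[lo]:
            continue  # earlier time censored: pair is not comparable
        comparable += 1
        if risk_scores[lo] > risk_scores[hi]:
            concordant += 1
        elif risk_scores[lo] == risk_scores[hi]:
            tied += 1
    return (concordant + 0.5 * tied) / comparable
```

A value of 1.0 means the model orders every comparable pair correctly, 0.5 is chance level, so 0.795 on the test cohort indicates substantially better-than-chance risk ranking.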
PMID:38955868 | DOI:10.1007/s00586-024-08235-4
Deep learning classification of ex vivo human colon tissues using spectroscopic optical coherence tomography
J Biophotonics. 2024 Jul 2:e202400082. doi: 10.1002/jbio.202400082. Online ahead of print.
ABSTRACT
Screening for colorectal cancer (CRC) with colonoscopy has improved patient outcomes; however, CRC remains the third leading cause of cancer-related mortality, so novel strategies to improve screening are needed. Here, we propose an optical biopsy technique based on spectroscopic optical coherence tomography (SOCT). Depth-resolved OCT images are analyzed as a function of wavelength to measure optical tissue properties, which are used as input to machine learning algorithms. Previously, we used this approach to analyze mouse colon polyps. Here, we extend the approach to examine human biopsied colonic epithelial tissue samples ex vivo. Optical properties are used as input to a novel deep learning architecture, producing an accuracy of up to 97.9% in discriminating tissue type. SOCT parameters are used to create false-colored en face OCT images, and deep learning classifications enable visual classification by tissue type. This study advances SOCT toward clinical utility for analysis of colonic epithelium.
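A common depth-resolved optical property extracted from OCT A-scans is the attenuation coefficient, estimated from the exponential decay of backscattered intensity with depth, I(z) ≈ I0·exp(−2μz). The sketch below shows the standard log-linear least-squares estimate; it is an illustrative assumption about the kind of tissue property used as classifier input, not the paper's specific method:

```python
import math

def attenuation_coefficient(depths_mm, intensities):
    """Estimate the attenuation coefficient mu (mm^-1) from an A-scan
    by a log-linear least-squares fit of I(z) ~ I0 * exp(-2 * mu * z)."""
    logs = [math.log(i) for i in intensities]
    n = len(depths_mm)
    mean_z = sum(depths_mm) / n
    mean_l = sum(logs) / n
    num = sum((z - mean_z) * (l - mean_l) for z, l in zip(depths_mm, logs))
    den = sum((z - mean_z) ** 2 for z in depths_mm)
    slope = num / den      # slope of log-intensity vs. depth = -2 * mu
    return -slope / 2.0
```

Repeating such a fit within narrow wavelength sub-bands of the OCT spectrum yields the per-wavelength tissue properties that SOCT feeds to the classifier.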
PMID:38955358 | DOI:10.1002/jbio.202400082