Deep learning
Facial adult female acne in China: An analysis based on artificial intelligence over one million
Skin Res Technol. 2024 Apr;30(4):e13693. doi: 10.1111/srt.13693.
ABSTRACT
BACKGROUND: To further clarify the acne profile of Chinese adult women, we included 1,156,703 adult women. An artificial intelligence algorithm was used to analyze images taken with high-resolution mobile phone cameras to explore acne levels in Chinese adult women.
METHOD: In this study, we assessed the severity of acne by evaluating patients' selfies through a smartphone application. Furthermore, we gathered basic user information through a questionnaire, including details such as age, gender, skin sensitivity, and dietary habits.
RESULTS: This study showed a gradual decrease in acne severity from the age of 25 years, reaching a trough between the ages of 40 and 44 and gradually increasing thereafter. Regarding skin problems, we found that oily skin, hypersensitive skin, frequent makeup application, and unhealthy dietary habits can affect the severity of acne. Regarding environment, we observed that the level of urban development, cold seasons, high altitude, and strong radiation affect acne severity in adult women. In the AI analyses, the severity of blackheads, enlarged pores, dark circles, and skin roughness was positively associated with acne severity in adult women.
CONCLUSIONS: AI analysis of high-resolution smartphone images of Chinese adult women reveals clear trends in acne severity. Severity decreases after age 25, reaches a low at ages 40-44, and then gradually rises. Skin type, sensitivity, makeup use, diet, urbanization, season, altitude, and radiation all influence acne, and blackheads, enlarged pores, dark circles, and skin roughness are linked to acne severity. These findings can inform personalized skincare and public health strategies for adult women.
PMID:38572573 | DOI:10.1111/srt.13693
Geographic-Scale Coffee Cherry Counting with Smartphones and Deep Learning
Plant Phenomics. 2024 Apr 3;6:0165. doi: 10.34133/plantphenomics.0165. eCollection 2024.
ABSTRACT
Deep learning and computer vision, using remote sensing and drones, are 2 promising nondestructive methods for plant monitoring and phenotyping. However, their applications are infeasible for many crop systems under tree canopies, such as coffee crops, making it challenging to perform plant monitoring and phenotyping at a large spatial scale at a low cost. This study aims to develop a geographic-scale monitoring method for coffee cherry counting, supported by an artificial intelligence (AI)-powered citizen science approach. The approach uses basic smartphones to take a few pictures of coffee trees; 2,968 trees were investigated with 8,904 pictures in Junín and Piura (Peru), Cauca, and Quindío (Colombia) in 2022, with the help of nearly 1,000 smallholder coffee farmers. Then, we trained and validated YOLO (You Only Look Once) v8 for detecting cherries in the dataset in Peru. An average number of cherries per picture was multiplied by the number of branches to estimate the total number of cherries per tree. The model's performance in Peru showed an R2 of 0.59. When the model was tested in Colombia, where different varieties are grown in different biogeoclimatic conditions, the model showed an R2 of 0.71. The overall performance in both countries reached an R2 of 0.72. The results suggest that the method can be applied to much broader scales and is transferable to other varieties, countries, and regions. To our knowledge, this is the first AI-powered method for counting coffee cherries and has the potential for a geographic-scale, multiyear, photo-based phenotypic monitoring for coffee crops in low-income countries worldwide.
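The per-tree estimate described above (mean detected cherries per picture, scaled by the branch count) and the reported R² can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' code; the per-picture counts would come from the trained YOLOv8 detector.

```python
def estimate_cherries_per_tree(counts_per_picture, n_branches):
    """Average detected cherries per picture, scaled by branch count,
    as an approximation of the total cherries on a tree."""
    mean_count = sum(counts_per_picture) / len(counts_per_picture)
    return mean_count * n_branches


def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2) used to report performance."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot


# Example: three pictures of one tree, 20 fruiting branches.
print(estimate_cherries_per_tree([10, 12, 14], 20))  # 240.0
```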
PMID:38572469 | PMC:PMC10988386 | DOI:10.34133/plantphenomics.0165
Tissue classification and diagnosis of colorectal cancer histopathology images using deep learning algorithms. Is the time ripe for clinical practice implementation?
Prz Gastroenterol. 2023;18(4):353-367. doi: 10.5114/pg.2023.130337. Epub 2023 Aug 7.
ABSTRACT
Colorectal cancer is one of the most prevalent types of cancer, with histopathologic examination of biopsied tissue samples remaining the gold standard for diagnosis. During the past years, artificial intelligence (AI) has steadily found its way into the field of medicine and pathology, especially with the introduction of whole slide imaging (WSI). In this review, the main outcomes of interest were the composite balanced accuracy (ACC) and the F1 score. The average reported ACC from the collected studies was 95.8 ±3.8%. Reported F1 scores reached as high as 0.975, with an average of 89.7 ±9.8%, indicating that existing deep learning algorithms can achieve in silico distinction between malignant and benign tissue. Overall, the available state-of-the-art algorithms are non-inferior to pathologists for image analysis and classification tasks. However, owing to the inherent uniqueness of their training and the lack of widely accepted external validation datasets, their generalization potential is still limited.
PMID:38572457 | PMC:PMC10985751 | DOI:10.5114/pg.2023.130337
Enhanced Sharp-GAN for histopathology image synthesis
Proc IEEE Int Symp Biomed Imaging. 2023 Apr;2023. doi: 10.1109/isbi53787.2023.10230516. Epub 2023 Sep 1.
ABSTRACT
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection. However, existing methods struggle to produce realistic images that have accurate nuclei boundaries and fewer artifacts, which limits their application in downstream tasks. To address these challenges, we propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization. The proposed approach uses the skeleton map of nuclei to integrate nuclei topology and separate touching nuclei. In the loss function, we propose two new contour regularization terms that enhance the contrast between contour and non-contour pixels and increase the similarity between contour pixels. We evaluate the proposed approach on two datasets using image quality metrics and a downstream task (nuclei segmentation). The proposed approach outperforms Sharp-GAN in all four image quality metrics on both datasets. By integrating 6k synthetic images from the proposed approach into training, a nuclei segmentation model achieves state-of-the-art segmentation performance on the TNBC dataset, with detection quality (DQ), segmentation quality (SQ), panoptic quality (PQ), and aggregated Jaccard index (AJI) of 0.855, 0.863, 0.691, and 0.683, respectively.
PMID:38572451 | PMC:PMC10989243 | DOI:10.1109/isbi53787.2023.10230516
High-Temperature Tolerance Protein Engineering through Deep Evolution
Biodes Res. 2024 Apr 3;6:0031. doi: 10.34133/bdr.0031. eCollection 2024.
ABSTRACT
Protein engineering aimed at increasing temperature tolerance through iterative mutagenesis and high-throughput screening is often labor-intensive. Here, we developed a deep evolution (DeepEvo) strategy to engineer protein high-temperature tolerance by generating and selecting functional sequences using deep learning models. Drawing inspiration from the concept of evolution, we constructed a high-temperature tolerance selector based on a protein language model, acting as selective pressure in the high-dimensional latent spaces of protein sequences to enrich those with high-temperature tolerance. Simultaneously, we developed a variant generator using a generative adversarial network to produce protein sequence variants containing the desired function. Afterward, the iterative process involving the generator and selector was executed to accumulate high-temperature tolerance traits. We experimentally tested this approach on the model protein glyceraldehyde 3-phosphate dehydrogenase, obtaining 8 variants with high-temperature tolerance from just 30 generated sequences, achieving a success rate of over 26%, demonstrating the high efficiency of DeepEvo in engineering protein high-temperature tolerance.
PMID:38572349 | PMC:PMC10988389 | DOI:10.34133/bdr.0031
A rotary transformer cross-subject model for continuous estimation of finger joints kinematics and a transfer learning approach for new subjects
Front Neurosci. 2024 Mar 20;18:1306050. doi: 10.3389/fnins.2024.1306050. eCollection 2024.
ABSTRACT
INTRODUCTION: Surface electromyographic (sEMG) signals are widely utilized for continuously estimating finger kinematics in human-machine interfaces (HMI), and deep learning approaches are crucial in constructing the models. At present, most models are trained on specific subjects and do not have cross-subject generalizability. Given the erratic nature of sEMG signals, a model trained on a specific subject cannot be directly applied to other subjects. Therefore, in this study, we proposed a cross-subject model based on the Rotary Transformer (RoFormer) that extracts features from multiple subjects for continuous estimation of kinematics, and extended it to new subjects with an adversarial transfer learning (ATL) approach.
METHODS: We utilized the new subject's training data and an ATL approach to calibrate the cross-subject model. To improve the performance of the classic transformer network, we compare the impact of different position embeddings on model performance, including learnable absolute position embedding, Sinusoidal absolute position embedding, and Rotary Position Embedding (RoPE), and eventually selected RoPE. We conducted experiments on 10 randomly selected subjects from the NinaproDB2 dataset, using Pearson correlation coefficient (CC), normalized root mean square error (NRMSE), and coefficient of determination (R2) as performance metrics.
RESULTS: The proposed model was compared with four other models: LSTM, TCN, Transformer, and CNN-Attention. The results demonstrated that, in both cross-subject and subject-specific cases, the performance of RoFormer was significantly better than that of the other four models. Additionally, the ATL approach improved the generalization performance of the cross-subject model more than the fine-tuning (FT) transfer learning approach.
DISCUSSION: The findings indicate that the proposed RoFormer-based method with an ATL approach has the potential for practical applications in robot hand control and other HMI settings. The model's superior performance suggests its suitability for continuous estimation of finger kinematics across different subjects, addressing the limitations of subject-specific models.
PMID:38572147 | PMC:PMC10987947 | DOI:10.3389/fnins.2024.1306050
Post-hoc explainability of BI-RADS descriptors in a multi-task framework for breast cancer detection and segmentation
IEEE Int Workshop Mach Learn Signal Process. 2023 Sep;2023. doi: 10.1109/mlsp55844.2023.10286006. Epub 2023 Oct 23.
ABSTRACT
Despite recent medical advancements, breast cancer remains one of the most prevalent and deadly diseases among women. Although machine learning-based Computer-Aided Diagnosis (CAD) systems have shown potential to assist radiologists in analyzing medical images, the opaque nature of the best-performing CAD systems has raised concerns about their trustworthiness and interpretability. This paper proposes MT-BI-RADS, a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images. The approach offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy. Firstly, the proposed model outputs the BI-RADS categories used for BUS image analysis by radiologists. Secondly, the model employs multitask learning to concurrently segment regions in images that correspond to tumors. Thirdly, the proposed approach outputs quantified contributions of each BI-RADS descriptor toward predicting the benign or malignant class using post-hoc explanations with Shapley Values.
PMID:38572141 | PMC:PMC10989244 | DOI:10.1109/mlsp55844.2023.10286006
Deep learning system for distinguishing between nasopalatine duct cysts and radicular cysts arising in the midline region of the anterior maxilla on panoramic radiographs
Imaging Sci Dent. 2024 Mar;54(1):33-41. doi: 10.5624/isd.20230169. Epub 2023 Dec 13.
ABSTRACT
PURPOSE: The aims of this study were to create a deep learning model to distinguish between nasopalatine duct cysts (NDCs), radicular cysts, and no-lesions (normal) in the midline region of the anterior maxilla on panoramic radiographs and to compare its performance with that of dental residents.
MATERIALS AND METHODS: One hundred patients with a confirmed diagnosis of NDC (53 men, 47 women; average age, 44.6±16.5 years), 100 with radicular cysts (49 men, 51 women; average age, 47.5±16.4 years), and 100 normal controls (56 men, 44 women; average age, 34.4±14.6 years) were enrolled in this study. Cases were randomly assigned to the training dataset (80%) and the test dataset (20%). Then, 20% of the training data were randomly assigned as validation data. A learning model was created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, USA). The performance of the deep learning system was assessed and compared with that of two dental residents.
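The 80/20 train/test partition with a further 20% validation hold-out described above can be sketched as follows. This is a generic illustration under our own assumptions (function name, fixed seed), not the study's actual pipeline.

```python
import random


def split_dataset(cases, seed=0):
    """Shuffle cases, hold out 20% as the test set, then hold out
    20% of the remaining training cases as the validation set."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    cases = cases[:]           # avoid mutating the caller's list
    rng.shuffle(cases)
    n_test = int(round(0.2 * len(cases)))
    test_set, train = cases[:n_test], cases[n_test:]
    n_val = int(round(0.2 * len(train)))
    val, train = train[:n_val], train[n_val:]
    return train, val, test_set


# Example: 300 cases -> 192 training, 48 validation, 60 test.
train, val, te = split_dataset(list(range(300)))
print(len(train), len(val), len(te))
```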
RESULTS: The performance of the deep learning system was superior to that of the dental residents except for the recall of radicular cysts. The areas under the curve (AUCs) for NDCs and radicular cysts in the deep learning system were significantly higher than those of the dental residents. The results for the dental residents revealed a significant difference in AUC between NDCs and normal groups.
CONCLUSION: The deep learning system showed superior performance in detecting NDCs and radicular cysts and in distinguishing these lesions from normal cases.
PMID:38571775 | PMC:PMC10985522 | DOI:10.5624/isd.20230169
Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study
Imaging Sci Dent. 2024 Mar;54(1):81-91. doi: 10.5624/isd.20230245. Epub 2024 Feb 22.
ABSTRACT
PURPOSE: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs.
MATERIALS AND METHODS: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset.
RESULTS: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%.
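The Dice similarity coefficient (DSC) reported above is a standard overlap measure between a predicted and a reference binary mask. A minimal sketch, not taken from the study's code:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0


# Example: one overlapping pixel out of masks of size 2 and 1.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.666...
```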
CONCLUSION: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
PMID:38571772 | PMC:PMC10985527 | DOI:10.5624/isd.20230245
Artificial Intelligence in Senology - Where Do We Stand and What Are the Future Horizons?
Eur J Breast Health. 2024 Apr 1;20(2):73-80. doi: 10.4274/ejbh.galenos.2024.2023-12-13. eCollection 2024 Apr.
ABSTRACT
Artificial Intelligence (AI) is defined as the simulation of human intelligence by a digital computer or robotic system and has become a prominent topic of current conversation. A subcategory of AI is deep learning, which is based on complex artificial neural networks that mimic the principles of human synaptic plasticity and layered brain architectures, and uses large-scale data processing. AI-based image analysis in breast screening programmes has shown non-inferior sensitivity, reduces workload by up to 70% by pre-selecting normal cases, and reduces recall by 25% compared to human double reading. Natural language programs such as ChatGPT (OpenAI) achieve 80% or higher accuracy in advising and decision making compared to the gold standard of human judgement. This does not yet meet the necessary requirements for medical products in terms of patient safety. The main advantage of AI is that it can perform routine but complex tasks much faster and with fewer errors than humans. The main concerns in healthcare are the stability of AI systems, cybersecurity, liability and transparency. More widespread use of AI could affect human jobs in healthcare and increase technological dependency. AI in senology is just beginning to evolve towards better forms with improved properties. Responsible training of AI systems with meaningful raw data, and scientific studies analysing their real-world performance, are necessary to keep AI on track. To mitigate significant risks, it will be necessary to balance active promotion and development of quality-assured AI systems with careful regulation. AI regulation has only recently been included in transnational legal frameworks; the European Union's AI Act, published in December 2023, was the first comprehensive legal framework of this kind. Unacceptable AI systems will be banned if they are deemed to pose a clear threat to people's fundamental rights.
Using AI and combining it with human wisdom, empathy and affection will be the method of choice for further, fruitful development of tomorrow's senology.
PMID:38571686 | PMC:PMC10985572 | DOI:10.4274/ejbh.galenos.2024.2023-12-13
Computational limits to the legibility of the imaged human brain
Neuroimage. 2024 Apr 1:120600. doi: 10.1016/j.neuroimage.2024.120600. Online ahead of print.
ABSTRACT
Our knowledge of the organisation of the human brain at the population level is yet to translate into power to predict functional differences at the individual level, limiting clinical applications, and casting doubt on the generalisability of inferred mechanisms. It remains unknown whether the difficulty arises from the absence of individuating biological patterns within the brain, or from limited power to access them with the models and compute at our disposal. Here we comprehensively investigate the resolvability of such patterns with data and compute at unprecedented scale. Across 23,810 unique participants from UK Biobank, we systematically evaluate the predictability of 25 individual biological characteristics, from all available combinations of structural and functional neuroimaging data. Over 4,526 GPU-hours of computation, we train, optimize, and evaluate out-of-sample 700 individual predictive models, including fully-connected feed-forward neural networks of demographic, psychological, serological, chronic disease, and functional connectivity characteristics, and both uni- and multi-modal 3D convolutional neural network models of macro- and micro-structural brain imaging. We find a marked discrepancy between the high predictability of sex (balanced accuracy 99.7%), age (mean absolute error 2.048 years, R2 0.859), and weight (mean absolute error 2.609 kg, R2 0.625), for which we set new state-of-the-art performance, and the surprisingly low predictability of other characteristics. Neither structural nor functional imaging predicted an individual's psychology better than the coincidence of common chronic disease (p<0.05). Serology predicted chronic disease (p<0.05) and was best predicted by it (p<0.001), followed by structural neuroimaging (p<0.05). Our findings suggest either more informative imaging or more powerful models will be needed to decipher individual-level characteristics from the human brain. We make our models and code openly available.
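Balanced accuracy, the metric reported above for sex prediction, is the mean of per-class recall, which makes it robust to class imbalance. A minimal generic sketch, not the study's code:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall over the classes present in y_true."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)


# Example: class 1 recalled at 0.5, class 0 at 1.0 -> 0.75.
print(balanced_accuracy([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.75
```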
PMID:38569979 | DOI:10.1016/j.neuroimage.2024.120600
phylaGAN: Data augmentation through conditional GANs and autoencoders for improving disease prediction accuracy using microbiome data
Bioinformatics. 2024 Apr 3:btae161. doi: 10.1093/bioinformatics/btae161. Online ahead of print.
ABSTRACT
MOTIVATION: Research is improving our understanding of how the microbiome interacts with the human body and its impact on human health. Existing machine learning methods have shown great potential in discriminating healthy from diseased microbiome states. However, machine learning-based prediction using microbiome data faces challenges such as small sample sizes, imbalance between cases and controls, and the high cost of collecting a large number of samples. To address these challenges, we propose a deep learning framework, phylaGAN, to augment existing datasets with generated microbiome data using a combination of a conditional generative adversarial network (C-GAN) and an autoencoder. Conditional generative adversarial networks train two models against each other to compute larger simulated datasets that are representative of the original dataset. The autoencoder maps the original and the generated samples onto a common subspace to make the prediction more accurate.
RESULTS: Extensive evaluation and predictive analysis were conducted on two datasets, a type 2 diabetes (T2D) study and a cirrhosis study, showing improvements in mean AUC of 11% and 5%, respectively, with data augmentation. External validation on a smaller cohort classifying obese versus lean subjects showed an improvement in mean AUC of close to 32% when augmented through phylaGAN, compared with using the original cohort. Our findings not only indicate that generative adversarial networks can create samples that mimic the original data across various diversity metrics, but also highlight the potential of enhancing disease prediction through machine learning models trained on synthetic data.
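The AUC figures above can be understood via the Mann-Whitney interpretation of ROC AUC: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counting one half. A minimal sketch, independent of the phylaGAN code:

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC computed as a normalized Mann-Whitney U statistic
    over classifier scores for positive and negative cases."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count one half
    return wins / (len(scores_pos) * len(scores_neg))


# Example: perfectly separated scores give AUC 1.0.
print(roc_auc([0.9, 0.8], [0.1, 0.2]))  # 1.0
```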
AVAILABILITY AND IMPLEMENTATION: https://github.com/divya031090/phylaGAN.
PMID:38569898 | DOI:10.1093/bioinformatics/btae161
Intelligent cholinergic white matter pathways algorithm based on U-net reflects cognitive impairment in patients with silent cerebrovascular disease
Stroke Vasc Neurol. 2024 Apr 3:svn-2023-002976. doi: 10.1136/svn-2023-002976. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: The injury of the cholinergic white matter pathway underlies cognition decline in patients with silent cerebrovascular disease (SCD) with white matter hyperintensities (WMH) of vascular origin. However, the evaluation of the cholinergic white matter pathway is complex with poor consistency. We established an intelligent algorithm to evaluate WMH in the cholinergic pathway.
METHODS: Patients with SCD with WMH of vascular origin were enrolled. The Cholinergic Pathways Hyperintensities Scale (CHIPS) was used to measure cholinergic white matter pathway impairment. The intelligent algorithm used a deep learning model based on convolutional neural networks to achieve WMH segmentation and CHIPS scoring. The diagnostic value of the intelligent algorithm for moderate-to-severe cholinergic pathway injury was calculated. The correlation between the WMH in the cholinergic pathway and cognitive function was analysed.
RESULTS: A total of 464 patients with SCD were enrolled in the internal training and test sets. The algorithm was validated using data from an external cohort comprising 100 patients with SCD. The sensitivity, specificity and area under the curve of the intelligent algorithm for assessing moderate and severe cholinergic white matter pathway injury were 91.7%, 87.3% and 0.903 (95% CI 0.861 to 0.952) for the internal test set, and 86.5%, 81.3% and 0.868 (95% CI 0.819 to 0.921) for the external validation set. General cognitive function, executive function and attention differed significantly among the three groups with different CHIPS scores (all p<0.05).
DISCUSSION: We have established the first intelligent algorithm for evaluating the cholinergic white matter pathway, with good accuracy compared with the gold standard. It makes it easier to assess cognitive function in patients with SCD.
PMID:38569895 | DOI:10.1136/svn-2023-002976
ViNe-Seg: Deep-Learning assisted segmentation of visible neurons and subsequent analysis embedded in a graphical user interface
Bioinformatics. 2024 Apr 3:btae177. doi: 10.1093/bioinformatics/btae177. Online ahead of print.
ABSTRACT
Segmentation of neural somata is a crucial and usually the most time-consuming step in the analysis of optical functional imaging of neuronal microcircuits. In recent years, multiple auto-segmentation tools have been developed to improve the speed and consistency of the segmentation process, mostly using deep learning approaches. Current segmentation tools, while advanced, still encounter challenges in producing accurate segmentation results, especially in datasets with a low signal-to-noise ratio. This has led to a reliance on manual segmentation techniques. However, manual methods, while customized to specific laboratory protocols, can introduce variability due to individual differences in interpretation, potentially affecting dataset consistency across studies. In response to this challenge, we present ViNe-Seg: a deep-learning-based semi-automatic segmentation tool that offers 1) detection of visible neurons, irrespective of their activity status; 2) the ability to perform segmentation during an ongoing experiment; 3) a user-friendly graphical interface that facilitates expert supervision, ensuring precise identification of Regions of Interest; 4) an array of segmentation models with the option of training custom models and sharing them with the community; and 5) seamless integration of subsequent analysis steps.
AVAILABILITY AND IMPLEMENTATION: ViNe-Seg code and documentation are publicly available at https://github.com/NiRuff/ViNe-Seg and can be installed from https://pypi.org/project/ViNeSeg/.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:38569889 | DOI:10.1093/bioinformatics/btae177
Deep Learning-Enhanced Hand Grip and Release Test for Degenerative Cervical Myelopathy: Shortening Assessment Duration to 6 Seconds
Neurospine. 2024 Mar;21(1):46-56. doi: 10.14245/ns.2347326.663. Epub 2024 Mar 31.
ABSTRACT
OBJECTIVE: Hand clumsiness and reduced hand dexterity can signal early signs of degenerative cervical myelopathy (DCM). While the 10-second grip and release (10-s G&R) test is a common clinical tool for evaluating hand function, a more accessible method is warranted. This study explores the use of deep learning-enhanced hand grip and release test (DL-HGRT) for predicting DCM and evaluates its capability to reduce the duration of the 10-s G&R test.
METHODS: The retrospective study included 508 DCM patients and 1,194 control subjects. Propensity score matching (PSM) was utilized to minimize the confounding effects related to age and sex. Videos of the 10-s G&R test were captured using a smartphone application. The 3D-MobileNetV2 was utilized for analysis, generating a series of parameters. Additionally, receiver operating characteristic curves were employed to assess the performance of the 10-s G&R test in predicting DCM and to evaluate the effectiveness of a shortened testing duration.
RESULTS: Patients with DCM exhibited impairments in most 10-s G&R test parameters. Before PSM, the number of cycles achieved the best diagnostic performance (area under the curve [AUC], 0.85; sensitivity, 80.12%; specificity, 74.29% at 20 cycles), followed by average grip time. Following PSM for age and gender, the AUC remained above 0.80. The average grip time achieved the highest AUC of 0.83 after 6 seconds, plateauing with no significant improvement in extending the duration to 10 seconds, indicating that 6 seconds is an adequate timeframe to efficiently evaluate hand motor dysfunction in DCM based on DL-HGRT.
CONCLUSION: DL-HGRT demonstrates potential as a promising supplementary tool for predicting DCM. Notably, a testing duration of 6 seconds appears to be sufficient for accurate assessment, making the test more feasible and practical without compromising diagnostic performance.
PMID:38569631 | DOI:10.14245/ns.2347326.663
Commentary on "Deep Learning-Assisted Quantitative Measurement of Thoracolumbar Fracture Features on Lateral Radiographs"
Neurospine. 2024 Mar;21(1):44-45. doi: 10.14245/ns.2448202.101. Epub 2024 Mar 31.
NO ABSTRACT
PMID:38569630 | DOI:10.14245/ns.2448202.101
Deep Learning-Assisted Quantitative Measurement of Thoracolumbar Fracture Features on Lateral Radiographs
Neurospine. 2024 Mar;21(1):30-43. doi: 10.14245/ns.2347366.683. Epub 2024 Mar 31.
ABSTRACT
OBJECTIVE: This study aimed to develop and validate a deep learning (DL) algorithm for the quantitative measurement of thoracolumbar (TL) fracture features, and to evaluate its efficacy across varying levels of clinical expertise.
METHODS: Using the pretrained Mask Region-Based Convolutional Neural Networks model, originally developed for vertebral body segmentation and fracture detection, we fine-tuned the model and added a new module for measuring fracture metrics-compression rate (CR), Cobb angle (CA), Gardner angle (GA), and sagittal index (SI)-from lumbar spine lateral radiographs. These metrics were derived from six-point labeling by 3 radiologists, forming the ground truth (GT). Training utilized 1,000 nonfractured and 318 fractured radiographs, while validations employed 213 internal and 200 external fractured radiographs. The accuracy of the DL algorithm in quantifying fracture features was evaluated against GT using the intraclass correlation coefficient. Additionally, 4 readers with varying expertise levels, including trainees and an attending spine surgeon, performed measurements with and without DL assistance, and their results were compared to GT and the DL model.
RESULTS: The DL algorithm demonstrated good to excellent agreement with GT for CR, CA, GA, and SI in both internal (0.860, 0.944, 0.932, and 0.779, respectively) and external (0.836, 0.940, 0.916, and 0.815, respectively) validations. DL-assisted measurement significantly improved the agreement of most measurements with GT, particularly for trainees.
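Angle-based metrics such as the Cobb and Gardner angles reduce to the angle between two landmark-defined lines on the radiograph. The sketch below is a generic illustration using hypothetical point pairs, not the paper's six-point labeling scheme.

```python
import math


def angle_between(p1, p2, q1, q2):
    """Acute angle in degrees between the line through (p1, p2) and the
    line through (q1, q2), e.g. two endplate lines for a Cobb-style
    measurement. Points are (x, y) tuples in image coordinates."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a1 - a2)) % 180
    return min(deg, 180 - deg)


# Example: a horizontal line against a 45-degree line.
print(angle_between((0, 0), (1, 0), (0, 0), (1, 1)))  # 45.0
```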
CONCLUSION: The DL algorithm was validated as an accurate tool for quantifying TL fracture features using radiographs. DL-assisted measurement is expected to expedite the diagnostic process and enhance reliability, particularly benefiting less experienced clinicians.
PMID:38569629 | DOI:10.14245/ns.2347366.683
Body composition analysis by radiological imaging - methods, applications, and prospects
Rofo. 2024 Apr 3. doi: 10.1055/a-2263-1501. Online ahead of print.
ABSTRACT
BACKGROUND: This review discusses the quantitative assessment of tissue composition in the human body (body composition, BC) using radiological methods. Such analyses are gaining importance, in particular, for oncological and metabolic problems. The aim is to present the different methods and definitions in this field to a radiological readership in order to facilitate application and dissemination of BC methods. The main focus is on radiological cross-sectional imaging.
METHODS: The review is based on a recent literature search in the US National Library of Medicine catalog (pubmed.gov) using appropriate search terms (body composition, obesity, sarcopenia, osteopenia in conjunction with imaging and radiology, respectively), as well as our own work and experience, particularly with MRI- and CT-based analyses of abdominal fat compartments and muscle groups.
RESULTS AND CONCLUSION: Key post-processing methods such as segmentation of tomographic datasets are now well established and used in numerous clinical disciplines, including bariatric surgery. Validated reference values are required for a reliable assessment of radiological measures, such as fatty liver or muscle. Artificial intelligence approaches (deep learning) already enable the automated segmentation of different tissues and compartments so that the extensive datasets can be processed in a time-efficient manner - in the case of so-called opportunistic screening, even retrospectively from diagnostic examinations. The availability of analysis tools and suitable datasets for AI training is considered a limitation.
KEY POINTS: · Radiological imaging methods are increasingly used to determine body composition (BC). · BC parameters are usually quantitative and well reproducible. · CT image data from routine clinical examinations can be used retrospectively for BC analysis. · Prospectively, MRI examinations can be used to determine organ-specific BC parameters. · Automated and in-depth analysis methods (deep learning or radiomics) appear likely to become important in the future.

CITATION FORMAT: · Linder N, Denecke T, Busse H. Body composition analysis by radiological imaging - methods, applications, and prospects. Fortschr Röntgenstr 2024; DOI: 10.1055/a-2263-1501.
PMID:38569516 | DOI:10.1055/a-2263-1501
Diabetic foot ulcers segmentation challenge report: Benchmark and analysis
Med Image Anal. 2024 Mar 24;94:103153. doi: 10.1016/j.media.2024.103153. Online ahead of print.
ABSTRACT
Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists to quantitatively measure the size of wound regions and thereby assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which are time-consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge, held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 approved requests from 47 countries, 26 teams used these data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, whose ground truth segmentation masks were kept private. Predictions from participating teams were scored and ranked according to the average Dice similarity coefficient between the ground truth masks and prediction masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. This challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
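The Dice similarity coefficient used to rank submissions can be sketched as follows; masks are flat binary lists here for illustration, while real evaluations operate on full-size image arrays.

```python
# Dice similarity coefficient for binary masks:
# Dice = 2 * |P ∩ T| / (|P| + |T|)

def dice_coefficient(pred, truth, eps=1e-7):
    """Overlap score in [0, 1]; 1.0 means perfect agreement."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / (total + eps)

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 1]
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) ≈ 0.667
```

Teams were ranked by this score averaged over all 2000 test images.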
PMID:38569380 | DOI:10.1016/j.media.2024.103153
Immunotherapy efficacy prediction through a feature re-calibrated 2.5D neural network
Comput Methods Programs Biomed. 2024 Mar 18;249:108135. doi: 10.1016/j.cmpb.2024.108135. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Lung cancer continues to be a leading cause of cancer-related mortality worldwide, with immunotherapy emerging as a promising therapeutic strategy for advanced non-small cell lung cancer (NSCLC). Despite its potential, not all patients experience benefits from immunotherapy, and the current biomarkers used for treatment selection possess inherent limitations. As a result, the implementation of imaging-based biomarkers to predict the efficacy of lung cancer treatments offers a promising avenue for improving therapeutic outcomes.
METHODS: This study presents an automatic system for predicting immunotherapy efficacy in subjects with lung cancer, with significant clinical implications. Our model employs an advanced 2.5D neural network that combines 2D intra-slice feature extraction with 3D inter-slice feature aggregation. We further present a lesion-focused prior to guide the re-calibration of intra-slice features, and an attention-based re-calibration of the inter-slice features. Finally, we design an accumulated back-propagation strategy to optimize network parameters in a memory-efficient fashion.
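The general idea behind accumulated back-propagation is gradient accumulation: gradients from several small micro-batches are summed before a single parameter update, so a large effective batch fits in limited memory. The one-parameter toy model and function names below are illustrative, not the authors' implementation.

```python
# Gradient accumulation on a toy scalar model y = w * x with
# mean-squared-error loss, updating w only once per accumulation cycle.

def grad(w, batch):
    """Gradient of mean squared error for the toy model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def accumulated_step(w, micro_batches, lr=0.1):
    acc = 0.0
    for batch in micro_batches:   # one backward pass per micro-batch
        acc += grad(w, batch)     # accumulate instead of updating
    acc /= len(micro_batches)     # average over the accumulation cycle
    return w - lr * acc           # single optimizer step at the end

micro_batches = [[(1.0, 2.0)], [(2.0, 4.0)]]  # targets follow y = 2x
w = 0.0
for _ in range(50):
    w = accumulated_step(w, micro_batches)
print(round(w, 2))  # converges toward 2.0
```

Only the running gradient sum is held in memory between micro-batches, which is what makes the strategy memory-efficient compared with processing the full batch at once.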
RESULTS: We demonstrate that the proposed method achieves impressive performance on an in-house clinical dataset, surpassing existing state-of-the-art models, while also exhibiting improved average per-subject inference efficiency. To further validate the effectiveness of our model and its components, we conducted comprehensive and in-depth ablation experiments and discussions.
CONCLUSION: The proposed model shows potential to enhance physicians' diagnostic performance through its strong accuracy in predicting immunotherapy efficacy, offering significant clinical application value. Moreover, we conduct thorough comparison experiments between the proposed method and existing advanced models. These findings contribute to our understanding of the proposed model's effectiveness and motivate future work on immunotherapy efficacy prediction.
PMID:38569256 | DOI:10.1016/j.cmpb.2024.108135