Deep learning

Enhancing brain tumor classification in MRI scans with a multi-layer customized convolutional neural network approach

Thu, 2024-06-27 06:00

Front Comput Neurosci. 2024 Jun 12;18:1418546. doi: 10.3389/fncom.2024.1418546. eCollection 2024.

ABSTRACT

BACKGROUND: The necessity of prompt and accurate brain tumor diagnosis is unquestionable for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, grapples with challenges such as time-intensive processes and susceptibility to human error.

OBJECTIVE: This research presents a novel Convolutional Neural Network (CNN) architecture designed to enhance the accuracy and efficiency of brain tumor detection in MRI scans.

METHODS: The dataset used in the study comprises 7,023 brain MRI images from figshare, SARTAJ, and Br35H, categorized into glioma, meningioma, no tumor, and pituitary classes. Our methodology centered on multi-task classification with a single CNN model covering several brain MRI tasks: tumor detection, classification by grade and type, and tumor location identification.
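
A minimal sketch of the multi-task setup described in the METHODS, assuming a shared convolutional backbone with one output head per task; the layer sizes and the set of location labels are illustrative assumptions, not the authors' architecture:

```python
import torch.nn as nn

class MultiTaskBrainCNN(nn.Module):
    """Shared CNN backbone with one head per task (illustrative sizes)."""
    def __init__(self, n_types=4, n_locations=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.detect_head = nn.Linear(64, 2)         # tumor present / absent
        self.type_head = nn.Linear(64, n_types)     # glioma, meningioma, pituitary, none
        self.loc_head = nn.Linear(64, n_locations)  # hypothetical location labels

    def forward(self, x):  # x: (N, 1, H, W) grayscale MRI slices
        z = self.backbone(x)
        return self.detect_head(z), self.type_head(z), self.loc_head(z)
```

In a setup like this, the per-task cross-entropy losses are typically summed into a single training objective.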

RESULTS: The proposed CNN model incorporates advanced feature extraction capabilities and deep learning optimization techniques. With a tumor classification accuracy of 99%, our method surpasses current methodologies and demonstrates the potential of deep learning in medical applications.

CONCLUSION: This study represents a significant advancement in the early detection and treatment planning of brain tumors, offering a more efficient and accurate alternative to traditional MRI analysis methods.

PMID:38933391 | PMC:PMC11199693 | DOI:10.3389/fncom.2024.1418546

Categories: Literature Watch

Deep learning framework for comprehensive molecular and prognostic stratifications of triple-negative breast cancer

Thu, 2024-06-27 06:00

Fundam Res. 2022 Jun 29;4(3):678-689. doi: 10.1016/j.fmre.2022.06.008. eCollection 2024 May.

ABSTRACT

Triple-negative breast cancer (TNBC) is the most challenging breast cancer subtype. Molecular stratification and targeted therapy bring clinical benefits to TNBC patients, but comprehensive molecular testing is difficult to implement in clinical practice. Here, using our multi-omics TNBC cohort (N = 425), a deep learning-based framework was devised and validated for comprehensive prediction of molecular features, subtypes and prognosis from pathological whole slide images (WSIs). The framework first uses a neural network to decompose the tissue on WSIs, followed by a second network trained on specific tissue types to predict different targets. Multi-omics molecular features were analyzed, including somatic mutations, copy number alterations, germline mutations, biological pathway activities, metabolomics features and immunotherapy biomarkers. Molecular features with therapeutic implications could be predicted, including the somatic PIK3CA mutation, germline BRCA2 mutation and PD-L1 protein expression (area under the curve [AUC]: 0.78, 0.79 and 0.74, respectively). The molecular subtypes of TNBC could be identified (AUC: 0.84, 0.85, 0.93 and 0.73 for the basal-like immune-suppressed, immunomodulatory, luminal androgen receptor, and mesenchymal-like subtypes, respectively) and their distinctive morphological patterns were revealed, providing novel insights into the heterogeneity of TNBC. A neural network integrating image features and clinical covariates stratified patients into groups with different survival outcomes (log-rank P < 0.001). Our prediction framework and neural network models were externally validated on the TNBC cases from TCGA (N = 143) and appeared robust to changes in patient population. For potential clinical translation, we built a novel online platform, where we modularized and deployed our framework along with the validated models, enabling real-time, one-stop prediction for new cases. In summary, using only pathological WSIs, our proposed framework enables comprehensive stratification of TNBC patients and provides valuable information for therapeutic decision-making. It has the potential to be clinically implemented and to promote the personalized management of TNBC.
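
A minimal sketch of the final prognostic step, in which WSI-derived image features and clinical covariates feed a small network that outputs a risk score; the feature dimensions are assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

class RiskNet(nn.Module):
    """Map concatenated image features and clinical covariates to a
    scalar risk score (dimensions are illustrative assumptions)."""
    def __init__(self, img_dim=512, clin_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + clin_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, img_feat, clin):
        return self.mlp(torch.cat([img_feat, clin], dim=1)).squeeze(1)

# Patients can then be stratified into high/low-risk groups at the median
# predicted risk and the groups compared with a log-rank test.
```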

PMID:38933195 | PMC:PMC11197495 | DOI:10.1016/j.fmre.2022.06.008

Categories: Literature Watch

Generative artificial intelligence in ophthalmology: current innovations, future applications and challenges

Wed, 2024-06-26 06:00

Br J Ophthalmol. 2024 Jun 26:bjo-2024-325458. doi: 10.1136/bjo-2024-325458. Online ahead of print.

ABSTRACT

The rapid advancements in generative artificial intelligence are set to significantly influence the medical sector, particularly ophthalmology. Generative adversarial networks and diffusion models enable the creation of synthetic images, aiding the development of deep learning models tailored for specific imaging tasks. Additionally, the advent of multimodal foundational models, capable of generating images, text and videos, presents a broad spectrum of applications within ophthalmology. These range from enhancing diagnostic accuracy to improving patient education and training healthcare professionals. Despite the promising potential, this area of technology is still in its infancy, and there are several challenges to be addressed, including data bias, safety concerns and the practical implementation of these technologies in clinical settings.

PMID:38925907 | DOI:10.1136/bjo-2024-325458

Categories: Literature Watch

The role of artificial intelligence in cardiovascular magnetic resonance imaging

Wed, 2024-06-26 06:00

Prog Cardiovasc Dis. 2024 Jun 24:S0033-0620(24)00092-6. doi: 10.1016/j.pcad.2024.06.004. Online ahead of print.

ABSTRACT

Cardiovascular magnetic resonance (CMR) imaging is the gold standard test for myocardial tissue characterization and chamber volumetric and functional evaluation. However, manual CMR analysis can be time-consuming and is subject to intra- and inter-observer variability. Artificial intelligence (AI) is a field that permits automated task performance through the identification of high-level and complex data relationships. In this review, we discuss the rapidly growing role of AI in CMR, including image acquisition, sequence prescription, artifact detection, reconstruction, segmentation, and data reporting and analysis, covering quantification of volumes and function, myocardial infarction (MI) and scar detection, and prediction of outcomes. We conclude with a discussion of the emerging challenges to widespread adoption and the solutions that will allow for successful, broader uptake of this powerful technology.

PMID:38925255 | DOI:10.1016/j.pcad.2024.06.004

Categories: Literature Watch

AI-based opportunistic quantitative image analysis of lung cancer screening CTs to reduce disparities in osteoporosis screening

Wed, 2024-06-26 06:00

Bone. 2024 Jun 24:117176. doi: 10.1016/j.bone.2024.117176. Online ahead of print.

ABSTRACT

Osteoporosis is underdiagnosed, especially in ethnic and racial minorities who are thought to be protected against bone loss, but often have worse outcomes after an osteoporotic fracture. We aimed to determine the prevalence of osteoporosis by opportunistic CT in patients who underwent lung cancer screening (LCS) using non-contrast CT in the Northeastern United States. Demographics including race and ethnicity were retrieved. We assessed trabecular bone and body composition using a fully-automated artificial intelligence algorithm. ROIs were placed at the T12 vertebral body for attenuation measurements in Hounsfield Units (HU). Two validated thresholds were used to diagnose osteoporosis: a high-sensitivity threshold (115-165 HU) and a high-specificity threshold (<115 HU). We performed descriptive statistics and ANOVA to compare differences across sex, race, ethnicity, and income class according to neighborhoods' mean household incomes. Forward stepwise regression modeling was used to determine body composition predictors of trabecular attenuation. We included 3708 patients (mean age 64 ± 7 years, 54 % males) who underwent LCS, had available demographic information and an evaluable CT for trabecular attenuation analysis. Using the high-sensitivity threshold, osteoporosis was more prevalent in females (74 % vs. 65 % in males, p < 0.0001) and Whites (72 % vs. 49 % non-Whites, p < 0.0001). However, osteoporosis was present across all races (38 % Black, 55 % Asian, 56 % Hispanic) and affected all income classes (69 %, 69 %, and 91 % in low, medium, and high-income class, respectively). High visceral/subcutaneous fat-ratio, aortic calcification, and hepatic steatosis were associated with low trabecular attenuation (p < 0.01), whereas muscle mass was positively associated with trabecular attenuation (p < 0.01). In conclusion, osteoporosis is prevalent across all races, income classes and both sexes in patients undergoing LCS. Opportunistic CT using a fully-automated algorithm and uniform imaging protocol is able to detect osteoporosis and body composition without additional testing or radiation. Early identification of patients traditionally thought to be at low risk for bone loss will allow for initiating appropriate treatment to prevent future fragility fractures. CLINICALTRIALS.GOV IDENTIFIER: N/A.
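
The two thresholds quoted above can be read as a simple decision rule on the T12 trabecular attenuation; the sketch below is one interpretation of the abstract's thresholds, not a clinical tool:

```python
def classify_bone(t12_hu: float) -> str:
    """Apply the two validated trabecular-attenuation thresholds at T12
    (values taken from the abstract; interpretation of the 115-165 HU
    band as 'sensitive-only' is this sketch's reading)."""
    if t12_hu < 115:
        return "osteoporosis (meets high-specificity threshold)"
    if t12_hu <= 165:
        return "possible osteoporosis (high-sensitivity range only)"
    return "not suggestive of osteoporosis"

print(classify_bone(102.0))  # -> meets high-specificity threshold
```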

PMID:38925254 | DOI:10.1016/j.bone.2024.117176

Categories: Literature Watch

Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning

Wed, 2024-06-26 06:00

Comput Biol Med. 2024 Jun 25;178:108798. doi: 10.1016/j.compbiomed.2024.108798. Online ahead of print.

ABSTRACT

Skin cancer (SC) significantly impacts many individuals' health all over the globe. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNN), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions that were made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD" which is utilized for the classification of dermoscopic photographs of SC. The system accurately categorises the photographs as benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and deep layers. It gathers features out of a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies the principal component analysis (PCA) dimensionality reduction approach to minimise the dimensions of pooling layer features. This also reduces the complexity of the training procedure compared to using deep features from a CNN that has a substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of entirely depending on the features of a single CNN architecture. In the end, it utilizes a feature selection step to determine the most important deep attributes. This helps to decrease the general size of the feature set and streamline the classification process. Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets, Skin Cancer: Malignant vs. Benign and HAM10000, are employed to validate the efficiency of Skin-CAD. The maximum accuracy achieved using Skin-CAD is 97.2 % and 96.5 % for the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, respectively. The findings of Skin-CAD demonstrate its potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
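
A sketch of the per-CNN fusion step described above: PCA compresses the large pooling-layer features before they are concatenated with the fully connected features (the component count is an assumption):

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_dual_layer_features(pool_feats, fc_feats, n_components=64):
    """Reduce pooling-layer features with PCA, then concatenate with the
    fully-connected-layer features of the same CNN (n_components assumed).
    In practice the PCA would be fit on training features only."""
    reduced = PCA(n_components=n_components).fit_transform(pool_feats)
    return np.concatenate([reduced, fc_feats], axis=1)

# Repeating this for each of the four CNNs and concatenating the results
# yields the combined feature set passed to feature selection.
```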

PMID:38925085 | DOI:10.1016/j.compbiomed.2024.108798

Categories: Literature Watch

Exhaustive in vitro evaluation of the 9-drug cocktail CUSP9 for treatment of glioblastoma

Wed, 2024-06-26 06:00

Comput Biol Med. 2024 Jun 25;178:108748. doi: 10.1016/j.compbiomed.2024.108748. Online ahead of print.

ABSTRACT

The CUSP9 protocol is a polypharmaceutical strategy aiming at addressing the complexity of glioblastoma by targeting multiple pathways. Although the rationale for this 9-drug cocktail is well-supported by theoretical and in vitro data, its effectiveness compared to its 511 possible subsets has not been comprehensively evaluated. Such an analysis could reveal if fewer drugs could achieve similar or better outcomes. We conducted an exhaustive in vitro evaluation of the CUSP9 protocol using COMBImageDL, our specialized framework for testing higher-order drug combinations. This study assessed all 511 subsets of the CUSP9v3 protocol, in combination with temozolomide, on two clonal cultures of glioma-initiating cells derived from patient samples. The drugs were used at fixed, clinically relevant concentrations, and the experiment was performed in quadruplicate with endpoint cell viability and live-cell imaging readouts. Our results showed that several lower-order drug combinations produced effects equivalent to the full CUSP9 cocktail, indicating potential for simplified regimens in personalized therapy. Further validation through in vivo and precision medicine testing is required. Notably, a subset of four drugs (auranofin, disulfiram, itraconazole, sertraline) was particularly effective, reducing cell growth, altering cell morphology, increasing apoptotic-like cells within 4-28 h, and significantly decreasing cell viability after 68 h compared to untreated cells. This study underscores the importance and feasibility of comprehensive in vitro evaluations of complex drug combinations on patient-derived tumor cells, serving as a critical step toward (pre-)clinical development.
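
The 511 subsets follow directly from the nine drugs (2^9 - 1 non-empty combinations); a short sketch of the enumeration, with placeholder names for the five CUSP9v3 agents the abstract does not list:

```python
from itertools import combinations

# The four drugs named in the abstract plus five placeholders for the
# remaining CUSP9v3 agents (not listed in the abstract).
drugs = ["auranofin", "disulfiram", "itraconazole", "sertraline",
         "drug_5", "drug_6", "drug_7", "drug_8", "drug_9"]

subsets = [c for r in range(1, len(drugs) + 1) for c in combinations(drugs, r)]
assert len(subsets) == 2**9 - 1  # the 511 non-empty subsets evaluated above
```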

PMID:38925084 | DOI:10.1016/j.compbiomed.2024.108748

Categories: Literature Watch

An effective deep learning fusion method for predicting the TVB-N and TVC contents of chicken breasts using dual hyperspectral imaging systems

Wed, 2024-06-26 06:00

Food Chem. 2024 May 28;456:139847. doi: 10.1016/j.foodchem.2024.139847. Online ahead of print.

ABSTRACT

Total volatile basic nitrogen (TVB-N) and total viable count (TVC) are important freshness indicators of meat. Hyperspectral imaging combined with chemometrics has been proven effective in meat detection. However, a challenge with chemometrics is the lack of a universally applicable processing combination, requiring trial-and-error experiments with different datasets. This study proposes an end-to-end deep learning model, the pyramid attention features fusion model (PAFFM), integrating a CNN, an attention mechanism, and a pyramid structure. PAFFM fuses the raw visible and near-infrared range (VNIR) and shortwave near-infrared range (SWIR) spectral data for predicting TVB-N and TVC in chicken breasts. Compared with CNN and chemometric models, PAFFM obtains excellent results without a complicated combinatorial optimization of processing steps. Important wavelengths that contributed significantly to PAFFM's performance are visualized and interpreted. This study offers valuable references and technical support for the market application of spectral detection, benefiting related research and practical fields.
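
A minimal sketch of the dual-range fusion idea: one CNN branch per spectral range, an attention-weighted merge, and a two-output regression head for TVB-N and TVC. The layer sizes, and the omission of PAFFM's pyramid structure, are simplifications:

```python
import torch
import torch.nn as nn

class DualRangeFusion(nn.Module):
    """Two 1D-CNN branches (VNIR, SWIR) merged by learned attention
    weights; sizes are illustrative, not PAFFM's actual architecture."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(32), nn.Flatten())
        self.vnir, self.swir = branch(), branch()
        self.attn = nn.Linear(2 * 16 * 32, 2)      # one weight per branch
        self.head = nn.Linear(16 * 32, 2)          # TVB-N and TVC outputs

    def forward(self, vnir, swir):                 # inputs: (N, 1, L), any L
        f1, f2 = self.vnir(vnir), self.swir(swir)  # (N, 512) each
        w = torch.softmax(self.attn(torch.cat([f1, f2], 1)), dim=1)
        fused = w[:, :1] * f1 + w[:, 1:] * f2      # attention-weighted merge
        return self.head(fused)
```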

PMID:38925007 | DOI:10.1016/j.foodchem.2024.139847

Categories: Literature Watch

Rapid, portable, and sensitive detection of CaMV35S by RPA-CRISPR/Cas12a-G4 colorimetric assays with high accuracy deep learning object recognition and classification

Wed, 2024-06-26 06:00

Talanta. 2024 Jun 20;278:126441. doi: 10.1016/j.talanta.2024.126441. Online ahead of print.

ABSTRACT

Fast, sensitive, and portable detection of genetic modification contributes to agricultural security and food safety. Here, we developed RPA-CRISPR/Cas12a-G-quadruplex colorimetric assays that can combine with intelligent recognition by deep learning algorithms to achieve sensitive, rapid, and portable detection of the CaMV35S promoter. When the crRNA-Cas12a complex recognizes the RPA amplification product, Cas12a cleaves the G-quadruplex, causing the G4-Hemin complex to lose its peroxidase-mimicking enzyme function and become unable to catalyze the conversion of ABTS²⁻ to its oxidized form, allowing the CaMV35S concentration to be determined from ABTS absorbance. By utilizing the RPA-CRISPR/Cas12a-G4 assay, we achieved a CaMV35S limit of detection down to 10 aM and detection of a 0.01 % genetically modified sample in 45 min. Deep learning algorithms are designed for highly accurate classification of the color results. YOLOv5 object detection and ResNet classification models were trained to identify trace (0.01 %) CaMV35S more accurately than naked-eye colorimetry. We also coupled the deep learning algorithms with a smartphone app to achieve portable and rapid photo identification. Overall, our findings enable low-cost ($0.43), high-accuracy, and intelligent detection of the CaMV35S promoter.
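
A rough sketch of the two-stage recognition pipeline, using a pretrained YOLOv5 detector from torch.hub to localize the assay region and a ResNet to classify the color result; the weights, file name, and class count are placeholders, not the authors' trained models:

```python
import torch
from torchvision import models, transforms
from PIL import Image

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained detector
classifier = models.resnet18(num_classes=2).eval()          # e.g. positive / negative

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

img = Image.open("assay_photo.jpg")            # placeholder file name
boxes = detector(img).xyxy[0]                  # rows: (x1, y1, x2, y2, conf, cls)
with torch.no_grad():
    for x1, y1, x2, y2, conf, cls in boxes.tolist():
        crop = prep(img.crop((x1, y1, x2, y2))).unsqueeze(0)
        label = classifier(crop).argmax(1).item()  # color-result class
```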

PMID:38924982 | DOI:10.1016/j.talanta.2024.126441

Categories: Literature Watch

Meta-analysis of the quantitative assessment of lower extremity motor function in elderly individuals based on objective detection

Wed, 2024-06-26 06:00

J Neuroeng Rehabil. 2024 Jun 26;21(1):111. doi: 10.1186/s12984-024-01409-7.

ABSTRACT

OBJECTIVE: To avoid the bias introduced by traditional rating scales, the present study explored the accuracy, advantages, and disadvantages of different objective detection methods for evaluating lower extremity motor function in elderly individuals.

METHODS: Studies on lower extremity motor function assessment in elderly individuals published in the PubMed, Web of Science, Cochrane Library and EMBASE databases in the past five years were searched. The methodological quality of the included trials was assessed using RevMan 5.4.1 and Stata, followed by statistical analyses.

RESULTS: In total, 19 randomized controlled trials with a total of 2626 participants were included. The results of the meta-analysis showed that inertial measurement units (IMUs), motion sensors, 3D motion capture systems, and observational gait analysis detected statistically significant changes in the step velocity and step length of lower extremity movement in elderly individuals (P < 0.00001), and can therefore serve as a standardized basis for assessing motor function in this population. Subgroup analysis showed significant heterogeneity in the assessment of step velocity [SMD = -0.98, 95% CI (-1.23, -0.72), I² = 91.3%, P < 0.00001] and step length [SMD = -1.40, 95% CI (-1.77, -1.02), I² = 86.4%, P < 0.00001] in elderly individuals. However, motion sensors (I² = 9% and 0%) and 3D motion capture systems (I² = 0%) showed low heterogeneity for step velocity and step length. The sensitivity analysis and publication bias test demonstrated that the results were stable and reliable.
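
The pooled SMDs and I² values above come from standard random-effects meta-analysis; a minimal DerSimonian-Laird sketch for reference (inputs are per-study effect sizes and variances):

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooling of standardized mean differences.
    d: per-study SMDs; v: per-study variances. Returns (pooled SMD, I2 %)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1 / v
    q = np.sum(w * (d - np.sum(w * d) / np.sum(w)) ** 2)  # Cochran's Q
    df = len(d) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)                                # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # heterogeneity, %
    return pooled, i2
```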

CONCLUSION: Observational gait analysis, motion sensors, 3D motion capture systems, and IMUs all play a role in evaluating the characteristic parameters of step velocity and step length in the lower extremity motor function of elderly individuals, with good accuracy and clinical value in preventing motor injury. However, the high heterogeneity of observational gait analysis and IMUs suggests that different evaluation methods rely on different calculation formulas and indicators, so standardized indicators have yet to be obtained in clinical applications. Thus, multimodal quantitative evaluation should be integrated.

PMID:38926890 | DOI:10.1186/s12984-024-01409-7

Categories: Literature Watch

Deep learning image reconstruction generates thinner slice iodine maps with improved image quality to increase diagnostic acceptance and lesion conspicuity: a prospective study on abdominal dual-energy CT

Wed, 2024-06-26 06:00

BMC Med Imaging. 2024 Jun 26;24(1):159. doi: 10.1186/s12880-024-01334-0.

ABSTRACT

BACKGROUND: To assess the improvement of image quality and diagnostic acceptance of thinner slice iodine maps enabled by deep learning image reconstruction (DLIR) in abdominal dual-energy CT (DECT).

METHODS: This study prospectively included 104 participants with 136 lesions. Four series of iodine maps were generated from portal-venous scans of contrast-enhanced abdominal DECT: 5-mm and 1.25-mm maps using adaptive statistical iterative reconstruction-V (ASIR-V) with 50% blending (AV-50), and 1.25-mm maps using DLIR at medium (DLIR-M) and high strength (DLIR-H). The iodine concentrations (IC) and their standard deviations were measured at nine anatomical sites, and the corresponding coefficients of variation (CV) were calculated. The noise power spectrum (NPS) and edge rise slope (ERS) were measured. Five radiologists rated image quality in terms of image noise, contrast, sharpness, texture, and small structure visibility, and evaluated the overall diagnostic acceptability of the images and lesion conspicuity.
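
For reference, the CV used here is the coefficient of variation of the iodine measurements (SD over mean); a one-function sketch on assumed ROI values:

```python
import numpy as np

def iodine_cv(roi_values):
    """Coefficient of variation of iodine concentration (CV = SD / mean),
    the metric compared across reconstructions above."""
    roi = np.asarray(roi_values, dtype=float)
    return roi.std(ddof=1) / roi.mean()

print(iodine_cv([2.1, 2.3, 1.9, 2.2]))  # toy values in mg/mL
```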

RESULTS: The four reconstructions maintained the IC values unchanged in nine anatomical sites (all p > 0.999). Compared to 1.25-mm AV-50, 1.25-mm DLIR-M and DLIR-H significantly reduced CV values (all p < 0.001) and presented lower noise and noise peak (both p < 0.001). Compared to 5-mm AV-50, 1.25-mm images had higher ERS (all p < 0.001). The difference of the peak and average spatial frequency among the four reconstructions was relatively small but statistically significant (both p < 0.001). The 1.25-mm DLIR-M images were rated higher than the 5-mm and 1.25-mm AV-50 images for diagnostic acceptability and lesion conspicuity (all p < 0.001).

CONCLUSIONS: DLIR may facilitate the thinner slice thickness iodine maps in abdominal DECT for improvement of image quality, diagnostic acceptability, and lesion conspicuity.

PMID:38926711 | DOI:10.1186/s12880-024-01334-0

Categories: Literature Watch

Prediction of CO(2) solubility in Ionic liquids for CO(2) capture using deep learning models

Wed, 2024-06-26 06:00

Sci Rep. 2024 Jun 26;14(1):14730. doi: 10.1038/s41598-024-65499-y.

ABSTRACT

Ionic liquids (ILs) are highly effective for capturing carbon dioxide (CO2). The prediction of CO2 solubility in ILs is crucial for optimizing CO2 capture processes. This study investigates the use of deep learning models for CO2 solubility prediction in ILs with a comprehensive dataset of 10,116 CO2 solubility data points for 164 kinds of ILs under different temperature and pressure conditions. Deep neural network models, including an Artificial Neural Network (ANN) and a Long Short-Term Memory (LSTM) network, were developed to predict CO2 solubility in ILs. The ANN and LSTM models demonstrated robust test accuracy in predicting CO2 solubility, with coefficient of determination (R2) values of 0.986 and 0.985, respectively. Both models' computational efficiency and cost were investigated, and the ANN model achieved reliable accuracy with a significantly lower computational time (approximately 30 times faster) than the LSTM model. A global sensitivity analysis (GSA) was performed to assess the influence of process parameters and associated functional groups on CO2 solubility. The sensitivity analysis results provided insights into the relative importance of input attributes on the output variable (CO2 solubility) in ILs. The findings highlight the significant potential of deep learning models for streamlining the screening process of ILs for CO2 capture applications.
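
A minimal sketch of the ANN route with scikit-learn, standing in for the paper's model: synthetic features take the place of the temperature, pressure, and functional-group inputs, and the layer sizes are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((1000, 10))   # stand-in for T, P, and functional-group counts
y = X @ rng.random(10)       # stand-in CO2 solubility values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("test R2:", r2_score(y_te, ann.predict(X_te)))
```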

PMID:38926595 | DOI:10.1038/s41598-024-65499-y

Categories: Literature Watch

Harnessing the deep learning power of foundation models in single-cell omics

Wed, 2024-06-26 06:00

Nat Rev Mol Cell Biol. 2024 Jun 26. doi: 10.1038/s41580-024-00756-6. Online ahead of print.

NO ABSTRACT

PMID:38926531 | DOI:10.1038/s41580-024-00756-6

Categories: Literature Watch

A novel radiomics approach for predicting TACE outcomes in hepatocellular carcinoma patients using deep learning for multi-organ segmentation

Wed, 2024-06-26 06:00

Sci Rep. 2024 Jun 26;14(1):14779. doi: 10.1038/s41598-024-65630-z.

ABSTRACT

Transarterial chemoembolization (TACE) represents the standard of therapy for non-operative hepatocellular carcinoma (HCC), while prediction of long-term treatment outcomes is a complex and multifactorial task. In this study, we present a novel machine learning approach utilizing radiomics features from multiple organ volumes of interest (VOIs) to predict TACE outcomes for 252 HCC patients. Unlike conventional radiomics models requiring laborious manual segmentation limited to tumoral regions, our approach captures information comprehensively across various VOIs using a fully automated, pretrained deep learning model applied to pre-TACE CT images. Evaluation of radiomics random survival forest models against clinical ones using Cox proportional hazards demonstrated comparable performance in predicting overall survival. However, radiomics outperformed clinical models in predicting progression-free survival. Explainable analysis highlighted the significance of non-tumoral VOI features, with their cumulative importance superior to that of features from the largest liver tumor. The proposed approach overcomes the limitations of manual VOI segmentation, requires no radiologist input, and highlights the clinical relevance of features beyond tumor regions. Our findings suggest the potential of these radiomics models in predicting TACE outcomes, with possible implications for other clinical scenarios.
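
For the clinical-baseline side of such a comparison, a Cox proportional-hazards fit takes only a few lines; the sketch below uses the lifelines library on toy data with hypothetical covariate names:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "months": rng.exponential(20, n),   # toy follow-up times
    "event":  rng.integers(0, 2, n),    # 1 = event observed
    "afp":    rng.normal(0, 1, n),      # standardized clinical covariates
    "albumin": rng.normal(0, 1, n),     # (hypothetical names)
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # hazard ratios for the clinical baseline model
```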

PMID:38926517 | DOI:10.1038/s41598-024-65630-z

Categories: Literature Watch

CT imaging-derived phenotypes for abdominal muscle and their association with age and sex in a medical biobank

Wed, 2024-06-26 06:00

Sci Rep. 2024 Jun 26;14(1):14807. doi: 10.1038/s41598-024-64603-6.

ABSTRACT

The study of muscle mass as an imaging-derived phenotype (IDP) may yield new insights into the normal and pathologic variations of muscle mass in the population. This can be done by determining 3D abdominal muscle mass from 12 distinct abdominal muscle regions and groups using computed tomography (CT) in a racially diverse medical biobank. Our aims were to develop a fully automatic technique for assessment of CT abdominal muscle IDPs and to preliminarily determine abdominal muscle IDP variations with age and sex in a clinically and racially diverse medical biobank. This retrospective study was conducted using the Penn Medicine BioBank (PMBB), a research protocol that recruits adult participants during outpatient visits at hospitals in the Penn Medicine network. We developed a deep residual U-Net (ResUNet) to segment 12 abdominal muscle groups including the left and right psoas, quadratus lumborum, erector spinae, gluteus medius, rectus abdominis, and lateral abdominals. 110 CT studies were randomly selected for training, validation, and testing. 44 of the 110 CT studies were selected to enrich the dataset with representative cases of intra-abdominal and abdominal wall pathology. The studies were divided into non-overlapping training, validation and testing sets. Model performance was evaluated using the Sørensen-Dice coefficient. Volumes of individual muscle groups were plotted as distribution curves. To investigate associations between muscle IDPs, age, and sex, deep learning model segmentations were performed on a larger abdominal CT dataset from PMBB consisting of 295 studies. Multivariable models were used to determine relationships between muscle mass, age and sex. The model's performance (Dice scores) on the test data was the following: psoas: 0.85 ± 0.12, quadratus lumborum: 0.72 ± 0.14, erector spinae: 0.92 ± 0.07, gluteus medius: 0.90 ± 0.08, rectus abdominis: 0.85 ± 0.08, lateral abdominals: 0.85 ± 0.09. The average Dice score across all muscle groups was 0.86 ± 0.11. Average total muscle mass for females was 2041 ± 560.7 g, with a high of 2256 ± 560.1 g (41-50 year old cohort) and a change of -0.96 g/year, declining to an average mass of 1579 ± 408.8 g (81-100 year old cohort). Average total muscle mass for males was 3086 ± 769.1 g, with a high of 3385 ± 819.3 g (51-60 year old cohort) and a change of -1.73 g/year, declining to an average mass of 2629 ± 536.7 g (81-100 year old cohort). Quadratus lumborum was most highly correlated with age for both sexes (correlation coefficient of -0.5). Gluteus medius mass in females was positively correlated with age with a coefficient of 0.22. These preliminary findings show that our CNN can automate detailed abdominal muscle volume measurement. Unlike prior efforts, this technique provides 3D muscle segmentations of individual muscles. This technique will dramatically impact sarcopenia diagnosis and research, elucidating its clinical and public health implications. Our results suggest a peak age range for muscle mass and an expected rate of decline, both of which vary between sexes. Future goals are to investigate genetic variants for sarcopenia and malnutrition, while describing genotype-phenotype associations of muscle mass in healthy humans using imaging-derived phenotypes. It is feasible to obtain 3D abdominal muscle IDPs with high accuracy from patients in a medical biobank using fully automated machine learning methods. Abdominal muscle IDPs showed significant variations in lean mass by age and sex. In the future, this tool can be leveraged to perform a genome-wide association study across the medical biobank and determine genetic variants associated with early or accelerated muscle wasting.
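
The Dice scores above measure voxel overlap between predicted and reference masks; a minimal per-label implementation:

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Sørensen-Dice coefficient for one muscle label: 2|A∩B| / (|A|+|B|),
    the segmentation metric reported above."""
    pred, true = np.asarray(pred_mask, bool), np.asarray(true_mask, bool)
    inter = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2 * inter / denom if denom else 1.0  # both empty -> perfect overlap
```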

PMID:38926479 | DOI:10.1038/s41598-024-64603-6

Categories: Literature Watch

DG2GAN: improving defect recognition performance with generated defect image sample

Wed, 2024-06-26 06:00

Sci Rep. 2024 Jun 26;14(1):14787. doi: 10.1038/s41598-024-64716-y.

ABSTRACT

This article aims to improve deep-learning-based surface defect recognition. In actual manufacturing processes, the image data collected for product defect recognition suffer from data imbalance, insufficient diversity, and poor quality of augmented data. A novel defect generation method with multiple loss functions, DG2GAN, is presented in this paper. The method employs a cycle consistency loss to generate defect images from a large number of defect-free images, overcoming the issue of imbalanced original training data. A DJS-optimized discriminator loss is introduced in the added discriminator to encourage the generation of diverse defect images. Furthermore, to maintain diversity in the generated images while improving image quality, a new DG2 adversarial loss is proposed with the aim of generating high-quality and diverse images. Experiments demonstrated that DG2GAN produces defect images of higher quality and greater diversity than other advanced generation methods. Using the DG2GAN method to augment defect data in the CrackForest and MVTec datasets, defect recognition accuracy increased from 86.9 to 94.6%, and precision improved from 59.8 to 80.2%. The experimental results show that the proposed defect generation method yields sample images with high quality and diversity, and that employing it for data augmentation significantly enhances surface defect recognition.
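
A sketch of the cycle-consistency term referenced above, in the usual CycleGAN form with two generators mapping between the defect-free and defect domains; the weight lam is an assumed value, and DG2GAN's additional DJS and DG2 losses are not reproduced here:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(g_ab, g_ba, real_a, real_b, lam=10.0):
    """Cycle-consistency term for two generators: g_ab maps defect-free
    images (domain A) to defect images (domain B), g_ba the reverse.
    lam is a weighting hyperparameter (assumed value)."""
    rec_a = g_ba(g_ab(real_a))   # A -> B -> A should reconstruct A
    rec_b = g_ab(g_ba(real_b))   # B -> A -> B should reconstruct B
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))
```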

PMID:38926463 | DOI:10.1038/s41598-024-64716-y

Categories: Literature Watch

Fluorescence excitation-scanning hyperspectral imaging with scalable 2D-3D deep learning framework for colorectal cancer detection

Wed, 2024-06-26 06:00

Sci Rep. 2024 Jun 26;14(1):14790. doi: 10.1038/s41598-024-64917-5.

ABSTRACT

Colorectal cancer is one of the top contributors to cancer-related deaths in the United States, with over 100,000 estimated cases in 2020 and over 50,000 deaths. The most common screening technique is minimally invasive colonoscopy using either reflected white light endoscopy or narrow-band imaging. However, current imaging modalities have only moderate sensitivity and specificity for lesion detection. We have developed a novel fluorescence excitation-scanning hyperspectral imaging (HSI) approach to sample image and spectroscopic data simultaneously on microscope and endoscope platforms for enhanced diagnostic potential. Unfortunately, fluorescence excitation-scanning HSI datasets pose major challenges for data processing, interpretability, and classification due to their high dimensionality. Here, we present an end-to-end scalable Artificial Intelligence (AI) framework built for classification of excitation-scanning HSI microscopy data that provides accurate image classification and interpretability of the AI decision-making process. The developed AI framework is able to perform real-time HSI classification with different speed/classification performance trade-offs by tailoring the dimensionality of the dataset, supporting different dimensions of deep learning models, and varying the architecture of deep learning models. We have also incorporated tools to visualize the exact location of the lesion detected by the AI decision-making process and to provide heatmap-based pixel-by-pixel interpretability. In addition, our deep learning framework provides wavelength-dependent impact as a heatmap, which allows visualization of the contributions of HSI wavelength bands during the AI decision-making process. This framework is well-suited for HSI microscope and endoscope platforms, where real-time analysis and visualization of classification results are required by clinicians.

PMID:38926431 | DOI:10.1038/s41598-024-64917-5

Categories: Literature Watch

Deep learning model integrating cfDNA methylation and fragment size profiles for lung cancer diagnosis

Wed, 2024-06-26 06:00

Sci Rep. 2024 Jun 26;14(1):14797. doi: 10.1038/s41598-024-63411-2.

ABSTRACT

Detecting aberrant cell-free DNA (cfDNA) methylation is a promising strategy for lung cancer diagnosis. In this study, our aim was to identify methylation markers that distinguish patients with lung cancer from healthy individuals. Additionally, we sought to develop a deep learning model incorporating both cfDNA methylation and fragment size profiles. To achieve this, we utilized methylation data collected from The Cancer Genome Atlas and Gene Expression Omnibus databases. We then generated methylated DNA immunoprecipitation sequencing and genome-wide enzymatic methyl-seq (EM-seq) data from lung cancer tissue and plasma. Using these data, we selected 366 methylation markers. A targeted EM-seq panel was designed using the selected markers, and 142 lung cancer and 56 healthy samples were sequenced with the panel. Additionally, cfDNA samples from healthy individuals and lung cancer patients were serially diluted to evaluate sensitivity. The resulting model's lung cancer detection performance reached an accuracy of 81.5% and an area under the receiver operating characteristic curve of 0.87. In the serial dilution experiment, we achieved tumor fraction detection of 1% at 98% specificity and 0.1% at 80% specificity. In conclusion, we successfully developed and validated a combination of a methylation panel and a deep learning model that can distinguish between patients with lung cancer and healthy individuals.
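
Operating points such as "1% tumor fraction at 98% specificity" can be read off a ROC curve; a small helper for doing so:

```python
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y_true, scores, target_spec=0.98):
    """Return the best sensitivity achievable at or above a fixed
    specificity, the operating-point style reported above."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    ok = fpr <= (1 - target_spec)        # points meeting the specificity bar
    return tpr[ok].max() if ok.any() else 0.0
```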

PMID:38926407 | DOI:10.1038/s41598-024-63411-2

Categories: Literature Watch

ROCOv2: Radiology Objects in COntext Version 2, an Updated Multimodal Image Dataset

Wed, 2024-06-26 06:00

Sci Data. 2024 Jun 26;11(1):688. doi: 10.1038/s41597-024-03496-6.

ABSTRACT

Automated medical image analysis systems often require large amounts of training data with high-quality labels, which are difficult and time-consuming to generate. This paper introduces Radiology Object in COntext version 2 (ROCOv2), a multimodal dataset consisting of radiological images and associated medical concepts and captions extracted from the PMC Open Access subset. It is an updated version of the ROCO dataset published in 2018, adding 35,705 images that have appeared in PMC since then. It further provides manually curated concepts for imaging modalities, with additional anatomical and directional concepts for X-rays. The dataset consists of 79,789 images and has been used, with minor modifications, in the concept detection and caption prediction tasks of ImageCLEFmedical Caption 2023. The dataset is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using the Unified Medical Language System (UMLS) concepts provided with each image. In addition, it can be used for pre-training medical domain models and for evaluating deep learning models on multi-task learning.
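
A minimal sketch of the multi-label use case mentioned above: a standard image backbone with an independent sigmoid/BCE output per UMLS concept (the backbone choice and label count are assumptions, not part of ROCOv2 itself):

```python
import torch
import torch.nn as nn
from torchvision import models

n_concepts = 1000                       # hypothetical UMLS vocabulary size
model = models.resnet50(num_classes=n_concepts)
criterion = nn.BCEWithLogitsLoss()      # each concept is an independent label

images = torch.randn(4, 3, 224, 224)    # stand-in batch
targets = torch.randint(0, 2, (4, n_concepts)).float()
loss = criterion(model(images), targets)
```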

PMID:38926396 | DOI:10.1038/s41597-024-03496-6

Categories: Literature Watch

PredGCN: A Pruning-enabled Gene-Cell Net for Automatic Cell Annotation of Single Cell Transcriptome Data

Wed, 2024-06-26 06:00

Bioinformatics. 2024 Jun 26:btae421. doi: 10.1093/bioinformatics/btae421. Online ahead of print.

ABSTRACT

MOTIVATION: The annotation of cell types from single-cell transcriptomics is essential for understanding the biological identity and functionality of cellular populations. Although manual annotation remains the gold standard, the advent of automatic pipelines has become crucial for scalable, unbiased, and cost-effective annotations. Nonetheless, the effectiveness of these automatic methods, particularly those employing deep learning, significantly depends on the architecture of the classifier and the quality and diversity of the training datasets.

RESULTS: To address these limitations, we present a Pruning-enabled Gene-Cell Net (PredGCN) incorporating a Coupled Gene-Cell Net (CGCN) to enable representation learning and information storage. PredGCN integrates a Gene Splicing Net (GSN) and a Cell Stratification Net (CSN), employing a pruning operation (PrO) to dynamically tackle the complexity of heterogeneous cell identification. Among them, GSN leverages multiple statistical and hypothesis-driven feature extraction methods to selectively assemble genes with specificity for scRNA-seq data while CSN unifies elements based on diverse region demarcation principles, exploiting the representations from GSN and precise identification from different regional homogeneity perspectives. Furthermore, we develop a multi-objective Pareto pruning operation (Pareto PrO) to expand the dynamic capabilities of CGCN, optimizing the sub-network structure for accurate cell type annotation. Multiple comparison experiments on real scRNA-seq datasets from various species have demonstrated that PredGCN surpasses existing state-of-the-art methods, including its scalability to cross-species datasets. Moreover, PredGCN can uncover unknown cell types and provide functional genomic analysis by quantifying the influence of genes on cell clusters, bringing new insights into cell type identification and characterizing scRNA-seq data from different perspectives.

AVAILABILITY AND IMPLEMENTATION: The source code is available at https://github.com/IrisQi7/PredGCN and test data is available at https://figshare.com/articles/dataset/PredGCN/25251163.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38924517 | DOI:10.1093/bioinformatics/btae421

Categories: Literature Watch
