Deep learning

A Comparative Study of Metaheuristic Feature Selection Algorithms for Respiratory Disease Classification

Wed, 2024-10-16 06:00

Diagnostics (Basel). 2024 Oct 8;14(19):2244. doi: 10.3390/diagnostics14192244.

ABSTRACT

The correct diagnosis and early treatment of respiratory diseases can significantly improve the health status of patients, reduce healthcare expenses, and enhance quality of life. Therefore, there has been extensive interest in developing automatic respiratory disease detection systems. Most recent methods for detecting respiratory disease use machine and deep learning algorithms. The success of these machine learning methods depends heavily on the selection of proper features to be used in the classifier. Although metaheuristic-based feature selection methods have been successful in addressing difficulties presented by high-dimensional medical data in various biomedical classification tasks, there is not much research on the utilization of metaheuristic methods in respiratory disease classification. This paper aims to conduct a detailed and comparative analysis of six widely used metaheuristic optimization methods using eight different transfer functions in respiratory disease classification. For this purpose, two different classification cases were examined: binary and multi-class. The findings demonstrate that metaheuristic algorithms using correct transfer functions could effectively reduce data dimensionality while enhancing classification accuracy.
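Although the abstract does not name its six algorithms or eight transfer functions, a common ingredient in binary metaheuristic feature selection is an S-shaped (sigmoid) transfer function that maps a search agent's continuous position to per-feature selection probabilities. A minimal sketch (the sigmoid form and stochastic threshold rule are illustrative assumptions, not necessarily the paper's exact choices):

```python
import math
import random

def s_shaped_transfer(x):
    # S-shaped (sigmoid) transfer: maps a continuous position
    # component to a probability of selecting that feature.
    return 1.0 / (1.0 + math.exp(-x))

def binarize_position(position, rng):
    # Keep feature i when a uniform draw falls below the
    # transfer value of its position component.
    return [1 if rng.random() < s_shaped_transfer(x) else 0 for x in position]

rng = random.Random(0)
mask = binarize_position([-6.0, 0.0, 6.0], rng)  # one binary feature mask
```

The resulting mask selects a feature subset whose classification accuracy then serves as the fitness value guiding the metaheuristic search.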

PMID:39410648 | DOI:10.3390/diagnostics14192244

Categories: Literature Watch

Hybrid Deep Learning Framework for Melanoma Diagnosis Using Dermoscopic Medical Images

Wed, 2024-10-16 06:00

Diagnostics (Basel). 2024 Oct 8;14(19):2242. doi: 10.3390/diagnostics14192242.

ABSTRACT

Background: Melanoma is a dangerous form of skin cancer and a major cause of death for thousands of people around the world. Methods: In recent years, deep learning has become more popular for analyzing and detecting such medical issues. In this paper, a hybrid deep learning approach is proposed based on U-Net for image segmentation, Inception-ResNet-v2 for feature extraction, and a Vision Transformer model with a self-attention mechanism for refining the features for early and accurate diagnosis and classification of skin cancer. Furthermore, in the proposed approach, hyperparameter tuning helps to obtain more accurate and optimized results for image classification. Results: Dermoscopic images from the International Skin Imaging Collaboration (ISIC 2020) challenge dataset are used in the proposed research work, achieving 98.65% accuracy, 99.20% sensitivity, and 98.03% specificity, which outperforms other existing approaches for skin cancer classification. Furthermore, the HAM10000 dataset is used for ablation studies to compare and validate the performance of the proposed approach. Conclusions: The achieved outcome suggests that the proposed approach could serve as a valuable tool for assisting dermatologists in the early detection of melanoma.

PMID:39410645 | DOI:10.3390/diagnostics14192242

Categories: Literature Watch

Differential diagnosis of congenital ventricular septal defect and atrial septal defect in children using deep learning-based analysis of chest radiographs

Tue, 2024-10-15 06:00

BMC Pediatr. 2024 Oct 15;24(1):661. doi: 10.1186/s12887-024-05141-y.

ABSTRACT

BACKGROUND: Children with atrial septal defect (ASD) and ventricular septal defect (VSD) are frequently examined for respiratory symptoms, even when the underlying disease has not yet been identified. Chest radiographs often serve as the primary imaging modality. It is crucial to differentiate between ASD and VSD because of their distinct treatments.

PURPOSE: To assess whether deep learning analysis of chest radiographs can more effectively differentiate between ASD and VSD in children.

METHODS: In this retrospective study, chest radiographs and corresponding radiology reports from 1,194 patients were analyzed. The cases were categorized into a training set and a validation set, comprising 480 cases of ASD and 480 cases of VSD, and a test set with 115 cases of ASD and 119 cases of VSD. Four deep learning network models-ResNet-CBAM, InceptionV3, EfficientNet, and ViT-were developed for training, and a fivefold cross-validation method was employed to optimize the models. Receiver operating characteristic (ROC) curve analyses were conducted to assess the performance of each model. The most effective algorithm was compared with the interpretations provided by two radiologists on 234 images from the test group.

RESULTS: The average accuracy, sensitivity, and specificity of the four deep learning models in the differential diagnosis of VSD and ASD all exceeded 70%. The AUC values of ResNet-CBAM, InceptionV3, EfficientNet, and ViT were 0.87, 0.91, 0.90, and 0.66, respectively. Statistical analysis showed that InceptionV3 had the highest differential diagnosis efficiency, reaching 87% classification accuracy. The accuracy of InceptionV3 in the differential diagnosis of VSD and ASD was higher than that of the radiologists.
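The models above are ranked by AUC; as a reminder, the empirical ROC AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with illustrative toy scores, not values from the study:

```python
def roc_auc(labels, scores):
    # Probability that a random positive outscores a random negative;
    # ties count as half a win.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75 on this toy data
```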

CONCLUSIONS: Deep learning methods such as InceptionV3 applied to chest radiographs showed good performance for the differential diagnosis of congenital VSD and ASD in this study. They may assist radiologists in diagnosis, education, and training, and reduce missed diagnoses and misdiagnoses.

PMID:39407181 | DOI:10.1186/s12887-024-05141-y

Categories: Literature Watch

HiCDiffusion - diffusion-enhanced, transformer-based prediction of chromatin interactions from DNA sequences

Tue, 2024-10-15 06:00

BMC Genomics. 2024 Oct 15;25(1):964. doi: 10.1186/s12864-024-10885-z.

ABSTRACT

Prediction of chromatin interactions from DNA sequence has been a significant research challenge in the last couple of years. Several solutions have been proposed, most of which are based on an encoder-decoder architecture, where the 1D sequence is convolved, encoded into a latent representation, and then decoded using 2D convolutions into the Hi-C pairwise chromatin spatial proximity matrix. Those methods, while obtaining high correlation scores and improved metrics, produce Hi-C matrices that are artificial - they are blurred due to the deep learning model architecture. In our study, we propose HiCDiffusion, a sequence-only model that addresses this problem. We first train the encoder-decoder neural network and then use it as a component of the diffusion model, guiding the diffusion with a latent representation of the sequence as well as the final output from the encoder-decoder. That way, we obtain high-resolution Hi-C matrices that not only better resemble the experimental results - improving the Fréchet inception distance by an average of 11 times, with the highest improvement of 56 times - but also achieve classic metrics similar to current state-of-the-art encoder-decoder architectures used for the task.

PMID:39407104 | DOI:10.1186/s12864-024-10885-z

Categories: Literature Watch

A Multi-task Neural Network for Image Recognition in Magnetically Controlled Capsule Endoscopy

Tue, 2024-10-15 06:00

Dig Dis Sci. 2024 Oct 15. doi: 10.1007/s10620-024-08681-6. Online ahead of print.

ABSTRACT

BACKGROUND AND AIMS: Physicians must spend a significant amount of time reading magnetically controlled capsule endoscopy images. However, current deep learning models are limited to a single recognition task and cannot replicate the diagnostic process of a physician. This study aims to construct a multi-task model that can simultaneously recognize gastric anatomical sites and gastric lesions.

METHODS: A multi-task recognition model named Mul-Recog-Model was established. The capsule endoscopy image data from 886 patients were selected to construct a training set and a test set for training and testing the model. Based on the same test set, the model in this study was compared with the current single-task recognition model with good performance.

RESULTS: The sensitivity and specificity of the model for recognizing gastric anatomical sites were 99.8% (95% confidence interval: 99.7-99.8) and 98.5% (95% confidence interval: 98.3-98.7), and for gastric lesions were 98.8% (95% confidence interval: 98.3-99.2) and 99.4% (95% confidence interval: 99.1-99.7). Moreover, the positive predictive value, negative predictive value, and accuracy of the model exceeded 95% in recognizing both gastric anatomical sites and gastric lesions. Compared with current single-task recognition models, our model showed comparable sensitivity, specificity, positive predictive value, negative predictive value, and accuracy (p < 0.01, except for the negative predictive value of ResNet, p > 0.05). The areas under the curve (AUCs) of our model were 0.985 and 0.989 for recognizing gastric anatomical sites and gastric lesions, respectively. Furthermore, the model had 49.1 M parameters and required 38.1 GFLOPs. The model took 15.5 ms to recognize an image, less than the combined time of multiple single-task models (p < 0.01).
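All of the figures reported above derive from confusion-matrix counts. A minimal sketch (the counts below are illustrative, chosen only to roughly echo the reported lesion-recognition percentages, not taken from the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard screening metrics from confusion-matrix counts.
    return {
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),   # recall on negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = diagnostic_metrics(tp=988, fp=6, tn=994, fn=12)
```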

CONCLUSIONS: The Mul-Recog-Model exhibited high sensitivity, specificity, positive predictive value, negative predictive value, and accuracy, and demonstrated excellent performance in terms of parameter count, floating-point computation, and inference time. Using the model to recognize gastric images can improve the efficiency of physicians' reporting and meet complex diagnostic requirements.

PMID:39407081 | DOI:10.1007/s10620-024-08681-6

Categories: Literature Watch

A Rapid Adaptation Approach for Dynamic Air-Writing Recognition Using Wearable Wristbands with Self-Supervised Contrastive Learning

Tue, 2024-10-15 06:00

Nanomicro Lett. 2024 Oct 16;17(1):41. doi: 10.1007/s40820-024-01545-8.

ABSTRACT

Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a small amount of labeled data is sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% across different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human-machine interaction on digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
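The self-supervised stage described above can be understood through a contrastive (InfoNCE-style) objective: two augmented views of the same unlabeled wrist-motion signal form a positive pair whose embeddings are pulled together, while embeddings of other signals are pushed apart. A minimal sketch on plain Python lists (the loss form and temperature are generic assumptions, not the paper's exact implementation):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.5):
    # -log of the softmax weight the positive pair receives
    # among all (positive + negative) pairs.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

# A well-aligned positive pair yields a lower loss than a misaligned one.
good = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
bad = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```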

PMID:39407061 | DOI:10.1007/s40820-024-01545-8

Categories: Literature Watch

Development trends and knowledge framework of artificial intelligence (AI) applications in oncology by years: a bibliometric analysis from 1992 to 2022

Tue, 2024-10-15 06:00

Discov Oncol. 2024 Oct 16;15(1):566. doi: 10.1007/s12672-024-01415-0.

ABSTRACT

PURPOSE: Oncology is the primary field in medicine with a high rate of artificial intelligence (AI) use. Thus, this study aimed to investigate the trends of AI in oncology, evaluating the bibliographic characteristics of articles. We evaluated the related research on the knowledge framework of Artificial Intelligence (AI) applications in Oncology through bibliometrics analysis and explored the research hotspots and current status from 1992 to 2022.

METHODS: The research employed a scientometric methodology and leveraged scientific visualization tools such as Bibliometrix R Package Software, VOSviewer, and Litmaps for comprehensive data analysis. Scientific AI-related publications in oncology were retrieved from the Web of Science (WoS) and InCites from 1992 to 2022.

RESULTS: A total of 7,815 articles authored by 35,098 authors and published in 1,492 journals were included in the final analysis. The most prolific authors were Esteva A (citations = 5,821) and Gillies RJ (citations = 4,288). The most active institutions were the Chinese Academy of Sciences and Harvard University. The leading journals were Frontiers in Oncology and Scientific Reports. The most frequent author keywords were "machine learning", "deep learning", "radiomics", "breast cancer", "melanoma" and "artificial intelligence", which represent the research hotspots in this field. A total of 10,866 author keywords were investigated. The average number of citations per document was 23. After 2015, the number of publications proliferated.

CONCLUSION: The investigation of artificial intelligence (AI) applications in the field of Oncology is still in its early phases, especially for genomics, proteomics, and clinicomics, with extensive studies focused on biology, diagnosis, treatment, and cancer risk assessment. This bibliometric analysis offered valuable perspectives on AI's role in Oncology research, shedding light on emerging research paths. Notably, a significant portion of these publications originated from developed nations. These findings could prove beneficial for both researchers and policymakers seeking to navigate this field.

PMID:39406991 | DOI:10.1007/s12672-024-01415-0

Categories: Literature Watch

Artificial intelligence-based analysis of retinal fluid volume dynamics in neovascular age-related macular degeneration and association with vision and atrophy

Tue, 2024-10-15 06:00

Eye (Lond). 2024 Oct 15. doi: 10.1038/s41433-024-03399-1. Online ahead of print.

ABSTRACT

BACKGROUND/OBJECTIVES: To characterise morphological changes in neovascular age-related macular degeneration (nAMD) during anti-angiogenic therapy and explore relationships with best-corrected visual acuity (BCVA) and development of macular atrophy (MA).

SUBJECTS/METHODS: Post-hoc analysis of the phase III HARBOR trial. SD-OCT scans from 1097 treatment-naïve nAMD eyes were analysed. Volumes of intraretinal cystoid fluid (ICF), subretinal hyperreflective material (SHRM), subretinal fluid (SRF), pigment epithelial detachment (PED) and cyst-free retinal volume (CFRV) were measured by a deep-learning model. Volumes were analysed by treatment regimen, macular neovascularisation (MNV) subtype and topographic location. Associations of volumetric features with BCVA and MA development were quantified at months 12 and 24.

RESULTS: Differences in feature volume changes by treatment regimen and MNV subtype were observed. Each additional 100 nanolitre unit (AHNU) of residual ICF, SHRM and CFRV at month 1 in the fovea was associated with deficits of 10.3, 7.3 and 12.2 letters at month 12. Baseline AHNUs of ICF, CFRV and PED were associated with increased odds of MA development at month 12 by 10%, 4% and 3%, respectively, while that of SRF was associated with a 5% decrease in odds. Associations at month 24 were similar to those at month 12.

CONCLUSION: Eyes with different MNV subtypes showed distinct trajectories of feature volume response to treatment. Higher baseline volumes of ICF or PED and lower baseline volume of SRF were associated with higher likelihoods of MA development over 24 months. Residual intraretinal fluid, including ICF and CFRV, along with SHRM were predictors of poor visual outcomes.

PMID:39406933 | DOI:10.1038/s41433-024-03399-1

Categories: Literature Watch

Prospective clinical evaluation of deep learning for ultrasonographic screening of abdominal aortic aneurysms

Tue, 2024-10-15 06:00

NPJ Digit Med. 2024 Oct 15;7(1):282. doi: 10.1038/s41746-024-01269-4.

ABSTRACT

Abdominal aortic aneurysm (AAA) often remains undetected until rupture due to limited access to diagnostic ultrasound. This trial evaluated a deep learning (DL) algorithm to guide AAA screening by novice nurses with no prior ultrasonography experience. Ten nurses performed 15 scans each on patients over 65, assisted by a DL object detection algorithm, and compared against physician-performed scans. Ultrasound scan quality, assessed by three blinded expert physicians, was the primary outcome. Among 184 patients, DL-guided novices achieved adequate scan quality in 87.5% of cases, comparable to the 91.3% achieved by physicians (p = 0.310). The DL model predicted AAA with an AUC of 0.975, 100% sensitivity, and 97.8% specificity, with a mean absolute error of 2.8 mm in predicting aortic width compared to physicians. This study demonstrates that DL-guided point-of-care ultrasound (POCUS) has the potential to democratize AAA screening, offering performance comparable to experienced physicians and improving early detection.

PMID:39406888 | DOI:10.1038/s41746-024-01269-4

Categories: Literature Watch

Comparison of deep and conventional machine learning models for prediction of one supply chain management distribution cost

Tue, 2024-10-15 06:00

Sci Rep. 2024 Oct 15;14(1):24195. doi: 10.1038/s41598-024-75114-9.

ABSTRACT

Strategic supply chain management (SCM) is essential for organizations striving to optimize performance and attain their goals. Prediction of supply chain management distribution cost (SCMDC) is one branch of SCM and is likewise essential for such organizations. For this purpose, four machine learning algorithms, including random forest (RF), support vector machine (SVM), multilayer perceptron (MLP), and decision tree (DT), along with deep learning using a convolutional neural network (CNN), were used to predict and analyze SCMDC. A comprehensive dataset consisting of 180,519 open-source data points was used to analyze and build the structure of each algorithm. Evaluation based on root mean square error (RMSE) and the coefficient of determination (R2) showed that the CNN model predicted SCMDC more accurately than the other models, achieving an RMSE of 0.528 and an R2 value of 0.953 on the test dataset. Notable advantages of CNNs include automatic learning of hierarchical features, proficiency in capturing spatial and temporal patterns, computational efficiency, robustness to data variations, minimal preprocessing requirements, end-to-end training capability, scalability, and widespread adoption supported by extensive research. These attributes position the CNN algorithm as the preferred choice for precise and reliable SCMDC predictions, especially in scenarios requiring rapid responses and limited computational resources.
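The two evaluation metrics used above have simple closed forms. A minimal sketch with toy values (not the paper's data):

```python
def rmse(y_true, y_pred):
    # Root mean square error.
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [10.0, 12.0, 14.0, 16.0]
y_pred = [10.5, 11.5, 14.5, 15.5]  # every prediction off by 0.5
```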

PMID:39406828 | DOI:10.1038/s41598-024-75114-9

Categories: Literature Watch

A deep learning approach for line-level Amharic Braille image recognition

Tue, 2024-10-15 06:00

Sci Rep. 2024 Oct 15;14(1):24172. doi: 10.1038/s41598-024-73895-7.

ABSTRACT

Braille, the most popular tactile writing system, uses patterns of raised dots arranged in cells to inscribe characters for visually impaired persons. Amharic is Ethiopia's official working language, spoken by more than 100 million people. To bridge the written communication gap between persons with and without eyesight, multiple optical braille recognition systems for various language scripts have been developed utilizing both statistical and deep learning approaches. However, the need for half-character identification and character segmentation has complicated these systems, particularly in the Amharic script, where each character is represented by two braille cells. To address these challenges, this study proposes a deep learning model that combines a CNN and a BiLSTM network with connectionist temporal classification (CTC). The model was trained with 1,800 line images at 32 × 256 and 48 × 256 dimensions, validated with 200 line images, and evaluated using the character error rate (CER). The best-trained model achieved a CER of 7.81% on test data at the 48 × 256 image dimension. These findings demonstrate that the proposed sequence-to-sequence learning method is a viable optical braille recognition (OBR) solution that does not necessitate extensive image pre- and post-processing. In addition, we have made the first Amharic braille line-image dataset freely available to researchers via the link: https://github.com/Ne-UoG-git/Am-Br-line-image.github.io .
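CTC is what lets the CNN-BiLSTM network emit a label distribution per image column without requiring character segmentation; at inference time the simplest decoder collapses repeated per-frame labels and removes blanks. A minimal greedy-decoding sketch (the frame-wise argmax label ids are assumed already computed, and the ids here are illustrative, not actual Amharic braille labels):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    # Collapse consecutive repeats, then drop the blank symbol.
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out

# The doubled 3 survives because a blank separates the two runs of 3s.
decoded = ctc_greedy_decode([0, 3, 3, 0, 3, 7, 7, 0])
```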

PMID:39406793 | DOI:10.1038/s41598-024-73895-7

Categories: Literature Watch

Evaluating facial dermis aging in healthy Caucasian females with LC-OCT and deep learning

Tue, 2024-10-15 06:00

Sci Rep. 2024 Oct 15;14(1):24113. doi: 10.1038/s41598-024-74370-z.

ABSTRACT

Recent advancements in high-resolution imaging have significantly improved our understanding of microstructural changes in the skin and their relationship to the aging process. Line-field confocal optical coherence tomography (LC-OCT) provides detailed 3D insights into various skin layers, including the papillary dermis and its fibrous network. In this study, a deep learning model utilizing a 3D ResNet-18 network was trained to predict chronological age from LC-OCT images of 100 healthy Caucasian female volunteers, aged 20 to 70 years. The AI-based protocol focused on regions of interest delineated between the segmented dermal-epidermal junction and the superficial dermis, exploiting complex patterns within the collagen network for age prediction. The model achieved a mean absolute error of 4.2 years and exhibited a Pearson correlation coefficient of 0.937 with actual ages. Furthermore, there was a notable correlation (r = 0.87) between quantified clinical scoring, encompassing parameters such as firmness, elasticity, density, and wrinkle appearance, and the ages predicted by the deep learning model. This strong correlation underscores how integrating emerging imaging technologies with deep learning can accelerate aging research and deepen our understanding of how alterations in skin microstructure relate to visible signs of aging.

PMID:39406771 | DOI:10.1038/s41598-024-74370-z

Categories: Literature Watch

High-risk powered two-wheelers scenarios generation for autonomous vehicle testing using WGAN

Tue, 2024-10-15 06:00

Traffic Inj Prev. 2024 Oct 15:1-9. doi: 10.1080/15389588.2024.2399305. Online ahead of print.

ABSTRACT

OBJECTIVE: Autonomous vehicles (AVs) have the potential to revolutionize the future of mobility by significantly improving traffic safety. This study presents a novel method for validating the safety performance of AVs in high-risk scenarios involving powered two-wheelers (PTWs). By generating high-risk scenarios from in-depth crash data, this study addresses the shortcoming that public-road test scenarios often lack the complexity and risk necessary to effectively evaluate the capabilities of AVs in high-risk situations.

METHOD: Our approach employs a Wasserstein generative adversarial network (WGAN) to generate high-risk scenes, particularly focusing on PTW scenarios. By extracting 314 car-to-PTW crashes from the China In-depth Mobility Safety Study-Traffic Accident database, we simulate outcomes using PC-Crash software. The data are divided into scenes at 0.1-s intervals, with WGAN generating numerous high-risk scenes. By using a cumulative distribution function (CDF), we sampled and analyzed the vehicle's dynamic information to generate complete scenarios applicable to the test. The validation process involves using the SVL Simulator and the Baidu Apollo joint simulation platform to evaluate the AV's driving behavior and interactions with PTWs.

RESULTS: This study evaluates the generated scenes by comparing distributions, using the Wasserstein distance as an indicator. The generator converges after approximately 200 epochs, and the iterator converges quickly. Subsequently, 10,000 new scenes are generated. The distributions of several key parameters in the generated scenes approximate those of the original scenes. After sampling, the usability of the generated scenarios is 64.76%. Virtual simulations confirm the effectiveness of the scenario generation method, with a generated-scenario crash rate of 16.50% closely reflecting the original rate of 15.0%, showcasing the method's capacity to produce realistic and hazardous scenarios.
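The Wasserstein distance used above as the comparison indicator has a simple closed form in one dimension: for two equal-size empirical samples it is the mean absolute difference of their sorted values (optimal transport pairs the i-th order statistics). A minimal sketch with toy samples, not the paper's scene parameters:

```python
def wasserstein_1d(a, b):
    # Empirical 1-D Wasserstein-1 distance between equal-size samples.
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # each point shifts by 1
```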

CONCLUSIONS: The experimental results suggest that these scenarios exhibit a level of risk similar to the original crashes and are effective for testing AVs. Consequently, the generated scenarios enhance the diversity of the scenario library and accelerate the overall testing process of AVs.

PMID:39405428 | DOI:10.1080/15389588.2024.2399305

Categories: Literature Watch

Predicting pedestrian-vehicle interaction severity at unsignalized intersections

Tue, 2024-10-15 06:00

Traffic Inj Prev. 2024 Oct 15:1-10. doi: 10.1080/15389588.2024.2404713. Online ahead of print.

ABSTRACT

OBJECTIVES: This study aims to develop and validate a novel deep-learning model that predicts the severity of pedestrian-vehicle interactions at unsignalized intersections, distinctively integrating Transformer-based models with Multilayer Perceptrons (MLP). This approach leverages advanced feature analysis capabilities, offering a more direct and interpretable method than traditional models.

METHODS: High-resolution optical cameras recorded detailed pedestrian and vehicle movements at the study sites, and the data were processed to extract trajectories and convert them into real-world coordinates via precise georeferencing. Trained observers categorized interactions into safe passage, critical event, and conflict based on movement patterns, speeds, and accelerations, and the Fleiss kappa statistic was used to measure inter-rater agreement and ensure evaluator consistency. The proposed deep-learning model combines the time-series capabilities of a Transformer with the classification strengths of a Multilayer Perceptron (MLP); unlike traditional models, this approach focuses on feature analysis for greater interpretability. Trained on dynamic input variables from the trajectory data, the model employs attention mechanisms to evaluate the significance of each input variable, offering deeper insights into the factors influencing interaction severity.

RESULTS: The model demonstrated high performance across different severity categories: safe interactions achieved a precision of 0.78, recall of 0.91, and F1-score of 0.84. In more severe categories like critical events and conflicts, precision and recall were even higher. Overall accuracy stood at 0.87, with both macro and weighted averages for precision, recall, and F1-score also at 0.87. The variable importance analysis, using attention scores from the proposed transformer model, identified 'Vehicle Speed' as the most significant input variable positively influencing severity. Conversely, 'Approaching Angle' and 'Vehicle Distance from Conflict Point' negatively impacted severity. Other significant factors included 'Type of Vehicle', 'Pedestrian Speed', and 'Pedestrian Yaw Rate', highlighting the complex interplay of behavioral and environmental factors in pedestrian-vehicle interactions.
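The variable-importance ranking described above can be sketched by normalizing raw attention scores with a softmax and sorting; the scores and variable names below are illustrative stand-ins, not values from the study:

```python
import math

def attention_importance(scores, names):
    # Softmax-normalize raw attention scores into weights that sum to 1,
    # then rank the input variables by weight.
    mx = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return sorted(zip(names, (e / total for e in exps)),
                  key=lambda kv: -kv[1])

ranking = attention_importance(
    [2.0, 0.5, 1.0],
    ["vehicle_speed", "approaching_angle", "pedestrian_speed"])
```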

CONCLUSIONS: This study introduces a deep-learning model that effectively predicts the severity of pedestrian-vehicle interactions at crosswalks, utilizing a Transformer-MLP hybrid architecture with high precision and recall across severity categories. Key factors influencing severity were identified, paving the way for further enhancements in real-time analysis and broader safety assessments in urban settings.

PMID:39405419 | DOI:10.1080/15389588.2024.2404713

Categories: Literature Watch

Patient-Specific Myocardial Infarction Risk Thresholds From AI-Enabled Coronary Plaque Analysis

Tue, 2024-10-15 06:00

Circ Cardiovasc Imaging. 2024 Oct;17(10):e016958. doi: 10.1161/CIRCIMAGING.124.016958. Epub 2024 Sep 30.

ABSTRACT

BACKGROUND: Plaque quantification from coronary computed tomography angiography has emerged as a valuable predictor of cardiovascular risk. Deep learning can provide automated quantification of coronary plaque from computed tomography angiography. We determined per-patient age- and sex-specific distributions of deep learning-based plaque measurements and further evaluated their risk prediction for myocardial infarction in external samples.

METHODS: In this international, multicenter study of 2803 patients, a previously validated deep learning system was used to quantify coronary plaque from computed tomography angiography. Age- and sex-specific distributions of coronary plaque volume were determined from 956 patients undergoing computed tomography angiography for stable coronary artery disease from 5 cohorts. Multicenter external samples were used to evaluate associations between coronary plaque percentiles and myocardial infarction.

RESULTS: Quantitative deep learning plaque volumes increased with age and were higher in male patients. In the combined external sample (n=1847), patients in the ≥75th percentile of total plaque volume (unadjusted hazard ratio, 2.65 [95% CI, 1.47-4.78]; P=0.001) were at increased risk of myocardial infarction compared with patients below the 50th percentile. Similar relationships were seen for most plaque volumes and persisted in multivariable analyses adjusting for clinical characteristics, coronary artery calcium, stenosis, and plaque volume, with adjusted hazard ratios ranging from 2.38 to 2.50 for patients in the ≥75th percentile of total plaque volume.

CONCLUSIONS: Per-patient age- and sex-specific distributions for deep learning-based coronary plaque volumes are strongly predictive of myocardial infarction, with the highest risk seen in patients with coronary plaque volumes in the ≥75th percentile.

PMID:39405390 | DOI:10.1161/CIRCIMAGING.124.016958

Categories: Literature Watch

A novel classification framework for genome-wide association study of whole brain MRI images using deep learning

Tue, 2024-10-15 06:00

PLoS Comput Biol. 2024 Oct 15;20(10):e1012527. doi: 10.1371/journal.pcbi.1012527. Online ahead of print.

ABSTRACT

Genome-wide association studies (GWASs) have been widely applied in the neuroimaging field to discover genetic variants associated with brain-related traits. So far, almost all GWASs conducted in neuroimaging genetics are performed on univariate quantitative features summarized from brain images. On the other hand, powerful deep learning technologies have dramatically improved our ability to classify images. In this study, we proposed and implemented a novel machine learning strategy for systematically identifying genetic variants that lead to detectable nuances on Magnetic Resonance Images (MRI). For a specific single nucleotide polymorphism (SNP), if MRI images labeled by genotypes of this SNP can be reliably distinguished using machine learning, we then hypothesized that this SNP is likely to be associated with brain anatomy or function that is manifested in MRI brain images. We applied this strategy to a catalog of MRI image and genotype data collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI) consortium. From the results, we identified novel variants that show strong associations with brain phenotypes.

PMID:39405331 | DOI:10.1371/journal.pcbi.1012527

Categories: Literature Watch

Path of career planning and employment strategy based on deep learning in the information age

Tue, 2024-10-15 06:00

PLoS One. 2024 Oct 15;19(10):e0308654. doi: 10.1371/journal.pone.0308654. eCollection 2024.

ABSTRACT

With rising education levels and the expansion of higher education, more students have the opportunity to obtain a better education, and the pressure of employment competition is also increasing. How to improve students' employment competitiveness and comprehensive quality, and how to explore paths for career planning and employment strategies, have become common concerns in today's society. In the current information age, career planning and employment strategy paths are increasingly informatized, and Internet support is essential for obtaining more employment information. As a representative technology of the information age, deep learning offers a promising approach. This paper conducts an in-depth study of career planning and employment strategy paths based on deep learning in the information age. The research shows that deep-learning-supported career planning and employment strategy paths can help students solve the main problems they face in career planning education and better meet the needs of today's society: career awareness increased by 35% and self-improvement by 15%. This indicates that career planning and employment strategies based on deep learning follow the trend of the times and can help college students improve their self-understanding, find employment, and develop themselves. The study combines quantitative and qualitative methods, collecting data through questionnaires and analyzing them with a deep learning model. A control group and an experimental group were set up to evaluate the effect of career planning education, and descriptive statistics and correlation analysis were used to ensure the accuracy and reliability of the results.

PMID:39405324 | DOI:10.1371/journal.pone.0308654

Categories: Literature Watch

Enhancing Semantic Segmentation in High-Resolution TEM Images: A Comparative Study of Batch Normalization and Instance Normalization

Tue, 2024-10-15 06:00

Microsc Microanal. 2024 Oct 15:ozae093. doi: 10.1093/mam/ozae093. Online ahead of print.

ABSTRACT

Integrating deep learning into image analysis for transmission electron microscopy (TEM) holds significant promise for advancing materials science and nanotechnology. Deep learning can enhance image quality, automate feature detection, and accelerate data analysis, addressing the complex nature of TEM datasets. This capability is crucial for precise and efficient characterization of details on the nano- and microscale, e.g., facilitating more accurate and high-throughput analysis of nanoparticle structures. This study investigates the influence of batch normalization (BN) and instance normalization (IN) on the performance of deep learning models for semantic segmentation of high-resolution TEM images. Using U-Net and ResNet architectures, we trained models on two different datasets. Our results demonstrate that IN consistently outperforms BN, yielding higher Dice scores and Intersection over Union metrics. These findings underscore the necessity of selecting appropriate normalization methods to maximize the performance of deep learning models applied to TEM images.
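Not from the paper itself, but the BN/IN distinction the abstract hinges on reduces to which axes the normalization statistics are pooled over. A minimal NumPy sketch (the `(batch, channels, H, W)` layout and `eps` value are illustrative assumptions, matching common deep learning conventions):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Batch norm: mean/variance shared across the batch and spatial axes,
    # so statistics couple every sample in the batch together.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Instance norm: mean/variance computed per sample and per channel,
    # over the spatial axes only — each image is normalized independently.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))  # (batch, channels, H, W)
bn, inn = batch_norm(x), instance_norm(x)
```

Because IN never mixes statistics across samples, it is insensitive to batch size and to per-image contrast shifts, which is one plausible reason it can outperform BN on heterogeneous TEM micrographs.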

PMID:39405188 | DOI:10.1093/mam/ozae093

Categories: Literature Watch

Deep Learning Image Segmentation Based on Adaptive Total Variation Preprocessing

Tue, 2024-10-15 06:00

IEEE Trans Cybern. 2024 Oct 15;PP. doi: 10.1109/TCYB.2024.3418937. Online ahead of print.

ABSTRACT

This article proposes a two-stage image segmentation method based on the Mumford-Shah (MS) model, aiming to enhance the segmentation accuracy of images with complex structure and background. In the first stage, to obtain a smooth approximate solution of the image by minimizing the energy functional, an anisotropic regularization term formed by combining the gradient operator with an adaptive weighting matrix is introduced. The adaptive weighting matrix provides different weights in the horizontal and vertical directions according to the gradient information, so that the curve diffuses along the directions of the local feature tangents of the objects. In addition, the adaptive weighting matrix filters out information irrelevant to the image target, thus reducing the interference of complex backgrounds. The alternating direction method of multipliers (ADMM) is employed to solve the convex optimization problem in the first stage. In the second stage, the smoothed image obtained in the first stage is segmented by a deep learning method. Comparisons with traditional and deep learning methods demonstrate that this segmentation method not only achieves good perceptual quality but also obtains superior evaluation metrics.
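The paper's first-stage smoothing can be caricatured without ADMM: direction-dependent weights derived from image gradients damp diffusion across edges while allowing it along them. The sketch below is an assumption-laden toy (explicit gradient descent instead of ADMM, a Perona-Malik-style weight with an invented scale `k`, and made-up step sizes), meant only to illustrate the adaptive-weighting idea, not the authors' algorithm:

```python
import numpy as np

def adaptive_weights(img, k=0.1):
    # Direction-wise edge-stopping weights: close to 1 in flat regions,
    # small where the horizontal/vertical gradient is strong, so smoothing
    # diffuses along local feature tangents rather than across edges.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    wx = 1.0 / (1.0 + (gx / k) ** 2)
    wy = 1.0 / (1.0 + (gy / k) ** 2)
    return wx, wy

def smooth_step(u, f, wx, wy, lam=0.5, tau=0.1):
    # One explicit descent step on a weighted-gradient energy with an
    # L2 data-fidelity term (u - f)^2: diffusion is scaled per direction
    # by wx, wy before taking the (backward-difference) divergence.
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    div = (np.diff(wx * ux, axis=1, prepend=(wx * ux)[:, :1])
           + np.diff(wy * uy, axis=0, prepend=(wy * uy)[:1, :]))
    return u - tau * (lam * (u - f) - div)

rng = np.random.default_rng(0)
f = rng.normal(size=(32, 32))   # noisy toy "image"
wx, wy = adaptive_weights(f)
u = f.copy()
for _ in range(20):
    u = smooth_step(u, f, wx, wy)
```

The smoothed `u` would then be handed to the second-stage segmenter; ADMM replaces the explicit iteration here with exact convex subproblem solves.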

PMID:39405157 | DOI:10.1109/TCYB.2024.3418937

Categories: Literature Watch

Unpaired data training enables super-resolution confocal microscopy from low-resolution acquisitions

Tue, 2024-10-15 06:00

Opt Lett. 2024 Oct 15;49(20):5775-5778. doi: 10.1364/OL.537713.

ABSTRACT

Supervised deep-learning models have enabled super-resolution imaging in several microscopic imaging modalities, increasing the spatial lateral bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount and quality of training data, requiring the experimental acquisition of large, paired databases to produce an accurate, generalized model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets. This paper introduces a cycleGAN framework specifically designed to increase the lateral resolution limit in confocal microscopy by training a cycleGAN model on low- and high-resolution unpaired confocal images of human glioblastoma cells. Training and testing performance of the cycleGAN model has been assessed with metrics such as background standard deviation, peak signal-to-noise ratio, and a customized frequency content measure. The model has also been evaluated for image fidelity and resolution improvement on a paired dataset, outperforming other reported methods. This work highlights the efficacy and promise of cycleGAN models in tackling super-resolution microscopic imaging without paired training, paving the path for turning home-built low-resolution microscopic systems into low-cost super-resolution instruments by means of unsupervised deep learning.
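What lets cycleGANs drop the paired-data requirement is the cycle-consistency loss: two generators map between the low- and high-resolution domains, and each round trip must reproduce its input. A minimal NumPy sketch of that loss term (the identity-map "generators" stand in for trained networks and are purely illustrative; the adversarial losses of the full framework are omitted):

```python
import numpy as np

def cycle_consistency_loss(x_lr, x_hr, G, F):
    # G: low-res -> high-res generator; F: high-res -> low-res generator.
    # Unpaired training enforces F(G(x)) ~ x and G(F(y)) ~ y via an L1
    # penalty, so no pixel-aligned LR/HR image pairs are ever needed.
    loss_lr = np.mean(np.abs(F(G(x_lr)) - x_lr))
    loss_hr = np.mean(np.abs(G(F(x_hr)) - x_hr))
    return loss_lr + loss_hr

# Toy "generators": identity maps stand in for trained networks,
# for which the cycle loss is exactly zero.
G = lambda x: x
F = lambda y: y
```

In the full framework this term is added to the two adversarial losses, which push `G`'s outputs toward the statistics of real high-resolution confocal images.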

PMID:39404535 | DOI:10.1364/OL.537713

Categories: Literature Watch
