Deep learning
Brain Computer Interfaces: An Introduction for Clinical Neurodiagnostic Technologists
Neurodiagn J. 2024 Oct 16:1-14. doi: 10.1080/21646821.2024.2408501. Online ahead of print.
ABSTRACT
Brain-computer interface (BCI) is a term used to describe systems that translate biological information into commands that can control external devices such as computers, prosthetics, and other machinery. While BCI is used in military applications, home control systems, and a wide array of entertainment, much of its modern interest and funding can be attributed to its utility in the medical community, where it has rapidly propelled advancements in the restoration or replacement of critical functions robbed from victims of disease, stroke, and traumatic injury. BCI devices can allow patients to move prosthetic limbs, operate devices such as wheelchairs or computers, and communicate through writing and speech-generating devices. In this article, we aim to provide an introductory summary of the historical context and growing modern utility of BCI, with specific interest in igniting the conversation about where and how the neurodiagnostics community and its associated parties can embrace and contribute to the world of BCI.
PMID:39413360 | DOI:10.1080/21646821.2024.2408501
Mpox outbreak: Time series analysis with multifractal and deep learning network
Chaos. 2024 Oct 1;34(10):101103. doi: 10.1063/5.0236082.
ABSTRACT
This article presents an overview of the mpox epidemiological situation in the most affected regions (Africa, the Americas, and Europe), using fractal interpolation to pre-process the mpox case data. This analysis highlights the irregular and fractal patterns in the trend of mpox transmission. During the current public health emergency of international concern due to the mpox outbreak, an additional significance of this article is the interpretation of mpox spread in light of multifractality. A self-similar measure, namely the multifractal measure, is utilized to explore the heterogeneity in the mpox cases. Moreover, a bidirectional long short-term memory neural network is employed to forecast future mpox spread and provide early warning of an outbreak that could otherwise progress silently into a global epidemic.
PMID:39413265 | DOI:10.1063/5.0236082
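To make the forecasting step concrete, here is a minimal sketch of a bidirectional LSTM applied to a univariate case-count series. The window length, layer sizes, and synthetic series are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    """One-step-ahead forecaster: BiLSTM over a sliding window of past counts."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # concatenation of both directions

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next value

def make_windows(series, window=14):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32).unsqueeze(-1))

# Toy daily counts standing in for the (fractal-interpolated) mpox series.
series = np.abs(np.sin(np.linspace(0, 20, 300))) * 100 + np.random.rand(300) * 5
X, y = make_windows(series)

model = BiLSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                        # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("one-step-ahead forecast:", model(X[-1:]).item())
```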
Radiomics-Based Prediction of Patient Demographic Characteristics on Chest Radiographs: Looking Beyond Deep Learning for Risk of Bias
AJR Am J Roentgenol. 2024 Oct 16. doi: 10.2214/AJR.24.31963. Online ahead of print.
NO ABSTRACT
PMID:39413236 | DOI:10.2214/AJR.24.31963
Prior Visual-guided Self-supervised Learning Enables Color Vignetting Correction for High-throughput Microscopic Imaging
IEEE J Biomed Health Inform. 2024 Oct 16;PP. doi: 10.1109/JBHI.2024.3471907. Online ahead of print.
ABSTRACT
Vignetting constitutes a prevalent optical degradation that significantly compromises the quality of biomedical microscopic imaging. However, a robust and efficient vignetting correction methodology for multi-channel microscopic images remains absent at present. In this paper, we take advantage of prior knowledge about the homogeneity of microscopic images and the radial attenuation property of vignetting to develop a self-supervised deep learning algorithm that achieves complex vignetting removal in color microscopic images. Our proposed method, vignetting correction lookup table (VCLUT), is trainable on both single and multiple images, and employs adversarial learning to transfer the good imaging conditions from a central region of its own light field, visually defined by the user, to the entire image. To illustrate its effectiveness, we performed individual correction experiments on data from five distinct biological specimens. The results demonstrate that VCLUT exhibits enhanced performance compared to classical methods. We further examined its performance as a multi-image-based approach on a pathological dataset, revealing its advantage over other state-of-the-art approaches in both qualitative and quantitative measurements. Moreover, it uniquely possesses the capacity for generalization across various levels of vignetting intensity and an ultra-fast model computation capability, rendering it well-suited for integration into high-throughput imaging pipelines of digital microscopy.
PMID:39412976 | DOI:10.1109/JBHI.2024.3471907
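The abstract's key prior, radial attenuation of brightness away from the optical center, can be illustrated with a classical flat-field-style baseline: estimate a radial gain profile from the image itself and divide it out. This is only a simple stand-in for the idea, not the adversarial VCLUT method; all parameters below are assumptions.

```python
import numpy as np

def radial_vignetting_correction(img, n_bins=64):
    """img: (H, W) float channel; returns a channel with radial falloff compensated."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    # Mean intensity per radial bin -> empirical attenuation profile (a simple lookup table).
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    sums = np.bincount(bins.ravel(), weights=img.ravel(), minlength=n_bins)
    profile = sums / np.maximum(counts, 1)
    gain = profile[0] / np.maximum(profile, 1e-6)   # normalise to the central bin
    return img * gain[bins]

# Usage on a synthetic vignetted image (placeholder data):
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
falloff = 1 - 0.5 * (np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)) ** 2
vignetted = 0.8 * falloff + 0.02 * np.random.rand(h, w)
corrected = radial_vignetting_correction(vignetted)
```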
Attention-guided 3D CNN With Lesion Feature Selection for Early Alzheimer's Disease Prediction Using Longitudinal sMRI
IEEE J Biomed Health Inform. 2024 Oct 16;PP. doi: 10.1109/JBHI.2024.3482001. Online ahead of print.
ABSTRACT
Predicting the progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD) is critical for early intervention. Towards this end, various deep learning models have been applied in this domain, typically relying on structural magnetic resonance imaging (sMRI) data from a single time point while neglecting the dynamic changes in brain structure over time. Current longitudinal studies inadequately explore disease evolution dynamics and are burdened by high computational complexity. This paper introduces a novel lightweight 3D convolutional neural network specifically designed to capture the evolution of brain diseases for modeling the progression of MCI. First, a longitudinal lesion feature selection strategy is proposed to extract core features from temporal data, facilitating the detection of subtle differences in brain structure between two time points. Next, to refine the model for a more concentrated emphasis on lesion features, a disease trend attention mechanism is introduced to learn the dependencies between overall disease trends and local variation features. Finally, disease prediction visualization techniques are employed to improve the interpretability of the final predictions. Extensive experiments demonstrate that the proposed model achieves state-of-the-art performance in terms of area under the curve (AUC), accuracy, specificity, precision, and F1 score. This study confirms the efficacy of our early diagnostic method, utilizing only two follow-up sMRI scans to predict the disease status of MCI patients 24 months later with an AUC of 79.03%.
PMID:39412975 | DOI:10.1109/JBHI.2024.3482001
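As a hedged illustration of the longitudinal idea (not the paper's architecture), the sketch below feeds two follow-up sMRI volumes plus their voxel-wise difference into a lightweight 3D CNN; the channel sizes and the explicit difference channel are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class Longitudinal3DCNN(nn.Module):
    """Toy two-time-point classifier; the difference channel highlights structural change."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, t0, t1):                    # two time points: (B, 1, D, H, W)
        x = torch.cat([t0, t1, t1 - t0], dim=1)   # assumed input design, for illustration
        return self.classifier(self.features(x).flatten(1))

model = Longitudinal3DCNN()
t0 = torch.randn(2, 1, 32, 32, 32)                # placeholder volumes
t1 = torch.randn(2, 1, 32, 32, 32)
print(model(t0, t1).shape)                        # torch.Size([2, 2])
```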
One novel transfer learning-based CLIP model combined with self-attention mechanism for differentiating the tumor-stroma ratio in pancreatic ductal adenocarcinoma
Radiol Med. 2024 Oct 16. doi: 10.1007/s11547-024-01902-y. Online ahead of print.
ABSTRACT
PURPOSE: To develop a contrastive language-image pretraining (CLIP) model based on transfer learning and combined with self-attention mechanism to predict the tumor-stroma ratio (TSR) in pancreatic ductal adenocarcinoma on preoperative enhanced CT images, in order to understand the biological characteristics of tumors for risk stratification and guiding feature fusion during artificial intelligence-based model representation.
MATERIAL AND METHODS: This retrospective study collected a total of 207 PDAC patients from three hospitals. TSR assessments were performed on surgical specimens by pathologists and divided into high-TSR and low-TSR groups. This study developed a novel CLIP-adapter model that integrates the CLIP paradigm with a self-attention mechanism to better utilize features from multi-phase imaging, thereby enhancing the accuracy and reliability of tumor-stroma ratio predictions. Additionally, clinical variables, a traditional radiomics model, and deep learning models (ResNet50, ResNet101, ViT_Base_32, ViT_Base_16) were constructed for comparison.
RESULTS: The models showed significant efficacy in predicting TSR in PDAC. The performance of the CLIP-adapter model based on multi-phase feature fusion was superior to that based on any single phase (arterial or venous phase). The CLIP-adapter model outperformed traditional radiomics models and deep learning models, with CLIP-adapter_ViT_Base_32 performing the best, achieving the highest AUC (0.978) and accuracy (0.921) in the test set. Kaplan-Meier survival analysis showed longer overall survival in patients with low TSR compared to those with high TSR.
CONCLUSION: The CLIP-adapter model designed in this study provides a safe and accurate method for predicting the TSR in PDAC. The feature fusion module based on multi-modal (image and text) and multi-phase (arterial and venous phase) information significantly improves model performance.
PMID:39412688 | DOI:10.1007/s11547-024-01902-y
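A minimal sketch of the multi-phase fusion idea, assuming precomputed image features per phase: arterial and venous features are treated as a two-token sequence, mixed with self-attention, and passed through a small residual adapter before a high/low-TSR head. Dimensions and the head are illustrative, not the authors' exact CLIP-adapter.

```python
import torch
import torch.nn as nn

class PhaseFusionAdapter(nn.Module):
    def __init__(self, dim=512, n_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
        self.adapter = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(),
                                     nn.Linear(dim // 4, dim))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, arterial_feat, venous_feat):  # each (B, dim), e.g. from a CLIP image encoder
        tokens = torch.stack([arterial_feat, venous_feat], dim=1)   # (B, 2, dim)
        fused, _ = self.attn(tokens, tokens, tokens)                # phases attend to each other
        pooled = fused.mean(dim=1)
        pooled = pooled + self.adapter(pooled)                      # residual adapter
        return self.head(pooled)                                    # high vs. low TSR logits

model = PhaseFusionAdapter()
logits = model(torch.randn(4, 512), torch.randn(4, 512))            # placeholder feature vectors
```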
Deep learning radiomic nomogram outperforms the clinical model in distinguishing intracranial solitary fibrous tumors from angiomatous meningiomas and can predict patient prognosis
Eur Radiol. 2024 Oct 16. doi: 10.1007/s00330-024-11082-y. Online ahead of print.
ABSTRACT
OBJECTIVES: To evaluate the value of a magnetic resonance imaging (MRI)-based deep learning radiomic nomogram (DLRN) for distinguishing intracranial solitary fibrous tumors (ISFTs) from angiomatous meningioma (AMs) and predicting overall survival (OS) for ISFT patients.
METHODS: In total, 1090 patients from Beijing Tiantan Hospital, Capital Medical University, and 131 from Lanzhou University Second Hospital were categorized as the primary cohort (PC) and the external validation cohort (EVC), respectively. An MRI-based DLRN was developed in the PC to distinguish ISFTs from AMs. We validated the DLRN and compared it with a clinical model (CM) in the EVC. In total, 149 ISFT patients were followed up. We carried out Cox regression analysis on the DLRN score, clinical characteristics, and histological stratification. In addition, we evaluated the association between independent risk factors and OS in the follow-up patients using Kaplan-Meier curves.
RESULTS: The DLRN outperformed the CM in distinguishing ISFTs from AMs (area under the curve [95% confidence interval (CI)]: 0.86 [0.84-0.88] for DLRN and 0.70 [0.67-0.72] for CM, p < 0.001) in the EVC. Patients with a high DLRN score [per 1 increase; hazard ratio (HR) 1.079, 95% CI: 1.009-1.147, p = 0.019] and subtotal resection (STR) [per 1 increase; HR 2.573, 95% CI: 1.337-4.932, p = 0.004] were associated with a shorter OS. A statistically significant difference in OS existed between the high and low DLRN score groups with a cutoff value of 12.19 (p < 0.001). A difference in OS also existed between the gross total resection (GTR) and STR groups (p < 0.001).
CONCLUSION: The proposed DLRN outperforms the CM in distinguishing ISFTs from AMs and can predict OS for ISFT patients.
CLINICAL RELEVANCE STATEMENT: The proposed MRI-based deep learning radiomic nomogram outperforms the clinical model in distinguishing ISFTs from AMs and can predict OS of ISFT patients, which could guide the surgical strategy and predict prognosis for patients.
KEY POINTS: Distinguishing ISFTs from AMs based on conventional radiological signs is challenging. The DLRN outperformed the CM in our study. The DLRN can predict OS for ISFT patients.
PMID:39412667 | DOI:10.1007/s00330-024-11082-y
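The survival analysis described above (Cox regression on the DLRN score and Kaplan-Meier curves split at a score cut-off) can be sketched with the lifelines package on a made-up data frame; the column names and values below are placeholders, and a small ridge penalty is added only because the toy sample is tiny.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.DataFrame({
    "os_months":  [12, 30, 45, 60, 22, 80, 15, 55],
    "event":      [1, 0, 1, 0, 1, 1, 0, 1],            # 1 = death observed (placeholder)
    "dlrn_score": [14.2, 9.1, 13.5, 13.9, 8.7, 12.3, 7.9, 11.8],
    "str":        [1, 0, 1, 1, 0, 0, 1, 0],            # subtotal resection indicator
})

cph = CoxPHFitter(penalizer=0.1)                       # small ridge penalty for the toy sample
cph.fit(df, duration_col="os_months", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])          # hazard ratios per covariate

km = KaplanMeierFitter()
high = df["dlrn_score"] > 12.19                         # cut-off reported in the abstract
km.fit(df.loc[high, "os_months"], df.loc[high, "event"], label="high DLRN")
```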
Assessing the deep learning-based image quality enhancements for the BGO-based GE Omni Legend PET/CT
EJNMMI Phys. 2024 Oct 16;11(1):86. doi: 10.1186/s40658-024-00688-2.
ABSTRACT
BACKGROUND: This study investigates the integration of Artificial Intelligence (AI) to compensate for the lack of time-of-flight (TOF) capability of the GE Omni Legend PET/CT, which utilizes BGO scintillation crystals.
METHODS: The current study evaluates the image quality of the GE Omni Legend PET/CT using a NEMA IQ phantom. It investigates the impact of various deep learning precision levels (low, medium, high) on imaging performance across different data acquisition durations. Quantitative analysis was performed using metrics such as the contrast recovery coefficient (CRC), background variability (BV), and contrast-to-noise ratio (CNR). Additionally, patient images reconstructed with various deep learning precision levels are presented to illustrate the impact on image quality.
RESULTS: The deep learning approach significantly reduced background variability, particularly for the smallest region of interest. We observed improvements in background variability of 11.8%, 17.2%, and 14.3% for low, medium, and high precision deep learning, respectively. The results also indicate a significant improvement in larger spheres when considering both background variability and contrast recovery coefficient. The high precision deep learning approach proved advantageous for short scans and exhibited potential in improving detectability of small lesions. The exemplary patient study shows that the noise was suppressed for all deep learning cases, but low precision deep learning also reduced the lesion contrast (about -30%), while high precision deep learning increased the contrast (about 10%).
CONCLUSION: This study conducted a thorough evaluation of deep learning algorithms in the GE Omni Legend PET/CT scanner, demonstrating that these methods enhance image quality, with notable improvements in CRC and CNR, thereby optimizing lesion detectability and offering opportunities to reduce image acquisition time.
PMID:39412633 | DOI:10.1186/s40658-024-00688-2
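For readers unfamiliar with the NEMA IQ metrics quoted above, the definitions are short enough to write out directly; the ROI values below are placeholders, not measurements from the study.

```python
import numpy as np

def contrast_recovery_coefficient(mean_sphere, mean_bkg, activity_ratio):
    """CRC for a hot sphere: (C_sphere/C_bkg - 1) / (true activity ratio - 1)."""
    return (mean_sphere / mean_bkg - 1.0) / (activity_ratio - 1.0)

def background_variability(bkg_roi_means):
    """BV: standard deviation of background ROI means over their average."""
    return np.std(bkg_roi_means, ddof=1) / np.mean(bkg_roi_means)

def contrast_to_noise_ratio(mean_sphere, mean_bkg, sd_bkg):
    """CNR: sphere-to-background contrast normalised by background noise."""
    return (mean_sphere - mean_bkg) / sd_bkg

bkg_rois = np.array([10.1, 9.8, 10.3, 9.9, 10.0])         # placeholder ROI means
print(contrast_recovery_coefficient(35.0, 10.0, 4.0))      # e.g. a 4:1 sphere-to-background ratio
print(background_variability(bkg_rois))
print(contrast_to_noise_ratio(35.0, 10.0, 1.2))
```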
Automated segment-level coronary artery calcium scoring on non-contrast CT: a multi-task deep-learning approach
Insights Imaging. 2024 Oct 16;15(1):250. doi: 10.1186/s13244-024-01827-0.
ABSTRACT
OBJECTIVES: To develop and evaluate a multi-task deep-learning (DL) model for automated segment-level coronary artery calcium (CAC) scoring on non-contrast computed tomography (CT) for precise localization and quantification of calcifications in the coronary artery tree.
METHODS: This study included 1514 patients (mean age, 60.0 ± 10.2 years; 56.0% female) with stable chest pain from 26 centers participating in the multicenter DISCHARGE trial (NCT02400229). The patients were randomly assigned to a training/validation set (1059) and a test set (455). We developed a multi-task neural network that performs segmentation of calcifications at the segment level as the main task and segmentation of coronary artery segment regions with weak annotations as an auxiliary task. Model performance was evaluated using (micro-average) sensitivity, specificity, F1-score, and weighted Cohen's κ for segment-level agreement based on the Agatston score, and by performing an interobserver variability analysis.
RESULTS: In the test set of 455 patients with 1797 calcifications, the model assigned 73.2% (1316/1797) to the correct coronary artery segment. The model achieved a micro-average sensitivity of 0.732 (95% CI: 0.710-0.754), a micro-average specificity of 0.978 (95% CI: 0.976-0.980), and a micro-average F1-score of 0.717 (95% CI: 0.695-0.739). The segment-level agreement was good, with a weighted Cohen's κ of 0.808 (95% CI: 0.790-0.824), only slightly lower than the agreement between the first and second observer (0.809; 95% CI: 0.798-0.845).
CONCLUSION: Automated segment-level CAC scoring using a multi-task neural network approach showed good agreement on the segment level, indicating that DL has the potential for automated coronary artery calcification classification.
CRITICAL RELEVANCE STATEMENT: Multi-task deep learning can perform automated coronary calcium scoring on the segment level with good agreement and may contribute to the development of new and improved calcium scoring methods.
KEY POINTS: Segment-level coronary artery calcium scoring is a tedious and error-prone task. The proposed multi-task model achieved good agreement with a human observer on the segment level. Deep learning can contribute to the automation of segment-level coronary artery calcium scoring.
PMID:39412613 | DOI:10.1186/s13244-024-01827-0
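A hedged sketch of the multi-task setup suggested by the abstract: one shared encoder, a main head for per-segment calcification labels, and an auxiliary head for coronary segment regions trained from weak annotations, combined with a weighted loss. The architecture, label count, and auxiliary weight are assumptions, not the DISCHARGE model's actual design.

```python
import torch
import torch.nn as nn

class MultiTaskCACNet(nn.Module):
    def __init__(self, n_segments=18):                 # assumed number of coronary segments
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
        self.calc_head = nn.Conv3d(32, n_segments + 1, 1)    # main task: per-segment calcification
        self.region_head = nn.Conv3d(32, n_segments + 1, 1)  # auxiliary: segment regions (weak labels)

    def forward(self, ct):
        feats = self.encoder(ct)
        return self.calc_head(feats), self.region_head(feats)

model = MultiTaskCACNet()
ct = torch.randn(1, 1, 16, 64, 64)                     # placeholder non-contrast CT patch
calc_logits, region_logits = model(ct)
calc_target = torch.randint(0, 19, (1, 16, 64, 64))
region_target = torch.randint(0, 19, (1, 16, 64, 64))
loss = (nn.functional.cross_entropy(calc_logits, calc_target)
        + 0.5 * nn.functional.cross_entropy(region_logits, region_target))  # auxiliary weight assumed
loss.backward()
```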
TRAITER: Transformer-guided diagnosis and prognosis of heart failure using cell nuclear morphology and DNA damage marker
Bioinformatics. 2024 Oct 16:btae610. doi: 10.1093/bioinformatics/btae610. Online ahead of print.
ABSTRACT
MOTIVATION: Heart failure (HF), a major cause of morbidity and mortality, necessitates precise diagnostic and prognostic methods.
RESULTS: This study presents a novel deep learning approach, Transformer-based Analysis of Images of Tissue for Effective Remedy (TRAITER), for HF diagnosis and prognosis. Employing image segmentation techniques and a Vision Transformer, TRAITER predicts HF likelihood from cardiac tissue cell nuclear morphology images and the potential for left ventricular reverse remodeling (LVRR) from dual-stained images with cell nuclei and DNA damage markers. In HF prediction using 31,158 images from 9 patients, TRAITER achieved 83.1% accuracy. For LVRR prediction with 231,840 images from 46 patients, TRAITER attained 84.2% accuracy for individual images and 92.9% for individual patients. TRAITER outperformed other neural network models in terms of receiver operating characteristic and precision-recall curves. Our method promises to advance decision-making in personalized HF medicine.
AVAILABILITY: The source code and data are available at the following link: https://github.com/HamanoLaboratory/predict-of-HF-and-LVRR.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:39412446 | DOI:10.1093/bioinformatics/btae610
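One straightforward way to set up a Vision-Transformer classifier of the kind TRAITER uses is to reuse a standard ViT backbone with a two-class head, as sketched below; the backbone choice, input size, and head are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)                          # pretrained weights could be used instead
model.heads.head = nn.Linear(model.heads.head.in_features, 2)   # e.g. HF vs. non-HF, or LVRR vs. non-LVRR

patches = torch.randn(4, 3, 224, 224)                   # placeholder nucleus image patches
logits = model(patches)
print(logits.shape)                                     # torch.Size([4, 2])
```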
Lifestyle factors in the biomedical literature: An ontology and comprehensive resources for named entity recognition
Bioinformatics. 2024 Oct 16:btae613. doi: 10.1093/bioinformatics/btae613. Online ahead of print.
ABSTRACT
MOTIVATION: Despite lifestyle factors (LSFs) being increasingly acknowledged as shaping individual health trajectories, particularly in chronic diseases, they have still not been systematically described in the biomedical literature. This is in part because no named entity recognition (NER) system exists that can comprehensively detect all types of LSFs in text. The task is challenging due to their inherent diversity, the lack of a comprehensive LSF classification for dictionary-based NER, and the lack of a corpus for deep learning-based NER.
RESULTS: We present a novel Lifestyle Factor Ontology (LSFO), which we used to develop a dictionary-based system for recognition and normalization of LSFs. Additionally, we introduce a manually annotated corpus for LSFs (LSF200) suitable for training and evaluation of NER systems, and use it to train a transformer-based system. Evaluating the performance of both NER systems on the corpus revealed an F-score of 64% for the dictionary-based system and 76% for the transformer-based system. Large-scale application of these systems on PubMed abstracts and PMC Open Access articles identified over 300 million LSF mentions in the biomedical literature.
AVAILABILITY: LSFO, the annotated LSF200 corpus, and the detected LSFs in PubMed and PMC-OA articles using both NER systems, are available under open licenses via the following GitHub repository: https://github.com/EsmaeilNourani/LSFO-expansion. This repository contains links to two associated GitHub repositories and a Zenodo project related to the study. LSFO is also available at BioPortal: https://bioportal.bioontology.org/ontologies/LSFO.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
PMID:39412443 | DOI:10.1093/bioinformatics/btae613
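A hedged sketch of a transformer-based NER setup like the one described: token classification over BIO tags for lifestyle-factor mentions. The checkpoint is a generic biomedical BERT chosen for illustration, not the authors' released model, and the example sentence is invented; the head is untrained here, so predictions are meaningless until fine-tuned on a corpus such as LSF200.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-LSF", "I-LSF"]                         # assumed BIO tag set
tok = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = AutoModelForTokenClassification.from_pretrained(
    "dmis-lab/biobert-base-cased-v1.1", num_labels=len(labels))

sentence = "Regular physical exercise and smoking cessation reduce cardiovascular risk."
enc = tok(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                          # (1, seq_len, 3)
pred = logits.argmax(-1)[0]
tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
for t, p in zip(tokens, pred):
    print(t, labels[int(p)])    # random until the classification head is fine-tuned
```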
MFCA-MICNN: a convolutional neural network with multiscale fast channel attention and multibranch irregular convolution for noise removal in dMRI
Phys Med Biol. 2024 Oct 16;69(21). doi: 10.1088/1361-6560/ad8294.
ABSTRACT
Diffusion magnetic resonance imaging (dMRI) currently stands as the foremost noninvasive method for quantifying brain tissue microstructure and reconstructing white matter fiber pathways. However, the inherent free diffusion motion of water molecules in dMRI results in signal decay, diminishing the signal-to-noise ratio (SNR) and adversely affecting the accuracy and precision of microstructural data. In response to this challenge, we propose a novel method known as the Multiscale Fast Attention-Multibranch Irregular Convolutional Neural Network for dMRI image denoising. In this work, we introduce Multiscale Fast Channel Attention, a novel approach for efficient multiscale feature extraction with attention weight computation across feature channels. This enhances the model's capability to capture complex features and improves overall performance. Furthermore, we propose a multi-branch irregular convolutional architecture that effectively disrupts spatial noise correlation and captures noise features, thereby further enhancing the denoising performance of the model. Lastly, we design a novel loss function, which ensures excellent performance in both edge and flat regions. Experimental results demonstrate that the proposed method outperforms other state-of-the-art deep learning denoising methods in both quantitative and qualitative aspects for dMRI image denoising with fewer parameters and faster operational speed.
PMID:39412243 | DOI:10.1088/1361-6560/ad8294
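The abstract computes attention weights across feature channels efficiently; in that spirit, the sketch below shows an ECA-style channel-attention block (global pooling followed by a small 1-D convolution over channels). It is an illustration of the general technique, not the paper's exact multiscale module.

```python
import torch
import torch.nn as nn

class FastChannelAttention(nn.Module):
    """ECA-style channel attention sketch (illustrative, not the MFCA module)."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                               # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                          # global average pool -> (B, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)        # local cross-channel interaction
        return x * torch.sigmoid(w)[:, :, None, None]   # reweight channels

x = torch.randn(2, 64, 32, 32)
print(FastChannelAttention()(x).shape)                  # torch.Size([2, 64, 32, 32])
```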
Diagnosis of fetal arrhythmia in echocardiography imaging using deep learning with cyclic loss
Digit Health. 2024 Oct 14;10:20552076241286929. doi: 10.1177/20552076241286929. eCollection 2024 Jan-Dec.
ABSTRACT
BACKGROUND: Fetal arrhythmias frequently co-occur with congenital heart disease in fetuses. The peaks observed in M-mode fetal echocardiograms serve as pivotal diagnostic markers for fetal arrhythmias. However, speckles, artifacts, and noise pose notable challenges for accurate image analysis. While current deep learning networks mainly overlook cardiac cyclic information, this study concentrated on integrating such information, leveraging contextual constraints derived from the cardiac cycle to improve diagnostic accuracy.
METHODS: This study proposed a novel deep learning architecture for diagnosing fetal arrhythmias. The architecture presented a loss function tailored to the cardiac cyclical information and formulated a diagnostic algorithm for classifying fetal arrhythmias. The training and validation processes utilized a dataset comprising 4440 patches gathered from 890 participants.
RESULTS: Incorporating cyclic loss significantly enhanced the performance of deep learning networks in predicting peak points for diagnosing fetal arrhythmia, resulting in improvements ranging from 7.11% to 14.81% in F1-score across different network combinations. Particularly noteworthy was the 18.2% improvement in the F1-score for the low-quality group. Additionally, the precision of diagnosing fetal arrhythmia across four categories exhibited improvement, with an average improvement rate of 20.6%.
CONCLUSION: This study introduced a cyclic loss mechanism based on the cardiac cycle information. Comparative evaluations were conducted using baseline methods and state-of-the-art deep learning architectures with the fetal echocardiogram dataset. These evaluations demonstrated the proposed framework's superior accuracy in diagnosing fetal arrhythmias. It is also crucial to note that further external testing is essential to assess the model's generalizability and clinical value.
PMID:39411546 | PMC:PMC11475117 | DOI:10.1177/20552076241286929
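The cyclic loss itself is not specified in the abstract; the sketch below shows one plausible way such a constraint could be encoded, adding a term that penalises irregular intervals between consecutive predicted peaks on top of an ordinary peak-regression loss. It is purely illustrative.

```python
import torch

def cyclic_peak_loss(pred_peaks, true_peaks, lam=0.1):
    """pred_peaks, true_peaks: (B, K) peak positions in temporal order (toy formulation)."""
    reg = torch.nn.functional.l1_loss(pred_peaks, true_peaks)
    intervals = pred_peaks[:, 1:] - pred_peaks[:, :-1]       # predicted cycle lengths
    cyc = intervals.var(dim=1, unbiased=False).mean()         # penalise irregular cycles
    return reg + lam * cyc

pred = torch.tensor([[10.0, 25.0, 41.0, 55.0]], requires_grad=True)
true = torch.tensor([[10.0, 25.0, 40.0, 55.0]])
loss = cyclic_peak_loss(pred, true)
loss.backward()
```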
Caution: ChatGPT Doesn't Know What You Are Asking and Doesn't Know What It Is Saying
J Pediatr Pharmacol Ther. 2024 Oct;29(5):558-560. doi: 10.5863/1551-6776-29.5.558. Epub 2024 Oct 14.
NO ABSTRACT
PMID:39411419 | PMC:PMC11472406 | DOI:10.5863/1551-6776-29.5.558
Semantic segmentation-based detection algorithm for challenging cryo-electron microscopy RNP samples
Front Mol Biosci. 2024 Oct 1;11:1473609. doi: 10.3389/fmolb.2024.1473609. eCollection 2024.
ABSTRACT
In this study, we present a novel and robust methodology for the automatic detection of influenza A virus ribonucleoproteins (RNPs) in single-particle cryo-electron microscopy (cryo-EM) images. Utilizing a U-net architecture, a type of convolutional neural network renowned for its efficiency in biomedical image segmentation, our approach is based on a pretraining phase with a dataset annotated through visual inspection. This dataset facilitates the precise identification of filamentous RNPs, including the localization of the filaments and their terminal coordinates. A key feature of our method is the application of semantic segmentation techniques, enabling the automated categorization of micrograph pixels into distinct classes of particle and background. This deep learning strategy allows robust detection of these intricate particles, a crucial step in achieving high-resolution reconstructions in cryo-EM studies. To encourage collaborative advancements in the field, we have made our routines, the pretrained U-net model, and the training dataset publicly accessible. The reproducibility and accessibility of these resources aim to facilitate further research and validation in the realm of cryo-EM image analysis.
PMID:39411403 | PMC:PMC11473350 | DOI:10.3389/fmolb.2024.1473609
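A compact U-Net of the kind described, classifying micrograph pixels into particle versus background, can be sketched as follows; depth and channel counts are illustrative assumptions rather than the released model's configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.out = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):                       # x: (B, 1, H, W) micrograph
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.out(d)                      # per-pixel class logits

mask_logits = TinyUNet()(torch.randn(1, 1, 128, 128))       # placeholder micrograph
print(mask_logits.shape)                        # torch.Size([1, 2, 128, 128])
```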
Editorial: Deep learning methods and applications in brain imaging for the diagnosis of neurological and psychiatric disorders
Front Neurosci. 2024 Oct 1;18:1497417. doi: 10.3389/fnins.2024.1497417. eCollection 2024.
NO ABSTRACT
PMID:39411146 | PMC:PMC11473404 | DOI:10.3389/fnins.2024.1497417
DeepVol: volatility forecasting from high-frequency data with dilated causal convolutions
Quant Finance. 2024 Sep 5;24(8):1105-1127. doi: 10.1080/14697688.2024.2387222. eCollection 2024.
ABSTRACT
Volatility forecasts play a central role among equity risk measures. Besides traditional statistical models, modern forecasting techniques based on machine learning can be employed when treating volatility as a univariate, daily time-series. Moreover, econometric studies have shown that increasing the number of daily observations with high-frequency intraday data helps to improve volatility predictions. In this work, we propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility. Our empirical findings demonstrate that dilated convolutional filters are highly effective at extracting relevant information from intraday financial time-series, proving that this architecture can effectively leverage predictive information present in high-frequency data that would otherwise be lost if realised measures were precomputed. Simultaneously, dilated convolutional filters trained with intraday high-frequency data help us avoid the limitations of models that use daily data, such as model misspecification or manually designed handcrafted features, whose design involves optimising the trade-off between accuracy and computational efficiency and makes models prone to a lack of adaptation to changing circumstances. In our analysis, we use two years of intraday data from the NASDAQ-100 to evaluate the performance of DeepVol. Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data, resulting in more accurate predictions compared to traditional methodologies and producing more accurate risk measures.
PMID:39410924 | PMC:PMC11473055 | DOI:10.1080/14697688.2024.2387222
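A minimal sketch of a dilated causal convolution stack of the kind DeepVol builds on: left-padded 1-D convolutions with exponentially growing dilation, so each output depends only on past intraday returns. Channel counts, depth, and the single-value volatility head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation           # pad on the left only
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation)

    def forward(self, x):                                 # x: (B, C, T)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class DilatedCausalNet(nn.Module):
    def __init__(self, channels=16, n_layers=6):
        super().__init__()
        layers, c_in = [], 1
        for i in range(n_layers):
            layers += [CausalConv1d(c_in, channels, dilation=2 ** i), nn.ReLU()]
            c_in = channels
        self.net = nn.Sequential(*layers)
        self.head = nn.Linear(channels, 1)

    def forward(self, returns):                           # (B, 1, T) intraday returns
        h = self.net(returns)
        return self.head(h[:, :, -1])                     # day-ahead volatility estimate

vol = DilatedCausalNet()(torch.randn(8, 1, 390))          # e.g. 390 one-minute returns (placeholder)
print(vol.shape)                                          # torch.Size([8, 1])
```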
Comparative analysis of retinal microvascular parameters in healthy individuals with or without carotid artery stenosis or plaque
Eur J Ophthalmol. 2024 Oct 15:11206721241291224. doi: 10.1177/11206721241291224. Online ahead of print.
ABSTRACT
PURPOSE: To evaluate the correlations between retinal microvascular changes and carotid artery stenosis (CAS) with and without plaques using fundus photography.
METHODS: In this retrospective, observational study, patients who had undergone bilateral carotid ultrasonography and bilateral fundus photography were divided, based on the carotid intima-media thickness (IMT) determined via ultrasonography, into a control group and a CAS group (comprising subgroups with and without plaque). The following retinal indicators were determined via fundus photography based on a deep learning algorithm: the arteriole-to-venule ratio (AVR), whole retinal fractal dimension (FD), arteriolar fractal dimension (AFD), venular fractal dimension (VFD), vascular density (VD), and VD within 3 mm (VD3mm) and 5 mm (VD5mm) from the macular fovea. The correlations between these indicators and IMT were also assessed.
RESULTS: In total, 715 participants were included: 313 with CAS (CAS group; 91 with plaque and 222 without plaque) and 402 without CAS (control group). AFD, VFD, and FD in the CAS group were significantly lower than those in the control group (all p < 0.001). VD, VD3mm, and VD5mm showed significant differences between the groups (all p < 0.05). VFD in the CAS with plaque subgroup was lower than that in the subgroup without plaque (p = 0.014). VD3mm and VD5mm showed significant negative correlations with IMTmin in the CAS subgroup.
CONCLUSIONS: AFD, VFD, FD, VD, VD3mm, and VD5mm were decreased in participants with CAS, and fundus photography based on a deep learning algorithm may provide a new approach for CAS screening.
PMID:39410788 | DOI:10.1177/11206721241291224
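Fractal dimension of a binary vessel map, which underlies the FD/AFD/VFD parameters above, is commonly estimated by box counting; a minimal version of that calculation is sketched below on a synthetic stand-in for a vessel segmentation.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """mask: 2-D boolean vessel map; returns the slope of log N(s) vs. log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing any vessel pixel
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
vessels = rng.random((256, 256)) > 0.92                 # placeholder for a real vessel segmentation
print(round(box_counting_dimension(vessels), 3))
```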
A Highly-Sensitive Omnidirectional Acoustic Sensor for Enhanced Human-Machine Interaction
Adv Mater. 2024 Oct 15:e2413086. doi: 10.1002/adma.202413086. Online ahead of print.
ABSTRACT
Acoustic sensor-based human-machine interaction (HMI) plays a crucial role in natural and efficient communication with intelligent robots. However, accurately identifying and tracking omnidirectional sound sources, especially in noisy environments, still remains a notable challenge. Here, a self-powered triboelectric stereo acoustic sensor (SAS) with omnidirectional sound recognition and tracking capabilities enabled by a 3D structure configuration is presented. The SAS incorporates a porous vibrating film with high electron affinity and low Young's modulus, resulting in high sensitivity (3172.9 mVpp Pa⁻¹) and a wide frequency response range (100-20,000 Hz). By utilizing its omnidirectional sound recognition capability and adjustable resonant frequency feature, the SAS can precisely identify the desired audio signal with an average deep learning accuracy of 98%, even in noisy environments. Moreover, the SAS can simultaneously recognize multiple individuals in an auxiliary conference system and driving commands under background music in self-driving vehicles, which marks a notable advance in voice-based HMI systems.
PMID:39410724 | DOI:10.1002/adma.202413086
Equilibrium Optimization-Based Ensemble CNN Framework for Breast Cancer Multiclass Classification Using Histopathological Image
Diagnostics (Basel). 2024 Oct 9;14(19):2253. doi: 10.3390/diagnostics14192253.
ABSTRACT
Background: Breast cancer is one of the most lethal cancers among women. Early detection and proper treatment reduce mortality rates. Histopathological images provide detailed information for diagnosing and staging breast cancer. Methods: The BreakHis dataset, which includes histopathological images, is used in this study. Medical images are prone to problems such as varied textural backgrounds, overlapping cell structures, unbalanced class distribution, and insufficiently labeled data. In addition, the limitations of deep learning models with respect to overfitting and insufficient feature extraction make it extremely difficult to obtain a high-performance model on this dataset. In this study, 20 state-of-the-art models are trained to diagnose eight types of breast cancer using the fine-tuning method. In addition, a comprehensive experimental study covering 20 different custom models was conducted to determine the most successful new model. As a result, we propose a novel model called MultiHisNet. Results: The most effective new model, which includes a pointwise convolution layer, residual links, and channel and spatial attention modules, achieved 94.69% accuracy in multi-class breast cancer classification. An ensemble model was created from the best-performing transfer learning and custom models obtained in the study, and the model weights were determined with an Equilibrium Optimizer. The proposed ensemble model achieved 96.71% accuracy in eight-class breast cancer detection. Conclusions: The results show that the proposed model will support pathologists in successfully diagnosing breast cancer.
PMID:39410657 | DOI:10.3390/diagnostics14192253
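The ensemble step combines per-model class probabilities with learned weights (found in the paper via an Equilibrium Optimizer). The sketch below shows only the weighted-fusion part with fixed placeholder weights; the metaheuristic search itself is omitted.

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """prob_list: list of (N, C) per-model class probabilities; weights are normalised to sum to 1."""
    weights = np.asarray(weights) / np.sum(weights)
    stacked = np.stack(prob_list, axis=0)                 # (M, N, C)
    fused = np.tensordot(weights, stacked, axes=1)        # weighted average -> (N, C)
    return fused.argmax(axis=1), fused

rng = np.random.default_rng(1)
probs_a = rng.dirichlet(np.ones(8), size=5)               # 8 breast-cancer subtypes, 5 samples (toy)
probs_b = rng.dirichlet(np.ones(8), size=5)
pred, fused = weighted_ensemble([probs_a, probs_b], weights=[0.6, 0.4])   # placeholder weights
```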