Deep learning

Reconfigurable Origami/Kirigami Metamaterial Absorbers Developed by Fast Inverse Design and Low-Concentration MXene Inks

Tue, 2024-07-30 06:00

ACS Appl Mater Interfaces. 2024 Jul 30. doi: 10.1021/acsami.4c07084. Online ahead of print.

ABSTRACT

Reconfigurable metamaterial absorbers (MAs), consisting of tunable elements or deformable structures, are able to transform their absorbing bandwidth and amplitude in response to environmental changes. Among the options for building reconfigurable MAs, origami/kirigami structures show great potential because of their ability to combine excellent mechanical and electromagnetic (EM) properties. However, neither the trial-and-error-based design method nor the complex fabrication process can meet the requirement of developing high-performance MAs. Accordingly, this work introduces a deep-learning-based algorithm to realize the fast inverse design of origami MAs. Then, an accordion-origami coding MA is generated with reconfigurable EM responses that can be smoothly transformed between ultrabroadband absorption (5.5-20 GHz, folding angle α = 82°) and high reflection (2-20 GHz, RL > -1.5 dB, α = 0°) under y-polarized waves. However, the asymmetric coding pattern and accordion-origami deformation lead to typical polarization-sensitive absorbing performance (2-20 GHz, RL > -4 dB, α < 90°) under x-polarized waves. For the first time, a kirigami polarization rotation surface with switchable operation band is adapted to balance the absorbing performance of accordion-origami MA under orthogonal polarized waves. As a result, the stacked origami-kirigami MA maintains polarization-insensitive ultrabroadband absorption (4.4-20 GHz) at β = 0° and could be transformed into a narrowband absorber through deformation. Besides, the adapted origami/kirigami structures possess excellent mechanical properties such as low relative density, negative Poisson's ratio, and tunable specific energy absorption. Moreover, by modulating the PEDOT:PSS conductive bridges among MXene nanosheets, a series of low-concentration MXene-PEDOT:PSS inks (∼46 mg·mL-1) with adjustable square resistance (5-32.5 Ω/sq) are developed to fabricate the metamaterials via screen printing. Owing to the universal design scheme, this work supplies a promising paradigm for developing low-cost and high-performance reconfigurable EM absorbers.

PMID:39078617 | DOI:10.1021/acsami.4c07084

Categories: Literature Watch

Conserved cysteine residues in Kaposi's sarcoma herpesvirus ORF34 are necessary for viral production and viral pre-initiation complex formation

Tue, 2024-07-30 06:00

J Virol. 2024 Jul 30:e0100024. doi: 10.1128/jvi.01000-24. Online ahead of print.

ABSTRACT

Kaposi's sarcoma herpesvirus (KSHV) ORF34 plays a significant role as a component of the viral pre-initiation complex (vPIC), which is indispensable for late gene expression across beta- and gammaherpesviruses. Although the key role of ORF34 within the vPIC and its function as a hub protein have been recognized, further clarification regarding its specific contribution to vPIC functionality and interactions with other components is required. This study employed a deep learning algorithm-assisted structural model of ORF34, revealing highly conserved amino acid residues across human beta- and gammaherpesviruses localized in structured domains. Thus, we engineered ORF34 alanine-scanning mutants by substituting conserved residues with alanine. These mutants were evaluated for their ability to interact with other vPIC factors and restore viral production in cells harboring the ORF34-deficient KSHV-BAC. Our experimental results highlight the crucial role of the four cysteine residues conserved in ORF34: a tetrahedral arrangement consisting of a pair of C-Xn-C consensus motifs. This suggests the potential incorporation of metal cations in interacting with ORF24 and ORF66 vPIC components, facilitating late gene transcription, and promoting overall virus production by capturing metal cations. In summary, our findings underline the essential role of conserved cysteines in KSHV ORF34 for effective vPIC assembly and viral replication, thereby enhancing our understanding of the complex interplay between the vPIC components.

IMPORTANCE: The initiation of late gene transcription is universally conserved across the beta- and gammaherpesvirus families. This process employs a viral pre-initiation complex (vPIC), which is analogous to a cellular PIC. Although KSHV ORF34 is a critical factor for viral replication and is a component of the vPIC, the specifics of vPIC formation and the essential domains crucial for its function remain unclear. Structural predictions suggest that the four conserved cysteines (C170, C175, C256, and C259) form a tetrahedron that coordinates the metal cation. We investigated the role of these conserved amino acids in interactions with other vPIC components, late gene expression, and virus production to demonstrate for the first time that these cysteines are pivotal for such functions. This discovery not only deepens our comprehensive understanding of ORF34 and vPIC dynamics but also lays the groundwork for more detailed studies on herpesvirus replication mechanisms in future research.

PMID:39078391 | DOI:10.1128/jvi.01000-24

Categories: Literature Watch

A medical image classification method based on self-regularized adversarial learning

Tue, 2024-07-30 06:00

Med Phys. 2024 Jul 30. doi: 10.1002/mp.17320. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning (DL) techniques have been extensively applied in medical image classification. The unique characteristics of medical imaging data present challenges, including small labeled datasets, severely imbalanced class distribution, and significant variations in imaging quality. Recently, generative adversarial network (GAN)-based classification methods have gained attention for their ability to enhance classification accuracy by incorporating realistic GAN-generated images as data augmentation. However, the performance of these GAN-based methods often relies on high-quality generated images, while large amounts of training data are required to train GAN models to achieve optimal performance.

PURPOSE: In this study, we propose an adversarial learning-based classification framework to achieve better classification performance. Innovatively, GAN models are employed as supplementary regularization terms to support classification, aiming to address the challenges described above.

METHODS: The proposed classification framework, GAN-DL, consists of a feature extraction network (F-Net), a classifier, and two adversarial networks, specifically a reconstruction network (R-Net) and a discriminator network (D-Net). The F-Net extracts features from input images, and the classifier uses these features for classification tasks. R-Net and D-Net are designed following the GAN architecture: R-Net employs the extracted features to reconstruct the original images, while D-Net is tasked with discriminating between the reconstructed and original images. An iterative adversarial learning strategy is designed to guide model training by incorporating multiple network-specific loss functions. These loss functions, serving as supplementary regularization, are automatically derived during the reconstruction process and require no additional data annotation.
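
To make the adversarial-regularization idea concrete, here is a minimal sketch, assuming small CNNs and an L1 reconstruction loss, of how F-Net, a classifier, R-Net, and D-Net could be trained jointly so that the reconstruction and adversarial losses regularize the classifier. Layer sizes, loss weights, the class count, and the single-channel input are illustrative assumptions, not the published architecture.

```python
# Hedged sketch of GAN-DL-style adversarial regularization (not the authors' code).
import torch
import torch.nn as nn

num_classes = 3  # placeholder class count

f_net = nn.Sequential(                                    # F-Net: feature extractor
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(64, num_classes))
r_net = nn.Sequential(                                    # R-Net: reconstructs the image
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)
d_net = nn.Sequential(                                    # D-Net: original vs. reconstruction
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

bce, ce, l1 = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.L1Loss()
opt_main = torch.optim.Adam(
    list(f_net.parameters()) + list(classifier.parameters()) + list(r_net.parameters()),
    lr=1e-4)
opt_d = torch.optim.Adam(d_net.parameters(), lr=1e-4)

def train_step(x, y, lam_rec=1.0, lam_adv=0.1):
    feats = f_net(x)
    recon = r_net(feats)
    real, fake = torch.ones(x.size(0), 1), torch.zeros(x.size(0), 1)
    # 1) discriminator step: tell original images from reconstructions
    opt_d.zero_grad()
    d_loss = bce(d_net(x), real) + bce(d_net(recon.detach()), fake)
    d_loss.backward()
    opt_d.step()
    # 2) main step: classification loss + reconstruction/adversarial regularizers
    opt_main.zero_grad()
    loss = ce(classifier(feats), y) + lam_rec * l1(recon, x) + lam_adv * bce(d_net(recon), real)
    loss.backward()
    opt_main.step()
    return loss.item()

# one illustrative step on random 64x64 "images" scaled to [0, 1]
train_step(torch.rand(8, 1, 64, 64), torch.randint(0, num_classes, (8,)))
```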

RESULTS: To verify the model's effectiveness, we performed experiments on two datasets: a COVID-19 dataset with 13,958 chest x-ray images and an oropharyngeal squamous cell carcinoma (OPSCC) dataset with 3255 positron emission tomography images. Thirteen classic DL-based classification methods were implemented on the same datasets for comparison. Performance metrics included precision, sensitivity, specificity, and F1-score. In addition, we conducted ablation studies to assess the effects of various factors on model performance, including the network depth of F-Net, training image size, training dataset size, and loss function design. Our method outperformed all comparative methods. On the COVID-19 dataset, it achieved 95.4% ± 0.6%, 95.3% ± 0.9%, 97.7% ± 0.4%, and 95.3% ± 0.9% in terms of precision, sensitivity, specificity, and F1-score, respectively, and it achieved 96.2% ± 0.7% across all these metrics on the OPSCC dataset. The study investigating the effects of the two adversarial networks highlights the crucial role of D-Net in improving model performance. Ablation studies further provide an in-depth understanding of our methodology.

CONCLUSION: Our adversarial-based classification framework leverages GAN-based adversarial networks and an iterative adversarial learning strategy to harness supplementary regularization during training. This design significantly enhances classification accuracy and mitigates overfitting issues in medical image datasets. Moreover, its modular design not only demonstrates flexibility but also indicates its potential applicability to various clinical contexts and medical imaging applications.

PMID:39078069 | DOI:10.1002/mp.17320

Categories: Literature Watch

Development of a deep-learning phenotyping tool for analyzing image-based strawberry phenotypes

Tue, 2024-07-30 06:00

Front Plant Sci. 2024 Jul 12;15:1418383. doi: 10.3389/fpls.2024.1418383. eCollection 2024.

ABSTRACT

INTRODUCTION: In strawberry farming, the measurement of phenotypic traits (such as crown diameter, petiole length, plant height, and flower, leaf, and fruit size) is essential as it serves as a decision-making tool for plant monitoring and management. To date, strawberry plant phenotyping has relied on traditional approaches. In this study, an image-based Strawberry Phenotyping Tool (SPT) was developed using two deep-learning (DL) architectures, namely "YOLOv4" and "U-net", integrated into a single system. We aimed to create the most suitable DL-based tool with enhanced robustness to facilitate digital strawberry plant phenotyping, either directly in the natural scene or indirectly using captured and stored images.

METHODS: Our SPT was developed primarily through two steps (subsequently called versions) using image data with different backgrounds captured with simple smartphone cameras. The two versions (V1 and V2) were developed using the same DL networks but differed by the amount of image data and annotation method used during their development. For V1, 7,116 images were annotated using the single-target non-labeling method, whereas for V2, 7,850 images were annotated using the multitarget labeling method.

RESULTS: The results of the held-out dataset revealed that the developed SPT facilitates strawberry phenotype measurements. By increasing the dataset size combined with multitarget labeling annotation, the detection accuracy of our system changed from 60.24% in V1 to 82.28% in V2. During the validation process, the system was evaluated using 70 images per phenotype and their corresponding actual values. The correlation coefficients and detection frequencies were higher for V2 than for V1, confirming the superiority of V2. Furthermore, an image-based regression model was developed to predict the fresh weight of strawberries based on the fruit size (R2 = 0.92).
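
As a toy illustration of the final regression step, the snippet below fits a simple linear model from fruit size to fresh weight on synthetic data; the paper's image-derived measurements, model form, and units are not specified here, so everything in the example is a placeholder.

```python
# Toy sketch of a size-to-weight regression (synthetic data, not the paper's measurements).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
fruit_size = rng.uniform(15, 40, size=(50, 1))                          # e.g., fruit diameter in mm
fresh_weight = 0.9 * fruit_size.ravel() - 6 + rng.normal(0, 1.5, 50)    # synthetic ground truth in g

model = LinearRegression().fit(fruit_size, fresh_weight)
print("R^2:", r2_score(fresh_weight, model.predict(fruit_size)))
```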

DISCUSSION: The results demonstrate the efficiency of our system in recognizing the aforementioned six strawberry phenotypic traits regardless of the complexity of the strawberry plant's environment. This tool could help farmers and researchers make accurate and efficient decisions related to strawberry plant management, potentially increasing productivity and yield potential.

PMID:39077512 | PMC:PMC11284602 | DOI:10.3389/fpls.2024.1418383

Categories: Literature Watch

Advancing materials science through next-generation machine learning

Tue, 2024-07-30 06:00

Curr Opin Solid State Mater Sci. 2024 Jun;30:101157. doi: 10.1016/j.cossms.2024.101157. Epub 2024 Apr 3.

ABSTRACT

For over a decade, machine learning (ML) models have been making strides in computer vision and natural language processing (NLP), demonstrating high proficiency in specialized tasks. The emergence of large-scale language and generative image models, such as ChatGPT and Stable Diffusion, has significantly broadened the accessibility and application scope of these technologies. Traditional predictive models are typically constrained to mapping input data to numerical values or predefined categories, limiting their usefulness beyond their designated tasks. In contrast, contemporary models employ representation learning and generative modeling, enabling them to extract and encode key insights from a wide variety of data sources and decode them to create novel responses for desired goals. They can interpret queries phrased in natural language to deduce the intended output. In parallel, the application of ML techniques in materials science has advanced considerably, particularly in areas like inverse design, material prediction, and atomic modeling. Despite these advancements, the current models are overly specialized, hindering their potential to supplant established industrial processes. Materials science, therefore, necessitates the creation of a comprehensive, versatile model capable of interpreting human-readable inputs, intuiting a wide range of possible search directions, and delivering precise solutions. To realize such a model, the field must adopt cutting-edge representation, generative, and foundation model techniques tailored to materials science. A pivotal component in this endeavor is the establishment of an extensive, centralized dataset encompassing a broad spectrum of research topics. This dataset could be assembled by crowdsourcing global research contributions and developing models to extract data from existing literature and represent them in a homogenous format. A massive dataset can be used to train a central model that learns the underlying physics of the target areas, which can then be connected to a variety of specialized downstream tasks. Ultimately, the envisioned model would empower users to intuitively pose queries for a wide array of desired outcomes. It would facilitate the search for existing data that closely matches the sought-after solutions and leverage its understanding of physics and material-behavior relationships to innovate new solutions when pre-existing ones fall short.

PMID:39077430 | PMC:PMC11285097 | DOI:10.1016/j.cossms.2024.101157

Categories: Literature Watch

DFA-UNet: dual-stream feature-fusion attention U-Net for lymph node segmentation in lung cancer diagnosis

Tue, 2024-07-30 06:00

Front Neurosci. 2024 Jul 15;18:1448294. doi: 10.3389/fnins.2024.1448294. eCollection 2024.

ABSTRACT

In bronchial ultrasound elastography, accurately segmenting mediastinal lymph nodes is of great significance for diagnosing whether lung cancer has metastasized. However, owing to the ill-defined margins of ultrasound images and the complexity of lymph node structure, accurate segmentation of fine contours remains challenging. Therefore, we propose a dual-stream feature-fusion attention U-Net (DFA-UNet). First, a dual-stream encoder (DSE) is designed by combining ConvNeXt with a lightweight vision transformer (ViT) to extract local and global information from images. Second, we propose a hybrid attention module (HAM) at the bottleneck, which incorporates spatial and channel attention to refine the high-dimensional features at the bottom of the network and optimize feature transmission. Finally, a feature-enhanced residual decoder (FRD) is developed to improve the fusion of features obtained from the encoder and decoder, ensuring a more comprehensive integration. Extensive experiments on the ultrasound elasticity image dataset show the superiority of our DFA-UNet over nine state-of-the-art image segmentation models. Additionally, visual analysis, ablation studies, and generalization assessments highlight the significant enhancements offered by DFA-UNet. Comprehensive experiments confirm the excellent segmentation effectiveness of DFA-UNet's combined attention mechanism for ultrasound images, underscoring its significance for future research on medical images.
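
The hybrid attention module is described only at a high level in the abstract; the sketch below shows one common way to combine channel and spatial attention (CBAM-style) at an encoder-decoder bottleneck, purely as an illustration. The module name, reduction ratio, and kernel size are assumptions, not the published design.

```python
# Illustrative channel + spatial attention block for a U-Net bottleneck (assumed design).
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # channel attention: squeeze spatial dims, excite channels
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        # spatial attention: 7x7 conv over pooled channel statistics
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, h, w = x.shape
        # channel attention from global average and max pooling
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

# usage at the bottleneck of an encoder-decoder network
feat = torch.randn(2, 256, 16, 16)
print(HybridAttention(256)(feat).shape)   # torch.Size([2, 256, 16, 16])
```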

PMID:39077427 | PMC:PMC11284146 | DOI:10.3389/fnins.2024.1448294

Categories: Literature Watch

A simple and effective deep neural network based QRS complex detection method on ECG signal

Tue, 2024-07-30 06:00

Front Physiol. 2024 Jul 15;15:1384356. doi: 10.3389/fphys.2024.1384356. eCollection 2024.

ABSTRACT

Introduction: The QRS complex is the most prominent waveform within the electrocardiogram (ECG) signal. Accurate detection of the QRS complex is an essential step in ECG analysis algorithms, as it provides fundamental information for the monitoring and diagnosis of cardiovascular diseases. Methods: Seven public ECG datasets were used in the experiments. A simple and effective QRS complex detection algorithm based on a deep neural network (DNN) was proposed. The DNN model was composed of two parts: a feature pyramid network (FPN)-based backbone with dual input channels to generate the feature maps, and a location head to predict the probability of each point belonging to the QRS complex. Depthwise convolution was applied to reduce the number of parameters of the DNN model. Furthermore, a novel training strategy was developed: the target of the DNN model was generated using the points within 75 milliseconds of and beyond 150 milliseconds from the closest annotated QRS complexes, and artificially simulated ECG segments with high heart rates were generated for data augmentation. The number of parameters and floating-point operations (FLOPs) of our model were 26,976 and 9.90 M, respectively. Results: The proposed method was evaluated through a cross-dataset test and compared with sophisticated state-of-the-art methods. On the MIT-BIH NST, the proposed method demonstrated slightly better sensitivity (95.59% vs. 95.55%) and lower precision (91.03% vs. 92.93%). On the CPSC 2019, the proposed method had similar sensitivity (95.15% vs. 95.13%) and better precision (91.75% vs. 82.03%). Discussion: Experimental results show that the proposed algorithm achieved comparable performance with only a few parameters and FLOPs, which would be useful for ECG analysis on wearable devices.
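
The 75 ms / 150 ms labeling rule can be made concrete as follows. This is one plausible reading of the strategy (points near an annotated QRS are positives, points far from any QRS are negatives, and the intermediate band is excluded from the loss), not the authors' released code; the sampling rate and helper names are illustrative.

```python
# Sketch of per-sample QRS target generation under the assumed 75/150 ms rule.
import numpy as np

def make_qrs_targets(n_samples, qrs_indices, fs=360):
    """Return per-sample labels (1/0) and a mask of samples used in the loss."""
    near = int(0.075 * fs)    # 75 ms in samples
    far = int(0.150 * fs)     # 150 ms in samples
    idx = np.arange(n_samples)
    # distance from every sample to the closest annotated QRS complex
    dist = np.min(np.abs(idx[:, None] - np.asarray(qrs_indices)[None, :]), axis=1)
    labels = (dist <= near).astype(np.float32)
    mask = (dist <= near) | (dist > far)      # ignore the 75-150 ms band
    return labels, mask

labels, mask = make_qrs_targets(n_samples=3600, qrs_indices=[500, 1400, 2300], fs=360)
```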

PMID:39077760 | PMC:PMC11284145 | DOI:10.3389/fphys.2024.1384356

Categories: Literature Watch

A novel approach to brain tumor detection using K-Means++, SGLDM, ResNet50, and synthetic data augmentation

Tue, 2024-07-30 06:00

Front Physiol. 2024 Jul 15;15:1342572. doi: 10.3389/fphys.2024.1342572. eCollection 2024.

ABSTRACT

Introduction: Brain tumors are abnormal cell growths in the brain, posing significant treatment challenges. Accurate early detection using non-invasive methods is crucial for effective treatment. This research focuses on improving the early detection of brain tumors in MRI images through advanced deep-learning techniques. The primary goal is to identify the most effective deep-learning model for classifying brain tumors from MRI data, enhancing diagnostic accuracy and reliability. Methods: The proposed method for brain tumor classification integrates segmentation using K-means++, feature extraction from the Spatial Gray Level Dependence Matrix (SGLDM), and classification with ResNet50, along with synthetic data augmentation to enhance model robustness. Segmentation isolates tumor regions, while SGLDM captures critical texture information. The ResNet50 model then classifies the tumors accurately. To further improve the interpretability of the classification results, Grad-CAM is employed, providing visual explanations by highlighting influential regions in the MRI images. Result: In terms of accuracy, sensitivity, and specificity, the evaluation on the Br35H::BrainTumorDetection2020 dataset showed superior performance of the suggested method compared to existing state-of-the-art approaches. This indicates its effectiveness in achieving higher precision in identifying and classifying brain tumors from MRI data, showcasing advancements in diagnostic reliability and efficacy. Discussion: The superior performance of the suggested method indicates its robustness in accurately classifying brain tumors from MRI images, achieving higher accuracy, sensitivity, and specificity compared to existing methods. The method's enhanced sensitivity ensures a greater detection rate of true positive cases, while its improved specificity reduces false positives, thereby optimizing clinical decision-making and patient care in neuro-oncology.
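
As a rough illustration of the classical stages mentioned above, the sketch below clusters pixel intensities with K-Means++ and computes SGLDM/GLCM-style texture descriptors from the resulting region. The choice of three clusters, the brightest-cluster heuristic, and the descriptor set are assumptions rather than the paper's actual pipeline; it assumes scikit-learn and scikit-image (>= 0.19) are available.

```python
# Hedged sketch: K-Means++ intensity clustering followed by co-occurrence texture features.
import numpy as np
from sklearn.cluster import KMeans
from skimage.feature import graycomatrix, graycoprops

def segment_and_describe(mri_slice, n_clusters=3):
    # K-Means++ (the default init) on pixel intensities
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(mri_slice.reshape(-1, 1)).reshape(mri_slice.shape)
    # assume the brightest cluster corresponds to the candidate tumor region
    tumor_cluster = np.argmax(km.cluster_centers_.ravel())
    region = np.where(labels == tumor_cluster, mri_slice, 0).astype(np.uint8)
    # SGLDM / gray-level co-occurrence texture descriptors
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

features = segment_and_describe(np.random.randint(0, 255, (128, 128), dtype=np.uint8))
```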

PMID:39077759 | PMC:PMC11284281 | DOI:10.3389/fphys.2024.1342572

Categories: Literature Watch

Rapid identification of bloodstream infection pathogens and drug resistance using Raman spectroscopy enhanced by convolutional neural networks

Tue, 2024-07-30 06:00

Front Microbiol. 2024 Jul 15;15:1428304. doi: 10.3389/fmicb.2024.1428304. eCollection 2024.

ABSTRACT

Bloodstream infections (BSIs) are a critical medical concern, characterized by elevated morbidity, mortality, extended hospital stays, substantial healthcare costs, and diagnostic challenges. The clinical outcomes for patients with BSI can be markedly improved through the prompt identification of the causative pathogens and their susceptibility to antibiotics and antimicrobial agents. Traditional BSI diagnosis via blood culture is often hindered by its lengthy incubation period and its limitations in detecting pathogenic bacteria and their resistance profiles. Surface-enhanced Raman scattering (SERS) has recently gained prominence as a rapid and effective technique for identifying pathogenic bacteria and assessing drug resistance. This method offers molecular fingerprinting with benefits such as rapidity, sensitivity, and non-destructiveness. The objective of this study was to integrate deep learning (DL) with SERS for the rapid identification of common pathogens and their resistance to drugs in BSIs. To assess the feasibility of combining DL with SERS for direct detection, erythrocyte lysis and differential centrifugation were employed to isolate bacteria from blood samples with positive blood cultures. A total of 12,046 and 11,968 SERS spectra were collected from the two methods using Raman spectroscopy and subsequently analyzed using DL algorithms. The findings reveal that convolutional neural networks (CNNs) exhibit considerable potential in identifying prevalent pathogens and their drug-resistant strains. The differential centrifugation technique outperformed erythrocyte lysis in bacterial isolation from blood, achieving a detection accuracy of 98.68% for pathogenic bacteria and an impressive 99.85% accuracy in identifying carbapenem-resistant Klebsiella pneumoniae. In summary, this research successfully developed an innovative approach by combining DL with SERS for the swift identification of pathogenic bacteria and their drug resistance in BSIs. This novel method holds the promise of significantly improving patient prognoses and optimizing healthcare efficiency. Its potential impact could be profound, potentially transforming the diagnostic and therapeutic landscape of BSIs.
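
The abstract does not detail the CNN that was used, so the following is only a schematic 1D CNN that maps a single SERS spectrum to a class label; the spectrum length, channel counts, and number of classes are placeholders rather than the published configuration.

```python
# Minimal sketch of a 1D CNN classifier for SERS/Raman spectra (assumed architecture).
import torch
import torch.nn as nn

n_points, n_classes = 1000, 8          # spectrum length and class count are placeholders

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)

spectra = torch.randn(4, 1, n_points)   # batch of 4 spectra, 1 channel each
logits = model(spectra)                  # shape: (4, n_classes)
```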

PMID:39077742 | PMC:PMC11284601 | DOI:10.3389/fmicb.2024.1428304

Categories: Literature Watch

Diagnostic Performance of Noninvasive Coronary Computed Tomography Angiography-Derived FFR for Coronary Lesion-Specific Ischemia Based on Deep Learning Analysis

Tue, 2024-07-30 06:00

Rev Cardiovasc Med. 2024 Jan 10;25(1):20. doi: 10.31083/j.rcm2501020. eCollection 2024 Jan.

ABSTRACT

BACKGROUND: The noninvasive computed tomography angiography-derived fractional flow reserve (CT-FFR) can be used to diagnose coronary ischemia. With advancements in associated software, the diagnostic capability of CT-FFR may have evolved. This study evaluates the effectiveness of a novel deep learning-based software in predicting coronary ischemia through CT-FFR.

METHODS: In this prospective study, 138 subjects with suspected or confirmed coronary artery disease were assessed. Following indication of 30%-90% stenosis on coronary computed tomography (CT) angiography, participants underwent invasive coronary angiography and fractional flow reserve (FFR) measurement. The diagnostic performance of the CT-FFR was determined using the FFR as the reference standard.

RESULTS: With a threshold of 0.80, the CT-FFR displayed an impressive diagnostic accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), positive predictive value (PPV), and negative predictive value (NPV) of 97.1%, 96.2%, 97.7%, 0.98, 96.2%, and 97.7%, respectively. At a 0.75 threshold, the CT-FFR showed a diagnostic accuracy, sensitivity, specificity, AUC, PPV, and NPV of 84.1%, 78.8%, 85.7%, 0.95, 63.4%, and 92.8%, respectively. The Bland-Altman analysis revealed a direct correlation between the CT-FFR and FFR (p < 0.001), without systematic differences (p = 0.085).

CONCLUSIONS: The CT-FFR, empowered by novel deep learning software, demonstrates a strong correlation with the FFR, offering high clinical diagnostic accuracy for coronary ischemia. The results underline the potential of modern computational approaches in enhancing noninvasive coronary assessment.

PMID:39077668 | PMC:PMC11262400 | DOI:10.31083/j.rcm2501020

Categories: Literature Watch

Machine Learning for Detecting Atrial Fibrillation from ECGs: Systematic Review and Meta-Analysis

Tue, 2024-07-30 06:00

Rev Cardiovasc Med. 2024 Jan 8;25(1):8. doi: 10.31083/j.rcm2501008. eCollection 2024 Jan.

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is a common arrhythmia that can result in adverse cardiovascular outcomes but is often difficult to detect. The use of machine learning (ML) algorithms for detecting AF has become increasingly prevalent in recent years. This study aims to systematically evaluate and summarize the overall diagnostic accuracy of the ML algorithms in detecting AF in electrocardiogram (ECG) signals.

METHODS: The searched databases included PubMed, Web of Science, Embase, and Google Scholar. The selected studies were subjected to a meta-analysis of diagnostic accuracy to synthesize the sensitivity and specificity.

RESULTS: A total of 14 studies were included, and the forest plot of the meta-analysis showed that the pooled sensitivity and specificity were 97% (95% confidence interval [CI]: 0.94-0.99) and 97% (95% CI: 0.95-0.99), respectively. Compared to traditional machine learning (TML) algorithms (sensitivity: 91.5%), deep learning (DL) algorithms (sensitivity: 98.1%) showed superior performance. Using multiple datasets and public datasets alone or in combination demonstrated slightly better performance than using a single dataset and proprietary datasets.

CONCLUSIONS: ML algorithms are effective for detecting AF from ECGs. DL algorithms, particularly those based on convolutional neural networks (CNN), demonstrate superior performance in AF detection compared to TML algorithms. The integration of ML algorithms can help wearable devices diagnose AF earlier.

PMID:39077651 | PMC:PMC11262392 | DOI:10.31083/j.rcm2501008

Categories: Literature Watch

Advances in Artificial Intelligence-Assisted Coronary Computed Tomographic Angiography for Atherosclerotic Plaque Characterization

Tue, 2024-07-30 06:00

Rev Cardiovasc Med. 2024 Jan 15;25(1):27. doi: 10.31083/j.rcm2501027. eCollection 2024 Jan.

ABSTRACT

Coronary artery disease is a leading cause of death worldwide. Major adverse cardiac events are associated not only with coronary luminal stenosis but also with atherosclerotic plaque components. Coronary computed tomography angiography (CCTA) enables non-invasive evaluation of atherosclerotic plaque along the entire coronary tree. However, precise and efficient assessment of plaque features on CCTA is still a challenge for physicians in daily practice. Artificial intelligence (AI) refers to algorithms that can simulate intelligent human behavior to improve clinical work efficiency. Recently, cardiovascular imaging has seen remarkable advancements with the use of AI. AI-assisted CCTA has the potential to facilitate the clinical workflow, offer objective and repeatable quantitative results, accelerate the interpretation of reports, and guide subsequent treatment. Several AI algorithms have been developed to provide a comprehensive assessment of atherosclerotic plaques. This review serves to highlight the cutting-edge applications of AI-assisted CCTA in atherosclerosis plaque characterization, including detecting obstructive plaques, assessing plaque volumes and vulnerability, monitoring plaque progression, and providing risk assessment. Finally, this paper discusses the current problems and future directions for implementing AI in real-world clinical settings.

PMID:39077649 | PMC:PMC11262402 | DOI:10.31083/j.rcm2501027

Categories: Literature Watch

From Left Atrial Dimension to Curved M-Mode Speckle-Tracking Images: Role of Echocardiography in Evaluating Patients with Atrial Fibrillation

Tue, 2024-07-30 06:00

Rev Cardiovasc Med. 2022 May 11;23(5):171. doi: 10.31083/j.rcm2305171. eCollection 2022 May.

ABSTRACT

Left atrial (LA) enlargement and dysfunction increase the risk of atrial fibrillation (AF). Traditional echocardiographic evaluation of the left atrium has been limited to dimensional and semi-quantification measurement of the atrial component of ventricular filling, with routine measurement of LA function not yet implemented. However, functional parameters, such as LA emptying fraction (LAEF), may be more sensitive markers for detecting AF-related changes than LA enlargement. Speckle-tracking echocardiography has proven to be a feasible and reproducible technology for the direct evaluation of LA function. The clinical application, advantages, and limitations of LA strain and strain rate need to be fully understood. Furthermore, the prognostic value and utility of this technique in making therapeutic decisions for patients with AF need further elucidation. Deep learning neural networks have been successfully adapted to specific tasks in echocardiographic image analysis, and fully automated measurements based on artificial intelligence could facilitate the clinical diagnostic use of LA speckle-tracking images for classification of AF ablation outcome. This review describes the fundamental concepts and a brief overview of the prognostic utility of LA size, LAEF, LA strain and strain rate analyses, and the clinical implications of the use of these measures.

PMID:39077610 | PMC:PMC11273969 | DOI:10.31083/j.rcm2305171

Categories: Literature Watch

Machine Learning in Cardio-Oncology: New Insights from an Emerging Discipline

Tue, 2024-07-30 06:00

Rev Cardiovasc Med. 2023 Oct 19;24(10):296. doi: 10.31083/j.rcm2410296. eCollection 2023 Oct.

ABSTRACT

A growing body of evidence on a wide spectrum of adverse cardiac events following oncologic therapies has led to the emergence of cardio-oncology as an increasingly relevant interdisciplinary specialty. This also calls for better risk stratification of patients undergoing cancer treatment. Machine learning (ML), a popular branch of artificial intelligence that tackles complex big-data problems by identifying interaction patterns among variables, has seen increasing use in cardio-oncology studies for risk stratification. The objective of this comprehensive review is to outline the application of ML approaches in cardio-oncology, including deep learning, artificial neural networks, and random forests, and to summarize the cardiotoxicities identified by ML. The current literature shows that ML has been applied to the prediction, diagnosis, and treatment of cardiotoxicity in cancer patients. In addition, the role of ML in gender and racial disparities in cardiac outcomes and potential future directions of cardio-oncology are discussed. It is essential to establish dedicated multidisciplinary teams in hospitals and to educate medical professionals to become familiar and proficient with ML in the future.

PMID:39077576 | PMC:PMC11273149 | DOI:10.31083/j.rcm2410296

Categories: Literature Watch

Elbow trauma in children: development and evaluation of radiological artificial intelligence models

Tue, 2024-07-30 06:00

Res Diagn Interv Imaging. 2023 Apr 29;6:100029. doi: 10.1016/j.redii.2023.100029. eCollection 2023 Jun.

ABSTRACT

RATIONALE AND OBJECTIVES: To develop a model using artificial intelligence (A.I.) able to detect post-traumatic injuries on pediatric elbow X-rays then to evaluate its performances in silico and its impact on radiologists' interpretation in clinical practice.

MATERIAL AND METHODS: A total of 1956 pediatric elbow radiographs performed following trauma were retrospectively collected from 935 patients aged between 0 and 18 years. Deep convolutional neural networks were trained on these X-rays. The two best models were selected and then evaluated on an external test set involving 120 patients, whose X-rays were acquired on different radiological equipment during another time period. Eight radiologists interpreted this external test set without and then with the help of the A.I. models.

RESULTS: Two models stood out: model 1 had an accuracy of 95.8% and an AUROC of 0.983, and model 2 had an accuracy of 90.5% and an AUROC of 0.975. On the external test set, model 1 maintained a good accuracy of 82.5% and an AUROC of 0.916, while model 2's accuracy dropped to 69.2% and its AUROC to 0.793. Model 1 significantly improved radiologists' sensitivity (0.82 to 0.88, P = 0.016) and accuracy (0.86 to 0.88, P = 0.047), while model 2 significantly decreased readers' specificity (0.86 to 0.83, P = 0.031).

CONCLUSION: End-to-end development of a deep learning model to assess post-traumatic injuries on elbow X-ray in children was feasible and showed that models with close metrics in silico can unpredictably lead radiologists to either improve or lower their performances in clinical settings.

PMID:39077546 | PMC:PMC11265386 | DOI:10.1016/j.redii.2023.100029

Categories: Literature Watch

Audiological Diagnosis of Valvular and Congenital Heart Diseases in the Era of Artificial Intelligence

Tue, 2024-07-30 06:00

Rev Cardiovasc Med. 2023 Jun 14;24(6):175. doi: 10.31083/j.rcm2406175. eCollection 2023 Jun.

ABSTRACT

In recent years, electronic stethoscopes have been combined with artificial intelligence (AI) technology to digitally acquire heart sounds, intelligently identify valvular and congenital heart disease, and improve the accuracy of heart disease diagnosis. Research on AI-based intelligent stethoscopy mainly focuses on AI algorithms; the commonly used methods are end-to-end deep learning algorithms and machine learning algorithms based on feature extraction. A priority for future research is to establish a large, standardized heart sound database and to unify these algorithms for external validation. In addition, different electronic stethoscopes should be extensively compared so that the algorithms are compatible with heart sounds collected by different devices. Especially importantly, the deployment of algorithms in the cloud is a major trend in the future development of artificial intelligence. Finally, AI research based on heart sounds is still at a preliminary stage: although great progress has been made in identifying valvular and congenital heart disease, existing work focuses on algorithms for disease diagnosis, and there is little research on disease severity, remote monitoring, or prognosis, which will be hot spots for future research.

PMID:39077516 | PMC:PMC11264159 | DOI:10.31083/j.rcm2406175

Categories: Literature Watch

3D Features Fusion for Automated Segmentation of Fluid Regions in CSCR Patients: An OCT-based Photodynamic Therapy Response Analysis

Mon, 2024-07-29 06:00

J Imaging Inform Med. 2024 Jul 29. doi: 10.1007/s10278-024-01190-y. Online ahead of print.

ABSTRACT

Central Serous Chorioretinopathy (CSCR) is a significant cause of vision impairment worldwide, with Photodynamic Therapy (PDT) emerging as a promising treatment strategy. The capability to precisely segment fluid regions in Optical Coherence Tomography (OCT) scans and predict the response to PDT treatment can substantially augment patient outcomes. This paper introduces a novel deep learning (DL) methodology for automated 3D segmentation of fluid regions in OCT scans, followed by a subsequent PDT response analysis for CSCR patients. Our approach utilizes the rich 3D contextual information from OCT scans to train a model that accurately delineates fluid regions. This model not only substantially reduces the time and effort required for segmentation but also offers a standardized technique, fostering further large-scale research studies. Additionally, by incorporating pre- and post-treatment OCT scans, our model is capable of predicting PDT response, hence enabling the formulation of personalized treatment strategies and optimized patient management. To validate our approach, we employed a robust dataset comprising 2,769 OCT scans (124 3D volumes), and the results obtained were significantly satisfactory, outperforming the current state-of-the-art methods. This research signifies an important milestone in the integration of DL advancements with practical clinical applications, propelling us a step closer towards improved management of CSCR. Furthermore, the methodologies and systems developed can be adapted and extrapolated to tackle similar challenges in the diagnosis and treatment of other retinal pathologies, favoring more comprehensive and personalized patient care.

PMID:39075249 | DOI:10.1007/s10278-024-01190-y

Categories: Literature Watch

Joint AI-driven event prediction and longitudinal modeling in newly diagnosed and relapsed multiple myeloma

Mon, 2024-07-29 06:00

NPJ Digit Med. 2024 Jul 29;7(1):200. doi: 10.1038/s41746-024-01189-3.

ABSTRACT

Multiple myeloma management requires a balance between maximizing survival, minimizing adverse events of therapy, and monitoring disease progression. While previous work has proposed data-driven models for individual tasks, these approaches fail to provide a holistic view of a patient's disease state, limiting their utility for assisting physician decision-making. To address this limitation, we developed a transformer-based machine learning model that jointly (1) predicts progression-free survival (PFS), overall survival (OS), and adverse events (AE), (2) forecasts key disease biomarkers, and (3) assesses the effect of different treatment strategies, e.g., ixazomib, lenalidomide, dexamethasone (IRd) vs lenalidomide, dexamethasone (Rd). Using TOURMALINE trial data, we trained and internally validated our model on newly diagnosed myeloma patients (N = 703) and externally validated it on relapsed and refractory myeloma patients (N = 720). Our model achieved superior performance to a risk model based on the multiple myeloma international staging system (ISS) (p < 0.001, Bonferroni corrected) and comparable performance to survival models trained separately on each task, which are unable to forecast biomarkers. Our approach outperformed state-of-the-art deep learning models tailored toward forecasting in predicting key disease biomarkers (p < 0.001, Bonferroni corrected). Finally, leveraging our model's capacity to estimate individual-level treatment effects, we found that patients with IgA kappa myeloma appear to benefit the most from IRd. Our study suggests that a holistic assessment of a patient's myeloma course is possible, potentially serving as the foundation for a personalized decision support system.
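
As a structural illustration only, the sketch below shows how a transformer encoder over longitudinal visit features could feed separate heads for event-risk scores (PFS, OS, AE) and next-visit biomarker forecasts. The dimensions, head design, and feature counts are assumptions and do not reflect the published model or its training objectives.

```python
# Schematic sketch of a joint event-risk + biomarker-forecasting transformer (assumed structure).
import torch
import torch.nn as nn

class JointMyelomaModel(nn.Module):
    def __init__(self, n_features=16, d_model=64, n_biomarkers=4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.risk_heads = nn.ModuleDict({
            k: nn.Linear(d_model, 1) for k in ("pfs", "os", "ae")   # event-risk scores
        })
        self.forecast_head = nn.Linear(d_model, n_biomarkers)       # next-visit biomarkers

    def forward(self, visits):                        # visits: (batch, time, features)
        h = self.encoder(self.embed(visits))
        last = h[:, -1]                               # summary of the trajectory so far
        risks = {k: head(last) for k, head in self.risk_heads.items()}
        return risks, self.forecast_head(last)

risks, forecast = JointMyelomaModel()(torch.randn(2, 10, 16))
```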

PMID:39075240 | DOI:10.1038/s41746-024-01189-3

Categories: Literature Watch

Extended dipeptide composition framework for accurate identification of anticancer peptides

Mon, 2024-07-29 06:00

Sci Rep. 2024 Jul 29;14(1):17381. doi: 10.1038/s41598-024-68475-8.

ABSTRACT

The identification of anticancer peptides (ACPs) is crucial, especially for the development of peptide-based cancer therapies. Classical models such as Split Amino Acid Composition (SAAC) and Pseudo Amino Acid Composition (PseAAC) lack rich feature representation, and advances in feature representation can improve the predictive accuracy and efficiency of ACP identification. The aim of this research is therefore to propose and develop an advanced framework based on feature extraction. To achieve this objective, we propose an Extended Dipeptide Composition (EDPC) framework. The proposed EDPC framework extends the dipeptide composition by considering local sequence environment information and reforming the CD-HIT framework to remove noise and redundancy. To measure accuracy, we performed several experiments using four well-known machine learning (ML) algorithms, namely Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). For comparison, we used accuracy, specificity, sensitivity, precision, recall, and F1-score as evaluation criteria. The reliability of the proposed framework was further evaluated using statistical significance tests. As a result, the proposed EDPC framework exhibited enhanced performance compared with SAAC and PseAAC, with the SVM model delivering the highest accuracy of 96.6% and significant enhancements in specificity, sensitivity, precision, and F1-score over multiple datasets. Owing to its enhanced feature representation and the incorporation of local and global sequence profiles, the proposed EDPC achieves higher classification performance. The proposed framework can deal with noise and duplicated features and accommodates a wide range of feature representations. Finally, our proposed framework can be used in clinical applications where ACP identification is essential. Future work will include extending the approach to a larger variety of datasets, incorporating tertiary structural information, and using deep learning techniques to improve the proposed EDPC.
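
For context, the classical dipeptide composition that EDPC extends is simply the normalized frequency of each of the 400 amino acid pairs in a sequence; a minimal sketch is below. The EDPC-specific local sequence environment encoding and CD-HIT-based redundancy removal are not reproduced here, and the example peptide is illustrative.

```python
# Sketch of classical dipeptide-composition features (400 amino-acid pair frequencies).
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]   # 400 pairs

def dipeptide_composition(seq):
    seq = seq.upper()
    counts = dict.fromkeys(DIPEPTIDES, 0)
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:
            counts[pair] += 1
    total = max(len(seq) - 1, 1)
    return [counts[dp] / total for dp in DIPEPTIDES]   # 400-dim frequency vector

features = dipeptide_composition("GLFDIVKKVVGALGSL")   # illustrative peptide sequence
```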

PMID:39075193 | DOI:10.1038/s41598-024-68475-8

Categories: Literature Watch

Developing a fair and interpretable representation of the clock drawing test for mitigating low education and racial bias

Mon, 2024-07-29 06:00

Sci Rep. 2024 Jul 29;14(1):17444. doi: 10.1038/s41598-024-68481-w.

ABSTRACT

The clock drawing test (CDT) is a neuropsychological assessment tool used to screen an individual's cognitive ability. In this study, we developed a Fair and Interpretable Representation of Clock drawing test (FaIRClocks) to evaluate and mitigate classification bias against people with less than 8 years of education while screening their cognitive function using an array of neuropsychological measures. We represented clock drawings by a previously published 10-dimensional deep learning feature set trained on publicly available data from the National Health and Aging Trends Study (NHATS). These embeddings were further fine-tuned with clocks from a preoperative cognitive screening program at the University of Florida to predict three cognitive scores: the Mini-Mental State Examination (MMSE) total score, an attention composite z-score (ATT-C), and a memory composite z-score (MEM-C). The ATT-C and MEM-C scores were developed by averaging z-scores based on normative references. The cognitive screening classifiers were initially tested to compare their performance in patients with fewer years of education (≤8 years) versus patients with higher education (>8 years) and across race. Results indicated that the initial unweighted classifiers confounded lower education with cognitive compromise, resulting in a 100% type I error rate for this group. The samples were therefore reweighted using multiple fairness metrics to achieve sensitivity/specificity and positive/negative predictive value (PPV/NPV) balance across groups. In summary, we report the FaIRClocks model, which shows promise for helping to identify and mitigate bias against people with less than 8 years of education during preoperative cognitive screening.
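
The abstract does not specify the exact reweighting scheme, so the snippet below shows one standard approach, inverse-frequency weights per (group, label) cell, that balances the contribution of education groups during training. It is a hedged illustration under that assumption, not the FaIRClocks procedure, and the group labels are placeholders.

```python
# Illustrative group-balanced sample reweighting (one common fairness-oriented scheme).
import numpy as np

def group_balanced_weights(groups, labels):
    groups, labels = np.asarray(groups), np.asarray(labels)
    weights = np.empty(len(labels), dtype=float)
    n_groups, n_labels = len(np.unique(groups)), len(np.unique(labels))
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                # weight inversely proportional to the (group, label) cell frequency
                weights[cell] = len(labels) / (cell.sum() * n_groups * n_labels)
    return weights

w = group_balanced_weights(groups=["<=8y", ">8y", ">8y", "<=8y"], labels=[1, 0, 1, 0])
```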

PMID:39075127 | DOI:10.1038/s41598-024-68481-w

Categories: Literature Watch
