Deep learning

Transcriptomic Profiling of Plasma Extracellular Vesicles Enables Reliable Annotation of the Cancer-specific Transcriptome and Molecular Subtype

Thu, 2024-03-07 06:00

Cancer Res. 2024 Mar 7. doi: 10.1158/0008-5472.CAN-23-4070. Online ahead of print.

ABSTRACT

Longitudinal monitoring of patients with advanced cancers is crucial to evaluate both disease burden and treatment response. Current liquid biopsy approaches mostly rely on the detection of DNA-based biomarkers. However, plasma RNA analysis opens tremendous opportunities for tumor state interrogation and molecular subtyping. Through the application of deep learning algorithms to the deconvolved transcriptomes of RNA within plasma extracellular vesicles (evRNA), we successfully predicted consensus molecular subtypes in patients with metastatic colorectal cancer. We further demonstrated the ability to monitor changes in transcriptomic subtype under treatment selection pressure and identified molecular pathways in evRNA associated with recurrence. Our approach also identified expressed gene fusions and neoepitopes from evRNA. These results demonstrate the feasibility of transcriptomic-based liquid biopsy platforms for precision oncology, spanning from the longitudinal monitoring of tumor subtype changes to the identification of expressed fusions and neoantigens as cancer-specific therapeutic targets, without the need for tissue-based sampling.
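The deconvolution step described above can be illustrated in miniature: if a bulk evRNA expression vector is modeled as a non-negative mixture of known subtype signature profiles, the mixture fractions can be recovered by least squares. A minimal sketch, where the signature matrix, gene counts, and fractions are all made-up illustrations, not the paper's actual pipeline:

```python
import numpy as np

# Hypothetical signature matrix: expression of 4 genes (rows)
# in 2 molecular subtypes (columns).
S = np.array([[10.0, 1.0],
              [8.0, 2.0],
              [1.0, 9.0],
              [2.0, 7.0]])

# Simulated bulk evRNA profile: 70% subtype A, 30% subtype B.
true_fractions = np.array([0.7, 0.3])
bulk = S @ true_fractions

# Least-squares estimate of the mixture fractions,
# clipped to non-negative and renormalized to sum to 1.
fractions, *_ = np.linalg.lstsq(S, bulk, rcond=None)
fractions = np.clip(fractions, 0, None)
fractions /= fractions.sum()
print(fractions)
```

Real deconvolution methods add regularization and work with thousands of genes, but the linear-mixture idea is the same.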

PMID:38451249 | DOI:10.1158/0008-5472.CAN-23-4070

Categories: Literature Watch

Deep-VEGF: deep stacked ensemble model for prediction of vascular endothelial growth factor by concatenating gated recurrent unit with two-dimensional convolutional neural network

Thu, 2024-03-07 06:00

J Biomol Struct Dyn. 2024 Mar 7:1-11. doi: 10.1080/07391102.2024.2323144. Online ahead of print.

ABSTRACT

Vascular endothelial growth factor (VEGF) is involved in the development and progression of various diseases, including cancer, diabetic retinopathy, macular degeneration and arthritis. Understanding the role of VEGF in various disorders has led to the development of effective treatments, including anti-VEGF drugs, which have significantly improved therapeutic methods. Accurate VEGF identification is critical, yet experimental identification is expensive and time-consuming. This study presents Deep-VEGF, a novel computational model for VEGF prediction based on deep-stacked ensemble learning. We formulated two datasets using primary sequences. A novel feature descriptor named K-Space Tri Slicing-Bigram Position-Specific Scoring Matrix (KSTS-BPSSM) is constructed to extract numerical features from primary sequences. Model training is performed with deep learning techniques, including a gated recurrent unit (GRU), a generative adversarial network (GAN) and a convolutional neural network (CNN). The GRU and CNN are ensembled using a stacked-learning approach. The KSTS-BPSSM-based ensemble model secured the most accurate predictive outcomes, surpassing other competitive predictors across both training and testing datasets. This demonstrates the potential of leveraging deep learning for accurate VEGF prediction as a powerful tool to accelerate research, streamline drug discovery and uncover novel therapeutic targets. This insightful approach holds promise for expanding our knowledge of VEGF's role in health and disease. Communicated by Ramaswamy H. Sarma.
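The stacking idea above (base learners whose predictions feed a meta-learner) can be sketched in miniature. Two deliberately simple scoring rules stand in for the paper's GRU and CNN, and a least-squares combiner stands in for the meta-learner; everything here, including the toy data, is an illustrative assumption rather than the Deep-VEGF code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: two noisy informative features.
n = 200
y = rng.integers(0, 2, n)
X = np.column_stack([y + rng.normal(0, 0.8, n),
                     y + rng.normal(0, 0.8, n)])

# Two simple "base learners" (stand-ins for the GRU and CNN):
# each scores samples from one feature, squashed to (0, 1).
def base_scores(X):
    return 1 / (1 + np.exp(-np.column_stack([X[:, 0] - 0.5,
                                             X[:, 1] - 0.5])))

# Stacking: fit meta-learner weights on the base predictions.
P = base_scores(X)
A = np.column_stack([P, np.ones(n)])       # base outputs + bias term
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares meta-learner

ensemble = (A @ w) > 0.5
accuracy = (ensemble == y).mean()
print(f"stacked accuracy: {accuracy:.2f}")
```

Real stacking fits the meta-learner on out-of-fold base predictions to avoid leakage; since the base rules here are fixed rather than trained, that step is omitted.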

PMID:38450715 | DOI:10.1080/07391102.2024.2323144

Categories: Literature Watch

Rapid and Precise Differentiation and Authentication of Agricultural Products via Deep Learning-Assisted Multiplex SERS Fingerprinting

Thu, 2024-03-07 06:00

Anal Chem. 2024 Mar 7. doi: 10.1021/acs.analchem.4c00064. Online ahead of print.

ABSTRACT

Accurate and rapid differentiation and authentication of agricultural products based on their origin and quality are crucial to ensuring food safety and quality control. However, similar chemical compositions and complex matrices often hinder precise identification, particularly for adulterated samples. Herein, we propose a novel method combining multiplex surface-enhanced Raman scattering (SERS) fingerprinting with a one-dimensional convolutional neural network (1D-CNN), which enables the effective differentiation of the category, origin, and grade of agricultural products. This strategy leverages three different SERS-active nanoparticles as multiplex sensors, each tailored to selectively amplify the signals of preferentially adsorbed chemicals within the sample. By strategically combining SERS spectra from different NPs, a 'SERS super-fingerprint' is constructed, offering a more comprehensive representation of the characteristic information on agricultural products. Subsequently, utilizing a custom-designed 1D-CNN model for feature extraction from the 'super-fingerprint' significantly enhances the predictive accuracy for agricultural products. This strategy successfully identified various agricultural products and simulated adulterated samples with exceptional accuracy, reaching 97.7% and 94.8%, respectively. Notably, the entire identification process, encompassing sample preparation, SERS measurement, and deep learning analysis, takes only 35 min. This development of deep learning-assisted multiplex SERS fingerprinting establishes a rapid and reliable method for the identification and authentication of agricultural products.
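The core operation of the 1D-CNN named above, sliding a kernel along a spectrum to extract local features, can be shown with a fixed kernel; the spectrum and kernel below are made up for illustration, not the paper's trained model:

```python
import numpy as np

# A toy "SERS spectrum": flat baseline with one peak.
spectrum = np.zeros(50)
spectrum[20:25] = [1.0, 3.0, 5.0, 3.0, 1.0]

# A fixed peak-sensitive kernel; in a trained 1D-CNN such
# kernels are learned from labeled fingerprints instead.
kernel = np.array([-1.0, 2.0, -1.0])

# 'valid' cross-correlation: one response per kernel position.
response = np.correlate(spectrum, kernel, mode="valid")
peak_position = int(np.argmax(response))
print(peak_position)
```

The response is largest where the kernel is centered on the peak, which is exactly the kind of localized feature a convolutional layer passes to deeper layers.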

PMID:38450485 | DOI:10.1021/acs.analchem.4c00064

Categories: Literature Watch

Evaluation of atopic dermatitis severity using artificial intelligence

Thu, 2024-03-07 06:00

Narra J. 2023 Dec;3(3):e511. doi: 10.52225/narra.v3i3.511. Epub 2023 Dec 19.

ABSTRACT

Atopic dermatitis is a prevalent and persistent chronic inflammatory skin disorder that poses significant challenges when it comes to accurately assessing its severity. The aim of this study was to evaluate deep learning models for automated atopic dermatitis severity scoring using a dataset of Aceh ethnicity individuals in Indonesia. The dataset of clinical images was collected from 250 patients at Dr. Zainoel Abidin Hospital, Banda Aceh, Indonesia and labeled by dermatologists as mild, moderate, severe, or none. Five pretrained convolutional neural networks (CNN) architectures were evaluated: ResNet50, VGGNet19, MobileNetV3, MnasNet, and EfficientNetB0. The evaluation metrics, including accuracy, precision, sensitivity, specificity, and F1-score, were employed to assess the models. Among the models, ResNet50 emerged as the most proficient, demonstrating an accuracy of 89.8%, precision of 90.00%, sensitivity of 89.80%, specificity of 96.60%, and an F1-score of 89.85%. These results highlight the potential of incorporating advanced, data-driven models into the field of dermatology. These models can serve as invaluable tools to assist dermatologists in making early and precise assessments of atopic dermatitis severity and therefore improve patient care and outcomes.
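All of the reported metrics (accuracy, precision, sensitivity, specificity, F1) derive directly from confusion-matrix counts; a minimal sketch with made-up counts for a binary severe/not-severe split (the numbers are illustrative, not the study's):

```python
# Hypothetical binary confusion-matrix counts.
tp, fp, fn, tn = 45, 5, 5, 45

accuracy    = (tp + tn) / (tp + fp + fn + tn)
precision   = tp / (tp + fp)
sensitivity = tp / (tp + fn)          # a.k.a. recall
specificity = tn / (tn + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(round(accuracy, 3), round(precision, 3),
      round(sensitivity, 3), round(specificity, 3), round(f1, 3))
```

For the study's four-class labels (none/mild/moderate/severe), these quantities are computed per class and then averaged.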

PMID:38450339 | PMC:PMC10914065 | DOI:10.52225/narra.v3i3.511

Categories: Literature Watch

An India soyabean dataset for identification and classification of diseases using computer-vision algorithms

Thu, 2024-03-07 06:00

Data Brief. 2024 Feb 22;53:110216. doi: 10.1016/j.dib.2024.110216. eCollection 2024 Apr.

ABSTRACT

Intelligent agriculture relies heavily on the science of agricultural disease image recognition. India is also responsible for a large share of French bean production, accounting for 37.25% of the total. In the southern region of Maharashtra state, India, this crop is cultivated three times a year: the soyabean plant is planted between June and July, during September and October in the rabi season, as well as in February. The soyabean plant is a common crop in the Maharashtrian regions of Pune, Satara, Ahmednagar, Solapur, and Nashik, among others. In Maharashtra, the soyabean plant is grown over an area of around 31,050 hectares. This research presents a dataset of leaves from soyabean plants that are both insect-damaged and healthy. Images were taken over the course of two to three seasons on several farms. There are 3363 photos altogether in the seven folders that make up the dataset. Six categories comprise the dataset: I) Healthy plants II) Vein Necrosis III) Dry leaf IV) Septoria brown spot V) Root images VI) Bacterial leaf blight. The goal of this study is to give academics and students access to our dataset so that they may use it in their research and to build machine learning models.

PMID:38450198 | PMC:PMC10915497 | DOI:10.1016/j.dib.2024.110216

Categories: Literature Watch

Exploring the application and future outlook of Artificial intelligence in pancreatic cancer

Thu, 2024-03-07 06:00

Front Oncol. 2024 Feb 21;14:1345810. doi: 10.3389/fonc.2024.1345810. eCollection 2024.

ABSTRACT

Pancreatic cancer, an exceptionally malignant tumor of the digestive system, presents a challenge due to its lack of typical early symptoms and highly invasive nature. The majority of pancreatic cancer patients are diagnosed when curative surgical resection is no longer possible, resulting in a poor overall prognosis. In recent years, the rapid progress of Artificial intelligence (AI) in the medical field has led to the extensive utilization of machine learning and deep learning as the prevailing approaches. Various models based on AI technology have been employed in the early screening, diagnosis, treatment, and prognostic prediction of pancreatic cancer patients. Furthermore, the development and application of three-dimensional visualization and augmented reality navigation techniques have also found their way into pancreatic cancer surgery. This article provides a concise summary of the current state of AI technology in pancreatic cancer and offers a promising outlook for its future applications.

PMID:38450187 | PMC:PMC10915754 | DOI:10.3389/fonc.2024.1345810

Categories: Literature Watch

Feasibility of video-based real-time nystagmus tracking: a lightweight deep learning model approach using ocular object segmentation

Thu, 2024-03-07 06:00

Front Neurol. 2024 Feb 21;15:1342108. doi: 10.3389/fneur.2024.1342108. eCollection 2024.

ABSTRACT

BACKGROUND: Eye movement tests remain significantly underutilized in emergency departments and primary healthcare units, despite their superior diagnostic sensitivity compared to neuroimaging modalities for the differential diagnosis of acute vertigo. This underutilization may be attributed to a lack of awareness of these tests and the absence of appropriate tools for detecting nystagmus. This study aimed to develop a nystagmus measurement algorithm using a lightweight deep-learning model that recognizes the ocular regions.

METHOD: The deep learning model was used to segment the eye regions, detect blinking, and determine the pupil center. The model was trained using images extracted from video clips of a clinical battery of eye movement tests and synthesized images reproducing real eye movement scenarios using virtual reality. Each eye image was annotated with segmentation masks of the sclera, iris, and pupil, with gaze vectors of the pupil center for eye tracking. We conducted a comprehensive evaluation of model performance and its execution speeds in comparison to various alternative models using metrics that are suitable for the tasks.

RESULTS: The mean Intersection over Union values of the segmentation model ranged from 0.90 to 0.97 for the different classes (sclera, iris, and pupil) across image types (synthetic vs. real-world images). Additionally, the mean absolute error for eye tracking was 0.595 for real-world data, and the F1 score for blink detection was ≥ 0.95, indicating a very high level of accuracy. Under the same hardware conditions, our model also achieved the fastest execution speed for ocular object segmentation compared to the alternative models. The prediction of horizontal and vertical nystagmus in real eye movement videos was highly accurate, with a strong correlation between observed and predicted values (r = 0.9949 for horizontal and r = 0.9950 for vertical; both p < 0.05).
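Intersection over Union, the headline segmentation metric above, is simple to compute per class from binary masks; a minimal sketch with tiny made-up masks standing in for the pupil class:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for one binary mask pair."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy 4x4 "pupil" masks: the prediction overlaps ground truth partially.
target = np.zeros((4, 4), int)
target[1:3, 1:3] = 1            # 4 ground-truth pixels
pred = np.zeros((4, 4), int)
pred[1:3, 1:4] = 1              # 6 predicted pixels, 4 overlapping

print(iou(pred, target))        # 4 overlapping / 6 in the union
```

The paper's mean IoU averages this quantity over images and classes (sclera, iris, pupil).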

CONCLUSION: The potential of our model, which can automatically segment ocular regions and track nystagmus in real time from eye movement videos, holds significant promise for emergency settings or remote intervention within the field of neurotology.

PMID:38450068 | PMC:PMC10915048 | DOI:10.3389/fneur.2024.1342108

Categories: Literature Watch

Machine learning-based evaluation of spontaneous pain and analgesics from cellular calcium signals in the mouse primary somatosensory cortex using explainable features

Thu, 2024-03-07 06:00

Front Mol Neurosci. 2024 Feb 21;17:1356453. doi: 10.3389/fnmol.2024.1356453. eCollection 2024.

ABSTRACT

INTRODUCTION: Pain that arises spontaneously is considered more clinically relevant than pain evoked by external stimuli. However, measuring spontaneous pain in animal models in preclinical studies is challenging due to methodological limitations. To address this issue, we recently developed a deep learning (DL) model to assess spontaneous pain using cellular calcium signals of the primary somatosensory cortex (S1) in awake head-fixed mice. However, DL models operate like a "black box": their decision-making process is not transparent and is difficult to understand, which is especially evident when our DL model classifies different states of pain based on cellular calcium signals. In this study, we introduce a novel machine learning (ML) model that utilizes features manually extracted from S1 calcium signals, including the dynamic changes in calcium levels and the cell-to-cell activity correlations.

METHOD: We focused on observing neural activity patterns in the primary somatosensory cortex (S1) of mice using two-photon calcium imaging after injecting a calcium indicator (GCaMP6s) into S1 cortical neurons. We extracted features related to the ratio of up- and down-regulated cells in calcium activity and the level of activity correlation between cells as input data for the ML model. The ML model was validated using a Leave-One-Subject-Out Cross-Validation approach to distinguish between non-pain, pain, and drug-induced analgesic states.

RESULTS AND DISCUSSION: The ML model was designed to classify data into three distinct categories: non-pain, pain, and drug-induced analgesic states. Its versatility was demonstrated by successfully classifying different states across various pain models, including inflammatory and neuropathic pain, by identifying the analgesic effects of drugs such as ketoprofen and morphine, and by confirming the efficacy of magnolin, a candidate analgesic compound. In conclusion, our ML model surpasses the limitations of previous DL approaches by leveraging manually extracted features. This not only clarifies the decision-making process of the ML model but also yields insights into neuronal activity patterns associated with pain, facilitating preclinical studies of analgesics with higher potential for clinical translation.
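Leave-One-Subject-Out cross-validation, used for the validation above, holds out every trial from one animal per fold so that the model is always tested on an unseen subject; a minimal sketch of the split logic (the subject IDs are hypothetical):

```python
# Each sample is tagged with the subject (mouse) it came from.
subjects = ["m1", "m1", "m2", "m2", "m2", "m3", "m3"]

def leave_one_subject_out(subjects):
    """Yield (held_out_subject, train_indices, test_indices) per fold."""
    for held_out in sorted(set(subjects)):
        test = [i for i, s in enumerate(subjects) if s == held_out]
        train = [i for i, s in enumerate(subjects) if s != held_out]
        yield held_out, train, test

for held_out, train, test in leave_one_subject_out(subjects):
    print(held_out, train, test)
```

Grouping by subject rather than by sample prevents trials from one animal leaking into both the training and the test set.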

PMID:38450042 | PMC:PMC10915002 | DOI:10.3389/fnmol.2024.1356453

Categories: Literature Watch

Robust human locomotion and localization activity recognition over multisensory

Thu, 2024-03-07 06:00

Front Physiol. 2024 Feb 21;15:1344887. doi: 10.3389/fphys.2024.1344887. eCollection 2024.

ABSTRACT

Human activity recognition (HAR) plays a pivotal role in various domains, including healthcare, sports, robotics, and security. With the growing popularity of wearable devices, particularly Inertial Measurement Units (IMUs) and Ambient sensors, researchers and engineers have sought to take advantage of these advances to accurately and efficiently detect and classify human activities. This research paper presents an advanced methodology for human activity and localization recognition, utilizing smartphone IMU, Ambient, GPS, and Audio sensor data from two public benchmark datasets: the Opportunity dataset and the Extrasensory dataset. The Opportunity dataset was collected from 12 subjects participating in a range of daily activities, and it captures data from various body-worn and object-associated sensors. The Extrasensory dataset features data from 60 participants, including thousands of data samples from smartphone and smartwatch sensors, labeled with a wide array of human activities. Our study incorporates novel feature extraction techniques for signal, GPS, and audio sensor data. Specifically, for localization, GPS, audio, and IMU sensors are utilized, while IMU and Ambient sensors are employed for locomotion activity recognition. To achieve accurate activity classification, state-of-the-art deep learning techniques, such as convolutional neural networks (CNN) and long short-term memory (LSTM), have been explored. For indoor/outdoor activities, CNNs are applied, while LSTMs are utilized for locomotion activity recognition. The proposed system has been evaluated using the k-fold cross-validation method, achieving accuracy rates of 97% and 89% for locomotion activity over the Opportunity and Extrasensory datasets, respectively, and 96% for indoor/outdoor activity over the Extrasensory dataset. These results highlight the efficiency of our methodology in accurately detecting various human activities, showing its potential for real-world applications. Moreover, the research paper introduces a hybrid system that combines machine learning and deep learning features, enhancing activity recognition performance by leveraging the strengths of both approaches.

PMID:38449788 | PMC:PMC10915014 | DOI:10.3389/fphys.2024.1344887

Categories: Literature Watch

Deep learning based assessment of hemodynamics in the coarctation of the aorta: comparison of bidirectional recurrent and convolutional neural networks

Thu, 2024-03-07 06:00

Front Physiol. 2024 Feb 21;15:1288339. doi: 10.3389/fphys.2024.1288339. eCollection 2024.

ABSTRACT

The utilization of numerical methods, such as computational fluid dynamics (CFD), has been widely established for modeling patient-specific hemodynamics based on medical imaging data. Hemodynamics assessment plays a crucial role in treatment decisions for the coarctation of the aorta (CoA), a congenital heart disease, with the pressure drop (PD) being a crucial biomarker for CoA treatment decisions. However, implementing CFD methods in the clinical environment remains challenging due to their computational cost and the requirement for expert knowledge. This study proposes a deep learning approach to mitigate the computational need and produce fast results. Building upon a previous proof-of-concept study, we compared the effects of two different artificial neural network (ANN) architectures trained on data with different dimensionalities, both capable of predicting hemodynamic parameters in CoA patients: a one-dimensional bidirectional recurrent neural network (1D BRNN) and a three-dimensional convolutional neural network (3D CNN). The performance was evaluated by median point-wise root mean square error (RMSE) for pressures along the centerline in 18 test cases, which were not included in a training cohort. We found that the 3D CNN (median RMSE of 3.23 mmHg) outperforms the 1D BRNN (median RMSE of 4.25 mmHg). In contrast, the 1D BRNN is more precise in PD prediction, with a lower standard deviation of the error (±7.03 mmHg) compared to the 3D CNN (±8.91 mmHg). The differences between both ANNs are not statistically significant, suggesting that compressing the 3D aorta hemodynamics into a 1D centerline representation does not result in the loss of valuable information when training ANN models. Additionally, we evaluated the utility of the synthetic geometries of the aortas with CoA generated by using a statistical shape model (SSM), as well as the impact of aortic arch geometry (gothic arch shape) on the model's training. The results show that incorporating a synthetic cohort obtained through the SSM of the clinical cohort does not significantly increase the model's accuracy, indicating that the synthetic cohort generation might be oversimplified. Furthermore, our study reveals that selecting training cases based on aortic arch shape (gothic versus non-gothic) does not improve ANN performance for test cases sharing the same shape.
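The evaluation metric above, point-wise RMSE of predicted versus reference pressures along the centerline, together with the pressure-drop biomarker, reduces to a few lines; the pressure values below are made up for illustration:

```python
import numpy as np

# Hypothetical pressures (mmHg) at points along an aortic centerline:
# reference CFD solution vs. a neural-network prediction.
cfd_pressure = np.array([100.0, 98.0, 90.0, 70.0, 70.0, 69.0])
ann_pressure = np.array([101.0, 97.0, 91.0, 72.0, 69.0, 70.0])

# Point-wise root mean square error along the centerline.
rmse = np.sqrt(np.mean((ann_pressure - cfd_pressure) ** 2))

# Pressure drop (PD), the clinical biomarker for CoA.
pressure_drop = cfd_pressure.max() - cfd_pressure.min()

print(f"RMSE = {rmse:.3f} mmHg, PD = {pressure_drop:.1f} mmHg")
```

The study reports the median of this RMSE over its 18 held-out test cases.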

PMID:38449784 | PMC:PMC10916009 | DOI:10.3389/fphys.2024.1288339

Categories: Literature Watch

Attention-assisted hybrid CNN-BILSTM-BiGRU model with SMOTE-Tomek method to detect cardiac arrhythmia based on 12-lead electrocardiogram signals

Thu, 2024-03-07 06:00

Digit Health. 2024 Mar 5;10:20552076241234624. doi: 10.1177/20552076241234624. eCollection 2024 Jan-Dec.

ABSTRACT

OBJECTIVES: Cardiac arrhythmia is one of the most severe cardiovascular diseases and can be fatal. Therefore, its early detection is critical. However, detection of arrhythmia types by physicians based on visual identification is time-consuming and subjective. Deep learning can provide effective approaches to classify arrhythmias accurately and quickly. This study proposed a deep learning approach, developed on the Chapman-Shaoxing electrocardiogram (ECG) dataset, to detect seven types of arrhythmias.

METHOD: Our DNN model is a hybrid CNN-BILSTM-BiGRU algorithm assisted by a multi-head self-attention mechanism, addressing the challenging problem of classifying various arrhythmias from ECG signals. Additionally, the synthetic minority oversampling technique (SMOTE)-Tomek technique was utilized to address the data imbalance problem in detecting and classifying cardiac arrhythmias.

RESULT: The proposed model, trained with a single lead, was tested using a dataset containing 10,466 participants. The performance of the algorithm was evaluated using a random split validation approach. The proposed algorithm achieved an accuracy of 98.57% by lead II and 98.34% by lead aVF for the classification of arrhythmias.

CONCLUSION: We conducted an analysis of single-lead ECG signals to evaluate the effectiveness of our proposed hybrid model in diagnosing and classifying different types of arrhythmias. We trained separate classification models using each individual signal lead. Additionally, we implemented the SMOTE-Tomek technique along with cross-entropy loss as a cost function to address the class imbalance problem. Furthermore, we utilized a multi-headed self-attention mechanism to adjust the network structure and classify the seven arrhythmia classes. Our model achieved high accuracy and demonstrated good generalization ability in detecting ECG arrhythmias. However, further testing of the model with diverse datasets is crucial to validate its performance.
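SMOTE-Tomek, used above for class imbalance, combines minority oversampling (SMOTE) with the removal of Tomek links, i.e. pairs of mutual nearest neighbors from opposite classes that straddle the class border. The Tomek-link half can be sketched from scratch on toy data; this illustrates the definition only and is not the imbalanced-learn implementation used in practice:

```python
import numpy as np

# Toy 1-D feature values and labels (1 = minority arrhythmia class).
X = np.array([0.0, 0.1, 0.45, 0.5, 1.0, 1.1])
y = np.array([0,   0,   0,    1,   1,   1  ])

def tomek_links(X, y):
    """Return index pairs that are mutual nearest neighbors of opposite class."""
    d = np.abs(X[:, None] - X[None, :])   # pairwise distances
    np.fill_diagonal(d, np.inf)           # ignore self-distance
    nn = d.argmin(axis=1)                 # nearest neighbor of each sample
    links = set()
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j]:   # mutual NN, opposite labels
            links.add(tuple(sorted((i, j))))
    return sorted(links)

print(tomek_links(X, y))  # the 0.45/0.5 pair straddles the class border
```

Cleaning then removes the majority-class member of each link, sharpening the boundary that SMOTE's synthetic minority samples fill in.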

PMID:38449680 | PMC:PMC10916475 | DOI:10.1177/20552076241234624

Categories: Literature Watch

Boosted dipper throated optimization algorithm-based Xception neural network for skin cancer diagnosis: An optimal approach

Thu, 2024-03-07 06:00

Heliyon. 2024 Feb 18;10(5):e26415. doi: 10.1016/j.heliyon.2024.e26415. eCollection 2024 Mar 15.

ABSTRACT

Skin cancer is a prevalent form of cancer that necessitates prompt and precise detection. However, current diagnostic methods for skin cancer are either invasive, time-consuming, or unreliable. Consequently, there is a demand for an innovative and efficient approach to diagnosing skin cancer using non-invasive and automated techniques. In this study, a unique method is proposed for diagnosing skin cancer that employs an Xception neural network optimized using the Boosted Dipper Throated Optimization (BDTO) algorithm. The Xception neural network is a deep learning model capable of extracting high-level features from skin dermoscopy images, while the BDTO algorithm is a bio-inspired optimization technique that can determine the optimal parameters and weights for the Xception neural network. The ISIC dataset, a widely accepted benchmark for skin cancer diagnosis, is utilized, and various image preprocessing and data augmentation techniques were implemented to enhance the quality and diversity of the images. Comparison with several contemporary approaches demonstrates that the method outperforms them in detecting skin cancer. The method achieves an average precision of 94.936%, an average accuracy of 94.206%, and an average recall of 97.092% for skin cancer diagnosis, surpassing the performance of alternative methods. Additionally, the 5-fold ROC curve and error curve are presented for data validation to showcase the superiority and robustness of the method.

PMID:38449650 | PMC:PMC10915520 | DOI:10.1016/j.heliyon.2024.e26415

Categories: Literature Watch

Potential applications of artificial intelligence in image analysis in cornea diseases: a review

Wed, 2024-03-06 06:00

Eye Vis (Lond). 2024 Mar 7;11(1):10. doi: 10.1186/s40662-024-00376-3.

ABSTRACT

Artificial intelligence (AI) is an emerging field which could make an intelligent healthcare model a reality, and it has been garnering traction in the field of medicine, with promising results. There have been recent developments in machine learning and/or deep learning algorithms for applications in ophthalmology, primarily for diabetic retinopathy and age-related macular degeneration. However, AI research in the field of cornea diseases is relatively new. Algorithms have been described to assist clinicians in the diagnosis or detection of cornea conditions such as keratoconus, infectious keratitis and dry eye disease. AI may also be used for segmentation and analysis of cornea imaging or tomography as an adjunctive tool. Despite the potential advantages that these new technologies offer, there are challenges that need to be addressed before they can be integrated into clinical practice. In this review, we aim to summarize the current literature and provide an update on recent advances in AI technologies pertaining to corneal diseases and their potential future applications, in particular pertaining to image analysis.

PMID:38448961 | DOI:10.1186/s40662-024-00376-3

Categories: Literature Watch

Continual learning framework for a multicenter study with an application to electrocardiogram

Wed, 2024-03-06 06:00

BMC Med Inform Decis Mak. 2024 Mar 6;24(1):67. doi: 10.1186/s12911-024-02464-9.

ABSTRACT

Deep learning has been increasingly utilized in the medical field and has achieved many goals. Since the size of the data dominates the performance of deep learning, several medical institutions are conducting joint research to obtain as much data as possible. However, sharing data is usually prohibited owing to the risk of privacy invasion. Federated learning is a reasonable approach to train on distributed multicenter data without direct access; however, it needs a central server to merge and distribute models, which is expensive and rarely approved due to various legal regulations. This paper proposes a continual learning framework for a multicenter study, which does not require a central server and can prevent catastrophic forgetting of previously trained knowledge. The proposed framework contains a continual learning method selection process, assuming that a single method is not omnipotent for all involved datasets in a real-world setting and that there could be a proper method to be selected for specific data. We utilized synthetic data generated by a generative adversarial network to evaluate methods prospectively, not ex post facto. We used four independent electrocardiogram datasets for a multicenter study and trained an arrhythmia detection model. Our proposed framework was evaluated against supervised and federated learning methods, as well as fine-tuning approaches that do not include any regularization to preserve previous knowledge. Even without a central server and access to past data, our framework achieved stable performance (AUROC 0.897) across all involved datasets, comparable to federated learning (AUROC 0.901).
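The AUROC values reported above can be computed directly from detector scores and labels as the probability that a randomly chosen positive outranks a randomly chosen negative; a minimal from-scratch sketch with made-up scores (not the study's data):

```python
def auroc(scores, labels):
    """AUROC as the fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # A tie between a positive and a negative counts as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical arrhythmia-detector outputs on 6 ECGs.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0  ]

print(auroc(scores, labels))
```

This pairwise definition is equivalent to the area under the ROC curve and is threshold-free, which is why it is a common headline metric across heterogeneous datasets.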

PMID:38448921 | DOI:10.1186/s12911-024-02464-9

Categories: Literature Watch

Deep learning model to predict Ki-67 expression of breast cancer using digital breast tomosynthesis

Wed, 2024-03-06 06:00

Breast Cancer. 2024 Mar 7. doi: 10.1007/s12282-024-01549-7. Online ahead of print.

ABSTRACT

BACKGROUND: To develop a deep learning (DL) model that predicts Ki-67 expression from digital breast tomosynthesis (DBT) images.

METHODS: The institutional review board approved this retrospective study and waived the requirement for informed consent from the patients. Initially, 499 patients (mean age: 50.5 years; range: 29-90 years) referred to our hospital for breast cancer participated; 126 patients with pathologically confirmed breast cancer were selected, and their Ki-67 expression was measured. The Xception architecture was used in the DL model to predict Ki-67 expression levels. The diagnostic performance of our DL model in distinguishing high from low Ki-67 expression was assessed by accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), as well as on sub-datasets divided by the radiological characteristics of breast cancer.

RESULTS: The average accuracy, sensitivity, specificity, and AUC were 0.912, 0.629, 0.985, and 0.883, respectively. The AUC of the four subgroups separated by radiological findings for the mass, calcification, distortion, and focal asymmetric density sub-datasets were 0.890, 0.750, 0.870, and 0.660, respectively.

CONCLUSIONS: Our results suggest the potential application of our DL model to predict the expression of Ki-67 using DBT, which may be useful for preoperatively determining the treatment strategy for breast cancer.

PMID:38448777 | DOI:10.1007/s12282-024-01549-7

Categories: Literature Watch

A novel method for multi-pollutant monitoring in water supply systems using chemical machine vision

Wed, 2024-03-06 06:00

Environ Sci Pollut Res Int. 2024 Mar 6. doi: 10.1007/s11356-024-32791-3. Online ahead of print.

ABSTRACT

Drinking water is vital for human health and life, but detecting multiple contaminants in it is challenging. Traditional testing methods are both time-consuming and labor-intensive, and cannot capture abrupt changes in water quality over brief intervals. This paper proposes a direct analysis and rapid detection method for three indicators (arsenic, cadmium, and selenium) in complex drinking water systems by combining a novel long-path spectral imager with machine learning models. Our technique can obtain multiple parameters in about 1 s. The experiment involved setting up samples from various drinking water backgrounds and mixed groups, totaling 9360 injections. A raw visible light source ranging from 380 to 780 nm was utilized, uniformly dispersing light into the sample cell through a filter. The residual beam was captured by a high-definition camera, forming a distinctive spectrum. Three deep learning models (ResNet-50, SqueezeNet V1.1, and GoogLeNet Inception V1) were employed. Datasets were divided into training, validation, and test sets in a 6:2:2 ratio, and prediction performance across the different datasets was assessed using the coefficient of determination and root mean square error. The experimental results show that a well-trained machine learning model can extract substantial image feature information and quickly predict multi-dimensional drinking water indicators with almost no preprocessing. The model's prediction performance is stable under different background drinking water systems. The method is accurate, efficient, and real-time and can be widely used in actual water supply systems. This study can improve the efficiency of water quality monitoring and treatment in water supply systems, and the method's potential for environmental monitoring, food safety, industrial testing, and other fields can be further explored in the future.
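The 6:2:2 train/validation/test partition described above is a shuffle-then-slice operation; a minimal sketch (the seed is arbitrary, and integer arithmetic avoids floating-point rounding of the split sizes):

```python
import random

samples = list(range(9360))          # indices for the 9360 injections
random.Random(42).shuffle(samples)   # fixed seed for reproducibility

n = len(samples)
n_train = n * 6 // 10                # 60% for training
n_val = n * 2 // 10                  # 20% for validation
train = samples[:n_train]
val = samples[n_train:n_train + n_val]
test = samples[n_train + n_val:]     # remaining 20% for testing

print(len(train), len(val), len(test))
```

In practice the split is often stratified by background water system so each subset covers every measurement condition.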

PMID:38448769 | DOI:10.1007/s11356-024-32791-3

Categories: Literature Watch

Effects of Intravenous Infusion of Iodine Contrast Media on the Tracheal Diameter and Lung Volume Measured with Deep Learning-Based Algorithm

Wed, 2024-03-06 06:00

J Imaging Inform Med. 2024 Mar 6. doi: 10.1007/s10278-024-01071-4. Online ahead of print.

ABSTRACT

This study aimed to investigate the effects of intravenous injection of iodine contrast agent on the tracheal diameter and lung volume. In this retrospective study, a total of 221 patients (71.1 ± 12.4 years, 174 males) who underwent vascular dynamic CT examination including the chest were included. Unenhanced, arterial-phase, and delayed-phase images were scanned. The tracheal luminal diameter at the level of the thoracic inlet and both lung volumes were evaluated by a radiologist using commercial software that allows automatic airway and lung segmentation. The tracheal diameter and both lung volumes were compared between the unenhanced and the arterial and delayed phases using a paired t-test, with Bonferroni correction for multiple group comparisons. The tracheal diameter in the arterial phase (18.6 ± 2.4 mm) was statistically significantly smaller than that in the unenhanced CT (19.1 ± 2.5 mm) (p < 0.001). No statistically significant difference in tracheal diameter was found between the delayed phase (19.0 ± 2.4 mm) and unenhanced CT (p = 0.077). Both lung volumes in the arterial phase totaled 4131 ± 1051 mL, significantly smaller than in the unenhanced CT (4332 ± 1076 mL) (p < 0.001). No statistically significant difference in lung volumes was found between the delayed phase (4284 ± 1054 mL) and unenhanced CT (p = 0.068). In conclusion, intravenous infusion of iodine contrast agent transiently decreased the tracheal diameter and both lung volumes.
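The statistical comparison above (a paired t-test across phases, Bonferroni-corrected) can be sketched as follows. The simulated diameters are illustrative stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical paired tracheal diameters (mm) for 221 patients
unenhanced = rng.normal(19.1, 2.5, size=221)
arterial = unenhanced - rng.normal(0.5, 0.3, size=221)  # simulated phase-related narrowing

# paired t-test: each patient serves as their own control across phases
t_stat, p = stats.ttest_rel(unenhanced, arterial)

# Bonferroni: multiply by the number of comparisons against unenhanced CT
# (arterial and delayed phases), capping at 1
p_corrected = min(p * 2, 1.0)
```

The paired design matters here: between-patient variability in tracheal size (the ±2.5 mm spread) is much larger than the phase effect, so an unpaired test on the same data would be far less sensitive.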

PMID:38448759 | DOI:10.1007/s10278-024-01071-4

Categories: Literature Watch

Deep learning methods for fully automated dental age estimation on orthopantomograms

Wed, 2024-03-06 06:00

Clin Oral Investig. 2024 Mar 7;28(3):198. doi: 10.1007/s00784-024-05598-2.

ABSTRACT

OBJECTIVES: This study aimed to use all permanent teeth as the target and establish an automated dental age estimation method across all developmental stages of permanent teeth, accomplishing all the essential steps of tooth determination, tooth development staging, and dental age assessment.

METHODS: A three-step framework for automatically estimating dental age was developed for children aged 3 to 15. First, a YOLOv3 network was employed to complete the tasks of tooth localization and numbering on a digital orthopantomogram. Second, a novel network named SOS-Net was established for accurate tooth development staging based on a modified Demirjian method. Finally, the dental age assessment procedure was carried out through a single-group meta-analysis utilizing the statistical data derived from our reference dataset.

RESULTS: The performance tests showed that the one-stage YOLOv3 detection network attained an overall mean average precision at IoU 0.5 (mAP50) of 97.50 for tooth determination. The proposed SOS-Net method achieved an average tooth development staging accuracy of 82.97% for a full dentition. The dental age assessment validation test yielded a mean absolute error (MAE) of 0.72 years with a full dentition (excluding the third molars) as its input.
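The MAE used to validate the age assessment step is simply the average absolute difference between estimated and chronological age. A minimal sketch with made-up ages:

```python
import numpy as np

# hypothetical chronological vs. estimated dental ages (years)
true_age = np.array([4.2, 7.5, 9.1, 12.3, 14.8])
est_age = np.array([4.9, 6.8, 9.6, 11.5, 15.3])

# mean absolute error: average magnitude of the estimation error
mae = float(np.mean(np.abs(true_age - est_age)))  # 0.64 years for this toy data
```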

CONCLUSIONS: The proposed automated framework makes the dental age estimation process fast and standardized, and can draw on reference data from any accessible population.

CLINICAL RELEVANCE: The tooth development staging network can facilitate the precise identification of permanent teeth with abnormal growth, improving the effectiveness and comprehensiveness of dental diagnoses using pediatric orthopantomograms.

PMID:38448657 | DOI:10.1007/s00784-024-05598-2

Categories: Literature Watch

Visual acuity prediction on real-life patient data using a machine learning based multistage system

Wed, 2024-03-06 06:00

Sci Rep. 2024 Mar 6;14(1):5532. doi: 10.1038/s41598-024-54482-2.

ABSTRACT

In ophthalmology, intravitreal operative medication therapy (IVOM) is a widespread treatment for diseases such as age-related macular degeneration (AMD), diabetic macular edema, and retinal vein occlusion. However, in real-world settings, patients often suffer loss of vision on time scales of years despite therapy, and predicting visual acuity (VA) and detecting deterioration as early as possible under real-life conditions is challenging due to heterogeneous and incomplete data. In this contribution, we present a workflow for the development of a research-compatible data corpus fusing different IT systems of the department of ophthalmology of a German maximum care hospital. The extensive data corpus allows predictive statements about the expected progression of a patient and his or her VA in each of the three diseases. For AMD, we found a significant deterioration of visual acuity over time. Within our proposed multistage system, we subsequently classify the VA progression into the three groups of therapy "winners", "stabilizers", and "losers" (WSL classification scheme). Our OCT biomarker classification, using an ensemble of deep neural networks, reaches a classification accuracy (F1-score) of over 98%, enabling us to complete incomplete OCT documentations and exploit them for a more precise VA modelling process. Our VA prediction requires at least four VA examinations and, optionally, OCT biomarkers from the same time period to predict the VA progression within a forecasted time frame; the prediction is currently restricted to IVOM/no therapy. We achieve a final prediction accuracy of 69% in macro-average F1-score, in the same range as the ophthalmologists, who scored 57.8% and 50 ± 10.7% F1-score.
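The macro-average F1 reported above is the unweighted mean of per-class F1 over the three WSL groups, so each group counts equally regardless of size. A minimal sketch with hypothetical labels:

```python
from sklearn.metrics import f1_score

# hypothetical WSL labels: 0 = winner, 1 = stabilizer, 2 = loser
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]

# macro average: compute F1 per class, then take the unweighted mean
macro_f1 = f1_score(y_true, y_pred, average="macro")
```

With imbalanced groups (e.g. few therapy "losers"), macro averaging prevents the majority class from dominating the score, which is why it is a common choice for this kind of progression classification.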

PMID:38448469 | DOI:10.1038/s41598-024-54482-2

Categories: Literature Watch

Deep learning model for personalized prediction of positive MRSA culture using time-series electronic health records

Wed, 2024-03-06 06:00

Nat Commun. 2024 Mar 6;15(1):2036. doi: 10.1038/s41467-024-46211-0.

ABSTRACT

Methicillin-resistant Staphylococcus aureus (MRSA) causes significant morbidity and mortality in hospitals. Rapid, accurate risk stratification of MRSA is crucial for optimizing antibiotic therapy. Our study introduced a deep learning model, PyTorch_EHR, which leverages electronic health record (EHR) time-series data, including a wide variety of patient-specific data, to predict MRSA culture positivity within two weeks. A total of 8,164 MRSA and 22,393 non-MRSA patient events from the Memorial Hermann Hospital System, Houston, Texas, were used for model development. PyTorch_EHR outperforms logistic regression (LR) and light gradient boosting machine (LGBM) models in accuracy (AUROC: PyTorch_EHR 0.911, LR 0.857, LGBM 0.892). External validation with 393,713 patient events from the Medical Information Mart for Intensive Care (MIMIC)-IV dataset in Boston confirms its superior accuracy (AUROC: PyTorch_EHR 0.859, LR 0.816, LGBM 0.838). Our model effectively stratifies patients into high-, medium-, and low-risk categories, potentially optimizing antimicrobial therapy and reducing unnecessary MRSA-specific antimicrobials. This highlights the advantage of deep learning models in predicting MRSA-positive cultures, surpassing traditional machine learning models and supporting clinicians' judgments.
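AUROC, the headline metric in the model comparison above, can be computed directly from binary outcomes and risk scores. The toy values below are illustrative only, not the study's predictions.

```python
from sklearn.metrics import roc_auc_score

# hypothetical binary outcomes (1 = MRSA-positive culture) and model risk scores
y_true = [0, 0, 0, 1, 1, 0, 1, 1]
scores = [0.1, 0.3, 0.35, 0.8, 0.6, 0.55, 0.9, 0.4]

# AUROC: the probability a randomly chosen positive case is scored
# higher than a randomly chosen negative case
auroc = roc_auc_score(y_true, scores)
```

Because AUROC is threshold-free, it summarizes ranking quality across all possible risk cutoffs, which is why the same metric is reported for both the development and the external MIMIC-IV validation cohorts.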

PMID:38448409 | DOI:10.1038/s41467-024-46211-0

Categories: Literature Watch
