Deep learning
Bidirectional Long Short-Term Memory-Based Detection of Adverse Drug Reaction Posts Using Korean Social Networking Services Data: Deep Learning Approaches
JMIR Med Inform. 2024 Nov 20;12:e45289. doi: 10.2196/45289.
ABSTRACT
BACKGROUND: Social networking services (SNS) closely reflect the lives of individuals in modern society and generate large amounts of data. Previous studies have extracted drug information using relevant SNS data. In particular, it is important to detect adverse drug reactions (ADRs) early using drug surveillance systems. To this end, various deep learning methods have been used to analyze data in multiple languages in addition to English.
OBJECTIVE: A cautionary drug that can cause ADRs in older patients was selected, and Korean SNS data containing this drug information were collected. Based on this information, we aimed to develop a deep learning model that classifies drug ADR posts based on a recurrent neural network.
METHODS: Ketoprofen, which has a high prescription frequency and was therefore the drug mentioned most often in posts secured from SNS data in previous studies, was selected as the target drug. Blog posts, café posts, and NAVER Q&A posts from 2005 to 2020 were collected from NAVER, a portal site containing drug-related information, and natural language processing techniques were applied to analyze the data written in Korean. Posts containing highly relevant drug name and ADR word pairs were filtered through association analysis, and training data were generated through manual labeling. Using the training data, a word2vec embedding layer was formed and a Bidirectional Long Short-Term Memory (Bi-LSTM) classification model was built. We then evaluated the area under the curve against other machine learning models. In addition, the entire process was further verified using the nonsteroidal anti-inflammatory drug aceclofenac.
RESULTS: Among the nonsteroidal anti-inflammatory drugs, Korean SNS posts containing information on ketoprofen and aceclofenac were secured, and the generic name lexicon, ADR lexicon, and Korean stop word lexicon were generated. In addition, to improve the accuracy of the classification model, an embedding layer was created considering the association between the drug name and the ADR word. In the ADR post classification test, ketoprofen and aceclofenac achieved 85% and 80% accuracy, respectively.
CONCLUSIONS: Here, we propose a process for developing a model that classifies ADR posts using SNS data. After analyzing drug name-ADR patterns, we filtered high-quality data by extracting posts that contained known ADR words identified in the analysis. Based on these data, we developed a model that classifies ADR posts, confirming that a model leveraging social data to monitor ADRs automatically is feasible.
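The classifier class named above (a Bi-LSTM over word2vec embeddings) can be sketched in outline. The snippet below is an illustrative minimal forward pass in NumPy, not the authors' model: all dimensions, weights, and the toy input sequence are hypothetical stand-ins for the paper's word2vec vectors and trained parameters.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates are stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))     # forget gate
    g = np.tanh(z[2 * H:3 * H])           # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * H:]))      # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def bilstm_classify(seq, params_fw, params_bw, w_out, b_out):
    """Run the sequence forward and backward, concatenate the final
    hidden states, and apply a logistic output layer (ADR vs non-ADR)."""
    H = params_fw[1].shape[1]
    h_f, c_f = np.zeros(H), np.zeros(H)
    for x in seq:                         # forward direction
        h_f, c_f = lstm_step(x, h_f, c_f, *params_fw)
    h_b, c_b = np.zeros(H), np.zeros(H)
    for x in reversed(seq):               # backward direction
        h_b, c_b = lstm_step(x, h_b, c_b, *params_bw)
    h = np.concatenate([h_f, h_b])
    return 1 / (1 + np.exp(-(w_out @ h + b_out)))  # P(post describes an ADR)

rng = np.random.default_rng(0)
E, H, T = 8, 4, 5                         # toy embedding dim, hidden dim, length
make = lambda: (rng.normal(size=(4 * H, E)) * 0.1,
                rng.normal(size=(4 * H, H)) * 0.1,
                np.zeros(4 * H))
seq = [rng.normal(size=E) for _ in range(T)]  # stand-in for word2vec vectors
p = bilstm_classify(seq, make(), make(), rng.normal(size=2 * H), 0.0)
print(round(float(p), 3))
```

In a real pipeline the random weights would be learned by backpropagation and the output thresholded to label a post as an ADR post or not.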
PMID:39565685 | DOI:10.2196/45289
An AI deep learning algorithm for detecting pulmonary nodules on ultra-low-dose CT in an emergency setting: a reader study
Eur Radiol Exp. 2024 Nov 20;8(1):132. doi: 10.1186/s41747-024-00518-1.
ABSTRACT
BACKGROUND: To retrospectively assess the added value of an artificial intelligence (AI) algorithm for detecting pulmonary nodules on ultra-low-dose computed tomography (ULDCT) performed at the emergency department (ED).
METHODS: In the OPTIMACT trial, 870 patients with suspected nontraumatic pulmonary disease underwent ULDCT. The ED radiologist prospectively read the examinations and reported incidental pulmonary nodules requiring follow-up. All ULDCTs were processed post hoc using AI deep learning software that marked pulmonary nodules ≥ 6 mm. Three chest radiologists independently reviewed the subset of ULDCTs with either prospectively detected incidental nodules (35/870 patients) or AI marks (458/870 patients); findings scored as nodules by at least two chest radiologists were used as the true positive reference standard. Proportions of true and false positives were compared.
RESULTS: During the OPTIMACT study, 59 incidental pulmonary nodules requiring follow-up were prospectively reported. In the current analysis, 18/59 (30.5%) nodules were scored as true positive while 104/1,862 (5.6%) AI marks in 84/870 patients (9.7%) were scored as true positive. Overall, 5.8 times more (104 versus 18) true positive pulmonary nodules were detected with the use of AI, at the expense of 42.9 times more (1,758 versus 41) false positives. There was a median number of 1 (IQR: 0-2) AI mark per ULDCT.
CONCLUSION: The use of AI on ULDCT in patients suspected of pulmonary disease in an emergency setting results in the detection of many more incidental pulmonary nodules requiring follow-up (5.8×) with a high trade-off in terms of false positives (42.9×).
RELEVANCE STATEMENT: AI aids the detection of incidental pulmonary nodules requiring follow-up on chest CT, supporting early pulmonary cancer detection, but it also increases false positive results, which are mainly clustered in patients with major abnormalities.
TRIAL REGISTRATION: The OPTIMACT trial was registered on 6 December 2016 in the National Trial Register (number NTR6163) (onderzoekmetmensen.nl).
KEY POINTS: An AI deep learning algorithm was tested on 870 ULDCT examinations acquired in the ED. AI detected 5.8 times more pulmonary nodules requiring follow-up (true positives). AI resulted in the detection of 42.9 times more false positive results, clustered in patients with major abnormalities. AI in the ED setting may aid in early pulmonary cancer detection with a high trade-off in terms of false positives.
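The reference standard described above (a finding counts as a true positive only when at least two of three chest radiologists score it as a nodule) reduces to a simple majority-vote tally. A minimal sketch, assuming a per-mark record of three 0/1 reader scores (the field names and counts are hypothetical, not the study's data):

```python
def tally_marks(marks):
    """Split detector marks into true and false positives using a
    2-of-3 chest-radiologist majority vote as the reference standard."""
    tp = [m for m in marks if sum(m["votes"]) >= 2]
    fp = [m for m in marks if sum(m["votes"]) < 2]
    return len(tp), len(fp)

# votes: one 0/1 score per radiologist for each AI mark (toy examples)
marks = [
    {"id": "mark-1", "votes": (1, 1, 0)},  # accepted as a nodule
    {"id": "mark-2", "votes": (0, 1, 0)},  # rejected
    {"id": "mark-3", "votes": (1, 1, 1)},  # accepted as a nodule
]
tp, fp = tally_marks(marks)
print(tp, fp)  # → 2 1
```

Applied to all 1,862 AI marks, a tally like this yields the study's 104 true positive versus 1,758 false positive split.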
PMID:39565453 | DOI:10.1186/s41747-024-00518-1
A Physically Constrained Deep-Learning Fusion Method for Estimating Surface NO(2) Concentration from Satellite and Ground Monitors
Environ Sci Technol. 2024 Nov 20. doi: 10.1021/acs.est.4c07341. Online ahead of print.
ABSTRACT
Accurate estimation of atmospheric chemical concentrations from multiple observations is crucial for assessing the health effects of air pollution. However, existing methods are limited by imbalanced samples from observations. Here, we introduce a novel deep-learning model-measurement fusion method (DeepMMF) constrained by physical laws inferred from a chemical transport model (CTM) to estimate NO2 concentrations over the Continental United States (CONUS). By pretraining with spatiotemporally complete CTM simulations, fine-tuning with satellite and ground measurements, and employing a novel optimization strategy for selecting the proper prior emissions, DeepMMF delivers improved NO2 estimates, showing greater consistency and daily variation alignment with observations (with NMB reduced from -0.3 to -0.1 compared to original CTM simulations). More importantly, DeepMMF effectively addresses the sample imbalance issue that causes overestimation (by over 100%) of downwind or rural concentrations in other methods. It achieves a higher R2 of 0.98 and a lower RMSE of 1.45 ppb against surface NO2 observations, outperforming other approaches, which show R2 values of 0.4-0.7 and RMSEs of 3-6 ppb. The method also offers a synergistic advantage by adjusting corresponding emissions, in agreement with changes (-10% to -20%) reported in the National Emissions Inventory (NEI) between 2019 and 2020. Our results demonstrate the great potential of DeepMMF in data fusion to better support air pollution exposure estimation and forecasting.
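The evaluation metrics quoted above (NMB, R2, RMSE) have standard definitions; a small stdlib-only sketch for reference, using toy concentration values rather than the paper's data:

```python
import math

def nmb(pred, obs):
    """Normalized mean bias: sum of (prediction - observation) / sum of observations."""
    return sum(p - o for p, o in zip(pred, obs)) / sum(obs)

def rmse(pred, obs):
    """Root-mean-square error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [10.0, 12.0, 8.0, 15.0]   # toy surface NO2 observations (ppb)
pred = [9.5, 12.5, 8.2, 14.0]   # toy model estimates (ppb)
print(round(nmb(pred, obs), 3),
      round(rmse(pred, obs), 3),
      round(r_squared(pred, obs), 3))
```

A negative NMB, as reported for the CTM baseline, indicates systematic underestimation relative to the observations.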
PMID:39565242 | DOI:10.1021/acs.est.4c07341
Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation
Radiol Artif Intell. 2024 Nov 20:e240206. doi: 10.1148/ryai.240206. Online ahead of print.
ABSTRACT
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate and compare performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors on diffusion-weighted imaging (DWI), including comparison with radiologist assessments. Materials and Methods In this retrospective study, patients with breast lesions underwent 3T breast MRI from May 2019 to March 2022. In addition to T1-weighted imaging, T2-weighted imaging, and contrast-enhanced imaging, DWI was acquired with five b-values (0, 200, 800, 1000, and 1500 s/mm2). DWI data, split into training, tuning, and test sets, were used for the development and assessment of AI models, including a small 2D convolutional neural network (CNN), ResNet18, EfficientNet-B0, and a 3D CNN. Performance of the DWI-based models in differentiating between benign and malignant breast tumors was compared with that of radiologists assessing standard breast MRI, with diagnostic performance assessed using receiver operating characteristic analysis. The study also examined data augmentation effects (A: random elastic deformation, B: random affine transformation/random noise, and C: mixup) on model performance. Results A total of 334 breast lesions in 293 patients (mean age [SD], 56.5 [15.1] years; all female) were analyzed. 2D CNN models outperformed the 3D CNN on the test dataset (area under the receiver operating characteristic curve [AUC] with different data augmentation methods: 0.83-0.88 versus 0.75-0.76).
There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC 0.88) and the radiologists (AUC 0.86) on the test dataset (P = .64). When comparing the small 2D CNN to radiologists, there was no evidence of a difference in specificity (81.4% versus 72.1%; P = .64) or sensitivity (85.9% versus 98.8%; P = .64). Conclusion AI models, particularly a small 2D CNN, showed good performance in differentiating between malignant and benign breast tumors using DWI, without needing manual segmentation. ©RSNA, 2024.
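The AUC values compared above come from standard ROC analysis; the AUC equals the Mann-Whitney probability that a randomly chosen malignant case receives a higher model score than a randomly chosen benign case (ties counted as one half). A stdlib-only sketch with toy scores, not the study's outputs:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(positive score > negative score), ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

malignant = [0.9, 0.8, 0.7, 0.6]   # toy model scores for malignant lesions
benign    = [0.5, 0.4, 0.7, 0.2]   # toy model scores for benign lesions
print(auc(malignant, benign))  # → 0.90625
```

This pairwise formulation is O(n*m); production ROC code sorts the scores once instead, but the result is identical.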
PMID:39565222 | DOI:10.1148/ryai.240206
Leveraging laryngograph data for robust voicing detection in speech
J Acoust Soc Am. 2024 Nov 1;156(5):3502-3513. doi: 10.1121/10.0034445.
ABSTRACT
Accurately detecting voiced intervals in speech signals is a critical step in pitch tracking and has numerous applications. While conventional signal processing methods and deep learning algorithms have been proposed for this task, their need to fine-tune threshold parameters for different datasets and limited generalization restrict their utility in real-world applications. To address these challenges, this study proposes a supervised voicing detection model that leverages recorded laryngograph data. The model, adapted from a recently developed CrossNet architecture, is trained using reference voicing decisions derived from laryngograph datasets. Pretraining is also investigated to improve the generalization ability of the model. The proposed model produces robust voicing detection results, outperforming other strong baseline methods, and generalizes well to unseen datasets. The source code of the proposed model with pretraining is provided along with the list of used laryngograph datasets to facilitate further research in this area.
PMID:39565144 | DOI:10.1121/10.0034445
Deep learning enabled integration of tumor microenvironment microbial profiles and host gene expressions for interpretable survival subtyping in diverse types of cancers
mSystems. 2024 Nov 20:e0139524. doi: 10.1128/msystems.01395-24. Online ahead of print.
ABSTRACT
The tumor microbiome, a complex community of microbes found in tumors, has been linked to cancer development, progression, and treatment outcome. However, disentangling the relationship between the tumor microbiome and host gene expression in the tumor microenvironment, as well as their concerted effects on patient survival, remains a bottleneck. In this study, we aimed to decode this complex relationship by developing ASD-cancer (autoencoder-based subtypes detector for cancer), a semi-supervised deep learning framework that extracts survival-related features from tumor microbiome and transcriptome data and identifies patients' survival subtypes. Using tissue samples from The Cancer Genome Atlas database, we identified two statistically distinct survival subtypes across all 20 types of cancer. Our framework provided improved risk stratification (e.g., for liver hepatocellular carcinoma [LIHC], log-rank test, P = 8.12E-6) compared to PCA (e.g., for LIHC, log-rank test, P = 0.87), predicted survival subtypes accurately, and identified biomarkers for survival subtypes. Additionally, we identified potential interactions between microbes and host genes that may play roles in survival. For instance, in LIHC, Arcobacter, Methylocella, and Isoptericola may regulate host survival through interactions with host genes enriched in the HIF-1 signaling pathway, indicating these species as potential therapy targets. Further experiments on validation data sets have also supported these patterns. Collectively, ASD-cancer has enabled accurate survival subtyping and biomarker discovery, which could facilitate personalized treatment for a broad spectrum of cancer types. IMPORTANCE: Unraveling the intricate relationship between the tumor microbiome, host gene expression, and their collective impact on cancer outcomes is paramount for advancing personalized treatment strategies. Our study introduces ASD-cancer, a cutting-edge autoencoder-based subtype detector.
ASD-cancer decodes the complexities within the tumor microenvironment, successfully identifying distinct survival subtypes across 20 cancer types. Its superior risk stratification, demonstrated by significant improvements over traditional methods like principal component analysis, holds promise for refining patient prognosis. Accurate survival subtype predictions, biomarker discovery, and insights into microbe-host gene interactions elevate ASD-cancer as a powerful tool for advancing precision medicine. These findings not only contribute to a deeper understanding of the tumor microenvironment but also open avenues for personalized interventions across diverse cancer types, underscoring the transformative potential of ASD-cancer in shaping the future of cancer care.
PMID:39565103 | DOI:10.1128/msystems.01395-24
Tractography-Based Automated Identification of Retinogeniculate Visual Pathway With Novel Microstructure-Informed Supervised Contrastive Learning
Hum Brain Mapp. 2024 Dec 1;45(17):e70071. doi: 10.1002/hbm.70071.
ABSTRACT
The retinogeniculate visual pathway (RGVP) is responsible for carrying visual information from the retina to the lateral geniculate nucleus. Identification and visualization of the RGVP are important in studying the anatomy of the visual system and can inform the treatment of related brain diseases. Diffusion MRI (dMRI) tractography is an advanced imaging method that uniquely enables in vivo mapping of the 3D trajectory of the RGVP. Currently, identification of the RGVP from tractography data relies on expert (manual) selection of tractography streamlines, which is time-consuming, has high clinical and expert labor costs, and is affected by inter-observer variability. In this paper, we present a novel deep learning framework, DeepRGVP, to enable fast and accurate identification of the RGVP from dMRI tractography data. We design a novel microstructure-informed supervised contrastive learning method that leverages both streamline label and tissue microstructure information to determine positive and negative pairs. We propose a new streamline-level data augmentation method to address highly imbalanced training data, where the number of RGVP streamlines is much lower than that of non-RGVP streamlines. In the experiments, we perform comparisons with several state-of-the-art deep learning methods that were designed for tractography parcellation. Furthermore, to assess the generalizability of the proposed RGVP method, we apply our method to dMRI tractography data from neurosurgical patients with pituitary tumors. In comparison with the state-of-the-art methods, we show superior RGVP identification results using DeepRGVP with significantly higher accuracy and F1 scores. In the patient data experiment, we show DeepRGVP can successfully identify RGVPs despite the effect of lesions affecting the RGVPs. Overall, our study shows the high potential of using deep learning to automatically identify the RGVP.
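The core idea named above, determining positive and negative pairs for supervised contrastive learning from both the streamline label and tissue microstructure, can be illustrated with a toy pairing rule. The fractional anisotropy (FA) field and its tolerance below are hypothetical illustrations of a microstructure measure, not DeepRGVP's actual criterion:

```python
def pair_type(a, b, fa_tol=0.1):
    """Decide whether two streamlines form a positive or negative pair.

    Illustrative rule: a pair is positive when the anatomical labels agree
    AND the mean fractional anisotropy (a microstructure measure) is similar.
    Both the use of mean FA and the tolerance are assumptions for this sketch.
    """
    same_label = a["label"] == b["label"]
    similar_fa = abs(a["mean_fa"] - b["mean_fa"]) <= fa_tol
    return "positive" if (same_label and similar_fa) else "negative"

s1 = {"label": "RGVP",     "mean_fa": 0.42}   # toy streamline summaries
s2 = {"label": "RGVP",     "mean_fa": 0.45}
s3 = {"label": "non-RGVP", "mean_fa": 0.44}
print(pair_type(s1, s2), pair_type(s1, s3))  # → positive negative
```

In training, positive pairs are pulled together and negative pairs pushed apart in the learned embedding space; the pairing rule is what injects the microstructure information.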
PMID:39564727 | DOI:10.1002/hbm.70071
Combination of Transfer Learning and Chemprop Interpreter with Support of Deep Learning for the Energy Levels of Organic Photovoltaic Materials Prediction and Regulation
ACS Appl Mater Interfaces. 2024 Nov 20. doi: 10.1021/acsami.4c15835. Online ahead of print.
ABSTRACT
It is challenging to build a deep learning predictive model using traditional data mining methods due to the scarcity of available data, and the model's internal decision-making process is often nonintuitive and difficult to explain. In this work, a directed message passing neural network model with transfer learning (TL) and the chemprop interpreter is proposed to improve energy level prediction and visualization for organic photovoltaic materials. The established model shows the best performance of the five models compared, with the coefficient of determination reaching 0.787 for HOMO and 0.822 for LUMO in a small testing set after TL. The chemprop interpreter then analyzes local and global effects of 12 molecular structures on the energy levels of organic materials. After a comprehensive analysis of the energy level effects of nonfullerene Y-series, IT-series, and other organic materials, 12 new IT-series derivatives are designed. Halogenation of the 1,1-dicyano-methylene-3-indanone (IC) end group can reduce HOMO and LUMO energy levels to varying degrees, while modifying the IC end group with electron-withdrawing aromatic groups can increase HOMO and LUMO energy levels and yield a relatively smaller electrostatic potential (ESP) to reduce intermolecular interactions. The influence of side-chain modification on energy levels is limited. It is worth mentioning that the predicted results for the IT-series derivatives match density functional theory calculations. The model also shows good generalization and transferability for predicting the energy levels of other organic electronic materials. This work not only provides a cost-effective model for predicting the energy levels of organic photovoltaic materials but also explains the potential bridge between molecular structure and electronic properties.
PMID:39564708 | DOI:10.1021/acsami.4c15835
Guided Conditional Diffusion Classifier (ConDiff) for Enhanced Prediction of Infection in Diabetic Foot Ulcers
IEEE Open J Eng Med Biol. 2024 Sep 2;6:20-27. doi: 10.1109/OJEMB.2024.3453060. eCollection 2025.
ABSTRACT
Goal: To accurately detect infections in Diabetic Foot Ulcers (DFUs) using photographs taken at the Point of Care (POC). Achieving high performance is critical for preventing complications and amputations, as well as minimizing unnecessary emergency department visits and referrals. Methods: This paper proposes the Guided Conditional Diffusion Classifier (ConDiff), a novel deep-learning framework that combines guided image synthesis with a denoising diffusion model and distance-based classification. The process involves (1) generating guided conditional synthetic images by injecting Gaussian noise into a guide (input) image, then denoising the noise-perturbed image through a reverse diffusion process conditioned on infection status, and (2) classifying infections based on the minimum Euclidean distance between the synthesized images and the original guide image in embedding space. Results: ConDiff demonstrated superior performance with an average accuracy of 81%, outperforming state-of-the-art (SOTA) models by at least 3%. It also achieved the highest sensitivity of 85.4%, which is crucial in clinical domains, while significantly improving specificity to 74.4%, surpassing the best SOTA model. Conclusions: ConDiff not only improves the diagnosis of DFU infections but also pioneers the use of generative discriminative models for detailed medical image analysis, offering a promising approach for improving patient outcomes.
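Step (2) above, the distance-based decision rule, is straightforward once the condition-specific synthetic images are embedded. A minimal sketch of that rule alone (the embeddings and condition labels below are toy values; the diffusion synthesis itself is not reproduced here):

```python
import math

def classify_by_distance(guide_emb, synth_embs):
    """ConDiff-style decision rule sketch: choose the condition whose
    synthesized image lies closest to the guide image in embedding space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(synth_embs, key=lambda label: dist(guide_emb, synth_embs[label]))

guide = [0.2, 0.9, 0.1]            # toy embedding of the guide (input) image
synth = {                          # toy embeddings of images synthesized under
    "infected":   [0.3, 0.8, 0.1], # each infection-status condition
    "uninfected": [0.9, 0.1, 0.5],
}
print(classify_by_distance(guide, synth))  # → infected
```

The intuition: the diffusion model reconstructs the guide image most faithfully when conditioned on the correct label, so the nearest synthesis reveals the infection status.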
PMID:39564561 | PMC:PMC11573405 | DOI:10.1109/OJEMB.2024.3453060
An Integrated Framework for Infectious Disease Control Using Mathematical Modeling and Deep Learning
IEEE Open J Eng Med Biol. 2024 Sep 9;6:41-53. doi: 10.1109/OJEMB.2024.3455801. eCollection 2025.
ABSTRACT
Infectious diseases are a major global public health concern. Precise modeling and prediction methods are essential to develop effective strategies for disease control. However, data imbalance and the presence of noise and intensity inhomogeneity make disease detection more challenging. Goal: In this article, a novel infectious disease pattern prediction system is proposed by integrating the benefits of deterministic and stochastic models with those of a deep learning model. Results: The combined benefits yield improved performance in solution prediction. Moreover, the objective is also to investigate the influence of time delay on infection rates and rates associated with vaccination. Conclusions: In the proposed framework, the global stability at the disease-free equilibrium is first analysed using the Routh-Hurwitz criteria and the Lyapunov method, and the endemic equilibrium is analysed using non-linear Volterra integral equations in the infectious disease model. Unlike existing models, emphasis is placed on a model capable of investigating stability while considering the effects of vaccination and migration rate. Next, the influence of vaccination on the rate of infection is predicted using an efficient deep learning model that exploits the long-term dependencies in sequential data, thus making the prediction more accurate.
PMID:39564557 | PMC:PMC11573407 | DOI:10.1109/OJEMB.2024.3455801
Breast Cancer Detection on Dual-View Sonography via Data-Centric Deep Learning
IEEE Open J Eng Med Biol. 2024 Sep 5;6:100-106. doi: 10.1109/OJEMB.2024.3454958. eCollection 2025.
ABSTRACT
Goal: This study aims to enhance AI-assisted breast cancer diagnosis through dual-view sonography using a data-centric approach. Methods: We customize a DenseNet-based model on our exclusive dual-view breast ultrasound dataset to enhance the model's ability to differentiate between malignant and benign masses. Various assembly strategies are designed to integrate the dual views into the model input, contrasting with the use of single views alone, with the goal of maximizing performance. Subsequently, we compare the model against the radiologist and quantify the improvement in key performance metrics. We further assess how the radiologist's diagnostic accuracy is enhanced with the assistance of the model. Results: Our experiments consistently found that optimal outcomes were achieved by a channel-wise stacking approach incorporating both views, with one duplicated as the third channel. This configuration resulted in remarkable model performance, with an area under the receiver operating characteristic curve (AUC) of 0.9754, specificity of 0.96, and sensitivity of 0.9263, outperforming the radiologist by 50% in specificity. With the model's guidance, the radiologist's performance improved across key metrics: accuracy by 17%, precision by 26%, and specificity by 29%. Conclusions: Our customized model, with an optimal configuration for dual-view image input, surpassed both radiologists and existing model results in the literature. Integrating the model as a standalone tool or an assistive aid for radiologists can greatly enhance specificity and reduce false positives, thereby minimizing unnecessary biopsies and alleviating radiologists' workload.
PMID:39564554 | PMC:PMC11573408 | DOI:10.1109/OJEMB.2024.3454958
Machine Learning-Based X-Ray Projection Interpolation for Improved 4D-CBCT Reconstruction
IEEE Open J Eng Med Biol. 2024 Sep 11;6:61-67. doi: 10.1109/OJEMB.2024.3459622. eCollection 2025.
ABSTRACT
Goal: Respiration-correlated cone-beam computed tomography (4D-CBCT) is an X-ray-based imaging modality that uses reconstruction algorithms to produce time-varying volumetric images of moving anatomy over a cycle of respiratory motion. The quality of the produced images is affected by the number of CBCT projections available for reconstruction. Interpolation techniques have been used to generate intermediary projections to be used, along with the original projections, for reconstruction. Transfer learning is a powerful approach that harnesses the ability to reuse pre-trained models in solving new problems. Methods: Several state-of-the-art pre-trained deep learning models, used for video frame interpolation, are utilized in this work to generate intermediary projections. Moreover, a novel regression predictive modeling approach is also proposed to achieve the same objective. Digital phantom and clinical datasets are used to evaluate the performance of the models. Results: The results show that the Real-Time Intermediate Flow Estimation (RIFE) algorithm outperforms the others in terms of the Structural Similarity Index Measure (SSIM): 0.986 ± 0.010, Peak Signal to Noise Ratio (PSNR): 44.13 ± 2.76, and Mean Square Error (MSE): 18.86 ± 206.90 across all datasets. Moreover, the interpolated projections were used along with the original ones to reconstruct a 4D-CBCT image that was compared to that reconstructed from the original projections only. Conclusions: The reconstructed image using the proposed approach was found to minimize the streaking artifacts, thereby enhancing the image quality. This work demonstrates the advantage of using general-purpose transfer learning algorithms in 4D-CBCT image enhancement.
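The task and metrics above can be made concrete with a deliberately naive baseline: interpolating an intermediary projection as the pixelwise average of its two neighbours, then scoring it with MSE and PSNR. This averaging is only a stand-in for the learned interpolators (e.g., RIFE) used in the paper, and all arrays are toy values:

```python
import math

def interpolate_midframe(proj_a, proj_b):
    """Naive baseline: intermediary projection as the pixelwise average
    of two neighbouring projections (the paper uses learned interpolators)."""
    return [(a + b) / 2 for a, b in zip(proj_a, proj_b)]

def mse(x, y):
    """Mean square error between two flattened projections."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for the given intensity peak."""
    return 10 * math.log10(peak ** 2 / mse(x, y))

a = [10.0, 20.0, 30.0, 40.0]      # toy flattened projection at phase t
b = [20.0, 30.0, 40.0, 50.0]      # toy projection at phase t+2
mid = interpolate_midframe(a, b)
truth = [16.0, 24.0, 36.0, 44.0]  # toy ground-truth projection at phase t+1
print(mid, round(psnr(mid, truth), 2))
```

Learned interpolators earn their keep precisely where this linear blend fails: when anatomy moves nonlinearly between projection angles.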
PMID:39564553 | PMC:PMC11573399 | DOI:10.1109/OJEMB.2024.3459622
Classifying driver mutations of papillary thyroid carcinoma on whole slide image: an automated workflow applying deep convolutional neural network
Front Endocrinol (Lausanne). 2024 Nov 6;15:1395979. doi: 10.3389/fendo.2024.1395979. eCollection 2024.
ABSTRACT
BACKGROUND: Informative biomarkers play a vital role in guiding clinical decisions regarding management of cancers. We have previously demonstrated the potential of a deep convolutional neural network (CNN) for predicting cancer driver gene mutations from expert-curated histopathologic images in papillary thyroid carcinomas (PTCs). Recognizing the importance of whole slide image (WSI) analysis for clinical application, we aimed to develop an automated image preprocessing workflow that uses WSI inputs to categorize PTCs based on driver mutations.
METHODS: Histopathology slides from The Cancer Genome Atlas (TCGA) repository were utilized for diagnostic purposes. These slides underwent an automated tile extraction and preprocessing pipeline to ensure analysis-ready quality. Next, the extracted image tiles were utilized to train a deep learning CNN model, specifically Google's Inception v3, for the classification of PTCs. The model was trained to distinguish between different groups based on BRAFV600E or RAS mutations.
RESULTS: The newly developed pipeline performed equally well as the expert-curated image classifier. The best model achieved Area Under the Curve (AUC) values of 0.86 (ranging from 0.847 to 0.872) for validation and 0.865 (ranging from 0.854 to 0.876) for the final testing subsets. Notably, it accurately predicted 90% of tumors in the validation set and 84.2% in the final testing set. Furthermore, the performance of our new classifier showed a strong correlation with the expert-curated classifier (Spearman rho = 0.726, p = 5.28e-08) and correlated with the molecular expression-based classifier, BRS (BRAF-RAS scores) (Spearman rho = 0.418, p = 1.92e-13).
CONCLUSIONS: Utilizing WSIs, we implemented an automated workflow with deep CNN model that accurately classifies driver mutations in PTCs.
PMID:39564124 | PMC:PMC11573888 | DOI:10.3389/fendo.2024.1395979
Advancing dermoscopy through a synthetic hair benchmark dataset and deep learning-based hair removal
J Biomed Opt. 2024 Nov;29(11):116003. doi: 10.1117/1.JBO.29.11.116003. Epub 2024 Nov 19.
ABSTRACT
SIGNIFICANCE: Early detection of melanoma is crucial for improving patient outcomes, and dermoscopy is a critical tool for this purpose. However, hair presence in dermoscopic images can obscure important features, complicating the diagnostic process. Enhancing image clarity by removing hair without compromising lesion integrity can significantly aid dermatologists in accurate melanoma detection.
AIM: We aim to develop a novel synthetic hair dermoscopic image dataset and a deep learning model specifically designed for hair removal in melanoma dermoscopy images.
APPROACH: To address the challenge of hair in dermoscopic images, we created a comprehensive synthetic hair dataset that simulates various hair types and dimensions over melanoma lesions. We then designed a convolutional neural network (CNN)-based model that focuses on effective hair removal while preserving the integrity of the melanoma lesions.
RESULTS: The CNN-based model demonstrated significant improvements in the clarity and diagnostic utility of dermoscopic images. The enhanced images provided by our model offer a valuable tool for the dermatological community, aiding in more accurate and efficient melanoma detection.
CONCLUSIONS: The introduction of our synthetic hair dermoscopic image dataset and CNN-based model represents a significant advancement in medical image analysis for melanoma detection. By effectively removing hair from dermoscopic images while preserving lesion details, our approach enhances diagnostic accuracy and supports early melanoma detection efforts.
PMID:39564076 | PMC:PMC11575456 | DOI:10.1117/1.JBO.29.11.116003
Morphometric analysis and tortuosity typing of the large intestine segments on computed tomography colonography with artificial intelligence
Colomb Med (Cali). 2024 Jun 30;55(2):e2005944. doi: 10.25100/cm.v55i2.5944. eCollection 2024 Apr-Jun.
ABSTRACT
BACKGROUND: Morphological properties such as length and tortuosity of the large intestine segments play important roles, especially in interventional procedures like colonoscopy.
OBJECTIVE: Using computed tomography (CT) colonography images, this study aimed to examine the morphological features of the colon's anatomical sections and investigate the relationships of these sections with each other and with age groups. The shapes of the transverse colon were analyzed using artificial intelligence.
METHODS: The study was conducted as a two- and three-dimensional examination of CT colonography images of people between 40 and 80 years old, which were obtained retrospectively. An artificial intelligence algorithm (YOLOv8) was used for shape detection on 3D colon images.
RESULTS: The study included 160 people (89 men and 71 women); mean ages were 57.79±8.55 years for men and 56.55±6.60 years for women, with no statistically significant difference (p=0.24). The total colon length was 166.11±25.07 cm for men and 158.73±21.92 cm for women, with no significant difference between groups (p=0.12). After training, the model's Precision, Recall, and Mean Average Precision (mAP) were 0.8578, 0.7940, and 0.9142, respectively.
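The Precision and Recall figures reported for the YOLOv8 model follow the standard detection definitions; a stdlib-only sketch with toy detection counts (not the study's counts):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from detection counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# toy counts of correct detections, false alarms, and missed shapes
p, r = precision_recall(tp=62, fp=10, fn=8)
print(round(p, 4), round(r, 4))
```

Mean Average Precision (mAP) then averages the precision over recall levels and over classes, summarizing the whole precision-recall trade-off in one number.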
CONCLUSION: The study highlights the importance of understanding the type and morphology of the large intestine for accurate interpretation of CT colonography results and effective clinical management of patients with suspected large intestine abnormalities. Furthermore, this study showed that 88.57% of the images in the test data set were detected correctly and that AI can play an important role in colon typing.
PMID:39564004 | PMC:PMC11573345 | DOI:10.25100/cm.v55i2.5944
Deep learning algorithm for predicting left ventricular systolic dysfunction in atrial fibrillation with rapid ventricular response
Eur Heart J Digit Health. 2024 Aug 19;5(6):683-691. doi: 10.1093/ehjdh/ztae062. eCollection 2024 Nov.
ABSTRACT
AIMS: Although evaluation of left ventricular ejection fraction (LVEF) is crucial for deciding the rate control strategy in patients with atrial fibrillation (AF), real-time assessment of LVEF is limited in outpatient settings. We aimed to investigate the performance of artificial intelligence-based algorithms in predicting LV systolic dysfunction (LVSD) in patients with AF and rapid ventricular response (RVR).
METHODS AND RESULTS: This study is an external validation of a pre-existing deep learning algorithm based on a residual neural network architecture. Data were obtained from a prospective cohort of AF with RVR at a single centre between 2018 and 2023. The primary outcome was the detection of LVSD, defined as an LVEF ≤ 40%, assessed using 12-lead electrocardiography (ECG). The secondary outcome involved predicting LVSD using 1-lead ECG (Lead I). Among 423 patients, 241 with available echocardiography data within 2 months were evaluated, of whom 54 (22.4%) were confirmed to have LVSD. The deep learning algorithm demonstrated fair performance in predicting LVSD [area under the curve (AUC) 0.78], with a negative predictive value of 0.88 for excluding LVSD. The algorithm performed comparably to N-terminal prohormone of brain natriuretic peptide in predicting LVSD (AUC 0.78 vs. 0.70, P = 0.12). Predictive performance was lower with Lead I (AUC 0.68); however, the negative predictive value remained consistent (0.88).
CONCLUSION: The deep learning algorithm demonstrated competent performance in predicting LVSD in patients with AF and RVR. In the outpatient setting, an artificial intelligence-based algorithm may facilitate the prediction of LVSD and an earlier choice of drug, enabling better symptom control in AF patients with RVR.
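The negative predictive value emphasized in the results follows directly from confusion-matrix counts. A minimal sketch of the metric; the counts below are hypothetical and chosen only to illustrate the formula, not taken from the study:

```python
def npv(tn, fn):
    """Negative predictive value: of all negative predictions,
    the fraction that are truly negative (here, no LVSD)."""
    return tn / (tn + fn)

def sensitivity(tp, fn):
    """Fraction of true LVSD cases the model flags."""
    return tp / (tp + fn)

# Hypothetical confusion-matrix counts for illustration only:
tn, fn = 132, 18
print(round(npv(tn, fn), 2))  # 0.88
```

A high NPV is what makes such a model useful as a rule-out tool in outpatient settings, even when overall discrimination (AUC) is only fair.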
PMID:39563911 | PMC:PMC11570393 | DOI:10.1093/ehjdh/ztae062
A statistical analysis for deepfake videos forgery traces recognition followed by a fine-tuned InceptionResNetV2 detection technique
J Forensic Sci. 2024 Nov 19. doi: 10.1111/1556-4029.15665. Online ahead of print.
ABSTRACT
Deepfake videos are growing progressively more convincing because of rapid advancements in artificial intelligence and deep learning technology, raising substantial concerns around propaganda, privacy, and security. This research provides a novel analytical method for detecting deepfake videos using temporal discrepancies in pixel-level statistical features of the video, followed by a deep learning algorithm. To detect the minute aberrations typical of deepfake manipulations, this study considers both spatial information within individual frames and temporal correlations between subsequent frames. The study first introduces a novel Euclidean distance variation probability score that comments directly on the authenticity of a video. Next, an InceptionResNetV2 model, fine-tuned with an additional dense layer, is trained on FaceForensics++ for deepfake detection. The proposed fine-tuned model outperforms existing techniques in testing accuracy on unseen data, achieving 99.80% on the FF++ dataset and 97.60% on the CelebDF dataset.
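The abstract does not specify the exact statistical features behind the Euclidean distance variation score, so the sketch below only illustrates the general idea: summarize each frame by simple pixel statistics, then measure the Euclidean distance between consecutive feature vectors, where abrupt jumps can hint at frame-level tampering. The frames and threshold are hypothetical.

```python
import math
from statistics import mean, pstdev

def frame_features(frame):
    """Per-frame pixel statistics (mean and population std dev)."""
    pixels = [p for row in frame for p in row]
    return (mean(pixels), pstdev(pixels))

def temporal_distances(frames):
    """Euclidean distance between the statistical feature vectors
    of consecutive frames."""
    feats = [frame_features(f) for f in frames]
    return [math.dist(a, b) for a, b in zip(feats, feats[1:])]

# Three tiny synthetic 2x2 "frames"; the last transition shifts abruptly.
frames = [
    [[10, 12], [11, 13]],
    [[10, 12], [11, 14]],
    [[80, 90], [85, 95]],
]
d = temporal_distances(frames)
print(d[1] > 10 * d[0])  # True: the abrupt transition stands out
```

A real pipeline would compute richer per-frame statistics over face regions and convert the distance variations into a probability score, but the temporal-discrepancy intuition is the same.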
PMID:39562484 | DOI:10.1111/1556-4029.15665
Imaging-genomic spatial-modality attentive fusion for studying neuropsychiatric disorders
Hum Brain Mapp. 2024 Dec 1;45(17):e26799. doi: 10.1002/hbm.26799.
ABSTRACT
Multimodal learning has emerged as a powerful technique that leverages diverse data sources to enhance learning and decision-making processes. Adapting this approach to analyzing data collected from different biological domains is intuitive, especially for studying neuropsychiatric disorders. A complex neuropsychiatric disorder like schizophrenia (SZ) can affect multiple aspects of the brain and its biology. These biological sources each present distinct yet correlated expressions of subjects' underlying physiological processes. Joint learning from these data sources can improve our understanding of the disorder. However, combining these biological sources is challenging for several reasons: (i) observations are domain specific, leading to data being represented in dissimilar subspaces, and (ii) fused data are often noisy and high-dimensional, making it challenging to identify relevant information. To address these challenges, we propose a multimodal artificial intelligence model with a novel fusion module inspired by a bottleneck attention module. We use deep neural networks to learn latent space representations of the input streams. Next, we introduce a two-dimensional (spatio-modality) attention module to regulate the intermediate fusion for SZ classification. We implement spatial attention via a dilated convolutional neural network that creates large receptive fields for extracting significant contextual patterns. The resulting joint learning framework maximizes complementarity, allowing us to explore the correspondence among the modalities. We test our model on a multimodal imaging-genetic dataset and achieve an SZ prediction accuracy of 94.10% (p < .0001), outperforming state-of-the-art unimodal and multimodal models for the task. Moreover, the model provides inherent interpretability that helps identify concepts significant for the neural network's decision and explains the underlying physiopathology of the disorder.
Results also show that functional connectivity among subcortical, sensorimotor, and cognitive control domains plays an important role in characterizing SZ. Analysis of the spatio-modality attention scores suggests that structural components like the supplementary motor area, caudate, and insula play a significant role in SZ. Biclustering the attention scores discovers a multimodal cluster that includes genes CSMD1, ATK3, MOB4, and HSPE1, all of which have been identified as relevant to SZ. In summary, feature attribution appears to be especially useful for probing the transient and confined but decisive patterns of complex disorders, and it shows promise for extensive applicability in future studies.
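The attention-regulated intermediate fusion described above can be illustrated with a minimal, framework-free sketch: each modality's latent feature vector is weighted by a softmax over relevance scores before summation. The scores and feature values below are hypothetical; the actual module uses learned latents and a bottleneck-style attention network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def modality_attention(scores, modality_feats):
    """Weight each modality's feature vector by its softmax score,
    then sum: a minimal stand-in for attention-regulated fusion."""
    weights = softmax(scores)
    dim = len(modality_feats[0])
    fused = [sum(w * f[i] for w, f in zip(weights, modality_feats))
             for i in range(dim)]
    return weights, fused

# Two modalities (e.g. imaging and genetic latents), hypothetical values:
weights, fused = modality_attention([2.0, 0.5], [[1.0, 0.0], [0.0, 1.0]])
print([round(w, 3) for w in weights])  # [0.818, 0.182]
```

Because the weights are inspectable, this style of fusion yields the kind of attention scores whose biclustering drives the interpretability analysis above.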
PMID:39562310 | DOI:10.1002/hbm.26799
Response prediction for neoadjuvant treatment in locally advanced rectal cancer patients-improvement in decision-making: A systematic review
Eur J Surg Oncol. 2024 Nov 15:109463. doi: 10.1016/j.ejso.2024.109463. Online ahead of print.
ABSTRACT
BACKGROUND: Predicting pathological complete response (pCR) from pre- or post-treatment features could significantly improve clinical decision-making and enable a more personalized treatment approach with better treatment outcomes. However, the lack of external validation of predictive models, missing in several published articles, is a major issue that can limit the reliability and applicability of these models in clinical settings. Therefore, this systematic review describes different externally validated methods of predicting response to neoadjuvant chemoradiotherapy (nCRT) in locally advanced rectal cancer (LARC) patients and how they could improve clinical decision-making.
METHOD: An extensive search for eligible articles was performed on PubMed, Cochrane, and Scopus between 2018 and 2023, using the keywords: (Response OR outcome) prediction AND (neoadjuvant OR chemoradiotherapy) treatment in 'locally advanced Rectal Cancer'.
INCLUSION CRITERIA: (i) Studies including patients diagnosed with LARC (T3/4 and N- or any T and N+) by pre-medical imaging and pathological examination or as stated by the author (ii) Standardized nCRT completed. (iii) Treatment with long or short course radiotherapy. (iv) Studies reporting on the prediction of response to nCRT with pathological complete response (pCR) as the primary outcome. (v) Studies reporting external validation results for response prediction. (vi) Regarding language restrictions, only articles in English were accepted.
EXCLUSION CRITERIA: (i) We excluded case report studies, conference abstracts, reviews, studies reporting patients with distant metastases at diagnosis. (ii) Studies reporting response prediction with only internally validated approaches.
DATA COLLECTION AND QUALITY ASSESSMENT: Three researchers (DC-D, FB, HT) independently reviewed and screened titles and abstracts of all articles retrieved after de-duplication. Possible disagreements were resolved through discussion among the three researchers. If necessary, three other researchers (LB, GC, MG) were consulted to make the final decision. The extraction of data was performed using the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) template and quality assessment was done using the Prediction model Risk Of Bias Assessment Tool (PROBAST).
RESULTS: A total of 4547 records were identified from the three databases. After excluding 392 duplicates, 4155 records underwent title and abstract screening; 3800 articles were excluded at this stage, and 355 articles were retrieved. Of these, 51 studies were assessed for eligibility. Nineteen reports were then excluded for lacking external validation, and 4 for not evaluating pCR as the primary outcome, leaving 28 articles eligible for inclusion in this systematic review. In terms of quality assessment, 89% of the models had low concern in the participants domain, while 11% had an unclear rating; 96% of the models were of low concern in both the predictors and outcome domains. The overall rating showed high applicability potential, with 82% of the models raising low concern and 18% deemed unclear.
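The screening flow reported in the results is internally consistent, which can be checked with simple arithmetic over the stated counts:

```python
# Counts taken directly from the reported screening flow.
identified = 4547
duplicates = 392
screened = identified - duplicates            # records screened
excluded_at_screening = 3800
retrieved = screened - excluded_at_screening  # articles retrieved
assessed = 51                                 # assessed for eligibility
no_external_validation = 19
no_pcr_outcome = 4
included = assessed - no_external_validation - no_pcr_outcome
print(screened, retrieved, included)  # 4155 355 28
```

Each derived figure matches the number stated in the text (4155 screened, 355 retrieved, 28 included).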
CONCLUSION: Most of the externally validated techniques showed promising performance and the potential to be applied in clinical settings, which is a crucial step towards evidence-based medicine. However, more studies focusing on the external validation of these models in larger cohorts are necessary to ensure that they can reliably predict outcomes in diverse populations.
PMID:39562260 | DOI:10.1016/j.ejso.2024.109463
RiceSNP-BST: a deep learning framework for predicting biotic stress-associated SNPs in rice
Brief Bioinform. 2024 Sep 23;25(6):bbae599. doi: 10.1093/bib/bbae599.
ABSTRACT
Rice consistently faces significant threats from biotic stresses, such as fungi, bacteria, pests, and viruses. Consequently, accurately and rapidly identifying previously unknown single-nucleotide polymorphisms (SNPs) in the rice genome is a critical challenge for rice research and the development of resistant varieties. However, the limited availability of high-quality rice genotype data has hindered this research. Deep learning has transformed biological research by facilitating the prediction and analysis of SNPs in biological sequence data. Convolutional neural networks are especially effective in extracting structural and local features from DNA sequences, leading to significant advancements in genomics. Meanwhile, the expanding catalog of genome-wide association studies provides valuable biological insights for rice research. Building on these ideas, we introduce RiceSNP-BST, an automatic architecture search framework designed to predict SNPs associated with rice biotic stress traits (BST-associated SNPs) by integrating multidimensional features. Notably, the model is trained on newly constructed datasets and offers higher precision than state-of-the-art methods, while demonstrating good performance on an independent test set and on cross-species datasets. Additionally, we extracted features from the original DNA sequences and employed causal inference to enhance the biological interpretability of the model. This study highlights the potential of RiceSNP-BST in advancing genome prediction in rice. Furthermore, a user-friendly web server for RiceSNP-BST (http://rice-snp-bst.aielab.cc) has been developed to support broader genome research.
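The abstract does not detail RiceSNP-BST's feature extraction, but a common starting point for convolutional models over DNA is one-hot encoding of the sequence window around a candidate SNP. A minimal sketch, assuming the usual A/C/G/T convention with unknown bases ('N') mapped to an all-zero vector:

```python
def one_hot_dna(seq):
    """One-hot encode a DNA sequence (A, C, G, T); unknown bases
    such as 'N' map to an all-zero vector, a common convention."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table.get(base, [0, 0, 0, 0]) for base in seq.upper()]

# Encode a short hypothetical window centred on a candidate SNP:
window = "ACGTN"
encoded = one_hot_dna(window)
print(len(encoded), len(encoded[0]))  # 5 4
print(encoded[0])                     # [1, 0, 0, 0]
```

The resulting length-by-4 matrix is the standard input shape for CNNs that learn local sequence motifs around variant sites.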
PMID:39562160 | DOI:10.1093/bib/bbae599