Deep learning
Predicting Alzheimer's Disease Progression using a Versatile-Sequence-Length-Adaptive Encoder-Decoder LSTM Architecture
IEEE J Biomed Health Inform. 2024 Apr 9;PP. doi: 10.1109/JBHI.2024.3386801. Online ahead of print.
ABSTRACT
Detecting Alzheimer's disease (AD) accurately at an early stage is critical for planning and implementing disease-modifying treatments that can help prevent progression to the severe stages of the disease. In the existing literature, diagnostic test scores and clinical status have been provided for specific time points, and predicting the disease progression poses a significant challenge. However, few studies focus on longitudinal data to build deep-learning models for AD detection, and these models are not stable enough to be relied upon in real medical settings due to a lack of adaptive training and testing. We aim to predict an individual's diagnostic status for the next six years in an adaptive manner, where prediction performance improves with the number of patient visits. This study presents a Sequence-Length Adaptive Encoder-Decoder Long Short-Term Memory (SLA-ED LSTM) deep-learning model trained on longitudinal data obtained from the Alzheimer's Disease Neuroimaging Initiative archive. In the suggested approach, the decoder LSTM dynamically adjusts to accommodate variations in training sequence length and inference length rather than being constrained to a fixed length. We evaluated model performance for various sequence lengths and found that, for an inference length of one, a sequence length of nine gives the highest average test accuracy and area under the receiver operating characteristic curve of 0.920 and 0.982, respectively. This insight suggests that data from nine visits effectively captures meaningful cognitive status changes and is adequate for accurate model training. We conducted a comparative analysis of the proposed model against state-of-the-art methods, revealing a significant improvement in disease progression prediction over previous methods. Index Terms: Cognitive impairment, longitudinal data, multimodal data, encoder-decoder LSTM, progression prediction. Clinical relevance: The proposed approach has the potential to improve understanding of Alzheimer's disease progression in diagnostics, facilitating early identification of the various stages of cognitive decline leading to AD by considering its clinical variability.
PMID:38593020 | DOI:10.1109/JBHI.2024.3386801
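As an illustration of the general idea behind the sequence-length-adaptive encoder-decoder LSTM described above (not the authors' implementation), the following minimal PyTorch sketch encodes a variable number of observed visits per patient and decodes a variable inference length; the feature size, hidden size, class count, and decoding scheme are placeholder assumptions.

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class EncoderDecoderLSTM(nn.Module):
    def __init__(self, n_features, hidden, n_classes):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(n_classes, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, lengths, horizon):
        # Encode only the observed visits of each patient (variable sequence length).
        packed = pack_padded_sequence(x, lengths, batch_first=True,
                                      enforce_sorted=False)
        _, (h, c) = self.encoder(packed)
        h, c = h[-1], c[-1]
        # Decode one diagnostic prediction per future time step;
        # 'horizon' (the inference length) can vary between calls.
        out = []
        prev = torch.zeros(x.size(0), self.head.out_features)
        for _ in range(horizon):
            h, c = self.decoder(prev, (h, c))
            logits = self.head(h)
            out.append(logits)
            prev = logits.softmax(dim=-1)   # feed back the previous prediction
        return torch.stack(out, dim=1)      # (batch, horizon, n_classes)

model = EncoderDecoderLSTM(n_features=16, hidden=64, n_classes=3)
x = torch.randn(4, 9, 16)                  # up to 9 visits per patient
lengths = torch.tensor([9, 7, 5, 3])       # visits actually observed
preds = model(x, lengths, horizon=1)       # predict the next visit's status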
Optical color routing enabled by deep learning
Nanoscale. 2024 Apr 9. doi: 10.1039/d4nr00105b. Online ahead of print.
ABSTRACT
Nano-color routing has emerged as an immensely popular and widely discussed subject in the realms of light field manipulation, image sensing, and the integration of deep learning. The conventional dye filters employed in commercial applications have long been hampered by several limitations, including subpar signal-to-noise ratio, restricted upper bounds on optical efficiency, and challenges associated with miniaturization. Nonetheless, the advent of bandpass-free color routing has opened up unprecedented avenues for achieving remarkable optical spectral efficiency and operation at sub-wavelength scales within the area of image sensing applications. This has brought about a paradigm shift, fundamentally transforming the field by offering a promising solution to surmount the constraints encountered with traditional dye filters. This review presents a comprehensive exploration of representative deep learning-driven nano-color routing structure designs, encompassing forward simulation algorithms, photonic neural networks, and various global and local topology optimization methods. A thorough comparison is drawn between the exceptional light-splitting capabilities exhibited by these methods and those of traditional design approaches. Additionally, the existing research on color routing is summarized, highlighting a promising direction for forthcoming development, delivering valuable insights to advance the field of color routing and serving as a powerful reference for future endeavors.
PMID:38592716 | DOI:10.1039/d4nr00105b
Gelato: a new hybrid deep learning-based Informer model for multivariate air pollution prediction
Environ Sci Pollut Res Int. 2024 Apr 9. doi: 10.1007/s11356-024-33190-4. Online ahead of print.
ABSTRACT
The increase in air pollutants and their adverse effects on human health and the environment have raised significant concerns. This implies the necessity of predicting air pollutant levels. Numerous studies have aimed to provide new models for more accurate prediction of air pollutants such as CO2, O3, and PM2.5. Most of the models used in the literature are deep learning models, with Transformer-based architectures performing best for time series prediction. However, there is still a need to enhance accuracy in air pollution prediction using Transformers. Alongside the need for increased accuracy, there is a significant demand for predicting a broader spectrum of air pollutants. To address this challenge, this paper proposes a new hybrid deep learning-based Informer model called "Gelato" for multivariate air pollution prediction. Gelato takes a leap forward by considering several air pollutants simultaneously. Besides introducing changes to the Informer structure as the base model, Gelato utilizes Particle Swarm Optimization for hyperparameter optimization. Moreover, XGBoost is used at the final stage to achieve minimal errors. The Gelato performance is assessed by applying the proposed model to a dataset containing eight important air pollutants: CO2, O3, NO, NO2, SO2, PM10, NH3, and PM2.5. Comparing the results of Gelato with other models shows its superiority over them, indicating that it is a reliable model for multivariate air pollution prediction.
PMID:38592633 | DOI:10.1007/s11356-024-33190-4
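As a hedged illustration of one component named above, the sketch below shows a plain particle swarm optimization loop for hyperparameter search. The validation_rmse objective, the two tuned hyperparameters, and the PSO constants are hypothetical placeholders, not Gelato's configuration; the Informer base model and the XGBoost final stage are omitted.

import numpy as np

def validation_rmse(params):
    # Hypothetical objective: train the forecaster with these hyperparameters
    # and return the validation RMSE. A dummy quadratic stands in here.
    learning_rate, dropout = params
    return (learning_rate - 0.001) ** 2 + (dropout - 0.1) ** 2

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    pos = np.random.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([objective(p) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

best_params, best_rmse = pso(validation_rmse, bounds=[(1e-4, 1e-2), (0.0, 0.5)])
print(best_params, best_rmse)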
Clinical evaluation of deep learning-based risk profiling in breast cancer histopathology and comparison to an established multigene assay
Breast Cancer Res Treat. 2024 Apr 9. doi: 10.1007/s10549-024-07303-z. Online ahead of print.
ABSTRACT
PURPOSE: To evaluate the Stratipath Breast tool for image-based risk profiling and compare it with an established prognostic multigene assay for risk profiling in a real-world case series of estrogen receptor (ER)-positive and human epidermal growth factor receptor 2 (HER2)-negative early breast cancer patients categorized as intermediate risk based on classic clinicopathological variables and eligible for chemotherapy.
METHODS: In a case series comprising 234 invasive ER-positive/HER2-negative tumors, clinicopathological data including Prosigna results and corresponding HE-stained tissue slides were retrieved. The digitized HE slides were analysed by Stratipath Breast.
RESULTS: Our findings showed that the Stratipath Breast analysis identified 49.6% of the clinically intermediate tumors as low risk and 50.4% as high risk. The Prosigna assay classified 32.5%, 47.0%, and 20.5% of tumors as low, intermediate, and high risk, respectively. Among Prosigna intermediate-risk tumors, 47.3% were stratified as Stratipath low risk and 52.7% as high risk. In addition, 89.7% of Stratipath low-risk cases were classified as Prosigna low/intermediate risk. The overall agreement between the two tests for the low-risk and high-risk groups (N = 124) was 71.0%, with a Cohen's kappa of 0.42. For both risk profiling tests, grade and Ki67 differed significantly between risk groups.
CONCLUSION: The results from this clinical evaluation of image-based risk stratification show considerable agreement with an established gene expression assay in routine breast pathology.
PMID:38592541 | DOI:10.1007/s10549-024-07303-z
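For readers who want to reproduce the agreement statistics reported above on their own label pairs, a minimal example (with made-up labels, not study data) is:

import numpy as np
from sklearn.metrics import cohen_kappa_score

stratipath = np.array(["low", "high", "high", "low", "low"])   # illustrative risk calls
prosigna   = np.array(["low", "high", "low",  "low", "high"])
agreement = np.mean(stratipath == prosigna)                    # overall percent agreement
kappa = cohen_kappa_score(stratipath, prosigna)                # chance-corrected agreement
print(f"overall agreement {agreement:.1%}, Cohen's kappa {kappa:.2f}")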
Detection of structural lesions of the sacroiliac joints in patients with spondyloarthritis: A comparison of T1-weighted 3D spoiled gradient echo MRI and MRI-based synthetic CT versus T1-weighted turbo spin echo MRI
Skeletal Radiol. 2024 Apr 9. doi: 10.1007/s00256-024-04669-5. Online ahead of print.
ABSTRACT
OBJECTIVES: To investigate the detection of erosion, sclerosis and ankylosis using 1 mm 3D T1-weighted spoiled gradient echo (T1w-GRE) MRI and 1 mm MRI-based synthetic CT (sCT), compared with conventional 4 mm T1w-TSE.
MATERIALS AND METHODS: Prospective, cross-sectional study. Semi-coronal 4 mm T1w-TSE and axial T1w-GRE with 1.6 mm slice thickness and 0.8 mm spacing between overlapping slices were performed. The T1w-GRE images were processed into sCT images using a commercial deep learning algorithm, BoneMRI. Both were reconstructed into 1 mm semi-coronal images. T1w-TSE, T1w-GRE and sCT images were assessed independently by 3 expert and 4 non-expert readers for erosion, sclerosis and ankylosis. Cohen's kappa was used for inter-reader agreement, the exact McNemar test for lesion frequencies, and the Wilcoxon signed-rank test for confidence in lesion detection.
RESULTS: Nineteen patients with axial spondyloarthritis were evaluated. T1w-GRE increased inter-reader agreement for detecting erosion (kappa 0.42 vs 0.21 in non-experts), increased detection of erosion (57 vs 43 of 152 joint quadrants) and sclerosis (26 vs 17 of 152 joint quadrants) among experts, and increased reader confidence for scoring erosion and sclerosis. sCT increased inter-reader agreement for detecting sclerosis (kappa 0.69 vs 0.37 in experts) and ankylosis (0.71 vs 0.52 in non-experts), increased detection of sclerosis (34 vs 17 of 152 joint quadrants) and ankylosis (20 vs 13 of 76 joint halves) among experts, and increased reader confidence for scoring erosion, sclerosis and ankylosis.
CONCLUSION: T1w-GRE and sCT increase sensitivity and reader confidence for the detection of erosion, sclerosis and ankylosis, compared with T1w-TSE.
CLINICAL RELEVANCE STATEMENT: These methods improve the detection of sacroiliac joint structural lesions and might be a useful addition to SIJ MRI protocols both in routine clinical care and as structural outcome measures in clinical trials.
PMID:38592521 | DOI:10.1007/s00256-024-04669-5
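A small illustration of the paired tests named in the methods above, using statsmodels and scipy with invented counts and confidence scores (not study data):

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from scipy.stats import wilcoxon

# 2x2 paired table of lesion detection: rows = T1w-TSE (yes/no), cols = sCT (yes/no)
table = np.array([[40, 3],
                  [17, 92]])              # illustrative counts only
print(mcnemar(table, exact=True))         # exact McNemar test for paired frequencies

conf_tse = [3, 2, 4, 3, 2, 4, 3]          # illustrative per-case confidence scores
conf_sct = [4, 4, 5, 4, 3, 5, 4]
print(wilcoxon(conf_sct, conf_tse))       # paired Wilcoxon signed-rank test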
Noise-Optimized CBCT Imaging of Temporomandibular Joints-The Impact of AI on Image Quality
J Clin Med. 2024 Mar 5;13(5):1502. doi: 10.3390/jcm13051502.
ABSTRACT
Background: Temporomandibular joint disorder (TMD) is a common medical condition. Cone beam computed tomography (CBCT) is effective in assessing TMD-related bone changes, but image noise may impair diagnosis. Emerging deep learning reconstruction algorithms (DLRs) could minimize noise and improve CBCT image clarity. This study compares standard and deep learning-enhanced CBCT images in terms of image quality for detecting osteoarthritis-related degeneration in the temporomandibular joints (TMJs). This study analyzed CBCT images of patients with suspected temporomandibular joint degenerative joint disease (TMJ DJD). Methods: The deep learning model (DLM) reconstructions were performed with ClariCT.AI software. Image quality was evaluated objectively via the contrast-to-noise ratio (CNR) in target areas and subjectively by two experts using a five-point scale. Both readers also assessed TMJ DJD lesions. The study involved 50 patients with a mean age of 28.29 years. Results: Objective analysis revealed significantly better image quality in the DLM reconstructions (CNR levels; p < 0.001). Subjective assessment showed high inter-reader agreement (κ = 0.805) but no significant difference in image quality between the reconstruction types (p = 0.055). Lesion counts were not significantly correlated with the reconstruction type (p > 0.05). Conclusions: The analyzed DLM reconstruction notably enhanced the objective image quality of TMJ CBCT images but did not significantly alter the subjective quality or DJD lesion diagnosis. However, the readers favored DLM images, indicating the potential for better TMD diagnosis with CBCT and meriting further study.
PMID:38592413 | DOI:10.3390/jcm13051502
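The objective image-quality metric used above, the contrast-to-noise ratio, can be computed from an image and two masks along these lines (synthetic data; the ROI and background placement are arbitrary):

import numpy as np

def cnr(image, roi_mask, background_mask):
    # Contrast-to-noise ratio: |mean(ROI) - mean(background)| / SD(background)
    roi = image[roi_mask]
    bg = image[background_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

img = np.random.normal(100, 10, (256, 256))           # synthetic CBCT slice
roi = np.zeros_like(img, dtype=bool); roi[100:120, 100:120] = True
bg  = np.zeros_like(img, dtype=bool); bg[10:40, 10:40] = True
print(f"CNR = {cnr(img, roi, bg):.2f}")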
Value of Automatically Derived Full Thrombus Characteristics: An Explorative Study of Their Associations with Outcomes in Ischemic Stroke Patients
J Clin Med. 2024 Feb 28;13(5):1388. doi: 10.3390/jcm13051388.
ABSTRACT
(1) Background: For acute ischemic strokes caused by large vessel occlusion, manually assessed thrombus volume and perviousness have been associated with treatment outcomes. However, the manual assessment of these characteristics is time-consuming and subject to inter-observer bias. Alternatively, a recently introduced fully automated deep learning-based algorithm can be used to consistently estimate full thrombus characteristics. Here, we exploratively assess the value of these novel biomarkers in terms of their association with stroke outcomes. (2) Methods: We studied two applications of automated full thrombus characterization as follows: one in a randomized trial, MR CLEAN-NO IV (n = 314), and another in a Dutch nationwide registry, MR CLEAN Registry (n = 1839). We used an automatic pipeline to determine the thrombus volume, perviousness, density, and heterogeneity. We assessed their relationship with the functional outcome defined as the modified Rankin Scale (mRS) at 90 days and two technical success measures as follows: successful final reperfusion, which is defined as an eTICI score of 2b-3, and successful first-pass reperfusion (FPS). (3) Results: Higher perviousness was significantly related to a better mRS in both MR CLEAN-NO IV and the MR CLEAN Registry. A lower thrombus volume and lower heterogeneity were only significantly related to better mRS scores in the MR CLEAN Registry. Only lower thrombus heterogeneity was significantly related to technical success; it was significantly related to a higher chance of FPS in the MR CLEAN-NO IV trial (OR = 0.55, 95% CI: 0.31-0.98) and successful reperfusion in the MR CLEAN Registry (OR = 0.88, 95% CI: 0.78-0.99). (4) Conclusions: Thrombus characteristics derived from automatic entire thrombus segmentations are significantly related to stroke outcomes.
PMID:38592252 | DOI:10.3390/jcm13051388
Radiomics and Deep Learning to Predict Pulmonary Nodule Metastasis at CT
Radiology. 2024 Apr;311(1):e233356. doi: 10.1148/radiol.233356.
NO ABSTRACT
PMID:38591975 | DOI:10.1148/radiol.233356
Predicting Invasiveness of Lung Adenocarcinoma at Chest CT with Deep Learning Ternary Classification Models
Radiology. 2024 Apr;311(1):e232057. doi: 10.1148/radiol.232057.
ABSTRACT
Background Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for classifying pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification. Purpose To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs. Materials and Methods In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter data set of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications), integrating two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external data set of pGGNs imaged between December 2019 and January 2021. Diagnostic performance including accuracy, specificity, and sensitivity was assessed. The χ2 test was used to compare model performance in different subgroups stratified by clinical confounders. Results A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal (n = 966) test sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (P < .001). For classifying minimally invasive adenocarcinoma, the accuracy was 85% and 79%, sensitivity was 75% and 63%, and specificity was 89% and 85% for the model with adjudication (model 6) and the model without (model 3), respectively. Model 6 showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%). Conclusion Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Sohn and Fields in this issue.
PMID:38591974 | DOI:10.1148/radiol.232057
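The adjudication strategy is only summarized in the abstract above; one plausible reading (not necessarily the authors' exact rule) is a confidence-gated cascade in which two binary models resolve ternary calls whose top-two probabilities are close, as sketched below with illustrative probabilities.

import numpy as np

def adjudicate(p_ternary, p_pre_vs_rest, p_mia_vs_inv, margin=0.10):
    # If the ternary model's top-two class probabilities are close, defer to a
    # cascade of two binary models (preinvasive vs rest, then MIA vs invasive);
    # otherwise keep the ternary call. Classes: 0 = preinvasive, 1 = MIA, 2 = invasive.
    order = np.argsort(p_ternary)[::-1]
    if p_ternary[order[0]] - p_ternary[order[1]] >= margin:
        return int(order[0])                   # confident ternary prediction
    if p_pre_vs_rest >= 0.5:                   # binary model 1: preinvasive?
        return 0
    return 1 if p_mia_vs_inv >= 0.5 else 2     # binary model 2: MIA vs invasive

print(adjudicate(np.array([0.42, 0.38, 0.20]), p_pre_vs_rest=0.2, p_mia_vs_inv=0.7))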
Quantitative Assessment of Fundus Tessellated Density in Highly Myopic Glaucoma Using Deep Learning
Transl Vis Sci Technol. 2024 Apr 2;13(4):17. doi: 10.1167/tvst.13.4.17.
ABSTRACT
PURPOSE: To characterize the fundus tessellated density (FTD) in highly myopic glaucoma (HMG) and high myopia (HM) for discovering early signs and diagnostic markers.
METHODS: This retrospective cross-sectional study included hospital in-patients with HM (133 eyes) and HMG (73 eyes) with an axial length ≥26 mm at Zhongshan Ophthalmic Center. Using deep learning, FTD was quantified as the average exposed choroid area per unit area on fundus photographs in the global, macular, and disc regions. FTD-associated factors were assessed using partial correlation. Diagnostic efficacy was analyzed using the area under the curve (AUC).
RESULTS: HMG patients had lower global (0.20 ± 0.12 vs. 0.36 ± 0.09) and macular FTD (0.25 ± 0.14 vs. 0.40 ± 0.09) but larger disc FTD (0.24 ± 0.11 vs. 0.19 ± 0.07) than HM patients in the tessellated fundus (all P < 0.001). In the macular region, nasal FTD was lowest in HM eyes (0.26 ± 0.13) but highest in HMG eyes (0.32 ± 0.13) compared with the superior, inferior, and temporal subregions (all P < 0.05). A fundus with a macular region nasal/temporal (NT) FTD ratio > 0.96 (AUC = 0.909) was 15.7 times more indicative of HMG than HM. A higher macular region NT ratio with a lower horizontal parapapillary atrophy/disc ratio indicated a higher possibility of HMG than HM (AUC = 0.932).
CONCLUSIONS: FTD differs in degree and distribution between HMG and HM. A higher macular NT ratio, alone or combined with a lower horizontal parapapillary atrophy/disc ratio, may help differentiate HMG from HM.
TRANSLATIONAL RELEVANCE: Deep learning-based FTD measurement could potentially assist glaucoma diagnosis in HM.
PMID:38591943 | DOI:10.1167/tvst.13.4.17
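A minimal sketch of the fundus tessellated density and nasal/temporal ratio computations described above, assuming a binary tessellation mask and region masks have already been produced by a segmentation model (the masks below are random placeholders, not real fundus data):

import numpy as np

def ftd(tessellation_mask, region_mask):
    # Fraction of the region occupied by exposed choroid (tessellation)
    return tessellation_mask[region_mask].mean()

tess = np.random.rand(512, 512) > 0.7          # placeholder tessellation mask
nasal = np.zeros_like(tess); nasal[:, :256] = True
temporal = ~nasal
nt_ratio = ftd(tess, nasal) / ftd(tess, temporal)
print(f"nasal/temporal FTD ratio = {nt_ratio:.2f}")   # >0.96 suggested HMG in the study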
Accurate Finite Element Simulations of Dynamic Behaviour: Constitutive Models and Analysis with Deep Learning
Materials (Basel). 2024 Jan 28;17(3):643. doi: 10.3390/ma17030643.
ABSTRACT
Owing to the challenge of capturing the dynamic behaviour of metals experimentally, high-precision numerical simulations have become essential for analysing dynamic characteristics. In this study, calculation accuracy was improved by analysing the impact of constitutive models within the finite element (FE) model, and a deep learning (DL) model was employed for result analysis. The results showed that FE simulations with these models effectively capture the elastic-plastic response, and the ZA model exhibits the highest accuracy, with a 26.0% accuracy improvement over the other models at 502 m/s for Hugoniot elastic limit (HEL) stress. The different constitutive models offer diverse descriptions of stress during the elastic-plastic response because of temperature effects. Concurrently, the parameters related to the quasi-static yield strength influence the propagation speed of elastic waves. Calculations show that the quasi-static yield strength of 6061 Al follows a linear relationship, y = ax + b, with HEL stress. The R-squared (R2) and mean absolute error (MAE) values of the DL model for HEL stress predictions are 0.998 and 0.0062, respectively. This research provides a reference for selecting constitutive models for simulation under the same conditions.
PMID:38591486 | DOI:10.3390/ma17030643
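The linear fit and error metrics reported above can be reproduced on one's own data along these lines (the values below are invented for illustration, not the study's measurements):

import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error

yield_strength = np.array([240., 255., 270., 285., 300.])   # illustrative quasi-static yield strengths
hel_stress = np.array([1.10, 1.16, 1.23, 1.29, 1.36])       # illustrative HEL stresses

a, b = np.polyfit(yield_strength, hel_stress, deg=1)        # fit y = a*x + b
pred = a * yield_strength + b
print(f"slope={a:.4f}, intercept={b:.3f}, "
      f"R2={r2_score(hel_stress, pred):.3f}, MAE={mean_absolute_error(hel_stress, pred):.4f}")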
Digital Response to Physical Crises: The Role of an E-Health Platform in the 2023 Southern Turkey Earthquakes
Disaster Med Public Health Prep. 2024 Apr 9;18:e57. doi: 10.1017/dmp.2024.63.
ABSTRACT
The catastrophic earthquakes that struck Southern Turkey in 2023 highlighted the pressing need for effective disaster management strategies. The unprecedented scale of the crisis tested the robustness of traditional healthcare responses and highlighted the potential of e-health solutions. Despite the deployment of Emergency Medical Teams, initial responders - primarily survivors of the earthquakes - faced an enormous challenge due to their lack of training in mass-casualty situations. An e-health platform was introduced to support these first responders, offering tools for drug calculations, case management guidelines, and a deep learning model for pediatric X-ray analysis. This commentary presents an analysis of the platform's use and contributes to the growing discourse on integrating digital health technologies in disaster response and management.
PMID:38591261 | DOI:10.1017/dmp.2024.63
Prediction of LncRNA-protein Interactions Using Auto-Encoder, SE-ResNet Models and Transfer Learning
Microrna. 2024 Apr 8. doi: 10.2174/0122115366288068240322064431. Online ahead of print.
ABSTRACT
BACKGROUND: Long non-coding RNA (lncRNA) plays a crucial role in various biological processes, and mutations or imbalances of lncRNAs can lead to several diseases, including cancer, Prader-Willi syndrome, autism, Alzheimer's disease, cartilage-hair hypoplasia, and hearing loss. Understanding lncRNA-protein interactions (LPIs) is vital for elucidating basic cellular processes, human diseases, viral replication, transcription, and plant pathogen resistance. Despite the development of several LPI calculation methods, predicting LPI remains challenging, with the selection of variables and deep learning structure being the focus of LPI research.
METHODS: We propose a deep learning framework called AR-LPI, which extracts sequence and secondary structure features of proteins and lncRNAs. The framework utilizes an auto-encoder for feature extraction and employs SE-ResNet for prediction. Additionally, we apply transfer learning to the deep neural network SE-ResNet for predicting small-sample datasets.
RESULTS: Through comprehensive experimental comparison, we demonstrate that the AR-LPI architecture performs better in LPI prediction. Specifically, the accuracy of AR-LPI increases by 2.86% to 94.52%, while the F-value of AR-LPI increases by 2.71% to 94.73%.
CONCLUSION: Our experimental results show that the overall performance of AR-LPI is better than that of other LPI prediction tools.
PMID:38591194 | DOI:10.2174/0122115366288068240322064431
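A minimal PyTorch sketch of the squeeze-and-excitation (SE) channel-attention block that gives SE-ResNet its name — the generic textbook form, not the AR-LPI code:

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze: global average pool -> (batch, C)
        return x * w.unsqueeze(-1).unsqueeze(-1)   # excite: rescale each channel

out = SEBlock(channels=64)(torch.randn(2, 64, 32, 32))   # inserted into each residual block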
Motion sensitive network for action recognition in control and decision-making of autonomous systems
Front Neurosci. 2024 Mar 25;18:1370024. doi: 10.3389/fnins.2024.1370024. eCollection 2024.
ABSTRACT
Spatial-temporal modeling is crucial for action recognition in videos within the field of artificial intelligence. However, robustly extracting motion information remains a primary challenge due to temporal deformations of appearances and variations in motion frequencies between different actions. In order to address these issues, we propose an innovative and effective method called the Motion Sensitive Network (MSN), incorporating the theories of artificial neural networks and key concepts of autonomous system control and decision-making. Specifically, we employ an approach known as Spatial-Temporal Pyramid Motion Extraction (STP-ME) module, adjusting convolution kernel sizes and time intervals synchronously to gather motion information at different temporal scales, aligning with the learning and prediction characteristics of artificial neural networks. Additionally, we introduce a new module called Variable Scale Motion Excitation (DS-ME), utilizing a differential model to capture motion information in resonance with the flexibility of autonomous system control. Particularly, we employ a multi-scale deformable convolutional network to alter the motion scale of the target object before computing temporal differences across consecutive frames, providing theoretical support for the flexibility of autonomous systems. Temporal modeling is a crucial step in understanding environmental changes and actions within autonomous systems, and MSN, by integrating the advantages of Artificial Neural Networks (ANN) in this task, provides an effective framework for the future utilization of artificial neural networks in autonomous systems. We evaluate our proposed method on three challenging action recognition datasets (Kinetics-400, Something-Something V1, and Something-Something V2). The results indicate an improvement in accuracy ranging from 1.1% to 2.2% on the test set. When compared with state-of-the-art (SOTA) methods, the proposed approach achieves a maximum performance of 89.90%. In ablation experiments, the performance gain of this module also shows an increase ranging from 2% to 5.3%. The introduced Motion Sensitive Network (MSN) demonstrates significant potential in various challenging scenarios, providing an initial exploration into integrating artificial neural networks into the domain of autonomous systems.
PMID:38591065 | PMC:PMC11000707 | DOI:10.3389/fnins.2024.1370024
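As a simplified stand-in for the motion-excitation idea described above (not the authors' DS-ME module, which additionally uses deformable convolutions), the sketch below computes feature differences between consecutive frames at two temporal strides and stacks them along the channel axis:

import torch
import torch.nn as nn

class TemporalDifference(nn.Module):
    # Feature-level motion cue: differences between frames at several temporal strides.
    def forward(self, x):                  # x: (batch, time, C, H, W)
        diffs = []
        for stride in (1, 2):              # two temporal scales
            d = x[:, stride:] - x[:, :-stride]
            pad = torch.zeros_like(x[:, :stride])
            diffs.append(torch.cat([d, pad], dim=1))
        return torch.cat(diffs, dim=2)     # stack motion features along channels

feat = torch.randn(2, 8, 16, 14, 14)       # placeholder per-frame feature maps
print(TemporalDifference()(feat).shape)    # (2, 8, 32, 14, 14)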
Deep Learning-Based Prediction of Individual Geographic Atrophy Progression from a Single Baseline OCT
Ophthalmol Sci. 2024 Jan 17;4(4):100466. doi: 10.1016/j.xops.2024.100466. eCollection 2024 Jul-Aug.
ABSTRACT
OBJECTIVE: To identify the individual progression of geographic atrophy (GA) lesions from baseline OCT images of patients in routine clinical care.
DESIGN: Clinical evaluation of a deep learning-based algorithm.
SUBJECTS: One hundred eighty-four eyes of 100 consecutively enrolled patients.
METHODS: OCT and fundus autofluorescence (FAF) images (both Spectralis, Heidelberg Engineering) of patients with GA secondary to age-related macular degeneration in routine clinical care were used for model validation. Fundus autofluorescence images were annotated manually by delineating the GA area by certified readers of the Vienna Reading Center. The annotated FAF images were anatomically registered in an automated manner to the corresponding OCT scans, resulting in 2-dimensional en face OCT annotations, which were taken as a reference for the model performance. A deep learning-based method for modeling the GA lesion growth over time from a single baseline OCT was evaluated. In addition, the ability of the algorithm to identify fast progressors for the top 10%, 15%, and 20% of GA growth rates was analyzed.
MAIN OUTCOME MEASURES: Dice similarity coefficient (DSC) and mean absolute error (MAE) between manual and predicted GA growth.
RESULTS: The deep learning-based tool was able to reliably identify disease activity in GA using a standard OCT image taken at a single baseline time point. The mean DSC for the total GA region increased over the first 2 years of prediction (0.80-0.82). With increasing time intervals beyond 3 years, the DSC decreased slightly to a mean of 0.70. The MAE was low over the first year and slowly increased with advancing time, with mean values ranging from 0.25 mm to 0.69 mm for the total GA region prediction. The model achieved an area under the curve of 0.81, 0.79, and 0.77 for the identification of the top 10%, 15%, and 20% growth rates, respectively.
CONCLUSIONS: The proposed algorithm is capable of fully automated GA lesion growth prediction from a single baseline OCT in a time-continuous fashion in the form of en face maps. The results are a promising step toward clinical decision support tools for therapeutic dosing and guidance of patient management because the first treatment for GA has recently become available.
FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
PMID:38591046 | PMC:PMC11000109 | DOI:10.1016/j.xops.2024.100466
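The two outcome measures above are straightforward to compute from predicted and reference en face masks; a minimal sketch with synthetic masks and a hypothetical pixel size:

import numpy as np

def dice(pred_mask, ref_mask):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    intersection = np.logical_and(pred_mask, ref_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + ref_mask.sum())

pred = np.zeros((200, 200), bool); pred[50:150, 50:150] = True   # predicted GA en face map
ref  = np.zeros((200, 200), bool); ref[60:160, 55:155] = True    # reader-annotated reference
pixel_area = 0.01                                                # mm^2 per pixel (assumed)
area_error = abs(int(pred.sum()) - int(ref.sum())) * pixel_area  # absolute area error
print(f"DSC = {dice(pred, ref):.2f}, area error = {area_error:.2f} mm^2")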
Inference of drug off-target effects on cellular signaling using interactome-based deep learning
iScience. 2024 Mar 14;27(4):109509. doi: 10.1016/j.isci.2024.109509. eCollection 2024 Apr 19.
ABSTRACT
Many diseases emerge from dysregulated cellular signaling, and drugs are often designed to target specific signaling proteins. Off-target effects are, however, common and may ultimately result in failed clinical trials. Here we develop a computer model of the cell's transcriptional response to drugs for improved understanding of their mechanisms of action. The model is based on ensembles of artificial neural networks and simultaneously infers drug-target interactions and their downstream effects on intracellular signaling. With this, it predicts transcription factors' activities, while recovering known drug-target interactions and inferring many new ones, which we validate with an independent dataset. As a case study, we analyze the effects of the drug Lestaurtinib on downstream signaling. Alongside its intended target, FLT3, the model predicts an inhibition of CDK2 that enhances the downregulation of the cell cycle-critical transcription factor FOXM1. Our approach can therefore enhance our understanding of drug signaling for therapeutic design.
PMID:38591003 | PMC:PMC11000001 | DOI:10.1016/j.isci.2024.109509
Novel deep learning framework for detection of epileptic seizures using EEG signals
Front Comput Neurosci. 2024 Mar 21;18:1340251. doi: 10.3389/fncom.2024.1340251. eCollection 2024.
ABSTRACT
INTRODUCTION: Epilepsy is a chronic neurological disorder characterized by abnormal electrical activity in the brain, often leading to recurrent seizures. With 50 million people worldwide affected by epilepsy, there is a pressing need for efficient and accurate methods to detect and diagnose seizures. Electroencephalogram (EEG) signals have emerged as a valuable tool in detecting epilepsy and other neurological disorders. Traditionally, the process of analyzing EEG signals for seizure detection has relied on manual inspection by experts, which is time-consuming, labor-intensive, and susceptible to human error. To address these limitations, researchers have turned to machine learning and deep learning techniques to automate the seizure detection process.
METHODS: In this work, we propose a novel method for epileptic seizure detection that combines a 1-D convolutional layer, a Bidirectional Long Short-Term Memory (LSTM) layer, a Gated Recurrent Unit (GRU) layer, and an average pooling layer into a single unit. This unit is used repeatedly in the proposed model to extract features, which are then passed to dense layers to predict the class of the EEG waveform. The performance of the proposed model is verified on the Bonn dataset. To assess the robustness and generalizability of our proposed architecture, we employ five-fold cross-validation. By dividing the dataset into five subsets and iteratively training and testing the model on different combinations of these subsets, we obtain robust performance measures, including accuracy, sensitivity, and specificity.
RESULTS: Our proposed model achieves an accuracy of 99%-100% for binary classification into seizure and normal waveforms, 97.2%-99.2% for classification into normal, interictal, and seizure waveforms, 96.2%-98.4% for four-class classification, and 95.81%-98% for five-class classification.
DISCUSSION: Our proposed models have achieved significant improvements in the performance metrics for the binary classifications and multiclass classifications. We demonstrate the effectiveness of the proposed architecture in accurately detecting epileptic seizures from EEG signals by using EEG signals of varying lengths. The results indicate its potential as a reliable and efficient tool for automated seizure detection, paving the way for improved diagnosis and management of epilepsy.
PMID:38590939 | PMC:PMC11000706 | DOI:10.3389/fncom.2024.1340251
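A minimal PyTorch sketch of the repeated unit described above (1-D convolution, bidirectional LSTM, GRU, and average pooling), stacked and followed by a dense classifier. The channel sizes, depth, and input length are arbitrary placeholders, not the paper's configuration.

import torch
import torch.nn as nn

class ConvRNNUnit(nn.Module):
    def __init__(self, in_ch, conv_ch, rnn_hidden):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, conv_ch, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(conv_ch, rnn_hidden, batch_first=True, bidirectional=True)
        self.gru = nn.GRU(2 * rnn_hidden, rnn_hidden, batch_first=True)
        self.pool = nn.AvgPool1d(kernel_size=2)

    def forward(self, x):                 # x: (batch, channels, time)
        y = torch.relu(self.conv(x))
        y = y.permute(0, 2, 1)            # -> (batch, time, conv_ch)
        y, _ = self.bilstm(y)
        y, _ = self.gru(y)
        y = y.permute(0, 2, 1)            # -> (batch, rnn_hidden, time)
        return self.pool(y)               # halve the temporal length

class EEGSeizureNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.blocks = nn.Sequential(ConvRNNUnit(1, 32, 64), ConvRNNUnit(64, 32, 64))
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                        nn.Linear(64, n_classes))

    def forward(self, x):                 # x: (batch, 1, samples)
        return self.classifier(self.blocks(x))

logits = EEGSeizureNet(n_classes=3)(torch.randn(8, 1, 1024))   # e.g. normal/interictal/seizure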
Predicting rectal cancer tumor budding grading based on MRI and CT with multimodal deep transfer learning: A dual-center study
Heliyon. 2024 Mar 26;10(7):e28769. doi: 10.1016/j.heliyon.2024.e28769. eCollection 2024 Apr 15.
ABSTRACT
OBJECTIVE: To investigate the effectiveness of a multimodal deep learning model in predicting tumor budding (TB) grading in rectal cancer (RC) patients.
MATERIALS AND METHODS: A retrospective analysis was conducted on 355 patients with rectal adenocarcinoma from two different hospitals. Among them, 289 patients from our institution were randomly divided into an internal training cohort (n = 202) and an internal validation cohort (n = 87) in a 7:3 ratio, while an additional 66 patients from another hospital constituted an external validation cohort. Various deep learning models were constructed and compared for their performance using T1CE and CT-enhanced images, and the optimal models were selected for the creation of a multimodal fusion model. Based on univariable and multivariable logistic regression, clinical N staging and fecal occult blood were identified as independent risk factors and used to construct the clinical model. A decision-level fusion was employed to integrate these two models into an ensemble model. The predictive performance of each model was evaluated using the area under the curve (AUC), DeLong's test, calibration curves, and decision curve analysis (DCA). Gradient-weighted Class Activation Mapping (Grad-CAM) was used for model visualization and interpretation.
RESULTS: The multimodal fusion model demonstrated superior performance compared to single-modal models, with AUC values of 0.869 (95% CI: 0.761-0.976) for the internal validation cohort and 0.848 (95% CI: 0.721-0.975) for the external validation cohort. N stage and fecal occult blood were identified as clinically independent risk factors through univariable and multivariable logistic regression analysis. The final ensemble model exhibited the best performance, with AUC values of 0.898 (95% CI: 0.820-0.975) for the internal validation cohort and 0.868 (95% CI: 0.768-0.968) for the external validation cohort.
CONCLUSION: Multimodal deep learning models can effectively and non-invasively provide individualized predictions for TB grading in RC patients, offering valuable guidance for treatment selection and prognosis assessment.
PMID:38590908 | PMC:PMC11000007 | DOI:10.1016/j.heliyon.2024.e28769
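The decision-level fusion above is not fully specified in the abstract; a common and simple instance is a weighted average of the two models' predicted probabilities, as sketched here with made-up outputs:

import numpy as np

def decision_level_fusion(p_imaging, p_clinical, w_imaging=0.5):
    # Fuse two probabilistic models by a weighted average of their
    # predicted probabilities of high tumor budding (weight is assumed).
    return w_imaging * p_imaging + (1.0 - w_imaging) * p_clinical

p_img = np.array([0.81, 0.33, 0.64])   # multimodal deep-learning model outputs (illustrative)
p_cln = np.array([0.70, 0.20, 0.55])   # clinical logistic-regression outputs (illustrative)
print(decision_level_fusion(p_img, p_cln))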
On the search for efficient face recognition algorithm subject to multiple environmental constraints
Heliyon. 2024 Mar 25;10(7):e28568. doi: 10.1016/j.heliyon.2024.e28568. eCollection 2024 Apr 15.
ABSTRACT
In the literature, the majority of face recognition models suffer performance degradation when presented with test images acquired under multiple environmental constraints (occlusion and varying expression). The performance of these models deteriorates further as the degree of degradation of the test images increases (relatively higher occlusion levels). Deep learning-based face recognition models have attracted much attention in the research community, as they are purported to outperform classical PCA-based methods. Unfortunately, their application to real-life problems is limited by their high computational complexity and relatively long run-times. This study proposes an enhancement of some PCA-based methods (with relatively lower computational complexity and run-time) to overcome the challenges posed to the recognition module in the presence of multiple constraints. The study compared the performance of an enhanced classical PCA-based method (HE-GC-DWT-PCA/SVD) to the FaceNet algorithm (a deep learning method) using expression-variant face images artificially occluded at 30% and 40%. The study leveraged two statistical imputation methods, MissForest and Multiple Imputation by Chained Equations (MICE), for occlusion recovery. From the numerical evaluation results, although the two models achieved the same recognition rate (85.19%) at the 30% occlusion level, the enhanced PCA-based algorithm (HE-GC-DWT-PCA/SVD) outperformed the FaceNet model at the 40% occlusion rate, with a recognition rate of 83.33%. Although both MissForest and MICE performed creditably well as de-occlusion mechanisms at higher levels of occlusion, MissForest outperformed MICE. The MissForest imputation mechanism and the proposed HE-GC-DWT-PCA/SVD algorithm are therefore recommended for occlusion recovery and for the recognition of test images under multiple constraints, respectively.
PMID:38590879 | PMC:PMC10999939 | DOI:10.1016/j.heliyon.2024.e28568
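As a hedged illustration of the MICE-style occlusion-recovery step described above, the sketch below uses scikit-learn's IterativeImputer on random data standing in for flattened face images; it is not the authors' exact pipeline.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

faces = np.random.rand(40, 16 * 16)            # placeholder flattened face images
mask = np.random.rand(*faces.shape) < 0.30     # ~30% of pixels occluded
occluded = faces.copy()
occluded[mask] = np.nan

imputer = IterativeImputer(max_iter=5, random_state=0)   # MICE-style chained equations
recovered = imputer.fit_transform(occluded)
print(np.abs(recovered - faces)[mask].mean())  # mean reconstruction error on occluded pixels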
Number of intraepithelial lymphocytes and presence of a subepithelial band in normal colonic mucosa differs according to stainings and evaluation method
J Pathol Inform. 2024 Mar 24;15:100374. doi: 10.1016/j.jpi.2024.100374. eCollection 2024 Dec.
ABSTRACT
Chronic watery diarrhea is a frequent symptom. In approximately 10% of the patients, a diagnosis of microscopic colitis (MC) is established. The diagnosis relies on specific, but sometimes subtle, histopathological findings. As the histology of normal intestinal mucosa varies, discriminating subtle features of MC from normal tissue can be challenging, and auxiliary stainings are therefore increasingly used. The aim of this study was to determine the variance in the number of intraepithelial lymphocytes (IELs) and the presence of a subepithelial band in normal ileal and colonic mucosa, according to different stains and digital assessment. Sixty-one patients without diarrhea, referred to screening colonoscopy due to a positive faecal blood test and presenting with endoscopically normal mucosa, were included. Basic histological features, the number of IELs, and the thickness of a subepithelial band were manually evaluated, and a deep learning-based algorithm was developed to digitally determine the number of IELs in each of two epithelial compartments (surface epithelium and crypt epithelium) and the density of lymphocytes in the lamina propria compartment. The number of IELs was significantly higher on CD3-stained slides compared with slides stained with Hematoxylin and Eosin (HE) (p<0.001), and even higher numbers were reached using digital analysis. No significant difference between the right and left colon in IELs or in the density of CD3-positive lymphocytes in the lamina propria was found. No subepithelial band was present in HE-stained slides, while a thin band was visualized on special stains. In conclusion, in this cohort of prospectively collected ileal and colonic biopsies from asymptomatic patients, the range of IELs and the detection of a subepithelial collagenous band varied depending on the stain and the method used for assessment. As the assessment of biopsies from patients with diarrhea constitutes a considerable workload in pathology departments, digital image analysis is highly desired. The knowledge provided by the present study highlights important differences that should be considered before introducing this method in the clinic.
PMID:38590727 | PMC:PMC10999801 | DOI:10.1016/j.jpi.2024.100374