Deep learning

Multi-receptor skin with highly sensitive tele-perception somatosensory

Wed, 2024-09-11 06:00

Sci Adv. 2024 Sep 13;10(37):eadp8681. doi: 10.1126/sciadv.adp8681. Epub 2024 Sep 11.

ABSTRACT

The limitations and complexity of traditional noncontact sensors in terms of sensitivity and threshold settings pose great challenges to extending the traditional five human senses. Here, we propose tele-perception to enhance human perception and cognition beyond these conventional noncontact sensors. Our bionic multi-receptor skin employs structured doping of inorganic nanoparticles to enhance the local electric field and, coupled with advanced deep learning algorithms, achieves a ΔVd sensitivity of 14.2, surpassing existing benchmarks. This enables precise remote control of surveillance systems and robotic manipulators. Our long short-term memory-based adaptive pulse identification achieves 99.56% accuracy in material identification with accelerated processing speeds. In addition, we demonstrate the feasibility of using a two-dimensional (2D) sensor matrix to integrate real object scan data into a convolutional neural network to accurately discriminate the shape and material of 3D objects. This promises transformative advances in human-computer interaction and neuromorphic computing.

PMID:39259789 | DOI:10.1126/sciadv.adp8681

Categories: Literature Watch

Deep learning aided measurement of outer retinal layer metrics as biomarkers for inherited retinal degenerations: opportunities and challenges

Wed, 2024-09-11 06:00

Curr Opin Ophthalmol. 2024 Aug 29. doi: 10.1097/ICU.0000000000001088. Online ahead of print.

ABSTRACT

PURPOSE OF REVIEW: The purpose of this review was to provide a summary of currently available retinal imaging and visual function testing methods for assessing inherited retinal degenerations (IRDs), with the emphasis on the application of deep learning (DL) approaches to assist the determination of structural biomarkers for IRDs.

RECENT FINDINGS: Recent work has focused on clinical trials for IRDs, on discovering effective biomarkers to serve as trial endpoints, and on DL applications in processing retinal images to detect disease-related structural changes.

SUMMARY: Assessing photoreceptor loss is a direct way to evaluate IRDs. Outer retinal layer structures, including the outer nuclear layer, ellipsoid zone, photoreceptor outer segment, and RPE, are potential structural biomarkers for IRDs. More work may be needed on the structure-function relationship.

PMID:39259656 | DOI:10.1097/ICU.0000000000001088

Categories: Literature Watch

Artificial intelligence in myopia in children: current trends and future directions

Wed, 2024-09-11 06:00

Curr Opin Ophthalmol. 2024 Aug 27. doi: 10.1097/ICU.0000000000001086. Online ahead of print.

ABSTRACT

PURPOSE OF REVIEW: Myopia is one of the major causes of visual impairment globally, and myopia and its complications thus place a heavy healthcare and economic burden. With most cases of myopia developing during childhood, interventions to slow myopia progression are most effective when implemented early. To address this public health challenge, artificial intelligence has emerged as a potential solution in childhood myopia management.

RECENT FINDINGS: The bulk of artificial intelligence research in childhood myopia was previously focused on traditional machine learning models for the identification of children at high risk for myopia progression. Recently, there has been a surge of literature with larger datasets, more computational power, and more complex computation models, leveraging artificial intelligence for novel approaches including large-scale myopia screening using big data, multimodal data, and advancing imaging technology for myopia progression, and deep learning models for precision treatment.

SUMMARY: Artificial intelligence holds significant promise in transforming the field of childhood myopia management. Novel artificial intelligence modalities including automated machine learning, large language models, and federated learning could play an important role in the future by delivering precision medicine, improving health literacy, and allowing the preservation of data privacy. However, along with these advancements in technology come practical challenges including regulation and clinical integration.

PMID:39259652 | DOI:10.1097/ICU.0000000000001086

Categories: Literature Watch

Effects of individual research practices on fNIRS signal quality and latent characteristics

Wed, 2024-09-11 06:00

IEEE Trans Neural Syst Rehabil Eng. 2024 Sep 11;PP. doi: 10.1109/TNSRE.2024.3458396. Online ahead of print.

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) is an increasingly popular tool for cross-cultural neuroimaging studies. However, the reproducibility and comparability of fNIRS studies are still an open issue in the scientific community. The lack of standardized experimental practices and of clear guidelines regarding fNIRS use contributes to undermining the reproducibility of results. For this reason, much effort is now directed at assessing the impact of heterogeneous experimental practices in creating divergent fNIRS results. The current work aims to assess differences in fNIRS signal quality in data collected by two different labs in two different cohorts: Singapore (N=74) and Italy (N=84). Random segments of 20 s were extracted from each channel in each participant's NIRScap, and 1280 deep features were obtained using a deep learning model trained to classify the quality of fNIRS data. Two datasets were generated: the ALL dataset (segments with bad and good data quality) and the GOOD dataset (segments with good quality only). Each dataset was divided into train and test partitions, which were used to train and evaluate the performance of a Support Vector Machine (SVM) model in classifying the cohorts from signal-quality features. Results showed that the SG cohort had significantly higher occurrences of bad signal quality in the majority of the fNIRS channels. Moreover, the SVM correctly classified the cohorts when using the ALL dataset. However, the performance dropped almost completely (except for five channels) when the SVM had to classify the cohorts using data from the GOOD dataset. These results suggest that fNIRS raw data obtained by different labs might possess different levels of quality as well as different latent characteristics beyond quality per se. The current study highlights the importance of defining clear guidelines both for the conduct of fNIRS experiments and for the reporting of data quality in fNIRS manuscripts.
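
As a reader's sketch (not the authors' code), the cohort-classification step described above, an SVM trained on deep signal-quality features, can be reproduced in miniature with synthetic features standing in for the real fNIRS data:

```python
# Toy reproduction of the abstract's SVM step: classify two cohorts from
# 1280-dimensional "deep quality features". All feature values here are
# synthetic placeholders, not real fNIRS data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_cohort, n_features = 80, 1280   # 1280 deep features, as in the abstract

# Two synthetic cohorts with slightly shifted feature means
X = np.vstack([rng.normal(0.0, 1.0, (n_per_cohort, n_features)),
               rng.normal(0.3, 1.0, (n_per_cohort, n_features))])
y = np.array([0] * n_per_cohort + [1] * n_per_cohort)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # train on the "train partition"
acc = clf.score(X_te, y_te)               # evaluate on the held-out partition
print(f"held-out accuracy: {acc:.2f}")
```

With a genuine mean shift between cohorts the classifier separates them easily; the paper's finding is the interesting converse, that separability largely vanishes once only good-quality segments are kept.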

PMID:39259640 | DOI:10.1109/TNSRE.2024.3458396

Categories: Literature Watch

GIAE-DTI: Predicting Drug-Target Interactions Based on Heterogeneous Network and GIN-based Graph Autoencoder

Wed, 2024-09-11 06:00

IEEE J Biomed Health Inform. 2024 Sep 11;PP. doi: 10.1109/JBHI.2024.3458794. Online ahead of print.

ABSTRACT

Accurate prediction of drug-target interactions (DTIs) is essential for advancing drug discovery and repurposing. However, the sparsity of DTI data limits the effectiveness of existing computational methods, which primarily focus on sparse DTI networks and have poor performance in aggregating information from neighboring nodes and representing isolated nodes within the network. In this study, we propose a novel deep learning framework, named GIAE-DTI, which considers cross-modal similarity of drugs and targets and constructs a heterogeneous network for DTI prediction. Firstly, the model calculates the cross-modal similarity of drugs and proteins from the relationships among drugs, proteins, diseases, and side effects, and performs similarity integration by taking the average. Then, a drug-target heterogeneous network is constructed, including drug-drug interactions, protein-protein interactions, and drug-target interactions processed by weighted K nearest known neighbors. In the heterogeneous network, a graph autoencoder based on a graph isomorphism network is employed for feature extraction, while a dual decoder is utilized to achieve better self-supervised learning, resulting in latent feature representations for drugs and targets. Finally, a deep neural network is employed to predict DTIs. The experimental results indicate that on the benchmark dataset, GIAE-DTI achieves AUC and AUPR scores of 0.9533 and 0.9619, respectively, in DTI prediction, outperforming the current state-of-the-art methods. Additionally, case studies on four 5-hydroxytryptamine receptor-related targets and five drugs related to mental diseases show the great potential of the proposed method in practical applications.
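
The "weighted K nearest known neighbors" (WKNKN) preprocessing mentioned above can be sketched as follows; the interaction matrix, similarity values, and the decay parameter `eta` are illustrative assumptions, not the paper's actual data or settings:

```python
# WKNKN sketch: estimate unknown drug-target entries as similarity-weighted
# averages over each drug's k most similar drugs, with decaying weights.
import numpy as np

def wknkn_rows(Y, S, k=2, eta=0.7):
    """Fill each drug's row of interaction matrix Y using its k most similar
    drugs (per similarity matrix S), weighted by eta**i * similarity."""
    Y_new = Y.astype(float).copy()
    for d in range(Y.shape[0]):
        order = np.argsort(-S[d])          # neighbors by decreasing similarity
        order = order[order != d][:k]      # exclude the drug itself
        w = (eta ** np.arange(len(order))) * S[d, order]
        if w.sum() > 0:
            est = w @ Y[order].astype(float) / w.sum()
            Y_new[d] = np.maximum(Y_new[d], est)   # keep known interactions
    return Y_new

Y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])          # drug 2 is an isolated node: no known DTIs
S = np.array([[1.0, 0.2, 0.9],
              [0.2, 1.0, 0.8],
              [0.9, 0.8, 1.0]])    # but drug 2 is similar to drugs 0 and 1
Y_filled = wknkn_rows(Y, S)
print(Y_filled)
```

The isolated drug's row acquires nonzero estimates from its similar neighbors, which is exactly the sparsity problem the heterogeneous-network construction is meant to mitigate.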

PMID:39259623 | DOI:10.1109/JBHI.2024.3458794

Categories: Literature Watch

Deep learning assisted quantitative detection of cardiac troponin I in hierarchical dendritic copper-nickel nanostructure lateral flow immunoassay

Wed, 2024-09-11 06:00

Anal Methods. 2024 Sep 11. doi: 10.1039/d4ay01187b. Online ahead of print.

ABSTRACT

The rising demand for point-of-care testing (POCT) in disease diagnosis has made lateral flow immunoassay (LFIA) sensors based on dendritic metal thin films (HD-nanometal) and background fluorescence technology essential for rapid and accurate disease marker detection, thanks to their integrated design, high sensitivity, and cost-effectiveness. However, their unique 3D nanostructures cause significant fluorescence variation, challenging traditional image processing methods in segmenting weak fluorescence regions. This paper develops a deep learning method to efficiently segment target regions in HD-nanometal LFIA sensor images, improving quantitative detection accuracy. We propose an improved UNet++ network with attention and residual modules that accurately segments varying fluorescence intensities, especially weak ones. We evaluated the method using IoU and Dice coefficients, comparing it with UNet, Deeplabv3, and UNet++. We used an HD-nanoCu-Ni LFIA sensor for cardiac troponin I (cTnI) as a case study to validate the method's practicality. The proposed method achieved 96.3% IoU, outperforming the other networks. The R2 between the characteristic quantity and cTnI concentration reached 0.994, confirming the method's accuracy and reliability. This work enhances POCT accuracy and provides a reference for the future expansion of fluorescence immunochromatography.
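
The IoU and Dice coefficients used for evaluation above are straightforward to compute from binary segmentation masks; the two toy masks below are illustrative:

```python
# IoU = |pred ∩ gt| / |pred ∪ gt|; Dice = 2|pred ∩ gt| / (|pred| + |gt|).
import numpy as np

def iou_dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0
    return iou, dice

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])   # toy predicted mask
gt   = np.array([[1, 1, 0],
                 [1, 1, 0]])   # toy ground-truth mask
iou, dice = iou_dice(pred, gt)
print(f"IoU={iou:.3f}, Dice={dice:.3f}")
```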

PMID:39259228 | DOI:10.1039/d4ay01187b

Categories: Literature Watch

Development and Validation of a Deep Learning Model for Prediction of Adult Physiological Deterioration

Wed, 2024-09-11 06:00

Crit Care Explor. 2024 Sep 11;6(9):e1151. doi: 10.1097/CCE.0000000000001151. eCollection 2024 Sep 1.

ABSTRACT

BACKGROUND: Prediction-based strategies for physiologic deterioration offer the potential for earlier clinical interventions that improve patient outcomes. Current strategies are limited because they operate on inconsistent definitions of deterioration, attempt to dichotomize a dynamic and progressive phenomenon, and offer poor performance.

OBJECTIVE: Can a deep learning deterioration prediction model (Deep Learning Enhanced Triage and Emergency Response for Inpatient Optimization [DETERIO]) based on a consensus definition of deterioration (the Adult Inpatient Decompensation Event [AIDE] criteria) and that approaches deterioration as a state "value-estimation" problem outperform a commercially available deterioration score?

DERIVATION COHORT: The derivation cohort contained retrospective patient data collected from both inpatient services (inpatient) and emergency departments (EDs) of two hospitals within the University of California San Diego Health System. There were 330,729 total patients; 71,735 were inpatient and 258,994 were ED. Of these data, 20% were randomly sampled as a retrospective "testing set."

VALIDATION COHORT: The validation cohort contained temporal patient data. There were 65,898 total patients; 13,750 were inpatient and 52,148 were ED.

PREDICTION MODEL: DETERIO was developed and validated on these data, using the AIDE criteria to generate a composite score. DETERIO's architecture builds upon previous work. DETERIO's prediction performance up to 12 hours before T0 was compared against Epic Deterioration Index (EDI).

RESULTS: In the retrospective testing set, DETERIO's area under the receiver operating characteristic curve (AUC) was 0.797 and 0.874 for the inpatient and ED subsets, respectively. In the temporal validation cohort, the corresponding AUCs were 0.775 and 0.856. DETERIO outperformed EDI in the inpatient validation cohort (AUC, 0.775 vs. 0.721; p < 0.01) while maintaining superior sensitivity and a comparable rate of false alarms (sensitivity, 45.50% vs. 30.00%; positive predictive value, 20.50% vs. 16.11%).
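
The AUCs reported above (and throughout this digest) can be computed nonparametrically from ranks via the Mann-Whitney statistic; a minimal version for tie-free scores, with illustrative inputs, is:

```python
# AUC from ranks: (rank-sum of positives - n1(n1+1)/2) / (n1 * n0).
# This minimal version assumes no tied scores.
import numpy as np

def auc_rank(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks
    n1 = labels.sum()                              # positives
    n0 = len(labels) - n1                          # negatives
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

auc = auc_rank([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
print(f"AUC = {auc:.2f}")
```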

CONCLUSIONS: DETERIO demonstrates promise in the viability of a state value-estimation approach for predicting adult physiologic deterioration. It may outperform EDI while offering additional clinical utility in triage and clinician interaction with prediction confidence and explanations. Additional studies are needed to assess generalizability and real-world clinical impact.

PMID:39258951 | DOI:10.1097/CCE.0000000000001151

Categories: Literature Watch

Predicting BRCA mutation and stratifying targeted therapy response using multimodal learning: a multicenter study

Wed, 2024-09-11 06:00

Ann Med. 2024 Dec;56(1):2399759. doi: 10.1080/07853890.2024.2399759. Epub 2024 Sep 11.

ABSTRACT

BACKGROUND: The status of BRCA1/2 genes plays a crucial role in the treatment decision-making process for multiple cancer types. However, due to high costs and limited resources, a demand for BRCA1/2 genetic testing among patients is currently unmet. Notably, not all patients with BRCA1/2 mutations achieve favorable outcomes with poly (ADP-ribose) polymerase inhibitors (PARPi), indicating the necessity for risk stratification. In this study, we aimed to develop and validate a multimodal model for predicting BRCA1/2 gene status and prognosis with PARPi treatment.

METHODS: We included 1695 slides from 1417 patients with ovarian, breast, prostate, and pancreatic cancers across three independent cohorts. Using a self-attention mechanism, we constructed a multi-instance attention model (MIAM) to detect BRCA1/2 gene status from hematoxylin and eosin (H&E) pathological images. We further combined tissue features from the MIAM model, cell features, and clinical factors (the MIAM-C model) to predict BRCA1/2 mutations and progression-free survival (PFS) with PARPi therapy. Model performance was evaluated using area under the curve (AUC) and Kaplan-Meier analysis. Morphological features contributing to MIAM-C were analyzed for interpretability.

RESULTS: Across the four cancer types, MIAM-C outperformed the deep learning-based MIAM in identifying the BRCA1/2 genotype. Interpretability analysis revealed that high-attention regions included high-grade tumors and lymphocytic infiltration, which correlated with BRCA1/2 mutations. Notably, high lymphocyte ratios appeared characteristic of BRCA1/2 mutations. Furthermore, MIAM-C predicted PARPi therapy response (log-rank p < 0.05) and served as an independent prognostic factor for patients with BRCA1/2-mutant ovarian cancer (p < 0.05; hazard ratio: 0.4, 95% confidence interval: 0.16-0.99).

CONCLUSIONS: The MIAM-C model accurately detected BRCA1/2 gene status and effectively stratified prognosis for patients with BRCA1/2 mutations.

PMID:39258876 | DOI:10.1080/07853890.2024.2399759

Categories: Literature Watch

Advancing the Prediction of MS/MS Spectra Using Machine Learning

Wed, 2024-09-11 06:00

J Am Soc Mass Spectrom. 2024 Sep 11. doi: 10.1021/jasms.4c00154. Online ahead of print.

ABSTRACT

Tandem mass spectrometry (MS/MS) is an important tool for the identification of small molecules and metabolites where resultant spectra are most commonly identified by matching them with spectra in MS/MS reference libraries. While popular, this strategy is limited by the contents of existing reference libraries. In response to this limitation, various methods are being developed for the in silico generation of spectra to augment existing libraries. Recently, machine learning and deep learning techniques have been applied to predict spectra with greater speed and accuracy. Here, we investigate the challenges these algorithms face in achieving fast and accurate predictions on a wide range of small molecules. The challenges are often amplified by the use of generic machine learning benchmarking tactics, which lead to misleading accuracy scores. Curating data sets, only predicting spectra for sufficiently high collision energies, and working more closely with experimental mass spectrometrists are recommended strategies to improve overall prediction accuracy in this nuanced field.

PMID:39258761 | DOI:10.1021/jasms.4c00154

Categories: Literature Watch

Editorial for "Multiparametric MRI-Based Deep Learning Radiomics Model for Assessing 5-Year Recurrence Risk in Non-Muscle Invasive Bladder Cancer"

Wed, 2024-09-11 06:00

J Magn Reson Imaging. 2024 Sep 11. doi: 10.1002/jmri.29592. Online ahead of print.

NO ABSTRACT

PMID:39258759 | DOI:10.1002/jmri.29592

Categories: Literature Watch

Assessing the Reporting Quality of Machine Learning Algorithms in Head and Neck Oncology

Wed, 2024-09-11 06:00

Laryngoscope. 2024 Sep 11. doi: 10.1002/lary.31756. Online ahead of print.

ABSTRACT

OBJECTIVE: This study aimed to assess reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD-AI criteria.

DATA SOURCES: A comprehensive search was conducted using PubMed, Scopus, Embase, and Cochrane Database of Systematic Reviews, incorporating search terms related to "artificial intelligence," "machine learning," "deep learning," "neural network," and various head and neck neoplasms.

REVIEW METHODS: Two independent reviewers analyzed each published study for adherence to the 65-point TRIPOD-AI criteria. Items were classified as "Yes," "No," or "NA" for each publication. The proportion of studies satisfying each TRIPOD-AI criterion was calculated. Additionally, the evidence level for each study was evaluated independently by two reviewers using the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached.

RESULTS: The study highlights the need for improvements in ML algorithm reporting in head and neck oncology. This includes more comprehensive descriptions of datasets, standardization of model performance reporting, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary for achieving standardized ML research reporting in head and neck oncology.

CONCLUSION: Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and improve patient and clinician trust, ML developers should provide open access to models, code, and source data, fostering iterative progress through community critique, thus enhancing model accuracy and mitigating biases.

LEVEL OF EVIDENCE: NA Laryngoscope, 2024.

PMID:39258420 | DOI:10.1002/lary.31756

Categories: Literature Watch

Residual swin transformer for classifying the types of cotton pests in complex background

Wed, 2024-09-11 06:00

Front Plant Sci. 2024 Aug 27;15:1445418. doi: 10.3389/fpls.2024.1445418. eCollection 2024.

ABSTRACT

BACKGROUND: Cotton pests have a major impact on cotton quality and yield during cotton production and cultivation. With the rapid development of agricultural intelligence, accurate classification of cotton pests is a key factor in realizing the precise application of pesticides by unmanned aerial vehicles (UAVs), large application devices, and other equipment.

METHODS: In this study, a cotton insect pest classification model based on an improved Swin Transformer is proposed. The model introduces a residual module (skip connection) into the Swin Transformer to address the problem that pest features are easily confused in complex backgrounds, which leads to poor classification accuracy, and to enhance the recognition of cotton pests. In this study, 2705 leaf images of cotton insect pests (three pest species: cotton aphids, cotton mirids, and cotton leaf mites) were collected in the field, and model training was performed after image preprocessing and data augmentation.
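
Generically, a residual module of the kind described above adds a block's input back to its output, so fine-grained features survive deep stages; the toy block below is a stand-in, not the actual Swin Transformer layer:

```python
# Skip connection sketch: output = input + transformed(input).
# The lambda "block" is a placeholder for a real transformer/conv block.
import numpy as np

def residual(block, x):
    return x + block(x)   # the skip connection preserves x alongside block(x)

block = lambda x: 0.1 * np.tanh(x)   # toy transformation, not Swin's layer
x = np.array([1.0, -2.0, 0.5])
y = residual(block, x)
print(y)
```

Because the identity path is always present, gradients and low-level pest features flow through even when the learned transformation is weak, which is the intuition behind adding skips to the Swin backbone.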

RESULTS: The test results showed that the accuracy of the improved model increased from 94.6% to 97.4% relative to the original model, with a prediction time of 0.00434 s per image. The improved Swin Transformer model was compared with seven classification models (VGG11, VGG11-bn, Resnet18, MobilenetV2, VIT, Swin Transformer small, and Swin Transformer base), over which its accuracy was higher by 0.5%, 4.7%, 2.2%, 2.5%, 6.3%, 7.9%, and 8.0%, respectively.

DISCUSSION: This study therefore demonstrates that the improved Swin Transformer model significantly improves the accuracy and efficiency of cotton pest detection compared with other classification models and can be deployed on edge devices such as unmanned aerial vehicles (UAVs), thus providing important technological support and a theoretical basis for cotton pest control and precision pesticide application.

PMID:39258298 | PMC:PMC11383767 | DOI:10.3389/fpls.2024.1445418

Categories: Literature Watch

Towards accurate and efficient diagnoses in nephropathology: An AI-based approach for assessing kidney transplant rejection

Wed, 2024-09-11 06:00

Comput Struct Biotechnol J. 2024 Aug 16;24:571-582. doi: 10.1016/j.csbj.2024.08.011. eCollection 2024 Dec.

ABSTRACT

The Banff classification is useful for diagnosing renal transplant rejection. However, it has limitations due to subjectivity and varying concordance in physicians' assessments. Artificial intelligence (AI) can help standardize research, increase objectivity, and accurately quantify morphological characteristics, improving reproducibility in clinical practice. This study aims to develop an AI-based solution for diagnosing acute kidney transplant rejection by introducing automated evaluation of prognostic morphological patterns. The proposed approach aims to help accurately distinguish borderline changes from rejection. We trained a deep-learning model utilizing a fine-tuned Mask R-CNN architecture, which achieved a mean Average Precision of 0.74 for the segmentation of renal tissue structures. A strong positive nonlinear correlation was found between the measured infiltration areas and fibrosis, indicating the model's potential for assessing these parameters in kidney biopsies. ROC analysis showed high predictive ability for distinguishing between ci and i scores based on infiltration-area and fibrosis-area measurements. The AI model demonstrated high precision in predicting clinical scores, which makes it a promising assistive tool for pathologists. The application of AI in nephropathology has potential for further advances, including automated morphometric evaluation, 3D histological models, and faster processing to enhance diagnostic accuracy and efficiency.

PMID:39258238 | PMC:PMC11385065 | DOI:10.1016/j.csbj.2024.08.011

Categories: Literature Watch

AI-guided identification of risk variants for adrenocortical tumours in TP53 p.R337H carrier children: a genetic association study

Wed, 2024-09-11 06:00

Lancet Reg Health Am. 2024 Aug 23;38:100863. doi: 10.1016/j.lana.2024.100863. eCollection 2024 Oct.

ABSTRACT

BACKGROUND: Adrenocortical tumours (ACT) in children are part of the Li-Fraumeni cancer spectrum and are frequently associated with a germline TP53 pathogenic variant. TP53 p.R337H is highly prevalent in the south and southeast of Brazil and predisposes to ACT with low penetrance. Thus, we aimed to investigate whether genetic variants exist which are associated with an increased risk of developing ACT in TP53 p.R337H carrier children.

METHODS: A genetic association study was conducted in trios of children (14 girls, 7 boys) from southern Brazil carriers of TP53 p.R337H with (n = 18) or without (n = 3) ACT and their parents, one of whom also carries this pathogenic variant (discovery cohort). Results were confirmed in a validation cohort of TP53 p.R337H carriers with (n = 90; 68 girls, 22 boys) or without ACT (n = 302; 165 women, 137 men).

FINDINGS: We analysed genomic data from whole exome sequencing of blood DNA from the trios. Using deep learning algorithms, under a model in which the affected child inherits from the non-carrier parent variant(s) that increase the risk of developing ACT, we found a significantly enriched representation of non-coding variants in genes involved in the cyclic AMP (cAMP) pathway, which is known to be involved in adrenocortical tumorigenesis. One of those variants (rs2278986 in the SCARB1 gene) was confirmed to be significantly enriched in the validation cohort of TP53 p.R337H carriers with ACT compared to carriers without ACT (OR 1.858; 95% CI 1.146, 3.042; p = 0.01).
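
An enrichment statistic of the kind reported above (an odds ratio with a Woolf-type 95% confidence interval) is computed from a 2x2 table of carrier counts; the counts below are hypothetical, not the study's data:

```python
# Odds ratio OR = ad/bc from a 2x2 table, with a 95% CI from the
# log-OR standard error sqrt(1/a + 1/b + 1/c + 1/d).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = variant present/absent among cases; c, d = among controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen for illustration only
or_, lo, hi = odds_ratio_ci(40, 50, 90, 210)
print(f"OR={or_:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A CI that excludes 1 (as in the study's 1.146-3.042 interval) is what supports calling the variant significantly enriched.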

INTERPRETATION: Profiling of the variant rs2278986 is a candidate for future confirmation and possible use as a tool for ACT risk stratification in TP53 p.R337H carriers.

FUNDING: Centre National de la Recherche Scientifique (CNRS), Behring Foundation, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

PMID:39258234 | PMC:PMC11386259 | DOI:10.1016/j.lana.2024.100863

Categories: Literature Watch

Unveiling the hidden: a deep learning approach to unraveling subzone-specific changes in peripapillary atrophy in type 2 diabetes

Wed, 2024-09-11 06:00

Front Cell Dev Biol. 2024 Aug 27;12:1459040. doi: 10.3389/fcell.2024.1459040. eCollection 2024.

ABSTRACT

PURPOSE: This study aimed to evaluate the optical coherence tomography angiography (OCTA) changes in subzones of peripapillary atrophy (PPA) among type 2 diabetic patients (T2DM) with or without diabetic retinopathy (DR) using well-designed deep learning models.

METHODS: A multi-task joint deep-learning model was trained and validated on 2,820 images to automate the determination and quantification of the microstructure and corresponding microcirculation of beta zone and gamma zone PPA. This model was then applied in the cross-sectional study encompassing 44 eyes affected by non-proliferative diabetic retinopathy (NPDR) and 46 eyes without DR (NDR). OCTA was utilized to image the peripapillary area in four layers: superficial capillary plexus (SCP), deep capillary plexus (DCP), choroidal capillary (CC) and middle-to-large choroidal vessel (MLCV).

RESULTS: The patients in both groups were matched for age, sex, BMI, and axial length. The width and area of the gamma zone were significantly smaller in the NPDR group than in the NDR group. Multiple linear regression analysis revealed a negative association between the diagnosis of DR and the width and area of the gamma zone. The gamma zone exhibited higher SCP, DCP, and MLCV density than the beta zone, while the beta zone showed higher CC density than the gamma zone. Compared with the NDR group, the MLCV density of the gamma zone was significantly lower in the NPDR group, and this density was positively correlated with the width and area of the gamma zone.

DISCUSSION: DR-induced peripapillary vascular changes primarily occur in gamma zone PPA. After eliminating the influence of axial length, our study demonstrated a negative correlation between DR and the gamma zone PPA. Longitudinal studies are required to further elucidate the role of the gamma zone in the development and progression of DR.

PMID:39258228 | PMC:PMC11385310 | DOI:10.3389/fcell.2024.1459040

Categories: Literature Watch

DeepMonitoring: a deep learning-based monitoring system for assessing the quality of cornea images captured by smartphones

Wed, 2024-09-11 06:00

Front Cell Dev Biol. 2024 Aug 27;12:1447067. doi: 10.3389/fcell.2024.1447067. eCollection 2024.

ABSTRACT

Smartphone-based artificial intelligence (AI) diagnostic systems could assist high-risk patients to self-screen for corneal diseases (e.g., keratitis) instead of detecting them in traditional face-to-face medical practices, enabling the patients to proactively identify their own corneal diseases at an early stage. However, AI diagnostic systems have significantly diminished performance in low-quality images which are unavoidable in real-world environments (especially common in patient-recorded images) due to various factors, hindering the implementation of these systems in clinical practice. Here, we construct a deep learning-based image quality monitoring system (DeepMonitoring) not only to discern low-quality cornea images created by smartphones but also to identify the underlying factors contributing to the generation of such low-quality images, which can guide operators to acquire high-quality images in a timely manner. This system performs well across validation, internal, and external testing sets, with AUCs ranging from 0.984 to 0.999. DeepMonitoring holds the potential to filter out low-quality cornea images produced by smartphones, facilitating the application of smartphone-based AI diagnostic systems in real-world clinical settings, especially in the context of self-screening for corneal diseases.

PMID:39258227 | PMC:PMC11385315 | DOI:10.3389/fcell.2024.1447067

Categories: Literature Watch

Diabetes detection from non-diabetic retinopathy fundus images using deep learning methodology

Wed, 2024-09-11 06:00

Heliyon. 2024 Aug 22;10(16):e36592. doi: 10.1016/j.heliyon.2024.e36592. eCollection 2024 Aug 30.

ABSTRACT

Diabetes is one of the leading causes of morbidity and mortality in the United States and worldwide. Traditionally, diabetes detection from retinal images has been performed only using relevant retinopathy indications. This research aimed to develop an artificial intelligence (AI) machine learning model which can detect the presence of diabetes from fundus imagery of eyes without any diabetic eye disease. A machine learning algorithm was trained on the EyePACS dataset, consisting of 47,076 images. Patients were also divided into cohorts based on disease duration, each cohort consisting of patients diagnosed within the timeframe in question (e.g., 15 years) and healthy participants. The algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.86 in detecting diabetes per patient visit when averaged across camera models, and an AUC of 0.83 on the task of detecting diabetes per image. The results suggest that diabetes may be diagnosed non-invasively using fundus imagery alone. This may enable diabetes diagnosis at the point of care, as well as in other accessible venues, facilitating the diagnosis of many undiagnosed people with diabetes.

PMID:39258195 | PMC:PMC11386038 | DOI:10.1016/j.heliyon.2024.e36592

Categories: Literature Watch

Advancing sub-seasonal to seasonal multi-model ensemble precipitation prediction in East Asia: deep learning-based post-processing for improved accuracy

Wed, 2024-09-11 06:00

Heliyon. 2024 Aug 8;10(16):e35933. doi: 10.1016/j.heliyon.2024.e35933. eCollection 2024 Aug 30.

ABSTRACT

The growing interest in subseasonal to seasonal (S2S) prediction data across different industries underscores its potential use in comprehending weather patterns, extreme conditions, and important sectors such as agriculture and energy management. However, concerns about its accuracy have been raised, and enhancing the precision of rainfall predictions remains challenging in S2S forecasts. This study enhanced S2S prediction skill for precipitation amount and occurrence over the East Asian region by employing deep learning-based post-processing techniques. We utilized a modified U-Net architecture that wraps all of its convolutional layers with TimeDistributed layers as the deep learning model. For the training datasets, precipitation prediction data from six S2S climate models and their multi-model ensemble (MME) were constructed, and daily precipitation occurrence was derived from three threshold values: 0% of daily precipitation for no-rain events, <33% for light rain, and >67% for heavy rain. Based on the precipitation amount prediction skills of the six climate models, deep learning-based post-processing outperformed post-processing using multiple linear regression (MLR) at lead times of weeks 2-4. The prediction accuracy of precipitation occurrence with MLR-based post-processing did not significantly improve, whereas deep learning-based post-processing enhanced the prediction accuracy across all lead times, demonstrating its superiority over MLR. We thus enhanced the accuracy of forecasting both the amount and occurrence of precipitation in individual climate models using deep learning-based post-processing.
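
A TimeDistributed wrapper, as used in the modified U-Net above, simply applies the same layer independently at each lead-time step; a minimal stand-in with a toy pooling layer instead of a convolution looks like this:

```python
# TimeDistributed sketch: apply one layer to every time slice of a
# (batch, time, ...) tensor and restack along the time axis.
import numpy as np

def time_distributed(layer, x):
    """x has shape (batch, time, ...); apply `layer` to each time slice."""
    return np.stack([layer(x[:, t]) for t in range(x.shape[1])], axis=1)

layer = lambda a: a.mean(axis=(1, 2))   # toy per-step spatial pooling
x = np.ones((2, 4, 8, 8))               # (batch, lead-time steps, H, W)
out = time_distributed(layer, x)
print(out.shape)
```

Sharing one set of layer weights across all lead times is the point of the wrapper: each forecast week is post-processed by the same learned mapping.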

PMID:39258194 | PMC:PMC11385763 | DOI:10.1016/j.heliyon.2024.e35933

Categories: Literature Watch

Deep learning image analysis for filamentous fungi taxonomic classification: Dealing with small datasets with class imbalance and hierarchical grouping

Wed, 2024-09-11 06:00

Biol Methods Protoc. 2024 Aug 27;9(1):bpae063. doi: 10.1093/biomethods/bpae063. eCollection 2024.

ABSTRACT

Deep learning applications in taxonomic classification for animals and plants from images have become popular, while those for microorganisms are still lagging behind. Our study investigated the potential of deep learning for the taxonomic classification of hundreds of filamentous fungi from colony images, which is typically a task that requires specialized knowledge. We isolated soil fungi, annotated their taxonomy using standard molecular barcode techniques, and took images of the fungal colonies grown in petri dishes (n = 606). We applied a convolutional neural network with multiple training approaches and model architectures to deal with some common issues in ecological datasets: small amounts of data, class imbalance, and hierarchically structured grouping. Model performance was overall low, mainly due to the relatively small dataset, class imbalance, and the high morphological plasticity exhibited by fungal colonies. However, our approach indicates that morphological features like color, patchiness, and colony extension rate could be used for the recognition of fungal colonies at higher taxonomic ranks (i.e. phylum, class, and order). Model explanation implies that image recognition characters appear at different positions within the colony (e.g. outer or inner hyphae) depending on the taxonomic resolution. Our study suggests the potential of deep learning applications for a better understanding of the taxonomy and ecology of filamentous fungi amenable to axenic culturing. Meanwhile, it also exposes some technical challenges of deep learning image analysis in ecology, showing that the domain of applicability of these methods needs to be carefully considered.

PMID:39258158 | PMC:PMC11387011 | DOI:10.1093/biomethods/bpae063

Categories: Literature Watch

Personalized Deep Learning Model for Clinical Target Volume on Daily Cone Beam Computed Tomography in Breast Cancer Patients

Wed, 2024-09-11 06:00

Adv Radiat Oncol. 2024 Jul 26;9(10):101580. doi: 10.1016/j.adro.2024.101580. eCollection 2024 Oct.

ABSTRACT

PURPOSE: Herein, we developed a deep learning algorithm to improve the segmentation of the clinical target volume (CTV) on daily cone beam computed tomography (CBCT) scans in breast cancer radiation therapy. By leveraging the Intentional Deep Overfit Learning (IDOL) framework, we aimed to enhance personalized image-guided radiation therapy based on patient-specific learning.

METHODS AND MATERIALS: We used 240 CBCT scans from 100 breast cancer patients and employed a 2-stage training approach. The first stage involved training general deep learning models (Swin UNETR, UNet, and SegResNet) on 90 patients. The second stage used intentional overfitting on the remaining 10 patients to produce patient-specific CBCT outputs. Quantitative evaluation was conducted using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), mean surface distance (MSD), and an independent-samples t test against expert contours on CBCT scans from the first to the 15th fraction.
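The Dice Similarity Coefficient used for evaluation compares two binary segmentation masks as DSC = 2|A∩B| / (|A| + |B|). A minimal NumPy sketch, for illustration only (not the study's evaluation code):

```python
import numpy as np

def dice_similarity(pred, ref):
    """Dice Similarity Coefficient of two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

A DSC of 1.0 indicates perfect overlap with the expert contour; the values around 0.98 reported below therefore indicate near-complete agreement.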

RESULTS: IDOL integration significantly improved CTV segmentation, particularly with the Swin UNETR model (P values < .05). Using patient-specific data, IDOL enhanced the DSC, HD, and MSD metrics. The average DSC for the 15th fraction improved from 0.9611 to 0.9819, the average HD decreased from 4.0118 mm to 1.3935 mm, and the average MSD decreased from 0.8723 to 0.4603. Incorporating CBCT scans from the initial treatments and first to third fractions further improved results, with an average DSC of 0.9850, an average HD of 1.2707 mm, and an average MSD of 0.4076 for the 15th fraction, closely aligning with physician-drawn contours.

CONCLUSION: Compared with a general model, our patient-specific deep learning-based training algorithm significantly improved CTV segmentation accuracy of CBCT scans in patients with breast cancer. This approach, coupled with continuous deep learning training using daily CBCT scans, demonstrated enhanced CTV delineation accuracy and efficiency. Future studies should explore the adaptability of the IDOL framework to diverse deep learning models, data sets, and cancer sites.

PMID:39258144 | PMC:PMC11381721 | DOI:10.1016/j.adro.2024.101580

Categories: Literature Watch
