Deep learning

Facilitating interaction between partial differential equation-based dynamics and unknown dynamics for regional wind speed prediction

Wed, 2024-03-20 06:00

Neural Netw. 2024 Mar 11;174:106233. doi: 10.1016/j.neunet.2024.106233. Online ahead of print.

ABSTRACT

Regional wind speed prediction is an important spatiotemporal prediction problem that is crucial for optimizing wind power utilization. Nevertheless, the complex dynamics of wind speed pose a formidable challenge to prediction tasks. The evolving dynamics of wind may be governed by underlying physical principles that can be described by partial differential equations (PDEs). This study proposes a novel approach called the PDE-assisted network (PaNet) for regional wind speed prediction. In PaNet, a new architecture is devised that incorporates both PDE-based dynamics (PDE dynamics) and unknown dynamics. Specifically, this architecture establishes interactions between the two dynamics, regulated by an inter-dynamics communication unit that controls the interactions through attention gates. Additionally, recognizing the significance of the initial state for the PDE dynamics, an adaptive frequency-gated unit is introduced to generate a suitable initial state for the PDE dynamics by selecting essential frequency components. To evaluate the predictive performance of PaNet, this study conducts comprehensive experiments on two real-world wind speed datasets. The experimental results indicate that the proposed method is superior to other baseline methods.
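
The abstract describes PaNet only at the block-diagram level. A minimal sketch of the two ideas it names, an attention gate mediating between the PDE-dynamics branch and the unknown-dynamics branch, and an adaptive frequency gate that filters the initial state, might look as follows; module names, channel sizes, and the exact gating form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InterDynamicsGate(nn.Module):
    """Hypothetical attention gate fusing PDE-dynamics and unknown-dynamics features."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, pde_feat: torch.Tensor, unk_feat: torch.Tensor) -> torch.Tensor:
        # Per-pixel attention weights decide how much each branch contributes.
        a = self.gate(torch.cat([pde_feat, unk_feat], dim=1))
        return a * pde_feat + (1.0 - a) * unk_feat

class FrequencyGate(nn.Module):
    """Hypothetical adaptive frequency gate with a learnable weight per frequency bin."""
    def __init__(self, height: int, width: int):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency columns.
        self.mask = nn.Parameter(torch.ones(height, width // 2 + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x)                      # to the frequency domain
        spec = spec * torch.sigmoid(self.mask)         # softly select components
        return torch.fft.irfft2(spec, s=x.shape[-2:])  # back to the spatial domain
```

A gated convex combination is only one plausible reading of "attention gates"; the paper may use a different formulation.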

PMID:38508045 | DOI:10.1016/j.neunet.2024.106233

Categories: Literature Watch

Electronic Properties of CrB/Co2CO2 Superlattices by Multiple Descriptor-Based Machine Learning Combined with First-Principles

Wed, 2024-03-20 06:00

Small Methods. 2024 Mar 20:e2301415. doi: 10.1002/smtd.202301415. Online ahead of print.

ABSTRACT

In recent times, newly unveiled 2D materials exhibiting exceptional characteristics, such as MBenes and MXenes, have gained widespread application across diverse domains, encompassing electronic devices, catalysis, energy storage, sensors, and others. Nonetheless, numerous technical bottlenecks persist in the development of high-performance, structurally flexible, and adjustable electronic device materials. Research has demonstrated that 2D van der Waals superlattice (vdW SL) structures composed of such materials exhibit exceptional electrical, mechanical, and optical properties. In this work, the advantages of both materials are combined by composing vdW SL structures of MBenes and MXenes, thus obtaining materials with excellent electronic properties. Furthermore, this work integrates machine learning (ML) with first-principles methods to forecast the electrical properties of MBene/MXene superlattice materials. Initially, various configurations of MBene/MXene superlattice materials are explored, revealing that distinct stacking methods exert a significant influence on the electronic structure of MBene/MXene materials. Specifically, the BABA-type stacking of CrB (layer A) and Co2CO2 MXene (layer B) is the most stable configuration. Subsequently, multiple descriptors of the structure are constructed to predict the density of states of the vdW SLs using ML techniques. The best model achieves a mean absolute error (MAE) as low as 0.147 eV.
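
The abstract does not specify which regression model or descriptors were used beyond "multiple descriptors"; the sketch below only illustrates the general descriptor-to-property workflow with placeholder data and a generic gradient-boosting regressor evaluated by MAE, the metric reported in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder data: rows = superlattice configurations, columns = structural/elemental descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # hypothetical descriptor matrix
y = rng.normal(size=200)         # hypothetical DOS-derived target (eV)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.3f} eV")
```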

PMID:38507722 | DOI:10.1002/smtd.202301415

Categories: Literature Watch

Contrastive pre-training and 3D convolution neural network for RNA and small molecule binding affinity prediction

Wed, 2024-03-20 06:00

Bioinformatics. 2024 Mar 20:btae155. doi: 10.1093/bioinformatics/btae155. Online ahead of print.

ABSTRACT

MOTIVATION: The diverse structures and functions inherent in RNAs present a wealth of potential drug targets. Some small molecules are anticipated to serve as lead compounds, providing guidance for the development of novel RNA-targeted therapeutics. Consequently, the determination of RNA-small molecule binding affinity is a critical undertaking in the landscape of RNA-targeted drug discovery and development. Nevertheless, to date, no computational method for RNA-small molecule binding affinity prediction has been proposed, and the task remains a significant challenge. The development of a computational model is deemed essential to effectively extract relevant features and accurately predict RNA-small molecule binding affinity.

RESULTS: In this study, we introduced RLaffinity, a novel deep learning model designed for the prediction of RNA-small molecule binding affinity based on 3D structures. RLaffinity integrated information from RNA pockets and small molecules, utilizing a 3D convolutional neural network (3D-CNN) coupled with a contrastive learning-based self-supervised pre-training model. To the best of our knowledge, RLaffinity is the first computational method for the prediction of RNA-small molecule binding affinity. Our experimental results showed RLaffinity's superior performance compared to baseline methods across all metrics. The efficacy of RLaffinity underscores the capability of the 3D-CNN to accurately extract both global pocket information and local neighbor nucleotide information within RNAs. Notably, the integration of a self-supervised pre-training model significantly enhanced predictive performance. Ultimately, RLaffinity also proved to be a potential tool for virtual screening of RNA-targeted drugs.
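
As a rough illustration of the two ingredients named in the abstract, a 3D-CNN over voxelized RNA pocket/ligand grids and contrastive self-supervised pre-training, here is a minimal sketch; the channel layout, grid featurization, and loss are assumptions rather than RLaffinity's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelEncoder(nn.Module):
    """Hypothetical 3D-CNN over a voxelized RNA pocket + ligand grid."""
    def __init__(self, in_channels: int = 8, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Contrastive loss between two augmented views of the same complexes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # similarity of every pair in the batch
    targets = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# After pre-training, a small regression head on the embedding predicts binding affinity.
```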

AVAILABILITY: https://github.com/SaisaiSun/RLaffinity.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:38507691 | DOI:10.1093/bioinformatics/btae155

Categories: Literature Watch

TemStaPro: protein thermostability prediction using sequence representations from protein language models

Wed, 2024-03-20 06:00

Bioinformatics. 2024 Mar 20:btae157. doi: 10.1093/bioinformatics/btae157. Online ahead of print.

ABSTRACT

MOTIVATION: Reliable prediction of protein thermostability from its sequence is valuable for both academic and industrial research. This prediction problem can be tackled using machine learning and by taking advantage of the recent blossoming of deep learning methods for sequence analysis. These methods can facilitate training on more data and, possibly, enable development of more versatile thermostability predictors for multiple ranges of temperatures.

RESULTS: We applied the principle of transfer learning to predict protein thermostability using embeddings generated by protein language models (pLMs) from an input protein sequence. We used large pLMs that were pre-trained on hundreds of millions of known sequences. The embeddings from such models allowed us to efficiently train and validate a high-performing prediction method using over one million sequences that we collected from organisms with annotated growth temperatures. Our method, TemStaPro (Temperatures of Stability for Proteins), was used to predict thermostability of CRISPR-Cas Class II effector proteins (C2EPs). Predictions indicated sharp differences among groups of C2EPs in terms of thermostability and were largely in tune with previously published and our newly obtained experimental data.
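
The abstract describes the general recipe (pre-computed pLM embeddings feeding a supervised thermostability predictor) but not the classifier head; a minimal sketch under that assumption, with placeholder file names, is shown below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one mean-pooled pLM embedding per protein and a binary
# thermostability label derived from the host organism's growth temperature.
embeddings = np.load("plm_embeddings.npy")   # shape (n_proteins, embed_dim), placeholder file
labels = np.load("thermo_labels.npy")        # 1 = thermostable at the chosen threshold

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, embeddings, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```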

AVAILABILITY AND IMPLEMENTATION: TemStaPro software and the related data are freely available from https://github.com/ievapudz/TemStaPro and https://doi.org/10.5281/zenodo.7743637.

PMID:38507682 | DOI:10.1093/bioinformatics/btae157

Categories: Literature Watch

Single-sequence protein structure prediction by integrating protein language models

Wed, 2024-03-20 06:00

Proc Natl Acad Sci U S A. 2024 Mar 26;121(13):e2308788121. doi: 10.1073/pnas.2308788121. Epub 2024 Mar 20.

ABSTRACT

Protein structure prediction has been greatly improved by deep learning in the past few years. However, the most successful methods rely on a multiple sequence alignment (MSA) of the sequence homologs of the protein under prediction. In nature, a protein folds in the absence of its sequence homologs, and thus an MSA-free structure prediction method is desired. Here, we develop a single-sequence-based protein structure prediction method, RaptorX-Single, by integrating several protein language models and a structure generation module, and then study its advantage over MSA-based methods. Our experimental results indicate that, in addition to running much faster than MSA-based methods such as AlphaFold2, RaptorX-Single outperforms AlphaFold2 and other MSA-free methods in predicting the structure of antibodies (after fine-tuning on antibody data), proteins with very few sequence homologs, and single mutation effects. By comparing different protein language models, our results show that not only the scale but also the training data of protein language models impact performance. RaptorX-Single also compares favorably to MSA-based AlphaFold2 when the protein under prediction has a large number of sequence homologs.

PMID:38507445 | DOI:10.1073/pnas.2308788121

Categories: Literature Watch

Predicting Progression From Mild Cognitive Impairment to Alzheimer's Dementia With Adversarial Attacks

Wed, 2024-03-20 06:00

IEEE J Biomed Health Inform. 2024 Mar 20;PP. doi: 10.1109/JBHI.2024.3373703. Online ahead of print.

ABSTRACT

Early diagnosis of Alzheimer's disease plays a crucial role in treatment planning that might slow down the disease's progression. This problem is commonly posed as a classification task performed by machine learning and deep learning techniques. Although data-driven techniques set the state-of-the-art in many domains, the scale of the available datasets in Alzheimer's research is not sufficient to learn complex models from patient data. This study proposes a simple yet promising framework to predict the conversion from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD). The proposed framework comprises a shallow neural network for binary classification and a single-step gradient-based adversarial attack to find an adversarial progression direction in the input space. The step size required for the adversarial attack to change a patient's diagnosis from MCI to AD indicates the distance to the decision boundary. The patient's diagnosis at the next visit is predicted by employing this notion of distance to the decision boundary. We also present a potential application of the proposed framework to patient subtyping. Experiments with two publicly available datasets for Alzheimer's disease research imply that the proposed framework can predict MCI-to-AD conversions and assist in subtyping by only training a shallow neural network.
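
The central mechanism, a single-step gradient-based attack whose flipping step size serves as a proxy for the distance to the MCI/AD decision boundary, can be sketched as follows. The network, feature scaling, and step schedule are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

def distance_to_boundary(model: nn.Module, x: torch.Tensor,
                         max_eps: float = 5.0, n_steps: int = 100) -> float:
    """Scale a single gradient-sign perturbation until the predicted class flips;
    the smallest flipping step size serves as a proxy for boundary distance."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logit = model(x).squeeze()            # scalar logit: > 0 read as "AD" here
    base_pred = (logit > 0).item()
    logit.backward()
    direction = x.grad.sign()             # single-step (FGSM-style) attack direction
    for eps in torch.linspace(0.0, max_eps, n_steps):
        with torch.no_grad():
            pred = (model(x + eps * direction) > 0).any().item()
        if pred != base_pred:
            return float(eps)             # small distance -> likely near conversion
    return float("inf")

# Toy example with a shallow binary classifier over tabular clinical features.
net = nn.Sequential(nn.Linear(30, 16), nn.ReLU(), nn.Linear(16, 1))
print(distance_to_boundary(net, torch.randn(1, 30)))
```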

PMID:38507374 | DOI:10.1109/JBHI.2024.3373703

Categories: Literature Watch

DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images

Wed, 2024-03-20 06:00

PLoS One. 2024 Mar 20;19(3):e0297667. doi: 10.1371/journal.pone.0297667. eCollection 2024.

ABSTRACT

Skin cancer is a common cancer affecting millions of people annually. Skin cells inside the body that grow in unusual patterns are a sign of this invasive disease. The cells then spread to other organs and tissues through the lymph nodes and destroy them. Lifestyle changes and increased solar exposure contribute to the rise in the incidence of skin cancer. Early identification and staging are essential due to the high mortality rate associated with skin cancer. In this study, we presented a deep learning-based method named DVFNet for the detection of skin cancer from dermoscopy images. To detect skin cancer, images are pre-processed using anisotropic diffusion methods to remove artifacts and noise, which enhances image quality. A combination of the VGG19 architecture and the Histogram of Oriented Gradients (HOG) is used in this research for discriminative feature extraction. SMOTE Tomek is used to resolve the problem of imbalanced images across the multiple classes of the publicly available ISIC 2019 dataset. This study utilizes segmentation to pinpoint areas of significantly damaged skin cells. A feature vector map is created by combining the features of HOG and VGG19. Multiclassification is accomplished by a CNN using the feature vector maps. DVFNet achieves an accuracy of 98.32% on the ISIC 2019 dataset. An analysis of variance (ANOVA) statistical test is used to validate the model's accuracy. Healthcare experts can utilize the DVFNet model to detect skin cancer at an early clinical stage.
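
A rough sketch of the HOG + VGG19 feature fusion and SMOTE-Tomek rebalancing described above is given below; the HOG parameters, the pooling of VGG19 feature maps, and the order of the resampling step are assumptions for illustration.

```python
import numpy as np
import torch
from torchvision import models, transforms
from skimage.feature import hog
from skimage.color import rgb2gray
from imblearn.combine import SMOTETomek

vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
to_tensor = transforms.Compose([transforms.ToTensor(), transforms.Resize((224, 224))])

def fused_features(image_rgb: np.ndarray) -> np.ndarray:
    """Concatenate a HOG descriptor with global-average-pooled VGG19 features."""
    hog_vec = hog(rgb2gray(image_rgb), pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    with torch.no_grad():
        fmap = vgg(to_tensor(image_rgb).unsqueeze(0))         # (1, 512, 7, 7)
        deep_vec = fmap.mean(dim=(2, 3)).squeeze(0).numpy()   # (512,)
    return np.concatenate([hog_vec, deep_vec])

# X: stacked fused vectors, y: ISIC 2019 class labels (placeholders).
# SMOTE-Tomek then balances the classes before training the final classifier:
# X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
```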

PMID:38507348 | DOI:10.1371/journal.pone.0297667

Categories: Literature Watch

Deep learning-based target decomposition for markerless lung tumor tracking in radiotherapy

Wed, 2024-03-20 06:00

Med Phys. 2024 Mar 20. doi: 10.1002/mp.17039. Online ahead of print.

ABSTRACT

BACKGROUND: In radiotherapy, real-time tumor tracking can verify tumor position during beam delivery, guide the radiation beam to target the tumor, and reduce the chance of a geometric miss. Markerless kV x-ray image-based tumor tracking is challenging due to the low tumor visibility caused by tumor-obscuring structures. Developing a new method to enhance tumor visibility for real-time tumor tracking is essential.

PURPOSE: To introduce a novel method for markerless kV image-based tracking of lung tumors via deep learning-based target decomposition.

METHODS: We utilized a conditional Generative Adversarial Network (cGAN), known as Pix2Pix, to build a patient-specific model and generate the synthetic decomposed target image (sDTI) to enhance tumor visibility on the real-time kV projection images acquired by the onboard kV imager equipped on modern linear accelerators. We used 4DCT simulation images to generate the digitally reconstructed radiograph (DRR) and DTI image pairs for model training. We augmented the training dataset by randomly shifting the 4DCT in the superior-inferior, anterior-posterior, and left-right directions during the DRR and DTI generation process. We performed real-time 2D tumor tracking via template matching between the DTI generated from the CT simulation and the sDTI generated from the real-time kV projection images. We validated the proposed method using nine patients' datasets with implanted beacons near the tumor.
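
The tracking step, matching a reference DTI template against each synthetic DTI frame, can be sketched with standard normalized cross-correlation template matching; the window size, similarity measure, and search strategy here are illustrative, not necessarily those used in the study.

```python
import cv2
import numpy as np

def track_tumor(sdti_frame: np.ndarray, dti_template: np.ndarray) -> tuple[int, int]:
    """Return the (x, y) position of the best template match in one kV-derived frame."""
    result = cv2.matchTemplate(sdti_frame.astype(np.float32),
                               dti_template.astype(np.float32),
                               cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # top-left corner of the matched window

# Per-frame displacement relative to the planning position, after pixel-to-mm
# scaling, gives the SI and in-plane left-right tracking errors reported below.
```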

RESULTS: The sDTI can effectively improve the image contrast around the lung tumors on the kV projection images for the nine patients. With the beacon motion as ground truth, the tracking errors were on average 0.8 ± 0.7 mm in the superior-inferior (SI) direction and 0.9 ± 0.8 mm in the in-plane left-right (IPLR) direction. The percentage of successful tracking, defined as a tracking error less than 2 mm in the SI direction, is 92.2% on the 4312 tested images. The patient-specific model took approximately 12 h to train. During testing, it took approximately 35 ms to generate one sDTI, and 13 ms to perform the tumor tracking using template matching.

CONCLUSIONS: Our method offers a potential solution for near real-time markerless lung tumor tracking. It achieved a high level of accuracy and an impressive tracking rate. Further development of 3D lung tumor tracking is warranted.

PMID:38507259 | DOI:10.1002/mp.17039

Categories: Literature Watch

An Effective Segmentation and Attention based Reptile Residual Capsule Auto Encoder for Pest Classification

Wed, 2024-03-20 06:00

Pest Manag Sci. 2024 Mar 20. doi: 10.1002/ps.8085. Online ahead of print.

ABSTRACT

Insect pests are a major global factor affecting agricultural crop productivity and quality. Rapid and precise insect pest detection is crucial for improving handling and prediction techniques. Several methods exist for pest detection and classification tasks; still, inaccurate detection, computational complexity, and several other challenges limit model performance. Thus, this research presents a Deep Learning (DL) approach, a family of methods that has led to significant advancements and is currently being applied successfully in many domains, such as autonomous insect pest detection. Initially, the input images are gathered from the test dataset. Next, the input images are pre-processed to improve model capacity by removing unwanted data using the Enhanced Kuan filter method. Then, the pre-processed images are segmented using the Attention-based U-Net method. Finally, a novel Attention-Based Reptile Residual Capsule Auto Encoder (ARRCAE) technique is proposed to classify and recognize crop pests. Furthermore, the Improved Reptile Search Optimisation (IRSO) algorithm is employed to optimally fine-tune the classification parameters. As a result, the proposed study enhances the performance of crop pest detection and classification. The suggested method uses a Python tool for simulation, and pest datasets are utilized for result analysis. According to the simulation results obtained, the suggested model beats other current models on the pest dataset, with an accuracy of 98%, precision of 97%, recall of 96%, and specificity of 99%.

PMID:38507257 | DOI:10.1002/ps.8085

Categories: Literature Watch

Research on predicting hematoma expansion in spontaneous intracerebral hemorrhage based on deep features of the VGG-19 network

Wed, 2024-03-20 06:00

Postgrad Med J. 2024 Mar 20:qgae037. doi: 10.1093/postmj/qgae037. Online ahead of print.

ABSTRACT

PURPOSE: To construct a clinical noncontrast computed tomography (NCCT) deep learning joint model for predicting early hematoma expansion (HE) after spontaneous intracerebral hemorrhage (sICH) and to evaluate its predictive performance.

METHODS: All 254 patients with primary cerebral hemorrhage from January 2017 to December 2022 in the General Hospital of the Western Theater Command were included. According to the criteria of hematoma enlargement exceeding 33% or the volume exceeding 6 ml, the patients were divided into the HE group and the hematoma non-enlargement (NHE) group. Multiple models and the 10-fold cross-validation method were used to screen the most valuable features and model the probability of predicting HE. The area under the curve (AUC) was used to analyze the prediction efficiency of each model for HE.
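
A minimal sketch of the 10-fold cross-validated AUC evaluation described above, assuming a pre-assembled feature matrix and a logistic-regression head; the file names and the feature-selection step are placeholders, and only the 22-feature count comes from the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Placeholder design matrix: clinical variables, radiomic features, and deep
# features concatenated per patient; y = 1 if hematoma expansion occurred.
X = np.load("hematoma_features.npy")   # hypothetical file
y = np.load("hematoma_labels.npy")

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=22),   # 22 features, as in the joint model
                      LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("10-fold AUC:", auc.mean())
```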

RESULTS: The patients were randomly divided in an 8:2 ratio into a training set of 204 cases and a test set of 50 cases. The clinical imaging deep feature joint model (22 features) predicted HE with the following areas under the curve: clinical Naive Bayes model, AUC 0.779; traditional radiology logistic regression (LR) model, AUC 0.818; deep learning LR model, AUC 0.873; and clinical NCCT deep learning multilayer perceptron model, AUC 0.921.

CONCLUSION: The combined clinical imaging deep learning model has a high predictive effect for early HE in sICH patients, which is helpful for clinical individualized assessment of the risk of early HE in sICH patients.

PMID:38507237 | DOI:10.1093/postmj/qgae037

Categories: Literature Watch

A lightweight xAI approach to cervical cancer classification

Wed, 2024-03-20 06:00

Med Biol Eng Comput. 2024 Mar 20. doi: 10.1007/s11517-024-03063-6. Online ahead of print.

ABSTRACT

Cervical cancer is caused in the vast majority of cases by the human papilloma virus (HPV) through sexual contact and requires a specific molecular-based analysis to be detected. As an HPV vaccine is available, the incidence of cervical cancer is up to ten times higher in areas without adequate healthcare resources. In recent years, liquid cytology has been used to overcome these shortcomings and perform mass screening. In addition, classifiers based on convolutional neural networks can be developed to help pathologists diagnose the disease. However, these systems always require the final verification of a pathologist to make a final diagnosis. For this reason, explainable AI techniques are required to highlight the most significant data to the healthcare professional, as it can be used to determine the confidence in the results and the areas of the image used for classification (allowing the professional to point out the areas he/she thinks are most important and cross-check them against those detected by the system in order to create incremental learning systems). In this work, a 4-phase optimization process is used to obtain a custom deep-learning classifier for distinguishing between 4 severity classes of cervical cancer with liquid-cytology images. The final classifier obtains an accuracy over 97% for 4 classes and 100% for 2 classes with execution times under 1 s (including the final report generation). Compared to previous works, the proposed classifier obtains better accuracy results with a lower computational cost.
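
The abstract does not name the explainability technique used to highlight the image regions driving a prediction; Grad-CAM is one common choice and is sketched below purely as an illustration, with a stand-in backbone rather than the paper's custom classifier.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # stand-in for the custom classifier
activations, gradients = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a heat map of the regions driving the prediction for class_idx."""
    model.zero_grad()
    model(x)[0, class_idx].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # global-average gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
```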

PMID:38507122 | DOI:10.1007/s11517-024-03063-6

Categories: Literature Watch

AttnPep: A Self-Attention-Based Deep Learning Method for Peptide Identification in Shotgun Proteomics

Wed, 2024-03-20 06:00

J Proteome Res. 2024 Mar 20. doi: 10.1021/acs.jproteome.4c00147. Online ahead of print.

NO ABSTRACT

PMID:38506788 | DOI:10.1021/acs.jproteome.4c00147

Categories: Literature Watch

Deep Learning-based Approach for Brainstem and Ventricular MR Planimetry: Application in Patients with Progressive Supranuclear Palsy

Wed, 2024-03-20 06:00

Radiol Artif Intell. 2024 Mar 20:e230151. doi: 10.1148/ryai.230151. Online ahead of print.

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop a fast and fully automated deep learning (DL)-based method for the MRI planimetric segmentation and measurement of the brainstem and ventricular structures most affected in patients with progressive supranuclear palsy (PSP). Materials and Methods In this retrospective study, T1-weighted MR images from healthy controls (n=84) were used to train DL models for segmenting the midbrain, pons, middle cerebellar peduncles (MCP), superior cerebellar peduncle (SCP), third ventricle (3rd V) and frontal horns (FHs). Internal, external and clinical test datasets (n=305) were used to assess segmentation model reliability. DL masks from test datasets were used to automatically extract midbrain and pons areas and the width of MCP, SCP, 3rd V and FHs. Automated measurements were compared with those manually performed by an expert radiologist. Finally, these measures were combined to calculate the midbrain-to-pons area ratio, magnetic resonance parkinsonism index (MRPI) and MRPI 2.0, which were used to differentiate patients with PSP (n=71) from those with Parkinson's disease (PD, n=129). Results Dice coefficients above 0.85 were found for all brain regions when comparing manual and DL-based segmentations. A strong correlation was observed between automated and manual measurements (Spearman's Rho>0.80, p<0.001). DL-based measurements showed excellent performance in differentiating patients with PSP from those with PD, with an area under the receiver operating characteristic curve above 0.92. Conclusion Automated approach successfully segmented and measured the brainstem and ventricular structures. DL-based models may represent a useful approach to support the diagnosis of PSP and potentially other conditions associated with brainstem and ventricular alterations. ©RSNA, 2024.

PMID:38506619 | DOI:10.1148/ryai.230151

Categories: Literature Watch

Impact of Deep Learning Image Reconstruction Methods on MRI Throughput

Wed, 2024-03-20 06:00

Radiol Artif Intell. 2024 Mar 20:e230181. doi: 10.1148/ryai.230181. Online ahead of print.

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate the effect of implementing two distinct commercially available deep learning reconstruction (DLR) algorithms on the efficiency of MRI examinations conducted in real clinical practice in an outpatient setting within a large, multicenter institution. Materials and Methods This retrospective study included 7,346 examinations from ten clinical MRI scanners analyzed during the pre- and postimplementation periods of DLR methods. Two different types of DLR methods, namely Digital Imaging and Communications in Medicine (DICOM)-based and k-space-based methods, were implemented in half of the scanners (three DICOM-based and two k-space-based), while the remaining five scanners had no DLR method implemented. Scan and room times of each examination type during the pre-and postimplementation periods were compared among the different DLR methods using the Wilcoxon test. Results The application of deep learning methods resulted in significant reductions in scan and room times for certain examination types. The DICOM-based method demonstrated up to a 53% reduction in scan times and a 41% reduction in room times for various study types. The k-space-based method demonstrated up to a 27% reduction in scan times but did not significantly reduce room times. Conclusion DLR methods were associated with reductions in scan and room times in a clinical setting, though the effects were heterogenous depending on examination type. Thus, potential adopters should carefully evaluate their case mix to determine the impact of integrating these tools. ©RSNA, 2024.

PMID:38506618 | DOI:10.1148/ryai.230181

Categories: Literature Watch

Performance evaluation of machine-assisted interpretation of Gram stains from positive blood cultures

Wed, 2024-03-20 06:00

J Clin Microbiol. 2024 Mar 20:e0087623. doi: 10.1128/jcm.00876-23. Online ahead of print.

ABSTRACT

Manual microscopy of Gram stains from positive blood cultures (PBCs) is crucial for diagnosing bloodstream infections but remains labor intensive, time consuming, and subjective. This study aimed to evaluate a scan and analysis system that combines fully automated digital microscopy with deep convolutional neural networks (CNNs) to assist the interpretation of Gram stains from PBCs for routine laboratory use. The CNN was trained to classify images of Gram stains based on staining and morphology into seven different classes: background/false-positive, Gram-positive cocci in clusters (GPCCL), Gram-positive cocci in pairs (GPCP), Gram-positive cocci in chains (GPCC), rod-shaped bacilli (RSB), yeasts, and polymicrobial specimens. A total of 1,555 Gram-stained slides of PBCs were scanned, pre-classified, and reviewed by medical professionals. The results of assisted Gram stain interpretation were compared to those of manual microscopy and cultural species identification by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The comparison of assisted Gram stain interpretation and manual microscopy yielded positive/negative percent agreement values of 95.8%/98.0% (GPCCL), 87.6%/99.3% (GPCP/GPCC), 97.4%/97.8% (RSB), 83.3%/99.3% (yeasts), and 87.0%/98.5% (negative/false positive). The assisted Gram stain interpretation, when compared to MALDI-TOF MS species identification, also yielded similar results. During the analytical performance study, assisted interpretation showed excellent reproducibility and repeatability. Any microorganism in PBCs should be detectable at the determined limit of detection of 10⁵ CFU/mL. Although the CNN-based interpretation of Gram stains from PBCs is not yet ready for clinical implementation, it has potential for future integration and advancement.
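
Positive and negative percent agreement against the manual-microscopy reference reduce to simple two-by-two counts per class; a minimal sketch is shown below, with placeholder arrays and class labels.

```python
import numpy as np

def percent_agreement(assisted: np.ndarray, reference: np.ndarray, label: str):
    """Positive/negative percent agreement of assisted reading vs. manual microscopy."""
    a_pos, r_pos = assisted == label, reference == label
    ppa = np.logical_and(a_pos, r_pos).sum() / r_pos.sum()
    npa = np.logical_and(~a_pos, ~r_pos).sum() / (~r_pos).sum()
    return 100 * ppa, 100 * npa

# Example call for one morphology class (arrays of per-slide calls are placeholders):
# percent_agreement(assisted_calls, manual_calls, "GPCCL")
```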

PMID:38506525 | DOI:10.1128/jcm.00876-23

Categories: Literature Watch

A deep learning method to identify and localize large-vessel occlusions from cerebral digital subtraction angiography

Wed, 2024-03-20 06:00

J Neuroimaging. 2024 Mar 20. doi: 10.1111/jon.13193. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: An essential step during endovascular thrombectomy is identifying the occluded arterial vessel on a cerebral digital subtraction angiogram (DSA). We developed an algorithm that can detect and localize the position of occlusions in cerebral DSA.

METHODS: We retrospectively collected cerebral DSAs from a single institution between 2018 and 2020 from 188 patients, 86 of whom suffered occlusions of the M1 and proximal M2 segments. We trained an ensemble of deep-learning models on fewer than 60 large-vessel occlusion (LVO)-positive patients. We evaluated the model on an independent test set and assessed the accuracy of its predicted localizations using Intersection over Union (IoU) and expert review.
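
Intersection over Union for the predicted occlusion localizations is a standard box-overlap measure; a minimal version is shown below. The IoU threshold used to call a localization correct is an assumption, as the abstract does not state it.

```python
def box_iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A predicted occlusion box is typically counted as correct when its IoU with the
# expert-drawn box exceeds a fixed threshold (e.g., 0.5).
```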

RESULTS: On an independent test set of 166 cerebral DSA frames with an LVO prevalence of 0.19, the model achieved a specificity of 0.95 (95% confidence interval [CI]: 0.90, 0.99), a precision of 0.7450 (95% CI: 0.64, 0.88), and a sensitivity of 0.76 (95% CI: 0.66, 0.91). The model correctly localized the LVO in at least one frame in 13 of the 14 LVO-positive patients in the test set. The model achieved a precision of 0.67 (95% CI: 0.52, 0.79), recall of 0.69 (95% CI: 0.46, 0.81), and a mean average precision of 0.75 (95% CI: 0.56, 0.91).

CONCLUSION: This work demonstrates that a deep learning strategy using a limited dataset can generate effective representations used to identify LVOs. Generating an expanded and more complete dataset of LVOs with obstructed LVOs is likely the best way to improve the model's ability to localize LVOs.

PMID:38506407 | DOI:10.1111/jon.13193

Categories: Literature Watch

Segmentation and Detection of Crop Pests using Novel U-Net with Hybrid Deep Learning Mechanism

Wed, 2024-03-20 06:00

Pest Manag Sci. 2024 Mar 20. doi: 10.1002/ps.8083. Online ahead of print.

ABSTRACT

In India, agriculture is the backbone of the economy because of the increasing demand for agricultural products. However, agricultural production has been affected by the presence of pests in the crops. Several methods have been developed to solve the crop pest detection issue, but they have failed to achieve satisfactory results. Therefore, the proposed study uses a new hybrid deep learning mechanism for segmenting and detecting pests in crops. Image collection, pre-processing, segmentation, and detection are the steps involved in the proposed study. Pre-processing involves three steps: image rescaling, equalized joint histogram-based contrast enhancement (Eq-JH-CE), and bendlet transform-based de-noising (BT-D). Next, the pre-processed images are segmented using the DenseNet-77 UNet model. In this step, the complexity of the conventional UNet model is mitigated by hybridizing it with the DenseNet-77 model. Once segmentation is done with the improved model, the crop pests are detected and classified using a novel Convolutional Slice-Attention-based Gated Recurrent Unit (CS-AGRU) model. The proposed model is a combination of a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU); the study hybridizes these models, given their efficiency, to achieve better accuracy. In addition, a slice attention mechanism is applied to the proposed model to fetch relevant feature information and thereby enhance computational efficiency. Pests in the crop are thus detected using the proposed method. The Python programming language is utilized for implementation. The proposed approach achieves an accuracy of 99.52%, an IoU of 99.1%, a precision of 98.88%, a recall of 99.53%, an F1-score of 99.35%, and an FNR of 0.011, outperforming existing techniques.
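
The CS-AGRU classifier is described only at a block-diagram level; the sketch below shows one plausible CNN-to-GRU arrangement in which convolutional feature maps are flattened into a sequence of slices, attention-weighted, and summarized by a GRU. Layer sizes and the attention form are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class CnnGruClassifier(nn.Module):
    """Illustrative CNN + GRU hybrid with a simple attention over feature-map slices."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.attn = nn.Linear(64, 1)
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fmap = self.cnn(x)                              # (B, 64, H', W')
        seq = fmap.flatten(2).transpose(1, 2)           # slices as a sequence: (B, H'*W', 64)
        weights = torch.softmax(self.attn(seq), dim=1)  # attention over slices
        _, h_n = self.gru(seq * weights)                # summarize the attended sequence
        return self.head(h_n.squeeze(0))

logits = CnnGruClassifier(n_classes=10)(torch.randn(2, 3, 64, 64))
```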

PMID:38506377 | DOI:10.1002/ps.8083

Categories: Literature Watch

Prediction of Junior High School Students' Problematic Internet Use: The Comparison of Neural Network Models and Linear Mixed Models in Longitudinal Study

Wed, 2024-03-20 06:00

Psychol Res Behav Manag. 2024 Mar 15;17:1191-1203. doi: 10.2147/PRBM.S450083. eCollection 2024.

ABSTRACT

PURPOSE: With the rise of big data, deep learning neural networks have garnered attention from psychology researchers due to their ability to process vast amounts of data and achieve superior model fitting. We aim to explore the predictive accuracy of neural network models and linear mixed models on tracking data in which subjective variables are predominant, as is typical in psychology. We separately analyzed the predictive accuracy of both models and conducted a comparative study to investigate further. Simultaneously, we utilized the neural network model to examine the factors influencing problematic internet use and their temporal changes, attempting to provide insights for early interventions in problematic internet use.

PATIENTS AND METHODS: This study compared longitudinal data of junior high school students using both a linear mixed model and a neural network model to ascertain the efficacy of these two methods in processing psychological longitudinal data.

RESULTS: The neural network model exhibited significantly smaller errors than the linear mixed model. Furthermore, the outcomes from the neural network model revealed that, when analyzing data from a single time point, the influencing factors measured in seventh grade better predicted Problematic Internet Use in ninth grade, and when analyzing data from multiple time points, the influencing factors from sixth, seventh, and eighth grades more accurately predicted Problematic Internet Use in ninth grade.

CONCLUSION: Neural network models surpass linear mixed models in precision when predicting and analyzing longitudinal data. Furthermore, the influencing factors in lower grades provide more accurate predictions of Problematic Internet Use in higher grades. The highest prediction accuracy is attained through the utilization of data from multiple time points.

PMID:38505349 | PMC:PMC10950088 | DOI:10.2147/PRBM.S450083

Categories: Literature Watch

Radiomics and Artificial Intelligence in Renal Lesion Assessment

Wed, 2024-03-20 06:00

Crit Rev Oncog. 2024;29(2):65-75. doi: 10.1615/CritRevOncog.2023051084.

ABSTRACT

Radiomics, the extraction and analysis of quantitative features from medical images, has emerged as a promising field in radiology with the potential to revolutionize the diagnosis and management of renal lesions. This comprehensive review explores the radiomics workflow, including image acquisition, feature extraction, selection, and classification, and highlights its application in differentiating between benign and malignant renal lesions. The integration of radiomics with artificial intelligence (AI) techniques, such as machine learning and deep learning, can support patient management and allow planning of the appropriate treatments. AI models have shown remarkable accuracy in predicting tumor aggressiveness, treatment response, and patient outcomes. This review provides insights into the current state of radiomics and AI in renal lesion assessment and outlines future directions for research in this rapidly evolving field.

PMID:38505882 | DOI:10.1615/CritRevOncog.2023051084

Categories: Literature Watch

Exploring the Potential of Artificial Intelligence in Breast Ultrasound

Wed, 2024-03-20 06:00

Crit Rev Oncog. 2024;29(2):15-28. doi: 10.1615/CritRevOncog.2023048873.

ABSTRACT

Breast ultrasound has emerged as a valuable imaging modality in the detection and characterization of breast lesions, particularly in women with dense breast tissue or contraindications for mammography. Within this framework, artificial intelligence (AI) has garnered significant attention for its potential to improve diagnostic accuracy in breast ultrasound and revolutionize the workflow. This review article aims to comprehensively explore the current state of research and development in harnessing AI's capabilities for breast ultrasound. We delve into various AI techniques, including machine learning and deep learning, as well as their applications in automating lesion detection, segmentation, and classification tasks. Furthermore, the review addresses the challenges and hurdles faced in implementing AI systems in breast ultrasound diagnostics, such as data privacy, interpretability, and regulatory approval. Ethical considerations pertaining to the integration of AI into clinical practice are also discussed, emphasizing the importance of maintaining a patient-centered approach. The integration of AI into breast ultrasound holds great promise for improving diagnostic accuracy, enhancing efficiency, and ultimately advancing patient care. By examining the current state of research and identifying future opportunities, this review aims to contribute to the understanding and utilization of AI in breast ultrasound and encourage further interdisciplinary collaboration to maximize its potential in clinical practice.

PMID:38505878 | DOI:10.1615/CritRevOncog.2023048873

Categories: Literature Watch
