Deep learning

Transformer enhanced autoencoder rendering cleaning of noisy optical coherence tomography images

Thu, 2024-05-02 06:00

J Med Imaging (Bellingham). 2024 Jun;11(3):034008. doi: 10.1117/1.JMI.11.3.034008. Epub 2024 Apr 30.

ABSTRACT

PURPOSE: Optical coherence tomography (OCT) is an emerging imaging tool in healthcare, with common applications in ophthalmology for the detection of retinal diseases as well as in other medical domains. The noise in OCT images presents a great challenge, as it hinders the clinician's ability to diagnose in extensive detail.

APPROACH: In this work, a region-based, deep-learning, denoising framework is proposed for adaptive cleaning of noisy OCT-acquired images. The core of the framework is a hybrid deep-learning model named transformer enhanced autoencoder rendering (TEAR). Attention gates are utilized to ensure focus on denoising the foreground and to remove the background. TEAR is designed to remove the different types of noise artifacts commonly present in OCT images and to enhance the visual quality.

RESULTS: Extensive quantitative evaluations are performed to evaluate the performance of TEAR and compare it against both deep-learning and traditional state-of-the-art denoising algorithms. The proposed method improved the peak signal-to-noise ratio to 27.9 dB, CNR to 6.3 dB, SSIM to 0.9, and equivalent number of looks to 120.8 dB for a dental dataset. For a retinal dataset, the corresponding values are 24.6, 14.2, 0.64, and 1038.7 dB, respectively.
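
The reported peak signal-to-noise ratio is a standard image-fidelity measure; a minimal, generic sketch of its computation on toy pixel lists (not the paper's TEAR pipeline or data):

```python
import math

def psnr(reference, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel intensities."""
    n = len(reference)
    mse = sum((r - d) ** 2 for r, d in zip(reference, denoised)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel "images":
print(round(psnr([100, 120, 130, 140], [102, 118, 133, 139]), 1))
```

Higher PSNR means the denoised output deviates less from the reference; the other reported metrics (CNR, SSIM, equivalent number of looks) follow the same pattern of comparing denoised output against a reference or noise statistics.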

CONCLUSIONS: The results show that the approach verifiably removes speckle noise and achieves superior quality over several well-known denoisers.

PMID:38694626 | PMC:PMC11058346 | DOI:10.1117/1.JMI.11.3.034008

Categories: Literature Watch

Automated Identification of Different Severity Levels of Diabetic Retinopathy Using a Handheld Fundus Camera and Single-Image Protocol

Thu, 2024-05-02 06:00

Ophthalmol Sci. 2024 Feb 7;4(4):100481. doi: 10.1016/j.xops.2024.100481. eCollection 2024 Jul-Aug.

ABSTRACT

PURPOSE: To evaluate the performance of artificial intelligence (AI) systems embedded in a mobile, handheld retinal camera, with a single retinal image protocol, in detecting both diabetic retinopathy (DR) and more-than-mild diabetic retinopathy (mtmDR).

DESIGN: Multicenter cross-sectional diagnostic study, conducted at 3 diabetes care and eye care facilities.

PARTICIPANTS: A total of 327 individuals with diabetes mellitus (type 1 or type 2) underwent a retinal imaging protocol enabling expert reading and automated analysis.

METHODS: Participants underwent fundus photography using a portable retinal camera (Phelcom Eyer). The captured images were automatically analyzed by two deep learning algorithms, the retinal alteration score (RAS) and the diabetic retinopathy alteration score (DRAS), consisting of convolutional neural networks trained on EyePACS data sets and fine-tuned using data sets of portable-device fundus images. The ground truth was the classification of DR corresponding to adjudicated expert reading, performed by 3 certified ophthalmologists.

MAIN OUTCOME MEASURES: Primary outcome measures included the sensitivity and specificity of the AI system in detecting DR and/or mtmDR using a single-field, macula-centered fundus photograph for each eye, compared with a rigorous clinical reference standard comprising reading center grading of a 2-field imaging protocol using the International Classification of Diabetic Retinopathy severity scale.

RESULTS: Of 327 analyzed patients (mean age, 57.0 ± 16.8 years; mean diabetes duration, 16.3 ± 9.7 years), 307 completed the study protocol. Sensitivity and specificity of the AI system were high in detecting any DR with DRAS (sensitivity, 90.48% [95% confidence interval (CI), 84.99%-94.46%]; specificity, 90.65% [95% CI, 84.54%-94.93%]) and mtmDR with the combination of RAS and DRAS (sensitivity, 90.23% [95% CI, 83.87%-94.69%]; specificity, 85.06% [95% CI, 78.88%-90.00%]). The area under the receiver operating characteristic curve was 0.95 for any DR and 0.89 for mtmDR.
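
The sensitivity/specificity figures with confidence intervals above can be illustrated with a short sketch. The confusion-matrix counts below are hypothetical, and a simple Wald interval is used; the study's exact CI method may differ:

```python
import math

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with Wald 95% confidence intervals."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, (max(0.0, p - half), min(1.0, p + half))
    sensitivity = prop_ci(tp, tp + fn)  # detected among truly diseased
    specificity = prop_ci(tn, tn + fp)  # cleared among truly healthy
    return sensitivity, specificity

# Hypothetical counts, not the study's data:
(sens, sens_ci), (spec, spec_ci) = sens_spec_with_ci(tp=90, fn=10, tn=85, fp=15)
print(f"sensitivity {sens:.2%} (95% CI {sens_ci[0]:.2%}-{sens_ci[1]:.2%})")
print(f"specificity {spec:.2%} (95% CI {spec_ci[0]:.2%}-{spec_ci[1]:.2%})")
```
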

CONCLUSIONS: This study showed a high accuracy for the detection of DR in different levels of severity with a single retinal photo per eye in an all-in-one solution, composed of a portable retinal camera powered by AI. Such a strategy holds great potential for increasing coverage rates of screening programs, contributing to prevention of avoidable blindness.

FINANCIAL DISCLOSURES: F.K.M. is a medical consultant for Phelcom Technologies. J.A.S. is Chief Executive Officer and proprietor of Phelcom Technologies. D.L. is Chief Technology Officer and proprietor of Phelcom Technologies. P.V.P. is an employee at Phelcom Technologies.

PMID:38694494 | PMC:PMC11060947 | DOI:10.1016/j.xops.2024.100481

Categories: Literature Watch

Artificial intelligence in interventional radiology: state of the art

Wed, 2024-05-01 06:00

Eur Radiol Exp. 2024 May 2;8(1):62. doi: 10.1186/s41747-024-00452-2.

ABSTRACT

Artificial intelligence (AI) has demonstrated great potential in a wide variety of applications in interventional radiology (IR). Support for decision-making and outcome prediction, new functions and improvements in fluoroscopy, ultrasound, computed tomography, and magnetic resonance imaging, specifically in the field of IR, have all been investigated. Furthermore, AI represents a significant boost for fusion imaging and simulated reality, robotics, touchless software interactions, and virtual biopsy. The procedural nature, heterogeneity, and lack of standardisation slow down the process of adoption of AI in IR. Research in AI is in its early stages, as current literature is based on pilot or proof-of-concept studies. The full range of possibilities is yet to be explored.

Relevance statement: Exploring AI's transformative potential, this article assesses its current applications and challenges in IR, offering insights into decision support and outcome prediction, imaging enhancements, robotics, and touchless interactions, shaping the future of patient care.

Key points:
• AI adoption in IR is more complex compared to diagnostic radiology.
• Current literature about AI in IR is in its early stages.
• AI has the potential to revolutionise every aspect of IR.

PMID:38693468 | DOI:10.1186/s41747-024-00452-2

Categories: Literature Watch

BraNet: a mobil application for breast image classification based on deep learning algorithms

Wed, 2024-05-01 06:00

Med Biol Eng Comput. 2024 May 2. doi: 10.1007/s11517-024-03084-1. Online ahead of print.

ABSTRACT

Mobile health apps are widely used for breast cancer detection using artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was trained for synthetic image generation, and these images were subsequently used to pre-train the SAM and ResNet18 segmentation and classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep-learning pipeline for mammography (DM) and ultrasound (US) breast imaging classification. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original ROI images to assign the perceived breast tissue type, and their agreement was assessed using the kappa coefficient. The BraNet mobile app exhibited its highest accuracy on benign and malignant US images (94.7%/93.6%) compared to DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). This contrasts with the radiological experts' accuracy: 29% for DM classification versus 70% for US for both readers, who achieved higher accuracy on US ROI classification than on DM images. The kappa value indicates fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. These results suggest that the amount of data is not the only essential factor in training deep learning algorithms; the variety of abnormalities must also be considered, especially in the mammography data, where several BI-RADS categories are present (microcalcifications, nodules, masses, asymmetry, and dense breasts) and can affect the accuracy of the model.
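
The inter-reader agreement above is quantified with Cohen's kappa, which discounts agreement expected by chance; a self-contained sketch on toy benign/malignant labels (not the study's data):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two readers on categorical labels."""
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each reader's marginal label frequencies.
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats
    )
    return (observed - expected) / (1 - expected)

# Two hypothetical readers labelling 10 ROIs as benign ("b") or malignant ("m"):
a = ["b", "b", "m", "m", "b", "m", "b", "b", "m", "b"]
b = ["b", "m", "m", "m", "b", "b", "b", "b", "m", "b"]
print(round(cohens_kappa(a, b), 2))
```

On the commonly used Landis-Koch scale, 0.21-0.40 is "fair" and 0.41-0.60 "moderate" agreement, matching the interpretations quoted in the abstract.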

PMID:38693328 | DOI:10.1007/s11517-024-03084-1

Categories: Literature Watch

Optimized model architectures for deep learning on genomic data

Wed, 2024-05-01 06:00

Commun Biol. 2024 Apr 30;7(1):516. doi: 10.1038/s42003-024-06161-1.

ABSTRACT

The success of deep learning in various applications depends on task-specific architecture design choices, including the types, hyperparameters, and number of layers. In computational biology, there is no consensus on the optimal architecture design, and decisions are often made using insights from more well-established fields such as computer vision. These may not consider the domain-specific characteristics of genome sequences, potentially limiting performance. Here, we present GenomeNet-Architect, a neural architecture design framework that automatically optimizes deep learning models for genome sequence data. It optimizes the overall layout of the architecture, with a search space specifically designed for genomics. Additionally, it optimizes hyperparameters of individual layers and the model training procedure. On a viral classification task, GenomeNet-Architect reduced the read-level misclassification rate by 19%, with 67% faster inference and 83% fewer parameters, and achieved similar contig-level accuracy with ~100 times fewer parameters compared to the best-performing deep learning baselines.

PMID:38693292 | DOI:10.1038/s42003-024-06161-1

Categories: Literature Watch

Deep learning in magnetic resonance enterography for Crohn's disease assessment: a systematic review

Wed, 2024-05-01 06:00

Abdom Radiol (NY). 2024 May 1. doi: 10.1007/s00261-024-04326-4. Online ahead of print.

ABSTRACT

Crohn's disease (CD) poses significant morbidity, underscoring the need for effective, non-invasive inflammatory assessment using magnetic resonance enterography (MRE). This literature review evaluates recent publications on the role of deep learning in improving MRE for CD assessment. We searched MEDLINE/PUBMED for studies that reported the use of deep learning algorithms for assessment of CD activity. The study was conducted according to the PRISMA guidelines. The risk of bias was evaluated using the QUADAS-2 tool. Five eligible studies, encompassing 468 subjects, were identified. Our study suggests that diverse deep learning applications, including image quality enhancement, bowel segmentation for disease burden quantification, and 3D reconstruction for surgical planning are useful and promising for CD assessment. However, most of the studies are preliminary, retrospective studies, and have a high risk of bias in at least one category. Future research is needed to assess how deep learning can impact CD patient diagnostics, particularly when considering the increasing integration of such models into hospital systems.

PMID:38693270 | DOI:10.1007/s00261-024-04326-4

Categories: Literature Watch

Enhancing surface drainage mapping in eastern Canada with deep learning applied to LiDAR-derived elevation data

Wed, 2024-05-01 06:00

Sci Rep. 2024 May 1;14(1):10016. doi: 10.1038/s41598-024-60525-5.

ABSTRACT

Agricultural dykelands in Nova Scotia rely heavily on a surface drainage technique called land forming, which is used to alter the topography of fields to improve drainage. The presence of land-formed fields provides useful information to better understand land utilization on these lands vulnerable to rising sea levels. Current field boundary delineation and classification methods, such as manual digitization and traditional segmentation techniques, are labour-intensive and often require manual, time-consuming parameter selection. In recent years, deep learning (DL) techniques, including convolutional neural networks and Mask R-CNN, have shown promising results in object recognition, image classification, and segmentation tasks. However, there is a gap in applying these techniques to detecting surface drainage patterns on agricultural fields. This paper develops and tests a Mask R-CNN model for detecting land-formed fields on agricultural dykelands using LiDAR-derived elevation data. Specifically, our approach focuses on identifying groups of pixels as cohesive objects within the imagery, a method that represents a significant advancement over pixel-by-pixel classification techniques. The DL model developed in this study demonstrated a strong overall performance, with a mean Average Precision (mAP) of 0.89 across Intersection over Union (IoU) thresholds from 0.5 to 0.95, indicating its effectiveness in detecting land-formed fields. Results also revealed that 53% of Nova Scotia's dykelands are being used for agricultural purposes and approximately 75% (6924 hectares) of these fields were land-formed. By applying deep learning techniques to LiDAR-derived elevation data, this study offers novel insights into surface drainage mapping, enhancing the capability for precise and efficient agricultural land management in regions vulnerable to environmental changes.
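
The mAP figure above is averaged over Intersection over Union (IoU) thresholds; a minimal sketch of IoU using axis-aligned boxes (the study's Mask R-CNN evaluates instance masks, but the overlap principle is the same):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted field counts as a true positive at threshold t when IoU >= t;
# mAP@[0.5:0.95] averages the resulting AP over t = 0.50, 0.55, ..., 0.95.
thresholds = [0.5 + 0.05 * i for i in range(10)]
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))
print(overlap, sum(overlap >= t for t in thresholds))
```
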

PMID:38693219 | DOI:10.1038/s41598-024-60525-5

Categories: Literature Watch

RETFound-enhanced community-based fundus disease screening: real-world evidence and decision curve analysis

Wed, 2024-05-01 06:00

NPJ Digit Med. 2024 Apr 30;7(1):108. doi: 10.1038/s41746-024-01109-5.

ABSTRACT

Visual impairments and blindness are major public health concerns globally. Effective eye disease screening aided by artificial intelligence (AI) is a promising countermeasure, although it is challenged by practical constraints such as poor image quality in community screening. The recently developed ophthalmic foundation model RETFound has shown higher accuracy in retinal image recognition tasks. This study developed an RETFound-enhanced deep learning (DL) model for multiple-eye disease screening using real-world images from community screenings. Our results revealed that our DL model improved the sensitivity and specificity by over 15% compared with commercial models. Our model also shows better generalisation ability than AI models developed using traditional processes. Additionally, decision curve analysis underscores the higher net benefit of employing our model in both urban and rural settings in China. These findings indicate that the RETFound-enhanced DL model can achieve a higher net benefit in community-based screening, advocating its adoption in low- and middle-income countries to address global eye health challenges.

PMID:38693205 | DOI:10.1038/s41746-024-01109-5

Categories: Literature Watch

Context-specific stress causes compartmentalized SARM1 activation and local degeneration in cortical neurons

Wed, 2024-05-01 06:00

J Neurosci. 2024 May 1:e2424232024. doi: 10.1523/JNEUROSCI.2424-23.2024. Online ahead of print.

ABSTRACT

SARM1 is an inducible NADase that localizes to mitochondria throughout neurons and senses metabolic changes that occur after injury. Minimal proteomic changes are observed upon either SARM1 depletion or activation, suggesting that SARM1 does not exert broad effects on neuronal protein homeostasis. However, whether SARM1 activation occurs throughout the neuron in response to injury and cell stress remains largely unknown. Using a semi-automated imaging pipeline and a custom-built deep learning scoring algorithm, we studied degeneration in both mixed-sex mouse primary cortical neurons and male human iPSC-derived cortical neurons in response to a number of different stressors. We show that SARM1 activation is differentially restricted to specific neuronal compartments depending on the stressor. Cortical neurons undergo SARM1-dependent axon degeneration after mechanical transection, and SARM1 activation is limited to the axonal compartment distal to the injury site. However, global SARM1 activation following vacor treatment causes both cell body and axon degeneration. Context-specific stressors, such as microtubule dysfunction and mitochondrial stress, induce axonal SARM1 activation leading to SARM1-dependent axon degeneration and SARM1-independent cell body death. Our data reveal that compartment-specific SARM1-mediated death signaling is dependent on the type of injury and cellular stressor.

Significance Statement: SARM1 is an important regulator of active axon degeneration after injury in the peripheral nervous system. Here we show that SARM1 can also be activated by a number of different cellular stressors in cortical neurons of the central nervous system. Loss or activation of SARM1 does not cause large-scale changes in global protein homeostasis. However, context-dependent SARM1 activation is localized to specific neuronal compartments and results in localized degeneration of axons. Understanding which cell stress pathways drive degeneration of distinct neuronal compartments, under which cellular stress conditions, and in which neuronal subtypes will inform the development of neurodegenerative disease therapeutics.

PMID:38692735 | DOI:10.1523/JNEUROSCI.2424-23.2024

Categories: Literature Watch

A prediction method of interaction based on Bilinear Attention Networks for designing polyphenol-protein complexes delivery systems

Wed, 2024-05-01 06:00

Int J Biol Macromol. 2024 Apr 29:131959. doi: 10.1016/j.ijbiomac.2024.131959. Online ahead of print.

ABSTRACT

Polyphenol-protein complex delivery systems are gaining attention for their potential health benefits and for food industry development. However, creating an ideal delivery system requires extensive wet-lab experimentation. To address this, we collected 525 ligand-protein interaction data pairs and established an interaction prediction model using Bilinear Attention Networks. We used 10-fold cross-validation to address potential overfitting, and the model showed high average AUROC (0.8443), AUPRC (0.7872), and F1 (0.8164). The optimal threshold (0.3739) was selected for subsequent analysis. Based on the model predictions and the optimal threshold, verified by experimental analysis, interactions of paeonol with the following proteins were obtained: bovine serum albumin (lgKa = 6.2759), bovine β-lactoglobulin (lgKa = 6.7479), egg ovalbumin (lgKa = 5.1806), zein (lgKa = 6.0122), bovine α-lactalbumin (lgKa = 3.9170), and bovine lactoferrin (lgKa = 4.5380); the first four proteins are consistent with the model's predictions, with lgKa > 5. The established model can accurately and rapidly predict polyphenol-protein interactions. This study is the first to combine open ligand-protein interaction experiments with deep learning algorithms in the food industry, greatly improving research efficiency and providing a novel perspective for future complex delivery system construction.
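
The AUROC reported above can be illustrated independently of the Bilinear Attention Network itself; a rank-based (Mann-Whitney) sketch on toy prediction scores, with a decision threshold applied as in the abstract:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive example scores higher than a randomly chosen negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy predicted interaction scores and true interaction labels:
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 0]
print(auroc(scores, labels))

# A threshold (0.3739 in the study) turns scores into binary predictions:
preds = [int(s >= 0.3739) for s in scores]
print(preds)
```
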

PMID:38692548 | DOI:10.1016/j.ijbiomac.2024.131959

Categories: Literature Watch

Lamb wave-based damage assessment for composite laminates using a deep learning approach

Wed, 2024-05-01 06:00

Ultrasonics. 2024 Apr 25;141:107333. doi: 10.1016/j.ultras.2024.107333. Online ahead of print.

ABSTRACT

With the increasing utilization of composite materials due to their superior properties, the need for efficient structural health monitoring techniques rises rapidly to ensure the integrity and reliability of composite structures. Deep learning approaches have great potential for Lamb wave-based damage detection. However, it remains challenging to quantitatively detect and characterize damage such as delamination in multi-layered structures, and these deep learning architectures still lack a certain degree of physical interpretability. In this study, a convolutional sparse coding-based UNet (CSCUNet) is proposed for ultrasonic Lamb wave-based damage assessment in composite laminates. A low-resolution image is generated using a delay-and-sum algorithm based on Lamb waves acquired by a transducer array. The encoder-decoder framework in the proposed CSCUNet enables the transformation of the low-resolution input image into a high-resolution damage image. In addition, a multi-layer convolutional sparse coding block is introduced into the encoder of the CSCUNet to improve both the performance and interpretability of the model. The proposed method is tested on both numerical and experimental data acquired on the surface of a composite specimen. The results demonstrate its effectiveness in identifying delamination location, size, and shape. The network has powerful feature extraction capability and enhanced interpretability, enabling high-resolution imaging and contour evaluation of composite material damage.

PMID:38692213 | DOI:10.1016/j.ultras.2024.107333

Categories: Literature Watch

A deep learning-based pipeline for developing multi-rib shape generative model with populational percentiles or anthropometrics as predictors

Wed, 2024-05-01 06:00

Comput Med Imaging Graph. 2024 Apr 25;115:102388. doi: 10.1016/j.compmedimag.2024.102388. Online ahead of print.

ABSTRACT

Rib cross-sectional shapes (characterized by the outer contour and cortical bone thickness) affect the rib mechanical response under impact loading, thereby influencing the rib injury pattern and risk. A statistical description of rib shapes or their correlations to anthropometrics is a prerequisite to the development of numerical human body models representing target demographics. Variational autoencoders (VAE) as anatomical shape generators remain to be explored in terms of utilizing the latent vectors to control or interpret the representativeness of the generated results. In this paper, we propose a pipeline for developing a multi-rib cross-sectional shape generative model from CT images, which consists of deriving rib cross-sectional shape data from CT images using an anatomical indexing system and regular grids, and a unified framework to fit shape distributions and associate shapes with anthropometrics for different rib categories. Specifically, we collected CT images including 3193 ribs, generated a surface regular grid for each rib based on anatomical coordinates, and characterized the rib cross-sectional shapes by nodal coordinates and cortical bone thickness. The tensor structure of the shape data based on regular grids enables the implementation of CNNs in the conditional variational autoencoder (CVAE). The CVAE is trained against an auxiliary classifier to decouple the low-dimensional representations of the inter- and intra-class variations and to fit each intra-class variation by a Gaussian distribution simultaneously. Random tree regressors are further leveraged to associate each continuous intra-class space with the corresponding anthropometrics of the subjects, i.e., age, height, and weight. As a result, with the rib class labels and the latent vectors sampled from Gaussian distributions or predicted from anthropometrics as inputs, the decoder can generate valid rib cross-sectional shapes of given class labels (male/female, 2nd to 11th ribs) for arbitrary populational percentiles or specific age, height, and weight, paving the road for future biomedical and biomechanical studies considering the diversity of rib shapes across the population.

PMID:38692200 | DOI:10.1016/j.compmedimag.2024.102388

Categories: Literature Watch

Motion correction and super-resolution for multi-slice cardiac magnetic resonance imaging via an end-to-end deep learning approach

Wed, 2024-05-01 06:00

Comput Med Imaging Graph. 2024 Apr 29;115:102389. doi: 10.1016/j.compmedimag.2024.102389. Online ahead of print.

ABSTRACT

Accurate reconstruction of a high-resolution 3D volume of the heart is critical for comprehensive cardiac assessments. However, cardiac magnetic resonance (CMR) data is usually acquired as a stack of 2D short-axis (SAX) slices, which suffers from inter-slice misalignment due to cardiac motion and from data sparsity due to large gaps between SAX slices. Therefore, we propose an end-to-end deep learning (DL) model that addresses these two challenges simultaneously, employing specific model components for each challenge. The objective is to reconstruct a high-resolution 3D volume of the heart (VHR) from acquired CMR SAX slices (VLR). We define the transformation from VLR to VHR as a sequential process of motion correction and super-resolution. Accordingly, our DL model incorporates two distinct components. The first component conducts motion correction by predicting displacement vectors to re-position each SAX slice accurately. The second component takes the motion-corrected SAX slices from the first component and performs super-resolution to fill the data gaps. These two components operate sequentially, and the entire model is trained end-to-end. Our model significantly reduced inter-slice misalignment from 3.33±0.74 mm to 1.36±0.63 mm and generated accurate high-resolution 3D volumes with a Dice of 0.974±0.010 for the left ventricle (LV) and 0.938±0.017 for the myocardium in a simulation dataset. When compared to the LAX contours in a real-world dataset, our model achieved a Dice of 0.945±0.023 for the LV and 0.786±0.060 for the myocardium. In both datasets, our model with specific components for motion correction and super-resolution significantly enhances performance compared to the model without such design considerations. The codes for our model are available at https://github.com/zhennongchen/CMR_MC_SR_End2End.
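
The Dice scores above measure segmentation overlap between the reconstructed and reference anatomy; a minimal sketch on toy binary masks (not the paper's cardiac volumes):

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks,
    given as flat sequences of 0/1 values."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy 8-voxel masks (e.g. flattened LV segmentations):
a = [1, 1, 1, 0, 0, 1, 0, 1]
b = [1, 1, 0, 0, 1, 1, 0, 1]
print(dice(a, b))
```

A Dice of 1.0 means perfect overlap, so the values near 0.97 above indicate the reconstructed volumes closely match the reference segmentations.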

PMID:38692199 | DOI:10.1016/j.compmedimag.2024.102389

Categories: Literature Watch

Towards complex dynamic physics system simulation with graph neural ordinary equations

Wed, 2024-05-01 06:00

Neural Netw. 2024 Apr 25;176:106341. doi: 10.1016/j.neunet.2024.106341. Online ahead of print.

ABSTRACT

The great learning ability of deep learning facilitates our comprehension of the real physical world, making learning to simulate complicated particle systems a promising endeavour in both academia and industry. However, the complex laws of the physical world pose significant challenges to learning-based simulations, such as the varying spatial dependencies between interacting particles and the varying temporal dependencies between particle system states at different timestamps, which dominate particles' interacting behavior and the physical systems' evolution patterns. Existing learning-based methods fail to fully account for these complexities, making them unable to yield satisfactory simulations. To better comprehend the complex physical laws, we propose a novel model - Graph Networks with Spatial-Temporal neural Ordinary Differential Equations (GNSTODE) - that characterizes the varying spatial and temporal dependencies in particle systems using a unified end-to-end framework. Through training on real-world particle-particle interaction observations, GNSTODE can simulate any possible particle system with high precision. We empirically evaluate GNSTODE's simulation performance on two real-world particle systems, Gravity and Coulomb, with varying levels of spatial and temporal dependencies. The results show that GNSTODE yields better simulations than state-of-the-art methods, demonstrating that GNSTODE can serve as an effective tool for particle simulation in real-world applications. Our code is made available at https://github.com/Guangsi-Shi/AI-for-physics-GNSTODE.

PMID:38692189 | DOI:10.1016/j.neunet.2024.106341

Categories: Literature Watch

Non-invasive screening and subtyping for breast cancer by serum SERS combined with LGB-DNN algorithms

Wed, 2024-05-01 06:00

Talanta. 2024 Apr 27;275:126136. doi: 10.1016/j.talanta.2024.126136. Online ahead of print.

ABSTRACT

Early detection of breast cancer and its molecular subtyping is crucial for guiding clinical treatment and improving survival rates. Current diagnostic methods for breast cancer are invasive, time-consuming, and complicated. In this work, an optical detection method integrating surface-enhanced Raman spectroscopy (SERS) technology with feature selection and deep learning algorithms was developed for identifying serum components and building a diagnostic model, with the aim of efficient and accurate noninvasive screening of breast cancer. First, high-quality serum SERS spectra were obtained from breast cancer (BC) and breast benign disease (BBD) patients and healthy controls (HC). Chi-square tests were conducted to exclude confounding factors, enhancing the reliability of the study. Then, the LightGBM (LGB) algorithm was used as the base model to retain useful features and significantly improve classification performance. The DNN algorithm was trained through backpropagation, adjusting the weights and biases between neurons to improve the network's predictive ability. In comparison to traditional machine learning algorithms, this method provided more accurate information for breast cancer classification, with classification accuracies of 91.38% for BC and BBD, and 96.40% for BC, BBD, and HC. Furthermore, accuracies of 90.11% for HR+/HR- and 88.89% for HER2+/HER2- were reached when evaluating BC patients' molecular subtypes. These results demonstrate that serum SERS combined with the powerful LGB-DNN algorithm would provide a supplementary method for clinical breast cancer screening.

PMID:38692045 | DOI:10.1016/j.talanta.2024.126136

Categories: Literature Watch

Innovative methods for microplastic characterization and detection: Deep learning supported by photoacoustic imaging and automated pre-processing data

Wed, 2024-05-01 06:00

J Environ Manage. 2024 Apr 30;359:120954. doi: 10.1016/j.jenvman.2024.120954. Online ahead of print.

ABSTRACT

Plastic products' widespread applications and their non-biodegradable nature have resulted in the continuous accumulation of microplastic waste, which is emerging as a significant component of ecological and environmental issues. In the field of microplastic detection, intricate morphology poses challenges for rapid visual characterization of microplastics. In this study, photoacoustic imaging technology is first employed to capture high-resolution images of diverse microplastic samples. To address the limited-dataset issue, an automated data processing pipeline is designed to obtain sample masks while effectively expanding the dataset size. Additionally, we propose Vqdp2, a generative deep learning model with multiple proxy tasks, for predicting six forms of microplastics data. By simultaneously constraining model parameters through two training modes, outstanding morphological category representations are achieved. The results demonstrate Vqdp2's excellent performance in classification accuracy and feature extraction by leveraging the advantages of multi-task training. This research is expected to be attractive for the detection, classification, and visual characterization of microplastics.

PMID:38692026 | DOI:10.1016/j.jenvman.2024.120954

Categories: Literature Watch

Prediction of systemic lupus erythematosus-related genes based on graph attention network and deep neural network

Wed, 2024-05-01 06:00

Comput Biol Med. 2024 Mar 25;175:108371. doi: 10.1016/j.compbiomed.2024.108371. Online ahead of print.

ABSTRACT

Systemic lupus erythematosus (SLE) is an autoimmune disorder intricately linked to genetic factors, and numerous approaches have identified genes linked to its development, diagnosis, and prognosis. Although genome-wide association analyses and gene knockout experiments have confirmed some genes associated with SLE, there are still numerous potential genes yet to be discovered. The search for relevant genes through biological experiments entails significant financial and human resources. With the advancement of computational technologies like deep learning, we aim to identify SLE-related genes through deep learning methods, thereby narrowing down the scope for biological experimentation. This study introduces SLEDL, a deep learning-based approach that leverages DNNs and graph neural networks to effectively identify SLE-related genes by capturing relevant features in the gene interaction network. These steps transform the identification of SLE-related genes into a binary classification problem, ultimately solved through a fully connected layer. The results demonstrate the superiority of SLEDL, achieving higher AUC (0.7274) and AUPR (0.7599), further validated through case studies.

PMID:38691916 | DOI:10.1016/j.compbiomed.2024.108371

Categories: Literature Watch

Brain tumor detection with integrating traditional and computational intelligence approaches across diverse imaging modalities - Challenges and future directions

Wed, 2024-05-01 06:00

Comput Biol Med. 2024 Apr 16;175:108412. doi: 10.1016/j.compbiomed.2024.108412. Online ahead of print.

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring approaches based on image processing, machine learning, and deep learning; it discusses their advantages and limitations and highlights recent advancements in the field. The impact of existing segmentation and classification techniques on automated brain tumor detection is also critically examined using various open-source datasets of magnetic resonance images (MRI) of different modalities. Moreover, the study highlights the challenges posed by current segmentation and classification techniques and by datasets spanning multiple MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.

PMID:38691914 | DOI:10.1016/j.compbiomed.2024.108412

Categories: Literature Watch

DeepOCR: A multi-species deep-learning framework for accurate identification of open chromatin regions in livestock

Wed, 2024-05-01 06:00

Comput Biol Chem. 2024 Apr 19;110:108077. doi: 10.1016/j.compbiolchem.2024.108077. Online ahead of print.

ABSTRACT

A wealth of experimental evidence suggests that open chromatin regions (OCRs) are involved in many critical biological activities, such as DNA replication, enhancer activity, and gene transcription. Accurately identifying OCRs in livestock species can provide critical insights into the distribution and characteristics of OCRs for disease treatment in livestock, thereby improving animal welfare. However, most current machine-learning methods for OCR prediction were originally designed for a limited number of model organisms, such as humans, and their performance on non-model organisms, specifically livestock, is often unsatisfactory. To bridge this gap, we propose DeepOCR, a lightweight depthwise-separable residual network model for predicting OCRs in livestock, including chicken, cattle, and sheep. DeepOCR integrates a single convolution layer and two improved residual structure blocks to extract and learn important features from the input DNA sequences. A fully connected layer is also employed to further process the extracted features and improve the robustness of the entire network. Our benchmarking experiments demonstrated the superior prediction performance of DeepOCR compared to state-of-the-art approaches on testing datasets of the three species. The source code of DeepOCR is freely available for academic purposes at https://github.com/jasonzhao371/DeepOCR/. We anticipate that DeepOCR will serve as a practical and reliable computational tool for OCR-related studies in livestock species.
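The exact layer configuration lives in the GitHub repository; purely as a sketch of the building block the abstract names, a depthwise-separable 1-D convolution over one-hot-encoded DNA splits a standard convolution into a per-channel filter followed by a 1x1 channel mix. All shapes below are hypothetical:

```python
import numpy as np

def one_hot(seq):
    """One-hot encode a DNA string into a (len, 4) array, channels A,C,G,T."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        out[i, idx[base]] = 1.0
    return out

def depthwise_separable_conv1d(x, depthwise, pointwise):
    """Depthwise-separable 1-D convolution, 'valid' padding.

    x         : (L, C) input sequence
    depthwise : (k, C) one filter per input channel (no channel mixing)
    pointwise : (C, C_out) 1x1 convolution that mixes channels
    """
    k, C = depthwise.shape
    L = x.shape[0] - k + 1
    dw = np.zeros((L, C))
    for t in range(L):
        dw[t] = (x[t:t + k] * depthwise).sum(axis=0)  # per-channel spatial conv
    return dw @ pointwise                             # cheap channel mixing
```

Compared with a full convolution's k*C*C_out weights per filter position, the split needs only k*C + C*C_out, which is what makes such blocks "lightweight".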

PMID:38691895 | DOI:10.1016/j.compbiolchem.2024.108077

Categories: Literature Watch

Discovering optimal kinetic pathways for self-assembly using automatic differentiation

Wed, 2024-05-01 06:00

Proc Natl Acad Sci U S A. 2024 May 7;121(19):e2403384121. doi: 10.1073/pnas.2403384121. Epub 2024 May 1.

ABSTRACT

Macromolecular complexes are often composed of diverse subunits. The self-assembly of these subunits is inherently nonequilibrium and must avoid kinetic traps to achieve high yield over feasible timescales. We show how the kinetics of self-assembly benefits from diversity in subunits because it generates an expansive parameter space that naturally improves the "expressivity" of self-assembly, much like a deeper neural network. By using automatic differentiation algorithms commonly used in deep learning, we searched the parameter spaces of mass-action kinetic models to identify classes of kinetic protocols that mimic biological solutions for productive self-assembly. Our results reveal how high-yield complexes that easily become kinetically trapped in incomplete intermediates can instead be steered, by internal design of rate constants or by external and active control of subunits, to assemble efficiently. Internal design of a hierarchy of subunit binding rates generates self-assembly that can robustly avoid kinetic traps for all concentrations and energetics, but it places strict constraints on the selection of relative rates. External control via subunit titration is more versatile, avoiding kinetic traps for any system without requiring molecular engineering of binding rates, albeit less efficiently and robustly. We derive theoretical expressions for the timescales of kinetic traps, and we demonstrate that our optimization method applies not just to design but also to inference, extracting intersubunit binding rates from observations of yield versus time for a heterotetramer. Overall, we identify optimal kinetic protocols for self-assembly as a powerful mechanism to achieve efficient and high-yield assembly in synthetic systems, whether robustness or ease of "designability" is preferred.
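The core idea of differentiating through a mass-action kinetic model can be shown on a toy system far simpler than the paper's: a single dimerization A + B -> AB with rate k, integrated by forward Euler, with forward-mode dual numbers standing in for a real autodiff library. Everything here is an illustrative assumption, not the authors' code or models:

```python
class Dual:
    """Forward-mode dual number: value plus derivative w.r.t. one parameter."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):          # product rule carries the derivative
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dimer_yield(k, a0=1.0, b0=1.0, dt=0.01, steps=500):
    """Euler-integrate mass-action kinetics of A + B -> AB; return final [AB]."""
    a, b, ab = Dual(a0), Dual(b0), Dual(0.0)
    for _ in range(steps):
        flux = k * a * b           # mass-action rate law
        a = a - dt * flux
        b = b - dt * flux
        ab = ab + dt * flux
    return ab

k = Dual(1.0, 1.0)                 # seed derivative d/dk at k = 1
y = dimer_yield(k)
# y.val is the yield; y.dot is d(yield)/dk, the gradient an optimizer would follow
```

Gradient ascent on `y.dot` (over many rate constants at once, with a real autodiff framework) is the kind of search over kinetic protocols the abstract describes.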

PMID:38691585 | DOI:10.1073/pnas.2403384121

Categories: Literature Watch
