Deep learning

MLMFNet: A multi-level modality fusion network for multi-modal accelerated MRI reconstruction

Thu, 2024-04-25 06:00

Magn Reson Imaging. 2024 Apr 23:S0730-725X(24)00141-3. doi: 10.1016/j.mri.2024.04.028. Online ahead of print.

ABSTRACT

Magnetic resonance imaging produces detailed anatomical and physiological images of the human body that can be used in the clinical diagnosis and treatment of diseases. However, MRI suffers from a comparatively longer acquisition time than other imaging methods and is thus vulnerable to motion artifacts, which can ultimately lead to failed or even incorrect diagnoses. To enable faster reconstruction, deep learning-based methods, along with traditional strategies such as parallel imaging and compressed sensing, have come into play in this field in recent years. Meanwhile, to better analyze diseases, it is also often necessary to acquire images of the same region of interest under different modalities, which yields images with different contrast levels. However, most of the aforementioned methods tend to use single-modal images for reconstruction, neglecting the correlation and redundancy information embedded in MR images acquired with different modalities. While there are works on multi-modal reconstruction, this information is yet to be efficiently explored. In this paper, we propose an end-to-end neural network called MLMFNet, which helps the reconstruction of the target modality by using information from the auxiliary modality across feature channels and layers. Specifically, this is highlighted by three components: (I) an encoder based on UNet with a single-stream strategy that fuses the auxiliary and target modalities; (II) a decoder that attends to multi-level features from all layers of the encoder; and (III) a channel attention module. Quantitative and qualitative analyses are performed on a public brain dataset and a knee dataset, which show that the proposed method achieves satisfactory results in MRI reconstruction within the multi-modal context, and also demonstrate its effectiveness and potential to be used in clinical practice.
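The abstract does not specify the exact form of the channel attention module, so the sketch below shows one common squeeze-and-excitation style block as an illustration of how fused feature channels can be re-weighted; the layer sizes and reduction ratio are assumptions, not values from the paper.

```python
# Hedged sketch: a generic squeeze-and-excitation style channel attention block,
# illustrating the kind of "channel attention module" named in the abstract.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # squeeze: global spatial average
        self.fc = nn.Sequential(                          # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                       # re-weight feature channels

# Example: fused target + auxiliary feature maps re-weighted channel-wise
feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)                   # torch.Size([2, 64, 32, 32])
```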

PMID:38663831 | DOI:10.1016/j.mri.2024.04.028

Categories: Literature Watch

Intrapartum electronic fetal heart rate monitoring to predict acidemia at birth with the use of deep learning

Thu, 2024-04-25 06:00

Am J Obstet Gynecol. 2024 Apr 23:S0002-9378(24)00528-3. doi: 10.1016/j.ajog.2024.04.022. Online ahead of print.

ABSTRACT

BACKGROUND: Electronic fetal monitoring (EFM) is used in the vast majority of US hospital births but has significant limitations in achieving its intended goal of preventing intrapartum hypoxic-ischemic injury. Novel deep learning techniques can improve complex data processing and pattern recognition in medicine.

OBJECTIVE: We sought to apply deep learning approaches to develop and validate a model to predict fetal acidemia from EFM data.

STUDY DESIGN: The database was created using intrapartum EFM data from 2006-2020 from a large, multi-site academic health system. Data were divided into training and testing sets with an equal distribution of acidemic cases. Several different deep learning architectures were explored. The primary outcome was umbilical artery acidemia, investigated at four clinically meaningful pH thresholds: 7.20, 7.15, 7.10, and 7.05, along with base excess. Receiver operating characteristic (ROC) curves were generated, and the area under the curve (AUROC) was assessed to determine the performance of the models. External validation was performed using a publicly available Czech database of EFM data.

RESULTS: A total of 124,777 EFM files were available; 77,132 had <30% missingness in the last 60 minutes of the EFM tracing; 21,041 were matched to a corresponding umbilical cord gas result, 10,182 of which were timestamped within 30 minutes of the last EFM reading and comprised the final dataset. The prevalence of the outcome in the data was 20.9% with pH <7.2, 9.1% <7.15, 3.3% <7.10, and 1.3% <7.05. The best performing model achieved an AUROC of 0.85 at a pH threshold of <7.05. When predicting the joint outcome of both pH <7.05 and base excess <-10 meq/L, it achieved an AUROC of 0.89. When predicting both pH <7.20 and base excess <-10 meq/L, it achieved an AUROC of 0.87. At pH <7.15 and a PPV of 30%, the model achieved a sensitivity of 90% and a specificity of 48%.
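As a brief illustration of how the threshold-wise AUROC values reported above are computed, the sketch below evaluates a set of model probabilities against binary acidemia labels at each pH cutoff; the pH values and probabilities are synthetic placeholders, not data from the study.

```python
# Hedged sketch (not the study's code): AUROC at several pH thresholds from
# model-predicted acidemia probabilities, using synthetic placeholder data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
ph = rng.normal(7.22, 0.08, size=1000)           # placeholder umbilical artery pH values
prob = 1 / (1 + np.exp((ph - 7.15) * 40))        # placeholder model probabilities

for thresh in (7.20, 7.15, 7.10, 7.05):
    y = (ph < thresh).astype(int)                # binary acidemia label at this cutoff
    print(f"pH < {thresh}: AUROC = {roc_auc_score(y, prob):.3f}")
```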

CONCLUSION: Application of deep learning methods to intrapartum EFM analysis achieves promising performance in predicting fetal acidemia. This technology could potentially help improve the accuracy and consistency of EFM interpretation.

PMID:38663662 | DOI:10.1016/j.ajog.2024.04.022

Categories: Literature Watch

Deep Learning-Assisted Spectrum-Structure Correlation: State-of-the-Art and Perspectives

Thu, 2024-04-25 06:00

Anal Chem. 2024 Apr 25. doi: 10.1021/acs.analchem.4c01639. Online ahead of print.

ABSTRACT

Spectrum-structure correlation is playing an increasingly crucial role in spectral analysis and has undergone significant development in recent decades. With the advancement of spectrometers, high-throughput detection has triggered explosive growth of spectral data, and the extension of research from small molecules to biomolecules brings with it a massive chemical space. Facing this evolving landscape of spectrum-structure correlation, conventional chemometrics has become ill-equipped, and deep learning-assisted chemometrics has rapidly emerged as a flourishing approach with a superior ability to extract latent features and make precise predictions. In this review, molecular and spectral representations and fundamental knowledge of deep learning are first introduced. We then summarize how deep learning has helped establish the correlation between spectrum and molecular structure over the past five years, by empowering spectral prediction (i.e., forward structure-spectrum correlation) and further enabling library matching and de novo molecular generation (i.e., inverse spectrum-structure correlation). Finally, we highlight the most important remaining open issues together with potential solutions. With the fast development of deep learning, an ultimate solution for establishing spectrum-structure correlation can be expected soon, which would trigger substantial development across various disciplines.
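To make the "library matching" direction concrete, the sketch below ranks library spectra by cosine similarity to a query spectrum, which is the simplest classical baseline for inverse spectrum-structure correlation; the spectra and molecule names are synthetic placeholders, not material from the review.

```python
# Hedged illustration of spectral library matching by cosine similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
library = {f"molecule_{i}": rng.random(500) for i in range(5)}    # 500-bin reference spectra
query = library["molecule_3"] + rng.normal(0, 0.05, 500)          # noisy measured spectrum

ranked = sorted(library.items(),
                key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
for name, spec in ranked:
    print(name, round(cosine_similarity(query, spec), 3))         # best match listed first
```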

PMID:38662943 | DOI:10.1021/acs.analchem.4c01639

Categories: Literature Watch

Editorial for "Fully Automated Identification of Lymph Node Metastases and Lymphovascular Invasion in Endometrial Cancer From Multi-Parametric MRI by Deep Learning"

Thu, 2024-04-25 06:00

J Magn Reson Imaging. 2024 Apr 25. doi: 10.1002/jmri.29409. Online ahead of print.

NO ABSTRACT

PMID:38662938 | DOI:10.1002/jmri.29409

Categories: Literature Watch

How DNA encodes the start of transcription

Thu, 2024-04-25 06:00

Science. 2024 Apr 26;384(6694):382-383. doi: 10.1126/science.adp0869. Epub 2024 Apr 25.

ABSTRACT

A deep-learning model reveals the rules that define transcription initiation.

PMID:38662850 | DOI:10.1126/science.adp0869

Categories: Literature Watch

Sequence basis of transcription initiation in the human genome

Thu, 2024-04-25 06:00

Science. 2024 Apr 26;384(6694):eadj0116. doi: 10.1126/science.adj0116. Epub 2024 Apr 26.

ABSTRACT

Transcription initiation is a process that is essential to ensuring the proper function of any gene, yet we still lack a unified understanding of sequence patterns and rules that explain most transcription start sites in the human genome. By predicting transcription initiation at base-pair resolution from sequences with a deep learning-inspired explainable model called Puffin, we show that a small set of simple rules can explain transcription initiation at most human promoters. We identify key sequence patterns that contribute to human promoter activity, each activating transcription with distinct position-specific effects. Furthermore, we explain the sequence basis of bidirectional transcription at promoters, identify the links between promoter sequence and gene expression variation across cell types, and explore the conservation of sequence determinants of transcription initiation across mammalian species.
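Sequence models that predict initiation at base-pair resolution typically consume one-hot encoded promoter sequence; the sketch below shows that generic preprocessing step only, and is not the published Puffin implementation.

```python
# Hedged sketch: one-hot encoding a promoter sequence as model input.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> np.ndarray:
    """Return a (len(seq), 4) one-hot matrix; unknown bases (e.g. N) stay all-zero."""
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in BASES:
            out[i, BASES[base]] = 1.0
    return out

promoter = "TATAAAAGGCGCGCCATGN"
x = one_hot(promoter)
print(x.shape)    # (19, 4)
print(x[:4])      # encoding of "TATA"
```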

PMID:38662817 | DOI:10.1126/science.adj0116

Categories: Literature Watch

Automated 3D Perioral Landmark Detection Using High-Resolution Network: Artificial Intelligence-Based Anthropometric Analysis

Thu, 2024-04-25 06:00

Aesthet Surg J. 2024 Apr 25:sjae103. doi: 10.1093/asj/sjae103. Online ahead of print.

ABSTRACT

BACKGROUND: 3D facial stereophotogrammetry, as a convenient, non-invasive and highly reliable evaluation tool, has shown great potential in pre-operative planning and treatment efficacy evaluation of plastic surgery in recent years. However, it requires manual identification of facial landmarks by trained evaluators to obtain anthropometric data, which consumes a large amount of time and effort. Automatic 3D facial landmark localization may facilitate fast data acquisition and eliminate evaluator error.

OBJECTIVES: In this paper, we propose a novel deep-learning method based on dimension-transformation and key-point detection for automated 3D perioral landmark annotation.

METHODS: The 3D facial model is transformed into 2D images on which a High-Resolution Network is implemented for key-point detection. The 2D coordinates of the key points are then mapped back to the 3D model using mathematical methods to obtain the 3D landmark coordinates. This program was trained with 120 facial models and validated on 50 facial models.
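The abstract does not specify the exact dimension-transformation, so the sketch below uses a simple orthographic projection to illustrate the 2D-to-3D mapping idea: project mesh vertices to the image plane, then map a detected 2D keypoint back to the nearest projected vertex; all coordinates are placeholders.

```python
# Hedged sketch of mapping a detected 2D keypoint back onto a 3D facial mesh.
import numpy as np

def project_orthographic(vertices: np.ndarray) -> np.ndarray:
    """Drop the z-axis: (N, 3) vertices -> (N, 2) image-plane coordinates."""
    return vertices[:, :2]

def backproject_keypoint(keypoint_2d: np.ndarray, vertices: np.ndarray) -> np.ndarray:
    """Return the 3D vertex whose projection lies closest to the detected 2D keypoint."""
    proj = project_orthographic(vertices)
    idx = np.argmin(np.linalg.norm(proj - keypoint_2d, axis=1))
    return vertices[idx]

mesh = np.random.default_rng(2).uniform(-50, 50, size=(1000, 3))  # placeholder facial mesh (mm)
detected = np.array([12.0, -8.5])                                  # placeholder 2D keypoint
print(backproject_keypoint(detected, mesh))                        # recovered 3D landmark
```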

RESULTS: Our approach achieved a satisfactory mean landmark detection error of 1.30 ± 0.68 mm with an average processing time of 5.2 ± 0.21 seconds per model. Subsequent analysis based on these landmarks showed an error of 0.87 ± 1.02 mm for linear measurements and 5.62 ± 6.61° for angular measurements.

CONCLUSIONS: This automated 3D perioral landmarking method could serve as an effective tool that enables fast and accurate anthropometric analysis of lip morphology for plastic surgery and aesthetic procedures.

PMID:38662744 | DOI:10.1093/asj/sjae103

Categories: Literature Watch

The impact of technological innovation on the green digital economy and development strategies

Thu, 2024-04-25 06:00

PLoS One. 2024 Apr 25;19(4):e0301051. doi: 10.1371/journal.pone.0301051. eCollection 2024.

ABSTRACT

To investigate the interplay among technological innovation, industrial structure, production methodologies, economic growth, and environmental consequences within the paradigm of a green economy, and to put forth strategies for sustainable development, this study scrutinizes the limitations inherent in conventional deep learning networks. Firstly, this study analyzes the limitations and optimization strategies of multi-layer perceptron (MLP) networks under the background of the green economy. Secondly, the MLP network model is optimized, and a dynamic analysis of the impact of technological innovation on the digital economy is discussed. Finally, the effectiveness of the optimization model is verified by experiments. Moreover, a sustainable development strategy based on dynamic analysis is also proposed. The experimental results reveal that, in comparison to traditional Linear Regression (LR), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Naive Bayes (NB) models, the optimized model in this study demonstrates improved performance across various metrics. With a sample size of 500, the optimized model achieves a prediction accuracy of 97.2% for forecasting future trends, representing an average increase of 14.6%. Precision reaches 95.4%, reflecting an average enhancement of 19.2%, while sensitivity attains 84.1%, with an average improvement of 11.8%. The mean absolute error is only 1.16, a reduction of 1.4 compared to traditional models, confirming the effectiveness of the optimized model in prediction. In the examination of changes in industrial structure, using 2020 data to forecast the output value of traditional and green industries in 2030, the output value of traditional industries is anticipated to decrease, with an average decline of 11.4 billion yuan. Conversely, propelled by the development of the digital economy, the output value of green industries is expected to increase, with an average growth of 23.4 billion yuan. This shift in industrial structure aligns with the principles and trends of the green economy, further promoting sustainable development. In the study of innovative production methods, the green industry has achieved an increase in output and significantly enhanced production efficiency, showing an average growth of 2.135 million tons compared to the 2020 average. Consequently, this study highlights the dynamic impact of technological innovation on the digital economy and its crucial role within the context of a green economy. It provides a useful reference for research on the dynamic effects of the digital economy under technological innovation.

PMID:38662690 | DOI:10.1371/journal.pone.0301051

Categories: Literature Watch

BTR: a bioinformatics tool recommendation system

Thu, 2024-04-25 06:00

Bioinformatics. 2024 Apr 25:btae275. doi: 10.1093/bioinformatics/btae275. Online ahead of print.

ABSTRACT

MOTIVATION: The rapid expansion of Bioinformatics research has led to a proliferation of computational tools for scientific analysis pipelines. However, constructing these pipelines is a demanding task, requiring extensive domain knowledge and careful consideration. As the Bioinformatics landscape evolves, researchers, both novice and expert, may feel overwhelmed in unfamiliar fields, potentially leading to the selection of unsuitable tools during workflow development.

RESULTS: In this paper, we introduce the Bioinformatics Tool Recommendation system (BTR), a deep learning model designed to recommend suitable tools for a given workflow-in-progress. BTR leverages recent advances in graph neural network technology, representing the workflow as a graph to capture essential context. Natural language processing techniques enhance tool recommendations by analyzing associated tool descriptions. Experiments demonstrate that BTR outperforms the existing Galaxy tool recommendation system, showcasing its potential to streamline scientific workflow construction.
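The core representation described above — a workflow-in-progress as a directed graph of tools that a graph neural network can encode — is illustrated minimally below; the tool names and edges are made-up examples, not taken from BTR or Galaxy.

```python
# Hedged illustration: a workflow-in-progress represented as a directed tool graph.
import networkx as nx

workflow = nx.DiGraph()
workflow.add_edges_from([
    ("fastqc", "trimmomatic"),       # quality control feeds trimming
    ("trimmomatic", "bwa_mem"),      # trimmed reads are aligned
    ("bwa_mem", "samtools_sort"),    # alignments are sorted
])

# A recommender would score candidate next tools given this graph context.
frontier = [n for n in workflow.nodes if workflow.out_degree(n) == 0]
print("Workflow nodes:", list(workflow.nodes))
print("Tools awaiting a successor:", frontier)   # e.g. ['samtools_sort']
```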

AVAILABILITY AND IMPLEMENTATION: The Python source code is available at https://github.com/ryangreenj/bioinformatics_tool_recommendation.

PMID:38662583 | DOI:10.1093/bioinformatics/btae275

Categories: Literature Watch

Appearance-based Gaze Estimation with Deep Learning: A Review and Benchmark

Thu, 2024-04-25 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Apr 25;PP. doi: 10.1109/TPAMI.2024.3393571. Online ahead of print.

ABSTRACT

Human gaze provides valuable information on human focus and intentions, making it a crucial area of research. Recently, deep learning has revolutionized appearance-based gaze estimation. However, due to the unique features of gaze estimation research, such as the unfair comparison between 2D gaze positions and 3D gaze vectors and the differing pre-processing and post-processing methods, there is a lack of a definitive guideline for developing deep learning-based gaze estimation algorithms. In this paper, we present a systematic review of appearance-based gaze estimation methods using deep learning. Firstly, we survey the existing gaze estimation algorithms along the typical gaze estimation pipeline: deep feature extraction, deep learning model design, personal calibration, and platforms. Secondly, to fairly compare the performance of different approaches, we summarize the data pre-processing and post-processing methods, including face/eye detection, data rectification, 2D/3D gaze conversion and gaze origin conversion. Finally, we set up a comprehensive benchmark for deep learning-based gaze estimation. We characterize all the public datasets and provide the source code of typical gaze estimation algorithms. This paper serves not only as a reference for developing deep learning-based gaze estimation methods, but also as a guideline for future gaze estimation research. The project web page can be found at https://phi-ai.buaa.edu.cn/Gazehub/.
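One of the post-processing steps the review standardizes, 2D/3D gaze conversion, is sketched below as the usual mapping between a unit 3D gaze vector and (pitch, yaw) angles; sign conventions vary between datasets, and the ones used here are a common choice rather than necessarily the benchmark's.

```python
# Hedged sketch: converting a 3D gaze vector to pitch/yaw angles and back.
import numpy as np

def vector_to_pitchyaw(g: np.ndarray) -> np.ndarray:
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[1])            # vertical angle
    yaw = np.arctan2(-g[0], -g[2])      # horizontal angle
    return np.array([pitch, yaw])

def pitchyaw_to_vector(p: np.ndarray) -> np.ndarray:
    pitch, yaw = p
    return np.array([-np.cos(pitch) * np.sin(yaw),
                     -np.sin(pitch),
                     -np.cos(pitch) * np.cos(yaw)])

gaze = np.array([0.1, -0.2, -0.97])
angles = vector_to_pitchyaw(gaze)
print(np.degrees(angles))            # pitch/yaw in degrees
print(pitchyaw_to_vector(angles))    # round-trips to the normalized gaze vector
```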

PMID:38662567 | DOI:10.1109/TPAMI.2024.3393571

Categories: Literature Watch

DNASCANNER v2: A Web-Based Tool to Analyze the Characteristic Properties of Nucleotide Sequences

Thu, 2024-04-25 06:00

J Comput Biol. 2024 Apr 25. doi: 10.1089/cmb.2023.0227. Online ahead of print.

ABSTRACT

Throughout the process of evolution, DNA undergoes the accumulation of distinct mutations, which can often result in highly organized patterns that serve various essential biological functions. These patterns encompass various genomic elements and provide valuable insights into the regulatory and functional aspects of DNA. The physicochemical, mechanical, thermodynamic, and structural properties of DNA sequences play a crucial role in the formation of specific patterns. These properties contribute to the three-dimensional structure of DNA and influence its interactions with proteins, regulatory elements, and other molecules. In this study, we introduce DNASCANNER v2, an advanced version of our previously published algorithm DNASCANNER for analyzing DNA properties. The current tool is built using the FLASK framework in the Python language. Featuring a user-friendly interface tailored for nonspecialized researchers, it offers an extensive analysis of 158 DNA properties, including mono-/di-/trinucleotide frequencies and the structural, physicochemical, thermodynamic, and mechanical properties of DNA sequences. The tool provides downloadable results and offers interactive plots for easy interpretation and comparison between different features. We also demonstrate the utility of DNASCANNER v2 in analyzing splice-site junctions, casposon insertion sequences, and transposon insertion sites (TIS) within bacterial and human genomes. We also developed a deep learning module for the prediction of potential TIS in a given nucleotide sequence. In the future, we aim to optimize the performance of this prediction model through extensive training on larger datasets.
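As an illustration of the simplest of the 158 reported properties, the sketch below computes mono-, di-, and trinucleotide frequency profiles of a sequence; it is a generic reimplementation, not the tool's own code.

```python
# Hedged sketch: k-mer frequency profiles of a nucleotide sequence.
from collections import Counter

def kmer_frequencies(seq: str, k: int) -> dict:
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

sequence = "ATGCGCGATATATCGCGGCTA"
for k in (1, 2, 3):
    print(f"{k}-mers:", dict(sorted(kmer_frequencies(sequence, k).items())))
```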

PMID:38662479 | DOI:10.1089/cmb.2023.0227

Categories: Literature Watch

Real-Time Signal Analysis with Wider Dynamic Range and Enhanced Sensitivity in Multiplex Colorimetric Immunoassays Using Encoded Hydrogel Microparticles

Thu, 2024-04-25 06:00

Anal Chem. 2024 Apr 25. doi: 10.1021/acs.analchem.4c00773. Online ahead of print.

ABSTRACT

The simultaneous quantification of multiple proteins is crucial for accurate medical diagnostics. A promising technology, the multiplex colorimetric immunoassay using encoded hydrogel microparticles, has garnered attention due to its simplicity and multiplex capabilities. However, it encounters challenges related to its dynamic range, as it relies solely on colorimetric signal analysis of the encoded hydrogel microparticles at a specific time point (i.e., end-point analysis). This necessitates the precise determination of the optimal time point for terminating the colorimetric reaction. In this study, we introduce real-time signal analysis to quantify proteins by observing the continuous colorimetric signal change within the encoded hydrogel microparticles. Real-time signal analysis measures the "slope", the rate of colorimetric signal generation, by focusing on the kinetics of the accumulation of colorimetric products instead of the colorimetric signal that appears at the end point. By developing a deep learning-based automatic analysis program that automatically reads the code of the graphically encoded hydrogel microparticles and obtains the slope by continuously tracking the colorimetric signal, we achieved high accuracy and high-throughput analysis. This technology secured a dynamic range more than twice as wide as that of conventional end-point signal analysis, while simultaneously achieving a sensitivity that is 4-10 times higher. Finally, as a demonstration of application, we performed multiplex colorimetric immunoassays using real-time signal analysis covering a wide concentration range of protein targets associated with pre-eclampsia.
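The "slope" readout described above amounts to estimating the rate of signal accumulation over time; the sketch below does this with a simple linear least-squares fit over a synthetic colorimetric trace, purely as an illustration of the quantity being measured.

```python
# Hedged sketch: estimating the colorimetric accumulation rate ("slope") from a time trace.
import numpy as np

time_s = np.arange(0, 300, 10, dtype=float)                           # 0-290 s, every 10 s
rng = np.random.default_rng(3)
signal = 0.004 * time_s + 0.05 + rng.normal(0, 0.01, time_s.size)     # placeholder absorbance trace

slope, intercept = np.polyfit(time_s, signal, deg=1)
print(f"slope = {slope:.4f} signal units per second")                 # rate used for quantification
```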

PMID:38662417 | DOI:10.1021/acs.analchem.4c00773

Categories: Literature Watch

Quantifying the scale of erosion along major coastal aquifers of Pakistan using geospatial and machine learning approaches

Thu, 2024-04-25 06:00

Environ Sci Pollut Res Int. 2024 Apr 25. doi: 10.1007/s11356-024-33296-9. Online ahead of print.

ABSTRACT

Insufficient freshwater recharge and climate change have resulted in seawater intrusion in most of the coastal aquifers in Pakistan. Coastal aquifers represent diverse landcover types with varying spectral properties, making it challenging to extract information about their state; hence, such investigation requires a combination of geospatial tools. This study aims to monitor erosion along the major coastal aquifers of Pakistan and proposes an approach that combines data fusion with machine and deep learning image segmentation architectures for erosion and accretion assessment in seascapes. The analysis demonstrated that the U-Net segmentation model with an EfficientNet backbone achieved the highest F1 score of 0.93, while ResNet101 achieved the lowest F1 score of 0.77. The resulting erosion maps indicated that Sandspit is experiencing erosion over a 3.14 km2 area. The Indus delta shows erosion of approximately 143 km2 of land over the past 30 years. Sonmiani has undergone substantial erosion, losing 52.2 km2 of land. Miani Hor has experienced erosion of up to 298 km2, Bhuri creek has eroded over 4.11 km2, east Phitii creek over 3.30 km2, and Waddi creek over 3.082 km2 of land. Tummi creek demonstrates erosion of 7.12 km2 of land, and East Khalri creek near Keti Bandar has undergone a measured loss of 5.2 km2 of land, linked with a quantified reduction in vertical sediment flow from 50 to 10 billion cubic meters (BCM). Our analysis suggests that intense erosion is primarily a result of reduced sediment flow and climate change. Addressing this issue needs to be prioritized in Pakistan's coastal management and climate change mitigation framework to safeguard communities. Leveraging emerging solutions, such as loss and damage financing and the integration of nature-based solutions (NbS), should be prioritized for the revival of the coastal aquifers.
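The erosion/accretion bookkeeping behind area figures like those above can be illustrated by comparing two binary land masks from different years and converting the changed pixels to area; the masks and the 30 m pixel size below are placeholder assumptions, not the study's data.

```python
# Hedged illustration: erosion and accretion area from before/after land masks (1 = land).
import numpy as np

pixel_area_km2 = (30 * 30) / 1e6             # e.g. 30 m pixels -> km2 per pixel

rng = np.random.default_rng(4)
land_year1 = rng.random((500, 500)) > 0.4    # placeholder shoreline mask, earlier year
land_year2 = rng.random((500, 500)) > 0.45   # placeholder shoreline mask, later year

erosion_km2 = np.sum(land_year1 & ~land_year2) * pixel_area_km2    # land lost to sea
accretion_km2 = np.sum(~land_year1 & land_year2) * pixel_area_km2  # land gained
print(f"erosion: {erosion_km2:.2f} km2, accretion: {accretion_km2:.2f} km2")
```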

PMID:38662291 | DOI:10.1007/s11356-024-33296-9

Categories: Literature Watch

Deep learning-accelerated T2WI: image quality, efficiency, and staging performance against BLADE T2WI for gastric cancer

Thu, 2024-04-25 06:00

Abdom Radiol (NY). 2024 Apr 25. doi: 10.1007/s00261-024-04323-7. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of our study is to investigate the image quality, efficiency, and diagnostic performance of deep learning-accelerated single-shot breath-hold (DLSB) T2-weighted MR imaging (T2WI) against BLADE T2WI for gastric cancer (GC).

METHODS: 112 patients with GCs undergoing gastric MRI were prospectively enrolled between Aug 2022 and Dec 2022. Axial DLSB-T2WI and BLADE-T2WI of the stomach were acquired with the same spatial resolution. Three radiologists independently evaluated the image quality using 5-point Likert scales (IQS) in terms of lesion delineation, gastric wall boundary conspicuity, and overall image quality. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated in measurable lesions. T staging was conducted based on the results of both sequences for GC patients who underwent gastrectomy. Pairwise comparisons between DLSB-T2WI and BLADE-T2WI were performed using the Wilcoxon signed-rank test, paired t-test, and chi-squared test. Kendall's W, Fleiss' Kappa, and intraclass correlation coefficient values were used to determine inter-reader reliability.
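The abstract does not give the exact ROI scheme, so the sketch below uses the common definitions SNR = mean lesion signal / noise SD and CNR = |lesion - adjacent wall| / noise SD, with placeholder ROI intensities, purely to illustrate the quantities being compared.

```python
# Hedged sketch: SNR and CNR from placeholder ROI pixel intensities.
import numpy as np

rng = np.random.default_rng(5)
lesion_roi = rng.normal(420, 25, 200)    # placeholder lesion pixel intensities
wall_roi = rng.normal(300, 25, 200)      # placeholder adjacent gastric wall ROI
noise_roi = rng.normal(0, 12, 200)       # placeholder background/noise ROI

snr = lesion_roi.mean() / noise_roi.std()
cnr = abs(lesion_roi.mean() - wall_roi.mean()) / noise_roi.std()
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```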

RESULTS: Against BLADE, DLSB reduced the total acquisition time of T2WI from 495 min (mean 4:42 per patient) to 33.6 min (18 s per patient), with better overall image quality that produced 9.43-fold, 8.00-fold, and 18.31-fold IQS upgrading against BLADE for the three readers, respectively. In 69 measurable lesions, DLSB-T2WI had a higher mean SNR and higher CNR than BLADE-T2WI. Among 71 patients with gastrectomy, DLSB-T2WI resulted in accuracy comparable to BLADE-T2WI in staging GCs (P > 0.05).

CONCLUSIONS: DLSB-T2WI demonstrated shorter acquisition time, better image quality, and comparable staging accuracy, which could be an alternative to BLADE-T2WI for gastric cancer imaging.

PMID:38662208 | DOI:10.1007/s00261-024-04323-7

Categories: Literature Watch

Integrated machine learning-based virtual screening and biological evaluation for identification of potential inhibitors against cathepsin K

Thu, 2024-04-25 06:00

Mol Divers. 2024 Apr 25. doi: 10.1007/s11030-024-10845-5. Online ahead of print.

ABSTRACT

Cathepsin K is a type of cysteine proteinase that is primarily expressed in osteoclasts and has a key role in the breakdown of bone matrix protein during bone resorption. Many studies suggest that a deficiency of cathepsin K is concomitant with a suppression of osteoclast functioning, rendering the resorptive properties of cathepsin K the most prominent target for osteoporosis. This work has identified novel anti-osteoporotic agents against cathepsin K by comparing machine learning- and deep learning-based virtual screening, followed by biological evaluation. Out of ten shortlisted compounds, five (JFD02945, JFD02944, RJC01981, KM08968 and SB01934) exhibit more than 50% inhibition of cathepsin K activity at 0.1 μM concentration and are considered to have a promising inhibitory effect against cathepsin K. Comprehensive docking, MD simulation, and MM/PBSA investigations affirm the stable and effective interaction of these compounds with cathepsin K to inhibit its function. Furthermore, the compounds RJC01981, KM08968 and SB01934 are suggested to have promising anti-osteoporotic properties for the management of osteoporosis owing to their favorable predicted ADMET properties.
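As a generic illustration of a fingerprint-based machine learning virtual screening step, analogous in spirit to the screening described above but not the authors' pipeline, the sketch below scores a query molecule with a random forest trained on Morgan fingerprints; the SMILES strings and activity labels are toy placeholders.

```python
# Hedged sketch: Morgan fingerprints + random forest for toy activity prediction.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)
    return np.array(list(fp), dtype=np.int8)

train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC"]   # toy molecules
train_labels = [0, 1, 1, 0]                                              # toy active/inactive

X = np.vstack([fingerprint(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_labels)

query = fingerprint("c1ccc(cc1)C(=O)O")                                  # benzoic acid
print("predicted probability of activity:", model.predict_proba([query])[0, 1])
```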

PMID:38662177 | DOI:10.1007/s11030-024-10845-5

Categories: Literature Watch

Development and evaluation of two open-source nnU-Net models for automatic segmentation of lung tumors on PET and CT images with and without respiratory motion compensation

Thu, 2024-04-25 06:00

Eur Radiol. 2024 Apr 25. doi: 10.1007/s00330-024-10751-2. Online ahead of print.

ABSTRACT

OBJECTIVES: In lung cancer, one of the main limitations for the optimal integration of the biological and anatomical information derived from Positron Emission Tomography (PET) and Computed Tomography (CT) is the time and expertise required for the evaluation of the different respiratory phases. In this study, we present two open-source models able to automatically segment lung tumors on PET and CT, with and without motion compensation.

MATERIALS AND METHODS: This study involved time-bin gated (4D) and non-gated (3D) PET/CT images from two prospective lung cancer cohorts (Trials 108237 and 108472) and one retrospective cohort. For model construction, the ground truth (GT) was defined by consensus of two experts, and the nnU-Net with 5-fold cross-validation was applied to 560 4D-images for PET and 100 3D-images for CT. The test sets included 270 4D-images and 19 3D-images for PET and 80 4D-images and 27 3D-images for CT, recruited at 10 different centres.

RESULTS: In the performance evaluation with the multicentre test sets, the Dice Similarity Coefficients (DSC) obtained for our PET model were DSC(4D-PET) = 0.74 ± 0.06, a 19% improvement relative to the DSC between experts, and DSC(3D-PET) = 0.82 ± 0.11. The performance for CT was DSC(4D-CT) = 0.61 ± 0.28 and DSC(3D-CT) = 0.63 ± 0.34, improvements of 4% and 15% relative to the DSC between experts.
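The Dice Similarity Coefficient used in the evaluation above is a standard overlap metric between a predicted and a ground-truth binary mask; the sketch below computes it on placeholder 3D tumor masks, purely to make the reported quantity concrete.

```python
# Hedged sketch: Dice Similarity Coefficient between two binary tumor masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

rng = np.random.default_rng(6)
gt = rng.random((64, 64, 64)) > 0.97       # placeholder ground-truth tumor voxels
pr = np.roll(gt, shift=1, axis=0)          # placeholder prediction, slightly shifted
print(f"DSC = {dice(pr, gt):.2f}")
```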

CONCLUSIONS: Performance evaluation demonstrated that the automatic segmentation models have the potential to achieve accuracy comparable to manual segmentation and thus hold promise for clinical application. The resulting models can be freely downloaded and employed to support the integration of 3D- or 4D- PET/CT and to facilitate the evaluation of its impact on lung cancer clinical practice.

CLINICAL RELEVANCE STATEMENT: We provide two open-source nnU-Net models for the automatic segmentation of lung tumors on PET/CT to facilitate the optimal integration of biological and anatomical information in clinical practice. The models have superior performance compared to the variability observed in manual segmentations by the different experts for images with and without motion compensation, allowing clinical practice to take advantage of more accurate and robust 4D quantification.

KEY POINTS: Lung tumor segmentation on PET/CT imaging is limited by respiratory motion, and manual delineation is time-consuming and suffers from inter- and intra-observer variability. Our segmentation models had superior performance compared to the manual segmentations by different experts. Automating PET image segmentation allows for easier clinical implementation of biological information.

PMID:38662100 | DOI:10.1007/s00330-024-10751-2

Categories: Literature Watch

Persistent Luminescence Lifetime-Based Near-Infrared Nanoplatform via Deep Learning for High-Fidelity Biosensing of Hypochlorite

Thu, 2024-04-25 06:00

Anal Chem. 2024 Apr 25. doi: 10.1021/acs.analchem.4c00899. Online ahead of print.

ABSTRACT

In light of their deep tissue penetration and ultralow background, near-infrared (NIR) persistent luminescence (PersL) bioprobes have become powerful tools for bioapplications. However, inhomogeneous signal attenuation caused by tissue absorption and scattering may significantly limit their application for precise biosensing. In this work, a PersL lifetime-based nanoplatform via deep learning was proposed for high-fidelity bioimaging and biosensing in vivo. The persistent luminescence imaging network (PLI-Net), which consisted of a 3D deep convolutional neural network (3D-CNN) and the PersL imaging system, was constructed to accurately extract the lifetime feature from the profile of PersL intensity-based decay images. Significantly, the NIR PersL nanomaterials, represented by Zn1+xGa2-2xSnxO4: 0.4% Cr (ZGSO), were precisely adjusted over their lifetime, enabling PersL lifetime-based imaging with high-contrast signals. Inspired by the adjustable and reliable PersL lifetime imaging of ZGSO NPs, a proof-of-concept PersL nanoplatform was further developed and showed exceptional analytical performance for hypochlorite detection via a luminescence resonance energy transfer process. Remarkably, on the merits of the dependable and interference-resistant PersL lifetimes, this PersL lifetime-based nanoprobe provided highly sensitive and accurate imaging of both endogenous and exogenous hypochlorite. This breakthrough opens up a new way for the development of high-fidelity biosensing in complex matrix systems.
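The lifetime feature that the network extracts from decay images is classically obtained by fitting an exponential decay to the luminescence trace; the sketch below shows that single-exponential baseline on synthetic data, as a point of reference rather than the paper's learned approach.

```python
# Hedged sketch: single-exponential lifetime fit to a synthetic persistent-luminescence decay.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, tau, baseline):
    return i0 * np.exp(-t / tau) + baseline

t = np.linspace(0, 60, 120)                                  # seconds after excitation stops
true_tau = 12.0
rng = np.random.default_rng(7)
intensity = decay(t, 1000.0, true_tau, 20.0) + rng.normal(0, 5, t.size)

popt, _ = curve_fit(decay, t, intensity, p0=(800.0, 5.0, 0.0))
print(f"fitted lifetime tau = {popt[1]:.1f} s (true {true_tau} s)")
```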

PMID:38661330 | DOI:10.1021/acs.analchem.4c00899

Categories: Literature Watch

Optimized deep learning network for plant leaf disease segmentation and multi-classification using leaf images

Thu, 2024-04-25 06:00

Network. 2024 Apr 25:1-34. doi: 10.1080/0954898X.2024.2337801. Online ahead of print.

ABSTRACT

Automatic detection of plant diseases is imperative for monitoring plants, because diseases are one of the major concerns in the agricultural sector. Continuous monitoring can combat plant diseases, which contribute to production loss. Plant diseases play a significant role in the global production of agricultural goods and harm yield, resulting in losses for the economy, society, and environment. Manually identifying disease symptoms on leaves is a difficult and time-consuming task. The majority of disease symptoms are reflected in plant leaves, but experts in laboratories spend a great deal of money and time diagnosing them. Plant and crop diseases are among the main factors that affect crop quality and quantity. Therefore, classification, segmentation, and recognition of infected symptoms at the early phase of infection are indispensable. Precision agriculture employs deep learning models to jointly address these issues. In this research, an efficient plant leaf disease segmentation and recognition model is introduced using an optimized deep learning technique. As a result, a maximum testing accuracy of 94.69%, sensitivity of 95.58%, and specificity of 92.90% were attained by the optimized deep learning method.

PMID:38661039 | DOI:10.1080/0954898X.2024.2337801

Categories: Literature Watch

Screening for urothelial carcinoma cells in urine based on digital holographic flow cytometry through machine learning and deep learning methods

Thu, 2024-04-25 06:00

Lab Chip. 2024 Apr 25. doi: 10.1039/d3lc00854a. Online ahead of print.

ABSTRACT

The incidence of urothelial carcinoma continues to rise annually, particularly among the elderly. Prompt diagnosis and treatment can significantly enhance patient survival and quality of life. Urine cytology remains a widely used early screening method for urothelial carcinoma, but it still has limitations, including limited sensitivity, labor-intensive procedures, and elevated cost. In recent developments, microfluidic chip technology offers an effective and efficient approach for clinical urine specimen analysis. Digital holographic microscopy, a form of quantitative phase imaging technology, captures extensive data on the refractive index and thickness of cells. The combination of microfluidic chips and digital holographic microscopy facilitates high-throughput imaging of live cells without staining. In this study, digital holographic flow cytometry was employed to rapidly capture images of diverse cell types present in urine and to reconstruct high-precision quantitative phase images for each cell type. Various machine learning algorithms and deep learning models were then applied to categorize these cell images, and remarkable accuracy in cancer cell identification was achieved. This research suggests that the integration of digital holographic flow cytometry with artificial intelligence algorithms offers a promising, precise, and convenient approach for early screening of urothelial carcinoma.

PMID:38660758 | DOI:10.1039/d3lc00854a

Categories: Literature Watch
