Deep learning

Systematic Review of Retinal Blood Vessels Segmentation Based on AI-driven Technique

Mon, 2024-03-04 06:00

J Imaging Inform Med. 2024 Mar 4. doi: 10.1007/s10278-024-01010-3. Online ahead of print.

ABSTRACT

Image segmentation is a crucial task in computer vision and image processing, and numerous segmentation algorithms can be found in the literature. It has important applications in scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others. In light of this, the widespread popularity of deep learning (DL) and machine learning (ML) has inspired the creation of new methods for segmenting images using DL and ML models, respectively. We offer a thorough analysis of this recent literature, encompassing the range of ground-breaking initiatives in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based methods, recurrent networks, visual attention models, and generative models in adversarial settings. We study the connections, benefits, and importance of various DL- and ML-based segmentation models; look at the most popular datasets; and evaluate the results reported in this literature.

PMID:38438695 | DOI:10.1007/s10278-024-01010-3

Categories: Literature Watch

Spatial landmark detection and tissue registration with deep learning

Mon, 2024-03-04 06:00

Nat Methods. 2024 Mar 4. doi: 10.1038/s41592-024-02199-5. Online ahead of print.

ABSTRACT

Spatial landmarks are crucial in describing histological features between samples or sites, tracking regions of interest in microscopy, and registering tissue samples within a common coordinate framework. Although other studies have explored unsupervised landmark detection, existing methods are not well suited for histological image data: they often require a large number of images to converge, are unable to handle nonlinear deformations between tissue sections, and are ineffective for z-stack alignment, for modalities beyond image data, or for multimodal data. We address these challenges by introducing effortless landmark detection, a new unsupervised landmark detection and registration method using neural-network-guided thin-plate splines. Our proposed method is evaluated on a diverse range of datasets including histology and spatially resolved transcriptomics, demonstrating superior performance in both accuracy and stability compared to existing approaches.

PMID:38438615 | DOI:10.1038/s41592-024-02199-5

Categories: Literature Watch

Predictive healthcare modeling for early pandemic assessment leveraging deep auto regressor neural prophet

Mon, 2024-03-04 06:00

Sci Rep. 2024 Mar 4;14(1):5287. doi: 10.1038/s41598-024-55973-y.

ABSTRACT

In this paper, NeuralProphet (NP), an explainable hybrid modular framework, enhances the forecasting performance of pandemics by adding two neural network modules: an auto-regressor (AR) and a lagged-regressor (LR). An advanced deep auto-regressor neural network (Deep-AR-Net) model is employed to implement these two modules. The enhanced NP is optimized via AdamW and the Huber loss function to perform multivariate multi-step forecasting, in contrast to Prophet. The models are validated with COVID-19 time-series datasets. NP's efficiency is studied component-wise for a long-term forecast for India, showing an overall MASE reduction of 60.36% relative to Prophet, with individual reductions of 34.7% from the AR module and 53.4% from the LR module. The Deep-AR-Net model reduces the forecasting error of NP for all five countries, on average, by 49.21% and 46.07% for short- and long-term forecasts, respectively. The visualizations confirm that the forecasting curves are closer to the actual cases and significantly different from Prophet's. Hence, the approach can support a real-time decision-making system for highly infectious diseases.
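
For reference, the MASE metric quoted above compares a model's mean absolute error to that of a naive lag-m forecast on the training data. A minimal sketch follows; the case counts and window lengths are hypothetical, not the paper's data.

```python
import numpy as np

def mase(y_true, y_pred, y_train, m=1):
    """Mean absolute scaled error.

    Scales the forecast MAE by the in-sample MAE of a naive (lag-m)
    forecast, so values below 1 beat the naive baseline.
    """
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_true - y_pred)) / naive_mae

# Hypothetical daily case counts: training history, actuals, and a forecast
train = np.array([100, 120, 150, 170, 160, 180, 210], dtype=float)
actual = np.array([220, 240], dtype=float)
forecast = np.array([215, 250], dtype=float)
print(f"MASE: {mase(actual, forecast, train):.3f}")
```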

PMID:38438528 | DOI:10.1038/s41598-024-55973-y

Categories: Literature Watch

Computational pathology model to assess acute and chronic transformations of the tubulointerstitial compartment in renal allograft biopsies

Mon, 2024-03-04 06:00

Sci Rep. 2024 Mar 4;14(1):5345. doi: 10.1038/s41598-024-55936-3.

ABSTRACT

Managing patients with kidney allografts largely depends on biopsy diagnosis, which is based on semiquantitative assessments of rejection features and the extent of acute and chronic changes within the renal parenchyma. Current methods lack reproducibility, whereas digital image data-driven computational models enable comprehensive and quantitative assays. In this study we aimed to develop a computational method for automated assessment of histopathology transformations within the tubulointerstitial compartment of the renal cortex. Whole slide images of modified Picrosirius red-stained biopsy slides were used for the training (n = 852), internal test (n = 172), and external test (n = 94) datasets. The pipeline utilizes deep learning segmentations of renal tubules, interstitium, and peritubular capillaries, from which morphometry features were extracted. Seven indicators were selected for exploring the intrinsic spatial interactions within the tubulointerstitial compartment. A principal component analysis revealed two independent factors, which can be interpreted as representing chronic and acute tubulointerstitial injury. K-means clustering classified biopsies according to potential phenotypes of combined acute and chronic transformations of various degrees. We conclude that multivariate analyses of tubulointerstitial morphometry transformations enable extraction and quantification of the acute and chronic components of injury. The method was developed for renal allograft biopsies; however, the principle can be applied more broadly to kidney pathology assessment.
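
The final analysis steps described above (principal component analysis of seven morphometry indicators followed by K-means phenotyping) can be sketched with scikit-learn. The feature matrix below is synthetic and only stands in for the extracted morphometry features.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for 7 tubulointerstitial morphometry indicators per biopsy
X = rng.normal(size=(172, 7))

# Standardize, then extract two components (interpreted in the paper as
# chronic and acute injury factors)
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
factors = pca.fit_transform(Z)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Cluster biopsies into candidate phenotypes of combined acute/chronic change
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)
print("cluster sizes:", np.bincount(labels))
```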

PMID:38438513 | DOI:10.1038/s41598-024-55936-3

Categories: Literature Watch

Automatic thoracic aorta calcium quantification using deep learning in non-contrast ECG-gated CT images

Mon, 2024-03-04 06:00

Biomed Phys Eng Express. 2024 Mar 4. doi: 10.1088/2057-1976/ad2ff2. Online ahead of print.

ABSTRACT

Thoracic aorta calcium (TAC) can be assessed from cardiac computed tomography (CT) studies to improve cardiovascular risk prediction. The aim of this study was to develop a fully automatic system to detect TAC and to evaluate its performance for classifying patients into four TAC risk categories. The method started by segmenting the thoracic aorta, combining three UNets trained with axial, sagittal, and coronal CT images. Afterwards, the surrounding lesion candidates were classified using three combined convolutional neural networks (CNNs) trained with orthogonal patches. Image datasets included 1190 non-enhanced ECG-gated cardiac CT studies from a cohort of cardiovascular patients (age 57 ± 9 years, 80% men, 65% TAC > 0). In the test set (N = 119), the combination of UNets was able to successfully segment the thoracic aorta with a mean volume difference of 0.3 ± 11.7 ml (<6%) and a median Dice coefficient of 0.947. The combined CNNs accurately classified the lesion candidates, and 87% of the patients (N = 104) were correctly placed in their corresponding risk categories (Kappa = 0.826, ICC = 0.9915). TAC can be measured automatically from cardiac CT images using UNets to isolate the thoracic aorta and CNNs to classify the calcified lesions.
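
The median Dice coefficient of 0.947 quoted above is the standard overlap measure between predicted and reference masks. A minimal numpy sketch with toy masks is shown below; the arrays are purely illustrative.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D masks standing in for predicted and reference aorta segmentations
pred = np.zeros((4, 4, 4), dtype=bool); pred[1:3, 1:3, 1:3] = True
ref = np.zeros((4, 4, 4), dtype=bool); ref[1:3, 1:3, :3] = True
print(f"Dice: {dice_coefficient(pred, ref):.3f}")
```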

PMID:38437732 | DOI:10.1088/2057-1976/ad2ff2

Categories: Literature Watch

Context-dependent design of induced-fit enzymes using deep learning generates well-expressed, thermally stable and active enzymes

Mon, 2024-03-04 06:00

Proc Natl Acad Sci U S A. 2024 Mar 12;121(11):e2313809121. doi: 10.1073/pnas.2313809121. Epub 2024 Mar 4.

ABSTRACT

The potential of engineered enzymes in industrial applications is often limited by their expression levels, thermal stability, and catalytic diversity. De novo enzyme design faces challenges due to the complexity of enzymatic catalysis. An alternative approach involves expanding natural enzyme capabilities to new substrates and parameters. Here, we introduce CoSaNN (Conformation Sampling using Neural Network), an enzyme design strategy using deep learning for structure prediction and sequence optimization. CoSaNN controls enzyme conformations to expand chemical space beyond simple mutagenesis. It employs a context-dependent approach for generating enzyme designs, considering non-linear relationships in sequence and structure space. We also developed SolvIT, a graph neural network that predicts protein solubility in Escherichia coli, to optimize the selection of well-expressed enzymes from larger design sets. Using this method, we engineered enzymes with superior expression levels (54% expressed in E. coli) and increased thermal stability (over 30% with a higher Tm than the template), without any high-throughput screening. Our research underscores AI's transformative role in protein design, capturing high-order interactions and preserving allosteric mechanisms in extensively modified enzymes while notably enhancing expression success rates. The method's ease of use and efficiency streamline enzyme design, opening broad avenues for biotechnological applications and broadening the field's accessibility.

PMID:38437538 | DOI:10.1073/pnas.2313809121

Categories: Literature Watch

Real-time binocular visual localization system based on the improved BGNet stereo matching framework

Mon, 2024-03-04 06:00

J Opt Soc Am A Opt Image Sci Vis. 2024 Mar 1;41(3):500-509. doi: 10.1364/JOSAA.499820.

ABSTRACT

Binocular vision technology is widely used to acquire three-dimensional information from images because of its low cost. In recent years, the use of deep learning for stereo matching has shown promising results in improving the measurement stability of binocular vision systems, but the real-time performance of high-precision networks is typically poor. Therefore, this study constructed a deep-learning-based stereo matching binocular vision system based on the BGLGA-Net, which combines the advantages of past networks. Experiments showed that the ability to detect the edges of foreground objects was enhanced. The network was used to build a system on the Xavier NX, whose measurement accuracy and stability were better than those of traditional algorithms.

PMID:38437441 | DOI:10.1364/JOSAA.499820

Categories: Literature Watch

Phasing segmented telescopes via deep learning methods: application to a deployable CubeSat

Mon, 2024-03-04 06:00

J Opt Soc Am A Opt Image Sci Vis. 2024 Mar 1;41(3):489-499. doi: 10.1364/JOSAA.506182.

ABSTRACT

Capturing high-resolution imagery of the Earth's surface often calls for a telescope of considerable size, even from low Earth orbit (LEO). A large aperture often requires large and expensive platforms. For instance, achieving a resolution of 1 m at visible wavelengths from LEO typically requires an aperture diameter of at least 30 cm. Additionally, ensuring high revisit times often prompts the use of multiple satellites. In light of these challenges, a small, segmented, deployable CubeSat telescope was recently proposed, creating the additional need to phase the telescope's mirrors. Phasing methods on compact platforms are constrained by the limited volume and power available, excluding solutions that rely on dedicated hardware or demand substantial computational resources. Neural networks (NNs) are known for their computationally efficient inference and reduced onboard requirements. Therefore, we developed an NN-based method to measure the co-phasing errors inherent to a deployable telescope. The proposed technique demonstrates its ability to detect phasing errors at the targeted performance level [typically a wavefront error (WFE) below 15 nm RMS for a visible imager operating at the diffraction limit] using a point source. The robustness of the NN method is verified in the presence of high-order aberrations or noise, and the results are compared against existing state-of-the-art techniques. The developed NN model confirms the feasibility of the approach and provides a realistic pathway towards achieving diffraction-limited images.

PMID:38437440 | DOI:10.1364/JOSAA.506182

Categories: Literature Watch

Nighttime color constancy using robust gray pixels

Mon, 2024-03-04 06:00

J Opt Soc Am A Opt Image Sci Vis. 2024 Mar 1;41(3):476-488. doi: 10.1364/JOSAA.506999.

ABSTRACT

Color constancy is a basic step for achieving stable color perception in both biological visual systems and the image signal processing (ISP) pipeline of cameras. So far, numerous computational models of color constancy have focused on scenes under normal light conditions but are less concerned with nighttime scenes. Compared with daytime scenes, nighttime scenes usually suffer from relatively high noise levels and insufficient lighting, which degrade the performance of color constancy methods designed for scenes under normal light. In addition, there is a lack of nighttime color constancy datasets, limiting the development of relevant methods. In this paper, building on gray-pixel-based color constancy methods, we propose a robust gray pixel (RGP) detection method by carefully designing the computation of illuminant-invariant measures (IIMs) from a given color-biased nighttime image. In addition, to evaluate the proposed method, a new dataset containing 513 nighttime images and corresponding ground-truth illuminants was collected. We believe this dataset is a useful supplement to the field of color constancy. Finally, experimental results show that the proposed method achieves superior performance to statistics-based methods. The proposed method was also compared with recent deep-learning methods for nighttime color constancy, and the results show the method's advantages in cross-validation among different datasets.
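
The RGP detector itself is not reproduced here, but the "statistics-based methods" it is compared against include classic baselines such as shades-of-gray. A minimal sketch of that baseline follows; the Minkowski order, the synthetic nighttime frame, and the color cast are hypothetical choices, not the paper's settings.

```python
import numpy as np

def shades_of_gray_illuminant(img, p=6, eps=1e-8):
    """Minkowski-norm (shades-of-gray) illuminant estimate.

    p=1 reduces to gray-world; large p approaches white-patch. This is a
    classic statistics-based baseline, not the RGP detector in the paper.
    """
    flat = img.reshape(-1, 3).astype(float)
    illum = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
    return illum / (np.linalg.norm(illum) + eps)

def correct_white_balance(img, illum, eps=1e-8):
    """Divide out the estimated illuminant (von Kries-style correction)."""
    gain = illum.mean() / (illum + eps)
    return np.clip(img * gain, 0.0, 1.0)

# Hypothetical color-biased nighttime frame: neutral content times a warm cast
rng = np.random.default_rng(0)
neutral = rng.uniform(0.02, 0.6, size=(120, 160, 1)) * np.ones((1, 1, 3))
cast = np.array([1.0, 0.75, 0.5])
biased = np.clip(neutral * cast + rng.normal(0, 0.01, neutral.shape), 0, 1)

est = shades_of_gray_illuminant(biased)
print("estimated illuminant:", np.round(est, 3))
print("true cast (unit norm):", np.round(cast / np.linalg.norm(cast), 3))
```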

PMID:38437439 | DOI:10.1364/JOSAA.506999

Categories: Literature Watch

Enhancing 3D human pose estimation with NIR single-pixel imaging and time-of-flight technology: a deep learning approach

Mon, 2024-03-04 06:00

J Opt Soc Am A Opt Image Sci Vis. 2024 Mar 1;41(3):414-423. doi: 10.1364/JOSAA.499933.

ABSTRACT

The extraction of 3D human pose and body shape details from a single monocular image is a significant challenge in computer vision. Traditional methods use RGB images, but these are constrained by varying lighting and occlusions. However, cutting-edge developments in imaging technologies have introduced new techniques such as single-pixel imaging (SPI) that can surmount these hurdles. In the near-infrared (NIR) spectrum, SPI demonstrates impressive capabilities in capturing a 3D human pose: this wavelength range can penetrate clothing and is less influenced by lighting variations than visible light, thus providing a reliable means to accurately capture body shape and pose data, even in difficult settings. In this work, we explore the use of an SPI camera operating in the NIR with time-of-flight (TOF) in the 850-1550 nm band as a solution to detect humans in nighttime environments. The proposed system uses the vision transformer (ViT) model to detect and extract the characteristic features of humans, which are integrated into the SMPL-X 3D body model through 3D body shape regression using deep learning. To evaluate the efficacy of NIR-SPI 3D image reconstruction, we constructed a laboratory scenario that simulates nighttime conditions, enabling us to test the feasibility of employing NIR-SPI as a vision sensor in outdoor environments. By assessing the results obtained from this setup, we aim to demonstrate the potential of NIR-SPI as an effective tool to detect humans in nighttime scenarios and capture their accurate 3D body pose and shape.

PMID:38437432 | DOI:10.1364/JOSAA.499933

Categories: Literature Watch

Phase retrieval from single-shot square wave fringe based on image denoising using deep learning

Mon, 2024-03-04 06:00

Appl Opt. 2024 Feb 1;63(4):1160-1169. doi: 10.1364/AO.506820.

ABSTRACT

Fringe-structured light measurement technology has garnered significant attention in recent years. To enhance measurement speed while maintaining a certain level of accuracy using binary fringes, this paper proposes a phase retrieval method based on a single-frame binary square-wave fringe pattern. The proposed method uses deep-learning-based image denoising to extract the phase, enabling a trained image denoiser to act as a low-pass filter that adaptively replaces the manual selection of an appropriate band-pass filter. The results demonstrate that this method achieves higher reconstruction accuracy than the traditional single-frame algorithm while preserving more object details.
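
For context, the manually tuned band-pass filter that the learned denoiser replaces belongs to the classical single-frame (Fourier-transform) pipeline: isolate the carrier sideband in the spectrum, demodulate it, and take the angle as the wrapped phase. A minimal numpy sketch of that classical approach is below; the carrier frequency, window width, and synthetic fringe are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def ftp_wrapped_phase(fringe, carrier_col, half_width):
    """Single-frame Fourier-transform phase extraction.

    Band-pass selects the +carrier sideband along the column axis, shifts it
    to DC, and the angle of the inverse FFT gives the wrapped phase. The
    manual choice of carrier_col/half_width is what a learned denoiser can
    replace adaptively.
    """
    F = np.fft.fftshift(np.fft.fft2(fringe))
    rows, cols = fringe.shape
    cc = cols // 2 + carrier_col                       # +carrier sideband column
    window = np.zeros_like(F)
    window[:, cc - half_width:cc + half_width + 1] = 1.0
    sideband = np.roll(F * window, -carrier_col, axis=1)  # move carrier to DC
    analytic = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(analytic)

# Synthetic binary square-wave fringe with a known smooth phase
y, x = np.mgrid[0:256, 0:256] / 256.0
true_phase = 4 * np.pi * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)
carrier_period = 8                                     # pixels per fringe
fringe = 0.5 + 0.5 * np.sign(np.cos(2 * np.pi * x * 256 / carrier_period + true_phase))

wrapped = ftp_wrapped_phase(fringe, carrier_col=256 // carrier_period, half_width=10)
err = np.angle(np.exp(1j * (wrapped - true_phase)))    # compare modulo 2*pi
print(f"RMS wrapped-phase deviation: {np.sqrt(np.mean(err ** 2)):.3f} rad")
```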

PMID:38437415 | DOI:10.1364/AO.506820

Categories: Literature Watch

Phase-shifting determination and pattern recognition using a modified Sagnac interferometer with multiple reflections

Mon, 2024-03-04 06:00

Appl Opt. 2024 Feb 1;63(4):1135-1143. doi: 10.1364/AO.511674.

ABSTRACT

This work implemented several modifications of the Sagnac interferometer to accommodate various measurement requirements, including phase shifting, pattern recognition, and morphological analysis. These modifications were introduced to validate the adaptability and versatility of the system. To enable phase shifting using the multiple light reflection technique, a half-wave plate (HWP) was rotated to 0, π/8, π/4, and 3π/8 radians, generating four interference patterns. A distinct circular fringe width can be observed as the polarized light is diffracted at the interferometer's output when it passes through a circular aperture with diameters ranging from 0.4 to 1 mm. Further modifications were made to the setup by inserting a pure glass and a fluorine-doped tin oxide (FTO) transparent substrate into the common path. This modification aimed to detect and analyze a horizontal fringe pattern. Subsequently, the FTO substrate was replaced with a bee leg to facilitate morphology recognition, and a deep-learning-based image processing technique was employed to analyze the bee leg morphology. The experimental results showed that the proposed scheme succeeded in achieving the phase shift, measuring hole diameters with errors smaller than 1.6%, separating distinct transparent crystals, and acquiring the morphological view of a bee's leg. The method also achieved accurate surface area and background segmentation with an accuracy over 87%. Overall, the outcomes demonstrated the potential of the proposed interferometer for various applications and highlighted the advantages of the optical sensors, particularly in microscopic applications.
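
The four interferograms produced by the HWP rotations lend themselves to the standard four-step phase-shifting formula φ = atan2(I4 − I2, I1 − I3). A minimal numpy sketch follows; the assumption that the four HWP angles correspond to phase shifts of 0, π/2, π, and 3π/2 is ours, not stated in the abstract, and the fringes are synthetic.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four interferograms shifted by 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes with a known quadratic phase, for a quick sanity check
y, x = np.mgrid[0:256, 0:256] / 256.0
true_phase = 6 * np.pi * (x ** 2 + y ** 2)
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [1.0 + 0.8 * np.cos(true_phase + d) for d in shifts]

wrapped = four_step_phase(*frames)
err = np.angle(np.exp(1j * (wrapped - true_phase)))   # compare modulo 2*pi
print(f"max wrapped-phase error: {np.abs(err).max():.2e} rad")
```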

PMID:38437412 | DOI:10.1364/AO.511674

Categories: Literature Watch

4-K-resolution minimalist optical system design based on deep learning

Mon, 2024-03-04 06:00

Appl Opt. 2024 Feb 1;63(4):917-926. doi: 10.1364/AO.510860.

ABSTRACT

In order to simplify optical systems, we propose a high-resolution minimalist optical design method based on deep learning. Unlike most imaging system design work, we couple optical design closely with image processing algorithms. For optical design, we separately study the impact of different aberrations on computational imaging and then propose an aberration metric and a spatially micro-variant design method that better meet the needs of image recognition. For image processing, we construct a dataset based on a point spread function (PSF) imaging simulation method. In addition, we use a non-blind deblurring computational imaging method to correct spatially variant aberrations. Finally, we achieve clear imaging at 4K resolution (5184×3888) using only two spherical lenses, with image quality similar to that of complex lenses on the market.
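
When the PSF is known from simulation, non-blind deblurring is commonly implemented as Wiener deconvolution. The sketch below is a generic spatially invariant version with a hypothetical Gaussian PSF and a random test image; it is not the paper's spatially variant pipeline.

```python
import numpy as np

def centered_otf(psf, shape):
    """Pad a small PSF to `shape` and center it for FFT-based filtering."""
    pad = np.zeros(shape)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Non-blind Wiener deconvolution; k is a noise-to-signal regularizer."""
    H = centered_otf(psf, blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

# Hypothetical Gaussian PSF standing in for a simulated aberration kernel
ax = np.arange(-7, 8)
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

rng = np.random.default_rng(0)
sharp = rng.uniform(size=(128, 128))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * centered_otf(psf, sharp.shape)))
restored = wiener_deconvolve(blurred, psf, k=1e-4)
print(f"RMSE after restoration: {np.sqrt(np.mean((restored - sharp) ** 2)):.4f}")
```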

PMID:38437388 | DOI:10.1364/AO.510860

Categories: Literature Watch

Image dehazing combining polarization properties and deep learning

Mon, 2024-03-04 06:00

J Opt Soc Am A Opt Image Sci Vis. 2024 Feb 1;41(2):311-322. doi: 10.1364/JOSAA.507892.

ABSTRACT

To address the problems of color shift and incomplete haze removal in image dehazing, this paper proposes an improved self-supervised dehazing algorithm that combines polarization properties and deep learning. First, based on the YOLY network framework, a multiscale module and an attention mechanism module are introduced into the transmission feature estimation network. This enables the extraction of feature information at different scales and the allocation of weights, and it effectively improves the accuracy of transmission map estimation. Second, a brightness consistency loss based on the YCbCr color space and a color consistency loss are proposed to constrain the brightness and color consistency of the dehazing results, resolving the problems of darkened brightness and color shifts in dehazed images. Finally, the network is trained to dehaze polarized images based on the atmospheric scattering model and the loss function constraints. Experiments are conducted on synthetic and real-world data, and comparisons are made with six competing dehazing algorithms. The results demonstrate that, compared to these algorithms, the proposed method achieves PSNR and SSIM values of 23.92 and 0.94, respectively, on synthetic image samples. For real-world image samples, color restoration is more authentic, contrast is higher, and detailed information is richer. Both subjective and objective evaluations show significant improvements, validating the effectiveness and superiority of the proposed dehazing algorithm.
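
Constraining the network with the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) means that, given a transmission map t and airlight A, the haze-free image J is recovered by inverting the model. A toy numpy sketch of that inversion follows; the synthetic scene and transmission map are illustrative, not the paper's network outputs.

```python
import numpy as np

def recover_scene_radiance(hazy, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
    J = (hazy - airlight) / t + airlight
    return np.clip(J, 0.0, 1.0)

# Toy example: synthesize haze from a known scene, then invert it
rng = np.random.default_rng(0)
scene = rng.uniform(size=(64, 64, 3))
airlight = np.array([0.9, 0.9, 0.95])
t_true = rng.uniform(0.3, 0.9, size=(64, 64))
hazy = scene * t_true[..., None] + airlight * (1 - t_true[..., None])

recovered = recover_scene_radiance(hazy, t_true, airlight)
print(f"max abs error: {np.abs(recovered - scene).max():.2e}")
```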

PMID:38437344 | DOI:10.1364/JOSAA.507892

Categories: Literature Watch

Ferumoxytol-Enhanced Cardiac Cine MRI Reconstruction Using a Variable-Splitting Spatiotemporal Network

Mon, 2024-03-04 06:00

J Magn Reson Imaging. 2024 Mar 4. doi: 10.1002/jmri.29295. Online ahead of print.

ABSTRACT

BACKGROUND: Balanced steady-state free precession (bSSFP) imaging is commonly used in cardiac cine MRI but prone to image artifacts. Ferumoxytol-enhanced (FE) gradient echo (GRE) has been proposed as an alternative. Utilizing the abundance of bSSFP images to develop a computationally efficient network that is applicable to FE GRE cine would benefit future network development.

PURPOSE: To develop a variable-splitting spatiotemporal network (VSNet) for image reconstruction, trained on bSSFP cine images and applicable to FE GRE cine images.

STUDY TYPE: Retrospective and prospective.

SUBJECTS: 41 patients (26 female, 53 ± 19 y/o) for network training, 31 patients (19 female, 49 ± 17 y/o) and 5 healthy subjects (5 female, 30 ± 7 y/o) for testing.

FIELD STRENGTH/SEQUENCE: 1.5T and 3T, bSSFP and GRE.

ASSESSMENT: VSNet was compared to VSNet with total variation loss, compressed sensing and low rank methods for 14× accelerated data. The GRAPPA×2/×3 images served as the reference. Peak signal-to-noise-ratio (PSNR), structural similarity index (SSIM), left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were measured. Qualitative image ranking and scoring were independently performed by three readers. Latent scores were calculated based on scores of each method relative to the reference.

STATISTICS: Linear mixed-effects regression, Tukey method, Fleiss' Kappa, Bland-Altman analysis, and Bayesian categorical cumulative probit model. A P-value <0.05 was considered statistically significant.

RESULTS: VSNet achieved significantly higher PSNR (32.7 ± 0.2), SSIM (0.880 ± 0.004), and latent scores (-1.72 ± 0.22), and a better rank (2.14 ± 0.06), compared to the other methods (rank >2.90, latent score < -2.63). Fleiss' Kappa was 0.52 for scoring and 0.61 for ranking. VSNet showed no significant difference from the reference in LV and RV ESV (P = 0.938) and EF (P = 0.143) measurements, but EDV measurements differed significantly (by 2.62 mL).

CONCLUSION: VSNet produced the highest image quality and the most accurate functional measurements for FE GRE cine images among the tested 14× accelerated reconstruction methods.

LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.

PMID:38436994 | DOI:10.1002/jmri.29295

Categories: Literature Watch

Pattern classification of interstitial lung diseases from computed tomography images using a ResNet-based network with a split-transform-merge strategy and split attention

Mon, 2024-03-04 06:00

Phys Eng Sci Med. 2024 Mar 4. doi: 10.1007/s13246-024-01404-1. Online ahead of print.

ABSTRACT

In patients with interstitial lung disease (ILD), accurate pattern assessment from computed tomography (CT) images could help track lung abnormalities and evaluate treatment efficacy. Based on their excellent image classification performance, convolutional neural networks (CNNs) have been extensively investigated for classifying and labeling pathological patterns in the CT images of ILD patients. However, previous studies rarely considered the three-dimensional (3D) structure of the pathological patterns of ILD and used two-dimensional network inputs. In addition, ResNet-based networks with high classification performance, such as SE-ResNet and ResNeXt, have not been used for pattern classification of ILD. This study proposed an SE-ResNeXt-SA-18 for classifying pathological patterns of ILD. The SE-ResNeXt-SA-18 integrates the multipath design of ResNeXt and the feature weighting of the squeeze-and-excitation network with split attention. The classification performance of the SE-ResNeXt-SA-18 was compared with the ResNet-18 and SE-ResNeXt-18. The influence of the input patch size on classification performance was also evaluated. Results show that classification accuracy increased with patch size. With a 32 × 32 × 16 input, the SE-ResNeXt-SA-18 presented the highest performance, with average accuracy, sensitivity, and specificity of 0.991, 0.979, and 0.994, respectively. High-weight regions in the class activation maps of the SE-ResNeXt-SA-18 also matched the specific pattern features. In comparison, the performance of the SE-ResNeXt-SA-18 is superior to previously reported CNNs for classifying ILD patterns. We conclude that the SE-ResNeXt-SA-18 could help track or monitor the progress of ILD through accurate pattern classification.
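
The squeeze-and-excitation channel reweighting that the SE-ResNeXt-SA-18 builds on can be sketched generically for 3D patches such as the 32 × 32 × 16 inputs mentioned above. This PyTorch block is a standard SE module, not the authors' full split-attention architecture; the channel count and feature-map size are placeholders.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Generic squeeze-and-excitation block for 3D feature maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)           # squeeze: global context
        self.fc = nn.Sequential(                      # excitation: channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                   # reweight channels

# Hypothetical feature map from a 32x32x16 CT patch after an early conv stage
features = torch.randn(2, 64, 16, 16, 8)
print(SEBlock3D(64)(features).shape)   # torch.Size([2, 64, 16, 16, 8])
```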

PMID:38436886 | DOI:10.1007/s13246-024-01404-1

Categories: Literature Watch

A new strategy for groundwater level prediction using a hybrid deep learning model under Ecological Water Replenishment

Mon, 2024-03-04 06:00

Environ Sci Pollut Res Int. 2024 Mar 4. doi: 10.1007/s11356-024-32330-0. Online ahead of print.

ABSTRACT

Accurate prediction of the groundwater level (GWL) is crucial for sustainable groundwater resource management. Ecological water replenishment (EWR) involves artificially diverting water to replenish the ecological flow and water resources of both surface water and groundwater within a basin. However, GWL fluctuations during the EWR process exhibit high nonlinearity and complexity in their time series, making it challenging for single data-driven models to predict the trend of groundwater level changes under EWR. This study introduced a new GWL prediction strategy based on a hybrid deep learning model, STL-IWOA-GRU, which integrates seasonal-trend decomposition using LOESS (STL), an improved whale optimization algorithm (IWOA), and a gated recurrent unit (GRU). The aim was to accurately predict GWLs in the context of EWR. The study gathered GWL, precipitation, and surface runoff data from 21 monitoring wells in the Yongding River Basin (Beijing Section) over a period of 731 days. The results demonstrate that the improvement strategy implemented for the IWOA enhances the convergence speed and global search capability of the algorithm. In the case analysis, the evaluation metrics were the root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and Nash-Sutcliffe efficiency (NSE). STL-IWOA-GRU exhibited commendable performance, achieving the best MAE, averaging 0.266. Compared to other models, namely Variational Mode Decomposition-Gated Recurrent Unit (VMD-GRU), Ant Lion Optimizer-Support Vector Machine (ALO-SVM), STL-Particle Swarm Optimization-GRU (STL-PSO-GRU), and STL-Sine Cosine Algorithm-GRU (STL-SCA-GRU), MAE was reduced by 18%, 26%, 11%, and 29%, respectively. This indicates that the proposed model exhibits high prediction accuracy and robust versatility, making it a potent strategic choice for forecasting GWL changes in the context of EWR.
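
For reference, the four evaluation metrics used in the case analysis have standard closed forms. A compact numpy sketch is below, with purely hypothetical groundwater-level values.

```python
import numpy as np

def evaluate(obs, sim):
    """RMSE, MAE, MAPE (%), and Nash-Sutcliffe efficiency for two series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / obs))
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "NSE": nse}

# Hypothetical observed vs. predicted groundwater levels (m)
observed = np.array([12.3, 12.1, 11.9, 12.0, 12.4, 12.8, 13.1])
predicted = np.array([12.2, 12.0, 12.0, 12.1, 12.5, 12.7, 13.0])
print(evaluate(observed, predicted))
```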

PMID:38436858 | DOI:10.1007/s11356-024-32330-0

Categories: Literature Watch

Comparative study for coastal aquifer vulnerability assessment using deep learning and metaheuristic algorithms

Mon, 2024-03-04 06:00

Environ Sci Pollut Res Int. 2024 Mar 4. doi: 10.1007/s11356-024-32706-2. Online ahead of print.

ABSTRACT

Coastal aquifer vulnerability assessment (CAVA) studies are essential for mitigating the effects of seawater intrusion (SWI) worldwide. In this research, the vulnerability of the coastal aquifer in the Lahijan region of northwest Iran was investigated. A vulnerability map (VM) was created by applying hydrogeological parameters derived from the original GALDIT model (OGM). The significance of the OGM parameters was assessed using the mean decrease accuracy (MDA) method, with the current state of SWI emerging as the most crucial factor for evaluating vulnerability. To optimize the GALDIT weights, we introduced biogeography-based optimization (BBO) and gray wolf optimization (GWO) techniques to obtain the hybrid OGM-BBO and OGM-GWO models, respectively. Despite considerable research focused on enhancing CAVA models, efforts to modify the weights and rates of OGM parameters by incorporating deep learning algorithms remain scarce. Hence, a convolutional neural network (CNN) algorithm was also applied to produce a VM. The areas under the receiver-operating characteristic curves for OGM-BBO, OGM-GWO, and VMCNN were 0.794, 0.835, and 0.982, respectively. According to the CNN-based VM, 41% of the aquifer displayed very high and high vulnerability to SWI, concentrated primarily along the coastline, while 32% of the aquifer exhibited very low and low vulnerability, predominantly in the southern and southwestern regions. The proposed model can be extended to evaluate the vulnerability of other coastal aquifers to SWI, thereby assisting land use planners and policymakers in identifying at-risk areas. Moreover, deep-learning-based approaches can help clarify the associations between aquifer vulnerability and contamination resulting from SWI.

PMID:38436856 | DOI:10.1007/s11356-024-32706-2

Categories: Literature Watch

BREATH-Net: a novel deep learning framework for NO(2) prediction using bi-directional encoder with transformer

Mon, 2024-03-04 06:00

Environ Monit Assess. 2024 Mar 4;196(4):340. doi: 10.1007/s10661-024-12455-y.

ABSTRACT

Air pollution poses a significant challenge in numerous urban regions, negatively affecting human well-being. Nitrogen dioxide (NO2) is a prevalent atmospheric pollutant that can exacerbate respiratory ailments and cardiovascular disorders and contribute to cancer development. The present study introduces a novel approach for monitoring and predicting Delhi's nitrogen dioxide concentrations by leveraging satellite data from the Sentinel-5P satellite and ground data from monitoring stations. The research gathers satellite and monitoring data over 3 years for evaluation. Exploratory data analysis (EDA) methods are employed to comprehensively understand the data and discern patterns and trends in nitrogen dioxide levels. The data subsequently undergo pre-processing and scaling using appropriate techniques, such as MinMaxScaler, to optimize the model's performance. The proposed forecasting model, called BREATH-Net, uses a hybrid architecture of Transformer and BiLSTM models. BiLSTM models exhibit a strong aptitude for managing sequential data by capturing dependencies in both the forward and backward directions, whereas Transformers excel in capturing long-range relationships in temporal data. The results of this study illustrate the proposed model's efficacy in predicting NO2 levels in Delhi; if effectively deployed, the model can significantly enhance strategies for controlling urban air quality. The findings show a significant improvement (RMSE = 9.06) compared to other state-of-the-art models. This study's primary objective is to contribute to mitigating respiratory health issues resulting from air pollution through satellite data and deep learning methodologies.
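
The preprocessing described above (MinMax scaling followed by windowing the series for a sequence model) can be sketched as follows; the look-back length, forecast horizon, and synthetic NO2 series are assumptions for illustration only.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_windows(series, lookback=24, horizon=1):
    """Slice a 1D series into (input window, target) pairs for forecasting."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X), np.array(y)

# Hypothetical hourly NO2 concentrations (ug/m3) with a daily cycle
rng = np.random.default_rng(0)
no2 = 40 + 10 * np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 3, 500)

scaler = MinMaxScaler()
scaled = scaler.fit_transform(no2.reshape(-1, 1)).ravel()

X, y = make_windows(scaled, lookback=24, horizon=1)
print(X.shape, y.shape)   # (476, 24) (476, 1)
```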

PMID:38436748 | DOI:10.1007/s10661-024-12455-y

Categories: Literature Watch

Algorithms for classification of sequences and segmentation of prostate gland: an external validation study

Mon, 2024-03-04 06:00

Abdom Radiol (NY). 2024 Mar 4. doi: 10.1007/s00261-024-04241-8. Online ahead of print.

ABSTRACT

OBJECTIVES: The aim of the study was to externally validate two AI models for the classification of prostate mpMRI sequences and segmentation of the prostate gland on T2WI.

MATERIALS AND METHODS: mpMRI data from 719 patients were retrospectively collected from two hospitals, utilizing nine MR scanners from four different vendors, over the period from February 2018 to May 2022. The Med3D pretrained deep learning architecture was used to perform image classification, and UNet-3D was used to segment the prostate gland. The images were classified into one of nine image types by the mode. The segmentation model was validated using T2WI images, and the accuracy of the segmentation was evaluated by measuring the DSC, VS, and AHD. Finally, the efficacy of the models was compared across different MR field strengths and sequences.

RESULTS: In total, 20,551 image groups were obtained from 719 MR studies. The classification model accuracy was 99%, with a kappa of 0.932. The precision, recall, and F1 values for the nine image types showed statistically significant differences (all P < 0.001). The accuracy for the 1.436 T, 1.5 T, and 3.0 T scanners was 87%, 86%, and 98%, respectively (P < 0.001). For the segmentation model, the median DSC ranged from 0.942 to 0.955, the median VS from 0.974 to 0.982, and the median AHD from 5.55 to 6.49 mm. These values also differed significantly across the three magnetic field strengths (all P < 0.001).

CONCLUSION: The AI models for mpMRI image classification and prostate segmentation demonstrated good performance during external validation, which could enhance efficiency in prostate volume measurement and cancer detection with mpMRI.

CLINICAL RELEVANCE STATEMENT: These models can greatly improve work efficiency in cancer detection, prostate volume measurement, and guided biopsies.

PMID:38436698 | DOI:10.1007/s00261-024-04241-8

Categories: Literature Watch
