Deep learning

Predicting Satisfaction With Chat-Counseling at a 24/7 Chat Hotline for the Youth: Natural Language Processing Study

Tue, 2025-02-18 06:00

JMIR AI. 2025 Feb 18;4:e63701. doi: 10.2196/63701.

ABSTRACT

BACKGROUND: Chat-based counseling services are popular for the low-threshold provision of mental health support to youth. In addition, they are particularly suitable for the utilization of natural language processing (NLP) for improved provision of care.

OBJECTIVE: Consequently, this paper evaluates the feasibility of such a use case, namely, the NLP-based automated evaluation of satisfaction with the chat interaction. This preregistered approach could be used for evaluation and quality control procedures, as it is particularly relevant for those services.

METHODS: The consultations of 2609 young chatters (around 140,000 messages) and corresponding feedback were used to train and evaluate classifiers to predict whether a chat was perceived as helpful or not. On the one hand, we trained a word vectorizer in combination with an extreme gradient boosting (XGBoost) classifier, applying cross-validation and extensive hyperparameter tuning. On the other hand, we trained several transformer-based models, comparing model types, preprocessing, and over- and undersampling techniques. For both model types, we selected the best-performing approach on the training set for a final performance evaluation on the 522 users in the final test set.

RESULTS: The fine-tuned XGBoost classifier achieved an area under the receiver operating characteristic score of 0.69 (P<.001), as well as a Matthews correlation coefficient of 0.25 on the previously unseen test set. The selected Longformer-based model did not outperform this baseline, scoring 0.68 (P=.69). A Shapley additive explanations explainability approach suggested that help seekers rating a consultation as helpful commonly expressed their satisfaction already within the conversation. In contrast, the rejection of offered exercises predicted perceived unhelpfulness.
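For readers unfamiliar with the reported metric, the Matthews correlation coefficient can be computed directly from binary confusion-matrix counts; a minimal stdlib sketch (the study does not publish its confusion matrix, so any counts plugged in here are hypothetical):

```python
from math import sqrt

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect),
    and, unlike accuracy, stays informative on imbalanced classes.
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Because MCC uses all four cells of the confusion matrix, it complements the area under the receiver operating characteristic curve also reported above.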

CONCLUSIONS: Chat conversations include relevant information regarding the perceived quality of an interaction that can be used by NLP-based prediction approaches. However, determining whether the moderate predictive performance translates into meaningful service improvements requires randomized trials. Further, our results highlight the relevance of contrasting pretrained models with simpler baselines to avoid the implementation of unnecessarily complex models.

TRIAL REGISTRATION: Open Science Framework SR4Q9; https://osf.io/sr4q9.

PMID:39965198 | DOI:10.2196/63701

Categories: Literature Watch

Integrating State-Space Modeling, Parameter Estimation, Deep Learning, and Docking Techniques in Drug Repurposing: A Case Study on COVID-19 Cytokine Storm

Tue, 2025-02-18 06:00

J Am Med Inform Assoc. 2025 Feb 18:ocaf035. doi: 10.1093/jamia/ocaf035. Online ahead of print.

ABSTRACT

OBJECTIVE: This study addresses the significant challenges posed by emerging SARS-CoV-2 variants, particularly in developing diagnostics and therapeutics. Drug repurposing is investigated by identifying critical regulatory proteins impacted by the virus, providing rapid and effective therapeutic solutions for better disease management.

MATERIALS AND METHODS: We employed a comprehensive approach combining mathematical modeling and efficient parameter estimation to study the transient responses of regulatory proteins in both normal and virus-infected cells. Proportional-integral-derivative (PID) controllers were used to pinpoint specific protein targets for therapeutic intervention. Additionally, advanced deep learning models and molecular docking techniques were applied to analyze drug-target and drug-drug interactions, ensuring both efficacy and safety of the proposed treatments. This approach was applied to a case study focused on the cytokine storm in COVID-19, centering on Angiotensin-converting enzyme 2 (ACE2), which plays a key role in SARS-CoV-2 infection.

RESULTS: Our findings suggest that activating ACE2 presents a promising therapeutic strategy, whereas inhibiting AT1R seems less effective. Deep learning models, combined with molecular docking, identified Lomefloxacin and Fostamatinib as stable drugs with no significant thermodynamic interactions, suggesting their safe concurrent use in managing COVID-19-induced cytokine storms.

DISCUSSION: The results highlight the potential of ACE2 activation in mitigating lung injury and severe inflammation caused by SARS-CoV-2. This integrated approach accelerates the identification of safe and effective treatment options for emerging viral variants.

CONCLUSION: This framework provides an efficient method for identifying critical regulatory proteins and advancing drug repurposing, contributing to the rapid development of therapeutic strategies for COVID-19 and future global pandemics.

PMID:39965087 | DOI:10.1093/jamia/ocaf035

Categories: Literature Watch

Multi-agent deep reinforcement learning-based robotic arm assembly research

Tue, 2025-02-18 06:00

PLoS One. 2025 Feb 18;20(2):e0311550. doi: 10.1371/journal.pone.0311550. eCollection 2025.

ABSTRACT

Due to the complexity and variability of application scenarios and the increasing demands for assembly, single-agent algorithms often face challenges in convergence and exhibit poor performance in robotic arm assembly processes. To address these issues, this paper proposes a method that employs a multi-agent reinforcement learning algorithm for the shaft-hole assembly of robotic arms, with a specific focus on square shaft-hole assemblies. First, we analyze the stages of hole-seeking, alignment, and insertion in the shaft-hole assembly process, based on a comprehensive study of the interactions between shafts and holes. Next, a reward function is designed by integrating the decoupled multi-agent deterministic deep deterministic policy gradient (DMDDPG) algorithm. Finally, a simulation environment is created in Gazebo, using circular and square shaft-holes as experimental subjects to model the robotic arm's shaft-hole assembly. The simulation results indicate that the proposed algorithm, which models the first three joints and the last three joints of the robotic arm as multi-agents, demonstrates not only enhanced adaptability but also faster and more stable convergence.

PMID:39965012 | DOI:10.1371/journal.pone.0311550

Categories: Literature Watch

Unsupervised neural network-based image stitching method for bladder endoscopy

Tue, 2025-02-18 06:00

PLoS One. 2025 Feb 18;20(2):e0311637. doi: 10.1371/journal.pone.0311637. eCollection 2025.

ABSTRACT

Bladder endoscopy enables the observation of intravesical lesion characteristics, making it an essential tool in urology. Image stitching techniques are commonly employed to expand the field of view of bladder endoscopy. Traditional image stitching methods rely on feature matching. In recent years, deep-learning techniques have garnered significant attention in the field of computer vision. However, the commonly employed supervised learning approaches often require a substantial amount of labeled data, which can be challenging to acquire, especially in the context of medical data. To address this limitation, this study proposes an unsupervised neural network-based image stitching method for bladder endoscopy, which eliminates the need for labeled datasets. The method comprises two modules: an unsupervised alignment network and an unsupervised fusion network. In the unsupervised alignment network, we employed feature convolution, regression networks, and linear transformations to align images. In the unsupervised fusion network, we achieved image fusion from features to pixel by simultaneously eliminating artifacts and enhancing the resolution. Experiments demonstrated our method's consistent stitching success rate of 98.11% and robust image stitching accuracy at various resolutions. Our method eliminates sutures and flocculent debris from cystoscopy images, presenting good image smoothness while preserving rich textural features. Moreover, our method could successfully stitch challenging scenes such as dim and blurry scenes. Our application of unsupervised deep learning methods in the field of cystoscopy image stitching was successfully validated, laying the foundation for real-time panoramic stitching of bladder endoscopic video images. This advancement provides opportunities for the future development of computer-vision-assisted diagnostic systems for bladder cavities.

PMID:39964991 | DOI:10.1371/journal.pone.0311637

Categories: Literature Watch

Toward equitable major histocompatibility complex binding predictions

Tue, 2025-02-18 06:00

Proc Natl Acad Sci U S A. 2025 Feb 25;122(8):e2405106122. doi: 10.1073/pnas.2405106122. Epub 2025 Feb 18.

ABSTRACT

Deep learning tools that predict peptide binding by major histocompatibility complex (MHC) proteins play an essential role in developing personalized cancer immunotherapies and vaccines. In order to ensure equitable health outcomes from their application, MHC binding prediction methods must work well across the vast landscape of MHC alleles observed across human populations. Here, we show that there are alarming disparities across individuals in different racial and ethnic groups in how much binding data are associated with their MHC alleles. We introduce a machine learning framework to assess the impact of this data imbalance for predicting binding for any given MHC allele, and apply it to develop a state-of-the-art MHC binding prediction model that additionally provides per-allele performance estimates. We demonstrate that our MHC binding model successfully mitigates much of the data disparities observed across racial groups. To address remaining inequities, we devise an algorithmic strategy for targeted data collection. Our work lays the foundation for further development of equitable MHC binding models for use in personalized immunotherapies.

PMID:39964728 | DOI:10.1073/pnas.2405106122

Categories: Literature Watch

Deep learning for retinal vessel segmentation: a systematic review of techniques and applications

Tue, 2025-02-18 06:00

Med Biol Eng Comput. 2025 Feb 18. doi: 10.1007/s11517-025-03324-y. Online ahead of print.

ABSTRACT

Ophthalmic diseases are a leading cause of vision loss, with retinal damage being irreversible. Retinal blood vessels are vital for diagnosing eye conditions, as even subtle changes in their structure can signal underlying issues. Retinal vessel segmentation is key for early detection and treatment of eye diseases. Traditionally, ophthalmologists manually segmented vessels, a time-consuming process based on clinical and geometric features. However, deep learning advancements have led to automated methods with impressive results. This systematic review, following PRISMA guidelines, examines 79 studies on deep learning-based retinal vessel segmentation published between 2020 and 2024 from four databases: Web of Science, Scopus, IEEE Xplore, and PubMed. The review focuses on datasets, segmentation models, evaluation metrics, and emerging trends. U-Net and Transformer architectures have shown success, with U-Net's encoder-decoder structure preserving details and Transformers capturing global context through self-attention mechanisms. Despite their effectiveness, challenges remain, suggesting future research should explore hybrid models combining U-Net, Transformers, and GANs to improve segmentation accuracy. This review offers a comprehensive look at the current landscape and future directions in retinal vessel segmentation.

PMID:39964659 | DOI:10.1007/s11517-025-03324-y

Categories: Literature Watch

TongueTransUNet: toward effective tongue contour segmentation using well-managed dataset

Tue, 2025-02-18 06:00

Med Biol Eng Comput. 2025 Feb 18. doi: 10.1007/s11517-024-03278-7. Online ahead of print.

ABSTRACT

In modern telehealth and healthcare information systems, medical image analysis is essential to understand the context of images and their complex structure across large, inconsistent-quality, and distributed datasets. Achieving the desired results faces several challenges for deep learning, including dataset size, labeling, balancing, training, and feature extraction. These challenges make AI models complex and expensive to build and difficult to understand, turning them into black boxes that can produce hysteresis and irrelevant, illegal, or unethical output in some cases. In this article, lingual ultrasound is studied to extract the tongue contour, in order to understand language behavior and language signature and to utilize it as biofeedback for different applications. The article introduces a design strategy that works effectively with a well-managed, dynamic-size dataset. It includes a hybrid architecture using UNet, a Vision Transformer (ViT), and a contrastive loss in latent space to build a foundation model cumulatively. The process starts with building a reference representation in the embedding space, validated by human experts, against which any new training input is checked. UNet and ViT encoders extract the input feature representations, and the contrastive loss then compares each new feature embedding with the reference in the embedding space. A UNet-based decoder reconstructs the image to its original size. Before the final results are released, quality control assesses the segmented contour; if it is rejected, the algorithm requests a human expert to annotate it manually. The results show improved accuracy over traditional techniques, as the dataset contains only high-quality, relevant features.

PMID:39964658 | DOI:10.1007/s11517-024-03278-7

Categories: Literature Watch

Exploring the potential performance of 0.2 T low-field unshielded MRI scanner using deep learning techniques

Tue, 2025-02-18 06:00

MAGMA. 2025 Feb 18. doi: 10.1007/s10334-025-01234-6. Online ahead of print.

ABSTRACT

OBJECTIVE: Using deep learning-based techniques to overcome physical limitations and explore the potential performance of 0.2 T low-field unshielded MRI in terms of imaging quality and speed.

METHODS: First, fast and high-quality unshielded imaging is achieved using active electromagnetic shielding and basic super-resolution. Then, the speed of basic super-resolution imaging is further improved by reducing the number of excitations. Next, the feasibility of using cross-field super-resolution to map low-field low-resolution images to high-field ultra-high-resolution images is analyzed. Finally, by cascading basic and cross-field super-resolution, the quality of the low-field low-resolution image is improved to the level of the high-field ultra-high-resolution image.

RESULTS: Under unshielded conditions, our 0.2 T scanner can achieve image quality comparable to that of a 1.5 T scanner (acquisition resolution of 512 × 512, spatial resolution of 0.45 mm2), and a single-orientation imaging time of less than 3.3 min.

DISCUSSION: The proposed strategy overcomes the physical limitations of the hardware and rapidly acquires images close to the high-field level on a low-field unshielded MRI scanner. These findings have significant practical implications for the advances in MRI technology, supporting the shift from conventional scanners to point-of-care imaging systems.

PMID:39964601 | DOI:10.1007/s10334-025-01234-6

Categories: Literature Watch

Genetic insights into the shared molecular mechanisms of Crohn's disease and breast cancer: a Mendelian randomization and deep learning approach

Tue, 2025-02-18 06:00

Discov Oncol. 2025 Feb 18;16(1):198. doi: 10.1007/s12672-025-01978-6.

ABSTRACT

The objective of this study was to explore the potential genetic link between Crohn's disease and breast cancer, with a focus on identifying druggable genes that may have therapeutic relevance. We assessed the causal relationship between these diseases through Mendelian randomization and investigated gene-drug interactions using computational predictions. This study sought to identify common genetic pathways possibly involved in immune responses and cancer progression, providing a foundation for future targeted treatment research. The dataset comprises single nucleotide polymorphisms used as instrumental variables for Crohn's disease, analyzed to explore their possible impact on breast cancer risk. Gene ontology and pathway enrichment analyses were conducted to identify genes shared between the two conditions, supported by protein-protein interaction networks, colocalization analyses, and deep learning-based predictions of gene-drug interactions. The identified hub genes and predicted gene-drug interactions offer preliminary insights into possible therapeutic targets for breast cancer and immune-related conditions. This dataset may be valuable for researchers studying genetic links between autoimmune diseases and cancer and for those interested in the early identification of potential drug targets.
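The abstract does not name the Mendelian randomization estimator used; a common choice in two-sample MR is the inverse-variance-weighted (IVW) average of per-SNP Wald ratios, sketched below with first-order weights (estimator choice and all inputs are assumptions for illustration, not taken from the study):

```python
def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance-weighted causal estimate: the weighted mean of
    per-SNP Wald ratios (beta_out / beta_exp), using first-order weights
    (beta_exp / se_out) ** 2 derived from the outcome standard errors."""
    ratios = [bo / be for be, bo in zip(beta_exp, beta_out)]
    weights = [(be / so) ** 2 for be, so in zip(beta_exp, se_out)]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
```

When every instrument implies the same ratio, the IVW estimate equals that ratio regardless of the weights; heterogeneity across SNPs is what the sensitivity analyses in MR studies probe.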

PMID:39964572 | DOI:10.1007/s12672-025-01978-6

Categories: Literature Watch

Deep learning-based time-of-flight (ToF) enhancement of non-ToF PET scans for different radiotracers

Tue, 2025-02-18 06:00

Eur J Nucl Med Mol Imaging. 2025 Feb 18. doi: 10.1007/s00259-025-07119-z. Online ahead of print.

ABSTRACT

AIM: To evaluate a deep learning-based time-of-flight (DLToF) model trained to enhance the image quality of non-ToF PET images for different tracers, reconstructed using BSREM algorithm, towards ToF images.

METHODS: A 3D residual U-NET model was trained using 8 different tracers (FDG: 75% and non-FDG: 25%) from 11 sites from US, Europe and Asia. A total of 309 training and 33 validation datasets scanned on GE Discovery MI (DMI) ToF scanners were used for development of DLToF models of three strengths: low (L), medium (M) and high (H). The training and validation pairs consisted of target ToF and input non-ToF BSREM reconstructions using site-preferred regularisation parameters (beta values). The contrast and noise properties of each model were defined by adjusting the beta value of target ToF images. A total of 60 DMI datasets, consisting of a set of 4 tracers (18F-FDG, 18F-PSMA, 68Ga-PSMA, 68Ga-DOTATATE) and 15 exams each, were collected for testing and quantitative analysis of the models based on standardized uptake value (SUV) in regions of interest (ROI) placed in lesions, lungs and liver. Each dataset includes 5 image series: ToF and non-ToF BSREM and three DLToF images. The image series (300 in total) were blind scored on a 5-point Likert score by 4 readers based on lesion detectability, diagnostic confidence, and image noise/quality.

RESULTS: In lesion SUVmax quantification with respect to ToF BSREM, DLToF-H achieved the best results among the three models by reducing the non-ToF BSREM errors from -39% to -6% for 18F-FDG (38 lesions); from -42% to -7% for 18F-PSMA (35 lesions); from -34% to -4% for 68Ga-PSMA (23 lesions) and from -34% to -12% for 68Ga-DOTATATE (32 lesions). Quantification results in liver and lung also showed ToF-like performance of DLToF models. Clinical reader results showed that DLToF-H improved lesion detectability on average for all four radiotracers, whereas DLToF-L achieved the highest scores for image quality (noise level). DLToF-M, however, offered the best trade-off between lesion detection and noise level and hence achieved the highest score for diagnostic confidence on average for all radiotracers.

CONCLUSION: This study demonstrated that the DLToF models are suitable for both FDG and non-FDG tracers and could be utilized for digital BGO PET/CT scanners to provide image quality and lesion detectability comparable to that of ToF.

PMID:39964543 | DOI:10.1007/s00259-025-07119-z

Categories: Literature Watch

Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study

Tue, 2025-02-18 06:00

Eur J Nucl Med Mol Imaging. 2025 Feb 18. doi: 10.1007/s00259-025-07132-2. Online ahead of print.

ABSTRACT

PURPOSE: Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, the pairs of PET/CT and MR images are not always available, and their alignment is challenging if their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating CT of PET/CT to synthetic MR images (MRSYN) and performing automated quantitative regional analysis using MRSYN-derived segmentation.

METHODS: In this retrospective study, 139 subjects who underwent brain [18F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MRSYN; subsequently, a separate model was trained to segment MRSYN into 95 regions. Regional and composite standardised uptake value ratio (SUVr) was calculated in [18F]FBB PET images using the acquired ROIs. For evaluation of MRSYN, quantitative measurements including structural similarity index measure (SSIM) were employed, while for MRSYN-based segmentation evaluation, Dice similarity coefficient (DSC) was calculated. Wilcoxon signed-rank test was performed for SUVrs computed using MRSYN and ground-truth MR (MRGT).

RESULTS: Compared to MRGT, the mean SSIM of MRSYN was 0.974 ± 0.005. The MRSYN-based segmentation achieved a mean DSC of 0.733 across 95 regions. No statistically significant difference (P > 0.05) in SUVr was found between ROIs from MRSYN and those from MRGT, except for the precuneus.
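The Dice similarity coefficient reported for the segmentation can be computed from predicted and ground-truth voxel-index sets; a minimal sketch:

```python
def dice_coefficient(pred: set, truth: set) -> float:
    """Dice similarity coefficient between two voxel-index sets:
    DSC = 2 * |A intersect B| / (|A| + |B|), in [0, 1]."""
    if not pred and not truth:
        # Both empty: treat as perfect agreement by convention.
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

In practice the sets would hold the flattened voxel indices of each labeled region in the MRSYN-based and reference segmentations.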

CONCLUSION: We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MRSYN. Our proposed framework can benefit patients who have difficulties in performing an MRI scan.

PMID:39964542 | DOI:10.1007/s00259-025-07132-2

Categories: Literature Watch

Arthroscopy-validated Diagnostic Performance of 7-Minute Five-Sequence Deep Learning Super-Resolution 3-T Shoulder MRI

Tue, 2025-02-18 06:00

Radiology. 2025 Feb;314(2):e241351. doi: 10.1148/radiol.241351.

ABSTRACT

Background Deep learning (DL) methods enable faster shoulder MRI than conventional methods, but arthroscopy-validated evidence of good diagnostic performance is scarce. Purpose To validate the clinical efficacy of 7-minute threefold parallel imaging (PIx3)-accelerated DL super-resolution shoulder MRI against arthroscopic findings. Materials and Methods Adults with painful shoulder conditions who underwent PIx3-accelerated DL super-resolution 3-T shoulder MRI and arthroscopy between March and November 2023 were included in this retrospective study. Seven radiologists independently evaluated the MRI scan quality parameters and the presence of artifacts (Likert scale rating ranging from 1 [very bad/severe] to 5 [very good/absent]) as well as the presence of rotator cuff tears, superior and anteroinferior labral tears, biceps tendon tears, cartilage defects, Hill-Sachs lesions, Bankart fractures, and subacromial-subdeltoid bursitis. Interreader agreement based on κ values was evaluated, and diagnostic performance testing was conducted. Results A total of 121 adults (mean age, 55 years ± 14 [SD]; 75 male) who underwent MRI and arthroscopy within a median of 39 days (range, 1-90 days) were evaluated. The overall image quality was good (median rating, 4 [IQR, 4-4]), with high reader agreement (κ ≥ 0.86). Motion artifacts and image noise were minimal (rating of 4 [IQR, 4-4] for each), and reconstruction artifacts were absent (rating of 5 [IQR, 5-5]). Arthroscopy-validated abnormalities were detected with good or better interreader agreement (κ ≥ 0.68). 
The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve were 89%, 90%, 89%, and 0.89, respectively, for supraspinatus-infraspinatus tendon tears; 82%, 63%, 68%, and 0.68 for subscapularis tendon tears; 93%, 73%, 86%, and 0.83 for superior labral tears; 100%, 100%, 100%, and 1.00 for anteroinferior labral tears; 68%, 90%, 82%, and 0.80 for biceps tendon tears; 42%, 93%, 81%, and 0.64 for cartilage defects; 93%, 99%, 98%, and 0.94 for Hill-Sachs deformities; 100%, 99%, 99%, and 1.00 for osseous Bankart lesions; and 97%, 63%, 92%, and 0.80 for subacromial-subdeltoid bursitis. Conclusion Seven-minute PIx3-accelerated DL super-resolution 3-T shoulder MRI has good diagnostic performance for diagnosing tendinous, labral, and osteocartilaginous abnormalities. © RSNA, 2025 Supplemental material is available for this article. See also the editorial by Tuite in this issue.
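The per-finding metrics above derive from simple confusion-matrix counts against the arthroscopic reference standard; a minimal sketch (the counts in the test below are hypothetical, not the study's):

```python
def diagnostic_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Sensitivity, specificity, and accuracy from reader-vs-reference counts.

    tp/fn: abnormality present at arthroscopy, called positive/negative on MRI.
    tn/fp: abnormality absent at arthroscopy, called negative/positive on MRI.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```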

PMID:39964264 | DOI:10.1148/radiol.241351

Categories: Literature Watch

Association of Epicardial Adipose Tissue Changes on Serial Chest CT Scans with Mortality: Insights from the National Lung Screening Trial

Tue, 2025-02-18 06:00

Radiology. 2025 Feb;314(2):e240473. doi: 10.1148/radiol.240473.

ABSTRACT

Background Individuals eligible for lung cancer screening with low-dose CT face a higher cardiovascular mortality risk. Purpose To investigate the association between changes in epicardial adipose tissue (EAT) at the 2-year interval and mortality in individuals undergoing serial low-dose CT lung cancer screening. Materials and Methods This secondary analysis of the National Lung Screening Trial obtained EAT volume and density from serial low-dose CT scans using a validated automated deep learning algorithm. EAT volume and density changes over 2 years were categorized into typical (decrease of 7% to increase of 11% and decrease of 3% to increase of 2%, respectively) and atypical (increase or decrease beyond typical) changes, which were associated with all-cause, cardiovascular, and lung cancer mortality. Uni- and multivariable Cox proportional hazard regression models, adjusted for baseline EAT values, age, sex, race, ethnicity, smoking, pack-years, heart disease or myocardial infarction, stroke, hypertension, diabetes, education status, body mass index, and coronary artery calcium, were performed. Results Among 20 661 participants (mean age, 61.4 years ± 5.0 [SD]; 12 237 male [59.2%]), 3483 (16.9%) died over a median follow-up of 10.4 years (IQR, 9.9-10.8 years) (cardiovascular related: 816 [23.4%]; lung cancer related: 705 [20.2%]). Mean EAT volume increased (2.5 cm3/m2 ± 11.0) and density decreased (decrease of 0.5 HU ± 3.0) over 2 years. Atypical changes in EAT volume were independent predictors of all-cause mortality (atypical increase: hazard ratio [HR], 1.15 [95% CI: 1.06, 1.25] [P < .001]; atypical decrease: HR, 1.34 [95% CI: 1.23, 1.46] [P < .001]). An atypical decrease in EAT volume was associated with cardiovascular mortality (HR, 1.27 [95% CI: 1.06, 1.51]; P = .009).
EAT density increase was associated with all-cause, cardiovascular, and lung cancer mortality (HR, 1.29 [95% CI: 1.18, 1.40] [P < .001]; HR, 1.29 [95% CI: 1.08, 1.54] [P = .005]; HR, 1.30 [95% CI: 1.07, 1.57] [P = .007], respectively). Conclusion EAT volume increase and decrease and EAT density increase beyond typical on subsequent chest CT scans were associated with all-cause mortality in participants screened for lung cancer. EAT volume decrease and EAT density increase were associated with elevated risk of cardiovascular mortality after adjustment for baseline EAT values. © RSNA, 2025 Supplemental material is available for this article. See also the editorial by Fuss in this issue.

PMID:39964263 | DOI:10.1148/radiol.240473

Categories: Literature Watch

Neural Network-Assisted Dual-Functional Hydrogel-Based Microfluidic SERS Sensing for Divisional Recognition of Multimolecule Fingerprint

Tue, 2025-02-18 06:00

ACS Sens. 2025 Feb 18. doi: 10.1021/acssensors.4c03096. Online ahead of print.

ABSTRACT

To enhance the sensitivity, integration, and practicality of the Raman detection system, a deep learning-based dual-functional subregional microfluidic integrated hydrogel surface-enhanced Raman scattering (SERS) platform is proposed in this paper. First, silver nanoparticles (Ag NPs) with a homogeneous morphology were synthesized using a one-step reduction method. Second, these Ag NPs were embedded in N-isopropylacrylamide/poly(vinyl alcohol) (Ag NPs-NIPAM/PVA) hydrogels. Finally, a dual-functional SERS platform featuring four channels, each equipped with a switch and a detection region, was developed in conjunction with microfluidics. This platform effectively allows the flow of the test material to be directed to a specific detection region by sequential activation of the hydrogel switches with an external heating element. It then utilizes the corresponding heating element in the detection region to adjust the gaps between Ag NPs, enabling the measurement of the Raman enhancement performance in the designated SERS detection area. The dual-functional microfluidic-integrated hydrogel SERS platform enables subregional sampling and simultaneous detection of multiple molecules. The platform demonstrated excellent detection performance for Rhodamine 6G (R6G), achieving a detection limit as low as 10^-10 mol/L and an enhancement factor of 10^7, with relative standard deviations of the main characteristic peaks below 10%. Additionally, the platform is capable of simultaneous subarea detection of four real molecules (thiram, pyrene, anthracene, and dibutyl phthalate), combined with fully connected neural network technology, which offers improved predictability, practicality, and applicability for their classification and identification.
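For context, a commonly used definition of the analytical SERS enhancement factor is EF = (I_SERS / N_SERS) / (I_ref / N_ref); the abstract does not state which formula the authors used, so this sketch and any numbers fed into it are illustrative assumptions:

```python
def sers_enhancement_factor(i_sers: float, n_sers: float,
                            i_ref: float, n_ref: float) -> float:
    """Analytical SERS enhancement factor:
    EF = (I_SERS / N_SERS) / (I_ref / N_ref),
    where I is the measured Raman intensity and N is the number of
    molecules probed under SERS and reference (normal Raman) conditions."""
    return (i_sers / n_sers) / (i_ref / n_ref)
```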

PMID:39964084 | DOI:10.1021/acssensors.4c03096

Categories: Literature Watch

Evaluating sowing uniformity in hybrid rice using image processing and the OEW-YOLOv8n network

Tue, 2025-02-18 06:00

Front Plant Sci. 2025 Feb 3;16:1473153. doi: 10.3389/fpls.2025.1473153. eCollection 2025.

ABSTRACT

Sowing uniformity is an important evaluation indicator of mechanical sowing quality. In order to achieve accurate evaluation of sowing uniformity in hybrid rice mechanical sowing, this study takes the seeds in a seedling tray of hybrid rice blanket-seedling nursing as the research object and proposes a method for evaluating sowing uniformity by combining image processing methods and the ODConv_C2f-ECA-WIoU-YOLOv8n (OEW-YOLOv8n) network. Firstly, image processing methods are used to segment seed image and obtain seed grids. Next, an improved model named OEW-YOLOv8n based on YOLOv8n is proposed to identify the number of seeds in a unit seed grid. The improved strategies include the following: (1) Replacing the Conv module in the Bottleneck of C2f modules with the Omni-Dimensional Dynamic Convolution (ODConv) module, where C2f modules are located at the connection between the Backbone and Neck. This improvement can enhance the feature extraction ability of the Backbone network, as the new modules can fully utilize the information of all dimensions of the convolutional kernel. (2) An Efficient Channel Attention (ECA) module is added to the Neck for improving the network's capability to extract deep semantic feature information of the detection target. (3) In the Bbox module of the prediction head, the Complete Intersection over Union (CIoU) loss function is replaced by the Weighted Intersection over Union version 3 (WIoUv3) loss function to improve the convergence speed of the bounding box loss function and reduce the convergence value of the loss function. The results show that the mean average precision (mAP) of the OEW-YOLOv8n network reaches 98.6%. Compared to the original model, the mAP improved by 2.5%. Compared to advanced object detection algorithms such as Faster-RCNN, SSD, YOLOv4, YOLOv5s, YOLOv7-tiny, and YOLOv10s, the mAP of the new network increased by 5.2%, 7.8%, 4.9%, 2.8%, 2.9%, and 3.3%, respectively.
Finally, the actual evaluation experiment showed that the test error is from -2.43% to 2.92%, indicating that the improved network demonstrates excellent estimation accuracy. The research results can provide support for the mechanized sowing quality detection of hybrid rice and for intelligent rice seeder research.
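Both bounding-box losses mentioned above (CIoU and WIoUv3) are built on the plain intersection-over-union term between a predicted and a ground-truth box, which can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates; the base quantity that CIoU
    and WIoU losses extend with center-distance and weighting terms."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents, clamped to zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

CIoU adds penalties for center-point distance and aspect-ratio mismatch on top of 1 - IoU, while WIoUv3 reweights the IoU loss per box by its estimated outlier degree; the abstract reports that the latter converges faster here.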

PMID:39963535 | PMC:PMC11830705 | DOI:10.3389/fpls.2025.1473153

Categories: Literature Watch

Deep phenotyping platform for microscopic plant-pathogen interactions

Tue, 2025-02-18 06:00

Front Plant Sci. 2025 Feb 3;16:1462694. doi: 10.3389/fpls.2025.1462694. eCollection 2025.

ABSTRACT

The increasing availability of genetic and genomic resources has underscored the need for automated microscopic phenotyping in plant-pathogen interactions to identify genes involved in disease resistance. Building on accumulated experience and leveraging automated microscopy and software, we developed BluVision Micro, a modular, machine learning-aided system designed for high-throughput microscopic phenotyping. This system is adaptable to various image data types and extendable with modules for additional phenotypes and pathogens. BluVision Micro was applied to screen 196 genetically diverse barley genotypes for interactions with powdery mildew fungi, delivering accurate, sensitive, and reproducible results. This enabled the identification of novel genetic loci and marker-trait associations in the barley genome. The system also facilitated high-throughput studies of labor-intensive phenotypes, such as precise colony area measurement. Additionally, BluVision's open-source software supports the development of specific modules for various microscopic phenotypes, including high-throughput transfection assays for disease resistance-related genes.

PMID:39963527 | PMC:PMC11832026 | DOI:10.3389/fpls.2025.1462694

Categories: Literature Watch

Deep learning and explainable AI for classification of potato leaf diseases

Tue, 2025-02-18 06:00

Front Artif Intell. 2025 Feb 3;7:1449329. doi: 10.3389/frai.2024.1449329. eCollection 2024.

ABSTRACT

The accurate classification of potato leaf diseases plays a pivotal role in ensuring the health and productivity of crops. This study presents a unified approach to this challenge by leveraging Explainable AI (XAI) and transfer learning within a deep learning framework. We propose a transfer learning-based deep learning model tailored for potato leaf disease classification. Transfer learning lets the model benefit from pre-trained neural network architectures and weights, enhancing its ability to learn meaningful representations from limited labeled data. Additionally, Explainable AI techniques are integrated into the model to provide interpretable insights into its decision-making process, contributing to its transparency and usability. The model was trained on a publicly available potato leaf disease dataset and achieved 97% validation accuracy and 98% testing accuracy. The study applies gradient-weighted class activation mapping (Grad-CAM) to enhance model interpretability. This interpretability is vital for improving predictive performance, fostering trust, and ensuring seamless integration into agricultural practices.
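The heatmap step of Grad-CAM can be sketched in plain NumPy: channel weights are the global-average-pooled gradients of the class score, and the map is a ReLU of the weighted sum of feature maps. This is a generic sketch of the published technique, not the study's code; in practice the activations and gradients come from a framework hook on the last convolutional layer.

```python
import numpy as np

def grad_cam(activations, gradients):
    # activations, gradients: arrays of shape (K, H, W), where K is the
    # number of feature maps and gradients holds d(class score)/d(activation)
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay
    return cam
```

The resulting (H, W) map is upsampled to the input resolution and overlaid on the leaf image to show which regions drove the disease prediction.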

PMID:39963448 | PMC:PMC11830750 | DOI:10.3389/frai.2024.1449329

Categories: Literature Watch

Quantifying the spatial patterns of retinal ganglion cell loss and progression in optic neuropathy by applying a deep learning variational autoencoder approach to optical coherence tomography

Tue, 2025-02-18 06:00

Front Ophthalmol (Lausanne). 2025 Feb 3;4:1497848. doi: 10.3389/fopht.2024.1497848. eCollection 2024.

ABSTRACT

INTRODUCTION: Glaucoma, optic neuritis (ON), and non-arteritic anterior ischemic optic neuropathy (NAION) produce distinct patterns of retinal ganglion cell (RGC) damage. We propose a booster Variational Autoencoder (bVAE) to capture spatial variations in RGC loss and generate latent space (LS) montage maps that visualize different degrees and spatial patterns of optic nerve bundle injury. Furthermore, the bVAE model is capable of tracking the spatial pattern of RGC thinning over time and classifying the underlying cause.

METHODS: The bVAE model consists of an encoder, a display decoder, and a booster decoder. The encoder decomposes input ganglion cell layer (GCL) thickness maps into two display latent variables (dLVs) and eight booster latent variables (bLVs). The dLVs capture primary spatial patterns of RGC thinning, while the display decoder reconstructs the GCL map and creates the LS montage map. The bLVs add finer spatial details, improving reconstruction accuracy. XGBoost was used to analyze the dLVs and bLVs, estimating normal/abnormal GCL thinning and classifying diseases (glaucoma, ON, and NAION). A total of 10,701 OCT macular scans from 822 subjects were included in this study.

RESULTS: Incorporating bLVs improved reconstruction accuracy, with the image-based root-mean-square error (RMSE) between input and reconstructed GCL thickness maps decreasing from 5.55 ± 2.29 µm (two dLVs only) to 4.02 ± 1.61 µm (two dLVs and eight bLVs). However, the image-based structural similarity index (SSIM) remained similar (0.91 ± 0.04), indicating that just two dLVs effectively capture the main GCL spatial patterns. For classification, the XGBoost model achieved an AUC of 0.98 for identifying abnormal spatial patterns of GCL thinning over time using the dLVs. Disease classification yielded AUCs of 0.95 for glaucoma, 0.84 for ON, and 0.93 for NAION, with bLVs further increasing the AUCs to 0.96 for glaucoma, 0.93 for ON, and 0.99 for NAION.

CONCLUSION: This study presents a novel approach to visualizing and quantifying GCL thinning patterns in optic neuropathies using the bVAE model. The combination of dLVs and bLVs enhances the model's ability to capture key spatial features and predict disease progression. Future work will focus on integrating additional image modalities to further refine the model's diagnostic capabilities.
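The split of the latent space into two display and eight booster variables can be illustrated with the standard VAE reparameterization step in NumPy. This is a hypothetical sketch of the sampling mechanics under the paper's 2 + 8 split, not the bVAE implementation; the variable names and the flat 10-dimensional layout are illustrative assumptions.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps, the standard VAE reparameterization trick,
    # which keeps sampling differentiable with respect to mu and log_var
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(10)        # encoder mean output (illustrative)
log_var = np.zeros(10)   # encoder log-variance output (illustrative)
z = reparameterize(mu, log_var, rng)
d_lv, b_lv = z[:2], z[2:]  # 2 display latents (coarse pattern), 8 booster latents (detail)
```

In the paper's pipeline, d_lv would feed the display decoder and the LS montage map, while d_lv and b_lv together feed the booster decoder and the XGBoost classifier.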

PMID:39963427 | PMC:PMC11830743 | DOI:10.3389/fopht.2024.1497848

Categories: Literature Watch

Investigating the Use of Generative Adversarial Networks-Based Deep Learning for Reducing Motion Artifacts in Cardiac Magnetic Resonance

Tue, 2025-02-18 06:00

J Multidiscip Healthc. 2025 Feb 12;18:787-799. doi: 10.2147/JMDH.S492163. eCollection 2025.

ABSTRACT

OBJECTIVE: To evaluate the effectiveness of deep learning technology based on generative adversarial networks (GANs) in reducing motion artifacts in cardiac magnetic resonance (CMR) cine sequences.

METHODS: The training and testing datasets consisted of 2000 and 200 pairs of clear and blurry images, respectively, acquired through simulated motion artifacts in CMR cine sequences. These datasets were used to establish and train a deep learning GAN model. To assess the efficacy of the deep learning network in mitigating motion artifacts, 100 images with simulated motion artifacts and 37 images with real-world motion artifacts encountered in clinical practice were selected. Image quality pre- and post-optimization was assessed using metrics including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), the Tenengrad Focus Measure, and a 5-point Likert scale.

RESULTS: After GAN optimization, notable improvements were observed in the PSNR, SSIM, and focus measure metrics for the 100 images with simulated artifacts. These metrics increased from initial values of 23.85±2.85, 0.71±0.08, and 4.56±0.67, respectively, to 27.91±1.74, 0.83±0.05, and 7.74±0.39 post-optimization. Additionally, the subjective assessment scores significantly improved from 2.44±1.08 to 4.44±0.66 (P<0.001). For the 37 images with real-world artifacts, the Tenengrad Focus Measure showed a significant enhancement, rising from 6.06±0.91 to 10.13±0.48 after artifact removal. Subjective ratings also increased from 3.03±0.73 to 3.73±0.87 (P<0.001).
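The PSNR values reported above follow the standard definition, which can be computed directly in NumPy. This is a generic sketch of the metric, not the study's evaluation code; the 8-bit max_val of 255 is an assumed convention.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: the jump from 23.85 to 27.91 dB reported for the simulated-artifact images corresponds to a roughly 2.5-fold reduction in mean squared error.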

CONCLUSION: GAN-based deep learning technology effectively reduces motion artifacts present in CMR cine images, demonstrating significant potential for clinical application in optimizing CMR motion artifact management.

PMID:39963324 | PMC:PMC11830935 | DOI:10.2147/JMDH.S492163

Categories: Literature Watch

Machine learning approaches for predicting protein-ligand binding sites from sequence data

Tue, 2025-02-18 06:00

Front Bioinform. 2025 Feb 3;5:1520382. doi: 10.3389/fbinf.2025.1520382. eCollection 2025.

ABSTRACT

Proteins, composed of amino acids, are crucial for a wide range of biological functions. Proteins have various interaction sites, one of which is the protein-ligand binding site, essential for molecular interactions and biochemical reactions. These sites enable proteins to bind with other molecules, facilitating key biological functions. Accurate prediction of these binding sites is pivotal in computational drug discovery, helping to identify therapeutic targets and facilitate treatment development. Machine learning has made significant contributions to this field by improving the prediction of protein-ligand interactions. This paper reviews studies that use machine learning to predict protein-ligand binding sites from sequence data, focusing on recent advancements. The review examines various embedding methods and machine learning architectures, addressing current challenges and the ongoing debates in the field. Additionally, research gaps in the existing literature are highlighted, and potential future directions for advancing the field are discussed. This study provides a thorough overview of sequence-based approaches for predicting protein-ligand binding sites, offering insights into the current state of research and future possibilities.
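Among the embedding methods such a review covers, the simplest baseline is a per-residue one-hot encoding over a sliding sequence window, fed to a downstream classifier. The sketch below is a hypothetical illustration of that featurization, not any specific reviewed method; the window size and the 'X' padding residue are illustrative choices.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_windows(seq, window=5):
    # Slide a window over the sequence; each residue yields a flat one-hot
    # feature vector of shape (window * 20,) describing its local context.
    half = window // 2
    padded = "X" * half + seq + "X" * half  # pad termini with unknown residue
    feats = []
    for i in range(len(seq)):
        vec = np.zeros((window, len(AMINO_ACIDS)))
        for j, aa in enumerate(padded[i:i + window]):
            if aa in AA_INDEX:
                vec[j, AA_INDEX[aa]] = 1.0
        feats.append(vec.ravel())
    return np.array(feats)
```

A binary classifier trained on such vectors, with per-residue binding/non-binding labels, is the sequence-only baseline against which learned embeddings (e.g., from protein language models) are typically compared.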

PMID:39963299 | PMC:PMC11830693 | DOI:10.3389/fbinf.2025.1520382

Categories: Literature Watch
