Deep learning

Translational Informatics Driven Drug Repositioning for Neurodegenerative Disease

Wed, 2025-02-12 06:00

Curr Neuropharmacol. 2025 Feb 6. doi: 10.2174/011570159X327908241121062335. Online ahead of print.

ABSTRACT

Neurodegenerative diseases represent a prevalent category of age-associated diseases. As human lifespans extend and societies age, neurodegenerative diseases pose a growing threat to public health. The lack of effective therapeutic drugs for both common and rare neurodegenerative diseases amplifies the medical challenges they present. Current treatments for these diseases primarily offer symptomatic relief rather than a cure, underscoring the pressing need to develop efficacious therapeutic interventions. Drug repositioning, an innovative and data-driven approach to research and development, proposes the re-evaluation of existing drugs for potential application in new therapeutic areas. Fueled by rapid advancements in artificial intelligence and the burgeoning accumulation of medical data, drug repositioning has emerged as a promising pathway for drug discovery. This review comprehensively examines drug repositioning for neurodegenerative diseases through the lens of translational informatics, encompassing data sources, computational models, and clinical applications. First, we systematize drug repositioning-related databases and online platforms, focusing on data resource management and standardization. Next, we classify computational models for drug repositioning from the perspectives of drug-drug, drug-target, and drug-disease interactions into categories such as machine learning, deep learning, and network-based approaches. Finally, we highlight computational models presently utilized in neurodegenerative disease research and identify databases that hold potential for future drug repositioning efforts. In the artificial intelligence era, drug repositioning, as a data-driven strategy, offers a promising avenue for developing treatments suited to the complex and multifaceted nature of neurodegenerative diseases. These advancements could furnish patients with more rapid, cost-effective therapeutic options.

PMID:39936420 | DOI:10.2174/011570159X327908241121062335

Categories: Literature Watch

Extended Technical and Clinical Validation of Deep Learning-Based Brainstem Segmentation for Application in Neurodegenerative Diseases

Wed, 2025-02-12 06:00

Hum Brain Mapp. 2025 Feb 15;46(3):e70141. doi: 10.1002/hbm.70141.

ABSTRACT

Disorders of the central nervous system, including neurodegenerative diseases, frequently affect the brainstem and can present with focal atrophy. This study aimed to (1) optimize deep learning-based brainstem segmentation for a wide range of pathologies and T1-weighted image acquisition parameters, (2) conduct a systematic technical and clinical validation, (3) improve segmentation quality in the presence of brainstem lesions, and (4) make an optimized brainstem segmentation tool available for public use. An intentionally heterogeneous ground truth dataset (n = 257) was employed in the training of deep learning models based on multi-dimensional gated recurrent units (MD-GRU) or the nnU-Net method. Segmentation performance was evaluated against ground truth labels. FreeSurfer was used for benchmarking in subsequent validation. Technical validation, including scan-rescan repeatability (n = 46) and inter-scanner reproducibility (n = 20, 3 different scanners) in unseen data, was conducted in patients with cerebral small vessel disease. Clinical validation in unseen data was performed in 1-year follow-up data of 16 patients with multiple system atrophy, evaluating the annual percentage volume change. Two lesion filling algorithms were investigated to improve segmentation performance in 23 patients with multiple sclerosis. The MD-GRU and nnU-Net models demonstrated very good segmentation performance (median Dice coefficients ≥ 0.95 each) and outperformed a previously published model trained on a narrower dataset. Scan-rescan repeatability and inter-scanner reproducibility yielded similar Bland-Altman derived limits of agreement for longitudinal FreeSurfer (total brainstem volume repeatability/reproducibility 0.68/1.85), MD-GRU (0.72/1.46), and nnU-Net (0.48/1.52). All methods showed comparable performance in the detection of atrophy in the total brainstem (atrophy detected in 100% of patients) and its substructures. In patients with multiple sclerosis, lesion filling further improved the accuracy of brainstem segmentation. We enhanced and systematically validated two fully automated deep learning brainstem segmentation methods and released them publicly. This enables a broader evaluation of brainstem volume as a candidate biomarker for neurodegeneration.

PMID:39936343 | DOI:10.1002/hbm.70141

Categories: Literature Watch

A novel method for online sex sorting of silkworm pupae (Bombyx mori) using computer vision combined with deep learning

Wed, 2025-02-12 06:00

J Sci Food Agric. 2025 Feb 12. doi: 10.1002/jsfa.14177. Online ahead of print.

ABSTRACT

BACKGROUND: Silkworm pupae (SP), the pupal stage of an edible insect, have strong potential in the food, medicine, and cosmetic industries. Sex sorting is essential to enhance nutritional content and genetic traits in SP crossbreeding but it remains labor intensive and time consuming. An intelligent method is needed urgently to improve efficiency and productivity.

RESULTS: To address this problem, an automatic SP sex-separation system was developed based on computer vision and deep learning. Specifically, based on gonad features, a novel real-time SP sex identification model with cascaded spatial channel attention (CSCA) and G-GhostNet (GPU-Ghost Network) was developed, which can capture regions of interest and achieve feature diversity efficiently. A new loss function was proposed to reduce model complexity and avoid overfitting during training. In comparison with benchmark methods on the test set, the new model achieved superior performance with an accuracy of 96.48%. The experimental sorting accuracy for SP reached 95.59%, validating the effectiveness of the novel sex-separation strategy.

CONCLUSION: This research presents a practical method for online SP sex separation, potentially aiding the production of high-quality SP. © 2025 Society of Chemical Industry.

PMID:39936219 | DOI:10.1002/jsfa.14177

Categories: Literature Watch

DPD-YOLO: dense pineapple fruit target detection algorithm in complex environments based on YOLOv8 combined with attention mechanism

Wed, 2025-02-12 06:00

Front Plant Sci. 2025 Jan 28;16:1523552. doi: 10.3389/fpls.2025.1523552. eCollection 2025.

ABSTRACT

With the development of deep learning technology and the widespread application of drones in the agricultural sector, the use of computer vision technology for target detection of pineapples has gradually been recognized as one of the key methods for estimating pineapple yield. When images of pineapple fields are captured by drones, the fruits are often obscured by the pineapple leaf crowns due to their appearance and planting characteristics. Additionally, the background in pineapple fields is relatively complex, and current mainstream target detection algorithms perform poorly in detecting small, occluded targets in such complex backgrounds. To address these issues, an improved YOLOv8 target detection algorithm, named DPD-YOLO (Dense-Pineapple-Detection You Only Look Once), is proposed for the detection of pineapples in complex environments. The DPD-YOLO model is based on YOLOv8 and introduces a Coordinate Attention mechanism to enhance the network's ability to extract features of pineapples in complex backgrounds. Furthermore, the small-target detection layer has been fused with BiFPN (Bi-directional Feature Pyramid Network) to strengthen the integration of multi-scale features and enrich the extraction of semantic features. At the same time, the original YOLOv8 detection head has been replaced by the RT-DETR detection head, which incorporates Cross-Attention and Self-Attention mechanisms that improve the model's detection accuracy. Additionally, Focaler-IoU has been employed to improve CIoU, allowing the network to focus more on small targets. Finally, high-resolution images of the pineapple fields were captured using drones to create a dataset, and extensive experiments were conducted. The results indicate that, compared to existing mainstream target detection models, the proposed DPD-YOLO demonstrated superior detection performance for pineapples in situations where the background is complex and the targets are occluded. The mAP@0.5 reached 62.0%, an improvement of 6.6% over the original YOLOv8 algorithm; Precision increased by 2.7%, Recall improved by 13%, and the F1-score rose by 10.3%.
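The Focaler-IoU term mentioned above is usually formulated as a linear remapping of plain IoU onto an interval [d, u], which concentrates the regression signal on a chosen difficulty band. A minimal sketch (the thresholds `d` and `u` below are illustrative defaults, not values from this paper):

```python
def box_iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def focaler_iou(iou, d=0.0, u=0.95):
    """Linear remapping of IoU onto [d, u]: 0 below d, 1 above u,
    and a linear ramp in between, focusing learning on that band."""
    if iou < d:
        return 0.0
    if iou > u:
        return 1.0
    return (iou - d) / (u - d)
```

In a CIoU-style loss the plain IoU term would be replaced by this remapped value, so easy (high-IoU) and hopeless (near-zero-IoU) boxes contribute less gradient than the band in between.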

PMID:39935949 | PMC:PMC11810954 | DOI:10.3389/fpls.2025.1523552

Categories: Literature Watch

Uncertainty quantification in multi-parametric MRI-based meningioma radiotherapy target segmentation

Wed, 2025-02-12 06:00

Front Oncol. 2025 Jan 28;15:1474590. doi: 10.3389/fonc.2025.1474590. eCollection 2025.

ABSTRACT

PURPOSE: This work investigates the use of a spherical projection-based U-Net (SPU-Net) segmentation model to improve meningioma segmentation performance and allow for uncertainty quantification.

METHODS: A total of 76 supratentorial meningioma patients treated with radiotherapy were studied. Gross tumor volumes (GTVs) were contoured by a single experienced radiation oncologist on high-resolution contrast-enhanced T1 MRI scans (T1ce), and both T1 and T1ce images were utilized for segmentation. SPU-Net, an adaptation of U-Net incorporating spherical image projection to map 2D images onto a spherical surface, was proposed. Acting as a nonlinear image transform, the projections enhance locoregional details while maintaining the global field of view. By employing multiple projection centers, SPU-Net generates multiple GTV segmentation predictions, with the variance among them indicating the model's uncertainty. This uncertainty is quantified on a pixel-wise basis using entropy calculations and aggregated through Otsu's method for a final segmentation.
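One plausible reading of the entropy-and-Otsu aggregation described above can be sketched in NumPy: the per-pixel entropy of the ensemble's foreground frequency serves as the uncertainty map, and an Otsu threshold on the mean prediction yields a final mask. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def ensemble_uncertainty(preds):
    """Pixel-wise entropy of an ensemble of binary segmentation maps.

    preds: array of shape (n_models, H, W) with values in {0, 1}.
    Returns (entropy_map_in_bits, mean_prediction).
    """
    p = preds.mean(axis=0)  # per-pixel foreground frequency across models
    eps = 1e-12
    entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return entropy, p

def otsu_threshold(values, bins=256):
    """Plain-NumPy Otsu threshold: maximize between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)            # cumulative weight of the low class
    w1 = w0[-1] - w0                # weight of the high class
    m0 = np.cumsum(hist * centers)  # cumulative first moment
    mu0 = m0 / np.maximum(w0, 1e-12)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]
```

A final mask could then be `p >= otsu_threshold(p)`, with high-entropy pixels flagged for manual review.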

RESULTS/CONCLUSION: The SPU-Net model offers an advantage over traditional U-Net models by providing a quantitative method of displaying segmentation uncertainty. Regarding segmentation performance, SPU-Net demonstrated results comparable to a traditional U-Net in sensitivity (0.758 vs. 0.746) and Dice similarity coefficient (0.760 vs. 0.742), with a reduced mean Hausdorff distance (mHD) (0.612 cm vs. 0.744 cm) and a reduced 95% Hausdorff distance (HD95) (2.682 cm vs. 2.912 cm). SPU-Net is not only comparable to U-Net in segmentation performance but also offers a significant advantage by providing uncertainty quantification. The added SPU-Net uncertainty mapping revealed low uncertainty in accurate segments (e.g., within the GTV or healthy tissue) and higher uncertainty in problematic areas (e.g., GTV boundaries, dural tail), providing valuable insights for potential manual corrections. This advancement is particularly valuable given the complex extra-axial nature of meningiomas and their involvement with dural tissue. The capability to quantify uncertainty makes SPU-Net a more advanced and informative tool for segmentation, without sacrificing performance.

PMID:39935829 | PMC:PMC11810883 | DOI:10.3389/fonc.2025.1474590

Categories: Literature Watch

Multi-scale channel attention U-Net: a novel framework for automated gallbladder segmentation in medical imaging

Wed, 2025-02-12 06:00

Front Oncol. 2025 Jan 28;15:1528654. doi: 10.3389/fonc.2025.1528654. eCollection 2025.

ABSTRACT

OBJECTIVES: To develop a novel automatic delineation model, the Multi-Scale Channel Attention U-Net (MCAU-Net) model, for gallbladder segmentation on CT images of patients with liver cancer.

METHODS: We retrospectively collected the CT images from 120 patients with liver cancer, based on which ground truth was manually delineated by physicians. The images and ground truth constitute a dataset, which was proportionally divided into a training set (54%), a validation set (6%), and a test set (40%). Data augmentation was performed on the training set. Our proposed MCAU-Net model was employed for gallbladder segmentation and its performance was evaluated using Dice Similarity Coefficient (DSC), Jaccard Similarity Coefficient (JSC), Positive Predictive Value (PPV), Sensitivity (SE), Hausdorff Distance (HD), Relative Volume Difference (RVD), and Volumetric Overlap Error (VOE) metrics.

RESULTS: On the test set, MCAU-Net achieved DSC, JSC, PPV, SE, HD, RVD, and VOE values of 0.85 ± 0.22, 0.79 ± 0.23, 0.92 ± 0.14, 0.84 ± 0.23, 2.75 ± 0.98, 0.18 ± 0.48, and 0.22 ± 0.42, respectively. Compared to the control models U-Net, SEU-Net, and TransUNet, MCAU-Net improved DSC by 0.06, 0.04, and 0.06; JSC by 0.09, 0.06, and 0.09; PPV by 0.08, 0.08, and 0.05; and SE by 0.05, 0.05, and 0.07; it reduced HD by 0.45, 0.28, and 0.41; RVD by 0.07, 0.03, and 0.07; and VOE by 0.04, 0.02, and 0.08, respectively. Qualitative results revealed that MCAU-Net produced smoother and more accurate boundaries, closer to the expert delineation, with less over-segmentation and under-segmentation and improved robustness.
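For reference, the overlap metrics reported above reduce to a few lines of NumPy on binary masks (DSC, JSC, and VOE = 1 - JSC); a generic sketch, not the study's evaluation code:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard similarity coefficient (JSC)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def voe(a, b):
    """Volumetric overlap error: VOE = 1 - JSC."""
    return 1.0 - jaccard(a, b)
```

The same functions apply unchanged to 3D volumes, since the masks are flattened by the elementwise logic.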

CONCLUSIONS: The MCAU-Net model significantly improves gallbladder segmentation on CT images. It satisfies clinical requirements and enhances the efficiency of physicians, particularly in segmenting complex anatomical structures.

PMID:39935828 | PMC:PMC11810919 | DOI:10.3389/fonc.2025.1528654

Categories: Literature Watch

Mapping knowledge landscapes and emerging trends in artificial intelligence for antimicrobial resistance: bibliometric and visualization analysis

Wed, 2025-02-12 06:00

Front Med (Lausanne). 2025 Jan 28;12:1492709. doi: 10.3389/fmed.2025.1492709. eCollection 2025.

ABSTRACT

OBJECTIVE: To systematically map the knowledge landscape and development trends in artificial intelligence (AI) applications for antimicrobial resistance (AMR) research through bibliometric analysis, providing evidence-based insights to guide future research directions and inform strategic decision-making in this dynamic field.

METHODS: A comprehensive bibliometric analysis was performed using the Web of Science Core Collection database for publications from 2014 to 2024. The analysis integrated multiple bibliometric approaches: VOSviewer for visualization of collaboration networks and research clusters, CiteSpace for temporal evolution analysis, and quantitative analysis of publication metrics. Key bibliometric indicators including co-authorship patterns, keyword co-occurrence, and citation impact were analyzed to delineate research evolution and collaboration patterns in this domain.

RESULTS: A collection of 2,408 publications was analyzed, demonstrating significant annual growth with publications increasing from 4 in 2014 to 549 in 2023 (22.7% of total output). The United States (707), China (581), and India (233) were the leading contributors in international collaborations. The Chinese Academy of Sciences (53), Harvard Medical School (43), and University of California San Diego (26) were identified as top contributing institutions. Citation analysis highlighted two major breakthroughs: AlphaFold's protein structure prediction (6,811 citations) and deep learning approaches to antibiotic discovery (4,784 citations). Keyword analysis identified six enduring research clusters from 2014 to 2024: sepsis, artificial neural networks, antimicrobial resistance, antimicrobial peptides, drug repurposing, and molecular docking, demonstrating the sustained integration of AI in antimicrobial therapy development. Recent trends show increasing application of AI technologies in traditional approaches, particularly in MALDI-TOF MS for pathogen identification and graph neural networks for large-scale molecular screening.

CONCLUSION: This bibliometric analysis underscores the importance of artificial intelligence in advancing antimicrobial drug discovery, especially in the fight against AMR. By making drug discovery methods faster, more efficient, and more predictive, current AI capabilities show clear potential for proactively combating the ever-growing worldwide challenge of AMR. This study not only identifies current trends but also offers a strategic approach for further investigation.

PMID:39935800 | PMC:PMC11810743 | DOI:10.3389/fmed.2025.1492709

Categories: Literature Watch

Blinking characteristics analyzed by a deep learning model and the relationship with tear film stability in children with long-term use of orthokeratology

Wed, 2025-02-12 06:00

Front Cell Dev Biol. 2025 Jan 28;12:1517240. doi: 10.3389/fcell.2024.1517240. eCollection 2024.

ABSTRACT

PURPOSE: To use a deep learning model to observe blinking characteristics and to evaluate their changes and correlation with tear film characteristics in children with long-term use of orthokeratology (ortho-K).

METHODS: 31 children (58 eyes) who had used ortho-K for more than 1 year and 31 age- and gender-matched controls were selected for follow-up in our ophthalmology clinic from 2021/09 to 2023/10 in this retrospective case-control study. Both groups underwent comprehensive ophthalmological examinations, including Ocular Surface Disease Index (OSDI) scoring, Keratograph 5M, and LipiView. A deep learning system based on U-Net and Swin-Transformer was proposed for the observation of blinking characteristics. The frequency of incomplete blinks (IB) and complete blinks (CB) and the incomplete blinking rate (IBR) within 20 s, as well as the duration of the closing, closed, and opening phases in the blink wave, were calculated by our deep learning system. Relative IPH% was proposed and defined as the ratio of the mean of IPH% within 20 s to the maximum value of IPH%, indicating the extent of incomplete blinking. Furthermore, the accuracy, precision, sensitivity, specificity, and F1 score of the overall U-Net-Swin-Transformer model, and its consistency with the built-in algorithm, were evaluated as well. Independent t-tests and Mann-Whitney tests were used to analyze the blinking patterns and tear film characteristics between the long-term ortho-K wearer group and the control group. Spearman's rank correlation was used to analyze the relationship between blinking patterns and tear film stability.

RESULTS: Our deep learning system demonstrated high performance (accuracy = 98.13%, precision = 96.46%, sensitivity = 98.10%, specificity = 98.10%, F1 score = 0.9727) in the observation of blinking patterns. The OSDI scores, conjunctival redness, lipid layer thickness (LLT), and tear meniscus height did not change significantly between two groups. Notably, the ortho-K group exhibited shorter first (11.75 ± 7.42 s vs. 14.87 ± 7.93 s, p = 0.030) and average non-invasive tear break-up times (NIBUT) (13.67 ± 7.0 s vs. 16.60 ± 7.24 s, p = 0.029) compared to the control group. They demonstrated a higher IB (4.26 ± 2.98 vs. 2.36 ± 2.55, p < 0.001), IBR (0.81 ± 0.28 vs. 0.46 ± 0.39, p < 0.001), relative IPH% (0.3229 ± 0.1539 vs. 0.2233 ± 0.1960, p = 0.004) and prolonged eye-closing phase (0.18 ± 0.08 s vs. 0.15 ± 0.07 s, p = 0.032) and opening phase (0.35 ± 0.12 s vs. 0.28 ± 0.14 s, p = 0.015) compared to controls. In addition, Spearman's correlation analysis revealed a negative correlation between incomplete blinks and NIBUT (for first-NIBUT, r = -0.292, p = 0.004; for avg-NIBUT, r = -0.3512, p < 0.001) in children with long-term use of ortho-K.
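The Spearman coefficients reported above are computed on ranks; a dependency-free sketch of the statistic (not the authors' code), using average ranks for ties:

```python
def average_ranks(xs):
    """1-based ranks with ties assigned the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of equal values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

Negative rho, as found here between incomplete blinks and NIBUT, means higher ranks of one variable pair with lower ranks of the other.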

CONCLUSION: The deep learning system based on U-Net and Swin-Transformer achieved strong performance in the observation of blinking characteristics. Children with long-term use of ortho-K showed an increase in the frequency and rate of incomplete blinks and prolonged eye-closing and eye-opening phases. The increased frequency of incomplete blinks was associated with decreased tear film stability, indicating the importance of monitoring children's blinking patterns as well as tear film status in clinical follow-up.

PMID:39935789 | PMC:PMC11811098 | DOI:10.3389/fcell.2024.1517240

Categories: Literature Watch

Extended fiducial inference: toward an automated process of statistical inference

Wed, 2025-02-12 06:00

J R Stat Soc Series B Stat Methodol. 2024 Aug 5;87(1):98-131. doi: 10.1093/jrsssb/qkae082. eCollection 2025 Feb.

ABSTRACT

While fiducial inference was widely considered a big blunder by R.A. Fisher, the goal he initially set, 'inferring the uncertainty of model parameters on the basis of observations', has been continually pursued by many statisticians. To this end, we develop a new statistical inference method called extended Fiducial inference (EFI). The new method achieves the goal of fiducial inference by leveraging advanced statistical computing techniques while remaining scalable for big data. Extended Fiducial inference involves jointly imputing random errors realized in observations using stochastic gradient Markov chain Monte Carlo and estimating the inverse function using a sparse deep neural network (DNN). The consistency of the sparse DNN estimator ensures that the uncertainty embedded in observations is properly propagated to model parameters through the estimated inverse function, thereby validating downstream statistical inference. Compared to frequentist and Bayesian methods, EFI offers significant advantages in parameter estimation and hypothesis testing. Specifically, EFI provides higher fidelity in parameter estimation, especially when outliers are present in the observations; and eliminates the need for theoretical reference distributions in hypothesis testing, thereby automating the statistical inference process. Extended Fiducial inference also provides an innovative framework for semisupervised learning.

PMID:39935678 | PMC:PMC11809222 | DOI:10.1093/jrsssb/qkae082

Categories: Literature Watch

Diagnosis of depression based on facial multimodal data

Wed, 2025-02-12 06:00

Front Psychiatry. 2025 Jan 28;16:1508772. doi: 10.3389/fpsyt.2025.1508772. eCollection 2025.

ABSTRACT

INTRODUCTION: Depression is a serious mental health disease. Traditional scale-based depression diagnosis methods often have problems of strong subjectivity and high misdiagnosis rate, so it is particularly important to develop automatic diagnostic tools based on objective indicators.

METHODS: This study proposes a deep learning method that fuses multimodal data to automatically diagnose depression using facial video and audio data. We use spatiotemporal attention module to enhance the extraction of visual features and combine the Graph Convolutional Network (GCN) and the Long and Short Term Memory (LSTM) to analyze the audio features. Through the multi-modal feature fusion, the model can effectively capture different feature patterns related to depression.

RESULTS: We conduct extensive experiments on the publicly available clinical dataset, the Extended Distress Analysis Interview Corpus (E-DAIC). The experimental results show that we achieve robust accuracy on the E-DAIC dataset, with a Mean Absolute Error (MAE) of 3.51 in estimating PHQ-8 scores from recorded interviews.

DISCUSSION: Compared with existing methods, our model shows excellent performance in multi-modal information fusion, which is suitable for early evaluation of depression.

PMID:39935533 | PMC:PMC11811426 | DOI:10.3389/fpsyt.2025.1508772

Categories: Literature Watch

Detection of Masses in Mammogram Images Based on the Enhanced RetinaNet Network With INbreast Dataset

Wed, 2025-02-12 06:00

J Multidiscip Healthc. 2025 Feb 7;18:675-695. doi: 10.2147/JMDH.S493873. eCollection 2025.

ABSTRACT

PURPOSE: Breast cancer is one of the most serious public health problems affecting women worldwide. Analyzing mammogram images remains the main method used by doctors to diagnose and detect breast cancers. However, this process usually depends on the experience of radiologists and is very time consuming.

PATIENTS AND METHODS: We propose to introduce deep learning technology into this process to facilitate computer-aided diagnosis (CAD), addressing the challenges of class imbalance, enhancing the detection of small masses and multiple targets, and reducing false positives and negatives in mammogram analysis. To this end, we adopted and enhanced RetinaNet to detect masses in mammogram images. Specifically, we introduced a novel modification to the network structure, in which the feature map M5 is processed by the ReLU function prior to the original convolution kernel. This strategic adjustment was designed to prevent the loss of resolution for small mass features. Additionally, we introduced transfer learning into the training process by leveraging pre-trained weights from other RetinaNet applications, and fine-tuned our improved model on the INbreast dataset.

RESULTS: The aforementioned innovations yield superior performance of the enhanced RetinaNet model on the public INbreast dataset, as evidenced by a mAP (mean average precision) of 1.0000 and a TPR (true positive rate) of 1.00 at 0.00 FPPI (false positives per image).

CONCLUSION: The experimental results demonstrate that our enhanced RetinaNet model outperforms existing models, generalizing better than other published approaches, and it can also be applied to other patient populations to assist doctors in making a proper diagnosis.

PMID:39935433 | PMC:PMC11812562 | DOI:10.2147/JMDH.S493873

Categories: Literature Watch

MedFuseNet: fusing local and global deep feature representations with hybrid attention mechanisms for medical image segmentation

Tue, 2025-02-11 06:00

Sci Rep. 2025 Feb 11;15(1):5093. doi: 10.1038/s41598-025-89096-9.

ABSTRACT

Medical image segmentation plays a crucial role in addressing emerging healthcare challenges. Although several impressive deep learning architectures based on convolutional neural networks (CNNs) and Transformers have recently demonstrated remarkable performance, there is still potential for further improvement due to their inherent limitations in capturing feature correlations of input data. To address this issue, this paper proposes a novel encoder-decoder architecture called MedFuseNet that aims to fuse local and global deep feature representations with hybrid attention mechanisms for medical image segmentation. More specifically, the proposed approach contains two branches for feature learning in parallel: one leverages CNNs to learn local correlations of input data, and the other utilizes Swin-Transformer to capture global contextual correlations of input data. For feature fusion and enhancement, the designed hybrid attention mechanisms combine four different attention modules: (1) an atrous spatial pyramid pooling (ASPP) module for the CNN branch, (2) a cross attention module in the encoder for fusing local and global features, (3) an adaptive cross attention (ACA) module in skip connections for further fusion, and (4) a squeeze-and-excitation attention (SE-attention) module in the decoder for highlighting informative features. We evaluate the proposed approach on the public ACDC and Synapse datasets, where it achieves average DSCs of 89.73% and 78.40%, respectively. Experimental results on these two datasets demonstrate the effectiveness of our approach on medical image segmentation tasks, outperforming the other state-of-the-art approaches used for comparison.

PMID:39934248 | DOI:10.1038/s41598-025-89096-9

Categories: Literature Watch

Transformation of free-text radiology reports into structured data

Tue, 2025-02-11 06:00

Radiologie (Heidelb). 2025 Feb 11. doi: 10.1007/s00117-025-01422-4. Online ahead of print.

ABSTRACT

BACKGROUND: The rapid development of large language models (LLMs) opens up new possibilities for the automated processing of medical texts. Transforming unstructured radiology reports into structured data is crucial for efficient use in clinical decision support systems, research, and improving patient care.

OBJECTIVES: What are the challenges of transforming natural language radiology reports into structured data using LLMs? Which methods and architectures are promising? How can the quality and reliability of the extracted data be ensured?

MATERIALS AND METHODS: This article examines current research on the application of LLMs in radiological information processing. Various approaches such as rule-based systems, machine learning, and deep learning models, particularly neural network architectures, are analyzed and compared. The focus is on extracting information such as diagnoses, anatomical locations, findings, and measurements.

RESULTS AND CONCLUSION: LLMs show great potential in transforming reports into structured data. In particular, deep learning models trained on large datasets achieve high accuracies. However, challenges remain, such as dealing with ambiguities, abbreviations, and the variability of linguistic expressions. Combining LLMs with domain-specific knowledge, for example, in the form of ontologies, can further improve the performance of the systems. Integrating contextual information and developing robust evaluation metrics are also important research directions.

PMID:39934245 | DOI:10.1007/s00117-025-01422-4

Categories: Literature Watch

Multiple model visual feature embedding and selection method for an efficient ocular disease classification

Tue, 2025-02-11 06:00

Sci Rep. 2025 Feb 12;15(1):5157. doi: 10.1038/s41598-024-84922-y.

ABSTRACT

Early detection of ocular diseases is vital to preventing severe complications, yet it remains challenging due to the need for skilled specialists, complex imaging processes, and limited resources. Automated solutions are essential to enhance diagnostic precision and support clinical workflows. This study presents a deep learning-based system for automated classification of ocular diseases using the Ocular Disease Intelligent Recognition (ODIR) dataset. The dataset includes 5,000 patient fundus images labeled into eight categories of ocular diseases. Initial experiments utilized transfer learning models such as DenseNet201, EfficientNetB3, and InceptionResNetV2. To optimize computational efficiency, a novel two-level feature selection framework combining Linear Discriminant Analysis (LDA) and advanced neural network classifiers-Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM)-was introduced. Among the tested approaches, the "Combined Data" strategy utilizing features from all three models achieved the best results, with the BiLSTM classifier attaining 100% accuracy, precision, and recall on the training set, and over 98% performance on the validation set. The LDA-based framework significantly reduced computational complexity while enhancing classification accuracy. The proposed system demonstrates a scalable, efficient solution for ocular disease detection, offering robust support for clinical decision-making. By bridging the gap between clinical demands and technological capabilities, it has the potential to alleviate the workload of ophthalmologists, particularly in resource-constrained settings, and improve patient outcomes globally.
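The LDA stage of the two-level framework is described only at a high level; for two classes, the classical Fisher discriminant direction w ∝ Sw⁻¹(μ1 − μ0) conveys the flavor of such a projection. A hypothetical NumPy sketch (the ridge term is an assumption added for numerical stability, not part of the paper):

```python
import numpy as np

def fisher_lda_direction(X0, X1, ridge=1e-6):
    """Fisher discriminant direction w ∝ Sw^-1 (mu1 - mu0) for two classes.

    X0, X1: arrays of shape (n_samples, n_features), one per class.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the per-class scatter matrices
    S0 = np.cov(X0, rowvar=False, bias=True) * len(X0)
    S1 = np.cov(X1, rowvar=False, bias=True) * len(X1)
    Sw = S0 + S1 + ridge * np.eye(X0.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)
```

Projecting deep features onto a handful of such directions is one way an LDA step can shrink the input dimensionality before a DNN/LSTM/BiLSTM classifier, as the framework above describes.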

PMID:39934192 | DOI:10.1038/s41598-024-84922-y

Categories: Literature Watch

Association Between Aortic Imaging Features and Impaired Glucose Metabolism: A Deep Learning Population Phenotyping Approach

Tue, 2025-02-11 06:00

Acad Radiol. 2025 Feb 10:S1076-6332(25)00087-X. doi: 10.1016/j.acra.2025.01.032. Online ahead of print.

ABSTRACT

RATIONALE AND OBJECTIVES: Type 2 diabetes is a known risk factor for vascular disease with an impact on the aorta. The aim of this study was to develop a deep learning framework for quantification of aortic phenotypes from magnetic resonance imaging (MRI) and to investigate the association between aortic features and impaired glucose metabolism beyond traditional cardiovascular (CV) risk factors.

MATERIALS AND METHODS: This study used data from the prospective Cooperative Health Research in the Region of Augsburg (KORA) study to develop a deep learning framework for automatic quantification of aortic features (maximum aortic diameter, total volume, length, and width of the aortic arch) derived from MRI. Aortic features were compared between different states of glucose metabolism and tested for associations with impaired glucose metabolism adjusted for traditional CV risk factors (age, sex, height, weight, hypertension, smoking, and lipid panel).

RESULTS: The deep learning framework yielded a high performance for aortic feature quantification with a Dice coefficient of 91.1±0.02. Of 381 participants (58% male, mean age 56 years), 231 (60.6%) had normal blood glucose, 97 (25.5%) had prediabetes, and 53 (13.9%) had diabetes. All aortic features showed a significant increase between different groups of glucose metabolism (p≤0.04). Total aortic length and total aortic volume were associated with impaired glucose metabolism (OR 0.85, 95%CI 0.74-0.96; p=0.01, and OR 0.99, 95%CI 0.98-0.99; p=0.02) independent of CV risk factors.

CONCLUSION: Aortic features showed a glucose level dependent increase from normoglycemic individuals to those with prediabetes and diabetes. Total aortic length and volume were independently and inversely associated with impaired glucose metabolism beyond traditional CV risk factors.
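The adjusted association reported above can be sketched as a multivariable logistic regression. This is a minimal illustration on synthetic data: the generative model, variable scales, and the reduced set of covariates are assumptions for demonstration, not the KORA data or the study's full adjustment set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 381
# Hypothetical predictors: an aortic feature plus two CV risk factors.
aortic_length = rng.normal(450, 40, n)   # mm, assumed scale
age = rng.normal(56, 10, n)
bmi = rng.normal(27, 4, n)
# Assumed generative model: impaired glucose metabolism rises with age
# and is inversely related to aortic length, mirroring the reported ORs < 1.
logit = 0.05 * (age - 56) - 0.02 * (aortic_length - 450)
impaired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = StandardScaler().fit_transform(np.column_stack([aortic_length, age, bmi]))
model = LogisticRegression().fit(X, impaired)
# Odds ratio per 1-SD increase in each predictor, adjusted for the others.
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(["aortic_length", "age", "bmi"], odds_ratios.round(2))))
```

An odds ratio below 1 for the aortic feature, with the other covariates held in the model, is the shape of the "independently and inversely associated" result in the conclusion.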

PMID:39934079 | DOI:10.1016/j.acra.2025.01.032

Categories: Literature Watch

Deep-learning-ready RGB-depth images of seedling development

Tue, 2025-02-11 06:00

Plant Methods. 2025 Feb 11;21(1):16. doi: 10.1186/s13007-025-01334-3.

ABSTRACT

In the era of machine learning-driven plant imaging, the production of annotated datasets is a very important contribution. In this data paper, a unique annotated dataset of seedling emergence kinetics is proposed. It is composed of almost 70,000 RGB-depth frames and more than 700,000 plant annotations. The dataset is shown to be valuable for training deep learning models and performing high-throughput phenotyping by imaging. The ability of such models to generalize to several species and to outperform the state of the art, owing to the delivered dataset, is demonstrated. We also discuss how this dataset raises new questions in plant phenotyping.

PMID:39934882 | DOI:10.1186/s13007-025-01334-3

Categories: Literature Watch

Classifying and fact-checking health-related information about COVID-19 on Twitter/X using machine learning and deep learning models

Tue, 2025-02-11 06:00

BMC Med Inform Decis Mak. 2025 Feb 11;25(1):73. doi: 10.1186/s12911-025-02895-y.

ABSTRACT

BACKGROUND: Despite recent progress in misinformation detection methods, further investigation is required to develop more robust fact-checking models with particular consideration for the unique challenges of health information sharing. This study aimed to identify the most effective approach for detecting and classifying reliable information versus misinformation health content shared on Twitter/X related to COVID-19.

METHODS: We used seven different machine learning/deep learning models. Tweets were collected, processed, and analyzed using relevant keywords and hashtags, then labeled and classified into two distinct datasets: "Trustworthy information" versus "Misinformation". The cosine similarity metric was employed to oversample the minority "Trustworthy information" class, ensuring a more balanced representation of both classes for training and testing purposes. Finally, the performance of the various fact-checking models was analyzed and compared using accuracy, precision, recall, F1-score, ROC curve, and AUC.
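The abstract does not detail the cosine-similarity oversampling scheme, so the following is a plausible SMOTE-like sketch: each minority sample is interpolated with its most cosine-similar neighbour until the class reaches a target size. The function names, the interpolation rule, and the vector dimensions are assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between vector a and each row of matrix b."""
    return (b @ a) / (np.linalg.norm(b, axis=1) * np.linalg.norm(a) + 1e-12)

def oversample_minority(X_min, target_size, rng):
    """Grow the minority class to target_size by interpolating each random
    seed with its most cosine-similar neighbour (a SMOTE-like scheme)."""
    synth = []
    while len(X_min) + len(synth) < target_size:
        i = rng.integers(len(X_min))
        sims = cosine_sim(X_min[i], X_min)
        sims[i] = -np.inf                  # exclude the seed itself
        j = int(np.argmax(sims))           # most similar neighbour
        t = rng.random()
        synth.append(X_min[i] + t * (X_min[j] - X_min[i]))
    return np.vstack([X_min, np.array(synth)])

rng = np.random.default_rng(0)
X_min = rng.normal(size=(40, 300))   # e.g. TF-IDF-like tweet vectors
X_bal = oversample_minority(X_min, target_size=100, rng=rng)
print(X_bal.shape)
```

Using cosine similarity rather than Euclidean distance to pick neighbours is a natural choice for sparse, high-dimensional text representations, where vector direction matters more than magnitude.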

RESULTS: For accuracy, precision, F1 score, and recall, the average values of TextConvoNet were 90.28%, 90.28%, 90.29%, and 90.30%, respectively, with an ROC AUC of 0.901. For the "Trustworthy information" class, it achieved an accuracy of 85%, precision of 93%, recall of 86%, and F1 score of 89%, values higher than those of the other models. Moreover, its performance in the misinformation category was even stronger, with an accuracy of 94%, precision of 88%, recall of 94%, and F1 score of 91%.

CONCLUSION: This study showed that TextConvoNet was the most effective model for detecting and classifying trustworthy information versus misinformation related to health issues shared on Twitter/X.

PMID:39934858 | DOI:10.1186/s12911-025-02895-y

Categories: Literature Watch

A novel method for assessing cycling movement status: an exploratory study integrating deep learning and signal processing technologies

Tue, 2025-02-11 06:00

BMC Med Inform Decis Mak. 2025 Feb 11;25(1):71. doi: 10.1186/s12911-024-02828-1.

ABSTRACT

This study proposes a deep learning-based motion assessment method that integrates a pose estimation algorithm, Keypoint RCNN (KR), with signal processing techniques, and verifies its reliability and validity. Twenty college students were recruited to pedal a stationary bike. Inertial sensors and a smartphone simultaneously recorded the participants' cycling movement. The KR algorithm was used to acquire 2D coordinates of the participants' skeletal keypoints from the recorded movement video. Spearman's rank correlation analysis, intraclass correlation coefficient (ICC), error analysis, and t-tests were conducted to compare the consistency of data obtained from the two movement capture systems, including the peak frequency of acceleration, the transition time points between movement statuses, and the complexity index average (CIA) of each movement status based on multiscale entropy analysis. The KR algorithm showed excellent consistency (ICC1,3 = 0.988) between the two methods when estimating the peak acceleration frequency. Both peak acceleration frequencies and CIA metrics estimated by the two methods displayed a strong correlation (r > 0.70) and good agreement (ICC2,1 > 0.750). Additionally, error values were relatively low (MAE = 0.001 and 0.040, MRE = 0.00% and 7.67%). Results of t-tests showed significant differences (p = 0.003 and 0.030) for various acceleration CIAs, indicating that our method could distinguish different movement statuses. The KR algorithm also demonstrated excellent intra-session reliability (ICC = 0.988). Acceleration frequency analysis metrics derived from the KR method can accurately identify transitions among movement statuses. Leveraging the KR algorithm and signal processing techniques, the proposed method is designed for individualized motor function evaluation in home or community-based settings.
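One of the metrics compared above, the peak frequency of an acceleration signal, can be estimated with a simple FFT. This is a generic sketch, not the paper's pipeline: the sampling rate, signal length, and the 1.2 Hz pedalling rhythm are hypothetical.

```python
import numpy as np

def peak_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a 1-D signal sampled at fs Hz."""
    signal = signal - signal.mean()            # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 50.0                                      # Hz, assumed sensor rate
t = np.arange(0, 10, 1 / fs)
# Simulated acceleration: a 1.2 Hz pedalling rhythm plus sensor noise.
accel = (np.sin(2 * np.pi * 1.2 * t)
         + 0.3 * np.random.default_rng(0).normal(size=t.size))
pf = peak_frequency(accel, fs)
print(round(pf, 2))
```

Comparing this spectral peak between the keypoint-derived trajectory and the inertial-sensor signal is the kind of agreement the ICC values above quantify.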

PMID:39934805 | DOI:10.1186/s12911-024-02828-1

Categories: Literature Watch

Mammalian piRNA target prediction using a hierarchical attention model

Tue, 2025-02-11 06:00

BMC Bioinformatics. 2025 Feb 11;26(1):50. doi: 10.1186/s12859-025-06068-6.

ABSTRACT

BACKGROUND: Piwi-interacting RNAs (piRNAs) are well established for monitoring and protecting the genome from transposons in germline cells. Recently, numerous studies provided evidence that piRNAs also play important roles in regulating mRNA transcript levels. Despite their significant role in regulating cellular RNA levels, the piRNA targeting rules are not well defined, especially in mammals, which poses obstacles to the elucidation of piRNA function.

RESULTS: Given the complexity and current limitation in understanding the mammalian piRNA targeting rules, we designed a deep learning model by selecting appropriate deep learning sub-networks based on the targeting patterns of piRNA inferred from previous experiments. Additionally, to alleviate the problem of insufficient data, a transfer learning approach was employed. Our model achieves a good discriminatory power (Accuracy: 98.5%) in predicting an independent test dataset. Finally, this model was utilized to predict the targets of all mouse and human piRNAs available in the piRNA database.

CONCLUSIONS: In this research, we developed a deep learning framework that significantly advances the prediction of piRNA targets, overcoming the limitations posed by insufficient data and current incomplete targeting rules. The piRNA target prediction network and results can be downloaded from https://github.com/SofiaTianjiaoZhang/piRNATarget .
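The abstract does not specify the model's input representation, so the following is a generic sketch of how a piRNA-target sequence pair is often encoded for a deep network: one-hot encode each RNA sequence, pad or trim to a fixed length, and stack the channels. The fixed length, the padding character, and the channel layout are assumptions.

```python
import numpy as np

BASES = "ACGU"

def one_hot(seq):
    """One-hot encode an RNA sequence into a (len, 4) matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4))
    for pos, base in enumerate(seq.upper()):
        if base in idx:                 # unknown bases (e.g. N) stay all-zero
            mat[pos, idx[base]] = 1.0
    return mat

def encode_pair(pirna, target, length=32):
    """Pad/trim both sequences to a fixed length and stack their one-hot
    channels side by side, a common input layout for sequence-pair models."""
    def fix(seq):
        return one_hot(seq[:length].ljust(length, "N"))
    return np.concatenate([fix(pirna), fix(target)], axis=1)  # (length, 8)

x = encode_pair("UGAGGUAGUAGGUUGUAUAGUU", "ACUAUACAACCUACUACCUCA")
print(x.shape)
```

A fixed-size tensor like this is also what makes transfer learning practical: weights pretrained on a related, data-rich sequence task can be reused unchanged on the smaller piRNA-target dataset.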

PMID:39934678 | DOI:10.1186/s12859-025-06068-6

Categories: Literature Watch
