Deep learning

The Pulseq-CEST Library: definition of preparations and simulations, example data, and example evaluations

Thu, 2025-03-27 06:00

MAGMA. 2025 Mar 27. doi: 10.1007/s10334-025-01242-6. Online ahead of print.

ABSTRACT

OBJECTIVES: Despite prevalent use of chemical exchange saturation transfer (CEST) MRI, standardization remains elusive. Imaging depends heavily on parameters dictating radiofrequency (RF) events, gradient events, and analog-to-digital converter (ADC) events. We present the Pulseq-CEST Library, a repository of CEST preparation and simulation definitions, including example data and evaluations, that provides a common basis for reproducible research, rapid prototyping, and in silico deep learning training data generation.

MATERIALS AND METHODS: A Pulseq-CEST experiment requires (i) a CEST preparation sequence, (ii) a Bloch-McConnell parameter set, (iii) a Bloch-McConnell simulation, and (iv) an evaluation script. Pulseq-CEST utilizes the Bloch-McConnell equations to model in vitro and in vivo conditions. Using this model, a candidate sequence or environment can be held constant while varying other inputs, enabling robust testing.
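
As an illustration of component (iii), the sketch below integrates the two-pool Bloch-McConnell equations under continuous-wave saturation to produce a Z-spectrum. It is a generic simulation, not the Pulseq-CEST code: the pool parameters, the 3 µT / 5 s saturation block, the 3.5 ppm solute shift, and the offset grid are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 267.522  # gyromagnetic ratio of 1H in rad/(s*uT)

def bloch_mcconnell(t, M, dwa, dwb, w1, R1a, R2a, R1b, R2b, kab, kba, fb):
    """Two-pool Bloch-McConnell equations with RF along x in the rotating frame.
    M = [Mxa, Mya, Mza, Mxb, Myb, Mzb]; pool a = water, pool b = solute."""
    Mxa, Mya, Mza, Mxb, Myb, Mzb = M
    return [
        -(R2a + kab) * Mxa + dwa * Mya + kba * Mxb,
        -dwa * Mxa - (R2a + kab) * Mya + w1 * Mza + kba * Myb,
        -w1 * Mya - (R1a + kab) * Mza + kba * Mzb + R1a * 1.0,
        -(R2b + kba) * Mxb + dwb * Myb + kab * Mxa,
        -dwb * Mxb - (R2b + kba) * Myb + w1 * Mzb + kab * Mya,
        -w1 * Myb - (R1b + kba) * Mzb + kab * Mza + R1b * fb,
    ]

def z_spectrum(offsets_ppm, b0=3.0, b1_ut=3.0, t_sat=5.0,
               fb=0.001, kex=200.0, solute_ppm=3.5):
    """Water Mz after CW saturation at each offset (all parameters illustrative)."""
    w1 = GAMMA * b1_ut                 # saturation amplitude, rad/s
    hz_per_ppm = 42.577 * b0           # 1 ppm in Hz at field strength b0
    kab = fb * kex                     # water -> solute rate (detailed balance)
    R1a, R2a = 1 / 1.3, 1 / 0.075      # assumed water relaxation rates at 3 T
    R1b, R2b = 1.0, 100.0              # assumed solute (e.g., amide) rates
    z = []
    for ppm in offsets_ppm:
        dwa = 2 * np.pi * hz_per_ppm * ppm
        dwb = 2 * np.pi * hz_per_ppm * (ppm - solute_ppm)
        sol = solve_ivp(bloch_mcconnell, (0, t_sat), [0, 0, 1.0, 0, 0, fb],
                        method="BDF", args=(dwa, dwb, w1, R1a, R2a, R1b, R2b,
                                            kab, kex, fb))
        z.append(sol.y[2, -1])         # water Mz at the end of saturation
    return np.array(z)

print(z_spectrum(np.linspace(-5, 5, 41)))
```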

RESULTS: Data were compared for amide proton transfer weighted (APTw) and water shift and B1 (WASABI) protocols using a five-tube phantom and simulated environments. Real and simulated data matched anticipated spectral shapes and local peak characteristics. The Pulseq-CEST Library supports similar experiments with common sequences and environments to assess new protocols and sample data.

DISCUSSION: The Pulseq-CEST Library provides a flexible mechanism for standardizing and prototyping CEST sequences, facilitating collaborative development. With the capability for expansion, including open-source incorporation of new sequences and environments, the library accelerates the invention and spread of novel CEST and other saturation transfer approaches, such as relayed NOEs (rNOEs) and semisolid magnetization transfer contrast (MTC) methods.

PMID:40146474 | DOI:10.1007/s10334-025-01242-6

Categories: Literature Watch

Revealing morphological fingerprints in perinatal brains using quasi-conformal mapping: occurrence and neurodevelopmental implications

Thu, 2025-03-27 06:00

Brain Imaging Behav. 2025 Mar 27. doi: 10.1007/s11682-025-00998-8. Online ahead of print.

ABSTRACT

The morphological fingerprint of the brain can identify the uniqueness of an individual. However, whether such individual patterns are present in perinatal brains, and which morphological attributes or cortical regions better characterize the individual differences of neonates, remain unclear. In this study, we proposed a deep learning framework that projects three-dimensional spherical meshes of three morphological features (i.e., cortical thickness, mean curvature, and sulcal depth) onto two-dimensional planes through quasi-conformal mapping, and employs ResNet18 with contrastive learning for individual identification. We used cross-sectional structural MRI data from 461 infants, together with data augmentation, to train the model, and fine-tuned the parameters on 41 infants who had longitudinal scans. The model was validated on a held-out set of 20 infants with longitudinal scans, achieving remarkable Top-1 and Top-5 accuracies of 85.90% and 92.20%, respectively. The sensorimotor and visual cortices were recognized as the most contributive regions in individual identification. Moreover, the morphological fingerprints successfully predicted long-term cognitive and behavioral development, and the folding morphology demonstrated greater discriminative capability than cortical thickness. These findings provide evidence for the emergence of morphological fingerprints in the brain at the beginning of the third trimester, which may hold promising implications for understanding the formation of individual uniqueness and predicting long-term neurodevelopmental risks during early development.
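
A minimal sketch of the identification stage described above, assuming the three quasi-conformally mapped features are stacked as the channels of a 2D image fed to a ResNet18 embedder trained with a pairwise contrastive loss; the embedding size, margin, and exact loss form are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class FingerprintEncoder(nn.Module):
    """ResNet18 backbone embedding a 3-channel map (thickness, curvature, depth)."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)  # unit-norm embeddings

def contrastive_loss(z1, z2, same_subject, margin=0.5):
    """Pairwise loss: pull scans of the same infant together, push others apart."""
    d = (z1 - z2).pow(2).sum(1)                      # squared Euclidean distance
    pos = same_subject * d
    neg = (1 - same_subject) * F.relu(margin - d.sqrt()).pow(2)
    return (pos + neg).mean()

def identify(probe, gallery, gallery_ids, k=5):
    """Top-k identification: nearest gallery embeddings by cosine similarity."""
    sims = probe @ gallery.T                         # embeddings are unit-norm
    topk = sims.topk(k, dim=1).indices
    return gallery_ids[topk]

# toy usage with random tensors standing in for mapped cortical feature images
enc = FingerprintEncoder()
x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,)).float()                # 1 = same infant
loss = contrastive_loss(enc(x1), enc(x2), y)
loss.backward()
```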

PMID:40146450 | DOI:10.1007/s11682-025-00998-8

Categories: Literature Watch

CR-deal: Explainable Neural Network for circRNA-RBP Binding Site Recognition and Interpretation

Thu, 2025-03-27 06:00

Interdiscip Sci. 2025 Mar 27. doi: 10.1007/s12539-025-00694-7. Online ahead of print.

ABSTRACT

circRNAs are single-stranded non-coding RNA molecules whose distinctive feature is a closed circular structure. The interaction between circRNAs and RNA-binding proteins (RBPs) plays a key role in biological function and is crucial for studying post-transcriptional regulatory mechanisms. Genome-wide circRNA binding event data obtained by cross-linking immunoprecipitation sequencing provide a foundation for building efficient computational prediction methods. However, although machine learning techniques have been applied to predict circRNA-RBP interaction sites, existing methods still have room for improvement in accuracy and lack interpretability. We propose CR-deal, an interpretable joint deep learning network that predicts circRNA-RBP binding sites from genome-wide circRNA data. CR-deal utilizes a graph attention network to unify sequence and structural features in the same view, using structural features more effectively to improve accuracy. Through integrated gradient feature attribution, it can infer marker genes in the binding site and thereby functional structural regions (see the sketch below). We benchmarked CR-deal on 37 circRNA datasets and 7 lncRNA datasets, and demonstrated its interpretability by discovering functional structural regions in 5 circRNA datasets. We believe CR-deal can help researchers gain a deeper understanding of the functions and mechanisms of circRNAs in living organisms and their critical role in the occurrence and development of diseases. The source code of CR-deal is freely available at https://github.com/liuliwei1980/CR .
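
The attribution step can be sketched generically. The code below implements standard integrated gradients (Sundararajan et al., 2017) over a toy one-hot RNA sequence model; ToyBindingModel is a hypothetical stand-in, not the CR-deal architecture.

```python
import torch
import torch.nn as nn

class ToyBindingModel(nn.Module):
    """Stand-in sequence model: Conv1d over one-hot RNA, one binding-site logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 16, 7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):          # x: (batch, 4, length) one-hot
        return self.net(x)

def integrated_gradients(model, x, steps=50):
    """IG attribution: (x - baseline) times the average gradient along the
    straight-line path from an all-zero baseline to the input."""
    baseline = torch.zeros_like(x)
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        xi = (baseline + alpha * (x - baseline)).requires_grad_(True)
        model(xi).sum().backward()
        total_grad += xi.grad
    return (x - baseline) * total_grad / steps   # per-nucleotide attributions

model = ToyBindingModel()
seq = torch.eye(4)[torch.randint(0, 4, (1, 101))].transpose(1, 2)  # random one-hot
attr = integrated_gradients(model, seq)
print(attr.sum(1))   # positional importance along the sequence
```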

PMID:40146403 | DOI:10.1007/s12539-025-00694-7

Categories: Literature Watch

A novel deep learning radiopathomics model for predicting carcinogenesis promotor cyclooxygenase-2 expression in common bile duct in children with pancreaticobiliary maljunction: a multicenter study

Thu, 2025-03-27 06:00

Insights Imaging. 2025 Mar 27;16(1):74. doi: 10.1186/s13244-025-01951-5.

ABSTRACT

OBJECTIVES: To develop and validate a deep learning radiopathomics model (DLRPM) integrating radiological and pathological imaging data to predict biliary cyclooxygenase-2 (COX-2) expression in children with pancreaticobiliary maljunction (PBM), and to compare its performance with single-modality radiomics, deep learning radiomics (DLR), and pathomics models.

METHODS: This retrospective study included 219 PBM patients, divided into a training set (n = 104; median age, 2.8 years; 75.0% female) and an internal test set (n = 71; median age, 2.2 years; 83.1% female) from center I, and an external test set (n = 44; median age, 3.4 years; 65.9% female) from center II. Biliary COX-2 expression was detected using immunohistochemistry. Radiomics and DLR features were extracted from portal venous-phase CT images, and pathomics features from H&E-stained histopathological slides, to build individual single-modality models. These were then integrated to develop the DLRPM, combining the three predictive signatures. Model performance was evaluated using the AUC, the net reclassification index (NRI, which assesses improvement in correct classification), and the integrated discrimination improvement (IDI).
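
For readers unfamiliar with the two reclassification metrics, a minimal sketch of the category-free NRI and the IDI is shown below; the toy probabilities are synthetic and bear no relation to the study's data.

```python
import numpy as np

def idi(p_old, p_new, y):
    """Integrated discrimination improvement: gain in mean predicted risk for
    events minus the gain for non-events."""
    events, nonevents = y == 1, y == 0
    return ((p_new[events].mean() - p_old[events].mean())
            - (p_new[nonevents].mean() - p_old[nonevents].mean()))

def nri(p_old, p_new, y):
    """Category-free (continuous) NRI: net proportion of events with increased
    risk plus net proportion of non-events with decreased risk."""
    up, down = p_new > p_old, p_new < p_old
    events, nonevents = y == 1, y == 0
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents

# toy example: the "new" model shifts risk estimates in the right direction
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
p_old = np.clip(0.5 + 0.1 * (y - 0.5) + rng.normal(0, 0.2, 200), 0.01, 0.99)
p_new = np.clip(p_old + 0.1 * (2 * y - 1) + rng.normal(0, 0.05, 200), 0.01, 0.99)
print(f"IDI = {idi(p_old, p_new, y):.3f}, NRI = {nri(p_old, p_new, y):.3f}")
```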

RESULTS: The DLRPM demonstrated the highest performance, with AUCs of 0.851 (95% CI, 0.759-0.942) in the internal test set and 0.841 (95% CI, 0.721-0.960) in the external test set. In comparison, AUCs for the radiomics, DLR, and pathomics models were 0.532-0.602, 0.658-0.660, and 0.787-0.805, respectively. The DLRPM significantly outperformed all three single-modality models, as demonstrated by the NRI and IDI tests (all p < 0.05).

CONCLUSION: The multimodal DLRPM could accurately and robustly predict COX-2 expression, facilitating risk stratification and personalized postoperative management in PBM. However, prospective multicenter studies with larger cohorts are needed to further validate its generalizability.

CRITICAL RELEVANCE STATEMENT: Our proposed deep learning radiopathomics model, integrating CT and histopathological images, provides a novel and cost-effective approach to accurately predict biliary cyclooxygenase-2 expression, potentially advancing individualized risk stratification and improving long-term outcomes for pediatric patients with pancreaticobiliary maljunction.

KEY POINTS: Predicting biliary COX-2 expression in pancreaticobiliary maljunction (PBM) is critical but challenging. A deep learning radiopathomics model achieved high predictive accuracy for COX-2. The model supports patient stratification and personalized postoperative management in PBM.

PMID:40146354 | DOI:10.1186/s13244-025-01951-5

Categories: Literature Watch

Anomaly Detection in Retinal OCT Images With Deep Learning-Based Knowledge Distillation

Thu, 2025-03-27 06:00

Transl Vis Sci Technol. 2025 Mar 3;14(3):26. doi: 10.1167/tvst.14.3.26.

ABSTRACT

PURPOSE: The purpose of this study was to develop a robust and general purpose artificial intelligence (AI) system that allows the identification of retinal optical coherence tomography (OCT) volumes with pathomorphological manifestations not present in normal eyes in screening programs and large retrospective studies.

METHODS: An unsupervised deep learning approach for anomaly detection, which screens retinal OCTs for any pathomorphological manifestation via Teacher-Student knowledge distillation, was developed. The system is trained only on normal cases, without any additional manual labeling. At test time, it scores how anomalous a sample is and produces localized anomaly maps highlighting regions of interest in a B-scan. Fovea-centered OCT scans acquired with Spectralis (Heidelberg Engineering) were considered. A total of 3358 patients were used for development and testing. Detection performance was evaluated in a large cohort with different pathologies, including diabetic macular edema (DME) and the multiple stages of age-related macular degeneration (AMD), and on external public datasets with various disease biomarkers.
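
A minimal sketch of feature-matching Teacher-Student distillation for anomaly detection in the spirit of the approach described: a frozen ImageNet-pretrained teacher, a student trained on normal scans to regress the teacher's features, and an anomaly map from their discrepancy. The backbone, tapped layers, and scoring rule are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights
from torchvision.models.feature_extraction import create_feature_extractor

LAYERS = ["layer1", "layer2", "layer3"]          # multi-scale feature taps

def make_nets():
    teacher = create_feature_extractor(
        resnet18(weights=ResNet18_Weights.DEFAULT), LAYERS).eval()
    student = create_feature_extractor(resnet18(weights=None), LAYERS)
    for p in teacher.parameters():
        p.requires_grad_(False)                  # teacher stays frozen
    return teacher, student

def distill_loss(t_feats, s_feats):
    """Student regresses normalized teacher features on normal B-scans only."""
    loss = 0.0
    for k in LAYERS:
        t = F.normalize(t_feats[k], dim=1)
        s = F.normalize(s_feats[k], dim=1)
        loss = loss + (t - s).pow(2).sum(1).mean()
    return loss

def anomaly_map(teacher, student, x):
    """Per-pixel anomaly score: teacher-student feature discrepancy, upsampled."""
    with torch.no_grad():
        t_feats, s_feats = teacher(x), student(x)
    maps = []
    for k in LAYERS:
        t = F.normalize(t_feats[k], dim=1)
        s = F.normalize(s_feats[k], dim=1)
        d = (t - s).pow(2).sum(1, keepdim=True)
        maps.append(F.interpolate(d, size=x.shape[-2:], mode="bilinear",
                                  align_corners=False))
    return torch.stack(maps).mean(0)             # average across scales

teacher, student = make_nets()
x = torch.randn(2, 3, 256, 256)                  # stand-in for OCT B-scans
loss = distill_loss(teacher(x), student(x))      # training step on normals
score_map = anomaly_map(teacher, student, x)
print(score_map.shape, score_map.amax(dim=(1, 2, 3)))  # scan score = max
```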

RESULTS: The volume-wise anomaly detection receiver operating characteristic (ROC) area under the curve (AUC) was 0.94 ± 0.05 in the test set. Pathological B-scan detection on external datasets varied between 0.81 and 0.87 AUC. Qualitatively, the derived anomaly maps pointed toward diagnostically relevant regions. The behavior of the system across the datasets was similar and consistent.

CONCLUSIONS: Anomaly detection constitutes a valid complement to supervised systems aimed at improving the success of vision preservation and eye care, and is an important step toward more efficient and generalizable screening tools.

TRANSLATIONAL RELEVANCE: Deep learning approaches can enable an automated and objective screening of a wide range of pathological retinal conditions that deviate from normal appearance.

PMID:40146150 | DOI:10.1167/tvst.14.3.26

Categories: Literature Watch

Explainable Deep Multilevel Attention Learning for Predicting Protein Carbonylation Sites

Thu, 2025-03-27 06:00

Adv Sci (Weinh). 2025 Mar 27:e2500581. doi: 10.1002/advs.202500581. Online ahead of print.

ABSTRACT

Protein carbonylation refers to the covalent modification of proteins through the attachment of carbonyl groups arising from oxidative stress. This modification is biologically significant, as it can alter protein function, signaling cascades, and cellular homeostasis. Accurate prediction of carbonylation sites offers valuable insights into the mechanisms underlying protein carbonylation and the pathogenesis of related diseases. Notably, carbonylation sites and ligand interaction sites, both functional sites, exhibit numerous similarities, and a survey of the field reveals that current computational approaches tend to make excessive cross-predictions for ligand interaction sites. To tackle this unresolved challenge, SCANS (selective carbonylation sites), a novel deep learning-based framework, is introduced. SCANS employs a multilevel attention strategy to capture both local (segment-level) and global (protein-level) features, utilizes a tailored loss function to penalize cross-predictions (residue-level), and applies transfer learning to augment the specificity of the overall network by leveraging knowledge from a pretrained model. These designs boost predictive performance, and SCANS statistically significantly outperforms current methods. In particular, results on the benchmark test dataset demonstrate that SCANS consistently achieves low false positive rates, including low rates of cross-predictions. Furthermore, motif analyses and interpretations are conducted to provide novel insights into protein carbonylation sites from various perspectives.
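
The abstract describes the tailored loss only at a high level; one plausible formulation is sketched below, adding to the usual binary cross-entropy a penalty on confident carbonylation predictions at residues annotated only as ligand-interaction sites. The penalty form and the weight lam are assumptions, not the published formula.

```python
import torch
import torch.nn.functional as F

def carbonylation_loss(logits, carb_labels, ligand_mask, lam=0.5):
    """BCE on carbonylation labels plus a residue-level penalty on positive
    predictions at ligand-interaction-only sites (one plausible reading of
    the paper's cross-prediction penalty; lam is an assumed weight)."""
    bce = F.binary_cross_entropy_with_logits(logits, carb_labels)
    p = torch.sigmoid(logits)
    # penalize confidence on residues that bind ligands but are not carbonylated
    cross = ligand_mask * (1 - carb_labels) * p
    penalty = cross.sum() / ligand_mask.sum().clamp(min=1)
    return bce + lam * penalty

logits = torch.randn(4, 300, requires_grad=True)   # per-residue site scores
carb = torch.randint(0, 2, (4, 300)).float()       # carbonylation labels
ligand = torch.randint(0, 2, (4, 300)).float()     # ligand-site annotations
carbonylation_loss(logits, carb, ligand).backward()
```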

PMID:40145846 | DOI:10.1002/advs.202500581

Categories: Literature Watch

Approach and surgical management of epiretinal membrane

Thu, 2025-03-27 06:00

Curr Opin Ophthalmol. 2025 May 1;36(3):205-209. doi: 10.1097/ICU.0000000000001135. Epub 2025 Mar 3.

ABSTRACT

PURPOSE OF REVIEW: Epiretinal membrane (ERM) surgery has undergone significant investigation over the last 2 years, including assessment of novel surgical techniques and of the necessity of internal limiting membrane (ILM) peeling. This review highlights the latest literature on ERM surgery, from unique ERM profiles to clinical trials of surgical approach.

RECENT FINDINGS: The summative literature highlights that peeling the ILM may reduce recurrence compared with peeling the ERM alone; however, these recurrences tend to be mild and not visually significant. Optical coherence tomography (OCT) has been leveraged preoperatively, intraoperatively, and postoperatively, both to enrich knowledge of risk factors for worse visual outcomes and to build deep learning models able to predict the anatomic outcome of ERM surgery from the preoperative OCT. There is no significant difference in outcomes between sequential and concurrent ERM and cataract surgery. In uveitis evaluations related to ERM, posterior and intermediate uveitis were most associated with ERM, while in pediatric ERM the extent of diffuse central ERM correlated with postoperative visual improvement.

SUMMARY: The latest ERM research has richly expanded the literature, allowing surgeons to better predict visual improvements postoperatively. This includes using OCT imaging biomarkers, but there remains a litany of unresolved questions about best surgical practices that are actively undergoing assessment.

PMID:40145317 | DOI:10.1097/ICU.0000000000001135

Categories: Literature Watch

A data-driven approach to turmeric disease detection: Dataset for plant condition classification

Thu, 2025-03-27 06:00

Data Brief. 2025 Feb 27;59:111435. doi: 10.1016/j.dib.2025.111435. eCollection 2025 Apr.

ABSTRACT

Turmeric (Curcuma longa) is an economically and medicinally important crop. However, it often suffers from diseases such as rhizome disease, leaf blotch, and leaf drying. Controlling these diseases requires early and accurate diagnosis to reduce losses and help farmers adopt sustainable farming methods. Conventional diagnosis involves visual examination of symptoms, which is laborious, subjective, and impractical over large areas. This paper proposes a new dataset of 1037 original and 4628 augmented images of turmeric plants representing five classes: healthy leaf, dry leaf, leaf blotch, rhizome disease roots, and rhizome healthy roots. The dataset was pre-processed to enhance its suitability for deep learning by resizing and cleaning the images and augmenting them through flipping, rotation, and brightness adjustment. Turmeric plant disease classification was conducted using the Inception-v3 model, attaining an accuracy of 97.36% with data augmentation, compared to 95.71% without. Key performance metrics, including precision, recall, and F1-score, establish the efficacy and robustness of the model. This work demonstrates the potential of AI-aided solutions for precision farming and sustainable crop production in agricultural disease management. The publicly available dataset and the reported results are expected to attract further research on AI-driven agriculture.
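
A sketch of the described training setup, fine-tuning Inception-v3 on the five classes with the named augmentations (flipping, rotation, brightness); the directory layout, learning rate, and auxiliary-loss weight are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import inception_v3, Inception_V3_Weights

# augmentations named in the dataset description: flip, rotation, brightness
train_tf = transforms.Compose([
    transforms.Resize((299, 299)),            # Inception-v3 input size
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ColorJitter(brightness=0.3),
    transforms.ToTensor(),
])

# assumed layout: turmeric/train/<class_name>/*.jpg with the five listed classes
train_ds = datasets.ImageFolder("turmeric/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = inception_v3(weights=Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)          # five plant conditions
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 5)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for x, y in loader:
    out = model(x)          # train mode returns (logits, aux_logits)
    loss = loss_fn(out.logits, y) + 0.4 * loss_fn(out.aux_logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```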

PMID:40144898 | PMC:PMC11937664 | DOI:10.1016/j.dib.2025.111435

Categories: Literature Watch

Feasibility study of single-image super-resolution scanning system based on deep learning for pathological diagnosis of oral epithelial dysplasia

Thu, 2025-03-27 06:00

Front Med (Lausanne). 2025 Mar 12;12:1550512. doi: 10.3389/fmed.2025.1550512. eCollection 2025.

ABSTRACT

This study aimed to evaluate the feasibility of applying deep learning combined with a super-resolution scanner to the digital scanning and diagnosis of oral epithelial dysplasia (OED) slides. A deep learning-based super-resolution digital slide scanning system was built and trained using 40 pathological slides of oral epithelial tissue. Two hundred slides with definite OED diagnoses were scanned into digital slides by the DS30R and Nikon scanners, and the scanner parameters were recorded for comparison. With diagnosis under a microscope as the gold standard, the sensitivity and specificity of OED pathological feature recognition by the same pathologist reading images from the different scanners were evaluated, and the consistency of whole-slide diagnoses across the imaging systems was assessed, to judge the feasibility of the system for the pathological diagnosis of OED. The DS30R scanner processes an entire slide in a single layer within 0.25 min, occupying 0.35 GB of storage; the Nikon scanner requires 15 min and 0.5 GB. After model training, the system produced markedly clearer images of oral epithelial tissue sections. Both the DS30R and Nikon scanners demonstrate high sensitivity and specificity for detecting structural features in OED pathological images; the DS30R additionally excels at identifying certain cellular features. Agreement in whole-slide diagnostic conclusions by the same pathologist across imaging systems was exceptionally high, with kappa values of 0.969 for DS30R versus optical microscope and 0.979 for DS30R versus Nikon versus optical microscope. The deep learning-based super-resolution imaging system performs well: it preserves the diagnostic information of OED while addressing the shortcomings of existing digital scanners, such as slow imaging, large data volumes, and difficulty of rapid transmission and sharing. These high-quality super-resolution images lay a solid foundation for the future adoption of artificial intelligence (AI) and will aid AI in the accurate diagnosis of oral potentially malignant disorders.
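
The abstract does not detail the network, so the sketch below shows a generic single-image super-resolution baseline (an SRCNN-style three-layer CNN trained on paired low/high-resolution patches) to illustrate the class of model involved; it is not the DS30R system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Classic three-layer super-resolution CNN (Dong et al., 2014):
    feature extraction -> non-linear mapping -> reconstruction."""
    def __init__(self):
        super().__init__()
        self.extract = nn.Conv2d(3, 64, 9, padding=4)
        self.map = nn.Conv2d(64, 32, 1)
        self.reconstruct = nn.Conv2d(32, 3, 5, padding=2)

    def forward(self, lr, scale=2):
        # upsample the fast low-resolution scan, then restore fine detail
        x = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

model = SRCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# toy training step on paired patches (fast low-res scan vs. reference high-res)
lr_patch, hr_patch = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 128, 128)
loss = F.l1_loss(model(lr_patch), hr_patch)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```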

PMID:40144879 | PMC:PMC11936936 | DOI:10.3389/fmed.2025.1550512

Categories: Literature Watch

Multi-objective RGB-D fusion network for non-destructive strawberry trait assessment

Thu, 2025-03-27 06:00

Front Plant Sci. 2025 Mar 12;16:1564301. doi: 10.3389/fpls.2025.1564301. eCollection 2025.

ABSTRACT

Growing consumer demand for high-quality strawberries has highlighted the need for accurate, efficient, and non-destructive methods to assess key postharvest quality traits, such as weight, size uniformity, and quantity. This study proposes a multi-objective learning algorithm that leverages RGB-D multimodal information to estimate these quality metrics. The algorithm uses a fusion expert network architecture that maximizes the use of multimodal features while preserving the distinct details of each modality, and a novel Heritable Loss function is implemented to reduce redundancy and enhance model performance. Experimental results show coefficient of determination (R²) values of 0.94, 0.90, and 0.95 for weight, size uniformity, and quantity, respectively. Ablation studies demonstrate the advantage of the architecture in multimodal, multi-task prediction accuracy. Compared with single-modality models, non-fusion branch networks, and attention-enhanced fusion models, our approach achieves stronger performance across multi-task learning scenarios, providing more precise data for trait assessment and precision strawberry applications.
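
A rough sketch of a gated fusion-expert design with three trait heads is given below; the gating scheme and the plain multi-task MSE loss are assumptions, and the paper's Heritable Loss is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def encoder(in_ch):
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(in_ch, 64, 7, stride=2, padding=3, bias=False)
    net.fc = nn.Identity()          # expose the 512-d feature vector
    return net

class RGBDFusionNet(nn.Module):
    """Two modality experts fused by a learned gate; three trait heads."""
    def __init__(self):
        super().__init__()
        self.rgb, self.depth = encoder(3), encoder(1)
        self.gate = nn.Sequential(nn.Linear(1024, 512), nn.Sigmoid())
        self.head_weight = nn.Linear(1024, 1)       # berry weight
        self.head_uniform = nn.Linear(1024, 1)      # size uniformity
        self.head_count = nn.Linear(1024, 1)        # berry quantity

    def forward(self, rgb, depth):
        fr, fd = self.rgb(rgb), self.depth(depth)
        g = self.gate(torch.cat([fr, fd], 1))       # per-feature modality gate
        fused = torch.cat([g * fr, (1 - g) * fd], 1)  # keep modality detail
        return (self.head_weight(fused), self.head_uniform(fused),
                self.head_count(fused))

model = RGBDFusionNet()
rgb, depth = torch.rand(4, 3, 224, 224), torch.rand(4, 1, 224, 224)
preds = model(rgb, depth)
targets = [torch.rand(4, 1) for _ in range(3)]
loss = sum(nn.functional.mse_loss(p, t) for p, t in zip(preds, targets))
loss.backward()
```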

PMID:40144753 | PMC:PMC11937088 | DOI:10.3389/fpls.2025.1564301

Categories: Literature Watch

YO-AFD: an improved YOLOv8-based deep learning approach for rapid and accurate apple flower detection

Thu, 2025-03-27 06:00

Front Plant Sci. 2025 Mar 12;16:1541266. doi: 10.3389/fpls.2025.1541266. eCollection 2025.

ABSTRACT

The timely and accurate detection of apple flowers is crucial for assessing the growth status of fruit trees, predicting peak blooming dates, and estimating apple yields early. However, challenges such as variable lighting, complex growth environments, occlusion, flower clustering, and significant morphological variation impede precise detection. To overcome these challenges, YO-AFD, an improved YOLOv8-based method for apple flower detection, is proposed. First, to enable adaptive focus on features across different scales, a new attention module, ISAT, integrating the Inverted Residual Mobile Block (IRMB) with the Spatial and Channel Synergistic Attention (SCSA) module, was designed. This module was incorporated into the C2f module within the network's neck, forming the C2f-IS module, to enhance the model's ability to extract critical features and fuse features across scales. Additionally, to balance attention between simple and challenging targets, a regression loss based on Focaler Intersection over Union (FIoU) was used, as sketched below. Experimental results show that the YO-AFD model accurately detected both simple and challenging apple flowers, including small, occluded, and morphologically diverse flowers, achieving an F1 score of 88.6%, mAP50 of 94.1%, and mAP50-95 of 55.3%, with a model size of 6.5 MB and an average detection speed of 5.3 ms per image. YO-AFD outperforms five comparative models, demonstrating its effectiveness and accuracy in real-time apple flower detection. With its lightweight design and high accuracy, the method offers a promising solution for portable apple flower detection systems.
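
A sketch of the FIoU idea, following the Focaler-IoU formulation of linearly remapping IoU onto an interval [d, u] before taking the loss; the interval endpoints here are illustrative, not the paper's settings.

```python
import torch

def box_iou(a, b):
    """Element-wise IoU of boxes in (x1, y1, x2, y2) format."""
    x1 = torch.max(a[:, 0], b[:, 0]); y1 = torch.max(a[:, 1], b[:, 1])
    x2 = torch.min(a[:, 2], b[:, 2]); y2 = torch.min(a[:, 3], b[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter + 1e-7)

def focaler_iou_loss(pred, target, d=0.0, u=0.95):
    """Focaler-IoU: linearly remap IoU on [d, u] so the regression loss can be
    biased toward easy (raise d) or hard (lower u) samples; loss = 1 - IoU_f."""
    iou = box_iou(pred, target)
    iou_focaler = ((iou - d) / (u - d)).clamp(0.0, 1.0)
    return (1.0 - iou_focaler).mean()

pred = torch.tensor([[10., 10., 50., 50.], [0., 0., 20., 20.]])
gt = torch.tensor([[12., 12., 48., 52.], [0., 0., 40., 40.]])
print(focaler_iou_loss(pred, gt))
```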

PMID:40144752 | PMC:PMC11936985 | DOI:10.3389/fpls.2025.1541266

Categories: Literature Watch

Development and validation of a deep learning-based automated computed tomography image segmentation and diagnostic model for infectious hydronephrosis: a retrospective multicentre cohort study

Thu, 2025-03-27 06:00

EClinicalMedicine. 2025 Mar 13;82:103146. doi: 10.1016/j.eclinm.2025.103146. eCollection 2025 Apr.

ABSTRACT

BACKGROUND: Accurately diagnosing whether hydronephrosis is complicated by infection is crucial for guiding appropriate clinical treatment. This study aimed to develop a fully automated segmentation and non-invasive diagnostic model for infectious hydronephrosis (IH) using CT images and a deep learning algorithm.

METHODS: A retrospective analysis was performed of clinical information and annotated cross-sectional CT images from patients diagnosed with hydronephrosis between June 2, 2019 and June 30, 2024 at Sun Yat-Sen Memorial Hospital (SYSMH), Heyuan People's Hospital (HPH), and Ganzhou People's Hospital (GPH). Data on cases of hydronephrosis were extracted from the hospitals' medical record systems. The SYSMH cohort was randomly divided into the SYSMH training set (n = 279) and the SYSMH validation set (n = 93) in a 3:1 ratio; the HPH and GPH cohorts served as external validation sets. A hydronephrosis segmentation model (HRSM) was developed using an improved U-Net, with segmentation accuracy evaluated by the Dice Similarity Coefficient (DSC). A 3D convolutional neural network was then used to establish an IH risk score (IHRS) from the segmented images. Independent clinical risk factors for IH were screened by logistic regression. An IH diagnostic model (IHDM) incorporating the IHRS and clinical data was then developed using five machine learning algorithms (Random Forest, K-Nearest Neighbors, Decision Tree, Logistic Regression, and Support Vector Machine). The diagnostic performance of the IHDM was assessed with the Receiver Operating Characteristic (ROC) curve.
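
Two of the building blocks are easy to make concrete: the Dice similarity coefficient used to grade the segmentation, and an SVM over the IHRS plus clinical variables. The sketch below uses toy masks and a synthetic feature table; the feature coding and all numbers are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

def dice(pred_mask, true_mask):
    """Dice similarity coefficient for grading a segmentation mask."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    return 2.0 * inter / (pred.sum() + true.sum() + 1e-7)

# toy masks: DSC of a slightly shifted square
a = np.zeros((64, 64)); a[10:40, 10:40] = 1
b = np.zeros((64, 64)); b[12:42, 12:42] = 1
print(f"DSC = {dice(a, b):.3f}")

# toy diagnostic model: columns = [IHRS, neutrophil count, recent fever (0/1)]
rng = np.random.default_rng(1)
X = np.column_stack([rng.random(200), rng.normal(6, 2, 200),
                     rng.integers(0, 2, 200)])
y = (X[:, 0] + 0.1 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 0.4, 200) > 1.3).astype(int)
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X[:150], y[:150])
print("AUC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))
```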

FINDINGS: The study initially included 1464 potentially eligible cases, of which 864 qualified after preliminary examination. Ultimately, 615 patients (363 female, 252 male) with hydronephrosis (5876 annotated cross-sectional CT images) were included: 372 from SYSMH, 123 from HPH, and 120 from GPH. Based on bacterial culture of percutaneous nephrostomy drainage fluid, 291 cases were classified as IH and 324 as non-IH. The DSCs for the HRSM in the internal and two external validation cohorts were 0.922 (95% CI: 0.895, 0.949), 0.906 (95% CI: 0.869, 0.943), and 0.883 (95% CI: 0.857, 0.909), respectively, indicating high segmentation accuracy. The IHRS achieved a prediction accuracy of 78.5% (95% CI: 78.1%-78.9%) in the internal validation set. The IHDM developed with the Support Vector Machine (SVM), combining the blood neutrophil count, a history of fever within the preceding week, and the IHRS, performed best, demonstrating areas under the ROC curve of 0.919 (95% CI: 0.859-0.980), 0.902 (95% CI: 0.849-0.955), and 0.863 (95% CI: 0.800-0.926) in the three cohorts, respectively.

INTERPRETATION: The automated HRSM demonstrated excellent segmentation performance for hydronephrosis, while the non-invasive IHDM provided significant diagnostic efficacy, facilitating infection assessment in patients with hydronephrosis. However, more diverse real-world multicenter validation studies are needed to verify the robustness of the model before it can be incorporated into clinical practice.

FUNDING: The Key-Area Research and Development Program of Guangdong Province, and the National Natural Science Foundation of China.

PMID:40144691 | PMC:PMC11938262 | DOI:10.1016/j.eclinm.2025.103146

Categories: Literature Watch

Integration of longitudinal load-bearing tissue MRI radiomics and neural network to predict knee osteoarthritis incidence

Thu, 2025-03-27 06:00

J Orthop Translat. 2025 Mar 15;51:187-197. doi: 10.1016/j.jot.2025.01.007. eCollection 2025 Mar.

ABSTRACT

BACKGROUND: Degradation of load-bearing structures is crucial in knee osteoarthritis (KOA) progression, yet few prediction models use load-bearing tissue radiomics to predict incident radiographic (structural) KOA.

PURPOSE: We aimed to develop and test a Load-Bearing Tissue plus Clinical variable Radiomic Model (LBTC-RM) to predict incident radiographic KOA.

STUDY DESIGN: Risk prediction study.

METHODS: A total of 700 knees without radiographic KOA at baseline were included from the Osteoarthritis Initiative cohort, and 2164 knee MRIs acquired during 4-year follow-up were selected. The LBTC-RM, which integrated MRI features of the meniscus, femur, tibia, and femorotibial cartilage with clinical variables, was developed in the total development cohort (n = 1082; 542 cases vs. 540 controls) using a neural network algorithm. The final predictive model was tested in the total test cohort (n = 1082; 534 cases vs. 548 controls), which integrated data from five visits: baseline (n = 353; 191 cases vs. 162 controls), 3 years prior to KOA (n = 46; 19 cases vs. 27 controls), 2 years prior to KOA (n = 143; 77 cases vs. 66 controls), 1 year prior to KOA (n = 220; 105 cases vs. 115 controls), and at the KOA incident visit (n = 320; 156 cases vs. 164 controls).
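
A minimal sketch of the kind of neural network classifier described, a small feed-forward net over concatenated radiomic scores and clinical variables; the layer sizes, dropout, and feature count are assumptions.

```python
import torch
import torch.nn as nn

class KOARiskNet(nn.Module):
    """Small feed-forward net over concatenated radiomic scores and clinical
    variables, emitting a logit for incident radiographic KOA."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x).squeeze(1)         # logit; apply sigmoid for risk

model = KOARiskNet(n_features=32)             # e.g., 28 radiomic + 4 clinical
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(64, 32)                       # standardized feature vectors
y = torch.randint(0, 2, (64,)).float()        # incident KOA within follow-up
for _ in range(5):                            # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward(); opt.step()
print(torch.sigmoid(model(x))[:5])            # predicted risk scores
```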

RESULTS: In the total test cohort, the LBTC-RM predicted incident KOA with an AUC (95% CI) of 0.85 (0.82-0.87). With LBTC-RM assistance, resident physicians' performance in KOA prediction improved, with specificity, sensitivity, and accuracy increasing from 50%, 60%, and 55% to 72%, 73%, and 72%, respectively. The LBTC-RM output indicated an increased KOA risk (OR: 20.6; 95% CI: 13.8-30.6; p < .001). Radiomic scores of load-bearing tissue raised KOA risk (ORs: 1.02-1.9) from 4 years prior to KOA, whereas the 3-dimensional feature score of the medial meniscus decreased the OR (0.99) of incident KOA at the visit when KOA was confirmed. The 2-dimensional feature score of the medial meniscus increased the ORs (1.1-1.2) of the KOA symptom score from 2 years prior to KOA.

CONCLUSIONS: We provide radiomic features of load-bearing tissue to improve KOA risk assessment and incident prediction. The model has potential clinical applicability in predicting incident KOA early, enabling physicians to identify high-risk patients before significant radiographic evidence appears. This can facilitate timely interventions and personalized management strategies, improving patient outcomes.

THE TRANSLATIONAL POTENTIAL OF THIS ARTICLE: This study presents a novel approach integrating longitudinal MRI-based radiomics and clinical variables to predict knee osteoarthritis (KOA) incidence using machine learning. By leveraging deep learning for auto-segmentation and machine learning for predictive modeling, this research provides a more interpretable and clinically applicable method for early KOA detection. The introduction of a Radiomics Score System enhances the potential for radiomics as a virtual image-based biopsy tool, facilitating non-invasive, personalized risk assessment for KOA patients. The findings support the translation of advanced imaging and AI-driven predictive models into clinical practice, aiding early diagnosis, personalized treatment planning, and risk stratification for KOA progression. This model has the potential to be integrated into routine musculoskeletal imaging workflows, optimizing early intervention strategies and resource allocation for high-risk populations. Future validation across diverse cohorts will further enhance its clinical utility and generalizability.

PMID:40144553 | PMC:PMC11937290 | DOI:10.1016/j.jot.2025.01.007

Categories: Literature Watch

A multi-modal deep learning solution for precise pneumonia diagnosis: the PneumoFusion-Net model

Thu, 2025-03-27 06:00

Front Physiol. 2025 Mar 12;16:1512835. doi: 10.3389/fphys.2025.1512835. eCollection 2025.

ABSTRACT

BACKGROUND: Pneumonia is one of the most important causes of morbidity and mortality worldwide. Bacterial and viral pneumonia share many clinical features, making diagnosis challenging. Traditional diagnostic methods rely mainly on radiological imaging and require considerable clinical experience, which can be inefficient and inconsistent. Deep learning for pneumonia classification across multiple modalities, especially approaches integrating multiple data types, has not been well explored.

METHODS: The study introduces PneumoFusion-Net, a deep learning-based multimodal framework that incorporates CT images, clinical text, numerical lab test results, and radiology reports for improved diagnosis. In the experiments, a dataset of 10,095 pneumonia CT images and associated clinical data was used, most of it for training and validation, with the remainder reserved as a held-out test set. Five-fold cross-validation was used to evaluate the model, calculating metrics including accuracy and F1-score.
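
One plausible late-fusion layout for the four input types is sketched below, with a CT encoder, a bag-of-embeddings text branch standing in for the clinical text and reports, and an MLP over lab values; the vocabulary size, feature dimensions, and fusion-by-concatenation choice are assumptions, not the published PneumoFusion-Net.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalPneumoniaNet(nn.Module):
    """Illustrative fusion of a CT encoder, a bag-of-embeddings text encoder
    (clinical notes + radiology reports), and an MLP over lab values."""
    def __init__(self, vocab=20000, n_labs=24, n_classes=2):
        super().__init__()
        self.img = resnet18(weights=None)
        self.img.fc = nn.Identity()                      # 512-d image feature
        self.text = nn.EmbeddingBag(vocab, 128, mode="mean")
        self.labs = nn.Sequential(nn.Linear(n_labs, 64), nn.ReLU())
        self.cls = nn.Sequential(nn.Linear(512 + 128 + 64, 128), nn.ReLU(),
                                 nn.Dropout(0.3), nn.Linear(128, n_classes))

    def forward(self, ct, token_ids, labs):
        z = torch.cat([self.img(ct), self.text(token_ids), self.labs(labs)],
                      dim=1)
        return self.cls(z)                               # bacterial vs. viral

model = MultimodalPneumoniaNet()
ct = torch.rand(4, 3, 224, 224)                          # CT slice (3-ch stack)
tokens = torch.randint(0, 20000, (4, 60))                # tokenized text, padded
labs = torch.randn(4, 24)                                # standardized lab panel
print(model(ct, tokens, labs).shape)                     # -> torch.Size([4, 2])
```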

RESULTS: PneumoFusion-Net achieved 98.96% classification accuracy with a 98% F1-score on the held-out test set and is highly effective at distinguishing bacterial from viral pneumonia, reducing misdiagnosis and improving consistency across datasets from multiple patients.

CONCLUSION: PneumoFusion-Net offers an effective and efficient approach to pneumonia classification by integrating diverse data sources, resulting in high diagnostic accuracy. Its potential for clinical integration could significantly reduce the burden of pneumonia diagnosis by providing radiologists and clinicians with a robust, automated diagnostic tool.

PMID:40144549 | PMC:PMC11937601 | DOI:10.3389/fphys.2025.1512835

Categories: Literature Watch

Multimodal diagnosis of Alzheimer's disease based on resting-state electroencephalography and structural magnetic resonance imaging

Thu, 2025-03-27 06:00

Front Physiol. 2025 Mar 12;16:1515881. doi: 10.3389/fphys.2025.1515881. eCollection 2025.

ABSTRACT

Multimodal diagnostic methods for Alzheimer's disease (AD) have demonstrated remarkable performance. However, electroencephalography (EEG) has been included in such multimodal studies relatively rarely. Moreover, most multimodal AD studies use convolutional neural networks (CNNs) to extract features from each modality and then perform fusion classification; this approach often lacks cross-modal collaboration and fails to effectively enhance feature representation. To address this issue and explore the collaborative relationships among modalities, this paper proposes a multimodal AD diagnosis model based on resting-state EEG and structural magnetic resonance imaging (sMRI). Specifically, this work designs dedicated feature extraction models for the EEG and sMRI modalities to enhance the extraction of modality-specific features, and develops a multimodal joint attention mechanism (MJA) to address the independence of the modalities. The MJA promotes cooperation between the two modalities, enhancing the representation ability of the multimodal fusion. Furthermore, a random forest classifier is introduced to strengthen classification. The diagnostic accuracy of the proposed model reaches 94.7%, a noteworthy accomplishment. To our knowledge, this is the first exploration of combining deep learning with EEG in multimodal AD diagnosis. This work also aims to strengthen the use of EEG in multimodal AD research, a promising direction for future advances in AD diagnosis.
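
The MJA is not specified in detail in the abstract; one plausible sketch is bidirectional cross-attention between EEG and sMRI token sequences, whose pooled output would then feed the random forest. Token counts and dimensions below are illustrative.

```python
import torch
import torch.nn as nn

class JointCrossAttention(nn.Module):
    """Bidirectional cross-attention: EEG tokens attend to sMRI tokens and vice
    versa, so each modality shapes the other's representation."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.eeg_to_mri = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mri_to_eeg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_e = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)

    def forward(self, eeg, mri):                # (batch, tokens, dim) each
        e, _ = self.eeg_to_mri(eeg, mri, mri)   # EEG queries, sMRI keys/values
        m, _ = self.mri_to_eeg(mri, eeg, eeg)
        e, m = self.norm_e(eeg + e), self.norm_m(mri + m)
        # pooled joint representation handed to a downstream classifier
        return torch.cat([e.mean(1), m.mean(1)], dim=1)

mja = JointCrossAttention()
eeg_tokens = torch.randn(8, 32, 128)            # e.g., per-channel EEG features
mri_tokens = torch.randn(8, 49, 128)            # e.g., sMRI patch features
fused = mja(eeg_tokens, mri_tokens)             # (8, 256), fed to random forest
print(fused.shape)
```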

PMID:40144547 | PMC:PMC11937600 | DOI:10.3389/fphys.2025.1515881

Categories: Literature Watch

Review of applications of deep learning in veterinary diagnostics and animal health

Thu, 2025-03-27 06:00

Front Vet Sci. 2025 Mar 12;12:1511522. doi: 10.3389/fvets.2025.1511522. eCollection 2025.

ABSTRACT

Deep learning (DL), a subfield of artificial intelligence (AI), involves the development of algorithms and models that simulate the problem-solving capabilities of the human mind. Sophisticated AI technology has garnered significant attention in recent years in the domain of veterinary medicine. This review provides a comprehensive overview of research dedicated to leveraging DL for diagnostic purposes within veterinary medicine. Our systematic review approach followed PRISMA guidelines, focusing on the intersection of DL and veterinary medicine, and identified 422 relevant research articles. After exporting titles and abstracts for screening, we narrowed our selection to 39 primary research articles directly applying DL to animal disease detection or management, excluding non-primary research, reviews, and unrelated AI studies. Key findings highlight an increase in the utilisation of DL models across various diagnostic areas from 2013 to 2024, including radiography (33% of the studies), cytology (33%), health record analysis (8%), MRI (8%), environmental data analysis (5%), photo/video imaging (5%), and ultrasound (5%). Over the past decade, radiographic imaging has emerged as the most impactful area. Various studies have demonstrated notable success in classifying primary thoracic lesions and cardiac disease from radiographs using DL models benchmarked against specialist veterinarians. Moreover, the technology has proven adept at recognising, counting, and classifying cell types in microscope slide images, demonstrating its versatility across veterinary diagnostic modalities. While deep learning shows promise in veterinary diagnostics, several challenges remain, including the need for large and diverse datasets, potential interpretability issues, and the importance of consulting experts throughout model development to ensure validity. A thorough understanding of these considerations is imperative for driving future research and development in the field. In addition, the potential future impacts of DL on veterinary diagnostics are discussed, exploring avenues for further refinement and expansion of DL applications in veterinary medicine, ultimately contributing to higher standards of care and improved health outcomes for animals as this technology continues to evolve.

PMID:40144529 | PMC:PMC11938132 | DOI:10.3389/fvets.2025.1511522

Categories: Literature Watch

SympCoughNet: symptom assisted audio-based COVID-19 detection

Thu, 2025-03-27 06:00

Front Digit Health. 2025 Mar 12;7:1551298. doi: 10.3389/fdgth.2025.1551298. eCollection 2025.

ABSTRACT

COVID-19 remains a significant global public health challenge. While nucleic acid tests, antigen tests, and CT imaging provide high accuracy, they face inefficiencies and limited accessibility, making rapid and convenient testing difficult. Recent studies have explored COVID-19 detection using acoustic health signals, such as cough and breathing sounds. However, most existing approaches focus solely on audio classification, often leading to suboptimal accuracy while neglecting valuable prior information, such as clinical symptoms. To address this limitation, we propose SympCoughNet, a deep learning-based COVID-19 audio classification network that integrates cough sounds with clinical symptom data. Our model employs symptom-encoded channel weighting to enhance feature processing, making it more attentive to symptom information. We also conducted an ablation study to assess the impact of symptom integration by removing the symptom-attention mechanism and instead using symptoms as classification labels within a CNN-based architecture. We trained and evaluated SympCoughNet on the UK COVID-19 Vocal Audio Dataset. Our model demonstrated significant performance improvements over traditional audio-only approaches, achieving 89.30% accuracy, 94.74% AUROC, and 91.62% PR on the test set. The results confirm that incorporating symptom data enhances COVID-19 detection performance. Additionally, we found that incorrect symptom inputs could influence predictions. Our ablation study validated that even when symptoms are treated as classification labels, the network can still effectively leverage cough audio to infer symptom-related information.
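
A minimal sketch of symptom-encoded channel weighting: the symptom vector is encoded into per-channel gates (squeeze-excitation style) that rescale the cough-audio feature maps. The layer sizes and the single gated stage are assumptions, not the published network.

```python
import torch
import torch.nn as nn

class SymptomChannelGate(nn.Module):
    """Squeeze-excitation-style gate driven by the symptom vector rather than
    the feature map itself: symptoms become per-channel weights in [0, 1]."""
    def __init__(self, n_symptoms, channels):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(n_symptoms, channels // 2), nn.ReLU(),
            nn.Linear(channels // 2, channels), nn.Sigmoid())

    def forward(self, feat, symptoms):        # feat: (B, C, F, T) features
        w = self.encode(symptoms)             # (B, C) channel weights
        return feat * w[:, :, None, None]

class SympCoughNetSketch(nn.Module):
    """Minimal CNN over log-mel spectrograms with one symptom-gated stage."""
    def __init__(self, n_symptoms=10):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.gate = SymptomChannelGate(n_symptoms, 32)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, 2))

    def forward(self, spec, symptoms):
        return self.head(self.gate(self.conv(spec), symptoms))

model = SympCoughNetSketch()
spec = torch.randn(4, 1, 64, 200)               # batch of log-mel spectrograms
symptoms = torch.randint(0, 2, (4, 10)).float() # binary symptom indicators
print(model(spec, symptoms).shape)              # -> torch.Size([4, 2])
```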

PMID:40144457 | PMC:PMC11936986 | DOI:10.3389/fdgth.2025.1551298

Categories: Literature Watch

Comparative Evaluation of Deep Learning Models for Diagnosis of Helminth Infections

Wed, 2025-03-26 06:00

J Pers Med. 2025 Mar 20;15(3):121. doi: 10.3390/jpm15030121.

ABSTRACT

(1) Background: Helminth infections are a widespread global health concern, with ascariasis and taeniasis representing two of the most prevalent infestations. Traditional diagnostic methods, such as egg-based microscopy, are fraught with challenges, including subjectivity and low throughput, often leading to misdiagnosis. This study evaluates the efficacy of advanced deep learning models in accurately classifying Ascaris lumbricoides and Taenia saginata eggs in microscopic images, proposing a technologically enhanced approach for diagnostics in clinical settings. (2) Methods: Three state-of-the-art deep learning models, ConvNeXt Tiny, EfficientNet V2 S, and MobileNet V3 S, were considered. A diverse dataset comprising images of Ascaris eggs, Taenia eggs, and uninfected samples was used to train and validate these models in multiclass experiments. (3) Results: All models demonstrated high classification accuracy, with ConvNeXt Tiny achieving an F1-score of 98.6%, followed by MobileNet V3 S at 98.2% and EfficientNet V2 S at 97.5%. These results demonstrate the potential of deep learning to streamline and improve the diagnostic process for helminthic infections; models such as ConvNeXt Tiny, EfficientNet V2 S, and MobileNet V3 S show promise for efficient and accurate helminth egg classification, potentially enhancing the diagnostic workflow significantly. (4) Conclusion: The study demonstrates the feasibility of leveraging advanced computational techniques in parasitology and points toward a future where rapid, objective, and reliable diagnostics are standard.

PMID:40137437 | DOI:10.3390/jpm15030121

Categories: Literature Watch

Explainable Siamese Neural Networks for Detection of High Fall Risk Older Adults in the Community Based on Gait Analysis

Wed, 2025-03-26 06:00

J Funct Morphol Kinesiol. 2025 Feb 22;10(1):73. doi: 10.3390/jfmk10010073.

ABSTRACT

BACKGROUND/OBJECTIVES: Falls among older adults represent a significant public health concern, often leading to diminished quality of life and to serious injuries that escalate healthcare costs and may even prove fatal. Accurate fall risk prediction is therefore crucial for implementing timely preventive measures. However, to date there is no definitive metric for identifying individuals at high risk of falling. To address this, the present study proposes a novel approach that transforms biomechanical time-series data, derived from gait analysis, into visual representations to facilitate the application of deep learning (DL) methods for fall risk assessment.

METHODS: By leveraging convolutional neural networks (CNNs) and Siamese neural networks (SNNs), the proposed framework effectively addresses the challenges of limited datasets and delivers robust predictive capabilities.
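
A minimal sketch of an SNN for this setting, assuming the gait time series have already been rendered as 2D images: a shared CNN branch and the classic contrastive pair loss (Hadsell et al., 2006). The encoder design and margin are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitEncoder(nn.Module):
    """Shared CNN branch embedding an image-encoded gait time series."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 16, dim))

    def forward(self, x):
        return self.features(x)

def siamese_contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pull same-risk-class pairs together; push different-class pairs at
    least `margin` apart in embedding space."""
    d = F.pairwise_distance(z1, z2)
    return (same_class * d.pow(2)
            + (1 - same_class) * F.relu(margin - d).pow(2)).mean()

enc = GaitEncoder()
x1 = torch.randn(16, 1, 64, 64)         # gait features rendered as 2D images
x2 = torch.randn(16, 1, 64, 64)
y = torch.randint(0, 2, (16,)).float()  # 1 = same fall-risk class
loss = siamese_contrastive_loss(enc(x1), enc(x2), y)
loss.backward()
```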

RESULTS: Through the extraction of distinctive gait-related features and the generation of class-discriminative activation maps using Grad-CAM, the random forest (RF) machine learning (ML) model not only achieves commendable accuracy (83.29%) but also enhances explainability.

CONCLUSIONS: Ultimately, this study underscores the potential of advanced computational tools and machine learning algorithms to improve fall risk prediction, reduce healthcare burdens, and promote greater independence and well-being among older adults.

PMID:40137325 | DOI:10.3390/jfmk10010073

Categories: Literature Watch

Machine Learning for Human Activity Recognition: State-of-the-Art Techniques and Emerging Trends

Wed, 2025-03-26 06:00

J Imaging. 2025 Mar 20;11(3):91. doi: 10.3390/jimaging11030091.

ABSTRACT

Human activity recognition (HAR) has emerged as a transformative field with widespread applications, leveraging diverse sensor modalities to accurately identify and classify human activities. This paper provides a comprehensive review of HAR techniques, focusing on the integration of sensor-based, vision-based, and hybrid methodologies. It explores the strengths and limitations of commonly used modalities, such as RGB images/videos, depth sensors, motion capture systems, wearable devices, and emerging technologies like radar and Wi-Fi channel state information. The review also discusses traditional machine learning approaches, including supervised and unsupervised learning, alongside cutting-edge advancements in deep learning, such as convolutional and recurrent neural networks, attention mechanisms, and reinforcement learning frameworks. Despite significant progress, HAR still faces critical challenges, including handling environmental variability, ensuring model interpretability, and achieving high recognition accuracy in complex, real-world scenarios. Future research directions emphasise the need for improved multimodal sensor fusion, adaptive and personalised models, and the integration of edge computing for real-time analysis. Additionally, addressing ethical considerations, such as privacy and algorithmic fairness, remains a priority as HAR systems become more pervasive. This study highlights the evolving landscape of HAR and outlines strategies for future advancements that can enhance the reliability and applicability of HAR technologies in diverse domains.

PMID:40137203 | DOI:10.3390/jimaging11030091

Categories: Literature Watch
