Deep learning

DistAL: A Domain-Shift Active Learning Framework with Transferable Feature Learning for Lesion Detection

Mon, 2025-04-14 06:00

IEEE Trans Med Imaging. 2025 Apr 14;PP. doi: 10.1109/TMI.2025.3558861. Online ahead of print.

ABSTRACT

Deep learning has demonstrated exceptional performance in medical image analysis, but its effectiveness degrades significantly when applied to different medical centers due to domain shifts. Lesion detection, a critical task in medical imaging, is particularly impacted by this challenge due to the diversity and complexity of lesions, which can arise from different organs, diseases, imaging devices, and other factors. While collecting data and labels from target domains is a feasible solution, annotating medical images is often tedious and expensive, and requires professionals. To address this problem, we combine active learning with domain-invariant feature learning. We propose a Domain-shift Active Learning (DistAL) framework, which includes a transferable feature learning algorithm and a hybrid sample selection strategy. Feature learning incorporates contrastive-consistency training to learn discriminative and domain-invariant features. The sample selection strategy is called RUDY, which jointly considers Representativeness, Uncertainty, and DiversitY. Its goal is to select samples from the unlabeled target domain for cost-effective annotation. It first selects representative samples to deal with domain shift, as well as uncertain ones to improve class separability, and then leverages K-means++ initialization to remove redundant candidates and achieve diversity. We evaluate our method on the task of lesion detection. By selecting only 1.7% of the samples from the target domain to annotate, DistAL achieves performance comparable to the method trained with all target labels. It outperforms other AL methods in five experiments on eight datasets collected from different hospitals, using different imaging protocols, annotation conventions, and etiologies.
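
The abstract describes RUDY's diversity step only at a high level. As a rough illustration, K-means++ seeding picks mutually distant candidates in an embedding space; a minimal sketch, assuming plain D²-weighted seeding over pre-filtered candidate features (function name and inputs are ours, not the authors' code):

```python
import numpy as np

def kmeanspp_diverse_subset(features, k, rng=None):
    """Pick k mutually distant samples via K-means++ seeding.

    `features`: (n, d) array of candidate embeddings, assumed already
    filtered for representativeness and uncertainty (hypothetical helper).
    """
    rng = np.random.default_rng(rng)
    n = len(features)
    chosen = [int(rng.integers(n))]                  # first seed: uniform
    d2 = np.sum((features - features[chosen[0]]) ** 2, axis=1)
    for _ in range(k - 1):
        idx = int(rng.choice(n, p=d2 / d2.sum()))    # D^2 weighting
        chosen.append(idx)
        d2 = np.minimum(d2, np.sum((features - features[idx]) ** 2, axis=1))
    return chosen
```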

PMID:40227902 | DOI:10.1109/TMI.2025.3558861

ReorderBench: A Benchmark for Matrix Reordering

Mon, 2025-04-14 06:00

IEEE Trans Vis Comput Graph. 2025 Apr 14;PP. doi: 10.1109/TVCG.2025.3560345. Online ahead of print.

ABSTRACT

Matrix reordering permutes the rows and columns of a matrix to reveal meaningful visual patterns, such as blocks that represent clusters. A comprehensive collection of matrices, along with a scoring method for measuring the quality of visual patterns in these matrices, contributes to building a benchmark. This benchmark is essential for selecting or designing suitable reordering algorithms for revealing specific patterns. In this paper, we build a matrix-reordering benchmark, ReorderBench, with the goal of evaluating and improving matrix-reordering techniques. This is achieved by generating a large set of representative and diverse matrices and scoring these matrices with a convolution- and entropy-based method. Our benchmark contains 2,835,000 binary matrices and 5,670,000 continuous matrices, each generated to exhibit one of four visual patterns: block, off-diagonal block, star, or band, along with 450 real-world matrices featuring hybrid visual patterns. We demonstrate the usefulness of ReorderBench through three main applications in matrix reordering: 1) evaluating different reordering algorithms, 2) creating a unified scoring model to measure the visual patterns in any matrix, and 3) developing a deep learning model for matrix reordering.
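
The convolution-and-entropy scoring is summarized only briefly here; purely as an illustration of the idea (kernel, normalization, and score sign below are assumptions, not ReorderBench's published formula), one can convolve a matrix with a pattern template and reward a concentrated response:

```python
import numpy as np
from scipy.signal import convolve2d

def pattern_score(matrix, kernel):
    """Toy convolution-and-entropy score: a peaked (low-entropy)
    template response suggests a clear visual pattern."""
    response = np.abs(convolve2d(matrix, kernel, mode="valid")).ravel()
    p = response / response.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return -entropy            # higher = more concentrated pattern

block_kernel = np.ones((4, 4)) / 16.0       # template for a dense block
m = np.zeros((20, 20)); m[:8, :8] = 1.0     # toy matrix with one block
print(pattern_score(m, block_kernel))
```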

PMID:40227900 | DOI:10.1109/TVCG.2025.3560345

Tumor Bud Classification in Colorectal Cancer Using Attention-Based Deep Multiple Instance Learning and Domain-Specific Foundation Models

Mon, 2025-04-14 06:00

Cancers (Basel). 2025 Apr 7;17(7):1245. doi: 10.3390/cancers17071245.

ABSTRACT

BACKGROUND/OBJECTIVES: Identifying tumor budding (TB) in colorectal cancer (CRC) is vital for better prognostic assessment, as it may signify the initial stage of metastasis. Despite its importance, TB detection remains challenging due to the subjectivity of manual evaluation, particularly at high magnification levels, leading to inconsistencies in prognosis. To address these issues, we propose an automated system for TB classification using deep learning.

METHODS: We trained a deep learning model to identify TBs through weakly supervised learning by aggregating positive and negative bags from the tumor invasive front. We assessed various foundation models for feature extraction and compared their performance. Attention heatmaps generated by attention-based multiple instance learning (ABMIL) were analyzed to verify alignment with TBs, providing insights into the interpretability of the features. The dataset includes 29 whole slide images (WSIs) for training and 70 WSIs for the hold-out test set.
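
ABMIL itself follows a standard formulation (gated attention pooling, Ilse et al., 2018). A minimal PyTorch sketch of the aggregation step; the feature dimension and bag-level head are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    """ABMIL-style gated attention pooling over patch features."""
    def __init__(self, in_dim=768, hid_dim=128):
        super().__init__()
        self.V = nn.Linear(in_dim, hid_dim)   # tanh branch
        self.U = nn.Linear(in_dim, hid_dim)   # sigmoid gate
        self.w = nn.Linear(hid_dim, 1)
        self.head = nn.Linear(in_dim, 1)      # bag-level TB logit

    def forward(self, h):                     # h: (n_patches, in_dim)
        a = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))
        a = torch.softmax(a, dim=0)           # attention over instances
        bag = (a * h).sum(dim=0)              # weighted bag embedding
        return self.head(bag), a              # logit + heatmap weights

logit, attn = GatedAttentionPool()(torch.randn(50, 768))
```

The attention weights returned alongside the logit are what bag-level heatmaps of this kind visualize.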

RESULTS: In six-fold cross-validation, Phikon-v2 achieved the highest average AUC (0.984 ± 0.003), precision (0.876 ± 0.004), and recall (0.947 ± 0.009). Phikon-v2 again achieved the highest AUC (0.979) and precision (0.980) on the external hold-out test set. Moreover, its recall (0.910) was still higher than UNI's (0.879). UNI exhibited balanced performance on the hold-out test set, with an AUC of 0.960 and a precision of 0.968. CTransPath showed strong precision on the external hold-out test set (0.947) but had a slightly lower recall (0.911).

CONCLUSIONS: The proposed technique enhances the accuracy of TB assessment, offering potential applications for CRC and other cancer types.

PMID:40227783 | DOI:10.3390/cancers17071245

Unlocking the Potential of AI in EUS and ERCP: A Narrative Review for Pancreaticobiliary Disease

Mon, 2025-04-14 06:00

Cancers (Basel). 2025 Mar 28;17(7):1132. doi: 10.3390/cancers17071132.

ABSTRACT

Artificial Intelligence (AI) is transforming pancreaticobiliary endoscopy by enhancing diagnostic accuracy, procedural efficiency, and clinical outcomes. This narrative review explores AI's applications in endoscopic ultrasound (EUS) and endoscopic retrograde cholangiopancreatography (ERCP), emphasizing its potential to address diagnostic and therapeutic challenges in pancreaticobiliary diseases. In EUS, AI improves pancreatic mass differentiation, malignancy prediction, and landmark recognition, demonstrating high diagnostic accuracy and outperforming traditional guidelines. In ERCP, AI facilitates precise biliary stricture identification, optimizes procedural techniques, and supports decision-making through real-time data integration, improving ampulla recognition and predicting cannulation difficulty. Additionally, predictive analytics help mitigate complications like post-ERCP pancreatitis. The future of AI in pancreaticobiliary endoscopy lies in multimodal data fusion, integrating imaging, genomic, and molecular data to enable personalized medicine. However, challenges such as data quality, external validation, clinician training, and ethical concerns (including data privacy and algorithmic bias) must be addressed to ensure safe implementation. By overcoming these challenges, AI has the potential to redefine pancreaticobiliary healthcare, improving diagnostic accuracy, therapeutic outcomes, and personalized care.

PMID:40227709 | DOI:10.3390/cancers17071132

Deep Learning: A Heuristic Three-Stage Mechanism for Grid Searches to Optimize the Future Risk Prediction of Breast Cancer Metastasis Using EHR-Based Clinical Data

Mon, 2025-04-14 06:00

Cancers (Basel). 2025 Mar 25;17(7):1092. doi: 10.3390/cancers17071092.

ABSTRACT

BACKGROUND: A grid search, at the cost of training and testing a large number of models, is an effective way to optimize the prediction performance of deep learning models. A key challenge in grid searching is time management: without a good scheme, a grid search can easily become a "mission" that will not finish in our lifetime. In this study, we introduce a heuristic three-stage mechanism for managing the running time of low-budget grid searches, together with sweet-spot grid search (SSGS) and randomized grid search (RGS) strategies for improving model prediction performance, applied to predicting the 5-year, 10-year, and 15-year risk of breast cancer metastasis.

METHODS: We develop deep feedforward neural network (DFNN) models and optimize their prediction performance through grid searches. We conduct eight cycles of grid searches in three stages: Stage 1 learns a reasonable range of values for each adjustable hyperparameter; Stage 2 learns the sweet-spot values of the hyperparameter set and estimates the unit grid search time; and Stage 3 conducts multiple cycles of timed grid searches to refine model prediction performance with SSGS and RGS. We conduct various SHAP analyses to explain the predictions, including a unique type of SHAP analysis that interprets the contributions of the DFNN-model hyperparameters.

RESULTS: The grid searches we conducted improved the risk prediction of 5-year, 10-year, and 15-year breast cancer metastasis by 18.6%, 16.3%, and 17.3%, respectively, over the average performance of all corresponding models we trained using the RGS strategy.

CONCLUSIONS: Grid search can greatly improve model prediction. Our analyses not only demonstrate best model performance but also characterize grid searches in terms of their ability to discover decent models and their unit grid search time. The three-stage mechanism worked effectively: it made our low-budget grid searches feasible and manageable, and it helped improve the prediction performance of the DFNN models. Our SHAP analyses identified not only clinical risk factors important for predicting the future risk of breast cancer metastasis but also the DFNN-model hyperparameters most important to model performance.
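
A rough sketch of the Stage 3 idea of timed randomized grid searches (our reading of the mechanism, not the authors' code; `train_eval` is a hypothetical callback that trains one DFNN and returns its validation score):

```python
import itertools, random, time

def timed_random_grid_search(grid, train_eval, budget_s, seed=0):
    """Sample hyperparameter combinations at random and stop when the
    wall-clock budget (estimated from the unit grid search time) is spent."""
    combos = [dict(zip(grid, v)) for v in itertools.product(*grid.values())]
    random.Random(seed).shuffle(combos)
    best, best_score, t0 = None, float("-inf"), time.monotonic()
    for params in combos:
        if time.monotonic() - t0 > budget_s:
            break
        score = train_eval(params)          # e.g., validation AUC
        if score > best_score:
            best, best_score = params, score
    return best, best_score

grid = {"lr": [1e-4, 1e-3, 1e-2], "hidden": [32, 64, 128], "dropout": [0.0, 0.3]}
```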

PMID:40227603 | DOI:10.3390/cancers17071092

Localisation and classification of multi-stage caries on CBCT images with a 3D convolutional neural network

Mon, 2025-04-14 06:00

Clin Oral Investig. 2025 Apr 14;29(5):246. doi: 10.1007/s00784-025-06325-1.

ABSTRACT

OBJECTIVES: Dental caries remains a significant global health concern. Recognising the diagnostic potential of cone-beam computed tomography (CBCT) in caries assessment, this study aimed to develop an artificial intelligence (AI)-driven tool for accurate caries localisation and classification on CBCT images, thereby enhancing early diagnosis and precise treatment planning.

MATERIALS AND METHODS: A three-dimensional (3D) convolutional neural network (CNN) was developed using a large annotated dataset comprising 1,778 single-tooth CBCT images. The network's performance in localising and classifying multi-stage caries was compared with that of three dentists. Performance metrics included precision, recall, F1-score, Dice similarity coefficient (DSC), and the area under the receiver operating characteristic (ROC) curve (AUC).
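
The DSC used for localisation is the standard overlap metric; a minimal reference implementation on binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A∩B| / (|A|+|B|) between two binary masks of any shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((16, 16, 16), dtype=bool); a[4:10, 4:10, 4:10] = True
b = np.zeros_like(a); b[5:11, 5:11, 5:11] = True
print(round(dice_coefficient(a, b), 3))   # 0.579
```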

RESULTS: The proposed CNN achieved overall precision, recall, and DSC values of 0.712, 0.899, and 0.776, respectively, for lesion localisation, compared with average values of 0.622, 0.886, and 0.700 for the dentists. For caries classification, the CNN achieved precision, recall, and F1-score values of 0.855, 0.857, and 0.856, respectively, whereas the corresponding values for the dentists were 0.700, 0.684, and 0.678. Overall, the CNN significantly outperformed the dentists in both localisation and classification tasks.

CONCLUSIONS: This study developed a high-performance 3D CNN for the localisation and classification of multi-stage caries on CBCT images. The CNN demonstrated significantly superior diagnostic performance compared to a group of three dentists, underscoring its potential for clinical integration.

CLINICAL RELEVANCE: The integration of AI into CBCT image analysis may improve the efficiency and accuracy of caries diagnosis. The proposed CNN represents a promising tool to enhance early diagnosis and precise treatment planning, potentially supporting clinical decision-making in dental practice.

PMID:40227550 | DOI:10.1007/s00784-025-06325-1

AI as teacher: effectiveness of an AI-based training module to improve trainee pediatric fracture detection

Mon, 2025-04-14 06:00

Skeletal Radiol. 2025 Apr 14. doi: 10.1007/s00256-025-04927-0. Online ahead of print.

ABSTRACT

OBJECTIVE: Prior work has demonstrated that AI access can help residents more accurately detect pediatric fractures. We wished to evaluate the effectiveness of an unsupervised AI-based training module as a pediatric fracture detection educational tool.

MATERIALS AND METHODS: Two hundred forty radiographic examinations spanning the pediatric upper extremity were split into two groups of 120 examinations. A previously developed open-source deep learning fracture detection algorithm (www.childfx.com) was used to annotate radiographs. Four medical students and four PGY-2 radiology residents first evaluated 120 examinations for fracture without AI assistance and subsequently reviewed AI annotations on these cases via a training module. They then interpreted 120 different examinations without AI assistance. Pre- and post-intervention fracture detection accuracy was evaluated using a chi-squared test.
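
The pre/post comparison can be reproduced in spirit with a chi-squared test on a 2x2 contingency table. The counts below are illustrative, scaled from the reported 71.3% and 77.5% accuracies under an assumed 4 readers x 120 exams per phase; they are not the study's raw data:

```python
from scipy.stats import chi2_contingency

table = [[342, 480 - 342],   # pre-intervention:  correct, incorrect
         [372, 480 - 372]]   # post-intervention: correct, incorrect
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")   # p near the reported 0.032
```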

RESULTS: Overall resident fracture detection accuracy significantly improved from 71.3% pre-intervention to 77.5% post-intervention (p = 0.032). Medical student fracture detection accuracy was not significantly changed from 56.3% pre-intervention to 57.3% post-intervention (p = 0.794). Eighty-eight percent of responding participants (7/8) would recommend this model of learning.

CONCLUSION: We found that a tailored AI-based training module increased resident accuracy for detecting pediatric fractures by 6.2%. Medical student accuracy was not improved, likely due to their limited background familiarity with the task. AI offers a scalable method for automatically generating annotated teaching cases covering varied pathology, allowing residents to efficiently learn from simulated experience.

PMID:40227327 | DOI:10.1007/s00256-025-04927-0

Fast-forwarding plant breeding with deep learning-based genomic prediction

Mon, 2025-04-14 06:00

J Integr Plant Biol. 2025 Apr 14. doi: 10.1111/jipb.13914. Online ahead of print.

ABSTRACT

Deep learning-based genomic prediction (DL-based GP) has shown promising performance compared to traditional GP methods in plant breeding, particularly in handling large, complex multi-omics data sets. However, the effective development and widespread adoption of DL-based GP still face substantial challenges, including the need for large, high-quality data sets, inconsistencies in performance benchmarking, and the integration of environmental factors. Here, we summarize the key obstacles impeding the development of DL-based GP models and propose future development directions, such as modular approaches, data augmentation, and advanced attention mechanisms.

PMID:40226955 | DOI:10.1111/jipb.13914

pC-SAC: A method for high-resolution 3D genome reconstruction from low-resolution Hi-C data

Mon, 2025-04-14 06:00

Nucleic Acids Res. 2025 Apr 10;53(7):gkaf289. doi: 10.1093/nar/gkaf289.

ABSTRACT

The three-dimensional (3D) organization of the genome is crucial for gene regulation, with disruptions linked to various diseases. High-throughput Chromosome Conformation Capture (Hi-C) and related technologies have advanced our understanding of 3D genome organization by mapping interactions between distal genomic regions. However, capturing enhancer-promoter interactions at high resolution remains challenging due to the high sequencing depth required. We introduce pC-SAC (probabilistically Constrained Self-Avoiding Chromatin), a novel computational method for producing accurate high-resolution Hi-C matrices from low-resolution data. pC-SAC uses adaptive importance sampling with sequential Monte Carlo to generate ensembles of 3D chromatin chains that satisfy physical constraints derived from low-resolution Hi-C data. Our method achieves over 95% accuracy in reconstructing high-resolution chromatin maps and identifies novel interactions enriched with candidate cis-regulatory elements (cCREs) and expression quantitative trait loci (eQTLs). Benchmarking against state-of-the-art deep learning models demonstrates pC-SAC's performance in both short- and long-range interaction reconstruction. pC-SAC offers a cost-effective solution for enhancing the resolution of Hi-C data, thus enabling deeper insights into 3D genome organization and its role in gene regulation and disease. Our tool can be found at https://github.com/G2Lab/pCSAC.
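
The sampler is described here only at a high level. Its classical building block, Rosenbluth-style sequential importance sampling of self-avoiding chains, can be sketched on a cubic lattice; this is didactic only, since pC-SAC additionally biases growth with probabilities derived from the low-resolution Hi-C constraints:

```python
import numpy as np

def rosenbluth_saw(n_steps, rng=None):
    """Grow one self-avoiding chain on a 3D lattice, returning the
    chain and its Rosenbluth importance weight."""
    rng = np.random.default_rng(rng)
    moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    chain, occupied, weight = [np.zeros(3, dtype=int)], {(0, 0, 0)}, 1.0
    for _ in range(n_steps):
        cands = [chain[-1] + m for m in moves
                 if tuple(chain[-1] + m) not in occupied]
        if not cands:                # dead end: zero-weight chain
            return chain, 0.0
        weight *= len(cands)         # importance-weight correction
        nxt = cands[rng.integers(len(cands))]
        chain.append(nxt)
        occupied.add(tuple(nxt))
    return chain, weight

chain, w = rosenbluth_saw(100, rng=0)
```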

PMID:40226920 | DOI:10.1093/nar/gkaf289

Vessel-aware aneurysm detection using multi-scale deformable 3D attention

Mon, 2025-04-14 06:00

Med Image Comput Comput Assist Interv. 2024 Oct;15005:754-765. doi: 10.1007/978-3-031-72086-4_71. Epub 2024 Oct 4.

ABSTRACT

Manual detection of intracranial aneurysms (IAs) in computed tomography (CT) scans is a complex, time-consuming task even for expert clinicians, and automating the process is no less challenging. Critical difficulties include the small (yet varied) size of aneurysms relative to the full scan and a high potential for false positive (FP) predictions. To address these issues, we propose a 3D, multi-scale neural architecture that detects aneurysms via a deformable attention mechanism operating on vessel distance maps derived from vessel segmentations and on 3D features extracted from the layers of a convolutional network. Likewise, we reformulate aneurysm segmentation as bounding cuboid prediction using binary cross entropy and three localization losses (location, size, IoU). On three validation sets comprising 152/138/38 CT scans and containing 126/101/58 aneurysms, we achieved sensitivities of 91.3%/97.0%/74.1% at FP rates of 0.53/0.56/0.87, with sensitivity around 80% on small aneurysms. Manual inspection of outputs by experts showed that our model only tends to miss aneurysms located in unusual locations. Code and model weights are available online.
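
The IoU localization term operates on axis-aligned cuboids; a generic sketch (the box encoding `(x1, y1, z1, x2, y2, z2)` and this exact form are our assumptions, not the paper's implementation):

```python
import torch

def cuboid_iou(a, b):
    """IoU between axis-aligned 3D boxes encoded as (x1,y1,z1,x2,y2,z2)."""
    lo = torch.maximum(a[..., :3], b[..., :3])
    hi = torch.minimum(a[..., 3:], b[..., 3:])
    inter = (hi - lo).clamp(min=0).prod(dim=-1)
    vol_a = (a[..., 3:] - a[..., :3]).prod(dim=-1)
    vol_b = (b[..., 3:] - b[..., :3]).prod(dim=-1)
    return inter / (vol_a + vol_b - inter + 1e-7)

iou_loss = 1 - cuboid_iou(torch.tensor([0., 0., 0., 2., 2., 2.]),
                          torch.tensor([1., 1., 1., 3., 3., 3.]))
```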

PMID:40226842 | PMC:PMC11986933 | DOI:10.1007/978-3-031-72086-4_71

Functional Near-Infrared Spectroscopy-Based Computer-Aided Diagnosis of Major Depressive Disorder Using Convolutional Neural Network with a New Channel Embedding Layer Considering Inter-Hemispheric Asymmetry in Prefrontal Hemodynamic Responses

Mon, 2025-04-14 06:00

Depress Anxiety. 2024 Jul 14;2024:4459867. doi: 10.1155/2024/4459867. eCollection 2024.

ABSTRACT

BACKGROUND: Functional near-infrared spectroscopy (fNIRS) is being extensively explored as a potential primary screening tool for major depressive disorder (MDD) because of its portability, cost-effectiveness, and low susceptibility to motion artifacts. However, the fNIRS-based computer-aided diagnosis (CAD) of MDD using deep learning methods has rarely been studied. In this study, we propose a novel deep learning framework based on a convolutional neural network (CNN) for the fNIRS-based CAD of MDD with high accuracy.

MATERIALS AND METHODS: The fNIRS data of participants (48 patients with MDD and 68 healthy controls (HCs)) were obtained while they performed a Stroop task. The hemodynamic responses calculated from the preprocessed fNIRS data were used as inputs to the proposed CNN model with an ensemble CNN architecture, comprising three 1D depth-wise convolutional layers specifically designed to reflect interhemispheric asymmetry in hemodynamic responses between patients with MDD and HCs, which is known to be a distinct characteristic in previous MDD studies. The performance of the proposed model was evaluated using a leave-one-subject-out cross-validation strategy and compared with those of conventional machine learning and CNN models.
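
In PyTorch, a 1D depth-wise convolution is an `nn.Conv1d` with `groups` equal to the channel count, so each fNIRS channel keeps its own kernel; this is one plausible way such a layer can preserve left/right-hemisphere differences rather than mixing channels. Channel count, kernel size, and input length below are assumptions:

```python
import torch
import torch.nn as nn

n_channels, kernel = 16, 11
depthwise = nn.Conv1d(n_channels, n_channels, kernel_size=kernel,
                      groups=n_channels, padding=kernel // 2)
x = torch.randn(8, n_channels, 300)   # (batch, channels, time samples)
print(depthwise(x).shape)             # torch.Size([8, 16, 300])
```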

RESULTS: The proposed model exhibited a high accuracy, sensitivity, and specificity of 84.48%, 83.33%, and 85.29%, respectively. The accuracies of the comparison models (shrinkage linear discriminant analysis, regularized support vector machine, EEGNet, and ShallowConvNet) were 73.28%, 74.14%, 62.93%, and 62.07%, respectively.

CONCLUSIONS: In conclusion, the proposed deep learning model can differentiate between the patients with MDD and HCs more accurately than the conventional models, demonstrating its applicability in fNIRS-based CAD systems.

PMID:40226684 | PMC:PMC11918759 | DOI:10.1155/2024/4459867

Cognitive load assessment through EEG: A dataset from arithmetic and Stroop tasks

Mon, 2025-04-14 06:00

Data Brief. 2025 Mar 19;60:111477. doi: 10.1016/j.dib.2025.111477. eCollection 2025 Jun.

ABSTRACT

This study introduces a curated dataset of electroencephalogram (EEG) recordings designed to unravel mental stress patterns through the lens of cognitive load. The dataset comprises EEG signals from 15 subjects (8 females and 7 males, mean age 21.5 years) [1], recorded while they performed diverse tasks, including the Stroop color-word test and arithmetic problem-solving tasks. The recordings are categorized into four classes representing levels of induced mental stress: normal, low, mid, and high. Each task was performed for 10-20 s, and three trials were conducted for comprehensive data collection. Recorded with an OpenBCI device using an 8-channel Cyton board at a sampling rate of 250 Hz, the EEG signals capture the frontal lobe's responses to the cognitive challenges posed by the Stroop and arithmetic tests. The dataset serves as a valuable resource for advancing brain-computer interface research and for identifying EEG patterns that correlate with different stress states. By providing a solid foundation for developing algorithms that detect and classify stress levels, it supports innovations in non-invasive monitoring tools and contributes to personalized healthcare solutions that adapt to users' cognitive states, with significant implications for research on cognitive function and well-being.
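
A typical first step when working with such recordings is a zero-phase band-pass at the stated 250 Hz sampling rate; the band and filter order below are our choices, not prescribed by the dataset:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # Cyton board sampling rate (Hz)

def bandpass(eeg, lo=1.0, hi=40.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass applied along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

trial = np.random.randn(8, FS * 10)   # one 10 s trial, 8 channels
filtered = bandpass(trial)
```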

PMID:40226198 | PMC:PMC11993157 | DOI:10.1016/j.dib.2025.111477

Artificial intelligence in the diagnosis and management of refractive errors

Mon, 2025-04-14 06:00

Eur J Ophthalmol. 2025 Apr 13:11206721251318384. doi: 10.1177/11206721251318384. Online ahead of print.

ABSTRACT

Refractive error is among the leading causes of visual impairment globally. The diagnosis and management of refractive error have traditionally relied on comprehensive eye examinations by eye care professionals, but access to these specialized services has remained limited in many areas of the world. Given this, artificial intelligence (AI) has shown immense potential in transforming the diagnosis and management of refractive error. We review AI applications across various aspects of refractive error care, from axial length prediction using fundus images to risk stratification for myopia progression. AI algorithms can be trained to analyze clinical data to detect refractive error as well as predict associated risks of myopia progression. For treatments such as implantable collamer lenses and orthokeratology lenses, AI models facilitate vault size prediction and optimal lens fitting with high accuracy. Furthermore, AI has demonstrated promise in optimizing surgical planning and outcomes for refractive procedures. Emerging digital technologies such as telehealth, smartphone applications, and virtual reality integrated with AI present novel avenues for refractive error screening. We discuss key challenges, including limited validation datasets, lack of data standardization, image quality issues, population heterogeneity, practical deployment, and ethical considerations regarding patient privacy, that need to be addressed before widespread clinical implementation.

PMID:40223314 | DOI:10.1177/11206721251318384

MODAMS: design of a multimodal object-detection based augmentation model for satellite image sets

Sun, 2025-04-13 06:00

Sci Rep. 2025 Apr 13;15(1):12742. doi: 10.1038/s41598-025-93766-z.

ABSTRACT

Efficient image augmentation for hyperspectral satellite images requires multiband processing models that can improve classification performance across application scenarios. Existing models either work on dynamic band fusions or use deep learning techniques to identify application-specific augmentation operations. Moreover, these models use static augmentations and do not take image-specific parameters into consideration, which limits their efficiency. To overcome these limitations, this text proposes a novel multimodal object-detection-based augmentation model for satellite image sets. The proposed model first applies a customized YOLO (You Only Look Once) object detection technique to each of the hyperspectral image bands. This is followed by a context-specific classification layer that identifies the detected object types. The identified objects are analyzed via a cascaded dual Generative Adversarial Network (cdGAN), which estimates an object-level importance metric used to evaluate each object's importance probability. Based on these probabilities, an Elephant Herding Optimization (EHO) based hyperspectral band-selection model identifies high-priority image bands for classification purposes. Augmentations on these image bands are controlled via a Firefly Optimizer (FFO), which identifies object-level augmentations for efficient classification of satellite images. The augmented image sets are updated via an Incremental Learning (IL) layer that continuously improves accuracy across application scenarios. Due to these optimizations, the proposed model improves classification accuracy by 8.5%, precision by 4.3%, and recall by 6.5%, while reducing classification delay by 2.9% compared with existing augmentation-based classification techniques.

PMID:40223115 | DOI:10.1038/s41598-025-93766-z

Basin-informed flood frequency analysis using deep learning exhibits consistent projected regional patterns over CONUS

Sun, 2025-04-13 06:00

Sci Rep. 2025 Apr 13;15(1):12754. doi: 10.1038/s41598-025-97610-2.

ABSTRACT

Climate change poses a significant threat to flood-prone areas by altering precipitation patterns and the water cycle. Here, we analyzed the impact of climate change on future flood trends. We trained a Long Short-Term Memory (LSTM) model to estimate long-term discharge at 638 river sites across the contiguous United States (CONUS), using inputs from the gridMET meteorological datasets and downscaled, bias-corrected Coupled Model Intercomparison Project 5 (CMIP5) projections. Our results indicate that the LSTM model can replicate observed discharge with reliable accuracy. The projected changes in flood magnitude for the 10-year and 100-year return periods reveal consistent geographical patterns that are robust across climate models, with increasing trends of approximately +10 to +40% in the East and West coastal regions and decreasing trends of about -10 to -30% in the Southwestern areas. The regions exhibiting an increasing flood trend are likely driven by an increase in total seasonal extreme precipitation and by changes in the timing and amount of peak flow. In contrast, the decreasing flood trends result from a significant reduction in snowpack. To support adaptation planning, we developed an interactive map providing the historical and projected changes in the 10- and 100-year floods across the 638 selected basins over CONUS.
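
Return-period flood magnitudes of this kind are conventionally estimated by fitting an extreme-value distribution to annual peak discharge; a minimal GEV-based sketch (the paper's exact estimator is not specified here):

```python
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, T):
    """Fit a GEV to annual peaks and return the T-year flood magnitude."""
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

peaks = np.random.default_rng(0).gumbel(1000, 250, size=60)  # toy record
print(return_level(peaks, 10), return_level(peaks, 100))
```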

PMID:40222992 | DOI:10.1038/s41598-025-97610-2

The satisfaction of ecological environment in sports public services by artificial intelligence and big data

Sun, 2025-04-13 06:00

Sci Rep. 2025 Apr 13;15(1):12748. doi: 10.1038/s41598-025-97927-y.

ABSTRACT

To better understand and strengthen the relationship between the fitness ecological environment and artificial intelligence (AI)-driven sports public services, this study combines a Convolutional Neural Network (CNN) based on residual modules and attention mechanisms with the SERVQUAL evaluation model. Big data collected from questionnaire surveys, literature reviews, and interviews are analyzed. The study examines the impact of advanced AI technologies on residents' satisfaction with the fitness ecological environment in sports public services and analyzes the resulting data theoretically. The results show that the quality of AI-empowered sports public services significantly influences residents' satisfaction with the fitness ecological environment, particularly for activities such as running, swimming, and ball games, which place high demands on service quality and the surrounding environment. Only public sports services of matching quality can meet residents' needs for a fitness-friendly ecological environment and stimulate public enthusiasm for exercise. The study also shows that swimming, running, and ball games account for the largest share of all sports. In summary, residents' satisfaction with the fitness ecological environment is strongly affected by the quality of public sports services, chiefly through well-maintained sports environments and facilities that offer residents a wide range of fitness options. This study helps clarify the relationship between sports public services and the sports ecological environment, and the role of AI and deep learning in strengthening that relationship.

PMID:40222989 | DOI:10.1038/s41598-025-97927-y

Deep learning enabled liquid-based cytology model for cervical precancer and cancer detection

Sun, 2025-04-13 06:00

Nat Commun. 2025 Apr 13;16(1):3506. doi: 10.1038/s41467-025-58883-3.

ABSTRACT

Deep learning (DL) enabled liquid-based cytology has potential for cervical cancer screening or triage. Here, we develop a DL model using whole cytology slides from 17,397 women and test it on 10,826 additional cases through a three-stage process. The DL model achieves robust performance across nine hospitals. In a multi-reader, multi-case study, its sensitivity exceeds that of cytopathologists by 9%. Reading time decreases significantly with DL assistance (218 s vs. 30 s; p < 0.0001). In community-based organized screening, the DL model's sensitivity matches that of senior cytopathologists (0.878 vs 0.854; p > 0.999), yet it has reduced specificity (0.831 vs 0.901; p < 0.0001). Notably, hospital-based opportunistic screening shows that junior cytopathologists with DL assistance significantly improve both their sensitivity and specificity (0.857 vs 0.657, 0.840 vs 0.737; both p < 0.0001). When triaging human papillomavirus-positive cases, DL assistance exhibits better performance than junior cytopathologists alone. These findings support using the DL model as an assistance tool in cervical screening and case triage.

PMID:40222978 | DOI:10.1038/s41467-025-58883-3

Hybrid of DSR-GAN and CNN for Alzheimer disease detection based on MRI images

Sun, 2025-04-13 06:00

Sci Rep. 2025 Apr 13;15(1):12727. doi: 10.1038/s41598-025-94677-9.

ABSTRACT

In this paper, we propose a deep super-resolution generative adversarial network (DSR-GAN) combined with a convolutional neural network (CNN) model designed to classify four stages of Alzheimer's disease (AD): Mild Dementia (MD), Moderate Dementia (MOD), Non-Demented (ND), and Very Mild Dementia (VMD). The proposed DSR-GAN is implemented using the PyTorch library and uses a dataset of 6,400 MRI images. A super-resolution (SR) technique is applied to enhance the clarity and detail of the images, allowing the DSR-GAN to refine particular image features. The CNN model undergoes hyperparameter optimization and incorporates data augmentation strategies to maximize its efficiency. The normalized error matrix and the area under the ROC curve are used to evaluate the CNN's performance, which reached a testing accuracy of 99.22%, an area under the ROC curve of 100%, and an error rate of 0.0516. The performance of the DSR-GAN is assessed using three metrics: structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and multi-scale structural similarity index measure (MS-SSIM). The model achieved an SSIM of 0.847, a PSNR of 29.30 dB, and an MS-SSIM of 96.39%. The combination of the DSR-GAN and CNN models provides a rapid and precise method to distinguish between various stages of Alzheimer's disease, potentially aiding professionals in the screening of AD cases.
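
SSIM and PSNR can be computed with scikit-image's standard implementations, shown below on synthetic arrays; the MS-SSIM variant reported in the paper is not included in scikit-image and is omitted here:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(128, 128)                       # stand-in ground truth
sr = np.clip(ref + 0.05 * np.random.randn(128, 128), 0, 1)
print(peak_signal_noise_ratio(ref, sr, data_range=1.0))  # dB
print(structural_similarity(ref, sr, data_range=1.0))
```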

PMID:40222973 | DOI:10.1038/s41598-025-94677-9

Quantifying axonal features of human superficial white matter from three-dimensional multibeam serial electron microscopy data assisted by deep learning

Sun, 2025-04-13 06:00

Neuroimage. 2025 Apr 11:121212. doi: 10.1016/j.neuroimage.2025.121212. Online ahead of print.

ABSTRACT

Short-range association fibers located in the superficial white matter play an important role in mediating higher-order cognitive function in humans. Detailed morphological characterization of short-range association fibers at the microscopic level promises to yield important insights into the axonal features driving cortico-cortical connectivity in the human brain, yet has been difficult to achieve to date due to the challenges of imaging at nanometer-scale resolution over large tissue volumes. This work presents results from multi-beam scanning electron microscopy (EM) data acquired at 4 × 4 × 33 nm³ resolution in a volume of human superficial white matter measuring 200 × 200 × 112 μm³ (Braitenberg and Schüz, 2013), leveraging automated analysis methods. Myelin and myelinated axons were automatically segmented using deep convolutional neural networks (CNNs), assisted by transfer learning and dropout regularization techniques. A total of 128,285 myelinated axons were segmented, of which 70,321 and 2,102 were longer than 10 and 100 μm, respectively. Marked local variations in diameter (i.e., beading) and direction (i.e., undulation) were observed along the length of individual axons. Myelinated axons longer than 10 μm had inner diameters around 0.5 µm, outer diameters around 1 µm, and g-ratios around 0.5. This work fills a gap in knowledge of axonal morphometry in the superficial white matter and provides a large 3D human EM dataset and accurate segmentation results for a variety of future studies in different fields.
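
The reported g-ratios follow the standard definition: inner (axon) diameter divided by outer (axon plus myelin) diameter:

```python
import numpy as np

def g_ratio(inner_diam_um, outer_diam_um):
    """g-ratio = inner diameter / outer diameter, per axon."""
    return np.asarray(inner_diam_um) / np.asarray(outer_diam_um)

print(g_ratio([0.5, 0.6], [1.0, 1.1]))   # -> [0.5  0.5454...]
```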

PMID:40222502 | DOI:10.1016/j.neuroimage.2025.121212

IT: An Interpretable Transformer Model for Alzheimer's Disease Prediction based on PET/MR Images

Sun, 2025-04-13 06:00

Neuroimage. 2025 Apr 11:121210. doi: 10.1016/j.neuroimage.2025.121210. Online ahead of print.

ABSTRACT

Alzheimer's disease (AD) represents a significant challenge due to its progressive neurodegenerative impact, particularly within an aging global demographic. This underscores the critical need for developing sophisticated diagnostic tools for its early detection and precise monitoring. Within this realm, PET/MR imaging stands out as a potent dual-modality approach that transforms sensor data into detailed perceptual mappings, thereby enriching our grasp of brain pathophysiology. To capitalize on the strengths of PET/MR imaging in diagnosing AD, we have introduced a novel deep learning framework named "IT", which is inspired by the Transformer architecture. This innovative model adeptly captures both local and global characteristics within the imaging data, refining these features through advanced feature engineering techniques to achieve a synergistic integration. The efficiency of our model is underscored by robust experimental validation, wherein it delivers superior performance on a host of evaluative benchmarks, all while maintaining low demands on computational resources. Furthermore, the features we extracted resonate with established medical theories regarding feature distribution and usage efficiency, enhancing the clinical relevance of our findings. These insights significantly bolster the arsenal of tools available for AD diagnostics and contribute to the broader narrative of deciphering brain functionality through state-of-the-art imaging modalities.

PMID:40222500 | DOI:10.1016/j.neuroimage.2025.121210
