Deep learning

The application of artificial intelligence-generated content in ophthalmology education

Mon, 2025-08-04 06:00

Front Med (Lausanne). 2025 Jul 18;12:1617537. doi: 10.3389/fmed.2025.1617537. eCollection 2025.

ABSTRACT

With the rise of generative artificial intelligence (AI) technology, AI has come to play a significant role in clinical ophthalmology, and AI-generated content (AIGC) has shown great potential in ophthalmology education. Specifically, AIGC plays an important role in lesson plan generation, simulated cases, and disease diagnosis, but its application also faces challenges related to patient privacy and the accuracy of generated content. To better harness AIGC and promote the development of ophthalmology education, this article provides an overview of AI in ophthalmology and of the applications, challenges, and development prospects of AIGC in ophthalmology education. References for related research and practice are also provided.

PMID:40757196 | PMC:PMC12313564 | DOI:10.3389/fmed.2025.1617537

Categories: Literature Watch

A multitask framework based on CA-EfficientNetV2 for the prediction of glioma molecular biomarkers

Mon, 2025-08-04 06:00

Front Neurol. 2025 Jul 18;16:1609594. doi: 10.3389/fneur.2025.1609594. eCollection 2025.

ABSTRACT

INTRODUCTION: Glioma is the most common primary malignant tumor of the central nervous system. The mutation status of isocitrate dehydrogenase (IDH) and the methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter are key biomarkers for glioma diagnosis and prognosis. Accurate, non-invasive prediction of these biomarkers using MRI is of significant clinical value.

MATERIALS AND METHODS: We proposed a novel multitask deep learning framework based on Coordinate Attention-EfficientNetV2 (CA-EfficientNetV2) to simultaneously predict IDH mutation and MGMT promoter methylation status based on MRI data. Initially, unlabeled MR images were annotated using K-means clustering to generate pseudolabels, which were subsequently refined using a Vision Transformer (ViT) network to improve labeling accuracy. Then, the Fruit Fly Optimization Algorithm (FOA) was employed to assign optimal weights to the pseudolabeled data. The CA-EfficientNetV2 model, integrated with a coordinate attention mechanism, was constructed. The multitask framework comprised three independent subnetworks: T2-net (based on T2-weighted imaging), T1C-net (based on contrast-enhanced T1-weighted imaging), and TU-net (based on the fusion of T2WI and T1CWI).
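
As an illustration of the attention mechanism named above, here is a minimal PyTorch sketch of a coordinate attention block (after Hou et al.) of the kind integrated into EfficientNetV2; the reduction ratio and activation are illustrative assumptions, not the paper's exact configuration:

```python
# A sketch of coordinate attention: direction-aware pooling along height and
# width, a shared bottleneck, then per-axis sigmoid gates. Sizes are assumed.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (n, c, 1, w)
        return x * a_h * a_w   # reweight features with positional gates
```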

RESULTS: The proposed framework demonstrated high performance in predicting both IDH mutation and MGMT promoter methylation status. Among the three subnetworks, TU-net achieved the best results, with accuracies of 0.9598 for IDH and 0.9269 for MGMT, and AUCs of 0.9930 and 0.9584, respectively. Comparative analysis showed that our proposed model outperformed other convolutional neural network (CNN)-based approaches.

CONCLUSION: The CA-EfficientNetV2-based multitask framework offers a robust, non-invasive method for preoperative prediction of glioma molecular markers. This approach holds strong potential to support clinical decision-making and personalized treatment planning in glioma management.

PMID:40757022 | PMC:PMC12313511 | DOI:10.3389/fneur.2025.1609594

Categories: Literature Watch

Toward Precision Diagnosis of Maxillofacial Pathologies by Artificial Intelligence Algorithms: A Systematic Review

Mon, 2025-08-04 06:00

J Maxillofac Oral Surg. 2025 Aug;24(4):1151-1178. doi: 10.1007/s12663-025-02664-4. Epub 2025 Jul 2.

ABSTRACT

PURPOSE: This review highlights the potential of artificial intelligence algorithms, including machine learning (ML) and deep learning (DL), in improving the diagnosis and management of oral and maxillofacial diseases through advanced imaging techniques such as computed tomography (CT) and cone-beam computed tomography (CBCT).

METHODS: The review was conducted using ISI Web of Science, PubMed, Scopus, and Google Scholar (2010-2024), with keywords related to radiography, MRI, CT, CBCT, ML, DL, and maxillofacial pathology, and with a focus on clinical applications.

RESULTS: DL algorithms for detecting vertical root fractures achieved a diagnostic accuracy of 89.0% for premolars, with a sensitivity of 84.0% and a specificity of 94.0%, and demonstrated an accuracy of 93% and a specificity of 88% when evaluating CBCT images. The GoogLeNet Inception v3 architecture achieved an AUC of 0.914, a sensitivity of 96.1%, and a specificity of 77.1% for CBCT, outperforming panoramic radiography, which had an AUC of 0.847, a sensitivity of 88.2%, and a specificity of 77.0%. CBCT demonstrated higher diagnostic accuracy (91.4%) than panoramic images (84.6%), with odontogenic cystic lesions exhibiting the highest accuracy. A U-Net-based DL algorithm achieved recall, precision, and F1 scores of 0.742, 0.942, and 0.831 for metastatic lymph nodes, and 0.782, 0.990, and 0.874 for nonmetastatic lymph nodes, respectively.
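
As a quick arithmetic check, the reported recall/precision/F1 triplets are internally consistent with the definition of F1 as the harmonic mean of precision and recall:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f"{f1(0.942, 0.742):.3f}")  # 0.830 ~= reported 0.831 (metastatic nodes, rounding)
print(f"{f1(0.990, 0.782):.3f}")  # 0.874 == reported 0.874 (nonmetastatic nodes)
```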

CONCLUSION: This study highlights the superior anatomical detail of CBCT, making it more reliable for diagnosing oral and dentomaxillofacial disorders. DL algorithms demonstrate high accuracy and sensitivity in diagnosing dental and odontogenic disorders and often outperform radiologists.

PMID:40756906 | PMC:PMC12316632 | DOI:10.1007/s12663-025-02664-4

Categories: Literature Watch

Deep learning-based seabird detection in fisheries for seabird protection

Mon, 2025-08-04 06:00

J R Soc N Z. 2025 May 14;55(6):2082-2102. doi: 10.1080/03036758.2025.2500998. eCollection 2025.

ABSTRACT

New Zealand is considered the 'seabird capital' of the world. During fishing operations, some commercial fishers accidentally catch seabirds as bycatch, resulting in deaths and injuries. This accidental bycatch threatens the long-term sustainability of New Zealand seabird populations. To address this, we developed a YOLO model that automatically detects seabirds interacting with fishing vessels. The model development process involved gathering, annotating, and preprocessing a new image dataset, conducting transfer learning across YOLO benchmark models, and performing hyperparameter tuning on the top YOLO models to further improve performance. We evaluated the model under diverse data conditions; it achieved a mAP@50 score of 0.9926 and a mAP@50-95 score of 0.9147 on the test data. The results demonstrate that the model performs effectively in unconstrained real-world marine scenarios, addressing the limitations of previous models evaluated primarily in controlled settings. This automation could reduce or even eliminate manual inspection of footage by reviewers and will help quantify seabird interactions with commercial fishing vessels. Our contributions represent a significant first step in automated seabird detection, narrowing the gap between constrained and unconstrained real-world maritime scenarios.
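
As an illustration of the transfer-learning and evaluation workflow described above, here is a minimal sketch using the Ultralytics Python API; the starting checkpoint, the dataset file seabirds.yaml, and all hyperparameters are placeholders rather than the paper's actual settings:

```python
# Fine-tune a COCO-pretrained YOLO checkpoint on a custom seabird dataset,
# then report mAP@50 and mAP@50-95 on the held-out test split.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # placeholder: any YOLO benchmark checkpoint
model.train(data="seabirds.yaml", epochs=100, imgsz=640, batch=16)

metrics = model.val(split="test")           # evaluate on the test split
print(metrics.box.map50, metrics.box.map)   # mAP@50 and mAP@50-95
```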

PMID:40756846 | PMC:PMC12315183 | DOI:10.1080/03036758.2025.2500998

Categories: Literature Watch

Multimodal Deep Learning Integrating Tumor Radiomics and Mediastinal Adiposity Improves Survival Prediction in Non-Small Cell Lung Cancer: A Prognostic Modeling Study

Mon, 2025-08-04 06:00

Cancer Med. 2025 Aug;14(15):e71077. doi: 10.1002/cam4.71077.

ABSTRACT

BACKGROUND AND PURPOSE: Prognostic stratification in non-small cell lung cancer (NSCLC) presents considerable challenges due to tumor heterogeneity. Emerging evidence has proposed that adipose tissue may play a prognostic role in oncological outcomes. This study investigates the integration of deep learning (DL)-derived computed tomography (CT) imaging biomarkers with mediastinal adiposity metrics to develop a multimodal prognostic model for postoperative survival prediction in NSCLC patients.

METHODS: A retrospective cohort of 702 surgically resected NSCLC patients was analyzed. Tumor radiomic features were extracted using a DenseNet121 convolutional neural network architecture, while mediastinal fat area (MFA) was quantified through semiautomated segmentation using ImageJ software. A multimodal survival prediction model was developed through feature-level fusion of DL-extracted tumor characteristics and MFA measurements. Model performance was evaluated using Harrell's concordance index (C-index) and receiver operating characteristic (ROC) analysis. Risk stratification was performed using an optimal threshold derived from training data, with subsequent Kaplan-Meier survival curve comparison between high- and low-risk cohorts.
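
To make the feature-level fusion concrete, here is a hypothetical PyTorch sketch of the kind of model described: DenseNet121 image features concatenated with the scalar mediastinal fat area (MFA) before a survival head. The head width and the scalar risk output (e.g., for a Cox-style loss) are assumptions, not the paper's exact design:

```python
# Feature-level fusion of CNN-derived tumor features with a clinical scalar.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class FusionSurvivalNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = densenet121(weights=None)
        backbone.classifier = nn.Identity()   # expose 1024-d image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(1024 + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # scalar risk score
        )

    def forward(self, ct: torch.Tensor, mfa: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(ct)                          # (batch, 1024)
        fused = torch.cat([feats, mfa.unsqueeze(1)], dim=1)  # append MFA
        return self.head(fused).squeeze(1)
```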

RESULTS: The DL-based tumor model achieved C-indices of 0.787 (95% CI: 0.742-0.832) for disease-free survival (DFS) and 0.810 (95% CI: 0.768-0.852) for overall survival (OS) in internal validation. Integration of MFA with DL-derived tumor features yielded a multimodal model demonstrating enhanced predictive performance, with C-indices of 0.823 (OS) and 0.803 (DFS). Kaplan-Meier analysis revealed significant survival divergence between risk-stratified groups (log-rank p < 0.05).

CONCLUSION: The multimodal fusion of DL-extracted tumor radiomics and mediastinal adiposity metrics represents a significant advancement in postoperative survival prediction for NSCLC patients, demonstrating superior prognostic capability compared to unimodal approaches.

PMID:40755324 | DOI:10.1002/cam4.71077

Categories: Literature Watch

Deep Learning Reconstruction for T2 Weighted Turbo-Spin-Echo Imaging of the Pelvis: Prospective Comparison With Standard T2-Weighted TSE Imaging With Respect to Image Quality, Lesion Depiction, and Acquisition Time

Mon, 2025-08-04 06:00

Can Assoc Radiol J. 2025 Aug 4:8465371251357790. doi: 10.1177/08465371251357790. Online ahead of print.

ABSTRACT

PURPOSE: In pelvic MRI, turbo spin echo (TSE) pulse sequences are used for T2-weighted imaging. However, their lengthy acquisition times increase the potential for artifacts. Deep learning (DL) reconstruction achieves reduced scan times without the image quality degradation associated with other accelerated techniques. To date, however, a comprehensive assessment of DL reconstruction in pelvic MRI has not been performed. The objective of this prospective study was to compare the performance of DL-TSE and conventional TSE pulse sequences across a broad spectrum of pelvic MRI indications.

METHODS: Fifty-five subjects (33 females and 22 males) were scanned at 3 T using DL-TSE and conventional TSE sequences in axial and/or oblique acquisition planes. Two radiologists independently assessed image quality in six categories: edge definition, vessel margin sharpness, T2 contrast dynamic range, artifacts, overall image quality, and lesion features. The contrast ratio was calculated for quantitative assessment. A two-tailed sign test was used for statistical comparison.
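
For context, a two-tailed sign test on paired reader preferences reduces to a binomial test on the non-tied comparisons. A minimal sketch with placeholder counts (not the study's data):

```python
# Sign test: under the null, each non-tied preference favors either sequence
# with probability 0.5, so the count of wins follows a Binomial(n, 0.5).
from scipy.stats import binomtest

wins_dl, wins_conventional = 18, 3   # hypothetical non-tied preferences
n = wins_dl + wins_conventional
result = binomtest(wins_dl, n, p=0.5, alternative="two-sided")
print(result.pvalue)
```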

RESULTS: The two readers found DL-TSE to deliver equal or superior image quality compared with conventional TSE in most cases; in only 3 of 24 instances was conventional TSE scored as providing better image quality. Readers agreed on DL-TSE superiority, inferiority, or equivalence in 67% of categories in the axial plane and 75% in the oblique plane. DL-TSE also demonstrated a better contrast ratio in 75% of cases and reduced scan time by approximately 50%.

CONCLUSION: DL-accelerated TSE sequences generally provide equal or better image quality in pelvic MRI than standard TSE with significantly reduced acquisition times.

PMID:40755270 | DOI:10.1177/08465371251357790

Categories: Literature Watch

Pretreatment CT Texture Analysis for Predicting Survival Outcomes in Advanced Nonsmall Cell Lung Cancer Patients Receiving Immunotherapy: A Systematic Review and Meta-Analysis

Mon, 2025-08-04 06:00

Thorac Cancer. 2025 Aug;16(15):e70144. doi: 10.1111/1759-7714.70144.

ABSTRACT

BACKGROUND: While established biomarkers predict immunotherapy response in advanced nonsmall cell lung cancer (NSCLC), additional noninvasive imaging biomarkers may enhance treatment selection. Pretreatment computed tomography (CT) texture analysis may provide tumor characterization to predict survival outcomes.

METHODS: We conducted a systematic review and meta-analysis following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed and Cochrane Library databases were searched. Study quality was assessed using the quality in prognosis studies (QUIPS) tool. Hazard ratios (HRs) with 95% confidence intervals (CIs) were pooled using random-effects models.
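
The methods name random-effects pooling of hazard ratios; below is a self-contained sketch of the standard DerSimonian-Laird estimator, with placeholder HRs and confidence intervals rather than the included studies' values:

```python
# DerSimonian-Laird random-effects pooling of log hazard ratios.
import numpy as np

def pool_hr(hrs, ci_lows, ci_highs):
    y = np.log(hrs)                                   # log hazard ratios
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1 / se**2                                     # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                   # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = 1 / (se**2 + tau2)                         # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # heterogeneity I^2
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re), i2

hr, lo, hi, i2 = pool_hr(np.array([0.45, 0.60, 0.52]),   # placeholder studies
                         np.array([0.30, 0.42, 0.35]),
                         np.array([0.68, 0.86, 0.77]))
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.1f}%")
```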

RESULTS: Ten retrospective studies involving 2400 patients were included. Patients stratified as low-risk based on CT texture features demonstrated significantly improved survival outcomes compared to high-risk patients. The included studies used diverse radiomic features for risk stratification, including texture features from gray-level co-occurrence matrix (GLCM) such as entropy and dissimilarity, first-order statistical parameters including skewness and kurtosis, gray-level run-length matrix (GLRLM) features, and deep learning-derived features. Meta-analysis of five studies (n = 1102) revealed that patients stratified as low-risk based on these quantitative CT texture signatures had substantially better overall survival (OS) (p < 0.0001) with minimal heterogeneity (I2 = 0.0%). Similarly, progression-free survival (PFS) analysis of five studies (n = 1799) showed significant benefit for low-risk patients (p < 0.0001), though with moderate heterogeneity (I2 = 71.7%).

CONCLUSIONS: Pretreatment quantitative CT texture analysis effectively predicts survival outcomes in advanced NSCLC patients receiving immunotherapy, providing clinically meaningful risk stratification. This noninvasive imaging approach may serve as an additional tool to complement established pathological and molecular biomarkers, including liquid biopsy, for enhanced personalized treatment selection.

PMID:40755255 | DOI:10.1111/1759-7714.70144

Categories: Literature Watch

Combined application of deep learning and conventional computer vision for kidney ultrasound image classification in chronic kidney disease: preliminary study

Mon, 2025-08-04 06:00

Ultrasonography. 2025 Jun 15. doi: 10.14366/usg.25074. Online ahead of print.

ABSTRACT

PURPOSE: This study evaluates the feasibility of combining deep learning (DL) and conventional computer vision techniques to classify kidney ultrasound (US) images for the presence or absence of chronic kidney disease (CKD).

METHODS: A retrospective analysis was conducted on 258 kidneys (124 normal and 134 with CKD). A DL model was trained using midsagittal US images of the right kidney and corresponding contour maps to automate measurements of parenchymal thickness and parenchyma-to-sinus ratios. These features were integrated with a convolutional neural network for classification. The ground truth was determined based on clinical CKD diagnosis and laboratory data.
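
As a minimal illustration of one contour-derived feature named above, a sketch of how a parenchyma-to-sinus ratio might be computed from binary segmentation masks; the mask conventions and pixel scaling are assumptions:

```python
# Area ratio from binary masks of the midsagittal kidney slice.
import numpy as np

def parenchyma_to_sinus_ratio(parenchyma_mask: np.ndarray,
                              sinus_mask: np.ndarray,
                              pixel_area_mm2: float) -> float:
    """Ratio of parenchymal to sinus area, both in mm^2."""
    parenchyma_area = parenchyma_mask.sum() * pixel_area_mm2
    sinus_area = sinus_mask.sum() * pixel_area_mm2
    return parenchyma_area / max(sinus_area, 1e-6)  # guard against empty mask
```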

RESULTS: The combined DL and conventional feature extraction model achieved an accuracy of 82%, with a specificity of 93% and a negative predictive value of 97%. This approach outperformed models that relied solely on raw US images using DL, which achieved an accuracy of 64%. The inclusion of contour-based parenchymal measurements enhanced classification performance.

CONCLUSION: The integration of DL with automated feature extraction enables accurate classification of CKD using minimal user input. This proof-of-concept study highlights the potential of combining artificial intelligence-driven analysis with traditional metrics to serve as a noninvasive adjunct for CKD diagnosis and monitoring.

PMID:40755093 | DOI:10.14366/usg.25074

Categories: Literature Watch

Respiratory viral infections: when and where? A scoping review of spatiotemporal methods

Mon, 2025-08-04 06:00

J Glob Health. 2025 Aug 4;15:04213. doi: 10.7189/jogh.15.04213.

ABSTRACT

BACKGROUND: Respiratory viral infections pose a substantial disease burden worldwide. Spatiotemporal techniques help identify transmission patterns of these infections, thereby supporting timely control and prevention efforts. We aimed to synthesise the current state of evidence on quantitative methodologies for investigating the spatiotemporal characteristics of respiratory viral infections.

METHODS: We conducted a scoping review using the PRISMA-ScR guidelines. We searched three biomedical bibliographic databases, EMBASE, MEDLINE, and Web of Science, identifying studies that analysed spatiotemporal transmission of viral respiratory infectious diseases (published before 1 March 2023).

RESULTS: We identified 8466 articles from database searches, of which 152 met our inclusion criteria and were qualitatively synthesised. Most included articles (n = 140) were published during the COVID-19 pandemic, with 131 specifically analysing COVID-19. Exploratory research (n = 77) investigated the spatiotemporal transmission characteristics of respiratory infectious diseases, focussing on transmission patterns (n = 16) and influencing factors (n = 61). Forecasting research (n = 75) aimed to predict disease trends using either univariate (n = 57) or multivariate (n = 18) models, predominantly machine learning methods (n = 41). The application of advanced deep learning models (n = 20) in disease forecasting was often constrained by the quality of the available disease data.

CONCLUSIONS: There is a growing body of research on spatiotemporal analyses of respiratory viral infections, particularly during the COVID-19 pandemic. The acquisition of high-quality data remains important for effectively leveraging sophisticated models in disease forecasting research. Concurrently, although advanced modelling techniques are widely applied, future studies should consider capturing the complex spatiotemporal interactions in disease trajectory modelling.

PMID:40755019 | DOI:10.7189/jogh.15.04213

Categories: Literature Watch

Reflection-Enhanced Raman Identification of Single Bacterial Cells Patterned Using Capillary Assembly

Mon, 2025-08-04 06:00

ACS Sens. 2025 Aug 3. doi: 10.1021/acssensors.5c01225. Online ahead of print.

ABSTRACT

Raman spectroscopy is an enticing tool for the rapid identification of pathogenic bacteria and has the potential to meet the demand for early diagnosis and timely treatment of patients. However, it remains a challenge to devise a reliable Raman detection platform that yields reproducible signals from single bacterial cells. Herein, we utilize a reflective Ag/SiO2 film that enhances the intrinsically weak Raman signals by re-exciting the bacteria and reflecting downward-scattered photons, with maximum Raman intensities recorded by exciting the central edge of each single cell. The reflection-based configuration is simple, and its reliability as a sensing platform is validated by deep learning analysis. Importantly, given the positional dependence of the Raman intensity on the laser light, we employ capillarity-assisted particle assembly (CAPA) to selectively position single bacterial cells into a reflective topographical template, aligning the most Raman-active region of each cell with the trap-site geometry. Moreover, CAPA is used to isolate single cells directly from a suspension of artificial urine, eliminating the additional steps previously required to separate bacteria from biological samples. The proposed system has positive implications for future clinical settings that require simple, accurate, and reproducible detection of bacteria at the single-cell level.

PMID:40754993 | DOI:10.1021/acssensors.5c01225

Categories: Literature Watch

Enhancing Electroencephalogram-Based Prediction of Posttraumatic Stress Disorder Treatment Response Using Data Augmentation

Mon, 2025-08-04 06:00

Psychiatry Investig. 2025 Aug 5. doi: 10.30773/pi.2025.0133. Online ahead of print.

ABSTRACT

OBJECTIVE: This study aimed to improve the prediction of treatment response in patients with posttraumatic stress disorder (PTSD) by applying a variational autoencoder (VAE)-based data augmentation (DA) approach to electroencephalogram (EEG) data.

METHODS: EEG spectrograms were collected from patients diagnosed with PTSD. A VAE model was pretrained on the original spectrograms and used to generate augmented data samples. These augmented spectrograms were then utilized to train a deep neural network (DNN) classifier. The performance of the model was evaluated by comparing the area under the receiver operating characteristic curve (AUC) between models trained with and without DA.
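
To make the augmentation step concrete, here is a minimal PyTorch sketch of a VAE used the way the methods describe: encode real spectrograms, perturb the latent codes, and decode synthetic ones for classifier training. The flattened spectrogram size, latent width, and noise scale are illustrative assumptions:

```python
# A small VAE over flattened spectrograms, plus a latent-jitter augmenter.
import torch
import torch.nn as nn

class SpectrogramVAE(nn.Module):
    def __init__(self, in_dim: int = 64 * 64, z_dim: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

@torch.no_grad()
def augment(vae: SpectrogramVAE, x: torch.Tensor, noise: float = 0.5):
    # Jitter latent codes of real samples to generate nearby synthetic ones.
    z = vae.mu(vae.enc(x)) + noise * torch.randn(x.shape[0], vae.mu.out_features)
    return vae.dec(z)
```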

RESULTS: The DNN trained with VAE-augmented EEG data achieved an AUC of 0.85 in predicting treatment response, which was 0.11 higher than the model trained without augmentation. This reflects a significant improvement in classification performance and model generalization.

CONCLUSION: VAE-based DA effectively addresses the challenge of limited EEG data in clinical settings and enhances the performance of DNN models for treatment response prediction in PTSD. This approach presents a promising direction for future EEG-based neuropsychiatric research involving small datasets.

PMID:40754940 | DOI:10.30773/pi.2025.0133

Categories: Literature Watch

Automated Brain Tumor Segmentation using Hybrid YOLO and SAM

Mon, 2025-08-04 06:00

Curr Med Imaging. 2025 Jul 30. doi: 10.2174/0115734056392711250718201911. Online ahead of print.

ABSTRACT

INTRODUCTION: Early-stage brain tumor detection is critical for timely diagnosis and effective treatment. We propose a hybrid deep learning method that integrates a convolutional neural network (CNN) with YOLO (You Only Look Once) and the Segment Anything Model (SAM) for diagnosing tumors.

METHOD: We present a novel hybrid deep learning framework combining a CNN with YOLOv11 for real-time object detection and SAM for precise segmentation. The CNN backbone is enhanced with deeper convolutional layers to enable robust feature extraction; YOLOv11 localizes tumor regions, while SAM refines the tumor boundaries through detailed mask generation.

RESULTS: A dataset of 896 brain MRI images, including both tumor and healthy-brain images, was used for training, testing, and validating the model. The CNN-based YOLO+SAM method successfully segmented and diagnosed brain tumors.

DISCUSSION: The proposed model achieves strong performance, with a precision of 94.2%, a recall of 95.6%, and an mAP50(B) score of 96.5%, highlighting the effectiveness of the approach for early-stage brain tumor diagnosis.

CONCLUSION: The validation is demonstrated through a comprehensive ablation study. The robustness of the system makes it well suited for clinical deployment.

PMID:40754882 | DOI:10.2174/0115734056392711250718201911

Categories: Literature Watch

Fine-grained Prototype Network for MRI Sequence Classification

Mon, 2025-08-04 06:00

Curr Med Imaging. 2025 Jul 30. doi: 10.2174/0115734056361649250717162910. Online ahead of print.

ABSTRACT

INTRODUCTION: Magnetic resonance imaging (MRI) is a crucial method for clinical diagnosis. Different abdominal MRI sequences provide tissue and structural information from various perspectives, offering reliable evidence for accurate diagnoses. In recent years, with the rapid development of intelligent medical imaging, studies have begun exploring deep learning methods for MRI sequence recognition. However, because of the significant intra-class variations and subtle inter-class differences among MRI sequences, traditional deep learning algorithms still struggle to handle such complexly distributed data. In addition, the key features for identifying MRI sequence categories often lie in subtle details, while significant discrepancies can be observed among sequences from individual samples. Current deep learning-based MRI sequence classification methods tend to overlook these fine-grained differences across diverse samples.

METHODS: To overcome these challenges, this paper proposes a fine-grained prototype network, SequencesNet, for MRI sequence classification. A network combining convolutional neural networks (CNNs) with an improved vision transformer is constructed for feature extraction, considering both local and global information. Specifically, a Feature Selection Module (FSM) is added to the vision transformer, and fine-grained features for sequence discrimination are selected based on fused attention weights from multiple layers. A Prototype Classification Module (PCM) then classifies MRI sequences based on the fine-grained representations.
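
As an illustration of the prototype idea (not the paper's exact PCM), a minimal PyTorch sketch in which each class keeps a learnable prototype and logits are negative feature-prototype distances; feature width and class count are assumptions:

```python
# Prototype classification: nearest learnable prototype wins.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Smaller distance to a prototype -> larger logit for that class.
        return -torch.cdist(feats, self.prototypes)   # (batch, n_classes)

clf = PrototypeClassifier(feat_dim=256, n_classes=17)  # illustrative sizes
logits = clf(torch.randn(8, 256))
pred = logits.argmax(dim=1)
```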

RESULTS: Comprehensive experiments were conducted on a public abdominal MRI sequence classification dataset and a private dataset. The proposed SequencesNet achieved the highest accuracy on both, 96.73% and 95.98%, respectively, outperforming comparative prototype-based and fine-grained models. Visualization results show that SequencesNet better captures fine-grained information.

DISCUSSION: The proposed SequencesNet shows promising performance in MRI sequence classification, excelling in distinguishing subtle inter-class differences and handling large intra-class variability. Specifically, FSM enhances clinical interpretability by focusing on fine-grained features, and PCM improves clustering by optimizing prototype-sample distances. Compared to baselines like 3DResNet18 and TransFG, SequencesNet achieves higher recall and precision, particularly for similar sequences like DCE-LAP and DCE-PVP.

CONCLUSION: The proposed MRI sequence classification model, SequencesNet, addresses the subtle inter-class differences and significant intra-class variations present in medical images. Its modular design can be extended to other medical imaging tasks, including but not limited to multimodal image fusion, lesion detection, and disease staging. Future work will aim to decrease the computational complexity and increase the generalization of the model.

PMID:40754881 | DOI:10.2174/0115734056361649250717162910

Categories: Literature Watch

Advancing Alzheimer's Diagnosis with AI-Enhanced MRI: A Review of Challenges and Implications

Mon, 2025-08-04 06:00

Curr Neuropharmacol. 2025 Jul 30. doi: 10.2174/011570159X353595250303064846. Online ahead of print.

ABSTRACT

Neurological disorders are marked by neurodegeneration, leading to impaired cognition, psychosis, and mood alterations. These symptoms are typically associated with functional changes in both emotional and cognitive processes, which are often correlated with anatomical variations in the brain. Hence, brain structural magnetic resonance imaging (MRI) data have become a critical focus in research, particularly for predictive modeling. The involvement of large MRI data consortia, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), has facilitated numerous MRI-based classification studies utilizing advanced artificial intelligence models. Among these, convolutional neural networks (CNNs) and non-convolutional artificial neural networks (NC-ANNs) have been prominently employed for brain image processing tasks. These deep learning models have shown significant promise in improving predictive performance for the diagnosis of neurological disorders, with particular emphasis on Alzheimer's disease (AD). This review provides a comprehensive summary of these deep learning studies, critically evaluating their methodologies and outcomes. By categorizing the studies into sub-fields, we highlight the strengths and limitations of MRI-based deep learning approaches for diagnosing brain disorders. Furthermore, we discuss the potential implications of these advancements for clinical practice, considering the challenges and future directions for improving diagnostic accuracy and patient outcomes. Through this detailed analysis, we seek to contribute to ongoing efforts to harness AI for a better understanding and management of AD.

PMID:40754866 | DOI:10.2174/011570159X353595250303064846

Categories: Literature Watch

Evaluating the Efficacy of Various Deep Learning Architectures for Automated Preprocessing and Identification of Impacted Maxillary Canines in Panoramic Radiographs

Sun, 2025-08-03 06:00

Int Dent J. 2025 Aug 2;75(5):100940. doi: 10.1016/j.identj.2025.100940. Online ahead of print.

ABSTRACT

Previously, automated cropping and reasonable classification accuracy for distinguishing impacted from non-impacted canines were demonstrated. This study evaluates multiple convolutional neural network (CNN) architectures for improving accuracy, as a step towards fully automated software for identification of impacted maxillary canines (IMCs) in panoramic radiographs (PRs). Eight CNNs (SqueezeNet, GoogLeNet, NASNet-Mobile, ShuffleNet, VGG-16, ResNet 50, DenseNet 201, and Inception V3) were compared on their ability to classify two groups of PRs (impacted: n = 91; non-impacted: n = 91 maxillary canines) before preprocessing and after automated cropping. GoogLeNet achieved the highest classification performance among the tested CNN architectures: receiver operating characteristic (ROC) analysis yielded area under the curve (AUC) values of 0.90 without preprocessing and 0.99 with preprocessing, compared with 0.84 and 0.96, respectively, for SqueezeNet. Among the tested architectures, GoogLeNet thus performed best on this dataset for the automated identification of impacted maxillary canines on both cropped and uncropped PRs.
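
As an illustration of the transfer-learning setup such architecture comparisons typically use, a torchvision sketch for GoogLeNet with a two-class head; the freezing policy and head are assumptions, not the study's reported training recipe:

```python
# Swap the ImageNet classifier head for a binary impacted/non-impacted head.
import torch.nn as nn
from torchvision import models

net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 2)  # impacted vs. non-impacted canine

# Optionally freeze the pretrained backbone and fine-tune only the new head.
for p in net.parameters():
    p.requires_grad = False
for p in net.fc.parameters():
    p.requires_grad = True
```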

PMID:40753865 | DOI:10.1016/j.identj.2025.100940

Categories: Literature Watch

Sentiment analysis for deepfake X posts using novel transfer learning based word embedding and hybrid LGR approach

Sun, 2025-08-03 06:00

Sci Rep. 2025 Aug 3;15(1):28305. doi: 10.1038/s41598-025-10661-3.

ABSTRACT

With the growth of social media, people are sharing more content than ever, including X posts that reflect a variety of emotions and opinions. AI-generated synthetic text, known as deepfake text, is used to imitate human writing and disseminate misleading information and fake news. As deepfake technology continues to advance, it becomes harder to accurately gauge people's opinions of deepfake posts. Existing sentiment analysis algorithms frequently fail to capture the domain-specific, misleading, and context-sensitive characteristics of deepfake-related content. This study proposes a hybrid deep learning (DL) approach and a novel transfer learning (TL)-based feature extraction approach for sentiment analysis of deepfake posts. The TL-based approach combines the strengths of the hybrid DL technique to capture global and local contextual information. We compare the proposed approach with a range of machine learning (ML) algorithms as well as DL techniques for validation. Different feature extraction techniques, such as bag-of-words (BOW), term frequency-inverse document frequency (TF-IDF), word embedding features, and novel TL features that combine LSTM and DT, are used to build the models. The ML models are fine-tuned with extensive hyperparameter tuning to enhance performance and efficiency. The sentiment analysis performance of each applied method is validated using k-fold cross-validation. The experimental results indicate that the proposed LGR (LSTM+GRU+RNN) approach with novel TL features performs well, reaching 99% accuracy. The proposed approach helps detect and prevent the spread of deepfake content, keeping people and organizations safe from its negative effects. This study addresses a crucial gap in evaluating deepfake-specific social media sentiment by providing a comprehensive, scalable mechanism for monitoring and reducing the effect of fake content online.
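
A hedged Keras sketch of a stacked LSTM+GRU+RNN ("LGR") classifier consistent with the naming above; the vocabulary size, layer widths, and three-class softmax are assumptions rather than the paper's reported configuration:

```python
# Stacked recurrent classifier: LSTM -> GRU -> SimpleRNN -> softmax.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),  # token embeddings
    layers.LSTM(64, return_sequences=True),             # long-range context
    layers.GRU(64, return_sequences=True),              # gated refinement
    layers.SimpleRNN(32),                               # final sequence summary
    layers.Dense(3, activation="softmax"),              # neg / neutral / pos
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```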

PMID:40754634 | DOI:10.1038/s41598-025-10661-3

Categories: Literature Watch

Deep reinforcement learning-based mechanism to improve the throughput of EH-WSNs

Sun, 2025-08-03 06:00

Sci Rep. 2025 Aug 3;15(1):28321. doi: 10.1038/s41598-025-14111-y.

ABSTRACT

Energy Harvesting Wireless Sensor Networks (EH-WSNs) are widely adopted for their ability to harvest ambient energy. However, these networks face significant challenges due to the limited and continuously varying energy availability at individual nodes, which depends on unpredictable environmental sources. To operate effectively under such conditions, energy fluctuations must be regulated, which requires continuous monitoring of each node's energy level over time and adaptive adjustment of its operations. State-of-the-art mechanisms often categorize nodes or discretize energy levels, leading to issues such as the inability to select appropriate actions based on the nodes' actual energy states. Discretization simplifies the representation of energy states and reduces complexity, making designs easier to implement, but it overlooks subtle variations in energy levels, leading to inaccurate assessments and suboptimal performance. To overcome this limitation, this paper proposes an energy-aware transmission method based on a Deep Reinforcement Learning (DRL) algorithm that integrates Q-learning with Deep Neural Networks (DNNs). This method enables each node to adaptively select transmission actions based on its real-time energy state, improving responsiveness to dynamic network conditions. Simulation results show that the proposed method improves throughput by 11.79% compared to traditional methods. These findings demonstrate the effectiveness of DRL-based control in enhancing performance and energy efficiency in EH-WSNs.
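
Combining Q-learning with a DNN is the classic DQN recipe; here is a minimal sketch of how a node could map its continuous (un-discretized) energy state to a transmission action. The state vector, action set, and network sizes are illustrative assumptions, not the paper's exact design:

```python
# Minimal DQN-style action selection over a continuous energy state.
import random
import torch
import torch.nn as nn

ACTIONS = ["sleep", "tx_low_power", "tx_high_power"]  # assumed action set

q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, len(ACTIONS)))

def select_action(state: torch.Tensor, epsilon: float) -> int:
    # state: e.g., (residual energy, harvest rate, queue length), un-discretized
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))   # explore
    with torch.no_grad():
        return int(q_net(state).argmax())       # exploit learned Q-values

def td_target(reward: float, next_state: torch.Tensor, gamma: float = 0.99):
    # One-step bootstrapped target for the Q-learning update.
    with torch.no_grad():
        return reward + gamma * q_net(next_state).max()
```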

PMID:40754616 | DOI:10.1038/s41598-025-14111-y

Categories: Literature Watch

Cross-subject EEG signals-based emotion recognition using contrastive learning

Sun, 2025-08-03 06:00

Sci Rep. 2025 Aug 3;15(1):28295. doi: 10.1038/s41598-025-13289-5.

ABSTRACT

Electroencephalography (EEG)-based emotion brain-computer interfaces (BCIs) are a significant field within affective computing, where EEG signals enable reliable and objective applications. Despite these advancements, significant challenges persist, including individual differences in EEG signals across subjects during emotion recognition. To address this challenge, the current study introduces a cross-subject contrastive learning (CSCL) scheme for EEG signal representation of brain regions. The proposed scheme directly addresses generalisation across subjects, a primary challenge in EEG-based emotion recognition. The CSCL scheme captures complex patterns effectively by employing emotion and stimulus contrastive losses within hyperbolic space, and is designed primarily to learn representations that can effectively distinguish signals originating from different brain regions. We evaluate the proposed scheme on five datasets, including SEED, CEED, FACED, and MPED, obtaining accuracies of 97.70%, 96.26%, 65.98%, and 51.30%, respectively. The experimental results show that the proposed CSCL scheme is highly effective while addressing the challenges of cross-subject variability and label noise in EEG-based emotion recognition.
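
For intuition, here is a minimal sketch of a supervised contrastive loss of the general kind such schemes build on; it works in Euclidean space with cosine similarities, whereas the paper's emotion and stimulus losses operate in hyperbolic space, and the temperature and shapes are assumptions:

```python
# Supervised contrastive loss: pull same-label embeddings together,
# push different-label embeddings apart.
import torch
import torch.nn.functional as F

def sup_con_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    z = F.normalize(z, dim=1)                        # (batch, dim) embeddings
    sim = z @ z.t() / tau                            # scaled cosine similarities
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask_self, float("-inf"))       # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    # Average log-probability over each anchor's positives (same-label samples).
    loss = -(log_prob.masked_fill(mask_self, 0.0) * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

loss = sup_con_loss(torch.randn(16, 128), torch.randint(0, 4, (16,)))
```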

PMID:40754610 | DOI:10.1038/s41598-025-13289-5

Categories: Literature Watch

Artificial intelligence in orthopedics: fundamentals, current applications, and future perspectives

Sun, 2025-08-03 06:00

Mil Med Res. 2025 Aug 4;12(1):42. doi: 10.1186/s40779-025-00633-z.

ABSTRACT

Conventional diagnostic and therapeutic approaches in orthopedics are frequently time intensive and associated with elevated rates of diagnostic error, underscoring the urgent need for more efficient tools to improve the current situation. Recently, artificial intelligence (AI) has been increasingly integrated into orthopedic practice, providing data-driven approaches to support diagnostic and therapeutic processes. With the continuous advancement of AI technologies and their incorporation into routine orthopedic workflows, a comprehensive understanding of AI principles and their clinical applications has become increasingly essential. The review commences with a summary of the core concepts and historical evolution of AI, followed by an examination of machine learning and deep learning frameworks designed for orthopedic clinical and research applications. We then explore various AI-based applications in orthopedics, including image analysis, disease diagnosis, and treatment approaches such as surgical assistance, drug development, rehabilitation support, and personalized therapy. These applications are designed to help researchers and clinicians gain a deeper understanding of the current applications of AI in orthopedics. The review also highlights key challenges and limitations that affect the practical use of AI, such as data quality, model generalizability, and clinical validation. Finally, we discuss possible future directions for improving AI technologies and promoting their safe and effective integration into orthopedic care.

PMID:40754583 | DOI:10.1186/s40779-025-00633-z

Categories: Literature Watch

Adapting foundation models for rapid clinical response: intracerebral hemorrhage segmentation in emergency settings

Sun, 2025-08-03 06:00

Sci Rep. 2025 Aug 3;15(1):28314. doi: 10.1038/s41598-025-13742-5.

ABSTRACT

Intracerebral hemorrhage (ICH) is a medical emergency that demands rapid and accurate diagnosis for optimal patient management. Segmentation of hemorrhagic lesions on CT scans is a necessary first step for acquiring quantitative imaging data, which are becoming increasingly useful in the clinical setting. However, traditional manual segmentation is time-consuming and prone to inter-rater variability, creating a need for automated solutions. This study introduces a novel approach combining advanced deep learning models to segment extensive and morphologically variable ICH lesions in non-contrast CT scans. We propose a two-step methodology that begins with a user-defined loose bounding box around the lesion, followed by a fine-tuned YOLOv8-S object detection model that generates precise, slice-specific bounding boxes. These bounding boxes are then used to prompt the Medical Segment Anything Model for accurate lesion segmentation. Our pipeline achieves high segmentation accuracy with minimal supervision, demonstrating strong potential as a practical alternative to task-specific models. We evaluated the model on a dataset of 252 CT scans, demonstrating high segmentation accuracy and robustness. Finally, the resulting segmentation tool is integrated into a user-friendly web application prototype, offering clinicians a simple interface for lesion identification and radiomic quantification.
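
A hedged sketch of the two-step pipeline described above: YOLO boxes used as box prompts for a SAM-style predictor. The checkpoint names and paths are placeholders, and MedSAM is assumed here to expose the standard segment-anything SamPredictor interface:

```python
# Detection-to-prompt pipeline: YOLO proposes a slice-specific bounding box,
# which prompts a SAM-style model for the final lesion mask.
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("yolov8s_ich.pt")                      # placeholder fine-tuned weights
sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")  # placeholder
predictor = SamPredictor(sam)

def segment_slice(ct_slice_rgb: np.ndarray) -> np.ndarray:
    """Return a binary lesion mask for one CT slice (H, W, 3 uint8)."""
    det = detector(ct_slice_rgb)[0]
    if len(det.boxes) == 0:                            # no lesion detected
        return np.zeros(ct_slice_rgb.shape[:2], dtype=bool)
    box = det.boxes.xyxy[0].cpu().numpy()              # slice-specific box prompt
    predictor.set_image(ct_slice_rgb)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    return masks[0]
```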

PMID:40754551 | DOI:10.1038/s41598-025-13742-5

Categories: Literature Watch
