Deep learning

Enhancing PM2.5 prediction by mitigating annual data drift using wrapped loss and neural networks

Tue, 2025-02-11 06:00

PLoS One. 2025 Feb 11;20(2):e0314327. doi: 10.1371/journal.pone.0314327. eCollection 2025.

ABSTRACT

In many deep learning tasks, it is assumed that training data are sampled from a single distribution. This assumption may not hold for data collected in different contexts or during different periods; for instance, the temperatures in a city can vary from year to year for reasons that are not fully understood. In this paper, we used three distinct statistical techniques to analyze annual data drift at various monitoring stations. Each technique computes a P value per station by comparing data across five years (2014-2018), so that drift can be measured at specific locations and the stations with the strongest drift can be identified from previous years' datasets. To identify data drift and highlight areas where it is significant, we used meteorological air quality and weather data. We proposed two models that account for the characteristics of data drift in PM2.5 prediction and compared them with various deep learning models, such as Long Short-Term Memory (LSTM) and its variants, for prediction horizons from the next hour to the 64th hour. Our proposed models significantly outperform traditional neural networks. We also introduced a wrapped loss function that, when incorporated into a model, yields more accurate results than the original loss function alone; predictions were evaluated with the RMSE, MAE, and MAPE metrics. The proposed Front-loaded connection (FLC) and Back-loaded connection (BLC) models address the data drift issue, and the wrapped loss function further alleviates it during training, helping the neural network models achieve more accurate results. Experimental results show that the proposed models improve hourly PM2.5 prediction by 24.1%-16% at 1h-24h and by 12%-8.3% at 32h-64h compared with the baseline BiLSTM model, and by 24.6%-11.8% at 1h-24h and 10%-10.2% at 32h-64h compared with the CNN model.
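As an illustration of the drift-screening step described above, the following sketch flags year-over-year distribution shift at a single station with a two-sample Kolmogorov-Smirnov test. This is a hedged stand-in: the abstract does not name its three statistical techniques, and the station data here are synthetic.

```python
# Hypothetical drift check: compare consecutive years of PM2.5 readings
# at one station with a two-sample KS test; a small p-value suggests the
# annual distributions differ, i.e., data drift.
import numpy as np
from scipy.stats import ks_2samp

def drift_pvalues(yearly_readings):
    """yearly_readings: dict mapping year -> 1-D array of PM2.5 values."""
    years = sorted(yearly_readings)
    return {(a, b): ks_2samp(yearly_readings[a], yearly_readings[b]).pvalue
            for a, b in zip(years, years[1:])}

# Synthetic hourly data for 2014-2018 with a deliberate mean shift.
rng = np.random.default_rng(0)
station = {year: rng.normal(35 + 3 * i, 10, 8760)
           for i, year in enumerate(range(2014, 2019))}
for pair, p in drift_pvalues(station).items():
    print(pair, f"p = {p:.3g}")
```

Stations can then be ranked by how many year pairs fall below a significance threshold, identifying the highest-drift locations.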

PMID:39932913 | DOI:10.1371/journal.pone.0314327

Categories: Literature Watch

Quantifying multilabeled brain cells in the whole prefrontal cortex reveals reduced inhibitory and a subtype of excitatory neuronal marker expression in serotonin transporter knockout rats

Tue, 2025-02-11 06:00

Cereb Cortex. 2025 Feb 5;35(2):bhae486. doi: 10.1093/cercor/bhae486.

ABSTRACT

The prefrontal cortex regulates emotions and is influenced by serotonin. Rodents lacking the serotonin transporter (5-HTT) show increased anxiety and changes in excitatory and inhibitory cell markers in the prefrontal cortex. However, these observations are constrained by limitations in brain representation and cell segmentation, as standard immunohistochemistry is inadequate to account for volume variations in regions of interest. We utilized the StarDist deep learning network in combination with novel open-source methods to automate cell counts across a wide range of prefrontal cortex subregions. We found that 5-HTT knockout rats displayed increased anxiety and diminished relative numbers of subclass excitatory VGluT2+ and activated ΔFosB+ cells in the infralimbic and prelimbic cortices and of inhibitory GAD67+ cells in the prelimbic cortex. Anxiety levels and ΔFosB cell counts were positively correlated in wild-type, but not in knockout, rats. In conclusion, we present a novel method to quantify whole brain subregions of multilabeled cells in animal models and demonstrate reduced excitatory and inhibitory neuronal marker expression in prefrontal cortex subregions of 5-HTT knockout rats.
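For readers unfamiliar with StarDist-based counting, a minimal sketch follows. It assumes the pretrained "2D_versatile_fluo" model shipped with the stardist package rather than the study's own trained network, and the region mask is a hypothetical input.

```python
# Hedged sketch: segment nuclei with a pretrained StarDist model and
# count instances whose centroids fall inside a region-of-interest mask
# (e.g., a prefrontal cortex subregion).
import numpy as np
from scipy import ndimage
from csbdeep.utils import normalize
from stardist.models import StarDist2D

model = StarDist2D.from_pretrained("2D_versatile_fluo")

def count_cells(image, region_mask=None):
    labels, _ = model.predict_instances(normalize(image))
    ids = np.unique(labels)
    ids = ids[ids != 0]                        # drop the background label
    if region_mask is None:
        return len(ids)
    centroids = ndimage.center_of_mass(labels > 0, labels, ids)
    return sum(region_mask[int(r), int(c)] for r, c in centroids)
```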

PMID:39932853 | DOI:10.1093/cercor/bhae486

Categories: Literature Watch

Does Deep Learning Reconstruction Improve Ureteral Stone Detection and Subjective Image Quality in the CT Images of Patients with Metal Hardware?

Tue, 2025-02-11 06:00

J Endourol. 2025 Feb 11. doi: 10.1089/end.2024.0666. Online ahead of print.

ABSTRACT

Introduction: Diagnosing ureteral stones with low-dose CT in patients with metal hardware can be challenging because of image noise. The purpose of this study was to compare ureteral stone detection and image quality of low-dose and conventional CT scans with and without deep learning reconstruction (DLR) and metal artifact reduction (MAR) in the presence of metal hip prostheses. Methods: Ten urinary system combinations with 4 to 6 mm ureteral stones were implanted into a cadaver with bilateral hip prostheses. Each set was scanned under two different radiation doses (conventional dose [CD] = 115 mAs and ultra-low dose [ULD] = 6.0 mAs). Two scans were obtained for each dose: one with and one without DLR and MAR. Two blinded radiologists ranked each image in terms of artifact, image noise, image sharpness, overall quality, and diagnostic confidence. Stone detection accuracy at each setting was calculated. Results: ULD with DLR and MAR improved subjective image quality in all five domains (p < 0.05) compared with ULD alone. In addition, the subjective image quality for ULD with DLR and MAR was greater than that for CD in all five domains (p < 0.05). Stone detection accuracy of ULD improved with the application of DLR and MAR (p < 0.05). Stone detection accuracy of ULD with DLR and MAR was similar to that of CD (p > 0.25). Conclusions: DLR with MAR may allow the application of low-dose CT protocols in patients with hip prostheses. Application of DLR and MAR to ULD provided a stone detection accuracy comparable with CD, reduced radiation exposure by 94.8%, and improved subjective image quality.

PMID:39932744 | DOI:10.1089/end.2024.0666

Categories: Literature Watch

Diffusion-driven multi-modality medical image fusion

Tue, 2025-02-11 06:00

Med Biol Eng Comput. 2025 Feb 11. doi: 10.1007/s11517-025-03300-6. Online ahead of print.

ABSTRACT

Multi-modality medical image fusion (MMIF) technology utilizes the complementarity of different modalities to provide more comprehensive diagnostic insights for clinical practice. Existing deep learning-based methods often focus on extracting the primary information from individual modalities while ignoring the correlation of information distribution across different modalities, which leads to insufficient fusion of image details and color information. To address this problem, a diffusion-driven MMIF method is proposed to leverage the information distribution relationship among multi-modality images in the latent space. To better preserve the complementary information from different modalities, a local and global network (LAGN) is proposed. Additionally, a loss strategy is designed to establish robust constraints among diffusion-generated images, original images, and fused images. This strategy supervises the training process and prevents information loss in fused images. The experimental results demonstrate that the proposed method surpasses state-of-the-art image fusion methods in terms of unsupervised metrics on three datasets: MRI/CT, MRI/PET, and MRI/SPECT images. The proposed method successfully captures rich details and color information. Furthermore, 16 doctors and medical students were invited to evaluate the effectiveness of our method in assisting clinical diagnosis and treatment.

PMID:39932643 | DOI:10.1007/s11517-025-03300-6

Categories: Literature Watch

Double-mix pseudo-label framework: enhancing semi-supervised segmentation on category-imbalanced CT volumes

Tue, 2025-02-11 06:00

Int J Comput Assist Radiol Surg. 2025 Feb 11. doi: 10.1007/s11548-024-03281-1. Online ahead of print.

ABSTRACT

PURPOSE: Deep-learning-based supervised CT segmentation relies on fully and densely labeled data, the labeling process of which is time-consuming. In this study, our proposed method aims to improve segmentation performance on CT volumes with limited annotated data by considering category-wise difficulties and distribution.

METHODS: We propose a novel confidence-difficulty weight (CDifW) allocation method that considers confidence levels, balancing the training across different categories, influencing the loss function and volume-mixing process for pseudo-label generation. Additionally, we introduce a novel Double-Mix Pseudo-label Framework (DMPF), which strategically selects categories for image blending based on the distribution of voxel-counts per category and the weight of segmentation difficulty. DMPF is designed to enhance the segmentation performance of categories that are challenging to segment.
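The abstract does not give the CDifW formula, so the snippet below is only an illustrative stand-in for the idea: categories on which the model is less confident are treated as more difficult and receive larger loss weights.

```python
# Hypothetical confidence-based category weighting (not the paper's
# exact CDifW definition): hard classes get up-weighted in the loss.
import torch

def confidence_difficulty_weights(probs, labels, num_classes, eps=1e-6):
    """probs: (N, C, D, H, W) softmax outputs; labels: (N, D, H, W) ints."""
    conf = probs.max(dim=1).values             # per-voxel top confidence
    weights = torch.ones(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            # low mean confidence -> difficult class -> higher weight
            weights[c] = 1.0 / (conf[mask].mean() + eps)
    return weights * num_classes / weights.sum()   # normalize to mean ~1
```

Such weights can feed both a weighted segmentation loss and the selection probabilities used when mixing volumes for pseudo-label generation.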

RESULT: Our approach was tested on two commonly used datasets: a Congenital Heart Disease (CHD) dataset and a Beyond-the-Cranial-Vault (BTCV) Abdomen dataset. Compared to the SOTA methods, our approach achieved an improvement of 5.1% and 7.0% in Dice score for the segmentation of difficult-to-segment categories on 5% of the labeled data in CHD and 40% of the labeled data in BTCV, respectively.

CONCLUSION: Our method improves segmentation performance in difficult categories within CT volumes by category-wise weights and weight-based mixture augmentation. Our method was validated across multiple datasets and is significant for advancing semi-supervised segmentation tasks in health care. The code is available at https://github.com/MoriLabNU/Double-Mix .

PMID:39932621 | DOI:10.1007/s11548-024-03281-1

Categories: Literature Watch

Eliminating the second CT scan of dual-tracer total-body PET/CT via deep learning-based image synthesis and registration

Tue, 2025-02-11 06:00

Eur J Nucl Med Mol Imaging. 2025 Feb 11. doi: 10.1007/s00259-025-07113-5. Online ahead of print.

ABSTRACT

PURPOSE: This study aims to develop and validate a deep learning framework designed to eliminate the second CT scan of dual-tracer total-body PET/CT imaging.

METHODS: We retrospectively included three cohorts of 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Out of these, 167 underwent [68Ga]Ga-DOTATATE/[18F]FDG, 50 underwent [68Ga]Ga-PSMA-11/[18F]FDG, and 30 underwent [68Ga]Ga-FAPI-04/[18F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach allows for the transformation of attenuation-correction CT (ACCT) images from the first scan into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second tracer PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) across the whole body and specific regions. Furthermore, the quantitative accuracy of PET images was assessed by calculating standardized uptake value (SUV) bias in normal organs and lesions.
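A minimal sketch of the named image-similarity metrics, computed with scikit-image on a pseudo-ACCT/ground-truth pair; the array inputs and derived HU data range are assumptions.

```python
# Evaluate a synthesized CT against the ground truth with MAE, PSNR,
# and SSIM, as listed in the Methods above.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pseudo_ct, true_ct):
    data_range = float(true_ct.max() - true_ct.min())
    return {
        "MAE":  float(np.abs(pseudo_ct - true_ct).mean()),
        "PSNR": peak_signal_noise_ratio(true_ct, pseudo_ct,
                                        data_range=data_range),
        "SSIM": structural_similarity(true_ct, pseudo_ct,
                                      data_range=data_range),
    }
```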

RESULTS: The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images. The MAE of SUV for whole-body PET images was 0.06 for [68Ga]Ga-DOTATATE, 0.08 for [68Ga]Ga-PSMA-11, 0.06 for [68Ga]Ga-FAPI-04, and 0.05 for [18F]FDG. Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across the four tracers.

CONCLUSION: The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise in reducing CT radiation dose for dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.

PMID:39932542 | DOI:10.1007/s00259-025-07113-5

Categories: Literature Watch

DeepInterAware: Deep Interaction Interface-Aware Network for Improving Antigen-Antibody Interaction Prediction from Sequence Data

Tue, 2025-02-11 06:00

Adv Sci (Weinh). 2025 Feb 11:e2412533. doi: 10.1002/advs.202412533. Online ahead of print.

ABSTRACT

Identifying interactions between candidate antibodies and target antigens is a key step in developing effective human therapeutics. The antigen-antibody interaction (AAI) occurs at the structural level, but the limited structure data poses a significant challenge. However, recent studies revealed that structural information can be learned from the vast amount of sequence data, indicating that interaction prediction can benefit from the abundance of antigen and antibody sequences. In this study, DeepInterAware (deep interaction interface-aware network) is proposed: a framework that dynamically incorporates interaction interface information learned directly from sequence data, along with the inherent specificity information of the sequences. Experimental results in interaction prediction demonstrate that DeepInterAware outperforms existing methods and exhibits promising inductive capabilities for predicting interactions involving unseen antigens or antibodies, as well as transfer capabilities for similar tasks. More notably, DeepInterAware has unique advantages that existing methods lack. First, DeepInterAware can dive into the underlying mechanisms of AAIs, offering the ability to identify potential binding sites. Second, it is proficient in detecting mutations within antigens or antibodies and can be extended to precise predictions of binding free energy changes upon mutation. The HER2-targeting antibody screening experiment further underscores DeepInterAware's exceptional capability in identifying binding antibodies for target antigens, establishing it as an important tool for antibody screening.

PMID:39932383 | DOI:10.1002/advs.202412533

Categories: Literature Watch

ChatExosome: An Artificial Intelligence (AI) Agent Based on Deep Learning of Exosomes Spectroscopy for Hepatocellular Carcinoma (HCC) Diagnosis

Tue, 2025-02-11 06:00

Anal Chem. 2025 Feb 11. doi: 10.1021/acs.analchem.4c06677. Online ahead of print.

ABSTRACT

Large language models (LLMs) hold significant promise in the field of medical diagnosis, but the direct diagnosis of hepatocellular carcinoma (HCC) still faces many challenges. α-Fetoprotein (AFP) is a commonly used tumor marker for liver cancer; however, relying on AFP alone can result in missed diagnoses of HCC. We developed an artificial intelligence (AI) agent centered on LLMs, named ChatExosome, which provides an interactive and convenient system for clinical spectroscopic analysis and diagnosis. ChatExosome consists of two main components. The first is deep learning on the Raman fingerprints of exosomes derived from HCC: based on a patch-based 1D self-attention mechanism and downsampling, a feature fusion transformer (FFT) was designed to process the Raman spectra of exosomes, achieving accuracies of 95.8% on cell-derived exosomes and 94.1% on 165 clinical samples. The second component is the interactive chat agent based on the LLM, with retrieval-augmented generation (RAG) used to enhance exosome-related knowledge. Overall, the LLM serves as the core of this interactive system, identifying users' intentions and invoking the appropriate plugins to process the Raman data of exosomes. This is the first AI agent focused on exosome spectroscopy and diagnosis; it enhances the interpretability of classification results, enables physicians to leverage cutting-edge medical research and AI techniques to optimize medical decision-making, and shows great potential for intelligent diagnosis.
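The FFT architecture is not published in the abstract; the sketch below only illustrates the general pattern it names, patch embedding of a 1-D spectrum followed by self-attention, with all sizes chosen arbitrarily.

```python
# Illustrative patch-based 1-D self-attention classifier for Raman
# spectra (a generic stand-in, not the paper's feature fusion transformer).
import torch
import torch.nn as nn

class PatchSpectrumClassifier(nn.Module):
    def __init__(self, patch=16, dim=64, classes=2):
        super().__init__()
        # a strided Conv1d both patches and downsamples the spectrum
        self.embed = nn.Conv1d(1, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                          # x: (B, 1, spectrum_len)
        tokens = self.embed(x).transpose(1, 2)     # (B, n_patches, dim)
        return self.head(self.encoder(tokens).mean(dim=1))

logits = PatchSpectrumClassifier()(torch.randn(4, 1, 1024))  # shape (4, 2)
```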

PMID:39932366 | DOI:10.1021/acs.analchem.4c06677

Categories: Literature Watch

Correction to "DL 101: Basic Introduction to Deep Learning With Its Application in Biomedical Related Fields"

Tue, 2025-02-11 06:00

Stat Med. 2025 Feb 28;44(5):e10349. doi: 10.1002/sim.10349.

NO ABSTRACT

PMID:39932330 | DOI:10.1002/sim.10349

Categories: Literature Watch

Deep Learning Radiomics Based on MRI for Differentiating Benign and Malignant Parapharyngeal Space Tumors

Tue, 2025-02-11 06:00

Laryngoscope. 2025 Feb 11. doi: 10.1002/lary.32043. Online ahead of print.

ABSTRACT

OBJECTIVE: The study aims to establish a preoperative diagnostic tool based on deep learning and conventional radiomics features to guide clinical decision-making for parapharyngeal space (PPS) tumors.

METHODS: This retrospective study included 217 patients with PPS tumors from two medical centers in China, treated from March 1, 2011, to October 1, 2023. The study cohort was divided into a training set (n = 145) and a test set (n = 72). A deep learning (DL) model and a conventional radiomics (Rad) model based on neck MRI were constructed to distinguish malignant tumors (MTs) from benign tumors (BTs) of the PPS. A deep learning radiomics (DLR) model integrating deep learning and radiomics features was further developed. The area under the receiver operating characteristic curve (AUC), specificity, and sensitivity were used to evaluate model performance, and decision curve analysis (DCA) was applied to assess clinical utility.

RESULTS: Compared with the Rad and DL models, the DLR model showed excellent performance in this study, with the highest AUC of 0.899 and 0.821 in the training set and test set, respectively. The DCA curve confirmed the clinical utility of the DLR model in distinguishing the pathological types of PPS tumors.

CONCLUSION: The DLR model demonstrated a high predictive ability in diagnosing MTs and BTs of PPS and could serve as a powerful tool to aid clinical decision-making in the preoperative diagnosis of PPS tumors.

LEVEL OF EVIDENCE: III Laryngoscope, 2025.

PMID:39932109 | DOI:10.1002/lary.32043

Categories: Literature Watch

Recent Development, Applications, and Patents of Artificial Intelligence in Drug Design and Development

Tue, 2025-02-11 06:00

Curr Drug Discov Technol. 2025 Feb 10. doi: 10.2174/0115701638364199250123062248. Online ahead of print.

ABSTRACT

Drug design and development are crucial areas of study for chemists and pharmaceutical companies. Nevertheless, the significant expenses, lengthy process, inaccurate delivery, and limited effectiveness present obstacles and barriers that affect the development and exploration of new drugs. Moreover, big and complex datasets from clinical trials, genomics, proteomics, and microarray data also disrupt the drug discovery approach. The integration of Artificial Intelligence (AI) into drug design is both timely and crucial due to several pressing challenges in the pharmaceutical industry, including the escalating costs of drug development, high failure rates in clinical trials, and the increasing complexity of disease biology. AI offers innovative solutions to address these challenges, promising to improve the efficiency, precision, and success rates of drug discovery and development. AI and machine learning (ML) technologies are essential tools in the field of drug discovery and development. More precisely, the field has been revolutionized by the utilization of deep learning (DL) techniques and artificial neural networks (ANNs). DL and ML algorithms have been employed in drug design using various approaches such as physicochemical activity, polypharmacology, drug repositioning, quantitative structure-activity relationship, pharmacophore modeling, drug monitoring and release, toxicity prediction, ligand-based virtual screening, structure-based virtual screening, and peptide synthesis. The use of DL and AI in this field is supported by historical evidence. Furthermore, management strategies, curation, and unconventional data mining have assisted modern modeling algorithms. In summary, the progress made in artificial intelligence and deep learning algorithms offers a promising opportunity for the development and discovery of effective drugs, ultimately leading to significant benefits for humanity. In this review, several tools and algorithmic programs used in drug design are discussed, along with descriptions of the patents that have been granted for the use of AI in this field, which constitutes the main focus of this review and differentiates it from already published materials.

PMID:39931986 | DOI:10.2174/0115701638364199250123062248

Categories: Literature Watch

PortNet: Achieving lightweight architecture and high accuracy in lung cancer cell classification

Tue, 2025-02-11 06:00

Heliyon. 2025 Jan 9;11(3):e41850. doi: 10.1016/j.heliyon.2025.e41850. eCollection 2025 Feb 15.

ABSTRACT

BACKGROUND: As one of the cancers with the highest incidence and mortality rates worldwide, the timeliness and accuracy of cell type diagnosis in lung cancer are crucial for patients' treatment decisions. This study aims to develop a novel deep learning model to provide efficient, accurate, and cost-effective auxiliary diagnosis for the pathological types of lung cancer cells.

METHOD: This paper introduces a model named PortNet, designed to significantly reduce the model's parameter size and achieve lightweight characteristics without compromising classification accuracy. We incorporated 1 × 1 convolutional blocks into the Depthwise Separable Convolution architecture to further decrease the model's parameter count. Additionally, the integration of the Squeeze-and-Excitation self-attention module enhances feature representation without substantially increasing the number of parameters, thereby maintaining high predictive performance.
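A minimal sketch of the two building blocks named above, a depthwise separable convolution with an added 1 × 1 block and a Squeeze-and-Excitation (SE) module; channel counts and the reduction ratio are illustrative, not PortNet's actual configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):                      # channel self-attention
    def __init__(self, ch, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        s = x.mean(dim=(2, 3))                 # squeeze: global avg pool
        return x * self.fc(s)[:, :, None, None]  # excite: rescale channels

class DSConvSE(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(cin, cin, 3, padding=1, groups=cin),  # depthwise
            nn.Conv2d(cin, cout, 1),                        # pointwise 1x1
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.se = SEBlock(cout)

    def forward(self, x):
        return self.se(self.block(x))

y = DSConvSE(32, 64)(torch.randn(1, 32, 56, 56))   # -> (1, 64, 56, 56)
```

Factoring a k × k convolution into depthwise and pointwise stages is what drives the parameter savings, while the SE block adds only two small linear layers.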

RESULT: Our tests demonstrated that PortNet reduces the total parameter count to 2,621,827, more than a fifth smaller than some mainstream CNN models, marking a substantial advancement for deployment on portable devices. We also established widely used traditional models as benchmarks to illustrate the efficacy of PortNet. In external tests, PortNet achieved an average accuracy (ACC) of 99.89% and an area under the curve (AUC) of 99.27%. During five-fold cross-validation, PortNet maintained an average ACC of 99.51% ± 1.50% and an F1 score of 99.50% ± 1.51%, showcasing its lightweight design and exceptionally high accuracy. This presents a promising opportunity for integration into hospital systems to assist physicians in diagnosis.

CONCLUSION: This study significantly reduces the parameter count through an innovative model structure while maintaining high accuracy and stability, demonstrating outstanding performance in lung cancer cell classification tasks. The model holds the potential to become an efficient, accurate, and cost-effective auxiliary diagnostic tool for pathological classification of lung cancer in the future.

PMID:39931476 | PMC:PMC11808607 | DOI:10.1016/j.heliyon.2025.e41850

Categories: Literature Watch

Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos

Tue, 2025-02-11 06:00

Ophthalmol Sci. 2024 Nov 28;5(2):100659. doi: 10.1016/j.xops.2024.100659. eCollection 2025 Mar-Apr.

ABSTRACT

PURPOSE: Recent studies utilized ocular images and deep learning (DL) to predict refractive error and yielded notable results. However, most studies did not address biases from imbalanced datasets or conduct external validations. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs.

DESIGN: Retrospective study.

SUBJECTS: We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted.

METHODS: This retrospective study developed regression-based models, including ResNet34 with DIR and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power.
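Label Distribution Smoothing has a standard published form (Yang et al., 2021): smooth the empirical label histogram with a kernel and reweight samples by inverse effective density. A minimal sketch follows, with the bin count and kernel width as arbitrary choices.

```python
# LDS-style sample weights for imbalanced regression targets
# (e.g., refractive error in diopters).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels, bins=50, sigma=2.0):
    hist, edges = np.histogram(labels, bins=bins)
    eff_density = gaussian_filter1d(hist.astype(float), sigma=sigma)
    idx = np.clip(np.digitize(labels, edges[1:-1]), 0, bins - 1)
    w = 1.0 / np.maximum(eff_density[idx], 1e-6)  # rare labels weigh more
    return w * len(w) / w.sum()                   # normalize to mean 1

labels = np.random.default_rng(0).normal(-1.0, 2.0, 1000)
weights = lds_weights(labels)   # feed into a weighted regression loss
```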

MAIN OUTCOME MEASURES: Mean absolute error (MAE) and coefficient of determination were used to evaluate the models' performances. The Wilcoxon signed-rank test was performed to assess statistical significance between DIR-integrated models and their baseline versions.

RESULTS: For prediction of spherical power, ResNet34 with DIR (MAE: 0.84D) and SwinV2 with DIR (MAE: 0.77D) significantly outperformed their baselines, ResNet34 (MAE: 0.88D; P < 0.001) and SwinV2 (MAE: 0.87D; P < 0.001), in the internal test. For prediction of SE power, ResNet34 with DIR (MAE: 0.78D) and SwinV2 with DIR (MAE: 0.75D) likewise significantly outperformed their baselines, ResNet34 (MAE: 0.81D; P < 0.001) and SwinV2 (MAE: 0.78D; P < 0.05), in the internal test. Similar trends were observed in the external test sets for both spherical and SE power prediction.

CONCLUSIONS: Deep imbalanced regression-integrated DL models showed potential in addressing data imbalance and improving the prediction of refractive error. These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

PMID:39931359 | PMC:PMC11808727 | DOI:10.1016/j.xops.2024.100659

Categories: Literature Watch

A comprehensive hog plum leaf disease dataset for enhanced detection and classification

Tue, 2025-02-11 06:00

Data Brief. 2025 Jan 21;59:111311. doi: 10.1016/j.dib.2025.111311. eCollection 2025 Apr.

ABSTRACT

A comprehensive hog plum leaf disease dataset is greatly needed for agricultural research, precision agriculture, and efficient disease management. It supports the development of machine learning models for early detection and classification of disease, reducing dependency on manual inspection and enabling timely interventions. Such a dataset provides a benchmark for training and testing algorithms, further enhancing automated monitoring systems and decision-support tools in sustainable agriculture. It enables better crop management, less use of chemicals, and more focused agronomic practices. This dataset will contribute to ongoing global research on disease-resistant plant strategies and efficient management practices, improving agricultural productivity and sustainability. The images were collected from different regions of Bangladesh. Two classes were used: 'Healthy' and 'Insect hole', representing different stages of disease progression. Augmentation techniques involving flipping, rotating, scaling, translating, cropping, adding noise, and adjusting brightness and contrast expanded the dataset from 3782 to 20,000 images, forming a robust training set for deep learning and hence better detection of the disease.
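The listed augmentations map naturally onto standard library transforms; the following torchvision sketch uses placeholder parameter values, since the dataset authors' exact settings are not stated.

```python
# One possible rendering of the augmentation list: flip, rotate,
# scale/crop, translate, brightness/contrast jitter, and additive noise.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(30),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),        # scale + crop
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # translate
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Lambda(                                          # add noise
        lambda t: (t + 0.01 * torch.randn_like(t)).clamp(0.0, 1.0)),
])
```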

PMID:39931093 | PMC:PMC11808602 | DOI:10.1016/j.dib.2025.111311

Categories: Literature Watch

Artificial intelligence in high-dose-rate brachytherapy treatment planning for cervical cancer: a review

Tue, 2025-02-11 06:00

Front Oncol. 2025 Jan 27;15:1507592. doi: 10.3389/fonc.2025.1507592. eCollection 2025.

ABSTRACT

Cervical cancer remains a significant global health concern, characterized by high morbidity and mortality rates. High-dose-rate brachytherapy (HDR-BT) is a critical component of cervical cancer treatment, requiring precise and efficient treatment planning. However, the process is labor-intensive, heavily reliant on operator expertise, and prone to variability due to factors such as applicator shifts and organ filling changes. Recent advancements in artificial intelligence (AI), particularly in medical image processing, offer significant potential for automating and standardizing treatment planning in HDR-BT. This review examines the progress and challenges of AI applications in HDR-BT treatment planning, focusing on automatic segmentation, applicator reconstruction, dose calculation, and plan optimization. By addressing current limitations and exploring future directions, this paper aims to guide the integration of AI into clinical practice, ultimately improving treatment accuracy, reducing preparation time, and enhancing patient outcomes.

PMID:39931087 | PMC:PMC11808022 | DOI:10.3389/fonc.2025.1507592

Categories: Literature Watch

Detection of dental caries under fixed dental prostheses by analyzing digital panoramic radiographs with artificial intelligence algorithms based on deep learning methods

Mon, 2025-02-10 06:00

BMC Oral Health. 2025 Feb 10;25(1):216. doi: 10.1186/s12903-025-05577-3.

ABSTRACT

BACKGROUND: The aim of this study was to evaluate the efficacy of detecting dental caries under fixed dental prostheses (FDPs) through the analysis of panoramic radiographs utilizing convolutional neural network (CNN)-based You Only Look Once (YOLO) models. Deep learning algorithms can analyze datasets of dental images, such as panoramic radiographs, to accurately identify and classify carious lesions. Artificial intelligence, specifically deep learning methods, may help practitioners detect and diagnose caries from radiographic images.

METHODS: The panoramic radiographs of 1004 patients who had FDPs on their teeth and met the inclusion criteria were divided into 904 (90%) images as the training dataset and 100 (10%) images as the test dataset. After high detection scores were attained with YOLOv7, regions of interest (ROIs) containing FDPs were automatically detected and cropped by the YOLOv7 model. In the second stage, 2467 cropped images were divided into 2248 (91%) images as the training dataset and 219 (9%) images as the test dataset. Caries under the FDPs were detected using both the YOLOv7 model and an improved YOLOv7 model (YOLOv7 + CBAM). The performance of the deep learning models was evaluated using recall, precision, F1, and mean average precision (mAP) scores.
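The two-stage pipeline (detect FDP ROIs, crop, then detect caries within the crops) can be sketched as below. This uses the ultralytics API as a convenient stand-in for the paper's YOLOv7 training code, and the weight files are hypothetical names.

```python
# Hedged two-stage detection sketch; "fdp.pt" and "caries.pt" are
# hypothetical trained weights, not artifacts from the study.
from PIL import Image
from ultralytics import YOLO

stage1 = YOLO("fdp.pt")      # stage 1: find fixed dental prostheses
stage2 = YOLO("caries.pt")   # stage 2: find caries inside each ROI

def detect_caries(panoramic_path):
    img = Image.open(panoramic_path)
    rois = stage1(img)[0].boxes.xyxy.tolist()   # ROI boxes as [x1,y1,x2,y2]
    findings = []
    for x1, y1, x2, y2 in rois:
        crop = img.crop((int(x1), int(y1), int(x2), int(y2)))
        findings.append(stage2(crop)[0].boxes)  # caries boxes per ROI
    return findings
```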

RESULTS: In the first stage, the YOLOv7 model achieved 0.947 recall, 0.966 precision, 0.968 mAP and 0.956 F1 scores in detecting the FDPs. In the second stage the YOLOv7 model achieved 0.791 recall, 0.837 precision, 0.800 mAP and 0.813 F1 scores in detecting the caries under the FDPs, while the YOLOv7 + CBAM model achieved 0.827 recall, 0.834 precision, 0.846 mAP, and 0.830 F1 scores.

CONCLUSION: The use of deep learning models to detect dental caries under FDPs by analyzing panoramic radiographs has shown promising results. The study highlights that panoramic radiographs with appropriate image features can be used in combination with a detection system supported by deep learning methods. In the long term, our study may allow for accurate and rapid diagnoses that significantly improve the preservation of teeth under FDPs.

PMID:39930440 | DOI:10.1186/s12903-025-05577-3

Categories: Literature Watch

A Bayesian meta-analysis on MRI-based radiomics for predicting EGFR mutation in brain metastasis of lung cancer

Mon, 2025-02-10 06:00

BMC Med Imaging. 2025 Feb 10;25(1):44. doi: 10.1186/s12880-025-01566-8.

ABSTRACT

OBJECTIVES: This study aimed to investigate the diagnostic test accuracy of MRI-based radiomics studies for predicting EGFR mutation in brain metastasis originating from lung cancer.

METHODS: This meta-analysis, conducted following PRISMA guidelines, involved a systematic search in PubMed, Embase, and Web of Science up to November 3, 2024. Eligibility criteria followed the PICO framework, assessing population, intervention, comparison, and outcome. The RQS and QUADAS-2 tools were employed for quality assessment. A Bayesian model determined summary estimates, and statistical analysis was conducted using R and STATA software.

RESULTS: Eleven studies consisting of nine training and ten validation cohorts were included in the meta-analysis. In the training cohorts, MRI-based radiomics showed robust predictive performance for EGFR mutations in brain metastases, with an AUC of 0.90 (95% CI: 0.82-0.93), sensitivity of 0.84 (95% CI: 0.80-0.88), specificity of 0.86 (95% CI: 0.80-0.91), and a diagnostic odds ratio (DOR) of 34.17 (95% CI: 19.16-57.49). Validation cohorts confirmed strong performance, with an AUC of 0.91 (95% CI: 0.69-0.95), sensitivity of 0.79 (95% CI: 0.73-0.84), specificity of 0.88 (95% CI: 0.83-0.93), and a DOR of 31.33 (95% CI: 15.50-58.3). Subgroup analyses revealed notable trends: the T1C + T2WI sequences and 3.0 T scanners showed potential superiority, machine learning-based radiomics and manual segmentation exhibited higher diagnostic accuracy, and PyRadiomics emerged as the preferred feature extraction software.

CONCLUSION: This meta-analysis suggests that MRI-based radiomics holds promise for the non-invasive prediction of EGFR mutations in brain metastases of lung cancer.

PMID:39930347 | DOI:10.1186/s12880-025-01566-8

Categories: Literature Watch

Development of a deep learning system for predicting biochemical recurrence in prostate cancer

Mon, 2025-02-10 06:00

BMC Cancer. 2025 Feb 10;25(1):232. doi: 10.1186/s12885-025-13628-9.

ABSTRACT

BACKGROUND: Biochemical recurrence (BCR) occurs in 20%-40% of men with prostate cancer (PCa) who undergo radical prostatectomy. Predicting which patients will experience BCR in advance helps in formulating more targeted prostatectomy procedures. However, current preoperative recurrence prediction mainly relies on the use of the Gleason grading system, which omits within-grade morphological patterns and subtle histopathological features, leaving a significant amount of prognostic potential unexplored.

METHODS: We collected and selected a total of 1585 prostate biopsy images with tumor regions from 317 patients (five whole slide images [WSIs] per patient) to develop a deep learning system for predicting BCR of PCa before prostatectomy. The Inception_v3 neural network was employed to train and test models developed from patch-level images. The multiple instance learning method was used to extract whole slide image-level features. Finally, patient-level artificial intelligence models were developed by integrating deep learning-generated pathology features with several machine learning algorithms.
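The abstract does not specify the MIL variant, so the following is a generic attention-pooling sketch of how patch-level features can be aggregated into a slide-level BCR risk score; the feature dimension assumes Inception-style patch embeddings.

```python
# Generic attention-based multiple instance learning head: weight each
# patch feature, pool to a slide embedding, and score BCR risk.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats):                    # (n_patches, feat_dim)
        w = torch.softmax(self.attn(patch_feats), dim=0)   # patch weights
        slide_feat = (w * patch_feats).sum(dim=0)          # weighted pool
        return torch.sigmoid(self.classifier(slide_feat)) # BCR risk score

risk = AttentionMIL()(torch.randn(500, 2048))   # one slide, 500 patches
```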

RESULTS: The BCR prediction system demonstrated great performance in the testing cohort (AUC = 0.911, 95% confidence interval: 0.840-0.982) and showed the potential to produce favorable clinical benefits according to decision curve analyses. Increasing the number of WSIs for each patient improves the performance of the prediction system. Additionally, the study explores the correlation between deep learning-generated features and pathological findings, emphasizing the interpretative potential of artificial intelligence models in pathology.

CONCLUSIONS: A deep learning system can use biopsy samples to predict the risk of BCR in PCa, thereby helping to formulate targeted treatment strategies.

PMID:39930342 | DOI:10.1186/s12885-025-13628-9

Categories: Literature Watch

Neural architecture search with Deep Radon Prior for sparse-view CT image reconstruction

Mon, 2025-02-10 06:00

Med Phys. 2025 Feb 10. doi: 10.1002/mp.17685. Online ahead of print.

ABSTRACT

BACKGROUND: Sparse-view computed tomography (CT) reduces radiation exposure but suffers from severe artifacts caused by insufficient sampling and data scarcity, which compromise image fidelity. Recent advancements in deep learning (DL)-based methods for inverse problems have shown promise for CT reconstruction but often require high-quality paired datasets and lack interpretability.

PURPOSE: This paper aims to advance the field of CT reconstruction by introducing a novel unsupervised deep learning method. It builds on the foundation of Deep Radon Prior (DRP), which utilizes an untrained encoder-decoder network to extract implicit features from the Radon domain, and leverages Neural Architecture Search (NAS) to optimize network structures.

METHODS: We propose a novel unsupervised deep learning method for image reconstruction, termed NAS-DRP. This method leverages reinforcement learning-based NAS to explore diverse architectural spaces and integrates reinforcement learning with data inconsistency in the Radon domain. Building on previous DRP research, NAS-DRP utilizes an untrained encoder-decoder network to extract implicit features from the Radon domain. It further incorporates insights from studies on Deep Image Prior (DIP) regarding the critical impact of upsampling layers on image quality restoration. The method employs NAS to search for the optimal network architecture for upsampling unit tasks, while using Recurrent Neural Networks (RNNs) to constrain the optimization process, ensuring task-specific improvements in sparse-view CT image reconstruction.

RESULTS: Extensive experiments demonstrate that the NAS-DRP method achieves significant performance improvements in multiple CT image reconstruction tasks. The proposed method outperforms traditional reconstruction methods and other DL-based techniques in terms of both objective metrics (PSNR, SSIM, and LPIPS) and subjective visual quality. By automatically optimizing network structures, NAS-DRP effectively enhances the detail and accuracy of reconstructed images while minimizing artifacts.

CONCLUSIONS: NAS-DRP represents a significant advancement in the field of CT image reconstruction. By integrating NAS with deep learning and leveraging Radon domain-specific adaptations, this method effectively addresses the inherent challenges of sparse-view CT imaging. Additionally, it reduces the cost and complexity of data acquisition, demonstrating substantial potential for broader application in medical imaging. The evaluation code will be available at https://github.com/fujintao1999/NAS-DRP/.

PMID:39930320 | DOI:10.1002/mp.17685

Categories: Literature Watch
