Deep learning

Graph Foundation Models: Concepts, Opportunities and Challenges

Thu, 2025-03-06 06:00

IEEE Trans Pattern Anal Mach Intell. 2025 Mar 6;PP. doi: 10.1109/TPAMI.2025.3548729. Online ahead of print.

ABSTRACT

Foundation models have emerged as critical components in a variety of artificial intelligence applications and showcase significant success in natural language processing and several other domains. Meanwhile, the field of graph machine learning is witnessing a paradigm transition from shallow methods to more sophisticated deep learning approaches. The capabilities of foundation models in generalization and adaptation motivate graph machine learning researchers to discuss the potential of developing a new graph learning paradigm. This paradigm envisions models that are pre-trained on extensive graph data and can be adapted for various graph tasks. Despite this burgeoning interest, there is a noticeable lack of clear definitions and systematic analyses pertaining to this new domain. To this end, this article introduces the concept of Graph Foundation Models (GFMs) and offers an exhaustive explanation of their key characteristics and underlying technologies. We proceed to classify the existing work related to GFMs into three distinct categories, based on their dependence on graph neural networks and large language models. In addition to providing a thorough review of the current state of GFMs, this article also outlines potential avenues for future research in this rapidly evolving domain.

PMID:40048343 | DOI:10.1109/TPAMI.2025.3548729

Categories: Literature Watch

AGPred: An End-to-End Deep Learning Model to Predict Drug Approvals in Clinical Trials Based on Molecular Features

Thu, 2025-03-06 06:00

IEEE J Biomed Health Inform. 2025 Mar 6;PP. doi: 10.1109/JBHI.2025.3547315. Online ahead of print.

ABSTRACT

One of the major challenges in drug development is maintaining acceptable levels of efficacy and safety throughout the various stages of clinical trials and successfully bringing the drug to market. However, clinical trials are time-consuming and expensive. While there are computational methods designed to predict the likelihood of a drug passing clinical trials and reaching the market, these methods heavily rely on manual feature engineering and cannot automatically learn drug molecular representations, resulting in relatively low model performance. In this study, we propose AGPred, an attention-based deep Graph Neural Network (GNN) designed to predict drug approval rates in clinical trials accurately. Unlike the few existing studies on drug approval prediction, which only use predicted targets of compounds, our novel approach employs a GNN module to extract high-potential features of compounds based on their molecular graphs. Additionally, a cross-attention-based fusion module is utilized to learn molecular fingerprint features, enhancing the model's representation of chemical structures. Meanwhile, AGPred integrates the physicochemical properties of drugs to provide a comprehensive description of the molecules. Experimental results indicate that AGPred outperforms four state-of-the-art models on both benchmark and independent datasets. The study also includes several ablation experiments and visual analyses to demonstrate the effectiveness of our method in predicting drug approval during clinical trials. The codes for AGPred are available at https://github.com/zhc940702/AGPred.
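
As an editorial illustration of the fusion step described in this abstract, the sketch below shows cross-attention between a molecular fingerprint embedding and per-atom GNN features in PyTorch. The dimensions, the 2048-bit fingerprint, and the sigmoid approval head are assumptions for illustration, not the authors' AGPred implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse per-atom graph embeddings with a fingerprint embedding via
    cross-attention (the fingerprint query attends over atom features)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.fp_proj = nn.Linear(2048, dim)  # assume a 2048-bit ECFP fingerprint
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, atom_feats, fingerprint):
        # atom_feats: (B, n_atoms, dim) from a GNN; fingerprint: (B, 2048)
        q = self.fp_proj(fingerprint).unsqueeze(1)         # (B, 1, dim)
        fused, _ = self.attn(q, atom_feats, atom_feats)    # (B, 1, dim)
        return torch.sigmoid(self.head(fused.squeeze(1)))  # approval probability

model = CrossAttentionFusion()
p = model(torch.randn(2, 30, 128), torch.randn(2, 2048))
print(p.shape)  # torch.Size([2, 1])
```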

PMID:40048330 | DOI:10.1109/JBHI.2025.3547315

Categories: Literature Watch

Factors Associated With Retinal Vessel Traits in the Canadian Longitudinal Study on Aging

Thu, 2025-03-06 06:00

Invest Ophthalmol Vis Sci. 2025 Mar 3;66(3):13. doi: 10.1167/iovs.66.3.13.

ABSTRACT

PURPOSE: To determine the factors cross-sectionally and longitudinally associated with retinal vessel diameter, total area, and tortuosity in the Canadian Longitudinal Study on Aging (CLSA).

METHODS: Of the 30,097 adults aged 45 to 85 years in the CLSA Comprehensive Cohort, 26,076 had at least one retinal image gradable by QUARTZ, a deep-learning algorithm that automatically assessed image quality, distinguished between arterioles and venules, and estimated retinal vessel traits over the entire retina. Participants answered questions about demographic, lifestyle, and medical factors. Blood pressure, cholesterol, and C-reactive protein were measured. Participants returned for follow-up 3 years later. Multiple linear regression was used to provide adjusted estimates.

RESULTS: Current smoking was strongly associated with wider arteriolar and venular diameters and their widening over 3 years (P < 0.05). Current smoking was also associated with a larger arteriolar and venular area and a 3-year increase in venular area (P < 0.05). Obesity was positively associated with venular diameter, total venular area, 3-year change in total venular area, and venular tortuosity (P < 0.05). Diastolic blood pressure was negatively associated with both arteriolar and venular diameter, area, and tortuosity, both cross-sectionally and longitudinally (P < 0.05). Diabetes was associated with wider arteriolar diameters cross-sectionally, and type 1 diabetes was associated with 3-year widening of arteriolar diameters (P < 0.05).

CONCLUSIONS: This work provides comprehensive information on the factors associated with retinal vessel traits and their change. Factors such as smoking, obesity, blood pressure, and diabetes were longitudinally related to retinal vessel traits, which play a role in the development of eye disease.

PMID:40048189 | DOI:10.1167/iovs.66.3.13

Categories: Literature Watch

DeepOptimalNet: optimized deep learning model for early diagnosis of pancreatic tumor classification in CT imaging

Thu, 2025-03-06 06:00

Abdom Radiol (NY). 2025 Mar 6. doi: 10.1007/s00261-025-04860-9. Online ahead of print.

ABSTRACT

Computed Tomography (CT) imaging captures detailed cross-sectional images of the pancreas and surrounding structures and provides valuable information for medical professionals. The classification of pancreatic CT images presents significant challenges due to the complexities of pancreatic diseases, especially pancreatic cancer. These challenges include subtle variations in tumor characteristics, irregular tumor shapes, and intricate imaging features that hinder accurate and early diagnosis. Image noise and variations in image quality also complicate the analysis. To address these classification problems, advanced medical imaging techniques, optimization algorithms, and deep learning methodologies are often employed. This paper proposes a robust classification model called DeepOptimalNet, which integrates optimization algorithms and deep learning techniques to handle the variability in imaging characteristics and subtle variations associated with pancreatic tumors. The model uses a comprehensive approach to enhance the analysis of medical CT images, beginning with the application of the Gaussian smoothing filter (GSF) for noise reduction and feature enhancement. It introduces the Modified Remora Optimization Algorithm (MROA) to improve the accuracy and efficiency of pancreatic cancer tissue segmentation. The adaptability of modified optimization algorithms to specific challenges such as irregular tumor shapes is emphasized. The paper also utilizes Deep Transfer CNN with ResNet-50 (DTCNN) for feature extraction, leveraging transfer learning to enhance prediction accuracy in CT images. ResNet-50's strong feature extraction capabilities are particularly well suited to diagnostic tasks in CT imaging. The focus then shifts to a Deep Cascade Convolutional Neural Network with Multimodal Learning (DCCNN-ML) for classifying pancreatic cancer in CT images. The DeepOptimalNet approach underscores the advantages of deep learning techniques, multimodal learning, and cascade architectures in addressing the complexity and subtle variations inherent in pancreatic cancer imaging, ultimately leading to more accurate and robust classifications. The proposed DeepOptimalNet achieves 99.3% accuracy, 99.1% sensitivity, 99.5% specificity, and 99.3% F-score, surpassing existing models in pancreatic tumor classification. Its MROA-based segmentation improves boundary delineation, while DTCNN with ResNet-50 enhances feature extraction for small and low-contrast tumors. Benchmark validation confirms its superior classification performance, reduced false positives, and improved diagnostic reliability compared to traditional deep learning methods.
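
For readers unfamiliar with the pre-processing and transfer-learning steps named above, here is a minimal sketch of Gaussian smoothing followed by ResNet-50 feature extraction. The input array, sigma, and the Identity-head trick are illustrative assumptions, not the DeepOptimalNet code.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import gaussian_filter
from torchvision import models

# 1) Gaussian smoothing filter (GSF) for noise reduction on a CT slice
ct_slice = np.random.rand(512, 512).astype(np.float32)  # stand-in for a CT image
smoothed = gaussian_filter(ct_slice, sigma=1.0)

# 2) ResNet-50 transfer feature extractor (pass ResNet50_Weights.IMAGENET1K_V1
#    instead of None to actually load the pre-trained ImageNet weights)
backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()  # keep the 2048-d pooled features
backbone.eval()

x = torch.from_numpy(smoothed).expand(3, -1, -1).unsqueeze(0)  # fake 3-channel input
with torch.no_grad():
    feats = backbone(x)
print(feats.shape)  # torch.Size([1, 2048])
```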

PMID:40047871 | DOI:10.1007/s00261-025-04860-9

Categories: Literature Watch

Enhanced ISUP grade prediction in prostate cancer using multi-center radiomics data

Thu, 2025-03-06 06:00

Abdom Radiol (NY). 2025 Mar 6. doi: 10.1007/s00261-025-04858-3. Online ahead of print.

ABSTRACT

BACKGROUND: To explore the predictive value of radiomics features extracted from anatomical ROIs in differentiating the International Society of Urological Pathology (ISUP) grading in prostate cancer patients.

METHODS: This study included 1,500 prostate cancer patients from a multi-center study. The peripheral zone (PZ) and central gland (CG, transition zone + central zone) of the prostate were segmented using deep learning algorithms and were defined as the regions of interest (ROI) in this study. A total of 12,918 image-based features were extracted from T2-weighted imaging (T2WI), apparent diffusion coefficient (ADC), and diffusion-weighted imaging (DWI) images of these two ROIs. Synthetic minority over-sampling technique (SMOTE) algorithm was used to address the class imbalance problem. Feature selection was performed using Pearson correlation analysis and random forest regression. A prediction model was built using the random forest classification algorithm. Kruskal-Wallis H test, ANOVA, and Chi-Square Test were used for statistical analysis.
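
A minimal sketch of the class-balancing, feature-screening, and classification pipeline named in this paragraph, using scikit-learn and imbalanced-learn on synthetic stand-in data; the feature counts, thresholds, and split are placeholders, not the study's configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in for the radiomics matrix (the real study had 12,918 features)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 500))
y = (rng.random(300) < 0.2).astype(int)  # imbalanced binary label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) SMOTE to rebalance the training classes
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# 2) Feature selection: Pearson screen, then random-forest-regression importances
r = np.array([abs(np.corrcoef(X_bal[:, j], y_bal)[0, 1]) for j in range(X_bal.shape[1])])
screened = np.argsort(r)[-100:]  # keep the 100 best-correlated features
imp = RandomForestRegressor(random_state=0).fit(X_bal[:, screened], y_bal).feature_importances_
selected = screened[np.argsort(imp)[-20:]]  # final 20 features

# 3) Random-forest classifier on the selected features
clf = RandomForestClassifier(random_state=0).fit(X_bal[:, selected], y_bal)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1])
print(f"held-out AUC: {auc:.2f}")
```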

RESULTS: A total of 20 ISUP grading-related features were selected, including 10 from the PZ ROI and 10 from the CG ROI. On the test set, the combined PZ + CG radiomics model exhibited better predictive performance, with an AUC of 0.928 (95% CI: 0.872, 0.966), compared to the PZ model alone (AUC: 0.838; 95% CI: 0.722, 0.920) and the CG model alone (AUC: 0.904; 95% CI: 0.851, 0.945).

CONCLUSION: This study demonstrates that radiomic features extracted from anatomical sub-regions of the prostate can contribute to enhanced ISUP grade prediction. The combination of PZ + CG can provide more comprehensive information with improved accuracy. Further validation of this strategy in the future will enhance its prospects for improving decision-making in clinical settings.

PMID:40047870 | DOI:10.1007/s00261-025-04858-3

Categories: Literature Watch

Development of artificial intelligence-based algorithms for the process of human identification through dental evidence

Thu, 2025-03-06 06:00

Int J Legal Med. 2025 Mar 6. doi: 10.1007/s00414-025-03453-x. Online ahead of print.

ABSTRACT

INTRODUCTION: Forensic Odontology plays a crucial role in medicolegal identification by comparing dental evidence in antemortem (AM) and postmortem (PM) dental records, including orthopantomograms (OPGs). Because this process is complex and time-consuming, optimizing imaging analysis is an urgent matter. Convolutional neural networks (CNNs) are promising artificial intelligence (AI) architectures in Forensic Odontology owing to their efficiency and detail in image analysis, making them a valuable tool in medicolegal identification. Therefore, this study focused on the development of a CNN algorithm capable of comparing AM and PM dental evidence in OPGs for the medicolegal identification of unknown cadavers.

MATERIALS AND METHODS: The present study included a total sample of 1235 OPGs from 1050 patients from the Stomatology Department of Unidade Local de Saúde Santa Maria, aged 16 to 30 years. Two algorithms were developed, one for age classification and another for positive identification, based on the pre-trained model VGG16, and performance was evaluated through predictive metrics and heatmaps.
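
A minimal sketch of VGG16-based transfer learning of the kind described here; the two-class head, frozen layers, and input size are illustrative assumptions rather than the study's actual training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone with a new classification head; freezing the convolutional
# blocks keeps the pre-trained features (pass VGG16_Weights.IMAGENET1K_V1
# instead of None to actually load ImageNet weights).
vgg = models.vgg16(weights=None)
for p in vgg.features.parameters():
    p.requires_grad = False             # freeze convolutional blocks
vgg.classifier[6] = nn.Linear(4096, 2)  # e.g. match / non-match OPG pair

logits = vgg(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```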

RESULTS: Both developed models achieved a final accuracy of 85%, reflecting high overall performance. The age classification model performed better at classifying OPGs from individuals aged between 16 and 23 years, while the positive identification model was significantly better at identifying pairs of OPGs from different individuals.

CONCLUSIONS: The developed AI model is useful in the medicolegal identification of unknown cadavers, with particular advantage in the context of mass disaster victim identification, by comparing AM and PM dental evidence in OPGs of individuals aged 16 to 30 years.

PMID:40047854 | DOI:10.1007/s00414-025-03453-x

Categories: Literature Watch

Automated pressure ulcer dimension measurements using a depth camera

Thu, 2025-03-06 06:00

J Wound Care. 2025 Mar 2;34(3):205-214. doi: 10.12968/jowc.2021.0171.

ABSTRACT

OBJECTIVE: The purpose of this research was to develop an automatic wound segmentation method for a pressure ulcer (PU) monitoring system (PrUMS) using a depth camera to provide automated, non-contact wound measurements.

METHOD: The automatic wound segmentation method, which combines multiple convolutional neural network classifiers, was developed to segment the wound region, improving PrUMS accuracy and avoiding the biased decisions of a single classifier. Measurements from PrUMS were compared with the standardised manual measurements (ground truth) of two clinically trained wound care nurses for each wound.
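
As a rough illustration only, the sketch below shows one plausible way to fuse several classifiers' wound-probability maps by majority vote and to derive length, width, and depth from a mask plus a depth map; every function and formula here is a hypothetical reconstruction, not the PrUMS algorithm.

```python
import numpy as np

def ensemble_mask(prob_maps, thresh=0.5):
    """Majority-vote fusion of per-pixel wound probabilities from several CNNs."""
    votes = np.stack([p > thresh for p in prob_maps])
    return votes.mean(axis=0) >= 0.5

def wound_dimensions(mask, depth_map, mm_per_px):
    """Length/width from the mask extent; depth relative to surrounding skin."""
    ys, xs = np.nonzero(mask)
    length = (ys.max() - ys.min() + 1) * mm_per_px
    width = (xs.max() - xs.min() + 1) * mm_per_px
    rim = np.median(depth_map[~mask])     # camera-to-skin distance around the wound
    depth = depth_map[mask].max() - rim   # deepest point below the rim
    return length, width, depth

mask = np.zeros((100, 100), bool); mask[40:60, 30:70] = True
depth_map = np.full((100, 100), 300.0); depth_map[mask] = 310.0
print(wound_dimensions(mask, depth_map, mm_per_px=0.5))  # (10.0, 20.0, 10.0)
```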

RESULTS: Compared to the average ground truth measurement (38×34×15mm), measurement errors for length, width and depth were 9.27mm, 5.89mm and 5.79mm, respectively, for the automatic segmentation method, and 4.72mm, 4.34mm, and 5.71mm, respectively, for the semi-automatic segmentation method. There were no significant differences between the segmentation methods and ground truth measurements for length and width; however, the depth measurement was significantly different (p<0.001) from the ground truth measurement.

CONCLUSION: The novel PrUMS device used in this study provided objective, non-contact wound measurement and was demonstrated to be usable in clinical wound care practice. Images taken with a regular camera can improve the classifier's performance. With a dataset of 70 PUs for single and multiple (four images per PU) measurements, the differences between length and width measurements of the PrUMS and the manual measurement by nurses were not statistically significant (p>0.05). A statistical difference (p=0.04) was found between depth measurements obtained manually and with PrUMS, due to limitations of the depth camera within PrUMS, causing missing depth measurements for small wounds.

PMID:40047814 | DOI:10.12968/jowc.2021.0171

Categories: Literature Watch

A Hardware Accelerator for Real-Time Processing Platforms Used in Synthetic Aperture Radar Target Detection Tasks

Thu, 2025-03-06 06:00

Micromachines (Basel). 2025 Feb 7;16(2):193. doi: 10.3390/mi16020193.

ABSTRACT

The deep learning object detection algorithm has been widely applied in the field of synthetic aperture radar (SAR). By utilizing deep convolutional neural networks (CNNs) and other techniques, these algorithms can effectively identify and locate targets in SAR images, thereby improving the accuracy and efficiency of detection. In recent years, achieving real-time monitoring of regions has become a pressing need, leading to real-time SAR image target detection being performed directly on airborne or satellite-borne real-time processing platforms. However, current GPU-based real-time processing platforms struggle to meet the power consumption requirements of airborne or satellite applications. To address this issue, a low-power, low-latency deep learning SAR object detection accelerator was designed in this study to enable real-time target detection on airborne and satellite SAR platforms. The design introduces a Process Engine (PE) suited to multidimensional parallel convolution, making full use of Field-Programmable Gate Array (FPGA) computing resources to reduce convolution computing time. Furthermore, a memory arrangement designed around this PE enhances memory read/write efficiency, while dataflow patterns suited to FPGA computing reduce computation latency. Our experimental results demonstrate that deploying the Yolov5s-based SAR object detection algorithm on this accelerator, mounted on a Virtex 7 690t chip, consumes only 7 watts of dynamic power and achieves a throughput of 52.19 SAR images (512 × 512 pixels) per second.

PMID:40047666 | DOI:10.3390/mi16020193

Categories: Literature Watch

Structural Diversity of Mitochondria in the Neuromuscular System across Development Revealed by 3D Electron Microscopy

Thu, 2025-03-06 06:00

Adv Sci (Weinh). 2025 Mar 6:e2411191. doi: 10.1002/advs.202411191. Online ahead of print.

ABSTRACT

As an animal matures, its neural circuits undergo alterations, yet how intracellular organelles change during development to facilitate these alterations is less well understood. Using 3D electron microscopy and deep learning, the study developed semi-automated methods for reconstructing mitochondria in C. elegans and collected mitochondrial reconstructions from normal reproductive stages and the dauer stage, enabling a comparative study of mitochondrial structure within the neuromuscular system. It is found that various structural properties of neuronal mitochondria correlate with synaptic connections and that these properties are preserved across development in different neural circuits. To test the necessity of these universal mitochondrial properties, the study examines drp-1 mutants with impaired mitochondrial fission and discovers that the impairment causes behavioral deficits. Moreover, it is observed that dauer neurons display distinctive mitochondrial features, and mitochondria in dauer muscles exhibit a unique reticulum-like structure. It is proposed that these specialized mitochondrial structures may serve as an adaptive mechanism to support stage-specific behavioral and physiological needs.

PMID:40047328 | DOI:10.1002/advs.202411191

Categories: Literature Watch

The Chest X-Ray: The Ship has Sailed, But Has It?

Thu, 2025-03-06 06:00

J Insur Med. 2025 Jul 1;52(1):21-22. doi: 10.17849/insm-52-1-21-22.1.

ABSTRACT

In the past, the chest X-ray (CXR) was a traditional age and amount requirement used to assess potential mortality risk in life insurance applicants. It fell out of favor due to inconvenience to the applicant, cost, and lack of protective value. With the advent of deep learning techniques, can the results of the CXR, as a requirement, now add additional value to underwriting risk analysis?

PMID:40047110 | DOI:10.17849/insm-52-1-21-22.1

Categories: Literature Watch

Individualised prediction of longitudinal change in multimodal brain imaging

Thu, 2025-03-06 06:00

Imaging Neurosci (Camb). 2024 Jul 3;2:1-19. doi: 10.1162/imag_a_00215. eCollection 2024 Jul 1.

ABSTRACT

It remains largely unknown whether individualised longitudinal changes of brain imaging features can be predicted based only on the baseline brain images. This would be of great value, for example, for longitudinal data imputation, longitudinal brain-behaviour associations, and early prediction of brain-related diseases. We explore this possibility using longitudinal data of multiple modalities from UK Biobank brain imaging, with around 3,500 subjects. As baseline and follow-up images are generally similar in the case of short follow-up time intervals (e.g., 2 years), a simple copy of the baseline image may have a very good prediction performance. Therefore, for the first time, we propose a new mathematical framework for guiding the longitudinal prediction of brain images, providing answers to fundamental questions: (1) what is a suitable definition of longitudinal change; (2) how to detect the existence of changes; (3) what is the "null" prediction performance; and (4) can we distinguish longitudinal change prediction from simple data denoising. Building on these, we designed a deep U-Net based model for predicting longitudinal changes in multimodal brain images. Our results show that the proposed model can predict to a modest degree individualised longitudinal changes in almost all modalities, and outperforms other potential models. Furthermore, compared with the true longitudinal changes computed from real data, the predicted longitudinal changes have a similar or even improved accuracy in predicting subjects' non-imaging phenotypes, and have a high between-subject discriminability. Our study contributes a new theoretical framework for longitudinal brain imaging studies, and our results show the potential for longitudinal data imputation, along with highlighting several caveats when performing longitudinal data analysis.
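
The "null" prediction question can be made concrete with a toy example: copying the baseline image scores extremely well on raw image similarity but predicts zero change, which is why the framework evaluates on the change itself. The numbers and the toy model below are illustrative only, not the paper's U-Net or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 1024
baseline = rng.normal(size=(n, d))                   # baseline image features
followup = baseline + 0.1 * rng.normal(size=(n, d))  # small 2-year change

def mean_corr(pred, true):
    return np.mean([np.corrcoef(p, t)[0, 1] for p, t in zip(pred, true)])

# The "null" model copies the baseline: its image-level similarity looks superb...
print(mean_corr(baseline, followup))                 # ~0.995

# ...but it predicts zero change, so it carries no longitudinal information.
# Scoring on the change itself separates real prediction from copying:
true_change = followup - baseline
toy_model_change = 0.5 * true_change + 0.05 * rng.normal(size=(n, d))
print(mean_corr(toy_model_change, true_change))      # clearly above zero
```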

PMID:40046980 | PMC:PMC11877422 | DOI:10.1162/imag_a_00215

Categories: Literature Watch

Addressing grading bias in rock climbing: machine and deep learning approaches

Thu, 2025-03-06 06:00

Front Sports Act Living. 2025 Jan 30;6:1512010. doi: 10.3389/fspor.2024.1512010. eCollection 2024.

ABSTRACT

The determination of rock climbing route difficulty is notoriously subjective. While there is no official standard for determining the difficulty of a rock climbing route, various difficulty rating scales exist. As the sport gains popularity and prominence on the international stage at the Olympic Games, the need for standardized determination of route difficulty becomes more important. In commercial climbing gyms, consistency and accuracy in route production are crucial for success. Route setters often rely on personal judgment when determining route difficulty, but the success of commercial climbing gyms requires objectivity in creating diverse, inclusive, and accurate routes. Machine and deep learning techniques have the potential to introduce a standardized form of route difficulty determination. This survey categorizes the machine and deep learning approaches taken, identifies the methods and algorithms used, reports their degree of success, and proposes areas of future work for determining route difficulty. The three primary approaches took a route-centric, climber-centric, or path-finding and path-generation perspective. Of these, the most successful methods used natural language processing or recurrent neural network algorithms. From these methods, it is argued that the objective difficulty of a rock climbing route has been best determined by route-centric, natural-language-like approaches.

PMID:40046938 | PMC:PMC11881084 | DOI:10.3389/fspor.2024.1512010

Categories: Literature Watch

Artificial intelligence in the diagnosis of uveal melanoma: advances and applications

Thu, 2025-03-06 06:00

Exp Biol Med (Maywood). 2025 Feb 19;250:10444. doi: 10.3389/ebm.2025.10444. eCollection 2025.

ABSTRACT

Advancements in machine learning and deep learning have the potential to revolutionize the diagnosis of melanocytic choroidal tumors, including uveal melanoma, a potentially life-threatening eye cancer. Traditional machine learning methods rely heavily on manually selected image features, which can limit diagnostic accuracy and lead to variability in results. In contrast, deep learning models, particularly convolutional neural networks (CNNs), are capable of automatically analyzing medical images, identifying complex patterns, and enhancing diagnostic precision. This review evaluates recent studies that apply machine learning and deep learning approaches to classify uveal melanoma using imaging modalities such as fundus photography, optical coherence tomography (OCT), and ultrasound. The review critically examines each study's research design, methodology, and reported performance metrics, discussing strengths as well as limitations. While fundus photography is the predominant imaging modality being used in current research, integrating multiple imaging techniques, such as OCT and ultrasound, may enhance diagnostic accuracy by combining surface and structural information about the tumor. Key limitations across studies include small dataset sizes, limited external validation, and a reliance on single imaging modalities, all of which restrict model generalizability in clinical settings. Metrics such as accuracy, sensitivity, and area under the curve (AUC) indicate that deep learning models have the potential to outperform traditional methods, supporting their further development for integration into clinical workflows. Future research should aim to address current limitations by developing multimodal models that leverage larger, diverse datasets and rigorous validation, thereby paving the way for more comprehensive, reliable diagnostic tools in ocular oncology.

PMID:40046904 | PMC:PMC11879745 | DOI:10.3389/ebm.2025.10444

Categories: Literature Watch

Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning

Thu, 2025-03-06 06:00

Med Image Comput Comput Assist Interv. 2024 Oct;15004:102-112. doi: 10.1007/978-3-031-72083-3_10. Epub 2024 Oct 14.

ABSTRACT

Whole slide image (WSI) classification plays a crucial role in digital pathology data analysis. However, the immense size of WSIs and the absence of fine-grained sub-region labels pose significant challenges for accurate WSI classification. Typical classification-driven deep learning methods often struggle to generate informative image representations, which can compromise the robustness of WSI classification. In this study, we address this challenge by incorporating both discriminative and contrastive learning techniques for WSI classification. Unlike existing contrastive learning methods for WSI classification, which primarily rely on pseudo labels assigned to patches based on WSI-level labels, our approach directly focuses on constructing positive and negative samples at the WSI level. Specifically, we select a subset of representative image patches to represent each WSI and create positive and negative samples at the WSI level, facilitating effective learning of informative image features. Experimental results on two datasets and ablation studies demonstrated that our method significantly improved WSI classification performance compared to state-of-the-art deep learning methods and enabled learning of informative features that promoted the robustness of WSI classification.
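
A minimal sketch of WSI-level contrastive learning in the spirit described here: two views of a WSI are built from disjoint subsets of its patch features and trained with an NT-Xent-style loss. The mean-pooling, loss choice, and sizes are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def wsi_views(patch_feats, k=32):
    """Two WSI-level views: mean-pool two random disjoint patch subsets."""
    idx = torch.randperm(patch_feats.shape[0])
    return patch_feats[idx[:k]].mean(0), patch_feats[idx[k:2 * k]].mean(0)

def nt_xent(z1, z2, tau=0.1):
    """Contrastive loss: matching WSI views are positives, all others negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2B, d)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))            # a view is not its own positive
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

feats = [torch.randn(200, 256) for _ in range(4)]  # 4 WSIs, 200 patch features each
v1, v2 = zip(*[wsi_views(f) for f in feats])
loss = nt_xent(torch.stack(v1), torch.stack(v2))
print(loss.item())
```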

PMID:40046787 | PMC:PMC11877581 | DOI:10.1007/978-3-031-72083-3_10

Categories: Literature Watch

Next-generation approach to skin disorder prediction employing hybrid deep transfer learning

Thu, 2025-03-06 06:00

Front Big Data. 2025 Feb 19;8:1503883. doi: 10.3389/fdata.2025.1503883. eCollection 2025.

ABSTRACT

INTRODUCTION: Skin diseases significantly impact individuals' health and mental wellbeing. However, their classification remains challenging due to complex lesion characteristics, overlapping symptoms, and limited annotated datasets. Traditional convolutional neural networks (CNNs) often struggle with generalization, leading to suboptimal classification performance. To address these challenges, this study proposes a Hybrid Deep Transfer Learning Method (HDTLM) that integrates DenseNet121 and EfficientNetB0 for improved skin disease prediction.

METHODS: The proposed hybrid model leverages DenseNet121's dense connectivity for capturing intricate patterns and EfficientNetB0's computational efficiency and scalability. A dataset comprising 19 skin conditions with 19,171 images was used for training and validation. The model was evaluated using multiple performance metrics, including accuracy, precision, recall, and F1-score. Additionally, a comparative analysis was conducted against state-of-the-art models such as DenseNet121, EfficientNetB0, VGG19, MobileNetV2, and AlexNet.
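
A minimal PyTorch sketch of feature-level fusion of DenseNet121 and EfficientNetB0 for 19 classes, in the spirit of the HDTLM described above; the feature dimensions follow the torchvision backbones, but the concatenation head and training details are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridSkinNet(nn.Module):
    """Concatenate DenseNet121 and EfficientNetB0 features for 19 classes."""
    def __init__(self, n_classes=19):
        super().__init__()
        d = models.densenet121(weights=None)    # pass *_Weights.IMAGENET1K_V1
        e = models.efficientnet_b0(weights=None)  # for actual transfer learning
        d.classifier = nn.Identity()            # 1024-d pooled features
        e.classifier = nn.Identity()            # 1280-d pooled features
        self.d, self.e = d, e
        self.head = nn.Linear(1024 + 1280, n_classes)

    def forward(self, x):
        return self.head(torch.cat([self.d(x), self.e(x)], dim=1))

logits = HybridSkinNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 19])
```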

RESULTS: The proposed HDTLM achieved a training accuracy of 98.18% and a validation accuracy of 97.57%. It consistently outperformed baseline models, achieving a precision of 0.95, recall of 0.96, F1-score of 0.95, and an overall accuracy of 98.18%. The results demonstrate the hybrid model's superior ability to generalize across diverse skin disease categories.

DISCUSSION: The findings underscore the effectiveness of the HDTLM in enhancing skin disease classification, particularly in scenarios with significant domain shifts and limited labeled data. By integrating complementary strengths of DenseNet121 and EfficientNetB0, the proposed model provides a robust and scalable solution for automated dermatological diagnostics.

PMID:40046767 | PMC:PMC11879938 | DOI:10.3389/fdata.2025.1503883

Categories: Literature Watch

Leveraging automated time-lapse microscopy coupled with deep learning to automate colony forming assay

Thu, 2025-03-06 06:00

Front Oncol. 2025 Feb 19;15:1520972. doi: 10.3389/fonc.2025.1520972. eCollection 2025.

ABSTRACT

INTRODUCTION: The colony forming assay (CFA) stands as a cornerstone technique for evaluating the clonal expansion ability of single cancer cells and is crucial for assessing drug efficacy. However, traditional CFAs rely on labor-intensive, endpoint manual counting, offering limited insights into the dynamic effects of treatment. To overcome these limitations, we developed an Artificial Intelligence (AI)-assisted automated CFA combining time-lapse microscopy for real-time tracking of colony formation.

METHODS: We cultured B-acute lymphoblastic leukemia (B-ALL) cells from an E2A-PBX1 mouse model in a collagen-based 3D matrix with cytokines, under static conditions in a low-volume (60 µl) culture vessel, and validated its comparability to methylcellulose-based media. No significant differences in final colony count or plating efficiency were observed. Our automated platform uses a deep learning and multi-object tracking approach for colony counting. Brightfield images were used to train a YOLOv8 object detection network, achieving a mAP50 score of 86% for identifying single cells, clusters, and colonies, and 97% accuracy for Z-stack colony identification with a multi-object tracking algorithm. The detection model accurately identified the majority of objects in the dataset.
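
For orientation, a minimal sketch of the YOLOv8 workflow described here, using the ultralytics API; the dataset YAML, image path, and class index are placeholders, not the study's configuration.

```python
from ultralytics import YOLO

# Train a YOLOv8 detector on brightfield frames labelled with three classes
# (single cell / cluster / colony); paths and names below are placeholders.
model = YOLO("yolov8n.pt")
model.train(data="colonies.yaml", epochs=100, imgsz=640)

# Count detected colonies in one frame at inference time
results = model("frame_0001.png")
n_colonies = sum(1 for c in results[0].boxes.cls if int(c) == 2)  # class 2 = colony
print(n_colonies)
```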

RESULTS: This AI-assisted CFA was successfully applied for density optimization, enabling the determination of seeding densities that maximize plating efficiency (PE), and for IC50 determination, offering an efficient, less labor-intensive method for testing drug concentrations. In conclusion, our novel AI-assisted automated colony counting platform enables automated, high-throughput analysis of colony dynamics, significantly reducing labor and increasing accuracy. Furthermore, it allows detailed, long-term studies of cell-cell interactions and treatment responses using live-cell imaging and AI-assisted cell tracking.

DISCUSSION: Future integration with a perfusion-based drug screening system promises to enhance personalized cancer therapy by optimizing broad drug screening approaches and enabling real-time evaluation of therapeutic efficacy.

PMID:40046624 | PMC:PMC11879803 | DOI:10.3389/fonc.2025.1520972

Categories: Literature Watch

Deep learning combining imaging, dose and clinical data for predicting bowel toxicity after pelvic radiotherapy

Thu, 2025-03-06 06:00

Phys Imaging Radiat Oncol. 2025 Feb 1;33:100710. doi: 10.1016/j.phro.2025.100710. eCollection 2025 Jan.

ABSTRACT

BACKGROUND AND PURPOSE: A comprehensive understanding of radiotherapy toxicity requires analysis of multimodal data. However, it is challenging to develop a model that can analyse both 3D imaging and clinical data simultaneously. In this study, a deep learning model is proposed for simultaneously analysing computed tomography scans, dose distributions, and clinical metadata to predict toxicity, and identify the impact of clinical risk factors and anatomical regions.

MATERIALS AND METHODS: A deep model based on multiple instance learning with feature-level fusion and attention was developed. The study used a dataset of 313 patients treated with 3D conformal radiation therapy and volumetric modulated arc therapy, with heterogeneous cohorts varying in dose, volume, fractionation, concomitant therapies, and follow-up periods. The dataset included 3D computed tomography scans, planned dose distributions to the bowel cavity, and patient clinical data. The model was trained on patient-reported data on late bowel toxicity.
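
A minimal sketch of attention-based multiple instance learning with feature-level fusion, as the paragraph describes: sub-volumes with CT and dose channels form a bag, attention pools them, and clinical features are concatenated before the toxicity head. All dimensions and module choices are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultimodalMIL(nn.Module):
    """Attention-based MIL over CT+dose sub-volumes, fused with clinical data."""
    def __init__(self, inst_dim=128, clin_dim=8):
        super().__init__()
        self.inst_enc = nn.Sequential(nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                      nn.Linear(16, inst_dim))
        self.attn = nn.Sequential(nn.Linear(inst_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.clin_enc = nn.Linear(clin_dim, 32)
        self.head = nn.Linear(inst_dim + 32, 1)

    def forward(self, instances, clinical):
        # instances: (n, 2, D, H, W) sub-volumes with CT and dose channels
        h = self.inst_enc(instances)             # (n, inst_dim)
        a = torch.softmax(self.attn(h), dim=0)   # instance attention weights
        bag = (a * h).sum(0, keepdim=True)       # attention-pooled bag feature
        z = torch.cat([bag, self.clin_enc(clinical)], dim=1)  # feature-level fusion
        return torch.sigmoid(self.head(z)), a    # toxicity probability + attention

prob, attn = MultimodalMIL()(torch.randn(12, 2, 16, 16, 16), torch.randn(1, 8))
print(prob.shape, attn.shape)  # the attention map flags influential regions
```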

RESULTS: Results showed that the network can identify potential risk factors and critical anatomical regions. Analysing clinical data jointly with imaging and dose improved performance for bowel urgency and faecal incontinence (area under the receiver operating characteristic curve [AUC] of 88% and 78%, respectively), while the best performance for diarrhoea was achieved when analysing clinical features alone (68% AUC).

CONCLUSIONS: Results demonstrated that feature-level fusion along with attention enables the network to analyse multimodal data. This method also provides explanations for each input's contribution to the final result and detects spatial associations of toxicity.

PMID:40046574 | PMC:PMC11880715 | DOI:10.1016/j.phro.2025.100710

Categories: Literature Watch

The prognostic value of pathologic lymph node imaging using deep learning-based outcome prediction in oropharyngeal cancer patients

Thu, 2025-03-06 06:00

Phys Imaging Radiat Oncol. 2025 Feb 14;33:100733. doi: 10.1016/j.phro.2025.100733. eCollection 2025 Jan.

ABSTRACT

BACKGROUND AND PURPOSE: Deep learning (DL) models can extract prognostic image features from pre-treatment PET/CT scans. The study objective was to explore the potential benefits of incorporating pathologic lymph node (PL) spatial information in addition to that of the primary tumor (PT) in DL-based models for predicting local control (LC), regional control (RC), distant-metastasis-free survival (DMFS), and overall survival (OS) in oropharyngeal cancer (OPC) patients.

MATERIALS AND METHODS: The study included 409 OPC patients treated with definitive (chemo)radiotherapy between 2010 and 2022. Patient data, including PET/CT scans, manually contoured PT (GTVp) and PL (GTVln) structures, clinical variables, and endpoints, were collected. Firstly, a DL-based method was employed to segment tumours in PET/CT, resulting in predicted probability maps for PT (TPMp) and PL (TPMln). Secondly, different combinations of CT, PET, manual contours and probability maps from 300 patients were used to train DL-based outcome prediction models for each endpoint through 5-fold cross validation. Model performance, assessed by concordance index (C-index), was evaluated using a test set of 100 patients.
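
Model performance here is measured with the concordance index. A toy check using lifelines (the numbers are made up) illustrates the convention that higher predicted risk should pair with shorter time-to-event:

```python
from lifelines.utils import concordance_index

times = [12, 30, 24, 6, 48, 18]          # months to event / censoring
risk = [0.9, 0.2, 0.4, 0.8, 0.1, 0.5]    # a model's predicted risk per patient
events = [1, 0, 1, 1, 0, 1]              # 1 = event observed, 0 = censored

# lifelines expects predicted survival times, so negate risk: higher risk
# then corresponds to a lower predicted time-to-event.
print(concordance_index(times, [-r for r in risk], events))
```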

RESULTS: Including PL improved the C-index results for all endpoints except LC. For LC, comparable C-indices (around 0.66) were observed between models trained using only the PT and those incorporating PL as an additional structure. Models trained using PT and PL combined into a single structure achieved the highest C-indices, 0.65 for RC and 0.80 for DMFS prediction. Models trained using these target structures as separate entities achieved the highest C-index for OS, 0.70.

CONCLUSION: Incorporating lymph node spatial information improved the prediction performance for RC, DMFS, and OS.

PMID:40046573 | PMC:PMC11880716 | DOI:10.1016/j.phro.2025.100733

Categories: Literature Watch

Improvement in positional accuracy of neural-network predicted hydration sites of proteins by incorporating atomic details of water-protein interactions and site-searching algorithm

Thu, 2025-03-06 06:00

Biophys Physicobiol. 2025 Jan 30;22(1):e220004. doi: 10.2142/biophysico.bppb-v22.0004. eCollection 2025.

ABSTRACT

Visualization of hydration structures over the entire protein surface is necessary to understand why the aqueous environment is essential for protein folding and function. However, this remains difficult to achieve experimentally. Recently, we developed a convolutional neural network (CNN) to predict the probability distribution of hydration water molecules over protein surfaces and in protein cavities. The deep network was optimized using solely the distribution patterns of protein atoms surrounding each hydration water molecule in high-resolution X-ray crystal structures and successfully provided probability distributions of hydration water molecules. Despite the effectiveness of the probability distribution, the positions predicted as its local maxima remained insufficiently accurate in reproducing the hydration sites in the crystal structure models. In this work, we modified the deep network by subdividing atomic classes based on the electronic properties of the atoms composing amino acids. In addition, the exclusion volumes of each protein atom and hydration water molecule were taken into account when predicting hydration sites from the probability distribution. This information on the chemical properties of atoms leads to an improvement in positional prediction accuracy. We selected the best CNN from 47 CNNs constructed by systematically varying the number of channels and layers of the neural networks. Here, we report the improvements in prediction accuracy achieved by the reorganized CNN, together with the details of the architecture, training data, and peak-search algorithm.
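
A minimal sketch of a peak search of the kind described: local maxima of a 3D probability grid are accepted in descending order while enforcing a minimum separation that mimics an exclusion volume. The threshold and separation values are arbitrary placeholders, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def hydration_sites(prob, threshold=0.5, min_sep=2):
    """Local maxima above `threshold`, greedily accepted strongest-first while
    rejecting candidates within `min_sep` voxels of an accepted site."""
    peaks = (prob == maximum_filter(prob, size=3)) & (prob > threshold)
    coords = np.argwhere(peaks)
    coords = coords[np.argsort(-prob[tuple(coords.T)])]  # strongest first
    accepted = []
    for c in coords:
        if all(np.linalg.norm(c - a) >= min_sep for a in accepted):
            accepted.append(c)
    return np.array(accepted)

grid = np.random.default_rng(0).random((32, 32, 32))  # stand-in probability map
print(hydration_sites(grid, threshold=0.95).shape)
```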

PMID:40046557 | PMC:PMC11876803 | DOI:10.2142/biophysico.bppb-v22.0004

Categories: Literature Watch

Retraction: Risk management system and intelligent decision-making for prefabricated building project under deep learning modified teaching-learning-based optimization

Wed, 2025-03-05 06:00

PLoS One. 2025 Mar 5;20(3):e0319589. doi: 10.1371/journal.pone.0319589. eCollection 2025.

NO ABSTRACT

PMID:40043015 | DOI:10.1371/journal.pone.0319589

Categories: Literature Watch
