Deep learning

Integrating artificial intelligence in drug discovery and early drug development: a transformative approach

Sat, 2025-03-15 06:00

Biomark Res. 2025 Mar 14;13(1):45. doi: 10.1186/s40364-025-00758-2.

ABSTRACT

Artificial intelligence (AI) can transform drug discovery and early drug development by addressing inefficiencies in traditional methods, which often face high costs, long timelines, and low success rates. In this review we provide an overview of how to integrate AI into the current drug discovery and development process, as it can enhance activities such as target identification, drug discovery, and early clinical development. Through multiomics data analysis and network-based approaches, AI can help identify novel oncogenic vulnerabilities and key therapeutic targets. AI models, such as AlphaFold, predict protein structures with high accuracy, aiding druggability assessments and structure-based drug design. AI also facilitates virtual screening and de novo drug design, creating optimized molecular structures for specific biological properties. In early clinical development, AI supports patient recruitment by analyzing electronic health records and improves trial design through predictive modeling, protocol optimization, and adaptive strategies. Innovations such as synthetic control arms and digital twins can reduce logistical and ethical challenges by simulating outcomes using real-world or virtual patient data. Despite these advancements, limitations remain. AI models may be biased if trained on unrepresentative datasets, and reliance on historical or synthetic data can lead to overfitting or limit generalizability. Ethical and regulatory issues, such as data privacy, also challenge the implementation of AI. In conclusion, although these efforts will demand collaboration between professionals and robust data quality, they have transformative potential to accelerate drug development.

PMID:40087789 | DOI:10.1186/s40364-025-00758-2

Categories: Literature Watch

Performance and limitation of machine learning algorithms for diabetic retinopathy screening and its application in health management: a meta-analysis

Sat, 2025-03-15 06:00

Biomed Eng Online. 2025 Mar 14;24(1):34. doi: 10.1186/s12938-025-01336-1.

ABSTRACT

BACKGROUND: In recent years, artificial intelligence and machine learning algorithms have been used more extensively to diagnose diabetic retinopathy and other diseases. Still, the effectiveness of these methods has not been thoroughly investigated. This study aimed to evaluate the performance and limitations of machine learning and deep learning algorithms in detecting diabetic retinopathy.

METHODS: This study was conducted based on the PRISMA checklist. We searched online databases, including PubMed, Scopus, and Google Scholar, for relevant articles up to September 30, 2023. After the title, abstract, and full-text screening, data extraction and quality assessment were done for the included studies. Finally, a meta-analysis was performed.

RESULTS: We included 76 studies with a total of 1,371,517 retinal images, of which 51 studies were used for meta-analysis. Our meta-analysis showed a significant pooled sensitivity of 90.54% (95% CI [90.42, 90.66], P < 0.001) and specificity of 78.33% (95% CI [78.21, 78.45], P < 0.001). The area under the curve (AUC) did not differ significantly across studies, with a pooled value of 0.94 (95% CI [-46.71, 48.60], P = 1).
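
The abstract does not state the pooling method used. As a rough illustration of how per-study sensitivities can be combined into a pooled estimate with a confidence interval, here is a minimal inverse-variance (fixed-effect) sketch in Python; the study counts are made up, not taken from the paper:

```python
import numpy as np

# Hypothetical per-study data: true positives and false negatives
# (values are illustrative, not from the meta-analysis).
tp = np.array([450, 900, 300])
fn = np.array([50, 80, 45])

sens = tp / (tp + fn)                  # per-study sensitivity
var = sens * (1 - sens) / (tp + fn)    # binomial variance of each estimate
w = 1.0 / var                          # inverse-variance weights

pooled = np.sum(w * sens) / np.sum(w)  # fixed-effect pooled sensitivity
se = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled estimate
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled sensitivity = {pooled:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```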

CONCLUSIONS: Although machine learning and deep learning algorithms can properly diagnose diabetic retinopathy, their discriminating capacity is limited. However, they could simplify the diagnosing process. Further studies are required to improve algorithms.

PMID:40087776 | DOI:10.1186/s12938-025-01336-1

Categories: Literature Watch

Deep learning-based automated segmentation of cardiac real-time MRI in non-human primates

Fri, 2025-03-14 06:00

Comput Biol Med. 2025 Mar 13;189:109894. doi: 10.1016/j.compbiomed.2025.109894. Online ahead of print.

ABSTRACT

Advanced imaging techniques, like magnetic resonance imaging (MRI), have revolutionised cardiovascular disease diagnosis and monitoring in humans and animal models. Real-time (RT) MRI, which can capture a single slice during each consecutive heartbeat while the animal or patient breathes continuously, generates large data sets that necessitate automatic myocardium segmentation to fully exploit these technological advancements. While automatic segmentation is common in human adults, it remains underdeveloped for preclinical animal models. In this study, we developed and trained a fully automated 2D convolutional neural network (CNN), hereafter referred to as PrimUNet, for segmenting the left and right ventricles and the myocardium in non-human primates (NHPs) using RT cardiac MR images of rhesus macaques. Based on the U-Net framework, PrimUNet achieved optimal performance with a learning rate of 0.0001, an initial kernel size of 64, a final kernel size of 512, and a batch size of 32. It attained an average Dice score of 0.9, comparable to human studies. Testing PrimUNet on additional RT MRI data from rhesus macaques demonstrated strong agreement with manual segmentation for left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular myocardial volume (LVMV). It also performed well on cine MRI data of rhesus macaques and acceptably on those of baboons. PrimUNet is well suited to automatically segmenting extensive RT MRI data, facilitating strain analyses of individual heartbeats. By eliminating human observer variability, PrimUNet enhances the reliability and reproducibility of data analysis in animal research, thereby advancing translational cardiovascular studies.
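
The abstract names a U-Net with initial/final "kernel sizes" of 64 and 512 (read here as feature-channel counts, an assumption), a learning rate of 0.0001, and a batch size of 32. A minimal PyTorch sketch of such a 2D U-Net; the class name, input size, and four-class output (background, LV, RV, myocardium) are illustrative, not PrimUNet's actual code:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # two 3x3 conv layers with ReLU, the standard U-Net unit
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4, base=64):  # 4 classes: background, LV, RV, myocardium
        super().__init__()
        self.enc1, self.enc2, self.enc3, self.enc4 = (
            block(1, base), block(base, base * 2),
            block(base * 2, base * 4), block(base * 4, base * 8))  # 64 -> 512 channels
        self.pool = nn.MaxPool2d(2)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.dec3 = block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(e4), e3], dim=1))  # skip connections
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate from the abstract
x = torch.randn(32, 1, 128, 128)                     # batch size 32, single-channel MRI
print(model(x).shape)                                # -> torch.Size([32, 4, 128, 128])
```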

PMID:40086292 | DOI:10.1016/j.compbiomed.2025.109894

Categories: Literature Watch

A deep Bi-CapsNet for analysing ECG signals to classify cardiac arrhythmia

Fri, 2025-03-14 06:00

Comput Biol Med. 2025 Mar 13;189:109924. doi: 10.1016/j.compbiomed.2025.109924. Online ahead of print.

ABSTRACT

In recent times, the electrocardiogram (ECG) has been considered a significant and effective screening mode in clinical practice for assessing cardiac arrhythmias. Precise feature extraction and classification are essential concerns in the automated prediction of heart disease. A deep bi-directional capsule network (Bi-CapsNet) is proposed as an intelligent deep learning (DL) classifier model to make the classification process highly accurate. Initially, the input ECG signal data are acquired, and preprocessing steps such as DC drift removal, normalization, low-pass filtering, spectrogram analysis, and artifact removal are applied. After preprocessing, a deep ensemble CNN-RNN approach is employed for feature extraction. Finally, the deep Bi-CapsNet model is used to predict and classify cardiac arrhythmia. For performance validation, five types of arrhythmia were selected from the ECG waveforms of the MIT-BIH arrhythmia database: Normal (NOR), Right Bundle Branch Block (RBBB), Premature Ventricular Contraction (PVC), Atrial Premature Beat (APB), and Left Bundle Branch Block (LBBB). For performance analysis, metrics such as precision, accuracy, F1-score, error rate, sensitivity, false positive rate, specificity, Matthews correlation coefficient, and Kappa coefficient were computed and compared with traditional methods to validate the effectiveness of the implemented scheme. The proposed scheme achieved an overall accuracy of approximately 97.19%, compared to traditional deep learning models such as CNN (89.87%), FTBO (85%), and Capsule Network (97.0%). The comparison results indicate that the proposed hybrid model outperforms these traditional models.
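
As a rough illustration of the preprocessing chain named above (DC drift removal, low-pass filtering, normalization, spectrogram analysis), here is a SciPy sketch; the 40 Hz cutoff, filter order, and window sizes are assumed values, not the paper's:

```python
import numpy as np
from scipy.signal import butter, filtfilt, spectrogram

fs = 360  # MIT-BIH sampling rate, Hz

def preprocess(ecg):
    # Remove DC drift (baseline offset) by subtracting the signal mean
    ecg = ecg - np.mean(ecg)
    # Low-pass filter at a hypothetical 40 Hz cutoff to suppress high-frequency noise
    b, a = butter(4, 40 / (fs / 2), btype="low")
    ecg = filtfilt(b, a, ecg)
    # Amplitude normalization to [-1, 1]
    ecg = ecg / np.max(np.abs(ecg))
    # Time-frequency representation for the downstream feature extractor
    f, t, Sxx = spectrogram(ecg, fs=fs, nperseg=128, noverlap=64)
    return ecg, Sxx

ecg = np.random.randn(10 * fs)  # stand-in for a 10 s ECG record
clean, spec = preprocess(ecg)
print(clean.shape, spec.shape)
```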

PMID:40086290 | DOI:10.1016/j.compbiomed.2025.109924

Categories: Literature Watch

Automatic Detection of Cognitive Impairment in Patients With White Matter Hyperintensity Using Deep Learning and Radiomics

Fri, 2025-03-14 06:00

Am J Alzheimers Dis Other Demen. 2025 Jan-Dec;40:15333175251325091. doi: 10.1177/15333175251325091. Epub 2025 Mar 14.

ABSTRACT

White matter hyperintensity (WMH) is associated with cognitive impairment. In this study, 79 patients with WMH from hospital 1 were randomly divided into a training set (62 patients) and an internal validation set (17 patients). In addition, 29 WMH patients from hospital 2 were used as an external validation set. Cognitive status was determined based on neuropsychological assessment results. A VB-Net deep learning convolutional neural network was used to automatically identify and segment whole-brain subregions and WMH. The PyRadiomics package in Python was used to automatically extract radiomic features from the WMH and bilateral hippocampi. DeLong tests revealed that the random forest model based on combined features had the best performance for the detection of cognitive impairment in WMH patients, with an AUC of 0.900 in the external validation set. Our results provide clinicians with a reliable tool for the early diagnosis of cognitive impairment in WMH patients.
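
A hedged sketch of the radiomics half of this workflow: PyRadiomics feature extraction for one patient, plus a random-forest classifier on a stand-in feature matrix. The file paths, feature count, and all data are placeholders, and the VB-Net segmentation step is not reproduced here:

```python
import numpy as np
from radiomics import featureextractor          # pip install pyradiomics
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def extract_features(image_path, mask_path):
    """Radiomic features for one patient; paths point to NIfTI volumes."""
    extractor = featureextractor.RadiomicsFeatureExtractor()
    result = extractor.execute(image_path, mask_path)
    # keep only the numeric feature values, not the diagnostic metadata
    return [float(v) for k, v in result.items() if k.startswith("original_")]

# Stand-in feature matrix: 79 patients (as in the study) x 100 features.
# In practice each row would come from extract_features(...).
X = np.random.rand(79, 100)
y = np.random.randint(0, 2, 79)        # 1 = cognitively impaired
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```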

PMID:40087144 | DOI:10.1177/15333175251325091

Categories: Literature Watch

Construction and preliminary trial test of a decision-making app for pre-hospital damage control resuscitation

Fri, 2025-03-14 06:00

Chin J Traumatol. 2025 Feb 18:S1008-1275(25)00009-4. doi: 10.1016/j.cjtee.2024.11.001. Online ahead of print.

ABSTRACT

PURPOSE: To construct a decision-making app for pre-hospital damage control resuscitation (PHDCR) for severely injured patients, and to make a preliminary trial test on the effectiveness and usability aspects of the constructed app.

METHODS: Decision-making algorithms were first established through a thorough literature review and were then encoded for the computer using 3 kinds of text segmentation algorithms, i.e., dictionary-based segmentation, labeling-based machine learning algorithms, and understanding-based deep learning algorithms. The B/S (browser/server) architecture mode and Spring Boot were used as the framework to construct the app. A total of 16 Grade-5 medical students were recruited to test the effectiveness and usability of the app in an animal model-based test of simulated PHDCR. Twelve adult Bama miniature pigs were subjected to penetrating abdominal injuries and were randomly assigned to the 16 students, who were randomly divided into 2 groups (n = 8 each): group A (decided on PHDCR by themselves) and group B (decided on PHDCR with the aid of the app). The students were asked to complete the PHDCR within 1 h; blood samples were then taken and examined by thromboelastography, routine coagulation tests, blood cell count, and blood gas analysis. The laboratory results, along with mean arterial pressure values, were used to compare resuscitation effects between the 2 groups. Furthermore, a post-test survey of 4 statements on a 5-point Likert scale was administered to group B students to assess the usability of the app.
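
Of the three text segmentation approaches, dictionary-based segmentation is the simplest to illustrate. A minimal forward-maximum-matching sketch in Python; the function name and toy vocabulary are hypothetical (the app's actual lexicon is not described in the abstract):

```python
def fmm_segment(text, dictionary, max_len=13):
    """Dictionary-based forward maximum matching: at each position take the
    longest dictionary entry that matches; fall back to a single character."""
    tokens, i = [], 0
    while i < len(text):
        for span in range(min(max_len, len(text) - i), 0, -1):
            word = text[i:i + span]
            if span == 1 or word in dictionary:
                tokens.append(word)
                i += span
                break
    return tokens

# Toy lexicon with hypothetical trauma-care terms
vocab = {"prehospital", "damage", "control", "damagecontrol", "resuscitation"}
print(fmm_segment("prehospitaldamagecontrol", vocab))
# -> ['prehospital', 'damagecontrol']
```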

RESULTS: With the above 3 kinds of text segmentation algorithms, the B/S architecture mode, and Spring Boot as the development framework, the decision-making app for PHDCR was successfully constructed. The time to decide on PHDCR was 28.8 ± 3.41 sec in group B, much shorter than the 87.5 ± 8.53 sec in group A (p < 0.001). The outcomes of animals treated by group B students were much better than those treated by group A students, as indicated by higher mean arterial pressure, oxygen saturation, fibrinogen concentration, and maximum amplitude, and lower R values in group B than in group A. The post-test survey revealed that group B students gave a mean score of no less than 4 for all 4 statements.

CONCLUSION: A decision-making app for PHDCR was constructed in the present study and the preliminary trial test revealed that it could help to improve the resuscitation effect in animal models of penetrating abdominal injury.

PMID:40087116 | DOI:10.1016/j.cjtee.2024.11.001

Categories: Literature Watch

Role of artificial intelligence in magnetic resonance imaging-based detection of temporomandibular joint disorder: a systematic review

Fri, 2025-03-14 06:00

Br J Oral Maxillofac Surg. 2024 Dec 26:S0266-4356(24)00549-7. doi: 10.1016/j.bjoms.2024.12.004. Online ahead of print.

ABSTRACT

This systematic review aimed to evaluate the application of artificial intelligence (AI) in identifying temporomandibular joint (TMJ) disc position in normal individuals or those with temporomandibular joint disorder (TMD) using magnetic resonance imaging (MRI). Database searches were conducted in PubMed, Google Scholar, Semantic Scholar, and Cochrane for studies on AI applications for detecting TMJ disc position in MRI up to September 2023, adhering to PRISMA guidelines. Data extraction included the number of patients, number of TMJs/MRIs, AI algorithm, and performance metrics. Risk of bias was assessed with the modified PROBAST tool. Seven studies were included (deep learning = 6, machine learning = 1). Sensitivity values (n = 7) ranged from 0.735 to 1, while specificity values (n = 4) ranged from 0.68 to 0.961. AI achieved accuracy levels exceeding 83%. MobileNetV2 and ResNet showed better performance metrics. Machine learning demonstrated the lowest accuracy (74.2%). Risk of bias was low in six studies and high in one. Deep learning models showed reliable performance metrics for AI-based detection of TMJ disc position in MRI. Future research is warranted, with better standardisation of design and consistent reporting.

PMID:40087072 | DOI:10.1016/j.bjoms.2024.12.004

Categories: Literature Watch

Fingerprinting of Boletus bainiugan: FT-NIR spectroscopy combined with machine learning, a new workflow for storage period identification

Fri, 2025-03-14 06:00

Food Microbiol. 2025 Aug;129:104743. doi: 10.1016/j.fm.2025.104743. Epub 2025 Feb 6.

ABSTRACT

Food authenticity and food safety issues threaten the prosperity of the entire community. The practice of selling old porcini mushrooms mixed with new ones jeopardizes consumer safety. Herein, the nucleoside contents and spectra of 831 Boletus bainiugan samples stored for 0, 1, and 2 years are comprehensively analyzed by high performance liquid chromatography (HPLC) coupled with Fourier transform near infrared (FT-NIR) spectroscopy. Guanosine and adenosine increased with storage time, while uridine showed a decreasing trend. Multiple conventional machine learning and deep learning models are employed to identify the storage time of Boletus bainiugan, among which the convolutional neural network (CNN) and back propagation neural network (BPNN) models show superior identification performance for distinct storage periods. The data-driven soft independent modelling of class analogy (DD-SIMCA) model can completely differentiate between new and old samples, and partial least squares regression (PLSR) can accurately predict the three nucleoside compounds, with an optimal R2 of 0.918 and an excellent residual predictive deviation (RPD) value of 3.492. This study provides a low-cost and user-friendly solution for the market to determine, in real time, the storage period of Boletus bainiugan in the supply chain.
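
A hedged sketch of the PLSR step on stand-in spectra, including the RPD computation (standard deviation of the reference values divided by the RMSE of prediction); the component count and all data below are placeholders, not the study's:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Stand-in data: 831 FT-NIR spectra x 1000 wavelength points, and three
# nucleoside concentrations per sample (synthetic placeholders).
X = np.random.rand(831, 1000)
Y = np.random.rand(831, 3)             # uridine, guanosine, adenosine

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, Y_tr)
Y_hat = pls.predict(X_te)

# RPD = SD of reference values / RMSE of prediction, per analyte
rmse = np.sqrt(np.mean((Y_te - Y_hat) ** 2, axis=0))
rpd = Y_te.std(axis=0) / rmse
print("R2:", pls.score(X_te, Y_te), "RPD:", rpd)
```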

PMID:40086983 | DOI:10.1016/j.fm.2025.104743

Categories: Literature Watch

Deep learning combined with Monte Carlo simulation reveals the fundamental light propagation in apple puree: Monitoring the quality changes from different cultivar, storage period and heating duration

Fri, 2025-03-14 06:00

Food Res Int. 2025 Apr;207:115997. doi: 10.1016/j.foodres.2025.115997. Epub 2025 Feb 21.

ABSTRACT

This work explored the light propagation of purees from a wide range of apple cultivars, storage periods and heating durations based on their optical absorption (μa) and reduced scattering (μs') properties at 900-1650 nm, in order to better monitor chemical, structural and rheological parameters. Prolonged heating intensively modified puree structure and rheology and resulted in significant increases of μs' at 900-1350 nm. Based on Monte Carlo simulation, the maximum light attenuation distance at 1050 nm increased from 16.22 mm to 17.60 mm for 'Golden Delicious' and from 16.19 mm to 17.41 mm for 'Red Delicious' apple puree as thermal processing duration increased from 10 min to 20 min. Back propagation neural network models based on μa and μs' can monitor dry matter content, titratable acidity, apparent viscosity and viscoelasticity, with RPD > 2.53. These results provide fundamental knowledge of light propagation in the puree matrix and a potential strategy for monitoring puree quality.
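
As a rough illustration of how Monte Carlo simulation turns μa and μs' into a light attenuation distance, here is a simplified isotropic random-walk sketch; the optical coefficients, the 99% depth criterion, and the surface treatment are illustrative assumptions, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_penetration(mu_a, mu_s_prime, n_photons=2000):
    """Isotropic random-walk Monte Carlo in a homogeneous semi-infinite medium.
    mu_a, mu_s_prime in mm^-1; returns the depth (mm) reached by 99% of photons."""
    mu_t = mu_a + mu_s_prime                        # total interaction coefficient
    depths = []
    for _ in range(n_photons):
        z, w, zmax = 0.0, 1.0, 0.0                  # depth, photon weight, max depth
        while w > 1e-4:
            step = -np.log(rng.random()) / mu_t     # free path ~ exponential
            cos_t = 2 * rng.random() - 1            # isotropic scattering direction
            z = max(z + step * cos_t, 0.0)          # reflect at the surface
            zmax = max(zmax, z)
            w *= mu_s_prime / mu_t                  # absorb a fraction mu_a/mu_t
        depths.append(zmax)
    return np.percentile(depths, 99)

# Illustrative optical properties at 1050 nm (placeholders, not the paper's values)
print(f"99% light attenuation depth: {max_penetration(0.02, 1.0):.2f} mm")
```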

PMID:40086950 | DOI:10.1016/j.foodres.2025.115997

Categories: Literature Watch

Diagnostic accuracy of deep learning algorithm for detecting unruptured intracranial aneurysms in magnetic resonance angiography: a multicenter pivotal trial

Fri, 2025-03-14 06:00

World Neurosurg. 2025 Mar 12:123882. doi: 10.1016/j.wneu.2025.123882. Online ahead of print.

ABSTRACT

INTRODUCTION: Intracranial aneurysm rupture is associated with high mortality and disability rates. Early detection is crucial, but increasing diagnostic workloads place significant strain on radiologists. We evaluated the efficacy of a deep learning algorithm in detecting unruptured intracranial aneurysms (UIAs) using time-of-flight (TOF) magnetic resonance angiography (MRA).

METHODS: Data from 675 participants (189 aneurysm-positive [221 UIAs] and 486 aneurysm-negative) were collected from two hospitals (2019-2023). Positive cases were confirmed by digital subtraction angiography, and images were annotated by vascular experts. The 3D U-Net-based model was trained on 988 non-overlapping TOF MRA datasets and evaluated by patient- and lesion-level sensitivity, specificity, and false-positive rates.

RESULTS: The mean age was 59.6 years (SD 11.3), and 52.0% were female. The model achieved patient-level sensitivity of 95.2% and specificity of 80.5%, with lesion-level sensitivity of 89.6% and a false-positive rate of 0.19 per patient. Sensitivity by aneurysm size was 72.3% for lesions <3 mm, 91.8% for 3-5 mm, and 94.3% for >5 mm. Performance was consistent across institutions, with an AUROC of 0.949.
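
A minimal sketch of how the patient-level operating point and the per-patient false-positive rate reported above can be computed from binary model outputs; the toy labels and calls below are made up for illustration:

```python
import numpy as np

def patient_level_metrics(y_true, y_pred):
    """Patient-level sensitivity/specificity from binary aneurysm calls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def false_positives_per_patient(fp_lesions, n_patients):
    """Lesion-level false-positive rate, e.g. 0.19 FPs/patient in the study."""
    return fp_lesions / n_patients

# Toy example for 10 patients
sens, spec = patient_level_metrics([1, 1, 1, 0, 0, 0, 0, 1, 0, 1],
                                   [1, 1, 0, 0, 0, 1, 0, 1, 0, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
print("FPs/patient:", false_positives_per_patient(fp_lesions=2, n_patients=10))
```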

CONCLUSION: The software demonstrated high sensitivity and low false-positive rates for UIA detection in TOF MRA, suggesting its utility in reducing diagnostic errors and alleviating radiologist workload. Expert review remains essential, particularly for small or complex aneurysms.

PMID:40086726 | DOI:10.1016/j.wneu.2025.123882

Categories: Literature Watch

A deep learning model of histologic tumor differentiation as a prognostic tool in hepatocellular carcinoma

Fri, 2025-03-14 06:00

Mod Pathol. 2025 Mar 12:100747. doi: 10.1016/j.modpat.2025.100747. Online ahead of print.

ABSTRACT

Tumor differentiation represents an important driver of the biological behavior of various forms of cancer. Histologic features of tumor differentiation in hepatocellular carcinoma (HCC) include cyto-architecture, immunohistochemical profile, and reticulin framework. In this study, we evaluate the performance of an artificial intelligence (AI)-based model in quantifying features of HCC tumor differentiation and predicting cancer-related outcomes. We developed a supervised AI model using a cloud-based, deep-learning platform (Aiforia Technologies) to quantify histologic features of HCC differentiation, including various morphologic parameters (nuclear density, area, circularity, chromatin pattern, and pleomorphism), mitotic figures, immunohistochemical markers (Hepar-1 and glypican-3), and reticulin expression. We applied this AI model to patients undergoing curative HCC resection and assessed whether AI-based features added value to standard clinical and pathologic data in predicting HCC-related outcomes. 99 HCC resection specimens were included. Three AI-based histologic variables were most relevant to HCC prognostic assessment: (1) the percent of tumor occupied by neoplastic nuclei (nuclear area %), (2) quantitative reticulin expression in the tumor, and (3) a Hepar-1-low (i.e., expressed in less than 50% of the tumor)/glypican-3-positive immunophenotype. Statistical models that included these AI-based variables outperformed models with combined clinical-pathologic features for overall survival (C-index 0.81 vs 0.68), disease-free survival (C-index 0.73 vs 0.68), metastasis (C-index 0.78 vs 0.65), and local recurrence (C-index 0.72 vs 0.68) for all cases, with similar results in the subgroup analysis of WHO grade 2 HCCs. Our AI model serves as proof of concept that HCC differentiation can be objectively quantified digitally by assessing a combination of biologically relevant histopathologic features. In addition, several AI-derived features were independently predictive of HCC-related outcomes in our study population, most notably nuclear area %, the Hepar-1-low/glypican-3-negative phenotype, and decreasing levels of reticulin expression, highlighting the relevance of quantitative analysis of tumor differentiation features in this context.
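
As an illustration of the C-index comparisons reported above, here is a hedged Cox proportional-hazards sketch with the lifelines package; the column names mirror the abstract's AI-derived variables, and every value is fabricated stand-in data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # pip install lifelines

# Synthetic stand-in cohort: 99 resections as in the study; all values fabricated.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "nuclear_area_pct": rng.uniform(20, 80, 99),
    "reticulin_expression": rng.uniform(0, 1, 99),
    "hepar_low_gpc3": rng.integers(0, 2, 99),
    "os_months": rng.exponential(40, 99),          # overall survival time
    "event": rng.integers(0, 2, 99),               # 1 = death observed
})

cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="event")
print("C-index:", cph.concordance_index_)  # compare against a clinical-only model
```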

PMID:40086592 | DOI:10.1016/j.modpat.2025.100747

Categories: Literature Watch

BERT-AmPEP60: A BERT-Based Transfer Learning Approach to Predict the Minimum Inhibitory Concentrations of Antimicrobial Peptides for <em>Escherichia coli</em> and <em>Staphylococcus aureus</em>

Fri, 2025-03-14 06:00

J Chem Inf Model. 2025 Mar 14. doi: 10.1021/acs.jcim.4c01749. Online ahead of print.

ABSTRACT

Antimicrobial peptides (AMPs) are a promising alternative for combating bacterial drug resistance. While current computational prediction models excel at binary classification of AMPs based on sequences, there is a lack of regression methods to accurately quantify AMP activity against specific bacteria, making the identification of highly potent AMPs a challenge. Here, we present a deep learning method, BERT-AmPEP60, based on the fine-tuned Bidirectional Encoder Representations from Transformers (BERT) architecture to extract embedding features from input sequences. Using a transfer learning strategy, we built regression models to predict the minimum inhibitory concentration (MIC) of peptides for Escherichia coli (EC) and Staphylococcus aureus (SA). In five independent experiments with 10% leave-out sequences as the test sets, the optimal EC and SA models outperformed the state-of-the-art regression method and traditional machine learning methods, achieving average mean squared errors of 0.2664 and 0.3032 (log μM), respectively. They also showed Pearson correlation coefficients of 0.7955 and 0.7530, and Kendall correlation coefficients of 0.5797 and 0.5222, respectively. Our models outperformed existing deep learning and machine learning methods that rely on conventional sequence features. This work underscores the effectiveness of utilizing BERT with transfer learning to train quantitative AMP prediction models specific to different bacterial species. The web server of BERT-AmPEP60 can be found at https://app.cbbio.online/ampep/home. To facilitate development, the program source codes are available at https://github.com/janecai0714/AMP_regression_EC_SA.
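
A hedged sketch of a BERT-based MIC regressor in the spirit of BERT-AmPEP60, using Hugging Face transformers: a BERT-style protein encoder with a linear regression head on the [CLS] embedding. The checkpoint choice and head design are assumptions; the authors' actual model and weights live in the repository linked above:

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumption: any BERT-style protein encoder works for illustration.
CKPT = "Rostlab/prot_bert"

class MICRegressor(nn.Module):
    def __init__(self, ckpt=CKPT):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(ckpt)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)  # predicts log MIC

    def forward(self, **batch):
        h = self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] embedding
        return self.head(h).squeeze(-1)

tok = AutoTokenizer.from_pretrained(CKPT)
model = MICRegressor()
# ProtBERT-style tokenizers expect space-separated residues
seqs = ["G I G K F L H S A K K F", "K W K L F K K I"]
batch = tok(seqs, return_tensors="pt", padding=True)
print(model(**batch).shape)   # one predicted log-MIC per peptide
```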

PMID:40086449 | DOI:10.1021/acs.jcim.4c01749

Categories: Literature Watch

Application of Machine Learning (ML) approach in discovery of novel drug targets against Leishmania: A computational based approach

Fri, 2025-03-14 06:00

Comput Biol Chem. 2025 Mar 12;117:108423. doi: 10.1016/j.compbiolchem.2025.108423. Online ahead of print.

ABSTRACT

Molecules with potent anti-leishmanial activity play a crucial role in identifying treatments for leishmaniasis and aiding in the design of novel drugs to combat the disease, ultimately protecting individuals and populations. Various methods have been employed to represent molecular structures and predict effective anti-leishmanial molecules. However, each method faces challenges and limitations that must be addressed to optimize the drug discovery and design process. Recently, machine learning approaches have gained significant importance in overcoming the limitations of traditional methods across various fields. Therefore, there is an urgent need to build a computational pipeline using advanced machine learning and deep learning methods that helps predict the anti-leishmanial activity of drug candidates. The pipeline proposed in this paper involves data collection, feature extraction, feature selection, and prediction techniques. This review presents a comprehensive computational pipeline for anti-leishmanial drug discovery, highlighting its strengths, limitations, challenges, and future directions to improve treatment for this neglected tropical disease.
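
The pipeline stages named above (feature extraction, feature selection, prediction) map naturally onto a scikit-learn Pipeline. A hedged sketch with stand-in descriptors; the feature count, k=50 selection, and random-forest predictor are illustrative choices, not the review's prescription:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in molecular descriptors for candidate compounds (e.g. from RDKit);
# labels: 1 = anti-leishmanial activity. All values here are synthetic.
X = np.random.rand(500, 200)
y = np.random.randint(0, 2, 500)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # normalize descriptor ranges
    ("select", SelectKBest(f_classif, k=50)),    # feature selection step
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
print("CV AUC:", cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```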

PMID:40086345 | DOI:10.1016/j.compbiolchem.2025.108423

Categories: Literature Watch

On construction of data preprocessing for real-life SoyLeaf dataset & disease identification using Deep Learning Models

Fri, 2025-03-14 06:00

Comput Biol Chem. 2025 Mar 8;117:108417. doi: 10.1016/j.compbiolchem.2025.108417. Online ahead of print.

ABSTRACT

Vast volumes of data are needed to train deep learning models from scratch to identify diseases in soybean leaves. However, there is still a lack of sufficient high-quality samples. To overcome this problem, we developed the real-life SoyLeaf dataset, collected from the ICAR-Indian Institute of Soybean Research (IISR) Center, Indore field, and used pre-trained deep learning models to identify leaf diseases. The SoyLeaf dataset contains 9786 high-quality soybean leaf images, including healthy and diseased leaves. We then applied data preprocessing techniques to enhance image quality. In addition, we utilized fourteen Keras transfer learning models to determine which best fits the SoyLeaf disease dataset. The accuracies of the proposed fine-tuned models using the Adam optimizer are: ResNet50V2 99.79%, ResNet101V2 99.89%, ResNet152V2 99.59%, InceptionV3 99.83%, InceptionResNetV2 99.79%, MobileNet 99.82%, MobileNetV2 99.89%, DenseNet121 99.87%, and DenseNet169 99.87%. Similarly, the accuracies using the RMSprop optimizer are: ResNet50V2 99.49%, ResNet101V2 99.45%, ResNet152V2 99.45%, InceptionV3 99.58%, InceptionResNetV2 99.88%, MobileNet 99.73%, MobileNetV2 99.83%, DenseNet121 99.89%, and DenseNet169 99.77%. The experimental results show that ResNet50V2, ResNet101V2, InceptionV3, InceptionResNetV2, MobileNet, MobileNetV2, DenseNet121, and DenseNet169 achieved better training, validation, and testing accuracies than other state-of-the-art models.
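
As a sketch of the Keras transfer-learning recipe the paper evaluates, here is one of the fourteen backbones (MobileNetV2) with ImageNet weights, a frozen base, a new classification head, and the Adam optimizer; the class count, input size, and learning rate are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4   # assumed number of SoyLeaf classes (healthy + disease types)

# Pre-trained backbone; fine-tune only the new head first
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # Adam, as in the paper
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets from SoyLeaf
```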

PMID:40086344 | DOI:10.1016/j.compbiolchem.2025.108417

Categories: Literature Watch

A deep learning-based clinical-radiomics model predicting the treatment response of immune checkpoint inhibitors (ICIs)-based conversion therapy in potentially convertible hepatocellular carcinoma patients: a tumour marker prognostic study

Fri, 2025-03-14 06:00

Int J Surg. 2025 Mar 14. doi: 10.1097/JS9.0000000000002322. Online ahead of print.

ABSTRACT

BACKGROUND: The majority of patients with hepatocellular carcinoma (HCC) miss the opportunity of radical resection, making ICIs-based conversion therapy a primary option. However, challenges persist in predicting response and identifying the optimal patient subset. The objective is to develop a CT-based clinical-radiomics model to predict durable clinical benefit (DCB) of ICIs-based treatment in potentially convertible HCC patients.

METHODS: Radiomics features were extracted with pyradiomics in the training set, and machine learning models were generated based on the selected radiomics features. Deep learning models were created using two different protocols. Integrated models were constructed by incorporating radiomics scores, deep learning scores, and clinical variables selected through multivariate analysis. Furthermore, we analyzed the relationship between integrated model scores and clinical outcomes related to conversion therapy in the entire cohort. Finally, radiogenomic analysis was conducted on bulk RNA and DNA sequencing data.
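
The integration step (radiomics score + deep learning score + clinical covariates feeding one classifier) can be sketched with scikit-learn logistic regression; the covariate names and all values below are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in cohort: per-patient radiomics score, deep-learning score, and
# hypothetical clinical covariates (e.g. AFP, tumour size); all fabricated.
n = 120
rad_score = np.random.rand(n)
dl_score = np.random.rand(n)
clinical = np.random.rand(n, 3)
X = np.column_stack([rad_score, dl_score, clinical])
y = np.random.randint(0, 2, n)           # 1 = durable clinical benefit (DCB)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```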

RESULTS: The top-performing integrated model demonstrated excellent predictive accuracy, with an area under the curve (AUC) of 0.96 (95% CI: 0.94-0.99) in the training set and 0.88 (95% CI: 0.77-0.99) in the test set, effectively stratifying survival risk across the entire cohort and revealing a significant disparity in overall survival (OS), as evidenced by Kaplan-Meier survival curves (p < 0.0001). Moreover, integrated model scores were associated with sequential resection among patients who achieved DCB and with pathological complete response (pCR) among those who underwent sequential resection. Notably, a higher radiomics model score was correlated with MHC I expression, angiogenesis-related processes, CD8 T cell-related gene sets, a higher frequency of TP53 mutations, and increased mutation burden and neoantigen levels.

CONCLUSION: The deep learning-based clinical-radiomics model exhibited satisfactory predictive capability in forecasting the DCB derived from ICIs-based conversion therapy in potentially convertible HCC, and was associated with a diverse range of immune-related mechanisms.

PMID:40085751 | DOI:10.1097/JS9.0000000000002322

Categories: Literature Watch

Fast and reliable probabilistic reflectometry inversion with prior-amortized neural posterior estimation

Fri, 2025-03-14 06:00

Sci Adv. 2025 Mar 14;11(11):eadr9668. doi: 10.1126/sciadv.adr9668. Epub 2025 Mar 14.

ABSTRACT

Reconstructing the structure of thin films and multilayers from measurements of scattered x-rays or neutrons is key to progress in physics, chemistry, and biology. However, finding all structures compatible with reflectometry data is computationally prohibitive for standard algorithms, which typically results in unreliable analysis with only a single potential solution identified. We address this lack of reliability with a probabilistic deep learning method that identifies all realistic structures in seconds, redefining standards in reflectometry. Our method, prior-amortized neural posterior estimation (PANPE), combines simulation-based inference with adaptive priors that inform the inference network about known structural properties and controllable experimental conditions. PANPE networks support key scenarios such as high-throughput sample characterization, real-time monitoring of evolving structures, or the corefinement of several experimental datasets and can be adapted to provide fast, reliable, and flexible inference across many other inverse problems.
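
Neural posterior estimation of the kind PANPE builds on can be sketched with the open-source sbi package. The toy simulator below is a crude fringe-pattern stand-in for a real reflectivity kernel, and the whole block is an assumption-laden illustration of simulation-based inference, not PANPE itself (which adds prior amortization and adaptive priors):

```python
import torch
from sbi.inference import SNPE           # pip install sbi
from sbi.utils import BoxUniform

# Toy "reflectometry" simulator: film thickness, roughness and contrast map to
# a fringe-like curve over momentum transfer q (illustrative physics only).
q = torch.linspace(0.01, 0.3, 100)

def simulate(theta):
    d, sigma, rho = theta[:, 0:1], theta[:, 1:2], theta[:, 2:3]
    r = rho * torch.cos(q * d * 100) * torch.exp(-(q * sigma * 10) ** 2)
    return r + 0.01 * torch.randn_like(r)

prior = BoxUniform(low=torch.zeros(3), high=torch.ones(3))
theta = prior.sample((2000,))
x = simulate(theta)

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = simulate(prior.sample((1,)))           # a pretend measurement
samples = posterior.sample((1000,), x=x_obs)   # all plausible structures, fast
print(samples.mean(dim=0), samples.std(dim=0))
```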

PMID:40085716 | DOI:10.1126/sciadv.adr9668

Categories: Literature Watch

Deep-Learning Potential Molecular Dynamics Study on Nanopolycrystalline Al-Er Alloys: Effects of Er Concentration, Grain Boundary Segregation, and Grain Size on Plastic Deformation

Fri, 2025-03-14 06:00

J Chem Inf Model. 2025 Mar 14. doi: 10.1021/acs.jcim.5c00008. Online ahead of print.

ABSTRACT

Understanding the tensile mechanical properties of Al-Er alloys at the atomic scale is essential, and molecular dynamics (MD) simulations offer valuable insights. However, these simulations are constrained by the unavailability of suitable interatomic potentials. In this study, the deep potential (DP) approach, aided by high-throughput first-principles calculations, was utilized to develop an Al-Er interatomic potential specifically for MD simulations. Systematic comparisons between the physical properties (e.g., energy-volume curves, melting point, elastic constants) predicted by the DP model and those obtained from density functional theory (DFT) demonstrated that the developed DP model for Al-Er alloys possesses reliable predictive capabilities while retaining DFT-level accuracy. Our findings confirm that Al3Er, Al2Er, and AlEr2 exhibit mechanical stability. The calculated melting point of Al3Er (1398 K) shows a 57 K deviation from the experimental value (1341 K). As the Er content increases from 0.01 at.% to 0.064 at.% in Al-Er alloys, the grain boundary (GB) concentration of Er atoms increases from 0.03 to 0.07% following Monte Carlo (MC) annealing optimization. The Al-0.05 at.%Er alloy exhibits the highest yield strength, an increase of 0.128 GPa (6.1%) over pure Al. For Al-0.05 at.%Er alloys with varying average grain sizes, the GB concentration of Er atoms increases by about 1.4-1.6 times after MC annealing compared to the average Er content. Additionally, the Al-Er alloys reach a peak yield strength of 2.214 GPa at an average grain size of 11.72 nm. The GB segregation of Er atoms lowers the system energy and thus enhances stability. Notable changes in the segregation behavior of Er atoms were observed with increasing Er concentration and decreasing grain size. These results facilitate the understanding of the mechanical characteristics of Al-Er alloys and offer a theoretical basis for developing advanced nanopolycrystalline Al-Er alloys.
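
The abstract validates the DP model against DFT energy-volume curves. As a worked illustration of how such curves yield an equilibrium volume and bulk modulus, here is a third-order Birch-Murnaghan fit with SciPy; the data points are synthetic, seeded near aluminum-like values, not the paper's results:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9 * V0 * B0 / 16 * ((eta - 1) ** 3 * Bp
                                    + (eta - 1) ** 2 * (6 - 4 * eta))

# Illustrative energy-volume points (eV, Angstrom^3/atom); in practice these
# come from DFT or from the trained deep potential, as in the paper.
V = np.linspace(14, 20, 10)
E = birch_murnaghan(V, -3.74, 16.6, 0.48, 4.5) + 1e-4 * np.random.randn(10)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=[E.min(), V[np.argmin(E)], 0.5, 4])
E0, V0, B0, Bp = popt
print(f"V0 = {V0:.2f} A^3/atom, B0 = {B0 * 160.2177:.1f} GPa")  # eV/A^3 -> GPa
```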

PMID:40085549 | DOI:10.1021/acs.jcim.5c00008

Categories: Literature Watch

From 1-D to 3-D: LIBS Pseudohyperspectral Data Cube Deep Learning Mechanism Used in Nuclear Metal Materials Classification

Fri, 2025-03-14 06:00

Anal Chem. 2025 Mar 14. doi: 10.1021/acs.analchem.4c05707. Online ahead of print.

ABSTRACT

In this paper, we propose a new spectral data mechanism called the LIBS pseudohyperspectral data cube. This mechanism allows the utilization of multidimensional information from laser-induced plasma, transforming 1-D LIBS spectra into a 3-D data cube. Specifically, two additional dimensions are introduced to capture spectral variation information, allowing more features to be learned during pretraining. The proposed mechanism makes the LIBS system more robust when handling unstable spectra acquired onsite and allows LIBS to take full advantage of deep learning algorithms. In the context of nuclear power plants, traditional LIBS classification faces significant challenges due to unstable spectra, which reduce the accuracy of classifying similar or small extreme-condition materials. By combining deep learning algorithms with the LIBS pseudohyperspectral data cube, we can capture spectral and other dimensional features to enhance classification accuracy. Experimental results show that, compared to traditional 1-D data processing, the new method significantly improves the classification accuracy of unstable spectra. Moreover, by incorporating an attention mechanism, the model can adaptively adjust the weights of different features, further improving classification accuracy to over 99%. Visualizing the attention mechanism's weight matrix allows us to identify the importance of different features in classification. Additionally, t-SNE visualizations demonstrate the clustering of different categories in the feature space, further validating the performance of the new method. We believe this data cube mechanism offers an effective new approach for applying deep learning algorithms and enhancing data dimensionality in the LIBS field.
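
The abstract does not detail how the cube is assembled. One plausible reading, sketched below with NumPy, is that replicate laser shots are stacked so shot-to-shot variation forms the two extra dimensions; this construction is an assumption for illustration only, and the paper's exact mechanism may differ:

```python
import numpy as np

# 64 replicate LIBS spectra of one sample, 4096 wavelength channels each
n_shots, n_channels = 64, 4096
shots = np.random.rand(n_shots, n_channels)

# Arrange shots on an 8 x 8 grid so a CNN/attention model can consume a
# 3-D tensor instead of a single averaged 1-D spectrum
side = int(np.sqrt(n_shots))
cube = shots.reshape(side, side, n_channels)       # (8, 8, 4096) data cube
print(cube.shape)

# Per-channel variation across shots: the extra information the cube exposes
variation = shots.std(axis=0)
print("most variable channels:", np.argsort(variation)[-5:])
```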

PMID:40085530 | DOI:10.1021/acs.analchem.4c05707

Categories: Literature Watch

DenseFormer-MoE: A Dense Transformer Foundation Model with Mixture of Experts for Multi-Task Brain Image Analysis

Fri, 2025-03-14 06:00

IEEE Trans Med Imaging. 2025 Mar 14;PP. doi: 10.1109/TMI.2025.3551514. Online ahead of print.

ABSTRACT

Deep learning models have been widely investigated for processing and analyzing brain images across various downstream tasks such as disease diagnosis and age regression. Most existing models are tailored to specific tasks and diseases, posing a challenge for developing a foundation model that serves diverse tasks. This paper proposes a Dense Transformer Foundation Model with Mixture of Experts (DenseFormer-MoE), which integrates a dense convolutional network, Vision Transformer, and Mixture of Experts (MoE) to progressively learn and consolidate local and global features from T1-weighted structural magnetic resonance images (sMRI) for multiple tasks, including diagnosing multiple brain diseases and predicting brain age. First, a foundation model is built by combining a Vision Transformer with DenseNet, pre-trained with a Masked Autoencoder and self-supervised learning to enhance the generalization of feature representations. Then, to mitigate optimization conflicts in multi-task learning, the MoE is designed to dynamically select the most appropriate experts for each task. Finally, our method is evaluated on multiple renowned brain imaging datasets, including UK Biobank (UKB), Alzheimer's Disease Neuroimaging Initiative (ADNI), and Parkinson's Progression Markers Initiative (PPMI). Experimental results and comparisons demonstrate that our method achieves promising performance in predicting brain age and diagnosing brain diseases.
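
A generic mixture-of-experts layer sketch in PyTorch, illustrating the dynamic expert selection the abstract describes; the dimensions, expert count, and top-k routing are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class MoE(nn.Module):
    """Minimal mixture-of-experts layer: a gate routes each token to a soft
    combination of expert MLPs (a generic sketch, not DenseFormer-MoE's design)."""
    def __init__(self, dim=256, n_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                           nn.Linear(dim * 4, dim)) for _ in range(n_experts)])
        self.gate = nn.Linear(dim, n_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (batch, tokens, dim)
        logits = self.gate(x)                  # (B, T, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)      # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)      # tokens routed to expert e at slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(2, 16, 256)
print(MoE()(x).shape)   # torch.Size([2, 16, 256])
```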

PMID:40085471 | DOI:10.1109/TMI.2025.3551514

Categories: Literature Watch

Evaluation of a Low-Cost Amplifier With System Optimization in Thermoacoustic Tomography: Characterization and Imaging of Ex-Vivo and In-Vivo Samples

Fri, 2025-03-14 06:00

IEEE Trans Biomed Eng. 2025 Mar 14;PP. doi: 10.1109/TBME.2025.3551260. Online ahead of print.

ABSTRACT

Microwave-induced thermoacoustic tomography (TAT) is a hybrid imaging technique that combines microwave excitation with ultrasound detection to create detailed images of biological tissue. Most TAT systems require a costly amplification system (or a sophisticated high-power microwave source), which limits the wide adoption of this imaging modality. We have developed a rotating single-element thermoacoustic tomography (RTAT) system using a low-cost amplifier that has been optimized in terms of microwave signal pulse width and antenna placement. The optimized system, enhanced with signal averaging, advanced signal processing, and a deep learning computational core, successfully produced adequate-quality images. The system has been characterized in terms of spatial resolution, imaging depth, acquisition speed, and multispectral capabilities utilizing tissue-like phantoms, ex-vivo specimens and in-vivo imaging. We believe our low-cost, portable system expands accessibility for the research community, empowering more groups to explore thermoacoustic imaging. It supports the development of advanced signal processing algorithms to optimize both low-power and even high-power TAT systems, accelerating the clinical adoption of this promising imaging modality.
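
The abstract credits signal averaging with making a low-cost amplifier viable. A small NumPy sketch of the sqrt(N) SNR gain from averaging repeated acquisitions; the waveform and noise level below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy thermoacoustic pulse: a gated sinusoid on a 10 us window
t = np.linspace(0, 1e-5, 1000)
signal = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 5e-6) / 1e-6) ** 2)

def snr_after_averaging(n_avg, noise_sigma=5.0):
    """Average n_avg noisy acquisitions and report the resulting SNR in dB."""
    acquisitions = signal + noise_sigma * rng.standard_normal((n_avg, t.size))
    averaged = acquisitions.mean(axis=0)
    noise = averaged - signal
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

for n in (1, 16, 256):
    print(f"N={n:4d}: SNR = {snr_after_averaging(n):5.1f} dB")
# Each 16x increase in averages buys ~12 dB, i.e. SNR grows as sqrt(N)
```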

PMID:40085469 | DOI:10.1109/TBME.2025.3551260

Categories: Literature Watch
