Deep learning
Deep Learning for Cardiac Imaging: Focus on Myocardial Diseases: A Narrative Review
Hellenic J Cardiol. 2024 Dec 9:S1109-9666(24)00261-6. doi: 10.1016/j.hjc.2024.12.002. Online ahead of print.
ABSTRACT
The integration of computational technologies into cardiology has significantly advanced the diagnosis and management of cardiovascular diseases. Computational cardiology, particularly through cardiovascular imaging and informatics, enables precise diagnosis of myocardial diseases by utilizing techniques such as echocardiography, cardiac magnetic resonance imaging, and computed tomography. Early-stage disease classification, especially in asymptomatic patients, benefits from these advancements, potentially altering disease progression and improving patient outcomes. Automatic segmentation of myocardial tissue using Deep Learning (DL) algorithms improves efficiency and consistency in analyzing large patient populations. Radiomic analysis can reveal subtle disease characteristics from medical images and can enhance disease detection, enable patient stratification, and facilitate monitoring of disease progression and treatment response. Radiomic biomarkers have already demonstrated high diagnostic accuracy in distinguishing myocardial pathologies and promise treatment individualization in cardiology, earlier disease detection, and disease monitoring. In this context, this narrative review explores the current state of the art in DL applications in medical imaging (CT, CMR, echocardiography and SPECT), focusing on automatic segmentation, radiomic feature phenotyping, and prediction of myocardial diseases, while also discussing challenges in integrating DL models into clinical practice.
PMID:39662734 | DOI:10.1016/j.hjc.2024.12.002
Prediction of gene expression-based breast cancer proliferation scores from histopathology whole slide images using deep learning
BMC Cancer. 2024 Dec 11;24(1):1510. doi: 10.1186/s12885-024-13248-9.
ABSTRACT
BACKGROUND: In breast cancer, several gene expression assays have been developed to provide a more personalised treatment. This study focuses on the prediction of two molecular proliferation signatures: an 11-gene proliferation score and the MKI67 proliferation marker gene. The aim was to assess whether these could be predicted from digital whole slide images (WSIs) using deep learning models.
METHODS: WSIs and RNA-sequencing data from 819 invasive breast cancer patients were included for training, and models were evaluated on an internal test set of 172 cases as well as on 997 cases from a fully independent external test set. Two deep Convolutional Neural Network (CNN) models were optimised using WSIs and gene expression readouts from RNA-sequencing data of either the proliferation signature or the proliferation marker, and assessed using Spearman correlation (r). Prognostic performance was assessed through Cox proportional hazard modelling, estimating hazard ratios (HR).
RESULTS: Optimised CNNs successfully predicted the proliferation score and proliferation marker on the unseen internal test set (ρ = 0.691 (p < 0.001) with R2 = 0.438, and ρ = 0.564 (p < 0.001) with R2 = 0.251, respectively) and on the external test set (ρ = 0.502 (p < 0.001) with R2 = 0.319, and ρ = 0.403 (p < 0.001) with R2 = 0.222, respectively). Patients with a high proliferation score or marker were significantly associated with a higher risk of recurrence or death in the external test set (HR = 1.65 (95% CI: 1.05-2.61) and HR = 1.84 (95% CI: 1.17-2.89), respectively).
CONCLUSIONS: The results from this study suggest that gene expression levels of proliferation scores can be predicted directly from breast cancer morphology in WSIs using CNNs and that the predictions provide prognostic information that could be used in research as well as in the clinical setting.
PMID:39663527 | DOI:10.1186/s12885-024-13248-9
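The study above scores its CNN predictions against RNA-sequencing readouts with Spearman correlation (ρ). As a reference point only (this is not the authors' code, and the function names are illustrative), ρ is the Pearson correlation computed on average ranks; a minimal pure-Python sketch, assuming non-constant inputs:

```python
def _ranks(values):
    """Assign 1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because ρ operates on ranks, any monotone (even nonlinear) relationship between predictions and gene expression yields ρ = 1, which is why it suits readouts on an arbitrary scale.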
Retinal fluid quantification using a novel deep learning algorithm in patients treated with faricimab in the TRUCKEE study
Eye (Lond). 2024 Dec 11. doi: 10.1038/s41433-024-03532-0. Online ahead of print.
ABSTRACT
BACKGROUND: Investigate retinal fluid changes via a novel deep-learning algorithm in real-world patients receiving faricimab for the treatment of neovascular age-related macular degeneration (nAMD).
METHODS: Multicenter, retrospective chart review and optical coherence tomography (OCT) image upload from participating sites was conducted on patients treated with faricimab for nAMD from February 2022 to January 2024. The Notal OCT Analyzer (NOA) algorithm provided intraretinal, subretinal and total retinal fluid for each scan. Results were segregated based on treatment history and fluid compartments, allowing for multiple cross-sections of evaluation.
RESULTS: A total of 521 eyes were included at baseline. The previous treatments prior to faricimab were aflibercept, ranibizumab, bevacizumab, or treatment-naive for 52.3%, 21.0%, 13.3%, and 11.2% of the eyes, respectively. Of all 521 eyes, 49.9% demonstrated fluid reduction after one injection of faricimab, with a mean fluid reduction of -60.7 nL. The proportions of eyes showing a reduction in fluid compared to baseline after the second, third, fourth and fifth faricimab injections were 54.4%, 51.9%, 51.4% and 52.2%, respectively. The mean (SD) retreatment intervals after the second, third, fourth and fifth faricimab injections were 53.4 (34.3), 56.6 (36.0), 57.1 (35.3) and 61.5 (40.2) days, respectively.
CONCLUSION: Deep-learning algorithms provide a novel tool for evaluating precise quantification of retinal fluid after treatment of nAMD with faricimab. Faricimab demonstrates reduction of retinal fluid in multiple groups after just one injection and sustains this response after multiple treatments, along with providing increases in treatment intervals between subsequent injections.
PMID:39663398 | DOI:10.1038/s41433-024-03532-0
Mapping the functional network of human cancer through machine learning and pan-cancer proteogenomics
Nat Cancer. 2024 Dec 11. doi: 10.1038/s43018-024-00869-z. Online ahead of print.
ABSTRACT
Large-scale omics profiling has uncovered a vast array of somatic mutations and cancer-associated proteins, posing substantial challenges for their functional interpretation. Here we present a network-based approach centered on FunMap, a pan-cancer functional network constructed using supervised machine learning on extensive proteomics and RNA sequencing data from 1,194 individuals spanning 11 cancer types. Comprising 10,525 protein-coding genes, FunMap connects functionally associated genes with unprecedented precision, surpassing traditional protein-protein interaction maps. Network analysis identifies functional protein modules, reveals a hierarchical structure linked to cancer hallmarks and clinical phenotypes, provides deeper insights into established cancer drivers and predicts functions for understudied cancer-associated proteins. Additionally, applying graph-neural-network-based deep learning to FunMap uncovers drivers with low mutation frequency. This study establishes FunMap as a powerful and unbiased tool for interpreting somatic mutations and understudied proteins, with broad implications for advancing cancer biology and informing therapeutic strategies.
PMID:39663389 | DOI:10.1038/s43018-024-00869-z
Deep Learning Prediction of Drug-Induced Liver Toxicity by Manifold Embedding of Quantum Information of Drug Molecules
Pharm Res. 2024 Dec 12. doi: 10.1007/s11095-024-03800-4. Online ahead of print.
ABSTRACT
PURPOSE: Drug-induced liver injury (DILI) affects numerous patients and also presents significant challenges in drug development. In silico approaches, including data-driven machine learning models, have been used to predict the DILI of a chemical. Herein, we report a recent DILI deep-learning effort that utilized our molecular representation concept of manifold embedding of electronic attributes on a molecular surface.
METHODS: Local electronic attributes on a molecular surface were mapped to a lower-dimensional embedding of the surface manifold. Such an embedding was featurized in a matrix form and used in a deep-learning model as molecular input. The model was trained by a well-curated dataset and tested through cross-validations.
RESULTS: Our DILI prediction yielded superior results to the literature-reported efforts, suggesting that manifold embedding of electronic quantities on a molecular surface enables machine learning of molecular properties, including DILI.
CONCLUSIONS: The concept encodes the quantum information of a molecule that governs intermolecular interactions, potentially facilitating the deep-learning model development and training.
PMID:39663331 | DOI:10.1007/s11095-024-03800-4
Deep Learning-Based Body Composition Analysis for Cancer Patients Using Computed Tomographic Imaging
J Imaging Inform Med. 2024 Dec 11. doi: 10.1007/s10278-024-01373-7. Online ahead of print.
ABSTRACT
Malnutrition is a commonly observed side effect in cancer patients, with a 30-85% worldwide prevalence in this population. Existing malnutrition screening tools miss ~20% of at-risk patients at initial screening and do not capture the abnormal body composition phenotype. Meanwhile, the gold-standard clinical criteria to diagnose malnutrition use changes in body composition as key parameters, particularly loss of body fat and skeletal muscle mass. Diagnostic imaging, such as computed tomography (CT), is the gold standard for analyzing body composition and is typically accessible to cancer patients as part of the standard of care. In this study, we developed a deep learning-based body composition analysis approach over a diverse dataset of 200 abdominal/pelvic CT scans from cancer patients. The proposed approach automatically localizes the third lumbar vertebra (L3) and then segments adipose tissue and skeletal muscle at the L3 level using Swin UNEt TRansformers (Swin UNETR). The proposed approach involves the first transformer-based deep learning model for body composition analysis and heatmap regression-based vertebra localization in cancer patients. Swin UNETR attained a 0.92 Dice score in adipose tissue and a 0.87 Dice score in skeletal muscle segmentation, significantly outperforming convolutional benchmarks including the 2D U-Net by 2-12% Dice score (p-values < 0.033). Moreover, Swin UNETR predictions showed high agreement with ground-truth areas of skeletal muscle and adipose tissue (R2 = 0.7-0.93), highlighting its potential for accurate body composition analysis. We have presented an accurate CT-based body composition analysis, which can enable the early detection of malnutrition in cancer patients and support timely interventions.
PMID:39663321 | DOI:10.1007/s10278-024-01373-7
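The abstract above reports segmentation quality as Dice scores. For orientation, the Dice similarity coefficient over two binary masks is 2|P∩T| / (|P| + |T|); a minimal sketch over flattened masks (illustrative only, not the paper's pipeline):

```python
def dice_score(pred, truth):
    """Dice similarity: 2*|P ∩ T| / (|P| + |T|) over flat binary masks.

    Returns 1.0 when both masks are empty, a common convention.
    """
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```

A Dice of 0.92 for adipose tissue thus means predicted and ground-truth masks overlap almost completely relative to their combined area.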
Accelerated T2W Imaging with Deep Learning Reconstruction in Staging Rectal Cancer: A Preliminary Study
J Imaging Inform Med. 2024 Dec 11. doi: 10.1007/s10278-024-01345-x. Online ahead of print.
ABSTRACT
Deep learning reconstruction (DLR) has exhibited potential in saving scan time, yet there is limited research evaluating accelerated acquisition with DLR in staging rectal cancer. Our first objective was to explore the optimal DLR level for saving time through phantom experiments. With resolution and number of excitations (NEX) adjusted for different scan times, the image quality of conventionally reconstructed T2W images was measured and compared with that of images reconstructed at different DLR levels. The second objective was to explore the feasibility of accelerated T2W imaging with DLR in terms of image quality and diagnostic performance for rectal cancer patients. Fifty-two patients were prospectively enrolled to undergo accelerated acquisition reconstructed with highly denoised DLR (DLR_H40sec) and conventional reconstruction (ConR2min). Image quality and diagnostic performance were evaluated by observers with varying experience and compared between protocols using κ statistics and the area under the receiver operating characteristic curve (AUC). The phantom experiments demonstrated that DLR_H achieved superior signal-to-noise ratio (SNR), detail conspicuity, and sharpness, with less distortion, within the least scan time. The DLR_H40sec images exhibited higher sharpness and SNR than ConR2min. Agreement with pathological TN-stages improved using DLR_H40sec images compared to ConR2min (T: 0.846 vs. 0.771, 0.825 vs. 0.700, and 0.697 vs. 0.512; N: 0.527 vs. 0.521, 0.421 vs. 0.348, and 0.517 vs. 0.363 for junior, intermediate, and senior observers, respectively). Comparable AUCs for identifying T3-4 and N1-2 tumors were achieved with DLR_H40sec and ConR2min images (P > 0.05). Consequently, with a two-thirds reduction in scan time, DLR_H40sec images showed improved image quality and TN-staging performance comparable to conventional T2W imaging for rectal cancer patients.
PMID:39663320 | DOI:10.1007/s10278-024-01345-x
Combination of Deep and Statistical Features of the Tissue of Pathology Images to Classify and Diagnose the Degree of Malignancy of Prostate Cancer
J Imaging Inform Med. 2024 Dec 11. doi: 10.1007/s10278-024-01363-9. Online ahead of print.
ABSTRACT
Prostate cancer is one of the most prevalent male-specific diseases, where early and accurate diagnosis is essential for effective treatment and preventing disease progression. Assessing disease severity involves analyzing histological tissue samples, which are graded from 1 (healthy) to 5 (severely malignant) based on pathological features. However, traditional manual grading is labor-intensive and prone to variability. This study addresses the challenge of automating prostate cancer classification by proposing a novel histological grade analysis approach. The method integrates the gray-level co-occurrence matrix (GLCM) for extracting texture features with Haar wavelet modification to enhance feature quality. A convolutional neural network (CNN) is then employed for robust classification. The proposed method was evaluated using statistical and performance metrics, achieving an average accuracy of 97.3%, a precision of 98%, and an AUC of 0.95. These results underscore the effectiveness of the approach in accurately categorizing prostate tissue grades. This study demonstrates the potential of automated classification methods to support pathologists, enhance diagnostic precision, and improve clinical outcomes in prostate cancer care.
PMID:39663318 | DOI:10.1007/s10278-024-01363-9
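The abstract above combines GLCM texture features with a CNN. As background (a minimal sketch, not the authors' implementation — their Haar-wavelet step and CNN are omitted), a GLCM counts how often gray level i co-occurs with gray level j at a fixed pixel offset, and scalar Haralick features such as contrast are then read off the normalized matrix:

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for pixel offset (dx, dy).

    image: 2D list of integer gray levels in [0, levels).
    """
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    pairs = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in m]

def glcm_contrast(m):
    """Haralick contrast: sum of P(i, j) * (i - j)^2 over the matrix."""
    n = len(m)
    return sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
```

In a radiomics pipeline such features, computed over several offsets and angles, form a fixed-length texture descriptor that can be fed to a classifier alongside (or instead of) learned CNN features.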
Towards Automated Semantic Segmentation in Mammography Images for Enhanced Clinical Applications
J Imaging Inform Med. 2024 Dec 11. doi: 10.1007/s10278-024-01364-8. Online ahead of print.
ABSTRACT
Mammography images are widely used to detect non-palpable breast lesions or nodules, aiding in cancer prevention and enabling timely intervention when necessary. To support medical analysis, computer-aided detection systems can automate the segmentation of landmark structures, which is helpful in locating abnormalities and evaluating image acquisition adequacy. This paper presents a deep learning-based framework for segmenting the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue in standard-view mammography images. To the best of our knowledge, we introduce the largest dataset dedicated to mammography segmentation of key anatomical structures, specifically designed to train deep learning models for this task. Through comprehensive experiments, we evaluated various deep learning model architectures and training configurations, demonstrating robust segmentation performance across diverse and challenging cases. These results underscore the framework's potential for clinical integration. In our experiments, four semantic segmentation architectures were compared, all showing suitability for the target problem, thereby offering flexibility in model selection. Beyond segmentation, we introduce a suite of applications derived from this framework to assist in clinical assessments. These include automating tasks such as multi-view lesion registration and anatomical position estimation, evaluating image acquisition quality, measuring breast density, and enhancing visualization of breast tissues, thus addressing critical needs in breast cancer screening and diagnosis.
PMID:39663317 | DOI:10.1007/s10278-024-01364-8
A Neural Network for Segmenting Tumours in Ultrasound Rectal Images
J Imaging Inform Med. 2024 Dec 11. doi: 10.1007/s10278-024-01358-6. Online ahead of print.
ABSTRACT
Ultrasound imaging is the most cost-effective approach for the early detection of rectal cancer, which is a high-risk cancer. Our goal was to design an effective method that can accurately identify and segment rectal tumours in ultrasound images, thereby facilitating rectal cancer diagnoses for physicians. This would allow physicians to devote more time to determining whether the tumour is benign or malignant and whether it has metastasized rather than merely confirming its presence. Data originated from the Sichuan Province Cancer Hospital. The test, training, and validation sets were composed of 53 patients with 173 images, 195 patients with 1247 images, and 20 patients with 87 images, respectively. We created a deep learning network architecture consisting of encoders and decoders. To enhance global information capture, we substituted traditional convolutional decoders with global attention decoders and incorporated effective channel information fusion for multiscale information integration. The Dice coefficient (DSC) of the proposed model was 75.49%, which was 4.03% greater than that of the benchmark model, and the 95th-percentile Hausdorff distance (HD95) was 24.75, which was 8.43 lower than that of the benchmark model. The paired t-test statistically confirmed the significance of the difference between our model and the benchmark model, with a p-value less than 0.05. The proposed method effectively identifies and segments rectal tumours of diverse shapes. Furthermore, it distinguishes between normal rectal images and those containing tumours. Therefore, after consultation with physicians, we believe that our method can effectively assist physicians in diagnosing rectal tumours via ultrasound.
PMID:39663316 | DOI:10.1007/s10278-024-01358-6
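The HD95 metric reported above is the 95th percentile of boundary-to-boundary distances, which softens the ordinary Hausdorff distance against segmentation outliers. A minimal brute-force sketch over 2D point sets (percentile conventions vary between toolkits; this uses the nearest-rank method and is illustrative, not the paper's code):

```python
import math

def hd95(a, b):
    """Approximate 95th-percentile symmetric Hausdorff distance.

    a, b: lists of (x, y) boundary points. Pools directed nearest-neighbour
    distances from both sets, then takes the nearest-rank 95th percentile.
    O(len(a) * len(b)) — fine for contours, too slow for dense 3D masks.
    """
    def directed(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(directed(a, b) + directed(b, a))
    k = max(0, math.ceil(0.95 * len(d)) - 1)  # nearest-rank index
    return d[k]
```

Unlike Dice, HD95 is expressed in distance units (here pixels or millimetres), so lower is better; the 8.43 improvement quoted above is in those units.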
Performance of automated machine learning in detecting fundus diseases based on ophthalmologic B-scan ultrasound images
BMJ Open Ophthalmol. 2024 Dec 11;9(1):e001873. doi: 10.1136/bmjophth-2024-001873.
ABSTRACT
AIM: To evaluate the efficacy of automated machine learning (AutoML) models in detecting fundus diseases using ocular B-scan ultrasound images.
METHODS: Ophthalmologists annotated two B-scan ultrasound image datasets to develop three AutoML models-single-label, multi-class single-label and multi-label-on the Vertex artificial intelligence (AI) platform. Performance of these models was compared among themselves and against existing bespoke models for binary classification tasks.
RESULTS: The training set involved 3938 images from 1378 patients, while batch predictions used an additional set of 336 images from 180 patients. The single-label AutoML model, trained on normal and abnormal fundus images, achieved an area under the precision-recall curve (AUPRC) of 0.9943. The multi-class single-label model, focused on single-pathology images, recorded an AUPRC of 0.9617, with performance metrics of these two single-label models proving comparable to those of previously published models. The multi-label model, designed to detect both single and multiple pathologies, posted an AUPRC of 0.9650. Pathology classification AUPRCs for the multi-class single-label model ranged from 0.9277 to 1.0000 and from 0.8780 to 0.9980 for the multi-label model. Batch prediction accuracies ranged from 86.57% to 97.65% for various fundus conditions in the multi-label AutoML model. Statistical analysis demonstrated that the single-label model significantly outperformed the other two models in all evaluated metrics (p<0.05).
CONCLUSION: AutoML models, developed by clinicians, effectively detected multiple fundus lesions with performance on par with that of deep-learning models crafted by AI specialists. This underscores AutoML's potential to revolutionise ophthalmologic diagnostics, facilitating broader accessibility and application of sophisticated diagnostic technologies.
PMID:39663141 | DOI:10.1136/bmjophth-2024-001873
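The AUPRC values above summarize the precision-recall trade-off of each AutoML model. For reference, one standard estimator is average precision — the recall-weighted sum of precision values as the decision threshold sweeps down the ranked scores. A minimal sketch (illustrative only; it ignores tied scores, which the Vertex AI platform's own metric handles):

```python
def average_precision(scores, labels):
    """Area under the precision-recall curve via the AP formulation.

    scores: classifier confidences; labels: 1 for positive, 0 for negative.
    """
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    total_pos = sum(labels)
    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        recall = tp / total_pos
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

A value near 0.99, as reported for the single-label model, means positives are ranked above negatives almost perfectly across thresholds.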
Leveraging Large Language Models for Improved Understanding of Communications With Patients With Cancer in a Call Center Setting: Proof-of-Concept Study
J Med Internet Res. 2024 Dec 11;26:e63892. doi: 10.2196/63892.
ABSTRACT
BACKGROUND: Hospital call centers play a critical role in providing support and information to patients with cancer, making it crucial to effectively identify and understand patient intent during consultations. However, operational efficiency and standardization of telephone consultations, particularly when categorizing diverse patient inquiries, remain significant challenges. While traditional deep learning models like long short-term memory (LSTM) and bidirectional encoder representations from transformers (BERT) have been used to address these issues, they heavily depend on annotated datasets, which are labor-intensive and time-consuming to generate. Large language models (LLMs) like GPT-4, with their in-context learning capabilities, offer a promising alternative for classifying patient intent without requiring extensive retraining.
OBJECTIVE: This study evaluates the performance of GPT-4 in classifying the purpose of telephone consultations of patients with cancer. In addition, it compares the performance of GPT-4 to that of discriminative models, such as LSTM and BERT, with a particular focus on their ability to manage ambiguous and complex queries.
METHODS: We used a dataset of 430,355 sentences from telephone consultations with patients with cancer between 2016 and 2020. LSTM and BERT models were trained on 300,000 sentences using supervised learning, while GPT-4 was applied using zero-shot and few-shot approaches without explicit retraining. The accuracy of each model was compared using 1,000 randomly selected sentences from 2020 onward, with special attention paid to how each model handled ambiguous or uncertain queries.
RESULTS: GPT-4, using only a few examples (few-shot), attained a remarkable accuracy of 85.2%, considerably outperforming the LSTM and BERT models, which achieved accuracies of 73.7% and 71.3%, respectively. Notably, categories such as "Treatment," "Rescheduling," and "Symptoms" involve multiple contexts and exhibit significant complexity, and GPT-4 demonstrated more than 15% superior performance in handling ambiguous queries in these categories. In addition, GPT-4 excelled in categories like "Records" and "Routine," where contextual clues were clear, outperforming the discriminative models. These findings emphasize the potential of LLMs, particularly GPT-4, for interpreting complicated patient interactions during cancer-related telephone consultations.
CONCLUSIONS: This study shows the potential of GPT-4 to significantly improve the classification of patient intent in cancer-related telephone consultations. GPT-4's ability to handle complex and ambiguous queries without extensive retraining provides a substantial advantage over discriminative models like LSTM and BERT. While GPT-4 demonstrates strong performance in various areas, further refinement of prompt design and category definitions is necessary to fully leverage its capabilities in practical health care applications. Future research will explore the integration of LLMs like GPT-4 into hybrid systems that combine human oversight with artificial intelligence-driven technologies.
PMID:39661975 | DOI:10.2196/63892
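The few-shot approach described above amounts to prepending a handful of labeled examples to the query instead of fine-tuning. The study's actual prompts are not given; as a hedged sketch, prompt assembly for intent classification might look like this (category names taken from the abstract, wording and helper name invented for illustration; the LLM API call itself is omitted):

```python
def build_fewshot_prompt(categories, examples, query):
    """Assemble a few-shot intent-classification prompt as a plain string.

    categories: label names; examples: (sentence, label) demonstration pairs;
    query: the sentence to classify. The model is expected to complete the
    final "Category:" line with one of the given labels.
    """
    lines = ["Classify the patient's sentence into one of: "
             + ", ".join(categories) + "."]
    for sentence, label in examples:
        lines += ["", f"Sentence: {sentence}", f"Category: {label}"]
    lines += ["", f"Sentence: {query}", "Category:"]
    return "\n".join(lines)
```

The contrast with LSTM/BERT in the study is exactly this: the discriminative models needed 300,000 annotated training sentences, while the few-shot prompt needs only the handful of demonstrations embedded in the string.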
Accuracy of Machine Learning in Detecting Pediatric Epileptic Seizures: Systematic Review and Meta-Analysis
J Med Internet Res. 2024 Dec 11;26:e55986. doi: 10.2196/55986.
ABSTRACT
BACKGROUND: Real-time monitoring of pediatric epileptic seizures poses a significant challenge in clinical practice. In recent years, machine learning (ML) has attracted substantial attention from researchers for diagnosing and treating neurological diseases, leading to its application for detecting pediatric epileptic seizures. However, systematic evidence substantiating its feasibility remains limited.
OBJECTIVE: This systematic review aimed to consolidate the existing evidence regarding the effectiveness of ML in monitoring pediatric epileptic seizures with an effort to provide an evidence-based foundation for the development and enhancement of intelligent tools in the future.
METHODS: We conducted a systematic search of the PubMed, Cochrane, Embase, and Web of Science databases for original studies focused on the detection of pediatric epileptic seizures using ML, with a cutoff date of August 27, 2023. The risk of bias in eligible studies was assessed using the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies-2). Meta-analyses were performed to evaluate the C-index and the diagnostic 4-grid table, using a bivariate mixed-effects model for the latter. We also examined publication bias for the C-index by using funnel plots and the Egger test.
RESULTS: This systematic review included 28 original studies, with 15 studies on ML and 13 on deep learning (DL). All these models were based on electroencephalography data of children. The pooled C-index, sensitivity, specificity, and accuracy of ML in the training set were 0.76 (95% CI 0.69-0.82), 0.77 (95% CI 0.73-0.80), 0.74 (95% CI 0.70-0.77), and 0.75 (95% CI 0.72-0.77), respectively. In the validation set, the pooled C-index, sensitivity, specificity, and accuracy of ML were 0.73 (95% CI 0.67-0.79), 0.88 (95% CI 0.83-0.91), 0.83 (95% CI 0.71-0.90), and 0.78 (95% CI 0.73-0.82), respectively. Meanwhile, the pooled C-index of DL in the validation set was 0.91 (95% CI 0.88-0.94), with sensitivity, specificity, and accuracy being 0.89 (95% CI 0.85-0.91), 0.91 (95% CI 0.88-0.93), and 0.89 (95% CI 0.86-0.92), respectively.
CONCLUSIONS: Our systematic review demonstrates promising accuracy of artificial intelligence methods in epilepsy detection. DL appears to offer higher detection accuracy than ML. These findings support the development of DL-based early-warning tools in future research.
TRIAL REGISTRATION: PROSPERO CRD42023467260; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023467260.
PMID:39661965 | DOI:10.2196/55986
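The meta-analysis above pools sensitivity and specificity extracted from each study's diagnostic 4-grid (2x2) table. For reference, the per-study metrics are computed as follows (a minimal sketch; the bivariate mixed-effects pooling itself is not shown):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a diagnostic 2x2 table.

    tp/fp/fn/tn: true-positive, false-positive, false-negative,
    true-negative counts.
    """
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
    return sensitivity, specificity, accuracy
```

The bivariate model then combines these per-study pairs while accounting for their correlation, which is why the pooled sensitivity and specificity are reported jointly rather than averaged independently.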
ChemNTP: Advanced Prediction of Neurotoxicity Targets for Environmental Chemicals Using a Siamese Neural Network
Environ Sci Technol. 2024 Dec 11. doi: 10.1021/acs.est.4c10081. Online ahead of print.
ABSTRACT
Environmental chemicals can enter the human body through various exposure pathways, potentially leading to neurotoxic effects that pose significant health risks. Many such chemicals have been identified as neurotoxic, but the molecular mechanisms underlying their toxicity, including specific binding targets, remain unclear. To address this, we developed ChemNTP, a predictive model for identifying neurotoxicity targets of environmental chemicals. ChemNTP integrates a comprehensive representation of chemical structures and biological targets, improving upon traditional methods that are limited to single targets and mechanisms. By leveraging these structural representations, ChemNTP enables rapid screening across 199 potential neurotoxic targets or key molecular initiating events (MIEs). The model demonstrates robust predictive performance, achieving an area under the receiver operating characteristic curve (AUCROC) of 0.923 on the test set. Additionally, ChemNTP's attention mechanism highlights critical residues in binding targets and key functional groups or atoms in molecules, offering insights into the structural basis of interactions. Experimental validation through in vitro enzyme activity assays and molecular docking confirmed the binding of eight polybrominated diphenyl ethers (PBDEs) to acetylcholinesterase (AChE). We also provide a user-friendly software interface to facilitate the rapid identification of neurotoxicity targets for emerging environmental pollutants, with potential applications in studying MIEs for more types of toxicity.
PMID:39661815 | DOI:10.1021/acs.est.4c10081
Endomicroscopic AI-driven morphochemical imaging and fs-laser ablation for selective tumor identification and selective tissue removal
Sci Adv. 2024 Dec 13;10(50):eado9721. doi: 10.1126/sciadv.ado9721. Epub 2024 Dec 11.
ABSTRACT
The rising incidence of head and neck cancer represents a serious global health challenge, requiring more accurate diagnosis and innovative surgical approaches. Multimodal nonlinear optical microscopy, combining coherent anti-Stokes Raman scattering (CARS), two-photon excited fluorescence (TPEF), and second-harmonic generation (SHG) with deep learning-based analysis routines, offers label-free assessment of the tissue's morphochemical composition and allows early-stage and automatic detection of disease. For clinical intraoperative application, compact devices are required. In this preclinical study, a cohort of 15 patients was examined with a newly developed rigid CARS/TPEF/SHG endomicroscope. To detect head and neck tumor from the multimodal data, deep learning-based semantic segmentation models were used. This preclinical study yielded a diagnostic sensitivity of 88% and a specificity of 96%. To combine diagnostics with therapy, machine learning-inspired, image-guided selective tissue removal was implemented by integrating femtosecond laser ablation into the endomicroscope. This enables a powerful intraoperative "seek and treat" approach, paving the way to advanced surgical treatment.
PMID:39661684 | DOI:10.1126/sciadv.ado9721
DeepPD: A Deep Learning Method for Predicting Peptide Detectability Based on Multi-feature Representation and Information Bottleneck
Interdiscip Sci. 2024 Dec 11. doi: 10.1007/s12539-024-00665-4. Online ahead of print.
ABSTRACT
Peptide detectability measures the relationship between the protein composition and abundance in the sample and the peptides identified during the analytical procedure. This relationship has significant implications for the fundamental tasks of proteomics. Existing methods primarily rely on a single type of feature representation, which limits their ability to capture the intricate and diverse characteristics of peptides. In response to this limitation, we introduce DeepPD, an innovative deep learning framework incorporating multi-feature representation and the information bottleneck principle (IBP) to predict peptide detectability. DeepPD extracts semantic information from peptides using evolutionary scale modeling 2 (ESM-2) and integrates sequence and evolutionary information to construct the feature space collaboratively. The IBP effectively guides the feature learning process, minimizing redundancy in the feature space. Experimental results across various datasets demonstrate that DeepPD outperforms state-of-the-art methods. Furthermore, we demonstrate that DeepPD exhibits competitive generalization and transfer learning capabilities across diverse datasets and species. In conclusion, DeepPD emerges as the most effective method for predicting peptide detectability, showcasing its potential applicability to other protein sequence prediction tasks.
PMID:39661307 | DOI:10.1007/s12539-024-00665-4
Artificial Intelligence Advancements in Cardiomyopathies: Implications for Diagnosis and Management of Arrhythmogenic Cardiomyopathy
Curr Heart Fail Rep. 2024 Dec 11;22(1):5. doi: 10.1007/s11897-024-00688-4.
ABSTRACT
PURPOSE OF REVIEW: This review aims to explore the emerging potential of artificial intelligence (AI) in refining risk prediction, clinical diagnosis, and treatment stratification for cardiomyopathies, with a specific emphasis on arrhythmogenic cardiomyopathy (ACM).
RECENT FINDINGS: Recent developments highlight the capacity of AI to construct sophisticated models that accurately distinguish cardiomyopathy patients from unaffected individuals. These AI-driven approaches not only offer precision in risk prediction and diagnostics but also enable early identification of individuals at high risk of developing cardiomyopathy, even before symptoms occur. These models can utilise diverse clinical input datasets such as electrocardiogram recordings, cardiac imaging, and other multi-modal genetic and omics datasets. Despite their current underrepresentation in the literature, ACM diagnosis and risk prediction are expected to benefit greatly from AI computational capabilities, as has been the case for other cardiomyopathies. As AI-based models improve, larger and more complex datasets can be combined. These integrated datasets, with larger sample sizes, will contribute to further pathophysiological insights, better disease recognition, risk prediction, and improved patient outcomes.
PMID:39661213 | DOI:10.1007/s11897-024-00688-4
Integrating Deep Learning with Biology: A New Frontier in Triple-Negative Breast Cancer Treatment Prediction?
Radiol Artif Intell. 2025 Jan;7(1):e240740. doi: 10.1148/ryai.240740.
NO ABSTRACT
PMID:39660996 | DOI:10.1148/ryai.240740
Resting-State Functional MRI: Current State, Controversies, Limitations, and Future Directions-AJR Expert Panel Narrative Review
AJR Am J Roentgenol. 2024 Dec 11. doi: 10.2214/AJR.24.32163. Online ahead of print.
ABSTRACT
Resting-state functional MRI (rs-fMRI), a promising method for interrogating different brain functional networks from a single MRI acquisition, is increasingly utilized in clinical presurgical and other pretherapeutic brain mapping. However, challenges in standardization of acquisition, preprocessing, and analysis methods across centers, and variability in results interpretation, complicate its clinical use. Additionally, inherent problems regarding reliability of language lateralization, interpatient variability of cognitive network representation, dynamic aspects of intranetwork and internetwork connectivity, and effects of neurovascular uncoupling on network detection still must be overcome. Although deep-learning solutions and further methodologic standardization will help address these issues, rs-fMRI remains generally considered an adjunct to task-based fMRI (tb-fMRI) for clinical presurgical mapping. Nonetheless, in many clinical instances, rs-fMRI may offer valuable additional information that supplements tb-fMRI, especially if tb-fMRI is inadequate due to patient performance or other limitations. Future growth in clinical applications of rs-fMRI is anticipated as challenges are increasingly addressed. In this AJR Expert Panel Narrative Review, we summarize the current state and emerging clinical utility of rs-fMRI, focusing on its role in presurgical mapping. We present ongoing controversies and limitations in clinical applicability and discuss future directions including the developing role of rs-fMRI in neuromodulation treatment for various neurologic disorders.
PMID:39660823 | DOI:10.2214/AJR.24.32163
Advanced Nosema bombycis Spore Identification: Single-Cell Raman Spectroscopy Combined with Self-Attention Mechanism-Guided Deep Learning
Anal Chem. 2024 Dec 11. doi: 10.1021/acs.analchem.4c04817. Online ahead of print.
ABSTRACT
Nosema bombycis (Nb) is a dangerous pathogen that can spread rapidly through free spores. Pebrine disease caused by Nb spores is a serious threat to silkworms, causing huge economic losses in both the silk industry and agriculture every year. Thus, accurate identification of living Nb spores at the single-cell level is in great demand. In this work, we propose a novel approach to accurately and conveniently identify Nb spores using single-cell Raman spectroscopy and a self-attention mechanism (SAM)-guided convolutional neural network (CNN) framework. With the assistance of SAM and data augmentation, the optimal CNN model not only efficiently extracts spectral feature information but also constructs potential relationships among global spectral features. Compared with the case without SAM and data augmentation, the average prediction accuracy for Nb spores from nine different Bombyx mori larvae improved by almost 18%, from 83.93 ± 4.88% to 99.27 ± 0.25%. To visualize the individual classification weights, we propose a local feature extraction strategy that blocks individual Raman bands. According to the relative weights, four Raman bands, located at 1658, 1458, 1127, and 849 cm-1, contribute most to the high prediction accuracy of 99.27 ± 0.25%. Notably, these bands are also highlighted by the weight curve of SAM, indicating that the four Raman bands identified by our optimal CNN model are reliable. Our findings clearly show that single-cell Raman spectroscopy combined with a SAM-mediated CNN has great potential for early diagnosis of Nb spores and monitoring of pebrine disease.
PMID:39660811 | DOI:10.1021/acs.analchem.4c04817
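The two building blocks this abstract combines (convolution over a 1D spectrum and self-attention that relates distant spectral regions) can be sketched in a few lines of NumPy. This is a toy illustration under simplifying assumptions, not the authors' model: the filters are random rather than trained, the spectrum is synthetic, and a single attention head stands in for the full SAM-guided CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1D convolution of a spectrum with a bank of kernels;
    returns (n_kernels, out_len) feature maps."""
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

def self_attention(tokens):
    """Single-head scaled dot-product self-attention over spectral tokens.
    tokens: (n_tokens, d). Returns attended tokens and the weight matrix,
    whose rows show how strongly each spectral region attends to the others."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)       # Q = K = V = tokens
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # each row sums to 1
    return w @ tokens, w

# Synthetic "spectrum": 1000 wavenumber bins with one band near bin 600
spectrum = rng.normal(0, 0.05, 1000)
spectrum[590:610] += np.hanning(20) * 2.0

kernels = rng.normal(0, 1, (4, 25))               # 4 untrained conv filters
fmap = conv1d(spectrum, kernels)                  # (4, 976) feature maps
tokens = fmap[:, :960].reshape(4, 8, 120).mean(axis=0)  # 8 spectral tokens
attended, weights = self_attention(tokens)
print(weights.shape)  # (8, 8)
```

The attention weight matrix plays the role the abstract describes for SAM: inspecting which spectral segments receive high weight is one way to surface the bands that drive a classification decision.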