Deep learning

Accurate prediction of protein function using statistics-informed graph networks

Sat, 2024-08-03 06:00

Nat Commun. 2024 Aug 4;15(1):6601. doi: 10.1038/s41467-024-50955-0.

ABSTRACT

Understanding protein function is pivotal in comprehending the intricate mechanisms that underlie many crucial biological activities, with far-reaching implications in the fields of medicine, biotechnology, and drug development. However, more than 200 million proteins remain uncharacterized, and computational efforts heavily rely on protein structural information to predict annotations of varying quality. Here, we present PhiGnet, a method that utilizes statistics-informed graph networks to predict protein functions solely from their sequences. Our method inherently characterizes evolutionary signatures, allowing for a quantitative assessment of the significance of residues that carry out specific functions. PhiGnet not only demonstrates superior performance compared to alternative approaches but also narrows the sequence-function gap, even in the absence of structural information. Our findings indicate that applying deep learning to evolutionary data can highlight functional sites at the residue level, providing valuable support for interpreting both existing properties and new functionalities of proteins in research and biomedicine.

PMID:39097570 | DOI:10.1038/s41467-024-50955-0

Categories: Literature Watch

Cardiac Substructure Dose and Survival in Stereotactic Radiotherapy for Lung Cancer: Results of the Multi-Centre SSBROC Trial

Sat, 2024-08-03 06:00

Clin Oncol (R Coll Radiol). 2024 Jul 20:S0936-6555(24)00289-9. doi: 10.1016/j.clon.2024.07.005. Online ahead of print.

ABSTRACT

BACKGROUND AND PURPOSE: Stereotactic ablative body radiotherapy (SABR) is increasingly used for early-stage lung cancer; however, the impact of dose to the heart and cardiac substructures remains largely unknown. This study investigated the doses received by cardiac substructures in SABR patients and their impact on survival.

MATERIALS AND METHODS: SSBROC is an Australian multi-centre phase II prospective study of SABR for stage I non-small cell lung cancer. Patients were treated between 2013 and 2019 across 9 centres. In this secondary analysis of the dataset, a previously published, locally developed open-source hybrid deep learning tool for automatic cardiac substructure segmentation was deployed on the planning CTs of 117 trial patients. Physical doses to 18 cardiac structures and EQD2-converted doses (α/β = 3) were calculated. Endpoints evaluated included pericardial effusion and overall survival. Associations between cardiac doses and survival were analysed with the Kaplan-Meier method and Cox proportional hazards models.
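For context, converting physical doses to equieffective doses in 2 Gy fractions (EQD2) with α/β = 3, as done in this analysis, follows the standard linear-quadratic formula. A minimal sketch (the function name and the 48 Gy in 4 fractions example are illustrative, not values from the trial):

```python
def eqd2(total_dose_gy, n_fractions, alpha_beta=3.0):
    """Equieffective dose in 2 Gy fractions under the linear-quadratic model."""
    d = total_dose_gy / n_fractions  # physical dose per fraction
    return total_dose_gy * (d + alpha_beta) / (2.0 + alpha_beta)

# Example: a common lung SABR prescription of 48 Gy in 4 fractions
print(eqd2(48, 4))  # → 144.0
```

Note that a schedule already delivered in 2 Gy fractions maps to itself under this conversion, which is a useful sanity check.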

RESULTS: Cardiac structures that received the highest physical mean doses were superior vena cava (22.5 Gy) and sinoatrial node (18.3 Gy). The highest physical maximum dose was received by the heart (51.7 Gy) and right atrium (45.3 Gy). Three patients developed grade 2, and one grade 3 pericardial effusion. The cohort receiving higher than median mean heart dose (MHD) had poorer survival compared to those who received below median MHD (p = 0.00004). On multivariable Cox analysis, male gender and maximum dose to ascending aorta were significant for worse survival.

CONCLUSIONS: Patients treated with lung SABR may receive high doses to cardiac substructures. Dichotomising patients by median mean heart dose showed a clear difference in survival. On multivariable analyses, gender and dose to the ascending aorta were significant for survival; however, cardiac substructure dosimetry and outcomes should be further explored in larger studies.

PMID:39097416 | DOI:10.1016/j.clon.2024.07.005

Categories: Literature Watch

[Translated article] Introducing artificial intelligence to hospital pharmacy departments

Sat, 2024-08-03 06:00

Farm Hosp. 2024 Jul;48 Suppl 1:TS35-TS44. doi: 10.1016/j.farma.2024.04.001.

ABSTRACT

Artificial intelligence is a broad concept that includes the study of the ability of computers to perform tasks that would normally require the intervention of human intelligence. By exploiting large volumes of healthcare data, artificial intelligence algorithms can identify patterns and predict outcomes, which can help healthcare organizations and their professionals make better decisions and achieve better results. Machine learning, deep learning, neural networks, and natural language processing are among the most important methods, allowing systems to learn and improve from data without the need for explicit programming. Artificial intelligence has been introduced in biomedicine, accelerating processes, improving accuracy and efficiency, and improving patient care. By using artificial intelligence algorithms and machine learning, hospital pharmacists can analyze a large volume of patient data, including medical records, laboratory results, and medication profiles, aiding them in identifying potential drug-drug interactions, assessing the safety and efficacy of medicines, and making informed recommendations. Artificial intelligence integration will improve the quality of pharmaceutical care, optimize processes, promote research, deploy open innovation, and facilitate education. Hospital pharmacists who master artificial intelligence will play a crucial role in this transformation.

PMID:39097375 | DOI:10.1016/j.farma.2024.04.001

Categories: Literature Watch

Approaching artificial intelligence to Hospital Pharmacy

Sat, 2024-08-03 06:00

Farm Hosp. 2024 Jul;48 Suppl 1:S35-S44. doi: 10.1016/j.farma.2024.02.007.

ABSTRACT

Artificial intelligence (AI) is a broad concept that includes the study of the ability of computers to perform tasks that would normally require the intervention of human intelligence. By exploiting large volumes of healthcare data, artificial intelligence algorithms can identify patterns and predict outcomes, which can help healthcare organizations and their professionals make better decisions and achieve better results. Machine learning, deep learning, neural networks, and natural language processing are among the most important methods, allowing systems to learn and improve from data without the need for explicit programming. AI has been introduced in biomedicine, accelerating processes, improving safety and efficiency, and improving patient care. By using AI algorithms and machine learning, hospital pharmacists can analyze a large volume of patient data, including medical records, laboratory results, and medication profiles, aiding them in identifying potential drug-drug interactions, assessing the safety and efficacy of medicines, and making informed recommendations. AI integration will improve the quality of pharmaceutical care, optimize processes, promote research, deploy open innovation, and facilitate education. Hospital pharmacists who master AI will play a crucial role in this transformation.

PMID:39097366 | DOI:10.1016/j.farma.2024.02.007

Categories: Literature Watch

Deep learning based method for predicting DNA N6-methyladenosine sites

Sat, 2024-08-03 06:00

Methods. 2024 Aug 1:S1046-2023(24)00179-8. doi: 10.1016/j.ymeth.2024.07.012. Online ahead of print.

ABSTRACT

DNA N6-methyladenine (6mA) plays an important role in many biological processes, and accurately identifying its sites helps to understand its biological effects more comprehensively. Traditional experimental methods are very labor-intensive, and traditional machine learning methods are becoming insufficient as 6mA methylation databases grow progressively larger, so we propose a deep learning-based method, a multi-scale convolutional model based on global response normalization (CG6mA), to solve the 6mA site prediction problem. This method was tested against other methods on three different kinds of benchmark datasets, and the results show that our model obtains more accurate predictions.
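Sequence-based site predictors of this kind typically begin by one-hot encoding the DNA window around each candidate adenine before feeding it to the convolutional layers. A minimal sketch of that preprocessing step (not the authors' code; the A/C/G/T column order is an assumption):

```python
def one_hot_dna(seq):
    """Encode a DNA sequence as a position-by-nucleotide binary matrix."""
    index = {"A": 0, "C": 1, "G": 2, "T": 3}
    matrix = [[0, 0, 0, 0] for _ in seq]
    for i, base in enumerate(seq.upper()):
        if base in index:  # ambiguous bases (e.g. N) stay all-zero
            matrix[i][index[base]] = 1
    return matrix

print(one_hot_dna("GATC"))
# → [[0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]]
```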

PMID:39097179 | DOI:10.1016/j.ymeth.2024.07.012

Categories: Literature Watch

IRnet: Immunotherapy response prediction using pathway knowledge-informed graph neural network

Sat, 2024-08-03 06:00

J Adv Res. 2024 Aug 1:S2090-1232(24)00320-5. doi: 10.1016/j.jare.2024.07.036. Online ahead of print.

ABSTRACT

INTRODUCTION: Immune checkpoint inhibitors (ICIs) are potent and precise therapies for various cancer types, significantly improving survival rates in patients who respond positively to them. However, only a minority of patients benefit from ICI treatments.

OBJECTIVES: Identifying ICI responders before treatment could greatly conserve medical resources, minimize potential drug side effects, and expedite the search for alternative therapies. Our goal is to introduce a novel deep-learning method to predict ICI treatment responses in cancer patients.

METHODS: The proposed deep-learning framework leverages graph neural network and biological pathway knowledge. We trained and tested our method using ICI-treated patients' data from several clinical trials covering melanoma, gastric cancer, and bladder cancer.

RESULTS: Our results demonstrate that this predictive model outperforms current state-of-the-art methods and tumor microenvironment-based predictors. Additionally, the model quantifies the importance of pathways, pathway interactions, and genes in its predictions. A web server for IRnet has been developed and deployed, providing broad accessibility to users at https://irnet.missouri.edu.

CONCLUSION: IRnet is a competitive tool for predicting patient responses to immunotherapy, specifically ICIs. Its interpretability also offers valuable insights into the mechanisms underlying ICI treatments.

PMID:39097091 | DOI:10.1016/j.jare.2024.07.036

Categories: Literature Watch

Multi-grained contrastive representation learning for label-efficient lesion segmentation and onset time classification of acute ischemic stroke

Sat, 2024-08-03 06:00

Med Image Anal. 2024 Jun 25;97:103250. doi: 10.1016/j.media.2024.103250. Online ahead of print.

ABSTRACT

Ischemic lesion segmentation and the time since stroke (TSS) onset classification from paired multi-modal MRI imaging of unwitnessed acute ischemic stroke (AIS) patients is crucial, which supports tissue plasminogen activator (tPA) thrombolysis decision-making. Deep learning methods demonstrate superiority in TSS classification. However, they often overfit task-irrelevant features due to insufficient paired labeled data, resulting in poor generalization. We observed that unpaired data are readily available and inherently carry task-relevant cues, but are less often considered and explored. Based on this, in this paper, we propose to fully excavate the potential of unpaired unlabeled data and use them to facilitate the downstream AIS analysis task. We first analyze the utility of features at the varied grain and propose a multi-grained contrastive learning (MGCL) framework to learn task-related prior representations from both coarse-grained and fine-grained levels. The former can learn global prior representations to enhance the location ability for the ischemic lesions and perceive the healthy surroundings, while the latter can learn local prior representations to enhance the perception ability for semantic relation between the ischemic lesion and other health regions. To better transfer and utilize the learned task-related representation, we designed a novel multi-task framework to simultaneously achieve ischemic lesion segmentation and TSS classification with limited labeled data. In addition, a multi-modal region-related feature fusion module is proposed to enable the feature correlation and synergy between multi-modal deep image features for more accurate TSS decision-making. Extensive experiments on the large-scale multi-center MRI dataset demonstrate the superiority of the proposed framework. Therefore, it is promising that it helps better stroke evaluation and treatment decision-making.
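Contrastive objectives of the kind used at both granularities here are usually variants of the InfoNCE loss, which is low when the positive pair's similarity dominates the negatives. A minimal single-anchor sketch (the similarity inputs and temperature are illustrative; the paper's exact loss may differ):

```python
import math

def info_nce(pos_sim, neg_sims, tau=0.1):
    """InfoNCE loss for one anchor: -log softmax of the positive similarity
    against all similarities, scaled by temperature tau."""
    logits = [pos_sim / tau] + [s / tau for s in neg_sims]
    m = max(logits)  # subtract max for numerical stability (log-sum-exp)
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_denom - pos_sim / tau
```

The loss shrinks as the positive similarity grows relative to the negatives, which is what pulls matched views together in representation space.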

PMID:39096842 | DOI:10.1016/j.media.2024.103250

Categories: Literature Watch

AIE-YOLO: Effective object detection method in extreme driving scenarios via adaptive image enhancement

Sat, 2024-08-03 06:00

Sci Prog. 2024 Jul-Sep;107(3):368504241263165. doi: 10.1177/00368504241263165.

ABSTRACT

The widespread research and implementation of visual object detection technology have significantly transformed the autonomous driving industry. Autonomous driving relies heavily on visual sensors to perceive and analyze the environment. However, under extreme weather conditions, such as heavy rain, fog, or low light, these sensors may encounter disruptions, resulting in decreased image quality and reduced detection accuracy, thereby increasing the risk for autonomous driving. To address these challenges, we propose adaptive image enhancement (AIE)-YOLO, a novel object detection method to enhance road object detection accuracy under extreme weather conditions. To tackle the issue of image quality degradation in extreme weather, we designed an improved adaptive image enhancement module. This module dynamically adjusts the pixel features of road images based on different scene conditions, thereby enhancing object visibility and suppressing irrelevant background interference. Additionally, we introduce a spatial feature extraction module to adaptively enhance the model's spatial modeling capability under complex backgrounds. Furthermore, a channel feature extraction module is designed to adaptively enhance the model's representation and generalization abilities. Due to the difficulty in acquiring real-world data for various extreme weather conditions, we constructed a novel benchmark dataset named extreme weather simulation-rare object dataset. This dataset comprises ten types of simulated extreme weather scenarios and is built upon a publicly available rare object detection dataset. Extensive experiments conducted on the extreme weather simulation-rare object dataset demonstrate that AIE-YOLO outperforms existing state-of-the-art methods, achieving excellent detection performance under extreme weather conditions.

PMID:39096044 | DOI:10.1177/00368504241263165

Categories: Literature Watch

AI-driven convolutional neural networks for accurate identification of yellow fever vectors

Fri, 2024-08-02 06:00

Parasit Vectors. 2024 Aug 2;17(1):329. doi: 10.1186/s13071-024-06406-2.

ABSTRACT

BACKGROUND: Identifying mosquito vectors is crucial for controlling diseases. Automated identification studies using the convolutional neural network (CNN) have been conducted for some urban mosquito vectors but not yet for the sylvatic mosquito vectors that transmit yellow fever. We evaluated the ability of the AlexNet CNN to identify four mosquito species: Aedes serratus, Aedes scapularis, Haemagogus leucocelaenus, and Sabethes albiprivus, and whether there is variation in AlexNet's ability to classify mosquitoes based on pictures of four different body regions.

METHODS: The specimens were photographed using a cell phone connected to a stereoscope. Photographs were taken of the full-body, pronotum and lateral view of the thorax, which were pre-processed to train the AlexNet algorithm. The evaluation was based on the confusion matrix, the accuracy (ten pseudo-replicates) and the confidence interval for each experiment.
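A normal-approximation confidence interval over pseudo-replicate accuracies, as evaluated here, can be computed with nothing beyond the standard library. A generic sketch (not the authors' code; the replicate values below are made up):

```python
import statistics

def accuracy_ci(replicate_accs, z=1.96):
    """Mean accuracy with a normal-approximation 95% CI across replicates."""
    mean = statistics.mean(replicate_accs)
    # standard error of the mean from the sample standard deviation
    sem = statistics.stdev(replicate_accs) / len(replicate_accs) ** 0.5
    return mean, (mean - z * sem, mean + z * sem)

mean, (low, high) = accuracy_ci([0.91, 0.93, 0.92, 0.94, 0.90])
```

With only ten pseudo-replicates, a t-based interval would be slightly wider; the normal approximation is the simplest reasonable choice.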

RESULTS: Our study found that the AlexNet can accurately identify mosquito pictures of the genus Aedes, Sabethes and Haemagogus with over 90% accuracy. Furthermore, the algorithm performance did not change according to the body regions submitted. It is worth noting that the state of preservation of the mosquitoes, which were often damaged, may have affected the network's ability to differentiate between these species and thus accuracy rates could have been even higher.

CONCLUSIONS: Our results support the idea of applying CNNs for artificial intelligence (AI)-driven identification of mosquito vectors of tropical diseases. This approach can potentially be used in the surveillance of yellow fever vectors by health services and the population as well.

PMID:39095920 | DOI:10.1186/s13071-024-06406-2

Categories: Literature Watch

Integrating bioinformatics and machine learning methods to analyze diagnostic biomarkers for HBV-induced hepatocellular carcinoma

Fri, 2024-08-02 06:00

Diagn Pathol. 2024 Aug 2;19(1):105. doi: 10.1186/s13000-024-01528-8.

ABSTRACT

Hepatocellular carcinoma (HCC) is a malignant tumor. An estimated 50-80% of HCC cases worldwide are caused by hepatitis B virus (HBV) infection, and other pathogenic factors have been shown to promote the development of HCC when coexisting with HBV. Understanding the molecular mechanisms of HBV-induced hepatocellular carcinoma (HBV-HCC) is crucial for the prevention, diagnosis, and treatment of the disease. In this study, we analyzed the molecular mechanisms of HBV-induced HCC by combining bioinformatics and machine learning methods. First, we collected a gene set related to HBV-HCC from the GEO database and performed differential expression and WGCNA analyses to identify genes that are abnormally expressed in tumors and highly relevant to them. We then used three machine learning methods, Lasso, random forest, and SVM, to identify the key genes RACGAP1, ECT2, and NDC80. By establishing a diagnostic model, we determined the accuracy of the key genes in diagnosing HBV-HCC. In the training set, RACGAP1 (AUC: 0.976), ECT2 (AUC: 0.969), and NDC80 (AUC: 0.976) showed high accuracy. They also exhibited good accuracy in the validation set: RACGAP1 (AUC: 0.878), ECT2 (AUC: 0.731), and NDC80 (AUC: 0.915). The key genes were highly expressed in liver cancer tissues compared to normal liver tissues, and survival analysis indicated that their high expression was associated with poor prognosis in liver cancer patients. This suggests a close relationship between the key genes RACGAP1, ECT2, and NDC80 and the occurrence and progression of HBV-HCC. Molecular docking showed that the key genes can spontaneously bind the anti-hepatocellular carcinoma drugs Lenvatinib, Regorafenib, and Sorafenib with strong binding activity. Therefore, ECT2, NDC80, and RACGAP1 may serve as potential biomarkers for the diagnosis of HBV-HCC and as targets for the development of targeted therapeutic drugs.
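The reported AUCs measure how well each gene's expression separates tumor from normal samples. AUC is equivalent to the probability that a randomly chosen positive case outscores a randomly chosen negative one, which a short sketch makes concrete (illustrative, not the study's pipeline):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive scores above a random
    negative; ties count half (the Mann-Whitney U formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.976, as reported for RACGAP1 in the training set, means a tumor sample outranks a normal sample about 97.6% of the time under this statistic.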

PMID:39095799 | DOI:10.1186/s13000-024-01528-8

Categories: Literature Watch

PLEKv2: predicting lncRNAs and mRNAs based on intrinsic sequence features and the coding-net model

Fri, 2024-08-02 06:00

BMC Genomics. 2024 Aug 2;25(1):756. doi: 10.1186/s12864-024-10662-y.

ABSTRACT

BACKGROUND: Long non-coding RNAs (lncRNAs) are RNA transcripts of more than 200 nucleotides that do not encode canonical proteins. Their biological structure is similar to that of messenger RNAs (mRNAs). To distinguish between lncRNA and mRNA transcripts quickly and accurately, we upgraded the PLEK alignment-free tool to its next version, PLEKv2, and constructed models tailored for both animals and plants.

RESULTS: PLEKv2 can achieve 98.7% prediction accuracy for human datasets. Compared with classical tools and deep learning-based models, this is 8.1%, 3.7%, 16.6%, 1.4%, 4.9%, and 48.9% higher than CPC2, CNCI, Wen et al.'s CNN, LncADeep, PLEK, and NcResNet, respectively. The accuracy of PLEKv2 was > 90% for cross-species prediction. PLEKv2 is more effective and robust than CPC2, CNCI, LncADeep, PLEK, and NcResNet for primate datasets (including chimpanzees, macaques, and gorillas). Moreover, PLEKv2 is not only suitable for non-human primates that are closely related to humans, but can also predict the coding ability of RNA sequences in plants such as Arabidopsis.

CONCLUSIONS: The experimental results illustrate that the model constructed by PLEKv2 can distinguish lncRNAs and mRNAs better than PLEK. The PLEKv2 software is freely available at https://sourceforge.net/projects/plek2/ .

PMID:39095710 | DOI:10.1186/s12864-024-10662-y

Categories: Literature Watch

Can supervised deep learning architecture outperform autoencoders in building propensity score models for matching?

Fri, 2024-08-02 06:00

BMC Med Res Methodol. 2024 Aug 2;24(1):167. doi: 10.1186/s12874-024-02284-5.

ABSTRACT

PURPOSE: Propensity score matching is vital in epidemiological studies using observational data, yet its estimates rely on correct model specification. This study assesses supervised deep learning models and unsupervised autoencoders for propensity score estimation, comparing them with traditional methods on bias and variance in treatment effect estimation.

METHODS: Utilizing a plasmode simulation based on the Right Heart Catheterization dataset, under a variety of settings, we evaluated (1) a supervised deep learning architecture and (2) an unsupervised autoencoder, alongside two traditional methods: logistic regression and a spline-based method in estimating propensity scores for matching. Performance metrics included bias, standard errors, and coverage probability. The analysis was also extended to real-world data, with estimates compared to those obtained via a double robust approach.
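Whichever model produces the propensity scores, the matching step itself is commonly greedy 1:1 nearest-neighbour matching within a caliper. A minimal sketch of that step (the function name and caliper value are illustrative assumptions, not the study's settings):

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on estimated propensity scores.
    Treated units with no control inside the caliper are left unmatched."""
    available = dict(enumerate(control_ps))  # control index -> score
    pairs = []
    for t_idx, t_ps in enumerate(treated_ps):
        if not available:
            break
        c_idx = min(available, key=lambda i: abs(available[i] - t_ps))
        if abs(available[c_idx] - t_ps) <= caliper:
            pairs.append((t_idx, c_idx))
            del available[c_idx]  # matching without replacement
    return pairs
```

Greedy matching is order-dependent; optimal (e.g. Hungarian) matching avoids that at a higher computational cost.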

RESULTS: The analysis revealed that supervised deep learning models outperformed unsupervised autoencoders in variance estimation while maintaining comparable levels of bias. These results were supported by analyses of real-world data, where the supervised model's estimates closely matched those derived from conventional methods. Additionally, deep learning models performed well compared to traditional methods in settings where exposure was rare.

CONCLUSION: Supervised deep learning models hold promise in refining propensity score estimations in epidemiological research, offering nuanced confounder adjustment, especially in complex datasets. We endorse integrating supervised deep learning into epidemiological research and share reproducible codes for widespread use and methodological transparency.

PMID:39095707 | DOI:10.1186/s12874-024-02284-5

Categories: Literature Watch

Multisource information fusion method for vegetable disease detection

Fri, 2024-08-02 06:00

BMC Plant Biol. 2024 Aug 2;24(1):738. doi: 10.1186/s12870-024-05346-4.

ABSTRACT

Automated detection and identification of vegetable diseases can enhance vegetable quality and increase profits. Images of greenhouse-grown vegetable diseases often feature complex backgrounds, a diverse array of diseases, and subtle symptomatic differences. Previous studies have grappled with accurately pinpointing lesion positions and quantifying infection degrees, resulting in overall low recognition rates. To tackle the challenges posed by insufficient validation datasets and low detection and recognition rates, this study capitalizes on the geographical advantage of Shouguang, renowned as the "Vegetable Town," to establish a self-built vegetable base for data collection and validation experiments. Concentrating on a broad spectrum of fruit and vegetable crops afflicted with various diseases, we conducted on-site collection of greenhouse disease images, compiled a large-scale dataset, and introduced the Space-Time Fusion Attention Network (STFAN). STFAN integrates multi-source information on vegetable disease occurrences, bolstering the model's resilience. Additionally, we proposed the Multilayer Encoder-Decoder Feature Fusion Network (MEDFFN) to counteract feature disappearance in deep convolutional blocks, complemented by the Boundary Structure Loss function to guide the model in acquiring more detailed and accurate boundary information. By devising a detection and recognition model that extracts high-resolution feature representations from multiple sources, precise disease detection and identification were achieved. This study offers technical backing for the holistic prevention and control of vegetable diseases, thereby advancing smart agriculture. Results indicate that, on our self-built VDGE dataset, compared to YOLOv7-tiny, YOLOv8n, and YOLOv9, the proposed model (Multisource Information Fusion Method for Vegetable Disease Detection, MIFV) has improved mAP by 3.43%, 3.02%, and 2.15%, respectively, showcasing significant performance advantages. The MIFV model parameters stand at 39.07 M, with a computational complexity of 108.92 GFLOPS, highlighting outstanding real-time performance and detection accuracy compared to mainstream algorithms. This research suggests that the proposed MIFV model can swiftly and accurately detect and identify vegetable diseases in greenhouse environments at a reduced cost.

PMID:39095689 | DOI:10.1186/s12870-024-05346-4

Categories: Literature Watch

Enhanced skin cancer diagnosis using optimized CNN architecture and checkpoints for automated dermatological lesion classification

Fri, 2024-08-02 06:00

BMC Med Imaging. 2024 Aug 2;24(1):201. doi: 10.1186/s12880-024-01356-8.

ABSTRACT

Skin cancer stands as one of the foremost challenges in oncology, with its early detection being crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of skin lesions, this study introduces a sophisticated CNN model tailored for the nuanced task of skin lesion classification. The model's architecture is intricately designed with multiple convolutional, pooling, and dense layers, aimed at capturing the complex visual features of skin lesions. To address the challenge of class imbalance within the dataset, an innovative data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. Furthermore, this study introduces a CNN model with optimized layer configuration and data augmentation, significantly boosting diagnostic precision in skin cancer detection. The model's learning process is optimized using the Adam optimizer, with parameters fine-tuned over 50 epochs and a batch size of 128 to enhance the model's ability to discern subtle patterns in the image data. A Model Checkpoint callback ensures the preservation of the best model iteration for future use. The proposed model demonstrates an accuracy of 97.78% with a notable precision of 97.9%, recall of 97.9%, and an F2 score of 97.8%, underscoring its potential as a robust tool in the early detection and classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
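The "save best model iteration" checkpoint behaviour described here can be sketched framework-agnostically: track the best validation score seen so far and snapshot the model state when it improves. This mirrors the idea behind Keras's ModelCheckpoint with save-best-only semantics, but it is not the study's code:

```python
import copy

class BestCheckpoint:
    """Keep a copy of the model state from the best-scoring epoch so far."""
    def __init__(self):
        self.best_score = float("-inf")
        self.best_state = None

    def update(self, score, state):
        """Return True if this epoch improved on the best score (checkpoint kept)."""
        if score > self.best_score:
            self.best_score = score
            self.best_state = copy.deepcopy(state)  # snapshot, not a reference
            return True
        return False
```

The deep copy matters: without it, later training steps would mutate the "saved" state in place.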

PMID:39095688 | DOI:10.1186/s12880-024-01356-8

Categories: Literature Watch

Differentially localized protein identification for breast cancer based on deep learning in immunohistochemical images

Fri, 2024-08-02 06:00

Commun Biol. 2024 Aug 2;7(1):935. doi: 10.1038/s42003-024-06548-0.

ABSTRACT

The mislocalization of proteins leads to breast cancer, one of the world's most prevalent cancers, which can be identified from immunohistochemical images. Here, based on a deep learning framework, location prediction models were constructed using features of breast immunohistochemical images. Ultimately, six differentially localized proteins (CCNT1, NSUN5, PRPF4, RECQL4, UTP6, ZNF500) were obtained that show stable differential predicted localization and maximal localization differences, and whose predictions are unaffected by the removal of any single image. Further verification reveals that these proteins are not differentially expressed but are closely associated with breast cancer and have strong classification performance. Potential mechanism analysis shows that their co-expressed or co-located proteins and RNAs may affect their localization, leading to changes in interactions and functions that further cause breast cancer. These proteins may help shed light on the molecular mechanisms of breast cancer and assist its early diagnosis and treatment.

PMID:39095659 | DOI:10.1038/s42003-024-06548-0

Categories: Literature Watch

Decoding pathology: the role of computational pathology in research and diagnostics

Fri, 2024-08-02 06:00

Pflugers Arch. 2024 Aug 3. doi: 10.1007/s00424-024-03002-2. Online ahead of print.

ABSTRACT

Traditional histopathology, characterized by manual quantifications and assessments, faces challenges such as low-throughput and inter-observer variability that hinder the introduction of precision medicine in pathology diagnostics and research. The advent of digital pathology allowed the introduction of computational pathology, a discipline that leverages computational methods, especially based on deep learning (DL) techniques, to analyze histopathology specimens. A growing body of research shows impressive performances of DL-based models in pathology for a multitude of tasks, such as mutation prediction, large-scale pathomics analyses, or prognosis prediction. New approaches integrate multimodal data sources and increasingly rely on multi-purpose foundation models. This review provides an introductory overview of advancements in computational pathology and discusses their implications for the future of histopathology in research and diagnostics.

PMID:39095655 | DOI:10.1007/s00424-024-03002-2

Categories: Literature Watch

Robotic scrub nurse to anticipate surgical instruments based on real-time laparoscopic video analysis

Fri, 2024-08-02 06:00

Commun Med (Lond). 2024 Aug 2;4(1):156. doi: 10.1038/s43856-024-00581-0.

ABSTRACT

BACKGROUND: Machine learning and robotics technologies are increasingly being used in the healthcare domain to improve the quality and efficiency of surgeries and to address challenges such as staff shortages. Robotic scrub nurses in particular offer great potential to address staff shortages by assuming nursing tasks such as the handover of surgical instruments.

METHODS: We introduce a robotic scrub nurse system designed to enhance the quality of surgeries and the efficiency of surgical workflows by predicting and delivering the required surgical instruments based on real-time laparoscopic video analysis. We propose a three-stage deep learning architecture consisting of single-frame, temporal multi-frame, and informed models to anticipate surgical instruments. The anticipation model was trained on a total of 62 laparoscopic cholecystectomies.

RESULTS: Here, we show that our prediction system can accurately anticipate 71.54% of the surgical instruments required during laparoscopic cholecystectomies in advance, facilitating a smoother surgical workflow and reducing the need for verbal communication. As the instruments in the left working trocar are changed less frequently and according to a standardized procedure, the prediction system works particularly well for this trocar.

CONCLUSIONS: The robotic scrub nurse thus acts as a mind reader and helps to mitigate staff shortages by taking over a great share of the workload during surgeries while additionally enabling an enhanced process standardization.

PMID:39095639 | DOI:10.1038/s43856-024-00581-0

Categories: Literature Watch

Understanding patient-derived tumor organoid growth through an integrated imaging and mathematical modeling framework

Fri, 2024-08-02 06:00

PLoS Comput Biol. 2024 Aug 2;20(8):e1012256. doi: 10.1371/journal.pcbi.1012256. Online ahead of print.

ABSTRACT

Patient-derived tumor organoids (PDTOs) are novel cellular models that maintain the genetic, phenotypic and structural features of patient tumor tissue and are useful for studying tumorigenesis and drug response. When integrated with advanced 3D imaging and analysis techniques, PDTOs can be used to establish physiologically relevant high-throughput and high-content drug screening platforms that support the development of patient-specific treatment strategies. However, in order to effectively leverage high-throughput PDTO observations for clinical predictions, it is critical to establish a quantitative understanding of the basic properties and variability of organoid growth dynamics. In this work, we introduced an innovative workflow for analyzing and understanding PDTO growth dynamics, by integrating a high-throughput imaging deep learning platform with mathematical modeling, incorporating flexible growth laws and variable dormancy times. We applied the workflow to colon cancer organoids and demonstrated that organoid growth is well-described by the Gompertz model of growth. Our analysis showed significant intrapatient heterogeneity in PDTO growth dynamics, with the initial exponential growth rate of an organoid following a lognormal distribution within each dataset. The level of intrapatient heterogeneity varied between patients, as did organoid growth rates and dormancy times of single seeded cells. Our work contributes to an emerging understanding of the basic growth characteristics of PDTOs, and it highlights the heterogeneity in organoid growth both within and between patients. These results pave the way for further modeling efforts aimed at predicting treatment response dynamics and drug resistance timing.
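The Gompertz model found to describe organoid growth is an exponential whose growth rate itself decays over time, so volume saturates at a carrying capacity. A minimal sketch of the growth curve (the function name and parameter values are illustrative, not fitted values from the study):

```python
import math

def gompertz_volume(t, v0, k, a):
    """Gompertz growth law: starts at v0 and decelerates toward the
    carrying capacity k at rate a (per unit time)."""
    return k * math.exp(math.log(v0 / k) * math.exp(-a * t))
```

At t = 0 this returns v0 exactly, and as t grows the exp(-a*t) factor vanishes, so the volume approaches k; fitting a, v0, and k per organoid is what exposes the intrapatient heterogeneity the authors report.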

PMID:39093897 | DOI:10.1371/journal.pcbi.1012256

Categories: Literature Watch

Decoding dynamic visual scenes across the brain hierarchy

Fri, 2024-08-02 06:00

PLoS Comput Biol. 2024 Aug 2;20(8):e1012297. doi: 10.1371/journal.pcbi.1012297. Online ahead of print.

ABSTRACT

Understanding the computational mechanisms that underlie the encoding and decoding of environmental stimuli is a central pursuit in neuroscience, and a key part of it is exploring how the brain represents visual information across its hierarchical architecture. A prominent challenge is discerning the neural underpinnings of the processing of dynamic natural visual scenes. Although considerable research has characterized individual components of the visual pathway, a systematic understanding of the distinctive neural coding associated with visual stimuli as they traverse this hierarchy remains elusive. In this study, we leverage the comprehensive Allen Visual Coding-Neuropixels dataset and deep learning neural network models to study neural coding in response to dynamic natural visual scenes across an expansive array of brain regions. Our decoding model adeptly deciphers visual scenes from neural spiking patterns within each brain area. Comparative analysis of decoding performance reveals notably high encoding fidelity in the visual cortex and subcortical nuclei, in contrast to relatively reduced encoding in hippocampal neurons. Strikingly, our results unveil a robust correlation between our decoding metrics and well-established anatomical and functional hierarchy indexes. These findings corroborate existing knowledge of visual coding obtained with artificial visual stimuli and illuminate the functional role of these deeper brain regions under dynamic stimuli. Our results thus suggest a novel perspective on decoding neural network models as a metric for quantifying how well dynamic natural visual scenes are encoded in neural responses, advancing our comprehension of visual coding within the complex hierarchy of the brain.
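To make the decoding-as-metric idea concrete, here is a minimal toy sketch (not the paper's deep network): a ridge-regression decoder maps binned spike counts to low-dimensional stimulus "frames", and its R² serves as a per-region encoding-quality score. All sizes and the linear generative model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 trials, 50 neurons, 16-pixel stimulus frames,
# generated so that frames are (noisily) linear in the spike counts.
n_trials, n_neurons, n_pixels = 200, 50, 16
W_true = rng.normal(size=(n_neurons, n_pixels))
spikes = rng.poisson(lam=3.0, size=(n_trials, n_neurons)).astype(float)
frames = spikes @ W_true + 0.1 * rng.normal(size=(n_trials, n_pixels))

# Ridge-regression decoder: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
X, Y = spikes, frames
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)
pred = X @ W

# Decoding quality (R^2): comparing this score across brain regions is
# the kind of comparison the study performs with far richer models.
ss_res = np.sum((Y - pred) ** 2)
ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

In the study itself the decoder is a deep network and the stimuli are natural movie frames, but the logic is the same: a region whose spikes support better reconstruction is scored as encoding the scene more faithfully.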

PMID:39093861 | DOI:10.1371/journal.pcbi.1012297

Categories: Literature Watch

OTMorph: Unsupervised Multi-domain Abdominal Medical Image Registration Using Neural Optimal Transport

Fri, 2024-08-02 06:00

IEEE Trans Med Imaging. 2024 Aug 2;PP. doi: 10.1109/TMI.2024.3437295. Online ahead of print.

ABSTRACT

Deformable image registration is one of the essential processes in analyzing medical images. In particular, when diagnosing abdominal diseases such as hepatic cancer and lymphoma, multi-domain images scanned with different modalities or different imaging protocols are often used. However, they are not aligned, due to differing scanning times, patient breathing, movement, etc. Although recent learning-based approaches can provide deformations in real time with high performance, multi-domain abdominal image registration using deep learning remains challenging, since images from different domains differ in characteristics such as image contrast and intensity range. To address this, this paper proposes a novel unsupervised multi-domain image registration framework using neural optimal transport, dubbed OTMorph. Given moving and fixed volumes as input, a transport module of our proposed model learns the optimal transport plan mapping the data distribution of the moving volume to that of the fixed volume and estimates a domain-transported volume. A registration module then takes the transported volume and estimates the deformation field, improving registration performance. Experimental results on multi-domain image registration using multi-modality and multi-parametric abdominal medical images demonstrate that the proposed method provides superior deformable registration via the domain-transported image, which alleviates the domain gap between the input images. The improvement also holds on out-of-distribution data, indicating the strong generalizability of our model for registering various medical images. Our source code is available at https://github.com/boahK/OTMorph.
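The two-stage idea (transport the moving image into the fixed image's intensity domain, then register) can be caricatured without any neural network. In the sketch below, histogram matching stands in for the learned optimal-transport module and an exhaustive integer-translation search stands in for the registration module; both are crude placeholders for illustration, not OTMorph's actual components.

```python
import numpy as np

def transport_intensities(moving, fixed):
    """Stand-in for the transport module: map the moving image's intensity
    distribution onto the fixed image's via histogram matching.
    (OTMorph learns this mapping with a neural optimal-transport network.)"""
    m_sorted = np.sort(moving.ravel())
    f_sorted = np.sort(fixed.ravel())
    ranks = np.searchsorted(m_sorted, moving.ravel())
    ranks = np.clip(ranks, 0, f_sorted.size - 1)
    return f_sorted[ranks].reshape(moving.shape)

def register_shift(moving, fixed, max_shift=5):
    """Stand-in for the registration module: exhaustive search for the
    integer translation minimizing mean squared error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - fixed) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Toy "multi-domain" pair: fixed is a shifted copy of moving with a
# nonlinear contrast change mimicking a different imaging protocol.
rng = np.random.default_rng(2)
moving = rng.random((32, 32))
fixed = np.roll(moving, (3, -2), axis=(0, 1)) ** 2.0

# Transport first, then register: the intensity gap no longer confounds
# the similarity metric, so the true shift (3, -2) is recovered.
shift_tp = register_shift(transport_intensities(moving, fixed), fixed)
```

In OTMorph both stages are learned jointly, the transport is a full neural optimal-transport plan rather than a 1D histogram match, and the registration module predicts a dense deformation field rather than a single translation.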

PMID:39093684 | DOI:10.1109/TMI.2024.3437295

Categories: Literature Watch
