Deep learning

Perceptual super-resolution in multiple sclerosis MRI

Wed, 2024-11-06 06:00

Front Neurosci. 2024 Oct 22;18:1473132. doi: 10.3389/fnins.2024.1473132. eCollection 2024.

ABSTRACT

INTRODUCTION: Magnetic resonance imaging (MRI) is crucial for the diagnosis and monitoring of multiple sclerosis (MS), as it is used to assess lesions in the brain and spinal cord. However, in real-world clinical settings, MRI scans are often acquired with thick slices, limiting their utility for automated quantitative analyses. This work presents a single-image super-resolution (SR) reconstruction framework that leverages SR convolutional neural networks (CNN) to enhance the through-plane resolution of structural MRI in people with MS (PwMS).

METHODS: Our strategy involves the supervised fine-tuning of CNN architectures, guided by a content loss function that promotes perceptual quality, as well as reconstruction accuracy, to recover high-level image features.
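
The abstract names a content loss that balances perceptual quality against reconstruction accuracy but gives no implementation details. As a minimal sketch only, such a loss is often built from the feature maps of a pretrained network plus a pixel-wise term; the VGG16 backbone, L1 terms, and 0.1 weighting below are assumptions, not the authors' choices.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualContentLoss(nn.Module):
    """Pixel-wise reconstruction loss plus a VGG feature (content) loss.

    Illustrative sketch, not the loss used in the paper; the layer cut-off
    (through relu2_2) and the 0.1 feature weight are assumptions.
    """
    def __init__(self, feature_weight: float = 0.1):
        super().__init__()
        vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:9].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.feature_weight = feature_weight

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # sr, hr: (N, 1, H, W) super-resolved and high-resolution MRI slices in [0, 1]
        sr3, hr3 = sr.repeat(1, 3, 1, 1), hr.repeat(1, 3, 1, 1)  # VGG expects 3 channels
        pixel_loss = self.l1(sr, hr)                             # reconstruction accuracy
        content_loss = self.l1(self.vgg(sr3), self.vgg(hr3))     # perceptual quality
        return pixel_loss + self.feature_weight * content_loss

# usage: loss = PerceptualContentLoss()(upsampled_batch, high_res_batch)
```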

RESULTS: Extensive evaluation with MRI data of PwMS shows that our SR strategy leads to more accurate MRI reconstructions than competing methods. Furthermore, it improves lesion segmentation on low-resolution MRI, approaching the performance achievable with high-resolution images.

DISCUSSION: Results demonstrate the potential of our SR framework to facilitate the use of low-resolution retrospective MRI from real-world clinical settings to investigate quantitative image-based biomarkers of MS.

PMID:39502711 | PMC:PMC11534588 | DOI:10.3389/fnins.2024.1473132

Categories: Literature Watch

Using spatial video and deep learning for automated mapping of ground-level context in relief camps

Tue, 2024-11-05 06:00

Int J Health Geogr. 2024 Nov 5;23(1):23. doi: 10.1186/s12942-024-00382-7.

ABSTRACT

BACKGROUND: The creation of relief camps following a disaster, conflict or other form of externality often generates additional health problems. The density of people in a highly stressed environment with questionable access to safe food and water presents the potential for infectious disease outbreaks. These camps are also not static data events but rather fluctuate in size, composition, and level and quality of service provision. While contextualized geospatial data collection and mapping are vital for understanding the nature of these camps, various challenges, including a lack of data at the required spatial or temporal granularity, as well as the issue of sustainability, can act as major impediments. Here, we present the first steps toward a deep learning-based solution for dynamic mapping using spatial video (SV).

METHODS: We trained a convolutional neural network (CNN) model on an SV dataset collected from Goma, Democratic Republic of Congo (DRC) to identify relief camps from video imagery. We developed a spatial filtering approach to tackle the challenges associated with spatially tagging objects, such as the accuracy of the global positioning system and the positioning of the camera. The spatial filtering approach generates smooth surfaces of detection, which can further be used to capture changes in microenvironments by applying techniques such as raster math.
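
As a rough illustration of the spatial-filtering and raster-math ideas described above (not the authors' code), GPS-tagged detections can be rasterized and smoothed into a continuous surface, and surfaces from two survey dates can be differenced; the grid size and Gaussian kernel below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detection_surface(points_xy, extent, shape=(200, 200), sigma=3.0):
    """Rasterize GPS-tagged detections and smooth them into a continuous surface.

    Hypothetical helper sketching the general idea of the spatial filtering step;
    grid size, smoothing kernel and coordinate units are assumptions.
    points_xy: (N, 2) array of projected x/y coordinates of detected dwellings.
    extent:    (xmin, xmax, ymin, ymax) of the study area in the same units.
    """
    xmin, xmax, ymin, ymax = extent
    counts, _, _ = np.histogram2d(
        points_xy[:, 1], points_xy[:, 0],
        bins=shape, range=[[ymin, ymax], [xmin, xmax]],
    )
    return gaussian_filter(counts, sigma=sigma)  # smooth surface of detections

# "Raster math": subtracting surfaces from two survey dates highlights locations
# where the concentration of tents changed over space and time.
# change = detection_surface(pts_t2, extent) - detection_surface(pts_t1, extent)
```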

RESULTS: The initial results suggest that our model can detect temporary physical dwellings from SV imagery with a high level of precision, recall, and object localization. The spatial filtering approach helps to identify areas with higher concentrations of camps and the web-based tool helps to explore these areas. The longitudinal analysis based on applying raster math on the detection surfaces revealed locations, which had a considerable change in the distribution of tents over space and time.

CONCLUSIONS: The results lay the groundwork for automated mapping of spatial features from imagery data. We anticipate that this work is the building block for a future combination of SV, object identification and automatic mapping that could provide sustainable data generation possibilities for challenging environments such as relief camps or other informal settlements.

PMID:39501276 | DOI:10.1186/s12942-024-00382-7

Categories: Literature Watch

Revolutionizing spinal interventions: a systematic review of artificial intelligence technology applications in contemporary surgery

Tue, 2024-11-05 06:00

BMC Surg. 2024 Nov 5;24(1):345. doi: 10.1186/s12893-024-02646-2.

ABSTRACT

Leveraging its ability to handle large and complex datasets, artificial intelligence can uncover subtle patterns and correlations that human observation may overlook. This is particularly valuable for understanding the intricate dynamics of spinal surgery and its multifaceted impacts on patient prognosis. This review aims to delineate the role of artificial intelligence in spinal surgery. A search of the PubMed database from 1992 to 2023 was conducted to identify relevant English-language publications on the application of artificial intelligence in spinal surgery. The search strategy involved a combination of the following keywords: "Artificial neural network," "deep learning," "artificial intelligence," "spinal," "musculoskeletal," "lumbar," "vertebra," "disc," "cervical," "cord," "stenosis," "procedure," "operation," "surgery," "preoperative," "postoperative," and "operative." A total of 1,182 articles were retrieved. After a careful evaluation of abstracts, 90 articles were found to meet the inclusion criteria for this review. Our review highlights various applications of artificial neural networks in spinal disease management, including (1) assessing surgical indications, (2) assisting in surgical procedures, (3) preoperatively predicting surgical outcomes, and (4) estimating the occurrence of various surgical complications and adverse events. By utilizing these technologies, surgical outcomes can be improved, ultimately enhancing the quality of life for patients.
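
As an illustration only of how such a search strategy could be expressed programmatically, the sketch below combines the listed keywords into a Boolean PubMed query via Biopython's Entrez interface; the grouping of terms into three AND-combined blocks, the placeholder e-mail address, and the retmax value are assumptions rather than the review's documented query.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder

ai_terms = ['"artificial neural network"', '"deep learning"', '"artificial intelligence"']
spine_terms = ["spinal", "musculoskeletal", "lumbar", "vertebra", "disc",
               "cervical", "cord", "stenosis"]
surgery_terms = ["procedure", "operation", "surgery", "preoperative",
                 "postoperative", "operative"]

# Assumed grouping: (AI terms) AND (spine terms) AND (surgery terms)
query = " AND ".join(
    "(" + " OR ".join(group) + ")"
    for group in (ai_terms, spine_terms, surgery_terms)
)

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1992", maxdate="2023", retmax=2000)
record = Entrez.read(handle)
print(record["Count"], "records matched;", len(record["IdList"]), "PMIDs returned")
```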

PMID:39501233 | DOI:10.1186/s12893-024-02646-2

Categories: Literature Watch

Application of the online teaching model based on BOPPPS virtual simulation platform in preventive medicine undergraduate experiment

Tue, 2024-11-05 06:00

BMC Med Educ. 2024 Nov 5;24(1):1255. doi: 10.1186/s12909-024-06175-7.

ABSTRACT

BACKGROUND: As online teaching gains prevalence in higher education, traditional face-to-face methods are encountering limitations in meeting the demands of medical ethics, the availability of experimental resources, and essential experimental conditions. Consequently, under the guidance of the BOPPPS (bridge-in, objective, preassessment, participatory learning, postassessment, summary) teaching model, the application of virtual simulation platforms has become a new trend. The purpose of this study is to explore the effect of combining BOPPPS with virtual simulation experimental teaching on students' scores, and to evaluate students' participation and performance and teachers' self-efficacy in the preventive medicine experiment.

METHODS: Students from Class 1 and Class 2 of the 2019 preventive medicine major at Binzhou Medical University were selected as the study participants. The experimental group (Class 2) (n = 51) received the teaching mode combining BOPPPS and the virtual simulation platform, while the control group (Class 1) (n = 49) received the traditional experimental teaching method. After class, the experimental report scores, virtual simulation scores, students' engagement scale (SES), Biggs questionnaires, and teachers' sense of self-efficacy (TSES) questionnaires were analyzed.

RESULTS: The experimental report results demonstrated a significant increase in the total score of the experimental group and the scores of each of the four individual experiments compared to the control group (P < 0.05). To investigate the impact of the new teaching model on students' learning attitudes and patterns, as well as to evaluate teachers' self-efficacy, a questionnaire survey was administered following the course. The SES results showed that students in the experimental group had higher scores on the two dimensions of learning methods and learning emotions (t = 2.476, t = 2.177; P = 0.015, P = 0.032). Furthermore, in the Biggs questionnaire, the total deep learning score of the experimental group was higher than that of the control group (t = 2.553, P = 0.012), and the deep learning motivation score of the experimental group was higher than that of the control group (t = 2.598, P = 0.011). The TSES questionnaire showed that most teachers found it easier to manage students and the classroom and to implement teaching strategies under this mode.
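
The between-group comparisons reported above are independent-samples t-tests; purely as a worked illustration on synthetic data (not the study's scores), such a comparison can be run as follows.

```python
import numpy as np
from scipy import stats

# Synthetic, illustrative scores only; the study's real data are not reproduced here.
rng = np.random.default_rng(0)
experimental = rng.normal(82, 6, size=51)  # BOPPPS + virtual simulation group (n = 51)
control = rng.normal(78, 6, size=49)       # traditional teaching group (n = 49)

t, p = stats.ttest_ind(experimental, control)  # independent-samples t-test
print(f"t = {t:.3f}, p = {p:.3f}")
```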

CONCLUSIONS: The combination of BOPPPS and the virtual simulation platform effectively enhances the experimental environment for students, thereby improving their academic performance, engagement and learning approach in preventive medicine laboratory courses.

PMID:39501207 | DOI:10.1186/s12909-024-06175-7

Categories: Literature Watch

REDalign: accurate RNA structural alignment using residual encoder-decoder network

Tue, 2024-11-05 06:00

BMC Bioinformatics. 2024 Nov 5;25(1):346. doi: 10.1186/s12859-024-05956-7.

ABSTRACT

BACKGROUND: RNA secondary structural alignment serves as a foundational procedure in identifying conserved structural motifs among RNA sequences, crucially advancing our understanding of novel RNAs via comparative genomic analysis. While various computational strategies for RNA structural alignment exist, they often come with high computational complexity. Specifically, when addressing a set of RNAs with unknown structures, the task of simultaneously predicting their consensus secondary structure and determining the optimal sequence alignment requires an overwhelming computational effort of O(L^6) for each RNA pair. Such an extremely high computational complexity makes these methods impractical for large-scale analysis despite their accurate alignment capabilities.

RESULTS: In this paper, we introduce REDalign, an innovative approach based on deep learning for RNA secondary structural alignment. By utilizing a residual encoder-decoder network, REDalign can efficiently capture consensus structures and optimize structural alignments. In this learning model, the encoder network leverages a hierarchical pyramid to assimilate high-level structural features. Concurrently, the decoder network, enhanced with residual skip connections, integrates multi-level encoded features to learn detailed feature hierarchies with fewer parameter sets. REDalign significantly reduces computational complexity compared to Sankoff-style algorithms and effectively handles non-nested structures, including pseudoknots, which are challenging for traditional alignment methods. Extensive evaluations demonstrate that REDalign provides superior accuracy and substantial computational efficiency.
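
For readers unfamiliar with the architecture family, the sketch below is a minimal residual encoder-decoder in PyTorch: an encoder downsamples into a coarser "pyramid" level and the decoder upsamples while re-injecting encoder features through a residual skip connection. It is illustrative only and not REDalign itself; channel counts, depth, and the single-channel input/output are assumptions.

```python
import torch
import torch.nn as nn

class ResidualEncoderDecoder(nn.Module):
    """Minimal encoder-decoder with a residual skip connection (toy example)."""
    def __init__(self, in_ch: int = 1, base: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU()
        )
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)            # full-resolution features
        bottleneck = self.down(e)  # coarser "pyramid" level
        d = self.up(bottleneck)
        d = d + e                  # residual skip connection re-injects encoder features
        return self.head(d)        # e.g. a score map over positions

# x = torch.rand(1, 1, 64, 64); ResidualEncoderDecoder()(x).shape -> (1, 1, 64, 64)
```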

CONCLUSION: REDalign presents a significant advancement in RNA secondary structural alignment, balancing high alignment accuracy with lower computational demands. Its ability to handle complex RNA structures, including pseudoknots, makes it an effective tool for large-scale RNA analysis, with potential implications for accelerating discoveries in RNA research and comparative genomics.

PMID:39501155 | DOI:10.1186/s12859-024-05956-7

Categories: Literature Watch

Deep learning based highly accurate transplanted bioengineered corneal equivalent thickness measurement using optical coherence tomography

Tue, 2024-11-05 06:00

NPJ Digit Med. 2024 Nov 5;7(1):308. doi: 10.1038/s41746-024-01305-3.

ABSTRACT

Corneal transplantation is the primary treatment for irreversible corneal diseases, but due to limited donor availability, bioengineered corneal equivalents are being developed as a solution, with biocompatibility, structural integrity, and physical function considered key factors. Since conventional evaluation methods may not fully capture the complex properties of the cornea, there is a need for advanced imaging and assessment techniques. In this study, we proposed a deep learning-based automatic segmentation method for transplanted bioengineered corneal equivalents using optical coherence tomography to achieve a highly accurate evaluation of graft integrity and biocompatibility. Our method provides quantitative individual thickness values, detailed maps, and volume measurements of the bioengineered corneal equivalents, and has been validated through 14 days of monitoring. Based on these results, the method is expected to extend beyond animal studies and offer high clinical utility as a quantitative assessment tool for human keratoplasty, including automatic segmentation of opacity areas and extraction of the implanted graft.
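
The abstract does not detail the post-processing, but as a simple sketch of how individual thickness values could be obtained from a segmentation mask of an OCT B-scan, one can count segmented pixels per A-scan and scale by the axial resolution; the function below and its parameters are hypothetical.

```python
import numpy as np

def thickness_map(mask: np.ndarray, axial_res_um: float) -> np.ndarray:
    """Per-A-scan thickness from a binary segmentation of one OCT B-scan.

    Illustrative only; `mask` is a (depth, width) boolean array that is True
    where the corneal equivalent was segmented, and `axial_res_um` is the
    device-dependent axial pixel spacing in micrometres (an assumption here).
    """
    return mask.sum(axis=0) * axial_res_um  # thickness in micrometres per image column

# Stacking thickness_map() outputs over all B-scans of a volume yields a thickness
# map, and summing it (times the lateral pixel area) gives a volume estimate.
```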

PMID:39501083 | DOI:10.1038/s41746-024-01305-3

Categories: Literature Watch

Three-dimensional localization and tracking of chromosomal loci throughout the Escherichia coli cell cycle

Tue, 2024-11-05 06:00

Commun Biol. 2024 Nov 5;7(1):1443. doi: 10.1038/s42003-024-07155-9.

ABSTRACT

The intracellular position of genes may impact their expression, but it has not been possible to accurately measure the 3D position of chromosomal loci. In 2D, loci can be tracked using arrays of DNA-binding sites for transcription factors (TFs) fused with fluorescent proteins. However, the same 2D data can result from different 3D trajectories. Here, we have developed a deep learning method for super-resolved astigmatism-based 3D localization of chromosomal loci in live E. coli cells which enables a precision better than 61 nm at a signal-to-background ratio of ~4 on a heterogeneous cell background. Determining the spatial localization of chromosomal loci, we find that some loci are at the periphery of the nucleoid for large parts of the cell cycle. Analyses of individual trajectories reveal that these loci are subdiffusive both longitudinally (x) and radially (r), but that individual loci explore the full radial width on a minute time scale.
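
For context, classical astigmatism-based 3D localization infers z from the asymmetry of the point spread function, typically by matching fitted spot widths against a calibration curve; the deep learning model described above replaces that fitting step. The lookup below is a toy baseline, not the authors' method.

```python
import numpy as np

def z_from_astigmatism(wx: float, wy: float, calib_z: np.ndarray,
                       calib_wx: np.ndarray, calib_wy: np.ndarray) -> float:
    """Toy calibration-curve lookup for astigmatism-based z estimation.

    wx, wy: PSF widths measured along x and y for one emitter.
    calib_*: widths recorded at known z positions during calibration.
    Returns the calibration z whose width pair is closest to the measurement.
    """
    d2 = (calib_wx - wx) ** 2 + (calib_wy - wy) ** 2
    return float(calib_z[np.argmin(d2)])
```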

PMID:39501081 | DOI:10.1038/s42003-024-07155-9

Categories: Literature Watch

Study on virtual tooth image generation utilizing CF-fill and Pix2pix for data augmentation

Tue, 2024-11-05 06:00

Sci Rep. 2024 Nov 5;14(1):26772. doi: 10.1038/s41598-024-78190-z.

ABSTRACT

Traditional dental prosthetics require a significant amount of work, labor, and time. To simplify the process, a method was developed to convert tooth images acquired with an intraoral scanner into 3D images for prosthesis design. Furthermore, several studies have used deep learning to automate dental prosthetic processes. Tooth images are required to train deep learning models, but they are difficult to use in research because they contain personal patient information. Therefore, we propose a method for generating virtual tooth images using image-to-image translation (pix2pix) and contextual reconstruction fill (CR-Fill). Various virtual images can be generated using pix2pix, and these images are used as training images for CR-Fill; the real and virtual images are compared to ensure that the teeth are well shaped and meaningful. The experimental results demonstrate that the images generated by the proposed method are similar to actual images. In addition, training only on virtual images did not perform well; however, training on both real and virtual images yielded results nearly identical to training only on real images.
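
As a small sketch of the augmentation strategy reported above (training on real plus generated images), two image folders can simply be concatenated into one training set in PyTorch; the folder paths, image size, and batch size are placeholders, not the paper's settings.

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
real = datasets.ImageFolder("data/real_teeth", transform=tfm)        # placeholder path
virtual = datasets.ImageFolder("data/virtual_teeth", transform=tfm)  # generated images

# Train on the union of real and virtual images, as in the mixed-data setting above.
train_loader = DataLoader(ConcatDataset([real, virtual]), batch_size=16, shuffle=True)
```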

PMID:39501064 | DOI:10.1038/s41598-024-78190-z

Categories: Literature Watch

Machine learning models for river flow forecasting in small catchments

Tue, 2024-11-05 06:00

Sci Rep. 2024 Nov 5;14(1):26740. doi: 10.1038/s41598-024-78012-2.

ABSTRACT

In light of ongoing climate change, new tools capable of mitigating hydrogeological risks are needed. These effects will be more marked in small catchments, where the geological and environmental context leaves little warning time to implement risk mitigation measures. In this context, deep learning models can be an effective tool for local authorities to obtain solid forecasts of outflows and to make correct choices during the alarm phase. In this study, we investigate the use of deep learning models able to forecast hydrometric height in very fast hydrographic basins. The errors of the models are very small, on the order of a few centimetres, over forecasting horizons of several hours. The models allow prediction of extreme events with lead times of 4-6 h (RMSE of about 10-30 cm at a forecasting time of 6 h) in hydrographic basins characterized by rapid changes in river flow rates. However, to reduce the uncertainties of the predictions as the forecasting time increases, the system performs better when using a machine learning model able to provide a confidence interval of the prediction based on the last observed river flow rate. By testing models based on different input datasets, the results indicate that a combination of models can provide a set of predictions allowing for a more comprehensive description of the possible future evolutions of river flows. Once the deep learning models have been trained, their application is purely objective and very rapid, permitting the development of simple software that can be used even by less specialized users.
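
The error metric quoted above is the RMSE of the forecast hydrometric height, and the confidence band can be thought of as an interval attached to each prediction. The helpers below are a minimal sketch of these two ideas; the paper's own interval model, which conditions on the last observed flow rate, is not reproduced here.

```python
import numpy as np

def rmse(pred: np.ndarray, obs: np.ndarray) -> float:
    """Root-mean-square error between forecast and observed hydrometric heights."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def empirical_interval(residuals: np.ndarray, level: float = 0.9) -> tuple:
    """Simple empirical prediction interval from past forecast residuals (illustrative only)."""
    lo, hi = np.quantile(residuals, [(1 - level) / 2, 1 - (1 - level) / 2])
    return float(lo), float(hi)

# forecast + lo and forecast + hi then bracket a single 6 h ahead prediction.
```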

PMID:39501028 | DOI:10.1038/s41598-024-78012-2

Categories: Literature Watch

A generalised computer vision model for improved glaucoma screening using fundus images

Tue, 2024-11-05 06:00

Eye (Lond). 2024 Nov 5. doi: 10.1038/s41433-024-03388-4. Online ahead of print.

ABSTRACT

IMPORTANCE: Worldwide, glaucoma is a leading cause of irreversible blindness. Timely detection is paramount yet challenging, particularly in resource-limited settings. A novel, computer vision-based model for glaucoma screening using fundus images could enhance early and accurate disease detection.

OBJECTIVE: To develop and validate a generalised deep-learning-based algorithm for screening glaucoma using fundus images.

DESIGN, SETTING AND PARTICIPANTS: The glaucomatous fundus data were collected from 20 publicly accessible databases worldwide, resulting in 18,468 images from multiple clinical settings, of which 10,900 were classified as healthy and 7568 as glaucoma. All the data were evaluated and downsized to fit the model's input requirements. A candidate model was selected from among 20 pre-trained models and trained on the whole dataset except Drishti-GS. The best-performing model was further trained to classify healthy and glaucomatous fundus images using the Fastai and PyTorch libraries.
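
Since the abstract names the Fastai and PyTorch libraries, a minimal transfer-learning setup of the kind described might look like the sketch below; the folder layout, image size, resnet50 backbone, and number of epochs are assumptions (the paper selected its backbone from 20 pre-trained models, and the abstract does not state which one).

```python
from fastai.vision.all import *

# Expects fundus_data/healthy/... and fundus_data/glaucoma/... (placeholder layout).
dls = ImageDataLoaders.from_folder(
    "fundus_data", valid_pct=0.2, seed=42, item_tfms=Resize(224),
)
learn = vision_learner(dls, resnet50, metrics=[accuracy, RocAucBinary()])
learn.fine_tune(5)  # brief frozen phase, then unfreeze and fine-tune the backbone
```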

MAIN OUTCOMES AND MEASURES: The model's performance was compared against the actual class using the area under the receiver operating characteristic (AUROC), sensitivity, specificity, accuracy, precision and the F1-score.

RESULTS: The discriminative ability of the best-performing model was evaluated on a dataset comprising 1364 glaucomatous discs and 2047 healthy discs. The model showed robust performance metrics, with an AUROC of 0.9920 (95% CI: 0.9920-0.9921) for both the glaucoma and healthy classes. The sensitivity, specificity, accuracy, precision, recall and F1-scores were consistently higher than 0.9530 for both classes. The model also performed well on an external validation set, the Drishti-GS dataset, with an AUROC of 0.8751 and an accuracy of 0.8713.

CONCLUSIONS AND RELEVANCE: This study demonstrated the high efficacy of our classification model in distinguishing between glaucomatous and healthy discs. However, the model's accuracy slightly dropped when evaluated on unseen data, indicating potential inconsistencies among the datasets; the model needs to be refined and validated on larger, more diverse datasets to ensure reliability and generalisability. Despite this, our model can be utilised for screening glaucoma at the population level.

PMID:39501004 | DOI:10.1038/s41433-024-03388-4

Categories: Literature Watch

Developing an AI-based application for caries index detection on intraoral photographs

Tue, 2024-11-05 06:00

Sci Rep. 2024 Nov 5;14(1):26752. doi: 10.1038/s41598-024-78184-x.

ABSTRACT

This study evaluates the effectiveness of an Artificial Intelligence (AI)-based smartphone application designed for decay detection on intraoral photographs, comparing its performance to that of junior dentists. Conducted at The Aga Khan University Hospital, Karachi, Pakistan, this study utilized a dataset comprising 7,465 intraoral images, including both primary and secondary dentitions. These images were meticulously annotated by two experienced dentists and further verified by senior dentists. A YOLOv5s model was trained on this dataset and integrated into a smartphone application, while a Detection Transformer was also fine-tuned for comparative purposes. Explainable AI techniques were employed to assess the AI's decision-making processes. A sample of 70 photographs was used to directly compare the application's performance with that of junior dentists. Results showed that the YOLOv5s-based smartphone application achieved a precision of 90.7%, sensitivity of 85.6%, and an F1 score of 88.0% in detecting dental decay. In contrast, junior dentists achieved 83.3% precision, 64.1% sensitivity, and an F1 score of 72.4%. The study concludes that the YOLOv5s algorithm effectively detects dental decay on intraoral photographs and performs comparably to junior dentists. This application holds potential for aiding in the evaluation of the caries index within populations, thus contributing to efforts aimed at reducing the disease burden at the community level.
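
For orientation only, the snippet below shows the general YOLOv5 inference pattern via torch.hub using the stock COCO-pretrained yolov5s weights (not the paper's dental model, which is private), and verifies that the reported F1 score follows from the reported precision and sensitivity.

```python
import torch

# Stock yolov5s weights, shown only to illustrate the inference pattern.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("intraoral_photo.jpg")   # placeholder image path
results.print()                          # per-detection class, confidence, bounding box

# Reported detection quality: F1 is the harmonic mean of precision and recall.
precision, recall = 0.907, 0.856
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")  # about 0.880, matching the reported 88.0%
```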

PMID:39500993 | DOI:10.1038/s41598-024-78184-x

Categories: Literature Watch

A Deep Learning Model to Predict Breast Implant Texture Types Using Ultrasonography Images: Feasibility Development Study

Tue, 2024-11-05 06:00

JMIR Form Res. 2024 Nov 5;8:e58776. doi: 10.2196/58776.

ABSTRACT

BACKGROUND: Breast implants, including textured variants, have been widely used in aesthetic and reconstructive mammoplasty. However, the textured shell type has been identified as a possible etiologic factor for lymphoma, specifically breast implant-associated anaplastic large cell lymphoma (BIA-ALCL). Identifying the shell texture type of the implant is critical to diagnosing BIA-ALCL. However, distinguishing the shell texture type can be difficult when patient recall is unreliable or medical records are unavailable. An alternative approach is to use ultrasonography, but this method also has limitations in quantitative assessment.

OBJECTIVE: This study aims to determine the feasibility of using a deep learning model to classify the shell texture type of breast implants and make robust predictions from ultrasonography images from heterogeneous sources.

METHODS: A total of 19,502 breast implant images were retrospectively collected from heterogeneous sources, including images captured from both Canon and GE devices, images of ruptured implants, and images without implants, as well as publicly available images. A ResNet-50 model was trained on the Canon images. The model's performance on the Canon dataset was evaluated using stratified 5-fold cross-validation. Additionally, external validation was conducted using the GE and publicly available datasets. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (PRAUC) were calculated based on the contribution of the pixels with Gradient-weighted Class Activation Mapping (Grad-CAM). To identify the significant pixels for classification, we masked the pixels that contributed less than 10%, up to a maximum of 100%. To assess the model's robustness to uncertainty, Shannon entropy was calculated for 4 image groups: Canon, GE, ruptured implants, and without implants.
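
Of the quantities named above, the Shannon entropy of the predicted class distribution is the simplest to make concrete; a minimal sketch is shown below (the Grad-CAM masking step depends on a Grad-CAM implementation and is not reproduced).

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax output, used as a per-image uncertainty score.

    logits: (N, num_classes) raw classifier outputs. Higher entropy indicates a
    less certain prediction (e.g. ruptured or absent implants in the study above).
    """
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-12)).sum(dim=1)  # one entropy value per image
```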

RESULTS: The deep learning model achieved an average AUROC of 0.98 and a PRAUC of 0.88 in the Canon dataset. The model achieved an AUROC of 0.985 and a PRAUC of 0.748 for images captured with GE devices. Additionally, the model achieved an AUROC of 0.909 and a PRAUC of 0.958 on the publicly available dataset. This model maintained the PRAUC values for quantitative validation when masking up to 90% of the least-contributing pixels and the remnant pixels in breast shell layers. Furthermore, the prediction uncertainty increased in the following order: Canon (0.066), GE (0.072), ruptured implants (0.371), and no implants (0.777).

CONCLUSIONS: We have demonstrated the feasibility of using deep learning to predict the shell texture type of breast implants. This approach quantifies the shell texture types of breast implants, supporting the first step in the diagnosis of BIA-ALCL.

PMID:39499915 | DOI:10.2196/58776

Categories: Literature Watch

Deep Lead Optimization: Leveraging Generative AI for Structural Modification

Tue, 2024-11-05 06:00

J Am Chem Soc. 2024 Nov 5. doi: 10.1021/jacs.4c11686. Online ahead of print.

ABSTRACT

The integration of deep learning-based molecular generation models into drug discovery has garnered significant attention for its potential to expedite the development process. Central to this is lead optimization, a critical phase where existing molecules are refined into viable drug candidates. As various methods for deep lead optimization continue to emerge, it is essential to classify these approaches more clearly. We categorize lead optimization methods into two main types: goal-directed and structure-directed. Our focus is on structure-directed optimization, which, while highly relevant to practical applications, is less explored compared to goal-directed methods. Through a systematic review of conventional computational approaches, we identify four tasks specific to structure-directed optimization: fragment replacement, linker design, scaffold hopping, and side-chain decoration. We discuss the motivations, training data construction, and current developments for each of these tasks. Additionally, we use classical optimization taxonomy to classify both goal-directed and structure-directed methods, highlighting their challenges and future development prospects. Finally, we propose a reference protocol for experimental chemists to effectively utilize Generative AI (GenAI)-based tools in structural modification tasks, bridging the gap between methodological advancements and practical applications.

PMID:39499822 | DOI:10.1021/jacs.4c11686

Categories: Literature Watch

Structure-aware annotation of leucine-rich repeat domains

Tue, 2024-11-05 06:00

PLoS Comput Biol. 2024 Nov 5;20(11):e1012526. doi: 10.1371/journal.pcbi.1012526. Online ahead of print.

ABSTRACT

Protein domain annotation is typically done by predictive models such as HMMs trained on sequence motifs. However, sequence-based annotation methods are prone to error, particularly in calling domain boundaries and motifs within them. These methods are limited by a lack of structural information accessible to the model. With the advent of deep learning-based protein structure prediction, existing sequence-based domain annotation methods can be improved by taking into account the geometry of protein structures. We develop dimensionality reduction methods to annotate repeat units of the leucine-rich repeat (LRR) solenoid domain. The methods are able to correct mistakes made by existing machine learning-based annotation tools and enable the automated detection of hairpin loops and structural anomalies in the solenoid. The methods are applied to 127 predicted structures of LRR-containing intracellular innate immune proteins in the model plant Arabidopsis thaliana and validated against a benchmark dataset of 172 manually-annotated LRR domains.

PMID:39499733 | DOI:10.1371/journal.pcbi.1012526

Categories: Literature Watch

Exploring vaccine hesitancy in digital public discourse: From tribal polarization to socio-economic disparities

Tue, 2024-11-05 06:00

PLoS One. 2024 Nov 5;19(11):e0308122. doi: 10.1371/journal.pone.0308122. eCollection 2024.

ABSTRACT

This study analyzed online public discourse on Twitter (later rebranded as X) during the COVID-19 pandemic to understand key factors associated with vaccine hesitancy by employing deep-learning techniques. Text classification analysis reveals a significant association between attitudes toward vaccination and the unique socio-economic characteristics of US states, such as education, race, income or voting behavior. However, our results indicate that attributing vaccine hesitancy solely to a single social factor is not appropriate. Furthermore, the topic modeling of online discourse identifies two distinct sets of justifications for vaccine hesitancy. The first set pertains to political concerns, including constitutional rights and conspiracy theories. The second pertains to medical concerns about vaccine safety and efficacy. However, vaccine-hesitant social media users pragmatically use broad categories of justification for their beliefs. This behavior may suggest that vaccine hesitancy is influenced by political beliefs, unconscious emotions, and gut-level instinct. Our findings have further implications for the critical role of trust in public institutions in shaping attitudes toward vaccination and the need for tailored communication strategies to restore faith in marginalized communities.
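
To make the topic-modeling step concrete, the toy example below uses classical LDA from scikit-learn as a familiar stand-in (the study itself employs deep-learning techniques); the four sentences are invented placeholders, not posts from the dataset.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [  # invented placeholder texts
    "vaccine mandates violate constitutional rights",
    "worried about vaccine safety and side effects",
    "government conspiracy behind the mandate",
    "efficacy data for the new vaccine looks uncertain",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic; with real data these would separate into political and
# medical justification clusters like those described above.
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = terms[comp.argsort()[-5:]][::-1]
    print(f"topic {k}:", ", ".join(top))
```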

PMID:39499705 | DOI:10.1371/journal.pone.0308122

Categories: Literature Watch

Advanced Camera-Based Scoliosis Screening via Deep Learning Detection and Fusion of Trunk, Limb, and Skeleton Features

Tue, 2024-11-05 06:00

IEEE J Biomed Health Inform. 2024 Nov 5;PP. doi: 10.1109/JBHI.2024.3491855. Online ahead of print.

ABSTRACT

Scoliosis significantly impacts quality of life, highlighting the need for effective early scoliosis screening (SS) and intervention. However, current SS methods often involve physical contact, undressing, or radiation exposure. This study introduces an innovative, non-invasive SS approach utilizing a monocular RGB camera that eliminates the need for undressing, sensor attachment, and radiation exposure. We introduce a novel approach that employs Parameterized Human 3D Reconstruction (PH3DR) to reconstruct 3D human models, effectively eliminating clothing obstructions. This is seamlessly integrated with an ISANet segmentation network, enhanced by our proposed Multi-Scale Fusion Attention (MSFA) module, to segment distinct human trunk and limb features (HTLF) that capture body surface asymmetries related to scoliosis. Additionally, we propose a Swin Transformer-enhanced CMU-Pose to extract human skeleton features (HSF), identifying skeletal asymmetries crucial for SS. Finally, we develop a fusion model that integrates the HTLF and HSF, combining surface morphology and skeletal features to improve the precision of SS. The experiments demonstrated that PH3DR and MSFA significantly improved the segmentation and extraction of HTLF, whereas the Swin Transformer-enhanced CMU-Pose substantially enhanced the extraction of HSF. Our final model achieved an F1 score (0.895 ± 0.014) comparable to the best-performing baseline model, with only 0.79% of the parameters and 1.64% of the FLOPs, and ran at 36 FPS, significantly higher than the best-performing baseline model (10 FPS). Moreover, our model outperformed two spine surgeons, one less experienced and the other moderately experienced. With its patient-friendly, privacy-preserving, and easily deployable solution, this approach is particularly well-suited for early SS and routine monitoring.
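
As an illustration only of the fusion idea described above, a toy head that concatenates a body-surface feature vector (HTLF) with a skeleton feature vector (HSF) before classification is sketched below; the feature dimensions and the MLP head are assumptions, not the paper's fusion model.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Toy fusion head: concatenate surface (HTLF) and skeleton (HSF) features."""
    def __init__(self, dim_htlf: int = 256, dim_hsf: int = 64, num_classes: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_htlf + dim_hsf, 128), nn.ReLU(),
            nn.Linear(128, num_classes),  # e.g. scoliosis vs. no scoliosis
        )

    def forward(self, htlf: torch.Tensor, hsf: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([htlf, hsf], dim=1))

# logits = LateFusionHead()(torch.rand(8, 256), torch.rand(8, 64))  # shape (8, 2)
```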

PMID:39499599 | DOI:10.1109/JBHI.2024.3491855

Categories: Literature Watch

Developing a 10-Layer Retinal Segmentation for MacTel Using Semi-Supervised Learning

Tue, 2024-11-05 06:00

Transl Vis Sci Technol. 2024 Nov 4;13(11):2. doi: 10.1167/tvst.13.11.2.

ABSTRACT

PURPOSE: Automated segmentation software in optical coherence tomography (OCT) devices is usually developed for and primarily tested on common diseases. Therefore segmentation accuracy of automated software can be limited in eyes with rare pathologies.

METHODS: We sought to develop a semisupervised deep learning segmentation model that segments 10 retinal layers and four retinal features in eyes with Macular Telangiectasia Type II (MacTel) using a small labeled dataset by leveraging unlabeled images. We compared our model against popular supervised and semisupervised models, as well as conducted ablation studies on the model itself.

RESULTS: Our model significantly outperformed all other models in terms of intersection over union on the 10 retinal layers and two retinal features in the test dataset. For the remaining two features, the pre-retinal space above the internal limiting membrane and the background below the retinal pigment epithelium, all of the models performed similarly. Furthermore, we showed that using more unlabeled images improved the performance of our semisupervised model.
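
The comparison metric above, intersection over union (IoU), is computed per layer or feature from the predicted and manually labeled masks; a minimal version is sketched below (the convention for empty masks is an assumption).

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary masks of one retinal layer/feature."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement (an assumption)
    return float(np.logical_and(pred, target).sum() / union)
```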

CONCLUSIONS: Our model improves segmentation performance over supervised models by leveraging unlabeled data. This approach has the potential to improve segmentation performance for other diseases, where labeled data is limited but unlabeled data abundant.

TRANSLATIONAL RELEVANCE: Improving automated segmentation of MacTel pathology on OCT imaging by leveraging unlabeled data may enable more accurate assessment of disease progression, and this approach may be useful for improving feature identification and location on OCT in other rare diseases as well.

PMID:39499591 | DOI:10.1167/tvst.13.11.2

Categories: Literature Watch

nPOD-Kidney: A Heterogenous Donor Cohort for the Investigation of Diabetic Kidney Disease Pathogenesis and Progression

Tue, 2024-11-05 06:00

Kidney360. 2024 Nov 5. doi: 10.34067/KID.0000000620. Online ahead of print.

ABSTRACT

BACKGROUND: The Network for Pancreatic Organ donors with Diabetes-Kidney (nPOD-K) project was initiated to assess the feasibility of using kidneys from organ donors to enhance understanding of diabetic kidney disease (DKD) progression.

METHODS: Traditional and digital pathology approaches were employed to characterize the nPOD-K cohort. Periodic acid-Schiff- and Hematoxylin and Eosin-stained sections were used to manually examine and score each nPOD-K case. Brightfield and fluorescently labelled whole slide images of nPOD-K sections were used to train, validate, and test deep learning compartment segmentation and machine learning image analysis tools within Visiopharm software. These digital pathology tools were subsequently employed to evaluate kidney cell-specific markers and pathological indicators.

RESULTS: Digital quantitation of mesangial expansion, tubular atrophy, kidney injury molecule (KIM)-1 expression, cellular infiltration, and fibrosis index aligned with histological DKD classification, as defined by pathologists' review. Histological quantification confirmed loss of podocyte, endothelial, and tubular markers, correlating with DKD progression. Altered expression patterns of prominin-1, protein-tyrosine phosphatase receptor type O, and coronin 2B were validated, in agreement with reported literature.

CONCLUSIONS: The nPOD-K cohort provides a unique open resource opportunity to not only validate putative drug targets but also better understand DKD pathophysiology. A broad range of pathogenesis can be visualized in each case, providing a simulated timeline of DKD progression. We conclude that organ donor-derived tissues serve as high-quality samples, provide a comprehensive view of tissue pathology, and address the need for human kidney tissues available for research.

PMID:39499578 | DOI:10.34067/KID.0000000620

Categories: Literature Watch

CACHE Challenge #1: Targeting the WDR Domain of LRRK2, A Parkinson's Disease Associated Protein

Tue, 2024-11-05 06:00

J Chem Inf Model. 2024 Nov 5. doi: 10.1021/acs.jcim.4c01267. Online ahead of print.

ABSTRACT

The CACHE challenges are a series of prospective benchmarking exercises to evaluate progress in the field of computational hit-finding. Here we report the results of the inaugural CACHE challenge in which 23 computational teams each selected up to 100 commercially available compounds that they predicted would bind to the WDR domain of the Parkinson's disease target LRRK2, a domain with no known ligand and only an apo structure in the PDB. The lack of known binding data and presumably low druggability of the target is a challenge to computational hit finding methods. Of the 1955 molecules predicted by participants in Round 1 of the challenge, 73 were found to bind to LRRK2 in an SPR assay with a KD lower than 150 μM. These 73 molecules were advanced to the Round 2 hit expansion phase, where computational teams each selected up to 50 analogs. Binding was observed in two orthogonal assays for seven chemically diverse series, with affinities ranging from 18 to 140 μM. The seven successful computational workflows varied in their screening strategies and techniques. Three used molecular dynamics to produce a conformational ensemble of the targeted site, three included a fragment docking step, three implemented a generative design strategy and five used one or more deep learning steps. CACHE #1 reflects a highly exploratory phase in computational drug design where participants adopted strikingly diverging screening strategies. Machine learning-accelerated methods achieved similar results to brute force (e.g., exhaustive) docking. First-in-class, experimentally confirmed compounds were rare and weakly potent, indicating that recent advances are not sufficient to effectively address challenging targets.

PMID:39499532 | DOI:10.1021/acs.jcim.4c01267

Categories: Literature Watch

Machine learning models including patient-reported outcome data in oncology: a systematic literature review and analysis of their reporting quality

Tue, 2024-11-05 06:00

J Patient Rep Outcomes. 2024 Nov 5;8(1):126. doi: 10.1186/s41687-024-00808-7.

ABSTRACT

PURPOSE: To critically examine the current state of machine learning (ML) models including patient-reported outcome measure (PROM) scores in cancer research, by investigating the reporting quality of currently available studies and proposing areas of improvement for future use of ML in the field.

METHODS: PubMed and Web of Science were systematically searched for publications of studies on patients with cancer applying ML models with PROM scores as either predictors or outcomes. The reporting quality of applied ML models was assessed utilizing an adapted version of the MI-CLAIM (Minimum Information about CLinical Artificial Intelligence Modelling) checklist. The key variables of the checklist are study design, data preparation, model development, optimization, performance, and examination. Reproducibility and transparency complement the reporting quality criteria.

RESULTS: The literature search yielded 1634 hits, of which 52 (3.2%) were eligible. Thirty-six (69.2%) publications included PROM scores as a predictor and 32 (61.5%) as an outcome. Results of the reporting quality appraisal indicate a potential for improvement, especially in the area of model examination. According to the standards of the MI-CLAIM checklist, the reporting quality of ML models in the included studies proved to be low. Only nine (17.3%) publications discuss the clinical applicability and reproducibility of the developed model, and only three (5.8%) provide code to reproduce the model and the results.

CONCLUSION: This critical examination of the current application of ML models including PROM scores in published oncological studies allowed the identification of areas of improvement for reporting and for future use of ML in the field.

PMID:39499409 | DOI:10.1186/s41687-024-00808-7

Categories: Literature Watch
