Deep learning

Corrigendum to "Addressing docking pose selection with structure-based deep learning: Recent advances, challenges and opportunities" [Comput Struct Biotechnol J vol. 23 (2024) 2141-2151]

Mon, 2024-08-05 06:00

Comput Struct Biotechnol J. 2024 Jul 17;23:2963. doi: 10.1016/j.csbj.2024.07.001. eCollection 2024 Dec.

ABSTRACT

[This corrects the article DOI: 10.1016/j.csbj.2024.05.024.].

PMID:39100805 | PMC:PMC11295283 | DOI:10.1016/j.csbj.2024.07.001

Categories: Literature Watch

Multi-format open-source sweet orange leaf dataset for disease detection, classification, and analysis

Mon, 2024-08-05 06:00

Data Brief. 2024 Jul 6;55:110713. doi: 10.1016/j.dib.2024.110713. eCollection 2024 Aug.

ABSTRACT

In Bangladesh, sweet orange cultivation has become popular among fruit growers because the fruit is in demand. However, diseases of sweet orange decrease fruit production. Research suggests that computer-aided disease diagnosis and machine learning (ML) models can improve fruit production by detecting and classifying diseases. A dataset of sweet orange images is therefore required to diagnose these diseases. Moreover, as with many other fruits, sweet orange diseases may vary from country to country, so a dataset collected in Bangladesh is needed. Lastly, since different ML algorithms require datasets in various formats, few existing datasets meet this need. To address these limitations, a sweet orange dataset was collected in Bangladesh. The dataset was collected in August and comprises high-quality images documenting multiple disease conditions, including Citrus Canker, Citrus Greening, Citrus Mealybugs, Die Back, Foliage Damage, Spiny Whitefly, Powdery Mildew, Shot Hole, Yellow Dragon, Yellow Leaves, and Healthy Leaf. These images provide an opportunity to apply machine learning and computer vision techniques to detect and classify diseases. This dataset aims to help researchers advance agricultural engineering through ML. Other sweet orange-growing countries with similar environments may also find it helpful. Lastly, experiments using our dataset will assist farmers in taking preventive measures and minimising economic losses.

PMID:39100782 | PMC:PMC11295629 | DOI:10.1016/j.dib.2024.110713

Categories: Literature Watch

A labelled dataset for rebar counting inspection on construction sites using unmanned aerial vehicles

Mon, 2024-08-05 06:00

Data Brief. 2024 Jul 14;55:110720. doi: 10.1016/j.dib.2024.110720. eCollection 2024 Aug.

ABSTRACT

Accurate inspection of rebars in Reinforced Concrete (RC) structures is essential and requires careful counting. Deep learning algorithms utilizing object detection can facilitate this process through Unmanned Aerial Vehicle (UAV) imagery. However, their effectiveness depends on the availability of large, diverse, and well-labelled datasets. This article details the creation of a dataset specifically for counting rebars using deep learning-based object detection methods. The dataset comprises 874 raw images, divided into three subsets: 524 images for training (60 %), 175 for validation (20 %), and 175 for testing (20 %). To enhance the training data, we applied eight augmentation techniques (brightness, contrast, perspective, rotation, scale, shearing, translation, and blurring) exclusively to the training subset. This resulted in nine distinct datasets: one for each augmentation technique and one combining all techniques. Expert annotators labelled the dataset in VOC XML format. While this research focuses on rebar counting, the raw dataset can be adapted for other tasks, such as estimating rebar diameter or classifying rebar shapes, by providing the necessary annotations.
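The split and augmentation scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the random seed are hypothetical:

```python
import random

def split_dataset(image_ids, seed=42):
    """Split raw images into 60/20/20 train/val/test subsets,
    mirroring the 524/175/175 split described in the article."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = round(n * 0.60)
    n_val = round(n * 0.20)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

AUGMENTATIONS = ["brightness", "contrast", "perspective", "rotation",
                 "scale", "shearing", "translation", "blurring"]

def build_augmented_sets(train_ids):
    """Nine training variants: one per augmentation plus one with all
    techniques combined, applied only to the training subset."""
    sets = {aug: [(img, aug) for img in train_ids] for aug in AUGMENTATIONS}
    sets["all"] = [(img, aug) for img in train_ids for aug in AUGMENTATIONS]
    return sets
```

With 874 images, `split_dataset` reproduces the 524/175/175 counts reported in the abstract.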

PMID:39100779 | PMC:PMC11295459 | DOI:10.1016/j.dib.2024.110720

Categories: Literature Watch

Generation of a network slicing dataset: The foundations for AI-based B5G resource management

Mon, 2024-08-05 06:00

Data Brief. 2024 Jul 15;55:110738. doi: 10.1016/j.dib.2024.110738. eCollection 2024 Aug.

ABSTRACT

This paper presents a comprehensive network slicing dataset designed to empower artificial intelligence (AI) and data-based performance prediction applications in 5G and beyond (B5G) networks. The dataset, generated through a packet-level simulator, captures the complexities of network slicing, considering the three main network slice types defined by 3GPP: Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Internet of Things (mIoT). It includes a wide range of network scenarios with varying topologies, slice instances, and traffic flows. The included scenarios consist of transport networks, excluding the Radio Access Network (RAN) infrastructure. Each sample consists of a network scenario paired with the associated performance metrics: the network configuration includes the network topology, traffic characteristics, and routing configuration, while the performance metrics are the delay, jitter, and loss for each flow. The dataset is generated with a custom network slicing admission control module, enabling the simulation of scenarios in multiple situations of over- and underprovisioning. This network slicing dataset is a valuable asset for the research community, unlocking opportunities for innovations in 5G and B5G networks.
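A sample of such a dataset pairs a network configuration with per-flow performance metrics. The sketch below is a hypothetical schema inferred from the abstract; the type and field names are illustrative and do not reflect the dataset's actual file format:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FlowMetrics:
    """Per-flow performance labels produced by the simulator."""
    delay_ms: float
    jitter_ms: float
    loss_ratio: float

@dataclass
class SlicingSample:
    """One (scenario, metrics) pair as described in the abstract."""
    topology: List[tuple]            # edges of the transport network
    slice_type: str                  # "eMBB", "URLLC", or "mIoT"
    traffic: Dict[str, float]        # per-flow traffic characteristics
    routing: Dict[str, List[int]]    # per-flow path as node indices
    metrics: Dict[str, FlowMetrics]  # per-flow delay/jitter/loss labels
```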

PMID:39100778 | PMC:PMC11295707 | DOI:10.1016/j.dib.2024.110738

Categories: Literature Watch

NSTU-BDTAKA: An open dataset for Bangladeshi paper currency detection and recognition

Mon, 2024-08-05 06:00

Data Brief. 2024 Jul 3;55:110701. doi: 10.1016/j.dib.2024.110701. eCollection 2024 Aug.

ABSTRACT

One of the most popular and well-established forms of payment in use today is paper money. Handling paper money can be challenging for people with vision impairments. Assistive technology has been reinventing itself over time to better serve elderly and disabled people. To detect paper currency and extract other useful information from it, image processing techniques and other advanced technologies, such as artificial intelligence and deep learning, can be used. In this paper, we present a meticulously curated and comprehensive dataset named 'NSTU-BDTAKA' tailored for the simultaneous detection and recognition of a specific object of cultural significance: the Bangladeshi paper currency (in Bengali, 'Taka'). This research aims to facilitate the development and evaluation of models for both taka detection and recognition tasks, offering a rich resource for researchers and practitioners alike. The dataset is divided into two distinct components: (i) taka detection and (ii) taka recognition. The taka detection subset comprises 3,111 high-resolution images, each meticulously annotated with rectangular bounding boxes that encompass instances of the taka. These annotations serve as ground truth for training and validating object detection models, and we adopt the state-of-the-art YOLOv5 architecture for this purpose. In the taka recognition subset, the dataset has been extended to include a vast collection of 28,875 images, each showcasing instances of the taka captured in diverse contexts and environments. The recognition dataset is designed to address the nuanced task of taka recognition, providing researchers with a comprehensive set of images to train, validate, and test recognition models. This subset encompasses challenges such as variations in lighting, scale, orientation, and occlusion, further enhancing the robustness of developed recognition algorithms.
The dataset NSTU-BDTAKA not only serves as a benchmark for taka detection and recognition but also fosters advancements in object detection and recognition methods that can be extrapolated to other cultural artifacts and objects. We envision that the dataset will catalyze research efforts in the field of computer vision, enabling the development of more accurate, robust, and efficient models for both detection and recognition tasks.

PMID:39100771 | PMC:PMC11296233 | DOI:10.1016/j.dib.2024.110701

Categories: Literature Watch

Embedded-deep-learning-based sample-to-answer device for on-site malaria diagnosis

Mon, 2024-08-05 06:00

Front Bioeng Biotechnol. 2024 Jul 19;12:1392269. doi: 10.3389/fbioe.2024.1392269. eCollection 2024.

ABSTRACT

Improvements in digital microscopy are critical for the development of a malaria diagnosis method that is accurate at the cellular level and exhibits satisfactory clinical performance. Digital microscopy can be enhanced by improving deep learning algorithms and achieving consistent staining results. In this study, a novel miLab™ device incorporating the solid hydrogel staining method was proposed for consistent blood film preparation, eliminating the use of complex equipment and liquid reagent maintenance. The miLab™ ensures consistent, high-quality, and reproducible blood films across various hematocrits by leveraging deformable staining patches. Embedded-deep-learning-enabled miLab™ was utilized to detect and classify malarial parasites from autofocused images of stained blood cells using an internal optical system. The results of this method were consistent with manual microscopy images. This method not only minimizes human error but also facilitates remote assistance and review by experts through digital image transmission. This method can set a new paradigm for on-site malaria diagnosis. The miLab™ algorithm for malaria detection achieved a total accuracy of 98.86% for infected red blood cell (RBC) classification. Clinical validation performed in Malawi demonstrated an overall percent agreement of 92.21%. Based on these results, miLab™ can become a reliable and efficient tool for decentralized malaria diagnosis.

PMID:39100623 | PMC:PMC11294195 | DOI:10.3389/fbioe.2024.1392269

Categories: Literature Watch

Well Plate-Based Localized Electroporation Workflow for Rapid Optimization of Intracellular Delivery

Mon, 2024-08-05 06:00

Bio Protoc. 2024 Jul 20;14(14):e5037. doi: 10.21769/BioProtoc.5037. eCollection 2024 Jul 20.

ABSTRACT

Efficient and nontoxic delivery of foreign cargo into cells is a critical step in many biological studies and cell engineering workflows, with applications in areas such as biomanufacturing and cell-based therapeutics. However, effective molecular delivery into cells involves optimizing several experimental parameters. In the case of electroporation-based intracellular delivery, parameters such as pulse voltage, duration, buffer type, and cargo concentration must be optimized for each unique application. Here, we present the protocol for fabricating and utilizing a high-throughput multi-well localized electroporation device (LEPD) assisted by deep learning-based image analysis to enable rapid optimization of experimental parameters for efficient and nontoxic molecular delivery into cells. The LEPD and the optimization workflow presented herein are relevant to both adherent and suspended cell types and different molecular cargo (DNA, RNA, and proteins). The workflow enables multiplexed combinatorial experiments and can be adapted to cell engineering applications requiring in vitro delivery.

Key features:
• A high-throughput multi-well localized electroporation device (LEPD) that can be optimized for both adherent and suspended cell types.
• Allows for multiplexed experiments combined with tailored pulse voltage, duration, buffer type, and cargo concentration.
• Compatible with various molecular cargoes, including DNA, RNA, and proteins, enhancing its versatility for cell engineering applications.
• Integration with deep learning-based image analysis enables rapid optimization of experimental parameters.
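A combinatorial parameter sweep of the kind this workflow enables can be enumerated as below, assigning one condition per well. The parameter values are hypothetical placeholders for illustration, not recommendations from the protocol:

```python
from itertools import product

# Hypothetical parameter ranges; actual values must be tuned per cell
# type and cargo, as the protocol's optimization workflow describes.
pulse_voltages_v = [10, 20, 30]
pulse_durations_ms = [5, 10, 20]
buffers = ["PBS", "Opti-MEM"]
cargo_concentrations_ug_ml = [10, 50]

def well_plate_conditions():
    """Enumerate one delivery condition per well for a combinatorial sweep."""
    return [
        {"voltage_v": v, "duration_ms": d, "buffer": b, "cargo_ug_ml": c}
        for v, d, b, c in product(pulse_voltages_v, pulse_durations_ms,
                                  buffers, cargo_concentrations_ug_ml)
    ]
```

With these example ranges the sweep yields 36 conditions, which fits comfortably on a standard multi-well plate.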

PMID:39100599 | PMC:PMC11291937 | DOI:10.21769/BioProtoc.5037

Categories: Literature Watch

The burden of cardiovascular disease in Asia from 2025 to 2050: a forecast analysis for East Asia, South Asia, South-East Asia, Central Asia, and high-income Asia Pacific regions

Mon, 2024-08-05 06:00

Lancet Reg Health West Pac. 2024 Jul 10;49:101138. doi: 10.1016/j.lanwpc.2024.101138. eCollection 2024 Aug.

ABSTRACT

BACKGROUND: Given the rapidly growing burden of cardiovascular disease (CVD) in Asia, this study forecasts the CVD burden and associated risk factors in Asia from 2025 to 2050.

METHODS: Data from the Global Burden of Disease 2019 study was used to construct regression models predicting prevalence, mortality, and disability-adjusted life years (DALYs) attributed to CVD and risk factors in Asia in the coming decades.

FINDINGS: Between 2025 and 2050, crude cardiovascular mortality is expected to rise 91.2% despite a 23.0% decrease in the age-standardised cardiovascular mortality rate (ASMR). Ischaemic heart disease (115 deaths per 100,000 population) and stroke (63 deaths per 100,000 population) will remain leading drivers of ASMR in 2050. Central Asia will have the highest ASMR (676 deaths per 100,000 population), more than three-fold that of Asia overall (186 deaths per 100,000 population), while high-income Asia sub-regions will incur an ASMR of 22 deaths per 100,000 in 2050. High systolic blood pressure will contribute the highest ASMR throughout Asia (105 deaths per 100,000 population), except in Central Asia where high fasting plasma glucose will dominate (546 deaths per 100,000 population).

INTERPRETATION: This forecast forewarns an almost doubling in crude cardiovascular mortality by 2050 in Asia, with marked heterogeneity across sub-regions. Atherosclerotic diseases will continue to dominate, while high systolic blood pressure will be the leading risk factor.

FUNDING: This was supported by the NUHS Seed Fund (NUHSRO/2022/058/RO5+6/Seed-Mar/03), National Medical Research Council Research Training Fellowship (MH 095:003/008-303), National University of Singapore Yong Loo Lin School of Medicine's Junior Academic Fellowship Scheme, NUHS Clinician Scientist Program (NCSP2.0/2024/NUHS/NCWS) and the CArdiovascular DiseasE National Collaborative Enterprise (CADENCE) National Clinical Translational Program (MOH-001277-01).

PMID:39100533 | PMC:PMC11296249 | DOI:10.1016/j.lanwpc.2024.101138

Categories: Literature Watch

scTab: Scaling cross-tissue single-cell annotation models

Sun, 2024-08-04 06:00

Nat Commun. 2024 Aug 4;15(1):6611. doi: 10.1038/s41467-024-51059-5.

ABSTRACT

Identifying cellular identities is a key use case in single-cell transcriptomics. While machine learning has been leveraged to automate cell annotation predictions for some time, there has been little progress in scaling neural networks to large data sets and in constructing models that generalize well across diverse tissues. Here, we propose scTab, an automated cell type prediction model specific to tabular data, and train it using a novel data augmentation scheme across a large corpus of single-cell RNA-seq observations (22.2 million cells). In this context, we show that cross-tissue annotation requires nonlinear models and that the performance of scTab scales both in terms of training dataset size and model size. Additionally, we show that the proposed data augmentation schema improves model generalization. In summary, we introduce a de novo cell type prediction model for single-cell RNA-seq data that can be trained across a large-scale collection of curated datasets and demonstrate the benefits of using deep learning methods in this paradigm.

PMID:39098889 | DOI:10.1038/s41467-024-51059-5

Categories: Literature Watch

Multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning

Sun, 2024-08-04 06:00

Med Biol Eng Comput. 2024 Aug 5. doi: 10.1007/s11517-024-03172-2. Online ahead of print.

ABSTRACT

Glaucoma is one of the most common causes of blindness in the world. Screening for glaucoma from retinal fundus images based on deep learning is currently a common approach. In deep learning-based glaucoma diagnosis, blood vessels within the optic disc interfere with the diagnosis, and there is also some pathological information outside the optic disc in fundus images. Therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN for better glaucoma diagnosis. In the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fuse the features of the original fundus image and the optic disc region, from which the interference of blood vessels is specifically removed by U-Net, and make the glaucoma diagnosis based on the fused features. (2) In view of the characteristics of glaucoma fundus images, such as the small amount of data, high resolution, and rich feature information, we design a new feature extraction network, RA-ResNet, and combine it with transfer learning. To verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, achieving accuracies of 92.01%, 93.75%, and 97.87%, respectively. The results demonstrate a significant improvement over earlier results.

PMID:39098859 | DOI:10.1007/s11517-024-03172-2

Categories: Literature Watch

Deep-Learning Based Analysis of In-Vivo Confocal Microscopy Images of the Subbasal Corneal Nerve Plexus' Inferior Whorl in Patients with Neuropathic Corneal Pain and Dry Eye Disease

Sun, 2024-08-04 06:00

Ocul Surf. 2024 Aug 2:S1542-0124(24)00082-X. doi: 10.1016/j.jtos.2024.08.002. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate and compare subbasal corneal nerve parameters of the inferior whorl in patients with dry eye disease (DED), neuropathic corneal pain (NCP), and controls using a novel deep-learning-based algorithm to analyze in-vivo confocal microscopy (IVCM) images.

METHODS: Subbasal nerve plexus (SNP) images of the inferior whorl of patients with DED (n=49, 77 eyes), NCP (n=14, 24 eyes), and controls (n=41, 59 eyes) were taken with IVCM and further analyzed using an open-source artificial intelligence (AI)-based algorithm previously developed by our group. This algorithm automatically segments nerves, immune cells, and neuromas in the SNP. The following parameters were compared between groups: nerve area density, average nerve thickness, average nerve segment tortuosity, junction point density, neuroma density, and immune cell density.

RESULTS: 160 eyes of 104 patients (63% females), aged 56.8 ± 15.4 years, were included. The mean nerve area density was significantly lower in the DED (P=0.012) and NCP (P<0.001) groups compared to the control group. The junction point density was lower in the NCP group (P=0.001) compared to the control group and the DED group (P=0.004). The immune cell density was higher in the DED group compared with controls (P<0.001).

CONCLUSIONS: Deep-learning-based analysis of IVCM images of the corneal SNP inferior whorl distinguished a decreased mean nerve area density in patients with DED and NCP compared with controls, and an increased immune cell density in patients with oGVHD- and SS-associated DED. These findings suggest that the inferior whorl could be used as a landmark to distinguish between patients with DED and NCP.

PMID:39098764 | DOI:10.1016/j.jtos.2024.08.002

Categories: Literature Watch

Smart technology for mosquito control: Recent developments, challenges, and future prospects

Sun, 2024-08-04 06:00

Acta Trop. 2024 Aug 2:107348. doi: 10.1016/j.actatropica.2024.107348. Online ahead of print.

ABSTRACT

Smart technology coupled with digital sensors and deep learning networks has emerging scope in various fields, including surveillance of mosquitoes carrying pathogens. Several studies have examined the efficacy of such technologies in the differential identification of mosquitoes with high accuracy. Some smart traps use computer vision technology and deep learning networks to identify features of live Aedes aegypti and Culex quinquefasciatus in real time. Implementing such a tool paired with a reliable capture mechanism can be beneficial in identifying live mosquitoes without destroying their morphological features. Such smart traps can correctly differentiate between Cx. quinquefasciatus and Ae. aegypti mosquitoes, and may also help control mosquito-borne diseases and predict possible outbreaks. Smart devices embedded with the YOLO v4 deep neural network algorithm have been designed with a differential drive mechanism and a mosquito trapping module to attract mosquitoes in the environment. The use of acoustic and optical sensors in combination with machine learning techniques is accelerating the automatic classification of mosquitoes based on their flight characteristics, including wing-beat frequency. Thus, such AI-based tools have potential scope for the surveillance of mosquitoes to control vector-borne diseases. However, the working efficiency of such technologies requires further evaluation before implementation on a global scale.

PMID:39098749 | DOI:10.1016/j.actatropica.2024.107348

Categories: Literature Watch

A transformer-based unified multimodal framework for Alzheimer's disease assessment

Sun, 2024-08-04 06:00

Comput Biol Med. 2024 Aug 3;180:108979. doi: 10.1016/j.compbiomed.2024.108979. Online ahead of print.

ABSTRACT

In Alzheimer's disease (AD) assessment, traditional deep learning approaches have often employed separate methodologies to handle the diverse modalities of input data. Recognizing the critical need for a cohesive and interconnected analytical framework, we propose the AD-Transformer, a novel transformer-based unified deep learning model. This innovative framework seamlessly integrates structural magnetic resonance imaging (sMRI), clinical, and genetic data from the extensive Alzheimer's Disease Neuroimaging Initiative (ADNI) database, encompassing 1651 subjects. By employing a Patch-CNN block, the AD-Transformer efficiently transforms image data into image tokens, while a linear projection layer adeptly converts non-image data into corresponding tokens. As the core, a transformer block learns comprehensive representations of the input data, capturing the intricate interplay between modalities. The AD-Transformer sets a new benchmark in AD diagnosis and Mild Cognitive Impairment (MCI) conversion prediction, achieving remarkable average area under the curve (AUC) values of 0.993 and 0.845, respectively, surpassing those of traditional image-only models and non-unified multimodal models. Our experimental results confirmed the potential of the AD-Transformer as a potent tool in AD diagnosis and MCI conversion prediction. By providing a unified framework that jointly learns holistic representations of both image and non-image data, the AD-Transformer paves the way for more effective and precise clinical assessments, offering a clinically adaptable strategy for leveraging diverse data modalities in the battle against AD.

PMID:39098237 | DOI:10.1016/j.compbiomed.2024.108979

Categories: Literature Watch

Action tremor features discovery for essential tremor and Parkinson's disease with explainable multilayer BiLSTM

Sun, 2024-08-04 06:00

Comput Biol Med. 2024 Aug 3;180:108957. doi: 10.1016/j.compbiomed.2024.108957. Online ahead of print.

ABSTRACT

The tremors of Parkinson's disease (PD) and essential tremor (ET) are known to have overlapping characteristics that make it complicated for clinicians to distinguish them. While deep learning is robust in detecting features unnoticeable to humans, an opaque trained model is impractical in clinical scenarios, as coincidental correlations in the training data may be used by the model to make classifications, which may result in misdiagnosis. This work aims to overcome this challenge of deep learning models by introducing a multilayer BiLSTM network with explainable AI (XAI) that can better explain tremulous characteristics and quantify the respective discovered important regions in tremor differentiation. The proposed network classifies PD, ET, and normal tremors during drinking actions and derives the contribution from tremor characteristics (i.e., time, frequency, amplitude, and actions) utilized in the classification task. The analysis shows that the XAI-BiLSTM marks the regions with high tremor amplitude as important in classification, which is verified by a high correlation between relevance distribution and tremor displacement amplitude. The XAI-BiLSTM discovered that the transition phases from arm resting to lifting (during the drinking cycle) are the most important actions for classifying tremors. Additionally, the XAI-BiLSTM reveals frequency ranges that only contribute to the classification of one tremor class, which may be the potential distinctive feature to overcome the overlapping frequencies problem. By revealing critical timing and frequency patterns unique to PD and ET tremors, the proposed XAI-BiLSTM model enables clinicians to make more informed classifications, potentially reducing misclassification rates and improving treatment outcomes.

PMID:39098236 | DOI:10.1016/j.compbiomed.2024.108957

Categories: Literature Watch

Automated segmentation in pelvic radiotherapy: A comprehensive evaluation of ATLAS-, machine learning-, and deep learning-based models

Sun, 2024-08-04 06:00

Phys Med. 2024 Aug 3;125:104486. doi: 10.1016/j.ejmp.2024.104486. Online ahead of print.

ABSTRACT

Artificial intelligence can standardize and automate highly demanding procedures, such as manual segmentation, especially in an anatomical site as common as the pelvis. This study investigated four automated segmentation tools on computed tomography (CT) images in female and male pelvic radiotherapy (RT), ranging from simpler and well-known atlas-based methods to the most recent neural network-based algorithms. The evaluation included quantitative, qualitative, and time-efficiency assessments. A mono-institutional consecutive series of 40 cervical cancer and 40 prostate cancer structure sets was retrospectively selected. After a preparatory phase, the remaining 20 testing sets per site were auto-segmented by the atlas-based model STAPLE, a Random Forest-based model, and two Deep Learning-based (DL) tools, MVision and LimbusAI. Setting manual segmentation as the ground truth, 200 structure sets were compared in terms of Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and Distance-to-Agreement Portion (DAP). Automated segmentation and manual correction durations were recorded. Expert clinicians performed a qualitative evaluation. In cervical cancer CTs, DL outperformed the other tools with higher quantitative metrics, qualitative scores, and shorter correction times. In prostate cancer CTs, on the other hand, the performance of all the analyzed tools was comparable in terms of both quantitative and qualitative metrics. This discrepancy in performance could be explained by the wide range of anatomical variability in cervical cancer, in contrast to the strict bladder and rectum filling preparation in prostate Stereotactic Body Radiation Therapy (SBRT). Decreasing segmentation times can reduce the burden of the pelvic radiation therapy routine in an automated workflow.
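For reference, the Dice Similarity Coefficient used in this kind of evaluation compares a predicted and a ground-truth binary mask. A minimal NumPy sketch (not the study's implementation):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total else 1.0
```

A DSC of 1.0 means perfect overlap with the manual ground truth; 0.0 means no overlap at all.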

PMID:39098106 | DOI:10.1016/j.ejmp.2024.104486

Categories: Literature Watch

Age-appropriate or delayed myelination? Scoring myelination in routine clinical MRI

Sun, 2024-08-04 06:00

Eur J Paediatr Neurol. 2024 Jul 27;52:59-66. doi: 10.1016/j.ejpn.2024.07.010. Online ahead of print.

ABSTRACT

BACKGROUND: Assessment of myelination is a core issue in paediatric neuroimaging and can be challenging, particularly in settings without dedicated paediatric neuroradiologists. Deep learning models have recently been shown to be able to estimate myelination age in children with normal MRI, but they currently lack validation for patients with myelination delay, and implementation, including pre-processing suitable for local imaging, is not trivial. Standardized myelination scores, which have been successfully used as biomarkers for myelination in hypomyelinating diseases, rely on visual, semiquantitative scoring of myelination on routine clinical MRI and may offer an easy-to-use alternative for assessment of myelination.

METHODS: Myelination was scored in 13 anatomic sites (items) on conventional T2w and T1w images in controls (n = 253, 0-2 years). Items for the score were selected based on inter-rater variability, practicability of scoring, and importance for correctly identifying validation scans.

RESULTS: The resulting myelination score consisting of 7 T2- and 5 T1-items delineated myelination from term-equivalent to advanced, incomplete myelination which 50 % and 99 % of controls had reached by 19.1 and 32.7 months, respectively. It correctly identified 20/20 new control MRIs and 40/43 with myelination delay, missing one patient with borderline myelination delay at 8.6 months and 2 patients with incomplete T2-myelination of subcortical temporopolar white matter at 28 and 34 months.

CONCLUSIONS: The proposed myelination score provides an easy-to-use, standardized, and versatile tool to delineate the myelination that normally occurs during the first 1.5 years of life.

PMID:39098096 | DOI:10.1016/j.ejpn.2024.07.010

Categories: Literature Watch

Automatic pipeline for segmentation of LV myocardium on quantitative MR T1 maps using deep learning model and computation of radial T1 and ECV values

Sun, 2024-08-04 06:00

NMR Biomed. 2024 Aug 4:e5230. doi: 10.1002/nbm.5230. Online ahead of print.

ABSTRACT

Native T1 mapping is a non-invasive technique used for early detection of diffuse myocardial abnormalities, and it provides baseline tissue characterization. Post-contrast T1 mapping enhances tissue differentiation, enables extracellular volume (ECV) calculation, and improves myocardial viability assessment. Accurate and precise segmentation of the left ventricular (LV) myocardium on T1 maps is crucial for assessing myocardial tissue characteristics and diagnosing cardiovascular diseases (CVD). This study presents a deep learning (DL)-based pipeline for automatically segmenting the LV myocardium on T1 maps and automatically computing radial T1 and ECV values. The study employs a multicentric dataset consisting of retrospective multiparametric MRI data of 332 subjects to develop and assess the performance of the proposed method. The study compared the DL architectures U-Net and Deep Res U-Net for LV myocardium segmentation, which achieved Dice similarity coefficients of 0.84 ± 0.43 and 0.85 ± 0.03, respectively. The Dice similarity coefficients computed for radial sub-segmentation of the LV myocardium on basal, mid-cavity, and apical slices were 0.77 ± 0.21, 0.81 ± 0.17, and 0.61 ± 0.14, respectively. The t-tests comparing ground-truth and predicted values of native T1, post-contrast T1, and ECV showed no statistically significant differences (p > 0.05) for any of the radial sub-segments. The proposed DL method leverages quantitative T1 maps for automatic LV myocardium segmentation and accurate computation of radial T1 and ECV values, highlighting its potential for assisting radiologists in objective cardiac assessment and, hence, in CVD diagnostics.
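ECV is conventionally computed from native and post-contrast T1 of myocardium and blood together with the hematocrit, as ECV = (1 − Hct) · ΔR1_myo / ΔR1_blood, where ΔR1 = 1/T1_post − 1/T1_pre. A minimal sketch of this standard formula (not the authors' pipeline; the example values in the test are purely illustrative):

```python
def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    """Extracellular volume fraction from native (pre) and post-contrast
    T1 values (in ms) of myocardium and blood, and the hematocrit."""
    dr1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre      # ΔR1 myocardium
    dr1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre  # ΔR1 blood
    return (1.0 - hematocrit) * dr1_myo / dr1_blood
```

In a per-segment pipeline, this function would be applied to the mean T1 of each radial sub-segment produced by the segmentation step.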

PMID:39097976 | DOI:10.1002/nbm.5230

Categories: Literature Watch

Application progress of deep generative models in de novo drug design

Sun, 2024-08-04 06:00

Mol Divers. 2024 Aug 4. doi: 10.1007/s11030-024-10942-5. Online ahead of print.

ABSTRACT

The deep molecular generative model has recently become a research hotspot in pharmacy. This paper analyzes a large number of recent reports and reviews these models. In the central part of the paper, four compound databases and two molecular representation methods are compared, and five model architectures and applications of deep molecular generative models are introduced in detail. Three metrics for model evaluation are listed. Finally, the limitations and challenges in this field are discussed to provide a reference and basis for developing and researching new models in the future.

PMID:39097862 | DOI:10.1007/s11030-024-10942-5

Categories: Literature Watch

Utilizing deep learning model for assessing melanocytic density in resection margins of lentigo maligna

Sat, 2024-08-03 06:00

Diagn Pathol. 2024 Aug 3;19(1):106. doi: 10.1186/s13000-024-01532-y.

ABSTRACT

BACKGROUND: Surgical excision with clear histopathological margins is the preferred treatment to prevent progression of lentigo maligna (LM) to invasive melanoma. However, the assessment of resection margins on sun-damaged skin is challenging. We developed a deep learning model for detection of melanocytes in resection margins of LM.

METHODS: In total, 353 whole slide images (WSIs) were included: 295 WSIs were used for training and 58 for validation and testing. The algorithm was trained with 3,973 manual pixel-wise annotations. The AI analyses were compared to those of three blinded dermatopathologists and two pathology residents, who performed their evaluations both without and with AI assistance. Immunohistochemistry (SOX10) served as the reference standard. We used a dichotomized cutoff for low and high risk of recurrence (≤ 25 melanocytes in an area of 0.5 mm for low risk and > 25 for high risk).
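The dichotomized cutoff maps a melanocyte count per 0.5 mm of margin to a risk label; a trivial sketch (the function name is hypothetical):

```python
def recurrence_risk(melanocyte_count):
    """Dichotomized cutoff from the study: more than 25 melanocytes
    in a 0.5 mm area of the margin is classified as high risk of
    recurrence; 25 or fewer is low risk."""
    return "high" if melanocyte_count > 25 else "low"
```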

RESULTS: The AI model achieved an area under the receiver operating characteristic curve (AUC) of 0.84 in discriminating margins with low and high recurrence risk. In comparison, the AUC ranged from 0.72 to 0.90 for the dermatopathologists and from 0.68 to 0.80 for the pathology residents. Additionally, with the aid of the AI model, the performance of two pathologists improved significantly.

CONCLUSIONS: The deep learning model showed notable accuracy in detecting resection margins of LM with a high versus low risk of recurrence. Furthermore, the use of AI improved the performance of two of the five pathologists. This automated tool could aid pathologists in the assessment or pre-screening of LM margins.

PMID:39097745 | DOI:10.1186/s13000-024-01532-y

Categories: Literature Watch

The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review

Sat, 2024-08-03 06:00

NPJ Digit Med. 2024 Aug 3;7(1):203. doi: 10.1038/s41746-024-01196-4.

ABSTRACT

The adoption of machine learning (ML) and, more specifically, deep learning (DL) applications into all major areas of our lives is underway. The development of trustworthy AI is especially important in medicine due to the large implications for patients' lives. While trustworthiness concerns various aspects including ethical, transparency and safety requirements, we focus on the importance of data quality (training/test) in DL. Since data quality dictates the behaviour of ML products, evaluating data quality will play a key part in the regulatory approval of medical ML products. We perform a systematic review following PRISMA guidelines using the databases Web of Science, PubMed and ACM Digital Library. We identify 5408 studies, out of which 120 records fulfil our eligibility criteria. From this literature, we synthesise the existing knowledge on data quality frameworks and combine it with the perspective of ML applications in medicine. As a result, we propose the METRIC-framework, a specialised data quality framework for medical training data comprising 15 awareness dimensions, along which developers of medical ML applications should investigate the content of a dataset. This knowledge helps to reduce biases as a major source of unfairness, increase robustness, facilitate interpretability and thus lays the foundation for trustworthy AI in medicine. The METRIC-framework may serve as a base for systematically assessing training datasets, establishing reference datasets, and designing test datasets which has the potential to accelerate the approval of medical ML products.

PMID:39097662 | DOI:10.1038/s41746-024-01196-4

Categories: Literature Watch
