Deep learning

Fusion of Deep Learning with Conventional Imaging Processing: Does It Bring Artificial Intelligence Closer to the Clinic?

Sun, 2024-02-04 06:00

J Invest Dermatol. 2024 Feb 1:S0022-202X(23)03212-8. doi: 10.1016/j.jid.2023.10.043. Online ahead of print.

NO ABSTRACT

PMID:38310497 | DOI:10.1016/j.jid.2023.10.043

Categories: Literature Watch

Importance-aware adaptive dataset distillation

Sat, 2024-02-03 06:00

Neural Netw. 2024 Jan 29;172:106154. doi: 10.1016/j.neunet.2024.106154. Online ahead of print.

ABSTRACT

Herein, we propose a novel dataset distillation method for constructing small, informative datasets that preserve the information of large original datasets. The development of deep learning models is enabled by the availability of large-scale datasets; however, such datasets considerably increase storage and transmission costs, making model training cumbersome, and using raw data for training raises privacy and copyright concerns. To address these issues, a task named dataset distillation has been introduced, which aims to synthesize a compact dataset that retains the essential information of the large original dataset. State-of-the-art (SOTA) dataset distillation methods match gradients or network parameters obtained during training on real and synthetic datasets. However, the contribution of different network parameters to the distillation process varies, and treating them uniformly degrades distillation performance. Based on this observation, we propose an importance-aware adaptive dataset distillation (IADD) method that improves distillation performance by automatically assigning importance weights to different network parameters during distillation, thereby synthesizing more robust distilled datasets. IADD outperforms other SOTA parameter-matching dataset distillation methods on multiple benchmark datasets, including in terms of cross-architecture generalization, and an analysis of the self-adaptive weights demonstrates its effectiveness. Furthermore, the practical value of IADD is validated in a real-world medical application, COVID-19 detection.
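
To make the core idea concrete, below is a minimal sketch (not the authors' code) of an importance-weighted parameter-matching loss in which per-layer weights are learned jointly with the synthetic data; the function and variable names, the softmax weighting, and the per-layer normalisation are illustrative assumptions.

```python
# Minimal sketch: importance-weighted parameter matching for dataset distillation.
# Per-layer weights w_l are learned jointly with the synthetic data so layers
# contribute unequally to the matching loss.
import torch

def weighted_param_match_loss(params_syn, params_real, log_w):
    """params_syn/params_real: lists of per-layer parameter tensors obtained by
    training on synthetic vs. real data; log_w: learnable per-layer log-weights."""
    w = torch.softmax(log_w, dim=0)            # importance weights sum to 1
    loss = 0.0
    for i, (p_s, p_r) in enumerate(zip(params_syn, params_real)):
        diff = (p_s - p_r).pow(2).sum()
        norm = p_r.pow(2).sum() + 1e-8         # normalised distance per layer
        loss = loss + w[i] * diff / norm
    return loss

# Toy usage: three "layers" with random parameters.
real = [torch.randn(16, 8), torch.randn(8), torch.randn(4, 8)]
syn = [p + 0.1 * torch.randn_like(p) for p in real]
log_w = torch.zeros(len(real), requires_grad=True)   # learned during distillation
print(weighted_param_match_loss(syn, real, log_w))
```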

PMID:38309137 | DOI:10.1016/j.neunet.2024.106154

Categories: Literature Watch

Porous carbon film/WO3-x nanosheets based SERS substrate combined with deep learning technique for molecule detection

Sat, 2024-02-03 06:00

Spectrochim Acta A Mol Biomol Spectrosc. 2024 Jan 23;310:123962. doi: 10.1016/j.saa.2024.123962. Online ahead of print.

ABSTRACT

Surface-enhanced Raman scattering (SERS) is an attractive optical detection method with high sensitivity and detectivity; however, challenges in large-area signal uniformity and in the analysis of complex spectra have hindered its wide application. Herein, a highly sensitive and uniform SERS detection strategy is reported, supported by a noble-metal-free SERS substrate of porous carbon film/WO3-x nanosheets (PorC/WO3-x) combined with a deep learning algorithm. Experimentally, the PorC/WO3-x substrate was prepared by high-temperature annealing of PorC/WO3 films under an argon atmosphere, and the defect density of the WO3 was controlled by tuning the reducing reaction time during annealing. SERS performance was evaluated using R6G as the Raman reporter: the SERS intensity obtained on the substrate with the optimal annealing time of 3 h was about 8 times that obtained on the PorC/WO3 substrate without annealing, and a detection limit of 10^-7 M and a Raman enhancement factor of 10^6 were achieved. Moreover, the optimal SERS substrate was used to detect the flavonoids quercetin, 3-hydroxyflavone and flavone, and a deep learning algorithm was incorporated to identify quercetin. Quercetin could be accurately detected among these flavonoids, with a lowest detectable concentration of 10^-5 M.
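
As a purely illustrative companion to the deep learning component described above, the sketch below shows a small 1D CNN that could classify Raman spectra (e.g., quercetin vs. other flavonoids); the architecture, input length, and class count are assumptions, not the paper's model.

```python
# Illustrative only: a small 1D CNN classifier for Raman spectra.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, n_wavenumbers)
        return self.classifier(self.features(x).flatten(1))

spectra = torch.randn(4, 1, 1024)    # 4 synthetic spectra with 1024 Raman shifts
print(SpectrumCNN()(spectra).shape)  # torch.Size([4, 3])
```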

PMID:38309005 | DOI:10.1016/j.saa.2024.123962

Categories: Literature Watch

Analysis of static plantar pressure data with capsule networks: Diagnosing ataxia in MS patients with a deep learning-based approach

Sat, 2024-02-03 06:00

Mult Scler Relat Disord. 2024 Jan 21;83:105465. doi: 10.1016/j.msard.2024.105465. Online ahead of print.

ABSTRACT

This study aimed to detect ataxia in patients with multiple sclerosis (MS) using static plantar pressure data and capsule networks (CapsNet), one of the deep learning (DL) architectures; CapsNet is equipped with a robust dynamic routing mechanism that determines the output of the next capsule. MS is a chronic disease of the central nervous system that manifests itself in attacks. One of its most common and challenging symptoms is ataxia, which causes loss of control of limb muscle tone or gait disorders, leading to loss of balance and coordination. Ataxia in MS is diagnosed using the standard Expanded Disability Status Scale (EDSS) score; however, because of physician misjudgment, diagnostic differences among physicians, and incorrect patient information, more unbiased solutions are needed. The proposed approach achieved a sensitivity of 96.34% ± 1.71, specificity of 98.11% ± 2.04, precision of 98.08% ± 2.16, and accuracy of 97.13% ± 0.33. The main motivation of the study is to show that deep learning methods can successfully detect ataxia in MS patients from static plantar pressure data, and the high sensitivity, specificity, precision and accuracy indicate that the proposed system can be an effective tool in clinical practice. In addition, the proposed autonomous system could serve as a support mechanism to assist the physician in detecting ataxia in patients with MS.
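
For readers unfamiliar with the dynamic routing mechanism mentioned above, here is a minimal routing-by-agreement sketch in the style of Sabour et al. (2017); the tensor shapes and iteration count are illustrative and do not reproduce the paper's configuration.

```python
# Minimal sketch of routing-by-agreement, the dynamic routing CapsNet uses to
# decide how strongly each lower capsule feeds each higher capsule.
import torch

def squash(s, dim=-1, eps=1e-8):
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: prediction vectors of shape (batch, n_lower, n_upper, dim_upper)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)       # routing logits
    for _ in range(n_iters):
        c = torch.softmax(b, dim=2)                              # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)                 # weighted sum
        v = squash(s)                                            # (batch, n_upper, dim_upper)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)             # agreement update
    return v

u_hat = torch.randn(2, 1152, 10, 16)     # e.g. 1152 primary -> 10 class capsules
print(dynamic_routing(u_hat).shape)      # torch.Size([2, 10, 16])
```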

PMID:38308913 | DOI:10.1016/j.msard.2024.105465

Categories: Literature Watch

Advantages of transformer and its application for medical image segmentation: a survey

Sat, 2024-02-03 06:00

Biomed Eng Online. 2024 Feb 3;23(1):14. doi: 10.1186/s12938-024-01212-4.

ABSTRACT

PURPOSE: Convolution operator-based neural networks have shown great success in medical image segmentation over the past decade, and the U-shaped network with a codec (encoder-decoder) structure is one of the most widely used models. The transformer, a technology originating in natural language processing, can capture long-distance dependencies and has been applied in the Vision Transformer to achieve state-of-the-art performance on image classification tasks. Recently, researchers have extended the transformer to medical image segmentation tasks, with promising results.

METHODS: This review comprises publications selected through a Web of Science search. We focused on papers published since 2018 that applied the transformer architecture to medical image segmentation. We conducted a systematic analysis of these studies and summarized the results.

RESULTS: To better convey the benefits of convolutional neural networks and transformers, the construction of the codec and transformer modules is first explained. Second, transformer-based medical image segmentation models are summarized. The assessment metrics typically used for medical image segmentation tasks are then listed. Finally, a large number of medical segmentation datasets are described.

CONCLUSION: Although pure transformer models without any convolution operator exist, the limited sample sizes of medical image segmentation datasets still restrict the growth of the transformer, even though this can be mitigated by pretrained models. More often than not, researchers still design models that combine transformer and convolution operators.
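
As a pointer to the mechanism that gives transformers their long-distance dependency modelling, below is a minimal single-head self-attention sketch over patch tokens; it is a generic illustration, not a model from any surveyed paper.

```python
# Minimal self-attention: every patch/token attends to every other token.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, n_tokens, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

tokens = torch.randn(1, 196, 64)          # e.g. 14x14 patch embeddings
w = [torch.randn(64, 64) for _ in range(3)]
print(self_attention(tokens, *w).shape)   # torch.Size([1, 196, 64])
```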

PMID:38310297 | DOI:10.1186/s12938-024-01212-4

Categories: Literature Watch

A hyperspectral deep learning attention model for predicting lettuce chlorophyll content

Sat, 2024-02-03 06:00

Plant Methods. 2024 Feb 3;20(1):22. doi: 10.1186/s13007-024-01148-9.

ABSTRACT

BACKGROUND: The phenotypic traits of leaves directly reflect the agronomic traits of leafy vegetables during growth and play a vital role in selecting high-quality varieties. Current image-based phenotypic trait extraction research focuses mainly on the morphological and structural traits of plants or leaves, and few studies address the physiological traits of leaves. This study developed a deep learning model to predict the total chlorophyll of greenhouse lettuce directly from the full spectrum of hyperspectral images.

RESULTS: A one-dimensional CNN-based deep learning model with a spectral attention module was used to estimate the total chlorophyll of greenhouse lettuce from the full spectrum of hyperspectral images. Experimental results demonstrate that the deep neural network with the spectral attention module outperformed standard approaches, including partial least squares regression (PLSR) and random forest (RF), with an average R2 of 0.746 and an average RMSE of 2.018.

CONCLUSIONS: This study unveils the capability of leveraging deep attention networks and hyperspectral imaging for estimating lettuce chlorophyll levels. This approach offers a convenient, non-destructive, and effective estimation method for the automatic monitoring and production management of leafy vegetables.
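
A hedged sketch of the kind of spectral attention described above is shown below: a squeeze-and-excitation style block re-weights hyperspectral bands before a small 1D CNN regresses chlorophyll content; the band count, layer sizes, and SE-style design are assumptions rather than the authors' implementation.

```python
# Illustrative spectral-attention regressor for per-pixel reflectance spectra.
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    def __init__(self, n_bands, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_bands, n_bands // reduction), nn.ReLU(),
            nn.Linear(n_bands // reduction, n_bands), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, n_bands)
        return x * self.fc(x)             # per-band attention weights

class ChlorophyllNet(nn.Module):
    def __init__(self, n_bands=224):
        super().__init__()
        self.attn = SpectralAttention(n_bands)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, 1)      # total chlorophyll (regression)

    def forward(self, x):                 # x: (batch, n_bands) reflectance spectrum
        x = self.attn(x).unsqueeze(1)
        return self.head(self.cnn(x).flatten(1))

print(ChlorophyllNet()(torch.randn(8, 224)).shape)   # torch.Size([8, 1])
```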

PMID:38310270 | DOI:10.1186/s13007-024-01148-9

Categories: Literature Watch

AI models for automated segmentation of engineered polycystic kidney tubules

Sat, 2024-02-03 06:00

Sci Rep. 2024 Feb 3;14(1):2847. doi: 10.1038/s41598-024-52677-1.

ABSTRACT

Autosomal dominant polycystic kidney disease (ADPKD) is a monogenic, rare disease characterized by the formation of multiple cysts that grow out of the renal tubules. Despite intensive attempts to develop new drugs or repurpose existing ones, there is currently no definitive cure for ADPKD. This is primarily due to the complex and variable pathogenesis of the disease and the lack of models that can faithfully reproduce the human phenotype. Therefore, the development of models that allow automated detection of cyst growth directly on human kidney tissue is a crucial step in the search for efficient therapeutic solutions. Artificial intelligence methods, and deep learning algorithms in particular, can provide powerful and effective solutions to such tasks, and various architectures have indeed been proposed in the literature in recent years. Here, we comparatively review state-of-the-art deep learning segmentation models, using as a testbed a set of sequential RGB immunofluorescence images from 4 in vitro experiments with 32 engineered polycystic kidney tubules. To gain a deeper understanding of the detection process, we implemented both pixel-wise and cyst-wise performance metrics to evaluate the algorithms. Overall, two models stand out as the best performing, namely UNet++ and UACANet: the latter uses a self-attention mechanism that introduces some explainability aspects which can be further exploited in future developments, making it the most promising algorithm to build upon towards a more refined cyst-detection platform. When applied to detect large cysts, UACANet achieves a cyst-wise Intersection over Union of 0.83, a recall of 0.91, and a precision of 0.92; on cysts of all sizes, it averages a pixel-wise Intersection over Union of 0.624. The code to reproduce all results is freely available in a public GitHub repository.
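
To illustrate the two evaluation views mentioned in the abstract, the sketch below computes a pixel-wise IoU over whole masks and an object (cyst)-wise recall/precision via connected components; the matching rule and IoU threshold are assumptions, not the paper's exact protocol.

```python
# Pixel-wise IoU vs. object (cyst)-wise matching via connected components.
import numpy as np
from scipy import ndimage

def pixel_iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def cystwise_scores(pred, gt, iou_thresh=0.5):
    gt_lab, n_gt = ndimage.label(gt)
    pr_lab, n_pr = ndimage.label(pred)
    matched = 0
    for i in range(1, n_gt + 1):
        best = max((pixel_iou(pr_lab == j, gt_lab == i) for j in range(1, n_pr + 1)),
                   default=0.0)
        matched += best >= iou_thresh
    recall = matched / n_gt if n_gt else 1.0
    precision = matched / n_pr if n_pr else 1.0
    return recall, precision

gt = np.zeros((64, 64), bool); gt[5:15, 5:15] = True; gt[30:45, 30:45] = True
pred = np.zeros_like(gt); pred[6:16, 6:16] = True; pred[31:44, 31:44] = True
print(pixel_iou(pred, gt), cystwise_scores(pred, gt))
```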

PMID:38310171 | DOI:10.1038/s41598-024-52677-1

Categories: Literature Watch

Deep representation learning of tissue metabolome and computed tomography annotates NSCLC classification and prognosis

Sat, 2024-02-03 06:00

NPJ Precis Oncol. 2024 Feb 3;8(1):28. doi: 10.1038/s41698-024-00502-3.

ABSTRACT

The rich chemical information from tissue metabolomics provides a powerful means to elaborate tissue physiology or tumor characteristics at the cellular and tumor-microenvironment levels. However, obtaining such information requires invasive biopsies, is costly, and can delay clinical patient management. Conversely, computed tomography (CT) is a clinical standard of care but does not intuitively harbor histological or prognostic information. Furthermore, the ability to embed metabolome information into CT and subsequently use the learned representation for classification or prognosis has yet to be described. This study develops a deep learning-based framework, tissue-metabolomic-radiomic-CT (TMR-CT), by combining 48 paired CT images and tumor/normal tissue metabolite intensities to generate ten image embeddings that infer a metabolite-derived representation from CT alone. In clinical NSCLC settings, we ascertain whether TMR-CT yields an enhanced feature generation model for histology classification and prognosis tasks in an unseen international CT dataset of 742 patients. TMR-CT non-invasively determines histological classes (adenocarcinoma vs. squamous cell carcinoma) with an F1-score of 0.78 and further predicts patients' prognosis with a c-index of 0.72, surpassing the performance of radiomics models and deep learning on single-modality CT feature extraction. Additionally, our work shows the potential to generate informative, biology-inspired, CT-led features to explore connections between hard-to-obtain tissue metabolic profiles and routine lesion-derived image data.

PMID:38310164 | DOI:10.1038/s41698-024-00502-3

Categories: Literature Watch

AI-derived epicardial fat measurements improve cardiovascular risk prediction from myocardial perfusion imaging

Sat, 2024-02-03 06:00

NPJ Digit Med. 2024 Feb 3;7(1):24. doi: 10.1038/s41746-024-01020-z.

ABSTRACT

Epicardial adipose tissue (EAT) volume and attenuation are associated with cardiovascular risk, but manual annotation is time-consuming. We evaluated whether automated deep learning-based EAT measurements from ungated computed tomography (CT) are associated with death or myocardial infarction (MI). We included 8781 patients from 4 sites without known coronary artery disease who underwent hybrid myocardial perfusion imaging. Of those, 500 patients from one site were used for model training and validation, with the remaining patients held out for testing (n = 3511 internal testing, n = 4770 external testing). We modified an existing deep learning model to first identify the cardiac silhouette and then automatically segment EAT based on attenuation thresholds. Deep learning EAT measurements were obtained in <2 s, compared with 15 min for expert annotations. There was excellent agreement between deep learning and expert segmentation for EAT attenuation (Spearman correlation 0.90 internal, 0.82 external) and volume (Spearman correlation 0.90 internal, 0.91 external) in all 3 sites (Spearman correlation 0.90-0.98). During a median follow-up of 2.7 years (IQR 1.6-4.9), 565 patients experienced death or MI. Elevated EAT volume and attenuation were independently associated with an increased risk of death or MI after adjustment for relevant confounders. Deep learning can automatically measure EAT volume and attenuation from low-dose, ungated CT with excellent correlation with expert annotations, in a fraction of the time. EAT measurements offer additional prognostic insights within the context of hybrid perfusion imaging.
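
A hedged sketch of the thresholding stage described above follows: given a cardiac-silhouette mask (assumed to come from a segmentation model), EAT voxels are selected by a Hounsfield-unit window; the -190 to -30 HU fat window is a commonly used convention and an assumption here, not a value taken from the paper.

```python
# Attenuation-window EAT measurement inside a cardiac-silhouette mask.
import numpy as np

def measure_eat(ct_hu, heart_mask, voxel_volume_ml, hu_range=(-190, -30)):
    """ct_hu: 3D CT volume in HU; heart_mask: boolean cardiac-silhouette mask."""
    fat = heart_mask & (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1])
    volume_ml = fat.sum() * voxel_volume_ml
    mean_attenuation = float(ct_hu[fat].mean()) if fat.any() else float("nan")
    return volume_ml, mean_attenuation

ct = np.random.randint(-1000, 200, size=(40, 128, 128)).astype(np.int16)
mask = np.zeros(ct.shape, bool); mask[10:30, 40:90, 40:90] = True
print(measure_eat(ct, mask, voxel_volume_ml=0.002))
```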

PMID:38310123 | DOI:10.1038/s41746-024-01020-z

Categories: Literature Watch

All-optical image denoising using a diffractive visual processor

Sat, 2024-02-03 06:00

Light Sci Appl. 2024 Feb 4;13(1):43. doi: 10.1038/s41377-024-01385-6.

ABSTRACT

Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image denoising algorithms executed on computers exhibit latency due to the several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser that all-optically and non-iteratively cleans various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250 × λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field of view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.

PMID:38310118 | DOI:10.1038/s41377-024-01385-6

Categories: Literature Watch

Printed smart devices for anti-counterfeiting allowing precise identification with household equipment

Sat, 2024-02-03 06:00

Nat Commun. 2024 Feb 3;15(1):1040. doi: 10.1038/s41467-024-45428-3.

ABSTRACT

Counterfeiting has become a serious global problem, causing worldwide losses and disrupting the normal order of society. Physical unclonable functions are promising hardware-based cryptographic primitives, especially those generated by chemical processes, which offer a massive challenge-response pair space. However, current chemical-based physical unclonable function devices typically require complex fabrication processes or sophisticated characterization methods and yield only binary (bit) keys, limiting their practical applications and security properties. Here, we report a flexible laser printing method to synthesize unclonable electronics with high randomness, uniqueness, and repeatability. Hexadecimal resistive keys and binary optical keys can be obtained by challenging the devices with an ohmmeter and an optical microscope. These readout methods not only make the identification process available to general end users without professional expertise, but also guarantee device complexity and data capacity. An adopted open-source deep learning model guarantees precise identification with high reliability. The electrodes and connection wires are printed directly during laser writing, which allows electronics with different structures to be realized through free design. Meanwhile, the electronics exhibit excellent mechanical and thermal stability. The high physical unclonable function performance and the widely accessible readout methods, together with the flexibility and stability, make this synthesis strategy extremely attractive for practical applications.

PMID:38310090 | DOI:10.1038/s41467-024-45428-3

Categories: Literature Watch

Multi-layer convolutional dictionary learning network for signal denoising and its application to explainable rolling bearing fault diagnosis

Sat, 2024-02-03 06:00

ISA Trans. 2024 Jan 29:S0019-0578(24)00036-3. doi: 10.1016/j.isatra.2024.01.027. Online ahead of print.

ABSTRACT

As vital mechanical sub-components, rolling bearings require health monitoring. Vibration signal analysis is a commonly used approach for bearing fault diagnosis; however, the collected vibration signals are inevitably contaminated by noise, which negatively affects diagnosis, so denoising is an essential step of vibration signal processing. Traditional denoising methods require expert knowledge to select hyperparameters, while data-driven deep learning methods lack interpretability and a clear justification for the architecture of a "black-box" deep neural network. One way to design neural networks systematically is to unroll algorithms such as learned iterative soft-thresholding (LISTA). In this paper, the multi-layer convolutional LISTA (ML-CLISTA) algorithm is derived by embedding a designed multi-layer sparse coder into the convolutional extension of LISTA, and the multi-layer convolutional dictionary learning (ML-CDL) network for mechanical vibration signal denoising is proposed by unrolling ML-CLISTA. By combining the ML-CDL network with a classifier, the proposed denoising method is applied to explainable rolling bearing fault diagnosis. Experiments on two bearing datasets show the superiority of the ML-CDL network over other typical denoising methods.
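
To clarify what "unrolling" means here, below is a minimal fully connected LISTA sketch in which iterative soft-thresholding becomes a fixed number of layers with learnable weights; the paper's ML-CLISTA/ML-CDL networks use multi-layer convolutional dictionaries, so this is only a simplified illustration.

```python
# Unrolled iterative soft-thresholding (LISTA-style) with learnable weights.
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class LISTA(nn.Module):
    def __init__(self, n_signal, n_atoms, n_iters=5):
        super().__init__()
        self.W = nn.Linear(n_signal, n_atoms, bias=False)   # learned encoder W_e
        self.S = nn.Linear(n_atoms, n_atoms, bias=False)    # learned mutual-inhibition S
        self.theta = nn.Parameter(torch.full((n_iters,), 0.1))
        self.n_iters = n_iters

    def forward(self, y):                                   # y: (batch, n_signal)
        b = self.W(y)
        z = soft_threshold(b, self.theta[0])
        for t in range(1, self.n_iters):
            z = soft_threshold(b + self.S(z), self.theta[t])
        return z                                            # sparse code estimate

y = torch.randn(4, 256)                                     # noisy vibration frames
print(LISTA(256, 512)(y).shape)                             # torch.Size([4, 512])
```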

PMID:38309975 | DOI:10.1016/j.isatra.2024.01.027

Categories: Literature Watch

Compressed sensing with deep learning reconstruction: Improving capability of gadolinium-EOB-enhanced 3D T1WI

Sat, 2024-02-03 06:00

Magn Reson Imaging. 2024 Feb 1:S0730-725X(24)00021-3. doi: 10.1016/j.mri.2024.01.015. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of this study was to determine the utility of compressed sensing (CS) with deep learning reconstruction (DLR) for improving spatial resolution, image quality and focal liver lesion detection on high-resolution contrast-enhanced T1-weighted imaging (HR-CE-T1WI), as compared with conventional CE-T1WI with parallel imaging (PI).

METHODS: Seventy-seven participants with focal liver lesions underwent both conventional CE-T1WI with PI and HR-CE-T1WI, as well as surgical resection, transarterial chemoembolization, or radiofrequency ablation, followed by histopathological or >2-year follow-up examinations at our hospital. Signal-to-noise ratios (SNRs) of the liver, spleen and kidney were calculated for each patient, after which each SNR was compared by means of a paired t-test. To compare the focal lesion detection capabilities of the two methods, a 5-point visual scoring system was adopted for a per-lesion analysis. Jackknife free-response receiver operating characteristic (JAFROC) analysis was then performed, while sensitivity and false-positive rates (per data set) for the consensus assessment of the two methods were compared using McNemar's test or the signed rank test.

RESULTS: Each SNR of HR-CE-T1WI was significantly higher than that of conventional CE-T1WI with PI (p < 0.05). In the consensus assessment, HR-CE-T1WI had significantly higher sensitivity than conventional CE-T1WI with PI (p = 0.004). Moreover, there were significantly fewer false positives per case for HR-CE-T1WI than for conventional CE-T1WI with PI (p = 0.04).

CONCLUSION: CS with DLR is useful for improving the spatial resolution, image quality and focal liver lesion detection capability of Gd-EOB-DTPA-enhanced 3D T1WI without the need for a longer breath-hold time.
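
As a hedged illustration of the ROI-based SNR measurements mentioned in the methods, the snippet below uses one common definition (mean organ signal divided by the standard deviation of background noise); the study's exact formula is not given in the abstract and may differ.

```python
# One common ROI-based SNR definition: mean signal / background noise SD.
import numpy as np

def roi_snr(image, organ_mask, background_mask):
    return image[organ_mask].mean() / image[background_mask].std()

img = np.random.normal(100, 5, size=(256, 256))
organ = np.zeros(img.shape, bool); organ[100:140, 100:140] = True
background = np.zeros(img.shape, bool); background[:32, :32] = True
img[organ] += 300                                   # simulated enhancing liver ROI
print(round(roi_snr(img, organ, background), 1))
```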

PMID:38309378 | DOI:10.1016/j.mri.2024.01.015

Categories: Literature Watch

Medical report generation based on multimodal federated learning

Sat, 2024-02-03 06:00

Comput Med Imaging Graph. 2024 Jan 26;113:102342. doi: 10.1016/j.compmedimag.2024.102342. Online ahead of print.

ABSTRACT

Medical image reports are integral to clinical decision-making and patient management. Despite their importance, the confidential and private nature of medical data poses significant obstacles to the sharing and analysis of medical image data. This paper addresses these concerns by introducing a multimodal federated learning-based methodology for medical image reporting, which harnesses distributed computing to co-train models across various medical institutions. Under the federated learning framework, every medical institution can train the model locally and aggregate the updated model parameters to curate a top-tier medical image report model. First, we advocate an architecture facilitating multimodal federated learning, comprising model creation, parameter consolidation, and algorithm enhancement steps. In the model creation phase, we introduce a deep learning-based strategy that uses multimodal data for training to produce medical image reports. In the parameter aggregation phase, the federated averaging (FedAvg) algorithm is applied to amalgamate the model parameters trained by each institution, which leads to a comprehensive global model. In addition, we introduce an evidence-based optimization algorithm built upon federated averaging. The efficacy of the proposed architecture and scheme is showcased through a series of experiments, whose results validate the proficiency of the proposed multimodal federated learning approach in generating medical image reports. Compared with conventional centralized learning methods, our proposal not only enhances the protection of patient confidentiality but also improves the accuracy and overall quality of medical image reports. Through this research, we offer a novel solution for the privacy issues linked to sharing and analyzing medical data. The multimodal federated learning method is expected to assume a crucial role in medical image report generation and other medical applications, delivering more precise, efficient, and privacy-preserving medical services for healthcare professionals and patients.
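
The parameter aggregation step can be illustrated with a minimal federated averaging (FedAvg) sketch, in which each institution trains locally and only model parameters are aggregated, weighted by local dataset size; the toy model and weighting are assumptions, and the report-generation network itself is abstracted away.

```python
# Minimal FedAvg: weighted average of client model parameters (state_dicts).
import torch

def fedavg(client_state_dicts, client_sizes):
    total = float(sum(client_sizes))
    avg = {}
    for key in client_state_dicts[0]:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

# Toy usage with two "institutions" sharing the same tiny model architecture.
make_model = lambda: torch.nn.Linear(4, 2)
clients = [make_model().state_dict(), make_model().state_dict()]
global_weights = fedavg(clients, client_sizes=[120, 80])
model = make_model(); model.load_state_dict(global_weights)
```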

PMID:38309174 | DOI:10.1016/j.compmedimag.2024.102342

Categories: Literature Watch

Topology observing 3D device reconstruction from continuous-sweep limited angle fluoroscopy

Sat, 2024-02-03 06:00

Med Phys. 2024 Feb 3. doi: 10.1002/mp.16954. Online ahead of print.

ABSTRACT

BACKGROUND: Minimally invasive procedures usually require navigating a microcatheter and guidewire through endoluminal structures such as blood vessels and airways to sites of the disease. For numerous clinical applications, two-dimensional (2D) fluoroscopy is the primary modality used for real-time image guidance during navigation. However, 2D imaging can pose challenges for navigation in complex structures. Real-time 3D visualization of devices within the anatomic context could provide considerable benefits for these procedures. Continuous-sweep limited angle (CLA) fluoroscopy has recently been proposed to provide a compromise between conventional rotational 3D acquisitions and real-time fluoroscopy.

PURPOSE: The purpose of this work was to develop and evaluate a noniterative 3D device reconstruction approach for CLA fluoroscopy acquisitions, which takes into account endoluminal topology to avoid impossible paths between disconnected branches.

METHODS: The algorithm relies on a static 3D roadmap (RM) of vessels or airways, which may be generated from conventional cone beam CT (CBCT) acquisitions prior to navigation. The RM is converted to a graph representation describing its topology. During catheter navigation, the device is segmented from the live 2D projection images using a deep learning approach, from which the centerlines are extracted. Rays from the focal spot to detector pixels representing 2D device points are identified and their intersections with the RM are computed. Based on the RM graph, a subset of line segments is selected as candidates to exclude device paths through disconnected branches of the RM. Depth localization for each point along the device is then performed by finding the point closest to the previous 3D reconstruction along the candidate segments. This process is repeated as the projection angle changes for each CLA image frame. The approach was evaluated in a phantom study in which a catheter and guidewire were navigated along five pathways within a complex vessel phantom. The result was compared to static conventional CBCT acquisitions of the device in its final position.

RESULTS: The average root mean squared 3D distance between CLA reconstruction and reference centerline was 1.87 ± 0.30 mm. The Euclidean distance at the device tip was 2.92 ± 2.35 mm. The correct pathway was identified during reconstruction in 100% of frames (n = 1475). The percentage of 3D device points reconstructed inside the 3D roadmap was 91.83 ± 2.52%, with an average distance of 0.62 ± 0.30 mm between the device points outside the roadmap and the nearest point within the roadmap.

CONCLUSIONS: This study demonstrates the feasibility of reconstructing curvilinear devices such as catheters and guidewires during endoluminal procedures including intravascular and transbronchial interventions using a noniterative reconstruction approach for CLA fluoroscopy. This approach could improve device navigation in cases where the structure of vessels or airways is complex and includes overlapping branches.
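
A hedged sketch of the depth-localization step described in the methods is given below: for each 2D device point, candidate 3D segments (ray/roadmap intersections that survive the topology check) are searched for the point closest to the previous frame's reconstruction; the geometry is simplified to point sampling along segments, and all names are illustrative.

```python
# Pick the 3D point on the allowed roadmap segments closest to the previous frame.
import numpy as np

def localize_depth(candidate_segments, prev_point, n_samples=50):
    """candidate_segments: list of (start_xyz, end_xyz) pairs on allowed branches."""
    best, best_d = None, np.inf
    for a, b in candidate_segments:
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        pts = (1 - t) * np.asarray(a) + t * np.asarray(b)   # sample along segment
        d = np.linalg.norm(pts - prev_point, axis=1)
        if d.min() < best_d:
            best_d, best = d.min(), pts[d.argmin()]
    return best

segments = [((0, 0, 0), (0, 0, 10)), ((5, 5, 0), (5, 5, 10))]   # two branch candidates
print(localize_depth(segments, prev_point=np.array([0.2, 0.1, 4.0])))
```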

PMID:38308822 | DOI:10.1002/mp.16954

Categories: Literature Watch

Reducing echocardiographic examination time through routine use of fully automated software: a comparative study of measurement and report creation time

Sat, 2024-02-03 06:00

J Echocardiogr. 2024 Feb 3. doi: 10.1007/s12574-023-00636-6. Online ahead of print.

ABSTRACT

BACKGROUND: Manual interpretation of echocardiographic data is time-consuming and operator-dependent. With the advent of artificial intelligence (AI), there is a growing interest in its potential to streamline echocardiographic interpretation and reduce variability. This study aimed to compare the time taken for measurements by AI to that by human experts after converting the acquired dynamic images into DICOM data.

METHODS: Twenty-three consecutive patients were examined by a single operator, with varying image quality and different medical conditions. Echocardiographic parameters were independently evaluated by a human expert using the manual method and by the fully automated US2.ai software. The automated processes facilitated by the US2.ai software encompass real-time processing of 2D and Doppler data, measurement of clinically important variables (such as LV function and geometry), automated parameter assessment, and report generation with findings and comments aligned with guidelines. We assessed the duration required for echocardiographic measurements and report creation.

RESULTS: The AI significantly reduced the measurement time compared to the manual method (159 ± 66 vs. 325 ± 94 s, p < 0.01). In the report creation step, AI was also significantly faster compared to the manual method (71 ± 39 vs. 429 ± 128 s, p < 0.01). The incorporation of AI into echocardiographic analysis led to a 70% reduction in measurement and report creation time compared to manual methods. In cases with fair or poor image quality, AI required more corrections and extended measurement time than in cases of good image quality. Report creation time was longer in cases with increased report complexity due to human confirmation of AI-generated findings.

CONCLUSIONS: This fully automated software has the potential to serve as an efficient tool for echocardiographic analysis, offering results that enhance clinical workflow by providing rapid, zero-click reports, thereby adding significant value.

PMID:38308797 | DOI:10.1007/s12574-023-00636-6

Categories: Literature Watch

Revolutionizing Synthetic Antibody Design: Harnessing Artificial Intelligence and Deep Sequencing Big Data for Unprecedented Advances

Sat, 2024-02-03 06:00

Mol Biotechnol. 2024 Feb 3. doi: 10.1007/s12033-024-01064-2. Online ahead of print.

ABSTRACT

Synthetic antibodies (Abs) represent a category of engineered proteins meticulously crafted to replicate the functions of their natural counterparts. Such Abs are generated in vitro, enabling advanced molecular alterations associated with antigen recognition, paratope site engineering, and biochemical refinements. In a parallel realm, deep sequencing has brought about a paradigm shift in molecular biology. It facilitates the prompt and cost-effective high-throughput sequencing of DNA and RNA molecules, enabling the comprehensive big data analysis of Ab transcriptomes, including specific regions of interest. Significantly, the integration of artificial intelligence (AI), based on machine- and deep-learning approaches, has fundamentally transformed our capacity to discern patterns hidden within deep sequencing big data, including distinctive Ab features and protein folding free energy landscapes. Ultimately, current AI advances can generate approximations of the most stable Ab structural configurations, enabling the prediction of de novo synthetic Abs. As a result, this manuscript comprehensively examines the latest and relevant literature concerning the intersection of deep sequencing big data and AI methodologies for the design and development of synthetic Abs. Together, these advancements have accelerated the exploration of antibody repertoires, contributing to the refinement of synthetic Ab engineering and optimizations, and facilitating advancements in the lead identification process.

PMID:38308755 | DOI:10.1007/s12033-024-01064-2

Categories: Literature Watch

GIFNet: an effective global infection feature network for automatic COVID-19 lung lesions segmentation

Sat, 2024-02-03 06:00

Med Biol Eng Comput. 2024 Feb 3. doi: 10.1007/s11517-024-03024-z. Online ahead of print.

ABSTRACT

The COronaVIrus Disease 2019 (COVID-19) pandemic, caused by the SARS-CoV-2 virus, has spread worldwide since late 2019, bringing about an existential health catastrophe. Automatic segmentation of infected lungs from COVID-19 X-ray and computed tomography (CT) images helps to provide a quantitative basis for treatment and diagnosis. Multi-class information about the infected lung is often obtained from the patient's CT dataset; however, the main challenges are the extensive range of infection features and the lack of contrast between infected and normal areas. To resolve these issues, a novel Global Infection Feature Network (GIFNet)-based Unet with ResNet50 model is proposed for segmenting the locations of COVID-19 lung infections. The Unet layers extract features from the input images and select the region of interest (ROI) using the ResNet50 backbone, which also speeds up training. Moreover, integrating a pooling layer into the atrous spatial pyramid pooling (ASPP) mechanism in the bottleneck improves feature selection and handles scale variation during training. Furthermore, a partial differential equation (PDE) approach is used to enhance image quality and intensity values at ROI boundary edges in the COVID-19 images. The proposed scheme has been validated on two datasets, the SARS-CoV-2 CT scan and COVIDx-19 datasets, for infected lung segmentation (ILS). The experimental findings were analyzed using various evaluation metrics, including accuracy (ACC), area under the curve (AUC), recall (REC), specificity (SPE), dice similarity coefficient (DSC), mean absolute error (MAE), precision (PRE), and mean squared error (MSE), to ensure rigorous validation. The results demonstrate the superior performance of the proposed system compared with state-of-the-art (SOTA) segmentation models on both X-ray and CT datasets.
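
For reference, the sketch below shows a generic atrous spatial pyramid pooling (ASPP) block of the kind cited above, with parallel dilated convolutions plus global pooling to handle scale variation; channel counts and dilation rates are assumptions, not the GIFNet configuration.

```python
# Generic ASPP block: parallel dilated convolutions + pooled global context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.pool(x), size=x.shape[-2:], mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

x = torch.randn(1, 256, 16, 16)          # bottleneck feature map
print(ASPP(256, 64)(x).shape)            # torch.Size([1, 64, 16, 16])
```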

PMID:38308670 | DOI:10.1007/s11517-024-03024-z

Categories: Literature Watch

Diagnostic machine learning applications on clinical populations using functional near infrared spectroscopy: a review

Sat, 2024-02-03 06:00

Rev Neurosci. 2024 Feb 5. doi: 10.1515/revneuro-2023-0117. Online ahead of print.

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) and its interaction with machine learning (ML) is a popular research topic for the diagnostic classification of clinical disorders due to the lack of robust and objective biomarkers. This review provides an overview of research on psychiatric diseases using fNIRS and ML. An article search was carried out and 45 studies were evaluated, considering their sample sizes, features, ML methodology, and reported accuracy. To the best of our knowledge, this is the first review to report diagnostic ML applications using fNIRS. We found an increasing trend toward ML applications in fNIRS-based biomarker research since 2010. Schizophrenia (n = 12), attention deficit and hyperactivity disorder (n = 7), and autism spectrum disorder (n = 6) are the most studied populations. There is a significant negative correlation between sample size (>21) and accuracy values. Support vector machines (SVM) and deep learning (DL) were the most popular classifier approaches (SVM: n = 20; DL: n = 10), and eight of these studies recruited more than 100 participants for classification. Features based on concentration changes in oxy-hemoglobin (ΔHbO) were used more often than those based on deoxy-hemoglobin (ΔHb), and the most popular ΔHbO-based features were mean ΔHbO (n = 11) and ΔHbO-based functional connections (n = 11). Using ML on fNIRS data might be a promising approach to reveal specific biomarkers for diagnostic classification.
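
To make the most common setup reported above concrete, the sketch below trains an SVM on channel-wise mean ΔHbO features with cross-validated accuracy; the data are synthetic placeholders and the feature/channel counts are assumptions, not values from any reviewed study.

```python
# SVM classification of synthetic "mean ΔHbO per channel" features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_channels = 60, 32
X = rng.normal(size=(n_subjects, n_channels))      # mean ΔHbO per fNIRS channel
y = rng.integers(0, 2, n_subjects)                 # patient vs. control labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())     # chance-level on random data
```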

PMID:38308531 | DOI:10.1515/revneuro-2023-0117

Categories: Literature Watch

Deep Learning Approach for Automated Segmentation of Myocardium Using Bone Scintigraphy SPECT/CT in Patients with Suspected Cardiac Amyloidosis

Fri, 2024-02-02 06:00

J Nucl Cardiol. 2024 Jan 31:101809. doi: 10.1016/j.nuclcard.2024.101809. Online ahead of print.

ABSTRACT

PURPOSE: We employed deep learning to automatically detect myocardial bone-seeking uptake, as a marker of transthyretin cardiac amyloid cardiomyopathy (ATTR-CM) in patients undergoing 99mTc-pyrophosphate (PYP) or hydroxydiphosphonate (HDP) single-photon emission computed tomography (SPECT)/computed tomography (CT).

METHODS: We identified a primary cohort of 77 subjects at Brigham and Women's Hospital and a validation cohort of 93 consecutive patients imaged at the University of Pennsylvania who underwent SPECT/CT with PYP and HDP, respectively, for evaluation of ATTR-CM. Global heart regions of interest (ROIs) were traced on CT axial slices from the apex of the ventricle to the carina. Myocardial images were visually scored as grade 0 (no uptake), 1 (uptake<ribs), 2 (uptake=ribs) and 3 (uptake>ribs). A 2D U-net architecture was used to develop whole-heart segmentations for CT scans. Uptake was determined by calculating a heart-to-blood pool (HBP) ratio between the maximal counts value of the total heart region and that of the most superior ROI.

RESULTS: Deep learning and ground truth segmentations were comparable (p=0.63). A total of 42 (55%) patients had abnormal myocardial uptake on visual assessment. The automatically quantified mean HBP ratio in the primary cohort was 3.1±1.4 versus 1.4±0.2 (p<0.01) for patients with positive and negative cardiac uptake, respectively. The model had 100% accuracy on the primary cohort and 98% on the validation cohort.

CONCLUSION: We have developed a highly accurate diagnostic tool for automatically segmenting and identifying myocardial uptake suggestive of ATTR-CM.
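
A hedged sketch of the heart-to-blood-pool (HBP) ratio described in the methods follows: maximal counts inside the whole-heart segmentation divided by maximal counts in the most superior (blood-pool) ROI; the masks, data, and the positivity cutoff shown are illustrative assumptions only.

```python
# HBP ratio from SPECT counts, given heart and superior (blood-pool) ROI masks.
import numpy as np

def heart_to_blood_pool_ratio(spect_counts, heart_mask, superior_roi_mask):
    return spect_counts[heart_mask].max() / spect_counts[superior_roi_mask].max()

counts = np.random.poisson(50, size=(64, 128, 128)).astype(float)
heart = np.zeros(counts.shape, bool); heart[20:40, 40:90, 40:90] = True
blood_pool = np.zeros(counts.shape, bool); blood_pool[58:62, 50:80, 50:80] = True
counts[heart] *= 3.0                                # simulated myocardial uptake
ratio = heart_to_blood_pool_ratio(counts, heart, blood_pool)
print(ratio, "suggestive of uptake" if ratio > 1.5 else "no significant uptake")
```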

PMID:38307160 | DOI:10.1016/j.nuclcard.2024.101809

Categories: Literature Watch
