Deep learning

Peak amplitude of the normalized power spectrum of the electromyogram of the uterus in the low frequency band is an effective predictor of premature birth

Thu, 2024-09-12 06:00

PLoS One. 2024 Sep 12;19(9):e0308797. doi: 10.1371/journal.pone.0308797. eCollection 2024.

ABSTRACT

The current trends in the development of methods for non-invasive prediction of premature birth based on the electromyogram of the uterus, i.e., the electrohysterogram (EHG), suggest an ever-increasing use of a large number of features, complex models, and deep learning approaches. These "black-box" approaches rarely provide insights into the underlying physiological mechanisms and are not easily explainable, which may prevent their use in clinical practice. Alternatively, simple methods using meaningful features, preferably a single feature (biomarker), are highly desirable for assessing the risk of premature birth. To identify suitable biomarker candidates, we performed feature selection using the stabilized sequential-forward feature-selection method employing learning and validation sets, multiple standard classifiers, and multiple sets of the most widely used features derived from EHG signals. The most promising single feature for classifying between premature EHG records and EHG records of all other term delivery modes, evaluated on the test sets, appears to be the Peak Amplitude of the normalized power spectrum (PA) of the EHG signal in the low frequency band (0.125-0.575 Hz), which closely matches the known Fast Wave Low (FWL) frequency band. For classification of EHG records of the publicly available TPEHG DB, TPEHGT DS, and ICEHG DS databases, using the Partition-Synthesis evaluation technique, the proposed single feature, PA, achieved a Classification Accuracy (CA) of 76.5% (AUC of 0.81). In combination with the second most promising feature, the Median Frequency (MF) of the power spectrum in the frequency band above 1.0 Hz, which relates to the maternal resting heart rate, CA increased to 78.0% (AUC of 0.86). The method developed in this study for predicting premature birth outperforms single-feature and many multi-feature methods based on the EHG, as well as existing non-invasive chemical and molecular biomarkers. The developed method is fully automatic, simple, and the two proposed features are explainable.
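To make the PA feature concrete, here is a minimal Python sketch (not the authors' code): a naive DFT power spectrum is normalized to unit total power and its peak is taken inside the 0.125-0.575 Hz band quoted in the abstract.

```python
import math

def normalized_power_spectrum(x, fs):
    """Naive DFT power spectrum over 0..Nyquist, normalized to unit total power."""
    n = len(x)
    freqs, power = [], []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        power.append(re * re + im * im)
    total = sum(power) or 1.0
    return freqs, [p / total for p in power]

def peak_amplitude(x, fs, lo=0.125, hi=0.575):
    """PA feature: peak of the normalized power spectrum inside [lo, hi] Hz."""
    freqs, spec = normalized_power_spectrum(x, fs)
    in_band = [p for f, p in zip(freqs, spec) if lo <= f <= hi]
    return max(in_band) if in_band else 0.0
```

A pure 0.25 Hz tone sampled at 4 Hz gives a PA close to 1.0, since essentially all spectral power falls inside the FWL band; in practice a windowed estimator such as Welch's method would replace the naive DFT.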

PMID:39264880 | DOI:10.1371/journal.pone.0308797

Categories: Literature Watch

Ultrasonic rough crack characterisation using time of flight diffraction with self-attention neural network

Thu, 2024-09-12 06:00

IEEE Trans Ultrason Ferroelectr Freq Control. 2024 Sep 12;PP. doi: 10.1109/TUFFC.2024.3459619. Online ahead of print.

ABSTRACT

Time-of-flight diffraction (ToFD) is a widely used ultrasonic non-destructive evaluation method for locating and characterising rough defects, with high accuracy in sizing smooth cracks. However, naturally grown defects often have irregular surfaces, which complicates the received tip diffraction waves and affects the accuracy of defect characterisation. This paper proposes a self-attention deep learning method to interpret ToFD A-scan signals for sizing rough defects. The high-fidelity finite-element (FE) simulation package Pogo is used to generate synthetic datasets for training and testing the deep learning model. In addition, transfer learning (TL) is used to fine-tune the model trained on Gaussian rough defects to boost its performance in characterising realistic thermal-fatigue rough defects. An ultrasonic experiment using 2D rough crack samples made by additive manufacturing is conducted to validate the performance of the developed deep learning model. To demonstrate the accuracy of the proposed method, the crack characterisation results are compared with those obtained using the conventional Hilbert peak-to-peak sizing method. The results indicate that the deep learning method achieves significantly reduced uncertainty and error in rough defect characterisation compared with traditional sizing approaches used in ToFD measurements.
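The conventional sizing baseline mentioned above rests on simple ToFD geometry: once the two tip echoes are picked (e.g., as peaks of the Hilbert envelope), the tip depths follow from their times of flight. A hedged sketch of that geometric step, using the standard textbook formula rather than anything from the paper:

```python
import math

def tofd_depth(tof_us, velocity_mm_per_us, half_pcs_mm):
    """Depth of a diffracting tip below the surface from its time of flight.

    Assumes the standard ToFD geometry: transmitter and receiver separated
    by 2 * half_pcs_mm, with the diffractor midway between them.
    """
    path_half = velocity_mm_per_us * tof_us / 2.0
    return math.sqrt(max(path_half ** 2 - half_pcs_mm ** 2, 0.0))

def crack_height(tof_top_us, tof_bottom_us, velocity_mm_per_us, half_pcs_mm):
    """Crack height as the depth difference between the two tip echoes."""
    top = tofd_depth(tof_top_us, velocity_mm_per_us, half_pcs_mm)
    bottom = tofd_depth(tof_bottom_us, velocity_mm_per_us, half_pcs_mm)
    return bottom - top
```

With a longitudinal velocity of 5.9 mm/µs and a 40 mm probe separation, the round trip recovers the assumed tip depths exactly; rough crack surfaces distort the tip waves, which is precisely what motivates the learned approach.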

PMID:39264783 | DOI:10.1109/TUFFC.2024.3459619

Categories: Literature Watch

Investigating the Use of Traveltime and Reflection Tomography for Deep Learning-Based Sound-Speed Estimation in Ultrasound Computed Tomography

Thu, 2024-09-12 06:00

IEEE Trans Ultrason Ferroelectr Freq Control. 2024 Sep 12;PP. doi: 10.1109/TUFFC.2024.3459391. Online ahead of print.

ABSTRACT

Ultrasound computed tomography (USCT) quantifies acoustic tissue properties such as the speed-of-sound (SOS). Although full-waveform inversion (FWI) is an effective method for accurate SOS reconstruction, it can be computationally challenging for large-scale problems. Deep learning-based image-to-image learned reconstruction (IILR) methods can offer computationally efficient alternatives. This study investigates the impact of the chosen input modalities on IILR methods for high-resolution SOS reconstruction in USCT. The selected modalities are traveltime tomography (TT) and reflection tomography (RT), which produce a low-resolution SOS map and a reflectivity map, respectively. These modalities have been chosen for their lower computational cost relative to FWI and their capacity to provide complementary information: TT offers a direct SOS measure, while RT reveals tissue boundary information. Systematic analyses were facilitated by employing a virtual USCT imaging system with anatomically realistic numerical breast phantoms. Within this testbed, a supervised convolutional neural network (CNN) was trained to map dual-channel (TT and RT images) inputs to a high-resolution SOS map. Single-input CNNs were trained separately using inputs from each modality alone (TT or RT) for comparison. The accuracy of the methods was systematically assessed using normalized root mean squared error (NRMSE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR). For tumor detection performance, receiver operating characteristic analysis was employed. The dual-channel IILR method was also tested on clinical human breast data. The ensemble averages of NRMSE, SSIM, and PSNR on this clinical dataset were 0.2355, 0.8845, and 28.33 dB, respectively.
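Two of the reported error metrics are simple to state; as an illustration (one common definition of each, not the authors' exact implementation), for flattened image arrays:

```python
import math

def nrmse(pred, ref):
    """Normalized RMSE: L2 error divided by the L2 norm of the reference."""
    err = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)))
    norm = math.sqrt(sum(r ** 2 for r in ref))
    return err / norm

def psnr(pred, ref, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)
    return 10.0 * math.log10(data_range ** 2 / mse)
```

Note that NRMSE conventions vary (some normalize by the data range or mean instead of the reference norm), so reported values are only comparable under a stated convention; SSIM additionally involves local window statistics and is omitted here.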

PMID:39264782 | DOI:10.1109/TUFFC.2024.3459391

Categories: Literature Watch

Convex Hull Prediction for Adaptive Video Streaming by Recurrent Learning

Thu, 2024-09-12 06:00

IEEE Trans Image Process. 2024 Sep 12;PP. doi: 10.1109/TIP.2024.3455989. Online ahead of print.

ABSTRACT

Adaptive video streaming relies on the construction of efficient bitrate ladders to deliver the best possible visual quality to viewers under bandwidth constraints. The traditional method of content-dependent bitrate ladder selection requires a video shot to be pre-encoded with multiple encoding parameters to find the optimal operating points given by the convex hull of the resulting rate-quality curves. However, this pre-encoding step is equivalent to an exhaustive search over the space of possible encoding parameters, which incurs significant overhead in both computation and time. To reduce this overhead, we propose a deep learning-based method of content-aware convex hull prediction. We employ a recurrent convolutional network (RCN) to implicitly analyze the spatiotemporal complexity of video shots in order to predict their convex hulls. A two-step transfer learning scheme is adopted to train our proposed RCN-Hull model, which ensures sufficient content diversity to analyze scene complexity, while also making it possible to capture the scene statistics of pristine source videos. The experimental results reveal that the proposed model yields better approximations of the optimal convex hulls, and offers competitive time savings as compared to existing approaches. On average, the pre-encoding time was reduced by 53.8% by our method, while the average Bjontegaard delta bitrate (BD-rate) of the predicted convex hulls against ground truth was 0.26%, and the mean absolute deviation of the BD-rate distribution was 0.57%.
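The BD-rate figures above compare two rate-quality curves. A simplified, piecewise-linear sketch of the idea follows; note this is a hedged illustration only, since the standard Bjontegaard metric fits cubic polynomials to log-rate versus quality rather than interpolating linearly.

```python
import math

def bd_rate_linear(anchor, test):
    """Simplified BD-rate: average log-bitrate gap between two rate-quality
    curves over their overlapping quality range, via piecewise-linear
    interpolation. Each curve is a list of (bitrate, quality) points sorted
    by ascending quality. Returns the percent bitrate change of `test`
    relative to `anchor` (negative means bitrate savings)."""
    def log_rate_at(curve, q):
        for (r0, q0), (r1, q1) in zip(curve, curve[1:]):
            if q0 <= q <= q1:
                w = (q - q0) / (q1 - q0) if q1 != q0 else 0.0
                return (1 - w) * math.log(r0) + w * math.log(r1)
        raise ValueError("quality outside curve range")

    lo = max(anchor[0][1], test[0][1])
    hi = min(anchor[-1][1], test[-1][1])
    n = 100  # trapezoidal integration grid
    qs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    diffs = [log_rate_at(test, q) - log_rate_at(anchor, q) for q in qs]
    avg = (sum(diffs) - 0.5 * (diffs[0] + diffs[-1])) / n
    return (math.exp(avg) - 1.0) * 100.0
```

Cutting every bitrate by 10% at equal quality yields a BD-rate of about -10%, matching intuition.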

PMID:39264770 | DOI:10.1109/TIP.2024.3455989

Categories: Literature Watch

PROFiT-Net: Property-Networking Deep Learning Model for Materials

Thu, 2024-09-12 06:00

J Am Chem Soc. 2024 Sep 12. doi: 10.1021/jacs.4c05159. Online ahead of print.

ABSTRACT

There is a growing need to develop artificial intelligence technologies capable of accurately predicting the properties of materials. This necessitates the expansion of material databases beyond the scope of density functional theory, as well as the development of deep learning (DL) models that can be effectively trained with a limited amount of high-fidelity data. We developed a DL model utilizing a crystal structure representation based on the orbital field matrix (OFM), modified to incorporate information on elemental properties and valence electron configurations. This model, which effectively captures the interrelation between the elemental properties in the crystal, was coined the PRoperty-networking Orbital Field maTrix-convolutional neural Network (PROFiT-Net). Remarkably, PROFiT-Net demonstrated high accuracy in predicting the dielectric constant, experimental band gaps, and formation enthalpies compared with other leading DL models. Moreover, our model accurately identifies physical patterns, such as avoiding the prediction of unphysical negative band gaps and exhibiting a Penn-model-like trend, while maintaining scalability. We envision that PROFiT-Net will accelerate the development of functional materials.

PMID:39264687 | DOI:10.1021/jacs.4c05159

Categories: Literature Watch

Prediction of Axial Length From Macular Optical Coherence Tomography Using Deep Learning Model

Thu, 2024-09-12 06:00

Transl Vis Sci Technol. 2024 Sep 3;13(9):14. doi: 10.1167/tvst.13.9.14.

ABSTRACT

PURPOSE: The purpose of this study was to develop a deep learning model for predicting the axial length (AL) of eyes using optical coherence tomography (OCT) images.

METHODS: We retrospectively included patients with AL measurements and OCT images taken within 3 months of each other. We utilized 5-fold cross-validation with the ResNet-152 architecture, incorporating horizontal OCT images, vertical OCT images, and dual-input images. The mean absolute error (MAE), R-squared (R2), and the percentages of eyes within error ranges of ±1.0, ±2.0, and ±3.0 mm were calculated.

RESULTS: A total of 9064 eyes of 5349 patients (18,128 images in total) were included. The average AL of the eyes was 24.35 ± 2.03 mm (range = 20.53-37.07 mm). Utilizing horizontal and vertical OCT images as dual inputs, the deep learning models predicted AL with an MAE of 0.592 mm and an R2 of 0.847 in the internal test set (1824 eyes of 1070 patients). In the external test set (171 eyes of 123 patients), the deep learning models predicted AL with an MAE of 0.556 mm and an R2 of 0.663. Regarding error margins of ±1.0, ±2.0, and ±3.0 mm, the dual-input models showed accuracies of 83.50%, 98.14%, and 99.45%, respectively, in the internal test set, and 85.38%, 99.42%, and 100.00%, respectively, in the external test set.
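The metrics in this RESULTS section follow standard definitions; a small illustrative sketch with synthetic numbers (not study data):

```python
def mae(pred, true):
    """Mean absolute error between predicted and measured axial lengths."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def r_squared(pred, true):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for p, t in zip(pred, true))
    ss_tot = sum((t - mean_t) ** 2 for t in true)
    return 1.0 - ss_res / ss_tot

def within_mm(pred, true, margin):
    """Fraction of eyes whose predicted AL falls within +/- margin mm."""
    hits = sum(1 for p, t in zip(pred, true) if abs(p - t) <= margin)
    return hits / len(true)
```

For example, predictions [24, 25, 26] mm against measurements [24, 25, 28] mm give an MAE of 0.67 mm and a within-±1.0 mm accuracy of 2/3.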

CONCLUSIONS: A deep learning-based model accurately predicts AL from OCT images. The dual-input model showed the best performance, demonstrating the potential of macular OCT images in AL prediction.

TRANSLATIONAL RELEVANCE: The study provides new insights into the relationship between retinal and choroidal structures and AL elongation using artificial intelligence models.

PMID:39264604 | DOI:10.1167/tvst.13.9.14

Categories: Literature Watch

Lite-YOLOv8: a more lightweight algorithm for Tubercle Bacilli detection

Thu, 2024-09-12 06:00

Med Biol Eng Comput. 2024 Sep 12. doi: 10.1007/s11517-024-03187-9. Online ahead of print.

ABSTRACT

Deep learning is a transformative force in the medical field and has made significant progress as a pivotal alternative to conventional manual testing methods. Detection of Tubercle Bacilli in sputum samples is hampered by complex backgrounds and tiny, numerous objects; prolonged human observation not only causes eye fatigue but also greatly increases the error rate of subjective judgement. To solve these problems, we optimize the YOLOv8s model and propose a new detection algorithm, Lite-YOLOv8. Firstly, the Lite-C2f module is used to maintain accuracy while significantly reducing the number of parameters. Secondly, a lightweight down-sampling module is introduced to reduce the loss of common feature information. Finally, the NWD loss is utilized to mitigate the impact of small-object positional bias on the IoU. On the public Tubercle Bacilli datasets, a mean average precision of 86.3% was achieved, with improvements of 2.2%, 1.5%, and 2.8% over the baseline model (YOLOv8s) in terms of mAP0.5, precision, and recall, respectively. In addition, the parameter count was reduced from 11.2 M to 5.1 M, and the number of GFLOPs from 28.8 to 13.8. Our model is not only more lightweight but also more accurate, so it can be easily deployed on resource-constrained medical devices to provide greater convenience to doctors.
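The NWD loss mentioned above builds on the Normalized Wasserstein Distance, which models each box as a 2D Gaussian so that tiny objects with small positional offsets are penalized smoothly rather than abruptly, as IoU does. A sketch of one published formulation (the constant `c` is dataset-dependent and assumed here):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance between two boxes (cx, cy, w, h).

    Each box is modeled as a 2D Gaussian centered at (cx, cy) with
    covariance diag((w/2)^2, (h/2)^2); c is a dataset-dependent scale.
    Returns a similarity in (0, 1], with 1 for identical boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2_sq = ((ax - bx) ** 2 + (ay - by) ** 2
             + ((aw - bw) / 2.0) ** 2 + ((ah - bh) / 2.0) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)
```

Unlike IoU, the similarity stays informative even when two small boxes do not overlap at all, which is what makes it attractive for bacilli-sized objects.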

PMID:39264568 | DOI:10.1007/s11517-024-03187-9

Categories: Literature Watch

A deep learning phase-based solution in 2D echocardiography motion estimation

Thu, 2024-09-12 06:00

Phys Eng Sci Med. 2024 Sep 12. doi: 10.1007/s13246-024-01481-2. Online ahead of print.

ABSTRACT

In this paper, we propose a new deep learning method based on the phases of the Quaternion Wavelet Transform (QWT) of 2D echocardiographic sequences to estimate myocardial motion and strain. The proposed method uses the intensity and phases obtained from the QWT as inputs to a customized PWC-Net structure, a high-performance deep network for motion estimation. We trained and tested the proposed method using two realistic simulated B-mode echocardiographic sequences and evaluated it in terms of both geometrical and clinical indices. Our method achieved an average endpoint error of 0.06 mm per frame and 0.59 mm between End Diastole and End Systole on a simulated dataset. Correlation analysis between the ground truth and the computed strain shows a correlation coefficient of 0.89, much better than the most effective state-of-the-art methods in 2D echocardiography motion estimation. The results show the superiority of the proposed method in both geometrical and clinical indices.

PMID:39264487 | DOI:10.1007/s13246-024-01481-2

Categories: Literature Watch

Skeleton-guided 3D convolutional neural network for tubular structure segmentation

Thu, 2024-09-12 06:00

Int J Comput Assist Radiol Surg. 2024 Sep 12. doi: 10.1007/s11548-024-03215-x. Online ahead of print.

ABSTRACT

PURPOSE: Accurate segmentation of tubular structures is crucial for clinical diagnosis and treatment but is challenging due to their complex branching structures and volume imbalance. The purpose of this study is to propose a 3D deep learning network that incorporates skeleton information to enhance segmentation accuracy in these tubular structures.

METHODS: Our approach employs a 3D convolutional network to extract 3D tubular structures from medical images such as CT volumetric images. We introduce a skeleton-guided module that operates on extracted features to capture and preserve the skeleton information in the segmentation results. Additionally, to effectively train our deep model in leveraging skeleton information, we propose a sigmoid-adaptive Tversky loss function which is specifically designed for skeleton segmentation.

RESULTS: We conducted experiments on two distinct 3D medical image datasets. The first dataset consisted of 90 cases of chest CT volumetric images, while the second dataset comprised 35 cases of abdominal CT volumetric images. Comparative analysis with previous segmentation approaches demonstrated the superior performance of our method. For the airway segmentation task, our method achieved an average tree length rate of 93.0%, a branch detection rate of 91.5%, and a precision rate of 90.0%. In the case of abdominal artery segmentation, our method attained an average precision rate of 97.7%, a recall rate of 91.7%, and an F-measure of 94.6%.

CONCLUSION: We present a skeleton-guided 3D convolutional network to segment tubular structures from 3D medical images. Our skeleton-guided 3D convolutional network could effectively segment small tubular structures, outperforming previous methods.

PMID:39264412 | DOI:10.1007/s11548-024-03215-x

Categories: Literature Watch

Comparison of Vision Transformers and Convolutional Neural Networks in Medical Image Analysis: A Systematic Review

Thu, 2024-09-12 06:00

J Med Syst. 2024 Sep 12;48(1):84. doi: 10.1007/s10916-024-02105-8.

ABSTRACT

In the rapidly evolving field of medical image analysis utilizing artificial intelligence (AI), the selection of appropriate computational models is critical for accurate diagnosis and patient care. This literature review provides a comprehensive comparison of vision transformers (ViTs) and convolutional neural networks (CNNs), the two leading deep learning techniques in medical imaging. The survey was conducted systematically, with particular attention given to the robustness, computational efficiency, scalability, and accuracy of these models in handling complex medical datasets. The review incorporates findings from 36 studies and indicates a collective trend that transformer-based models, particularly ViTs, exhibit significant potential in diverse medical imaging tasks, showcasing superior performance when contrasted with conventional CNN models. Additionally, it is evident that pre-training is important for transformer applications. We expect this work to help researchers and practitioners select the most appropriate model for specific medical image analysis tasks, accounting for the current state of the art and future trends in the field.

PMID:39264388 | DOI:10.1007/s10916-024-02105-8

Categories: Literature Watch

Hepatocellular Carcinoma Immune Microenvironment Analysis: A Comprehensive Assessment with Computational and Classical Pathology

Thu, 2024-09-12 06:00

Clin Cancer Res. 2024 Sep 12. doi: 10.1158/1078-0432.CCR-24-0960. Online ahead of print.

ABSTRACT

PURPOSE: The spatial variability and clinical relevance of the tumour immune microenvironment (TIME) are still poorly understood for hepatocellular carcinoma (HCC). Here we aim to develop a deep learning (DL)-based image analysis model for the spatial analysis of immune cell biomarkers, and microscopically evaluate the distribution of immune infiltration.

EXPERIMENTAL DESIGN: Ninety-two HCC surgical liver resections and 51 matched needle biopsies were histologically classified according to their immunophenotypes: inflamed, immune-excluded, and immune-desert. To characterise the TIME on immunohistochemistry (IHC)-stained slides, we designed a multi-stage DL algorithm, IHC-TIME, to automatically detect immune cells and their localisation in the TIME within tumour-stromal and centre-border segments.

RESULTS: Two models were trained to detect and localise the immune cells on IHC-stained slides. The framework's component models, i.e., immune cell detection and tumour-stroma segmentation, reached 98% and 91% accuracy, respectively. Patients with inflamed tumours showed better recurrence-free survival than those with immune-excluded or immune-desert tumours. Needle biopsies were 75% accurate in representing the immunophenotypes of the main tumour. Finally, we developed an algorithm that defines immunophenotypes automatically based on the IHC-TIME analysis, achieving an accuracy of 80%.

CONCLUSIONS: Our DL-based tool can accurately analyse and quantify immune cells on IHC-stained slides of HCC. The microscopical classification of the TIME can stratify HCCs according to the patient prognosis. Needle biopsies can provide valuable insights for TIME-related prognostic prediction, albeit with specific constraints. The computational pathology tool provides a new way to study the HCC TIME.

PMID:39264292 | DOI:10.1158/1078-0432.CCR-24-0960

Categories: Literature Watch

Machine Learning Approaches for Automated Diagnosis of Cardiovascular Diseases: A Review of Electrocardiogram Data Applications

Thu, 2024-09-12 06:00

Cardiol Rev. 2024 Sep 12. doi: 10.1097/CRD.0000000000000764. Online ahead of print.

ABSTRACT

Cardiovascular diseases (CVDs) have been identified as the leading cause of mortality worldwide. The electrocardiogram (ECG) is a fundamental tool used for the diagnosis and detection of these diseases, and new technological tools can help enhance the effectiveness of ECGs. Machine learning (ML) is widely acknowledged as a highly effective approach in the realm of computer-aided diagnostics. This article presents a review of the effectiveness of ML and deep-learning algorithms in diagnosing, identifying, and classifying CVDs using ECG data. The review identified relevant studies published between 2021 and 2023 in the 5 major databases: PubMed, Web of Science (WoS), Scopus, Springer, and IEEE Xplore; a total of 30 were chosen for comprehensive quantitative and qualitative analysis. The study demonstrated that different datasets with data related to CVDs are available online, and various ML techniques are employed for classification. Based on our investigation, deep learning-based neural network algorithms, such as convolutional neural networks and deep neural networks, have demonstrated superior performance in the detection of entire record data. Furthermore, deep learning showcases its efficacy even when confronted with a scarcity of data. ML approaches utilizing ECG data exhibit notable proficiency in diagnosis and hence hold the potential to mitigate disease-related consequences at advanced stages.

PMID:39264208 | DOI:10.1097/CRD.0000000000000764

Categories: Literature Watch

Monitoring the leaf damage by the rice leafroller with deep learning and ultra-light UAV

Thu, 2024-09-12 06:00

Pest Manag Sci. 2024 Sep 12. doi: 10.1002/ps.8401. Online ahead of print.

ABSTRACT

BACKGROUND: The rice leafroller is a serious threat to rice production, and monitoring the damage it causes is essential for effective pest management. Owing to the difficulty of collecting images of decent quality and the lack of high-performing identification methods, studies offering fast and accurate identification of rice leafroller damage are rare. In this study, we employed an ultra-lightweight unmanned aerial vehicle (UAV) to eliminate the influence of the downwash flow field and obtain very high-resolution images of the areas damaged by the rice leafroller. We used deep learning and the segmentation model Attention U-Net to recognize the damaged area, and we further present a method to count the damaged patches from the segmented area.

RESULTS: The results show that Attention U-Net achieves high performance, with an F1 score of 0.908. Further analysis indicates that the deep learning model performs better than the traditional image classification method, Random Forest (RF), which produces many false alarms around leaf edges and is sensitive to changes in brightness. Validation based on the ground survey indicates that the UAV and deep learning-based method achieves reasonable accuracy in identifying damaged patches, with a coefficient of determination of 0.879. The spatial distribution of the damage is uneven, and the UAV-based image collection method provides a dense and accurate way to recognize the damaged area.

CONCLUSION: Overall, this study presents a vision for efficiently monitoring the damage caused by the rice leafroller with an ultra-light UAV, and it would contribute to effectively controlling and managing this hazardous pest. © 2024 Society of Chemical Industry.

PMID:39264132 | DOI:10.1002/ps.8401

Categories: Literature Watch

An automatic classification method of testicular histopathology based on SC-YOLO framework

Thu, 2024-09-12 06:00

Biotechniques. 2024 Sep 12:1-10. doi: 10.1080/07366205.2024.2393544. Online ahead of print.

ABSTRACT

The pathological diagnosis and treatment of azoospermia depend on precise identification of spermatogenic cells. Traditional methods are time-consuming and highly subjective due to the complexity of the Johnsen score, posing challenges for accurately diagnosing azoospermia. Here, we introduce a novel SC-YOLO framework for automating the classification of spermatogenic cells. It integrates the S3Ghost, CoordAtt, and DCNv2 modules, effectively capturing the texture and shape features of spermatogenic cells while reducing model parameters. Furthermore, we propose a simplified Johnsen score criterion to expedite the diagnostic process. Our SC-YOLO framework demonstrates the efficiency and accuracy of deep learning for spermatogenic cell recognition. Future research will focus on optimizing the model's performance and exploring its potential for clinical applications.

PMID:39263950 | DOI:10.1080/07366205.2024.2393544

Categories: Literature Watch

AFM-YOLOv8s: An Accurate, Fast, and Highly Robust Model for Detection of Sporangia of <em>Plasmopara viticola</em> with Various Morphological Variants

Thu, 2024-09-12 06:00

Plant Phenomics. 2024 Sep 11;6:0246. doi: 10.34133/plantphenomics.0246. eCollection 2024.

ABSTRACT

Monitoring spores is crucial for predicting and preventing fungal- or oomycete-induced diseases such as grapevine downy mildew. However, manual spore or sporangium detection using microscopes is time-consuming and labor-intensive, often resulting in low accuracy and slow processing speed. Emerging deep learning models like YOLOv8 aim to detect objects rapidly and accurately but struggle with efficiency and accuracy when identifying various sporangium formations amidst complex backgrounds. To address these challenges, we developed an enhanced YOLOv8s, namely AFM-YOLOv8s, by introducing an Adaptive Cross Fusion module, a lightweight feature extraction module FasterCSP (Faster Cross-Stage Partial Module), and a novel loss function MPDIoU (Minimum Point Distance Intersection over Union). AFM-YOLOv8s replaces the C2f module with FasterCSP, a more efficient feature extraction module, to reduce model parameter size and overall depth. In addition, we developed and integrated an Adaptive Cross Fusion Feature Pyramid Network to enhance the fusion of multiscale features within the YOLOv8 architecture. Last, we utilized the MPDIoU loss function to improve AFM-YOLOv8s' ability to locate bounding boxes and learn object spatial localization. Experimental results demonstrated AFM-YOLOv8s' effectiveness, achieving 91.3% accuracy (mean average precision at 50% IoU) on our custom grapevine downy mildew sporangium dataset, a notable improvement of 2.7% over the original YOLOv8 algorithm. FasterCSP reduced model complexity and size, enhanced deployment versatility, and improved real-time detection; it was chosen over C2f for its easier integration despite a minor accuracy trade-off. Currently, the AFM-YOLOv8s model is running as a backend algorithm in an open web application, providing valuable technical support for downy mildew prevention and control efforts and for fungicide-resistance studies.

PMID:39263595 | PMC:PMC11387751 | DOI:10.34133/plantphenomics.0246

Categories: Literature Watch

Noninvasive Technologies for the Diagnosis of Squamous Cell Carcinoma: A Systematic Review and Meta-Analysis

Thu, 2024-09-12 06:00

JID Innov. 2024 Jul 20;4(6):100303. doi: 10.1016/j.xjidi.2024.100303. eCollection 2024 Nov.

ABSTRACT

Early cutaneous squamous cell carcinoma (cSCC) diagnosis is essential to initiate adequate targeted treatment. Noninvasive diagnostic technologies could overcome the need for multiple biopsies and reduce tumor recurrence. To assess the performance of noninvasive technologies for cSCC diagnostics, 947 relevant records were identified through a systematic literature search. Among the 15 studies selected for this systematic review, 7 were included in the meta-analysis, comprising 1144 patients, 224 cSCC lesions, and 1729 clinical diagnoses. Overall, the sensitivity values are 92% (95% confidence interval [CI] = 86.6-96.4%) for high-frequency ultrasound, 75% (95% CI = 65.7-86.2%) for optical coherence tomography, and 63% (95% CI = 51.3-69.1%) for reflectance confocal microscopy; the corresponding specificity values are 88% (95% CI = 82.7-92.5%), 95% (95% CI = 92.7-97.3%), and 96% (95% CI = 94.8-97.4%). Physician expertise is key to the high diagnostic performance of the investigated devices, as they provide additional tissue information that requires physician interpretation despite insufficient standardized diagnostic criteria. Furthermore, few deep learning studies were identified; thus, the integration of deep learning into the investigated devices is a promising field of investigation for cSCC diagnosis.
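The pooled sensitivity and specificity figures follow the usual definitions; a minimal sketch, with a Wald-type normal-approximation 95% CI shown as one simple (assumed) interval choice, since the meta-analytic CIs in the abstract are computed by pooling methods not specified here:

```python
import math

def sensitivity(tp, fn):
    """True-positive rate: fraction of diseased cases correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of healthy cases correctly ruled out."""
    return tn / (tn + fp)

def wald_ci_95(p, n):
    """Normal-approximation (Wald) 95% CI for a proportion p out of n."""
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return max(p - half, 0.0), min(p + half, 1.0)
```

For example, 90 detected out of 100 diseased cases gives a sensitivity of 0.9; Wilson or exact intervals are preferable to Wald at small n or extreme p.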

PMID:39263563 | PMC:PMC11388704 | DOI:10.1016/j.xjidi.2024.100303

Categories: Literature Watch

Deep learning-based end-to-end scan-type classification, pre-processing, and segmentation of clinical neuro-oncology studies

Thu, 2024-09-12 06:00

Proc SPIE Int Soc Opt Eng. 2023 Feb;12469:124690N. doi: 10.1117/12.2647656. Epub 2023 Apr 10.

ABSTRACT

Modern neuro-oncology workflows are driven by large collections of high-dimensional MRI data obtained using varying acquisition protocols. The concomitant heterogeneity of this data makes extensive manual curation and pre-processing imperative prior to algorithmic use. The limited efforts invested towards automating this curation and processing are fragmented, do not encompass the entire workflow, or still require significant manual intervention. In this work, we propose an artificial intelligence-driven solution for transforming multi-modal raw neuro-oncology MRI Digital Imaging and Communications in Medicine (DICOM) data into quantitative tumor measurements. Our end-to-end framework classifies MRI scans into different structural sequence types, preprocesses the data, and uses convolutional neural networks to segment tumor tissue subtypes. Moreover, it adopts an expert-in-the-loop approach, where segmentation results may be manually refined by radiologists. This framework was implemented as Docker containers (for command line usage and within the eXtensible Neuroimaging Archive Toolkit [XNAT]) and validated on a retrospective glioma dataset (n = 155) collected from the Washington University School of Medicine, comprising preoperative MRI scans from patients with histopathologically confirmed gliomas. Segmentation results were refined by a neuroradiologist, and performance was quantified using the Dice Similarity Coefficient to compare predicted and expert-refined tumor masks. The scan-type classifier yielded 99.71% accuracy across all sequence types. The segmentation model achieved a mean Dice score of 0.894 (±0.225) for whole tumor segmentation. The proposed framework can automate tumor segmentation and characterization, streamlining workflows in a clinical setting as well as expediting standardized curation of large-scale neuro-oncology datasets in a research setting.
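The Dice Similarity Coefficient used for validation can be illustrated in a few lines (flat binary masks; a sketch, not the study's pipeline):

```python
def dice(pred, truth):
    """Dice Similarity Coefficient between two flat binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```

A Dice of 1.0 means the predicted and expert-refined masks coincide exactly; the empty-mask convention (returning 1.0 here) varies between implementations and should be stated when reporting.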

PMID:39263425 | PMC:PMC11389857 | DOI:10.1117/12.2647656

Categories: Literature Watch

The improved integrated Exponential Smoothing based CNN-LSTM algorithm to forecast the day ahead electricity price

Thu, 2024-09-12 06:00

MethodsX. 2024 Aug 20;13:102923. doi: 10.1016/j.mex.2024.102923. eCollection 2024 Dec.

ABSTRACT

The deregulation of the electricity market has led to the development of the short-term electricity market, in which power generators and consumers can sell and purchase electricity on day-ahead terms. The market-clearing electricity price varies throughout the day with the increase in consumer bidding for electricity, so forecasting the price in the day-ahead market is significant for appropriate bidding. To predict the electricity price, a modified Exponential Smoothing-CNN-LSTM method is proposed, based on the time series method of Exponential Smoothing and the deep learning methods of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). The dataset used for assessment of the forecasting algorithms is collected from the day-ahead electricity market at the Indian Energy Exchange (IEX). The forecasting results of the Exponential Smoothing-CNN-LSTM method, with a Mean Absolute Error (MAE) of 0.11, Root Mean Squared Error (RMSE) of 0.17, and Mean Absolute Percentage Error (MAPE) of 1.53%, indicate improved performance. The proposed algorithm can also be used to forecast time series in other domains such as finance, retail, healthcare, and manufacturing.
•The method of Exponential Smoothing-CNN-LSTM is proposed for forecasting the day-ahead electricity price for accurate bidding by short-term electricity market participants.
•The forecasting results indicate better performance of the proposed method than the existing techniques of Exponential Smoothing, LSTM, and CNN-LSTM, owing to the ability of Exponential Smoothing to extract levels and seasonality and the ability of CNN-LSTM to model the complex spatial and temporal dependencies in the time series.
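The exponential smoothing component of the proposed hybrid can be illustrated in isolation (simple exponential smoothing only; the paper's full method couples smoothing with CNN-LSTM, which is not reproduced here):

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: level_t = alpha * x_t + (1 - alpha) * level_{t-1}.

    Returns the smoothed level series; alpha in (0, 1] controls how fast
    the level tracks new observations."""
    level = series[0]
    smoothed = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        smoothed.append(level)
    return smoothed
```

With alpha = 0.5 the prices [2, 4, 6] smooth to [2, 3.0, 4.5]; the smoothed level (and, in seasonal variants, the extracted seasonal component) can then be fed to a downstream neural forecaster.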

PMID:39263362 | PMC:PMC11387362 | DOI:10.1016/j.mex.2024.102923

Categories: Literature Watch

Automatic detection of adenoid hypertrophy on lateral nasopharyngeal radiographs of children based on deep learning

Thu, 2024-09-12 06:00

Transl Pediatr. 2024 Aug 31;13(8):1368-1377. doi: 10.21037/tp-24-194. Epub 2024 Aug 28.

ABSTRACT

BACKGROUND: Adenoid hypertrophy is a prevalent cause of upper airway obstruction in children, potentially leading to various otolaryngological complications and even systemic sequelae. The lateral nasopharyngeal radiograph is routinely employed for the diagnosis of adenoid hypertrophy. This study aimed to evaluate the accuracy and reliability of deep learning, using lateral nasopharyngeal radiographs, for the diagnosis of adenoid hypertrophy in pediatric patients.

METHODS: In this retrospective study, lateral nasopharyngeal X-ray images were collected from children treated at the Children's Hospital of Soochow University, the 983rd Hospital of Joint Logistics Support Forces of Chinese PLA, and the Suzhou Wujiang District Children's Hospital from January 2023 to November 2023. Five deep learning models, namely AlexNet, VGG16, Inception v3, ResNet50, and DenseNet121, were used for model training and validation. Receiver operating characteristic (ROC) curve analyses were used to evaluate the performance of each model. The best algorithm was compared with interpretations from three radiologists on 208 images in the internal validation group.

RESULTS: Lateral nasopharyngeal X-ray images were collected from 1,188 children, including 705 males (59.3%) and 483 females (40.7%), aged 8 months to 13 years, with a mean age of 5.57±2.66 years. Among the five deep learning models, DenseNet121 performed best, with area under the curve (AUC) values of 0.892 and 0.872, accuracy of 0.895 and 0.878, sensitivity of 0.870 and 0.838, and specificity of 0.913 and 0.906 in the internal and external validation groups, respectively. The diagnostic performance of DenseNet121 was higher than that of the junior and mid-level radiologists (AUC 0.892 vs. 0.836 and 0.892 vs. 0.869) and close to that of the senior radiologist (0.892 vs. 0.901). However, DeLong's test revealed no significant difference between DenseNet121 and any of the radiologists in the validation group (P=0.24, P=0.52, P=0.79).

CONCLUSIONS: All five deep learning models in this study showed good performance for the diagnosis of adenoid hypertrophy, with DenseNet121 performing best; the approach is clinically relevant for the automatic identification of adenoid hypertrophy.
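The AUC values reported above can be computed without any plotting, since the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney view of the ROC curve). A minimal sketch, with hypothetical model scores that are not from the study:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outranks a
    random negative; tied scores count as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores for 4 radiographs (1 = adenoid hypertrophy).
labels = np.array([0, 0, 1, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80])
auc = roc_auc(labels, scores)  # 0.75 on this toy data
```

The pairwise formulation also underlies DeLong's test, which compares two such AUCs on the same validation cases while accounting for their correlation.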

PMID:39263285 | PMC:PMC11384431 | DOI:10.21037/tp-24-194

Categories: Literature Watch

Image dataset for cattle biometric detection and analysis

Thu, 2024-09-12 06:00

Data Brief. 2024 Aug 13;56:110835. doi: 10.1016/j.dib.2024.110835. eCollection 2024 Oct.

ABSTRACT

A dataset of cattle biometric features is a pivotal asset for improving livestock management and promoting smart-agriculture innovation. We obtained a dataset of images capturing the side and back views of Horqin yellow cattle from a farm in eastern Inner Mongolia, China. The data consist of images of 72 free-range Horqin yellow cattle taken with a mobile camera on the grasslands. Each animal is accompanied by detailed annotations, including oblique body length, withers height, heart girth, hip length, and body weight, among other crucial data points; this information constitutes high-quality biological feature data. In computer vision, the dataset can support the construction of deep learning models for an automated livestock monitoring system, with the aim of enhancing management efficiency and operational effectiveness in the livestock industry. By integrating biological feature information, specific model tools can be employed for body-condition assessment and health-monitoring research, enabling the effective identification and prevention of disease and ultimately providing deeper care and support for livestock welfare and health. The dataset also supports smart agriculture by enabling the development of intelligent farm management systems that provide real-time alerts for livestock health and environmental monitoring. This advancement will drive the modernization and digitization of animal husbandry, fostering agricultural intelligence and sustainable development.
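One common use of such annotations is estimating body weight from the tape measurements. A minimal sketch of that idea, assuming a simple linear model on heart girth and oblique body length; the numbers below are synthetic and not from the dataset:

```python
import numpy as np

# Hypothetical annotation rows: heart girth (cm), oblique body
# length (cm), and body weight (kg) for four animals.
girth  = np.array([170.0, 180.0, 160.0, 175.0])
length = np.array([140.0, 150.0, 135.0, 145.0])
weight = 2.0 * girth + 1.0 * length - 50.0   # synthetic, exactly linear

# Design matrix with an intercept column; ordinary least squares
# recovers the coefficients [intercept, girth, length].
X = np.column_stack([np.ones_like(girth), girth, length])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
predicted = X @ coef
```

On real annotations the fit would of course not be exact, but a regression of this shape gives a quick baseline against which image-based deep learning weight estimators can be compared.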

PMID:39263231 | PMC:PMC11387705 | DOI:10.1016/j.dib.2024.110835

Categories: Literature Watch
